- AgentHub: 13 files updated with non-engineering examples (content drafts, research, strategy) — engineering stays primary, cross-domain secondary
- AgentHub: 7 slash commands, 5 Python scripts, 3 references, 1 agent, dry_run.py validation (57 checks)
- Marketplace: agenthub entry added with cross-domain keywords, engineering POWERFUL updated (25→30), product (12→13), counts synced across all configs
- SEO: generate-docs.py now produces keyword-rich `<title>` tags and meta descriptions using SKILL.md frontmatter — "Claude Code Skills" in site_name propagates to all 276 HTML pages
- SEO: per-domain title suffixes (Agent Skill for Codex & OpenClaw, etc.), slug-as-title cleanup, domain label stripping from titles
- Broken links: 141→0 warnings — new rewrite_skill_internal_links() converts references/, scripts/, assets/ links to GitHub source URLs; skills/index.md phantom slugs fixed (6 marketing, 7 RA/QM)
- Counts synced: 204 skills, 266 tools, 382 refs, 16 agents, 17 commands, 21 plugins — consistent across CLAUDE.md, README.md, docs/index.md, marketplace.json, getting-started.md, mkdocs.yml
- Platform sync: Codex 163 skills, Gemini 246 items, OpenClaw compatible

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2.4 KiB
| title | description |
|---|---|
| /ar:status — Experiment Dashboard — Agent Skill for Codex & OpenClaw | Show experiment dashboard with results, active loops, and progress. Agent skill for Claude Code, Codex CLI, Gemini CLI, OpenClaw. |
# /ar:status — Experiment Dashboard
:material-rocket-launch: Engineering - POWERFUL
:material-identifier: `status`
:material-github: Source
Install:

```shell
claude /plugin install engineering-advanced-skills
```
Show experiment results, active loops, and progress across all experiments.
## Usage

```shell
/ar:status                                    # Full dashboard
/ar:status engineering/api-speed              # Single experiment detail
/ar:status --domain engineering               # All experiments in a domain
/ar:status --format markdown                  # Export as markdown
/ar:status --format csv --output results.csv  # Export as CSV
```
## What It Does

### Single experiment

```shell
python {skill_path}/scripts/log_results.py --experiment {domain}/{name}
```

Also check for an active loop:

```shell
cat .autoresearch/{domain}/{name}/loop.json 2>/dev/null
```

If loop.json exists, show:

```
Active loop: every {interval} (cron ID: {id}, started: {date})
```
### Domain view

```shell
python {skill_path}/scripts/log_results.py --domain {domain}
```
### Full dashboard

```shell
python {skill_path}/scripts/log_results.py --dashboard
```
For each experiment, also check for loop.json and show loop status.
## Export

```shell
# CSV
python {skill_path}/scripts/log_results.py --dashboard --format csv --output {file}

# Markdown
python {skill_path}/scripts/log_results.py --dashboard --format markdown --output {file}
```
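A minimal sketch of what the two export paths might look like. The function `export_rows` and the exact column set are assumptions for illustration, not log_results.py's real code:

```python
import csv
import io


def export_rows(rows: list[dict], fmt: str) -> str:
    """Serialize dashboard rows as CSV or as a markdown table."""
    cols = ["domain", "experiment", "runs", "kept", "best", "change", "status"]
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=cols, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    # Markdown: pipe-delimited header, separator row, then one row per experiment.
    header = "| " + " | ".join(cols) + " |"
    sep = "|" + "|".join(["---"] * len(cols)) + "|"
    body = ["| " + " | ".join(str(r.get(c, "")) for c in cols) + " |" for r in rows]
    return "\n".join([header, sep, *body])
```

Writing to an in-memory `StringIO` keeps the serializer testable; the `--output {file}` step would then just write the returned string to disk.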
## Output Example

```
DOMAIN       EXPERIMENT    RUNS  KEPT  BEST    CHANGE  STATUS  LOOP
engineering  api-speed     47    14    185ms   -76.9%  active  every 1h
engineering  bundle-size   23    8     412KB   -58.3%  paused  —
marketing    medium-ctr    31    11    8.4/10  +68.0%  active  daily
prompts      support-tone  15    6     82/100  +46.4%  done    —
```