**Bug fixes (run_experiment.py):**

- Fix broken revert logic: previously saved HEAD as `pre_commit` (a no-op revert); now uses `git reset --hard HEAD~1` for correct rollback
- Remove broken `--loop` mode (the agent IS the loop; the script handles one iteration)
- Fix shell injection: all git commands use subprocess list form
- Replace shell `tail` with a Python file read

**Bug fixes (other scripts):**

- `setup_experiment.py`: fix shell injection in git branch creation, remove dead `--skip-baseline` flag, fix evaluator docstring parsing
- `log_results.py`: fix 6 falsy-zero bugs (baseline=0 treated as None), add `domain_filter` to CSV/markdown export, move `import time` to the top
- evaluators: add `FileNotFoundError` handling, fix output format mismatch in `llm_judge_copy`, add `peak_kb` on macOS, add `ValueError` handling

**Plugin packaging (NEW):**

- `plugin.json`, `settings.json`, `CLAUDE.md` for the plugin registry
- 5 slash commands: `/ar:setup`, `/ar:run`, `/ar:loop`, `/ar:status`, `/ar:resume`
- `/ar:loop` supports user-selected intervals (10m, 1h, daily, weekly, monthly)
- `experiment-runner` agent for autonomous loop iterations
- Registered in `marketplace.json` as plugin #20

**SKILL.md rewrite:**

- Replace the ambiguous "Loop Protocol" with a clear "Agent Protocol"
- Add `results.tsv` format spec, strategy escalation, self-improvement
- Replace "NEVER STOP" with resumable stopping logic

**Docs & sync:**

- Codex (157 skills), Gemini (229 items), and `convert.sh` all pick up the skill
- 6 new MkDocs pages; `mkdocs.yml` nav updated
- Counts updated: 17 agents, 22 slash commands

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
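The falsy-zero fixes in `log_results.py` all follow one pattern; a minimal sketch of the before/after (function names are illustrative, not taken from the script):

```python
# A baseline of 0 (e.g. 0 ms, 0 errors) is real data. A truthiness
# check conflates it with "no baseline recorded" (None).

def has_baseline_buggy(baseline):
    return bool(baseline)        # bug: treats baseline=0 like None

def has_baseline_fixed(baseline):
    return baseline is not None  # fix: only None means "missing"
```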
| title | description |
|---|---|
| /ar:status — Experiment Dashboard | /ar:status — Experiment Dashboard - Claude Code skill from the Engineering - POWERFUL domain. |
# /ar:status — Experiment Dashboard
:material-rocket-launch: Engineering - POWERFUL
:material-identifier: `status`
:material-github: Source
Install:

```shell
claude /plugin install engineering-advanced-skills
```
Show experiment results, active loops, and progress across all experiments.
## Usage

```shell
/ar:status                                    # Full dashboard
/ar:status engineering/api-speed              # Single experiment detail
/ar:status --domain engineering               # All experiments in a domain
/ar:status --format markdown                  # Export as markdown
/ar:status --format csv --output results.csv  # Export as CSV
```
## What It Does

### Single experiment

```shell
python {skill_path}/scripts/log_results.py --experiment {domain}/{name}
```

Also check for an active loop:

```shell
cat .autoresearch/{domain}/{name}/loop.json 2>/dev/null
```

If `loop.json` exists, show:

```
Active loop: every {interval} (cron ID: {id}, started: {date})
```
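In Python, that check amounts to reading the JSON state file if it exists; a minimal sketch, assuming `loop.json` carries `interval`, `id`, and `started` keys (field names are an assumption about the schema, not confirmed above):

```python
import json
from pathlib import Path


def loop_status(state_dir):
    """Return the one-line loop summary, or None when no loop.json exists.

    Assumed schema: {"interval": ..., "id": ..., "started": ...}.
    """
    path = Path(state_dir) / "loop.json"
    if not path.exists():
        return None
    loop = json.loads(path.read_text())
    return (f"Active loop: every {loop['interval']} "
            f"(cron ID: {loop['id']}, started: {loop['started']})")
```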
### Domain view

```shell
python {skill_path}/scripts/log_results.py --domain {domain}
```
### Full dashboard

```shell
python {skill_path}/scripts/log_results.py --dashboard
```

For each experiment, also check for `loop.json` and show the loop status.
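Across the whole dashboard, finding every experiment's loop file reduces to a glob over the state directory; a minimal sketch assuming the `.autoresearch/{domain}/{name}/loop.json` layout described earlier (the function name is illustrative):

```python
from pathlib import Path


def find_loops(root=".autoresearch"):
    """Map each experiment ("domain/name") to its loop.json path,
    using the .autoresearch/{domain}/{name}/loop.json layout."""
    return {
        f"{p.parent.parent.name}/{p.parent.name}": p
        for p in sorted(Path(root).glob("*/*/loop.json"))
    }
```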
## Export

```shell
# CSV
python {skill_path}/scripts/log_results.py --dashboard --format csv --output {file}

# Markdown
python {skill_path}/scripts/log_results.py --dashboard --format markdown --output {file}
```
## Output Example

```
DOMAIN       EXPERIMENT    RUNS  KEPT  BEST    CHANGE  STATUS  LOOP
engineering  api-speed       47    14  185ms   -76.9%  active  every 1h
engineering  bundle-size     23     8  412KB   -58.3%  paused  —
marketing    medium-ctr      31    11  8.4/10  +68.0%  active  daily
prompts      support-tone    15     6  82/100  +46.4%  done    —
```