| title | description |
|---|---|
| /ar:run — Single Experiment Iteration — Agent Skill for Codex & OpenClaw | Run a single experiment iteration. Edit the target file, evaluate, keep or discard. Agent skill for Claude Code, Codex CLI, Gemini CLI, OpenClaw. |
/ar:run — Single Experiment Iteration
:material-rocket-launch: Engineering - POWERFUL
:material-identifier: `run`
:material-github: Source
Install:
claude /plugin install engineering-advanced-skills
Run exactly ONE experiment iteration: review history, decide a change, edit, commit, evaluate.
Usage
/ar:run engineering/api-speed # Run one iteration
/ar:run # List experiments, let user pick
What It Does
Step 1: Resolve experiment
If no experiment is specified, run `python {skill_path}/scripts/setup_experiment.py --list` and ask the user to pick one.
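If the listing script is unavailable, the experiments can also be enumerated from the directory layout used in Step 2. A minimal sketch, assuming each experiment lives at .autoresearch/{domain}/{name}/ with a config.cfg inside (the layout comes from this page; the helper itself is illustrative, not part of the plugin):

```python
from pathlib import Path

def list_experiments(root=".autoresearch"):
    """Enumerate experiments as '<domain>/<name>' by scanning for config.cfg files."""
    return sorted(
        str(cfg.parent.relative_to(root))   # e.g. "engineering/api-speed"
        for cfg in Path(root).glob("*/*/config.cfg")
    )

for experiment in list_experiments():
    print(experiment)
```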
Step 2: Load context
# Read experiment config
cat .autoresearch/{domain}/{name}/config.cfg
# Read strategy and constraints
cat .autoresearch/{domain}/{name}/program.md
# Read experiment history
cat .autoresearch/{domain}/{name}/results.tsv
# Checkout the experiment branch
git checkout autoresearch/{domain}/{name}
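For reference, the same context can be loaded programmatically. A minimal sketch, assuming config.cfg is INI-style and results.tsv is tab-separated with a header row; the field names are illustrative and not taken from the plugin's scripts:

```python
import configparser
import csv
from pathlib import Path

def load_context(domain, name):
    base = Path(".autoresearch") / domain / name

    config = configparser.ConfigParser()
    config.read(base / "config.cfg")              # experiment settings (target file, metric, ...)

    program = (base / "program.md").read_text()   # strategy and constraints

    with open(base / "results.tsv", newline="") as fh:
        results = list(csv.DictReader(fh, delimiter="\t"))  # one dict per past run

    return config, program, results
```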
Step 3: Decide what to try
Review results.tsv:
- What changes were kept? What pattern do they share?
- What was discarded? Avoid repeating those approaches.
- What crashed? Understand why.
- How many runs so far? (Escalate strategy accordingly)
Strategy escalation (a sketch of this selection logic follows the list):
- Runs 1-5: Low-hanging fruit (obvious improvements)
- Runs 6-15: Systematic exploration (vary one parameter)
- Runs 16-30: Structural changes (algorithm swaps)
- Runs 31+: Radical experiments (completely different approaches)
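A minimal sketch of the selection logic, assuming the rows loaded from results.tsv carry a status column with values like KEEP, DISCARD, and CRASH (the column name is an assumption, not taken from the scripts):

```python
from collections import Counter

def pick_strategy(results):
    """Map the number of completed runs to the escalation tier described above."""
    runs = len(results)
    if runs <= 5:
        return "low-hanging fruit"
    if runs <= 15:
        return "systematic exploration"
    if runs <= 30:
        return "structural changes"
    return "radical experiments"

def summarize_history(results):
    """Tally outcomes so kept/discarded/crashed patterns are easy to review."""
    return Counter(row.get("status", "").upper() for row in results)  # hypothetical "status" column
```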
Step 4: Make ONE change
Edit only the target file specified in config.cfg. Change one thing. Keep it simple.
Step 5: Commit and evaluate
git add {target}
git commit -m "experiment: {short description of what changed}"
python {skill_path}/scripts/run_experiment.py \
--experiment {domain}/{name} --single
Step 6: Report result
Read the script output. Tell the user (see the sketch after this list):
- KEEP: "Improvement! {metric}: {value} ({delta} from previous best)"
- DISCARD: "No improvement. {metric}: {value} vs best {best}. Reverted."
- CRASH: "Evaluation failed: {reason}. Reverted."
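A minimal sketch of the report formatting, assuming the evaluation outcome has already been parsed into a verdict, metric name, new value, and previous best (how run_experiment.py actually reports these is not shown here):

```python
def report(verdict, metric, value, best, reason=""):
    """Format the one-line summary for the user based on the evaluation verdict."""
    if verdict == "KEEP":
        return f"Improvement! {metric}: {value} ({value - best:+g} from previous best)"
    if verdict == "DISCARD":
        return f"No improvement. {metric}: {value} vs best {best}. Reverted."
    return f"Evaluation failed: {reason}. Reverted."
```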
Step 7: Self-improvement check
After every 10th experiment (check results.tsv line count), update the Strategy section of program.md with patterns learned.
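A minimal sketch of that check, assuming results.tsv has one header line followed by one line per run (an assumption about the file layout):

```python
from pathlib import Path

def due_for_strategy_update(results_path, every=10):
    """True when the number of completed runs is a multiple of `every`."""
    lines = Path(results_path).read_text().splitlines()
    runs = max(len(lines) - 1, 0)   # subtract the header row
    return runs > 0 and runs % every == 0

if due_for_strategy_update(".autoresearch/engineering/api-speed/results.tsv"):
    print("Update the Strategy section of program.md with patterns learned so far.")
```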
Rules
- ONE change per iteration. Don't change 5 things at once.
- NEVER modify the evaluator (evaluate.py). It's ground truth.
- Simplicity wins. Equal performance with simpler code is an improvement.
- No new dependencies.