---
title: "/ar:resume — Resume Experiment — Agent Skill for Codex & OpenClaw"
description: "Resume a paused experiment. Check out the experiment branch, read results history, continue iterating. Agent skill for Claude Code, Codex CLI, Gemini CLI, OpenClaw."
---

# /ar:resume — Resume Experiment

:material-rocket-launch: Engineering - POWERFUL :material-identifier: `resume` :material-github: Source
Install: `claude /plugin install engineering-advanced-skills`

Resume a paused or context-limited experiment. Reads all history and continues where you left off.

## Usage

```bash
/ar:resume                                  # List experiments, let user pick
/ar:resume engineering/api-speed            # Resume specific experiment
```

## What It Does

### Step 1: List experiments if needed

If no experiment specified:

```bash
python {skill_path}/scripts/setup_experiment.py --list
```

Show the status of each experiment (active/paused/done, based on the age of `results.tsv`) and let the user pick.
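The status heuristic can be sketched in Python. The time thresholds and the `.autoresearch/{domain}/{name}/results.tsv` layout below are assumptions for illustration, not the actual logic inside `setup_experiment.py`:

```python
import time
from pathlib import Path

# Hypothetical cutoffs -- the real script may classify differently.
ACTIVE_WINDOW_S = 6 * 3600    # results.tsv touched within 6 hours -> active
PAUSED_WINDOW_S = 7 * 86400   # within a week -> paused; older -> done

def experiment_status(results_tsv: Path) -> str:
    """Classify an experiment by the age of its results.tsv."""
    if not results_tsv.exists():
        return "new"
    age = time.time() - results_tsv.stat().st_mtime
    if age < ACTIVE_WINDOW_S:
        return "active"
    if age < PAUSED_WINDOW_S:
        return "paused"
    return "done"

def list_experiments(root: Path = Path(".autoresearch")) -> list[tuple[str, str]]:
    """Return (domain/name, status) for every experiment directory."""
    return [
        (f"{r.parent.parent.name}/{r.parent.name}", experiment_status(r))
        for r in sorted(root.glob("*/*/results.tsv"))
    ]
```

This keeps the listing cheap: only the mtime of each `results.tsv` is inspected, never its contents.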

### Step 2: Load full context

```bash
# Check out the experiment branch
git checkout autoresearch/{domain}/{name}

# Read config
cat .autoresearch/{domain}/{name}/config.cfg

# Read strategy
cat .autoresearch/{domain}/{name}/program.md

# Read full results history
cat .autoresearch/{domain}/{name}/results.tsv

# Read recent git log for the branch
git log --oneline -20
```
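The same context can be gathered programmatically. This sketch assumes only the `.autoresearch/{domain}/{name}/` layout shown above and a `git` binary on PATH; it is not part of the skill's own scripts:

```python
import subprocess
from pathlib import Path

def load_context(domain: str, name: str) -> dict[str, str]:
    """Check out the experiment branch and read config, strategy, and results."""
    branch = f"autoresearch/{domain}/{name}"
    subprocess.run(["git", "checkout", branch], check=True)
    exp = Path(".autoresearch") / domain / name
    return {
        "config": (exp / "config.cfg").read_text(),
        "program": (exp / "program.md").read_text(),
        "results": (exp / "results.tsv").read_text(),
        "log": subprocess.run(
            ["git", "log", "--oneline", "-20"],
            capture_output=True, text=True, check=True,
        ).stdout,
    }
```

Note that `check=True` makes a missing branch fail loudly rather than silently reading files from the wrong branch.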

### Step 3: Report current state

Summarize for the user:

```text
Resuming: engineering/api-speed
  Target: src/api/search.py
  Metric: p50_ms (lower is better)
  Experiments: 23 total — 8 kept, 12 discarded, 3 crashed
  Best: 185ms (-42% from baseline of 320ms)
  Last experiment: "added response caching" → KEEP (185ms)

  Recent patterns:
  - Caching changes: 3 kept, 1 discarded (consistently helpful)
  - Algorithm changes: 2 discarded, 1 crashed (high risk, low reward so far)
  - I/O optimization: 2 kept (promising direction)
```
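The headline numbers in a report like the one above can be aggregated directly from `results.tsv`. The column names used here (`description`, `verdict`, `p50_ms`) are assumptions about the TSV schema, which the skill's scripts actually define:

```python
import csv
from pathlib import Path

def summarize(results_tsv: Path, lower_is_better: bool = True) -> dict:
    """Aggregate experiment outcomes from a results TSV.

    Assumed columns: description, verdict (KEEP/DISCARD/CRASH), p50_ms.
    """
    rows = list(csv.DictReader(results_tsv.open(), delimiter="\t"))
    kept = [r for r in rows if r["verdict"] == "KEEP"]
    metrics = [float(r["p50_ms"]) for r in kept]
    best = (min if lower_is_better else max)(metrics) if metrics else None
    return {
        "total": len(rows),
        "kept": len(kept),
        "discarded": sum(r["verdict"] == "DISCARD" for r in rows),
        "crashed": sum(r["verdict"] == "CRASH" for r in rows),
        "best": best,
        "last": rows[-1] if rows else None,
    }
```

Only kept runs contribute to the "best" metric, so crashed runs with placeholder values cannot skew the summary.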

### Step 4: Ask next action

```text
How would you like to continue?
  1. Single iteration (/ar:run)  — I'll make one change and evaluate
  2. Start a loop (/ar:loop)     — Autonomous with scheduled interval
  3. Just show me the results    — I'll review and decide
```

If the user picks the loop, hand off to `/ar:loop` with the experiment pre-selected; if they pick a single iteration, hand off to `/ar:run`.