# AgentHub — Claude Code Instructions
This plugin enables multi-agent collaboration. Spawn N parallel subagents that compete on the same task, evaluate results, and merge the winner.
## Commands
Use the `/hub:` namespace for all commands:
- `/hub:init` — Create a new collaboration session (task, agent count, eval criteria)
- `/hub:spawn` — Launch N parallel subagents in isolated worktrees (supports `--template`)
- `/hub:status` — Show DAG state, agent progress, and branch status
- `/hub:eval` — Rank agent results by metric or LLM judge
- `/hub:merge` — Merge the winning branch, archive losers
- `/hub:board` — Read/write the agent message board
- `/hub:run` — One-shot lifecycle: init → baseline → spawn → eval → merge
## How It Works
You (the coordinator) orchestrate N subagents working in parallel:
1. `/hub:init` — define the task, number of agents, and evaluation criteria
2. `/hub:spawn` — launch all agents simultaneously via the Agent tool with `isolation: "worktree"`
3. Each agent works independently in its own git worktree, commits results, and writes to the board
4. `/hub:eval` — compare results (run the eval command per worktree, or LLM-judge the diffs)
5. `/hub:merge` — merge the best branch into base, tag and archive the rest
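
The decision in steps 4 and 5 boils down to picking the top-scored branch and archiving the rest. A minimal sketch, assuming each branch already has a numeric score from your eval command or an LLM judge (`pick_winner` and the example scores are illustrative, not part of the plugin):

```python
def pick_winner(scores: dict[str, float]) -> tuple[str, list[str]]:
    """Given branch -> score, return (winning branch, losing branches to archive).
    Higher scores win; ties break alphabetically for determinism."""
    if not scores:
        raise ValueError("no agent results to evaluate")
    ranked = sorted(scores, key=lambda b: (-scores[b], b))
    return ranked[0], ranked[1:]

# Hypothetical scores for a 3-agent session:
scores = {
    "hub/s1/agent-1/attempt-1": 0.72,
    "hub/s1/agent-2/attempt-1": 0.91,
    "hub/s1/agent-3/attempt-1": 0.64,
}
winner, losers = pick_winner(scores)
# winner → "hub/s1/agent-2/attempt-1"
```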
## Key Principle
**Parallel competition. Immutable history. Best result wins.**
Agents never see each other's work. Every approach is preserved in the git DAG. The coordinator evaluates objectively and merges only the winner.
## Agents
- **hub-coordinator** — Dispatches tasks, monitors progress, evaluates results, merges winner. This is YOUR role as the main Claude Code session.
## Branch Naming
```
hub/{session-id}/agent-{N}/attempt-{M}
```
## Message Board
Agents communicate via `.agenthub/board/` markdown files:

- `dispatch/` — task assignments from coordinator
- `progress/` — status updates from agents
- `results/` — final result summaries from agents
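
A minimal sketch of posting a message, assuming only the three channel directories above are fixed (the `post` helper and the file name are illustrative):

```python
from pathlib import Path
import tempfile

def post(board: Path, channel: str, name: str, body: str) -> Path:
    """Write a markdown message to one of the board channels."""
    assert channel in {"dispatch", "progress", "results"}
    path = board / channel / f"{name}.md"
    path.parent.mkdir(parents=True, exist_ok=True)  # create board dirs lazily
    path.write_text(body)
    return path

# Demo against a throwaway directory standing in for the repo root:
board = Path(tempfile.mkdtemp()) / ".agenthub" / "board"
msg = post(board, "progress", "agent-2", "# Status\nTests passing, refactor underway.\n")
```

Because messages are plain files in the repo, board traffic is inspectable with ordinary shell tools and survives across sessions.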
## When to Use
- User says "try multiple approaches" or "have agents compete"
- Optimization tasks where different strategies might win
- Code generation where diversity of solutions helps
- Competing content drafts — three agents write blog posts or landing page copy, and an LLM judge picks the best
- Research synthesis — agents explore different source sets or analytical frameworks
- Process optimization — agents propose competing workflow improvements
- Feature prioritization — agents build different RICE/ICE scoring models
- Any task that benefits from parallel exploration