Agent Designer - Multi-Agent System Architecture Toolkit
Tier: POWERFUL
Category: Engineering
Tags: AI agents, architecture, system design, orchestration, multi-agent systems
A comprehensive toolkit for designing, architecting, and evaluating multi-agent systems. Provides structured approaches to agent architecture patterns, tool design principles, communication strategies, and performance evaluation frameworks.
Overview
The Agent Designer skill includes three core components:
- Agent Planner (agent_planner.py): designs multi-agent system architectures
- Tool Schema Generator (tool_schema_generator.py): creates structured tool schemas
- Agent Evaluator (agent_evaluator.py): evaluates system performance and identifies optimizations
Quick Start
1. Design a Multi-Agent Architecture
```bash
# Use sample requirements or create your own
python agent_planner.py assets/sample_system_requirements.json -o my_architecture

# This generates:
# - my_architecture.json (complete architecture)
# - my_architecture_diagram.mmd (Mermaid diagram)
# - my_architecture_roadmap.json (implementation plan)
```
2. Generate Tool Schemas
```bash
# Use sample tool descriptions or create your own
python tool_schema_generator.py assets/sample_tool_descriptions.json -o my_tools

# This generates:
# - my_tools.json (complete schemas)
# - my_tools_openai.json (OpenAI format)
# - my_tools_anthropic.json (Anthropic format)
# - my_tools_validation.json (validation rules)
# - my_tools_examples.json (usage examples)
```
3. Evaluate System Performance
```bash
# Use sample execution logs or your own
python agent_evaluator.py assets/sample_execution_logs.json -o evaluation

# This generates:
# - evaluation.json (complete report)
# - evaluation_summary.json (executive summary)
# - evaluation_recommendations.json (optimization suggestions)
# - evaluation_errors.json (error analysis)
```
Detailed Usage
Agent Planner
The Agent Planner designs multi-agent architectures based on system requirements.
Input Format
Create a JSON file with system requirements:
```json
{
  "goal": "Your system's primary objective",
  "description": "Detailed system description",
  "tasks": ["List", "of", "required", "tasks"],
  "constraints": {
    "max_response_time": 30000,
    "budget_per_task": 1.0,
    "quality_threshold": 0.9
  },
  "team_size": 6,
  "performance_requirements": {
    "high_throughput": true,
    "fault_tolerance": true,
    "low_latency": false
  },
  "safety_requirements": [
    "Input validation and sanitization",
    "Output content filtering"
  ]
}
```
Command Line Options
```
python agent_planner.py <input_file> [OPTIONS]

Options:
  -o, --output PREFIX    Output file prefix (default: agent_architecture)
  --format FORMAT        Output format: json, both (default: both)
```
Output Files
- Architecture JSON: Complete system design with agents, communication topology, and scaling strategy
- Mermaid Diagram: Visual representation of the agent architecture
- Implementation Roadmap: Phased implementation plan with timelines and risks
Architecture Patterns
The planner automatically selects from these patterns based on requirements:
- Single Agent: Simple, focused tasks (1 agent)
- Supervisor: Hierarchical delegation (2-8 agents)
- Swarm: Peer-to-peer collaboration (3-20 agents)
- Hierarchical: Multi-level management (5-50 agents)
- Pipeline: Sequential processing (3-15 agents)
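The selection step can be pictured as a simple decision over team size and performance flags. The sketch below is illustrative only; the function name, the sequential_stages flag, and the exact thresholds are hypothetical stand-ins, not agent_planner.py's actual logic:

```python
def select_pattern(team_size: int, num_tasks: int, perf: dict) -> str:
    """Toy heuristic mapping requirements to an architecture pattern."""
    if team_size <= 1 or num_tasks <= 1:
        return "single_agent"
    if perf.get("fault_tolerance") and 3 <= team_size <= 20:
        return "swarm"            # peer-to-peer, no single point of failure
    if team_size > 8:
        return "hierarchical"     # multi-level management for large teams
    if perf.get("sequential_stages"):
        return "pipeline"         # stage-by-stage processing
    return "supervisor"           # default: hierarchical delegation

print(select_pattern(6, 4, {"fault_tolerance": True}))  # swarm
```

The real planner weighs constraints, safety requirements, and task structure as well; this only shows the shape of the decision.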
Tool Schema Generator
Generates structured tool schemas compatible with OpenAI and Anthropic formats.
Input Format
Create a JSON file with tool descriptions:
```json
{
  "tools": [
    {
      "name": "tool_name",
      "purpose": "What the tool does",
      "category": "Tool category (search, data, api, etc.)",
      "inputs": [
        {
          "name": "parameter_name",
          "type": "string",
          "description": "Parameter description",
          "required": true,
          "examples": ["example1", "example2"]
        }
      ],
      "outputs": [
        {
          "name": "result_field",
          "type": "object",
          "description": "Output description"
        }
      ],
      "error_conditions": ["List of possible errors"],
      "side_effects": ["List of side effects"],
      "idempotent": true,
      "rate_limits": {
        "requests_per_minute": 60
      }
    }
  ]
}
```
Command Line Options
```
python tool_schema_generator.py <input_file> [OPTIONS]

Options:
  -o, --output PREFIX    Output file prefix (default: tool_schemas)
  --format FORMAT        Output format: json, both (default: both)
  --validate             Validate generated schemas
```
Output Files
- Complete Schemas: All schemas with validation and examples
- OpenAI Format: Schemas compatible with OpenAI function calling
- Anthropic Format: Schemas compatible with Anthropic tool use
- Validation Rules: Input validation specifications
- Usage Examples: Example calls and responses
Schema Features
- Input Validation: Comprehensive parameter validation rules
- Error Handling: Structured error response formats
- Rate Limiting: Configurable rate limit specifications
- Documentation: Auto-generated usage examples
- Security: Built-in security considerations
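To make the OpenAI-format output concrete, here is a minimal sketch of how one tool description in the input format above could be mapped to an OpenAI-style function-calling schema. The helper is hypothetical; tool_schema_generator.py's actual output may differ in detail:

```python
def to_openai_function(tool: dict) -> dict:
    """Map one tool description (input format above) to an OpenAI-style
    function schema. Illustrative only, not the generator's real code."""
    properties, required = {}, []
    for inp in tool.get("inputs", []):
        properties[inp["name"]] = {
            "type": inp["type"],
            "description": inp.get("description", ""),
        }
        if inp.get("required"):
            required.append(inp["name"])
    return {
        "name": tool["name"],
        "description": tool["purpose"],
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

tool = {
    "name": "web_search",
    "purpose": "Search the web",
    "inputs": [{"name": "query", "type": "string",
                "description": "Search terms", "required": True}],
}
print(to_openai_function(tool)["parameters"]["required"])  # ['query']
```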
Agent Evaluator
Analyzes agent execution logs to identify performance issues and optimization opportunities.
Input Format
Create a JSON file with execution logs:
```json
{
  "execution_logs": [
    {
      "task_id": "unique_task_identifier",
      "agent_id": "agent_identifier",
      "task_type": "task_category",
      "start_time": "2024-01-15T09:00:00Z",
      "end_time": "2024-01-15T09:02:34Z",
      "duration_ms": 154000,
      "status": "success",
      "actions": [
        {
          "type": "tool_call",
          "tool_name": "web_search",
          "duration_ms": 2300,
          "success": true
        }
      ],
      "results": {
        "summary": "Task results",
        "quality_score": 0.92
      },
      "tokens_used": {
        "input_tokens": 1250,
        "output_tokens": 2800,
        "total_tokens": 4050
      },
      "cost_usd": 0.081,
      "error_details": null,
      "tools_used": ["web_search"],
      "retry_count": 0
    }
  ]
}
```
Command Line Options
```
python agent_evaluator.py <input_file> [OPTIONS]

Options:
  -o, --output PREFIX    Output file prefix (default: evaluation_report)
  --format FORMAT        Output format: json, both (default: both)
  --detailed             Include detailed analysis in output
```
Output Files
- Complete Report: Comprehensive performance analysis
- Executive Summary: High-level metrics and health assessment
- Optimization Recommendations: Prioritized improvement suggestions
- Error Analysis: Detailed error patterns and solutions
Evaluation Metrics
Performance Metrics:
- Task success rate and completion times
- Token usage and cost efficiency
- Error rates and retry patterns
- Throughput and latency distributions
System Health:
- Overall health score (poor/fair/good/excellent)
- SLA compliance tracking
- Resource utilization analysis
- Trend identification
Bottleneck Analysis:
- Agent performance bottlenecks
- Tool usage inefficiencies
- Communication overhead
- Resource constraints
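As a rough illustration of how such metrics fall out of the documented log format, the sketch below derives success rate and per-task averages from a couple of entries. agent_evaluator.py performs a far richer analysis; this only shows the shape of the calculation:

```python
def summarize(logs: list[dict]) -> dict:
    """Compute success rate and per-task averages from execution logs."""
    total = len(logs)
    ok = sum(1 for e in logs if e["status"] == "success")
    cost = sum(e.get("cost_usd", 0.0) for e in logs)
    tokens = sum(e.get("tokens_used", {}).get("total_tokens", 0) for e in logs)
    return {
        "success_rate": ok / total if total else 0.0,
        "avg_cost_usd": cost / total if total else 0.0,
        "avg_tokens": tokens / total if total else 0,
    }

logs = [
    {"status": "success", "cost_usd": 0.081,
     "tokens_used": {"total_tokens": 4050}},
    {"status": "error", "cost_usd": 0.020,
     "tokens_used": {"total_tokens": 900}},
]
print(summarize(logs))
```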
Architecture Patterns Guide
When to Use Each Pattern
Single Agent
- Best for: Simple, focused tasks with clear boundaries
- Team size: 1 agent
- Complexity: Low
- Examples: Personal assistant, document summarizer, simple automation
Supervisor
- Best for: Hierarchical task decomposition with quality control
- Team size: 2-8 agents
- Complexity: Medium
- Examples: Research coordinator with specialists, content review workflow
Swarm
- Best for: Distributed problem solving with high fault tolerance
- Team size: 3-20 agents
- Complexity: High
- Examples: Parallel data processing, distributed research, competitive analysis
Hierarchical
- Best for: Large-scale operations with organizational structure
- Team size: 5-50 agents
- Complexity: Very High
- Examples: Enterprise workflows, complex business processes
Pipeline
- Best for: Sequential processing with specialized stages
- Team size: 3-15 agents
- Complexity: Medium
- Examples: Data ETL pipelines, content processing workflows
Best Practices
System Design
- Start Simple: Begin with simpler patterns and evolve
- Clear Responsibilities: Define distinct roles for each agent
- Robust Communication: Design reliable message passing
- Error Handling: Plan for failures and recovery
- Monitor Everything: Implement comprehensive observability
Tool Design
- Single Responsibility: Each tool should have one clear purpose
- Input Validation: Validate all inputs thoroughly
- Idempotency: Design operations to be safely repeatable
- Error Recovery: Provide clear error messages and recovery paths
- Documentation: Include comprehensive usage examples
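The Idempotency guideline above can be sketched with a deduplication key: a retried request returns the original result instead of repeating the side effect. The store and tool here are hypothetical, kept in memory only for illustration:

```python
_processed: dict[str, str] = {}  # request_id -> result (stand-in for a durable store)

def create_order(request_id: str, item: str) -> str:
    """Idempotent create: a retried request_id replays the original result."""
    if request_id in _processed:
        return _processed[request_id]      # replay, no second side effect
    order_id = f"order-{len(_processed) + 1}"
    _processed[request_id] = order_id      # record before returning
    return order_id

first = create_order("req-1", "widget")
retry = create_order("req-1", "widget")    # same request retried
assert first == retry
```

In a real system the deduplication store would be durable and shared across agent instances.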
Performance Optimization
- Measure First: Use the evaluator to identify actual bottlenecks
- Optimize Bottlenecks: Focus on highest-impact improvements
- Cache Strategically: Cache expensive operations and results
- Parallel Processing: Identify opportunities for parallelization
- Resource Management: Monitor and optimize resource usage
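"Cache Strategically" can be as simple as memoizing a pure, expensive lookup; the standard library's functools.lru_cache is one way to do it (the lookup function here is a hypothetical stand-in):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def expensive_lookup(query: str) -> str:
    # stand-in for a slow API or database call
    return f"result-for-{query}"

expensive_lookup("ai news")
expensive_lookup("ai news")                # second call is served from memory
print(expensive_lookup.cache_info().hits)  # 1
```

Only cache operations that are pure for a given input; cached side-effecting calls will silently skip their effects on replay.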
Sample Files
The assets/ directory contains sample files to help you get started:
- sample_system_requirements.json: Example system requirements for a research platform
- sample_tool_descriptions.json: Example tool descriptions for common operations
- sample_execution_logs.json: Example execution logs from a running system
The expected_outputs/ directory shows expected results from processing these samples.
References
See the references/ directory for detailed documentation:
- agent_architecture_patterns.md: Comprehensive catalog of architecture patterns
- tool_design_best_practices.md: Best practices for tool design and implementation
- evaluation_methodology.md: Detailed methodology for system evaluation
Integration Examples
With OpenAI
```python
import json
from openai import OpenAI

# Load generated OpenAI schemas
with open('my_tools_openai.json') as f:
    schemas = json.load(f)

# Use with OpenAI function calling (v1 client; the legacy
# openai.ChatCompletion interface was removed in openai>=1.0)
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Search for AI news"}],
    functions=schemas['functions'],
)
```
With Anthropic Claude
```python
import json
import anthropic

# Load generated Anthropic schemas
with open('my_tools_anthropic.json') as f:
    schemas = json.load(f)

# Use with Anthropic tool use
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,  # required by the Messages API
    messages=[{"role": "user", "content": "Search for AI news"}],
    tools=schemas['tools'],
)
```
Troubleshooting
Common Issues
"No valid architecture pattern found"
- Check that team_size is reasonable (1-50)
- Ensure tasks list is not empty
- Verify performance_requirements are valid
"Tool schema validation failed"
- Check that all required fields are present
- Ensure parameter types are valid
- Verify enum values are provided as arrays
"Insufficient execution logs"
- Ensure logs contain required fields (task_id, agent_id, status)
- Check that timestamps are in ISO 8601 format
- Verify token usage fields are numeric
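Those checks can be automated before running the evaluator. This sketch (the helper is hypothetical) flags missing required fields and malformed timestamps in a log entry:

```python
from datetime import datetime

REQUIRED = ("task_id", "agent_id", "status")

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems found in one execution log entry."""
    problems = [f"missing field: {f}" for f in REQUIRED if f not in entry]
    for field in ("start_time", "end_time"):
        ts = entry.get(field)
        if ts is not None:
            try:
                # fromisoformat() rejects the trailing "Z" before Python 3.11
                datetime.fromisoformat(ts.replace("Z", "+00:00"))
            except ValueError:
                problems.append(f"{field} is not ISO 8601: {ts!r}")
    return problems

good = {"task_id": "t1", "agent_id": "a1", "status": "success",
        "start_time": "2024-01-15T09:00:00Z"}
bad = {"task_id": "t2", "status": "success", "start_time": "yesterday"}
print(validate_entry(good))  # []
print(validate_entry(bad))   # two problems: missing agent_id, bad timestamp
```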
Performance Tips
- Large Systems: For systems with >20 agents, consider breaking into subsystems
- Complex Tools: Tools with >10 parameters may need simplification
- Log Volume: For >1000 log entries, consider sampling for faster analysis
Contributing
This skill is part of the claude-skills repository. To contribute:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests and documentation
- Submit a pull request
License
This project is licensed under the MIT License - see the main repository for details.
Support
For issues and questions:
- Check the troubleshooting section above
- Review the reference documentation in references/
- Create an issue in the claude-skills repository