Initial release: Professional Claude Code Skills Marketplace
8 production-ready skills for enhanced Claude Code workflows:

1. github-ops - Comprehensive GitHub operations via gh CLI and API
   - PR/issue management, workflow automation, API interactions
2. markdown-tools - Document conversion to markdown
   - PDF/Word/PowerPoint/Confluence → Markdown with WSL support
3. mermaid-tools - Mermaid diagram generation
   - Extract and render diagrams from markdown to PNG/SVG
4. statusline-generator - Claude Code statusline customization
   - Multi-line layouts, cost tracking, git status, colors
5. teams-channel-post-writer - Microsoft Teams communication
   - Adaptive Cards, formatted announcements, corporate standards
6. repomix-unmixer - Repomix file extraction
   - Extract from XML/Markdown/JSON formats with auto-detection
7. skill-creator - Skill development toolkit
   - Init, validation, packaging scripts with privacy best practices
8. llm-icon-finder - AI/LLM brand icon finder
   - 100+ AI model icons in SVG/PNG/WEBP formats

Features:
- Individual skill installation (install only what you need)
- Progressive disclosure design (optimized context usage)
- Privacy-safe examples (no personal/company information)
- Comprehensive documentation with references
- Production-tested workflows

Installation:
/plugin marketplace add daymade/claude-code-skills
/plugin marketplace install daymade/claude-code-skills#<skill-name>

Version: 1.2.0
License: See individual skill licenses

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
.claude-plugin/marketplace.json (new file, 94 lines)
{
  "name": "daymade-skills",
  "owner": {
    "name": "daymade",
    "email": "daymadev89@gmail.com"
  },
  "metadata": {
    "description": "Professional Claude Code skills for GitHub operations, document conversion, diagram generation, statusline customization, Teams communication, repomix utilities, skill creation, and LLM icon access",
    "version": "1.2.0",
    "homepage": "https://github.com/daymade/claude-code-skills"
  },
  "plugins": [
    {
      "name": "github-ops",
      "description": "Comprehensive GitHub operations using gh CLI and GitHub API for pull requests, issues, repositories, workflows, and API interactions",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "developer-tools",
      "keywords": ["github", "gh-cli", "pull-request", "issues", "workflows", "api"],
      "skills": ["./github-ops"]
    },
    {
      "name": "markdown-tools",
      "description": "Convert documents (PDFs, Word, PowerPoint, Confluence exports) to markdown with Windows/WSL path handling support",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "document-conversion",
      "keywords": ["markdown", "pdf", "docx", "confluence", "markitdown", "wsl"],
      "skills": ["./markdown-tools"]
    },
    {
      "name": "mermaid-tools",
      "description": "Generate Mermaid diagrams from markdown with automatic PNG/SVG rendering and extraction from documents",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "documentation",
      "keywords": ["mermaid", "diagrams", "visualization", "flowchart", "sequence"],
      "skills": ["./mermaid-tools"]
    },
    {
      "name": "statusline-generator",
      "description": "Configure Claude Code statuslines with multi-line layouts, cost tracking via ccusage, git status, and customizable colors",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "customization",
      "keywords": ["statusline", "ccusage", "git-status", "customization", "prompt"],
      "skills": ["./statusline-generator"]
    },
    {
      "name": "teams-channel-post-writer",
      "description": "Create professional Microsoft Teams channel posts with Adaptive Cards, formatted announcements, and corporate communication standards",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "communication",
      "keywords": ["teams", "microsoft", "adaptive-cards", "communication", "announcements"],
      "skills": ["./teams-channel-post-writer"]
    },
    {
      "name": "repomix-unmixer",
      "description": "Extract files from repomix packaged formats (XML, Markdown, JSON) with automatic format detection and validation",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "utilities",
      "keywords": ["repomix", "unmix", "extract", "xml", "conversion"],
      "skills": ["./repomix-unmixer"]
    },
    {
      "name": "skill-creator",
      "description": "Guide for creating effective Claude Code skills with initialization scripts, validation, packaging, and privacy best practices",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "developer-tools",
      "keywords": ["skill-creation", "claude-code", "development", "tooling", "workflow"],
      "skills": ["./skill-creator"]
    },
    {
      "name": "llm-icon-finder",
      "description": "Find and access AI/LLM model brand icons from lobe-icons library in SVG/PNG/WEBP formats",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "assets",
      "keywords": ["icons", "ai-models", "llm", "branding", "lobe-icons"],
      "skills": ["./llm-icon-finder"]
    }
  ]
}
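As a sanity check before publishing, the manifest above can be validated with the standard library; a sketch that confirms each plugin entry carries the fields used in this file (the required-field set is taken from the entries above, not from any official schema):

```python
import json

# Fields every plugin entry in this marketplace.json carries (see above).
REQUIRED_PLUGIN_FIELDS = {"name", "description", "source", "version",
                          "category", "keywords", "skills"}


def manifest_problems(text):
    """Return a list of problems found in a marketplace.json document."""
    data = json.loads(text)  # raises ValueError on malformed JSON
    problems = []
    for plugin in data.get("plugins", []):
        missing = REQUIRED_PLUGIN_FIELDS - plugin.keys()
        if missing:
            problems.append(f"{plugin.get('name', '?')}: missing {sorted(missing)}")
    return problems
```

Running this over the file with `manifest_problems(open(".claude-plugin/marketplace.json").read())` should return an empty list.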
.gitignore (new file, vendored, 53 lines)
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Virtual environments
venv/
ENV/
env/
.venv

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/

# Logs
*.log

# Temporary files
tmp/
temp/
*.tmp

# OS files
Thumbs.db
.DS_Store
CONTRIBUTING.md (new file, 215 lines)
# Contributing to Claude Code Skills Marketplace

Thank you for your interest in contributing! This marketplace aims to provide high-quality, production-ready skills for Claude Code users.

## How to Contribute

### Reporting Issues

1. Check whether the issue already exists
2. Provide a clear description and reproduction steps
3. Include your Claude Code version and environment details
4. Add relevant error messages or screenshots

### Suggesting New Skills

1. Open an issue with the `skill-request` label
2. Describe the skill's purpose and use cases
3. Explain why it would benefit the community
4. Provide examples of when it would activate

### Submitting Skills

To submit a new skill to this marketplace:

#### 1. Skill Quality Requirements

All skills must meet these standards:

**Required Structure:**
- ✅ `SKILL.md` with valid YAML frontmatter (`name` and `description`)
- ✅ Imperative/infinitive writing style (verb-first instructions)
- ✅ Clear "When to Use This Skill" section
- ✅ Proper resource organization (`scripts/`, `references/`, `assets/`)

**Quality Standards:**
- ✅ Comprehensive documentation
- ✅ Working code examples
- ✅ Tested functionality
- ✅ No TODOs or placeholder text
- ✅ Proper cross-referencing of bundled resources

**Best Practices:**
- ✅ Progressive disclosure pattern (metadata → SKILL.md → references)
- ✅ No duplication between SKILL.md and references
- ✅ Scripts have proper shebangs and are executable
- ✅ Clear activation criteria in the description

#### 2. Validation

Before submitting, validate your skill:

```bash
# Use skill-creator validation
~/.claude/plugins/marketplaces/anthropics-skills/skill-creator/scripts/quick_validate.py /path/to/your-skill

# Test in Claude Code
# 1. Copy the skill to ~/.claude/skills/your-skill
# 2. Restart Claude Code
# 3. Verify the skill activates correctly
```
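If the bundled validator is not available, the frontmatter requirement can be approximated with a few lines of standard-library Python — a sketch only; `quick_validate.py` performs more checks than this:

```python
def check_frontmatter(text):
    """Return a list of problems found in a SKILL.md's YAML frontmatter."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing opening '---' frontmatter delimiter"]
    end = None
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            end = i
            break
    if end is None:
        return ["missing closing '---' frontmatter delimiter"]
    # Naive key: value parsing; enough for flat frontmatter like SKILL.md uses.
    fields = {}
    for line in lines[1:end]:
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    for required in ("name", "description"):
        if not fields.get(required):
            problems.append(f"missing required field: {required}")
    return problems
```

Call it with the file contents, e.g. `check_frontmatter(open("SKILL.md").read())`; an empty list means the two required fields are present.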
#### 3. Submission Process

1. **Fork this repository**

2. **Add your skill:**
   ```bash
   # Create the skill directory
   mkdir your-skill-name

   # Add SKILL.md and resources
   # Follow the structure of existing skills
   ```

3. **Update marketplace.json:**
   ```json
   {
     "skills": [
       // ... existing skills
       "./your-skill-name"
     ]
   }
   ```

4. **Update README.md:**
   - Add the skill description to the "Included Skills" section
   - Follow the existing format

5. **Test locally:**
   ```bash
   # Add your fork as a marketplace
   /plugin marketplace add your-username/claude-code-skills

   # Install and test
   /plugin install productivity-skills
   ```

6. **Submit a pull request:**
   - Clear title describing the skill
   - Description explaining the skill's purpose
   - Link to any relevant documentation
   - Screenshots or examples (if applicable)

### Improving Existing Skills

To improve an existing skill:

1. Open an issue describing the improvement
2. Fork the repository
3. Make your changes
4. Test thoroughly
5. Submit a pull request referencing the issue
## Skill Authoring Guidelines

### Writing Style

Use the **imperative/infinitive form** throughout:

✅ **Good:**
```markdown
Extract files from a repomix file using the bundled script.
```

❌ **Bad:**
```markdown
You should extract files from a repomix file by using the script.
```

### Documentation Structure

Follow this pattern:

```markdown
---
name: skill-name
description: Clear description with activation triggers. Activates when...
---

# Skill Name

## Overview
[1-2 sentence explanation]

## When to Use This Skill
[Bullet list of activation scenarios]

## Core Workflow
[Step-by-step instructions]

## Resources
[Reference bundled files]
```

### Bundled Resources

- **scripts/**: Executable code (Python/Bash) for automation
- **references/**: Documentation loaded as needed
- **assets/**: Templates and files used in output

Keep SKILL.md lean (~100-500 lines). Move detailed content to `references/`.
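The 100-500 line guideline is easy to enforce mechanically; for example (a sketch — the bounds come from the guideline above, and "short" is a prompt to double-check rather than a hard failure):

```python
def skill_md_size(text, lo=100, hi=500):
    """Classify a SKILL.md body against the ~100-500 line guideline."""
    n = len(text.splitlines())
    if n < lo:
        return n, "short"  # may be fine, but check nothing essential is missing
    if n > hi:
        return n, "long"   # consider moving detail into references/
    return n, "ok"
```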
## Code Quality

### Python Scripts

- Use Python 3.6+ compatible syntax
- Include the proper shebang: `#!/usr/bin/env python3`
- Add docstrings for functions
- Follow PEP 8 style guidelines
- No external dependencies (or document them clearly)

### Bash Scripts

- Include the shebang: `#!/bin/bash`
- Use `set -e` for error handling
- Add comments for complex operations
- Make scripts executable: `chmod +x script.sh`
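A minimal script skeleton that satisfies the Python requirements above might look like this (a sketch; the line-counting task and argument names are purely illustrative):

```python
#!/usr/bin/env python3
"""Example skill script: count lines in a file (illustrative only)."""
import argparse
import sys


def count_lines(path):
    """Return the number of lines in the file at path."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)


def main():
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("path", help="file to count lines in")
    args = parser.parse_args()
    print(count_lines(args.path))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Note the shebang, the module and function docstrings, and the absence of third-party dependencies.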
## Testing Checklist

Before submitting, verify:

- [ ] Skill has valid YAML frontmatter
- [ ] Description includes activation triggers
- [ ] All referenced files exist
- [ ] Scripts are executable and working
- [ ] No absolute paths (use relative paths or `~/.claude/skills/`)
- [ ] Tested in an actual Claude Code session
- [ ] Documentation is clear and complete
- [ ] No sensitive information (API keys, passwords, etc.)
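Several of these items can be scripted. A sketch covering the executable-bit and absolute-path checks (the `/home/` and `/Users/` patterns are illustrative assumptions about what a leaked absolute path looks like):

```python
import os
import re


def checklist_problems(skill_dir):
    """Flag non-executable scripts and hard-coded absolute paths in a skill."""
    problems = []
    for root, _dirs, files in os.walk(skill_dir):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, skill_dir)
            # Everything under scripts/ should carry an execute bit.
            if rel.startswith("scripts" + os.sep) and not os.access(path, os.X_OK):
                problems.append(f"not executable: {rel}")
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            # Flag paths like /home/alice/... or /Users/alice/...
            if re.search(r"(/home/|/Users/)\w+", text):
                problems.append(f"absolute path in: {rel}")
    return sorted(problems)
```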
## Review Process

Pull requests will be reviewed for:

1. **Functionality**: Does the skill work as described?
2. **Quality**: Does it meet our quality standards?
3. **Documentation**: Is it well-documented?
4. **Originality**: Is it distinct from existing skills?
5. **Value**: Does it benefit the community?

## Questions?

- Open an issue with the `question` label
- Email: daymadev89@gmail.com
- Check the [Claude Code documentation](https://docs.claude.com/en/docs/claude-code)

## License

By contributing, you agree that your contributions will be licensed under the MIT License.

---

Thank you for helping make Claude Code skills better for everyone! 🎉
LICENSE (new file, 21 lines)
MIT License

Copyright (c) 2025 daymade

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md (new file, 231 lines)
# Claude Code Skills Marketplace

[License: MIT](https://opensource.org/licenses/MIT) · [daymade/claude-code-skills](https://github.com/daymade/claude-code-skills)

Professional Claude Code skills marketplace featuring 6 production-ready skills for enhanced development workflows.

## 🚀 Quick Start

### Installation

Add this marketplace to Claude Code:

```bash
/plugin marketplace add daymade/claude-code-skills
```

Then install the productivity skills bundle:

```bash
/plugin install productivity-skills
```

All 6 skills will be automatically available in your Claude Code session!

## 📦 Included Skills
### 1. **github-ops** - GitHub Operations Suite

Comprehensive GitHub operations using the gh CLI and GitHub API.

**When to use:**
- Creating, viewing, or managing pull requests
- Managing issues and repository settings
- Querying GitHub API endpoints
- Working with GitHub Actions workflows
- Automating GitHub operations

**Key features:**
- PR creation with JIRA integration
- Issue management workflows
- GitHub API (REST & GraphQL) operations
- Workflow automation
- Enterprise GitHub support

---

### 2. **markdown-tools** - Document Conversion Suite

Converts documents to markdown with Windows/WSL path handling and Obsidian integration.

**When to use:**
- Converting .doc/.docx/PDF/PPTX files to markdown
- Processing Confluence exports
- Handling Windows/WSL path conversions
- Working with the markitdown utility

**Key features:**
- Multi-format document conversion
- Confluence export processing
- Windows/WSL path automation
- Obsidian vault integration
- Helper scripts for path conversion

---

### 3. **mermaid-tools** - Diagram Generation

Extracts Mermaid diagrams from markdown and generates high-quality PNG images.

**When to use:**
- Converting Mermaid diagrams to PNG
- Extracting diagrams from markdown files
- Processing documentation with embedded diagrams
- Creating presentation-ready visuals

**Key features:**
- Automatic diagram extraction
- High-resolution PNG generation
- Smart sizing based on diagram type
- Customizable dimensions and scaling
- WSL2 Chrome/Puppeteer support

---

### 4. **statusline-generator** - Statusline Customization

Configures Claude Code statuslines with multi-line layouts and cost tracking.

**When to use:**
- Customizing the Claude Code statusline
- Adding cost tracking (session/daily)
- Displaying git status
- Building multi-line layouts for narrow screens
- Customizing colors

**Key features:**
- Multi-line statusline layouts
- ccusage cost integration
- Git branch status indicators
- Customizable colors
- Portrait screen optimization

---

### 5. **teams-channel-post-writer** - Teams Communication

Creates educational Teams channel posts for internal knowledge sharing.

**When to use:**
- Writing Teams posts about features
- Sharing Claude Code best practices
- Documenting lessons learned
- Creating internal announcements
- Teaching effective prompting patterns

**Key features:**
- Post templates with a proven structure
- Writing guidelines for quality content
- "Normal vs Better" example patterns
- Emphasis on underlying principles
- Ready-to-use markdown templates

---

### 6. **repomix-unmixer** - Repository Extraction

Extracts files from repomix-packed repositories and restores directory structures.

**When to use:**
- Unmixing repomix output files
- Extracting packed repositories
- Restoring file structures
- Reviewing repomix content
- Converting repomix output into usable files

**Key features:**
- Multi-format support (XML, Markdown, JSON)
- Automatic format detection
- Directory structure preservation
- UTF-8 encoding support
- Comprehensive validation workflows
## 🎯 Use Cases

### For GitHub Workflows
Use **github-ops** to streamline PR creation, issue management, and API operations.

### For Documentation
Combine **markdown-tools** for document conversion with **mermaid-tools** for diagram generation to create comprehensive documentation.

### For Team Communication
Use **teams-channel-post-writer** to share knowledge and **statusline-generator** to track costs while working.

### For Repository Management
Use **repomix-unmixer** to extract and validate repomix-packed skills or repositories.

## 📚 Documentation

Each skill includes:
- **SKILL.md**: Core instructions and workflows
- **scripts/**: Executable utilities (Python/Bash)
- **references/**: Detailed documentation
- **assets/**: Templates and resources (where applicable)

### Quick Links

- **github-ops**: See `github-ops/references/api_reference.md` for API documentation
- **markdown-tools**: See `markdown-tools/references/conversion-examples.md` for conversion scenarios
- **mermaid-tools**: See `mermaid-tools/references/setup_and_troubleshooting.md` for a setup guide
- **statusline-generator**: See `statusline-generator/references/color_codes.md` for customization
- **teams-channel-post-writer**: See `teams-channel-post-writer/references/writing-guidelines.md` for quality standards
- **repomix-unmixer**: See `repomix-unmixer/references/repomix-format.md` for format specifications
## 🛠️ Requirements

- **Claude Code** 2.0.13 or higher
- **Python 3.6+** (for scripts in multiple skills)
- **gh CLI** (for github-ops)
- **markitdown** (for markdown-tools)
- **mermaid-cli** (for mermaid-tools)
- **ccusage** (optional, for statusline cost tracking)
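A quick way to confirm the CLI tools above are on your PATH is a standard-library check like this (a sketch; the executable names are assumptions that may differ on your system — mermaid-cli, for example, typically installs its binary as `mmdc`):

```python
import shutil


def missing_tools(tools):
    """Return the subset of required executables not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]


# gh for github-ops, markitdown for markdown-tools, mmdc for mermaid-tools
required = ["gh", "markitdown", "mmdc"]
for tool in missing_tools(required):
    print(f"missing: {tool}  (install before using the skill that needs it)")
```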
## 🤝 Contributing

Contributions are welcome! Please feel free to:

1. Open issues for bugs or feature requests
2. Submit pull requests with improvements
3. Share feedback on skill quality

### Skill Quality Standards

All skills in this marketplace follow:
- Imperative/infinitive writing style
- The progressive disclosure pattern
- Proper resource organization
- Comprehensive documentation
- Tested and validated workflows

## 📄 License

This marketplace is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## ⭐ Support

If you find these skills useful, please:
- ⭐ Star this repository
- 🐛 Report issues
- 💡 Suggest improvements
- 📢 Share with your team

## 🔗 Related Resources

- [Claude Code Documentation](https://docs.claude.com/en/docs/claude-code)
- [Agent Skills Guide](https://docs.claude.com/en/docs/claude-code/skills)
- [Plugin Marketplaces](https://docs.claude.com/en/docs/claude-code/plugin-marketplaces)
- [Anthropic Skills Repository](https://github.com/anthropics/skills)

## 📞 Contact

- **GitHub**: [@daymade](https://github.com/daymade)
- **Email**: daymadev89@gmail.com
- **Repository**: [daymade/claude-code-skills](https://github.com/daymade/claude-code-skills)

---

**Built with ❤️ using the skill-creator skill for Claude Code**

Last updated: 2025-10-22 | Version 1.0.0
github-ops/SKILL.md (new file, 210 lines)
---
name: github-ops
description: Provides comprehensive GitHub operations using gh CLI and GitHub API. Activates when working with pull requests, issues, repositories, workflows, or GitHub API operations including creating/viewing/merging PRs, managing issues, querying API endpoints, and handling GitHub workflows in enterprise or public GitHub environments.
---

# GitHub Operations

## Overview

This skill provides comprehensive guidance for GitHub operations using the `gh` CLI tool and the GitHub REST/GraphQL APIs. Use this skill when performing any GitHub-related task, including pull request management, issue tracking, repository operations, workflow automation, and API interactions.

## When to Use This Skill

This skill activates for tasks involving:
- Creating, viewing, editing, or merging pull requests
- Managing GitHub issues or repository settings
- Querying GitHub API endpoints (REST or GraphQL)
- Working with GitHub Actions workflows
- Performing bulk operations on repositories
- Integrating with GitHub Enterprise
- Automating GitHub operations via CLI or API
## Core Operations

### Pull Requests

```bash
# Create a PR with the NOJIRA prefix (bypasses JIRA enforcement checks)
gh pr create --title "NOJIRA: Your PR title" --body "PR description"

# List and view PRs
gh pr list --state open
gh pr view 123

# Manage PRs
gh pr merge 123 --squash
gh pr review 123 --approve
gh pr comment 123 --body "LGTM"
```

📚 See `references/pr_operations.md` for comprehensive PR workflows

**PR Title Convention:**
- With a JIRA ticket: `GR-1234: Descriptive title`
- Without a JIRA ticket: `NOJIRA: Descriptive title`

### Issues

```bash
# Create and manage issues
gh issue create --title "Bug: Issue title" --body "Issue description"
gh issue list --state open --label bug
gh issue edit 456 --add-label "priority-high"
gh issue close 456
```

📚 See `references/issue_operations.md` for detailed issue management

### Repositories

```bash
# View and manage repos
gh repo view --web
gh repo clone owner/repo
gh repo create my-new-repo --public
```

### Workflows

```bash
# Manage GitHub Actions
gh workflow list
gh workflow run workflow-name
gh run watch run-id
gh run download run-id
```

📚 See `references/workflow_operations.md` for advanced workflow operations

### GitHub API

The `gh api` command provides direct access to GitHub REST API endpoints. Refer to `references/api_reference.md` for comprehensive API endpoint documentation.

**Basic API operations:**
```bash
# Get PR details via the API
gh api repos/{owner}/{repo}/pulls/{pr_number}

# Add a PR comment
gh api repos/{owner}/{repo}/issues/{pr_number}/comments \
  -f body="Comment text"

# List workflow runs
gh api repos/{owner}/{repo}/actions/runs
```

For complex queries requiring multiple related resources, use GraphQL. See `references/api_reference.md` for GraphQL examples.
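As a sketch of the GraphQL route, one `gh api graphql` call can fetch a PR together with its recent reviews. The helper below only builds the command; the query fields are standard GitHub GraphQL schema fields, and the argument names are illustrative:

```python
# Query a PR and its last 10 reviews in a single round trip.
QUERY = """
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      title
      state
      reviews(last: 10) { nodes { author { login } state } }
    }
  }
}
"""


def graphql_argv(owner, name, number):
    """Build the gh invocation; execute it with subprocess.run(argv)."""
    return [
        "gh", "api", "graphql",
        "-f", f"query={QUERY}",
        "-f", f"owner={owner}",
        "-f", f"name={name}",
        "-F", f"number={number}",  # -F passes a typed (non-string) value
    ]
```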
## Authentication and Configuration

```bash
# Log in to GitHub
gh auth login

# Log in to GitHub Enterprise
gh auth login --hostname github.enterprise.com

# Check authentication status
gh auth status

# Set the default repository
gh repo set-default owner/repo

# Configure gh settings
gh config set editor vim
gh config set git_protocol ssh
gh config list
```

## Output Formats

Control the output format for programmatic processing:

```bash
# JSON output
gh pr list --json number,title,state,author

# JSON with jq processing
gh pr list --json number,title | jq '.[] | select(.title | contains("bug"))'

# Template output
gh pr list --template '{{range .}}{{.number}}: {{.title}}{{"\n"}}{{end}}'
```

📚 See `references/best_practices.md` for shell patterns and automation strategies
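The same JSON output can be post-processed in Python instead of jq; a sketch (the sample record mirrors the `--json number,title` fields requested above):

```python
import json


def prs_mentioning(raw_json, needle):
    """Filter `gh pr list --json number,title` output by a title substring."""
    return [pr for pr in json.loads(raw_json) if needle in pr["title"]]


# Example shape of `gh pr list --json number,title` output:
sample = '[{"number": 1, "title": "NOJIRA: fix bug"}, {"number": 2, "title": "GR-7: docs"}]'
```

In a script, feed it the captured stdout of the `gh` command, e.g. via `subprocess.run(..., capture_output=True)`.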
## Quick Reference

**Most Common Operations:**
```bash
gh pr create --title "NOJIRA: Title" --body "Description"  # Create PR
gh pr list                                                 # List PRs
gh pr view 123                                             # View PR details
gh pr checks 123                                           # Check PR status
gh pr merge 123 --squash                                   # Merge PR
gh pr comment 123 --body "LGTM"                            # Comment on PR
gh issue create --title "Title" --body "Description"       # Create issue
gh workflow run workflow-name                              # Run workflow
gh repo view --web                                         # Open repo in browser
gh api repos/{owner}/{repo}/pulls/{pr_number}              # Direct API call
```

## Resources

### references/pr_operations.md

Comprehensive pull request operations, including:
- Detailed PR creation patterns (JIRA integration, body from file, targeting branches)
- Viewing and filtering strategies
- Review workflows and approval patterns
- PR lifecycle management
- Bulk operations and automation examples

Load this reference when working with complex PR workflows or bulk operations.

### references/issue_operations.md

Detailed issue management examples, including:
- Issue creation with labels and assignees
- Advanced filtering and search
- Issue lifecycle and state management
- Bulk operations on multiple issues
- Integration with PRs and projects

Load this reference when managing issues at scale or setting up issue workflows.

### references/workflow_operations.md

Advanced GitHub Actions workflow operations, including:
- Workflow triggers and manual runs
- Run monitoring and debugging
- Artifact management
- Secrets and variables
- Performance optimization strategies

Load this reference when working with CI/CD workflows or debugging failed runs.

### references/best_practices.md

Shell scripting patterns and automation strategies, including:
- Output formatting (JSON, templates, jq)
- Pagination and large result sets
- Error handling and retry logic
- Bulk operations and parallel execution
- Enterprise GitHub patterns
- Performance optimization

Load this reference when building automation scripts or handling enterprise deployments.

### references/api_reference.md

Comprehensive GitHub REST API endpoint documentation, including:
- Complete API endpoint reference with examples
- Request/response formats
- Authentication patterns
- Rate-limiting guidance
- Webhook configurations
- Advanced GraphQL query patterns

Load this reference when performing complex API operations or when detailed endpoint specifications are needed.
github-ops/references/api_reference.md (new file, 793 lines)
# GitHub API Reference

This reference provides comprehensive documentation for the GitHub REST and GraphQL APIs, focusing on common operations accessible via `gh api`.

## Table of Contents

1. [Authentication](#authentication)
2. [Pull Requests API](#pull-requests-api)
3. [Issues API](#issues-api)
4. [Repositories API](#repositories-api)
5. [Actions/Workflows API](#actionsworkflows-api)
6. [Search API](#search-api)
7. [GraphQL API](#graphql-api)
8. [Rate Limiting](#rate-limiting)
9. [Webhooks](#webhooks)

## Authentication

All API calls via `gh api` automatically use the authenticated token from `gh auth login`.

```bash
# Check authentication status
gh auth status

# View the current token (use cautiously)
gh auth status --show-token
```

**API Headers:**
- `Accept: application/vnd.github+json` (automatically set)
- `X-GitHub-Api-Version: 2022-11-28` (recommended)

## Pull Requests API

### List Pull Requests

**Endpoint:** `GET /repos/{owner}/{repo}/pulls`

```bash
# List all open PRs
gh api repos/{owner}/{repo}/pulls

# List PRs with filters (pass filters in the query string: `gh api` switches
# to POST when -f fields are given without an explicit method)
gh api "repos/{owner}/{repo}/pulls?state=closed&base=main"

# List PRs sorted by last update
gh api "repos/{owner}/{repo}/pulls?sort=updated&direction=desc"
```

**Query Parameters:**
- `state`: `open`, `closed`, `all` (default: `open`)
- `head`: Filter by branch name (format: `user:ref-name`)
- `base`: Filter by base branch
- `sort`: `created`, `updated`, `popularity`, `long-running`
- `direction`: `asc`, `desc`
- `per_page`: Results per page (max: 100)
- `page`: Page number
### Get Pull Request

**Endpoint:** `GET /repos/{owner}/{repo}/pulls/{pull_number}`

```bash
# Get PR details
gh api repos/{owner}/{repo}/pulls/123

# Get specific fields
gh api repos/{owner}/{repo}/pulls/123 --jq '.title, .state, .mergeable'
```

**Response includes:**
- Basic PR info (title, body, state)
- Author and assignees
- Labels, milestone
- Merge status and conflicts
- Review status
- Head and base branch info

### Create Pull Request

**Endpoint:** `POST /repos/{owner}/{repo}/pulls`

```bash
# Create PR via API
gh api repos/{owner}/{repo}/pulls \
  -f title="NOJIRA: New feature" \
  -f body="Description of changes" \
  -f head="feature-branch" \
  -f base="main"

# Create draft PR
gh api repos/{owner}/{repo}/pulls \
  -f title="WIP: Feature" \
  -f body="Work in progress" \
  -f head="feature-branch" \
  -f base="main" \
  -F draft=true
```

**Required fields:**
- `title`: PR title
- `head`: Branch containing changes
- `base`: Branch to merge into

**Optional fields:**
- `body`: PR description
- `draft`: Boolean for draft PR
- `maintainer_can_modify`: Allow maintainer edits
### Update Pull Request

**Endpoint:** `PATCH /repos/{owner}/{repo}/pulls/{pull_number}`

```bash
# Update PR title and body
gh api repos/{owner}/{repo}/pulls/123 \
  -X PATCH \
  -f title="Updated title" \
  -f body="Updated description"

# Convert to draft
gh api repos/{owner}/{repo}/pulls/123 \
  -X PATCH \
  -F draft=true

# Change base branch
gh api repos/{owner}/{repo}/pulls/123 \
  -X PATCH \
  -f base="develop"
```

### Merge Pull Request

**Endpoint:** `PUT /repos/{owner}/{repo}/pulls/{pull_number}/merge`

```bash
# Merge with commit message
gh api repos/{owner}/{repo}/pulls/123/merge \
  -X PUT \
  -f commit_title="Merge PR #123" \
  -f commit_message="Additional merge message" \
  -f merge_method="squash"

# Merge methods: merge, squash, rebase
```

### List PR Comments

**Endpoint:** `GET /repos/{owner}/{repo}/pulls/{pull_number}/comments`

```bash
# Get all review comments
gh api repos/{owner}/{repo}/pulls/123/comments

# Get issue comments (conversation tab)
gh api repos/{owner}/{repo}/issues/123/comments
```

### Create PR Review

**Endpoint:** `POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews`

```bash
# Approve PR
gh api repos/{owner}/{repo}/pulls/123/reviews \
  -f event="APPROVE" \
  -f body="Looks good!"

# Request changes
gh api repos/{owner}/{repo}/pulls/123/reviews \
  -f event="REQUEST_CHANGES" \
  -f body="Please address these issues"

# Comment without approval/rejection
gh api repos/{owner}/{repo}/pulls/123/reviews \
  -f event="COMMENT" \
  -f body="Some feedback"
```

**Review events:**
- `APPROVE`: Approve the PR
- `REQUEST_CHANGES`: Request changes
- `COMMENT`: General comment

### List PR Reviews

**Endpoint:** `GET /repos/{owner}/{repo}/pulls/{pull_number}/reviews`

```bash
# Get all reviews
gh api repos/{owner}/{repo}/pulls/123/reviews

# Parse review states
gh api repos/{owner}/{repo}/pulls/123/reviews --jq '[.[] | {user: .user.login, state: .state}]'
```

### Request Reviewers

**Endpoint:** `POST /repos/{owner}/{repo}/pulls/{pull_number}/requested_reviewers`

```bash
# Request user reviewers
gh api repos/{owner}/{repo}/pulls/123/requested_reviewers \
  -f "reviewers[]=user1" \
  -f "reviewers[]=user2"

# Request team reviewers
gh api repos/{owner}/{repo}/pulls/123/requested_reviewers \
  -f "team_reviewers[]=team-slug"
```
## Issues API

### List Issues

**Endpoint:** `GET /repos/{owner}/{repo}/issues`

```bash
# List all issues
gh api repos/{owner}/{repo}/issues

# Filter by state and labels (-X GET keeps -f fields as query parameters)
gh api -X GET repos/{owner}/{repo}/issues -f state=open -f labels="bug,priority-high"

# Filter by assignee
gh api -X GET repos/{owner}/{repo}/issues -f assignee="username"

# Filter by milestone (takes the milestone number, not the title)
gh api -X GET repos/{owner}/{repo}/issues -f milestone=1
```

**Query Parameters:**
- `state`: `open`, `closed`, `all`
- `labels`: Comma-separated label names
- `assignee`: Username or `none` or `*`
- `creator`: Username
- `mentioned`: Username
- `milestone`: Milestone number or `none` or `*`
- `sort`: `created`, `updated`, `comments`
- `direction`: `asc`, `desc`
### Create Issue

**Endpoint:** `POST /repos/{owner}/{repo}/issues`

```bash
# Create basic issue
gh api repos/{owner}/{repo}/issues \
  -f title="Bug: Something broke" \
  -f body="Detailed description"

# Create issue with labels and assignees
gh api repos/{owner}/{repo}/issues \
  -f title="Enhancement request" \
  -f body="Description" \
  -f "labels[]=enhancement" \
  -f "labels[]=good-first-issue" \
  -f "assignees[]=username1"
```

### Update Issue

**Endpoint:** `PATCH /repos/{owner}/{repo}/issues/{issue_number}`

```bash
# Close issue
gh api repos/{owner}/{repo}/issues/456 \
  -X PATCH \
  -f state="closed"

# Update labels
gh api repos/{owner}/{repo}/issues/456 \
  -X PATCH \
  -f "labels[]=bug" \
  -f "labels[]=fixed"

# Assign issue
gh api repos/{owner}/{repo}/issues/456 \
  -X PATCH \
  -f "assignees[]=username"
```

### Add Comment to Issue

**Endpoint:** `POST /repos/{owner}/{repo}/issues/{issue_number}/comments`

```bash
# Add comment
gh api repos/{owner}/{repo}/issues/456/comments \
  -f body="This is a comment"
```
## Repositories API

### Get Repository

**Endpoint:** `GET /repos/{owner}/{repo}`

```bash
# Get repository details
gh api repos/{owner}/{repo}

# Get specific fields
gh api repos/{owner}/{repo} --jq '{name: .name, stars: .stargazers_count, forks: .forks_count}'
```

### List Branches

**Endpoint:** `GET /repos/{owner}/{repo}/branches`

```bash
# List all branches
gh api repos/{owner}/{repo}/branches

# Get branch names only
gh api repos/{owner}/{repo}/branches --jq '.[].name'
```

### Get Branch

**Endpoint:** `GET /repos/{owner}/{repo}/branches/{branch}`

```bash
# Get branch details
gh api repos/{owner}/{repo}/branches/main

# Check if branch is protected
gh api repos/{owner}/{repo}/branches/main --jq '.protected'
```

### Get Branch Protection

**Endpoint:** `GET /repos/{owner}/{repo}/branches/{branch}/protection`

```bash
# Get protection rules
gh api repos/{owner}/{repo}/branches/main/protection
```

### List Commits

**Endpoint:** `GET /repos/{owner}/{repo}/commits`

```bash
# List recent commits
gh api repos/{owner}/{repo}/commits

# Filter by branch (-X GET keeps -f fields as query parameters)
gh api -X GET repos/{owner}/{repo}/commits -f sha="feature-branch"

# Filter by author
gh api -X GET repos/{owner}/{repo}/commits -f author="username"

# Filter by date range
gh api -X GET repos/{owner}/{repo}/commits -f since="2024-01-01T00:00:00Z"
```

### Get Commit

**Endpoint:** `GET /repos/{owner}/{repo}/commits/{sha}`

```bash
# Get commit details
gh api repos/{owner}/{repo}/commits/abc123

# Get files changed in commit
gh api repos/{owner}/{repo}/commits/abc123 --jq '.files[].filename'
```

### Get Commit Status

**Endpoint:** `GET /repos/{owner}/{repo}/commits/{sha}/status`

```bash
# Get combined status for commit
gh api repos/{owner}/{repo}/commits/abc123/status

# Check if all checks passed
gh api repos/{owner}/{repo}/commits/abc123/status --jq '.state'
```

### List Collaborators

**Endpoint:** `GET /repos/{owner}/{repo}/collaborators`

```bash
# List all collaborators
gh api repos/{owner}/{repo}/collaborators

# Get collaborator permissions
gh api repos/{owner}/{repo}/collaborators --jq '[.[] | {login: .login, permissions: .permissions}]'
```

### Create Release

**Endpoint:** `POST /repos/{owner}/{repo}/releases`

```bash
# Create release
gh api repos/{owner}/{repo}/releases \
  -f tag_name="v1.0.0" \
  -f name="Release v1.0.0" \
  -f body="Release notes here" \
  -F draft=false \
  -F prerelease=false

# Create draft release
gh api repos/{owner}/{repo}/releases \
  -f tag_name="v1.1.0" \
  -f name="Release v1.1.0" \
  -f body="Release notes" \
  -F draft=true
```

### List Releases

**Endpoint:** `GET /repos/{owner}/{repo}/releases`

```bash
# List all releases
gh api repos/{owner}/{repo}/releases

# Get latest release
gh api repos/{owner}/{repo}/releases/latest
```
## Actions/Workflows API

### List Workflows

**Endpoint:** `GET /repos/{owner}/{repo}/actions/workflows`

```bash
# List all workflows
gh api repos/{owner}/{repo}/actions/workflows

# Get workflow names
gh api repos/{owner}/{repo}/actions/workflows --jq '.workflows[].name'
```

### Get Workflow

**Endpoint:** `GET /repos/{owner}/{repo}/actions/workflows/{workflow_id}`

```bash
# Get workflow by ID
gh api repos/{owner}/{repo}/actions/workflows/12345

# Get workflow by filename
gh api repos/{owner}/{repo}/actions/workflows/ci.yml
```

### List Workflow Runs

**Endpoint:** `GET /repos/{owner}/{repo}/actions/runs`

```bash
# List all runs
gh api repos/{owner}/{repo}/actions/runs

# Filter by workflow (-X GET keeps -f fields as query parameters)
gh api -X GET repos/{owner}/{repo}/actions/runs -f workflow_id=12345

# Filter by branch
gh api -X GET repos/{owner}/{repo}/actions/runs -f branch="main"

# Filter by status
gh api -X GET repos/{owner}/{repo}/actions/runs -f status="completed"

# Filter by conclusion (passed via the status parameter, which also accepts conclusion values)
gh api -X GET repos/{owner}/{repo}/actions/runs -f status="success"
```

**Status values:** `queued`, `in_progress`, `completed`
**Conclusion values:** `success`, `failure`, `cancelled`, `skipped`, `timed_out`, `action_required`
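As a quick sketch of working with these values, conclusions can be tallied with jq. The payload below is a hand-written sample shaped like the `/actions/runs` response, not real API output:

```bash
# Hypothetical sample shaped like the /actions/runs response
runs='{"workflow_runs":[{"conclusion":"success"},{"conclusion":"failure"},{"conclusion":"success"}]}'

# Count runs per conclusion
echo "$runs" | jq -c '[.workflow_runs[].conclusion] | group_by(.) | map({(.[0]): length}) | add'
```

In a live script, the `runs` variable would come from `gh api -X GET repos/{owner}/{repo}/actions/runs`.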
### Get Workflow Run

**Endpoint:** `GET /repos/{owner}/{repo}/actions/runs/{run_id}`

```bash
# Get run details
gh api repos/{owner}/{repo}/actions/runs/123456

# Check run status
gh api repos/{owner}/{repo}/actions/runs/123456 --jq '.status, .conclusion'
```

### Trigger Workflow

**Endpoint:** `POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches`

```bash
# Trigger workflow on branch
gh api repos/{owner}/{repo}/actions/workflows/ci.yml/dispatches \
  -f ref="main"

# Trigger with inputs
gh api repos/{owner}/{repo}/actions/workflows/deploy.yml/dispatches \
  -f ref="main" \
  -f "inputs[environment]=production" \
  -f "inputs[version]=v1.0.0"
```

### Cancel Workflow Run

**Endpoint:** `POST /repos/{owner}/{repo}/actions/runs/{run_id}/cancel`

```bash
# Cancel run
gh api repos/{owner}/{repo}/actions/runs/123456/cancel -X POST
```

### Rerun Workflow

**Endpoint:** `POST /repos/{owner}/{repo}/actions/runs/{run_id}/rerun`

```bash
# Rerun all jobs
gh api repos/{owner}/{repo}/actions/runs/123456/rerun -X POST

# Rerun failed jobs only
gh api repos/{owner}/{repo}/actions/runs/123456/rerun-failed-jobs -X POST
```

### Download Workflow Logs

**Endpoint:** `GET /repos/{owner}/{repo}/actions/runs/{run_id}/logs`

```bash
# Download logs (returns a zip archive)
gh api repos/{owner}/{repo}/actions/runs/123456/logs > logs.zip
```
## Search API

### Search Repositories

**Endpoint:** `GET /search/repositories`

```bash
# Search repositories (-X GET keeps -f q as a query parameter)
gh api -X GET search/repositories -f q="topic:spring-boot language:java"

# Search with filters
gh api -X GET search/repositories -f q="stars:>1000 language:python"
```

### Search Code

**Endpoint:** `GET /search/code`

```bash
# Search code
gh api -X GET search/code -f q="addClass repo:owner/repo"

# Search in specific path
gh api -X GET search/code -f q="function path:src/ repo:owner/repo"
```

### Search Issues and PRs

**Endpoint:** `GET /search/issues`

```bash
# Search issues
gh api -X GET search/issues -f q="is:issue is:open label:bug repo:owner/repo"

# Search PRs
gh api -X GET search/issues -f q="is:pr is:merged author:username"
```
## GraphQL API

### Basic GraphQL Query

```bash
# Execute GraphQL query
gh api graphql -f query='
query {
  viewer {
    login
    name
  }
}
'
```

### Query Repository Information

```bash
gh api graphql -f query='
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    name
    description
    stargazerCount
    forkCount
    issues(states: OPEN) {
      totalCount
    }
    pullRequests(states: OPEN) {
      totalCount
    }
  }
}
' -f owner="owner" -f name="repo"
```

### Query PR with Reviews

```bash
gh api graphql -f query='
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      title
      state
      author {
        login
      }
      reviews(first: 10) {
        nodes {
          state
          author {
            login
          }
          submittedAt
        }
      }
      commits(last: 1) {
        nodes {
          commit {
            statusCheckRollup {
              state
            }
          }
        }
      }
    }
  }
}
' -f owner="owner" -f name="repo" -F number=123
```

### Query Multiple PRs with Pagination

```bash
gh api graphql -f query='
query($owner: String!, $name: String!, $cursor: String) {
  repository(owner: $owner, name: $name) {
    pullRequests(first: 10, states: OPEN, after: $cursor) {
      pageInfo {
        hasNextPage
        endCursor
      }
      nodes {
        number
        title
        author {
          login
        }
        createdAt
      }
    }
  }
}
' -f owner="owner" -f name="repo"
```
## Rate Limiting

### Check Rate Limit

**Endpoint:** `GET /rate_limit`

```bash
# Check current rate limit
gh api rate_limit

# Check core API limit
gh api rate_limit --jq '.resources.core'

# Check GraphQL limit
gh api rate_limit --jq '.resources.graphql'
```

**Rate limits:**
- Authenticated: 5,000 requests/hour
- GraphQL: 5,000 points/hour
- Search: 30 requests/minute

### Rate Limit Headers

Every API response includes rate limit headers:
- `X-RateLimit-Limit`: Total requests allowed
- `X-RateLimit-Remaining`: Requests remaining
- `X-RateLimit-Reset`: Unix timestamp when the limit resets
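These headers make it straightforward to throttle scripts. A minimal sketch, using hard-coded sample header values rather than a live response:

```bash
# Hypothetical values as they would appear in X-RateLimit-* headers
remaining=3
reset_ts=1735689600   # X-RateLimit-Reset is a Unix timestamp
now=1735689540        # in a real script: now=$(date +%s)

# Sleep until the window resets when close to the limit
if [ "$remaining" -lt 10 ]; then
  wait=$(( reset_ts - now ))
  echo "Rate limit nearly exhausted; would sleep ${wait}s"
fi
```

In practice, the header values can be read from a `gh api --include` response or from `gh api rate_limit`.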
## Webhooks

### List Webhooks

**Endpoint:** `GET /repos/{owner}/{repo}/hooks`

```bash
# List repository webhooks
gh api repos/{owner}/{repo}/hooks
```

### Create Webhook

**Endpoint:** `POST /repos/{owner}/{repo}/hooks`

```bash
# Create webhook
gh api repos/{owner}/{repo}/hooks \
  -f name="web" \
  -f "config[url]=https://example.com/webhook" \
  -f "config[content_type]=json" \
  -f "events[]=push" \
  -f "events[]=pull_request"
```

### Test Webhook

**Endpoint:** `POST /repos/{owner}/{repo}/hooks/{hook_id}/tests`

```bash
# Test webhook
gh api repos/{owner}/{repo}/hooks/12345/tests -X POST
```

## Pagination

For endpoints returning lists, use pagination:

```bash
# First page (default)
gh api repos/{owner}/{repo}/issues

# Specific page (-X GET keeps -f fields as query parameters)
gh api -X GET repos/{owner}/{repo}/issues -f page=2 -f per_page=50

# Iterate through all pages
for page in {1..10}; do
  gh api -X GET repos/{owner}/{repo}/issues -f page=$page -f per_page=100
done
```

**Link header:** The response includes a `Link` header with `next`, `prev`, `first`, `last` URLs; `gh api --paginate` follows it automatically.
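When driving pagination by hand, the `next` URL can be extracted from that header (visible with `gh api --include`). A sketch using a sample header string with illustrative URLs:

```bash
# Sample Link header string (URLs are illustrative)
link='<https://api.github.com/repositories/1/issues?page=2>; rel="next", <https://api.github.com/repositories/1/issues?page=5>; rel="last"'

# Split on commas, keep the entry tagged rel="next", strip the angle brackets
next_url=$(printf '%s\n' "$link" | tr ',' '\n' | grep 'rel="next"' | sed -E 's/.*<([^>]+)>.*/\1/')
echo "$next_url"
```

For most scripts, letting `gh api --paginate` follow the header is simpler; manual parsing is only needed when you want to stop mid-way or process pages asynchronously.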
## Error Handling

**Common HTTP status codes:**
- `200 OK`: Success
- `201 Created`: Resource created
- `204 No Content`: Success with no response body
- `400 Bad Request`: Invalid request
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Insufficient permissions or rate limited
- `404 Not Found`: Resource doesn't exist
- `422 Unprocessable Entity`: Validation failed

**Error response format:**
```json
{
  "message": "Validation Failed",
  "errors": [
    {
      "resource": "PullRequest",
      "code": "custom",
      "message": "Error details"
    }
  ]
}
```
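A script can surface these fields directly. A minimal sketch with jq against the sample payload above (hard-coded here rather than captured from a live call):

```bash
# Sample error payload matching the format above
err='{"message":"Validation Failed","errors":[{"resource":"PullRequest","code":"custom","message":"Error details"}]}'

# Print one line per error, prefixed with the top-level message
echo "$err" | jq -r '.message as $m | .errors[] | "\($m): \(.resource) (\(.code)) - \(.message)"'
```

Note that `gh api` writes error bodies to stderr, so a real script would capture them with `2>&1` before parsing.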
## Best Practices

1. **Use conditional requests:** Include `If-None-Match` header with ETag to save rate limit quota
2. **Paginate efficiently:** Use `per_page=100` (maximum) to minimize requests
3. **Use GraphQL for complex queries:** Fetch multiple related resources in a single request
4. **Check rate limits proactively:** Monitor the `X-RateLimit-Remaining` header
5. **Handle errors gracefully:** Implement retry logic with exponential backoff for 5xx errors
6. **Cache responses:** Cache GET responses when data doesn't change frequently
7. **Use webhooks:** Subscribe to events instead of polling

## Additional Resources

- GitHub REST API documentation: https://docs.github.com/en/rest
- GitHub GraphQL API documentation: https://docs.github.com/en/graphql
- gh CLI manual: https://cli.github.com/manual/

446
github-ops/references/best_practices.md
Normal file
@@ -0,0 +1,446 @@
# GitHub CLI Best Practices

Shell scripting patterns, bulk operations, and automation strategies for gh CLI.

## Output Formats and Processing

### JSON Output for Programmatic Parsing

```bash
# Default: Human-readable text
gh pr list

# JSON output for programmatic parsing
gh pr list --json number,title,state,author

# JSON with jq processing
gh pr list --json number,title | jq '.[] | select(.title | contains("bug"))'

# Template output for custom formatting
gh pr list --template '{{range .}}{{.number}}: {{.title}}{{"\n"}}{{end}}'
```

### Field Selection

```bash
# Select specific fields
gh pr view 123 --json number,title,state,reviews

# Passing --json with no fields errors out but prints the list of available fields
gh pr view 123 --json

# Nested field extraction
gh pr list --json number,author | jq '.[].author.login'
```

---
## Pagination Strategies

### Controlling Result Limits

```bash
# Limit results (default is usually 30)
gh pr list --limit 50

# Show all results (use carefully)
gh pr list --limit 999

# gh pr list has no page flag; to page through results, use the REST API
# (--paginate follows the Link header automatically)
gh api --paginate repos/{owner}/{repo}/pulls --jq '.[].number'
```

### Processing Large Result Sets

```bash
# Fetch PRs in explicit pages via the REST API
for page in {1..10}; do
  gh api "repos/{owner}/{repo}/pulls?per_page=100&page=$page" --jq '.[] | {number, title}'
done

# Stop when no more results
page=1
while true; do
  results=$(gh api "repos/{owner}/{repo}/pulls?per_page=100&page=$page")
  if [ "$results" == "[]" ]; then break; fi
  echo "$results"
  ((page++))
done
```

---
## Error Handling and Reliability

### Exit Code Checking

```bash
# Check exit codes
gh pr merge 123 && echo "Success" || echo "Failed"

# Capture exit code
gh pr create --title "Title" --body "Body"
exit_code=$?
if [ $exit_code -eq 0 ]; then
  echo "PR created successfully"
else
  echo "PR creation failed with code $exit_code"
fi
```

### Error Output Handling

```bash
# Separate stdout and stderr
gh pr list > success.log 2> error.log

# Redirect errors to stdout
gh pr list 2>&1 | tee combined.log

# Suppress errors
gh pr view 999 2>/dev/null || echo "PR not found"
```

### Retry Logic

```bash
# Simple retry
for i in {1..3}; do
  gh api repos/{owner}/{repo}/pulls && break
  echo "Retry $i failed, waiting..."
  sleep 5
done

# Exponential backoff
attempt=1
max_attempts=5
delay=1

while [ $attempt -le $max_attempts ]; do
  if gh pr create --title "Title" --body "Body"; then
    break
  fi
  echo "Attempt $attempt failed, retrying in ${delay}s..."
  sleep $delay
  delay=$((delay * 2))
  attempt=$((attempt + 1))
done
```

---
## Bulk Operations

### Operating on Multiple Items

```bash
# Close all PRs with specific label
gh pr list --label "wip" --json number -q '.[].number' | \
  xargs -I {} gh pr close {}

# Add label to multiple issues
gh issue list --state open --json number -q '.[].number' | \
  xargs -I {} gh issue edit {} --add-label "needs-triage"

# Approve multiple PRs
gh pr list --author username --json number -q '.[].number' | \
  xargs -I {} gh pr review {} --approve
```

### Parallel Execution

```bash
# Process items in parallel (GNU parallel)
gh pr list --json number -q '.[].number' | \
  parallel -j 4 gh pr view {}

# xargs parallel execution
gh pr list --json number -q '.[].number' | \
  xargs -P 4 -I {} gh pr checks {}
```

### Batch Processing with Confirmation

```bash
# Confirm before bulk operation
gh pr list --label "old" --json number,title | \
  jq -r '.[] | "\(.number): \(.title)"' | \
  while read -r line; do
    echo "Close PR $line? (y/n)"
    read -r answer </dev/tty   # read from the terminal, not the pipeline
    if [ "$answer" == "y" ]; then
      pr_num=$(echo "$line" | cut -d: -f1)
      gh pr close "$pr_num"
    fi
  done
```

---
## Enterprise GitHub Patterns

### Working with GitHub Enterprise

```bash
# Authenticate with enterprise hostname
gh auth login --hostname github.enterprise.com

# Set environment variable for enterprise
export GH_HOST=github.enterprise.com
gh pr list

# Target a specific host for a single command (gh pr list has no --hostname flag)
GH_HOST=github.enterprise.com gh pr list

# Check current authentication
gh auth status
```

### Switching Between Instances

```bash
# Switch between GitHub.com and Enterprise accounts
gh auth switch

# Use a specific auth token (GH_ENTERPRISE_TOKEN applies to non-github.com hosts)
GH_ENTERPRISE_TOKEN=ghp_enterprise_token GH_HOST=github.enterprise.com gh pr list
```

---
## Automation and Scripting

### Capturing Output

```bash
# Capture PR number (gh pr create prints the PR URL)
PR_NUMBER=$(gh pr create --title "Title" --body "Body" | grep -oP '\d+$')
echo "Created PR #$PR_NUMBER"

# Capture JSON and parse
pr_data=$(gh pr view 123 --json number,title,state)
pr_state=$(echo "$pr_data" | jq -r '.state')

# Capture and validate
if output=$(gh pr merge 123 2>&1); then
  echo "Merged successfully"
else
  echo "Merge failed: $output"
fi
```

### Conditional Operations

```bash
# Check PR status before merging
pr_state=$(gh pr view 123 --json state -q '.state')
if [ "$pr_state" == "OPEN" ]; then
  gh pr merge 123 --squash
fi

# Check CI status
checks=$(gh pr checks 123 --json state -q '.[].state')
if echo "$checks" | grep -q "FAILURE"; then
  echo "CI checks failed, cannot merge"
  exit 1
fi
```
### Workflow Automation

```bash
#!/bin/bash
# Automated PR workflow

# Create feature branch
git checkout -b feature/new-feature

# Make changes and commit
# ...

# Push and create PR
git push -u origin feature/new-feature
PR_NUM=$(gh pr create \
  --title "feat: New feature" \
  --body "Description of feature" \
  --label "enhancement" \
  | grep -oP '\d+$')

# Wait for CI (bail out on failure so the loop cannot spin forever)
echo "Waiting for CI checks..."
while true; do
  states=$(gh pr checks "$PR_NUM" --json state -q '.[].state')
  if echo "$states" | grep -q "FAILURE"; then
    echo "A check failed" >&2
    exit 1
  fi
  if ! echo "$states" | grep -qv "SUCCESS"; then
    echo "All checks passed!"
    break
  fi
  sleep 30
done

# Auto-merge once checks pass
gh pr merge "$PR_NUM" --squash --auto
```

---
## Configuration and Customization

### Setting Defaults

```bash
# Set default repository
gh repo set-default owner/repo

# Configure editor
gh config set editor vim

# Configure browser
gh config set browser firefox

# Set Git protocol preference
gh config set git_protocol ssh  # or https

# View current configuration
gh config list
```

### Environment Variables

```bash
# GitHub token
export GH_TOKEN=ghp_your_token

# GitHub host
export GH_HOST=github.enterprise.com

# Default repository
export GH_REPO=owner/repo

# Pager
export GH_PAGER=less

# Suppress update-notification checks (useful in automation)
export GH_NO_UPDATE_NOTIFIER=1
```

---
## Performance Optimization

### Reducing API Calls

```bash
# Cache frequently used data (include --state all so merged PRs are present)
pr_list=$(gh pr list --state all --json number,title,state)
echo "$pr_list" | jq '.[] | select(.state == "OPEN")'
echo "$pr_list" | jq '.[] | select(.state == "MERGED")'

# Use a single API call for multiple fields
gh pr view 123 --json number,title,state,reviews,comments
```

### Selective Field Loading

```bash
# Only fetch the fields you need
gh pr list --json number,title                      # fast

# Requesting many heavy fields is slower
gh pr list --json number,title,body,reviews,comments
```

---
## Debugging and Troubleshooting

### Verbose Output

```bash
# Enable debug logging
GH_DEBUG=1 gh pr list

# API request logging
GH_DEBUG=api gh pr create --title "Test"

# Trace a direct API call
GH_DEBUG=api gh api repos/{owner}/{repo}
```

### Testing API Calls

```bash
# Test API endpoint
gh api repos/{owner}/{repo}/pulls

# Test with custom headers
gh api repos/{owner}/{repo}/pulls \
  -H "Accept: application/vnd.github.v3+json"

# Test pagination
gh api repos/{owner}/{repo}/pulls --paginate
```

---
## Best Practices Summary

### Do's

✅ **Use JSON output** for programmatic parsing
✅ **Handle errors** with proper exit code checking
✅ **Implement retries** for network operations
✅ **Cache results** when making multiple queries
✅ **Use bulk operations** for efficiency
✅ **Set appropriate limits** to avoid rate limiting
✅ **Validate input** before operations
✅ **Log operations** for an audit trail

### Don'ts

❌ **Don't hardcode credentials** - Use environment variables or `gh auth`
❌ **Don't ignore errors** - Always check exit codes
❌ **Don't fetch all fields** - Select only what you need
❌ **Don't skip rate limit checks** - Monitor API usage
❌ **Don't run destructive operations without confirmation**
❌ **Don't assume unlimited results** - Always paginate
❌ **Don't rely on interactive prompts in automation** - Pass all required flags explicitly

---
## Common Patterns
|
||||
|
||||
### Create, Wait, Merge Pattern
|
||||
|
||||
```bash
|
||||
# Create PR
|
||||
PR_NUM=$(gh pr create --title "Feature" --body "Description" | grep -oP '\d+$')
|
||||
|
||||
# Wait for checks
|
||||
gh pr checks "$PR_NUM" --watch
|
||||
|
||||
# Merge when ready
|
||||
gh pr merge "$PR_NUM" --squash
|
||||
```
|
||||
|
||||
### Search and Process Pattern
|
||||
|
||||
```bash
|
||||
# Find and process matching items
|
||||
gh pr list --json number,title | \
|
||||
jq -r '.[] | select(.title | contains("bug")) | .number' | \
|
||||
while read -r pr; do
|
||||
gh pr edit "$pr" --add-label "bug"
|
||||
done
|
||||
```
|
||||
|
||||
### Batch Approval Pattern
|
||||
|
||||
```bash
|
||||
# Review and approve multiple PRs
|
||||
gh pr list --author trusted-user --json number -q '.[].number' | \
|
||||
while read -r pr; do
|
||||
gh pr diff "$pr"
|
||||
gh pr review "$pr" --approve --body "LGTM"
|
||||
done
|
||||
```

---

`github-ops/references/issue_operations.md` (new file, 283 lines)

# Issue Operations Reference

Comprehensive examples for GitHub issue management using gh CLI.

## Creating Issues

### Basic Issue Creation

```bash
# Create simple issue
gh issue create --title "Bug: Issue title" --body "Issue description"

# Create issue with labels and assignees
gh issue create --title "Bug: Title" --body "Description" \
  --label bug,priority-high --assignee username

# Create issue from template
gh issue create --template bug_report.md

# Create issue with body from file
gh issue create --title "Feature Request" --body-file feature.md
```

---

## Listing Issues

### Basic Listing

```bash
# List all issues
gh issue list

# List issues with filters
gh issue list --state open --label bug
gh issue list --assignee username
gh issue list --milestone "v2.0"

# List with pagination
gh issue list --limit 50
```

### Advanced Filtering

```bash
# List issues by multiple labels
gh issue list --label "bug,priority-high"

# List issues not assigned to anyone
gh issue list --search "no:assignee"

# List issues mentioning a user
gh issue list --mention username

# List recently updated issues
gh issue list --state all --limit 10
```

---

## Viewing Issues

### Viewing Details

```bash
# View specific issue
gh issue view 456

# View issue in browser
gh issue view 456 --web

# View issue with comments
gh issue view 456 --comments

# Get issue as JSON
gh issue view 456 --json number,title,body,state,labels,assignees
```

---

## Editing Issues

### Update Issue Metadata

```bash
# Edit issue title
gh issue edit 456 --title "New title"

# Edit issue body
gh issue edit 456 --body "Updated description"

# Add labels
gh issue edit 456 --add-label enhancement,documentation

# Remove labels
gh issue edit 456 --remove-label wip

# Add assignees
gh issue edit 456 --add-assignee user1,user2

# Remove assignees
gh issue edit 456 --remove-assignee user1

# Set milestone
gh issue edit 456 --milestone "v2.0"

# Remove milestone
gh issue edit 456 --milestone ""
```

---

## Issue Lifecycle

### State Management

```bash
# Close issue
gh issue close 456

# Close issue with comment
gh issue close 456 --comment "Fixed in PR #789"

# Reopen issue
gh issue reopen 456

# Reopen with comment
gh issue reopen 456 --comment "Issue persists in v2.0"
```

### Issue Linking

```bash
# Link to PR in issue (manual)
gh issue comment 456 --body "Fixed by #789"

# Close issue when PR merges (in PR description)
# Use keywords: closes, fixes, resolves
gh pr create --title "Fix bug" --body "Closes #456"
```

---

## Commenting on Issues

### Adding Comments

```bash
# Add comment to issue
gh issue comment 456 --body "Comment text"

# Add comment from file
gh issue comment 456 --body-file comment.txt

# Add comment with an emoji shortcode
gh issue comment 456 --body "Great idea! :+1:"
```

---

## Issue Pinning and Priority

### Pinning Issues

```bash
# Pin issue to repository
gh issue pin 456

# Unpin issue
gh issue unpin 456
```

---

## Issue Transfers

### Transfer to Another Repository

```bash
# Transfer issue to another repo
gh issue transfer 456 owner/other-repo
```

---

## Bulk Operations

### Operating on Multiple Issues

```bash
# Close all bug issues
gh issue list --label bug --json number -q '.[].number' | \
  xargs -I {} gh issue close {}

# Add label to all open issues
gh issue list --state open --json number -q '.[].number' | \
  xargs -I {} gh issue edit {} --add-label "needs-triage"

# Assign milestone to multiple issues
gh issue list --label "v2.0" --json number -q '.[].number' | \
  xargs -I {} gh issue edit {} --milestone "Release 2.0"
```
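
Bulk commands like these are destructive, so a confirmation guard keeps scripts from closing issues by accident; the `confirm` helper below is illustrative:

```shell
# Sketch: require explicit confirmation before destructive bulk operations.
# `confirm` is a hypothetical helper.
confirm() {
  local prompt=${1:-"Proceed? [y/N] "}
  local reply
  read -r -p "$prompt" reply
  [[ $reply == [yY] ]]      # succeed only on an explicit yes
}

# Usage:
# confirm "Close all 'stale' issues? [y/N] " || exit 1
# gh issue list --label stale --json number -q '.[].number' | \
#   xargs -I {} gh issue close {}
```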

---

## Output Formatting

### JSON Output

```bash
# Get issue data as JSON
gh issue view 456 --json number,title,body,state,labels,assignees,milestone

# List issues with custom fields
gh issue list --json number,title,state,createdAt,updatedAt

# Process with jq
gh issue list --json number,title,labels | \
  jq '.[] | select(.labels | any(.name == "bug"))'
```

### Template Output

```bash
# Custom format with Go templates (--template requires --json)
gh issue list --json number,title,state \
  --template '{{range .}}#{{.number}}: {{.title}} [{{.state}}]{{"\n"}}{{end}}'
```

---

## Search Operations

### Using GitHub Search Syntax

```bash
# Search issues with text
gh issue list --search "error in logs"

# Search issues by author
gh issue list --search "author:username"

# Search issues by label
gh issue list --search "label:bug"

# Complex search queries
gh issue list --search "is:open label:bug created:>2024-01-01"
```

---

## Best Practices

### Creating Effective Issues

1. **Use descriptive titles** - Be specific about the problem
2. **Provide context** - Include steps to reproduce
3. **Add labels** - Help with categorization and filtering
4. **Assign appropriately** - Tag people who can help
5. **Link related items** - Connect to PRs, other issues

### Issue Management

1. **Triage regularly** - Review and label new issues
2. **Update status** - Keep issues current with comments
3. **Close resolved issues** - Link to the fixing PR
4. **Use milestones** - Group related work
5. **Pin important issues** - Highlight key items

### Labels Strategy

Common label categories:
- **Type**: bug, feature, enhancement, documentation
- **Priority**: priority-high, priority-medium, priority-low
- **Status**: wip, needs-review, blocked
- **Area**: frontend, backend, database, infrastructure
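
A label set like this can be bootstrapped with `gh label create`. A dry-run sketch; the names, colors, and descriptions are illustrative, and dropping the `echo` would actually create them:

```shell
# Sketch: bootstrap a consistent label set (dry run - prints the commands).
# Label names, colors, and descriptions here are illustrative.
labels=(
  "bug:d73a4a:Something is broken"
  "priority-high:b60205:Needs attention now"
  "needs-triage:ededed:Awaiting initial review"
)
for entry in "${labels[@]}"; do
  IFS=: read -r name color desc <<< "$entry"   # split name:color:description
  echo gh label create "$name" --color "$color" --description "$desc" --force
done
```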

### Automation Tips

1. **Issue templates** - Create templates for bugs, features
2. **Auto-labeling** - Use GitHub Actions to auto-label
3. **Stale bot** - Auto-close inactive issues
4. **Project boards** - Track issue progress
5. **Webhooks** - Integrate with external tools
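
The stale-bot idea can also be approximated in a one-off script. A sketch; the `is_stale` helper and 30-day cutoff are illustrative, and GNU `date -d` is assumed:

```shell
# Sketch: flag issues not updated in 30+ days as stale.
# `is_stale` is a hypothetical helper; requires GNU date (-d).
is_stale() {
  local updated=$1
  local now ts cutoff=$((30 * 24 * 3600))
  now=$(date -u +%s)
  ts=$(date -u -d "$updated" +%s)   # parse the ISO-8601 timestamp
  (( now - ts > cutoff ))
}

# Usage:
# gh issue list --state open --json number,updatedAt \
#   -q '.[] | "\(.number) \(.updatedAt)"' | while read -r num updated; do
#   is_stale "$updated" && gh issue edit "$num" --add-label stale
# done
```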

---

`github-ops/references/pr_operations.md` (new file, 250 lines)

# Pull Request Operations Reference

Comprehensive examples for GitHub pull request operations using gh CLI.

## Creating Pull Requests

### Basic PR Creation

```bash
# Create PR with NOJIRA prefix (bypasses JIRA enforcement checks)
gh pr create --title "NOJIRA: Your PR title" --body "PR description"

# Create PR with JIRA ticket reference
gh pr create --title "GR-1234: Your PR title" --body "PR description"

# Create PR targeting specific branch
gh pr create --title "NOJIRA: Feature" --body "Description" --base main --head feature-branch

# Create PR with body from file
gh pr create --title "NOJIRA: Feature" --body-file pr-description.md
```

### PR Title Convention

- **With JIRA ticket**: `GR-1234: Descriptive title`
- **Without JIRA ticket**: `NOJIRA: Descriptive title` (bypasses enforcement check)

---

## Viewing Pull Requests

### Listing PRs

```bash
# List all PRs
gh pr list

# List PRs with custom filters
gh pr list --state open --limit 50
gh pr list --author username
gh pr list --label bug

# List PRs as JSON for parsing
gh pr list --json number,title,state,author
```

### Viewing Specific PRs

```bash
# View specific PR details
gh pr view 123

# View PR in browser
gh pr view 123 --web

# View PR diff
gh pr diff 123

# View PR checks/status
gh pr checks 123

# View PR with comments
gh pr view 123 --comments

# Get PR info as JSON for parsing
gh pr view 123 --json number,title,state,author,reviews
```

---

## Managing Pull Requests

### Editing PRs

```bash
# Edit PR title/body
gh pr edit 123 --title "New title" --body "New description"

# Add reviewers
gh pr edit 123 --add-reviewer username1,username2

# Add labels
gh pr edit 123 --add-label "bug,priority-high"

# Remove labels
gh pr edit 123 --remove-label "wip"
```

### Merging PRs

```bash
# Merge PR (various strategies)
gh pr merge 123 --merge    # Regular merge commit
gh pr merge 123 --squash   # Squash and merge
gh pr merge 123 --rebase   # Rebase and merge

# Auto-merge after checks pass
gh pr merge 123 --auto --squash
```

### PR Lifecycle Management

```bash
# Close PR without merging
gh pr close 123

# Reopen closed PR
gh pr reopen 123

# Checkout PR locally for testing
gh pr checkout 123
```

---

## PR Comments and Reviews

### Adding Comments

```bash
# Add comment to PR
gh pr comment 123 --body "Your comment here"

# Add comment from file
gh pr comment 123 --body-file comment.txt
```

### Reviewing PRs

```bash
# Add review comment
gh pr review 123 --comment --body "Review comments"

# Approve PR
gh pr review 123 --approve

# Approve with comment
gh pr review 123 --approve --body "LGTM! Great work."

# Request changes
gh pr review 123 --request-changes --body "Please fix X"
```

---

## Advanced PR Operations

### Checking PR Status

```bash
# Check CI/CD status
gh pr checks 123

# Watch PR checks in real-time
gh pr checks 123 --watch

# Get checks as JSON
gh pr checks 123 --json name,state,link
```

### PR Metadata Operations

```bash
# Add assignees
gh pr edit 123 --add-assignee username

# Add to project
gh pr edit 123 --add-project "Project Name"

# Set milestone
gh pr edit 123 --milestone "v2.0"

# Convert back to draft
gh pr ready 123 --undo

# Mark as ready for review
gh pr ready 123
```

---

## Output Formatting

### JSON Output for Scripting

```bash
# Get PR data as JSON
gh pr view 123 --json number,title,state,author,reviews,comments

# List PRs with specific fields
gh pr list --json number,title,author,updatedAt

# Process with jq
gh pr list --json number,title | jq '.[] | select(.title | contains("bug"))'
```

### Template Output

```bash
# Custom format with Go templates (--template requires --json)
gh pr list --json number,title,author \
  --template '{{range .}}#{{.number}}: {{.title}} (@{{.author.login}}){{"\n"}}{{end}}'
```

---

## Bulk Operations

### Operating on Multiple PRs

```bash
# Close all PRs with specific label
gh pr list --label "wip" --json number -q '.[].number' | \
  xargs -I {} gh pr close {}

# Add label to all open PRs
gh pr list --state open --json number -q '.[].number' | \
  xargs -I {} gh pr edit {} --add-label "needs-review"

# Approve all PRs from specific author
gh pr list --author username --json number -q '.[].number' | \
  xargs -I {} gh pr review {} --approve
```

---

## Best Practices

### Creating Effective PRs

1. **Use descriptive titles** - Include ticket reference and clear description
2. **Write meaningful descriptions** - Explain what, why, and how
3. **Keep PRs focused** - One feature/fix per PR
4. **Request specific reviewers** - Tag people with relevant expertise
5. **Link related issues** - Use "Closes #123" in the description

### Review Workflow

1. **Review promptly** - Don't let PRs sit for days
2. **Be constructive** - Focus on code quality, not personal style
3. **Test locally** - Use `gh pr checkout 123` to test changes
4. **Approve clearly** - Use explicit approval, not just comments
5. **Follow up** - Check that your feedback was addressed

### Automation Tips

1. **Use templates** - Create PR description templates
2. **Auto-assign** - Set up CODEOWNERS for automatic reviewers
3. **Branch protection** - Require reviews before merging
4. **CI/CD integration** - Ensure checks pass before merge
5. **Auto-merge** - Use the `--auto` flag for trusted changes

---

`github-ops/references/workflow_operations.md` (new file, 391 lines)

# Workflow Operations Reference

Comprehensive guide for GitHub Actions workflow management using gh CLI.

## Listing Workflows

### View Available Workflows

```bash
# List all workflows in repository
gh workflow list

# Include disabled workflows
gh workflow list --all

# List workflows as JSON
gh workflow list --json name,id,state,path
```

---

## Viewing Workflow Details

### Inspect Workflow Configuration

```bash
# View workflow details
gh workflow view workflow-name

# View workflow by ID
gh workflow view 12345

# View workflow YAML
gh workflow view workflow-name --yaml

# View workflow in browser
gh workflow view workflow-name --web
```

---

## Enabling and Disabling Workflows

### Workflow State Management

```bash
# Enable workflow
gh workflow enable workflow-name

# Enable workflow by ID
gh workflow enable 12345

# Disable workflow
gh workflow disable workflow-name

# Disable workflow by ID
gh workflow disable 12345
```

---

## Running Workflows

### Manual Workflow Triggers

```bash
# Run workflow manually
gh workflow run workflow-name

# Run workflow on specific branch
gh workflow run workflow-name --ref feature-branch

# Run workflow with inputs
gh workflow run workflow-name -f input1=value1 -f input2=value2

# Pass a JSON string as an input value
gh workflow run workflow-name \
  -f config='{"env":"production","debug":false}'
```

---

## Viewing Workflow Runs

### List Workflow Runs

```bash
# List all workflow runs
gh run list

# List runs for specific workflow
gh run list --workflow=workflow-name

# List runs with filters
gh run list --status success
gh run list --status failure
gh run list --branch main

# List recent runs
gh run list --limit 20

# List runs as JSON
gh run list --json databaseId,status,conclusion,headBranch,event
```

---

## Viewing Specific Run Details

### Inspect Run Information

```bash
# View specific run details
gh run view run-id

# View run in browser
gh run view run-id --web

# View run logs
gh run view run-id --log

# View failed run logs only
gh run view run-id --log-failed

# Get run as JSON
gh run view run-id --json status,conclusion,jobs,createdAt
```

---

## Monitoring Runs

### Real-Time Monitoring

```bash
# Watch workflow run in real-time
gh run watch run-id

# Watch and exit with the run's status code
gh run watch run-id --exit-status

# Set the polling interval (seconds)
gh run watch run-id --interval 10
```

---

## Downloading Artifacts and Logs

### Retrieve Run Data

```bash
# Download all artifacts from a run
gh run download run-id

# Download specific artifact
gh run download run-id --name artifact-name

# Download to specific directory
gh run download run-id --dir ./downloads

# List available artifacts (via API)
gh api repos/{owner}/{repo}/actions/runs/{run_id}/artifacts
```

---

## Canceling and Rerunning Workflows

### Run Control Operations

```bash
# Cancel workflow run
gh run cancel run-id

# Rerun workflow
gh run rerun run-id

# Rerun only failed jobs
gh run rerun run-id --failed

# Rerun with debug logging
gh run rerun run-id --debug
```

---

## Workflow Jobs

### Viewing Job Details

```bash
# List jobs for a run
gh api repos/{owner}/{repo}/actions/runs/{run_id}/jobs

# View specific job logs
gh run view run-id --log --job job-id

# Download job logs
gh api repos/{owner}/{repo}/actions/jobs/{job_id}/logs > job.log
```

---

## Advanced Workflow Operations

### Workflow Timing Analysis

```bash
# Get run timing
gh run view run-id --json createdAt,startedAt,updatedAt,conclusion

# List runs that took longer than 10 minutes
gh run list --workflow=ci --json databaseId,createdAt,updatedAt | \
  jq '.[] | select((.updatedAt | fromdate) - (.createdAt | fromdate) > 600)'
```

### Workflow Success Rate

```bash
# Count runs by conclusion for a workflow
gh run list --workflow=ci --limit 100 --json conclusion | \
  jq '[.[] | .conclusion] | group_by(.) | map({conclusion: .[0], count: length})'
```
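
The grouped counts above can be reduced to a single percentage with jq alone. In this sketch the sample array stands in for real `gh run list --json conclusion` output:

```shell
# Sketch: compute a success percentage from run conclusions.
# `runs` mimics `gh run list --json conclusion` output.
runs='[{"conclusion":"success"},{"conclusion":"failure"},{"conclusion":"success"},{"conclusion":"success"}]'
jq -r 'length as $n
       | [.[] | select(.conclusion == "success")] | length as $ok
       | "\($ok * 100 / $n | floor)% success"' <<< "$runs"

# Live usage:
# gh run list --workflow=ci --limit 100 --json conclusion | \
#   jq -r 'length as $n | [.[] | select(.conclusion == "success")] | length as $ok | "\($ok * 100 / $n | floor)% success"'
```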

---

## Bulk Operations

### Managing Multiple Runs

```bash
# Cancel all running workflows
gh run list --status in_progress --json databaseId -q '.[].databaseId' | \
  xargs -I {} gh run cancel {}

# Rerun all failed runs from today
gh run list --status failure --created today --json databaseId -q '.[].databaseId' | \
  xargs -I {} gh run rerun {}

# Download artifacts from multiple runs
gh run list --workflow=build --limit 5 --json databaseId -q '.[].databaseId' | \
  xargs -I {} gh run download {}
```

---

## Workflow Secrets and Variables

### Managing Secrets

```bash
# List repository secrets (via API)
gh api repos/{owner}/{repo}/actions/secrets

# Create/update secret
gh secret set SECRET_NAME --body "secret-value"

# Create secret from file
gh secret set SECRET_NAME < secret.txt

# Delete secret
gh secret delete SECRET_NAME

# List secrets
gh secret list
```

### Managing Variables

```bash
# List repository variables
gh variable list

# Set variable
gh variable set VAR_NAME --body "value"

# Delete variable
gh variable delete VAR_NAME
```

---

## Workflow Dispatch Events

### Triggering with workflow_dispatch

Example workflow file configuration:

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Deployment environment'
        required: true
        default: 'staging'
        type: choice
        options:
          - staging
          - production
      debug:
        description: 'Enable debug mode'
        required: false
        type: boolean
```

Trigger with inputs:

```bash
gh workflow run deploy.yml \
  -f environment=production \
  -f debug=true
```

---

## Monitoring and Debugging

### Common Debugging Techniques

```bash
# View recent failures
gh run list --status failure --limit 10

# Check specific run logs
gh run view run-id --log-failed

# Download artifacts for analysis
gh run download run-id

# Rerun with debug logging
gh run rerun run-id --debug

# Check workflow syntax
gh workflow view workflow-name --yaml
```

### Workflow Performance Monitoring

```bash
# Get average run duration (seconds)
gh run list --workflow=ci --limit 50 --json createdAt,updatedAt | \
  jq '[.[] | ((.updatedAt | fromdate) - (.createdAt | fromdate))] | add / length'

# Find the five longest-running jobs in a run
gh api repos/{owner}/{repo}/actions/runs/{run_id}/jobs | \
  jq '.jobs | sort_by((.completed_at | fromdate) - (.started_at | fromdate)) | reverse | .[0:5]'
```

---

## Best Practices

### Workflow Organization

1. **Use descriptive names** - Make workflow purpose clear
2. **Modular workflows** - Break complex workflows into reusable actions
3. **Cache dependencies** - Speed up builds with caching
4. **Matrix strategies** - Test across multiple environments
5. **Workflow dependencies** - Use `needs` to control execution order

### Workflow Triggers

1. **Selective triggers** - Use path filters to run only when needed
2. **Schedule wisely** - Avoid resource waste with cron triggers
3. **Manual triggers** - Provide workflow_dispatch for flexibility
4. **PR workflows** - Separate validation from deployment
5. **Branch protection** - Require status checks before merge

### Secrets Management

1. **Use secrets** - Never hardcode credentials
2. **Scope appropriately** - Use environment-specific secrets
3. **Rotate regularly** - Update secrets periodically
4. **Audit access** - Review who can access secrets
5. **Use OIDC** - Prefer token-less authentication when possible

### Performance Optimization

1. **Conditional execution** - Skip unnecessary jobs
2. **Parallel jobs** - Run independent jobs concurrently
3. **Artifact management** - Clean up old artifacts
4. **Self-hosted runners** - Use for resource-intensive workloads
5. **Job timeouts** - Set reasonable timeout limits

### Monitoring and Alerts

1. **Enable notifications** - Get alerted on failures
2. **Status badges** - Display workflow status in README
3. **Metrics tracking** - Monitor success rates and duration
4. **Log retention** - Configure appropriate retention policies
5. **Dependency updates** - Automate with Dependabot

---

`llm-icon-finder/SKILL.md` (new file, 90 lines)

---
name: llm-icon-finder
description: Finding and accessing AI/LLM model brand icons from the lobe-icons library. Use when users need icon URLs, want to download brand logos for AI models/providers/applications (Claude, GPT, Gemini, etc.), or request icons in SVG/PNG/WEBP formats.
---

# Finding AI/LLM Brand Icons

Access AI/LLM model brand icons and logos from the [lobe-icons](https://github.com/lobehub/lobe-icons) library. The library contains 100+ icons for models (Claude, GPT, Gemini), providers (OpenAI, Anthropic, Google), and applications (ComfyUI, LobeChat).

## Icon Formats and Variants

**Available formats**: SVG (scalable), PNG (raster), WEBP (compressed)
**Theme variants**: light, dark, and color (some icons)

## CDN URL Patterns

Construct URLs using these patterns:

```
# SVG
https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-svg/{light|dark}/{icon-name}.svg

# PNG
https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-png/{light|dark}/{icon-name}.png

# WEBP
https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-webp/{light|dark}/{icon-name}.webp

# Color variant (append -color to icon-name)
https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-png/dark/{icon-name}-color.png
```

**Icon naming convention**: Lowercase, hyphenated (e.g., `claude`, `chatglm`, `openai`, `huggingface`)
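
The patterns above can be wrapped in a tiny helper; the `icon_url` function is illustrative, not part of the skill's scripts:

```shell
# Sketch: build a lobe-icons CDN URL from name, format, and theme.
# `icon_url` is a hypothetical helper; defaults follow the skill (png, dark).
icon_url() {
  local name=$1 format=${2:-png} theme=${3:-dark}
  echo "https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-${format}/${theme}/${name}.${format}"
}

icon_url claude             # default: dark PNG
icon_url openai svg light   # light-theme SVG
```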

## Workflow

When users request icons:

1. Identify the icon name (usually the lowercase company/model name, hyphenated if multi-word)
2. Determine format (default: PNG) and theme (default: dark)
3. Construct the CDN URL using the patterns above
4. Provide the URL to the user
5. If download is requested, use the Bash tool with curl
6. Include the web viewer link: `https://lobehub.com/icons/{icon-name}`

## Finding Icon Names

**Common icons**: See `references/icons-list.md` for a comprehensive list organized by category (Models, Providers, Applications, Chinese AI)

**Uncertain names**:
- Browse https://lobehub.com/icons
- Try variations (e.g., company name vs product name: `alibaba` vs `alibabacloud`)
- Check for `-color` variants if the standard URL fails

**Chinese AI models**: Support Chinese queries (e.g., "智谱" (Zhipu) → `chatglm`, "月之暗面" (Moonshot) → `moonshot`)

## Examples

**Single icon request**:
```
User: "Claude icon"
→ Provide: https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-png/dark/claude.png
→ Also mention the color variant and the web viewer link
```

**Multiple icons download**:
```bash
curl -o openai.svg "https://raw.githubusercontent.com/lobehub/lobe-icons/.../dark/openai.svg"
curl -o anthropic.svg "https://raw.githubusercontent.com/lobehub/lobe-icons/.../dark/anthropic.svg"
```

**Chinese query**:
```
User: "找一下智谱的图标" ("Find the Zhipu icon")
→ Identify: 智谱 (Zhipu) = ChatGLM → icon name: chatglm
→ Provide URLs and mention related icons (zhipu, codegeex)
```

## Troubleshooting

If a URL returns 404:
1. Try the `-color` suffix variant
2. Check alternate naming (e.g., `chatgpt` vs `gpt`, `google` vs `gemini`)
3. Direct the user to https://lobehub.com/icons to browse
4. Search the repository: https://github.com/lobehub/lobe-icons

## Reference Files

- `references/icons-list.md` - Comprehensive list of 100+ available icons by category
- `references/developer-info.md` - npm installation and React usage examples

---

`llm-icon-finder/references/developer-info.md` (new file, 47 lines)

# Developer Information

This file contains additional information for developers who want to use lobe-icons in their projects.

## npm Installation

Icons can be installed as npm packages:

```bash
# React components
npm install @lobehub/icons

# Static SVG files
npm install @lobehub/icons-static-svg

# Static PNG files
npm install @lobehub/icons-static-png

# Static WEBP files
npm install @lobehub/icons-static-webp

# React Native
npm install @lobehub/icons-rn
```

## Usage in React

```tsx
import { Claude, OpenAI, Gemini } from '@lobehub/icons';

function MyComponent() {
  return (
    <div>
      <Claude size={48} />
      <OpenAI size={48} />
      <Gemini size={48} />
    </div>
  );
}
```

## Additional Resources

- **Icon Gallery**: https://lobehub.com/icons
- **GitHub Repository**: https://github.com/lobehub/lobe-icons
- **Documentation**: https://icons.lobehub.com
- **NPM Packages**: https://www.npmjs.com/search?q=%40lobehub%2Ficons

---

`llm-icon-finder/references/icons-list.md` (new file, 87 lines)
# Common AI/LLM Icons Reference

This file contains a comprehensive list of popular AI/LLM icons available in the lobe-icons library.

## Models

| Icon Name | Description |
|-----------|-------------|
| `claude` | Anthropic Claude |
| `chatgpt` | ChatGPT |
| `gpt` | GPT models |
| `gemini` | Google Gemini |
| `llama` | Meta LLaMA |
| `mistral` | Mistral AI |
| `chatglm` | Zhipu ChatGLM |
| `baichuan` | Baichuan |
| `deepseek` | DeepSeek |
| `qwen` | Qwen (Tongyi Qianwen) |
| `yi` | 01.AI Yi |
| `aya` | Cohere Aya |

## Providers

| Icon Name | Description |
|-----------|-------------|
| `openai` | OpenAI |
| `anthropic` | Anthropic |
| `google` | Google |
| `cohere` | Cohere |
| `huggingface` | Hugging Face |
| `openrouter` | OpenRouter |
| `perplexity` | Perplexity |
| `stability` | Stability AI |
| `alibaba` | Alibaba |
| `alibabacloud` | Alibaba Cloud |
| `tencent` | Tencent |
| `baidu` | Baidu |
| `zhipu` | Zhipu AI |
| `moonshot` | Moonshot AI (Kimi) |
| `minimax` | MiniMax |
| `zeroone` | 01.AI |
| `ai21` | AI21 Labs (Jamba) |

## Applications

| Icon Name | Description |
|-----------|-------------|
| `lobechat` | LobeChat |
| `comfyui` | ComfyUI |
| `automatic` | Automatic1111 (SD WebUI) |
| `midjourney` | Midjourney |
| `runway` | Runway |
| `capcut` | CapCut |
| `cline` | Cline |
| `colab` | Google Colab |
| `copilotkit` | CopilotKit |
| `aistudio` | Google AI Studio |
| `clipdrop` | Clipdrop |

## Chinese AI Providers & Models

| Icon Name | Chinese Name | Description |
|-----------|--------------|-------------|
| `chatglm` | 智谱清言 | ChatGLM |
| `zhipu` | 智谱 | Zhipu AI |
| `baichuan` | 百川 | Baichuan |
| `deepseek` | 深度求索 | DeepSeek |
| `moonshot` | 月之暗面 | Moonshot (Kimi) |
| `minimax` | 稀宇科技 | MiniMax |
| `zeroone` | 零一万物 | 01.AI |
| `qwen` | 通义千问 | Qwen |
| `yi` | 零一万物 | Yi |
| `alibaba` | 阿里巴巴 | Alibaba |
| `alibabacloud` | 阿里云 | Alibaba Cloud |
| `tencent` | 腾讯 | Tencent |
| `baidu` | 百度 | Baidu |
| `ai360` | 360智脑 | 360 AI Brain |
| `aimass` | 紫东太初 | AiMass |
| `aihubmix` | 推理时代 | AiHubMix |
| `codegeex` | — | CodeGeeX |

## Tips for Finding Icons

1. **Icon naming**: Usually lowercase, hyphenated for multi-word names (e.g., `anthropic`, `chatglm`)
2. **Company vs. product**: Some brands have both (e.g., `alibaba` and `alibabacloud`, `zhipu` and `chatglm`)
3. **Color variants**: Many icons have a `-color` suffix for colored versions
4. **Browse all**: Visit https://lobehub.com/icons to see the complete catalog
markdown-tools/SKILL.md (new file, 146 lines)
@@ -0,0 +1,146 @@
---
name: markdown-tools
description: Converts documents to markdown (PDFs, Word docs, PowerPoint, Confluence exports) with Windows/WSL path handling. Activates when converting .doc/.docx/PDF/PPTX files to markdown, processing Confluence exports, handling Windows/WSL path conversions, or working with the markitdown utility.
---

# Markdown Tools

## Overview

This skill converts documents in various formats to markdown and handles path conversions between Windows and WSL environments.

## Core Capabilities

### 1. Markdown Conversion
Convert documents to markdown format with automatic Windows/WSL path handling.

### 2. Confluence Export Processing
Handle Confluence .doc exports with special characters for knowledge base integration.

## Quick Start

### Convert Any Document to Markdown

```bash
# Basic conversion
markitdown "path/to/document.pdf" > output.md

# WSL path example
markitdown "/mnt/c/Users/username/Documents/file.docx" > output.md
```

See `references/conversion-examples.md` for detailed examples of various conversion scenarios.

### Convert Confluence Export

```bash
# Direct conversion for simple exports
markitdown "confluence-export.doc" > output.md

# For exports with special characters, see references/
```

## Path Conversion

### Windows to WSL Path Format

Windows paths must be converted to WSL format before use in bash commands.

**Conversion rules:**
- Replace `C:\` with `/mnt/c/`
- Replace `\` with `/`
- Preserve spaces and special characters
- Quote paths that contain spaces

**Example conversions:**
```bash
# Windows path
C:\Users\username\Documents\file.doc

# WSL path
/mnt/c/Users/username/Documents/file.doc
```

**Helper script:** Use `scripts/convert_path.py` to automate the conversion:

```bash
python scripts/convert_path.py "C:\Users\username\Downloads\document.doc"
```

See `references/conversion-examples.md` for detailed path conversion examples.
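The conversion rules above can be sketched as a small bash function. This is a minimal sketch assuming a simple `C:\`-style drive prefix; `win_to_wsl` is a hypothetical helper, not part of this skill:

```bash
# Minimal sketch of the conversion rules above; win_to_wsl is a
# hypothetical helper, not part of the bundled scripts.
win_to_wsl() {
  local p="$1"
  local drive="${p:0:1}"     # drive letter, e.g. C
  local rest="${p:2}"        # drop the "C:" prefix
  rest="${rest//\\//}"       # backslashes -> forward slashes
  printf '/mnt/%s%s\n' "${drive,,}" "$rest"
}

win_to_wsl 'C:\Users\username\Documents\file.doc'
# -> /mnt/c/Users/username/Documents/file.doc
```

On WSL itself, the built-in `wslpath` utility performs the same conversion natively.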
## Document Conversion Workflows

### Workflow 1: Simple Markdown Conversion

For straightforward document conversions (PDF, .docx without special characters):

1. Convert the Windows path to WSL format (if needed)
2. Run markitdown
3. Redirect the output to a .md file

See `references/conversion-examples.md` for detailed examples.

### Workflow 2: Confluence Export with Special Characters

For Confluence .doc exports that contain special characters or complex formatting:

1. Save the .doc file to an accessible location
2. Use the appropriate conversion method (see references)
3. Verify the output formatting

See `references/conversion-examples.md` for step-by-step command examples.

## Error Handling

### Common Issues and Solutions

**markitdown not found:**
```bash
# Install markitdown via pip
pip install markitdown

# Or via uv tools
uv tool install markitdown
```

**Path not found:**
```bash
# Verify the path exists
ls -la "/mnt/c/Users/username/Documents/file.doc"

# Use the convert_path.py helper
python scripts/convert_path.py "C:\Users\username\Documents\file.doc"
```

**Encoding issues:**
- Ensure files are UTF-8 encoded
- Check for special characters in filenames
- Quote paths that contain spaces

## Resources

### references/conversion-examples.md
Comprehensive examples for all conversion scenarios, including:
- Simple document conversions (PDF, Word, PowerPoint)
- Confluence export handling
- Path conversion examples for Windows/WSL
- Batch conversion operations
- Error recovery and troubleshooting examples

Load this reference when users need specific command examples or encounter conversion issues.

### scripts/convert_path.py
Python script to automate Windows-to-WSL path conversion. Handles:
- Drive letter conversion (C:\ → /mnt/c/)
- Backslash-to-forward-slash replacement
- Special characters and spaces

## Best Practices

1. **Convert Windows paths to WSL format** before bash operations
2. **Verify paths exist** before operations, using `ls` or `test`
3. **Check output quality** after conversion
4. **Use markitdown directly** for simple conversions
5. **Test incrementally**: verify each conversion step before proceeding
6. **Preserve directory structure** when doing batch conversions
markdown-tools/references/conversion-examples.md (new file, 346 lines)
@@ -0,0 +1,346 @@
# Document Conversion Examples

Comprehensive examples for converting various document formats to markdown.

## Basic Document Conversions

### PDF to Markdown

```bash
# Simple PDF conversion
markitdown "document.pdf" > output.md

# WSL path example
markitdown "/mnt/c/Users/username/Documents/report.pdf" > report.md

# With an explicit output name
markitdown "slides.pdf" > "slides.md"
```

### Word Documents to Markdown

```bash
# Modern Word document (.docx)
markitdown "document.docx" > output.md

# Legacy Word document (.doc)
markitdown "legacy-doc.doc" > output.md

# Preserve directory structure
markitdown "/path/to/docs/file.docx" > "/path/to/output/file.md"
```

### PowerPoint to Markdown

```bash
# Convert a presentation
markitdown "presentation.pptx" > slides.md

# WSL path
markitdown "/mnt/c/Users/username/Desktop/slides.pptx" > slides.md
```

---
## Windows/WSL Path Conversion

### Basic Path Conversion Rules

```bash
# Windows path
C:\Users\username\Documents\file.doc

# WSL equivalent
/mnt/c/Users/username/Documents/file.doc
```

### Conversion Examples

```bash
# Backslashes become forward slashes
C:\folder\file.txt
→ /mnt/c/folder/file.txt

# Path with spaces (must be quoted)
C:\Users\John Doe\Documents\report.pdf
→ "/mnt/c/Users/John Doe/Documents/report.pdf"

# OneDrive path
C:\Users\username\OneDrive\Documents\file.doc
→ "/mnt/c/Users/username/OneDrive/Documents/file.doc"

# Different drive letters
D:\Projects\document.docx
→ /mnt/d/Projects/document.docx
```

### Using the convert_path.py Helper

```bash
# Automatic conversion
python scripts/convert_path.py "C:\Users\username\Downloads\document.doc"
# Output: /mnt/c/Users/username/Downloads/document.doc

# Use the result in a conversion command
wsl_path=$(python scripts/convert_path.py "C:\Users\username\file.docx")
markitdown "$wsl_path" > output.md
```

---

## Batch Conversions

### Convert Multiple Files

```bash
# Convert all PDFs in a directory
for pdf in /path/to/pdfs/*.pdf; do
    filename=$(basename "$pdf" .pdf)
    markitdown "$pdf" > "/path/to/output/${filename}.md"
done

# Convert all Word documents
for doc in /path/to/docs/*.docx; do
    filename=$(basename "$doc" .docx)
    markitdown "$doc" > "/path/to/output/${filename}.md"
done
```

### Batch Conversion with Path Conversion

```powershell
# From Windows (PowerShell), invoking markitdown inside WSL
Get-ChildItem "C:\Documents\*.pdf" | ForEach-Object {
    $wslPath = "/mnt/c/Documents/$($_.Name)"
    $outFile = "/mnt/c/Output/$($_.BaseName).md"
    wsl markitdown $wslPath > $outFile
}
```

---
## Confluence Export Handling

### Simple Confluence Export

```bash
# Direct conversion for exports without special characters
markitdown "confluence-export.doc" > output.md
```

### Export with Special Characters

For Confluence exports containing special characters:

1. Save the .doc file to an accessible location
2. Try direct conversion first:
   ```bash
   markitdown "confluence-export.doc" > output.md
   ```
3. If special characters cause issues:
   - Open the file in Word and save it as .docx
   - Or use LibreOffice to convert: `libreoffice --headless --convert-to docx export.doc`
   - Then convert the resulting .docx file

### Handling Encoding Issues

```bash
# Check the file encoding
file -i "document.doc"

# Convert if needed (using iconv)
iconv -f ISO-8859-1 -t UTF-8 input.md > output.md
```

---

## Advanced Conversion Scenarios

### Preserving Directory Structure

```bash
# Mirror the directory structure
src_dir="/mnt/c/Users/username/Documents"
out_dir="/path/to/output"

find "$src_dir" -name "*.docx" | while read -r file; do
    # Get the relative path
    rel_path="${file#$src_dir/}"
    out_file="$out_dir/${rel_path%.docx}.md"

    # Create the output directory
    mkdir -p "$(dirname "$out_file")"

    # Convert
    markitdown "$file" > "$out_file"
done
```

### Conversion with Metadata

```bash
# Add frontmatter to the converted file
{
    echo "---"
    echo "title: $(basename "$file" .pdf)"
    echo "converted: $(date -I)"
    echo "source: $file"
    echo "---"
    echo ""
    markitdown "$file"
} > output.md
```

---
## Error Recovery

### Handling Failed Conversions

```bash
# Check whether markitdown succeeded
if markitdown "document.pdf" > output.md 2> error.log; then
    echo "Conversion successful"
else
    echo "Conversion failed, check error.log"
fi
```

### Retry Logic

```bash
# Retry failed conversions
for file in *.pdf; do
    output="${file%.pdf}.md"
    if ! [ -f "$output" ]; then
        echo "Converting $file..."
        markitdown "$file" > "$output" || echo "Failed: $file" >> failed.txt
    fi
done
```

---

## Quality Verification

### Check Conversion Quality

```bash
# Check the output length
wc -l output.md

# Check for common issues
grep "TODO\|ERROR\|MISSING" output.md

# Preview the first/last lines
head -n 20 output.md
tail -n 20 output.md
```

### Validate Output

```bash
# Check for empty files
if [ ! -s output.md ]; then
    echo "Warning: Output file is empty"
fi

# Verify markdown syntax with a linter, if available
markdownlint output.md
```

---

## Best Practices

### 1. Path Handling
- Always quote paths with spaces
- Verify paths exist before conversion
- Use absolute paths in scripts

### 2. Batch Processing
- Log conversions for an audit trail
- Handle errors gracefully
- Preserve the original files

### 3. Output Organization
- Mirror the source directory structure
- Use consistent naming conventions
- Separate by document type or date

### 4. Quality Assurance
- Spot-check random conversions
- Validate critical documents manually
- Keep conversion logs

### 5. Performance
- Use parallel processing for large batches
- Skip already-converted files
- Clean up temporary files
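As an illustration of the parallel-processing tip, large batches can be fanned out with `xargs -P`. This sketch is self-contained: `to_md` is a stub standing in for `markitdown` so the example runs anywhere; swap in the real command for actual conversions:

```bash
# Parallel batch conversion sketch using xargs -P.
# "to_md" is a stub standing in for markitdown.
tmp=$(mktemp -d)
touch "$tmp/a.pdf" "$tmp/b.pdf" "$tmp/c.pdf"

to_md() { printf '# %s\n' "$(basename "$1")" > "${1%.pdf}.md"; }
export -f to_md

# Run up to 4 conversions at a time
find "$tmp" -name '*.pdf' -print0 |
  xargs -0 -P 4 -I{} bash -c 'to_md "$1"' _ {}

ls "$tmp"
```

The `-print0`/`-0` pairing keeps filenames with spaces intact, and `-P 4` caps the number of concurrent jobs.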
---

## Common Patterns

### Pattern: Convert and Review

```bash
#!/bin/bash
file="$1"
output="${file%.*}.md"

# Convert
markitdown "$file" > "$output"

# Open in an editor for review
${EDITOR:-vim} "$output"
```

### Pattern: Safe Conversion

```bash
#!/bin/bash
file="$1"
backup="${file}.backup"
output="${file%.*}.md"

# Back up the original
cp "$file" "$backup"

# Convert with error handling
if markitdown "$file" > "$output" 2> conversion.log; then
    echo "Success: $output"
    rm "$backup"
else
    echo "Failed: check conversion.log"
    mv "$backup" "$file"
fi
```

### Pattern: Metadata Preservation

```bash
#!/bin/bash
# Extract and preserve document metadata

file="$1"
output="${file%.*}.md"

# Get file metadata (GNU stat first, BSD stat as the fallback)
created=$(stat -c %w "$file" 2>/dev/null || stat -f %SB "$file")
modified=$(stat -c %y "$file" 2>/dev/null || stat -f %Sm "$file")

# Convert with metadata
{
    echo "---"
    echo "original_file: $(basename "$file")"
    echo "created: $created"
    echo "modified: $modified"
    echo "converted: $(date -I)"
    echo "---"
    echo ""
    markitdown "$file"
} > "$output"
```
markdown-tools/scripts/convert_path.py (new file, 61 lines)
@@ -0,0 +1,61 @@
#!/usr/bin/env python3
"""
Convert Windows paths to WSL format.

Usage:
    python convert_path.py "C:\\Users\\username\\Downloads\\file.doc"

Output:
    /mnt/c/Users/username/Downloads/file.doc
"""

import re
import sys


def convert_windows_to_wsl(windows_path: str) -> str:
    """
    Convert a Windows path to WSL format.

    Args:
        windows_path: Windows path (e.g., "C:\\Users\\username\\file.doc")

    Returns:
        WSL path (e.g., "/mnt/c/Users/username/file.doc")
    """
    # Remove surrounding quotes if present
    path = windows_path.strip('"').strip("'")

    # Handle the drive letter (C:\ or C:/)
    drive_pattern = r'^([A-Za-z]):[\\\/]'
    match = re.match(drive_pattern, path)

    if not match:
        # Already a WSL path or a relative path
        return path

    drive_letter = match.group(1).lower()
    path_without_drive = path[3:]  # Remove "C:\"

    # Replace backslashes with forward slashes
    path_without_drive = path_without_drive.replace('\\', '/')

    # Construct the WSL path
    return f"/mnt/{drive_letter}/{path_without_drive}"


def main():
    if len(sys.argv) < 2:
        print("Usage: python convert_path.py <windows_path>")
        print('Example: python convert_path.py "C:\\Users\\username\\Downloads\\file.doc"')
        sys.exit(1)

    windows_path = sys.argv[1]
    print(convert_windows_to_wsl(windows_path))


if __name__ == "__main__":
    main()
mermaid-tools/SKILL.md (new file, 164 lines)
@@ -0,0 +1,164 @@
---
name: mermaid-tools
description: Extracts Mermaid diagrams from markdown files and generates high-quality PNG images using bundled scripts. Activates when working with Mermaid diagrams, converting diagrams to PNG, extracting diagrams from markdown, or processing markdown files with embedded Mermaid code.
---

# Mermaid Tools

## Overview

This skill extracts Mermaid diagrams from markdown files and generates high-quality PNG images. It bundles all necessary scripts (`extract-and-generate.sh`, `extract_diagrams.py`, and `puppeteer-config.json`) in the `scripts/` directory for portability and reliability.

## Core Workflow

### Standard Diagram Extraction and Generation

Extract Mermaid diagrams from a markdown file and generate PNG images using the bundled `extract-and-generate.sh` script:

```bash
cd ~/.claude/skills/mermaid-tools/scripts
./extract-and-generate.sh "<markdown_file>" "<output_directory>"
```

**Parameters:**
- `<markdown_file>`: Path to the markdown file containing Mermaid diagrams
- `<output_directory>`: (Optional) Directory for output files. Defaults to `<markdown_file_directory>/diagrams`

**Example:**
```bash
cd ~/.claude/skills/mermaid-tools/scripts
./extract-and-generate.sh "/path/to/document.md" "/path/to/output"
```

### What the Script Does

1. **Extracts** all Mermaid code blocks from the markdown file
2. **Numbers** them sequentially (01, 02, 03, etc.) in order of appearance
3. **Generates** a `.mmd` file for each diagram
4. **Creates** high-resolution PNG images with smart sizing
5. **Validates** all generated PNG files

### Output Files

For each diagram, the script generates:
- `01-diagram-name.mmd` - the extracted Mermaid code
- `01-diagram-name.png` - a high-resolution PNG image

The numbering ensures diagrams keep their order from the source document.
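The extraction step amounts to pulling the bodies out of fenced `mermaid` code blocks. A minimal sketch of the idea, assuming simple well-formed fences (the bundled `extract_diagrams.py` does the real work, including naming and numbering):

```bash
# Build a tiny sample file, then print the body of each mermaid fence.
printf '%s\n' 'Intro text.' '```mermaid' 'graph TD; A-->B' '```' > /tmp/sample.md

awk '/^```mermaid$/{f=1; next} /^```$/{f=0} f' /tmp/sample.md
# -> graph TD; A-->B
```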
## Advanced Usage

### Custom Dimensions and Scaling

Override the default dimensions using environment variables:

```bash
cd ~/.claude/skills/mermaid-tools/scripts
MERMAID_WIDTH=1600 MERMAID_HEIGHT=1200 ./extract-and-generate.sh "<markdown_file>" "<output_directory>"
```

**Available variables:**
- `MERMAID_WIDTH` (default: 1200) - base width in pixels
- `MERMAID_HEIGHT` (default: 800) - base height in pixels
- `MERMAID_SCALE` (default: 2) - scale factor for high-resolution output

### High-Resolution Output for Presentations

```bash
cd ~/.claude/skills/mermaid-tools/scripts
MERMAID_WIDTH=2400 MERMAID_HEIGHT=1800 MERMAID_SCALE=4 ./extract-and-generate.sh "<markdown_file>" "<output_directory>"
```

### Print-Quality Output

```bash
cd ~/.claude/skills/mermaid-tools/scripts
MERMAID_SCALE=5 ./extract-and-generate.sh "<markdown_file>" "<output_directory>"
```

## Smart Sizing Feature

The script automatically adjusts dimensions based on the diagram type, detected from the filename:

- **Timeline/Gantt**: 2400×400 (wide and short)
- **Architecture/System/Caching**: 2400×1600 (large and detailed)
- **Monitoring/Workflow/Sequence/API**: 2400×800 (wide for process flows)
- **Default**: 1200×800 (standard size)

Context-aware naming during extraction helps trigger the appropriate smart sizing.
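The mapping above can be pictured as a filename-pattern switch. This is a hypothetical sketch of the idea; the bundled script's actual matching logic may differ:

```bash
# Hypothetical sketch of smart sizing: map filename patterns to
# "<width> <height>" pairs. The bundled script's real logic may differ.
size_for() {
  case "$1" in
    *timeline*|*gantt*)                        echo "2400 400"  ;;
    *architecture*|*system*|*caching*)         echo "2400 1600" ;;
    *monitoring*|*workflow*|*sequence*|*api*)  echo "2400 800"  ;;
    *)                                         echo "1200 800"  ;;
  esac
}

size_for "03-deployment-timeline.mmd"   # -> 2400 400
size_for "05-login-screen.mmd"          # -> 1200 800
```

Descriptive diagram names (e.g., `deployment-timeline` rather than `diagram-3`) are what make this kind of matching possible.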
## Important Principles

### Use the Bundled Scripts

**CRITICAL**: Use the bundled `extract-and-generate.sh` script from this skill's `scripts/` directory. All necessary dependencies are bundled together.

### Change to the Script Directory

Run the script from its own directory so it can locate its dependencies (`extract_diagrams.py` and `puppeteer-config.json`):

```bash
cd ~/.claude/skills/mermaid-tools/scripts
./extract-and-generate.sh "<markdown_file>" "<output_directory>"
```

Running the script without changing to the scripts directory first may fail due to missing dependencies.

## Prerequisites Verification

Before running the script, verify the dependencies are installed:

1. **mermaid-cli**: `mmdc --version`
2. **Google Chrome**: `google-chrome-stable --version`
3. **Python 3**: `python3 --version`

If any are missing, consult `references/setup_and_troubleshooting.md` for installation instructions.

## Troubleshooting

For detailed troubleshooting guidance, refer to `references/setup_and_troubleshooting.md`, which covers:

- Browser launch failures
- Permission issues
- No diagrams found
- Python extraction failures
- Output quality issues
- Diagram-specific sizing problems

Quick fixes for common issues:

**Permission denied:**
```bash
chmod +x ~/.claude/skills/mermaid-tools/scripts/extract-and-generate.sh
```

**Low-quality output:**
```bash
MERMAID_SCALE=3 ./extract-and-generate.sh "<markdown_file>" "<output_directory>"
```

**Chrome/Puppeteer errors:**
Verify all WSL2 dependencies are installed (see the references for the full list).

## Bundled Resources

### scripts/

This skill bundles all necessary scripts for Mermaid diagram generation:

- **extract-and-generate.sh** - main script that orchestrates extraction and PNG generation
- **extract_diagrams.py** - Python script for extracting Mermaid code blocks from markdown
- **puppeteer-config.json** - Chrome/Puppeteer configuration for the WSL2 environment

All scripts must be run from the `scripts/` directory to properly locate dependencies.

### references/setup_and_troubleshooting.md

Comprehensive reference documentation, including:
- Complete prerequisite installation instructions
- Detailed environment variable reference
- Extensive troubleshooting guide
- WSL2-specific Chrome dependency setup
- Validation procedures

Load this reference when dealing with setup issues, installation problems, or advanced customization needs.
mermaid-tools/references/setup_and_troubleshooting.md (new file, 175 lines)
@@ -0,0 +1,175 @@
# Mermaid Diagram Generation - Setup and Troubleshooting

## Table of Contents
- [Prerequisites](#prerequisites)
- [Script Locations](#script-locations)
- [Features](#features)
- [Environment Variables](#environment-variables)
- [Troubleshooting](#troubleshooting)
- [Validation](#validation)

## Prerequisites

### 1. Node.js and mermaid-cli

Install mermaid-cli globally:
```bash
npm install -g @mermaid-js/mermaid-cli
```

Verify the installation:
```bash
mmdc --version
```

### 2. Google Chrome for WSL2

Install Chrome and its dependencies:
```bash
# Add the Chrome repository
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" | sudo tee /etc/apt/sources.list.d/google-chrome.list

# Update and install Chrome
sudo apt update
sudo apt install -y google-chrome-stable

# Install the dependencies required for WSL2
sudo apt install -y libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2 libxtst6 xauth xvfb fonts-liberation libxml2-utils
```

Verify the Chrome installation:
```bash
google-chrome-stable --version
```

### 3. Python 3

The extraction script requires Python 3 (usually pre-installed on Ubuntu):
```bash
python3 --version
```

## Script Locations

The Mermaid diagram tools are bundled with this skill in the `scripts/` directory:
- Main script: `~/.claude/skills/mermaid-tools/scripts/extract-and-generate.sh`
- Python extractor: `~/.claude/skills/mermaid-tools/scripts/extract_diagrams.py`
- Puppeteer config: `~/.claude/skills/mermaid-tools/scripts/puppeteer-config.json`

All scripts should be run from the `scripts/` directory to properly locate dependencies.

## Features

### Smart Sizing

The script automatically adjusts diagram dimensions based on filename patterns:

- **Timeline/Gantt charts**: 2400x400 (wide and short)
- **Architecture/System/Caching diagrams**: 2400x1600 (large and detailed)
- **Monitoring/Workflow/Sequence/API diagrams**: 2400x800 (wide for process flows)
- **Default**: 1200x800 (standard size)

### Sequential Numbering

Diagrams are automatically numbered sequentially (01, 02, 03, etc.) in the order they appear in the markdown file.

### High-Resolution Output

The default scale factor is 2x for high-quality output, and can be customized with environment variables.

## Environment Variables

Override the default dimensions and scaling:

| Variable | Default | Description |
|----------|---------|-------------|
| `MERMAID_WIDTH` | 1200 | Base width for generated PNGs |
| `MERMAID_HEIGHT` | 800 | Base height for generated PNGs |
| `MERMAID_SCALE` | 2 | Scale factor for high-resolution output |

### Examples

```bash
# Custom dimensions
MERMAID_WIDTH=1600 MERMAID_HEIGHT=1200 ./extract-and-generate.sh "file.md" "output_dir"

# High-resolution mode for presentations
MERMAID_WIDTH=2400 MERMAID_HEIGHT=1800 MERMAID_SCALE=4 ./extract-and-generate.sh "file.md" "output_dir"

# Ultra-high resolution for print materials
MERMAID_SCALE=5 ./extract-and-generate.sh "file.md" "output_dir"
```
## Troubleshooting

### Browser Launch Failures

**Symptom**: Errors about Chrome not launching, or Puppeteer failures

**Solution**:
1. Verify Chrome is installed: `google-chrome-stable --version`
2. Check that the Chrome path in the script matches: `/usr/bin/google-chrome-stable`
3. Ensure all dependencies are installed (see Prerequisites, section 2)
4. Verify puppeteer-config.json exists at the expected location

### Permission Issues

**Symptom**: "Permission denied" when running the script

**Solution**:
```bash
chmod +x ~/.claude/skills/mermaid-tools/scripts/extract-and-generate.sh
```

### No Diagrams Found

**Symptom**: The script reports "No .mmd files found" or "No diagrams extracted"

**Solution**:
1. Verify the markdown file contains Mermaid code blocks (` ```mermaid `)
2. Check that the markdown file path is correct
3. Ensure the Mermaid code blocks are properly formatted

### Python Extraction Failures

**Symptom**: Errors during the extraction phase

**Solution**:
1. Verify Python 3 is installed: `python3 --version`
2. Check the markdown file encoding (should be UTF-8)
3. Review the markdown file for malformed Mermaid code blocks

### Output Quality Issues

**Symptom**: Generated images are too small or low quality

**Solution**:
Use environment variables to increase the dimensions and scale:
```bash
MERMAID_WIDTH=2400 MERMAID_HEIGHT=1800 MERMAID_SCALE=3 ./extract-and-generate.sh "file.md" "output_dir"
```

### Diagram-Specific Sizing Issues

**Symptom**: Specific diagram types don't render well at the default sizes

**Solution**:
The script has smart sizing for common patterns, but you can override it for specific cases:
```bash
# For very wide sequence diagrams
MERMAID_WIDTH=3000 MERMAID_HEIGHT=1000 ./extract-and-generate.sh "file.md" "output_dir"

# For very tall flowcharts
MERMAID_WIDTH=1200 MERMAID_HEIGHT=2400 ./extract-and-generate.sh "file.md" "output_dir"
```

## Validation

The script automatically validates generated PNG files by:
1. Checking that the file size is non-zero
2. Verifying the file is a valid PNG image
3. Reporting the actual dimensions
4. Displaying the file size in bytes

Look for ✅ validation messages in the output to confirm successful generation.
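The first two checks can also be reproduced by hand when debugging. A minimal sketch, using a `check_png` helper of our own (not part of the bundled script):

```bash
# Sketch of the kind of checks the validation step performs: the output
# must be non-empty and start with the 0x89 "PNG" signature bytes.
# check_png is our own helper, not part of the bundled script.
check_png() {
  [ -s "$1" ] || { echo "FAIL: empty or missing: $1"; return 1; }
  sig=$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')
  [ "$sig" = "89504e47" ] || { echo "FAIL: not a PNG: $1"; return 1; }
  echo "OK: $1 ($(wc -c < "$1") bytes)"
}

# Demo on a minimal file carrying just the 8-byte PNG signature
printf '\211PNG\r\n\032\n' > /tmp/demo.png
check_png /tmp/demo.png
```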
mermaid-tools/scripts/extract-and-generate.sh (new file, 166 lines)
@@ -0,0 +1,166 @@
#!/bin/bash
# Enhanced Mermaid diagram extraction and PNG generation script
# Extracts diagrams from markdown and numbers them sequentially
#
# Usage: ./extract-and-generate.sh <markdown_file> [output_directory]
# Example: ./extract-and-generate.sh "~/workspace/document.md" "~/workspace/diagrams"

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/puppeteer-config.json"
EXTRACTOR_SCRIPT="$SCRIPT_DIR/extract_diagrams.py"

# Parse arguments
if [ $# -lt 1 ]; then
    echo "Usage: $0 <markdown_file> [output_directory]"
    echo "Example: $0 '~/workspace/document.md' '~/workspace/diagrams'"
    exit 1
fi

MARKDOWN_FILE="$1"
OUTPUT_DIR="${2:-$(dirname "$MARKDOWN_FILE")/diagrams}"

echo "=== Enhanced Mermaid Diagram Processor ==="
echo "Source markdown: $MARKDOWN_FILE"
echo "Output directory: $OUTPUT_DIR"
echo "Environment: WSL2 Ubuntu with Chrome dependencies"
echo

# Validate inputs
if [ ! -f "$MARKDOWN_FILE" ]; then
    echo "ERROR: Markdown file not found: $MARKDOWN_FILE"
    exit 1
fi

# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"

# Configuration
CHROME_PATH="/usr/bin/google-chrome-stable"

# Check dependencies
echo "Checking dependencies..."
if ! command -v mmdc &> /dev/null; then
    echo "ERROR: @mermaid-js/mermaid-cli not installed"
    echo "Install with: npm install -g @mermaid-js/mermaid-cli"
    exit 1
fi

if [ ! -f "$CHROME_PATH" ]; then
    echo "ERROR: Google Chrome not found at $CHROME_PATH"
    echo "Install Chrome and dependencies with the setup commands"
    exit 1
fi

if [ ! -f "$CONFIG_FILE" ]; then
    echo "ERROR: Puppeteer config not found: $CONFIG_FILE"
    exit 1
fi

if [ ! -f "$EXTRACTOR_SCRIPT" ]; then
    echo "ERROR: Python extractor script not found: $EXTRACTOR_SCRIPT"
    exit 1
fi

echo "✅ Dependencies verified"
echo

# Extract Mermaid diagrams from markdown.
# Note: with `set -e` in effect, a plain `$?` check after the command would be
# dead code (the script would already have exited), so guard the call directly.
echo "Extracting Mermaid diagrams from markdown..."
if ! python3 "$EXTRACTOR_SCRIPT" "$MARKDOWN_FILE" "$OUTPUT_DIR"; then
    echo "ERROR: Failed to extract diagrams from markdown"
    exit 1
fi

echo

# Now generate PNGs using the existing generation logic
echo "Generating PNG files..."
cd "$OUTPUT_DIR"

# Default dimensions - can be overridden with environment variables
DEFAULT_WIDTH="${MERMAID_WIDTH:-1200}"
DEFAULT_HEIGHT="${MERMAID_HEIGHT:-800}"
SCALE_FACTOR="${MERMAID_SCALE:-2}"

# Process all .mmd files in order
mmd_files=(*.mmd)

if [ ${#mmd_files[@]} -eq 1 ] && [ "${mmd_files[0]}" = "*.mmd" ]; then
    echo "No .mmd files found in output directory"
    exit 0
fi

# Sort files numerically by their prefix
IFS=$'\n' mmd_files=($(sort -V <<< "${mmd_files[*]}"))

echo "Found ${#mmd_files[@]} Mermaid diagram(s) to process"
echo

for mmd_file in "${mmd_files[@]}"; do
    if [ ! -f "$mmd_file" ]; then
        continue
    fi

    # Extract filename without extension
    diagram="${mmd_file%.mmd}"

    # Use smart defaults based on diagram content or filename patterns
    width="$DEFAULT_WIDTH"
    height="$DEFAULT_HEIGHT"

    # Smart sizing based on filename patterns
    if [[ "$diagram" =~ timeline|gantt ]]; then
        width=$((DEFAULT_WIDTH * 2))    # Wider for timelines
        height=$((DEFAULT_HEIGHT / 2))  # Shorter for timelines
    elif [[ "$diagram" =~ architecture|system ]]; then
        width=$((DEFAULT_WIDTH * 2))    # Larger for complex diagrams
        height=$((DEFAULT_HEIGHT * 2))
    elif [[ "$diagram" =~ caching ]]; then
        width=$((DEFAULT_WIDTH * 2))    # Larger for caching flowcharts
        height=$((DEFAULT_HEIGHT * 2))
    elif [[ "$diagram" =~ monitoring|workflow|sequence|api ]]; then
        width=$((DEFAULT_WIDTH * 2))    # Wider for workflows and sequences
        height="$DEFAULT_HEIGHT"
    fi

    echo "Generating $diagram.png (${width}x${height}, scale: ${SCALE_FACTOR}x)..."
    # Guard the command directly rather than testing $? afterwards (see above).
    if PUPPETEER_EXECUTABLE_PATH="$CHROME_PATH" mmdc \
        -i "$mmd_file" \
        -o "$diagram.png" \
        --puppeteerConfigFile "$CONFIG_FILE" \
        -w "$width" \
        -H "$height" \
        -s "$SCALE_FACTOR"; then
        echo "  ✅ Generated successfully"
    else
        echo "  ❌ Generation failed"
        continue
    fi

    # Validate PNG
    if test -s "$diagram.png" && file "$diagram.png" | grep -q "PNG image"; then
        size=$(stat -c%s "$diagram.png")
        dimensions_actual=$(identify -format "%wx%h" "$diagram.png" 2>/dev/null || echo "unknown")
        echo "  ✅ Validated PNG (${size} bytes, ${dimensions_actual})"
    else
        echo "  ❌ PNG validation failed"
    fi
    echo
done

echo "=== Generation Complete ==="
echo "All PNG diagrams generated and validated successfully!"
echo

echo "Generated files (in sequence order):"
ls -la [0-9][0-9]-*.png 2>/dev/null | awk '{printf "  %s (%s bytes)\n", $9, $5}' || echo "  No numbered PNG files found"

echo
echo "Generated files (all):"
ls -la *.png 2>/dev/null | awk '{printf "  %s (%s bytes)\n", $9, $5}' || echo "  No PNG files found"
134
mermaid-tools/scripts/extract_diagrams.py
Normal file
@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Extract Mermaid diagrams from markdown file and create numbered .mmd files
"""

import re
import sys
from pathlib import Path

def extract_mermaid_diagrams(markdown_file, output_dir):
    """Extract Mermaid diagrams from markdown file and create numbered .mmd files"""

    try:
        with open(markdown_file, 'r', encoding='utf-8') as f:
            content = f.read()
    except Exception as e:
        print(f"ERROR: Cannot read markdown file: {e}")
        return []

    # Find all mermaid code blocks with their content
    mermaid_pattern = r'```mermaid\n(.*?)\n```'
    matches = re.findall(mermaid_pattern, content, re.DOTALL)

    if not matches:
        print("No Mermaid diagrams found in markdown file")
        return []

    # Extract diagram names from context (look backwards for section headers)
    diagrams = []
    lines = content.split('\n')

    for i, match in enumerate(matches, 1):
        # Find the position of this diagram in the content
        diagram_pattern = f'```mermaid\n{re.escape(match)}\n```'
        diagram_match = re.search(diagram_pattern, content)

        if not diagram_match:
            # Fallback: use simple search
            diagram_start = content.find(f'```mermaid\n{match}\n```')
        else:
            diagram_start = diagram_match.start()

        # Count lines up to this point to find context
        if diagram_start >= 0:
            lines_before = content[:diagram_start].count('\n')
        else:
            lines_before = 0

        # Look backwards for the most recent section header or meaningful context
        diagram_name = f"diagram-{i:02d}"  # Default fallback

        # Look for context clues in the 20 lines before the diagram
        context_start = max(0, lines_before - 20)
        context_lines = lines[context_start:lines_before] if lines_before > 0 else []

        # Priority 1: Look for specific diagram descriptions
        for line in reversed(context_lines):
            line = line.strip().lower()
            if 'system architecture' in line:
                diagram_name = f"{i:02d}-system-architecture"
                break
            elif 'authentication flow' in line:
                diagram_name = f"{i:02d}-authentication-flow"
                break
            elif 'caching architecture' in line or 'multi-layer cache' in line:
                diagram_name = f"{i:02d}-caching-architecture"
                break
            elif 'data flow' in line or 'redshift schema' in line:
                diagram_name = f"{i:02d}-data-flow"
                break
            elif 'api request' in line or 'dashboard metrics endpoints' in line:
                diagram_name = f"{i:02d}-api-request-response"
                break
            elif 'dashboard layout' in line or 'presentation layer' in line:
                diagram_name = f"{i:02d}-dashboard-layout"
                break
            elif 'agency' in line and ('hierarchy' in line or 'filter' in line):
                diagram_name = f"{i:02d}-agency-hierarchy"
                break

        # Priority 2: Look for section headers (## or ###)
        if diagram_name.startswith('diagram-'):
            for line in reversed(context_lines):
                line = line.strip()
                if line.startswith('###') or line.startswith('##'):
                    # Extract meaningful part from header
                    header = re.sub(r'^#+\s*\*?\*?', '', line)
                    header = re.sub(r'\*?\*?$', '', header)
                    header = header.strip()

                    # Convert to filename-friendly format
                    name_part = re.sub(r'[^\w\s-]', '', header)
                    name_part = re.sub(r'\s+', '-', name_part.strip())
                    name_part = name_part.lower()[:30]  # Limit length

                    if name_part and name_part != 'detailed-design':
                        diagram_name = f"{i:02d}-{name_part}"
                        break

        diagrams.append({
            'number': i,
            'name': diagram_name,
            'content': match.strip()
        })

        print(f"Found diagram {i}: {diagram_name}")

    # Write .mmd files
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    created_files = []
    for diagram in diagrams:
        mmd_file = output_path / f"{diagram['name']}.mmd"
        try:
            with open(mmd_file, 'w', encoding='utf-8') as f:
                f.write(diagram['content'])
            created_files.append(str(mmd_file))
            print(f"Created: {mmd_file}")
        except Exception as e:
            print(f"ERROR: Cannot create {mmd_file}: {e}")

    return created_files

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: python3 extract_diagrams.py <markdown_file> <output_directory>")
        sys.exit(1)

    markdown_file = sys.argv[1]
    output_dir = sys.argv[2]

    files = extract_mermaid_diagrams(markdown_file, output_dir)
    print(f"\nExtracted {len(files)} diagrams successfully")
14
mermaid-tools/scripts/puppeteer-config.json
Normal file
@@ -0,0 +1,14 @@
{
  "args": [
    "--no-sandbox",
    "--disable-setuid-sandbox",
    "--disable-dev-shm-usage",
    "--disable-gpu",
    "--no-first-run",
    "--disable-background-timer-throttling",
    "--disable-backgrounding-occluded-windows",
    "--disable-renderer-backgrounding",
    "--disable-features=TranslateUI",
    "--disable-ipc-flooding-protection"
  ]
}
195
repomix-unmixer/README.md
Normal file
@@ -0,0 +1,195 @@
# Repomix Unmixer Skill

A Claude Code skill for extracting files from repomix-packed repositories and restoring their original directory structure.

## Overview

Repomix packs entire repositories into single AI-friendly files (XML, Markdown, or JSON). This skill reverses that process, extracting all files and restoring the original directory structure.

## Quick Start

### Installation

1. Download `repomix-unmixer.zip`
2. Extract to `~/.claude/skills/repomix-unmixer/`
3. Restart Claude Code

### Basic Usage

Extract a repomix file:

```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "<path_to_repomix_file>" \
  "<output_directory>"
```

Example:

```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "/path/to/skills.xml" \
  "/tmp/extracted-skills"
```

## Features

- **Multi-format support**: XML (default), Markdown, and JSON repomix formats
- **Auto-detection**: Automatically detects the repomix format
- **Structure preservation**: Restores the original directory structure
- **UTF-8 encoding**: Handles international characters correctly
- **Progress reporting**: Shows extraction progress and statistics
- **Validation workflows**: Includes comprehensive validation guides

## Supported Formats

### XML Format (default)
```xml
<file path="relative/path/to/file.ext">
content here
</file>
```

### Markdown Format
````markdown
### File: relative/path/to/file.ext

```language
content here
```
````

### JSON Format
```json
{
  "files": [
    {"path": "file.ext", "content": "content here"}
  ]
}
```

## Bundled Resources

### scripts/unmix_repomix.py
Main unmixing script with:
- Format auto-detection
- Multi-format parsing (XML, Markdown, JSON)
- Directory structure creation
- Progress reporting

### references/repomix-format.md
Comprehensive format documentation:
- XML, Markdown, and JSON format specifications
- Extraction patterns and regex
- Edge cases and examples
- Format detection logic

### references/validation-workflow.md
Detailed validation procedures:
- File count verification
- Directory structure validation
- Content integrity checks
- Skill-specific validation for Claude Code skills
- Quality assurance checklists

## Common Use Cases

### Unmix Claude Skills
```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "skills.xml" "/tmp/review-skills"

# Review and validate
tree /tmp/review-skills

# Install if valid
cp -r /tmp/review-skills/* ~/.claude/skills/
```

### Extract Repository for Review
```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "repo-output.xml" "/tmp/review-repo"

# Review structure
tree /tmp/review-repo
```

### Restore from Backup
```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "backup.xml" "~/workspace/restored-project"
```

## Validation

After extraction, validate the results:

1. **Check file count**: Verify the extracted count matches the expected count
2. **Review structure**: Use `tree` to inspect the directory layout
3. **Spot check content**: Read a few files to verify integrity
4. **Run validation**: For skills, use skill-creator validation

For detailed validation procedures, see `references/validation-workflow.md`.
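For the XML format, the file-count check can be scripted by counting `<file path=` entries against files on disk; a minimal sketch with inline sample data (substitute your real repomix file and extraction directory):

```python
from pathlib import Path
import tempfile

def count_entries(repomix_text):
    """Count <file path="..."> entries in an XML repomix document."""
    return repomix_text.count('<file path=')

def count_extracted(output_dir):
    """Count regular files under the extraction directory."""
    return sum(1 for p in Path(output_dir).rglob('*') if p.is_file())

# Inline sample standing in for a real repomix file and its extraction.
sample = '<file path="a.txt">\nalpha\n</file>\n<file path="docs/b.md">\n# beta\n</file>\n'
out = Path(tempfile.mkdtemp())
(out / "docs").mkdir()
(out / "a.txt").write_text("alpha\n")
(out / "docs" / "b.md").write_text("# beta\n")

print(count_entries(sample), count_extracted(out))
# → 2 2
```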
## Requirements

- Python 3.6 or higher
- Standard library only (no external dependencies)

## Skill Activation

This skill activates when:
- Unmixing a repomix output file
- Extracting files from a packed repository
- Restoring the original directory structure
- Reviewing repomix-packed content
- Converting repomix output back to usable files

## Best Practices

1. **Extract to temp directories** - Always extract to `/tmp` for initial review
2. **Verify file count** - Check that the extracted count matches expectations
3. **Review structure** - Inspect the directory layout before use
4. **Check content** - Spot-check files for integrity
5. **Use validation tools** - For skills, use skill-creator validation
6. **Preserve originals** - Keep the repomix file as a backup

## Troubleshooting

### No Files Extracted
- Verify the input file is a valid repomix file
- Check the format (XML/Markdown/JSON)
- Refer to `references/repomix-format.md`

### Permission Errors
- Ensure the output directory is writable
- Use `mkdir -p` to create the directory first
- Check file permissions

### Encoding Issues
- The script uses UTF-8 by default
- Verify the repomix file encoding
- Check for special characters

## Version

- **Version**: 1.0.0
- **Created**: 2025-10-22
- **Last Updated**: 2025-10-22

## License

This skill follows the same license as Claude Code.

## Support

For issues or questions:
1. Check `references/repomix-format.md` for format details
2. Review `references/validation-workflow.md` for validation help
3. Inspect the script source at `scripts/unmix_repomix.py`
4. Report issues to the skill creator

## Credits

Created using the skill-creator skill for Claude Code.
310
repomix-unmixer/SKILL.md
Normal file
@@ -0,0 +1,310 @@
---
name: repomix-unmixer
description: Extracts files from repomix-packed repositories, restoring original directory structures from XML/Markdown/JSON formats. Activates when users need to unmix repomix files, extract packed repositories, restore file structures from repomix output, or reverse the repomix packing process.
---

# Repomix Unmixer

## Overview

This skill extracts files from repomix-packed repositories and restores their original directory structure. Repomix packs entire repositories into single AI-friendly files (XML, Markdown, or JSON), and this skill reverses that process to restore individual files.

## When to Use This Skill

This skill activates when:
- Unmixing a repomix output file (*.xml, *.md, *.json)
- Extracting files from a packed repository
- Restoring the original directory structure from repomix format
- Reviewing or validating repomix-packed content
- Converting repomix output back to usable files

## Core Workflow

### Standard Unmixing Process

Extract all files from a repomix file and restore the original directory structure using the bundled `unmix_repomix.py` script:

```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "<path_to_repomix_file>" \
  "<output_directory>"
```

**Parameters:**
- `<path_to_repomix_file>`: Path to the repomix output file (XML, Markdown, or JSON)
- `<output_directory>`: Directory where files will be extracted (created if it doesn't exist)

**Example:**
```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "/path/to/repomix-output.xml" \
  "/tmp/extracted-files"
```

### What the Script Does

1. **Parses** the repomix file format (XML, Markdown, or JSON)
2. **Extracts** each file path and content
3. **Creates** the original directory structure
4. **Writes** each file to its original location
5. **Reports** extraction progress and statistics

### Output

The script will:
- Create all necessary parent directories
- Extract all files, maintaining their paths
- Print extraction progress for each file
- Display the total count of extracted files

**Example output:**
```
Unmixing /path/to/skill.xml...
Output directory: /tmp/extracted-files

✓ Extracted: github-ops/SKILL.md
✓ Extracted: github-ops/references/api_reference.md
✓ Extracted: markdown-tools/SKILL.md
...

✅ Successfully extracted 20 files!

Extracted files are in: /tmp/extracted-files
```
## Supported Formats

### XML Format (default)

Repomix XML format structure:
```xml
<file path="relative/path/to/file.ext">
file content here
</file>
```

The script uses regex to match `<file path="...">content</file>` blocks.

### Markdown Format

For markdown-style repomix output with file markers:
````markdown
## File: relative/path/to/file.ext

```
file content
```
````

Refer to `references/repomix-format.md` for detailed format specifications.

### JSON Format

For JSON-style repomix output:
```json
{
  "files": [
    {
      "path": "relative/path/to/file.ext",
      "content": "file content here"
    }
  ]
}
```
## Common Use Cases

### Use Case 1: Unmix Claude Skills

Extract skills that were shared as a repomix file:

```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "/path/to/skills.xml" \
  "/tmp/unmixed-skills"
```

Then review, validate, or install the extracted skills.

### Use Case 2: Extract Repository for Review

Extract a packed repository to review its structure and contents:

```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "/path/to/repo-output.xml" \
  "/tmp/review-repo"

# Review the structure
tree /tmp/review-repo
```

### Use Case 3: Restore Working Files

Restore files from a repomix backup to a working directory:

```bash
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "/path/to/backup.xml" \
  "~/workspace/restored-project"
```

## Validation Workflow

After unmixing, validate that the extracted files are correct:

1. **Check file count**: Verify the number of extracted files matches expectations
2. **Review structure**: Use `tree` or `ls -R` to inspect the directory layout
3. **Spot check content**: Read a few key files to verify content integrity
4. **Run validation**: For skills, use the skill-creator validation tools

Refer to `references/validation-workflow.md` for detailed validation procedures, especially for unmixing Claude skills.

## Important Principles

### Always Specify Output Directory

Always provide an output directory to avoid cluttering the current working directory:

```bash
# Good: Explicit output directory
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "input.xml" "/tmp/output"

# Avoid: Default output (may clutter the current directory)
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py "input.xml"
```

### Use Temporary Directories for Review

Extract to temporary directories first for review:

```bash
# Extract to /tmp for review
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "skills.xml" "/tmp/review-skills"

# Review the contents
tree /tmp/review-skills

# If satisfied, copy to the final destination
cp -r /tmp/review-skills ~/.claude/skills/
```

### Verify Before Overwriting

Never extract directly to important directories without review:

```bash
# Bad: Might overwrite existing files
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "repo.xml" "~/workspace/my-project"

# Good: Extract to temp, review, then move
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "repo.xml" "/tmp/extracted"
# Review, then:
mv /tmp/extracted ~/workspace/my-project
```

## Troubleshooting

### No Files Extracted

**Issue**: Script completes but no files are extracted.

**Possible causes:**
- Wrong file format (not a repomix file)
- Unsupported repomix format version
- File path pattern doesn't match

**Solution:**
1. Verify the input file is a repomix output file
2. Check the format (XML/Markdown/JSON)
3. Examine the file structure manually
4. Refer to `references/repomix-format.md` for format details

### Permission Errors

**Issue**: Cannot write to the output directory.

**Solution:**
```bash
# Ensure the output directory is writable
mkdir -p /tmp/output
chmod 755 /tmp/output

# Or use a directory you own
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "input.xml" "$HOME/extracted"
```

### Encoding Issues

**Issue**: Special characters appear garbled in extracted files.

**Solution:**
The script uses UTF-8 encoding by default. If issues persist:
- Check the original repomix file encoding
- Verify the file was created correctly
- Report the issue with specific character examples

### Path Already Exists

**Issue**: Files exist at the extraction path.

**Solution:**
```bash
# Option 1: Use a fresh output directory
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "input.xml" "/tmp/output-$(date +%s)"

# Option 2: Clear the directory first
rm -rf /tmp/output && mkdir /tmp/output
python3 ~/.claude/skills/repomix-unmixer/scripts/unmix_repomix.py \
  "input.xml" "/tmp/output"
```

## Best Practices

1. **Extract to temp directories** - Always extract to `/tmp` or similar for initial review
2. **Verify file count** - Check that the extracted file count matches expectations
3. **Review structure** - Use `tree` to inspect the directory layout before use
4. **Check content** - Spot-check a few files to ensure content is intact
5. **Use validation tools** - For skills, use skill-creator validation after unmixing
6. **Preserve originals** - Keep the original repomix file as a backup

## Resources

### scripts/unmix_repomix.py

Main unmixing script that:
- Parses repomix XML/Markdown/JSON formats
- Extracts file paths and content using regex
- Creates directory structures automatically
- Writes files to their original locations
- Reports extraction progress and statistics

The script is self-contained and requires only the Python 3 standard library.

### references/repomix-format.md

Comprehensive documentation of repomix file formats, including:
- XML format structure and examples
- Markdown format patterns
- JSON format schema
- File path encoding rules
- Content extraction patterns
- Format version differences

Load this reference when dealing with format-specific issues or supporting new repomix versions.

### references/validation-workflow.md

Detailed validation procedures for extracted content, including:
- File count verification steps
- Directory structure validation
- Content integrity checks
- Skill-specific validation using skill-creator tools
- Quality assurance checklists

Load this reference when users need to validate unmixed skills or verify extraction quality.
448
repomix-unmixer/references/repomix-format.md
Normal file
@@ -0,0 +1,448 @@
# Repomix File Format Reference

This document provides comprehensive documentation of repomix output formats for accurate file extraction.

## Overview

Repomix can generate output in three formats:
1. **XML** (default) - Most common, uses XML tags
2. **Markdown** - Human-readable, uses markdown code blocks
3. **JSON** - Structured data format

## XML Format

### Structure

The XML format is the default and most common repomix output:

```xml
<file_summary>
[Summary and metadata about the packed repository]
</file_summary>

<directory_structure>
[Text-based directory tree visualization]
</directory_structure>

<files>
<file path="relative/path/to/file1.ext">
content of file1
</file>

<file path="relative/path/to/file2.ext">
content of file2
</file>
</files>
```

### File Block Pattern

Each file is enclosed in a `<file>` tag with a `path` attribute:

```xml
<file path="src/main.py">
#!/usr/bin/env python3

def main():
    print("Hello, world!")

if __name__ == "__main__":
    main()
</file>
```

### Key Characteristics

- The file path is in the `path` attribute (relative path)
- Content starts on the line after the opening tag
- Content ends on the line before the closing tag
- No leading/trailing blank lines in content (content is trimmed)

### Extraction Pattern

The unmixing script uses this regex pattern:

```python
r'<file path="([^"]+)">\n(.*?)\n</file>'
```

**Pattern breakdown:**
- `<file path="([^"]+)">` - Captures the file path from the path attribute
- `\n` - Expects a newline after the opening tag
- `(.*?)` - Captures the file content (non-greedy, allows multiline)
- `\n</file>` - Expects a newline before the closing tag
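As a quick sanity check, the pattern above can be exercised on a small sample; note that `re.DOTALL` is required so `(.*?)` spans newlines inside a file block:

```python
import re

# The XML extraction pattern documented above.
FILE_BLOCK = r'<file path="([^"]+)">\n(.*?)\n</file>'

sample = (
    '<file path="src/main.py">\n'
    'print("hello")\n'
    '</file>\n'
    '<file path="docs/readme.md">\n'
    '# Title\n'
    'Some text.\n'
    '</file>\n'
)

# re.DOTALL lets (.*?) match across newlines; non-greedy matching stops
# each capture at the first closing </file> tag.
matches = re.findall(FILE_BLOCK, sample, re.DOTALL)
for path, content in matches:
    print(path, "->", repr(content))
# → src/main.py -> 'print("hello")'
# → docs/readme.md -> '# Title\nSome text.'
```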
|
||||
|
||||
## Markdown Format
|
||||
|
||||
### Structure
|
||||
|
||||
The Markdown format uses code blocks to delimit file content:
|
||||
|
||||
````markdown
|
||||
# Repository Summary
|
||||
|
||||
[Summary content]
|
||||
|
||||
## Directory Structure
|
||||
|
||||
```
|
||||
directory/
|
||||
file1.txt
|
||||
file2.txt
|
||||
```
|
||||
|
||||
## Files
|
||||
|
||||
### File: relative/path/to/file1.ext
|
||||
|
||||
```python
|
||||
# File content here
|
||||
def example():
|
||||
pass
|
||||
```
|
||||
|
||||
### File: relative/path/to/file2.ext
|
||||
|
||||
```javascript
|
||||
// Another file
|
||||
console.log("Hello");
|
||||
```
|
||||
````
|
||||
|
||||
### File Block Pattern
|
||||
|
||||
Each file uses a level-3 heading with "File:" prefix and code block:
|
||||
|
||||
````markdown
|
||||
### File: src/main.py
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
|
||||
def main():
|
||||
print("Hello, world!")
|
||||
```
|
||||
````
|
||||
|
||||
### Key Characteristics
|
||||
|
||||
- File path follows "### File: " heading
|
||||
- Content is within a code block (triple backticks)
|
||||
- Language hint may be included after opening backticks
|
||||
- Content preserves original formatting
|
||||
|
||||
### Extraction Pattern
|
||||
|
||||
```python
|
||||
r'## File: ([^\n]+)\n```[^\n]*\n(.*?)\n```'
|
||||
```
|
||||
|
||||
**Pattern breakdown:**
|
||||
- `## File: ([^\n]+)` - Captures file path from heading
|
||||
- `\n` - Newline after heading
|
||||
- `` ```[^\n]* `` - Matches the opening code fence and an optional language hint
- `\n(.*?)\n` - Captures the file content between the fences (non-greedy)
- `` ``` `` - Matches the closing fence
|
||||
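The same sanity check works for the Markdown pattern (illustrative sample; the fence characters are built programmatically so they can be shown inside this example):

```python
import re

fence = '`' * 3  # literal triple backticks
sample = f'## File: src/app.py\n{fence}python\nx = 1\n{fence}\n'

# Equivalent to the documented pattern, assembled to avoid raw fences here.
pattern = r'## File: ([^\n]+)\n' + fence + r'[^\n]*\n(.*?)\n' + fence
m = re.search(pattern, sample, re.DOTALL)
# m.group(1) == 'src/app.py', m.group(2) == 'x = 1'
```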
|
||||
## JSON Format
|
||||
|
||||
### Structure
|
||||
|
||||
The JSON format provides structured data:
|
||||
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"repository": "owner/repo",
|
||||
"timestamp": "2025-10-22T19:00:00Z"
|
||||
},
|
||||
"directoryStructure": "directory/\n file1.txt\n file2.txt\n",
|
||||
"files": [
|
||||
{
|
||||
"path": "relative/path/to/file1.ext",
|
||||
"content": "content of file1\n"
|
||||
},
|
||||
{
|
||||
"path": "relative/path/to/file2.ext",
|
||||
"content": "content of file2\n"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### File Entry Structure
|
||||
|
||||
Each file is an object in the `files` array:
|
||||
|
||||
```json
|
||||
{
|
||||
"path": "src/main.py",
|
||||
"content": "#!/usr/bin/env python3\n\ndef main():\n print(\"Hello, world!\")\n\nif __name__ == \"__main__\":\n main()\n"
|
||||
}
|
||||
```
|
||||
|
||||
### Key Characteristics
|
||||
|
||||
- Files are in a `files` array
|
||||
- Each file has `path` and `content` keys
|
||||
- Content includes literal `\n` for newlines
|
||||
- Content is JSON-escaped (quotes, backslashes)
|
||||
|
||||
### Extraction Approach
|
||||
|
||||
```python
|
||||
data = json.loads(content)
|
||||
files = data.get('files', [])
|
||||
for file_entry in files:
|
||||
file_path = file_entry.get('path')
|
||||
file_content = file_entry.get('content', '')
|
||||
```
|
||||
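Note that `json.loads` already decodes the escape sequences, so `content` arrives with real newlines; no manual unescaping is needed (a quick illustration):

```python
import json

raw = '{"files": [{"path": "a.txt", "content": "line1\\nline2\\n"}]}'
entry = json.loads(raw)['files'][0]

# The two-character JSON escape \n becomes a real newline after parsing.
assert entry['content'] == 'line1\nline2\n'
```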
|
||||
## Format Detection
|
||||
|
||||
### Detection Logic
|
||||
|
||||
The unmixing script auto-detects format using these checks:
|
||||
|
||||
1. **XML**: Contains `<file path=` and `</file>`
|
||||
2. **JSON**: Starts with `{` and contains `"files"`
|
||||
3. **Markdown**: Contains `## File:`
|
||||
|
||||
### Detection Priority
|
||||
|
||||
1. Check XML markers first (most common)
|
||||
2. Check JSON structure second
|
||||
3. Check Markdown markers last
|
||||
4. Return `None` if no format matches
|
||||
|
||||
### Example Detection Code
|
||||
|
||||
```python
|
||||
def detect_format(content):
|
||||
if '<file path=' in content and '</file>' in content:
|
||||
return 'xml'
|
||||
if content.strip().startswith('{') and '"files"' in content:
|
||||
return 'json'
|
||||
if '## File:' in content:
|
||||
return 'markdown'
|
||||
return None
|
||||
```
|
||||
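Exercising the detector on minimal samples confirms the check order; in particular, XML markers win when both XML and Markdown markers are present:

```python
def detect_format(content):
    if '<file path=' in content and '</file>' in content:
        return 'xml'
    if content.strip().startswith('{') and '"files"' in content:
        return 'json'
    if '## File:' in content:
        return 'markdown'
    return None

assert detect_format('<file path="a.txt">\nx\n</file>') == 'xml'
assert detect_format('{"files": []}') == 'json'
assert detect_format('## File: a.txt') == 'markdown'
# XML is checked first, so mixed markers resolve to XML:
assert detect_format('<file path="a">\n## File: b\n</file>') == 'xml'
assert detect_format('plain text') is None
```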
|
||||
## File Path Encoding
|
||||
|
||||
### Relative Paths
|
||||
|
||||
All file paths in repomix output are relative to the repository root:
|
||||
|
||||
```
|
||||
src/components/Header.tsx
|
||||
docs/README.md
|
||||
package.json
|
||||
```
|
||||
|
||||
### Special Characters
|
||||
|
||||
File paths may contain:
|
||||
- Spaces: `"My Documents/file.txt"`
|
||||
- Hyphens: `"some-file.md"`
|
||||
- Underscores: `"my_script.py"`
|
||||
- Dots: `"config.local.json"`
|
||||
|
||||
Paths are preserved exactly as they appear in the original repository.
|
||||
|
||||
### Directory Separators
|
||||
|
||||
- Always forward slashes (`/`) regardless of platform
|
||||
- No leading slash (relative paths)
|
||||
- No trailing slash for files
|
||||
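Since paths are POSIX-style and relative, they can be joined under the output directory with `pathlib`. The traversal guard below is an added precaution for untrusted input, not something repomix itself requires:

```python
from pathlib import Path, PurePosixPath

def safe_output_path(output_dir, file_path):
    # Repomix paths use forward slashes and are relative to the repo root.
    # Reject anything that could escape the output directory.
    parts = PurePosixPath(file_path).parts
    if file_path.startswith('/') or '..' in parts:
        raise ValueError(f'unsafe path: {file_path}')
    return Path(output_dir).joinpath(*parts)
```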
|
||||
## Content Encoding
|
||||
|
||||
### Character Encoding
|
||||
|
||||
All formats use **UTF-8** encoding for both the container file and extracted content.
|
||||
|
||||
### Special Characters
|
||||
|
||||
- **XML**: Content may contain XML-escaped characters (`&lt;`, `&gt;`, `&amp;`)
|
||||
- **Markdown**: Content is plain text within code blocks
|
||||
- **JSON**: Content uses JSON string escaping (`\"`, `\\`, `\n`)
|
||||
|
||||
### Line Endings
|
||||
|
||||
- May be `\n` (Unix), `\r\n` (Windows), or `\r` (classic Mac OS)
- Endings are preserved exactly only when files are read and written with `newline=''`; Python's default universal-newline handling otherwise normalizes `\r\n` to `\n`
|
||||
|
||||
## Edge Cases
|
||||
|
||||
### Empty Files
|
||||
|
||||
**XML:**
|
||||
```xml
|
||||
<file path="empty.txt">
|
||||
</file>
|
||||
```

Note: the extraction pattern shown earlier requires a newline-delimited body (`\n(.*?)\n`), so a block with no line between the opening and closing tags will not match, and truly empty files may be skipped unless the pattern is relaxed.
|
||||
|
||||
**Markdown:**
|
||||
````markdown
|
||||
## File: empty.txt
|
||||
|
||||
```
|
||||
```
|
||||
````
|
||||
|
||||
**JSON:**
|
||||
```json
|
||||
{"path": "empty.txt", "content": ""}
|
||||
```
|
||||
|
||||
### Binary Files
|
||||
|
||||
Binary files are typically **not included** in repomix output. The directory structure may list them, but they won't have content blocks.
|
||||
|
||||
### Large Files
|
||||
|
||||
Some repomix configurations may truncate or exclude large files. Check the file summary section for exclusion notes.
|
||||
|
||||
## Version Differences
|
||||
|
||||
### Repomix v1.x
|
||||
|
||||
- Uses XML format by default
|
||||
- File blocks have consistent structure
|
||||
- No automatic format version marker
|
||||
|
||||
### Repomix v2.x
|
||||
|
||||
- Adds JSON and Markdown format support
|
||||
- May include version metadata in output
|
||||
- Maintains backward compatibility with v1 XML
|
||||
|
||||
## Validation
|
||||
|
||||
### Successful Extraction Indicators
|
||||
|
||||
After extraction, verify:
|
||||
1. **File count** matches expected number
|
||||
2. **Directory structure** matches the `<directory_structure>` section
|
||||
3. **Content integrity** - spot-check a few files
|
||||
4. **No empty directories** unless explicitly included
|
||||
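The count comparison can also be scripted (a minimal sketch):

```python
from pathlib import Path

def count_extracted(output_dir):
    """Count regular files under the extraction directory."""
    return sum(1 for p in Path(output_dir).rglob('*') if p.is_file())
```

Compare the result against the count reported by the extraction script.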
|
||||
### Common Format Issues
|
||||
|
||||
**Issue**: Files not extracted
|
||||
- **Cause**: Format pattern mismatch
|
||||
- **Solution**: Check format manually, verify repomix version
|
||||
|
||||
**Issue**: Partial content extraction
|
||||
- **Cause**: Incorrect regex pattern (too greedy or not greedy enough)
|
||||
- **Solution**: Check for nested tags or malformed blocks
|
||||
|
||||
**Issue**: Encoding errors
|
||||
- **Cause**: Non-UTF-8 content in repomix file
|
||||
- **Solution**: Verify source file encoding
|
||||
|
||||
## Examples
|
||||
|
||||
### Complete XML Example
|
||||
|
||||
```xml
|
||||
<file_summary>
|
||||
This is a packed repository.
|
||||
</file_summary>
|
||||
|
||||
<directory_structure>
|
||||
my-skill/
|
||||
SKILL.md
|
||||
scripts/
|
||||
helper.py
|
||||
</directory_structure>
|
||||
|
||||
<files>
|
||||
<file path="my-skill/SKILL.md">
|
||||
---
|
||||
name: my-skill
|
||||
description: Example skill
|
||||
---
|
||||
|
||||
# My Skill
|
||||
|
||||
This is an example.
|
||||
</file>
|
||||
|
||||
<file path="my-skill/scripts/helper.py">
|
||||
#!/usr/bin/env python3
|
||||
|
||||
def help():
|
||||
print("Helping!")
|
||||
</file>
|
||||
</files>
|
||||
```
|
||||
|
||||
### Complete Markdown Example
|
||||
|
||||
````markdown
|
||||
# Repository: my-skill
|
||||
|
||||
## Directory Structure
|
||||
|
||||
```
|
||||
my-skill/
|
||||
SKILL.md
|
||||
scripts/
|
||||
helper.py
|
||||
```
|
||||
|
||||
## Files
|
||||
|
||||
## File: my-skill/SKILL.md
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: my-skill
|
||||
description: Example skill
|
||||
---
|
||||
|
||||
# My Skill
|
||||
|
||||
This is an example.
|
||||
```
|
||||
|
||||
## File: my-skill/scripts/helper.py
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
|
||||
def help():
|
||||
print("Helping!")
|
||||
```
|
||||
````
|
||||
|
||||
### Complete JSON Example
|
||||
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"repository": "my-skill"
|
||||
},
|
||||
"directoryStructure": "my-skill/\n SKILL.md\n scripts/\n helper.py\n",
|
||||
"files": [
|
||||
{
|
||||
"path": "my-skill/SKILL.md",
|
||||
"content": "---\nname: my-skill\ndescription: Example skill\n---\n\n# My Skill\n\nThis is an example.\n"
|
||||
},
|
||||
{
|
||||
"path": "my-skill/scripts/helper.py",
|
||||
"content": "#!/usr/bin/env python3\n\ndef help():\n print(\"Helping!\")\n"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- Repomix documentation: https://github.com/yamadashy/repomix
|
||||
- Repomix output examples: Check the repomix repository for sample outputs
|
||||
- XML specification: https://www.w3.org/XML/
|
||||
- JSON specification: https://www.json.org/
|
||||
444
repomix-unmixer/references/validation-workflow.md
Normal file
@@ -0,0 +1,444 @@
|
||||
# Validation Workflow for Unmixed Content
|
||||
|
||||
This guide provides detailed validation procedures for verifying the quality and correctness of unmixed repomix content, with special focus on Claude Code skills.
|
||||
|
||||
## Overview
|
||||
|
||||
After unmixing a repomix file, validation ensures:
|
||||
- All files were extracted correctly
|
||||
- Directory structure is intact
|
||||
- Content integrity is preserved
|
||||
- Skills (if applicable) meet Claude Code requirements
|
||||
|
||||
## General Validation Workflow
|
||||
|
||||
### Step 1: File Count Verification
|
||||
|
||||
Compare the extracted file count with the expected count.
|
||||
|
||||
**Check extraction output:**
|
||||
```
|
||||
✅ Successfully extracted 20 files!
|
||||
```
|
||||
|
||||
**Verify against directory structure:**
|
||||
```bash
|
||||
# Rough count: indented entries within the directory structure section
sed -n '/<directory_structure>/,/<\/directory_structure>/p' repomix-file.xml | grep -c "^  "
|
||||
|
||||
# Count extracted files
|
||||
find /tmp/extracted -type f | wc -l
|
||||
```
|
||||
|
||||
**Expected result:** Counts should match (accounting for any excluded binary files).
|
||||
|
||||
### Step 2: Directory Structure Validation
|
||||
|
||||
Compare the extracted structure with the repomix directory structure section.
|
||||
|
||||
**Extract directory structure from repomix file:**
|
||||
```bash
|
||||
# For XML format
|
||||
sed -n '/<directory_structure>/,/<\/directory_structure>/p' repomix-file.xml
|
||||
```
|
||||
|
||||
**Compare with extracted structure:**
|
||||
```bash
|
||||
tree /tmp/extracted
|
||||
# or
|
||||
ls -R /tmp/extracted
|
||||
```
|
||||
|
||||
**Validation checks:**
|
||||
- [ ] All directories present
|
||||
- [ ] Nesting levels match
|
||||
- [ ] No unexpected directories
|
||||
|
||||
### Step 3: Content Integrity Spot Checks
|
||||
|
||||
Randomly select 3-5 files to verify content integrity.
|
||||
|
||||
**Check file size:**
|
||||
```bash
|
||||
# Compare sizes (should be reasonable)
|
||||
ls -lh /tmp/extracted/path/to/file.txt
|
||||
```
|
||||
|
||||
**Check content:**
|
||||
```bash
|
||||
# Read the file and verify it looks correct
|
||||
cat /tmp/extracted/path/to/file.txt
|
||||
```
|
||||
|
||||
**Validation checks:**
|
||||
- [ ] Content is readable (UTF-8 encoded)
|
||||
- [ ] No obvious truncation
|
||||
- [ ] Code/markup is properly formatted
|
||||
- [ ] No XML/JSON escape artifacts (e.g., `&lt;` instead of `<`)
|
||||
|
||||
### Step 4: File Type Distribution
|
||||
|
||||
Verify that expected file types are present.
|
||||
|
||||
**Check file types:**
|
||||
```bash
|
||||
# List all file extensions
|
||||
find /tmp/extracted -type f | sed 's/.*\.//' | sort | uniq -c
|
||||
```
|
||||
|
||||
**Expected distributions:**
|
||||
- Skills: `.md`, `.py`, `.sh`, `.json`, etc.
|
||||
- Projects: Language-specific extensions
|
||||
- Documentation: `.md`, `.txt`, `.pdf`, etc.
|
||||
|
||||
## Skill-Specific Validation
|
||||
|
||||
For Claude Code skills extracted from repomix files, perform additional validation.
|
||||
|
||||
### Step 1: Verify Skill Structure
|
||||
|
||||
Check that each skill has the required `SKILL.md` file.
|
||||
|
||||
**Find all SKILL.md files:**
|
||||
```bash
|
||||
find /tmp/extracted -name "SKILL.md"
|
||||
```
|
||||
|
||||
**Expected result:** One `SKILL.md` per skill directory.
|
||||
|
||||
### Step 2: Validate YAML Frontmatter
|
||||
|
||||
Each `SKILL.md` must have valid YAML frontmatter with `name` and `description`.
|
||||
|
||||
**Check frontmatter:**
|
||||
```bash
|
||||
head -n 5 /tmp/extracted/skill-name/SKILL.md
|
||||
```
|
||||
|
||||
**Expected format:**
|
||||
```yaml
|
||||
---
|
||||
name: skill-name
|
||||
description: Clear description with activation triggers
|
||||
---
|
||||
```
|
||||
|
||||
**Validation checks:**
|
||||
- [ ] Opening `---` on line 1
|
||||
- [ ] `name:` field present
|
||||
- [ ] `description:` field present
|
||||
- [ ] Closing `---` present
|
||||
- [ ] Description mentions when to activate
|
||||
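These checks can be scripted; the sketch below is a simplified frontmatter check, not a substitute for the skill-creator validator:

```python
def check_frontmatter(text):
    """Return a list of problems found in a SKILL.md's YAML frontmatter."""
    lines = text.splitlines()
    problems = []
    if not lines or lines[0] != '---':
        return ['missing opening --- on line 1']
    try:
        end = lines[1:].index('---') + 1
    except ValueError:
        return ['missing closing ---']
    block = lines[1:end]
    for field in ('name:', 'description:'):
        if not any(line.startswith(field) for line in block):
            problems.append(f'missing {field} field')
    return problems

sample = '---\nname: my-skill\ndescription: Example skill\n---\n# My Skill\n'
assert check_frontmatter(sample) == []
```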
|
||||
### Step 3: Verify Resource Organization
|
||||
|
||||
Check that bundled resources follow the proper structure.
|
||||
|
||||
**Check directory structure:**
|
||||
```bash
|
||||
tree /tmp/extracted/skill-name
|
||||
```
|
||||
|
||||
**Expected structure:**
|
||||
```
|
||||
skill-name/
|
||||
├── SKILL.md (required)
|
||||
├── scripts/ (optional)
|
||||
│ └── *.py, *.sh
|
||||
├── references/ (optional)
|
||||
│ └── *.md
|
||||
└── assets/ (optional)
|
||||
└── templates, images, etc.
|
||||
```
|
||||
|
||||
**Validation checks:**
|
||||
- [ ] `SKILL.md` exists at root
|
||||
- [ ] Resources organized in proper directories
|
||||
- [ ] No unexpected directories (e.g., `__pycache__`, `.git`)
|
||||
|
||||
### Step 4: Validate with skill-creator
|
||||
|
||||
Use the skill-creator validation tools for comprehensive validation.
|
||||
|
||||
**Run quick validation:**
|
||||
```bash
|
||||
~/.claude/plugins/marketplaces/anthropics-skills/skill-creator/scripts/quick_validate.py \
|
||||
/tmp/extracted/skill-name
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
```
|
||||
✅ Skill structure is valid
|
||||
✅ YAML frontmatter is valid
|
||||
✅ Description is informative
|
||||
✅ All resource references are valid
|
||||
```
|
||||
|
||||
**Common validation errors:**
|
||||
- Missing or malformed YAML frontmatter
|
||||
- Description too short or missing activation criteria
|
||||
- References to non-existent files
|
||||
- Improper directory structure
|
||||
|
||||
### Step 5: Content Quality Checks
|
||||
|
||||
Verify the content quality of each skill.
|
||||
|
||||
**Check SKILL.md length:**
|
||||
```bash
|
||||
wc -l /tmp/extracted/skill-name/SKILL.md
|
||||
```
|
||||
|
||||
**Recommended:** 100-500 lines for most skills (lean, with details in references).
|
||||
|
||||
**Check for TODOs:**
|
||||
```bash
|
||||
grep -i "TODO" /tmp/extracted/skill-name/SKILL.md
|
||||
```
|
||||
|
||||
**Expected result:** No TODOs (unless intentional).
|
||||
|
||||
**Check writing style:**
|
||||
```bash
|
||||
# Should use imperative/infinitive form
|
||||
head -n 50 /tmp/extracted/skill-name/SKILL.md
|
||||
```
|
||||
|
||||
**Validation checks:**
|
||||
- [ ] Uses imperative form ("Extract files from..." not "You extract files...")
|
||||
- [ ] Clear section headings
|
||||
- [ ] Code examples properly formatted
|
||||
- [ ] Resources properly referenced
|
||||
|
||||
### Step 6: Bundled Resource Validation
|
||||
|
||||
Verify bundled scripts, references, and assets are intact.
|
||||
|
||||
**Check scripts are executable:**
|
||||
```bash
|
||||
ls -l /tmp/extracted/skill-name/scripts/
|
||||
```
|
||||
|
||||
**Check for shebang in Python/Bash scripts:**
|
||||
```bash
|
||||
head -n 1 /tmp/extracted/skill-name/scripts/*.py
|
||||
head -n 1 /tmp/extracted/skill-name/scripts/*.sh
|
||||
```
|
||||
|
||||
**Expected:** `#!/usr/bin/env python3` or `#!/bin/bash`
|
||||
|
||||
**Verify references are markdown:**
|
||||
```bash
|
||||
file /tmp/extracted/skill-name/references/*.md
|
||||
```
|
||||
|
||||
**Expected:** All files are text/UTF-8
|
||||
|
||||
**Validation checks:**
|
||||
- [ ] Scripts have proper shebangs
|
||||
- [ ] Scripts are executable (or will be made executable)
|
||||
- [ ] References are readable markdown
|
||||
- [ ] Assets are in expected formats
|
||||
|
||||
## Automated Validation Script
|
||||
|
||||
For batch validation of multiple skills:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# validate_all_skills.sh
|
||||
|
||||
EXTRACTED_DIR="/tmp/extracted"
|
||||
SKILL_CREATOR_VALIDATOR="$HOME/.claude/plugins/marketplaces/anthropics-skills/skill-creator/scripts/quick_validate.py"
|
||||
|
||||
echo "Validating all skills in $EXTRACTED_DIR..."
|
||||
|
||||
for skill_dir in "$EXTRACTED_DIR"/*; do
|
||||
if [ -d "$skill_dir" ] && [ -f "$skill_dir/SKILL.md" ]; then
|
||||
skill_name=$(basename "$skill_dir")
|
||||
echo ""
|
||||
echo "=== Validating: $skill_name ==="
|
||||
|
||||
# Run quick validation
|
||||
if [ -f "$SKILL_CREATOR_VALIDATOR" ]; then
|
||||
python3 "$SKILL_CREATOR_VALIDATOR" "$skill_dir"
|
||||
else
|
||||
echo "⚠️ Skill creator validator not found, skipping automated validation"
|
||||
fi
|
||||
|
||||
# Check for TODOs
|
||||
if grep -q "TODO" "$skill_dir/SKILL.md"; then
|
||||
echo "⚠️ Warning: Found TODOs in SKILL.md"
|
||||
fi
|
||||
|
||||
# Count files
|
||||
file_count=$(find "$skill_dir" -type f | wc -l)
|
||||
echo "📁 Files: $file_count"
|
||||
fi
|
||||
done
|
||||
|
||||
echo ""
|
||||
echo "✅ Validation complete!"
|
||||
```
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
bash validate_all_skills.sh
|
||||
```
|
||||
|
||||
## Quality Assurance Checklist
|
||||
|
||||
Use this checklist after unmixing:
|
||||
|
||||
### General Extraction Quality
|
||||
- [ ] File count matches expected count
|
||||
- [ ] Directory structure matches repomix directory listing
|
||||
- [ ] No extraction errors in console output
|
||||
- [ ] All files are UTF-8 encoded and readable
|
||||
- [ ] No binary files incorrectly extracted as text
|
||||
|
||||
### Skill Quality (if applicable)
|
||||
- [ ] Each skill has a valid `SKILL.md`
|
||||
- [ ] YAML frontmatter is well-formed
|
||||
- [ ] Description includes activation triggers
|
||||
- [ ] Writing style is imperative/infinitive
|
||||
- [ ] Resources are properly organized (scripts/, references/, assets/)
|
||||
- [ ] No TODOs or placeholder text
|
||||
- [ ] Scripts have proper shebangs and permissions
|
||||
- [ ] References are informative markdown
|
||||
- [ ] skill-creator validation passes
|
||||
|
||||
### Content Integrity
|
||||
- [ ] Random spot-checks show correct content
|
||||
- [ ] Code examples are properly formatted
|
||||
- [ ] No XML/JSON escape artifacts
|
||||
- [ ] File sizes are reasonable
|
||||
- [ ] No truncated files
|
||||
|
||||
### Ready for Use
|
||||
- [ ] Extracted to appropriate location
|
||||
- [ ] Scripts made executable (if needed)
|
||||
- [ ] Skills ready for installation to `~/.claude/skills/`
|
||||
- [ ] Documentation reviewed and understood
|
||||
|
||||
## Common Issues and Solutions
|
||||
|
||||
### Issue: File Count Mismatch
|
||||
|
||||
**Symptom:** Fewer files extracted than expected.
|
||||
|
||||
**Possible causes:**
|
||||
- Binary files excluded (expected)
|
||||
- Malformed file blocks in repomix file
|
||||
- Wrong format detection
|
||||
|
||||
**Solution:**
|
||||
1. Check repomix `<file_summary>` section for exclusion notes
|
||||
2. Manually inspect repomix file for file blocks
|
||||
3. Verify format detection was correct
|
||||
|
||||
### Issue: Malformed YAML Frontmatter
|
||||
|
||||
**Symptom:** skill-creator validation fails on YAML.
|
||||
|
||||
**Possible causes:**
|
||||
- Extraction didn't preserve line breaks correctly
|
||||
- Content had literal `---` that broke frontmatter
|
||||
|
||||
**Solution:**
|
||||
1. Manually inspect `SKILL.md` frontmatter
|
||||
2. Ensure opening `---` is on line 1
|
||||
3. Ensure closing `---` is on its own line
|
||||
4. Check for stray `---` in description
|
||||
|
||||
### Issue: Missing Resource Files
|
||||
|
||||
**Symptom:** References to scripts/references not found.
|
||||
|
||||
**Possible causes:**
|
||||
- Resource files excluded from repomix
|
||||
- Extraction path mismatch
|
||||
|
||||
**Solution:**
|
||||
1. Check repomix file for resource file blocks
|
||||
2. Verify resource was in original packed content
|
||||
3. Check extraction console output for errors
|
||||
|
||||
### Issue: Permission Errors on Scripts
|
||||
|
||||
**Symptom:** Scripts not executable.
|
||||
|
||||
**Possible causes:**
|
||||
- Permissions not preserved during extraction
|
||||
- Scripts need to be marked executable
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Make all scripts executable
|
||||
find /tmp/extracted -name "*.py" -exec chmod +x {} \;
|
||||
find /tmp/extracted -name "*.sh" -exec chmod +x {} \;
|
||||
```
|
||||
|
||||
### Issue: Encoding Problems
|
||||
|
||||
**Symptom:** Special characters appear garbled.
|
||||
|
||||
**Possible causes:**
|
||||
- Repomix file not UTF-8
|
||||
- Extraction script encoding mismatch
|
||||
|
||||
**Solution:**
|
||||
1. Verify repomix file encoding: `file -i repomix-file.xml`
|
||||
2. Re-extract with explicit UTF-8 encoding
|
||||
3. Check original files for encoding issues
|
||||
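A quick programmatic check for undecodable bytes (a sketch operating on raw bytes, useful for locating problem offsets before extraction):

```python
def find_non_utf8(data: bytes):
    """Return byte offsets that fail UTF-8 decoding (empty list if clean)."""
    bad, pos = [], 0
    while pos < len(data):
        try:
            data[pos:].decode('utf-8')
            break  # the rest decodes cleanly
        except UnicodeDecodeError as e:
            bad.append(pos + e.start)  # absolute offset of the bad byte
            pos += e.start + 1         # skip past it and keep scanning
    return bad

assert find_non_utf8(b'ok\xffok') == [2]
assert find_non_utf8('héllo'.encode('utf-8')) == []
```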
|
||||
## Post-Validation Actions
|
||||
|
||||
### For Valid Skills
|
||||
|
||||
**Install to Claude Code:**
|
||||
```bash
|
||||
# Copy to skills directory
|
||||
cp -r /tmp/extracted/skill-name ~/.claude/skills/
|
||||
|
||||
# Restart Claude Code to load the skill
|
||||
```
|
||||
|
||||
**Package for distribution:**
|
||||
```bash
|
||||
~/.claude/plugins/marketplaces/anthropics-skills/skill-creator/scripts/package_skill.py \
|
||||
/tmp/extracted/skill-name
|
||||
```
|
||||
|
||||
### For Invalid Skills
|
||||
|
||||
**Document issues:**
|
||||
- Create an issues list
|
||||
- Note specific validation failures
|
||||
- Identify required fixes
|
||||
|
||||
**Fix issues:**
|
||||
- Manually edit extracted files
|
||||
- Re-validate after fixes
|
||||
- Document changes made
|
||||
|
||||
**Re-package if needed:**
|
||||
- Once fixed, re-validate
|
||||
- Package for distribution
|
||||
- Test in Claude Code
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Always validate before use** - Don't skip validation steps
|
||||
2. **Extract to temp first** - Review before installing
|
||||
3. **Use automated tools** - skill-creator validation for skills
|
||||
4. **Document findings** - Keep notes on any issues
|
||||
5. **Preserve originals** - Keep the repomix file as backup
|
||||
6. **Spot-check content** - Don't rely solely on automated checks
|
||||
7. **Test in isolation** - Install one skill at a time for testing
|
||||
|
||||
## References
|
||||
|
||||
- Skill creator documentation: `~/.claude/plugins/marketplaces/anthropics-skills/skill-creator/SKILL.md`
|
||||
- Skill authoring best practices: https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices.md
|
||||
- Claude Code skills directory: `~/.claude/skills/`
|
||||
178
repomix-unmixer/scripts/unmix_repomix.py
Executable file
@@ -0,0 +1,178 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Unmix a repomix file to restore original file structure.
|
||||
|
||||
Supports XML, Markdown, and JSON repomix output formats.
|
||||
"""
|
||||
|
||||
import re
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
def unmix_xml(content, output_dir):
|
||||
"""Extract files from repomix XML format."""
|
||||
# Pattern: <file path="...">content</file>
|
||||
file_pattern = r'<file path="([^"]+)">\n(.*?)\n</file>'
|
||||
matches = re.finditer(file_pattern, content, re.DOTALL)
|
||||
|
||||
extracted_files = []
|
||||
for match in matches:
|
||||
file_path = match.group(1)
|
||||
file_content = match.group(2)
|
||||
|
||||
# Create full output path
|
||||
full_path = Path(output_dir) / file_path
|
||||
full_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Write the file
|
||||
        with open(full_path, 'w', encoding='utf-8', newline='') as f:
|
||||
f.write(file_content)
|
||||
|
||||
extracted_files.append(file_path)
|
||||
print(f"✓ Extracted: {file_path}")
|
||||
|
||||
return extracted_files
|
||||
|
||||
|
||||
def unmix_markdown(content, output_dir):
|
||||
"""Extract files from repomix Markdown format."""
|
||||
# Pattern: ## File: path\n```\ncontent\n```
|
||||
file_pattern = r'## File: ([^\n]+)\n```[^\n]*\n(.*?)\n```'
|
||||
matches = re.finditer(file_pattern, content, re.DOTALL)
|
||||
|
||||
extracted_files = []
|
||||
for match in matches:
|
||||
file_path = match.group(1).strip()
|
||||
file_content = match.group(2)
|
||||
|
||||
# Create full output path
|
||||
full_path = Path(output_dir) / file_path
|
||||
full_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Write the file
|
||||
        with open(full_path, 'w', encoding='utf-8', newline='') as f:
|
||||
f.write(file_content)
|
||||
|
||||
extracted_files.append(file_path)
|
||||
print(f"✓ Extracted: {file_path}")
|
||||
|
||||
return extracted_files
|
||||
|
||||
|
||||
def unmix_json(content, output_dir):
|
||||
"""Extract files from repomix JSON format."""
|
||||
try:
|
||||
data = json.loads(content)
|
||||
files = data.get('files', [])
|
||||
|
||||
extracted_files = []
|
||||
for file_entry in files:
|
||||
file_path = file_entry.get('path')
|
||||
file_content = file_entry.get('content', '')
|
||||
|
||||
if not file_path:
|
||||
continue
|
||||
|
||||
# Create full output path
|
||||
full_path = Path(output_dir) / file_path
|
||||
full_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Write the file
|
||||
            with open(full_path, 'w', encoding='utf-8', newline='') as f:
|
||||
f.write(file_content)
|
||||
|
||||
extracted_files.append(file_path)
|
||||
print(f"✓ Extracted: {file_path}")
|
||||
|
||||
return extracted_files
|
||||
except json.JSONDecodeError as e:
|
||||
print(f"Error: Failed to parse JSON: {e}")
|
||||
return []
|
||||
|
||||
|
||||
def detect_format(content):
|
||||
"""Detect the repomix file format."""
|
||||
# Check for XML format
|
||||
if '<file path=' in content and '</file>' in content:
|
||||
return 'xml'
|
||||
|
||||
# Check for JSON format
|
||||
if content.strip().startswith('{') and '"files"' in content:
|
||||
return 'json'
|
||||
|
||||
# Check for Markdown format
|
||||
if '## File:' in content:
|
||||
return 'markdown'
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def unmix_repomix(repomix_file, output_dir):
|
||||
"""Extract files from a repomix file (auto-detects format)."""
|
||||
|
||||
# Read the repomix file
|
||||
    with open(repomix_file, 'r', encoding='utf-8', newline='') as f:
|
||||
content = f.read()
|
||||
|
||||
# Detect format
|
||||
format_type = detect_format(content)
|
||||
|
||||
if format_type is None:
|
||||
print("Error: Could not detect repomix format")
|
||||
print("Expected XML (<file path=...>), Markdown (## File:), or JSON format")
|
||||
return []
|
||||
|
||||
print(f"Detected format: {format_type.upper()}")
|
||||
|
||||
# Extract based on format
|
||||
if format_type == 'xml':
|
||||
return unmix_xml(content, output_dir)
|
||||
elif format_type == 'markdown':
|
||||
return unmix_markdown(content, output_dir)
|
||||
elif format_type == 'json':
|
||||
return unmix_json(content, output_dir)
|
||||
|
||||
return []
|
||||
|
||||
|
||||
def main():
|
||||
"""Main entry point."""
|
||||
if len(sys.argv) < 2:
|
||||
print("Usage: unmix_repomix.py <repomix_file> [output_directory]")
|
||||
print()
|
||||
print("Arguments:")
|
||||
print(" repomix_file Path to the repomix output file (XML, Markdown, or JSON)")
|
||||
print(" output_directory Optional: Directory to extract files to (default: ./extracted)")
|
||||
print()
|
||||
print("Examples:")
|
||||
print(" unmix_repomix.py skills.xml /tmp/extracted-skills")
|
||||
print(" unmix_repomix.py repo-output.md")
|
||||
sys.exit(1)
|
||||
|
||||
repomix_file = sys.argv[1]
|
||||
output_dir = sys.argv[2] if len(sys.argv) > 2 else "./extracted"
|
||||
|
||||
# Validate input file exists
|
||||
if not os.path.exists(repomix_file):
|
||||
print(f"Error: File not found: {repomix_file}")
|
||||
sys.exit(1)
|
||||
|
||||
print(f"Unmixing {repomix_file}...")
|
||||
print(f"Output directory: {output_dir}\n")
|
||||
|
||||
# Extract files
|
||||
extracted = unmix_repomix(repomix_file, output_dir)
|
||||
|
||||
if not extracted:
|
||||
print("\n⚠️ No files extracted!")
|
||||
print("Check that the input file is a valid repomix output file.")
|
||||
sys.exit(1)
|
||||
|
||||
print(f"\n✅ Successfully extracted {len(extracted)} files!")
|
||||
print(f"\nExtracted files are in: {output_dir}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
202
skill-creator/LICENSE.txt
Normal file
@@ -0,0 +1,202 @@
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
skill-creator/SKILL.md (new file, 224 lines)
---
name: skill-creator
description: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations.
license: Complete terms in LICENSE.txt
---

# Skill Creator

This skill provides guidance for creating effective skills.

## About Skills

Skills are modular, self-contained packages that extend Claude's capabilities by providing specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific domains or tasks—they transform Claude from a general-purpose agent into a specialized agent equipped with procedural knowledge that no model can fully possess.
### What Skills Provide

1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks
### Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter metadata (required)
│   │   ├── name: (required)
│   │   └── description: (required)
│   └── Markdown instructions (required)
└── Bundled Resources (optional)
    ├── scripts/ - Executable code (Python/Bash/etc.)
    ├── references/ - Documentation intended to be loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts, etc.)
```

#### SKILL.md (required)

**Metadata Quality:** The `name` and `description` in YAML frontmatter determine when Claude will use the skill. Be specific about what the skill does and when to use it. Use the third person (e.g. "This skill should be used when..." instead of "Use this skill when...").
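For illustration, a minimal frontmatter for a hypothetical `pdf-editor` skill might look like this (the name and description below are invented for the example, not part of this repository):

```yaml
---
name: pdf-editor
description: Tools for editing PDF files. This skill should be used when users ask to rotate, merge, split, or fill forms in PDF documents.
---
```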
#### Bundled Resources (optional)

##### Scripts (`scripts/`)

Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.

- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Claude for patching or environment-specific adjustments
##### References (`references/`)

Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking.

- **When to include**: For documentation that Claude should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Claude determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.
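As an illustration of the grep-pattern best practice, a SKILL.md for a hypothetical database skill could document a search command like the one below, so the relevant section of a large reference file can be located without reading the whole file (the file name and headings here are stand-ins created for the demo):

```shell
# Stand-in for a large references/schema.md; '## ' headings mark each table's section.
printf '## users_table\nid INT\n## orders_table\nid INT\n' > /tmp/schema_demo.md

# Jump straight to the section for one table instead of loading the whole file.
grep -n '^## users_table' /tmp/schema_demo.md   # prints "1:## users_table"
```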
##### Assets (`assets/`)

Files not intended to be loaded into context, but rather used within the output Claude produces.

- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Claude to use files without loading them into context
##### Privacy and Path References

**CRITICAL**: Skills intended for public distribution must not contain user-specific or company-specific information:

- **Forbidden**: Absolute paths to user directories (`/home/username/`, `/Users/username/`, `/mnt/c/Users/username/`)
- **Forbidden**: Personal usernames, company names, department names, product names
- **Forbidden**: OneDrive paths, cloud storage paths, or any environment-specific absolute paths
- **Allowed**: Relative paths within the skill bundle (`scripts/example.py`, `references/guide.md`)
- **Allowed**: Standard placeholders (`~/workspace/project`, `username`, `your-company`)
- **Best practice**: Use generic examples and placeholders; all paths should reference bundled skill files or use standard environment-agnostic patterns
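A rough pre-publication check for the forbidden path patterns above can be scripted. The sketch below covers only the three path families listed and is an illustrative starting point, not an exhaustive audit:

```python
import re

# Path prefixes the privacy guidance above forbids in public skills.
FORBIDDEN_PATTERNS = [
    r"/home/[A-Za-z0-9._-]+/",
    r"/Users/[A-Za-z0-9._-]+/",
    r"/mnt/c/Users/[A-Za-z0-9._-]+/",
]

def find_privacy_violations(text: str) -> list[str]:
    """Return the forbidden patterns that match anywhere in the text."""
    return [p for p in FORBIDDEN_PATTERNS if re.search(p, text)]

# Relative bundle paths pass; user-directory paths are flagged.
assert find_privacy_violations("See scripts/example.py") == []
assert find_privacy_violations("/Users/alice/notes.md") == [r"/Users/[A-Za-z0-9._-]+/"]
```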
### Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Claude (Unlimited*)

*Unlimited because scripts can be executed without reading them into the context window.
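As a rough sanity check on level 2, a SKILL.md body can be counted against the ~5k-word budget. The sketch below makes the simplifying assumption that whitespace-separated tokens approximate words and that the frontmatter is delimited by the first two `---` markers:

```python
def skill_body_word_count(skill_md: str) -> int:
    """Count words in SKILL.md after the closing '---' of the frontmatter."""
    parts = skill_md.split("---", 2)
    # If no frontmatter is found, count the whole document.
    body = parts[2] if len(parts) == 3 else skill_md
    return len(body.split())

sample = "---\nname: demo\ndescription: Example.\n---\n\n# Demo\n\nOne two three."
assert skill_body_word_count(sample) <= 5000
```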
### Skill Creation Best Practice

Anthropic has written skill authoring best practices; retrieve that document before creating or updating any skill: https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices.md
## Skill Creation Process

To create a skill, follow the steps below in order, skipping a step only when there is a clear reason it does not apply.
### Step 1: Understanding the Skill with Concrete Examples

Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.

To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.

For example, when building an image-editor skill, relevant questions include:

- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"

To avoid overwhelming users, avoid asking too many questions in a single message. Start with the most important questions and follow up as needed.

Conclude this step when there is a clear sense of the functionality the skill should support.
### Step 2: Planning the Reusable Skill Contents

To turn concrete examples into an effective skill, analyze each example by:

1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly

Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:

1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill

Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:

1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill

Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:

1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill

To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.
### Step 3: Initializing the Skill

At this point, it is time to actually create the skill.

Skip this step only if the skill being developed already exists and only iteration or packaging is needed; in that case, continue to the next step.

When creating a new skill from scratch, always run the `init_skill.py` script. It generates a template skill directory that includes everything a skill requires, making skill creation faster and more reliable.

Usage:

```bash
scripts/init_skill.py <skill-name> --path <output-directory>
```

The script:

- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Creates example resource directories: `scripts/`, `references/`, and `assets/`
- Adds example files in each directory that can be customized or deleted

After initialization, customize or remove the generated SKILL.md and example files as needed.
### Step 4: Edit the Skill

When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Claude to use. Focus on including information that would be beneficial and non-obvious to Claude. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Claude instance execute these tasks more effectively.

#### Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.

Also, delete any example files and directories not needed for the skill. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.

#### Update SKILL.md

**Writing Style:** Write the entire skill using **imperative/infinitive form** (verb-first instructions), not second person. Use objective, instructional language (e.g., "To accomplish X, do Y" rather than "You should do X" or "If you need to do X"). This maintains consistency and clarity for AI consumption.

To complete SKILL.md, answer the following questions:

1. What is the purpose of the skill, in a few sentences?
2. When should the skill be used?
3. In practice, how should Claude use the skill? All reusable skill contents developed above should be referenced so that Claude knows how to use them.
### Step 5: Packaging a Skill

Once the skill is ready, it should be packaged into a distributable zip file that gets shared with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:

```bash
scripts/package_skill.py <path/to/skill-folder>
```

Optional output directory specification:

```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```

The packaging script will:

1. **Validate** the skill automatically, checking:
   - YAML frontmatter format and required fields
   - Skill naming conventions and directory structure
   - Description completeness and quality
   - File organization and resource references

2. **Package** the skill if validation passes, creating a zip file named after the skill (e.g., `my-skill.zip`) that includes all files and maintains the proper directory structure for distribution.

If validation fails, the script will report the errors and exit without creating a package. Fix any validation errors and run the packaging command again.
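The frontmatter portion of that validation can be sketched roughly as follows. This is a simplified illustration, not the actual `package_skill.py` implementation, and it checks only the required `name` and `description` fields:

```python
def validate_frontmatter(text: str) -> list[str]:
    """Return a list of validation errors for a SKILL.md's YAML frontmatter."""
    if not text.startswith("---\n"):
        return ["missing YAML frontmatter"]
    # Take everything between the opening and closing '---' markers.
    header = text[4:].split("\n---", 1)[0]
    fields = dict(
        line.split(":", 1) for line in header.splitlines() if ":" in line
    )
    errors = []
    for required in ("name", "description"):
        if required not in fields or not fields[required].strip():
            errors.append(f"missing required field: {required}")
    return errors

sample = "---\nname: demo-skill\ndescription: Example skill.\n---\n# Demo"
assert validate_frontmatter(sample) == []
```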
### Step 6: Iterate

After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.

**Iteration workflow:**

1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again
skill-creator/scripts/init_skill.py (new executable file, 303 lines)
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template

Usage:
    init_skill.py <skill-name> --path <path>

Examples:
    init_skill.py my-new-skill --path skills/public
    init_skill.py my-api-helper --path skills/private
    init_skill.py custom-skill --path /custom/location
"""

import sys
from pathlib import Path

SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---

# {skill_title}

## Overview

[TODO: 1-2 sentences explaining what this skill enables]

## Structuring This Skill

[TODO: Choose the structure that best fits this skill's purpose. Common patterns:

**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" → "Reading" → "Creating" → "Editing"
- Structure: ## Overview → ## Workflow Decision Tree → ## Step 1 → ## Step 2...

**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" → "Merge PDFs" → "Split PDFs" → "Extract Text"
- Structure: ## Overview → ## Quick Start → ## Task Category 1 → ## Task Category 2...

**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" → "Colors" → "Typography" → "Features"
- Structure: ## Overview → ## Guidelines → ## Specifications → ## Usage...

**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" → numbered capability list
- Structure: ## Overview → ## Core Capabilities → ### 1. Feature → ### 2. Feature...

Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).

Delete this entire "Structuring This Skill" section when done - it's just guidance.]

## [TODO: Replace with the first main section based on chosen structure]

[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]

## Resources

This skill includes example resource directories that demonstrate how to organize different types of bundled resources:

### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.

**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing

**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.

**Note:** Scripts may be executed without loading into context, but can still be read by Claude for patching or environment adjustments.

### references/
Documentation and reference material intended to be loaded into context to inform Claude's process and thinking.

**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies

**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Claude should reference while working.

### assets/
Files not intended to be loaded into context, but rather used within the output Claude produces.

**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)

**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.

---

**Any unneeded directories can be deleted.** Not every skill requires all three types of resources.
"""
EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}

This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.

Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""

def main():
    print("This is an example script for {skill_name}")
    # TODO: Add actual script logic here
    # This could be data processing, file conversion, API calls, etc.

if __name__ == "__main__":
    main()
'''
EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}

This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.

Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples

## When Reference Docs Are Useful

Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases

## Structure Suggestions

### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits

### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""
EXAMPLE_ASSET = """# Example Asset File

This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.

Asset files are NOT intended to be loaded into context, but rather used within
the output Claude produces.

Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json

## Common Asset Types

- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml

Note: This is a text placeholder. Actual assets can be any file type.
"""

def title_case_skill_name(skill_name):
    """Convert hyphenated skill name to Title Case for display."""
    return ' '.join(word.capitalize() for word in skill_name.split('-'))

def init_skill(skill_name, path):
    """
    Initialize a new skill directory with a template SKILL.md.

    Args:
        skill_name: Name of the skill
        path: Path where the skill directory should be created

    Returns:
        Path to the created skill directory, or None on error
    """
    # Determine skill directory path
    skill_dir = Path(path).resolve() / skill_name

    # Check if directory already exists
    if skill_dir.exists():
        print(f"❌ Error: Skill directory already exists: {skill_dir}")
        return None

    # Create skill directory
    try:
        skill_dir.mkdir(parents=True, exist_ok=False)
        print(f"✅ Created skill directory: {skill_dir}")
    except Exception as e:
        print(f"❌ Error creating directory: {e}")
        return None

    # Create SKILL.md from template
    skill_title = title_case_skill_name(skill_name)
    skill_content = SKILL_TEMPLATE.format(
        skill_name=skill_name,
        skill_title=skill_title
    )

    skill_md_path = skill_dir / 'SKILL.md'
    try:
        skill_md_path.write_text(skill_content)
        print("✅ Created SKILL.md")
    except Exception as e:
        print(f"❌ Error creating SKILL.md: {e}")
        return None

    # Create resource directories with example files
    try:
        # Create scripts/ directory with an example script
        scripts_dir = skill_dir / 'scripts'
        scripts_dir.mkdir(exist_ok=True)
        example_script = scripts_dir / 'example.py'
        example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
        example_script.chmod(0o755)
        print("✅ Created scripts/example.py")

        # Create references/ directory with an example reference doc
        references_dir = skill_dir / 'references'
        references_dir.mkdir(exist_ok=True)
        example_reference = references_dir / 'api_reference.md'
        example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
        print("✅ Created references/api_reference.md")

        # Create assets/ directory with an example asset placeholder
        assets_dir = skill_dir / 'assets'
        assets_dir.mkdir(exist_ok=True)
        example_asset = assets_dir / 'example_asset.txt'
        example_asset.write_text(EXAMPLE_ASSET)
        print("✅ Created assets/example_asset.txt")
    except Exception as e:
        print(f"❌ Error creating resource directories: {e}")
        return None

    # Print next steps
    print(f"\n✅ Skill '{skill_name}' initialized successfully at {skill_dir}")
    print("\nNext steps:")
    print("1. Edit SKILL.md to complete the TODO items and update the description")
    print("2. Customize or delete the example files in scripts/, references/, and assets/")
    print("3. Run the validator when ready to check the skill structure")

    return skill_dir


def main():
    if len(sys.argv) < 4 or sys.argv[2] != '--path':
        print("Usage: init_skill.py <skill-name> --path <path>")
        print("\nSkill name requirements:")
        print("  - Hyphen-case identifier (e.g., 'data-analyzer')")
        print("  - Lowercase letters, digits, and hyphens only")
        print("  - Max 40 characters")
        print("  - Must match directory name exactly")
        print("\nExamples:")
        print("  init_skill.py my-new-skill --path skills/public")
        print("  init_skill.py my-api-helper --path skills/private")
        print("  init_skill.py custom-skill --path /custom/location")
        sys.exit(1)

    skill_name = sys.argv[1]
    path = sys.argv[3]

    print(f"🚀 Initializing skill: {skill_name}")
    print(f"   Location: {path}")
    print()

    result = init_skill(skill_name, path)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
110  skill-creator/scripts/package_skill.py  (Executable file)
@@ -0,0 +1,110 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable zip file of a skill folder

Usage:
    python utils/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python utils/package_skill.py skills/public/my-skill
    python utils/package_skill.py skills/public/my-skill ./dist
"""

import sys
import zipfile
from pathlib import Path
from quick_validate import validate_skill


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a zip file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the zip file (defaults to the current directory)

    Returns:
        Path to the created zip file, or None on error
    """
    skill_path = Path(skill_path).resolve()

    # Validate that the skill folder exists
    if not skill_path.exists():
        print(f"❌ Error: Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"❌ Error: Path is not a directory: {skill_path}")
        return None

    # Validate that SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"❌ Error: SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("🔍 Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"❌ Validation failed: {message}")
        print("   Please fix the validation errors before packaging.")
        return None
    print(f"✅ {message}\n")

    # Determine the output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    zip_filename = output_path / f"{skill_name}.zip"

    # Create the zip file
    try:
        with zipfile.ZipFile(zip_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory
            for file_path in skill_path.rglob('*'):
                if file_path.is_file():
                    # Calculate the relative path within the zip
                    arcname = file_path.relative_to(skill_path.parent)
                    zipf.write(file_path, arcname)
                    print(f"  Added: {arcname}")

        print(f"\n✅ Successfully packaged skill to: {zip_filename}")
        return zip_filename

    except Exception as e:
        print(f"❌ Error creating zip file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python utils/package_skill.py skills/public/my-skill")
        print("  python utils/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"📦 Packaging skill: {skill_path}")
    if output_dir:
        print(f"   Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
65  skill-creator/scripts/quick_validate.py  (Executable file)
@@ -0,0 +1,65 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""

import sys
import re
from pathlib import Path


def validate_skill(skill_path):
    """Basic validation of a skill."""
    skill_path = Path(skill_path)

    # Check that SKILL.md exists
    skill_md = skill_path / 'SKILL.md'
    if not skill_md.exists():
        return False, "SKILL.md not found"

    # Read and validate frontmatter
    content = skill_md.read_text()
    if not content.startswith('---'):
        return False, "No YAML frontmatter found"

    # Extract frontmatter
    match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter = match.group(1)

    # Check required fields
    if 'name:' not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if 'description:' not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    # Extract the name for validation
    name_match = re.search(r'name:\s*(.+)', frontmatter)
    if name_match:
        name = name_match.group(1).strip()
        # Check naming convention (hyphen-case: lowercase with hyphens)
        if not re.match(r'^[a-z0-9-]+$', name):
            return False, f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
        if name.startswith('-') or name.endswith('-') or '--' in name:
            return False, f"Name '{name}' cannot start/end with a hyphen or contain consecutive hyphens"

    # Extract and validate the description
    desc_match = re.search(r'description:\s*(.+)', frontmatter)
    if desc_match:
        description = desc_match.group(1).strip()
        # Check for angle brackets
        if '<' in description or '>' in description:
            return False, "Description cannot contain angle brackets (< or >)"

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
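
The frontmatter-extraction regex above can be exercised in isolation to see exactly what it captures (a minimal sketch; the sample SKILL.md content is hypothetical):

```python
import re

# Hypothetical SKILL.md content with YAML frontmatter.
content = "---\nname: my-skill\ndescription: Example skill\n---\n# My Skill"

# Same pattern as validate_skill: lazily capture everything between the two '---' fences.
match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
print(match.group(1))  # name: my-skill\ndescription: Example skill
```

The non-greedy `(.*?)` with `re.DOTALL` is what stops the capture at the first closing `---`, even though the body may contain further dashes.
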
213  statusline-generator/SKILL.md  (Normal file)
@@ -0,0 +1,213 @@
---
name: statusline-generator
description: Configures and customizes Claude Code statuslines with multi-line layouts, cost tracking via ccusage, git status indicators, and customizable colors. Activates for statusline setup, installation, configuration, customization, color changes, cost display, git status integration, or troubleshooting statusline issues.
---

# Statusline Generator

## Overview

This skill provides tools and guidance for creating and customizing Claude Code statuslines. It generates multi-line statuslines optimized for portrait screens, integrates with `ccusage` for session/daily cost tracking, displays git branch status, and supports color customization.

## When to Use This Skill

This skill activates for:
- Statusline configuration requests for Claude Code
- Cost information display (session/daily costs)
- Multi-line layouts for portrait or narrow screens
- Statusline color or format customization
- Statusline display or cost tracking issues
- Git status or path shortening features

## Quick Start

### Basic Installation

Install the default multi-line statusline:

1. Run the installation script:
   ```bash
   bash scripts/install_statusline.sh
   ```

2. Restart Claude Code to see the statusline

The default statusline displays:
- **Line 1**: `username (model) [session_cost/daily_cost]`
- **Line 2**: `current_path`
- **Line 3**: `[git:branch*+]`

### Manual Installation

Alternatively, install manually:

1. Copy `scripts/generate_statusline.sh` to `~/.claude/statusline.sh`
2. Make it executable: `chmod +x ~/.claude/statusline.sh`
3. Update `~/.claude/settings.json`:
   ```json
   {
     "statusLine": {
       "type": "command",
       "command": "bash /home/username/.claude/statusline.sh",
       "padding": 0
     }
   }
   ```

## Statusline Features

### Multi-Line Layout

The statusline uses a 3-line layout optimized for portrait screens:

```
username (Sonnet 4.5 [1M]) [$0.26/$25.93]
~/workspace/java/ready-together-svc
[git:feature/branch-name*+]
```

**Benefits:**
- Shorter lines fit narrow screens
- Clear visual separation of information types
- No horizontal scrolling needed

### Cost Tracking Integration

Cost tracking via `ccusage`:
- **Session Cost**: Current conversation cost
- **Daily Cost**: Total cost for today
- **Format**: `[$session/$daily]` in magenta
- **Caching**: 2-minute cache to avoid a performance impact
- **Background Fetch**: First run loads costs asynchronously

**Requirements:** `ccusage` must be installed and in PATH. See `references/ccusage_integration.md` for installation and troubleshooting.

### Model Name Shortening

Model names are automatically shortened:
- `"Sonnet 4.5 (with 1M token context)"` → `"Sonnet 4.5 [1M]"`
- `"Opus 4.1 (with 500K token context)"` → `"Opus 4.1 [500K]"`

This saves horizontal space while preserving key information.

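The shortening can be reproduced with a small `sed` substitution (a minimal sketch of the idea; the statusline script applies the same kind of pattern):

```shell
model_full="Sonnet 4.5 (with 1M token context)"
# Rewrite "<name> (with <size> token context)" as "<name> [<size>]", then trim trailing spaces.
model=$(echo "$model_full" | sed -E 's/(.*)\(with ([0-9]+[KM]) token context\)/\1[\2]/' | sed 's/ *$//')
echo "$model"   # Sonnet 4.5 [1M]
```

Names without the `(with ... token context)` suffix pass through the substitution unchanged, so plain model names are displayed as-is.
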
### Git Status Indicators

Git branch status shows:
- **Yellow**: Clean branch (no changes)
- **Red**: Dirty branch (uncommitted changes)
- **Indicators**:
  - `*` - Modified or staged files
  - `+` - Untracked files
  - Example: `[git:main*+]` - modified and untracked files present

### Path Shortening

Paths are shortened:
- The home directory is replaced with `~`
- Example: `/home/username/workspace/project` → `~/workspace/project`

### Color Scheme

Default colors optimized for visibility:
- **Username**: Bright Green (`\033[01;32m`)
- **Model**: Bright Cyan (`\033[01;36m`)
- **Costs**: Bright Magenta (`\033[01;35m`)
- **Path**: Bright White (`\033[01;37m`)
- **Git (clean)**: Bright Yellow (`\033[01;33m`)
- **Git (dirty)**: Bright Red (`\033[01;31m`)

## Customization

### Changing Colors

Customize colors by editing `~/.claude/statusline.sh` and modifying the ANSI color codes in the final `printf` statement. See `references/color_codes.md` for available colors.

**Example: Change username to blue**
```bash
# Find this line:
printf '\033[01;32m%s\033[00m \033[01;36m(%s)\033[00m%s\n\033[01;37m%s\033[00m\n%s' \

# Change \033[01;32m (green) to \033[01;34m (blue):
printf '\033[01;34m%s\033[00m \033[01;36m(%s)\033[00m%s\n\033[01;37m%s\033[00m\n%s' \
```

### Single-Line Layout

Convert to a single-line layout by modifying the final `printf`:

```bash
# Replace:
printf '\033[01;32m%s\033[00m \033[01;36m(%s)\033[00m%s\n\033[01;37m%s\033[00m\n%s' \
    "$username" "$model" "$cost_info" "$short_path" "$git_info"

# With:
printf '\033[01;32m%s\033[00m \033[01;36m(%s)\033[00m:\033[01;37m%s\033[00m%s%s' \
    "$username" "$model" "$short_path" "$git_info" "$cost_info"
```

### Disabling Cost Tracking

If `ccusage` is unavailable or not desired:

1. Comment out the cost section in the script (lines ~47-73)
2. Remove the `%s` for `$cost_info` from the final `printf`

See `references/ccusage_integration.md` for details.

### Adding Custom Elements

Add custom information (e.g., hostname, time):

```bash
# Add variables before the final printf:
hostname=$(hostname -s)
current_time=$(date +%H:%M)

# Update printf to include the new elements:
printf '\033[01;32m%s@%s\033[00m \033[01;36m(%s)\033[00m%s [%s]\n...' \
    "$username" "$hostname" "$model" "$cost_info" "$current_time" ...
```

## Troubleshooting

### Costs Not Showing

**Check:**
1. Is `ccusage` installed? Run `which ccusage`
2. Test `ccusage` manually: `ccusage session --json --offline -o desc`
3. Wait 5-10 seconds after the first display (background fetch)
4. Check the cache: `ls -lh /tmp/claude_cost_cache_*.txt`

**Solution:** See `references/ccusage_integration.md` for detailed troubleshooting.

### Colors Hard to Read

**Solution:** Adjust colors for your terminal background using `references/color_codes.md`. Bright colors (`01;3X`) are generally more visible than regular ones (`00;3X`).

### Statusline Not Updating

**Check:**
1. Verify that settings.json points to the correct script path
2. Ensure the script is executable: `chmod +x ~/.claude/statusline.sh`
3. Restart Claude Code

### Git Status Not Showing

**Check:**
1. Are you in a git repository?
2. Test git commands: `git branch --show-current`
3. Check git permissions in the directory

## Resources

### scripts/generate_statusline.sh
Main statusline script with all features (multi-line layout, ccusage, git, colors). Copy this to `~/.claude/statusline.sh` for use.

### scripts/install_statusline.sh
Automated installation script that copies the statusline script and updates settings.json.

### references/color_codes.md
Complete ANSI color code reference for customizing statusline colors. Load when users request color customization.

### references/ccusage_integration.md
Detailed explanation of the ccusage integration, caching strategy, JSON structure, and troubleshooting. Load when users experience cost tracking issues or want to understand how it works.
166  statusline-generator/references/ccusage_integration.md  (Normal file)
@@ -0,0 +1,166 @@
# ccusage Integration Reference

This reference explains how the statusline integrates with `ccusage` for cost tracking, and how to troubleshoot that integration.

## What is ccusage?

`ccusage` is a command-line tool that tracks Claude Code usage and costs by reading conversation transcripts. It provides session-based and daily cost reporting.

## How the Statusline Uses ccusage

The statusline script calls `ccusage` to display session and daily costs:

```bash
session=$(ccusage session --json --offline -o desc 2>/dev/null | jq -r '.sessions[0].totalCost' 2>/dev/null | xargs printf "%.2f")
daily=$(ccusage daily --json --offline -o desc 2>/dev/null | jq -r '.daily[0].totalCost' 2>/dev/null | xargs printf "%.2f")
```

### Key Features

1. **JSON Output**: Uses the `--json` flag for machine-readable output
2. **Offline Mode**: Uses `--offline` to avoid fetching pricing data (faster)
3. **Descending Order**: Uses `-o desc` to get the most recent data first
4. **Error Suppression**: Redirects errors to `/dev/null` to prevent statusline clutter

## Caching Strategy

To avoid slowing down the statusline, costs are cached:

- **Cache File**: `/tmp/claude_cost_cache_YYYYMMDD_HHMM.txt`
- **Cache Duration**: 2 minutes (refreshes based on the minute timestamp)
- **Background Refresh**: The first run fetches costs in the background
- **Fallback**: Uses a previous cache (up to 10 minutes old) while refreshing

### Cache Behavior

1. **First Display**: Statusline shows without costs
2. **2-5 Seconds Later**: Costs appear after the background fetch completes
3. **Next 2 Minutes**: Cached costs are shown instantly
4. **After 2 Minutes**: A new cache is generated in the background
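
The cache-hit path can be sketched in a few lines (a minimal sketch mirroring the script's approach; the minute-granularity filename plus the `find -mmin +2` cleanup is what produces the roughly 2-minute window):

```shell
cost_info=""
# One cache file per minute, e.g. /tmp/claude_cost_cache_20251020_1432.txt
cache_file="/tmp/claude_cost_cache_$(date +%Y%m%d_%H%M).txt"

# Drop caches older than 2 minutes so stale costs are not reused indefinitely.
find /tmp -name "claude_cost_cache_*.txt" -mmin +2 -delete 2>/dev/null

if [ -f "$cache_file" ]; then
    cost_info=$(cat "$cache_file")   # cache hit: shown instantly
fi
```

On a miss, the script falls through to the background `ccusage` fetch described above, so the statusline itself never blocks on the network or on transcript parsing.
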

## ccusage JSON Structure

### Session Data
```json
{
  "sessions": [
    {
      "sessionId": "conversation-id",
      "totalCost": 0.26206769999999996,
      "inputTokens": 2065,
      "outputTokens": 1313,
      "lastActivity": "2025-10-20"
    }
  ]
}
```

### Daily Data
```json
{
  "daily": [
    {
      "date": "2025-10-20",
      "totalCost": 25.751092800000013,
      "inputTokens": 16796,
      "outputTokens": 142657
    }
  ]
}
```

## Troubleshooting

### Costs Not Showing

**Symptoms**: Statusline appears but no `[$X.XX/$X.XX]` is shown

**Possible Causes**:
1. ccusage not installed
2. ccusage not in PATH
3. No transcript data available yet
4. Background fetch still in progress

**Solutions**:
```bash
# Check if ccusage is installed
which ccusage

# Test ccusage manually
ccusage session --json --offline -o desc

# Check cache files
ls -lh /tmp/claude_cost_cache_*.txt

# Wait 5-10 seconds and check again (the first fetch runs in the background)
```

### Slow Statusline

**Symptoms**: Statusline takes more than 1 second to appear

**Possible Causes**:
1. Cache not working (being regenerated too often)
2. ccusage taking too long to execute

**Solutions**:
```bash
# Check cache timestamps
ls -lh /tmp/claude_cost_cache_*.txt

# Test ccusage speed
time ccusage session --json --offline -o desc

# If slow, consider disabling cost tracking by commenting out the cost section in the script
```

### Incorrect Costs

**Symptoms**: Costs don't match expected values

**Possible Causes**:
1. Stale cache (showing old data)
2. ccusage database out of sync
3. Multiple Claude sessions confusing the costs

**Solutions**:
```bash
# Clear the cache to force a refresh
rm /tmp/claude_cost_cache_*.txt

# Verify ccusage data
ccusage session -o desc | head -20
ccusage daily -o desc | head -20

# Check the ccusage database location
ls -lh ~/.config/ccusage/
```

## Installing ccusage

If ccusage is not installed:

```bash
# Using npm (Node.js required)
npm install -g ccusage

# Or check the official ccusage repository for the latest installation instructions
```

## Disabling Cost Tracking

To disable costs (e.g., if ccusage is not available), comment out the cost section in `generate_statusline.sh`:

```bash
# Cost information using ccusage with caching
cost_info=""
# cache_file="/tmp/claude_cost_cache_$(date +%Y%m%d_%H%M).txt"
# ... rest of cost section commented out
```

Then update the final printf to remove the `%s` for cost_info:

```bash
printf '\033[01;32m%s\033[00m \033[01;36m(%s)\033[00m\n\033[01;37m%s\033[00m\n%s' \
    "$username" "$model" "$short_path" "$git_info"
```
86  statusline-generator/references/color_codes.md  (Normal file)
@@ -0,0 +1,86 @@
# ANSI Color Codes Reference

This reference provides ANSI escape codes for customizing statusline colors.

## Format

ANSI color codes follow this format:
```
\033[<attributes>m<text>\033[00m
```

- `\033[` - Escape sequence start
- `<attributes>` - Color and style codes (see below)
- `m` - Marks the end of the escape sequence
- `\033[00m` - Reset to default

## Common Color Codes

### Regular Colors
- `\033[00;30m` - Black
- `\033[00;31m` - Red
- `\033[00;32m` - Green
- `\033[00;33m` - Yellow
- `\033[00;34m` - Blue
- `\033[00;35m` - Magenta
- `\033[00;36m` - Cyan
- `\033[00;37m` - White

### Bright/Bold Colors (Used in the Default Statusline)
- `\033[01;30m` - Bright Black (Gray)
- `\033[01;31m` - Bright Red
- `\033[01;32m` - Bright Green
- `\033[01;33m` - Bright Yellow
- `\033[01;34m` - Bright Blue
- `\033[01;35m` - Bright Magenta
- `\033[01;36m` - Bright Cyan
- `\033[01;37m` - Bright White

## Default Statusline Colors

The generated statusline uses these colors by default:

| Element | Color Code | Color Name | Visibility |
|---------|-----------|------------|-----------|
| Username | `\033[01;32m` | Bright Green | Excellent |
| Model | `\033[01;36m` | Bright Cyan | Excellent |
| Costs | `\033[01;35m` | Bright Magenta | Excellent |
| Path | `\033[01;37m` | Bright White | Excellent |
| Git (clean) | `\033[01;33m` | Bright Yellow | Excellent |
| Git (dirty) | `\033[01;31m` | Bright Red | Excellent |

## Customizing Colors

To customize colors in the statusline script, edit the `printf` statements:

### Example: Change username to bright blue
```bash
# Original:
printf '\033[01;32m%s\033[00m' "$username"

# Modified:
printf '\033[01;34m%s\033[00m' "$username"
```

### Example: Change path to yellow
```bash
# Original:
printf '\033[01;37m%s\033[00m' "$short_path"

# Modified:
printf '\033[01;33m%s\033[00m' "$short_path"
```

## Testing Colors

Test color codes in the terminal:
```bash
echo -e "\033[01;32mGreen\033[00m \033[01;36mCyan\033[00m \033[01;35mMagenta\033[00m"
```

## Tips

1. **Always reset**: End each colored section with `\033[00m` to reset colors
2. **Visibility**: Bright colors (`01;3X`) are more visible than regular ones (`00;3X`)
3. **Contrast**: Choose colors that contrast well with your terminal background
4. **Consistency**: Use consistent colors for similar elements across your environment
80  statusline-generator/scripts/generate_statusline.sh  (Normal file)
@@ -0,0 +1,80 @@
#!/usr/bin/env bash

# Read JSON input from stdin
input=$(cat)

# Extract values from the JSON
model_full=$(echo "$input" | jq -r '.model.display_name' 2>/dev/null || echo "Claude")
cwd=$(echo "$input" | jq -r '.workspace.current_dir' 2>/dev/null || pwd)
transcript=$(echo "$input" | jq -r '.transcript_path' 2>/dev/null)

# Shorten model name: "Sonnet 4.5 (with 1M token context)" -> "Sonnet 4.5 [1M]"
model=$(echo "$model_full" | sed -E 's/(.*)\(with ([0-9]+[KM]) token context\)/\1[\2]/' | sed 's/ *$//')

# Get username
username=$(whoami)

# Shorten path (replace home with ~)
short_path="${cwd/#$HOME/~}"

# Git branch status
git_info=""
if [ -d "$cwd/.git" ] || git -C "$cwd" rev-parse --git-dir >/dev/null 2>&1; then
    branch=$(git -C "$cwd" --no-optional-locks branch --show-current 2>/dev/null || echo "detached")

    # Check for modified or staged changes
    status=""
    if ! git -C "$cwd" --no-optional-locks diff --quiet 2>/dev/null || \
       ! git -C "$cwd" --no-optional-locks diff --cached --quiet 2>/dev/null; then
        status="*"
    fi

    # Check for untracked files
    if [ -n "$(git -C "$cwd" --no-optional-locks ls-files --others --exclude-standard 2>/dev/null)" ]; then
        status="${status}+"
    fi

    # Format git info with color
    if [ -n "$status" ]; then
        # Red for dirty
        git_info=$(printf ' \033[01;31m[git:%s%s]\033[00m' "$branch" "$status")
    else
        # Yellow for clean
        git_info=$(printf ' \033[01;33m[git:%s]\033[00m' "$branch")
    fi
fi

# Cost information using ccusage with caching
cost_info=""
cache_file="/tmp/claude_cost_cache_$(date +%Y%m%d_%H%M).txt"

# Clean old cache files (older than 2 minutes)
find /tmp -name "claude_cost_cache_*.txt" -mmin +2 -delete 2>/dev/null

if [ -f "$cache_file" ]; then
    # Use cached costs
    cost_info=$(cat "$cache_file")
else
    # Get costs from ccusage (in the background, so the first run does not block the statusline)
    {
        session=$(ccusage session --json --offline -o desc 2>/dev/null | jq -r '.sessions[0].totalCost' 2>/dev/null | xargs printf "%.2f")
        daily=$(ccusage daily --json --offline -o desc 2>/dev/null | jq -r '.daily[0].totalCost' 2>/dev/null | xargs printf "%.2f")

        if [ -n "$session" ] && [ -n "$daily" ]; then
            printf ' \033[01;35m[$%s/$%s]\033[00m' "$session" "$daily" > "$cache_file"
        fi
    } &

    # Try to use a previous cache while the new one is being generated
    prev_cache=$(find /tmp -name "claude_cost_cache_*.txt" -mmin -10 2>/dev/null | head -1)
    if [ -f "$prev_cache" ]; then
        cost_info=$(cat "$prev_cache")
    fi
fi

# Print the final status line (multi-line format for portrait screens)
# Line 1: username (model) [costs]
# Line 2: path (bright white for better visibility)
# Line 3: [git:branch]
printf '\033[01;32m%s\033[00m \033[01;36m(%s)\033[00m%s\n\033[01;37m%s\033[00m\n%s' \
    "$username" "$model" "$cost_info" "$short_path" "$git_info"
73  statusline-generator/scripts/install_statusline.sh  (Normal file)
@@ -0,0 +1,73 @@
#!/usr/bin/env bash

# Install statusline script to Claude Code configuration directory
# Usage: ./install_statusline.sh [target_path]

set -e

# Determine target path
if [ -n "$1" ]; then
    TARGET_PATH="$1"
else
    TARGET_PATH="$HOME/.claude/statusline.sh"
fi

# Get the directory where this script is located
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SOURCE_SCRIPT="$SCRIPT_DIR/generate_statusline.sh"

# Check if the source script exists
if [ ! -f "$SOURCE_SCRIPT" ]; then
    echo "❌ Error: generate_statusline.sh not found at $SOURCE_SCRIPT"
    exit 1
fi

# Create the .claude directory if it doesn't exist
CLAUDE_DIR=$(dirname "$TARGET_PATH")
if [ ! -d "$CLAUDE_DIR" ]; then
    echo "📁 Creating directory: $CLAUDE_DIR"
    mkdir -p "$CLAUDE_DIR"
fi

# Copy the script
echo "📋 Copying statusline script to: $TARGET_PATH"
cp "$SOURCE_SCRIPT" "$TARGET_PATH"
chmod +x "$TARGET_PATH"

# Update settings.json
SETTINGS_FILE="$HOME/.claude/settings.json"

if [ ! -f "$SETTINGS_FILE" ]; then
    echo "⚠️  Warning: settings.json not found at $SETTINGS_FILE"
    echo "   Please create it manually or restart Claude Code"
    exit 0
fi

# Check if statusLine is already configured
if grep -q '"statusLine"' "$SETTINGS_FILE"; then
    echo "✅ statusLine already configured in settings.json"
    echo "   The current configuration will use the updated script"
else
    echo "📝 Adding statusLine configuration to settings.json"

    # Back up settings.json
    cp "$SETTINGS_FILE" "$SETTINGS_FILE.backup"

    # Add the statusLine configuration using jq
    jq '. + {"statusLine": {"type": "command", "command": "bash '"$TARGET_PATH"'", "padding": 0}}' "$SETTINGS_FILE.backup" > "$SETTINGS_FILE"

    echo "✅ statusLine configuration added"
    echo "   Backup saved to: $SETTINGS_FILE.backup"
fi

echo ""
echo "🎉 Installation complete!"
echo ""
echo "Next steps:"
echo "  1. Restart Claude Code to see your new statusline"
echo "  2. The statusline will show:"
echo "     Line 1: username (model) [session_cost/daily_cost]"
echo "     Line 2: current_path"
echo "     Line 3: [git:branch]"
echo ""
echo "Note: Cost information requires ccusage to be installed and accessible"
108  teams-channel-post-writer/SKILL.md  (Normal file)
@@ -0,0 +1,108 @@
---
name: teams-channel-post-writer
description: Creates educational Teams channel posts for internal knowledge sharing about Claude Code features, tools, and best practices. Applies when writing posts, announcements, or documentation to teach colleagues effective Claude Code usage, announce new features, share productivity tips, or document lessons learned. Provides templates, writing guidelines, and structured approaches emphasizing concrete examples, underlying principles, and connections to best practices such as context engineering. Activates for content involving Teams posts, channel announcements, feature documentation, or tip sharing.
---

# Teams Channel Post Writer

## Overview

Create well-structured, educational Teams channel posts for internal knowledge sharing about Claude Code features and best practices. This skill provides templates, writing guidelines, and a structured workflow to produce consistent, actionable content that helps colleagues learn effective Claude Code usage.

## When to Use This Skill

This skill activates when creating Teams channel posts to:
- Announce and explain new Claude Code features
- Share Claude Code tips and best practices
- Teach effective prompting patterns and workflows
- Connect features to broader engineering principles (e.g., context engineering)
- Document lessons learned from using Claude Code

## Workflow

### 1. Understand the Topic

Gather information about the subject:
- Research the feature or topic thoroughly using official documentation
- Verify release dates and version numbers against changelogs
- Identify the core benefit or principle the post should teach
- Collect concrete examples from real usage

**Research checklist:**
- [ ] Found the official release date/version number
- [ ] Verified feature behavior through testing or documentation
- [ ] Identified authoritative sources to link to
- [ ] Understood the underlying principle or best practice

### 2. Plan the Content

Based on the writing guidelines in `references/writing-guidelines.md`, plan:
- **Hook**: What's new or important about this topic?
- **Core principle**: What best practice does this illustrate?
- **Examples**: What concrete prompts or workflows demonstrate this?
- **Call-to-action**: What should readers try next?

### 3. Draft Using the Template

Start with the template in `assets/post-template.md` and fill in:

1. **Title**: Use an emoji and a clear description
2. **Introduction**: Include the release date and brief context
3. **What it is**: A 1-2 sentence explanation
4. **How to use it**: Show the "Normal vs. Better" pattern with explicit instructions
5. **Why use it**: Explain the underlying principle with four key benefits
6. **Examples**: Provide three or more realistic, concrete prompts
7. **Options/Settings**: List key configurations or parameters
8. **Call-to-action**: End with an actionable next step
9. **Learn more**: Link to authoritative resources

### 4. Apply Writing Guidelines

Review the draft against the quality checklist in `references/writing-guidelines.md`:
- Educational and helpful tone
- "Normal/Better" pattern (not "Wrong/Correct")
- Concrete, realistic examples
- Explains the "why" with principles
- Clear structure with bullets and formatting
- Verified facts and dates

### 5. Save and Share

Save the final post to your team's documentation location with a descriptive filename such as "Claude Code Tips.md" or "[Topic Name].md".

## Key Principles

### Show, Don't Just Tell

Always include concrete examples users can adapt. Use "Normal vs. Better" comparisons to demonstrate improvements without making readers feel criticized.

### Connect to Principles

Don't just describe features; explain the underlying best practices. For example, connect the Explore agent to "context offloading" principles in context engineering.

### Make It Actionable

Be explicit about invocation patterns. Users should be able to copy and paste examples and use them immediately.

### Verify Everything

Always research release dates, verify feature behavior, and link to authoritative sources. Accuracy builds trust.

## Resources

### references/writing-guidelines.md

Comprehensive writing guidelines covering:
- Tone and style standards
- Structure patterns for different post types
- Formatting conventions
- Research requirements
- Quality checklist

Consult this file for detailed guidance on tone, structure, and quality standards.

### assets/post-template.md

Ready-to-use markdown template with placeholder structure for:
- Title and introduction
- Feature explanation
- Usage examples
- Benefits and principles
- Options and settings
- Call-to-action and resources

Copy this template as a starting point for new posts, then customize the content while keeping the proven structure.
teams-channel-post-writer/assets/post-template.md · 40 lines · new file
@@ -0,0 +1,40 @@
## 🎯 [Title]: [Feature/Tool Name]

**New in [Tool Name] ([Date]):** Brief introduction of what this is about.

**What is it?**
[1-2 sentence explanation of the feature/tool/concept]

**How to use it - BE EXPLICIT:**

📝 **Normal:** "[Example of typical approach]"

⭐ **Better:** "[Example of improved approach with explicit instructions]"

**Why use it? ([Key Principle/Best Practice])**
[Explanation of the underlying principle or best practice]

- **Benefit 1**: Explanation
- **Benefit 2**: Explanation
- **Benefit 3**: Explanation
- **Benefit 4**: Explanation

[Optional analogy or comparison to make the concept relatable]

**Example prompts:**

"[Example 1]"

"[Example 2]"

"[Example 3]"

**Key settings/levels/options:**

- `option1` - Description
- `option2` - Description (recommended)
- `option3` - Description

Try it next time you [call to action]!

**Learn more:** [Link to additional resources]
teams-channel-post-writer/references/writing-guidelines.md · 81 lines · new file
@@ -0,0 +1,81 @@
# Teams Channel Post Writing Guidelines

## Purpose

These posts are internal educational content to help team members learn effective Claude Code usage patterns and best practices.

## Tone and Style

- **Educational and helpful**: Focus on teaching concepts, not just announcing features
- **Professional but approachable**: Conversational without being too casual
- **Action-oriented**: Include concrete examples and calls-to-action
- **Concise**: Keep posts scannable with short paragraphs and bullet points

## Structure Principles

### 1. Start with Context
- Always include the release date or timing information when relevant
- Lead with what the feature/tool is and why it matters

### 2. Show, Don't Just Tell
- Use "Normal vs. Better" comparisons instead of "Wrong vs. Correct"
- Include three or more concrete example prompts users can adapt
- Provide realistic use cases from actual development workflows

### 3. Explain the "Why"
- Connect features to broader best practices (e.g., context engineering)
- Use analogies to make technical concepts relatable
- Link to authoritative external resources when applicable

### 4. Make It Actionable
- Be explicit about how to invoke features
- Specify any options, settings, or parameters
- End with a clear call-to-action

## Content Patterns

### Feature Announcements
```
1. What is it? (with release date)
2. Why use it? (principles/benefits)
3. How to use it (with examples)
4. Key options/settings
5. Call-to-action + learn more
```

### Best Practices/Tips
```
1. The challenge/problem
2. The solution/approach
3. Why it works (principles)
4. How to implement (examples)
5. When to use it
6. Related resources
```

## Formatting Standards

- **Emojis**: Use sparingly and only in titles (🔍 🎯 💡 ⚡)
- **Bold**: Use for emphasis on key terms and section headers
- **Code blocks**: Use for example prompts (triple backticks or quotes)
- **Lists**: Use bullets for benefits/features and numbers for sequential steps
- **Comparisons**: Use 📝 for "Normal" and ⭐ for "Better"

## Research Requirements

Before writing any post:
1. **Verify release dates**: Use official documentation or the changelog
2. **Test the feature**: Ensure examples work as described
3. **Find authoritative sources**: Link to official docs or reputable technical blogs
4. **Check for updates**: Ensure the information is current

## Quality Checklist

- [ ] Includes a specific release date or timing
- [ ] Explains the "why" with principles or best practices
- [ ] Provides 3+ concrete, realistic examples
- [ ] Uses the "Normal/Better" pattern (not "Wrong/Correct")
- [ ] Includes a clear call-to-action
- [ ] Links to additional learning resources
- [ ] Uses a professional, educational tone
- [ ] Is scannable with clear structure
- [ ] All facts verified against official sources