feat: enhancement workflow preset system with multi-target CLI
- Add YAML-based enhancement workflow presets shipped inside the package (default, minimal, security-focus, architecture-comprehensive, api-documentation)
- Add `skill-seekers workflows` subcommand: list, show, copy, add, remove, validate
  - copy/add/remove all accept multiple names/files in one invocation with partial-failure behaviour
  - `add --name` override restricted to single-file operations
- Add 5 MCP tools: list_workflows, get_workflow, create_workflow, update_workflow, delete_workflow
- Fix: create command `_add_common_args()` now correctly forwards each `--enhance-workflow` as a separate flag instead of passing the whole list as a single argument
- Update README: reposition as "data layer for AI systems" with AI Skills front and centre
- Update CHANGELOG, QUICK_REFERENCE, CLAUDE.md with workflow preset details
- 1,880+ tests passing

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
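The `--enhance-workflow` forwarding fix can be sketched as follows; this is a minimal illustration of the before/after behaviour, with the helper name and the list-typed `enhance_workflow` value assumed from the commit message:

```python
# Sketch of the flag-forwarding fix in _add_common_args() (names assumed).
def build_argv(enhance_workflow):
    """Build the argv fragment forwarded to sub-scrapers."""
    argv = []
    if enhance_workflow:
        # Buggy version passed the whole list as one argument:
        #   argv.extend(["--enhance-workflow", enhance_workflow])
        # Fixed version emits one flag per workflow:
        for wf in enhance_workflow:
            argv.extend(["--enhance-workflow", wf])
    return argv

print(build_argv(["security-focus", "minimal"]))
# prints ['--enhance-workflow', 'security-focus', '--enhance-workflow', 'minimal']
```

With the buggy variant, `argparse` in the sub-scraper received a Python list repr as a single value, so every workflow was silently ignored.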
30 CHANGELOG.md
@@ -7,6 +7,36 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [Unreleased]

+### Added
+
+#### Enhancement Workflow Preset Management (`skill-seekers workflows`)
+- New `workflows` CLI subcommand to manage enhancement workflow presets
+- Bundled presets shipped as YAML files inside the package (`skill_seekers/workflows/`)
+  - `default`, `minimal`, `security-focus`, `architecture-comprehensive`, `api-documentation`
+- User presets stored in `~/.config/skill-seekers/workflows/`
+- Subcommands:
+  - `skill-seekers workflows list` — List all bundled + user workflows with descriptions
+  - `skill-seekers workflows show <name>` — Print YAML content of a workflow
+  - `skill-seekers workflows copy <name> [name ...]` — Copy bundled workflow(s) to user dir
+  - `skill-seekers workflows add <file.yaml> [file ...]` — Install custom YAML file(s) into user dir
+  - `skill-seekers workflows remove <name> [name ...]` — Delete user workflow(s)
+  - `skill-seekers workflows validate <name|path>` — Parse and validate a workflow
+- `copy`, `add`, `remove` all accept multiple names/files in one command (partial failure: processing continues, and the exit code is non-zero if any item fails)
+- `add --name` flag override is restricted to single-file operations
+- New entry point: `skill-seekers-workflows`
+
+#### Multiple `--enhance-workflow` flags from CLI
+- `skill-seekers create <source> --enhance-workflow security-focus --enhance-workflow minimal` — apply multiple workflows in a single command
+- All workflow management commands (`copy`, `add`, `remove`) now accept multiple names/files in one invocation
+
+### Fixed
+- `create` command `_add_common_args()` now correctly forwards each workflow as a separate `--enhance-workflow` flag to sub-scrapers (previously passed the whole list as a single argument, causing all workflows to be ignored)
+
+### Changed
+- `workflows copy` now accepts one or more workflow names: `skill-seekers workflows copy wf-a wf-b`
+- `workflows add` now accepts one or more YAML files: `skill-seekers workflows add a.yaml b.yaml`
+- `workflows remove` now accepts one or more workflow names: `skill-seekers workflows remove wf-a wf-b`
+
 ## [3.0.0] - 2026-02-10

 ### 🚀 "Universal Intelligence Platform" - Major Release
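The partial-failure behaviour described above for `copy`/`add`/`remove` can be sketched as a generic loop. This is a hypothetical illustration, not the project's actual implementation; all names here are invented:

```python
def process_items(items, handler):
    """Apply handler to every item; keep going on failure, exit non-zero if any failed."""
    failures = 0
    for item in items:
        try:
            handler(item)
        except (OSError, ValueError) as exc:
            print(f"error: {item}: {exc}")
            failures += 1
    return 1 if failures else 0
```

A CLI entry point would typically pass this return value to `sys.exit()` so shell scripts can detect that at least one item failed while the remaining items were still processed.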
12 CLAUDE.md
@@ -290,7 +290,7 @@ pytest tests/test_mcp_fastmcp.py -v

 **Test Architecture:**
 - 46 test files covering all features
 - CI Matrix: Ubuntu + macOS, Python 3.10-3.13
-- **1,765 tests passing** (current), up from 700+ in v2.x, growing to 1,852+ in v3.1.0
+- **1,880+ tests passing** (current), up from 700+ in v2.x, growing to 1,952+ in v3.1.0
 - Must run `pip install -e .` before tests (src/ layout requirement)
 - Tests include create command integration tests, CLI refactor E2E tests

@@ -749,6 +749,7 @@ skill-seekers-install = "skill_seekers.cli.install_skill:main"
 skill-seekers-install-agent = "skill_seekers.cli.install_agent:main"
 skill-seekers-patterns = "skill_seekers.cli.pattern_recognizer:main"  # C3.1 Pattern detection
 skill-seekers-how-to-guides = "skill_seekers.cli.how_to_guide_builder:main"  # C3.3 Guide generation
+skill-seekers-workflows = "skill_seekers.cli.workflows_command:main"  # NEW: Workflow preset management

 # New v3.0.0 Entry Points
 skill-seekers-setup = "skill_seekers.cli.setup_wizard:main"  # NEW: v3.0.0 Setup wizard

@@ -801,7 +802,7 @@ pip install -e .

 Per user instructions in `~/.claude/CLAUDE.md`:
 - "never skip any test. always make sure all test pass"
-- All 1,765+ tests must pass before commits (1,852+ in upcoming v3.1.0)
+- All 1,880+ tests must pass before commits (1,952+ in upcoming v3.1.0)
 - Run full test suite: `pytest tests/ -v`
 - New tests added for create command and CLI refactor work

@@ -2187,8 +2188,11 @@ The `scripts/` directory contains utility scripts:
 - ⚡ **-p Shortcut** - Quick preset selection (`-p quick|standard|comprehensive`)
 - 🔧 **Enhancement Flag Consolidation** - `--enhance-level` (0-3) replaces 3 separate flags
 - 🎨 **Smart Source Detection** - No need to specify whether input is URL, repo, or directory
-- ✅ **1,765 Tests Passing** - All CLI refactor work verified
-- 📚 **Improved Documentation** - CLAUDE.md enhanced with CLI refactor details
+- 🔄 **Enhancement Workflow Presets** - YAML-based presets; `skill-seekers workflows list/show/copy/add/remove/validate`; bundled presets: `default`, `minimal`, `security-focus`, `architecture-comprehensive`, `api-documentation`
+- 🔀 **Multiple Workflows from CLI** - `--enhance-workflow wf-a --enhance-workflow wf-b` chains presets in a single command; `workflows copy/add/remove` all accept multiple names/files at once
+- 🐛 **Bug Fix** - `create` command now correctly forwards multiple `--enhance-workflow` flags to sub-scrapers
+- ✅ **1,880+ Tests Passing** - All CLI refactor + workflow preset work verified
+- 📚 **Improved Documentation** - CLAUDE.md, README, QUICK_REFERENCE updated with workflow preset details

 **v3.0.0 (February 10, 2026) - "Universal Intelligence Platform":**
 - 🚀 **16 Platform Adaptors** - RAG frameworks (LangChain, LlamaIndex, Haystack), vector DBs (Chroma, FAISS, Weaviate, Qdrant), AI coding assistants (Cursor, Windsurf, Cline, Continue.dev), LLM platforms (Claude, Gemini, OpenAI)
199 README.md
@@ -17,117 +17,117 @@ English | [简体中文](https://github.com/yusufkaraaslan/Skill_Seekers/blob/ma
 [](https://x.com/_yUSyUS_)
 [](https://github.com/yusufkaraaslan/Skill_Seekers)

-**🚀 v3.0.0 "Universal Intelligence Platform" - The universal preprocessor for any AI system. Convert documentation, GitHub repos, and PDFs into 16 production-ready formats: LangChain, LlamaIndex, Haystack, Pinecone, Cursor, Windsurf, Cline, Continue.dev, Claude, and any RAG pipeline—in minutes, not hours.**
+**🧠 The data layer for AI systems.** Skill Seekers turns any documentation, GitHub repo, or PDF into structured knowledge assets—ready to power AI Skills (Claude, Gemini, OpenAI), RAG pipelines (LangChain, LlamaIndex, Pinecone), and AI coding assistants (Cursor, Windsurf, Cline) in minutes, not hours.

 > 🌐 **[Visit SkillSeekersWeb.com](https://skillseekersweb.com/)** - Browse 24+ preset configs, share your configs, and access complete documentation!

 > 📋 **[View Development Roadmap & Tasks](https://github.com/users/yusufkaraaslan/projects/2)** - 134 tasks across 10 categories, pick any to contribute!

-## 🚀 **NEW: Universal RAG Preprocessor**
+## 🧠 The Data Layer for AI Systems

-**Skill Seekers is now the data layer for AI systems.** 70% of RAG development time is spent on data preprocessing—scraping, cleaning, chunking, and structuring documentation. **We automate all of it.**
+**Skill Seekers is the universal preprocessing layer** that sits between raw documentation and every AI system that consumes it. Whether you are building Claude skills, a LangChain RAG pipeline, or a Cursor `.cursorrules` file — the data preparation is identical. You do it once, and export to all targets.

 ```bash
-# One command → Production-ready RAG data
-skill-seekers scrape --config configs/react.json
-skill-seekers package output/react --target langchain  # or llama-index, pinecone, cursor
+# One command → structured knowledge asset
+skill-seekers create https://docs.react.dev/
+# or: skill-seekers create facebook/react
+# or: skill-seekers create ./my-project

-# 15 minutes → Ready for: LangChain, LlamaIndex, Haystack, Pinecone, Cursor, Custom RAG
+# Export to any AI system
+skill-seekers package output/react --target claude       # → Claude AI Skill (ZIP)
+skill-seekers package output/react --target langchain    # → LangChain Documents
+skill-seekers package output/react --target llama-index  # → LlamaIndex TextNodes
+skill-seekers package output/react --target cursor       # → .cursorrules
 ```

-### Supported Integrations
+### What gets built

-| Integration | Format | Use Case | Guide |
-|------------|--------|----------|-------|
-| **LangChain** | `Documents` | QA chains, agents, retrievers | [Guide](docs/integrations/LANGCHAIN.md) |
-| **LlamaIndex** | `TextNodes` | Query engines, chat engines | [Guide](docs/integrations/LLAMA_INDEX.md) |
-| **Haystack** | `Documents` | Enterprise RAG pipelines | [Guide](docs/integrations/HAYSTACK.md) |
-| **Pinecone** | Ready for upsert | Production vector search | [Guide](docs/integrations/PINECONE.md) |
-| **Cursor IDE** | `.cursorrules` | AI coding (VS Code fork) | [Guide](docs/integrations/CURSOR.md) |
-| **Windsurf** | `.windsurfrules` | AI coding (Codeium IDE) | [Guide](docs/integrations/WINDSURF.md) |
-| **Cline** | `.clinerules` + MCP | AI coding (VS Code ext) | [Guide](docs/integrations/CLINE.md) |
-| **Continue.dev** | HTTP context | AI coding (any IDE) | [Guide](docs/integrations/CONTINUE_DEV.md) |
-| **Claude AI** | Skills (ZIP) | Claude Code skills | Default |
-| **Gemini** | tar.gz | Google Gemini skills | `--target gemini` |
-| **OpenAI** | ChatGPT format | Custom GPTs | `--target openai` |
+| Output | Target | What it powers |
+|--------|--------|----------------|
+| **Claude Skill** (ZIP + YAML) | `--target claude` | Claude Code, Claude API |
+| **Gemini Skill** (tar.gz) | `--target gemini` | Google Gemini |
+| **OpenAI / Custom GPT** (ZIP) | `--target openai` | GPT-4o, custom assistants |
+| **LangChain Documents** | `--target langchain` | QA chains, agents, retrievers |
+| **LlamaIndex TextNodes** | `--target llama-index` | Query engines, chat engines |
+| **Haystack Documents** | `--target haystack` | Enterprise RAG pipelines |
+| **Pinecone-ready** (Markdown) | `--target markdown` | Vector upsert |
+| **ChromaDB / FAISS / Qdrant** | `--format chroma/faiss/qdrant` | Local vector DBs |
+| **Cursor** `.cursorrules` | `--target claude` → copy | Cursor IDE AI context |
+| **Windsurf / Cline / Continue** | `--target claude` → copy | VS Code, IntelliJ, Vim |

-**Why Skill Seekers for RAG?**
+### Why it matters

-- ⚡ **99% faster preprocessing** - Days → 15-45 minutes
-- ✅ **Production quality** - 700+ tests, battle-tested on 24+ frameworks
-- 🎯 **Smart chunking** - Preserves code blocks, maintains context
-- 📊 **Rich metadata** - Categories, sources, types for filtering
-- 🔄 **Multi-source** - Combine docs + GitHub + PDFs seamlessly
-- 🌐 **Platform-agnostic** - One preprocessing, export anywhere
+- ⚡ **99% faster** — Days of manual data prep → 15–45 minutes
+- 🎯 **AI Skill quality** — 500+ line SKILL.md files with examples, patterns, and guides
+- 📊 **RAG-ready chunks** — Smart chunking preserves code blocks and maintains context
+- 🔄 **Multi-source** — Combine docs + GitHub + PDFs into one knowledge asset
+- 🌐 **One prep, every target** — Export the same asset to 16 platforms without re-scraping
+- ✅ **Battle-tested** — 1,880+ tests, 24+ framework presets, production-ready

 **Read the full story:** [Blog: Universal RAG Preprocessor](docs/blog/UNIVERSAL_RAG_PREPROCESSOR.md)

-## Quick Start: RAG Pipeline
+## Quick Start

 ```bash
 # 1. Install
 pip install skill-seekers

-# 2. Generate documentation (Django example)
-skill-seekers scrape --config configs/django.json  # 15 min
+# Build an AI skill from any source
+skill-seekers create https://docs.django.com/  # web docs
+skill-seekers create django/django             # GitHub repo
+skill-seekers create ./my-codebase             # local project
+skill-seekers create manual.pdf                # PDF

-# 3. Export for your RAG stack
-skill-seekers package output/django --target langchain    # For LangChain
-skill-seekers package output/django --target llama-index  # For LlamaIndex
-
-# 4. Use in your RAG pipeline
-python your_rag_pipeline.py  # Load and query!
+# Export for your use case
+skill-seekers package output/django --target claude     # Claude AI Skill
+skill-seekers package output/django --target langchain  # LangChain RAG
+skill-seekers package output/django --target cursor     # Cursor IDE context
 ```

 **Complete examples:**
 - [Claude AI Skill](examples/claude-skill/) - Skills for Claude Code
 - [LangChain RAG Pipeline](examples/langchain-rag-pipeline/) - QA chain with Chroma
 - [LlamaIndex Query Engine](examples/llama-index-query-engine/) - Chat with memory
 - [Pinecone Upsert](examples/pinecone-upsert/) - Production vector search
 - [Cursor IDE Context](examples/cursor-react-skill/) - Framework-aware AI coding

-## What is Skill Seeker?
+## What is Skill Seekers?

-Skill Seeker is the **universal preprocessing layer for AI systems**. It transforms documentation websites, GitHub repositories, and PDF files into production-ready formats for:
+Skill Seekers is the **data layer for AI systems**. It transforms documentation websites, GitHub repositories, and PDF files into structured knowledge assets for every AI target:

-- **RAG Pipelines** - LangChain, LlamaIndex, Pinecone, Weaviate, Chroma, FAISS
-- **AI Coding Assistants** - Cursor IDE, VS Code, custom tools
-- **Claude AI Skills** - [Claude Code](https://www.anthropic.com/news/skills) and Claude API
-- **Custom GPTs** - OpenAI, Gemini, and other LLM platforms
+| Use Case | What you get | Examples |
+|----------|-------------|----------|
+| **AI Skills** | Comprehensive SKILL.md + references | Claude Code, Gemini, GPT |
+| **RAG Pipelines** | Chunked documents with rich metadata | LangChain, LlamaIndex, Haystack |
+| **Vector Databases** | Pre-formatted data ready for upsert | Pinecone, Chroma, Weaviate, FAISS |
+| **AI Coding Assistants** | Context files your IDE AI reads automatically | Cursor, Windsurf, Cline, Continue.dev |

-Instead of spending days on manual preprocessing, Skill Seeker:
+Instead of spending days on manual preprocessing, Skill Seekers:

-1. **Scrapes** multiple sources (docs, GitHub repos, PDFs) automatically
-2. **Analyzes** code repositories with deep AST parsing
-3. **Detects** conflicts between documentation and code implementation
-4. **Organizes** content into categorized reference files
-5. **Enhances** with AI to extract best examples and key concepts
-6. **Packages** everything into an uploadable `.zip` file for Claude
-
-**Result:** Get comprehensive Claude skills for any framework, API, or tool in 20-40 minutes instead of hours of manual work.
+1. **Ingests** — docs, GitHub repos, local codebases, PDFs
+2. **Analyzes** — deep AST parsing, pattern detection, API extraction
+3. **Structures** — categorized reference files with metadata
+4. **Enhances** — AI-powered SKILL.md generation (Claude, Gemini, or local)
+5. **Exports** — 16 platform-specific formats from one asset

 ## Why Use This?

+### For AI Skill Builders (Claude, Gemini, OpenAI)
+
+- 🎯 **Production-grade Skills** — 500+ line SKILL.md files with code examples, patterns, and guides
+- 🔄 **Enhancement Workflows** — Apply `security-focus`, `architecture-comprehensive`, or custom YAML presets
+- 🎮 **Any Domain** — Game engines (Godot, Unity), frameworks (React, Django), internal tools
+- 🔧 **Teams** — Combine internal docs + code into a single source of truth
+- 📚 **Quality** — AI-enhanced with examples, quick reference, and navigation guidance
+
 ### For RAG Builders & AI Engineers

-- 🤖 **RAG Systems**: Build production-grade Q&A bots, chatbots, documentation portals
-- 🚀 **99% Faster**: Days of preprocessing → 15-45 minutes
-- ✅ **Battle-Tested**: 700+ tests, 24+ framework presets, production-ready
-- 🔄 **Multi-Source**: Combine docs + GitHub + PDFs automatically
-- 🌐 **Platform-Agnostic**: Export to LangChain, LlamaIndex, Pinecone, or custom
-- 📊 **Smart Metadata**: Categories, sources, types → Better retrieval accuracy
+- 🤖 **RAG-ready data** — Pre-chunked LangChain `Documents`, LlamaIndex `TextNodes`, Haystack `Documents`
+- 🚀 **99% faster** — Days of preprocessing → 15–45 minutes
+- 📊 **Smart metadata** — Categories, sources, types → better retrieval accuracy
+- 🔄 **Multi-source** — Combine docs + GitHub + PDFs in one pipeline
+- 🌐 **Platform-agnostic** — Export to any vector DB or framework without re-scraping

 ### For AI Coding Assistant Users

-- 💻 **Cursor IDE**: Generate .cursorrules for framework-specific AI assistance
-- 🎯 **Persistent Context**: AI "knows" your frameworks without manual prompting
-- 📚 **Always Current**: Update docs in 5 minutes, not hours
-
-### For Claude Code Users
-
-- 🎯 **Skills**: Create comprehensive Claude Code skills from any documentation
-- 🎮 **Game Dev**: Generate skills for game engines (Godot, Unity, Unreal)
-- 🔧 **Teams**: Combine internal docs + code into single source of truth
-- 📚 **Learning**: Build skills from docs, code examples, and PDFs
-- 🔍 **Open Source**: Analyze repos to find documentation gaps
+- 💻 **Cursor / Windsurf / Cline** — Generate `.cursorrules` / `.windsurfrules` / `.clinerules` automatically
+- 🎯 **Persistent context** — AI "knows" your frameworks without repeated prompting
+- 📚 **Always current** — Update context in minutes when docs change

 ## Key Features

@@ -525,6 +525,57 @@ skill-seekers analyze --directory tests/ --enhance

 **Full Documentation:** [docs/HOW_TO_GUIDES.md](docs/HOW_TO_GUIDES.md#ai-enhancement-new)

+### 🔄 Enhancement Workflow Presets (**NEW!**)
+
+Reusable YAML-defined enhancement pipelines that control how AI transforms your raw documentation into a polished skill.
+
+- ✅ **5 Bundled Presets** — `default`, `minimal`, `security-focus`, `architecture-comprehensive`, `api-documentation`
+- ✅ **User-Defined Presets** — add custom workflows to `~/.config/skill-seekers/workflows/`
+- ✅ **Multiple Workflows** — chain two or more workflows in one command
+- ✅ **Fully Managed CLI** — list, inspect, copy, add, remove, and validate workflows
+
+```bash
+# Apply a single workflow
+skill-seekers create ./my-project --enhance-workflow security-focus
+
+# Chain multiple workflows (applied in order)
+skill-seekers create ./my-project \
+  --enhance-workflow security-focus \
+  --enhance-workflow minimal
+
+# Manage presets
+skill-seekers workflows list                     # List all (bundled + user)
+skill-seekers workflows show security-focus      # Print YAML content
+skill-seekers workflows copy security-focus      # Copy to user dir for editing
+skill-seekers workflows add ./my-workflow.yaml   # Install a custom preset
+skill-seekers workflows remove my-workflow       # Remove a user preset
+skill-seekers workflows validate security-focus  # Validate preset structure
+
+# Copy multiple at once
+skill-seekers workflows copy security-focus minimal api-documentation
+
+# Add multiple files at once
+skill-seekers workflows add ./wf-a.yaml ./wf-b.yaml
+
+# Remove multiple at once
+skill-seekers workflows remove my-wf-a my-wf-b
+```
+
+**YAML preset format:**
+```yaml
+name: security-focus
+description: "Security-focused review: vulnerabilities, auth, data handling"
+version: "1.0"
+stages:
+  - name: vulnerabilities
+    type: custom
+    prompt: "Review for OWASP top 10 and common security vulnerabilities..."
+  - name: auth-review
+    type: custom
+    prompt: "Examine authentication and authorisation patterns..."
+    uses_history: true
+```
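A preset with the shape shown above lends itself to a small structural check. The following is a hypothetical sketch, not the project's actual validator; the field names come from the example, but the rules themselves are assumptions:

```python
def validate_preset(data):
    """Return a list of problems; an empty list means the preset looks structurally valid."""
    problems = []
    # Top-level fields seen in the example preset (assumed to be required).
    for field in ("name", "description", "version", "stages"):
        if field not in data:
            problems.append(f"missing required field: {field}")
    # Each stage needs a name; custom stages additionally carry a prompt.
    for i, stage in enumerate(data.get("stages") or []):
        if "name" not in stage:
            problems.append(f"stage {i}: missing 'name'")
        if stage.get("type") == "custom" and not stage.get("prompt"):
            problems.append(f"stage {i}: custom stages need a 'prompt'")
    return problems
```

In practice the dict would come from `yaml.safe_load()` on the preset file, which is roughly what `skill-seekers workflows validate` would have to do before reporting success.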
 ### ⚡ Performance & Scale
 - ✅ **Async Mode** - 2-3x faster scraping with async/await (use `--async` flag)
 - ✅ **Large Documentation Support** - Handle 10K-40K+ page docs with intelligent splitting
QUICK_REFERENCE.md

@@ -1,6 +1,6 @@
 # Quick Reference - Skill Seekers Cheat Sheet

-**Version:** 2.7.0 | **Quick Commands** | **One-Page Reference**
+**Version:** 3.1.0-dev | **Quick Commands** | **One-Page Reference**

 ---

@@ -91,8 +91,54 @@ skill-seekers enhance output/react/ --background

 # Monitor background enhancement
 skill-seekers enhance-status output/react/ --watch
+
+# Apply a workflow preset during create
+skill-seekers create ./my-project --enhance-workflow security-focus
+
+# Chain multiple workflow presets
+skill-seekers create ./my-project \
+  --enhance-workflow security-focus \
+  --enhance-workflow minimal
 ```
+
+### Enhancement Workflow Presets
+
+```bash
+# List all available workflows (bundled + user)
+skill-seekers workflows list
+
+# Show the YAML content of a workflow
+skill-seekers workflows show security-focus
+
+# Copy a bundled workflow to user dir for editing
+skill-seekers workflows copy security-focus
+
+# Copy multiple bundled workflows at once
+skill-seekers workflows copy security-focus minimal api-documentation
+
+# Install a custom YAML file as a user workflow
+skill-seekers workflows add ./my-workflow.yaml
+
+# Install multiple YAML files at once
+skill-seekers workflows add ./wf-a.yaml ./wf-b.yaml
+
+# Install with a custom name (single file only)
+skill-seekers workflows add ./my-workflow.yaml --name my-custom-name
+
+# Remove a user workflow (bundled presets cannot be removed)
+skill-seekers workflows remove my-workflow
+
+# Remove multiple user workflows at once
+skill-seekers workflows remove wf-a wf-b
+
+# Validate a workflow by name or file path
+skill-seekers workflows validate security-focus
+skill-seekers workflows validate ./my-workflow.yaml
+```
+
+**Bundled presets:** `default`, `minimal`, `security-focus`, `architecture-comprehensive`, `api-documentation`
+**User presets dir:** `~/.config/skill-seekers/workflows/`

 ### Packaging & Upload

 ```bash
@@ -417,4 +463,4 @@ skill-seekers validate-config configs/my-config.json

 ---

-**Version:** 2.7.0 | **Test Count:** 1200+ | **Platforms:** Claude, Gemini, OpenAI, Markdown
+**Version:** 3.1.0-dev | **Test Count:** 1880+ | **Platforms:** Claude, Gemini, OpenAI, Markdown
pyproject.toml

@@ -203,6 +203,7 @@ skill-seekers-stream = "skill_seekers.cli.streaming_ingest:main"
 skill-seekers-update = "skill_seekers.cli.incremental_updater:main"
 skill-seekers-multilang = "skill_seekers.cli.multilang_support:main"
 skill-seekers-quality = "skill_seekers.cli.quality_metrics:main"
+skill-seekers-workflows = "skill_seekers.cli.workflows_command:main"

 [tool.setuptools]
 package-dir = {"" = "src"}
@@ -213,7 +214,7 @@ include = ["skill_seekers*"]
 namespaces = false

 [tool.setuptools.package-data]
-skill_seekers = ["py.typed"]
+skill_seekers = ["py.typed", "workflows/*.yaml"]

 [tool.pytest.ini_options]
 testpaths = ["tests"]
@@ -376,7 +376,8 @@ class CreateCommand:

         # Enhancement Workflow arguments (NEW - Phase 2)
         if getattr(self.args, "enhance_workflow", None):
-            argv.extend(["--enhance-workflow", self.args.enhance_workflow])
+            for wf in self.args.enhance_workflow:
+                argv.extend(["--enhance-workflow", wf])
         if getattr(self.args, "enhance_stage", None):
             for stage in self.args.enhance_stage:
                 argv.extend(["--enhance-stage", stage])
@@ -27,6 +27,7 @@ import logging
 import os
 from dataclasses import dataclass, field
 from datetime import datetime
+from importlib.resources import files as importlib_files
 from pathlib import Path
 from typing import Any, Literal
@@ -99,25 +100,63 @@ class WorkflowEngine:
         self.history: list[dict[str, Any]] = []
         self.enhancer = None  # Lazy load UnifiedEnhancer

-    def _load_workflow(self, workflow_path: str | Path) -> EnhancementWorkflow:
-        """Load workflow from YAML file."""
-        workflow_path = Path(workflow_path)
+    def _load_workflow(self, workflow_ref: str | Path) -> EnhancementWorkflow:
+        """Load workflow from YAML file using 3-level search order.

-        # Resolve path (support both absolute and relative)
-        if not workflow_path.is_absolute():
-            # Try relative to CWD first
-            if not workflow_path.exists():
-                # Try in config directory
-                config_dir = Path.home() / ".config" / "skill-seekers" / "workflows"
-                workflow_path = config_dir / workflow_path
+        Search order:
+        1. Raw file path (absolute or relative) — existing behaviour
+        2. ~/.config/skill-seekers/workflows/{name}.yaml — user overrides/custom
+        3. skill_seekers/workflows/{name}.yaml via importlib.resources — bundled defaults
+        """
+        workflow_ref = Path(workflow_ref)

-        if not workflow_path.exists():
-            raise FileNotFoundError(f"Workflow not found: {workflow_path}")
+        # Add .yaml extension for bare names
+        name_str = str(workflow_ref)
+        if not name_str.endswith((".yaml", ".yml")):
+            yaml_ref = Path(name_str + ".yaml")
+        else:
+            yaml_ref = workflow_ref

-        logger.info(f"📋 Loading workflow: {workflow_path}")
+        resolved_path: Path | None = None
+        yaml_text: str | None = None

-        with open(workflow_path, encoding="utf-8") as f:
-            data = yaml.safe_load(f)
+        # Level 1: absolute path or relative-to-CWD
+        if yaml_ref.is_absolute():
+            if yaml_ref.exists():
+                resolved_path = yaml_ref
+        else:
+            cwd_path = Path.cwd() / yaml_ref
+            if cwd_path.exists():
+                resolved_path = cwd_path
+            elif yaml_ref.exists():
+                resolved_path = yaml_ref
+
+        # Level 2: user config directory
+        if resolved_path is None:
+            user_dir = Path.home() / ".config" / "skill-seekers" / "workflows"
+            user_path = user_dir / yaml_ref.name
+            if user_path.exists():
+                resolved_path = user_path
+
+        # Level 3: bundled package workflows via importlib.resources
+        if resolved_path is None:
+            bare_name = yaml_ref.name  # e.g. "security-focus.yaml"
+            try:
+                pkg_ref = importlib_files("skill_seekers.workflows").joinpath(bare_name)
+                yaml_text = pkg_ref.read_text(encoding="utf-8")
+                logger.info(f"📋 Loading bundled workflow: {bare_name}")
+            except (FileNotFoundError, TypeError, ModuleNotFoundError):
+                raise FileNotFoundError(
+                    f"Workflow '{yaml_ref.stem}' not found. "
+                    "Use 'skill-seekers workflows list' to see available workflows."
+                )
+
+        if resolved_path is not None:
+            logger.info(f"📋 Loading workflow: {resolved_path}")
+            with open(resolved_path, encoding="utf-8") as f:
+                data = yaml.safe_load(f)
+        else:
+            data = yaml.safe_load(yaml_text)

         # Handle inheritance (extends)
         if "extends" in data and data["extends"]:
@@ -430,103 +469,27 @@ class WorkflowEngine:
         logger.info(f"💾 Saved workflow history: {output_path}")


-def create_default_workflows():
-    """Create default workflow templates in user config directory."""
-    config_dir = Path.home() / ".config" / "skill-seekers" / "workflows"
-    config_dir.mkdir(parents=True, exist_ok=True)
-
-    # Default workflow
-    default_workflow = {
-        "name": "Default Enhancement",
-        "description": "Standard AI enhancement with all features",
-        "version": "1.0",
-        "applies_to": ["codebase_analysis", "doc_scraping", "github_analysis"],
-        "stages": [
-            {
-                "name": "base_analysis",
-                "type": "builtin",
-                "target": "patterns",
-                "enabled": True,
-            },
-            {
-                "name": "test_examples",
-                "type": "builtin",
-                "target": "examples",
-                "enabled": True,
-            },
-        ],
-        "post_process": {
-            "add_metadata": {"enhanced": True, "workflow": "default"}
-        },
-    }
-
-    # Security-focused workflow
-    security_workflow = {
-        "name": "Security-Focused Analysis",
-        "description": "Emphasize security patterns and vulnerabilities",
-        "version": "1.0",
-        "applies_to": ["codebase_analysis"],
-        "variables": {"focus_area": "security"},
-        "stages": [
-            {
-                "name": "base_patterns",
-                "type": "builtin",
-                "target": "patterns",
-            },
-            {
-                "name": "security_analysis",
-                "type": "custom",
-                "target": "security",
-                "uses_history": True,
-                "prompt": """Based on the patterns detected: {previous_results}
-
-Perform deep security analysis:
-
-1. **Authentication/Authorization**:
-   - Auth bypass risks?
-   - Token handling secure?
-   - Session management issues?
-
-2. **Input Validation**:
-   - User input sanitized?
-   - SQL injection risks?
-   - XSS vulnerabilities?
-
-3. **Data Exposure**:
-   - Sensitive data in logs?
-   - Secrets in config?
-   - PII handling?
-
-4. **Cryptography**:
-   - Weak algorithms?
-   - Hardcoded keys?
-   - Insecure RNG?
-
-Output as JSON with 'findings' array.""",
-            },
-        ],
-        "post_process": {
-            "add_metadata": {"security_reviewed": True},
-        },
-    }
-
-    # Save workflows
-    workflows = {
-        "default.yaml": default_workflow,
-        "security-focus.yaml": security_workflow,
-    }
-
-    for filename, workflow_data in workflows.items():
-        workflow_file = config_dir / filename
-        if not workflow_file.exists():
-            with open(workflow_file, "w", encoding="utf-8") as f:
-                yaml.dump(workflow_data, f, default_flow_style=False, sort_keys=False)
-            logger.info(f"✅ Created workflow: {workflow_file}")
-
-    return config_dir
-
-
-if __name__ == "__main__":
-    # Create default workflows
-    create_default_workflows()
-    print("✅ Default workflows created!")
+def list_bundled_workflows() -> list[str]:
+    """Return names of all bundled default workflows (without .yaml extension)."""
+    try:
+        pkg = importlib_files("skill_seekers.workflows")
+        names = []
+        for item in pkg.iterdir():
+            name = str(item.name)
+            if name.endswith((".yaml", ".yml")):
+                names.append(name.removesuffix(".yaml").removesuffix(".yml"))
+        return sorted(names)
+    except Exception:
+        return []
+
+
+def list_user_workflows() -> list[str]:
+    """Return names of all user-defined workflows (without .yaml extension)."""
+    user_dir = Path.home() / ".config" / "skill-seekers" / "workflows"
+    if not user_dir.exists():
+        return []
+    names = []
+    for p in user_dir.iterdir():
+        if p.suffix in (".yaml", ".yml"):
+            names.append(p.stem)
+    return sorted(names)
@@ -62,6 +62,7 @@ COMMAND_MODULES = {
    "update": "skill_seekers.cli.incremental_updater",
    "multilang": "skill_seekers.cli.multilang_support",
    "quality": "skill_seekers.cli.quality_metrics",
    "workflows": "skill_seekers.cli.workflows_command",
}
@@ -27,6 +27,7 @@ from .stream_parser import StreamParser
from .update_parser import UpdateParser
from .multilang_parser import MultilangParser
from .quality_parser import QualityParser
from .workflows_parser import WorkflowsParser

# Registry of all parsers (in order of usage frequency)
PARSERS = [
@@ -50,6 +51,7 @@ PARSERS = [
    UpdateParser(),
    MultilangParser(),
    QualityParser(),
    WorkflowsParser(),
]
85  src/skill_seekers/cli/parsers/workflows_parser.py  Normal file
@@ -0,0 +1,85 @@
"""Workflows subcommand parser."""

from .base import SubcommandParser


class WorkflowsParser(SubcommandParser):
    """Parser for the workflows subcommand."""

    @property
    def name(self) -> str:
        return "workflows"

    @property
    def help(self) -> str:
        return "Manage enhancement workflow presets"

    @property
    def description(self) -> str:
        return (
            "List, inspect, copy, add, remove, and validate enhancement workflow "
            "presets. Bundled presets ship with the package; user presets live in "
            "~/.config/skill-seekers/workflows/."
        )

    def add_arguments(self, parser) -> None:
        subparsers = parser.add_subparsers(dest="workflows_action", metavar="ACTION")

        # list
        subparsers.add_parser(
            "list",
            help="List all available workflows (bundled + user)",
        )

        # show
        show_p = subparsers.add_parser(
            "show",
            help="Print YAML content of a workflow",
        )
        show_p.add_argument("workflow_name", help="Workflow name (e.g. security-focus)")

        # copy
        copy_p = subparsers.add_parser(
            "copy",
            help="Copy bundled workflow(s) to user dir for editing",
        )
        copy_p.add_argument(
            "workflow_names",
            nargs="+",
            help="Bundled workflow name(s) to copy",
        )

        # add
        add_p = subparsers.add_parser(
            "add",
            help="Install custom YAML file(s) into the user workflow directory",
        )
        add_p.add_argument(
            "files",
            nargs="+",
            help="Path(s) to YAML workflow file(s) to install",
        )
        add_p.add_argument(
            "--name",
            help="Override the workflow filename (stem); only valid when adding a single file",
        )

        # remove
        remove_p = subparsers.add_parser(
            "remove",
            help="Delete workflow(s) from the user directory (bundled workflows cannot be removed)",
        )
        remove_p.add_argument(
            "workflow_names",
            nargs="+",
            help="User workflow name(s) to remove",
        )

        # validate
        validate_p = subparsers.add_parser(
            "validate",
            help="Parse and validate a workflow by name or file path",
        )
        validate_p.add_argument(
            "workflow_name", help="Workflow name or path to YAML file"
        )
311  src/skill_seekers/cli/workflows_command.py  Normal file
@@ -0,0 +1,311 @@
#!/usr/bin/env python3
"""
Workflows CLI Command

Manage enhancement workflow presets:
    list        List all workflows (bundled + user)
    show        Print YAML content of a workflow
    copy        Copy a bundled workflow to user dir for editing
    add         Install a custom YAML into user dir
    remove      Delete a user workflow (bundled ones cannot be removed)
    validate    Parse and validate a workflow YAML

Usage:
    skill-seekers workflows list
    skill-seekers workflows show security-focus
    skill-seekers workflows copy security-focus
    skill-seekers workflows add ./my-workflow.yaml
    skill-seekers workflows remove my-workflow
    skill-seekers workflows validate security-focus
"""

import shutil
import sys
from pathlib import Path

import yaml

from skill_seekers.cli.enhancement_workflow import (
    WorkflowEngine,
    list_bundled_workflows,
)

USER_WORKFLOWS_DIR = Path.home() / ".config" / "skill-seekers" / "workflows"


def _ensure_user_dir() -> Path:
    USER_WORKFLOWS_DIR.mkdir(parents=True, exist_ok=True)
    return USER_WORKFLOWS_DIR


def _bundled_yaml_text(name: str) -> str | None:
    """Return raw YAML text of a bundled workflow, or None if not found."""
    from importlib.resources import files as importlib_files

    for suffix in (".yaml", ".yml"):
        try:
            pkg_ref = importlib_files("skill_seekers.workflows").joinpath(name + suffix)
            return pkg_ref.read_text(encoding="utf-8")
        except (FileNotFoundError, TypeError, ModuleNotFoundError):
            continue
    return None


def _workflow_yaml_text(name_or_path: str) -> str | None:
    """Resolve a workflow by name or path and return its raw YAML text."""
    # Try as a file path first
    p = Path(name_or_path)
    if p.suffix in (".yaml", ".yml") and p.exists():
        return p.read_text(encoding="utf-8")

    # Try as a name with .yaml/.yml extension in the current directory
    for suffix in (".yaml", ".yml"):
        candidate = Path(name_or_path + suffix)
        if candidate.exists():
            return candidate.read_text(encoding="utf-8")

    # User dir
    for suffix in (".yaml", ".yml"):
        user_file = USER_WORKFLOWS_DIR / (name_or_path + suffix)
        if user_file.exists():
            return user_file.read_text(encoding="utf-8")

    # Bundled
    return _bundled_yaml_text(name_or_path)


def _list_user_workflow_names() -> list[str]:
    """Return names of user workflows (without extension) from USER_WORKFLOWS_DIR."""
    if not USER_WORKFLOWS_DIR.exists():
        return []
    return sorted(
        p.stem for p in USER_WORKFLOWS_DIR.iterdir() if p.suffix in (".yaml", ".yml")
    )


def cmd_list() -> int:
    """List all available workflows."""
    bundled = list_bundled_workflows()
    user = _list_user_workflow_names()

    if not bundled and not user:
        print("No workflows found.")
        return 0

    if bundled:
        print("Bundled workflows (read-only):")
        for name in bundled:
            # Load description from YAML
            text = _bundled_yaml_text(name)
            desc = ""
            if text:
                try:
                    data = yaml.safe_load(text)
                    desc = data.get("description", "")
                except Exception:
                    pass
            print(f"  {name:<32} {desc}")

    if user:
        print("\nUser workflows (~/.config/skill-seekers/workflows/):")
        for name in user:
            user_file = USER_WORKFLOWS_DIR / (name + ".yaml")
            if not user_file.exists():
                user_file = USER_WORKFLOWS_DIR / (name + ".yml")
            desc = ""
            try:
                data = yaml.safe_load(user_file.read_text(encoding="utf-8"))
                desc = data.get("description", "")
            except Exception:
                pass
            print(f"  {name:<32} {desc}")

    return 0


def cmd_show(name: str) -> int:
    """Print YAML content of a workflow."""
    text = _workflow_yaml_text(name)
    if text is None:
        print(f"Error: Workflow '{name}' not found.", file=sys.stderr)
        print("Use 'skill-seekers workflows list' to see available workflows.", file=sys.stderr)
        return 1
    print(text, end="")
    return 0


def cmd_copy(names: list[str]) -> int:
    """Copy one or more bundled workflows to user dir."""
    rc = 0
    for name in names:
        text = _bundled_yaml_text(name)
        if text is None:
            print(f"Error: Bundled workflow '{name}' not found.", file=sys.stderr)
            bundled = list_bundled_workflows()
            if bundled:
                print(f"Available bundled workflows: {', '.join(bundled)}", file=sys.stderr)
            rc = 1
            continue

        dest = _ensure_user_dir() / (name + ".yaml")
        if dest.exists():
            print(f"Warning: '{dest}' already exists. Overwriting.")

        dest.write_text(text, encoding="utf-8")
        print(f"Copied '{name}' to: {dest}")
        print(f"Edit it with your favourite editor, then reference it as '--enhance-workflow {name}'")

    return rc


def cmd_add(file_paths: list[str], override_name: str | None = None) -> int:
    """Install one or more custom YAML workflows into user dir."""
    if override_name and len(file_paths) > 1:
        print("Error: --name cannot be used when adding multiple files.", file=sys.stderr)
        return 1

    rc = 0
    for file_path in file_paths:
        src = Path(file_path)
        if not src.exists():
            print(f"Error: File '{file_path}' does not exist.", file=sys.stderr)
            rc = 1
            continue
        if src.suffix not in (".yaml", ".yml"):
            print(f"Error: '{file_path}' must have a .yaml or .yml extension.", file=sys.stderr)
            rc = 1
            continue

        # Validate before installing
        try:
            text = src.read_text(encoding="utf-8")
            data = yaml.safe_load(text)
            if not isinstance(data, dict):
                raise ValueError("YAML root must be a mapping")
            if "stages" not in data:
                raise ValueError("Workflow must contain a 'stages' key")
        except Exception as exc:
            print(f"Error: Invalid workflow YAML '{file_path}' – {exc}", file=sys.stderr)
            rc = 1
            continue

        dest_name = override_name if override_name else src.stem
        dest = _ensure_user_dir() / (dest_name + ".yaml")

        if dest.exists():
            print(f"Warning: '{dest}' already exists. Overwriting.")

        shutil.copy2(src, dest)
        print(f"Installed workflow '{dest_name}' to: {dest}")

    return rc


def cmd_remove(names: list[str]) -> int:
    """Delete one or more user workflows."""
    rc = 0
    bundled = list_bundled_workflows()
    for name in names:
        if name in bundled:
            print(
                f"Error: '{name}' is a bundled workflow and cannot be removed.",
                file=sys.stderr,
            )
            print("Use 'skill-seekers workflows copy' to create an editable copy.", file=sys.stderr)
            rc = 1
            continue

        removed = False
        for suffix in (".yaml", ".yml"):
            candidate = USER_WORKFLOWS_DIR / (name + suffix)
            if candidate.exists():
                candidate.unlink()
                print(f"Removed workflow: {candidate}")
                removed = True
                break

        if not removed:
            print(f"Error: User workflow '{name}' not found.", file=sys.stderr)
            rc = 1

    return rc


def cmd_validate(name_or_path: str) -> int:
    """Parse and validate a workflow."""
    try:
        engine = WorkflowEngine(name_or_path)
        wf = engine.workflow
        print(f"✅ Workflow '{wf.name}' is valid.")
        print(f"   Description : {wf.description}")
        print(f"   Version     : {wf.version}")
        print(f"   Stages      : {len(wf.stages)}")
        for stage in wf.stages:
            status = "enabled" if stage.enabled else "disabled"
            print(f"     - {stage.name} ({stage.type}, {status})")
        return 0
    except FileNotFoundError as exc:
        print(f"Error: {exc}", file=sys.stderr)
        return 1
    except Exception as exc:
        print(f"Error: Invalid workflow – {exc}", file=sys.stderr)
        return 1


def main(argv=None) -> None:
    """Entry point for skill-seekers-workflows."""
    import argparse

    parser = argparse.ArgumentParser(
        prog="skill-seekers-workflows",
        description="Manage enhancement workflow presets",
    )
    subparsers = parser.add_subparsers(dest="action", metavar="ACTION")

    subparsers.add_parser("list", help="List all workflows (bundled + user)")

    show_p = subparsers.add_parser("show", help="Print YAML content of a workflow")
    show_p.add_argument("workflow_name")

    copy_p = subparsers.add_parser("copy", help="Copy bundled workflow(s) to user dir")
    copy_p.add_argument("workflow_names", nargs="+")

    add_p = subparsers.add_parser("add", help="Install custom YAML file(s) into user dir")
    add_p.add_argument("files", nargs="+")
    add_p.add_argument("--name")

    remove_p = subparsers.add_parser("remove", help="Delete user workflow(s)")
    remove_p.add_argument("workflow_names", nargs="+")

    validate_p = subparsers.add_parser("validate", help="Validate a workflow by name or file")
    validate_p.add_argument("workflow_name")

    args = parser.parse_args(argv)

    if args.action is None:
        parser.print_help()
        sys.exit(0)

    rc = 0
    if args.action == "list":
        rc = cmd_list()
    elif args.action == "show":
        rc = cmd_show(args.workflow_name)
    elif args.action == "copy":
        rc = cmd_copy(args.workflow_names)
    elif args.action == "add":
        rc = cmd_add(args.files, getattr(args, "name", None))
    elif args.action == "remove":
        rc = cmd_remove(args.workflow_names)
    elif args.action == "validate":
        rc = cmd_validate(args.workflow_name)
    else:
        parser.print_help()

    sys.exit(rc)


if __name__ == "__main__":
    main()
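The copy/add/remove commands above share a partial-failure convention: each name or file is processed independently, any failure sets the exit code to 1, and the remaining items still run. A minimal standalone sketch of that convention (`process_all` and `check_positive` are illustrative names, not part of the package):

```python
def process_all(items, process) -> int:
    """Apply `process` to each item; return 1 if any item failed, else 0.

    Mirrors the rc-accumulation pattern used by cmd_copy/cmd_add/cmd_remove:
    a bad item is reported and skipped, the rest still run.
    """
    rc = 0
    for item in items:
        try:
            process(item)
        except ValueError as exc:
            # Report and continue, like the CLI's per-item error prints.
            print(f"Error: {item}: {exc}")
            rc = 1
            continue
    return rc


def check_positive(n: int) -> None:
    # Hypothetical stand-in for per-item work (e.g. installing one YAML file).
    if n <= 0:
        raise ValueError("must be positive")
```

With this shape, one bad argument never aborts the whole invocation, which is why `cmd_add` rejects `--name` up front when several files are given rather than applying it to only the first.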
@@ -103,6 +103,12 @@ try:
        submit_config_impl,
        upload_skill_impl,
        validate_config_impl,
        # Workflow tools
        list_workflows_impl,
        get_workflow_impl,
        create_workflow_impl,
        update_workflow_impl,
        delete_workflow_impl,
    )
except ImportError:
    # Fallback for direct script execution
@@ -137,6 +143,11 @@ except ImportError:
        submit_config_impl,
        upload_skill_impl,
        validate_config_impl,
        list_workflows_impl,
        get_workflow_impl,
        create_workflow_impl,
        update_workflow_impl,
        delete_workflow_impl,
    )

# Initialize FastMCP server
@@ -1178,6 +1189,100 @@ async def export_to_qdrant(
    return str(result)
# ============================================================================
# WORKFLOW TOOLS (5 tools)
# ============================================================================


@safe_tool_decorator(
    description="List all available enhancement workflows (bundled defaults + user-created). Returns name, description, and source (bundled/user) for each."
)
async def list_workflows() -> str:
    """List all available enhancement workflow presets."""
    result = list_workflows_impl({})
    if isinstance(result, list) and result:
        return result[0].text if hasattr(result[0], "text") else str(result[0])
    return str(result)


@safe_tool_decorator(
    description="Get the full YAML content of a named enhancement workflow. Searches user dir first, then bundled defaults."
)
async def get_workflow(name: str) -> str:
    """
    Get full YAML content of a workflow.

    Args:
        name: Workflow name (e.g. 'security-focus', 'default')

    Returns:
        YAML content of the workflow, or error message if not found.
    """
    result = get_workflow_impl({"name": name})
    if isinstance(result, list) and result:
        return result[0].text if hasattr(result[0], "text") else str(result[0])
    return str(result)


@safe_tool_decorator(
    description="Create a new user workflow from YAML content. The workflow is saved to ~/.config/skill-seekers/workflows/."
)
async def create_workflow(name: str, content: str) -> str:
    """
    Create a new user workflow.

    Args:
        name: Workflow name (becomes the filename stem, e.g. 'my-custom')
        content: Full YAML content of the workflow

    Returns:
        Success message with file path, or error message.
    """
    result = create_workflow_impl({"name": name, "content": content})
    if isinstance(result, list) and result:
        return result[0].text if hasattr(result[0], "text") else str(result[0])
    return str(result)


@safe_tool_decorator(
    description="Update (overwrite) an existing user workflow. Cannot update bundled workflows."
)
async def update_workflow(name: str, content: str) -> str:
    """
    Update an existing user workflow.

    Args:
        name: Workflow name to update
        content: New YAML content

    Returns:
        Success message, or error if workflow is bundled or invalid.
    """
    result = update_workflow_impl({"name": name, "content": content})
    if isinstance(result, list) and result:
        return result[0].text if hasattr(result[0], "text") else str(result[0])
    return str(result)


@safe_tool_decorator(
    description="Delete a user workflow by name. Bundled workflows cannot be deleted."
)
async def delete_workflow(name: str) -> str:
    """
    Delete a user workflow.

    Args:
        name: Workflow name to delete

    Returns:
        Success message, or error if workflow is bundled or not found.
    """
    result = delete_workflow_impl({"name": name})
    if isinstance(result, list) and result:
        return result[0].text if hasattr(result[0], "text") else str(result[0])
    return str(result)


# ============================================================================
# MAIN ENTRY POINT
# ============================================================================
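Each async wrapper above repeats the same unwrapping expression to collapse a `list[TextContent]` tool result into a plain string. The shared shape can be sketched standalone; `unwrap` is a name introduced here for illustration, and the `TextContent` class below is a minimal stand-in matching the fallback class the tools module defines when `mcp` is not installed:

```python
class TextContent:
    # Minimal stand-in for mcp.types.TextContent (type + text attributes).
    def __init__(self, type: str, text: str):
        self.type = type
        self.text = text


def unwrap(result) -> str:
    """Collapse a list-of-TextContent tool result into a plain string.

    Same logic each async wrapper repeats: take the first element's
    .text when present, otherwise stringify whatever was returned.
    """
    if isinstance(result, list) and result:
        return result[0].text if hasattr(result[0], "text") else str(result[0])
    return str(result)
```

Factoring this into one helper would shrink each of the five wrappers to a single call, at the cost of one more indirection per tool.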
@@ -96,6 +96,21 @@ from .vector_db_tools import (
from .vector_db_tools import (
    export_to_weaviate_impl,
)
from .workflow_tools import (
    create_workflow_tool as create_workflow_impl,
)
from .workflow_tools import (
    delete_workflow_tool as delete_workflow_impl,
)
from .workflow_tools import (
    get_workflow_tool as get_workflow_impl,
)
from .workflow_tools import (
    list_workflows_tool as list_workflows_impl,
)
from .workflow_tools import (
    update_workflow_tool as update_workflow_impl,
)

__all__ = [
    "__version__",
@@ -132,4 +147,10 @@ __all__ = [
    "export_to_chroma_impl",
    "export_to_faiss_impl",
    "export_to_qdrant_impl",
    # Workflow tools
    "list_workflows_impl",
    "get_workflow_impl",
    "create_workflow_impl",
    "update_workflow_impl",
    "delete_workflow_impl",
]

226  src/skill_seekers/mcp/tools/workflow_tools.py  Normal file
@@ -0,0 +1,226 @@
"""
MCP Tool Implementations for Workflow Management

5 tools:
    list_workflows   – list all workflows (bundled + user) with source info
    get_workflow     – return full YAML of a named workflow
    create_workflow  – write a new YAML to user dir
    update_workflow  – overwrite an existing user workflow
    delete_workflow  – remove a user workflow by name
"""

from __future__ import annotations

from pathlib import Path

import yaml

try:
    from mcp.types import TextContent
except ImportError:
    # Graceful degradation for testing without mcp installed
    class TextContent:  # type: ignore[no-redef]
        def __init__(self, type: str, text: str):
            self.type = type
            self.text = text

USER_WORKFLOWS_DIR = Path.home() / ".config" / "skill-seekers" / "workflows"


def _ensure_user_dir() -> Path:
    USER_WORKFLOWS_DIR.mkdir(parents=True, exist_ok=True)
    return USER_WORKFLOWS_DIR


def _bundled_names() -> list[str]:
    from importlib.resources import files as importlib_files

    try:
        pkg = importlib_files("skill_seekers.workflows")
        names = []
        for item in pkg.iterdir():
            name = str(item.name)
            if name.endswith((".yaml", ".yml")):
                names.append(name.removesuffix(".yaml").removesuffix(".yml"))
        return sorted(names)
    except Exception:
        return []


def _user_names() -> list[str]:
    if not USER_WORKFLOWS_DIR.exists():
        return []
    return sorted(
        p.stem for p in USER_WORKFLOWS_DIR.iterdir() if p.suffix in (".yaml", ".yml")
    )


def _read_bundled(name: str) -> str | None:
    from importlib.resources import files as importlib_files

    for suffix in (".yaml", ".yml"):
        try:
            pkg_ref = importlib_files("skill_seekers.workflows").joinpath(name + suffix)
            return pkg_ref.read_text(encoding="utf-8")
        except (FileNotFoundError, TypeError, ModuleNotFoundError):
            continue
    return None


def _read_workflow(name: str) -> str | None:
    """Read YAML text: user dir first, then bundled."""
    for suffix in (".yaml", ".yml"):
        p = USER_WORKFLOWS_DIR / (name + suffix)
        if p.exists():
            return p.read_text(encoding="utf-8")
    return _read_bundled(name)


def _validate_yaml(text: str) -> dict:
    """Parse and basic-validate workflow YAML; returns the parsed dict."""
    data = yaml.safe_load(text)
    if not isinstance(data, dict):
        raise ValueError("Workflow YAML root must be a mapping")
    if "stages" not in data:
        raise ValueError("Workflow must contain a 'stages' key")
    return data


# ──────────────────────────────────────────────────────────────────────────────
# Tool implementations
# ──────────────────────────────────────────────────────────────────────────────


def list_workflows_tool(args: dict) -> list:
    """Return all workflows with name, description, and source."""
    result: list[dict[str, str]] = []

    for name in _bundled_names():
        desc = ""
        text = _read_bundled(name)
        if text:
            try:
                data = yaml.safe_load(text)
                desc = data.get("description", "")
            except Exception:
                pass
        result.append({"name": name, "description": desc, "source": "bundled"})

    for name in _user_names():
        desc = ""
        text = _read_workflow(name)
        if text:
            try:
                data = yaml.safe_load(text)
                desc = data.get("description", "")
            except Exception:
                pass
        result.append({"name": name, "description": desc, "source": "user"})

    output = yaml.dump(result, default_flow_style=False, sort_keys=False)
    return [TextContent(type="text", text=output)]


def get_workflow_tool(args: dict) -> list:
    """Return full YAML content of a named workflow."""
    name = args.get("name", "").strip()
    if not name:
        return [TextContent(type="text", text="Error: 'name' parameter is required.")]

    text = _read_workflow(name)
    if text is None:
        bundled = _bundled_names()
        user = _user_names()
        available = bundled + [f"{n} (user)" for n in user]
        msg = (
            f"Error: Workflow '{name}' not found.\n"
            f"Available workflows: {', '.join(available) if available else 'none'}"
        )
        return [TextContent(type="text", text=msg)]

    return [TextContent(type="text", text=text)]


def create_workflow_tool(args: dict) -> list:
    """Write a new workflow YAML to the user directory."""
    name = args.get("name", "").strip()
    content = args.get("content", "")

    if not name:
        return [TextContent(type="text", text="Error: 'name' parameter is required.")]
    if not content:
        return [TextContent(type="text", text="Error: 'content' parameter is required.")]

    # Validate
    try:
        _validate_yaml(content)
    except Exception as exc:
        return [TextContent(type="text", text=f"Error: Invalid workflow YAML – {exc}")]

    dest = _ensure_user_dir() / (name + ".yaml")
    if dest.exists():
        return [
            TextContent(
                type="text",
                text=f"Error: Workflow '{name}' already exists in user dir. Use update_workflow to overwrite.",
            )
        ]

    dest.write_text(content, encoding="utf-8")
    return [TextContent(type="text", text=f"Created workflow '{name}' at: {dest}")]


def update_workflow_tool(args: dict) -> list:
    """Overwrite an existing user workflow. Cannot update bundled workflows."""
    name = args.get("name", "").strip()
    content = args.get("content", "")

    if not name:
        return [TextContent(type="text", text="Error: 'name' parameter is required.")]
    if not content:
        return [TextContent(type="text", text="Error: 'content' parameter is required.")]

    if name in _bundled_names() and name not in _user_names():
        return [
            TextContent(
                type="text",
                text=(
                    f"Error: '{name}' is a bundled workflow and cannot be updated. "
                    "Use create_workflow with a different name, or copy it first with "
                    "'skill-seekers workflows copy'."
                ),
            )
        ]

    # Validate
    try:
        _validate_yaml(content)
    except Exception as exc:
        return [TextContent(type="text", text=f"Error: Invalid workflow YAML – {exc}")]

    dest = _ensure_user_dir() / (name + ".yaml")
    dest.write_text(content, encoding="utf-8")
    return [TextContent(type="text", text=f"Updated workflow '{name}' at: {dest}")]


def delete_workflow_tool(args: dict) -> list:
    """Remove a user workflow by name. Bundled workflows cannot be deleted."""
    name = args.get("name", "").strip()
    if not name:
        return [TextContent(type="text", text="Error: 'name' parameter is required.")]

    if name in _bundled_names():
        return [
            TextContent(
                type="text",
                text=f"Error: '{name}' is a bundled workflow and cannot be deleted.",
            )
        ]

    for suffix in (".yaml", ".yml"):
        candidate = USER_WORKFLOWS_DIR / (name + suffix)
        if candidate.exists():
            candidate.unlink()
            return [TextContent(type="text", text=f"Deleted user workflow: {candidate}")]

    return [TextContent(type="text", text=f"Error: User workflow '{name}' not found.")]
1  src/skill_seekers/workflows/__init__.py  Normal file
@@ -0,0 +1 @@
"""Bundled default enhancement workflow presets."""
71  src/skill_seekers/workflows/api-documentation.yaml  Normal file
@@ -0,0 +1,71 @@
name: api-documentation
description: Generate comprehensive API documentation from code analysis
version: "1.0"
applies_to:
  - codebase_analysis
  - github_analysis
variables:
  depth: comprehensive
stages:
  - name: base_patterns
    type: builtin
    target: patterns
    enabled: true
    uses_history: false
  - name: api_extraction
    type: custom
    target: api
    uses_history: false
    enabled: true
    prompt: >
      Extract and document all public API endpoints, functions, and interfaces
      from this codebase.

      For each API element include:
      1. Name and signature
      2. Purpose and description
      3. Parameters (name, type, required/optional, description)
      4. Return value (type, description)
      5. Exceptions that may be raised
      6. Usage example

      Output JSON with an "api_reference" array of API elements.
  - name: usage_examples
    type: custom
    target: examples
    uses_history: true
    enabled: true
    prompt: >
      Based on the API reference, create practical usage examples that demonstrate
      common integration patterns.

      Create examples for:
      1. Basic getting-started scenario
      2. Common use case (most frequently used APIs)
      3. Advanced integration pattern
      4. Error handling example

      Output JSON with a "usage_examples" array where each item has
      "title", "description", and "code" fields.
  - name: integration_guide
    type: custom
    target: integration
    uses_history: true
    enabled: true
    prompt: >
      Create a concise integration guide based on the API documentation and examples.

      Include:
      1. Prerequisites and installation
      2. Authentication setup (if applicable)
      3. Quick start in 5 minutes
      4. Common gotchas and how to avoid them
      5. Links to further resources

      Output JSON with an "integration_guide" object.
post_process:
  reorder_sections: []
  add_metadata:
    enhanced: true
    workflow: api-documentation
    has_api_docs: true
72
src/skill_seekers/workflows/architecture-comprehensive.yaml
Normal file
72
src/skill_seekers/workflows/architecture-comprehensive.yaml
Normal file
@@ -0,0 +1,72 @@
name: architecture-comprehensive
description: Deep architectural analysis including patterns, dependencies, and design quality
version: "1.0"
applies_to:
  - codebase_analysis
  - github_analysis
variables:
  depth: comprehensive
stages:
  - name: base_patterns
    type: builtin
    target: patterns
    enabled: true
    uses_history: false
  - name: architecture_overview
    type: custom
    target: architecture
    uses_history: false
    enabled: true
    prompt: >
      Analyse the architectural patterns and design decisions in this codebase.

      Identify:
      1. Overall architectural style (MVC, microservices, layered, hexagonal, etc.)
      2. Key design patterns used and their purpose
      3. Component boundaries and responsibilities
      4. Data flow between components
      5. External dependencies and integration points

      Output JSON with an "architecture" object containing:
      - "style": primary architectural style
      - "patterns": list of design patterns detected
      - "components": list of key components with descriptions
      - "data_flow": description of data flow
      - "quality_score": 1-10 rating with justification
  - name: dependency_analysis
    type: custom
    target: dependencies
    uses_history: true
    enabled: true
    prompt: >
      Based on the architectural overview, analyse the dependency structure.

      Identify:
      1. Circular dependencies (red flags)
      2. High coupling between modules
      3. Opportunities for dependency injection
      4. Third-party dependency risks (outdated, unmaintained)
      5. Suggested refactoring priorities

      Output JSON with a "dependency_analysis" object.
  - name: improvement_roadmap
    type: custom
    target: roadmap
    uses_history: true
    enabled: true
    prompt: >
      Based on the full architectural analysis, create an improvement roadmap.

      Provide:
      1. Top 3 quick wins (low effort, high impact)
      2. Medium-term improvements (1-3 months)
      3. Long-term architectural goals
      4. Technical debt prioritisation

      Output JSON with a "roadmap" object containing "quick_wins", "medium_term", and "long_term" arrays.
post_process:
  reorder_sections: []
  add_metadata:
    enhanced: true
    workflow: architecture-comprehensive
    deep_analysis: true
24 src/skill_seekers/workflows/default.yaml (Normal file)
@@ -0,0 +1,24 @@
name: default
description: Standard AI enhancement with all features enabled
version: "1.0"
applies_to:
  - codebase_analysis
  - doc_scraping
  - github_analysis
variables: {}
stages:
  - name: base_analysis
    type: builtin
    target: patterns
    enabled: true
    uses_history: false
  - name: test_examples
    type: builtin
    target: examples
    enabled: true
    uses_history: false
post_process:
  reorder_sections: []
  add_metadata:
    enhanced: true
    workflow: default
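All the preset files share the same top-level shape (`name` / `description` / `version` / `applies_to` / `variables` / `stages` / `post_process`). That shape can be sanity-checked in a few lines of Python; `check_preset` below is a hypothetical illustration of such a check, not the package's actual `workflows validate` implementation:

```python
# Hypothetical validator sketch for the preset schema shown above;
# the real `skill-seekers workflows validate` may check more.
def check_preset(preset: dict) -> list:
    """Return a list of schema problems (empty list = looks valid)."""
    problems = []
    for key in ("name", "description", "version", "stages"):
        if key not in preset:
            problems.append(f"missing key: {key}")
    for stage in preset.get("stages", []):
        if stage.get("type") not in ("builtin", "custom"):
            problems.append(f"stage {stage.get('name')!r}: unknown type")
        elif stage.get("type") == "custom" and "prompt" not in stage:
            problems.append(f"stage {stage.get('name')!r}: custom stage needs a prompt")
    return problems

# Mirror of the default preset above, as a plain dict.
default_preset = {
    "name": "default",
    "description": "Standard AI enhancement with all features enabled",
    "version": "1.0",
    "stages": [
        {"name": "base_analysis", "type": "builtin", "target": "patterns"},
        {"name": "test_examples", "type": "builtin", "target": "examples"},
    ],
}
print(check_preset(default_preset))  # → []
```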
27 src/skill_seekers/workflows/minimal.yaml (Normal file)
@@ -0,0 +1,27 @@
name: minimal
description: Lightweight enhancement - SKILL.md only, no heavy analysis
version: "1.0"
applies_to:
  - codebase_analysis
  - doc_scraping
  - github_analysis
variables:
  depth: surface
stages:
  - name: skill_md_polish
    type: custom
    target: skill_md
    uses_history: false
    enabled: true
    prompt: >
      Review the following SKILL.md content and make minimal improvements:
      - Fix obvious formatting issues
      - Ensure the overview section is clear and concise
      - Remove duplicate or redundant information

      Return the improved content as plain text without extra commentary.
post_process:
  reorder_sections: []
  add_metadata:
    enhanced: true
    workflow: minimal
59 src/skill_seekers/workflows/security-focus.yaml (Normal file)
@@ -0,0 +1,59 @@
name: security-focus
description: "Security-focused review: vulnerabilities, auth, data handling"
version: "1.0"
applies_to:
  - codebase_analysis
  - python
  - javascript
  - typescript
variables:
  depth: comprehensive
stages:
  - name: base_patterns
    type: builtin
    target: patterns
    enabled: true
    uses_history: false
  - name: vulnerabilities
    type: custom
    target: security
    uses_history: false
    enabled: true
    prompt: >
      Analyze this codebase for OWASP Top 10 and common security vulnerabilities.

      Focus on:
      1. Injection flaws (SQL, command, LDAP injection)
      2. Broken authentication and session management
      3. Sensitive data exposure (secrets in code, logging PII)
      4. Security misconfigurations
      5. Cross-site scripting (XSS) risks
      6. Insecure direct object references

      Output JSON with a "findings" array where each item has:
      - "category": vulnerability category
      - "severity": "critical" | "high" | "medium" | "low"
      - "description": what the issue is
      - "recommendation": how to fix it
  - name: auth_review
    type: custom
    target: auth
    uses_history: true
    enabled: true
    prompt: >
      Examine authentication and authorisation patterns in this codebase.

      Review:
      1. Token handling and storage
      2. Password hashing mechanisms
      3. Session expiry and invalidation
      4. Role-based access control implementation
      5. OAuth/JWT usage correctness

      Output JSON with an "auth_analysis" object containing "strengths" and "weaknesses" arrays.
post_process:
  reorder_sections: []
  add_metadata:
    enhanced: true
    workflow: security-focus
    security_reviewed: true
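The "findings" contract described in the vulnerabilities prompt (category / severity / description / recommendation) lends itself to simple downstream post-processing, for example ordering a report by severity. A sketch, with invented sample data:

```python
# Sort "findings" (per the schema in the vulnerabilities prompt) by severity.
# The sample findings below are invented for illustration only.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

findings = [
    {"category": "XSS", "severity": "medium",
     "description": "...", "recommendation": "..."},
    {"category": "SQL injection", "severity": "critical",
     "description": "...", "recommendation": "..."},
    {"category": "Secrets in code", "severity": "high",
     "description": "...", "recommendation": "..."},
]

# Most severe findings first.
findings.sort(key=lambda f: SEVERITY_ORDER[f["severity"]])
print([f["category"] for f in findings])
# → ['SQL injection', 'Secrets in code', 'XSS']
```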
@@ -23,19 +23,20 @@ class TestParserRegistry:
     """Test parser registry functionality."""

     def test_all_parsers_registered(self):
-        """Test that all 19 parsers are registered."""
-        assert len(PARSERS) == 20, f"Expected 19 parsers, got {len(PARSERS)}"
+        """Test that all parsers are registered."""
+        assert len(PARSERS) == 21, f"Expected 21 parsers, got {len(PARSERS)}"

     def test_get_parser_names(self):
         """Test getting list of parser names."""
         names = get_parser_names()
-        assert len(names) == 20
+        assert len(names) == 21
         assert "scrape" in names
         assert "github" in names
         assert "package" in names
         assert "upload" in names
         assert "analyze" in names
         assert "config" in names
+        assert "workflows" in names

     def test_all_parsers_are_subcommand_parsers(self):
         """Test that all parsers inherit from SubcommandParser."""
@@ -241,9 +242,9 @@ class TestBackwardCompatibility:
         assert cmd in names, f"Command '{cmd}' not found in parser registry!"

     def test_command_count_matches(self):
-        """Test that we have exactly 20 commands (includes new create command)."""
-        assert len(PARSERS) == 20
-        assert len(get_parser_names()) == 20
+        """Test that we have exactly 21 commands (includes new create and workflows commands)."""
+        assert len(PARSERS) == 21
+        assert len(get_parser_names()) == 21


 if __name__ == "__main__":
@@ -117,6 +117,134 @@ class TestCreateCommandBasic:
        assert "--dry-run" in result.stdout


class TestCreateCommandArgvForwarding:
    """Unit tests for _add_common_args argv forwarding."""

    def _make_args(self, **kwargs):
        import argparse

        defaults = dict(
            enhance_workflow=None,
            enhance_stage=None,
            var=None,
            workflow_dry_run=False,
            enhance_level=0,
            output=None,
            name=None,
            description=None,
            config=None,
            api_key=None,
            dry_run=False,
            verbose=False,
            quiet=False,
            chunk_for_rag=False,
            chunk_size=512,
            chunk_overlap=50,
            preset=None,
            no_preserve_code_blocks=False,
            no_preserve_paragraphs=False,
            interactive_enhancement=False,
        )
        defaults.update(kwargs)
        return argparse.Namespace(**defaults)

    def _collect_argv(self, args):
        from skill_seekers.cli.create_command import CreateCommand

        cmd = CreateCommand(args)
        argv = []
        cmd._add_common_args(argv)
        return argv

    def test_single_enhance_workflow_forwarded(self):
        args = self._make_args(enhance_workflow=["security-focus"])
        argv = self._collect_argv(args)
        assert argv.count("--enhance-workflow") == 1
        assert "security-focus" in argv

    def test_multiple_enhance_workflows_all_forwarded(self):
        """Each workflow must appear as a separate --enhance-workflow flag."""
        args = self._make_args(enhance_workflow=["security-focus", "minimal"])
        argv = self._collect_argv(args)
        assert argv.count("--enhance-workflow") == 2
        idx1 = argv.index("security-focus")
        idx2 = argv.index("minimal")
        assert argv[idx1 - 1] == "--enhance-workflow"
        assert argv[idx2 - 1] == "--enhance-workflow"

    def test_no_enhance_workflow_not_forwarded(self):
        args = self._make_args(enhance_workflow=None)
        argv = self._collect_argv(args)
        assert "--enhance-workflow" not in argv

    # ── enhance_stage ────────────────────────────────────────────────────────

    def test_single_enhance_stage_forwarded(self):
        args = self._make_args(enhance_stage=["security:Check for vulnerabilities"])
        argv = self._collect_argv(args)
        assert "--enhance-stage" in argv
        assert "security:Check for vulnerabilities" in argv

    def test_multiple_enhance_stages_all_forwarded(self):
        stages = ["sec:Check security", "cleanup:Remove boilerplate"]
        args = self._make_args(enhance_stage=stages)
        argv = self._collect_argv(args)
        assert argv.count("--enhance-stage") == 2
        for stage in stages:
            assert stage in argv

    def test_enhance_stage_none_not_forwarded(self):
        args = self._make_args(enhance_stage=None)
        argv = self._collect_argv(args)
        assert "--enhance-stage" not in argv

    # ── var ──────────────────────────────────────────────────────────────────

    def test_single_var_forwarded(self):
        args = self._make_args(var=["depth=comprehensive"])
        argv = self._collect_argv(args)
        assert "--var" in argv
        assert "depth=comprehensive" in argv

    def test_multiple_vars_all_forwarded(self):
        args = self._make_args(var=["depth=comprehensive", "focus=security"])
        argv = self._collect_argv(args)
        assert argv.count("--var") == 2
        assert "depth=comprehensive" in argv
        assert "focus=security" in argv

    def test_var_none_not_forwarded(self):
        args = self._make_args(var=None)
        argv = self._collect_argv(args)
        assert "--var" not in argv

    # ── workflow_dry_run ─────────────────────────────────────────────────────

    def test_workflow_dry_run_forwarded(self):
        args = self._make_args(workflow_dry_run=True)
        argv = self._collect_argv(args)
        assert "--workflow-dry-run" in argv

    def test_workflow_dry_run_false_not_forwarded(self):
        args = self._make_args(workflow_dry_run=False)
        argv = self._collect_argv(args)
        assert "--workflow-dry-run" not in argv

    # ── mixed ────────────────────────────────────────────────────────────────

    def test_workflow_and_stage_both_forwarded(self):
        args = self._make_args(
            enhance_workflow=["security-focus"],
            enhance_stage=["cleanup:Remove boilerplate"],
            var=["depth=basic"],
            workflow_dry_run=True,
        )
        argv = self._collect_argv(args)
        assert "--enhance-workflow" in argv
        assert "security-focus" in argv
        assert "--enhance-stage" in argv
        assert "--var" in argv
        assert "--workflow-dry-run" in argv


class TestBackwardCompatibility:
    """Test that old commands still work."""

@@ -165,3 +293,16 @@ class TestBackwardCompatibility:
        assert "scrape" in result.stdout
        assert "github" in result.stdout
        assert "analyze" in result.stdout

    def test_workflows_command_still_works(self):
        """The new workflows subcommand is accessible via the main CLI."""
        import subprocess

        result = subprocess.run(
            ["skill-seekers", "workflows", "--help"],
            capture_output=True,
            text=True,
            timeout=10,
        )
        assert result.returncode == 0
        assert "workflow" in result.stdout.lower()
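The behaviour these forwarding tests pin down can be sketched in a few lines: each repeatable flag is re-emitted separately rather than passed as one list argument. The function below is a simplified, hypothetical stand-in for `_add_common_args`, not the actual implementation:

```python
import argparse

def add_workflow_args(args: argparse.Namespace, argv: list) -> None:
    # Re-emit each value as its own flag instead of passing the whole
    # list as a single argument (the bug the tests above guard against).
    for wf in args.enhance_workflow or []:
        argv.extend(["--enhance-workflow", wf])
    for stage in args.enhance_stage or []:
        argv.extend(["--enhance-stage", stage])
    for pair in args.var or []:
        argv.extend(["--var", pair])
    if args.workflow_dry_run:
        argv.append("--workflow-dry-run")

ns = argparse.Namespace(
    enhance_workflow=["security-focus", "minimal"],
    enhance_stage=None,
    var=["depth=basic"],
    workflow_dry_run=True,
)
argv = []
add_workflow_args(ns, argv)
print(argv.count("--enhance-workflow"))  # → 2
```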
@@ -3,6 +3,7 @@
 Covers:
 - run_workflows() with no workflow flags → (False, [])
 - run_workflows() with a single named workflow
+- WorkflowEngine loads bundled presets by name (integration)
 - run_workflows() with multiple named workflows (chaining)
 - run_workflows() with inline --enhance-stage flags
 - run_workflows() with both named and inline workflows
@@ -372,3 +373,70 @@ class TestRunWorkflowsDryRun:
        for engine in engines:
            engine.preview.assert_called_once()
            engine.run.assert_not_called()


# ────────────────── bundled preset loading (integration) ─────────────────────


class TestBundledPresetsLoad:
    """Verify WorkflowEngine can load each bundled preset by name.

    These are real integration tests – they actually read the YAML files
    shipped inside the package via importlib.resources.
    """

    BUNDLED_NAMES = [
        "default",
        "minimal",
        "security-focus",
        "architecture-comprehensive",
        "api-documentation",
    ]

    @pytest.mark.parametrize("preset_name", BUNDLED_NAMES)
    def test_bundled_preset_loads(self, preset_name):
        from skill_seekers.cli.enhancement_workflow import WorkflowEngine

        engine = WorkflowEngine(preset_name)
        wf = engine.workflow
        assert wf.name, f"Workflow '{preset_name}' has no name"
        assert isinstance(wf.stages, list), "stages must be a list"
        assert len(wf.stages) > 0, f"Workflow '{preset_name}' has no stages"

    @pytest.mark.parametrize("preset_name", BUNDLED_NAMES)
    def test_bundled_preset_stages_have_required_fields(self, preset_name):
        from skill_seekers.cli.enhancement_workflow import WorkflowEngine

        engine = WorkflowEngine(preset_name)
        for stage in engine.workflow.stages:
            assert stage.name, f"Stage in '{preset_name}' has no name"
            assert stage.type in ("builtin", "custom"), (
                f"Stage '{stage.name}' in '{preset_name}' has unknown type '{stage.type}'"
            )

    def test_unknown_preset_raises_file_not_found(self):
        from skill_seekers.cli.enhancement_workflow import WorkflowEngine

        with pytest.raises(FileNotFoundError):
            WorkflowEngine("completely-nonexistent-preset-xyz")

    def test_list_bundled_workflows_returns_all(self):
        from skill_seekers.cli.enhancement_workflow import list_bundled_workflows

        names = list_bundled_workflows()
        for expected in self.BUNDLED_NAMES:
            assert expected in names, f"'{expected}' not in bundled workflows: {names}"

    def test_list_user_workflows_empty_when_no_user_dir(self, tmp_path, monkeypatch):
        """list_user_workflows returns [] when ~/.config/skill-seekers/workflows/ does not exist."""
        from skill_seekers.cli import enhancement_workflow as ew_mod
        import pathlib

        fake_home = tmp_path / "fake_home"
        fake_home.mkdir()
        monkeypatch.setenv("HOME", str(fake_home))
        # Also patch Path.home() used inside the module
        monkeypatch.setattr(pathlib.Path, "home", staticmethod(lambda: fake_home))

        names = ew_mod.list_user_workflows()
        assert names == []
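`list_user_workflows` is only exercised here through the package, but a plausible shape for such a helper (glob `.yaml`/`.yml` stems in the user directory, empty list when the directory is absent) can be sketched standalone. The function below is an assumption for illustration, not the real implementation:

```python
import tempfile
from pathlib import Path

# Hypothetical sketch of a list_user_workflows-style helper: returns the
# workflow names found as *.yaml / *.yml files, or [] if the dir is missing.
def list_user_workflows(user_dir: Path) -> list:
    if not user_dir.is_dir():
        return []
    return sorted(p.stem for p in user_dir.glob("*.y*ml"))

with tempfile.TemporaryDirectory() as tmp:
    wf_dir = Path(tmp) / "workflows"
    print(list_user_workflows(wf_dir))  # → [] (directory absent)
    wf_dir.mkdir()
    (wf_dir / "my-wf.yaml").write_text("name: my-wf\n")
    print(list_user_workflows(wf_dir))  # → ['my-wf']
```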
301 tests/test_workflow_tools_mcp.py (Normal file)
@@ -0,0 +1,301 @@
"""Tests for the workflow MCP tools.

Covers:
- list_workflows_tool
- get_workflow_tool
- create_workflow_tool
- update_workflow_tool
- delete_workflow_tool
"""

import textwrap
from pathlib import Path
from unittest.mock import patch

import pytest
import yaml


MINIMAL_YAML = textwrap.dedent("""\
name: test-workflow
description: A test workflow
version: "1.0"
applies_to:
  - codebase_analysis
variables: {}
stages:
  - name: step1
    type: custom
    target: all
    uses_history: false
    enabled: true
    prompt: "Do something useful."
post_process:
  reorder_sections: []
  add_metadata: {}
""")

INVALID_YAML_NO_STAGES = textwrap.dedent("""\
name: broken
description: Missing stages key
version: "1.0"
""")


# ─────────────────────────────────────────────────────────────────────────────
# Fixtures & helpers
# ─────────────────────────────────────────────────────────────────────────────

@pytest.fixture
def tmp_user_dir(tmp_path, monkeypatch):
    """Redirect USER_WORKFLOWS_DIR in workflow_tools to a temp dir."""
    fake_dir = tmp_path / "workflows"
    fake_dir.mkdir()
    monkeypatch.setattr(
        "skill_seekers.mcp.tools.workflow_tools.USER_WORKFLOWS_DIR", fake_dir
    )
    return fake_dir


def _mock_bundled_names(names=("default", "security-focus")):
    return patch(
        "skill_seekers.mcp.tools.workflow_tools._bundled_names",
        return_value=list(names),
    )


def _mock_bundled_text(mapping: dict):
    def _read(name):
        return mapping.get(name)
    return patch(
        "skill_seekers.mcp.tools.workflow_tools._read_bundled",
        side_effect=_read,
    )


def _text(result) -> str:
    """Extract text from first TextContent in result."""
    if isinstance(result, list) and result:
        item = result[0]
        return item.text if hasattr(item, "text") else str(item)
    return str(result)


# ─────────────────────────────────────────────────────────────────────────────
# list_workflows_tool
# ─────────────────────────────────────────────────────────────────────────────

class TestListWorkflowsTool:
    def test_lists_bundled_and_user(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import list_workflows_tool

        (tmp_user_dir / "my-workflow.yaml").write_text(MINIMAL_YAML, encoding="utf-8")

        bundled_map = {"default": MINIMAL_YAML}
        with _mock_bundled_names(["default"]), _mock_bundled_text(bundled_map):
            result = list_workflows_tool({})

        text = _text(result)
        assert "default" in text
        assert "bundled" in text
        assert "my-workflow" in text
        assert "user" in text

    def test_empty_lists(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import list_workflows_tool

        with _mock_bundled_names([]):
            result = list_workflows_tool({})

        text = _text(result)
        # Should return a valid (possibly empty) YAML list or empty
        data = yaml.safe_load(text)
        assert isinstance(data, (list, type(None)))


# ─────────────────────────────────────────────────────────────────────────────
# get_workflow_tool
# ─────────────────────────────────────────────────────────────────────────────

class TestGetWorkflowTool:
    def test_get_bundled(self):
        from skill_seekers.mcp.tools.workflow_tools import get_workflow_tool

        with patch(
            "skill_seekers.mcp.tools.workflow_tools._read_workflow",
            return_value=MINIMAL_YAML,
        ):
            result = get_workflow_tool({"name": "default"})

        assert "stages" in _text(result)

    def test_get_not_found(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import get_workflow_tool

        with _mock_bundled_names([]):
            result = get_workflow_tool({"name": "ghost"})

        text = _text(result)
        assert "not found" in text.lower() or "Error" in text

    def test_missing_name_param(self):
        from skill_seekers.mcp.tools.workflow_tools import get_workflow_tool

        result = get_workflow_tool({})
        assert "required" in _text(result).lower()

    def test_get_user_workflow(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import get_workflow_tool

        (tmp_user_dir / "custom.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
        result = get_workflow_tool({"name": "custom"})
        assert "stages" in _text(result)


# ─────────────────────────────────────────────────────────────────────────────
# create_workflow_tool
# ─────────────────────────────────────────────────────────────────────────────

class TestCreateWorkflowTool:
    def test_create_new_workflow(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import create_workflow_tool

        result = create_workflow_tool({"name": "new-wf", "content": MINIMAL_YAML})
        text = _text(result)
        assert "Created" in text or "created" in text.lower()
        assert (tmp_user_dir / "new-wf.yaml").exists()

    def test_create_duplicate_fails(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import create_workflow_tool

        (tmp_user_dir / "existing.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
        result = create_workflow_tool({"name": "existing", "content": MINIMAL_YAML})
        assert "already exists" in _text(result).lower()

    def test_create_invalid_yaml(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import create_workflow_tool

        result = create_workflow_tool(
            {"name": "bad", "content": INVALID_YAML_NO_STAGES}
        )
        assert "invalid" in _text(result).lower() or "stages" in _text(result).lower()

    def test_create_missing_name(self):
        from skill_seekers.mcp.tools.workflow_tools import create_workflow_tool

        result = create_workflow_tool({"content": MINIMAL_YAML})
        assert "required" in _text(result).lower()

    def test_create_missing_content(self):
        from skill_seekers.mcp.tools.workflow_tools import create_workflow_tool

        result = create_workflow_tool({"name": "test"})
        assert "required" in _text(result).lower()


# ─────────────────────────────────────────────────────────────────────────────
# update_workflow_tool
# ─────────────────────────────────────────────────────────────────────────────

class TestUpdateWorkflowTool:
    def test_update_user_workflow(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import update_workflow_tool

        (tmp_user_dir / "my-wf.yaml").write_text("old content", encoding="utf-8")

        with _mock_bundled_names([]):
            result = update_workflow_tool(
                {"name": "my-wf", "content": MINIMAL_YAML}
            )

        text = _text(result)
        assert "Updated" in text or "updated" in text.lower()
        assert (tmp_user_dir / "my-wf.yaml").read_text() == MINIMAL_YAML

    def test_update_bundled_refused(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import update_workflow_tool

        with _mock_bundled_names(["default"]):
            result = update_workflow_tool(
                {"name": "default", "content": MINIMAL_YAML}
            )

        assert "bundled" in _text(result).lower()

    def test_update_invalid_yaml(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import update_workflow_tool

        (tmp_user_dir / "my-wf.yaml").write_text(MINIMAL_YAML, encoding="utf-8")

        with _mock_bundled_names([]):
            result = update_workflow_tool(
                {"name": "my-wf", "content": INVALID_YAML_NO_STAGES}
            )

        assert "invalid" in _text(result).lower() or "stages" in _text(result).lower()

    def test_update_user_override_of_bundled_name(self, tmp_user_dir):
        """A user workflow with same name as bundled should be updatable."""
        from skill_seekers.mcp.tools.workflow_tools import update_workflow_tool

        (tmp_user_dir / "default.yaml").write_text("old", encoding="utf-8")

        with _mock_bundled_names(["default"]):
            result = update_workflow_tool(
                {"name": "default", "content": MINIMAL_YAML}
            )

        text = _text(result)
        # User has a file named 'default', so it should succeed
        assert "Updated" in text or "updated" in text.lower()


# ─────────────────────────────────────────────────────────────────────────────
# delete_workflow_tool
# ─────────────────────────────────────────────────────────────────────────────

class TestDeleteWorkflowTool:
    def test_delete_user_workflow(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import delete_workflow_tool

        wf = tmp_user_dir / "to-delete.yaml"
        wf.write_text(MINIMAL_YAML, encoding="utf-8")

        with _mock_bundled_names([]):
            result = delete_workflow_tool({"name": "to-delete"})

        assert "Deleted" in _text(result) or "deleted" in _text(result).lower()
        assert not wf.exists()

    def test_delete_bundled_refused(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import delete_workflow_tool

        with _mock_bundled_names(["default"]):
            result = delete_workflow_tool({"name": "default"})

        assert "bundled" in _text(result).lower()

    def test_delete_nonexistent(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import delete_workflow_tool

        with _mock_bundled_names([]):
            result = delete_workflow_tool({"name": "ghost"})

        assert "not found" in _text(result).lower()

    def test_delete_yml_extension(self, tmp_user_dir):
        from skill_seekers.mcp.tools.workflow_tools import delete_workflow_tool

        wf = tmp_user_dir / "my-wf.yml"
        wf.write_text(MINIMAL_YAML, encoding="utf-8")

        with _mock_bundled_names([]):
            result = delete_workflow_tool({"name": "my-wf"})

        assert not wf.exists()

    def test_delete_missing_name(self):
        from skill_seekers.mcp.tools.workflow_tools import delete_workflow_tool

        result = delete_workflow_tool({})
        assert "required" in _text(result).lower()
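`test_delete_yml_extension` implies the tools resolve both `.yaml` and `.yml` file extensions when looking up a user workflow by name. A hypothetical sketch of that resolution order (not the package's actual lookup code):

```python
import tempfile
from pathlib import Path

# Hypothetical name-to-file resolution: prefer .yaml, fall back to .yml.
def resolve_user_workflow(user_dir: Path, name: str):
    for ext in (".yaml", ".yml"):
        candidate = user_dir / f"{name}{ext}"
        if candidate.exists():
            return candidate
    return None

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "my-wf.yml").write_text("name: my-wf\n")
    found = resolve_user_workflow(d, "my-wf")
    print(found.name if found else None)  # → my-wf.yml
```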
568 tests/test_workflows_command.py (Normal file)
@@ -0,0 +1,568 @@
"""Tests for the workflows CLI command.

Covers:
- workflows list (bundled + user)
- workflows show (found / not-found)
- workflows copy (bundled → user dir)
- workflows add (install custom YAML)
- workflows remove (user dir; refuses bundled)
- workflows validate (valid / invalid)
"""

import textwrap
from pathlib import Path
from unittest.mock import patch, MagicMock

import pytest
import yaml

# Import the MODULE object (not just individual symbols) so we can patch it
# directly via patch.object(). This survives any sys.modules manipulation by
# other tests (e.g. test_swift_detection clears skill_seekers.cli.*), because
# we hold a reference to the original module object at collection time.
import skill_seekers.cli.workflows_command as _wf_cmd

cmd_list = _wf_cmd.cmd_list
cmd_show = _wf_cmd.cmd_show
cmd_copy = _wf_cmd.cmd_copy
cmd_add = _wf_cmd.cmd_add
cmd_remove = _wf_cmd.cmd_remove
cmd_validate = _wf_cmd.cmd_validate


# ─────────────────────────────────────────────────────────────────────────────
# Fixtures
# ─────────────────────────────────────────────────────────────────────────────

MINIMAL_YAML = textwrap.dedent("""\
name: test-workflow
description: A test workflow
version: "1.0"
applies_to:
  - codebase_analysis
variables: {}
stages:
  - name: step1
    type: custom
    target: all
    uses_history: false
    enabled: true
    prompt: "Do something useful."
post_process:
  reorder_sections: []
  add_metadata: {}
""")

INVALID_YAML = "not: a: valid: workflow"  # missing 'stages' key


@pytest.fixture
def tmp_user_dir(tmp_path, monkeypatch):
    """Redirect USER_WORKFLOWS_DIR to a temp directory.

    Uses patch.object on the captured module reference so the patch is applied
    to the same module dict that the functions reference via __globals__,
    regardless of any sys.modules manipulation by other tests.
    """
    fake_dir = tmp_path / "workflows"
    fake_dir.mkdir()
    monkeypatch.setattr(_wf_cmd, "USER_WORKFLOWS_DIR", fake_dir)
    return fake_dir


@pytest.fixture
def sample_yaml_file(tmp_path):
    """Write MINIMAL_YAML to a temp file and return its path."""
    p = tmp_path / "test-workflow.yaml"
    p.write_text(MINIMAL_YAML, encoding="utf-8")
    return p


# ─────────────────────────────────────────────────────────────────────────────
# Helpers
# ─────────────────────────────────────────────────────────────────────────────

def _mock_bundled(names=("default", "minimal", "security-focus")):
    """Patch list_bundled_workflows on the captured module object."""
    return patch.object(_wf_cmd, "list_bundled_workflows", return_value=list(names))


def _mock_bundled_text(name_to_text: dict):
    """Patch _bundled_yaml_text on the captured module object."""
    def _bundled_yaml_text(name):
        return name_to_text.get(name)
    return patch.object(_wf_cmd, "_bundled_yaml_text", side_effect=_bundled_yaml_text)


# ─────────────────────────────────────────────────────────────────────────────
# cmd_list
# ─────────────────────────────────────────────────────────────────────────────

class TestCmdList:
    def test_shows_bundled_and_user(self, capsys, tmp_user_dir):
        (tmp_user_dir / "my-workflow.yaml").write_text(MINIMAL_YAML, encoding="utf-8")

        bundled_text = {"default": MINIMAL_YAML}
        with _mock_bundled(["default"]), _mock_bundled_text(bundled_text):
            rc = cmd_list()

        out = capsys.readouterr().out
        assert rc == 0
        assert "Bundled" in out
        assert "default" in out
        assert "User" in out
        assert "my-workflow" in out

    def test_no_workflows(self, capsys, tmp_user_dir):
        # tmp_user_dir is empty, and we mock bundled to return empty
        with _mock_bundled([]):
            rc = cmd_list()
        assert rc == 0
        assert "No workflows" in capsys.readouterr().out

    def test_only_bundled(self, capsys, tmp_user_dir):
        with _mock_bundled(["default"]), _mock_bundled_text({"default": MINIMAL_YAML}):
            rc = cmd_list()
        out = capsys.readouterr().out
        assert rc == 0
        assert "Bundled" in out
        assert "User" not in out  # no user workflows


# ─────────────────────────────────────────────────────────────────────────────
# cmd_show
# ─────────────────────────────────────────────────────────────────────────────

class TestCmdShow:
    def test_show_bundled(self, capsys):
        with patch.object(_wf_cmd, "_workflow_yaml_text", return_value=MINIMAL_YAML):
            rc = cmd_show("default")
        assert rc == 0
        assert "name: test-workflow" in capsys.readouterr().out

    def test_show_not_found(self, capsys):
        with patch.object(_wf_cmd, "_workflow_yaml_text", return_value=None):
            rc = cmd_show("nonexistent")
        assert rc == 1
        assert "not found" in capsys.readouterr().err.lower()

    def test_show_user_workflow(self, capsys, tmp_user_dir):
        (tmp_user_dir / "my-wf.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
        rc = cmd_show("my-wf")
        assert rc == 0
        assert "name: test-workflow" in capsys.readouterr().out


# ─────────────────────────────────────────────────────────────────────────────
# cmd_copy
# ─────────────────────────────────────────────────────────────────────────────

class TestCmdCopy:
    def test_copy_bundled_to_user_dir(self, capsys, tmp_user_dir):
        with _mock_bundled_text({"security-focus": MINIMAL_YAML}):
            rc = cmd_copy(["security-focus"])

        assert rc == 0
        dest = tmp_user_dir / "security-focus.yaml"
        assert dest.exists()
        assert dest.read_text(encoding="utf-8") == MINIMAL_YAML

    def test_copy_nonexistent(self, capsys, tmp_user_dir):
        with _mock_bundled_text({}):
|
||||
with _mock_bundled([]):
|
||||
rc = cmd_copy(["ghost-workflow"])
|
||||
assert rc == 1
|
||||
assert "not found" in capsys.readouterr().err.lower()
|
||||
|
||||
def test_copy_overwrites_existing(self, capsys, tmp_user_dir):
|
||||
existing = tmp_user_dir / "default.yaml"
|
||||
existing.write_text("old content", encoding="utf-8")
|
||||
|
||||
with _mock_bundled_text({"default": MINIMAL_YAML}):
|
||||
rc = cmd_copy(["default"])
|
||||
|
||||
assert rc == 0
|
||||
assert existing.read_text(encoding="utf-8") == MINIMAL_YAML
|
||||
assert "Warning" in capsys.readouterr().out
|
||||
|
||||
def test_copy_multiple(self, capsys, tmp_user_dir):
|
||||
"""Copying multiple bundled workflows installs all of them."""
|
||||
texts = {"default": MINIMAL_YAML, "minimal": MINIMAL_YAML}
|
||||
with _mock_bundled_text(texts):
|
||||
rc = cmd_copy(["default", "minimal"])
|
||||
|
||||
assert rc == 0
|
||||
assert (tmp_user_dir / "default.yaml").exists()
|
||||
assert (tmp_user_dir / "minimal.yaml").exists()
|
||||
|
||||
def test_copy_partial_failure_continues(self, capsys, tmp_user_dir):
|
||||
"""A missing workflow doesn't prevent others from being copied."""
|
||||
with _mock_bundled_text({"default": MINIMAL_YAML}), _mock_bundled(["default"]):
|
||||
rc = cmd_copy(["default", "ghost"])
|
||||
|
||||
assert rc == 1
|
||||
assert (tmp_user_dir / "default.yaml").exists()
|
||||
assert "not found" in capsys.readouterr().err.lower()
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# cmd_add
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
class TestCmdAdd:
|
||||
def test_add_valid_yaml(self, capsys, tmp_user_dir, sample_yaml_file):
|
||||
rc = cmd_add([str(sample_yaml_file)])
|
||||
assert rc == 0
|
||||
dest = tmp_user_dir / "test-workflow.yaml"
|
||||
assert dest.exists()
|
||||
assert "Installed" in capsys.readouterr().out
|
||||
|
||||
def test_add_with_override_name(self, capsys, tmp_user_dir, sample_yaml_file):
|
||||
rc = cmd_add([str(sample_yaml_file)], override_name="custom-name")
|
||||
assert rc == 0
|
||||
assert (tmp_user_dir / "custom-name.yaml").exists()
|
||||
|
||||
def test_add_invalid_yaml(self, capsys, tmp_path, tmp_user_dir):
|
||||
bad = tmp_path / "bad.yaml"
|
||||
bad.write_text(INVALID_YAML, encoding="utf-8")
|
||||
rc = cmd_add([str(bad)])
|
||||
assert rc == 1
|
||||
assert "invalid" in capsys.readouterr().err.lower()
|
||||
|
||||
def test_add_nonexistent_file(self, capsys, tmp_user_dir):
|
||||
rc = cmd_add(["/nonexistent/path/workflow.yaml"])
|
||||
assert rc == 1
|
||||
assert "does not exist" in capsys.readouterr().err.lower()
|
||||
|
||||
def test_add_wrong_extension(self, capsys, tmp_path, tmp_user_dir):
|
||||
f = tmp_path / "workflow.json"
|
||||
f.write_text("{}", encoding="utf-8")
|
||||
rc = cmd_add([str(f)])
|
||||
assert rc == 1
|
||||
|
||||
def test_add_overwrites_with_warning(self, capsys, tmp_user_dir, sample_yaml_file):
|
||||
# Pre-create the destination
|
||||
(tmp_user_dir / "test-workflow.yaml").write_text("old", encoding="utf-8")
|
||||
rc = cmd_add([str(sample_yaml_file)])
|
||||
assert rc == 0
|
||||
assert "Warning" in capsys.readouterr().out
|
||||
|
||||
def test_add_multiple_files(self, capsys, tmp_user_dir, tmp_path):
|
||||
"""Adding multiple YAML files installs all of them."""
|
||||
wf1 = tmp_path / "wf-one.yaml"
|
||||
wf2 = tmp_path / "wf-two.yaml"
|
||||
wf1.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
wf2.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
|
||||
rc = cmd_add([str(wf1), str(wf2)])
|
||||
assert rc == 0
|
||||
assert (tmp_user_dir / "wf-one.yaml").exists()
|
||||
assert (tmp_user_dir / "wf-two.yaml").exists()
|
||||
out = capsys.readouterr().out
|
||||
assert "wf-one" in out
|
||||
assert "wf-two" in out
|
||||
|
||||
def test_add_multiple_name_flag_rejected(self, capsys, tmp_user_dir, tmp_path):
|
||||
"""--name with multiple files returns error without installing."""
|
||||
wf1 = tmp_path / "wf-a.yaml"
|
||||
wf2 = tmp_path / "wf-b.yaml"
|
||||
wf1.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
wf2.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
|
||||
rc = cmd_add([str(wf1), str(wf2)], override_name="should-fail")
|
||||
assert rc == 1
|
||||
assert "cannot be used" in capsys.readouterr().err.lower()
|
||||
assert not (tmp_user_dir / "should-fail.yaml").exists()
|
||||
|
||||
def test_add_partial_failure_continues(self, capsys, tmp_user_dir, tmp_path):
|
||||
"""A bad file in the middle doesn't prevent valid files from installing."""
|
||||
good = tmp_path / "good.yaml"
|
||||
bad = tmp_path / "bad.yaml"
|
||||
good.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
bad.write_text(INVALID_YAML, encoding="utf-8")
|
||||
|
||||
rc = cmd_add([str(good), str(bad)])
|
||||
assert rc == 1 # non-zero because of the bad file
|
||||
assert (tmp_user_dir / "good.yaml").exists() # good one still installed
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# cmd_remove
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
class TestCmdRemove:
|
||||
def test_remove_user_workflow(self, capsys, tmp_user_dir):
|
||||
wf = tmp_user_dir / "my-wf.yaml"
|
||||
wf.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
|
||||
with _mock_bundled([]):
|
||||
rc = cmd_remove(["my-wf"])
|
||||
|
||||
assert rc == 0
|
||||
assert not wf.exists()
|
||||
assert "Removed" in capsys.readouterr().out
|
||||
|
||||
def test_remove_bundled_refused(self, capsys, tmp_user_dir):
|
||||
with _mock_bundled(["default"]):
|
||||
rc = cmd_remove(["default"])
|
||||
assert rc == 1
|
||||
assert "bundled" in capsys.readouterr().err.lower()
|
||||
|
||||
def test_remove_nonexistent(self, capsys, tmp_user_dir):
|
||||
with _mock_bundled([]):
|
||||
rc = cmd_remove(["ghost"])
|
||||
assert rc == 1
|
||||
assert "not found" in capsys.readouterr().err.lower()
|
||||
|
||||
def test_remove_yml_extension(self, capsys, tmp_user_dir):
|
||||
wf = tmp_user_dir / "my-wf.yml"
|
||||
wf.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
|
||||
with _mock_bundled([]):
|
||||
rc = cmd_remove(["my-wf"])
|
||||
|
||||
assert rc == 0
|
||||
assert not wf.exists()
|
||||
|
||||
def test_remove_multiple(self, capsys, tmp_user_dir):
|
||||
"""Removing multiple workflows deletes all of them."""
|
||||
(tmp_user_dir / "wf-a.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
(tmp_user_dir / "wf-b.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
|
||||
with _mock_bundled([]):
|
||||
rc = cmd_remove(["wf-a", "wf-b"])
|
||||
|
||||
assert rc == 0
|
||||
assert not (tmp_user_dir / "wf-a.yaml").exists()
|
||||
assert not (tmp_user_dir / "wf-b.yaml").exists()
|
||||
|
||||
def test_remove_partial_failure_continues(self, capsys, tmp_user_dir):
|
||||
"""A missing workflow doesn't prevent others from being removed."""
|
||||
(tmp_user_dir / "wf-good.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
|
||||
with _mock_bundled([]):
|
||||
rc = cmd_remove(["wf-good", "ghost"])
|
||||
|
||||
assert rc == 1
|
||||
assert not (tmp_user_dir / "wf-good.yaml").exists()
|
||||
assert "not found" in capsys.readouterr().err.lower()
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# cmd_validate
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
class TestCmdValidate:
|
||||
def test_validate_bundled_by_name(self, capsys):
|
||||
with patch.object(_wf_cmd, "WorkflowEngine") as mock_engine_cls:
|
||||
mock_wf = MagicMock()
|
||||
mock_wf.name = "security-focus"
|
||||
mock_wf.description = "Security review"
|
||||
mock_wf.version = "1.0"
|
||||
mock_wf.stages = [MagicMock(name="step1", type="custom", enabled=True)]
|
||||
mock_engine_cls.return_value.workflow = mock_wf
|
||||
|
||||
rc = cmd_validate("security-focus")
|
||||
|
||||
assert rc == 0
|
||||
out = capsys.readouterr().out
|
||||
assert "valid" in out.lower()
|
||||
assert "security-focus" in out
|
||||
|
||||
def test_validate_file_path(self, capsys, sample_yaml_file):
|
||||
rc = cmd_validate(str(sample_yaml_file))
|
||||
assert rc == 0
|
||||
assert "valid" in capsys.readouterr().out.lower()
|
||||
|
||||
def test_validate_not_found(self, capsys):
|
||||
with patch.object(_wf_cmd, "WorkflowEngine", side_effect=FileNotFoundError("not found")):
|
||||
rc = cmd_validate("nonexistent")
|
||||
assert rc == 1
|
||||
assert "error" in capsys.readouterr().err.lower()
|
||||
|
||||
def test_validate_invalid_content(self, capsys, tmp_path):
|
||||
bad = tmp_path / "bad.yaml"
|
||||
bad.write_text("- this: is\n- not: valid workflow", encoding="utf-8")
|
||||
rc = cmd_validate(str(bad))
|
||||
assert rc == 1
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# main() entry point
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
class TestMain:
|
||||
def test_main_no_action_exits_0(self):
|
||||
from skill_seekers.cli.workflows_command import main
|
||||
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
main([])
|
||||
assert exc.value.code == 0
|
||||
|
||||
def test_main_list(self, capsys, tmp_user_dir):
|
||||
from skill_seekers.cli.workflows_command import main
|
||||
|
||||
# tmp_user_dir is empty; mock bundled to return nothing
|
||||
with _mock_bundled([]):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
main(["list"])
|
||||
assert exc.value.code == 0
|
||||
|
||||
def test_main_validate_success(self, capsys, sample_yaml_file):
|
||||
from skill_seekers.cli.workflows_command import main
|
||||
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
main(["validate", str(sample_yaml_file)])
|
||||
assert exc.value.code == 0
|
||||
|
||||
def test_main_show_success(self, capsys, tmp_user_dir):
|
||||
(tmp_user_dir / "my-wf.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["show", "my-wf"])
|
||||
assert exc.value.code == 0
|
||||
assert "name: test-workflow" in capsys.readouterr().out
|
||||
|
||||
def test_main_show_not_found_exits_1(self, capsys, tmp_user_dir):
|
||||
with patch.object(_wf_cmd, "_workflow_yaml_text", return_value=None):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["show", "ghost"])
|
||||
assert exc.value.code == 1
|
||||
|
||||
def test_main_copy_single(self, capsys, tmp_user_dir):
|
||||
with _mock_bundled_text({"default": MINIMAL_YAML}):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["copy", "default"])
|
||||
assert exc.value.code == 0
|
||||
assert (tmp_user_dir / "default.yaml").exists()
|
||||
|
||||
def test_main_copy_multiple(self, capsys, tmp_user_dir):
|
||||
texts = {"default": MINIMAL_YAML, "minimal": MINIMAL_YAML}
|
||||
with _mock_bundled_text(texts):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["copy", "default", "minimal"])
|
||||
assert exc.value.code == 0
|
||||
assert (tmp_user_dir / "default.yaml").exists()
|
||||
assert (tmp_user_dir / "minimal.yaml").exists()
|
||||
|
||||
def test_main_copy_not_found_exits_1(self, capsys, tmp_user_dir):
|
||||
with _mock_bundled_text({}), _mock_bundled([]):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["copy", "ghost"])
|
||||
assert exc.value.code == 1
|
||||
|
||||
def test_main_add_single_file(self, capsys, tmp_user_dir, sample_yaml_file):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["add", str(sample_yaml_file)])
|
||||
assert exc.value.code == 0
|
||||
assert (tmp_user_dir / "test-workflow.yaml").exists()
|
||||
|
||||
def test_main_add_multiple_files(self, capsys, tmp_user_dir, tmp_path):
|
||||
wf1 = tmp_path / "wf-a.yaml"
|
||||
wf2 = tmp_path / "wf-b.yaml"
|
||||
wf1.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
wf2.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["add", str(wf1), str(wf2)])
|
||||
assert exc.value.code == 0
|
||||
assert (tmp_user_dir / "wf-a.yaml").exists()
|
||||
assert (tmp_user_dir / "wf-b.yaml").exists()
|
||||
|
||||
def test_main_add_with_name_flag(self, capsys, tmp_user_dir, sample_yaml_file):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["add", str(sample_yaml_file), "--name", "renamed"])
|
||||
assert exc.value.code == 0
|
||||
assert (tmp_user_dir / "renamed.yaml").exists()
|
||||
|
||||
def test_main_add_name_rejected_for_multiple(self, capsys, tmp_user_dir, tmp_path):
|
||||
wf1 = tmp_path / "wf-a.yaml"
|
||||
wf2 = tmp_path / "wf-b.yaml"
|
||||
wf1.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
wf2.write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["add", str(wf1), str(wf2), "--name", "bad"])
|
||||
assert exc.value.code == 1
|
||||
|
||||
def test_main_remove_single(self, capsys, tmp_user_dir):
|
||||
(tmp_user_dir / "my-wf.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
with _mock_bundled([]):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["remove", "my-wf"])
|
||||
assert exc.value.code == 0
|
||||
assert not (tmp_user_dir / "my-wf.yaml").exists()
|
||||
|
||||
def test_main_remove_multiple(self, capsys, tmp_user_dir):
|
||||
(tmp_user_dir / "wf-a.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
(tmp_user_dir / "wf-b.yaml").write_text(MINIMAL_YAML, encoding="utf-8")
|
||||
with _mock_bundled([]):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["remove", "wf-a", "wf-b"])
|
||||
assert exc.value.code == 0
|
||||
assert not (tmp_user_dir / "wf-a.yaml").exists()
|
||||
assert not (tmp_user_dir / "wf-b.yaml").exists()
|
||||
|
||||
def test_main_remove_bundled_refused(self, capsys, tmp_user_dir):
|
||||
with _mock_bundled(["default"]):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["remove", "default"])
|
||||
assert exc.value.code == 1
|
||||
|
||||
def test_main_remove_not_found_exits_1(self, capsys, tmp_user_dir):
|
||||
with _mock_bundled([]):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
_wf_cmd.main(["remove", "ghost"])
|
||||
assert exc.value.code == 1
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# Parser argument binding
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
class TestWorkflowsParserArgumentBinding:
|
||||
"""Verify nargs='+' parsers produce lists with correct attribute names."""
|
||||
|
||||
def _parse(self, argv):
|
||||
"""Parse argv through the standalone main() parser by capturing args."""
|
||||
import argparse
|
||||
parser = argparse.ArgumentParser()
|
||||
subparsers = parser.add_subparsers(dest="action")
|
||||
|
||||
copy_p = subparsers.add_parser("copy")
|
||||
copy_p.add_argument("workflow_names", nargs="+")
|
||||
|
||||
add_p = subparsers.add_parser("add")
|
||||
add_p.add_argument("files", nargs="+")
|
||||
add_p.add_argument("--name")
|
||||
|
||||
remove_p = subparsers.add_parser("remove")
|
||||
remove_p.add_argument("workflow_names", nargs="+")
|
||||
|
||||
return parser.parse_args(argv)
|
||||
|
||||
def test_copy_single_produces_list(self):
|
||||
args = self._parse(["copy", "security-focus"])
|
||||
assert args.workflow_names == ["security-focus"]
|
||||
|
||||
def test_copy_multiple_produces_list(self):
|
||||
args = self._parse(["copy", "security-focus", "minimal"])
|
||||
assert args.workflow_names == ["security-focus", "minimal"]
|
||||
|
||||
def test_add_single_produces_list(self):
|
||||
args = self._parse(["add", "my.yaml"])
|
||||
assert args.files == ["my.yaml"]
|
||||
|
||||
def test_add_multiple_produces_list(self):
|
||||
args = self._parse(["add", "a.yaml", "b.yaml", "c.yaml"])
|
||||
assert args.files == ["a.yaml", "b.yaml", "c.yaml"]
|
||||
|
||||
def test_add_name_flag_captured(self):
|
||||
args = self._parse(["add", "my.yaml", "--name", "custom"])
|
||||
assert args.files == ["my.yaml"]
|
||||
assert args.name == "custom"
|
||||
|
||||
def test_remove_single_produces_list(self):
|
||||
args = self._parse(["remove", "my-wf"])
|
||||
assert args.workflow_names == ["my-wf"]
|
||||
|
||||
def test_remove_multiple_produces_list(self):
|
||||
args = self._parse(["remove", "wf-a", "wf-b"])
|
||||
assert args.workflow_names == ["wf-a", "wf-b"]
|
||||