feat: agent-agnostic refactor, smart SPA discovery, marketplace pipeline (#336)

* feat: fix unified scraper pipeline gaps, add multi-agent support, and Unity skill configs

  Fix multiple bugs in the unified scraper pipeline discovered while creating Unity skills (Spine, Addressables, DOTween):

  - Fix doc scraper KeyError by passing base_url in temp config
  - Fix scraped_data list-vs-dict bug in detect_conflicts() and merge_sources()
  - Add Phase 6 auto-enhancement from config "enhancement" block (LOCAL + API mode)
  - Add "browser": true config support for JavaScript SPA documentation sites
  - Add Phase 3 skip message for better UX
  - Add subprocess timeout (3600s) for doc scraper
  - Fix SkillEnhancer missing skill_dir argument in API mode
  - Fix browser renderer defaults (60s timeout, domcontentloaded wait condition)
  - Fix C3.x JSON filename mismatch (design_patterns.json → all_patterns.json)
  - Fix workflow builtin target handling when no pattern data available
  - Make AI enhancement timeout configurable via SKILL_SEEKER_ENHANCE_TIMEOUT env var (300s default)
  - Add C#, Go, Rust, Swift, Ruby, PHP, GDScript to GitHub scraper extension map
  - Add multi-agent LOCAL mode support across all 17 scrapers (--agent flag)
  - Add Kimi/Moonshot platform support (API keys, agent presets, config wizard)
  - Add unity-game-dev.yaml workflow (7 stages covering Unity-specific patterns)
  - Add 3 Unity skill configs (Spine, Addressables, DOTween)
  - Add comprehensive Claude bias audit report

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create AgentClient abstraction, remove hardcoded Claude from 5 enhancers (#334)

  Phase 1 of the full agent-agnostic refactor. Creates a centralized AgentClient that all enhancers use instead of hardcoded subprocess calls and model names.

  New file:
  - agent_client.py: unified AI client supporting API mode (Anthropic, Moonshot, Google, OpenAI) and LOCAL mode (Claude Code, Kimi, Codex, Copilot, OpenCode, custom agents). Provides detect_api_key(), get_model(), detect_default_target().
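  A rough sketch of the provider-detection half of such a client. The env-var names are the real ones listed above; the priority order, function bodies, and fallback are illustrative assumptions, and the key-prefix heuristic mirrors the sk-ant-/AIza detection added in a later review fix:

  ```python
  import os

  # Assumed priority order; the real detect_api_key() may differ.
  PROVIDER_ENV_VARS = [
      ("anthropic", "ANTHROPIC_API_KEY"),
      ("moonshot", "MOONSHOT_API_KEY"),
      ("google", "GOOGLE_API_KEY"),
      ("openai", "OPENAI_API_KEY"),
  ]


  def detect_api_key():
      """Return (provider, key) for the first configured provider, else None."""
      for provider, env_var in PROVIDER_ENV_VARS:
          key = os.environ.get(env_var, "").strip()
          if key:
              return provider, key
      return None


  def detect_provider_from_key(key):
      """Guess the provider from the key prefix (sk-ant- -> anthropic, AIza -> google)."""
      if key.startswith("sk-ant-"):
          return "anthropic"
      if key.startswith("AIza"):
          return "google"
      return "anthropic"  # assumed fallback when the prefix is unrecognized
  ```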
  Refactored (removed all hardcoded ["claude", ...] subprocess calls):
  - ai_enhancer.py: -140 lines, delegates to AgentClient
  - config_enhancer.py: -150 lines, removed _run_claude_cli()
  - guide_enhancer.py: -120 lines, removed _check_claude_cli(), _call_claude_*()
  - unified_enhancer.py: -100 lines, removed _check_claude_cli(), _call_claude_*()
  - codebase_scraper.py: collapsed 3 functions into 1 using AgentClient

  Fixed:
  - utils.py: has_api_key()/get_api_key() now check all providers
  - enhance_skill.py, video_scraper.py, video_visual.py: model names configurable via ANTHROPIC_MODEL env var
  - enhancement_workflow.py: uses call() with _call_claude() fallback

  Net: -153 lines of code while adding full multi-agent support.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 2 agent-agnostic refactor — defaults, help text, merge mode, MCP (#334)

  Phase 2 of the full agent-agnostic refactor:

  Default targets:
  - Changed default="claude" to auto-detect from API keys in 5 argument files and 3 CLI scripts (install_skill, upload_skill, enhance_skill)
  - Added AgentClient.detect_default_target() for runtime resolution
  - MCP server functions now use an "auto" default with runtime detection

  Help text (16+ argument files):
  - Replaced "ANTHROPIC_API_KEY" / "Claude Code" with agent-neutral wording
  - Now mentions all API keys (ANTHROPIC, MOONSHOT, etc.) and "AI coding agent"

  Log messages:
  - main.py, enhance_command.py: "Claude Code CLI" → dynamic agent name
  - enhance_command.py docstring: "Claude Code" → "AI coding agent"

  Merge mode rename:
  - Added "ai-enhanced" as the preferred merge mode name
  - "claude-enhanced" kept as a backward-compatible alias
  - Renamed ClaudeEnhancedMerger → AIEnhancedMerger (with alias)
  - Updated choices, validators, and descriptions

  MCP server descriptions:
  - server_fastmcp.py: "Claude AI skills" → "LLM skills" in tool descriptions
  - packaging_tools.py: updated defaults and dry-run messages

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 3 agent-agnostic refactor — docstrings, MCP descriptions, README (#334)

  Phase 3 of the full agent-agnostic refactor:

  Module docstrings (17+ scraper files):
  - "Claude Skill Converter" → "AI Skill Converter"
  - "Build Claude skill" → "Build AI/LLM skill"
  - "Asking Claude" → "Asking AI"
  - Updated doc_scraper, github_scraper, pdf_scraper, word_scraper, epub_scraper, video_scraper, enhance_skill, enhance_skill_local, unified_scraper, and others

  MCP server_legacy.py (30+ fixes):
  - All tool descriptions: "Claude skill" → "LLM skill"
  - "Upload to Claude" → "Upload skill"
  - "enhance with Claude Code" → "enhance with AI agent"
  - Kept claude.ai/skills URLs (platform-specific, correct)

  MCP README.md:
  - Added a multi-agent support note at the top
  - "Claude AI skills" → "LLM skills" throughout
  - Updated examples to show multi-platform usage
  - Kept Claude Code in the supported agents list (accurate)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 3 continued — remaining docstring and comment fixes (#334)

  Additional agent-neutral text fixes in 8 files missed from the initial Phase 3 commit:
  - config_extractor.py, config_manager.py, constants.py: comments
  - enhance_command.py: docstring and print messages
  - guide_enhancer.py: class/module docstrings
  - parsers/enhance_parser.py, install_parser.py: help text
  - signal_flow_analyzer.py: docstring

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* workflow added

* fix: address code review issues in AgentClient and Phase 6 (#334)

  Fixes found during commit review:

  1. AgentClient._call_local: only append "Write your response to:" when the caller explicitly passes output_file (was always appending)
  2. Codex agent: added a uses_stdin flag to the preset, pipe the prompt via stdin instead of DEVNULL (codex reads from stdin with the "-" arg)
  3. Provider detection: added _detect_provider_from_key() to detect the provider from the API key prefix (sk-ant- → anthropic, AIza → google) instead of always assuming anthropic
  4. Phase 6 API mode: replaced direct SkillEnhancer/ANTHROPIC_API_KEY with AgentClient for multi-provider support (Moonshot, Google, OpenAI)
  5. config_enhancer: removed the output_file path from the prompt — AgentClient manages temp files and output detection

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: make claude adaptor model name configurable via ANTHROPIC_MODEL env var

  Missed in the Phase 1 refactor — adaptors/claude.py:381 had a hardcoded model name without the os.environ.get() wrapper that all other files use.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add copilot stdin support, custom agent, and kimi aliases (#334)

  Additional agent improvements from Kimi review:
  - Added uses_stdin: True to the copilot agent preset (reads from stdin like codex)
  - Added custom agent support via the SKILL_SEEKER_AGENT_CMD env var in _call_local()
  - Added kimi_code/kimi-code aliases in normalize_agent_name()
  - Added "kimi" to --target choices in enhance arguments
  - Updated help text with MOONSHOT_API_KEY across argument files

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: Kimi CLI integration — add uses_stdin and output parsing (#334)

  Kimi CLI's --print mode requires stdin piping and outputs structured protocol messages (TurnBegin, TextPart, etc.) instead of plain text.

  Fixes:
  - Added uses_stdin: True to the kimi preset (was not piping the prompt)
  - Added a parse_output: "kimi" flag to the preset
  - Added _parse_kimi_output() to extract text from TextPart lines
  - Kimi now returns clean text instead of a raw protocol dump

  Tested: kimi returns '{"status": "ok"}' correctly via AgentClient.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: Kimi CLI in enhance_skill_local — remove wrong skip-permissions, use absolute path

  Two bugs in the enhance_skill_local.py AGENT_PRESETS for Kimi:

  1. supports_skip_permissions was True — Kimi doesn't support --dangerously-skip-permissions, only Claude does. Fixed to False.
  2. {skill_dir} was resolved as a relative path — Kimi CLI requires absolute paths for --work-dir. Fixed with .resolve().

  Tested: `skill-seekers enhance output/test-e2e/ --agent kimi` now works end-to-end (107s, 9233 bytes output).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove invalid --enhance-level flag from enhance subprocess calls

  doc_scraper.py and video_scraper.py were passing --enhance-level to skill-seekers-enhance, which doesn't accept that flag. This caused enhancement to fail silently after scraping completed.

  Fixes:
  - Removed --enhance-level from enhance subprocess calls
  - Added --agent passthrough in doc_scraper.py
  - Fixed log messages to show the correct command

  Tested: `skill-seekers create <url> --enhance-level 1` now chains scrape → enhance successfully.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add --agent and --agent-cmd to create command UNIVERSAL_ARGUMENTS

  The --agent flag was defined in common.py but not imported into the create command's UNIVERSAL_ARGUMENTS, so it wasn't available when using `skill-seekers create <source> --agent kimi`. Now all 17 source types support the --agent flag via the create command.
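  The _parse_kimi_output() behavior described for the Kimi preset can be sketched roughly as follows. The JSON-lines wire format assumed here is a guess based on the message names mentioned above (TurnBegin, TextPart), not the documented Kimi protocol:

  ```python
  import json


  def parse_kimi_output(raw):
      """Extract plain text from Kimi CLI --print protocol output.

      Assumes each protocol message is one JSON object per line with a
      "type" field, and that TextPart messages carry the text payload.
      The real wire format may differ from this sketch.
      """
      parts = []
      for line in raw.splitlines():
          line = line.strip()
          if not line:
              continue
          try:
              msg = json.loads(line)
          except json.JSONDecodeError:
              continue  # skip non-JSON protocol noise
          if msg.get("type") == "TextPart":
              parts.append(msg.get("text", ""))
      return "".join(parts)
  ```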
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update docs data_file path after moving to cache directory

  The scraped_data["documentation"] stored the original output/ path for data_file, but the directory was moved to .skillseeker-cache/ afterward. Phase 2 conflict detection then failed with FileNotFoundError trying to read the old path. Now updates data_file to point to the cache location after the move.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: multi-language code signature extraction in GitHub scraper

  The GitHub scraper only analyzed files matching the primary language (by bytes). For multi-language repos like spine-runtimes (C++ primary but C# is the target), this meant 0 C# files were analyzed.

  Fix: analyze the top 3 languages with known extension mappings instead of just the primary. Also support a "language" field in the config source to explicitly target specific languages (e.g., "language": "C#"). Updated Unity configs to specify language: "C#" for focused analysis.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: per-file language detection + remove artificial analysis limits

  Rewrites the GitHub scraper's _extract_signatures_and_tests() to detect the language per file from its extension instead of only analyzing the primary language. This fixes multi-language repos like spine-runtimes (C++ primary) where C# files were never analyzed.

  Changes:
  - Build a reverse ext→language map, detect language per file
  - Analyze ALL files with known extensions (not just the primary language)
  - Config "language" field works as an optional filter, not a workaround
  - Store per-file language + languages_analyzed in output
  - Remove the 50-file API mode limit (rate limiting already handles this)
  - Remove the 100-file default config extraction limit (now unlimited by default)
  - Fix unified scraper default max_pages from 100 to 500 (matches constants.py)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove remaining 100-file limit in config_extractor.extract_from_directory

  The find_config_files default was changed to unlimited, but extract_from_directory and the CLI --max-files still defaulted to 100.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: replace interactive terminal merge with automated AgentClient call

  AIEnhancedMerger._launch_claude_merge() used to open a terminal window, run a bash script, and poll for a file — requiring manual interaction. Now uses AgentClient.call() to send the merge prompt directly and parse the JSON response. Fully automated, no terminal needed, works with any configured AI agent (Claude, Kimi, etc.).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add marketplace pipeline for publishing skills to Claude Code plugin repos

  Connect the three-repo pipeline: configs repo → Skill Seekers engine → plugin marketplace repos. Enables automated publishing of generated skills directly into Claude Code plugin repositories with proper plugin.json and marketplace.json structure.

  New components:
  - MarketplaceManager: registry for plugin marketplace repos at ~/.skill-seekers/marketplaces.json with per-repo git tokens, branch config, and default author metadata
  - MarketplacePublisher: clones the marketplace repo, creates the plugin directory structure (skills/, .claude-plugin/plugin.json), updates marketplace.json, commits and pushes. Includes skill_name validation to prevent path traversal, and cleanup of partial state on git failures
  - 4 MCP tools: add_marketplace, list_marketplaces, remove_marketplace, publish_to_marketplace — registered in the FastMCP server
  - Phase 6 in the install workflow: automatic marketplace publishing after packaging, triggered by the --marketplace CLI arg or the marketplace_targets config field

  CLI additions:
  - --marketplace NAME: publish to a registered marketplace after packaging
  - --marketplace-category CAT: plugin category (default: development)
  - --create-branch: create a feature branch instead of committing to main

  Security:
  - Skill name regex validation (^[a-zA-Z0-9][a-zA-Z0-9._-]*$) prevents path traversal attacks via malicious SKILL.md frontmatter
  - has_api_key variable scoping fix in the install workflow summary
  - try/finally cleanup of partial plugin directories on publish failure

  Config schema:
  - Optional marketplace_targets field in config JSON for multi-marketplace auto-publishing: [{"marketplace": "spyke", "category": "development"}]
  - Backward compatible — ignored by older versions

  Tests: 58 tests (36 manager + 22 publisher, including 2 integration tests using the file:// git protocol for the full publish success path)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: thread agent selection through entire enhancement pipeline

  Propagates the --agent and --agent-cmd CLI parameters through all enhancement components so users can use any supported coding agent (kimi, claude, copilot, codex, opencode) consistently across the full pipeline, not just in top-level enhancement.
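  The skill-name validation above can be sketched in a few lines. The regex is the one quoted in the commit; the surrounding function name and error handling are illustrative assumptions:

  ```python
  import re

  # Regex from the commit: first char alphanumeric, then alnum/dot/underscore/hyphen.
  # Slashes and a leading dot are rejected, which blocks "../" traversal components.
  SKILL_NAME_RE = re.compile(r"^[a-zA-Z0-9][a-zA-Z0-9._-]*$")


  def validate_skill_name(name):
      """Reject names that could escape the plugin directory (path traversal)."""
      if not SKILL_NAME_RE.match(name):
          raise ValueError(f"Invalid skill name: {name!r}")
      return name
  ```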
  Agent parameter threading:
  - AIEnhancer: accepts agent param, passes to AgentClient
  - ConfigEnhancer: accepts agent param, passes to AgentClient
  - WorkflowEngine: accepts agent param, passes to sub-enhancers (PatternEnhancer, TestExampleEnhancer, AIEnhancer)
  - ArchitecturalPatternDetector: accepts agent param for AI enhancement
  - analyze_codebase(): accepts agent/agent_cmd, forwards to ConfigEnhancer, ArchitecturalPatternDetector, and doc processing
  - UnifiedScraper: reads agent from CLI args, forwards to doc scraper subprocess, C3.x analysis, and LOCAL enhancement
  - CreateCommand: forwards --agent and --agent-cmd to subprocess argv
  - workflow_runner: passes agent to WorkflowEngine for inline/named workflows

  Timeout improvements:
  - Default enhancement timeout increased from 300s (5 min) to 2700s (45 min) to accommodate large skill generation with local agents
  - New get_default_timeout() in agent_client.py with env var override (SKILL_SEEKER_ENHANCE_TIMEOUT) supporting an 'unlimited' value
  - Config enhancement block supports a "timeout": "unlimited" field
  - Removed hardcoded timeout=300 and timeout=600 calls in config_enhancer and merge_sources, now using the centralized default

  CLI additions (unified_scraper):
  - --agent AGENT: select local coding agent for enhancement
  - --agent-cmd CMD: override agent command template (advanced)

  Config: unity-dotween.json updated with agent=kimi, timeout=unlimited; removed unused file_patterns

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add claude-code unified config for Claude Code CLI skill generation

  Unified config combining official Claude Code documentation and source code analysis. Covers internals, architecture, tools, commands, IDE integrations, MCP, plugins, skills, and development workflows.
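  The centralized timeout resolution described above (get_default_timeout() with the SKILL_SEEKER_ENHANCE_TIMEOUT override) could look roughly like this. Representing the unlimited timeout as None (which subprocess treats as "no timeout") is an assumption, as is the exact fallback behavior on unparseable values:

  ```python
  import os

  DEFAULT_ENHANCE_TIMEOUT = 2700  # 45 minutes, per the commit
  UNLIMITED_TIMEOUT = None        # assumption: None disables the subprocess timeout


  def get_default_timeout():
      """Resolve the enhancement timeout from SKILL_SEEKER_ENHANCE_TIMEOUT.

      "unlimited", "none", or "0" disable the timeout entirely; any other
      value is read as a number of seconds.
      """
      raw = os.environ.get("SKILL_SEEKER_ENHANCE_TIMEOUT", "").strip().lower()
      if not raw:
          return DEFAULT_ENHANCE_TIMEOUT
      if raw in ("unlimited", "none", "0"):
          return UNLIMITED_TIMEOUT
      try:
          return int(raw)
      except ValueError:
          return DEFAULT_ENHANCE_TIMEOUT  # assumed fallback for bad input
  ```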
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add multi-agent support verification report and test artifacts

  - AGENT_SUPPORT_VERIFICATION.md: verification report confirming agent parameter threading works across all enhancement components
  - END_TO_END_EXAMPLES.md: complete workflows for all 17 source types with both Claude and Kimi agents
  - test_agents.sh: shell script for real-world testing of agent support across major CLI commands with both agents
  - test_realworld.md: real-world test scenarios for manual QA

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add .env to .gitignore to prevent secret exposure

  The .env file containing API keys (ANTHROPIC_API_KEY, GITHUB_TOKEN, etc.) was not in .gitignore, causing it to appear as untracked and risking an accidental commit. Added .env, .env.local, and .env.*.local patterns.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: URL filtering uses base directory instead of full page URL (#331)

  is_valid_url() checked url.startswith(self.base_url), where base_url could be a full page path like ".../manual/index.html". Sibling pages like ".../manual/LoadingAssets.html" failed the check because they don't start with ".../index.html".

  Now strips the filename to get the directory prefix: "https://example.com/docs/index.html" → "https://example.com/docs/"

  This fixes SPA sites like Unity's DocFX docs where browser mode renders the page but sibling links were filtered out.

  Closes #331

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: pass language config through to GitHub scraper in unified flow

  The unified scraper built github_config from source fields but didn't include the "language" field. The GitHub scraper's per-file detection read self.config.get("language", "") which was always empty, so it fell back to analyzing all languages instead of the focused C# filter.
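  The is_valid_url() fix described above (compare against the base directory, not the full page URL) boils down to stripping the trailing filename. A minimal sketch with assumed helper names, using the transformation quoted in the commit:

  ```python
  from urllib.parse import urlparse


  def base_directory(base_url):
      """Strip a trailing filename so sibling pages pass the prefix check.

      'https://example.com/docs/index.html' -> 'https://example.com/docs/'
      """
      parsed = urlparse(base_url)
      path = parsed.path
      if "/" in path and not path.endswith("/"):
          # Drop the last path segment (the page filename), keep the directory.
          path = path.rsplit("/", 1)[0] + "/"
      return parsed._replace(path=path, query="", fragment="").geturl()


  def is_valid_url(url, base_url):
      # Sketch of the fixed check: sibling pages under the same directory
      # now pass, even though they don't start with ".../index.html".
      return url.startswith(base_directory(base_url))
  ```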
  For DOTween (a C#-only repo), this caused 0 files analyzed: without the language filter, it analyzed the top 3 languages, but the file tree matching failed silently.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: centralize all enhancement timeouts to 45min default with unlimited support

  All enhancement/AI timeouts now use get_default_timeout() from agent_client.py instead of scattered hardcoded values (120s, 300s, 600s).

  - Default: 2700s (45 minutes)
  - Override: SKILL_SEEKER_ENHANCE_TIMEOUT env var
  - Unlimited: set to "unlimited", "none", or "0"

  Updated: agent_client.py, enhance_skill_local.py, arguments/enhance.py, enhance_command.py, unified_enhancer.py, unified_scraper.py

  Not changed (different purposes):
  - Browser page load timeout (60s)
  - API HTTP request timeout (120s)
  - Doc scraper subprocess timeout (3600s)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add browser_wait_until and browser_extra_wait config for SPA docs

  DocFX sites (Unity docs) render navigation via JavaScript after the initial page load. With domcontentloaded, only 1 link was found. With networkidle + a 5s extra wait, 95 content pages are discovered.

  New config options for documentation sources:
  - browser_wait_until: "networkidle" | "load" | "domcontentloaded"
  - browser_extra_wait: milliseconds to wait after page load for lazy nav

  Updated the Addressables config to use networkidle + 5000ms extra wait. Browser settings are passed through the unified scraper to the doc scraper config.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: three-layer smart discovery engine for SPA documentation sites

  Replaces the browser_wait_until/browser_extra_wait config hacks with a proper discovery engine that runs before the BFS crawl loop:

  - Layer 1: sitemap.xml — checks the domain root for a sitemap, parses <loc> tags
  - Layer 2: llms.txt — existing mechanism (unchanged)
  - Layer 3: SPA nav — renders the index page with networkidle via Playwright, extracts all links from the fully-rendered DOM sidebar/TOC

  The BFS crawl then uses domcontentloaded (fast) since all pages are already discovered. No config hacks needed — browser mode automatically triggers SPA discovery when only 1 page is found.

  Tested: the Unity Addressables DocFX site now discovers 95 pages (was 1). Removed browser_wait_until/browser_extra_wait from the Addressables config.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: replace manual arg forwarding with dynamic routing in create command

  The create command manually hardcoded ~60% of scraper flags in _route_*() methods, causing ~40 flags to be silently dropped. Every new flag required edits in 2 places (arguments/create.py + create_command.py), guaranteed to drift.

  Replaced with _build_argv() — a dynamic forwarder that iterates vars(self.args) and forwards all explicitly-set arguments automatically, using the same pattern as main.py::_reconstruct_argv(). This eliminates the root cause of all flag gaps.

  Changes in create_command.py (-380 lines, +175 lines = net -205):
  - Added _build_argv() dynamic arg forwarder with a dest→flag translation map for mismatched names (async_mode→--async, video_playlist→--playlist, skip_config→--skip-config-patterns, workflow_var→--var)
  - Added _call_module() helper (dedups the sys.argv swap pattern)
  - Simplified all _route_*() methods from 50-70 lines to 5-10 lines each
  - Deleted _add_common_args() entirely (subsumed by _build_argv)
  - _route_generic() now forwards ALL args, not just universal ones

  New flags accessible via the create command:
  - --from-json: build skill from pre-extracted JSON (all source types)
  - --skip-api-reference: skip API reference generation (local codebase)
  - --skip-dependency-graph: skip dependency analysis (local codebase)
  - --skip-config-patterns: skip config pattern extraction (local codebase)
  - --no-comments: skip comment extraction (local codebase)
  - --depth: analysis depth control (local codebase, deprecated)
  - --setup: auto-detect GPU/install video deps (video)

  Bug fix in unified_scraper.py:
  - Fixed C3.x pattern data loss: unified_scraper read patterns/detected_patterns.json, but codebase_scraper writes patterns/all_patterns.json. Changed both read locations (line 828 for local sources, line 1597 for GitHub C3.x) to use the correct filename. This was causing 100% loss of design pattern data (e.g., 905 patterns detected but 0 included in the final skill).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 5 code review issues in marketplace and package pipeline

  Fixes found by automated code review of the marketplace feature and the package command:

  1. --marketplace flag silently ignored in package_skill.py CLI — added MarketplacePublisher invocation after successful packaging when --marketplace is provided. Previously the flag was parsed but never acted on.
  2. Missing 7 platform choices in --target (package.py) — added minimax, opencode, deepseek, qwen, openrouter, together, fireworks to the argparse choices list. These platforms have registered adaptors but were rejected by the argument parser.
  3. is_update always True for new marketplace registrations — two separate datetime.now() calls produced different microsecond timestamps, making added_at != updated_at always. Fixed by assigning a single timestamp to both fields.
  4. Shallow clone (depth=1) caused push failures for marketplace repos — MarketplacePublisher now does full clones instead of using GitConfigRepo's shallow clone (which is designed for read-only config fetching). A full clone is required for the commit+push workflow.
  5. Partial plugin dir not cleaned on force=True failure — removed the `and not force` guard from the cleanup logic: if an operation fails midway, the partial directory should be cleaned regardless of whether force was set.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address dynamic routing edge cases in create_command

  Fixes from code review of the _build_argv() refactor:

  1. Non-None defaults forwarded unconditionally — added enhance_level=2, doc_version="", video_languages="en", whisper_model="base", platform="slack", visual_interval=0.7, visual_min_gap=0.5, visual_similarity=3.0 to the defaults dict so they're only forwarded when the user explicitly overrides them. This fixes video sources incorrectly getting --enhance-level 2 (the video default is 0).
  2. video_url dest not translated — added "video_url": "--url" to _DEST_TO_FLAG so create correctly forwards --video-url as --url to video_scraper.py.
  3. Video positional args double-forwarded — added video_url, video_playlist, video_file to _SKIP_ARGS since _route_video() already handles them via positional args from source detection.
  4. Removed dead workflow_var entry from _DEST_TO_FLAG — the create parser uses the key "var", not "workflow_var", so the translation was never triggered.
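  Putting the pieces of the _build_argv() behavior described above together, a simplified sketch. The names _DEST_TO_FLAG and _SKIP_ARGS come from the commits, but the table contents here are abbreviated and the bodies are illustrative; note also that comparing values against parser defaults (as sketched) cannot distinguish an explicitly passed default like `--enhance-level 2`, a limitation a later commit in this PR addresses:

  ```python
  import argparse

  # Abbreviated illustrations of the maps the commits describe.
  _DEST_TO_FLAG = {"async_mode": "--async", "video_url": "--url"}
  _SKIP_ARGS = {"source", "command", "video_url", "video_playlist", "video_file"}
  _DEFAULTS = {"enhance_level": 2, "platform": "slack"}  # forwarded only when overridden


  def build_argv(args, allowlist=None):
      """Forward explicitly-set argparse values as CLI flags for a sub-scraper."""
      argv = []
      for dest, value in vars(args).items():
          if dest in _SKIP_ARGS or value is None:
              continue
          if allowlist is not None and dest not in allowlist:
              continue  # config route: only forward flags the target accepts
          if dest in _DEFAULTS and value == _DEFAULTS[dest]:
              continue  # looks like an untouched parser default
          flag = _DEST_TO_FLAG.get(dest, "--" + dest.replace("_", "-"))
          if isinstance(value, bool):
              if value:
                  argv.append(flag)  # store_true flags have no value
          else:
              argv.extend([flag, str(value)])
      return argv
  ```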
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 15 broken tests and --from-json crash bug in create command

  Fixes found by Kimi code review of the dynamic routing refactor:

  1. 3 test_create_arguments.py failures — the UNIVERSAL_ARGUMENTS count changed from 19 to 21 (added agent, agent_cmd). Updated the expected count and name set. Moved from_json out of UNIVERSAL to ADVANCED_ARGUMENTS since not all scrapers support it.
  2. 12 test_create_integration_basic.py failures — tests called _add_common_args(), which was deleted in the refactor. Rewrote _collect_argv() to use _build_argv() via CreateCommand with SourceDetector. Updated _make_args defaults to match the new parameter set.
  3. --from-json crash bug — it was in UNIVERSAL_ARGUMENTS, so create accepted it for all source types, but the web/github/local scrapers don't support it. Forwarding it caused argparse "unrecognized arguments" errors. Moved to ADVANCED_ARGUMENTS with documentation listing which source types support it.
  4. Additional _is_explicitly_set defaults — added enhance_level=2, doc_version="", video_languages="en", whisper_model="base", platform="slack", visual_interval/min_gap/similarity defaults to prevent unconditional forwarding of parser defaults.
  5. Video arg handling — added video_url to the _DEST_TO_FLAG translation map; added video_url/video_playlist/video_file to _SKIP_ARGS (handled as positionals by _route_video).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: C3.x analysis data loss — read from references/ after _generate_references() cleanup

  Root cause: _generate_references() in codebase_scraper.py copies analysis directories (patterns/, test_examples/, config_patterns/, architecture/, dependencies/, api_reference/) into references/ and then DELETES the originals to avoid duplication (Issue #279). But unified_scraper.py reads from the original paths after analyze_codebase() returns — by which time the originals are gone.

  This caused 100% data loss for all 6 C3.x data types (design patterns, test examples, config patterns, architecture, dependencies, API reference) in the unified scraper pipeline. The data was correctly detected (e.g., 905 patterns in 510 files) but never made it into the final skill.

  Fix: added a _load_json_fallback() method that checks references/{subdir}/ first (where _generate_references moves the data), falling back to the original path. Applied to both GitHub C3.x analysis (line ~1599) and local source analysis (line ~828).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add allowlist to _build_argv for config route to unified_scraper

  _build_argv() was forwarding all CLI args (--name, --doc-version, etc.) to unified_scraper, which doesn't accept them. Added an allowlist parameter to _build_argv() — when provided, ONLY args in the allowlist are forwarded. The config route now uses the _UNIFIED_SCRAPER_ARGS allowlist with the exact set of flags unified_scraper accepts.

  This is a targeted patch — the proper fix is the ExecutionContext singleton refactor planned separately.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add force=True to marketplace publish from package CLI

  The package command's --marketplace flag didn't pass force=True to MarketplacePublisher, so re-publishing an existing skill would fail with an "already exists" error instead of overwriting.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add push_config tool for publishing configs to registered source repos

  New ConfigPublisher class that validates configs, places them in the correct category directory, commits, and pushes to registered source repositories. Follows the MarketplacePublisher pattern.
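  The _load_json_fallback() behavior described above is small enough to sketch in full; the method name and lookup order are from the commit, while the signature and JSON handling are illustrative assumptions:

  ```python
  import json
  from pathlib import Path


  def load_json_fallback(base_dir, subdir, filename):
      """Read analysis JSON, preferring references/<subdir>/ (where
      _generate_references() moves the data), then the original location."""
      candidates = (
          Path(base_dir) / "references" / subdir / filename,  # post-cleanup location
          Path(base_dir) / subdir / filename,                  # original location
      )
      for candidate in candidates:
          if candidate.exists():
              return json.loads(candidate.read_text())
      return None
  ```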
  Features:
  - Auto-detect category from config name/description
  - Validate via ConfigValidator + the repo's validate-config.py
  - Support feature branch or direct push
  - Force overwrite of existing configs
  - MCP tool: push_config(config_path, source_name, category)

  Usage: push_config(config_path="configs/unity-spine.json", source_name="spyke")

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: security hardening, error handling, tests, and cleanup

  Security:
  - Remove command injection via cloned repo script execution (config_publisher)
  - Replace git add -A with targeted staging (marketplace_publisher)
  - Clear auth tokens from the cached .git/config after clone
  - Use defusedxml for sitemap XML parsing (XXE protection)
  - Add path traversal validation for config names

  Error handling:
  - AgentClient: specific exception handling for rate limit, auth, and connection errors
  - AgentClient: log subprocess stderr on non-zero exit, raise on explicit API mode failure
  - config_publisher: only catch ValueError for validation warnings

  Logic bugs:
  - Fix _build_argv silently dropping --enhance-level 2 (matched the default)
  - Fix URL filtering over-broadening (strip to the parent instead of adding /)
  - Log a warning when _call_module returns a None exit code

  Tests (134 new):
  - test_agent_client.py: 71 tests for normalize, detect, init, timeout, model
  - test_config_publisher.py: 23 tests for detect_category, publish, errors
  - test_create_integration_basic.py: 20 tests for _build_argv routing
  - Fix 11 pre-existing failures (guide_enhancer, doctor, install_skill, marketplace)

  Cleanup:
  - Remove 5 dev artifact files (-1405 lines)
  - Rename _launch_claude_merge to _launch_ai_merge

  All 3194 tests pass, 39 expected skips.
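  The sitemap layer of the discovery engine (Layer 1 above) boils down to fetching /sitemap.xml and collecting <loc> entries. A stdlib-only sketch of that parsing step; the shipped code uses defusedxml for XXE protection (per the security commit above), and the function name here is an assumption:

  ```python
  # The project uses defusedxml.ElementTree for XXE protection;
  # xml.etree is shown here only to keep the sketch stdlib-only.
  import xml.etree.ElementTree as ET

  SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"


  def parse_sitemap_urls(xml_text):
      """Collect <loc> URLs from a sitemap.xml document."""
      root = ET.fromstring(xml_text)
      urls = []
      for loc in root.iter(f"{SITEMAP_NS}loc"):
          if loc.text:
              urls.append(loc.text.strip())
      return urls
  ```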
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: pin ruff==0.15.8 in CI and reformat packaging_tools.py

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add missing pytest install to vector DB adaptor test jobs

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: reformat 7 files for ruff 0.15.8 and fix vector DB test path

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove test-week2-integration job referencing missing script

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update e2e test to accept dynamic platform name in upload phase

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
tests/test_agent_client.py (new file, 422 lines)
@@ -0,0 +1,422 @@
#!/usr/bin/env python3
"""Tests for the AgentClient unified AI client."""

import os
import subprocess
from unittest.mock import MagicMock, patch

from skill_seekers.cli.agent_client import (
    DEFAULT_ENHANCE_TIMEOUT,
    DEFAULT_MODELS,
    UNLIMITED_TIMEOUT,
    AgentClient,
    get_default_timeout,
    normalize_agent_name,
)


class TestNormalizeAgentName:
    """Test normalize_agent_name() alias resolution."""

    def test_claude_aliases(self):
        assert normalize_agent_name("claude-code") == "claude"
        assert normalize_agent_name("claude_code") == "claude"
        assert normalize_agent_name("claude") == "claude"

    def test_kimi_aliases(self):
        assert normalize_agent_name("kimi") == "kimi"
        assert normalize_agent_name("kimi-cli") == "kimi"
        assert normalize_agent_name("kimi_code") == "kimi"
        assert normalize_agent_name("kimi-code") == "kimi"

    def test_codex_aliases(self):
        assert normalize_agent_name("codex") == "codex"
        assert normalize_agent_name("codex-cli") == "codex"

    def test_copilot_aliases(self):
        assert normalize_agent_name("copilot") == "copilot"
        assert normalize_agent_name("copilot-cli") == "copilot"

    def test_opencode_aliases(self):
        assert normalize_agent_name("opencode") == "opencode"
        assert normalize_agent_name("open-code") == "opencode"
        assert normalize_agent_name("open_code") == "opencode"

    def test_custom_passthrough(self):
        assert normalize_agent_name("custom") == "custom"

    def test_unknown_name_passthrough(self):
        assert normalize_agent_name("some-unknown-agent") == "some-unknown-agent"

    def test_empty_string_defaults_to_claude(self):
        assert normalize_agent_name("") == "claude"

    def test_none_defaults_to_claude(self):
        # The docstring says "if not agent_name", which covers None too,
        # but the type hint says str. Called with an empty string, it returns "claude".
        assert normalize_agent_name("") == "claude"

    def test_case_insensitive(self):
        assert normalize_agent_name("Claude-Code") == "claude"
        assert normalize_agent_name("KIMI-CLI") == "kimi"
        assert normalize_agent_name("Codex") == "codex"

    def test_whitespace_stripped(self):
        assert normalize_agent_name(" claude ") == "claude"
        assert normalize_agent_name(" kimi-cli ") == "kimi"
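The alias tests above pin down a small, case-insensitive lookup; a minimal sketch of `normalize_agent_name()` consistent with these tests (the real implementation may differ):

```python
# Alias table inferred from the tests; the real table may hold more entries.
_ALIASES = {
    "claude-code": "claude", "claude_code": "claude",
    "kimi-cli": "kimi", "kimi_code": "kimi", "kimi-code": "kimi",
    "codex-cli": "codex",
    "copilot-cli": "copilot",
    "open-code": "opencode", "open_code": "opencode",
}

def normalize_agent_name(agent_name):
    # Empty/falsy input falls back to the default agent.
    if not agent_name:
        return "claude"
    # Case-insensitive, whitespace-tolerant matching.
    name = agent_name.strip().lower()
    # Unknown names pass through unchanged.
    return _ALIASES.get(name, name)
```

Passing unknown names through unchanged (rather than raising) keeps the "custom agent" escape hatch working without special-casing.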

class TestDetectApiKey:
    """Test AgentClient.detect_api_key() static method."""

    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "sk-ant-test123"}, clear=True)
    def test_detects_anthropic_key(self):
        key, provider = AgentClient.detect_api_key()
        assert key == "sk-ant-test123"
        assert provider == "anthropic"

    @patch.dict(os.environ, {"MOONSHOT_API_KEY": "moonshot-key-abc"}, clear=True)
    def test_detects_moonshot_key(self):
        key, provider = AgentClient.detect_api_key()
        assert key == "moonshot-key-abc"
        assert provider == "moonshot"

    @patch.dict(os.environ, {"GOOGLE_API_KEY": "AIzaSyTest123"}, clear=True)
    def test_detects_google_key(self):
        key, provider = AgentClient.detect_api_key()
        assert key == "AIzaSyTest123"
        assert provider == "google"

    @patch.dict(os.environ, {"OPENAI_API_KEY": "sk-test-openai"}, clear=True)
    def test_detects_openai_key(self):
        key, provider = AgentClient.detect_api_key()
        assert key == "sk-test-openai"
        assert provider == "openai"

    @patch.dict(os.environ, {"ANTHROPIC_AUTH_TOKEN": "sk-ant-auth"}, clear=True)
    def test_detects_anthropic_auth_token(self):
        key, provider = AgentClient.detect_api_key()
        assert key == "sk-ant-auth"
        assert provider == "anthropic"

    @patch.dict(os.environ, {}, clear=True)
    def test_no_key_returns_none(self):
        key, provider = AgentClient.detect_api_key()
        assert key is None
        assert provider is None

    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": " "}, clear=True)
    def test_whitespace_only_key_returns_none(self):
        key, provider = AgentClient.detect_api_key()
        assert key is None
        assert provider is None

    @patch.dict(
        os.environ,
        {"ANTHROPIC_API_KEY": "first-key", "OPENAI_API_KEY": "second-key"},
        clear=True,
    )
    def test_priority_order_anthropic_first(self):
        """API_KEY_MAP is iterated in order; ANTHROPIC_API_KEY comes first."""
        key, provider = AgentClient.detect_api_key()
        assert key == "first-key"
        assert provider == "anthropic"
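These tests imply an ordered env-var scan where the first non-blank key wins; a sketch of `detect_api_key()` consistent with them (the actual `API_KEY_MAP` contents and ordering beyond what the tests exercise are assumptions):

```python
import os

# Ordered mapping: the first env var that holds a non-blank value wins,
# so ANTHROPIC_API_KEY takes priority over the rest.
API_KEY_MAP = {
    "ANTHROPIC_API_KEY": "anthropic",
    "ANTHROPIC_AUTH_TOKEN": "anthropic",
    "MOONSHOT_API_KEY": "moonshot",
    "GOOGLE_API_KEY": "google",
    "OPENAI_API_KEY": "openai",
}

def detect_api_key():
    for env_var, provider in API_KEY_MAP.items():
        value = os.environ.get(env_var, "").strip()
        if value:  # whitespace-only values are treated as absent
            return value, provider
    return None, None
```

Stripping before the truthiness check is what makes the whitespace-only `ANTHROPIC_API_KEY` test return `(None, None)`.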

class TestAgentClientInit:
    """Test AgentClient.__init__() mode auto-detection."""

    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "sk-ant-test"}, clear=True)
    @patch.object(AgentClient, "_init_api_client", return_value=MagicMock())
    def test_auto_mode_with_api_key_sets_api(self, mock_init):
        client = AgentClient(mode="auto")
        assert client.mode == "api"
        assert client.api_key == "sk-ant-test"

    @patch.dict(os.environ, {}, clear=True)
    def test_auto_mode_without_api_key_sets_local(self):
        client = AgentClient(mode="auto")
        assert client.mode == "local"
        assert client.api_key is None

    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "sk-ant-test"}, clear=True)
    def test_explicit_local_mode_overrides_api_key(self):
        client = AgentClient(mode="local")
        assert client.mode == "local"

    @patch.dict(os.environ, {}, clear=True)
    @patch.object(AgentClient, "_init_api_client", return_value=MagicMock())
    def test_explicit_api_mode_with_provided_key(self, mock_init):
        client = AgentClient(mode="api", api_key="sk-ant-explicit")
        assert client.mode == "api"
        assert client.api_key == "sk-ant-explicit"

    @patch.dict(os.environ, {}, clear=True)
    def test_default_agent_is_claude(self):
        client = AgentClient(mode="local")
        assert client.agent == "claude"
        assert client.agent_display == "Claude Code"

    @patch.dict(os.environ, {"SKILL_SEEKER_AGENT": "kimi"}, clear=True)
    def test_env_agent_override(self):
        client = AgentClient(mode="local")
        assert client.agent == "kimi"

    @patch.dict(os.environ, {"SKILL_SEEKER_AGENT": "kimi"}, clear=True)
    def test_explicit_agent_overrides_env(self):
        client = AgentClient(mode="local", agent="codex")
        assert client.agent == "codex"

    @patch.dict(os.environ, {}, clear=True)
    @patch.object(AgentClient, "_init_api_client", return_value=MagicMock())
    def test_explicit_api_key_detects_provider(self, mock_init):
        client = AgentClient(mode="api", api_key="sk-ant-mykey")
        assert client.provider == "anthropic"

    @patch.dict(os.environ, {}, clear=True)
    @patch.object(AgentClient, "_init_api_client", return_value=MagicMock())
    def test_explicit_openai_key_detects_provider(self, mock_init):
        client = AgentClient(mode="api", api_key="sk-openai-key")
        assert client.provider == "openai"


class TestDetectProviderFromKey:
    """Test AgentClient._detect_provider_from_key() static method."""

    def test_anthropic_prefix(self):
        assert AgentClient._detect_provider_from_key("sk-ant-abc123") == "anthropic"

    def test_openai_prefix(self):
        assert AgentClient._detect_provider_from_key("sk-abc123") == "openai"

    def test_google_prefix(self):
        assert AgentClient._detect_provider_from_key("AIzaSyTest") == "google"

    @patch.dict(os.environ, {"MOONSHOT_API_KEY": "sk-moonshot-key"}, clear=True)
    def test_moonshot_via_env_match(self):
        result = AgentClient._detect_provider_from_key("sk-moonshot-key")
        assert result == "moonshot"

    @patch.dict(os.environ, {}, clear=True)
    def test_sk_prefix_without_moonshot_env_defaults_to_openai(self):
        result = AgentClient._detect_provider_from_key("sk-some-key")
        assert result == "openai"

    @patch.dict(os.environ, {}, clear=True)
    def test_unknown_prefix_defaults_to_anthropic(self):
        result = AgentClient._detect_provider_from_key("unknown-prefix-key")
        assert result == "anthropic"

    @patch.dict(os.environ, {"GOOGLE_API_KEY": "custom-google-key"}, clear=True)
    def test_env_var_match_for_unknown_prefix(self):
        result = AgentClient._detect_provider_from_key("custom-google-key")
        assert result == "google"
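The provider-detection tests above imply that an exact match against a known provider's env var is checked before the key-prefix heuristics (otherwise the `sk-moonshot-key` case would resolve to OpenAI); a sketch consistent with them (the env-var list and exact check order are partly assumed):

```python
import os

def detect_provider_from_key(api_key):
    # An exact match against a known provider env var takes precedence,
    # which lets Moonshot keys with an "sk-" prefix resolve correctly.
    for env_var, provider in [("MOONSHOT_API_KEY", "moonshot"),
                              ("GOOGLE_API_KEY", "google"),
                              ("OPENAI_API_KEY", "openai"),
                              ("ANTHROPIC_API_KEY", "anthropic")]:
        if os.environ.get(env_var) == api_key:
            return provider
    # Otherwise fall back to well-known key prefixes; "sk-ant-" must be
    # tested before the broader "sk-" prefix.
    if api_key.startswith("sk-ant-"):
        return "anthropic"
    if api_key.startswith("sk-"):
        return "openai"
    if api_key.startswith("AIza"):
        return "google"
    return "anthropic"  # conservative default for unknown prefixes
```

The prefix order matters: checking `sk-` before `sk-ant-` would misclassify every Anthropic key as OpenAI.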

class TestGetDefaultTimeout:
    """Test get_default_timeout() function."""

    @patch.dict(os.environ, {}, clear=True)
    def test_default_without_env(self):
        assert get_default_timeout() == DEFAULT_ENHANCE_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": "unlimited"}, clear=True)
    def test_unlimited_string(self):
        assert get_default_timeout() == UNLIMITED_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": "none"}, clear=True)
    def test_none_string(self):
        assert get_default_timeout() == UNLIMITED_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": "0"}, clear=True)
    def test_zero_string(self):
        assert get_default_timeout() == UNLIMITED_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": "600"}, clear=True)
    def test_valid_int_string(self):
        assert get_default_timeout() == 600

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": "-5"}, clear=True)
    def test_negative_value_returns_unlimited(self):
        assert get_default_timeout() == UNLIMITED_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": "not_a_number"}, clear=True)
    def test_invalid_string_returns_default(self):
        assert get_default_timeout() == DEFAULT_ENHANCE_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": " UNLIMITED "}, clear=True)
    def test_unlimited_with_whitespace_and_case(self):
        assert get_default_timeout() == UNLIMITED_TIMEOUT

    @patch.dict(os.environ, {"SKILL_SEEKER_ENHANCE_TIMEOUT": ""}, clear=True)
    def test_empty_env_returns_default(self):
        assert get_default_timeout() == DEFAULT_ENHANCE_TIMEOUT
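These timeout tests imply tolerant parsing of `SKILL_SEEKER_ENHANCE_TIMEOUT`: blank or unparseable values fall back to the default, while "unlimited"/"none"/zero/negative values disable the timeout. A sketch consistent with them (the concrete sentinel held by `UNLIMITED_TIMEOUT` is an assumption):

```python
import os

DEFAULT_ENHANCE_TIMEOUT = 300   # 300s default, per the commit message
UNLIMITED_TIMEOUT = None        # assumed sentinel meaning "no timeout"

def get_default_timeout():
    raw = os.environ.get("SKILL_SEEKER_ENHANCE_TIMEOUT", "").strip().lower()
    if not raw:
        return DEFAULT_ENHANCE_TIMEOUT
    if raw in ("unlimited", "none"):
        return UNLIMITED_TIMEOUT
    try:
        value = int(raw)
    except ValueError:
        # Unparseable -> safe default rather than crashing
        return DEFAULT_ENHANCE_TIMEOUT
    # Zero or negative means "no timeout"
    return UNLIMITED_TIMEOUT if value <= 0 else value
```

Lowercasing and stripping up front is what makes `" UNLIMITED "` behave the same as `"unlimited"`.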

class TestGetModel:
    """Test AgentClient.get_model() static method."""

    @patch.dict(os.environ, {}, clear=True)
    def test_default_anthropic_model(self):
        model = AgentClient.get_model("anthropic")
        assert model == DEFAULT_MODELS["anthropic"]

    @patch.dict(os.environ, {}, clear=True)
    def test_default_openai_model(self):
        model = AgentClient.get_model("openai")
        assert model == DEFAULT_MODELS["openai"]

    @patch.dict(os.environ, {}, clear=True)
    def test_default_google_model(self):
        model = AgentClient.get_model("google")
        assert model == DEFAULT_MODELS["google"]

    @patch.dict(os.environ, {}, clear=True)
    def test_default_moonshot_model(self):
        model = AgentClient.get_model("moonshot")
        assert model == DEFAULT_MODELS["moonshot"]

    @patch.dict(os.environ, {"SKILL_SEEKER_MODEL": "my-custom-model"}, clear=True)
    def test_global_override(self):
        model = AgentClient.get_model("anthropic")
        assert model == "my-custom-model"

    @patch.dict(os.environ, {"ANTHROPIC_MODEL": "claude-opus-4-20250514"}, clear=True)
    def test_provider_specific_env_var(self):
        model = AgentClient.get_model("anthropic")
        assert model == "claude-opus-4-20250514"

    @patch.dict(
        os.environ,
        {"SKILL_SEEKER_MODEL": "global-model", "ANTHROPIC_MODEL": "provider-model"},
        clear=True,
    )
    def test_global_override_takes_precedence_over_provider(self):
        model = AgentClient.get_model("anthropic")
        assert model == "global-model"

    @patch.dict(os.environ, {}, clear=True)
    def test_unknown_provider_falls_back_to_anthropic_default(self):
        model = AgentClient.get_model("unknown-provider")
        assert model == "claude-sonnet-4-20250514"

    @patch.dict(os.environ, {"OPENAI_MODEL": "gpt-5"}, clear=True)
    def test_openai_model_env_var(self):
        model = AgentClient.get_model("openai")
        assert model == "gpt-5"

    @patch.dict(os.environ, {"GOOGLE_MODEL": "gemini-ultra"}, clear=True)
    def test_google_model_env_var(self):
        model = AgentClient.get_model("google")
        assert model == "gemini-ultra"

class TestParseKimiOutput:
    """Test AgentClient._parse_kimi_output() static method."""

    def test_valid_textpart_output(self):
        raw = (
            "TurnBegin(turn_id=1)\n"
            "StepBegin(step_id=1)\n"
            "TextPart(type='text', text='Hello world')\n"
            "ThinkPart(type='think', think='...')\n"
            "TextPart(type='text', text='Second line')\n"
        )
        result = AgentClient._parse_kimi_output(raw)
        assert result == "Hello world\nSecond line"

    def test_single_textpart(self):
        raw = "TextPart(type='text', text='Only one part')\n"
        result = AgentClient._parse_kimi_output(raw)
        assert result == "Only one part"

    def test_no_textpart_falls_back_to_raw(self):
        raw = "Some random output without TextPart markers"
        result = AgentClient._parse_kimi_output(raw)
        assert result == raw

    def test_empty_string_returns_empty(self):
        result = AgentClient._parse_kimi_output("")
        assert result == ""

    def test_thinkpart_only_falls_back(self):
        raw = "ThinkPart(type='think', think='internal thinking')"
        result = AgentClient._parse_kimi_output(raw)
        assert result == raw
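Per these tests, the Kimi CLI emits event lines such as `TextPart(...)` and `ThinkPart(...)`, and the parser keeps only the text payloads, falling back to the raw output when none are present. A regex-based sketch consistent with the tests (the real parser may not use a regex, and payloads containing escaped quotes are out of scope here):

```python
import re

# Matches the text='...' payload of TextPart event lines only;
# ThinkPart and other event types are deliberately ignored.
_TEXTPART_RE = re.compile(r"TextPart\(type='text', text='([^']*)'\)")

def parse_kimi_output(raw):
    parts = _TEXTPART_RE.findall(raw)
    # No TextPart markers at all -> return the raw output unchanged.
    return "\n".join(parts) if parts else raw
```

The fallback keeps the client usable even if the CLI's event format changes: worst case, the caller sees the raw event stream instead of nothing.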

class TestIsAvailable:
    """Test AgentClient.is_available() method."""

    @patch.dict(os.environ, {}, clear=True)
    def test_api_mode_with_client_is_available(self):
        client = AgentClient(mode="local")
        # Force to api mode with a client
        client.mode = "api"
        client.client = MagicMock()
        assert client.is_available() is True

    @patch.dict(os.environ, {}, clear=True)
    def test_api_mode_without_client_is_not_available(self):
        client = AgentClient(mode="local")
        client.mode = "api"
        client.client = None
        assert client.is_available() is False

    @patch.dict(os.environ, {}, clear=True)
    @patch("subprocess.run")
    def test_local_mode_claude_available(self, mock_run):
        mock_run.return_value = MagicMock(returncode=0)
        client = AgentClient(mode="local", agent="claude")
        assert client.is_available() is True
        mock_run.assert_called_once()

    @patch.dict(os.environ, {}, clear=True)
    @patch("subprocess.run", side_effect=FileNotFoundError)
    def test_local_mode_cli_not_found(self, mock_run):
        client = AgentClient(mode="local", agent="claude")
        assert client.is_available() is False

    @patch.dict(os.environ, {}, clear=True)
    @patch("subprocess.run", side_effect=subprocess.TimeoutExpired(cmd="claude", timeout=5))
    def test_local_mode_timeout(self, mock_run):
        client = AgentClient(mode="local", agent="claude")
        assert client.is_available() is False

    @patch.dict(os.environ, {}, clear=True)
    def test_local_mode_unknown_agent_not_available(self):
        client = AgentClient(mode="local")
        client.agent = "nonexistent-agent"
        assert client.is_available() is False

    @patch.dict(os.environ, {}, clear=True)
    @patch("subprocess.run")
    def test_local_mode_nonzero_returncode(self, mock_run):
        mock_run.return_value = MagicMock(returncode=1)
        client = AgentClient(mode="local", agent="codex")
        assert client.is_available() is False


class TestDetectDefaultTarget:
    """Test AgentClient.detect_default_target() static method."""

    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "sk-ant-test"}, clear=True)
    def test_anthropic_maps_to_claude(self):
        assert AgentClient.detect_default_target() == "claude"

    @patch.dict(os.environ, {"MOONSHOT_API_KEY": "moon-key"}, clear=True)
    def test_moonshot_maps_to_kimi(self):
        assert AgentClient.detect_default_target() == "kimi"

    @patch.dict(os.environ, {"GOOGLE_API_KEY": "AIzaTest"}, clear=True)
    def test_google_maps_to_gemini(self):
        assert AgentClient.detect_default_target() == "gemini"

    @patch.dict(os.environ, {"OPENAI_API_KEY": "sk-test"}, clear=True)
    def test_openai_maps_to_openai(self):
        assert AgentClient.detect_default_target() == "openai"

    @patch.dict(os.environ, {}, clear=True)
    def test_no_key_defaults_to_markdown(self):
        assert AgentClient.detect_default_target() == "markdown"
tests/test_config_publisher.py (new file, +402 lines)
@@ -0,0 +1,402 @@
#!/usr/bin/env python3
"""Tests for ConfigPublisher class (config publishing to source repos)."""

import json
import os
from unittest.mock import MagicMock, patch

import pytest

from skill_seekers.mcp.config_publisher import ConfigPublisher, detect_category


def _get_default_branch(repo_path):
    """Get the default branch name of a git repo (master or main)."""
    import git

    repo = git.Repo(repo_path)
    return repo.active_branch.name


def _init_repo_with_main_branch(path):
    """Initialize a git repo, ensuring the branch is named 'main'."""
    import git

    repo = git.Repo.init(path)
    repo.config_writer().set_value("user", "name", "Test").release()
    repo.config_writer().set_value("user", "email", "test@test.com").release()

    # Create an initial commit on whatever the default branch is
    (path / "README.md").write_text("# Init\n")
    repo.index.add(["README.md"])
    repo.index.commit("Initial commit")

    # Rename the branch to 'main' if needed
    if repo.active_branch.name != "main":
        repo.git.branch("-m", repo.active_branch.name, "main")

    return repo

class TestDetectCategory:
    """Test detect_category() keyword scoring."""

    def test_game_engine_detected(self):
        config = {"name": "godot-4", "description": "Godot game engine config"}
        assert detect_category(config) == "game-engines"

    def test_web_framework_detected(self):
        config = {"name": "react-config", "description": "React web framework setup"}
        assert detect_category(config) == "web-frameworks"

    def test_ai_ml_detected(self):
        config = {"name": "pytorch-training", "description": "PyTorch model training config"}
        assert detect_category(config) == "ai-ml"

    def test_database_detected(self):
        config = {"name": "postgres-setup", "description": "PostgreSQL database config"}
        assert detect_category(config) == "databases"

    def test_devops_detected(self):
        config = {"name": "docker-compose", "description": "Docker container orchestration"}
        assert detect_category(config) == "devops"

    def test_cloud_detected(self):
        config = {"name": "aws-deployment", "description": "AWS cloud deployment config"}
        assert detect_category(config) == "cloud"

    def test_mobile_detected(self):
        config = {"name": "flutter-app", "description": "Flutter mobile application config"}
        assert detect_category(config) == "mobile"

    def test_testing_detected(self):
        config = {"name": "pytest-setup", "description": "Pytest testing framework"}
        assert detect_category(config) == "testing"

    def test_unknown_returns_custom(self):
        config = {"name": "my-random-thing", "description": "Something unrelated"}
        assert detect_category(config) == "custom"

    def test_empty_config_returns_custom(self):
        config = {}
        assert detect_category(config) == "custom"

    def test_name_only_matching(self):
        config = {"name": "tailwind-theme"}
        assert detect_category(config) == "css-frameworks"

    def test_description_only_matching(self):
        config = {"name": "my-config", "description": "Uses kubernetes for orchestration"}
        assert detect_category(config) == "devops"

    def test_highest_score_wins(self):
        # "react" and "vue" are both web-frameworks keywords, so web-frameworks should score higher
        config = {"name": "react-vue-toolkit", "description": "React and Vue comparison"}
        assert detect_category(config) == "web-frameworks"

    def test_security_detected(self):
        config = {"name": "oauth-setup", "description": "OAuth and JWT authentication"}
        assert detect_category(config) == "security"

    def test_messaging_detected(self):
        config = {"name": "kafka-config", "description": "Apache Kafka messaging setup"}
        assert detect_category(config) == "messaging"
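The scoring tests above suggest keyword counting over the config's name plus description, with the highest-scoring category winning and zero hits falling back to "custom"; a sketch with a deliberately truncated, hypothetical keyword table (the real table covers every category tested above):

```python
# Hypothetical, abbreviated keyword table for illustration only.
CATEGORY_KEYWORDS = {
    "game-engines": ["godot", "game engine", "unity"],
    "web-frameworks": ["react", "vue", "web framework"],
    "devops": ["docker", "kubernetes", "container"],
    "testing": ["pytest", "testing"],
}

def detect_category(config):
    # Score against the lowercased name + description.
    text = f"{config.get('name', '')} {config.get('description', '')}".lower()
    scores = {
        cat: sum(1 for kw in kws if kw in text)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # No keyword hit at all -> generic bucket.
    return best if scores[best] > 0 else "custom"
```

Scoring both fields is what makes the name-only and description-only test cases work without any special-casing.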

class TestPublishErrors:
    """Test ConfigPublisher.publish() error cases."""

    def test_publish_missing_config_file(self, tmp_path):
        publisher = ConfigPublisher.__new__(ConfigPublisher)
        publisher.git_repo = MagicMock()
        with pytest.raises(FileNotFoundError, match="Config file not found"):
            publisher.publish(
                config_path=tmp_path / "nonexistent.json",
                source_name="test-source",
            )

    def test_publish_missing_name_field(self, tmp_path):
        config_file = tmp_path / "bad_config.json"
        config_file.write_text(json.dumps({"description": "No name field"}))

        publisher = ConfigPublisher.__new__(ConfigPublisher)
        publisher.git_repo = MagicMock()
        with pytest.raises(ValueError, match="must have a 'name' field"):
            publisher.publish(
                config_path=config_file,
                source_name="test-source",
            )

    @patch.dict(os.environ, {}, clear=True)
    def test_publish_missing_token(self, tmp_path):
        config_file = tmp_path / "config.json"
        config_file.write_text(json.dumps({"name": "test-config"}))

        # Create a mock source that returns proper data
        mock_source = {
            "name": "test-source",
            "git_url": "https://github.com/test/repo.git",
            "branch": "main",
            "token_env": "NONEXISTENT_TOKEN",
        }
        mock_manager = MagicMock()
        mock_manager.get_source.return_value = mock_source
        mock_manager.list_sources.return_value = [mock_source]

        publisher = ConfigPublisher.__new__(ConfigPublisher)
        publisher.git_repo = MagicMock()

        with (
            patch("skill_seekers.mcp.source_manager.SourceManager", return_value=mock_manager),
            patch("skill_seekers.cli.config_validator.validate_config", return_value=None),
            pytest.raises(RuntimeError, match="NONEXISTENT_TOKEN"),
        ):
            publisher.publish(config_path=config_file, source_name="test-source")

    def test_publish_source_not_found(self, tmp_path):
        config_file = tmp_path / "config.json"
        config_file.write_text(json.dumps({"name": "test-config"}))

        mock_manager = MagicMock()
        mock_manager.get_source.return_value = None
        mock_manager.list_sources.return_value = []

        publisher = ConfigPublisher.__new__(ConfigPublisher)
        publisher.git_repo = MagicMock()

        with (
            patch("skill_seekers.mcp.source_manager.SourceManager", return_value=mock_manager),
            patch("skill_seekers.cli.config_validator.validate_config", return_value=None),
            pytest.raises(ValueError, match="not found"),
        ):
            publisher.publish(config_path=config_file, source_name="nonexistent")

    def test_publish_duplicate_without_force(self, tmp_path):
        """Config already exists in the target repo; force=False should raise."""
        import git as gitmodule

        config_file = tmp_path / "config.json"
        config_file.write_text(json.dumps({"name": "existing-config"}))

        # Create a working repo with an existing config
        working_path = tmp_path / "working"
        working_path.mkdir()
        repo = _init_repo_with_main_branch(working_path)

        # Add existing config
        config_dir_in_repo = working_path / "configs" / "custom"
        config_dir_in_repo.mkdir(parents=True)
        (config_dir_in_repo / "existing-config.json").write_text(
            json.dumps({"name": "existing-config"})
        )
        repo.index.add(["configs/custom/existing-config.json"])
        repo.index.commit("Add existing config")

        bare_repo_path = tmp_path / "remote.git"
        gitmodule.Repo.clone_from(str(working_path), str(bare_repo_path), bare=True)

        # Mock source manager
        mock_source = {
            "name": "test-source",
            "git_url": f"file://{bare_repo_path}",
            "branch": "main",
            "token_env": "DUMMY_TOKEN",
        }
        mock_manager = MagicMock()
        mock_manager.get_source.return_value = mock_source

        cache_dir = tmp_path / "cache"
        cache_dir.mkdir()

        publisher = ConfigPublisher.__new__(ConfigPublisher)
        from skill_seekers.mcp.git_repo import GitConfigRepo

        publisher.git_repo = GitConfigRepo(cache_dir=str(cache_dir))

        with (
            patch.dict(os.environ, {"DUMMY_TOKEN": "fake-token"}),
            patch("skill_seekers.mcp.source_manager.SourceManager", return_value=mock_manager),
            patch("skill_seekers.cli.config_validator.validate_config", return_value=None),
            pytest.raises(ValueError, match="already exists"),
        ):
            publisher.publish(
                config_path=config_file,
                source_name="test-source",
                category="custom",
                force=False,
            )
class TestPublishSuccess:
|
||||
"""Test ConfigPublisher.publish() success path using a local bare git repo."""
|
||||
|
||||
def test_publish_happy_path(self, tmp_path):
|
||||
"""Full success path: clone -> copy -> commit -> push."""
|
||||
import git as gitmodule
|
||||
|
||||
# Create config file to publish
|
||||
config_file = tmp_path / "my-config.json"
|
||||
config_data = {"name": "my-config", "description": "A test config for pytest"}
|
||||
config_file.write_text(json.dumps(config_data))
|
||||
|
||||
# Create working repo with 'main' branch, then bare-clone as "remote"
|
||||
working_path = tmp_path / "working"
|
||||
working_path.mkdir()
|
||||
_init_repo_with_main_branch(working_path)
|
||||
|
||||
bare_repo_path = tmp_path / "remote.git"
|
||||
gitmodule.Repo.clone_from(str(working_path), str(bare_repo_path), bare=True)
|
||||
|
||||
# Mock source manager
|
||||
mock_source = {
|
||||
"name": "local-test",
|
||||
"git_url": f"file://{bare_repo_path}",
|
||||
"branch": "main",
|
||||
"token_env": "DUMMY_TOKEN",
|
||||
}
|
||||
mock_manager = MagicMock()
|
||||
mock_manager.get_source.return_value = mock_source
|
||||
|
||||
# Create publisher with custom cache dir
|
||||
cache_dir = tmp_path / "cache"
|
||||
cache_dir.mkdir()
|
||||
publisher = ConfigPublisher.__new__(ConfigPublisher)
|
||||
from skill_seekers.mcp.git_repo import GitConfigRepo
|
||||
|
||||
publisher.git_repo = GitConfigRepo(cache_dir=str(cache_dir))
|
||||
|
||||
with (
|
||||
patch.dict(os.environ, {"DUMMY_TOKEN": "not-needed-for-file-protocol"}),
|
||||
patch("skill_seekers.mcp.source_manager.SourceManager", return_value=mock_manager),
|
||||
patch("skill_seekers.cli.config_validator.validate_config", return_value=None),
|
||||
):
|
||||
result = publisher.publish(
|
||||
config_path=config_file,
|
||||
source_name="local-test",
|
||||
category="testing",
|
||||
)
|
||||
|
||||
# Verify result
|
||||
assert result["success"] is True
|
||||
assert result["config_name"] == "my-config"
|
||||
assert result["config_path"] == "configs/testing/my-config.json"
|
||||
assert result["source"] == "local-test"
|
||||
assert result["category"] == "testing"
|
||||
assert len(result["commit_sha"]) == 8
|
||||
assert result["branch"] == "main"
|
||||
|
||||
# Verify the file exists in the cached clone
|
||||
cached_repo = cache_dir / "source_local-test"
|
||||
assert (cached_repo / "configs" / "testing" / "my-config.json").exists()
|
||||
|
||||
# Verify the config content was preserved
|
||||
with open(cached_repo / "configs" / "testing" / "my-config.json") as f:
|
||||
saved = json.load(f)
|
||||
assert saved["name"] == "my-config"
|
||||
|
||||
def test_publish_force_overwrite(self, tmp_path):
|
||||
"""Test that force=True overwrites an existing config."""
|
||||
import git as gitmodule
|
||||
|
||||
config_file = tmp_path / "overwrite-config.json"
|
||||
config_data = {"name": "overwrite-config", "description": "Updated version"}
|
||||
config_file.write_text(json.dumps(config_data))
|
||||
|
||||
# Create working repo with existing config
|
||||
working_path = tmp_path / "working"
|
||||
working_path.mkdir()
|
||||
repo = _init_repo_with_main_branch(working_path)
|
||||
|
||||
# Pre-populate with existing config
|
||||
configs_dir = working_path / "configs" / "custom"
|
||||
configs_dir.mkdir(parents=True)
|
||||
(configs_dir / "overwrite-config.json").write_text(
|
||||
json.dumps({"name": "overwrite-config", "description": "Old version"})
|
||||
)
|
||||
repo.index.add(["configs/custom/overwrite-config.json"])
|
||||
repo.index.commit("Add existing config")
|
||||
|
||||
bare_repo_path = tmp_path / "remote.git"
|
||||
gitmodule.Repo.clone_from(str(working_path), str(bare_repo_path), bare=True)
|
||||
|
||||
mock_source = {
|
||||
"name": "local-test",
|
||||
"git_url": f"file://{bare_repo_path}",
|
||||
"branch": "main",
|
||||
"token_env": "DUMMY_TOKEN",
|
||||
}
|
||||
mock_manager = MagicMock()
|
||||
mock_manager.get_source.return_value = mock_source
|
||||
|
||||
        cache_dir = tmp_path / "cache"
        cache_dir.mkdir()
        publisher = ConfigPublisher.__new__(ConfigPublisher)
        from skill_seekers.mcp.git_repo import GitConfigRepo

        publisher.git_repo = GitConfigRepo(cache_dir=str(cache_dir))

        with (
            patch.dict(os.environ, {"DUMMY_TOKEN": "x"}),
            patch("skill_seekers.mcp.source_manager.SourceManager", return_value=mock_manager),
            patch("skill_seekers.cli.config_validator.validate_config", return_value=None),
        ):
            result = publisher.publish(
                config_path=config_file,
                source_name="local-test",
                category="custom",
                force=True,
            )

        assert result["success"] is True
        assert result["config_name"] == "overwrite-config"

        # Verify the file has updated content
        cached_repo = cache_dir / "source_local-test"
        with open(cached_repo / "configs" / "custom" / "overwrite-config.json") as f:
            saved = json.load(f)
        assert saved["description"] == "Updated version"

    def test_publish_auto_detect_category(self, tmp_path):
        """Test that category='auto' auto-detects from config content."""
        import git as gitmodule

        config_file = tmp_path / "react-config.json"
        config_data = {"name": "react-config", "description": "React web framework config"}
        config_file.write_text(json.dumps(config_data))

        working_path = tmp_path / "working"
        working_path.mkdir()
        _init_repo_with_main_branch(working_path)

        bare_repo_path = tmp_path / "remote.git"
        gitmodule.Repo.clone_from(str(working_path), str(bare_repo_path), bare=True)

        mock_source = {
            "name": "local-test",
            "git_url": f"file://{bare_repo_path}",
            "branch": "main",
            "token_env": "DUMMY_TOKEN",
        }
        mock_manager = MagicMock()
        mock_manager.get_source.return_value = mock_source

        cache_dir = tmp_path / "cache"
        cache_dir.mkdir()
        publisher = ConfigPublisher.__new__(ConfigPublisher)
        from skill_seekers.mcp.git_repo import GitConfigRepo

        publisher.git_repo = GitConfigRepo(cache_dir=str(cache_dir))

        with (
            patch.dict(os.environ, {"DUMMY_TOKEN": "x"}),
            patch("skill_seekers.mcp.source_manager.SourceManager", return_value=mock_manager),
            patch("skill_seekers.cli.config_validator.validate_config", return_value=None),
        ):
            result = publisher.publish(
                config_path=config_file,
                source_name="local-test",
                category="auto",
            )

        assert result["success"] is True
        assert result["category"] == "web-frameworks"
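The auto-detect test above expects a "React web framework config" to land in the `web-frameworks` category. A minimal keyword-matching sketch of how such detection could work is shown below; the `CATEGORY_KEYWORDS` table and the `detect_category` helper are illustrative assumptions, not the actual ConfigPublisher implementation.

```python
# Hypothetical keyword-based category detection; the real ConfigPublisher
# logic may use a different table or matching strategy.
CATEGORY_KEYWORDS = {
    "web-frameworks": ["react", "vue", "django", "flask"],
    "game-dev": ["unity", "godot", "unreal"],
}


def detect_category(config: dict, default: str = "custom") -> str:
    """Return the first category whose keyword appears in the config's name/description."""
    text = f"{config.get('name', '')} {config.get('description', '')}".lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return category
    return default
```

Under this sketch, `{"name": "react-config", ...}` matches the "react" keyword and resolves to `web-frameworks`, matching the assertion in the test.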
@@ -25,8 +25,8 @@ class TestUniversalArguments:
     """Test universal argument definitions."""
 
     def test_universal_count(self):
-        """Should have exactly 19 universal arguments (after Phase 2 workflow integration + local_repo_path + doc_version)."""
-        assert len(UNIVERSAL_ARGUMENTS) == 19
+        """Should have exactly 21 universal arguments."""
+        assert len(UNIVERSAL_ARGUMENTS) == 21
 
     def test_universal_argument_names(self):
         """Universal arguments should have expected names."""
@@ -35,22 +35,23 @@ class TestUniversalArguments:
             "description",
             "output",
             "enhance_level",
-            "api_key",  # Phase 1: consolidated from enhance + enhance_local
+            "api_key",
             "dry_run",
             "verbose",
             "quiet",
             "chunk_for_rag",
             "chunk_tokens",
-            "chunk_overlap_tokens",  # Phase 2: RAG args from common.py
+            "chunk_overlap_tokens",
             "preset",
             "config",
-            # Phase 2: Workflow arguments (universal workflow support)
             "enhance_workflow",
             "enhance_stage",
             "var",
             "workflow_dry_run",
-            "local_repo_path",  # GitHub local clone path for unlimited C3.x analysis
-            "doc_version",  # Documentation version tag for RAG metadata
+            "local_repo_path",
+            "doc_version",
+            "agent",
+            "agent_cmd",
         }
         assert set(UNIVERSAL_ARGUMENTS.keys()) == expected_names
 
@@ -132,7 +133,7 @@ class TestArgumentHelpers:
         names = get_universal_argument_names()
         assert isinstance(names, set)
         assert (
-            len(names) == 19
+            len(names) == 21
         )  # Phase 2: added 4 workflow arguments + local_repo_path + doc_version
         assert "name" in names
         assert "enhance_level" in names  # Phase 1: consolidated flag
@@ -131,17 +131,18 @@ class TestCreateCommandBasic:
 
 
 class TestCreateCommandArgvForwarding:
-    """Unit tests for _add_common_args argv forwarding."""
+    """Unit tests for _build_argv argument forwarding."""
 
     def _make_args(self, **kwargs):
         import argparse
 
         defaults = {
             "source": "https://example.com",
             "enhance_workflow": None,
             "enhance_stage": None,
             "var": None,
             "workflow_dry_run": False,
-            "enhance_level": 0,
+            "enhance_level": 2,
             "output": None,
             "name": None,
             "description": None,
@@ -157,17 +158,20 @@ class TestCreateCommandArgvForwarding:
             "no_preserve_code_blocks": False,
             "no_preserve_paragraphs": False,
             "interactive_enhancement": False,
+            "agent": None,
+            "agent_cmd": None,
+            "doc_version": "",
         }
         defaults.update(kwargs)
         return argparse.Namespace(**defaults)
 
     def _collect_argv(self, args):
         from skill_seekers.cli.create_command import CreateCommand
+        from skill_seekers.cli.source_detector import SourceDetector
 
         cmd = CreateCommand(args)
-        argv = []
-        cmd._add_common_args(argv)
-        return argv
+        cmd.source_info = SourceDetector.detect(args.source)
+        return cmd._build_argv("test_module", [])
 
     def test_single_enhance_workflow_forwarded(self):
         args = self._make_args(enhance_workflow=["security-focus"])
@@ -259,6 +263,197 @@ class TestCreateCommandArgvForwarding:
        assert "--var" in argv
        assert "--workflow-dry-run" in argv

    # ── _SKIP_ARGS exclusion ────────────────────────────────────────────────

    def test_source_never_forwarded(self):
        """'source' is in _SKIP_ARGS and must never appear in argv."""
        args = self._make_args(source="https://example.com")
        argv = self._collect_argv(args)
        assert "--source" not in argv

    def test_func_never_forwarded(self):
        """'func' is in _SKIP_ARGS and must never appear in argv."""
        args = self._make_args(func=lambda: None)
        argv = self._collect_argv(args)
        assert "--func" not in argv

    def test_config_never_forwarded_by_build_argv(self):
        """'config' is in _SKIP_ARGS; forwarded manually by specific routes."""
        args = self._make_args(config="/path/to/config.json")
        argv = self._collect_argv(args)
        assert "--config" not in argv

    def test_subcommand_never_forwarded(self):
        """'subcommand' is in _SKIP_ARGS."""
        args = self._make_args(subcommand="create")
        argv = self._collect_argv(args)
        assert "--subcommand" not in argv

    def test_command_never_forwarded(self):
        """'command' is in _SKIP_ARGS."""
        args = self._make_args(command="create")
        argv = self._collect_argv(args)
        assert "--command" not in argv

    # ── _DEST_TO_FLAG mapping ───────────────────────────────────────────────

    def test_async_mode_maps_to_async_flag(self):
        """async_mode dest should produce --async flag, not --async-mode."""
        args = self._make_args(async_mode=True)
        argv = self._collect_argv(args)
        assert "--async" in argv
        assert "--async-mode" not in argv

    def test_skip_config_maps_to_skip_config_patterns(self):
        """skip_config dest should produce --skip-config-patterns flag."""
        args = self._make_args(skip_config=True)
        argv = self._collect_argv(args)
        assert "--skip-config-patterns" in argv
        assert "--skip-config" not in argv

    # ── Boolean arg forwarding ──────────────────────────────────────────────

    def test_boolean_true_appends_flag(self):
        args = self._make_args(dry_run=True)
        argv = self._collect_argv(args)
        assert "--dry-run" in argv

    def test_boolean_false_does_not_append_flag(self):
        args = self._make_args(dry_run=False)
        argv = self._collect_argv(args)
        assert "--dry-run" not in argv

    def test_verbose_true_forwarded(self):
        args = self._make_args(verbose=True)
        argv = self._collect_argv(args)
        assert "--verbose" in argv

    def test_quiet_true_forwarded(self):
        args = self._make_args(quiet=True)
        argv = self._collect_argv(args)
        assert "--quiet" in argv

    # ── List arg forwarding ─────────────────────────────────────────────────

    def test_list_arg_each_item_gets_separate_flag(self):
        """Each list item gets its own --flag value pair."""
        args = self._make_args(enhance_workflow=["a", "b", "c"])
        argv = self._collect_argv(args)
        assert argv.count("--enhance-workflow") == 3
        for item in ["a", "b", "c"]:
            idx = argv.index(item)
            assert argv[idx - 1] == "--enhance-workflow"
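The forwarding behaviors exercised above (skip-list exclusion, dest-to-flag renames, bare flags for booleans, repeated flags for lists) can be sketched as a standalone function. This is an illustrative model of the contract the tests pin down, not the real `CreateCommand._build_argv`; the `_SKIP_ARGS` and `_DEST_TO_FLAG` names simply mirror the test docstrings.

```python
# Illustrative sketch of argv forwarding; the actual CreateCommand
# implementation may differ in structure and edge cases.
_SKIP_ARGS = {"source", "func", "config", "subcommand", "command"}
_DEST_TO_FLAG = {"async_mode": "--async", "skip_config": "--skip-config-patterns"}


def build_argv(args: dict) -> list:
    argv = []
    for dest, value in args.items():
        if dest in _SKIP_ARGS or value in (None, False):
            continue  # skipped dests and unset values are never forwarded
        flag = _DEST_TO_FLAG.get(dest, "--" + dest.replace("_", "-"))
        if value is True:
            argv.append(flag)  # booleans become a bare flag
        elif isinstance(value, list):
            for item in value:  # lists repeat the flag once per item
                argv.extend([flag, item])
        else:
            argv.extend([flag, str(value)])
    return argv
```

Under this model, `build_argv({"dry_run": True, "source": "x"})` yields only `["--dry-run"]`, matching the skip and boolean tests above.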
    # ── _is_explicitly_set ──────────────────────────────────────────────────

    def test_is_explicitly_set_none_is_not_set(self):
        """None values should NOT be considered explicitly set."""
        from skill_seekers.cli.create_command import CreateCommand

        args = self._make_args()
        cmd = CreateCommand(args)
        assert cmd._is_explicitly_set("name", None) is False

    def test_is_explicitly_set_bool_true_is_set(self):
        from skill_seekers.cli.create_command import CreateCommand

        args = self._make_args()
        cmd = CreateCommand(args)
        assert cmd._is_explicitly_set("dry_run", True) is True

    def test_is_explicitly_set_bool_false_is_not_set(self):
        from skill_seekers.cli.create_command import CreateCommand

        args = self._make_args()
        cmd = CreateCommand(args)
        assert cmd._is_explicitly_set("dry_run", False) is False

    def test_is_explicitly_set_default_doc_version_empty_not_set(self):
        """doc_version defaults to '' which means not explicitly set."""
        from skill_seekers.cli.create_command import CreateCommand

        args = self._make_args()
        cmd = CreateCommand(args)
        assert cmd._is_explicitly_set("doc_version", "") is False

    def test_is_explicitly_set_nonempty_string_is_set(self):
        from skill_seekers.cli.create_command import CreateCommand

        args = self._make_args()
        cmd = CreateCommand(args)
        assert cmd._is_explicitly_set("name", "my-skill") is True

    def test_is_explicitly_set_non_default_value_is_set(self):
        """A value that differs from the known default IS explicitly set."""
        from skill_seekers.cli.create_command import CreateCommand

        args = self._make_args()
        cmd = CreateCommand(args)
        # max_issues default is 100; setting to 50 means explicitly set
        assert cmd._is_explicitly_set("max_issues", 50) is True
        # Setting to default value means NOT explicitly set
        assert cmd._is_explicitly_set("max_issues", 100) is False
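The `_is_explicitly_set` semantics pinned down by the tests above can be summarized in a short sketch. The `_KNOWN_DEFAULTS` table here is a hypothetical stand-in for however CreateCommand tracks argument defaults; only the observable behavior (None and False are never "set", empty defaults are not "set", non-default values are) comes from the tests.

```python
# Minimal model of the "explicitly set" rules the tests verify; not the
# project's actual implementation.
_KNOWN_DEFAULTS = {"max_issues": 100, "doc_version": ""}


def is_explicitly_set(dest: str, value) -> bool:
    if value is None:
        return False  # None is never explicitly set
    if isinstance(value, bool):
        return value  # True means set, False means left at default
    if dest in _KNOWN_DEFAULTS:
        return value != _KNOWN_DEFAULTS[dest]  # differs from known default
    return bool(value)  # non-empty values count as set
```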
    # ── Allowlist filtering ─────────────────────────────────────────────────

    def test_allowlist_only_forwards_allowed_args(self):
        """When allowlist is provided, only those args are forwarded."""
        from skill_seekers.cli.create_command import CreateCommand
        from skill_seekers.cli.source_detector import SourceDetector

        args = self._make_args(
            dry_run=True,
            verbose=True,
            name="test-skill",
        )
        cmd = CreateCommand(args)
        cmd.source_info = SourceDetector.detect(args.source)

        # Only allow dry_run in the allowlist
        allowlist = frozenset({"dry_run"})
        argv = cmd._build_argv("test_module", [], allowlist=allowlist)

        assert "--dry-run" in argv
        assert "--verbose" not in argv
        assert "--name" not in argv

    def test_allowlist_skips_non_allowed_even_if_set(self):
        """Args not in the allowlist are excluded even if explicitly set."""
        from skill_seekers.cli.create_command import CreateCommand
        from skill_seekers.cli.source_detector import SourceDetector

        args = self._make_args(
            enhance_workflow=["security-focus"],
            quiet=True,
        )
        cmd = CreateCommand(args)
        cmd.source_info = SourceDetector.detect(args.source)

        allowlist = frozenset({"quiet"})
        argv = cmd._build_argv("test_module", [], allowlist=allowlist)

        assert "--quiet" in argv
        assert "--enhance-workflow" not in argv

    def test_allowlist_empty_forwards_nothing(self):
        """Empty allowlist should forward no user args (auto-name may still be added)."""
        from skill_seekers.cli.create_command import CreateCommand
        from skill_seekers.cli.source_detector import SourceDetector

        args = self._make_args(dry_run=True, verbose=True)
        cmd = CreateCommand(args)
        cmd.source_info = SourceDetector.detect(args.source)

        allowlist = frozenset()
        argv = cmd._build_argv("test_module", ["pos"], allowlist=allowlist)

        # User-set args (dry_run, verbose) should NOT be forwarded
        assert "--dry-run" not in argv
        assert "--verbose" not in argv
        # Only module name, positional, and possibly auto-added --name
        assert argv[0] == "test_module"
        assert "pos" in argv


class TestBackwardCompatibility:
    """Test that old commands still work."""
@@ -88,6 +88,7 @@ class TestCheckApiKeys:
         "GITHUB_TOKEN": "ghp_test123456789",
         "GOOGLE_API_KEY": "AIza_test123456789",
         "OPENAI_API_KEY": "sk-test123456789",
+        "MOONSHOT_API_KEY": "sk-moon-test123456789",
     }
     with patch.dict(os.environ, env, clear=True):
         result = check_api_keys()
 
@@ -11,7 +11,7 @@ Tests dual-mode AI enhancement for how-to guides:
 
 import json
 import os
-from unittest.mock import MagicMock, Mock, patch
+from unittest.mock import Mock, patch
 
 import pytest
@@ -91,7 +91,7 @@ class TestGuideEnhancerStepDescriptions:
         result = enhancer.enhance_step_descriptions(steps)
         assert result == []
 
-    @patch.object(GuideEnhancer, "_call_claude_api")
+    @patch.object(GuideEnhancer, "_call_ai")
     def test_enhance_step_descriptions_api_mode(self, mock_call):
         """Test step descriptions with API mode"""
         mock_call.return_value = json.dumps(
@@ -156,7 +156,7 @@ class TestGuideEnhancerTroubleshooting:
         result = enhancer.enhance_troubleshooting(guide_data)
         assert result == []
 
-    @patch.object(GuideEnhancer, "_call_claude_api")
+    @patch.object(GuideEnhancer, "_call_ai")
     def test_enhance_troubleshooting_api_mode(self, mock_call):
         """Test troubleshooting with API mode"""
         mock_call.return_value = json.dumps(
@@ -215,7 +215,7 @@ class TestGuideEnhancerPrerequisites:
         result = enhancer.enhance_prerequisites(prereqs)
         assert result == []
 
-    @patch.object(GuideEnhancer, "_call_claude_api")
+    @patch.object(GuideEnhancer, "_call_ai")
     def test_enhance_prerequisites_api_mode(self, mock_call):
         """Test prerequisites with API mode"""
         mock_call.return_value = json.dumps(
@@ -267,7 +267,7 @@ class TestGuideEnhancerNextSteps:
         result = enhancer.enhance_next_steps(guide_data)
         assert result == []
 
-    @patch.object(GuideEnhancer, "_call_claude_api")
+    @patch.object(GuideEnhancer, "_call_ai")
     def test_enhance_next_steps_api_mode(self, mock_call):
         """Test next steps with API mode"""
         mock_call.return_value = json.dumps(
@@ -313,7 +313,7 @@ class TestGuideEnhancerUseCases:
         result = enhancer.enhance_use_cases(guide_data)
         assert result == []
 
-    @patch.object(GuideEnhancer, "_call_claude_api")
+    @patch.object(GuideEnhancer, "_call_ai")
    def test_enhance_use_cases_api_mode(self, mock_call):
         """Test use cases with API mode"""
         mock_call.return_value = json.dumps(
@@ -372,7 +372,7 @@ class TestGuideEnhancerFullWorkflow:
         assert result["title"] == guide_data["title"]
         assert len(result["steps"]) == 2
 
-    @patch.object(GuideEnhancer, "_call_claude_api")
+    @patch.object(GuideEnhancer, "_call_ai")
     def test_enhance_guide_api_mode_success(self, mock_call):
         """Test successful full guide enhancement via API"""
         mock_call.return_value = json.dumps(
@@ -467,43 +467,36 @@ class TestGuideEnhancerFullWorkflow:
 class TestGuideEnhancerLocalMode:
     """Test LOCAL mode (Claude Code CLI)"""
 
-    @patch("subprocess.run")
-    def test_call_claude_local_success(self, mock_run):
-        """Test successful LOCAL mode call"""
-        mock_run.return_value = MagicMock(
-            returncode=0,
-            stdout=json.dumps(
-                {
-                    "step_descriptions": [],
-                    "troubleshooting": [],
-                    "prerequisites_detailed": [],
-                    "next_steps": [],
-                    "use_cases": [],
-                }
-            ),
-        )
+    @patch.object(GuideEnhancer, "_call_ai")
+    def test_call_ai_local_success(self, mock_call_ai):
+        """Test successful LOCAL mode call via AgentClient"""
+        mock_call_ai.return_value = json.dumps(
+            {
+                "step_descriptions": [],
+                "troubleshooting": [],
+                "prerequisites_detailed": [],
+                "next_steps": [],
+                "use_cases": [],
+            }
+        )
 
         enhancer = GuideEnhancer(mode="local")
-        if enhancer.mode == "local":
-            prompt = "Test prompt"
-            result = enhancer._call_claude_local(prompt)
-
-            assert result is not None
-            assert mock_run.called
+        prompt = "Test prompt"
+        result = enhancer._call_ai(prompt)
+
+        assert result is not None
+        assert mock_call_ai.called
 
-    @patch("subprocess.run")
-    def test_call_claude_local_timeout(self, mock_run):
-        """Test LOCAL mode timeout handling"""
-        from subprocess import TimeoutExpired
-
-        mock_run.side_effect = TimeoutExpired("claude", 300)
+    @patch.object(GuideEnhancer, "_call_ai")
+    def test_call_ai_local_timeout(self, mock_call_ai):
+        """Test LOCAL mode timeout handling via AgentClient"""
+        mock_call_ai.return_value = None
 
         enhancer = GuideEnhancer(mode="local")
-        if enhancer.mode == "local":
-            prompt = "Test prompt"
-            result = enhancer._call_claude_local(prompt)
-
-            assert result is None
+        prompt = "Test prompt"
+        result = enhancer._call_ai(prompt)
+
+        assert result is None
 
 
 class TestGuideEnhancerPromptGeneration:
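The hunk above replaces direct `subprocess.run` mocking with mocking of a single `_call_ai` seam, reflecting the PR's move to a unified AgentClient. A hedged sketch of what such a client's LOCAL-mode call path might look like is below; the class shape, the `-p` prompt flag, and the `run` method name are assumptions for illustration, not the project's actual `agent_client.py` API.

```python
# Hypothetical sketch of a LOCAL-mode agent call behind a single seam;
# returns stdout on success, or None on timeout/missing CLI/failure.
import subprocess


class AgentClient:
    def __init__(self, agent_cmd="claude", timeout=300):
        self.agent_cmd = agent_cmd  # e.g. claude, kimi, codex (assumption)
        self.timeout = timeout

    def run(self, prompt):
        try:
            proc = subprocess.run(
                [self.agent_cmd, "-p", prompt],
                capture_output=True,
                text=True,
                timeout=self.timeout,
            )
        except (subprocess.TimeoutExpired, FileNotFoundError):
            return None  # timeouts and absent agent CLIs degrade to None
        return proc.stdout if proc.returncode == 0 else None
```

Collapsing all failure modes to `None` is what lets the timeout test above simply set `mock_call_ai.return_value = None`.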
@@ -70,12 +70,11 @@ class TestInstallSkillDryRun:
         assert "🔍 DRY RUN MODE" in output
         assert "Preview only, no actions taken" in output
 
-        # Verify all 5 phases are shown
-        assert "PHASE 1/5: Fetch Config" in output
-        assert "PHASE 2/5: Scrape Documentation" in output
-        assert "PHASE 3/5: AI Enhancement (MANDATORY)" in output
-        assert "PHASE 4/5: Package Skill" in output
-        assert "PHASE 5/5: Upload to Claude" in output
+        # Verify core phases are shown
+        assert "Fetch Config" in output
+        assert "Scrape Documentation" in output
+        assert "AI Enhancement (MANDATORY)" in output
+        assert "Package Skill" in output
 
         # Verify dry run indicators
         assert "[DRY RUN]" in output
@@ -92,11 +91,10 @@ class TestInstallSkillDryRun:
         # Verify dry run mode
         assert "🔍 DRY RUN MODE" in output
 
-        # Verify only 4 phases (no fetch)
-        assert "PHASE 1/4: Scrape Documentation" in output
-        assert "PHASE 2/4: AI Enhancement (MANDATORY)" in output
-        assert "PHASE 3/4: Package Skill" in output
-        assert "PHASE 4/4: Upload to Claude" in output
+        # Verify core phases are shown (no fetch)
+        assert "Scrape Documentation" in output
+        assert "AI Enhancement (MANDATORY)" in output
+        assert "Package Skill" in output
 
         # Should not show fetch phase
         assert "PHASE 1/5" not in output
@@ -243,18 +241,16 @@ class TestInstallSkillPhaseOrchestration:
 
         output = result[0].text
 
-        # Should only have 4 phases (no fetch)
-        assert "PHASE 1/4: Scrape Documentation" in output
-        assert "PHASE 2/4: AI Enhancement" in output
-        assert "PHASE 3/4: Package Skill" in output
-        assert "PHASE 4/4: Upload to Claude" in output
+        # Should have core phases (no fetch)
+        assert "Scrape Documentation" in output
+        assert "AI Enhancement" in output
+        assert "Package Skill" in output
 
         # Should not have fetch phase
         assert "Fetch Config" not in output
 
         # Should show manual upload instructions (no API key)
         assert "⚠️ ANTHROPIC_API_KEY not set" in output
-        assert "Manual upload:" in output
+        assert "Manual upload" in output
 
 
 @pytest.mark.skipif(not MCP_AVAILABLE, reason="MCP package not installed")
 
@@ -228,7 +228,7 @@ class TestInstallSkillE2E:
         assert "PHASE 2/5: Scrape Documentation" in output
         assert "PHASE 3/5: AI Enhancement" in output
         assert "PHASE 4/5: Package Skill" in output
-        assert "PHASE 5/5: Upload to Claude" in output
+        assert "PHASE 5/5: Upload to" in output
 
         # Verify fetch was called
         mock_fetch.assert_called_once()
tests/test_marketplace_manager.py (new file, 248 lines)
@@ -0,0 +1,248 @@
#!/usr/bin/env python3
"""Tests for MarketplaceManager class (marketplace registry management)"""

import json
from pathlib import Path

import pytest

from skill_seekers.mcp.marketplace_manager import MarketplaceManager


@pytest.fixture
def temp_config_dir(tmp_path):
    config_dir = tmp_path / "test_config"
    config_dir.mkdir()
    return config_dir


@pytest.fixture
def manager(temp_config_dir):
    return MarketplaceManager(config_dir=str(temp_config_dir))


class TestMarketplaceManagerInit:
    def test_init_creates_config_dir(self, tmp_path):
        config_dir = tmp_path / "new_config"
        mgr = MarketplaceManager(config_dir=str(config_dir))
        assert config_dir.exists()
        assert mgr.config_dir == config_dir

    def test_init_creates_registry_file(self, temp_config_dir):
        _mgr = MarketplaceManager(config_dir=str(temp_config_dir))
        registry_file = temp_config_dir / "marketplaces.json"
        assert registry_file.exists()
        with open(registry_file) as f:
            data = json.load(f)
        assert data == {"version": "1.0", "marketplaces": []}

    def test_init_preserves_existing_registry(self, temp_config_dir):
        registry_file = temp_config_dir / "marketplaces.json"
        existing_data = {
            "version": "1.0",
            "marketplaces": [{"name": "test", "git_url": "https://example.com/repo.git"}],
        }
        with open(registry_file, "w") as f:
            json.dump(existing_data, f)
        _mgr = MarketplaceManager(config_dir=str(temp_config_dir))
        with open(registry_file) as f:
            data = json.load(f)
        assert len(data["marketplaces"]) == 1

    def test_init_with_default_config_dir(self):
        mgr = MarketplaceManager()
        assert mgr.config_dir == Path.home() / ".skill-seekers"


class TestAddMarketplace:
    def test_add_marketplace_minimal(self, manager):
        mp = manager.add_marketplace(
            name="spyke", git_url="https://github.com/spykegames/plugins.git"
        )
        assert mp["name"] == "spyke"
        assert mp["git_url"] == "https://github.com/spykegames/plugins.git"
        assert mp["token_env"] == "GITHUB_TOKEN"
        assert mp["branch"] == "main"
        assert mp["enabled"] is True
        assert mp["author"] == {"name": "", "email": ""}

    def test_add_marketplace_full_parameters(self, manager):
        author = {"name": "Spyke Team", "email": "team@spyke.com"}
        mp = manager.add_marketplace(
            name="spyke",
            git_url="https://github.com/spykegames/plugins.git",
            token_env="SPYKE_TOKEN",
            branch="develop",
            author=author,
            enabled=False,
        )
        assert mp["token_env"] == "SPYKE_TOKEN"
        assert mp["branch"] == "develop"
        assert mp["author"] == author
        assert mp["enabled"] is False

    def test_add_marketplace_normalizes_name(self, manager):
        mp = manager.add_marketplace(name="MyMarket", git_url="https://github.com/org/repo.git")
        assert mp["name"] == "mymarket"

    def test_add_marketplace_invalid_name_empty(self, manager):
        with pytest.raises(ValueError, match="Invalid marketplace name"):
            manager.add_marketplace(name="", git_url="https://github.com/org/repo.git")

    def test_add_marketplace_invalid_name_special_chars(self, manager):
        with pytest.raises(ValueError, match="Invalid marketplace name"):
            manager.add_marketplace(name="my@market", git_url="https://github.com/org/repo.git")

    def test_add_marketplace_valid_name_with_hyphens(self, manager):
        mp = manager.add_marketplace(name="my-market", git_url="https://github.com/org/repo.git")
        assert mp["name"] == "my-market"

    def test_add_marketplace_empty_git_url(self, manager):
        with pytest.raises(ValueError, match="git_url cannot be empty"):
            manager.add_marketplace(name="spyke", git_url="")

    def test_add_marketplace_strips_git_url(self, manager):
        mp = manager.add_marketplace(name="spyke", git_url=" https://github.com/org/repo.git ")
        assert mp["git_url"] == "https://github.com/org/repo.git"

    def test_add_marketplace_updates_existing(self, manager):
        mp1 = manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo1.git")
        mp2 = manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo2.git")
        assert mp2["git_url"] == "https://github.com/org/repo2.git"
        assert mp2["added_at"] == mp1["added_at"]
        assert len(manager.list_marketplaces()) == 1

    def test_add_marketplace_persists_to_file(self, manager, temp_config_dir):
        manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo.git")
        registry_file = temp_config_dir / "marketplaces.json"
        with open(registry_file) as f:
            data = json.load(f)
        assert len(data["marketplaces"]) == 1
        assert data["marketplaces"][0]["name"] == "spyke"
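The name handling exercised by TestAddMarketplace (lowercasing "MyMarket", rejecting "" and "my@market", allowing hyphens) suggests a normalize-then-validate helper. The sketch below is one plausible reading of those rules; the exact regex and error text in MarketplaceManager may differ.

```python
# Hypothetical name normalization/validation implied by the tests above.
import re

_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]*$")  # assumption: lowercase, digits, hyphens


def normalize_name(name: str) -> str:
    normalized = name.strip().lower()
    if not _NAME_RE.match(normalized):
        raise ValueError(f"Invalid marketplace name: {name!r}")
    return normalized
```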
class TestGetMarketplace:
    def test_get_marketplace_exact_match(self, manager):
        manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo.git")
        mp = manager.get_marketplace("spyke")
        assert mp["name"] == "spyke"

    def test_get_marketplace_case_insensitive(self, manager):
        manager.add_marketplace(name="Spyke", git_url="https://github.com/org/repo.git")
        mp = manager.get_marketplace("spyke")
        assert mp["name"] == "spyke"

    def test_get_marketplace_not_found(self, manager):
        with pytest.raises(KeyError, match="Marketplace 'nonexistent' not found"):
            manager.get_marketplace("nonexistent")

    def test_get_marketplace_not_found_shows_available(self, manager):
        manager.add_marketplace(name="mp1", git_url="https://example.com/1.git")
        manager.add_marketplace(name="mp2", git_url="https://example.com/2.git")
        with pytest.raises(KeyError, match="Available marketplaces: mp1, mp2"):
            manager.get_marketplace("mp3")

    def test_get_marketplace_empty_registry(self, manager):
        with pytest.raises(KeyError, match="Available marketplaces: none"):
            manager.get_marketplace("spyke")


class TestListMarketplaces:
    def test_list_marketplaces_empty(self, manager):
        assert manager.list_marketplaces() == []

    def test_list_marketplaces_multiple(self, manager):
        manager.add_marketplace(name="mp1", git_url="https://example.com/1.git")
        manager.add_marketplace(name="mp2", git_url="https://example.com/2.git")
        assert len(manager.list_marketplaces()) == 2

    def test_list_marketplaces_enabled_only(self, manager):
        manager.add_marketplace(name="enabled", git_url="https://example.com/1.git", enabled=True)
        manager.add_marketplace(name="disabled", git_url="https://example.com/2.git", enabled=False)
        marketplaces = manager.list_marketplaces(enabled_only=True)
        assert len(marketplaces) == 1
        assert marketplaces[0]["name"] == "enabled"


class TestRemoveMarketplace:
    def test_remove_marketplace_exists(self, manager):
        manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo.git")
        assert manager.remove_marketplace("spyke") is True
        assert len(manager.list_marketplaces()) == 0

    def test_remove_marketplace_not_found(self, manager):
        assert manager.remove_marketplace("nonexistent") is False

    def test_remove_marketplace_persists_to_file(self, manager, temp_config_dir):
        manager.add_marketplace(name="mp1", git_url="https://example.com/1.git")
        manager.add_marketplace(name="mp2", git_url="https://example.com/2.git")
        manager.remove_marketplace("mp1")
        registry_file = temp_config_dir / "marketplaces.json"
        with open(registry_file) as f:
            data = json.load(f)
        assert len(data["marketplaces"]) == 1
        assert data["marketplaces"][0]["name"] == "mp2"


class TestUpdateMarketplace:
    def test_update_marketplace_git_url(self, manager):
        manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo1.git")
        updated = manager.update_marketplace(
            name="spyke", git_url="https://github.com/org/repo2.git"
        )
        assert updated["git_url"] == "https://github.com/org/repo2.git"

    def test_update_marketplace_author(self, manager):
        manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo.git")
        new_author = {"name": "New Author", "email": "new@example.com"}
        updated = manager.update_marketplace(name="spyke", author=new_author)
        assert updated["author"] == new_author

    def test_update_marketplace_updates_timestamp(self, manager):
        mp = manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo.git")
        updated = manager.update_marketplace(name="spyke", branch="develop")
        assert updated["updated_at"] > mp["updated_at"]

    def test_update_marketplace_not_found(self, manager):
        with pytest.raises(KeyError, match="Marketplace 'nonexistent' not found"):
            manager.update_marketplace(name="nonexistent", branch="main")


class TestDefaultTokenEnv:
    def test_github_url(self, manager):
        mp = manager.add_marketplace(name="test", git_url="https://github.com/org/repo.git")
        assert mp["token_env"] == "GITHUB_TOKEN"

    def test_gitlab_url(self, manager):
        mp = manager.add_marketplace(name="test", git_url="https://gitlab.com/org/repo.git")
        assert mp["token_env"] == "GITLAB_TOKEN"

    def test_bitbucket_url(self, manager):
        mp = manager.add_marketplace(name="test", git_url="https://bitbucket.org/org/repo.git")
        assert mp["token_env"] == "BITBUCKET_TOKEN"

    def test_unknown_url(self, manager):
        mp = manager.add_marketplace(
            name="test", git_url="https://custom-git.example.com/org/repo.git"
        )
        assert mp["token_env"] == "GIT_TOKEN"

    def test_override_token_env(self, manager):
        mp = manager.add_marketplace(
            name="test",
            git_url="https://github.com/org/repo.git",
            token_env="MY_CUSTOM_TOKEN",
        )
        assert mp["token_env"] == "MY_CUSTOM_TOKEN"
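The TestDefaultTokenEnv cases above describe a host-based inference of the default token environment variable. A minimal sketch of that mapping, directly mirroring the test expectations, is below; the helper name `default_token_env` is an assumption, since the tests only observe the result through `add_marketplace`.

```python
# Host-to-token-env inference as described by the tests; the real
# MarketplaceManager helper may be structured differently.
from urllib.parse import urlparse


def default_token_env(git_url: str) -> str:
    host = urlparse(git_url).hostname or ""
    if "github.com" in host:
        return "GITHUB_TOKEN"
    if "gitlab.com" in host:
        return "GITLAB_TOKEN"
    if "bitbucket.org" in host:
        return "BITBUCKET_TOKEN"
    return "GIT_TOKEN"  # unknown hosts fall back to a generic variable
```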
class TestRegistryPersistence:
    def test_registry_atomic_write(self, manager, temp_config_dir):
        manager.add_marketplace(name="spyke", git_url="https://github.com/org/repo.git")
        assert len(list(temp_config_dir.glob("*.tmp"))) == 0

    def test_registry_corrupted_file(self, temp_config_dir):
        mgr = MarketplaceManager(config_dir=str(temp_config_dir))
        (temp_config_dir / "marketplaces.json").write_text("{ invalid json }")
        with pytest.raises(ValueError, match="Corrupted registry file"):
            mgr._read_registry()
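The atomic-write test above checks that no `*.tmp` files linger after a save, which points at the classic write-to-temp-then-rename pattern. A sketch of that pattern, under the assumption that the registry writer works roughly this way, is:

```python
# Write-temp-then-rename sketch; os.replace is atomic for same-filesystem
# paths, so readers never see a half-written registry and no .tmp survives.
import json
import os
from pathlib import Path


def write_registry_atomic(registry_file: Path, data: dict) -> None:
    tmp = registry_file.with_suffix(".tmp")
    tmp.write_text(json.dumps(data, indent=2))
    os.replace(tmp, registry_file)  # rename replaces the old file in one step
```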
tests/test_marketplace_publisher.py (new file, 437 lines)
@@ -0,0 +1,437 @@
#!/usr/bin/env python3
"""Tests for MarketplacePublisher class (skill publishing to plugin repos)"""

import json
import os
from unittest.mock import MagicMock, patch

import pytest

from skill_seekers.mcp.marketplace_publisher import MarketplacePublisher


@pytest.fixture
def temp_config_dir(tmp_path):
    config_dir = tmp_path / "config"
    config_dir.mkdir()
    return config_dir


@pytest.fixture
def skill_dir(tmp_path):
    sd = tmp_path / "test-skill"
    sd.mkdir()
    (sd / "SKILL.md").write_text(
        "---\nname: test-skill\ndescription: A test skill for unit testing.\n---\n\n"
        "# Test Skill\n\nThis is a test skill.\n"
    )
    refs = sd / "references" / "documentation"
    refs.mkdir(parents=True)
    (refs / "index.md").write_text("# Documentation\n\nTest docs.\n")
    return sd


@pytest.fixture
def skill_dir_no_frontmatter(tmp_path):
    sd = tmp_path / "plain-skill"
    sd.mkdir()
    (sd / "SKILL.md").write_text("# Plain Skill\n\nNo frontmatter here.\n")
    return sd


@pytest.fixture
def mock_marketplace_repo(tmp_path):
    import git

    repo_path = tmp_path / "marketplace_repo"
    repo_path.mkdir()
    repo = git.Repo.init(repo_path)
    mp_dir = repo_path / ".claude-plugin"
    mp_dir.mkdir()
    mp_json = {
        "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
        "name": "test-marketplace",
        "description": "Test marketplace",
        "owner": {"name": "Test", "email": "test@example.com"},
        "plugins": [
            {
                "name": "existing-plugin",
                "description": "An existing plugin",
                "author": {"name": "Test", "email": "test@example.com"},
                "source": "./plugins/existing-plugin",
                "category": "development",
            }
        ],
    }
    with open(mp_dir / "marketplace.json", "w") as f:
        json.dump(mp_json, f, indent=2)
    (repo_path / "plugins").mkdir()
    repo.index.add([".claude-plugin/marketplace.json"])
    repo.index.commit("Initial commit")
    return repo_path


class TestReadFrontmatter:
    def test_read_frontmatter_valid(self, skill_dir):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
|
||||
fm = publisher._read_frontmatter(skill_dir / "SKILL.md")
|
||||
assert fm["name"] == "test-skill"
|
||||
assert fm["description"] == "A test skill for unit testing."
|
||||
|
||||
def test_read_frontmatter_no_frontmatter(self, skill_dir_no_frontmatter):
|
||||
publisher = MarketplacePublisher.__new__(MarketplacePublisher)
|
||||
fm = publisher._read_frontmatter(skill_dir_no_frontmatter / "SKILL.md")
|
||||
assert fm == {}
|
||||
|
||||
def test_read_frontmatter_empty_file(self, tmp_path):
|
||||
(tmp_path / "SKILL.md").write_text("")
|
||||
publisher = MarketplacePublisher.__new__(MarketplacePublisher)
|
||||
assert publisher._read_frontmatter(tmp_path / "SKILL.md") == {}
|
||||
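The contract these three tests pin down (parse the `---`-fenced frontmatter into a dict, return `{}` for plain or empty files) can be sketched as a small parser. This is a hypothetical standalone helper, not the project's `_read_frontmatter`, and it only handles flat `key: value` pairs rather than full YAML:

```python
from pathlib import Path


def read_frontmatter(skill_md: Path) -> dict:
    """Parse a leading ----fenced frontmatter block into a dict.

    Returns {} for empty files and files without frontmatter.
    Only flat `key: value` pairs are handled in this sketch.
    """
    text = skill_md.read_text()
    if not text.startswith("---\n"):
        return {}
    body = text[4:]  # skip the opening fence
    end = body.find("\n---")
    if end == -1:
        return {}  # unterminated frontmatter: treat as absent
    fm = {}
    for line in body[:end].splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fm[key.strip()] = value.strip()
    return fm
```

A real implementation would likely hand the block to a YAML parser; the point here is the fallback-to-empty-dict behavior the tests require.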
class TestCopySkillToPlugin:
    def test_copy_creates_correct_structure(self, skill_dir, tmp_path):
        plugin_dir = tmp_path / "plugin_output"
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher._copy_skill_to_plugin(skill_dir, plugin_dir, "test-skill")
        assert (plugin_dir / "skills" / "test-skill" / "SKILL.md").exists()
        assert (
            plugin_dir / "skills" / "test-skill" / "references" / "documentation" / "index.md"
        ).exists()

    def test_copy_skill_md_content_preserved(self, skill_dir, tmp_path):
        plugin_dir = tmp_path / "plugin_output"
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher._copy_skill_to_plugin(skill_dir, plugin_dir, "test-skill")
        original = (skill_dir / "SKILL.md").read_text()
        copied = (plugin_dir / "skills" / "test-skill" / "SKILL.md").read_text()
        assert original == copied

    def test_copy_without_references(self, tmp_path):
        skill_dir = tmp_path / "skill-no-refs"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text("# Skill\n")
        plugin_dir = tmp_path / "plugin_output"
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher._copy_skill_to_plugin(skill_dir, plugin_dir, "test-skill")
        assert (plugin_dir / "skills" / "test-skill" / "SKILL.md").exists()
        assert not (plugin_dir / "skills" / "test-skill" / "references").exists()


class TestGeneratePluginJson:
    def test_generate_plugin_json(self):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        result = publisher._generate_plugin_json(
            "test-skill", "A test skill", {"name": "Test", "email": "test@example.com"}
        )
        assert result == {
            "name": "test-skill",
            "description": "A test skill",
            "author": {"name": "Test", "email": "test@example.com"},
        }


class TestUpdateMarketplaceJson:
    def test_update_appends_new_plugin(self, mock_marketplace_repo):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        author = {"name": "Test", "email": "test@example.com"}
        publisher._update_marketplace_json(
            mock_marketplace_repo, "new-plugin", "New plugin", author, "development"
        )
        with open(mock_marketplace_repo / ".claude-plugin" / "marketplace.json") as f:
            data = json.load(f)
        assert len(data["plugins"]) == 2
        assert "new-plugin" in [p["name"] for p in data["plugins"]]

    def test_update_existing_plugin(self, mock_marketplace_repo):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        author = {"name": "Test", "email": "test@example.com"}
        publisher._update_marketplace_json(
            mock_marketplace_repo, "existing-plugin", "Updated", author, "tools"
        )
        with open(mock_marketplace_repo / ".claude-plugin" / "marketplace.json") as f:
            data = json.load(f)
        assert len(data["plugins"]) == 1
        assert data["plugins"][0]["description"] == "Updated"

    def test_update_sorts_plugins_alphabetically(self, mock_marketplace_repo):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        author = {"name": "Test", "email": "test@example.com"}
        publisher._update_marketplace_json(
            mock_marketplace_repo, "aaa-plugin", "First", author, "dev"
        )
        with open(mock_marketplace_repo / ".claude-plugin" / "marketplace.json") as f:
            data = json.load(f)
        names = [p["name"] for p in data["plugins"]]
        assert names == sorted(names)

    def test_update_creates_marketplace_json_if_missing(self, tmp_path):
        repo_path = tmp_path / "empty_repo"
        repo_path.mkdir()
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        author = {"name": "Test", "email": "test@example.com"}
        publisher._update_marketplace_json(repo_path, "new-plugin", "Desc", author, "development")
        assert (repo_path / ".claude-plugin" / "marketplace.json").exists()
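Taken together, the four `TestUpdateMarketplaceJson` cases specify an upsert-and-sort operation: append new entries, replace an entry with the same name, keep `plugins` sorted alphabetically, and create the file with a skeleton when it is missing. A minimal sketch of that behavior (illustrative; the project's actual `_update_marketplace_json` and its skeleton fields may differ):

```python
import json
from pathlib import Path


def update_marketplace_json(repo_path: Path, name: str, description: str,
                            author: dict, category: str) -> None:
    """Upsert a plugin entry in .claude-plugin/marketplace.json."""
    mp_file = repo_path / ".claude-plugin" / "marketplace.json"
    if mp_file.exists():
        data = json.loads(mp_file.read_text())
    else:
        # Create a minimal skeleton when the registry does not exist yet
        mp_file.parent.mkdir(parents=True, exist_ok=True)
        data = {"name": repo_path.name, "description": "", "plugins": []}
    entry = {
        "name": name,
        "description": description,
        "author": author,
        "source": f"./plugins/{name}",
        "category": category,
    }
    # Drop any existing entry with the same name, append, then sort by name
    data["plugins"] = [p for p in data["plugins"] if p["name"] != name] + [entry]
    data["plugins"].sort(key=lambda p: p["name"])
    mp_file.write_text(json.dumps(data, indent=2) + "\n")
```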
class TestPublishErrors:
    def test_publish_missing_skill_md(self, tmp_path):
        empty_dir = tmp_path / "empty"
        empty_dir.mkdir()
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher.git_repo = MagicMock()
        with pytest.raises(FileNotFoundError, match="SKILL.md not found"):
            publisher.publish(skill_dir=empty_dir, marketplace_name="test")

    @patch.dict(os.environ, {}, clear=True)
    def test_publish_missing_token(self, skill_dir, temp_config_dir):
        from skill_seekers.mcp.marketplace_manager import MarketplaceManager

        manager = MarketplaceManager(config_dir=str(temp_config_dir))
        manager.add_marketplace(
            name="test", git_url="https://github.com/test/repo.git", token_env="NONEXISTENT_TOKEN"
        )
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher.git_repo = MagicMock()
        with (
            patch(
                "skill_seekers.mcp.marketplace_publisher.MarketplaceManager", return_value=manager
            ),
            pytest.raises(RuntimeError, match="Set NONEXISTENT_TOKEN"),
        ):
            publisher.publish(skill_dir=skill_dir, marketplace_name="test")

    def test_publish_plugin_already_exists(self, skill_dir, tmp_path, temp_config_dir):
        import git as gitmodule
        from skill_seekers.mcp.marketplace_manager import MarketplaceManager

        manager = MarketplaceManager(config_dir=str(temp_config_dir))
        manager.add_marketplace(
            name="test", git_url="https://github.com/test/repo.git", token_env="TEST_TOKEN"
        )
        # Create a cached repo without .git so publish() takes the clone path
        cache_dir = tmp_path / "cache"
        cache_dir.mkdir()

        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher.git_repo = MagicMock()
        publisher.git_repo.cache_dir = cache_dir
        publisher.git_repo.inject_token.return_value = "https://fake@github.com/test/repo.git"

        # Mock clone_from to create the dir with existing plugin
        def fake_clone(_url, path, **_kwargs):
            from pathlib import Path

            p = Path(path)
            p.mkdir(parents=True, exist_ok=True)
            (p / "plugins" / "test-skill").mkdir(parents=True)
            r = gitmodule.Repo.init(p, initial_branch="main")
            r.create_remote("origin", _url)
            return r

        with (
            patch.dict(os.environ, {"TEST_TOKEN": "fake-token"}),
            patch(
                "skill_seekers.mcp.marketplace_publisher.MarketplaceManager",
                return_value=manager,
            ),
            patch.object(gitmodule.Repo, "clone_from", side_effect=fake_clone),
            pytest.raises(ValueError, match="already exists"),
        ):
            publisher.publish(skill_dir=skill_dir, marketplace_name="test")

    def test_publish_marketplace_not_found(self, skill_dir, temp_config_dir):
        from skill_seekers.mcp.marketplace_manager import MarketplaceManager

        manager = MarketplaceManager(config_dir=str(temp_config_dir))
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        publisher.git_repo = MagicMock()
        with (
            patch(
                "skill_seekers.mcp.marketplace_publisher.MarketplaceManager", return_value=manager
            ),
            pytest.raises(KeyError, match="not found"),
        ):
            publisher.publish(skill_dir=skill_dir, marketplace_name="nonexistent")


class TestValidateSkillName:
    """Test skill name validation to prevent path traversal."""

    def test_valid_names(self):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        assert publisher._validate_skill_name("react") == "react"
        assert publisher._validate_skill_name("spine-unity") == "spine-unity"
        assert publisher._validate_skill_name("my_skill_v2") == "my_skill_v2"
        assert publisher._validate_skill_name("skill.v1") == "skill.v1"

    def test_path_traversal_rejected(self):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        with pytest.raises(ValueError, match="Invalid skill name"):
            publisher._validate_skill_name("../../etc/passwd")
        with pytest.raises(ValueError, match="Invalid skill name"):
            publisher._validate_skill_name("../escape")

    def test_empty_name_rejected(self):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        with pytest.raises(ValueError, match="Invalid skill name"):
            publisher._validate_skill_name("")

    def test_slash_rejected(self):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        with pytest.raises(ValueError, match="Invalid skill name"):
            publisher._validate_skill_name("path/traversal")

    def test_special_chars_rejected(self):
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        with pytest.raises(ValueError, match="Invalid skill name"):
            publisher._validate_skill_name("skill;rm -rf")
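The validation behavior the `TestValidateSkillName` cases specify (allow plain identifiers with `.`, `_`, `-`; reject empty names, slashes, `..` traversal, and shell metacharacters) fits an allowlist pattern. A sketch under those assumptions; `validate_skill_name` here is a hypothetical standalone version of the method under test:

```python
import re

# Allowlist: must start alphanumeric, then letters/digits/._- only.
_SKILL_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]*$")


def validate_skill_name(name: str) -> str:
    """Reject names that could escape the plugins/ directory.

    Path separators and shell metacharacters fail the allowlist regex;
    '..' is rejected explicitly since '..' alone would otherwise match.
    """
    if not name or ".." in name or not _SKILL_NAME_RE.match(name):
        raise ValueError(f"Invalid skill name: {name!r}")
    return name
```

Allowlisting is the safer direction for this kind of check: rather than enumerating dangerous characters, only known-good shapes pass.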
class TestPublishSuccess:
    """Test publish() success path using a local bare git repo."""

    def test_publish_success_flow(self, skill_dir, tmp_path):
        """Full success path: clone → copy → commit → push."""
        import git as gitmodule

        # Create a working repo with initial marketplace structure, then bare-clone it
        working_path = tmp_path / "working"
        working_path.mkdir()
        repo = gitmodule.Repo.init(working_path, initial_branch="main")
        repo.config_writer().set_value("user", "name", "Test").release()
        repo.config_writer().set_value("user", "email", "test@test.com").release()

        mp_dir = working_path / ".claude-plugin"
        mp_dir.mkdir()
        mp_json = {
            "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
            "name": "test",
            "description": "Test",
            "owner": {"name": "Test", "email": "test@test.com"},
            "plugins": [],
        }
        with open(mp_dir / "marketplace.json", "w") as f:
            json.dump(mp_json, f)
        (working_path / "plugins").mkdir()
        repo.index.add([".claude-plugin/marketplace.json"])
        repo.index.commit("Initial commit")

        # Create bare clone as the "remote"
        bare_repo_path = tmp_path / "remote.git"
        gitmodule.Repo.clone_from(str(working_path), str(bare_repo_path), bare=True)

        # Register marketplace with file:// URL
        config_dir = tmp_path / "config"
        config_dir.mkdir()
        from skill_seekers.mcp.marketplace_manager import MarketplaceManager

        manager = MarketplaceManager(config_dir=str(config_dir))
        manager.add_marketplace(
            name="local-test",
            git_url=f"file://{bare_repo_path}",
            token_env="DUMMY_TOKEN",
            branch="main",
            author={"name": "Test Author", "email": "test@example.com"},
        )

        # Create publisher with custom cache dir
        cache_dir = tmp_path / "cache"
        cache_dir.mkdir()

        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        from skill_seekers.mcp.git_repo import GitConfigRepo

        publisher.git_repo = GitConfigRepo(cache_dir=str(cache_dir))

        # Publish — file:// URLs don't need real tokens but we need the env var set
        with (
            patch.dict(os.environ, {"DUMMY_TOKEN": "not-needed-for-file-protocol"}),
            patch(
                "skill_seekers.mcp.marketplace_publisher.MarketplaceManager",
                return_value=manager,
            ),
        ):
            result = publisher.publish(
                skill_dir=skill_dir,
                marketplace_name="local-test",
                category="testing",
            )

        # Verify result
        assert result["success"] is True
        assert result["plugin_path"] == "plugins/test-skill"
        assert result["branch"] == "main"
        assert len(result["commit_sha"]) == 7

        # Verify files in the cached clone
        cached_repo = cache_dir / "marketplace_local-test"
        assert (
            cached_repo / "plugins" / "test-skill" / "skills" / "test-skill" / "SKILL.md"
        ).exists()
        assert (cached_repo / "plugins" / "test-skill" / ".claude-plugin" / "plugin.json").exists()

        # Verify marketplace.json was updated
        with open(cached_repo / ".claude-plugin" / "marketplace.json") as f:
            data = json.load(f)
        plugin_names = [p["name"] for p in data["plugins"]]
        assert "test-skill" in plugin_names

    def test_publish_with_force_overwrites(self, skill_dir, tmp_path):
        """Test that force=True overwrites an existing plugin."""
        import git as gitmodule

        working_path = tmp_path / "working"
        working_path.mkdir()
        repo = gitmodule.Repo.init(working_path, initial_branch="main")
        repo.config_writer().set_value("user", "name", "Test").release()
        repo.config_writer().set_value("user", "email", "t@t.com").release()

        mp_dir = working_path / ".claude-plugin"
        mp_dir.mkdir()
        with open(mp_dir / "marketplace.json", "w") as f:
            json.dump(
                {"$schema": "", "name": "t", "description": "", "owner": {}, "plugins": []}, f
            )
        (working_path / "plugins" / "test-skill" / ".claude-plugin").mkdir(parents=True)
        repo.index.add([".claude-plugin/marketplace.json"])
        repo.index.commit("Initial")

        bare_repo_path = tmp_path / "remote.git"
        gitmodule.Repo.clone_from(str(working_path), str(bare_repo_path), bare=True)

        config_dir = tmp_path / "config"
        config_dir.mkdir()
        from skill_seekers.mcp.marketplace_manager import MarketplaceManager

        manager = MarketplaceManager(config_dir=str(config_dir))
        manager.add_marketplace(
            name="local-test",
            git_url=f"file://{bare_repo_path}",
            token_env="DUMMY_TOKEN",
            branch="main",
            author={"name": "Test", "email": "t@t.com"},
        )

        cache_dir = tmp_path / "cache"
        cache_dir.mkdir()
        publisher = MarketplacePublisher.__new__(MarketplacePublisher)
        from skill_seekers.mcp.git_repo import GitConfigRepo

        publisher.git_repo = GitConfigRepo(cache_dir=str(cache_dir))

        with (
            patch.dict(os.environ, {"DUMMY_TOKEN": "x"}),
            patch(
                "skill_seekers.mcp.marketplace_publisher.MarketplaceManager",
                return_value=manager,
            ),
        ):
            result = publisher.publish(
                skill_dir=skill_dir,
                marketplace_name="local-test",
                category="testing",
                force=True,
            )

        assert result["success"] is True