Files
skill-seekers-reference/tests/test_mcp_server.py
yusyus 6d37e43b83 feat: Grand Unification — one command, one interface, direct converters (#346)
* fix: resolve 8 pipeline bugs found during skill quality review

- Fix 0 APIs extracted from documentation by enriching summary.json
  with individual page file content before conflict detection
- Fix all "Unknown" entries in merged_api.md by injecting dict keys
  as API names and falling back to AI merger field names
- Fix frontmatter using raw slugs instead of config name by
  normalizing frontmatter after SKILL.md generation
- Fix leaked absolute filesystem paths in patterns/index.md by
  stripping .skillseeker-cache repo clone prefixes
- Fix ARCHITECTURE.md file count always showing "1 files" by
  counting files per language from code_analysis data
- Fix YAML parse errors on GitHub Actions workflows by converting
  boolean keys (on: true) to strings
- Fix false React/Vue.js framework detection in C# projects by
  filtering web frameworks based on primary language
- Improve how-to guide generation by broadening workflow example
  filter to include setup/config examples with sufficient complexity
- Fix test_git_sources_e2e failures caused by git init default
  branch being 'main' instead of 'master'
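The YAML boolean-key fix above is a classic YAML 1.1 gotcha: PyYAML parses the GitHub Actions trigger key `on:` as the boolean `True`, so the parsed workflow dict no longer has an `"on"` key. A minimal sketch of the repair, with a hypothetical helper name (the project's actual converter may differ):

```python
# PyYAML reads `on:` per YAML 1.1, so a parsed workflow looks like the dict
# below rather than {"on": ...}. The helper walks the structure and maps
# boolean keys back to their string spellings.
parsed = {True: {"push": {"branches": ["main"]}}, "jobs": {}}


def stringify_bool_keys(node):
    """Recursively convert boolean dict keys back to YAML 1.1 spellings."""
    if isinstance(node, dict):
        return {
            ("on" if key is True else "off" if key is False else key):
                stringify_bool_keys(value)
            for key, value in node.items()
        }
    if isinstance(node, list):
        return [stringify_bool_keys(item) for item in node]
    return node


fixed = stringify_bool_keys(parsed)
assert "on" in fixed and True not in fixed
```

Mapping `True` back to `"on"` is an assumption that fits the workflow-trigger case; a general-purpose fix would need to know which spelling (`on`, `yes`, `true`) the source file used.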

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 6 review issues in ExecutionContext implementation

Fixes from code review:

1. Mode resolution (#3 critical): _args_to_data no longer unconditionally
   overwrites mode. Only writes mode="api" when --api-key explicitly passed.
   Env-var-based mode detection moved to _default_data() as lowest priority.

2. Re-initialization warning (#4): initialize() now logs debug message
   when called a second time instead of silently returning stale instance.

3. _raw_args preserved in override (#5): temp context now copies _raw_args
   from parent so get_raw() works correctly inside override blocks.

4. test_local_mode_detection env cleanup (#7): test now saves/restores
   API key env vars to prevent failures when ANTHROPIC_API_KEY is set.

5. _load_config_file error handling (#8): wraps FileNotFoundError and
   JSONDecodeError with user-friendly ValueError messages.

6. Lint fixes: added logging import, fixed Generator import from
   collections.abc, fixed AgentClient return type annotation.

Remaining P2/P3 items (documented, not blocking):
- Lock TOCTOU in override() — safe on CPython, needs fix for no-GIL
- get() reads _instance without lock — same CPython caveat
- config_path not stored on instance
- AnalysisSettings.depth not Literal constrained

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address all remaining P2/P3 review issues in ExecutionContext

1. Thread safety: get() now acquires _lock before reading _instance (#2)
2. Thread safety: override() saves/restores _initialized flag to prevent
   re-init during override blocks (#10)
3. Config path stored: _config_path PrivateAttr + config_path property (#6)
4. Literal validation: AnalysisSettings.depth now uses
   Literal["surface", "deep", "full"] — rejects invalid values (#9)
5. Test updated: test_analysis_depth_choices now expects ValidationError
   for invalid depth, added test_analysis_depth_valid_choices
6. Lint cleanup: removed unused imports, fixed whitespace in tests

All 10 previously reported issues now resolved.
26 tests pass, lint clean.
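The `Literal`-constrained depth field from item 4 can be sketched with the standard library alone (the project uses a Pydantic model, which raises `ValidationError` instead; this dataclass version only illustrates the accept/reject behavior):

```python
from dataclasses import dataclass
from typing import Literal, get_args

Depth = Literal["surface", "deep", "full"]


@dataclass
class AnalysisSettings:
    depth: Depth = "surface"

    def __post_init__(self):
        # Reject any value outside the Literal's allowed set.
        if self.depth not in get_args(Depth):
            raise ValueError(
                f"depth must be one of {get_args(Depth)}, got {self.depth!r}"
            )


AnalysisSettings(depth="deep")  # accepted
try:
    AnalysisSettings(depth="shallow")  # rejected
except ValueError:
    pass
```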

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: restore 5 truncated scrapers, migrate unified_scraper, fix context init

5 scrapers had main() truncated with "# Original main continues here..."
after Kimi's migration — business logic was never connected:
- html_scraper.py — restored HtmlToSkillConverter extraction + build
- pptx_scraper.py — restored PptxToSkillConverter extraction + build
- confluence_scraper.py — restored ConfluenceToSkillConverter with 3 modes
- notion_scraper.py — restored NotionToSkillConverter with 4 sources
- chat_scraper.py — restored ChatToSkillConverter extraction + build

unified_scraper.py — migrated main() to context-first pattern with argv fallback

Fixed context initialization chain:
- main.py no longer initializes ExecutionContext (was stealing init from commands)
- create_command.py now passes config_path from source_info.parsed
- execution_context.py handles SourceInfo.raw_input (not raw_source)

All 18 scrapers now genuinely migrated. 26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 7 data flow conflicts between ExecutionContext and legacy paths

Critical fixes (CLI args silently lost):
- unified_scraper Phase 6: reads ctx.enhancement.level instead of raw JSON
  when args=None (#3, #4)
- unified_scraper Phase 6 agent: reads ctx.enhancement.agent instead of
  3 independent env var lookups (#5)
- doc_scraper._run_enhancement: uses agent_client.api_key instead of raw
  os.environ.get() — respects config file api_key (#1)

Important fixes:
- main._handle_analyze_command: populates _fake_args from ExecutionContext
  so --agent and --api-key aren't lost in analyze→enhance path (#6)
- doc_scraper type annotations: replaced forward refs with Any to avoid
  F821 undefined name errors

All changes include RuntimeError fallback for backward compatibility when
ExecutionContext isn't initialized.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: 3 crashes + 1 stub in migrated scrapers found by deep scan

1. github_scraper.py: args.scrape_only and args.enhance_level crash when
   args=None (context path). Guarded with if args and getattr(). Also
   fixed agent fallback to read ctx.enhancement.agent.

2. codebase_scraper.py: args.output and args.skip_api_reference crash in
   summary block when args=None. Replaced with output_dir local var and
   ctx.analysis.skip_api_reference.

3. epub_scraper.py: main() was still a stub ending with "# Rest of main()
   continues..." — restored full extraction + build + enhancement logic
   using ctx values exclusively.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: complete ExecutionContext migration for remaining scrapers

Kimi's Phase 4 scraper migrations + Claude's review fixes.
All 18 scrapers now use context-first pattern with argv fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 1 — ExecutionContext.get() always returns context (no RuntimeError)

get() now returns a default context instead of raising RuntimeError when
not explicitly initialized. This eliminates the need for try/except
RuntimeError blocks in all 18 scrapers.

Components can always call ExecutionContext.get() safely — it returns
defaults if not initialized, or the explicitly initialized instance.
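The get()-always-succeeds behavior, combined with the lock-before-read fix from the earlier commit, amounts to roughly the following singleton shape (a reconstruction from the commit text, not the project's actual code; the `mode` attribute is illustrative):

```python
import threading


class ExecutionContext:
    _instance = None
    _lock = threading.Lock()

    def __init__(self, mode="local"):
        self.mode = mode

    @classmethod
    def initialize(cls, **settings):
        with cls._lock:
            cls._instance = cls(**settings)
            return cls._instance

    @classmethod
    def get(cls):
        # Hold the lock while reading _instance, and fall back to a
        # default context instead of raising RuntimeError.
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance


ctx = ExecutionContext.get()  # safe even before initialize()
```

This is why the scrapers no longer need try/except RuntimeError blocks: the call site cannot observe an uninitialized state.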

Updated tests: test_get_returns_defaults_when_not_initialized,
test_reset_clears_instance (no longer expects RuntimeError).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 2a-c — remove 16 individual scraper CLI commands

Removed individual scraper commands from:
- COMMAND_MODULES in main.py (16 entries: scrape, github, pdf, word,
  epub, video, jupyter, html, openapi, asciidoc, pptx, rss, manpage,
  confluence, notion, chat)
- pyproject.toml entry points (16 skill-seekers-<type> binaries)
- parsers/__init__.py (16 parser registrations)

All source types now accessed via: skill-seekers create <source>
Kept: create, unified, analyze, enhance, package, upload, install,
      install-agent, config, doctor, and utility commands.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create SkillConverter base class + converter registry

New base interface that all 17 converters will inherit:
- SkillConverter.run() — extract + build (same call for all types)
- SkillConverter.extract() — override in subclass
- SkillConverter.build_skill() — override in subclass
- get_converter(source_type, config) — factory from registry
- CONVERTER_REGISTRY — maps source type → (module, class)

create_command will use get_converter() instead of _call_module().
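The base-class-plus-registry shape described above can be sketched as follows; the registry entries and the toy subclass are placeholders, not the project's real 17 converters:

```python
import importlib

# Maps source type -> (module path, class name). Placeholder entries.
CONVERTER_REGISTRY = {
    "pdf": ("skill_seekers.cli.pdf_scraper", "PdfToSkillConverter"),
    "html": ("skill_seekers.cli.html_scraper", "HtmlToSkillConverter"),
}


class SkillConverter:
    """Base interface: run() = extract() + build_skill(), same for all types."""

    SOURCE_TYPE = None

    def __init__(self, config):
        self.config = config

    def extract(self):  # override in subclass
        raise NotImplementedError

    def build_skill(self):  # override in subclass
        raise NotImplementedError

    def run(self):
        self.extract()
        # Check build_skill()'s return value and signal failure with exit code 1.
        return 0 if self.build_skill() else 1


def get_converter(source_type, config):
    """Factory: look up the registry, import the module lazily, instantiate."""
    try:
        module_path, class_name = CONVERTER_REGISTRY[source_type]
    except KeyError:
        raise ValueError(f"Unknown source type: {source_type!r}") from None
    module = importlib.import_module(module_path)
    return getattr(module, class_name)(config)


class EchoConverter(SkillConverter):
    """Toy subclass showing the contract."""

    SOURCE_TYPE = "echo"

    def extract(self):
        self.text = self.config.get("text", "")

    def build_skill(self):
        return bool(self.text)
```

Lazy import via the registry keeps `create` startup cheap: a converter module is only imported when its source type is actually requested.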

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Grand Unification — one command, one interface, direct converters

Complete the Grand Unification refactor: `skill-seekers create` is now
the single entry point for all 18 source types. Individual scraper CLI
commands (scrape, github, pdf, analyze, unified, etc.) are removed.

## Architecture changes

- **18 SkillConverter subclasses**: Every scraper now inherits SkillConverter
  with extract() + build_skill() + SOURCE_TYPE. Factory via get_converter().
- **create_command.py rewritten**: _build_config() constructs config dicts
  from ExecutionContext for each source type. Direct converter.run() calls
  replace the old _build_argv() + sys.argv swap + _call_module() machinery.
- **main.py simplified**: create command bypasses _reconstruct_argv entirely,
  calls CreateCommand(args).execute() directly. analyze/unified commands
  removed (create handles both via auto-detection).
- **CreateParser mode="all"**: Top-level parser now accepts all 120+ flags
  (--browser, --max-pages, --depth, etc.) since create is the only entry.
- **Centralized enhancement**: Runs once in create_command after converter,
  not duplicated in each scraper.
- **MCP tools use converters**: 5 scraping tools call get_converter()
  directly instead of subprocess. Config type auto-detected from keys.
- **ConfigValidator → UniSkillConfigValidator**: Renamed with backward-
  compat alias.
- **Data flow**: AgentClient + LocalSkillEnhancer read ExecutionContext
  first, env vars as fallback.
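The "ExecutionContext first, env vars as fallback" data flow in the last bullet reduces to a one-line resolution order; the function name here is hypothetical:

```python
import os


def resolve_api_key(ctx_api_key=None):
    """Prefer an explicitly configured key; fall back to the environment."""
    return ctx_api_key or os.environ.get("ANTHROPIC_API_KEY")
```

A configured key always wins, so a stray `ANTHROPIC_API_KEY` in the shell can no longer override a config-file choice.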

## What was removed

- main() from all 18 scraper files (~3400 lines)
- 18 CLI commands from COMMAND_MODULES + pyproject.toml entry points
- analyze + unified parsers from parser registry
- _build_argv, _call_module, _SKIP_ARGS, _DEST_TO_FLAG, all _route_*()
- setup_argument_parser, get_configuration, _check_deprecated_flags
- Tests referencing removed commands/functions

## Net impact

51 files changed, ~6000 lines removed. 2996 tests pass, 0 failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: review fixes for Grand Unification PR

- Add autouse conftest fixture to reset ExecutionContext singleton between tests
- Replace hardcoded defaults in _is_explicitly_set() with parser-derived defaults
- Upgrade ExecutionContext double-init log from debug to info
- Use logger.exception() in SkillConverter.run() to preserve tracebacks
- Fix docstring "17 types" → "18 types" in skill_converter.py
- DRY up 10 copy-paste help handlers into dict + loop (~100 lines removed)
- Fix 2 CI workflows still referencing removed `skill-seekers scrape` command
- Remove broken pyproject.toml entry point for codebase_scraper:main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 12 logic/flow issues found in deep review

Critical fixes:
- UnifiedScraper.run(): replace sys.exit(1) with return 1, add return 0
- doc_scraper: use ExecutionContext.get() when already initialized instead
  of re-calling initialize() which silently discards new config
- unified_scraper: define enhancement_config before try/except to prevent
  UnboundLocalError in LOCAL enhancement timeout read

Important fixes:
- override(): cleaner tuple save/restore for singleton swap
- --agent without --api-key now sets mode="local" so env API key doesn't
  override explicit agent choice
- Remove DeprecationWarning from _reconstruct_argv (fires on every
  non-create command in production)
- Rewrite scrape_generic_tool to use get_converter() instead of subprocess
  calls to removed main() functions
- SkillConverter.run() checks build_skill() return value, returns 1 if False
- estimate_pages_tool uses -m module invocation instead of .py file path

Low-priority fixes:
- get_converter() raises descriptive ValueError on class name typo
- test_default_values: save/clear API key env vars before asserting mode
- test_get_converter_pdf: fix config key "path" → "pdf_path"
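The UnboundLocalError fix in the critical list follows a standard pattern: a name first bound inside a `try` block does not exist on the exception path, so bind it before the block. A minimal sketch with illustrative names:

```python
def load_enhancement_config(raw):
    # Bind before try/except so the fallback path can still read it.
    enhancement_config = {"timeout": 300}
    try:
        enhancement_config = dict(raw)
    except TypeError:
        pass  # keep the defaults when raw isn't dict-like
    return enhancement_config.get("timeout", 300)
```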

3056 passed, 4 failed (pre-existing dep version issues), 32 skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update MCP server tests to mock converter instead of subprocess

scrape_docs_tool now uses get_converter() + _run_converter() in-process
instead of run_subprocess_with_streaming. Update 4 TestScrapeDocsTool
tests to mock the converter layer instead of the removed subprocess path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 23:00:52 +03:00


#!/usr/bin/env python3
"""
Comprehensive test suite for Skill Seeker MCP Server
Tests all MCP tools and server functionality
"""
import json
import os
import shutil
import sys
import tempfile
import unittest
from pathlib import Path
from unittest.mock import MagicMock, patch

# CRITICAL: Import MCP package BEFORE adding project to path
# to avoid shadowing the installed mcp package with our local mcp/ directory
# WORKAROUND for shadowing issue: Temporarily change to /tmp to import external mcp
# This avoids our local mcp/ directory being in the import path
_original_dir = os.getcwd()
try:
    os.chdir("/tmp")  # Change away from project directory
    from mcp.server import Server  # noqa: F401
    from mcp.types import TextContent, Tool  # noqa: F401

    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False
    print("Warning: MCP package not available, skipping MCP tests")
finally:
    os.chdir(_original_dir)  # Restore original directory

# NOW add parent directory to path for importing our local modules
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# Import our local MCP server module
if MCP_AVAILABLE:
    # Import from installed package (new src/ layout)
    try:
        from skill_seekers.mcp import server as skill_seeker_server
    except ImportError as e:
        print(f"Warning: Could not import skill_seeker server: {e}")
        skill_seeker_server = None
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestMCPServerInitialization(unittest.TestCase):
    """Test MCP server initialization"""

    def test_server_import(self):
        """Test that server module can be imported"""
        from mcp import server as mcp_server_module

        self.assertIsNotNone(mcp_server_module)

    def test_server_initialization(self):
        """Test server initializes correctly"""
        import mcp.server

        app = mcp.server.Server("test-skill-seeker")
        self.assertEqual(app.name, "test-skill-seeker")
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestListTools(unittest.IsolatedAsyncioTestCase):
    """Test list_tools functionality"""

    async def test_list_tools_returns_tools(self):
        """Test that list_tools returns all expected tools"""
        tools = await skill_seeker_server.list_tools()
        self.assertIsInstance(tools, list)
        self.assertGreater(len(tools), 0)
        # Check all expected tools are present
        tool_names = [tool.name for tool in tools]
        expected_tools = [
            "generate_config",
            "estimate_pages",
            "scrape_docs",
            "package_skill",
            "list_configs",
            "validate_config",
        ]
        for expected in expected_tools:
            self.assertIn(expected, tool_names, f"Missing tool: {expected}")

    async def test_tool_schemas(self):
        """Test that all tools have valid schemas"""
        tools = await skill_seeker_server.list_tools()
        for tool in tools:
            self.assertIsInstance(tool.name, str)
            self.assertIsInstance(tool.description, str)
            self.assertIn("inputSchema", tool.__dict__)
            # Verify schema has required structure
            schema = tool.inputSchema
            self.assertEqual(schema["type"], "object")
            self.assertIn("properties", schema)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestGenerateConfigTool(unittest.IsolatedAsyncioTestCase):
    """Test generate_config tool"""

    async def asyncSetUp(self):
        """Set up test environment"""
        self.temp_dir = tempfile.mkdtemp()
        self.original_cwd = os.getcwd()
        os.chdir(self.temp_dir)

    async def asyncTearDown(self):
        """Clean up test environment"""
        os.chdir(self.original_cwd)
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    async def test_generate_config_basic(self):
        """Test basic config generation"""
        args = {
            "name": "test-framework",
            "url": "https://test-framework.dev/",
            "description": "Test framework skill",
        }
        result = await skill_seeker_server.generate_config_tool(args)
        self.assertIsInstance(result, list)
        self.assertGreater(len(result), 0)
        self.assertIsInstance(result[0], TextContent)
        self.assertIn("", result[0].text)
        # Verify config file was created
        config_path = Path("configs/test-framework.json")
        self.assertTrue(config_path.exists())
        # Verify config content (unified format)
        with open(config_path) as f:
            config = json.load(f)
        self.assertEqual(config["name"], "test-framework")
        self.assertEqual(config["description"], "Test framework skill")
        # Check unified format structure
        self.assertIn("sources", config)
        self.assertEqual(len(config["sources"]), 1)
        self.assertEqual(config["sources"][0]["type"], "documentation")
        self.assertEqual(config["sources"][0]["base_url"], "https://test-framework.dev/")

    async def test_generate_config_with_options(self):
        """Test config generation with custom options"""
        args = {
            "name": "custom-framework",
            "url": "https://custom.dev/",
            "description": "Custom skill",
            "max_pages": 200,
            "rate_limit": 1.0,
        }
        _result = await skill_seeker_server.generate_config_tool(args)
        # Verify config has custom options (unified format)
        config_path = Path("configs/custom-framework.json")
        with open(config_path) as f:
            config = json.load(f)
        self.assertEqual(config["sources"][0]["max_pages"], 200)
        self.assertEqual(config["sources"][0]["rate_limit"], 1.0)

    async def test_generate_config_defaults(self):
        """Test that default values are applied correctly"""
        args = {"name": "default-test", "url": "https://test.dev/", "description": "Test defaults"}
        _result = await skill_seeker_server.generate_config_tool(args)
        config_path = Path("configs/default-test.json")
        with open(config_path) as f:
            config = json.load(f)
        # Check unified format defaults
        self.assertEqual(config["sources"][0]["max_pages"], 100)  # Default
        self.assertEqual(config["sources"][0]["rate_limit"], 0.5)  # Default
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestEstimatePagesTool(unittest.IsolatedAsyncioTestCase):
    """Test estimate_pages tool"""

    async def asyncSetUp(self):
        """Set up test environment"""
        self.temp_dir = tempfile.mkdtemp()
        self.original_cwd = os.getcwd()
        os.chdir(self.temp_dir)
        # Create a test config
        os.makedirs("configs", exist_ok=True)
        self.config_path = Path("configs/test.json")
        config_data = {
            "name": "test",
            "base_url": "https://example.com/",
            "selectors": {"main_content": "article", "title": "h1", "code_blocks": "pre"},
            "rate_limit": 0.5,
            "max_pages": 50,
        }
        with open(self.config_path, "w") as f:
            json.dump(config_data, f)

    async def asyncTearDown(self):
        """Clean up test environment"""
        os.chdir(self.original_cwd)
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    @patch("skill_seekers.mcp.tools.scraping_tools.run_subprocess_with_streaming")
    async def test_estimate_pages_success(self, mock_streaming):
        """Test successful page estimation"""
        # Mock successful subprocess run with streaming
        # Returns (stdout, stderr, returncode)
        mock_streaming.return_value = ("Estimated 50 pages", "", 0)
        args = {"config_path": str(self.config_path)}
        result = await skill_seeker_server.estimate_pages_tool(args)
        self.assertIsInstance(result, list)
        self.assertIsInstance(result[0], TextContent)
        self.assertIn("50 pages", result[0].text)
        # Should also have progress message
        self.assertIn("Estimating page count", result[0].text)

    @patch("skill_seekers.mcp.tools.scraping_tools.run_subprocess_with_streaming")
    async def test_estimate_pages_with_max_discovery(self, mock_streaming):
        """Test page estimation with custom max_discovery"""
        # Mock successful subprocess run with streaming
        mock_streaming.return_value = ("Estimated 100 pages", "", 0)
        args = {"config_path": str(self.config_path), "max_discovery": 500}
        _result = await skill_seeker_server.estimate_pages_tool(args)
        # Verify subprocess was called with correct args
        mock_streaming.assert_called_once()
        call_args = mock_streaming.call_args[0][0]
        self.assertIn("--max-discovery", call_args)
        self.assertIn("500", call_args)

    @patch("skill_seekers.mcp.tools.scraping_tools.run_subprocess_with_streaming")
    async def test_estimate_pages_error(self, mock_streaming):
        """Test error handling in page estimation"""
        # Mock failed subprocess run with streaming
        mock_streaming.return_value = ("", "Config file not found", 1)
        args = {"config_path": "nonexistent.json"}
        result = await skill_seeker_server.estimate_pages_tool(args)
        self.assertIn("Error", result[0].text)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestScrapeDocsTool(unittest.IsolatedAsyncioTestCase):
    """Test scrape_docs tool"""

    async def asyncSetUp(self):
        """Set up test environment"""
        self.temp_dir = tempfile.mkdtemp()
        self.original_cwd = os.getcwd()
        os.chdir(self.temp_dir)
        # Create test config
        os.makedirs("configs", exist_ok=True)
        self.config_path = Path("configs/test.json")
        config_data = {
            "name": "test",
            "base_url": "https://example.com/",
            "selectors": {"main_content": "article", "title": "h1", "code_blocks": "pre"},
        }
        with open(self.config_path, "w") as f:
            json.dump(config_data, f)

    async def asyncTearDown(self):
        """Clean up test environment"""
        os.chdir(self.original_cwd)
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    @patch("skill_seekers.mcp.tools.scraping_tools._run_converter")
    @patch("skill_seekers.cli.skill_converter.get_converter")
    async def test_scrape_docs_basic(self, mock_get_converter, mock_run_converter):
        """Test basic documentation scraping via in-process converter"""
        from skill_seekers.mcp.tools.scraping_tools import TextContent

        mock_run_converter.return_value = [
            TextContent(type="text", text="Scraping completed successfully")
        ]
        args = {"config_path": str(self.config_path)}
        result = await skill_seeker_server.scrape_docs_tool(args)
        self.assertIsInstance(result, list)
        self.assertIn("success", result[0].text.lower())
        mock_get_converter.assert_called_once()
        mock_run_converter.assert_called_once()

    @patch("skill_seekers.mcp.tools.scraping_tools._run_converter")
    @patch("skill_seekers.cli.skill_converter.get_converter")
    async def test_scrape_docs_with_skip_scrape(self, mock_get_converter, mock_run_converter):
        """Test scraping with skip_scrape flag"""
        from skill_seekers.mcp.tools.scraping_tools import TextContent

        mock_run_converter.return_value = [TextContent(type="text", text="Using cached data")]
        args = {"config_path": str(self.config_path), "skip_scrape": True}
        result = await skill_seeker_server.scrape_docs_tool(args)
        self.assertIsInstance(result, list)
        mock_get_converter.assert_called_once()

    @patch("skill_seekers.mcp.tools.scraping_tools._run_converter")
    @patch("skill_seekers.cli.skill_converter.get_converter")
    async def test_scrape_docs_with_dry_run(self, mock_get_converter, mock_run_converter):
        """Test scraping with dry_run flag sets converter.dry_run"""
        from skill_seekers.mcp.tools.scraping_tools import TextContent

        mock_converter = mock_get_converter.return_value
        mock_run_converter.return_value = [TextContent(type="text", text="Dry run completed")]
        args = {"config_path": str(self.config_path), "dry_run": True}
        result = await skill_seeker_server.scrape_docs_tool(args)
        self.assertIsInstance(result, list)
        # Verify dry_run was set on the converter instance
        self.assertTrue(mock_converter.dry_run)

    @patch("skill_seekers.mcp.tools.scraping_tools._run_converter")
    @patch("skill_seekers.cli.skill_converter.get_converter")
    async def test_scrape_docs_with_enhance_local(self, mock_get_converter, mock_run_converter):
        """Test scraping with local enhancement flag"""
        from skill_seekers.mcp.tools.scraping_tools import TextContent

        mock_run_converter.return_value = [
            TextContent(type="text", text="Scraping with enhancement")
        ]
        args = {"config_path": str(self.config_path), "enhance_local": True}
        result = await skill_seeker_server.scrape_docs_tool(args)
        self.assertIsInstance(result, list)
        mock_get_converter.assert_called_once()
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestPackageSkillTool(unittest.IsolatedAsyncioTestCase):
    """Test package_skill tool"""

    async def asyncSetUp(self):
        """Set up test environment"""
        self.temp_dir = tempfile.mkdtemp()
        self.original_cwd = os.getcwd()
        os.chdir(self.temp_dir)
        # Create a mock skill directory
        self.skill_dir = Path("output/test-skill")
        self.skill_dir.mkdir(parents=True)
        (self.skill_dir / "SKILL.md").write_text("# Test Skill")
        (self.skill_dir / "references").mkdir()
        (self.skill_dir / "references/index.md").write_text("# Index")

    async def asyncTearDown(self):
        """Clean up test environment"""
        os.chdir(self.original_cwd)
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    @patch("subprocess.run")
    async def test_package_skill_success(self, mock_run):
        """Test successful skill packaging"""
        mock_result = MagicMock()
        mock_result.returncode = 0
        mock_result.stdout = "Package created: test-skill.zip"
        mock_run.return_value = mock_result
        args = {"skill_dir": str(self.skill_dir)}
        result = await skill_seeker_server.package_skill_tool(args)
        self.assertIsInstance(result, list)
        self.assertIn("test-skill", result[0].text)

    @patch("subprocess.run")
    async def test_package_skill_error(self, mock_run):
        """Test error handling in skill packaging"""
        mock_result = MagicMock()
        mock_result.returncode = 1
        mock_result.stderr = "Directory not found"
        mock_run.return_value = mock_result
        args = {"skill_dir": "nonexistent-dir"}
        result = await skill_seeker_server.package_skill_tool(args)
        self.assertIn("Error", result[0].text)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestListConfigsTool(unittest.IsolatedAsyncioTestCase):
    """Test list_configs tool"""

    async def asyncSetUp(self):
        """Set up test environment"""
        self.temp_dir = tempfile.mkdtemp()
        self.original_cwd = os.getcwd()
        os.chdir(self.temp_dir)
        # Create test configs
        os.makedirs("configs", exist_ok=True)
        configs = [
            {"name": "test1", "description": "Test 1 skill", "base_url": "https://test1.dev/"},
            {"name": "test2", "description": "Test 2 skill", "base_url": "https://test2.dev/"},
        ]
        for config in configs:
            path = Path(f"configs/{config['name']}.json")
            with open(path, "w") as f:
                json.dump(config, f)

    async def asyncTearDown(self):
        """Clean up test environment"""
        os.chdir(self.original_cwd)
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    async def test_list_configs_success(self):
        """Test listing all configs"""
        result = await skill_seeker_server.list_configs_tool({})
        self.assertIsInstance(result, list)
        self.assertIsInstance(result[0], TextContent)
        self.assertIn("test1", result[0].text)
        self.assertIn("test2", result[0].text)
        self.assertIn("https://test1.dev/", result[0].text)
        self.assertIn("https://test2.dev/", result[0].text)

    async def test_list_configs_empty(self):
        """Test listing configs when directory is empty"""
        # Remove all configs
        for config_file in Path("configs").glob("*.json"):
            config_file.unlink()
        result = await skill_seeker_server.list_configs_tool({})
        self.assertIn("No config files found", result[0].text)

    async def test_list_configs_no_directory(self):
        """Test listing configs when directory doesn't exist"""
        # Remove configs directory
        shutil.rmtree("configs")
        result = await skill_seeker_server.list_configs_tool({})
        self.assertIn("No configs directory", result[0].text)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestValidateConfigTool(unittest.IsolatedAsyncioTestCase):
    """Test validate_config tool"""

    async def asyncSetUp(self):
        """Set up test environment"""
        self.temp_dir = tempfile.mkdtemp()
        self.original_cwd = os.getcwd()
        os.chdir(self.temp_dir)
        os.makedirs("configs", exist_ok=True)

    async def asyncTearDown(self):
        """Clean up test environment"""
        os.chdir(self.original_cwd)
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    async def test_validate_valid_config(self):
        """Test validating a valid config"""
        # Create valid config (unified format)
        config_path = Path("configs/valid.json")
        valid_config = {
            "name": "valid-test",
            "description": "Test configuration",
            "sources": [
                {
                    "type": "documentation",
                    "base_url": "https://example.com/",
                    "selectors": {"main_content": "article", "title": "h1", "code_blocks": "pre"},
                    "rate_limit": 0.5,
                    "max_pages": 100,
                }
            ],
        }
        with open(config_path, "w") as f:
            json.dump(valid_config, f)
        args = {"config_path": str(config_path)}
        result = await skill_seeker_server.validate_config_tool(args)
        self.assertIsInstance(result, list)
        self.assertIn("", result[0].text)
        self.assertIn("valid", result[0].text.lower())

    async def test_validate_invalid_config(self):
        """Test validating an invalid config"""
        # Create invalid config (missing required fields)
        config_path = Path("configs/invalid.json")
        invalid_config = {
            "description": "Missing name field",
            "sources": [
                {"type": "invalid_type", "url": "https://example.com"}  # Invalid source type
            ],
        }
        with open(config_path, "w") as f:
            json.dump(invalid_config, f)
        args = {"config_path": str(config_path)}
        result = await skill_seeker_server.validate_config_tool(args)
        # Should show error for invalid source type
        self.assertIn("", result[0].text)

    async def test_validate_nonexistent_config(self):
        """Test validating a nonexistent config"""
        args = {"config_path": "configs/nonexistent.json"}
        result = await skill_seeker_server.validate_config_tool(args)
        self.assertIn("Error", result[0].text)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestCallToolRouter(unittest.IsolatedAsyncioTestCase):
    """Test call_tool routing"""

    async def test_call_tool_unknown(self):
        """Test calling an unknown tool"""
        result = await skill_seeker_server.call_tool("unknown_tool", {})
        self.assertIsInstance(result, list)
        self.assertIn("Unknown tool", result[0].text)

    async def test_call_tool_exception_handling(self):
        """Test that exceptions are caught and returned as errors"""
        # Call with invalid arguments that should cause an exception
        result = await skill_seeker_server.call_tool("generate_config", {})
        self.assertIsInstance(result, list)
        self.assertIn("Error", result[0].text)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestMCPServerIntegration(unittest.IsolatedAsyncioTestCase):
    """Integration tests for MCP server"""

    async def test_full_workflow_simulation(self):
        """Test complete workflow: generate config -> validate -> estimate"""
        temp_dir = tempfile.mkdtemp()
        original_cwd = os.getcwd()
        os.chdir(temp_dir)
        try:
            # Step 1: Generate config using skill_seeker_server
            generate_args = {
                "name": "workflow-test",
                "url": "https://workflow-test.dev/",
                "description": "Workflow test skill",
            }
            result1 = await skill_seeker_server.generate_config_tool(generate_args)
            self.assertIn("", result1[0].text)
            # Step 2: Validate config
            validate_args = {"config_path": "configs/workflow-test.json"}
            result2 = await skill_seeker_server.validate_config_tool(validate_args)
            self.assertIn("", result2[0].text)
            # Step 3: List configs
            result3 = await skill_seeker_server.list_configs_tool({})
            self.assertIn("workflow-test", result3[0].text)
        finally:
            os.chdir(original_cwd)
            shutil.rmtree(temp_dir, ignore_errors=True)
@unittest.skipUnless(MCP_AVAILABLE, "MCP package not installed")
class TestSubmitConfigTool(unittest.IsolatedAsyncioTestCase):
"""Test submit_config MCP tool"""
async def test_submit_config_requires_token(self):
"""Should error without GitHub token"""
args = {
"config_json": '{"name": "test", "description": "Test", "sources": [{"type": "documentation", "base_url": "https://example.com"}]}'
}
result = await skill_seeker_server.submit_config_tool(args)
self.assertIn("GitHub token required", result[0].text)
async def test_submit_config_validates_required_fields(self):
"""Should reject config missing required fields"""
args = {
"config_json": '{"name": "test"}', # Missing description and sources
"github_token": "fake_token",
}
result = await skill_seeker_server.submit_config_tool(args)
# Should fail validation for missing required fields
result_text = result[0].text.lower()
self.assertTrue(
"validation failed" in result_text
or "error" in result_text
or "missing" in result_text
or "required" in result_text,
f"Expected validation error, got: {result[0].text}",
)

    async def test_submit_config_validates_name_format(self):
"""Should reject invalid name characters"""
args = {
"config_json": '{"name": "React@2024!", "description": "Test", "sources": [{"type": "documentation", "base_url": "https://example.com"}]}',
"github_token": "fake_token",
}
result = await skill_seeker_server.submit_config_tool(args)
self.assertIn("validation failed", result[0].text.lower())

    async def test_submit_config_validates_url_format(self):
"""Should reject invalid URL format"""
args = {
"config_json": '{"name": "test", "description": "Test", "sources": [{"type": "documentation", "base_url": "not-a-url"}]}',
"github_token": "fake_token",
}
result = await skill_seeker_server.submit_config_tool(args)
self.assertIn("validation failed", result[0].text.lower())

    async def test_submit_config_rejects_legacy_format(self):
"""Should reject legacy config format (removed in v2.11.0)"""
legacy_config = {
"name": "testframework",
"description": "Test framework docs",
"base_url": "https://docs.test.com/", # Legacy: base_url at root level
"selectors": {"main_content": "article", "title": "h1", "code_blocks": "pre code"},
"max_pages": 100,
}
args = {"config_json": json.dumps(legacy_config), "github_token": "fake_token"}
result = await skill_seeker_server.submit_config_tool(args)
# Should reject with helpful error message
self.assertIn("LEGACY CONFIG FORMAT DETECTED", result[0].text)
self.assertIn("sources", result[0].text) # Should mention unified format with sources array

    async def test_submit_config_accepts_unified_format(self):
"""Should accept valid unified config"""
unified_config = {
"name": "testunified",
"description": "Test unified config",
"merge_mode": "rule-based",
"sources": [
{"type": "documentation", "base_url": "https://docs.test.com/", "max_pages": 100},
{"type": "github", "repo": "testorg/testrepo"},
],
}
args = {"config_json": json.dumps(unified_config), "github_token": "fake_token"}
with patch("github.Github") as mock_gh:
mock_repo = MagicMock()
mock_issue = MagicMock()
mock_issue.html_url = "https://github.com/test/issue/2"
mock_issue.number = 2
mock_repo.create_issue.return_value = mock_issue
mock_gh.return_value.get_repo.return_value = mock_repo
result = await skill_seeker_server.submit_config_tool(args)
self.assertIn("Config submitted successfully", result[0].text)
self.assertTrue("Unified" in result[0].text or "multi-source" in result[0].text)

    async def test_submit_config_from_file_path(self):
"""Should accept config_path parameter"""
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
json.dump(
{
"name": "testfile",
"description": "From file",
"sources": [{"type": "documentation", "base_url": "https://test.com/"}],
},
f,
)
temp_path = f.name
try:
args = {"config_path": temp_path, "github_token": "fake_token"}
with patch("github.Github") as mock_gh:
mock_repo = MagicMock()
mock_issue = MagicMock()
mock_issue.html_url = "https://github.com/test/issue/3"
mock_issue.number = 3
mock_repo.create_issue.return_value = mock_issue
mock_gh.return_value.get_repo.return_value = mock_repo
result = await skill_seeker_server.submit_config_tool(args)
self.assertIn("Config submitted successfully", result[0].text)
finally:
os.unlink(temp_path)

    async def test_submit_config_detects_category(self):
"""Should auto-detect category from config name"""
args = {
"config_json": '{"name": "react-test", "description": "React", "sources": [{"type": "documentation", "base_url": "https://react.dev/"}]}',
"github_token": "fake_token",
}
with patch("github.Github") as mock_gh:
mock_repo = MagicMock()
mock_issue = MagicMock()
mock_issue.html_url = "https://github.com/test/issue/4"
mock_issue.number = 4
mock_repo.create_issue.return_value = mock_issue
mock_gh.return_value.get_repo.return_value = mock_repo
result = await skill_seeker_server.submit_config_tool(args)
# Verify category appears in result
self.assertTrue("web-frameworks" in result[0].text or "Category" in result[0].text)
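

# --- Illustrative helper (not used by the tests above) ----------------------
# Three tests in this class wire up the same PyGithub mock by hand
# (mock_gh -> get_repo -> create_issue -> stub issue). A context manager
# like this sketch could factor that out; the name `mock_github_issue` is
# an assumption, not part of the existing suite. Imports are repeated here
# so the sketch is self-contained.
from contextlib import contextmanager
from unittest.mock import MagicMock, patch


@contextmanager
def mock_github_issue(number, html_url):
    """Patch github.Github so create_issue returns a stubbed issue."""
    with patch("github.Github") as mock_gh:
        mock_issue = MagicMock()
        mock_issue.number = number
        mock_issue.html_url = html_url
        mock_repo = MagicMock()
        mock_repo.create_issue.return_value = mock_issue
        mock_gh.return_value.get_repo.return_value = mock_repo
        yield mock_repo
# Usage: wrap the submit_config_tool call, e.g.
#     with mock_github_issue(2, "https://github.com/test/issue/2"):
#         result = await skill_seeker_server.submit_config_tool(args)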


if __name__ == "__main__":
unittest.main()