skill-seekers-reference/tests/test_unified_scraper_orchestration.py
yusyus 6d37e43b83 feat: Grand Unification — one command, one interface, direct converters (#346)
* fix: resolve 8 pipeline bugs found during skill quality review

- Fix "0 APIs extracted from documentation" by enriching summary.json
  with individual page file content before conflict detection
- Fix all "Unknown" entries in merged_api.md by injecting dict keys
  as API names and falling back to AI merger field names
- Fix frontmatter using raw slugs instead of config name by
  normalizing frontmatter after SKILL.md generation
- Fix leaked absolute filesystem paths in patterns/index.md by
  stripping .skillseeker-cache repo clone prefixes
- Fix ARCHITECTURE.md file count always showing "1 files" by
  counting files per language from code_analysis data
- Fix YAML parse errors on GitHub Actions workflows by converting
  boolean keys (on: true) to strings (see the sketch after this list)
- Fix false React/Vue.js framework detection in C# projects by
  filtering web frameworks based on primary language
- Improve how-to guide generation by broadening workflow example
  filter to include setup/config examples with sufficient complexity
- Fix test_git_sources_e2e failures caused by git init default
  branch being 'main' instead of 'master'
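
A minimal sketch of the normalization behind the GitHub Actions fix above (helper name and recursion are illustrative, not this repo's exact code, and mapping True back to "on" assumes the key was written as on/off, the common case in workflows). PyYAML's YAML 1.1 resolver parses a bare `on:` key as the boolean True, which breaks anything expecting the string "on":

```python
import yaml

def stringify_bool_keys(obj):
    """Recursively map YAML 1.1 boolean keys back to their workflow-style strings."""
    if isinstance(obj, dict):
        return {
            ("on" if k is True else "off" if k is False else k): stringify_bool_keys(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [stringify_bool_keys(v) for v in obj]
    return obj

workflow = yaml.safe_load("on:\n  push:\n    branches: [main]\n")
print(workflow)                       # {True: {'push': {'branches': ['main']}}}
print(stringify_bool_keys(workflow))  # {'on': {'push': {'branches': ['main']}}}
```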

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 6 review issues in ExecutionContext implementation

Fixes from code review:

1. Mode resolution (#3 critical): _args_to_data no longer unconditionally
   overwrites mode. Only writes mode="api" when --api-key explicitly passed.
   Env-var-based mode detection moved to _default_data() as lowest priority.

2. Re-initialization warning (#4): initialize() now logs debug message
   when called a second time instead of silently returning stale instance.

3. _raw_args preserved in override (#5): temp context now copies _raw_args
   from parent so get_raw() works correctly inside override blocks.

4. test_local_mode_detection env cleanup (#7): test now saves/restores
   API key env vars to prevent failures when ANTHROPIC_API_KEY is set.

5. _load_config_file error handling (#8): wraps FileNotFoundError and
   JSONDecodeError with user-friendly ValueError messages.

6. Lint fixes: added logging import, fixed Generator import from
   collections.abc, fixed AgentClient return type annotation.

Remaining P2/P3 items (documented, not blocking):
- Lock TOCTOU in override() — safe on CPython, needs fix for no-GIL
- get() reads _instance without lock — same CPython caveat
- config_path not stored on instance
- AnalysisSettings.depth not Literal constrained

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address all remaining P2/P3 review issues in ExecutionContext

1. Thread safety: get() now acquires _lock before reading _instance (#2)
2. Thread safety: override() saves/restores _initialized flag to prevent
   re-init during override blocks (#10)
3. Config path stored: _config_path PrivateAttr + config_path property (#6)
4. Literal validation: AnalysisSettings.depth now uses
   Literal["surface", "deep", "full"] — rejects invalid values (#9).
   See the sketch after this list.
5. Test updated: test_analysis_depth_choices now expects ValidationError
   for invalid depth, added test_analysis_depth_valid_choices
6. Lint cleanup: removed unused imports, fixed whitespace in tests
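
A minimal sketch of the Literal-constrained field from item 4 (the default value shown is an assumption; the real AnalysisSettings has more fields):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

class AnalysisSettings(BaseModel):
    # Only the three documented depths validate; anything else raises ValidationError.
    depth: Literal["surface", "deep", "full"] = "surface"

print(AnalysisSettings(depth="deep").depth)  # deep
try:
    AnalysisSettings(depth="medium")
except ValidationError:
    print("rejected invalid depth")  # the Literal constraint refuses values outside the three choices
```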

All 10 previously reported issues now resolved.
26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: restore 5 truncated scrapers, migrate unified_scraper, fix context init

5 scrapers had main() truncated with "# Original main continues here..."
after Kimi's migration — business logic was never connected:
- html_scraper.py — restored HtmlToSkillConverter extraction + build
- pptx_scraper.py — restored PptxToSkillConverter extraction + build
- confluence_scraper.py — restored ConfluenceToSkillConverter with 3 modes
- notion_scraper.py — restored NotionToSkillConverter with 4 sources
- chat_scraper.py — restored ChatToSkillConverter extraction + build

unified_scraper.py — migrated main() to context-first pattern with argv fallback

Fixed context initialization chain:
- main.py no longer initializes ExecutionContext (was stealing init from commands)
- create_command.py now passes config_path from source_info.parsed
- execution_context.py handles SourceInfo.raw_input (not raw_source)

All 18 scrapers now genuinely migrated. 26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 7 data flow conflicts between ExecutionContext and legacy paths

Critical fixes (CLI args silently lost):
- unified_scraper Phase 6: reads ctx.enhancement.level instead of raw JSON
  when args=None (#3, #4)
- unified_scraper Phase 6 agent: reads ctx.enhancement.agent instead of
  3 independent env var lookups (#5)
- doc_scraper._run_enhancement: uses agent_client.api_key instead of raw
  os.environ.get() — respects config file api_key (#1)

Important fixes:
- main._handle_analyze_command: populates _fake_args from ExecutionContext
  so --agent and --api-key aren't lost in analyze→enhance path (#6)
- doc_scraper type annotations: replaced forward refs with Any to avoid
  F821 undefined name errors

All changes include RuntimeError fallback for backward compatibility when
ExecutionContext isn't initialized.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: 3 crashes + 1 stub in migrated scrapers found by deep scan

1. github_scraper.py: args.scrape_only and args.enhance_level crash when
   args=None (context path). Guarded with if args and getattr() (sketched
   below). Also fixed agent fallback to read ctx.enhancement.agent.

2. codebase_scraper.py: args.output and args.skip_api_reference crash in
   summary block when args=None. Replaced with output_dir local var and
   ctx.analysis.skip_api_reference.

3. epub_scraper.py: main() was still a stub ending with "# Rest of main()
   continues..." — restored full extraction + build + enhancement logic
   using ctx values exclusively.
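
The guard in item 1 reduces to a null-tolerant read with a context fallback. A minimal illustration (function name and values are hypothetical):

```python
from argparse import Namespace

def resolve_enhance_level(args, ctx_level):
    """Prefer an explicit CLI value; tolerate args=None on the context-first path."""
    level = getattr(args, "enhance_level", None) if args else None
    return level if level is not None else ctx_level

print(resolve_enhance_level(None, 2))                        # 2: context path, no argparse Namespace
print(resolve_enhance_level(Namespace(enhance_level=3), 2))  # 3: explicit CLI flag wins
```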

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: complete ExecutionContext migration for remaining scrapers

Kimi's Phase 4 scraper migrations + Claude's review fixes.
All 18 scrapers now use context-first pattern with argv fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 1 — ExecutionContext.get() always returns context (no RuntimeError)

get() now returns a default context instead of raising RuntimeError when
not explicitly initialized. This eliminates the need for try/except
RuntimeError blocks in all 18 scrapers.

Components can always call ExecutionContext.get() safely — it returns
defaults if not initialized, or the explicitly initialized instance.
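
A simplified sketch of the accessor pattern (the real ExecutionContext is a Pydantic model with far more state; only get()/initialize() and the lock from the earlier thread-safety fix are shown):

```python
import threading

class ExecutionContext:
    """Simplified sketch, not the project's full implementation."""

    _instance = None
    _initialized = False
    _lock = threading.Lock()

    def __init__(self, **data):
        # Defaults when constructed with no arguments; explicit settings otherwise.
        self.__dict__.update(data)

    @classmethod
    def get(cls):
        # Never raises: an uninitialized context simply yields a default instance.
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    @classmethod
    def initialize(cls, **data):
        with cls._lock:
            cls._instance = cls(**data)
            cls._initialized = True
            return cls._instance

ctx = ExecutionContext.get()  # safe even if initialize() was never called
```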

Updated tests: test_get_returns_defaults_when_not_initialized,
test_reset_clears_instance (no longer expects RuntimeError).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 2a-c — remove 16 individual scraper CLI commands

Removed individual scraper commands from:
- COMMAND_MODULES in main.py (16 entries: scrape, github, pdf, word,
  epub, video, jupyter, html, openapi, asciidoc, pptx, rss, manpage,
  confluence, notion, chat)
- pyproject.toml entry points (16 skill-seekers-<type> binaries)
- parsers/__init__.py (16 parser registrations)

All source types now accessed via: skill-seekers create <source>
Kept: create, unified, analyze, enhance, package, upload, install,
      install-agent, config, doctor, and utility commands.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create SkillConverter base class + converter registry

New base interface that all 17 converters will inherit:
- SkillConverter.run() — extract + build (same call for all types)
- SkillConverter.extract() — override in subclass
- SkillConverter.build_skill() — override in subclass
- get_converter(source_type, config) — factory from registry
- CONVERTER_REGISTRY — maps source type → (module, class)

create_command will use get_converter() instead of _call_module().
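
A hedged sketch of that interface (the registry entry, module path, and logging details are illustrative, not the repository's exact code):

```python
import importlib
import logging

logger = logging.getLogger(__name__)

# Maps source type -> (module path, class name); the single entry shown is illustrative.
CONVERTER_REGISTRY = {
    "pdf": ("skill_seekers.cli.pdf_scraper", "PDFToSkillConverter"),
}

class SkillConverter:
    SOURCE_TYPE = None  # each subclass declares its own type, e.g. "pdf"

    def __init__(self, config):
        self.config = config

    def extract(self):
        raise NotImplementedError  # override in subclass

    def build_skill(self):
        raise NotImplementedError  # override in subclass

    def run(self):
        """Extract then build: the same call for every source type."""
        try:
            self.extract()
            return 0 if self.build_skill() else 1
        except Exception:
            logger.exception("Conversion failed for %s", self.SOURCE_TYPE)
            return 1

def get_converter(source_type, config):
    """Factory: resolve the registered (module, class) pair and instantiate it."""
    module_name, class_name = CONVERTER_REGISTRY[source_type]
    cls = getattr(importlib.import_module(module_name), class_name, None)
    if cls is None:
        raise ValueError(f"{module_name} has no class named {class_name!r}")
    return cls(config)
```

With something like this in place, create_command can call get_converter(source_type, config).run() instead of rebuilding sys.argv and invoking a scraper's main().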

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Grand Unification — one command, one interface, direct converters

Complete the Grand Unification refactor: `skill-seekers create` is now
the single entry point for all 18 source types. Individual scraper CLI
commands (scrape, github, pdf, analyze, unified, etc.) are removed.

## Architecture changes

- **18 SkillConverter subclasses**: Every scraper now inherits SkillConverter
  with extract() + build_skill() + SOURCE_TYPE. Factory via get_converter().
- **create_command.py rewritten**: _build_config() constructs config dicts
  from ExecutionContext for each source type. Direct converter.run() calls
  replace the old _build_argv() + sys.argv swap + _call_module() machinery.
- **main.py simplified**: create command bypasses _reconstruct_argv entirely,
  calls CreateCommand(args).execute() directly. analyze/unified commands
  removed (create handles both via auto-detection).
- **CreateParser mode="all"**: Top-level parser now accepts all 120+ flags
  (--browser, --max-pages, --depth, etc.) since create is the only entry.
- **Centralized enhancement**: Runs once in create_command after converter,
  not duplicated in each scraper.
- **MCP tools use converters**: 5 scraping tools call get_converter()
  directly instead of subprocess. Config type auto-detected from keys.
- **ConfigValidator → UniSkillConfigValidator**: Renamed with backward-
  compat alias.
- **Data flow**: AgentClient + LocalSkillEnhancer read ExecutionContext
  first, env vars as fallback.

## What was removed

- main() from all 18 scraper files (~3400 lines)
- 18 CLI commands from COMMAND_MODULES + pyproject.toml entry points
- analyze + unified parsers from parser registry
- _build_argv, _call_module, _SKIP_ARGS, _DEST_TO_FLAG, all _route_*()
- setup_argument_parser, get_configuration, _check_deprecated_flags
- Tests referencing removed commands/functions

## Net impact

51 files changed, ~6000 lines removed. 2996 tests pass, 0 failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: review fixes for Grand Unification PR

- Add autouse conftest fixture to reset ExecutionContext singleton between tests
- Replace hardcoded defaults in _is_explicitly_set() with parser-derived defaults
- Upgrade ExecutionContext double-init log from debug to info
- Use logger.exception() in SkillConverter.run() to preserve tracebacks
- Fix docstring "17 types" → "18 types" in skill_converter.py
- DRY up 10 copy-paste help handlers into dict + loop (~100 lines removed)
- Fix 2 CI workflows still referencing removed `skill-seekers scrape` command
- Remove broken pyproject.toml entry point for codebase_scraper:main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 12 logic/flow issues found in deep review

Critical fixes:
- UnifiedScraper.run(): replace sys.exit(1) with return 1, add return 0
- doc_scraper: use ExecutionContext.get() when already initialized instead
  of re-calling initialize() which silently discards new config
- unified_scraper: define enhancement_config before try/except to prevent
  UnboundLocalError in LOCAL enhancement timeout read

Important fixes:
- override(): cleaner tuple save/restore for singleton swap
- --agent without --api-key now sets mode="local" so env API key doesn't
  override explicit agent choice
- Remove DeprecationWarning from _reconstruct_argv (fires on every
  non-create command in production)
- Rewrite scrape_generic_tool to use get_converter() instead of subprocess
  calls to removed main() functions
- SkillConverter.run() checks build_skill() return value, returns 1 if False
- estimate_pages_tool uses -m module invocation instead of .py file path

Low-priority fixes:
- get_converter() raises descriptive ValueError on class name typo
- test_default_values: save/clear API key env vars before asserting mode
- test_get_converter_pdf: fix config key "path" → "pdf_path"

3056 passed, 4 failed (pre-existing dep version issues), 32 skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update MCP server tests to mock converter instead of subprocess

scrape_docs_tool now uses get_converter() + _run_converter() in-process
instead of run_subprocess_with_streaming. Update 4 TestScrapeDocsTool
tests to mock the converter layer instead of the removed subprocess path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 23:00:52 +03:00


"""
Tests for UnifiedScraper orchestration methods.
Covers:
- scrape_all_sources() - routing by source type
- _scrape_documentation() - subprocess invocation and data population
- _scrape_github() - GitHubScraper delegation and scraped_data append
- _scrape_pdf() - PDFToSkillConverter delegation and scraped_data append
- _scrape_local() - analyze_codebase delegation; known 'args' bug
- run() - 4-phase orchestration and workflow integration
"""
import json
from pathlib import Path
from unittest.mock import MagicMock, patch

from skill_seekers.cli.unified_scraper import UnifiedScraper


# ---------------------------------------------------------------------------
# Shared factory helper
# ---------------------------------------------------------------------------
def _make_scraper(extra_config=None, tmp_path=None):
    """Create a minimal UnifiedScraper bypassing __init__ dir creation."""
    config = {
        "name": "test_unified",
        "description": "Test unified config",
        "sources": [],
        **(extra_config or {}),
    }
    scraper = UnifiedScraper.__new__(UnifiedScraper)
    scraper.config = config
    scraper.name = config["name"]
    scraper.merge_mode = config.get("merge_mode", "rule-based")
    scraper.scraped_data = {
        "documentation": [],
        "github": [],
        "pdf": [],
        "local": [],
    }
    scraper._source_counters = {"documentation": 0, "github": 0, "pdf": 0, "local": 0}
    if tmp_path:
        scraper.output_dir = str(tmp_path / "output")
        scraper.cache_dir = str(tmp_path / "cache")
        scraper.sources_dir = str(tmp_path / "cache/sources")
        scraper.data_dir = str(tmp_path / "cache/data")
        scraper.repos_dir = str(tmp_path / "cache/repos")
        scraper.logs_dir = str(tmp_path / "cache/logs")
        # Pre-create data_dir so tests that write temp configs can proceed
        Path(scraper.data_dir).mkdir(parents=True, exist_ok=True)
    else:
        scraper.output_dir = "output/test_unified"
        scraper.cache_dir = ".skillseeker-cache/test_unified"
        scraper.sources_dir = ".skillseeker-cache/test_unified/sources"
        scraper.data_dir = ".skillseeker-cache/test_unified/data"
        scraper.repos_dir = ".skillseeker-cache/test_unified/repos"
        scraper.logs_dir = ".skillseeker-cache/test_unified/logs"
    # Mock validator so scrape_all_sources() doesn't need real config file
    scraper.validator = MagicMock()
    scraper.validator.is_unified = True
    scraper.validator.needs_api_merge.return_value = False
    return scraper


# ===========================================================================
# 1. scrape_all_sources() routing
# ===========================================================================
class TestScrapeAllSourcesRouting:
    """scrape_all_sources() dispatches to the correct _scrape_* method."""

    def _run_with_sources(self, sources, monkeypatch):
        """Helper: set sources on a fresh scraper and run scrape_all_sources()."""
        scraper = _make_scraper()
        scraper.config["sources"] = sources
        calls = {"documentation": 0, "github": 0, "pdf": 0, "local": 0}
        monkeypatch.setattr(
            scraper,
            "_scrape_documentation",
            lambda _s: calls.__setitem__("documentation", calls["documentation"] + 1),
        )
        monkeypatch.setattr(
            scraper, "_scrape_github", lambda _s: calls.__setitem__("github", calls["github"] + 1)
        )
        monkeypatch.setattr(
            scraper, "_scrape_pdf", lambda _s: calls.__setitem__("pdf", calls["pdf"] + 1)
        )
        monkeypatch.setattr(
            scraper, "_scrape_local", lambda _s: calls.__setitem__("local", calls["local"] + 1)
        )
        scraper.scrape_all_sources()
        return calls

    def test_documentation_source_routes_to_scrape_documentation(self, monkeypatch):
        calls = self._run_with_sources(
            [{"type": "documentation", "base_url": "https://example.com"}], monkeypatch
        )
        assert calls["documentation"] == 1
        assert calls["github"] == 0
        assert calls["pdf"] == 0
        assert calls["local"] == 0

    def test_github_source_routes_to_scrape_github(self, monkeypatch):
        calls = self._run_with_sources([{"type": "github", "repo": "user/repo"}], monkeypatch)
        assert calls["github"] == 1
        assert calls["documentation"] == 0

    def test_pdf_source_routes_to_scrape_pdf(self, monkeypatch):
        calls = self._run_with_sources([{"type": "pdf", "path": "/tmp/doc.pdf"}], monkeypatch)
        assert calls["pdf"] == 1
        assert calls["documentation"] == 0

    def test_local_source_routes_to_scrape_local(self, monkeypatch):
        calls = self._run_with_sources([{"type": "local", "path": "/tmp/project"}], monkeypatch)
        assert calls["local"] == 1
        assert calls["documentation"] == 0

    def test_unknown_source_type_is_skipped(self, monkeypatch):
        """Unknown types are logged as warnings but do not crash or call any scraper."""
        calls = self._run_with_sources([{"type": "unsupported_xyz"}], monkeypatch)
        assert all(v == 0 for v in calls.values())

    def test_multiple_sources_each_scraper_called_once(self, monkeypatch):
        sources = [
            {"type": "documentation", "base_url": "https://a.com"},
            {"type": "github", "repo": "user/repo"},
            {"type": "pdf", "path": "/tmp/a.pdf"},
            {"type": "local", "path": "/tmp/proj"},
        ]
        calls = self._run_with_sources(sources, monkeypatch)
        assert calls == {"documentation": 1, "github": 1, "pdf": 1, "local": 1}

    def test_exception_in_one_source_continues_others(self, monkeypatch):
        """An exception in one scraper does not abort remaining sources."""
        scraper = _make_scraper()
        scraper.config["sources"] = [
            {"type": "documentation", "base_url": "https://a.com"},
            {"type": "github", "repo": "user/repo"},
        ]
        calls = {"documentation": 0, "github": 0}

        def raise_on_doc(_s):
            raise RuntimeError("simulated doc failure")

        def count_github(_s):
            calls["github"] += 1

        monkeypatch.setattr(scraper, "_scrape_documentation", raise_on_doc)
        monkeypatch.setattr(scraper, "_scrape_github", count_github)
        # Should not raise
        scraper.scrape_all_sources()
        assert calls["github"] == 1


# ===========================================================================
# 2. _scrape_documentation()
# ===========================================================================
class TestScrapeDocumentation:
    """_scrape_documentation() calls scrape_documentation() directly."""

    def test_scrape_documentation_called_directly(self, tmp_path):
        """scrape_documentation is called directly (not via subprocess)."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"base_url": "https://docs.example.com/", "type": "documentation"}
        with patch("skill_seekers.cli.doc_scraper.scrape_documentation") as mock_scrape:
            mock_scrape.return_value = 1  # simulate failure
            scraper._scrape_documentation(source)
        assert mock_scrape.called

    def test_nothing_appended_on_scrape_failure(self, tmp_path):
        """If scrape_documentation returns non-zero, scraped_data["documentation"] stays empty."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"base_url": "https://docs.example.com/", "type": "documentation"}
        with patch("skill_seekers.cli.doc_scraper.scrape_documentation") as mock_scrape:
            mock_scrape.return_value = 1
            scraper._scrape_documentation(source)
        assert scraper.scraped_data["documentation"] == []

    def test_llms_txt_url_forwarded_to_doc_config(self, tmp_path):
        """llms_txt_url from source is forwarded to the doc config."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {
            "base_url": "https://docs.example.com/",
            "type": "documentation",
            "llms_txt_url": "https://docs.example.com/llms.txt",
        }
        captured_config = {}

        def fake_scrape(config, ctx=None):  # noqa: ARG001
            captured_config.update(config)
            return 1  # fail so we don't need to set up output files

        with patch("skill_seekers.cli.doc_scraper.scrape_documentation", side_effect=fake_scrape):
            scraper._scrape_documentation(source)
        # The llms_txt_url should be in the sources list of the doc config
        sources = captured_config.get("sources", [])
        assert any("llms_txt_url" in s for s in sources)

    def test_start_urls_forwarded_to_doc_config(self, tmp_path):
        """start_urls from source is forwarded to the doc config."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {
            "base_url": "https://docs.example.com/",
            "type": "documentation",
            "start_urls": ["https://docs.example.com/intro"],
        }
        captured_config = {}

        def fake_scrape(config, ctx=None):  # noqa: ARG001
            captured_config.update(config)
            return 1

        with patch("skill_seekers.cli.doc_scraper.scrape_documentation", side_effect=fake_scrape):
            scraper._scrape_documentation(source)
        sources = captured_config.get("sources", [])
        assert any("start_urls" in s for s in sources)


# ===========================================================================
# 3. _scrape_github()
# ===========================================================================
class TestScrapeGithub:
    """_scrape_github() delegates to GitHubScraper and populates scraped_data."""

    def _mock_github_scraper(self, monkeypatch, github_data=None):
        """Patch the GitHubScraper class that _scrape_github() instantiates."""
        if github_data is None:
            github_data = {"files": [], "readme": "", "stars": 0}
        mock_scraper_cls = MagicMock()
        mock_instance = MagicMock()
        mock_instance.scrape.return_value = github_data
        mock_scraper_cls.return_value = mock_instance
        monkeypatch.setattr(
            "skill_seekers.cli.github_scraper.GitHubScraper",
            mock_scraper_cls,
        )
        return mock_scraper_cls, mock_instance

    def test_github_scraper_instantiated_with_repo(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "github", "repo": "user/myrepo", "enable_codebase_analysis": False}
        mock_cls, mock_inst = self._mock_github_scraper(monkeypatch)
        (tmp_path / "output").mkdir(parents=True, exist_ok=True)
        with (
            patch("skill_seekers.cli.unified_scraper.json.dump"),
            patch("skill_seekers.cli.unified_scraper.json.dumps", return_value="{}"),
            patch("builtins.open", MagicMock()),
        ):
            scraper._scrape_github(source)
        mock_cls.assert_called_once()
        init_call_config = mock_cls.call_args[0][0]
        assert init_call_config["repo"] == "user/myrepo"

    def test_scrape_method_called(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "github", "repo": "user/myrepo", "enable_codebase_analysis": False}
        _, mock_inst = self._mock_github_scraper(monkeypatch)
        with patch("builtins.open", MagicMock()):
            scraper._scrape_github(source)
        mock_inst.scrape.assert_called_once()

    def test_scraped_data_appended(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "github", "repo": "user/myrepo", "enable_codebase_analysis": False}
        gh_data = {"files": [{"path": "README.md"}], "readme": "Hello"}
        self._mock_github_scraper(monkeypatch, github_data=gh_data)
        with patch("builtins.open", MagicMock()):
            scraper._scrape_github(source)
        assert len(scraper.scraped_data["github"]) == 1
        entry = scraper.scraped_data["github"][0]
        assert entry["repo"] == "user/myrepo"
        assert entry["data"] == gh_data

    def test_source_counter_incremented(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        assert scraper._source_counters["github"] == 0
        source = {"type": "github", "repo": "user/repo1", "enable_codebase_analysis": False}
        self._mock_github_scraper(monkeypatch)
        with patch("builtins.open", MagicMock()):
            scraper._scrape_github(source)
        assert scraper._source_counters["github"] == 1

    def test_c3_analysis_not_triggered_when_disabled(self, tmp_path, monkeypatch):
        """When enable_codebase_analysis=False, _clone_github_repo is never called."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "github", "repo": "user/repo", "enable_codebase_analysis": False}
        self._mock_github_scraper(monkeypatch)
        clone_mock = MagicMock(return_value=None)
        monkeypatch.setattr(scraper, "_clone_github_repo", clone_mock)
        with patch("builtins.open", MagicMock()):
            scraper._scrape_github(source)
        clone_mock.assert_not_called()


# ===========================================================================
# 4. _scrape_pdf()
# ===========================================================================
class TestScrapePdf:
    """_scrape_pdf() delegates to PDFToSkillConverter and populates scraped_data."""

    def _mock_pdf_converter(self, monkeypatch, tmp_path, pages=None):
        """Patch PDFToSkillConverter class and provide a fake data_file."""
        if pages is None:
            pages = [{"page": 1, "content": "Hello world"}]
        # Create a fake data file that the converter will "produce"
        data_file = tmp_path / "pdf_data.json"
        data_file.write_text(json.dumps({"pages": pages}))
        mock_cls = MagicMock()
        mock_instance = MagicMock()
        mock_instance.data_file = str(data_file)
        mock_cls.return_value = mock_instance
        monkeypatch.setattr(
            "skill_seekers.cli.pdf_scraper.PDFToSkillConverter",
            mock_cls,
        )
        return mock_cls, mock_instance

    def test_pdf_converter_instantiated_with_path(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        pdf_path = str(tmp_path / "manual.pdf")
        source = {"type": "pdf", "path": pdf_path}
        mock_cls, _ = self._mock_pdf_converter(monkeypatch, tmp_path)
        with patch("skill_seekers.cli.unified_scraper.shutil.copy"):
            scraper._scrape_pdf(source)
        mock_cls.assert_called_once()
        init_config = mock_cls.call_args[0][0]
        assert init_config["pdf_path"] == pdf_path

    def test_extract_pdf_called(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "pdf", "path": str(tmp_path / "doc.pdf")}
        _, mock_inst = self._mock_pdf_converter(monkeypatch, tmp_path)
        with patch("skill_seekers.cli.unified_scraper.shutil.copy"):
            scraper._scrape_pdf(source)
        mock_inst.extract_pdf.assert_called_once()

    def test_scraped_data_appended_with_pages(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        pdf_path = str(tmp_path / "report.pdf")
        source = {"type": "pdf", "path": pdf_path}
        pages = [{"page": 1, "content": "Hello"}, {"page": 2, "content": "World"}]
        self._mock_pdf_converter(monkeypatch, tmp_path, pages=pages)
        with patch("skill_seekers.cli.unified_scraper.shutil.copy"):
            scraper._scrape_pdf(source)
        assert len(scraper.scraped_data["pdf"]) == 1
        entry = scraper.scraped_data["pdf"][0]
        assert entry["pdf_path"] == pdf_path
        assert entry["data"]["pages"] == pages

    def test_source_counter_incremented(self, tmp_path, monkeypatch):
        scraper = _make_scraper(tmp_path=tmp_path)
        assert scraper._source_counters["pdf"] == 0
        source = {"type": "pdf", "path": str(tmp_path / "a.pdf")}
        self._mock_pdf_converter(monkeypatch, tmp_path)
        with patch("skill_seekers.cli.unified_scraper.shutil.copy"):
            scraper._scrape_pdf(source)
        assert scraper._source_counters["pdf"] == 1


# ===========================================================================
# 5. _scrape_local() — known 'args' scoping bug
# ===========================================================================
class TestScrapeLocal:
    """_scrape_local() delegates to analyze_codebase and populates scraped_data."""

    def test_source_counter_incremented(self, tmp_path, monkeypatch):
        """Counter is incremented when _scrape_local() is called."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "local", "path": str(tmp_path)}
        assert scraper._source_counters["local"] == 0
        monkeypatch.setattr(
            "skill_seekers.cli.codebase_scraper.analyze_codebase",
            MagicMock(),
        )
        scraper._scrape_local(source)
        assert scraper._source_counters["local"] == 1

    def test_enhance_level_uses_cli_args_override(self, tmp_path, monkeypatch):
        """CLI --enhance-level overrides per-source enhance_level."""
        import argparse

        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "local", "path": str(tmp_path), "enhance_level": 1}
        scraper._cli_args = argparse.Namespace(enhance_level=3)
        captured_kwargs = {}

        def fake_analyze(**kwargs):
            captured_kwargs.update(kwargs)

        monkeypatch.setattr(
            "skill_seekers.cli.codebase_scraper.analyze_codebase",
            fake_analyze,
        )
        scraper._scrape_local(source)
        assert captured_kwargs.get("enhance_level") == 3

    def test_analyze_codebase_not_called_with_old_kwargs(self, tmp_path, monkeypatch):
        """analyze_codebase() must not receive enhance_with_ai or ai_mode (#323)."""
        scraper = _make_scraper(tmp_path=tmp_path)
        source = {"type": "local", "path": str(tmp_path)}
        captured_kwargs = {}

        def fake_analyze(**kwargs):
            captured_kwargs.update(kwargs)

        monkeypatch.setattr(
            "skill_seekers.cli.codebase_scraper.analyze_codebase",
            fake_analyze,
        )
        scraper._scrape_local(source)
        assert "enhance_with_ai" not in captured_kwargs, (
            "enhance_with_ai is not a valid analyze_codebase() parameter"
        )
        assert "ai_mode" not in captured_kwargs, (
            "ai_mode is not a valid analyze_codebase() parameter"
        )
        assert "enhance_level" in captured_kwargs


# ===========================================================================
# 6. run() orchestration
# ===========================================================================
class TestRunOrchestration:
    """run() executes 4 phases in order and integrates enhancement workflows."""

    def _make_run_scraper(self, extra_config=None):
        """Minimal scraper for run() tests with all heavy methods pre-mocked."""
        scraper = _make_scraper(extra_config=extra_config)
        scraper.scrape_all_sources = MagicMock()
        scraper.detect_conflicts = MagicMock(return_value=[])
        scraper.merge_sources = MagicMock(return_value=None)
        scraper.build_skill = MagicMock()
        return scraper

    def test_four_phases_called(self):
        """scrape_all_sources, detect_conflicts, build_skill are always called."""
        scraper = self._make_run_scraper()
        with patch("skill_seekers.cli.unified_scraper.run_workflows", create=True):
            scraper.run()
        scraper.scrape_all_sources.assert_called_once()
        scraper.detect_conflicts.assert_called_once()
        scraper.build_skill.assert_called_once()

    def test_merge_sources_skipped_when_no_conflicts(self):
        """merge_sources is NOT called when detect_conflicts returns empty list."""
        scraper = self._make_run_scraper()
        scraper.detect_conflicts.return_value = []  # no conflicts
        scraper.run()
        scraper.merge_sources.assert_not_called()

    def test_merge_sources_called_when_conflicts_present(self):
        """merge_sources IS called when conflicts are detected."""
        scraper = self._make_run_scraper()
        conflict = {"type": "api_mismatch", "severity": "high"}
        scraper.detect_conflicts.return_value = [conflict]
        scraper.run()
        scraper.merge_sources.assert_called_once_with([conflict])

    def test_workflow_not_called_without_args_and_no_json_workflows(self):
        """When args=None and config has no workflow fields, run_workflows is never called."""
        scraper = self._make_run_scraper()  # sources=[], no workflow fields
        with patch("skill_seekers.cli.unified_scraper.run_workflows", create=True) as mock_wf:
            scraper.run(args=None)
        mock_wf.assert_not_called()

    def test_workflow_called_when_args_provided(self):
        """When CLI args are passed, run_workflows is invoked."""
        import argparse

        scraper = self._make_run_scraper()
        cli_args = argparse.Namespace(
            enhance_workflow=["security-focus"],
            enhance_stage=None,
            var=None,
            workflow_dry_run=False,
        )
        # run_workflows is imported dynamically inside run() from workflow_runner.
        # Patch at the source module so the local `from ... import` picks it up.
        with patch("skill_seekers.cli.workflow_runner.run_workflows") as mock_wf:
            scraper.run(args=cli_args)
        mock_wf.assert_called_once()

    def test_workflow_called_for_json_config_workflows(self):
        """When config has 'workflows' list, run_workflows is called even with args=None."""
        scraper = self._make_run_scraper(extra_config={"workflows": ["minimal"]})
        captured = {}

        def fake_run_workflows(args, context=None):  # noqa: ARG001
            captured["workflows"] = getattr(args, "enhance_workflow", None)

        import contextlib

        import skill_seekers.cli.unified_scraper as us_mod
        import skill_seekers.cli.workflow_runner as wr_mod

        # Swap run_workflows in both modules: run() may find it either as an
        # attribute on unified_scraper or via a fresh import from workflow_runner.
        # Restore (or remove) the originals afterwards so other tests are unaffected.
        orig_us = getattr(us_mod, "run_workflows", None)
        orig_wr = getattr(wr_mod, "run_workflows", None)
        us_mod.run_workflows = fake_run_workflows
        wr_mod.run_workflows = fake_run_workflows
        try:
            scraper.run(args=None)
        finally:
            if orig_us is None:
                with contextlib.suppress(AttributeError):
                    delattr(us_mod, "run_workflows")
            else:
                us_mod.run_workflows = orig_us
            if orig_wr is None:
                with contextlib.suppress(AttributeError):
                    delattr(wr_mod, "run_workflows")
            else:
                wr_mod.run_workflows = orig_wr
        assert "minimal" in (captured.get("workflows") or [])