skill-seekers-reference/tests/test_pdf_scraper.py
yusyus 6d37e43b83 feat: Grand Unification — one command, one interface, direct converters (#346)
* fix: resolve 8 pipeline bugs found during skill quality review

- Fix 0 APIs extracted from documentation by enriching summary.json
  with individual page file content before conflict detection
- Fix all "Unknown" entries in merged_api.md by injecting dict keys
  as API names and falling back to AI merger field names
- Fix frontmatter using raw slugs instead of config name by
  normalizing frontmatter after SKILL.md generation
- Fix leaked absolute filesystem paths in patterns/index.md by
  stripping .skillseeker-cache repo clone prefixes
- Fix ARCHITECTURE.md file count always showing "1 files" by
  counting files per language from code_analysis data
- Fix YAML parse errors on GitHub Actions workflows by converting
  boolean keys (on: true) to strings
- Fix false React/Vue.js framework detection in C# projects by
  filtering web frameworks based on primary language
- Improve how-to guide generation by broadening workflow example
  filter to include setup/config examples with sufficient complexity
- Fix test_git_sources_e2e failures caused by git init default
  branch being 'main' instead of 'master'
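
The GitHub Actions fix above traces back to a YAML 1.1 quirk: `yaml.safe_load` resolves a bare `on:` key to the boolean `True`, which then round-trips as `true:` and breaks workflow parsing. A minimal sketch of the issue and the key-normalization idea (the helper name is illustrative, not this repo's actual function):

```python
import yaml

doc = yaml.safe_load("on:\n  push:\n    branches: [main]\n")
print(doc)  # {True: {'push': {'branches': ['main']}}} -- YAML 1.1 treats bare `on` as a boolean


def stringify_boolean_keys(node):
    """Illustrative helper: map the boolean key YAML 1.1 produced back to 'on'."""
    if isinstance(node, dict):
        return {
            ("on" if key is True else key): stringify_boolean_keys(value)
            for key, value in node.items()
        }
    if isinstance(node, list):
        return [stringify_boolean_keys(item) for item in node]
    return node


print(stringify_boolean_keys(doc))  # {'on': {'push': {'branches': ['main']}}}
```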

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 6 review issues in ExecutionContext implementation

Fixes from code review:

1. Mode resolution (#3 critical): _args_to_data no longer unconditionally
   overwrites mode. Only writes mode="api" when --api-key explicitly passed.
   Env-var-based mode detection moved to _default_data() as lowest priority.

2. Re-initialization warning (#4): initialize() now logs debug message
   when called a second time instead of silently returning stale instance.

3. _raw_args preserved in override (#5): temp context now copies _raw_args
   from parent so get_raw() works correctly inside override blocks.

4. test_local_mode_detection env cleanup (#7): test now saves/restores
   API key env vars to prevent failures when ANTHROPIC_API_KEY is set.

5. _load_config_file error handling (#8): wraps FileNotFoundError and
   JSONDecodeError with user-friendly ValueError messages.

6. Lint fixes: added logging import, fixed Generator import from
   collections.abc, fixed AgentClient return type annotation.

Remaining P2/P3 items (documented, not blocking):
- Lock TOCTOU in override() — safe on CPython, needs fix for no-GIL
- get() reads _instance without lock — same CPython caveat
- config_path not stored on instance
- AnalysisSettings.depth not Literal constrained

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address all remaining P2/P3 review issues in ExecutionContext

1. Thread safety: get() now acquires _lock before reading _instance (#2)
2. Thread safety: override() saves/restores _initialized flag to prevent
   re-init during override blocks (#10)
3. Config path stored: _config_path PrivateAttr + config_path property (#6)
4. Literal validation: AnalysisSettings.depth now uses
   Literal["surface", "deep", "full"] — rejects invalid values (#9)
5. Test updated: test_analysis_depth_choices now expects ValidationError
   for invalid depth, added test_analysis_depth_valid_choices
6. Lint cleanup: removed unused imports, fixed whitespace in tests

All 10 previously reported issues now resolved.
26 tests pass, lint clean.
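
Item 4 above constrains `AnalysisSettings.depth` with a `Literal`; a minimal sketch of that validation (the default value shown is an assumption):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class AnalysisSettings(BaseModel):
    depth: Literal["surface", "deep", "full"] = "surface"


AnalysisSettings(depth="full")  # accepted
try:
    AnalysisSettings(depth="shallow")  # rejected
except ValidationError as exc:
    print(exc)
```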

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: restore 5 truncated scrapers, migrate unified_scraper, fix context init

5 scrapers had main() truncated with "# Original main continues here..."
after Kimi's migration — business logic was never connected:
- html_scraper.py — restored HtmlToSkillConverter extraction + build
- pptx_scraper.py — restored PptxToSkillConverter extraction + build
- confluence_scraper.py — restored ConfluenceToSkillConverter with 3 modes
- notion_scraper.py — restored NotionToSkillConverter with 4 sources
- chat_scraper.py — restored ChatToSkillConverter extraction + build

unified_scraper.py — migrated main() to context-first pattern with argv fallback

Fixed context initialization chain:
- main.py no longer initializes ExecutionContext (was stealing init from commands)
- create_command.py now passes config_path from source_info.parsed
- execution_context.py handles SourceInfo.raw_input (not raw_source)

All 18 scrapers now genuinely migrated. 26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 7 data flow conflicts between ExecutionContext and legacy paths

Critical fixes (CLI args silently lost):
- unified_scraper Phase 6: reads ctx.enhancement.level instead of raw JSON
  when args=None (#3, #4)
- unified_scraper Phase 6 agent: reads ctx.enhancement.agent instead of
  3 independent env var lookups (#5)
- doc_scraper._run_enhancement: uses agent_client.api_key instead of raw
  os.environ.get() — respects config file api_key (#1)

Important fixes:
- main._handle_analyze_command: populates _fake_args from ExecutionContext
  so --agent and --api-key aren't lost in analyze→enhance path (#6)
- doc_scraper type annotations: replaced forward refs with Any to avoid
  F821 undefined name errors

All changes include RuntimeError fallback for backward compatibility when
ExecutionContext isn't initialized.
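
The fallback mentioned above is the pre-Phase-1 pattern: read from ExecutionContext when it has been initialized, otherwise fall back to the old environment lookup. A hedged sketch (import path and env var name are assumptions; `enhancement.level` follows the commit text):

```python
import os

from skill_seekers.cli.execution_context import ExecutionContext  # import path assumed

try:
    enhance_level = ExecutionContext.get().enhancement.level
except RuntimeError:
    # Legacy path: context was never initialized, keep the old env-var behaviour
    enhance_level = os.environ.get("SKILL_SEEKERS_ENHANCE_LEVEL", "standard")
```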

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: 3 crashes + 1 stub in migrated scrapers found by deep scan

1. github_scraper.py: args.scrape_only and args.enhance_level crash when
   args=None (context path). Guarded with if args and getattr(). Also
   fixed agent fallback to read ctx.enhancement.agent.

2. codebase_scraper.py: args.output and args.skip_api_reference crash in
   summary block when args=None. Replaced with output_dir local var and
   ctx.analysis.skip_api_reference.

3. epub_scraper.py: main() was still a stub ending with "# Rest of main()
   continues..." — restored full extraction + build + enhancement logic
   using ctx values exclusively.
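
Items 1 and 2 above guard against `args=None` on the context path; a sketch of the guard pattern (attribute names mirror the commit text, the function itself is illustrative):

```python
def resolve_flags(args, ctx):
    """Resolve CLI-style flags whether the scraper got argparse args or only a context."""
    scrape_only = bool(args and getattr(args, "scrape_only", False))
    agent = getattr(args, "agent", None) if args is not None else ctx.enhancement.agent
    return scrape_only, agent
```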

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: complete ExecutionContext migration for remaining scrapers

Kimi's Phase 4 scraper migrations + Claude's review fixes.
All 18 scrapers now use context-first pattern with argv fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 1 — ExecutionContext.get() always returns context (no RuntimeError)

get() now returns a default context instead of raising RuntimeError when
not explicitly initialized. This eliminates the need for try/except
RuntimeError blocks in all 18 scrapers.

Components can always call ExecutionContext.get() safely — it returns
defaults if not initialized, or the explicitly initialized instance.
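
A minimal sketch of the Phase 1 behaviour (the real ExecutionContext is a Pydantic model with many more fields; this only shows `get()` returning a default instance instead of raising, plus the lock added later in this PR):

```python
import threading


class ExecutionContext:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get(cls):
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()  # default context, no RuntimeError
            return cls._instance

    @classmethod
    def reset(cls):
        with cls._lock:
            cls._instance = None
```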

Updated tests: test_get_returns_defaults_when_not_initialized,
test_reset_clears_instance (no longer expects RuntimeError).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 2a-c — remove 16 individual scraper CLI commands

Removed individual scraper commands from:
- COMMAND_MODULES in main.py (16 entries: scrape, github, pdf, word,
  epub, video, jupyter, html, openapi, asciidoc, pptx, rss, manpage,
  confluence, notion, chat)
- pyproject.toml entry points (16 skill-seekers-<type> binaries)
- parsers/__init__.py (16 parser registrations)

All source types now accessed via: skill-seekers create <source>
Kept: create, unified, analyze, enhance, package, upload, install,
      install-agent, config, doctor, and utility commands.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create SkillConverter base class + converter registry

New base interface that all 17 converters will inherit:
- SkillConverter.run() — extract + build (same call for all types)
- SkillConverter.extract() — override in subclass
- SkillConverter.build_skill() — override in subclass
- get_converter(source_type, config) — factory from registry
- CONVERTER_REGISTRY — maps source type → (module, class)

create_command will use get_converter() instead of _call_module().
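
A hedged sketch of the interface and registry described above (the registry entry shown is illustrative; `run()` returning 1 when `build_skill()` fails reflects a later commit in this PR):

```python
import importlib


class SkillConverter:
    SOURCE_TYPE = "base"

    def __init__(self, config: dict):
        self.config = config

    def extract(self):  # override in subclass
        raise NotImplementedError

    def build_skill(self) -> bool:  # override in subclass
        raise NotImplementedError

    def run(self) -> int:
        self.extract()
        return 0 if self.build_skill() else 1


CONVERTER_REGISTRY = {
    "pdf": ("skill_seekers.cli.pdf_scraper", "PDFToSkillConverter"),
    # ...one entry per source type
}


def get_converter(source_type: str, config: dict) -> SkillConverter:
    module_name, class_name = CONVERTER_REGISTRY[source_type]
    module = importlib.import_module(module_name)
    return getattr(module, class_name)(config)
```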

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Grand Unification — one command, one interface, direct converters

Complete the Grand Unification refactor: `skill-seekers create` is now
the single entry point for all 18 source types. Individual scraper CLI
commands (scrape, github, pdf, analyze, unified, etc.) are removed.

## Architecture changes

- **18 SkillConverter subclasses**: Every scraper now inherits SkillConverter
  with extract() + build_skill() + SOURCE_TYPE. Factory via get_converter().
- **create_command.py rewritten**: _build_config() constructs config dicts
  from ExecutionContext for each source type. Direct converter.run() calls
  replace the old _build_argv() + sys.argv swap + _call_module() machinery.
- **main.py simplified**: create command bypasses _reconstruct_argv entirely,
  calls CreateCommand(args).execute() directly. analyze/unified commands
  removed (create handles both via auto-detection).
- **CreateParser mode="all"**: Top-level parser now accepts all 120+ flags
  (--browser, --max-pages, --depth, etc.) since create is the only entry.
- **Centralized enhancement**: Runs once in create_command after converter,
  not duplicated in each scraper.
- **MCP tools use converters**: 5 scraping tools call get_converter()
  directly instead of subprocess. Config type auto-detected from keys.
- **ConfigValidator → UniSkillConfigValidator**: Renamed with backward-
  compat alias.
- **Data flow**: AgentClient + LocalSkillEnhancer read ExecutionContext
  first, env vars as fallback.
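
Roughly, the rewritten create path looks like this (a sketch only; `build_config_for`, the context fields, and `run_enhancement` are placeholder names for the behaviour described above):

```python
ctx = ExecutionContext.get()
config = build_config_for("pdf", ctx)      # _build_config() equivalent, per source type
converter = get_converter("pdf", config)   # direct converter, no argv reconstruction
exit_code = converter.run()                # extract + build_skill
if exit_code == 0:
    run_enhancement(ctx)                   # centralized enhancement, runs once
```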

## What was removed

- main() from all 18 scraper files (~3400 lines)
- 18 CLI commands from COMMAND_MODULES + pyproject.toml entry points
- analyze + unified parsers from parser registry
- _build_argv, _call_module, _SKIP_ARGS, _DEST_TO_FLAG, all _route_*()
- setup_argument_parser, get_configuration, _check_deprecated_flags
- Tests referencing removed commands/functions

## Net impact

51 files changed, ~6000 lines removed. 2996 tests pass, 0 failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: review fixes for Grand Unification PR

- Add autouse conftest fixture to reset ExecutionContext singleton between tests
- Replace hardcoded defaults in _is_explicitly_set() with parser-derived defaults
- Upgrade ExecutionContext double-init log from debug to info
- Use logger.exception() in SkillConverter.run() to preserve tracebacks
- Fix docstring "17 types" → "18 types" in skill_converter.py
- DRY up 10 copy-paste help handlers into dict + loop (~100 lines removed)
- Fix 2 CI workflows still referencing removed `skill-seekers scrape` command
- Remove broken pyproject.toml entry point for codebase_scraper:main
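
The autouse fixture from the first bullet is the standard pytest reset pattern (sketch; `ExecutionContext.reset()` follows the tests mentioned earlier in this PR, the import path is assumed):

```python
# conftest.py
import pytest

from skill_seekers.cli.execution_context import ExecutionContext  # import path assumed


@pytest.fixture(autouse=True)
def _reset_execution_context():
    yield
    ExecutionContext.reset()  # make sure one test's singleton never leaks into the next
```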

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 12 logic/flow issues found in deep review

Critical fixes:
- UnifiedScraper.run(): replace sys.exit(1) with return 1, add return 0
- doc_scraper: use ExecutionContext.get() when already initialized instead
  of re-calling initialize() which silently discards new config
- unified_scraper: define enhancement_config before try/except to prevent
  UnboundLocalError in LOCAL enhancement timeout read

Important fixes:
- override(): cleaner tuple save/restore for singleton swap
- --agent without --api-key now sets mode="local" so env API key doesn't
  override explicit agent choice
- Remove DeprecationWarning from _reconstruct_argv (fires on every
  non-create command in production)
- Rewrite scrape_generic_tool to use get_converter() instead of subprocess
  calls to removed main() functions
- SkillConverter.run() checks build_skill() return value, returns 1 if False
- estimate_pages_tool uses -m module invocation instead of .py file path

Low-priority fixes:
- get_converter() raises descriptive ValueError on class name typo
- test_default_values: save/clear API key env vars before asserting mode
- test_get_converter_pdf: fix config key "path" → "pdf_path"

3056 passed, 4 failed (pre-existing dep version issues), 32 skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update MCP server tests to mock converter instead of subprocess

scrape_docs_tool now uses get_converter() + _run_converter() in-process
instead of run_subprocess_with_streaming. Update 4 TestScrapeDocsTool
tests to mock the converter layer instead of the removed subprocess path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 23:00:52 +03:00


#!/usr/bin/env python3
"""
Tests for PDF Scraper (cli/pdf_scraper.py)
Tests cover:
- Config-based PDF extraction
- Direct PDF path conversion
- JSON-based workflow
- Skill structure generation
- Categorization
- Error handling
"""
import json
import shutil
import tempfile
import unittest
from pathlib import Path

try:
    import fitz  # noqa: F401 PyMuPDF
    PYMUPDF_AVAILABLE = True
except ImportError:
    PYMUPDF_AVAILABLE = False


class TestPDFToSkillConverter(unittest.TestCase):
    """Test PDFToSkillConverter initialization and basic functionality"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        # Create temporary directory for test output
        self.temp_dir = tempfile.mkdtemp()
        self.output_dir = Path(self.temp_dir)

    def tearDown(self):
        # Clean up temporary directory
        if hasattr(self, "temp_dir"):
            shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_init_with_name_and_pdf_path(self):
        """Test initialization with name and PDF path"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        self.assertEqual(converter.name, "test_skill")
        self.assertEqual(converter.pdf_path, "test.pdf")

    def test_init_with_config(self):
        """Test initialization with config file"""
        # Create test config
        config = {
            "name": "config_skill",
            "description": "Test skill",
            "pdf_path": "docs/test.pdf",
            "extract_options": {"chunk_size": 10, "min_quality": 5.0},
        }
        converter = self.PDFToSkillConverter(config)
        self.assertEqual(converter.name, "config_skill")
        self.assertEqual(converter.config.get("description"), "Test skill")

    def test_init_requires_name_or_config(self):
        """Test that initialization requires config dict with 'name' field"""
        with self.assertRaises((ValueError, TypeError, KeyError)):
            self.PDFToSkillConverter({})


class TestCategorization(unittest.TestCase):
    """Test content categorization functionality"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_categorize_by_keywords(self):
        """Test categorization using keyword matching"""
        config = {
            "name": "test",
            "pdf_path": "test.pdf",
            "categories": {
                "getting_started": ["introduction", "getting started"],
                "api": ["api", "reference", "function"],
            },
        }
        converter = self.PDFToSkillConverter(config)
        # Mock extracted data with different content
        converter.extracted_data = {
            "pages": [
                {
                    "page_number": 1,
                    "text": "Introduction to the API",
                    "chapter": "Chapter 1: Getting Started",
                },
                {"page_number": 2, "text": "API reference for functions", "chapter": None},
            ]
        }
        categories = converter.categorize_content()
        # With single PDF source, should use single-file strategy
        # Category named after PDF basename (test.pdf -> test)
        self.assertIn("test", categories)
        self.assertEqual(len(categories), 1)
        self.assertEqual(len(categories["test"]["pages"]), 2)

    def test_categorize_by_chapters(self):
        """Test categorization using chapter information"""
        config = {"name": "test", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Mock data with chapters
        converter.extracted_data = {
            "pages": [
                {"page_number": 1, "text": "Content here", "chapter": "Chapter 1: Introduction"},
                {"page_number": 2, "text": "More content", "chapter": "Chapter 1: Introduction"},
                {"page_number": 3, "text": "New chapter", "chapter": "Chapter 2: Advanced Topics"},
            ]
        }
        categories = converter.categorize_content()
        # Should create categories based on chapters
        self.assertIsInstance(categories, dict)
        self.assertGreater(len(categories), 0)

    def test_categorize_handles_no_chapters(self):
        """Test categorization when no chapters are detected"""
        config = {"name": "test", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Mock data without chapters
        converter.extracted_data = {
            "pages": [{"page_number": 1, "text": "Some content", "chapter": None}]
        }
        categories = converter.categorize_content()
        # Should still create categories (fallback to "other")
        self.assertIsInstance(categories, dict)


class TestSkillBuilding(unittest.TestCase):
    """Test skill structure generation"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_build_skill_creates_structure(self):
        """Test that build_skill creates required directory structure"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        # Mock extracted data
        converter.extracted_data = {
            "pages": [{"page_number": 1, "text": "Test content", "code_blocks": [], "images": []}],
            "total_pages": 1,
        }
        # Mock categorization
        converter.categories = {"getting_started": [converter.extracted_data["pages"][0]]}
        converter.build_skill()
        # Check directory structure
        skill_dir = Path(self.temp_dir) / "test_skill"
        self.assertTrue(skill_dir.exists())
        self.assertTrue((skill_dir / "references").exists())
        self.assertTrue((skill_dir / "scripts").exists())
        self.assertTrue((skill_dir / "assets").exists())

    def test_build_skill_creates_skill_md(self):
        """Test that SKILL.md is created"""
        config = {"name": "test_skill", "pdf_path": "test.pdf", "description": "Test description"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        converter.extracted_data = {
            "pages": [{"page_number": 1, "text": "Test", "code_blocks": [], "images": []}],
            "total_pages": 1,
        }
        converter.categories = {"test": [converter.extracted_data["pages"][0]]}
        converter.build_skill()
        skill_md = Path(self.temp_dir) / "test_skill" / "SKILL.md"
        self.assertTrue(skill_md.exists())
        # Check content
        content = skill_md.read_text()
        self.assertIn("test_skill", content)
        self.assertIn("Test description", content)

    def test_build_skill_creates_reference_files(self):
        """Test that reference files are created for categories"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        converter.extracted_data = {
            "pages": [
                {"page_number": 1, "text": "Getting started", "code_blocks": [], "images": []},
                {"page_number": 2, "text": "API reference", "code_blocks": [], "images": []},
            ],
            "total_pages": 2,
        }
        converter.build_skill()
        # Check reference files exist
        # With single PDF source, uses single-file strategy (named after PDF basename)
        refs_dir = Path(self.temp_dir) / "test_skill" / "references"
        self.assertTrue((refs_dir / "test.md").exists())
        self.assertTrue((refs_dir / "index.md").exists())


class TestCodeBlockHandling(unittest.TestCase):
    """Test code block extraction and inclusion in references"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_code_blocks_included_in_references(self):
        """Test that code blocks are included in reference files"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        # Mock data with code blocks
        converter.extracted_data = {
            "pages": [
                {
                    "page_number": 1,
                    "text": "Example code",
                    "code_blocks": [
                        {
                            "code": "def hello():\n print('world')",
                            "language": "python",
                            "quality": 8.0,
                        }
                    ],
                    "images": [],
                }
            ],
            "total_pages": 1,
        }
        converter.build_skill()
        # Check code block in reference file
        # With single PDF source, uses single-file strategy (named after PDF basename)
        ref_file = Path(self.temp_dir) / "test_skill" / "references" / "test.md"
        content = ref_file.read_text()
        self.assertIn("```python", content)
        self.assertIn("def hello()", content)
        self.assertIn("print('world')", content)

    def test_high_quality_code_preferred(self):
        """Test that high-quality code blocks are prioritized"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        # Mock data with varying quality
        converter.extracted_data = {
            "pages": [
                {
                    "page_number": 1,
                    "text": "Code examples",
                    "code_blocks": [
                        {"code": "x = 1", "language": "python", "quality": 2.0},
                        {
                            "code": "def process():\n return result",
                            "language": "python",
                            "quality": 9.0,
                        },
                    ],
                    "images": [],
                }
            ],
            "total_pages": 1,
        }
        converter.build_skill()
        # With single PDF source, uses single-file strategy (named after PDF basename)
        ref_file = Path(self.temp_dir) / "test_skill" / "references" / "test.md"
        content = ref_file.read_text()
        # High quality code should be included
        self.assertIn("def process()", content)


class TestImageHandling(unittest.TestCase):
    """Test image extraction and handling"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_images_saved_to_assets(self):
        """Test that images are saved to assets directory"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        # Mock image data (1x1 white PNG)
        mock_image_bytes = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08\x06\x00\x00\x00\x1f\x15\xc4\x89\x00\x00\x00\nIDATx\x9cc\x00\x01\x00\x00\x05\x00\x01\r\n-\xb4\x00\x00\x00\x00IEND\xaeB`\x82"
        converter.extracted_data = {
            "pages": [
                {
                    "page_number": 1,
                    "text": "See diagram",
                    "code_blocks": [],
                    "images": [
                        {
                            "page": 1,
                            "index": 0,
                            "width": 100,
                            "height": 100,
                            "data": mock_image_bytes,
                        }
                    ],
                }
            ],
            "total_pages": 1,
        }
        converter.categories = {"diagrams": [converter.extracted_data["pages"][0]]}
        converter.build_skill()
        # Check assets directory has image
        assets_dir = Path(self.temp_dir) / "test_skill" / "assets"
        image_files = list(assets_dir.glob("*.png"))
        self.assertGreater(len(image_files), 0)

    def test_image_references_in_markdown(self):
        """Test that images are referenced in markdown files"""
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        # Override skill_dir to use temp directory
        converter.skill_dir = str(Path(self.temp_dir) / "test_skill")
        mock_image_bytes = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08\x06\x00\x00\x00\x1f\x15\xc4\x89\x00\x00\x00\nIDATx\x9cc\x00\x01\x00\x00\x05\x00\x01\r\n-\xb4\x00\x00\x00\x00IEND\xaeB`\x82"
        converter.extracted_data = {
            "pages": [
                {
                    "page_number": 1,
                    "text": "Architecture diagram",
                    "code_blocks": [],
                    "images": [
                        {
                            "page": 1,
                            "index": 0,
                            "width": 200,
                            "height": 150,
                            "data": mock_image_bytes,
                        }
                    ],
                }
            ],
            "total_pages": 1,
        }
        converter.build_skill()
        # Check markdown has image reference
        # With single PDF source, uses single-file strategy (named after PDF basename)
        ref_file = Path(self.temp_dir) / "test_skill" / "references" / "test.md"
        content = ref_file.read_text()
        self.assertIn("![", content)  # Markdown image syntax
        self.assertIn("../assets/", content)  # Relative path to assets


class TestErrorHandling(unittest.TestCase):
    """Test error handling for invalid inputs"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_missing_pdf_file(self):
        """Test error when PDF file doesn't exist"""
        config = {"name": "test", "pdf_path": "nonexistent.pdf"}
        converter = self.PDFToSkillConverter(config)
        with self.assertRaises((FileNotFoundError, RuntimeError)):
            converter.extract_pdf()

    def test_invalid_config_file(self):
        """Test error when config dict is invalid"""
        invalid_config = "invalid string not a dict"
        with self.assertRaises((ValueError, TypeError, AttributeError)):
            self.PDFToSkillConverter(invalid_config)

    def test_missing_required_config_fields(self):
        """Test error when config is missing required fields"""
        config = {"description": "Missing name and pdf_path"}
        with self.assertRaises((ValueError, KeyError)):
            converter = self.PDFToSkillConverter(config)
            converter.extract_pdf()


class TestJSONWorkflow(unittest.TestCase):
    """Test building skills from extracted JSON"""

    def setUp(self):
        if not PYMUPDF_AVAILABLE:
            self.skipTest("PyMuPDF not installed")
        from skill_seekers.cli.pdf_scraper import PDFToSkillConverter
        self.PDFToSkillConverter = PDFToSkillConverter
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir, ignore_errors=True)

    def test_load_from_json(self):
        """Test loading extracted data from JSON file"""
        # Create mock extracted JSON
        extracted_data = {
            "pages": [{"page_number": 1, "text": "Test content", "code_blocks": [], "images": []}],
            "total_pages": 1,
            "metadata": {"title": "Test PDF"},
        }
        json_path = Path(self.temp_dir) / "extracted.json"
        json_path.write_text(json.dumps(extracted_data, indent=2))
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        converter.load_extracted_data(str(json_path))
        self.assertEqual(converter.extracted_data["total_pages"], 1)
        self.assertEqual(len(converter.extracted_data["pages"]), 1)

    def test_build_from_json_without_extraction(self):
        """Test that from_json workflow skips PDF extraction"""
        extracted_data = {
            "pages": [{"page_number": 1, "text": "Content", "code_blocks": [], "images": []}],
            "total_pages": 1,
        }
        json_path = Path(self.temp_dir) / "extracted.json"
        json_path.write_text(json.dumps(extracted_data))
        config = {"name": "test_skill", "pdf_path": "test.pdf"}
        converter = self.PDFToSkillConverter(config)
        converter.load_extracted_data(str(json_path))
        # Should have data loaded without calling extract_pdf()
        self.assertIsNotNone(converter.extracted_data)
        self.assertEqual(converter.extracted_data["total_pages"], 1)


if __name__ == "__main__":
    unittest.main()