skill-seekers-reference/tests/test_execution_context.py
yusyus 6d37e43b83 feat: Grand Unification — one command, one interface, direct converters (#346)
* fix: resolve 8 pipeline bugs found during skill quality review

- Fix 0 APIs being extracted from documentation by enriching summary.json
  with individual page file content before conflict detection
- Fix all "Unknown" entries in merged_api.md by injecting dict keys
  as API names and falling back to AI merger field names
- Fix frontmatter using raw slugs instead of config name by
  normalizing frontmatter after SKILL.md generation
- Fix leaked absolute filesystem paths in patterns/index.md by
  stripping .skillseeker-cache repo clone prefixes
- Fix ARCHITECTURE.md file count always showing "1 files" by
  counting files per language from code_analysis data
- Fix YAML parse errors on GitHub Actions workflows by converting
  boolean keys (on: true) to strings
- Fix false React/Vue.js framework detection in C# projects by
  filtering web frameworks based on primary language
- Improve how-to guide generation by broadening workflow example
  filter to include setup/config examples with sufficient complexity
- Fix test_git_sources_e2e failures caused by git init default
  branch being 'main' instead of 'master'
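
The YAML boolean-key fix above can be sketched as follows. This is an illustrative sketch, not the project's actual implementation: YAML 1.1 loaders (e.g. PyYAML) parse the bare workflow key `on:` as the boolean `True`, which later serializes as `true:` and breaks the workflow, so one fix is to rewrite boolean keys back to strings after parsing.

```python
def normalize_bool_keys(node):
    """Recursively rewrite boolean mapping keys to their YAML spellings."""
    if isinstance(node, dict):
        return {
            ("on" if key is True else "off" if key is False else key):
                normalize_bool_keys(value)
            for key, value in node.items()
        }
    if isinstance(node, list):
        return [normalize_bool_keys(item) for item in node]
    return node


# What yaml.safe_load() yields for a workflow that starts with `on: push`:
parsed = {True: {"push": None}, "jobs": {"build": {"runs-on": "ubuntu-latest"}}}
fixed = normalize_bool_keys(parsed)
```

After normalization the mapping can be dumped with string keys intact.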

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 6 review issues in ExecutionContext implementation

Fixes from code review:

1. Mode resolution (#3 critical): _args_to_data no longer unconditionally
   overwrites mode. Only writes mode="api" when --api-key explicitly passed.
   Env-var-based mode detection moved to _default_data() as lowest priority.

2. Re-initialization warning (#4): initialize() now logs debug message
   when called a second time instead of silently returning stale instance.

3. _raw_args preserved in override (#5): temp context now copies _raw_args
   from parent so get_raw() works correctly inside override blocks.

4. test_local_mode_detection env cleanup (#7): test now saves/restores
   API key env vars to prevent failures when ANTHROPIC_API_KEY is set.

5. _load_config_file error handling (#8): wraps FileNotFoundError and
   JSONDecodeError with user-friendly ValueError messages.

6. Lint fixes: added logging import, fixed Generator import from
   collections.abc, fixed AgentClient return type annotation.
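
A minimal sketch of the mode-resolution order described in item 1. The names `_args_to_data` and `_default_data` come from this message, but their bodies here are assumptions:

```python
import os


def _args_to_data(args):
    """CLI layer: only write mode when --api-key was explicitly passed,
    so it cannot clobber a mode chosen by config file or environment."""
    data = {}
    api_key = getattr(args, "api_key", None)
    if api_key:
        data["mode"] = "api"
        data["api_key"] = api_key
    return data


def _default_data():
    """Lowest-priority layer: infer mode from the environment."""
    if os.environ.get("ANTHROPIC_API_KEY"):
        return {"mode": "api"}
    return {"mode": "auto"}


def resolve_mode(args):
    """Apply defaults first, CLI last, so explicit flags always win."""
    data = _default_data()
    data.update(_args_to_data(args))
    return data["mode"]
```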

Remaining P2/P3 items (documented, not blocking):
- Lock TOCTOU in override() — safe on CPython, needs fix for no-GIL
- get() reads _instance without lock — same CPython caveat
- config_path not stored on instance
- AnalysisSettings.depth not Literal constrained

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address all remaining P2/P3 review issues in ExecutionContext

1. Thread safety: get() now acquires _lock before reading _instance (#2)
2. Thread safety: override() saves/restores _initialized flag to prevent
   re-init during override blocks (#10)
3. Config path stored: _config_path PrivateAttr + config_path property (#6)
4. Literal validation: AnalysisSettings.depth now uses
   Literal["surface", "deep", "full"] — rejects invalid values (#9)
5. Test updated: test_analysis_depth_choices now expects ValidationError
   for invalid depth, added test_analysis_depth_valid_choices
6. Lint cleanup: removed unused imports, fixed whitespace in tests
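
The `Literal` constraint in item 4 can be approximated stdlib-only. The real code uses a Pydantic field; this sketch only mirrors the accept/reject behavior:

```python
from typing import Literal, get_args

Depth = Literal["surface", "deep", "full"]


def validate_depth(value: str) -> str:
    """Reject any depth outside the allowed literal values."""
    allowed = get_args(Depth)
    if value not in allowed:
        raise ValueError(f"depth must be one of {allowed}, got {value!r}")
    return value
```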

All 10 previously reported issues now resolved.
26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: restore 5 truncated scrapers, migrate unified_scraper, fix context init

5 scrapers had main() truncated with "# Original main continues here..."
after Kimi's migration — business logic was never connected:
- html_scraper.py — restored HtmlToSkillConverter extraction + build
- pptx_scraper.py — restored PptxToSkillConverter extraction + build
- confluence_scraper.py — restored ConfluenceToSkillConverter with 3 modes
- notion_scraper.py — restored NotionToSkillConverter with 4 sources
- chat_scraper.py — restored ChatToSkillConverter extraction + build

unified_scraper.py — migrated main() to context-first pattern with argv fallback

Fixed context initialization chain:
- main.py no longer initializes ExecutionContext (was stealing init from commands)
- create_command.py now passes config_path from source_info.parsed
- execution_context.py handles SourceInfo.raw_input (not raw_source)

All 18 scrapers now genuinely migrated. 26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 7 data flow conflicts between ExecutionContext and legacy paths

Critical fixes (CLI args silently lost):
- unified_scraper Phase 6: reads ctx.enhancement.level instead of raw JSON
  when args=None (#3, #4)
- unified_scraper Phase 6 agent: reads ctx.enhancement.agent instead of
  3 independent env var lookups (#5)
- doc_scraper._run_enhancement: uses agent_client.api_key instead of raw
  os.environ.get() — respects config file api_key (#1)

Important fixes:
- main._handle_analyze_command: populates _fake_args from ExecutionContext
  so --agent and --api-key aren't lost in analyze→enhance path (#6)
- doc_scraper type annotations: replaced forward refs with Any to avoid
  F821 undefined name errors

All changes include RuntimeError fallback for backward compatibility when
ExecutionContext isn't initialized.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: 3 crashes + 1 stub in migrated scrapers found by deep scan

1. github_scraper.py: args.scrape_only and args.enhance_level crash when
   args=None (context path). Guarded with if args and getattr(). Also
   fixed agent fallback to read ctx.enhancement.agent.

2. codebase_scraper.py: args.output and args.skip_api_reference crash in
   summary block when args=None. Replaced with output_dir local var and
   ctx.analysis.skip_api_reference.

3. epub_scraper.py: main() was still a stub ending with "# Rest of main()
   continues..." — restored full extraction + build + enhancement logic
   using ctx values exclusively.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: complete ExecutionContext migration for remaining scrapers

Kimi's Phase 4 scraper migrations + Claude's review fixes.
All 18 scrapers now use context-first pattern with argv fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 1 — ExecutionContext.get() always returns context (no RuntimeError)

get() now returns a default context instead of raising RuntimeError when
not explicitly initialized. This eliminates the need for try/except
RuntimeError blocks in all 18 scrapers.

Components can always call ExecutionContext.get() safely — it returns
defaults if not initialized, or the explicitly initialized instance.

Updated tests: test_get_returns_defaults_when_not_initialized,
test_reset_clears_instance (no longer expects RuntimeError).
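
A minimal sketch of the Phase 1 behavior, assuming a lock-guarded class singleton (the class and field here are illustrative stand-ins, not the real `ExecutionContext`):

```python
import threading


class ContextSketch:
    """Illustrative stand-in for ExecutionContext."""

    _instance = None
    _lock = threading.Lock()

    def __init__(self, name=None):
        self.name = name  # stands in for the full settings tree

    @classmethod
    def initialize(cls, name=None):
        with cls._lock:
            cls._instance = cls(name=name)
            return cls._instance

    @classmethod
    def get(cls):
        """Never raises: returns defaults when initialize() was not called."""
        with cls._lock:
            if cls._instance is None:
                return cls()  # fresh defaults, not cached as the singleton
            return cls._instance

    @classmethod
    def reset(cls):
        with cls._lock:
            cls._instance = None
```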

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 2a-c — remove 16 individual scraper CLI commands

Removed individual scraper commands from:
- COMMAND_MODULES in main.py (16 entries: scrape, github, pdf, word,
  epub, video, jupyter, html, openapi, asciidoc, pptx, rss, manpage,
  confluence, notion, chat)
- pyproject.toml entry points (16 skill-seekers-<type> binaries)
- parsers/__init__.py (16 parser registrations)

All source types now accessed via: skill-seekers create <source>
Kept: create, unified, analyze, enhance, package, upload, install,
      install-agent, config, doctor, and utility commands.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create SkillConverter base class + converter registry

New base interface that all 17 converters will inherit:
- SkillConverter.run() — extract + build (same call for all types)
- SkillConverter.extract() — override in subclass
- SkillConverter.build_skill() — override in subclass
- get_converter(source_type, config) — factory from registry
- CONVERTER_REGISTRY — maps source type → (module, class)

create_command will use get_converter() instead of _call_module().
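
The registry/factory pair might look roughly like this. Only the class names appear in this log; the module paths and registry contents below are illustrative guesses:

```python
import importlib

# source type -> (module path, class name); paths here are assumptions.
CONVERTER_REGISTRY = {
    "html": ("skill_seekers.cli.html_scraper", "HtmlToSkillConverter"),
    "pptx": ("skill_seekers.cli.pptx_scraper", "PptxToSkillConverter"),
}


def get_converter(source_type, config):
    """Factory: resolve a source type to a converter instance."""
    try:
        module_path, class_name = CONVERTER_REGISTRY[source_type]
    except KeyError:
        known = ", ".join(sorted(CONVERTER_REGISTRY))
        raise ValueError(
            f"Unknown source type {source_type!r} (known: {known})"
        ) from None
    module = importlib.import_module(module_path)
    return getattr(module, class_name)(config)
```

Lazy `importlib` lookup keeps the registry cheap to import even when a converter's own dependencies are heavy.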

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Grand Unification — one command, one interface, direct converters

Complete the Grand Unification refactor: `skill-seekers create` is now
the single entry point for all 18 source types. Individual scraper CLI
commands (scrape, github, pdf, analyze, unified, etc.) are removed.

## Architecture changes

- **18 SkillConverter subclasses**: Every scraper now inherits SkillConverter
  with extract() + build_skill() + SOURCE_TYPE. Factory via get_converter().
- **create_command.py rewritten**: _build_config() constructs config dicts
  from ExecutionContext for each source type. Direct converter.run() calls
  replace the old _build_argv() + sys.argv swap + _call_module() machinery.
- **main.py simplified**: create command bypasses _reconstruct_argv entirely,
  calls CreateCommand(args).execute() directly. analyze/unified commands
  removed (create handles both via auto-detection).
- **CreateParser mode="all"**: Top-level parser now accepts all 120+ flags
  (--browser, --max-pages, --depth, etc.) since create is the only entry.
- **Centralized enhancement**: Runs once in create_command after converter,
  not duplicated in each scraper.
- **MCP tools use converters**: 5 scraping tools call get_converter()
  directly instead of subprocess. Config type auto-detected from keys.
- **ConfigValidator → UniSkillConfigValidator**: Renamed with backward-
  compat alias.
- **Data flow**: AgentClient + LocalSkillEnhancer read ExecutionContext
  first, env vars as fallback.
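
The context-first data flow in the last bullet boils down to a priority chain (explicit CLI value, then config file, then environment, then default); a stdlib-only sketch, with the helper name invented for illustration:

```python
import os


def resolve(cli_value, config_value, env_var, default):
    """Context-first lookup: explicit values win, environment is a fallback."""
    if cli_value is not None:
        return cli_value
    if config_value is not None:
        return config_value
    env_value = os.environ.get(env_var)
    if env_value is not None:
        return env_value
    return default
```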

## What was removed

- main() from all 18 scraper files (~3400 lines)
- 18 CLI commands from COMMAND_MODULES + pyproject.toml entry points
- analyze + unified parsers from parser registry
- _build_argv, _call_module, _SKIP_ARGS, _DEST_TO_FLAG, all _route_*()
- setup_argument_parser, get_configuration, _check_deprecated_flags
- Tests referencing removed commands/functions

## Net impact

51 files changed, ~6000 lines removed. 2996 tests pass, 0 failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: review fixes for Grand Unification PR

- Add autouse conftest fixture to reset ExecutionContext singleton between tests
- Replace hardcoded defaults in _is_explicitly_set() with parser-derived defaults
- Upgrade ExecutionContext double-init log from debug to info
- Use logger.exception() in SkillConverter.run() to preserve tracebacks
- Fix docstring "17 types" → "18 types" in skill_converter.py
- DRY up 10 copy-paste help handlers into dict + loop (~100 lines removed)
- Fix 2 CI workflows still referencing removed `skill-seekers scrape` command
- Remove broken pyproject.toml entry point for codebase_scraper:main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 12 logic/flow issues found in deep review

Critical fixes:
- UnifiedScraper.run(): replace sys.exit(1) with return 1, add return 0
- doc_scraper: use ExecutionContext.get() when already initialized instead
  of re-calling initialize() which silently discards new config
- unified_scraper: define enhancement_config before try/except to prevent
  UnboundLocalError in LOCAL enhancement timeout read

Important fixes:
- override(): cleaner tuple save/restore for singleton swap
- --agent without --api-key now sets mode="local" so env API key doesn't
  override explicit agent choice
- Remove DeprecationWarning from _reconstruct_argv (fires on every
  non-create command in production)
- Rewrite scrape_generic_tool to use get_converter() instead of subprocess
  calls to removed main() functions
- SkillConverter.run() checks build_skill() return value, returns 1 if False
- estimate_pages_tool uses -m module invocation instead of .py file path

Low-priority fixes:
- get_converter() raises descriptive ValueError on class name typo
- test_default_values: save/clear API key env vars before asserting mode
- test_get_converter_pdf: fix config key "path" → "pdf_path"

3056 passed, 4 failed (pre-existing dep version issues), 32 skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update MCP server tests to mock converter instead of subprocess

scrape_docs_tool now uses get_converter() + _run_converter() in-process
instead of run_subprocess_with_streaming. Update 4 TestScrapeDocsTool
tests to mock the converter layer instead of the removed subprocess path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 23:00:52 +03:00


"""Tests for ExecutionContext singleton.
This module tests the ExecutionContext class which provides a single source
of truth for all configuration in Skill Seekers.
"""
import argparse
import json
import os
import tempfile
import pytest
from skill_seekers.cli.execution_context import (
ExecutionContext,
get_context,
)
class TestExecutionContextBasics:
    """Basic functionality tests."""

    def setup_method(self):
        """Reset singleton before each test."""
        ExecutionContext.reset()

    def teardown_method(self):
        """Clean up after each test."""
        ExecutionContext.reset()

    def test_get_returns_defaults_when_not_initialized(self):
        """Should return default context when not explicitly initialized."""
        ctx = ExecutionContext.get()
        assert ctx is not None
        assert ctx.enhancement.level == 2  # default
        assert ctx.output.name is None  # default

    def test_get_context_shortcut(self):
        """get_context() should be equivalent to ExecutionContext.get()."""
        args = argparse.Namespace(name="test-skill")
        ExecutionContext.initialize(args=args)
        ctx = get_context()
        assert ctx.output.name == "test-skill"

    def test_initialize_returns_instance(self):
        """initialize() should return the context instance."""
        args = argparse.Namespace(name="test")
        ctx = ExecutionContext.initialize(args=args)
        assert isinstance(ctx, ExecutionContext)
        assert ctx.output.name == "test"

    def test_singleton_behavior(self):
        """Multiple calls should return same instance."""
        args = argparse.Namespace(name="first")
        ctx1 = ExecutionContext.initialize(args=args)
        ctx2 = ExecutionContext.get()
        assert ctx1 is ctx2

    def test_reset_clears_instance(self):
        """reset() should clear the initialized instance, get() returns fresh defaults."""
        args = argparse.Namespace(name="test-skill")
        ExecutionContext.initialize(args=args)
        assert ExecutionContext.get().output.name == "test-skill"
        ExecutionContext.reset()
        # After reset, get() returns default context (not the old one)
        ctx = ExecutionContext.get()
        assert ctx.output.name is None  # default, not "test-skill"

class TestExecutionContextFromArgs:
    """Tests for building context from CLI args."""

    def setup_method(self):
        ExecutionContext.reset()

    def teardown_method(self):
        ExecutionContext.reset()

    def test_basic_args(self):
        """Should extract basic args correctly."""
        args = argparse.Namespace(
            name="react-docs",
            output="custom/output",
            doc_version="18.2",
            dry_run=True,
            enhance_level=3,
            agent="kimi",
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.output.name == "react-docs"
        assert ctx.output.output_dir == "custom/output"
        assert ctx.output.doc_version == "18.2"
        assert ctx.output.dry_run is True
        assert ctx.enhancement.level == 3
        assert ctx.enhancement.agent == "kimi"

    def test_scraping_args(self):
        """Should extract scraping args correctly."""
        args = argparse.Namespace(
            name="test",
            max_pages=100,
            rate_limit=1.5,
            browser=True,
            workers=4,
            async_mode=True,
            resume=True,
            fresh=False,
            skip_scrape=True,
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.scraping.max_pages == 100
        assert ctx.scraping.rate_limit == 1.5
        assert ctx.scraping.browser is True
        assert ctx.scraping.workers == 4
        assert ctx.scraping.async_mode is True
        assert ctx.scraping.resume is True
        assert ctx.scraping.skip_scrape is True

    def test_analysis_args(self):
        """Should extract analysis args correctly."""
        args = argparse.Namespace(
            name="test",
            depth="full",
            skip_patterns=True,
            skip_test_examples=True,
            skip_how_to_guides=True,
            file_patterns="*.py,*.js",
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.analysis.depth == "full"
        assert ctx.analysis.skip_patterns is True
        assert ctx.analysis.skip_test_examples is True
        assert ctx.analysis.skip_how_to_guides is True
        assert ctx.analysis.file_patterns == ["*.py", "*.js"]

    def test_workflow_args(self):
        """Should extract workflow args correctly."""
        args = argparse.Namespace(
            name="test",
            enhance_workflow=["security-focus", "api-docs"],
            enhance_stage=["stage1:prompt1"],
            var=["key1=value1", "key2=value2"],
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.enhancement.workflows == ["security-focus", "api-docs"]
        assert ctx.enhancement.stages == ["stage1:prompt1"]
        assert ctx.enhancement.workflow_vars == {"key1": "value1", "key2": "value2"}

    def test_rag_args(self):
        """Should extract RAG args correctly."""
        args = argparse.Namespace(
            name="test",
            chunk_for_rag=True,
            chunk_tokens=1024,
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.rag.chunk_for_rag is True
        assert ctx.rag.chunk_tokens == 1024

    def test_api_mode_detection(self):
        """Should detect API mode from api_key."""
        args = argparse.Namespace(
            name="test",
            api_key="test-key",
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.enhancement.mode == "api"

    def test_local_mode_detection(self):
        """Should default to local/auto mode without API key."""
        # Clean API key env vars to ensure test isolation
        api_keys = ["ANTHROPIC_API_KEY", "OPENAI_API_KEY", "MOONSHOT_API_KEY", "GOOGLE_API_KEY"]
        saved = {k: os.environ.pop(k, None) for k in api_keys}
        try:
            args = argparse.Namespace(name="test")
            ctx = ExecutionContext.initialize(args=args)
            assert ctx.enhancement.mode in ("local", "auto")
        finally:
            for k, v in saved.items():
                if v is not None:
                    os.environ[k] = v

    def test_raw_args_access(self):
        """Should provide access to raw args for backward compatibility."""
        args = argparse.Namespace(
            name="test",
            custom_field="custom_value",
        )
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.get_raw("name") == "test"
        assert ctx.get_raw("custom_field") == "custom_value"
        assert ctx.get_raw("nonexistent", "default") == "default"

class TestExecutionContextFromConfigFile:
    """Tests for building context from config files."""

    def setup_method(self):
        ExecutionContext.reset()

    def teardown_method(self):
        ExecutionContext.reset()

    def test_unified_config_format(self):
        """Should load unified config with sources array."""
        config = {
            "name": "unity-docs",
            "version": "2022.3",
            "enhancement": {
                "enabled": True,
                "level": 2,
                "mode": "local",
                "agent": "kimi",
                "timeout": "unlimited",
            },
            "workflows": ["unity-game-dev"],
            "workflow_stages": ["custom:stage"],
            "workflow_vars": {"var1": "value1"},
            "sources": [{"type": "documentation", "base_url": "https://docs.unity3d.com/"}],
        }
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            json.dump(config, f)
            config_path = f.name
        try:
            ctx = ExecutionContext.initialize(config_path=config_path)
            assert ctx.output.name == "unity-docs"
            assert ctx.output.doc_version == "2022.3"
            assert ctx.enhancement.enabled is True
            assert ctx.enhancement.level == 2
            assert ctx.enhancement.mode == "local"
            assert ctx.enhancement.agent == "kimi"
            assert ctx.enhancement.workflows == ["unity-game-dev"]
            assert ctx.enhancement.stages == ["custom:stage"]
            assert ctx.enhancement.workflow_vars == {"var1": "value1"}
        finally:
            os.unlink(config_path)

    def test_simple_web_config_format(self):
        """Should load simple web config format."""
        config = {
            "name": "react-docs",
            "version": "18.2",
            "base_url": "https://react.dev/",
            "max_pages": 500,
            "rate_limit": 0.5,
            "browser": True,
        }
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            json.dump(config, f)
            config_path = f.name
        try:
            ctx = ExecutionContext.initialize(config_path=config_path)
            assert ctx.output.name == "react-docs"
            assert ctx.output.doc_version == "18.2"
            assert ctx.scraping.max_pages == 500
            assert ctx.scraping.rate_limit == 0.5
            assert ctx.scraping.browser is True
        finally:
            os.unlink(config_path)

    def test_timeout_integer(self):
        """Should handle integer timeout in config."""
        config = {
            "name": "test",
            "enhancement": {"timeout": 3600},
            "sources": [],
        }
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            json.dump(config, f)
            config_path = f.name
        try:
            ctx = ExecutionContext.initialize(config_path=config_path)
            assert ctx.enhancement.timeout == 3600
        finally:
            os.unlink(config_path)

class TestExecutionContextPriority:
    """Tests for configuration priority (CLI > Config > Env > Defaults)."""

    def setup_method(self):
        ExecutionContext.reset()
        self._original_env = {}

    def teardown_method(self):
        ExecutionContext.reset()
        # Restore env vars
        for key, value in self._original_env.items():
            if value is not None:
                os.environ[key] = value
            else:
                os.environ.pop(key, None)

    def test_cli_overrides_config(self):
        """CLI args should override config file values."""
        config = {"name": "config-name", "sources": []}
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            json.dump(config, f)
            config_path = f.name
        try:
            args = argparse.Namespace(name="cli-name")
            ctx = ExecutionContext.initialize(args=args, config_path=config_path)
            # CLI should win
            assert ctx.output.name == "cli-name"
        finally:
            os.unlink(config_path)

    def test_config_overrides_defaults(self):
        """Config file should override default values."""
        config = {
            "name": "config-name",
            "enhancement": {"level": 3},
            "sources": [],
        }
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            json.dump(config, f)
            config_path = f.name
        try:
            ctx = ExecutionContext.initialize(config_path=config_path)
            # Config should override default (level=2)
            assert ctx.enhancement.level == 3
        finally:
            os.unlink(config_path)

    def test_env_overrides_defaults(self):
        """Environment variables should override defaults."""
        self._original_env["SKILL_SEEKER_AGENT"] = os.environ.get("SKILL_SEEKER_AGENT")
        os.environ["SKILL_SEEKER_AGENT"] = "claude"
        ctx = ExecutionContext.initialize()
        # Env var should override default (None)
        assert ctx.enhancement.agent == "claude"

class TestExecutionContextSourceInfo:
    """Tests for source info integration."""

    def setup_method(self):
        ExecutionContext.reset()

    def teardown_method(self):
        ExecutionContext.reset()

    def test_source_info_integration(self):
        """Should integrate source info from source_detector."""

        class MockSourceInfo:
            type = "web"
            raw_source = "https://react.dev/"
            parsed = {"url": "https://react.dev/"}
            suggested_name = "react"

        ctx = ExecutionContext.initialize(source_info=MockSourceInfo())
        assert ctx.source is not None
        assert ctx.source.type == "web"
        assert ctx.source.raw_source == "https://react.dev/"
        assert ctx.source.suggested_name == "react"

class TestExecutionContextOverride:
    """Tests for the override context manager."""

    def setup_method(self):
        ExecutionContext.reset()

    def teardown_method(self):
        ExecutionContext.reset()

    def test_override_temporarily_changes_values(self):
        """override() should temporarily change values."""
        args = argparse.Namespace(name="original", enhance_level=2)
        ctx = ExecutionContext.initialize(args=args)
        assert ctx.enhancement.level == 2
        with ctx.override(enhancement__level=3):
            ctx_from_get = ExecutionContext.get()
            assert ctx_from_get.enhancement.level == 3
        # After exit, original value restored
        assert ExecutionContext.get().enhancement.level == 2

    def test_override_restores_on_exception(self):
        """override() should restore values even on exception."""
        args = argparse.Namespace(name="original", enhance_level=2)
        ctx = ExecutionContext.initialize(args=args)
        try:
            with ctx.override(enhancement__level=3):
                assert ExecutionContext.get().enhancement.level == 3
                raise ValueError("Test error")
        except ValueError:
            pass
        # Should still be restored
        assert ExecutionContext.get().enhancement.level == 2

class TestExecutionContextValidation:
    """Tests for Pydantic validation."""

    def setup_method(self):
        ExecutionContext.reset()

    def teardown_method(self):
        ExecutionContext.reset()

    def test_enhancement_level_bounds(self):
        """Enhancement level should be 0-3."""
        args = argparse.Namespace(name="test", enhance_level=5)
        with pytest.raises(ValueError) as exc_info:
            ExecutionContext.initialize(args=args)
        assert "level" in str(exc_info.value)

    def test_analysis_depth_choices(self):
        """Analysis depth should reject invalid values."""
        import pydantic

        args = argparse.Namespace(name="test", depth="invalid")
        with pytest.raises(pydantic.ValidationError):
            ExecutionContext.initialize(args=args)

    def test_analysis_depth_valid_choices(self):
        """Analysis depth should accept surface, deep, full."""
        for depth in ("surface", "deep", "full"):
            ExecutionContext.reset()
            args = argparse.Namespace(name="test", depth=depth)
            ctx = ExecutionContext.initialize(args=args)
            assert ctx.analysis.depth == depth

class TestExecutionContextDefaults:
    """Tests for default values."""

    def setup_method(self):
        ExecutionContext.reset()

    def teardown_method(self):
        ExecutionContext.reset()

    def test_default_values(self):
        """Should have sensible defaults."""
        # Clear API key env vars so mode defaults to "auto" regardless of environment
        api_keys = ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "MOONSHOT_API_KEY", "GOOGLE_API_KEY")
        saved = {k: os.environ.pop(k, None) for k in api_keys}
        try:
            ctx = ExecutionContext.initialize()
            # Enhancement defaults
            assert ctx.enhancement.enabled is True
            assert ctx.enhancement.level == 2
            assert ctx.enhancement.mode == "auto"  # Default is auto, resolved at runtime
            assert ctx.enhancement.timeout == 2700  # 45 minutes
        finally:
            for k, v in saved.items():
                if v is not None:
                    os.environ[k] = v

        # Output defaults
        assert ctx.output.name is None
        assert ctx.output.dry_run is False
        # Scraping defaults
        assert ctx.scraping.browser is False
        assert ctx.scraping.workers == 1
        assert ctx.scraping.languages == ["en"]
        # Analysis defaults
        assert ctx.analysis.depth == "surface"
        assert ctx.analysis.skip_patterns is False
        # RAG defaults
        assert ctx.rag.chunk_for_rag is False
        assert ctx.rag.chunk_tokens == 512