* fix: resolve 8 pipeline bugs found during skill quality review

- Fix 0 APIs extracted from documentation by enriching summary.json with individual page file content before conflict detection
- Fix all "Unknown" entries in merged_api.md by injecting dict keys as API names and falling back to AI merger field names
- Fix frontmatter using raw slugs instead of config name by normalizing frontmatter after SKILL.md generation
- Fix leaked absolute filesystem paths in patterns/index.md by stripping .skillseeker-cache repo clone prefixes
- Fix ARCHITECTURE.md file count always showing "1 files" by counting files per language from code_analysis data
- Fix YAML parse errors on GitHub Actions workflows by converting boolean keys (on: true) to strings
- Fix false React/Vue.js framework detection in C# projects by filtering web frameworks based on primary language
- Improve how-to guide generation by broadening the workflow example filter to include setup/config examples with sufficient complexity
- Fix test_git_sources_e2e failures caused by the git init default branch being 'main' instead of 'master'

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 6 review issues in ExecutionContext implementation

Fixes from code review:

1. Mode resolution (#3, critical): _args_to_data no longer unconditionally overwrites mode. It only writes mode="api" when --api-key is explicitly passed. Env-var-based mode detection moved to _default_data() as the lowest priority.
2. Re-initialization warning (#4): initialize() now logs a debug message when called a second time instead of silently returning a stale instance.
3. _raw_args preserved in override (#5): the temp context now copies _raw_args from its parent so get_raw() works correctly inside override blocks.
4. test_local_mode_detection env cleanup (#7): the test now saves/restores API key env vars to prevent failures when ANTHROPIC_API_KEY is set.
5. _load_config_file error handling (#8): wraps FileNotFoundError and JSONDecodeError with user-friendly ValueError messages.
6. Lint fixes: added the logging import, fixed the Generator import from collections.abc, fixed the AgentClient return type annotation.

Remaining P2/P3 items (documented, not blocking):

- Lock TOCTOU in override() — safe on CPython, needs a fix for no-GIL builds
- get() reads _instance without the lock — same CPython caveat
- config_path not stored on the instance
- AnalysisSettings.depth not Literal-constrained

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address all remaining P2/P3 review issues in ExecutionContext

1. Thread safety: get() now acquires _lock before reading _instance (#2)
2. Thread safety: override() saves/restores the _initialized flag to prevent re-init during override blocks (#10)
3. Config path stored: _config_path PrivateAttr + config_path property (#6)
4. Literal validation: AnalysisSettings.depth now uses Literal["surface", "deep", "full"] and rejects invalid values (#9)
5. Tests updated: test_analysis_depth_choices now expects ValidationError for an invalid depth; added test_analysis_depth_valid_choices
6. Lint cleanup: removed unused imports, fixed whitespace in tests

All 10 previously reported issues are now resolved. 26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: restore 5 truncated scrapers, migrate unified_scraper, fix context init

Five scrapers had main() truncated with "# Original main continues here..." after Kimi's migration, so the business logic was never connected:

- html_scraper.py — restored HtmlToSkillConverter extraction + build
- pptx_scraper.py — restored PptxToSkillConverter extraction + build
- confluence_scraper.py — restored ConfluenceToSkillConverter with 3 modes
- notion_scraper.py — restored NotionToSkillConverter with 4 sources
- chat_scraper.py — restored ChatToSkillConverter extraction + build

unified_scraper.py — migrated main() to the context-first pattern with argv fallback

Fixed the context initialization chain:

- main.py no longer initializes ExecutionContext (it was stealing init from commands)
- create_command.py now passes config_path from source_info.parsed
- execution_context.py handles SourceInfo.raw_input (not raw_source)

All 18 scrapers are now genuinely migrated. 26 tests pass, lint clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 7 data flow conflicts between ExecutionContext and legacy paths

Critical fixes (CLI args silently lost):

- unified_scraper Phase 6: reads ctx.enhancement.level instead of raw JSON when args=None (#3, #4)
- unified_scraper Phase 6 agent: reads ctx.enhancement.agent instead of 3 independent env var lookups (#5)
- doc_scraper._run_enhancement: uses agent_client.api_key instead of raw os.environ.get(), so the config file api_key is respected (#1)

Important fixes:

- main._handle_analyze_command: populates _fake_args from ExecutionContext so --agent and --api-key aren't lost in the analyze→enhance path (#6)
- doc_scraper type annotations: replaced forward refs with Any to avoid F821 undefined name errors

All changes include a RuntimeError fallback for backward compatibility when ExecutionContext isn't initialized.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: 3 crashes + 1 stub in migrated scrapers found by deep scan

1. github_scraper.py: args.scrape_only and args.enhance_level crash when args=None (context path). Guarded with if args and getattr(). Also fixed the agent fallback to read ctx.enhancement.agent.
2. codebase_scraper.py: args.output and args.skip_api_reference crash in the summary block when args=None. Replaced with an output_dir local variable and ctx.analysis.skip_api_reference.
3. epub_scraper.py: main() was still a stub ending with "# Rest of main() continues..." — restored the full extraction + build + enhancement logic using ctx values exclusively.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: complete ExecutionContext migration for remaining scrapers

Kimi's Phase 4 scraper migrations + Claude's review fixes. All 18 scrapers now use the context-first pattern with argv fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 1 — ExecutionContext.get() always returns a context (no RuntimeError)

get() now returns a default context instead of raising RuntimeError when not explicitly initialized. This eliminates the need for try/except RuntimeError blocks in all 18 scrapers. Components can always call ExecutionContext.get() safely: it returns defaults if not initialized, or the explicitly initialized instance.

Updated tests: test_get_returns_defaults_when_not_initialized, test_reset_clears_instance (no longer expects RuntimeError).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 2a-c — remove 16 individual scraper CLI commands

Removed individual scraper commands from:

- COMMAND_MODULES in main.py (16 entries: scrape, github, pdf, word, epub, video, jupyter, html, openapi, asciidoc, pptx, rss, manpage, confluence, notion, chat)
- pyproject.toml entry points (16 skill-seekers-<type> binaries)
- parsers/__init__.py (16 parser registrations)

All source types are now accessed via: skill-seekers create <source>

Kept: create, unified, analyze, enhance, package, upload, install, install-agent, config, doctor, and the utility commands.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create SkillConverter base class + converter registry

New base interface that all 17 converters will inherit:

- SkillConverter.run() — extract + build (same call for all types)
- SkillConverter.extract() — override in subclass
- SkillConverter.build_skill() — override in subclass
- get_converter(source_type, config) — factory from registry
- CONVERTER_REGISTRY — maps source type → (module, class)

create_command will use get_converter() instead of _call_module().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Grand Unification — one command, one interface, direct converters

Complete the Grand Unification refactor: `skill-seekers create` is now the single entry point for all 18 source types. Individual scraper CLI commands (scrape, github, pdf, analyze, unified, etc.) are removed.

## Architecture changes

- **18 SkillConverter subclasses**: Every scraper now inherits SkillConverter with extract() + build_skill() + SOURCE_TYPE. Factory via get_converter().
- **create_command.py rewritten**: _build_config() constructs config dicts from ExecutionContext for each source type. Direct converter.run() calls replace the old _build_argv() + sys.argv swap + _call_module() machinery.
- **main.py simplified**: the create command bypasses _reconstruct_argv entirely and calls CreateCommand(args).execute() directly. The analyze/unified commands are removed (create handles both via auto-detection).
- **CreateParser mode="all"**: the top-level parser now accepts all 120+ flags (--browser, --max-pages, --depth, etc.) since create is the only entry.
- **Centralized enhancement**: runs once in create_command after the converter, not duplicated in each scraper.
- **MCP tools use converters**: 5 scraping tools call get_converter() directly instead of subprocess. Config type is auto-detected from keys.
- **ConfigValidator → UniSkillConfigValidator**: renamed, with a backward-compat alias.
- **Data flow**: AgentClient + LocalSkillEnhancer read ExecutionContext first, env vars as fallback.

## What was removed

- main() from all 18 scraper files (~3400 lines)
- 18 CLI commands from COMMAND_MODULES + pyproject.toml entry points
- analyze + unified parsers from the parser registry
- _build_argv, _call_module, _SKIP_ARGS, _DEST_TO_FLAG, all _route_*()
- setup_argument_parser, get_configuration, _check_deprecated_flags
- Tests referencing removed commands/functions

## Net impact

51 files changed, ~6000 lines removed. 2996 tests pass, 0 failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: review fixes for Grand Unification PR

- Add an autouse conftest fixture to reset the ExecutionContext singleton between tests
- Replace hardcoded defaults in _is_explicitly_set() with parser-derived defaults
- Upgrade the ExecutionContext double-init log from debug to info
- Use logger.exception() in SkillConverter.run() to preserve tracebacks
- Fix docstring "17 types" → "18 types" in skill_converter.py
- DRY up 10 copy-paste help handlers into a dict + loop (~100 lines removed)
- Fix 2 CI workflows still referencing the removed `skill-seekers scrape` command
- Remove the broken pyproject.toml entry point for codebase_scraper:main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 12 logic/flow issues found in deep review

Critical fixes:

- UnifiedScraper.run(): replace sys.exit(1) with return 1, add return 0
- doc_scraper: use ExecutionContext.get() when already initialized instead of re-calling initialize(), which silently discards the new config
- unified_scraper: define enhancement_config before the try/except to prevent UnboundLocalError in the LOCAL enhancement timeout read

Important fixes:

- override(): cleaner tuple save/restore for the singleton swap
- --agent without --api-key now sets mode="local" so an env API key doesn't override an explicit agent choice
- Remove the DeprecationWarning from _reconstruct_argv (it fires on every non-create command in production)
- Rewrite scrape_generic_tool to use get_converter() instead of subprocess calls to the removed main() functions
- SkillConverter.run() checks the build_skill() return value and returns 1 if False
- estimate_pages_tool uses -m module invocation instead of a .py file path

Low-priority fixes:

- get_converter() raises a descriptive ValueError on a class name typo
- test_default_values: save/clear API key env vars before asserting mode
- test_get_converter_pdf: fix config key "path" → "pdf_path"

3056 passed, 4 failed (pre-existing dep version issues), 32 skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update MCP server tests to mock converter instead of subprocess

scrape_docs_tool now uses get_converter() + _run_converter() in-process instead of run_subprocess_with_streaming. Updated 4 TestScrapeDocsTool tests to mock the converter layer instead of the removed subprocess path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
449 lines
12 KiB
TOML
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "skill-seekers"
version = "3.4.0"
description = "Convert documentation websites, GitHub repositories, and PDFs into Claude AI skills. International support with Chinese (简体中文) documentation."
readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
authors = [
    {name = "Yusuf Karaaslan"}
]
keywords = [
    "claude",
    "ai",
    "documentation",
    "scraping",
    "skills",
    "llm",
    "mcp",
    "automation",
    "i18n",
    "chinese",
    "international"
]
classifiers = [
    "Development Status :: 4 - Beta",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Topic :: Software Development :: Documentation",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Topic :: Text Processing :: Markup :: Markdown",
    "Natural Language :: English",
    "Natural Language :: Chinese (Simplified)",
]

# Core dependencies
dependencies = [
    "requests>=2.32.5",
    "beautifulsoup4>=4.14.2",
    "PyGithub>=2.5.0",
    "GitPython>=3.1.40",
    "httpx>=0.28.1",  # Required for async scraping (core feature)
    "anthropic>=0.76.0",  # Required for AI enhancement (core feature)
    "PyMuPDF>=1.24.14",
    "Pillow>=11.0.0",
    "pydantic>=2.12.3",
    "pydantic-settings>=2.11.0",
    "python-dotenv>=1.1.1",
    "jsonschema>=4.25.1",
    "click>=8.3.0",
    "Pygments>=2.19.2",
    "pathspec>=0.12.1",
    "networkx>=3.0",
    "tomli>=2.0.0; python_version < '3.11'",  # TOML parser for version reading
    "schedule>=1.2.0",  # Required for sync monitoring
    "PyYAML>=6.0",  # Required for workflow preset management
    "langchain>=1.2.10",
    "llama-index>=0.14.15",
]

[project.optional-dependencies]
# MCP server dependencies (now truly optional)
mcp = [
    "mcp>=1.25,<2",
    "httpx>=0.28.1",
    "httpx-sse>=0.4.3",
    "uvicorn>=0.38.0",
    "starlette>=0.48.0",
    "sse-starlette>=3.0.2",
]

# LLM platform-specific dependencies
# Google Gemini support
gemini = [
    "google-generativeai>=0.8.0",
]

# OpenAI ChatGPT support
openai = [
    "openai>=1.0.0",
]

# MiniMax AI support (uses OpenAI-compatible API)
minimax = [
    "openai>=1.0.0",
]

# Kimi (Moonshot AI) support (uses OpenAI-compatible API)
kimi = [
    "openai>=1.0.0",
]

# DeepSeek AI support (uses OpenAI-compatible API)
deepseek = [
    "openai>=1.0.0",
]

# Qwen (Alibaba) support (uses OpenAI-compatible API)
qwen = [
    "openai>=1.0.0",
]

# OpenRouter support (uses OpenAI-compatible API)
openrouter = [
    "openai>=1.0.0",
]

# Together AI support (uses OpenAI-compatible API)
together = [
    "openai>=1.0.0",
]

# Fireworks AI support (uses OpenAI-compatible API)
fireworks = [
    "openai>=1.0.0",
]

# All LLM platforms combined
all-llms = [
    "google-generativeai>=0.8.0",
    "openai>=1.0.0",
]

# Cloud storage support
s3 = [
    "boto3>=1.34.0",
]

gcs = [
    "google-cloud-storage>=2.10.0",
]

azure = [
    "azure-storage-blob>=12.19.0",
]

# Word document (.docx) support
docx = [
    "mammoth>=1.6.0",
    "python-docx>=1.1.0",
]

# EPUB (.epub) support
epub = [
    "ebooklib>=0.18",
]

# Video processing (lightweight: YouTube transcripts + metadata)
video = [
    "yt-dlp>=2024.12.0",
    "youtube-transcript-api>=1.2.0",
]

# Video processing (full: + Whisper + visual extraction)
# NOTE: easyocr removed — it pulls torch with the wrong GPU variant.
# Use: skill-seekers video --setup (auto-detects GPU, installs correct PyTorch + easyocr)
video-full = [
    "yt-dlp>=2024.12.0",
    "youtube-transcript-api>=1.2.0",
    "faster-whisper>=1.0.0",
    "scenedetect[opencv]>=0.6.4",
    "opencv-python-headless>=4.9.0",
    "pytesseract>=0.3.13",
]

# RAG vector database upload support
chroma = [
    "chromadb>=0.4.0",
]

weaviate = [
    "weaviate-client>=3.25.0",
]

sentence-transformers = [
    "sentence-transformers>=2.2.0",
]

pinecone = [
    "pinecone>=5.0.0",
]

rag-upload = [
    "chromadb>=0.4.0",
    "weaviate-client>=3.25.0",
    "sentence-transformers>=2.2.0",
    "pinecone>=5.0.0",
]

# All cloud storage providers combined
all-cloud = [
    "boto3>=1.34.0",
    "google-cloud-storage>=2.10.0",
    "azure-storage-blob>=12.19.0",
]

# New source type dependencies (v3.2.0+)
jupyter = [
    "nbformat>=5.9.0",
]

asciidoc = [
    "asciidoc>=10.0.0",
]

pptx = [
    "python-pptx>=0.6.21",
]

confluence = [
    "atlassian-python-api>=3.41.0",
]

notion = [
    "notion-client>=2.0.0",
]

rss = [
    "feedparser>=6.0.0",
]

chat = [
    "slack-sdk>=3.27.0",
]

# Headless browser for JavaScript SPA sites
browser = [
    "playwright>=1.40.0",
]

# Embedding server support
embedding = [
    "fastapi>=0.109.0",
    "uvicorn>=0.27.0",
    "sentence-transformers>=2.3.0",
    "numpy>=1.24.0",
    "voyageai>=0.2.0",
]

# All optional dependencies combined (dev dependencies now in [dependency-groups])
# Note: video-full deps (opencv, faster-whisper, pytesseract) excluded due to heavy
# native dependencies. Install separately: pip install skill-seekers[video-full]
all = [
    "mammoth>=1.6.0",
    "python-docx>=1.1.0",
    "ebooklib>=0.18",
    "yt-dlp>=2024.12.0",
    "youtube-transcript-api>=1.2.0",
    "mcp>=1.25,<2",
    "httpx>=0.28.1",
    "httpx-sse>=0.4.3",
    "uvicorn>=0.38.0",
    "starlette>=0.48.0",
    "sse-starlette>=3.0.2",
    "google-generativeai>=0.8.0",
    "openai>=1.0.0",
    "boto3>=1.34.0",
    "google-cloud-storage>=2.10.0",
    "azure-storage-blob>=12.19.0",
    "chromadb>=0.4.0",
    "weaviate-client>=3.25.0",
    "pinecone>=5.0.0",
    "fastapi>=0.109.0",
    "sentence-transformers>=2.3.0",
    "numpy>=1.24.0",
    "voyageai>=0.2.0",
    # New source types (v3.2.0+)
    "nbformat>=5.9.0",
    "asciidoc>=10.0.0",
    "python-pptx>=0.6.21",
    "atlassian-python-api>=3.41.0",
    "notion-client>=2.0.0",
    "feedparser>=6.0.0",
    "slack-sdk>=3.27.0",
]

[project.urls]
Homepage = "https://skillseekersweb.com/"
Website = "https://skillseekersweb.com/"
Repository = "https://github.com/yusufkaraaslan/Skill_Seekers"
"Bug Tracker" = "https://github.com/yusufkaraaslan/Skill_Seekers/issues"
Documentation = "https://skillseekersweb.com/"
"Config Browser" = "https://skillseekersweb.com/"
"中文文档 (Chinese)" = "https://github.com/yusufkaraaslan/Skill_Seekers/blob/main/README.zh-CN.md"
"Author" = "https://x.com/_yUSyUS_"
"Website Repository" = "https://github.com/yusufkaraaslan/skillseekersweb"
"Community Configs" = "https://github.com/yusufkaraaslan/skill-seekers-configs"
"GitHub Action" = "https://github.com/yusufkaraaslan/skill-seekers-action"
"Plugin" = "https://github.com/yusufkaraaslan/skill-seekers-plugin"
"Homebrew Tap" = "https://github.com/yusufkaraaslan/homebrew-skill-seekers"

[project.scripts]
# Main CLI entry point
skill-seekers = "skill_seekers.cli.main:main"

# Core commands
skill-seekers-create = "skill_seekers.cli.create_command:main"
skill-seekers-enhance = "skill_seekers.cli.enhance_command:main"
skill-seekers-enhance-status = "skill_seekers.cli.enhance_status:main"
skill-seekers-package = "skill_seekers.cli.package_skill:main"
skill-seekers-upload = "skill_seekers.cli.upload_skill:main"
skill-seekers-install = "skill_seekers.cli.install_skill:main"
skill-seekers-install-agent = "skill_seekers.cli.install_agent:main"

# Analysis & utilities
skill-seekers-estimate = "skill_seekers.cli.estimate_pages:main"
skill-seekers-patterns = "skill_seekers.cli.pattern_recognizer:main"
skill-seekers-how-to-guides = "skill_seekers.cli.how_to_guide_builder:main"
skill-seekers-quality = "skill_seekers.cli.quality_metrics:main"
skill-seekers-workflows = "skill_seekers.cli.workflows_command:main"

# Configuration & setup
skill-seekers-config = "skill_seekers.cli.config_command:main"
skill-seekers-doctor = "skill_seekers.cli.doctor:main"
skill-seekers-setup = "skill_seekers.cli.setup_wizard:main"
skill-seekers-resume = "skill_seekers.cli.resume_command:main"
skill-seekers-sync-config = "skill_seekers.cli.sync_config:main"

# Advanced
skill-seekers-cloud = "skill_seekers.cli.cloud_storage_cli:main"
skill-seekers-embed = "skill_seekers.embedding.server:main"
skill-seekers-sync = "skill_seekers.cli.sync_cli:main"
skill-seekers-benchmark = "skill_seekers.cli.benchmark_cli:main"
skill-seekers-stream = "skill_seekers.cli.streaming_ingest:main"
skill-seekers-update = "skill_seekers.cli.incremental_updater:main"
skill-seekers-multilang = "skill_seekers.cli.multilang_support:main"

[tool.setuptools]
package-dir = {"" = "src"}

[tool.setuptools.packages.find]
where = ["src"]
include = ["skill_seekers*"]
namespaces = false

[tool.setuptools.package-data]
skill_seekers = ["py.typed", "workflows/*.yaml"]

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = "-v --tb=short --strict-markers"
markers = [
    "asyncio: mark test as an async test",
    "slow: mark test as slow running (>5 seconds)",
    "integration: mark test as integration test (requires external services)",
    "e2e: mark test as end-to-end (resource-intensive, may create files)",
    "venv: mark test as requiring virtual environment setup",
    "bootstrap: mark test as bootstrap feature specific",
    "benchmark: mark test as performance benchmark",
]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"

[tool.coverage.run]
source = ["src/skill_seekers"]
omit = ["*/tests/*", "*/__pycache__/*", "*/venv/*"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "raise AssertionError",
    "raise NotImplementedError",
    "if __name__ == .__main__.:",
    "if TYPE_CHECKING:",
    "@abstractmethod",
]

[tool.ruff]
line-length = 100
target-version = "py310"
src = ["src", "tests"]

[tool.ruff.lint]
select = [
    "E",    # pycodestyle errors
    "W",    # pycodestyle warnings
    "F",    # Pyflakes
    "I",    # isort
    "B",    # flake8-bugbear
    "C4",   # flake8-comprehensions
    "UP",   # pyupgrade
    "ARG",  # flake8-unused-arguments
    "SIM",  # flake8-simplify
]
ignore = [
    "E501",    # line too long (handled by formatter)
    "F541",    # f-string without placeholders (style preference)
    "ARG002",  # unused method argument (often needed for interface compliance)
    "B007",    # loop control variable not used (sometimes intentional)
    "I001",    # import block unsorted (handled by formatter)
    "SIM114",  # combine if branches (style preference, can reduce readability)
]

[tool.ruff.lint.isort]
known-first-party = ["skill_seekers"]

[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
disallow_incomplete_defs = false
check_untyped_defs = true
ignore_missing_imports = true
show_error_codes = true
pretty = true

[[tool.mypy.overrides]]
module = "tests.*"
disallow_untyped_defs = false
check_untyped_defs = false

[dependency-groups]
dev = [
    # Core testing
    "pytest>=8.4.2",
    "pytest-asyncio>=0.24.0",
    "pytest-cov>=7.0.0",
    "coverage>=7.11.0",

    # Code quality
    "ruff>=0.14.13",
    "mypy>=1.19.1",

    # Test dependencies (Kimi's finding #3)
    "psutil>=5.9.0",  # Process utilities for testing
    "numpy>=1.24.0",  # Numerical operations
    "starlette>=0.31.0",  # HTTP transport testing
    "httpx>=0.24.0",  # HTTP client for testing

    # Cloud storage testing (Kimi's finding #2)
    "boto3>=1.26.0",  # AWS S3
    "google-cloud-storage>=2.10.0",  # Google Cloud Storage
    "azure-storage-blob>=12.17.0",  # Azure Blob Storage
]