* fix: resolve 8 pipeline bugs found during skill quality review

  - Fix zero APIs being extracted from documentation by enriching summary.json with individual page file content before conflict detection
  - Fix all "Unknown" entries in merged_api.md by injecting dict keys as API names and falling back to AI merger field names
  - Fix frontmatter using raw slugs instead of the config name by normalizing frontmatter after SKILL.md generation
  - Fix leaked absolute filesystem paths in patterns/index.md by stripping .skillseeker-cache repo clone prefixes
  - Fix ARCHITECTURE.md file count always showing "1 files" by counting files per language from code_analysis data
  - Fix YAML parse errors on GitHub Actions workflows by converting boolean keys (on: true) to strings
  - Fix false React/Vue.js framework detection in C# projects by filtering web frameworks based on the primary language
  - Improve how-to guide generation by broadening the workflow example filter to include setup/config examples with sufficient complexity
  - Fix test_git_sources_e2e failures caused by the git init default branch being 'main' instead of 'master'

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address 6 review issues in ExecutionContext implementation

  Fixes from code review:

  1. Mode resolution (#3, critical): _args_to_data no longer unconditionally overwrites mode. It only writes mode="api" when --api-key is explicitly passed. Env-var-based mode detection moved to _default_data() as the lowest priority.
  2. Re-initialization warning (#4): initialize() now logs a debug message when called a second time instead of silently returning the stale instance.
  3. _raw_args preserved in override (#5): the temp context now copies _raw_args from the parent so get_raw() works correctly inside override blocks.
  4. test_local_mode_detection env cleanup (#7): the test now saves/restores API key env vars to prevent failures when ANTHROPIC_API_KEY is set.
  5. _load_config_file error handling (#8): wraps FileNotFoundError and JSONDecodeError with user-friendly ValueError messages.
  6. Lint fixes: added the logging import, fixed the Generator import from collections.abc, fixed the AgentClient return type annotation.

  Remaining P2/P3 items (documented, not blocking):
  - Lock TOCTOU in override() — safe on CPython, needs a fix for no-GIL builds
  - get() reads _instance without the lock — same CPython caveat
  - config_path not stored on the instance
  - AnalysisSettings.depth not Literal-constrained

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address all remaining P2/P3 review issues in ExecutionContext

  1. Thread safety: get() now acquires _lock before reading _instance (#2)
  2. Thread safety: override() saves/restores the _initialized flag to prevent re-init during override blocks (#10)
  3. Config path stored: _config_path PrivateAttr + config_path property (#6)
  4. Literal validation: AnalysisSettings.depth now uses Literal["surface", "deep", "full"] — rejects invalid values (#9, see the sketch below)
  5. Test updated: test_analysis_depth_choices now expects ValidationError for invalid depth; added test_analysis_depth_valid_choices
  6. Lint cleanup: removed unused imports, fixed whitespace in tests

  All 10 previously reported issues are now resolved. 26 tests pass, lint clean.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
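A minimal sketch of the Literal-constrained depth setting from item 4 above, assuming pydantic v2; the default value shown here is an assumption:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class AnalysisSettings(BaseModel):
    # Literal rejects anything outside the three allowed depths at construction time
    depth: Literal["surface", "deep", "full"] = "surface"  # default is assumed


AnalysisSettings(depth="deep")        # accepted
try:
    AnalysisSettings(depth="medium")  # raises ValidationError, as the updated test expects
except ValidationError as err:
    print(err)
```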
* fix: restore 5 truncated scrapers, migrate unified_scraper, fix context init

  5 scrapers had main() truncated with "# Original main continues here..." after Kimi's migration — the business logic was never connected:
  - html_scraper.py — restored HtmlToSkillConverter extraction + build
  - pptx_scraper.py — restored PptxToSkillConverter extraction + build
  - confluence_scraper.py — restored ConfluenceToSkillConverter with 3 modes
  - notion_scraper.py — restored NotionToSkillConverter with 4 sources
  - chat_scraper.py — restored ChatToSkillConverter extraction + build

  unified_scraper.py — migrated main() to the context-first pattern with argv fallback

  Fixed the context initialization chain:
  - main.py no longer initializes ExecutionContext (it was stealing init from commands)
  - create_command.py now passes config_path from source_info.parsed
  - execution_context.py handles SourceInfo.raw_input (not raw_source)

  All 18 scrapers now genuinely migrated. 26 tests pass, lint clean.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 7 data flow conflicts between ExecutionContext and legacy paths

  Critical fixes (CLI args silently lost):
  - unified_scraper Phase 6: reads ctx.enhancement.level instead of raw JSON when args=None (#3, #4)
  - unified_scraper Phase 6 agent: reads ctx.enhancement.agent instead of 3 independent env var lookups (#5)
  - doc_scraper._run_enhancement: uses agent_client.api_key instead of raw os.environ.get() — respects the config file api_key (#1)

  Important fixes:
  - main._handle_analyze_command: populates _fake_args from ExecutionContext so --agent and --api-key aren't lost in the analyze→enhance path (#6)
  - doc_scraper type annotations: replaced forward refs with Any to avoid F821 undefined name errors

  All changes include a RuntimeError fallback for backward compatibility when ExecutionContext isn't initialized.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: 3 crashes + 1 stub in migrated scrapers found by deep scan

  1. github_scraper.py: args.scrape_only and args.enhance_level crash when args=None (context path). Guarded with if args and getattr(). Also fixed the agent fallback to read ctx.enhancement.agent.
  2. codebase_scraper.py: args.output and args.skip_api_reference crash in the summary block when args=None. Replaced with an output_dir local var and ctx.analysis.skip_api_reference.
  3. epub_scraper.py: main() was still a stub ending with "# Rest of main() continues..." — restored the full extraction + build + enhancement logic using ctx values exclusively.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: complete ExecutionContext migration for remaining scrapers

  Kimi's Phase 4 scraper migrations + Claude's review fixes. All 18 scrapers now use the context-first pattern with argv fallback.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Phase 1 — ExecutionContext.get() always returns context (no RuntimeError)

  get() now returns a default context instead of raising RuntimeError when not explicitly initialized. This eliminates the need for try/except RuntimeError blocks in all 18 scrapers. Components can always call ExecutionContext.get() safely — it returns defaults if not initialized, or the explicitly initialized instance (see the sketch below).

  Updated tests: test_get_returns_defaults_when_not_initialized, test_reset_clears_instance (no longer expects RuntimeError).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
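A hedged sketch of the Phase 1 get() behavior, folding in the lock from the earlier thread-safety fix; the attribute names come from the commits above, but whether the fallback instance is cached is an assumption:

```python
import threading


class ExecutionContext:
    _instance = None
    _initialized = False
    _lock = threading.Lock()

    @classmethod
    def get(cls) -> "ExecutionContext":
        # Acquire _lock before reading _instance (review issue #2) and fall
        # back to a default-constructed context instead of raising
        # RuntimeError (Phase 1). Caching the fallback here is an assumption.
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    @classmethod
    def reset(cls) -> None:
        # After reset, the next get() simply rebuilds defaults rather than
        # raising — matching test_reset_clears_instance.
        with cls._lock:
            cls._instance, cls._initialized = None, False
```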
* feat: Phase 2a-c — remove 16 individual scraper CLI commands

  Removed individual scraper commands from:
  - COMMAND_MODULES in main.py (16 entries: scrape, github, pdf, word, epub, video, jupyter, html, openapi, asciidoc, pptx, rss, manpage, confluence, notion, chat)
  - pyproject.toml entry points (16 skill-seekers-<type> binaries)
  - parsers/__init__.py (16 parser registrations)

  All source types are now accessed via: skill-seekers create <source>

  Kept: create, unified, analyze, enhance, package, upload, install, install-agent, config, doctor, and utility commands.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: create SkillConverter base class + converter registry

  New base interface that all 17 converters will inherit:
  - SkillConverter.run() — extract + build (same call for all types)
  - SkillConverter.extract() — override in subclass
  - SkillConverter.build_skill() — override in subclass
  - get_converter(source_type, config) — factory from registry
  - CONVERTER_REGISTRY — maps source type → (module, class)

  create_command will use get_converter() instead of _call_module(). A hedged sketch of this interface appears after the Grand Unification commit below.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Grand Unification — one command, one interface, direct converters

  Complete the Grand Unification refactor: `skill-seekers create` is now the single entry point for all 18 source types. Individual scraper CLI commands (scrape, github, pdf, analyze, unified, etc.) are removed.

  ## Architecture changes

  - **18 SkillConverter subclasses**: every scraper now inherits SkillConverter with extract() + build_skill() + SOURCE_TYPE. Factory via get_converter().
  - **create_command.py rewritten**: _build_config() constructs config dicts from ExecutionContext for each source type. Direct converter.run() calls replace the old _build_argv() + sys.argv swap + _call_module() machinery.
  - **main.py simplified**: the create command bypasses _reconstruct_argv entirely and calls CreateCommand(args).execute() directly. analyze/unified commands removed (create handles both via auto-detection).
  - **CreateParser mode="all"**: the top-level parser now accepts all 120+ flags (--browser, --max-pages, --depth, etc.) since create is the only entry.
  - **Centralized enhancement**: runs once in create_command after the converter, not duplicated in each scraper.
  - **MCP tools use converters**: 5 scraping tools call get_converter() directly instead of subprocess. Config type is auto-detected from keys.
  - **ConfigValidator → UniSkillConfigValidator**: renamed with a backward-compat alias.
  - **Data flow**: AgentClient + LocalSkillEnhancer read ExecutionContext first, env vars as fallback.

  ## What was removed

  - main() from all 18 scraper files (~3400 lines)
  - 18 CLI commands from COMMAND_MODULES + pyproject.toml entry points
  - analyze + unified parsers from the parser registry
  - _build_argv, _call_module, _SKIP_ARGS, _DEST_TO_FLAG, all _route_*()
  - setup_argument_parser, get_configuration, _check_deprecated_flags
  - Tests referencing removed commands/functions

  ## Net impact

  51 files changed, ~6000 lines removed. 2996 tests pass, 0 failures.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
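A hedged sketch of the SkillConverter base class and registry described in the commits above; the registry entry and module path are illustrative, and run() reflects the later review fixes (logger.exception to preserve tracebacks, build_skill() return-code check):

```python
import importlib
import logging

logger = logging.getLogger(__name__)


class SkillConverter:
    SOURCE_TYPE = "base"

    def __init__(self, config: dict):
        self.config = config

    def extract(self):              # override in subclass
        raise NotImplementedError

    def build_skill(self) -> bool:  # override in subclass
        raise NotImplementedError

    def run(self) -> int:
        # Same call for every source type: extract, then build.
        try:
            self.extract()
            return 0 if self.build_skill() else 1  # propagate build failure
        except Exception:
            logger.exception("conversion failed")  # keep the traceback
            return 1


# Maps source type -> (module, class); one illustrative entry shown —
# the module path is an assumption, the class name is from the commits.
CONVERTER_REGISTRY = {
    "html": ("skill_seekers.html_scraper", "HtmlToSkillConverter"),
}


def get_converter(source_type: str, config: dict) -> SkillConverter:
    try:
        module_name, class_name = CONVERTER_REGISTRY[source_type]
    except KeyError:
        raise ValueError(f"Unknown source type: {source_type!r}") from None
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(config)
```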
* fix: review fixes for Grand Unification PR

  - Add an autouse conftest fixture to reset the ExecutionContext singleton between tests
  - Replace hardcoded defaults in _is_explicitly_set() with parser-derived defaults
  - Upgrade the ExecutionContext double-init log from debug to info
  - Use logger.exception() in SkillConverter.run() to preserve tracebacks
  - Fix docstring "17 types" → "18 types" in skill_converter.py
  - DRY up 10 copy-paste help handlers into a dict + loop (~100 lines removed)
  - Fix 2 CI workflows still referencing the removed `skill-seekers scrape` command
  - Remove the broken pyproject.toml entry point for codebase_scraper:main

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 12 logic/flow issues found in deep review

  Critical fixes:
  - UnifiedScraper.run(): replace sys.exit(1) with return 1, add return 0
  - doc_scraper: use ExecutionContext.get() when already initialized instead of re-calling initialize(), which silently discards the new config
  - unified_scraper: define enhancement_config before the try/except to prevent UnboundLocalError in the LOCAL enhancement timeout read

  Important fixes:
  - override(): cleaner tuple save/restore for the singleton swap (sketched at the end of this commit log)
  - --agent without --api-key now sets mode="local" so an env API key doesn't override the explicit agent choice
  - Remove DeprecationWarning from _reconstruct_argv (it fired on every non-create command in production)
  - Rewrite scrape_generic_tool to use get_converter() instead of subprocess calls to removed main() functions
  - SkillConverter.run() checks the build_skill() return value and returns 1 if it is False
  - estimate_pages_tool uses -m module invocation instead of a .py file path

  Low-priority fixes:
  - get_converter() raises a descriptive ValueError on a class name typo
  - test_default_values: save/clear API key env vars before asserting mode
  - test_get_converter_pdf: fix config key "path" → "pdf_path"

  3056 passed, 4 failed (pre-existing dep version issues), 32 skipped.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: update MCP server tests to mock converter instead of subprocess

  scrape_docs_tool now uses get_converter() + _run_converter() in-process instead of run_subprocess_with_streaming. Updated 4 TestScrapeDocsTool tests to mock the converter layer instead of the removed subprocess path.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: YusufKaraaslanSpyke <yusuf@spykegames.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
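A hedged sketch of the override() tuple save/restore referenced above, continuing the earlier ExecutionContext sketch; exposing it as a contextmanager is an assumption:

```python
import threading
from contextlib import contextmanager


class ExecutionContext:
    _instance = None
    _initialized = False
    _lock = threading.Lock()

    @classmethod
    @contextmanager
    def override(cls, temp: "ExecutionContext"):
        with cls._lock:
            saved = (cls._instance, cls._initialized)  # one tuple, one restore
            cls._instance, cls._initialized = temp, True
        try:
            yield temp
        finally:
            with cls._lock:
                # Restoring _initialized too prevents re-init inside the block (#10)
                cls._instance, cls._initialized = saved
```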
Vector Database Export workflow (189 lines, 5.9 KiB, YAML):
```yaml
name: Vector Database Export

on:
  workflow_dispatch:
    inputs:
      skill_name:
        description: 'Skill name to export (e.g., react, django, godot)'
        required: true
        type: string
      targets:
        description: 'Vector databases to export (comma-separated: weaviate,chroma,faiss,qdrant or "all")'
        required: true
        default: 'all'
        type: string
      config_path:
        description: 'Path to config file (optional, auto-detected from skill_name if not provided)'
        required: false
        type: string
  schedule:
    # Run weekly on Sunday at 2 AM UTC for popular frameworks
    - cron: '0 2 * * 0'

jobs:
  export:
    name: Export to Vector Databases
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        # For scheduled runs, export popular frameworks
        skill: ${{ github.event_name == 'schedule' && fromJson('["react", "django", "godot", "fastapi"]') || fromJson(format('["{0}"]', github.event.inputs.skill_name)) }}

    env:
      SKILL_NAME: ${{ matrix.skill }}
      TARGETS_INPUT: ${{ github.event.inputs.targets }}
      CONFIG_PATH_INPUT: ${{ github.event.inputs.config_path }}

    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e .

      - name: Determine config path
        id: config
        run: |
          if [ -n "$CONFIG_PATH_INPUT" ]; then
            echo "path=$CONFIG_PATH_INPUT" >> $GITHUB_OUTPUT
          else
            echo "path=configs/$SKILL_NAME.json" >> $GITHUB_OUTPUT
          fi

      - name: Check if config exists
        id: check_config
        run: |
          CONFIG_FILE="${{ steps.config.outputs.path }}"
          if [ -f "$CONFIG_FILE" ]; then
            echo "exists=true" >> $GITHUB_OUTPUT
          else
            echo "exists=false" >> $GITHUB_OUTPUT
            echo "⚠️ Config not found: $CONFIG_FILE"
          fi

      - name: Scrape documentation
        if: steps.check_config.outputs.exists == 'true'
        run: |
          echo "📥 Scraping documentation for $SKILL_NAME..."
          skill-seekers create "${{ steps.config.outputs.path }}" --max-pages 100
        continue-on-error: true

      - name: Determine export targets
        id: targets
        run: |
          TARGETS="${TARGETS_INPUT:-all}"
          if [ "$TARGETS" = "all" ]; then
            echo "list=weaviate chroma faiss qdrant" >> $GITHUB_OUTPUT
          else
            echo "list=$(echo "$TARGETS" | tr ',' ' ')" >> $GITHUB_OUTPUT
          fi

      - name: Export to vector databases
        if: steps.check_config.outputs.exists == 'true'
        env:
          EXPORT_TARGETS: ${{ steps.targets.outputs.list }}
        run: |
          SKILL_DIR="output/$SKILL_NAME"

          if [ ! -d "$SKILL_DIR" ]; then
            echo "❌ Skill directory not found: $SKILL_DIR"
            exit 1
          fi

          echo "📦 Exporting $SKILL_NAME to vector databases..."

          for target in $EXPORT_TARGETS; do
            echo ""
            echo "🔹 Exporting to $target..."

            # Use adaptor directly via CLI
            python3 -c "
          from pathlib import Path
          from skill_seekers.cli.adaptors import get_adaptor
          adaptor = get_adaptor('$target')
          package_path = adaptor.package(Path('$SKILL_DIR'), Path('output'))
          print(f'Exported to {package_path}')
          "

            if [ $? -eq 0 ]; then
              echo "✅ $target export complete"
            else
              echo "❌ $target export failed"
            fi
          done

      - name: Generate quality report
        if: steps.check_config.outputs.exists == 'true'
        run: |
          SKILL_DIR="output/$SKILL_NAME"

          if [ -d "$SKILL_DIR" ]; then
            echo "📊 Generating quality metrics..."

            python3 -c "
          from pathlib import Path
          from skill_seekers.cli.quality_metrics import QualityAnalyzer
          analyzer = QualityAnalyzer(Path('$SKILL_DIR'))
          report = analyzer.generate_report()
          formatted = analyzer.format_report(report)
          print(formatted)
          with open('quality_report_${SKILL_NAME}.txt', 'w') as f:
              f.write(formatted)
          "
          fi
        continue-on-error: true

      - name: Upload vector database exports
        if: steps.check_config.outputs.exists == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: ${{ env.SKILL_NAME }}-vector-exports
          path: |
            output/${{ env.SKILL_NAME }}-*.json
          retention-days: 30

      - name: Upload quality report
        if: steps.check_config.outputs.exists == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: ${{ env.SKILL_NAME }}-quality-report
          path: quality_report_${{ env.SKILL_NAME }}.txt
        retention-days: 30
        continue-on-error: true

      - name: Create export summary
        if: steps.check_config.outputs.exists == 'true'
        env:
          EXPORT_TARGETS: ${{ steps.targets.outputs.list }}
        run: |
          echo "## 📦 Vector Database Export Summary: $SKILL_NAME" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          for target in $EXPORT_TARGETS; do
            FILE="output/${SKILL_NAME}-${target}.json"
            if [ -f "$FILE" ]; then
              SIZE=$(du -h "$FILE" | cut -f1)
              echo "✅ **$target**: $SIZE" >> $GITHUB_STEP_SUMMARY
            else
              echo "❌ **$target**: Export failed" >> $GITHUB_STEP_SUMMARY
            fi
          done

          echo "" >> $GITHUB_STEP_SUMMARY

          if [ -f "quality_report_${SKILL_NAME}.txt" ]; then
            echo "### 📊 Quality Metrics" >> $GITHUB_STEP_SUMMARY
            echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
            head -30 "quality_report_${SKILL_NAME}.txt" >> $GITHUB_STEP_SUMMARY
            echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
          fi
```
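For reference, the same export the workflow performs can be run locally with the adaptor API used in the "Export to vector databases" step above; the skill path here is illustrative:

```python
from pathlib import Path

from skill_seekers.cli.adaptors import get_adaptor

skill_dir = Path("output/react")  # produced by `skill-seekers create`
for target in ("weaviate", "chroma", "faiss", "qdrant"):
    adaptor = get_adaptor(target)
    package_path = adaptor.package(skill_dir, Path("output"))
    print(f"Exported to {package_path}")
```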