
# AI System Integrations with Skill Seekers

> **Universal Preprocessor:** Transform documentation into structured knowledge for any AI system


## 🤔 Which Integration Should I Use?

| Your Goal | Recommended Tool | Format | Setup Time | Guide |
|---|---|---|---|---|
| Build RAG with Python | LangChain | `--target langchain` | 5 min | Guide |
| Query engine from docs | LlamaIndex | `--target llama-index` | 5 min | Guide |
| Vector database only | Pinecone/Weaviate | `--target [db]` | 3 min | Guide |
| AI coding (VS Code fork) | Cursor | `--target claude` | 5 min | Guide |
| AI coding (Windsurf) | Windsurf | `--target markdown` | 5 min | Guide |
| AI coding (VS Code ext) | Cline (MCP) | `--target claude` | 10 min | Guide |
| AI coding (any IDE) | Continue.dev | `--target markdown` | 5 min | Guide |
| Claude AI chat | Claude | `--target claude` | 3 min | Guide |
| Chunked for RAG | Any + chunking | `--chunk-for-rag` | + 2 min | RAG Guide |

## 📚 RAG & Vector Databases

### Production-Ready RAG Frameworks

Transform documentation into RAG-ready formats for AI-powered search and retrieval:

| Framework | Users | Format | Best For | Guide |
|---|---|---|---|---|
| LangChain | 500K+ | `Document` | Python RAG, most popular | Setup → |
| LlamaIndex | 200K+ | `TextNode` | Q&A focus, query engine | Setup → |
| Haystack | 50K+ | `Document` | Enterprise, multi-language | Setup → |

**Quick Example:**

```bash
# Generate LangChain documents
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target langchain

# Use in RAG pipeline
python examples/langchain-rag-pipeline/quickstart.py
```
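The packaged output can then be loaded programmatically. A hedged sketch (the `chunks.json` filename and the `text`/`metadata` fields are assumptions about the export layout, not a documented contract — inspect your actual output directory first):

```python
import json
from pathlib import Path

def load_chunks(package_dir):
    """Load packaged chunks as (text, metadata) pairs.

    Assumes the package contains a chunks.json file whose entries carry
    "text" and optional "metadata" fields -- adjust to the layout your
    export actually produces.
    """
    chunks = json.loads((Path(package_dir) / "chunks.json").read_text(encoding="utf-8"))
    return [(c["text"], c.get("metadata", {})) for c in chunks]

# In a real pipeline you would wrap each pair, e.g.:
# from langchain_core.documents import Document
# docs = [Document(page_content=t, metadata=m) for t, m in load_chunks("output/react-langchain")]
```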

### Vector Database Integrations

Direct upload to vector databases without RAG frameworks:

| Database | Type | Best For | Guide |
|---|---|---|---|
| Pinecone | Cloud | Production, serverless | Setup → |
| Weaviate | Self-hosted/Cloud | Enterprise, GraphQL | Setup → |
| Chroma | Local | Development, embeddings included | Setup → |
| FAISS | Local | High performance, Facebook | Setup → |
| Qdrant | Self-hosted/Cloud | Rust engine, filtering | Setup → |

**Quick Example:**

```bash
# Generate Pinecone format
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target pinecone

# Upsert to Pinecone
python examples/pinecone-upsert/quickstart.py
```

## 💻 AI Coding Assistants

### IDE-Native AI Tools

Give AI coding assistants expert knowledge of your frameworks:

| Tool | Type | IDEs | Format | Setup | Guide |
|---|---|---|---|---|---|
| Cursor | IDE (VS Code fork) | Cursor IDE | `.cursorrules` | 5 min | Setup → |
| Windsurf | IDE (Codeium) | Windsurf IDE | `.windsurfrules` | 5 min | Setup → |
| Cline | VS Code extension | VS Code | `.clinerules` + MCP | 10 min | Setup → |
| Continue.dev | Plugin | VS Code, JetBrains, Vim | HTTP context | 5 min | Setup → |

**Quick Example:**

```bash
# For any AI coding assistant (Cursor, Windsurf, Cline, Continue.dev)
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target markdown  # or --target claude

# Copy to your project
cp output/django-markdown/SKILL.md my-project/.cursorrules  # or appropriate config
```

**Comparison:**

| Feature | Cursor | Windsurf | Cline | Continue.dev |
|---|---|---|---|---|
| IDE Type | Fork (VS Code) | Native IDE | Extension | Plugin (multi-IDE) |
| Config File | `.cursorrules` | `.windsurfrules` | `.clinerules` | HTTP context provider |
| Multi-IDE | (Cursor only) | (Windsurf only) | (VS Code only) | (All IDEs) |
| MCP Support | | | | |
| Character Limit | No limit | 12K chars (6K per file) | No limit | No limit |
| Setup Complexity | Easy | Easy | Medium | Easy |
| Team Sharing | Git-tracked file | Git-tracked files | Git-tracked file | HTTP server |
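Because Windsurf enforces a character budget while the other tools do not, it can help to check a generated rules file before copying it in. A minimal sketch (the 12K default mirrors the limit in the table above; adjust if Windsurf's actual limits differ for your version):

```python
from pathlib import Path

def check_rules_size(path, limit=12_000):
    """Return True when the rules file fits within `limit` characters.

    Windsurf enforces roughly a 12K-character total (6K per file);
    Cursor, Cline, and Continue.dev impose no hard limit.
    """
    length = len(Path(path).read_text(encoding="utf-8"))
    if length > limit:
        print(f"{path}: {length} chars, {length - limit} over the {limit}-char limit")
        return False
    print(f"{path}: {length} chars, fits within {limit}")
    return True
```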

## 🎯 AI Chat Platforms

Upload documentation as custom skills to AI chat platforms:

| Platform | Provider | Format | Best For | Guide |
|---|---|---|---|---|
| Claude | Anthropic | ZIP + YAML | Claude.ai Projects | Setup → |
| Gemini | Google | tar.gz | Gemini AI | Setup → |
| ChatGPT | OpenAI | ZIP + Vector Store | GPT Actions | Setup → |
| MiniMax | MiniMax | ZIP | MiniMax AI Platform | Setup → |

**Quick Example:**

```bash
# Generate Claude skill
skill-seekers scrape --config configs/vue.json
skill-seekers package output/vue --target claude

# Upload to Claude
skill-seekers upload output/vue-claude.zip --target claude
```

## 🧠 Choosing the Right Integration

### By Use Case

| Your Goal | Best Integration | Why? | Setup Time |
|---|---|---|---|
| Build Python RAG pipeline | LangChain | Most popular, 500K+ users, extensive docs | 5 min |
| Query engine from docs | LlamaIndex | Optimized for Q&A, built-in persistence | 5 min |
| Enterprise RAG system | Haystack | Production-ready, multi-language support | 10 min |
| Vector DB only (no framework) | Pinecone/Weaviate/Chroma | Direct upload, no framework overhead | 3 min |
| AI coding (VS Code fork) | Cursor | Best integration, native `.cursorrules` | 5 min |
| AI coding (flow-based) | Windsurf | Unique flow paradigm, Codeium AI | 5 min |
| AI coding (VS Code ext) | Cline | Claude in VS Code, MCP integration | 10 min |
| AI coding (any IDE) | Continue.dev | Works everywhere, open-source | 5 min |
| Chat with documentation | Claude/Gemini/ChatGPT/MiniMax | Direct upload as custom skill | 3 min |

### By Technical Requirements

| Requirement | Compatible Integrations |
|---|---|
| Python required | LangChain, LlamaIndex, Haystack, all vector DBs |
| No dependencies | Cursor, Windsurf, Cline, Continue.dev (markdown export) |
| Cloud-hosted | Pinecone, Claude, Gemini, ChatGPT |
| Self-hosted | Chroma, FAISS, Qdrant, Continue.dev |
| Multi-language | Haystack, Continue.dev |
| VS Code specific | Cursor, Cline, Continue.dev |
| IDE agnostic | LangChain, LlamaIndex, Continue.dev |
| Real-time updates | Continue.dev (HTTP server), MCP servers |

### By Team Size

| Team Size | Recommended Stack | Why? |
|---|---|---|
| Solo developer | Cursor + Claude + Chroma (local) | Simple setup, no infrastructure |
| Small team (2-5) | Continue.dev + LangChain + Pinecone | IDE-agnostic, cloud vector DB |
| Medium team (5-20) | Windsurf/Cursor + LlamaIndex + Weaviate | Good balance of features |
| Enterprise (20+) | Continue.dev + Haystack + Qdrant/Weaviate | Production-ready, scalable |

### By Development Environment

| Environment | Recommended Tools | Setup |
|---|---|---|
| VS Code only | Cursor (fork) or Cline (extension) | `.cursorrules` or `.clinerules` |
| JetBrains only | Continue.dev | HTTP context provider |
| Mixed IDEs | Continue.dev | Same config, all IDEs |
| Vim/Neovim | Continue.dev | Plugin + HTTP server |
| Multiple frameworks | Continue.dev + RAG pipeline | HTTP server + vector search |

## 🚀 Quick Decision Tree

```text
Do you need RAG/search?
├─ Yes → Use RAG framework (LangChain/LlamaIndex/Haystack)
│   ├─ Beginner? → LangChain (most docs)
│   ├─ Q&A focus? → LlamaIndex (optimized for queries)
│   └─ Enterprise? → Haystack (production-ready)
│
└─ No → Use AI coding tool or chat platform
    ├─ Need AI coding assistant?
    │   ├─ Use VS Code?
    │   │   ├─ Want native fork? → Cursor
    │   │   └─ Want extension? → Cline
    │   ├─ Use other IDE? → Continue.dev
    │   ├─ Use Windsurf? → Windsurf
    │   └─ Team uses mixed IDEs? → Continue.dev
    │
    └─ Just chat with docs? → Claude/Gemini/ChatGPT/MiniMax
```

## 🎨 Common Patterns

### Pattern 1: RAG + AI Coding

**Best for:** Deep documentation search + context-aware coding

```bash
# 1. Generate RAG pipeline (LangChain)
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target langchain --chunk-for-rag

# 2. Generate AI coding context (Cursor)
skill-seekers package output/django --target claude

# 3. Use both:
# - Cursor: Quick context for common patterns
# - RAG: Deep search for complex questions

# Copy to project
cp output/django-claude/SKILL.md my-project/.cursorrules

# Query RAG when needed
python rag_search.py "How to implement custom Django middleware?"
```

### Pattern 2: Multi-IDE Team Consistency

**Best for:** Teams using different IDEs

```bash
# 1. Generate documentation
skill-seekers scrape --config configs/react.json

# 2. Set up Continue.dev HTTP server (team server)
python context_server.py --host 0.0.0.0 --port 8765
```

Team members then configure Continue.dev in `~/.continue/config.json` (same for all IDEs):

```json
{
  "contextProviders": [{
    "name": "http",
    "params": {
      "url": "http://team-server:8765/docs/react",
      "title": "react-docs"
    }
  }]
}
```

**Result:** VS Code, IntelliJ, PyCharm all use the same context!
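If you don't have a `context_server.py` handy, a stdlib-only sketch of such a server looks roughly like this. The request/response schema here is an assumption (a POSTed `query` answered with a list of `name`/`description`/`content` items); verify it against the Continue.dev HTTP context provider documentation before relying on it:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Load your packaged SKILL.md exports here, keyed by framework name.
DOCS = {"react": "React docs content ..."}

class ContextHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body and pull out the (assumed) "query" field.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        query = json.loads(body or b"{}").get("query", "")
        # Assumed response shape: a list of context items with
        # name/description/content fields.
        items = [
            {"name": name, "description": f"{name} docs", "content": text}
            for name, text in DOCS.items()
            if not query or query.lower() in text.lower()
        ]
        payload = json.dumps(items).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8765), ContextHandler).serve_forever()
```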

### Pattern 3: Full-Stack Development

**Best for:** Backend + frontend with different frameworks

```bash
# 1. Generate backend context (FastAPI)
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target markdown

# 2. Generate frontend context (Vue)
skill-seekers scrape --config configs/vue.json
skill-seekers package output/vue --target markdown

# 3. For Cursor (modular rules):
cat output/fastapi-markdown/SKILL.md >> .cursorrules
printf '\n\n# Frontend Framework\n\n' >> .cursorrules  # printf, not echo: echo "\n" does not expand escapes in bash
cat output/vue-markdown/SKILL.md >> .cursorrules
```

For Continue.dev, register multiple providers instead:

```json
{
  "contextProviders": [
    {"name": "http", "params": {"url": "http://localhost:8765/docs/fastapi"}},
    {"name": "http", "params": {"url": "http://localhost:8765/docs/vue"}}
  ]
}
```

Now the AI knows both backend and frontend patterns!
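The concatenation steps above can also be scripted, which avoids shell quoting pitfalls as more frameworks are added. A sketch (`build_cursorrules` and its labels are illustrative helpers, not part of Skill Seekers):

```python
from pathlib import Path

def build_cursorrules(skill_files, out_path=".cursorrules"):
    """Concatenate per-framework SKILL.md exports into one rules file.

    `skill_files` is a list of (label, path) pairs; each section gets a
    labelled heading, mirroring the shell steps above.
    """
    parts = []
    for label, path in skill_files:
        parts.append(f"\n\n# {label}\n\n" + Path(path).read_text(encoding="utf-8"))
    Path(out_path).write_text("".join(parts).lstrip(), encoding="utf-8")

# Example call (paths are hypothetical):
# build_cursorrules([
#     ("Backend Framework", "output/fastapi-markdown/SKILL.md"),
#     ("Frontend Framework", "output/vue-markdown/SKILL.md"),
# ])
```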

### Pattern 4: Documentation + Codebase Analysis

**Best for:** Custom internal frameworks

```bash
# 1. Scrape public documentation
skill-seekers scrape --config configs/custom-framework.json

# 2. Analyze internal codebase
skill-seekers analyze --directory /path/to/internal/repo --comprehensive

# 3. Merge both:
skill-seekers merge-sources \
  --docs output/custom-framework \
  --codebase output/internal-repo \
  --output output/complete-knowledge

# 4. Package for any platform
skill-seekers package output/complete-knowledge --target [platform]

# Result: Documentation + real-world code patterns!
```

## 💡 Best Practices

### 1. Start Simple, Scale Up

**Phase 1: Single framework, single tool**

```bash
# Week 1: Just Cursor + React
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target claude
cp output/react-claude/SKILL.md .cursorrules
```

**Phase 2: Add RAG for deep search**

```bash
# Week 2: Add LangChain for complex queries
skill-seekers package output/react --target langchain --chunk-for-rag
# Now you have: Cursor (quick) + RAG (deep)
```

**Phase 3: Scale to team**

```bash
# Week 3: Continue.dev HTTP server for team
python context_server.py --host 0.0.0.0
# Team members configure Continue.dev
```

### 2. Layer Your Context

**Priority order:**

1. **Project conventions** (highest priority)
   - Custom patterns
   - Team standards
   - Company guidelines
2. **Framework documentation** (medium priority)
   - Official best practices
   - Common patterns
   - API reference
3. **RAG search** (lowest priority)
   - Deep documentation search
   - Edge cases
   - Historical context
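One way to honor this priority order mechanically is to assemble context from the top layer down and stop once a character budget is exhausted, so lower-priority layers are dropped first. A sketch (the 24K default budget is an arbitrary assumption; size it for your tool's context limit):

```python
def layer_context(layers, budget=24_000):
    """Assemble context from highest- to lowest-priority layers.

    `layers` is a list of (name, text) pairs, highest priority first.
    Whole lower-priority layers are dropped once the character budget
    would be exceeded, so project conventions always survive.
    """
    out, used = [], 0
    for name, text in layers:
        block = f"# {name}\n{text}\n"
        if used + len(block) > budget:
            break  # drop this layer and everything below it
        out.append(block)
        used += len(block)
    return "".join(out)
```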

**Example (Cursor):**

```bash
# Layer 1: Project conventions (loaded first)
cat > .cursorrules << 'EOF'
# Project-Specific Patterns (HIGHEST PRIORITY)
Always use async/await for database operations.
Never use 'any' type in TypeScript.
EOF

# Layer 2: Framework docs (loaded second)
cat output/react-markdown/SKILL.md >> .cursorrules

# Layer 3: RAG search (when needed)
# Query separately for deep questions
```

### 3. Update Regularly

**Monthly: Framework documentation**

```bash
# Check for framework updates
skill-seekers scrape --config configs/react.json
# If there is a new version, re-package
skill-seekers package output/react --target [your-platform]
```

**Quarterly: Codebase analysis**

```bash
# Re-analyze internal codebase for new patterns
skill-seekers analyze --directory . --comprehensive
```

**Yearly: Architecture review**

- Review and update project conventions
- Check whether new integrations are available

### 4. Measure Effectiveness

Track these metrics:

- **Context hit rate:** How often the AI references your documentation
- **Code quality:** Fewer pattern violations after adding context
- **Development speed:** Time saved on common tasks
- **Team consistency:** Similar code patterns across team members

**Example monitoring:** track Cursor suggestion quality before and after adding `.cursorrules`:

```text
Before: 60% generic suggestions, 40% framework-specific
After:  20% generic suggestions, 80% framework-specific
Improvement: 2x better context awareness
```

### 5. Share with Team

**Git-tracked configs:**

```bash
# Add to version control
git add .cursorrules
git add .clinerules
git add .continue/config.json
git commit -m "Add AI assistant configuration"

# Team benefits immediately
git pull  # New team member gets context
```

**Documentation:**

```markdown
# README.md

## AI Assistant Setup

This project uses Cursor with custom rules:

1. Install Cursor: https://cursor.sh/
2. Open project: `cursor .`
3. Rules auto-load from `.cursorrules`
4. Start coding with AI context!
```

## 📖 Complete Guides

- RAG & Vector Databases
- AI Coding Assistants
- AI Chat Platforms
- Advanced Topics


## 🚀 Quick Start Examples

**For RAG Pipelines:**

```bash
# Generate LangChain documents
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target langchain

# Use in RAG pipeline
python examples/langchain-rag-pipeline/quickstart.py
```

**For AI Coding:**

```bash
# Generate Cursor rules
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target claude

# Copy to project
cp output/django-claude/SKILL.md my-project/.cursorrules
```

**For Vector Databases:**

```bash
# Generate Pinecone format
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target pinecone

# Upsert to Pinecone
python examples/pinecone-upsert/quickstart.py
```

**For Multi-IDE Teams:**

```bash
# Generate documentation
skill-seekers scrape --config configs/vue.json

# Start HTTP context server
python examples/continue-dev-universal/context_server.py

# Configure Continue.dev (same config, all IDEs)
# ~/.continue/config.json
```

## 🎯 Platform Comparison Matrix

| Feature | LangChain | LlamaIndex | Cursor | Windsurf | Cline | Continue.dev | Claude Chat |
|---|---|---|---|---|---|---|---|
| Setup Time | 5 min | 5 min | 5 min | 5 min | 10 min | 5 min | 3 min |
| Python Required | | | | | | | |
| Works Offline | | | | | | | |
| Multi-IDE | | | | | | | |
| Real-time Updates (MCP) | | | | | | | |
| Team Sharing | Git | Git | Git | Git | Git | HTTP server | Cloud |
| Context Limit | No limit | No limit | No limit | 12K chars | No limit | No limit | 200K tokens |
| Custom Search | | | | | | | |
| Best For | RAG pipelines | Q&A engines | VS Code users | Windsurf users | Claude in VS Code | Multi-IDE teams | Quick chat |

## 🤝 Community & Support


## 📖 What's Next?

1. **Choose your integration** from the table above
2. **Follow the setup guide** (5-10 minutes)
3. **Test with your framework** using provided examples
4. **Customize for your project** with project-specific patterns
5. **Share with your team** via Git or HTTP server

**Need help deciding?** Ask in GitHub Discussions


**Last Updated:** February 7, 2026 · **Skill Seekers Version:** v2.10.0+