feat: Complete Phase 1 - AI Coding Assistant Integrations (v2.10.0)

Add comprehensive integration guides for 4 AI coding assistants:

## New Integration Guides (98KB total)
- docs/integrations/WINDSURF.md (20KB) - Windsurf IDE with .windsurfrules
- docs/integrations/CLINE.md (25KB) - Cline VS Code extension with MCP
- docs/integrations/CONTINUE_DEV.md (28KB) - Continue.dev for any IDE
- docs/integrations/INTEGRATIONS.md (25KB) - Comprehensive hub with decision tree

## Working Examples (3 directories, 11 files)
- examples/windsurf-fastapi-context/ - FastAPI + Windsurf automation
- examples/cline-django-assistant/ - Django + Cline with MCP server
- examples/continue-dev-universal/ - HTTP context server for all IDEs

## README.md Updates
- Updated tagline: Universal preprocessor for 10+ AI systems
- Expanded Supported Integrations table (7 → 10 platforms)
- Added 'AI Coding Assistant Integrations' section (60+ lines)
- Cross-links to all new guides and examples

## Impact
- Week 2 of ACTION_PLAN.md: 4/4 tasks complete (100%) 
- Total new documentation: ~3,000 lines
- Total new code: ~1,000 lines (automation scripts, servers)
- Integration coverage: LangChain, LlamaIndex, Pinecone, Cursor, Windsurf,
  Cline, Continue.dev, Claude, Gemini, ChatGPT

## Key Features
- All guides follow proven 11-section pattern from CURSOR.md
- Real-world examples with automation scripts
- Multi-IDE consistency (Continue.dev works in VS Code, JetBrains, Vim)
- MCP integration for dynamic documentation access
- Complete troubleshooting sections with solutions

Positions Skill Seekers as universal preprocessor for ANY AI system.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
yusyus
2026-02-07 20:46:26 +03:00
parent eff6673c89
commit bdd61687c5
15 changed files with 5892 additions and 5 deletions

docs/integrations/CLINE.md — new file, 1,052 lines (diff suppressed because it is too large)

File diff suppressed because it is too large


@@ -0,0 +1,549 @@
# AI System Integrations with Skill Seekers
**Universal Preprocessor:** Transform documentation into structured knowledge for any AI system
---
## 🤔 Which Integration Should I Use?
| Your Goal | Recommended Tool | Format | Setup Time | Guide |
|-----------|-----------------|--------|------------|-------|
| Build RAG with Python | LangChain | `--target langchain` | 5 min | [Guide](LANGCHAIN.md) |
| Query engine from docs | LlamaIndex | `--target llama-index` | 5 min | [Guide](LLAMA_INDEX.md) |
| Vector database only | Pinecone/Weaviate | `--target [db]` | 3 min | [Guide](PINECONE.md) |
| AI coding (VS Code fork) | Cursor | `--target claude` | 5 min | [Guide](CURSOR.md) |
| AI coding (Windsurf) | Windsurf | `--target markdown` | 5 min | [Guide](WINDSURF.md) |
| AI coding (VS Code ext) | Cline (MCP) | `--target claude` | 10 min | [Guide](CLINE.md) |
| AI coding (any IDE) | Continue.dev | `--target markdown` | 5 min | [Guide](CONTINUE_DEV.md) |
| Claude AI chat | Claude | `--target claude` | 3 min | [Guide](CLAUDE.md) |
| Chunked for RAG | Any + chunking | `--chunk-for-rag` | + 2 min | [RAG Guide](RAG_PIPELINES.md) |
---
## 📚 RAG & Vector Databases
### Production-Ready RAG Frameworks
Transform documentation into RAG-ready formats for AI-powered search and retrieval:

| Framework | Users | Format | Best For | Guide |
|-----------|-------|--------|----------|-------|
| **[LangChain](LANGCHAIN.md)** | 500K+ | Document | Python RAG, most popular | [Setup →](LANGCHAIN.md) |
| **[LlamaIndex](LLAMA_INDEX.md)** | 200K+ | TextNode | Q&A focus, query engine | [Setup →](LLAMA_INDEX.md) |
| **[Haystack](HAYSTACK.md)** | 50K+ | Document | Enterprise, multi-language | *Coming in v2.11.0* |
**Quick Example:**
```bash
# Generate LangChain documents
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target langchain
# Use in RAG pipeline
python examples/langchain-rag-pipeline/quickstart.py
```
### Vector Database Integrations
Direct upload to vector databases without RAG frameworks:

| Database | Type | Best For | Guide |
|----------|------|----------|-------|
| **[Pinecone](PINECONE.md)** | Cloud | Production, serverless | [Setup →](PINECONE.md) |
| **[Weaviate](WEAVIATE.md)** | Self-hosted/Cloud | Enterprise, GraphQL | [Setup →](WEAVIATE.md) |
| **[Chroma](CHROMA.md)** | Local | Development, embeddings included | [Setup →](CHROMA.md) |
| **[FAISS](FAISS.md)** | Local | High performance, Facebook | [Setup →](FAISS.md) |
| **[Qdrant](QDRANT.md)** | Self-hosted/Cloud | Rust engine, filtering | [Setup →](QDRANT.md) |
**Quick Example:**
```bash
# Generate Pinecone format
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target pinecone
# Upsert to Pinecone
python examples/pinecone-upsert/quickstart.py
```
---
## 💻 AI Coding Assistants
### IDE-Native AI Tools
Give AI coding assistants expert knowledge of your frameworks:

| Tool | Type | IDEs | Format | Setup | Guide |
|------|------|------|--------|-------|-------|
| **[Cursor](CURSOR.md)** | IDE (VS Code fork) | Cursor IDE | `.cursorrules` | 5 min | [Setup →](CURSOR.md) |
| **[Windsurf](WINDSURF.md)** | IDE (Codeium) | Windsurf IDE | `.windsurfrules` | 5 min | [Setup →](WINDSURF.md) |
| **[Cline](CLINE.md)** | VS Code Extension | VS Code | `.clinerules` + MCP | 10 min | [Setup →](CLINE.md) |
| **[Continue.dev](CONTINUE_DEV.md)** | Plugin | VS Code, JetBrains, Vim | HTTP context | 5 min | [Setup →](CONTINUE_DEV.md) |
**Quick Example:**
```bash
# For any AI coding assistant (Cursor, Windsurf, Cline, Continue.dev)
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target markdown # or --target claude
# Copy to your project
cp output/django-markdown/SKILL.md my-project/.cursorrules # or appropriate config
```
**Comparison:**

| Feature | Cursor | Windsurf | Cline | Continue.dev |
|---------|--------|----------|-------|--------------|
| **IDE Type** | Fork (VS Code) | Native IDE | Extension | Plugin (multi-IDE) |
| **Config File** | `.cursorrules` | `.windsurfrules` | `.clinerules` | HTTP context provider |
| **Multi-IDE** | ❌ (Cursor only) | ❌ (Windsurf only) | ❌ (VS Code only) | ✅ (All IDEs) |
| **MCP Support** | ✅ | ✅ | ✅ | ✅ |
| **Character Limit** | No limit | 12K chars (6K per file) | No limit | No limit |
| **Setup Complexity** | Easy ⭐ | Easy ⭐ | Medium ⭐⭐ | Easy ⭐ |
| **Team Sharing** | Git-tracked file | Git-tracked files | Git-tracked file | HTTP server |
---
## 🎯 AI Chat Platforms
Upload documentation as custom skills to AI chat platforms:

| Platform | Provider | Format | Best For | Guide |
|----------|----------|--------|----------|-------|
| **[Claude](CLAUDE.md)** | Anthropic | ZIP + YAML | Claude.ai Projects | [Setup →](CLAUDE.md) |
| **[Gemini](GEMINI_INTEGRATION.md)** | Google | tar.gz | Gemini AI | [Setup →](GEMINI_INTEGRATION.md) |
| **[ChatGPT](OPENAI_INTEGRATION.md)** | OpenAI | ZIP + Vector Store | GPT Actions | [Setup →](OPENAI_INTEGRATION.md) |
**Quick Example:**
```bash
# Generate Claude skill
skill-seekers scrape --config configs/vue.json
skill-seekers package output/vue --target claude
# Upload to Claude
skill-seekers upload output/vue-claude.zip --target claude
```
---
## 🧠 Choosing the Right Integration
### By Use Case
| Your Goal | Best Integration | Why? | Setup Time |
|-----------|-----------------|------|------------|
| **Build Python RAG pipeline** | LangChain | Most popular, 500K+ users, extensive docs | 5 min |
| **Query engine from docs** | LlamaIndex | Optimized for Q&A, built-in persistence | 5 min |
| **Enterprise RAG system** | Haystack | Production-ready, multi-language support | 10 min |
| **Vector DB only (no framework)** | Pinecone/Weaviate/Chroma | Direct upload, no framework overhead | 3 min |
| **AI coding (VS Code fork)** | Cursor | Best integration, native `.cursorrules` | 5 min |
| **AI coding (flow-based)** | Windsurf | Unique flow paradigm, Codeium AI | 5 min |
| **AI coding (VS Code ext)** | Cline | Claude in VS Code, MCP integration | 10 min |
| **AI coding (any IDE)** | Continue.dev | Works everywhere, open-source | 5 min |
| **Chat with documentation** | Claude/Gemini/ChatGPT | Direct upload as custom skill | 3 min |
### By Technical Requirements
| Requirement | Compatible Integrations |
|-------------|-------------------------|
| **Python required** | LangChain, LlamaIndex, Haystack, all vector DBs |
| **No dependencies** | Cursor, Windsurf, Cline, Continue.dev (markdown export) |
| **Cloud-hosted** | Pinecone, Claude, Gemini, ChatGPT |
| **Self-hosted** | Chroma, FAISS, Qdrant, Continue.dev |
| **Multi-language** | Haystack, Continue.dev |
| **VS Code specific** | Cursor, Cline, Continue.dev |
| **IDE agnostic** | LangChain, LlamaIndex, Continue.dev |
| **Real-time updates** | Continue.dev (HTTP server), MCP servers |
### By Team Size
| Team Size | Recommended Stack | Why? |
|-----------|------------------|------|
| **Solo developer** | Cursor + Claude + Chroma (local) | Simple setup, no infrastructure |
| **Small team (2-5)** | Continue.dev + LangChain + Pinecone | IDE-agnostic, cloud vector DB |
| **Medium team (5-20)** | Windsurf/Cursor + LlamaIndex + Weaviate | Good balance of features |
| **Enterprise (20+)** | Continue.dev + Haystack + Qdrant/Weaviate | Production-ready, scalable |
### By Development Environment
| Environment | Recommended Tools | Setup |
|-------------|------------------|-------|
| **VS Code Only** | Cursor (fork) or Cline (extension) | `.cursorrules` or `.clinerules` |
| **JetBrains Only** | Continue.dev | HTTP context provider |
| **Mixed IDEs** | Continue.dev | Same config, all IDEs |
| **Vim/Neovim** | Continue.dev | Plugin + HTTP server |
| **Multiple Frameworks** | Continue.dev + RAG pipeline | HTTP server + vector search |
---
## 🚀 Quick Decision Tree
```
Do you need RAG/search?
├─ Yes → Use RAG framework (LangChain/LlamaIndex/Haystack)
│ ├─ Beginner? → LangChain (most docs)
│ ├─ Q&A focus? → LlamaIndex (optimized for queries)
│ └─ Enterprise? → Haystack (production-ready)
└─ No → Use AI coding tool or chat platform
├─ Need AI coding assistant?
│ ├─ Use VS Code?
│ │ ├─ Want native fork? → Cursor
│ │ └─ Want extension? → Cline
│ ├─ Use other IDE? → Continue.dev
│ ├─ Use Windsurf? → Windsurf
│ └─ Team uses mixed IDEs? → Continue.dev
└─ Just chat with docs? → Claude/Gemini/ChatGPT
```
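For teams that script their tooling setup, the tree above can also be encoded as a tiny helper. This is purely illustrative — the function and its parameter names are not part of Skill Seekers; the return values are the integration names from this page:

```python
# Illustrative encoding of the decision tree above (not part of skill-seekers).
def recommend(needs_rag: bool, beginner: bool = False, qa_focus: bool = False,
              enterprise: bool = False, ide: str = "", wants_fork: bool = False) -> str:
    if needs_rag:
        if beginner:
            return "LangChain"      # most docs
        if qa_focus:
            return "LlamaIndex"     # optimized for queries
        if enterprise:
            return "Haystack"       # production-ready
        return "LangChain"          # sensible default
    if ide == "vscode":
        return "Cursor" if wants_fork else "Cline"
    if ide == "windsurf":
        return "Windsurf"
    if ide in ("jetbrains", "vim", "mixed"):
        return "Continue.dev"       # works everywhere
    return "Claude/Gemini/ChatGPT"  # just chat with docs
```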
---
## 🎨 Common Patterns
### Pattern 1: RAG + AI Coding
**Best for:** Deep documentation search + context-aware coding
```bash
# 1. Generate RAG pipeline (LangChain)
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target langchain --chunk-for-rag
# 2. Generate AI coding context (Cursor)
skill-seekers package output/django --target claude
# 3. Use both:
# - Cursor: Quick context for common patterns
# - RAG: Deep search for complex questions
# Copy to project
cp output/django-claude/SKILL.md my-project/.cursorrules
# Query RAG when needed
python rag_search.py "How to implement custom Django middleware?"
```
### Pattern 2: Multi-IDE Team Consistency
**Best for:** Teams using different IDEs
```bash
# 1. Generate documentation
skill-seekers scrape --config configs/react.json
# 2. Set up Continue.dev HTTP server (team server)
python context_server.py --host 0.0.0.0 --port 8765
# 3. Team members configure Continue.dev:
# ~/.continue/config.json (same for all IDEs)
{
  "contextProviders": [{
    "name": "http",
    "params": {
      "url": "http://team-server:8765/docs/react",
      "title": "react-docs"
    }
  }]
}
# Result: VS Code, IntelliJ, PyCharm all use same context!
```
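A minimal stdlib-only sketch of what such a team context server could look like. The actual `examples/continue-dev-universal/context_server.py` may differ; the `/docs/<name>` route and the JSON response shape here are assumptions:

```python
# Minimal sketch of a team context server (assumed route: /docs/<name>).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In practice, load each entry from output/<name>-markdown/SKILL.md.
DOCS = {"react": "# React Docs\nUse hooks for state."}

class ContextHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.rsplit("/", 1)[-1]  # /docs/react -> "react"
        if name in DOCS:
            body = json.dumps(
                [{"name": name, "content": DOCS[name]}]
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def serve(host: str = "0.0.0.0", port: int = 8765) -> None:
    """Run the context server until interrupted."""
    HTTPServer((host, port), ContextHandler).serve_forever()
```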
### Pattern 3: Full-Stack Development
**Best for:** Backend + Frontend with different frameworks
```bash
# 1. Generate backend context (FastAPI)
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target markdown
# 2. Generate frontend context (Vue)
skill-seekers scrape --config configs/vue.json
skill-seekers package output/vue --target markdown
# 3. For Cursor (modular rules):
cat output/fastapi-markdown/SKILL.md >> .cursorrules
printf '\n\n# Frontend Framework\n\n' >> .cursorrules
cat output/vue-markdown/SKILL.md >> .cursorrules
# 4. For Continue.dev (multiple providers):
{
  "contextProviders": [
    {"name": "http", "params": {"url": "http://localhost:8765/docs/fastapi"}},
    {"name": "http", "params": {"url": "http://localhost:8765/docs/vue"}}
  ]
}
# Now AI knows BOTH backend AND frontend patterns!
```
### Pattern 4: Documentation + Codebase Analysis
**Best for:** Custom internal frameworks
```bash
# 1. Scrape public documentation
skill-seekers scrape --config configs/custom-framework.json
# 2. Analyze internal codebase
skill-seekers analyze --directory /path/to/internal/repo --comprehensive
# 3. Merge both:
skill-seekers merge-sources \
--docs output/custom-framework \
--codebase output/internal-repo \
--output output/complete-knowledge
# 4. Package for any platform
skill-seekers package output/complete-knowledge --target [platform]
# Result: Documentation + Real-world code patterns!
```
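A naive sketch of what merging the two sources could look like on disk. The real `skill-seekers merge-sources` command presumably does more (deduplication, cross-linking); the `docs/` and `codebase/` subdirectory layout here is an assumption for illustration:

```python
# Naive merge sketch: copy markdown from both sources into one knowledge dir.
import shutil
from pathlib import Path

def merge_sources(docs_dir: str, codebase_dir: str, out_dir: str) -> int:
    """Copy *.md from both inputs under out_dir/docs and out_dir/codebase."""
    out = Path(out_dir)
    copied = 0
    for prefix, src in (("docs", docs_dir), ("codebase", codebase_dir)):
        for md in Path(src).rglob("*.md"):
            dest = out / prefix / md.relative_to(src)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(md, dest)
            copied += 1
    return copied
```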
---
## 💡 Best Practices
### 1. Start Simple, Scale Up
**Phase 1:** Single framework, single tool
```bash
# Week 1: Just Cursor + React
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target claude
cp output/react-claude/SKILL.md .cursorrules
```
**Phase 2:** Add RAG for deep search
```bash
# Week 2: Add LangChain for complex queries
skill-seekers package output/react --target langchain --chunk-for-rag
# Now you have: Cursor (quick) + RAG (deep)
```
**Phase 3:** Scale to team
```bash
# Week 3: Continue.dev HTTP server for team
python context_server.py --host 0.0.0.0
# Team members configure Continue.dev
```
### 2. Layer Your Context
**Priority order:**
1. **Project conventions** (highest priority)
- Custom patterns
- Team standards
- Company guidelines
2. **Framework documentation** (medium priority)
- Official best practices
- Common patterns
- API reference
3. **RAG search** (lowest priority)
- Deep documentation search
- Edge cases
- Historical context
**Example (Cursor):**
```bash
# Layer 1: Project conventions (loaded first)
cat > .cursorrules << 'EOF'
# Project-Specific Patterns (HIGHEST PRIORITY)
Always use async/await for database operations.
Never use 'any' type in TypeScript.
EOF
# Layer 2: Framework docs (loaded second)
cat output/react-markdown/SKILL.md >> .cursorrules
# Layer 3: RAG search (when needed)
# Query separately for deep questions
```
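The layering above can be automated with a small assembly script. This is a sketch, not part of the CLI — the layer file names you pass in are your own:

```python
# Sketch: build .cursorrules from layered sources, highest priority first.
from pathlib import Path

def assemble_rules(layers: list[Path], out_file: Path) -> int:
    """Concatenate layer files in priority order, skipping missing ones.

    Returns the total character count of the assembled file.
    """
    parts = [p.read_text() for p in layers if p.exists()]
    text = "\n\n".join(parts)
    out_file.write_text(text)
    return len(text)
```

Typical usage: `assemble_rules([Path("conventions.md"), Path("output/react-markdown/SKILL.md")], Path(".cursorrules"))` — conventions land first so they take priority.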
### 3. Update Regularly
**Monthly:** Framework documentation
```bash
# Check for framework updates
skill-seekers scrape --config configs/react.json
# If new version, re-package
skill-seekers package output/react --target [your-platform]
```
**Quarterly:** Codebase analysis
```bash
# Re-analyze internal codebase for new patterns
skill-seekers analyze --directory . --comprehensive
```
**Yearly:** Architecture review
```bash
# Review and update project conventions
# Check if new integrations are available
```
### 4. Measure Effectiveness
**Track these metrics:**
- **Context hit rate:** How often AI references your documentation
- **Code quality:** Fewer pattern violations after adding context
- **Development speed:** Time saved on common tasks
- **Team consistency:** Similar code patterns across team members
**Example monitoring:**
```python
# Rough before/after comparison (illustrative numbers, not measurements).
# Label a sample of AI suggestions as generic vs framework-specific.
before = {"generic": 60, "framework_specific": 40}  # without .cursorrules
after = {"generic": 20, "framework_specific": 80}   # with .cursorrules

improvement = after["framework_specific"] / before["framework_specific"]
print(f"Context awareness improved {improvement:.0f}x")  # 2x better
```
### 5. Share with Team
**Git-tracked configs:**
```bash
# Add to version control
git add .cursorrules
git add .clinerules
git add .continue/config.json
git commit -m "Add AI assistant configuration"
# Team benefits immediately
git pull # New team member gets context
```
**Documentation:**
```markdown
# README.md
## AI Assistant Setup
This project uses Cursor with custom rules:
1. Install Cursor: https://cursor.sh/
2. Open project: `cursor .`
3. Rules auto-load from `.cursorrules`
4. Start coding with AI context!
```
---
## 📖 Complete Guides
### RAG & Vector Databases
- **[LangChain Integration](LANGCHAIN.md)** - 500K+ users, Document format
- **[LlamaIndex Integration](LLAMA_INDEX.md)** - 200K+ users, TextNode format
- **[Pinecone Integration](PINECONE.md)** - Cloud-native vector database
- **[Weaviate Integration](WEAVIATE.md)** - Enterprise-grade, GraphQL API
- **[Chroma Integration](CHROMA.md)** - Local-first, embeddings included
- **[RAG Pipelines Guide](RAG_PIPELINES.md)** - End-to-end RAG setup
### AI Coding Assistants
- **[Cursor Integration](CURSOR.md)** - VS Code fork with AI (`.cursorrules`)
- **[Windsurf Integration](WINDSURF.md)** - Codeium's IDE with AI flows
- **[Cline Integration](CLINE.md)** - Claude in VS Code (MCP integration)
- **[Continue.dev Integration](CONTINUE_DEV.md)** - Multi-platform, open-source
### AI Chat Platforms
- **[Claude Integration](CLAUDE.md)** - Anthropic's AI assistant
- **[Gemini Integration](GEMINI_INTEGRATION.md)** - Google's AI
- **[ChatGPT Integration](OPENAI_INTEGRATION.md)** - OpenAI
### Advanced Topics
- **[Multi-LLM Support](MULTI_LLM_SUPPORT.md)** - Platform comparison
- **[MCP Setup Guide](../MCP_SETUP.md)** - Model Context Protocol
---
## 🚀 Quick Start Examples
### For RAG Pipelines:
```bash
# Generate LangChain documents
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target langchain
# Use in RAG pipeline
python examples/langchain-rag-pipeline/quickstart.py
```
### For AI Coding:
```bash
# Generate Cursor rules
skill-seekers scrape --config configs/django.json
skill-seekers package output/django --target claude
# Copy to project
cp output/django-claude/SKILL.md my-project/.cursorrules
```
### For Vector Databases:
```bash
# Generate Pinecone format
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target pinecone
# Upsert to Pinecone
python examples/pinecone-upsert/quickstart.py
```
### For Multi-IDE Teams:
```bash
# Generate documentation
skill-seekers scrape --config configs/vue.json
# Start HTTP context server
python examples/continue-dev-universal/context_server.py
# Configure Continue.dev (same config, all IDEs)
# ~/.continue/config.json
```
---
## 🎯 Platform Comparison Matrix
| Feature | LangChain | LlamaIndex | Cursor | Windsurf | Cline | Continue.dev | Claude Chat |
|---------|-----------|------------|--------|----------|-------|--------------|-------------|
| **Setup Time** | 5 min | 5 min | 5 min | 5 min | 10 min | 5 min | 3 min |
| **Python Required** | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| **Works Offline** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| **Multi-IDE** | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ |
| **Real-time Updates** | ✅ | ✅ | ❌ | ❌ | ✅ (MCP) | ✅ | ❌ |
| **Team Sharing** | Git | Git | Git | Git | Git | HTTP server | Cloud |
| **Context Limit** | No limit | No limit | No limit | 12K chars | No limit | No limit | 200K tokens |
| **Custom Search** | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
| **Best For** | RAG pipelines | Q&A engines | VS Code users | Windsurf users | Claude in VS Code | Multi-IDE teams | Quick chat |
---
## 🤝 Community & Support
- **Questions:** [GitHub Discussions](https://github.com/yusufkaraaslan/Skill_Seekers/discussions)
- **Issues:** [GitHub Issues](https://github.com/yusufkaraaslan/Skill_Seekers/issues)
- **Website:** [skillseekersweb.com](https://skillseekersweb.com/)
- **Examples:** [GitHub Examples](https://github.com/yusufkaraaslan/Skill_Seekers/tree/main/examples)
---
## 📖 What's Next?
1. **Choose your integration** from the table above
2. **Follow the setup guide** (5-10 minutes)
3. **Test with your framework** using provided examples
4. **Customize for your project** with project-specific patterns
5. **Share with your team** via Git or HTTP server
**Need help deciding?** Ask in [GitHub Discussions](https://github.com/yusufkaraaslan/Skill_Seekers/discussions)
---
**Last Updated:** February 7, 2026
**Skill Seekers Version:** v2.10.0+


@@ -0,0 +1,986 @@
# Using Skill Seekers with Windsurf IDE
**Last Updated:** February 7, 2026
**Status:** Production Ready
**Difficulty:** Easy ⭐
---
## 🎯 The Problem
Windsurf IDE (by Codeium) offers powerful AI flows and Cascade agent, but:
- **Generic Knowledge** - AI doesn't know your project-specific frameworks or internal patterns
- **Manual Context** - Copy-pasting documentation into chat is tedious and breaks flow
- **Limited Memory** - Memory feature requires manual teaching through conversations
- **Context Limits** - Rules files are limited to 12,000 characters combined
**Example:**
> "When building a FastAPI app in Windsurf, Cascade might suggest outdated patterns or miss framework-specific best practices. You want the AI to reference comprehensive documentation without hitting character limits."
---
## ✨ The Solution
Use Skill Seekers to create **custom rules** for Windsurf's Cascade agent:
1. **Generate structured docs** from any framework or codebase
2. **Package as .windsurfrules** - Windsurf's markdown rules format
3. **Automatic Context** - Cascade references your docs in AI flows
4. **Modular Rules** - Split large docs into multiple rule files (6K chars each)
**Result:**
Windsurf's Cascade becomes an expert in your frameworks with persistent, automatic context that fits within character limits.
---
## 🚀 Quick Start (5 Minutes)
### Prerequisites
- Windsurf IDE installed (https://windsurf.com/)
- Python 3.10+ (for Skill Seekers)
### Installation
```bash
# Install Skill Seekers
pip install skill-seekers
# Verify installation
skill-seekers --version
```
### Generate .windsurfrules
```bash
# Example: FastAPI framework
skill-seekers scrape --config configs/fastapi.json
# Package for Windsurf (markdown format)
skill-seekers package output/fastapi --target markdown
# Extract SKILL.md
# output/fastapi-markdown/SKILL.md
```
### Setup in Windsurf
**Option 1: Project-Specific Rules** (recommended)
```bash
# Create rules directory
mkdir -p /path/to/your/project/.windsurf/rules
# Copy as rules.md
cp output/fastapi-markdown/SKILL.md /path/to/your/project/.windsurf/rules/fastapi.md
```
**Option 2: Legacy .windsurfrules** (single file)
```bash
# Copy to project root (legacy format)
cp output/fastapi-markdown/SKILL.md /path/to/your/project/.windsurfrules
```
**Option 3: Split Large Documentation** (for >6K char files)
```bash
# Skill Seekers automatically splits large files
skill-seekers package output/react --target markdown --split-rules
# This creates multiple rule files:
# output/react-markdown/rules/
# ├── core-concepts.md (5,800 chars)
# ├── hooks-reference.md (5,400 chars)
# ├── components-guide.md (5,900 chars)
# └── best-practices.md (4,200 chars)
# Copy all rules
cp -r output/react-markdown/rules/* /path/to/your/project/.windsurf/rules/
```
### Test in Windsurf
1. Open your project in Windsurf
2. Start Cascade (Cmd+L or Ctrl+L)
3. Test knowledge:
```
"Create a FastAPI endpoint with async database queries using best practices"
```
4. Verify Cascade references your documentation
---
## 📖 Detailed Setup Guide
### Step 1: Choose Your Documentation Source
**Option A: Use Preset Configs** (24+ frameworks)
```bash
# List available presets
ls configs/
# Popular presets:
# - react.json, vue.json, angular.json (Frontend)
# - django.json, fastapi.json, flask.json (Backend)
# - godot.json, unity.json (Game Development)
# - kubernetes.json, docker.json (Infrastructure)
```
**Option B: Custom Documentation**
Create `myframework-config.json`:
```json
{
  "name": "myframework",
  "description": "Custom framework documentation for Windsurf",
  "base_url": "https://docs.myframework.com/",
  "selectors": {
    "main_content": "article",
    "title": "h1",
    "code_blocks": "pre code"
  },
  "categories": {
    "getting_started": ["intro", "quickstart", "installation"],
    "core_concepts": ["concepts", "architecture", "patterns"],
    "api": ["api", "reference", "methods"],
    "guides": ["guide", "tutorial", "how-to"],
    "best_practices": ["best-practices", "tips", "patterns"]
  }
}
```
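Before scraping, it can help to sanity-check a hand-written config. This validator is illustrative, not part of the skill-seekers CLI — the required keys simply mirror the example above:

```python
# Sketch: sanity-check a custom config file (keys mirror the example above).
import json

REQUIRED = {"name", "base_url", "selectors", "categories"}

def validate_config(path: str) -> list[str]:
    """Return a list of problems; an empty list means the config looks OK."""
    with open(path) as f:
        cfg = json.load(f)
    errors = [f"missing key: {k}" for k in sorted(REQUIRED - cfg.keys())]
    if "selectors" in cfg and "main_content" not in cfg["selectors"]:
        errors.append("selectors.main_content is required")
    return errors
```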
**Option C: GitHub Repository**
```bash
# Analyze open-source codebase
skill-seekers github --repo facebook/react
# Or local codebase
skill-seekers analyze --directory /path/to/repo --comprehensive
```
### Step 2: Optimize for Windsurf
**Character Limit Awareness**
Windsurf has strict limits:
- **Per rule file:** 6,000 characters max
- **Combined global + local:** 12,000 characters max
**Use split-rules flag:**
```bash
# Automatically split large documentation
skill-seekers package output/django --target markdown --split-rules
# This creates modular rules:
# - core-concepts.md (Always On)
# - api-reference.md (Model Decision)
# - best-practices.md (Always On)
# - troubleshooting.md (Manual @mention)
```
**Rule Activation Modes**
Configure each rule file's activation mode in frontmatter:
```markdown
---
name: "FastAPI Core Concepts"
activation: "always-on"
priority: "high"
---
# FastAPI Framework Expert
You are an expert in FastAPI...
```
Activation modes:
- **Always On** - Applied to every request (use for core concepts)
- **Model Decision** - AI decides when to use (use for specialized topics)
- **Manual** - Only when @mentioned (use for troubleshooting)
- **Scheduled** - Time-based activation (use for context switching)
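A rules build script can read these modes back out of the frontmatter. This parser is a sketch that handles only the flat `key: "value"` lines shown above (with optional trailing `#` comments), not full YAML:

```python
# Sketch: read activation metadata from a rule file's simple frontmatter.
def parse_frontmatter(text: str) -> dict:
    """Parse flat `key: "value"` frontmatter between --- markers."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        key, _, value = line.partition(":")
        # Drop trailing comments, surrounding whitespace, and quotes.
        meta[key.strip()] = value.split("#")[0].strip().strip('"')
    return meta
```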
### Step 3: Configure Windsurf Settings
**Enable Rules**
1. Open Windsurf Settings (Cmd+, or Ctrl+,)
2. Search for "rules"
3. Enable "Use Custom Rules"
4. Set rules directory: `.windsurf/rules`
**Memory Integration**
Combine rules with Windsurf's Memory feature:
```bash
# Generate initial rules from docs
skill-seekers package output/fastapi --target markdown
# Windsurf Memory learns from your usage:
# - Coding patterns you use frequently
# - Variable naming conventions
# - Architecture decisions
# - Team-specific practices
# Rules provide documentation, Memory provides personalization
```
**MCP Server Integration**
For live documentation access:
```bash
# Install Skill Seekers MCP server
pip install skill-seekers[mcp]
# Configure in Windsurf's mcp_config.json
{
  "mcpServers": {
    "skill-seekers": {
      "command": "python",
      "args": ["-m", "skill_seekers.mcp.server_fastmcp", "--transport", "stdio"]
    }
  }
}
```
### Step 4: Test and Refine
**Test Cascade Knowledge**
```bash
# Start Cascade (Cmd+L)
# Ask framework-specific questions:
"Show me FastAPI async database patterns"
"Create a React component with TypeScript best practices"
"Implement Django REST framework viewset with pagination"
```
**Refine Rules**
```bash
# Add project-specific patterns
cat >> .windsurf/rules/project-conventions.md << 'EOF'
---
name: "Project Conventions"
activation: "always-on"
priority: "highest"
---
# Project-Specific Patterns
## Database Models
- Always use async SQLAlchemy
- Include created_at/updated_at timestamps
- Add __repr__ for debugging
## API Endpoints
- Use dependency injection for database sessions
- Return Pydantic models, not ORM instances
- Include OpenAPI documentation strings
EOF
# Reload Windsurf window (Cmd+Shift+P → "Reload Window")
```
**Monitor Character Usage**
```bash
# Check rule file sizes
find .windsurf/rules -name "*.md" -exec wc -c {} \;
# Ensure no file exceeds 6,000 characters
# If too large, split further:
skill-seekers package output/react --target markdown --split-rules --max-chars 5000
```
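The same check can run in CI as a small Python script. This sketch enforces the documented limits (6,000 characters per file, 12,000 combined); note it only counts the local rules directory, so global rules must be budgeted separately:

```python
# Sketch: verify rule files stay within Windsurf's documented limits.
from pathlib import Path

PER_FILE_LIMIT = 6_000    # max characters per rule file
COMBINED_LIMIT = 12_000   # max combined (global + local) characters

def check_rules(rules_dir: str) -> list[str]:
    """Return a list of limit violations; empty means all rules fit."""
    problems = []
    total = 0
    for path in sorted(Path(rules_dir).glob("**/*.md")):
        size = len(path.read_text())
        total += size
        if size > PER_FILE_LIMIT:
            problems.append(f"{path}: {size} chars (limit {PER_FILE_LIMIT})")
    if total > COMBINED_LIMIT:
        problems.append(f"combined: {total} chars (limit {COMBINED_LIMIT})")
    return problems
```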
---
## 🎨 Advanced Usage
### Multi-Framework Projects
**Backend + Frontend Stack**
```bash
# Generate backend rules (FastAPI)
skill-seekers scrape --config configs/fastapi.json
skill-seekers package output/fastapi --target markdown --split-rules
# Generate frontend rules (React)
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target markdown --split-rules
# Organize rules directory:
.windsurf/rules/
├── backend/
│   ├── fastapi-core.md (Always On)
│   ├── fastapi-database.md (Model Decision)
│   └── fastapi-testing.md (Manual)
├── frontend/
│   ├── react-hooks.md (Always On)
│   ├── react-components.md (Model Decision)
│   └── react-performance.md (Manual)
└── project/
    └── conventions.md (Always On, Highest Priority)
```
### Dynamic Context per Workflow
**Context Switching Based on Task**
```markdown
---
name: "Testing Context"
activation: "model-decision"
description: "Use when user is writing or debugging tests"
keywords: ["test", "pytest", "unittest", "mock", "fixture"]
---
# Testing Best Practices
When writing tests, follow these patterns...
```
**Scheduled Rules for Time-Based Context**
```markdown
---
name: "Code Review Mode"
activation: "scheduled"
schedule: "0 14 * * 1-5" # 2 PM on weekdays
priority: "high"
---
# Code Review Checklist
During code review, verify:
- Type annotations are complete
- Tests cover edge cases
- Documentation is updated
```
### Windsurf + RAG Pipeline
**Combine Rules with Vector Search**
```python
# Use Skill Seekers to create both:
# 1. Windsurf rules (for Cascade context)
# 2. RAG chunks (for deep search)
from skill_seekers.cli.doc_scraper import main as scrape
from skill_seekers.cli.package_skill import main as package
from skill_seekers.cli.adaptors import get_adaptor
# Scrape documentation
scrape(["--config", "configs/react.json"])
# Create Windsurf rules
package(["output/react", "--target", "markdown", "--split-rules"])
# Also create RAG pipeline for deep search
package(["output/react", "--target", "langchain", "--chunk-for-rag"])
# Now you have:
# - .windsurf/rules/*.md (for Cascade)
# - output/react-langchain/ (for custom RAG search)
```
**MCP Tool for Dynamic Context**
Create custom MCP tool that queries RAG pipeline:
```python
# mcp_custom_search.py
# Assumes `mcp` is the FastMCP server instance and `vector_store` a
# vector store (e.g. from the RAG pipeline above), both created
# elsewhere in this file.
from skill_seekers.mcp.tools import search_docs

@mcp.tool()
def search_react_docs(query: str) -> str:
    """Search React documentation for specific patterns."""
    # Query your RAG pipeline
    results = vector_store.similarity_search(query, k=5)
    return "\n\n".join([doc.page_content for doc in results])
```
Register in `mcp_config.json`:
```json
{
  "mcpServers": {
    "custom-search": {
      "command": "python",
      "args": ["mcp_custom_search.py"]
    }
  }
}
```
---
## 💡 Best Practices
### 1. Keep Rules Focused
**Bad: Single Monolithic Rule (15,000 chars - exceeds limit!)**
```markdown
---
name: "Everything React"
---
# React Framework (Complete Guide)
[... 15,000 characters of documentation ...]
```
**Good: Modular Rules (5,000 chars each)**
```markdown
<!-- react-core.md (5,200 chars) -->
---
name: "React Core Concepts"
activation: "always-on"
---
# React Fundamentals
[... focused on hooks, components, state ...]
<!-- react-performance.md (4,800 chars) -->
---
name: "React Performance"
activation: "model-decision"
description: "Use when optimizing React performance"
---
# Performance Optimization
[... focused on memoization, lazy loading ...]
<!-- react-testing.md (5,100 chars) -->
---
name: "React Testing"
activation: "manual"
---
# Testing React Components
[... focused on testing patterns ...]
```
### 2. Use Activation Modes Wisely
| Mode | Use Case | Example |
|------|----------|---------|
| **Always On** | Core concepts, common patterns | Framework fundamentals, project conventions |
| **Model Decision** | Specialized topics | Performance optimization, advanced patterns |
| **Manual** | Troubleshooting, rare tasks | Debugging guides, migration docs |
| **Scheduled** | Time-based context | Code review checklists, release procedures |
### 3. Prioritize Rules
```markdown
---
name: "Project Conventions"
activation: "always-on"
priority: "highest" # This overrides framework defaults
---
# Project-Specific Rules
Always use:
- Async/await for all database operations
- Pydantic V2 (not V1)
- pytest-asyncio for async tests
```
### 4. Include Code Examples
**Don't just describe patterns:**
```markdown
## Creating Database Models
Use SQLAlchemy with async patterns.
```
**Show actual code:**
```markdown
## Creating Database Models
\```python
from sqlalchemy import Column, Integer, String, DateTime
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import declarative_base
from datetime import datetime

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)

    def __repr__(self):
        return f"<User(email='{self.email}')>"

# Usage in endpoint
async def create_user(email: str, db: AsyncSession):
    user = User(email=email)
    db.add(user)
    await db.commit()
    await db.refresh(user)
    return user
\```
Use this pattern in all endpoints.
```
### 5. Update Rules Regularly
```bash
# Framework updates quarterly
skill-seekers scrape --config configs/react.json
skill-seekers package output/react --target markdown --split-rules
# Check what changed
diff -r .windsurf/rules/react-old/ .windsurf/rules/react-new/
# Merge updates
cp -r .windsurf/rules/react-new/* .windsurf/rules/
# Test with Cascade
# Ask: "What's new in React 19?"
```
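The `diff -r` step above can also be scripted when you want a categorized summary instead of a raw diff. A hypothetical helper (directory names mirror the workflow above):

```python
# compare_rules.py -- classify rule files as added, removed, or changed
# between an old and a new rules directory.
from pathlib import Path

def compare_rule_dirs(old_dir: str, new_dir: str) -> dict[str, list[str]]:
    """Summarize differences between two rule directories by filename."""
    old = {p.name: p.read_text(encoding="utf-8") for p in Path(old_dir).glob("*.md")}
    new = {p.name: p.read_text(encoding="utf-8") for p in Path(new_dir).glob("*.md")}
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(n for n in old.keys() & new.keys() if old[n] != new[n]),
    }
```

A `changed` entry tells you which rules to re-review before merging the update into `.windsurf/rules/`.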
---
## 🔥 Real-World Examples
### Example 1: FastAPI + PostgreSQL Microservice
**Project Structure:**
```
my-api/
├── .windsurf/
│ └── rules/
│ ├── fastapi-core.md (5,200 chars, Always On)
│ ├── fastapi-database.md (5,800 chars, Always On)
│ ├── fastapi-testing.md (4,100 chars, Manual)
│ └── project-conventions.md (3,500 chars, Always On, Highest)
├── app/
│ ├── models.py
│ ├── schemas.py
│ └── routers/
└── tests/
```
**fastapi-core.md**
```markdown
---
name: "FastAPI Core Patterns"
activation: "always-on"
priority: "high"
---
# FastAPI Expert
You are an expert in FastAPI. Use these patterns:
## Endpoint Structure
Always use dependency injection:
\```python
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from app.database import get_db
router = APIRouter(prefix="/api/v1")
@router.post("/users/", response_model=UserResponse)
async def create_user(
    user: UserCreate,
    db: AsyncSession = Depends(get_db)
):
    """Create a new user."""
    # Implementation
\```
## Error Handling
Use HTTPException with proper status codes:
\```python
from fastapi import HTTPException
if not user:
    raise HTTPException(
        status_code=404,
        detail="User not found"
    )
\```
```
**project-conventions.md**
```markdown
---
name: "Project Conventions"
activation: "always-on"
priority: "highest"
---
# Project-Specific Patterns
## Database Sessions
ALWAYS use async sessions with context managers:
\```python
async with get_session() as db:
    result = await db.execute(query)
\```
## Response Models
NEVER return ORM instances directly. Use Pydantic:
\```python
# BAD
return user # SQLAlchemy model
# GOOD
return UserResponse.model_validate(user)
\```
## Testing
All tests MUST use pytest-asyncio:
\```python
import pytest
@pytest.mark.asyncio
async def test_create_user():
    # Test implementation
\```
```
**Result:**
When you ask Cascade:
> "Create an endpoint to list all users with pagination"
Cascade will:
1. ✅ Use async/await (from project-conventions.md)
2. ✅ Add dependency injection (from fastapi-core.md)
3. ✅ Return Pydantic models (from project-conventions.md)
4. ✅ Use proper database patterns (from fastapi-database.md)
### Example 2: Godot Game Engine
**Godot-Specific Rules**
```bash
# Generate Godot documentation + codebase analysis
skill-seekers github --repo godotengine/godot-demo-projects
skill-seekers package output/godot-demo-projects --target markdown --split-rules
# Create rules structure:
.windsurf/rules/
├── godot-core.md (GDScript syntax, node system)
├── godot-signals.md (Signal patterns, EventBus)
├── godot-scenes.md (Scene tree, node access)
└── project-patterns.md (Custom patterns from codebase)
```
**godot-signals.md**
```markdown
---
name: "Godot Signal Patterns"
activation: "model-decision"
description: "Use when working with signals and events"
keywords: ["signal", "connect", "emit", "EventBus"]
---
# Godot Signal Patterns
## Signal Declaration
\```gdscript
signal health_changed(new_health: int, max_health: int)
signal item_collected(item_type: String, quantity: int)
\```
## Connection Pattern
\```gdscript
func _ready():
    player.health_changed.connect(_on_health_changed)

func _on_health_changed(new_health: int, max_health: int):
    health_bar.value = (new_health / float(max_health)) * 100
\```
## EventBus Pattern (from codebase analysis)
\```gdscript
# EventBus.gd (autoload singleton)
extends Node
signal game_started
signal game_over(score: int)
signal player_died
# Usage in game scenes:
EventBus.game_started.emit()
EventBus.game_over.emit(final_score)
\```
```
---
## 🐛 Troubleshooting
### Issue: Rules Not Loading
**Symptoms:**
- Cascade doesn't reference documentation
- Rules directory exists but ignored
**Solutions:**
1. **Check rules directory location**
```bash
# Must be exactly:
.windsurf/rules/
# Not:
.windsurf/rule/ # Missing 's'
windsurf/rules/ # Missing leading dot
```
2. **Verify file extensions**
```bash
# Rules must be .md files
ls .windsurf/rules/
# Should show: fastapi.md, react.md, etc.
# NOT: fastapi.txt, rules.json
```
3. **Check Windsurf settings**
```
Cmd+, → Search "rules" → Enable "Use Custom Rules"
```
4. **Reload Windsurf**
```
Cmd+Shift+P → "Reload Window"
```
5. **Verify frontmatter syntax**
```markdown
---
name: "Rule Name"
activation: "always-on"
---
# Content starts here
```
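The frontmatter check can be automated rather than eyeballed. A minimal validator for the `name`/`activation` fields used throughout this guide (the required-field list is illustrative; Windsurf may accept other keys):

```python
# validate_frontmatter.py -- sanity-check a rule file's frontmatter block.
import re

def frontmatter_errors(text: str) -> list[str]:
    """Return a list of problems found in a rule file's frontmatter."""
    errors = []
    # Frontmatter must open the file and be closed by a second --- line.
    match = re.match(r'---\s*\n(.*?)\n---\s*\n', text, re.DOTALL)
    if not match:
        return ["frontmatter block missing or not closed with ---"]
    body = match.group(1)
    for field in ("name", "activation"):
        if not re.search(rf'^{field}:', body, re.MULTILINE):
            errors.append(f"missing '{field}:' field")
    return errors
```

Run it over every file in `.windsurf/rules/` when rules mysteriously stop loading; a single malformed `---` delimiter is a common culprit.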
### Issue: Rules Exceeding Character Limit
**Error:**
> "Rule file exceeds 6,000 character limit"
**Solutions:**
1. **Use split-rules flag**
```bash
skill-seekers package output/react --target markdown --split-rules
```
2. **Set custom max-chars**
```bash
skill-seekers package output/django --target markdown --split-rules --max-chars 5000
```
3. **Manual splitting**
```bash
# Split SKILL.md by sections
csplit SKILL.md '/^## /' '{*}'
# Rename files
mv xx00 core-concepts.md
mv xx01 api-reference.md
mv xx02 best-practices.md
```
4. **Use activation modes strategically**
```markdown
<!-- Keep core concepts Always On -->
---
name: "Core Concepts"
activation: "always-on"
---
<!-- Make specialized topics Manual -->
---
name: "Advanced Patterns"
activation: "manual"
---
```
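The manual `csplit` approach above can be refined in Python so each output file stays under the character budget. A sketch (the packing heuristic is illustrative; a single section larger than the budget still comes out as one oversized file and needs splitting by hand):

```python
# split_skill.py -- split a large SKILL.md at "## " headings, then pack
# consecutive sections together while they fit under a character budget.

def split_by_sections(text: str, max_chars: int = 6000) -> list[str]:
    """Split markdown at '## ' headings and pack sections under max_chars."""
    sections, current = [], []
    for line in text.splitlines(keepends=True):
        if line.startswith("## ") and current:
            sections.append("".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("".join(current))
    # Greedily pack sections into files without exceeding the budget.
    files, buf = [], ""
    for section in sections:
        if buf and len(buf) + len(section) > max_chars:
            files.append(buf)
            buf = ""
        buf += section
    if buf:
        files.append(buf)
    return files
```

Write each returned string to its own `.md` file and add frontmatter; unlike `csplit`, nothing is lost or renamed to opaque `xx00`-style names.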
### Issue: Cascade Not Using Rules
**Symptoms:**
- Rules loaded but AI doesn't reference them
- Generic responses despite custom documentation
**Solutions:**
1. **Check activation mode**
```markdown
# Change from Model Decision to Always On
---
activation: "always-on" # Not "model-decision"
---
```
2. **Increase priority**
```markdown
---
priority: "highest" # Override framework defaults
---
```
3. **Add explicit instructions**
```markdown
# FastAPI Expert
You MUST follow these patterns in all FastAPI code:
- Use async/await
- Dependency injection for database
- Pydantic response models
```
4. **Test with explicit mention**
```
In Cascade chat:
"@fastapi Create an endpoint with async database access"
```
5. **Combine with Memory**
```
Ask Cascade to remember:
"Remember to always use the patterns from fastapi.md rules file"
```
### Issue: Conflicting Rules
**Symptoms:**
- AI mixes patterns from different frameworks
- Inconsistent code suggestions
**Solutions:**
1. **Use priority levels**
```markdown
<!-- project-conventions.md -->
---
priority: "highest"
---
<!-- framework-defaults.md -->
---
priority: "medium"
---
```
2. **Make project conventions always-on**
```markdown
---
name: "Project Conventions"
activation: "always-on"
priority: "highest"
---
These rules OVERRIDE all framework defaults:
- [List project-specific patterns]
```
3. **Use model-decision for conflicting patterns**
```markdown
<!-- rest-api.md -->
---
activation: "model-decision"
description: "Use when creating REST APIs (not GraphQL)"
---
<!-- graphql-api.md -->
---
activation: "model-decision"
description: "Use when creating GraphQL APIs (not REST)"
---
```
---
## 📊 Before vs After Comparison
| Aspect | Before Skill Seekers | After Skill Seekers |
|--------|---------------------|---------------------|
| **Context Source** | Copy-paste docs into chat | Automatic rules files |
| **Character Limits** | Hit 12K limit easily | Modular rules fit perfectly |
| **AI Knowledge** | Generic framework patterns | Project-specific best practices |
| **Setup Time** | Manual doc curation (hours) | Automated scraping (5 min) |
| **Consistency** | Varies per conversation | Persistent across all flows |
| **Updates** | Manual doc editing | Re-run scraper for latest docs |
| **Multi-Framework** | Context switching confusion | Separate rule files |
| **Code Quality** | Hit-or-miss | Follows documented patterns |
---
## 🤝 Community & Support
- **Questions:** [GitHub Discussions](https://github.com/yusufkaraaslan/Skill_Seekers/discussions)
- **Issues:** [GitHub Issues](https://github.com/yusufkaraaslan/Skill_Seekers/issues)
- **Website:** [skillseekersweb.com](https://skillseekersweb.com/)
- **Windsurf Docs:** [docs.windsurf.com](https://docs.windsurf.com/)
- **Windsurf Rules Directory:** [windsurf.com/editor/directory](https://windsurf.com/editor/directory)
---
## 📚 Related Guides
- [Cursor Integration](CURSOR.md) - Similar IDE, different rules format
- [Cline Integration](CLINE.md) - VS Code extension with MCP
- [Continue.dev Integration](CONTINUE_DEV.md) - IDE-agnostic AI assistant
- [LangChain Integration](LANGCHAIN.md) - Build RAG pipelines
- [RAG Pipelines Guide](RAG_PIPELINES.md) - End-to-end RAG setup
---
## 📖 Next Steps
1. **Try another framework:** `skill-seekers scrape --config configs/vue.json`
2. **Combine multiple frameworks:** Create modular rules for full-stack projects
3. **Integrate with MCP:** Add live documentation access via MCP servers
4. **Build RAG pipeline:** Use `--target langchain` for deep search
5. **Share your rules:** Contribute to [awesome-windsurfrules](https://github.com/SchneiderSam/awesome-windsurfrules)
---
**Sources:**
- [Windsurf Official Site](https://windsurf.com/)
- [Windsurf Documentation](https://docs.windsurf.com/windsurf/getting-started)
- [Windsurf MCP Setup Guide](https://www.braingrid.ai/blog/windsurf-mcp)
- [Awesome Windsurfrules Repository](https://github.com/SchneiderSam/awesome-windsurfrules)
- [Windsurf Rules Directory](https://windsurf.com/editor/directory)
- [Mastering .windsurfrules Guide](https://blog.stackademic.com/mastering-windsurfrules-react-typescript-projects-aee1e3fe4376)