---
name: rag-implementation
description: "RAG (Retrieval-Augmented Generation) implementation workflow covering embedding selection, vector database setup, chunking strategies, and retrieval optimization."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
---
# RAG Implementation Workflow
## Overview
Specialized workflow for implementing RAG (Retrieval-Augmented Generation) systems including embedding model selection, vector database setup, chunking strategies, retrieval optimization, and evaluation.
## When to Use This Workflow
Use this workflow when:
- Building RAG-powered applications
- Implementing semantic search
- Creating knowledge-grounded AI
- Setting up document Q&A systems
- Optimizing retrieval quality
## Workflow Phases
### Phase 1: Requirements Analysis
#### Skills to Invoke
- `ai-product` - AI product design
- `rag-engineer` - RAG engineering
#### Actions
1. Define use case
2. Identify data sources
3. Set accuracy requirements
4. Determine latency targets
5. Plan evaluation metrics
#### Copy-Paste Prompts
```
Use @ai-product to define RAG application requirements
```
### Phase 2: Embedding Selection
#### Skills to Invoke
- `embedding-strategies` - Embedding selection
- `rag-engineer` - RAG patterns
#### Actions
1. Evaluate embedding models
2. Test domain relevance
3. Measure embedding quality
4. Consider cost/latency
5. Select model
#### Copy-Paste Prompts
```
Use @embedding-strategies to select optimal embedding model
```
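The quality check in step 3 can be automated: score each candidate model on a small set of labeled query/document pairs and compare averages. A minimal sketch, assuming only a `score_model` harness and a toy character-frequency embedder standing in for a real embedding API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def score_model(embed, pairs):
    # Average similarity a model assigns to known-relevant query/doc pairs;
    # a better embedder scores relevant pairs higher than irrelevant ones.
    return sum(cosine(embed(q), embed(d)) for q, d in pairs) / len(pairs)

def toy_embed(text):
    # Hypothetical stand-in embedder: 26-dim character-frequency vector.
    # In practice this would be an API call or a local embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

relevant = [("vector search", "vector searching"), ("chunking", "chunk text")]
print(round(score_model(toy_embed, relevant), 2))
```

Run the same harness against each candidate model, then weigh the scores against cost and latency before selecting.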
### Phase 3: Vector Database Setup
#### Skills to Invoke
- `vector-database-engineer` - Vector DB
- `similarity-search-patterns` - Similarity search
#### Actions
1. Choose vector database
2. Design schema
3. Configure indexes
4. Set up connection
5. Test queries
#### Copy-Paste Prompts
```
Use @vector-database-engineer to set up vector database
```
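Before wiring up a managed service, the schema and query path from steps 2-5 can be prototyped with an in-memory brute-force store. A sketch (class and method names are illustrative, not any vendor's API; production systems would use pgvector, Qdrant, Pinecone, etc.):

```python
import heapq
import math

class InMemoryVectorStore:
    """Minimal brute-force vector store. Schema: id -> (vector, metadata)."""

    def __init__(self):
        self.rows = {}

    def upsert(self, doc_id, vector, metadata=None):
        self.rows[doc_id] = (vector, metadata or {})

    def query(self, vector, top_k=3):
        # Exhaustive cosine-similarity scan; real vector DBs replace this
        # with an approximate index (HNSW, IVF) for scale.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        scored = ((cosine(vector, v), doc_id)
                  for doc_id, (v, _) in self.rows.items())
        return heapq.nlargest(top_k, scored)

store = InMemoryVectorStore()
store.upsert("a", [1.0, 0.0], {"source": "docs"})
store.upsert("b", [0.0, 1.0])
print(store.query([1.0, 0.1], top_k=1))
```

Swapping the scan for a real index later leaves the upsert/query interface unchanged, which makes step 5's query tests reusable.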
### Phase 4: Chunking Strategy
#### Skills to Invoke
- `rag-engineer` - Chunking strategies
- `rag-implementation` - RAG implementation
#### Actions
1. Choose chunk size
2. Implement chunking
3. Add overlap handling
4. Create metadata
5. Test retrieval quality
#### Copy-Paste Prompts
```
Use @rag-engineer to implement chunking strategy
```
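Steps 1-4 combine naturally into one function: a fixed-size sliding window with overlap, attaching position metadata to each chunk. A minimal sketch (word-based windows for simplicity; token- or sentence-aware splitters are common refinements):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Sliding window over whitespace tokens; the overlap keeps sentences
    # that straddle a chunk boundary retrievable from both neighbors.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        window = words[start:start + chunk_size]
        chunks.append({
            "text": " ".join(window),
            "metadata": {"start_word": start, "num_words": len(window)},
        })
    return chunks

doc = " ".join(f"word{i}" for i in range(500))
chunks = chunk_text(doc, chunk_size=200, overlap=50)
print(len(chunks), chunks[1]["metadata"])
```

Step 5 then means re-running retrieval benchmarks while sweeping `chunk_size` and `overlap`; the best values depend on the corpus and the embedding model's context window.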
### Phase 5: Retrieval Implementation
#### Skills to Invoke
- `similarity-search-patterns` - Similarity search
- `hybrid-search-implementation` - Hybrid search
#### Actions
1. Implement vector search
2. Add keyword search
3. Configure hybrid search
4. Set up reranking
5. Optimize latency
#### Copy-Paste Prompts
```
Use @similarity-search-patterns to implement retrieval
```
```
Use @hybrid-search-implementation to add hybrid search
```
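A common way to fuse the vector and keyword result lists from steps 1-3 is Reciprocal Rank Fusion (RRF), which needs only the two ranked ID lists, no score normalization. A sketch:

```python
def reciprocal_rank_fusion(rankings, k=60):
    # RRF: each list contributes 1 / (k + rank) per document, so a doc
    # ranked well by either retriever floats to the top of the fused list.
    # k = 60 is the conventional damping constant.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d7"]   # from similarity search
keyword_hits = ["d1", "d9", "d3"]  # from keyword/BM25 search
print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
```

A cross-encoder reranker (step 4) can then rescore just the fused top-k, keeping the expensive model off the full corpus and helping the latency budget in step 5.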
### Phase 6: LLM Integration
#### Skills to Invoke
- `llm-application-dev-ai-assistant` - LLM integration
- `llm-application-dev-prompt-optimize` - Prompt optimization
#### Actions
1. Select LLM provider
2. Design prompt template
3. Implement context injection
4. Add citation handling
5. Test generation quality
#### Copy-Paste Prompts
```
Use @llm-application-dev-ai-assistant to integrate LLM
```
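Context injection with citation handling (steps 3-4) mostly comes down to the prompt template: number each retrieved chunk and instruct the model to cite by number. A minimal sketch (the template wording is illustrative):

```python
def build_prompt(question, retrieved):
    # Tag each chunk with [n] so the model can cite sources inline,
    # and constrain the answer to the injected context.
    context = "\n\n".join(
        f"[{i}] {chunk['text']}" for i, chunk in enumerate(retrieved, start=1)
    )
    return (
        "Answer the question using only the context below. "
        "Cite sources as [n].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

retrieved = [
    {"text": "RAG combines retrieval with generation."},
    {"text": "Chunk overlap preserves boundary context."},
]
prompt = build_prompt("What is RAG?", retrieved)
print(prompt)
```

The `[n]` tags also make step 5 testable: generation quality checks can verify that cited indices actually exist in the injected context.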
### Phase 7: Caching
#### Skills to Invoke
- `prompt-caching` - Prompt caching
- `rag-engineer` - RAG optimization
#### Actions
1. Implement response caching
2. Set up embedding cache
3. Configure TTL
4. Add cache invalidation
5. Monitor hit rates
#### Copy-Paste Prompts
```
Use @prompt-caching to implement RAG caching
```
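Steps 1-5 can be covered by one small cache keyed on the query, with a TTL, explicit invalidation for re-indexing, and hit-rate counters for monitoring. A sketch (in production this would typically sit in front of Redis or similar):

```python
import time

class TTLCache:
    # query -> response cache with expiry, invalidation, and hit-rate stats.
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]
        self.store.pop(key, None)  # drop expired entry, if any
        self.misses += 1
        return None

    def put(self, key, value):
        self.store[key] = (value, time.monotonic())

    def invalidate(self, key=None):
        # Drop one key, or everything when the corpus is re-indexed.
        if key is None:
            self.store.clear()
        else:
            self.store.pop(key, None)

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = TTLCache(ttl_seconds=60)
cache.get("what is rag?")         # miss (cold)
cache.put("what is rag?", "RAG = retrieval + generation")
print(cache.get("what is rag?"))  # hit
print(cache.hit_rate())           # 0.5 after one miss and one hit
```

The same structure works for the embedding cache in step 2, keyed on chunk text, where a long TTL is usually safe because embeddings only change when the model does.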
### Phase 8: Evaluation
#### Skills to Invoke
- `llm-evaluation` - LLM evaluation
- `evaluation` - AI evaluation
#### Actions
1. Define evaluation metrics
2. Create test dataset
3. Measure retrieval accuracy
4. Evaluate generation quality
5. Iterate on improvements
#### Copy-Paste Prompts
```
Use @llm-evaluation to evaluate RAG system
```
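For step 3, retrieval accuracy is commonly measured with recall@k and mean reciprocal rank over a labeled test set of (query, relevant docs) pairs. A self-contained sketch of both metrics:

```python
def recall_at_k(retrieved, relevant, k):
    # Fraction of the relevant docs that appear in the top-k results.
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def mean_reciprocal_rank(results):
    # results: list of (retrieved_ids, relevant_ids) per test query.
    # Each query contributes 1/rank of its first relevant hit (0 if none).
    total = 0.0
    for retrieved, relevant in results:
        for rank, doc_id in enumerate(retrieved, start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(results) if results else 0.0

dataset = [
    (["d2", "d1", "d5"], {"d1"}),  # first relevant hit at rank 2
    (["d3", "d4", "d9"], {"d3"}),  # first relevant hit at rank 1
]
print(recall_at_k(["d2", "d1", "d5"], {"d1"}, k=2))  # 1.0
print(mean_reciprocal_rank(dataset))                 # (1/2 + 1/1) / 2 = 0.75
```

Tracking these per phase change (chunk size, embedding model, hybrid weights) turns step 5's iteration into a measurable loop rather than guesswork.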
## RAG Architecture
```
User Query -> Embedding -> Vector Search -> Retrieved Docs -> LLM -> Response
                  |              |               |              |
                Model        Vector DB      Chunk Store    Prompt + Context
```
## Quality Gates
- [ ] Embedding model selected
- [ ] Vector DB configured
- [ ] Chunking implemented
- [ ] Retrieval working
- [ ] LLM integrated
- [ ] Evaluation passing
## Related Workflow Bundles
- `ai-ml` - AI/ML development
- `ai-agent-development` - AI agents
- `database` - Vector databases