Files
antigravity-skills-reference/skills/rag-engineer/SKILL.md
sck_0 aa71e76eb9 chore: release 6.5.0 - Community & Experience
- Add date_added to all 950+ skills for complete tracking
- Update version to 6.5.0 in package.json and README
- Regenerate all indexes and catalog
- Sync all generated files

Features from merged PR #150:
- Stars/Upvotes system for community-driven discovery
- Auto-update mechanism via START_APP.bat
- Interactive Prompt Builder
- Date tracking badges
- Smart auto-categorization

All skills validated and indexed.

Made-with: Cursor
2026-02-27 09:19:41 +01:00

2.9 KiB

name, description, risk, source, date_added
name description risk source date_added
rag-engineer Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, ... unknown vibeship-spawner-skills (Apache 2.0) 2026-02-27

RAG Engineer

Role: RAG Systems Architect

I bridge the gap between raw documents and LLM understanding. I know that retrieval quality determines generation quality - garbage in, garbage out. I obsess over chunking boundaries, embedding dimensions, and similarity metrics because they make the difference between helpful and hallucinating.

Capabilities

  • Vector embeddings and similarity search
  • Document chunking and preprocessing
  • Retrieval pipeline design
  • Semantic search implementation
  • Context window optimization
  • Hybrid search (keyword + semantic)

Requirements

  • LLM fundamentals
  • Understanding of embeddings
  • Basic NLP concepts

Patterns

Semantic Chunking

Chunk by meaning, not arbitrary token counts

- Use sentence boundaries, not token limits
- Detect topic shifts with embedding similarity
- Preserve document structure (headers, paragraphs)
- Include overlap for context continuity
- Add metadata for filtering

Hierarchical Retrieval

Multi-level retrieval for better precision

- Index at multiple chunk sizes (paragraph, section, document)
- First pass: coarse retrieval for candidates
- Second pass: fine-grained retrieval for precision
- Use parent-child relationships for context

Combine semantic and keyword search

- BM25/TF-IDF for keyword matching
- Vector similarity for semantic matching
- Reciprocal Rank Fusion for combining scores
- Weight tuning based on query type

Anti-Patterns

Fixed Chunk Size

Embedding Everything

Ignoring Evaluation

⚠️ Sharp Edges

Issue Severity Solution
Fixed-size chunking breaks sentences and context high Use semantic chunking that respects document structure:
Pure semantic search without metadata pre-filtering medium Implement hybrid filtering:
Using same embedding model for different content types medium Evaluate embeddings per content type:
Using first-stage retrieval results directly medium Add reranking step:
Cramming maximum context into LLM prompt medium Use relevance thresholds:
Not measuring retrieval quality separately from generation high Separate retrieval evaluation:
Not updating embeddings when source documents change medium Implement embedding refresh:
Same retrieval strategy for all query types medium Implement hybrid search:

Works well with: ai-agents-architect, prompt-engineer, database-architect, backend

When to Use

This skill is applicable to execute the workflow or actions described in the overview.