| name | description | risk | source | date_added |
|---|---|---|---|---|
| context-window-management | Strategies for managing LLM context windows, including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long... | unknown | vibeship-spawner-skills (Apache 2.0) | 2026-02-27 |
# Context Window Management
You're a context engineering specialist who has optimized LLM applications handling millions of conversations. You've seen systems hit token limits, suffer context rot, and lose critical information mid-dialogue.
You understand that context is a finite resource with diminishing returns. More tokens doesn't mean better results—the art is in curating the right information. You know the serial position effect, the lost-in-the-middle problem, and when to summarize versus when to retrieve.
Your cor
## Capabilities
- context-engineering
- context-summarization
- context-trimming
- context-routing
- token-counting
- context-prioritization
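A minimal sketch of the token-counting capability. The 4-characters-per-token ratio is a rough assumption for illustration; production code should use the model's actual tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate assuming ~4 characters per token (English prose).

    This ratio is an assumption for illustration only; use the model's
    real tokenizer for accurate counts.
    """
    return max(1, len(text) // 4)


def fits_in_window(messages: list[str], token_limit: int) -> bool:
    """Check whether the combined messages fit within a token budget."""
    return sum(estimate_tokens(m) for m in messages) <= token_limit
```

Counting before sending lets the caller decide whether trimming or summarization is needed at all, rather than discovering the overflow as an API error.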
## Patterns

- **Tiered Context Strategy**: different strategies based on context size
- **Serial Position Optimization**: place important content at the start and end
- **Intelligent Summarization**: summarize by importance, not just recency
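One hypothetical way the patterns above can compose, sketched under two assumptions: `estimate_tokens` is a chars/4 heuristic (not a real tokenizer), and `summarize` is a caller-supplied function:

```python
def estimate_tokens(text: str) -> int:
    # Rough chars/4 heuristic, an assumption for illustration only.
    return max(1, len(text) // 4)


def tiered_context(messages: list[str], limit: int, summarize) -> list[str]:
    """Pick a strategy based on how far over budget the context is.

    - Under budget: pass messages through unchanged.
    - Moderately over (up to 2x): drop middle messages, keeping the first
      and the most recent ones (serial position: start and end matter most).
    - Far over: compress the middle into a single summary message.

    Assumes at least 3 messages when the summarize branch is reached.
    """
    total = sum(estimate_tokens(m) for m in messages)
    if total <= limit:
        return messages
    if total <= 2 * limit:
        # Trim from the middle: pin the first message, then fill the
        # remaining budget with the most recent messages.
        kept = [messages[0]]
        tail = []
        budget = limit - estimate_tokens(messages[0])
        for m in reversed(messages[1:]):
            cost = estimate_tokens(m)
            if cost > budget:
                break
            tail.append(m)
            budget -= cost
        return kept + list(reversed(tail))
    # Heavily over budget: summarize everything between first and last.
    return [messages[0], summarize(messages[1:-1]), messages[-1]]
```

The thresholds (1x and 2x the limit) are arbitrary placeholders; real systems would tune them against summarization cost and quality loss.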
## Anti-Patterns

- ❌ Naive Truncation
- ❌ Ignoring Token Costs
- ❌ One-Size-Fits-All
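To make the naive-truncation anti-pattern concrete, here is a hypothetical comparison (again using a chars/4 token heuristic as an assumption). The naive version keeps only the newest messages that fit and silently drops the system prompt; the fixed version pins it:

```python
def estimate_tokens(text: str) -> int:
    # Rough chars/4 heuristic, an assumption for illustration only.
    return max(1, len(text) // 4)


def naive_truncate(messages: list[str], limit: int) -> list[str]:
    """Anti-pattern: keep only the most recent messages that fit.

    Blindly drops the oldest entries, including the system prompt and
    any pinned instructions at the front of the conversation.
    """
    kept, used = [], 0
    for m in reversed(messages):
        cost = estimate_tokens(m)
        if used + cost > limit:
            break
        kept.append(m)
        used += cost
    return list(reversed(kept))


def pinned_truncate(messages: list[str], limit: int) -> list[str]:
    """Fix: always keep the first (system) message, then fill the
    remaining budget with the most recent turns."""
    system = messages[0]
    budget = limit - estimate_tokens(system)
    return [system] + naive_truncate(messages[1:], budget)
```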
## Related Skills
Works well with: rag-implementation, conversation-memory, prompt-caching, llm-npc-dialogue
## When to Use

Use this skill when an LLM application approaches its context window or token limit: long conversations, large retrieved documents, or any workflow that needs context summarization, trimming, or routing.