docs: Consolidate roadmaps and refactor documentation structure

MAJOR REFACTORING: Merge 3 roadmap files into single comprehensive ROADMAP.md

Changes:
- Merged ROADMAP.md + FLEXIBLE_ROADMAP.md + FUTURE_RELEASES.md → ROADMAP.md
- Consolidated 1,008 lines across 3 files into 429 lines (single source of truth)
- Removed duplicate/overlapping content
- Cleaned up docs archive structure

New ROADMAP.md Structure:
- Current Status (v2.6.0)
- Development Philosophy (task-based approach)
- Task-Based Roadmap (136 tasks, 10 categories)
- Release History (v1.0.0, v2.1.0, v2.6.0)
- Release Planning (v2.7-v2.9)
- Long-term Vision (v3.0+)
- Metrics & Goals
- Contribution guidelines

Deleted Files:
- FLEXIBLE_ROADMAP.md (merged into ROADMAP.md)
- FUTURE_RELEASES.md (merged into ROADMAP.md)
- docs/archive/temp/TERMINAL_SELECTION.md (temporary file)
- docs/archive/temp/TESTING.md (temporary file)

Moved Files:
- docs/plans/*.md → docs/archive/plans/ (dated planning docs)

Updated References:
- CLAUDE.md: FLEXIBLE_ROADMAP.md → ROADMAP.md
- docs/README.md: Removed duplicate roadmap references
- CHANGELOG.md: Updated documentation references

Benefits:
- Single source of truth for roadmap
- No duplicate maintenance
- Cleaner repository structure
- Better discoverability
- Historical context preserved in archive/

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Author: yusyus
Date: 2026-01-14 22:36:03 +03:00
Parent: 7d56cc83b9
Commit: 48b8544dea

10 changed files with 372 additions and 1764 deletions


@@ -1,867 +0,0 @@
# Active Skills Design - Demand-Driven Documentation Loading
**Date:** 2025-10-24
**Type:** Architecture Design
**Status:** Phase 1 Implemented ✅
**Author:** Edgar + Claude (Brainstorming Session)
---
## Executive Summary
Transform the skills Skill_Seekers produces from **passive documentation dumps** into **active, intelligent skills** that load documentation on demand. This eliminates context bloat (300k → 5-10k per query) while maintaining full access to the complete documentation.
**Key Innovation:** Skills become lightweight routers with heavy tools in `scripts/`, not documentation repositories.
---
## Problem Statement
### Current Architecture: Passive Skills
**What happens today:**
```
Agent: "How do I use Hono middleware?"
Skill: *Claude loads 203k llms-txt.md into context*
Agent: *answers using loaded docs*
Result: Context bloat, slower performance, hits limits
```
**Issues:**
1. **Context Bloat**: 319k llms-full.txt loaded entirely into context
2. **Wasted Resources**: Agent needs 5k but gets 319k
3. **Truncation Loss**: 36% of content lost (319k → 203k) due to size limits
4. **File Extension Bug**: llms.txt files stored as .txt instead of .md
5. **Single Variant**: Only downloads one file (usually llms-full.txt)
### Current File Structure
```
output/hono/
├── SKILL.md ──────────► Documentation dump + instructions
├── references/
│ └── llms-txt.md ───► 203k (36% truncated from 319k original)
├── scripts/ ──────────► EMPTY (placeholder only!)
└── assets/ ───────────► EMPTY (placeholder only!)
```
---
## Proposed Architecture: Active Skills
### Core Concept
**Skills = Routers + Tools**, not documentation dumps.
**New workflow:**
```
Agent: "How do I use Hono middleware?"
Skill: *runs scripts/search.py "middleware"*
Script: *loads llms-full.md, extracts middleware section, returns 8k*
Agent: *answers using ONLY 8k* (CLEAN CONTEXT!)
Result: 40x less context, no truncation, full access to docs
```
### Benefits
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Context per query | 203k | 5-10k | **20-40x reduction** |
| Content loss | 36% truncated | 0% (no truncation) | **Full fidelity** |
| Variants available | 1 | 3 | **User choice** |
| File format | .txt (wrong) | .md (correct) | **Fixed** |
| Agent workflow | Passive read | Active tools | **Autonomous** |
---
## Design Components
### Component 1: Multi-Variant Download
**Change:** Download ALL 3 variants, not just one.
**File naming (FIXED):**
- `https://hono.dev/llms-full.txt` → `llms-full.md`
- `https://hono.dev/llms.txt` → `llms.md`
- `https://hono.dev/llms-small.txt` → `llms-small.md`
**Sizes (Hono example):**
- `llms-full.md` - 319k (complete documentation)
- `llms-small.md` - 176k (curated essentials)
- `llms.md` - 5.4k (quick reference)
**Storage:**
```
output/hono/references/
├── llms-full.md # 319k - everything (RENAMED from .txt)
├── llms-small.md # 176k - curated (RENAMED from .txt)
├── llms.md # 5.4k - quick ref (RENAMED from .txt)
└── catalog.json # Generated index (NEW)
```
**Implementation in `_try_llms_txt()`:**
```python
def _try_llms_txt(self) -> bool:
    """Download ALL llms.txt variants for active skills"""
    # 1. Detect all available variants
    detector = LlmsTxtDetector(self.base_url)
    variants = detector.detect_all()  # NEW method

    downloaded = {}
    for variant_info in variants:
        url = variant_info['url']          # https://hono.dev/llms-full.txt
        variant = variant_info['variant']  # 'full', 'standard', 'small'

        downloader = LlmsTxtDownloader(url)
        content = downloader.download()
        if content:
            # ✨ FIX: Rename .txt → .md immediately
            clean_name = f"llms-{variant}.md"
            downloaded[variant] = {
                'content': content,
                'filename': clean_name
            }

    # 2. Save ALL variants (not just one)
    for variant, data in downloaded.items():
        path = os.path.join(self.skill_dir, "references", data['filename'])
        with open(path, 'w', encoding='utf-8') as f:
            f.write(data['content'])

    # 3. Generate catalog from smallest variant
    if 'small' in downloaded:
        self._generate_catalog(downloaded['small']['content'])

    return True
```
---
### Component 2: The Catalog System
**Purpose:** Lightweight index of what exists, not the content itself.
**File:** `assets/catalog.json`
**Structure:**
```json
{
  "metadata": {
    "framework": "hono",
    "version": "auto-detected",
    "generated": "2025-10-24T14:30:00Z",
    "total_sections": 93,
    "variants": {
      "quick": "llms-small.md",
      "standard": "llms.md",
      "complete": "llms-full.md"
    }
  },
  "sections": [
    {
      "id": "routing",
      "title": "Routing",
      "h1_marker": "# Routing",
      "topics": ["routes", "path", "params", "wildcard"],
      "size_bytes": 4800,
      "variants": ["quick", "complete"],
      "complexity": "beginner"
    },
    {
      "id": "middleware",
      "title": "Middleware",
      "h1_marker": "# Middleware",
      "topics": ["cors", "auth", "logging", "compression"],
      "size_bytes": 8200,
      "variants": ["quick", "complete"],
      "complexity": "intermediate"
    }
  ],
  "search_index": {
    "cors": ["middleware"],
    "routing": ["routing", "path-parameters"],
    "authentication": ["middleware", "jwt"],
    "context": ["context-handling"],
    "streaming": ["streaming-responses"]
  }
}
```
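To make the catalog's role concrete, here is a minimal consumer-side sketch, assuming exactly the layout above (the loose substring match mirrors what `scripts/search.py` does below):
```python
import json

# Minimal sketch: resolve a query against catalog.json (layout assumed above).
with open("assets/catalog.json", encoding="utf-8") as f:
    catalog = json.load(f)

query = "cors"
section_ids = set()
for keyword, ids in catalog["search_index"].items():
    if query in keyword or keyword in query:  # loose substring match
        section_ids.update(ids)

matches = [s for s in catalog["sections"] if s["id"] in section_ids]
for s in matches:
    print(s["title"], s["size_bytes"], "bytes")  # e.g. Middleware 8200 bytes
```
Only the few-KB catalog is read here; the actual section text stays on disk until a script extracts it.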
**Generation (from llms-small.md):**
```python
def _generate_catalog(self, llms_small_content):
    """Generate catalog.json from llms-small.md TOC"""
    catalog = {
        "metadata": {...},
        "sections": [],
        "search_index": {}
    }

    # Split by h1 headers
    sections = re.split(r'\n# ', llms_small_content)
    for section_text in sections[1:]:
        lines = section_text.split('\n')
        title = lines[0].strip()

        # Extract h2 topics
        topics = re.findall(r'^## (.+)$', section_text, re.MULTILINE)
        topics = [t.strip().lower() for t in topics]

        section_info = {
            "id": title.lower().replace(' ', '-'),
            "title": title,
            "h1_marker": f"# {title}",
            "topics": topics + [title.lower()],
            "size_bytes": len(section_text),
            "variants": ["quick", "complete"]
        }
        catalog["sections"].append(section_info)

        # Build search index
        for topic in section_info["topics"]:
            if topic not in catalog["search_index"]:
                catalog["search_index"][topic] = []
            catalog["search_index"][topic].append(section_info["id"])

    # Save to assets/catalog.json
    catalog_path = os.path.join(self.skill_dir, "assets", "catalog.json")
    with open(catalog_path, 'w', encoding='utf-8') as f:
        json.dump(catalog, f, indent=2)
```
---
### Component 3: Active Scripts
**Location:** `scripts/` directory (currently empty)
#### Script 1: `scripts/search.py`
**Purpose:** Search and return only relevant documentation sections.
```python
#!/usr/bin/env python3
"""
ABOUTME: Searches framework documentation and returns relevant sections
ABOUTME: Loads only what's needed - keeps agent context clean
"""
import json
import sys
from pathlib import Path


def search(query, detail="auto"):
    """
    Search documentation and return relevant sections.

    Args:
        query: Search term (e.g., "middleware", "cors", "routing")
        detail: "quick" | "standard" | "complete" | "auto"

    Returns:
        Markdown text of relevant sections only
    """
    # Load catalog
    catalog_path = Path(__file__).parent.parent / "assets" / "catalog.json"
    catalog = json.load(open(catalog_path))

    # 1. Find matching sections using search index
    query_lower = query.lower()
    matching_section_ids = set()
    for keyword, section_ids in catalog["search_index"].items():
        if query_lower in keyword or keyword in query_lower:
            matching_section_ids.update(section_ids)

    # Get section details
    matches = [s for s in catalog["sections"] if s["id"] in matching_section_ids]
    if not matches:
        return f"❌ No sections found for '{query}'. Try: python scripts/list_topics.py"

    # 2. Determine detail level
    if detail == "auto":
        # Use quick for overview, complete for deep dive
        total_size = sum(s["size_bytes"] for s in matches)
        if total_size > 50000:  # > 50k
            variant = "quick"
        else:
            variant = "complete"
    else:
        variant = detail
    # Fall back to the complete variant's filename for unknown detail levels
    variants_map = catalog["metadata"]["variants"]
    variant_file = variants_map.get(variant, variants_map["complete"])

    # 3. Load documentation file
    doc_path = Path(__file__).parent.parent / "references" / variant_file
    doc_content = open(doc_path, 'r', encoding='utf-8').read()

    # 4. Extract matched sections
    results = []
    for match in matches:
        h1_marker = match["h1_marker"]

        # Find section boundaries
        start = doc_content.find(h1_marker)
        if start == -1:
            continue

        # Find next h1 (or end of file)
        next_h1 = doc_content.find("\n# ", start + len(h1_marker))
        if next_h1 == -1:
            section_text = doc_content[start:]
        else:
            section_text = doc_content[start:next_h1]

        results.append({
            'title': match['title'],
            'size': len(section_text),
            'content': section_text
        })

    # 5. Format output
    output = [f"# Search Results for '{query}' ({len(results)} sections found)\n"]
    output.append(f"**Variant used:** {variant} ({variant_file})")
    output.append(f"**Total size:** {sum(r['size'] for r in results):,} bytes\n")
    output.append("---\n")
    for result in results:
        output.append(result['content'])
        output.append("\n---\n")
    return '\n'.join(output)


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python search.py <query> [detail]")
        print("Example: python search.py middleware")
        print("Example: python search.py routing --detail quick")
        sys.exit(1)

    query = sys.argv[1]
    # Accept both "routing quick" and "routing --detail quick" forms
    extra = [a for a in sys.argv[2:] if a != "--detail"]
    detail = extra[0] if extra else "auto"
    print(search(query, detail))
```
#### Script 2: `scripts/list_topics.py`
**Purpose:** Show all available documentation sections.
```python
#!/usr/bin/env python3
"""
ABOUTME: Lists all available documentation sections with sizes
ABOUTME: Helps agent discover what documentation exists
"""
import json
from pathlib import Path


def list_topics():
    """List all available documentation sections."""
    catalog_path = Path(__file__).parent.parent / "assets" / "catalog.json"
    catalog = json.load(open(catalog_path))

    print(f"# Available Documentation Topics ({catalog['metadata']['framework']})\n")
    print(f"**Total sections:** {catalog['metadata']['total_sections']}")
    print(f"**Variants:** {', '.join(catalog['metadata']['variants'].keys())}\n")
    print("---\n")

    # Group by complexity if available
    by_complexity = {}
    for section in catalog["sections"]:
        complexity = section.get("complexity", "general")
        if complexity not in by_complexity:
            by_complexity[complexity] = []
        by_complexity[complexity].append(section)

    for complexity in ["beginner", "intermediate", "advanced", "general"]:
        if complexity not in by_complexity:
            continue
        sections = by_complexity[complexity]
        print(f"## {complexity.title()} ({len(sections)} sections)\n")
        for section in sections:
            size_kb = section["size_bytes"] / 1024
            topics_str = ", ".join(section["topics"][:3])
            print(f"- **{section['title']}** ({size_kb:.1f}k)")
            print(f"  Topics: {topics_str}")
            print(f"  Search: `python scripts/search.py {section['id']}`\n")


if __name__ == "__main__":
    list_topics()
```
#### Script 3: `scripts/get_section.py`
**Purpose:** Extract a complete section by exact title.
```python
#!/usr/bin/env python3
"""
ABOUTME: Extracts a complete documentation section by title
ABOUTME: Returns full section from llms-full.md (no truncation)
"""
import json
import sys
from pathlib import Path


def get_section(title, variant="complete"):
    """
    Get a complete section by exact title.

    Args:
        title: Section title (e.g., "Middleware", "Routing")
        variant: Which file to use (quick/standard/complete)

    Returns:
        Complete section content
    """
    catalog_path = Path(__file__).parent.parent / "assets" / "catalog.json"
    catalog = json.load(open(catalog_path))

    # Find section
    section = None
    for s in catalog["sections"]:
        if s["title"].lower() == title.lower():
            section = s
            break
    if not section:
        return f"❌ Section '{title}' not found. Try: python scripts/list_topics.py"

    # Load doc (fall back to the complete variant's filename for unknown variants)
    variants_map = catalog["metadata"]["variants"]
    variant_file = variants_map.get(variant, variants_map["complete"])
    doc_path = Path(__file__).parent.parent / "references" / variant_file
    doc_content = open(doc_path, 'r', encoding='utf-8').read()

    # Extract section
    h1_marker = section["h1_marker"]
    start = doc_content.find(h1_marker)
    if start == -1:
        return f"❌ Section '{title}' not found in {variant_file}"

    next_h1 = doc_content.find("\n# ", start + len(h1_marker))
    if next_h1 == -1:
        section_text = doc_content[start:]
    else:
        section_text = doc_content[start:next_h1]
    return section_text


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python get_section.py <title> [variant]")
        print("Example: python get_section.py Middleware")
        print("Example: python get_section.py Routing quick")
        sys.exit(1)

    title = sys.argv[1]
    variant = sys.argv[2] if len(sys.argv) > 2 else "complete"
    print(get_section(title, variant))
```
---
### Component 4: Active SKILL.md Template
**New template for llms.txt-based skills:**
````markdown
---
name: {name}
description: {description}
type: active
---

# {Name} Skill

**⚡ This is an ACTIVE skill** - Uses scripts to load documentation on-demand instead of dumping everything into context.

## 🎯 Strategy: Demand-Driven Documentation

**Traditional approach:**
- Load 300k+ documentation into context
- Agent reads everything to answer one question
- Context bloat, slower performance

**Active approach:**
- Load 5-10k of relevant sections on-demand
- Agent calls scripts to fetch what's needed
- Clean context, faster performance

## 📚 Available Documentation

This skill provides access to {num_sections} documentation sections across 3 detail levels:

- **Quick Reference** (`llms-small.md`): {small_size}k - Curated essentials
- **Standard** (`llms.md`): {standard_size}k - Core concepts
- **Complete** (`llms-full.md`): {full_size}k - Everything

## 🔧 Tools Available

### 1. Search Documentation

Find and load only relevant sections:

```bash
python scripts/search.py "middleware"
python scripts/search.py "routing" --detail quick
```

**Returns:** 5-10k of relevant content (not 300k!)

### 2. List All Topics

See what documentation exists:

```bash
python scripts/list_topics.py
```

**Returns:** Table of contents with section sizes and search hints

### 3. Get Complete Section

Extract a full section by title:

```bash
python scripts/get_section.py "Middleware"
python scripts/get_section.py "Routing" quick
```

**Returns:** Complete section from chosen variant

## 💡 Recommended Workflow

1. **Discover:** `python scripts/list_topics.py` to see what's available
2. **Search:** `python scripts/search.py "your topic"` to find relevant sections
3. **Deep Dive:** Use returned content to answer questions in detail
4. **Iterate:** Search more specific topics as needed

## ⚠️ Important

**DON'T:** Read `references/*.md` files directly into context
**DO:** Use scripts to fetch only what you need

This keeps your context clean and focused!

## 📊 Index

Complete section catalog available in `assets/catalog.json` with search mappings and size information.

## 🔄 Updating

To refresh with latest documentation:

```bash
python3 cli/doc_scraper.py --config configs/{name}.json
```
````
---
## Implementation Plan
### Phase 1: Foundation (Quick Fixes)
**Tasks:**
1. Fix `.txt` → `.md` renaming in downloader
2. Download all 3 variants (not just one)
3. Store all variants in `references/` with correct names
4. Remove content truncation (2500 chars → unlimited)
**Time:** 1-2 hours
**Files:** `cli/doc_scraper.py`, `cli/llms_txt_downloader.py`
### Phase 2: Catalog System
**Tasks:**
1. Implement `_generate_catalog()` method
2. Parse llms-small.md to extract sections
3. Build search index from topics
4. Generate `assets/catalog.json`
**Time:** 2-3 hours
**Files:** `cli/doc_scraper.py`
### Phase 3: Active Scripts
**Tasks:**
1. Create `scripts/search.py`
2. Create `scripts/list_topics.py`
3. Create `scripts/get_section.py`
4. Make scripts executable (`chmod +x`)
**Time:** 2-3 hours
**Files:** New scripts in `scripts/` template directory
### Phase 4: Template Updates
**Tasks:**
1. Create new active SKILL.md template
2. Update `create_enhanced_skill_md()` to use active template for llms.txt skills
3. Update documentation to explain active skills
**Time:** 1 hour
**Files:** `cli/doc_scraper.py`, `README.md`, `CLAUDE.md`
### Phase 5: Testing & Refinement
**Tasks:**
1. Test with Hono skill (has all 3 variants)
2. Test search accuracy
3. Measure context reduction
4. Document examples
**Time:** 2-3 hours
**Total Estimated Time:** 8-12 hours
---
## Migration Path
### Backward Compatibility
**Existing skills:** No changes (passive skills still work)
**New llms.txt skills:** Automatically use active architecture
**User choice:** Can disable via config flag
### Config Option
```json
{
  "name": "hono",
  "llms_txt_url": "https://hono.dev/llms-full.txt",
  "active_skill": true,  // NEW: Enable active architecture (default: true)
  "base_url": "https://hono.dev/docs"
}
```
### Detection Logic
```python
# In _try_llms_txt()
active_mode = self.config.get('active_skill', True)  # Default true

if active_mode:
    # Download all variants, generate catalog, create scripts
    self._build_active_skill(downloaded)
else:
    # Traditional: single file, no scripts
    self._build_passive_skill(downloaded)
```
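`_build_active_skill()` and `_build_passive_skill()` do not exist yet; the following stubs are a hypothetical sketch of the intended split (every helper name here is an assumption, not current `doc_scraper.py` API):
```python
# Hypothetical sketch only - methods as if inside DocumentationScraper.
def _build_active_skill(self, downloaded: dict) -> None:
    """Assumed flow: save every variant, build the catalog, copy scripts."""
    for variant, data in downloaded.items():
        self._save_reference(data['filename'], data['content'])  # assumed helper
    if 'small' in downloaded:
        self._generate_catalog(downloaded['small']['content'])
    self._copy_script_templates()  # assumed: installs search.py, list_topics.py, get_section.py

def _build_passive_skill(self, downloaded: dict) -> None:
    """Assumed flow: keep today's behavior - one reference file, no scripts."""
    variant, data = max(downloaded.items(), key=lambda kv: len(kv[1]['content']))
    self._save_reference(data['filename'], data['content'])  # assumed helper
```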
---
## Benefits Analysis
### Context Efficiency
| Scenario | Passive Skill | Active Skill | Improvement |
|----------|---------------|--------------|-------------|
| Simple query | 203k loaded | 5k loaded | **40x reduction** |
| Multi-topic query | 203k loaded | 15k loaded | **13x reduction** |
| Deep dive | 203k loaded | 30k loaded | **6x reduction** |
### Data Fidelity
| Aspect | Passive | Active |
|--------|---------|--------|
| Content truncation | 36% lost | 0% lost |
| Code truncation | 600 chars max | Unlimited |
| Variants available | 1 | 3 |
### Agent Capabilities
**Passive Skills:**
- ❌ Cannot choose detail level
- ❌ Cannot search efficiently
- ❌ Must read entire context
- ❌ Limited by context window
**Active Skills:**
- ✅ Chooses appropriate detail level
- ✅ Searches catalog efficiently
- ✅ Loads only what's needed
- ✅ Unlimited documentation access
---
## Trade-offs
### Advantages
1. **Massive context reduction** (20-40x less per query)
2. **No content loss** (all 3 variants preserved)
3. **Correct file format** (.md not .txt)
4. **Agent autonomy** (tools to fetch docs)
5. **Scalable** (works with 1MB+ docs)
### Disadvantages
1. **Complexity** (scripts + catalog vs simple files)
2. **Initial overhead** (catalog generation)
3. **Agent learning curve** (must learn to use scripts)
4. **Dependency** (Python required to run scripts)
### Risk Mitigation
**Risk:** Scripts don't work in Claude's sandbox
**Mitigation:** Test thoroughly, provide fallback to passive mode
**Risk:** Catalog generation fails
**Mitigation:** Graceful degradation to single-file mode
**Risk:** Agent doesn't use scripts
**Mitigation:** Clear SKILL.md instructions, examples in quick reference
---
## Success Metrics
### Technical Metrics
- ✅ Context per query < 20k (down from 203k)
- ✅ All 3 variants downloaded and named correctly
- ✅ 0% content truncation
- ✅ Catalog generation < 5 seconds
- ✅ Search script < 1 second response time
### User Experience Metrics
- ✅ Agent successfully uses scripts without prompting
- ✅ Answers are equally or more accurate than passive mode
- ✅ Agent can handle queries about all documentation sections
- ✅ No "context limit exceeded" errors
---
## Future Enhancements
### Phase 6: Smart Caching
Cache frequently accessed sections in SKILL.md quick reference:
```python
# Track access frequency in catalog.json
"sections": [
    {
        "id": "middleware",
        "access_count": 47,  # NEW: Track usage
        "last_accessed": "2025-10-24T14:30:00Z"
    }
]
# Include top 10 most-accessed sections directly in SKILL.md
```
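A minimal sketch of the tracking side, assuming the catalog fields above (the write-back helper and its name are assumptions):
```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_access(catalog_path: Path, section_ids: set) -> None:
    """Sketch: bump access_count for each section a search returned."""
    catalog = json.loads(catalog_path.read_text(encoding="utf-8"))
    now = datetime.now(timezone.utc).isoformat()
    for section in catalog["sections"]:
        if section["id"] in section_ids:
            section["access_count"] = section.get("access_count", 0) + 1
            section["last_accessed"] = now
    catalog_path.write_text(json.dumps(catalog, indent=2), encoding="utf-8")
```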
### Phase 7: Semantic Search
Use embeddings for better search:
```python
# Generate embeddings for each section
"sections": [
    {
        "id": "middleware",
        "embedding": [...],  # NEW: Vector embedding
        "topics": ["cors", "auth"]
    }
]
# In search.py: Use cosine similarity for better matches
```
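The similarity step itself needs no external dependencies; a sketch, assuming embeddings are stored as plain float lists as shown above (the embedding model is out of scope):
```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_sections(query_embedding, sections, top_k=3):
    """Sketch: rank catalog sections by similarity to the query embedding."""
    scored = [(cosine_similarity(query_embedding, s["embedding"]), s)
              for s in sections if s.get("embedding")]
    return [s for _, s in sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]]
```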
### Phase 8: Progressive Loading
Load increasingly detailed docs:
```python
# First: Load llms.md (5.4k - overview)
# If insufficient: Load llms-small.md section (15k)
# If still insufficient: Load llms-full.md section (30k)
```
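Sketched as an escalation loop (the sufficiency check is a placeholder assumption; in practice the agent decides when to escalate):
```python
# Sketch: try cheaper variants first, escalate only when more detail is needed.
# Assumes it runs next to scripts/search.py; search() is the function defined there.
from search import search

ESCALATION_ORDER = ["standard", "quick", "complete"]  # llms.md → llms-small.md → llms-full.md

def progressive_search(query: str, is_sufficient) -> str:
    """is_sufficient: caller-supplied predicate (an assumption) for 'good enough'."""
    result = ""
    for variant in ESCALATION_ORDER:
        result = search(query, detail=variant)
        if is_sufficient(result):
            break
    return result
```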
---
## Conclusion
Active skills represent a fundamental shift from **documentation repositories** to **documentation routers**. By treating skills as intelligent intermediaries rather than static dumps, we can:
1. **Eliminate context bloat** (40x reduction)
2. **Preserve full fidelity** (0% truncation)
3. **Enable agent autonomy** (tools to fetch docs)
4. **Scale indefinitely** (no size limits)
This design maintains backward compatibility while unlocking new capabilities for modern, LLM-optimized documentation sources like llms.txt.
**Recommendation:** Implement in phases, starting with foundation fixes, then catalog system, then active scripts. Test thoroughly with Hono before making it the default for all llms.txt-based skills.
---
## References
- Original brainstorming session: 2025-10-24
- llms.txt convention: https://llmstxt.org/
- Hono example: https://hono.dev/llms-full.txt
- Skill_Seekers repository: Current project
---
## Appendix: Example Workflows
### Example 1: Agent Searches for "Middleware"
```bash
# Agent runs:
python scripts/search.py "middleware"
# Script returns ~8k of middleware documentation from llms-full.md
# Agent uses that 8k to answer the question
# Total context used: 8k (not 319k!)
```
### Example 2: Agent Explores Documentation
```bash
# 1. Agent lists topics
python scripts/list_topics.py
# Returns: Table of contents (2k)
# 2. Agent picks a topic
python scripts/get_section.py "Routing"
# Returns: Complete Routing section (5k)
# 3. Agent searches related topics
python scripts/search.py "path parameters"
# Returns: Routing + Path section (7k)
# Total context used across 3 queries: 14k (not 3 × 319k = 957k!)
```
### Example 3: Agent Needs Quick Answer
```bash
# Agent uses quick variant for overview
python scripts/search.py "cors" --detail quick
# Returns: Short CORS explanation from llms-small.md (2k)
# If insufficient, agent can follow up with:
python scripts/get_section.py "Middleware" # Full section from llms-full.md
```
---
**Document Status:** Ready for review and implementation planning.


@@ -1,682 +0,0 @@
# Active Skills Phase 1: Foundation Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Fix fundamental issues in llms.txt handling: rename .txt → .md, download all 3 variants, and remove content truncation.
**Architecture:** Modify existing llms.txt download/parse/build workflow to handle multiple variants correctly, store with proper extensions, and preserve complete content without truncation.
**Tech Stack:** Python 3.10+, requests, BeautifulSoup4, existing Skill_Seekers architecture
---
## Task 1: Add Multi-Variant Detection
**Files:**
- Modify: `cli/llms_txt_detector.py`
- Test: `tests/test_llms_txt_detector.py`
**Step 1: Write failing test for detect_all() method**
```python
# tests/test_llms_txt_detector.py (add new test)
def test_detect_all_variants():
    """Test detecting all llms.txt variants"""
    from unittest.mock import patch, Mock

    detector = LlmsTxtDetector("https://hono.dev/docs")

    with patch('cli.llms_txt_detector.requests.head') as mock_head:
        # Mock responses for different variants
        def mock_response(url, **kwargs):
            response = Mock()
            # All 3 variants exist for Hono
            if 'llms-full.txt' in url or 'llms.txt' in url or 'llms-small.txt' in url:
                response.status_code = 200
            else:
                response.status_code = 404
            return response

        mock_head.side_effect = mock_response

        variants = detector.detect_all()

        assert len(variants) == 3
        assert any(v['variant'] == 'full' for v in variants)
        assert any(v['variant'] == 'standard' for v in variants)
        assert any(v['variant'] == 'small' for v in variants)
        assert all('url' in v for v in variants)
```
**Step 2: Run test to verify it fails**
Run: `source .venv/bin/activate && pytest tests/test_llms_txt_detector.py::test_detect_all_variants -v`
Expected: FAIL with "AttributeError: 'LlmsTxtDetector' object has no attribute 'detect_all'"
**Step 3: Implement detect_all() method**
```python
# cli/llms_txt_detector.py (add new method)
def detect_all(self) -> List[Dict[str, str]]:
    """
    Detect all available llms.txt variants.

    Returns:
        List of dicts with 'url' and 'variant' keys for each found variant
    """
    found_variants = []
    for filename, variant in self.VARIANTS:
        parsed = urlparse(self.base_url)
        root_url = f"{parsed.scheme}://{parsed.netloc}"
        url = f"{root_url}/{filename}"
        if self._check_url_exists(url):
            found_variants.append({
                'url': url,
                'variant': variant
            })
    return found_variants
```
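This method assumes `self.VARIANTS` and `self._check_url_exists()` already exist on `LlmsTxtDetector`; if they do not, here is a hypothetical sketch of what they would look like:
```python
# Hypothetical sketch - only needed if LlmsTxtDetector lacks these members.
import requests

VARIANTS = [  # (filename, variant) pairs, matching the loop in detect_all()
    ("llms-full.txt", "full"),
    ("llms.txt", "standard"),
    ("llms-small.txt", "small"),
]

def _check_url_exists(self, url: str) -> bool:
    """HEAD request; treat any 2xx as 'variant exists'."""
    try:
        response = requests.head(url, timeout=10, allow_redirects=True)
        return 200 <= response.status_code < 300
    except requests.RequestException:
        return False
```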
**Step 4: Add import for List and Dict at top of file**
```python
# cli/llms_txt_detector.py (add to imports)
from typing import Optional, Dict, List
```
**Step 5: Run test to verify it passes**
Run: `source .venv/bin/activate && pytest tests/test_llms_txt_detector.py::test_detect_all_variants -v`
Expected: PASS
**Step 6: Commit**
```bash
git add cli/llms_txt_detector.py tests/test_llms_txt_detector.py
git commit -m "feat: add detect_all() for multi-variant detection"
```
---
## Task 2: Add File Extension Renaming to Downloader
**Files:**
- Modify: `cli/llms_txt_downloader.py`
- Test: `tests/test_llms_txt_downloader.py`
**Step 1: Write failing test for get_proper_filename() method**
```python
# tests/test_llms_txt_downloader.py (add new test)
def test_get_proper_filename():
    """Test filename conversion from .txt to .md"""
    downloader = LlmsTxtDownloader("https://hono.dev/llms-full.txt")
    filename = downloader.get_proper_filename()
    assert filename == "llms-full.md"
    assert not filename.endswith('.txt')


def test_get_proper_filename_standard():
    """Test standard variant naming"""
    downloader = LlmsTxtDownloader("https://hono.dev/llms.txt")
    filename = downloader.get_proper_filename()
    assert filename == "llms.md"


def test_get_proper_filename_small():
    """Test small variant naming"""
    downloader = LlmsTxtDownloader("https://hono.dev/llms-small.txt")
    filename = downloader.get_proper_filename()
    assert filename == "llms-small.md"
```
**Step 2: Run test to verify it fails**
Run: `source .venv/bin/activate && pytest tests/test_llms_txt_downloader.py::test_get_proper_filename -v`
Expected: FAIL with "AttributeError: 'LlmsTxtDownloader' object has no attribute 'get_proper_filename'"
**Step 3: Implement get_proper_filename() method**
```python
# cli/llms_txt_downloader.py (add new method)
def get_proper_filename(self) -> str:
    """
    Extract filename from URL and convert .txt to .md

    Returns:
        Proper filename with .md extension

    Examples:
        https://hono.dev/llms-full.txt -> llms-full.md
        https://hono.dev/llms.txt -> llms.md
        https://hono.dev/llms-small.txt -> llms-small.md
    """
    # Extract filename from URL
    from urllib.parse import urlparse
    parsed = urlparse(self.url)
    filename = parsed.path.split('/')[-1]

    # Replace .txt with .md
    if filename.endswith('.txt'):
        filename = filename[:-4] + '.md'
    return filename
```
**Step 4: Run test to verify it passes**
Run: `source .venv/bin/activate && pytest tests/test_llms_txt_downloader.py::test_get_proper_filename -v`
Expected: PASS (all 3 tests)
**Step 5: Commit**
```bash
git add cli/llms_txt_downloader.py tests/test_llms_txt_downloader.py
git commit -m "feat: add get_proper_filename() for .txt to .md conversion"
```
---
## Task 3: Update _try_llms_txt() to Download All Variants
**Files:**
- Modify: `cli/doc_scraper.py:337-384` (_try_llms_txt method)
- Test: `tests/test_integration.py`
**Step 1: Write failing test for multi-variant download**
```python
# tests/test_integration.py (add to TestFullLlmsTxtWorkflow class)
def test_multi_variant_download(self):
    """Test downloading all 3 llms.txt variants"""
    from unittest.mock import patch, Mock
    import tempfile
    import os

    config = {
        'name': 'test-multi-variant',
        'base_url': 'https://hono.dev/docs'
    }

    # Mock all 3 variants
    sample_full = "# Full\n" + "x" * 1000
    sample_standard = "# Standard\n" + "x" * 200
    sample_small = "# Small\n" + "x" * 500

    with tempfile.TemporaryDirectory() as tmpdir:
        with patch('cli.llms_txt_detector.requests.head') as mock_head, \
             patch('cli.llms_txt_downloader.requests.get') as mock_get:
            # Mock detection (all exist)
            mock_head_response = Mock()
            mock_head_response.status_code = 200
            mock_head.return_value = mock_head_response

            # Mock downloads
            def mock_download(url, **kwargs):
                response = Mock()
                response.status_code = 200
                if 'llms-full.txt' in url:
                    response.text = sample_full
                elif 'llms-small.txt' in url:
                    response.text = sample_small
                else:  # llms.txt
                    response.text = sample_standard
                return response

            mock_get.side_effect = mock_download

            # Run scraper
            scraper = DocumentationScraper(config, dry_run=False)
            result = scraper._try_llms_txt()

            # Verify all 3 files created
            refs_dir = os.path.join(scraper.skill_dir, 'references')
            assert os.path.exists(os.path.join(refs_dir, 'llms-full.md'))
            assert os.path.exists(os.path.join(refs_dir, 'llms.md'))
            assert os.path.exists(os.path.join(refs_dir, 'llms-small.md'))

            # Verify content not truncated
            with open(os.path.join(refs_dir, 'llms-full.md')) as f:
                content = f.read()
            assert len(content) == len(sample_full)
```
**Step 2: Run test to verify it fails**
Run: `source .venv/bin/activate && pytest tests/test_integration.py::TestFullLlmsTxtWorkflow::test_multi_variant_download -v`
Expected: FAIL - only one file created, not all 3
**Step 3: Modify _try_llms_txt() to use detect_all()**
```python
# cli/doc_scraper.py (replace _try_llms_txt method, lines 337-384)
def _try_llms_txt(self) -> bool:
    """
    Try to use llms.txt instead of HTML scraping.
    Downloads ALL available variants and stores with .md extension.

    Returns:
        True if llms.txt was found and processed successfully
    """
    print(f"\n🔍 Checking for llms.txt at {self.base_url}...")

    # Check for explicit config URL first
    explicit_url = self.config.get('llms_txt_url')
    if explicit_url:
        print(f"\n📌 Using explicit llms_txt_url from config: {explicit_url}")
        downloader = LlmsTxtDownloader(explicit_url)
        content = downloader.download()
        if content:
            # Save with proper .md extension
            filename = downloader.get_proper_filename()
            filepath = os.path.join(self.skill_dir, "references", filename)
            os.makedirs(os.path.dirname(filepath), exist_ok=True)
            with open(filepath, 'w', encoding='utf-8') as f:
                f.write(content)
            print(f"   💾 Saved {filename} ({len(content)} chars)")

            # Parse and save pages
            parser = LlmsTxtParser(content)
            pages = parser.parse()
            if pages:
                for page in pages:
                    self.save_page(page)
                    self.pages.append(page)
                self.llms_txt_detected = True
                self.llms_txt_variant = 'explicit'
                return True

    # Auto-detection: Find ALL variants
    detector = LlmsTxtDetector(self.base_url)
    variants = detector.detect_all()
    if not variants:
        print("   No llms.txt found, using HTML scraping")
        return False

    print(f"✅ Found {len(variants)} llms.txt variant(s)")

    # Download ALL variants
    downloaded = {}
    for variant_info in variants:
        url = variant_info['url']
        variant = variant_info['variant']
        print(f"   📥 Downloading {variant}...")

        downloader = LlmsTxtDownloader(url)
        content = downloader.download()
        if content:
            filename = downloader.get_proper_filename()
            downloaded[variant] = {
                'content': content,
                'filename': filename,
                'size': len(content)
            }
            print(f"      ✓ {filename} ({len(content)} chars)")

    if not downloaded:
        print("⚠️ Failed to download any variants, falling back to HTML scraping")
        return False

    # Save ALL variants to references/
    os.makedirs(os.path.join(self.skill_dir, "references"), exist_ok=True)
    for variant, data in downloaded.items():
        filepath = os.path.join(self.skill_dir, "references", data['filename'])
        with open(filepath, 'w', encoding='utf-8') as f:
            f.write(data['content'])
        print(f"   💾 Saved {data['filename']}")

    # Parse LARGEST variant for skill building
    largest = max(downloaded.items(), key=lambda x: x[1]['size'])
    print(f"\n📄 Parsing {largest[1]['filename']} for skill building...")
    parser = LlmsTxtParser(largest[1]['content'])
    pages = parser.parse()
    if not pages:
        print("⚠️ Failed to parse llms.txt, falling back to HTML scraping")
        return False

    print(f"   ✓ Parsed {len(pages)} sections")

    # Save pages for skill building
    for page in pages:
        self.save_page(page)
        self.pages.append(page)

    self.llms_txt_detected = True
    self.llms_txt_variants = list(downloaded.keys())
    return True
```
**Step 4: Add llms_txt_variants attribute to __init__**
```python
# cli/doc_scraper.py (in __init__ method, after llms_txt_variant line)
self.llms_txt_variants = [] # Track all downloaded variants
```
**Step 5: Run test to verify it passes**
Run: `source .venv/bin/activate && pytest tests/test_integration.py::TestFullLlmsTxtWorkflow::test_multi_variant_download -v`
Expected: PASS
**Step 6: Commit**
```bash
git add cli/doc_scraper.py tests/test_integration.py
git commit -m "feat: download all llms.txt variants with proper .md extension"
```
---
## Task 4: Remove Content Truncation
**Files:**
- Modify: `cli/doc_scraper.py:714-730` (create_reference_file method)
**Step 1: Write failing test for no truncation**
```python
# tests/test_integration.py (add new test)
def test_no_content_truncation():
    """Test that content is NOT truncated in reference files"""
    import os

    config = {
        'name': 'test-no-truncate',
        'base_url': 'https://example.com/docs'
    }

    # Create scraper with long content
    scraper = DocumentationScraper(config, dry_run=False)

    # Create page with content > 2500 chars
    long_content = "x" * 5000
    long_code = "y" * 1000
    pages = [{
        'title': 'Long Page',
        'url': 'https://example.com/long',
        'content': long_content,
        'code_samples': [
            {'code': long_code, 'language': 'python'}
        ],
        'headings': []
    }]

    # Create reference file
    scraper.create_reference_file('test', pages)

    # Verify no truncation
    ref_file = os.path.join(scraper.skill_dir, 'references', 'test.md')
    with open(ref_file, 'r') as f:
        content = f.read()

    assert long_content in content   # Full content included
    assert long_code in content      # Full code included
    assert '[Content truncated]' not in content
    assert '...' not in content      # No "..." truncation suffix
```
**Step 2: Run test to verify it fails**
Run: `source .venv/bin/activate && pytest tests/test_integration.py::test_no_content_truncation -v`
Expected: FAIL - content contains "[Content truncated]" or "..."
**Step 3: Remove truncation from create_reference_file()**
```python
# cli/doc_scraper.py (modify create_reference_file method, lines 712-731)

# OLD (lines 714-716):
#     if page.get('content'):
#         content = page['content'][:2500]
#         if len(page['content']) > 2500:
#             content += "\n\n*[Content truncated]*"

# NEW (replace with):
if page.get('content'):
    content = page['content']  # NO TRUNCATION
    lines.append(content)
    lines.append("")

# OLD (lines 728-730):
#     lines.append(code[:600])
#     if len(code) > 600:
#         lines.append("...")

# NEW (replace with):
lines.append(code)  # NO TRUNCATION - no "..." suffix
```
**Complete replacement of lines 712-731:**
```python
# cli/doc_scraper.py:712-731 (complete replacement)

# Content (NO TRUNCATION)
if page.get('content'):
    lines.append(page['content'])
    lines.append("")

# Code examples with language (NO TRUNCATION)
if page.get('code_samples'):
    lines.append("**Examples:**\n")
    for i, sample in enumerate(page['code_samples'][:4], 1):
        # Samples may be dicts or raw strings; handle both safely
        if isinstance(sample, dict):
            lang = sample.get('language', 'unknown')
            code = sample.get('code', '')
        else:
            lang = 'unknown'
            code = str(sample)
        lines.append(f"Example {i} ({lang}):")
        lines.append(f"```{lang}")
        lines.append(code)  # Full code, no truncation
        lines.append("```\n")
```
**Step 4: Run test to verify it passes**
Run: `source .venv/bin/activate && pytest tests/test_integration.py::test_no_content_truncation -v`
Expected: PASS
**Step 5: Run full test suite to check for regressions**
Run: `source .venv/bin/activate && pytest tests/ -v`
Expected: All 201+ tests pass
**Step 6: Commit**
```bash
git add cli/doc_scraper.py tests/test_integration.py
git commit -m "feat: remove content truncation in reference files"
```
---
## Task 5: Update Documentation
**Files:**
- Modify: `docs/plans/2025-10-24-active-skills-design.md`
- Modify: `CHANGELOG.md`
**Step 1: Update design doc status**
```markdown
# docs/plans/2025-10-24-active-skills-design.md (update header)
**Status:** Phase 1 Implemented ✅
```
**Step 2: Add CHANGELOG entry**
```markdown
# CHANGELOG.md (add new section at top)
## [Unreleased]
### Added - Phase 1: Active Skills Foundation
- Multi-variant llms.txt detection: downloads all 3 variants (full, standard, small)
- Automatic .txt → .md file extension conversion
- No content truncation: preserves complete documentation
- `detect_all()` method for finding all llms.txt variants
- `get_proper_filename()` for correct .md naming
### Changed
- `_try_llms_txt()` now downloads all available variants instead of just one
- Reference files now contain complete content (no 2500 char limit)
- Code samples now include full code (no 600 char limit)
### Fixed
- File extension bug: llms.txt files now saved as .md
- Content loss: 0% truncation (was 36%)
```
**Step 3: Commit**
```bash
git add docs/plans/2025-10-24-active-skills-design.md CHANGELOG.md
git commit -m "docs: update status for Phase 1 completion"
```
---
## Task 6: Manual Verification
**Files:**
- None (manual testing)
**Step 1: Test with Hono config**
Run: `source .venv/bin/activate && python3 cli/doc_scraper.py --config configs/hono.json`
**Expected output:**
```
🔍 Checking for llms.txt at https://hono.dev/docs...
📌 Using explicit llms_txt_url from config: https://hono.dev/llms-full.txt
💾 Saved llms-full.md (319000 chars)
📄 Parsing llms-full.md for skill building...
✓ Parsed 93 sections
✅ Used llms.txt (explicit) - skipping HTML scraping
```
**Step 2: Verify all 3 files exist with correct extensions**
Run: `ls -lah output/hono/references/llms*.md`
Expected:
```
llms-full.md 319k
llms.md 5.4k
llms-small.md 176k
```
**Step 3: Verify no truncation in reference files**
Run: `grep -c "Content truncated" output/hono/references/*.md`
Expected: 0 matches (no truncation messages)
**Step 4: Check file sizes are correct**
Run: `wc -c output/hono/references/llms-full.md`
Expected: Should match original download size (~319k), not reduced to 203k
**Step 5: Verify all tests still pass**
Run: `source .venv/bin/activate && pytest tests/ -v`
Expected: All tests pass (201+)
---
## Completion Checklist
- [ ] Task 1: Multi-variant detection (detect_all)
- [ ] Task 2: File extension renaming (get_proper_filename)
- [ ] Task 3: Download all variants (_try_llms_txt)
- [ ] Task 4: Remove truncation (create_reference_file)
- [ ] Task 5: Update documentation
- [ ] Task 6: Manual verification
- [ ] All tests passing
- [ ] No regressions in existing functionality
---
## Success Criteria
**Technical:**
- ✅ All 3 variants downloaded when available
- ✅ Files saved with .md extension (not .txt)
- ✅ 0% content truncation (was 36%)
- ✅ All existing tests pass
- ✅ New tests cover all changes
**User Experience:**
- ✅ Hono skill has all 3 files: llms-full.md, llms.md, llms-small.md
- ✅ Reference files contain complete documentation
- ✅ No "[Content truncated]" messages in output
---
## Related Skills
- @superpowers:test-driven-development - Used throughout for TDD approach
- @superpowers:verification-before-completion - Used in Task 6 for manual verification
---
## Notes
- This plan implements Phase 1 from `docs/plans/2025-10-24-active-skills-design.md`
- Phase 2 (Catalog System) and Phase 3 (Active Scripts) will be separate plans
- All changes maintain backward compatibility with existing HTML scraping
- File extension fix (.txt → .md) is critical for proper skill functionality
---
## Estimated Time
- Task 1: 15 minutes
- Task 2: 15 minutes
- Task 3: 30 minutes
- Task 4: 20 minutes
- Task 5: 10 minutes
- Task 6: 15 minutes
**Total: ~1.5 hours**