release: v1.38.0 with continue-claude-work and skill-creator enhancements

## New Skill: continue-claude-work (v1.1.0)
- Recover actionable context from local `.claude` session artifacts
- Compact-boundary-aware extraction (reads Claude's own compaction summaries)
- Subagent workflow recovery (reports completed vs interrupted subagents)
- Session end reason detection (clean exit, interrupted, error cascade, abandoned)
- Size-adaptive strategy for small/large sessions
- Noise filtering (skips 37-53% of session lines)
- Self-session exclusion, stale index fallback, MEMORY.md integration
- Bundled Python script (no external dependencies)
- Security scan passed, argument-hint added
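The compact-boundary and noise-filtering ideas above can be sketched as follows. This is a hypothetical illustration only — the JSONL layout, the `compact_boundary` type tag, and the noise categories are all assumptions, not the skill's actual implementation:

```python
import json

# Assumed noise categories; the real skill reportedly filters 37-53% of lines.
NOISE_TYPES = {"progress", "file-read", "tool-noise"}

def extract_after_last_compaction(lines):
    """Return non-noise entries recorded after the last compact boundary.

    Assumes each session line is a JSON object with a "type" field and that
    compaction summaries are tagged type == "compact_boundary". Context
    before the boundary is already summarized, so only the tail is kept.
    """
    entries = [json.loads(line) for line in lines if line.strip()]
    last_boundary = -1
    for i, entry in enumerate(entries):
        if entry.get("type") == "compact_boundary":
            last_boundary = i
    tail = entries[last_boundary + 1:]
    # Noise filtering: drop low-signal entry types.
    return [e for e in tail if e.get("type") not in NOISE_TYPES]
```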

## Skill Updates
- **skill-creator** (v1.5.0): Complete rewrite with evaluation framework
  - Added agents/ (analyzer, comparator, grader)
  - Added eval-viewer/ (generate_review.py, viewer.html)
  - Added scripts/ (run_eval, aggregate_benchmark, improve_description, run_loop)
  - Added references/schemas.md (eval/benchmark schemas)
  - Expanded SKILL.md with inline vs fork guidance, progressive disclosure patterns
  - Enhanced package_skill.py and quick_validate.py

- **transcript-fixer** (v1.2.0): CLI improvements and test coverage
  - Enhanced argument_parser.py and commands.py
  - Added correction_service.py improvements
  - Added test_correction_service.py

- **tunnel-doctor** (v1.4.0): Quick diagnostic script
  - Added scripts/quick_diagnose.py
  - Enhanced SKILL.md with 5-layer conflict model

- **pdf-creator** (v1.1.0): Auto DYLD_LIBRARY_PATH + rendering fixes
  - Auto-detect and set DYLD_LIBRARY_PATH for weasyprint
  - Fixed list rendering and CSS improvements
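On macOS, weasyprint loads Pango/GObject dylibs via ctypes, which honors `DYLD_LIBRARY_PATH`; without it the import can fail with a missing `libgobject` error. A minimal sketch of the auto-detection idea — the candidate paths and function name are assumptions based on common Homebrew prefixes, not pdf-creator's actual code:

```python
import os
import platform
from pathlib import Path

# Assumed Homebrew prefixes: Apple Silicon first, then Intel.
CANDIDATES = ["/opt/homebrew/lib", "/usr/local/lib"]

def ensure_dyld_library_path(env=None):
    """Prepend a Homebrew lib dir to DYLD_LIBRARY_PATH on macOS.

    Must run before importing weasyprint, since the dylib lookup
    happens at import time. No-op on other platforms.
    """
    env = os.environ if env is None else env
    if platform.system() != "Darwin":
        return env
    for lib_dir in CANDIDATES:
        if Path(lib_dir, "libgobject-2.0.0.dylib").exists():
            current = env.get("DYLD_LIBRARY_PATH", "")
            if lib_dir not in current.split(":"):
                env["DYLD_LIBRARY_PATH"] = ":".join(
                    p for p in [lib_dir, current] if p
                )
            break
    return env
```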

- **github-contributor** (v1.0.3): Enhanced project evaluation
  - Added evidence-loop, redaction, and merge-ready PR guidance

## Documentation
- Updated marketplace.json (v1.38.0, 42 skills)
- Updated CHANGELOG.md with v1.38.0 entry
- Updated CLAUDE.md (skill count, marketplace version, #42 description)
- Updated README.md (badges, skill section #42, use case, requirements)
- Updated README.zh-CN.md (badges, skill section #42, use case, requirements)
- Fixed absolute paths in continue-claude-work/references/file_structure.md

## Validation
- All skills passed quick_validate.py
- continue-claude-work passed security_scan.py
- marketplace.json validated as well-formed JSON
- Cross-checked version consistency across all docs
## Commit Metadata
- Author: daymade
- Date: 2026-03-07 14:54:33 +08:00
- Commit: c49e23e7ef (parent: b675ac6fee)
- 35 changed files, 7,349 additions, 297 deletions

**transcript-fixer: argument_parser.py**

```diff
@@ -62,8 +62,8 @@ def create_argument_parser() -> argparse.ArgumentParser:
     )
     parser.add_argument(
         "--domain", "-d",
-        default="general",
-        help="Correction domain"
+        default=None,
+        help="Correction domain (default: all domains)"
     )

     # Learning commands
```

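The switch from `default="general"` to `default=None` makes "no `--domain` flag" distinguishable from an explicitly chosen domain, which is what lets the service fall back to loading all domains. A minimal standalone reproduction of that argparse pattern (the parser here is illustrative, not the full transcript-fixer CLI):

```python
import argparse

# Minimal reproduction of the changed flag, not the full parser.
parser = argparse.ArgumentParser(prog="transcript-fixer")
parser.add_argument(
    "--domain", "-d",
    default=None,  # None now means "all domains" rather than "general"
    help="Correction domain (default: all domains)",
)

args = parser.parse_args([])                      # no flag: domain is None
explicit = parser.parse_args(["-d", "finance"])   # explicit domain preserved
```

Downstream code can then branch on `args.domain is None` to load every domain, as the commands.py diff below... rather, as the command handlers do when no domain is given.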
**transcript-fixer: commands.py**

```diff
@@ -58,11 +58,22 @@ def cmd_list_corrections(args: argparse.Namespace) -> None:
     service = _get_service()
     corrections = service.get_corrections(args.domain)
-    print(f"\n📋 Corrections (domain: {args.domain})")
+    if args.domain:
+        header = f"domain: {args.domain}, {len(corrections)} total"
+    else:
+        header = f"all domains, {len(corrections)} total"
+    print(f"\n📋 Corrections ({header})")
     print("=" * 60)
-    for wrong, correct in sorted(corrections.items()):
-        print(f"  '{wrong}' → '{correct}'")
-    print(f"\nTotal: {len(corrections)} corrections\n")
+    if args.domain:
+        for wrong, correct in sorted(corrections.items()):
+            print(f"  '{wrong}' → '{correct}'")
+    else:
+        all_corrections = service.repository.get_all_corrections(active_only=True)
+        for c in all_corrections:
+            print(f"  [{c.domain}] '{c.from_text}' → '{c.to_text}'")
+    print()


 def cmd_run_correction(args: argparse.Namespace) -> None:
@@ -83,12 +94,23 @@ def cmd_run_correction(args: argparse.Namespace) -> None:
     # Load corrections and rules
     corrections = service.get_corrections(args.domain)
     context_rules = service.load_context_rules()
+    domain_stats = service.get_domain_stats()

     # Read input file
     print(f"📖 Reading: {input_path.name}")
     with open(input_path, 'r', encoding='utf-8') as f:
         original_text = f.read()
-    print(f"   File size: {len(original_text):,} characters\n")
+    print(f"   File size: {len(original_text):,} characters")
+
+    # Show domain loading info
+    if args.domain:
+        print(f"📚 Loaded {len(corrections)} corrections (domain: {args.domain})")
+    elif domain_stats:
+        parts = ", ".join(f"{d}: {n}" for d, n in sorted(domain_stats.items()))
+        print(f"📚 Loaded {len(corrections)} corrections ({parts})")
+    else:
+        print(f"📚 No corrections in database")
+    print()

     # Stage 1: Dictionary corrections
     stage1_changes = []
@@ -109,7 +131,17 @@ def cmd_run_correction(args: argparse.Namespace) -> None:
     stage1_file = output_dir / f"{input_path.stem}_stage1.md"
     with open(stage1_file, 'w', encoding='utf-8') as f:
         f.write(stage1_text)
-    print(f"💾 Saved: {stage1_file.name}\n")
+    print(f"💾 Saved: {stage1_file.name}")
+
+    # Hint when 0 corrections and other domains have rules
+    if summary['total_changes'] == 0 and args.domain and domain_stats:
+        other = {d: n for d, n in domain_stats.items() if d != args.domain}
+        if other:
+            parts = ", ".join(f"{d} ({n})" for d, n in sorted(other.items()))
+            total = sum(other.values())
+            print(f"hint: no rules in domain '{args.domain}'. Available: {parts}")
+            print(f"hint: run without --domain to use all {total} rules")
+    print()

     # Stage 2: AI corrections
     stage2_changes = []
```

**transcript-fixer: correction_service.py**

```diff
@@ -382,6 +382,14 @@ class CorrectionService:
     # ==================== Statistics and Reporting ====================

+    def get_domain_stats(self) -> Dict[str, int]:
+        """Get count of active corrections per domain."""
+        all_corrections = self.repository.get_all_corrections(active_only=True)
+        stats: Dict[str, int] = {}
+        for c in all_corrections:
+            stats[c.domain] = stats.get(c.domain, 0) + 1
+        return stats
+
     def get_statistics(self, domain: Optional[str] = None) -> Dict[str, any]:
         """
         Get correction statistics.
```

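The per-domain tally in `get_domain_stats` is a plain dict fold; the same logic can be expressed with `collections.Counter`. A self-contained sketch for illustration — the `Correction` dataclass here is a stand-in for the repository's records, not the service's actual model:

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative stand-in; field names follow the correction records
# shown in the diffs, but this class is not the real model.
@dataclass
class Correction:
    domain: str
    from_text: str
    to_text: str

def domain_stats(corrections):
    """Count corrections per domain (Counter equivalent of the dict fold)."""
    return dict(Counter(c.domain for c in corrections))
```

A design note: keeping the return type a plain `Dict[str, int]` (rather than a `Counter`) keeps the service's public API free of collections-specific types.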
**transcript-fixer: test_correction_service.py**

```diff
@@ -160,6 +160,31 @@ class TestCorrectionService(unittest.TestCase):
         success = self.service.remove_correction("nonexistent", "general")
         self.assertFalse(success)

+    # ==================== Domain Stats Tests ====================
+
+    def test_get_corrections_all_domains(self):
+        """domain=None loads all domains."""
+        self.service.add_correction("a", "b", "general")
+        self.service.add_correction("c", "d", "finance")
+        all_corr = self.service.get_corrections(None)
+        self.assertEqual(len(all_corr), 2)
+        self.assertIn("a", all_corr)
+        self.assertIn("c", all_corr)
+
+    def test_get_domain_stats(self):
+        """get_domain_stats returns per-domain counts."""
+        self.service.add_correction("a", "b", "general")
+        self.service.add_correction("c", "d", "finance")
+        self.service.add_correction("e", "f", "finance")
+        stats = self.service.get_domain_stats()
+        self.assertEqual(stats["general"], 1)
+        self.assertEqual(stats["finance"], 2)
+
+    def test_get_domain_stats_empty(self):
+        """get_domain_stats returns empty dict when no corrections."""
+        stats = self.service.get_domain_stats()
+        self.assertEqual(stats, {})
+
     # ==================== Import/Export Tests ====================

     def test_import_corrections(self):
```