daymade c49e23e7ef release: v1.38.0 with continue-claude-work and skill-creator enhancements
## New Skill: continue-claude-work (v1.1.0)
- Recover actionable context from local `.claude` session artifacts
- Compact-boundary-aware extraction (reads Claude's own compaction summaries)
- Subagent workflow recovery (reports completed vs interrupted subagents)
- Session end reason detection (clean exit, interrupted, error cascade, abandoned)
- Size-adaptive strategy for small/large sessions
- Noise filtering (skips 37-53% of session lines)
- Self-session exclusion, stale index fallback, MEMORY.md integration
- Bundled Python script (no external dependencies)
- Security scan passed, argument-hint added

## Skill Updates
- **skill-creator** (v1.5.0): Complete rewrite with evaluation framework
  - Added agents/ (analyzer, comparator, grader)
  - Added eval-viewer/ (generate_review.py, viewer.html)
  - Added scripts/ (run_eval, aggregate_benchmark, improve_description, run_loop)
  - Added references/schemas.md (eval/benchmark schemas)
  - Expanded SKILL.md with inline vs fork guidance, progressive disclosure patterns
  - Enhanced package_skill.py and quick_validate.py

- **transcript-fixer** (v1.2.0): CLI improvements and test coverage
  - Enhanced argument_parser.py and commands.py
  - Added correction_service.py improvements
  - Added test_correction_service.py

- **tunnel-doctor** (v1.4.0): Quick diagnostic script
  - Added scripts/quick_diagnose.py
  - Enhanced SKILL.md with 5-layer conflict model

- **pdf-creator** (v1.1.0): Auto DYLD_LIBRARY_PATH + rendering fixes
  - Auto-detect and set DYLD_LIBRARY_PATH for weasyprint
  - Fixed list rendering and CSS improvements

- **github-contributor** (v1.0.3): Enhanced project evaluation
  - Added evidence-loop, redaction, and merge-ready PR guidance

## Documentation
- Updated marketplace.json (v1.38.0, 42 skills)
- Updated CHANGELOG.md with v1.38.0 entry
- Updated CLAUDE.md (skill count, marketplace version, #42 description)
- Updated README.md (badges, skill section #42, use case, requirements)
- Updated README.zh-CN.md (badges, skill section #42, use case, requirements)
- Fixed absolute paths in continue-claude-work/references/file_structure.md

## Validation
- All skills passed quick_validate.py
- continue-claude-work passed security_scan.py
- marketplace.json validated (valid JSON)
- Cross-checked version consistency across all docs
2026-03-07 14:54:33 +08:00


# Project Evaluation Guide

How to evaluate open-source projects before contributing.

## Prerequisites

- Install the GitHub CLI and verify it is available: `gh --version`
- Authenticate before running commands: `gh auth status || gh auth login`

## Quick Health Check

```sh
# Check recent activity
gh repo view owner/repo \
  --json updatedAt,stargazerCount,issues \
  --jq '{updatedAt, stargazers: .stargazerCount, openIssues: .issues.totalCount}'

# Check merge cadence: the 10 most recently merged PRs
gh pr list --repo owner/repo --state merged --limit 10

# Check issue activity
gh issue list --repo owner/repo --state=open --limit 20
```
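Listing merged PRs shows dates but not turnaround. As a rough local sketch (assuming GNU `date`; the timestamps are sample values in the ISO 8601 format that `gh pr list --json createdAt,mergedAt` returns), the open-to-merge gap can be computed like this:

```sh
#!/bin/sh
# Whole days between a PR's creation and its merge.
turnaround_days() {
  created=$(date -u -d "$1" +%s)   # GNU date; on macOS use `date -j -f` instead
  merged=$(date -u -d "$2" +%s)
  echo $(( (merged - created) / 86400 ))
}

turnaround_days "2026-03-01T09:00:00Z" "2026-03-04T15:00:00Z"   # → 3
```

Run it over the last ten merged PRs to get a feel for whether reviews land in days or months.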

## Evaluation Criteria

### 1. Activity Level

| Signal | Good | Bad |
| --- | --- | --- |
| Last commit | < 1 month | > 6 months |
| Open PRs | Being reviewed | Ignored |
| Issue responses | Within days | Never |
| Release frequency | Regular | Years ago |

### 2. Community Health

| Signal | Good | Bad |
| --- | --- | --- |
| CONTRIBUTING.md | Exists, detailed | Missing |
| Code of Conduct | Present | Missing |
| Issue templates | Well-structured | None |
| Discussion tone | Friendly, helpful | Hostile |

### 3. Maintainer Engagement

| Signal | Good | Bad |
| --- | --- | --- |
| Review comments | Constructive | Dismissive |
| Response time | Days | Months |
| Merge rate | Regular merges | Stale PRs |
| New contributor PRs | Welcomed | Ignored |

### 4. Documentation Quality

| Signal | Good | Bad |
| --- | --- | --- |
| README | Clear, comprehensive | Minimal |
| Getting started | Easy to follow | Missing |
| API docs | Complete | Outdated |
| Examples | Working, relevant | Broken |

## Scoring System

Rate each category 1-5:

```
Activity Level:        _/5
Community Health:      _/5
Maintainer Engagement: _/5
Documentation:         _/5
-----------------------------
Total:                 _/20
```

Interpretation:

- 16-20: Excellent choice
- 12-15: Good, proceed with caution
- 8-11: Consider carefully
- < 8: Avoid or expect delays
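The rubric is easy to script once you have rated each category. A minimal sketch (the four scores passed in are made-up sample inputs, not real measurements):

```sh
#!/bin/sh
# Sum four 1-5 category scores and map the total onto the interpretation tiers.
score_project() {
  total=$(( $1 + $2 + $3 + $4 ))
  if   [ "$total" -ge 16 ]; then tier="Excellent choice"
  elif [ "$total" -ge 12 ]; then tier="Good, proceed with caution"
  elif [ "$total" -ge 8 ];  then tier="Consider carefully"
  else                           tier="Avoid or expect delays"
  fi
  echo "$total/20: $tier"
}

# Example: activity 4, community 3, maintainers 5, docs 4
score_project 4 3 5 4   # → 16/20: Excellent choice
```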

## Red Flags

### Immediate Disqualifiers

- No commits in 1+ year
- Maintainer explicitly stepped away
- Project archived
- License issues
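The one-year cutoff can be checked mechanically. A sketch assuming GNU `date`, fed the `pushedAt` timestamp that `gh repo view owner/repo --json pushedAt` returns (the date below is a sample value):

```sh
#!/bin/sh
# Print "stale" if the last push is more than 365 days old, else "active".
is_stale() {
  last_push=$(date -u -d "$1" +%s)              # GNU date
  year_ago=$(( $(date -u +%s) - 365 * 86400 ))
  if [ "$last_push" -lt "$year_ago" ]; then echo stale; else echo active; fi
}

is_stale "2020-06-01T00:00:00Z"   # → stale
```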

### Warning Signs

- Many open PRs without review
- Hostile responses to contributors
- No clear contribution path
- Overly complex setup

## Green Flags

### Strong Indicators

- "good first issue" labels maintained
- Active Discord/Slack community
- Regular release schedule
- Responsive maintainers
- Clear roadmap

### Bonus Points

- Funded/sponsored project
- Multiple active maintainers
- Good test coverage
- CI/CD pipeline

## Research Checklist

Project evaluation:

- [ ] Check GitHub Insights
- [ ] Read recent issues
- [ ] Review merged PRs
- [ ] Check the contributor guide
- [ ] Look for "good first issue" labels
- [ ] Assess community tone
- [ ] Verify active maintenance
- [ ] Confirm a compatible license

## Finding Projects

### By Interest

```sh
# Find by topic
gh search repos "topic:cli" --sort=stars

# Find by language
gh search repos "language:python" --sort=stars

# Find open "good first issue" items in a given language
gh search issues "good first issue" --language=rust --state=open
```

### By Need

- Tools you use daily
- Libraries in your projects
- Frameworks you're learning
- Problems you've encountered

### Curated Lists

- awesome-for-beginners
- first-timers-only
- up-for-grabs.net
- goodfirstissue.dev