Add skill: hamelsmu/evals-skills
Add 7 LLM evaluation skills from hamelsmu/prompts to the Development and Testing section:

- eval-audit: Audit LLM eval pipelines
- error-analysis: Identify failure modes in LLM pipelines
- generate-synthetic-data: Create synthetic test inputs
- write-judge-prompt: Design LLM-as-Judge evaluators
- validate-evaluator: Calibrate judges against human labels
- evaluate-rag: Evaluate RAG retrieval and generation
- build-review-interface: Build annotation interfaces for traces

Ref: https://x.com/omerfarukaplak/status/2029270930552439281
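To make the validate-evaluator entry concrete: calibrating an LLM judge means checking its verdicts against human labels. The sketch below is a hypothetical illustration of that idea (it is not code from the skill itself); the function name and label lists are invented. It reports overall agreement plus true-positive and true-negative rates, so you can see whether the judge is biased toward passing or failing.

```python
def agreement_stats(judge_labels, human_labels):
    """Compare binary pass/fail judge verdicts to human ground-truth labels."""
    assert len(judge_labels) == len(human_labels), "label lists must align"
    pairs = list(zip(judge_labels, human_labels))
    tp = sum(1 for j, h in pairs if j and h)          # judge and human both pass
    tn = sum(1 for j, h in pairs if not j and not h)  # judge and human both fail
    pos = sum(human_labels)                # traces humans marked "pass"
    neg = len(human_labels) - pos          # traces humans marked "fail"
    return {
        "agreement": (tp + tn) / len(human_labels),
        "tpr": tp / pos if pos else None,  # judge recovers human passes
        "tnr": tn / neg if neg else None,  # judge recovers human fails
    }

# Toy example with five annotated traces (invented data).
stats = agreement_stats(
    judge_labels=[True, True, False, True, False],
    human_labels=[True, False, False, True, False],
)
print(stats)  # agreement 0.8; tpr 1.0; tnr ~0.667
```

A low TNR here would mean the judge passes outputs humans reject, which is the failure mode calibration is meant to catch.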
@@ -791,6 +791,13 @@ Official Web3 and trading skills from the Binance team. Includes crypto market d
 - **[NeoLabHQ/ddd](https://github.com/NeoLabHQ/context-engineering-kit/tree/master/plugins/ddd)** - Domain-driven development skills that also include Clean Architecture, SOLID principles, and design patterns.
 - **[NeoLabHQ/sadd](https://github.com/NeoLabHQ/context-engineering-kit/tree/master/plugins/sadd)** - Dispatches independent subagents for individual tasks with code review checkpoints between iterations for rapid, controlled development.
 - **[NeoLabHQ/kaizen](https://github.com/NeoLabHQ/context-engineering-kit/tree/master/plugins/kaizen)** - Applies continuous improvement methodology with multiple analytical approaches, based on Japanese Kaizen philosophy and Lean methodology.
+- **[hamelsmu/eval-audit](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/eval-audit)** - Audit LLM eval pipelines and surface problems
+- **[hamelsmu/error-analysis](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/error-analysis)** - Systematically identify failure modes in LLM pipelines
+- **[hamelsmu/generate-synthetic-data](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/generate-synthetic-data)** - Create diverse synthetic test inputs for LLM evals
+- **[hamelsmu/write-judge-prompt](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/write-judge-prompt)** - Design LLM-as-Judge evaluators for subjective criteria
+- **[hamelsmu/validate-evaluator](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/validate-evaluator)** - Calibrate LLM judges against human labels
+- **[hamelsmu/evaluate-rag](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/evaluate-rag)** - Evaluate RAG retrieval and generation quality
+- **[hamelsmu/build-review-interface](https://github.com/hamelsmu/prompts/tree/main/evals-skills/skills/build-review-interface)** - Build annotation interfaces for reviewing LLM traces
 
 </details>
 