claude-skills-reference/eval/promptfooconfig.yaml
Leo 75fa9de2bb feat: add promptfoo eval pipeline for skill quality testing
- Add eval/ directory with 10 pilot skill eval configs
- Add GitHub Action (skill-eval.yml) for automated eval on PR
- Add generate-eval-config.py script for bootstrapping new evals
- Add reusable assertion helpers (skill-quality.js)
- Add eval README with setup and usage docs

Skills covered: copywriting, cto-advisor, seo-audit, content-strategy,
aws-solution-architect, agile-product-owner, senior-frontend,
senior-security, mcp-server-builder, launch-strategy

CI integration:
- Triggers on PR to dev when SKILL.md files change
- Detects which skills changed and runs only those evals
- Posts results as PR comments (non-blocking)
- Uploads full results as artifacts
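
The "detects which skills changed" step can be sketched as a simple path-filtering function (a hedged illustration only — the actual logic lives in skill-eval.yml, which is not shown here, and the `skills/<name>/SKILL.md` layout is an assumption):

```javascript
// Hedged sketch of the changed-skill detection described above.
// Assumes a skills/<name>/SKILL.md layout; the real skill-eval.yml
// workflow logic is not part of this commit view.
function changedSkills(changedPaths) {
  const names = changedPaths
    .filter((p) => p.endsWith('/SKILL.md')) // only skill definition files
    .map((p) => p.split('/')[1]);           // skills/<name>/SKILL.md -> <name>
  return [...new Set(names)].sort();        // dedupe and stabilize order
}

// Only the two touched skills would get their evals run:
console.log(changedSkills([
  'skills/seo-audit/SKILL.md',
  'README.md',
  'skills/copywriting/SKILL.md',
])); // -> [ 'copywriting', 'seo-audit' ]
```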

No existing files modified.
2026-03-12 05:39:24 +01:00


# Promptfoo Master Config — claude-skills
# Run all pilot skill evals: npx promptfoo@latest eval -c eval/promptfooconfig.yaml
# Run a single skill: npx promptfoo@latest eval -c eval/skills/copywriting.yaml
description: "claude-skills quality evaluation — pilot batch"

prompts:
  - |
    You are an expert AI assistant. You have the following skill loaded that guides your behavior:
    ---BEGIN SKILL---
    {{skill_content}}
    ---END SKILL---
    Now complete this task:
    {{task}}

providers:
  - id: anthropic:messages:claude-sonnet-4-6
    config:
      max_tokens: 4096
      temperature: 0.7

defaultTest:
  assert:
    - type: javascript
      value: "output.length > 200"
    - type: llm-rubric
      value: "The response demonstrates domain expertise relevant to the task, not generic advice"

# Import per-skill test suites
tests: []
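
Since `tests: []` is a placeholder for the imported per-skill suites, a file such as `eval/skills/copywriting.yaml` (filename taken from the comment above; the contents below are an assumed sketch, not the file from this commit) might look roughly like:

```yaml
# Hypothetical per-skill eval config — a sketch, not the actual committed file.
description: "copywriting skill eval"
tests:
  - vars:
      skill_content: file://../../skills/copywriting/SKILL.md  # assumed repo path
      task: "Write a 3-sentence product blurb for a note-taking app."
    assert:
      - type: llm-rubric
        value: "Uses persuasive, benefit-led copy rather than a feature list"
```

Each per-skill file supplies its own `vars` and assertions, while the master config's `defaultTest` block adds the shared length and expertise checks to every case.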