claude-code-skills-reference/qa-expert/scripts/calculate_metrics.py
daymade 15582e2538 Release v1.10.0: Add qa-expert skill and improve SOP
## New Skill: qa-expert (v1.0.0)

Comprehensive QA testing infrastructure with autonomous LLM execution:
- One-command QA project initialization with complete templates
- Google Testing Standards (AAA pattern, 90% coverage targets)
- Autonomous LLM-driven test execution via master prompts (100x speedup)
- OWASP Top 10 security testing (90% coverage target)
- Bug tracking with P0-P4 severity classification
- Quality gates enforcement (100% execution, ≥80% pass rate, 0 P0 bugs)
- Ground Truth Principle for preventing doc/CSV sync issues
- Day 1 onboarding guide (5-hour timeline)
- 30+ ready-to-use LLM prompts for QA tasks
- Bundled scripts: init_qa_project.py, calculate_metrics.py
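The quality-gate rule above (100% execution, ≥80% pass rate, zero P0 bugs) can be sketched as a small standalone check. The function and argument names below are my own illustration, not the exact API of the bundled `calculate_metrics.py`:

```python
def evaluate_quality_gates(execution_rate, pass_rate, p0_bug_count):
    """Map each release gate to a boolean:
    100% execution, >=80% pass rate, and zero P0 (critical) bugs."""
    return {
        "Test Execution = 100%": execution_rate >= 100,
        "Pass Rate >= 80%": pass_rate >= 80,
        "P0 Bugs = 0": p0_bug_count == 0,
    }


def release_blocked(gates):
    """A build is blocked if any single gate fails."""
    return not all(gates.values())
```

Note that the gates are conjunctive: a run at 95% execution is blocked even with a 100% pass rate and no open P0 bugs.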

## Documentation Updates

- Updated marketplace to v1.10.0 (16 → 17 skills)
- Updated CHANGELOG.md with v1.10.0 entry
- Updated README.md (EN) with qa-expert skill section
- Updated README.zh-CN.md (ZH) with skills 11-16 and qa-expert
- Updated CLAUDE.md with qa-expert in available skills list
- Updated marketplace.json with qa-expert plugin entry

## SOP Improvements

Enhanced "Adding a New Skill to Marketplace" workflow:
- Added mandatory Step 7: Update README.zh-CN.md
- Added 6 new Chinese documentation checklist items
- Added Chinese documentation to Common Mistakes (#2, #3, #4, #5, #7, #8)
- Updated File Update Summary Template (7 files including zh-CN)
- Added verification commands for EN/ZH sync
- Made Chinese documentation updates MANDATORY
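An EN/ZH sync check of the kind described above could look like the following sketch. The heading pattern is a hypothetical assumption about how skills are listed in the READMEs; the actual verification commands in the SOP may differ:

```python
import re
from pathlib import Path


def count_skill_headings(readme_path, pattern=r"^###\s+\S+"):
    """Count skill section headings in a README.

    The '### <skill-name>' pattern is an assumption for illustration;
    adjust it to the repository's actual README layout."""
    text = Path(readme_path).read_text(encoding="utf-8")
    return len(re.findall(pattern, text, flags=re.MULTILINE))


def check_en_zh_sync(en_readme, zh_readme):
    """Return (in_sync, en_count, zh_count) for the two READMEs."""
    en = count_skill_headings(en_readme)
    zh = count_skill_headings(zh_readme)
    return en == zh, en, zh
```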

Total: 17 production-ready skills

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 01:48:37 +08:00


#!/usr/bin/env python3
"""
Calculate QA Metrics

Analyzes TEST-EXECUTION-TRACKING.csv and generates quality metrics.

Usage:
    python scripts/calculate_metrics.py <tracking-csv-path>
"""
import sys
import csv
from pathlib import Path
from collections import Counter


def calculate_metrics(csv_path):
    """Calculate comprehensive QA metrics from tracking CSV."""
    with open(csv_path, 'r') as f:
        reader = csv.DictReader(f)
        tests = list(reader)

    # Skip blank rows and any repeated header row
    total = len([t for t in tests if t['Test Case ID'] not in ('', 'Test Case ID')])
    executed = len([t for t in tests if t['Status'] == 'Completed'])
    passed = len([t for t in tests if t['Result'] == '✅ PASSED'])
    failed = len([t for t in tests if t['Result'] == '❌ FAILED'])

    pass_rate = (passed / executed * 100) if executed > 0 else 0
    execution_rate = (executed / total * 100) if total > 0 else 0

    # Bug analysis
    bug_ids = [t['Bug ID'] for t in tests if t.get('Bug ID')]
    unique_bugs = len(set(bug_ids))

    # Priority analysis
    priority_counts = Counter(t['Priority'] for t in tests if t.get('Priority'))

    print(f"\n{'=' * 60}")
    print("QA METRICS DASHBOARD")
    print(f"{'=' * 60}\n")

    print("📊 TEST EXECUTION")
    print(f"  Total Tests: {total}")
    print(f"  Executed:    {executed} ({execution_rate:.1f}%)")
    print(f"  Not Started: {total - executed}\n")

    print("✅ TEST RESULTS")
    print(f"  Passed:    {passed}")
    print(f"  Failed:    {failed}")
    print(f"  Pass Rate: {pass_rate:.1f}%\n")

    print("🐛 BUG ANALYSIS")
    print(f"  Unique Bugs:    {unique_bugs}")
    print(f"  Total Failures: {failed}\n")

    print("⭐ PRIORITY BREAKDOWN")
    for priority in ['P0', 'P1', 'P2', 'P3']:
        count = priority_counts.get(priority, 0)
        print(f"  {priority}: {count}")

    print("\n🎯 QUALITY GATES")
    gates = {
        "Test Execution = 100%": execution_rate >= 100,
        "Pass Rate ≥80%": pass_rate >= 80,
        "P0 Bugs = 0": len([
            t for t in tests
            if t.get('Bug ID', '').startswith('BUG') and 'P0' in t.get('Notes', '')
        ]) == 0,
    }
    for gate, status in gates.items():
        symbol = "✅" if status else "❌"
        print(f"  {symbol} {gate}")
    print(f"\n{'=' * 60}\n")

    return {
        'total': total,
        'executed': executed,
        'passed': passed,
        'failed': failed,
        'pass_rate': pass_rate,
        'execution_rate': execution_rate,
        'unique_bugs': unique_bugs,
    }


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python calculate_metrics.py <tracking-csv-path>")
        sys.exit(1)
    csv_path = Path(sys.argv[1])
    if not csv_path.exists():
        print(f"❌ Error: File not found: {csv_path}")
        sys.exit(1)
    calculate_metrics(csv_path)
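A minimal tracking CSV that the script consumes might look like the following. The column names are taken from the code above; the full template presumably ships with `init_qa_project.py`, so treat this as an illustrative sketch rather than the canonical format:

```python
import csv

# Columns the script reads: Test Case ID, Status, Result, Bug ID, Priority, Notes
SAMPLE_ROWS = [
    {"Test Case ID": "TC-001", "Status": "Completed", "Result": "✅ PASSED",
     "Bug ID": "", "Priority": "P1", "Notes": ""},
    {"Test Case ID": "TC-002", "Status": "Completed", "Result": "❌ FAILED",
     "Bug ID": "BUG-001", "Priority": "P0", "Notes": "P0 crash on login"},
]


def write_sample_csv(path):
    """Write a two-row sample TEST-EXECUTION-TRACKING.csv."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(SAMPLE_ROWS[0]))
        writer.writeheader()
        writer.writerows(SAMPLE_ROWS)
```

Running `python scripts/calculate_metrics.py` against such a file would report 100% execution and a 50% pass rate, so both the pass-rate and P0 gates would fail.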