- Reduce SKILL.md from 288 to 118 lines
- Add trigger phrases: generate tests, analyze coverage, TDD workflow, etc.
- Add Table of Contents
- Remove marketing language
- Move Python tools to scripts/ directory (8 files)
- Move sample files to assets/ directory
- Create references/ with TDD best practices, framework guide, CI integration
- Use imperative voice consistently

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

---
name: tdd-guide
description: Test-driven development workflow with test generation, coverage analysis, and multi-framework support
triggers:
  - generate tests
  - analyze coverage
  - TDD workflow
  - red green refactor
  - Jest tests
  - Pytest tests
  - JUnit tests
  - coverage report
---

# TDD Guide

Test-driven development skill for generating tests, analyzing coverage, and guiding red-green-refactor workflows across Jest, Pytest, JUnit, and Vitest.

## Table of Contents

- [Capabilities](#capabilities)
- [Workflows](#workflows)
- [Tools](#tools)
- [Input Requirements](#input-requirements)
- [Limitations](#limitations)

---

## Capabilities

| Capability | Description |
|------------|-------------|
| Test Generation | Convert requirements or code into test cases with proper structure |
| Coverage Analysis | Parse LCOV/JSON/XML reports, identify gaps, prioritize fixes |
| TDD Workflow | Guide red-green-refactor cycles with validation |
| Framework Adapters | Generate tests for Jest, Pytest, JUnit, Vitest, Mocha |
| Quality Scoring | Assess test isolation, assertions, naming; detect test smells |
| Fixture Generation | Create realistic test data, mocks, and factories |

---

## Workflows

### Generate Tests from Code

1. Provide source code (TypeScript, JavaScript, Python, Java)
2. Specify target framework (Jest, Pytest, JUnit, Vitest)
3. Run `test_generator.py` with requirements
4. Review generated test stubs
5. **Validation:** Tests compile and cover happy path, error cases, edge cases
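
A generated stub from step 4 might look like the sketch below. This is illustrative only: the `slugify` function and the test names are hypothetical examples, not actual `test_generator.py` output.

```python
# Illustrative Pytest-style stub covering happy path, error case, and
# edge case. slugify and the test names are hypothetical examples,
# not actual output of test_generator.py.

def slugify(text: str) -> str:
    """Reference implementation under test."""
    if not isinstance(text, str):
        raise TypeError("text must be a string")
    return "-".join(text.lower().split())

def test_slugify_converts_spaces_to_hyphens():  # happy path
    assert slugify("Hello World") == "hello-world"

def test_slugify_rejects_non_string_input():  # error case
    try:
        slugify(None)
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError")

def test_slugify_handles_empty_string():  # edge case
    assert slugify("") == ""
```

Each stub targets one of the three coverage buckets named in the validation step, so a reviewer can see at a glance which bucket is missing.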

### Analyze Coverage Gaps

1. Generate coverage report from test runner (`npm test -- --coverage`)
2. Run `coverage_analyzer.py` on the LCOV/JSON/XML report
3. Review prioritized gaps (P0/P1/P2)
4. Generate missing tests for uncovered paths
5. **Validation:** Coverage meets target threshold (typically 80%+)
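
The gap analysis in steps 2–3 reduces to reading per-file line hits out of the report. A minimal LCOV-parsing sketch (an assumption about the approach, not the actual `coverage_analyzer.py` implementation):

```python
# Minimal LCOV parsing sketch -- not the actual coverage_analyzer.py.
# An LCOV record lists DA:<line>,<hits> entries under each SF: file.

def parse_lcov(text: str) -> dict:
    """Return {source_file: line_coverage_percent} from LCOV text."""
    results, current, hit, total = {}, None, 0, 0
    for line in text.splitlines():
        if line.startswith("SF:"):
            current, hit, total = line[3:], 0, 0
        elif line.startswith("DA:"):
            hits = int(line[3:].split(",")[1])  # DA:<line>,<hits>[,<checksum>]
            total += 1
            hit += hits > 0
        elif line == "end_of_record" and current:
            results[current] = round(100 * hit / total, 1) if total else 100.0
    return results

report = "SF:src/auth.ts\nDA:1,5\nDA:2,0\nDA:3,1\nend_of_record\n"
print(parse_lcov(report))  # {'src/auth.ts': 66.7}
```

Files whose percentage falls below the threshold become the P0/P1/P2 candidates reviewed in step 3.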

### TDD New Feature

1. Write failing test first (RED)
2. Run `tdd_workflow.py --phase red` to validate
3. Implement minimal code to pass (GREEN)
4. Run `tdd_workflow.py --phase green` to validate
5. Refactor while keeping tests green (REFACTOR)
6. **Validation:** All tests pass after each cycle
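
One RED-to-GREEN iteration of this loop, sketched in plain Python. The `is_valid_password` rule (minimum 8 characters) is a hypothetical example, not part of the skill's tooling:

```python
# One RED -> GREEN iteration, sketched with plain asserts.
# The is_valid_password rule (min 8 chars) is a hypothetical example.

def test_rejects_short_password():
    # RED: this assertion fails until is_valid_password is implemented
    assert is_valid_password("Ab1!") is False

def test_accepts_long_password():
    assert is_valid_password("Abcdef1!") is True

# GREEN: minimal implementation that makes both tests pass.
def is_valid_password(password: str) -> bool:
    return len(password) >= 8

test_rejects_short_password()
test_accepts_long_password()
```

The REFACTOR step would then restructure `is_valid_password` (say, into composable rules) while re-running both tests after each change.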

---

## Tools

| Tool | Purpose | Usage |
|------|---------|-------|
| `test_generator.py` | Generate test cases from code/requirements | `python scripts/test_generator.py --input source.py --framework pytest` |
| `coverage_analyzer.py` | Parse and analyze coverage reports | `python scripts/coverage_analyzer.py --report lcov.info --threshold 80` |
| `tdd_workflow.py` | Guide red-green-refactor cycles | `python scripts/tdd_workflow.py --phase red --test test_auth.py` |
| `framework_adapter.py` | Convert tests between frameworks | `python scripts/framework_adapter.py --from jest --to pytest` |
| `fixture_generator.py` | Generate test data and mocks | `python scripts/fixture_generator.py --entity User --count 5` |
| `metrics_calculator.py` | Calculate test quality metrics | `python scripts/metrics_calculator.py --tests tests/` |
| `format_detector.py` | Detect language and framework | `python scripts/format_detector.py --file source.ts` |
| `output_formatter.py` | Format output for CLI/desktop/CI | `python scripts/output_formatter.py --format markdown` |

---

## Input Requirements

**For Test Generation:**
- Source code (file path or pasted content)
- Target framework (Jest, Pytest, JUnit, Vitest)
- Coverage scope (unit, integration, edge cases)

**For Coverage Analysis:**
- Coverage report file (LCOV, JSON, or XML format)
- Optional: source code for context
- Optional: target threshold percentage

**For TDD Workflow:**
- Feature requirements or user story
- Current phase (RED, GREEN, REFACTOR)
- Test code and implementation status

---

## Limitations

| Scope | Details |
|-------|---------|
| Unit test focus | Integration and E2E tests require different patterns |
| Static analysis | Cannot execute tests or measure runtime behavior |
| Language support | Best for TypeScript, JavaScript, Python, Java |
| Report formats | LCOV, JSON, XML only; other formats need conversion |
| Generated tests | Provide scaffolding; require human review for complex logic |

---

**New file: `engineering-team/tdd-guide/references/ci-integration.md`** (195 lines)

# CI/CD Integration Guide

Integrating test coverage and quality gates into CI pipelines.

---

## Table of Contents

- [Coverage in CI](#coverage-in-ci)
- [GitHub Actions Examples](#github-actions-examples)
- [Quality Gates](#quality-gates)
- [Trend Tracking](#trend-tracking)

---

## Coverage in CI

### Coverage Report Flow

1. Run tests with coverage enabled
2. Generate report in machine-readable format (LCOV, JSON, XML)
3. Parse report for threshold validation
4. Upload to coverage service (Codecov, Coveralls)
5. Fail build if below threshold

### Report Formats by Tool

| Tool | Command | Output Format |
|------|---------|---------------|
| Jest | `jest --coverage --coverageReporters=lcov` | LCOV |
| Pytest | `pytest --cov-report=xml` | Cobertura XML |
| JUnit/JaCoCo | `mvn jacoco:report` | JaCoCo XML |
| Vitest | `vitest --coverage` | LCOV/JSON |

---

## GitHub Actions Examples

### Node.js (Jest)

```yaml
name: Test and Coverage

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - run: npm ci
      - run: npm test -- --coverage

      - name: Check coverage threshold
        run: |
          COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
          if (( $(echo "$COVERAGE < 80" | bc -l) )); then
            echo "Coverage $COVERAGE% is below 80% threshold"
            exit 1
          fi

      - uses: codecov/codecov-action@v4
        with:
          file: coverage/lcov.info
```

### Python (Pytest)

```yaml
name: Test and Coverage

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - run: pip install pytest pytest-cov
      - run: pytest --cov=src --cov-report=xml --cov-fail-under=80

      - uses: codecov/codecov-action@v4
        with:
          file: coverage.xml
```

### Java (Maven + JaCoCo)

```yaml
name: Test and Coverage

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'

      - run: mvn test jacoco:check

      - uses: codecov/codecov-action@v4
        with:
          file: target/site/jacoco/jacoco.xml
```

---

## Quality Gates

### Threshold Configuration

**Jest (package.json):**
```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```

**Pytest (pyproject.toml):**
```toml
[tool.coverage.report]
fail_under = 80
```

**JaCoCo (pom.xml):**
```xml
<rule>
  <element>BUNDLE</element>
  <limits>
    <limit>
      <counter>LINE</counter>
      <value>COVEREDRATIO</value>
      <minimum>0.80</minimum>
    </limit>
  </limits>
</rule>
```

### PR Coverage Checks

- Block merge if coverage drops
- Show coverage diff in PR comments
- Require coverage for changed files
- Allow exceptions with justification

---
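
The "block merge if coverage drops" rule amounts to comparing per-file coverage between the base branch and the PR head. A hedged sketch, assuming a simple `{file: percent}` report shape and an illustrative -2.0 point tolerance:

```python
# Sketch of a PR coverage-delta gate. The {file: percent} report
# shape and the -2.0 point tolerance are illustrative assumptions.

def coverage_delta(base: dict, head: dict) -> float:
    """Difference in mean line coverage, head minus base."""
    def mean(report):
        return sum(report.values()) / len(report) if report else 100.0
    return round(mean(head) - mean(base), 2)

def gate(base: dict, head: dict, tolerance: float = -2.0) -> bool:
    """True if the PR may merge (coverage did not drop past tolerance)."""
    return coverage_delta(base, head) >= tolerance

base = {"src/auth.ts": 90.0, "src/cart.ts": 80.0}
head = {"src/auth.ts": 88.0, "src/cart.ts": 80.0}
print(coverage_delta(base, head))  # -1.0
print(gate(base, head))            # True
```

Coverage services implement this server-side; a local script like this is useful when the gate must run inside the CI job itself.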

## Trend Tracking

### Metrics to Track

| Metric | Purpose | Alert Threshold |
|--------|---------|-----------------|
| Overall line coverage | Baseline health | < 80% |
| Branch coverage | Logic completeness | < 70% |
| Coverage delta | Regression detection | < -2% per PR |
| Test execution time | Performance | > 5 min |
| Flaky test count | Reliability | > 0 |

### Coverage Services

| Service | Features | Integration |
|---------|----------|-------------|
| Codecov | PR comments, badges, graphs | GitHub, GitLab, Bitbucket |
| Coveralls | History, trends, badges | GitHub, GitLab |
| SonarCloud | Full code quality suite | Multiple CI platforms |
### Badge Generation

```markdown
<!-- README.md -->
[![codecov](https://codecov.io/gh/org/repo/branch/main/graph/badge.svg)](https://codecov.io/gh/org/repo)
```

---

**New file: `engineering-team/tdd-guide/references/framework-guide.md`** (206 lines)

# Testing Framework Guide

Language and framework selection, configuration, and patterns.

---

## Table of Contents

- [Framework Selection](#framework-selection)
- [TypeScript/JavaScript](#typescriptjavascript)
- [Python](#python)
- [Java](#java)
- [Version Requirements](#version-requirements)

---

## Framework Selection

| Language | Recommended | Alternatives | Best For |
|----------|-------------|--------------|----------|
| TypeScript/JS | Jest | Vitest, Mocha | React, Node.js, Next.js |
| Python | Pytest | unittest, nose2 | Django, Flask, FastAPI |
| Java | JUnit 5 | TestNG | Spring, Android |
| Vite projects | Vitest | Jest | Modern Vite-based apps |

---

## TypeScript/JavaScript

### Jest Configuration

```javascript
// jest.config.js
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/*.test.ts'],
  collectCoverageFrom: ['src/**/*.ts'],
  coverageThreshold: {
    global: { branches: 80, lines: 80 }
  }
};
```

### Jest Test Pattern

```typescript
describe('Calculator', () => {
  let calc: Calculator;

  beforeEach(() => {
    calc = new Calculator();
  });

  it('should add two numbers', () => {
    expect(calc.add(2, 3)).toBe(5);
  });

  it('should throw on invalid input', () => {
    expect(() => calc.add(null, 3)).toThrow('Invalid input');
  });
});
```

### Vitest Configuration

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    coverage: { provider: 'c8' }
  }
});
```

### Coverage Tools

- Istanbul/nyc: Traditional coverage
- c8: Native V8 coverage (faster)
- Vitest built-in: Integrated with test runner

---

## Python

### Pytest Configuration

```ini
# pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
addopts = --cov=src --cov-report=term-missing
```

### Pytest Test Pattern

```python
import pytest
from calculator import Calculator


class TestCalculator:
    @pytest.fixture
    def calc(self):
        return Calculator()

    def test_add_positive_numbers(self, calc):
        assert calc.add(2, 3) == 5

    def test_add_raises_on_invalid_input(self, calc):
        with pytest.raises(ValueError, match="Invalid input"):
            calc.add(None, 3)

    @pytest.mark.parametrize("a,b,expected", [
        (1, 2, 3),
        (-1, 1, 0),
        (0, 0, 0),
    ])
    def test_add_various_inputs(self, calc, a, b, expected):
        assert calc.add(a, b) == expected
```

### Coverage Tools

- coverage.py: Standard Python coverage
- pytest-cov: Pytest plugin wrapper
- Report formats: HTML, XML, LCOV

---

## Java

### JUnit 5 Configuration (Maven)

```xml
<!-- pom.xml -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter</artifactId>
  <version>5.9.3</version>
  <scope>test</scope>
</dependency>
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.10</version>
</plugin>
```

### JUnit 5 Test Pattern

```java
import org.junit.jupiter.api.*;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.*;

class CalculatorTest {
    private Calculator calc;

    @BeforeEach
    void setUp() {
        calc = new Calculator();
    }

    @Test
    @DisplayName("should add two positive numbers")
    void testAddPositive() {
        assertEquals(5, calc.add(2, 3));
    }

    @Test
    @DisplayName("should throw on null input")
    void testAddThrowsOnNull() {
        assertThrows(IllegalArgumentException.class,
            () -> calc.add(null, 3));
    }

    @ParameterizedTest
    @CsvSource({"1,2,3", "-1,1,0", "0,0,0"})
    void testAddVarious(int a, int b, int expected) {
        assertEquals(expected, calc.add(a, b));
    }
}
```

### Coverage Tools

- JaCoCo: Standard Java coverage
- Cobertura: Alternative XML format
- Report formats: HTML, XML, CSV

---

## Version Requirements

| Tool | Minimum Version | Notes |
|------|-----------------|-------|
| Node.js | 16+ | Required for Jest 29+ |
| Jest | 29+ | Modern async support |
| Vitest | 0.34+ | Stable API |
| Python | 3.8+ | f-strings, async support |
| Pytest | 7+ | Modern fixtures |
| Java | 11+ | JUnit 5 support |
| JUnit | 5.9+ | ParameterizedTest improvements |
| TypeScript | 4.5+ | Strict mode features |

---

**New file: `engineering-team/tdd-guide/references/tdd-best-practices.md`** (128 lines)

# TDD Best Practices

Guidelines for effective test-driven development workflows.

---

## Table of Contents

- [Red-Green-Refactor Cycle](#red-green-refactor-cycle)
- [Test Generation Guidelines](#test-generation-guidelines)
- [Test Quality Principles](#test-quality-principles)
- [Coverage Goals](#coverage-goals)

---

## Red-Green-Refactor Cycle

### RED Phase

1. Write a failing test before any implementation
2. Test should fail for the right reason (not compilation errors)
3. Name tests as specifications describing expected behavior
4. Keep tests small and focused on single behaviors

### GREEN Phase

1. Write minimal code to make the test pass
2. Avoid over-engineering at this stage
3. Duplicate code is acceptable temporarily
4. Focus on correctness, not elegance

### REFACTOR Phase

1. Improve code structure while keeping tests green
2. Remove duplication introduced in GREEN phase
3. Apply design patterns where appropriate
4. Run tests after each small refactoring

### Cycle Discipline

- Complete one cycle before starting the next
- Commit after each successful GREEN phase
- Small iterations lead to better designs
- Resist temptation to write implementation first

---
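
The three phases above, compressed into one runnable Python sketch. The `Cart` class is a hypothetical example; the test name follows the naming convention used later in this guide:

```python
# All three phases in one sketch; Cart is a hypothetical example.

# RED: specify the behavior first. This function fails with a
# NameError/AssertionError until Cart exists and total() works.
def should_return_zero_when_cart_is_empty(cart_cls):
    assert cart_cls().total() == 0

# GREEN: minimal implementation that makes the test pass.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, price):
        self._items.append(price)

    def total(self):
        # REFACTOR would tidy internals here while the test above
        # stays green (e.g. switching storage representation).
        return sum(self._items)

should_return_zero_when_cart_is_empty(Cart)
```

Because the test constrains only observable behavior (`total()`), the refactor step is free to change internals without touching the test.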

## Test Generation Guidelines

### Behavior Focus

- Test what code does, not how it does it
- Avoid coupling tests to implementation details
- Tests should survive internal refactoring
- Focus on observable outcomes

### Naming Conventions

- Use descriptive names that read as specifications
- Format: `should_<expected>_when_<condition>`
- Examples:
  - `should_return_zero_when_cart_is_empty`
  - `should_reject_negative_amounts`
  - `should_apply_discount_for_members`

### Test Structure

- Follow Arrange-Act-Assert (AAA) pattern
- Keep setup minimal and relevant
- One logical assertion per test
- Extract shared setup to fixtures

### Coverage Scope

- Happy path: Normal expected usage
- Error cases: Invalid inputs, failures
- Edge cases: Boundaries, empty states
- Exceptional cases: Timeouts, nulls

---
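
The AAA structure above, sketched for one of the example names. The `apply_discount` function and its 10% member rate are hypothetical, introduced only to show the three sections:

```python
# Arrange-Act-Assert sketch; apply_discount and the 10% member rate
# are hypothetical examples, not part of this skill's tooling.

def apply_discount(total: float, is_member: bool) -> float:
    return round(total * 0.9, 2) if is_member else total

def test_should_apply_discount_for_members():
    # Arrange: minimal, relevant setup only
    total, is_member = 100.0, True
    # Act: one call under test
    result = apply_discount(total, is_member)
    # Assert: one logical assertion
    assert result == 90.0

test_should_apply_discount_for_members()
```

Keeping the three sections visually separate makes it obvious when a test starts doing too much (multiple Acts, or Asserts scattered through setup).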

## Test Quality Principles

### Independence

- Each test runs in isolation
- No shared mutable state between tests
- Tests can run in any order
- Parallel execution should work

### Speed

- Unit tests under 100ms each
- Avoid I/O in unit tests
- Mock external dependencies
- Use in-memory databases for integration

### Determinism

- Same inputs produce same results
- No dependency on system time or random values
- Controlled test data
- No flaky tests allowed

### Clarity

- Failure messages explain what went wrong
- Test code is as clean as production code
- Avoid clever tricks that obscure intent
- Comments explain non-obvious setup

---
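
Removing the system-time dependency called out under Determinism usually means injecting the clock. A minimal Python sketch (the `is_expired` helper is a hypothetical example):

```python
# Clock injection sketch: production code takes `now` as a parameter
# so tests control time. is_expired is a hypothetical helper.
from datetime import datetime, timedelta

def is_expired(deadline: datetime, now: datetime) -> bool:
    """Deterministic: no hidden call to datetime.now() inside."""
    return now > deadline

def test_expiry_is_deterministic():
    fixed_now = datetime(2025, 1, 1, 12, 0)      # controlled test data
    deadline = fixed_now - timedelta(minutes=1)  # already past
    assert is_expired(deadline, now=fixed_now) is True

test_expiry_is_deterministic()
```

The same pattern applies to randomness: pass a seeded `random.Random` instance in rather than calling the module-level functions.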

## Coverage Goals

### Thresholds by Type

| Type | Target | Rationale |
|------|--------|-----------|
| Line coverage | 80%+ | Baseline for most projects |
| Branch coverage | 70%+ | More meaningful than line |
| Function coverage | 90%+ | Public APIs should be tested |

### Critical Path Rules

- Authentication: 100% coverage required
- Payment processing: 100% coverage required
- Data validation: 100% coverage required
- Error handlers: Must test all paths

### Avoiding Coverage Theater

- High coverage != good tests
- Focus on meaningful assertions
- Test behaviors, not lines
- Code review test quality, not just metrics

### Coverage Analysis Workflow

1. Generate coverage report after test run
2. Identify uncovered critical paths (P0)
3. Review medium-priority gaps (P1)
4. Document accepted low-priority gaps (P2)
5. Set threshold gates in CI pipeline