Merge pull request #450 from alirezarezvani/claude/analyze-test-coverage-cJg3A
9 changes: .github/workflows/ci-quality-gate.yml (vendored)
@@ -50,6 +50,7 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         pip install yamllint==1.35.1 check-jsonschema==0.28.4 safety==3.2.4
+        pip install -r requirements-dev.txt
 
     - name: Set up Node.js
       uses: actions/setup-node@v4
@@ -71,9 +72,13 @@ jobs:
           ! -name "smart-sync.yml" \
           -exec check-jsonschema --builtin-schema github-workflows {} + || true
 
-    - name: Python syntax check
+    - name: Python syntax check (blocking)
       run: |
-        python -m compileall marketing-skill product-team c-level-advisor engineering-team ra-qm-team || true
+        python -m compileall marketing-skill product-team c-level-advisor engineering-team ra-qm-team engineering business-growth finance project-management scripts
+
+    - name: Run test suite
+      run: |
+        python -m pytest tests/ --tb=short -q
 
     - name: Safety dependency audit (requirements*.txt)
       run: |
218 additions: documentation/TEST_COVERAGE_ANALYSIS.md (new file)

@@ -0,0 +1,218 @@
# Test Coverage Analysis

**Date:** 2026-03-30
**Scope:** Full repository analysis of testing infrastructure, coverage gaps, and improvement recommendations.

---

## Current State

### By the Numbers

| Metric | Value |
|--------|-------|
| Total Python scripts | 301 |
| Scripts with any test coverage | 0 |
| Validation/quality scripts | 35 |
| CI quality gate checks | 5 (YAML lint, JSON schema, Python syntax, safety audit, markdown links) |
| Test framework configuration | None (no pytest.ini, tox.ini, etc.) |
| Test dependencies declared | None |
### What Exists Today

The repository has **no unit tests**. Quality assurance relies on:

1. **CI quality gate** (`ci-quality-gate.yml`) - Runs syntax compilation, YAML linting, JSON schema validation, dependency safety audits, and markdown link checks. Most steps use `|| true`, making them non-blocking.
2. **Playwright hooks** - Anti-pattern detection for Playwright test files (not test execution).
3. **Skill validator** (`engineering/skill-tester/`) - Validates skill directory structure, script syntax, and argparse compliance. Designed for users to run on their own skills.
4. **35 validation scripts** - Checkers and linters distributed across skills (SEO, compliance, security, API design). These are *skill products*, not repo infrastructure tests.

### Key Observation

CLAUDE.md explicitly states "No build system or test frameworks - intentional design choice for portability." However, the repository has grown to 301 Python scripts, many with pure computational logic that is highly testable and would benefit from regression protection.

---
## Coverage Gaps (Prioritized)

### Priority 1: Core Infrastructure Scripts (High Impact, Easy)

**Scripts:** `scripts/generate-docs.py`, `scripts/sync-codex-skills.py`, `scripts/sync-gemini-skills.py`

**Risk:** These scripts power the documentation site build and multi-platform sync. A regression here breaks the entire docs pipeline or causes silent data loss in skill synchronization.

**What to test:**
- `generate-docs.py`: Skill file discovery logic, domain categorization, YAML frontmatter parsing, MkDocs nav generation
- `sync-*-skills.py`: Symlink creation, directory mapping, validation functions

**Effort:** Low. Functions are mostly pure, with filesystem inputs that can be mocked or tested against fixture directories.
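The frontmatter-parsing item above lends itself to a small fixture-style unit test. A minimal sketch; `parse_frontmatter` is a hypothetical stand-in for whatever helper `generate-docs.py` actually exposes, not its real API:

```python
# tests/test_generate_docs.py -- sketch only; parse_frontmatter is a
# hypothetical stand-in for whatever helper generate-docs.py actually exposes.
def parse_frontmatter(text: str) -> dict:
    """Extract simple `key: value` pairs from a ----delimited frontmatter block."""
    parts = text.split("---", 2)
    if not text.startswith("---") or len(parts) < 3:
        return {}
    meta = {}
    for line in parts[1].strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta


def test_parse_frontmatter():
    text = "---\nname: demo-skill\ndescription: Example skill\n---\n# Demo\n"
    assert parse_frontmatter(text) == {"name": "demo-skill", "description": "Example skill"}


def test_no_frontmatter_returns_empty():
    assert parse_frontmatter("# Just a heading\n") == {}
```

The same pattern extends to discovery and nav generation by pointing the real functions at a `tmp_path` fixture tree instead of the live repo.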
---
### Priority 2: Calculator/Scoring Scripts (High Value, Trivial)

**Scripts (examples):**
- `product-team/product-manager-toolkit/scripts/rice_prioritizer.py` - RICE formula
- `product-team/product-manager-toolkit/scripts/okr_tracker.py` - OKR scoring
- `finance/financial-analysis/scripts/dcf_calculator.py` - DCF valuation
- `finance/financial-analysis/scripts/ratio_analyzer.py` - Financial ratios
- `marketing-skill/campaign-analytics/scripts/roi_calculator.py` - ROI calculations
- `engineering/skill-tester/scripts/quality_scorer.py` - Quality scoring

**Risk:** Incorrect calculations silently produce wrong results. Users trust these as authoritative tools.

**What to test:**
- Known-input/known-output parameterized tests for all formulas
- Edge cases: zero values, negative inputs, division by zero, boundary scores
- Categorical-to-numeric mappings (e.g., "high" -> 3)

**Effort:** Trivial. These are pure functions with zero external dependencies.

---
### Priority 3: Parser/Analyzer Scripts (Medium Impact, Moderate Effort)

**Scripts (examples):**
- `marketing-skill/seo-audit/scripts/seo_checker.py` - HTML parsing + scoring
- `marketing-skill/schema-markup/scripts/schema_validator.py` - JSON-LD validation
- `engineering/api-design-reviewer/scripts/api_linter.py` - API spec linting
- `engineering/docker-development/scripts/compose_validator.py` - Docker Compose validation
- `engineering/helm-chart-builder/scripts/values_validator.py` - Helm values checking
- `engineering/changelog-generator/scripts/commit_linter.py` - Conventional commit parsing

**Risk:** Parsers are notoriously fragile against edge-case inputs. Malformed HTML, YAML, or JSON can cause silent failures or crashes.

**What to test:**
- Well-formed input produces correct parsed output
- Malformed input is handled gracefully (no crashes, clear error messages)
- Edge cases: empty files, very large files, unicode content, missing required fields

**Effort:** Moderate. Requires crafting fixture files, but the parser classes are self-contained.
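The malformed-input requirement can be sketched as a parameterized test. `validate_jsonld` below is a hypothetical stand-in for the real validator in `schema_validator.py`; the point is the pattern, which is that bad input yields an error report, never a traceback:

```python
# tests/test_schema_fixtures.py -- sketch; validate_jsonld is a hypothetical
# stand-in for the real validator in schema_validator.py.
import json

import pytest


def validate_jsonld(text: str) -> dict:
    """Return {"ok": bool, "errors": [...]} instead of raising on bad input."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return {"ok": False, "errors": [f"invalid JSON: {exc.msg}"]}
    if not isinstance(data, dict) or data.get("@type") is None:
        return {"ok": False, "errors": ["missing @type"]}
    return {"ok": True, "errors": []}


@pytest.mark.parametrize("fixture,expected_ok", [
    ('{"@context": "https://schema.org", "@type": "Article"}', True),
    ('{"@context": "https://schema.org"}', False),  # missing @type
    ('{not json at all', False),                    # malformed, must not crash
    ("", False),                                    # empty file
])
def test_bad_input_is_reported_not_raised(fixture, expected_ok):
    result = validate_jsonld(fixture)
    assert result["ok"] is expected_ok
```

For the real scripts, the inline strings become fixture files under `tests/fixtures/`.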
---
### Priority 4: Compliance Checker Scripts (High Regulatory Risk)

**Scripts:**
- `ra-qm-team/gdpr-dsgvo-expert/scripts/gdpr_compliance_checker.py`
- `ra-qm-team/fda-consultant-specialist/scripts/qsr_compliance_checker.py`
- `ra-qm-team/information-security-manager-iso27001/scripts/compliance_checker.py`
- `ra-qm-team/quality-documentation-manager/scripts/document_validator.py`

**Risk:** Compliance tools that give false positives or false negatives have real regulatory consequences. Users rely on these for audit preparation.

**What to test:**
- Known-compliant inputs return passing results
- Known-noncompliant inputs flag the correct violations
- Completeness: all documented requirements are actually checked
- Output format consistency (JSON/human-readable modes)

**Effort:** Moderate. Requires building compliance fixture data.
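A compliance-fixture test can be sketched with a toy rule. Both `check_privacy_policy` and its section list are assumptions for illustration, not the real checker's API:

```python
# tests/test_gdpr_fixtures.py -- sketch; check_privacy_policy and
# REQUIRED_SECTIONS are hypothetical, standing in for a single rule
# inside gdpr_compliance_checker.py.
REQUIRED_SECTIONS = ("data controller", "lawful basis", "retention")


def check_privacy_policy(text: str) -> list:
    """Return the required sections missing from the policy (empty = compliant)."""
    lowered = text.lower()
    return [section for section in REQUIRED_SECTIONS if section not in lowered]


def test_compliant_fixture_passes():
    policy = "Data Controller: ACME GmbH. Lawful Basis: consent. Retention: 30 days."
    assert check_privacy_policy(policy) == []


def test_noncompliant_fixture_flags_violation():
    missing = check_privacy_policy("We have a data controller and a lawful basis.")
    assert missing == ["retention"]
```

The real tests would load curated compliant/noncompliant documents from fixtures and assert on the checker's actual violation output.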
---
### Priority 5: CI Quality Gate Hardening

**Current problem:** Most CI steps use `|| true`, meaning failures are swallowed silently. The quality gate currently cannot block a broken PR.

**Recommended improvements:**
- Make at least the syntax and import checks blocking by removing `|| true`
- Add `engineering/`, `business-growth/`, `finance/`, `project-management/` to the compileall step (it currently covers only 5 of 9+ skill directories)
- Add a `--help` smoke test for all argparse-based scripts (the repo already validated 237/237 passing)
- Add SKILL.md structure validation (required sections, YAML frontmatter)
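The `--help` smoke test suggested above might look like this sketch, assuming scripts live under `*/scripts/` as described elsewhere in this document:

```python
# tests/test_help_smoke.py -- sketch of the --help smoke test; assumes
# scripts live under */scripts/ as described in this document.
import glob
import subprocess
import sys

import pytest

SCRIPTS = sorted(glob.glob("**/scripts/*.py", recursive=True))


@pytest.mark.parametrize("script", SCRIPTS)
def test_help_exits_cleanly(script):
    # argparse exits 0 on --help; a traceback or nonzero exit fails the test
    result = subprocess.run(
        [sys.executable, script, "--help"],
        capture_output=True,
        timeout=30,
    )
    assert result.returncode == 0, result.stderr.decode(errors="replace")
```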
---
### Priority 6: Integration/Smoke Tests for Skill Packages

**What's missing:** No test verifies that a complete skill directory is internally consistent - that the scripts and reference files cited in SKILL.md actually exist, that scripts listed in workflows are present, and so on.

**What to test:**
- All file paths referenced in SKILL.md exist
- All scripts in `scripts/` directories pass `python script.py --help`
- All referenced `references/*.md` files exist and are non-empty
- YAML frontmatter in SKILL.md is valid
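These checks can be sketched as one parameterized pytest module; the `<domain>/<skill>/SKILL.md` glob and the `find_references` helper are assumptions about the layout, not existing code:

```python
# tests/test_skill_consistency.py -- sketch of the consistency checks above.
# The <domain>/<skill>/SKILL.md glob and find_references are assumptions.
import glob
import os
import re

import pytest

SKILL_FILES = sorted(glob.glob("*/*/SKILL.md"))


def find_references(text: str) -> list:
    """Pull relative scripts/ and references/ paths out of a SKILL.md body."""
    return re.findall(r"(?:scripts|references)/[\w./-]+", text)


@pytest.mark.parametrize("skill_md", SKILL_FILES)
def test_referenced_paths_exist(skill_md):
    skill_dir = os.path.dirname(skill_md)
    with open(skill_md, encoding="utf-8") as fh:
        text = fh.read()
    for ref in find_references(text):
        assert os.path.exists(os.path.join(skill_dir, ref)), (
            f"{skill_md} references missing file {ref}"
        )
```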
---
## Recommended Implementation Plan

### Phase 1: Foundation (1-2 days)

1. Add `pytest` to a top-level `requirements-dev.txt`
2. Create a `tests/` directory at the repo root
3. Add pytest configuration in `pyproject.toml` (minimal)
4. Write smoke tests: import + `--help` for all 301 scripts
5. Harden CI: remove `|| true` from syntax checks, expand compileall scope
### Phase 2: Unit Tests for Pure Logic (2-3 days)

1. Test all calculator/scoring scripts (Priority 2) - ~15 scripts, parameterized tests
2. Test core infrastructure scripts (Priority 1) - 3 scripts with mocked filesystem
3. Add to CI pipeline as a blocking step
### Phase 3: Parser and Validator Tests (3-5 days)

1. Create fixture files for each parser type (HTML, YAML, JSON, Dockerfile, etc.)
2. Test parser scripts (Priority 3) - ~10 scripts
3. Test compliance checkers (Priority 4) - ~5 scripts with compliance fixtures
4. Add to CI pipeline
### Phase 4: Integration Tests (2-3 days)

1. Skill package consistency validation (Priority 6)
2. Cross-reference validation (SKILL.md -> scripts, references)
3. Documentation build test (generate-docs.py end-to-end)

---
## Quick Win: Starter Test Examples

### Example 1: RICE Calculator Test

```python
# tests/test_rice_prioritizer.py
import os
import sys

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..",
    "product-team", "product-manager-toolkit", "scripts",
))
from rice_prioritizer import RICECalculator


@pytest.mark.parametrize("reach,impact,confidence,effort,expected_min", [
    (1000, "massive", "high", "medium", 500),
    (0, "high", "high", "low", 0),
    (100, "low", "low", "massive", 0),
])
def test_rice_calculation(reach, impact, confidence, effort, expected_min):
    calc = RICECalculator()
    result = calc.calculate_rice(reach, impact, confidence, effort)
    assert result["score"] >= expected_min
```
### Example 2: Script Smoke Test

```python
# tests/test_script_smoke.py
import glob
import subprocess
import sys

import pytest

scripts = sorted(glob.glob("**/scripts/*.py", recursive=True))


@pytest.mark.parametrize("script", scripts)
def test_script_syntax(script):
    # sys.executable avoids relying on a bare "python" being on PATH
    result = subprocess.run(
        [sys.executable, "-m", "py_compile", script],
        capture_output=True,
    )
    assert result.returncode == 0, f"Syntax error in {script}: {result.stderr.decode()}"
```

---
## Summary

The repository has **0% unit test coverage** across 301 Python scripts. The CI quality gate exists but is non-blocking (`|| true`). The highest-impact improvements are:

1. **Harden CI** - Make syntax checks blocking, expand scope to all directories
2. **Test pure calculations** - Trivial effort, high trust value for calculator scripts
3. **Test infrastructure scripts** - Protect the docs build and sync pipelines
4. **Test parsers with fixtures** - Prevent regressions in fragile parsing logic
5. **Test compliance checkers** - Regulatory correctness matters

The recommended phased approach adds meaningful coverage within 1-2 weeks without violating the repository's "minimal dependencies" philosophy - pytest is the only addition needed.
5 additions: pyproject.toml (new file)

@@ -0,0 +1,5 @@
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = "-v --tb=short"
@@ -383,7 +383,7 @@ def main():
     parser.add_argument("--approve", type=str, help="Approve document (doc_id)")
     parser.add_argument("--approver", type=str, help="Approver name")
     parser.add_argument("--withdraw", type=str, help="Withdraw document (doc_id)")
-    parser.add_argument("--reason", type=str, help="Withdrawal reason")
+    parser.add_argument("--withdraw-reason", type=str, help="Withdrawal reason")
     parser.add_argument("--status", action="store_true", help="Show document status")
     parser.add_argument("--matrix", action="store_true", help="Generate document matrix")
     parser.add_argument("--output", choices=["text", "json"], default="text")
@@ -434,8 +434,8 @@ def main():
     elif args.approve and args.approver:
         success = dvc.approve_document(args.approve, args.approver, "QMS Manager")
         print(f"{'✅ Approved' if success else '❌ Failed'} document {args.approve}")
-    elif args.withdraw and args.reason:
-        success = dvc.withdraw_document(args.withdraw, args.reason, "QMS Manager")
+    elif args.withdraw and args.withdraw_reason:
+        success = dvc.withdraw_document(args.withdraw, args.withdraw_reason, "QMS Manager")
         print(f"{'✅ Withdrawn' if success else '❌ Failed'} document {args.withdraw}")
     elif args.matrix:
         matrix = dvc.generate_document_matrix()
1 addition: requirements-dev.txt (new file)

@@ -0,0 +1 @@
pytest>=8.0,<9.0

0 additions: tests/__init__.py (new file)

15 additions: tests/conftest.py (new file)

@@ -0,0 +1,15 @@
"""Shared fixtures and configuration for the test suite."""

import os
import sys

# Repository root
REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))


def add_script_dir_to_path(script_path: str):
    """Add a script's parent directory to sys.path for imports."""
    script_dir = os.path.dirname(os.path.abspath(script_path))
    if script_dir not in sys.path:
        sys.path.insert(0, script_dir)
    return script_dir
163 additions: tests/test_campaign_roi.py (new file)

@@ -0,0 +1,163 @@
"""Unit tests for the Campaign ROI Calculator."""

import os
import sys

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "marketing-skill", "campaign-analytics", "scripts"
))
from campaign_roi_calculator import (
    safe_divide,
    get_benchmark,
    assess_performance,
    calculate_campaign_metrics,
    calculate_portfolio_summary,
)


class TestSafeDivide:
    def test_normal(self):
        assert safe_divide(10, 2) == 5.0

    def test_zero_denominator(self):
        assert safe_divide(10, 0) == 0.0

    def test_custom_default(self):
        assert safe_divide(10, 0, -1.0) == -1.0


class TestGetBenchmark:
    def test_known_channel(self):
        result = get_benchmark("ctr", "email")
        assert result == (1.0, 2.5, 5.0)

    def test_falls_back_to_default(self):
        result = get_benchmark("ctr", "nonexistent_channel")
        assert result == (0.5, 2.0, 5.0)

    def test_unknown_metric(self):
        result = get_benchmark("nonexistent_metric", "email")
        assert result == (0, 0, 0)


class TestAssessPerformance:
    def test_excellent_high_is_better(self):
        assert assess_performance(10.0, (1.0, 3.0, 5.0), higher_is_better=True) == "excellent"

    def test_good_high_is_better(self):
        assert assess_performance(3.5, (1.0, 3.0, 5.0), higher_is_better=True) == "good"

    def test_below_target_high_is_better(self):
        assert assess_performance(1.5, (1.0, 3.0, 5.0), higher_is_better=True) == "below_target"

    def test_underperforming_high_is_better(self):
        assert assess_performance(0.5, (1.0, 3.0, 5.0), higher_is_better=True) == "underperforming"

    def test_excellent_low_is_better(self):
        # For cost metrics, lower is better
        assert assess_performance(0.5, (1.0, 3.0, 5.0), higher_is_better=False) == "excellent"

    def test_underperforming_low_is_better(self):
        assert assess_performance(10.0, (1.0, 3.0, 5.0), higher_is_better=False) == "underperforming"


class TestCalculateCampaignMetrics:
    @pytest.fixture
    def campaign(self):
        return {
            "name": "Test Campaign",
            "channel": "paid_search",
            "spend": 1000.0,
            "revenue": 5000.0,
            "impressions": 100000,
            "clicks": 3000,
            "leads": 100,
            "customers": 10,
        }

    def test_roi(self, campaign):
        result = calculate_campaign_metrics(campaign)
        # ROI = (5000 - 1000) / 1000 * 100 = 400%
        assert result["metrics"]["roi_pct"] == 400.0

    def test_roas(self, campaign):
        result = calculate_campaign_metrics(campaign)
        # ROAS = 5000 / 1000 = 5.0
        assert result["metrics"]["roas"] == 5.0

    def test_cpa(self, campaign):
        result = calculate_campaign_metrics(campaign)
        # CPA = 1000 / 10 = 100.0
        assert result["metrics"]["cpa"] == 100.0

    def test_ctr(self, campaign):
        result = calculate_campaign_metrics(campaign)
        # CTR = 3000 / 100000 * 100 = 3.0%
        assert result["metrics"]["ctr_pct"] == 3.0

    def test_cvr(self, campaign):
        result = calculate_campaign_metrics(campaign)
        # CVR = 10 / 100 * 100 = 10.0%
        assert result["metrics"]["cvr_pct"] == 10.0

    def test_profit(self, campaign):
        result = calculate_campaign_metrics(campaign)
        assert result["metrics"]["profit"] == 4000.0

    def test_zero_customers(self):
        campaign = {"name": "No Customers", "channel": "display", "spend": 500, "revenue": 0,
                    "impressions": 10000, "clicks": 50, "leads": 5, "customers": 0}
        result = calculate_campaign_metrics(campaign)
        assert result["metrics"]["cpa"] is None
        assert result["metrics"]["cac"] is None

    def test_zero_impressions(self):
        campaign = {"name": "No Impressions", "channel": "email", "spend": 100, "revenue": 500,
                    "impressions": 0, "clicks": 0, "leads": 0, "customers": 0}
        result = calculate_campaign_metrics(campaign)
        assert result["metrics"]["ctr_pct"] is None
        assert result["metrics"]["cpm"] is None

    def test_unprofitable_campaign_flagged(self):
        campaign = {"name": "Loser", "channel": "display", "spend": 1000, "revenue": 200,
                    "impressions": 50000, "clicks": 100, "leads": 5, "customers": 1}
        result = calculate_campaign_metrics(campaign)
        assert any("unprofitable" in f.lower() for f in result["flags"])

    def test_benchmark_assessments_present(self, campaign):
        result = calculate_campaign_metrics(campaign)
        assert "ctr" in result["assessments"]
        assert "benchmark_range" in result["assessments"]["ctr"]


class TestCalculatePortfolioSummary:
    def test_aggregates_totals(self):
        campaigns = [
            calculate_campaign_metrics({
                "name": "A", "channel": "email", "spend": 500, "revenue": 2000,
                "impressions": 50000, "clicks": 1000, "leads": 50, "customers": 5,
            }),
            calculate_campaign_metrics({
                "name": "B", "channel": "paid_search", "spend": 1000, "revenue": 4000,
                "impressions": 100000, "clicks": 3000, "leads": 100, "customers": 10,
            }),
        ]
        summary = calculate_portfolio_summary(campaigns)
        assert summary["total_spend"] == 1500
        assert summary["total_revenue"] == 6000
        assert summary["total_profit"] == 4500
        assert summary["total_customers"] == 15
        assert summary["total_campaigns"] == 2

    def test_channel_summary(self):
        campaigns = [
            calculate_campaign_metrics({
                "name": "A", "channel": "email", "spend": 500, "revenue": 2000,
                "impressions": 50000, "clicks": 1000, "leads": 50, "customers": 5,
            }),
        ]
        summary = calculate_portfolio_summary(campaigns)
        assert "email" in summary["channel_summary"]
        assert summary["channel_summary"]["email"]["spend"] == 500
118 additions: tests/test_commit_linter.py (new file)

@@ -0,0 +1,118 @@
"""Unit tests for the Commit Linter (Conventional Commits)."""

import os
import sys
import tempfile

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "engineering", "changelog-generator", "scripts"
))
from commit_linter import lint, CONVENTIONAL_RE, lines_from_file, CLIError


class TestConventionalCommitRegex:
    """Test the regex pattern against various commit message formats."""

    @pytest.mark.parametrize("msg", [
        "feat: add user authentication",
        "fix: resolve null pointer in parser",
        "docs: update API documentation",
        "refactor: simplify login flow",
        "test: add integration tests for auth",
        "build: upgrade webpack to v5",
        "ci: add GitHub Actions workflow",
        "chore: update dependencies",
        "perf: optimize database queries",
        "security: patch XSS vulnerability",
        "deprecated: mark v1 API as deprecated",
        "remove: drop legacy payment module",
    ])
    def test_valid_types(self, msg):
        assert CONVENTIONAL_RE.match(msg) is not None

    @pytest.mark.parametrize("msg", [
        "feat(auth): add OAuth2 support",
        "fix(parser/html): handle malformed tags",
        "docs(api.v2): update endpoint docs",
    ])
    def test_valid_scopes(self, msg):
        assert CONVENTIONAL_RE.match(msg) is not None

    def test_breaking_change_marker(self):
        assert CONVENTIONAL_RE.match("feat!: redesign API") is not None
        assert CONVENTIONAL_RE.match("feat(api)!: breaking change") is not None

    @pytest.mark.parametrize("msg", [
        "Update readme",
        "Fixed the bug",
        "WIP: something",
        "FEAT: uppercase type",
        "feat:missing space",
        "feat : extra space before colon",
        "",
        "merge: not a valid type",
    ])
    def test_invalid_messages(self, msg):
        assert CONVENTIONAL_RE.match(msg) is None


class TestLint:
    def test_all_valid(self):
        lines = [
            "feat: add login",
            "fix: resolve crash",
            "docs: update README",
        ]
        report = lint(lines)
        assert report.total == 3
        assert report.valid == 3
        assert report.invalid == 0
        assert report.violations == []

    def test_mixed_valid_invalid(self):
        lines = [
            "feat: add login",
            "Updated the readme",
            "fix: resolve crash",
        ]
        report = lint(lines)
        assert report.total == 3
        assert report.valid == 2
        assert report.invalid == 1
        assert "line 2" in report.violations[0]

    def test_all_invalid(self):
        lines = ["bad commit", "another bad one"]
        report = lint(lines)
        assert report.valid == 0
        assert report.invalid == 2

    def test_empty_input(self):
        report = lint([])
        assert report.total == 0
        assert report.valid == 0
        assert report.invalid == 0


class TestLinesFromFile:
    def test_reads_file(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
            f.write("feat: add feature\nfix: fix bug\n")
            f.flush()
        lines = lines_from_file(f.name)
        os.unlink(f.name)
        assert lines == ["feat: add feature", "fix: fix bug"]

    def test_skips_blank_lines(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
            f.write("feat: add feature\n\n\nfix: fix bug\n")
            f.flush()
        lines = lines_from_file(f.name)
        os.unlink(f.name)
        assert len(lines) == 2

    def test_nonexistent_file_raises(self):
        with pytest.raises(CLIError, match="Failed reading"):
            lines_from_file("/nonexistent/path.txt")
213 additions: tests/test_dcf_valuation.py (new file)

@@ -0,0 +1,213 @@
"""Unit tests for the DCF Valuation Model."""
|
||||
|
||||
import math
|
||||
import sys
|
||||
import os
|
||||
|
||||
import pytest
|
||||
|
||||
sys.path.insert(0, os.path.join(
|
||||
os.path.dirname(__file__), "..", "finance", "financial-analyst", "scripts"
|
||||
))
|
||||
from dcf_valuation import DCFModel, safe_divide
|
||||
|
||||
|
||||
class TestSafeDivide:
|
||||
def test_normal_division(self):
|
||||
assert safe_divide(10, 2) == 5.0
|
||||
|
||||
def test_zero_denominator(self):
|
||||
assert safe_divide(10, 0) == 0.0
|
||||
|
||||
def test_none_denominator(self):
|
||||
assert safe_divide(10, None) == 0.0
|
||||
|
||||
def test_custom_default(self):
|
||||
assert safe_divide(10, 0, default=-1.0) == -1.0
|
||||
|
||||
def test_negative_values(self):
|
||||
assert safe_divide(-10, 2) == -5.0
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def model():
|
||||
"""A fully configured DCF model with sample data."""
|
||||
m = DCFModel()
|
||||
m.set_historical_financials({
|
||||
"revenue": [80_000_000, 100_000_000],
|
||||
"net_debt": 20_000_000,
|
||||
"shares_outstanding": 10_000_000,
|
||||
})
|
||||
m.set_assumptions({
|
||||
"projection_years": 5,
|
||||
"revenue_growth_rates": [0.15, 0.12, 0.10, 0.08, 0.06],
|
||||
"fcf_margins": [0.12, 0.13, 0.14, 0.15, 0.16],
|
||||
"wacc_inputs": {
|
||||
"risk_free_rate": 0.04,
|
||||
"equity_risk_premium": 0.06,
|
||||
"beta": 1.2,
|
||||
"cost_of_debt": 0.05,
|
||||
"tax_rate": 0.25,
|
||||
"equity_weight": 0.70,
|
||||
"debt_weight": 0.30,
|
||||
},
|
||||
"terminal_growth_rate": 0.025,
|
||||
"exit_ev_ebitda_multiple": 12.0,
|
||||
"terminal_ebitda_margin": 0.20,
|
||||
})
|
||||
return m
|
||||
|
||||
|
||||
class TestWACC:
|
||||
def test_wacc_calculation(self, model):
|
||||
wacc = model.calculate_wacc()
|
||||
# Cost of equity = 0.04 + 1.2 * 0.06 = 0.112
|
||||
# After-tax cost of debt = 0.05 * (1 - 0.25) = 0.0375
|
||||
# WACC = 0.70 * 0.112 + 0.30 * 0.0375 = 0.0784 + 0.01125 = 0.08965
|
||||
assert abs(wacc - 0.08965) < 0.0001
|
||||
|
||||
def test_wacc_default_inputs(self):
|
||||
m = DCFModel()
|
||||
m.set_assumptions({})
|
||||
wacc = m.calculate_wacc()
|
||||
# Defaults: rf=0.04, erp=0.06, beta=1.0, cod=0.05, tax=0.25
|
||||
# CoE = 0.04 + 1.0 * 0.06 = 0.10
|
||||
# ATCoD = 0.05 * 0.75 = 0.0375
|
||||
# WACC = 0.70 * 0.10 + 0.30 * 0.0375 = 0.08125
|
||||
assert abs(wacc - 0.08125) < 0.0001
|
||||
|
||||
|
||||
class TestProjectCashFlows:
|
||||
def test_projects_correct_years(self, model):
|
||||
model.calculate_wacc()
|
||||
revenue, fcf = model.project_cash_flows()
|
||||
assert len(revenue) == 5
|
||||
assert len(fcf) == 5
|
||||
|
||||
def test_first_year_revenue(self, model):
|
||||
model.calculate_wacc()
|
||||
revenue, _ = model.project_cash_flows()
|
||||
# base_revenue = 100M, growth = 15%
|
||||
assert abs(revenue[0] - 115_000_000) < 1
|
||||
|
||||
def test_first_year_fcf(self, model):
|
||||
model.calculate_wacc()
|
||||
revenue, fcf = model.project_cash_flows()
|
||||
# Year 1: revenue = 115M, fcf_margin = 12% -> FCF = 13.8M
|
||||
assert abs(fcf[0] - 13_800_000) < 1
|
||||
|
||||
def test_missing_historical_revenue(self):
|
||||
m = DCFModel()
|
||||
m.set_historical_financials({})
|
||||
m.set_assumptions({"projection_years": 3})
|
||||
with pytest.raises(ValueError, match="Historical revenue"):
|
||||
m.project_cash_flows()
|
||||
|
||||
def test_default_growth_when_rates_short(self):
|
||||
m = DCFModel()
|
||||
m.set_historical_financials({"revenue": [100_000]})
|
||||
m.set_assumptions({
|
||||
"projection_years": 3,
|
||||
"revenue_growth_rates": [0.10], # Only 1 year specified
|
||||
"default_revenue_growth": 0.05,
|
||||
"fcf_margins": [0.10],
|
||||
"default_fcf_margin": 0.10,
|
||||
})
|
||||
m.calculate_wacc()
|
||||
revenue, _ = m.project_cash_flows()
|
||||
assert len(revenue) == 3
|
||||
# Year 1: 100000 * 1.10 = 110000
|
||||
# Year 2: 110000 * 1.05 = 115500 (uses default)
|
||||
assert abs(revenue[1] - 115500) < 1
|
||||
|
||||
|
||||
class TestTerminalValue:
|
||||
def test_perpetuity_method(self, model):
|
||||
model.calculate_wacc()
|
||||
model.project_cash_flows()
|
||||
tv_perp, tv_exit = model.calculate_terminal_value()
|
||||
assert tv_perp > 0
|
||||
|
||||
def test_exit_multiple_method(self, model):
|
||||
model.calculate_wacc()
|
||||
model.project_cash_flows()
|
||||
_, tv_exit = model.calculate_terminal_value()
|
||||
# Terminal revenue * ebitda_margin * exit_multiple
|
||||
terminal_revenue = model.projected_revenue[-1]
|
||||
expected = terminal_revenue * 0.20 * 12.0
|
||||
assert abs(tv_exit - expected) < 1
|
||||
|
||||
def test_perpetuity_zero_when_wacc_lte_growth(self):
|
||||
m = DCFModel()
|
||||
m.set_historical_financials({"revenue": [100_000]})
|
||||
m.set_assumptions({
|
||||
"projection_years": 1,
|
||||
"revenue_growth_rates": [0.05],
|
||||
"fcf_margins": [0.10],
|
||||
            "terminal_growth_rate": 0.10,  # Higher than WACC
            "exit_ev_ebitda_multiple": 10.0,
            "terminal_ebitda_margin": 0.20,
        })
        m.wacc = 0.08  # Lower than terminal growth
        m.project_cash_flows()
        tv_perp, _ = m.calculate_terminal_value()
        assert tv_perp == 0.0


class TestEnterpriseAndEquityValue:
    def test_full_valuation_pipeline(self, model):
        results = model.run_full_valuation()
        assert results["wacc"] > 0
        assert len(results["projected_revenue"]) == 5
        assert results["enterprise_value"]["perpetuity_growth"] > 0
        assert results["enterprise_value"]["exit_multiple"] > 0
        assert results["equity_value"]["perpetuity_growth"] > 0
        assert results["value_per_share"]["perpetuity_growth"] > 0

    def test_equity_subtracts_net_debt(self, model):
        model.calculate_wacc()
        model.project_cash_flows()
        model.calculate_terminal_value()
        model.calculate_enterprise_value()
        model.calculate_equity_value()
        # equity = enterprise - net_debt (20M)
        assert abs(
            model.equity_value_perpetuity -
            (model.enterprise_value_perpetuity - 20_000_000)
        ) < 1

    def test_value_per_share(self, model):
        model.calculate_wacc()
        model.project_cash_flows()
        model.calculate_terminal_value()
        model.calculate_enterprise_value()
        model.calculate_equity_value()
        # shares = 10M
        expected = model.equity_value_perpetuity / 10_000_000
        assert abs(model.value_per_share_perpetuity - expected) < 0.01


class TestSensitivityAnalysis:
    def test_returns_table_structure(self, model):
        model.calculate_wacc()
        model.project_cash_flows()
        model.calculate_terminal_value()
        result = model.sensitivity_analysis()
        assert "wacc_values" in result
        assert "growth_values" in result
        assert "enterprise_value_table" in result
        assert "share_price_table" in result
        assert len(result["enterprise_value_table"]) == 5
        assert len(result["enterprise_value_table"][0]) == 5

    def test_inf_when_wacc_lte_growth(self, model):
        model.calculate_wacc()
        model.project_cash_flows()
        model.calculate_terminal_value()
        # Use a growth range that includes values >= wacc
        result = model.sensitivity_analysis(
            wacc_range=[0.05],
            growth_range=[0.05, 0.06],
        )
        assert result["enterprise_value_table"][0][0] == float("inf")
        assert result["enterprise_value_table"][0][1] == float("inf")
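The tests above require the Gordon-growth terminal value to degrade to zero when WACC is at or below the terminal growth rate (the perpetuity formula is undefined there). The DCF model's own source is not shown in this diff; the following is a minimal sketch consistent with those assertions, with a hypothetical function name and signature.

```python
def terminal_value_perpetuity(fcf_final: float, wacc: float, g: float) -> float:
    """Gordon growth terminal value: FCF_n * (1 + g) / (WACC - g).

    Returns 0.0 when WACC <= g, where the formula is undefined
    (mirrors the tv_perp == 0.0 assertion in the tests above).
    """
    if wacc <= g:
        return 0.0
    return fcf_final * (1 + g) / (wacc - g)
```

With FCF of 100, WACC 10%, and growth 2%, this yields 100 * 1.02 / 0.08 = 1275.0; with WACC 8% against 10% growth it returns 0.0, matching the guarded behavior the tests pin down.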
101
tests/test_funnel_analyzer.py
Normal file
@@ -0,0 +1,101 @@
"""Unit tests for the Funnel Analyzer."""

import sys
import os

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "marketing-skill", "campaign-analytics", "scripts"
))
from funnel_analyzer import analyze_funnel, compare_segments, safe_divide


class TestAnalyzeFunnel:
    def test_basic_funnel(self):
        stages = ["Visit", "Signup", "Activate", "Pay"]
        counts = [10000, 5000, 2000, 500]
        result = analyze_funnel(stages, counts)

        assert result["total_entries"] == 10000
        assert result["total_conversions"] == 500
        assert result["total_lost"] == 9500
        assert result["overall_conversion_rate"] == 5.0

    def test_stage_metrics_count(self):
        stages = ["A", "B", "C"]
        counts = [1000, 500, 100]
        result = analyze_funnel(stages, counts)
        assert len(result["stage_metrics"]) == 3

    def test_conversion_rates(self):
        stages = ["Visit", "Signup", "Pay"]
        counts = [1000, 500, 250]
        result = analyze_funnel(stages, counts)

        # Visit -> Signup: 500/1000 = 50%
        assert result["stage_metrics"][1]["conversion_rate"] == 50.0
        # Signup -> Pay: 250/500 = 50%
        assert result["stage_metrics"][2]["conversion_rate"] == 50.0

    def test_dropoff_detection(self):
        stages = ["A", "B", "C"]
        counts = [1000, 200, 100]
        result = analyze_funnel(stages, counts)

        # Biggest absolute drop: A->B (800)
        assert result["bottleneck_absolute"]["dropoff_count"] == 800
        assert "A -> B" in result["bottleneck_absolute"]["transition"]

    def test_relative_bottleneck(self):
        stages = ["A", "B", "C"]
        counts = [1000, 900, 100]
        result = analyze_funnel(stages, counts)

        # A->B: dropoff_rate = 10%, B->C: dropoff_rate = 88.89%
        assert "B -> C" in result["bottleneck_relative"]["transition"]

    def test_cumulative_conversion(self):
        stages = ["A", "B", "C"]
        counts = [1000, 500, 200]
        result = analyze_funnel(stages, counts)
        assert result["stage_metrics"][0]["cumulative_conversion"] == 100.0
        assert result["stage_metrics"][1]["cumulative_conversion"] == 50.0
        assert result["stage_metrics"][2]["cumulative_conversion"] == 20.0

    def test_single_stage(self):
        result = analyze_funnel(["Only"], [500])
        assert result["overall_conversion_rate"] == 100.0
        assert result["total_entries"] == 500
        assert result["total_lost"] == 0

    def test_mismatched_lengths_raises(self):
        with pytest.raises(ValueError, match="must match"):
            analyze_funnel(["A", "B"], [100])

    def test_empty_stages_raises(self):
        with pytest.raises(ValueError, match="at least one"):
            analyze_funnel([], [])

    def test_no_dropoff(self):
        stages = ["A", "B"]
        counts = [100, 100]
        result = analyze_funnel(stages, counts)
        assert result["stage_metrics"][1]["conversion_rate"] == 100.0
        assert result["stage_metrics"][1]["dropoff_count"] == 0


class TestCompareSegments:
    def test_ranks_segments(self):
        stages = ["Visit", "Signup", "Pay"]
        segments = {
            "mobile": {"counts": [1000, 300, 50]},
            "desktop": {"counts": [1000, 600, 200]},
        }
        result = compare_segments(segments, stages)
        # Desktop has better overall conversion (20% vs 5%)
        assert result["rankings"][0]["segment"] == "desktop"

    def test_mismatched_segment_counts_raises(self):
        with pytest.raises(ValueError, match="counts"):
            compare_segments({"bad": {"counts": [100, 50]}}, ["A", "B", "C"])
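The funnel tests encode the per-stage math: step conversion is each stage's count over the previous stage, cumulative conversion is the count over the first stage, and drop-off is the absolute difference. The analyzer itself isn't in this diff; a minimal sketch of the stage-metric computation consistent with the assertions (the helper name `stage_metrics` is hypothetical):

```python
def stage_metrics(counts: list[int]) -> list[dict]:
    """Per-stage conversion, cumulative conversion, and drop-off for a funnel."""
    metrics = []
    for i, n in enumerate(counts):
        prev = counts[i - 1] if i > 0 else n
        metrics.append({
            # Step conversion: this stage vs. the one before it
            "conversion_rate": round(n / prev * 100, 2) if prev else 0.0,
            # Cumulative conversion: this stage vs. funnel entry
            "cumulative_conversion": round(n / counts[0] * 100, 2) if counts[0] else 0.0,
            # Absolute drop between consecutive stages
            "dropoff_count": (prev - n) if i > 0 else 0,
        })
    return metrics
```

For counts `[1000, 500, 250]` this gives 50% step conversion at each transition and 100% / 50% / 25% cumulative conversion, matching the expectations in `test_conversion_rates` and `test_cumulative_conversion`.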
133
tests/test_gdpr_compliance.py
Normal file
@@ -0,0 +1,133 @@
"""Unit tests for the GDPR Compliance Checker."""

import os
import sys
import tempfile
from pathlib import Path

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "ra-qm-team", "gdpr-dsgvo-expert", "scripts"
))
from gdpr_compliance_checker import (
    PERSONAL_DATA_PATTERNS,
    CODE_PATTERNS,
    should_skip,
    scan_file_for_patterns,
    analyze_project,
)


class TestShouldSkip:
    def test_skips_node_modules(self):
        assert should_skip(Path("project/node_modules/package/index.js")) is True

    def test_skips_venv(self):
        assert should_skip(Path("project/venv/lib/site-packages/foo.py")) is True

    def test_skips_git(self):
        assert should_skip(Path("project/.git/objects/abc123")) is True

    def test_allows_normal_path(self):
        assert should_skip(Path("project/src/main.py")) is False

    def test_allows_deep_path(self):
        assert should_skip(Path("project/src/utils/helpers/data.py")) is False


class TestScanFileForPatterns:
    def test_detects_email(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
            f.write('user_email = "john@example.com"\n')
            f.flush()
            findings = scan_file_for_patterns(Path(f.name), PERSONAL_DATA_PATTERNS)
        os.unlink(f.name)
        email_findings = [f for f in findings if f["pattern"] == "email"]
        assert len(email_findings) >= 1
        assert email_findings[0]["category"] == "contact_data"

    def test_detects_health_data(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
            f.write('record = {"diagnosis": "flu", "treatment": "rest"}\n')
            f.flush()
            findings = scan_file_for_patterns(Path(f.name), PERSONAL_DATA_PATTERNS)
        os.unlink(f.name)
        health_findings = [f for f in findings if f["pattern"] == "health_data"]
        assert len(health_findings) >= 1
        assert health_findings[0]["risk"] == "critical"

    def test_detects_code_logging_issue(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
            f.write('log.info("User email: " + user.email)\n')
            f.flush()
            findings = scan_file_for_patterns(Path(f.name), CODE_PATTERNS)
        os.unlink(f.name)
        log_findings = [f for f in findings if f["pattern"] == "logging_personal_data"]
        assert len(log_findings) >= 1

    def test_no_findings_on_clean_file(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
            f.write('x = 1 + 2\nprint("hello")\n')
            f.flush()
            findings = scan_file_for_patterns(Path(f.name), PERSONAL_DATA_PATTERNS)
        os.unlink(f.name)
        assert len(findings) == 0

    def test_handles_unreadable_file(self):
        findings = scan_file_for_patterns(Path("/nonexistent/file.py"), PERSONAL_DATA_PATTERNS)
        assert findings == []


class TestAnalyzeProject:
    def test_scores_clean_project(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            # Create a clean Python file
            src = Path(tmpdir) / "clean.py"
            src.write_text("x = 1\ny = 2\nresult = x + y\n", encoding="utf-8")
            result = analyze_project(Path(tmpdir))
            assert result["summary"]["compliance_score"] == 100
            assert result["summary"]["status"] == "compliant"

    def test_detects_issues_in_project(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            src = Path(tmpdir) / "bad.py"
            src.write_text(
                'user_email = "john@example.com"\n'
                'log.info("Patient diagnosis: " + record.diagnosis)\n',
                encoding="utf-8",
            )
            result = analyze_project(Path(tmpdir))
            assert result["summary"]["compliance_score"] < 100
            assert len(result["personal_data_findings"]) > 0

    def test_returns_recommendations(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            src = Path(tmpdir) / "issues.py"
            src.write_text(
                'password = "secret123"\n'
                'user_email = "test@test.com"\n',
                encoding="utf-8",
            )
            result = analyze_project(Path(tmpdir))
            assert "recommendations" in result
            assert isinstance(result["recommendations"], list)


class TestPersonalDataPatterns:
    """Test that the regex patterns work correctly."""

    @pytest.mark.parametrize("pattern_name,test_string", [
        ("email", "contact: user@example.com"),
        ("ip_address", "server IP: 192.168.1.100"),
        ("phone_number", "call +1-555-123-4567"),
        ("credit_card", "card: 4111-1111-1111-1111"),
        ("date_of_birth", "field: date of birth"),
        ("health_data", "the patient reported symptoms"),
        ("biometric", "store fingerprint data"),
        ("religion", "religious preference recorded"),
    ])
    def test_pattern_matches(self, pattern_name, test_string):
        import re
        pattern = PERSONAL_DATA_PATTERNS[pattern_name]["pattern"]
        assert re.search(pattern, test_string, re.IGNORECASE) is not None
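The parametrized test asserts that each entry in `PERSONAL_DATA_PATTERNS` is a regex that matches case-insensitively. The actual patterns live in `gdpr_compliance_checker.py`, which isn't part of this diff; as an illustration only, an email pattern in that style might look like the following (the regex here is an assumption, not the checker's real pattern):

```python
import re

# Hypothetical pattern in the style of PERSONAL_DATA_PATTERNS["email"]
EMAIL_RE = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"

def matches_email(text: str) -> bool:
    """Case-insensitive search, mirroring how the tests exercise each pattern."""
    return re.search(EMAIL_RE, text, re.IGNORECASE) is not None
```

`matches_email("contact: user@example.com")` matches while a string with no address does not, which is the shape of behavior `test_pattern_matches` verifies for every pattern.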
176
tests/test_generate_docs.py
Normal file
@@ -0,0 +1,176 @@
"""Unit tests for the generate-docs.py infrastructure script."""

import os
import sys
import tempfile

import pytest

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "scripts"))

# The script uses a hyphenated filename, so import via importlib
import importlib.util
spec = importlib.util.spec_from_file_location(
    "generate_docs",
    os.path.join(os.path.dirname(__file__), "..", "scripts", "generate-docs.py"),
)
generate_docs = importlib.util.module_from_spec(spec)
spec.loader.exec_module(generate_docs)


class TestSlugify:
    def test_basic(self):
        assert generate_docs.slugify("my-skill-name") == "my-skill-name"

    def test_uppercase(self):
        assert generate_docs.slugify("My Skill") == "my-skill"

    def test_special_chars(self):
        assert generate_docs.slugify("skill_v2.0") == "skill-v2-0"

    def test_strips_leading_trailing(self):
        assert generate_docs.slugify("--test--") == "test"


class TestPrettify:
    def test_kebab_case(self):
        assert generate_docs.prettify("senior-backend") == "Senior Backend"

    def test_single_word(self):
        assert generate_docs.prettify("security") == "Security"


class TestStripContent:
    def test_strips_frontmatter(self):
        content = "---\nname: test\n---\n# Title\nBody text"
        result = generate_docs.strip_content(content)
        assert "name: test" not in result
        assert "Body text" in result

    def test_strips_first_h1(self):
        content = "# My Title\nBody text\n# Another H1"
        result = generate_docs.strip_content(content)
        assert "My Title" not in result
        assert "Body text" in result
        assert "Another H1" in result

    def test_strips_hr_after_title(self):
        content = "# Title\n---\nBody text"
        result = generate_docs.strip_content(content)
        assert result.strip() == "Body text"

    def test_no_frontmatter(self):
        content = "# Title\nBody text"
        result = generate_docs.strip_content(content)
        assert "Body text" in result


class TestExtractTitle:
    def test_extracts_h1(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write("# My Great Skill\nSome content")
            f.flush()
            title = generate_docs.extract_title(f.name)
        os.unlink(f.name)
        assert title == "My Great Skill"

    def test_skips_frontmatter(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write("---\nname: test\n---\n# Real Title\nContent")
            f.flush()
            title = generate_docs.extract_title(f.name)
        os.unlink(f.name)
        assert title == "Real Title"

    def test_no_h1(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write("No heading here\nJust content")
            f.flush()
            title = generate_docs.extract_title(f.name)
        os.unlink(f.name)
        assert title is None

    def test_nonexistent_file(self):
        assert generate_docs.extract_title("/nonexistent/path.md") is None


class TestExtractDescriptionFromFrontmatter:
    def test_double_quoted(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write('---\nname: test\ndescription: "My skill description"\n---\n# Title')
            f.flush()
            desc = generate_docs.extract_description_from_frontmatter(f.name)
        os.unlink(f.name)
        assert desc == "My skill description"

    def test_single_quoted(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write("---\nname: test\ndescription: 'Single quoted'\n---\n# Title")
            f.flush()
            desc = generate_docs.extract_description_from_frontmatter(f.name)
        os.unlink(f.name)
        assert desc == "Single quoted"

    def test_unquoted(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write("---\nname: test\ndescription: Unquoted description here\n---\n# Title")
            f.flush()
            desc = generate_docs.extract_description_from_frontmatter(f.name)
        os.unlink(f.name)
        assert desc == "Unquoted description here"

    def test_no_frontmatter(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False) as f:
            f.write("# Just a title\nNo frontmatter")
            f.flush()
            desc = generate_docs.extract_description_from_frontmatter(f.name)
        os.unlink(f.name)
        assert desc is None


class TestFindSkillFiles:
    def test_returns_dict(self):
        skills = generate_docs.find_skill_files()
        assert isinstance(skills, dict)

    def test_finds_known_domains(self):
        skills = generate_docs.find_skill_files()
        # At minimum these domains should have skills
        assert "engineering-team" in skills
        assert "product-team" in skills
        assert "finance" in skills

    def test_skips_sample_skills(self):
        skills = generate_docs.find_skill_files()
        for domain, skill_list in skills.items():
            for skill in skill_list:
                assert "assets/sample-skill" not in skill["rel_path"]


class TestRewriteSkillInternalLinks:
    def test_rewrites_script_link(self):
        content = "[my script](scripts/calculator.py)"
        result = generate_docs.rewrite_skill_internal_links(content, "product-team/my-skill")
        assert "github.com" in result
        assert "product-team/my-skill/scripts/calculator.py" in result

    def test_preserves_external_links(self):
        content = "[Google](https://google.com)"
        result = generate_docs.rewrite_skill_internal_links(content, "product-team/my-skill")
        assert result == content

    def test_preserves_anchor_links(self):
        content = "[section](#my-section)"
        result = generate_docs.rewrite_skill_internal_links(content, "product-team/my-skill")
        assert result == content


class TestDomainMapping:
    def test_all_domains_have_sort_order(self):
        for key, value in generate_docs.DOMAINS.items():
            assert len(value) == 4
            assert isinstance(value[1], int)

    def test_unique_sort_orders(self):
        orders = [v[1] for v in generate_docs.DOMAINS.values()]
        assert len(orders) == len(set(orders))
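The `TestSlugify` cases fully pin down the slug behavior: lowercase, collapse runs of non-alphanumerics to a single dash, strip leading and trailing dashes. Since `generate-docs.py` itself isn't in this diff, here is a minimal implementation consistent with those four assertions (a sketch, not necessarily the script's actual code):

```python
import re

def slugify(text: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', strip edge dashes."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```

Under this sketch, `slugify("skill_v2.0")` yields `"skill-v2-0"` and `slugify("--test--")` yields `"test"`, as the tests require.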
128
tests/test_okr_tracker.py
Normal file
@@ -0,0 +1,128 @@
"""Unit tests for the OKR Tracker."""

import sys
import os

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "c-level-advisor", "coo-advisor", "scripts"
))
from okr_tracker import calculate_kr_score, get_kr_status


class TestCalculateKrScoreNumeric:
    def test_basic_numeric(self):
        kr = {"type": "numeric", "baseline_value": 0, "current_value": 50, "target_value": 100}
        assert calculate_kr_score(kr) == 0.5

    def test_at_target(self):
        kr = {"type": "numeric", "baseline_value": 0, "current_value": 100, "target_value": 100}
        assert calculate_kr_score(kr) == 1.0

    def test_no_progress(self):
        kr = {"type": "numeric", "baseline_value": 0, "current_value": 0, "target_value": 100}
        assert calculate_kr_score(kr) == 0.0

    def test_clamped_above_one(self):
        kr = {"type": "numeric", "baseline_value": 0, "current_value": 150, "target_value": 100}
        assert calculate_kr_score(kr) == 1.0

    def test_target_equals_baseline(self):
        kr = {"type": "numeric", "baseline_value": 50, "current_value": 50, "target_value": 50}
        assert calculate_kr_score(kr) == 0.0

    def test_lower_is_better(self):
        # Reducing churn from 10% to 5%, currently at 7%
        kr = {
            "type": "numeric",
            "baseline_value": 10,
            "current_value": 7,
            "target_value": 5,
            "lower_is_better": True,
        }
        # improvement = 10 - 7 = 3, needed = 10 - 5 = 5 -> score = 0.6
        assert abs(calculate_kr_score(kr) - 0.6) < 0.01

    def test_lower_is_better_at_target(self):
        kr = {
            "type": "numeric",
            "baseline_value": 10,
            "current_value": 5,
            "target_value": 5,
            "lower_is_better": True,
        }
        assert calculate_kr_score(kr) == 1.0

    def test_lower_is_better_exceeded(self):
        kr = {
            "type": "numeric",
            "baseline_value": 10,
            "current_value": 3,
            "target_value": 5,
            "lower_is_better": True,
        }
        assert calculate_kr_score(kr) == 1.0


class TestCalculateKrScorePercentage:
    def test_percentage_midway(self):
        kr = {"type": "percentage", "baseline_pct": 10, "current_pct": 15, "target_pct": 20}
        assert calculate_kr_score(kr) == 0.5

    def test_percentage_at_target(self):
        kr = {"type": "percentage", "baseline_pct": 0, "current_pct": 100, "target_pct": 100}
        assert calculate_kr_score(kr) == 1.0

    def test_percentage_target_equals_baseline(self):
        kr = {"type": "percentage", "baseline_pct": 50, "current_pct": 50, "target_pct": 50}
        assert calculate_kr_score(kr) == 0.0


class TestCalculateKrScoreMilestone:
    def test_milestone_explicit_score(self):
        kr = {"type": "milestone", "score": 0.75}
        assert calculate_kr_score(kr) == 0.75

    def test_milestone_hit_count(self):
        kr = {"type": "milestone", "milestones_total": 4, "milestones_hit": 3}
        assert calculate_kr_score(kr) == 0.75

    def test_milestone_clamped(self):
        kr = {"type": "milestone", "score": 1.5}
        assert calculate_kr_score(kr) == 1.0


class TestCalculateKrScoreBoolean:
    def test_boolean_done(self):
        kr = {"type": "boolean", "done": True}
        assert calculate_kr_score(kr) == 1.0

    def test_boolean_not_done(self):
        kr = {"type": "boolean", "done": False}
        assert calculate_kr_score(kr) == 0.0


class TestGetKrStatus:
    def test_on_track(self):
        status = get_kr_status(0.8, 0.5, {})
        assert status == "on_track"

    def test_complete_requires_done_flag(self):
        # "complete" status requires kr["done"] = True
        status = get_kr_status(1.0, 0.5, {"done": True})
        assert status == "complete"

    def test_score_one_without_done_is_on_track(self):
        status = get_kr_status(1.0, 0.5, {})
        assert status == "on_track"

    def test_not_started(self):
        # not_started requires score==0 AND quarter_progress < 0.1
        status = get_kr_status(0.0, 0.05, {})
        assert status == "not_started"

    def test_off_track(self):
        # Very low score deep into the quarter
        status = get_kr_status(0.1, 0.8, {})
        assert status == "off_track"
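The numeric KR tests fix the scoring rule: progress from baseline toward target, clamped to [0, 1], with the direction inverted when `lower_is_better` is set and a 0.0 score when target equals baseline. The tracker's source isn't shown here; a minimal sketch consistent with those assertions (function name and signature are hypothetical, the real `calculate_kr_score` takes a dict):

```python
def kr_score_numeric(baseline: float, current: float, target: float,
                     lower_is_better: bool = False) -> float:
    """Fractional progress from baseline to target, clamped to [0, 1]."""
    if lower_is_better:
        needed = baseline - target          # how much reduction is required
        if needed <= 0:
            return 0.0
        return max(0.0, min(1.0, (baseline - current) / needed))
    span = target - baseline                # how much improvement is required
    if span <= 0:
        return 0.0                          # degenerate KR: target == baseline
    return max(0.0, min(1.0, (current - baseline) / span))
```

For the churn example in the tests (baseline 10, current 7, target 5, lower is better), improvement is 3 out of a needed 5, giving 0.6; beating the target clamps to 1.0.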
194
tests/test_ratio_calculator.py
Normal file
@@ -0,0 +1,194 @@
"""Unit tests for the Financial Ratio Calculator."""

import sys
import os

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "finance", "financial-analyst", "scripts"
))
from ratio_calculator import FinancialRatioCalculator, safe_divide


@pytest.fixture
def sample_data():
    return {
        "income_statement": {
            "revenue": 1_000_000,
            "cost_of_goods_sold": 400_000,
            "operating_income": 200_000,
            "net_income": 150_000,
            "interest_expense": 20_000,
            "ebitda": 250_000,
        },
        "balance_sheet": {
            "total_assets": 2_000_000,
            "total_equity": 1_200_000,
            "current_assets": 500_000,
            "current_liabilities": 300_000,
            "inventory": 100_000,
            "cash_and_equivalents": 200_000,
            "total_debt": 500_000,
            "accounts_receivable": 150_000,
        },
        "cash_flow": {
            "operating_cash_flow": 180_000,
        },
        "market_data": {
            "share_price": 50.0,
            "shares_outstanding": 100_000,
            "earnings_growth_rate": 0.15,
        },
    }


@pytest.fixture
def calc(sample_data):
    return FinancialRatioCalculator(sample_data)


class TestProfitability:
    def test_roe(self, calc):
        ratios = calc.calculate_profitability()
        # 150000 / 1200000 = 0.125
        assert abs(ratios["roe"]["value"] - 0.125) < 0.001

    def test_roa(self, calc):
        ratios = calc.calculate_profitability()
        # 150000 / 2000000 = 0.075
        assert abs(ratios["roa"]["value"] - 0.075) < 0.001

    def test_gross_margin(self, calc):
        ratios = calc.calculate_profitability()
        # (1000000 - 400000) / 1000000 = 0.60
        assert abs(ratios["gross_margin"]["value"] - 0.60) < 0.001

    def test_operating_margin(self, calc):
        ratios = calc.calculate_profitability()
        # 200000 / 1000000 = 0.20
        assert abs(ratios["operating_margin"]["value"] - 0.20) < 0.001

    def test_net_margin(self, calc):
        ratios = calc.calculate_profitability()
        # 150000 / 1000000 = 0.15
        assert abs(ratios["net_margin"]["value"] - 0.15) < 0.001

    def test_interpretation_populated(self, calc):
        ratios = calc.calculate_profitability()
        for key in ratios:
            assert "interpretation" in ratios[key]


class TestLiquidity:
    def test_current_ratio(self, calc):
        ratios = calc.calculate_liquidity()
        # 500000 / 300000 = 1.667
        assert abs(ratios["current_ratio"]["value"] - 1.667) < 0.01

    def test_quick_ratio(self, calc):
        ratios = calc.calculate_liquidity()
        # (500000 - 100000) / 300000 = 1.333
        assert abs(ratios["quick_ratio"]["value"] - 1.333) < 0.01

    def test_cash_ratio(self, calc):
        ratios = calc.calculate_liquidity()
        # 200000 / 300000 = 0.667
        assert abs(ratios["cash_ratio"]["value"] - 0.667) < 0.01


class TestLeverage:
    def test_debt_to_equity(self, calc):
        ratios = calc.calculate_leverage()
        # 500000 / 1200000 = 0.417
        assert abs(ratios["debt_to_equity"]["value"] - 0.417) < 0.01

    def test_interest_coverage(self, calc):
        ratios = calc.calculate_leverage()
        # 200000 / 20000 = 10.0
        assert abs(ratios["interest_coverage"]["value"] - 10.0) < 0.01


class TestEfficiency:
    def test_asset_turnover(self, calc):
        ratios = calc.calculate_efficiency()
        # 1000000 / 2000000 = 0.5
        assert abs(ratios["asset_turnover"]["value"] - 0.5) < 0.01

    def test_inventory_turnover(self, calc):
        ratios = calc.calculate_efficiency()
        # 400000 / 100000 = 4.0
        assert abs(ratios["inventory_turnover"]["value"] - 4.0) < 0.01

    def test_dso(self, calc):
        ratios = calc.calculate_efficiency()
        # receivables_turnover = 1000000 / 150000 = 6.667
        # DSO = 365 / 6.667 = 54.75
        assert abs(ratios["dso"]["value"] - 54.75) < 0.5


class TestValuation:
    def test_pe_ratio(self, calc):
        ratios = calc.calculate_valuation()
        # EPS = 150000 / 100000 = 1.5
        # PE = 50.0 / 1.5 = 33.33
        assert abs(ratios["pe_ratio"]["value"] - 33.33) < 0.1

    def test_ev_ebitda(self, calc):
        ratios = calc.calculate_valuation()
        # market_cap = 50 * 100000 = 5000000
        # EV = 5000000 + 500000 - 200000 = 5300000
        # EV/EBITDA = 5300000 / 250000 = 21.2
        assert abs(ratios["ev_ebitda"]["value"] - 21.2) < 0.1


class TestCalculateAll:
    def test_returns_all_categories(self, calc):
        results = calc.calculate_all()
        assert "profitability" in results
        assert "liquidity" in results
        assert "leverage" in results
        assert "efficiency" in results
        assert "valuation" in results


class TestInterpretation:
    def test_dso_lower_is_better(self, calc):
        result = calc.interpret_ratio("dso", 25.0)
        assert "Excellent" in result

    def test_dso_high_is_concern(self, calc):
        result = calc.interpret_ratio("dso", 90.0)
        assert "Concern" in result

    def test_debt_to_equity_conservative(self, calc):
        result = calc.interpret_ratio("debt_to_equity", 0.2)
        assert "Conservative" in result

    def test_zero_value(self, calc):
        result = calc.interpret_ratio("roe", 0.0)
        assert "Insufficient" in result

    def test_unknown_ratio(self, calc):
        result = calc.interpret_ratio("unknown_ratio", 5.0)
        assert "No benchmark" in result


class TestEdgeCases:
    def test_zero_revenue(self):
        data = {"income_statement": {"revenue": 0}, "balance_sheet": {}, "cash_flow": {}, "market_data": {}}
        calc = FinancialRatioCalculator(data)
        ratios = calc.calculate_profitability()
        assert ratios["gross_margin"]["value"] == 0.0

    def test_zero_equity(self):
        data = {"income_statement": {"net_income": 100}, "balance_sheet": {"total_equity": 0}, "cash_flow": {}, "market_data": {}}
        calc = FinancialRatioCalculator(data)
        ratios = calc.calculate_profitability()
        assert ratios["roe"]["value"] == 0.0

    def test_missing_market_data(self):
        data = {"income_statement": {}, "balance_sheet": {}, "cash_flow": {}, "market_data": {}}
        calc = FinancialRatioCalculator(data)
        ratios = calc.calculate_valuation()
        assert ratios["pe_ratio"]["value"] == 0.0
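The `TestEdgeCases` group requires every ratio to come back as 0.0 rather than raise when a denominator (revenue, equity, EPS) is zero or missing, and the test module imports a `safe_divide` helper from `ratio_calculator`. Its body isn't in this diff; a minimal sketch consistent with that behavior:

```python
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
    """Divide, returning `default` instead of raising on a zero/falsy denominator."""
    if not denominator:
        return default
    return numerator / denominator
```

With this helper, ROE for the sample data is `safe_divide(150_000, 1_200_000)` = 0.125, while the zero-equity edge case yields 0.0 instead of a `ZeroDivisionError`.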
143
tests/test_rice_prioritizer.py
Normal file
@@ -0,0 +1,143 @@
|
||||
"""Unit tests for the RICE Prioritizer."""
|
||||
|
||||
import sys
|
||||
import os
|
||||
|
||||
import pytest
|
||||
|
||||
sys.path.insert(0, os.path.join(
|
||||
os.path.dirname(__file__), "..", "product-team", "product-manager-toolkit", "scripts"
|
||||
))
|
||||
from rice_prioritizer import RICECalculator
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def calc():
|
||||
return RICECalculator()
|
||||
|
||||
|
||||
class TestCalculateRice:
|
||||
"""Test the core RICE formula: (Reach * Impact * Confidence) / Effort."""
|
||||
|
||||
def test_basic_calculation(self, calc):
|
||||
# reach=1000, impact=high(2.0), confidence=high(100/100=1.0), effort=m(5)
|
||||
# = (1000 * 2.0 * 1.0) / 5 = 400.0
|
||||
assert calc.calculate_rice(1000, "high", "high", "m") == 400.0
|
||||
|
||||
def test_massive_impact(self, calc):
|
||||
# reach=500, impact=massive(3.0), confidence=medium(0.8), effort=s(3)
|
||||
# = (500 * 3.0 * 0.8) / 3 = 400.0
|
||||
assert calc.calculate_rice(500, "massive", "medium", "s") == 400.0
|
||||
|
||||
def test_minimal_impact(self, calc):
|
||||
# reach=1000, impact=minimal(0.25), confidence=low(0.5), effort=xs(1)
|
||||
# = (1000 * 0.25 * 0.5) / 1 = 125.0
|
||||
assert calc.calculate_rice(1000, "minimal", "low", "xs") == 125.0
|
||||
|
||||
def test_zero_reach(self, calc):
|
||||
assert calc.calculate_rice(0, "high", "high", "m") == 0.0
|
||||
|
||||
def test_case_insensitive(self, calc):
|
||||
assert calc.calculate_rice(1000, "HIGH", "HIGH", "M") == 400.0
|
||||
|
||||
def test_unknown_impact_defaults_to_one(self, calc):
|
||||
# Unknown impact maps to 1.0
|
||||
        # reach=1000, impact=1.0, confidence=high(1.0), effort=m(5)
        # = (1000 * 1.0 * 1.0) / 5 = 200.0
        assert calc.calculate_rice(1000, "unknown", "high", "m") == 200.0

    def test_xl_effort(self, calc):
        # reach=1300, impact=medium(1.0), confidence=high(1.0), effort=xl(13)
        # = (1300 * 1.0 * 1.0) / 13 = 100.0
        assert calc.calculate_rice(1300, "medium", "high", "xl") == 100.0

    @pytest.mark.parametrize("impact,expected_score", [
        ("massive", 3.0),
        ("high", 2.0),
        ("medium", 1.0),
        ("low", 0.5),
        ("minimal", 0.25),
    ])
    def test_impact_map(self, calc, impact, expected_score):
        # reach=100, confidence=high(1.0), effort=xs(1) -> score = 100 * impact
        result = calc.calculate_rice(100, impact, "high", "xs")
        assert result == round(100 * expected_score, 2)


class TestPrioritizeFeatures:
    """Test feature sorting by RICE score."""

    def test_sorts_descending(self, calc):
        features = [
            {"name": "low", "reach": 100, "impact": "low", "confidence": "low", "effort": "xl"},
            {"name": "high", "reach": 10000, "impact": "massive", "confidence": "high", "effort": "xs"},
        ]
        result = calc.prioritize_features(features)
        assert result[0]["name"] == "high"
        assert result[1]["name"] == "low"

    def test_adds_rice_score(self, calc):
        features = [{"name": "test", "reach": 1000, "impact": "high", "confidence": "high", "effort": "m"}]
        result = calc.prioritize_features(features)
        assert "rice_score" in result[0]
        assert result[0]["rice_score"] == 400.0

    def test_empty_list(self, calc):
        assert calc.prioritize_features([]) == []

    def test_defaults_for_missing_fields(self, calc):
        features = [{"name": "sparse"}]
        result = calc.prioritize_features(features)
        assert result[0]["rice_score"] == 0.0  # reach defaults to 0


class TestAnalyzePortfolio:
    """Test portfolio analysis metrics."""

    def test_empty_features(self, calc):
        assert calc.analyze_portfolio([]) == {}

    def test_counts_quick_wins(self, calc):
        features = [
            {"name": "qw", "reach": 1000, "impact": "high", "confidence": "high", "effort": "xs", "rice_score": 100},
            {"name": "big", "reach": 1000, "impact": "high", "confidence": "high", "effort": "xl", "rice_score": 50},
        ]
        result = calc.analyze_portfolio(features)
        assert result["quick_wins"] == 1
        assert result["big_bets"] == 1
        assert result["total_features"] == 2

    def test_total_effort(self, calc):
        features = [
            {"name": "a", "effort": "m", "rice_score": 10},  # 5 months
            {"name": "b", "effort": "s", "rice_score": 20},  # 3 months
        ]
        result = calc.analyze_portfolio(features)
        assert result["total_effort_months"] == 8


class TestGenerateRoadmap:
    """Test roadmap generation with capacity constraints."""

    def test_single_quarter(self, calc):
        features = [
            {"name": "a", "effort": "s", "rice_score": 100},  # 3 months
            {"name": "b", "effort": "s", "rice_score": 50},  # 3 months
        ]
        roadmap = calc.generate_roadmap(features, team_capacity=10)
        assert len(roadmap) == 1
        assert len(roadmap[0]["features"]) == 2
        assert roadmap[0]["capacity_used"] == 6

    def test_overflow_to_next_quarter(self, calc):
        features = [
            {"name": "a", "effort": "l", "rice_score": 100},  # 8 months
            {"name": "b", "effort": "l", "rice_score": 50},  # 8 months
        ]
        roadmap = calc.generate_roadmap(features, team_capacity=10)
        assert len(roadmap) == 2
        assert roadmap[0]["features"][0]["name"] == "a"
        assert roadmap[1]["features"][0]["name"] == "b"

    def test_empty_features(self, calc):
        assert calc.generate_roadmap([], team_capacity=10) == []
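The scoring rules these tests pin down (impact map, effort sizes in months, neutral fallback for unknown labels) can be sketched as a minimal calculator. Note this is an illustration inferred from the assertions above, not the repository's actual `RICECalculator` implementation; the `CONFIDENCE` values for "medium" and "low" are assumptions, since the tests only exercise "high".

```python
class RICECalculator:
    """Minimal sketch of the calculator the tests above exercise (assumed, not the real one)."""

    IMPACT = {"massive": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5, "minimal": 0.25}
    CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}  # medium/low values are assumptions
    EFFORT = {"xs": 1, "s": 3, "m": 5, "l": 8, "xl": 13}   # person-months

    def calculate_rice(self, reach, impact, confidence, effort):
        # Unknown labels fall back to neutral defaults, matching test_impact_map
        # and the "unknown" impact case: (1000 * 1.0 * 1.0) / 5 == 200.0
        i = self.IMPACT.get(impact, 1.0)
        c = self.CONFIDENCE.get(confidence, 1.0)
        e = self.EFFORT.get(effort, 1)
        return round((reach * i * c) / e, 2)
```

Under these tables, `calculate_rice(1000, "high", "high", "m")` gives (1000 × 2.0 × 1.0) / 5 = 400.0, matching `test_adds_rice_score`.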
167
tests/test_seo_checker.py
Normal file
@@ -0,0 +1,167 @@
"""Unit tests for the SEO Checker."""

import sys
import os

import pytest

sys.path.insert(0, os.path.join(
    os.path.dirname(__file__), "..", "marketing-skill", "seo-audit", "scripts"
))
from seo_checker import SEOParser, analyze_html, compute_overall_score


class TestSEOParser:
    def test_extracts_title(self):
        p = SEOParser()
        p.feed("<html><head><title>My Page Title</title></head></html>")
        assert p.title == "My Page Title"

    def test_extracts_meta_description(self):
        p = SEOParser()
        p.feed('<html><head><meta name="description" content="A great page"></head></html>')
        assert p.meta_description == "A great page"

    def test_extracts_og_description_fallback(self):
        p = SEOParser()
        p.feed('<html><head><meta property="og:description" content="OG desc"></head></html>')
        assert p.meta_description == "OG desc"

    def test_meta_description_takes_priority_over_og(self):
        p = SEOParser()
        p.feed('<head><meta name="description" content="Primary"><meta property="og:description" content="OG"></head>')
        assert p.meta_description == "Primary"

    def test_extracts_headings(self):
        p = SEOParser()
        p.feed("<h1>Main Title</h1><h2>Section 1</h2><h3>Subsection</h3>")
        assert len(p.h_tags) == 3
        assert p.h_tags[0] == (1, "Main Title")
        assert p.h_tags[1] == (2, "Section 1")
        assert p.h_tags[2] == (3, "Subsection")

    def test_extracts_images(self):
        p = SEOParser()
        p.feed('<img src="photo.jpg" alt="A photo"><img src="icon.png">')
        assert len(p.images) == 2
        assert p.images[0]["alt"] == "A photo"
        assert p.images[1]["alt"] is None

    def test_extracts_links(self):
        p = SEOParser()
        p.feed('<a href="/internal">Click here</a><a href="https://example.com">External</a>')
        assert len(p.links) == 2
        assert p.links[0]["href"] == "/internal"
        assert p.links[1]["href"] == "https://example.com"

    def test_viewport_meta(self):
        p = SEOParser()
        p.feed('<meta name="viewport" content="width=device-width">')
        assert p.viewport_meta is True

    def test_ignores_script_content(self):
        p = SEOParser()
        p.feed("<body><script>var x = 1;</script><p>Real content</p></body>")
        body_text = " ".join(p.body_text_parts)
        assert "var x" not in body_text
        assert "Real content" in body_text


class TestAnalyzeHTML:
    def test_perfect_title(self):
        # 55 chars is within the 50-60 char optimal range
        title = "A" * 55
        html = f"<html><head><title>{title}</title></head><body></body></html>"
        result = analyze_html(html)
        assert result["title"]["pass"] is True
        assert result["title"]["score"] == 100

    def test_missing_title(self):
        result = analyze_html("<html><head></head><body></body></html>")
        assert result["title"]["pass"] is False
        assert result["title"]["score"] == 0

    def test_one_h1_passes(self):
        result = analyze_html("<h1>Title</h1>")
        assert result["h1"]["pass"] is True
        assert result["h1"]["count"] == 1

    def test_multiple_h1s_fail(self):
        result = analyze_html("<h1>First</h1><h1>Second</h1>")
        assert result["h1"]["pass"] is False
        assert result["h1"]["count"] == 2

    def test_no_h1_fails(self):
        result = analyze_html("<h2>No H1</h2>")
        assert result["h1"]["pass"] is False
        assert result["h1"]["count"] == 0

    def test_heading_hierarchy_skip(self):
        result = analyze_html("<h1>Title</h1><h3>Skipped H2</h3>")
        assert result["heading_hierarchy"]["pass"] is False
        assert len(result["heading_hierarchy"]["issues"]) == 1

    def test_heading_hierarchy_ok(self):
        result = analyze_html("<h1>Title</h1><h2>Section</h2><h3>Sub</h3>")
        assert result["heading_hierarchy"]["pass"] is True

    def test_image_alt_text_all_present(self):
        result = analyze_html('<img src="a.jpg" alt="Photo"><img src="b.jpg" alt="Icon">')
        assert result["image_alt_text"]["pass"] is True
        assert result["image_alt_text"]["coverage_pct"] == 100.0

    def test_image_alt_text_missing(self):
        result = analyze_html('<img src="a.jpg" alt="Photo"><img src="b.jpg">')
        assert result["image_alt_text"]["pass"] is False
        assert result["image_alt_text"]["with_alt"] == 1

    def test_no_images_passes(self):
        result = analyze_html("<p>No images</p>")
        assert result["image_alt_text"]["pass"] is True

    def test_word_count_sufficient(self):
        words = " ".join(["word"] * 350)
        result = analyze_html(f"<body><p>{words}</p></body>")
        assert result["word_count"]["pass"] is True
        assert result["word_count"]["count"] >= 300

    def test_word_count_insufficient(self):
        result = analyze_html("<body><p>Too few words here</p></body>")
        assert result["word_count"]["pass"] is False

    def test_viewport_present(self):
        result = analyze_html('<meta name="viewport" content="width=device-width">')
        assert result["viewport_meta"]["pass"] is True

    def test_viewport_missing(self):
        result = analyze_html("<html><head></head></html>")
        assert result["viewport_meta"]["pass"] is False


class TestComputeOverallScore:
    def test_returns_integer(self):
        html = "<html><head><title>Test</title></head><body><h1>Title</h1></body></html>"
        results = analyze_html(html)
        score = compute_overall_score(results)
        assert isinstance(score, int)
        assert 0 <= score <= 100

    def test_demo_html_scores_reasonably(self):
        from seo_checker import DEMO_HTML
        results = analyze_html(DEMO_HTML)
        score = compute_overall_score(results)
        # The demo page is well-optimized and should score above 70
        assert score >= 70


class TestEdgeCases:
    def test_empty_html(self):
        result = analyze_html("")
        assert result["title"]["pass"] is False
        assert result["h1"]["count"] == 0

    def test_malformed_html(self):
        # Should not crash on malformed HTML
        result = analyze_html("<h1>Unclosed<h2>Nested badly")
        assert isinstance(result, dict)
        assert "h1" in result
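The `SEOParser` attributes these tests read (`title`, `h_tags`, `images`, ...) suggest a stdlib `html.parser.HTMLParser` subclass. A stripped-down illustration of that pattern, extracting only the title (the class name `TitleParser` is ours; the real parser in `marketing-skill/seo-audit/scripts` tracks far more state):

```python
from html.parser import HTMLParser


class TitleParser(HTMLParser):
    """Tiny sketch of the HTMLParser pattern the SEOParser tests rely on."""

    def __init__(self):
        super().__init__()
        self.title = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        # Flag when we enter <title> so handle_data knows to capture text
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = data


p = TitleParser()
p.feed("<html><head><title>My Page Title</title></head></html>")
print(p.title)  # -> My Page Title
```

Because `HTMLParser` is event-driven and tolerant of unclosed tags, the same approach also explains why `test_malformed_html` above can expect a dict back rather than an exception.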
192
tests/test_skill_integrity.py
Normal file
@@ -0,0 +1,192 @@
"""Integration tests: verify skill package consistency across the repository.

These tests validate that:
1. Every skill directory with a SKILL.md has valid structure
2. SKILL.md files have required YAML frontmatter
3. File references in SKILL.md actually exist
4. Scripts directories contain valid Python files
5. No orphaned scripts directories without a SKILL.md
"""

import glob
import os
import re

import pytest

REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

SKILL_DOMAINS = [
    "engineering-team",
    "engineering",
    "product-team",
    "marketing-skill",
    "project-management",
    "c-level-advisor",
    "ra-qm-team",
    "business-growth",
    "finance",
]

SKIP_PATTERNS = [
    "assets/sample-skill",
    "assets/sample_codebase",
    "__pycache__",
]


def _find_all_skill_dirs():
    """Find all directories containing a SKILL.md file."""
    skills = []
    for domain in SKILL_DOMAINS:
        domain_path = os.path.join(REPO_ROOT, domain)
        if not os.path.isdir(domain_path):
            continue
        for root, dirs, files in os.walk(domain_path):
            if "SKILL.md" in files:
                rel = os.path.relpath(root, REPO_ROOT)
                if any(skip in rel for skip in SKIP_PATTERNS):
                    continue
                skills.append(root)
    return skills


ALL_SKILL_DIRS = _find_all_skill_dirs()


def _short_id(path):
    return os.path.relpath(path, REPO_ROOT)


class TestSkillMdExists:
    """Every recognized skill directory must have a SKILL.md."""

    def test_found_skills(self):
        assert len(ALL_SKILL_DIRS) > 100, f"Expected 100+ skills, found {len(ALL_SKILL_DIRS)}"


class TestSkillMdFrontmatter:
    """SKILL.md files should have YAML frontmatter with name and description."""

    @pytest.mark.parametrize(
        "skill_dir",
        ALL_SKILL_DIRS,
        ids=[_short_id(s) for s in ALL_SKILL_DIRS],
    )
    def test_has_frontmatter(self, skill_dir):
        skill_md = os.path.join(skill_dir, "SKILL.md")
        with open(skill_md, "r", encoding="utf-8") as f:
            content = f.read()

        # Check for YAML frontmatter delimiters
        assert content.startswith("---"), (
            f"{_short_id(skill_dir)}/SKILL.md is missing YAML frontmatter (no opening ---)"
        )
        # Find the closing ---
        second_delim = content.find("---", 4)
        assert second_delim > 0, (
            f"{_short_id(skill_dir)}/SKILL.md has unclosed frontmatter"
        )

    @pytest.mark.parametrize(
        "skill_dir",
        ALL_SKILL_DIRS,
        ids=[_short_id(s) for s in ALL_SKILL_DIRS],
    )
    def test_frontmatter_has_name(self, skill_dir):
        skill_md = os.path.join(skill_dir, "SKILL.md")
        with open(skill_md, "r", encoding="utf-8") as f:
            content = f.read()

        match = re.match(r"^---\n(.*?)---\n", content, re.DOTALL)
        if match:
            fm = match.group(1)
            assert "name:" in fm, (
                f"{_short_id(skill_dir)}/SKILL.md frontmatter missing 'name' field"
            )


class TestSkillMdHasH1:
    """Every SKILL.md must have at least one H1 heading."""

    @pytest.mark.parametrize(
        "skill_dir",
        ALL_SKILL_DIRS,
        ids=[_short_id(s) for s in ALL_SKILL_DIRS],
    )
    def test_has_h1(self, skill_dir):
        skill_md = os.path.join(skill_dir, "SKILL.md")
        with open(skill_md, "r", encoding="utf-8") as f:
            content = f.read()

        # Strip frontmatter before searching for the H1
        content = re.sub(r"^---\n.*?---\n", "", content, flags=re.DOTALL)
        assert re.search(r"^# .+", content, re.MULTILINE), (
            f"{_short_id(skill_dir)}/SKILL.md has no H1 heading"
        )


class TestScriptDirectories:
    """Validate scripts/ directories within skills."""

    def _get_skills_with_scripts(self):
        result = []
        for skill_dir in ALL_SKILL_DIRS:
            scripts_dir = os.path.join(skill_dir, "scripts")
            if os.path.isdir(scripts_dir):
                py_files = glob.glob(os.path.join(scripts_dir, "*.py"))
                if py_files:
                    result.append((skill_dir, py_files))
        return result

    def test_scripts_dirs_have_python_files(self):
        """Every scripts/ directory should contain at least one .py file."""
        for skill_dir in ALL_SKILL_DIRS:
            scripts_dir = os.path.join(skill_dir, "scripts")
            if os.path.isdir(scripts_dir):
                py_files = glob.glob(os.path.join(scripts_dir, "*.py"))
                assert len(py_files) > 0, (
                    f"{_short_id(skill_dir)}/scripts/ exists but has no .py files"
                )

    def test_no_empty_skill_md(self):
        """SKILL.md files should not be empty."""
        for skill_dir in ALL_SKILL_DIRS:
            skill_md = os.path.join(skill_dir, "SKILL.md")
            size = os.path.getsize(skill_md)
            assert size > 100, (
                f"{_short_id(skill_dir)}/SKILL.md is suspiciously small ({size} bytes)"
            )


class TestReferencesDirectories:
    """Validate references/ directories are non-empty."""

    def test_references_not_empty(self):
        for skill_dir in ALL_SKILL_DIRS:
            refs_dir = os.path.join(skill_dir, "references")
            if os.path.isdir(refs_dir):
                files = [f for f in os.listdir(refs_dir) if not f.startswith(".")]
                assert len(files) > 0, (
                    f"{_short_id(skill_dir)}/references/ exists but is empty"
                )


class TestNoDuplicateSkillNames:
    """Skill directory names should be effectively unique across the repo."""

    def test_unique_top_level_skill_names(self):
        """Top-level skills (direct children of domains) should not have 3+ duplicates."""
        names = {}
        for skill_dir in ALL_SKILL_DIRS:
            rel = _short_id(skill_dir)
            parts = rel.split(os.sep)
            # Only check top-level skills (domain/skill-name), not sub-skills
            if len(parts) != 2:
                continue
            name = parts[1]
            names.setdefault(name, []).append(rel)

        # Report names that appear 3+ times (2 is acceptable for cross-domain)
        triples = {k: v for k, v in names.items() if len(v) >= 3}
        assert not triples, f"Top-level skill names appearing 3+ times: {triples}"
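The frontmatter handling above leans on two regexes: a non-greedy `re.match` to capture the block between the `---` delimiters, and a `re.sub` to strip it before looking for the H1. The same patterns can be exercised on a self-contained sample (the `SKILL_MD` literal here is an invented example, not a real skill file):

```python
import re

# Hypothetical SKILL.md content, for illustration only
SKILL_MD = """---
name: demo-skill
description: Example frontmatter
---
# Demo Skill
"""

# Non-greedy capture up to the first closing ---, as in test_frontmatter_has_name
match = re.match(r"^---\n(.*?)---\n", SKILL_MD, re.DOTALL)
assert match is not None
frontmatter = match.group(1)
assert "name:" in frontmatter

# Strip the frontmatter before searching for an H1, as in test_has_h1
body = re.sub(r"^---\n.*?---\n", "", SKILL_MD, flags=re.DOTALL)
assert re.search(r"^# .+", body, re.MULTILINE)
```

The `re.DOTALL` flag is what lets `.` span the newlines inside the frontmatter block, and the non-greedy `(.*?)` stops at the first closing delimiter rather than a later `---` in the document body.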
90
tests/test_smoke.py
Normal file
@@ -0,0 +1,90 @@
"""Smoke tests: syntax compilation and --help for all Python scripts.

These tests verify that every Python script in the repository:
1. Compiles without syntax errors (all scripts)
2. Runs --help without crashing (argparse-based scripts only)
"""

import glob
import os
import py_compile
import subprocess
import sys

import pytest

REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Directories to skip (sample/fixture code, not real scripts)
SKIP_PATTERNS = [
    "assets/sample_codebase",
    "__pycache__",
    ".venv",
    "tests/",
]


def _collect_all_python_scripts():
    """Find all .py files in the repo, excluding test/fixture code."""
    all_py = glob.glob(os.path.join(REPO_ROOT, "**", "*.py"), recursive=True)
    scripts = []
    for path in sorted(all_py):
        rel = os.path.relpath(path, REPO_ROOT)
        if any(skip in rel for skip in SKIP_PATTERNS):
            continue
        scripts.append(path)
    return scripts


def _has_argparse(path):
    """Check if a script imports argparse (heuristic)."""
    try:
        with open(path, "r", encoding="utf-8", errors="replace") as f:
            content = f.read()
        return "ArgumentParser" in content or "import argparse" in content
    except Exception:
        return False


ALL_SCRIPTS = _collect_all_python_scripts()
ARGPARSE_SCRIPTS = [s for s in ALL_SCRIPTS if _has_argparse(s)]


def _short_id(path):
    """Create a readable test ID from a full path."""
    return os.path.relpath(path, REPO_ROOT)


class TestSyntaxCompilation:
    """Every Python file must compile without syntax errors."""

    @pytest.mark.parametrize(
        "script_path",
        ALL_SCRIPTS,
        ids=[_short_id(s) for s in ALL_SCRIPTS],
    )
    def test_syntax(self, script_path):
        py_compile.compile(script_path, doraise=True)


class TestArgparseHelp:
    """Every argparse-based script must run --help successfully."""

    @pytest.mark.parametrize(
        "script_path",
        ARGPARSE_SCRIPTS,
        ids=[_short_id(s) for s in ARGPARSE_SCRIPTS],
    )
    def test_help_flag(self, script_path):
        result = subprocess.run(
            [sys.executable, script_path, "--help"],
            capture_output=True,
            text=True,
            timeout=30,
            cwd=REPO_ROOT,
        )
        assert result.returncode == 0, (
            f"--help failed for {os.path.relpath(script_path, REPO_ROOT)}:\n"
            f"STDOUT: {result.stdout[:500]}\n"
            f"STDERR: {result.stderr[:500]}"
        )
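The syntax check above hinges on `py_compile.compile(..., doraise=True)`, which raises `py_compile.PyCompileError` on a syntax error instead of printing to stderr; that is what lets pytest attribute the failure to a specific file. A standalone sketch of that behavior on two throwaway temp files (file contents here are invented examples):

```python
import os
import py_compile
import tempfile

# A valid file compiles silently with doraise=True
good = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
good.write("x = 1\n")
good.close()
py_compile.compile(good.name, doraise=True)  # no exception raised

# A file with a syntax error raises PyCompileError instead of printing
bad = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
bad.write("def broken(:\n")
bad.close()
try:
    py_compile.compile(bad.name, doraise=True)
except py_compile.PyCompileError:
    print("caught syntax error")
finally:
    os.unlink(good.name)
    os.unlink(bad.name)
```

Without `doraise=True`, `py_compile.compile` reports errors to `sys.stderr` and returns, which is why the CI step `python -m compileall ... || true` could previously mask failures.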