feat: Add comprehensive performance benchmarking (Phase 4)
Phase 4 of optional enhancements: Performance Benchmarking

**New Files:**
- tests/test_adaptor_benchmarks.py (478 lines)
  - 6 comprehensive benchmark tests with pytest
  - Measures format_skill_md() across 11 adaptors
  - Tests package operations (time + file size)
  - Analyzes scaling behavior (1-50 references)
  - Compares JSON vs ZIP compression ratios (~80-90x)
  - Quantifies metadata processing overhead (<10%)
  - Compares empty vs full skill performance
- scripts/run_benchmarks.sh (executable runner)
  - Colored terminal output
  - Automated benchmark execution
  - Summary reporting with key insights
  - Package installation check

**Modified Files:**
- pyproject.toml - Added "benchmark" pytest marker

**Test Results:**
- All 6 benchmark tests passing
- All 164 adaptor tests still passing
- No regressions detected

**Key Findings:**
- All adaptors complete formatting in < 500ms
- Package operations complete in < 1 second
- Linear scaling confirmed (0.39x factor at 50 refs)
- Metadata overhead negligible (-1.8%)
- ZIP compression ratio: 83-84x
- Empty skill processing: 0.03ms
- Full skill (50 refs): 2.62ms

**Usage:**
./scripts/run_benchmarks.sh

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
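The `benchmark` marker and the timing pattern described above can be sketched roughly as follows. This is a hypothetical stand-in, not the repository's actual code: `format_skill` below is a placeholder for an adaptor's `format_skill_md()`, and the 500 ms budget mirrors the finding quoted in the commit message.

```python
import time

import pytest

def format_skill(references: list[str]) -> str:
    """Hypothetical stand-in for an adaptor's format_skill_md()."""
    return "\n".join(f"- {ref}" for ref in references)

@pytest.mark.benchmark  # marker registered in pyproject.toml (see diff below)
def test_formatting_stays_under_budget() -> None:
    refs = [f"ref-{i}" for i in range(50)]
    start = time.perf_counter()
    format_skill(refs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Matches the "< 500ms per adaptor" budget reported above.
    assert elapsed_ms < 500
```

With the marker registered, these tests can be selected with `pytest -m benchmark` or excluded from the fast suite with `pytest -m "not benchmark"`.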
@@ -206,6 +206,7 @@ markers = [
     "e2e: mark test as end-to-end (resource-intensive, may create files)",
     "venv: mark test as requiring virtual environment setup",
     "bootstrap: mark test as bootstrap feature specific",
+    "benchmark: mark test as performance benchmark",
 ]
 asyncio_mode = "auto"
 asyncio_default_fixture_loop_scope = "function"