feat(skills): add 5 new skills via factory methodology (#176)

Build campaign-analytics, financial-analyst, customer-success-manager,
sales-engineer, and revenue-operations skills using the Claude Skills
Factory workflow. Each skill includes SKILL.md, Python CLI tools,
reference guides, and asset templates. All 16 Python scripts use
standard library only with --format json/text support.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Author: Alireza Rezvani
Date: 2026-02-06 23:51:58 +01:00 (committed by GitHub)
Parent: b35adebfae
Commit: eef020c9e0
72 changed files with 18455 additions and 30 deletions


@@ -6,7 +6,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
This is a **comprehensive skills library** for Claude AI - reusable, production-ready skill packages that bundle domain expertise, best practices, analysis tools, and strategic frameworks. The repository provides modular skills that teams can download and use directly in their workflows.
**Current Scope:** 48 production-ready skills across 6 domains with 68+ Python automation tools.
**Current Scope:** 54 production-ready skills across 8 domains with 87+ Python automation tools.
**Key Distinction**: This is NOT a traditional application. It's a library of skill packages meant to be extracted and deployed by users into their own Claude workflows.
@@ -17,12 +17,14 @@ This repository uses **modular documentation**. For domain-specific guidance, se
| Domain | CLAUDE.md Location | Focus |
|--------|-------------------|-------|
| **Agent Development** | [agents/CLAUDE.md](agents/CLAUDE.md) | cs-* agent creation, YAML frontmatter, relative paths |
| **Marketing Skills** | [marketing-skill/CLAUDE.md](marketing-skill/CLAUDE.md) | Content creation, SEO, demand gen Python tools |
| **Marketing Skills** | [marketing-skill/CLAUDE.md](marketing-skill/CLAUDE.md) | Content creation, SEO, demand gen, campaign analytics Python tools |
| **Product Team** | [product-team/CLAUDE.md](product-team/CLAUDE.md) | RICE, OKRs, user stories, UX research tools |
| **Engineering** | [engineering-team/CLAUDE.md](engineering-team/CLAUDE.md) | Scaffolding, fullstack, AI/ML, data tools |
| **C-Level Advisory** | [c-level-advisor/CLAUDE.md](c-level-advisor/CLAUDE.md) | CEO/CTO strategic decision-making |
| **Project Management** | [project-management/CLAUDE.md](project-management/CLAUDE.md) | Atlassian MCP, Jira/Confluence integration |
| **RA/QM Compliance** | [ra-qm-team/CLAUDE.md](ra-qm-team/CLAUDE.md) | ISO 13485, MDR, FDA compliance workflows |
| **Business & Growth** | [business-growth/CLAUDE.md](business-growth/CLAUDE.md) | Customer success, sales engineering, revenue operations |
| **Finance** | [finance/CLAUDE.md](finance/CLAUDE.md) | Financial analysis, DCF valuation, budgeting, forecasting |
| **Standards Library** | [standards/CLAUDE.md](standards/CLAUDE.md) | Communication, quality, git, security standards |
| **Templates** | [templates/CLAUDE.md](templates/CLAUDE.md) | Template system usage |
@@ -35,12 +37,14 @@ This repository uses **modular documentation**. For domain-specific guidance, se
```
claude-code-skills/
├── agents/               # cs-* prefixed agents (in development)
├── marketing-skill/      # 5 marketing skills + Python tools
├── marketing-skill/      # 6 marketing skills + Python tools
├── product-team/         # 5 product skills + Python tools
├── engineering-team/     # 18 engineering skills + Python tools
├── c-level-advisor/      # 2 C-level skills
├── project-management/   # 6 PM skills + Atlassian MCP
├── ra-qm-team/           # 12 RA/QM compliance skills
├── business-growth/      # 3 business & growth skills + Python tools
├── finance/              # 1 finance skill + Python tools
├── standards/            # 5 standards library files
├── templates/            # Reusable templates
└── documentation/        # Implementation plans, sprints, delivery
@@ -132,17 +136,16 @@ See [standards/git/git-workflow-standards.md](standards/git/git-workflow-standar
## Roadmap
**Phase 1 Complete:** 48 production-ready skills deployed
- Marketing (5), C-Level (2), Product (5), PM (6), Engineering (18), RA/QM (12)
- 68+ Python automation tools, 90+ reference guides
- Complete enterprise coverage from marketing through regulatory compliance
**Phase 1-2 Complete:** 54 production-ready skills deployed
- Marketing (7), C-Level (2), Product (5), PM (6), Engineering (18), RA/QM (12), Business & Growth (3), Finance (1)
- 87+ Python automation tools, 104+ reference guides
- Complete enterprise coverage from marketing through regulatory compliance, sales, customer success, and finance
**Next Priorities:**
- **Phase 2 (Q1 2026):** Marketing expansion - SEO optimizer, social media manager, campaign analytics
- **Phase 3 (Q2 2026):** Business & growth - Sales engineer, customer success, growth marketer
- **Phase 4 (Q3 2026):** Specialized domains - Mobile, blockchain, web3, finance
- **Phase 3 (Q2 2026):** Marketing expansion - SEO optimizer, social media manager, growth marketer
- **Phase 4 (Q3 2026):** Specialized domains - Mobile, blockchain, web3, advanced analytics
**Target:** 50+ skills by Q3 2026
**Target:** 60+ skills by Q3 2026
See domain-specific roadmaps in each skill folder's README.md or roadmap files.
@@ -179,6 +182,6 @@ See domain-specific roadmaps in each skill folder's README.md or roadmap files.
---
**Last Updated:** November 5, 2025
**Last Updated:** February 2026
**Current Sprint:** sprint-11-05-2025 (Skill-Agent Integration Phase 1-2)
**Status:** 48 skills deployed, agent system in development
**Status:** 54 skills deployed, agent system in development


@@ -0,0 +1,13 @@
{
  "name": "business-growth-skills",
  "description": "3 production-ready business & growth skills: customer success manager, sales engineer, and revenue operations",
  "version": "1.0.0",
  "author": {
    "name": "Alireza Rezvani",
    "url": "https://alirezarezvani.com"
  },
  "homepage": "https://github.com/alirezarezvani/claude-skills/tree/main/business-growth",
  "repository": "https://github.com/alirezarezvani/claude-skills",
  "license": "MIT",
  "skills": "./"
}

business-growth/CLAUDE.md (new file, 188 lines)

@@ -0,0 +1,188 @@
# Business & Growth Skills - Claude Code Guidance
This guide covers the 3 production-ready business and growth skills and their Python automation tools.
## Business & Growth Skills Overview
**Available Skills:**
1. **customer-success-manager/** - Customer health scoring, churn risk analysis, expansion opportunities (3 Python tools)
2. **sales-engineer/** - Technical discovery, RFP analysis, competitive positioning, POC planning (3 Python tools)
3. **revenue-operations/** - Pipeline analysis, forecast accuracy, GTM efficiency metrics (3 Python tools)
**Total Tools:** 9 Python automation tools, 9 knowledge bases, 19+ templates
## Python Automation Tools
### Customer Success Manager Tools
#### 1. Health Score Calculator (`customer-success-manager/scripts/health_score_calculator.py`)
**Purpose:** Multi-dimensional customer health scoring with trend analysis
**Features:**
- Weighted scoring across 4 dimensions (usage, engagement, support, relationship)
- Red/Yellow/Green classification with configurable thresholds
- Trend analysis comparing current vs previous period
- Segment-aware benchmarking (Enterprise/Mid-Market/SMB)
**Usage:**
```bash
python customer-success-manager/scripts/health_score_calculator.py customer_data.json
python customer-success-manager/scripts/health_score_calculator.py customer_data.json --format json
```
#### 2. Churn Risk Analyzer (`customer-success-manager/scripts/churn_risk_analyzer.py`)
**Purpose:** Identify at-risk accounts with intervention recommendations
**Features:**
- Risk scoring based on behavioral signals
- Warning signal detection and categorization
- Tier-appropriate intervention playbooks
- Urgency-based prioritization
**Usage:**
```bash
python customer-success-manager/scripts/churn_risk_analyzer.py customer_data.json
python customer-success-manager/scripts/churn_risk_analyzer.py customer_data.json --format json
```
#### 3. Expansion Opportunity Scorer (`customer-success-manager/scripts/expansion_opportunity_scorer.py`)
**Purpose:** Identify upsell and cross-sell opportunities
**Features:**
- Adoption depth analysis across product modules
- Whitespace mapping for unused features
- Revenue opportunity estimation
- Priority ranking by effort and impact
**Usage:**
```bash
python customer-success-manager/scripts/expansion_opportunity_scorer.py customer_data.json
python customer-success-manager/scripts/expansion_opportunity_scorer.py customer_data.json --format json
```
### Sales Engineer Tools
#### 4. RFP Response Analyzer (`sales-engineer/scripts/rfp_response_analyzer.py`)
**Purpose:** Score RFP/RFI coverage and identify gaps
**Features:**
- Requirement coverage scoring (Full/Partial/Planned/Gap)
- Effort estimation per requirement
- Gap identification with mitigation strategies
- Overall bid/no-bid recommendation
**Usage:**
```bash
python sales-engineer/scripts/rfp_response_analyzer.py rfp_data.json
python sales-engineer/scripts/rfp_response_analyzer.py rfp_data.json --format json
```
#### 5. Competitive Matrix Builder (`sales-engineer/scripts/competitive_matrix_builder.py`)
**Purpose:** Generate feature comparison matrices and competitive positioning
**Features:**
- Feature-by-feature comparison matrix
- Competitive scoring with weighted categories
- Differentiator identification
- Battlecard-ready output
**Usage:**
```bash
python sales-engineer/scripts/competitive_matrix_builder.py competitive_data.json
python sales-engineer/scripts/competitive_matrix_builder.py competitive_data.json --format json
```
#### 6. POC Planner (`sales-engineer/scripts/poc_planner.py`)
**Purpose:** Plan proof-of-concept engagements
**Features:**
- Timeline estimation based on scope
- Resource allocation planning
- Success criteria definition
- Evaluation scorecard generation
**Usage:**
```bash
python sales-engineer/scripts/poc_planner.py poc_data.json
python sales-engineer/scripts/poc_planner.py poc_data.json --format json
```
### Revenue Operations Tools
#### 7. Pipeline Analyzer (`revenue-operations/scripts/pipeline_analyzer.py`)
**Purpose:** Analyze sales pipeline health and velocity
**Features:**
- Coverage ratio calculation (pipeline/quota)
- Stage conversion rate analysis
- Sales velocity metrics (4-lever model)
- Deal aging analysis
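The coverage ratio and 4-lever velocity model above follow the standard definitions. A minimal sketch (function and field names are illustrative assumptions, not `pipeline_analyzer.py`'s actual internals):

```python
# Illustrative sketch of the pipeline metrics above; names are
# assumptions, not pipeline_analyzer.py's actual internals.
def coverage_ratio(open_pipeline: float, quota: float) -> float:
    """Pipeline coverage = open pipeline value / quota (3x is a common target)."""
    return open_pipeline / quota if quota else 0.0

def sales_velocity(num_opps: int, avg_deal_size: float,
                   win_rate: float, cycle_days: float) -> float:
    """4-lever model: (# opportunities x avg deal size x win rate) / cycle length."""
    return (num_opps * avg_deal_size * win_rate) / cycle_days if cycle_days else 0.0

print(coverage_ratio(1_500_000, 500_000))            # 3.0
print(round(sales_velocity(40, 25_000, 0.25, 90)))   # 2778 (revenue per day)
```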
**Usage:**
```bash
python revenue-operations/scripts/pipeline_analyzer.py pipeline_data.json
python revenue-operations/scripts/pipeline_analyzer.py pipeline_data.json --format json
```
#### 8. Forecast Accuracy Tracker (`revenue-operations/scripts/forecast_accuracy_tracker.py`)
**Purpose:** Measure and improve forecast accuracy
**Features:**
- MAPE (Mean Absolute Percentage Error) calculation
- Forecast bias detection (over/under-forecasting)
- Period-over-period trend analysis
- Category-level accuracy breakdown
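MAPE and bias follow their textbook definitions; a minimal sketch (illustrative, not the tracker's actual code):

```python
# Illustrative sketch of the accuracy metrics above; not the tracker's code.
def mape(forecasts, actuals):
    """Mean Absolute Percentage Error over periods with non-zero actuals."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals) if a]
    return 100 * sum(errors) / len(errors)

def bias(forecasts, actuals):
    """Mean signed error: positive = over-forecasting, negative = under."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / len(actuals)

print(round(mape([100, 120], [110, 100]), 1))  # 14.5
print(bias([100, 120], [110, 100]))            # 5.0 (slight over-forecast)
```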
**Usage:**
```bash
python revenue-operations/scripts/forecast_accuracy_tracker.py forecast_data.json
python revenue-operations/scripts/forecast_accuracy_tracker.py forecast_data.json --format json
```
#### 9. GTM Efficiency Calculator (`revenue-operations/scripts/gtm_efficiency_calculator.py`)
**Purpose:** Calculate go-to-market efficiency metrics
**Features:**
- Magic number calculation
- LTV:CAC ratio analysis
- CAC payback period
- Burn multiple assessment
- Industry benchmarking
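These metrics have widely used SaaS definitions; the sketch below uses standard conventions (annualized magic number, monthly gross profit for payback) and is not the script's actual implementation:

```python
# Standard SaaS efficiency definitions; the script's exact inputs may differ.
def magic_number(net_new_arr_quarter: float, prior_quarter_sm_spend: float) -> float:
    """Annualized new ARR per dollar of prior-quarter sales & marketing spend."""
    return (net_new_arr_quarter * 4) / prior_quarter_sm_spend

def ltv_cac(ltv: float, cac: float) -> float:
    """Lifetime value to customer acquisition cost (3:1 is a common benchmark)."""
    return ltv / cac

def cac_payback_months(cac: float, monthly_gross_profit: float) -> float:
    """Months of gross profit needed to recover acquisition cost."""
    return cac / monthly_gross_profit

print(magic_number(500_000, 2_000_000))   # 1.0
print(ltv_cac(90_000, 30_000))            # 3.0
print(cac_payback_months(30_000, 2_500))  # 12.0
```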
**Usage:**
```bash
python revenue-operations/scripts/gtm_efficiency_calculator.py gtm_data.json
python revenue-operations/scripts/gtm_efficiency_calculator.py gtm_data.json --format json
```
## Quality Standards
**All business & growth Python tools must:**
- Use standard library only (no external dependencies)
- Support both JSON and human-readable output via `--format` flag
- Provide clear error messages for invalid input
- Return appropriate exit codes
- Process files locally (no API calls)
- Include argparse CLI with `--help` support
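A compliant tool reduces to a small amount of boilerplate. A minimal sketch satisfying the standards above (a hypothetical tool, not the code of any shipped script):

```python
#!/usr/bin/env python3
"""Minimal CLI skeleton meeting the standards above (hypothetical tool)."""
import argparse
import json
import os
import sys
import tempfile

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(description="Analyze a JSON input file.")
    parser.add_argument("input_file", help="path to JSON input")
    parser.add_argument("--format", choices=["text", "json"], default="text",
                        help="output format (default: text)")
    args = parser.parse_args(argv)
    try:
        with open(args.input_file) as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        # Clear error message + non-zero exit code for invalid input
        print(f"Error: could not read input: {exc}", file=sys.stderr)
        return 1
    result = {"records": len(data.get("customers", []))}
    if args.format == "json":
        print(json.dumps(result, indent=2))              # machine-readable
    else:
        print(f"Records analyzed: {result['records']}")  # human-readable
    return 0

# Demo: create a tiny input file and call the entry point directly.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"customers": [{"customer_id": "CUST-001"}]}, tmp)
exit_code = main([tmp.name, "--format", "json"])
os.remove(tmp.name)
```

In a real tool the module would end with `sys.exit(main())` under an `if __name__ == "__main__":` guard.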
## Related Skills
- **Marketing:** Content creation, demand generation -> `../marketing-skill/`
- **Product Team:** User research, feature prioritization -> `../product-team/`
- **C-Level:** Strategic planning -> `../c-level-advisor/`
- **Engineering:** Technical implementation -> `../engineering-team/`
---
**Last Updated:** February 2026
**Skills Deployed:** 3/3 business & growth skills production-ready
**Total Tools:** 9 Python automation tools


@@ -0,0 +1,323 @@
---
name: customer-success-manager
description: Monitors customer health, predicts churn risk, and identifies expansion opportunities using weighted scoring models for SaaS customer success
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: business-growth
  domain: customer-success
  updated: 2026-02-06
  python-tools: health_score_calculator.py, churn_risk_analyzer.py, expansion_opportunity_scorer.py
  tech-stack: customer-success, saas-metrics, health-scoring
---
# Customer Success Manager
Production-grade customer success analytics with multi-dimensional health scoring, churn risk prediction, and expansion opportunity identification. Three Python CLI tools provide deterministic, repeatable analysis using standard library only -- no external dependencies, no API calls, no ML models.
---
## Table of Contents
- [Capabilities](#capabilities)
- [Input Requirements](#input-requirements)
- [Output Formats](#output-formats)
- [How to Use](#how-to-use)
- [Scripts](#scripts)
- [Reference Guides](#reference-guides)
- [Templates](#templates)
- [Best Practices](#best-practices)
- [Limitations](#limitations)
---
## Capabilities
- **Customer Health Scoring**: Multi-dimensional weighted scoring across usage, engagement, support, and relationship dimensions with Red/Yellow/Green classification
- **Churn Risk Analysis**: Behavioral signal detection with tier-based intervention playbooks and time-to-renewal urgency multipliers
- **Expansion Opportunity Scoring**: Adoption depth analysis, whitespace mapping, and revenue opportunity estimation with effort-vs-impact prioritization
- **Segment-Aware Benchmarking**: Configurable thresholds for Enterprise, Mid-Market, and SMB customer segments
- **Trend Analysis**: Period-over-period comparison to detect improving or declining trajectories
- **Executive Reporting**: QBR templates, success plans, and executive business review templates
---
## Input Requirements
All scripts accept a JSON file as positional input argument. See `assets/sample_customer_data.json` for complete examples.
### Health Score Calculator
```json
{
  "customers": [
    {
      "customer_id": "CUST-001",
      "name": "Acme Corp",
      "segment": "enterprise",
      "arr": 120000,
      "usage": {
        "login_frequency": 85,
        "feature_adoption": 72,
        "dau_mau_ratio": 0.45
      },
      "engagement": {
        "support_ticket_volume": 3,
        "meeting_attendance": 90,
        "nps_score": 8,
        "csat_score": 4.2
      },
      "support": {
        "open_tickets": 2,
        "escalation_rate": 0.05,
        "avg_resolution_hours": 18
      },
      "relationship": {
        "executive_sponsor_engagement": 80,
        "multi_threading_depth": 4,
        "renewal_sentiment": "positive"
      },
      "previous_period": {
        "usage_score": 70,
        "engagement_score": 65,
        "support_score": 75,
        "relationship_score": 60
      }
    }
  ]
}
```
### Churn Risk Analyzer
```json
{
  "customers": [
    {
      "customer_id": "CUST-001",
      "name": "Acme Corp",
      "segment": "enterprise",
      "arr": 120000,
      "contract_end_date": "2026-06-30",
      "usage_decline": {
        "login_trend": -15,
        "feature_adoption_change": -10,
        "dau_mau_change": -0.08
      },
      "engagement_drop": {
        "meeting_cancellations": 2,
        "response_time_days": 5,
        "nps_change": -3
      },
      "support_issues": {
        "open_escalations": 1,
        "unresolved_critical": 0,
        "satisfaction_trend": "declining"
      },
      "relationship_signals": {
        "champion_left": false,
        "sponsor_change": false,
        "competitor_mentions": 1
      },
      "commercial_factors": {
        "contract_type": "annual",
        "pricing_complaints": false,
        "budget_cuts_mentioned": false
      }
    }
  ]
}
```
### Expansion Opportunity Scorer
```json
{
  "customers": [
    {
      "customer_id": "CUST-001",
      "name": "Acme Corp",
      "segment": "enterprise",
      "arr": 120000,
      "contract": {
        "licensed_seats": 100,
        "active_seats": 95,
        "plan_tier": "professional",
        "available_tiers": ["professional", "enterprise", "enterprise_plus"]
      },
      "product_usage": {
        "core_platform": {"adopted": true, "usage_pct": 85},
        "analytics_module": {"adopted": true, "usage_pct": 60},
        "integrations_module": {"adopted": false, "usage_pct": 0},
        "api_access": {"adopted": true, "usage_pct": 40},
        "advanced_reporting": {"adopted": false, "usage_pct": 0}
      },
      "departments": {
        "current": ["engineering", "product"],
        "potential": ["marketing", "sales", "support"]
      }
    }
  ]
}
```
---
## Output Formats
All scripts support two output formats via the `--format` flag:
- **`text`** (default): Human-readable formatted output for terminal viewing
- **`json`**: Machine-readable JSON output for integrations and pipelines
---
## How to Use
### Quick Start
```bash
# Health scoring
python scripts/health_score_calculator.py assets/sample_customer_data.json
python scripts/health_score_calculator.py assets/sample_customer_data.json --format json
# Churn risk analysis
python scripts/churn_risk_analyzer.py assets/sample_customer_data.json
python scripts/churn_risk_analyzer.py assets/sample_customer_data.json --format json
# Expansion opportunity scoring
python scripts/expansion_opportunity_scorer.py assets/sample_customer_data.json
python scripts/expansion_opportunity_scorer.py assets/sample_customer_data.json --format json
```
### Workflow Integration
```bash
# 1. Score customer health across portfolio
python scripts/health_score_calculator.py customer_portfolio.json --format json > health_results.json
# 2. Identify at-risk accounts
python scripts/churn_risk_analyzer.py customer_portfolio.json --format json > risk_results.json
# 3. Find expansion opportunities in healthy accounts
python scripts/expansion_opportunity_scorer.py customer_portfolio.json --format json > expansion_results.json
# 4. Prepare QBR using templates
# Reference: assets/qbr_template.md
```
---
## Scripts
### 1. health_score_calculator.py
**Purpose:** Multi-dimensional customer health scoring with trend analysis and segment-aware benchmarking.
**Dimensions and Weights:**
| Dimension | Weight | Metrics |
|-----------|--------|---------|
| Usage | 30% | Login frequency, feature adoption, DAU/MAU ratio |
| Engagement | 25% | Support ticket volume, meeting attendance, NPS/CSAT |
| Support | 20% | Open tickets, escalation rate, avg resolution time |
| Relationship | 25% | Executive sponsor engagement, multi-threading depth, renewal sentiment |
**Classification:**
- Green (75-100): Healthy -- customer achieving value
- Yellow (50-74): Needs attention -- monitor closely
- Red (0-49): At risk -- immediate intervention required
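The weights and thresholds above compose as a simple weighted sum. An illustrative sketch (not the script's internals) that reproduces CUST-001's dimension scores from the sample output:

```python
# Illustrative sketch of the weighted health score; weights mirror the
# table above, but the script's actual internals may differ.
WEIGHTS = {"usage": 0.30, "engagement": 0.25, "support": 0.20, "relationship": 0.25}

def overall_score(dims: dict) -> float:
    """Weighted sum of the four 0-100 dimension scores."""
    return sum(dims[name] * weight for name, weight in WEIGHTS.items())

def classify(score: float) -> str:
    """Green/Yellow/Red thresholds as documented above."""
    if score >= 75:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"

# CUST-001 dimension scores from assets sample data:
dims = {"usage": 91.6, "engagement": 82.0, "support": 78.5, "relationship": 90.1}
print(round(overall_score(dims), 1), classify(overall_score(dims)))  # 86.2 green
```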
**Usage:**
```bash
python scripts/health_score_calculator.py customer_data.json
python scripts/health_score_calculator.py customer_data.json --format json
```
### 2. churn_risk_analyzer.py
**Purpose:** Identify at-risk accounts with behavioral signal detection and tier-based intervention recommendations.
**Risk Signal Weights:**
| Signal Category | Weight | Indicators |
|----------------|--------|------------|
| Usage Decline | 30% | Login trend, feature adoption change, DAU/MAU change |
| Engagement Drop | 25% | Meeting cancellations, response time, NPS change |
| Support Issues | 20% | Open escalations, unresolved critical, satisfaction trend |
| Relationship Signals | 15% | Champion left, sponsor change, competitor mentions |
| Commercial Factors | 10% | Contract type, pricing complaints, budget cuts |
**Risk Tiers:**
- Critical (80-100): Immediate executive escalation
- High (60-79): Urgent CSM intervention
- Medium (40-59): Proactive outreach
- Low (0-39): Standard monitoring
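The signal weights and tier cutoffs above compose the same way as the health score: a weighted sum bucketed into tiers. An illustrative sketch (not the analyzer's actual code):

```python
# Illustrative sketch of the weighted risk score and tier cutoffs above;
# not the analyzer's actual code.
RISK_WEIGHTS = {
    "usage_decline": 0.30,
    "engagement_drop": 0.25,
    "support_issues": 0.20,
    "relationship_signals": 0.15,
    "commercial_factors": 0.10,
}

def risk_tier(category_scores: dict) -> tuple:
    """category_scores are 0-100 per category; higher means riskier."""
    total = sum(category_scores[c] * w for c, w in RISK_WEIGHTS.items())
    if total >= 80:
        tier = "critical"
    elif total >= 60:
        tier = "high"
    elif total >= 40:
        tier = "medium"
    else:
        tier = "low"
    return round(total, 1), tier

print(risk_tier({"usage_decline": 70, "engagement_drop": 50, "support_issues": 40,
                 "relationship_signals": 20, "commercial_factors": 10}))  # (45.5, 'medium')
```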
**Usage:**
```bash
python scripts/churn_risk_analyzer.py customer_data.json
python scripts/churn_risk_analyzer.py customer_data.json --format json
```
### 3. expansion_opportunity_scorer.py
**Purpose:** Identify upsell, cross-sell, and expansion opportunities with revenue estimation and priority ranking.
**Expansion Types:**
- **Upsell**: Upgrade to higher tier or more of existing product
- **Cross-sell**: Add new product modules
- **Expansion**: Additional seats or departments
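Revenue sizing for these expansion types is necessarily heuristic. One illustrative sketch of seat-based expansion sizing, where the 90% utilization trigger and 20% uplift are assumptions, not the script's actual model:

```python
# Illustrative seat-expansion sizing only; the 90% utilization trigger and
# 20% uplift factor are assumptions, not the script's actual model.
def seat_expansion(licensed_seats: int, active_seats: int, arr: float) -> float:
    """Estimate expansion revenue when seat utilization is near capacity."""
    utilization = active_seats / licensed_seats
    return arr * 0.20 if utilization >= 0.90 else 0.0

print(seat_expansion(100, 95, 120_000))  # ~24000: 95% utilized, size a 20% uplift
print(seat_expansion(100, 50, 120_000))  # 0.0: plenty of unused seats remain
```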
**Usage:**
```bash
python scripts/expansion_opportunity_scorer.py customer_data.json
python scripts/expansion_opportunity_scorer.py customer_data.json --format json
```
---
## Reference Guides
| Reference | Description |
|-----------|-------------|
| `references/health-scoring-framework.md` | Complete health scoring methodology, dimension definitions, weighting rationale, threshold calibration |
| `references/cs-playbooks.md` | Intervention playbooks for each risk tier, onboarding, renewal, expansion, and escalation procedures |
| `references/cs-metrics-benchmarks.md` | Industry benchmarks for NRR, GRR, churn rates, health scores, expansion rates by segment and industry |
---
## Templates
| Template | Purpose |
|----------|---------|
| `assets/qbr_template.md` | Quarterly Business Review presentation structure |
| `assets/success_plan_template.md` | Customer success plan with goals, milestones, and metrics |
| `assets/onboarding_checklist_template.md` | 90-day onboarding checklist with phase gates |
| `assets/executive_business_review_template.md` | Executive stakeholder review for strategic accounts |
---
## Best Practices
1. **Score regularly**: Run health scoring weekly for Enterprise, bi-weekly for Mid-Market, monthly for SMB
2. **Act on trends, not snapshots**: A declining Green is more urgent than a stable Yellow
3. **Combine signals**: Use all three scripts together for a complete customer picture
4. **Calibrate thresholds**: Adjust segment benchmarks based on your product and industry
5. **Document interventions**: Track what actions you took and outcomes for playbook refinement
6. **Prepare with data**: Run scripts before every QBR and executive meeting
---
## Limitations
- **No real-time data**: Scripts analyze point-in-time snapshots from JSON input files
- **No CRM integration**: Data must be exported manually from your CRM/CS platform
- **Deterministic only**: No predictive ML -- scoring is algorithmic based on weighted signals
- **Threshold tuning**: Default thresholds are industry-standard but may need calibration for your business
- **Revenue estimates**: Expansion revenue estimates are approximations based on usage patterns
---
**Last Updated:** February 2026
**Tools:** 3 Python CLI tools
**Dependencies:** Python 3.7+ standard library only


@@ -0,0 +1,209 @@
# Executive Business Review
**Customer:** [Customer Name]
**Date:** [Review Date]
**Prepared for:** [Executive Name, Title]
**Prepared by:** [CSM Name] | [VP Customer Success Name]
**Classification:** [Strategic / Enterprise / Key Account]
---
## 1. Partnership Summary
| Metric | Value |
|--------|-------|
| Partnership Duration | [X months/years] |
| Current ARR | $[Amount] |
| Lifetime Value to Date | $[Amount] |
| Current Plan | [Tier] |
| Licensed Seats | [Number] |
| Active Seats | [Number] |
| Health Score | [Score]/100 ([Green/Yellow/Red]) |
| NPS Score | [Score] |
| Renewal Date | [Date] ([X] days remaining) |
---
## 2. Strategic Alignment
### Customer's Business Priorities (This Year)
1. **[Priority 1]** -- [How our solution supports this]
2. **[Priority 2]** -- [How our solution supports this]
3. **[Priority 3]** -- [How our solution supports this]
### Alignment Assessment
| Business Priority | Our Contribution | Alignment Score |
|-------------------|-----------------|----------------|
| [Priority 1] | [Specific contribution] | [Strong / Moderate / Weak] |
| [Priority 2] | [Specific contribution] | [Strong / Moderate / Weak] |
| [Priority 3] | [Specific contribution] | [Strong / Moderate / Weak] |
---
## 3. Value Delivered
### Quantified Business Impact
| Outcome | Metric | Before | After | Business Value |
|---------|--------|--------|-------|---------------|
| [e.g., Operational efficiency] | [Hours saved/week] | [Baseline] | [Current] | $[Estimated value] |
| [e.g., Revenue acceleration] | [Deal velocity] | [Baseline] | [Current] | $[Estimated value] |
| [e.g., Risk reduction] | [Error rate] | [Baseline] | [Current] | $[Estimated value] |
**Total Estimated Business Value:** $[Amount]
**ROI:** [X]x return on investment
### Key Achievements This Period
1. [Achievement 1 with measurable outcome]
2. [Achievement 2 with measurable outcome]
3. [Achievement 3 with measurable outcome]
---
## 4. Adoption and Engagement Scorecard
### Platform Utilization
| Module | Adoption Status | Usage Depth | Benchmark | Assessment |
|--------|---------------|-------------|-----------|------------|
| [Module 1] | Fully Adopted | [High/Med/Low] | [Benchmark] | [Above/At/Below] |
| [Module 2] | Partially Adopted | [High/Med/Low] | [Benchmark] | [Above/At/Below] |
| [Module 3] | Not Adopted | -- | -- | Opportunity |
### Engagement Health
| Indicator | Current | Previous Period | Trend |
|-----------|---------|----------------|-------|
| Executive Engagement | [Score] | [Score] | [Up/Down/Stable] |
| Stakeholder Breadth | [# contacts] | [# contacts] | [Up/Down/Stable] |
| Meeting Participation | [%] | [%] | [Up/Down/Stable] |
| Feature Request Activity | [Count] | [Count] | [Up/Down/Stable] |
---
## 5. Account Health Overview
### Health Score Trend (Last 4 Quarters)
| Quarter | Overall | Usage | Engagement | Support | Relationship |
|---------|---------|-------|------------|---------|-------------|
| [Q-3] | [Score] | [Score] | [Score] | [Score] | [Score] |
| [Q-2] | [Score] | [Score] | [Score] | [Score] | [Score] |
| [Q-1] | [Score] | [Score] | [Score] | [Score] | [Score] |
| Current | [Score] | [Score] | [Score] | [Score] | [Score] |
### Risk Assessment
| Risk Factor | Level | Details | Mitigation |
|------------|-------|---------|-----------|
| [Risk 1] | [High/Med/Low] | [Description] | [Action] |
| [Risk 2] | [High/Med/Low] | [Description] | [Action] |
---
## 6. Support and Service Quality
| Metric | This Period | SLA Target | Status |
|--------|------------|-----------|--------|
| Total Tickets | [Number] | -- | |
| Avg First Response | [Hours] | [Hours] | [Met / Not Met] |
| Avg Resolution Time | [Hours] | [Hours] | [Met / Not Met] |
| Escalations | [Number] | 0 | |
| CSAT Score | [Score] | [Target] | [Above / Below] |
| Critical Issues | [Number] | 0 | |
### Notable Support Interactions
- [Summary of any significant support events and resolution]
---
## 7. Product Roadmap Alignment
### Features Delivered (Relevant to This Customer)
| Feature | Release Date | Customer Impact |
|---------|-------------|----------------|
| [Feature 1] | [Date] | [How it helps them] |
| [Feature 2] | [Date] | [How it helps them] |
### Upcoming Features (Customer-Relevant)
| Feature | Expected Release | Expected Impact |
|---------|-----------------|----------------|
| [Feature 1] | [Quarter] | [Business value] |
| [Feature 2] | [Quarter] | [Business value] |
### Customer Feature Requests
| Request | Priority | Status | Business Case |
|---------|----------|--------|--------------|
| [Request 1] | [P1/P2/P3] | [Status] | [Why it matters] |
| [Request 2] | [P1/P2/P3] | [Status] | [Why it matters] |
---
## 8. Growth and Expansion Opportunity
### Current Whitespace Analysis
| Opportunity | Type | Est. Revenue | Effort | Priority |
|------------|------|-------------|--------|----------|
| [Opportunity 1] | [Upsell/Cross-sell/Expansion] | $[Amount] | [Low/Med/High] | [1-5] |
| [Opportunity 2] | [Upsell/Cross-sell/Expansion] | $[Amount] | [Low/Med/High] | [1-5] |
| [Opportunity 3] | [Upsell/Cross-sell/Expansion] | $[Amount] | [Low/Med/High] | [1-5] |
**Total Expansion Opportunity:** $[Amount]
### Recommended Next Steps for Growth
1. [Specific expansion recommendation with business justification]
2. [Specific expansion recommendation with business justification]
---
## 9. Renewal Outlook
| Factor | Assessment |
|--------|-----------|
| Overall Renewal Confidence | [High / Medium / Low] |
| Budget Availability | [Confirmed / Expected / Uncertain] |
| Sponsor Support | [Strong / Moderate / Weak] |
| Competitive Threat | [None / Low / Medium / High] |
| Value Perception | [Strong / Moderate / Weak] |
| Contract Satisfaction | [Satisfied / Neutral / Concerned] |
### Renewal Strategy
[2-3 sentences on the approach for securing renewal, including any specific actions needed]
---
## 10. Executive-Level Action Items
| Action | Owner | Due Date | Priority | Impact |
|--------|-------|----------|----------|--------|
| [Action 1] | [Name, Title] | [Date] | [Critical/High/Med] | [Expected outcome] |
| [Action 2] | [Name, Title] | [Date] | [Critical/High/Med] | [Expected outcome] |
| [Action 3] | [Name, Title] | [Date] | [Critical/High/Med] | [Expected outcome] |
---
## Appendix
### Stakeholder Map
| Name | Title | Influence | Sentiment | Last Contact |
|------|-------|-----------|-----------|-------------|
| [Name] | [Title] | [Decision Maker / Influencer / User] | [Positive / Neutral / Negative] | [Date] |
| [Name] | [Title] | [Decision Maker / Influencer / User] | [Positive / Neutral / Negative] | [Date] |
### Competitive Landscape (If Applicable)
- **Known competitors in evaluation:** [List]
- **Our differentiators:** [Key strengths vs. competition]
- **Risk mitigation:** [Actions to defend position]
---
**Confidential -- For Internal and Customer Executive Use Only**
**Next Executive Review:** [Date]


@@ -0,0 +1,170 @@
{
"report": "customer_health_scores",
"summary": {
"total_customers": 4,
"average_score": 78.8,
"green_count": 3,
"yellow_count": 1,
"red_count": 0
},
"customers": [
{
"customer_id": "CUST-001",
"name": "Acme Corp",
"segment": "enterprise",
"arr": 120000,
"overall_score": 86.2,
"classification": "green",
"dimensions": {
"usage": {
"score": 91.6,
"weight": "30%",
"classification": "green"
},
"engagement": {
"score": 82.0,
"weight": "25%",
"classification": "green"
},
"support": {
"score": 78.5,
"weight": "20%",
"classification": "green"
},
"relationship": {
"score": 90.1,
"weight": "25%",
"classification": "green"
}
},
"trends": {
"usage": "improving",
"engagement": "improving",
"support": "stable",
"relationship": "improving",
"overall": "improving"
},
"recommendations": []
},
{
"customer_id": "CUST-002",
"name": "TechStart Inc",
"segment": "smb",
"arr": 18000,
"overall_score": 53.7,
"classification": "yellow",
"dimensions": {
"usage": {
"score": 52.5,
"weight": "30%",
"classification": "yellow"
},
"engagement": {
"score": 61.6,
"weight": "25%",
"classification": "yellow"
},
"support": {
"score": 63.2,
"weight": "20%",
"classification": "yellow"
},
"relationship": {
"score": 39.5,
"weight": "25%",
"classification": "red"
}
},
"trends": {
"usage": "stable",
"engagement": "improving",
"support": "stable",
"relationship": "declining",
"overall": "stable"
},
"recommendations": [
"Login frequency below target -- schedule product engagement session",
"NPS below threshold -- conduct a feedback deep-dive with customer",
"CSAT is critically low -- escalate to support leadership",
"Single-threaded relationship -- expand contacts across departments",
"Renewal sentiment is negative -- initiate save plan immediately"
]
},
{
"customer_id": "CUST-003",
"name": "GlobalTrade Solutions",
"segment": "mid-market",
"arr": 55000,
"overall_score": 79.7,
"classification": "green",
"dimensions": {
"usage": {
"score": 85.6,
"weight": "30%",
"classification": "green"
},
"engagement": {
"score": 79.6,
"weight": "25%",
"classification": "green"
},
"support": {
"score": 72.0,
"weight": "20%",
"classification": "green"
},
"relationship": {
"score": 79.0,
"weight": "25%",
"classification": "green"
}
},
"trends": {
"usage": "improving",
"engagement": "improving",
"support": "improving",
"relationship": "improving",
"overall": "improving"
},
"recommendations": []
},
{
"customer_id": "CUST-004",
"name": "HealthFirst Medical",
"segment": "enterprise",
"arr": 200000,
"overall_score": 95.7,
"classification": "green",
"dimensions": {
"usage": {
"score": 100.0,
"weight": "30%",
"classification": "green"
},
"engagement": {
"score": 92.0,
"weight": "25%",
"classification": "green"
},
"support": {
"score": 88.7,
"weight": "20%",
"classification": "green"
},
"relationship": {
"score": 100.0,
"weight": "25%",
"classification": "green"
}
},
"trends": {
"usage": "improving",
"engagement": "improving",
"support": "stable",
"relationship": "improving",
"overall": "improving"
},
"recommendations": []
}
]
}
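The `overall_score` values in the sample output above are consistent with a weighted average of the four dimension scores using the weights shown (usage 30%, engagement 25%, support 20%, relationship 25%). A minimal sketch of that calculation, assuming those weights (the shipped scoring tool may differ in detail):

```python
# Dimension weights as shown in the sample output above.
WEIGHTS = {"usage": 0.30, "engagement": 0.25, "support": 0.20, "relationship": 0.25}

def overall_score(dimensions):
    """Weighted average of the four dimension scores, rounded to one decimal."""
    return round(sum(dimensions[name] * weight for name, weight in WEIGHTS.items()), 1)

# CUST-002 from the sample output
print(overall_score({"usage": 52.5, "engagement": 61.6, "support": 63.2, "relationship": 39.5}))  # 53.7
```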


@@ -0,0 +1,215 @@
# Customer Onboarding Checklist (90-Day)
**Customer:** [Customer Name]
**Segment:** [Enterprise / Mid-Market / SMB]
**CSM:** [CSM Name]
**Kickoff Date:** [Date]
**Target Go-Live:** [Date]
**Target First Value Date:** [Date -- must be within 30 days]
---
## Phase 1: Welcome and Setup (Days 1-14)
### Pre-Kickoff Preparation (Day 0)
- [ ] Review signed contract and SOW for scope and commitments
- [ ] Research customer's industry, business model, and competitive landscape
- [ ] Review handoff notes from sales team (pain points, decision drivers, stakeholders)
- [ ] Prepare welcome package (login credentials, documentation links, support contacts)
- [ ] Create customer workspace in CS platform
- [ ] Schedule kickoff meeting with all required attendees
- [ ] Prepare kickoff deck with agenda and success plan draft
### Kickoff Meeting (Day 1-2)
- [ ] Conduct kickoff meeting with customer stakeholders
- [ ] Confirm business objectives and success criteria
- [ ] Identify key stakeholders and their roles (sponsor, champion, technical lead, users)
- [ ] Align on communication cadence and preferred channels
- [ ] Review onboarding timeline and milestones
- [ ] Set expectations for time commitment from customer team
- [ ] Share and agree on success plan (mutual accountability)
- [ ] Schedule recurring check-in meetings
**Kickoff Meeting Notes:**
> [Document key takeaways, concerns raised, decisions made]
### Technical Setup (Days 3-7)
- [ ] Provision customer environment (tenant, workspace, permissions)
- [ ] Configure SSO/authentication if applicable
- [ ] Set up integrations with customer's existing tools
- [ ] Import or migrate existing data (if applicable)
- [ ] Validate data integrity post-migration
- [ ] Configure role-based access and permissions
- [ ] Set up monitoring and alerting
**Technical Setup Owner:** [SE / Implementation team name]
**Technical Setup Notes:**
> [Document configuration decisions, customizations, issues]
### Admin Training (Days 7-10)
- [ ] Deliver admin training session (system configuration, user management)
- [ ] Provide admin documentation and quick reference guide
- [ ] Ensure admins can independently manage basic operations
- [ ] Set up admin support escalation path
### Initial User Training (Days 10-14)
- [ ] Deliver core user training (session 1: basic navigation and key workflows)
- [ ] Provide user quickstart guide and video resources
- [ ] Set up user support channel (Slack, email, in-app chat)
- [ ] Confirm all target users have active accounts
- [ ] Track initial login completion rate
**Training Completion Rate:** [___%] of target users
---
## Phase 2: Activation (Days 15-30)
### User Activation (Days 15-20)
- [ ] Monitor daily active user metrics
- [ ] Follow up with users who have not logged in
- [ ] Conduct follow-up training for users needing additional help
- [ ] Address any usability issues or confusion reported
- [ ] Validate that core workflows are functioning as expected
- [ ] Collect early feedback from champion and key users
**Activation Rate:** [___%] of licensed users active
### First Value Milestone (Days 20-30)
- [ ] Define and track first value milestone (specific to customer objectives)
- [ ] Verify customer has completed their first meaningful workflow
- [ ] Document value delivered (even if small -- establish the pattern)
- [ ] Share "first win" with executive sponsor
- [ ] Celebrate the milestone with the customer team
**First Value Milestone:** [Describe the specific milestone]
**Date Achieved:** [Date]
### 30-Day Review (Day 28-30)
- [ ] Conduct 30-day review meeting with customer
- [ ] Review activation metrics (logins, usage, adoption)
- [ ] Assess progress against success plan milestones
- [ ] Identify any blockers or concerns
- [ ] Adjust onboarding plan if needed
- [ ] Confirm transition from setup phase to adoption phase
- [ ] Set goals for days 31-60
**30-Day Health Score:** [Score]/100 -- [Green/Yellow/Red]
---
## Phase 3: Adoption (Days 31-60)
### Feature Expansion (Days 31-45)
- [ ] Introduce additional features beyond core workflows
- [ ] Deliver advanced training session (session 2: power features)
- [ ] Enable at least one integration with customer's existing tools
- [ ] Identify and address feature adoption gaps
- [ ] Share best practices from similar customers
### Usage Benchmarking (Days 45-55)
- [ ] Compare customer's usage against segment benchmarks
- [ ] Identify underperforming areas and create enablement plan
- [ ] Share usage report with customer champion
- [ ] Discuss usage targets for the next 30 days
**Current vs. Benchmark:**
| Metric | Current | Benchmark | Gap |
|--------|---------|-----------|-----|
| Feature Adoption | [%] | [%] | [+/-] |
| Daily Active Users | [#] | [#] | [+/-] |
| Key Workflow Completion | [%] | [%] | [+/-] |
### 60-Day Check-in (Day 55-60)
- [ ] Conduct 60-day check-in meeting
- [ ] Review adoption metrics and progress
- [ ] Discuss any roadblocks to deeper adoption
- [ ] Begin identifying advanced use cases
- [ ] Set goals for days 61-90
---
## Phase 4: Optimisation (Days 61-90)
### Advanced Use Cases (Days 61-75)
- [ ] Conduct use case discovery workshop with customer
- [ ] Identify 2-3 advanced use cases beyond initial scope
- [ ] Build implementation plan for advanced use cases
- [ ] Begin pilot of advanced use cases with power users
### ROI Measurement (Days 75-85)
- [ ] Collect data for ROI measurement against baseline
- [ ] Build ROI summary document
- [ ] Share ROI results with executive sponsor
- [ ] Document customer testimonial or case study opportunity (if willing)
**ROI Summary:**
| Metric | Baseline | Current | Improvement |
|--------|----------|---------|-------------|
| [Metric 1] | [Value] | [Value] | [% change] |
| [Metric 2] | [Value] | [Value] | [% change] |
### 90-Day Executive Review (Days 85-90)
- [ ] Prepare 90-day executive review presentation
- [ ] Include: value delivered, adoption metrics, ROI, next steps
- [ ] Conduct review meeting with executive sponsor
- [ ] Transition from onboarding to ongoing success management
- [ ] Establish ongoing success plan with quarterly milestones
- [ ] Confirm ongoing meeting cadence
- [ ] Introduce expansion opportunities if appropriate
**90-Day Health Score:** [Score]/100 -- [Green/Yellow/Red]
---
## Onboarding Completion Gate
The following criteria must be met to consider onboarding complete:
- [ ] User activation rate above 80%
- [ ] First value milestone achieved within 30 days
- [ ] Core workflows actively used by target users
- [ ] Executive sponsor confirms satisfaction
- [ ] Health score is Yellow (50+) or better
- [ ] Success plan established with ongoing milestones
- [ ] Recurring meeting cadence confirmed
- [ ] Support escalation path understood by customer
**Onboarding Status:** [Complete / In Progress / Blocked]
**Completion Date:** [Date]
**Handoff to Steady-State CSM:** [Date if different CSM]
---
## Notes
### Risks and Blockers
| Risk/Blocker | Impact | Mitigation | Status |
|-------------|--------|-----------|--------|
| [Item] | [High/Med/Low] | [Action] | [Open/Resolved] |
### Key Decisions
| Date | Decision | Made By | Impact |
|------|----------|---------|--------|
| [Date] | [Decision] | [Name] | [Description] |
---
**Template Version:** 1.0
**Last Updated:** February 2026


@@ -0,0 +1,163 @@
# Quarterly Business Review (QBR)
**Customer:** [Customer Name]
**Date:** [QBR Date]
**Prepared by:** [CSM Name]
**Attendees:** [List attendees and titles]
---
## 1. Executive Summary
**Overall Relationship Status:** [Green / Yellow / Red]
**Health Score:** [Score]/100
**Key Theme:** [One sentence summarizing the quarter]
### Quarter Highlights
- [Highlight 1: major achievement or milestone]
- [Highlight 2: value delivered]
- [Highlight 3: initiative completed]
### Areas of Focus
- [Focus area 1]
- [Focus area 2]
---
## 2. Value Delivered This Quarter
### Business Outcomes Achieved
| Objective | Target | Actual | Status |
|-----------|--------|--------|--------|
| [Objective 1] | [Target metric] | [Actual metric] | [On Track / At Risk / Achieved] |
| [Objective 2] | [Target metric] | [Actual metric] | [On Track / At Risk / Achieved] |
| [Objective 3] | [Target metric] | [Actual metric] | [On Track / At Risk / Achieved] |
### ROI Summary
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| [Metric 1, e.g., Time savings] | [Baseline] | [Current] | [% change] |
| [Metric 2, e.g., Cost reduction] | [Baseline] | [Current] | [% change] |
| [Metric 3, e.g., Revenue impact] | [Baseline] | [Current] | [% change] |
**Estimated Total Value Delivered:** $[Amount]
---
## 3. Product Usage and Adoption
### Usage Metrics
| Metric | Last Quarter | This Quarter | Trend |
|--------|-------------|--------------|-------|
| Monthly Active Users | [Number] | [Number] | [Up/Down/Stable] |
| Feature Adoption Rate | [%] | [%] | [Up/Down/Stable] |
| DAU/MAU Ratio | [Ratio] | [Ratio] | [Up/Down/Stable] |
| Seat Utilization | [%] | [%] | [Up/Down/Stable] |
### Feature Adoption Breakdown
| Feature/Module | Status | Usage Level | Notes |
|---------------|--------|-------------|-------|
| [Feature 1] | Active | [High/Med/Low] | |
| [Feature 2] | Active | [High/Med/Low] | |
| [Feature 3] | Not Adopted | -- | [Reason / Opportunity] |
### Adoption Recommendations
1. [Recommendation for increasing adoption of underused features]
2. [Recommendation for enabling new use cases]
---
## 4. Support Summary
| Metric | This Quarter | Previous Quarter | Benchmark |
|--------|-------------|-----------------|-----------|
| Total Tickets | [Number] | [Number] | [Segment avg] |
| Avg Resolution Time | [Hours] | [Hours] | [SLA target] |
| Escalations | [Number] | [Number] | [Target: 0] |
| CSAT Score | [Score] | [Score] | [Target] |
### Open Issues
| Issue | Priority | Status | ETA |
|-------|----------|--------|-----|
| [Issue 1] | [P1/P2/P3] | [In Progress / Pending] | [Date] |
---
## 5. Success Plan Progress
### Current Success Plan Goals
| Goal | Timeline | Progress | Status |
|------|----------|----------|--------|
| [Goal 1] | [Date] | [%] | [On Track / At Risk / Complete] |
| [Goal 2] | [Date] | [%] | [On Track / At Risk / Complete] |
| [Goal 3] | [Date] | [%] | [On Track / At Risk / Complete] |
### Next Quarter Goals (Proposed)
1. [Goal 1 with specific measurable outcome]
2. [Goal 2 with specific measurable outcome]
3. [Goal 3 with specific measurable outcome]
---
## 6. Product Roadmap Highlights
### Recently Released (Relevant to [Customer Name])
- [Feature/enhancement 1] -- [How it benefits them]
- [Feature/enhancement 2] -- [How it benefits them]
### Coming Next Quarter
- [Upcoming feature 1] -- [Expected benefit]
- [Upcoming feature 2] -- [Expected benefit]
### Feature Requests Status
| Request | Priority | Status | Expected Release |
|---------|----------|--------|-----------------|
| [Request 1] | [High/Med/Low] | [Planned / In Development / Under Review] | [Quarter] |
---
## 7. Growth Opportunities
### Expansion Discussion Points
- [Opportunity 1: e.g., additional seats for new team]
- [Opportunity 2: e.g., new module that addresses identified need]
- [Opportunity 3: e.g., tier upgrade for advanced capabilities]
**Estimated Value of Expansion:** $[Amount] additional ARR
---
## 8. Action Items
| Action | Owner | Due Date | Priority |
|--------|-------|----------|----------|
| [Action 1] | [Name] | [Date] | [High/Med/Low] |
| [Action 2] | [Name] | [Date] | [High/Med/Low] |
| [Action 3] | [Name] | [Date] | [High/Med/Low] |
| [Action 4] | [Name] | [Date] | [High/Med/Low] |
---
## 9. Contract and Renewal
**Contract Start:** [Date]
**Renewal Date:** [Date]
**Current ARR:** $[Amount]
**Days to Renewal:** [Number]
### Renewal Readiness
- [ ] Value documented and communicated
- [ ] Executive sponsor aligned
- [ ] Open issues resolved or plan in place
- [ ] Pricing and terms discussed
- [ ] Expansion proposal prepared (if applicable)
---
**Next QBR Date:** [Date]
**Next Check-in:** [Date]


@@ -0,0 +1,314 @@
{
"customers": [
{
"customer_id": "CUST-001",
"name": "Acme Corp",
"segment": "enterprise",
"arr": 120000,
"contract_end_date": "2026-12-31",
"usage": {
"login_frequency": 85,
"feature_adoption": 72,
"dau_mau_ratio": 0.45
},
"engagement": {
"support_ticket_volume": 3,
"meeting_attendance": 90,
"nps_score": 8,
"csat_score": 4.2
},
"support": {
"open_tickets": 2,
"escalation_rate": 0.05,
"avg_resolution_hours": 18
},
"relationship": {
"executive_sponsor_engagement": 80,
"multi_threading_depth": 4,
"renewal_sentiment": "positive"
},
"previous_period": {
"usage_score": 70,
"engagement_score": 65,
"support_score": 75,
"relationship_score": 60,
"overall_score": 67
},
"usage_decline": {
"login_trend": 5,
"feature_adoption_change": 3,
"dau_mau_change": 0.02
},
"engagement_drop": {
"meeting_cancellations": 0,
"response_time_days": 1,
"nps_change": 1
},
"support_issues": {
"open_escalations": 0,
"unresolved_critical": 0,
"satisfaction_trend": "improving"
},
"relationship_signals": {
"champion_left": false,
"sponsor_change": false,
"competitor_mentions": 0
},
"commercial_factors": {
"contract_type": "annual",
"pricing_complaints": false,
"budget_cuts_mentioned": false
},
"contract": {
"licensed_seats": 100,
"active_seats": 95,
"plan_tier": "professional",
"available_tiers": ["professional", "enterprise", "enterprise_plus"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 85},
"analytics_module": {"adopted": true, "usage_pct": 60},
"integrations_module": {"adopted": false, "usage_pct": 0},
"api_access": {"adopted": true, "usage_pct": 40},
"advanced_reporting": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["engineering", "product"],
"potential": ["marketing", "sales", "support"]
}
},
{
"customer_id": "CUST-002",
"name": "TechStart Inc",
"segment": "smb",
"arr": 18000,
"contract_end_date": "2026-04-15",
"usage": {
"login_frequency": 40,
"feature_adoption": 30,
"dau_mau_ratio": 0.15
},
"engagement": {
"support_ticket_volume": 8,
"meeting_attendance": 50,
"nps_score": 5,
"csat_score": 3.0
},
"support": {
"open_tickets": 6,
"escalation_rate": 0.18,
"avg_resolution_hours": 42
},
"relationship": {
"executive_sponsor_engagement": 30,
"multi_threading_depth": 1,
"renewal_sentiment": "negative"
},
"previous_period": {
"usage_score": 55,
"engagement_score": 50,
"support_score": 60,
"relationship_score": 45,
"overall_score": 52
},
"usage_decline": {
"login_trend": -25,
"feature_adoption_change": -18,
"dau_mau_change": -0.12
},
"engagement_drop": {
"meeting_cancellations": 3,
"response_time_days": 8,
"nps_change": -4
},
"support_issues": {
"open_escalations": 2,
"unresolved_critical": 1,
"satisfaction_trend": "declining"
},
"relationship_signals": {
"champion_left": true,
"sponsor_change": false,
"competitor_mentions": 3
},
"commercial_factors": {
"contract_type": "month-to-month",
"pricing_complaints": true,
"budget_cuts_mentioned": true
},
"contract": {
"licensed_seats": 20,
"active_seats": 8,
"plan_tier": "starter",
"available_tiers": ["starter", "professional", "enterprise"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 35},
"analytics_module": {"adopted": false, "usage_pct": 0},
"integrations_module": {"adopted": false, "usage_pct": 0},
"api_access": {"adopted": false, "usage_pct": 0},
"advanced_reporting": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["engineering"],
"potential": ["product", "design"]
}
},
{
"customer_id": "CUST-003",
"name": "GlobalTrade Solutions",
"segment": "mid-market",
"arr": 55000,
"contract_end_date": "2026-09-30",
"usage": {
"login_frequency": 70,
"feature_adoption": 58,
"dau_mau_ratio": 0.35
},
"engagement": {
"support_ticket_volume": 5,
"meeting_attendance": 75,
"nps_score": 7,
"csat_score": 3.8
},
"support": {
"open_tickets": 3,
"escalation_rate": 0.10,
"avg_resolution_hours": 30
},
"relationship": {
"executive_sponsor_engagement": 60,
"multi_threading_depth": 3,
"renewal_sentiment": "neutral"
},
"previous_period": {
"usage_score": 68,
"engagement_score": 70,
"support_score": 65,
"relationship_score": 62,
"overall_score": 66
},
"usage_decline": {
"login_trend": -8,
"feature_adoption_change": -5,
"dau_mau_change": -0.03
},
"engagement_drop": {
"meeting_cancellations": 1,
"response_time_days": 3,
"nps_change": -1
},
"support_issues": {
"open_escalations": 1,
"unresolved_critical": 0,
"satisfaction_trend": "stable"
},
"relationship_signals": {
"champion_left": false,
"sponsor_change": true,
"competitor_mentions": 1
},
"commercial_factors": {
"contract_type": "annual",
"pricing_complaints": false,
"budget_cuts_mentioned": false
},
"contract": {
"licensed_seats": 50,
"active_seats": 48,
"plan_tier": "professional",
"available_tiers": ["professional", "enterprise", "enterprise_plus"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 78},
"analytics_module": {"adopted": true, "usage_pct": 45},
"integrations_module": {"adopted": true, "usage_pct": 55},
"api_access": {"adopted": false, "usage_pct": 0},
"advanced_reporting": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["operations", "finance"],
"potential": ["logistics", "compliance"]
}
},
{
"customer_id": "CUST-004",
"name": "HealthFirst Medical",
"segment": "enterprise",
"arr": 200000,
"contract_end_date": "2027-03-15",
"usage": {
"login_frequency": 92,
"feature_adoption": 88,
"dau_mau_ratio": 0.55
},
"engagement": {
"support_ticket_volume": 2,
"meeting_attendance": 95,
"nps_score": 9,
"csat_score": 4.6
},
"support": {
"open_tickets": 1,
"escalation_rate": 0.02,
"avg_resolution_hours": 12
},
"relationship": {
"executive_sponsor_engagement": 92,
"multi_threading_depth": 6,
"renewal_sentiment": "positive"
},
"previous_period": {
"usage_score": 85,
"engagement_score": 82,
"support_score": 88,
"relationship_score": 80,
"overall_score": 84
},
"usage_decline": {
"login_trend": 3,
"feature_adoption_change": 5,
"dau_mau_change": 0.03
},
"engagement_drop": {
"meeting_cancellations": 0,
"response_time_days": 1,
"nps_change": 0
},
"support_issues": {
"open_escalations": 0,
"unresolved_critical": 0,
"satisfaction_trend": "improving"
},
"relationship_signals": {
"champion_left": false,
"sponsor_change": false,
"competitor_mentions": 0
},
"commercial_factors": {
"contract_type": "multi-year",
"pricing_complaints": false,
"budget_cuts_mentioned": false
},
"contract": {
"licensed_seats": 250,
"active_seats": 240,
"plan_tier": "enterprise",
"available_tiers": ["professional", "enterprise", "enterprise_plus"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 92},
"analytics_module": {"adopted": true, "usage_pct": 80},
"integrations_module": {"adopted": true, "usage_pct": 70},
"api_access": {"adopted": true, "usage_pct": 65},
"advanced_reporting": {"adopted": true, "usage_pct": 50},
"security_module": {"adopted": false, "usage_pct": 0},
"audit_module": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["clinical", "operations", "IT", "compliance"],
"potential": ["research", "finance", "HR"]
}
}
]
}


@@ -0,0 +1,167 @@
# Customer Success Plan
**Customer:** [Customer Name]
**CSM:** [CSM Name]
**Account Executive:** [AE Name]
**Plan Created:** [Date]
**Last Updated:** [Date]
**Review Cadence:** [Monthly / Quarterly]
---
## 1. Customer Overview
| Field | Details |
|-------|---------|
| Industry | [Industry] |
| Company Size | [Employees] |
| Segment | [Enterprise / Mid-Market / SMB] |
| ARR | $[Amount] |
| Contract Start | [Date] |
| Renewal Date | [Date] |
| Plan Tier | [Tier name] |
| Licensed Seats | [Number] |
### Key Stakeholders
| Name | Title | Role | Engagement Level |
|------|-------|------|-----------------|
| [Name] | [Title] | Executive Sponsor | [High / Medium / Low] |
| [Name] | [Title] | Day-to-Day Champion | [High / Medium / Low] |
| [Name] | [Title] | Technical Lead | [High / Medium / Low] |
| [Name] | [Title] | End User Lead | [High / Medium / Low] |
---
## 2. Business Objectives
### Primary Business Objectives
| # | Objective | Success Metric | Target | Timeline |
|---|-----------|---------------|--------|----------|
| 1 | [e.g., Reduce manual reporting time] | [Hours saved per week] | [Target number] | [Date] |
| 2 | [e.g., Improve team collaboration] | [Project completion rate] | [Target %] | [Date] |
| 3 | [e.g., Increase revenue visibility] | [Forecast accuracy] | [Target %] | [Date] |
### Why These Objectives Matter
- **Objective 1:** [Business context -- why this matters to the customer's overall strategy]
- **Objective 2:** [Business context]
- **Objective 3:** [Business context]
---
## 3. Success Milestones
### Phase 1: Foundation (Days 1-30)
| Milestone | Target Date | Status | Owner | Notes |
|-----------|------------|--------|-------|-------|
| Technical setup complete | [Date] | [ ] | [Name] | |
| Admin training delivered | [Date] | [ ] | CSM | |
| Core team onboarded | [Date] | [ ] | CSM | |
| First value milestone achieved | [Date] | [ ] | [Name] | |
| Data migration validated | [Date] | [ ] | SE | |
### Phase 2: Adoption (Days 31-90)
| Milestone | Target Date | Status | Owner | Notes |
|-----------|------------|--------|-------|-------|
| 80% user adoption | [Date] | [ ] | CSM | |
| Key workflows live | [Date] | [ ] | [Name] | |
| Integrations configured | [Date] | [ ] | SE | |
| First ROI measurement | [Date] | [ ] | CSM | |
| 30-day review complete | [Date] | [ ] | CSM | |
### Phase 3: Value Realisation (Days 91-180)
| Milestone | Target Date | Status | Owner | Notes |
|-----------|------------|--------|-------|-------|
| Objective 1 progress measurable | [Date] | [ ] | [Name] | |
| Advanced features adopted | [Date] | [ ] | CSM | |
| QBR completed | [Date] | [ ] | CSM | |
| Executive alignment confirmed | [Date] | [ ] | CSM | |
### Phase 4: Optimisation and Growth (Days 181-365)
| Milestone | Target Date | Status | Owner | Notes |
|-----------|------------|--------|-------|-------|
| All objectives on track | [Date] | [ ] | CSM | |
| ROI documented for renewal | [Date] | [ ] | CSM | |
| Expansion opportunities identified | [Date] | [ ] | CSM + AE | |
| Renewal conversation initiated | [Date] | [ ] | CSM + AE | |
---
## 4. Health Score Tracking
| Date | Overall Score | Usage | Engagement | Support | Relationship | Classification |
|------|--------------|-------|------------|---------|-------------|---------------|
| [Date] | [Score] | [Score] | [Score] | [Score] | [Score] | [Green/Yellow/Red] |
| [Date] | [Score] | [Score] | [Score] | [Score] | [Score] | [Green/Yellow/Red] |
---
## 5. Risk Register
| Risk | Probability | Impact | Mitigation | Owner | Status |
|------|------------|--------|-----------|-------|--------|
| [e.g., Executive sponsor departure] | [High/Med/Low] | [High/Med/Low] | [Multi-thread relationships] | CSM | [Active/Resolved] |
| [e.g., Low adoption in team X] | [High/Med/Low] | [High/Med/Low] | [Targeted training session] | CSM | [Active/Resolved] |
| [e.g., Budget review next quarter] | [High/Med/Low] | [High/Med/Low] | [Document ROI before review] | CSM | [Active/Resolved] |
---
## 6. Communication Plan
| Activity | Frequency | Participants | Purpose |
|----------|-----------|-------------|---------|
| Status check-in | [Weekly / Bi-weekly] | CSM + Champion | Tactical progress review |
| Strategic review | [Monthly] | CSM + Stakeholders | Objective alignment |
| QBR | [Quarterly] | CSM + Executive Sponsor | Executive business review |
| Technical review | [As needed] | SE + Technical Lead | Architecture and integration |
| Renewal planning | [90 days before] | CSM + AE + Sponsor | Contract discussion |
---
## 7. Product Adoption Plan
### Current State
| Module/Feature | Status | Usage Level | Target Usage | Gap |
|---------------|--------|-------------|-------------|-----|
| [Module 1] | Adopted | [%] | [%] | [Actions needed] |
| [Module 2] | Adopted | [%] | [%] | [Actions needed] |
| [Module 3] | Not Adopted | 0% | [%] | [Enablement plan] |
### Enablement Activities
| Activity | Target Date | Audience | Expected Outcome |
|----------|------------|----------|-----------------|
| [Training session] | [Date] | [Team/Group] | [Metric improvement] |
| [Workshop] | [Date] | [Team/Group] | [New workflow adoption] |
| [Office hours] | [Ongoing] | [All users] | [Question resolution] |
---
## 8. Expansion Roadmap
| Opportunity | Type | Estimated Value | Timeline | Prerequisites |
|------------|------|----------------|----------|--------------|
| [e.g., Additional seats] | Expansion | $[Amount] | [Quarter] | [Usage > 90%] |
| [e.g., Tier upgrade] | Upsell | $[Amount] | [Quarter] | [Feature requests] |
| [e.g., New module] | Cross-sell | $[Amount] | [Quarter] | [Use case validated] |
---
## 9. Notes and Updates
### [Date] - [Author]
[Update notes, key decisions, changes to plan]
### [Date] - [Author]
[Update notes, key decisions, changes to plan]
---
**Next Review Date:** [Date]
**Plan Owner:** [CSM Name]


@@ -0,0 +1,259 @@
# Customer Success Metrics and Benchmarks
Industry benchmarks for key customer success metrics, segmented by company size, customer segment, and industry vertical.
---
## Core SaaS Metrics
### Net Revenue Retention (NRR)
NRR measures revenue retained from existing customers including expansion, contraction, and churn. It is the single most important metric for SaaS customer success.
**Formula:** (Starting ARR + Expansion - Contraction - Churn) / Starting ARR * 100
| Performance Level | NRR Range | Interpretation |
|-------------------|-----------|----------------|
| Best-in-class | > 130% | Strong expansion engine, very low churn |
| Excellent | 120-130% | Healthy growth from existing customers |
| Good | 110-120% | Solid retention with moderate expansion |
| Target | > 110% | Minimum for sustainable growth |
| Acceptable | 100-110% | Revenue stable but limited expansion |
| Below target | 90-100% | Churn exceeds expansion |
| Concerning | < 90% | Significant revenue erosion |
**Benchmarks by Segment:**
| Customer Segment | Median NRR | Top Quartile | Bottom Quartile |
|-----------------|------------|--------------|-----------------|
| Enterprise (>$100K ARR) | 115% | 130%+ | 105% |
| Mid-Market ($25K-$100K) | 108% | 120% | 98% |
| SMB (<$25K ARR) | 95% | 105% | 85% |
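The NRR formula above, as a quick sketch with illustrative numbers:

```python
def nrr(starting_arr, expansion, contraction, churn):
    """Net Revenue Retention: expansion counts, contraction and churn subtract."""
    return (starting_arr + expansion - contraction - churn) / starting_arr * 100

# Illustrative book: $1M starting ARR, $200K expansion, $30K contraction, $50K churned
print(round(nrr(1_000_000, 200_000, 30_000, 50_000), 1))  # 112.0 -- "Good" per the table above
```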
### Gross Revenue Retention (GRR)
GRR measures revenue retained without counting expansion. It isolates the churn and contraction signal.
**Formula:** (Starting ARR - Contraction - Churn) / Starting ARR * 100
| Performance Level | GRR Range | Interpretation |
|-------------------|-----------|----------------|
| Best-in-class | > 95% | Minimal churn, highly sticky product |
| Excellent | 92-95% | Strong retention |
| Good | 90-92% | Healthy with room to improve |
| Target | > 90% | Industry standard target |
| Acceptable | 85-90% | Moderate churn, needs focus |
| Below target | 80-85% | High churn impacting growth |
| Concerning | < 80% | Urgent retention problem |
**Benchmarks by Segment:**
| Customer Segment | Median GRR | Top Quartile | Bottom Quartile |
|-----------------|------------|--------------|-----------------|
| Enterprise | 95% | 98% | 90% |
| Mid-Market | 90% | 95% | 85% |
| SMB | 82% | 90% | 75% |
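The GRR formula is the same calculation with the expansion term removed, which is why GRR can never exceed 100%. A sketch using the same illustrative book as the NRR example:

```python
def grr(starting_arr, contraction, churn):
    """Gross Revenue Retention: expansion excluded, isolating lost revenue."""
    return (starting_arr - contraction - churn) / starting_arr * 100

# Illustrative: $1M starting ARR, $30K contraction, $50K churned
print(round(grr(1_000_000, 30_000, 50_000), 1))  # 92.0 -- "Excellent" per the table above
```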
---
## Health Score Benchmarks
### Portfolio Health Distribution (Target)
A healthy CS portfolio should have the following approximate distribution:
| Classification | Target Distribution | Alert Threshold |
|---------------|-------------------|-----------------|
| Green (Healthy) | 60-70% | < 50% triggers portfolio review |
| Yellow (Attention) | 20-30% | > 35% signals systemic issues |
| Red (At Risk) | 5-10% | > 15% requires executive intervention |
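Checking a portfolio against these target bands is a simple percentage count. A minimal sketch (field names and the sample portfolio are illustrative):

```python
from collections import Counter

def portfolio_distribution(classifications):
    """Percentage of accounts in each health band."""
    counts = Counter(classifications)
    total = len(classifications)
    return {band: round(counts[band] / total * 100, 1) for band in ("green", "yellow", "red")}

# Illustrative 20-account portfolio: 13 green, 4 yellow, 3 red
dist = portfolio_distribution(["green"] * 13 + ["yellow"] * 4 + ["red"] * 3)
print(dist)  # {'green': 65.0, 'yellow': 20.0, 'red': 15.0}
```

Here green sits inside the 60-70% target band, while red is at the 15% executive-intervention threshold.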
### Average Health Score by Segment
| Segment | Target Average | Industry Median | Top Quartile |
|---------|---------------|-----------------|--------------|
| Enterprise | > 78 | 72 | 82 |
| Mid-Market | > 75 | 68 | 78 |
| SMB | > 70 | 65 | 75 |
### Health Score by Dimension (Industry Medians)
| Dimension | Enterprise | Mid-Market | SMB |
|-----------|-----------|------------|-----|
| Usage | 72 | 68 | 60 |
| Engagement | 70 | 62 | 55 |
| Support | 78 | 72 | 65 |
| Relationship | 68 | 60 | 50 |
---
## Churn Metrics
### Logo Churn Rate (Annual)
| Performance Level | Rate | Interpretation |
|-------------------|------|----------------|
| Best-in-class | < 5% | Exceptional retention |
| Excellent | 5-8% | Very strong |
| Good | 8-12% | Healthy |
| Acceptable | 12-15% | Room for improvement |
| Below target | 15-20% | Significant churn problem |
| Concerning | > 20% | Urgent -- product-market fit issues likely |
**Benchmarks by Segment:**
| Segment | Median Annual Logo Churn | Top Quartile | Bottom Quartile |
|---------|------------------------|--------------|-----------------|
| Enterprise | 5% | 2% | 10% |
| Mid-Market | 10% | 5% | 18% |
| SMB | 20% | 12% | 35% |
### Churn Leading Indicators
The following metrics have the highest predictive power for churn events:
| Indicator | Lead Time | Correlation with Churn |
|-----------|-----------|----------------------|
| Login frequency decline (>30%) | 60-90 days | Very High |
| NPS drop (>3 points) | 30-60 days | High |
| Executive sponsor departure | 30-90 days | Very High |
| Support escalation rate increase | 30-60 days | High |
| Meeting cancellation increase | 30-45 days | Moderate-High |
| Feature adoption decline | 60-90 days | Moderate |
| Competitor mentions | 30-60 days | Moderate |
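The highest-signal indicators in the table above can be flagged directly from the fields in the sample customer data included in this commit. A sketch, assuming those field names and using the table's thresholds (the shipped risk tool may weight or score these differently):

```python
def churn_flags(customer):
    """Flag the highest-signal churn indicators from a customer record."""
    flags = []
    if customer["usage_decline"]["login_trend"] <= -30:
        flags.append("login frequency decline")
    if customer["engagement_drop"]["nps_change"] <= -3:
        flags.append("NPS drop")
    if customer["relationship_signals"]["champion_left"]:
        flags.append("champion departure")
    if customer["support_issues"]["open_escalations"] > 0:
        flags.append("support escalation increase")
    return flags

# CUST-002 from the sample data: NPS down 4, champion left, 2 open escalations
cust_002 = {
    "usage_decline": {"login_trend": -25},
    "engagement_drop": {"nps_change": -4},
    "relationship_signals": {"champion_left": True},
    "support_issues": {"open_escalations": 2},
}
print(churn_flags(cust_002))
```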
---
## Expansion Metrics
### Expansion Revenue Rate
| Performance Level | Rate | Notes |
|-------------------|------|-------|
| Best-in-class | > 30% of total revenue | Strong land-and-expand motion |
| Excellent | 25-30% | Effective expansion engine |
| Good | 20-25% | Solid upsell/cross-sell |
| Target | > 20% | Minimum for healthy growth |
| Below target | 10-20% | Expansion motion needs development |
| Concerning | < 10% | Missing significant expansion opportunity |
### Expansion by Type
| Expansion Type | Typical Contribution | Average Deal Size |
|---------------|---------------------|-------------------|
| Seat Expansion | 40-50% of expansion | 15-25% of contract value |
| Tier Upsell | 25-35% of expansion | 40-80% of contract value |
| Module Cross-sell | 15-25% of expansion | 10-20% of contract value |
| Department Expansion | 5-15% of expansion | 50-100% of contract value |
### Expansion Readiness Indicators
| Signal | Interpretation |
|--------|---------------|
| Seat utilisation > 90% | Ready for seat expansion |
| Feature requests for higher tier | Upsell opportunity |
| Usage of 70%+ of current modules | Ready for cross-sell |
| New department interest | Department expansion play |
| Customer referral activity | Strong relationship, open to expansion |
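The seat-utilisation signal is a straightforward ratio of active to licensed seats, both of which appear in the sample contract data in this commit:

```python
def seat_utilisation(licensed_seats, active_seats):
    """Seat utilisation as a percentage of licensed seats."""
    return active_seats / licensed_seats * 100

# CUST-001 from the sample data: 95 of 100 licensed seats active
print(seat_utilisation(100, 95))  # 95.0 -- above the 90% seat-expansion signal
```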
---
## Engagement Metrics
### Customer Engagement Score (CES) Benchmarks
| Metric | Target | Median | Warning |
|--------|--------|--------|---------|
| Meeting attendance rate | > 80% | 72% | < 50% |
| Average NPS | > 50 | 35 | < 20 |
| Average CSAT | > 4.2/5 | 3.8/5 | < 3.0/5 |
| Response time (days) | < 2 | 3 | > 5 |
| QBR completion rate | > 90% | 75% | < 60% |
### Time to First Value (TTFV)
| Segment | Target TTFV | Median TTFV | Warning Threshold |
|---------|------------|------------|-------------------|
| Enterprise | < 30 days | 45 days | > 60 days |
| Mid-Market | < 21 days | 30 days | > 45 days |
| SMB | < 14 days | 21 days | > 30 days |
---
## CSM Operational Metrics
### Portfolio Management
| Metric | Enterprise CSM | Mid-Market CSM | SMB CSM (Tech-Touch) |
|--------|---------------|----------------|---------------------|
| Accounts per CSM | 10-25 | 30-60 | 100-300+ |
| ARR per CSM | $2M-$5M | $2M-$4M | $1M-$3M |
| Touch frequency | Weekly-biweekly | Biweekly-monthly | Quarterly-automated |
| QBR frequency | Quarterly | Semi-annually | Annually |
| Health score reviews | Weekly | Bi-weekly | Monthly |
### CSM Activity Benchmarks
| Activity | Target per Month | Purpose |
|----------|-----------------|---------|
| Strategic calls | 2-4 per account | Relationship building |
| Health score reviews | 4 (weekly) | Portfolio monitoring |
| QBR preparation | 3-5 per quarter | Executive engagement |
| Escalation handling | < 2 per month | Issue resolution |
| Expansion conversations | 1-2 per account | Revenue growth |
---
## Industry-Specific Benchmarks
### By Industry Vertical
| Industry | Median NRR | Median GRR | Median Logo Churn |
|----------|-----------|-----------|------------------|
| Infrastructure/DevOps | 125% | 95% | 5% |
| Cybersecurity | 120% | 93% | 7% |
| HR Tech | 110% | 90% | 12% |
| MarTech | 105% | 87% | 15% |
| FinTech | 115% | 92% | 8% |
| HealthTech | 112% | 91% | 10% |
| EdTech | 100% | 85% | 18% |
| eCommerce Tools | 108% | 88% | 14% |
### By Company Stage
| Stage | Median NRR | Median GRR | Notes |
|-------|-----------|-----------|-------|
| Early Stage (<$10M ARR) | 100% | 85% | Focus on product-market fit |
| Growth ($10M-$50M ARR) | 110% | 90% | Building CS function |
| Scale ($50M-$200M ARR) | 118% | 93% | Mature CS operations |
| Enterprise (>$200M ARR) | 115% | 95% | Optimisation phase |
---
## Metric Relationships
### Key Correlations
| If This Metric Moves | This Also Tends to Move | Direction |
|---------------------|------------------------|-----------|
| Health score down | Churn probability up | Inverse |
| NPS up | NRR up | Direct |
| TTFV down | GRR up | Inverse |
| Feature adoption up | Expansion rate up | Direct |
| Escalation rate up | NPS down | Inverse |
| Multi-threading depth up | GRR up | Direct |
### The SaaS Retention Equation
**Sustainable Growth requires:** NRR > 110% AND GRR > 90%
If NRR is high but GRR is low: you are churning customers and backfilling the lost revenue with expansion from the survivors. Not sustainable.
If GRR is high but NRR is low: you retain well but do not expand, leaving money on the table.
Both high: healthy, compounding growth from the existing customer base.
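The two ratios can be made concrete with a worked example. The cohort figures below are illustrative, not benchmarks:

```python
def nrr(start_arr, expansion, contraction, churned):
    """Net Revenue Retention: end-of-period ARR from the starting cohort / starting ARR."""
    return (start_arr + expansion - contraction - churned) / start_arr

def grr(start_arr, contraction, churned):
    """Gross Revenue Retention: ignores expansion, so it can never exceed 100%."""
    return (start_arr - contraction - churned) / start_arr

# Illustrative cohort: $1,000k starting ARR, $180k expansion,
# $30k contraction, $50k churned
print(f"NRR: {nrr(1000, 180, 30, 50):.0%}")  # 110%
print(f"GRR: {grr(1000, 30, 50):.0%}")       # 92%
```

This cohort clears both bars of the retention equation (NRR ≥ 110%, GRR > 90%).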
---
**Last Updated:** February 2026
**Sources:** Industry surveys, SaaS benchmarking reports, customer success community data (2024-2025 data cycles).


@@ -0,0 +1,290 @@
# Customer Success Playbooks
Comprehensive intervention, onboarding, renewal, expansion, and escalation playbooks for SaaS customer success management.
---
## Risk Tier Intervention Playbooks
### Critical Risk (Score 80-100)
**Situation:** Customer is at imminent risk of churn. Multiple severe warning signals detected. Requires immediate executive-level intervention.
**Timeline:** Act within 48 hours.
**Steps:**
1. **Executive Escalation (Day 0)**
- Alert VP of Customer Success and account executive immediately
- Brief internal leadership on situation, warning signals, and ARR at risk
- Identify any pending support issues and fast-track resolution
2. **Customer Contact (Day 1-2)**
- Schedule executive-to-executive call (VP CS to customer VP/C-level)
- Frame the conversation around understanding their challenges, not defending your product
- Listen more than talk -- capture the real objections
3. **Save Plan Creation (Day 2-3)**
- Create a detailed save plan with specific value milestones tied to their business outcomes
- Include timeline, owners, and measurable success criteria
- Get internal alignment on any concessions (pricing, features, roadmap commitments)
4. **Rescue Team Assignment (Day 3-5)**
- Assign a dedicated rescue team: CSM + Solutions Engineer + Support Lead
- Daily internal stand-up (15 min max) on account status
- Solutions Engineer to conduct technical health check
5. **Execution and Monitoring (Week 2-4)**
- Execute save plan with weekly customer check-ins
- Track progress against milestones
- Prepare competitive displacement defence if competitor involvement detected
6. **Resolution Assessment (Week 4)**
- Evaluate whether the situation is stabilising
- If improving: transition to High-risk monitoring cadence
- If not improving: escalate to CEO/GM for final intervention
**Success Criteria:** Risk score drops below 60 within 30 days. Customer confirms continued partnership intent.
---
### High Risk (Score 60-79)
**Situation:** Customer showing clear signs of dissatisfaction or disengagement. Still salvageable with focused CSM intervention.
**Timeline:** Act within 1 week.
**Steps:**
1. **Root Cause Analysis (Day 1-3)**
- Review all health score dimensions to identify the primary drivers
- Pull support ticket history for patterns
- Check product usage trends for the past 90 days
2. **CSM Outreach (Day 3-5)**
- Schedule a dedicated call with the customer (not a routine check-in)
- Open with empathy: "I've noticed some changes and want to make sure we're supporting you properly"
- Identify the top 3 customer concerns
3. **30-Day Recovery Plan (Day 5-7)**
- Build a 30-day recovery plan with measurable checkpoints every week
- Include specific actions for each concern identified
- Share the plan with the customer for mutual commitment
4. **Re-Engage Executive Sponsor (Week 2)**
- Request a meeting with the executive sponsor
- Align on business outcomes and how your product supports them
- Confirm continued sponsorship and address any political changes
5. **Support Fast-Track (Ongoing)**
- Escalate any pending support tickets internally
- Assign a support point of contact for this account
- Provide weekly status updates on open issues
6. **Progress Review (Week 3-4)**
- Review all metrics for improvement
- Adjust plan if specific interventions are not working
- If score drops to Critical: escalate to executive playbook
**Success Criteria:** Risk score drops below 40 within 30 days. No new warning signals emerge.
---
### Medium Risk (Score 40-59)
**Situation:** Early warning signs detected. Customer may not be aware of emerging issues. Proactive outreach prevents escalation.
**Timeline:** Act within 2 weeks.
**Steps:**
1. **Data Review (Day 1-5)**
- Analyse which dimension(s) are pulling the score down
- Review recent support interactions for sentiment clues
- Check for any known product issues affecting this customer
2. **Proactive Check-In (Week 1-2)**
- Schedule a "value check-in" call (position it as routine, not reactive)
- Share relevant success stories from similar customers
- Propose a training session or product walkthrough for underutilised features
3. **Value Reinforcement (Week 2-3)**
- Send a customised ROI summary showing value delivered
- Highlight feature releases relevant to their use case
- Connect them with your customer community or user group
4. **Monitoring (Week 3-4)**
- Increase monitoring frequency to bi-weekly
- Watch for improvement or continued decline
- If declining: move to High-risk playbook
**Success Criteria:** Score stabilises above 50 or improves. No escalation to High risk.
---
### Low Risk (Score 0-39)
**Situation:** Customer is healthy. Standard success cadence applies. Focus on value reinforcement and expansion readiness.
**Timeline:** Standard touch cadence.
**Steps:**
1. **Maintain Cadence**
- Enterprise: Monthly strategic reviews, quarterly QBRs
- Mid-Market: Bi-monthly check-ins, semi-annual reviews
- SMB: Quarterly automated health updates, annual review
2. **Proactive Communication**
- Share product updates and release notes
- Invite to webinars, conferences, and community events
- Share relevant industry insights and benchmarks
3. **Expansion Readiness**
- Monitor for expansion signals (usage approaching limits, new use cases)
- Prepare expansion proposals when timing is right
- Position premium features and modules relevant to their needs
4. **Renewal Preparation**
- Begin renewal preparation 90 days before contract end
- Build renewal proposal with value delivered summary
- Identify any terms or pricing adjustments needed
**Success Criteria:** Customer remains in Green classification. Expansion conversations initiated when appropriate.
---
## Onboarding Playbook
### Phase 1: Welcome and Setup (Day 1-14)
| Day | Activity | Owner | Deliverable |
|-----|----------|-------|-------------|
| 1 | Welcome email and introduction | CSM | Welcome package sent |
| 1-2 | Kickoff call | CSM + SE | Success plan drafted |
| 3-5 | Technical setup and configuration | SE | Environment configured |
| 5-7 | Admin training session | CSM | Admins trained |
| 7-10 | Data migration (if applicable) | SE | Data validated |
| 10-14 | Initial user training | CSM | Core team trained |
### Phase 2: Activation (Day 15-30)
| Day | Activity | Owner | Deliverable |
|-----|----------|-------|-------------|
| 15 | Activation check -- are users logging in? | CSM | Usage report |
| 15-20 | Follow-up training for laggards | CSM | All users active |
| 20-25 | First business outcome milestone | CSM | Milestone achieved |
| 25-30 | 30-day review call | CSM | Review documented |
**Critical Milestone:** Time to First Value must be under 30 days.
### Phase 3: Adoption (Day 31-60)
| Day | Activity | Owner | Deliverable |
|-----|----------|-------|-------------|
| 30-40 | Feature adoption expansion | CSM | New features in use |
| 40-50 | Integration setup (if applicable) | SE | Integrations live |
| 50-60 | Usage benchmarking vs. peers | CSM | Benchmark report |
### Phase 4: Optimisation (Day 61-90)
| Day | Activity | Owner | Deliverable |
|-----|----------|-------|-------------|
| 60-70 | Advanced use case workshop | CSM + SE | New use cases identified |
| 70-80 | ROI measurement | CSM | ROI documented |
| 80-90 | 90-day executive review | CSM | Transition to steady-state |
**Gate:** Handoff from onboarding to ongoing CSM management. Health score must be Yellow or better.
---
## Renewal Playbook
### 120 Days Before Renewal
- Review contract terms and pricing
- Assess current health score and trajectory
- Identify any outstanding issues or concerns
- Begin internal alignment on renewal strategy
### 90 Days Before Renewal
- Schedule renewal conversation with customer
- Prepare value delivered summary (ROI, usage stats, milestones achieved)
- Draft renewal proposal with recommended terms
- If at-risk: escalate and begin risk mitigation
### 60 Days Before Renewal
- Present renewal proposal to customer
- Negotiate terms if needed
- Address any concerns raised during the process
- Escalate blockers to leadership
### 30 Days Before Renewal
- Finalise contract terms
- Obtain signatures
- Plan for any post-renewal actions (expansion, migration)
- Update CRM with renewal details
### Post-Renewal
- Confirm renewed contract in systems
- Send thank-you and updated success plan
- Schedule next QBR
- Identify expansion opportunities
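The renewal timeline above maps cleanly onto a days-to-renewal lookup. A minimal sketch, assuming the checkpoint boundaries are inclusive:

```python
def renewal_checkpoint(days_remaining: int) -> str:
    """Map days until contract end onto the renewal playbook checkpoints."""
    if days_remaining < 0:
        return "post-renewal"
    for threshold, label in [(30, "30-day"), (60, "60-day"), (90, "90-day"), (120, "120-day")]:
        if days_remaining <= threshold:
            return label
    return "not yet in cycle"
```

For example, an account 45 days from renewal falls in the 60-day checkpoint, where the proposal should already be in front of the customer.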
---
## Expansion Playbook
### Identifying Expansion Signals
| Signal | Expansion Type | Priority |
|--------|---------------|----------|
| Seat utilisation > 90% | Seat expansion | High |
| Requests for features in higher tier | Tier upsell | High |
| New department inquiries | Department expansion | Medium |
| High adoption of existing modules | Module cross-sell | Medium |
| Customer referencing competitors for missing features | Cross-sell | High |
### Expansion Conversation Framework
1. **Discovery:** "I noticed your team has been getting great value from [feature]. Have you considered how [new module] could help with [related business outcome]?"
2. **Value Framing:** "Companies similar to yours who adopted [module] saw [specific metric improvement]."
3. **Proposal:** "Based on your current usage, here's what the expansion would look like..."
4. **Stakeholder Alignment:** Involve the economic buyer early. The champion can advocate, but the budget holder decides.
5. **Close:** Coordinate with sales/account executive for commercial negotiation.
---
## Escalation Procedures
### Internal Escalation Matrix
| Trigger | Escalation Level | Response Time |
|---------|-----------------|---------------|
| Health score drops to Red | VP Customer Success | 24 hours |
| Executive sponsor leaves | Director CS + AE | 48 hours |
| Critical bug affecting customer | VP Engineering + VP CS | 4 hours |
| Customer mentions competitor evaluation | VP CS + VP Sales | 24 hours |
| Renewal at risk (60 days or less) | CRO/VP Sales | 24 hours |
| Customer threatens legal action | Legal + VP CS | Immediate |
### Escalation Communication Template
**Subject:** [ESCALATION] {Customer Name} -- {Brief Description}
**Body:**
- Customer: {name}, {segment}, ${ARR}
- Health Score: {score} ({classification})
- Renewal Date: {date}
- Issue Summary: {2-3 sentences}
- Warning Signals: {list}
- Recommended Action: {specific next step}
- Urgency: {critical/high/medium}
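The template lends itself to automated fills from account data. An illustrative helper; the field names are assumptions, not a real CRM schema:

```python
# Hypothetical escalation formatter -- field names are illustrative.
ESCALATION_TEMPLATE = """\
[ESCALATION] {name} -- {issue}
Customer: {name}, {segment}, ${arr:,}
Health Score: {score} ({classification})
Renewal Date: {renewal_date}
Recommended Action: {action}
Urgency: {urgency}"""

def render_escalation(account: dict) -> str:
    """Fill the escalation template from an account dict."""
    return ESCALATION_TEMPLATE.format(**account)

example = {
    "name": "Acme Corp", "issue": "Renewal at risk", "segment": "Enterprise",
    "arr": 480000, "score": 42, "classification": "Red",
    "renewal_date": "2026-04-30", "action": "Executive call within 24 hours",
    "urgency": "critical",
}
```

`render_escalation(example)` produces a subject line plus body ready to paste into the escalation channel.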
---
**Last Updated:** February 2026



@@ -0,0 +1,184 @@
# Health Scoring Framework
Complete methodology for multi-dimensional customer health scoring in SaaS customer success.
---
## Overview
Customer health scoring is the foundation of proactive customer success management. A well-calibrated health score enables CSMs to prioritise their portfolio, identify emerging risks before they become churn events, and allocate resources where they will have the greatest impact.
This framework uses a weighted, multi-dimensional approach that scores customers across four key areas: usage, engagement, support, and relationship. Each dimension contributes to an overall health score (0-100) that classifies accounts as Green (healthy), Yellow (needs attention), or Red (at risk).
---
## Scoring Dimensions
### 1. Usage (Weight: 30%)
Usage metrics are the strongest leading indicator of customer health. Customers who are not using the product are not deriving value and are at elevated churn risk.
| Metric | Definition | Scoring Method |
|--------|-----------|----------------|
| Login Frequency | Percentage of expected login days with actual logins | (actual / target) * 100, capped at 100 |
| Feature Adoption | Percentage of available features actively used | (adopted / available) * 100, capped at 100 |
| DAU/MAU Ratio | Daily active users divided by monthly active users | (actual / target) * 100, capped at 100 |
**Sub-weights within Usage:**
- Login Frequency: 35%
- Feature Adoption: 40%
- DAU/MAU Ratio: 25%
**Why 30% weight:** Usage is the most objective, data-driven signal. Declining usage almost always precedes churn. However, some customers may have seasonal usage patterns, which is why it is not weighted even higher.
### 2. Engagement (Weight: 25%)
Engagement measures how actively the customer participates in the relationship beyond just product usage.
| Metric | Definition | Scoring Method |
|--------|-----------|----------------|
| Support Ticket Volume | Number of support tickets in the period | Inverse score: (1 - actual/max) * 100 |
| Meeting Attendance | Percentage of scheduled meetings attended | (actual / target) * 100, capped at 100 |
| NPS Score | Net Promoter Score response (0-10) | (actual / target) * 100, capped at 100 |
| CSAT Score | Customer Satisfaction score (1-5) | (actual / target) * 100, capped at 100 |
**Sub-weights within Engagement:**
- Support Ticket Volume: 20% (inverse -- fewer tickets is better)
- Meeting Attendance: 30%
- NPS Score: 25%
- CSAT Score: 25%
**Why 25% weight:** Engagement signals complement usage data. A customer who attends meetings but does not use the product may be in an evaluation phase. A customer who uses the product but skips meetings may be becoming self-sufficient -- or disengaging.
### 3. Support (Weight: 20%)
Support health measures the quality of the customer's support experience, which directly impacts satisfaction and renewal likelihood.
| Metric | Definition | Scoring Method |
|--------|-----------|----------------|
| Open Tickets | Number of currently unresolved tickets | Inverse score: (1 - actual/max) * 100 |
| Escalation Rate | Percentage of tickets escalated | Inverse score: (1 - actual/max) * 100 |
| Avg Resolution Time | Average hours to resolve tickets | Inverse score: (1 - actual/max) * 100 |
**Sub-weights within Support:**
- Open Tickets: 35%
- Escalation Rate: 35%
- Resolution Time: 30%
**Why 20% weight:** Support issues are lagging indicators -- they tell you there is already a problem. However, unresolved support issues are a strong predictor of churn, especially when combined with declining engagement.
### 4. Relationship (Weight: 25%)
Relationship health measures the strength and depth of the human connection between the customer and your organisation.
| Metric | Definition | Scoring Method |
|--------|-----------|----------------|
| Executive Sponsor Engagement | Engagement level of exec sponsor (0-100) | (actual / target) * 100, capped at 100 |
| Multi-Threading Depth | Number of stakeholder contacts | (actual / target) * 100, capped at 100 |
| Renewal Sentiment | Qualitative sentiment assessment | Mapped to score: positive=100, neutral=60, negative=20, unknown=50 |
**Sub-weights within Relationship:**
- Executive Sponsor Engagement: 35%
- Multi-Threading Depth: 30%
- Renewal Sentiment: 35%
**Why 25% weight:** Relationship strength is the most important defence against competitive displacement. A customer with strong relationships will give you more chances to fix problems. A customer with weak relationships may leave without warning.
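The weighted roll-up across the four dimensions can be sketched directly. The dimension weights are the ones defined above; the input shape (per-dimension scores already normalised to 0-100) is an assumption for illustration:

```python
DIMENSION_WEIGHTS = {"usage": 0.30, "engagement": 0.25, "support": 0.20, "relationship": 0.25}

def capped_ratio(actual: float, target: float) -> float:
    """The framework's standard scoring method: (actual / target) * 100, capped at 100."""
    return min(100.0, (actual / target) * 100) if target else 0.0

def health_score(dimension_scores: dict) -> float:
    """Weighted overall score from per-dimension scores (each already 0-100)."""
    return round(sum(dimension_scores[d] * w for d, w in DIMENSION_WEIGHTS.items()), 1)

score = health_score({"usage": 82, "engagement": 70, "support": 90, "relationship": 65})
# 0.30*82 + 0.25*70 + 0.20*90 + 0.25*65 ≈ 76.3 -- Green under standard thresholds
```

The same `capped_ratio` shape covers most metrics in the tables above; the inverse-scored metrics (ticket volume, escalation rate) use `(1 - actual/max) * 100` instead.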
---
## Classification Thresholds
### Standard Thresholds
| Classification | Score Range | Meaning | Action |
|---------------|-------------|---------|--------|
| Green | 75-100 | Customer is healthy and achieving value | Standard cadence, focus on expansion |
| Yellow | 50-74 | Customer needs attention | Increase touch frequency, investigate root causes |
| Red | 0-49 | Customer is at risk | Immediate intervention, create save plan |
### Segment-Adjusted Thresholds
Enterprise customers typically have higher expectations and more complex deployments, which means a higher bar for "healthy." SMB customers may have simpler use cases and lower engagement expectations.
| Segment | Green Threshold | Yellow Threshold | Red Threshold |
|---------|----------------|------------------|---------------|
| Enterprise | 75-100 | 50-74 | 0-49 |
| Mid-Market | 70-100 | 45-69 | 0-44 |
| SMB | 65-100 | 40-64 | 0-39 |
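Classification against the segment-adjusted thresholds is a two-boundary lookup. A minimal sketch using the table's values:

```python
# (green_min, yellow_min) per segment, from the segment-adjusted thresholds table
SEGMENT_THRESHOLDS = {
    "enterprise": (75, 50),
    "mid-market": (70, 45),
    "smb": (65, 40),
}

def classify(score: float, segment: str = "enterprise") -> str:
    """Classify a 0-100 health score as Green, Yellow, or Red for a segment."""
    green_min, yellow_min = SEGMENT_THRESHOLDS[segment.lower()]
    if score >= green_min:
        return "Green"
    if score >= yellow_min:
        return "Yellow"
    return "Red"
```

Note how the same score lands differently by segment: a 72 is Green for a mid-market account but only Yellow for an enterprise one.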
### Segment-Specific Benchmarks
Each metric target is calibrated per segment. Enterprise customers are expected to have higher login frequency, attendance, and sponsor engagement. SMB customers have lower targets but still meaningful thresholds.
**Example Calibration:**
- Enterprise login frequency target: 90% (high-touch, deeply embedded)
- Mid-Market login frequency target: 80% (balanced engagement)
- SMB login frequency target: 70% (self-serve oriented)
---
## Trend Analysis
A single health score snapshot is useful. A health score trend is actionable.
### Trend Classification
| Trend | Criteria | Implication |
|-------|----------|-------------|
| Improving | Current > Previous by 5+ points | Positive trajectory, reinforce what is working |
| Stable | Within +/- 5 points | Maintain current approach |
| Declining | Current < Previous by 5+ points | Investigate and intervene |
| No Data | No previous period available | Establish baseline |
### Trend Priority Matrix
| Current Score | Trend | Priority |
|--------------|-------|----------|
| Green | Declining | HIGH -- intervene before it drops further |
| Yellow | Declining | CRITICAL -- trajectory leads to Red |
| Yellow | Improving | MEDIUM -- reinforce positive momentum |
| Red | Improving | HIGH -- support the recovery |
| Red | Stable | CRITICAL -- needs new intervention approach |
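Both tables reduce to small lookups. A sketch of the ±5-point trend bands and the priority matrix as written above:

```python
def classify_trend(current: float, previous: float = None) -> str:
    """Classify a score trend per the +/- 5-point bands."""
    if previous is None:
        return "no_data"
    delta = current - previous
    if delta >= 5:
        return "improving"
    if delta <= -5:
        return "declining"
    return "stable"

# (classification, trend) -> priority, from the trend priority matrix
PRIORITY = {
    ("green", "declining"): "HIGH",
    ("yellow", "declining"): "CRITICAL",
    ("yellow", "improving"): "MEDIUM",
    ("red", "improving"): "HIGH",
    ("red", "stable"): "CRITICAL",
}
```

Combinations absent from the matrix (for example a stable Green account) fall back to standard cadence rather than a named priority.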
---
## Calibration Guidelines
### When to Recalibrate
1. **After major product changes**: New features may change what "good usage" looks like
2. **Seasonal patterns**: Some industries have cyclical usage (retail holiday season, fiscal year end)
3. **Portfolio composition changes**: If you add many SMB customers, the overall averages shift
4. **After churn events**: Review whether the health score predicted the churn
### Calibration Process
1. Export health scores for all customers over the past 12 months
2. Identify all churn events in the same period
3. Calculate the average health score of churned customers 90, 60, and 30 days before churn
4. Adjust thresholds so that churned customers would have been classified as Yellow or Red at least 60 days before churn
5. Validate with a holdout set of recent data
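Step 3 of the calibration process can be sketched as a small backtest. The `history` shape (customer id mapped to `(days_before_churn, score)` snapshots) is an assumed structure for illustration:

```python
from statistics import mean

def avg_score_before_churn(history: dict, days_before: int, window: int = 7) -> float:
    """Average health score across churned accounts near a checkpoint.

    Includes snapshots within `window` days of `days_before` (e.g. 90, 60, 30).
    """
    scores = [
        score
        for snapshots in history.values()
        for days, score in snapshots
        if abs(days - days_before) <= window
    ]
    return round(mean(scores), 1) if scores else float("nan")

# Two illustrative churned accounts with 90/60/30-day snapshots
history = {
    "acct-1": [(90, 62), (60, 55), (30, 41)],
    "acct-2": [(90, 58), (60, 48), (30, 35)],
}
print(avg_score_before_churn(history, 60))  # 51.5
```

If the 60-day average sits in your Green band, step 4 says the thresholds are too lenient and need raising.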
### Common Calibration Pitfalls
- **Threshold creep**: Gradually lowering Green thresholds to make the portfolio look healthier
- **Over-weighting lagging indicators**: Support metrics react after the damage is done
- **Ignoring segment differences**: Using one threshold for all segments
- **Sentiment bias**: Over-relying on subjective renewal sentiment
---
## Implementation Checklist
1. Define data sources for each metric (CRM, product analytics, support system)
2. Establish data refresh frequency (daily for usage, weekly for engagement)
3. Configure segment benchmarks for your customer base
4. Set initial thresholds using industry defaults (provided above)
5. Run a 30-day pilot with manual review of edge cases
6. Calibrate thresholds based on pilot results
7. Automate scoring and alerting
8. Review and recalibrate quarterly
---
**Last Updated:** February 2026


@@ -0,0 +1,487 @@
#!/usr/bin/env python3
"""
Churn Risk Analyzer
Identifies at-risk customer accounts by scoring behavioral signals across
usage decline, engagement drop, support issues, relationship signals, and
commercial factors. Produces risk tiers with intervention playbooks and
time-to-renewal urgency multipliers.
Usage:
python churn_risk_analyzer.py customer_data.json
python churn_risk_analyzer.py customer_data.json --format json
"""
import argparse
import json
import sys
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
RISK_SIGNAL_WEIGHTS: Dict[str, float] = {
"usage_decline": 0.30,
"engagement_drop": 0.25,
"support_issues": 0.20,
"relationship_signals": 0.15,
"commercial_factors": 0.10,
}
RISK_TIERS: List[Dict[str, Any]] = [
{"name": "critical", "min": 80, "max": 100, "label": "CRITICAL", "action": "Immediate executive escalation"},
{"name": "high", "min": 60, "max": 79, "label": "HIGH", "action": "Urgent CSM intervention"},
{"name": "medium", "min": 40, "max": 59, "label": "MEDIUM", "action": "Proactive outreach"},
{"name": "low", "min": 0, "max": 39, "label": "LOW", "action": "Standard monitoring"},
]
WARNING_SEVERITY: Dict[str, int] = {
"critical": 4,
"high": 3,
"medium": 2,
"low": 1,
}
# Intervention playbooks per tier
INTERVENTION_PLAYBOOKS: Dict[str, List[str]] = {
"critical": [
"Schedule executive-to-executive call within 48 hours",
"Create detailed save plan with specific value milestones",
"Offer concessions or contract restructuring if needed",
"Assign dedicated rescue team (CSM + Solutions Engineer)",
"Daily internal stand-up on account status until stabilised",
"Prepare competitive displacement defence strategy",
],
"high": [
"Schedule urgent CSM call within 1 week",
"Conduct root cause analysis on declining metrics",
"Build 30-day recovery plan with measurable checkpoints",
"Re-engage executive sponsor for alignment meeting",
"Accelerate any pending feature requests or bug fixes",
"Increase touch frequency to weekly until improvement",
],
"medium": [
"Schedule proactive check-in within 2 weeks",
"Share relevant success stories and best practices",
"Propose training session or product walkthrough",
"Review current usage against success plan goals",
"Identify and address any unvoiced concerns",
"Bi-weekly monitoring until score improves to Low",
],
"low": [
"Maintain standard touch cadence",
"Share product updates and new feature announcements",
"Monitor health score trends monthly",
"Proactively share relevant industry insights",
"Prepare for upcoming renewal conversations (if within 90 days)",
],
}
SATISFACTION_TREND_SCORES: Dict[str, float] = {
"improving": 10.0,
"stable": 30.0,
"declining": 70.0,
"critical": 95.0,
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Return numerator / denominator, or *default* when denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
"""Clamp *value* between *lo* and *hi*."""
return max(lo, min(hi, value))
def days_until(date_str: Optional[str]) -> Optional[int]:
"""Return days from today until *date_str* (ISO format), or None."""
if not date_str:
return None
try:
target = datetime.strptime(date_str[:10], "%Y-%m-%d")
delta = (target - datetime.now()).days
return max(delta, 0)
except (ValueError, TypeError):
return None
def renewal_urgency_multiplier(days_remaining: Optional[int]) -> float:
"""Return a multiplier (1.0 - 1.5) based on proximity to renewal.
Closer renewals amplify the risk score.
"""
if days_remaining is None:
return 1.0
if days_remaining <= 30:
return 1.5
elif days_remaining <= 60:
return 1.35
elif days_remaining <= 90:
return 1.2
elif days_remaining <= 180:
return 1.1
return 1.0
def get_risk_tier(score: float) -> Dict[str, Any]:
    """Return the risk tier dict matching the score.

    RISK_TIERS is ordered highest-first, so the first tier whose lower
    bound the score meets is the match. Comparing against the lower
    bound only also covers fractional scores such as 79.5 that would
    otherwise fall into the gap between integer tier boundaries.
    """
    for tier in RISK_TIERS:
        if score >= tier["min"]:
            return tier
    return RISK_TIERS[-1]  # default to low
# ---------------------------------------------------------------------------
# Signal Scoring
# ---------------------------------------------------------------------------
def score_usage_decline(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score usage decline signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
login_trend = data.get("login_trend", 0) # negative = decline
feature_change = data.get("feature_adoption_change", 0)
dau_mau_change = data.get("dau_mau_change", 0)
# Convert declines to risk scores (0-100)
login_risk = clamp(abs(min(login_trend, 0)) * 3.0) # -33% => 100
feature_risk = clamp(abs(min(feature_change, 0)) * 4.0) # -25% => 100
dau_mau_risk = clamp(abs(min(dau_mau_change, 0)) * 500) # -0.20 => 100
score = round(login_risk * 0.40 + feature_risk * 0.35 + dau_mau_risk * 0.25, 1)
if login_trend <= -20:
warnings.append({"severity": "critical", "signal": f"Login frequency dropped {abs(login_trend)}%"})
elif login_trend <= -10:
warnings.append({"severity": "high", "signal": f"Login frequency declined {abs(login_trend)}%"})
elif login_trend < -5:
warnings.append({"severity": "medium", "signal": f"Login frequency dipping {abs(login_trend)}%"})
if feature_change <= -15:
warnings.append({"severity": "high", "signal": f"Feature adoption dropped {abs(feature_change)}%"})
elif feature_change < -5:
warnings.append({"severity": "medium", "signal": f"Feature adoption declining {abs(feature_change)}%"})
if dau_mau_change <= -0.10:
warnings.append({"severity": "high", "signal": f"DAU/MAU ratio fell by {abs(dau_mau_change):.2f}"})
return score, warnings
def score_engagement_drop(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score engagement drop signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
cancellations = data.get("meeting_cancellations", 0)
response_days = data.get("response_time_days", 1)
nps_change = data.get("nps_change", 0)
cancel_risk = clamp(cancellations * 25.0) # 4 cancellations => 100
response_risk = clamp((response_days - 1) * 15.0) # 1 day baseline; 7+ days => 90+
nps_risk = clamp(abs(min(nps_change, 0)) * 20.0) # -5 => 100
score = round(cancel_risk * 0.30 + response_risk * 0.35 + nps_risk * 0.35, 1)
if cancellations >= 3:
warnings.append({"severity": "critical", "signal": f"{cancellations} meeting cancellations -- customer disengaging"})
elif cancellations >= 2:
warnings.append({"severity": "high", "signal": f"{cancellations} meeting cancellations recently"})
if response_days >= 7:
warnings.append({"severity": "critical", "signal": f"Customer response time: {response_days} days -- going dark"})
elif response_days >= 4:
warnings.append({"severity": "high", "signal": f"Customer response time increasing: {response_days} days"})
if nps_change <= -4:
warnings.append({"severity": "critical", "signal": f"NPS dropped by {abs(nps_change)} points"})
elif nps_change <= -2:
warnings.append({"severity": "high", "signal": f"NPS declined by {abs(nps_change)} points"})
return score, warnings
def score_support_issues(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score support-related risk signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
escalations = data.get("open_escalations", 0)
critical_unresolved = data.get("unresolved_critical", 0)
sat_trend = data.get("satisfaction_trend", "stable").lower()
esc_risk = clamp(escalations * 35.0) # 3 escalations => 100
critical_risk = clamp(critical_unresolved * 50.0) # 2 unresolved critical => 100
sat_risk = SATISFACTION_TREND_SCORES.get(sat_trend, 30.0)
score = round(esc_risk * 0.35 + critical_risk * 0.35 + sat_risk * 0.30, 1)
if critical_unresolved >= 2:
warnings.append({"severity": "critical", "signal": f"{critical_unresolved} unresolved critical support tickets"})
elif critical_unresolved >= 1:
warnings.append({"severity": "high", "signal": "Unresolved critical support ticket"})
if escalations >= 2:
warnings.append({"severity": "high", "signal": f"{escalations} open escalations"})
elif escalations >= 1:
warnings.append({"severity": "medium", "signal": "Open support escalation"})
if sat_trend == "critical":
warnings.append({"severity": "critical", "signal": "Support satisfaction at critical levels"})
elif sat_trend == "declining":
warnings.append({"severity": "high", "signal": "Support satisfaction trending down"})
return score, warnings
def score_relationship_signals(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score relationship risk signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
risk_points = 0.0
champion_left = data.get("champion_left", False)
sponsor_change = data.get("sponsor_change", False)
competitor_mentions = data.get("competitor_mentions", 0)
if champion_left:
risk_points += 45.0
warnings.append({"severity": "critical", "signal": "Internal champion has left the organisation"})
if sponsor_change:
risk_points += 30.0
warnings.append({"severity": "high", "signal": "Executive sponsor change detected"})
if competitor_mentions >= 3:
risk_points += 35.0
warnings.append({"severity": "critical", "signal": f"Customer mentioned competitors {competitor_mentions} times"})
elif competitor_mentions >= 1:
risk_points += competitor_mentions * 12.0
warnings.append({"severity": "medium", "signal": f"Customer mentioned competitor {competitor_mentions} time(s)"})
score = clamp(risk_points)
return round(score, 1), warnings
def score_commercial_factors(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score commercial risk factors (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
risk_points = 0.0
contract_type = data.get("contract_type", "annual").lower()
pricing_complaints = data.get("pricing_complaints", False)
budget_cuts = data.get("budget_cuts_mentioned", False)
if contract_type == "month-to-month":
risk_points += 30.0
warnings.append({"severity": "medium", "signal": "Month-to-month contract -- low switching cost"})
elif contract_type == "quarterly":
risk_points += 15.0
if pricing_complaints:
risk_points += 35.0
warnings.append({"severity": "high", "signal": "Customer has raised pricing complaints"})
if budget_cuts:
risk_points += 40.0
warnings.append({"severity": "high", "signal": "Customer mentioned budget cuts or cost reduction"})
score = clamp(risk_points)
return round(score, 1), warnings
# ---------------------------------------------------------------------------
# Main Analysis
# ---------------------------------------------------------------------------
def analyse_churn_risk(customer: Dict[str, Any]) -> Dict[str, Any]:
"""Analyse churn risk for a single customer."""
usage_score, usage_warnings = score_usage_decline(customer.get("usage_decline", {}))
engagement_score, engagement_warnings = score_engagement_drop(customer.get("engagement_drop", {}))
support_score, support_warnings = score_support_issues(customer.get("support_issues", {}))
relationship_score, relationship_warnings = score_relationship_signals(customer.get("relationship_signals", {}))
commercial_score, commercial_warnings = score_commercial_factors(customer.get("commercial_factors", {}))
# Weighted raw score
raw_score = (
usage_score * RISK_SIGNAL_WEIGHTS["usage_decline"]
+ engagement_score * RISK_SIGNAL_WEIGHTS["engagement_drop"]
+ support_score * RISK_SIGNAL_WEIGHTS["support_issues"]
+ relationship_score * RISK_SIGNAL_WEIGHTS["relationship_signals"]
+ commercial_score * RISK_SIGNAL_WEIGHTS["commercial_factors"]
)
# Apply renewal urgency multiplier
remaining = days_until(customer.get("contract_end_date"))
multiplier = renewal_urgency_multiplier(remaining)
adjusted_score = clamp(round(raw_score * multiplier, 1))
tier = get_risk_tier(adjusted_score)
# Collect and sort warnings by severity
all_warnings = usage_warnings + engagement_warnings + support_warnings + relationship_warnings + commercial_warnings
all_warnings.sort(key=lambda w: WARNING_SEVERITY.get(w["severity"], 0), reverse=True)
playbook = INTERVENTION_PLAYBOOKS.get(tier["name"], [])
return {
"customer_id": customer.get("customer_id", "unknown"),
"name": customer.get("name", "Unknown"),
"segment": customer.get("segment", "unknown"),
"arr": customer.get("arr", 0),
"risk_score": adjusted_score,
"raw_score": round(raw_score, 1),
"risk_tier": tier["name"],
"risk_label": tier["label"],
"urgency_multiplier": multiplier,
"days_to_renewal": remaining,
"signal_scores": {
"usage_decline": {"score": usage_score, "weight": "30%"},
"engagement_drop": {"score": engagement_score, "weight": "25%"},
"support_issues": {"score": support_score, "weight": "20%"},
"relationship_signals": {"score": relationship_score, "weight": "15%"},
"commercial_factors": {"score": commercial_score, "weight": "10%"},
},
"warning_signals": all_warnings,
"recommended_actions": playbook,
}
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
def format_text(results: List[Dict[str, Any]]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 72)
lines.append("CHURN RISK ANALYSIS REPORT")
lines.append("=" * 72)
lines.append("")
total = len(results)
critical_count = sum(1 for r in results if r["risk_tier"] == "critical")
high_count = sum(1 for r in results if r["risk_tier"] == "high")
medium_count = sum(1 for r in results if r["risk_tier"] == "medium")
low_count = sum(1 for r in results if r["risk_tier"] == "low")
total_arr_at_risk = sum(r["arr"] for r in results if r["risk_tier"] in ("critical", "high"))
lines.append(f"Portfolio Summary: {total} customers analysed")
lines.append(f" Critical Risk: {critical_count}")
lines.append(f" High Risk: {high_count}")
lines.append(f" Medium Risk: {medium_count}")
lines.append(f" Low Risk: {low_count}")
lines.append(f" ARR at Risk (Critical + High): ${total_arr_at_risk:,.0f}")
lines.append("")
# Sort by risk score descending
sorted_results = sorted(results, key=lambda r: r["risk_score"], reverse=True)
for r in sorted_results:
lines.append("-" * 72)
lines.append(f"Customer: {r['name']} ({r['customer_id']})")
lines.append(f"Segment: {r['segment'].title()} | ARR: ${r['arr']:,.0f}")
renewal_str = f"{r['days_to_renewal']} days" if r["days_to_renewal"] is not None else "N/A"
lines.append(f"Risk Score: {r['risk_score']}/100 [{r['risk_label']}] | Renewal: {renewal_str}")
if r["urgency_multiplier"] > 1.0:
lines.append(f" ** Urgency multiplier applied: {r['urgency_multiplier']}x (renewal approaching)")
lines.append("")
lines.append(" Signal Scores:")
for signal_name, signal_data in r["signal_scores"].items():
display_name = signal_name.replace("_", " ").title()
lines.append(f" {display_name:25s} {signal_data['score']:6.1f}/100 ({signal_data['weight']})")
if r["warning_signals"]:
lines.append("")
lines.append(" Warning Signals:")
for w in r["warning_signals"]:
severity_tag = w["severity"].upper()
lines.append(f" [{severity_tag}] {w['signal']}")
if r["recommended_actions"]:
lines.append("")
lines.append(" Recommended Actions:")
for i, action in enumerate(r["recommended_actions"], 1):
lines.append(f" {i}. {action}")
lines.append("")
lines.append("=" * 72)
return "\n".join(lines)
def format_json(results: List[Dict[str, Any]]) -> str:
"""Format results as JSON."""
total = len(results)
output = {
"report": "churn_risk_analysis",
"summary": {
"total_customers": total,
"critical_count": sum(1 for r in results if r["risk_tier"] == "critical"),
"high_count": sum(1 for r in results if r["risk_tier"] == "high"),
"medium_count": sum(1 for r in results if r["risk_tier"] == "medium"),
"low_count": sum(1 for r in results if r["risk_tier"] == "low"),
"total_arr_at_risk": sum(r["arr"] for r in results if r["risk_tier"] in ("critical", "high")),
},
"customers": sorted(results, key=lambda r: r["risk_score"], reverse=True),
}
return json.dumps(output, indent=2)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(
description="Analyse churn risk with behavioral signal detection and intervention recommendations."
)
parser.add_argument("input_file", help="Path to JSON file containing customer data")
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
customers = data.get("customers", [])
if not customers:
print("Error: No customer records found in input file.", file=sys.stderr)
sys.exit(1)
results = [analyse_churn_risk(c) for c in customers]
if args.output_format == "json":
print(format_json(results))
else:
print(format_text(results))
if __name__ == "__main__":
main()


@@ -0,0 +1,414 @@
#!/usr/bin/env python3
"""
Expansion Opportunity Scorer
Analyses customer product adoption depth, maps whitespace for unused
features/products, estimates revenue opportunities, and prioritises
expansion plays by effort vs impact.
Usage:
python expansion_opportunity_scorer.py customer_data.json
python expansion_opportunity_scorer.py customer_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
# Tier pricing multipliers (relative to current plan price)
TIER_UPLIFT: Dict[str, float] = {
"starter": 1.0,
"professional": 1.8,
"enterprise": 3.0,
"enterprise_plus": 4.5,
}
# Module revenue estimates as a fraction of base ARR
MODULE_REVENUE_FRACTION: Dict[str, float] = {
"core_platform": 0.00, # Already included in base
"analytics_module": 0.15,
"integrations_module": 0.12,
"api_access": 0.10,
"advanced_reporting": 0.18,
"security_module": 0.20,
"automation_module": 0.15,
"collaboration_module": 0.10,
"data_export": 0.08,
"custom_workflows": 0.22,
"sso_module": 0.08,
"audit_module": 0.10,
}
# Effort classification for different expansion types
EFFORT_MAP: Dict[str, str] = {
"upsell_tier": "medium",
"cross_sell_module": "low",
"seat_expansion": "low",
"department_expansion": "high",
}
# Usage thresholds for recommendations
HIGH_USAGE_THRESHOLD = 75  # usage above this % signals readiness for more
LOW_ADOPTION_THRESHOLD = 30  # usage below this % is too low to push expansion
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Return numerator / denominator, or *default* when denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
"""Clamp *value* between *lo* and *hi*."""
return max(lo, min(hi, value))
def estimate_seat_expansion_revenue(
arr: float, licensed: int, active: int, segment: str
) -> Tuple[float, str]:
"""Estimate revenue from seat expansion.
Returns (estimated_revenue, rationale).
"""
utilisation = safe_divide(active, licensed)
if utilisation >= 0.90:
# Near capacity -- likely needs more seats
growth_factor = {"enterprise": 0.25, "mid-market": 0.20, "smb": 0.15}
factor = growth_factor.get(segment.lower(), 0.15)
revenue = round(arr * factor, 0)
return revenue, f"Seat utilisation at {utilisation:.0%} -- likely needs {int(licensed * factor)} additional seats"
return 0.0, f"Seat utilisation at {utilisation:.0%} -- not yet at expansion threshold"
def estimate_tier_upgrade_revenue(
arr: float, current_tier: str, available_tiers: List[str]
) -> Tuple[float, Optional[str], str]:
"""Estimate revenue from tier upgrade.
Returns (estimated_revenue, target_tier, rationale).
"""
current_mult = TIER_UPLIFT.get(current_tier.lower(), 1.0)
best_revenue = 0.0
best_tier = None
rationale = "Already on highest tier"
    # Pick the lowest tier above the current one (never skip tiers),
    # regardless of the order in which *available_tiers* is listed.
    candidates = [t for t in available_tiers if TIER_UPLIFT.get(t.lower(), 1.0) > current_mult]
    if candidates:
        best_tier = min(candidates, key=lambda t: TIER_UPLIFT.get(t.lower(), 1.0))
        base_arr = safe_divide(arr, current_mult)
        incremental = base_arr * TIER_UPLIFT.get(best_tier.lower(), 1.0) - arr
        best_revenue = round(incremental, 0)
        rationale = f"Upgrade from {current_tier} to {best_tier} adds ${incremental:,.0f} ARR"
    return best_revenue, best_tier, rationale
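# Worked example (illustrative): $90,000 ARR on "starter" (multiplier 1.0)
# upgrading to "professional" (1.8) gives base_arr = 90000, an upgrade ARR
# of 162000, and incremental revenue of $72,000.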
def estimate_module_revenue(
arr: float, product_usage: Dict[str, Dict[str, Any]]
) -> List[Dict[str, Any]]:
"""Identify cross-sell opportunities from unadopted modules.
Returns list of opportunity dicts.
"""
opportunities: List[Dict[str, Any]] = []
for module_name, module_data in product_usage.items():
adopted = module_data.get("adopted", False)
usage_pct = module_data.get("usage_pct", 0)
fraction = MODULE_REVENUE_FRACTION.get(module_name.lower(), 0.10)
if not adopted and fraction > 0:
revenue = round(arr * fraction, 0)
opportunities.append({
"module": module_name,
"type": "cross_sell",
"estimated_revenue": revenue,
"effort": "low",
"rationale": f"Module not adopted -- ${revenue:,.0f} potential ARR",
})
        elif adopted and usage_pct < LOW_ADOPTION_THRESHOLD:
            # Adopted but underutilised -- needs enablement, not a sales motion.
            continue
return opportunities
def estimate_department_expansion_revenue(
arr: float,
current_departments: List[str],
potential_departments: List[str],
segment: str,
) -> List[Dict[str, Any]]:
"""Estimate revenue from expanding to new departments."""
opportunities: List[Dict[str, Any]] = []
current_set = {d.lower() for d in current_departments}
per_dept_estimate = safe_divide(arr, max(len(current_departments), 1))
for dept in potential_departments:
if dept.lower() not in current_set:
# Estimate each new department at the average per-department ARR
revenue = round(per_dept_estimate * 0.8, 0) # Slight discount for new dept
opportunities.append({
"department": dept,
"type": "expansion",
"estimated_revenue": revenue,
"effort": "high",
"rationale": f"Expand to {dept} department -- est. ${revenue:,.0f} ARR",
})
return opportunities
# ---------------------------------------------------------------------------
# Priority Scoring
# ---------------------------------------------------------------------------
def priority_score(revenue: float, effort: str) -> float:
"""Calculate priority score (higher = better).
Favours high revenue with low effort.
"""
effort_multiplier = {"low": 3.0, "medium": 2.0, "high": 1.0}
mult = effort_multiplier.get(effort.lower(), 1.0)
# Normalise revenue to a 0-100 scale (assume max single opportunity is $200k)
rev_score = clamp(safe_divide(revenue, 2000.0)) # $200k => 100
return round(rev_score * mult, 1)
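# Worked example (illustrative): priority favours easy money. A $100,000
# opportunity at "low" effort scores min(100000 / 2000.0, 100.0) * 3.0 =
# 150.0, outranking a $200,000 opportunity at "high" effort, which scores
# min(200000 / 2000.0, 100.0) * 1.0 = 100.0.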
# ---------------------------------------------------------------------------
# Main Analysis
# ---------------------------------------------------------------------------
def analyse_expansion(customer: Dict[str, Any]) -> Dict[str, Any]:
"""Analyse expansion opportunities for a single customer."""
arr = customer.get("arr", 0)
segment = customer.get("segment", "mid-market").lower()
contract = customer.get("contract", {})
product_usage = customer.get("product_usage", {})
departments = customer.get("departments", {})
all_opportunities: List[Dict[str, Any]] = []
# 1. Seat expansion
licensed = contract.get("licensed_seats", 0)
active = contract.get("active_seats", 0)
seat_rev, seat_rationale = estimate_seat_expansion_revenue(arr, licensed, active, segment)
if seat_rev > 0:
all_opportunities.append({
"type": "expansion",
"category": "seat_expansion",
"estimated_revenue": seat_rev,
"effort": "low",
"rationale": seat_rationale,
"priority_score": priority_score(seat_rev, "low"),
})
# 2. Tier upgrade
current_tier = contract.get("plan_tier", "").lower()
available_tiers = contract.get("available_tiers", [])
tier_rev, target_tier, tier_rationale = estimate_tier_upgrade_revenue(arr, current_tier, available_tiers)
if tier_rev > 0 and target_tier:
all_opportunities.append({
"type": "upsell",
"category": "tier_upgrade",
"target_tier": target_tier,
"estimated_revenue": tier_rev,
"effort": "medium",
"rationale": tier_rationale,
"priority_score": priority_score(tier_rev, "medium"),
})
# 3. Module cross-sell
module_opps = estimate_module_revenue(arr, product_usage)
for opp in module_opps:
opp["category"] = "module_cross_sell"
opp["priority_score"] = priority_score(opp["estimated_revenue"], opp["effort"])
all_opportunities.append(opp)
# 4. Department expansion
current_depts = departments.get("current", [])
potential_depts = departments.get("potential", [])
dept_opps = estimate_department_expansion_revenue(arr, current_depts, potential_depts, segment)
for opp in dept_opps:
opp["category"] = "department_expansion"
opp["priority_score"] = priority_score(opp["estimated_revenue"], opp["effort"])
all_opportunities.append(opp)
# Sort by priority score descending
all_opportunities.sort(key=lambda o: o["priority_score"], reverse=True)
# Adoption depth summary
total_modules = len(product_usage)
adopted_modules = sum(1 for m in product_usage.values() if m.get("adopted", False))
avg_usage = round(
safe_divide(
sum(m.get("usage_pct", 0) for m in product_usage.values() if m.get("adopted", False)),
max(adopted_modules, 1),
),
1,
)
total_estimated_revenue = sum(o["estimated_revenue"] for o in all_opportunities)
return {
"customer_id": customer.get("customer_id", "unknown"),
"name": customer.get("name", "Unknown"),
"segment": segment,
"arr": arr,
"adoption_summary": {
"total_modules": total_modules,
"adopted_modules": adopted_modules,
"adoption_rate": round(safe_divide(adopted_modules, total_modules) * 100, 1) if total_modules > 0 else 0,
"avg_usage_pct": avg_usage,
"seat_utilisation": round(safe_divide(active, max(licensed, 1)) * 100, 1),
"current_tier": current_tier,
"departments_covered": len(current_depts),
"departments_potential": len(potential_depts),
},
"total_estimated_revenue": round(total_estimated_revenue, 0),
"opportunity_count": len(all_opportunities),
"opportunities": all_opportunities,
}
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
def format_text(results: List[Dict[str, Any]]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 72)
lines.append("EXPANSION OPPORTUNITY REPORT")
lines.append("=" * 72)
lines.append("")
total_rev = sum(r["total_estimated_revenue"] for r in results)
total_opps = sum(r["opportunity_count"] for r in results)
lines.append(f"Portfolio Summary: {len(results)} customers")
lines.append(f" Total Expansion Revenue Potential: ${total_rev:,.0f}")
lines.append(f" Total Opportunities Identified: {total_opps}")
lines.append("")
# Sort customers by total estimated revenue descending
sorted_results = sorted(results, key=lambda r: r["total_estimated_revenue"], reverse=True)
for r in sorted_results:
lines.append("-" * 72)
lines.append(f"Customer: {r['name']} ({r['customer_id']})")
lines.append(f"Segment: {r['segment'].title()} | Current ARR: ${r['arr']:,.0f}")
lines.append(f"Total Expansion Potential: ${r['total_estimated_revenue']:,.0f} ({r['opportunity_count']} opportunities)")
lines.append("")
adoption = r["adoption_summary"]
lines.append(" Adoption Summary:")
lines.append(f" Modules Adopted: {adoption['adopted_modules']}/{adoption['total_modules']} ({adoption['adoption_rate']}%)")
lines.append(f" Avg Module Usage: {adoption['avg_usage_pct']}%")
lines.append(f" Seat Utilisation: {adoption['seat_utilisation']}%")
lines.append(f" Current Tier: {adoption['current_tier'].title()}")
lines.append(f" Departments: {adoption['departments_covered']} active, {adoption['departments_potential']} potential")
if r["opportunities"]:
lines.append("")
lines.append(" Opportunities (ranked by priority):")
for i, opp in enumerate(r["opportunities"], 1):
opp_type = opp.get("type", "unknown").title()
category = opp.get("category", "").replace("_", " ").title()
rev = opp["estimated_revenue"]
effort = opp.get("effort", "unknown").title()
pri = opp.get("priority_score", 0)
lines.append(f" {i}. [{opp_type}] {category}")
lines.append(f" Revenue: ${rev:,.0f} | Effort: {effort} | Priority: {pri}")
lines.append(f" {opp.get('rationale', '')}")
else:
lines.append("")
lines.append(" No expansion opportunities identified at this time.")
lines.append("")
lines.append("=" * 72)
return "\n".join(lines)
def format_json(results: List[Dict[str, Any]]) -> str:
"""Format results as JSON."""
total_rev = sum(r["total_estimated_revenue"] for r in results)
total_opps = sum(r["opportunity_count"] for r in results)
output = {
"report": "expansion_opportunities",
"summary": {
"total_customers": len(results),
"total_estimated_revenue": total_rev,
"total_opportunities": total_opps,
},
"customers": sorted(results, key=lambda r: r["total_estimated_revenue"], reverse=True),
}
return json.dumps(output, indent=2)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(
description="Score expansion opportunities with adoption analysis and revenue estimation."
)
parser.add_argument("input_file", help="Path to JSON file containing customer data")
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
customers = data.get("customers", [])
if not customers:
print("Error: No customer records found in input file.", file=sys.stderr)
sys.exit(1)
results = [analyse_expansion(c) for c in customers]
if args.output_format == "json":
print(format_json(results))
else:
print(format_text(results))
if __name__ == "__main__":
main()


@@ -0,0 +1,438 @@
#!/usr/bin/env python3
"""
Customer Health Score Calculator
Multi-dimensional weighted health scoring across usage, engagement, support,
and relationship dimensions. Produces Red/Yellow/Green classification with
trend analysis and segment-aware benchmarking.
Usage:
python health_score_calculator.py customer_data.json
python health_score_calculator.py customer_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
DIMENSION_WEIGHTS: Dict[str, float] = {
"usage": 0.30,
"engagement": 0.25,
"support": 0.20,
"relationship": 0.25,
}
# Segment-specific classification bands: tier -> (min_score, max_score)
SEGMENT_THRESHOLDS: Dict[str, Dict[str, Tuple[int, int]]] = {
"enterprise": {"green": (75, 100), "yellow": (50, 74), "red": (0, 49)},
"mid-market": {"green": (70, 100), "yellow": (45, 69), "red": (0, 44)},
"smb": {"green": (65, 100), "yellow": (40, 64), "red": (0, 39)},
}
# Benchmarks per segment for normalising raw metrics
SEGMENT_BENCHMARKS: Dict[str, Dict[str, Any]] = {
"enterprise": {
"login_frequency_target": 90,
"feature_adoption_target": 80,
"dau_mau_target": 0.50,
"support_ticket_volume_max": 5,
"meeting_attendance_target": 95,
"nps_target": 9,
"csat_target": 4.5,
"open_tickets_max": 10,
"escalation_rate_max": 0.25,
"avg_resolution_hours_max": 72,
"exec_sponsor_target": 90,
"multi_threading_target": 5,
},
"mid-market": {
"login_frequency_target": 80,
"feature_adoption_target": 70,
"dau_mau_target": 0.40,
"support_ticket_volume_max": 8,
"meeting_attendance_target": 85,
"nps_target": 8,
"csat_target": 4.0,
"open_tickets_max": 15,
"escalation_rate_max": 0.30,
"avg_resolution_hours_max": 96,
"exec_sponsor_target": 75,
"multi_threading_target": 3,
},
"smb": {
"login_frequency_target": 70,
"feature_adoption_target": 60,
"dau_mau_target": 0.30,
"support_ticket_volume_max": 10,
"meeting_attendance_target": 75,
"nps_target": 7,
"csat_target": 3.8,
"open_tickets_max": 20,
"escalation_rate_max": 0.40,
"avg_resolution_hours_max": 120,
"exec_sponsor_target": 60,
"multi_threading_target": 2,
},
}
RENEWAL_SENTIMENT_SCORES: Dict[str, float] = {
"positive": 100.0,
"neutral": 60.0,
"negative": 20.0,
"unknown": 50.0,
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Return numerator / denominator, or *default* when denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
"""Clamp *value* between *lo* and *hi*."""
return max(lo, min(hi, value))
def get_benchmarks(segment: str) -> Dict[str, Any]:
"""Return benchmarks for the given segment, falling back to mid-market."""
return SEGMENT_BENCHMARKS.get(segment.lower(), SEGMENT_BENCHMARKS["mid-market"])
def get_thresholds(segment: str) -> Dict[str, Tuple[int, int]]:
"""Return classification thresholds for the given segment."""
return SEGMENT_THRESHOLDS.get(segment.lower(), SEGMENT_THRESHOLDS["mid-market"])
def classify(score: float, segment: str) -> str:
"""Return 'green', 'yellow', or 'red' classification."""
thresholds = get_thresholds(segment)
if score >= thresholds["green"][0]:
return "green"
elif score >= thresholds["yellow"][0]:
return "yellow"
return "red"
def trend_direction(current: float, previous: Optional[float]) -> str:
"""Return trend direction string."""
if previous is None:
return "no_data"
diff = current - previous
if diff > 5:
return "improving"
elif diff < -5:
return "declining"
return "stable"
# ---------------------------------------------------------------------------
# Dimension Scoring
# ---------------------------------------------------------------------------
def score_usage(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
"""Score the usage dimension (0-100).
Metrics: login_frequency, feature_adoption, dau_mau_ratio.
"""
recommendations: List[str] = []
login = clamp(safe_divide(data.get("login_frequency", 0), benchmarks["login_frequency_target"]) * 100)
adoption = clamp(safe_divide(data.get("feature_adoption", 0), benchmarks["feature_adoption_target"]) * 100)
dau_mau = clamp(safe_divide(data.get("dau_mau_ratio", 0), benchmarks["dau_mau_target"]) * 100)
score = round(login * 0.35 + adoption * 0.40 + dau_mau * 0.25, 1)
if login < 60:
recommendations.append("Login frequency below target -- schedule product engagement session")
if adoption < 50:
recommendations.append("Feature adoption is low -- recommend guided feature walkthrough")
if dau_mau < 50:
recommendations.append("DAU/MAU ratio indicates shallow usage -- investigate stickiness barriers")
return score, recommendations
def score_engagement(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
"""Score the engagement dimension (0-100).
Metrics: support_ticket_volume (inverse), meeting_attendance, nps_score, csat_score.
"""
recommendations: List[str] = []
# Lower ticket volume is better -- invert
ticket_vol = data.get("support_ticket_volume", 0)
ticket_score = clamp((1.0 - safe_divide(ticket_vol, benchmarks["support_ticket_volume_max"])) * 100)
attendance = clamp(safe_divide(data.get("meeting_attendance", 0), benchmarks["meeting_attendance_target"]) * 100)
nps_raw = data.get("nps_score", 5)
nps_score = clamp(safe_divide(nps_raw, benchmarks["nps_target"]) * 100)
csat_raw = data.get("csat_score", 3.0)
csat_score = clamp(safe_divide(csat_raw, benchmarks["csat_target"]) * 100)
score = round(ticket_score * 0.20 + attendance * 0.30 + nps_score * 0.25 + csat_score * 0.25, 1)
if attendance < 60:
recommendations.append("Meeting attendance is low -- re-evaluate meeting cadence and agenda value")
if nps_raw < 7:
recommendations.append("NPS below threshold -- conduct a feedback deep-dive with customer")
if csat_raw < 3.5:
recommendations.append("CSAT is critically low -- escalate to support leadership")
return score, recommendations
def score_support(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
"""Score the support dimension (0-100).
Metrics: open_tickets (inverse), escalation_rate (inverse), avg_resolution_hours (inverse).
"""
recommendations: List[str] = []
open_tix = data.get("open_tickets", 0)
open_score = clamp((1.0 - safe_divide(open_tix, benchmarks["open_tickets_max"])) * 100)
esc_rate = data.get("escalation_rate", 0)
esc_score = clamp((1.0 - safe_divide(esc_rate, benchmarks["escalation_rate_max"])) * 100)
res_hours = data.get("avg_resolution_hours", 0)
res_score = clamp((1.0 - safe_divide(res_hours, benchmarks["avg_resolution_hours_max"])) * 100)
score = round(open_score * 0.35 + esc_score * 0.35 + res_score * 0.30, 1)
if open_tix > benchmarks["open_tickets_max"] * 0.5:
recommendations.append("Open ticket count elevated -- prioritise ticket resolution")
if esc_rate > benchmarks["escalation_rate_max"] * 0.5:
recommendations.append("Escalation rate too high -- review support process and training")
if res_hours > benchmarks["avg_resolution_hours_max"] * 0.5:
recommendations.append("Resolution time exceeds SLA target -- engage support leadership")
return score, recommendations
def score_relationship(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
"""Score the relationship dimension (0-100).
Metrics: executive_sponsor_engagement, multi_threading_depth, renewal_sentiment.
"""
recommendations: List[str] = []
exec_score = clamp(safe_divide(data.get("executive_sponsor_engagement", 0), benchmarks["exec_sponsor_target"]) * 100)
threading = data.get("multi_threading_depth", 1)
thread_score = clamp(safe_divide(threading, benchmarks["multi_threading_target"]) * 100)
sentiment_str = data.get("renewal_sentiment", "unknown").lower()
sentiment_score = RENEWAL_SENTIMENT_SCORES.get(sentiment_str, 50.0)
score = round(exec_score * 0.35 + thread_score * 0.30 + sentiment_score * 0.35, 1)
if exec_score < 50:
recommendations.append("Executive sponsor engagement is weak -- schedule executive alignment meeting")
if threading < 2:
recommendations.append("Single-threaded relationship -- expand contacts across departments")
if sentiment_str == "negative":
recommendations.append("Renewal sentiment is negative -- initiate save plan immediately")
return score, recommendations
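# Worked example (illustrative): on-target executive engagement (100) and
# multi-threading (100) combined with "negative" renewal sentiment (20.0)
# still reach only 100 * 0.35 + 100 * 0.30 + 20.0 * 0.35 = 72.0, showing
# how heavily sentiment weighs on this dimension.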
# ---------------------------------------------------------------------------
# Main Scoring
# ---------------------------------------------------------------------------
def calculate_health_score(customer: Dict[str, Any]) -> Dict[str, Any]:
"""Calculate the overall health score for a single customer."""
segment = customer.get("segment", "mid-market").lower()
benchmarks = get_benchmarks(segment)
# Score each dimension
usage_score, usage_recs = score_usage(customer.get("usage", {}), benchmarks)
engagement_score, engagement_recs = score_engagement(customer.get("engagement", {}), benchmarks)
support_score, support_recs = score_support(customer.get("support", {}), benchmarks)
relationship_score, relationship_recs = score_relationship(customer.get("relationship", {}), benchmarks)
# Weighted overall
overall = round(
usage_score * DIMENSION_WEIGHTS["usage"]
+ engagement_score * DIMENSION_WEIGHTS["engagement"]
+ support_score * DIMENSION_WEIGHTS["support"]
+ relationship_score * DIMENSION_WEIGHTS["relationship"],
1,
)
classification = classify(overall, segment)
# Trend analysis
prev = customer.get("previous_period", {})
trends = {
"usage": trend_direction(usage_score, prev.get("usage_score")),
"engagement": trend_direction(engagement_score, prev.get("engagement_score")),
"support": trend_direction(support_score, prev.get("support_score")),
"relationship": trend_direction(relationship_score, prev.get("relationship_score")),
}
overall_prev = prev.get("overall_score")
trends["overall"] = trend_direction(overall, overall_prev)
# Combine recommendations
all_recs = usage_recs + engagement_recs + support_recs + relationship_recs
return {
"customer_id": customer.get("customer_id", "unknown"),
"name": customer.get("name", "Unknown"),
"segment": segment,
"arr": customer.get("arr", 0),
"overall_score": overall,
"classification": classification,
"dimensions": {
"usage": {"score": usage_score, "weight": "30%", "classification": classify(usage_score, segment)},
"engagement": {"score": engagement_score, "weight": "25%", "classification": classify(engagement_score, segment)},
"support": {"score": support_score, "weight": "20%", "classification": classify(support_score, segment)},
"relationship": {"score": relationship_score, "weight": "25%", "classification": classify(relationship_score, segment)},
},
"trends": trends,
"recommendations": all_recs,
}
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
CLASSIFICATION_LABELS = {
"green": "HEALTHY",
"yellow": "NEEDS ATTENTION",
"red": "AT RISK",
}
def format_text(results: List[Dict[str, Any]]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 72)
lines.append("CUSTOMER HEALTH SCORE REPORT")
lines.append("=" * 72)
lines.append("")
# Portfolio summary
total = len(results)
green_count = sum(1 for r in results if r["classification"] == "green")
yellow_count = sum(1 for r in results if r["classification"] == "yellow")
red_count = sum(1 for r in results if r["classification"] == "red")
avg_score = round(safe_divide(sum(r["overall_score"] for r in results), total), 1)
lines.append(f"Portfolio Summary: {total} customers")
lines.append(f" Average Health Score: {avg_score}/100")
lines.append(f" Green (Healthy): {green_count}")
lines.append(f" Yellow (Attention): {yellow_count}")
lines.append(f" Red (At Risk): {red_count}")
lines.append("")
for r in results:
label = CLASSIFICATION_LABELS.get(r["classification"], "UNKNOWN")
lines.append("-" * 72)
lines.append(f"Customer: {r['name']} ({r['customer_id']})")
lines.append(f"Segment: {r['segment'].title()} | ARR: ${r['arr']:,.0f}")
lines.append(f"Overall Score: {r['overall_score']}/100 [{label}]")
lines.append("")
lines.append(" Dimension Scores:")
for dim_name, dim_data in r["dimensions"].items():
dim_label = CLASSIFICATION_LABELS.get(dim_data["classification"], "")
lines.append(f" {dim_name.title():15s} {dim_data['score']:6.1f}/100 ({dim_data['weight']}) [{dim_label}]")
lines.append("")
lines.append(" Trends:")
for dim_name, direction in r["trends"].items():
arrow = {"improving": "+", "declining": "-", "stable": "=", "no_data": "?"}
lines.append(f" {dim_name.title():15s} {arrow.get(direction, '?')} {direction}")
if r["recommendations"]:
lines.append("")
lines.append(" Recommendations:")
for i, rec in enumerate(r["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
lines.append("=" * 72)
return "\n".join(lines)
def format_json(results: List[Dict[str, Any]]) -> str:
"""Format results as JSON."""
total = len(results)
output = {
"report": "customer_health_scores",
"summary": {
"total_customers": total,
"average_score": round(safe_divide(sum(r["overall_score"] for r in results), total), 1),
"green_count": sum(1 for r in results if r["classification"] == "green"),
"yellow_count": sum(1 for r in results if r["classification"] == "yellow"),
"red_count": sum(1 for r in results if r["classification"] == "red"),
},
"customers": results,
}
return json.dumps(output, indent=2)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(
description="Calculate multi-dimensional customer health scores with trend analysis."
)
parser.add_argument("input_file", help="Path to JSON file containing customer data")
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
customers = data.get("customers", [])
if not customers:
print("Error: No customer records found in input file.", file=sys.stderr)
sys.exit(1)
results = [calculate_health_score(c) for c in customers]
if args.output_format == "json":
print(format_json(results))
else:
print(format_text(results))
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,286 @@
---
name: revenue-operations
description: Analyzes pipeline coverage, tracks forecast accuracy with MAPE, and calculates GTM efficiency metrics for SaaS revenue optimization
---
# Revenue Operations
Pipeline analysis, forecast accuracy tracking, and GTM efficiency measurement for SaaS revenue teams.
## Table of Contents
- [Quick Start](#quick-start)
- [Tools Overview](#tools-overview)
- [Pipeline Analyzer](#1-pipeline-analyzer)
- [Forecast Accuracy Tracker](#2-forecast-accuracy-tracker)
- [GTM Efficiency Calculator](#3-gtm-efficiency-calculator)
- [Revenue Operations Workflows](#revenue-operations-workflows)
- [Weekly Pipeline Review](#weekly-pipeline-review)
- [Forecast Accuracy Review](#forecast-accuracy-review)
- [GTM Efficiency Audit](#gtm-efficiency-audit)
- [Quarterly Business Review](#quarterly-business-review)
- [Reference Documentation](#reference-documentation)
- [Templates](#templates)
---
## Quick Start
```bash
# Analyze pipeline health and coverage
python scripts/pipeline_analyzer.py --input assets/sample_pipeline_data.json --format text
# Track forecast accuracy over multiple periods
python scripts/forecast_accuracy_tracker.py assets/sample_forecast_data.json --format text
# Calculate GTM efficiency metrics
python scripts/gtm_efficiency_calculator.py assets/sample_gtm_data.json --format text
```
---
## Tools Overview
### 1. Pipeline Analyzer
Analyzes sales pipeline health including coverage ratios, stage conversion rates, deal velocity, aging risks, and concentration risks.
**Input:** JSON file with deals, quota, and stage configuration
**Output:** Coverage ratios, conversion rates, velocity metrics, aging flags, risk assessment
**Usage:**
```bash
# Text report (human-readable)
python scripts/pipeline_analyzer.py --input pipeline.json --format text
# JSON output (for dashboards/integrations)
python scripts/pipeline_analyzer.py --input pipeline.json --format json
```
**Key Metrics Calculated:**
- **Pipeline Coverage Ratio** -- Total pipeline value / quota target (healthy: 3-4x)
- **Stage Conversion Rates** -- Stage-to-stage progression rates
- **Sales Velocity** -- (Opportunities x Avg Deal Size x Win Rate) / Avg Sales Cycle
- **Deal Aging** -- Flags deals exceeding 2x average cycle time per stage
- **Concentration Risk** -- Warns when >40% of pipeline is in a single deal
- **Coverage Gap Analysis** -- Identifies quarters with insufficient pipeline
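The two headline formulas above can be sketched in a few lines of standard-library Python. This is illustrative only: field names mirror the input schema, but it is not the analyzer's actual implementation.

```python
# Hypothetical sketch of the coverage and velocity formulas above;
# not the pipeline_analyzer.py implementation itself.
def coverage_ratio(total_pipeline_value: float, quota: float) -> float:
    """Pipeline Coverage Ratio = open pipeline value / quota (healthy: 3-4x)."""
    return round(total_pipeline_value / quota, 2) if quota else 0.0

def sales_velocity(num_opps: int, avg_deal_size: float,
                   win_rate: float, avg_cycle_days: float) -> float:
    """Expected revenue per day: (opps x avg deal size x win rate) / cycle days."""
    if not avg_cycle_days:
        return 0.0
    return (num_opps * avg_deal_size * win_rate) / avg_cycle_days

print(coverage_ratio(1_105_000, 500_000))  # 2.21 -> below the 3x target
print(round(sales_velocity(17, 74_588.24, 0.118, 32.5), 1))
```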
**Input Schema:**
```json
{
"quota": 500000,
"stages": ["Discovery", "Qualification", "Proposal", "Negotiation", "Closed Won"],
"average_cycle_days": 45,
"deals": [
{
"id": "D001",
"name": "Acme Corp",
"stage": "Proposal",
"value": 85000,
"age_days": 32,
"close_date": "2025-03-15",
"owner": "rep_1"
}
]
}
```
### 2. Forecast Accuracy Tracker
Tracks forecast accuracy over time using MAPE, detects systematic bias, analyzes trends, and provides category-level breakdowns.
**Input:** JSON file with forecast periods and optional category breakdowns
**Output:** MAPE score, bias analysis, trends, category breakdown, accuracy rating
**Usage:**
```bash
# Track forecast accuracy
python scripts/forecast_accuracy_tracker.py forecast_data.json --format text
# JSON output for trend analysis
python scripts/forecast_accuracy_tracker.py forecast_data.json --format json
```
**Key Metrics Calculated:**
- **MAPE** -- Mean Absolute Percentage Error: mean(|actual - forecast| / |actual|) x 100
- **Forecast Bias** -- Over-forecasting (positive) vs under-forecasting (negative) tendency
- **Weighted Accuracy** -- MAPE weighted by deal value for materiality
- **Period Trends** -- Improving, stable, or declining accuracy over time
- **Category Breakdown** -- Accuracy by rep, product, segment, or any custom dimension
**Accuracy Ratings:**
| Rating | MAPE Range | Interpretation |
|--------|-----------|----------------|
| Excellent | <10% | Highly predictable, data-driven process |
| Good | 10-15% | Reliable forecasting with minor variance |
| Fair | 15-25% | Needs process improvement |
| Poor | >25% | Significant forecasting methodology gaps |
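As a rough sketch (not the tracker's actual code), the MAPE and bias formulas above reduce to a few lines over the `forecast_periods` records:

```python
# Hedged illustration of MAPE and forecast bias; input shape follows the
# "forecast_periods" schema, but this is not forecast_accuracy_tracker.py itself.
def mape(periods):
    """Mean Absolute Percentage Error: mean(|actual - forecast| / |actual|) x 100."""
    errors = [abs(p["actual"] - p["forecast"]) / abs(p["actual"])
              for p in periods if p["actual"]]
    return round(sum(errors) / len(errors) * 100, 1) if errors else 0.0

def bias_pct(periods):
    """Positive = over-forecasting on average; negative = under-forecasting."""
    diffs = [(p["forecast"] - p["actual"]) / abs(p["actual"])
             for p in periods if p["actual"]]
    return round(sum(diffs) / len(diffs) * 100, 1) if diffs else 0.0

history = [
    {"period": "2025-Q1", "forecast": 480_000, "actual": 520_000},
    {"period": "2025-Q2", "forecast": 550_000, "actual": 510_000},
]
print(mape(history))  # 7.8 -> "Excellent" (<10%)
```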
**Input Schema:**
```json
{
"forecast_periods": [
{"period": "2025-Q1", "forecast": 480000, "actual": 520000},
{"period": "2025-Q2", "forecast": 550000, "actual": 510000}
],
"category_breakdowns": {
"by_rep": [
{"category": "Rep A", "forecast": 200000, "actual": 210000},
{"category": "Rep B", "forecast": 280000, "actual": 310000}
]
}
}
```
### 3. GTM Efficiency Calculator
Calculates core SaaS GTM efficiency metrics with industry benchmarking, ratings, and improvement recommendations.
**Input:** JSON file with revenue, cost, and customer metrics
**Output:** Magic Number, LTV:CAC, CAC Payback, Burn Multiple, Rule of 40, NDR with ratings
**Usage:**
```bash
# Calculate all GTM efficiency metrics
python scripts/gtm_efficiency_calculator.py gtm_data.json --format text
# JSON output for dashboards
python scripts/gtm_efficiency_calculator.py gtm_data.json --format json
```
**Key Metrics Calculated:**
| Metric | Formula | Target |
|--------|---------|--------|
| Magic Number | Net New ARR / Prior Period S&M Spend | >0.75 |
| LTV:CAC | (ARPA x Gross Margin / Churn Rate) / CAC | >3:1 |
| CAC Payback | CAC / (Monthly ARPA x Gross Margin), in months | <18 months |
| Burn Multiple | Net Burn / Net New ARR | <2x |
| Rule of 40 | Revenue Growth % + FCF Margin % | >40% |
| Net Dollar Retention | (Begin ARR + Expansion - Contraction - Churn) / Begin ARR | >110% |
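For orientation, the formula table above translates directly into code. The snippet below is illustrative only (it hard-codes the sample-data figures and is not the calculator's actual implementation):

```python
# Direct translations of the formula table above, using the
# sample_gtm_data.json figures; not gtm_efficiency_calculator.py itself.
def magic_number(net_new_arr, prior_sm_spend):
    return round(net_new_arr / prior_sm_spend, 2)

def ltv_to_cac(arpa_annual, gross_margin, annual_churn_rate, cac):
    ltv = arpa_annual * gross_margin / annual_churn_rate  # simple churn-based LTV
    return round(ltv / cac, 1)

def cac_payback_months(cac, arpa_monthly, gross_margin):
    return round(cac / (arpa_monthly * gross_margin), 1)

def ndr(begin_arr, expansion, contraction, churn):
    return round((begin_arr + expansion - contraction - churn) / begin_arr * 100, 1)

print(magic_number(1_200_000, 1_800_000))          # 0.67 -> below 0.75 target
print(ltv_to_cac(2_500 * 12, 0.78, 0.08, 18_000))  # 16.2 -> well above 3:1
print(cac_payback_months(18_000, 2_500, 0.78))     # 9.2 months -> under 18
print(ndr(3_800_000, 600_000, 100_000, 300_000))   # 105.3 -> below 110% target
```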
**Input Schema:**
```json
{
"revenue": {
"current_arr": 5000000,
"prior_arr": 3800000,
"net_new_arr": 1200000,
"arpa_monthly": 2500,
"revenue_growth_pct": 31.6
},
"costs": {
"sales_marketing_spend": 1800000,
"cac": 18000,
"gross_margin_pct": 78,
"total_operating_expense": 6500000,
"net_burn": 1500000,
"fcf_margin_pct": 8.4
},
"customers": {
"beginning_arr": 3800000,
"expansion_arr": 600000,
"contraction_arr": 100000,
"churned_arr": 300000,
"annual_churn_rate_pct": 8
}
}
```
---
## Revenue Operations Workflows
### Weekly Pipeline Review
Use this workflow for your weekly pipeline inspection cadence.
1. **Generate pipeline report:**
```bash
python scripts/pipeline_analyzer.py --input current_pipeline.json --format text
```
2. **Review key indicators:**
- Pipeline coverage ratio (is it above 3x quota?)
- Deals aging beyond threshold (which deals need intervention?)
- Concentration risk (are we over-reliant on a few large deals?)
- Stage distribution (is there a healthy funnel shape?)
3. **Document using template:** Use `assets/pipeline_review_template.md`
4. **Action items:** Address aging deals, redistribute pipeline concentration, fill coverage gaps
### Forecast Accuracy Review
Use monthly or quarterly to evaluate and improve forecasting discipline.
1. **Generate accuracy report:**
```bash
python scripts/forecast_accuracy_tracker.py forecast_history.json --format text
```
2. **Analyze patterns:**
- Is MAPE trending down (improving)?
- Which reps or segments have the highest error rates?
- Is there systematic over- or under-forecasting?
3. **Document using template:** Use `assets/forecast_report_template.md`
4. **Improvement actions:** Coach high-bias reps, adjust methodology, improve data hygiene
### GTM Efficiency Audit
Use quarterly or during board prep to evaluate go-to-market efficiency.
1. **Calculate efficiency metrics:**
```bash
python scripts/gtm_efficiency_calculator.py quarterly_data.json --format text
```
2. **Benchmark against targets:**
- Magic Number signals GTM spend efficiency
- LTV:CAC validates unit economics
- CAC Payback shows capital efficiency
- Rule of 40 balances growth and profitability
3. **Document using template:** Use `assets/gtm_dashboard_template.md`
4. **Strategic decisions:** Adjust spend allocation, optimize channels, improve retention
### Quarterly Business Review
Combine all three tools for a comprehensive QBR analysis.
1. Run pipeline analyzer for forward-looking coverage
2. Run forecast tracker for backward-looking accuracy
3. Run GTM calculator for efficiency benchmarks
4. Cross-reference pipeline health with forecast accuracy
5. Align GTM efficiency metrics with growth targets
---
## Reference Documentation
| Reference | Description |
|-----------|-------------|
| [RevOps Metrics Guide](references/revops-metrics-guide.md) | Complete metrics hierarchy, definitions, formulas, and interpretation |
| [Pipeline Management Framework](references/pipeline-management-framework.md) | Pipeline best practices, stage definitions, conversion benchmarks |
| [GTM Efficiency Benchmarks](references/gtm-efficiency-benchmarks.md) | SaaS benchmarks by stage, industry standards, improvement strategies |
---
## Templates
| Template | Use Case |
|----------|----------|
| [Pipeline Review Template](assets/pipeline_review_template.md) | Weekly/monthly pipeline inspection documentation |
| [Forecast Report Template](assets/forecast_report_template.md) | Forecast accuracy reporting and trend analysis |
| [GTM Dashboard Template](assets/gtm_dashboard_template.md) | GTM efficiency dashboard for leadership review |
| [Sample Pipeline Data](assets/sample_pipeline_data.json) | Example input for pipeline_analyzer.py |
| [Expected Output](assets/expected_output.json) | Reference output from pipeline_analyzer.py |

View File

@@ -0,0 +1,117 @@
{
"coverage": {
"total_pipeline_value": 1105000,
"quota": 500000,
"coverage_ratio": 2.21,
"rating": "At Risk",
"target": "3.0x - 4.0x"
},
"stage_conversions": [
{
"from_stage": "Discovery",
"to_stage": "Qualification",
"from_count": 17,
"to_count": 12,
"conversion_rate_pct": 70.6
},
{
"from_stage": "Qualification",
"to_stage": "Proposal",
"from_count": 12,
"to_count": 9,
"conversion_rate_pct": 75.0
},
{
"from_stage": "Proposal",
"to_stage": "Negotiation",
"from_count": 9,
"to_count": 5,
"conversion_rate_pct": 55.6
},
{
"from_stage": "Negotiation",
"to_stage": "Closed Won",
"from_count": 5,
"to_count": 2,
"conversion_rate_pct": 40.0
}
],
"velocity": {
"num_opportunities": 17,
"avg_deal_size": 74588.24,
"win_rate_pct": 11.8,
"avg_cycle_days": 32.5,
"velocity_per_day": 4594.2,
"velocity_per_month": 137826.09
},
"aging": {
"global_aging_threshold_days": 90,
"stage_thresholds": {
"Discovery": 90,
"Qualification": 78,
"Proposal": 67,
"Negotiation": 56
},
"total_open_deals": 15,
"healthy_deals": 13,
"at_risk_deals": 2,
"aging_deals": [
{
"id": "D011",
"name": "Vertex Solutions",
"stage": "Proposal",
"age_days": 95,
"threshold_days": 67,
"days_over": 28,
"value": 110000
},
{
"id": "D014",
"name": "Horizon Telecom",
"stage": "Negotiation",
"age_days": 60,
"threshold_days": 56,
"days_over": 4,
"value": 250000
}
]
},
"risk": {
"overall_risk": "MEDIUM",
"risk_factors_count": 3,
"concentration_risks": [],
"has_concentration_risk": false,
"stage_distribution": {
"Discovery": {
"count": 5,
"value": 194000,
"pct_of_pipeline": 17.6
},
"Qualification": {
"count": 3,
"value": 150000,
"pct_of_pipeline": 13.6
},
"Proposal": {
"count": 4,
"value": 333000,
"pct_of_pipeline": 30.1
},
"Negotiation": {
"count": 3,
"value": 428000,
"pct_of_pipeline": 38.7
}
},
"empty_stages": [],
"coverage_gaps": [
{
"quarter": "2025-Q2",
"pipeline_value": 344000,
"quarterly_target": 125000.0,
"coverage_ratio": 2.75,
"gap": "Below 3x target"
}
]
}
}

View File

@@ -0,0 +1,149 @@
# Forecast Accuracy Report - [Period]
## Report Details
- **Prepared By:** [Name]
- **Report Date:** [YYYY-MM-DD]
- **Period Analyzed:** [Start Period] to [End Period]
- **Periods Covered:** [N] periods
---
## Executive Summary
| Metric | Value | Rating | Trend |
|--------|-------|--------|-------|
| MAPE | _% | | |
| Weighted MAPE | _% | | |
| Forecast Bias | _% | | |
| Bias Direction | | | |
**Accuracy Rating:**
- Excellent (<10%) / Good (10-15%) / Fair (15-25%) / Poor (>25%)
**Key Finding:** [1-2 sentence summary of forecast accuracy status]
---
## Period-by-Period Analysis
| Period | Forecast | Actual | Variance | Error % | Bias |
|--------|----------|--------|----------|---------|------|
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
---
## Bias Analysis
### Overall Bias
- **Direction:** [Over-forecasting / Under-forecasting / Balanced]
- **Bias Magnitude:** _%
- **Over-forecast Periods:** _ of _
- **Under-forecast Periods:** _ of _
- **Bias Ratio:** _ (1.0 = always over, 0.0 = always under, 0.5 = balanced)
### Interpretation
[What does the bias pattern tell us about our forecasting process? Is it systematic or random?]
### Root Cause
[Identify the primary drivers of bias: optimistic deal assessment, poor stage qualification, sandbagging, late-arriving deals, etc.]
---
## Trend Analysis
### Accuracy Trend
- **Direction:** [Improving / Stable / Declining]
- **Early Period MAPE:** _%
- **Recent Period MAPE:** _%
- **MAPE Change:** _% (positive = worsening, negative = improving)
### Trend Chart (Text)
```
Period Error% Trend
Q1 __% ████████
Q2 __% ██████████
Q3 __% ██████
Q4 __% ████████████
```
---
## Category Breakdown
### By Rep
| Rep | Forecast | Actual | Error % | Bias | Rating |
|-----|----------|--------|---------|------|--------|
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
**Overall Rep MAPE:** _%
### By Segment
| Segment | Forecast | Actual | Error % | Bias | Rating |
|---------|----------|--------|---------|------|--------|
| Enterprise | $_ | $_ | _% | | |
| Mid-Market | $_ | $_ | _% | | |
| SMB | $_ | $_ | _% | | |
**Overall Segment MAPE:** _%
### By Product (if applicable)
| Product | Forecast | Actual | Error % | Bias | Rating |
|---------|----------|--------|---------|------|--------|
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
---
## Recommendations
### Immediate Actions (This Quarter)
1. **[Action]** -- [Why and expected impact]
2. **[Action]** -- [Why and expected impact]
3. **[Action]** -- [Why and expected impact]
### Process Improvements (Next Quarter)
1. **[Improvement]** -- [Implementation plan]
2. **[Improvement]** -- [Implementation plan]
### Coaching Focus Areas
| Rep/Team | Issue | Coaching Action | Target |
|----------|-------|-----------------|--------|
| | | | |
| | | | |
---
## Forecast Methodology Notes
### Current Methodology
[Describe the current forecasting methodology: weighted pipeline, commit/upside categories, AI-assisted, etc.]
### Methodology Changes This Period
[Any changes to the forecasting process or methodology during the reporting period]
### Data Quality Issues
[Note any data quality issues that may affect accuracy: missing close dates, inconsistent stage definitions, CRM hygiene gaps]
---
## Next Steps
| # | Action | Owner | Due Date |
|---|--------|-------|----------|
| 1 | | | |
| 2 | | | |
| 3 | | | |

View File

@@ -0,0 +1,215 @@
# GTM Efficiency Dashboard - [Quarter/Period]
## Dashboard Details
- **Prepared By:** [Name]
- **Report Date:** [YYYY-MM-DD]
- **Period:** [Quarter or Date Range]
- **Company Stage:** [Seed / Series A / Series B / Series C+ / Growth]
---
## Metrics At A Glance
| Metric | Value | Rating | Target | Trend | vs. Last Period |
|--------|-------|--------|--------|-------|-----------------|
| Magic Number | _ | | >0.75 | | |
| LTV:CAC | _:1 | | >3:1 | | |
| CAC Payback | _ mo | | <18 mo | | |
| Burn Multiple | _x | | <2x | | |
| Rule of 40 | _% | | >40% | | |
| NDR | _% | | >110% | | |
**Rating Legend:** Green = Healthy | Yellow = Monitor | Red = Action Required
**Overall GTM Health:** [Strong / Healthy / Needs Attention / Critical]
---
## Detailed Metric Analysis
### Magic Number
| Component | Value |
|-----------|-------|
| Net New ARR | $_ |
| Prior Period S&M Spend | $_ |
| **Magic Number** | **_** |
- **Rating:** [Green / Yellow / Red]
- **Percentile:** [Top 10% / Top 25% / Median / Below Median]
- **Trend:** [Improving / Stable / Declining]
- **Interpretation:** [What does this metric tell us about GTM spend efficiency?]
### LTV:CAC Ratio
| Component | Value |
|-----------|-------|
| ARPA (Monthly) | $_ |
| ARPA (Annual) | $_ |
| Gross Margin | _% |
| Annual Churn Rate | _% |
| **Customer LTV** | **$_** |
| Customer Acquisition Cost | $_ |
| **LTV:CAC Ratio** | **_:1** |
- **Rating:** [Green / Yellow / Red]
- **Percentile:** [Top 10% / Top 25% / Median / Below Median]
- **Trend:** [Improving / Stable / Declining]
- **Interpretation:** [Are unit economics sustainable?]
### CAC Payback Period
| Component | Value |
|-----------|-------|
| CAC | $_ |
| Monthly Gross Margin Contribution | $_ |
| **CAC Payback** | **_ months** |
- **Rating:** [Green / Yellow / Red]
- **Percentile:** [Top 10% / Top 25% / Median / Below Median]
- **Trend:** [Improving / Stable / Declining]
- **Interpretation:** [How quickly are we recovering acquisition costs?]
### Burn Multiple
| Component | Value |
|-----------|-------|
| Net Burn | $_ |
| Net New ARR | $_ |
| **Burn Multiple** | **_x** |
- **Rating:** [Green / Yellow / Red]
- **Percentile:** [Top 10% / Top 25% / Median / Below Median]
- **Trend:** [Improving / Stable / Declining]
- **Interpretation:** [Is growth capital-efficient?]
### Rule of 40
| Component | Value |
|-----------|-------|
| Revenue Growth Rate | _% |
| FCF Margin | _% |
| **Rule of 40 Score** | **_%** |
- **Rating:** [Green / Yellow / Red]
- **Percentile:** [Top 10% / Top 25% / Median / Below Median]
- **Trend:** [Improving / Stable / Declining]
- **Interpretation:** [Is the growth-profitability balance healthy?]
### Net Dollar Retention
| Component | Value |
|-----------|-------|
| Beginning ARR | $_ |
| Expansion ARR | +$_ |
| Contraction ARR | -$_ |
| Churned ARR | -$_ |
| Ending ARR | $_ |
| **NDR** | **_%** |
- **Rating:** [Green / Yellow / Red]
- **Percentile:** [Top 10% / Top 25% / Median / Below Median]
- **Trend:** [Improving / Stable / Declining]
- **Interpretation:** [Are we growing revenue from the existing customer base?]
---
## Quarterly Trend
| Metric | Q-3 | Q-2 | Q-1 | Current | Direction |
|--------|-----|-----|-----|---------|-----------|
| Magic Number | _ | _ | _ | _ | |
| LTV:CAC | _:1 | _:1 | _:1 | _:1 | |
| CAC Payback | _ mo | _ mo | _ mo | _ mo | |
| Burn Multiple | _x | _x | _x | _x | |
| Rule of 40 | _% | _% | _% | _% | |
| NDR | _% | _% | _% | _% | |
---
## Benchmark Comparison
| Metric | Our Value | Stage Median | Top Quartile | Gap to Top Quartile |
|--------|-----------|-------------|--------------|---------------------|
| Magic Number | _ | _ | _ | _ |
| LTV:CAC | _:1 | _:1 | _:1 | _ |
| CAC Payback | _ mo | _ mo | _ mo | _ mo |
| Burn Multiple | _x | _x | _x | _ |
| Rule of 40 | _% | _% | _% | _% |
| NDR | _% | _% | _% | _% |
---
## Revenue Composition
### ARR Bridge
```
Beginning ARR: $____________
+ New Logo ARR: $____________
+ Expansion ARR: $____________
- Contraction ARR: $____________
- Churned ARR: $____________
= Ending ARR: $____________
Net New ARR: $____________
Growth Rate: ____________%
```
### Cost Structure
```
S&M Spend: $____________ (___% of revenue)
R&D Spend: $____________ (___% of revenue)
G&A Spend: $____________ (___% of revenue)
Total OpEx: $____________
Net Burn: $____________
Gross Margin: ____________%
```
---
## Strategic Recommendations
### Top 3 Priorities
1. **[Priority]**
- Current state: [Where we are]
- Target: [Where we need to be]
- Action plan: [How to get there]
- Expected impact: [Metric improvement]
- Timeline: [When]
2. **[Priority]**
- Current state:
- Target:
- Action plan:
- Expected impact:
- Timeline:
3. **[Priority]**
- Current state:
- Target:
- Action plan:
- Expected impact:
- Timeline:
### Investment Recommendations
| Area | Current Spend | Recommended | Rationale |
|------|--------------|-------------|-----------|
| | $_ | $_ | |
| | $_ | $_ | |
| | $_ | $_ | |
---
## Next Steps
| # | Action | Owner | Due Date | Success Metric |
|---|--------|-------|----------|---------------|
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |

View File

@@ -0,0 +1,138 @@
# Pipeline Review - [Date]
## Review Period
- **Review Type:** Weekly / Monthly (select one)
- **Prepared By:** [Name]
- **Review Date:** [YYYY-MM-DD]
- **Period Covered:** [Start Date] to [End Date]
---
## Executive Summary
| Metric | Current | Last Period | Target | Status |
|--------|---------|-------------|--------|--------|
| Pipeline Coverage | _x | _x | 3-4x | |
| Total Pipeline Value | $_ | $_ | $_ | |
| Net Pipeline Change | $_ | $_ | >$0 | |
| Deals in Pipeline | _ | _ | _ | |
| Avg Deal Size | $_ | $_ | $_ | |
| Sales Velocity ($/mo) | $_ | $_ | $_ | |
**Overall Assessment:** [1-2 sentence summary of pipeline health]
---
## Coverage Analysis
### By Quarter
| Quarter | Pipeline | Target | Coverage | Status |
|---------|----------|--------|----------|--------|
| Current Quarter | $_ | $_ | _x | |
| Next Quarter | $_ | $_ | _x | |
| Q+2 | $_ | $_ | _x | |
### By Segment
| Segment | Pipeline | Target | Coverage | Notes |
|---------|----------|--------|----------|-------|
| Enterprise | $_ | $_ | _x | |
| Mid-Market | $_ | $_ | _x | |
| SMB | $_ | $_ | _x | |
---
## Stage Distribution
| Stage | # Deals | Value | % of Pipeline | Conversion Rate |
|-------|---------|-------|---------------|-----------------|
| Discovery | _ | $_ | _% | _% |
| Qualification | _ | $_ | _% | _% |
| Proposal | _ | $_ | _% | _% |
| Negotiation | _ | $_ | _% | _% |
**Funnel Health:** [Healthy / Top-heavy / Bottom-heavy / Gaps identified]
---
## Top Deals Review (Stage 3+)
| Deal | Stage | Value | Age | Close Date | Risk | Next Step |
|------|-------|-------|-----|------------|------|-----------|
| | | $_ | _d | | | |
| | | $_ | _d | | | |
| | | $_ | _d | | | |
| | | $_ | _d | | | |
| | | $_ | _d | | | |
---
## Risk Assessment
### Concentration Risk
- **Largest deal as % of pipeline:** _%
- **Top 3 deals as % of pipeline:** _%
- **Risk Level:** [Low / Medium / High]
- **Mitigation:** [Actions to diversify]
### Aging Deals
| Deal | Stage | Age | Threshold | Days Over | Action Required |
|------|-------|-----|-----------|-----------|-----------------|
| | | _d | _d | +_d | |
| | | _d | _d | +_d | |
### Deals Pushed from Last Period
| Deal | Original Close | New Close | Times Pushed | Reason |
|------|---------------|-----------|-------------|--------|
| | | | | |
| | | | | |
---
## Pipeline Movement
### Created This Period
| Deal | Source | Value | Stage | Expected Close |
|------|--------|-------|-------|---------------|
| | | $_ | | |
| | | $_ | | |
**Total Created:** $_
### Advanced This Period
| Deal | From Stage | To Stage | Value |
|------|-----------|----------|-------|
| | | | $_ |
| | | | $_ |
### Closed Won This Period
| Deal | Value | Cycle Days | Source |
|------|-------|-----------|--------|
| | $_ | _d | |
| | $_ | _d | |
**Total Closed Won:** $_
### Closed Lost This Period
| Deal | Value | Stage Lost | Loss Reason |
|------|-------|-----------|-------------|
| | $_ | | |
| | $_ | | |
**Total Closed Lost:** $_
---
## Action Items
| # | Action | Owner | Due Date | Priority |
|---|--------|-------|----------|----------|
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
---
## Notes
[Additional context, observations, or discussion points for the review meeting]

View File

@@ -0,0 +1,23 @@
{
"forecast_periods": [
{"period": "2024-Q1", "forecast": 420000, "actual": 445000},
{"period": "2024-Q2", "forecast": 480000, "actual": 460000},
{"period": "2024-Q3", "forecast": 510000, "actual": 525000},
{"period": "2024-Q4", "forecast": 550000, "actual": 510000},
{"period": "2025-Q1", "forecast": 520000, "actual": 540000},
{"period": "2025-Q2", "forecast": 580000, "actual": 560000}
],
"category_breakdowns": {
"by_rep": [
{"category": "Sarah Chen", "forecast": 210000, "actual": 225000},
{"category": "Marcus Johnson", "forecast": 185000, "actual": 160000},
{"category": "Priya Patel", "forecast": 125000, "actual": 135000},
{"category": "Alex Rivera", "forecast": 60000, "actual": 40000}
],
"by_segment": [
{"category": "Enterprise", "forecast": 320000, "actual": 310000},
{"category": "Mid-Market", "forecast": 180000, "actual": 175000},
{"category": "SMB", "forecast": 80000, "actual": 75000}
]
}
}

View File

@@ -0,0 +1,24 @@
{
"revenue": {
"current_arr": 5000000,
"prior_arr": 3800000,
"net_new_arr": 1200000,
"arpa_monthly": 2500,
"revenue_growth_pct": 31.6
},
"costs": {
"sales_marketing_spend": 1800000,
"cac": 18000,
"gross_margin_pct": 78,
"total_operating_expense": 6500000,
"net_burn": 1500000,
"fcf_margin_pct": 8.4
},
"customers": {
"beginning_arr": 3800000,
"expansion_arr": 600000,
"contraction_arr": 100000,
"churned_arr": 300000,
"annual_churn_rate_pct": 8
}
}

View File

@@ -0,0 +1,160 @@
{
"quota": 500000,
"stages": ["Discovery", "Qualification", "Proposal", "Negotiation", "Closed Won"],
"average_cycle_days": 45,
"deals": [
{
"id": "D001",
"name": "Acme Corp",
"stage": "Proposal",
"value": 85000,
"age_days": 32,
"close_date": "2025-03-15",
"owner": "rep_1"
},
{
"id": "D002",
"name": "TechFlow Inc",
"stage": "Discovery",
"value": 42000,
"age_days": 8,
"close_date": "2025-04-30",
"owner": "rep_2"
},
{
"id": "D003",
"name": "GlobalData Systems",
"stage": "Negotiation",
"value": 120000,
"age_days": 55,
"close_date": "2025-02-28",
"owner": "rep_1"
},
{
"id": "D004",
"name": "Pinnacle Software",
"stage": "Qualification",
"value": 35000,
"age_days": 18,
"close_date": "2025-04-15",
"owner": "rep_3"
},
{
"id": "D005",
"name": "Meridian Health",
"stage": "Proposal",
"value": 95000,
"age_days": 40,
"close_date": "2025-03-20",
"owner": "rep_2"
},
{
"id": "D006",
"name": "CloudVault",
"stage": "Discovery",
"value": 28000,
"age_days": 5,
"close_date": "2025-05-15",
"owner": "rep_1"
},
{
"id": "D007",
"name": "Nexus Financial",
"stage": "Closed Won",
"value": 72000,
"age_days": 38,
"close_date": "2025-01-31",
"owner": "rep_3"
},
{
"id": "D008",
"name": "Urban Analytics",
"stage": "Negotiation",
"value": 58000,
"age_days": 42,
"close_date": "2025-03-05",
"owner": "rep_2"
},
{
"id": "D009",
"name": "Redwood Logistics",
"stage": "Discovery",
"value": 31000,
"age_days": 12,
"close_date": "2025-05-01",
"owner": "rep_3"
},
{
"id": "D010",
"name": "Summit Enterprises",
"stage": "Qualification",
"value": 48000,
"age_days": 22,
"close_date": "2025-04-10",
"owner": "rep_1"
},
{
"id": "D011",
"name": "Vertex Solutions",
"stage": "Proposal",
"value": 110000,
"age_days": 95,
"close_date": "2025-03-01",
"owner": "rep_2"
},
{
"id": "D012",
"name": "DataBridge AI",
"stage": "Discovery",
"value": 55000,
"age_days": 3,
"close_date": "2025-06-15",
"owner": "rep_1"
},
{
"id": "D013",
"name": "Atlas Manufacturing",
"stage": "Qualification",
"value": 67000,
"age_days": 28,
"close_date": "2025-04-20",
"owner": "rep_3"
},
{
"id": "D014",
"name": "Horizon Telecom",
"stage": "Negotiation",
"value": 250000,
"age_days": 60,
"close_date": "2025-03-10",
"owner": "rep_1"
},
{
"id": "D015",
"name": "BlueShift Labs",
"stage": "Proposal",
"value": 43000,
"age_days": 35,
"close_date": "2025-03-25",
"owner": "rep_3"
},
{
"id": "D016",
"name": "Crestview Partners",
"stage": "Discovery",
"value": 38000,
"age_days": 15,
"close_date": "2025-05-20",
"owner": "rep_2"
},
{
"id": "D017",
"name": "Ironclad Security",
"stage": "Closed Won",
"value": 91000,
"age_days": 44,
"close_date": "2025-02-10",
"owner": "rep_1"
}
]
}

View File

@@ -0,0 +1,257 @@
# GTM Efficiency Benchmarks
SaaS benchmarks by funding stage, industry standards, and strategies for improving go-to-market efficiency.
---
## Benchmarks by Funding Stage
### Seed Stage ($0-$2M ARR)
| Metric | Red | Yellow | Green | Elite |
|--------|-----|--------|-------|-------|
| Magic Number | <0.3 | 0.3-0.5 | >0.5 | >0.8 |
| LTV:CAC | <1.5:1 | 1.5-2.5:1 | >2.5:1 | >4:1 |
| CAC Payback | >30 mo | 24-30 mo | <24 mo | <15 mo |
| Burn Multiple | >5x | 3-5x | <3x | <2x |
| Rule of 40 | <0% | 0-20% | >20% | >40% |
| NDR | <90% | 90-100% | >100% | >110% |
**Context:** At seed stage, efficiency metrics are naturally less stable due to small sample sizes. Focus on directional improvement rather than absolute numbers. Burn multiple is the most critical metric -- investors want to see capital-efficient growth.
### Series A ($2M-$10M ARR)
| Metric | Red | Yellow | Green | Elite |
|--------|-----|--------|-------|-------|
| Magic Number | <0.4 | 0.4-0.6 | >0.6 | >0.9 |
| LTV:CAC | <2:1 | 2-3:1 | >3:1 | >5:1 |
| CAC Payback | >24 mo | 18-24 mo | <18 mo | <12 mo |
| Burn Multiple | >4x | 2.5-4x | <2.5x | <1.5x |
| Rule of 40 | <10% | 10-30% | >30% | >50% |
| NDR | <95% | 95-105% | >105% | >115% |
**Context:** Series A is where unit economics must prove out. LTV:CAC >3:1 validates product-market fit in the revenue model. Investors will scrutinize CAC payback to understand capital requirements.
### Series B ($10M-$50M ARR)
| Metric | Red | Yellow | Green | Elite |
|--------|-----|--------|-------|-------|
| Magic Number | <0.5 | 0.5-0.75 | >0.75 | >1.0 |
| LTV:CAC | <2.5:1 | 2.5-3.5:1 | >3.5:1 | >5:1 |
| CAC Payback | >22 mo | 15-22 mo | <15 mo | <10 mo |
| Burn Multiple | >3x | 2-3x | <2x | <1.5x |
| Rule of 40 | <20% | 20-35% | >35% | >50% |
| NDR | <100% | 100-110% | >110% | >120% |
**Context:** At Series B, the GTM machine should be scaling predictably. Magic Number >0.75 demonstrates that adding GTM spend produces proportional returns. NDR >110% proves land-and-expand motion works.
### Series C+ ($50M-$200M ARR)
| Metric | Red | Yellow | Green | Elite |
|--------|-----|--------|-------|-------|
| Magic Number | <0.5 | 0.5-0.75 | >0.75 | >1.0 |
| LTV:CAC | <3:1 | 3-4:1 | >4:1 | >6:1 |
| CAC Payback | >20 mo | 14-20 mo | <14 mo | <10 mo |
| Burn Multiple | >2.5x | 1.5-2.5x | <1.5x | <1x |
| Rule of 40 | <25% | 25-40% | >40% | >60% |
| NDR | <105% | 105-115% | >115% | >130% |
**Context:** Growth efficiency and path to profitability become paramount. The Rule of 40 is the primary board-level metric. Companies approaching IPO should target Rule of 40 >40% consistently.
### Growth / Pre-IPO ($200M+ ARR)
| Metric | Red | Yellow | Green | Elite |
|--------|-----|--------|-------|-------|
| Magic Number | <0.6 | 0.6-0.8 | >0.8 | >1.0 |
| LTV:CAC | <3:1 | 3-5:1 | >5:1 | >7:1 |
| CAC Payback | >18 mo | 12-18 mo | <12 mo | <8 mo |
| Burn Multiple | >2x | 1-2x | <1x | <0.5x |
| Rule of 40 | <30% | 30-45% | >45% | >65% |
| NDR | <110% | 110-120% | >120% | >140% |
**Context:** Pre-IPO and public companies are measured on absolute efficiency. FCF margin matters as much as growth rate. Best-in-class companies demonstrate both growth and profitability.
---
## Industry Vertical Benchmarks
### Horizontal SaaS (CRM, HR, Finance, Marketing)
| Metric | Median | Top Quartile |
|--------|--------|-------------|
| Magic Number | 0.65 | 0.90+ |
| LTV:CAC | 3.2:1 | 5.5:1+ |
| CAC Payback | 17 months | 11 months |
| Gross Margin | 72% | 80%+ |
| NDR | 108% | 120%+ |
| Win Rate | 22% | 32%+ |
### Vertical SaaS (Healthcare, FinTech, PropTech)
| Metric | Median | Top Quartile |
|--------|--------|-------------|
| Magic Number | 0.55 | 0.80+ |
| LTV:CAC | 3.8:1 | 6.0:1+ |
| CAC Payback | 15 months | 10 months |
| Gross Margin | 68% | 76%+ |
| NDR | 112% | 125%+ |
| Win Rate | 25% | 38%+ |
**Note:** Vertical SaaS often has higher NDR (deeper embedding) and higher win rates (less competition) but lower gross margins (more services).
### Infrastructure / DevTools
| Metric | Median | Top Quartile |
|--------|--------|-------------|
| Magic Number | 0.70 | 1.0+ |
| LTV:CAC | 4.0:1 | 7.0:1+ |
| CAC Payback | 14 months | 9 months |
| Gross Margin | 75% | 85%+ |
| NDR | 118% | 140%+ |
| Win Rate | 18% | 28%+ |
**Note:** Usage-based pricing in infrastructure drives exceptional NDR but more volatile revenue patterns.
### Security / Compliance
| Metric | Median | Top Quartile |
|--------|--------|-------------|
| Magic Number | 0.60 | 0.85+ |
| LTV:CAC | 3.5:1 | 5.8:1+ |
| CAC Payback | 16 months | 11 months |
| Gross Margin | 74% | 82%+ |
| NDR | 115% | 130%+ |
| Win Rate | 20% | 30%+ |
---
## Efficiency Improvement Strategies
### Improving Magic Number
**Current: <0.5 (Red) -- Target: >0.75 (Green)**
1. **Channel ROI analysis:** Audit spend by channel (paid, outbound, events, content). Cut bottom 20% performing channels and reallocate.
2. **Sales productivity:** Measure revenue per rep. Identify bottom-quartile performers for coaching or role change. Top performers should be studied and their practices systematized.
3. **Funnel efficiency:** Improve MQL-to-SQL conversion through better lead scoring. Fewer, higher-quality leads reduce wasted sales capacity.
4. **Ramp time reduction:** Accelerate new rep ramp from average 6 months to 4 months through structured onboarding, shadowing, and certification.
5. **Territory optimization:** Ensure territories are balanced by opportunity (not just geography). Over-served territories waste capacity.
### Improving LTV:CAC
**Current: <3:1 (Yellow) -- Target: >5:1 (Green)**
**Increase LTV:**
- Reduce churn through proactive health scoring and intervention
- Build expansion playbooks for cross-sell and upsell
- Increase pricing through value-based packaging
- Improve product stickiness with integrations and workflows
**Decrease CAC:**
- Invest in organic channels (content, SEO, community)
- Implement product-led growth (PLG) motion
- Optimize paid spend through better targeting and attribution
- Leverage customer referrals and case studies
### Improving CAC Payback
**Current: >18 months (Yellow) -- Target: <12 months (Green)**
1. **Increase ARPA:** Package features to drive higher initial contract values. Annual prepay discounts accelerate cash collection.
2. **Improve gross margin:** Reduce COGS through automation, self-serve onboarding, and tech-touch customer success.
3. **Reduce CAC:** Same strategies as LTV:CAC improvement on the CAC side.
4. **Contract structure:** Annual or multi-year contracts with upfront payment reduce effective payback period.
### Improving Burn Multiple
**Current: >2x (Yellow) -- Target: <1.5x (Green)**
1. **Revenue efficiency:** Focus on the highest ROI growth activities. Not all ARR is equal -- expansion ARR is typically much cheaper than new logo ARR.
2. **Operational efficiency:** Automate repeatable processes (billing, provisioning, basic support). Reduce headcount growth rate relative to revenue growth rate.
3. **Spending discipline:** Implement zero-based budgeting for non-essential spend. Every dollar of burn should connect to revenue generation.
4. **Revenue acceleration:** Sometimes the best way to improve burn multiple is not cutting costs but accelerating revenue. If you can accelerate revenue growth by 20% with 5% more spend, the burn multiple improves.
### Improving NDR
**Current: 100-110% (Yellow) -- Target: >120% (Green)**
1. **Expansion playbooks:** Define trigger events for upsell (usage thresholds, team growth, feature requests). Arm CSMs with expansion talk tracks.
2. **Usage-based pricing:** Align pricing with customer value creation. As customers use more, they pay more -- naturally drives expansion.
3. **Product-led expansion:** Build in-product prompts for upgrades. Feature gating that shows value of next tier.
4. **Reduce contraction:** Identify reasons for downgrades. Often related to poor adoption of features customers are paying for.
5. **Reduce churn:** Implement early warning system (health scores). Intervene before renewal, not at renewal.
6. **Multi-product strategy:** Cross-sell additional products to existing customers. Second product adoption can reduce churn by 30-50%.
---
## Metric Relationships and Trade-offs
### Growth vs. Efficiency
The fundamental tension in SaaS is between growth rate and capital efficiency:
```
High Growth + High Burn = Blitzscaling (risky but fast)
High Growth + Low Burn = Efficient Growth (ideal)
Low Growth + Low Burn = Cash Cow (sustainable but limited)
Low Growth + High Burn = Trouble (restructure immediately)
```
**Rule of 40** captures this balance: growth rate + margin should exceed 40%.
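The score and the quadrant above can be sketched in a few lines. This is a minimal illustration; the 40% growth cutoff used to separate "high" from "low" growth in the classifier is an assumption, not part of the framework itself:

```python
def rule_of_40(growth_rate_pct: float, fcf_margin_pct: float) -> float:
    """Rule of 40 score: revenue growth rate plus FCF margin, in points."""
    return growth_rate_pct + fcf_margin_pct

def growth_burn_quadrant(growth_rate_pct: float, high_burn: bool) -> str:
    """Label the growth/burn quadrant; 40% growth is an assumed cutoff."""
    if growth_rate_pct >= 40:
        return "Blitzscaling" if high_burn else "Efficient Growth"
    return "Trouble" if high_burn else "Cash Cow"

print(rule_of_40(30, 10))               # 40 -- meets the Rule of 40
print(growth_burn_quadrant(50, False))  # Efficient Growth
```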
### CAC Payback vs. Growth Rate
Shorter CAC payback enables faster reinvestment in growth. A company with 12-month payback can reinvest recovered CAC into new customer acquisition sooner than one with 24-month payback, creating a compounding advantage.
### NDR vs. New Logo Acquisition
High NDR reduces dependence on new logo acquisition for growth:
- NDR of 120% means 20% growth from existing base before any new customers
- NDR of 100% means all growth must come from new customers (expensive)
- NDR of 80% means the company is shrinking and must acquire even more new customers just to replace lost revenue
**Strategic implication:** Invest in NDR improvement before scaling new logo acquisition. A dollar spent improving NDR typically returns more than a dollar spent acquiring new customers.
---
## Benchmark Data Sources
The benchmarks in this guide are compiled from:
1. **Bessemer Cloud Index** -- Public cloud company financial data
2. **KeyBanc SaaS Survey** -- Annual survey of private SaaS companies
3. **OpenView SaaS Benchmarks** -- Product-led growth focused benchmarks
4. **Iconiq Growth Analytics** -- Private company growth and efficiency data
5. **SaaStr Annual Surveys** -- Community-sourced SaaS metrics
6. **Battery Ventures Software Report** -- Enterprise software metrics
**Note:** Benchmarks shift over time. In capital-constrained environments (higher interest rates), efficiency metrics (burn multiple, Rule of 40) receive more weight. In growth-oriented environments (lower interest rates), growth rate and market share gain importance.
---
## Quarterly Board Reporting Template
When presenting GTM efficiency to the board, organize metrics as follows:
1. **Growth:** ARR, net new ARR, growth rate, NDR
2. **Efficiency:** Magic Number, LTV:CAC, CAC Payback, Burn Multiple
3. **Balance:** Rule of 40 score and composition
4. **Pipeline:** Coverage ratio, velocity, forecast accuracy
5. **Trends:** Quarter-over-quarter change for each metric with directional indicators
6. **Benchmarks:** How the company compares to stage-appropriate benchmarks
7. **Actions:** Top 3 initiatives to improve weakest metrics

View File

@@ -0,0 +1,292 @@
# Pipeline Management Framework
Best practices for pipeline management including stage definitions, conversion benchmarks, velocity optimization, and inspection cadence.
---
## Pipeline Stage Definitions
A well-defined pipeline requires clear, observable exit criteria at each stage. Subjective stages lead to inaccurate forecasting and unreliable conversion data.
### Recommended Stage Model (B2B SaaS)
| Stage | Name | Exit Criteria | Probability | Typical Duration |
|-------|------|--------------|-------------|-----------------|
| S0 | Lead | Contact identified, initial interest signal | 5% | 0-7 days |
| S1 | Discovery | Pain identified, budget confirmed, stakeholder engaged | 10% | 7-14 days |
| S2 | Qualification | MEDDPICC criteria met, mutual action plan created | 20% | 14-21 days |
| S3 | Proposal | Solution presented, pricing delivered, champion confirmed | 40% | 7-14 days |
| S4 | Negotiation | Commercial terms discussed, legal engaged, verbal commitment | 60% | 7-21 days |
| S5 | Commit | Contract redlined, signature timeline confirmed | 80% | 3-7 days |
| S6 | Closed Won | Signed contract received | 100% | -- |
| SL | Closed Lost | Deal disposition recorded with loss reason | 0% | -- |
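The probability column is what turns raw pipeline into weighted pipeline for coverage math. A minimal sketch using the stage codes from the table above (the input shape is illustrative, not a CRM schema; probabilities are stored as integer percents to keep the arithmetic exact):

```python
# Stage probabilities from the table above, as integer percents
STAGE_PROBABILITY_PCT = {
    "S0": 5, "S1": 10, "S2": 20, "S3": 40,
    "S4": 60, "S5": 80, "S6": 100, "SL": 0,
}

def weighted_pipeline(deals: list[dict]) -> float:
    """Sum open-deal values weighted by stage probability."""
    return sum(
        d["value"] * STAGE_PROBABILITY_PCT[d["stage"]]
        for d in deals
        if d["stage"] not in ("S6", "SL")  # closed deals are not pipeline
    ) / 100

deals = [
    {"stage": "S2", "value": 50_000},
    {"stage": "S4", "value": 100_000},
]
print(weighted_pipeline(deals))  # 50,000*0.20 + 100,000*0.60 = 70000.0
```

Closed Won is excluded because booked revenue is no longer pipeline, one of the common coverage-ratio mistakes.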
### Stage Exit Criteria Best Practices
**Discovery (S1) Exit Criteria:**
- Pain point articulated by prospect (not assumed by rep)
- Budget range discussed (even if informal)
- Decision-making process understood
- Next meeting scheduled with clear agenda
**Qualification (S2) Exit Criteria:**
- MEDDPICC or BANT qualification framework completed
- Economic buyer identified (not just champion)
- Compelling event or timeline identified
- Mutual action plan (MAP) shared and agreed upon
- Technical requirements understood
**Proposal (S3) Exit Criteria:**
- Solution demo completed and well-received
- Pricing proposal delivered
- Champion validated proposal internally
- Competitive landscape understood
- No unresolved technical blockers
**Negotiation (S4) Exit Criteria:**
- Commercial terms discussed (not just pricing, but payment terms, SLA, etc.)
- Legal review initiated
- Security/procurement review started
- Verbal agreement on core terms
- Close date confirmed within 30 days
**Commit (S5) Exit Criteria:**
- Final contract sent for signature
- All legal redlines resolved
- Procurement approval obtained
- Signature expected within 7 business days
---
## Conversion Benchmarks by Segment
### SMB (ACV <$25K)
| Transition | Benchmark | Top Quartile |
|-----------|-----------|--------------|
| Lead to Discovery | 20-30% | 35%+ |
| Discovery to Qualification | 40-50% | 55%+ |
| Qualification to Proposal | 50-60% | 65%+ |
| Proposal to Negotiation | 55-65% | 70%+ |
| Negotiation to Close | 65-75% | 80%+ |
| Overall Win Rate | 20-30% | 35%+ |
| Avg Cycle Length | 14-30 days | <14 days |
### Mid-Market (ACV $25K-$100K)
| Transition | Benchmark | Top Quartile |
|-----------|-----------|--------------|
| Lead to Discovery | 15-25% | 30%+ |
| Discovery to Qualification | 35-45% | 50%+ |
| Qualification to Proposal | 45-55% | 60%+ |
| Proposal to Negotiation | 50-60% | 65%+ |
| Negotiation to Close | 60-70% | 75%+ |
| Overall Win Rate | 15-25% | 30%+ |
| Avg Cycle Length | 30-60 days | <30 days |
### Enterprise (ACV >$100K)
| Transition | Benchmark | Top Quartile |
|-----------|-----------|--------------|
| Lead to Discovery | 10-20% | 25%+ |
| Discovery to Qualification | 30-40% | 45%+ |
| Qualification to Proposal | 40-50% | 55%+ |
| Proposal to Negotiation | 45-55% | 60%+ |
| Negotiation to Close | 55-65% | 70%+ |
| Overall Win Rate | 10-20% | 25%+ |
| Avg Cycle Length | 60-120 days | <60 days |
---
## Sales Velocity Optimization
Sales velocity = (# Opportunities x Avg Deal Size x Win Rate) / Avg Cycle Days
Each component is an optimization lever:
### Lever 1: Increase Opportunity Volume
**Strategies:**
- Invest in inbound marketing (content, SEO, paid)
- Scale outbound SDR capacity
- Develop partner/channel sourcing
- Launch product-led growth (PLG) motion
- Implement customer referral programs
**Measurement:** Pipeline created ($) per week/month, by source
### Lever 2: Increase Average Deal Size
**Strategies:**
- Multi-product bundling and packaging
- Usage-based pricing with growth triggers
- Land-and-expand with defined expansion playbooks
- Move upmarket with enterprise features
- Value-based pricing tied to customer outcomes
**Measurement:** ACV trend by quarter, by segment
### Lever 3: Increase Win Rate
**Strategies:**
- Implement MEDDPICC qualification rigor
- Build competitive battle cards and train on them
- Create multi-threaded relationships (not single-threaded)
- Develop ROI/business case tools
- Invest in sales engineering and demo quality
- Win/loss analysis with structured debriefs
**Measurement:** Win rate by stage entry, by competitor, by rep
### Lever 4: Decrease Sales Cycle Length
**Strategies:**
- Pre-qualify harder at S1/S2 to remove slow deals
- Mutual action plans with milestone dates
- Champion enablement (arm champions with internal selling materials)
- Parallel processing (legal/security review concurrent with evaluation)
- Standardized contracts and pre-approved terms
- Executive sponsor engagement for stuck deals
**Measurement:** Days in each stage, cycle length trend, stage-specific bottlenecks
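The four levers compound: velocity is multiplicative in the first three and inversely proportional to cycle length, so a modest gain on any one lever moves the whole number. A minimal sketch of the formula:

```python
def sales_velocity(opportunities: int, avg_deal_size: float,
                   win_rate: float, cycle_days: float) -> float:
    """Revenue per day: (# opps x avg deal size x win rate) / cycle days."""
    if cycle_days <= 0:
        raise ValueError("cycle_days must be positive")
    return opportunities * avg_deal_size * win_rate / cycle_days

base = sales_velocity(100, 40_000, 0.25, 50)    # 20000.0 per day
faster = sales_velocity(100, 40_000, 0.25, 40)  # 25000.0 per day
```

Cutting the cycle from 50 to 40 days lifts velocity 25% with no change to volume, deal size, or win rate.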
---
## Pipeline Inspection Cadence
### Daily (Rep Level)
**Focus:** Deal-level activity and next steps
**Questions:**
- What is the next step for each deal in S3+?
- Are any deals missing next steps or scheduled meetings?
- Which deals have not been updated in >3 days?
### Weekly (Manager/Team Level)
**Focus:** Pipeline health and forecast accuracy
**Review Format (45-60 minutes):**
1. **Coverage Check (10 min)**
- Current pipeline vs. quota -- is coverage >3x?
- Pipeline created this week vs. target
- Net pipeline change (created minus closed minus lost)
2. **Deal Inspection (25 min)**
- Walk top 10 deals by value in S3+
- MEDDPICC validation for each commit deal
- Identify deals at risk (aging, single-threaded, no next step)
3. **Forecast Call (10 min)**
- Commit, best case, and pipeline forecast
- Changes from last week's forecast (what moved and why)
- Gaps to plan and remediation
4. **Action Items (5 min)**
- Deals needing executive engagement
- Pipeline generation actions for next week
- Coaching priorities
### Monthly (Leadership Level)
**Focus:** Pipeline trends, velocity, and efficiency
**Review Areas:**
- Month-over-month pipeline growth trend
- Conversion rate trends by stage
- Sales velocity trend (improving or declining?)
- Forecast accuracy (MAPE) for the month
- Rep performance distribution (quartile analysis)
- Pipeline source mix health
### Quarterly (Executive/Board Level)
**Focus:** GTM efficiency and strategic pipeline
**Review Areas:**
- Pipeline coverage for next 2-3 quarters
- LTV:CAC and Magic Number trends
- Sales efficiency ratio trends
- Market segment performance comparison
- New market/product pipeline contribution
- Competitive win/loss trends
---
## Pipeline Hygiene
### Deal Hygiene Standards
1. **Close date accuracy:** Close dates must be based on buyer commitment, not rep hope. Any deal pushed more than twice should be flagged for re-qualification.
2. **Stage accuracy:** Deals must meet exit criteria to be in a stage. No deal should be in Proposal (S3) without a pricing deliverable sent.
3. **Amount accuracy:** Deal amounts must reflect the current proposal, not aspirational upsell. Variance between deal value and proposal should be <10%.
4. **Contact coverage:** Deals >$50K should have 3+ contacts associated. Enterprise deals should have economic buyer, champion, and technical evaluator.
5. **Activity recency:** No deal should go 7+ days without logged activity. Deals without recent activity signal stalling.
### Pipeline Cleanup Triggers
Run cleanup when:
- Pipeline-to-quota ratio drops below 2.5x
- Forecast accuracy (MAPE) exceeds 20%
- More than 15% of pipeline is >90 days old
- Average deal age exceeds 1.5x normal cycle time
### Cleanup Process
1. Flag all deals with close date in the past
2. Flag all deals with no activity in 14+ days
3. Flag all deals pushed 3+ times
4. Rep self-assessment: keep, push, or close for each flagged deal
5. Manager review and disposition
6. Update CRM and recalculate metrics
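Steps 1-3 can be automated before the rep self-assessment. A sketch, assuming deals carry `close_date`, `last_activity`, and `push_count` fields -- these names are illustrative, not an actual CRM schema:

```python
from datetime import date

def flag_for_cleanup(deals: list[dict], today: date) -> list[dict]:
    """Flag deals matching cleanup triggers 1-3 above, with reasons."""
    flagged = []
    for d in deals:
        reasons = []
        if d["close_date"] < today:
            reasons.append("close date in the past")
        if (today - d["last_activity"]).days >= 14:
            reasons.append("no activity in 14+ days")
        if d.get("push_count", 0) >= 3:
            reasons.append("pushed 3+ times")
        if reasons:
            flagged.append({"id": d["id"], "reasons": reasons})
    return flagged
```

Attaching reasons to each flag gives reps the context they need for the keep/push/close decision in step 4.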
---
## Pipeline Risk Indicators
### Concentration Risk
**Definition:** Over-reliance on a small number of large deals.
**Thresholds:**
- Single deal >40% of pipeline = HIGH risk
- Single deal >25% of pipeline = MEDIUM risk
- Top 3 deals >70% of pipeline = HIGH risk
**Mitigation:** Diversify pipeline across segments, deal sizes, and sources. Increase deal count even if average deal size decreases.
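The thresholds above reduce to two ratios over sorted deal values. A minimal sketch:

```python
def concentration_risk(deal_values: list[float]) -> str:
    """Classify pipeline concentration per the thresholds above."""
    total = sum(deal_values)
    if total == 0:
        return "NONE"
    top = sorted(deal_values, reverse=True)
    if top[0] / total > 0.40 or sum(top[:3]) / total > 0.70:
        return "HIGH"
    if top[0] / total > 0.25:
        return "MEDIUM"
    return "LOW"

print(concentration_risk([500_000, 100_000, 100_000, 100_000]))  # HIGH
```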
### Stage Imbalance Risk
**Definition:** Pipeline is concentrated in early or late stages with gaps in between.
**Healthy Distribution:**
- Discovery/Qualification: 50-60% of pipeline value
- Proposal: 20-25% of pipeline value
- Negotiation/Commit: 15-20% of pipeline value
**Warning Signs:**
- >70% in early stages = insufficient progression
- >50% in late stages = insufficient pipeline generation
- Empty stages = broken funnel mechanics
### Temporal Risk
**Definition:** Pipeline is concentrated in a single quarter or lacks coverage for future quarters.
**Standard:** Maintain 3x coverage for current quarter and 1.5x for next quarter.
### Source Risk
**Definition:** Pipeline is overly dependent on a single source (e.g., 80% outbound, 0% inbound).
**Healthy Mix (varies by stage):**
- Inbound/Marketing: 30-40%
- Outbound/SDR: 30-40%
- Partner/Channel: 10-20%
- Expansion/Customer: 10-20%

View File

@@ -0,0 +1,304 @@
# RevOps Metrics Guide
Complete reference for Revenue Operations metrics hierarchy, definitions, formulas, interpretation guidelines, and common mistakes.
---
## Metrics Hierarchy
Revenue Operations metrics are organized in a hierarchy from leading indicators (pipeline activity) through lagging indicators (efficiency outcomes):
```
Level 1: Activity Metrics (Leading)
├── Pipeline created ($, #)
├── Meetings booked
├── Proposals sent
└── Demo completion rate
Level 2: Pipeline Metrics (Mid-funnel)
├── Pipeline coverage ratio
├── Stage conversion rates
├── Sales velocity
├── Deal aging
└── Pipeline hygiene score
Level 3: Revenue Metrics (Outcomes)
├── Bookings (new, expansion, renewal)
├── Revenue (ARR, MRR, TCV)
├── Win rate
└── Average deal size
Level 4: Efficiency Metrics (Unit Economics)
├── Magic Number
├── LTV:CAC Ratio
├── CAC Payback Period
├── Burn Multiple
├── Rule of 40
└── Net Dollar Retention
Level 5: Strategic Metrics (Board-Level)
├── Revenue per employee
├── Gross margin trend
├── NRR cohort analysis
└── Customer health score
```
---
## Core Metric Definitions
### Pipeline Coverage Ratio
**Formula:** Total Weighted Pipeline / Quota Target
**What it measures:** Whether there is sufficient pipeline to meet revenue targets.
**Interpretation:**
- 4x+: Strong coverage, selective deal pursuit possible
- 3-4x: Healthy coverage, standard operations
- 2-3x: At risk, accelerate pipeline generation
- <2x: Critical, immediate pipeline intervention needed
**Common Mistakes:**
- Including closed-won deals in the pipeline total
- Not weighting by stage probability
- Using annual quota against quarterly pipeline
- Ignoring deal quality in favor of quantity
**Best Practice:** Measure coverage ratio weekly. Track by quarter to identify seasonal gaps early.
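A minimal sketch of the ratio and its status bands:

```python
def pipeline_coverage(weighted_pipeline: float, quota: float) -> tuple[float, str]:
    """Weighted pipeline over quota, with the status bands above."""
    ratio = weighted_pipeline / quota if quota else 0.0
    if ratio >= 4:
        status = "Strong"
    elif ratio >= 3:
        status = "Healthy"
    elif ratio >= 2:
        status = "At risk"
    else:
        status = "Critical"
    return round(ratio, 2), status

print(pipeline_coverage(3_500_000, 1_000_000))  # (3.5, 'Healthy')
```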
---
### Stage Conversion Rates
**Formula:** # Deals advancing to Stage N+1 / # Deals entering Stage N
**What it measures:** Efficiency of progression through each pipeline stage.
**Typical SaaS Conversion Benchmarks:**
| Stage Transition | Median Rate | Top Quartile |
|-----------------|-------------|--------------|
| Lead to Qualification | 15-25% | 30%+ |
| Qualification to Proposal | 40-50% | 60%+ |
| Proposal to Negotiation | 50-60% | 70%+ |
| Negotiation to Close | 60-70% | 80%+ |
| Overall Win Rate | 15-25% | 30%+ |
**Common Mistakes:**
- Not standardizing stage exit criteria (subjective stages)
- Comparing conversion rates across different sales motions (PLG vs enterprise)
- Ignoring stage skipping (deals that jump stages inflate later conversion rates)
- Not segmenting by deal size or segment
---
### Sales Velocity
**Formula:** (# Opportunities x Avg Deal Size x Win Rate) / Avg Sales Cycle Days
**What it measures:** The rate at which the pipeline generates revenue, measured as revenue per day.
**Components:**
1. **# Opportunities** -- Volume of qualified deals in pipeline
2. **Avg Deal Size** -- Average contract value of won deals
3. **Win Rate** -- Percentage of deals that close
4. **Avg Sales Cycle** -- Days from opportunity creation to close
**Optimization levers:**
- Increase opportunity volume (marketing/SDR investment)
- Increase deal size (pricing, packaging, upsell)
- Increase win rate (sales enablement, competitive positioning)
- Decrease cycle length (champion building, MEDDPICC adherence)
**Common Mistakes:**
- Using all pipeline deals instead of qualified opportunities
- Not normalizing for segment (SMB velocity vs Enterprise velocity)
- Conflating calendar time with active selling time
- Ignoring velocity trend in favor of absolute number
---
### MAPE (Mean Absolute Percentage Error)
**Formula:** mean(|Actual - Forecast| / |Actual|) x 100
**What it measures:** Average forecast error magnitude as a percentage.
**Interpretation:**
| MAPE | Rating | Action |
|------|--------|--------|
| <10% | Excellent | Maintain current methodology |
| 10-15% | Good | Minor calibration adjustments |
| 15-25% | Fair | Methodology review needed |
| >25% | Poor | Fundamental process overhaul |
**Common Mistakes:**
- Using forecast vs. target instead of forecast vs. actual
- Not distinguishing between bias (systematic) and variance (random)
- Measuring only at the aggregate level (masks individual rep errors)
- Comparing MAPE across different time horizons (monthly vs quarterly)
---
### Forecast Bias
**Formula:** mean(Forecast - Actual) / mean(Actual) x 100
**What it measures:** Systematic tendency to over-forecast or under-forecast.
**Types:**
- **Positive bias (over-forecasting):** Forecast consistently exceeds actual. Often indicates optimistic deal assessment, insufficient qualification, or pressure to commit deals before they are ready.
- **Negative bias (under-forecasting):** Actual consistently exceeds forecast. Often indicates conservative call culture, late-stage deals arriving unexpectedly, or poor pipeline visibility.
**Healthy Range:** Bias within +/- 5% of actual is considered well-calibrated.
---
### Magic Number
**Formula:** Net New ARR / Prior Period S&M Spend
**What it measures:** Efficiency of sales & marketing spend in generating new revenue.
**Interpretation:**
- >1.0: Extremely efficient, consider increasing GTM investment
- 0.75-1.0: Healthy efficiency, optimize and scale
- 0.50-0.75: Acceptable, focus on channel/spend optimization
- <0.50: Inefficient, audit spend allocation and productivity
**Common Mistakes:**
- Using total revenue instead of net new ARR
- Mixing ARR conventions (decide whether expansion ARR counts toward net new and apply that definition consistently every period)
- Using current period spend instead of prior period (lag effect)
- Not separating sales spend from marketing spend for diagnostics
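A minimal sketch following the formula above; note the period offset -- spend comes from the prior period, not the current one:

```python
def magic_number(net_new_arr: float, prior_period_sm_spend: float) -> float:
    """Net new ARR this period / S&M spend from the prior period."""
    if prior_period_sm_spend <= 0:
        return 0.0
    return net_new_arr / prior_period_sm_spend

print(magic_number(750_000, 1_000_000))  # 0.75 -- healthy band
```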
---
### LTV:CAC Ratio
**Formula:** Customer Lifetime Value / Customer Acquisition Cost
**Where:**
- LTV = (ARPA x Gross Margin) / Churn Rate
- ARPA = Average Revenue Per Account (annualized)
- CAC = Total S&M Spend / New Customers Acquired
**Target:** >3:1 is healthy; >5:1 may indicate under-investment in growth
**Common Mistakes:**
- Using revenue instead of gross-margin-weighted revenue in LTV
- Not including all acquisition costs (SDR, marketing, sales engineering)
- Using blended churn instead of cohort-specific churn
- Comparing across segments without normalizing (enterprise LTV:CAC is naturally higher)
---
### CAC Payback Period
**Formula:** CAC / (ARPA_monthly x Gross Margin)
**What it measures:** Months to recover the cost of acquiring a customer.
**Interpretation:**
- <12 months: Excellent capital efficiency
- 12-18 months: Healthy, especially for mid-market/enterprise
- 18-24 months: Acceptable for enterprise, concerning for SMB
- >24 months: Capital-intensive, needs optimization
**Common Mistakes:**
- Using revenue instead of gross-margin contribution
- Ignoring expansion revenue in payback calculation (conservative approach)
- Comparing SMB payback to enterprise payback without context
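LTV:CAC and CAC payback share the same inputs, and computing them together catches cases where one looks healthy while the other does not. A minimal sketch using the formulas above (the dollar figures are illustrative):

```python
def ltv(arpa_annual: float, gross_margin: float, annual_churn: float) -> float:
    """LTV = (ARPA x gross margin) / churn rate."""
    return (arpa_annual * gross_margin) / annual_churn if annual_churn else 0.0

def cac_payback_months(cac: float, arpa_monthly: float, gross_margin: float) -> float:
    """Months to recover CAC from gross-margin contribution."""
    contribution = arpa_monthly * gross_margin
    return cac / contribution if contribution else 0.0

# Illustrative: ARPA $12K/yr, 80% gross margin, 10% annual churn, CAC $20K
life_value = ltv(12_000, 0.80, 0.10)
print(round(life_value / 20_000, 1))                      # 4.8 -- ratio looks healthy
print(round(cac_payback_months(20_000, 1_000, 0.80), 1))  # 25.0 -- payback does not
```

Same inputs, conflicting signals: low churn stretches lifetime value while cash recovery stays slow, which is why both metrics are reviewed side by side.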
---
### Burn Multiple
**Formula:** Net Burn / Net New ARR
**What it measures:** How much cash is consumed for each dollar of new ARR.
**Interpretation (David Sacks framework):**
- <1.0x: Amazing -- hyper-efficient growth
- 1.0-1.5x: Great -- strong capital efficiency
- 1.5-2.0x: Good -- healthy burn rate
- 2.0-3.0x: Suspect -- needs attention
- >3.0x: Bad -- unsustainable without course correction
**Common Mistakes:**
- Using gross burn instead of net burn
- Not annualizing ARR when using quarterly burn
- Ignoring the denominator quality (all new ARR is not equal)
---
### Rule of 40
**Formula:** Revenue Growth Rate (%) + Free Cash Flow Margin (%)
**What it measures:** Balance between growth and profitability.
**Interpretation:**
- >60%: Elite SaaS company
- 40-60%: Strong performance
- 20-40%: Acceptable, optimize one dimension
- <20%: Needs significant improvement
**Common Mistakes:**
- Using EBITDA margin instead of FCF margin
- Comparing early-stage (growth-heavy) with late-stage (margin-heavy)
- Not considering the composition (80% growth with a -40% margin is far riskier than 30% growth with a 10% margin, even though both score 40%)
---
### Net Dollar Retention (NDR)
**Formula:** (Beginning ARR + Expansion - Contraction - Churn) / Beginning ARR x 100
**What it measures:** Revenue retention and expansion from existing customers.
**Interpretation:**
- >130%: World-class expansion (Snowflake, Datadog)
- 120-130%: Excellent land-and-expand
- 110-120%: Strong retention with moderate expansion
- 100-110%: Stable base, limited expansion
- <100%: Net revenue contraction -- critical concern
**Common Mistakes:**
- Including new logos in the calculation
- Not normalizing for cohort age (newer cohorts expand differently)
- Confusing gross retention with net retention
- Using logo retention as a proxy for dollar retention
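A minimal sketch of the formula above; the dollar figures are illustrative:

```python
def ndr(beginning_arr: float, expansion: float,
        contraction: float, churned: float) -> float:
    """Net dollar retention as a percentage of beginning ARR."""
    if beginning_arr <= 0:
        return 0.0
    return (beginning_arr + expansion - contraction - churned) * 100 / beginning_arr

# $10M base: +$1.5M expansion, -$0.2M contraction, -$0.5M churn
print(ndr(10_000_000, 1_500_000, 200_000, 500_000))  # 108.0
```

New-logo ARR never appears in the inputs, which guards against the first common mistake listed above.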
---
## Metric Interdependencies
Understanding how metrics relate prevents conflicting optimizations:
1. **Magic Number and LTV:CAC** -- Both use S&M spend but measure different horizons. Magic Number is period-specific; LTV:CAC is lifetime.
2. **Burn Multiple and Rule of 40** -- Both measure efficiency but from different angles. Burn Multiple is cash-focused; Rule of 40 balances growth with profitability.
3. **Pipeline Coverage and Sales Velocity** -- High coverage with low velocity means pipeline is stagnating. Both must be healthy.
4. **NDR and LTV** -- NDR directly impacts LTV. Improving NDR is the highest-leverage way to improve LTV:CAC.
5. **Win Rate and Deal Size** -- Often inversely correlated. Moving upmarket increases deal size but may reduce win rate.
---
## Measurement Cadence
| Metric | Cadence | Owner |
|--------|---------|-------|
| Pipeline Coverage | Weekly | Sales Leadership |
| Stage Conversion | Bi-weekly | Sales Ops |
| Sales Velocity | Monthly | RevOps |
| Forecast Accuracy (MAPE) | Monthly/Quarterly | RevOps |
| Magic Number | Quarterly | CRO/CFO |
| LTV:CAC | Quarterly | Finance/RevOps |
| CAC Payback | Quarterly | Finance |
| Burn Multiple | Quarterly | CFO |
| Rule of 40 | Quarterly/Annual | CEO/Board |
| NDR | Quarterly | CS/RevOps |

View File

@@ -0,0 +1,531 @@
#!/usr/bin/env python3
"""Forecast Accuracy Tracker - Measures forecast accuracy and bias for SaaS revenue teams.
Calculates MAPE (Mean Absolute Percentage Error), detects systematic forecasting
bias, analyzes accuracy trends, and provides category-level breakdowns.
Usage:
python forecast_accuracy_tracker.py forecast_data.json --format text
python forecast_accuracy_tracker.py forecast_data.json --format json
"""
import argparse
import json
import sys
from typing import Any
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def calculate_mape(periods: list[dict]) -> float:
"""Calculate Mean Absolute Percentage Error.
Formula: mean(|actual - forecast| / |actual|) x 100
Args:
periods: List of dicts with 'forecast' and 'actual' keys.
Returns:
MAPE as a percentage.
"""
if not periods:
return 0.0
errors = []
for p in periods:
actual = p["actual"]
forecast = p["forecast"]
if actual != 0:
errors.append(abs(actual - forecast) / abs(actual))
if not errors:
return 0.0
return (sum(errors) / len(errors)) * 100
def calculate_weighted_mape(periods: list[dict]) -> float:
"""Calculate value-weighted MAPE.
Weights each period's error by its actual value, giving more importance
to larger periods.
Args:
periods: List of dicts with 'forecast' and 'actual' keys.
Returns:
Weighted MAPE as a percentage.
"""
if not periods:
return 0.0
total_actual = sum(abs(p["actual"]) for p in periods)
if total_actual == 0:
return 0.0
weighted_errors = 0.0
for p in periods:
actual = p["actual"]
forecast = p["forecast"]
if actual != 0:
weight = abs(actual) / total_actual
weighted_errors += weight * (abs(actual - forecast) / abs(actual))
return weighted_errors * 100
def get_accuracy_rating(mape: float) -> dict[str, str]:
"""Return accuracy rating based on MAPE threshold.
Ratings:
Excellent: <10%
Good: 10-15%
Fair: 15-25%
Poor: >25%
"""
if mape < 10:
return {"rating": "Excellent", "description": "Highly predictable, data-driven process"}
elif mape < 15:
return {"rating": "Good", "description": "Reliable forecasting with minor variance"}
elif mape < 25:
return {"rating": "Fair", "description": "Needs process improvement"}
else:
return {"rating": "Poor", "description": "Significant forecasting methodology gaps"}
def analyze_bias(periods: list[dict]) -> dict[str, Any]:
"""Analyze systematic forecasting bias.
Positive bias = over-forecasting (forecast > actual, i.e., actual fell short)
Negative bias = under-forecasting (forecast < actual, i.e., actual exceeded)
Args:
periods: List of dicts with 'forecast' and 'actual' keys.
Returns:
Bias analysis with direction, magnitude, and ratio.
"""
if not periods:
return {
"direction": "None",
"bias_pct": 0.0,
"over_forecast_count": 0,
"under_forecast_count": 0,
"exact_count": 0,
"bias_ratio": 0.0,
}
over_count = 0
under_count = 0
exact_count = 0
total_bias = 0.0
for p in periods:
diff = p["forecast"] - p["actual"]
total_bias += diff
if diff > 0:
over_count += 1
elif diff < 0:
under_count += 1
else:
exact_count += 1
avg_bias = total_bias / len(periods)
total_actual = sum(p["actual"] for p in periods)
bias_pct = safe_divide(total_bias, total_actual) * 100
if over_count > under_count:
direction = "Over-forecasting"
elif under_count > over_count:
direction = "Under-forecasting"
else:
direction = "Balanced"
bias_ratio = safe_divide(over_count, over_count + under_count)
return {
"direction": direction,
"avg_bias_amount": round(avg_bias, 2),
"bias_pct": round(bias_pct, 1),
"over_forecast_count": over_count,
"under_forecast_count": under_count,
"exact_count": exact_count,
"bias_ratio": round(bias_ratio, 2),
}
def analyze_trend(periods: list[dict]) -> dict[str, Any]:
"""Analyze period-over-period accuracy trend.
Determines if forecast accuracy is improving, stable, or declining
by comparing error rates across consecutive periods.
Args:
periods: List of dicts with 'period', 'forecast', and 'actual' keys.
Returns:
Trend analysis with direction and period details.
"""
if len(periods) < 2:
return {
"trend": "Insufficient data",
"period_errors": [],
"improving_periods": 0,
"declining_periods": 0,
}
period_errors = []
for p in periods:
actual = p["actual"]
forecast = p["forecast"]
if actual != 0:
error_pct = abs(actual - forecast) / abs(actual) * 100
else:
error_pct = 0.0
period_errors.append({
"period": p.get("period", "Unknown"),
"error_pct": round(error_pct, 1),
"forecast": forecast,
"actual": actual,
})
improving = 0
declining = 0
for i in range(1, len(period_errors)):
if period_errors[i]["error_pct"] < period_errors[i - 1]["error_pct"]:
improving += 1
elif period_errors[i]["error_pct"] > period_errors[i - 1]["error_pct"]:
declining += 1
if improving > declining:
trend = "Improving"
elif declining > improving:
trend = "Declining"
else:
trend = "Stable"
# Calculate recent vs historical MAPE
midpoint = len(periods) // 2
if midpoint > 0:
early_mape = calculate_mape(periods[:midpoint])
recent_mape = calculate_mape(periods[midpoint:])
mape_change = recent_mape - early_mape
else:
early_mape = 0.0
recent_mape = 0.0
mape_change = 0.0
return {
"trend": trend,
"period_errors": period_errors,
"improving_periods": improving,
"declining_periods": declining,
"early_mape": round(early_mape, 1),
"recent_mape": round(recent_mape, 1),
"mape_change": round(mape_change, 1),
}
def analyze_categories(category_breakdowns: dict) -> dict[str, Any]:
"""Analyze accuracy by category (rep, product, segment, etc.).
Args:
category_breakdowns: Dict of category_name -> list of
{category, forecast, actual} dicts.
Returns:
Category-level MAPE and accuracy analysis.
"""
results = {}
for category_name, entries in category_breakdowns.items():
category_results = []
for entry in entries:
actual = entry["actual"]
forecast = entry["forecast"]
if actual != 0:
error_pct = abs(actual - forecast) / abs(actual) * 100
else:
error_pct = 0.0
diff = forecast - actual
if diff > 0:
bias = "Over"
elif diff < 0:
bias = "Under"
else:
bias = "Exact"
rating = get_accuracy_rating(error_pct)
category_results.append({
"category": entry["category"],
"forecast": forecast,
"actual": actual,
"error_pct": round(error_pct, 1),
"bias": bias,
"variance": round(diff, 2),
"rating": rating["rating"],
})
# Sort by error percentage (worst first)
category_results.sort(key=lambda x: x["error_pct"], reverse=True)
overall_mape = calculate_mape(entries)
results[category_name] = {
"entries": category_results,
"overall_mape": round(overall_mape, 1),
"overall_rating": get_accuracy_rating(overall_mape)["rating"],
}
return results
def generate_recommendations(
mape: float, bias: dict, trend: dict, categories: dict
) -> list[str]:
"""Generate actionable recommendations based on analysis results.
Args:
mape: Overall MAPE percentage.
bias: Bias analysis results.
trend: Trend analysis results.
categories: Category analysis results.
Returns:
List of recommendation strings.
"""
recommendations = []
# MAPE-based recommendations
if mape > 25:
recommendations.append(
"CRITICAL: MAPE exceeds 25%. Implement structured forecasting methodology "
"(e.g., weighted pipeline with stage-based probabilities)."
)
elif mape > 15:
recommendations.append(
"Forecast accuracy needs improvement. Consider implementing deal-level "
"forecasting with commit/upside/pipeline categories."
)
# Bias-based recommendations
if bias["direction"] == "Over-forecasting" and abs(bias["bias_pct"]) > 10:
recommendations.append(
f"Systematic over-forecasting detected ({bias['bias_pct']}% bias). "
"Review deal qualification criteria and apply more conservative "
"stage probabilities."
)
elif bias["direction"] == "Under-forecasting" and abs(bias["bias_pct"]) > 10:
recommendations.append(
f"Systematic under-forecasting detected ({bias['bias_pct']}% bias). "
"Review upside deals more carefully and improve pipeline visibility."
)
# Trend-based recommendations
if trend["trend"] == "Declining":
recommendations.append(
"Forecast accuracy is declining over time. Schedule a forecasting "
"methodology review and retrain the team on forecasting best practices."
)
elif trend["trend"] == "Improving":
recommendations.append(
"Forecast accuracy is improving. Continue current methodology and "
"document best practices for consistency."
)
# Category-based recommendations
for cat_name, cat_data in categories.items():
worst_entries = [
e for e in cat_data["entries"] if e["error_pct"] > 25
]
if worst_entries:
names = ", ".join(e["category"] for e in worst_entries[:3])
recommendations.append(
f"High error rates in {cat_name}: {names}. "
f"Provide targeted coaching on forecasting discipline."
)
if not recommendations:
recommendations.append(
"Forecasting performance is strong. Maintain current processes "
"and continue monitoring for drift."
)
return recommendations
def track_forecast_accuracy(data: dict) -> dict[str, Any]:
"""Run complete forecast accuracy analysis.
Args:
data: Forecast data with periods and optional category breakdowns.
Returns:
Complete forecast accuracy analysis results.
"""
periods = data["forecast_periods"]
mape = calculate_mape(periods)
weighted_mape = calculate_weighted_mape(periods)
rating = get_accuracy_rating(mape)
bias = analyze_bias(periods)
trend = analyze_trend(periods)
categories = {}
if "category_breakdowns" in data:
categories = analyze_categories(data["category_breakdowns"])
recommendations = generate_recommendations(mape, bias, trend, categories)
return {
"mape": round(mape, 1),
"weighted_mape": round(weighted_mape, 1),
"accuracy_rating": rating,
"bias": bias,
"trend": trend,
"category_breakdowns": categories,
"recommendations": recommendations,
"periods_analyzed": len(periods),
}
def format_currency(value: float) -> str:
"""Format a number as currency."""
if abs(value) >= 1_000_000:
return f"${value / 1_000_000:,.1f}M"
elif abs(value) >= 1_000:
return f"${value / 1_000:,.1f}K"
return f"${value:,.0f}"
def format_text_report(results: dict) -> str:
"""Format analysis results as a human-readable text report."""
lines = []
lines.append("=" * 70)
lines.append("FORECAST ACCURACY REPORT")
lines.append("=" * 70)
# Overall accuracy
lines.append("")
lines.append("OVERALL ACCURACY")
lines.append("-" * 40)
lines.append(f" MAPE: {results['mape']}%")
lines.append(f" Weighted MAPE: {results['weighted_mape']}%")
lines.append(f" Rating: {results['accuracy_rating']['rating']}")
lines.append(f" Assessment: {results['accuracy_rating']['description']}")
lines.append(f" Periods Analyzed: {results['periods_analyzed']}")
# Bias analysis
bias = results["bias"]
lines.append("")
lines.append("FORECAST BIAS")
lines.append("-" * 40)
lines.append(f" Direction: {bias['direction']}")
lines.append(f" Bias %: {bias['bias_pct']}%")
lines.append(f" Avg Bias Amount: {format_currency(bias['avg_bias_amount'])}")
lines.append(f" Over-forecast: {bias['over_forecast_count']} periods")
lines.append(f" Under-forecast: {bias['under_forecast_count']} periods")
lines.append(f" Bias Ratio: {bias['bias_ratio']}")
# Trend analysis
trend = results["trend"]
lines.append("")
lines.append("ACCURACY TREND")
lines.append("-" * 40)
lines.append(f" Trend: {trend['trend']}")
lines.append(f" Improving: {trend['improving_periods']} periods")
lines.append(f" Declining: {trend['declining_periods']} periods")
if trend.get("early_mape") is not None and trend["trend"] != "Insufficient data":
lines.append(f" Early MAPE: {trend['early_mape']}%")
lines.append(f" Recent MAPE: {trend['recent_mape']}%")
lines.append(f" MAPE Change: {trend['mape_change']:+.1f}%")
if trend.get("period_errors"):
lines.append("")
lines.append(" PERIOD DETAIL:")
for pe in trend["period_errors"]:
lines.append(
f" {pe['period']:12s} "
f"Forecast: {format_currency(pe['forecast']):>10s} "
f"Actual: {format_currency(pe['actual']):>10s} "
f"Error: {pe['error_pct']}%"
)
# Category breakdowns
if results["category_breakdowns"]:
lines.append("")
lines.append("CATEGORY BREAKDOWN")
lines.append("-" * 40)
for cat_name, cat_data in results["category_breakdowns"].items():
lines.append(
f"\n {cat_name.upper()} (Overall MAPE: {cat_data['overall_mape']}% "
f"- {cat_data['overall_rating']})"
)
for entry in cat_data["entries"]:
lines.append(
f" {entry['category']:20s} "
f"Error: {entry['error_pct']:5.1f}% "
f"Bias: {entry['bias']:5s} "
f"Rating: {entry['rating']}"
)
# Recommendations
lines.append("")
lines.append("RECOMMENDATIONS")
lines.append("-" * 40)
for i, rec in enumerate(results["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for forecast accuracy tracker CLI."""
parser = argparse.ArgumentParser(
description="Track and analyze forecast accuracy for SaaS revenue teams."
)
parser.add_argument(
"input",
help="Path to JSON file containing forecast data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input}: {e}", file=sys.stderr)
sys.exit(1)
if "forecast_periods" not in data:
print("Error: Missing required field 'forecast_periods' in input data", file=sys.stderr)
sys.exit(1)
results = track_forecast_accuracy(data)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text_report(results))
if __name__ == "__main__":
main()
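A minimal input file for this CLI, using only the fields the functions above read (`forecast_periods` is required; `category_breakdowns` is optional); all figures are illustrative:

```json
{
  "forecast_periods": [
    {"period": "2025-Q1", "forecast": 1200000, "actual": 1100000},
    {"period": "2025-Q2", "forecast": 1300000, "actual": 1250000}
  ],
  "category_breakdowns": {
    "rep": [
      {"category": "Rep A", "forecast": 600000, "actual": 540000},
      {"category": "Rep B", "forecast": 700000, "actual": 710000}
    ]
  }
}
```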


@@ -0,0 +1,658 @@
#!/usr/bin/env python3
"""GTM Efficiency Calculator - Calculates go-to-market efficiency metrics for SaaS.
Computes Magic Number, LTV:CAC, CAC Payback, Burn Multiple, Rule of 40,
and Net Dollar Retention with industry benchmarking and ratings.
Usage:
python gtm_efficiency_calculator.py gtm_data.json --format text
python gtm_efficiency_calculator.py gtm_data.json --format json
"""
import argparse
import json
import sys
from typing import Any
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
# --- Benchmark tables ---
# Each benchmark defines green/yellow/red thresholds
# and optional percentile placement guidance
BENCHMARKS = {
"magic_number": {
"green": {"min": 0.75, "label": ">0.75 - Efficient GTM spend"},
"yellow": {"min": 0.50, "max": 0.75, "label": "0.50-0.75 - Acceptable efficiency"},
"red": {"max": 0.50, "label": "<0.50 - Inefficient GTM spend"},
"elite": 1.0,
"description": "Net New ARR / Prior Period S&M Spend",
},
"ltv_cac_ratio": {
"green": {"min": 3.0, "label": ">3:1 - Strong unit economics"},
"yellow": {"min": 1.0, "max": 3.0, "label": "1:1-3:1 - Marginal unit economics"},
"red": {"max": 1.0, "label": "<1:1 - Unsustainable unit economics"},
"elite": 5.0,
"description": "Customer LTV / Customer Acquisition Cost",
},
"cac_payback_months": {
"green": {"max": 18, "label": "<18 months - Healthy payback"},
"yellow": {"min": 18, "max": 24, "label": "18-24 months - Acceptable payback"},
"red": {"min": 24, "label": ">24 months - Capital intensive"},
"elite": 12,
"description": "CAC / (ARPA x Gross Margin) in months",
},
"burn_multiple": {
"green": {"max": 2.0, "label": "<2x - Capital efficient growth"},
"yellow": {"min": 2.0, "max": 4.0, "label": "2-4x - Moderate burn"},
"red": {"min": 4.0, "label": ">4x - Unsustainable burn"},
"elite": 1.0,
"description": "Net Burn / Net New ARR",
},
"rule_of_40": {
"green": {"min": 40, "label": ">40% - Strong balance of growth & profitability"},
"yellow": {"min": 20, "max": 40, "label": "20-40% - Acceptable balance"},
"red": {"max": 20, "label": "<20% - Needs improvement"},
"elite": 60,
"description": "Revenue Growth % + FCF Margin %",
},
"ndr_pct": {
"green": {"min": 110, "label": ">110% - Strong expansion revenue"},
"yellow": {"min": 100, "max": 110, "label": "100-110% - Stable base"},
"red": {"max": 100, "label": "<100% - Net revenue contraction"},
"elite": 130,
"description": "(Begin ARR + Expansion - Contraction - Churn) / Begin ARR",
},
}
def rate_metric(metric_name: str, value: float) -> dict[str, str]:
"""Rate a metric as Green/Yellow/Red based on benchmark thresholds.
Args:
metric_name: Key into BENCHMARKS dict.
value: The metric value to rate.
Returns:
Dict with rating color, label, and percentile guidance.
"""
bench = BENCHMARKS.get(metric_name)
if not bench:
return {"rating": "Unknown", "label": "No benchmark available"}
# For metrics where lower is better (cac_payback, burn_multiple)
lower_is_better = metric_name in ("cac_payback_months", "burn_multiple")
if lower_is_better:
if "max" in bench["green"] and value <= bench["green"]["max"]:
rating = "Green"
label = bench["green"]["label"]
elif "min" in bench.get("yellow", {}) and "max" in bench.get("yellow", {}):
if bench["yellow"]["min"] <= value <= bench["yellow"]["max"]:
rating = "Yellow"
label = bench["yellow"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
else:
if "min" in bench["green"] and value >= bench["green"]["min"]:
rating = "Green"
label = bench["green"]["label"]
elif "min" in bench.get("yellow", {}) and "max" in bench.get("yellow", {}):
if bench["yellow"]["min"] <= value <= bench["yellow"]["max"]:
rating = "Yellow"
label = bench["yellow"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
# Percentile placement (simplified)
elite = bench.get("elite", 0)
if lower_is_better:
if elite > 0 and value > 0:
if value <= elite:
percentile = "Top 10%"
elif rating == "Green":
percentile = "Top 25%"
elif rating == "Yellow":
percentile = "Median"
else:
percentile = "Below median"
else:
percentile = "N/A"
else:
if elite > 0:
if value >= elite:
percentile = "Top 10%"
elif rating == "Green":
percentile = "Top 25%"
elif rating == "Yellow":
percentile = "Median"
else:
percentile = "Below median"
else:
percentile = "N/A"
return {
"rating": rating,
"label": label,
"percentile": percentile,
}
def calculate_magic_number(net_new_arr: float, sm_spend: float) -> dict[str, Any]:
"""Calculate Magic Number.
Formula: Net New ARR / Prior Period S&M Spend
Target: >0.75
Args:
net_new_arr: Net new annual recurring revenue in the period.
sm_spend: Sales & marketing spend in the prior period.
Returns:
Magic number value with rating and benchmark.
"""
value = safe_divide(net_new_arr, sm_spend)
benchmark = rate_metric("magic_number", value)
return {
"value": round(value, 2),
"net_new_arr": net_new_arr,
"sm_spend": sm_spend,
"formula": "Net New ARR / Prior Period S&M Spend",
"target": ">0.75",
**benchmark,
}
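A quick arithmetic check of the Magic Number formula with hypothetical figures:

```python
# Magic Number = Net New ARR / prior-period S&M spend (figures are hypothetical).
net_new_arr = 900_000.0
sm_spend = 1_000_000.0

magic_number = net_new_arr / sm_spend if sm_spend else 0.0
print(round(magic_number, 2))  # prints: 0.9  -> "Green" (>0.75) under the benchmarks above
```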
def calculate_ltv_cac(
arpa_monthly: float,
gross_margin_pct: float,
annual_churn_rate_pct: float,
cac: float,
) -> dict[str, Any]:
"""Calculate LTV:CAC Ratio.
LTV = ARPA_monthly x 12 x Gross Margin / Annual Churn Rate
Ratio = LTV / CAC
Target: >3:1
Args:
arpa_monthly: Average revenue per account per month.
gross_margin_pct: Gross margin as percentage (e.g., 78 for 78%).
annual_churn_rate_pct: Annual churn rate as percentage (e.g., 8 for 8%).
cac: Customer acquisition cost.
Returns:
LTV:CAC ratio with component values, rating, and benchmark.
"""
gross_margin = gross_margin_pct / 100
churn_rate = annual_churn_rate_pct / 100
arpa_annual = arpa_monthly * 12
ltv = safe_divide(arpa_annual * gross_margin, churn_rate)
ratio = safe_divide(ltv, cac)
benchmark = rate_metric("ltv_cac_ratio", ratio)
return {
"ratio": round(ratio, 1),
"ltv": round(ltv, 2),
"cac": cac,
"arpa_monthly": arpa_monthly,
"arpa_annual": arpa_annual,
"gross_margin_pct": gross_margin_pct,
"annual_churn_rate_pct": annual_churn_rate_pct,
"formula": "LTV (ARPA x Gross Margin / Churn Rate) / CAC",
"target": ">3:1",
**benchmark,
}
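The LTV formula compresses to a few multiplications; a worked example with hypothetical inputs:

```python
# LTV = (monthly ARPA x 12 x gross margin) / annual churn rate; ratio = LTV / CAC.
# All figures are hypothetical.
arpa_monthly = 1_000.0
gross_margin = 0.80        # 80%
annual_churn_rate = 0.10   # 10% of ARR churns per year
cac = 20_000.0

ltv = (arpa_monthly * 12 * gross_margin) / annual_churn_rate
ratio = ltv / cac
print(round(ltv, 2), round(ratio, 1))  # prints: 96000.0 4.8
```

At 4.8:1 this lands in the "Green" band (>3:1) but below the elite 5:1 threshold defined above.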
def calculate_cac_payback(
cac: float, arpa_monthly: float, gross_margin_pct: float
) -> dict[str, Any]:
"""Calculate CAC Payback Period.
Formula: CAC / (ARPA_monthly x Gross Margin) in months
Target: <18 months
Args:
cac: Customer acquisition cost.
arpa_monthly: Average revenue per account per month.
gross_margin_pct: Gross margin as percentage.
Returns:
CAC payback months with rating and benchmark.
"""
gross_margin = gross_margin_pct / 100
monthly_contribution = arpa_monthly * gross_margin
payback_months = safe_divide(cac, monthly_contribution)
benchmark = rate_metric("cac_payback_months", payback_months)
return {
"months": round(payback_months, 1),
"cac": cac,
"arpa_monthly": arpa_monthly,
"gross_margin_pct": gross_margin_pct,
"monthly_contribution": round(monthly_contribution, 2),
"formula": "CAC / (ARPA_monthly x Gross Margin)",
"target": "<18 months",
**benchmark,
}
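The same hypothetical inputs make the payback arithmetic concrete:

```python
# CAC payback = CAC / (monthly ARPA x gross margin), in months (hypothetical figures).
cac = 20_000.0
arpa_monthly = 1_000.0
gross_margin = 0.80

monthly_contribution = arpa_monthly * gross_margin  # gross profit recovered per month
payback_months = cac / monthly_contribution
print(round(payback_months, 1))  # prints: 25.0  -> "Red" (>24 months) under the benchmarks above
```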
def calculate_burn_multiple(net_burn: float, net_new_arr: float) -> dict[str, Any]:
"""Calculate Burn Multiple.
Formula: Net Burn / Net New ARR
Target: <2x (lower is better)
Args:
net_burn: Net cash burn in the period.
net_new_arr: Net new ARR added in the period.
Returns:
Burn multiple with rating and benchmark.
"""
value = safe_divide(net_burn, net_new_arr)
benchmark = rate_metric("burn_multiple", value)
return {
"value": round(value, 2),
"net_burn": net_burn,
"net_new_arr": net_new_arr,
"formula": "Net Burn / Net New ARR",
"target": "<2x",
**benchmark,
}
def calculate_rule_of_40(
revenue_growth_pct: float, fcf_margin_pct: float
) -> dict[str, Any]:
"""Calculate Rule of 40.
Formula: Revenue Growth % + FCF Margin %
Target: >40%
Args:
revenue_growth_pct: Year-over-year revenue growth percentage.
fcf_margin_pct: Free cash flow margin percentage.
Returns:
Rule of 40 score with rating and benchmark.
"""
value = revenue_growth_pct + fcf_margin_pct
benchmark = rate_metric("rule_of_40", value)
return {
"value": round(value, 1),
"revenue_growth_pct": revenue_growth_pct,
"fcf_margin_pct": fcf_margin_pct,
"formula": "Revenue Growth % + FCF Margin %",
"target": ">40%",
**benchmark,
}
def calculate_ndr(
beginning_arr: float,
expansion_arr: float,
contraction_arr: float,
churned_arr: float,
) -> dict[str, Any]:
"""Calculate Net Dollar Retention.
Formula: (Beginning ARR + Expansion - Contraction - Churn) / Beginning ARR
Target: >110%
Args:
beginning_arr: ARR at start of period.
expansion_arr: Expansion revenue from existing customers.
contraction_arr: Revenue lost from downgrades.
churned_arr: Revenue lost from customer churn.
Returns:
NDR percentage with rating and benchmark.
"""
ending_arr = beginning_arr + expansion_arr - contraction_arr - churned_arr
ndr_pct = safe_divide(ending_arr, beginning_arr) * 100
benchmark = rate_metric("ndr_pct", ndr_pct)
return {
"ndr_pct": round(ndr_pct, 1),
"beginning_arr": beginning_arr,
"expansion_arr": expansion_arr,
"contraction_arr": contraction_arr,
"churned_arr": churned_arr,
"ending_arr": round(ending_arr, 2),
"formula": "(Begin ARR + Expansion - Contraction - Churn) / Begin ARR",
"target": ">110%",
**benchmark,
}
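A worked NDR example with hypothetical ARR movements:

```python
# NDR = (beginning ARR + expansion - contraction - churn) / beginning ARR.
# All figures are hypothetical.
beginning_arr = 1_000_000.0
expansion_arr = 180_000.0
contraction_arr = 30_000.0
churned_arr = 50_000.0

ending_arr = beginning_arr + expansion_arr - contraction_arr - churned_arr
ndr_pct = ending_arr / beginning_arr * 100
print(round(ndr_pct, 1))  # prints: 110.0  -> borderline "Green" under the >110% benchmark
```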
def generate_recommendations(metrics: dict) -> list[str]:
"""Generate strategic recommendations based on GTM efficiency metrics.
Args:
metrics: Dict of all calculated metric results.
Returns:
List of recommendation strings.
"""
recs = []
# Magic Number
mn = metrics["magic_number"]
if mn["rating"] == "Red":
recs.append(
f"Magic Number is {mn['value']} (target >0.75). GTM spend is inefficient. "
"Audit channel ROI, optimize sales productivity, and consider reducing "
"low-performing spend."
)
elif mn["rating"] == "Yellow":
recs.append(
f"Magic Number is {mn['value']}. GTM efficiency is acceptable but can improve. "
"Focus on sales enablement and pipeline quality over quantity."
)
# LTV:CAC
lc = metrics["ltv_cac"]
if lc["rating"] == "Red":
recs.append(
f"LTV:CAC ratio is {lc['ratio']}:1 (target >3:1). Unit economics are unsustainable. "
"Reduce CAC through better targeting, improve retention to increase LTV, "
"or increase ARPA through pricing optimization."
)
elif lc["rating"] == "Yellow":
recs.append(
f"LTV:CAC ratio is {lc['ratio']}:1. Unit economics are marginal. "
"Focus on reducing churn and expanding within existing accounts."
)
# CAC Payback
cp = metrics["cac_payback"]
if cp["rating"] == "Red":
recs.append(
f"CAC payback is {cp['months']} months (target <18). Capital recovery is too slow. "
"Reduce acquisition costs or increase gross-margin-weighted ARPA."
)
# Burn Multiple
bm = metrics["burn_multiple"]
if bm["rating"] == "Red":
recs.append(
f"Burn multiple is {bm['value']}x (target <2x). Cash consumption relative to "
"growth is unsustainable. Prioritize operating efficiency and path to profitability."
)
# Rule of 40
r40 = metrics["rule_of_40"]
if r40["rating"] == "Red":
recs.append(
f"Rule of 40 score is {r40['value']}% (target >40%). Balance of growth and "
"profitability needs improvement. Either accelerate growth or improve margins."
)
# NDR
ndr = metrics["ndr"]
if ndr["rating"] == "Red":
recs.append(
f"NDR is {ndr['ndr_pct']}% (target >110%). Net revenue is contracting from "
"the existing base. Prioritize churn reduction and expansion playbooks."
)
elif ndr["rating"] == "Yellow":
recs.append(
f"NDR is {ndr['ndr_pct']}%. Base is stable but not expanding. "
"Invest in cross-sell/upsell motions and customer success capacity."
)
# Positive summary if everything is green
green_count = sum(
1 for m in metrics.values()
if isinstance(m, dict) and m.get("rating") == "Green"
)
total_metrics = 6
if green_count == total_metrics:
recs.append(
"All GTM efficiency metrics are in healthy ranges. Maintain current "
"trajectory and optimize for best-in-class performance."
)
elif green_count >= 4:
recs.append(
f"{green_count}/{total_metrics} metrics are green. GTM efficiency is generally "
"healthy. Address the yellow/red areas for continuous improvement."
)
return recs
def calculate_all_metrics(data: dict) -> dict[str, Any]:
"""Calculate all GTM efficiency metrics from input data.
Args:
data: Input data with revenue, costs, and customers sections.
Returns:
Complete GTM efficiency analysis results.
"""
revenue = data["revenue"]
costs = data["costs"]
customers = data["customers"]
metrics = {
"magic_number": calculate_magic_number(
net_new_arr=revenue["net_new_arr"],
sm_spend=costs["sales_marketing_spend"],
),
"ltv_cac": calculate_ltv_cac(
arpa_monthly=revenue["arpa_monthly"],
gross_margin_pct=costs["gross_margin_pct"],
annual_churn_rate_pct=customers["annual_churn_rate_pct"],
cac=costs["cac"],
),
"cac_payback": calculate_cac_payback(
cac=costs["cac"],
arpa_monthly=revenue["arpa_monthly"],
gross_margin_pct=costs["gross_margin_pct"],
),
"burn_multiple": calculate_burn_multiple(
net_burn=costs["net_burn"],
net_new_arr=revenue["net_new_arr"],
),
"rule_of_40": calculate_rule_of_40(
revenue_growth_pct=revenue["revenue_growth_pct"],
fcf_margin_pct=costs["fcf_margin_pct"],
),
"ndr": calculate_ndr(
beginning_arr=customers["beginning_arr"],
expansion_arr=customers["expansion_arr"],
contraction_arr=customers["contraction_arr"],
churned_arr=customers["churned_arr"],
),
}
metrics["recommendations"] = generate_recommendations(metrics)
return metrics
def format_currency(value: float) -> str:
"""Format a number as currency."""
if abs(value) >= 1_000_000:
return f"${value / 1_000_000:,.1f}M"
elif abs(value) >= 1_000:
return f"${value / 1_000:,.1f}K"
return f"${value:,.0f}"
def format_text_report(results: dict) -> str:
"""Format analysis results as a human-readable text report."""
lines = []
lines.append("=" * 70)
lines.append("GTM EFFICIENCY REPORT")
lines.append("=" * 70)
# Metric summary table
metrics_order = [
("magic_number", "Magic Number", lambda m: f"{m['value']}"),
("ltv_cac", "LTV:CAC Ratio", lambda m: f"{m['ratio']}:1"),
("cac_payback", "CAC Payback", lambda m: f"{m['months']} months"),
("burn_multiple", "Burn Multiple", lambda m: f"{m['value']}x"),
("rule_of_40", "Rule of 40", lambda m: f"{m['value']}%"),
("ndr", "Net Dollar Retention", lambda m: f"{m['ndr_pct']}%"),
]
lines.append("")
lines.append("METRICS SUMMARY")
lines.append("-" * 70)
lines.append(f" {'Metric':25s} {'Value':>12s} {'Rating':>8s} {'Target':>15s}")
    lines.append(f" {'-' * 25:25s} {'-' * 12:>12s} {'-' * 8:>8s} {'-' * 15:>15s}")
for key, name, fmt_fn in metrics_order:
m = results[key]
lines.append(
f" {name:25s} {fmt_fn(m):>12s} {m['rating']:>8s} {m['target']:>15s}"
)
# Detailed breakdown
lines.append("")
lines.append("DETAILED BREAKDOWN")
lines.append("-" * 70)
# Magic Number
mn = results["magic_number"]
lines.append("")
lines.append(f" MAGIC NUMBER: {mn['value']}")
lines.append(f" Net New ARR: {format_currency(mn['net_new_arr'])}")
lines.append(f" S&M Spend: {format_currency(mn['sm_spend'])}")
lines.append(f" Rating: {mn['rating']} - {mn['label']}")
lines.append(f" Percentile: {mn['percentile']}")
# LTV:CAC
lc = results["ltv_cac"]
lines.append("")
lines.append(f" LTV:CAC RATIO: {lc['ratio']}:1")
lines.append(f" Customer LTV: {format_currency(lc['ltv'])}")
lines.append(f" CAC: {format_currency(lc['cac'])}")
lines.append(f" ARPA (Monthly): {format_currency(lc['arpa_monthly'])}")
lines.append(f" Gross Margin: {lc['gross_margin_pct']}%")
lines.append(f" Churn Rate: {lc['annual_churn_rate_pct']}%")
lines.append(f" Rating: {lc['rating']} - {lc['label']}")
lines.append(f" Percentile: {lc['percentile']}")
# CAC Payback
cp = results["cac_payback"]
lines.append("")
lines.append(f" CAC PAYBACK: {cp['months']} months")
lines.append(f" CAC: {format_currency(cp['cac'])}")
lines.append(f" Monthly Contribution:{format_currency(cp['monthly_contribution'])}")
lines.append(f" Rating: {cp['rating']} - {cp['label']}")
lines.append(f" Percentile: {cp['percentile']}")
# Burn Multiple
bm = results["burn_multiple"]
lines.append("")
lines.append(f" BURN MULTIPLE: {bm['value']}x")
lines.append(f" Net Burn: {format_currency(bm['net_burn'])}")
lines.append(f" Net New ARR: {format_currency(bm['net_new_arr'])}")
lines.append(f" Rating: {bm['rating']} - {bm['label']}")
lines.append(f" Percentile: {bm['percentile']}")
# Rule of 40
r40 = results["rule_of_40"]
lines.append("")
lines.append(f" RULE OF 40: {r40['value']}%")
lines.append(f" Revenue Growth: {r40['revenue_growth_pct']}%")
lines.append(f" FCF Margin: {r40['fcf_margin_pct']}%")
lines.append(f" Rating: {r40['rating']} - {r40['label']}")
lines.append(f" Percentile: {r40['percentile']}")
# NDR
ndr = results["ndr"]
lines.append("")
lines.append(f" NET DOLLAR RETENTION: {ndr['ndr_pct']}%")
lines.append(f" Beginning ARR: {format_currency(ndr['beginning_arr'])}")
lines.append(f" Expansion: +{format_currency(ndr['expansion_arr'])}")
lines.append(f" Contraction: -{format_currency(ndr['contraction_arr'])}")
lines.append(f" Churn: -{format_currency(ndr['churned_arr'])}")
lines.append(f" Ending ARR: {format_currency(ndr['ending_arr'])}")
lines.append(f" Rating: {ndr['rating']} - {ndr['label']}")
lines.append(f" Percentile: {ndr['percentile']}")
# Recommendations
lines.append("")
lines.append("RECOMMENDATIONS")
lines.append("-" * 70)
for i, rec in enumerate(results["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for GTM efficiency calculator CLI."""
parser = argparse.ArgumentParser(
description="Calculate GTM efficiency metrics for SaaS revenue teams."
)
parser.add_argument(
"input",
help="Path to JSON file containing GTM data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input}: {e}", file=sys.stderr)
sys.exit(1)
required_sections = ["revenue", "costs", "customers"]
for section in required_sections:
if section not in data:
print(
f"Error: Missing required section '{section}' in input data",
file=sys.stderr,
)
sys.exit(1)
results = calculate_all_metrics(data)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text_report(results))
if __name__ == "__main__":
main()
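A minimal input file for this CLI, covering exactly the fields `calculate_all_metrics` reads from the `revenue`, `costs`, and `customers` sections; all figures are illustrative:

```json
{
  "revenue": {
    "net_new_arr": 900000,
    "arpa_monthly": 1000,
    "revenue_growth_pct": 45
  },
  "costs": {
    "sales_marketing_spend": 1000000,
    "gross_margin_pct": 80,
    "cac": 20000,
    "net_burn": 1500000,
    "fcf_margin_pct": -10
  },
  "customers": {
    "annual_churn_rate_pct": 10,
    "beginning_arr": 1000000,
    "expansion_arr": 180000,
    "contraction_arr": 30000,
    "churned_arr": 50000
  }
}
```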


@@ -0,0 +1,496 @@
#!/usr/bin/env python3
"""Pipeline Analyzer - Analyzes sales pipeline health for SaaS revenue teams.
Calculates pipeline coverage ratios, stage conversion rates, sales velocity,
deal aging risks, and concentration risks from pipeline data.
Usage:
python pipeline_analyzer.py --input pipeline.json --format text
python pipeline_analyzer.py --input pipeline.json --format json
"""
import argparse
import json
import sys
from datetime import datetime, date
from typing import Any
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def parse_date(date_str: str) -> date:
"""Parse a date string in YYYY-MM-DD format."""
return datetime.strptime(date_str, "%Y-%m-%d").date()
def get_quarter(d: date) -> str:
"""Return the quarter string for a given date (e.g., '2025-Q1')."""
quarter = (d.month - 1) // 3 + 1
return f"{d.year}-Q{quarter}"
def calculate_coverage_ratio(deals: list[dict], quota: float) -> dict[str, Any]:
"""Calculate pipeline coverage ratio against quota.
Target: 3-4x pipeline coverage for healthy pipeline.
"""
total_pipeline = sum(d["value"] for d in deals if d["stage"] != "Closed Won")
ratio = safe_divide(total_pipeline, quota)
if ratio >= 4.0:
rating = "Strong"
elif ratio >= 3.0:
rating = "Healthy"
elif ratio >= 2.0:
rating = "At Risk"
else:
rating = "Critical"
return {
"total_pipeline_value": total_pipeline,
"quota": quota,
"coverage_ratio": round(ratio, 2),
"rating": rating,
"target": "3.0x - 4.0x",
}
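The coverage calculation excludes won deals before dividing by quota; a small self-contained illustration with hypothetical deals:

```python
# Coverage ratio = open (not Closed Won) pipeline value / quota (sample data is hypothetical).
deals = [
    {"stage": "Discovery", "value": 400_000.0},
    {"stage": "Proposal", "value": 500_000.0},
    {"stage": "Closed Won", "value": 200_000.0},  # excluded from open pipeline
]
quota = 300_000.0

open_pipeline = sum(d["value"] for d in deals if d["stage"] != "Closed Won")
ratio = open_pipeline / quota if quota else 0.0
print(open_pipeline, round(ratio, 2))  # prints: 900000.0 3.0  -> "Healthy" (3.0x - 4.0x)
```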
def calculate_stage_conversion_rates(
deals: list[dict], stages: list[str]
) -> list[dict[str, Any]]:
"""Calculate stage-to-stage conversion rates.
Measures the percentage of deals that progress from one stage to the next.
"""
stage_order = {stage: i for i, stage in enumerate(stages)}
stage_counts: dict[str, int] = {stage: 0 for stage in stages}
for deal in deals:
stage = deal["stage"]
if stage in stage_order:
stage_idx = stage_order[stage]
# A deal at stage N has passed through all stages 0..N
for i in range(stage_idx + 1):
stage_counts[stages[i]] += 1
conversions = []
for i in range(len(stages) - 1):
from_stage = stages[i]
to_stage = stages[i + 1]
from_count = stage_counts[from_stage]
to_count = stage_counts[to_stage]
rate = safe_divide(to_count, from_count) * 100
conversions.append({
"from_stage": from_stage,
"to_stage": to_stage,
"from_count": from_count,
"to_count": to_count,
"conversion_rate_pct": round(rate, 1),
})
return conversions
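The cumulative counting above treats a deal currently at stage N as having passed through every earlier stage, so conversion is "deals that reached the next stage / deals that reached this stage". A compact sketch on hypothetical data:

```python
# A deal at stage index N counts toward stages 0..N (sample data is hypothetical).
stages = ["Discovery", "Proposal", "Negotiation"]
deals = [
    {"stage": "Discovery"},
    {"stage": "Proposal"},
    {"stage": "Negotiation"},
    {"stage": "Negotiation"},
]

order = {s: i for i, s in enumerate(stages)}
counts = {s: 0 for s in stages}
for d in deals:
    for i in range(order[d["stage"]] + 1):
        counts[stages[i]] += 1

# Stage-to-stage conversion rates, worst-case cumulative style.
conv = [
    round(counts[stages[i + 1]] / counts[stages[i]] * 100, 1)
    for i in range(len(stages) - 1)
]
print(counts, conv)
```

Here all 4 deals count toward Discovery, 3 reached Proposal (75.0% conversion), and 2 reached Negotiation (66.7%).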
def calculate_sales_velocity(deals: list[dict]) -> dict[str, Any]:
"""Calculate sales velocity.
Formula: (# opportunities x avg deal size x win rate) / avg sales cycle length
Result is revenue per day.
"""
if not deals:
return {
"num_opportunities": 0,
"avg_deal_size": 0,
"win_rate_pct": 0,
"avg_cycle_days": 0,
"velocity_per_day": 0,
"velocity_per_month": 0,
}
won_deals = [d for d in deals if d["stage"] == "Closed Won"]
    num_opportunities = len(deals)
    avg_deal_size = safe_divide(
        sum(d["value"] for d in deals), num_opportunities
    )
    win_rate = safe_divide(len(won_deals), num_opportunities)
    avg_cycle_days = safe_divide(
        sum(d["age_days"] for d in deals), num_opportunities
    )
velocity_per_day = safe_divide(
num_opportunities * avg_deal_size * win_rate, avg_cycle_days
)
return {
"num_opportunities": num_opportunities,
"avg_deal_size": round(avg_deal_size, 2),
"win_rate_pct": round(win_rate * 100, 1),
"avg_cycle_days": round(avg_cycle_days, 1),
"velocity_per_day": round(velocity_per_day, 2),
"velocity_per_month": round(velocity_per_day * 30, 2),
}
def analyze_deal_aging(
deals: list[dict], average_cycle_days: int, stages: list[str]
) -> dict[str, Any]:
"""Analyze deal aging and flag stale deals.
Flags deals older than 2x the average cycle time.
Uses stage-specific thresholds based on position in the pipeline.
"""
aging_threshold = average_cycle_days * 2
num_stages = len(stages)
# Stage-specific thresholds: early stages get more time, later stages less
stage_thresholds: dict[str, int] = {}
for i, stage in enumerate(stages):
if stage == "Closed Won":
continue
        # Progressive thresholds: the first stage gets 2x the average cycle, the last open stage 1x
progress = safe_divide(i, num_stages - 1)
threshold = int(average_cycle_days * (1.0 + (1.0 - progress)))
stage_thresholds[stage] = threshold
aging_deals = []
healthy_deals = 0
at_risk_deals = 0
for deal in deals:
if deal["stage"] == "Closed Won":
continue
stage = deal["stage"]
age = deal["age_days"]
threshold = stage_thresholds.get(stage, aging_threshold)
if age > threshold:
at_risk_deals += 1
aging_deals.append({
"id": deal["id"],
"name": deal["name"],
"stage": stage,
"age_days": age,
"threshold_days": threshold,
"days_over": age - threshold,
"value": deal["value"],
})
else:
healthy_deals += 1
aging_deals.sort(key=lambda x: x["days_over"], reverse=True)
return {
"global_aging_threshold_days": aging_threshold,
"stage_thresholds": stage_thresholds,
"total_open_deals": healthy_deals + at_risk_deals,
"healthy_deals": healthy_deals,
"at_risk_deals": at_risk_deals,
"aging_deals": aging_deals,
}
def assess_pipeline_risk(
deals: list[dict], quota: float, stages: list[str]
) -> dict[str, Any]:
"""Assess overall pipeline risk.
Checks for:
- Concentration risk (>40% in single deal)
- Stage distribution health
- Coverage gap by quarter
"""
open_deals = [d for d in deals if d["stage"] != "Closed Won"]
total_pipeline = sum(d["value"] for d in open_deals)
# Concentration risk
concentration_risks = []
for deal in open_deals:
pct = safe_divide(deal["value"], total_pipeline) * 100
if pct > 40:
concentration_risks.append({
"id": deal["id"],
"name": deal["name"],
"value": deal["value"],
"pct_of_pipeline": round(pct, 1),
"risk_level": "HIGH",
})
elif pct > 25:
concentration_risks.append({
"id": deal["id"],
"name": deal["name"],
"value": deal["value"],
"pct_of_pipeline": round(pct, 1),
"risk_level": "MEDIUM",
})
has_concentration_risk = any(
r["risk_level"] == "HIGH" for r in concentration_risks
)
# Stage distribution
stage_distribution: dict[str, dict] = {}
for stage in stages:
if stage == "Closed Won":
continue
stage_deals = [d for d in open_deals if d["stage"] == stage]
count = len(stage_deals)
value = sum(d["value"] for d in stage_deals)
stage_distribution[stage] = {
"count": count,
"value": value,
"pct_of_pipeline": round(safe_divide(value, total_pipeline) * 100, 1),
}
# Check for empty stages (unhealthy funnel)
empty_stages = [
stage for stage, data in stage_distribution.items() if data["count"] == 0
]
# Coverage gap by quarter
quarterly_coverage: dict[str, float] = {}
for deal in open_deals:
try:
close_date = parse_date(deal["close_date"])
quarter = get_quarter(close_date)
quarterly_coverage[quarter] = (
quarterly_coverage.get(quarter, 0) + deal["value"]
)
except (ValueError, KeyError):
pass
quarterly_target = quota / 4
coverage_gaps = []
for quarter, value in sorted(quarterly_coverage.items()):
coverage = safe_divide(value, quarterly_target)
if coverage < 3.0:
coverage_gaps.append({
"quarter": quarter,
"pipeline_value": value,
"quarterly_target": quarterly_target,
"coverage_ratio": round(coverage, 2),
"gap": "Below 3x target",
})
# Overall risk rating
risk_factors = 0
if has_concentration_risk:
risk_factors += 2
if len(empty_stages) > 0:
risk_factors += 1
if len(coverage_gaps) > 0:
risk_factors += 1
if safe_divide(total_pipeline, quota) < 3.0:
risk_factors += 2
if risk_factors >= 4:
overall_risk = "HIGH"
elif risk_factors >= 2:
overall_risk = "MEDIUM"
else:
overall_risk = "LOW"
return {
"overall_risk": overall_risk,
"risk_factors_count": risk_factors,
"concentration_risks": concentration_risks,
"has_concentration_risk": has_concentration_risk,
"stage_distribution": stage_distribution,
"empty_stages": empty_stages,
"coverage_gaps": coverage_gaps,
}
def analyze_pipeline(data: dict) -> dict[str, Any]:
"""Run complete pipeline analysis.
Args:
data: Pipeline data with deals, quota, stages, and average_cycle_days.
Returns:
Complete analysis results dictionary.
"""
deals = data["deals"]
quota = data["quota"]
stages = data["stages"]
average_cycle_days = data.get("average_cycle_days", 45)
return {
"coverage": calculate_coverage_ratio(deals, quota),
"stage_conversions": calculate_stage_conversion_rates(deals, stages),
"velocity": calculate_sales_velocity(deals),
"aging": analyze_deal_aging(deals, average_cycle_days, stages),
"risk": assess_pipeline_risk(deals, quota, stages),
}
def format_currency(value: float) -> str:
"""Format a number as currency."""
if value >= 1_000_000:
return f"${value / 1_000_000:,.1f}M"
elif value >= 1_000:
return f"${value / 1_000:,.1f}K"
return f"${value:,.0f}"
def format_text_report(results: dict) -> str:
"""Format analysis results as a human-readable text report."""
lines = []
lines.append("=" * 70)
lines.append("PIPELINE ANALYSIS REPORT")
lines.append("=" * 70)
# Coverage
cov = results["coverage"]
lines.append("")
lines.append("PIPELINE COVERAGE")
lines.append("-" * 40)
lines.append(f" Total Pipeline: {format_currency(cov['total_pipeline_value'])}")
lines.append(f" Quota Target: {format_currency(cov['quota'])}")
lines.append(f" Coverage Ratio: {cov['coverage_ratio']}x (Target: {cov['target']})")
lines.append(f" Rating: {cov['rating']}")
# Stage Conversions
lines.append("")
lines.append("STAGE CONVERSION RATES")
lines.append("-" * 40)
for conv in results["stage_conversions"]:
lines.append(
f" {conv['from_stage']} -> {conv['to_stage']}: "
f"{conv['conversion_rate_pct']}% "
f"({conv['to_count']}/{conv['from_count']})"
)
# Velocity
vel = results["velocity"]
lines.append("")
lines.append("SALES VELOCITY")
lines.append("-" * 40)
lines.append(f" Opportunities: {vel['num_opportunities']}")
lines.append(f" Avg Deal Size: {format_currency(vel['avg_deal_size'])}")
lines.append(f" Win Rate: {vel['win_rate_pct']}%")
lines.append(f" Avg Cycle: {vel['avg_cycle_days']} days")
lines.append(f" Velocity/Day: {format_currency(vel['velocity_per_day'])}")
lines.append(f" Velocity/Month: {format_currency(vel['velocity_per_month'])}")
# Aging
aging = results["aging"]
lines.append("")
lines.append("DEAL AGING ANALYSIS")
lines.append("-" * 40)
lines.append(f" Total Open Deals: {aging['total_open_deals']}")
lines.append(f" Healthy: {aging['healthy_deals']}")
lines.append(f" At Risk: {aging['at_risk_deals']}")
if aging["aging_deals"]:
lines.append("")
lines.append(" AGING DEALS (needs attention):")
for deal in aging["aging_deals"]:
lines.append(
f" - {deal['name']} ({deal['stage']}): "
f"{deal['age_days']}d (threshold: {deal['threshold_days']}d, "
f"+{deal['days_over']}d over) | {format_currency(deal['value'])}"
)
# Risk
risk = results["risk"]
lines.append("")
lines.append("PIPELINE RISK ASSESSMENT")
lines.append("-" * 40)
lines.append(f" Overall Risk: {risk['overall_risk']}")
lines.append(f" Risk Factors: {risk['risk_factors_count']}")
if risk["concentration_risks"]:
lines.append("")
lines.append(" CONCENTRATION RISKS:")
for cr in risk["concentration_risks"]:
lines.append(
f" - {cr['name']}: {format_currency(cr['value'])} "
f"({cr['pct_of_pipeline']}% of pipeline) [{cr['risk_level']}]"
)
if risk["empty_stages"]:
lines.append("")
lines.append(f" EMPTY STAGES: {', '.join(risk['empty_stages'])}")
lines.append("")
lines.append(" STAGE DISTRIBUTION:")
for stage, data in risk["stage_distribution"].items():
bar = "#" * max(1, int(data["pct_of_pipeline"] / 2))
lines.append(
f" {stage:20s} {data['count']:3d} deals "
f"{format_currency(data['value']):>10s} "
f"{data['pct_of_pipeline']:5.1f}% {bar}"
)
if risk["coverage_gaps"]:
lines.append("")
lines.append(" COVERAGE GAPS BY QUARTER:")
for gap in risk["coverage_gaps"]:
lines.append(
f" - {gap['quarter']}: {gap['coverage_ratio']}x coverage "
f"({format_currency(gap['pipeline_value'])} vs "
f"{format_currency(gap['quarterly_target'])} target)"
)
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for pipeline analyzer CLI."""
parser = argparse.ArgumentParser(
description="Analyze sales pipeline health for SaaS revenue teams."
)
parser.add_argument(
"--input",
required=True,
help="Path to JSON file containing pipeline data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input}: {e}", file=sys.stderr)
sys.exit(1)
# Validate required fields
required_fields = ["deals", "quota", "stages"]
for field in required_fields:
if field not in data:
print(f"Error: Missing required field '{field}' in input data", file=sys.stderr)
sys.exit(1)
results = analyze_pipeline(data)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text_report(results))
if __name__ == "__main__":
main()


@@ -0,0 +1,247 @@
---
name: sales-engineer
description: Analyzes RFP responses for coverage gaps, builds competitive feature matrices, and plans proof-of-concept engagements for pre-sales engineering
---
# Sales Engineer Skill
A production-ready skill package for pre-sales engineering that bridges technical expertise and sales execution. Provides automated analysis for RFP/RFI responses, competitive positioning, and proof-of-concept planning.
## Overview
**Role:** Sales Engineer / Solutions Architect
**Domain:** Pre-Sales Engineering, Solution Design, Technical Demos, Proof of Concepts
**Business Type:** SaaS / Pre-Sales Engineering
### What This Skill Does
- **RFP/RFI Response Analysis** - Score requirement coverage, identify gaps, generate bid/no-bid recommendations
- **Competitive Technical Positioning** - Build feature comparison matrices, identify differentiators and vulnerabilities
- **POC Planning** - Generate timelines, resource plans, success criteria, and evaluation scorecards
- **Demo Preparation** - Structure demo scripts with talking points and objection handling
- **Technical Proposal Creation** - Framework for solution architecture and implementation planning
- **Win/Loss Analysis** - Data-driven competitive assessment for deal strategy
### Key Metrics
| Metric | Description | Target |
|--------|-------------|--------|
| Win Rate | Deals won / total opportunities | >30% |
| Sales Cycle Length | Average days from discovery to close | <90 days |
| POC Conversion Rate | POCs resulting in closed deals | >60% |
| Customer Engagement Score | Stakeholder participation in evaluation | >75% |
| RFP Coverage Score | Requirements fully addressed | >80% |
## 5-Phase Workflow
### Phase 1: Discovery & Research
**Objective:** Understand customer requirements, technical environment, and business drivers.
**Activities:**
1. Conduct technical discovery calls with stakeholders
2. Map customer's current architecture and pain points
3. Identify integration requirements and constraints
4. Document security and compliance requirements
5. Assess competitive landscape for this opportunity
**Tools:** Use `rfp_response_analyzer.py` to score initial requirement alignment.
**Output:** Technical discovery document, requirement map, initial coverage assessment.
### Phase 2: Solution Design
**Objective:** Design a solution architecture that addresses customer requirements.
**Activities:**
1. Map product capabilities to customer requirements
2. Design integration architecture
3. Identify customization needs and development effort
4. Build competitive differentiation strategy
5. Create solution architecture diagrams
**Tools:** Use `competitive_matrix_builder.py` to identify differentiators and vulnerabilities.
**Output:** Solution architecture, competitive positioning, technical differentiation strategy.
### Phase 3: Demo Preparation & Delivery
**Objective:** Deliver compelling technical demonstrations tailored to stakeholder priorities.
**Activities:**
1. Build demo environment matching customer's use case
2. Create demo script with talking points per stakeholder role
3. Prepare objection handling responses
4. Rehearse failure scenarios and recovery paths
5. Collect feedback and adjust approach
**Templates:** Use `demo_script_template.md` for structured demo preparation.
**Output:** Customized demo, stakeholder-specific talking points, feedback capture.
### Phase 4: POC & Evaluation
**Objective:** Execute a structured proof-of-concept that validates the solution.
**Activities:**
1. Define POC scope, success criteria, and timeline
2. Allocate resources and set up environment
3. Execute phased testing (core, advanced, edge cases)
4. Track progress against success criteria
5. Generate evaluation scorecard
**Tools:** Use `poc_planner.py` to generate the complete POC plan.
**Templates:** Use `poc_scorecard_template.md` for evaluation tracking.
**Output:** POC plan, evaluation scorecard, go/no-go recommendation.
### Phase 5: Proposal & Closing
**Objective:** Deliver a technical proposal that supports the commercial close.
**Activities:**
1. Compile POC results and success metrics
2. Create technical proposal with implementation plan
3. Address outstanding objections with evidence
4. Support pricing and packaging discussions
5. Conduct win/loss analysis post-decision
**Templates:** Use `technical_proposal_template.md` for the proposal document.
**Output:** Technical proposal, implementation timeline, risk mitigation plan.
## Python Automation Tools
### 1. RFP Response Analyzer
**Script:** `scripts/rfp_response_analyzer.py`
**Purpose:** Parse RFP/RFI requirements, score coverage, identify gaps, and generate bid/no-bid recommendations.
**Coverage Categories:**
- **Full (100%)** - Requirement fully met by current product
- **Partial (50%)** - Requirement partially met, workaround or configuration needed
- **Planned (25%)** - On product roadmap, not yet available
- **Gap (0%)** - Not supported, no current plan
**Priority Weighting:**
- Must-Have: 3x weight
- Should-Have: 2x weight
- Nice-to-Have: 1x weight
**Bid/No-Bid Logic:**
- **Bid:** Coverage score >70% AND must-have gaps <=1
- **Conditional Bid:** Coverage score 50-70% OR must-have gaps 2-3
- **No-Bid:** Coverage score <50% OR must-have gaps >3
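The scoring and decision rules above can be sketched as follows. This is an illustrative re-implementation, not the script's actual API: the constants come from this guide, the function names are assumptions, and a full Bid is assumed to require at most one must-have gap so the three outcomes partition cleanly.

```python
# Coverage scores and priority weights as documented above.
COVERAGE_SCORES = {"full": 1.0, "partial": 0.5, "planned": 0.25, "gap": 0.0}
PRIORITY_WEIGHTS = {"must-have": 3.0, "should-have": 2.0, "nice-to-have": 1.0}


def coverage_score(requirements: list[dict]) -> float:
    """Weighted coverage percentage across all requirements."""
    earned = sum(
        COVERAGE_SCORES[r["coverage_status"]] * PRIORITY_WEIGHTS[r["priority"]]
        for r in requirements
    )
    possible = sum(PRIORITY_WEIGHTS[r["priority"]] for r in requirements)
    return round(100 * earned / possible, 1) if possible else 0.0


def bid_decision(score: float, must_have_gaps: int) -> str:
    """Apply the documented thresholds (gaps <= 1 assumed for a full Bid)."""
    if score < 50 or must_have_gaps > 3:
        return "NO-BID"
    if score > 70 and must_have_gaps <= 1:
        return "BID"
    return "CONDITIONAL BID"
```

For example, two full must-haves, one partial should-have, and one gapped nice-to-have score (3 + 3 + 1 + 0) / (3 + 3 + 2 + 1) = 77.8%, which with zero must-have gaps yields a Bid.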
**Usage:**
```bash
# Human-readable output
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json
# JSON output
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json
# Help
python scripts/rfp_response_analyzer.py --help
```
**Input Format:** See `assets/sample_rfp_data.json` for the complete schema.
### 2. Competitive Matrix Builder
**Script:** `scripts/competitive_matrix_builder.py`
**Purpose:** Generate feature comparison matrices, calculate competitive scores, identify differentiators and vulnerabilities.
**Feature Scoring:**
- **Full (3)** - Complete feature support
- **Partial (2)** - Partial or limited feature support
- **Limited (1)** - Minimal or basic feature support
- **None (0)** - Feature not available
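A minimal sketch of how the scoring above can drive a weighted matrix. The input shape and function names here are hypothetical; the real script's schema may differ (see its `--help` output):

```python
# Feature support levels as documented above.
FEATURE_SCORES = {"full": 3, "partial": 2, "limited": 1, "none": 0}


def weighted_product_score(features: dict[str, str], weights: dict[str, int]) -> float:
    """Weighted score as a percentage of the maximum possible (all 'full')."""
    earned = sum(weights[f] * FEATURE_SCORES[level] for f, level in features.items())
    possible = sum(w * FEATURE_SCORES["full"] for w in weights.values())
    return round(100 * earned / possible, 1) if possible else 0.0


def differentiators(ours: dict[str, str], theirs: dict[str, str]) -> list[str]:
    """Features where our support level beats the competitor's."""
    return [
        f for f, level in ours.items()
        if FEATURE_SCORES[level] > FEATURE_SCORES[theirs.get(f, "none")]
    ]
```

Swapping the argument order of `differentiators` yields the vulnerabilities list, since a feature where the competitor's level is higher is exactly a feature where ours trails.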
**Usage:**
```bash
# Human-readable output
python scripts/competitive_matrix_builder.py competitive_data.json
# JSON output
python scripts/competitive_matrix_builder.py competitive_data.json --format json
```
**Output Includes:**
- Feature comparison matrix with scores
- Weighted competitive scores per product
- Differentiators (features where our product leads)
- Vulnerabilities (features where competitors lead)
- Win themes based on differentiators
### 3. POC Planner
**Script:** `scripts/poc_planner.py`
**Purpose:** Generate structured POC plans with timeline, resource allocation, success criteria, and evaluation scorecards.
**Default Phase Breakdown:**
- **Week 1:** Setup - Environment provisioning, data migration, configuration
- **Weeks 2-3:** Core Testing - Primary use cases, integration testing
- **Week 4:** Advanced Testing - Edge cases, performance, security
- **Week 5:** Evaluation - Scorecard completion, stakeholder review, go/no-go
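The default phase breakdown above can be mapped onto concrete dates roughly like this. The phase names and durations come from this guide; the function itself is an illustrative sketch, not the planner's actual implementation:

```python
from datetime import date, timedelta

# Default 5-week breakdown as documented above: (phase name, duration in weeks).
DEFAULT_PHASES = [
    ("Setup", 1),
    ("Core Testing", 2),
    ("Advanced Testing", 1),
    ("Evaluation", 1),
]


def build_timeline(start: date, phases: list[tuple[str, int]] = DEFAULT_PHASES) -> list[dict]:
    """Return phases with inclusive start/end dates, back to back from `start`."""
    timeline, cursor = [], start
    for name, weeks in phases:
        end = cursor + timedelta(weeks=weeks, days=-1)  # inclusive end date
        timeline.append({"phase": name, "start": cursor.isoformat(), "end": end.isoformat()})
        cursor = end + timedelta(days=1)
    return timeline
```

A POC starting Monday 2026-03-02 would run Setup through 2026-03-08, Core Testing through 2026-03-22, Advanced Testing through 2026-03-29, and Evaluation through 2026-04-05.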
**Usage:**
```bash
# Human-readable output
python scripts/poc_planner.py poc_data.json
# JSON output
python scripts/poc_planner.py poc_data.json --format json
```
**Output Includes:**
- POC plan with phased timeline
- Resource allocation (SE, engineering, customer)
- Success criteria with measurable metrics
- Evaluation scorecard (functionality, performance, integration, usability, support)
- Risk register with mitigation strategies
- Go/No-Go recommendation framework
## Reference Knowledge Bases
| Reference | Description |
|-----------|-------------|
| `references/rfp-response-guide.md` | RFP/RFI response best practices, compliance matrix, bid/no-bid framework |
| `references/competitive-positioning-framework.md` | Competitive analysis methodology, battlecard creation, objection handling |
| `references/poc-best-practices.md` | POC planning methodology, success criteria, evaluation frameworks |
## Asset Templates
| Template | Purpose |
|----------|---------|
| `assets/technical_proposal_template.md` | Technical proposal with executive summary, solution architecture, implementation plan |
| `assets/demo_script_template.md` | Demo script with agenda, talking points, objection handling |
| `assets/poc_scorecard_template.md` | POC evaluation scorecard with weighted scoring |
| `assets/sample_rfp_data.json` | Sample RFP data for testing the analyzer |
| `assets/expected_output.json` | Expected output from rfp_response_analyzer.py |
## Communication Style
- **Technical yet accessible** - Translate complex concepts for business stakeholders
- **Confident and consultative** - Position as trusted advisor, not vendor
- **Evidence-based** - Back every claim with data, demos, or case studies
- **Stakeholder-aware** - Tailor depth and focus to audience (CTO vs. end user vs. procurement)
## Integration Points
- **Marketing Skills** - Leverage competitive intelligence and messaging frameworks from `../../marketing-skill/`
- **Product Team** - Coordinate on roadmap items flagged as "Planned" in RFP analysis from `../../product-team/`
- **C-Level Advisory** - Escalate strategic deals requiring executive engagement from `../../c-level-advisor/`
- **Customer Success** - Hand off POC results and success criteria to CSM from `../customer-success-manager/`
---
**Last Updated:** February 2026
**Status:** Production-ready
**Tools:** 3 Python automation scripts
**References:** 3 knowledge base documents
**Templates:** 5 asset files


@@ -0,0 +1,232 @@
# Demo Script Template
## Demo Information
| Field | Value |
|-------|-------|
| Customer | [Customer Name] |
| Date/Time | [Date and Time] |
| Duration | [XX minutes] |
| Demo Environment | [Environment URL/Details] |
| Presenter | [Sales Engineer Name] |
| Account Executive (AE) | [AE Name] |
---
## Pre-Demo Checklist
- [ ] Demo environment tested and confirmed working
- [ ] Sample data loaded and validated
- [ ] Backup demo environment prepared
- [ ] Screen sharing tested with correct resolution
- [ ] Browser tabs pre-loaded with key screens
- [ ] Recording setup confirmed (if applicable)
- [ ] Customer-specific branding applied (if applicable)
- [ ] Network and VPN connectivity verified
- [ ] All integrations connected and tested
- [ ] Backup slides prepared in case of technical issues
---
## Attendees and Roles
| Name | Title | Role in Evaluation | Key Interest |
|------|-------|-------------------|--------------|
| [Name] | [CTO/VP Eng] | Decision Maker | ROI, strategic fit |
| [Name] | [Director] | Champion | Solving [specific problem] |
| [Name] | [Manager] | Technical Evaluator | Architecture, integrations |
| [Name] | [Analyst] | End User | Day-to-day usability |
---
## Agenda
| Time | Duration | Topic | Lead |
|------|----------|-------|------|
| 0:00 | 5 min | Welcome and introductions | AE |
| 0:05 | 5 min | Agenda and objectives | SE |
| 0:10 | 20 min | Core demo (Use Cases 1-3) | SE |
| 0:30 | 10 min | Integration demo | SE |
| 0:40 | 5 min | Admin and security overview | SE |
| 0:45 | 10 min | Q&A | SE + AE |
| 0:55 | 5 min | Next steps and wrap-up | AE |
---
## Demo Flow
### Opening (5 minutes)
**Talking Points:**
- Thank attendees for their time
- Recap what we learned in discovery: "[Summarize 2-3 key challenges]"
- Set expectations: "Today I'll show you how we address [Challenge 1], [Challenge 2], and [Challenge 3]"
- Frame the demo: "I'll be using [data type] similar to what you described in our earlier conversations"
**Transition:** "Let me start with the challenge you mentioned is most pressing: [Challenge 1]."
---
### Use Case 1: [Name] (7 minutes)
**Business Context:**
[1-2 sentences on why this matters to the customer]
**Demo Steps:**
1. **Step 1:** [Navigate to / Click on / Show...]
- **What to say:** "[Explain what they're seeing and why it matters]"
- **Highlight:** [Specific feature or capability to emphasize]
2. **Step 2:** [Navigate to / Click on / Show...]
- **What to say:** "[Connect this to their specific pain point]"
- **Highlight:** [Differentiator from competitor]
3. **Step 3:** [Navigate to / Click on / Show...]
- **What to say:** "[Quantify the value - time saved, errors reduced, etc.]"
- **Highlight:** [Ease of use or power of the feature]
**Key Message:** "[One sentence summarizing the value demonstrated]"
**Transition:** "Now that you've seen how we handle [Use Case 1], let me show you [Use Case 2]."
---
### Use Case 2: [Name] (7 minutes)
**Business Context:**
[1-2 sentences on why this matters to the customer]
**Demo Steps:**
1. **Step 1:** [Navigate to / Click on / Show...]
- **What to say:** "[Explanation]"
- **Highlight:** [Key capability]
2. **Step 2:** [Navigate to / Click on / Show...]
- **What to say:** "[Explanation]"
- **Highlight:** [Key capability]
3. **Step 3:** [Navigate to / Click on / Show...]
- **What to say:** "[Explanation]"
- **Highlight:** [Key capability]
**Key Message:** "[One sentence summarizing the value demonstrated]"
**Transition:** "[Transition statement to next section]"
---
### Use Case 3: [Name] (6 minutes)
**Business Context:**
[1-2 sentences on why this matters to the customer]
**Demo Steps:**
1. **Step 1:** [Description]
- **What to say:** "[Explanation]"
- **Highlight:** [Key capability]
2. **Step 2:** [Description]
- **What to say:** "[Explanation]"
- **Highlight:** [Key capability]
**Key Message:** "[One sentence summarizing the value demonstrated]"
---
### Integration Demo (10 minutes)
**Context:** "You mentioned that integration with [System X] and [System Y] is critical. Let me show you how that works."
**Demo Steps:**
1. **Show integration configuration:**
- **What to say:** "Setting up the connection takes [X minutes/clicks]"
- **Highlight:** Native connector, no custom code required
2. **Show data flow:**
- **What to say:** "Data syncs in [real-time/X minute intervals]"
- **Highlight:** Reliability, error handling, monitoring
3. **Show end-to-end workflow:**
- **What to say:** "Here's the complete flow from [source] to [destination]"
- **Highlight:** Automation, reduced manual effort
---
### Admin and Security (5 minutes)
**Demo Steps:**
1. **Show RBAC configuration:**
- **What to say:** "Administrators can define roles and permissions at [granularity level]"
2. **Show audit log:**
- **What to say:** "Every action is logged for compliance and security review"
3. **Show SSO setup:**
- **What to say:** "Single sign-on integrates with your existing identity provider"
---
## Objection Handling
### Anticipated Objections
| Objection | Response |
|-----------|----------|
| "[Feature X] looks limited compared to [Competitor]" | "Great observation. Our approach to [Feature X] focuses on [benefit]. What specific aspect of [Feature X] is most important to your workflow? [Then demonstrate or explain how we address the specific need]" |
| "How does this handle [edge case]?" | "That's an important scenario. [If supported: Let me show you how that works.] [If not directly: Here's how our customers typically handle that use case...]" |
| "What about performance at our scale?" | "Excellent question. Our platform handles [benchmark data]. For your specific scale of [X], we'd recommend [architecture approach]. We can validate this in a POC." |
| "The implementation timeline seems long" | "The timeline I shared is for the full solution. We can phase the rollout to deliver value sooner. Phase 1 would give you [core capability] within [X weeks]." |
| "What happens if we outgrow this?" | "Our architecture is designed for growth. [Describe scaling approach]. We have customers who have scaled from [X] to [Y] without re-architecture." |
### Recovery Strategies
**If the demo breaks:**
1. Stay calm: "Let me switch to [backup environment / backup approach]"
2. Explain what they would have seen
3. Offer to follow up with a recorded walkthrough
4. Pivot to the next demo section
**If an unexpected question derails the flow:**
1. Acknowledge: "That's an excellent question"
2. Briefly answer or note it for follow-up
3. Return to the demo flow: "Let me continue with [next section] and we can dive deeper into that during Q&A"
**If the audience seems disengaged:**
1. Pause and ask: "Before I continue, is this addressing what you're looking for?"
2. Adjust focus based on their response
3. Skip ahead to the section most relevant to their interests
---
## Post-Demo Actions
- [ ] Send thank-you email with recording link (if recorded)
- [ ] Share demo environment access credentials (if applicable)
- [ ] Send follow-up document addressing unanswered questions
- [ ] Schedule next meeting (POC kickoff, technical deep-dive, etc.)
- [ ] Update CRM with demo notes and next steps
- [ ] Debrief with AE on stakeholder reactions and concerns
- [ ] Log key objections and responses for battlecard updates
---
## Notes
[Space for real-time notes during the demo]
### Questions Raised
1. [Question] - [Answer / Follow-up needed]
2. [Question] - [Answer / Follow-up needed]
### Feedback Received
- [Positive feedback]
- [Concerns raised]
### Next Steps Agreed
1. [Action item] - [Owner] - [Date]
2. [Action item] - [Owner] - [Date]


@@ -0,0 +1,474 @@
{
"rfp_info": {
"rfp_name": "Enterprise Data Analytics Platform RFP",
"customer": "Acme Financial Services",
"due_date": "2026-03-15",
"strategic_value": "high",
"deal_value": "$450,000 ARR"
},
"coverage_summary": {
"overall_coverage_percentage": 84.5,
"total_requirements": 21,
"full": 14,
"partial": 3,
"planned": 2,
"gap": 2,
"must_have_gaps": 0
},
"category_scores": {
"Data Integration": {
"coverage_percentage": 90.0,
"requirements_count": 4,
"full": 3,
"partial": 1,
"planned": 0,
"gap": 0,
"effort_hours": 34
},
"Analytics & Visualization": {
"coverage_percentage": 77.8,
"requirements_count": 4,
"full": 2,
"partial": 1,
"planned": 1,
"gap": 0,
"effort_hours": 56
},
"Security & Compliance": {
"coverage_percentage": 81.8,
"requirements_count": 4,
"full": 3,
"partial": 0,
"planned": 0,
"gap": 1,
"effort_hours": 50
},
"Performance & Scalability": {
"coverage_percentage": 87.5,
"requirements_count": 3,
"full": 2,
"partial": 1,
"planned": 0,
"gap": 0,
"effort_hours": 32
},
"API & Extensibility": {
"coverage_percentage": 87.5,
"requirements_count": 3,
"full": 2,
"partial": 0,
"planned": 1,
"gap": 0,
"effort_hours": 38
},
"Support & SLA": {
"coverage_percentage": 100.0,
"requirements_count": 2,
"full": 2,
"partial": 0,
"planned": 0,
"gap": 0,
"effort_hours": 4
},
"Deployment": {
"coverage_percentage": 0.0,
"requirements_count": 1,
"full": 0,
"partial": 0,
"planned": 0,
"gap": 1,
"effort_hours": 80
}
},
"bid_recommendation": {
"decision": "BID",
"confidence": "high",
"overall_coverage_percentage": 84.5,
"must_have_gaps": 0,
"strategic_value": "high",
"reasons": [
"Coverage score 84.5% exceeds 70% threshold"
]
},
"gap_analysis": [
{
"id": "R-004",
"requirement": "Change data capture (CDC) for real-time sync",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "partial",
"severity": "high",
"effort_hours": 16,
"mitigation": "Document supported CDC sources; provide configuration guide for non-standard sources"
},
{
"id": "R-007",
"requirement": "Natural language query interface for business users",
"category": "Analytics & Visualization",
"priority": "should-have",
"coverage_status": "planned",
"severity": "high",
"effort_hours": 24,
"mitigation": "Share roadmap timeline; offer guided query builder as interim solution"
},
{
"id": "R-012",
"requirement": "HIPAA compliance for healthcare data handling",
"category": "Security & Compliance",
"priority": "should-have",
"coverage_status": "gap",
"severity": "high",
"effort_hours": 40,
"mitigation": "Evaluate HIPAA certification timeline with compliance team; consider data masking as interim"
},
{
"id": "R-015",
"requirement": "Multi-region deployment with data residency controls",
"category": "Performance & Scalability",
"priority": "should-have",
"coverage_status": "partial",
"severity": "high",
"effort_hours": 20,
"mitigation": "Confirm customer region requirements; provide APAC beta access if needed"
},
{
"id": "R-008",
"requirement": "Predictive analytics and ML model integration",
"category": "Analytics & Visualization",
"priority": "nice-to-have",
"coverage_status": "partial",
"severity": "low",
"effort_hours": 20,
"mitigation": "Demonstrate Python integration for custom models; provide example notebooks"
},
{
"id": "R-018",
"requirement": "Custom plugin/extension framework",
"category": "API & Extensibility",
"priority": "nice-to-have",
"coverage_status": "planned",
"severity": "low",
"effort_hours": 30,
"mitigation": "Current API extensibility covers most use cases; plugin framework will expand options"
},
{
"id": "R-021",
"requirement": "On-premise deployment option",
"category": "Deployment",
"priority": "nice-to-have",
"coverage_status": "gap",
"severity": "low",
"effort_hours": 80,
"mitigation": "Position cloud-first architecture benefits; offer VPC deployment as alternative"
}
],
"risk_assessment": [
{
"risk": "High customization effort",
"impact": "high",
"description": "230 hours estimated for non-full requirements",
"mitigation": "Evaluate resource availability and timeline feasibility before committing"
}
],
"effort_estimate": {
"total_hours": 294,
"gap_closure_hours": 230,
"full_coverage_hours": 64
},
"requirements_detail": [
{
"id": "R-001",
"requirement": "Real-time data ingestion from multiple sources (APIs, databases, streaming)",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 8,
"notes": "Native connectors for 200+ data sources",
"mitigation": ""
},
{
"id": "R-002",
"requirement": "Support for SQL and NoSQL data sources",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Supports PostgreSQL, MySQL, MongoDB, Cassandra, and more",
"mitigation": ""
},
{
"id": "R-003",
"requirement": "Automated ETL pipeline creation with visual designer",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 2.0,
"weighted_score": 2.0,
"max_weighted": 2.0,
"effort_hours": 6,
"notes": "Drag-and-drop pipeline builder included",
"mitigation": ""
},
{
"id": "R-004",
"requirement": "Change data capture (CDC) for real-time sync",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "partial",
"coverage_score": 0.5,
"weight": 2.0,
"weighted_score": 1.0,
"max_weighted": 2.0,
"effort_hours": 16,
"notes": "CDC supported for major databases; some require custom configuration",
"mitigation": "Document supported CDC sources; provide configuration guide for non-standard sources"
},
{
"id": "R-005",
"requirement": "Interactive dashboard creation with drag-and-drop",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Full drag-and-drop dashboard builder with 50+ chart types",
"mitigation": ""
},
{
"id": "R-006",
"requirement": "Embedded analytics with white-labeling support",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 8,
"notes": "Full embedding SDK with CSS customization",
"mitigation": ""
},
{
"id": "R-007",
"requirement": "Natural language query interface for business users",
"category": "Analytics & Visualization",
"priority": "should-have",
"coverage_status": "planned",
"coverage_score": 0.25,
"weight": 2.0,
"weighted_score": 0.5,
"max_weighted": 2.0,
"effort_hours": 24,
"notes": "NLQ feature on roadmap for Q3 2026",
"mitigation": "Share roadmap timeline; offer guided query builder as interim solution"
},
{
"id": "R-008",
"requirement": "Predictive analytics and ML model integration",
"category": "Analytics & Visualization",
"priority": "nice-to-have",
"coverage_status": "partial",
"coverage_score": 0.5,
"weight": 1.0,
"weighted_score": 0.5,
"max_weighted": 1.0,
"effort_hours": 20,
"notes": "Python/R integration available; no built-in ML models",
"mitigation": "Demonstrate Python integration for custom models; provide example notebooks"
},
{
"id": "R-009",
"requirement": "Role-based access control (RBAC) with row-level security",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 6,
"notes": "Granular RBAC with row-level and column-level security",
"mitigation": ""
},
{
"id": "R-010",
"requirement": "SOC 2 Type II certification",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 2,
"notes": "Current SOC 2 Type II report available upon NDA",
"mitigation": ""
},
{
"id": "R-011",
"requirement": "Data encryption at rest and in transit (AES-256, TLS 1.3)",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 2,
"notes": "AES-256 at rest, TLS 1.3 in transit, customer-managed keys supported",
"mitigation": ""
},
{
"id": "R-012",
"requirement": "HIPAA compliance for healthcare data handling",
"category": "Security & Compliance",
"priority": "should-have",
"coverage_status": "gap",
"coverage_score": 0.0,
"weight": 2.0,
"weighted_score": 0.0,
"max_weighted": 2.0,
"effort_hours": 40,
"notes": "HIPAA BAA not currently offered",
"mitigation": "Evaluate HIPAA certification timeline with compliance team; consider data masking as interim"
},
{
"id": "R-013",
"requirement": "Horizontal scaling to handle 10B+ rows",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 8,
"notes": "Distributed query engine scales to 50B+ rows",
"mitigation": ""
},
{
"id": "R-014",
"requirement": "Sub-second query response for cached dashboards",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Intelligent caching layer with <500ms p95 for cached queries",
"mitigation": ""
},
{
"id": "R-015",
"requirement": "Multi-region deployment with data residency controls",
"category": "Performance & Scalability",
"priority": "should-have",
"coverage_status": "partial",
"coverage_score": 0.5,
"weight": 2.0,
"weighted_score": 1.0,
"max_weighted": 2.0,
"effort_hours": 20,
"notes": "US and EU regions available; APAC region in beta",
"mitigation": "Confirm customer region requirements; provide APAC beta access if needed"
},
{
"id": "R-016",
"requirement": "RESTful API with comprehensive documentation",
"category": "API & Extensibility",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Full REST API with OpenAPI spec and interactive documentation",
"mitigation": ""
},
{
"id": "R-017",
"requirement": "Webhook support for event-driven workflows",
"category": "API & Extensibility",
"priority": "should-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 2.0,
"weighted_score": 2.0,
"max_weighted": 2.0,
"effort_hours": 4,
"notes": "Webhook support for 30+ event types",
"mitigation": ""
},
{
"id": "R-018",
"requirement": "Custom plugin/extension framework",
"category": "API & Extensibility",
"priority": "nice-to-have",
"coverage_status": "planned",
"coverage_score": 0.25,
"weight": 1.0,
"weighted_score": 0.25,
"max_weighted": 1.0,
"effort_hours": 30,
"notes": "Plugin framework on roadmap for Q4 2026",
"mitigation": "Current API extensibility covers most use cases; plugin framework will expand options"
},
{
"id": "R-019",
"requirement": "24/7 enterprise support with 1-hour critical response time",
"category": "Support & SLA",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 2,
"notes": "Premium support tier includes 24/7 coverage with 30-min critical response SLA",
"mitigation": ""
},
{
"id": "R-020",
"requirement": "Dedicated customer success manager",
"category": "Support & SLA",
"priority": "should-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 2.0,
"weighted_score": 2.0,
"max_weighted": 2.0,
"effort_hours": 2,
"notes": "Included in Enterprise tier",
"mitigation": ""
},
{
"id": "R-021",
"requirement": "On-premise deployment option",
"category": "Deployment",
"priority": "nice-to-have",
"coverage_status": "gap",
"coverage_score": 0.0,
"weight": 1.0,
"weighted_score": 0.0,
"max_weighted": 1.0,
"effort_hours": 80,
"notes": "Cloud-only platform; no on-premise offering",
"mitigation": "Position cloud-first architecture benefits; offer VPC deployment as alternative"
}
]
}
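The weighted fields in the output above follow a simple mapping: priority sets the weight (must-have 3.0, should-have 2.0, nice-to-have 1.0) and coverage status sets the score (full 1.0, partial 0.5, planned 0.25, gap 0.0). A minimal sketch of that scoring pass and the effort rollup — hypothetical helper names, the shipped CLI tool may structure this differently:

```python
PRIORITY_WEIGHTS = {"must-have": 3.0, "should-have": 2.0, "nice-to-have": 1.0}
COVERAGE_SCORES = {"full": 1.0, "partial": 0.5, "planned": 0.25, "gap": 0.0}

def score_requirement(req):
    """Return a copy of `req` annotated with the weighted coverage fields."""
    weight = PRIORITY_WEIGHTS[req["priority"]]
    coverage = COVERAGE_SCORES[req["coverage_status"]]
    return {
        **req,
        "coverage_score": coverage,
        "weight": weight,
        "weighted_score": weight * coverage,
        "max_weighted": weight,
    }

def effort_split(requirements):
    """Split estimated hours into gap-closure vs full-coverage buckets."""
    gap = sum(r["effort_hours"] for r in requirements
              if r["coverage_status"] != "full")
    full = sum(r["effort_hours"] for r in requirements
               if r["coverage_status"] == "full")
    return {"total_hours": gap + full,
            "gap_closure_hours": gap,
            "full_coverage_hours": full}
```

Applied to the 21 requirements above, this split reproduces the `effort_estimate` block (294 total, 230 gap-closure, 64 full-coverage).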


@@ -0,0 +1,213 @@
# POC Evaluation Scorecard
## Scorecard Information
| Field | Value |
|-------|-------|
| POC Name | [POC Name] |
| Customer | [Customer Name] |
| Vendor/Product | [Product Name] |
| Evaluation Period | [Start Date] - [End Date] |
| Evaluated By | [Names and Roles] |
| Date Completed | [Date] |
---
## Scoring Scale
| Score | Label | Definition |
|-------|-------|------------|
| 5 | Exceeds | Superior capability; exceeds requirements with notable strengths |
| 4 | Meets | Full capability; meets all requirements with no significant gaps |
| 3 | Partial | Acceptable capability; minor gaps that can be addressed |
| 2 | Below | Below expectations; significant gaps that impact value |
| 1 | Fails | Does not meet requirements; critical gaps |
| N/A | Not Evaluated | Not tested during this POC |
---
## Evaluation Categories
### 1. Functionality (Weight: 30%)
| Criterion | Score (1-5) | Evidence / Notes |
|-----------|-------------|-----------------|
| Core feature completeness | | |
| Use case coverage | | |
| Customization flexibility | | |
| Workflow automation | | |
| Data handling and transformation | | |
| Reporting and analytics | | |
**Category Score:** ___/5.0
**Category Notes:**
[Summary of functionality evaluation, key strengths and gaps]
---
### 2. Performance (Weight: 20%)
| Criterion | Score (1-5) | Evidence / Notes |
|-----------|-------------|-----------------|
| Response time under expected load | | |
| Response time under peak load | | |
| Throughput capacity | | |
| Scalability characteristics | | |
| Resource utilization | | |
| Batch processing performance | | |
**Category Score:** ___/5.0
**Category Notes:**
[Summary of performance evaluation, benchmark results]
---
### 3. Integration (Weight: 20%)
| Criterion | Score (1-5) | Evidence / Notes |
|-----------|-------------|-----------------|
| API completeness and documentation | | |
| Data migration ease | | |
| Third-party connector availability | | |
| Authentication/SSO integration | | |
| Real-time sync reliability | | |
| Error handling and recovery | | |
**Category Score:** ___/5.0
**Category Notes:**
[Summary of integration evaluation, systems tested]
---
### 4. Usability (Weight: 15%)
| Criterion | Score (1-5) | Evidence / Notes |
|-----------|-------------|-----------------|
| User interface intuitiveness | | |
| Learning curve assessment | | |
| Documentation quality | | |
| Admin console functionality | | |
| Mobile experience | | |
| Accessibility compliance | | |
**Category Score:** ___/5.0
**Category Notes:**
[Summary of usability evaluation, user feedback]
---
### 5. Support (Weight: 15%)
| Criterion | Score (1-5) | Evidence / Notes |
|-----------|-------------|-----------------|
| Technical support responsiveness | | |
| Knowledge base quality | | |
| Training resources availability | | |
| Community and ecosystem | | |
| Issue resolution speed | | |
| Proactive engagement quality | | |
**Category Score:** ___/5.0
**Category Notes:**
[Summary of support evaluation during POC]
---
## Score Summary
| Category | Weight | Score | Weighted Score |
|----------|--------|-------|----------------|
| Functionality | 30% | ___/5.0 | ___ |
| Performance | 20% | ___/5.0 | ___ |
| Integration | 20% | ___/5.0 | ___ |
| Usability | 15% | ___/5.0 | ___ |
| Support | 15% | ___/5.0 | ___ |
| **Overall** | **100%** | | **___/5.0** |
### Decision Thresholds
| Weighted Average | Decision |
|-----------------|----------|
| >= 4.0 | **Strong Pass** - Proceed to procurement |
| 3.5 - 3.9 | **Pass** - Proceed with noted conditions |
| 3.0 - 3.4 | **Conditional** - Requires further evaluation |
| < 3.0 | **Fail** - Does not meet requirements |
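The weighted average in the summary table follows directly from the five category scores; a quick sketch, with the weights hardcoded to match this template and assuming all five categories were scored:

```python
CATEGORY_WEIGHTS = {
    "Functionality": 0.30,
    "Performance": 0.20,
    "Integration": 0.20,
    "Usability": 0.15,
    "Support": 0.15,
}

def overall_score(category_scores):
    """Weighted average of the 1-5 category scores."""
    return round(sum(CATEGORY_WEIGHTS[c] * s
                     for c, s in category_scores.items()), 2)

def decision(score):
    """Map the weighted average to the decision thresholds above."""
    if score >= 4.0:
        return "Strong Pass"
    if score >= 3.5:
        return "Pass"
    if score >= 3.0:
        return "Conditional"
    return "Fail"
```

For example, category scores of 4.0 / 4.0 / 3.0 / 4.0 / 5.0 yield a 3.95 weighted average, which lands in the "Pass - Proceed with noted conditions" band.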
---
## Success Criteria Results
| # | Criterion | Priority | Target | Actual | Pass/Fail |
|---|-----------|----------|--------|--------|-----------|
| 1 | [Criterion 1] | Must-Have | [Target] | [Result] | [ ] |
| 2 | [Criterion 2] | Must-Have | [Target] | [Result] | [ ] |
| 3 | [Criterion 3] | Must-Have | [Target] | [Result] | [ ] |
| 4 | [Criterion 4] | Should-Have | [Target] | [Result] | [ ] |
| 5 | [Criterion 5] | Should-Have | [Target] | [Result] | [ ] |
| 6 | [Criterion 6] | Nice-to-Have | [Target] | [Result] | [ ] |
**Must-Have Pass Rate:** ___%
**Overall Pass Rate:** ___%
---
## Issues Log
| # | Issue | Severity | Status | Resolution | Impact on Score |
|---|-------|----------|--------|------------|----------------|
| 1 | [Issue] | [Critical/High/Medium/Low] | [Open/Resolved] | [Resolution] | [Category affected] |
| 2 | [Issue] | [Critical/High/Medium/Low] | [Open/Resolved] | [Resolution] | [Category affected] |
---
## Stakeholder Feedback
### [Stakeholder Name 1] - [Role]
**Rating:** ___/5
**Comments:** [Feedback]
### [Stakeholder Name 2] - [Role]
**Rating:** ___/5
**Comments:** [Feedback]
### [Stakeholder Name 3] - [Role]
**Rating:** ___/5
**Comments:** [Feedback]
---
## Recommendation
### Decision: [ ] GO / [ ] CONDITIONAL GO / [ ] NO-GO
**Rationale:**
[2-3 paragraphs explaining the recommendation based on scorecard results, success criteria outcomes, stakeholder feedback, and overall evaluation]
**Conditions (if Conditional GO):**
1. [Condition 1 that must be met before proceeding]
2. [Condition 2 that must be met before proceeding]
**Key Strengths:**
1. [Strength 1]
2. [Strength 2]
3. [Strength 3]
**Key Concerns:**
1. [Concern 1 with proposed mitigation]
2. [Concern 2 with proposed mitigation]
**Next Steps:**
1. [Action item] - [Owner] - [Date]
2. [Action item] - [Owner] - [Date]
3. [Action item] - [Owner] - [Date]
---
## Sign-Off
| Role | Name | Signature | Date |
|------|------|-----------|------|
| Technical Evaluator | | | |
| Business Sponsor | | | |
| Decision Maker | | | |
| Sales Engineer | | | |


@@ -0,0 +1,219 @@
{
"rfp_name": "Enterprise Data Analytics Platform RFP",
"customer": "Acme Financial Services",
"due_date": "2026-03-15",
"deal_value": "$450,000 ARR",
"strategic_value": "high",
"requirements": [
{
"id": "R-001",
"requirement": "Real-time data ingestion from multiple sources (APIs, databases, streaming)",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 8,
"notes": "Native connectors for 200+ data sources",
"mitigation": ""
},
{
"id": "R-002",
"requirement": "Support for SQL and NoSQL data sources",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Supports PostgreSQL, MySQL, MongoDB, Cassandra, and more",
"mitigation": ""
},
{
"id": "R-003",
"requirement": "Automated ETL pipeline creation with visual designer",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "full",
"effort_hours": 6,
"notes": "Drag-and-drop pipeline builder included",
"mitigation": ""
},
{
"id": "R-004",
"requirement": "Change data capture (CDC) for real-time sync",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "partial",
"effort_hours": 16,
"notes": "CDC supported for major databases; some require custom configuration",
"mitigation": "Document supported CDC sources; provide configuration guide for non-standard sources"
},
{
"id": "R-005",
"requirement": "Interactive dashboard creation with drag-and-drop",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Full drag-and-drop dashboard builder with 50+ chart types",
"mitigation": ""
},
{
"id": "R-006",
"requirement": "Embedded analytics with white-labeling support",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 8,
"notes": "Full embedding SDK with CSS customization",
"mitigation": ""
},
{
"id": "R-007",
"requirement": "Natural language query interface for business users",
"category": "Analytics & Visualization",
"priority": "should-have",
"coverage_status": "planned",
"effort_hours": 24,
"notes": "NLQ feature on roadmap for Q3 2026",
"mitigation": "Share roadmap timeline; offer guided query builder as interim solution"
},
{
"id": "R-008",
"requirement": "Predictive analytics and ML model integration",
"category": "Analytics & Visualization",
"priority": "nice-to-have",
"coverage_status": "partial",
"effort_hours": 20,
"notes": "Python/R integration available; no built-in ML models",
"mitigation": "Demonstrate Python integration for custom models; provide example notebooks"
},
{
"id": "R-009",
"requirement": "Role-based access control (RBAC) with row-level security",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 6,
"notes": "Granular RBAC with row-level and column-level security",
"mitigation": ""
},
{
"id": "R-010",
"requirement": "SOC 2 Type II certification",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "Current SOC 2 Type II report available upon NDA",
"mitigation": ""
},
{
"id": "R-011",
"requirement": "Data encryption at rest and in transit (AES-256, TLS 1.3)",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "AES-256 at rest, TLS 1.3 in transit, customer-managed keys supported",
"mitigation": ""
},
{
"id": "R-012",
"requirement": "HIPAA compliance for healthcare data handling",
"category": "Security & Compliance",
"priority": "should-have",
"coverage_status": "gap",
"effort_hours": 40,
"notes": "HIPAA BAA not currently offered",
"mitigation": "Evaluate HIPAA certification timeline with compliance team; consider data masking as interim"
},
{
"id": "R-013",
"requirement": "Horizontal scaling to handle 10B+ rows",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 8,
"notes": "Distributed query engine scales to 50B+ rows",
"mitigation": ""
},
{
"id": "R-014",
"requirement": "Sub-second query response for cached dashboards",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Intelligent caching layer with <500ms p95 for cached queries",
"mitigation": ""
},
{
"id": "R-015",
"requirement": "Multi-region deployment with data residency controls",
"category": "Performance & Scalability",
"priority": "should-have",
"coverage_status": "partial",
"effort_hours": 20,
"notes": "US and EU regions available; APAC region in beta",
"mitigation": "Confirm customer region requirements; provide APAC beta access if needed"
},
{
"id": "R-016",
"requirement": "RESTful API with comprehensive documentation",
"category": "API & Extensibility",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Full REST API with OpenAPI spec and interactive documentation",
"mitigation": ""
},
{
"id": "R-017",
"requirement": "Webhook support for event-driven workflows",
"category": "API & Extensibility",
"priority": "should-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Webhook support for 30+ event types",
"mitigation": ""
},
{
"id": "R-018",
"requirement": "Custom plugin/extension framework",
"category": "API & Extensibility",
"priority": "nice-to-have",
"coverage_status": "planned",
"effort_hours": 30,
"notes": "Plugin framework on roadmap for Q4 2026",
"mitigation": "Current API extensibility covers most use cases; plugin framework will expand options"
},
{
"id": "R-019",
"requirement": "24/7 enterprise support with 1-hour critical response time",
"category": "Support & SLA",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "Premium support tier includes 24/7 coverage with 30-min critical response SLA",
"mitigation": ""
},
{
"id": "R-020",
"requirement": "Dedicated customer success manager",
"category": "Support & SLA",
"priority": "should-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "Included in Enterprise tier",
"mitigation": ""
},
{
"id": "R-021",
"requirement": "On-premise deployment option",
"category": "Deployment",
"priority": "nice-to-have",
"coverage_status": "gap",
"effort_hours": 80,
"notes": "Cloud-only platform; no on-premise offering",
"mitigation": "Position cloud-first architecture benefits; offer VPC deployment as alternative"
}
]
}
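Input files like this one can be rolled up into the per-category coverage percentages used in proposal summaries by granting full/partial/planned/gap credit of 100/50/25/0 — a hypothetical helper sketch, not the shipped tool:

```python
from collections import defaultdict

COVERAGE_CREDIT = {"full": 1.0, "partial": 0.5, "planned": 0.25, "gap": 0.0}

def coverage_by_category(requirements):
    """Average coverage credit per category, as a whole-number percentage."""
    buckets = defaultdict(list)
    for req in requirements:
        buckets[req["category"]].append(COVERAGE_CREDIT[req["coverage_status"]])
    return {cat: round(100 * sum(vals) / len(vals))
            for cat, vals in buckets.items()}
```

On the sample above, Data Integration (full, full, full, partial) comes out at 88%, while Deployment (a single gap) comes out at 0%.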


@@ -0,0 +1,231 @@
# Technical Proposal Template
## Document Information
| Field | Value |
|-------|-------|
| Customer | [Customer Name] |
| Opportunity | [Opportunity Name / RFP Reference] |
| Prepared By | [Sales Engineer Name] |
| Date | [Date] |
| Version | [Version Number] |
| Classification | [Confidential / Internal] |
---
## 1. Executive Summary
### Business Context
[2-3 paragraphs summarizing the customer's business challenges and strategic objectives that this solution addresses. Focus on business outcomes, not technical features.]
### Proposed Solution
[1-2 paragraphs describing the solution at a high level, emphasizing how it addresses the specific challenges identified above.]
### Key Value Propositions
1. **[Value 1]:** [Quantified benefit, e.g., "Reduce reporting time by 60%"]
2. **[Value 2]:** [Quantified benefit]
3. **[Value 3]:** [Quantified benefit]
### Recommended Approach
[Brief overview of the implementation approach, timeline, and key milestones.]
---
## 2. Requirements Summary
### Coverage Overview
| Category | Requirements | Full | Partial | Planned | Gap | Coverage |
|----------|-------------|------|---------|---------|-----|----------|
| [Category 1] | [N] | [N] | [N] | [N] | [N] | [X%] |
| [Category 2] | [N] | [N] | [N] | [N] | [N] | [X%] |
| **Total** | **[N]** | **[N]** | **[N]** | **[N]** | **[N]** | **[X%]** |
### Key Differentiators
1. [Differentiator 1 with brief explanation]
2. [Differentiator 2 with brief explanation]
3. [Differentiator 3 with brief explanation]
### Gap Mitigation Plan
| Gap | Priority | Mitigation Strategy | Timeline |
|-----|----------|-------------------|----------|
| [Gap 1] | [Must/Should/Nice] | [Strategy] | [Date] |
| [Gap 2] | [Must/Should/Nice] | [Strategy] | [Date] |
---
## 3. Solution Architecture
### Architecture Overview
[High-level architecture description. Include or reference an architecture diagram.]
```
[ASCII architecture diagram or reference to attached diagram]
Example:
+------------------+     +------------------+     +------------------+
|   Data Sources   | --> |   Our Platform   | --> |     Delivery     |
|  - System A      |     |  - Ingestion     |     |  - Dashboards    |
|  - System B      |     |  - Processing    |     |  - API           |
|  - System C      |     |  - Analytics     |     |  - Exports       |
+------------------+     +------------------+     +------------------+
                                  |
                         +------------------+
                         |    Management    |
                         |  - Security      |
                         |  - Monitoring    |
                         |  - Admin         |
                         +------------------+
```
### Component Details
#### [Component 1]
- **Purpose:** [What this component does]
- **Technology:** [Underlying technology]
- **Scaling:** [How it scales]
- **Availability:** [HA/DR approach]
#### [Component 2]
- **Purpose:** [What this component does]
- **Technology:** [Underlying technology]
- **Scaling:** [How it scales]
- **Availability:** [HA/DR approach]
### Integration Architecture
| Integration Point | Protocol | Direction | Frequency | Authentication |
|-------------------|----------|-----------|-----------|---------------|
| [System A] | REST API | Inbound | Real-time | OAuth 2.0 |
| [System B] | JDBC | Inbound | Batch (hourly) | Service Account |
| [System C] | Webhook | Outbound | Event-driven | API Key |
### Security Architecture
- **Authentication:** [SSO, SAML, OAuth, etc.]
- **Authorization:** [RBAC, row-level security, etc.]
- **Encryption:** [At rest, in transit, key management]
- **Compliance:** [SOC 2, GDPR, HIPAA, etc.]
- **Network:** [VPC, firewall, IP restrictions]
---
## 4. Implementation Plan
### Phase Overview
| Phase | Duration | Focus | Deliverables |
|-------|----------|-------|-------------|
| Phase 1: Foundation | [X weeks] | Environment setup, core configuration | Working environment, admin access |
| Phase 2: Core Implementation | [X weeks] | Primary use cases, integrations | [Deliverables] |
| Phase 3: Advanced Features | [X weeks] | Advanced scenarios, optimization | [Deliverables] |
| Phase 4: Go-Live | [X weeks] | Testing, training, cutover | Production deployment |
### Detailed Timeline
```
Week 1-2: [Phase 1 - Foundation]
- Environment provisioning
- Security configuration
- Data source connectivity
Week 3-6: [Phase 2 - Core Implementation]
- Use case 1 implementation
- Use case 2 implementation
- Integration testing
Week 7-8: [Phase 3 - Advanced Features]
- Advanced analytics
- Custom workflows
- Performance optimization
Week 9-10: [Phase 4 - Go-Live]
- User acceptance testing
- Training sessions
- Production cutover
- Post-launch support
```
### Resource Requirements
| Role | Hours | Phase(s) | Provider |
|------|-------|----------|----------|
| Solutions Architect | [X] | All | [Vendor] |
| Implementation Engineer | [X] | 1-3 | [Vendor] |
| Project Manager | [X] | All | [Vendor] |
| Customer IT Admin | [X] | 1, 4 | [Customer] |
| Customer Business Lead | [X] | 2-4 | [Customer] |
### Training Plan
| Audience | Format | Duration | Content |
|----------|--------|----------|---------|
| Administrators | Workshop | [X hours] | Configuration, security, monitoring |
| Power Users | Workshop | [X hours] | Advanced features, reporting, automation |
| End Users | Webinar | [X hours] | Core workflows, self-service analytics |
---
## 5. Risk Mitigation
| Risk | Probability | Impact | Mitigation |
|------|------------|--------|------------|
| [Risk 1] | [H/M/L] | [H/M/L] | [Strategy] |
| [Risk 2] | [H/M/L] | [H/M/L] | [Strategy] |
| [Risk 3] | [H/M/L] | [H/M/L] | [Strategy] |
---
## 6. Commercial Summary
### Pricing Overview
| Component | Annual Cost |
|-----------|------------|
| Platform License | $[X] |
| Implementation Services | $[X] |
| Training | $[X] |
| Premium Support | $[X] |
| **Total Year 1** | **$[X]** |
| **Annual Renewal** | **$[X]** |
### ROI Projection
| Metric | Current State | With Solution | Improvement |
|--------|--------------|---------------|-------------|
| [Metric 1] | [Value] | [Value] | [%] |
| [Metric 2] | [Value] | [Value] | [%] |
| [Metric 3] | [Value] | [Value] | [%] |
**Estimated payback period:** [X months]
---
## 7. Next Steps
1. [Next step 1 with owner and date]
2. [Next step 2 with owner and date]
3. [Next step 3 with owner and date]
---
## Appendices
### A. Detailed Compliance Matrix
[Reference to full requirement-by-requirement response]
### B. Reference Customers
[2-3 relevant customer references with industry, use case, and outcomes]
### C. Architecture Diagrams
[Detailed architecture diagrams]
### D. Product Roadmap (Relevant Items)
[Roadmap items relevant to this proposal with estimated delivery dates]


@@ -0,0 +1,226 @@
# Competitive Positioning Framework
A comprehensive guide for Sales Engineers to analyze competitors, build battlecards, handle objections, and position for wins.
## Competitive Analysis Methodology
### 1. Intelligence Gathering
**Primary Sources:**
- Competitor product documentation and release notes
- Analyst reports (Gartner, Forrester, IDC)
- Customer feedback from win/loss reviews
- Industry conferences and webinars
- Public case studies and testimonials
- Open-source repositories and API documentation
**Secondary Sources:**
- Glassdoor reviews (engineering culture, product direction)
- Job postings (technology stack, expansion areas)
- Patent filings (future direction signals)
- Social media and community forums
- Partner ecosystem announcements
### 2. Feature Comparison Best Practices
**Feature Scoring Scale:**
| Score | Label | Definition |
|-------|-------|------------|
| 3 | Full | Complete, production-ready feature support |
| 2 | Partial | Feature exists but with limitations or caveats |
| 1 | Limited | Minimal implementation, significant gaps |
| 0 | None | Feature not available |
**Comparison Categories:**
Organize features into weighted categories that reflect customer priorities:
| Category | Typical Weight | What to Evaluate |
|----------|---------------|------------------|
| Core Functionality | 25-35% | Primary use case coverage |
| Integration & API | 15-25% | Ecosystem connectivity |
| Security & Compliance | 15-20% | Enterprise readiness |
| Scalability & Performance | 10-20% | Growth capacity |
| Usability & UX | 10-15% | Time to value |
| Support & Services | 5-10% | Vendor partnership quality |
**Weighting Guidelines:**
- Adjust weights based on the specific customer's priorities
- Security-sensitive industries (healthcare, finance) should weight compliance higher
- High-growth companies should weight scalability higher
- Enterprise deals should weight integration and support higher
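Combining the 0-3 feature scores with customer-adjusted category weights gives a single comparison number per product. A minimal sketch under an assumed data shape (mean feature score per category, weights as fractions summing to 1.0):

```python
def weighted_comparison(category_scores, weights):
    """category_scores: {product: {category: mean feature score, 0-3}}.
    weights: {category: fraction}; the fractions must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return {
        product: round(sum(weights[cat] * score
                           for cat, score in scores.items()), 2)
        for product, scores in category_scores.items()
    }
```

Because weights are shared across products, re-weighting for a compliance-heavy or growth-heavy customer shifts every product's total at once, keeping the comparison apples-to-apples.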
### 3. Differentiator Identification
A differentiator is a feature or capability where your product scores highest among all compared products. Strong differentiators have these properties:
- **Unique:** Only your product offers this capability
- **Valuable:** Customers care about this capability
- **Defensible:** Not easily replicated by competitors
- **Demonstrable:** Can be shown in a demo or POC
**Differentiator Categories:**
| Type | Description | Example |
|------|-------------|---------|
| Feature Differentiator | Unique product capability | Native ML-powered anomaly detection |
| Architecture Differentiator | Fundamental design advantage | Multi-tenant with data isolation |
| Ecosystem Differentiator | Partner or integration advantage | 200+ native integrations |
| Service Differentiator | Support or engagement model | Dedicated SE throughout contract |
| Economic Differentiator | Pricing or TCO advantage | Usage-based pricing with no minimums |
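The definitions above — a differentiator is a feature where you score strictly highest, a vulnerability one where some competitor scores higher — translate directly into a comparison pass over a score matrix. A sketch with hypothetical names:

```python
def classify_features(matrix, us="us"):
    """matrix: {feature: {product: 0-3 score}}.
    Returns (differentiators, vulnerabilities); ties are parity features."""
    differentiators, vulnerabilities = [], []
    for feature, scores in matrix.items():
        ours = scores[us]
        best_rival = max(score for product, score in scores.items()
                         if product != us)
        if ours > best_rival:
            differentiators.append(feature)
        elif ours < best_rival:
            vulnerabilities.append(feature)
        # equal scores land in neither list (parity)
    return differentiators, vulnerabilities
```

The vulnerability list from this pass is the input to the response strategies in the next section; run it before every competitive engagement so no gap surfaces first in the customer's evaluation.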
### 4. Vulnerability Assessment
Vulnerabilities are features where competitors score higher than your product. Address vulnerabilities proactively:
**Vulnerability Response Strategies:**
1. **Acknowledge and redirect:** Confirm the gap, then pivot to your strength areas
2. **Reframe the requirement:** Show why the customer's real need is better met differently
3. **Demonstrate workaround:** Show how existing capabilities address the underlying need
4. **Commit to roadmap:** Provide a credible timeline for native support
5. **Partner solution:** Identify an integration partner that fills the gap
## Objection Handling
### Common Technical Objections
#### "Your product lacks [Feature X]"
**Response Framework:**
1. Acknowledge: "You're right that [Feature X] is not a standalone feature today."
2. Explore: "Help me understand the specific use case you need [Feature X] for."
3. Redirect: "Our approach to solving that is [alternative], which actually provides [benefit]."
4. Evidence: "Customer [reference] had the same concern and found [outcome]."
#### "Competitor [Y] has better [Capability]"
**Response Framework:**
1. Acknowledge: "I understand [Competitor Y] has invested in [Capability]."
2. Qualify: "Can you share what specific aspects of [Capability] are most important?"
3. Differentiate: "While they focus on [approach], we take a different approach with [our method] because [reason]."
4. Quantify: "The practical difference in real-world usage is [metric/evidence]."
#### "Your product is too expensive"
**Response Framework:**
1. Acknowledge: "I appreciate you sharing that concern."
2. Reframe: "Let's look at total cost of ownership rather than license cost alone."
3. Quantify: "When you factor in [implementation, training, maintenance, time-to-value], the TCO comparison shows..."
4. Value: "Based on our analysis, the ROI timeline is [X months], delivering [Y value]."
#### "We're concerned about vendor lock-in"
**Response Framework:**
1. Acknowledge: "That's a smart concern for any technology investment."
2. Evidence: "Our architecture uses [open standards, APIs, data portability features]."
3. Demonstrate: "Here's how data export and migration work [show the feature]."
4. Reference: "We can connect you with customers who evaluated this exact concern."
### Objection Handling Principles
1. **Never disparage competitors.** Focus on your strengths, not their weaknesses.
2. **Ask questions first.** Understand the real concern behind the objection.
3. **Use evidence.** Reference customers, benchmarks, and demonstrations.
4. **Be honest about gaps.** Credibility is your most valuable asset.
5. **Redirect to value.** Connect every response back to business outcomes.
## Win/Loss Analysis
### Post-Decision Review Process
**Timing:** Conduct within 2 weeks of the decision for accurate recall.
**Interview Questions (for wins):**
1. What was the deciding factor in choosing us?
2. Which features or capabilities were most compelling?
3. How did our demo/POC compare to alternatives?
4. What concerns did you have that were resolved during the process?
5. What could we have done better in the evaluation process?
**Interview Questions (for losses):**
1. What was the primary reason for choosing the competitor?
2. Were there specific requirements we did not meet?
3. How did our demo/POC compare to the winning vendor?
4. What would have changed your decision?
5. Would you consider us for future evaluations?
### Win/Loss Data Tracking
| Data Point | Purpose |
|-----------|---------|
| Deal size | Pattern analysis by segment |
| Industry | Vertical-specific insights |
| Competitor | Head-to-head record |
| Decision factors | Feature priority validation |
| Sales cycle length | Process efficiency |
| Stakeholder roles | Engagement strategy |
| Technical requirements | Capability gap tracking |
| POC outcome | POC process improvement |
### Analysis Dimensions
1. **By Competitor:** Win rate per competitor, common objections, feature gaps
2. **By Segment:** Enterprise vs mid-market vs SMB patterns
3. **By Industry:** Vertical-specific win factors
4. **By Deal Size:** Large vs small deal dynamics
5. **By Feature Category:** Which capabilities drive wins vs losses
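The dimensions above can be sketched as a simple aggregation over win/loss records. A minimal sketch, assuming an illustrative record schema (the `outcome`, `competitor`, and `segment` field names are assumptions, not a fixed format):

```python
from collections import defaultdict

# Illustrative win/loss records; field names are assumptions, not a fixed schema.
records = [
    {"outcome": "win", "competitor": "Acme", "segment": "enterprise"},
    {"outcome": "loss", "competitor": "Acme", "segment": "smb"},
    {"outcome": "win", "competitor": "Beta", "segment": "enterprise"},
    {"outcome": "win", "competitor": "Acme", "segment": "mid-market"},
]

def win_rate_by(records, dimension):
    """Return {value: win_rate_percent} for a given record field."""
    wins, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[dimension]] += 1
        if r["outcome"] == "win":
            wins[r[dimension]] += 1
    return {k: round(100 * wins[k] / totals[k], 1) for k in totals}

by_competitor = win_rate_by(records, "competitor")  # e.g. {"Acme": 66.7, "Beta": 100.0}
```

The same function handles any dimension (`segment`, `industry`, and so on), so one pass over the tracked data points yields all five analysis views.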
## Battlecard Creation
### Battlecard Structure
**Page 1: Quick Reference**
- Competitor overview (company size, funding, market position)
- Key strengths (top 3)
- Key weaknesses (top 3)
- Ideal customer profile for the competitor
- Our win rate against this competitor
**Page 2: Feature Comparison**
- Category-by-category comparison (summary view)
- Top differentiators (features where we lead)
- Top vulnerabilities (features where they lead)
- Parity features (features at same level)
**Page 3: Talk Track**
- Opening positioning statement
- Discovery questions that expose competitor weaknesses
- Objection responses for their key strengths
- Proof points (customer references, benchmarks, case studies)
- Trap-setting questions for demos and POCs
**Page 4: Win Strategies**
- Recommended evaluation criteria that favor our strengths
- Demo scenarios that highlight our differentiators
- POC success criteria that align with our capabilities
- Pricing and packaging positioning
- Stakeholder engagement strategy
### Battlecard Maintenance
- **Monthly review:** Update feature scores based on new releases
- **Quarterly refresh:** Incorporate win/loss analysis findings
- **Trigger-based update:** Major competitor release, pricing change, or acquisition
## Competitive Positioning During Evaluations
### Evaluation Stage Tactics
| Stage | Tactic |
|-------|--------|
| Discovery | Ask questions that expose competitor weaknesses |
| Demo | Lead with differentiators, show end-to-end workflows |
| POC | Define success criteria aligned with your strengths |
| Proposal | Quantify TCO advantage, emphasize implementation risk |
| Negotiation | Leverage competitive urgency, offer migration assistance |
### Influencing Evaluation Criteria
The sales engineer's most impactful opportunity is shaping the evaluation criteria before the formal process begins:
1. **Map criteria to strengths:** Propose evaluation categories where you excel
2. **Weight appropriately:** Ensure critical categories (where you lead) carry higher weight
3. **Define metrics:** Specific, measurable criteria favor the more capable product
4. **Include non-obvious criteria:** Total cost of ownership, time-to-value, ecosystem breadth
---
**Last Updated:** February 2026


@@ -0,0 +1,277 @@
# Proof of Concept (POC) Best Practices
A comprehensive guide for Sales Engineers planning, executing, and evaluating proof-of-concept engagements.
## POC Planning Methodology
### 1. Pre-POC Qualification
Not every deal warrants a POC. Qualify before committing resources:
**POC-Worthy Indicators:**
- Deal value justifies 80-200+ hours of SE and engineering time
- Customer has an identified champion who will actively participate
- Clear decision timeline with POC as a defined evaluation step
- Budget is allocated or allocation process is underway
- Technical stakeholders are available for the evaluation period
**POC Red Flags:**
- "Free trial" request with no commitment to evaluate
- No identified decision-maker or budget owner
- Competitor has already been selected; POC is for validation only
- Customer expects production-grade environment for extended period
- No defined success criteria or evaluation framework
### 2. Scope Definition
The most critical success factor is a well-defined scope. An uncontrolled scope leads to extended timelines, unmet expectations, and lost deals.
**Scope Elements:**
- **Use cases:** 3-5 specific scenarios to validate (not "everything")
- **Integrations:** Which systems must connect during the POC
- **Data:** What data will be used (sample, synthetic, production subset)
- **Users:** Who will access the POC environment and in what roles
- **Duration:** Fixed timeline with clear milestones
- **Success criteria:** Measurable, objective criteria for each use case
**Scope Control Tactics:**
- Document scope in writing with customer sign-off
- Define what is explicitly out of scope
- Create a change request process for scope additions
- Set a maximum number of use cases per complexity tier
### 3. Timeline Planning
**Standard 5-Week Framework:**
| Week | Phase | Focus | Key Activities |
|------|-------|-------|---------------|
| 1 | Setup | Foundation | Environment, data, access, kickoff |
| 2-3 | Core Testing | Validation | Primary use cases, integrations, workflows |
| 4 | Advanced Testing | Edge cases | Performance, security, scale, administration |
| 5 | Evaluation | Decision | Scorecard, review, recommendation |
**Timeline Adjustments by Complexity:**
| Complexity | Duration | Use Cases | Integrations |
|-----------|----------|-----------|-------------|
| Low | 3 weeks | 2-3 | 0-1 |
| Medium | 5 weeks | 3-5 | 2-3 |
| High | 6-8 weeks | 5-8 | 4+ |
**Timeline Rules:**
- Never exceed 8 weeks. Longer POCs lose momentum and stakeholder attention.
- Front-load the most impressive capabilities to build early momentum.
- Schedule stakeholder checkpoints at the end of each phase.
- Build 20% buffer into each phase for unexpected issues.
### 4. Resource Planning
**SE Allocation:**
| Activity | Hours/Week (Medium Complexity) |
|----------|-------------------------------|
| Environment setup and configuration | 15-20 (Week 1 only) |
| Use case execution and testing | 20-25 |
| Stakeholder communication | 3-5 |
| Documentation and reporting | 3-5 |
| Issue resolution | 5-8 |
**Engineering Support:**
- Allocate dedicated engineering support for complex integrations
- Establish an escalation path for blocking issues
- Pre-schedule engineering availability during Core Testing phase
- Request customer IT support for integration access and credentials
**Customer Resources:**
- Technical sponsor for daily communication
- Business stakeholders for use case validation
- IT/Security for environment access and compliance review
- End users for usability feedback (if applicable)
## Success Criteria Definition
### Writing Effective Success Criteria
Each criterion must be:
- **Specific:** Clearly defined with no ambiguity
- **Measurable:** Quantifiable metric or clear pass/fail
- **Agreed:** Documented and signed off by both parties
- **Relevant:** Tied to a business outcome or technical requirement
- **Time-bound:** Evaluated within the POC timeline
### Success Criteria Categories
**Functionality Criteria:**
- "System processes [X] transactions per hour without errors"
- "Workflow automation reduces manual steps from [Y] to [Z]"
- "Report generation completes within [N] seconds for [M] records"
- "All [X] defined use cases completed successfully"
**Performance Criteria:**
- "API response time <200ms at p95 under [N] concurrent users"
- "Batch processing completes [X] records in under [Y] minutes"
- "System maintains performance with [N]x expected data volume"
**Integration Criteria:**
- "Bidirectional sync with [System X] operates within [Y] minute latency"
- "SSO integration with [IdP] supports all required authentication flows"
- "Data import from [Source] completes with <1% error rate"
**Usability Criteria:**
- "New users complete [task] within [N] minutes without assistance"
- "Admin configuration for [scenario] requires fewer than [N] steps"
- "Stakeholder satisfaction rating >= 4.0/5.0"
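Because each criterion is specific and measurable, pass/fail evaluation can be mechanical. A minimal sketch, with illustrative targets and measured values:

```python
# Each criterion is a measurable check; targets and measured values are illustrative.
criteria = [
    {"name": "API p95 latency (ms)", "target": 200, "measured": 142, "lower_is_better": True},
    {"name": "Import error rate (%)", "target": 1.0, "measured": 0.4, "lower_is_better": True},
    {"name": "Satisfaction (1-5)", "target": 4.0, "measured": 4.2, "lower_is_better": False},
]

def evaluate(criteria):
    """Return a pass/fail result for each criterion."""
    results = []
    for c in criteria:
        if c["lower_is_better"]:
            passed = c["measured"] <= c["target"]
        else:
            passed = c["measured"] >= c["target"]
        results.append({"name": c["name"], "pass": passed})
    return results

outcomes = evaluate(criteria)
all_met = all(r["pass"] for r in outcomes)
```

If a criterion cannot be expressed this way, it is probably one of the anti-patterns below.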
### Anti-Patterns in Success Criteria
- **Too vague:** "System performs well" (what is "well"?)
- **Too many:** More than 15 criteria dilute focus and extend the timeline
- **Unmeasurable:** "Users like the interface" (how do you measure "like"?)
- **Biased toward feature count:** "Must have Feature X" instead of "Must solve Problem Y"
- **Moving target:** Criteria that change mid-POC without formal agreement
## Stakeholder Management
### Stakeholder Map
| Role | Priority | Engagement Strategy |
|------|----------|-------------------|
| Decision Maker | High | Executive briefings, ROI summaries |
| Champion | Critical | Daily communication, progress updates |
| Technical Evaluator | High | Hands-on access, deep-dive sessions |
| End User | Medium | Usability testing, feedback sessions |
| IT/Security | High | Compliance reviews, architecture sessions |
| Procurement | Low-Medium | TCO documentation, reference connections |
### Engagement Cadence
- **Daily:** Champion check-in (10 min, Slack/email)
- **Weekly:** Progress report to all stakeholders (written summary)
- **Phase transitions:** Formal review meeting with demo of progress
- **Final:** Executive presentation with scorecard results and recommendation
### Managing Stakeholder Expectations
1. **Set clear boundaries:** Define what will and will not be demonstrated
2. **Communicate early and often:** No surprises; surface issues immediately
3. **Document everything:** Meeting notes, decisions, change requests
4. **Celebrate wins:** Highlight successful milestones to maintain momentum
5. **Address concerns immediately:** Delays in resolution erode confidence
## Evaluation Frameworks
### Weighted Scorecard Model
The evaluation scorecard provides an objective, comparable assessment:
| Category | Weight | Score (1-5) | Weighted Score |
|----------|--------|-------------|----------------|
| Functionality | 30% | | |
| Performance | 20% | | |
| Integration | 20% | | |
| Usability | 15% | | |
| Support | 15% | | |
| **Total** | **100%** | | |
**Scoring Scale:**
- 5: Exceeds requirements - superior capability demonstrated
- 4: Meets requirements - full capability with minor enhancements possible
- 3: Partially meets - acceptable but notable gaps remain
- 2: Below expectations - significant gaps that impact value
- 1: Does not meet - critical failure for this category
**Decision Thresholds:**
- Weighted average >= 4.0: **Strong Pass** - proceed to procurement
- Weighted average 3.5-3.9: **Pass** - proceed with noted conditions
- Weighted average 3.0-3.4: **Conditional** - requires further evaluation or negotiation
- Weighted average < 3.0: **Fail** - does not meet requirements
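The weighted average and decision thresholds above reduce to a few lines of arithmetic. A minimal sketch using the category weights from the table (the example scores are illustrative):

```python
# Category weights from the scorecard model; must sum to 1.0.
WEIGHTS = {"Functionality": 0.30, "Performance": 0.20, "Integration": 0.20,
           "Usability": 0.15, "Support": 0.15}

def weighted_average(scores, weights):
    """Scores are 1-5 per category; returns the weighted average."""
    return round(sum(weights[c] * scores[c] for c in weights), 2)

def decision(avg):
    """Map a weighted average onto the decision thresholds."""
    if avg >= 4.0:
        return "Strong Pass"
    if avg >= 3.5:
        return "Pass"
    if avg >= 3.0:
        return "Conditional"
    return "Fail"

scores = {"Functionality": 4, "Performance": 4, "Integration": 3, "Usability": 4, "Support": 3}
avg = weighted_average(scores, WEIGHTS)  # 3.65 -> "Pass"
```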
### Go/No-Go Decision Framework
The go/no-go decision should be based on multiple factors, not just the scorecard:
**Go Indicators:**
- Scorecard score >= 3.5
- All must-have success criteria met
- Champion and decision-maker both express positive sentiment
- No unresolved critical technical blockers
- Clear implementation path identified
**No-Go Indicators:**
- Scorecard score < 3.0
- Critical success criteria failed without clear resolution
- Decision-maker expresses significant concerns
- Multiple unresolved technical blockers
- Competitive alternative clearly preferred by evaluators
**Conditional Go Indicators:**
- Scorecard score 3.0-3.4 with clear path to improvement
- 1-2 minor success criteria not met but with workarounds
- Mixed stakeholder sentiment that can be addressed
- Blockers identified but resolution path confirmed with engineering
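One hedged way to combine these indicators is a simple decision function; the exact conditions below are a sketch of the framework, not a substitute for judgment on stakeholder sentiment:

```python
def go_no_go(scorecard_avg, must_haves_met, critical_blockers, decision_maker_positive):
    """Combine scorecard, must-have criteria, blockers, and sentiment into a decision.
    Thresholds follow the framework above; inputs are illustrative simplifications."""
    if scorecard_avg < 3.0:
        return "No-Go"
    if (scorecard_avg >= 3.5 and must_haves_met
            and critical_blockers == 0 and decision_maker_positive):
        return "Go"
    return "Conditional Go"
```

For example, a 3.2 scorecard with all must-haves met still lands in "Conditional Go", matching the guidance that mid-band scores need a clear path to improvement before proceeding.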
## Common POC Failure Modes
### 1. Scope Creep
**Symptom:** Customer continuously adds requirements during the POC.
**Prevention:** Written scope agreement with change request process.
**Recovery:** Renegotiate timeline or defer additions to Phase 2.
### 2. Champion Absence
**Symptom:** Champion becomes unavailable or disengaged mid-POC.
**Prevention:** Identify a backup champion. Schedule regular touchpoints.
**Recovery:** Escalate to decision-maker. Demonstrate value already achieved.
### 3. Data Issues
**Symptom:** Customer data is unavailable, poor quality, or incompatible.
**Prevention:** Request sample data before kickoff. Prepare synthetic data.
**Recovery:** Use synthetic data for core testing. Document data requirements for implementation.
### 4. Environment Problems
**Symptom:** POC environment is unstable, slow, or inaccessible.
**Prevention:** Use a dedicated, pre-configured environment. Test before kickoff.
**Recovery:** Have a backup environment. Communicate honestly about delays.
### 5. Moving Goalposts
**Symptom:** Evaluation criteria change mid-POC, often influenced by competitor demos.
**Prevention:** Get written sign-off on criteria before starting. Reference agreement when changes arise.
**Recovery:** Agree to evaluate new criteria as addendum, not replacement. Highlight what has already been validated.
### 6. Extended Timeline
**Symptom:** POC drags beyond planned duration without clear progress.
**Prevention:** Set hard deadlines in the agreement. Schedule decision meetings in advance.
**Recovery:** Force a checkpoint. Present results to date and ask for a go/no-go with current evidence.
### 7. Technical Blockers
**Symptom:** Unexpected technical issues prevent completion of key use cases.
**Prevention:** Conduct technical discovery before committing to POC. Have engineering on standby.
**Recovery:** Escalate immediately. Provide transparent status updates. Offer alternative approaches.
## POC Documentation
### Required Artifacts
| Document | When | Owner |
|----------|------|-------|
| Scope agreement | Pre-POC | SE + Customer |
| Environment setup guide | Week 1 | SE |
| Progress reports | Weekly | SE |
| Phase review presentations | Phase transitions | SE |
| Issue log | Ongoing | SE |
| Final evaluation report | Week 5 | SE + Customer |
| Lessons learned | Post-POC | SE |
### Final Report Template
1. **Executive Summary** - POC objectives, approach, and outcome
2. **Scope and Success Criteria** - What was tested and how
3. **Results Summary** - Success criteria outcomes with evidence
4. **Evaluation Scorecard** - Weighted scores across all categories
5. **Issues and Resolutions** - Problems encountered and how they were addressed
6. **Recommendation** - Go/No-Go with rationale
7. **Implementation Considerations** - Next steps, timeline, and resource needs
---
**Last Updated:** February 2026


@@ -0,0 +1,189 @@
# RFP/RFI Response Guide
A comprehensive reference for Sales Engineers responding to Requests for Proposal (RFP) and Requests for Information (RFI).
## RFP Response Best Practices
### 1. Pre-Response Assessment
Before investing time in a response, conduct a thorough bid/no-bid assessment:
**Bid Criteria Checklist:**
- Do we have a pre-existing relationship with the customer?
- Is there an identified champion or sponsor?
- Do our capabilities align with >70% of requirements?
- Is the deal size justified against the response effort?
- Do we understand the competitive landscape?
- Is the timeline realistic for our solution?
**Red Flags for No-Bid:**
- No prior customer engagement (blind RFP)
- Requirement language mirrors a competitor's product
- Timeline is unrealistically short
- Must-have requirements fall outside our platform
- Budget is undefined or misaligned with our pricing
### 2. Response Organization
**Executive Summary (1-2 pages):**
- Lead with business outcomes, not features
- Reference the customer's specific challenges
- Quantify value proposition with relevant metrics
- State confidence level and key differentiators
**Solution Overview:**
- Map directly to the customer's stated requirements
- Use the customer's language and terminology
- Include architecture diagrams for technical sections
- Address integration with existing systems
**Compliance Matrix:**
- Mirror the RFP's requirement numbering exactly
- Use consistent coverage categories: Full, Partial, Planned, Gap
- Provide clear explanations for each response
- Include roadmap dates for "Planned" items
### 3. Coverage Classification
| Status | Score | Definition | Response Approach |
|--------|-------|------------|-------------------|
| Full | 100% | Current product fully meets requirement | Describe capability with evidence |
| Partial | 50% | Met with configuration or workaround | Explain approach and any limitations |
| Planned | 25% | On product roadmap | Provide timeline and interim solution |
| Gap | 0% | Not currently supported | Acknowledge gap and propose alternatives |
### 4. Priority-Weighted Scoring
Not all requirements are equal. Weight them by business impact:
- **Must-Have (3x weight):** Core requirements that are deal-breakers. Gaps here typically result in disqualification.
- **Should-Have (2x weight):** Important requirements that influence the decision significantly.
- **Nice-to-Have (1x weight):** Desirable but not critical. Often used as tie-breakers.
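Combining the coverage scores from the classification table with these priority weights gives a single comparable number. A minimal sketch with an illustrative requirement list:

```python
# Coverage scores from the classification table; priority weights from above.
COVERAGE = {"full": 1.0, "partial": 0.5, "planned": 0.25, "gap": 0.0}
PRIORITY = {"must": 3, "should": 2, "nice": 1}

def weighted_coverage(requirements):
    """Percent of priority-weighted requirements covered."""
    earned = sum(COVERAGE[r["coverage"]] * PRIORITY[r["priority"]] for r in requirements)
    possible = sum(PRIORITY[r["priority"]] for r in requirements)
    return round(100 * earned / possible, 1) if possible else 0.0

reqs = [
    {"id": "R-001", "priority": "must", "coverage": "full"},
    {"id": "R-002", "priority": "should", "coverage": "partial"},
    {"id": "R-003", "priority": "nice", "coverage": "gap"},
]
score = weighted_coverage(reqs)  # (3 + 1 + 0) / 6 -> 66.7
```

Note how a gap on a nice-to-have costs far less than a gap on a must-have, which is exactly the behavior the weighting is meant to encode.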
### 5. Response Writing Tips
**Do:**
- Answer the question directly before elaborating
- Use the customer's terminology, not internal jargon
- Provide specific examples, case studies, and metrics
- Include screenshots or architecture diagrams where relevant
- Cross-reference related answers to avoid redundancy
- Proofread for consistency across sections (multiple authors)
**Avoid:**
- Marketing fluff or vague language ("best-in-class", "world-class")
- Answering a question you were not asked
- Contradictions between sections
- Overselling capabilities you do not have
- Ignoring the question format (tables vs. narrative)
## Bid/No-Bid Decision Framework
### Decision Matrix
| Factor | Weight | Score (1-5) | Weighted |
|--------|--------|-------------|----------|
| Technical fit | 25% | | |
| Relationship strength | 20% | | |
| Competitive position | 20% | | |
| Deal value vs effort | 15% | | |
| Strategic importance | 10% | | |
| Win probability | 10% | | |
| **Total** | **100%** | | |
**Scoring Guide:**
- 5: Strong advantage
- 4: Slight advantage
- 3: Neutral / competitive parity
- 2: Slight disadvantage
- 1: Significant disadvantage
**Decision Thresholds:**
- Score >= 3.5: **Bid** - proceed with full response
- Score 2.5-3.4: **Conditional Bid** - proceed with executive approval
- Score < 2.5: **No-Bid** - decline or submit an information-only response
### Effort Estimation
Estimate the total effort required and compare against deal value:
| Response Component | Typical Effort (hours) |
|-------------------|----------------------|
| Requirements analysis | 4-8 |
| Technical writing | 16-40 |
| Architecture diagrams | 4-8 |
| Demo preparation | 8-16 |
| Internal review | 4-8 |
| Final formatting | 2-4 |
| **Total** | **38-84 hours** |
**Rule of thumb:** The fully loaded cost of the response effort should not exceed 2% of the deal value.
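The decision matrix and the effort rule of thumb can be checked together in a few lines. A minimal sketch; the weights mirror the matrix above, and the loaded hourly rate is an assumption you supply:

```python
def bid_score(factors):
    """factors: list of (weight, score_1_to_5) pairs; weights sum to 1.0."""
    return round(sum(w * s for w, s in factors), 2)

def effort_ok(effort_hours, loaded_hourly_rate, deal_value):
    """Rule of thumb: response cost should stay under 2% of deal value.
    The loaded hourly rate is an assumption, not a prescribed figure."""
    return effort_hours * loaded_hourly_rate <= 0.02 * deal_value

# Weights from the decision matrix: technical fit, relationship, competitive
# position, value vs effort, strategic importance, win probability.
factors = [(0.25, 4), (0.20, 4), (0.20, 3), (0.15, 3), (0.10, 5), (0.10, 3)]
score = bid_score(factors)  # 3.65 -> Bid (>= 3.5)
```

A deal can clear the score threshold and still fail the effort check, so both gates should be applied before committing a response team.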
## Compliance Matrix Structure
### Standard Format
```
| Req ID | Requirement Description | Priority | Compliance | Response | Evidence |
|--------|------------------------|----------|------------|----------|----------|
| R-001 | SSO via SAML 2.0 | Must | Full | Native SAML 2.0 support... | Config guide |
| R-002 | Custom reporting | Should | Partial | Standard reports + API... | API docs |
```
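Rows in this format are easy to generate from structured requirement data, which keeps a large matrix consistent across authors. A minimal sketch; the field names are illustrative, not a fixed schema:

```python
def compliance_row(req):
    """Render one requirement record as a markdown table row.
    Field names are illustrative; adapt them to your RFP's schema."""
    return ("| {id} | {description} | {priority} | {compliance} "
            "| {response} | {evidence} |").format(**req)

row = compliance_row({
    "id": "R-001",
    "description": "SSO via SAML 2.0",
    "priority": "Must",
    "compliance": "Full",
    "response": "Native SAML 2.0 support",
    "evidence": "Config guide",
})
```

Generating rows this way also makes it trivial to sort by priority or filter to gaps when preparing the internal review.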
### Section Organization
Organize requirements by category for clarity:
1. **Functional Requirements** - Core features and capabilities
2. **Technical Requirements** - Architecture, APIs, performance
3. **Security & Compliance** - Authentication, encryption, certifications
4. **Integration Requirements** - Third-party systems, data flows
5. **Support & SLA** - Support tiers, response times, uptime
6. **Vendor Qualifications** - Company size, financials, references
## Common Pitfalls
### 1. The Wired RFP
**Symptom:** Requirements language matches a competitor's product feature list.
**Response:** Focus on outcomes over features. Highlight areas of differentiation. Ask clarifying questions that expose broader needs.
### 2. Feature Checklist Syndrome
**Symptom:** RFP is a massive feature checklist with no context about business problems.
**Response:** Group features by business outcome. Add context in your response that demonstrates understanding of the underlying need.
### 3. Scope Creep in Response
**Symptom:** Team keeps adding content that was not requested.
**Response:** Assign a response manager to enforce scope. Answer what was asked; provide references for additional information.
### 4. Inconsistent Messaging
**Symptom:** Multiple authors provide contradictory information.
**Response:** Assign a single editor for final review. Create a response style guide. Use consistent terminology throughout.
### 5. Overcommitting on Gaps
**Symptom:** Marking "Planned" items as "Full" to improve scores.
**Response:** Never misrepresent coverage. Planned items with firm timelines and interim workarounds are better than lies discovered during POC.
## RFP Response Timeline Management
### Typical Response Timeline
| Day | Activity |
|-----|----------|
| Day 1 | Receive RFP, conduct initial review, assign team |
| Day 2-3 | Bid/no-bid decision, questions submission |
| Day 4-7 | Requirements analysis, coverage assessment |
| Day 8-14 | Draft responses, architecture diagrams |
| Day 15-17 | Internal review, quality check |
| Day 18-19 | Final edits, formatting, executive review |
| Day 20 | Submission |
### Time-Saving Strategies
1. **Maintain a response library** - Reusable answers for common requirements
2. **Pre-built architecture diagrams** - Template diagrams for common integration patterns
3. **Standardized compliance language** - Pre-approved language for security and compliance sections
4. **Question templates** - Standard clarifying questions for common ambiguities
---
**Last Updated:** February 2026


@@ -0,0 +1,525 @@
#!/usr/bin/env python3
"""Competitive Matrix Builder - Generate feature comparison matrices and positioning analysis.
Builds feature-by-feature comparison matrices, calculates weighted competitive
scores, identifies differentiators and vulnerabilities, and generates win themes.
Usage:
python competitive_matrix_builder.py competitive_data.json
python competitive_matrix_builder.py competitive_data.json --format json
python competitive_matrix_builder.py competitive_data.json --format text
"""
import argparse
import json
import sys
from typing import Any
# Feature scoring levels
FEATURE_SCORES: dict[str, int] = {
"full": 3,
"partial": 2,
"limited": 1,
"none": 0,
}
FEATURE_LABELS: dict[int, str] = {
3: "Full",
2: "Partial",
1: "Limited",
0: "None",
}
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def load_competitive_data(filepath: str) -> dict[str, Any]:
"""Load and validate competitive data from a JSON file.
Args:
filepath: Path to the JSON file containing competitive data.
Returns:
Parsed competitive data dictionary.
Raises:
SystemExit: If the file cannot be read or parsed.
"""
try:
with open(filepath, "r", encoding="utf-8") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {filepath}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
sys.exit(1)
if "categories" not in data:
print("Error: JSON must contain a 'categories' array.", file=sys.stderr)
sys.exit(1)
if "our_product" not in data:
print("Error: JSON must contain 'our_product' name.", file=sys.stderr)
sys.exit(1)
if "competitors" not in data or not data["competitors"]:
print("Error: JSON must contain a non-empty 'competitors' array.", file=sys.stderr)
sys.exit(1)
return data
def normalize_score(score_value: Any) -> int:
"""Normalize a score value to an integer.
Args:
score_value: Score as string label or integer.
Returns:
Normalized integer score (0-3).
"""
if isinstance(score_value, str):
return FEATURE_SCORES.get(score_value.lower(), 0)
if isinstance(score_value, (int, float)):
return max(0, min(3, int(score_value)))
return 0
def build_comparison_matrix(data: dict[str, Any]) -> dict[str, Any]:
"""Build the feature comparison matrix from input data.
Args:
data: Competitive data with categories, features, and scores.
Returns:
Comparison matrix with per-feature and per-category scores.
"""
our_product = data["our_product"]
competitors = data["competitors"]
all_products = [our_product] + competitors
matrix: list[dict[str, Any]] = []
category_summaries: dict[str, dict[str, Any]] = {}
for category in data["categories"]:
cat_name = category["name"]
cat_weight = category.get("weight", 1.0)
cat_features = category.get("features", [])
cat_scores: dict[str, list[int]] = {p: [] for p in all_products}
for feature in cat_features:
feature_name = feature["name"]
scores: dict[str, int] = {}
for product in all_products:
raw_score = feature.get("scores", {}).get(product, 0)
scores[product] = normalize_score(raw_score)
cat_scores[product].append(scores[product])
# Determine leader for this feature
max_score = max(scores.values())
leaders = [p for p, s in scores.items() if s == max_score]
matrix.append({
"category": cat_name,
"feature": feature_name,
"scores": scores,
"leaders": leaders,
"our_score": scores[our_product],
"max_score": max_score,
"we_lead": our_product in leaders and len(leaders) == 1,
"we_trail": scores[our_product] < max_score,
})
# Category summary
cat_product_scores = {}
for product in all_products:
product_scores = cat_scores[product]
total = sum(product_scores)
max_possible = len(product_scores) * 3
pct = safe_divide(total, max_possible) * 100
cat_product_scores[product] = {
"total_score": total,
"max_possible": max_possible,
"percentage": round(pct, 1),
}
category_summaries[cat_name] = {
"weight": cat_weight,
"feature_count": len(cat_features),
"product_scores": cat_product_scores,
}
return {
"our_product": our_product,
"competitors": competitors,
"all_products": all_products,
"matrix": matrix,
"category_summaries": category_summaries,
}
def compute_competitive_scores(
comparison: dict[str, Any],
) -> dict[str, dict[str, Any]]:
"""Compute weighted competitive scores for each product.
Args:
comparison: Comparison matrix data.
Returns:
Product scores with weighted and unweighted totals.
"""
all_products = comparison["all_products"]
category_summaries = comparison["category_summaries"]
product_scores: dict[str, dict[str, float]] = {
p: {"weighted_total": 0.0, "max_weighted": 0.0, "unweighted_total": 0, "max_unweighted": 0}
for p in all_products
}
for cat_name, cat_data in category_summaries.items():
weight = cat_data["weight"]
for product in all_products:
p_data = cat_data["product_scores"][product]
product_scores[product]["weighted_total"] += p_data["total_score"] * weight
product_scores[product]["max_weighted"] += p_data["max_possible"] * weight
product_scores[product]["unweighted_total"] += p_data["total_score"]
product_scores[product]["max_unweighted"] += p_data["max_possible"]
result = {}
for product in all_products:
ps = product_scores[product]
weighted_pct = safe_divide(ps["weighted_total"], ps["max_weighted"]) * 100
unweighted_pct = safe_divide(ps["unweighted_total"], ps["max_unweighted"]) * 100
result[product] = {
"weighted_score": round(weighted_pct, 1),
"unweighted_score": round(unweighted_pct, 1),
"weighted_total": round(ps["weighted_total"], 2),
"max_weighted": round(ps["max_weighted"], 2),
}
return result
def identify_differentiators(comparison: dict[str, Any]) -> list[dict[str, Any]]:
"""Identify features where our product leads all competitors.
Args:
comparison: Comparison matrix data.
Returns:
List of differentiator features with details.
"""
differentiators = []
for entry in comparison["matrix"]:
if entry["we_lead"] and entry["our_score"] >= 2:
# Calculate gap from nearest competitor
competitor_scores = [
entry["scores"][c] for c in comparison["competitors"]
]
max_competitor = max(competitor_scores) if competitor_scores else 0
gap = entry["our_score"] - max_competitor
differentiators.append({
"feature": entry["feature"],
"category": entry["category"],
"our_score": entry["our_score"],
"our_label": FEATURE_LABELS.get(entry["our_score"], "Unknown"),
"best_competitor_score": max_competitor,
"gap": gap,
})
# Sort by gap size descending
differentiators.sort(key=lambda d: d["gap"], reverse=True)
return differentiators
def identify_vulnerabilities(comparison: dict[str, Any]) -> list[dict[str, Any]]:
"""Identify features where competitors lead our product.
Args:
comparison: Comparison matrix data.
Returns:
List of vulnerability features with details.
"""
vulnerabilities = []
for entry in comparison["matrix"]:
if entry["we_trail"]:
# Find which competitor leads
leader_scores = {
p: entry["scores"][p]
for p in comparison["competitors"]
if entry["scores"][p] == entry["max_score"]
}
gap = entry["max_score"] - entry["our_score"]
vulnerabilities.append({
"feature": entry["feature"],
"category": entry["category"],
"our_score": entry["our_score"],
"our_label": FEATURE_LABELS.get(entry["our_score"], "Unknown"),
"leading_competitors": leader_scores,
"gap": gap,
})
# Sort by gap size descending
vulnerabilities.sort(key=lambda v: v["gap"], reverse=True)
return vulnerabilities
def generate_win_themes(
differentiators: list[dict[str, Any]],
competitive_scores: dict[str, dict[str, Any]],
our_product: str,
) -> list[str]:
"""Generate win themes based on differentiators and competitive position.
Args:
differentiators: List of differentiator features.
competitive_scores: Product competitive scores.
our_product: Our product name.
Returns:
List of win theme strings.
"""
themes = []
# Theme from top differentiators
if differentiators:
top_diff_categories = list({d["category"] for d in differentiators[:5]})
for cat in top_diff_categories[:3]:
cat_diffs = [d for d in differentiators if d["category"] == cat]
feature_names = [d["feature"] for d in cat_diffs[:3]]
themes.append(
f"Superior {cat} capabilities: {', '.join(feature_names)}"
)
# Theme from overall competitive position
our_score = competitive_scores.get(our_product, {}).get("weighted_score", 0)
competitor_scores = [
(p, s["weighted_score"])
for p, s in competitive_scores.items()
if p != our_product
]
if competitor_scores:
best_competitor_name, best_competitor_score = max(
competitor_scores, key=lambda x: x[1]
)
if our_score > best_competitor_score:
themes.append(
f"Overall strongest solution ({our_score:.1f}% vs {best_competitor_name} at {best_competitor_score:.1f}%)"
)
# Theme from breadth of coverage
strong_diffs = [d for d in differentiators if d["gap"] >= 2]
if len(strong_diffs) >= 3:
themes.append(
f"Clear technical leadership across {len(strong_diffs)} key features with significant competitive gaps"
)
if not themes:
themes.append("Competitive parity - emphasize implementation quality, support, and total cost of ownership")
return themes
def analyze_competitive(data: dict[str, Any]) -> dict[str, Any]:
"""Run the complete competitive analysis pipeline.
Args:
data: Parsed competitive data dictionary.
Returns:
Complete analysis results dictionary.
"""
comparison = build_comparison_matrix(data)
competitive_scores = compute_competitive_scores(comparison)
differentiators = identify_differentiators(comparison)
vulnerabilities = identify_vulnerabilities(comparison)
win_themes = generate_win_themes(
differentiators, competitive_scores, comparison["our_product"]
)
return {
"analysis_info": {
"our_product": comparison["our_product"],
"competitors": comparison["competitors"],
"total_features": len(comparison["matrix"]),
"total_categories": len(comparison["category_summaries"]),
},
"competitive_scores": competitive_scores,
"category_breakdown": comparison["category_summaries"],
"comparison_matrix": comparison["matrix"],
"differentiators": differentiators,
"vulnerabilities": vulnerabilities,
"win_themes": win_themes,
}
def format_text(result: dict[str, Any]) -> str:
"""Format analysis results as human-readable text.
Args:
result: Complete analysis results dictionary.
Returns:
Formatted text string.
"""
lines = []
info = result["analysis_info"]
all_products = [info["our_product"]] + info["competitors"]
lines.append("=" * 80)
lines.append("COMPETITIVE MATRIX ANALYSIS")
lines.append("=" * 80)
lines.append(f"Our Product: {info['our_product']}")
lines.append(f"Competitors: {', '.join(info['competitors'])}")
lines.append(f"Features: {info['total_features']}")
lines.append(f"Categories: {info['total_categories']}")
lines.append("")
# Competitive scores
lines.append("-" * 80)
lines.append("COMPETITIVE SCORES")
lines.append("-" * 80)
lines.append(f"{'Product':<25} {'Weighted':>10} {'Unweighted':>12}")
lines.append("-" * 80)
# Sort by weighted score descending
sorted_scores = sorted(
result["competitive_scores"].items(),
key=lambda x: x[1]["weighted_score"],
reverse=True,
)
for product, scores in sorted_scores:
marker = " <-- US" if product == info["our_product"] else ""
lines.append(
f"{product:<25} {scores['weighted_score']:>9.1f}% {scores['unweighted_score']:>11.1f}%{marker}"
)
lines.append("")
# Feature matrix
lines.append("-" * 80)
lines.append("FEATURE COMPARISON MATRIX")
lines.append("-" * 80)
# Build header
product_cols = " ".join(f"{p[:10]:>10}" for p in all_products)
lines.append(f"{'Feature':<30} {product_cols}")
lines.append("-" * 80)
current_category = ""
for entry in result["comparison_matrix"]:
if entry["category"] != current_category:
current_category = entry["category"]
cat_data = result["category_breakdown"].get(current_category, {})
weight = cat_data.get("weight", 1.0)
lines.append(f"\n [{current_category}] (weight: {weight}x)")
score_cols = " ".join(
f"{FEATURE_LABELS.get(entry['scores'].get(p, 0), 'N/A'):>10}"
for p in all_products
)
lead_marker = " *" if entry["we_lead"] else (" !" if entry["we_trail"] else "")
feature_display = entry["feature"][:28]
lines.append(f" {feature_display:<28} {score_cols}{lead_marker}")
lines.append("")
lines.append(" * = We lead | ! = We trail")
lines.append("")
# Differentiators
diffs = result["differentiators"]
if diffs:
lines.append("-" * 80)
lines.append(f"DIFFERENTIATORS ({len(diffs)} features where we lead)")
lines.append("-" * 80)
for d in diffs:
lines.append(
f" + {d['feature']} [{d['category']}] "
f"- Us: {d['our_label']} vs Best Competitor: {FEATURE_LABELS.get(d['best_competitor_score'], 'N/A')} "
f"(gap: +{d['gap']})"
)
lines.append("")
# Vulnerabilities
vulns = result["vulnerabilities"]
if vulns:
lines.append("-" * 80)
lines.append(f"VULNERABILITIES ({len(vulns)} features where competitors lead)")
lines.append("-" * 80)
for v in vulns:
leaders = ", ".join(
f"{p}: {FEATURE_LABELS.get(s, 'N/A')}"
for p, s in v["leading_competitors"].items()
)
lines.append(
f" - {v['feature']} [{v['category']}] "
f"- Us: {v['our_label']} vs {leaders} "
f"(gap: -{v['gap']})"
)
lines.append("")
# Win themes
themes = result["win_themes"]
lines.append("-" * 80)
lines.append("WIN THEMES")
lines.append("-" * 80)
for i, theme in enumerate(themes, 1):
lines.append(f" {i}. {theme}")
lines.append("")
lines.append("=" * 80)
return "\n".join(lines)
def main() -> None:
"""Main entry point for the Competitive Matrix Builder."""
parser = argparse.ArgumentParser(
description="Build competitive feature comparison matrices and positioning analysis.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=(
"Feature Scoring:\n"
" Full (3) - Complete feature support\n"
" Partial (2) - Partial or limited support\n"
" Limited (1) - Minimal or basic support\n"
" None (0) - Feature not available\n"
"\n"
"Example:\n"
" python competitive_matrix_builder.py competitive_data.json --format json\n"
),
)
parser.add_argument(
"input_file",
help="Path to JSON file containing competitive data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
data = load_competitive_data(args.input_file)
result = analyze_competitive(data)
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(format_text(result))
if __name__ == "__main__":
main()
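For context, the gap computation that feeds `identify_differentiators` and `generate_win_themes` above can be sketched in isolation. This is a minimal illustration using the 0-3 scale from the CLI epilog (Full=3, Partial=2, Limited=1, None=0); the product names and scores are hypothetical, not the tool's input format:

```python
# Hypothetical per-feature scores on the 0-3 scale described in the epilog.
scores = {"OurProduct": 3, "CompetitorA": 1, "CompetitorB": 2}

our_score = scores["OurProduct"]
best_rival = max(v for name, v in scores.items() if name != "OurProduct")

gap = our_score - best_rival   # positive gap means we lead on this feature
is_strong = gap >= 2           # gaps of 2+ feed the "breadth of coverage" win theme

print(gap, is_strong)
```

A feature with `gap >= 1` lands in the differentiators list; only features with `gap >= 2` count toward the "clear technical leadership" theme, so a product can lead broadly yet still fall back to the parity messaging.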



@@ -0,0 +1,765 @@
#!/usr/bin/env python3
"""POC Planner - Plan proof-of-concept engagements with timeline, resources, and scorecards.
Generates structured POC plans including phased timelines, resource allocation,
success criteria with measurable metrics, evaluation scorecards, risk identification,
and go/no-go recommendation frameworks.
Usage:
python poc_planner.py poc_data.json
python poc_planner.py poc_data.json --format json
python poc_planner.py poc_data.json --format text
"""
import argparse
import json
import sys
from typing import Any
# Default phase definitions
DEFAULT_PHASES = [
{
"name": "Setup",
"duration_weeks": 1,
"description": "Environment provisioning, data migration, initial configuration",
"activities": [
"Provision POC environment",
"Configure authentication and access",
"Migrate sample data sets",
"Set up monitoring and logging",
"Conduct kickoff meeting with stakeholders",
],
},
{
"name": "Core Testing",
"duration_weeks": 2,
"description": "Primary use case validation and integration testing",
"activities": [
"Execute primary use case scenarios",
"Test core integrations",
"Validate data flow and transformations",
"Conduct mid-point review with stakeholders",
"Document findings and adjust test plan",
],
},
{
"name": "Advanced Testing",
"duration_weeks": 1,
"description": "Edge cases, performance testing, and security validation",
"activities": [
"Execute edge case scenarios",
"Run performance and load tests",
"Validate security controls and compliance",
"Test disaster recovery and failover",
"Test administrative workflows",
],
},
{
"name": "Evaluation",
"duration_weeks": 1,
"description": "Scorecard completion, stakeholder review, and go/no-go decision",
"activities": [
"Complete evaluation scorecard",
"Compile POC results documentation",
"Conduct final stakeholder review",
"Present go/no-go recommendation",
"Gather lessons learned",
],
},
]
# Evaluation categories with default weights
DEFAULT_EVAL_CATEGORIES = {
"Functionality": {
"weight": 0.30,
"criteria": [
"Core feature completeness",
"Use case coverage",
"Customization flexibility",
"Workflow automation",
],
},
"Performance": {
"weight": 0.20,
"criteria": [
"Response time under load",
"Throughput capacity",
"Scalability characteristics",
"Resource utilization",
],
},
"Integration": {
"weight": 0.20,
"criteria": [
"API completeness and documentation",
"Data migration ease",
"Third-party connector availability",
"Authentication/SSO integration",
],
},
"Usability": {
"weight": 0.15,
"criteria": [
"User interface intuitiveness",
"Learning curve assessment",
"Documentation quality",
"Admin console functionality",
],
},
"Support": {
"weight": 0.15,
"criteria": [
"Technical support responsiveness",
"Knowledge base quality",
"Training resources availability",
"Community and ecosystem",
],
},
}
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def load_poc_data(filepath: str) -> dict[str, Any]:
"""Load and validate POC data from a JSON file.
Args:
filepath: Path to the JSON file containing POC data.
Returns:
Parsed POC data dictionary.
Raises:
SystemExit: If the file cannot be read or parsed.
"""
try:
with open(filepath, "r", encoding="utf-8") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {filepath}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
sys.exit(1)
if "poc_name" not in data:
print("Error: JSON must contain 'poc_name' field.", file=sys.stderr)
sys.exit(1)
return data
def estimate_resources(data: dict[str, Any], phases: list[dict[str, Any]]) -> dict[str, Any]:
"""Estimate resource requirements for the POC.
Args:
data: POC data with scope and requirements.
phases: List of phase definitions.
Returns:
Resource allocation dictionary.
"""
total_weeks = sum(p["duration_weeks"] for p in phases)
complexity = data.get("complexity", "medium").lower()
num_integrations = data.get("num_integrations", 0)
# Base SE hours per week by complexity
se_hours_per_week = {"low": 15, "medium": 25, "high": 35}.get(complexity, 25)
# Engineering support hours
eng_base = {"low": 5, "medium": 10, "high": 20}.get(complexity, 10)
eng_integration_hours = num_integrations * 8
# Customer resource hours
customer_hours_per_week = {"low": 5, "medium": 8, "high": 12}.get(complexity, 8)
se_total = se_hours_per_week * total_weeks
eng_total = (eng_base * total_weeks) + eng_integration_hours
customer_total = customer_hours_per_week * total_weeks
# Phase-level breakdown
phase_resources = []
for phase in phases:
weeks = phase["duration_weeks"]
# Setup phase has higher SE and eng effort
se_multiplier = 1.3 if phase["name"] == "Setup" else (
1.0 if phase["name"] in ("Core Testing", "Advanced Testing") else 0.7
)
eng_multiplier = 1.5 if phase["name"] == "Setup" else (
1.0 if phase["name"] == "Core Testing" else (
1.2 if phase["name"] == "Advanced Testing" else 0.5
)
)
phase_resources.append({
"phase": phase["name"],
"duration_weeks": weeks,
"se_hours": round(se_hours_per_week * weeks * se_multiplier),
"engineering_hours": round(eng_base * weeks * eng_multiplier),
"customer_hours": round(customer_hours_per_week * weeks),
})
return {
"total_duration_weeks": total_weeks,
"complexity": complexity,
"totals": {
"se_hours": se_total,
"engineering_hours": eng_total,
"customer_hours": customer_total,
"total_hours": se_total + eng_total + customer_total,
},
"phase_breakdown": phase_resources,
"additional_resources": {
"integration_hours": eng_integration_hours,
"num_integrations": num_integrations,
},
}
def generate_success_criteria(data: dict[str, Any]) -> list[dict[str, Any]]:
"""Generate success criteria based on POC scope and requirements.
Args:
data: POC data with scope and requirements.
Returns:
List of success criteria with metrics.
"""
criteria = []
# Custom criteria from input
custom_criteria = data.get("success_criteria", [])
for cc in custom_criteria:
criteria.append({
"criterion": cc.get("criterion", "Unnamed criterion"),
"metric": cc.get("metric", "Pass/Fail"),
"target": cc.get("target", "Met"),
"category": cc.get("category", "Functionality"),
"priority": cc.get("priority", "must-have"),
})
# Auto-generated criteria based on scope
scope_items = data.get("scope_items", [])
for item in scope_items:
if isinstance(item, str):
criteria.append({
"criterion": f"Validate: {item}",
"metric": "Pass/Fail",
"target": "Pass",
"category": "Functionality",
"priority": "must-have",
})
elif isinstance(item, dict):
criteria.append({
"criterion": item.get("name", "Unnamed scope item"),
"metric": item.get("metric", "Pass/Fail"),
"target": item.get("target", "Pass"),
"category": item.get("category", "Functionality"),
"priority": item.get("priority", "must-have"),
})
# Default criteria if none provided
if not criteria:
criteria = [
{
"criterion": "Core use case validation",
"metric": "Percentage of use cases successfully demonstrated",
"target": ">90%",
"category": "Functionality",
"priority": "must-have",
},
{
"criterion": "Performance under expected load",
"metric": "Response time at target concurrency",
"target": "<2 seconds p95",
"category": "Performance",
"priority": "must-have",
},
{
"criterion": "Integration with existing systems",
"metric": "Number of integrations successfully tested",
"target": "All planned integrations",
"category": "Integration",
"priority": "must-have",
},
{
"criterion": "User acceptance",
"metric": "Stakeholder satisfaction score",
"target": ">4.0/5.0",
"category": "Usability",
"priority": "should-have",
},
]
return criteria
def generate_evaluation_scorecard(data: dict[str, Any]) -> dict[str, Any]:
"""Generate the POC evaluation scorecard template.
Args:
data: POC data.
Returns:
Evaluation scorecard structure.
"""
custom_categories = data.get("evaluation_categories", {})
# Merge custom categories with defaults
categories = {}
for cat_name, cat_data in DEFAULT_EVAL_CATEGORIES.items():
if cat_name in custom_categories:
custom = custom_categories[cat_name]
categories[cat_name] = {
"weight": custom.get("weight", cat_data["weight"]),
"criteria": custom.get("criteria", cat_data["criteria"]),
"score": None,
"notes": "",
}
else:
categories[cat_name] = {
"weight": cat_data["weight"],
"criteria": cat_data["criteria"],
"score": None,
"notes": "",
}
# Normalize weights to sum to 1.0
total_weight = sum(c["weight"] for c in categories.values())
if total_weight > 0 and abs(total_weight - 1.0) > 0.01:
for cat in categories.values():
cat["weight"] = round(safe_divide(cat["weight"], total_weight), 2)
return {
"scoring_scale": {
"5": "Exceeds requirements - superior capability",
"4": "Meets requirements - full capability",
"3": "Partially meets - acceptable with minor gaps",
"2": "Below expectations - significant gaps",
"1": "Does not meet - critical gaps",
},
"categories": categories,
"pass_threshold": 3.5,
"strong_pass_threshold": 4.0,
}
def identify_risks(data: dict[str, Any], resources: dict[str, Any]) -> list[dict[str, Any]]:
"""Identify POC risks and generate mitigation strategies.
Args:
data: POC data.
resources: Resource allocation data.
Returns:
List of risk entries with probability, impact, and mitigation.
"""
risks = []
complexity = data.get("complexity", "medium").lower()
num_integrations = data.get("num_integrations", 0)
total_weeks = resources["total_duration_weeks"]
stakeholders = data.get("stakeholders", [])
# Timeline risk
if total_weeks > 6:
risks.append({
"risk": "Extended timeline may lose stakeholder attention",
"probability": "high",
"impact": "high",
"mitigation": "Schedule weekly progress checkpoints; deliver early wins in week 2",
"category": "Timeline",
})
elif total_weeks >= 4:
risks.append({
"risk": "Timeline may slip due to unforeseen technical issues",
"probability": "medium",
"impact": "medium",
"mitigation": "Build 20% buffer into each phase; identify critical path early",
"category": "Timeline",
})
# Integration risks
if num_integrations > 3:
risks.append({
"risk": "Multiple integrations increase complexity and failure points",
"probability": "high",
"impact": "high",
"mitigation": "Prioritize integrations by business value; test incrementally; have fallback demo data",
"category": "Technical",
})
elif num_integrations > 0:
risks.append({
"risk": "Integration dependencies may cause delays",
"probability": "medium",
"impact": "medium",
"mitigation": "Engage customer IT early; confirm API access and credentials in setup phase",
"category": "Technical",
})
# Data risks
risks.append({
"risk": "Customer data quality or availability issues",
"probability": "medium",
"impact": "high",
"mitigation": "Request sample data early; prepare synthetic data as fallback; validate data format in setup",
"category": "Data",
})
# Stakeholder risks
if len(stakeholders) > 5:
risks.append({
"risk": "Too many stakeholders may slow decision-making",
"probability": "medium",
"impact": "medium",
"mitigation": "Identify decision-maker and champion; schedule focused reviews per stakeholder group",
"category": "Stakeholder",
})
if not stakeholders:
risks.append({
"risk": "Undefined stakeholder map may lead to misaligned evaluation",
"probability": "high",
"impact": "high",
"mitigation": "Confirm stakeholder list, roles, and evaluation criteria before setup phase",
"category": "Stakeholder",
})
# Resource risks
if complexity == "high":
risks.append({
"risk": "High complexity may require additional engineering resources",
"probability": "medium",
"impact": "high",
"mitigation": "Secure engineering commitment upfront; identify escalation path for blockers",
"category": "Resource",
})
# Competitive risk
risks.append({
"risk": "Competitor POC running in parallel may shift evaluation criteria",
"probability": "medium",
"impact": "medium",
"mitigation": "Stay close to champion; align success criteria early; differentiate on unique strengths",
"category": "Competitive",
})
return risks
def generate_go_no_go_framework(data: dict[str, Any]) -> dict[str, Any]:
"""Generate the go/no-go decision framework.
Args:
data: POC data.
Returns:
Go/no-go framework with criteria and thresholds.
"""
return {
"decision_criteria": [
{
"criterion": "Overall scorecard score",
"go_threshold": ">=3.5 weighted average",
"no_go_threshold": "<3.0 weighted average",
"conditional_range": "3.0 - 3.5",
},
{
"criterion": "Must-have success criteria met",
"go_threshold": "100% of must-have criteria pass",
"no_go_threshold": "<80% of must-have criteria pass",
"conditional_range": "80-99% with mitigation plan",
},
{
"criterion": "Stakeholder satisfaction",
"go_threshold": "Champion and decision-maker both positive",
"no_go_threshold": "Decision-maker negative",
"conditional_range": "Mixed signals - needs follow-up",
},
{
"criterion": "Technical blockers",
"go_threshold": "No unresolved critical blockers",
"no_go_threshold": ">2 unresolved critical blockers",
"conditional_range": "1-2 blockers with clear resolution path",
},
],
"recommendation_logic": {
"GO": "All criteria meet go thresholds, or majority go with no no-go triggers",
"CONDITIONAL_GO": "Some criteria in conditional range, but no no-go triggers and clear resolution plan",
"NO_GO": "Any criterion triggers no-go threshold without clear mitigation",
},
}
def plan_poc(data: dict[str, Any]) -> dict[str, Any]:
"""Run the complete POC planning pipeline.
Args:
data: Parsed POC data dictionary.
Returns:
Complete POC plan dictionary.
"""
poc_info = {
"poc_name": data.get("poc_name", "Unnamed POC"),
"customer": data.get("customer", "Unknown Customer"),
"opportunity_value": data.get("opportunity_value", "Not specified"),
"complexity": data.get("complexity", "medium"),
"start_date": data.get("start_date", "TBD"),
"champion": data.get("champion", "Not identified"),
"decision_maker": data.get("decision_maker", "Not identified"),
}
# Use custom phases if provided, otherwise defaults
phases = data.get("phases", DEFAULT_PHASES)
# Resource estimation
resources = estimate_resources(data, phases)
# Success criteria
success_criteria = generate_success_criteria(data)
# Evaluation scorecard
scorecard = generate_evaluation_scorecard(data)
# Risk identification
risks = identify_risks(data, resources)
# Go/No-Go framework
go_no_go = generate_go_no_go_framework(data)
# Timeline with phase details
timeline = []
current_week = 1
for phase in phases:
end_week = current_week + phase["duration_weeks"] - 1
timeline.append({
"phase": phase["name"],
"start_week": current_week,
"end_week": end_week,
"duration_weeks": phase["duration_weeks"],
"description": phase["description"],
"activities": phase["activities"],
})
current_week = end_week + 1
# Stakeholder plan
stakeholders = data.get("stakeholders", [])
stakeholder_plan = []
for s in stakeholders:
if isinstance(s, str):
stakeholder_plan.append({
"name": s,
"role": "Evaluator",
"engagement": "Weekly updates, phase reviews",
})
elif isinstance(s, dict):
stakeholder_plan.append({
"name": s.get("name", "Unknown"),
"role": s.get("role", "Evaluator"),
"engagement": s.get("engagement", "Weekly updates, phase reviews"),
})
return {
"poc_info": poc_info,
"timeline": timeline,
"resource_allocation": resources,
"success_criteria": success_criteria,
"evaluation_scorecard": scorecard,
"risk_register": risks,
"go_no_go_framework": go_no_go,
"stakeholder_plan": stakeholder_plan,
}
def format_text(result: dict[str, Any]) -> str:
"""Format POC plan as human-readable text.
Args:
result: Complete POC plan dictionary.
Returns:
Formatted text string.
"""
lines = []
info = result["poc_info"]
lines.append("=" * 70)
lines.append("PROOF OF CONCEPT PLAN")
lines.append("=" * 70)
lines.append(f"POC Name: {info['poc_name']}")
lines.append(f"Customer: {info['customer']}")
lines.append(f"Opportunity Value: {info['opportunity_value']}")
lines.append(f"Complexity: {info['complexity'].upper()}")
lines.append(f"Start Date: {info['start_date']}")
lines.append(f"Champion: {info['champion']}")
lines.append(f"Decision Maker: {info['decision_maker']}")
lines.append("")
# Timeline
lines.append("-" * 70)
lines.append("TIMELINE")
lines.append("-" * 70)
for phase in result["timeline"]:
week_range = (
f"Week {phase['start_week']}"
if phase["start_week"] == phase["end_week"]
else f"Weeks {phase['start_week']}-{phase['end_week']}"
)
lines.append(f"\n Phase: {phase['phase']} ({week_range})")
lines.append(f" {phase['description']}")
lines.append(" Activities:")
for activity in phase["activities"]:
lines.append(f" - {activity}")
lines.append("")
# Resource allocation
res = result["resource_allocation"]
lines.append("-" * 70)
lines.append("RESOURCE ALLOCATION")
lines.append("-" * 70)
lines.append(f"Total Duration: {res['total_duration_weeks']} weeks")
lines.append(f"Complexity: {res['complexity'].upper()}")
lines.append("")
lines.append(" Totals:")
lines.append(f" SE Hours: {res['totals']['se_hours']}")
lines.append(f" Engineering Hours: {res['totals']['engineering_hours']}")
lines.append(f" Customer Hours: {res['totals']['customer_hours']}")
lines.append(f" Total Hours: {res['totals']['total_hours']}")
lines.append("")
lines.append(" Phase Breakdown:")
lines.append(f" {'Phase':<20} {'Weeks':>5} {'SE':>6} {'Eng':>6} {'Cust':>6}")
lines.append(" " + "-" * 45)
for pr in res["phase_breakdown"]:
lines.append(
f" {pr['phase']:<20} {pr['duration_weeks']:>5} "
f"{pr['se_hours']:>5}h {pr['engineering_hours']:>5}h {pr['customer_hours']:>5}h"
)
lines.append("")
# Success criteria
criteria = result["success_criteria"]
lines.append("-" * 70)
lines.append("SUCCESS CRITERIA")
lines.append("-" * 70)
for i, sc in enumerate(criteria, 1):
priority_marker = "[MUST]" if sc["priority"] == "must-have" else (
"[SHOULD]" if sc["priority"] == "should-have" else "[NICE]"
)
lines.append(f" {i}. {priority_marker} {sc['criterion']}")
lines.append(f" Metric: {sc['metric']}")
lines.append(f" Target: {sc['target']}")
lines.append(f" Category: {sc['category']}")
lines.append("")
# Evaluation scorecard
scorecard = result["evaluation_scorecard"]
lines.append("-" * 70)
lines.append("EVALUATION SCORECARD")
lines.append("-" * 70)
lines.append(f" Pass Threshold: {scorecard['pass_threshold']}/5.0")
lines.append(f" Strong Pass Threshold: {scorecard['strong_pass_threshold']}/5.0")
lines.append("")
lines.append(" Scoring Scale:")
for score, desc in scorecard["scoring_scale"].items():
lines.append(f" {score} = {desc}")
lines.append("")
lines.append(" Categories:")
for cat_name, cat_data in scorecard["categories"].items():
lines.append(f"\n {cat_name} (weight: {cat_data['weight']:.0%})")
for criterion in cat_data["criteria"]:
lines.append(f" [ ] {criterion}")
lines.append("")
# Risk register
risks = result["risk_register"]
lines.append("-" * 70)
lines.append("RISK REGISTER")
lines.append("-" * 70)
for risk in risks:
lines.append(f" [{risk['impact'].upper()}] {risk['risk']}")
lines.append(f" Probability: {risk['probability']} | Impact: {risk['impact']}")
lines.append(f" Category: {risk['category']}")
lines.append(f" Mitigation: {risk['mitigation']}")
lines.append("")
# Go/No-Go framework
framework = result["go_no_go_framework"]
lines.append("-" * 70)
lines.append("GO / NO-GO DECISION FRAMEWORK")
lines.append("-" * 70)
for dc in framework["decision_criteria"]:
lines.append(f" {dc['criterion']}:")
lines.append(f" GO: {dc['go_threshold']}")
lines.append(f" CONDITIONAL: {dc['conditional_range']}")
lines.append(f" NO-GO: {dc['no_go_threshold']}")
lines.append("")
lines.append(" Recommendation Logic:")
for decision, logic in framework["recommendation_logic"].items():
lines.append(f" {decision}: {logic}")
lines.append("")
# Stakeholder plan
stakeholders = result["stakeholder_plan"]
if stakeholders:
lines.append("-" * 70)
lines.append("STAKEHOLDER PLAN")
lines.append("-" * 70)
for s in stakeholders:
lines.append(f" {s['name']} ({s['role']})")
lines.append(f" Engagement: {s['engagement']}")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for the POC Planner."""
parser = argparse.ArgumentParser(
description="Plan proof-of-concept engagements with timeline, resources, and evaluation scorecards.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=(
"Default Phases:\n"
" Week 1: Setup - Environment provisioning, configuration\n"
" Weeks 2-3: Core Testing - Primary use cases, integrations\n"
" Week 4: Advanced Testing - Edge cases, performance, security\n"
" Week 5: Evaluation - Scorecard, stakeholder review, go/no-go\n"
"\n"
"Example:\n"
" python poc_planner.py poc_data.json --format json\n"
),
)
parser.add_argument(
"input_file",
help="Path to JSON file containing POC scope and requirements",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
data = load_poc_data(args.input_file)
result = plan_poc(data)
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(format_text(result))
if __name__ == "__main__":
main()
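The phase-to-week mapping inside `plan_poc` can be sketched on its own. This runs the same accumulation loop against the default phase durations (Setup 1 week, Core Testing 2, Advanced Testing 1, Evaluation 1) and reproduces the week ranges listed in the CLI epilog:

```python
# Default phase durations from DEFAULT_PHASES, condensed to name -> weeks.
durations = {"Setup": 1, "Core Testing": 2, "Advanced Testing": 1, "Evaluation": 1}

timeline = []
current_week = 1
for name, weeks in durations.items():
    # Each phase starts the week after the previous one ends.
    end_week = current_week + weeks - 1
    timeline.append((name, current_week, end_week))
    current_week = end_week + 1

print(timeline)
# → [('Setup', 1, 1), ('Core Testing', 2, 3), ('Advanced Testing', 4, 4), ('Evaluation', 5, 5)]
```

Custom `phases` in the input JSON flow through the same loop, so any list of `duration_weeks` values yields a contiguous, non-overlapping schedule.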


@@ -0,0 +1,557 @@
#!/usr/bin/env python3
"""RFP/RFI Response Analyzer - Score coverage, identify gaps, and recommend bid/no-bid.
Parses RFP/RFI requirements and scores coverage using Full/Partial/Planned/Gap
categories. Generates weighted coverage scores, gap analysis with mitigation
strategies, effort estimation, and bid/no-bid recommendations.
Usage:
python rfp_response_analyzer.py rfp_data.json
python rfp_response_analyzer.py rfp_data.json --format json
python rfp_response_analyzer.py rfp_data.json --format text
"""
import argparse
import json
import sys
from typing import Any
# Coverage status to score mapping
COVERAGE_SCORES: dict[str, float] = {
"full": 1.0,
"partial": 0.5,
"planned": 0.25,
"gap": 0.0,
}
# Priority to weight mapping
PRIORITY_WEIGHTS: dict[str, float] = {
"must-have": 3.0,
"should-have": 2.0,
"nice-to-have": 1.0,
}
# Bid thresholds
BID_THRESHOLD = 0.70
CONDITIONAL_THRESHOLD = 0.50
MAX_MUST_HAVE_GAPS_FOR_BID = 3
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def load_rfp_data(filepath: str) -> dict[str, Any]:
"""Load and validate RFP data from a JSON file.
Args:
filepath: Path to the JSON file containing RFP data.
Returns:
Parsed RFP data dictionary.
Raises:
SystemExit: If the file cannot be read or parsed.
"""
try:
with open(filepath, "r", encoding="utf-8") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {filepath}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
sys.exit(1)
if "requirements" not in data:
print("Error: JSON must contain a 'requirements' array.", file=sys.stderr)
sys.exit(1)
return data
def analyze_requirement(req: dict[str, Any]) -> dict[str, Any]:
"""Analyze a single requirement and compute its score.
Args:
req: Requirement dictionary with category, priority, coverage_status, etc.
Returns:
Enriched requirement with computed score and weight.
"""
coverage_status = req.get("coverage_status", "gap").lower()
priority = req.get("priority", "nice-to-have").lower()
coverage_score = COVERAGE_SCORES.get(coverage_status, 0.0)
weight = PRIORITY_WEIGHTS.get(priority, 1.0)
weighted_score = coverage_score * weight
max_weighted = weight
effort_hours = req.get("effort_hours", 0)
result = {
"id": req.get("id", "unknown"),
"requirement": req.get("requirement", "Unnamed requirement"),
"category": req.get("category", "Uncategorized"),
"priority": priority,
"coverage_status": coverage_status,
"coverage_score": coverage_score,
"weight": weight,
"weighted_score": weighted_score,
"max_weighted": max_weighted,
"effort_hours": effort_hours,
"notes": req.get("notes", ""),
"mitigation": req.get("mitigation", ""),
}
return result
def generate_gap_analysis(analyzed_reqs: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Generate gap analysis for requirements not fully covered.
Args:
analyzed_reqs: List of analyzed requirement dictionaries.
Returns:
List of gap entries with mitigation strategies.
"""
gaps = []
for req in analyzed_reqs:
if req["coverage_status"] in ("gap", "partial", "planned"):
severity = "critical" if req["priority"] == "must-have" else (
"high" if req["priority"] == "should-have" else "low"
)
mitigation = req["mitigation"]
if not mitigation:
if req["coverage_status"] == "partial":
mitigation = "Enhance existing capability to achieve full coverage"
elif req["coverage_status"] == "planned":
mitigation = "Communicate roadmap timeline and interim workaround"
else:
mitigation = "Evaluate build vs. partner vs. no-bid for this requirement"
gaps.append({
"id": req["id"],
"requirement": req["requirement"],
"category": req["category"],
"priority": req["priority"],
"coverage_status": req["coverage_status"],
"severity": severity,
"effort_hours": req["effort_hours"],
"mitigation": mitigation,
})
# Sort by severity: critical > high > low
severity_order = {"critical": 0, "high": 1, "low": 2}
gaps.sort(key=lambda g: severity_order.get(g["severity"], 3))
return gaps
def compute_category_scores(analyzed_reqs: list[dict[str, Any]]) -> dict[str, dict[str, Any]]:
"""Compute coverage scores grouped by requirement category.
Args:
analyzed_reqs: List of analyzed requirement dictionaries.
Returns:
Dictionary of category names to score summaries.
"""
categories: dict[str, dict[str, float]] = {}
for req in analyzed_reqs:
cat = req["category"]
if cat not in categories:
categories[cat] = {
"weighted_score": 0.0,
"max_weighted": 0.0,
"count": 0,
"full_count": 0,
"partial_count": 0,
"planned_count": 0,
"gap_count": 0,
"effort_hours": 0,
}
categories[cat]["weighted_score"] += req["weighted_score"]
categories[cat]["max_weighted"] += req["max_weighted"]
categories[cat]["count"] += 1
categories[cat]["effort_hours"] += req["effort_hours"]
status_key = f"{req['coverage_status']}_count"
if status_key in categories[cat]:
categories[cat][status_key] += 1
result = {}
for cat, scores in categories.items():
coverage_pct = safe_divide(scores["weighted_score"], scores["max_weighted"]) * 100
result[cat] = {
"coverage_percentage": round(coverage_pct, 1),
"requirements_count": int(scores["count"]),
"full": int(scores["full_count"]),
"partial": int(scores["partial_count"]),
"planned": int(scores["planned_count"]),
"gap": int(scores["gap_count"]),
"effort_hours": int(scores["effort_hours"]),
}
return result

def determine_bid_recommendation(
    overall_coverage: float,
    must_have_gaps: int,
    strategic_value: str,
) -> dict[str, Any]:
    """Determine bid/no-bid recommendation based on coverage and gaps.

    Args:
        overall_coverage: Overall weighted coverage percentage (0-100).
        must_have_gaps: Number of must-have requirements with gap status.
        strategic_value: Strategic value assessment (high, medium, low).

    Returns:
        Recommendation dictionary with decision and rationale.
    """
    coverage_ratio = overall_coverage / 100.0
    reasons = []

    # Primary decision logic
    if coverage_ratio >= BID_THRESHOLD and must_have_gaps <= MAX_MUST_HAVE_GAPS_FOR_BID:
        decision = "BID"
        reasons.append(f"Coverage score {overall_coverage:.1f}% exceeds {BID_THRESHOLD*100:.0f}% threshold")
        if must_have_gaps > 0:
            reasons.append(f"{must_have_gaps} must-have gap(s) within acceptable range (max {MAX_MUST_HAVE_GAPS_FOR_BID})")
    elif coverage_ratio >= CONDITIONAL_THRESHOLD or (
        must_have_gaps <= MAX_MUST_HAVE_GAPS_FOR_BID and coverage_ratio >= 0.4
    ):
        decision = "CONDITIONAL BID"
        reasons.append(f"Coverage score {overall_coverage:.1f}% in conditional range ({CONDITIONAL_THRESHOLD*100:.0f}%-{BID_THRESHOLD*100:.0f}%)")
        if must_have_gaps > 0:
            reasons.append(f"{must_have_gaps} must-have gap(s) require mitigation plan")
    else:
        decision = "NO-BID"
        if coverage_ratio < CONDITIONAL_THRESHOLD:
            reasons.append(f"Coverage score {overall_coverage:.1f}% below {CONDITIONAL_THRESHOLD*100:.0f}% minimum")
        if must_have_gaps > MAX_MUST_HAVE_GAPS_FOR_BID:
            reasons.append(f"{must_have_gaps} must-have gaps exceed maximum of {MAX_MUST_HAVE_GAPS_FOR_BID}")

    # Strategic value adjustment
    if strategic_value.lower() == "high" and decision == "CONDITIONAL BID":
        reasons.append("High strategic value supports pursuing despite coverage gaps")
    elif strategic_value.lower() == "low" and decision == "CONDITIONAL BID":
        decision = "NO-BID"
        reasons.append("Low strategic value does not justify investment for conditional coverage")

    confidence = "high" if coverage_ratio >= 0.80 else (
        "medium" if coverage_ratio >= 0.60 else "low"
    )
    return {
        "decision": decision,
        "confidence": confidence,
        "overall_coverage_percentage": round(overall_coverage, 1),
        "must_have_gaps": must_have_gaps,
        "strategic_value": strategic_value,
        "reasons": reasons,
    }

def generate_risk_assessment(
    analyzed_reqs: list[dict[str, Any]],
    gaps: list[dict[str, Any]],
) -> list[dict[str, str]]:
    """Generate risk assessment based on gaps and coverage patterns.

    Args:
        analyzed_reqs: List of analyzed requirement dictionaries.
        gaps: List of gap analysis entries.

    Returns:
        List of risk entries with impact and mitigation.
    """
    risks = []
    critical_gaps = [g for g in gaps if g["severity"] == "critical"]
    if critical_gaps:
        risks.append({
            "risk": "Critical requirement gaps",
            "impact": "high",
            "description": f"{len(critical_gaps)} must-have requirements not fully met",
            "mitigation": "Prioritize engineering effort or partner integration for gap closure",
        })

    total_effort = sum(r["effort_hours"] for r in analyzed_reqs if r["coverage_status"] != "full")
    if total_effort > 200:
        risks.append({
            "risk": "High customization effort",
            "impact": "high",
            "description": f"{total_effort} hours estimated for non-full requirements",
            "mitigation": "Evaluate resource availability and timeline feasibility before committing",
        })
    elif total_effort > 80:
        risks.append({
            "risk": "Moderate customization effort",
            "impact": "medium",
            "description": f"{total_effort} hours estimated for non-full requirements",
            "mitigation": "Phase implementation and set clear expectations on delivery timeline",
        })

    planned_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "planned")
    if planned_count > 3:
        risks.append({
            "risk": "Roadmap dependency",
            "impact": "medium",
            "description": f"{planned_count} requirements depend on planned product features",
            "mitigation": "Confirm roadmap timelines with product team; include contractual commitments if needed",
        })

    partial_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "partial")
    if partial_count > 5:
        risks.append({
            "risk": "Workaround complexity",
            "impact": "medium",
            "description": f"{partial_count} requirements need workarounds or configuration",
            "mitigation": "Document workarounds clearly; plan for native support in future releases",
        })

    if not risks:
        risks.append({
            "risk": "No significant risks identified",
            "impact": "low",
            "description": "Strong coverage across all requirement categories",
            "mitigation": "Maintain standard engagement process",
        })
    return risks

def analyze_rfp(data: dict[str, Any]) -> dict[str, Any]:
    """Run the complete RFP analysis pipeline.

    Args:
        data: Parsed RFP data with requirements array.

    Returns:
        Complete analysis results dictionary.
    """
    rfp_info = {
        "rfp_name": data.get("rfp_name", "Unnamed RFP"),
        "customer": data.get("customer", "Unknown Customer"),
        "due_date": data.get("due_date", "Not specified"),
        "strategic_value": data.get("strategic_value", "medium"),
        "deal_value": data.get("deal_value", "Not specified"),
    }

    # Analyze each requirement
    analyzed_reqs = [analyze_requirement(req) for req in data["requirements"]]

    # Compute overall scores
    total_weighted = sum(r["weighted_score"] for r in analyzed_reqs)
    total_max = sum(r["max_weighted"] for r in analyzed_reqs)
    overall_coverage = safe_divide(total_weighted, total_max) * 100

    # Coverage summary
    total_count = len(analyzed_reqs)
    full_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "full")
    partial_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "partial")
    planned_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "planned")
    gap_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "gap")

    # Must-have gap count
    must_have_gaps = sum(
        1 for r in analyzed_reqs
        if r["priority"] == "must-have" and r["coverage_status"] == "gap"
    )

    # Category breakdown
    category_scores = compute_category_scores(analyzed_reqs)

    # Gap analysis
    gaps = generate_gap_analysis(analyzed_reqs)

    # Bid recommendation
    bid_recommendation = determine_bid_recommendation(
        overall_coverage,
        must_have_gaps,
        rfp_info["strategic_value"],
    )

    # Risk assessment
    risks = generate_risk_assessment(analyzed_reqs, gaps)

    # Effort summary
    total_effort = sum(r["effort_hours"] for r in analyzed_reqs)
    gap_effort = sum(r["effort_hours"] for r in analyzed_reqs if r["coverage_status"] != "full")

    return {
        "rfp_info": rfp_info,
        "coverage_summary": {
            "overall_coverage_percentage": round(overall_coverage, 1),
            "total_requirements": total_count,
            "full": full_count,
            "partial": partial_count,
            "planned": planned_count,
            "gap": gap_count,
            "must_have_gaps": must_have_gaps,
        },
        "category_scores": category_scores,
        "bid_recommendation": bid_recommendation,
        "gap_analysis": gaps,
        "risk_assessment": risks,
        "effort_estimate": {
            "total_hours": total_effort,
            "gap_closure_hours": gap_effort,
            "full_coverage_hours": total_effort - gap_effort,
        },
        "requirements_detail": analyzed_reqs,
    }

def format_text(result: dict[str, Any]) -> str:
    """Format analysis results as human-readable text.

    Args:
        result: Complete analysis results dictionary.

    Returns:
        Formatted text string.
    """
    lines = []
    info = result["rfp_info"]
    lines.append("=" * 70)
    lines.append("RFP RESPONSE ANALYSIS")
    lines.append("=" * 70)
    lines.append(f"RFP: {info['rfp_name']}")
    lines.append(f"Customer: {info['customer']}")
    lines.append(f"Due Date: {info['due_date']}")
    lines.append(f"Deal Value: {info['deal_value']}")
    lines.append(f"Strategic Value: {info['strategic_value'].upper()}")
    lines.append("")

    # Coverage summary
    cs = result["coverage_summary"]
    lines.append("-" * 70)
    lines.append("COVERAGE SUMMARY")
    lines.append("-" * 70)
    lines.append(f"Overall Coverage: {cs['overall_coverage_percentage']}%")
    lines.append(f"Total Requirements: {cs['total_requirements']}")
    lines.append(f"  Full: {cs['full']} | Partial: {cs['partial']} | Planned: {cs['planned']} | Gap: {cs['gap']}")
    lines.append(f"Must-Have Gaps: {cs['must_have_gaps']}")
    lines.append("")

    # Bid recommendation
    bid = result["bid_recommendation"]
    lines.append("-" * 70)
    lines.append(f"BID RECOMMENDATION: {bid['decision']}")
    lines.append(f"Confidence: {bid['confidence'].upper()}")
    lines.append("-" * 70)
    for reason in bid["reasons"]:
        lines.append(f"  - {reason}")
    lines.append("")

    # Category scores
    lines.append("-" * 70)
    lines.append("CATEGORY BREAKDOWN")
    lines.append("-" * 70)
    lines.append(f"{'Category':<25} {'Coverage':>8} {'Full':>5} {'Part':>5} {'Plan':>5} {'Gap':>5} {'Effort':>7}")
    lines.append("-" * 70)
    for cat, scores in result["category_scores"].items():
        lines.append(
            f"{cat:<25} {scores['coverage_percentage']:>7.1f}% "
            f"{scores['full']:>5} {scores['partial']:>5} "
            f"{scores['planned']:>5} {scores['gap']:>5} "
            f"{scores['effort_hours']:>6}h"
        )
    lines.append("")

    # Gap analysis
    gaps = result["gap_analysis"]
    if gaps:
        lines.append("-" * 70)
        lines.append("GAP ANALYSIS")
        lines.append("-" * 70)
        for gap in gaps:
            severity_marker = "!!!" if gap["severity"] == "critical" else (
                "!!" if gap["severity"] == "high" else "!"
            )
            lines.append(f"  [{severity_marker}] {gap['id']}: {gap['requirement']}")
            lines.append(f"      Category: {gap['category']} | Priority: {gap['priority']} | Status: {gap['coverage_status']}")
            lines.append(f"      Effort: {gap['effort_hours']}h | Mitigation: {gap['mitigation']}")
        lines.append("")

    # Risk assessment
    risks = result["risk_assessment"]
    lines.append("-" * 70)
    lines.append("RISK ASSESSMENT")
    lines.append("-" * 70)
    for risk in risks:
        lines.append(f"  [{risk['impact'].upper()}] {risk['risk']}")
        lines.append(f"      {risk['description']}")
        lines.append(f"      Mitigation: {risk['mitigation']}")
    lines.append("")

    # Effort estimate
    effort = result["effort_estimate"]
    lines.append("-" * 70)
    lines.append("EFFORT ESTIMATE")
    lines.append("-" * 70)
    lines.append(f"  Total Effort: {effort['total_hours']} hours")
    lines.append(f"  Gap Closure Effort: {effort['gap_closure_hours']} hours")
    lines.append(f"  Supported Effort: {effort['full_coverage_hours']} hours")
    lines.append("")
    lines.append("=" * 70)
    return "\n".join(lines)

def main() -> None:
    """Main entry point for the RFP Response Analyzer."""
    parser = argparse.ArgumentParser(
        description="Analyze RFP/RFI requirements for coverage, gaps, and bid recommendation.",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=(
            "Coverage Categories:\n"
            "  Full (100%) - Requirement fully met\n"
            "  Partial (50%) - Partially met, workaround needed\n"
            "  Planned (25%) - On roadmap, not yet available\n"
            "  Gap (0%) - Not supported\n"
            "\n"
            "Priority Weights:\n"
            "  Must-Have (3x) | Should-Have (2x) | Nice-to-Have (1x)\n"
            "\n"
            "Example:\n"
            "  python rfp_response_analyzer.py rfp_data.json --format json\n"
        ),
    )
    parser.add_argument(
        "input_file",
        help="Path to JSON file containing RFP requirements data",
    )
    parser.add_argument(
        "--format",
        choices=["json", "text"],
        default="text",
        dest="output_format",
        help="Output format: json or text (default: text)",
    )
    args = parser.parse_args()

    data = load_rfp_data(args.input_file)
    result = analyze_rfp(data)

    if args.output_format == "json":
        print(json.dumps(result, indent=2))
    else:
        print(format_text(result))


if __name__ == "__main__":
    main()


@@ -0,0 +1,13 @@
{
"name": "finance-skills",
"description": "1 production-ready finance skill: financial analyst with ratio analysis, DCF valuation, budget variance, and forecasting",
"version": "1.0.0",
"author": {
"name": "Alireza Rezvani",
"url": "https://alirezarezvani.com"
},
"homepage": "https://github.com/alirezarezvani/claude-skills/tree/main/finance",
"repository": "https://github.com/alirezarezvani/claude-skills",
"license": "MIT",
"skills": "./"
}

102
finance/CLAUDE.md Normal file

@@ -0,0 +1,102 @@
# Finance Skills - Claude Code Guidance
This guide covers the finance skill and its Python automation tools.
## Finance Skills Overview
**Available Skills:**
1. **financial-analyst/** - Financial statement analysis, ratio analysis, DCF valuation, budgeting, forecasting (4 Python tools)
**Total Tools:** 4 Python automation tools, 3 knowledge bases, 5 templates
## Python Automation Tools
### 1. Ratio Calculator (`financial-analyst/scripts/ratio_calculator.py`)
**Purpose:** Calculate and interpret financial ratios from statement data
**Features:**
- Profitability ratios (ROE, ROA, Gross/Operating/Net Margin)
- Liquidity ratios (Current, Quick, Cash)
- Leverage ratios (Debt-to-Equity, Interest Coverage, DSCR)
- Efficiency ratios (Asset/Inventory/Receivables Turnover, DSO)
- Valuation ratios (P/E, P/B, P/S, EV/EBITDA, PEG)
- Built-in interpretation and benchmarking
**Usage:**
```bash
python financial-analyst/scripts/ratio_calculator.py financial_data.json
python financial-analyst/scripts/ratio_calculator.py financial_data.json --format json
```
### 2. DCF Valuation (`financial-analyst/scripts/dcf_valuation.py`)
**Purpose:** Discounted Cash Flow enterprise and equity valuation
**Features:**
- Revenue and cash flow projections
- WACC calculation (CAPM-based)
- Terminal value (perpetuity growth and exit multiple methods)
- Enterprise and equity value derivation
- Two-way sensitivity analysis
- No external dependencies (uses math/statistics)
**Usage:**
```bash
python financial-analyst/scripts/dcf_valuation.py valuation_data.json
python financial-analyst/scripts/dcf_valuation.py valuation_data.json --format json
```
### 3. Budget Variance Analyzer (`financial-analyst/scripts/budget_variance_analyzer.py`)
**Purpose:** Analyze actual vs budget vs prior year performance
**Features:**
- Variance calculation (actual vs budget, actual vs prior year)
- Materiality threshold filtering
- Favorable/unfavorable classification
- Department and category breakdown
**Usage:**
```bash
python financial-analyst/scripts/budget_variance_analyzer.py budget_data.json
python financial-analyst/scripts/budget_variance_analyzer.py budget_data.json --format json
```
### 4. Forecast Builder (`financial-analyst/scripts/forecast_builder.py`)
**Purpose:** Driver-based revenue forecasting and cash flow projection
**Features:**
- Driver-based revenue forecast model
- 13-week cash flow projection
- Scenario modeling (base/bull/bear)
- Trend analysis from historical data
**Usage:**
```bash
python financial-analyst/scripts/forecast_builder.py forecast_data.json
python financial-analyst/scripts/forecast_builder.py forecast_data.json --format json
```
## Quality Standards
**All finance Python tools must:**
- Use standard library only (math, statistics, json, argparse)
- Support both JSON and human-readable output via `--format` flag
- Provide clear error messages for invalid input
- Return appropriate exit codes
- Process files locally (no API calls)
- Include argparse CLI with `--help` support
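
The shared CLI contract above can be sketched as a minimal skeleton. This is illustrative only; it is not taken from any tool in the repo, and the `run`/`build_parser` names and the key-summary output are assumptions for the example:

```python
import argparse
import json
import sys


def build_parser() -> argparse.ArgumentParser:
    # Shared contract: positional JSON input plus --format json/text.
    parser = argparse.ArgumentParser(description="Finance tool CLI skeleton.")
    parser.add_argument("input_file", help="Path to JSON input data")
    parser.add_argument("--format", choices=["json", "text"], default="text",
                        dest="output_format", help="Output format (default: text)")
    return parser


def run(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    try:
        with open(args.input_file, encoding="utf-8") as fh:
            data = json.load(fh)
    except (OSError, json.JSONDecodeError) as exc:
        # Clear error message plus non-zero exit code on invalid input.
        print(f"Error reading input: {exc}", file=sys.stderr)
        raise SystemExit(1)
    summary = {"top_level_keys": sorted(data)}
    if args.output_format == "json":
        return json.dumps(summary, indent=2)
    return f"Keys: {', '.join(summary['top_level_keys'])}"
```

Standard library only, local file processing, and `--help` come for free from `argparse`; real tools replace the summary step with their analysis pipeline.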
## Related Skills
- **C-Level:** Strategic financial decision-making -> `../c-level-advisor/`
- **Business & Growth:** Revenue operations, sales metrics -> `../business-growth/`
- **Product Team:** Budget allocation, RICE scoring -> `../product-team/`
---
**Last Updated:** February 2026
**Skills Deployed:** 1/1 finance skills production-ready
**Total Tools:** 4 Python automation tools


@@ -0,0 +1,178 @@
---
name: financial-analyst
description: Performs financial ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction for strategic decision-making
---
# Financial Analyst Skill
## Overview
Production-ready financial analysis toolkit providing ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction. Designed for financial analysts with 3-6 years of experience performing financial modeling, forecasting and budgeting, management reporting, business performance analysis, and investment analysis.
## 5-Phase Workflow
### Phase 1: Scoping
- Define analysis objectives and stakeholder requirements
- Identify data sources and time periods
- Establish materiality thresholds and accuracy targets
- Select appropriate analytical frameworks
### Phase 2: Data Analysis & Modeling
- Collect and validate financial data (income statement, balance sheet, cash flow)
- Calculate financial ratios across 5 categories (profitability, liquidity, leverage, efficiency, valuation)
- Build DCF models with WACC and terminal value calculations
- Construct budget variance analyses with favorable/unfavorable classification
- Develop driver-based forecasts with scenario modeling
### Phase 3: Insight Generation
- Interpret ratio trends and benchmark against industry standards
- Identify material variances and root causes
- Assess valuation ranges through sensitivity analysis
- Evaluate forecast scenarios (base/bull/bear) for decision support
### Phase 4: Reporting
- Generate executive summaries with key findings
- Produce detailed variance reports by department and category
- Deliver DCF valuation reports with sensitivity tables
- Present rolling forecasts with trend analysis
### Phase 5: Follow-up
- Track forecast accuracy (target: +/-5% revenue, +/-3% expenses)
- Monitor report delivery timeliness (target: 100% on time)
- Update models with actuals as they become available
- Refine assumptions based on variance analysis
## Tools
### 1. Ratio Calculator (`scripts/ratio_calculator.py`)
Calculate and interpret financial ratios from financial statement data.
**Ratio Categories:**
- **Profitability:** ROE, ROA, Gross Margin, Operating Margin, Net Margin
- **Liquidity:** Current Ratio, Quick Ratio, Cash Ratio
- **Leverage:** Debt-to-Equity, Interest Coverage, DSCR
- **Efficiency:** Asset Turnover, Inventory Turnover, Receivables Turnover, DSO
- **Valuation:** P/E, P/B, P/S, EV/EBITDA, PEG Ratio
```bash
python scripts/ratio_calculator.py sample_financial_data.json
python scripts/ratio_calculator.py sample_financial_data.json --format json
python scripts/ratio_calculator.py sample_financial_data.json --category profitability
```
### 2. DCF Valuation (`scripts/dcf_valuation.py`)
Discounted Cash Flow enterprise and equity valuation with sensitivity analysis.
**Features:**
- WACC calculation via CAPM
- Revenue and free cash flow projections (5-year default)
- Terminal value via perpetuity growth and exit multiple methods
- Enterprise value and equity value derivation
- Two-way sensitivity analysis (discount rate vs growth rate)
```bash
python scripts/dcf_valuation.py valuation_data.json
python scripts/dcf_valuation.py valuation_data.json --format json
python scripts/dcf_valuation.py valuation_data.json --projection-years 7
```
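
The WACC and terminal-value math behind this tool follows the standard textbook forms; the sketch below is an assumption about the implementation, not an excerpt from `dcf_valuation.py`, and the function names are illustrative:

```python
def cost_of_equity_capm(risk_free: float, beta: float, equity_risk_premium: float) -> float:
    # CAPM: Re = rf + beta * ERP
    return risk_free + beta * equity_risk_premium


def wacc(equity_value: float, debt_value: float, cost_of_equity: float,
         pre_tax_cost_of_debt: float, tax_rate: float) -> float:
    # WACC = E/V * Re + D/V * Rd * (1 - t), using market-value weights
    total = equity_value + debt_value
    return (equity_value / total) * cost_of_equity \
        + (debt_value / total) * pre_tax_cost_of_debt * (1.0 - tax_rate)


def terminal_value_perpetuity(final_fcf: float, growth: float, discount: float) -> float:
    # Gordon growth: TV = FCF * (1 + g) / (WACC - g); requires discount > growth
    return final_fcf * (1.0 + growth) / (discount - growth)
```

For example, with an 80/20 equity/debt mix, a 10% cost of equity (4% risk-free, beta 1.2, 5% ERP), 6% pre-tax debt, and a 25% tax rate, WACC works out to 8.9%.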
### 3. Budget Variance Analyzer (`scripts/budget_variance_analyzer.py`)
Analyze actual vs budget vs prior year performance with materiality filtering.
**Features:**
- Dollar and percentage variance calculation
- Materiality threshold filtering (default: 10% or $50K)
- Favorable/unfavorable classification with revenue/expense logic
- Department and category breakdown
- Executive summary generation
```bash
python scripts/budget_variance_analyzer.py budget_data.json
python scripts/budget_variance_analyzer.py budget_data.json --format json
python scripts/budget_variance_analyzer.py budget_data.json --threshold-pct 5 --threshold-amt 25000
```
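
The materiality filter and favorability logic can be sketched as follows. This is a simplified stand-in for the script's internals; only the default thresholds (10% or $50K) come from the documentation above, the rest is assumed:

```python
def is_material(variance_amount: float, variance_pct: float,
                threshold_pct: float = 10.0, threshold_amt: float = 50_000.0) -> bool:
    # A variance is material if it breaches either the % or the $ threshold.
    return abs(variance_pct) >= threshold_pct or abs(variance_amount) >= threshold_amt


def favorability(actual: float, budget: float, is_revenue: bool) -> str:
    # Revenue above budget is favorable; expenses above budget are unfavorable.
    variance = actual - budget
    if variance == 0:
        return "On budget"
    if is_revenue:
        return "Favorable" if variance > 0 else "Unfavorable"
    return "Unfavorable" if variance > 0 else "Favorable"
```

The revenue/expense flip is the part worth noting: the same positive dollar variance is good news on a revenue line and bad news on an expense line.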
### 4. Forecast Builder (`scripts/forecast_builder.py`)
Driver-based revenue forecasting with rolling cash flow projection and scenario modeling.
**Features:**
- Driver-based revenue forecast model
- 13-week rolling cash flow projection
- Scenario modeling (base/bull/bear cases)
- Trend analysis using simple linear regression (standard library)
```bash
python scripts/forecast_builder.py forecast_data.json
python scripts/forecast_builder.py forecast_data.json --format json
python scripts/forecast_builder.py forecast_data.json --scenarios base,bull,bear
```
## Knowledge Bases
| Reference | Purpose |
|-----------|---------|
| `references/financial-ratios-guide.md` | Ratio formulas, interpretation, industry benchmarks |
| `references/valuation-methodology.md` | DCF methodology, WACC, terminal value, comps |
| `references/forecasting-best-practices.md` | Driver-based forecasting, rolling forecasts, accuracy |
## Templates
| Template | Purpose |
|----------|---------|
| `assets/variance_report_template.md` | Budget variance report template |
| `assets/dcf_analysis_template.md` | DCF valuation analysis template |
| `assets/forecast_report_template.md` | Revenue forecast report template |
## Industry Adaptations
### SaaS
- Key metrics: MRR, ARR, CAC, LTV, Churn Rate, Net Revenue Retention
- Revenue recognition: subscription-based, deferred revenue tracking
- Unit economics: CAC payback period, LTV/CAC ratio
- Cohort analysis for retention and expansion revenue
### Retail
- Key metrics: Same-store sales, Revenue per square foot, Inventory turnover
- Seasonal adjustment factors in forecasting
- Gross margin analysis by product category
- Working capital cycle optimization
### Manufacturing
- Key metrics: Gross margin by product line, Capacity utilization, COGS breakdown
- Bill of materials cost analysis
- Absorption vs variable costing impact
- Capital expenditure planning and ROI
### Financial Services
- Key metrics: Net Interest Margin, Efficiency Ratio, ROA, Tier 1 Capital
- Regulatory capital requirements
- Credit loss provisioning and reserves
- Fee income analysis and diversification
### Healthcare
- Key metrics: Revenue per patient, Payer mix, Days in A/R, Operating margin
- Reimbursement rate analysis by payer
- Case mix index impact on revenue
- Compliance cost allocation
## Key Metrics & Targets
| Metric | Target |
|--------|--------|
| Forecast accuracy (revenue) | +/-5% |
| Forecast accuracy (expenses) | +/-3% |
| Report delivery | 100% on time |
| Model documentation | Complete for all assumptions |
| Variance explanation | 100% of material variances |
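
The forecast-accuracy targets above can be checked mechanically. A minimal sketch; the +/-5% and +/-3% tolerances come from the table, while the function name and percentage-of-actuals error definition are assumptions:

```python
def within_forecast_tolerance(actual: float, forecast: float, tolerance: float) -> bool:
    # Absolute percentage error relative to actuals, compared to the target band.
    if actual == 0:
        return forecast == 0
    return abs(forecast - actual) / abs(actual) <= tolerance


# Revenue target: tolerance=0.05; expense target: tolerance=0.03
```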
## Input Data Format
All scripts accept JSON input files. See `assets/sample_financial_data.json` for the complete input schema covering all four tools.
## Dependencies
**None** - All scripts use Python standard library only (`math`, `statistics`, `json`, `argparse`, `datetime`). No numpy, pandas, or scipy required.


@@ -0,0 +1,184 @@
# DCF Valuation Analysis
## Report Header
| Field | Value |
|-------|-------|
| **Company** | [Company Name] |
| **Ticker** | [Ticker Symbol] |
| **Analysis Date** | [Date] |
| **Prepared By** | [Analyst Name] |
| **Current Share Price** | $[X] |
| **Shares Outstanding** | [X]M |
## Executive Summary
[2-3 sentence overview of the valuation conclusion, including the implied value range per share compared to the current market price, and whether the stock appears undervalued, fairly valued, or overvalued.]
### Valuation Summary
| Method | Enterprise Value | Equity Value | Value Per Share | vs Current Price |
|--------|-----------------|-------------|----------------|-----------------|
| DCF (Perpetuity Growth) | $[X]M | $[X]M | $[X] | [X]% |
| DCF (Exit Multiple) | $[X]M | $[X]M | $[X] | [X]% |
| Comparable Companies | $[X]M | $[X]M | $[X] | [X]% |
| **Blended Estimate** | **$[X]M** | **$[X]M** | **$[X]** | **[X]%** |
## Investment Thesis
[Summary of the investment case, including key strengths, risks, and catalysts.]
## Historical Financial Summary
| ($M) | FY-4 | FY-3 | FY-2 | FY-1 | LTM |
|------|------|------|------|------|-----|
| Revenue | [X] | [X] | [X] | [X] | [X] |
| Revenue Growth | [X]% | [X]% | [X]% | [X]% | [X]% |
| Gross Profit | [X] | [X] | [X] | [X] | [X] |
| Gross Margin | [X]% | [X]% | [X]% | [X]% | [X]% |
| EBITDA | [X] | [X] | [X] | [X] | [X] |
| EBITDA Margin | [X]% | [X]% | [X]% | [X]% | [X]% |
| Net Income | [X] | [X] | [X] | [X] | [X] |
| Free Cash Flow | [X] | [X] | [X] | [X] | [X] |
## WACC Calculation
### Cost of Equity (CAPM)
| Component | Value | Source |
|-----------|-------|--------|
| Risk-Free Rate | [X]% | [10-Year Treasury] |
| Equity Risk Premium | [X]% | [Damodaran / internal] |
| Beta (Levered) | [X] | [Bloomberg / regression] |
| Size Premium | [X]% | [Duff & Phelps] |
| Company-Specific Risk | [X]% | [Analyst judgment] |
| **Cost of Equity** | **[X]%** | |
### Cost of Debt
| Component | Value |
|-----------|-------|
| Pre-Tax Cost of Debt | [X]% |
| Tax Rate | [X]% |
| After-Tax Cost of Debt | [X]% |
### Capital Structure
| Component | Market Value ($M) | Weight |
|-----------|------------------|--------|
| Equity | [X] | [X]% |
| Debt | [X] | [X]% |
| **Total Capital** | **[X]** | **100%** |
### WACC Result: [X]%
## Revenue Projections
| ($M) | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|------|--------|--------|--------|--------|--------|
| Revenue | [X] | [X] | [X] | [X] | [X] |
| Growth Rate | [X]% | [X]% | [X]% | [X]% | [X]% |
**Key Revenue Assumptions:**
- [Assumption 1 with supporting rationale]
- [Assumption 2 with supporting rationale]
- [Assumption 3 with supporting rationale]
## Free Cash Flow Projections
| ($M) | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|------|--------|--------|--------|--------|--------|
| Revenue | [X] | [X] | [X] | [X] | [X] |
| EBIT | [X] | [X] | [X] | [X] | [X] |
| Taxes on EBIT | ([X]) | ([X]) | ([X]) | ([X]) | ([X]) |
| NOPAT | [X] | [X] | [X] | [X] | [X] |
| D&A | [X] | [X] | [X] | [X] | [X] |
| CapEx | ([X]) | ([X]) | ([X]) | ([X]) | ([X]) |
| Change in NWC | ([X]) | ([X]) | ([X]) | ([X]) | ([X]) |
| **Unlevered FCF** | **[X]** | **[X]** | **[X]** | **[X]** | **[X]** |
| FCF Margin | [X]% | [X]% | [X]% | [X]% | [X]% |
## Terminal Value
### Perpetuity Growth Method
| Component | Value |
|-----------|-------|
| Terminal FCF | $[X]M |
| Terminal Growth Rate | [X]% |
| WACC | [X]% |
| **Terminal Value** | **$[X]M** |
| TV as % of EV | [X]% |
### Exit Multiple Method
| Component | Value |
|-----------|-------|
| Terminal EBITDA | $[X]M |
| Exit EV/EBITDA Multiple | [X]x |
| **Terminal Value** | **$[X]M** |
| TV as % of EV | [X]% |
## Enterprise Value Bridge
| Component | Perpetuity Growth | Exit Multiple |
|-----------|------------------|---------------|
| PV of Projected FCFs | $[X]M | $[X]M |
| PV of Terminal Value | $[X]M | $[X]M |
| **Enterprise Value** | **$[X]M** | **$[X]M** |
| Less: Net Debt | ($[X]M) | ($[X]M) |
| Less: Minority Interest | ($[X]M) | ($[X]M) |
| **Equity Value** | **$[X]M** | **$[X]M** |
| Diluted Shares (M) | [X] | [X] |
| **Value Per Share** | **$[X]** | **$[X]** |
## Sensitivity Analysis
### WACC vs Terminal Growth Rate (Enterprise Value, $M)
| WACC \ Growth | [g-2]% | [g-1]% | [g]% | [g+1]% | [g+2]% |
|--------------|--------|--------|------|--------|--------|
| [WACC-2]% | [X] | [X] | [X] | [X] | [X] |
| [WACC-1]% | [X] | [X] | [X] | [X] | [X] |
| **[WACC]%** | [X] | [X] | **[X]** | [X] | [X] |
| [WACC+1]% | [X] | [X] | [X] | [X] | [X] |
| [WACC+2]% | [X] | [X] | [X] | [X] | [X] |
### Implied Share Price Range
| Scenario | Share Price | vs Current | Upside/Downside |
|----------|-----------|------------|----------------|
| Bear Case (WACC+2%, g-2%) | $[X] | [X]% | [X]% |
| Base Case | $[X] | [X]% | [X]% |
| Bull Case (WACC-2%, g+2%) | $[X] | [X]% | [X]% |
## Key Risks to Valuation
1. **[Risk 1]** - [Description and potential impact on value]
2. **[Risk 2]** - [Description and potential impact on value]
3. **[Risk 3]** - [Description and potential impact on value]
## Comparable Company Analysis
| Company | EV/Revenue | EV/EBITDA | P/E | Growth | Margin |
|---------|-----------|----------|-----|--------|--------|
| [Comp 1] | [X]x | [X]x | [X]x | [X]% | [X]% |
| [Comp 2] | [X]x | [X]x | [X]x | [X]% | [X]% |
| [Comp 3] | [X]x | [X]x | [X]x | [X]% | [X]% |
| [Comp 4] | [X]x | [X]x | [X]x | [X]% | [X]% |
| **Median** | **[X]x** | **[X]x** | **[X]x** | **[X]%** | **[X]%** |
| **[Target]** | **[X]x** | **[X]x** | **[X]x** | **[X]%** | **[X]%** |
## Conclusion and Recommendation
**Valuation Range:** $[Low] - $[High] per share
**Current Price:** $[X]
**Recommendation:** [Buy / Hold / Sell]
[Final paragraph with investment recommendation rationale, key upside catalysts, and primary risks to monitor.]
---
*Analysis generated using Financial Analyst Skill - DCF Valuation Model*


@@ -0,0 +1,161 @@
{
"_description": "Expected output structure for all 4 scripts. Values are illustrative to show data format.",
"ratio_calculator_output": {
"categories": {
"profitability": {
"roe": {
"value": 0.25,
"formula": "Net Income / Total Equity",
"name": "Return on Equity",
"interpretation": "Good - above average performance"
},
"roa": {
"value": 0.1375,
"formula": "Net Income / Total Assets",
"name": "Return on Assets",
"interpretation": "Excellent - significantly above peers"
},
"gross_margin": {
"value": 0.40,
"formula": "(Revenue - COGS) / Revenue",
"name": "Gross Margin",
"interpretation": "Acceptable - within normal range"
},
"operating_margin": {
"value": 0.16,
"formula": "Operating Income / Revenue",
"name": "Operating Margin",
"interpretation": "Good - above average performance"
},
"net_margin": {
"value": 0.11,
"formula": "Net Income / Revenue",
"name": "Net Margin",
"interpretation": "Good - above average performance"
}
},
"liquidity": {
"current_ratio": {"value": 1.875, "name": "Current Ratio"},
"quick_ratio": {"value": 1.4375, "name": "Quick Ratio"},
"cash_ratio": {"value": 0.625, "name": "Cash Ratio"}
},
"leverage": {
"debt_to_equity": {"value": 0.545, "name": "Debt-to-Equity Ratio"},
"interest_coverage": {"value": 6.67, "name": "Interest Coverage Ratio"},
"dscr": {"value": 2.50, "name": "Debt Service Coverage Ratio"}
},
"efficiency": {
"asset_turnover": {"value": 1.25, "name": "Asset Turnover"},
"inventory_turnover": {"value": 8.57, "name": "Inventory Turnover"},
"receivables_turnover": {"value": 8.33, "name": "Receivables Turnover"},
"dso": {"value": 43.8, "name": "Days Sales Outstanding"}
},
"valuation": {
"pe_ratio": {"value": 81.82, "name": "Price-to-Earnings Ratio"},
"pb_ratio": {"value": 20.45, "name": "Price-to-Book Ratio"},
"ps_ratio": {"value": 9.0, "name": "Price-to-Sales Ratio"},
"ev_ebitda": {"value": 45.7, "name": "EV/EBITDA"},
"peg_ratio": {"value": 6.82, "name": "PEG Ratio"}
}
}
},
"dcf_valuation_output": {
"wacc": 0.085,
"projected_revenue": [55000000, 59950000, 64746000, 69278220, 73434953],
"projected_fcf": [6600000, 7793500, 8416980, 9698951, 10280893],
"terminal_value": {
"perpetuity_growth": 175382225,
"exit_multiple": 176243484
},
"enterprise_value": {
"perpetuity_growth": 149500000,
"exit_multiple": 150100000
},
"equity_value": {
"perpetuity_growth": 142500000,
"exit_multiple": 143100000
},
"value_per_share": {
"perpetuity_growth": 14.25,
"exit_multiple": 14.31
},
"sensitivity_analysis": {
"wacc_values": [0.065, 0.075, 0.085, 0.095, 0.105],
"growth_values": [0.015, 0.020, 0.025, 0.030, 0.035],
"enterprise_value_table": "5x5 nested list of enterprise values",
"share_price_table": "5x5 nested list of share prices"
}
},
"budget_variance_output": {
"executive_summary": {
"period": "Q4 2025",
"company": "Acme Corp",
"total_line_items": 10,
"material_variances_count": 3,
"favorable_count": 4,
"unfavorable_count": 6,
"revenue": {
"actual": 15700000,
"budget": 15500000,
"variance_amount": 200000,
"variance_pct": 1.29
},
"expenses": {
"actual": 13255000,
"budget": 12520000,
"variance_amount": 735000,
"variance_pct": 5.87
},
"net_impact": -535000
},
"material_variances": [
{
"name": "Cost of Goods Sold",
"budget_variance_amount": 600000,
"budget_variance_pct": 8.33,
"favorability": "Unfavorable"
}
],
"department_summary": {
"Sales": {"total_variance": 0, "variance_pct": 0},
"Operations": {"total_variance": 0, "variance_pct": 0}
},
"category_summary": {
"Revenue": {"total_variance": 0, "variance_pct": 0},
"COGS": {"total_variance": 0, "variance_pct": 0}
}
},
"forecast_builder_output": {
"trend_analysis": {
"trend": {
"slope": 650000,
"intercept": 9500000,
"r_squared": 0.98,
"direction": "upward"
},
"average_growth_rate": 0.06,
"seasonality_index": [0.92, 0.97, 1.01, 1.10]
},
"scenario_comparison": {
"comparison": [
{"scenario": "base", "total_revenue": 185000000, "growth_rate": 0.08},
{"scenario": "bull", "total_revenue": 210000000, "growth_rate": 0.12},
{"scenario": "bear", "total_revenue": 165000000, "growth_rate": 0.05}
]
},
"rolling_cash_flow": {
"weeks": 13,
"opening_balance": 2500000,
"closing_balance": 2800000,
"total_inflows": 4200000,
"total_outflows": 3900000,
"minimum_balance": 2100000,
"minimum_balance_week": 4,
"cash_runway_weeks": 12
}
}
}


@@ -0,0 +1,177 @@
# Revenue Forecast Report
## Report Header
| Field | Value |
|-------|-------|
| **Company** | [Company Name] |
| **Forecast Period** | [Start] to [End] |
| **Prepared By** | [Analyst Name] |
| **Date** | [Report Date] |
| **Forecast Type** | [Driver-Based / Trend-Based / Blended] |
## Executive Summary
[2-3 sentence overview of the revenue forecast, key assumptions, and confidence level. Highlight the base case total revenue, expected growth rate, and any significant departures from prior forecast or budget.]
### Key Metrics at a Glance
| Metric | Value |
|--------|-------|
| Base Case Total Revenue | $[X]M |
| Expected Growth Rate | [X]% |
| Forecast Confidence | [High / Medium / Low] |
| Revenue Range (Bear to Bull) | $[X]M - $[X]M |
| Primary Revenue Driver | [Driver description] |
## Historical Trend Analysis
### Revenue Trend
| Period | Revenue | Growth Rate | Gross Margin |
|--------|---------|------------|-------------|
| [Q/Year-4] | $[X]M | - | [X]% |
| [Q/Year-3] | $[X]M | [X]% | [X]% |
| [Q/Year-2] | $[X]M | [X]% | [X]% |
| [Q/Year-1] | $[X]M | [X]% | [X]% |
| [Current] | $[X]M | [X]% | [X]% |
### Trend Statistics
| Metric | Value |
|--------|-------|
| Average Growth Rate | [X]% |
| Trend Direction | [Upward / Flat / Downward] |
| R-squared (fit quality) | [X] |
| Seasonality Detected | [Yes / No] |
## Revenue Drivers
### Primary Drivers
| Driver | Current Value | Projected Value | Growth |
|--------|-------------|-----------------|--------|
| [Units / Customers / etc.] | [X] | [X] | [X]% |
| [Price / ARPU / etc.] | $[X] | $[X] | [X]% |
| [Conversion / Retention] | [X]% | [X]% | [X]pp |
### Driver Assumptions
1. **[Driver 1]:** [Assumption and rationale]
2. **[Driver 2]:** [Assumption and rationale]
3. **[Driver 3]:** [Assumption and rationale]
## Scenario Comparison
### Summary
| Scenario | Total Revenue | Growth Rate | Op. Income | Gross Margin | Probability |
|----------|-------------|-------------|-----------|-------------|-------------|
| Bull | $[X]M | [X]% | $[X]M | [X]% | [X]% |
| **Base** | **$[X]M** | **[X]%** | **$[X]M** | **[X]%** | **[X]%** |
| Bear | $[X]M | [X]% | $[X]M | [X]% | [X]% |
### Scenario Assumptions
**Bull Case:**
- [Key assumption 1]
- [Key assumption 2]
- [Trigger: what conditions would cause this scenario]
**Base Case:**
- [Key assumption 1]
- [Key assumption 2]
**Bear Case:**
- [Key assumption 1]
- [Key assumption 2]
- [Trigger: what conditions would cause this scenario]
## Monthly/Quarterly Forecast Detail (Base Case)
| Period | Revenue | COGS | Gross Profit | OpEx | Op. Income |
|--------|---------|------|-------------|------|-----------|
| [Period 1] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Period 2] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Period 3] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Period 4] | $[X] | $[X] | $[X] | $[X] | $[X] |
| ... | ... | ... | ... | ... | ... |
| **Total** | **$[X]** | **$[X]** | **$[X]** | **$[X]** | **$[X]** |
## 13-Week Rolling Cash Flow
### Summary
| Metric | Value |
|--------|-------|
| Opening Cash Balance | $[X] |
| Projected Closing Balance | $[X] |
| Net Cash Change | $[X] |
| Minimum Cash Balance | $[X] (Week [N]) |
| Cash Runway | [N] weeks |
### Weekly Cash Flow Projection
| Week | Inflows | Outflows | Net Cash Flow | Closing Balance |
|------|---------|----------|--------------|----------------|
| 1 | $[X] | $[X] | $[X] | $[X] |
| 2 | $[X] | $[X] | $[X] | $[X] |
| 3 | $[X] | $[X] | $[X] | $[X] |
| ... | ... | ... | ... | ... |
| 13 | $[X] | $[X] | $[X] | $[X] |
### Cash Flow Notes
- **Week [N]:** [Description of any significant one-time items]
- **Week [N]:** [Description of any significant one-time items]
## Forecast Accuracy Tracking
### vs Prior Forecast
| Metric | Prior Forecast | Current Forecast | Change |
|--------|---------------|-----------------|--------|
| Revenue | $[X]M | $[X]M | [X]% |
| Growth Rate | [X]% | [X]% | [X]pp |
| Gross Margin | [X]% | [X]% | [X]pp |
### Historical Forecast Accuracy (MAPE)
| Period | Forecast | Actual | Error | MAPE |
|--------|----------|--------|-------|------|
| [Period-3] | $[X] | $[X] | $[X] | [X]% |
| [Period-2] | $[X] | $[X] | $[X] | [X]% |
| [Period-1] | $[X] | $[X] | $[X] | [X]% |
| **Average MAPE** | | | | **[X]%** |
## Key Risks and Assumptions
### Upside Risks
1. [Risk/opportunity with quantified potential impact]
2. [Risk/opportunity with quantified potential impact]
### Downside Risks
1. [Risk with quantified potential impact]
2. [Risk with quantified potential impact]
### Critical Assumptions
1. [Assumption that if wrong would materially change the forecast]
2. [Assumption that if wrong would materially change the forecast]
## Recommendations
1. **[Recommendation 1]:** [Specific action with expected impact]
2. **[Recommendation 2]:** [Specific action with expected impact]
3. **[Recommendation 3]:** [Specific action with expected impact]
## Next Steps
| # | Action | Owner | Due Date |
|---|--------|-------|----------|
| 1 | [Action item] | [Name] | [Date] |
| 2 | [Action item] | [Name] | [Date] |
| 3 | [Action item] | [Name] | [Date] |
---
*Report generated using Financial Analyst Skill - Forecast Builder*

View File

@@ -0,0 +1,219 @@
{
"_description": "Sample financial data covering all 4 scripts: ratio_calculator, dcf_valuation, budget_variance_analyzer, and forecast_builder",
"ratio_analysis": {
"income_statement": {
"revenue": 50000000,
"cost_of_goods_sold": 30000000,
"operating_income": 8000000,
"ebitda": 10000000,
"net_income": 5500000,
"interest_expense": 1200000
},
"balance_sheet": {
"total_assets": 40000000,
"current_assets": 15000000,
"cash_and_equivalents": 5000000,
"accounts_receivable": 6000000,
"inventory": 3500000,
"total_equity": 22000000,
"total_debt": 12000000,
"current_liabilities": 8000000
},
"cash_flow": {
"operating_cash_flow": 7500000,
"total_debt_service": 3000000
},
"market_data": {
"share_price": 45.00,
"shares_outstanding": 10000000,
"market_cap": 450000000,
"earnings_growth_rate": 0.12
}
},
"dcf_valuation": {
"historical": {
"revenue": [38000000, 42000000, 45000000, 48000000, 50000000],
"net_income": [3800000, 4200000, 4500000, 5000000, 5500000],
"net_debt": 7000000,
"shares_outstanding": 10000000
},
"assumptions": {
"projection_years": 5,
"revenue_growth_rates": [0.10, 0.09, 0.08, 0.07, 0.06],
"fcf_margins": [0.12, 0.13, 0.13, 0.14, 0.14],
"default_revenue_growth": 0.05,
"default_fcf_margin": 0.10,
"terminal_growth_rate": 0.025,
"terminal_ebitda_margin": 0.20,
"exit_ev_ebitda_multiple": 12.0,
"wacc_inputs": {
"risk_free_rate": 0.04,
"equity_risk_premium": 0.06,
"beta": 1.1,
"cost_of_debt": 0.055,
"tax_rate": 0.25,
"debt_weight": 0.30,
"equity_weight": 0.70
}
}
},
"budget_variance": {
"company": "Acme Corp",
"period": "Q4 2025",
"line_items": [
{
"name": "Product Revenue",
"type": "revenue",
"department": "Sales",
"category": "Revenue",
"actual": 12500000,
"budget": 12000000,
"prior_year": 10800000
},
{
"name": "Service Revenue",
"type": "revenue",
"department": "Sales",
"category": "Revenue",
"actual": 3200000,
"budget": 3500000,
"prior_year": 2900000
},
{
"name": "Cost of Goods Sold",
"type": "expense",
"department": "Operations",
"category": "COGS",
"actual": 7800000,
"budget": 7200000,
"prior_year": 6700000
},
{
"name": "Salaries & Wages",
"type": "expense",
"department": "Human Resources",
"category": "Personnel",
"actual": 2100000,
"budget": 2200000,
"prior_year": 1950000
},
{
"name": "Marketing & Advertising",
"type": "expense",
"department": "Marketing",
"category": "Sales & Marketing",
"actual": 850000,
"budget": 750000,
"prior_year": 680000
},
{
"name": "Software & Technology",
"type": "expense",
"department": "Engineering",
"category": "Technology",
"actual": 420000,
"budget": 400000,
"prior_year": 350000
},
{
"name": "Office & Facilities",
"type": "expense",
"department": "Operations",
"category": "G&A",
"actual": 180000,
"budget": 200000,
"prior_year": 175000
},
{
"name": "Travel & Entertainment",
"type": "expense",
"department": "Sales",
"category": "Sales & Marketing",
"actual": 95000,
"budget": 120000,
"prior_year": 88000
},
{
"name": "Professional Services",
"type": "expense",
"department": "Finance",
"category": "G&A",
"actual": 310000,
"budget": 250000,
"prior_year": 220000
},
{
"name": "R&D Expenses",
"type": "expense",
"department": "Engineering",
"category": "R&D",
"actual": 1500000,
"budget": 1400000,
"prior_year": 1200000
}
]
},
"forecast": {
"historical_periods": [
{"period": "Q1 2024", "revenue": 10500000, "gross_profit": 4200000, "operating_income": 1575000},
{"period": "Q2 2024", "revenue": 11200000, "gross_profit": 4480000, "operating_income": 1680000},
{"period": "Q3 2024", "revenue": 11800000, "gross_profit": 4720000, "operating_income": 1770000},
{"period": "Q4 2024", "revenue": 12500000, "gross_profit": 5000000, "operating_income": 1875000},
{"period": "Q1 2025", "revenue": 12800000, "gross_profit": 5120000, "operating_income": 1920000},
{"period": "Q2 2025", "revenue": 13500000, "gross_profit": 5400000, "operating_income": 2025000},
{"period": "Q3 2025", "revenue": 14100000, "gross_profit": 5640000, "operating_income": 2115000},
{"period": "Q4 2025", "revenue": 15700000, "gross_profit": 6280000, "operating_income": 2355000}
],
"drivers": {
"units": {
"base_units": 5000,
"growth_rate": 0.04
},
"pricing": {
"base_price": 2800,
"annual_increase": 0.03
}
},
"assumptions": {
"revenue_growth_rate": 0.08,
"gross_margin": 0.40,
"opex_pct_revenue": 0.25,
"forecast_periods": 12
},
"scenarios": {
"base": {
"growth_adjustment": 0.0,
"margin_adjustment": 0.0
},
"bull": {
"growth_adjustment": 0.04,
"margin_adjustment": 0.03
},
"bear": {
"growth_adjustment": -0.03,
"margin_adjustment": -0.02
}
},
"cash_flow_inputs": {
"opening_cash_balance": 2500000,
"weekly_revenue": 350000,
"collection_rate": 0.85,
"collection_lag_weeks": 2,
"weekly_payroll": 160000,
"weekly_rent": 15000,
"weekly_operating": 45000,
"weekly_other": 20000,
"one_time_items": [
{"week": 3, "amount": -250000, "description": "Annual insurance premium"},
{"week": 6, "amount": 500000, "description": "Customer prepayment"},
{"week": 9, "amount": -180000, "description": "Equipment purchase"},
{"week": 13, "amount": -75000, "description": "Quarterly tax payment"}
]
},
"forecast_periods": 12
}
}

View File

@@ -0,0 +1,122 @@
# Budget Variance Report
## Report Header
| Field | Value |
|-------|-------|
| **Company** | [Company Name] |
| **Period** | [Reporting Period] |
| **Prepared By** | [Analyst Name] |
| **Date** | [Report Date] |
| **Materiality Threshold** | [X]% or $[Y]K |
## Executive Summary
[2-3 sentence overview of overall performance vs budget, highlighting whether the company is tracking ahead or behind plan and the primary drivers of variance.]
### Key Metrics
| Metric | Actual | Budget | Variance ($) | Variance (%) | Status |
|--------|--------|--------|-------------|-------------|--------|
| Total Revenue | $[X] | $[X] | $[X] | [X]% | [Fav/Unfav] |
| Total Expenses | $[X] | $[X] | $[X] | [X]% | [Fav/Unfav] |
| Net Income | $[X] | $[X] | $[X] | [X]% | [Fav/Unfav] |
| Operating Margin | [X]% | [X]% | [X]pp | - | [Fav/Unfav] |
## Material Variances
### [Variance Item 1 - e.g., Product Revenue]
| | Actual | Budget | Variance ($) | Variance (%) |
|---|--------|--------|-------------|-------------|
| Amount | $[X] | $[X] | $[X] | [X]% |
**Root Cause:** [Detailed explanation of why this variance occurred]
**Impact:** [Quantified impact on profitability and cash flow]
**Corrective Action:** [Specific steps being taken to address the variance]
**Responsible:** [Owner] | **Target Date:** [Date]
---
### [Variance Item 2]
| | Actual | Budget | Variance ($) | Variance (%) |
|---|--------|--------|-------------|-------------|
| Amount | $[X] | $[X] | $[X] | [X]% |
**Root Cause:** [Explanation]
**Impact:** [Impact]
**Corrective Action:** [Action items]
**Responsible:** [Owner] | **Target Date:** [Date]
---
## Department Performance
| Department | Actual | Budget | Variance ($) | Variance (%) | Favorable Items | Unfavorable Items |
|-----------|--------|--------|-------------|-------------|-----------------|-------------------|
| Sales | $[X] | $[X] | $[X] | [X]% | [N] | [N] |
| Operations | $[X] | $[X] | $[X] | [X]% | [N] | [N] |
| Marketing | $[X] | $[X] | $[X] | [X]% | [N] | [N] |
| Engineering | $[X] | $[X] | $[X] | [X]% | [N] | [N] |
| Finance | $[X] | $[X] | $[X] | [X]% | [N] | [N] |
| HR | $[X] | $[X] | $[X] | [X]% | [N] | [N] |
## Category Breakdown
| Category | Actual | Budget | Variance ($) | Variance (%) |
|----------|--------|--------|-------------|-------------|
| Revenue | $[X] | $[X] | $[X] | [X]% |
| COGS | $[X] | $[X] | $[X] | [X]% |
| Personnel | $[X] | $[X] | $[X] | [X]% |
| Sales & Marketing | $[X] | $[X] | $[X] | [X]% |
| Technology | $[X] | $[X] | $[X] | [X]% |
| G&A | $[X] | $[X] | $[X] | [X]% |
| R&D | $[X] | $[X] | $[X] | [X]% |
## Prior Year Comparison
| Metric | Current Actual | Prior Year | YoY Change ($) | YoY Change (%) |
|--------|---------------|-----------|---------------|---------------|
| Revenue | $[X] | $[X] | $[X] | [X]% |
| Gross Profit | $[X] | $[X] | $[X] | [X]% |
| Operating Income | $[X] | $[X] | $[X] | [X]% |
| Net Income | $[X] | $[X] | $[X] | [X]% |
## Risks and Opportunities
### Risks
1. [Risk description with quantified impact]
2. [Risk description with quantified impact]
### Opportunities
1. [Opportunity description with quantified upside]
2. [Opportunity description with quantified upside]
## Forecast Impact
Based on current variances, the full-year forecast is adjusted as follows:
| Metric | Original FY Forecast | Revised FY Forecast | Change |
|--------|---------------------|--------------------|---------|
| Revenue | $[X] | $[X] | $[X] |
| EBITDA | $[X] | $[X] | $[X] |
| Net Income | $[X] | $[X] | $[X] |
## Action Items
| # | Action | Owner | Due Date | Status |
|---|--------|-------|----------|--------|
| 1 | [Action description] | [Name] | [Date] | [Open/In Progress/Complete] |
| 2 | [Action description] | [Name] | [Date] | [Open/In Progress/Complete] |
| 3 | [Action description] | [Name] | [Date] | [Open/In Progress/Complete] |
---
*Report generated using Financial Analyst Skill - Budget Variance Analyzer*

View File

@@ -0,0 +1,376 @@
# Financial Ratios Guide
Comprehensive reference for financial ratio analysis covering formulas, interpretation, and industry benchmarks across five categories.
## 1. Profitability Ratios
Measure a company's ability to generate earnings relative to revenue, assets, or equity.
### Return on Equity (ROE)
**Formula:** Net Income / Total Shareholders' Equity
**Interpretation:**
- Measures how effectively management uses equity to generate profits
- Higher ROE indicates more efficient use of equity capital
- Compare against cost of equity - ROE should exceed it
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Below Average | < 8% |
| Acceptable | 8% - 15% |
| Good | 15% - 25% |
| Excellent | > 25% |
**Caveats:** High leverage can inflate ROE. Use DuPont decomposition (ROE = Margin x Turnover x Leverage) for deeper analysis.
### Return on Assets (ROA)
**Formula:** Net Income / Total Assets
**Interpretation:**
- Measures how efficiently assets generate profit
- Asset-light businesses naturally have higher ROA
- Compare within industry only
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Below Average | < 3% |
| Acceptable | 3% - 6% |
| Good | 6% - 12% |
| Excellent | > 12% |
### Gross Margin
**Formula:** (Revenue - COGS) / Revenue
**Interpretation:**
- Measures production efficiency and pricing power
- Declining gross margin may signal competitive pressure or cost inflation
- Critical for evaluating business model sustainability
**Benchmarks by Industry:**
| Industry | Typical Range |
|----------|--------------|
| Software/SaaS | 70% - 85% |
| Financial Services | 50% - 70% |
| Retail | 25% - 45% |
| Manufacturing | 20% - 40% |
| Grocery | 25% - 30% |
### Operating Margin
**Formula:** Operating Income / Revenue
**Interpretation:**
- Measures operational efficiency after all operating expenses
- Excludes interest and taxes for better operational comparison
- Indicates management effectiveness in controlling costs
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Below Average | < 5% |
| Acceptable | 5% - 15% |
| Good | 15% - 25% |
| Excellent | > 25% |
### Net Margin
**Formula:** Net Income / Revenue
**Interpretation:**
- Bottom-line profitability after all expenses
- Affected by tax strategy, capital structure, and one-time items
- Most comprehensive profitability measure
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Below Average | < 3% |
| Acceptable | 3% - 10% |
| Good | 10% - 20% |
| Excellent | > 20% |
## 2. Liquidity Ratios
Measure a company's ability to meet short-term obligations.
### Current Ratio
**Formula:** Current Assets / Current Liabilities
**Interpretation:**
- Measures short-term solvency
- Too high may indicate inefficient asset use
- Too low signals potential liquidity risk
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Concern | < 1.0 |
| Acceptable | 1.0 - 1.5 |
| Healthy | 1.5 - 3.0 |
| Excessive | > 3.0 |
### Quick Ratio (Acid Test)
**Formula:** (Current Assets - Inventory) / Current Liabilities
**Interpretation:**
- More conservative than current ratio
- Excludes inventory (least liquid current asset)
- Critical for businesses with slow-moving inventory
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Concern | < 0.8 |
| Acceptable | 0.8 - 1.0 |
| Healthy | 1.0 - 2.0 |
| Excessive | > 2.0 |
### Cash Ratio
**Formula:** Cash & Equivalents / Current Liabilities
**Interpretation:**
- Most conservative liquidity measure
- Indicates ability to pay obligations with cash on hand
- Particularly important during credit crunches
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Low | < 0.2 |
| Adequate | 0.2 - 0.5 |
| Strong | 0.5 - 1.0 |
| Excessive | > 1.0 |
## 3. Leverage Ratios
Measure the extent to which a company uses debt financing.
### Debt-to-Equity Ratio
**Formula:** Total Debt / Total Shareholders' Equity
**Interpretation:**
- Measures financial leverage and risk
- Higher ratio = more reliance on debt financing
- Industry norms vary significantly (utilities vs tech)
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Conservative | < 0.3 |
| Moderate | 0.3 - 0.8 |
| Elevated | 0.8 - 2.0 |
| High Risk | > 2.0 |
### Interest Coverage Ratio
**Formula:** Operating Income (EBIT) / Interest Expense
**Interpretation:**
- Measures ability to service debt from operating earnings
- Below 1.5x is a red flag for lenders
- Critical for credit analysis
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Distressed | < 2.0 |
| Adequate | 2.0 - 5.0 |
| Strong | 5.0 - 10.0 |
| Very Strong | > 10.0 |
### Debt Service Coverage Ratio (DSCR)
**Formula:** Operating Cash Flow / Total Debt Service
**Interpretation:**
- Cash-based measure of debt servicing capacity
- Includes principal repayments (unlike interest coverage)
- Required by many loan covenants
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Default Risk | < 1.0 |
| Minimum | 1.0 - 1.5 |
| Comfortable | 1.5 - 2.5 |
| Strong | > 2.5 |
## 4. Efficiency Ratios
Measure how effectively a company uses its assets and manages operations.
### Asset Turnover
**Formula:** Revenue / Total Assets
**Interpretation:**
- Measures revenue generated per dollar of assets
- Higher indicates more efficient asset utilization
- Inversely related to profit margins (DuPont)
**Benchmarks:**
| Industry | Typical Range |
|----------|--------------|
| Retail | 2.0 - 3.0 |
| Manufacturing | 0.8 - 1.5 |
| Utilities | 0.3 - 0.5 |
| Technology | 0.5 - 1.0 |
### Inventory Turnover
**Formula:** COGS / Average Inventory
**Interpretation:**
- Measures how quickly inventory is sold
- Low turnover suggests overstock or obsolescence risk
- High turnover may indicate strong sales or thin inventory
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Slow | < 4x |
| Average | 4x - 8x |
| Efficient | 8x - 12x |
| Very Efficient | > 12x |
### Receivables Turnover
**Formula:** Revenue / Accounts Receivable
**Interpretation:**
- Measures efficiency of credit and collections
- Higher turnover means faster collections
- Monitor trends for credit policy changes
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Slow | < 6x |
| Average | 6x - 10x |
| Efficient | 10x - 15x |
| Very Efficient | > 15x |
### Days Sales Outstanding (DSO)
**Formula:** 365 / Receivables Turnover
**Interpretation:**
- Average days to collect payment after a sale
- Lower DSO = faster cash conversion
- Compare against payment terms
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Excellent | < 30 days |
| Good | 30 - 45 days |
| Acceptable | 45 - 60 days |
| Concern | > 60 days |
## 5. Valuation Ratios
Measure a company's market value relative to financial metrics.
### Price-to-Earnings (P/E) Ratio
**Formula:** Share Price / Earnings Per Share
**Interpretation:**
- Most widely used valuation metric
- High P/E suggests growth expectations or overvaluation
- Use trailing (TTM) and forward P/E for comparison
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Value | < 10x |
| Fair | 10x - 20x |
| Growth | 20x - 35x |
| Premium | > 35x |
### Price-to-Book (P/B) Ratio
**Formula:** Share Price / Book Value Per Share
**Interpretation:**
- Compares market value to accounting value
- Below 1.0 may indicate undervaluation or distress
- Most useful for asset-heavy industries
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Undervalued | < 1.0 |
| Fair | 1.0 - 2.5 |
| Premium | 2.5 - 5.0 |
| Rich | > 5.0 |
### Price-to-Sales (P/S) Ratio
**Formula:** Market Cap / Revenue
**Interpretation:**
- Useful for companies without positive earnings
- Compare within industry only
- Lower = potentially better value
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Value | < 1.0 |
| Fair | 1.0 - 3.0 |
| Growth | 3.0 - 8.0 |
| Premium | > 8.0 |
### EV/EBITDA
**Formula:** Enterprise Value / EBITDA
**Interpretation:**
- Capital-structure-neutral valuation metric
- Preferred for M&A analysis and leveraged buyouts
- More comparable across capital structures than P/E
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Value | < 6x |
| Fair | 6x - 12x |
| Growth | 12x - 20x |
| Premium | > 20x |
### PEG Ratio
**Formula:** P/E Ratio / Earnings Growth Rate (%)
**Interpretation:**
- Growth-adjusted P/E ratio
- PEG of 1.0 suggests fair valuation relative to growth
- Below 1.0 may indicate undervaluation
**Benchmarks:**
| Rating | Range |
|--------|-------|
| Undervalued | < 0.5 |
| Fair | 0.5 - 1.0 |
| Fully Valued | 1.0 - 2.0 |
| Overvalued | > 2.0 |
## Ratio Analysis Best Practices
1. **Compare within industry** - Ratios vary significantly across sectors
2. **Analyze trends** - A single period snapshot is insufficient; look at 3-5 year trends
3. **Use multiple ratios** - No single ratio tells the complete story
4. **Consider context** - Accounting policies, business cycle, and company stage matter
5. **DuPont decomposition** - Break ROE into margin, turnover, and leverage components
6. **Peer comparison** - Compare against direct competitors, not just broad benchmarks
7. **Watch for manipulation** - Revenue recognition changes, off-balance-sheet items, and one-time adjustments can distort ratios

View File

@@ -0,0 +1,279 @@
# Forecasting Best Practices
Comprehensive reference for financial forecasting including driver-based models, rolling forecasts, accuracy improvement techniques, and scenario planning.
## 1. Driver-Based Forecasting
### Overview
Driver-based forecasting models financial outcomes based on key business drivers rather than extrapolating from historical trends alone. This approach creates more transparent, actionable, and accurate forecasts.
### Identifying Key Drivers
**Revenue Drivers:**
| Business Model | Primary Drivers |
|---------------|----------------|
| SaaS/Subscription | Customers x ARPU x Retention Rate |
| E-commerce | Visitors x Conversion Rate x AOV |
| Manufacturing | Units x Price per Unit |
| Professional Services | Headcount x Utilization x Bill Rate |
| Retail | Stores x Revenue per Store (or sqft) |
| Marketplace | GMV x Take Rate |
**Cost Drivers:**
| Category | Common Drivers |
|----------|---------------|
| COGS | Revenue x (1 - Gross Margin) or Units x Unit Cost |
| Headcount Costs | Employees x Average Compensation x (1 + Benefits Rate) |
| Sales & Marketing | Revenue x S&M % or CAC x New Customers |
| R&D | Engineering Headcount x Avg Salary |
| G&A | Headcount-based + fixed costs |
| CapEx | Revenue x CapEx Intensity or Project-based |
### Building a Driver-Based Model
**Step 1: Map the value chain**
- Revenue = f(volume drivers, pricing drivers, mix drivers)
- Costs = f(variable drivers, fixed components, step functions)
**Step 2: Establish driver relationships**
- Linear: Revenue = Units x Price
- Non-linear: Revenue = Base x (1 + Growth Rate)^t
- Step function: Facilities costs that jump at capacity thresholds
**Step 3: Validate driver assumptions**
- Compare driver values to historical actuals
- Benchmark against industry data
- Stress-test extreme values
**Step 4: Build sensitivity**
- Identify which drivers have the largest impact on output
- Quantify the range of reasonable values for each driver
- Create scenario combinations
### Driver Sensitivity Matrix
Rank drivers by impact and uncertainty:
| | High Impact | Low Impact |
|---|-----------|-----------|
| **High Uncertainty** | Model these carefully, run scenarios | Monitor but don't over-model |
| **Low Uncertainty** | Get these right; high accuracy needed | Use simple assumptions |
## 2. Rolling Forecasts
### What Is a Rolling Forecast?
A rolling forecast continuously extends the forecast horizon as each period closes. Unlike a static annual budget, a rolling forecast always looks forward the same number of periods (typically 12-18 months).
### Rolling Forecast vs Annual Budget
| Feature | Annual Budget | Rolling Forecast |
|---------|--------------|-----------------|
| Time Horizon | Fixed (Jan-Dec) | Rolling (12-18 months) |
| Update Frequency | Once per year | Monthly or quarterly |
| Detail Level | Very detailed | Driver-level |
| Preparation Time | 3-6 months | 2-5 days per cycle |
| Relevance | Declines over time | Stays current |
| Flexibility | Rigid | Adaptive |
### Implementation Steps
1. **Select the horizon** - 12 months rolling is most common (some use 18 months for CapEx planning)
2. **Define update cadence** - Monthly for volatile businesses; quarterly for stable ones
3. **Choose the right detail** - Driver-level, not line-item detail
4. **Automate data feeds** - Reduce manual effort per cycle
5. **Separate actuals from forecast** - Clear delineation between reported and projected periods
6. **Track forecast accuracy** - Measure MAPE (Mean Absolute Percentage Error) over time
### 13-Week Cash Flow Forecast
A specialized rolling forecast for liquidity management:
**Structure:**
- Week-by-week cash inflows and outflows
- Opening and closing cash balances
- Minimum cash threshold alerts
**Key Components:**
| Inflows | Outflows |
|---------|----------|
| Customer collections (by aging) | Payroll (fixed cadence) |
| Other receivables | Rent / Lease payments |
| Asset sales | Vendor payments (by terms) |
| Financing proceeds | Debt service |
| Tax refunds | Tax payments |
| Other income | Capital expenditures |
**Collection Modeling:**
- Apply collection rates by customer segment or aging bucket
- Model DSO trends to project collection timing
- Account for seasonal patterns in payment behavior
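A hedged sketch of the 13-week structure described above: weekly collections are lagged revenue times a collection rate, outflows are a recurring total plus one-time events. The input values mirror the sample data shipped with this skill, not real figures, and the function is illustrative rather than the skill's `forecast_builder` tool.

```python
# Minimal 13-week rolling cash flow: lagged collections in, recurring
# plus one-time items out, tracking the closing balance each week.

def rolling_cash_flow(opening_balance, weekly_revenue, collection_rate,
                      lag_weeks, weekly_outflows, one_time=None, weeks=13):
    one_time = one_time or {}          # {week: amount}, + inflow / - outflow
    balance, rows = opening_balance, []
    for week in range(1, weeks + 1):
        inflow = weekly_revenue * collection_rate if week > lag_weeks else 0
        net = inflow - weekly_outflows + one_time.get(week, 0)
        balance += net
        rows.append({"week": week, "net": net, "closing": balance})
    return rows

rows = rolling_cash_flow(opening_balance=2_500_000, weekly_revenue=350_000,
                         collection_rate=0.85, lag_weeks=2,
                         weekly_outflows=240_000,
                         one_time={3: -250_000, 6: 500_000})
minimum = min(rows, key=lambda r: r["closing"])  # low-water mark for alerts
```

The low-water week is the output that matters most: it is what triggers the minimum-cash-threshold alerts noted in the structure above.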
## 3. Accuracy Improvement
### Measuring Forecast Accuracy
**Mean Absolute Percentage Error (MAPE):**
```
MAPE = (100 / n) x Sum over t of ( |Actual_t - Forecast_t| / |Actual_t| )
```
**Accuracy Benchmarks:**
| MAPE | Rating |
|------|--------|
| < 5% | Excellent |
| 5% - 10% | Good |
| 10% - 20% | Acceptable |
| > 20% | Needs improvement |
**Weighted MAPE (WMAPE):**
Use when line items vary significantly in magnitude - weights errors by actual values.
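Both accuracy measures are a few lines of standard-library Python. This is a straightforward sketch of the definitions above; periods with a zero actual are skipped in MAPE to avoid division by zero.

```python
# MAPE treats every line item equally; WMAPE weights errors by the size
# of the actuals, so large line items dominate.

def mape(actuals, forecasts):
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs) * 100

def wmape(actuals, forecasts):
    return (sum(abs(a - f) for a, f in zip(actuals, forecasts))
            / sum(abs(a) for a in actuals) * 100)

actuals = [12_500_000, 3_200_000, 7_800_000]
forecasts = [12_000_000, 3_500_000, 7_200_000]
# WMAPE comes in below MAPE here because the largest item has the
# smallest percentage error.
```

Tracking both over time surfaces the case WMAPE exists for: a forecast that nails the big numbers but misses many small ones.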
### Techniques to Improve Accuracy
**1. Bias Detection and Correction**
- Track directional bias (consistently over or under forecasting)
- Calculate mean signed error to detect systematic bias
- Adjust driver assumptions to correct persistent bias
**2. Variance Analysis Loop**
- After each period closes, compare actual vs forecast
- Identify root causes of significant variances
- Update driver assumptions based on learnings
- Document what changed and why
**3. Ensemble Approach**
- Combine multiple forecasting methods
- Blend statistical (trend) with judgmental (management input)
- Weight methods by their historical accuracy
**4. Granularity Optimization**
- Forecast at the right level of detail - not too aggregated, not too granular
- Product/segment level usually more accurate than single top-line
- Aggregate bottom-up forecasts for total, then adjust
**5. Leading Indicators**
- Identify metrics that predict financial outcomes 1-3 months ahead
- Pipeline/bookings predict revenue
- Hiring plans predict headcount costs
- Customer churn signals predict retention revenue
### Common Accuracy Killers
1. **Anchoring bias** - Over-relying on last year's numbers
2. **Optimism bias** - Systematic overestimation of growth
3. **Lack of accountability** - No one tracks forecast vs actual
4. **Stale assumptions** - Not updating for market changes
5. **Missing data** - Forecasting without key driver inputs
6. **Over-precision** - False precision in uncertain environments
## 4. Scenario Planning
### Three-Scenario Framework
| Scenario | Description | Probability |
|----------|-------------|-------------|
| **Base Case** | Most likely outcome based on current trajectory | 50-60% |
| **Bull Case** | Favorable conditions, upside realization | 15-25% |
| **Bear Case** | Adverse conditions, downside risks | 15-25% |
### Scenario Construction
**Base Case:**
- Continuation of current trends
- Management's operational plan
- Market consensus assumptions
- Normal competitive dynamics
**Bull Case (apply selectively, not uniformly):**
- Faster customer acquisition or market adoption
- Successful product launch or expansion
- Favorable macro conditions
- Competitor weakness or exit
- Margin expansion from operating leverage
**Bear Case (be realistic, not catastrophic):**
- Slower growth or market contraction
- Increased competition or pricing pressure
- Key customer or contract loss
- Supply chain disruption
- Regulatory headwinds
### Scenario Variables
Map each scenario to specific driver values:
| Driver | Bear | Base | Bull |
|--------|------|------|------|
| Revenue Growth | +2% | +8% | +15% |
| Gross Margin | 35% | 40% | 43% |
| Customer Churn | 8% | 5% | 3% |
| New Customers/Month | 50 | 100 | 180 |
| Price Increase | 0% | 3% | 5% |
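Applying a scenario table like the one above is mechanical once each scenario is expressed as adjustments to the base drivers. The sketch below mirrors the `growth_adjustment` / `margin_adjustment` fields in this skill's sample data; the function name and inputs are illustrative.

```python
# Illustrative scenario runner: each scenario shifts the base growth and
# margin assumptions, then compounds revenue over the forecast horizon.

def run_scenarios(base_revenue, base_growth, base_margin,
                  scenarios, periods=4):
    results = {}
    for name, adj in scenarios.items():
        growth = base_growth + adj["growth_adjustment"]
        margin = base_margin + adj["margin_adjustment"]
        revenue = sum(base_revenue * (1 + growth) ** t
                      for t in range(1, periods + 1))
        results[name] = {"total_revenue": revenue,
                         "gross_profit": revenue * margin}
    return results

results = run_scenarios(
    base_revenue=15_700_000, base_growth=0.08, base_margin=0.40,
    scenarios={
        "base": {"growth_adjustment": 0.0, "margin_adjustment": 0.0},
        "bull": {"growth_adjustment": 0.04, "margin_adjustment": 0.03},
        "bear": {"growth_adjustment": -0.03, "margin_adjustment": -0.02},
    })
```

Keeping scenarios as driver deltas rather than separate models guarantees all three cases stay comparable line by line.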
### Presenting Scenarios
1. **Show the range** - Management needs to see the potential outcomes
2. **Quantify the gap** - Dollar impact of bull vs bear on key metrics
3. **Identify triggers** - What conditions would cause each scenario
4. **Define actions** - What levers to pull in each scenario
5. **Assign probabilities** - Not all scenarios are equally likely
## 5. Forecast Communication
### Stakeholder Needs
| Audience | Needs |
|----------|-------|
| Board | High-level scenarios, key risks, strategic implications |
| CEO/CFO | Detailed drivers, variance explanations, action items |
| Department Heads | Their specific budget vs forecast, headcount plans |
| Investors | Revenue guidance, margin trajectory, capital allocation |
| Operations | Weekly/monthly targets, resource requirements |
### Presentation Framework
1. **Executive summary** - Key metrics, direction of travel, confidence level
2. **Variance bridge** - Walk from budget/prior forecast to current forecast
3. **Driver analysis** - What changed and why
4. **Scenario comparison** - Range of outcomes
5. **Key risks and opportunities** - What could change the forecast
6. **Action items** - Decisions needed based on forecast
### Forecast Cadence
| Activity | Frequency | Time Required |
|----------|-----------|--------------|
| 13-week cash flow update | Weekly | 1-2 hours |
| Rolling forecast update | Monthly | 1-2 days |
| Full reforecast | Quarterly | 3-5 days |
| Annual budget/plan | Annually | 4-8 weeks |
| Board reporting | Quarterly | 2-3 days |
## 6. Industry-Specific Considerations
### SaaS Metrics in Forecasting
- **MRR/ARR decomposition:** New, expansion, contraction, churn
- **Cohort-based forecasting:** Forecast by customer cohort for retention accuracy
- **Rule of 40:** Revenue growth % + Profit margin % should exceed 40%
- **Net Revenue Retention:** Target > 110% for healthy SaaS
- **CAC Payback:** Should be < 18 months
### Retail Forecasting
- **Same-store sales growth** as primary organic growth metric
- **Seasonal decomposition** for accurate monthly/weekly forecasts
- **Markdown optimization** impact on gross margin
- **Inventory turns** drive working capital forecasts
### Manufacturing Forecasting
- **Order backlog** as a leading indicator
- **Capacity constraints** creating step-function cost increases
- **Raw material price forecasts** for COGS
- **Maintenance CapEx vs growth CapEx** distinction
- **Utilization rates** driving unit cost projections

View File

@@ -0,0 +1,246 @@
# Valuation Methodology Guide
Comprehensive reference for business valuation approaches including DCF analysis, comparable company analysis, and precedent transactions.
## 1. Discounted Cash Flow (DCF) Methodology
### Overview
DCF is an intrinsic valuation method that estimates the present value of a company's expected future free cash flows, discounted at an appropriate rate reflecting the risk of those cash flows.
**Core Principle:** The value of a business equals the present value of all future cash flows it will generate.
**Formula:**
```
Enterprise Value = Sum of [FCF_t / (1 + WACC)^t] + Terminal Value / (1 + WACC)^n
```
Where:
- FCF_t = Free Cash Flow in year t
- WACC = Weighted Average Cost of Capital
- n = number of projection years
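The formula above can be sketched directly in code (inputs are illustrative, not from any real company):

```python
def enterprise_value(fcfs: list, wacc: float, terminal_value: float) -> float:
    """PV of explicit-period FCFs plus the discounted terminal value."""
    pv_fcf = sum(fcf / (1 + wacc) ** t for t, fcf in enumerate(fcfs, start=1))
    pv_tv = terminal_value / (1 + wacc) ** len(fcfs)
    return pv_fcf + pv_tv

# Illustrative: 5 years of FCF growing ~10%, 10% WACC, $1,500 terminal value
print(round(enterprise_value([100, 110, 121, 133, 146], 0.10, 1500), 1))  # 1385.6
```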
### Step 1: Historical Analysis
Before projecting, analyze 3-5 years of historical financials:
- **Revenue growth rates** - Identify organic vs acquisition-driven growth
- **Margin trends** - Gross, operating, and net margin trajectories
- **Capital intensity** - CapEx as % of revenue
- **Working capital** - Cash conversion cycle trends
- **Free cash flow conversion** - FCF / Net Income ratio
### Step 2: Revenue Projections
**Approaches:**
1. **Top-down:** Market size x Market share x Pricing
2. **Bottom-up:** Units x Price, or Customers x ARPU
3. **Growth rate extrapolation:** Historical growth with decay
**Revenue Projection Best Practices:**
- Use 5-7 year explicit projection period
- Growth should converge toward GDP growth by terminal year
- Support assumptions with market data and management guidance
- Model revenue by segment/product line when possible
### Step 3: Free Cash Flow Calculation
**Unlevered Free Cash Flow (UFCF):**
```
UFCF = EBIT x (1 - Tax Rate)
+ Depreciation & Amortization
- Capital Expenditures
- Changes in Net Working Capital
```
**Key Drivers:**
- Operating margin trajectory
- CapEx as % of revenue (maintenance vs growth)
- Working capital requirements (DSO, DIO, DPO)
- Tax rate (effective vs marginal)
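The UFCF build above is a straight walk down from EBIT; a minimal sketch with illustrative numbers:

```python
def unlevered_fcf(ebit: float, tax_rate: float, d_and_a: float,
                  capex: float, delta_nwc: float) -> float:
    """UFCF = EBIT x (1 - tax) + D&A - CapEx - change in net working capital."""
    return ebit * (1 - tax_rate) + d_and_a - capex - delta_nwc

# Illustrative: $200 EBIT, 25% tax, $40 D&A, $50 CapEx, $10 working capital build
print(unlevered_fcf(200, 0.25, 40, 50, 10))  # 130.0
```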
### Step 4: WACC Calculation
**Weighted Average Cost of Capital:**
```
WACC = (E/V x Re) + (D/V x Rd x (1 - T))
```
Where:
- E/V = Equity weight (market value)
- D/V = Debt weight (market value)
- Re = Cost of equity
- Rd = Cost of debt (pre-tax)
- T = Marginal tax rate
#### Cost of Equity (CAPM)
```
Re = Rf + Beta x (Rm - Rf) + Size Premium + Company-Specific Risk
```
| Component | Description | Typical Range |
|-----------|-------------|---------------|
| Risk-Free Rate (Rf) | 10-year Treasury yield | 3.5% - 5.0% |
| Equity Risk Premium (ERP) | Market return above risk-free | 5.0% - 7.0% |
| Beta | Systematic risk relative to market | 0.5 - 2.0 |
| Size Premium | Small-cap additional risk | 0% - 5% |
| Company-Specific Risk | Unique risk factors | 0% - 5% |
**Beta Estimation:**
- Use 2-5 year weekly returns against broad market index
- Unlevered betas for comparability, then re-lever to target capital structure
- Consider industry median beta for stability
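Putting the CAPM and WACC formulas together, a minimal sketch (inputs are illustrative midpoints of the ranges above):

```python
def cost_of_equity(rf: float, beta: float, erp: float,
                   size_premium: float = 0.0, specific_risk: float = 0.0) -> float:
    """CAPM cost of equity with optional size and company-specific premia."""
    return rf + beta * erp + size_premium + specific_risk

def wacc(equity_weight: float, re: float, debt_weight: float,
         rd: float, tax_rate: float) -> float:
    """WACC = E/V x Re + D/V x Rd x (1 - T)."""
    return equity_weight * re + debt_weight * rd * (1 - tax_rate)

re = cost_of_equity(rf=0.04, beta=1.2, erp=0.06)       # 0.112
print(round(wacc(0.70, re, 0.30, 0.06, 0.25), 4))      # 0.0919
```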
#### Cost of Debt
```
Rd = Yield on comparable-maturity corporate bonds
OR
Rd = Risk-Free Rate + Credit Spread
```
**Credit Spread by Rating:**
| Rating | Typical Spread |
|--------|---------------|
| AAA | 0.5% - 1.0% |
| AA | 1.0% - 1.5% |
| A | 1.5% - 2.0% |
| BBB | 2.0% - 3.0% |
| BB | 3.0% - 5.0% |
| B | 5.0% - 8.0% |
### Step 5: Terminal Value
Terminal value typically represents 60-80% of total enterprise value. Use two methods and cross-check.
#### Perpetuity Growth Method
```
TV = FCF_n x (1 + g) / (WACC - g)
```
Where g = terminal growth rate (typically 2.0% - 3.0%, should not exceed long-term GDP growth)
**Sensitivity:** Terminal value is highly sensitive to g. A 0.5 percentage point change in g can move enterprise value by 15-25%.
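A quick sketch of the Gordon growth formula shows that sensitivity directly (numbers are illustrative):

```python
def terminal_value(fcf_n: float, wacc: float, g: float) -> float:
    """Perpetuity growth: TV = FCF_n x (1 + g) / (WACC - g); requires WACC > g."""
    if wacc <= g:
        raise ValueError("WACC must exceed terminal growth rate")
    return fcf_n * (1 + g) / (wacc - g)

# Illustrative: $100 terminal-year FCF at a 9% WACC
print(round(terminal_value(100, 0.09, 0.020)))  # 1457: g = 2.0%
print(round(terminal_value(100, 0.09, 0.025)))  # 1577: g = 2.5% -> ~8% higher TV
```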
#### Exit Multiple Method
```
TV = Terminal Year EBITDA x Exit EV/EBITDA Multiple
```
**Exit Multiple Selection:**
- Use current trading multiples of comparable companies
- Consider whether current multiples are at historical highs/lows
- Apply a discount for lack of marketability if private
**Cross-Check:** Both methods should yield similar results. Large discrepancies signal inconsistent assumptions.
### Step 6: Enterprise to Equity Bridge
```
Enterprise Value
- Net Debt (Total Debt - Cash)
- Minority Interest
- Preferred Equity
+ Equity Method Investments
= Equity Value
Equity Value / Diluted Shares Outstanding = Value Per Share
```
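The bridge above is a simple sequence of adjustments; a minimal sketch with illustrative figures:

```python
def equity_value_bridge(ev: float, total_debt: float, cash: float,
                        minority_interest: float = 0.0, preferred: float = 0.0,
                        equity_method_investments: float = 0.0) -> float:
    """Walk from enterprise value to equity value per the bridge above."""
    net_debt = total_debt - cash
    return ev - net_debt - minority_interest - preferred + equity_method_investments

# Illustrative: $2,000 EV, $600 debt, $150 cash, $50 minority interest
equity = equity_value_bridge(2_000.0, total_debt=600, cash=150, minority_interest=50)
print(equity)                  # 1500.0
print(round(equity / 120, 2))  # 12.5 -> $12.50/share on 120M diluted shares
```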
### Step 7: Sensitivity Analysis
Always present results as a range, not a single point estimate.
**Standard Sensitivity Tables:**
1. WACC vs Terminal Growth Rate
2. WACC vs Exit Multiple
3. Revenue Growth vs Operating Margin
**Scenario Analysis:**
- Base case: Management guidance / consensus estimates
- Bull case: Upside scenario with faster growth or margin expansion
- Bear case: Downside scenario with slower growth or margin compression
## 2. Comparable Company Analysis
### Methodology
1. **Select peer group** - Similar size, industry, growth profile, and margins
2. **Calculate trading multiples** for each peer
3. **Determine appropriate multiple range**
4. **Apply to target company's metrics**
### Common Multiples
| Multiple | When to Use |
|----------|-------------|
| EV/Revenue | Pre-profit companies, high-growth tech |
| EV/EBITDA | Most common for mature companies |
| EV/EBIT | When D&A differs significantly across peers |
| P/E | Stable earnings, financial services |
| P/B | Banks, insurance, asset-heavy industries |
| EV/FCF | Capital-light businesses with clean FCF |
### Peer Selection Criteria
- **Industry:** Same or closely adjacent sectors
- **Size:** Within 0.5x to 2x of target revenue/market cap
- **Geography:** Same primary markets
- **Growth profile:** Similar revenue growth rates (within 5-10 percentage points)
- **Margin profile:** Similar operating margin structure
- **Business model:** Comparable revenue mix and customer base
### Premium/Discount Adjustments
| Factor | Adjustment |
|--------|-----------|
| Higher growth | Premium of 1-3 turns on EV/EBITDA |
| Lower margins | Discount of 1-2 turns |
| Smaller scale | Discount of 10-20% |
| Private company | Discount of 15-30% (illiquidity) |
| Control premium | Premium of 20-40% (for acquisitions) |
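Applying a peer multiple with these adjustments can be sketched as follows (the discount magnitudes are illustrative picks from the table):

```python
def implied_ev_from_comps(target_ebitda: float, peer_median_multiple: float,
                          size_discount: float = 0.0,
                          illiquidity_discount: float = 0.0) -> float:
    """Apply a peer EV/EBITDA multiple with percentage discounts for scale and liquidity."""
    adjusted_multiple = peer_median_multiple * (1 - size_discount)
    ev = target_ebitda * adjusted_multiple
    return ev * (1 - illiquidity_discount)

# Illustrative: $50M EBITDA, 10.0x peer median, 10% size and 20% illiquidity discounts
print(implied_ev_from_comps(50.0, 10.0, size_discount=0.10,
                            illiquidity_discount=0.20))  # 360.0 ($M)
```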
## 3. Precedent Transaction Analysis
### Methodology
1. **Identify comparable transactions** in same industry
2. **Calculate transaction multiples** (EV/Revenue, EV/EBITDA)
3. **Adjust for market conditions** and deal-specific factors
4. **Apply adjusted multiples** to target
### Key Considerations
- Transactions include control premiums (typically 20-40%)
- Market conditions at time of deal affect multiples
- Strategic vs financial buyer valuations differ
- Consider synergy expectations embedded in price
- More recent transactions carry greater relevance
## 4. Valuation Framework Selection
| Situation | Primary Method | Secondary Method |
|-----------|---------------|-----------------|
| Profitable, stable | DCF | Comparable companies |
| High growth, pre-profit | Comparable companies (EV/Revenue) | DCF with scenario analysis |
| M&A target | Precedent transactions | DCF |
| Asset-heavy, cyclical | Asset-based valuation | Normalized DCF |
| Financial institution | Dividend discount model | P/B, P/E comps |
| Distressed | Liquidation value | Restructured DCF |
## 5. Common Pitfalls
1. **Hockey stick projections** - Unrealistic growth acceleration in later years
2. **Terminal value dominance** - If TV > 80% of EV, shorten projection period or question assumptions
3. **Circular references** - WACC weights depend on equity value, which in turn depends on WACC; break the loop by iterating to convergence or using a target capital structure
4. **Ignoring working capital** - Can significantly affect FCF
5. **Single-point estimates** - Always present as a range
6. **Stale comparables** - Market conditions change; update regularly
7. **Confirmation bias** - Don't work backward from a desired conclusion
8. **Ignoring dilution** - Use fully diluted shares (treasury stock method for options)
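The treasury stock method mentioned in pitfall 8 is mechanical enough to sketch (inputs are illustrative):

```python
def treasury_stock_method(basic_shares: float, options: float,
                          strike: float, share_price: float) -> float:
    """Diluted shares: add in-the-money options, net of shares repurchased
    with the assumed exercise proceeds at the current market price."""
    if share_price <= strike:
        return basic_shares  # options out of the money -> no dilution
    proceeds = options * strike
    buyback = proceeds / share_price
    return basic_shares + options - buyback

# 100M basic shares, 10M options struck at $20, stock at $50
print(treasury_stock_method(100.0, 10.0, 20.0, 50.0))  # 106.0
```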

View File

@@ -0,0 +1,406 @@
#!/usr/bin/env python3
"""
Budget Variance Analyzer
Analyzes actual vs budget vs prior year performance with materiality
threshold filtering, favorable/unfavorable classification, and
department/category breakdown.
Usage:
python budget_variance_analyzer.py budget_data.json
python budget_variance_analyzer.py budget_data.json --format json
python budget_variance_analyzer.py budget_data.json --threshold-pct 5 --threshold-amt 25000
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Tuple
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0 or denominator is None:
return default
return numerator / denominator
class BudgetVarianceAnalyzer:
"""Analyze budget variances with materiality filtering and classification."""
def __init__(
self,
data: Dict[str, Any],
threshold_pct: float = 10.0,
threshold_amt: float = 50000.0,
) -> None:
"""
Initialize the analyzer.
Args:
data: Budget data with line items
threshold_pct: Materiality threshold as percentage (default 10%)
threshold_amt: Materiality threshold as dollar amount (default $50K)
"""
self.line_items: List[Dict[str, Any]] = data.get("line_items", [])
self.period: str = data.get("period", "Current Period")
self.company: str = data.get("company", "Company")
self.threshold_pct = threshold_pct
self.threshold_amt = threshold_amt
self.variances: List[Dict[str, Any]] = []
self.material_variances: List[Dict[str, Any]] = []
self.summary: Dict[str, Any] = {}
def classify_favorability(
self, line_type: str, variance_amount: float
) -> str:
"""
Classify variance as favorable or unfavorable.
Revenue: over budget = favorable
Expense: under budget = favorable
"""
if line_type.lower() in ("revenue", "income", "sales"):
return "Favorable" if variance_amount > 0 else "Unfavorable"
else:
# For expenses, under budget (negative variance) is favorable
return "Favorable" if variance_amount < 0 else "Unfavorable"
def calculate_variances(self) -> List[Dict[str, Any]]:
"""Calculate variances for all line items."""
self.variances = []
for item in self.line_items:
name = item.get("name", "Unknown")
line_type = item.get("type", "expense")
department = item.get("department", "General")
category = item.get("category", "Other")
actual = item.get("actual", 0)
budget = item.get("budget", 0)
prior_year = item.get("prior_year", None)
# Budget variance
budget_var_amt = actual - budget
budget_var_pct = safe_divide(budget_var_amt, budget) * 100
# Prior year variance (if available)
py_var_amt = (actual - prior_year) if prior_year is not None else None
py_var_pct = (
safe_divide(py_var_amt, prior_year) * 100
if prior_year is not None
else None
)
favorability = self.classify_favorability(line_type, budget_var_amt)
is_material = (
abs(budget_var_pct) >= self.threshold_pct
or abs(budget_var_amt) >= self.threshold_amt
)
variance_record = {
"name": name,
"type": line_type,
"department": department,
"category": category,
"actual": actual,
"budget": budget,
"prior_year": prior_year,
"budget_variance_amount": budget_var_amt,
"budget_variance_pct": round(budget_var_pct, 2),
"prior_year_variance_amount": py_var_amt,
"prior_year_variance_pct": (
round(py_var_pct, 2) if py_var_pct is not None else None
),
"favorability": favorability,
"is_material": is_material,
}
self.variances.append(variance_record)
# Filter material variances
self.material_variances = [v for v in self.variances if v["is_material"]]
return self.variances
def department_summary(self) -> Dict[str, Dict[str, Any]]:
"""Summarize variances by department."""
departments: Dict[str, Dict[str, float]] = {}
for v in self.variances:
dept = v["department"]
if dept not in departments:
departments[dept] = {
"total_actual": 0.0,
"total_budget": 0.0,
"total_variance": 0.0,
"favorable_count": 0,
"unfavorable_count": 0,
"line_count": 0,
}
departments[dept]["total_actual"] += v["actual"]
departments[dept]["total_budget"] += v["budget"]
departments[dept]["total_variance"] += v["budget_variance_amount"]
departments[dept]["line_count"] += 1
if v["favorability"] == "Favorable":
departments[dept]["favorable_count"] += 1
else:
departments[dept]["unfavorable_count"] += 1
# Add variance percentage
for dept_data in departments.values():
dept_data["variance_pct"] = round(
safe_divide(
dept_data["total_variance"], dept_data["total_budget"]
)
* 100,
2,
)
return departments
def category_summary(self) -> Dict[str, Dict[str, Any]]:
"""Summarize variances by category."""
categories: Dict[str, Dict[str, float]] = {}
for v in self.variances:
cat = v["category"]
if cat not in categories:
categories[cat] = {
"total_actual": 0.0,
"total_budget": 0.0,
"total_variance": 0.0,
"line_count": 0,
}
categories[cat]["total_actual"] += v["actual"]
categories[cat]["total_budget"] += v["budget"]
categories[cat]["total_variance"] += v["budget_variance_amount"]
categories[cat]["line_count"] += 1
for cat_data in categories.values():
cat_data["variance_pct"] = round(
safe_divide(
cat_data["total_variance"], cat_data["total_budget"]
)
* 100,
2,
)
return categories
def generate_executive_summary(self) -> Dict[str, Any]:
"""Generate an executive summary of the variance analysis."""
total_actual = sum(
v["actual"] for v in self.variances if v["type"].lower() in ("revenue", "income", "sales")
)
total_budget = sum(
v["budget"] for v in self.variances if v["type"].lower() in ("revenue", "income", "sales")
)
total_expense_actual = sum(
v["actual"] for v in self.variances if v["type"].lower() not in ("revenue", "income", "sales")
)
total_expense_budget = sum(
v["budget"] for v in self.variances if v["type"].lower() not in ("revenue", "income", "sales")
)
revenue_variance = total_actual - total_budget
expense_variance = total_expense_actual - total_expense_budget
favorable_count = sum(
1 for v in self.variances if v["favorability"] == "Favorable"
)
unfavorable_count = sum(
1 for v in self.variances if v["favorability"] == "Unfavorable"
)
self.summary = {
"period": self.period,
"company": self.company,
"total_line_items": len(self.variances),
"material_variances_count": len(self.material_variances),
"favorable_count": favorable_count,
"unfavorable_count": unfavorable_count,
"revenue": {
"actual": total_actual,
"budget": total_budget,
"variance_amount": revenue_variance,
"variance_pct": round(
safe_divide(revenue_variance, total_budget) * 100, 2
),
},
"expenses": {
"actual": total_expense_actual,
"budget": total_expense_budget,
"variance_amount": expense_variance,
"variance_pct": round(
safe_divide(expense_variance, total_expense_budget) * 100, 2
),
},
"net_impact": revenue_variance - expense_variance,
"materiality_thresholds": {
"percentage": self.threshold_pct,
"amount": self.threshold_amt,
},
}
return self.summary
def run_analysis(self) -> Dict[str, Any]:
"""Run the complete variance analysis."""
self.calculate_variances()
dept_summary = self.department_summary()
cat_summary = self.category_summary()
exec_summary = self.generate_executive_summary()
return {
"executive_summary": exec_summary,
"all_variances": self.variances,
"material_variances": self.material_variances,
"department_summary": dept_summary,
"category_summary": cat_summary,
}
def format_text(self, results: Dict[str, Any]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("BUDGET VARIANCE ANALYSIS")
lines.append("=" * 70)
summary = results["executive_summary"]
lines.append(f"\n Company: {summary['company']}")
lines.append(f" Period: {summary['period']}")
def fmt_money(val: float) -> str:
sign = "+" if val > 0 else ""
if abs(val) >= 1e6:
return f"{sign}${val / 1e6:,.2f}M"
if abs(val) >= 1e3:
return f"{sign}${val / 1e3:,.1f}K"
return f"{sign}${val:,.2f}"
lines.append(f"\n--- EXECUTIVE SUMMARY ---")
rev = summary["revenue"]
exp = summary["expenses"]
lines.append(
f" Revenue: Actual {fmt_money(rev['actual'])} vs "
f"Budget {fmt_money(rev['budget'])} "
f"({fmt_money(rev['variance_amount'])}, {rev['variance_pct']:+.1f}%)"
)
lines.append(
f" Expenses: Actual {fmt_money(exp['actual'])} vs "
f"Budget {fmt_money(exp['budget'])} "
f"({fmt_money(exp['variance_amount'])}, {exp['variance_pct']:+.1f}%)"
)
lines.append(f" Net Impact: {fmt_money(summary['net_impact'])}")
lines.append(
f" Total Items: {summary['total_line_items']} | "
f"Material: {summary['material_variances_count']} | "
f"Favorable: {summary['favorable_count']} | "
f"Unfavorable: {summary['unfavorable_count']}"
)
# Material variances
material = results["material_variances"]
if material:
lines.append(f"\n--- MATERIAL VARIANCES ---")
lines.append(
f" (Threshold: {self.threshold_pct}% or "
f"${self.threshold_amt:,.0f})"
)
for v in material:
lines.append(
f"\n {v['name']} ({v['department']})"
)
lines.append(
f" Actual: {fmt_money(v['actual'])} | "
f"Budget: {fmt_money(v['budget'])}"
)
lines.append(
f" Variance: {fmt_money(v['budget_variance_amount'])} "
f"({v['budget_variance_pct']:+.1f}%) - {v['favorability']}"
)
# Department summary
dept = results["department_summary"]
if dept:
lines.append(f"\n--- DEPARTMENT SUMMARY ---")
for dept_name, d in dept.items():
lines.append(
f" {dept_name}: Variance {fmt_money(d['total_variance'])} "
f"({d['variance_pct']:+.1f}%) | "
f"Fav: {d['favorable_count']} / Unfav: {d['unfavorable_count']}"
)
# Category summary
cat = results["category_summary"]
if cat:
lines.append(f"\n--- CATEGORY SUMMARY ---")
for cat_name, c in cat.items():
lines.append(
f" {cat_name}: Variance {fmt_money(c['total_variance'])} "
f"({c['variance_pct']:+.1f}%)"
)
lines.append("\n" + "=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Analyze budget variances with materiality filtering"
)
parser.add_argument(
"input_file",
help="Path to JSON file with budget data",
)
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
help="Output format (default: text)",
)
parser.add_argument(
"--threshold-pct",
type=float,
default=10.0,
help="Materiality threshold percentage (default: 10)",
)
parser.add_argument(
"--threshold-amt",
type=float,
default=50000.0,
help="Materiality threshold dollar amount (default: 50000)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File '{args.input_file}' not found.", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.input_file}': {e}", file=sys.stderr)
sys.exit(1)
analyzer = BudgetVarianceAnalyzer(
data,
threshold_pct=args.threshold_pct,
threshold_amt=args.threshold_amt,
)
results = analyzer.run_analysis()
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(analyzer.format_text(results))
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,449 @@
#!/usr/bin/env python3
"""
DCF Valuation Model
Discounted Cash Flow enterprise and equity valuation with WACC calculation,
terminal value estimation, and two-way sensitivity analysis.
Uses standard library only (math, statistics) - NO numpy/pandas/scipy.
Usage:
python dcf_valuation.py valuation_data.json
python dcf_valuation.py valuation_data.json --format json
python dcf_valuation.py valuation_data.json --projection-years 7
"""
import argparse
import json
import math
import sys
from statistics import mean
from typing import Any, Dict, List, Optional, Tuple
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0 or denominator is None:
return default
return numerator / denominator
class DCFModel:
"""Discounted Cash Flow valuation model."""
def __init__(self) -> None:
"""Initialize the DCF model."""
self.historical: Dict[str, Any] = {}
self.assumptions: Dict[str, Any] = {}
self.wacc: float = 0.0
self.projected_revenue: List[float] = []
self.projected_fcf: List[float] = []
self.projection_years: int = 5
self.terminal_value_perpetuity: float = 0.0
self.terminal_value_exit_multiple: float = 0.0
self.enterprise_value_perpetuity: float = 0.0
self.enterprise_value_exit_multiple: float = 0.0
self.equity_value_perpetuity: float = 0.0
self.equity_value_exit_multiple: float = 0.0
self.value_per_share_perpetuity: float = 0.0
self.value_per_share_exit_multiple: float = 0.0
def set_historical_financials(self, historical: Dict[str, Any]) -> None:
"""Set historical financial data."""
self.historical = historical
def set_assumptions(self, assumptions: Dict[str, Any]) -> None:
"""Set projection assumptions."""
self.assumptions = assumptions
self.projection_years = assumptions.get("projection_years", 5)
def calculate_wacc(self) -> float:
"""Calculate Weighted Average Cost of Capital via CAPM."""
wacc_inputs = self.assumptions.get("wacc_inputs", {})
risk_free_rate = wacc_inputs.get("risk_free_rate", 0.04)
equity_risk_premium = wacc_inputs.get("equity_risk_premium", 0.06)
beta = wacc_inputs.get("beta", 1.0)
cost_of_debt = wacc_inputs.get("cost_of_debt", 0.05)
tax_rate = wacc_inputs.get("tax_rate", 0.25)
debt_weight = wacc_inputs.get("debt_weight", 0.30)
equity_weight = wacc_inputs.get("equity_weight", 0.70)
# CAPM: Cost of Equity = Risk-Free Rate + Beta * Equity Risk Premium
cost_of_equity = risk_free_rate + beta * equity_risk_premium
# WACC = (E/V * Re) + (D/V * Rd * (1 - T))
after_tax_cost_of_debt = cost_of_debt * (1 - tax_rate)
self.wacc = (equity_weight * cost_of_equity) + (
debt_weight * after_tax_cost_of_debt
)
return self.wacc
def project_cash_flows(self) -> Tuple[List[float], List[float]]:
"""Project revenue and free cash flow over the projection period."""
base_revenue = self.historical.get("revenue", [])
if not base_revenue:
raise ValueError("Historical revenue data is required")
last_revenue = base_revenue[-1]
revenue_growth_rates = self.assumptions.get("revenue_growth_rates", [])
fcf_margins = self.assumptions.get("fcf_margins", [])
# If growth rates not provided for all years, use average or default
default_growth = self.assumptions.get("default_revenue_growth", 0.05)
default_fcf_margin = self.assumptions.get("default_fcf_margin", 0.10)
self.projected_revenue = []
self.projected_fcf = []
current_revenue = last_revenue
for year in range(self.projection_years):
growth = (
revenue_growth_rates[year]
if year < len(revenue_growth_rates)
else default_growth
)
fcf_margin = (
fcf_margins[year]
if year < len(fcf_margins)
else default_fcf_margin
)
current_revenue = current_revenue * (1 + growth)
fcf = current_revenue * fcf_margin
self.projected_revenue.append(current_revenue)
self.projected_fcf.append(fcf)
return self.projected_revenue, self.projected_fcf
def calculate_terminal_value(self) -> Tuple[float, float]:
"""Calculate terminal value using both perpetuity growth and exit multiple."""
if not self.projected_fcf:
raise ValueError("Must project cash flows before terminal value")
terminal_fcf = self.projected_fcf[-1]
terminal_growth = self.assumptions.get("terminal_growth_rate", 0.025)
exit_multiple = self.assumptions.get("exit_ev_ebitda_multiple", 12.0)
# Perpetuity growth method: TV = FCF * (1+g) / (WACC - g)
if self.wacc > terminal_growth:
self.terminal_value_perpetuity = (
terminal_fcf * (1 + terminal_growth)
) / (self.wacc - terminal_growth)
else:
self.terminal_value_perpetuity = 0.0
# Exit multiple method: TV = Terminal EBITDA * Exit Multiple
terminal_revenue = self.projected_revenue[-1]
ebitda_margin = self.assumptions.get("terminal_ebitda_margin", 0.20)
terminal_ebitda = terminal_revenue * ebitda_margin
self.terminal_value_exit_multiple = terminal_ebitda * exit_multiple
return self.terminal_value_perpetuity, self.terminal_value_exit_multiple
def calculate_enterprise_value(self) -> Tuple[float, float]:
"""Calculate enterprise value by discounting projected FCFs and terminal value."""
if not self.projected_fcf:
raise ValueError("Must project cash flows first")
# Discount projected FCFs
pv_fcf = 0.0
for i, fcf in enumerate(self.projected_fcf):
discount_factor = (1 + self.wacc) ** (i + 1)
pv_fcf += fcf / discount_factor
# Discount terminal values
terminal_discount = (1 + self.wacc) ** self.projection_years
pv_tv_perpetuity = self.terminal_value_perpetuity / terminal_discount
pv_tv_exit = self.terminal_value_exit_multiple / terminal_discount
self.enterprise_value_perpetuity = pv_fcf + pv_tv_perpetuity
self.enterprise_value_exit_multiple = pv_fcf + pv_tv_exit
return self.enterprise_value_perpetuity, self.enterprise_value_exit_multiple
def calculate_equity_value(self) -> Tuple[float, float]:
"""Calculate equity value from enterprise value."""
net_debt = self.historical.get("net_debt", 0)
shares_outstanding = self.historical.get("shares_outstanding", 1)
self.equity_value_perpetuity = (
self.enterprise_value_perpetuity - net_debt
)
self.equity_value_exit_multiple = (
self.enterprise_value_exit_multiple - net_debt
)
self.value_per_share_perpetuity = safe_divide(
self.equity_value_perpetuity, shares_outstanding
)
self.value_per_share_exit_multiple = safe_divide(
self.equity_value_exit_multiple, shares_outstanding
)
return self.equity_value_perpetuity, self.equity_value_exit_multiple
def sensitivity_analysis(
self,
wacc_range: Optional[List[float]] = None,
growth_range: Optional[List[float]] = None,
) -> Dict[str, Any]:
"""
Two-way sensitivity analysis: WACC vs terminal growth rate.
Returns a table of enterprise values using nested lists (no numpy).
"""
if wacc_range is None:
base_wacc = self.wacc
wacc_range = [
round(base_wacc - 0.02, 4),
round(base_wacc - 0.01, 4),
round(base_wacc, 4),
round(base_wacc + 0.01, 4),
round(base_wacc + 0.02, 4),
]
if growth_range is None:
base_growth = self.assumptions.get("terminal_growth_rate", 0.025)
growth_range = [
round(base_growth - 0.01, 4),
round(base_growth - 0.005, 4),
round(base_growth, 4),
round(base_growth + 0.005, 4),
round(base_growth + 0.01, 4),
]
rows = len(wacc_range)
cols = len(growth_range)
# Initialize sensitivity table as nested lists
ev_table = [[0.0] * cols for _ in range(rows)]
share_price_table = [[0.0] * cols for _ in range(rows)]
terminal_fcf = self.projected_fcf[-1] if self.projected_fcf else 0
for i, wacc_val in enumerate(wacc_range):
for j, growth_val in enumerate(growth_range):
if wacc_val <= growth_val:
ev_table[i][j] = float("inf")
share_price_table[i][j] = float("inf")
continue
# Recalculate PV of projected FCFs with this WACC
pv_fcf = 0.0
for k, fcf in enumerate(self.projected_fcf):
pv_fcf += fcf / ((1 + wacc_val) ** (k + 1))
# Terminal value with this growth rate
tv = (terminal_fcf * (1 + growth_val)) / (wacc_val - growth_val)
pv_tv = tv / ((1 + wacc_val) ** self.projection_years)
ev = pv_fcf + pv_tv
ev_table[i][j] = round(ev, 2)
net_debt = self.historical.get("net_debt", 0)
shares = self.historical.get("shares_outstanding", 1)
equity = ev - net_debt
share_price_table[i][j] = round(
safe_divide(equity, shares), 2
)
return {
"wacc_values": wacc_range,
"growth_values": growth_range,
"enterprise_value_table": ev_table,
"share_price_table": share_price_table,
}
def run_full_valuation(self) -> Dict[str, Any]:
"""Run the complete DCF valuation."""
self.calculate_wacc()
self.project_cash_flows()
self.calculate_terminal_value()
self.calculate_enterprise_value()
self.calculate_equity_value()
sensitivity = self.sensitivity_analysis()
return {
"wacc": self.wacc,
"projected_revenue": self.projected_revenue,
"projected_fcf": self.projected_fcf,
"terminal_value": {
"perpetuity_growth": self.terminal_value_perpetuity,
"exit_multiple": self.terminal_value_exit_multiple,
},
"enterprise_value": {
"perpetuity_growth": self.enterprise_value_perpetuity,
"exit_multiple": self.enterprise_value_exit_multiple,
},
"equity_value": {
"perpetuity_growth": self.equity_value_perpetuity,
"exit_multiple": self.equity_value_exit_multiple,
},
"value_per_share": {
"perpetuity_growth": self.value_per_share_perpetuity,
"exit_multiple": self.value_per_share_exit_multiple,
},
"sensitivity_analysis": sensitivity,
}
def format_text(self, results: Dict[str, Any]) -> str:
"""Format valuation results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("DCF VALUATION ANALYSIS")
lines.append("=" * 70)
def fmt_money(val: float) -> str:
if val == float("inf"):
return "N/A (WACC <= growth)"
if abs(val) >= 1e9:
return f"${val / 1e9:,.2f}B"
if abs(val) >= 1e6:
return f"${val / 1e6:,.2f}M"
if abs(val) >= 1e3:
return f"${val / 1e3:,.1f}K"
return f"${val:,.2f}"
lines.append(f"\n--- WACC ---")
lines.append(f" Weighted Average Cost of Capital: {results['wacc'] * 100:.2f}%")
lines.append(f"\n--- REVENUE PROJECTIONS ---")
for i, rev in enumerate(results["projected_revenue"], 1):
lines.append(f" Year {i}: {fmt_money(rev)}")
lines.append(f"\n--- FREE CASH FLOW PROJECTIONS ---")
for i, fcf in enumerate(results["projected_fcf"], 1):
lines.append(f" Year {i}: {fmt_money(fcf)}")
lines.append(f"\n--- TERMINAL VALUE ---")
lines.append(
f" Perpetuity Growth Method: "
f"{fmt_money(results['terminal_value']['perpetuity_growth'])}"
)
lines.append(
f" Exit Multiple Method: "
f"{fmt_money(results['terminal_value']['exit_multiple'])}"
)
lines.append(f"\n--- ENTERPRISE VALUE ---")
lines.append(
f" Perpetuity Growth Method: "
f"{fmt_money(results['enterprise_value']['perpetuity_growth'])}"
)
lines.append(
f" Exit Multiple Method: "
f"{fmt_money(results['enterprise_value']['exit_multiple'])}"
)
lines.append(f"\n--- EQUITY VALUE ---")
lines.append(
f" Perpetuity Growth Method: "
f"{fmt_money(results['equity_value']['perpetuity_growth'])}"
)
lines.append(
f" Exit Multiple Method: "
f"{fmt_money(results['equity_value']['exit_multiple'])}"
)
lines.append(f"\n--- VALUE PER SHARE ---")
vps = results["value_per_share"]
lines.append(f" Perpetuity Growth Method: ${vps['perpetuity_growth']:,.2f}")
lines.append(f" Exit Multiple Method: ${vps['exit_multiple']:,.2f}")
# Sensitivity table
sens = results["sensitivity_analysis"]
lines.append(f"\n--- SENSITIVITY ANALYSIS (Enterprise Value) ---")
lines.append(f" WACC vs Terminal Growth Rate")
lines.append("")
header = " {:>10s}".format("WACC \\ g")
for g in sens["growth_values"]:
header += f" {g * 100:>8.1f}%"
lines.append(header)
lines.append(" " + "-" * (10 + 10 * len(sens["growth_values"])))
for i, w in enumerate(sens["wacc_values"]):
row = f" {w * 100:>9.1f}%"
for j in range(len(sens["growth_values"])):
val = sens["enterprise_value_table"][i][j]
if val == float("inf"):
row += f" {'N/A':>8s}"
else:
row += f" {fmt_money(val):>8s}"
lines.append(row)
lines.append("\n" + "=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="DCF Valuation Model - Enterprise and equity valuation"
)
parser.add_argument(
"input_file",
help="Path to JSON file with valuation data",
)
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
help="Output format (default: text)",
)
parser.add_argument(
"--projection-years",
type=int,
default=None,
help="Number of projection years (overrides input file)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File '{args.input_file}' not found.", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.input_file}': {e}", file=sys.stderr)
sys.exit(1)
model = DCFModel()
model.set_historical_financials(data.get("historical", {}))
assumptions = data.get("assumptions", {})
if args.projection_years is not None:
assumptions["projection_years"] = args.projection_years
model.set_assumptions(assumptions)
try:
results = model.run_full_valuation()
except ValueError as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
if args.format == "json":
# Handle inf values for JSON serialization
def sanitize(obj: Any) -> Any:
if isinstance(obj, float) and math.isinf(obj):
return None
if isinstance(obj, dict):
return {k: sanitize(v) for k, v in obj.items()}
if isinstance(obj, list):
return [sanitize(v) for v in obj]
return obj
print(json.dumps(sanitize(results), indent=2))
else:
print(model.format_text(results))
if __name__ == "__main__":
main()


@@ -0,0 +1,494 @@
#!/usr/bin/env python3
"""
Forecast Builder
Driver-based revenue forecasting with 13-week rolling cash flow projection,
scenario modeling (base/bull/bear), and trend analysis using simple linear
regression (standard library only).
Usage:
python forecast_builder.py forecast_data.json
python forecast_builder.py forecast_data.json --format json
python forecast_builder.py forecast_data.json --scenarios base,bull,bear
"""
import argparse
import json
import sys
from statistics import mean
from typing import Any, Dict, List, Optional, Tuple
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
    if denominator is None or denominator == 0:
return default
return numerator / denominator
def simple_linear_regression(
x_values: List[float], y_values: List[float]
) -> Tuple[float, float, float]:
"""
Simple linear regression using standard library.
Returns (slope, intercept, r_squared).
"""
n = len(x_values)
if n < 2 or n != len(y_values):
return (0.0, 0.0, 0.0)
x_mean = mean(x_values)
y_mean = mean(y_values)
ss_xy = sum((x - x_mean) * (y - y_mean) for x, y in zip(x_values, y_values))
ss_xx = sum((x - x_mean) ** 2 for x in x_values)
ss_yy = sum((y - y_mean) ** 2 for y in y_values)
slope = safe_divide(ss_xy, ss_xx)
intercept = y_mean - slope * x_mean
# R-squared
r_squared = safe_divide(ss_xy ** 2, ss_xx * ss_yy) if ss_yy > 0 else 0.0
return (slope, intercept, r_squared)
class ForecastBuilder:
"""Driver-based revenue forecasting with scenario modeling."""
def __init__(self, data: Dict[str, Any]) -> None:
"""Initialize the forecast builder."""
self.historical: List[Dict[str, Any]] = data.get("historical_periods", [])
self.drivers: Dict[str, Any] = data.get("drivers", {})
self.assumptions: Dict[str, Any] = data.get("assumptions", {})
self.cash_flow_inputs: Dict[str, Any] = data.get("cash_flow_inputs", {})
self.scenarios_config: Dict[str, Any] = data.get("scenarios", {})
self.forecast_periods: int = data.get("forecast_periods", 12)
def analyze_trends(self) -> Dict[str, Any]:
"""Analyze historical trends using linear regression."""
if not self.historical:
return {"error": "No historical data available"}
# Extract revenue series
revenues = [p.get("revenue", 0) for p in self.historical]
periods = list(range(1, len(revenues) + 1))
slope, intercept, r_squared = simple_linear_regression(
[float(x) for x in periods],
[float(y) for y in revenues],
)
# Calculate growth rates
growth_rates = []
for i in range(1, len(revenues)):
if revenues[i - 1] > 0:
growth = (revenues[i] - revenues[i - 1]) / revenues[i - 1]
growth_rates.append(growth)
avg_growth = mean(growth_rates) if growth_rates else 0.0
# Seasonality detection (if enough data)
seasonality_index: List[float] = []
if len(revenues) >= 4:
overall_avg = mean(revenues)
if overall_avg > 0:
seasonality_index = [r / overall_avg for r in revenues[-4:]]
return {
"trend": {
"slope": round(slope, 2),
"intercept": round(intercept, 2),
"r_squared": round(r_squared, 4),
"direction": "upward" if slope > 0 else "downward" if slope < 0 else "flat",
},
"growth_rates": [round(g, 4) for g in growth_rates],
"average_growth_rate": round(avg_growth, 4),
"seasonality_index": [round(s, 4) for s in seasonality_index],
"historical_revenues": revenues,
}
def build_driver_based_forecast(
self, scenario: str = "base"
) -> Dict[str, Any]:
"""
Build a driver-based revenue forecast.
Drivers may include: units, price, customers, ARPU, conversion rate, etc.
"""
scenario_adjustments = self.scenarios_config.get(scenario, {})
growth_adjustment = scenario_adjustments.get("growth_adjustment", 0.0)
margin_adjustment = scenario_adjustments.get("margin_adjustment", 0.0)
base_revenue = 0.0
if self.historical:
base_revenue = self.historical[-1].get("revenue", 0)
# Driver-based calculation
unit_drivers = self.drivers.get("units", {})
price_drivers = self.drivers.get("pricing", {})
customer_drivers = self.drivers.get("customers", {})  # currently unused; reserved for customer/ARPU-driven models
base_growth = self.assumptions.get("revenue_growth_rate", 0.05)
adjusted_growth = base_growth + growth_adjustment
base_margin = self.assumptions.get("gross_margin", 0.40)
adjusted_margin = base_margin + margin_adjustment
cogs_pct = 1.0 - adjusted_margin
opex_pct = self.assumptions.get("opex_pct_revenue", 0.25)
forecast_periods: List[Dict[str, Any]] = []
current_revenue = base_revenue
# If we have unit and price drivers, use them
has_unit_drivers = bool(unit_drivers) and bool(price_drivers)
if has_unit_drivers:
base_units = unit_drivers.get("base_units", 1000)
unit_growth = unit_drivers.get("growth_rate", 0.03) + growth_adjustment
base_price = price_drivers.get("base_price", 100)
price_growth = price_drivers.get("annual_increase", 0.02)
current_units = base_units
current_price = base_price
for period in range(1, self.forecast_periods + 1):
current_units = current_units * (1 + unit_growth / 12)
if period % 12 == 0:
current_price = current_price * (1 + price_growth)
period_revenue = current_units * current_price
cogs = period_revenue * cogs_pct
gross_profit = period_revenue - cogs
opex = period_revenue * opex_pct
operating_income = gross_profit - opex
forecast_periods.append({
"period": period,
"revenue": round(period_revenue, 2),
"units": round(current_units, 0),
"price": round(current_price, 2),
"cogs": round(cogs, 2),
"gross_profit": round(gross_profit, 2),
"gross_margin": round(adjusted_margin, 4),
"opex": round(opex, 2),
"operating_income": round(operating_income, 2),
})
else:
# Simple growth-based forecast
monthly_growth = (1 + adjusted_growth) ** (1 / 12) - 1
for period in range(1, self.forecast_periods + 1):
current_revenue = current_revenue * (1 + monthly_growth)
cogs = current_revenue * cogs_pct
gross_profit = current_revenue - cogs
opex = current_revenue * opex_pct
operating_income = gross_profit - opex
forecast_periods.append({
"period": period,
"revenue": round(current_revenue, 2),
"cogs": round(cogs, 2),
"gross_profit": round(gross_profit, 2),
"gross_margin": round(adjusted_margin, 4),
"opex": round(opex, 2),
"operating_income": round(operating_income, 2),
})
total_revenue = sum(p["revenue"] for p in forecast_periods)
total_operating_income = sum(p["operating_income"] for p in forecast_periods)
return {
"scenario": scenario,
"growth_rate": round(adjusted_growth, 4),
"gross_margin": round(adjusted_margin, 4),
"forecast_periods": forecast_periods,
"total_revenue": round(total_revenue, 2),
"total_operating_income": round(total_operating_income, 2),
"average_monthly_revenue": round(
safe_divide(total_revenue, len(forecast_periods)), 2
),
}
def build_rolling_cash_flow(self, weeks: int = 13) -> Dict[str, Any]:
"""Build a 13-week rolling cash flow projection."""
cfi = self.cash_flow_inputs
opening_balance = cfi.get("opening_cash_balance", 0)
weekly_revenue = cfi.get("weekly_revenue", 0)
collection_rate = cfi.get("collection_rate", 0.85)
collection_lag_weeks = cfi.get("collection_lag_weeks", 2)
# Weekly expenses
weekly_payroll = cfi.get("weekly_payroll", 0)
weekly_rent = cfi.get("weekly_rent", 0)
weekly_operating = cfi.get("weekly_operating", 0)
weekly_other = cfi.get("weekly_other", 0)
total_weekly_expenses = weekly_payroll + weekly_rent + weekly_operating + weekly_other
# One-time items
one_time_items: List[Dict[str, Any]] = cfi.get("one_time_items", [])
weekly_projections: List[Dict[str, Any]] = []
running_balance = opening_balance
# Revenue pipeline for lagged collections
revenue_pipeline: List[float] = [0.0] * collection_lag_weeks
for week in range(1, weeks + 1):
# Revenue collections (lagged)
revenue_pipeline.append(weekly_revenue)
collections = revenue_pipeline.pop(0) * collection_rate
# One-time items for this week
one_time_inflows = 0.0
one_time_outflows = 0.0
one_time_labels: List[str] = []
for item in one_time_items:
if item.get("week") == week:
amount = item.get("amount", 0)
if amount > 0:
one_time_inflows += amount
else:
one_time_outflows += abs(amount)
one_time_labels.append(item.get("description", ""))
total_inflows = collections + one_time_inflows
total_outflows = total_weekly_expenses + one_time_outflows
net_cash_flow = total_inflows - total_outflows
running_balance += net_cash_flow
weekly_projections.append({
"week": week,
"collections": round(collections, 2),
"one_time_inflows": round(one_time_inflows, 2),
"total_inflows": round(total_inflows, 2),
"payroll": round(weekly_payroll, 2),
"rent": round(weekly_rent, 2),
"operating": round(weekly_operating, 2),
"other_expenses": round(weekly_other, 2),
"one_time_outflows": round(one_time_outflows, 2),
"total_outflows": round(total_outflows, 2),
"net_cash_flow": round(net_cash_flow, 2),
"closing_balance": round(running_balance, 2),
"notes": ", ".join(one_time_labels) if one_time_labels else "",
})
# Summary
total_inflows = sum(w["total_inflows"] for w in weekly_projections)
total_outflows = sum(w["total_outflows"] for w in weekly_projections)
min_balance = min(w["closing_balance"] for w in weekly_projections)
min_balance_week = next(
w["week"]
for w in weekly_projections
if w["closing_balance"] == min_balance
)
return {
"weeks": weeks,
"opening_balance": opening_balance,
"closing_balance": round(running_balance, 2),
"total_inflows": round(total_inflows, 2),
"total_outflows": round(total_outflows, 2),
"net_change": round(total_inflows - total_outflows, 2),
"minimum_balance": round(min_balance, 2),
"minimum_balance_week": min_balance_week,
"cash_runway_weeks": (
round(safe_divide(running_balance, total_weekly_expenses))
if total_weekly_expenses > 0
else None
),
"weekly_projections": weekly_projections,
}
def build_scenario_comparison(
self, scenarios: Optional[List[str]] = None
) -> Dict[str, Any]:
"""Build and compare multiple scenarios."""
if scenarios is None:
scenarios = ["base", "bull", "bear"]
scenario_results: Dict[str, Any] = {}
for scenario in scenarios:
scenario_results[scenario] = self.build_driver_based_forecast(scenario)
# Comparison summary
comparison: List[Dict[str, Any]] = []
for scenario in scenarios:
result = scenario_results[scenario]
comparison.append({
"scenario": scenario,
"total_revenue": result["total_revenue"],
"total_operating_income": result["total_operating_income"],
"growth_rate": result["growth_rate"],
"gross_margin": result["gross_margin"],
"avg_monthly_revenue": result["average_monthly_revenue"],
})
return {
"scenarios": scenario_results,
"comparison": comparison,
}
def run_full_forecast(
self, scenarios: Optional[List[str]] = None
) -> Dict[str, Any]:
"""Run the complete forecast analysis."""
trends = self.analyze_trends()
scenario_comparison = self.build_scenario_comparison(scenarios)
cash_flow = self.build_rolling_cash_flow()
return {
"trend_analysis": trends,
"scenario_comparison": scenario_comparison,
"rolling_cash_flow": cash_flow,
}
def format_text(self, results: Dict[str, Any]) -> str:
"""Format forecast results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("FINANCIAL FORECAST REPORT")
lines.append("=" * 70)
def fmt_money(val: float) -> str:
if abs(val) >= 1e9:
return f"${val / 1e9:,.2f}B"
if abs(val) >= 1e6:
return f"${val / 1e6:,.2f}M"
if abs(val) >= 1e3:
return f"${val / 1e3:,.1f}K"
return f"${val:,.2f}"
# Trend Analysis
trend = results["trend_analysis"]
if "error" not in trend:
lines.append("\n--- TREND ANALYSIS ---")
t = trend["trend"]
lines.append(f" Direction: {t['direction']}")
lines.append(f" R-squared: {t['r_squared']:.4f}")
lines.append(
f" Average Historical Growth: "
f"{trend['average_growth_rate'] * 100:.1f}%"
)
if trend["seasonality_index"]:
lines.append(
f" Seasonality Index (last 4): "
f"{', '.join(f'{s:.2f}' for s in trend['seasonality_index'])}"
)
# Scenario Comparison
comp = results["scenario_comparison"]["comparison"]
lines.append("\n--- SCENARIO COMPARISON ---")
lines.append(
f" {'Scenario':<10s} {'Revenue':>14s} {'Op. Income':>14s} "
f"{'Growth':>8s} {'Margin':>8s}"
)
lines.append(" " + "-" * 62)
for c in comp:
lines.append(
f" {c['scenario']:<10s} {fmt_money(c['total_revenue']):>14s} "
f"{fmt_money(c['total_operating_income']):>14s} "
f"{c['growth_rate'] * 100:>7.1f}% "
f"{c['gross_margin'] * 100:>7.1f}%"
)
# Base scenario detail
base = results["scenario_comparison"]["scenarios"].get("base", {})
if base and base.get("forecast_periods"):
lines.append("\n--- BASE CASE MONTHLY FORECAST ---")
lines.append(
f" {'Period':>6s} {'Revenue':>12s} {'Gross Profit':>12s} "
f"{'Op. Income':>12s}"
)
lines.append(" " + "-" * 48)
for p in base["forecast_periods"]:
lines.append(
f" {p['period']:>6d} {fmt_money(p['revenue']):>12s} "
f"{fmt_money(p['gross_profit']):>12s} "
f"{fmt_money(p['operating_income']):>12s}"
)
# Cash Flow
cf = results["rolling_cash_flow"]
lines.append("\n--- 13-WEEK ROLLING CASH FLOW ---")
lines.append(f" Opening Balance: {fmt_money(cf['opening_balance'])}")
lines.append(f" Closing Balance: {fmt_money(cf['closing_balance'])}")
lines.append(f" Net Change: {fmt_money(cf['net_change'])}")
lines.append(
f" Minimum Balance: {fmt_money(cf['minimum_balance'])} "
f"(Week {cf['minimum_balance_week']})"
)
if cf.get("cash_runway_weeks") is not None:
lines.append(f" Cash Runway: {cf['cash_runway_weeks']:.0f} weeks")
lines.append("\n Weekly Detail:")
lines.append(
f" {'Wk':>3s} {'Inflows':>10s} {'Outflows':>10s} "
f"{'Net':>10s} {'Balance':>12s}"
)
lines.append(" " + "-" * 50)
for w in cf["weekly_projections"]:
notes = f" {w['notes']}" if w["notes"] else ""
lines.append(
f" {w['week']:>3d} {fmt_money(w['total_inflows']):>10s} "
f"{fmt_money(w['total_outflows']):>10s} "
f"{fmt_money(w['net_cash_flow']):>10s} "
f"{fmt_money(w['closing_balance']):>12s}{notes}"
)
lines.append("\n" + "=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Driver-based revenue forecasting with scenario modeling"
)
parser.add_argument(
"input_file",
help="Path to JSON file with forecast data",
)
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
help="Output format (default: text)",
)
parser.add_argument(
"--scenarios",
type=str,
default="base,bull,bear",
help="Comma-separated list of scenarios (default: base,bull,bear)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File '{args.input_file}' not found.", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.input_file}': {e}", file=sys.stderr)
sys.exit(1)
builder = ForecastBuilder(data)
scenarios = [s.strip() for s in args.scenarios.split(",")]
results = builder.run_full_forecast(scenarios)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(builder.format_text(results))
if __name__ == "__main__":
main()


@@ -0,0 +1,432 @@
#!/usr/bin/env python3
"""
Financial Ratio Calculator
Calculates and interprets financial ratios across 5 categories:
profitability, liquidity, leverage, efficiency, and valuation.
Usage:
python ratio_calculator.py financial_data.json
python ratio_calculator.py financial_data.json --format json
python ratio_calculator.py financial_data.json --category profitability
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Tuple
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
    if denominator is None or denominator == 0:
return default
return numerator / denominator
class FinancialRatioCalculator:
"""Calculate and interpret financial ratios from statement data."""
# Industry benchmark ranges: (low, typical, high)
BENCHMARKS: Dict[str, Tuple[float, float, float]] = {
"roe": (0.08, 0.15, 0.25),
"roa": (0.03, 0.06, 0.12),
"gross_margin": (0.25, 0.40, 0.60),
"operating_margin": (0.05, 0.15, 0.25),
"net_margin": (0.03, 0.10, 0.20),
"current_ratio": (1.0, 1.5, 3.0),
"quick_ratio": (0.8, 1.0, 2.0),
"cash_ratio": (0.2, 0.5, 1.0),
"debt_to_equity": (0.3, 0.8, 2.0),
"interest_coverage": (2.0, 5.0, 10.0),
"dscr": (1.0, 1.5, 2.5),
"asset_turnover": (0.5, 1.0, 2.0),
"inventory_turnover": (4.0, 8.0, 12.0),
"receivables_turnover": (6.0, 10.0, 15.0),
"dso": (30.0, 45.0, 60.0),
"pe_ratio": (10.0, 20.0, 35.0),
"pb_ratio": (1.0, 2.5, 5.0),
"ps_ratio": (1.0, 3.0, 8.0),
"ev_ebitda": (6.0, 12.0, 20.0),
"peg_ratio": (0.5, 1.0, 2.0),
}
def __init__(self, data: Dict[str, Any]) -> None:
"""Initialize with financial statement data."""
self.income = data.get("income_statement", {})
self.balance = data.get("balance_sheet", {})
self.cash_flow = data.get("cash_flow", {})
self.market = data.get("market_data", {})
self.results: Dict[str, Dict[str, Any]] = {}
def calculate_profitability(self) -> Dict[str, Any]:
"""Calculate profitability ratios."""
revenue = self.income.get("revenue", 0)
cogs = self.income.get("cost_of_goods_sold", 0)
operating_income = self.income.get("operating_income", 0)
net_income = self.income.get("net_income", 0)
total_equity = self.balance.get("total_equity", 0)
total_assets = self.balance.get("total_assets", 0)
gross_profit = revenue - cogs
ratios = {
"roe": {
"value": safe_divide(net_income, total_equity),
"formula": "Net Income / Total Equity",
"name": "Return on Equity",
},
"roa": {
"value": safe_divide(net_income, total_assets),
"formula": "Net Income / Total Assets",
"name": "Return on Assets",
},
"gross_margin": {
"value": safe_divide(gross_profit, revenue),
"formula": "(Revenue - COGS) / Revenue",
"name": "Gross Margin",
},
"operating_margin": {
"value": safe_divide(operating_income, revenue),
"formula": "Operating Income / Revenue",
"name": "Operating Margin",
},
"net_margin": {
"value": safe_divide(net_income, revenue),
"formula": "Net Income / Revenue",
"name": "Net Margin",
},
}
for key, ratio in ratios.items():
ratio["interpretation"] = self.interpret_ratio(key, ratio["value"])
self.results["profitability"] = ratios
return ratios
def calculate_liquidity(self) -> Dict[str, Any]:
"""Calculate liquidity ratios."""
current_assets = self.balance.get("current_assets", 0)
current_liabilities = self.balance.get("current_liabilities", 0)
inventory = self.balance.get("inventory", 0)
cash = self.balance.get("cash_and_equivalents", 0)
ratios = {
"current_ratio": {
"value": safe_divide(current_assets, current_liabilities),
"formula": "Current Assets / Current Liabilities",
"name": "Current Ratio",
},
"quick_ratio": {
"value": safe_divide(
current_assets - inventory, current_liabilities
),
"formula": "(Current Assets - Inventory) / Current Liabilities",
"name": "Quick Ratio",
},
"cash_ratio": {
"value": safe_divide(cash, current_liabilities),
"formula": "Cash & Equivalents / Current Liabilities",
"name": "Cash Ratio",
},
}
for key, ratio in ratios.items():
ratio["interpretation"] = self.interpret_ratio(key, ratio["value"])
self.results["liquidity"] = ratios
return ratios
def calculate_leverage(self) -> Dict[str, Any]:
"""Calculate leverage ratios."""
total_debt = self.balance.get("total_debt", 0)
total_equity = self.balance.get("total_equity", 0)
operating_income = self.income.get("operating_income", 0)
interest_expense = self.income.get("interest_expense", 0)
operating_cash_flow = self.cash_flow.get("operating_cash_flow", 0)
total_debt_service = self.cash_flow.get(
"total_debt_service", interest_expense
)
ratios = {
"debt_to_equity": {
"value": safe_divide(total_debt, total_equity),
"formula": "Total Debt / Total Equity",
"name": "Debt-to-Equity Ratio",
},
"interest_coverage": {
"value": safe_divide(operating_income, interest_expense),
"formula": "Operating Income / Interest Expense",
"name": "Interest Coverage Ratio",
},
"dscr": {
"value": safe_divide(operating_cash_flow, total_debt_service),
"formula": "Operating Cash Flow / Total Debt Service",
"name": "Debt Service Coverage Ratio",
},
}
for key, ratio in ratios.items():
ratio["interpretation"] = self.interpret_ratio(key, ratio["value"])
self.results["leverage"] = ratios
return ratios
def calculate_efficiency(self) -> Dict[str, Any]:
"""Calculate efficiency ratios."""
revenue = self.income.get("revenue", 0)
cogs = self.income.get("cost_of_goods_sold", 0)
total_assets = self.balance.get("total_assets", 0)
inventory = self.balance.get("inventory", 0)
accounts_receivable = self.balance.get("accounts_receivable", 0)
receivables_turnover_val = safe_divide(revenue, accounts_receivable)
ratios = {
"asset_turnover": {
"value": safe_divide(revenue, total_assets),
"formula": "Revenue / Total Assets",
"name": "Asset Turnover",
},
"inventory_turnover": {
"value": safe_divide(cogs, inventory),
"formula": "COGS / Inventory",
"name": "Inventory Turnover",
},
"receivables_turnover": {
"value": receivables_turnover_val,
"formula": "Revenue / Accounts Receivable",
"name": "Receivables Turnover",
},
"dso": {
"value": safe_divide(365, receivables_turnover_val)
if receivables_turnover_val > 0
else 0.0,
"formula": "365 / Receivables Turnover",
"name": "Days Sales Outstanding",
},
}
for key, ratio in ratios.items():
ratio["interpretation"] = self.interpret_ratio(key, ratio["value"])
self.results["efficiency"] = ratios
return ratios
def calculate_valuation(self) -> Dict[str, Any]:
"""Calculate valuation ratios (requires market data)."""
market_cap = self.market.get("market_cap", 0)
share_price = self.market.get("share_price", 0)
shares_outstanding = self.market.get("shares_outstanding", 0)
earnings_growth_rate = self.market.get("earnings_growth_rate", 0)
net_income = self.income.get("net_income", 0)
revenue = self.income.get("revenue", 0)
total_equity = self.balance.get("total_equity", 0)
total_debt = self.balance.get("total_debt", 0)
cash = self.balance.get("cash_and_equivalents", 0)
ebitda = self.income.get("ebitda", 0)
if market_cap == 0 and share_price > 0 and shares_outstanding > 0:
market_cap = share_price * shares_outstanding
eps = safe_divide(net_income, shares_outstanding)
book_value_per_share = safe_divide(total_equity, shares_outstanding)
enterprise_value = market_cap + total_debt - cash
pe = safe_divide(share_price, eps)
ratios = {
"pe_ratio": {
"value": pe,
"formula": "Share Price / Earnings Per Share",
"name": "Price-to-Earnings Ratio",
},
"pb_ratio": {
"value": safe_divide(share_price, book_value_per_share),
"formula": "Share Price / Book Value Per Share",
"name": "Price-to-Book Ratio",
},
"ps_ratio": {
"value": safe_divide(
market_cap, revenue
),
"formula": "Market Cap / Revenue",
"name": "Price-to-Sales Ratio",
},
"ev_ebitda": {
"value": safe_divide(enterprise_value, ebitda),
"formula": "Enterprise Value / EBITDA",
"name": "EV/EBITDA",
},
"peg_ratio": {
"value": safe_divide(pe, earnings_growth_rate * 100)
if earnings_growth_rate > 0
else 0.0,
"formula": "P/E Ratio / Earnings Growth Rate (%)",
"name": "PEG Ratio",
},
}
for key, ratio in ratios.items():
ratio["interpretation"] = self.interpret_ratio(key, ratio["value"])
self.results["valuation"] = ratios
return ratios
def calculate_all(self) -> Dict[str, Dict[str, Any]]:
"""Calculate all ratio categories."""
self.calculate_profitability()
self.calculate_liquidity()
self.calculate_leverage()
self.calculate_efficiency()
self.calculate_valuation()
return self.results
def interpret_ratio(self, ratio_key: str, value: float) -> str:
"""Interpret a ratio value against benchmarks."""
if value == 0.0:
return "Insufficient data to calculate"
benchmarks = self.BENCHMARKS.get(ratio_key)
if not benchmarks:
return "No benchmark available"
low, typical, high = benchmarks
# DSO is inverse - lower is better
if ratio_key == "dso":
if value <= low:
return "Excellent - collections much faster than peers"
elif value <= typical:
return "Good - collections within normal range"
elif value <= high:
return "Acceptable - monitor collection trends"
else:
return "Concern - collections significantly slower than peers"
# Debt-to-equity - lower generally better (but context matters)
if ratio_key == "debt_to_equity":
if value <= low:
return "Conservative leverage - strong equity position"
elif value <= typical:
return "Moderate leverage - well balanced"
elif value <= high:
return "Elevated leverage - monitor debt levels"
else:
return "High leverage - potential financial risk"
# Standard interpretation (higher is better for most ratios)
if value < low:
return "Below average - needs improvement"
elif value <= typical:
return "Acceptable - within normal range"
elif value <= high:
return "Good - above average performance"
else:
return "Excellent - significantly above peers"
@staticmethod
def format_ratio(value: float, is_percentage: bool = False) -> str:
"""Format a ratio value for display."""
if is_percentage:
return f"{value * 100:.1f}%"
return f"{value:.2f}"
def format_text(self, category: Optional[str] = None) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("FINANCIAL RATIO ANALYSIS")
lines.append("=" * 70)
categories = (
{category: self.results[category]}
if category and category in self.results
else self.results
)
percentage_ratios = {
"roe", "roa", "gross_margin", "operating_margin", "net_margin"
}
for cat_name, ratios in categories.items():
lines.append(f"\n--- {cat_name.upper()} ---")
for key, ratio in ratios.items():
is_pct = key in percentage_ratios
formatted = self.format_ratio(ratio["value"], is_pct)
lines.append(f" {ratio['name']}: {formatted}")
lines.append(f" Formula: {ratio['formula']}")
lines.append(f" Assessment: {ratio['interpretation']}")
lines.append("\n" + "=" * 70)
return "\n".join(lines)
def to_json(self, category: Optional[str] = None) -> Dict[str, Any]:
"""Return results as JSON-serializable dict."""
if category and category in self.results:
return {"category": category, "ratios": self.results[category]}
return {"categories": self.results}
def main() -> None:
"""Main entry point."""
parser = argparse.ArgumentParser(
description="Calculate and interpret financial ratios"
)
parser.add_argument(
"input_file",
help="Path to JSON file with financial statement data",
)
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
help="Output format (default: text)",
)
parser.add_argument(
"--category",
choices=[
"profitability",
"liquidity",
"leverage",
"efficiency",
"valuation",
],
default=None,
help="Calculate only a specific ratio category",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File '{args.input_file}' not found.", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.input_file}': {e}", file=sys.stderr)
sys.exit(1)
calculator = FinancialRatioCalculator(data)
if args.category:
method_map = {
"profitability": calculator.calculate_profitability,
"liquidity": calculator.calculate_liquidity,
"leverage": calculator.calculate_leverage,
"efficiency": calculator.calculate_efficiency,
"valuation": calculator.calculate_valuation,
}
method_map[args.category]()
else:
calculator.calculate_all()
if args.format == "json":
print(json.dumps(calculator.to_json(args.category), indent=2))
else:
print(calculator.format_text(args.category))
if __name__ == "__main__":
main()


@@ -1,6 +1,6 @@
{
"name": "marketing-skills",
"description": "5 production-ready marketing skills: content creator, demand generation, product marketing strategy, app store optimization, and social media analytics",
"description": "6 production-ready marketing skills: content creator, demand generation, product marketing strategy, app store optimization, social media analytics, and campaign analytics",
"version": "1.0.0",
"author": {
"name": "Alireza Rezvani",


@@ -1,6 +1,6 @@
# Marketing Skills - Claude Code Guidance
This guide covers the 3 production-ready marketing skills and their Python automation tools.
This guide covers the 4 production-ready marketing skills and their Python automation tools.
## Marketing Skills Overview
@@ -8,8 +8,9 @@ This guide covers the 3 production-ready marketing skills and their Python autom
1. **content-creator/** - Content creation, brand voice, SEO optimization (2 Python tools)
2. **marketing-demand-acquisition/** - Demand generation and customer acquisition (1 Python tool)
3. **marketing-strategy-pmm/** - Product marketing and go-to-market strategy
4. **campaign-analytics/** - Multi-touch attribution, funnel conversion analysis, campaign ROI calculation (3 Python tools)
**Total Tools:** 3 Python automation tools, 6+ knowledge bases, 10+ templates
**Total Tools:** 6 Python automation tools, 9+ knowledge bases, 15+ templates
## Python Automation Tools
@@ -108,6 +109,57 @@ Recommendations:
python marketing-demand-acquisition/scripts/demand_gen_analyzer.py campaign-data.csv
```
### Campaign Analytics Tools
#### 4. Attribution Analyzer (`campaign-analytics/scripts/attribution_analyzer.py`)
**Purpose:** Multi-touch attribution modeling across marketing channels
**Features:**
- Five attribution models (first-touch, last-touch, linear, time-decay, position-based)
- Configurable time-decay half-life
- Per-channel credit allocation and revenue attribution
- Conversion and non-conversion journey analysis
**Usage:**
```bash
python campaign-analytics/scripts/attribution_analyzer.py campaign_data.json
python campaign-analytics/scripts/attribution_analyzer.py campaign_data.json --model time-decay --half-life 14
python campaign-analytics/scripts/attribution_analyzer.py campaign_data.json --format json
```
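The five models reduce to simple credit-allocation rules over an ordered touchpoint journey. The sketch below illustrates them under the common definitions (the journey data is hypothetical, and `allocate_credit` is an illustrative helper, not the script's actual API):

```python
from collections import defaultdict

def allocate_credit(touchpoints, model="linear", half_life_days=7.0):
    """Split 1.0 unit of conversion credit across an ordered journey."""
    n = len(touchpoints)
    credit = defaultdict(float)
    if model == "first-touch":
        credit[touchpoints[0]["channel"]] += 1.0
    elif model == "last-touch":
        credit[touchpoints[-1]["channel"]] += 1.0
    elif model == "linear":
        for tp in touchpoints:
            credit[tp["channel"]] += 1.0 / n
    elif model == "time-decay":
        # Weight each touch by 0.5 ** (days before conversion / half-life)
        weights = [0.5 ** (tp["days_before_conversion"] / half_life_days)
                   for tp in touchpoints]
        total = sum(weights)
        for tp, w in zip(touchpoints, weights):
            credit[tp["channel"]] += w / total
    elif model == "position-based":
        # 40% to first touch, 40% to last, 20% split across the middle
        if n == 1:
            credit[touchpoints[0]["channel"]] += 1.0
        elif n == 2:
            credit[touchpoints[0]["channel"]] += 0.5
            credit[touchpoints[-1]["channel"]] += 0.5
        else:
            credit[touchpoints[0]["channel"]] += 0.4
            credit[touchpoints[-1]["channel"]] += 0.4
            for tp in touchpoints[1:-1]:
                credit[tp["channel"]] += 0.2 / (n - 2)
    return dict(credit)

journey = [
    {"channel": "paid_search", "days_before_conversion": 14},
    {"channel": "email", "days_before_conversion": 3},
    {"channel": "organic", "days_before_conversion": 0},
]
print(allocate_credit(journey, "position-based"))
```

Whatever the model, the credits for a converting journey sum to 1.0, so per-channel revenue attribution is just credit times conversion value.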
#### 5. Funnel Analyzer (`campaign-analytics/scripts/funnel_analyzer.py`)
**Purpose:** Conversion funnel analysis with bottleneck detection
**Features:**
- Stage-to-stage conversion rates and drop-off percentages
- Automatic bottleneck identification (largest absolute and relative drops)
- Overall funnel conversion rate
- Segment comparison when multiple segments provided
**Usage:**
```bash
python campaign-analytics/scripts/funnel_analyzer.py funnel_data.json
python campaign-analytics/scripts/funnel_analyzer.py funnel_data.json --format json
```
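The drop-off arithmetic behind bottleneck detection can be sketched in a few lines (stage names and counts are hypothetical, and `analyze_funnel` is an illustrative helper, not the script's API):

```python
def analyze_funnel(stages):
    """stages: ordered list of (name, count) pairs.
    Returns per-step conversion rates and the step with the largest relative drop."""
    steps = []
    for (name_a, a), (name_b, b) in zip(stages, stages[1:]):
        conv = b / a if a else 0.0
        steps.append({"from": name_a, "to": name_b,
                      "conversion_rate": conv, "drop_off": 1 - conv})
    bottleneck = max(steps, key=lambda s: s["drop_off"])
    overall = stages[-1][1] / stages[0][1] if stages[0][1] else 0.0
    return {"steps": steps, "bottleneck": bottleneck, "overall_conversion": overall}

funnel = [("visit", 10000), ("signup", 1200), ("activation", 600), ("purchase", 90)]
result = analyze_funnel(funnel)
print(result["bottleneck"]["from"], "->", result["bottleneck"]["to"])
```

A tool like this typically flags both the largest relative drop (shown here) and the largest absolute drop, since a small early-stage percentage loss can still cost more users than a big late-stage one.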
#### 6. Campaign ROI Calculator (`campaign-analytics/scripts/campaign_roi_calculator.py`)
**Purpose:** Calculate comprehensive campaign ROI metrics with benchmarking
**Features:**
- ROI, ROAS, CPA, CPL, CAC calculation
- CTR and conversion rate metrics
- Industry benchmark comparison
- Underperformance flagging
**Usage:**
```bash
python campaign-analytics/scripts/campaign_roi_calculator.py campaign_data.json
python campaign-analytics/scripts/campaign_roi_calculator.py campaign_data.json --format json
```
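The metrics themselves follow standard formulas, sketched here with hypothetical numbers (`campaign_roi` is illustrative; the script's input field names may differ):

```python
def campaign_roi(spend, revenue, conversions, leads, clicks, impressions):
    """Core campaign metrics; returns None where a denominator is zero."""
    def div(a, b):
        return a / b if b else None
    return {
        "roi": div(revenue - spend, spend),      # net return per dollar spent
        "roas": div(revenue, spend),             # gross revenue per dollar spent
        "cpa": div(spend, conversions),          # cost per acquisition
        "cpl": div(spend, leads),                # cost per lead
        "ctr": div(clicks, impressions),         # click-through rate
        "conversion_rate": div(conversions, clicks),
    }

m = campaign_roi(spend=5000, revenue=20000, conversions=100, leads=400,
                 clicks=2500, impressions=125000)
print(f"ROAS {m['roas']:.1f}x, CPA ${m['cpa']:.2f}, CTR {m['ctr']:.1%}")
```

Benchmarking then reduces to comparing each value against the channel/vertical ranges in `campaign-metrics-benchmarks.md` and flagging anything below range.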
## Knowledge Bases
### Content Creator References
@@ -134,6 +186,14 @@ python marketing-demand-acquisition/scripts/demand_gen_analyzer.py campaign-data
- Facebook: Conversational, engagement tactics
- TikTok: Short-form video, trending sounds
### Campaign Analytics References
**Location:** `campaign-analytics/references/`
1. **attribution-models-guide.md** - Deep dive into 5 attribution models with formulas, pros/cons, selection criteria
2. **campaign-metrics-benchmarks.md** - Industry benchmarks by channel and vertical for CTR, CPC, CPM, CPA, ROAS
3. **funnel-optimization-framework.md** - Stage-by-stage optimization strategies, common bottlenecks, best practices
## User Templates
### Content Creator Assets
@@ -182,6 +242,22 @@ cat content-creator/references/brand_guidelines.md
python marketing-demand-acquisition/scripts/demand_gen_analyzer.py campaign-results.csv
```
### Pattern 3: Campaign Performance Analysis
```bash
# 1. Analyze multi-touch attribution
python campaign-analytics/scripts/attribution_analyzer.py journey_data.json
# 2. Identify funnel bottlenecks
python campaign-analytics/scripts/funnel_analyzer.py funnel_data.json
# 3. Calculate campaign ROI
python campaign-analytics/scripts/campaign_roi_calculator.py campaign_data.json
# 4. Document findings using templates
# Reference: campaign-analytics/assets/campaign_report_template.md
```
## Development Commands
```bash
@@ -195,6 +271,11 @@ python content-creator/scripts/seo_optimizer.py article.md "main keyword" "secon
# Demand generation
python marketing-demand-acquisition/scripts/demand_gen_analyzer.py data.csv
# Campaign analytics
python campaign-analytics/scripts/attribution_analyzer.py campaign_data.json
python campaign-analytics/scripts/funnel_analyzer.py funnel_data.json
python campaign-analytics/scripts/campaign_roi_calculator.py campaign_data.json
```
## Quality Standards
@@ -208,18 +289,18 @@ python marketing-demand-acquisition/scripts/demand_gen_analyzer.py data.csv
## Roadmap
**Current (Phase 1-2):** 4 skills deployed
- ✅ Content creator (brand voice + SEO)
- ✅ Demand generation & acquisition
- ✅ Product marketing strategy
- ✅ Campaign analytics (attribution, funnel, ROI)
**Phase 3 (Q2 2026):** Marketing expansion
- SEO content optimizer (advanced)
- Social media manager (multi-platform)
- Campaign analytics dashboard
- Email marketing automation
**Phase 4 (Q3 2026):** Growth marketing
- Growth hacking frameworks
- Viral content analyzer
- Influencer collaboration tools
@@ -248,6 +329,6 @@ See `marketing_skills_roadmap.md` for detailed expansion plans.
---
**Last Updated:** February 2026
**Skills Deployed:** 4/4 marketing skills production-ready
**Total Tools:** 6 Python automation tools

View File

@@ -0,0 +1,216 @@
---
name: campaign-analytics
description: Analyzes campaign performance with multi-touch attribution, funnel conversion, and ROI calculation for marketing optimization
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: marketing
domain: campaign-analytics
updated: 2026-02-06
python-tools: attribution_analyzer.py, funnel_analyzer.py, campaign_roi_calculator.py
tech-stack: marketing-analytics, attribution-modeling
---
# Campaign Analytics
Production-grade campaign performance analysis with multi-touch attribution modeling, funnel conversion analysis, and ROI calculation. Three Python CLI tools provide deterministic, repeatable analytics using the standard library only -- no external dependencies, no API calls, no ML models.
---
## Table of Contents
- [Capabilities](#capabilities)
- [Input Requirements](#input-requirements)
- [Output Formats](#output-formats)
- [How to Use](#how-to-use)
- [Scripts](#scripts)
- [Reference Guides](#reference-guides)
- [Best Practices](#best-practices)
- [Limitations](#limitations)
---
## Capabilities
- **Multi-Touch Attribution**: Five attribution models (first-touch, last-touch, linear, time-decay, position-based) with configurable parameters
- **Funnel Conversion Analysis**: Stage-by-stage conversion rates, drop-off identification, bottleneck detection, and segment comparison
- **Campaign ROI Calculation**: ROI, ROAS, CPA, CPL, CAC metrics with industry benchmarking and underperformance flagging
- **A/B Test Support**: Templates for structured A/B test documentation and analysis
- **Channel Comparison**: Cross-channel performance comparison with normalized metrics
- **Executive Reporting**: Ready-to-use templates for campaign performance reports
---
## Input Requirements
All scripts accept a JSON file as a positional argument. See `assets/sample_campaign_data.json` for complete examples.
### Attribution Analyzer
```json
{
"journeys": [
{
"journey_id": "j1",
"touchpoints": [
{"channel": "organic_search", "timestamp": "2025-10-01T10:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-05T14:30:00", "interaction": "open"},
{"channel": "paid_search", "timestamp": "2025-10-08T09:15:00", "interaction": "click"}
],
"converted": true,
"revenue": 500.00
}
]
}
```
### Funnel Analyzer
```json
{
"funnel": {
"stages": ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
"counts": [10000, 5200, 2800, 1400, 420]
}
}
```
### Campaign ROI Calculator
```json
{
"campaigns": [
{
"name": "Spring Email Campaign",
"channel": "email",
"spend": 5000.00,
"revenue": 25000.00,
"impressions": 50000,
"clicks": 2500,
"leads": 300,
"customers": 45
}
]
}
```
---
## Output Formats
All scripts support two output formats via the `--format` flag:
- `--format text` (default): Human-readable tables and summaries for review
- `--format json`: Machine-readable JSON for integrations and pipelines
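The pattern behind this flag is a small argparse-plus-json skeleton. The sketch below is illustrative of the shared CLI shape, not the scripts' actual source; the names `build_parser` and `emit` are assumptions:

```python
import argparse
import json

def build_parser() -> argparse.ArgumentParser:
    # Shared shape: one positional JSON input plus the --format switch.
    parser = argparse.ArgumentParser(description="Campaign analytics tool")
    parser.add_argument("input", help="path to a JSON input file")
    parser.add_argument("--format", choices=["text", "json"], default="text",
                        help="text for human review, json for pipelines")
    return parser

def emit(results: dict, fmt: str) -> str:
    # Render the same result dict in either output mode.
    if fmt == "json":
        return json.dumps(results, indent=2)
    return "\n".join(f"{key}: {value}" for key, value in results.items())

# Example: render a small portfolio summary both ways.
summary = {"total_spend": 34000.0, "portfolio_roas": 2.91}
print(emit(summary, "text"))
print(emit(summary, "json"))
```

Because both modes render the same result dict, text output is always a faithful preview of the JSON a pipeline would receive.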
---
## How to Use
### Attribution Analysis
```bash
# Run all 5 attribution models
python scripts/attribution_analyzer.py campaign_data.json
# Run a specific model
python scripts/attribution_analyzer.py campaign_data.json --model time-decay
# JSON output for pipeline integration
python scripts/attribution_analyzer.py campaign_data.json --format json
# Custom time-decay half-life (default: 7 days)
python scripts/attribution_analyzer.py campaign_data.json --model time-decay --half-life 14
```
### Funnel Analysis
```bash
# Basic funnel analysis
python scripts/funnel_analyzer.py funnel_data.json
# JSON output
python scripts/funnel_analyzer.py funnel_data.json --format json
```
### Campaign ROI Calculation
```bash
# Calculate ROI metrics for all campaigns
python scripts/campaign_roi_calculator.py campaign_data.json
# JSON output
python scripts/campaign_roi_calculator.py campaign_data.json --format json
```
---
## Scripts
### 1. attribution_analyzer.py
Implements five industry-standard attribution models to allocate conversion credit across marketing channels:
| Model | Description | Best For |
|-------|-------------|----------|
| First-Touch | 100% credit to first interaction | Brand awareness campaigns |
| Last-Touch | 100% credit to last interaction | Direct response campaigns |
| Linear | Equal credit to all touchpoints | Balanced multi-channel evaluation |
| Time-Decay | More credit to recent touchpoints | Short sales cycles |
| Position-Based | 40/20/40 split (first/middle/last) | Full-funnel marketing |
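For intuition, the linear and position-based rules from the table can be sketched in a few lines. This is an illustrative reimplementation, not the script's source; the 50/50 handling of two-touch journeys is an assumption:

```python
def linear_credit(channels: list[str], revenue: float) -> dict[str, float]:
    # Equal share of revenue to every touchpoint; repeated channels accumulate.
    share = revenue / len(channels)
    credit: dict[str, float] = {}
    for channel in channels:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def position_based_credit(channels: list[str], revenue: float) -> dict[str, float]:
    # 40% to the first touch, 40% to the last, 20% split across the middle.
    credit: dict[str, float] = {}
    def add(channel: str, amount: float) -> None:
        credit[channel] = credit.get(channel, 0.0) + amount
    if len(channels) == 1:
        add(channels[0], revenue)
    elif len(channels) == 2:
        # Assumed handling: split 50/50 when there is no middle touchpoint.
        add(channels[0], revenue * 0.5)
        add(channels[1], revenue * 0.5)
    else:
        add(channels[0], revenue * 0.4)
        add(channels[-1], revenue * 0.4)
        for channel in channels[1:-1]:
            add(channel, revenue * 0.2 / (len(channels) - 2))
    return credit
```

For the sample journey `organic_search -> email -> paid_search` with $500 revenue, linear assigns roughly $166.67 per channel, while position-based assigns $200 / $100 / $200.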
### 2. funnel_analyzer.py
Analyzes conversion funnels to identify bottlenecks and optimization opportunities:
- Stage-to-stage conversion rates and drop-off percentages
- Automatic bottleneck identification (largest absolute and relative drops)
- Overall funnel conversion rate
- Segment comparison when multiple segments are provided
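The core computation can be sketched as follows; this is an illustrative version that consumes the same funnel shape as the input example above, not the shipped script:

```python
def analyze_funnel(stages: list[str], counts: list[int]) -> dict:
    # Per-transition conversion, plus the largest absolute and relative drops.
    steps = []
    for i in range(1, len(counts)):
        prev, cur = counts[i - 1], counts[i]
        steps.append({
            "transition": f"{stages[i - 1]} -> {stages[i]}",
            "conversion_pct": round(100 * cur / prev, 2) if prev else 0.0,
            "drop_off": prev - cur,
        })
    return {
        "steps": steps,
        "bottleneck_absolute": max(steps, key=lambda s: s["drop_off"])["transition"],
        "bottleneck_relative": min(steps, key=lambda s: s["conversion_pct"])["transition"],
        "overall_conversion_pct": round(100 * counts[-1] / counts[0], 2) if counts[0] else 0.0,
    }

result = analyze_funnel(
    ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
    [10000, 5200, 2800, 1400, 420],
)
print(result["bottleneck_absolute"], "|", result["bottleneck_relative"])
```

On the sample funnel this reports `Awareness -> Interest` as the absolute bottleneck (4,800 lost) and `Intent -> Purchase` as the relative one (30% step conversion), with 4.2% overall conversion.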
### 3. campaign_roi_calculator.py
Calculates comprehensive ROI metrics with industry benchmarking:
- **ROI**: Return on investment percentage
- **ROAS**: Return on ad spend ratio
- **CPA**: Cost per acquisition
- **CPL**: Cost per lead
- **CAC**: Customer acquisition cost
- **CTR**: Click-through rate
- **CVR**: Conversion rate (leads to customers)
- Flags underperforming campaigns against industry benchmarks
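The metric arithmetic reduces to a handful of ratios. A hedged sketch, assuming the same campaign fields as the input example; note that with only media spend available, CAC collapses to the same spend-per-customer ratio as CPA (the real distinction requires fully loaded costs):

```python
def roi_metrics(campaign: dict) -> dict:
    # Core ratios from one campaign record (spend, revenue, impressions,
    # clicks, leads, customers). None marks metrics with a zero denominator.
    spend, revenue = campaign["spend"], campaign["revenue"]
    leads, customers = campaign["leads"], campaign["customers"]
    return {
        "roi_pct": round(100 * (revenue - spend) / spend, 2) if spend else None,
        "roas": round(revenue / spend, 2) if spend else None,
        "cpl": round(spend / leads, 2) if leads else None,
        "cpa": round(spend / customers, 2) if customers else None,
        "ctr_pct": round(100 * campaign["clicks"] / campaign["impressions"], 2),
        "cvr_pct": round(100 * customers / leads, 2) if leads else None,
    }

spring = {"spend": 5000.0, "revenue": 25000.0, "impressions": 50000,
          "clicks": 2500, "leads": 300, "customers": 45}
print(roi_metrics(spring))
```

For the Spring Email Campaign data above, this yields 400% ROI, 5.0x ROAS, $16.67 CPL, $111.11 CPA, 5.0% CTR, and 15.0% lead-to-customer conversion.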
---
## Reference Guides
| Guide | Location | Purpose |
|-------|----------|---------|
| Attribution Models Guide | `references/attribution-models-guide.md` | Deep dive into 5 models with formulas, pros/cons, selection criteria |
| Campaign Metrics Benchmarks | `references/campaign-metrics-benchmarks.md` | Industry benchmarks by channel and vertical for CTR, CPC, CPM, CPA, ROAS |
| Funnel Optimization Framework | `references/funnel-optimization-framework.md` | Stage-by-stage optimization strategies, common bottlenecks, best practices |
---
## Best Practices
1. **Use multiple attribution models** -- No single model tells the full story. Compare at least 3 models to triangulate channel value.
2. **Set appropriate lookback windows** -- Match your time-decay half-life to your average sales cycle length.
3. **Segment your funnels** -- Always compare segments (channel, cohort, geography) to identify what drives best performance.
4. **Benchmark against your own history first** -- Industry benchmarks provide context, but your own historical data is the most relevant comparison.
5. **Run ROI analysis at regular intervals** -- Weekly for active campaigns, monthly for strategic review.
6. **Include all costs** -- Factor in creative, tooling, and labor costs alongside media spend for accurate ROI.
7. **Document A/B tests rigorously** -- Use the provided template to ensure statistical validity and clear decision criteria.
---
## Limitations
- **No statistical significance testing** -- A/B test analysis requires external tools for p-value calculations. Scripts provide descriptive metrics only.
- **Standard library only** -- No advanced statistical or data processing libraries. Suitable for most campaign sizes but not optimized for datasets exceeding 100K journeys.
- **Offline analysis** -- Scripts analyze static JSON snapshots. No real-time data connections or API integrations.
- **Single-currency** -- All monetary values are assumed to be in the same currency. No currency conversion support.
- **Simplified time-decay** -- Uses exponential decay based on configurable half-life. Does not account for weekday/weekend or seasonal patterns.
- **No cross-device tracking** -- Attribution operates on provided journey data as-is. Cross-device identity resolution must be handled upstream.

View File

@@ -0,0 +1,130 @@
# A/B Test Analysis
**Test Name:** [Descriptive test name]
**Test ID:** [Internal tracking ID]
**Date:** [Start Date] - [End Date]
**Status:** [Planning / Running / Complete / Inconclusive]
---
## Hypothesis
**If** [we change X],
**then** [Y will happen],
**because** [rationale based on data or insight].
---
## Test Design
| Parameter | Detail |
|-----------|--------|
| **Variable Tested** | [What is being changed] |
| **Control (A)** | [Description of control variant] |
| **Variant (B)** | [Description of test variant] |
| **Primary Metric** | [The main metric being measured] |
| **Secondary Metrics** | [Additional metrics to monitor] |
| **Traffic Split** | [50/50, 70/30, etc.] |
| **Minimum Sample Size** | [Required sample per variant for statistical significance] |
| **Minimum Detectable Effect** | [Smallest meaningful difference, e.g., 5% lift] |
| **Confidence Level** | [95% or 99%] |
| **Expected Duration** | [X days/weeks based on traffic and sample size] |
---
## Targeting
| Criterion | Value |
|-----------|-------|
| **Audience** | [Who sees the test] |
| **Channel** | [Where the test runs] |
| **Device** | [All / Desktop / Mobile] |
| **Geography** | [Regions included] |
| **Exclusions** | [Who is excluded and why] |
---
## Results
### Primary Metric: [Metric Name]
| Variant | Sample Size | Conversions | Rate | Lift vs Control |
|---------|------------|-------------|------|----------------|
| Control (A) | | | % | - |
| Variant (B) | | | % | % |
**Statistical Significance:** [Yes/No] at [X]% confidence
**P-value:** [X.XXX]
### Secondary Metrics
| Metric | Control (A) | Variant (B) | Lift | Significant? |
|--------|------------|-------------|------|-------------|
| [Metric 1] | | | % | [Yes/No] |
| [Metric 2] | | | % | [Yes/No] |
| [Metric 3] | | | % | [Yes/No] |
---
## Segment Analysis
| Segment | Control Rate | Variant Rate | Lift | Notes |
|---------|-------------|-------------|------|-------|
| Desktop | % | % | % | |
| Mobile | % | % | % | |
| New Visitors | % | % | % | |
| Returning Visitors | % | % | % | |
| [Custom Segment] | % | % | % | |
---
## Revenue Impact Estimate
| Metric | Value |
|--------|-------|
| **Projected Annual Lift** | [X]% |
| **Projected Additional Revenue** | $[X] |
| **Projected Additional Conversions** | [X] |
| **Confidence in Estimate** | [High/Medium/Low] |
---
## Decision
**Winner:** [Control / Variant / Inconclusive]
**Rationale:** [Why this decision was made, citing specific metrics and statistical significance]
**Implementation Plan:**
- [ ] [Step 1: e.g., Roll out variant to 100% of traffic]
- [ ] [Step 2: e.g., Update creative assets across campaigns]
- [ ] [Step 3: e.g., Monitor for X days post-implementation]
- [ ] [Step 4: e.g., Document learnings in knowledge base]
---
## Learnings
**What we learned:**
1. [Key learning 1]
2. [Key learning 2]
3. [Key learning 3]
**Follow-up tests to consider:**
1. [Next test idea based on results]
2. [Next test idea based on results]
---
## Quality Checks
- [ ] Sample size reached minimum threshold
- [ ] Test ran for at least 1 full business cycle (7 days minimum)
- [ ] No external factors (holidays, outages, promotions) affected results
- [ ] Segments were balanced between variants
- [ ] No sample ratio mismatch (SRM) detected
- [ ] Results reviewed by at least 2 team members
---
*Template from campaign-analytics skill. Statistical significance calculations require external tools (e.g., online calculators or scipy).*
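---

Where scipy is unavailable, a pooled two-proportion z-test can be computed with the standard library alone. This hedged sketch is a convenience outside the template itself, and assumes samples large enough for the normal approximation:

```python
import math

def two_proportion_z(conversions_a: int, n_a: int,
                     conversions_b: int, n_b: int) -> tuple[float, float]:
    # Pooled two-proportion z-test; returns (z, two-sided p-value).
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided tail probability.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(100, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A 10% vs 15% conversion rate on 1,000 visitors per variant gives z ≈ 3.38, p ≈ 0.0007, significant at 95% confidence.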

View File

@@ -0,0 +1,141 @@
# Campaign Performance Report
**Report Period:** [Start Date] - [End Date]
**Prepared By:** [Name]
**Date:** [Report Date]
---
## Executive Summary
[2-3 sentence summary of overall campaign performance, key wins, and areas of concern.]
---
## Portfolio Overview
| Metric | This Period | Previous Period | Change |
|--------|-----------|----------------|--------|
| Total Spend | $ | $ | % |
| Total Revenue | $ | $ | % |
| Total Profit | $ | $ | % |
| Portfolio ROI | % | % | pp |
| Portfolio ROAS | x | x | % |
| Total Leads | | | % |
| Total Customers | | | % |
| Blended CPA | $ | $ | % |
| Blended CPL | $ | $ | % |
---
## Channel Performance
| Channel | Spend | Revenue | ROI | ROAS | CPA | Leads | Customers |
|---------|-------|---------|-----|------|-----|-------|-----------|
| Email | $ | $ | % | x | $ | | |
| Paid Search | $ | $ | % | x | $ | | |
| Paid Social | $ | $ | % | x | $ | | |
| Display | $ | $ | % | x | $ | | |
| Organic | $ | $ | % | x | $ | | |
| **Total** | **$** | **$** | **%** | **x** | **$** | | |
---
## Top Performing Campaigns
### 1. [Campaign Name]
- **Channel:** [Channel]
- **Spend:** $[Amount] | **Revenue:** $[Amount] | **ROI:** [X]%
- **Key Success Factor:** [What made this campaign successful]
### 2. [Campaign Name]
- **Channel:** [Channel]
- **Spend:** $[Amount] | **Revenue:** $[Amount] | **ROI:** [X]%
- **Key Success Factor:** [What made this campaign successful]
### 3. [Campaign Name]
- **Channel:** [Channel]
- **Spend:** $[Amount] | **Revenue:** $[Amount] | **ROI:** [X]%
- **Key Success Factor:** [What made this campaign successful]
---
## Underperforming Campaigns
### [Campaign Name]
- **Channel:** [Channel]
- **Issue:** [Description of underperformance]
- **Benchmark Comparison:** [How it compares to benchmarks]
- **Recommended Action:** [Specific action to take]
### [Campaign Name]
- **Channel:** [Channel]
- **Issue:** [Description of underperformance]
- **Benchmark Comparison:** [How it compares to benchmarks]
- **Recommended Action:** [Specific action to take]
---
## Attribution Analysis
| Channel | First-Touch | Last-Touch | Linear | Time-Decay | Position-Based |
|---------|------------|------------|--------|------------|----------------|
| [Channel 1] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Channel 2] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Channel 3] | $[X] | $[X] | $[X] | $[X] | $[X] |
**Key Insight:** [What does the attribution analysis tell us about channel value that single-model analysis would miss?]
---
## Funnel Analysis
| Stage | Count | Conversion Rate | Drop-off | vs. Previous Period |
|-------|-------|----------------|----------|-------------------|
| Awareness | | - | - | % |
| Interest | | % | % | pp |
| Consideration | | % | % | pp |
| Intent | | % | % | pp |
| Purchase | | % | % | pp |
**Overall Funnel Conversion:** [X]%
**Primary Bottleneck:** [Stage transition with largest drop-off]
**Recommended Focus:** [What to optimize next]
---
## Budget Allocation Recommendations
Based on this period's performance data:
| Channel | Current Allocation | Recommended Allocation | Rationale |
|---------|-------------------|----------------------|-----------|
| [Channel] | [X]% ($[X]) | [X]% ($[X]) | [Reason] |
| [Channel] | [X]% ($[X]) | [X]% ($[X]) | [Reason] |
| [Channel] | [X]% ($[X]) | [X]% ($[X]) | [Reason] |
---
## Action Items
| Priority | Action | Owner | Deadline | Expected Impact |
|----------|--------|-------|----------|----------------|
| High | [Action] | [Name] | [Date] | [Impact] |
| High | [Action] | [Name] | [Date] | [Impact] |
| Medium | [Action] | [Name] | [Date] | [Impact] |
| Low | [Action] | [Name] | [Date] | [Impact] |
---
## Next Period Goals
| Metric | Current | Target | Strategy |
|--------|---------|--------|----------|
| Portfolio ROI | [X]% | [X]% | [How] |
| ROAS | [X]x | [X]x | [How] |
| CPA | $[X] | $[X] | [How] |
| Lead Volume | [X] | [X] | [How] |
---
*Report generated using campaign-analytics toolkit. Data source: [Source system/platform].*

View File

@@ -0,0 +1,158 @@
# Channel Performance Comparison
**Period:** [Start Date] - [End Date]
**Compared Against:** [Previous period / Industry benchmarks / Both]
**Prepared By:** [Name]
---
## Summary
[1-2 sentence overview: which channels are performing best, which need attention, and the overall channel mix health.]
---
## Channel Scorecard
| Channel | Spend | Revenue | Profit | ROI | ROAS | CTR | CPA | CPL | Grade |
|---------|-------|---------|--------|-----|------|-----|-----|-----|-------|
| Email | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Paid Search | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Paid Social | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Display | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Organic Search | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Organic Social | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Referral | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Direct | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| **Total** | **$** | **$** | **$** | **%** | **x** | **%** | **$** | **$** | |
**Grading Scale:**
- A: Exceeds all benchmarks
- B: Meets or exceeds target benchmarks
- C: Between low and target benchmarks
- D: Below low benchmark on 1+ key metrics
- F: Underperforming on multiple metrics or unprofitable
---
## Channel Deep Dives
### [Channel Name]
**Performance Summary:** [1-2 sentences]
| Metric | Actual | Target | Benchmark | vs. Target | vs. Benchmark |
|--------|--------|--------|-----------|-----------|---------------|
| Spend | $ | $ | - | % | - |
| Revenue | $ | $ | - | % | - |
| ROI | % | % | % | pp | pp |
| ROAS | x | x | x | % | % |
| CTR | % | % | % | pp | pp |
| CPA | $ | $ | $ | % | % |
| CPL | $ | $ | $ | % | % |
| CPC | $ | $ | $ | % | % |
**Trend (Last 3 Periods):**
| Period | Spend | Revenue | ROI | ROAS | Key Event |
|--------|-------|---------|-----|------|-----------|
| [Period 1] | $ | $ | % | x | [Note] |
| [Period 2] | $ | $ | % | x | [Note] |
| [Current] | $ | $ | % | x | [Note] |
**Assessment:** [Improving / Stable / Declining]
**Action Items:**
1. [Specific action for this channel]
2. [Specific action for this channel]
---
[Repeat deep dive section for each channel]
---
## Attribution View
How each channel is valued under different attribution models:
| Channel | First-Touch | Last-Touch | Linear | Time-Decay | Position-Based |
|---------|------------|------------|--------|------------|----------------|
| [Channel 1] | $ (X%) | $ (X%) | $ (X%) | $ (X%) | $ (X%) |
| [Channel 2] | $ (X%) | $ (X%) | $ (X%) | $ (X%) | $ (X%) |
| [Channel 3] | $ (X%) | $ (X%) | $ (X%) | $ (X%) | $ (X%) |
**Insight:** [Which channels are over/undervalued by single-touch models?]
---
## Funnel Performance by Channel
| Stage | [Ch 1] | [Ch 2] | [Ch 3] | [Ch 4] | Overall |
|-------|--------|--------|--------|--------|---------|
| Awareness | [Count] | [Count] | [Count] | [Count] | [Count] |
| Interest | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Consideration | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Intent | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Purchase | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| **Overall** | **[Rate]%** | **[Rate]%** | **[Rate]%** | **[Rate]%** | **[Rate]%** |
**Best Funnel:** [Channel with highest overall conversion rate]
**Biggest Bottleneck:** [Channel + stage transition with worst drop-off]
---
## Budget Allocation Analysis
### Current vs. Optimal Allocation
| Channel | Current % | Current $ | Recommended % | Recommended $ | Rationale |
|---------|----------|-----------|--------------|---------------|-----------|
| [Channel] | % | $ | % | $ | [Why] |
| [Channel] | % | $ | % | $ | [Why] |
| [Channel] | % | $ | % | $ | [Why] |
| [Channel] | % | $ | % | $ | [Why] |
| **Total** | **100%** | **$** | **100%** | **$** | |
### Reallocation Impact Estimate
| Scenario | Projected Revenue | Projected ROI | Change vs Current |
|----------|------------------|---------------|-------------------|
| Current allocation | $ | % | - |
| Recommended allocation | $ | % | +% |
| Aggressive growth | $ | % | +% |
| Cost optimization | $ | % | +% |
---
## Competitive Context
| Metric | Our Performance | Industry Average | Gap |
|--------|----------------|-----------------|-----|
| Channel Mix Diversity | [X channels active] | [X channels] | |
| Overall ROAS | [X]x | [X]x | |
| Paid vs Organic Split | [X/X]% | [X/X]% | |
| Digital vs Traditional | [X/X]% | [X/X]% | |
---
## Recommendations
### Immediate Actions (This Week)
1. **[Action]** -- [Expected impact], [Owner]
2. **[Action]** -- [Expected impact], [Owner]
### Short-Term (This Month)
1. **[Action]** -- [Expected impact], [Owner]
2. **[Action]** -- [Expected impact], [Owner]
### Strategic (This Quarter)
1. **[Action]** -- [Expected impact], [Owner]
2. **[Action]** -- [Expected impact], [Owner]
---
*Template from campaign-analytics skill. Populate with data from attribution_analyzer.py, funnel_analyzer.py, and campaign_roi_calculator.py.*

View File

@@ -0,0 +1,110 @@
{
"_description": "Expected output from running the 3 scripts against sample_campaign_data.json with --format json",
"attribution_analyzer": {
"_command": "python scripts/attribution_analyzer.py assets/sample_campaign_data.json --format json",
"summary": {
"total_journeys": 8,
"converted_journeys": 6,
"conversion_rate": 75.0,
"total_revenue": 3700.0,
"channels_observed": [
"direct", "display", "email", "organic_search",
"organic_social", "paid_search", "paid_social", "referral"
]
},
"models": {
"first-touch": {
"organic_search": 700.0,
"paid_social": 1200.0,
"display": 350.0,
"organic_social": 800.0,
"referral": 650.0
},
"last-touch": {
"paid_search": 1500.0,
"direct": 2000.0,
"organic_search": 200.0
},
"linear": {
"organic_search": 666.67,
"email": 1003.33,
"paid_search": 718.33,
"paid_social": 300.0,
"direct": 460.0,
"display": 175.0,
"organic_social": 160.0,
"referral": 216.67
},
"time-decay": {
"organic_search": 582.38,
"email": 1053.68,
"paid_search": 881.03,
"paid_social": 178.4,
"direct": 638.82,
"display": 140.62,
"organic_social": 78.48,
"referral": 146.59
},
"position-based": {
"organic_search": 520.0,
"paid_search": 688.33,
"email": 456.67,
"paid_social": 480.0,
"direct": 800.0,
"display": 175.0,
"organic_social": 320.0,
"referral": 260.0
}
}
},
"funnel_analyzer": {
"_command": "python scripts/funnel_analyzer.py assets/sample_campaign_data.json --format json",
"_note": "Uses segment comparison mode since 'segments' key is present in the data",
"rankings": [
{"rank": 1, "segment": "organic", "overall_conversion_rate": 5.6, "total_entries": 5000, "total_conversions": 280},
{"rank": 2, "segment": "paid", "overall_conversion_rate": 3.0, "total_entries": 3000, "total_conversions": 90},
{"rank": 3, "segment": "email", "overall_conversion_rate": 2.5, "total_entries": 2000, "total_conversions": 50}
],
"key_findings": {
"all_segments_bottleneck_absolute": "Awareness -> Interest",
"all_segments_bottleneck_relative": "Intent -> Purchase",
"best_performing_segment": "organic (5.6% overall conversion)",
"worst_performing_segment": "email (2.5% overall conversion)"
}
},
"campaign_roi_calculator": {
"_command": "python scripts/campaign_roi_calculator.py assets/sample_campaign_data.json --format json",
"portfolio_summary": {
"total_campaigns": 5,
"total_spend": 34000.0,
"total_revenue": 99000.0,
"total_profit": 65000.0,
"portfolio_roi_pct": 191.18,
"portfolio_roas": 2.91,
"blended_ctr_pct": 1.04,
"blended_cpl": 27.64,
"blended_cpa": 161.9,
"top_performer": "Spring Email Campaign",
"underperforming_campaigns": [
"Spring Email Campaign",
"Facebook Awareness Q1",
"LinkedIn B2B Outreach"
]
},
"channel_summary": {
"email": {"spend": 5000.0, "revenue": 25000.0, "roi_pct": 400.0, "roas": 5.0},
"paid_search": {"spend": 12000.0, "revenue": 48000.0, "roi_pct": 300.0, "roas": 4.0},
"paid_social": {"spend": 14000.0, "revenue": 17000.0, "roi_pct": 21.43, "roas": 1.21},
"display": {"spend": 3000.0, "revenue": 9000.0, "roi_pct": 200.0, "roas": 3.0}
},
"key_findings": {
"most_profitable_channel": "paid_search ($36,000 profit)",
"highest_roas_channel": "email (5.0x ROAS)",
"unprofitable_campaign": "LinkedIn B2B Outreach (-$1,000 loss)",
"best_ctr": "Spring Email Campaign (5.0%)"
}
}
}

View File

@@ -0,0 +1,151 @@
{
"journeys": [
{
"journey_id": "j001",
"touchpoints": [
{"channel": "organic_search", "timestamp": "2025-10-01T10:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-05T14:30:00", "interaction": "open"},
{"channel": "paid_search", "timestamp": "2025-10-08T09:15:00", "interaction": "click"}
],
"converted": true,
"revenue": 500.00
},
{
"journey_id": "j002",
"touchpoints": [
{"channel": "paid_social", "timestamp": "2025-10-02T11:00:00", "interaction": "click"},
{"channel": "organic_search", "timestamp": "2025-10-06T16:45:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-09T08:00:00", "interaction": "click"},
{"channel": "direct", "timestamp": "2025-10-10T13:20:00", "interaction": "visit"}
],
"converted": true,
"revenue": 1200.00
},
{
"journey_id": "j003",
"touchpoints": [
{"channel": "display", "timestamp": "2025-10-03T09:30:00", "interaction": "view"},
{"channel": "paid_search", "timestamp": "2025-10-07T10:00:00", "interaction": "click"}
],
"converted": true,
"revenue": 350.00
},
{
"journey_id": "j004",
"touchpoints": [
{"channel": "organic_social", "timestamp": "2025-10-01T08:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-04T12:00:00", "interaction": "click"},
{"channel": "paid_search", "timestamp": "2025-10-08T14:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-11T09:00:00", "interaction": "click"},
{"channel": "direct", "timestamp": "2025-10-12T16:00:00", "interaction": "visit"}
],
"converted": true,
"revenue": 800.00
},
{
"journey_id": "j005",
"touchpoints": [
{"channel": "paid_social", "timestamp": "2025-10-05T10:00:00", "interaction": "click"},
{"channel": "display", "timestamp": "2025-10-08T11:30:00", "interaction": "view"}
],
"converted": false,
"revenue": 0
},
{
"journey_id": "j006",
"touchpoints": [
{"channel": "referral", "timestamp": "2025-10-06T14:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-10T09:30:00", "interaction": "click"},
{"channel": "paid_search", "timestamp": "2025-10-13T11:00:00", "interaction": "click"}
],
"converted": true,
"revenue": 650.00
},
{
"journey_id": "j007",
"touchpoints": [
{"channel": "organic_search", "timestamp": "2025-10-04T08:30:00", "interaction": "click"}
],
"converted": true,
"revenue": 200.00
},
{
"journey_id": "j008",
"touchpoints": [
{"channel": "paid_social", "timestamp": "2025-10-07T13:00:00", "interaction": "click"},
{"channel": "organic_search", "timestamp": "2025-10-09T10:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-12T15:00:00", "interaction": "click"}
],
"converted": false,
"revenue": 0
}
],
"funnel": {
"stages": ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
"counts": [10000, 5200, 2800, 1400, 420]
},
"segments": {
"organic": {
"counts": [5000, 2800, 1600, 850, 280]
},
"paid": {
"counts": [3000, 1500, 750, 350, 90]
},
"email": {
"counts": [2000, 900, 450, 200, 50]
}
},
"stages": ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
"campaigns": [
{
"name": "Spring Email Campaign",
"channel": "email",
"spend": 5000.00,
"revenue": 25000.00,
"impressions": 50000,
"clicks": 2500,
"leads": 300,
"customers": 45
},
{
"name": "Google Search - Brand",
"channel": "paid_search",
"spend": 12000.00,
"revenue": 48000.00,
"impressions": 200000,
"clicks": 8000,
"leads": 600,
"customers": 120
},
{
"name": "Facebook Awareness Q1",
"channel": "paid_social",
"spend": 8000.00,
"revenue": 12000.00,
"impressions": 500000,
"clicks": 5000,
"leads": 200,
"customers": 25
},
{
"name": "Display Retargeting",
"channel": "display",
"spend": 3000.00,
"revenue": 9000.00,
"impressions": 800000,
"clicks": 1200,
"leads": 80,
"customers": 15
},
{
"name": "LinkedIn B2B Outreach",
"channel": "paid_social",
"spend": 6000.00,
"revenue": 5000.00,
"impressions": 120000,
"clicks": 600,
"leads": 50,
"customers": 5
}
]
}

View File

@@ -0,0 +1,285 @@
# Attribution Models Guide
Comprehensive reference for multi-touch attribution modeling in marketing analytics. This guide covers the five standard attribution models, their mathematical foundations, selection criteria, and practical application guidelines.
---
## Overview
Attribution modeling answers the question: **Which marketing touchpoints deserve credit for conversions?** When a customer interacts with multiple channels before converting, attribution models distribute conversion credit across those touchpoints using different rules.
No single model is "correct." Each reveals different aspects of channel performance. Best practice is to run multiple models and compare results to build a complete picture.
---
## Model 1: First-Touch Attribution
### How It Works
All conversion credit (100%) goes to the first touchpoint in the customer journey.
### Formula
```
Credit(channel) = Revenue * 1.0 (if channel is first touchpoint)
Credit(channel) = 0 (otherwise)
```
### When to Use
- **Brand awareness campaigns**: Measures which channels bring new prospects into the funnel
- **Top-of-funnel optimization**: Identifies the best channels for initial discovery
- **New market entry**: Evaluating which channels generate first contact in new segments
### Pros
- Simple to understand and implement
- Clearly identifies awareness-driving channels
- Useful for budget allocation toward customer acquisition
### Cons
- Ignores all touchpoints after the first
- Overvalues awareness channels, undervalues conversion channels
- Does not reflect the reality of multi-touch customer journeys
### Best For
Marketing teams focused on expanding reach and entering new markets where understanding initial discovery channels is the priority.
---
## Model 2: Last-Touch Attribution
### How It Works
All conversion credit (100%) goes to the last touchpoint before conversion.
### Formula
```
Credit(channel) = Revenue * 1.0 (if channel is last touchpoint)
Credit(channel) = 0 (otherwise)
```
### When to Use
- **Direct response campaigns**: Measures which channels close deals
- **Bottom-of-funnel optimization**: Identifies the most effective conversion channels
- **Short sales cycles**: When customers typically convert within 1-2 interactions
### Pros
- Simple to implement (default in many analytics platforms)
- Highlights channels that directly drive conversions
- Useful for performance marketing optimization
### Cons
- Ignores all touchpoints before the last
- Overvalues conversion channels, undervalues awareness channels
- Can lead to cutting awareness spending that actually feeds the pipeline
### Best For
Performance marketing teams running direct-response campaigns where the final interaction is the primary lever.
---
## Model 3: Linear Attribution
### How It Works
Conversion credit is split equally across all touchpoints in the journey.
### Formula
```
Credit(channel) = Revenue / N (for each of N touchpoints)
```
### When to Use
- **Balanced multi-channel evaluation**: When all touchpoints are considered equally valuable
- **Long sales cycles**: Where multiple interactions are required
- **Content marketing**: Where each piece of content plays a role in nurturing
### Pros
- Fair distribution across all channels
- Recognizes the contribution of every touchpoint
- Good starting point for teams new to multi-touch attribution
### Cons
- Treats all touchpoints equally, which rarely reflects reality
- Does not account for the relative importance of different positions in the journey
- Can dilute the signal of truly impactful touchpoints
### Best For
Teams running consistent multi-channel campaigns where every touchpoint is intentionally designed to contribute to conversion.
---
## Model 4: Time-Decay Attribution
### How It Works
Touchpoints closer to conversion receive exponentially more credit. Uses a half-life parameter: a touchpoint occurring one half-life before conversion gets 50% of the credit of the converting touchpoint.
### Formula
```
Weight(touchpoint) = e^(-lambda * days_before_conversion)
where lambda = ln(2) / half_life_days
Credit(channel) = Revenue * (Weight / Sum_of_all_weights)
```
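The decay math above can be sketched with the standard library alone (the function name here is illustrative, not part of the bundled attribution_analyzer tool):

```python
import math

def time_decay_credits(days_before_conversion, revenue, half_life_days=7.0):
    """Split revenue across touchpoints using exponential time decay."""
    lam = math.log(2) / half_life_days
    weights = [math.exp(-lam * d) for d in days_before_conversion]
    total = sum(weights)
    return [revenue * w / total for w in weights]

# Touches 14, 7, and 0 days before a $100 conversion (half-life = 7 days)
# carry weights 0.25 : 0.5 : 1.0, i.e. roughly $14.29 / $28.57 / $57.14.
credits = time_decay_credits([14, 7, 0], 100.0)
```

Note that doubling the half-life flattens the curve toward linear attribution, while shrinking it approaches last-touch.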
### Configurable Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| half_life_days | 7 | Days for weight to decay by 50% |
### Guidance on Half-Life Selection
| Sales Cycle Length | Recommended Half-Life |
|-------------------|----------------------|
| 1-3 days (impulse) | 1-2 days |
| 1-2 weeks (considered) | 5-7 days |
| 1-3 months (B2B) | 14-21 days |
| 3-6 months (enterprise) | 30-45 days |
| 6-12 months (complex B2B) | 60-90 days |
### When to Use
- **Short-to-medium sales cycles**: Where recent interactions are more influential
- **Promotional campaigns**: Where urgency and recency matter
- **E-commerce**: Where the last few interactions before purchase are most impactful
### Pros
- Accounts for recency, which aligns with many buying behaviors
- More sophisticated than first/last-touch
- Configurable half-life allows tuning to specific business contexts
### Cons
- May undervalue early-stage awareness that planted the seed
- Half-life selection is subjective and requires testing
- More complex to explain to stakeholders
### Best For
E-commerce and B2C companies with identifiable sales cycles where recent interactions carry more decision weight.
---
## Model 5: Position-Based Attribution (U-Shaped)
### How It Works
40% of credit goes to the first touchpoint, 40% to the last touchpoint, and the remaining 20% is split equally among middle touchpoints.
### Formula
```
Credit(first_channel) = Revenue * 0.40
Credit(last_channel) = Revenue * 0.40
Credit(middle_channel) = Revenue * 0.20 / (N - 2) (for each middle touchpoint)
Special cases:
- 1 touchpoint: 100% credit
- 2 touchpoints: 50% each
```
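A minimal sketch of the split, including both special cases (illustrative; the bundled tool has its own implementation):

```python
def position_based_credits(channels, revenue):
    """40% to first, 40% to last, 20% split equally among the middle."""
    n = len(channels)
    if n == 1:
        shares = [revenue]
    elif n == 2:
        shares = [revenue * 0.5, revenue * 0.5]
    else:
        middle = revenue * 0.20 / (n - 2)
        shares = [revenue * 0.40] + [middle] * (n - 2) + [revenue * 0.40]
    credits = {}
    for channel, share in zip(channels, shares):
        credits[channel] = credits.get(channel, 0.0) + share
    return credits

# Four-touch journey worth $1,000: search and direct get $400 each,
# email and social split the remaining $200.
result = position_based_credits(["search", "email", "social", "direct"], 1000.0)
```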
### When to Use
- **Full-funnel marketing**: Values both awareness (first) and conversion (last)
- **Mature marketing programs**: With established multi-channel strategies
- **B2B marketing**: Where both lead generation and deal closure are distinct priorities
### Pros
- Recognizes the importance of first and last interactions
- Still gives credit to middle nurturing touchpoints
- Provides a balanced view of the full journey
### Cons
- The 40/20/40 split is arbitrary (some businesses may need 30/40/30 or other splits)
- Middle touchpoints get relatively little credit
- May not suit businesses where middle interactions are the primary differentiator
### Best For
B2B and enterprise marketing teams running coordinated campaigns across the full customer journey from awareness through conversion.
---
## Model Comparison Matrix
| Criteria | First-Touch | Last-Touch | Linear | Time-Decay | Position-Based |
|----------|------------|------------|--------|------------|----------------|
| Complexity | Low | Low | Low | Medium | Medium |
| Awareness bias | High | None | Neutral | Low | Medium |
| Conversion bias | None | High | Neutral | High | Medium |
| Multi-touch fairness | Poor | Poor | Good | Good | Good |
| Best sales cycle | Any | Short | Long | Short-Medium | Any |
| Stakeholder clarity | High | High | High | Medium | Medium |
---
## Practical Guidelines
### Running Multiple Models
Always run at least 3 models and look for channels that rank highly across multiple models. These are your most reliable performers. Channels that rank well in only one model may be overvalued by that model's bias.
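One way to operationalize this check, assuming you already have one credit dictionary per model (for example, parsed from the attribution_analyzer tool's JSON output):

```python
def consistent_top_channels(model_results, top_n=3):
    """Channels that appear in the top N of every attribution model."""
    top_sets = [
        set(sorted(credits, key=credits.get, reverse=True)[:top_n])
        for credits in model_results.values()
    ]
    return set.intersection(*top_sets) if top_sets else set()

# Illustrative per-model revenue credits:
models = {
    "first-touch": {"search": 500, "email": 100, "social": 400},
    "last-touch":  {"email": 600, "search": 300, "social": 100},
    "linear":      {"search": 350, "email": 330, "social": 320},
}
reliable = consistent_top_channels(models, top_n=2)  # only "search" is top-2 everywhere
```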
### Interpreting Divergent Results
When models disagree significantly on a channel's value:
1. **High in first-touch, low in last-touch**: The channel is strong for awareness but does not close. Pair it with stronger conversion channels.
2. **Low in first-touch, high in last-touch**: The channel closes deals but does not generate new prospects. Ensure upstream awareness channels feed it.
3. **High in linear, low in first/last**: The channel plays a critical nurturing role. Cutting it may break the journey without immediately visible impact.
### Common Pitfalls
- **Over-relying on last-touch**: Most analytics platforms default to last-touch, which chronically undervalues awareness spending.
- **Ignoring non-converting journeys**: Attribution only counts converted journeys. Channels that contribute to unconverted journeys may still have value.
- **Confusing correlation with causation**: Attribution shows correlation between touchpoints and conversion, not definitive causation.
- **Insufficient data volume**: Models require statistically meaningful journey counts. With fewer than 100 journeys, results are unreliable.
---
## Data Requirements
### Minimum Data
| Field | Required | Description |
|-------|----------|-------------|
| journey_id | Yes | Unique identifier for each customer journey |
| touchpoints | Yes | Array of channel interactions with timestamps |
| converted | Yes | Boolean indicating whether the journey converted |
| revenue | Recommended | Conversion value for credit allocation |
### Touchpoint Fields
| Field | Required | Description |
|-------|----------|-------------|
| channel | Yes | Marketing channel name |
| timestamp | Yes | ISO-format timestamp of the interaction |
| interaction | Optional | Type of interaction (click, view, open, etc.) |
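A minimal input record combining the fields above, assuming the analyzer reads a JSON array of journey objects (values are illustrative):

```json
[
  {
    "journey_id": "j-001",
    "converted": true,
    "revenue": 1200.0,
    "touchpoints": [
      {"channel": "organic_search", "timestamp": "2024-03-01T10:00:00", "interaction": "click"},
      {"channel": "email", "timestamp": "2024-03-08T09:30:00", "interaction": "open"},
      {"channel": "paid_search", "timestamp": "2024-03-10T14:00:00", "interaction": "click"}
    ]
  }
]
```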
---
## Further Reading
- Google Analytics attribution model comparison documentation
- Facebook/Meta attribution window settings and their impact
- HubSpot multi-touch revenue attribution methodology
- Bizible/Marketo B2B attribution best practices


@@ -0,0 +1,259 @@
# Campaign Metrics Benchmarks
Industry benchmark reference for marketing campaign performance metrics. Use these benchmarks to contextualize your campaign results, identify underperformance, and set realistic targets.
---
## How to Use This Reference
1. Find your industry vertical and channel combination
2. Compare your actual metrics to the benchmark ranges
3. Use the assessment scale: Below Low = underperforming, Low-Target = below target, Target-High = good, Above High = excellent
4. Adjust targets based on your historical performance (your own data is always the best benchmark)
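That assessment scale translates directly into code (a sketch; thresholds come from the benchmark tables below):

```python
def assess(value, low, target, high):
    """Classify a metric against its (low, target, high) benchmark band."""
    if value < low:
        return "underperforming"
    if value < target:
        return "below target"
    if value <= high:
        return "good"
    return "excellent"

# An email CTR of 3.1% against the 1.0% / 2.5% / 5.0% band:
band = assess(3.1, low=1.0, target=2.5, high=5.0)  # "good"
```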
---
## Click-Through Rate (CTR) Benchmarks
CTR = (Clicks / Impressions) * 100
### By Channel (Cross-Industry Average)
| Channel | Low | Target | High | Notes |
|---------|-----|--------|------|-------|
| Email | 1.0% | 2.5% | 5.0% | Highly dependent on list quality and segmentation |
| Paid Search (Google) | 1.5% | 3.5% | 7.0% | Brand keywords typically 5-10%, generic 1-3% |
| Paid Social (Facebook) | 0.5% | 1.2% | 3.0% | Video ads trend higher, static lower |
| Paid Social (LinkedIn) | 0.3% | 0.8% | 2.0% | B2B focused, lower volume but higher intent |
| Display Ads | 0.05% | 0.10% | 0.50% | Retargeting typically 0.5-1.0% |
| Organic Search | 1.5% | 3.0% | 8.0% | Position 1 averages 28-31% CTR |
| Organic Social | 0.5% | 1.5% | 4.0% | Platform algorithm changes affect significantly |
| Referral | 1.0% | 3.0% | 6.0% | Quality of referring site matters greatly |
| Direct | 2.0% | 4.0% | 8.0% | Highest intent channel |
### By Industry (Paid Search)
| Industry | Average CTR | Low | High |
|----------|------------|-----|------|
| B2B | 2.4% | 1.5% | 4.0% |
| E-commerce | 2.7% | 1.8% | 5.0% |
| Education | 3.3% | 2.0% | 6.0% |
| Finance & Insurance | 2.9% | 1.5% | 5.5% |
| Healthcare | 3.3% | 2.0% | 5.0% |
| Legal | 2.9% | 1.5% | 5.0% |
| Real Estate | 3.7% | 2.5% | 6.0% |
| Retail | 2.5% | 1.5% | 5.0% |
| SaaS | 2.1% | 1.2% | 3.5% |
| Technology | 2.1% | 1.0% | 4.0% |
| Travel & Hospitality | 4.7% | 3.0% | 8.0% |
---
## Cost Per Click (CPC) Benchmarks
CPC = Spend / Clicks
### By Channel (USD)
| Channel | Low | Target | High | Notes |
|---------|-----|--------|------|-------|
| Google Search | $0.50 | $2.50 | $8.00 | Legal/finance can exceed $50 per click |
| Google Display | $0.10 | $0.50 | $2.00 | Programmatic can be lower |
| Facebook | $0.30 | $1.00 | $3.00 | B2C typically lower than B2B |
| LinkedIn | $2.00 | $5.50 | $12.00 | Highest CPC among social platforms |
| Instagram | $0.40 | $1.20 | $3.50 | Stories ads trending lower |
| Twitter/X | $0.20 | $0.80 | $2.50 | High variability by topic |
| TikTok | $0.10 | $0.50 | $2.00 | Rapidly evolving, currently lower |
### By Industry (Google Ads)
| Industry | Average CPC | Range |
|----------|------------|-------|
| Automotive | $2.46 | $1.00-$6.00 |
| B2B | $3.33 | $1.50-$8.00 |
| E-commerce | $1.16 | $0.50-$3.00 |
| Education | $2.40 | $1.00-$5.00 |
| Finance & Insurance | $3.44 | $1.00-$50.00 |
| Healthcare | $2.62 | $1.00-$6.00 |
| Legal | $6.75 | $2.00-$100.00 |
| Real Estate | $2.37 | $1.00-$5.00 |
| SaaS/Technology | $3.80 | $1.50-$10.00 |
| Travel | $1.53 | $0.50-$4.00 |
---
## Cost Per Mille (CPM) Benchmarks
CPM = (Spend / Impressions) * 1000, i.e. cost per 1,000 impressions
### By Channel (USD)
| Channel | Low | Target | High | Notes |
|---------|-----|--------|------|-------|
| Facebook | $3.00 | $8.00 | $15.00 | Q4 holiday season can exceed $20 |
| Instagram | $4.00 | $10.00 | $18.00 | Reels ads trending lower |
| LinkedIn | $8.00 | $25.00 | $50.00 | Premium B2B audience |
| Google Display | $1.00 | $3.50 | $8.00 | Programmatic ranges widely |
| TikTok | $2.00 | $6.00 | $12.00 | Growing platform, rates increasing |
| YouTube | $4.00 | $10.00 | $20.00 | Pre-roll vs discovery ads vary |
| Programmatic Display | $0.50 | $2.00 | $6.00 | Dependent on targeting precision |
---
## Cost Per Acquisition (CPA) Benchmarks
CPA = Spend / Customers Acquired
### By Channel (USD)
| Channel | Low | Target | High | Notes |
|---------|-----|--------|------|-------|
| Email | $5 | $15 | $40 | Existing list; acquisition cost amortized |
| Paid Search | $20 | $50 | $150 | Highly dependent on industry and competition |
| Paid Social | $15 | $40 | $100 | Retargeting typically lower |
| Display | $30 | $75 | $200 | Awareness-focused; higher CPA expected |
| Organic Search | $5 | $20 | $60 | Excludes SEO investment costs |
| Organic Social | $10 | $30 | $80 | Content production costs excluded |
| Referral | $10 | $25 | $70 | Referral incentive costs included |
### By Industry (Across Channels)
| Industry | Average CPA | Acceptable Range |
|----------|------------|------------------|
| B2B SaaS | $150-$400 | $75-$700 |
| E-commerce | $25-$80 | $10-$150 |
| Education | $40-$120 | $20-$250 |
| Finance | $75-$200 | $30-$500 |
| Healthcare | $50-$150 | $25-$300 |
| Legal | $100-$300 | $50-$700 |
| Real Estate | $60-$180 | $30-$350 |
| Retail | $15-$50 | $8-$100 |
| Travel | $20-$70 | $10-$150 |
---
## Cost Per Lead (CPL) Benchmarks
CPL = Spend / Leads Generated
### By Channel (USD)
| Channel | Low | Target | High |
|---------|-----|--------|------|
| Email | $3 | $10 | $25 |
| Paid Search | $15 | $35 | $90 |
| Paid Social (Facebook) | $8 | $20 | $50 |
| Paid Social (LinkedIn) | $25 | $75 | $150 |
| Display | $20 | $50 | $120 |
| Content Marketing | $10 | $30 | $80 |
| Webinars | $30 | $70 | $150 |
### By Industry
| Industry | Average CPL | Range |
|----------|------------|-------|
| B2B SaaS | $50-$150 | $25-$300 |
| E-commerce | $10-$30 | $5-$60 |
| Education | $25-$70 | $15-$150 |
| Financial Services | $40-$120 | $20-$250 |
| Healthcare | $30-$90 | $15-$180 |
| Manufacturing | $50-$120 | $25-$200 |
| Technology | $40-$100 | $20-$200 |
---
## Return on Ad Spend (ROAS) Benchmarks
ROAS = Revenue / Ad Spend
### By Channel
| Channel | Low | Target | High | Notes |
|---------|-----|--------|------|-------|
| Email | 30x | 42x | 60x | Highest ROAS channel when list is healthy |
| Paid Search (Brand) | 8x | 15x | 30x | Brand terms have high ROAS |
| Paid Search (Generic) | 2x | 4x | 8x | Competitive; ROAS varies widely |
| Paid Social | 1.5x | 3x | 6x | Retargeting typically 4-10x |
| Display | 0.5x | 1.5x | 3x | Often used for awareness; lower direct ROAS |
| Organic Search | 5x | 10x | 20x | Excludes SEO investment amortization |
| Organic Social | 3x | 6x | 12x | Excludes content production costs |
### By Industry
| Industry | Minimum Viable ROAS | Target ROAS |
|----------|--------------------:|------------:|
| E-commerce (low margin) | 4x | 8x+ |
| E-commerce (high margin) | 2x | 4x+ |
| SaaS | 3x | 6x+ |
| B2B Services | 5x | 10x+ |
| Retail | 3x | 5x+ |
| DTC Brands | 2.5x | 5x+ |
### ROAS Calculation Notes
- **Breakeven ROAS** = 1 / Profit Margin (e.g., 25% margin = 4x breakeven)
- **Target ROAS** should be at least 2x the breakeven ROAS for sustainable growth
- Always include all costs (media, creative, tools, labor) for true ROAS
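The same arithmetic as a quick check (margin expressed as a fraction, e.g. 0.25 for a 25% margin):

```python
def breakeven_roas(profit_margin):
    """Minimum ROAS at which ad spend neither makes nor loses money."""
    return 1.0 / profit_margin

margin = 0.25
breakeven = breakeven_roas(margin)  # 4.0
target = 2 * breakeven              # 8.0 -- at least 2x breakeven for sustainable growth
```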
---
## Conversion Rate Benchmarks
### Landing Page Conversion Rate
| Industry | Low | Average | High |
|----------|-----|---------|------|
| B2B SaaS | 2.0% | 4.5% | 9.0% |
| E-commerce | 1.5% | 3.0% | 6.0% |
| Education | 2.5% | 5.5% | 10.0% |
| Finance | 2.0% | 5.0% | 11.0% |
| Healthcare | 2.0% | 4.0% | 8.0% |
| Legal | 3.0% | 7.0% | 13.0% |
| Real Estate | 2.0% | 4.5% | 8.0% |
| Travel | 2.0% | 4.0% | 9.0% |
### Email Conversion Rates
| Metric | Low | Average | High |
|--------|-----|---------|------|
| Open Rate | 15% | 22% | 35% |
| Click Rate | 1.0% | 2.5% | 5.0% |
| Click-to-Open Rate | 8% | 12% | 20% |
| Unsubscribe Rate | 0.1% | 0.2% | 0.5% |
---
## Seasonal Adjustments
Campaign benchmarks fluctuate by season. Apply these adjustment factors to normalize your comparisons:
| Quarter | CPC Adjustment | CPM Adjustment | CVR Adjustment |
|---------|---------------|----------------|----------------|
| Q1 (Jan-Mar) | -10% to -15% | -15% to -20% | Baseline |
| Q2 (Apr-Jun) | Baseline | Baseline | Baseline |
| Q3 (Jul-Sep) | +5% to +10% | +5% to +10% | -5% |
| Q4 (Oct-Dec) | +15% to +30% | +20% to +40% | +10% to +20% |
**Key seasonal events:**
- Black Friday/Cyber Monday: CPMs can increase 50-100%
- January: Lowest competition, good for testing
- Back-to-School (Aug-Sep): Education and retail spike
- Tax Season (Jan-Apr): Finance vertical spike
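To compare a metric across quarters, divide out the seasonal uplift first (the adjustments above are ranges, so treat the result as approximate):

```python
def deseasonalize(observed, seasonal_adjustment):
    """Convert an observed metric to its baseline-equivalent value."""
    return observed / (1 + seasonal_adjustment)

# A $3.00 Q4 CPC under +20% seasonal pressure is roughly a $2.50 baseline CPC.
baseline_cpc = deseasonalize(3.00, 0.20)
```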
---
## Using Benchmarks Effectively
### Do
- Compare against your own historical data first, then industry benchmarks
- Account for seasonality when comparing time periods
- Consider your funnel position (awareness vs conversion campaigns have different benchmarks)
- Update benchmarks annually as industry norms shift
### Do Not
- Treat benchmarks as absolute targets (your business context matters more)
- Compare across industries without adjustment
- Ignore sample size (small campaigns have high variance)
- Use benchmarks to justify cutting channels without understanding their full-funnel role


@@ -0,0 +1,302 @@
# Funnel Optimization Framework
A stage-by-stage guide to diagnosing and improving marketing and sales funnel performance. Use this framework alongside the funnel_analyzer.py tool to identify bottlenecks and implement targeted optimizations.
---
## The Standard Marketing Funnel
```
AWARENESS (Impressions, Reach)
|
INTEREST (Clicks, Engagement)
|
CONSIDERATION (Leads, Sign-ups)
|
INTENT (Demos, Trials, Cart Adds)
|
PURCHASE (Customers, Revenue)
|
RETENTION (Repeat, Upsell, Referral)
```
Each transition between stages represents a conversion point. The funnel analyzer measures these transitions and identifies where the largest drop-offs occur.
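With per-stage counts in hand, the transition rates fall out directly (counts are illustrative; funnel_analyzer.py automates this):

```python
stages = [
    ("Awareness", 100_000),
    ("Interest", 4_000),
    ("Consideration", 900),
    ("Intent", 300),
    ("Purchase", 120),
]

# Conversion rate at each transition; the smallest rate flags the bottleneck.
rates = {}
for (name, count), (nxt, nxt_count) in zip(stages, stages[1:]):
    rates[f"{name} -> {nxt}"] = nxt_count / count
```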
---
## Stage-by-Stage Optimization
### Stage 1: Awareness to Interest
**What it measures:** How effectively you capture attention and generate initial engagement.
**Healthy conversion rate:** 2-8% (varies widely by channel)
**Common bottlenecks:**
- Poor targeting: Reaching the wrong audience
- Weak creative: Ads that do not stand out or communicate value
- Message-market mismatch: Content that does not resonate with the audience's needs
- Low brand recognition: No trust or familiarity established
**Optimization tactics:**
| Tactic | Expected Impact | Effort |
|--------|----------------|--------|
| Audience refinement (lookalike, interest targeting) | High | Medium |
| Creative testing (3-5 variants per campaign) | High | Medium |
| Headline optimization (clear value proposition) | Medium | Low |
| Channel diversification (test new platforms) | Medium | High |
| Retargeting past engagers | Medium | Low |
**Key metrics to track:**
- Impressions and reach
- CTR by creative variant
- Cost per engagement
- Brand lift (if measured)
---
### Stage 2: Interest to Consideration
**What it measures:** How well you convert initial interest into genuine evaluation.
**Healthy conversion rate:** 10-30%
**Common bottlenecks:**
- Landing page disconnect: The page does not match the ad promise
- Poor user experience: Slow load times, confusing layout, mobile issues
- Missing social proof: No testimonials, case studies, or trust signals
- Unclear value proposition: Visitor does not understand "what's in it for me"
- Friction in lead capture: Too many form fields, unclear CTA
**Optimization tactics:**
| Tactic | Expected Impact | Effort |
|--------|----------------|--------|
| Landing page A/B testing | High | Medium |
| Message match (ad copy = page headline) | High | Low |
| Reduce form fields to essential only | High | Low |
| Add social proof (logos, testimonials, numbers) | Medium | Low |
| Improve page load speed (<3 seconds) | Medium | Medium |
| Mobile optimization | Medium | Medium |
| Add exit-intent offers | Low-Medium | Low |
**Key metrics to track:**
- Landing page conversion rate
- Bounce rate
- Time on page
- Form abandonment rate
---
### Stage 3: Consideration to Intent
**What it measures:** How effectively you move evaluated prospects toward a purchase decision.
**Healthy conversion rate:** 15-40%
**Common bottlenecks:**
- Insufficient nurturing: Leads go cold without follow-up
- Lack of differentiation: Prospects do not understand why you are better than alternatives
- Missing information: Pricing, features, or comparisons not available
- Sales-marketing misalignment: MQLs are not meeting sales expectations
- Poor timing: Follow-up is too slow or too aggressive
**Optimization tactics:**
| Tactic | Expected Impact | Effort |
|--------|----------------|--------|
| Email nurture sequences (5-7 touchpoints) | High | Medium |
| Lead scoring to prioritize sales outreach | High | High |
| Comparison content (vs. competitors) | Medium | Medium |
| Free trial or demo offers | High | Medium |
| Case studies relevant to prospect's industry | Medium | Medium |
| Retargeting with mid-funnel content | Medium | Low |
| Pricing transparency | Medium | Low |
**Key metrics to track:**
- MQL to SQL conversion rate
- Lead response time
- Email engagement rates (nurture sequences)
- Content engagement (case studies, comparisons)
---
### Stage 4: Intent to Purchase
**What it measures:** How well you convert ready-to-buy prospects into paying customers.
**Healthy conversion rate:** 20-50%
**Common bottlenecks:**
- Complex purchase process: Too many steps, unclear pricing, difficult checkout
- Lack of urgency: No reason to buy now
- Unaddressed objections: Common concerns not proactively handled
- Poor sales process: Inconsistent follow-up, inadequate discovery
- Payment friction: Limited payment options, security concerns
**Optimization tactics:**
| Tactic | Expected Impact | Effort |
|--------|----------------|--------|
| Simplify checkout/purchase flow | High | Medium |
| Add urgency (limited-time offers, scarcity) | Medium | Low |
| Address objections in sales collateral | Medium | Medium |
| Offer guarantees (money-back, free trial extension) | Medium | Low |
| Cart abandonment emails (3-email sequence) | High | Low |
| Live chat or chatbot support at checkout | Medium | Medium |
| Multiple payment options | Low-Medium | Medium |
| Customer success stories at point of purchase | Medium | Low |
**Key metrics to track:**
- Cart abandonment rate
- Checkout completion rate
- Average deal cycle length
- Win rate (B2B)
- Average order value
---
### Stage 5: Purchase to Retention
**What it measures:** How well you retain customers and expand their lifetime value.
**Healthy retention rate:** 70-95% annually (varies by business model)
**Common bottlenecks:**
- Poor onboarding: Customers do not achieve value quickly
- Lack of engagement: No ongoing communication or community
- Product/service issues: Unmet expectations post-purchase
- No expansion path: No upsell, cross-sell, or referral programs
- Competitor poaching: Better offers from alternatives
**Optimization tactics:**
| Tactic | Expected Impact | Effort |
|--------|----------------|--------|
| Structured onboarding (first 30/60/90 days) | High | High |
| Regular check-ins and health scoring | High | Medium |
| Loyalty programs | Medium | Medium |
| Referral incentives | Medium | Low |
| Cross-sell/upsell email sequences | Medium | Medium |
| Customer community building | Medium | High |
| Proactive support based on usage patterns | High | High |
**Key metrics to track:**
- Customer retention rate
- Net Promoter Score (NPS)
- Customer Lifetime Value (CLV)
- Expansion revenue
- Churn rate and reasons
---
## Bottleneck Diagnosis Framework
When the funnel analyzer identifies a bottleneck, use this diagnostic framework:
### Step 1: Quantify the Problem
- What is the conversion rate at this stage?
- How does it compare to your historical average?
- How does it compare to industry benchmarks?
- What is the absolute number of prospects lost?
### Step 2: Segment the Data
Look at the bottleneck broken down by:
- **Channel**: Is the drop-off worse for certain traffic sources?
- **Device**: Mobile vs desktop performance gaps
- **Geography**: Regional differences
- **Cohort**: Has it changed over time?
- **Campaign**: Specific campaigns performing worse
### Step 3: Identify Root Cause
| Symptom | Likely Root Cause | Diagnostic Action |
|---------|------------------|-------------------|
| High bounce rate | Message mismatch or UX issue | Review landing page vs ad |
| High time on page but low conversion | Confusion or missing CTA | Heatmap analysis |
| Drop-off at form | Too many fields or unclear value | Form analytics review |
| Long time between stages | Insufficient nurturing | Review email engagement |
| Drop-off after pricing page | Pricing concerns | Test pricing presentation |
| High cart abandonment | Checkout friction | Checkout flow analysis |
### Step 4: Prioritize Fixes
Use the ICE scoring framework:
- **Impact** (1-10): How much will fixing this improve the bottleneck?
- **Confidence** (1-10): How confident are you that this fix will work?
- **Ease** (1-10): How easy is this to implement?
Score = (Impact + Confidence + Ease) / 3
Prioritize fixes with the highest ICE score.
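A quick ICE ranking helper (fix names and scores are illustrative):

```python
def ice_score(impact, confidence, ease):
    """Average of the three 1-10 ICE components."""
    return (impact + confidence + ease) / 3

fixes = [
    ("Reduce form fields", 8, 7, 9),
    ("Rebuild checkout flow", 9, 6, 3),
    ("Add exit-intent offer", 4, 5, 8),
]
ranked = sorted(fixes, key=lambda f: ice_score(*f[1:]), reverse=True)
# "Reduce form fields" (8.00) outranks "Rebuild checkout flow" (6.00).
```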
---
## Funnel Math and Revenue Impact
### Calculating the Revenue Impact of Funnel Improvements
A useful way to prioritize is to calculate how much revenue each percentage point of improvement is worth at each stage.
**Formula:**
```
Relative_Lift = Improvement_in_Percentage_Points / Current_Conversion_Rate
Revenue_Impact = Current_Revenue * Relative_Lift
```
**Example:**
| Stage | Current Rate | +1pp Improvement | Revenue Impact |
|-------|-------------|-----------------|----------------|
| Awareness -> Interest | 5.0% | 6.0% | +20% more leads entering funnel |
| Interest -> Consideration | 25% | 26% | +4% more MQLs |
| Consideration -> Intent | 30% | 31% | +3.3% more SQLs |
| Intent -> Purchase | 40% | 41% | +2.5% more customers |
**Key insight:** Improvements at the top of the funnel have a multiplied effect on downstream stages. But improvements at the bottom of the funnel convert to revenue faster.
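Each table row reduces to one ratio: the percentage-point gain divided by the current rate (rates expressed as fractions):

```python
def relative_lift(current_rate, improvement_pp):
    """Relative increase in stage output from a percentage-point gain."""
    return improvement_pp / current_rate

top_lift = relative_lift(0.05, 0.01)     # Awareness -> Interest: +20%
bottom_lift = relative_lift(0.40, 0.01)  # Intent -> Purchase: +2.5%
```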
---
## Common Anti-Patterns
### 1. Optimizing the Wrong Stage
Fixing a bottom-of-funnel problem when the real issue is top-of-funnel volume. Always diagnose the full funnel before optimizing.
### 2. Ignoring Segment Differences
Aggregate funnel metrics can hide that one segment performs well while another is broken. Always segment before optimizing.
### 3. Over-Optimizing for Conversion Rate
Increasing conversion rate by narrowing the funnel (stricter targeting, higher-intent-only leads) can reduce total volume. Balance rate and volume.
### 4. Single-Metric Focus
Optimizing CTR without watching CPA, or optimizing CPA without watching volume. Always track paired metrics.
### 5. Not Accounting for Time Lag
B2B funnels can take weeks or months. Measuring a campaign's funnel performance too early produces incomplete data.
---
## Segment Comparison Best Practices
When using the funnel analyzer's segment comparison feature:
1. **Compare meaningful segments**: Channel, campaign type, audience demographic, or time period
2. **Ensure comparable volume**: Do not compare a segment with 100 entries to one with 10,000
3. **Look for stage-specific differences**: Two segments may have similar overall rates but different bottlenecks
4. **Use insights to inform targeting**: If one segment converts better at a specific stage, understand why and apply those lessons
---
## Recommended Review Cadence
| Review Type | Frequency | Focus |
|-------------|-----------|-------|
| Campaign funnel check | Weekly | Active campaign stage rates |
| Full funnel audit | Monthly | Overall funnel health, bottleneck shifts |
| Segment deep-dive | Monthly | Channel and cohort comparisons |
| Strategic funnel review | Quarterly | Funnel structure, stage definitions, benchmark updates |
| Annual funnel redesign | Annually | Stage definitions, measurement methodology, tool updates |


@@ -0,0 +1,347 @@
#!/usr/bin/env python3
"""
Attribution Analyzer - Multi-touch attribution modeling for marketing campaigns.
Implements 5 attribution models:
- first-touch: 100% credit to first interaction
- last-touch: 100% credit to last interaction
- linear: Equal credit across all touchpoints
- time-decay: Exponential decay favoring recent touchpoints
- position-based: 40% first, 40% last, 20% split among middle
Usage:
python attribution_analyzer.py data.json
python attribution_analyzer.py data.json --model time-decay
python attribution_analyzer.py data.json --model time-decay --half-life 14
python attribution_analyzer.py data.json --format json
"""
import argparse
import json
import sys
from datetime import datetime
from typing import Any, Dict, List, Optional
MODELS = ["first-touch", "last-touch", "linear", "time-decay", "position-based"]
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def parse_timestamp(ts: str) -> datetime:
"""Parse an ISO-format timestamp string into a datetime object."""
for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d"):
try:
return datetime.strptime(ts, fmt)
except ValueError:
continue
raise ValueError(f"Cannot parse timestamp: {ts}")
def first_touch_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""First-touch: 100% credit to the first touchpoint in each journey."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
channel = sorted_tp[0]["channel"]
revenue = journey.get("revenue", 1.0)
credits[channel] = credits.get(channel, 0.0) + revenue
return credits
def last_touch_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""Last-touch: 100% credit to the last touchpoint in each journey."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
channel = sorted_tp[-1]["channel"]
revenue = journey.get("revenue", 1.0)
credits[channel] = credits.get(channel, 0.0) + revenue
return credits
def linear_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""Linear: Equal credit split across all touchpoints in each journey."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
revenue = journey.get("revenue", 1.0)
share = safe_divide(revenue, len(touchpoints))
for tp in touchpoints:
channel = tp["channel"]
credits[channel] = credits.get(channel, 0.0) + share
return credits
def time_decay_attribution(journeys: List[Dict], half_life_days: float = 7.0) -> Dict[str, float]:
"""Time-decay: Exponential decay giving more credit to recent touchpoints.
Uses a configurable half-life (in days). Touchpoints closer to conversion
receive exponentially more credit.
"""
import math
credits: Dict[str, float] = {}
decay_rate = math.log(2) / half_life_days
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
revenue = journey.get("revenue", 1.0)
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
conversion_time = parse_timestamp(sorted_tp[-1]["timestamp"])
# Calculate raw weights
weights: List[float] = []
for tp in sorted_tp:
tp_time = parse_timestamp(tp["timestamp"])
days_before = (conversion_time - tp_time).total_seconds() / 86400.0
weight = math.exp(-decay_rate * days_before)
weights.append(weight)
total_weight = sum(weights)
if total_weight == 0:
continue
for i, tp in enumerate(sorted_tp):
channel = tp["channel"]
share = safe_divide(weights[i], total_weight) * revenue
credits[channel] = credits.get(channel, 0.0) + share
return credits
def position_based_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""Position-based: 40% first, 40% last, 20% split among middle touchpoints."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
revenue = journey.get("revenue", 1.0)
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
if len(sorted_tp) == 1:
channel = sorted_tp[0]["channel"]
credits[channel] = credits.get(channel, 0.0) + revenue
elif len(sorted_tp) == 2:
first_channel = sorted_tp[0]["channel"]
last_channel = sorted_tp[-1]["channel"]
credits[first_channel] = credits.get(first_channel, 0.0) + revenue * 0.5
credits[last_channel] = credits.get(last_channel, 0.0) + revenue * 0.5
else:
first_channel = sorted_tp[0]["channel"]
last_channel = sorted_tp[-1]["channel"]
credits[first_channel] = credits.get(first_channel, 0.0) + revenue * 0.4
credits[last_channel] = credits.get(last_channel, 0.0) + revenue * 0.4
middle_count = len(sorted_tp) - 2
middle_share = safe_divide(revenue * 0.2, middle_count)
for tp in sorted_tp[1:-1]:
channel = tp["channel"]
credits[channel] = credits.get(channel, 0.0) + middle_share
return credits
def run_model(model_name: str, journeys: List[Dict], half_life: float = 7.0) -> Dict[str, float]:
"""Dispatch to the appropriate attribution model."""
if model_name == "first-touch":
return first_touch_attribution(journeys)
elif model_name == "last-touch":
return last_touch_attribution(journeys)
elif model_name == "linear":
return linear_attribution(journeys)
elif model_name == "time-decay":
return time_decay_attribution(journeys, half_life)
elif model_name == "position-based":
return position_based_attribution(journeys)
else:
raise ValueError(f"Unknown model: {model_name}. Choose from: {', '.join(MODELS)}")
def compute_summary(journeys: List[Dict]) -> Dict[str, Any]:
"""Compute summary statistics about the journey data."""
total_journeys = len(journeys)
converted = sum(1 for j in journeys if j.get("converted", False))
total_revenue = sum(j.get("revenue", 0.0) for j in journeys if j.get("converted", False))
all_channels = set()
for j in journeys:
for tp in j.get("touchpoints", []):
all_channels.add(tp["channel"])
return {
"total_journeys": total_journeys,
"converted_journeys": converted,
"conversion_rate": round(safe_divide(converted, total_journeys) * 100, 2),
"total_revenue": round(total_revenue, 2),
"channels_observed": sorted(all_channels),
}
def format_text(results: Dict[str, Any]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("MULTI-TOUCH ATTRIBUTION ANALYSIS")
lines.append("=" * 70)
summary = results["summary"]
lines.append("")
lines.append("SUMMARY")
lines.append(f" Total Journeys: {summary['total_journeys']}")
lines.append(f" Converted: {summary['converted_journeys']}")
lines.append(f" Conversion Rate: {summary['conversion_rate']}%")
lines.append(f" Total Revenue: ${summary['total_revenue']:,.2f}")
lines.append(f" Channels Observed: {', '.join(summary['channels_observed'])}")
for model_name, credits in results["models"].items():
lines.append("")
lines.append("-" * 70)
lines.append(f"MODEL: {model_name.upper()}")
lines.append("-" * 70)
if not credits:
lines.append(" No conversions to attribute.")
continue
total_credit = sum(credits.values())
sorted_channels = sorted(credits.items(), key=lambda x: x[1], reverse=True)
lines.append(f" {'Channel':<25} {'Revenue Credit':>15} {'Share':>10}")
lines.append(f" {'-'*25} {'-'*15} {'-'*10}")
for channel, credit in sorted_channels:
pct = safe_divide(credit, total_credit) * 100
lines.append(f" {channel:<25} ${credit:>13,.2f} {pct:>8.1f}%")
lines.append(f" {'TOTAL':<25} ${total_credit:>13,.2f} {'100.0%':>10}")
# Comparison table
if len(results["models"]) > 1:
lines.append("")
lines.append("=" * 70)
lines.append("CROSS-MODEL COMPARISON")
lines.append("=" * 70)
all_channels = set()
for credits in results["models"].values():
all_channels.update(credits.keys())
all_channels_sorted = sorted(all_channels)
model_names = list(results["models"].keys())
header = f" {'Channel':<20}"
for mn in model_names:
short = mn.replace("-", " ").title()
header += f" {short:>14}"
lines.append(header)
lines.append(f" {'-'*20}" + f" {'-'*14}" * len(model_names))
for ch in all_channels_sorted:
row = f" {ch:<20}"
for mn in model_names:
val = results["models"][mn].get(ch, 0.0)
row += f" ${val:>12,.2f}"
lines.append(row)
lines.append("")
return "\n".join(lines)
def main() -> None:
"""Main entry point for the attribution analyzer."""
parser = argparse.ArgumentParser(
description="Multi-touch attribution analyzer for marketing campaigns.",
epilog="Example: python attribution_analyzer.py data.json --model linear --format json",
)
parser.add_argument(
"input_file",
help="Path to JSON file containing journey/touchpoint data",
)
parser.add_argument(
"--model",
choices=MODELS,
default=None,
help="Run a specific attribution model (default: run all 5 models)",
)
parser.add_argument(
"--half-life",
type=float,
default=7.0,
help="Half-life in days for time-decay model (default: 7)",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
# Load input data
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
journeys = data.get("journeys", [])
if not journeys:
print("Error: No 'journeys' array found in input data.", file=sys.stderr)
sys.exit(1)
# Determine which models to run
models_to_run = [args.model] if args.model else MODELS
# Run models
model_results: Dict[str, Dict[str, float]] = {}
for model_name in models_to_run:
credits = run_model(model_name, journeys, args.half_life)
model_results[model_name] = {ch: round(v, 2) for ch, v in credits.items()}
# Build output
results: Dict[str, Any] = {
"summary": compute_summary(journeys),
"models": model_results,
}
if args.output_format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text(results))
if __name__ == "__main__":
main()

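The position-based split implemented above is the least obvious of the five models; a minimal standalone sketch (hypothetical journey data, invented for illustration, not from the repository's fixtures) showing the 40/20/40 allocation for a three-touch journey:

```python
# Hypothetical converted journey; touchpoints are already in chronological order,
# so the timestamp sort from the full tool is skipped here.
journey = {
    "converted": True,
    "revenue": 1000.0,
    "touchpoints": [
        {"channel": "paid_social", "timestamp": "2026-01-01T09:00:00"},
        {"channel": "email", "timestamp": "2026-01-03T09:00:00"},
        {"channel": "organic_search", "timestamp": "2026-01-05T09:00:00"},
    ],
}

credits = {}
tps = journey["touchpoints"]
revenue = journey["revenue"]
credits[tps[0]["channel"]] = revenue * 0.4            # 40% to the first touch
credits[tps[-1]["channel"]] = revenue * 0.4           # 40% to the last touch
for tp in tps[1:-1]:                                  # middle touches share 20%
    credits[tp["channel"]] = credits.get(tp["channel"], 0.0) + revenue * 0.2 / (len(tps) - 2)

print(credits)  # → {'paid_social': 400.0, 'organic_search': 400.0, 'email': 200.0}
```

With only one middle touchpoint, email receives the entire 20% share; a journey with four or more touches would split that 20% evenly across all interior touches, which is what the `middle_share` computation in the full tool does.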

@@ -0,0 +1,459 @@
#!/usr/bin/env python3
"""
Campaign ROI Calculator - Comprehensive campaign ROI and performance metrics.
Calculates:
- ROI (Return on Investment)
- ROAS (Return on Ad Spend)
- CPA (Cost per Acquisition/Customer)
- CPL (Cost per Lead)
- CAC (Customer Acquisition Cost)
- CTR (Click-Through Rate)
- CVR (Conversion Rate - Leads to Customers)
Includes industry benchmarking and underperformance flagging.
Usage:
python campaign_roi_calculator.py campaign_data.json
python campaign_roi_calculator.py campaign_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional
# Industry benchmark ranges by channel
# Format: {metric: {channel: (low, target, high)}}
BENCHMARKS: Dict[str, Dict[str, tuple]] = {
"ctr": {
"email": (1.0, 2.5, 5.0),
"paid_search": (1.5, 3.5, 7.0),
"paid_social": (0.5, 1.2, 3.0),
"display": (0.05, 0.1, 0.5),
"organic_search": (1.5, 3.0, 8.0),
"organic_social": (0.5, 1.5, 4.0),
"referral": (1.0, 3.0, 6.0),
"direct": (2.0, 4.0, 8.0),
"default": (0.5, 2.0, 5.0),
},
"roas": {
"email": (30.0, 42.0, 60.0),
"paid_search": (2.0, 4.0, 8.0),
"paid_social": (1.5, 3.0, 6.0),
"display": (0.5, 1.5, 3.0),
"organic_search": (5.0, 10.0, 20.0),
"organic_social": (3.0, 6.0, 12.0),
"referral": (3.0, 5.0, 10.0),
"direct": (4.0, 8.0, 15.0),
"default": (2.0, 4.0, 8.0),
},
"cpa": {
"email": (5.0, 15.0, 40.0),
"paid_search": (20.0, 50.0, 150.0),
"paid_social": (15.0, 40.0, 100.0),
"display": (30.0, 75.0, 200.0),
"organic_search": (5.0, 20.0, 60.0),
"organic_social": (10.0, 30.0, 80.0),
"referral": (10.0, 25.0, 70.0),
"direct": (5.0, 15.0, 50.0),
"default": (15.0, 45.0, 120.0),
},
}
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def get_benchmark(metric: str, channel: str) -> tuple:
"""Get benchmark range for a metric and channel.
Returns:
Tuple of (low, target, high) for the given metric and channel.
"""
metric_benchmarks = BENCHMARKS.get(metric, {})
return metric_benchmarks.get(channel, metric_benchmarks.get("default", (0, 0, 0)))
def assess_performance(value: float, benchmark: tuple, higher_is_better: bool = True) -> str:
"""Assess a metric value against its benchmark range.
Args:
value: The metric value to assess.
benchmark: Tuple of (low, target, high).
higher_is_better: Whether higher values are better (True for CTR, ROAS; False for CPA).
Returns:
Performance assessment string.
"""
low, target, high = benchmark
if higher_is_better:
if value >= high:
return "excellent"
elif value >= target:
return "good"
elif value >= low:
return "below_target"
else:
return "underperforming"
else:
# For cost metrics, lower is better
if value <= low:
return "excellent"
elif value <= target:
return "good"
elif value <= high:
return "below_target"
else:
return "underperforming"
def calculate_campaign_metrics(campaign: Dict[str, Any]) -> Dict[str, Any]:
"""Calculate all ROI metrics for a single campaign.
Args:
campaign: Dict with keys: name, channel, spend, revenue, impressions, clicks, leads, customers.
Returns:
Dict with all calculated metrics, benchmarks, and assessments.
"""
name = campaign.get("name", "Unnamed Campaign")
channel = campaign.get("channel", "default")
spend = campaign.get("spend", 0.0)
revenue = campaign.get("revenue", 0.0)
impressions = campaign.get("impressions", 0)
clicks = campaign.get("clicks", 0)
leads = campaign.get("leads", 0)
customers = campaign.get("customers", 0)
# Core metrics
roi = safe_divide(revenue - spend, spend) * 100
roas = safe_divide(revenue, spend)
cpa = safe_divide(spend, customers) if customers > 0 else None
cpl = safe_divide(spend, leads) if leads > 0 else None
cac = safe_divide(spend, customers) if customers > 0 else None
ctr = safe_divide(clicks, impressions) * 100 if impressions > 0 else None
cvr = safe_divide(customers, leads) * 100 if leads > 0 else None
cpc = safe_divide(spend, clicks) if clicks > 0 else None
cpm = safe_divide(spend, impressions) * 1000 if impressions > 0 else None
lead_conversion_rate = safe_divide(leads, clicks) * 100 if clicks > 0 else None
# Profit
profit = revenue - spend
# Benchmark assessments
assessments: Dict[str, Any] = {}
flags: List[str] = []
if ctr is not None:
benchmark = get_benchmark("ctr", channel)
assessment = assess_performance(ctr, benchmark, higher_is_better=True)
assessments["ctr"] = {
"value": round(ctr, 2),
"benchmark_range": {"low": benchmark[0], "target": benchmark[1], "high": benchmark[2]},
"assessment": assessment,
}
if assessment == "underperforming":
flags.append(f"CTR ({ctr:.2f}%) is below industry low ({benchmark[0]}%) for {channel}")
    if spend > 0:  # assess ROAS whenever there was spend, even if revenue (and thus ROAS) is zero
benchmark = get_benchmark("roas", channel)
assessment = assess_performance(roas, benchmark, higher_is_better=True)
assessments["roas"] = {
"value": round(roas, 2),
"benchmark_range": {"low": benchmark[0], "target": benchmark[1], "high": benchmark[2]},
"assessment": assessment,
}
if assessment == "underperforming":
flags.append(f"ROAS ({roas:.2f}x) is below industry low ({benchmark[0]}x) for {channel}")
if cpa is not None:
benchmark = get_benchmark("cpa", channel)
assessment = assess_performance(cpa, benchmark, higher_is_better=False)
assessments["cpa"] = {
"value": round(cpa, 2),
"benchmark_range": {"low": benchmark[0], "target": benchmark[1], "high": benchmark[2]},
"assessment": assessment,
}
if assessment == "underperforming":
flags.append(f"CPA (${cpa:.2f}) exceeds industry high (${benchmark[2]:.2f}) for {channel}")
if profit < 0:
flags.append(f"Campaign is unprofitable: ${profit:,.2f} net loss")
# Recommendations
recommendations: List[str] = []
if ctr is not None and assessments.get("ctr", {}).get("assessment") in ("below_target", "underperforming"):
recommendations.append("Improve ad creative and targeting to increase CTR")
if assessments.get("roas", {}).get("assessment") in ("below_target", "underperforming"):
recommendations.append("Review targeting and bid strategy to improve ROAS")
if assessments.get("cpa", {}).get("assessment") in ("below_target", "underperforming"):
recommendations.append("Optimize landing pages and conversion flow to reduce CPA")
if cvr is not None and cvr < 10:
recommendations.append("Lead-to-customer conversion is low; review sales process and lead quality")
if lead_conversion_rate is not None and lead_conversion_rate < 2:
recommendations.append("Click-to-lead rate is low; improve landing page relevance and form experience")
if profit > 0 and assessments.get("roas", {}).get("assessment") in ("good", "excellent"):
recommendations.append("Campaign performing well; consider scaling budget")
return {
"name": name,
"channel": channel,
"metrics": {
"spend": round(spend, 2),
"revenue": round(revenue, 2),
"profit": round(profit, 2),
"roi_pct": round(roi, 2),
"roas": round(roas, 2),
"cpa": round(cpa, 2) if cpa is not None else None,
"cpl": round(cpl, 2) if cpl is not None else None,
"cac": round(cac, 2) if cac is not None else None,
"ctr_pct": round(ctr, 2) if ctr is not None else None,
"cvr_pct": round(cvr, 2) if cvr is not None else None,
"cpc": round(cpc, 2) if cpc is not None else None,
"cpm": round(cpm, 2) if cpm is not None else None,
"lead_conversion_rate_pct": round(lead_conversion_rate, 2) if lead_conversion_rate is not None else None,
"impressions": impressions,
"clicks": clicks,
"leads": leads,
"customers": customers,
},
"assessments": assessments,
"flags": flags,
"recommendations": recommendations,
}
def calculate_portfolio_summary(campaign_results: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Calculate aggregate metrics across all campaigns.
Args:
campaign_results: List of individual campaign result dicts.
Returns:
Portfolio-level summary with totals and weighted averages.
"""
total_spend = sum(c["metrics"]["spend"] for c in campaign_results)
total_revenue = sum(c["metrics"]["revenue"] for c in campaign_results)
total_impressions = sum(c["metrics"]["impressions"] for c in campaign_results)
total_clicks = sum(c["metrics"]["clicks"] for c in campaign_results)
total_leads = sum(c["metrics"]["leads"] for c in campaign_results)
total_customers = sum(c["metrics"]["customers"] for c in campaign_results)
total_profit = total_revenue - total_spend
underperforming = [c["name"] for c in campaign_results if c["flags"]]
top_performers = sorted(
campaign_results,
key=lambda c: c["metrics"]["roi_pct"],
reverse=True,
)
# Channel breakdown
channel_totals: Dict[str, Dict[str, float]] = {}
for c in campaign_results:
ch = c["channel"]
if ch not in channel_totals:
channel_totals[ch] = {"spend": 0, "revenue": 0, "leads": 0, "customers": 0}
channel_totals[ch]["spend"] += c["metrics"]["spend"]
channel_totals[ch]["revenue"] += c["metrics"]["revenue"]
channel_totals[ch]["leads"] += c["metrics"]["leads"]
channel_totals[ch]["customers"] += c["metrics"]["customers"]
channel_summary = {}
for ch, totals in channel_totals.items():
channel_summary[ch] = {
"spend": round(totals["spend"], 2),
"revenue": round(totals["revenue"], 2),
"roi_pct": round(safe_divide(totals["revenue"] - totals["spend"], totals["spend"]) * 100, 2),
"roas": round(safe_divide(totals["revenue"], totals["spend"]), 2),
"leads": int(totals["leads"]),
"customers": int(totals["customers"]),
}
return {
"total_campaigns": len(campaign_results),
"total_spend": round(total_spend, 2),
"total_revenue": round(total_revenue, 2),
"total_profit": round(total_profit, 2),
"portfolio_roi_pct": round(safe_divide(total_profit, total_spend) * 100, 2),
"portfolio_roas": round(safe_divide(total_revenue, total_spend), 2),
"total_impressions": total_impressions,
"total_clicks": total_clicks,
"total_leads": total_leads,
"total_customers": total_customers,
"blended_ctr_pct": round(safe_divide(total_clicks, total_impressions) * 100, 2),
"blended_cpl": round(safe_divide(total_spend, total_leads), 2) if total_leads > 0 else None,
"blended_cpa": round(safe_divide(total_spend, total_customers), 2) if total_customers > 0 else None,
"underperforming_campaigns": underperforming,
"top_performer": top_performers[0]["name"] if top_performers else None,
"channel_summary": channel_summary,
}
def format_text(results: Dict[str, Any]) -> str:
"""Format full results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("CAMPAIGN ROI ANALYSIS")
lines.append("=" * 70)
# Portfolio summary
summary = results["portfolio_summary"]
lines.append("")
lines.append("PORTFOLIO SUMMARY")
lines.append(f" Total Campaigns: {summary['total_campaigns']}")
lines.append(f" Total Spend: ${summary['total_spend']:>12,.2f}")
lines.append(f" Total Revenue: ${summary['total_revenue']:>12,.2f}")
lines.append(f" Total Profit: ${summary['total_profit']:>12,.2f}")
lines.append(f" Portfolio ROI: {summary['portfolio_roi_pct']}%")
lines.append(f" Portfolio ROAS: {summary['portfolio_roas']}x")
lines.append(f" Blended CTR: {summary['blended_ctr_pct']}%")
if summary["blended_cpl"] is not None:
lines.append(f" Blended CPL: ${summary['blended_cpl']:>12,.2f}")
if summary["blended_cpa"] is not None:
lines.append(f" Blended CPA: ${summary['blended_cpa']:>12,.2f}")
if summary["top_performer"]:
lines.append(f" Top Performer: {summary['top_performer']}")
if summary["underperforming_campaigns"]:
lines.append(f" Flagged: {', '.join(summary['underperforming_campaigns'])}")
# Channel summary
if summary["channel_summary"]:
lines.append("")
lines.append("-" * 70)
lines.append("CHANNEL SUMMARY")
lines.append(f" {'Channel':<20} {'Spend':>12} {'Revenue':>12} {'ROI':>10} {'ROAS':>8}")
lines.append(f" {'-'*20} {'-'*12} {'-'*12} {'-'*10} {'-'*8}")
for ch, cs in sorted(summary["channel_summary"].items()):
lines.append(
f" {ch:<20} ${cs['spend']:>10,.2f} ${cs['revenue']:>10,.2f} "
f"{cs['roi_pct']:>8.1f}% {cs['roas']:>6.2f}x"
)
# Individual campaigns
for campaign in results["campaigns"]:
lines.append("")
lines.append("-" * 70)
lines.append(f"CAMPAIGN: {campaign['name']}")
lines.append(f"Channel: {campaign['channel']}")
lines.append("-" * 70)
m = campaign["metrics"]
lines.append(f" {'Metric':<25} {'Value':>15}")
lines.append(f" {'-'*25} {'-'*15}")
lines.append(f" {'Spend':<25} ${m['spend']:>13,.2f}")
lines.append(f" {'Revenue':<25} ${m['revenue']:>13,.2f}")
lines.append(f" {'Profit':<25} ${m['profit']:>13,.2f}")
lines.append(f" {'ROI':<25} {m['roi_pct']:>13.2f}%")
lines.append(f" {'ROAS':<25} {m['roas']:>13.2f}x")
if m["cpa"] is not None:
lines.append(f" {'CPA':<25} ${m['cpa']:>13,.2f}")
if m["cpl"] is not None:
lines.append(f" {'CPL':<25} ${m['cpl']:>13,.2f}")
if m["cac"] is not None:
lines.append(f" {'CAC':<25} ${m['cac']:>13,.2f}")
if m["ctr_pct"] is not None:
lines.append(f" {'CTR':<25} {m['ctr_pct']:>13.2f}%")
if m["cpc"] is not None:
lines.append(f" {'CPC':<25} ${m['cpc']:>13,.2f}")
if m["cpm"] is not None:
lines.append(f" {'CPM':<25} ${m['cpm']:>13,.2f}")
if m["cvr_pct"] is not None:
lines.append(f" {'Lead-to-Customer CVR':<25} {m['cvr_pct']:>13.2f}%")
if m["lead_conversion_rate_pct"] is not None:
lines.append(f" {'Click-to-Lead Rate':<25} {m['lead_conversion_rate_pct']:>13.2f}%")
# Benchmark assessments
if campaign["assessments"]:
lines.append("")
lines.append(" BENCHMARK ASSESSMENT")
for metric_name, a in campaign["assessments"].items():
br = a["benchmark_range"]
status = a["assessment"].upper().replace("_", " ")
lines.append(
f" {metric_name.upper()}: {a['value']} "
f"[low={br['low']}, target={br['target']}, high={br['high']}] "
f"-> {status}"
)
# Flags
if campaign["flags"]:
lines.append("")
lines.append(" WARNING FLAGS")
for flag in campaign["flags"]:
lines.append(f" ! {flag}")
# Recommendations
if campaign["recommendations"]:
lines.append("")
lines.append(" RECOMMENDATIONS")
for i, rec in enumerate(campaign["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
return "\n".join(lines)
def main() -> None:
"""Main entry point for the campaign ROI calculator."""
parser = argparse.ArgumentParser(
description="Calculate campaign ROI, ROAS, CPA, CPL, CAC with industry benchmarking.",
epilog="Example: python campaign_roi_calculator.py campaigns.json --format json",
)
parser.add_argument(
"input_file",
help="Path to JSON file containing campaign data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
# Load input data
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
campaigns = data.get("campaigns", [])
if not campaigns:
print("Error: No 'campaigns' array found in input data.", file=sys.stderr)
sys.exit(1)
# Calculate metrics for each campaign
campaign_results = [calculate_campaign_metrics(c) for c in campaigns]
# Calculate portfolio summary
portfolio_summary = calculate_portfolio_summary(campaign_results)
results = {
"portfolio_summary": portfolio_summary,
"campaigns": campaign_results,
}
if args.output_format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text(results))
if __name__ == "__main__":
main()

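As a sanity check on the formulas the calculator implements, a standalone sketch with a hypothetical paid_search campaign (all values invented for illustration):

```python
# Hypothetical campaign, matching the input schema the CLI expects.
campaign = {"name": "Q1 Search", "channel": "paid_search",
            "spend": 10_000.0, "revenue": 45_000.0,
            "impressions": 500_000, "clicks": 12_500,
            "leads": 400, "customers": 80}

roi_pct = (campaign["revenue"] - campaign["spend"]) / campaign["spend"] * 100
roas = campaign["revenue"] / campaign["spend"]
cpa = campaign["spend"] / campaign["customers"]
ctr_pct = campaign["clicks"] / campaign["impressions"] * 100

print(roi_pct, roas, cpa, ctr_pct)  # → 350.0 4.5 125.0 2.5
```

Against the `BENCHMARKS` table above, a 4.5x ROAS lands in the "good" band for paid_search (target 4.0, high 8.0), while the $125 CPA lands in "below_target" (target $50, high $150), so this campaign would get a CPA recommendation but no warning flags.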

@@ -0,0 +1,305 @@
#!/usr/bin/env python3
"""
Funnel Analyzer - Conversion funnel analysis with bottleneck detection.
Analyzes marketing/sales funnels to identify:
- Stage-to-stage conversion rates and drop-off percentages
- Biggest bottleneck (largest absolute and relative drops)
- Overall funnel conversion rate
- Segment comparison when multiple segments are provided
Usage:
python funnel_analyzer.py funnel_data.json
python funnel_analyzer.py funnel_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def analyze_funnel(stages: List[str], counts: List[int]) -> Dict[str, Any]:
"""Analyze a single funnel and return stage-by-stage metrics.
Args:
stages: Ordered list of funnel stage names (top to bottom).
counts: Corresponding counts at each stage.
Returns:
Dictionary with stage metrics, bottleneck info, and overall conversion.
"""
if len(stages) != len(counts):
raise ValueError("Number of stages must match number of counts.")
if not stages:
raise ValueError("Funnel must have at least one stage.")
stage_metrics: List[Dict[str, Any]] = []
max_dropoff_abs = 0
max_dropoff_rel = 0.0
bottleneck_abs: Optional[str] = None
bottleneck_rel: Optional[str] = None
for i, (stage, count) in enumerate(zip(stages, counts)):
metric: Dict[str, Any] = {
"stage": stage,
"count": count,
"cumulative_conversion": round(safe_divide(count, counts[0]) * 100, 2),
}
if i > 0:
prev_count = counts[i - 1]
dropoff = prev_count - count
conversion_rate = safe_divide(count, prev_count) * 100
dropoff_rate = 100 - conversion_rate
metric["from_previous"] = stages[i - 1]
metric["conversion_rate"] = round(conversion_rate, 2)
metric["dropoff_count"] = dropoff
metric["dropoff_rate"] = round(dropoff_rate, 2)
# Track biggest absolute drop-off
if dropoff > max_dropoff_abs:
max_dropoff_abs = dropoff
bottleneck_abs = f"{stages[i-1]} -> {stage}"
# Track biggest relative drop-off
if dropoff_rate > max_dropoff_rel:
max_dropoff_rel = dropoff_rate
bottleneck_rel = f"{stages[i-1]} -> {stage}"
else:
metric["conversion_rate"] = 100.0
metric["dropoff_count"] = 0
metric["dropoff_rate"] = 0.0
stage_metrics.append(metric)
overall_conversion = safe_divide(counts[-1], counts[0]) * 100
return {
"stage_metrics": stage_metrics,
"overall_conversion_rate": round(overall_conversion, 2),
"total_entries": counts[0],
"total_conversions": counts[-1],
"total_lost": counts[0] - counts[-1],
"bottleneck_absolute": {
"transition": bottleneck_abs,
"dropoff_count": max_dropoff_abs,
},
"bottleneck_relative": {
"transition": bottleneck_rel,
"dropoff_rate": round(max_dropoff_rel, 2),
},
}
def compare_segments(segments: Dict[str, Dict[str, Any]], stages: List[str]) -> Dict[str, Any]:
"""Compare funnel performance across segments.
Args:
segments: Dict mapping segment name to {"counts": [...]}.
stages: Shared stage names for all segments.
Returns:
Comparison data with per-segment analysis and relative rankings.
"""
segment_results: Dict[str, Dict[str, Any]] = {}
for seg_name, seg_data in segments.items():
counts = seg_data.get("counts", [])
if len(counts) != len(stages):
raise ValueError(
f"Segment '{seg_name}' has {len(counts)} counts but {len(stages)} stages."
)
segment_results[seg_name] = analyze_funnel(stages, counts)
# Rank segments by overall conversion rate
ranked = sorted(
segment_results.items(),
key=lambda x: x[1]["overall_conversion_rate"],
reverse=True,
)
rankings = [
{
"rank": i + 1,
"segment": name,
"overall_conversion_rate": result["overall_conversion_rate"],
"total_entries": result["total_entries"],
"total_conversions": result["total_conversions"],
}
for i, (name, result) in enumerate(ranked)
]
# Stage-by-stage comparison
stage_comparison: List[Dict[str, Any]] = []
for i, stage in enumerate(stages):
stage_data: Dict[str, Any] = {"stage": stage}
for seg_name in segments:
metrics = segment_results[seg_name]["stage_metrics"][i]
stage_data[seg_name] = {
"count": metrics["count"],
"conversion_rate": metrics["conversion_rate"],
}
stage_comparison.append(stage_data)
return {
"segment_results": segment_results,
"rankings": rankings,
"stage_comparison": stage_comparison,
}
def format_single_funnel_text(analysis: Dict[str, Any], title: str = "FUNNEL") -> str:
"""Format a single funnel analysis as human-readable text."""
lines: List[str] = []
lines.append(f" {title}")
lines.append(f" {'='*60}")
lines.append(f" Total Entries: {analysis['total_entries']:,}")
lines.append(f" Total Conversions: {analysis['total_conversions']:,}")
lines.append(f" Total Lost: {analysis['total_lost']:,}")
lines.append(f" Overall Conversion: {analysis['overall_conversion_rate']}%")
lines.append("")
lines.append(f" {'Stage':<20} {'Count':>10} {'Conv Rate':>12} {'Drop-off':>12} {'Cumulative':>12}")
lines.append(f" {'-'*20} {'-'*10} {'-'*12} {'-'*12} {'-'*12}")
for m in analysis["stage_metrics"]:
stage = m["stage"]
count = m["count"]
conv = f"{m['conversion_rate']:.1f}%"
drop = f"-{m['dropoff_count']:,} ({m['dropoff_rate']:.1f}%)" if m["dropoff_count"] > 0 else "-"
cumul = f"{m['cumulative_conversion']:.1f}%"
lines.append(f" {stage:<20} {count:>10,} {conv:>12} {drop:>12} {cumul:>12}")
lines.append("")
bn_abs = analysis["bottleneck_absolute"]
bn_rel = analysis["bottleneck_relative"]
lines.append(f" BOTTLENECK (Absolute): {bn_abs['transition']} (lost {bn_abs['dropoff_count']:,})")
lines.append(f" BOTTLENECK (Relative): {bn_rel['transition']} ({bn_rel['dropoff_rate']}% drop-off)")
return "\n".join(lines)
def format_text(results: Dict[str, Any]) -> str:
"""Format full results as human-readable text output."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("FUNNEL CONVERSION ANALYSIS")
lines.append("=" * 70)
if "stage_comparison" in results:
# Multi-segment output
lines.append("")
lines.append("SEGMENT RANKINGS")
lines.append(f" {'Rank':>4} {'Segment':<25} {'Conversion':>12} {'Entries':>10} {'Conversions':>12}")
lines.append(f" {'-'*4} {'-'*25} {'-'*12} {'-'*10} {'-'*12}")
for r in results["rankings"]:
lines.append(
f" {r['rank']:>4} {r['segment']:<25} {r['overall_conversion_rate']:>11.2f}% "
f"{r['total_entries']:>10,} {r['total_conversions']:>12,}"
)
lines.append("")
for seg_name, seg_result in results["segment_results"].items():
lines.append("")
lines.append(format_single_funnel_text(seg_result, title=f"SEGMENT: {seg_name.upper()}"))
# Stage comparison table
lines.append("")
lines.append("-" * 70)
lines.append("STAGE-BY-STAGE COMPARISON")
lines.append("-" * 70)
seg_names = list(results["segment_results"].keys())
header = f" {'Stage':<20}"
for sn in seg_names:
header += f" {sn:>20}"
lines.append(header)
lines.append(f" {'-'*20}" + f" {'-'*20}" * len(seg_names))
for sc in results["stage_comparison"]:
row = f" {sc['stage']:<20}"
for sn in seg_names:
data = sc[sn]
row += f" {data['count']:>8,} ({data['conversion_rate']:>5.1f}%)"
lines.append(row)
else:
# Single funnel output
lines.append("")
lines.append(format_single_funnel_text(results))
lines.append("")
return "\n".join(lines)
def main() -> None:
"""Main entry point for the funnel analyzer."""
parser = argparse.ArgumentParser(
description="Analyze conversion funnels with bottleneck detection and segment comparison.",
epilog="Example: python funnel_analyzer.py funnel_data.json --format json",
)
parser.add_argument(
"input_file",
help="Path to JSON file containing funnel data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
# Load input data
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
# Determine mode: single funnel vs. segment comparison
if "segments" in data:
# Multi-segment mode
stages = data.get("funnel", {}).get("stages", data.get("stages", []))
if not stages:
print("Error: 'stages' list required for segment comparison.", file=sys.stderr)
sys.exit(1)
segments = data["segments"]
if not segments:
print("Error: 'segments' dict is empty.", file=sys.stderr)
sys.exit(1)
results = compare_segments(segments, stages)
elif "funnel" in data:
# Single funnel mode
funnel = data["funnel"]
stages = funnel.get("stages", [])
counts = funnel.get("counts", [])
if not stages or not counts:
print("Error: 'funnel' must contain 'stages' and 'counts' arrays.", file=sys.stderr)
sys.exit(1)
results = analyze_funnel(stages, counts)
else:
print("Error: Input must contain 'funnel' or 'segments' key.", file=sys.stderr)
sys.exit(1)
if args.output_format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text(results))
if __name__ == "__main__":
main()

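The bottleneck detection above reduces to a max over per-transition drop-off rates; a minimal sketch with a hypothetical four-stage funnel (counts invented for illustration):

```python
# Hypothetical funnel: per-stage conversion rates and the relative bottleneck.
stages = ["Visit", "Signup", "Trial", "Paid"]
counts = [10_000, 1_200, 600, 150]

rates = [100.0] + [counts[i] / counts[i - 1] * 100 for i in range(1, len(counts))]
dropoffs = [0.0] + [100 - r for r in rates[1:]]
bottleneck = stages[dropoffs.index(max(dropoffs))]

print([round(r, 1) for r in rates])  # → [100.0, 12.0, 50.0, 25.0]
print(bottleneck)                    # → Signup  (an 88% relative drop from Visit)
```

The full tool also reports the absolute bottleneck (largest raw count lost), which for this data is the same Visit → Signup transition (8,800 lost); the two can differ when a late stage sheds a large fraction of an already small count.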

@@ -40,15 +40,16 @@ The **content-creator** skill is ready for deployment and includes:
- Community management workflows
- Crisis response protocols
### 3. campaign-analytics (Priority: High)
### 3. campaign-analytics (Priority: High) ✅ DELIVERED
**Purpose**: Performance measurement and reporting
**Deployed**: February 2026
**Components**:
- GA4 integration scripts
- Custom dashboard templates
- ROI calculation tools
- Attribution modeling
- A/B testing frameworks
- Executive report generators
- Multi-touch attribution analyzer (5 models: first/last/linear/time-decay/position-based)
- Funnel conversion analyzer with bottleneck detection
- Campaign ROI calculator with budget reallocation
- Attribution models reference guide
- Campaign benchmarks by channel and industry
- Executive report, campaign brief, and A/B test templates
### 4. email-marketing (Priority: Medium)
**Purpose**: Email campaign creation and automation