feat: C-Suite expansion — 8 new executive advisory roles (2→10) (#264)

* feat: C-Suite expansion — 8 new executive advisory roles

Add COO, CPO, CMO, CFO, CRO, CISO, CHRO advisors and Executive Mentor.
Expands C-level advisory from 2 to 10 roles with 74 total files.

Each role includes:
- SKILL.md (lean, <5KB, ~1200 tokens for context efficiency)
- Reference docs (loaded on demand, not at startup)
- Python analysis scripts (stdlib only, runnable CLI)

Executive Mentor features /em: slash commands (challenge, board-prep,
hard-call, stress-test, postmortem) with devil's advocate agent.

21 Python tools, 24 reference frameworks, 28,379 total lines.
All SKILL.md files combined: ~17K tokens (8.5% of 200K context window).

Badge: 88 → 116 skills

* feat: C-Suite orchestration layer + 18 complementary skills

ORCHESTRATION (new):
- cs-onboard: Founder interview → company-context.md
- chief-of-staff: Routing, synthesis, inter-agent orchestration
- board-meeting: 6-phase multi-agent deliberation protocol
- decision-logger: Two-layer memory (raw transcripts + approved decisions)
- agent-protocol: Inter-agent invocation with loop prevention
- context-engine: Company context loading + anonymization

CROSS-CUTTING CAPABILITIES (new):
- board-deck-builder: Board/investor update assembly
- scenario-war-room: Cascading multi-variable what-if modeling
- competitive-intel: Systematic competitor tracking + battlecards
- org-health-diagnostic: Cross-functional health scoring (8 dimensions)
- ma-playbook: M&A strategy (acquiring + being acquired)
- intl-expansion: International market entry frameworks

CULTURE & COLLABORATION (new):
- culture-architect: Values → behaviors, culture code, health assessment
- company-os: EOS/Scaling Up operating system selection + implementation
- founder-coach: Founder development, delegation, blind spots
- strategic-alignment: Strategy cascade, silo detection, alignment scoring
- change-management: ADKAR-based change rollout framework
- internal-narrative: One story across employees/investors/customers

UPGRADES TO EXISTING ROLES:
- All 10 roles get reasoning technique directives
- All 10 roles get company-context.md integration
- All 10 roles get board meeting isolation rules
- CEO gets stage-adaptive temporal horizons (seed→C)

Key design decisions:
- Two-layer memory prevents hallucinated consensus from rejected ideas
- Phase 2 isolation: agents think independently before cross-examination
- Executive Mentor (The Critic) sees all perspectives, others don't
- 25 Python tools total (stdlib only, no dependencies)
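The loop-prevention rules named above (no self-invoke, max depth 2, no circular chains) can be sketched as a small guard. This is an illustrative helper, not code from the repo; the skill defines these rules in prose, and `can_invoke` and its chain representation are assumptions.

```python
# Sketch of the agent-protocol loop-prevention rules (hypothetical helper).
MAX_DEPTH = 2  # an invocation chain may be at most two agents deep

def can_invoke(chain: list[str], target: str) -> bool:
    """Return True if the last agent in `chain` may invoke `target`."""
    if chain and chain[-1] == target:   # no self-invocation
        return False
    if target in chain:                 # no circular chains (A -> B -> A)
        return False
    if len(chain) >= MAX_DEPTH:         # max depth 2
        return False
    return True
```

Under these rules an agent invoked by another agent cannot fan out further, which bounds every request to at most two hops.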

52 new files, 10 modified, 10,862 new lines.
Total C-suite ecosystem: 134 files, 39,131 lines.

* fix: connect all dots — Chief of Staff routes to all 28 skills

- Added complementary skills registry to routing-matrix.md
- Chief of Staff SKILL.md now lists all 28 skills in ecosystem
- Added integration tables to scenario-war-room and competitive-intel
- Badge: 116 → 134 skills
- README: C-Level Advisory count 10 → 28

Quality audit passed:
✓ All 10 roles: company-context, reasoning, isolation, invocation
✓ All 6 phases in board meeting
✓ Two-layer memory with DO_NOT_RESURFACE
✓ Loop prevention (no self-invoke, max depth 2, no circular)
✓ All /em: commands present
✓ All complementary skills cross-reference roles
✓ Chief of Staff routes to every skill in ecosystem
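The two-layer memory with DO_NOT_RESURFACE can be sketched as follows: raw transcripts are kept for reference only, while future meetings load approved decisions and skip rejected ones. Structure and field names here are assumptions for illustration, not the decision-logger's actual schema.

```python
# Minimal sketch of the two-layer memory (assumed field names).
transcripts = []   # Layer 1: everything said, never fed back to agents
decisions = []     # Layer 2: founder-reviewed outcomes only

def log_outcome(proposal: str, status: str) -> None:
    transcripts.append({"proposal": proposal, "status": status})
    if status == "approved":
        decisions.append({"proposal": proposal})
    elif status == "rejected":
        # Flagged so no future meeting can build consensus on a dead idea.
        decisions.append({"proposal": proposal, "DO_NOT_RESURFACE": True})

def meeting_context() -> list[str]:
    """What the next board meeting is allowed to see."""
    return [d["proposal"] for d in decisions if not d.get("DO_NOT_RESURFACE")]
```

Because agents only ever read `meeting_context()`, a rejected proposal cannot leak back in as hallucinated consensus.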

* refactor: CEO + CTO advisors upgraded to C-suite parity

Both roles now match the structural standard of all new roles:
- CEO: 11.7KB → 6.8KB SKILL.md (heavy content stays in references)
- CTO: 10KB → 7.2KB SKILL.md (heavy content stays in references)

Added to both:
- Integration table (who they work with and when)
- Key diagnostic questions
- Structured metrics dashboard table
- Consistent section ordering (Keywords → Quick Start → Responsibilities → Questions → Metrics → Red Flags → Integration → Reasoning → Context)

CEO additions:
- Stage-adaptive temporal horizons (seed=3m/6m/12m → B+=1y/3y/5y)
- Cross-references to culture-architect and board-deck-builder
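The stage-adaptive horizons above can be read as a simple lookup. The seed and B+ tuples come from this commit's text; the lookup function itself is illustrative, and intermediate stages are deliberately left out rather than guessed.

```python
# Sketch of stage-adaptive temporal horizons (values from the commit text).
STAGE_HORIZONS = {
    "seed": ("3m", "6m", "12m"),        # short / mid / long planning windows
    "series-b+": ("1y", "3y", "5y"),
}

def horizons_for(stage: str) -> tuple:
    """Short-, mid-, and long-range planning horizons for a funding stage."""
    if stage not in STAGE_HORIZONS:
        raise KeyError(f"no horizon profile for stage: {stage}")
    return STAGE_HORIZONS[stage]
```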

CTO additions:
- Key Questions section (7 diagnostic questions)
- Structured metrics table (DORA + debt + team + architecture + cost)
- Cross-references to all peer roles

All 10 roles now pass structural parity: ✓ Keywords ✓ QuickStart ✓ Questions ✓ Metrics ✓ RedFlags ✓ Integration

* feat: add proactive triggers + output artifacts to all 10 roles

Every C-suite role now specifies:
- Proactive Triggers: 'surface these without being asked' — context-driven
  early warnings that make advisors proactive, not reactive
- Output Artifacts: concrete deliverables per request type (what you ask →
  what you get)

CEO: runway alerts, board prep triggers, strategy review nudges
CTO: deploy frequency monitoring, tech debt thresholds, bus factor flags
COO: blocker detection, scaling threshold warnings, cadence gaps
CPO: retention curve monitoring, portfolio dog detection, research gaps
CMO: CAC trend monitoring, positioning gaps, budget staleness
CFO: runway forecasting, burn multiple alerts, scenario planning gaps
CRO: NRR monitoring, pipeline coverage, pricing review triggers
CISO: audit overdue alerts, compliance gaps, vendor risk
CHRO: retention risk, comp band gaps, org scaling thresholds
Executive Mentor: board prep triggers, groupthink detection, hard call surfacing

This transforms the C-suite from reactive advisors into proactive partners.
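A proactive trigger in the spirit of the CFO's runway alert might look like the sketch below. The 9-month threshold, function names, and message wording are assumptions for illustration; the skill specifies triggers in prose.

```python
# Illustrative proactive trigger: fire a warning without being asked
# when projected runway drops below a threshold (threshold assumed).
def runway_months(cash: float, monthly_burn: float) -> float:
    return cash / monthly_burn if monthly_burn > 0 else float("inf")

def runway_alert(cash: float, monthly_burn: float, threshold: float = 9.0):
    """Return an alert string when runway is short, else None (stay silent)."""
    months = runway_months(cash, monthly_burn)
    if months < threshold:
        return f"🔴 Runway is {months:.1f} months: raise or cut burn now."
    return None  # silence over fabricated updates
```

Returning `None` when nothing is wrong mirrors the communication rule that silence beats a fabricated update.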

* feat: User Communication Standard — structured output for all roles

Defines 3 output formats in agent-protocol/SKILL.md:

1. Standard Output: Bottom Line → What → Why → How to Act → Risks → Your Decision
2. Proactive Alert: What I Noticed → Why It Matters → Action → Urgency (🔴🟡)
3. Board Meeting: Decision Required → Perspectives → Agree/Disagree → Critic → Action Items

10 non-negotiable rules:
- Bottom line first, always
- Results and decisions only (no process narration)
- What + Why + How for every finding
- Actions have owners and deadlines ('we should consider' is banned)
- Decisions framed as options with trade-offs
- Founder is the highest authority — roles recommend, founder decides
- Risks are concrete (if X → Y, costs $Z)
- Max 5 bullets per section
- No jargon without explanation
- Silence over fabricated updates

All 10 roles reference this standard.
Chief of Staff enforces it as a quality gate.
Board meeting Phase 4 uses the Board Meeting Output format.
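As a quality gate, the Standard Output format can be enforced mechanically. The section names below are taken from the standard; the renderer itself is a hypothetical sketch, not code shipped in agent-protocol.

```python
# Hypothetical quality-gate renderer for format 1 (Standard Output).
SECTIONS = ["Bottom Line", "What", "Why", "How to Act", "Risks", "Your Decision"]

def render_standard_output(report: dict) -> str:
    """Render a report in the fixed section order, rejecting incomplete ones."""
    missing = [s for s in SECTIONS if s not in report]
    if missing:
        raise ValueError(f"incomplete report, missing sections: {missing}")
    # Bottom line first, always: SECTIONS fixes the order.
    return "\n\n".join(f"## {s}\n{report[s]}" for s in SECTIONS)
```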

* feat: Internal Quality Loop — verification before delivery

No role presents to the founder without passing verification:

Step 1: Self-Verification (every role, every time)
  - Source attribution: where did each data point come from?
  - Assumption audit: [VERIFIED] vs [ASSUMED] tags on every finding
  - Confidence scoring: 🟢 high / 🟡 medium / 🔴 low per finding
  - Contradiction check against company-context + decision log
  - 'So what?' test: every finding needs a business consequence

Step 2: Peer Verification (cross-functional)
  - Financial claims → CFO validates math
  - Revenue projections → CRO validates pipeline backing
  - Technical feasibility → CTO validates
  - People/hiring impact → CHRO validates
  - Skip for single-domain, low-stakes questions

Step 3: Critic Pre-Screen (high-stakes only)
  - Irreversible decisions, >20% runway impact, strategy changes
  - Executive Mentor finds weakest point before founder sees it
  - Suspicious consensus triggers mandatory pre-screen

Step 4: Course Correction (after founder feedback)
  - Approve → log + assign actions
  - Modify → re-verify changed parts
  - Reject → DO_NOT_RESURFACE + learn why
  - 30/60/90 day post-decision review

Board meeting contributions now require self-verified format with
confidence tags and source attribution on every finding.
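The Step 1 self-verification checks can be sketched as a linter over a finding. The tags, confidence emojis, and the "so what?" test come from the text above; the dict shape and field names are assumptions.

```python
# Sketch of Step 1 self-verification (assumed finding structure).
def verify_finding(finding: dict) -> list[str]:
    """Return the list of self-verification failures; empty means it passes."""
    problems = []
    if finding.get("tag") not in ("VERIFIED", "ASSUMED"):
        problems.append("missing [VERIFIED]/[ASSUMED] tag")
    if finding.get("confidence") not in ("🟢", "🟡", "🔴"):
        problems.append("missing confidence score")
    if not finding.get("source"):
        problems.append("no source attribution")
    if not finding.get("so_what"):
        problems.append("fails the 'so what?' test")
    return problems
```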

* fix: resolve PR review issues 1, 4, and minor observation

Issue 1: c-level-advisor/CLAUDE.md — completely rewritten
  - Was: 2 skills (CEO, CTO only), dated Nov 2025
  - Now: full 28-skill ecosystem map with architecture diagram,
    all roles/orchestration/cross-cutting/culture skills listed,
    design decisions, integration with other domains

Issue 4: Root CLAUDE.md — updated all stale counts
  - 87 → 134 skills across all 3 references
  - C-Level: 2 → 33 (10 roles + 5 mentor commands + 18 complementary)
  - Tool count: 160+ → 185+
  - Reference count: 200+ → 250+

Minor observation: Documented plugin.json convention
  - Explained in c-level-advisor/CLAUDE.md that only executive-mentor
    has plugin.json because only it has slash commands (/em: namespace)
  - Other skills are invoked by name through Chief of Staff or directly

Also fixed: README.md 88+ → 134 in two places (first line + skills section)

* fix: update all plugin/index registrations for 28-skill C-suite

1. c-level-advisor/.claude-plugin/plugin.json — v2.0.0
   - Was: 2 skills, generic description
   - Now: all 28 skills listed with descriptions, all 25 scripts,
     namespace 'cs', full ecosystem description

2. .codex/skills-index.json — added 18 complementary skills
   - Was: 10 roles only
   - Now: 28 total c-level entries (10 roles + 6 orchestration +
     6 cross-cutting + 6 culture)
   - Each with full description for skill discovery

3. .claude-plugin/marketplace.json — updated c-level-skills entry
   - Was: generic 2-skill description
   - Now: v2.0.0, full 28-skill ecosystem description,
     skills_count: 28, scripts_count: 25

* feat: add root SKILL.md for c-level-advisor ClawHub package

---------

Co-authored-by: Leo <leo@openclaw.ai>
Author: Alireza Rezvani
Date: 2026-03-06 01:35:08 +01:00
Committed by: GitHub
Parent: 3c7a5d98e2
Commit: 466aa13a7b
117 changed files with 33931 additions and 980 deletions

File: .claude-plugin/marketplace.json

@@ -101,8 +101,8 @@
{
"name": "c-level-skills",
"source": "./c-level-advisor",
"description": "2 C-level advisory skills: CEO advisor, CTO advisor",
"version": "1.0.0",
"description": "Complete virtual board of directors: 10 C-level advisory roles (CEO, CTO, COO, CPO, CMO, CFO, CRO, CISO, CHRO, Executive Mentor), 6 orchestration skills (Chief of Staff router, board meetings, decision logger, agent protocol, context engine, onboarding), 6 cross-cutting capabilities (board deck builder, scenario war room, competitive intel, org health, M&A playbook, international expansion), and 6 culture & collaboration frameworks. Features internal quality loop, two-layer memory, Phase 2 isolation, 25 Python tools, and structured communication standard. 28 skills total.",
"version": "2.0.0",
"author": {
"name": "Alireza Rezvani"
},
@@ -113,7 +113,9 @@
"strategy",
"leadership"
],
"category": "leadership"
"category": "leadership",
"skills_count": 28,
"scripts_count": 25
},
{
"name": "pm-skills",
@@ -340,4 +342,4 @@
]
}
]
}
}

File: .codex/skills-index.json

@@ -41,6 +41,54 @@
"category": "c-level",
"description": "Technical leadership guidance for engineering teams, architecture decisions, and technology strategy. Includes tech debt analyzer, team scaling calculator, engineering metrics frameworks, technology evaluation tools, and ADR templates. Use when assessing technical debt, scaling engineering teams, evaluating technologies, making architecture decisions, establishing engineering metrics, or when user mentions CTO, tech debt, technical debt, team scaling, architecture decisions, technology evaluation, engineering metrics, DORA metrics, or technology strategy."
},
{
"name": "coo-advisor",
"source": "../../c-level-advisor/coo-advisor",
"category": "c-level",
"description": "Operations leadership for scaling companies. Process design, OKR execution, operational cadence, and scaling playbooks. Use when designing operations, setting up OKRs, building processes, scaling teams, or analyzing bottlenecks."
},
{
"name": "cpo-advisor",
"source": "../../c-level-advisor/cpo-advisor",
"category": "c-level",
"description": "Product leadership for vision, portfolio strategy, and product-market fit. Use when evaluating PMF, designing product org, making invest/maintain/kill decisions, or building product strategy for boards."
},
{
"name": "cmo-advisor",
"source": "../../c-level-advisor/cmo-advisor",
"category": "c-level",
"description": "Marketing leadership for brand strategy, growth models, and budget allocation. Use when designing growth engines, optimizing channel mix, building marketing org, or planning brand positioning."
},
{
"name": "cfo-advisor",
"source": "../../c-level-advisor/cfo-advisor",
"category": "c-level",
"description": "Financial leadership for startup fundraising, unit economics, and cash management. Use when modeling burn rate, preparing for fundraising, analyzing unit economics, or building board financial packages."
},
{
"name": "cro-advisor",
"source": "../../c-level-advisor/cro-advisor",
"category": "c-level",
"description": "Revenue leadership for sales strategy, pricing, and net revenue retention. Use when forecasting revenue, designing sales models, optimizing pricing, or analyzing churn and expansion."
},
{
"name": "ciso-advisor",
"source": "../../c-level-advisor/ciso-advisor",
"category": "c-level",
"description": "Security leadership for risk management, compliance roadmaps, and incident response. Use when quantifying security risk, planning SOC2/ISO27001/HIPAA compliance, or building security strategy."
},
{
"name": "chro-advisor",
"source": "../../c-level-advisor/chro-advisor",
"category": "c-level",
"description": "People leadership for hiring plans, compensation frameworks, and org design. Use when building hiring plans, designing comp bands, planning org structure, or addressing retention."
},
{
"name": "executive-mentor",
"source": "../../c-level-advisor/executive-mentor",
"category": "c-level",
"description": "Adversarial executive coach. Stress-tests plans, preps for board meetings, and frameworks for hard decisions. Use /em:challenge, /em:board-prep, /em:hard-call, /em:stress-test, /em:postmortem."
},
{
"name": "aws-solution-architect",
"source": "../../engineering-team/aws-solution-architect",
@@ -382,6 +430,114 @@
"source": "../../ra-qm-team/risk-management-specialist",
"category": "ra-qm",
"description": "Medical device risk management specialist implementing ISO 14971 throughout product lifecycle. Provides risk analysis, risk evaluation, risk control, and post-production information analysis."
},
{
"name": "cs-onboard",
"source": "../../c-level-advisor/cs-onboard",
"category": "c-level",
"description": "Founder interview that creates company-context.md. Captures stage, team size, burn rate, competitive landscape, and strategic priorities. Every C-suite agent reads this file before advising."
},
{
"name": "chief-of-staff",
"source": "../../c-level-advisor/chief-of-staff",
"category": "c-level",
"description": "Central router for the C-suite ecosystem. Routes questions to the right role, triggers board meetings, enforces quality gates, and synthesizes multi-role outputs. Use when unsure which C-level role to consult."
},
{
"name": "board-meeting",
"source": "../../c-level-advisor/board-meeting",
"category": "c-level",
"description": "6-phase multi-agent deliberation protocol. Context gathering, isolated independent analysis (Phase 2), adversarial critique (Phase 3), synthesis, founder review, and decision extraction. Use for strategic decisions requiring multiple perspectives."
},
{
"name": "decision-logger",
"source": "../../c-level-advisor/decision-logger",
"category": "c-level",
"description": "Two-layer memory system for board decisions. Layer 1: raw transcripts (reference only). Layer 2: approved decisions that feed future meetings. Tracks DO_NOT_RESURFACE flags on rejected proposals."
},
{
"name": "agent-protocol",
"source": "../../c-level-advisor/agent-protocol",
"category": "c-level",
"description": "Inter-agent communication protocol. Defines invocation syntax, loop prevention (no self-invoke, max depth 2, no circular), isolation rules, internal quality loop (self-verify, peer-verify, critic pre-screen), and user communication standard."
},
{
"name": "context-engine",
"source": "../../c-level-advisor/context-engine",
"category": "c-level",
"description": "Company context loading and stage-adaptive configuration. Reads company-context.md, applies temporal horizons by stage (seed/Series A/B+), manages context freshness."
},
{
"name": "board-deck-builder",
"source": "../../c-level-advisor/board-deck-builder",
"category": "c-level",
"description": "Assembles board and investor update presentations. Pulls data from multiple C-suite roles into structured board deck format with narrative, metrics, and asks."
},
{
"name": "scenario-war-room",
"source": "../../c-level-advisor/scenario-war-room",
"category": "c-level",
"description": "Multi-variable what-if modeling for strategic scenarios. Stress-tests business plans against market shifts, competitive moves, and internal constraints. Uses multiple C-suite perspectives."
},
{
"name": "competitive-intel",
"source": "../../c-level-advisor/competitive-intel",
"category": "c-level",
"description": "Systematic competitor tracking and market intelligence. Competitor profiles, feature matrices, pricing analysis, positioning maps, and strategic response recommendations."
},
{
"name": "org-health-diagnostic",
"source": "../../c-level-advisor/org-health-diagnostic",
"category": "c-level",
"description": "Cross-functional organizational health scoring. Surveys engineering velocity, team morale, process efficiency, and leadership effectiveness. Outputs actionable health scorecard."
},
{
"name": "ma-playbook",
"source": "../../c-level-advisor/ma-playbook",
"category": "c-level",
"description": "M&A playbook for acquiring or being acquired. Due diligence checklists, valuation frameworks, integration planning, and post-merger execution. Use when evaluating acquisition targets or preparing for exit."
},
{
"name": "intl-expansion",
"source": "../../c-level-advisor/intl-expansion",
"category": "c-level",
"description": "International market entry strategy. Regional playbooks, localization requirements, regulatory considerations, go-to-market by geography, and partnership models for new markets."
},
{
"name": "culture-architect",
"source": "../../c-level-advisor/culture-architect",
"category": "c-level",
"description": "Build and operationalize company culture systematically. Values definition, culture measurement, ritual design, and culture-strategy alignment. Use when culture feels undefined or misaligned with strategy."
},
{
"name": "company-os",
"source": "../../c-level-advisor/company-os",
"category": "c-level",
"description": "Operating system implementation (EOS, Scaling Up). Meeting cadences, accountability structures, scorecard design, and issue resolution. Use when the company needs a structured operating rhythm."
},
{
"name": "founder-coach",
"source": "../../c-level-advisor/founder-coach",
"category": "c-level",
"description": "Founder personal development and leadership growth. CEO transitions (builder to manager to leader), energy management, decision fatigue, and founder-specific challenges."
},
{
"name": "strategic-alignment",
"source": "../../c-level-advisor/strategic-alignment",
"category": "c-level",
"description": "Strategy cascade and cross-functional alignment. Detects silos, misaligned OKRs, and strategy-execution gaps. Ensures company, department, and team goals connect coherently."
},
{
"name": "change-management",
"source": "../../c-level-advisor/change-management",
"category": "c-level",
"description": "ADKAR-based change rollout and transformation planning. Stakeholder analysis, communication planning, resistance management, and adoption tracking for organizational changes."
},
{
"name": "internal-narrative",
"source": "../../c-level-advisor/internal-narrative",
"category": "c-level",
"description": "One consistent story across all audiences and channels. Company narrative architecture, messaging hierarchy, all-hands talking points, and narrative-strategy alignment."
}
],
"categories": {
@@ -390,11 +546,7 @@
"source": "../../business-growth",
"description": "Customer success, sales engineering, and revenue operations skills"
},
"c-level": {
"count": 2,
"source": "../../c-level-advisor",
"description": "Executive leadership and advisory skills"
},
"c-level": 28,
"engineering": {
"count": 23,
"source": "../../engineering-team",
@@ -426,4 +578,4 @@
"description": "Regulatory affairs and quality management skills"
}
}
}
}

File: CLAUDE.md

@@ -6,7 +6,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
This is a **comprehensive skills library** for Claude AI and Claude Code - reusable, production-ready skill packages that bundle domain expertise, best practices, analysis tools, and strategic frameworks. The repository provides modular skills that teams can download and use directly in their workflows.
- **Current Scope:** 87 production-ready skills across 9 domains with 160+ Python automation tools and 200+ reference guides.
+ **Current Scope:** 134 production-ready skills across 9 domains with 185+ Python automation tools and 250+ reference guides.
**Key Distinction**: This is NOT a traditional application. It's a library of skill packages meant to be extracted and deployed by users into their own Claude workflows.
@@ -133,8 +133,8 @@ See [standards/git/git-workflow-standards.md](standards/git/git-workflow-standar
## Roadmap
- **Phase 1-2 Complete:** 87 production-ready skills deployed across 9 domains
- - Marketing (7), C-Level (2), Product (8), PM (6), Engineering Core (22), Engineering POWERFUL (25), RA/QM (12), Business & Growth (4), Finance (1)
+ **Phase 1-2 Complete:** 134 production-ready skills deployed across 9 domains
+ - Marketing (7), C-Level (33), Product (8), PM (6), Engineering Core (23), Engineering Advanced (14), RA/QM (12), Business & Growth (4), Finance (1)
- 160+ Python automation tools, 200+ reference guides
- Complete enterprise coverage from marketing through regulatory compliance, sales, customer success, and finance
@@ -182,4 +182,4 @@ See domain-specific roadmaps in each skill folder's README.md or roadmap files.
**Last Updated:** March 2026
**Version:** v2.0.0
- **Status:** 87 skills deployed across 9 domains, plugin marketplace active
+ **Status:** 134 skills deployed across 9 domains, plugin marketplace active

File: README.md

@@ -1,9 +1,9 @@
# Claude Skills Library
- **88+ production-ready skill packages for Claude Code, OpenAI Codex, and OpenClaw** — reusable expertise bundles that transform AI agents into specialized professionals across engineering, product, marketing, compliance, and more.
+ **134 production-ready skill packages for Claude Code, OpenAI Codex, and OpenClaw** — reusable expertise bundles that transform AI agents into specialized professionals across engineering, product, marketing, compliance, and more.
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![Skills](https://img.shields.io/badge/Skills-88-brightgreen.svg)](#skills-overview)
+ [![Skills](https://img.shields.io/badge/Skills-134-brightgreen.svg)](#skills-overview)
[![Stars](https://img.shields.io/github/stars/alirezarezvani/claude-skills?style=flat)](https://github.com/alirezarezvani/claude-skills/stargazers)
[![SkillCheck Validated](https://img.shields.io/badge/SkillCheck-Validated-4c1)](https://getskillcheck.com)
@@ -34,7 +34,7 @@ Skills are modular instruction packages that give AI agents domain expertise the
/plugin install marketing-skills@claude-code-skills # 7 marketing skills
/plugin install ra-qm-skills@claude-code-skills # 12 regulatory/quality
/plugin install pm-skills@claude-code-skills # 6 project management
- /plugin install c-level-skills@claude-code-skills # 2 C-level advisory
+ /plugin install c-level-skills@claude-code-skills # 10 C-level advisory (full C-suite)
/plugin install business-growth-skills@claude-code-skills # 4 business & growth
/plugin install finance-skills@claude-code-skills # 1 finance
@@ -69,7 +69,7 @@ git clone https://github.com/alirezarezvani/claude-skills.git
## Skills Overview
- **88+ skills across 9 domains:**
+ **134 skills across 9 domains:**
| Domain | Skills | Highlights | Details |
|--------|--------|------------|---------|
@@ -81,7 +81,7 @@ git clone https://github.com/alirezarezvani/claude-skills.git
| **📣 Marketing** | 7 | Content creator, demand gen, PMM strategy, ASO, social media, campaign analytics, prompt engineering | [marketing-skill/](marketing-skill/) |
| **📋 Project Management** | 6 | Senior PM, scrum master, Jira, Confluence, Atlassian admin, templates | [project-management/](project-management/) |
| **🏥 Regulatory & QM** | 12 | ISO 13485, MDR 2017/745, FDA, ISO 27001, GDPR, CAPA, risk management | [ra-qm-team/](ra-qm-team/) |
- | **💼 C-Level Advisory** | 2 | CEO advisor, CTO advisor | [c-level-advisor/](c-level-advisor/) |
+ | **💼 C-Level Advisory** | 28 | Full C-suite (10 roles) + orchestration + board meetings + culture & collaboration | [c-level-advisor/](c-level-advisor/) |
| **📈 Business & Growth** | 4 | Customer success, sales engineer, revenue ops, contracts & proposals | [business-growth/](business-growth/) |
| **💰 Finance** | 1 | Financial analyst (DCF, budgeting, forecasting) | [finance/](finance/) |

File: c-level-advisor/.claude-plugin/plugin.json

@@ -1,7 +1,8 @@
{
"name": "c-level-skills",
"description": "2 production-ready C-level advisory skills: CEO advisor for strategic decision-making and CTO advisor for technical leadership",
"version": "1.0.0",
"namespace": "cs",
"description": "Complete virtual board of directors: 10 C-level advisory roles, 6 orchestration skills, 6 cross-cutting capabilities, and 6 culture & collaboration frameworks. Features internal quality loop (self-verify → peer-verify → critic pre-screen), two-layer memory, board meeting protocol with Phase 2 isolation, proactive triggers, and structured user communication standard.",
"version": "2.0.0",
"author": {
"name": "Alireza Rezvani",
"url": "https://alirezarezvani.com"
@@ -9,5 +10,273 @@
"homepage": "https://github.com/alirezarezvani/claude-skills/tree/main/c-level-advisor",
"repository": "https://github.com/alirezarezvani/claude-skills",
"license": "MIT",
"skills": "./"
"skills": [
{
"name": "CEO Advisor",
"description": "Strategic planning, business model, stakeholder management. Reasoning: Tree of Thought.",
"path": "ceo-advisor/SKILL.md"
},
{
"name": "CTO Advisor",
"description": "Technical leadership, architecture decisions, engineering strategy. Reasoning: ReAct.",
"path": "cto-advisor/SKILL.md"
},
{
"name": "COO Advisor",
"description": "Operations leadership, process design, OKR execution. Reasoning: Step by Step.",
"path": "coo-advisor/SKILL.md"
},
{
"name": "CPO Advisor",
"description": "Product vision, portfolio strategy, product-market fit. Reasoning: First Principles.",
"path": "cpo-advisor/SKILL.md"
},
{
"name": "CMO Advisor",
"description": "Brand strategy, growth models, marketing budget allocation. Reasoning: Recursion of Thought.",
"path": "cmo-advisor/SKILL.md"
},
{
"name": "CFO Advisor",
"description": "Financial leadership, fundraising, unit economics, cash management. Reasoning: Chain of Thought.",
"path": "cfo-advisor/SKILL.md"
},
{
"name": "CRO Advisor",
"description": "Revenue leadership, sales strategy, pricing, net revenue retention. Reasoning: Chain of Thought.",
"path": "cro-advisor/SKILL.md"
},
{
"name": "CISO Advisor",
"description": "Security leadership, risk quantification, compliance roadmaps. Reasoning: Risk-Based.",
"path": "ciso-advisor/SKILL.md"
},
{
"name": "CHRO Advisor",
"description": "People leadership, hiring plans, compensation, org design. Reasoning: Empathy + Data.",
"path": "chro-advisor/SKILL.md"
},
{
"name": "Executive Mentor",
"description": "Adversarial executive coach with /em: slash commands. Reasoning: Adversarial.",
"path": "executive-mentor/SKILL.md"
},
{
"name": "C-Suite Onboard",
"description": "Founder interview → company-context.md that all agents read.",
"path": "cs-onboard/SKILL.md"
},
{
"name": "Chief of Staff",
"description": "Central router — routes questions to the right role, triggers board meetings.",
"path": "chief-of-staff/SKILL.md"
},
{
"name": "Board Meeting",
"description": "6-phase multi-agent deliberation with Phase 2 isolation.",
"path": "board-meeting/SKILL.md"
},
{
"name": "Decision Logger",
"description": "Two-layer memory: raw transcripts + approved decisions only.",
"path": "decision-logger/SKILL.md"
},
{
"name": "Agent Protocol",
"description": "Inter-agent invocation, loop prevention, quality loop, communication standard.",
"path": "agent-protocol/SKILL.md"
},
{
"name": "Context Engine",
"description": "Company context loading and stage-adaptive configuration.",
"path": "context-engine/SKILL.md"
},
{
"name": "Board Deck Builder",
"description": "Assembles board and investor update presentations.",
"path": "board-deck-builder/SKILL.md"
},
{
"name": "Scenario War Room",
"description": "Multi-variable what-if modeling for strategic scenarios.",
"path": "scenario-war-room/SKILL.md"
},
{
"name": "Competitive Intel",
"description": "Systematic competitor tracking and market intelligence.",
"path": "competitive-intel/SKILL.md"
},
{
"name": "Org Health Diagnostic",
"description": "Cross-functional organizational health scoring.",
"path": "org-health-diagnostic/SKILL.md"
},
{
"name": "M&A Playbook",
"description": "Due diligence, integration planning for acquisitions.",
"path": "ma-playbook/SKILL.md"
},
{
"name": "International Expansion",
"description": "Market entry strategy, regional playbooks, localization.",
"path": "intl-expansion/SKILL.md"
},
{
"name": "Culture Architect",
"description": "Build and operationalize company culture systematically.",
"path": "culture-architect/SKILL.md"
},
{
"name": "Company OS",
"description": "EOS/Scaling Up operating system implementation.",
"path": "company-os/SKILL.md"
},
{
"name": "Founder Coach",
"description": "Founder personal development and leadership growth.",
"path": "founder-coach/SKILL.md"
},
{
"name": "Strategic Alignment",
"description": "Strategy cascade, silo detection, cross-functional alignment.",
"path": "strategic-alignment/SKILL.md"
},
{
"name": "Change Management",
"description": "ADKAR-based change rollout and transformation planning.",
"path": "change-management/SKILL.md"
},
{
"name": "Internal Narrative",
"description": "One consistent story across all audiences and channels.",
"path": "internal-narrative/SKILL.md"
}
],
"scripts": [
{
"name": "strategy_analyzer",
"path": "ceo-advisor/scripts/strategy_analyzer.py",
"run": "python ceo-advisor/scripts/strategy_analyzer.py"
},
{
"name": "financial_scenario_analyzer",
"path": "ceo-advisor/scripts/financial_scenario_analyzer.py",
"run": "python ceo-advisor/scripts/financial_scenario_analyzer.py"
},
{
"name": "tech_debt_analyzer",
"path": "cto-advisor/scripts/tech_debt_analyzer.py",
"run": "python cto-advisor/scripts/tech_debt_analyzer.py"
},
{
"name": "team_scaling_calculator",
"path": "cto-advisor/scripts/team_scaling_calculator.py",
"run": "python cto-advisor/scripts/team_scaling_calculator.py"
},
{
"name": "ops_efficiency_analyzer",
"path": "coo-advisor/scripts/ops_efficiency_analyzer.py",
"run": "python coo-advisor/scripts/ops_efficiency_analyzer.py"
},
{
"name": "okr_tracker",
"path": "coo-advisor/scripts/okr_tracker.py",
"run": "python coo-advisor/scripts/okr_tracker.py"
},
{
"name": "pmf_scorer",
"path": "cpo-advisor/scripts/pmf_scorer.py",
"run": "python cpo-advisor/scripts/pmf_scorer.py"
},
{
"name": "portfolio_analyzer",
"path": "cpo-advisor/scripts/portfolio_analyzer.py",
"run": "python cpo-advisor/scripts/portfolio_analyzer.py"
},
{
"name": "marketing_budget_modeler",
"path": "cmo-advisor/scripts/marketing_budget_modeler.py",
"run": "python cmo-advisor/scripts/marketing_budget_modeler.py"
},
{
"name": "growth_model_simulator",
"path": "cmo-advisor/scripts/growth_model_simulator.py",
"run": "python cmo-advisor/scripts/growth_model_simulator.py"
},
{
"name": "burn_rate_calculator",
"path": "cfo-advisor/scripts/burn_rate_calculator.py",
"run": "python cfo-advisor/scripts/burn_rate_calculator.py"
},
{
"name": "unit_economics_analyzer",
"path": "cfo-advisor/scripts/unit_economics_analyzer.py",
"run": "python cfo-advisor/scripts/unit_economics_analyzer.py"
},
{
"name": "fundraising_model",
"path": "cfo-advisor/scripts/fundraising_model.py",
"run": "python cfo-advisor/scripts/fundraising_model.py"
},
{
"name": "revenue_forecast_model",
"path": "cro-advisor/scripts/revenue_forecast_model.py",
"run": "python cro-advisor/scripts/revenue_forecast_model.py"
},
{
"name": "churn_analyzer",
"path": "cro-advisor/scripts/churn_analyzer.py",
"run": "python cro-advisor/scripts/churn_analyzer.py"
},
{
"name": "risk_quantifier",
"path": "ciso-advisor/scripts/risk_quantifier.py",
"run": "python ciso-advisor/scripts/risk_quantifier.py"
},
{
"name": "compliance_tracker",
"path": "ciso-advisor/scripts/compliance_tracker.py",
"run": "python ciso-advisor/scripts/compliance_tracker.py"
},
{
"name": "hiring_plan_modeler",
"path": "chro-advisor/scripts/hiring_plan_modeler.py",
"run": "python chro-advisor/scripts/hiring_plan_modeler.py"
},
{
"name": "comp_benchmarker",
"path": "chro-advisor/scripts/comp_benchmarker.py",
"run": "python chro-advisor/scripts/comp_benchmarker.py"
},
{
"name": "decision_matrix_scorer",
"path": "executive-mentor/scripts/decision_matrix_scorer.py",
"run": "python executive-mentor/scripts/decision_matrix_scorer.py"
},
{
"name": "stakeholder_mapper",
"path": "executive-mentor/scripts/stakeholder_mapper.py",
"run": "python executive-mentor/scripts/stakeholder_mapper.py"
},
{
"name": "health_scorer",
"path": "org-health-diagnostic/scripts/health_scorer.py",
"run": "python org-health-diagnostic/scripts/health_scorer.py"
},
{
"name": "alignment_checker",
"path": "strategic-alignment/scripts/alignment_checker.py",
"run": "python strategic-alignment/scripts/alignment_checker.py"
},
{
"name": "decision_tracker",
"path": "decision-logger/scripts/decision_tracker.py",
"run": "python decision-logger/scripts/decision_tracker.py"
},
{
"name": "culture_scorer",
"path": "culture-architect/scripts/culture_scorer.py",
"run": "python culture-architect/scripts/culture_scorer.py"
}
]
}


@@ -1,143 +1,121 @@
# C-Level Advisory Skills - Claude Code Guidance

A complete virtual board of directors: 28 skills covering 10 executive roles, orchestration, cross-cutting capabilities, and culture & collaboration frameworks.

## Architecture

```
/cs:setup (Founder Interview) → company-context.md
                    │
           Chief of Staff (Router)
           ┌───────────┼───────────┐
       10 Roles   6 Cross-Cut   6 Culture
           │           │           │
           └───────────┼───────────┘
          Executive Mentor (Critic)
                    │
       Decision Logger (Two-Layer Memory)
```

## Skills Overview

### C-Suite Roles (10)

| Role | Folder | Reasoning Technique | Scripts |
|------|--------|---------------------|---------|
| **CEO** | `ceo-advisor/` | Tree of Thought | strategy_analyzer, financial_scenario_analyzer |
| **CTO** | `cto-advisor/` | ReAct | tech_debt_analyzer, team_scaling_calculator |
| **COO** | `coo-advisor/` | Step by Step | ops_efficiency_analyzer, okr_tracker |
| **CPO** | `cpo-advisor/` | First Principles | pmf_scorer, portfolio_analyzer |
| **CMO** | `cmo-advisor/` | Recursion of Thought | marketing_budget_modeler, growth_model_simulator |
| **CFO** | `cfo-advisor/` | Chain of Thought | burn_rate_calculator, unit_economics_analyzer, fundraising_model |
| **CRO** | `cro-advisor/` | Chain of Thought | revenue_forecast_model, churn_analyzer |
| **CISO** | `ciso-advisor/` | Risk-Based | risk_quantifier, compliance_tracker |
| **CHRO** | `chro-advisor/` | Empathy + Data | hiring_plan_modeler, comp_benchmarker |
| **Executive Mentor** | `executive-mentor/` | Adversarial | decision_matrix_scorer, stakeholder_mapper |

### Orchestration (6)

| Skill | Folder | Purpose |
|-------|--------|---------|
| **C-Suite Onboard** | `cs-onboard/` | Founder interview → company-context.md |
| **Chief of Staff** | `chief-of-staff/` | Routes questions, triggers board meetings |
| **Board Meeting** | `board-meeting/` | 6-phase multi-agent deliberation |
| **Decision Logger** | `decision-logger/` | Two-layer memory (raw + approved) |
| **Agent Protocol** | `agent-protocol/` | Inter-agent invocation, loop prevention, quality loop |
| **Context Engine** | `context-engine/` | Company context loading + anonymization |

### Cross-Cutting Capabilities (6)

| Skill | Folder | Purpose |
|-------|--------|---------|
| **Board Deck Builder** | `board-deck-builder/` | Assembles board/investor updates |
| **Scenario War Room** | `scenario-war-room/` | Multi-variable what-if modeling |
| **Competitive Intel** | `competitive-intel/` | Systematic competitor tracking |
| **Org Health Diagnostic** | `org-health-diagnostic/` | Cross-functional health scoring |
| **M&A Playbook** | `ma-playbook/` | Acquiring or being acquired |
| **International Expansion** | `intl-expansion/` | Market entry strategy |

### Culture & Collaboration (6)

| Skill | Folder | Purpose |
|-------|--------|---------|
| **Culture Architect** | `culture-architect/` | Build and operationalize culture |
| **Company OS** | `company-os/` | EOS/Scaling Up operating system |
| **Founder Coach** | `founder-coach/` | Founder development and growth |
| **Strategic Alignment** | `strategic-alignment/` | Strategy cascade, silo detection |
| **Change Management** | `change-management/` | ADKAR-based change rollout |
| **Internal Narrative** | `internal-narrative/` | One story across all audiences |

## Executive Mentor Slash Commands

The only skill with a `plugin.json` (namespace: `em`) because it has slash commands. Other skills are invoked by name through the Chief of Staff router or directly by the user. This is intentional — only add `plugin.json` when a skill has dedicated slash commands that need a namespace.

| Command | Purpose |
|---------|---------|
| `/em:challenge` | Pre-mortem analysis of any plan |
| `/em:board-prep` | Board meeting preparation |
| `/em:hard-call` | Framework for hard decisions |
| `/em:stress-test` | Stress-test any assumption |
| `/em:postmortem` | Honest retrospective |

## Key Design Decisions

- **Two-layer memory:** Raw transcripts (reference) + approved decisions only (feeds future meetings). Prevents hallucinated consensus.
- **Phase 2 isolation:** During board meetings, agents think independently before cross-examination.
- **Internal Quality Loop:** Self-verify → peer-verify → critic pre-screen → present. No unverified output reaches the founder.
- **Proactive triggers:** Every role has context-driven early warnings that surface issues without being asked.
- **User Communication Standard:** Bottom Line → What → Why → How to Act → Your Decision. Results only, no process narration.

## Python Tools (25 total)

All scripts are stdlib-only, CLI-first, with JSON output and embedded sample data.

```bash
# Examples
python cfo-advisor/scripts/burn_rate_calculator.py
python cro-advisor/scripts/churn_analyzer.py
python cpo-advisor/scripts/pmf_scorer.py
python org-health-diagnostic/scripts/health_scorer.py
python strategic-alignment/scripts/alignment_checker.py
python decision-logger/scripts/decision_tracker.py
```

## Integration with Other Domains

Each advisory role sits a layer above an execution-focused skill set elsewhere in the repository:

| C-Level Role | Layers Above |
|--------------|--------------|
| CMO | marketing-skill/ (content, demand gen, ASO execution) |
| CFO | finance/financial-analyst (spreadsheets, DCF) |
| CRO | business-growth/ (revenue ops, sales engineering) |
| CISO | ra-qm-team/ (ISO 27001 checklists, ISMS audits) |
| CPO | product-team/ (PM toolkit, user stories, sprint planning) |

---

**Last Updated:** 2026-03-05
**Skills Deployed:** 28 (10 C-suite roles + 6 orchestration + 6 cross-cutting + 6 culture & collaboration)
**Python Tools:** 25 (stdlib-only)
**Reference Docs:** 52

c-level-advisor/SKILL.md

@@ -0,0 +1,55 @@
---
name: c-level-advisor
description: "Complete virtual board of directors — 28 skills covering 10 C-level roles, orchestration, cross-cutting capabilities, and culture frameworks. Features internal quality loop, two-layer memory, board meeting protocol with Phase 2 isolation, proactive triggers, and structured communication standard."
license: MIT
metadata:
  version: 2.0.0
  author: Alireza Rezvani
  category: c-level
  domain: executive-advisory
  updated: 2026-03-05
  skills_count: 28
  scripts_count: 25
  references_count: 52
---
# C-Level Advisory Ecosystem
A complete virtual board of directors for founders and executives.
## Quick Start
```
1. Run /cs:setup → creates company-context.md (all agents read this)
2. Ask any strategic question → Chief of Staff routes to the right role
3. For big decisions → /cs:board triggers a multi-role board meeting
```
## What's Included
### 10 C-Suite Roles
CEO, CTO, COO, CPO, CMO, CFO, CRO, CISO, CHRO, Executive Mentor
### 6 Orchestration Skills
Founder Onboard, Chief of Staff (router), Board Meeting, Decision Logger, Agent Protocol, Context Engine
### 6 Cross-Cutting Capabilities
Board Deck Builder, Scenario War Room, Competitive Intel, Org Health Diagnostic, M&A Playbook, International Expansion
### 6 Culture & Collaboration
Culture Architect, Company OS, Founder Coach, Strategic Alignment, Change Management, Internal Narrative
## Key Features
- **Internal Quality Loop:** Self-verify → peer-verify → critic pre-screen → present
- **Two-Layer Memory:** Raw transcripts + approved decisions only (prevents hallucinated consensus)
- **Board Meeting Isolation:** Phase 2 independent analysis before cross-examination
- **Proactive Triggers:** Context-driven early warnings without being asked
- **Structured Output:** Bottom Line → What → Why → How to Act → Your Decision
- **25 Python Tools:** All stdlib-only, CLI-first, JSON output, zero dependencies
## See Also
- `CLAUDE.md` — full architecture diagram and integration guide
- `agent-protocol/SKILL.md` — communication standard and quality loop details
- `chief-of-staff/SKILL.md` — routing matrix for all 28 skills

agent-protocol/SKILL.md

@@ -0,0 +1,418 @@
---
name: agent-protocol
description: "Inter-agent communication protocol for C-suite agent teams. Defines invocation syntax, loop prevention, isolation rules, and response formats. Use when C-suite agents need to query each other, coordinate cross-functional analysis, or run board meetings with multiple agent roles."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: agent-orchestration
  updated: 2026-03-05
  frameworks: invocation-patterns
---
# Inter-Agent Protocol
How C-suite agents talk to each other. Rules that prevent chaos, loops, and circular reasoning.
## Keywords
agent protocol, inter-agent communication, agent invocation, agent orchestration, multi-agent, c-suite coordination, agent chain, loop prevention, agent isolation, board meeting protocol
## Invocation Syntax
Any agent can query another using:
```
[INVOKE:role|question]
```
**Examples:**
```
[INVOKE:cfo|What's the burn rate impact of hiring 5 engineers in Q3?]
[INVOKE:cto|Can we realistically ship this feature by end of quarter?]
[INVOKE:chro|What's our typical time-to-hire for senior engineers?]
[INVOKE:cro|What does our pipeline look like for the next 90 days?]
```
**Valid roles:** `ceo`, `cfo`, `cro`, `cmo`, `cpo`, `cto`, `chro`, `coo`, `ciso`
## Response Format
Invoked agents respond using this structure:
```
[RESPONSE:role]
Key finding: [one line — the actual answer]
Supporting data:
- [data point 1]
- [data point 2]
- [data point 3 — optional]
Confidence: [high | medium | low]
Caveat: [one line — what could make this wrong]
[/RESPONSE]
```
**Example:**
```
[RESPONSE:cfo]
Key finding: Hiring 5 engineers in Q3 cuts runway from 14 to 9 months at current burn.
Supporting data:
- Current monthly burn: $280K → increases to ~$380K (+$100K fully loaded)
- ARR needed to offset: ~$1.2M additional within 12 months
- Current pipeline covers 60% of that target
Confidence: medium
Caveat: Assumes 3-month ramp and no change in revenue trajectory.
[/RESPONSE]
```
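The envelope above is regular enough to parse mechanically. A minimal sketch, assuming responses arrive as plain text; the `parse_response` helper and its field names are illustrative, not part of the protocol:

```python
import re

# Match one [RESPONSE:role] ... [/RESPONSE] envelope, capturing role and body.
RESPONSE_RE = re.compile(
    r"\[RESPONSE:(?P<role>\w+)\]\s*(?P<body>.*?)\[/RESPONSE\]", re.DOTALL)

def parse_response(text: str) -> dict:
    """Extract the structured fields from a [RESPONSE:role] envelope."""
    m = RESPONSE_RE.search(text)
    if not m:
        raise ValueError("no [RESPONSE:...] block found")
    body = m.group("body")
    fields = {"role": m.group("role")}
    for label in ("Key finding", "Confidence", "Caveat"):
        fm = re.search(rf"{label}:\s*(.+)", body)
        fields[label.lower().replace(" ", "_")] = fm.group(1).strip() if fm else None
    # Supporting data bullets: lines starting with "- "
    fields["supporting_data"] = re.findall(r"^- (.+)$", body, re.MULTILINE)
    return fields

reply = """[RESPONSE:cfo]
Key finding: Hiring 5 engineers in Q3 cuts runway from 14 to 9 months.
Supporting data:
- Current monthly burn: $280K
Confidence: medium
Caveat: Assumes a 3-month ramp.
[/RESPONSE]"""
fields = parse_response(reply)
print(fields["role"], fields["confidence"])  # cfo medium
```

Because every role answers in the same shape, downstream steps (conflict detection, synthesis) can treat responses as data rather than free text.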
## Loop Prevention (Hard Rules)
These rules are enforced unconditionally. No exceptions.
### Rule 1: No Self-Invocation
An agent cannot invoke itself.
```
❌ CFO → [INVOKE:cfo|...] — BLOCKED
```
### Rule 2: Maximum Depth = 2
Chains can go A→B→C. The third hop is blocked.
```
✅ CRO → CFO → COO (depth 2)
❌ CRO → CFO → COO → CHRO (depth 3 — BLOCKED)
```
### Rule 3: No Circular Calls
If agent A called agent B, agent B cannot call agent A in the same chain.
```
✅ CRO → CFO → CMO
❌ CRO → CFO → CRO (circular — BLOCKED)
```
### Rule 4: Chain Tracking
Each invocation carries its call chain. Format:
```
[CHAIN: cro → cfo → coo]
```
Agents check this chain before responding with another invocation.
**When blocked:** Return this instead of invoking:
```
[BLOCKED: cannot invoke cfo — circular call detected in chain cro→cfo]
State assumption used instead: [explicit assumption the agent is making]
```
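The four rules above can be enforced by a small validator run before every invocation. A sketch, assuming the chain is tracked as a list of role names; the function name is illustrative:

```python
def check_invocation(chain: list[str], target: str) -> tuple[bool, str]:
    """Validate a proposed [INVOKE:target|...] against the current call chain.

    `chain` is the chain so far, e.g. ["cro", "cfo"] for [CHAIN: cro → cfo];
    `target` is the role about to be invoked.
    """
    caller = chain[-1]
    if target == caller:  # Rule 1: no self-invocation
        return False, f"[BLOCKED: {caller} cannot invoke itself]"
    if len(chain) >= 3:   # Rule 2: maximum depth = 2, the third hop is blocked
        return False, "[BLOCKED: chain at depth 2 — state assumption used instead]"
    if target in chain:   # Rule 3: no circular calls within the same chain
        return False, (f"[BLOCKED: cannot invoke {target} — circular call "
                       f"detected in chain {'→'.join(chain)}]")
    # Rule 4: the approved invocation carries the extended chain
    return True, f"[CHAIN: {' → '.join(chain + [target])}]"

print(check_invocation(["cro", "cfo"], "coo")[1])  # [CHAIN: cro → cfo → coo]
print(check_invocation(["cro", "cfo"], "cro")[1])
```

A blocked invocation returns the `[BLOCKED: ...]` string verbatim, so the agent can fall back to an explicit `[ASSUMPTION: ...]` instead.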
## Isolation Rules
### Board Meeting Phase 2 (Independent Analysis)
**NO invocations allowed.** Each role forms independent views before cross-pollination.
- Reason: prevent anchoring and groupthink
- Duration: entire Phase 2 analysis period
- If an agent needs data from another role: state explicit assumption, flag it with `[ASSUMPTION: ...]`
### Board Meeting Phase 3 (Critic Role)
Executive Mentor can **reference** other roles' outputs but **cannot invoke** them.
- Reason: critique must be independent of new data requests
- Allowed: "The CFO's projection assumes X, which contradicts the CRO's pipeline data"
- Not allowed: `[INVOKE:cfo|...]` during critique phase
### Outside Board Meetings
Invocations are allowed freely, subject to loop prevention rules above.
## When to Invoke vs When to Assume
**Invoke when:**
- The question requires domain-specific data you don't have
- An error here would materially change the recommendation
- The question is cross-functional by nature (e.g., hiring impact on both budget and capacity)
**Assume when:**
- The data is directionally clear and precision isn't critical
- You're in Phase 2 isolation (always assume, never invoke)
- The chain is already at depth 2
- The question is minor compared to your main analysis
**When assuming, always state it:**
```
[ASSUMPTION: runway ~12 months based on typical Series A burn profile — not verified with CFO]
```
## Conflict Resolution
When two invoked agents give conflicting answers:
1. **Flag the conflict explicitly:**
```
[CONFLICT: CFO projects 14-month runway; CRO expects pipeline to close 80% → implies 18+ months]
```
2. **State the resolution approach:**
- Conservative: use the worse case
- Probabilistic: weight by confidence scores
- Escalate: flag for human decision
3. **Never silently pick one** — surface the conflict to the user.
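The three resolution approaches can be sketched for a numeric disagreement such as the runway conflict above. The confidence weights are an assumption for illustration, not part of the protocol:

```python
CONFIDENCE_WEIGHT = {"high": 3, "medium": 2, "low": 1}  # assumed weighting

def resolve(estimates: dict[str, tuple[float, str]], approach: str = "conservative"):
    """estimates maps role -> (value, confidence), e.g. {"cfo": (14, "high")}."""
    if approach == "conservative":
        return min(v for v, _ in estimates.values())  # use the worse case
    if approach == "probabilistic":
        total = sum(CONFIDENCE_WEIGHT[c] for _, c in estimates.values())
        return sum(v * CONFIDENCE_WEIGHT[c] for v, c in estimates.values()) / total
    return None  # "escalate": surface both numbers for a human decision

# Runway conflict: CFO says 14 months (high), CRO's pipeline implies 18 (medium)
views = {"cfo": (14.0, "high"), "cro": (18.0, "medium")}
print(resolve(views))                   # 14.0
print(resolve(views, "probabilistic"))  # (3*14 + 2*18) / 5 = 15.6
```

Whichever approach is used, the `[CONFLICT: ...]` line is still emitted; resolution supplements surfacing the disagreement, it never replaces it.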
## Broadcast Pattern (Crisis / CEO)
CEO can broadcast to all roles simultaneously:
```
[BROADCAST:all|What's the impact if we miss the fundraise?]
```
Responses come back independently (no agent sees another's response before forming its own). Aggregate after all respond.
## Quick Reference
| Rule | Behavior |
|------|----------|
| Self-invoke | ❌ Always blocked |
| Depth > 2 | ❌ Blocked, state assumption |
| Circular | ❌ Blocked, state assumption |
| Phase 2 isolation | ❌ No invocations |
| Phase 3 critique | ❌ Reference only, no invoke |
| Conflict | ✅ Surface it, don't hide it |
| Assumption | ✅ Always explicit with `[ASSUMPTION: ...]` |
## Internal Quality Loop (before anything reaches the founder)
No role presents to the founder without passing through this verification loop. The founder sees polished, verified output — not first drafts.
### Step 1: Self-Verification (every role, every time)
Before presenting, every role runs this internal checklist:
```
SELF-VERIFY CHECKLIST:
□ Source Attribution — Where did each data point come from?
✅ "ARR is $2.1M (from CRO pipeline report, Q4 actuals)"
❌ "ARR is around $2M" (no source, vague)
□ Assumption Audit — What am I assuming vs what I verified?
Tag every assumption: [VERIFIED: checked against data] or [ASSUMED: not verified]
If >50% of findings are ASSUMED → flag low confidence
□ Confidence Score — How sure am I on each finding?
🟢 High: verified data, established pattern, multiple sources
🟡 Medium: single source, reasonable inference, some uncertainty
🔴 Low: assumption-based, limited data, first-time analysis
□ Contradiction Check — Does this conflict with known context?
Check against company-context.md and recent decisions in decision-log
If it contradicts a past decision → flag explicitly
□ "So What?" Test — Does every finding have a business consequence?
If you can't answer "so what?" in one sentence → cut it
```
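The Assumption Audit step is mechanical enough to sketch: count the `[VERIFIED: ...]` and `[ASSUMED: ...]` tags and flag low confidence when assumptions dominate. The function name and findings are illustrative:

```python
def audit(findings: list[str]) -> str:
    """Flag low confidence when more than half of tagged findings are ASSUMED."""
    verified = sum("[VERIFIED" in f for f in findings)
    assumed = sum("[ASSUMED" in f for f in findings)
    tagged = verified + assumed
    if tagged and assumed / tagged > 0.5:  # >50% ASSUMED -> low confidence
        return "low"
    return "ok"

findings = [
    "ARR is $2.1M [VERIFIED: CRO pipeline report, Q4 actuals]",
    "Churn holds at 4% [ASSUMED: not verified]",
    "CAC is $9,100 [ASSUMED: last quarter's blended number]",
]
print(audit(findings))  # low (2 of 3 tagged findings are assumptions)
```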
### Step 2: Peer Verification (cross-functional validation)
When a recommendation impacts another role's domain, that role validates BEFORE presenting.
| If your recommendation involves... | Validate with... | They check... |
|-------------------------------------|-------------------|---------------|
| Financial numbers or budget | CFO | Math, runway impact, budget reality |
| Revenue projections | CRO | Pipeline backing, historical accuracy |
| Headcount or hiring | CHRO | Market reality, comp feasibility, timeline |
| Technical feasibility or timeline | CTO | Engineering capacity, technical debt load |
| Operational process changes | COO | Capacity, dependencies, scaling impact |
| Customer-facing changes | CRO + CPO | Churn risk, product roadmap conflict |
| Security or compliance claims | CISO | Actual posture, regulation requirements |
| Market or positioning claims | CMO | Data backing, competitive reality |
**Peer validation format:**
```
[PEER-VERIFY:cfo]
Validated: ✅ Burn rate calculation correct
Adjusted: ⚠️ Hiring timeline should be Q3 not Q2 (budget constraint)
Flagged: 🔴 Missing equity cost in total comp projection
[/PEER-VERIFY]
```
**Skip peer verification when:**
- Single-domain question with no cross-functional impact
- Time-sensitive proactive alert (send alert, verify after)
- Founder explicitly asked for a quick take
### Step 3: Critic Pre-Screen (high-stakes decisions only)
For decisions that are **irreversible, high-cost, or bet-the-company**, the Executive Mentor pre-screens before the founder sees it.
**Triggers for pre-screen:**
- Involves spending > 20% of remaining runway
- Affects >30% of the team (layoffs, reorg)
- Changes company strategy or direction
- Involves external commitments (fundraising terms, partnerships, M&A)
- Any recommendation where all roles agree (suspicious consensus)
**Pre-screen output:**
```
[CRITIC-SCREEN]
Weakest point: [The single biggest vulnerability in this recommendation]
Missing perspective: [What nobody considered]
If wrong, the cost is: [Quantified downside]
Proceed: ✅ With noted risks | ⚠️ After addressing [specific gap] | 🔴 Rethink
[/CRITIC-SCREEN]
```
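The pre-screen triggers above can be evaluated mechanically before routing to the Executive Mentor. A sketch in which every field name is hypothetical:

```python
def needs_prescreen(d: dict) -> bool:
    """True if any critic pre-screen trigger fires for a proposed decision."""
    return any([
        d.get("spend", 0) > 0.20 * d.get("runway_cash", float("inf")),  # >20% of runway
        d.get("team_affected_pct", 0) > 0.30,  # layoffs/reorg touching >30% of team
        d.get("changes_strategy", False),
        d.get("external_commitment", False),   # fundraise terms, partnerships, M&A
        d.get("unanimous", False),             # suspicious consensus across all roles
    ])

decision = {"spend": 300_000, "runway_cash": 1_200_000}
print(needs_prescreen(decision))  # True: spend is 25% of remaining runway
```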
### Step 4: Course Correction (after founder feedback)
The loop doesn't end at delivery. After the founder responds:
```
FOUNDER FEEDBACK LOOP:
1. Founder approves → log decision (Layer 2), assign actions
2. Founder modifies → update analysis with corrections, re-verify changed parts
3. Founder rejects → log rejection with DO_NOT_RESURFACE, understand WHY
4. Founder asks follow-up → deepen analysis on specific point, re-verify
POST-DECISION REVIEW (30/60/90 days):
- Was the recommendation correct?
- What did we miss?
- Update company-context.md with what we learned
- If wrong → document the lesson, adjust future analysis
```
### Verification Level by Stakes
| Stakes | Self-Verify | Peer-Verify | Critic Pre-Screen |
|--------|-------------|-------------|-------------------|
| Low (informational) | ✅ Required | ❌ Skip | ❌ Skip |
| Medium (operational) | ✅ Required | ✅ Required | ❌ Skip |
| High (strategic) | ✅ Required | ✅ Required | ✅ Required |
| Critical (irreversible) | ✅ Required | ✅ Required | ✅ Required + board meeting |
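The matrix above reads naturally as a lookup gate; a sketch with illustrative names, where board-meeting escalation for critical stakes is modeled as one more required check:

```python
REQUIRED_CHECKS = {
    "low":      {"self_verify"},
    "medium":   {"self_verify", "peer_verify"},
    "high":     {"self_verify", "peer_verify", "critic_prescreen"},
    "critical": {"self_verify", "peer_verify", "critic_prescreen", "board_meeting"},
}

def may_present(stakes: str, completed: set[str]) -> bool:
    """An output reaches the founder only once every required check has passed."""
    return REQUIRED_CHECKS[stakes] <= completed  # subset test

print(may_present("medium", {"self_verify", "peer_verify"}))  # True
print(may_present("high", {"self_verify", "peer_verify"}))    # False
```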
### What Changes in the Output Format
The verified output adds confidence and source information:
```
BOTTOM LINE
[Answer] — Confidence: 🟢 High
WHAT
• [Finding 1] [VERIFIED: Q4 actuals] 🟢
• [Finding 2] [VERIFIED: CRO pipeline data] 🟢
• [Finding 3] [ASSUMED: based on industry benchmarks] 🟡
PEER-VERIFIED BY: CFO (math ✅), CTO (timeline ⚠️ adjusted to Q3)
```
---
## User Communication Standard
All C-suite output to the founder follows ONE format. No exceptions. The founder is the decision-maker — give them results, not process.
### Standard Output (single-role response)
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 [ROLE] — [Topic]
BOTTOM LINE
[One sentence. The answer. No preamble.]
WHAT
• [Finding 1 — most critical]
• [Finding 2]
• [Finding 3]
(Max 5 bullets. If more needed → reference doc.)
WHY THIS MATTERS
[1-2 sentences. Business impact. Not theory — consequence.]
HOW TO ACT
1. [Action] → [Owner] → [Deadline]
2. [Action] → [Owner] → [Deadline]
3. [Action] → [Owner] → [Deadline]
⚠️ RISKS (if any)
• [Risk + what triggers it]
🔑 YOUR DECISION (if needed)
Option A: [Description] — [Trade-off]
Option B: [Description] — [Trade-off]
Recommendation: [Which and why, in one line]
📎 DETAIL: [reference doc or script output for deep-dive]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### Proactive Alert (unsolicited — triggered by context)
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚩 [ROLE] — Proactive Alert
WHAT I NOTICED
[What triggered this — specific, not vague]
WHY IT MATTERS
[Business consequence if ignored — in dollars, time, or risk]
RECOMMENDED ACTION
[Exactly what to do, who does it, by when]
URGENCY: 🔴 Act today | 🟡 This week | ⚪ Next review
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### Board Meeting Output (multi-role synthesis)
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 BOARD MEETING — [Date] — [Agenda Topic]
DECISION REQUIRED
[Frame the decision in one sentence]
PERSPECTIVES
CEO: [one-line position]
CFO: [one-line position]
CRO: [one-line position]
[... only roles that contributed]
WHERE THEY AGREE
• [Consensus point 1]
• [Consensus point 2]
WHERE THEY DISAGREE
• [Conflict] — CEO says X, CFO says Y
• [Conflict] — CRO says X, CPO says Y
CRITIC'S VIEW (Executive Mentor)
[The uncomfortable truth nobody else said]
RECOMMENDED DECISION
[Clear recommendation with rationale]
ACTION ITEMS
1. [Action] → [Owner] → [Deadline]
2. [Action] → [Owner] → [Deadline]
3. [Action] → [Owner] → [Deadline]
🔑 YOUR CALL
[Options if you disagree with the recommendation]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### Communication Rules (non-negotiable)
1. **Bottom line first.** Always. The founder's time is the scarcest resource.
2. **Results and decisions only.** No process narration ("First I analyzed..."). No thinking out loud.
3. **What + Why + How.** Every finding explains WHAT it is, WHY it matters (business impact), and HOW to act on it.
4. **Max 5 bullets per section.** Longer = reference doc.
5. **Actions have owners and deadlines.** "We should consider" is banned. Who does what by when.
6. **Decisions framed as options.** Not "what do you think?" — "Option A or B, here's the trade-off, here's my recommendation."
7. **The founder decides.** Roles recommend. The founder approves, modifies, or rejects. Every output respects this hierarchy.
8. **Risks are concrete.** Not "there might be risks" — "if X happens, Y breaks, costing $Z."
9. **No jargon without explanation.** If you use a term, explain it on first use.
10. **Silence is an option.** If there's nothing to report, don't fabricate updates.
## Reference
- `references/invocation-patterns.md` — common cross-functional patterns with examples

View File
agent-protocol/references/invocation-patterns.md

@@ -0,0 +1,247 @@
# Invocation Patterns — Inter-Agent Protocol
Common cross-functional patterns, with examples and guidance on when to invoke vs assume.
---
## Pattern 1: Revenue Planning Chain
**CRO → CFO → CMO**
Used when: setting revenue targets, allocating growth budget, modeling pipeline scenarios.
```
CRO initiates:
"Planning Q3 revenue targets. Need financial constraints and marketing input."
CRO → [INVOKE:cfo|What's our Q3 revenue target given current runway and burn?]
CFO responds:
[RESPONSE:cfo]
Key finding: To maintain 12-month runway without new raise, Q3 ARR target ≥ $2.4M (+20% QoQ)
Supporting data:
- Current ARR: $2.0M, burn: $180K/month, runway: 14 months
- 20% QoQ growth maintains runway; <15% triggers bridge discussion
- Board expectation communicated as 25% QoQ
Confidence: high
Caveat: Assumes no new hires or major infrastructure spend in Q3.
[/RESPONSE]
CRO → [INVOKE:cmo|Given a $2.4M Q3 ARR target, what pipeline coverage and CAC should we plan for?]
[CHAIN: cro → cfo → cmo]
CMO responds:
[RESPONSE:cmo]
Key finding: Need 3x pipeline ($7.2M) and CAC ≤ $8,400 to hit $2.4M ARR target.
Supporting data:
- Historical close rate: 33% → 3x coverage required
- Current CAC: $9,100 → 8% efficiency improvement needed
- Recommended channel mix: 60% outbound, 30% inbound, 10% partner
Confidence: medium
Caveat: Close rate assumes no sales team changes; any rep turnover drops it to 25%.
[/RESPONSE]
```
**What CRO does with this:** Synthesizes CFO constraint (target) + CMO playbook (how to hit it) into a revenue plan. Does not invoke further (depth limit reached).
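The CMO's coverage figure follows directly from the close rate; a quick sanity check the CRO can run on the sample numbers above:

```python
def required_pipeline(arr_target: float, close_rate: float) -> float:
    """Pipeline needed to hit an ARR target at a given historical close rate."""
    return arr_target / close_rate

target = 2_400_000   # CFO's Q3 ARR target
close_rate = 0.33    # historical close rate, i.e. ~3x coverage
print(f"${required_pipeline(target, close_rate):,.0f} pipeline required")
```

The same check also flags the caveat's downside: at a 25% close rate (one rep lost), required pipeline jumps to $9.6M.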
---
## Pattern 2: Hiring Decision Chain
**CHRO → CFO → COO**
Used when: headcount planning, role approval, capacity vs budget tradeoffs.
```
CHRO initiates:
"Engineering wants to hire 3 senior engineers. Need budget clearance and capacity validation."
CHRO → [INVOKE:cfo|Can we afford 3 senior engineers in H2? Fully loaded ~$135K each for the half-year.]
[RESPONSE:cfo]
Key finding: Budget supports 2 hires now; 3rd is contingent on Q2 ARR closing above $2.1M.
Supporting data:
- Current H2 headcount budget: $340K remaining
- 2 hires = $270K fully loaded → $70K buffer
- 3 hires = $405K → $65K over; triggers board approval
Confidence: high
Caveat: Based on current runway projection — fundraise close changes this completely.
[/RESPONSE]
CHRO → [INVOKE:coo|Do operations support onboarding 2-3 engineers in H2? Any capacity constraints?]
[CHAIN: chro → cfo → coo]
[RESPONSE:coo]
Key finding: Can absorb 2 comfortably; 3rd requires dedicated onboarding lead or delays velocity 6 weeks.
Supporting data:
- Current eng team: 8 → adding 2 is 25% growth (manageable)
- Onboarding bottleneck: 1 senior eng mentor per 1-2 new hires, currently at capacity
- Dev environment setup: 3-week lead time per hire
Confidence: medium
Caveat: Assumes no concurrent product launches requiring full team focus.
[/RESPONSE]
CHRO synthesis: Recommends 2 hires now, pipeline 3rd for Q3 contingent on ARR milestone.
```
---
## Pattern 3: Product Launch Chain
**CPO → CTO → CMO**
Used when: planning feature launches, assessing readiness, aligning go-to-market.
```
CPO initiates:
"Planning Q3 launch of [Feature X]. Need technical readiness and GTM input."
CPO → [INVOKE:cto|Is Feature X technically ready for Q3 launch? Key risks?]
[RESPONSE:cto]
Key finding: 70% complete; Q3 launch is achievable with scope cut — 2 items must move to Q4.
Supporting data:
- Core feature: complete; performance optimization: 3 weeks remaining
- Infrastructure scaling: needs load testing at 10x current volume
- Scope cuts needed: advanced analytics dashboard, third-party integrations
Confidence: medium
Caveat: Assumes no critical bugs found in load testing; historical rate suggests 1-2 found.
[/RESPONSE]
CPO → [INVOKE:cmo|Given Q3 launch of Feature X (core only, no analytics/integrations), what's the GTM plan?]
[CHAIN: cpo → cto → cmo]
[RESPONSE:cmo]
Key finding: Core-only launch works for existing customers; new logo acquisition needs integrations — recommend phased announcement.
Supporting data:
- Phase 1: Early access for top 20 accounts → testimonials and case studies
- Phase 2: Full launch with integration partners in Q4
- CAC impact: core-only launch reduces new logo conversion rate 15%
Confidence: medium
Caveat: Competitor launched similar feature last month — window for differentiation is narrow.
[/RESPONSE]
```
---
## Pattern 4: Crisis Broadcast
**CEO → All Roles**
Used when: major adverse event, fundraise miss, key customer loss, market disruption.
```
CEO broadcasts:
[BROADCAST:all|Scenario: largest customer (28% of ARR) gives 60-day termination notice.
Each role: what's the immediate impact on your domain and your top recommended action?]
Roles respond INDEPENDENTLY (no agent sees others' responses first):
[RESPONSE:cfo]
Key finding: 28% ARR loss drops runway from 14 to 8 months at current burn.
Supporting data:
- Lost ARR: ~$560K → burn multiple increases from 1.8x to 3.2x
- Bridge options: $500K credit line (pre-approved), defer 2 planned hires
- Fundraise timeline must compress: target term sheet within 60 days
Confidence: high
Caveat: Assumes no secondary customer churn triggered by news.
[/RESPONSE]
[RESPONSE:cro]
Key finding: Pipeline must replace $560K ARR within 6 months to avoid emergency measures.
Supporting data:
- Current qualified pipeline: $1.8M → need 31% close rate (currently 28%)
- 3 deals in late stage can cover 70% of gap if closed this quarter
- Churn risk: 2 other accounts had similar usage patterns — immediate EBR required
Confidence: medium
Caveat: Close rate may drop if word spreads; manage communication carefully.
[/RESPONSE]
[RESPONSE:chro]
Key finding: Hiring freeze recommended immediately; retain all current staff.
Supporting data:
- 4 open reqs should pause pending CFO guidance
- At-risk of counter-poaching: 2 senior engineers, 1 product lead
- Retention bonuses for key staff: ~$80K to secure 12-month commitments
Confidence: high
Caveat: Morale impact of freeze can accelerate attrition if not communicated well.
[/RESPONSE]
[RESPONSE:cpo]
Key finding: Accelerate features that differentiate with remaining accounts; reduce scope to core.
Supporting data:
- Lost customer's feature requests: deprioritize (no longer strategic)
- Features most valued by remaining top 5 accounts: [list from account data]
- Reduce Q3 scope by 30% to free engineering capacity for retention work
Confidence: medium
Caveat: Need sales input on which features are blocking new logo deals.
[/RESPONSE]
[RESPONSE:cmo]
Key finding: Control the narrative; proactive communication to market reduces churn cascade risk.
Supporting data:
- Draft customer communication within 48 hours (frame as strategic pivot, not loss)
- Press: no comment strategy unless proactively asked
- Replace pipeline: double down on ICP segments where we're strongest
Confidence: medium
Caveat: If customer goes public with criticism, narrative control becomes much harder.
[/RESPONSE]
CEO synthesis: [Aggregates all 9 responses (5 shown above), identifies conflicts, sets priorities]
```
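The `[RESPONSE:role]` blocks above are regular enough to parse mechanically before synthesis. A minimal stdlib sketch (field names follow the format shown; the function name and sample transcript are illustrative):

```python
import re

# Minimal sketch: turn [RESPONSE:role] ... [/RESPONSE] blocks into dicts.
RESPONSE_RE = re.compile(
    r"\[RESPONSE:(?P<role>\w+)\]\s*(?P<body>.*?)\[/RESPONSE\]",
    re.DOTALL,
)

def parse_responses(transcript: str) -> list[dict]:
    responses = []
    for match in RESPONSE_RE.finditer(transcript):
        body = match.group("body")
        finding = re.search(r"Key finding:\s*(.+)", body)
        confidence = re.search(r"Confidence:\s*(\w+)", body)
        caveat = re.search(r"Caveat:\s*(.+)", body)
        responses.append({
            "role": match.group("role"),
            "key_finding": finding.group(1).strip() if finding else None,
            "confidence": confidence.group(1).lower() if confidence else None,
            "caveat": caveat.group(1).strip() if caveat else None,
        })
    return responses

sample = """[RESPONSE:cfo]
Key finding: 28% ARR loss drops runway from 14 to 8 months.
Confidence: high
Caveat: Assumes no secondary customer churn.
[/RESPONSE]"""
print(parse_responses(sample)[0]["role"])  # → cfo
```

The synthesizer can then group by role, sort by confidence, and diff key findings to surface conflicts.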
---
## When to Invoke vs When to Assume
### Invoke when:
- Cross-functional data is material to the decision
- Getting it wrong changes the recommendation significantly
- The other role has data you genuinely don't have
- Time allows (not in Phase 2 isolation)
### Assume when:
- You're in Phase 2 (always — no exceptions)
- The chain is at depth 2 (you cannot invoke further)
- The answer is directionally obvious (e.g., "CFO will care about runway")
- The precision doesn't change the recommendation
### State assumptions explicitly:
```
[ASSUMPTION: runway ~12 months — not verified with CFO; actual may vary ±20%]
[ASSUMPTION: CAC ~$8K based on industry benchmark — CMO has actual figures]
[ASSUMPTION: engineering capacity at ~70% — not verified with CTO]
```
---
## Handling Conflicting Responses
When two agents give incompatible answers, surface it:
```
[CONFLICT DETECTED]
CFO says: runway extends to 18 months if Q3 targets hit
CRO says: only 45% confidence Q3 targets will be hit
Resolution: use probabilistic blend
- 45% probability: 18-month runway (optimistic case)
- 55% probability: 11-month runway (current trajectory)
Expected value: ~14 months
Recommendation: plan for 12 months, trigger bridge at 10.
[/CONFLICT]
```
**Resolution options:**
1. **Conservative:** Use the worse case — appropriate for cash/runway decisions
2. **Probabilistic:** Weight by confidence scores — appropriate for planning
3. **Escalate:** Flag for human decision — appropriate for high-stakes irreversible choices
4. **Time-box:** Gather more data within 48 hours — appropriate when data gap is closeable
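Option 2's probabilistic blend is plain expected-value arithmetic. A sketch using the numbers from the conflict block above (the function name is illustrative):

```python
def blended_runway(scenarios: list[tuple[float, float]]) -> float:
    """Probability-weighted runway in months.

    scenarios: (probability, runway_months) pairs; probabilities must sum to 1.
    """
    total = sum(p for p, _ in scenarios)
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"probabilities sum to {total}, expected 1.0")
    return sum(p * months for p, months in scenarios)

# CFO optimistic case vs CRO current trajectory, as in the conflict block
ev = blended_runway([(0.45, 18.0), (0.55, 11.0)])
print(round(ev, 2))  # → 14.15, hence "plan for 12, trigger bridge at 10"
```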
---
## Anti-Patterns to Avoid
| Anti-pattern | Problem | Fix |
|---|---|---|
| Invoke to validate your own conclusion | Confirmation bias loop | Ask open-ended questions |
| Invoke when assuming works | Unnecessary latency | State assumption clearly |
| Hide conflicts between responses | Bad synthesis | Always surface conflicts |
| Invoke beyond depth 2 | Loop risk | State assumption at depth 2 |
| Invoke during Phase 2 | Groupthink contamination | Flag with [ASSUMPTION:] |
| Vague questions | Poor responses | Specific, scoped questions only |


@@ -0,0 +1,183 @@
---
name: board-deck-builder
description: "Assembles comprehensive board and investor update decks by pulling perspectives from all C-suite roles. Use when preparing board meetings, investor updates, quarterly business reviews, or fundraising narratives. Covers structure, narrative framework, bad news delivery, and common mistakes."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: board-governance
  updated: 2026-03-05
  frameworks: deck-frameworks, board-deck-template
---
# Board Deck Builder
Build board decks that tell a story — not just show data. Every section has an owner, a narrative, and a "so what."
## Keywords
board deck, investor update, board meeting, board pack, investor relations, quarterly review, board presentation, fundraising deck, investor deck, board narrative, QBR, quarterly business review
## Quick Start
```
/board-deck [quarterly|monthly|fundraising] [stage: seed|seriesA|seriesB]
```
Provide available metrics. The builder fills gaps with explicit placeholders — never invents numbers.
## Deck Structure (Standard Order)
Every section follows: **Headline → Data → Narrative → Ask/Next**
### 1. Executive Summary (CEO)
**3 sentences. No more.**
- Sentence 1: State of the business (where we are)
- Sentence 2: Biggest thing that happened this period
- Sentence 3: Where we're going next quarter
*Bad:* "We had a good quarter with lots of progress across all areas."
*Good:* "We closed Q3 at $2.4M ARR (+22% QoQ), signed our largest enterprise contract, and enter Q4 with 14-month runway. The strategic shift to mid-market is working — ACV up 40% and sales cycle down 3 weeks. Q4 priority: close the $3M Series A and hit $2.8M ARR."
### 2. Key Metrics Dashboard (COO)
**6-8 metrics max. Use a table.**
| Metric | This Period | Last Period | Target | Status |
|--------|-------------|-------------|--------|--------|
| ARR | $2.4M | $1.97M | $2.3M | ✅ |
| MoM growth | 8.1% | 7.2% | 7.5% | ✅ |
| Burn multiple | 1.8x | 2.1x | <2x | ✅ |
| NRR | 112% | 108% | >110% | ✅ |
| CAC payback | 11 months | 14 months | <12 months | ✅ |
| Headcount | 24 | 21 | 25 | 🟡 |
Pick metrics the board actually tracks. Swap out anything they've said they don't care about.
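The status column can be computed mechanically once each metric has a target and a direction. A sketch, with an assumed 5% amber band (the threshold is an illustrative choice, not a standard):

```python
def metric_status(actual: float, target: float, higher_is_better: bool = True,
                  tolerance: float = 0.05) -> str:
    """Traffic-light status for a metric vs its target.

    A miss within `tolerance` (fractional) of the target is amber.
    """
    if higher_is_better:
        gap = (actual - target) / target
    else:
        gap = (target - actual) / target
    if gap >= 0:
        return "✅"
    if gap >= -tolerance:
        return "🟡"
    return "🔴"

print(metric_status(2.4, 2.3))                          # ARR ahead of target → ✅
print(metric_status(24, 25))                            # headcount 4% under → 🟡
print(metric_status(1.8, 2.0, higher_is_better=False))  # burn multiple under cap → ✅
```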
### 3. Financial Update (CFO)
- P&L summary: Revenue, COGS, Gross margin, OpEx, Net burn
- Cash position and runway (months)
- Burn multiple trend (3-quarter view)
- Variance to plan (what was different and why)
- Forecast update for next quarter
**One sentence on each variance.** Boards hate "revenue was below target" with no explanation. Say why.
### 4. Revenue & Pipeline (CRO)
- ARR waterfall: starting → new → expansion → churn → ending
- NRR and logo churn rates
- Pipeline by stage (in $, not just count)
- Forecast: next quarter with confidence level
- Top 3 deals: name/amount/close date/risk
**The forecast must have a confidence level.** "We expect $2.8M" is weak. "High confidence $2.6M, upside to $2.9M if two late-stage deals close" is useful.
### 5. Product Update (CPO)
- Shipped this quarter: 3-5 bullets, user impact for each
- Shipping next quarter: 3-5 bullets with target dates
- PMF signal: NPS trend, DAU/MAU ratio, feature adoption
- One key learning from customer research
**No feature lists.** Only features with evidence of user impact.
### 6. Growth & Marketing (CMO)
- CAC by channel (table)
- Pipeline contribution by channel ($)
- Brand/awareness metrics relevant to stage (traffic, share of voice)
- What's working, what's being cut, what's being tested
### 7. Engineering & Technical (CTO)
- Delivery velocity trend (last 4 quarters)
- Tech debt ratio and plan
- Infrastructure: uptime, incidents, cost trend
- Security posture (one line, flag anything pending)
**Keep this short unless there's a material issue.** Boards don't need sprint details.
### 8. Team & People (CHRO)
- Headcount: actual vs plan
- Hiring: offers out, pipeline, time-to-fill trend
- Attrition: regrettable vs non-regrettable
- Engagement: last survey score, trend
- Key hires this quarter, key open roles
### 9. Risk & Security (CISO)
- Security posture: status of critical controls
- Compliance: certifications in progress, deadlines
- Incidents this quarter (if any): impact, resolution, prevention
- Top 3 risks and mitigation status
### 10. Strategic Outlook (CEO)
- Next quarter priorities: 3-5 items, ranked
- Key decisions needed from the board
- Asks: budget, introductions, advice, votes
**The "asks" slide is the most important.** Be specific. "We'd like 3 warm introductions to CFOs at Series B companies" beats "any help would be appreciated."
### 11. Appendix
- Detailed financial model
- Full pipeline data
- Cohort retention charts
- Customer case studies
- Detailed headcount breakdown
---
## Narrative Framework
Boards see 10+ decks per quarter. Yours needs a through-line.
**The 4-Act Structure:**
1. **Where we said we'd be** (last quarter's targets)
2. **Where we actually are** (honest assessment)
3. **Why the gap exists** (one cause per variance, not excuses)
4. **What we're doing about it** (specific, dated actions)
This works for good news AND bad news. It's credible because it acknowledges reality.
**Opening frame:** Start with the one thing that matters most — the board should know the key message by slide 3, not slide 30.
---
## Delivering Bad News
Never bury it. Boards find out eventually. Finding out late makes it worse.
**Framework:**
1. **State it plainly** — "We missed Q3 ARR target by $300K (12% gap)"
2. **Own the cause** — "Primary driver was longer-than-expected sales cycle in enterprise segment"
3. **Show you understand it** — "We analyzed 8 lost/stalled deals; the pattern is X"
4. **Present the fix** — "We've made 3 changes: [specific, dated changes]"
5. **Update the forecast** — "Revised Q4 target is $2.6M; here's the bottom-up build"
**What NOT to do:**
- Don't lead with good news to soften bad news — boards notice and distrust the framing
- Don't explain without owning — "market conditions" is context, not a cause
- Don't present a fix without data behind it
- Don't show a revised forecast without showing your assumptions
---
## Common Board Deck Mistakes
| Mistake | Fix |
|---------|-----|
| Too many slides (>25) | Cut ruthlessly — if you can't explain it in the room, the slide is wrong |
| Metrics without targets | Every metric needs a target and a status |
| No narrative | Data without story forces boards to draw their own conclusions |
| Burying bad news | Lead with it, own it, fix it |
| Vague asks | Specific, actionable, person-assigned asks only |
| No variance explanation | Every gap from target needs one-sentence cause |
| Stale appendix | Appendix is only useful if it's current |
| Designing for the reader, not the room | Decks are presented — they must work spoken aloud |
---
## Cadence Notes
**Quarterly (standard):** Full deck, all sections, 20-30 slides. Sent 48 hours in advance.
**Monthly (for early-stage):** Condensed — metrics dashboard, financials, pipeline, top risks. 8-12 slides.
**Fundraising:** Opens with market/vision, closes with ask. See `references/deck-frameworks.md` for Sequoia format.
## References
- `references/deck-frameworks.md` — SaaS board pack format, Sequoia structure, investor tailoring
- `templates/board-deck-template.md` — fill-in template for complete board decks


@@ -0,0 +1,184 @@
# Board Deck Frameworks
## The SaaS Board Pack (Christoph Janz / Point Nine Style)
Point Nine's board pack format became the de facto standard for early-stage SaaS. Core principle: **the numbers tell the story; the narrative explains the numbers.**
### Required Metrics (non-negotiable for SaaS boards)
- **ARR** (not MRR — boards think annually)
- **MoM / QoQ growth rate**
- **NRR (Net Revenue Retention)** — the single most important SaaS metric
- **Gross margin** — typically 60-80% SaaS; <60% is a flag
- **CAC payback period** — months to recover customer acquisition cost
- **Burn multiple** = net burn / net new ARR; <2x is good, >3x is a problem
- **Runway** — months at current burn
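The definitions above are one-liners. A sketch with made-up illustrative numbers (none from a real company):

```python
def burn_multiple(net_burn: float, net_new_arr: float) -> float:
    # Net burn divided by net new ARR over the same period: <2x good, >3x a problem
    return net_burn / net_new_arr

def runway_months(cash: float, monthly_net_burn: float) -> float:
    return cash / monthly_net_burn

def cac_payback_months(cac: float, monthly_gross_profit_per_customer: float) -> float:
    # Payback recovers CAC from gross profit, not revenue
    return cac / monthly_gross_profit_per_customer

print(burn_multiple(900_000, 500_000))        # → 1.8
print(runway_months(2_100_000, 150_000))      # → 14.0
print(round(cac_payback_months(8_000, 700)))  # → 11
```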
### Point Nine Benchmark Targets (Series A SaaS)
| Metric | Good | Great | Warning |
|--------|------|-------|---------|
| MoM growth | 10-15% | >20% | <7% |
| NRR | >110% | >130% | <100% |
| Gross margin | >65% | >75% | <60% |
| CAC payback | <18 months | <12 months | >24 months |
| Burn multiple | <2x | <1.5x | >3x |
| Logo churn | <10%/yr | <5%/yr | >15%/yr |
### SaaS ARR Waterfall (Christoph Janz Format)
Show this every quarter:
```
Starting ARR: $1,970,000
+ New ARR: +$480,000 (new logos)
+ Expansion ARR: +$120,000 (upsells/cross-sells)
- Churned ARR: -$90,000 (cancellations)
- Contraction ARR: -$35,000 (downgrades)
= Ending ARR: $2,445,000
```
NRR = (Ending − New ARR) / Starting ARR = $1,965K / $1,970K = 99.7% ← below 100%, flag this
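The waterfall and the NRR check reduce to a few lines of arithmetic. A sketch using the figures above (function name is illustrative):

```python
def arr_waterfall(starting, new, expansion, churn, contraction):
    """Quarterly ARR movement. All inputs are positive dollar amounts."""
    ending = starting + new + expansion - churn - contraction
    # NRR excludes new logos: what the quarter did to EXISTING revenue
    nrr = (starting + expansion - churn - contraction) / starting
    return ending, nrr

ending, nrr = arr_waterfall(
    starting=1_970_000, new=480_000, expansion=120_000,
    churn=90_000, contraction=35_000,
)
print(f"Ending ARR: ${ending:,}")  # → Ending ARR: $2,445,000
print(f"NRR: {nrr:.1%}")           # → NRR: 99.7%
```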
---
## Sequoia Board Deck Structure
Sequoia's canonical deck (used for both fundraising and board updates):
1. **Company Purpose** — one sentence, the existential "why"
2. **The Problem** — pain, size, who has it
3. **The Solution** — what you do, how it's different
4. **Why Now** — market timing, tailwinds, enabling factors
5. **Market Size** — TAM/SAM/SOM with methodology
6. **Business Model** — how you make money
7. **Traction** — proof it's working (growth, retention, logos)
8. **Team** — why you're the ones to win this
9. **Financials** — 3-year model, current metrics
10. **The Ask** — amount, use of funds, milestones to next round
**For ongoing board updates:** Swap 1-5 (context) for "State of the Business" and "Last Quarter vs Plan." Boards know the company — skip the pitch.
---
## Investor-Specific Tailoring
### What Different Investor Types Care About
**Early-stage VCs (Seed, A):**
- Growth rate above all else
- NRR — "does the product retain?"
- Founder-market fit narrative
- Milestone achievement vs last board meeting
**Growth-stage VCs (B, C):**
- Capital efficiency (burn multiple, CAC payback)
- GTM repeatability — can you hire 10 AEs and have it work?
- Market leadership signals
- Path to profitability (even if years away)
**Strategic investors:**
- Synergies with their portfolio/business
- Technology differentiation
- Partnership potential
**Angels:**
- Team above all
- Personal conviction in the thesis
- Exit scenarios
### Tailoring the Narrative
- If you're ahead of plan: "Here's why, and here's how we'll sustain it"
- If you're behind plan: "Here's why, here's what we've learned, here's the new plan"
- If the plan was wrong: "The assumption that was wrong, what we know now, updated thesis"
Never pretend the plan was right when it wasn't. Board members have memories and models.
---
## How to Present Bad News
Boards have seen everything. What loses credibility isn't bad results — it's bad framing.
### The Credibility Formula
1. **Lead with the headline** — "We missed ARR target by 18%"
2. **Quantify the gap** — absolute and percentage
3. **Diagnose the cause** (one primary, max two secondary)
4. **Show your work** — "We analyzed 12 churned/stalled deals and found..."
5. **Present the fix** — specific, dated, owned by a name
6. **Update the forecast** — bottom-up rebuild, not wishful thinking
7. **Flag the risk** — "If X doesn't close, here's the contingency"
### What "Showing Your Work" Looks Like
Bad: "Sales cycle was longer than expected."
Good: "Sales cycle stretched from 45 to 72 days. Root cause: new legal review requirement at enterprise accounts, triggered by our SOC 2 Type II gap. Fix: SOC 2 audit underway (target: Dec 15), and we've pre-built contract language to accelerate review. Impact: estimated 3 stalled deals ($420K ARR) unblock in Q4."
### Scenarios and How to Handle Each
| Scenario | Frame |
|----------|-------|
| Missed revenue target | Lead with it; diagnose cause; bottom-up revised forecast |
| Key customer churned | Announce it; explain why; show retention analysis of remaining accounts |
| Key exec left | Announce it; show succession/coverage plan; don't overpromise the replacement timeline |
| Burn accelerated | Show P&L detail; explain what drove it; adjust runway projection; plan to fix |
| Market headwinds | Acknowledge; show relative performance vs peers; pivot if needed |
| Fundraise delayed | Runway impact; bridge options; revised timeline |
---
## Appendix Data That Boards Actually Use
Boards use the appendix for due diligence, not during the meeting. Include:
**Financial:**
- Full P&L (monthly for last 4 quarters)
- Cash flow statement
- 3-year model with assumptions
- Unit economics by cohort
**Revenue:**
- Customer list by ARR (anonymized or full, per board agreement)
- Pipeline detail by deal
- Cohort analysis (NRR by cohort vintage)
- Churn analysis: when, why, segment
**Product:**
- Feature adoption rates
- NPS score distribution and trend
- DAU/MAU by segment
**Team:**
- Org chart
- Full headcount list with fully loaded costs
- Open reqs with priority ranking
**One rule:** If the appendix is more than 20 slides, you have too much. Boards won't read it.
---
## Quarterly vs Monthly Board Meetings
### Quarterly (Series A+)
- Full board pack, all sections
- 2 hours: 30 min pre-read, 90 min discussion
- Voting items at end
- Sent 48 hours before (72 hours preferred)
- Add 1-2 "deep dive" topics beyond standard update
### Monthly (Seed / High-Growth A)
- Metrics dashboard + financials + top risks only
- 45-60 minutes
- Informal tone, more conversational
- Sent 24 hours before
- Skip slides for items where nothing changed
### When to Increase Frequency
- Approaching 6-month runway
- Major strategic pivot
- Fundraise in progress
- Significant underperformance vs plan
- M&A discussions
---
## Meeting Logistics (Often Overlooked)
- **Pre-read requirement:** Board packs should be read before the meeting. If you're presenting slides, you're wasting time.
- **Discussion format:** "I'll be brief on X since you've read it. Want to spend time on Y?" — respect board members' time
- **One note-taker:** CEO's EA or COO; not the CEO (they need to be present)
- **Follow-up within 24 hours:** Action items, voting outcomes, next meeting date
- **Board portal vs email:** Use a board portal (Carta, Boardable, Notion) for version control and D&O protection


@@ -0,0 +1,210 @@
# Board Deck Template
Fill in bracketed fields. Remove placeholders before sharing. Never invent numbers — use `[TBD]` if unknown.
---
## Slide 1: Executive Summary (CEO)
**[Company Name] — Q[X] [Year] Board Update**
> [One sentence: State of the business — where you are.]
> [One sentence: The most important thing that happened this quarter.]
> [One sentence: Where you're going next quarter and what determines success.]
---
## Slide 2: Key Metrics Dashboard (COO)
**Quarter at a Glance**
| Metric | Q[X] Actual | Q[X] Target | Q[X-1] Actual | Status |
|--------|-------------|-------------|---------------|--------|
| ARR | $[X]M | $[X]M | $[X]M | [✅/🟡/🔴] |
| QoQ Growth | [X]% | [X]% | [X]% | [✅/🟡/🔴] |
| NRR | [X]% | >[X]% | [X]% | [✅/🟡/🔴] |
| Gross Margin | [X]% | >[X]% | [X]% | [✅/🟡/🔴] |
| Burn Multiple | [X]x | <[X]x | [X]x | [✅/🟡/🔴] |
| Runway | [X] months | >[X] months | [X] months | [✅/🟡/🔴] |
| Headcount | [X] | [X] | [X] | [✅/🟡/🔴] |
| CAC Payback | [X] months | <[X] months | [X] months | [✅/🟡/🔴] |
---
## Slide 3: Financial Update (CFO)
**P&L Summary**
| | Q[X] | Q[X-1] | QoQ |
|--|------|--------|-----|
| Revenue | $[X]K | $[X]K | [+/-X]% |
| COGS | $[X]K | $[X]K | |
| Gross Profit | $[X]K | $[X]K | |
| Gross Margin | [X]% | [X]% | |
| OpEx | $[X]K | $[X]K | |
| Net Burn | $[X]K | $[X]K | |
**Cash & Runway**
- Cash on hand: $[X]M
- Monthly burn: $[X]K
- Runway: [X] months
- Burn multiple: [X]x (target: <2x)
**Variance to Plan**
- Revenue: [+/-$X]K vs plan — [one sentence cause]
- Burn: [+/-$X]K vs plan — [one sentence cause]
**Q[X+1] Forecast:** $[X]M revenue, $[X]K burn — [confidence: high/medium/low]
---
## Slide 4: Revenue & Pipeline (CRO)
**ARR Waterfall**
```
Starting ARR: $[X]M
+ New ARR: +$[X]K
+ Expansion ARR: +$[X]K
- Churned ARR: -$[X]K
- Contraction ARR: -$[X]K
= Ending ARR: $[X]M
```
**Health Metrics**
- NRR: [X]% | Logo churn: [X]% | Avg ACV: $[X]K
**Pipeline (next 90 days)**
| Stage | # Deals | $ Value |
|-------|---------|---------|
| Proposal | [X] | $[X]K |
| Negotiation | [X] | $[X]K |
| Verbal commit | [X] | $[X]K |
**Q[X+1] Forecast:** $[X]M ARR — [one sentence confidence statement]
**Top 3 Deals**
1. [Company] — $[X]K ARR — close date [X] — risk: [one word]
2. [Company] — $[X]K ARR — close date [X] — risk: [one word]
3. [Company] — $[X]K ARR — close date [X] — risk: [one word]
---
## Slide 5: Product Update (CPO)
**Shipped This Quarter**
- [Feature/initiative] — impact: [metric or user outcome]
- [Feature/initiative] — impact: [metric or user outcome]
- [Feature/initiative] — impact: [metric or user outcome]
**Shipping Next Quarter**
- [Feature] — target: [date] — why it matters: [one line]
- [Feature] — target: [date] — why it matters: [one line]
- [Feature] — target: [date] — why it matters: [one line]
**PMF Signals**
- NPS: [X] (trend: [up/flat/down])
- DAU/MAU: [X]%
- Feature adoption ([key feature]): [X]%
**Key Learning:** [One thing customer research taught you this quarter]
---
## Slide 6: Growth & Marketing (CMO)
**CAC by Channel**
| Channel | CAC | Pipeline $ | % of Total |
|---------|-----|-----------|------------|
| Outbound | $[X]K | $[X]K | [X]% |
| Inbound | $[X]K | $[X]K | [X]% |
| Partner | $[X]K | $[X]K | [X]% |
**What's Working:** [One channel or initiative with data]
**What We Cut:** [One thing, and why]
**What We're Testing:** [One experiment running now]
---
## Slide 7: Engineering & Technical (CTO)
**Delivery**
- Velocity trend: [up/flat/down vs last quarter]
- Q[X] commitments delivered: [X]% on time
**Quality & Reliability**
- P0/P1 incidents: [X] (vs [X] last quarter)
- Uptime: [X]%
- Infrastructure cost: $[X]K/month (trend: [up/flat/down])
**Tech Debt**
- Ratio: [X]% of roadmap allocated to debt reduction
- Key item in progress: [description, target date]
**Security:** [one line status; flag anything pending]
---
## Slide 8: Team & People (CHRO)
**Headcount**
- Total: [X] (vs [X] plan, [X] last quarter)
- By function: Eng [X], Product [X], Sales [X], CS [X], G&A [X]
**Hiring**
- Hired this quarter: [X]
- Open reqs: [X] — time-to-fill avg: [X] days
- Offers outstanding: [X]
**Retention**
- Regrettable attrition: [X]% (annualized)
- Engagement score: [X]/10 (trend: [up/flat/down])
**Notable Hires:** [Name, role — one sentence on why they matter]
**Key Open Roles:** [Role, priority: critical/high/medium]
---
## Slide 9: Risk & Security (CISO)
**Compliance Status**
| Certification | Status | Target Date |
|--------------|--------|-------------|
| [SOC 2 / ISO 27001 / etc.] | [In progress / Complete / Not started] | [Date] |
**Security Posture:** [One line — overall status]
**Incidents This Quarter:** [X] total — [description if >0]
**Top Risks**
1. [Risk] — likelihood: [H/M/L] — impact: [H/M/L] — mitigation: [one line]
2. [Risk] — likelihood: [H/M/L] — impact: [H/M/L] — mitigation: [one line]
3. [Risk] — likelihood: [H/M/L] — impact: [H/M/L] — mitigation: [one line]
---
## Slide 10: Strategic Outlook (CEO)
**Q[X+1] Priorities**
1. [Priority] — owner: [name] — success metric: [specific]
2. [Priority] — owner: [name] — success metric: [specific]
3. [Priority] — owner: [name] — success metric: [specific]
**Asks from the Board**
- [Specific ask: warm intro / advice / vote / resource]
- [Specific ask]
- [Specific ask]
**Decisions Needed Today**
- [Decision with options]: [Option A] vs [Option B] — recommendation: [A/B] — rationale: [one line]
---
## Appendix
- A1: Full P&L (monthly, last 4 quarters)
- A2: 3-year financial model
- A3: Customer list / ARR breakdown
- A4: Full pipeline by deal
- A5: Cohort retention analysis
- A6: Org chart + headcount detail
- A7: [Other as relevant]


@@ -0,0 +1,146 @@
---
name: board-meeting
description: "Multi-agent board meeting protocol for strategic decisions. Runs a structured 6-phase deliberation: context loading, independent C-suite contributions (isolated, no cross-pollination), critic analysis, synthesis, founder review, and decision extraction. Use when the user invokes /cs:board, calls a board meeting, or wants structured multi-perspective executive deliberation on a strategic question."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: board-protocol
  updated: 2026-03-05
  frameworks: 6-phase-board, two-layer-memory, independent-contributions
---
# Board Meeting Protocol
Structured multi-agent deliberation that prevents groupthink, captures minority views, and produces clean, actionable decisions.
## Keywords
board meeting, executive deliberation, strategic decision, C-suite, multi-agent, /cs:board, founder review, decision extraction, independent perspectives
## Invoke
`/cs:board [topic]` — e.g. `/cs:board Should we expand to Spain in Q3?`
---
## The 6-Phase Protocol
### PHASE 1: Context Gathering
1. Load `memory/company-context.md`
2. Load `memory/board-meetings/decisions.md` **(Layer 2 ONLY — never raw transcripts)**
3. Reset session state — no bleed from previous conversations
4. Present agenda + activated roles → wait for founder confirmation
**Chief of Staff selects relevant roles** based on topic (not all 9 every time):
| Topic | Activate |
|-------|----------|
| Market expansion | CEO, CMO, CFO, CRO, COO |
| Product direction | CEO, CPO, CTO, CMO |
| Hiring/org | CEO, CHRO, CFO, COO |
| Pricing | CMO, CFO, CRO, CPO |
| Technology | CTO, CPO, CFO, CISO |
---
### PHASE 2: Independent Contributions (ISOLATED)
**No cross-pollination. Each agent runs before seeing others' outputs.**
Order: Research (if needed) → CMO → CFO → CEO → CTO → COO → CHRO → CRO → CISO → CPO
**Reasoning techniques:** CEO: Tree of Thought (3 futures) | CFO: Chain of Thought (show the math) | CMO: Recursion of Thought (draft→critique→refine) | CPO: First Principles | CRO: Chain of Thought (pipeline math) | COO: Step by Step (process map) | CTO: ReAct (research→analyze→act) | CISO: Risk-Based (P×I) | CHRO: Empathy + Data
**Contribution format (max 5 key points, self-verified):**
```
## [ROLE] — [DATE]
Key points (max 5):
• [Finding] — [VERIFIED/ASSUMED] — 🟢/🟡/🔴
• [Finding] — [VERIFIED/ASSUMED] — 🟢/🟡/🔴
Recommendation: [clear position]
Confidence: High / Medium / Low
Source: [where the data came from]
What would change my mind: [specific condition]
```
Each agent self-verifies before contributing: source attribution, assumption audit, confidence scoring. No untagged claims.
---
### PHASE 3: Critic Analysis
Executive Mentor receives ALL Phase 2 outputs simultaneously. Role: adversarial reviewer, not synthesizer.
Checklist:
- Where did agents agree too easily? (suspicious consensus = red flag)
- What assumptions are shared but unvalidated?
- Who is missing from the room? (customer voice? front-line ops?)
- What risk has nobody mentioned?
- Which agent operated outside their domain?
---
### PHASE 4: Synthesis
Chief of Staff delivers using the **Board Meeting Output** format (defined in `agent-protocol/SKILL.md`):
- Decision Required (one sentence)
- Perspectives (one line per contributing role)
- Where They Agree / Where They Disagree
- Critic's View (the uncomfortable truth)
- Recommended Decision + Action Items (owners, deadlines)
- Your Call (options if founder disagrees)
---
### PHASE 5: Human in the Loop ⏸️
**Full stop. Wait for the founder.**
```
⏸️ FOUNDER REVIEW — [Paste synthesis]
Options: ✅ Approve | ✏️ Modify | ❌ Reject | ❓ Ask follow-up
```
**Rules:**
- User corrections OVERRIDE agent proposals. No pushback. No "but the CFO said..."
- 30-min inactivity → auto-close as "pending review"
- Reopen any time with `/cs:board resume`
---
### PHASE 6: Decision Extraction
After founder approval:
- **Layer 1:** Write full transcript → `memory/board-meetings/YYYY-MM-DD-raw.md`
- **Layer 2:** Append approved decisions → `memory/board-meetings/decisions.md`
- Mark rejected proposals `[DO_NOT_RESURFACE]`
- Confirm to founder with count of decisions logged, actions tracked, flags added
---
## Memory Structure
```
memory/board-meetings/
├── decisions.md # Layer 2 — founder-approved only (Phase 1 loads this)
├── YYYY-MM-DD-raw.md # Layer 1 — full transcripts (never auto-loaded)
└── archive/YYYY/ # Raw transcripts after 90 days
```
**Future meetings load Layer 2 only.** Never Layer 1. This prevents hallucinated consensus.
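Phase 6's two-layer write can be sketched in a few lines (the function name and signature are illustrative; the paths follow the memory structure above):

```python
import tempfile
from datetime import date
from pathlib import Path

def log_decision(base: Path, decision: str, transcript: str,
                 resurface: bool = True) -> str:
    """Write Layer 1 (raw transcript) and append to Layer 2 (approved decisions).

    Returns the new Layer 2 contents.
    """
    meetings = base / "memory" / "board-meetings"
    meetings.mkdir(parents=True, exist_ok=True)
    today = date.today().isoformat()
    # Layer 1: full transcript, never auto-loaded by future meetings
    (meetings / f"{today}-raw.md").write_text(transcript)
    # Layer 2: approved decisions only; this is what Phase 1 loads
    tag = "" if resurface else " [DO_NOT_RESURFACE]"
    layer2 = meetings / "decisions.md"
    with layer2.open("a") as f:
        f.write(f"- {today}: {decision}{tag}\n")
    return layer2.read_text()

with tempfile.TemporaryDirectory() as tmp:
    text = log_decision(Path(tmp), "Approved: expand to Spain in Q3",
                        "(full transcript here)")
    print(text)  # one dated bullet per approved decision
```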
---
## Failure Mode Quick Reference
| Failure | Fix |
|---------|-----|
| Groupthink (all agree) | Re-run Phase 2 isolated; force "strongest argument against" |
| Analysis paralysis | Cap at 5 points; force recommendation even with Low confidence |
| Bikeshedding | Log as async action item; return to main agenda |
| Role bleed (CFO making product calls) | Critic flags; exclude from synthesis |
| Layer contamination | Phase 1 loads decisions.md only — hard rule |
---
## References
- `templates/meeting-agenda.md` — agenda format
- `templates/meeting-minutes.md` — final output format
- `references/meeting-facilitation.md` — conflict handling, timing, failure modes


@@ -0,0 +1,167 @@
# Meeting Facilitation Guide
Operational playbook for running board meetings using the 6-phase protocol.
Reference this when things go sideways — and they will.
---
## Keeping Phase 2 Contributions Focused
**The problem:** Agents with deep domain knowledge tend to over-contribute. An unconstrained CFO can produce 1,500 words on a single agenda item. This kills the meeting.
**The rules:**
- **Hard cap: 5 key points per role.** If a role produces more than 5, Chief of Staff trims to the 5 most material.
- **Every point must include a recommendation or stance.** Observations without positions are filler.
- **No hedging language.** "It depends" is not a key point. "We should do X if Y, Z if not Y" is.
- **Confidence rating required.** Forces the agent to be honest about what they actually know.
- **"What would change my mind"** — this is the most important line in the contribution. It forces falsifiability.
**How to enforce:**
```
Chief of Staff instruction to each role:
"You have 5 key points maximum. Each must include a clear stance.
End with your recommendation and what would change your mind.
Do not read other agents' contributions before writing yours."
```
**If a contribution runs long:**
- Trim to the 5 highest-signal points
- Preserve the recommendation and confidence rating
- Flag in the raw transcript: "[Trimmed for meeting — full version in raw log]"
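The caps and required fields above can be checked mechanically before a contribution enters the meeting. A sketch (the function name and the hedge list are illustrative):

```python
def validate_contribution(points: list[str], recommendation: str,
                          confidence: str) -> list[str]:
    """Return a list of problems with a Phase 2 contribution (empty = valid)."""
    problems = []
    if len(points) > 5:
        problems.append(f"{len(points)} key points exceeds the hard cap of 5")
    if not recommendation.strip():
        problems.append("missing recommendation: every contribution needs a stance")
    if confidence.lower() not in {"high", "medium", "low"}:
        problems.append(f"confidence {confidence!r} must be High, Medium, or Low")
    for point in points:
        if "it depends" in point.lower():
            problems.append(f"hedging language in point: {point!r}")
    return problems

print(validate_contribution(
    ["Spain entry needs ~$400K upfront", "it depends on hiring pace"],
    recommendation="Delay to Q4 unless a country manager signs by June",
    confidence="Medium",
))  # flags the "it depends" point
```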
---
## Handling Role Conflicts in Phase 3
**What the Executive Mentor is for:** Not harmony. Not consensus. Productive friction.
**Common conflict types:**
### 1. Data conflict (two agents cite contradictory numbers)
- Flag both numbers explicitly
- Do NOT pick a winner — that's the founder's job
- Ask: "CFO says CAC is $2,400. CRO says $1,800. These can't both be right. Which dataset are you using?"
- Action item: Assign data reconciliation to one owner before next meeting
### 2. Priority conflict (two agents want different things first)
- Surface the underlying assumption difference
- Example: "CMO wants to invest in brand. CFO wants to cut burn. The real question is: do we believe revenue will grow 40% next quarter?"
- Frame as a bet, not a fight
### 3. Role conflict (agent operating outside their lane)
- CFO making product calls → flag and exclude from synthesis
- CMO commenting on architecture → flag and exclude
- The Executive Mentor notes: "[ROLE] contribution on [topic] is outside domain. Excluded from synthesis. Refer to [correct role]."
- This is not an error. It's expected. Executives have opinions on everything. Only domain-relevant contributions count.
### 4. False consensus (everyone agrees but nobody has evidence)
- This is the most dangerous failure mode
- Symptom: All Phase 2 contributions say "yes" with high confidence
- Executive Mentor response: "Unanimous agreement on a hard question is a red flag. What evidence does each of you have? Or are you reasoning from the same assumption?"
- Force each agreeing agent to state their independent evidence
---
## When to Extend vs Cut Short a Meeting
**Extend when:**
- A genuine new risk surfaces in Phase 3 that wasn't in the agenda
- The founder asks a question that requires re-running Phase 2 for a new angle
- A data conflict is discovered that changes the decision space entirely
- The action items from synthesis are unclear or unowned
**How to extend:** Add a new mini-Phase 2 with only the relevant roles for the new question. Don't restart the full meeting.
**Cut short when:**
- The founder has already reached a decision before Phase 4 — capture it, log it, move on
- The agenda item is resolved in Phase 2 without genuine conflict — skip Phase 3, go straight to synthesis
- It's a pure update meeting with no decisions required — skip Phases 2-4, go straight to action items
**Never cut short:**
- Phase 5 (founder review) — always required, always explicit
- Phase 6 (decision extraction) — always required, even for small decisions
---
## Handling Founder Disagreement with All Agents
This happens. The founder has context agents don't.
**Protocol:**
1. Acknowledge explicitly: "You're overriding the consensus position."
2. Ask: "What do you know that the agents didn't factor in?" (Not to challenge — to capture.)
3. Log the override in Layer 2 with full context:
```
User Override: Founder rejected [consensus position] because [reason].
Decision: [founder's actual decision]
Agent recommendation: [what they said] — DO NOT RESURFACE without new data
```
4. Never push back on a founder override. Document it. Move on.
5. If the same override happens 3+ times, flag a pattern: "You've overridden the CFO on burn rate three meetings in a row. Would you like to update the financial constraints in company-context.md?"
**What NOT to do:**
- Don't say "but the CFO said..."
- Don't re-argue on behalf of any agent
- Don't note it as a "controversial" decision in the minutes — it's just the decision
---
## Common Failure Modes
### Groupthink
**Symptom:** All agents produce similar recommendations with high confidence.
**Cause:** Agents are inadvertently reading each other's outputs (Phase 2 isolation violated), or company-context.md contains implicit bias toward one direction.
**Fix:** Re-run Phase 2 with explicit isolation. Ask: "Give me the strongest argument AGAINST this direction."
### Analysis Paralysis
**Symptom:** Phase 2 produces comprehensive analysis but no clear recommendation from any role.
**Cause:** Agents are hedging. Usually happens on genuinely hard questions.
**Fix:** Force the issue. "I need a recommendation, not an analysis. If you had to bet the company on one direction, what would it be? Confidence can be Low."
### Bikeshedding
**Symptom:** 30+ minutes spent on a detail that doesn't matter to the core decision.
**Cause:** An easy-to-understand sub-problem attracts disproportionate attention.
**Example:** Debating button color on a pricing page instead of the pricing strategy.
**Fix:** Chief of Staff intervenes: "This is a sub-decision. I'm logging it as a separate action item for async resolution. Back to [main agenda item]."
### Scope Creep
**Symptom:** New agenda items keep appearing mid-meeting.
**Cause:** Meeting surfaces real issues that feel urgent.
**Fix:** New items go on a "parking lot" list. Addressed after the current agenda is complete or in the next meeting.
```
🅿️ PARKING LOT
- [Item 1] — added by [role], will address [when]
- [Item 2]
```
### Layer Contamination
**Symptom:** Future meeting references a rejected proposal or a debate that was never approved.
**Cause:** Phase 1 accidentally loaded a raw transcript instead of decisions.md.
**Fix:** Hard rule in Phase 1: load decisions.md (Layer 2) ONLY. Never load raw transcripts. If raw context is needed, founder explicitly requests it.
### Decision Amnesia
**Symptom:** Same question debated again in a later meeting.
**Cause:** Layer 2 decisions.md not consulted in Phase 1, or entry was too vague.
**Fix:** Phase 1 always surfaces relevant past decisions. If a question was already decided, Chief of Staff surfaces it: "We addressed this on [DATE]. Decision was [X]. Do you want to reopen it?"
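The Phase 1 lookup can be mechanical: keyword overlap between the new agenda question and logged `**Decision:**` lines. A minimal sketch, assuming decisions follow the minutes template's `**Decision:**` line format; the matching rule and sample log are illustrative, not part of the protocol:

```python
import re

def past_decisions(log_text: str, question: str, min_overlap: int = 2) -> list[str]:
    """Return logged decision lines that share keywords with the question."""
    keywords = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", question)}
    hits = []
    for line in log_text.splitlines():
        # Only Layer 2 decision lines count; raw debate never enters this scan.
        if not line.startswith("**Decision:**"):
            continue
        words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", line)}
        if len(keywords & words) >= min_overlap:
            hits.append(line)
    return hits

log = (
    "**Decision:** Keep enterprise pricing at $99/seat through Q3.\n"
    "**Decision:** Hire two senior backend engineers before any frontend roles.\n"
)
print(past_decisions(log, "Should we change enterprise pricing this quarter?"))
```

If this returns anything, the Chief of Staff surfaces it before debate starts.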
### Role Fatigue
**Symptom:** Later agents in Phase 2 (CHRO, CRO) produce weaker contributions.
**Cause:** Context window pressure. Agents at the end of a long meeting have less capacity.
**Fix:** For meetings with 7+ roles, split into two batches. First batch: strategic roles (CEO, CFO, CMO). Second batch: operational roles (COO, CHRO, CRO). Run Executive Mentor after all contributions.
---
## Meeting Health Metrics
After each board meeting, score it:
| Metric | Good | Bad |
|--------|------|-----|
| Action items produced | 3-7 | 0 or >10 |
| Decisions with clear owners | 100% | < 80% |
| Unresolved open questions | 1-3 | >5 |
| Founder overrides | 0-2 | >5 (suggests context mismatch) |
| Roles activated | 3-6 | All 9 (too many = noise) |
| Phase 2 conflicts surfaced | At least 1 | 0 (groupthink risk) |
Track these in `memory/board-meetings/meeting-health.md` over time. Pattern: if action items consistently exceed 8, meetings are too infrequent. If conflicts are consistently 0, isolation is broken.
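Those thresholds can be checked mechanically after each meeting. A minimal sketch; the field names and flag wording are illustrative, not part of the protocol:

```python
from dataclasses import dataclass

@dataclass
class MeetingHealth:
    action_items: int
    owned_decision_pct: float   # % of decisions with a clear owner
    open_questions: int
    founder_overrides: int
    roles_activated: int
    conflicts_surfaced: int

def health_flags(m: MeetingHealth) -> list[str]:
    """Return warnings per the meeting health metrics above."""
    flags = []
    if not 3 <= m.action_items <= 7:
        flags.append("action items outside the 3-7 range")
    if m.owned_decision_pct < 80:
        flags.append("too many decisions without clear owners")
    if m.open_questions > 5:
        flags.append("too many unresolved open questions")
    if m.founder_overrides > 5:
        flags.append("frequent overrides (suggests context mismatch)")
    if not 3 <= m.roles_activated <= 6:
        flags.append("role count outside 3-6 (too many = noise)")
    if m.conflicts_surfaced == 0:
        flags.append("zero conflicts surfaced (groupthink risk)")
    return flags

print(health_flags(MeetingHealth(5, 100.0, 2, 1, 4, 2)))   # healthy meeting: []
print(health_flags(MeetingHealth(12, 60.0, 1, 0, 9, 0)))   # four warnings
```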

# Board Meeting Agenda Template
Use this to structure a board meeting before invoking `/cs:board`.
Paste it into the conversation or save it as `memory/board-meetings/agenda-YYYY-MM-DD.md`.
---
## Board Meeting — [DATE]
**Convened by:** [Founder name]
**Facilitator:** Chief of Staff (Leo)
**Duration:** [estimated, e.g., 45-90 min]
**Status:** Draft / Confirmed
---
## Standing Items (always included)
| Item | Owner | Time |
|------|-------|------|
| Layer 2 decisions review (what changed since last meeting) | Chief of Staff | 5 min |
| Open action items from last meeting | All | 10 min |
| Blockers requiring founder decision | All | 5 min |
---
## Agenda Items
### Item 1: [Title]
**Type:** Decision required / Exploration / Update
**Lead role(s):** [e.g., CEO + CFO]
**Context:** [1-2 sentences on why this is on the agenda now]
**Decision needed:** [What specifically must be decided, or what question must be answered]
**Success criteria:** [How will we know this agenda item is resolved?]
**Relevant past decisions:** [Reference any Layer 2 entries]
**Time box:** [e.g., 20 min]
---
### Item 2: [Title]
**Type:** Decision required / Exploration / Update
**Lead role(s):**
**Context:**
**Decision needed:**
**Success criteria:**
**Relevant past decisions:**
**Time box:**
---
### Item 3: [Title]
**Type:** Decision required / Exploration / Update
**Lead role(s):**
**Context:**
**Decision needed:**
**Success criteria:**
**Relevant past decisions:**
**Time box:**
---
## Out of Scope (explicitly excluded)
List topics that might come up but are NOT on today's agenda:
- [Topic] — defer to [date or next meeting]
- [Topic] — owner to handle async
---
## Pre-Read
Materials all participants should review before the meeting:
- [ ] `memory/board-meetings/decisions.md` (Chief of Staff loads automatically)
- [ ] [Link or filename]
- [ ] [Link or filename]
---
## Notes
[Any special instructions, constraints, or context for this meeting]

# Board Meeting Minutes Template
This is the Layer 2 output — the founder-approved record of what was decided.
Written by Chief of Staff after Phase 5 (founder approval).
Appended to `memory/board-meetings/decisions.md`.
Do NOT include raw agent debate here. That lives in `YYYY-MM-DD-raw.md` (Layer 1).
---
## Board Meeting — [DATE]
**Agenda:** [Topic or meeting title]
**Participants (roles activated):** [e.g., CEO, CFO, CMO, COO, Executive Mentor]
**Facilitator:** Chief of Staff
**Status:** ✅ Approved by founder / ⏸️ Pending review
---
## Decisions Made
### Decision 1: [Title]
**Agenda item:** [Item this decision resolves]
**Decision:** [Exactly what was decided — one clear statement]
**Rationale:** [Why this was chosen over alternatives, in 1-3 sentences]
**Owner:** [Who is accountable for execution]
**Deadline:** [Date]
**Review date:** [When to check progress]
**User override:** [If founder overrode agent consensus — what and why. Leave blank if not applicable.]
---
### Decision 2: [Title]
**Agenda item:**
**Decision:**
**Rationale:**
**Owner:**
**Deadline:**
**Review date:**
**User override:**
---
## Action Items
| # | Action | Owner | Deadline | Review Date | Status |
|---|--------|-------|----------|-------------|--------|
| 1 | [action] | [name/role] | [date] | [date] | Open |
| 2 | [action] | [name/role] | [date] | [date] | Open |
| 3 | [action] | [name/role] | [date] | [date] | Open |
---
## Explicitly Rejected Proposals
These were considered and rejected. Do not resurface without new information.
| Proposal | Rejected by | Reason | Flag |
|----------|-------------|--------|------|
| [Proposal text] | Founder | [reason] | [DO_NOT_RESURFACE] |
| [Proposal text] | Consensus | [reason] | [DO_NOT_RESURFACE] |
---
## Open Questions (unresolved, deferred)
These were not resolved in this meeting. They carry forward.
1. [Question] — Owner: [who will research] — Due: [date]
2. [Question] — Owner: — Due:
---
## Risk Register Updates
| Risk | Probability | Impact | Owner | Mitigation | Status |
|------|-------------|--------|-------|-----------|--------|
| [risk] | H/M/L | H/M/L | [name] | [action] | Open |
---
## Next Meeting
**Suggested date:** [DATE]
**Trigger items:** [Action items with review dates that will need board discussion]
**Pre-read:** [What to prepare]
---
*Minutes approved by: [Founder name] on [DATE]*
*Raw transcript: `memory/board-meetings/[DATE]-raw.md`*

---
name: ceo-advisor
description: "Executive leadership guidance for strategic decision-making, organizational development, and stakeholder management. Use when planning strategy, preparing board presentations, managing investors, developing organizational culture, making executive decisions, fundraising, or when user mentions CEO, strategic planning, board meetings, investor updates, organizational leadership, or executive strategy."
license: MIT
metadata:
version: 2.0.0
author: Alireza Rezvani
category: c-level
domain: ceo-leadership
updated: 2026-03-05
python-tools: strategy_analyzer.py, financial_scenario_analyzer.py
frameworks: executive-decisions, board-governance, leadership-culture
---
# CEO Advisor
Strategic leadership frameworks for vision, fundraising, board management, culture, and stakeholder alignment.
## Keywords
CEO, chief executive officer, strategy, strategic planning, fundraising, board management, investor relations, culture, organizational leadership, vision, mission, stakeholder management, capital allocation, crisis management, succession planning
## Quick Start
### For Strategic Planning
```bash
python scripts/strategy_analyzer.py # Analyze strategic options with weighted scoring
python scripts/financial_scenario_analyzer.py # Model financial scenarios (base/bull/bear)
```
## Core Responsibilities
### 1. Vision & Strategy
Set the direction. Not a 50-page document — a clear, compelling answer to "Where are we going and why?"
**Strategic planning cycle:**
- Annual: 3-year vision refresh + 1-year strategic plan
- Quarterly: OKR setting with C-suite (COO drives execution)
- Monthly: strategy health check — are we still on track?
**Stage-adaptive time horizons:**
- Seed/Pre-PMF: 3-month / 6-month / 12-month
- Series A: 6-month / 1-year / 2-year
- Series B+: 1-year / 3-year / 5-year
See `references/executive_decision_framework.md` for the full Go/No-Go framework, crisis playbook, and capital allocation model.
### 2. Capital & Resource Management
You're the chief allocator. Every dollar, every person, every hour of engineering time is a bet.
**Capital allocation priorities:**
1. Keep the lights on (operations, must-haves)
2. Protect the core (retention, quality, security)
3. Grow the core (expansion of what works)
4. Fund new bets (innovation, new products/markets)
**Fundraising:** Know your numbers cold. Timing matters more than valuation. See `references/board_governance_investor_relations.md`.
### 3. Stakeholder Leadership
You serve multiple masters. Priority order:
1. Customers (they pay the bills)
2. Team (they build the product)
3. Board/Investors (they fund the mission)
4. Partners (they extend your reach)
### 4. Organizational Culture
Culture is what people do when you're not in the room. It's your job to define it, model it, and enforce it.
See `references/leadership_organizational_culture.md` for culture development frameworks and the CEO learning agenda. Also see `culture-architect/` for the operational culture toolkit.
### 5. Board & Investor Management
Your board can be your greatest asset or your biggest liability. The difference is how you manage them.
See `references/board_governance_investor_relations.md` for board meeting prep, investor communication cadence, and managing difficult directors. Also see `board-deck-builder/` for assembling the actual board deck.
## Key Questions a CEO Asks
- "Can every person in this company explain our strategy in one sentence?"
- "What's the one thing that, if it goes wrong, kills us?"
- "Am I spending my time on the highest-leverage activity right now?"
- "What decision am I avoiding? Why?"
- "If we could only do one thing this quarter, what would it be?"
- "Do our investors and our team hear the same story from me?"
- "Who would replace me if I got hit by a bus tomorrow?"
## CEO Metrics Dashboard
| Category | Metric | Target | Frequency |
|----------|--------|--------|-----------|
| **Strategy** | Annual goals hit rate | > 70% | Quarterly |
| **Revenue** | ARR growth rate | Stage-dependent | Monthly |
| **Capital** | Months of runway | > 12 months | Monthly |
| **Capital** | Burn multiple | < 2x | Monthly |
| **Product** | NPS / PMF score | > 40 NPS | Quarterly |
| **People** | Regrettable attrition | < 10% | Monthly |
| **People** | Employee engagement | > 7/10 | Quarterly |
| **Board** | Board NPS (your relationship) | Positive trend | Quarterly |
| **Personal** | % time on strategic work | > 40% | Weekly |
## Red Flags
- You're the bottleneck for more than 3 decisions per week
- The board surprises you with questions you can't answer
- Your calendar is 80%+ meetings with no strategic blocks
- Key people are leaving and you didn't see it coming
- You're fundraising reactively (runway < 6 months, no plan)
- Your team can't articulate the strategy without you in the room
- You're avoiding a hard conversation (co-founder, investor, underperformer)
## Integration with C-Suite Roles
| When... | CEO works with... | To... |
|---------|-------------------|-------|
| Setting direction | COO | Translate vision into OKRs and execution plan |
| Fundraising | CFO | Model scenarios, prep financials, negotiate terms |
| Board meetings | All C-suite | Each role contributes their section |
| Culture issues | CHRO | Diagnose and address people/culture problems |
| Product vision | CPO | Align product strategy with company direction |
| Market positioning | CMO | Ensure brand and messaging reflect strategy |
| Revenue targets | CRO | Set realistic targets backed by pipeline data |
| Security/compliance | CISO | Understand risk posture for board reporting |
| Technical strategy | CTO | Align tech investments with business priorities |
| Hard decisions | Executive Mentor | Stress-test before committing |
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- Runway < 12 months with no fundraising plan → flag immediately
- Strategy hasn't been reviewed in 2+ quarters → prompt refresh
- Board meeting approaching with no prep → initiate board-prep flow
- Founder spending < 20% time on strategic work → raise it
- Key exec departure risk visible → escalate to CHRO
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Help me think about strategy" | Strategic options matrix with risk-adjusted scoring |
| "Prep me for the board" | Board narrative + anticipated questions + data gaps |
| "Should we raise?" | Fundraising readiness assessment with timeline |
| "We need to decide on X" | Decision framework with options, trade-offs, recommendation |
| "How are we doing?" | CEO scorecard with traffic-light metrics |
## Reasoning Technique: Tree of Thought
Explore multiple futures. For every strategic decision, generate at least 3 paths. Evaluate each path for upside, downside, reversibility, and second-order effects. Pick the path with the best risk-adjusted outcome.
**Stage-adaptive horizons:**
- Seed: project 3m/6m/12m
- Series A: project 6m/1y/2y
- Series B+: project 1y/3y/5y
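That evaluation can be sketched as a weighted score per path. The paths, scores, and weights below are hypothetical; downside carries a negative weight so risk subtracts from the total:

```python
# Score each candidate path 1-10 on the four dimensions above.
# Downside is weighted negatively: a high downside lowers the total.
WEIGHTS = {"upside": 0.4, "downside": -0.3, "reversibility": 0.2, "second_order": 0.1}

paths = {
    "expand upmarket":    {"upside": 8, "downside": 6, "reversibility": 4, "second_order": 5},
    "double down on SMB": {"upside": 6, "downside": 3, "reversibility": 8, "second_order": 6},
    "launch new product": {"upside": 9, "downside": 8, "reversibility": 2, "second_order": 4},
}

def risk_adjusted(scores: dict) -> float:
    return sum(WEIGHTS[dim] * value for dim, value in scores.items())

for name, scores in paths.items():
    print(f"{name}: {risk_adjusted(scores):+.1f}")
best = max(paths, key=lambda p: risk_adjusted(paths[p]))
print("best risk-adjusted path:", best)
```

Note how the highest-upside path loses here: poor reversibility and high downside outweigh raw upside, which is the point of the technique.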
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`
## Resources
- `references/executive_decision_framework.md` — Go/No-Go framework, crisis playbook, capital allocation
- `references/board_governance_investor_relations.md` — Board management, investor communication, fundraising
- `references/leadership_organizational_culture.md` — Culture development, CEO routines, succession planning

---
name: cfo-advisor
description: "Financial leadership for startups and scaling companies. Financial modeling, unit economics, fundraising strategy, cash management, and board financial packages. Use when building financial models, analyzing unit economics, planning fundraising, managing cash runway, preparing board materials, or when user mentions CFO, burn rate, runway, fundraising, unit economics, LTV, CAC, term sheets, or financial strategy."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: cfo-leadership
updated: 2026-03-05
python-tools: burn_rate_calculator.py, unit_economics_analyzer.py, fundraising_model.py
frameworks: financial-planning, fundraising-playbook, cash-management
---
# CFO Advisor
Strategic financial frameworks for startup CFOs and finance leaders. Numbers-driven, decisions-focused.
This is **not** a financial analyst skill. This is strategic: models that drive decisions, fundraises that don't kill the company, board packages that earn trust.
## Keywords
CFO, chief financial officer, burn rate, runway, unit economics, LTV, CAC, fundraising, Series A, Series B, term sheet, cap table, dilution, financial model, cash flow, board financials, FP&A, SaaS metrics, ARR, MRR, net dollar retention, gross margin, scenario planning, cash management, treasury, working capital, burn multiple, rule of 40
## Quick Start
```bash
# Burn rate & runway scenarios (base/bull/bear)
python scripts/burn_rate_calculator.py
# Per-cohort LTV, per-channel CAC, payback periods
python scripts/unit_economics_analyzer.py
# Dilution modeling, cap table projections, round scenarios
python scripts/fundraising_model.py
```
## Key Questions (ask these first)
- **What's your burn multiple?** (Net burn ÷ Net new ARR. > 2x is a problem.)
- **If fundraising takes 6 months instead of 3, do you survive?** (If not, you're already behind.)
- **Show me unit economics per cohort, not blended.** (Blended hides deterioration.)
- **What's your NDR?** (> 100% means you grow without signing a single new customer.)
- **What are your decision triggers?** (At what runway do you start cutting? Define now, not in a crisis.)
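The first two questions are arithmetic, not judgment. A minimal sketch with hypothetical figures (in practice `scripts/burn_rate_calculator.py` covers this with full scenarios):

```python
def burn_multiple(net_burn: float, net_new_arr: float) -> float:
    """Net burn divided by net new ARR over the same period; > 2x is a problem."""
    return net_burn / net_new_arr

def survives_slow_raise(cash: float, monthly_net_burn: float, raise_months: int = 6) -> bool:
    """Can current cash cover a raise that takes raise_months instead of 3?"""
    return cash >= monthly_net_burn * raise_months

# Hypothetical quarter: $1.2M net burn against $500K net new ARR.
print(f"burn multiple: {burn_multiple(1_200_000, 500_000):.1f}x")  # 2.4x, over the 2x line
print("survives a 6-month raise:", survives_slow_raise(cash=3_000_000, monthly_net_burn=400_000))
```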
## Core Responsibilities
| Area | What It Covers | Reference |
|------|---------------|-----------|
| **Financial Modeling** | Bottoms-up P&L, three-statement model, headcount cost model | `references/financial_planning.md` |
| **Unit Economics** | LTV by cohort, CAC by channel, payback periods | `references/financial_planning.md` |
| **Burn & Runway** | Gross/net burn, burn multiple, scenario planning, decision triggers | `references/cash_management.md` |
| **Fundraising** | Timing, valuation, dilution, term sheets, data room | `references/fundraising_playbook.md` |
| **Board Financials** | What boards want, board pack structure, budget vs. actuals (BvA) | `references/financial_planning.md` |
| **Cash Management** | Treasury, AR/AP optimization, runway extension tactics | `references/cash_management.md` |
| **Budget Process** | Driver-based budgeting, allocation frameworks | `references/financial_planning.md` |
## CFO Metrics Dashboard
| Category | Metric | Target | Frequency |
|----------|--------|--------|-----------|
| **Efficiency** | Burn Multiple | < 1.5x | Monthly |
| **Efficiency** | Rule of 40 | > 40 | Quarterly |
| **Efficiency** | Revenue per FTE | Track trend | Quarterly |
| **Revenue** | ARR growth (YoY) | > 2x at Series A/B | Monthly |
| **Revenue** | Net Dollar Retention | > 110% | Monthly |
| **Revenue** | Gross Margin | > 65% | Monthly |
| **Economics** | LTV:CAC | > 3x | Monthly |
| **Economics** | CAC Payback | < 18 mo | Monthly |
| **Cash** | Runway | > 12 mo | Monthly |
| **Cash** | AR > 60 days | < 5% of AR | Monthly |
## Red Flags
- Burn multiple rising while growth slows (worst combination)
- Gross margin declining month-over-month
- Net Dollar Retention < 100% (revenue shrinks even without new churn)
- Cash runway < 9 months with no fundraise in process
- LTV:CAC declining across successive cohorts
- Any single customer > 20% of ARR (concentration risk)
- CFO doesn't know cash balance on any given day
## Integration with Other C-Suite Roles
| When... | CFO works with... | To... |
|---------|-------------------|-------|
| Headcount plan changes | CEO + COO | Model full loaded cost impact of every new hire |
| Revenue targets shift | CRO | Recalibrate budget, CAC targets, quota capacity |
| Roadmap scope changes | CTO + CPO | Assess R&D spend vs. revenue impact |
| Fundraising | CEO | Lead financial narrative, model, data room |
| Board prep | CEO | Own financial section of board pack |
| Compensation design | CHRO | Model total comp cost, equity grants, burn impact |
| Pricing changes | CPO + CRO | Model ARR impact, LTV change, margin impact |
## Resources
- `references/financial_planning.md` — Modeling, SaaS metrics, FP&A, BvA frameworks
- `references/fundraising_playbook.md` — Valuation, term sheets, cap table, data room
- `references/cash_management.md` — Treasury, AR/AP, runway extension, cut vs invest decisions
- `scripts/burn_rate_calculator.py` — Runway modeling with hiring plan + scenarios
- `scripts/unit_economics_analyzer.py` — Per-cohort LTV, per-channel CAC
- `scripts/fundraising_model.py` — Dilution, cap table, multi-round projections
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- Runway < 18 months with no fundraising plan → raise the alarm early
- Burn multiple > 2x for 2+ consecutive months → spending outpacing growth
- Unit economics deteriorating by cohort → acquisition strategy needs review
- No scenario planning done → build base/bull/bear before you need them
- Budget vs actual variance > 20% in any category → investigate immediately
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "How much runway do we have?" | Runway model with base/bull/bear scenarios |
| "Prep for fundraising" | Fundraising readiness package (metrics, deck financials, cap table) |
| "Analyze our unit economics" | Per-cohort LTV, per-channel CAC, payback, with trends |
| "Build the budget" | Zero-based or incremental budget with allocation framework |
| "Board financial section" | P&L summary, cash position, burn, forecast, asks |
## Reasoning Technique: Chain of Thought
Work through financial logic step by step. Show all math. Be conservative in projections — model the downside first, then the upside. Never round in your favor.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`

# Cash Management Reference
Cash is the oxygen of a startup. You can be unprofitable for years. You cannot be out of cash for a day.
---
## 1. Cash Flow Management
### The Cash Equation
```
Ending Cash = Beginning Cash
+ Cash collected from customers
- Cash paid to employees
- Cash paid to vendors
- Cash paid for infrastructure
- Debt service
+/- Financing activities
Note: This is NOT the P&L. Revenue recognition ≠ cash collected.
```
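As a quick runnable check, the direct-method equation above can be sketched with placeholder figures (illustrative only, not from any model in this document):

```python
def ending_cash(beginning_cash, collected, payroll, vendors,
                infrastructure, debt_service, financing=0.0):
    """Direct-method cash roll-forward, per the cash equation above."""
    return (beginning_cash + collected - payroll - vendors
            - infrastructure - debt_service + financing)

# Placeholder month: $2.0M opening, $300K collected, $350K payroll,
# $120K vendors, $40K infra, $10K debt service, no financing.
print(ending_cash(2_000_000, 300_000, 350_000, 120_000, 40_000, 10_000))
```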
### Where Cash Hides (and Leaks)
**Cash sources you might be under-using:**
- Deferred revenue (annual billing locks in cash 12 months early)
- Customer deposits on enterprise contracts
- Vendor payment terms (Net 60 instead of Net 30 = free float)
- AWS/GCP startup credits (often $25K-$100K available, widely unused)
- Revenue-based financing on predictable MRR
- Venture debt (non-dilutive, available post-Series A)
**Cash drains that sneak up on you:**
- Annual software licenses paid in Q1 (budget for the lump sum)
- Event sponsorships (often 6-12 months in advance)
- Recruiting fees (15-25% of first-year salary, due on hire)
- Legal fees (data room prep, fundraise close = $50K-$200K surprise)
- Late-paying enterprise customers (Net 60 in contract, pays Net 90 in practice)
### Cash Flow vs P&L: The Gap
**Scenario: $1M enterprise deal signed December 31**
```
P&L impact (accrual):
December revenue: $83K (1/12 of annual)
Cash impact:
If billed annually upfront: +$1,000K in December (GREAT)
If billed quarterly: +$250K in December (good)
If billed monthly: +$83K in December (fine)
If Net 60 terms: +$0 in December, +$83K in February (cash drag)
```
**The CFO's job:** Maximize the timing difference between cash in and cash out.
- Collect from customers as early as possible (annual upfront, early payment discounts)
- Pay vendors as late as possible (maximize payment terms)
- Never confuse deferred revenue with earned revenue: the cash is yours to deploy, but it remains a liability until the service is delivered
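The timing gap in the December scenario can be sketched numerically. This is a hypothetical helper (the function name and cadence labels are illustrative), not part of the bundled scripts:

```python
def first_month_cash(acv, cadence):
    """Cash collected in the signing month under each billing cadence."""
    schedule = {
        "annual": acv,            # billed annually upfront
        "quarterly": acv / 4,     # first quarter invoiced at signing
        "monthly": acv / 12,      # first month invoiced at signing
        "net60_monthly": 0.0,     # Net 60: nothing lands this month
    }
    return schedule[cadence]

for c in ("annual", "quarterly", "monthly", "net60_monthly"):
    print(c, round(first_month_cash(1_000_000, c)))
```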
---
## 2. Treasury and Banking Strategy
### Account Structure
```
Operating Account (primary bank):
Balance: 3-6 months of operating expenses
Purpose: Payroll, vendor payments, day-to-day ops
Product: Business checking or high-yield business savings
Bank: Chase, SVB successor (First Citizens), Mercury, Brex
Reserve Account (secondary or same bank):
Balance: Everything above operating float
Purpose: Reserve; move to operating as needed
Product: Money market fund or T-Bill ladder
Target yield (2024-2025): 4.5%-5.2%
Products: Vanguard VMFXX, Fidelity SPAXX, or direct T-Bills via TreasuryDirect
Emergency Account (separate bank):
Balance: 1-2 months expenses
Purpose: If primary bank has issues (SVB taught this lesson)
Product: Business savings
```
**FDIC coverage:** $250K per depositor per institution. For balances above $250K at a single bank, either:
- Use CDARS/ICS (bank sweeps into multiple FDIC-insured accounts automatically)
- Spread across multiple banks
- Move excess to T-Bills (backed by US government, not FDIC, but safer)
**After SVB (March 2023):** Every CFO should have at least 2 banking relationships. If one bank fails or freezes, you can make payroll.
### Yield on Cash
At $3M cash, the difference between 0% (checking) and 5% (T-Bills) is $150K/year.
That's a month of runway for a $150K/month burn company. **Get yield on reserves.**
```
Monthly yield on $3M at 5%: ~$12,500
Annual: ~$150,000
This is not optional. Set it up once and automate.
```
---
## 3. AR/AP Optimization
### Accounts Receivable: Get Paid Faster
**Billing model impact on cash:**
```
                Annual Upfront   Quarterly    Monthly   Monthly Net 30
Cash Day 1:     100% of ACV      25% of ACV   8.3%      0%
Cash Month 2:   0% (done)        0%           8.3%      8.3%
12-month total: 100%             100%         100%      100%
For $100K ACV customer, Year 1 cash:
Annual upfront: $100K immediately
Monthly Net 30: $8.3K × 11 months = $91.7K (1 month lag)
Cash benefit: $100K vs $91.7K = $8.3K benefit + no collection risk
```
**Push for annual billing. Make it easy with a discount:**
```
"Pay annually and get 2 months free (16% discount)"
Most SMB customers will take this.
Enterprise: use MSA structure with annual invoicing, not month-to-month.
```
**AR Aging Policy:**
```
0-30 days: Current. No action.
31-60 days: Friendly reminder from AR team.
61-90 days: Escalate to Customer Success.
> 90 days: CFO or CEO-level outreach. Consider collections.
> 120 days: Reserve for bad debt. Legal/collections.
Reserve policy: 50% of 90-120 day AR, 100% of > 120 days
```
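The reserve policy above, as a runnable sketch (the function name and ledger figures are placeholders, not part of the bundled scripts):

```python
def bad_debt_reserve(ar_by_age_days):
    """Reserve per the policy above: 50% of 90-120 day AR, 100% past 120.

    ar_by_age_days: list of (days_outstanding, amount) tuples.
    """
    reserve = 0.0
    for days, amount in ar_by_age_days:
        if days > 120:
            reserve += amount          # fully reserved
        elif days > 90:
            reserve += 0.5 * amount    # half reserved
    return reserve

# Placeholder ledger: $40K current, $20K at 100 days, $10K at 130 days.
print(bad_debt_reserve([(15, 40_000), (100, 20_000), (130, 10_000)]))
```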
**What slows down collections:**
- Wrong contact (billing contact vs. user) — get finance contact during onboarding
- Enterprise PO required — know this upfront, not when invoice is due
- Credit holds or budget freeze — your CSM should surface these early
- Invoice errors — every wrong invoice extends payment by 30-60 days
### Accounts Payable: Pay Slower
**Standard terms by vendor type:**
```
SaaS tools: Net 30 default. Push for Net 45 or Net 60 at scale.
Cloud providers: Pay as you go. Apply for credits first.
Professional services (agencies, lawyers): Net 30 minimum. Get Net 45 where possible.
Rent/office: Whatever the lease says. Negotiate quarterly payments if you can.
Payroll: Pay on time. Never delay payroll. Ever.
```
**Early payment discount trap:**
```
"2/10 Net 30" means: 2% discount if you pay in 10 days, else pay in 30.
Annual cost of NOT taking this: 2% × (365/(30-10)) = ~36% annualized
ALWAYS take early payment discounts > 2%.
Never take discounts < 1%.
```
**AP workflow:**
1. All invoices → finance inbox (not individual employees)
2. Approval required above threshold ($500 for startups)
3. Pay at end of terms, not when invoice arrives
4. Batch payments weekly (not daily) to reduce processing overhead
---
## 4. Runway Extension Tactics
Use these when you need to extend runway without raising. Ranked by speed and impact.
### Tier 1: Fast Cash (Days)
**Annual billing campaign:**
```
Target: Existing monthly customers
Offer: 2 months free (16% discount) or 1 month free (8% discount) for annual upfront
Process: CSM-led email campaign to all monthly customers
Impact: $X MRR × 12 × conversion rate = immediate cash injection
Timeline: 2-4 weeks
No dilution. No debt. High impact.
```
**Prepayment incentive for pipeline:**
```
For deals in late stage, offer annual upfront pricing with 10-15% discount.
Close rate may increase. Cash timing dramatically improves.
```
### Tier 2: Cost Control (2-4 Weeks)
**Hiring freeze:**
```
Every unfilled role saves (annual salary × 1.25 loaded cost) ÷ 12 per month.
For a 30-person company, 3 open roles at $150K average:
Monthly savings: 3 × $150K × 1.25 / 12 = $47K/month
Over 6 months: $280K
Impact: Immediate. No blood.
```
**Software audit:**
```
Pull all credit card charges and ACH debits.
Cancel any subscription not used in 30 days.
Typical savings: $3K-$15K/month at Series A stage.
Tools: Vendr, Spendesk, or just a spreadsheet of recurring charges.
```
**Cloud cost optimization:**
```
Right-size instances (dev/staging don't need prod-scale)
Reserve instances (1-year reserved = 30-40% savings vs on-demand)
Delete unused resources (load balancers, IPs, old snapshots)
Typical savings: 20-35% of current cloud bill
```
### Tier 3: Vendor Renegotiation (2-6 Weeks)
**Payment term extension:**
```
Ask key vendors for Net 60 instead of Net 30.
$500K in annual vendor spend × 30 extra days of float = $500K × (30/365) = ~$41K cash improvement
Won't always work, but vendors often say yes to good customers.
```
**Renewal timing:**
```
Push annual renewals to later in the year.
Preserve cash for Q1 (typically heaviest sales hiring quarter).
```
**Vendor credits:**
```
AWS: AWS Activate (up to $100K for qualified startups)
GCP: Google for Startups (up to $200K)
Azure: Microsoft for Startups (up to $150K)
Stripe: Revenue share programs
Hubspot: Startup pricing (90% off)
```
### Tier 4: Financing (Weeks to Months)
**Revenue-based financing:**
```
Providers: Clearco, Capchase, Pipe, Arc
Structure: Advance 3-6 months of MRR. Repay with % of monthly revenue.
Cost: Typically 6-12% annualized.
Speed: 1-2 weeks to close.
When to use: Bridge to next ARR milestone before raising equity.
When NOT to use: When burn rate is structural (will consume the advance fast).
```
**Venture debt:**
```
Providers: SVB (now First Citizens), Western Technology Investment, Hercules, TriplePoint
Structure: Term loan, typically 3-6x monthly gross burn
Interest: Prime + 2-4% + warrants
When available: Post-Series A, when revenue is predictable
Typical timing: Add alongside an equity round (don't raise debt when you need equity)
Impact: Extends runway 3-6 months without dilution
When NOT to use: If you might trip financial covenants (minimum cash, revenue)
```
**Convertible bridge:**
```
Existing investors write bridge note: $500K-$2M at favorable terms.
Structure: Converts at discount (10-20%) or cap into next equity round.
When to use: You're 60-90 days from closing an equity round and need cash to get there.
When NOT to use: As a long-term strategy. Bridge-to-bridge is a death spiral.
```
### Tier 5: Structural Cost Reduction (Weeks + Impact on Morale)
**Salary deferrals (founders first):**
```
Founders take 20-30% salary reduction, accrued for future repayment.
Signals commitment to team and investors.
Only ask employees to follow if founders go first.
Always pay market rate to key non-founder employees — you can't afford to lose them.
```
**Reduction in force (RIF):**
```
Threshold: If burn multiple > 3x and growth < 20% YoY, a RIF is likely necessary.
Sizing: Model to achieve at least 12 months runway without fundraising.
Rule: Don't do a RIF twice. Size it right the first time.
Two small RIFs destroy morale worse than one decisive one.
Process: Legal counsel required. WARN Act (60-day notice) if > 100 employees.
Focus cuts: G&A and underperforming sales roles first. Protect engineering and key revenue.
```
---
## 5. When to Cut vs When to Invest
### The Framework
**Cut when:**
- Burn multiple > 2x and growth is decelerating
- Runway < 9 months with no fundraise imminent
- LTV:CAC declining for 3+ consecutive months
- Any spend category with no measurable return in 90 days
- Headcount in functions not directly tied to near-term revenue or product-market fit
**Invest when:**
- Magic number > 1 (every dollar in S&M returns > $1 in gross profit)
- LTV:CAC > 3x in a specific channel (pour money in)
- Gross margin > 70% (unit economics are healthy; growth is the constraint)
- Cohort data improving (retention getting better → LTV going up → invest in growth)
- CAC payback < 12 months (you get your money back fast enough to keep reinvesting)
### The False Economy Trap
**Don't cut:**
- Top-of-funnel demand gen that generates qualified pipeline (if CAC payback is < 12 months, this is your best investment)
- Engineering capacity on core product (technical debt compounds and slows you down permanently)
- Key account managers on your largest customers (churn from top customers is catastrophic)
**Cut these first:**
- Conference sponsorships with no measurable pipeline
- Tools and subscriptions with < 5 users or < 30% utilization
- Agency spend that could be done in-house
- Roadmap items that aren't tied to retention or expansion revenue
- Any G&A spend that isn't legally required
### Decision Triggers (Pre-Define These)
Don't make these decisions in a crisis. Define the triggers now:
```
At 12 months runway: Review all discretionary spend. Start fundraise process.
At 9 months runway: Implement hiring freeze. Fundraise is mandatory.
At 6 months runway: Cut non-essential spend 20%. If no fundraise term sheet, run RIF model.
At 4 months runway: Execute RIF. Explore all financing options. Notify board.
At 3 months runway: Emergency plan only. All options on table (bridge, strategic, wind down).
```
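The trigger ladder above can be encoded so every threshold at or above the current runway fires together. A minimal sketch (the function name is illustrative):

```python
def runway_actions(runway_months):
    """Return every pre-defined trigger action for the current runway."""
    triggers = [
        (12, "Review discretionary spend; start fundraise process"),
        (9,  "Hiring freeze; fundraise mandatory"),
        (6,  "Cut non-essential spend 20%; run RIF model if no term sheet"),
        (4,  "Execute RIF; explore all financing; notify board"),
        (3,  "Emergency plan only; all options on table"),
    ]
    return [action for threshold, action in triggers
            if runway_months <= threshold]

print(runway_actions(8))  # the 12-month and 9-month actions both fire
```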
---
## Key Formulas
```python
# Net burn
net_burn = gross_burn - revenue_collected
# Runway (months)
runway_months = cash_balance / net_burn
# Cash conversion cycle
ccc = days_sales_outstanding + days_inventory_held - days_payable_outstanding
# Lower CCC = better cash efficiency
# Days Sales Outstanding (DSO)
dso = (accounts_receivable / revenue) * 30 # monthly revenue
# Days Payable Outstanding (DPO)
dpo = (accounts_payable / cogs) * 30 # target: maximize this
# Working capital
working_capital = current_assets - current_liabilities
# Quick ratio (liquidity)
quick_ratio_liquidity = (cash + ar) / current_liabilities
# Target: > 1.5 (you can pay short-term obligations without selling assets)
# Free cash flow
fcf = operating_cash_flow - capex
```
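A runnable pass through these formulas with placeholder figures (illustrative numbers, not drawn from any model in this document):

```python
cash_balance = 3_000_000
gross_burn = 400_000           # monthly cash out
revenue_collected = 150_000    # monthly cash collected

net_burn = gross_burn - revenue_collected    # 250_000
runway_months = cash_balance / net_burn      # 12.0

accounts_receivable = 600_000
monthly_revenue = 300_000
dso = (accounts_receivable / monthly_revenue) * 30   # 60 days

current_liabilities = 1_200_000
quick_ratio_liquidity = (cash_balance + accounts_receivable) / current_liabilities  # 3.0

print(net_burn, runway_months, dso, quick_ratio_liquidity)
```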

# Financial Planning Reference
Startup financial modeling frameworks. Build models that drive decisions, not models that impress investors.
---
## 1. Startup Financial Modeling
### Bottoms-Up vs Top-Down
**Top-down model (don't use for operating):**
```
TAM = $10B
SOM = 1% = $100M
Revenue = $100M in year 5
```
This is marketing. You cannot manage a company against these numbers.
**Bottoms-up model (use this):**
```
Year 1 Revenue Build:
Sales headcount: 3 AEs by Q1, +2 in Q2, +3 in Q4
Ramp curve: Month 1-3 = 25%, Month 4-6 = 75%, Month 7+ = 100%
Quota per ramped AE: $600K ARR
Effective quota (weighted for ramp): $1.2M ARR in Year 1
Win rate: 25%
Average deal: $48K ACV
Pipeline needed: $1.2M / 25% = $4.8M ARR pipeline
Required meetings: $4.8M pipeline ÷ $48K per opportunity = 100 opportunities; at 50% meeting-to-opportunity conversion = ~200 meetings
```
Now you have something actionable. You know how many SDR calls, how many marketing leads, what conversion rate you need to hold. Every assumption is visible and challengeable.
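The pipeline arithmetic above, as a minimal sketch (function names are illustrative; the figures match the Year 1 example):

```python
def pipeline_needed(arr_target, win_rate):
    """Pipeline required to hit an ARR target at a given win rate."""
    return arr_target / win_rate

def meetings_needed(pipeline, avg_deal, meeting_to_opp_rate):
    """Meetings required to generate that pipeline: each opportunity is
    worth avg_deal, and a fraction of meetings become opportunities."""
    opportunities = pipeline / avg_deal
    return opportunities / meeting_to_opp_rate

pipe = pipeline_needed(1_200_000, 0.25)                  # $4.8M
print(pipe, round(meetings_needed(pipe, 48_000, 0.5)))   # ~200 meetings
```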
### Building the Operating Model
#### Revenue Engine
**New ARR Model (SaaS):**
```
Month N New ARR:
= Quota-carrying reps (fully ramped equivalent)
× Attainment rate (typically 70-80% of quota)
× Average deal size
+ PLG / self-serve (if applicable)
Quota-carrying reps (ramped equivalent):
= Sum(each rep × their ramp factor)
Ramp schedule:
Month 1-2: 0% (onboarding)
Month 3: 25%
Month 4-6: 50%
Month 7-9: 75%
Month 10+: 100%
```
**ARR Bridge (most important recurring visual):**
```
Beginning ARR
+ New ARR (new logos)
+ Expansion ARR (upsells, seat growth)
- Churned ARR (cancellations)
- Contraction ARR (downgrades)
= Ending ARR
Net ARR Added = New + Expansion - Churn - Contraction
Net Dollar Retention (NDR):
= (Beginning ARR + Expansion - Churn - Contraction) / Beginning ARR × 100
Target: > 110% for growth-stage SaaS
World-class: > 130% (Snowflake, Twilio-tier)
```
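The bridge and the NDR formula can be computed together. A sketch with placeholder figures:

```python
def arr_bridge(beginning, new, expansion, churn, contraction):
    """Return (ending_arr, ndr_pct) per the ARR bridge above.

    NDR deliberately excludes new-logo ARR: it measures the existing base.
    """
    ending = beginning + new + expansion - churn - contraction
    ndr = (beginning + expansion - churn - contraction) / beginning * 100
    return ending, ndr

ending, ndr = arr_bridge(5_000_000, 800_000, 600_000, 300_000, 100_000)
print(ending, round(ndr, 1))  # 6000000 104.0
```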
**MRR and ARR Relationship:**
```
ARR = MRR × 12 (simple, always use this)
Never mix monthly and annual contracts in MRR without normalization.
Annual contract booked = ACV / 12 = monthly contribution to ARR
Multi-year contracts: book each year at annual value (not multi-year total)
```
#### Headcount Model
Headcount is usually 60-80% of total costs. Model it carefully.
```
For each role:
- Start date
- Department
- Annual salary (from salary bands)
- Loaded cost (salary × 1.25-1.45 depending on benefits + recruiting method)
- Productive from (ramp period)
- Impact on revenue (for revenue-generating roles)
Total headcount cost = Σ (each FTE × loaded cost × months active / 12)
```
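A minimal sketch of the loaded-cost sum above (the load factor and salaries are placeholders; real models should track per-role start dates and ramp as listed):

```python
def monthly_headcount_cost(annual_salaries, load_factor=1.3):
    """Sum of (salary × load) / 12 over currently active FTEs.

    load_factor: 1.25-1.45 depending on benefits and recruiting method.
    """
    return sum(salary * load_factor for salary in annual_salaries) / 12

# Placeholder team: 2 engineers at $180K, 1 AE at $150K base.
print(round(monthly_headcount_cost([180_000, 180_000, 150_000])))
```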
**Department headcount ratios (Series A benchmarks):**
```
Sales (S&M): 20-30% of headcount
Engineering/Product (R&D): 40-50% of headcount
Customer Success: 15-20% of headcount
G&A: 10-15% of headcount
```
#### COGS Model
Gross margin is the most important long-term indicator of business quality.
**COGS for SaaS:**
```
1. Hosting / Infrastructure (AWS, GCP, Azure)
- Scale with customer count or usage
- Should be 5-15% of ARR for mature SaaS
- If > 20%: infrastructure optimization needed
2. Customer Success headcount
- Ratio: 1 CSM per $1M-$3M ARR (varies by segment)
- SMB: 1 CSM per $500K ARR (high-touch required)
- Enterprise: 1 CSM per $2-5M ARR (strategic accounts)
3. Third-party licensing / APIs
- Per-customer or usage-based pass-through costs
- Critical to model at scale (margin killer if not tracked)
4. Payment processing
- 2.2-2.9% of revenue for Stripe/Braintree
- Can negotiate to 1.8-2.2% at scale (> $5M ARR)
```
**Gross Margin targets:**
```
SaaS: > 65% acceptable, > 75% good, > 80% exceptional
Marketplace: 50-70%
Hardware + software: 40-60%
Services + software: 30-50%
```
**If gross margin < 65%:**
- Infrastructure cost optimization (rightsizing, reserved instances)
- CS headcount review (automation, pooled CSMs)
- Pricing model review (usage-based pricing if cost is usage-driven)
- Third-party cost renegotiation
#### Opex Model
```
Sales & Marketing:
- AE/SDR/SE salaries + OTE (on-target earnings)
- Marketing programs (demand gen budget)
- Tools and technology (CRM, SEO, ads platforms)
- Events and travel
- Benchmark: 40-60% of revenue at growth stage, targeting < 30% at scale
Research & Development:
- Engineering salaries
- Product management
- Design
- Technical infrastructure for development
- Benchmark: 20-35% of revenue
General & Administrative:
- Finance, legal, HR, admin
- Office costs
- SaaS tools / software licenses
- D&O insurance
- Benchmark: 8-15% (target < 10% at scale)
```
### Financial Model Do's and Don'ts
| Do | Don't |
|----|-------|
| Build assumptions tab with all inputs | Hardcode numbers in formulas |
| Model monthly (not quarterly) at early stage | Use annual model for first 3 years |
| Start with headcount plan, build costs from it | Guess at expense line items |
| Show model to actual customers or users | Show model to investors before internal stress-test |
| Version your model | Overwrite old versions |
| Reconcile cash flow to P&L monthly | Trust P&L without cash flow model |
| Include a sensitivity table | Present single-scenario forecast |
---
## 2. Three-Statement Model for Startups
### Why All Three Matter
The P&L tells you if you're profitable. The cash flow statement tells you if you're alive. The balance sheet tells you if you're solvent.
Startups that only track P&L miss the gap between revenue recognition and cash collection.
### P&L Structure
```
Q1 Q2 Q3 Q4 FY
Revenue
Subscription ARR $400K $520K $680K $840K $2,440K
Professional Svcs $40K $50K $60K $65K $215K
Total Revenue $440K $570K $740K $905K $2,655K
COGS
Infrastructure $35K $42K $52K $62K $191K
CS Headcount $75K $75K $100K $100K $350K
3rd Party Licensing $15K $18K $22K $28K $83K
Total COGS $125K $135K $174K $190K $624K
Gross Profit $315K $435K $566K $715K $2,031K
Gross Margin 71.6% 76.3% 76.5% 79.0% 76.5%
Operating Expenses
Sales & Marketing $380K $420K $480K $520K $1,800K
Research & Dev $320K $340K $380K $400K $1,440K
General & Admin $120K $130K $140K $150K $540K
Total Opex $820K $890K $1,000K $1,070K $3,780K
EBITDA ($505K) ($455K) ($434K) ($355K) ($1,749K)
EBITDA Margin (114.8%) (79.8%) (58.6%) (39.2%) (65.9%)
```
### Cash Flow Statement
```
Q1 Q2 Q3 Q4
Operating Activities
Net Income ($510K) ($460K) ($440K) ($360K)
Add: D&A $8K $8K $8K $10K
Working Capital Changes:
AR increase ($45K) ($50K) ($60K) ($55K)
AP increase $20K $15K $20K $15K
Deferred Rev change $80K $60K $80K $90K
Operating Cash Flow ($447K) ($427K) ($392K) ($300K)
Investing Activities
Capex ($15K) ($8K) ($10K) ($12K)
Free Cash Flow ($462K) ($435K) ($402K) ($312K)
Financing Activities
None $0 $0 $0 $0
Net Change in Cash ($462K) ($435K) ($402K) ($312K)
Beginning Cash $3,500K $3,038K $2,603K $2,201K
Ending Cash $3,038K $2,603K $2,201K $1,889K
Runway (months) 13.1 12.1 10.9 10.1
```
**Key insight from this model:**
The deferred revenue offset (customers paying annually upfront) is reducing cash burn by ~$80-90K/quarter versus a pure monthly billing model. This is the CFO's lever — push for annual billing.
### Balance Sheet: The Startup Version
At early stage, track these specifically:
```
Assets:
Cash: Your lifeline. Monitor daily.
Accounts Receivable: What customers owe you. Age it monthly.
Prepaid Expenses: Software licenses, insurance paid upfront.
Liabilities:
Accounts Payable: What you owe vendors. Maximize terms.
Accrued Liabilities: Salaries owed, commissions earned but not paid.
Deferred Revenue: Customer prepayments. Liability until service delivered, but cash is yours.
Debt/Convertible Notes: Face value + interest accrual.
Equity:
Common Stock: Founder shares
Preferred Stock: Investor shares
APIC: Additional paid-in capital
Accumulated Deficit: Your running losses (expected for startups)
```
---
## 3. SaaS Metrics That Matter
### The Hierarchy of SaaS Metrics
```
Tier 1 (existential): ARR, Runway, Net Dollar Retention
Tier 2 (strategic): Gross Margin, Burn Multiple, LTV:CAC
Tier 3 (operational): CAC Payback, Churn Rate, ACV
Tier 4 (diagnostic): Logo Churn vs Revenue Churn, Expansion Rate, NPS
```
Never report Tier 4 metrics to your board if Tier 1 metrics are off-track.
### Core Metric Definitions
**ARR (Annual Recurring Revenue):**
```
ARR = Sum of all active annual contract values (normalized to annual)
What it is NOT: bookings, billings, or TCV
When to use MRR: Companies with mostly monthly contracts
When to use ARR: Companies with majority annual contracts
```
**Net Dollar Retention (NDR / NRR):**
```
NDR = (Beginning MRR + Expansion MRR - Churned MRR - Contraction MRR)
/ Beginning MRR × 100
The benchmark everyone quotes: 100% means existing customers are flat.
> 100% means existing customers grow revenue on their own.
World-class (Snowflake, Datadog): 130%+
Why it matters: NDR > 100% means revenue growth even if you sign zero new customers.
At NDR = 120% and $5M ARR: you will reach ~$7.2M ARR in 24 months without a single new sale.
```
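The compounding claim above can be checked directly. A sketch that assumes NDR holds constant over the period:

```python
def arr_after_months(arr, ndr_pct, months):
    """Compound existing-customer ARR at a constant annual NDR, no new logos."""
    return arr * (ndr_pct / 100) ** (months / 12)

print(round(arr_after_months(5_000_000, 120, 24)))  # ~7.2M
```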
**Gross Revenue Retention (GRR):**
```
GRR = (Beginning MRR - Churned MRR - Contraction MRR) / Beginning MRR × 100
GRR measures the floor of your retention (ignoring expansion).
GRR is always ≤ NDR.
Target: > 85% for SMB SaaS, > 90% for mid-market, > 95% for enterprise.
```
**Logo Churn vs Revenue Churn:**
```
Logo churn: % of customers who cancel (ignores size)
Revenue churn: % of ARR that cancels (accounts for size)
Why the distinction matters:
You could have 10% logo churn but 3% revenue churn (churning small customers)
Or 5% logo churn but 12% revenue churn (churning large customers) — much worse
Report both. If they diverge significantly, investigate immediately.
```
**ACV (Annual Contract Value):**
```
ACV = Total contract value / contract term in years
Not to be confused with ARR (which only counts recurring, not one-time fees)
Rising ACV: You're moving upmarket (good for efficiency, check if ICP is changing)
Falling ACV: You're moving downmarket (check burn multiple — may not be economic)
```
**Rule of 40:**
```
Rule of 40 = Revenue Growth Rate % + EBITDA Margin %
Target: > 40%
Example: 60% growth + (-15%) EBITDA margin = 45. Passing.
Example: 20% growth + 5% EBITDA margin = 25. Failing at growth stage.
At early stage (< $5M ARR): Rule of 40 doesn't apply. Growth is the only metric.
At growth stage ($5-20M ARR): Starting to matter.
At scale ($20M+ ARR): Board and investors will hold you to this.
```
---
## 4. FP&A for Startups: What to Measure When
### Metrics by Stage
**Pre-seed / Seed (< $1M ARR):**
```
Focus on: Cash, pipeline, customer conversations
Measure: Monthly cash burn, weeks of runway, NPS / customer satisfaction
Don't obsess over: EBITDA margin, gross margin (too early)
Frequency: Weekly cash check, monthly everything else
```
**Series A ($1-5M ARR):**
```
Focus on: Repeatable sales, unit economics
Measure: MRR growth, LTV:CAC, CAC payback by channel, gross margin
Don't obsess over: Profitability, G&A efficiency
Build now: Monthly financial close (< 5 business days), basic FP&A model
Frequency: Monthly board pack, weekly leadership metrics
```
**Series B ($5-20M ARR):**
```
Focus on: Scalable go-to-market, operational efficiency
Measure: NDR, burn multiple, revenue per FTE, OKR attainment
Start building: Budget vs actuals, department-level P&L
Build now: Finance team (first financial controller), ERP or NetSuite
Frequency: Monthly board pack + quarterly deep dive
```
**Series C+ ($20M+ ARR):**
```
Focus on: Path to profitability, market leadership
Measure: Rule of 40, free cash flow, CAC efficiency by segment
Must have: FP&A team, full three-statement model, 5-year plan
Frequency: Monthly financial close (< 3 business days), quarterly earnings prep
```
### Reporting Cadence
**Weekly (CFO + leadership):**
- Cash balance (CFO checks daily, reports weekly)
- Pipeline / sales metrics (if in a sales-led motion)
- Any metric that changed dramatically vs. prior week
**Monthly (board + leadership):**
- Full financial dashboard (ARR, gross margin, burn, runway)
- Budget vs actual with explanations for > 10% variances
- Unit economics update
- Headcount change summary
**Quarterly (board + investors):**
- Full three-statement model vs budget
- Cohort analysis update
- Scenario planning review and trigger assessment
- Next quarter outlook
---
## 5. Budget vs Actual Analysis Framework
### The Purpose of BvA
Budget vs actual is not about being right. It's about understanding *why* you were wrong, so you can make better decisions.
The CFO who reports "we missed budget by 15%" without explanation is failing. The CFO who says "we missed budget by 15% because enterprise deals took 30 more days to close than modeled — here's what we're doing about it" is doing their job.
### BvA Template
```
Category Budget Actual $ Var % Var Explanation
-------------------------------------------------------------------
ARR $2,400K $2,280K ($120K) (5%) 2 enterprise deals slipped to Q1
New ARR $400K $350K ($50K) (13%) Above
Expansion ARR $120K $140K $20K 17% PLG motion outperforming
Churn ($60K) ($80K) ($20K) (33%) 2 unexpected SMB churns (now fixed)
Gross Margin 75.0% 73.2% -1.8% n/a Infrastructure over-provisioned
S&M Spend $820K $840K ($20K) (2%) Within tolerance
R&D Spend $680K $710K ($30K) (4%) Backfill hire started month early
G&A Spend $140K $148K ($8K) (6%) Legal fees for new customer contract
Cash Burn (net) $580K $648K ($68K) (12%) Driven by ARR shortfall + costs
Runway (mo) 14.5 13.0 (1.5) n/a Tracking; fundraise target unchanged
```
### Variance Thresholds
```
< ±5%: Note in appendix, no explanation needed in main pack
5-10%: One-line explanation required
> 10%: Full paragraph: what happened, why, what changes
> 20%: Board conversation required (model assumption was wrong, or unexpected event)
```
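The thresholds above can be applied mechanically when assembling the board pack. A minimal sketch (the function name and example figures are illustrative):

```python
def variance_flag(budget, actual):
    """Classify a BvA line per the variance thresholds above.

    Returns (abs variance %, required treatment).
    """
    pct = abs(actual - budget) / abs(budget) * 100
    if pct < 5:
        flag = "appendix note"
    elif pct <= 10:
        flag = "one-line explanation"
    elif pct <= 20:
        flag = "full paragraph"
    else:
        flag = "board conversation"
    return round(pct, 1), flag

print(variance_flag(2_400_000, 2_280_000))  # (5.0, 'one-line explanation')
```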
### Forecasting vs Budgeting
**Budget:** Set at start of year. Fixed expectation. Updated quarterly.
**Forecast:** Rolling 3-month outlook. Updated monthly. Should converge with budget over time.
```
Common mistake: Treating forecast as wishful thinking ("what we hope happens")
Correct approach: Forecast is your best current estimate given all known information.
If forecast diverges from budget by > 15%, the budget is wrong.
Reforecast and communicate to board.
```
**Rolling forecast (recommended for startups):**
```
Always have a 12-month forward model.
Update it monthly with actuals replacing the first month.
The forecast should always reflect your current operational reality, not your hope.
```
---
## Key Formulas Reference
```python
# ARR and growth
ARR_growth_yoy = (ending_ARR - beginning_ARR) / beginning_ARR
# Net Dollar Retention
NDR = (beginning_MRR + expansion_MRR - churn_MRR - contraction_MRR) / beginning_MRR
# Burn Multiple
burn_multiple = net_cash_burn / net_new_ARR
# Rule of 40
rule_of_40 = revenue_growth_pct + ebitda_margin_pct
# LTV (SaaS)
LTV = (ARPA * gross_margin_pct) / monthly_churn_rate
# CAC Payback (months)
cac_payback = CAC / (ARPA * gross_margin_pct)
# Magic Number (sales efficiency)
magic_number = (net_new_ARR * 4) / prior_quarter_S_and_M_spend  # quarterly net-new ARR, annualized
# Gross margin
gross_margin = (revenue - COGS) / revenue
# Quick Ratio (growth efficiency)
quick_ratio = (new_MRR + expansion_MRR) / (churned_MRR + contraction_MRR)
# Target: > 4 for high-growth SaaS
```

# Fundraising Playbook
From timing to close. What investors actually look for, how valuation works, and the term sheet clauses that matter.
---
## 1. When to Raise
**Optimal timing:**
```
Target: 18-24 months runway post-close
Minimum: 12 months runway post-close (leaves no buffer for slip)
Start process when: 9-12 months runway remaining
→ 3-6 months for process (typically 4-5 months for Series A/B)
→ Leaves 3-6 months buffer if process drags
Never start when: < 6 months runway
→ You're negotiating from desperation
→ Investors can smell it
→ Terms get worse, or you don't close at all
```
**Rule:** Your leverage is maximum when you don't *need* to raise. Raise from a position of momentum, not necessity.
---
## 2. What Investors Look For at Each Stage
### Pre-seed
- Team (are these people credible for this problem?)
- Problem clarity (is the problem real and meaningful?)
- Early signal (any customers paying, waitlist, prototype)
- Market size (worth building a VC-scale company?)
**Typical ask:** $500K-$2M | **Typical valuation:** $3M-$10M pre-money
### Seed
- Product-market signal (customers using and paying)
- Founding team with domain expertise
- ARR: $100K-$1M (or strong usage for PLG)
- Clear hypothesis for what Series A looks like
**Typical ask:** $2M-$5M | **Typical valuation:** $8M-$20M pre-money
### Series A
Investors are buying a *repeatable sales motion*. Not just customers — a machine.
**What they need to see:**
- ARR: $1M-$5M growing > 100% YoY
- LTV:CAC > 2.5x (and improving)
- Net Dollar Retention > 100%
- CAC Payback < 18 months
- Gross margin > 65%
- At least 5-10 reference customers (not just lighthouse)
- Sales motion that converts without the founder closing every deal
**Typical ask:** $8M-$15M | **Typical valuation:** $25M-$60M pre-money
### Series B
Investors are buying *scalable go-to-market*. Can you pour fuel on the fire?
**What they need to see:**
- ARR: $5M-$20M growing > 100% YoY
- LTV:CAC > 3x, CAC Payback < 18 months
- Sales capacity model (hiring plan → pipeline → revenue)
- NDR > 110% (expansion motion working)
- Some proof of market expansion (new segments, geographies, use cases)
- Path to category leadership
**Typical ask:** $15M-$40M | **Typical valuation:** $60M-$200M pre-money
### Series C and Beyond
Investors are buying *market leadership* and *path to profitability*.
**What they need to see:**
- ARR: $20M+ (often $30-50M for credible Series C)
- Rule of 40 > 40 (or credible path)
- Gross margin > 70%
- NDR > 115%
- Evidence of market leadership (brand, win rates, analyst mentions)
- Clear path to $100M+ ARR
---
## 3. Valuation Methods
### Revenue Multiples (Primary Method for SaaS)
```
Pre-money Valuation = ARR × Revenue Multiple
Revenue multiple benchmarks (2024-2025):
> 100% YoY growth: 8x-15x ARR
50-100% YoY growth: 4x-8x ARR
20-50% YoY growth: 2x-4x ARR
< 20% YoY growth: 1x-2x ARR
Adjustments:
NDR > 120%: +1x-2x premium
Gross margin > 75%: +0.5x-1x premium
Burn multiple < 1x: +0.5x-1x premium
Capital efficient: Investors pay up for efficiency
Declining growth: Compress multiple aggressively
```
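One way to encode those bands (the midpoints and premium sizes below are our illustrative reading of the table, not a pricing model):

```python
def estimate_valuation(arr, growth_yoy, ndr=1.0, gross_margin=0.7, burn_multiple=2.0):
    """Rough pre-money estimate from the multiple bands above (band midpoints)."""
    if growth_yoy > 1.0:
        multiple = 11.5        # midpoint of 8x-15x
    elif growth_yoy >= 0.5:
        multiple = 6.0         # midpoint of 4x-8x
    elif growth_yoy >= 0.2:
        multiple = 3.0         # midpoint of 2x-4x
    else:
        multiple = 1.5         # midpoint of 1x-2x
    if ndr > 1.20:
        multiple += 1.5        # NDR premium (+1x-2x)
    if gross_margin > 0.75:
        multiple += 0.75       # margin premium (+0.5x-1x)
    if burn_multiple < 1.0:
        multiple += 0.75       # efficiency premium (+0.5x-1x)
    return arr * multiple

# $4M ARR, 120% YoY growth, 125% NDR: (11.5 + 1.5) x $4M = $52M pre-money
print(estimate_valuation(4_000_000, 1.2, ndr=1.25))
```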
### The Investor's Math (Know This)
Every VC has a required return. Work backwards from their constraints:
```
Investor targets: 3x fund return
Fund size: $200M, check size: $15M (initial), $25M (with follow-on)
Ownership at exit needed: 15%
At 15% ownership: $25M / 15% = $167M post-money valuation
Winners must return ~10x the check (most deals fail; winners carry the fund):
Proceeds needed: $25M × 10 = $250M
Exit value at 15% ownership: $250M / 15% ≈ $1.7B (higher still after later dilution)
Implication: If you think you'll exit for $150M, that VC will pass or price you accordingly.
```
This is why Series A investors rarely lead rounds where they can't see a $300M+ exit path. It's not about your business being bad — it's about fund math.
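The same backwards math in code (a sketch; the 10x winner multiple is the usual fund-math assumption, not a fact about any specific investor):

```python
def fund_math(check, target_ownership, winner_moic=10.0):
    """Back out the post-money a check implies at a target ownership,
    and the proceeds the fund needs from this position if it's a winner."""
    post_money = check / target_ownership
    proceeds_needed = check * winner_moic
    return post_money, proceeds_needed

pm, need = fund_math(25e6, 0.15)
# ~$167M post-money implied; the fund needs $250M back from this position
```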
### Comparable Company Analysis
For later stages (Series B+):
```
1. Find 5-10 comparable public SaaS companies
2. Calculate their EV/NTM Revenue multiples (use latest data)
3. Apply a private market discount (typically 20-40% vs public comps)
4. Adjust for your growth rate relative to comps
Example (2024):
Public SaaS comps: 6x NTM Revenue (median)
Private discount: 30%
Adjusted: ~4.2x
Your NTM Revenue: $8M
Implied valuation: ~$33M pre-money
```
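The four steps above, sketched (the comp set and 30% discount are assumptions to be refreshed with current data):

```python
def comps_valuation(ntm_revenue, comp_multiples, private_discount=0.30):
    """Median public-comp EV/NTM multiple, haircut by a private-market discount."""
    s = sorted(comp_multiples)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return ntm_revenue * median * (1 - private_discount)

# Mirrors the worked example: 6x median, 30% discount, $8M NTM -> ~$33.6M
print(comps_valuation(8_000_000, [4.5, 5.8, 6.0, 6.2, 9.1]))
```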
### DCF (Late Stage Only)
DCF is unreliable for early-stage startups (terminal value dominates, growth rate assumptions are fantasy). Use it as a sanity check at Series C+, not as the primary valuation method.
---
## 4. Term Sheet Breakdown
### Liquidation Preference (Most Important Economic Term)
This determines who gets paid first in an exit — and how much.
```
1x Non-Participating Preferred (BEST for founders):
Investor gets 1x money back OR converts to common (their choice).
At acquisition: investor takes larger of {1x invested} or {% ownership × proceeds}
Example: $10M invested, exits at $100M, owns 20%
Option A: $10M (1x)
Option B: $20M (20% of $100M)
Investor takes $20M. Founders split $80M.
1x Participating Preferred (WORSE for founders):
Investor gets 1x money back AND participates in remaining proceeds.
Example: same scenario
$10M (1x) + 20% of remaining $90M = $10M + $18M = $28M
Founders split $72M instead of $80M
Cost to founders: $8M (10% of exit value)
2x Participating (RED FLAG):
Investor gets 2x back AND participates.
Only accept under duress. Push hard against this.
Full Ratchet Anti-Dilution (AVOID):
Down-round triggers full repricing of investor shares to new (lower) price.
Founders get massively diluted. Never accept if alternatives exist.
```
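The economics of the two structures, as a minimal sketch (single series, no participation cap or stacked preferences modeled):

```python
def investor_proceeds(exit_value, invested, ownership, participating=False, pref_multiple=1.0):
    """Investor proceeds at exit under the preference structures above."""
    preference = invested * pref_multiple
    if participating:
        # preference first, then pro-rata share of what's left
        return preference + ownership * (exit_value - preference)
    # non-participating: take the larger of preference or conversion to common
    return max(preference, ownership * exit_value)

# $10M invested, 20% owned, $100M exit (the worked example above):
non_part = investor_proceeds(100e6, 10e6, 0.20)                    # $20M
part = investor_proceeds(100e6, 10e6, 0.20, participating=True)    # $28M
```

At low exits the preference dominates either way; the structures diverge exactly when the company does well.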
### Anti-Dilution Protection
```
Broad-based weighted average (standard):
Adjusts investor conversion price based on all dilutive securities.
Most founder-friendly anti-dilution. Accept this.
Narrow-based weighted average (slightly worse):
Same mechanism but uses smaller denominator.
Gives investors slightly more protection. Usually acceptable.
Full ratchet (avoid):
Price drops to whatever the new round prices at.
Devastating in down rounds. Fight this.
```
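The broad-based weighted average adjustment uses the standard formula NCP = OCP × (A + B) / (A + C) — not spelled out in the text above, so treat this as a sketch of the usual NVCA-style mechanics:

```python
def weighted_avg_new_price(old_price, shares_outstanding, new_money, new_price):
    """Broad-based weighted average conversion price after a down round.
    A = fully diluted shares pre-round (broad-based),
    B = shares the new money would have bought at the OLD price,
    C = shares actually issued at the new (lower) price."""
    A = shares_outstanding
    B = new_money / old_price
    C = new_money / new_price
    return old_price * (A + B) / (A + C)

# 10M shares, $2.00 prior price, $2M raised at $1.00 in a down round:
print(round(weighted_avg_new_price(2.00, 10_000_000, 2_000_000, 1.00), 4))
# full ratchet would reprice straight to $1.00; weighted average lands ~$1.83
```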
### Pro-Rata Rights
```
Standard pro-rata: Investor can maintain their % ownership in future rounds.
Reasonable. Accept for major investors.
Super pro-rata: Investor can increase their % in future rounds.
Caps your ability to bring in new lead investors.
Avoid unless the investor is exceptional and you want them in future rounds.
Major investor threshold: Typically investors with > $500K-$1M check get pro-rata.
Don't give pro-rata to every small check — clogs future rounds.
```
### Board Composition
```
Seed (3 members): 2 founders, 1 lead investor
Series A (5 members): 2 founders, 2 investors, 1 independent
Series B (5-7 seats): Watch for investor majority — negotiate hard
Rule: Founders should retain majority through Series A.
Independent director should be your choice, not investor's.
Never accept investor majority before Series C.
Board observer rights: Common for smaller investors. No vote but present in meetings.
Limit to 1-2 observers or meetings become unwieldy.
```
### Other Terms That Matter
```
Drag-along: Majority can force minority shareholders to vote for acquisition.
Standard and reasonable. Check what threshold triggers drag.
Information rights: Investors get financial statements.
Standard. Monthly for major investors, quarterly for others.
Redemption rights: Investors can force buyback after X years.
Push to remove or add carve-outs for insufficient funds.
No-shop clause: You can't shop the term sheet to other investors.
Standard (14-30 days). Reasonable.
Exclusivity: Stronger version of no-shop. Sometimes includes no other fundraise discussions.
Acceptable for 30 days; push back on > 45 days.
```
---
## 5. Cap Table Management
### Dilution Planning Model
Run this before every round. Know your number before walking into any negotiation.
```
Pre-Seed Post-Seed Post-A Post-B Post-C
Founder A 45.0% 36.0% 26.5% 21.2% 18.7%
Founder B 45.0% 36.0% 26.5% 21.2% 18.7%
Angel 1 5.0% 4.0% 2.9% 2.4% 2.1%
Angel 2 5.0% 4.0% 2.9% 2.4% 2.1%
Seed Fund - 12.0% 8.8% 7.1% 6.2%
Option Pool - 8.0% 12.0% 10.0% 8.0%
Series A - - 20.4% 16.3% 14.4%
Series B - - - 19.5% 17.2%
Series C - - - - 12.6%
Round size / pre-money:
Pre-Seed: $500K / $9M pre = 5% dilution
Seed: $2M / $8M pre = 20% dilution (includes 8% pool)
Series A: $10M / $38M pre = 20.8% dilution (pool refresh to 12%)
Series B: $20M / $80M pre = 20% dilution
Series C: $30M / $170M pre = 15% dilution
```
**Option pool shuffle:** Investors often require you to create/expand the option pool *before* the round closes, which dilutes existing shareholders (not the incoming investor). Model this explicitly — a 20% round with a 5% pool expansion is really 24%+ dilution to founders.
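The shuffle arithmetic, as a simplified value-based sketch (share mechanics omitted; numbers illustrative):

```python
def pool_shuffle_dilution(pre_money, investment, target_pool_pct):
    """The pool is sized as a % of post-money but carved out of the
    pre-money, so existing holders absorb it on top of the round itself."""
    post_money = pre_money + investment
    round_dilution = investment / post_money
    pool_value = target_pool_pct * post_money   # pool sized off post-money
    effective_pre = pre_money - pool_value      # founders are priced off this
    founder_dilution = 1 - effective_pre / post_money
    return round_dilution, founder_dilution

# $40M pre, $10M round, 5% pool: headline 20% round, 25% real dilution
rd, fd = pool_shuffle_dilution(40e6, 10e6, 0.05)
```

Negotiating the pool size down (or post-round) is one of the highest-leverage term sheet moves.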
### Cap Table Hygiene
```
Tools: Carta, Pulley, or a similar dedicated equity platform
Never: Track cap table in a spreadsheet past seed stage. Errors compound.
Keep it clean:
- Repurchase departed co-founder shares immediately (don't let unvested shares linger)
- Convert SAFEs to equity cleanly at each priced round
- Document every grant with a board resolution
- Cliff + vesting for ALL employees and founders (standard: 1-year cliff, 4-year vest)
- Current 409A valuation required when granting options — refresh at least every 12 months or after a material event (IRS requirement)
```
---
## 6. Data Room Preparation
### Core Documents (Required)
```
Financial:
□ 3 years historical financials (or all history if < 3 years)
□ Monthly P&L and cash flow (last 24 months)
□ Current financial model (18-24 months forward)
□ Budget vs actual (last 4 quarters)
□ Cap table (fully diluted, with all SAFEs/convertibles modeled)
□ Bank statements (last 3-6 months)
Legal:
□ Certificate of incorporation + all amendments
□ All prior financing documents (SAFEs, convertible notes, stock purchase agreements)
□ Cap table (Carta/Pulley export)
□ IP assignment agreements (all founders and employees)
□ Material contracts (top 10 customers, key vendors)
□ Employee list (titles, start dates, salaries, equity grants)
Product & Business:
□ Product demo / walkthrough video
□ Architecture overview (for technical investors)
□ Customer case studies (3-5 named references)
□ NPS / CSAT data
□ Competitive landscape analysis
Metrics:
□ MRR/ARR by month (all history)
□ Cohort retention chart
□ CAC by channel
□ LTV by cohort
□ NPS trend
```
### What Investors Actually Check First
In order of typical priority during due diligence:
1. **Cap table** — Is it clean? Any concerning structures?
2. **Cohort retention** — Is churn improving or deteriorating?
3. **Revenue quality** — What % is recurring? Any one-time or non-recurring?
4. **Top 10 customers** — Concentration risk? Any logos at risk?
5. **Bank statements** — Does cash match what was reported?
6. **IP assignments** — Does the company own its IP? (Founders who didn't assign IP kill deals)
### Red Flags That Kill Deals
- Missing IP assignment agreements for founders (most common deal killer at early stage)
- Cap table with > 20 angels/small investors (messy, hard to get consent for future rounds)
- Customer concentration > 30% in single customer without explanation
- Revenue recognition issues (booking ARR on contracts that allow easy cancellation)
- Cohort data that gets worse in later cohorts
- Bank balance doesn't match reported cash position
---
## 7. Investor Communication Cadence
### During Fundraise
```
Week 1-2: Warm intro sourcing, LP/network mapping
Week 3-6: First meetings (aim for 20-30 first meetings)
Week 7-10: Partner meetings, deep dives, due diligence
Week 11-14: Term sheets, negotiation
Week 15-18: Legal, closing
```
**Parallel process is essential.** Never negotiate with one investor at a time. Competition is your leverage.
### Post-Close: Investor Updates
Monthly investor update (send within 10 days of month-end):
```
Subject: [Company] Monthly Update — [Month Year]
Highlights (3 bullets max):
• [Biggest win]
• [Biggest learning/challenge]
• [What we're focused on next month]
Metrics:
ARR: $X (+X% MoM)
Net new ARR: $X
Gross margin: X%
Cash: $X (X months runway)
Headcount: X
Asks (be specific):
• Looking for intro to [persona/company] for [specific reason]
• Need advisor with experience in [specific area]
• [Other concrete ask]
```
**Why this matters:** Investors who are informed and engaged are better positioned to help when you need it. The investor who hasn't heard from you in 6 months is less likely to write a bridge check or make a warm intro when you ask.
---
## Key Formulas
```python
# Post-money valuation
post_money = pre_money + investment_amount
# Investor ownership %
ownership_pct = investment_amount / post_money
# Dilution to existing shareholders
dilution = investment_amount / post_money # as a fraction
# New shares issued
new_shares = (investment_amount / post_money) * total_post_shares
# equivalent: new_shares = pre_money_shares * (investment_amount / pre_money)
# Option pool expansion impact (pool shuffle)
# Creating X% option pool pre-close dilutes founders:
pool_shares_needed = target_pct * (pre_shares + new_round_shares + pool_shares_needed)
# Solve: pool_shares_needed = target_pct * (pre_shares + new_round_shares) / (1 - target_pct)
# LTV:CAC ratio
ltv_cac = ltv / cac # target: > 3x
# CAC payback (months)
payback_months = cac / (arpa * gross_margin_pct)
```


@@ -0,0 +1,402 @@
#!/usr/bin/env python3
"""
Burn Rate & Runway Calculator
==============================
Models startup runway across base/bull/bear scenarios, incorporating
a hiring plan and revenue trajectory. Outputs months of runway,
cash-out dates, and decision trigger points.
Usage:
python burn_rate_calculator.py
python burn_rate_calculator.py --csv # export to CSV
Stdlib only. No dependencies.
"""
import argparse
import csv
import io
import sys
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class HiringEntry:
"""A planned hire."""
month: int # months from model start (1-indexed)
role: str
department: str # "sales", "engineering", "cs", "ga"
annual_salary: float
benefits_pct: float = 0.22 # benefits as % of salary
recruiting_cost: float = 0.0 # one-time recruiting fee
@dataclass
class RevenueEntry:
"""Monthly revenue data point (historical or projected)."""
month: int
mrr: float # monthly recurring revenue
one_time: float = 0.0
@dataclass
class ModelConfig:
"""Master configuration for a runway scenario."""
name: str
starting_cash: float
starting_mrr: float
starting_headcount: int
avg_loaded_salary: float # average fully-loaded salary per current employee
base_non_headcount_opex: float # monthly non-headcount costs (infra, tools, etc.)
    gross_margin_pct: float  # 0.0-1.0
    mrr_growth_rate: float  # monthly MoM growth rate, 0.0-1.0
hiring_plan: list[HiringEntry] = field(default_factory=list)
model_months: int = 24
start_date: Optional[date] = None
@dataclass
class MonthResult:
"""Single month output."""
month: int
label: str # e.g. "Month 1 (Apr 2025)"
mrr: float
gross_profit: float
headcount: int
headcount_cost: float # total loaded headcount cost this month
other_opex: float
gross_burn: float
net_burn: float
cash_start: float
cash_end: float
runway_months: float # projected runway from this month
cumulative_new_arr: float # for burn multiple
# ---------------------------------------------------------------------------
# Core calculator
# ---------------------------------------------------------------------------
class RunwayCalculator:
def __init__(self, config: ModelConfig):
self.cfg = config
def run(self) -> list[MonthResult]:
cfg = self.cfg
results = []
# Build headcount schedule: month -> list of new hires starting that month
hire_by_month: dict[int, list[HiringEntry]] = {}
for h in cfg.hiring_plan:
hire_by_month.setdefault(h.month, []).append(h)
# Track existing employees
active_employees: list[dict] = []
for _ in range(cfg.starting_headcount):
active_employees.append({
                "monthly_loaded": cfg.avg_loaded_salary / 12,
"start_month": 0,
})
cash = cfg.starting_cash
mrr = cfg.starting_mrr
cumulative_new_arr = 0.0
starting_mrr = cfg.starting_mrr
for m in range(1, cfg.model_months + 1):
# Process new hires this month
one_time_recruiting = 0.0
if m in hire_by_month:
for hire in hire_by_month[m]:
monthly_loaded = (
hire.annual_salary * (1 + hire.benefits_pct) / 12
)
active_employees.append({
"monthly_loaded": monthly_loaded,
"start_month": m,
})
one_time_recruiting += hire.recruiting_cost
# Revenue this month
mrr = mrr * (1 + cfg.mrr_growth_rate)
gross_profit = mrr * cfg.gross_margin_pct
# Headcount cost
headcount_cost = sum(e["monthly_loaded"] for e in active_employees)
headcount_cost += one_time_recruiting
# Other opex (infra, SaaS tools, office, etc.)
other_opex = cfg.base_non_headcount_opex
# Burn
gross_burn = headcount_cost + other_opex
net_burn = gross_burn - gross_profit
# Cash
cash_start = cash
cash = cash - net_burn
cash_end = cash
# Projected runway from this month (using current net burn rate)
runway = cash_end / net_burn if net_burn > 0 else float("inf")
# Cumulative new ARR (for burn multiple calc)
new_mrr_added = mrr - starting_mrr if m == 1 else mrr - results[-1].mrr
cumulative_new_arr += new_mrr_added * 12
# Label
            if cfg.start_date:
                # Pure month arithmetic: adding fixed 32-day offsets drifts
                # by ~1.5 days per month and mislabels later months.
                total_months = cfg.start_date.month - 1 + (m - 1)
                month_date = date(
                    cfg.start_date.year + total_months // 12,
                    total_months % 12 + 1,
                    1,
                )
label = f"Month {m:02d} ({month_date.strftime('%b %Y')})"
else:
label = f"Month {m:02d}"
results.append(MonthResult(
month=m,
label=label,
mrr=mrr,
gross_profit=gross_profit,
headcount=len(active_employees),
headcount_cost=headcount_cost,
other_opex=other_opex,
gross_burn=gross_burn,
net_burn=net_burn,
cash_start=cash_start,
cash_end=cash_end,
runway_months=runway,
cumulative_new_arr=cumulative_new_arr,
))
# Stop if cash runs out
if cash_end <= 0:
break
return results
def cash_out_date(self, results: list[MonthResult]) -> Optional[str]:
"""Return the label of the month cash runs out, or None if model survives."""
for r in results:
if r.cash_end <= 0:
return r.label
return None
def burn_multiple(self, results: list[MonthResult]) -> float:
"""Burn multiple = total net burn / total net new ARR over model period."""
total_net_burn = sum(r.net_burn for r in results if r.net_burn > 0)
first_mrr = results[0].mrr / (1 + self.cfg.mrr_growth_rate) # starting mrr
total_new_arr = (results[-1].mrr - first_mrr) * 12
if total_new_arr <= 0:
return float("inf")
return total_net_burn / total_new_arr
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_k(value: float) -> str:
"""Format as $Xk or $X.XM."""
if abs(value) >= 1_000_000:
return f"${value/1_000_000:.2f}M"
if abs(value) >= 1_000:
return f"${value/1_000:.0f}K"
return f"${value:.0f}"
def print_summary(name: str, results: list[MonthResult], calc: RunwayCalculator) -> None:
cash_out = calc.cash_out_date(results)
bm = calc.burn_multiple(results)
last = results[-1]
first = results[0]
print(f"\n{'='*60}")
print(f" SCENARIO: {name}")
print(f"{'='*60}")
print(f" Months modeled: {len(results)}")
print(f" Cash out: {cash_out or 'Does not run out in model period'}")
print(f" Ending cash: {fmt_k(last.cash_end)}")
print(f" Final runway: {last.runway_months:.1f} months")
print(f" Starting MRR: {fmt_k(first.mrr)}")
print(f" Ending MRR: {fmt_k(last.mrr)}")
print(f" Ending headcount: {last.headcount}")
print(f" Burn multiple: {bm:.2f}x")
print(f" Avg net burn: {fmt_k(sum(r.net_burn for r in results)/len(results))}/mo")
# Decision triggers
print(f"\n Decision Triggers:")
triggers = {9: "⚠️ START FUNDRAISE", 6: "🔴 COST REDUCTION PLAN", 4: "🚨 EXECUTE CUTS / BRIDGE"}
shown = set()
for r in results:
for threshold, label in triggers.items():
if r.runway_months <= threshold and threshold not in shown:
print(f" {r.label}: {label} (runway = {r.runway_months:.1f} mo)")
shown.add(threshold)
def print_monthly_table(results: list[MonthResult], max_rows: int = 24) -> None:
header = f"{'Month':<22} {'MRR':>10} {'Hdct':>6} {'Net Burn':>12} {'Cash':>12} {'Runway':>8}"
print(f"\n{header}")
print("-" * len(header))
for r in results[:max_rows]:
runway_str = f"{r.runway_months:.1f}mo" if r.runway_months != float("inf") else ""
print(
f"{r.label:<22} "
f"{fmt_k(r.mrr):>10} "
f"{r.headcount:>6} "
f"{fmt_k(r.net_burn):>12} "
f"{fmt_k(r.cash_end):>12} "
f"{runway_str:>8}"
)
def export_csv(scenarios: list[tuple[str, list[MonthResult]]]) -> str:
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([
"Scenario", "Month", "Label", "MRR", "Gross Profit", "Headcount",
"Headcount Cost", "Other Opex", "Gross Burn", "Net Burn",
"Cash Start", "Cash End", "Runway Months"
])
for name, results in scenarios:
for r in results:
writer.writerow([
name, r.month, r.label,
round(r.mrr, 2), round(r.gross_profit, 2), r.headcount,
round(r.headcount_cost, 2), round(r.other_opex, 2),
round(r.gross_burn, 2), round(r.net_burn, 2),
round(r.cash_start, 2), round(r.cash_end, 2),
round(r.runway_months, 2),
])
return buf.getvalue()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
def make_sample_configs() -> list[ModelConfig]:
"""
Sample company: Series A SaaS startup
- $3M cash on hand (post Series A)
- $125K MRR (~$1.5M ARR)
- 18 employees, $150K avg salary
- $80K/mo non-headcount opex (infra, tools, office)
- 72% gross margin
"""
common_kwargs = dict(
starting_cash=3_000_000,
starting_mrr=125_000,
starting_headcount=18,
avg_loaded_salary=150_000,
base_non_headcount_opex=80_000,
gross_margin_pct=0.72,
model_months=24,
start_date=date(2025, 1, 1),
)
# Base: 10% MoM growth, moderate hiring
base_hiring = [
HiringEntry(month=2, role="AE #1", department="sales", annual_salary=120_000, recruiting_cost=18_000),
HiringEntry(month=3, role="Senior SWE #1", department="engineering", annual_salary=160_000, recruiting_cost=24_000),
HiringEntry(month=5, role="SDR #1", department="sales", annual_salary=80_000, recruiting_cost=12_000),
HiringEntry(month=6, role="CSM #1", department="cs", annual_salary=90_000, recruiting_cost=13_500),
HiringEntry(month=8, role="AE #2", department="sales", annual_salary=120_000, recruiting_cost=18_000),
HiringEntry(month=9, role="Senior SWE #2", department="engineering", annual_salary=165_000, recruiting_cost=24_750),
HiringEntry(month=12, role="Controller", department="ga", annual_salary=130_000, recruiting_cost=19_500),
HiringEntry(month=14, role="AE #3", department="sales", annual_salary=125_000, recruiting_cost=18_750),
HiringEntry(month=15, role="ML Engineer", department="engineering", annual_salary=175_000, recruiting_cost=26_250),
HiringEntry(month=18, role="AE #4", department="sales", annual_salary=125_000, recruiting_cost=18_750),
]
# Bull: 15% MoM growth, full hiring plan
bull_hiring = base_hiring + [
HiringEntry(month=4, role="Marketing Manager", department="sales", annual_salary=110_000, recruiting_cost=16_500),
HiringEntry(month=7, role="Senior SWE #3", department="engineering", annual_salary=165_000, recruiting_cost=24_750),
HiringEntry(month=10, role="AE #5", department="sales", annual_salary=125_000, recruiting_cost=18_750),
HiringEntry(month=13, role="DevOps Engineer", department="engineering", annual_salary=150_000, recruiting_cost=22_500),
HiringEntry(month=16, role="AE #6", department="sales", annual_salary=125_000, recruiting_cost=18_750),
]
# Bear: 5% MoM growth, hiring freeze after month 3
bear_hiring = [
HiringEntry(month=2, role="AE #1", department="sales", annual_salary=120_000, recruiting_cost=18_000),
HiringEntry(month=3, role="Senior SWE #1", department="engineering", annual_salary=160_000, recruiting_cost=24_000),
]
return [
ModelConfig(name="BULL (15% MoM, full hiring)", mrr_growth_rate=0.15, hiring_plan=bull_hiring, **common_kwargs),
ModelConfig(name="BASE (10% MoM, planned hiring)", mrr_growth_rate=0.10, hiring_plan=base_hiring, **common_kwargs),
ModelConfig(name="BEAR ( 5% MoM, hiring freeze M3+)", mrr_growth_rate=0.05, hiring_plan=bear_hiring, **common_kwargs),
ModelConfig(name="DISTRESS (0% growth, freeze now)", mrr_growth_rate=0.00, hiring_plan=[], **common_kwargs),
]
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(description="Startup Burn Rate & Runway Calculator")
parser.add_argument("--csv", action="store_true", help="Export full monthly data as CSV to stdout")
parser.add_argument("--scenario", choices=["bull", "base", "bear", "distress", "all"], default="all")
args = parser.parse_args()
configs = make_sample_configs()
if args.scenario != "all":
configs = [c for c in configs if args.scenario.upper() in c.name.upper()]
all_results: list[tuple[str, list[MonthResult]]] = []
print("\n" + "="*60)
print(" BURN RATE & RUNWAY CALCULATOR")
print(" Sample Company: Series A SaaS Startup")
print(" Starting cash: $3M | Starting MRR: $125K | 18 employees")
print("="*60)
for cfg in configs:
calc = RunwayCalculator(cfg)
results = calc.run()
all_results.append((cfg.name, results))
print_summary(cfg.name, results, calc)
print_monthly_table(results)
# Comparison summary
print("\n" + "="*60)
print(" SCENARIO COMPARISON")
print("="*60)
print(f" {'Scenario':<40} {'Runway':>8} {'Cash Out':<30} {'Burn Mult':>10}")
print(" " + "-"*88)
for cfg, (name, results) in zip(configs, all_results):
calc = RunwayCalculator(cfg)
cash_out = calc.cash_out_date(results) or "Survives model period"
bm = calc.burn_multiple(results)
final_runway = results[-1].runway_months
runway_str = f"{final_runway:.1f}mo" if final_runway != float("inf") else ""
bm_str = f"{bm:.2f}x" if bm != float("inf") else ""
print(f" {name:<40} {runway_str:>8} {cash_out:<30} {bm_str:>10}")
print("\n Decision Trigger Reference:")
print(" 9 months runway → Start fundraise process")
print(" 6 months runway → Begin cost reduction planning")
print(" 4 months runway → Execute cuts; explore bridge financing")
print(" 3 months runway → Emergency plan only")
if args.csv:
print("\n\n--- CSV EXPORT ---\n")
sys.stdout.write(export_csv(all_results))
if __name__ == "__main__":
main()


@@ -0,0 +1,490 @@
#!/usr/bin/env python3
"""
Fundraising Model
==================
Cap table management, dilution modeling, and multi-round scenario planning.
Know exactly what you're giving up before you walk into any negotiation.
Covers:
- Cap table state at each round
- Dilution per shareholder per round
- Option pool shuffle impact
- Multi-round projections (Seed → A → B → C)
- Return scenarios at different exit valuations
Usage:
python fundraising_model.py
python fundraising_model.py --exit 150 # model at $150M exit
python fundraising_model.py --csv
Stdlib only. No dependencies.
"""
import argparse
import csv
import io
import sys
from dataclasses import dataclass, field
from typing import Optional
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class Shareholder:
"""A shareholder in the cap table."""
name: str
share_class: str # "common", "preferred", "option"
shares: float
invested: float = 0.0 # total cash invested
is_option_pool: bool = False
@dataclass
class RoundConfig:
"""Configuration for a financing round."""
name: str # e.g. "Series A"
pre_money_valuation: float
investment_amount: float
new_option_pool_pct: float = 0.0 # % of POST-money to allocate to new options
option_pool_pre_round: bool = True # True = pool created before round (dilutes founders)
lead_investor_name: str = "New Investor"
share_price_override: Optional[float] = None # if None, computed from valuation
@dataclass
class CapTableEntry:
"""A row in the cap table at a point in time."""
name: str
share_class: str
shares: float
pct_ownership: float
invested: float
is_option_pool: bool = False
@dataclass
class RoundResult:
"""Snapshot of cap table after a round closes."""
round_name: str
pre_money_valuation: float
investment_amount: float
post_money_valuation: float
price_per_share: float
new_shares_issued: float
option_pool_shares_created: float
total_shares: float
cap_table: list[CapTableEntry]
@dataclass
class ExitAnalysis:
"""Proceeds to each shareholder at an exit."""
exit_valuation: float
shareholder: str
shares: float
ownership_pct: float
proceeds_common: float # if all preferred converts to common
invested: float
moic: float # multiple on invested capital (for investors)
# ---------------------------------------------------------------------------
# Core cap table engine
# ---------------------------------------------------------------------------
class CapTable:
"""Manages a cap table through multiple rounds."""
def __init__(self):
self.shareholders: list[Shareholder] = []
self._total_shares: float = 0.0
def add_shareholder(self, sh: Shareholder) -> None:
self.shareholders.append(sh)
self._total_shares += sh.shares
def total_shares(self) -> float:
return sum(s.shares for s in self.shareholders)
def snapshot(self, label: str = "") -> list[CapTableEntry]:
total = self.total_shares()
return [
CapTableEntry(
name=s.name,
share_class=s.share_class,
shares=s.shares,
pct_ownership=s.shares / total if total > 0 else 0,
invested=s.invested,
is_option_pool=s.is_option_pool,
)
for s in self.shareholders
]
def execute_round(self, config: RoundConfig) -> RoundResult:
"""
Execute a financing round:
1. (Optional) Create option pool pre-round (dilutes existing shareholders)
2. Issue new shares to investor at round price
Returns a RoundResult with full cap table snapshot.
"""
current_total = self.total_shares()
# Step 1: Option pool shuffle (if pre-round)
option_pool_shares_created = 0.0
if config.new_option_pool_pct > 0 and config.option_pool_pre_round:
# Target: post-round option pool = new_option_pool_pct of total post-money shares
# Solve: pool_shares / (current_total + pool_shares + new_investor_shares) = target_pct
# This requires iteration because new_investor_shares also depends on pool_shares
# Simplification: create pool based on post-round total (slightly approximated)
target_post_round_pct = config.new_option_pool_pct
# Estimate shares per dollar (price per share)
price_per_share = config.pre_money_valuation / current_total
new_investor_shares_estimate = config.investment_amount / price_per_share
# Pool shares needed so that pool / total_post = target_pct
total_post_estimate = current_total + new_investor_shares_estimate
pool_shares_needed = (target_post_round_pct * total_post_estimate) / (1 - target_post_round_pct)
# Check if existing pool is sufficient
existing_pool = next(
(s.shares for s in self.shareholders if s.is_option_pool), 0
)
additional_pool_needed = max(0, pool_shares_needed - existing_pool)
if additional_pool_needed > 0:
option_pool_shares_created = additional_pool_needed
# Add to existing pool or create new
pool_sh = next((s for s in self.shareholders if s.is_option_pool), None)
if pool_sh:
pool_sh.shares += additional_pool_needed
else:
self.shareholders.append(Shareholder(
name="Option Pool",
share_class="option",
shares=additional_pool_needed,
is_option_pool=True,
))
# Step 2: Price per share (after pool creation)
current_total_post_pool = self.total_shares()
if config.share_price_override:
price_per_share = config.share_price_override
else:
price_per_share = config.pre_money_valuation / current_total_post_pool
# Step 3: New shares for investor
new_shares = config.investment_amount / price_per_share
# Step 4: Add investor to cap table
self.shareholders.append(Shareholder(
name=config.lead_investor_name,
share_class="preferred",
shares=new_shares,
invested=config.investment_amount,
))
post_money = config.pre_money_valuation + config.investment_amount
total_post = self.total_shares()
return RoundResult(
round_name=config.name,
pre_money_valuation=config.pre_money_valuation,
investment_amount=config.investment_amount,
post_money_valuation=post_money,
price_per_share=price_per_share,
new_shares_issued=new_shares,
option_pool_shares_created=option_pool_shares_created,
total_shares=total_post,
cap_table=self.snapshot(),
)
def analyze_exit(self, exit_valuation: float) -> list[ExitAnalysis]:
"""
Simple exit analysis: all preferred converts to common, proceeds split pro-rata.
(Does not model liquidation preferences — see fundraising_playbook.md for that.)
"""
total = self.total_shares()
price_per_share = exit_valuation / total
results = []
for s in self.shareholders:
if s.is_option_pool:
continue # unissued options don't receive proceeds
proceeds = s.shares * price_per_share
moic = proceeds / s.invested if s.invested > 0 else 0.0
results.append(ExitAnalysis(
exit_valuation=exit_valuation,
shareholder=s.name,
shares=s.shares,
ownership_pct=s.shares / total,
proceeds_common=proceeds,
invested=s.invested,
moic=moic,
))
return sorted(results, key=lambda x: x.proceeds_common, reverse=True)
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt(value: float, prefix: str = "$") -> str:
if value == float("inf"):
return ""
if abs(value) >= 1_000_000:
return f"{prefix}{value/1_000_000:.2f}M"
if abs(value) >= 1_000:
return f"{prefix}{value/1_000:.0f}K"
return f"{prefix}{value:.2f}"
def print_round_result(result: RoundResult, prev_cap_table: Optional[list[CapTableEntry]] = None) -> None:
print(f"\n{'='*70}")
print(f" {result.round_name.upper()}")
print(f"{'='*70}")
print(f" Pre-money valuation: {fmt(result.pre_money_valuation)}")
print(f" Investment: {fmt(result.investment_amount)}")
print(f" Post-money valuation: {fmt(result.post_money_valuation)}")
print(f" Price per share: {fmt(result.price_per_share, '$')}")
print(f" New shares issued: {result.new_shares_issued:,.0f}")
if result.option_pool_shares_created > 0:
print(f" Option pool created: {result.option_pool_shares_created:,.0f} shares")
print(f" ⚠️ Pool created pre-round: dilutes existing shareholders, not new investor")
print(f" Total shares post: {result.total_shares:,.0f}")
print(f"\n {'Shareholder':<22} {'Shares':>12} {'Ownership':>10} {'Invested':>10} {'Δ Ownership':>12}")
print(" " + "-"*68)
prev_map = {e.name: e.pct_ownership for e in prev_cap_table} if prev_cap_table else {}
for entry in result.cap_table:
delta = ""
if entry.name in prev_map:
change = (entry.pct_ownership - prev_map[entry.name]) * 100
delta = f"{change:+.1f}pp"
elif not entry.is_option_pool:
delta = "new"
invested_str = fmt(entry.invested) if entry.invested > 0 else "-"
print(
f" {entry.name:<22} {entry.shares:>12,.0f} "
f"{entry.pct_ownership*100:>9.2f}% {invested_str:>10} {delta:>12}"
)
def print_exit_analysis(results: list[ExitAnalysis], exit_valuation: float) -> None:
print(f"\n{'='*70}")
print(f" EXIT ANALYSIS @ {fmt(exit_valuation)} (all preferred converts to common)")
print(f"{'='*70}")
print(f"\n {'Shareholder':<22} {'Ownership':>10} {'Proceeds':>12} {'Invested':>10} {'MOIC':>8}")
print(" " + "-"*65)
for r in results:
moic_str = f"{r.moic:.1f}x" if r.moic > 0 else "n/a"
invested_str = fmt(r.invested) if r.invested > 0 else "-"
print(
f" {r.shareholder:<22} {r.ownership_pct*100:>9.2f}% "
f"{fmt(r.proceeds_common):>12} {invested_str:>10} {moic_str:>8}"
)
print(f"\n Note: Does not model liquidation preferences.")
print(f" Participating preferred reduces founder proceeds in most real exits.")
print(f" See references/fundraising_playbook.md for full liquidation waterfall.")
def print_dilution_summary(rounds: list[RoundResult]) -> None:
print(f"\n{'='*70}")
print(f" DILUTION SUMMARY — FOUNDER PERSPECTIVE")
print(f"{'='*70}")
# Find all founders (common shareholders who aren't investors or option pool)
founder_names = []
for entry in rounds[0].cap_table:
if entry.share_class == "common" and not entry.is_option_pool:
founder_names.append(entry.name)
if not founder_names:
print(" No common shareholders found in initial cap table.")
return
header = f" {'Round':<16}" + "".join(f" {n:<16}" for n in founder_names) + f" {'Total Inv':>12}"
print(header)
print(" " + "-" * (16 + 18 * len(founder_names) + 14))
for result in rounds:
cap_map = {e.name: e for e in result.cap_table}
total_invested = sum(e.invested for e in result.cap_table if not e.is_option_pool)
row = f" {result.round_name:<16}"
for name in founder_names:
pct = cap_map[name].pct_ownership * 100 if name in cap_map else 0
row += f" {pct:>6.2f}% "
row += f" {fmt(total_invested):>12}"
print(row)
def export_csv_rounds(rounds: list[RoundResult]) -> str:
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Round", "Shareholder", "Share Class", "Shares", "Ownership Pct",
"Invested", "Pre Money", "Post Money", "Price Per Share"])
for r in rounds:
for entry in r.cap_table:
writer.writerow([
r.round_name, entry.name, entry.share_class,
round(entry.shares, 0), round(entry.pct_ownership * 100, 4),
round(entry.invested, 2), round(r.pre_money_valuation, 0),
round(r.post_money_valuation, 0), round(r.price_per_share, 4),
])
return buf.getvalue()
# ---------------------------------------------------------------------------
# Sample data: typical two-founder Series A/B/C startup
# ---------------------------------------------------------------------------
def build_sample_model() -> tuple[CapTable, list[RoundResult]]:
"""
Sample company:
- 2 founders, 4M shares each at founding
- 200K shares for an early advisor
- Raises Pre-seed → Seed → Series A → Series B → Series C
"""
cap = CapTable()
SHARES_PER_FOUNDER = 4_000_000
SHARES_ADVISOR = 200_000
# Founding state
cap.add_shareholder(Shareholder("Founder A (CEO)", "common", SHARES_PER_FOUNDER))
cap.add_shareholder(Shareholder("Founder B (CTO)", "common", SHARES_PER_FOUNDER))
cap.add_shareholder(Shareholder("Advisor", "common", SHARES_ADVISOR))
rounds: list[RoundResult] = []
prev_cap = cap.snapshot()
# Round 1: Pre-seed — $500K at $4.5M pre, 10% option pool created
r1 = cap.execute_round(RoundConfig(
name="Pre-seed",
pre_money_valuation=4_500_000,
investment_amount=500_000,
new_option_pool_pct=0.10,
option_pool_pre_round=True,
lead_investor_name="Angel Syndicate",
))
rounds.append(r1)
prev_r1 = r1.cap_table[:]
# Round 2: Seed — $2M at $9M pre, expand option pool to 12%
r2 = cap.execute_round(RoundConfig(
name="Seed",
pre_money_valuation=9_000_000,
investment_amount=2_000_000,
new_option_pool_pct=0.12,
option_pool_pre_round=True,
lead_investor_name="Seed Fund",
))
rounds.append(r2)
# Round 3: Series A — $12M at $38M pre, refresh option pool to 15%
r3 = cap.execute_round(RoundConfig(
name="Series A",
pre_money_valuation=38_000_000,
investment_amount=12_000_000,
new_option_pool_pct=0.15,
option_pool_pre_round=True,
lead_investor_name="Series A Fund",
))
rounds.append(r3)
# Round 4: Series B — $25M at $95M pre, refresh pool to 12%
r4 = cap.execute_round(RoundConfig(
name="Series B",
pre_money_valuation=95_000_000,
investment_amount=25_000_000,
new_option_pool_pct=0.12,
option_pool_pre_round=True,
lead_investor_name="Series B Fund",
))
rounds.append(r4)
# Round 5: Series C — $40M at $185M pre, refresh pool to 10%
r5 = cap.execute_round(RoundConfig(
name="Series C",
pre_money_valuation=185_000_000,
investment_amount=40_000_000,
new_option_pool_pct=0.10,
option_pool_pre_round=True,
lead_investor_name="Series C Fund",
))
rounds.append(r5)
return cap, rounds
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(description="Fundraising Model — Cap Table & Dilution")
parser.add_argument("--exit", type=float, default=250.0,
help="Exit valuation in $M for return analysis (default: 250)")
parser.add_argument("--csv", action="store_true", help="Export round data as CSV to stdout")
args = parser.parse_args()
exit_valuation = args.exit * 1_000_000
print("\n" + "="*70)
print(" FUNDRAISING MODEL — CAP TABLE & DILUTION ANALYSIS")
print(" Sample Company: Two-founder SaaS startup")
print(" Pre-seed → Seed → Series A → Series B → Series C")
print("="*70)
cap, rounds = build_sample_model()
# Print each round
prev = None
for r in rounds:
print_round_result(r, prev)
prev = r.cap_table
# Dilution summary table
print_dilution_summary(rounds)
# Exit analysis at specified valuation
exit_results = cap.analyze_exit(exit_valuation)
print_exit_analysis(exit_results, exit_valuation)
# Sensitivity table across exit multiples (0.5x to 5x of final post-money)
print("\n Exit Sensitivity — Founder A Proceeds:")
print(f" {'Exit Valuation':<20} {'Founder A %':>12} {'Founder A $':>14} {'MOIC':>8}")
print(" " + "-"*56)
for mult in [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]:
val = rounds[-1].post_money_valuation * mult
ex = cap.analyze_exit(val)
founder_a = next((r for r in ex if r.shareholder == "Founder A (CEO)"), None)
if founder_a:
print(f" {fmt(val):<20} {founder_a.ownership_pct*100:>11.2f}% "
f"{fmt(founder_a.proceeds_common):>14} {'n/a':>8}")
print("\n Key Takeaways:")
final = rounds[-1].cap_table
total = sum(e.shares for e in final)
founder_a_final = next((e for e in final if e.name == "Founder A (CEO)"), None)
if founder_a_final:
print(f" Founder A final ownership: {founder_a_final.pct_ownership*100:.2f}%")
total_raised = sum(e.invested for e in final)
print(f" Total capital raised: {fmt(total_raised)}")
print(f" Total shares outstanding: {total:,.0f}")
print(f" Final post-money: {fmt(rounds[-1].post_money_valuation)}")
print("\n Run with --exit <$M> to model proceeds at different exit valuations.")
print(" Example: python fundraising_model.py --exit 500")
if args.csv:
print("\n\n--- CSV EXPORT ---\n")
sys.stdout.write(export_csv_rounds(rounds))
if __name__ == "__main__":
main()


@@ -0,0 +1,529 @@
#!/usr/bin/env python3
"""
Unit Economics Analyzer
========================
Per-cohort LTV, per-channel CAC, payback periods, and LTV:CAC ratios.
Never blended averages — those hide what's actually happening.
Usage:
python unit_economics_analyzer.py
python unit_economics_analyzer.py --csv
Stdlib only. No dependencies.
"""
import argparse
import csv
import io
import sys
from dataclasses import dataclass, field
from typing import Optional
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class CohortData:
"""
Revenue data for a group of customers acquired in the same period.
Revenue is tracked monthly: revenue[0] = month 1, revenue[1] = month 2, etc.
"""
label: str # e.g. "Q1 2024"
acquisition_period: str # human-readable label
customers_acquired: int
total_cac_spend: float # total S&M spend to acquire this cohort
monthly_revenue: list[float] # revenue per month from this cohort
gross_margin_pct: float = 0.70 # blended gross margin for this cohort
@dataclass
class ChannelData:
"""Acquisition cost and customer data for a single channel."""
channel: str
spend: float
customers_acquired: int
avg_arpa: float # average revenue per account (monthly)
gross_margin_pct: float = 0.70
avg_monthly_churn: float = 0.02 # monthly churn rate for customers from this channel
@dataclass
class UnitEconomicsResult:
"""Computed unit economics for a cohort or channel."""
label: str
customers: int
cac: float
arpa: float # average revenue per account per month
gross_margin_pct: float
monthly_churn: float
ltv: float
ltv_cac_ratio: float
payback_months: float
# Cohort-specific
m1_revenue: Optional[float] = None
m6_revenue: Optional[float] = None
m12_revenue: Optional[float] = None
m24_revenue: Optional[float] = None
m12_ltv: Optional[float] = None # realized LTV through month 12
retention_m6: Optional[float] = None # % of M1 revenue retained at M6
retention_m12: Optional[float] = None
# ---------------------------------------------------------------------------
# Calculators
# ---------------------------------------------------------------------------
def calc_ltv(arpa: float, gross_margin_pct: float, monthly_churn: float) -> float:
"""
LTV = (ARPA × Gross Margin) / Monthly Churn Rate
Assumes constant churn (simplified; cohort method is more accurate).
"""
if monthly_churn <= 0:
return float("inf")
return (arpa * gross_margin_pct) / monthly_churn
def calc_payback(cac: float, arpa: float, gross_margin_pct: float) -> float:
"""
CAC Payback (months) = CAC / (ARPA × Gross Margin)
"""
denominator = arpa * gross_margin_pct
if denominator <= 0:
return float("inf")
return cac / denominator
def analyze_cohort(cohort: CohortData) -> UnitEconomicsResult:
"""Compute full unit economics for a cohort."""
n = cohort.customers_acquired
if n == 0:
raise ValueError(f"Cohort {cohort.label}: customers_acquired cannot be 0")
cac = cohort.total_cac_spend / n
# ARPA from month 1 revenue
m1_rev = cohort.monthly_revenue[0] if cohort.monthly_revenue else 0
arpa = m1_rev / n if n > 0 else 0
# Observed monthly churn from cohort data
# Use revenue decline from M1 to M12 to estimate churn
months_available = len(cohort.monthly_revenue)
if months_available >= 12:
m12_rev = cohort.monthly_revenue[11]
# Revenue retention over 12 months: (M12/M1)^(1/11) per month on average
# Implied monthly retention rate
if m1_rev > 0 and m12_rev > 0:
monthly_retention = (m12_rev / m1_rev) ** (1 / 11)
monthly_churn = 1 - monthly_retention
else:
monthly_churn = 0.02 # default
elif months_available >= 6:
m6_rev = cohort.monthly_revenue[5]
if m1_rev > 0 and m6_rev > 0:
monthly_retention = (m6_rev / m1_rev) ** (1 / 5)
monthly_churn = 1 - monthly_retention
else:
monthly_churn = 0.02
else:
monthly_churn = 0.02 # default if < 6 months data
# Clamp to reasonable range
monthly_churn = max(0.001, min(monthly_churn, 0.30))
ltv = calc_ltv(arpa, cohort.gross_margin_pct, monthly_churn)
payback = calc_payback(cac, arpa, cohort.gross_margin_pct)
ltv_cac = ltv / cac if cac > 0 else float("inf")
# Snapshot revenues
def rev_at(month_idx: int) -> Optional[float]:
if months_available > month_idx:
return cohort.monthly_revenue[month_idx]
return None
m6 = rev_at(5)
m12 = rev_at(11)
m24 = rev_at(23)
# Realized LTV through observed months (actual gross profit)
m12_ltv = sum(cohort.monthly_revenue[:12]) * cohort.gross_margin_pct if months_available >= 12 else None
# Retention rates
ret_m6 = (m6 / m1_rev) if (m6 is not None and m1_rev > 0) else None
ret_m12 = (m12 / m1_rev) if (m12 is not None and m1_rev > 0) else None
return UnitEconomicsResult(
label=cohort.label,
customers=n,
cac=cac,
arpa=arpa,
gross_margin_pct=cohort.gross_margin_pct,
monthly_churn=monthly_churn,
ltv=ltv,
ltv_cac_ratio=ltv_cac,
payback_months=payback,
m1_revenue=m1_rev,
m6_revenue=m6,
m12_revenue=m12,
m24_revenue=m24,
m12_ltv=m12_ltv,
retention_m6=ret_m6,
retention_m12=ret_m12,
)
def analyze_channel(ch: ChannelData) -> UnitEconomicsResult:
"""Compute unit economics for an acquisition channel."""
if ch.customers_acquired == 0:
raise ValueError(f"Channel {ch.channel}: customers_acquired cannot be 0")
cac = ch.spend / ch.customers_acquired
ltv = calc_ltv(ch.avg_arpa, ch.gross_margin_pct, ch.avg_monthly_churn)
payback = calc_payback(cac, ch.avg_arpa, ch.gross_margin_pct)
ltv_cac = ltv / cac if cac > 0 else float("inf")
return UnitEconomicsResult(
label=ch.channel,
customers=ch.customers_acquired,
cac=cac,
arpa=ch.avg_arpa,
gross_margin_pct=ch.gross_margin_pct,
monthly_churn=ch.avg_monthly_churn,
ltv=ltv,
ltv_cac_ratio=ltv_cac,
payback_months=payback,
)
# ---------------------------------------------------------------------------
# Blended metrics (for comparison)
# ---------------------------------------------------------------------------
def blended_cac(channels: list[ChannelData]) -> float:
total_spend = sum(c.spend for c in channels)
total_customers = sum(c.customers_acquired for c in channels)
return total_spend / total_customers if total_customers > 0 else 0
def blended_ltv(channels: list[ChannelData]) -> float:
"""Weighted average LTV by customers acquired."""
total_customers = sum(c.customers_acquired for c in channels)
if total_customers == 0:
return 0
weighted = sum(
calc_ltv(c.avg_arpa, c.gross_margin_pct, c.avg_monthly_churn) * c.customers_acquired
for c in channels
)
return weighted / total_customers
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt(value: float, prefix: str = "$", decimals: int = 0) -> str:
if value == float("inf"):
return ""
if abs(value) >= 1_000_000:
return f"{prefix}{value/1_000_000:.2f}M"
if abs(value) >= 1_000:
return f"{prefix}{value/1_000:.1f}K"
return f"{prefix}{value:.{decimals}f}"
def pct(value: Optional[float]) -> str:
if value is None:
return "n/a"
return f"{value*100:.1f}%"
def rating(ltv_cac: float, payback: float) -> str:
if ltv_cac == float("inf"):
return ""
if ltv_cac >= 5 and payback <= 12:
return "🟢 Excellent"
if ltv_cac >= 3 and payback <= 18:
return "🟡 Good"
if ltv_cac >= 2 and payback <= 24:
return "🟠 Marginal"
return "🔴 Poor"
def print_cohort_analysis(results: list[UnitEconomicsResult]) -> None:
print("\n" + "="*80)
print(" COHORT ANALYSIS")
print("="*80)
print(f" {'Cohort':<12} {'Cust':>5} {'CAC':>8} {'ARPA/mo':>9} {'Churn/mo':>10} "
f"{'LTV':>10} {'LTV:CAC':>8} {'Payback':>9} {'Ret@M12':>8}")
print(" " + "-"*88)
for r in results:
payback_str = f"{r.payback_months:.1f}mo" if r.payback_months != float("inf") else "∞"
ltv_str = fmt(r.ltv) if r.ltv != float("inf") else "∞"
ltv_cac_str = f"{r.ltv_cac_ratio:.1f}x" if r.ltv_cac_ratio != float("inf") else "∞"
print(
f" {r.label:<12} {r.customers:>5} {fmt(r.cac):>8} {fmt(r.arpa):>9} "
f"{pct(r.monthly_churn):>10} {ltv_str:>10} {ltv_cac_str:>8} "
f"{payback_str:>9} {pct(r.retention_m12):>8}"
)
# Trend analysis
print("\n Cohort Trend (is the business getting better or worse?):")
if len(results) >= 3:
ltv_cac_values = [r.ltv_cac_ratio for r in results if r.ltv_cac_ratio != float("inf")]
cac_values = [r.cac for r in results]
churn_values = [r.monthly_churn for r in results]
if len(ltv_cac_values) >= 2:
ltv_cac_trend = "↑ Improving" if ltv_cac_values[-1] > ltv_cac_values[0] else "↓ Deteriorating"
else:
ltv_cac_trend = "n/a"
cac_trend = "↓ Decreasing (good)" if cac_values[-1] < cac_values[0] else "↑ Increasing"
churn_trend = "↓ Improving" if churn_values[-1] < churn_values[0] else "↑ Worsening"
print(f" LTV:CAC: {ltv_cac_trend}")
print(f" CAC: {cac_trend}")
print(f" Churn rate: {churn_trend}")
def print_channel_analysis(results: list[UnitEconomicsResult], channels: list[ChannelData]) -> None:
print("\n" + "="*80)
print(" CHANNEL ANALYSIS (Per-Channel vs Blended)")
print("="*80)
print(f" {'Channel':<22} {'Spend':>9} {'Cust':>5} {'CAC':>8} {'LTV':>10} {'LTV:CAC':>8} {'Payback':>9} {'Rating'}")
print(" " + "-"*90)
for r, ch in zip(results, channels):
payback_str = f"{r.payback_months:.1f}mo" if r.payback_months != float("inf") else "∞"
ltv_str = fmt(r.ltv) if r.ltv != float("inf") else "∞"
ltv_cac_str = f"{r.ltv_cac_ratio:.1f}x" if r.ltv_cac_ratio != float("inf") else "∞"
print(
f" {r.label:<22} {fmt(ch.spend):>9} {r.customers:>5} {fmt(r.cac):>8} "
f"{ltv_str:>10} {ltv_cac_str:>8} {payback_str:>9} {rating(r.ltv_cac_ratio, r.payback_months)}"
)
# Blended comparison
b_cac = blended_cac(channels)
b_ltv = blended_ltv(channels)
b_ltv_cac = b_ltv / b_cac if b_cac > 0 else 0
total_spend = sum(c.spend for c in channels)
total_customers = sum(c.customers_acquired for c in channels)
avg_payback = sum(
calc_payback(b_cac, c.avg_arpa, c.gross_margin_pct) * c.customers_acquired
for c in channels
) / total_customers
print(" " + "-"*90)
print(
f" {'BLENDED (dangerous)':<22} {fmt(total_spend):>9} {total_customers:>5} "
f"{fmt(b_cac):>8} {fmt(b_ltv):>10} {b_ltv_cac:.1f}x{'':<7} "
f"{avg_payback:.1f}mo{'':<4} {rating(b_ltv_cac, avg_payback)}"
)
print("\n ⚠️ Blended numbers hide channel-level problems. Manage channels individually.")
# Budget reallocation
print("\n Recommended Budget Reallocation:")
sorted_results = sorted(zip(results, channels), key=lambda x: x[0].ltv_cac_ratio, reverse=True)
for r, ch in sorted_results:
if r.ltv_cac_ratio >= 3:
action = "✅ Scale"
elif r.ltv_cac_ratio >= 2:
action = "🔄 Optimize"
else:
action = "❌ Cut / pause"
print(f" {ch.channel:<22} LTV:CAC = {r.ltv_cac_ratio:.1f}x → {action}")
def export_csv_results(cohort_results: list[UnitEconomicsResult], channel_results: list[UnitEconomicsResult]) -> str:
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Type", "Label", "Customers", "CAC", "ARPA_Monthly", "Gross_Margin_Pct",
"Monthly_Churn", "LTV", "LTV_CAC_Ratio", "Payback_Months",
"Retention_M6", "Retention_M12"])
for r in cohort_results:
writer.writerow(["cohort", r.label, r.customers, round(r.cac, 2), round(r.arpa, 2),
r.gross_margin_pct, round(r.monthly_churn, 4),
round(r.ltv, 2) if r.ltv != float("inf") else "inf",
round(r.ltv_cac_ratio, 2) if r.ltv_cac_ratio != float("inf") else "inf",
round(r.payback_months, 2) if r.payback_months != float("inf") else "inf",
round(r.retention_m6, 3) if r.retention_m6 is not None else "",
round(r.retention_m12, 3) if r.retention_m12 is not None else ""])
for r in channel_results:
writer.writerow(["channel", r.label, r.customers, round(r.cac, 2), round(r.arpa, 2),
r.gross_margin_pct, round(r.monthly_churn, 4),
round(r.ltv, 2) if r.ltv != float("inf") else "inf",
round(r.ltv_cac_ratio, 2) if r.ltv_cac_ratio != float("inf") else "inf",
round(r.payback_months, 2) if r.payback_months != float("inf") else "inf",
"", ""])
return buf.getvalue()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
def make_sample_cohorts() -> list[CohortData]:
"""
Series A SaaS company, 8 quarters of cohort data.
Shows a business improving on all dimensions over time.
"""
return [
CohortData(
label="Q1 2023", acquisition_period="Jan-Mar 2023",
customers_acquired=12, total_cac_spend=54_000,
gross_margin_pct=0.68,
monthly_revenue=[
10_200, 9_600, 9_100, 8_700, 8_300, 8_000, # M1-M6
7_800, 7_600, 7_400, 7_200, 7_000, 6_800, # M7-M12
6_700, 6_600, 6_500, 6_400, 6_300, 6_200, # M13-M18
6_100, 6_000, 5_900, 5_800, 5_700, 5_600, # M19-M24
],
),
CohortData(
label="Q2 2023", acquisition_period="Apr-Jun 2023",
customers_acquired=15, total_cac_spend=60_000,
gross_margin_pct=0.69,
monthly_revenue=[
13_500, 12_900, 12_500, 12_100, 11_800, 11_500,
11_300, 11_100, 10_900, 10_700, 10_500, 10_300,
10_200, 10_100, 10_000, 9_900, 9_800, 9_700,
],
),
CohortData(
label="Q3 2023", acquisition_period="Jul-Sep 2023",
customers_acquired=18, total_cac_spend=63_000,
gross_margin_pct=0.70,
monthly_revenue=[
16_200, 15_800, 15_400, 15_100, 14_800, 14_600,
14_400, 14_200, 14_000, 13_900, 13_800, 13_700,
13_600, 13_500, 13_400, 13_300,
],
),
CohortData(
label="Q4 2023", acquisition_period="Oct-Dec 2023",
customers_acquired=22, total_cac_spend=70_400,
gross_margin_pct=0.71,
monthly_revenue=[
20_900, 20_500, 20_200, 19_900, 19_700, 19_500,
19_300, 19_100, 19_000, 18_900, 18_800, 18_700,
],
),
CohortData(
label="Q1 2024", acquisition_period="Jan-Mar 2024",
customers_acquired=28, total_cac_spend=81_200,
gross_margin_pct=0.72,
monthly_revenue=[
27_200, 26_900, 26_600, 26_400, 26_200, 26_000,
25_800, 25_700, 25_600, 25_500,
],
),
CohortData(
label="Q2 2024", acquisition_period="Apr-Jun 2024",
customers_acquired=34, total_cac_spend=91_800,
gross_margin_pct=0.72,
monthly_revenue=[
33_300, 33_000, 32_800, 32_600, 32_400, 32_200,
],
),
CohortData(
label="Q3 2024", acquisition_period="Jul-Sep 2024",
customers_acquired=40, total_cac_spend=100_000,
gross_margin_pct=0.73,
monthly_revenue=[
39_600, 39_400, 39_200,
],
),
CohortData(
label="Q4 2024", acquisition_period="Oct-Dec 2024",
customers_acquired=47, total_cac_spend=112_800,
gross_margin_pct=0.73,
monthly_revenue=[
47_000,
],
),
]
def make_sample_channels() -> list[ChannelData]:
"""
Q4 2024 channel breakdown. Blended looks fine; per-channel reveals problems.
"""
return [
ChannelData("Organic / SEO", spend=9_500, customers_acquired=14, avg_arpa=950, gross_margin_pct=0.73, avg_monthly_churn=0.015),
ChannelData("Paid Search (SEM)", spend=48_000, customers_acquired=18, avg_arpa=980, gross_margin_pct=0.73, avg_monthly_churn=0.020),
ChannelData("Paid Social", spend=32_000, customers_acquired=8, avg_arpa=900, gross_margin_pct=0.72, avg_monthly_churn=0.025),
ChannelData("Content / Inbound", spend=11_000, customers_acquired=6, avg_arpa=1100, gross_margin_pct=0.74, avg_monthly_churn=0.012),
ChannelData("Outbound SDR", spend=22_000, customers_acquired=4, avg_arpa=1200, gross_margin_pct=0.73, avg_monthly_churn=0.022),
ChannelData("Events / Webinars", spend=18_500, customers_acquired=3, avg_arpa=1050, gross_margin_pct=0.72, avg_monthly_churn=0.028),
ChannelData("Partner / Referral", spend=7_800, customers_acquired=7, avg_arpa=1000, gross_margin_pct=0.73, avg_monthly_churn=0.013),
]
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(description="Unit Economics Analyzer")
parser.add_argument("--csv", action="store_true", help="Export results as CSV to stdout")
args = parser.parse_args()
cohorts = make_sample_cohorts()
channels = make_sample_channels()
print("\n" + "="*80)
print(" UNIT ECONOMICS ANALYZER")
print(" Sample Company: Series A SaaS | Q4 2024 Snapshot")
print(" Gross Margin: ~72% | Monthly Churn: derived from cohort data")
print("="*80)
cohort_results = [analyze_cohort(c) for c in cohorts]
channel_results = [analyze_channel(c) for c in channels]
print_cohort_analysis(cohort_results)
print_channel_analysis(channel_results, channels)
# Health summary
print("\n" + "="*80)
print(" HEALTH SUMMARY")
print("="*80)
latest = cohort_results[-1]
prev = cohort_results[-4] if len(cohort_results) >= 4 else cohort_results[0]
print(f"\n Latest Cohort ({latest.label}):")
print(f" CAC: {fmt(latest.cac)}")
ltv_str = fmt(latest.ltv) if latest.ltv != float("inf") else "∞"
ltv_cac_str = f"{latest.ltv_cac_ratio:.1f}x" if latest.ltv_cac_ratio != float("inf") else "∞"
payback_str = f"{latest.payback_months:.1f} months" if latest.payback_months != float("inf") else "∞"
print(f" LTV: {ltv_str}")
print(f" LTV:CAC: {ltv_cac_str} (target: > 3x)")
print(f" CAC Payback: {payback_str} (target: < 18mo)")
print(f" Rating: {rating(latest.ltv_cac_ratio, latest.payback_months)}")
# Trend vs the cohort three quarters earlier
print(f"\n Trend vs {prev.label}:")
cac_delta = (latest.cac - prev.cac) / prev.cac * 100
ltv_delta_str = "n/a"
if latest.ltv != float("inf") and prev.ltv != float("inf"):
ltv_delta = (latest.ltv - prev.ltv) / prev.ltv * 100
ltv_delta_str = f"{ltv_delta:+.1f}%"
cac_str = "↓ Better" if cac_delta < 0 else "↑ Worse"
print(f" CAC: {cac_delta:+.1f}% ({cac_str})")
print(f" LTV: {ltv_delta_str}")
print("\n Benchmark Reference:")
print(" LTV:CAC > 5x → Scale aggressively")
print(" LTV:CAC 3-5x → Healthy; grow at current pace")
print(" LTV:CAC 2-3x → Marginal; optimize before scaling")
print(" LTV:CAC < 2x → Acquiring unprofitably; stop and fix")
print(" Payback < 12mo → Outstanding capital efficiency")
print(" Payback 12-18mo → Good for B2B SaaS")
print(" Payback > 24mo → Requires long-dated capital to scale")
if args.csv:
print("\n\n--- CSV EXPORT ---\n")
sys.stdout.write(export_csv_results(cohort_results, channel_results))
if __name__ == "__main__":
main()


@@ -0,0 +1,253 @@
---
name: change-management
description: "Framework for rolling out organizational changes without chaos. Covers the ADKAR model adapted for startups, communication templates, resistance patterns, and change fatigue management. Handles process changes, org restructures, strategy pivots, and culture changes. Use when announcing a reorg, switching tools, pivoting strategy, killing a product, changing leadership, or when user mentions change management, change rollout, managing resistance, org change, reorg, or pivot communication."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: change-management
updated: 2026-03-05
frameworks: change-playbook
---
# Change Management Playbook
Most changes fail at implementation, not design. The ADKAR model tells you why and how to fix it.
## Keywords
change management, ADKAR, organizational change, reorg, process change, tool migration, strategy pivot, change resistance, change fatigue, change communication, stakeholder management, adoption, compliance, change rollout, transition
## Core Model: ADKAR Adapted for Startups
ADKAR is a change management model by Prosci. Original version is for enterprises. This is the startup-speed adaptation.
### A — Awareness
**What it is:** People understand WHY the change is happening — the business reason, not just the announcement.
**The mistake:** Communicating the WHAT before the WHY. "We're moving to a new CRM" before "here's why our current process is killing us."
**What people need to hear:**
- What is the problem we're solving? (Be honest. If it's "we need to cut costs," say that.)
- Why now? What would happen if we didn't change?
- Who made this decision and how?
**Startup shortcut:** A 5-minute video from the CEO or decision-maker explaining the "why" in plain language beats a formal change announcement document every time.
---
### D — Desire
**What it is:** People want to make the change happen — or at least don't actively resist it.
**The mistake:** Assuming communication creates desire. Awareness ≠ desire. People can understand a change and still hate it.
**What creates desire:**
- "What's in it for me?" — answer this for each stakeholder group, honestly
- Involving people in the "how" even if the "what" is decided
- Addressing fears directly: "Some people are worried this means their role is changing. Here's the truth: [honest answer]"
**What destroys desire:**
- Pretending the change is better for everyone than it is
- Ignoring the legitimate losses people will experience
- Making announcements without any consultation
**Startup shortcut:** Run a short "concerns and questions" session within 48 hours of announcement. Not to reverse the decision — to address the fears and show you're listening.
---
### K — Knowledge
**What it is:** People know HOW to operate in the new world — the specific skills, behaviors, and processes.
**The mistake:** Announcing the change and assuming people will figure it out.
**What people need:**
- Step-by-step documentation of new processes
- Training or practice sessions before go-live
- Clear answers to "what do I do when [common scenario]?"
- Who to ask when they're stuck
**Types of knowledge transfer:**
| Method | Best for | When |
|--------|---------|------|
| Live training | Skill-based changes, complex tools | Before go-live |
| Documentation | Process changes, reference material | Always |
| Video walkthroughs | Tool migrations | Available 24/7, self-paced |
| Shadowing / peer learning | Behavior changes | Weeks 2-4 after launch |
| Office hours | Any change with many edge cases | First 4-6 weeks |
---
### A — Ability
**What it is:** People have the time, tools, and support to actually do things differently.
**The mistake:** "We've trained everyone" ≠ "everyone can now do it." Training is knowledge. Ability is practice.
**What creates ability:**
- Time to practice before being evaluated
- A safe environment to make mistakes (no public shaming for early struggles)
- Reduced load during transition (if you're asking people to learn new skills, don't simultaneously pile on new work)
- Access to help (a Slack channel, a point person, documentation)
**Signs of ability gap:**
- People revert to old behavior under pressure
- Workarounds emerge (people invent their own way around the new system)
- Training scores are high but actual behavior hasn't changed
---
### R — Reinforcement
**What it is:** The change sticks. The new behavior becomes the default.
**The mistake:** Declaring victory at go-live. Changes fail because they're never reinforced.
**What creates reinforcement:**
- Visible measurement (are we tracking adoption?)
- Recognition of early adopters ("Sarah fully migrated to the new workflow in week 2 — ask her how")
- Leader modeling (if the CEO uses the old way, everyone will)
- Removing the old option (when possible — eliminate the path of least resistance)
- Consequences for non-adoption (stated clearly, applied consistently)
**Adoption vs. compliance:**
- **Compliance:** People do it when watched, revert when not
- **Adoption:** People do it because they believe it's better
Only reinforcement creates adoption. Compliance is the result of enforcement. Aim for adoption.
---
## Change Types and ADKAR Application
### Process Change (new tools, new workflows)
**Timeline:** 4-8 weeks for full adoption
**Hardest phase:** Ability (people know what to do but haven't built the habit)
**Critical reinforcement:** Remove or deprecate the old tool/process
**Communication sequence:**
1. Week -2: Announce the why + go-live date
2. Week -1: Training sessions available
3. Week 0 (go-live): Launch + point person available
4. Week 2: Adoption check-in (who's using it? Who isn't?)
5. Week 4: Feedback collection + public wins
6. Week 8: Old system deprecated
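As a quick sketch, the week offsets above can be resolved to concrete calendar dates for a given go-live date (the helper and constant names are illustrative, not part of the playbook):

```python
import datetime

# Week offsets from the six-step sequence above (week 0 = go-live)
PROCESS_CHANGE_SEQUENCE = [
    (-2, "Announce the why + go-live date"),
    (-1, "Training sessions available"),
    (0, "Go-live: launch + point person available"),
    (2, "Adoption check-in"),
    (4, "Feedback collection + public wins"),
    (8, "Old system deprecated"),
]


def comms_schedule(go_live: datetime.date) -> list[tuple[datetime.date, str]]:
    """Resolve each week offset to a concrete date."""
    return [(go_live + datetime.timedelta(weeks=offset), step)
            for offset, step in PROCESS_CHANGE_SEQUENCE]
```

For a go-live of 2024-03-04, the announcement lands on 2024-02-19 and the deprecation step on 2024-04-29.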
---
### Org Change (reorg, new leader, team splits/merges)
**Timeline:** 3-6 months for full stabilization
**Hardest phase:** Desire (people fear for their roles and relationships)
**Critical reinforcement:** Consistent behavior from new leadership
**Communication sequence:**
1. Day 0: Announce the change with the "why" — in person or synchronous video
2. Day 1: 1:1s with most affected team members by their manager
3. Week 1: FAQ published with honest answers to the 10 most common concerns
4. Week 2-4: New structure is operating (don't delay implementation)
5. Month 2: First retrospective — what's working, what needs adjustment
6. Month 3-6: Regular check-ins on team health and morale
**What to say when a leader is leaving or being replaced:**
Be honest about what you can share. Never: "We can't share the reasons." Always: either a truthful explanation or "we're not able to share the specifics, but I can tell you [what this means for you]."
---
### Strategy Pivot (new direction, killed products)
**Timeline:** 3-12 months for full alignment
**Hardest phase:** Awareness (people don't believe the pivot is real)
**Critical reinforcement:** Resource reallocation that visibly proves the pivot is happening
**Communication sequence:**
1. Internal first, always. Employees should never hear about a pivot from a press release.
2. All-hands with full context: what changed in the market, what you're doing, what it means for teams
3. Each team leader runs a "what does this mean for us?" conversation with their team
4. Resource reallocation announced within 2 weeks (if the money doesn't move, people won't believe the pivot)
5. First milestone of the new direction celebrated publicly
**What kills pivots:** Announcing a new direction while still funding the old one at the same level.
---
### Culture Change (values refresh, behavior expectations)
**Timeline:** 12-24 months for genuine behavior change
**Hardest phase:** Reinforcement (behavior doesn't change just because values were announced)
**Critical reinforcement:** Visible decisions that reflect the new values
**Communication sequence:**
1. Build with input: involve a representative sample of the company in defining the change
2. Announce with story: "Here's what we observed, here's what we're changing and why"
3. Behavior anchors: for each culture change, state the specific behavior in observable terms
4. Leader behavior: leadership team must visibly model the new behavior first
5. Performance integration: new expected behaviors appear in reviews within one cycle
6. Celebrate the right behaviors: when someone exemplifies the new culture, name it publicly
---
## Resistance Patterns
Resistance is information, not defiance. Diagnose before responding.
| Resistance pattern | What it signals | Response |
|-------------------|-----------------|---------|
| "This won't work" | Awareness gap or credibility gap | Explain the evidence base for the change |
| "Why now?" | Awareness gap | Explain urgency — what happens if we don't change |
| "I wasn't consulted" | Desire gap | Acknowledge the gap; involve them in the "how" now |
| "I don't have time for this" | Ability gap | Reduce their load or push the timeline |
| "We tried this before" | Trust gap | Acknowledge what's different this time. Be specific. |
| Silent non-compliance | Could be any gap | 1:1 conversation to diagnose |
**The worst response to resistance:** Dismissing it. "Some people are resistant to change" as if resistance is a personality flaw rather than a signal.
---
## Change Fatigue
When organizations change too fast, people stop believing any change will stick.
### Signals
- Eye-rolls during change announcements ("here we go again")
- Low attendance at change-related sessions
- Fast compliance on paper, slow adoption in practice
- "Last month we were doing X, now we're doing Y" comments
### Prevention
- **Finish what you start.** Don't announce a new change while the last one is still being absorbed.
- **Space changes.** One significant change at a time. Give 2-3 months of stability between major changes.
- **Announce what's NOT changing.** People in change-fatigue need to know what's stable.
- **Show results.** Publish what the previous change achieved before launching the next.
### When you're already in change fatigue
- Pause non-critical changes
- Run a "change inventory": how many changes are in progress simultaneously?
- Prioritize ruthlessly: which changes are essential now? Which can wait?
- Communicate stability: "Here's what is NOT changing this quarter"
---
## Key Questions for Change Management
- "Who are the most skeptical people about this change? Have we talked to them directly?"
- "Do people understand why we're doing this, or just what we're doing?"
- "Have we given people time to practice before we measure performance on the new way?"
- "Is the old way still available? If so, people will use it."
- "Are leaders modeling the new behavior themselves?"
- "How many changes are we running simultaneously right now?"
## Red Flags
- Change announced on Friday afternoon (people stew over the weekend)
- "This is final, questions are not welcome" framing
- No published FAQ or way to ask questions safely
- Old system/process still running 6 weeks after "go-live"
- Leaders exempted from the change they're asking everyone else to make
- No measurement of adoption — assuming go-live = success
## Detailed References
- `references/change-playbook.md` — ADKAR deep dive, resistance counter-strategies, communication templates, change fatigue management


@@ -0,0 +1,308 @@
# Change Management Playbook
Deep reference for rolling out organizational changes effectively.
---
## 1. ADKAR Deep Dive with Startup Examples
### Awareness: The "Why" that actually lands
Most change communications fail at awareness because they confuse informing with explaining.
**Informing:** "We're moving from Jira to Linear next month."
**Explaining:** "Our engineering team loses ~4 hours per week to Jira configuration, search latency, and reporting setup. At our current team size, that's 60+ hours per month. Linear's benchmarks from teams our size show a 40% reduction in that overhead. That's why we're switching — and here's the timeline."
The explanation activates desire. The announcement just creates work.
**Real example: Tool migration**
> "We tried Asana, we tried Notion tasks, we tried spreadsheets. None of them stuck. After talking to 8 engineering leads at similar companies, the pattern was clear: teams that use Linear stick with it. We're going all-in. Here's why it will be different this time: [specific reasons]."
**Real example: Reorg**
> "The current structure has our customer success team reporting to Sales, which creates a conflict: Sales is measured on new logo count, CS is measured on retention. We've seen this play out in three recent customer losses where CS needed to raise concerns but felt the pressure to stay quiet. We're changing the reporting structure so CS reports directly to me. This is about removing a structural conflict, not about performance."
---
### Desire: Addressing the "What's in it for me?"
Every stakeholder group needs a different answer.
**Individual contributor:**
- "Will my job change significantly?"
- "Will this make my day easier or harder?"
- "Is my role at risk?"
**Manager:**
- "What new responsibilities do I take on?"
- "How do I explain this to my team?"
- "What happens if someone on my team doesn't adapt?"
**Senior leader:**
- "How does this change our strategic posture?"
- "What resources are reallocated and to what?"
- "How does this affect my relationships with other senior leaders?"
**Resistance scenario: Senior leader whose team is most affected**
> They're supportive in the room, silent or undermining outside it.
> Fix: Give them a role in the change. Make them a named co-leader of the implementation. Invested people don't undermine.
---
### Knowledge: The documentation that actually gets used
The reason most change documentation fails: it's written for the decision-maker, not the user.
**Documentation that gets used:**
- Short (< 2 pages for most changes)
- Organized by role: "If you're in Sales, here's what changes for you"
- Answers "what do I do when X happens?" with specific answers
- Has a clear owner: "Questions? Ask [person] in #channel"
**Documentation that doesn't get used:**
- Long rationale sections the user doesn't need
- "See the full policy document for details"
- No named point of contact
- Buried in email threads
---
### Ability: The gap between knowing and doing
Signs of a knowledge gap vs. an ability gap:
| Symptom | Knowledge gap | Ability gap |
|---------|-------------|------------|
| People don't know what to do | ✅ | |
| People know what to do but don't do it | | ✅ |
| People do it wrong consistently | Could be either | |
| People revert under pressure | | ✅ |
| Training scores high, behavior unchanged | | ✅ |
**Ability gaps are fixed by:**
1. Practice time (before being measured)
2. Reduced cognitive load during transition
3. Peer support (not just manager support)
4. Feedback loops that are fast and low-stakes
**What kills ability development:**
- Measuring performance on the new way in week 1
- Adding new work simultaneously with the change
- Making it embarrassing to ask for help
---
### Reinforcement: The phase everyone skips
Go-live is not success. Go-live is the beginning of adoption.
**Reinforcement calendar (template):**
| Week | Action |
|------|--------|
| Week 1 (go-live) | High-visibility support. Leadership visible. Point person responsive. |
| Week 2 | First adoption check: who's using it? Who isn't? Targeted help to laggards. |
| Week 4 | Celebrate early adopters publicly. Share a win story. |
| Week 6 | Adoption metric reported to leadership. Decommission old way (if applicable). |
| Week 8 | Full adoption expected. Non-adoption now a performance conversation. |
| Month 3 | Retrospective: What's working? What needs adjustment? |
---
## 2. Resistance Patterns and Counter-Strategies
### The Vocal Skeptic
**Who they are:** Asks hard questions in all-hands. Other people follow their lead.
**What they need:** To feel heard and to understand the logic.
**Strategy:** Talk to them before the all-hands. Not to persuade them — to hear their concerns and address what's valid. When they feel respected, they often become your best change advocates.
**Script:** "I know you have concerns about this change. I want to understand them before we go broader with the announcement. What's your biggest worry?"
---
### The Silent Non-Complier
**Who they are:** Agrees in meetings, continues the old behavior outside.
**What they need:** To understand that non-compliance is visible and has consequences.
**Strategy:** Direct 1:1 conversation. Name the behavior. Ask what's in the way. Give them a clear path.
**Script:** "I've noticed you're still using [old way] two weeks after we launched [new way]. I want to understand what's in the way for you — is it a knowledge issue, a time issue, or something else?"
---
### The Grieving Top Performer
**Who they are:** Was excellent under the old system. The change makes their skills less relevant.
**What they need:** Recognition of their past contribution and a clear path forward.
**Strategy:** Name the loss explicitly. "I know you built your expertise on [old approach] and this change asks you to develop a new one. That's a real transition." Then create a specific development plan.
**What not to do:** Pretend the change doesn't affect them disproportionately.
---
### The Fearful Middle Manager
**Who they are:** Middle managers whose authority or role scope is reduced by the change.
**What they need:** A clear picture of their new role and why it's still valuable.
**Strategy:** Individual conversation before the announcement. Walk them through what changes, what stays the same, and what their contribution looks like in the new world.
---
### The "We've Been Here Before" Cynics
**Who they are:** Long-tenured employees who've seen multiple failed change initiatives.
**What they need:** Evidence that this time is different.
**Strategy:** Acknowledge the history. "I know we've announced changes that didn't stick. Here's specifically what's different this time: [specific differences]." Then prove it fast — show momentum in the first 30 days.
---
## 3. Communication Plan Template per Change Type
### Template: Tool Migration
```
COMMUNICATION PLAN — [Tool Name] Migration
AUDIENCE: All-hands / [specific team]
DECISION OWNER: [Name]
GO-LIVE DATE: [Date]
POINT OF CONTACT: [Name] in [channel]
COMMUNICATION TIMELINE:
Week -4: Decision finalized (internal only)
Week -3: Training materials ready
Week -2: All-hands announcement (why + timeline + support plan)
Week -1: Training sessions (2 sessions, different times)
Week 0: Go-live. Point person in Slack. Old system still accessible.
Week 2: First adoption check. Targeted help to non-adopters.
Week 4: Old system access restricted.
Week 8: Old system fully decommissioned.
KEY MESSAGES:
- Why we're switching: [honest 2-sentence reason]
- What changes for you: [role-specific, max 3 bullets]
- What doesn't change: [this matters for change fatigue]
- How to get help: [channel, person, office hours]
- Timeline: [specific dates]
FAQ:
Q: Is the old system going away completely?
A: [Honest answer with date]
Q: What if I have data in the old system?
A: [Migration plan or acknowledgment]
Q: What if I'm not proficient by go-live?
A: [Realistic expectation-setting]
```
### Template: Reorg Announcement
```
REORG COMMUNICATION PLAN
ANNOUNCEMENT DATE: [Date]
EFFECTIVE DATE: [Date]
FORMAT: Live (synchronous), all affected employees
PRE-ANNOUNCEMENT (1 week before):
- 1:1 with every affected leader
- HR briefed and ready for questions
- FAQ prepared
ANNOUNCEMENT FORMAT:
1. Context: Why this change? (2-3 minutes)
2. What's changing: New structure, new reporting lines (3-4 minutes)
3. What's NOT changing: Roles, comp, team members (2 minutes)
4. Timeline: When does the new structure take effect? (1 minute)
5. Q&A: Open, no time limit (at least 15 minutes)
POST-ANNOUNCEMENT (week 1):
- Each manager runs team meeting to answer team-specific questions
- HR available for private conversations
- FAQ published to all
POST-ANNOUNCEMENT (week 2-4):
- New structure is operational
- Transition check-in: what questions emerged that weren't anticipated?
THINGS NOT TO SAY:
- "We can't share why [person] is leaving" (if they are)
- "This affects everyone equally" (it doesn't)
- "No one's job is at risk" (unless this is 100% certain)
```
---
## 4. The Change Fatigue Problem
### How organizations develop change fatigue
**Phase 1 — Excitement (first 1-2 changes):** People engage, try the new way, hope it sticks.
**Phase 2 — Skepticism (3-5 changes):** People comply but hedge. "Let's see if this one lasts."
**Phase 3 — Detachment (6+ changes without completion):** People stop investing in changes. Compliance is surface-level. New announcements get eye-rolls.
**Phase 4 — Cynicism (entrenched fatigue):** People actively resist changes. "We've been here before." High performers leave because they don't want to work in a chaotic environment.
### The change inventory audit
**Run this before announcing any new change:**
| Change | Status | Started | Expected complete |
|--------|--------|---------|-----------------|
| [Change 1] | In progress / Complete / Stalled | | |
| [Change 2] | | | |
| [Change 3] | | | |
**Rules:**
- If 2 significant changes are already in progress, don't start a third
- If any change is stalled, diagnose it before starting something new
- Define "complete" for every change in progress
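A minimal sketch of these rules as a pre-announcement check (the function and field names are assumptions for illustration, not part of the playbook):

```python
def can_start_new_change(inventory: list[dict]) -> tuple[bool, str]:
    """Apply the change-inventory rules before announcing a new change.

    Each row: {"name": str, "status": "in progress" | "complete" | "stalled"}.
    """
    stalled = [c["name"] for c in inventory if c["status"] == "stalled"]
    if stalled:
        return False, "diagnose stalled change(s) first: " + ", ".join(stalled)
    in_progress = [c["name"] for c in inventory if c["status"] == "in progress"]
    if len(in_progress) >= 2:
        return False, f"{len(in_progress)} changes already in progress"
    return True, "ok to start"
```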
### Recovery from change fatigue
1. **Declare a change moratorium.** "We're not starting anything new for 60 days. We're finishing what we started."
2. **Complete visible wins.** Ship the changes that are 80% done. Demonstrate follow-through.
3. **Communicate stability.** "Here's what is NOT changing this year."
4. **Slow down the next announcement.** More preparation, more consultation, clearer "this time is different" evidence.
---
## 5. Measuring Adoption vs. Compliance
Most change leaders measure go-live, not adoption. These are different things.
### Adoption metrics by change type
**Tool migration:**
- % of team actively using the new tool (not just logged in)
- % of relevant workflows completed in new tool vs. old tool
- Support ticket volume in weeks 1-4 (high = knowledge gap; dropping = adoption)
**Process change:**
- % of relevant transactions following new process
- Error rates in new process vs. old process (should converge over time)
- Time-to-complete for new process (should improve by week 4)
**Org change:**
- Decision cycle time in new structure (should improve by month 2)
- Escalation patterns (fewer cross-boundary escalations = alignment improving)
- Employee sentiment (survey at months 1, 3, 6)
**Culture change:**
- Values referenced in 1:1 conversations (manager self-report)
- Values-linked recognition events per month
- Culture survey scores in relevant dimensions (quarterly)
### The compliance trap
Measuring compliance: "Did they use the new system? Yes/No."
Measuring adoption: "Did they use the new system because it's better, or because they had to?"
Compliance is unstable. It reverts when enforcement loosens. Adoption is self-sustaining.
**Adoption diagnostic:** Ask a random sample: "Why do you use [new way] instead of [old way]?"
- "Because I have to" = compliance
- "Because it's faster/easier/better" = adoption
Only adoption makes the change permanent.
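A rough sketch of tallying that diagnostic over a sample of answers. The keyword buckets are placeholders; real classification of free-text answers is a human judgment call:

```python
def classify_answer(answer: str) -> str:
    """Bucket a free-text answer as compliance, adoption, or unclear."""
    a = answer.lower()
    if any(marker in a for marker in ("have to", "required", "told to", "no choice")):
        return "compliance"
    if any(marker in a for marker in ("faster", "easier", "better")):
        return "adoption"
    return "unclear"


def adoption_rate(answers: list[str]) -> float:
    """Share of sampled answers that indicate genuine adoption."""
    if not answers:
        return 0.0
    return sum(classify_answer(a) == "adoption" for a in answers) / len(answers)
```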


@@ -0,0 +1,179 @@
---
name: chief-of-staff
description: "C-suite orchestration layer. Routes founder questions to the right advisor role(s), triggers multi-role board meetings for complex decisions, synthesizes outputs, and tracks decisions. Every C-suite interaction starts here. Loads company context automatically."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: orchestration
updated: 2026-03-05
frameworks: routing-matrix, synthesis-framework, decision-log, board-protocol
---
# Chief of Staff
The orchestration layer between founder and C-suite. Reads the question, routes to the right role(s), coordinates board meetings, and delivers synthesized output. Loads company context for every interaction.
## Keywords
chief of staff, orchestrator, routing, c-suite coordinator, board meeting, multi-agent, advisor coordination, decision log, synthesis
---
## Session Protocol (Every Interaction)
1. Load company context via context-engine skill
2. Score decision complexity
3. Route to role(s) or trigger board meeting
4. Synthesize output
5. Log decision if reached
---
## Invocation Syntax
```
[INVOKE:role|question]
```
Examples:
```
[INVOKE:cfo|What's the right runway target given our growth rate?]
[INVOKE:board|Should we raise a bridge or cut to profitability?]
```
### Loop Prevention Rules (CRITICAL)
1. **Chief of Staff cannot invoke itself.**
2. **Maximum depth: 2.** Chief of Staff → Role → stop.
3. **Circular blocking.** A→B→A is blocked. Log it.
4. **Board = depth 1.** Roles at board meeting do not invoke each other.
If loop detected: return to founder with "The advisors are deadlocked. Here's where they disagree: [summary]."
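These rules reduce to a small guard over the invocation stack. A minimal sketch (the names are illustrative, not a prescribed implementation):

```python
class LoopError(RuntimeError):
    """Raised when an invocation would violate the loop-prevention rules."""


MAX_DEPTH = 2  # Chief of Staff -> Role -> stop


def check_invocation(stack: list[str], target: str) -> None:
    """Block self-invocation, A->B->A cycles, and depth beyond 2."""
    if target in stack:  # covers self-invocation and circular chains
        raise LoopError("circular invocation: " + " -> ".join(stack + [target]))
    if len(stack) >= MAX_DEPTH:
        raise LoopError(f"max depth {MAX_DEPTH} reached; {target} not invoked")
```

`check_invocation(["chief-of-staff"], "cfo")` passes; any invocation from depth 2, or one that re-enters a role already on the stack, raises.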
---
## Decision Complexity Scoring
| Score | Signal | Action |
|-------|--------|--------|
| 1-2 | Single domain, clear answer | 1 role |
| 3 | 2 domains intersect | 2 roles, synthesize |
| 4-5 | 3+ domains, major tradeoffs, irreversible | Board meeting |
**+1 for each:** affects 2+ functions, irreversible, expected disagreement between roles, direct team impact, compliance dimension.
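As a sketch, the table and the +1 modifiers combine into one scoring helper (function names are illustrative):

```python
def complexity_score(base: int, **modifiers: bool) -> int:
    """Add +1 per true modifier (multi_function, irreversible, disagreement,
    team_impact, compliance), capped at 5."""
    return min(5, base + sum(bool(v) for v in modifiers.values()))


def action_for(score: int) -> str:
    """Map a complexity score to the routing action from the table."""
    if score <= 2:
        return "1 role"
    if score == 3:
        return "2 roles, synthesize"
    return "board meeting"
```

For example, a base-3 question that is also irreversible and affects the team scores 5 and goes to a board meeting.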
---
## Routing Matrix (Summary)
Full rules in `references/routing-matrix.md`.
| Topic | Primary | Secondary |
|-------|---------|-----------|
| Fundraising, burn, financial model | CFO | CEO |
| Hiring, firing, culture, performance | CHRO | COO |
| Product roadmap, prioritization | CPO | CTO |
| Architecture, tech debt | CTO | CPO |
| Revenue, sales, GTM, pricing | CRO | CFO |
| Process, OKRs, execution | COO | CFO |
| Security, compliance, risk | CISO | COO |
| Company direction, investor relations | CEO | Board |
| Market strategy, positioning | CMO | CRO |
| M&A, pivots | CEO | Board |
---
## Board Meeting Protocol
**Trigger:** Score ≥ 4, or multi-function irreversible decision.
```
BOARD MEETING: [Topic]
Attendees: [Roles]
Agenda: [2-3 specific questions]
[INVOKE:role1|agenda question]
[INVOKE:role2|agenda question]
[INVOKE:role3|agenda question]
[Chief of Staff synthesis]
```
**Rules:** Max 5 roles. Each role one turn, no back-and-forth. Chief of Staff synthesizes. Conflicts surfaced, not resolved — founder decides.
---
## Synthesis (Quick Reference)
Full framework in `references/synthesis-framework.md`.
1. **Extract themes** — what 2+ roles agree on independently
2. **Surface conflicts** — name disagreements explicitly; don't smooth them over
3. **Action items** — specific, owned, time-bound (max 5)
4. **One decision point** — the single thing needing founder judgment
**Output format:**
```
## What We Agree On
[2-3 consensus themes]
## The Disagreement
[Named conflict + each side's reasoning + what it's really about]
## Recommended Actions
1. [Action] — [Owner] — [Timeline]
...
## Your Decision Point
[One question. Two options with trade-offs. No recommendation — just clarity.]
```
---
## Decision Log
Log decisions to `~/.claude/decision-log.md`.
```
## Decision: [Name]
Date: [YYYY-MM-DD]
Question: [Original question]
Decided: [What was decided]
Owner: [Who executes]
Review: [When to check back]
```
At session start: if a review date has passed, flag it: *"You decided [X] on [date]. Worth a check-in?"*
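A minimal stdlib sketch of writing and scanning that log. The entry format follows the template above; the helper names are assumptions:

```python
import datetime


def format_entry(name, question, decided, owner, review):
    """Render one decision in the log format."""
    return (f"## Decision: {name}\n"
            f"Date: {datetime.date.today().isoformat()}\n"
            f"Question: {question}\n"
            f"Decided: {decided}\n"
            f"Owner: {owner}\n"
            f"Review: {review}\n")


def due_reviews(log_text, today=None):
    """Names of logged decisions whose Review date has passed."""
    today = today or datetime.date.today()
    due, current = [], None
    for line in log_text.splitlines():
        if line.startswith("## Decision: "):
            current = line.removeprefix("## Decision: ")
        elif line.startswith("Review: ") and current:
            try:
                review = datetime.date.fromisoformat(
                    line.removeprefix("Review: ").strip())
            except ValueError:
                continue  # skip unfilled placeholders like [YYYY-MM-DD]
            if review <= today:
                due.append(current)
    return due
```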
---
## Quality Standards
Before delivering ANY output to the founder:
- [ ] Follows User Communication Standard (see `agent-protocol/SKILL.md`)
- [ ] Bottom line is first — no preamble, no process narration
- [ ] Company context loaded (not generic advice)
- [ ] Every finding has WHAT + WHY + HOW
- [ ] Actions have owners and deadlines (no "we should consider")
- [ ] Decisions framed as options with trade-offs and recommendation
- [ ] Conflicts named, not smoothed
- [ ] Risks are concrete (if X → Y happens, costs $Z)
- [ ] No loops occurred
- [ ] Max 5 bullets per section — overflow to reference
---
## Ecosystem Awareness
The Chief of Staff routes to **28 skills total**:
- **10 C-suite roles** — CEO, CTO, COO, CPO, CMO, CFO, CRO, CISO, CHRO, Executive Mentor
- **6 orchestration skills** — chief-of-staff (this skill), cs-onboard, context-engine, board-meeting, decision-logger, agent-protocol
- **6 cross-cutting skills** — board-deck-builder, scenario-war-room, competitive-intel, org-health-diagnostic, ma-playbook, intl-expansion
- **6 culture & collaboration skills** — culture-architect, company-os, founder-coach, strategic-alignment, change-management, internal-narrative
See `references/routing-matrix.md` for complete trigger mapping.
## References
- `references/routing-matrix.md` — per-topic routing rules, complementary skill triggers, when to trigger board
- `references/synthesis-framework.md` — full synthesis process, conflict types, output format


@@ -0,0 +1,212 @@
# Routing Matrix
Detailed routing rules for the Chief of Staff. When a founder asks a question, find the best match in this matrix, then apply the scoring rules to determine single-role, multi-role, or board meeting.
---
## Routing by Domain
### Finance & Capital
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| How much runway do we have? | CFO | — | 1 |
| Should we raise now or later? | CFO | CEO | 3 |
| What's our burn multiple? | CFO | COO | 2 |
| Should we raise a bridge or cut costs? | CFO | CEO, COO | 5 |
| What's the right pricing model? | CFO | CRO, CPO | 4 |
| Should we hire or extend runway? | CFO | CHRO, COO | 4 |
| What terms should we accept for this round? | CFO | CEO | 3 |
| How do we model the next 18 months? | CFO | COO | 2 |
### People & Culture
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| Should I let this person go? | CHRO | COO | 2 |
| How do I structure comp for the team? | CHRO | CFO | 3 |
| We have a culture problem — what do we do? | CHRO | CEO | 3 |
| A leader on my team isn't working — now what? | CHRO | COO | 2 |
| How do I hire fast without breaking culture? | CHRO | COO | 3 |
| Two co-founders are in conflict | CHRO | CEO | 4 |
| How do we retain our best people? | CHRO | CFO | 2 |
| What does a good performance management process look like? | CHRO | COO | 2 |
### Product
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| What should we build next? | CPO | CTO | 2 |
| Should we kill this feature? | CPO | CTO, CRO | 3 |
| How do we prioritize the roadmap? | CPO | CTO, COO | 3 |
| Are we pre-PMF or post-PMF? | CPO | CRO, CEO | 4 |
| Should we build vs buy? | CPO | CTO, CFO | 4 |
| How do we handle technical debt vs new features? | CTO | CPO | 3 |
| What's our product strategy for next year? | CPO | CEO, CRO | 4 |
### Technology & Engineering
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| What architecture should we use? | CTO | CPO | 1 |
| How do we scale the system to 10x traffic? | CTO | COO | 2 |
| We have a security incident — what now? | CISO | CTO, COO | 5 |
| Should we migrate to microservices? | CTO | COO, CFO | 4 |
| How do I grow the engineering team? | CTO | CHRO, CFO | 3 |
| Our engineering velocity is dropping — why? | CTO | COO | 2 |
| What's our DevOps maturity? | CTO | COO | 1 |
| How do we handle a compliance audit on our tech? | CISO | CTO | 3 |
### Sales & Revenue
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| Why aren't we closing deals? | CRO | CPO | 2 |
| How do we build a sales process from scratch? | CRO | COO | 2 |
| What's the right GTM for this market? | CRO | CMO, CEO | 4 |
| Our churn is too high — root cause? | CRO | CPO, CHRO | 3 |
| Should we go enterprise or stay SMB? | CRO | CPO, CFO | 4 |
| How do we expand into a new market? | CRO | CMO, CEO, CFO | 5 |
| What's our ideal customer profile? | CRO | CPO, CMO | 3 |
| Pipeline is dry — what do we do? | CRO | CMO | 2 |
### Operations & Execution
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| Why do things keep breaking? | COO | CTO | 2 |
| How do we set up OKRs? | COO | CEO | 2 |
| Our meetings are useless — fix it | COO | — | 1 |
| How do we scale operations without hiring? | COO | CTO, CFO | 3 |
| There's a recurring bottleneck — how to fix it? | COO | CTO | 2 |
| We need a cross-team process for X | COO | Relevant dept head | 2 |
| How do we improve decision speed? | COO | CEO | 3 |
### Marketing & Brand
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| How do we position against Competitor X? | CMO | CRO | 2 |
| What channels should we invest in? | CMO | CRO, CFO | 3 |
| Our brand isn't resonating — why? | CMO | CPO, CRO | 3 |
| How do we build a content strategy? | CMO | CRO | 2 |
| What's our marketing budget allocation? | CMO | CFO, CRO | 3 |
### Security & Compliance
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| How do we pass an ISO 27001 audit? | CISO | COO | 2 |
| We had a data breach — what now? | CISO | CTO, CEO, COO | 5 |
| How do we handle GDPR compliance? | CISO | CTO | 2 |
| What's our security posture? | CISO | CTO | 1 |
| A regulator is asking questions | CISO | CEO, COO | 4 |
### Strategic Direction
| Question type | Primary | Secondary | Score |
|--------------|---------|-----------|-------|
| Should we pivot? | CEO | Board meeting | 5 |
| Are we building the right company? | CEO | Board meeting | 5 |
| How do we handle an acquisition offer? | CEO | CFO, Board meeting | 5 |
| What's the 3-year strategy? | CEO | All C-suite, board | 5 |
| Should we enter a new vertical? | CEO | CRO, CFO, CPO | 4 |
---
## When to Invoke Multiple Roles
Invoke 2 roles when:
- The question sits at the boundary of two domains
- One role's answer creates a constraint the other needs to know about
- The founder explicitly wants two perspectives
Invoke 3+ roles (board) when:
- The question involves irreversible resource commitment
- There's a known tension between functions (e.g., product vs revenue, speed vs quality)
- The answer will change how multiple teams operate
- It's a company-direction question, not an operational one
---
## When NOT to Invoke Multiple Roles
Don't multi-invoke when:
- The answer is technical and one role clearly owns it
- The founder just needs a framework, not a decision
- Invoking more roles would add noise without adding signal
- Time is short and a directional answer beats a comprehensive one
---
## Escalation Criteria → Board Meeting
Automatically escalate to board meeting when any of these apply:
1. **Irreversibility:** The decision is hard or impossible to reverse (layoffs, pivots, major contracts, fundraising terms)
2. **Cross-functional resource impact:** The decision changes budget, headcount, or priorities for 2+ teams
3. **Founder blind spot risk:** The topic is in an area where the founder's archetype creates a known gap (e.g., technical founder on GTM)
4. **Disagreement expected:** The domains involved are known to have competing incentives (CFO vs CRO on pricing, CTO vs CPO on tech debt)
5. **Explicit request:** Founder says "what does the team think" or "I want multiple perspectives"
6. **Score ≥ 4**
---
## Role Registry
| Role | File | Domain |
|------|------|--------|
| CEO | ceo-advisor | Strategy, culture, investor relations |
| CFO | cfo-advisor | Finance, capital, unit economics |
| COO | coo-advisor | Operations, OKRs, scaling |
| CTO | cto-advisor | Engineering, architecture, tech strategy |
| CPO | cpo-advisor | Product, roadmap, UX |
| CRO | cro-advisor | Revenue, sales, GTM |
| CMO | cmo-advisor | Marketing, brand, positioning |
| CHRO | chro-advisor | People, culture, hiring |
| CISO | ciso-advisor | Security, compliance, risk |
**If a role file doesn't exist:** Note the gap. Answer from first principles with domain expertise. Log that the role is missing.
---
## Complementary Skills Registry
These skills are invoked for specific cross-cutting needs, not for general domain questions.
### Orchestration & Infrastructure
| Skill | Trigger | File |
|-------|---------|------|
| C-Suite Onboard | `/cs:setup`, first-time setup, "tell me about your company" | cs-onboard |
| Context Engine | Auto-loaded; staleness check | context-engine |
| Board Meeting | `/cs:board`, multi-role decisions, score ≥ 4 | board-meeting |
| Decision Logger | After board meetings, `/cs:decisions`, `/cs:review` | decision-logger |
| Agent Protocol | Inter-role invocations, loop detection | agent-protocol |
### Cross-Cutting Capabilities
| Skill | Trigger | File |
|-------|---------|------|
| Board Deck Builder | "board deck", "investor update", "board presentation" | board-deck-builder |
| Scenario War Room | "what if", multi-variable scenarios, stress test across functions | scenario-war-room |
| Competitive Intelligence | "competitor", "competitive analysis", "battlecard", "who's winning" | competitive-intel |
| Org Health Diagnostic | "how healthy are we", "org health", "company health check" | org-health-diagnostic |
| M&A Playbook | "acquisition", "M&A", "due diligence", "being acquired" | ma-playbook |
| International Expansion | "expand to", "new market", "international", "localization" | intl-expansion |
### Culture & Collaboration
| Skill | Trigger | File |
|-------|---------|------|
| Culture Architect | "values", "culture", "mission", "vision", culture problems | culture-architect |
| Company OS | "operating system", "EOS", "Scaling Up", "meeting cadence", "how do we run" | company-os |
| Founder Coach | "delegation", "blind spots", "founder growth", "leadership style", burnout | founder-coach |
| Strategic Alignment | "alignment", "silos", "teams not aligned", "strategy cascade" | strategic-alignment |
| Change Management | "rolling out", "reorg", "change", "new process", "transition" | change-management |
| Internal Narrative | "all-hands", "internal comms", "how do we tell", "narrative" | internal-narrative |
### Routing Priority
1. Check if it matches a **complementary skill trigger** → route there
2. Check if it matches a **single role domain** → route to that role
3. Check if it spans **multiple role domains** (score ≥ 3) → invoke multiple roles
4. Check if it meets **escalation criteria** (score ≥ 4 or irreversible) → trigger board meeting
5. If unclear → ask one clarifying question, then route
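The routing order above can be sketched in code. This is an illustrative sketch only — the trigger and domain keyword maps below are small hypothetical excerpts, not the full registries, and the real trigger lists live in the skill files themselves.

```python
# Hypothetical excerpts of the registries above — not exhaustive.
SKILL_TRIGGERS = {
    "board-deck-builder": ["board deck", "investor update"],
    "competitive-intel": ["competitor", "battlecard"],
    "founder-coach": ["delegation", "blind spots", "burnout"],
}

ROLE_DOMAINS = {
    "cfo-advisor": ["burn", "runway", "unit economics"],
    "cro-advisor": ["pipeline", "quota", "sales"],
    "chro-advisor": ["hiring", "attrition", "compensation"],
}

def route(question: str, score: int) -> str:
    q = question.lower()
    for skill, triggers in SKILL_TRIGGERS.items():      # 1. skill triggers first
        if any(t in q for t in triggers):
            return skill
    matched = [role for role, kws in ROLE_DOMAINS.items()
               if any(k in q for k in kws)]
    if len(matched) == 1:                               # 2. single role domain
        return matched[0]
    if len(matched) > 1 and score >= 3:                 # 3. spans multiple domains
        return "multi-role: " + ", ".join(matched)
    if score >= 4:                                      # 4. escalation criteria
        return "board-meeting"
    return "ask-clarifying-question"                    # 5. unclear

print(route("Draft the investor update", 1))    # board-deck-builder
print(route("What's our burn telling us?", 1))  # cfo-advisor
print(route("Do we pivot the company?", 5))     # board-meeting
```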

# Synthesis Framework
How to turn multiple role outputs into a single, useful response for the founder. Synthesis is the highest-value function of the Chief of Staff — it's not about summarizing, it's about integrating.
---
## The Problem with Multi-Role Output
Without synthesis, multiple advisors produce noise:
- Overlapping advice
- Contradictions without resolution
- Action items from every role that compete for priority
- Founder left to figure out what to do with it all
Synthesis turns this into signal: one clear picture, explicit conflicts named, prioritized actions, one decision point.
---
## Phase 1: Collect and Read
Before writing anything, read all role responses completely. Look for:
**Consensus signals:**
- Same recommendation from 2+ roles independently
- Same risk identified from different angles
- Same root cause named without coordination
**Conflict signals:**
- One role says X, another says not-X
- Same data interpreted differently
- Competing resource requests (CFO says cut costs, CRO says invest in sales)
- Different time horizons (CTO wants to fix tech debt now, CPO wants to ship features now)
**Gap signals:**
- A critical dimension no role addressed
- A risk nobody flagged
- An assumption baked in that nobody questioned
---
## Phase 2: Extract Themes
A theme is a finding that appears in 2+ role responses, even if framed differently.
**How to identify:**
1. List every distinct point from every role response
2. Group points that are about the same underlying issue
3. Name the group with a clear, plain-language label
4. Note which roles contributed to it
**Example:**
> CFO: "The burn multiple is 3.2 — unsustainable without revenue acceleration."
> CRO: "We need 3 more sales cycles to hit targets, minimum 90 days."
> COO: "Three positions are open that will cost $40K/month when filled."
>
> Theme: **Cash position is tighter than the headline number suggests.** (CFO + CRO + COO)
**Limit to 3 themes.** More than 3 means you're not synthesizing — you're listing.
---
## Phase 3: Surface Conflicts
Name every conflict explicitly. Don't resolve it — present it.
**Conflict types:**
### Resource conflict
Two roles want the same budget, headcount, or time.
> "CFO wants to delay the new hire until Q3. CHRO says the team is already at capacity and another quarter will cause attrition. Both are right from their domain."
### Priority conflict
Two roles disagree on what's most important right now.
> "CTO wants 6 weeks on infrastructure to prevent outages. CPO wants those same engineers on the new feature for the sales pipeline. This isn't a technical question — it's a risk tolerance question."
### Time horizon conflict
Two roles are optimizing for different time frames.
> "CRO is optimizing for this quarter's close rate. CMO is optimizing for brand that compounds over 18 months. Both strategies are valid. They require different budget allocations."
### Assumption conflict
Two roles have incompatible assumptions baked in.
> "CFO's model assumes 15% MoM growth. CRO says realistic growth is 8% given the sales cycle length. The financial model needs to be rebuilt on the CRO's number."
**Present conflicts without picking sides.** The founder decides which trade-off to accept.
---
## Phase 4: Derive Action Items
From the consensus themes and the non-conflicting role outputs, derive concrete actions.
**Action item criteria:**
- Specific (not "improve the process" — "map the QA process and find the bottleneck")
- Owned (assign to a role or person)
- Time-bound (this week / this quarter / before next board)
- Consequence-linked (why does it matter if it slips)
**Good example:**
> **Action:** Build an updated 18-month financial model using CRO's 8% MoM growth assumption.
> **Owner:** CFO
> **By:** End of week
> **Why it matters:** Current fundraising conversations are based on a model that's too optimistic.
**Bad example:**
> Review the financial model with the team.
**Limit to 5 actions.** If there are more, prioritize by impact and flag the rest as backlog.
---
## Phase 5: Identify the Founder Decision Point
Every board meeting ends with one question for the founder. Just one.
**How to find it:**
- It's usually the conflict that can't be resolved without a values choice
- It's the question where both sides have a legitimate case
- It's the thing none of the advisors can decide unilaterally
**Frame it cleanly:**
> "The C-suite is aligned on the actions above, but there's one thing that needs your call: [specific decision]. [Role A] recommends X because [reason]. [Role B] recommends Y because [reason]. This is ultimately a question of [underlying trade-off — growth vs profitability / speed vs stability / short-term vs long-term]."
**Don't present multiple decision points.** Force the synthesis down to one. If there are genuinely two unrelated decisions, separate them into two outputs.
---
## Output Format
```markdown
## [Topic] — C-Suite Synthesis
### What We Agree On
[Theme 1 with 1–2 sentences]
[Theme 2 with 1–2 sentences]
[Theme 3 with 1–2 sentences]
### The Disagreement
[Name the conflict]
[Role A position + reasoning]
[Role B position + reasoning]
[What the conflict is really about]
### Recommended Actions
1. **[Action]** — [Owner] — [Timeline] — [Why it matters]
2. **[Action]** — [Owner] — [Timeline]
3. **[Action]** — [Owner] — [Timeline]
4. **[Action]** — [Owner] — [Timeline]
5. **[Action]** — [Owner] — [Timeline]
### Your Decision Point
[One question for the founder. Two options with their trade-offs. No recommendation — just clarity.]
```
---
## Quality Standards for Synthesis
Before delivering:
**Compression test:** Could a founder read this in 3 minutes and know exactly what to do? If not, cut.
**Honesty test:** Did you name the real conflicts, or smooth them over? Smoothed conflicts come back as surprises.
**Specificity test:** Are the action items specific enough to act on, or are they goals masquerading as actions?
**Decision point test:** Is there one clear thing for the founder to decide, or are you leaving them with a mess?
**Context test:** Would this advice make sense for any company, or is it clearly calibrated to this company's stage, challenges, and founder?
---
## Common Synthesis Failures
**The summary trap:** You summarize each role's output in sequence. This is not synthesis — it's transcription. Synthesis requires cutting.
**The false consensus:** You say "the team agrees" when there's actually a meaningful conflict. Named conflicts are useful. Hidden conflicts are dangerous.
**The advice avalanche:** 15 action items that no one can action. Cut to 5. If everything is priority, nothing is.
**The unresolved conflict dump:** You present the conflict and then leave the founder to figure it out. Your job is to frame the choice cleanly, not to resolve it — but also not to dump it raw.
**The context-free advice:** The synthesis sounds like it came from a textbook, not from someone who knows this company. If you can swap the company name and it still reads the same, it's not synthesized.
---
## When Synthesis Reveals Deadlock
Sometimes roles genuinely can't align and the synthesis produces no clear direction.
**Signs of deadlock:**
- Every theme has a counter-theme
- Every action has a conflict attached
- The "decision point" is actually three decisions
**What to do:**
1. Name the deadlock explicitly: *"The C-suite is genuinely split on this. Here's why."*
2. Present the two paths cleanly with consequences
3. Recommend a time-boxed experiment if possible: *"You don't have to decide between X and Y permanently. Run X for 30 days with a clear metric for success, then reassess."*
4. Flag it as a strategic question that may need external input (advisor, board, market data)
Deadlock is honest. Fake consensus is not.

---
name: chro-advisor
description: "People leadership for scaling companies. Hiring strategy, compensation design, org structure, culture, and retention. Use when building hiring plans, designing comp frameworks, restructuring teams, managing performance, building culture, or when user mentions CHRO, HR, people strategy, talent, headcount, compensation, org design, retention, or performance management."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: chro-leadership
  updated: 2026-03-05
  python-tools: hiring_plan_modeler.py, comp_benchmarker.py
  frameworks: people-strategy, comp-frameworks, org-design
---
# CHRO Advisor
People strategy and operational HR frameworks for business-aligned hiring, compensation, org design, and culture that scales.
## Keywords
CHRO, chief people officer, CPO, HR, human resources, people strategy, hiring plan, headcount planning, talent acquisition, recruiting, compensation, salary bands, equity, org design, organizational design, career ladder, title framework, retention, performance management, culture, engagement, remote work, hybrid, spans of control, succession planning, attrition
## Quick Start
```bash
python scripts/hiring_plan_modeler.py # Build headcount plan with cost projections
python scripts/comp_benchmarker.py # Benchmark salaries and model total comp
```
## Core Responsibilities
### 1. People Strategy & Headcount Planning
Translate business goals → org requirements → headcount plan → budget impact. Every hire needs a business case: what revenue or risk does this role address? See `references/people_strategy.md` for hiring at each growth stage.
### 2. Compensation Design
Market-anchored salary bands + equity strategy + total comp modeling. See `references/comp_frameworks.md` for band construction, equity dilution math, and raise/refresh processes.
### 3. Org Design
Right structure for the stage. Spans of control, when to add management layers, title inflation prevention. See `references/org_design.md` for founder→professional management transitions and reorg playbooks.
### 4. Retention & Performance
Retention starts at hire. Structured onboarding → 30/60/90 plans → regular 1:1s → career pathing → proactive comp reviews. See `references/people_strategy.md` for what actually moves the needle.
**Performance Rating Distribution (calibrated):**
| Rating | Expected % | Action |
|--------|-----------|--------|
| 5 Exceptional | 5–10% | Fast-track, equity refresh |
| 4 Exceeds | 20–25% | Merit increase, stretch role |
| 3 Meets | 55–65% | Market adjust, develop |
| 2 Needs improvement | 8–12% | PIP, 60-day plan |
| 1 Underperforming | 2–5% | Exit or role change |
### 5. Culture & Engagement
Culture is behavior, not values on a wall. Measure eNPS quarterly. Act on results within 30 days or don't ask.
## Key Questions a CHRO Asks
- "Which roles are blocking revenue if unfilled for 30+ days?"
- "What's our regrettable attrition rate? Who left that we wish hadn't?"
- "Are managers our retention asset or our attrition cause?"
- "Can a new hire explain their career path in 12 months?"
- "Where are we paying below P50? Who's a flight risk because of it?"
- "What's the cost of this hire vs. the cost of not hiring?"
## People Metrics
| Category | Metric | Target |
|----------|--------|--------|
| Talent | Time to fill (IC roles) | < 45 days |
| Talent | Offer acceptance rate | > 85% |
| Talent | 90-day voluntary turnover | < 5% |
| Retention | Regrettable attrition (annual) | < 10% |
| Retention | eNPS score | > 30 |
| Performance | Manager effectiveness score | > 3.8/5 |
| Comp | % employees within band | > 90% |
| Comp | Compa-ratio (avg) | 0.95–1.05 |
| Org | Span of control (ICs) | 6–10 |
| Org | Span of control (managers) | 4–7 |
## Red Flags
- Attrition spikes and exit interviews all name the same manager
- Comp bands haven't been refreshed in 18+ months
- No career ladder → top performers leave after 18 months
- Hiring without a written business case or job scorecard
- Performance reviews happen once a year with no mid-year check-in
- Equity refreshes only for executives, not high performers
- Time to fill > 90 days for critical roles
- eNPS below 0 — something is structurally broken
- More than 3 org layers between IC and CEO at < 50 people
## Integration with Other C-Suite Roles
| When... | CHRO works with... | To... |
|---------|-------------------|-------|
| Headcount plan | CFO | Model cost, get budget approval |
| Hiring plan | COO | Align timing with operational capacity |
| Engineering hiring | CTO | Define scorecards, level expectations |
| Revenue team growth | CRO | Quota coverage, ramp time modeling |
| Board reporting | CEO | People KPIs, attrition risk, culture health |
| Comp equity grants | CFO + Board | Dilution modeling, pool refresh |
## Detailed References
- `references/people_strategy.md` — hiring by stage, retention programs, performance management, remote/hybrid
- `references/comp_frameworks.md` — salary bands, equity, total comp modeling, raise/refresh process
- `references/org_design.md` — spans of control, reorgs, title frameworks, career ladders, founder→pro mgmt
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- Key person with no equity refresh approaching cliff → retention risk, act now
- Hiring plan exists but no comp bands → you'll overpay or lose candidates
- Team growing past 30 people with no manager layer → org strain incoming
- No performance review cycle in place → underperformers hide, top performers leave
- Regrettable attrition > 10% → exit interview every departure, find the pattern
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Build a hiring plan" | Headcount plan with roles, timing, cost, and ramp model |
| "Set up comp bands" | Compensation framework with bands, equity, benchmarks |
| "Design our org" | Org chart proposal with spans, layers, and transition plan |
| "We're losing people" | Retention analysis with risk scores and intervention plan |
| "People board section" | Headcount, attrition, hiring velocity, engagement, risks |
## Reasoning Technique: Empathy + Data
Start with the human impact, then validate with metrics. Every people decision must pass both tests: is it fair to the person AND supported by the data?
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`

# Compensation Frameworks Reference
Salary bands, equity design, total comp modeling, comp philosophy, and raise/refresh processes.
---
## Comp Philosophy — The Foundation
Before building bands, define your philosophy. Ambiguity in comp philosophy = pay equity lawsuits and trust erosion.
**The five decisions:**
### 1. What market percentile do you target?
- **P25 (below market):** Only viable with exceptional mission, equity, or growth opportunity. Flight risk is high after 18 months.
- **P50 (market median):** Standard for most Series A–B companies. Competitive without premium.
- **P75 (above market):** Premium talent strategy. Used by high-margin or talent-intensive businesses. Netflix model.
- **P90+:** Top-of-market for specific functions (ML at AI companies, senior engineers at FAANG feeders).
**Common hybrid:** P50 base + above-market equity = total comp at P65–75.
### 2. What's in your total comp package?
Define each component explicitly:
- **Base salary** — cash, market-benchmarked
- **Variable / bonus** — % of base, tied to what criteria
- **Equity** — options vs. RSUs, vesting schedule, refresh cadence
- **Benefits** — health, retirement, PTO policy
- **Learning & development budget**
- **Remote/location allowances**
### 3. Are bands public internally?
Recommended: Yes. Pay transparency reduces equity complaints, builds trust, and forces you to maintain clean bands.
### 4. How often do you refresh bands?
Minimum: annually. High-growth markets: every 6 months (engineering specifically in hot markets).
### 5. How do you handle individual negotiation?
Options:
- **Fixed bands, no negotiation** (Buffer model) — simple, fair, loses some candidates
- **Band range with manager discretion** — most common, requires calibration guardrails
- **Individual negotiation within band** — flexible, creates pay equity drift over time
---
## Salary Bands: Construction
### Step 1: Define levels
Standard IC levels (adapt to company):
| Level | Title example | Scope |
|-------|--------------|-------|
| L1 | Junior / Associate | Execution with guidance |
| L2 | Mid-level | Independent execution |
| L3 | Senior | Leads workstreams, mentors L1-L2 |
| L4 | Staff / Principal | Cross-team technical leadership |
| L5 | Distinguished / Fellow | Company-wide technical direction |
Management track:
| Level | Title | Scope |
|-------|-------|-------|
| M1 | Manager | Team of 4–8 ICs |
| M2 | Senior Manager | Manager of managers or larger team |
| M3 | Director | Function or large org |
| M4 | VP | Business unit, company-wide |
| M5 | SVP / C-Suite | Executive |
### Step 2: Gather market data
**Data sources (by quality):**
1. **Radford / Aon** — Gold standard. Expensive ($10K+/year). Worth it at Series B+.
2. **Levels.fyi** — Excellent for engineering. Free. Self-reported but large sample.
3. **Glassdoor Salary** — Broad coverage. Less precise for startups.
4. **Pave / Carta Total Comp** — VC-backed companies. Good peer benchmarking.
5. **LinkedIn Salary** — Free tier. Reasonable signal for G&A roles.
6. **Offer letter data** — What candidates are bringing from other companies. Real-time signal.
**What to pull:** P25, P50, P75, P90 for each role × level × geography.
### Step 3: Set band structure
**Band width (range within a level):**
- IC bands: 80–120% of midpoint (i.e., ±20% from center)
- Manager bands: 85–115% of midpoint
- Wider bands allow room for differentiation within level; narrower bands reduce pay equity drift
**Band overlap between levels:**
- 10–20% overlap is normal (top of L2 overlaps with bottom of L3)
- > 30% overlap: your levels are too close together
- No overlap: new hires jump too much between levels (compression risk)
**Example engineering band structure (US, Series B company, P50 target):**
| Level | Band Min | Midpoint | Band Max |
|-------|----------|----------|----------|
| L1 Software Engineer | $90K | $105K | $125K |
| L2 Software Engineer | $115K | $135K | $160K |
| L3 Senior SWE | $150K | $175K | $205K |
| L4 Staff SWE | $195K | $225K | $260K |
| M1 Eng Manager | $175K | $205K | $235K |
| M2 Sr Eng Manager | $215K | $250K | $285K |
| M3 Director, Eng | $255K | $300K | $345K |
*Adjust by 15–25% for non-SF/NYC markets. Adjust -40% to -60% for European markets.*
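The band math above can be sketched in a few lines: bands built as ±20% of the midpoint (the IC rule of thumb), plus the overlap check between adjacent levels. The midpoints and ranges used here are the example figures from the table, not benchmarks.

```python
def band(midpoint, width=0.20):
    """(min, max) for a band defined as midpoint ± width."""
    return (round(midpoint * (1 - width)), round(midpoint * (1 + width)))

def overlap_pct(lower, upper):
    """Fraction of the lower band's range that the upper band covers."""
    lo_min, lo_max = lower
    hi_min, hi_max = upper
    return max(0, lo_max - hi_min) / (lo_max - lo_min)

print(band(135_000))  # L2 at ±20% of midpoint: (108000, 162000)

# Overlap between the example L2 and L3 bands from the table:
l2, l3 = (115_000, 160_000), (150_000, 205_000)
print(f"L2/L3 overlap: {overlap_pct(l2, l3):.0%}")  # 22%
```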
### Step 4: Place employees in bands
**Compa-ratio** = Employee salary / Band midpoint
| Compa-ratio | Interpretation |
|------------|---------------|
| < 0.85 | Below range — immediate risk |
| 0.85–0.95 | Developing in role |
| 0.95–1.05 | Fully performing (target zone) |
| 1.05–1.15 | Senior/expert in role |
| > 1.15 | Above range — flag for review |
**Audit report:** Run quarterly. Flag anyone below 0.85 (flight risk) or above 1.15 (overpaid for level, or needs promotion).
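A minimal sketch of that quarterly audit follows. The employee data is invented, and the L3 midpoint is the example figure from the band table above.

```python
BANDS = {"L3": 175_000}  # band midpoints; L3 from the example table

def audit(employees):
    """Flag anyone below 0.85 (flight risk) or above 1.15 (review)."""
    flags = []
    for name, level, salary in employees:
        cr = round(salary / BANDS[level], 2)  # compa-ratio
        if cr < 0.85:
            flags.append((name, cr, "below range — flight risk"))
        elif cr > 1.15:
            flags.append((name, cr, "above range — review for promotion or overpay"))
    return flags

team = [("Ana", "L3", 145_000), ("Ben", "L3", 180_000), ("Casey", "L3", 210_000)]
print(audit(team))  # flags Ana (0.83) and Casey (1.2); Ben is in range
```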
---
## Equity Frameworks for Startups
### Option Basics
**ISO vs NSO:**
- ISO (Incentive Stock Options): For employees. Favorable tax treatment if held 1+ year post-exercise.
- NSO (Non-Qualified Stock Options): For advisors, contractors, sometimes employees. Taxed as ordinary income on exercise.
**Strike price:** Set to 409A valuation at grant. Lower is better for employees. Early employees win on strike price.
**Vesting schedule standards:**
- 4-year vest, 1-year cliff: Standard
- 4-year vest, 6-month cliff: Startup market adapting to faster pace
- 1-year cliff means: nothing until 12 months; monthly or quarterly after
**Post-termination exercise window (PTEW):**
- Standard: 90 days. Often too short for employees who can't afford exercise.
- Better: 1–5 years or until IPO. Use as a talent differentiator.
- Companies extending PTEW: Stripe, Airbnb (pre-IPO), Square, most employee-friendly startups.
### Equity Grant Ranges by Stage and Level
*Expressed as % of fully diluted shares at grant. Ranges vary significantly by market, stage, and funding.*
**Seed stage:**
| Role | Equity % |
|------|----------|
| Co-founder | 20–40% |
| First engineering hire | 0.5–1.5% |
| First non-technical exec hire | 0.25–0.75% |
| IC (L2-L3) | 0.1–0.4% |
| IC (L3-L4) | 0.2–0.6% |
**Series A:**
| Role | Equity % |
|------|----------|
| VP / Head of function | 0.3–0.75% |
| Director | 0.1–0.3% |
| Senior IC (L3) | 0.05–0.15% |
| Mid IC (L2) | 0.02–0.08% |
| Junior IC (L1) | 0.01–0.05% |
**Series B:**
| Role | Equity % |
|------|----------|
| VP / Head of function | 0.1–0.3% |
| Director | 0.05–0.15% |
| Senior IC (L3) | 0.02–0.07% |
| Mid IC (L2) | 0.01–0.03% |
*At Series B+, equity is increasingly expressed in dollar value (grant value = X shares × current 409A). Use Carta or Pulley to model dilution.*
### Equity Refresh Program
**Why it matters:** Employees hired at Series A with 4-year vesting will be fully vested by Series B. No unvested equity = no retention hook.
**When to refresh:**
- After every significant funding round
- Annually for high performers (top 20%)
- After promotion (role-commensurate top-up)
- Counter-offer situations (use carefully — signals you underpaid initially)
**Refresh models:**
1. **Anniversary grant:** Annual cliff-free refresh for all employees above a performance threshold
2. **Evergreen model:** Continuous vesting maintained — refresh annually so employee always has 2–3 years remaining
3. **Event-based:** Refresh tied to milestones (promotion, funding, annual review cycle)
**Dilution awareness:** Every refresh dilutes existing shareholders. Model pool usage quarterly. Replenish option pool before it drops below 10–12% of fully diluted shares.
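The quarterly pool check above is simple arithmetic. A sketch, with invented share counts and the low end of the replenishment threshold as the trigger:

```python
fully_diluted = 10_000_000   # invented: total fully diluted shares
pool_unallocated = 900_000   # invented: options remaining in the pool

pool_pct = pool_unallocated / fully_diluted
print(f"Unallocated pool: {pool_pct:.1%}")  # 9.0%
if pool_pct < 0.10:  # low end of the 10-12% guidance above
    print("Replenish the option pool before the next grant cycle")
```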
---
## Total Comp Modeling
### Components of Total Comp
```
Total Compensation = Base Salary
+ Annual Bonus (target %)
+ Equity Value (annualized grant / vesting period)
+ Benefits (employer-paid premiums, retirement match)
+ Allowances (home office, internet, L&D, commuter)
```
### Annualizing Equity Value
For comparison to cash compensation:
```
Annual equity value = (Grant shares × Current 409A price) / Vesting years
```
Example: 10,000 options at $2 strike, current 409A = $8, 4-year vest
- Grant value at current 409A = 10,000 × $8 = $80,000
- Annual value = $80,000 / 4 = $20,000/year
- If base is $150K, total comp is ~$170K/year
*Note: For recruiting purposes, you can use last preferred share price (VC price) to show upside — but be transparent about the difference between 409A and preferred.*
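The worked example above, as code. Note that the strike price does not enter the annualized value — it matters for the employee's exercise cost and upside, not for the comp comparison shown here.

```python
def annual_equity_value(grant_shares, price_409a, vesting_years=4):
    """Annualized grant value at the current 409A, per the formula above."""
    return grant_shares * price_409a / vesting_years

equity = annual_equity_value(10_000, 8)  # $8 409A; $2 strike not used here
total_comp = 150_000 + equity
print(equity, total_comp)  # 20000.0 170000.0
```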
### Benefits Valuation
Frequently undervalued in offers. Quantify explicitly:
| Benefit | Typical employer cost |
|---------|----------------------|
| Health insurance (employee) | $4K–8K/year |
| Health insurance (family) | $15K–25K/year |
| 401K match (4% of salary) | $5K–10K/year |
| L&D budget ($2K/year) | $2K/year |
| Home office stipend ($500) | $500/year |
A $140K offer with family health coverage + 4% 401K match is worth $165K+ total.
---
## Raise and Refresh Process
### Annual Compensation Review Cycle
**Recommended cadence:**
- October/November: Market data refresh, band updates
- November/December: Manager merit recommendations
- December/January: Calibration and approvals
- January/February: Effective date for new salaries + equity grants
**Budget allocation:**
- **Merit budget** (performance-based raises): 3–5% of total payroll typically
- **Market adjustment budget** (fixing below-band salaries): Separate from merit. Non-negotiable to avoid attrition.
- **Promotion budget:** Separate. Promotions should not come from merit pool.
### Merit Increase Guidelines
| Performance Rating | Merit Increase Range |
|-------------------|---------------------|
| 5 Exceptional | 8–15% |
| 4 Exceeds | 5–8% |
| 3 Meets | 2–4% |
| 2 Needs improvement | 0–1% |
| 1 Underperforming | 0% (PIP active) |
*Adjust based on compa-ratio. A high performer at P90 of their band gets a smaller increase than a high performer at P50.*
### Compa-Ratio Adjustment Matrix
| Performance \ Compa-Ratio | < 0.90 | 0.90–1.00 | 1.00–1.10 | > 1.10 |
|---------------------------|--------|-----------|-----------|--------|
| Exceptional (5) | 12–15% | 8–12% | 5–8% | 3–5% |
| Exceeds (4) | 8–12% | 5–8% | 3–5% | 1–3% |
| Meets (3) | 5–8% | 3–5% | 2–3% | 0–2% |
| Needs impr (2) | 0–2% | 0–1% | 0% | 0% |
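The matrix reads naturally as a lookup. A sketch: the function returns the (low, high) merit range in percent for a rating and compa-ratio, using the cell values from the table above.

```python
# Rows are ratings 5..2; columns are the four compa-ratio bands.
MATRIX = {
    5: [(12, 15), (8, 12), (5, 8), (3, 5)],
    4: [(8, 12), (5, 8), (3, 5), (1, 3)],
    3: [(5, 8), (3, 5), (2, 3), (0, 2)],
    2: [(0, 2), (0, 1), (0, 0), (0, 0)],
}

def merit_range(rating, compa_ratio):
    """(low, high) merit increase in % for a rating and compa-ratio."""
    if compa_ratio < 0.90:
        col = 0
    elif compa_ratio <= 1.00:
        col = 1
    elif compa_ratio <= 1.10:
        col = 2
    else:
        col = 3
    return MATRIX[rating][col]

print(merit_range(5, 0.88))  # (12, 15) — exceptional and below band
print(merit_range(3, 1.12))  # (0, 2) — meets expectations, above band
```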
### Promotion vs. Merit — Keep These Separate
**Common mistake:** Using merit budget to fund promotions. This forces a choice between rewarding performance and recognizing level change.
**Promotion increase guidelines:**
- One level (e.g., L2 → L3): 10–20% increase, new equity grant
- Two levels (rare): 20–35% increase, new equity grant at new level
- Manager track (IC → M1): 15–25% increase, new equity grant
**Promotion criteria process:**
1. Manager nominates with written business case
2. Calibration committee reviews cross-functionally
3. HR validates against band (no off-band exceptions without CHRO sign-off)
4. Employee informed before annual review — never surprised at review meeting
### Off-Cycle Adjustments
When to do them:
- Counter-offer situations (see below)
- Competitive intelligence reveals underpay for a specific role
- New market data shows a role significantly under-benchmarked
- Internal equity audit reveals unexplained gaps
**Counter-offer policy:**
Three options:
1. **Match** — Risk: signals you underpay; sets precedent
2. **Partial match** — "We can do X, which is the top of your band" — cleaner
3. **Decline** — Accept the attrition, improve the band for the next hire
**Rule:** If you're regularly in counter-offer conversations, your bands are stale. Fix the bands.
---
## Pay Equity Audit
Run annually. Non-negotiable at Series B+.
**What to audit:**
- Pay gap by gender within each level and function
- Pay gap by ethnicity within each level and function
- Compa-ratio distribution across demographics
- Time-to-promotion by demographic group
**Methodology:**
1. Pull all employee data: level, function, salary, tenure, performance ratings, gender, ethnicity
2. Run regression controlling for level, tenure, and performance
3. Unexplained gap after controls = the problem to fix
4. Flag and remediate within the same review cycle
**Legal exposure:** In many jurisdictions, documented pay gaps without remediation plans are litigation risk. The audit creates a record of intent; remediation closes the risk.
**Remediation budget:** Set aside 0.5–1% of payroll annually for equity adjustments. If you're doing it right, this shrinks over time.
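A stdlib first pass at the methodology above: within each level, compare mean compa-ratio across groups. This only surfaces raw gaps to investigate — the real audit is the regression controlling for level, tenure, and performance. The data here is invented.

```python
from collections import defaultdict
from statistics import mean

employees = [
    # (level, demographic group, compa_ratio) — invented sample
    ("L2", "A", 0.98), ("L2", "A", 1.02),
    ("L2", "B", 0.91), ("L2", "B", 0.94),
    ("L3", "A", 1.01), ("L3", "B", 1.00),
]

by_level = defaultdict(lambda: defaultdict(list))
for level, group, cr in employees:
    by_level[level][group].append(cr)

for level, groups in sorted(by_level.items()):
    means = {g: mean(v) for g, v in groups.items()}
    gap = max(means.values()) - min(means.values())
    print(level, {g: round(m, 3) for g, m in means.items()}, f"gap={gap:.3f}")
```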

# Org Design Reference
Spans of control, layering decisions, reorgs, title frameworks, career ladders, and the founder→professional management transition.
---
## Core Org Design Principles
1. **Structure follows strategy.** Reorg after strategy shifts, not before.
2. **Optimize for the bottleneck.** Where does work get slow? Design around that.
3. **Minimize coordination cost.** Conway's Law: your org structure becomes your product architecture. Design intentionally.
4. **Bias toward flatness until it breaks.** Adding layers adds cost and slows decisions.
5. **Reorgs have transition costs.** Relationships reset. Count the cost before you restructure.
---
## Spans of Control
Span of control = number of direct reports a manager has.
### Benchmarks
| Role Type | Optimal Span | Min | Max |
|-----------|-------------|-----|-----|
| IC manager (predictable work) | 7–10 | 5 | 12 |
| IC manager (complex/creative work) | 5–7 | 4 | 8 |
| Manager of managers | 4–6 | 3 | 7 |
| VP / Director | 4–7 | 3 | 8 |
| C-Suite | 5–9 | 4 | 10 |
**Too narrow (< 4 ICs):** Over-management, high cost per output, manager becomes a bottleneck
**Too wide (> 12 ICs):** Under-management, degraded 1:1 quality, feedback loops collapse
### Factors that allow wider spans
- Highly autonomous, senior team (L3+ ICs)
- Predictable, well-defined work (support, ops)
- Strong tooling and process (reduces manager overhead)
- Experienced manager
### Factors that require narrower spans
- High-complexity, undefined problems (research, early product)
- Junior or newly promoted team members
- High interdependence between reports (coordination overhead)
- Manager is also an IC contributor (player-coach)
---
## When to Add Management Layers
**The wrong reason to add layers:** "We need to give good people somewhere to grow."
**The right reason:** "This manager has too many direct reports to do the job well."
### Layer triggers by growth stage
**0 → 15 people:** No layers. Everyone reports to founders.
**15 → 30 people:** First managers emerge. Usually technical leads or function leads. Should still be player-coaches.
**30 → 60 people:** Second layer forms. Engineering splits into squads. Sales gets a frontline manager. Each function has a head.
**60 → 150 people:** Director layer becomes necessary in large functions. Engineering VP + Engineering Directors + Team Managers.
**150+ people:** VP layer fully staffed. Senior Director / Director split. Clear IC → M → Senior M → Director → VP paths.
### The Rule of 7
When any manager has 7 or more direct reports and:
- 1:1s are skipped regularly
- Feedback quality drops
- Manager can't answer "how is each person doing?" without checking notes
→ Time to split or hire a manager.
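A span check against the benchmark table above can be sketched as follows. The org data is invented; the max-span thresholds come from the table, and the under-management floor of 4 is the "too narrow" line below.

```python
MAX_SPAN = {"ic_manager": 12, "manager_of_managers": 7}  # from the table
MIN_SPAN = 4  # below this: over-management, manager becomes a bottleneck

def span_flags(org):
    """org maps manager name -> (manager kind, number of direct reports)."""
    flags = []
    for manager, (kind, reports) in org.items():
        if reports > MAX_SPAN[kind]:
            flags.append(f"{manager}: {reports} reports — split or hire a manager")
        elif reports < MIN_SPAN:
            flags.append(f"{manager}: {reports} reports — likely over-management")
    return flags

org = {
    "Dana": ("ic_manager", 13),          # invented examples
    "Eli": ("manager_of_managers", 5),
    "Flo": ("ic_manager", 3),
}
print(span_flags(org))  # flags Dana (too wide) and Flo (too narrow)
```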
### Management overhead cost
Every manager layer costs 10–15% in decision speed (communication hops).
Every management role without a team = pure overhead.
**Litmus test for each management role:**
- Does this person have at least 4 ICs under them?
- Would removing this role improve decision speed?
- Is this a management job or a "we ran out of IC levels" job?
---
## Functional vs. Product Org Structures
### Functional Structure (by discipline)
```
CEO
├── VP Engineering
│ ├── Backend Team
│ ├── Frontend Team
│ └── DevOps
├── VP Product
│ ├── PM (Feature A)
│ └── PM (Feature B)
└── VP Design
└── UX Designers
```
**Best for:** Early stage, < 100 people, single product
**Advantage:** Deep expertise development, clear career paths per discipline
**Disadvantage:** Cross-functional coordination is heavy; features require synchronization across silos
### Product/Pod Structure (by product area)
```
CEO
├── Product Area A (autonomous team)
│ ├── EM
│ ├── PM
│ └── Designer
├── Product Area B (autonomous team)
│ ├── EM
│ ├── PM
│ └── Designer
└── Platform (shared services)
└── Platform EM + team
```
**Best for:** Multiple products or large user segments, 50+ in product/eng
**Advantage:** Speed and autonomy; less cross-team coordination for most features
**Disadvantage:** Duplication risk; harder to maintain technical coherence; harder career paths
### When to shift from Functional → Product org
- You have 2+ distinct product lines that rarely share features
- Cross-functional feature delivery takes > 3 sprints of coordination overhead
- Teams are > 8 engineers and still waiting on shared resources
### Hybrid / Matrix (avoid unless necessary)
Matrix reporting (e.g., engineer reports to EM + PM) creates accountability confusion. Avoid at < 500 people.
---
## Title Frameworks
### The Problem with Title Inflation
Early startups over-title to compete with cash. "VP of Engineering" with 2 reports. "Head of Marketing" with no team.
**Consequences:**
- Can't add leadership above inflated titles without awkward conversations
- Candidates from mature companies expect scope commensurate with titles
- Internal equity breaks when the same title means different things
### Preventing Title Inflation
**Rule 1:** VP titles require managing managers (not just ICs).
**Rule 2:** Director titles require managing multiple ICs or a large function.
**Rule 3:** No more than one "Head of X" per function.
**Rule 4:** Document scope expectations per title before making offers.
### Engineering Title Ladder (example)
| Title | Level | Scope | Reports |
|-------|-------|-------|---------|
| Software Engineer I | L1 | Executes defined tasks | — |
| Software Engineer II | L2 | Independent delivery | — |
| Senior Software Engineer | L3 | Leads features, mentors | — |
| Staff Software Engineer | L4 | Cross-team technical leadership | — |
| Principal Software Engineer | L5 | Company-wide technical direction | — |
| Distinguished Engineer | L6 | External recognition, defining practice | — |
| Engineering Manager | M1 | Team of 4–8 engineers | 4–8 ICs |
| Senior Engineering Manager | M2 | Larger team or manager of managers | 2–4 managers |
| Director of Engineering | M3 | Functional area | Multiple managers |
| VP of Engineering | M4 | Engineering org | Directors |
| CTO | M5 | Technical organization + strategy | VPs |
**IC vs. Management track:** Explicitly separate. Senior ICs should not need to move to management for career advancement. Staff/Principal/Distinguished track provides this.
### Go-to-Market Title Ladder (example)
| Title | Level | Focus |
|-------|-------|-------|
| SDR / BDR | S1 | Outbound prospecting |
| Account Executive I | S2 | SMB closing |
| Account Executive II | S3 | Mid-market closing |
| Senior Account Executive | S4 | Enterprise closing |
| Principal / Strategic AE | S5 | Named accounts, complex deals |
| Sales Manager | M1 | 6–8 reps |
| Director of Sales | M2 | Multiple teams or segments |
| VP of Sales | M3 | Full sales org |
| CRO | M4 | Revenue org (sales + CS + marketing) |
---
## Career Ladders
A career ladder is a documented set of expectations per level. Not aspirational — behavioral. "What does a P3 engineer do that a P2 doesn't?"
### Why career ladders matter for HR
1. **Retention:** Employees can see where they're going
2. **Consistency:** Managers use the same criteria for promotions
3. **Compensation:** Bands anchor to levels; levels require definitions
4. **Equity:** Removes "who's the manager's favorite" from promotion decisions
### Career Ladder Structure
For each level, define 4 dimensions:
**1. Scope** — How big is the problem space? Team / cross-team / org-wide / company-wide?
**2. Impact** — How does work connect to outcomes? (Task → Feature → Product → Business)
**3. Craft** — Technical/functional skill expectations
**4. Influence** — How does this person improve others? (Self → peers → team → org)
**Example: Senior Software Engineer (L3) vs. Staff Software Engineer (L4)**
| Dimension | L3 (Senior SWE) | L4 (Staff SWE) |
|-----------|----------------|----------------|
| Scope | Owns features or services | Owns technical domains across teams |
| Impact | Ships features that improve user outcomes | Shapes technical direction for a product area |
| Craft | Writes high-quality code, good design skills | Sets coding standards, contributes to architecture |
| Influence | Mentors L1–L2, code reviews | Mentors L3+, identifies org-wide technical gaps |
### How to build a career ladder from scratch
1. **Interview your best performers** — "What do you do that your junior peers don't?" Collect behaviors, not aspirations.
2. **Draft 3 levels** — Don't start with 6. Start with junior, mid, senior. Add staff/principal only when you have enough people to warrant it.
3. **Manager calibration** — Every manager rates 5 current employees against the draft. Gaps surface immediately.
4. **Publish and iterate** — Don't wait for perfection. A 70% ladder shipped is better than a 100% ladder in a drawer.
---
## Reorg Playbook
### When reorgs are necessary
- Strategy pivot requires different team structure (e.g., single product → multi-product)
- Acquisition or team merger
- Function is genuinely too slow due to coordination overhead
- Leadership departure creates structural opportunity
### When reorgs are a mistake
- "We need to shake things up" (disruption for its own sake)
- Avoiding a specific personnel decision (use the right tool)
- Solving a cultural problem with a structural change
- Reacting to one team's complaint without systemic evidence
### Reorg Process (4–8 weeks)
**Week 1–2: Diagnose**
- Map current org: every role, reporting line, team output
- Identify where work is slow, duplicated, or falling through cracks
- Interview 5–10 people across teams: "What takes longer than it should? What decisions are hard to make?"
**Week 3–4: Design options**
- Draft 2–3 structural alternatives
- For each: estimated coordination costs, manager span impact, open roles created
- Validate with CEO + 1–2 trusted operators. Don't crowdsource the design.
**Week 5–6: Decide and prepare**
- Select option; finalize all reporting changes
- Prepare communications for every affected person (individual conversations before all-hands)
- Write the "why" — employees need to understand the business reason, not just the result
**Week 7–8: Communicate and implement**
- Individual conversations with all manager+ changes (first)
- Team-level conversations with managers (second)
- All-hands with full context (third)
- Updated org chart published within 24 hours of announcement
### Communication sequence (non-negotiable)
1. Affected individuals first (private, before anything else)
2. Affected managers second (to prepare for team conversations)
3. Full team/company third (all-hands or company note)
4. External (clients, board) only if materially impacted
**Never:** an email blast first, no individual conversations beforehand, or people discovering the change on the published org chart.
---
## Founder → Professional Management Transition
The most common scaling failure point in startups.
### Stage 1: Founder-Led (0–30 people)
Founders make all decisions, know everyone personally, set culture through behavior. Works because trust and context are built directly.
**What breaks:**
- Decisions bottleneck at founders
- New hires don't get enough context (founders can't be everywhere)
- Culture transmitted through osmosis, not documentation
### Stage 2: First Managers (30–80 people)
Founders can no longer manage all ICs. First manager layer typically = promoted high performers.
**The "brilliant IC → struggling manager" trap:**
- Individual contributor skills ≠ management skills
- Promoted ICs often continue doing IC work while ignoring management work
- No one holds them accountable to management output (1:1 quality, team health, performance feedback)
**What to do:**
- Explicit manager training before promotion (not after)
- Management KPIs separate from IC KPIs
- Peer community for new managers (monthly cohort session)
- HR check-ins on manager health at 30/60/90 days
### Stage 3: Professional Management (80–200 people)
External hires at Director/VP level bring professional management skills but lack company context.
**Common failure modes:**
- Hired "too senior" — VP who's used to 200-person teams in a 50-person function
- Culture clash — Big-company manager who adds process that kills startup speed
- Authority vacuum — External VP doesn't earn trust; team ignores them; founder continues to bypass hierarchy
**Mitigation:**
- Hiring bar: Has this person scaled from this stage to 2x this stage before? Not managed a team at 2x — built a team to 2x.
- Explicit onboarding on "how we make decisions here"
- 90-day milestones focused on relationship-building before any structural changes
- Founders explicitly hand off ownership and reinforce new manager's authority publicly
### Stage 4: Founder Transition from Operator to Executive
The hardest personal transition. Founder moves from doing to enabling.
**Signs you haven't made the transition:**
- You're still in every technical decision
- Teams come to you instead of their manager for approvals
- You know more about the team's work than the manager does
- Managers feel they need to check in before acting
**What the transition requires:**
- Explicit authority delegation in writing (not just verbal)
- Willingness to let managers make decisions you'd make differently
- Redirecting team members to their manager consistently
- Measuring managers on outcomes, not just process adherence
- Letting managers hire and fire without founder override (except final call on VPs)

# People Strategy Reference
Hiring, retention, performance, and remote/hybrid frameworks for each growth stage.
---
## Hiring Strategy by Growth Stage
### Pre-Seed / Seed (1–15 people)
**Who you're hiring:** Generalists who can do multiple jobs. Specialists are a luxury you can't afford unless the specialty is your core product.
**The test:** Could this person be the 5th employee at a startup and thrive? If they need a defined role, clear process, and a manager — not yet.
**Sourcing at this stage:**
- Founder networks first (highest signal, lowest cost)
- AngelList / Wellfound — self-selected for startup risk tolerance
- Referrals from existing employees (offer a referral bonus from day 1)
- GitHub / Dribbble / published work for technical roles
- Avoid: Big job boards, recruiters (unless technical retained search for C-suite)
**Interview process (keep it lean):**
1. 30-min intro call (culture/motivation fit, comp alignment)
2. Take-home or live work sample (2–4 hours max, paid for senior roles)
3. 60-min deep-dive with founders
4. Reference checks (3 calls, not emails — you want the real story)
**Offer timeline:** Decision within 48 hours. Top candidates have multiple offers.
**What to get right:**
- Written job scorecard (outcomes expected in 30/60/90 days) — not a job description
- Equity range disclosed in first conversation
- No exploding offers. Pressure tactics lose good people.
---
### Series A (15–50 people)
**The hiring shift:** You need some specialists now. First management layer emerges. First "culture carriers" — people who reinforce what you want to become.
**Critical hires at this stage (in priority order):**
1. VP/Head of Engineering (if founder isn't technical)
2. Head of Product
3. First dedicated recruiter (when you're hiring > 10/year)
4. First Finance/Operations hire
5. Head of Sales (when product-market fit is real)
**Building the recruiting function:**
- First recruiter should be a generalist with hustle, not a specialist
- Set up an ATS (Ashby, Greenhouse, or Lever) before you need it — not after
- Create interview scorecards for every role
- Track: time to fill, offer acceptance rate, source quality
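The three tracking metrics above fall out of the ATS export directly. A minimal sketch, assuming each record is one closed requisition where an offer was extended (the field names `opened`, `offer_accepted`, `filled`, `source` are illustrative, not a real ATS schema):

```python
"""Recruiting funnel metrics: time to fill, offer acceptance rate, hires by source."""
from datetime import date

hires = [  # hypothetical closed requisitions, one offer extended each
    {"opened": date(2024, 1, 5),  "offer_accepted": True,  "filled": date(2024, 2, 20), "source": "referral"},
    {"opened": date(2024, 1, 12), "offer_accepted": False, "filled": None,              "source": "job_board"},
    {"opened": date(2024, 2, 1),  "offer_accepted": True,  "filled": date(2024, 3, 10), "source": "referral"},
]

filled = [h for h in hires if h["filled"]]
# Days from req opened to seat filled, averaged over filled reqs only
time_to_fill = sum((h["filled"] - h["opened"]).days for h in filled) / len(filled)
# Accepted offers over offers extended
acceptance = sum(h["offer_accepted"] for h in hires) / len(hires)
# Source quality proxy: where filled reqs actually came from
by_source: dict[str, int] = {}
for h in filled:
    by_source[h["source"]] = by_source.get(h["source"], 0) + 1

print(f"Avg time to fill: {time_to_fill:.0f} days")
print(f"Offer acceptance: {acceptance:.0%}")
print(f"Hires by source: {by_source}")
```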
**Common mistakes at Series A:**
- Promoting top ICs to management without management training
- Hiring "brand name" executives who've never operated lean
- Over-indexing on experience, under-indexing on trajectory
- No onboarding process → 90-day regrettable turnover
**Job scorecards (required for every role):**
```
Role: [Title]
Reports to: [Manager]
Start date: [Target]
Why this role now: [Business case in 1-2 sentences]
Outcomes (90 days):
- [Concrete deliverable 1]
- [Concrete deliverable 2]
- [Concrete deliverable 3]
Outcomes (12 months):
- [Strategic impact 1]
- [Strategic impact 2]
Competencies (top 3 only):
- [What, why it matters for THIS role]
- [What, why it matters for THIS role]
- [What, why it matters for THIS role]
Comp range: [Base] + [Equity] + [Benefits summary]
```
---
### Series B (50–150 people)
**The scaling inflection point.** Tribal knowledge breaks. Process matters now. Culture requires deliberate investment.
**What changes:**
- Recruiters become specialists (technical, GTM, exec)
- Manager training becomes non-negotiable
- Performance management needs structure (not just "we'll know it when we see it")
- Onboarding needs to scale without founders in every session
- Comp bands become essential — people are comparing notes
**Hiring velocity benchmarks (Series B):**
| Function | Avg time to fill | Avg interviews | Benchmark offer acceptance |
|----------|-----------------|----------------|---------------------------|
| Engineering IC | 35–45 days | 4–5 rounds | 80–85% |
| Engineering Manager | 45–60 days | 5–6 rounds | 75–80% |
| Sales IC | 25–35 days | 3–4 rounds | 85–90% |
| Sales Manager | 40–55 days | 4–5 rounds | 80–85% |
| G&A (Finance, HR, Ops) | 30–45 days | 3–4 rounds | 85–90% |
**Internal mobility:** By 50 people, start tracking internal promotion rates. Target: 20–30% of manager+ roles filled internally. If it's < 10%, your career development is failing.
---
### Series C+ (150+ people)
**Professional management era.** Founders can't know everyone. Systems and culture carry what personal relationships used to.
**HR function maturity required:**
- Dedicated HRBPs per business unit (1:75–100 employees)
- L&D budget (1–2% of salary budget minimum)
- Succession planning for all VP+ roles
- Structured calibration process for performance reviews
- Total rewards strategy reviewed annually with board
---
## Retention Programs That Actually Work
### What drives retention (in order of impact)
1. **Manager quality** — Gallup: 70% of team engagement variance is explained by the manager. Fix managers first.
2. **Growth trajectory** — People leave when they can't see their next role. Career ladders are retention tools.
3. **Compensation competitiveness** — Being at P25 on salary is a slow leak. Audit annually.
4. **Mission/product belief** — Especially for senior ICs. They want to work on something that matters.
5. **Team quality** — "I stay because of the people I work with." True at every level.
6. **Flexibility** — Location, hours, autonomy. Low cost, high impact.
### What doesn't work (but companies do anyway)
- Pizza parties and ping pong tables
- "Perks" that substitute for salary
- Annual reviews with no action on feedback
- Forced fun events
- Vague "culture improvement" initiatives without specific behavior changes
### The 30-60-90 Onboarding Framework
Structured onboarding cuts 90-day turnover by 50%+.
**Days 1–30: Learn**
- Complete admin setup (day 1, before lunch)
- Meet all key stakeholders (scheduled by their manager, not left to the new hire)
- Understand: business model, current priorities, team processes, how success is measured
- No deliverables expected. Learning is the job.
- Weekly 1:1 with manager: "What's confusing? What do you need?"
**Days 31–60: Contribute**
- First real project (scoped to be completable)
- Present findings or work to the team
- Identify one process that could be improved (observation only — don't fix yet)
- 30-day check-in: formal feedback from manager
**Days 61–90: Lead**
- Own a deliverable end-to-end
- Offer one specific improvement recommendation with data
- 90-day review: mutual assessment — manager on new hire, new hire on onboarding
- Set 6-month goals
### Stay Interviews (underused, high ROI)
Run with every employee once per year. Not their manager — HR or skip-level.
**Questions that surface real risk:**
- "What's keeping you here?"
- "What would make you consider leaving?"
- "What's one thing your manager could do differently?"
- "Is your role what you expected when you joined?"
- "What career path do you want? Are we helping you get there?"
- "Are you fairly compensated? Do you know how you'd get a raise?"
**Act on answers within 30 days or don't ask.** Unanswered feedback is worse than no feedback.
### Exit Interviews — What to Actually Learn
Skip the happiness survey. Ask these:
- "When did you first think about leaving?"
- "Was there a specific event that triggered your decision?"
- "What could we have done to retain you?"
- "Where are you going and why?" (What does the other offer have that we don't?)
- "Would you recommend us as an employer? Why or why not?"
Track exit themes by manager. If one manager's exits cite "micromanagement" three times — that's data.
---
## Performance Management
### The System That Works
**Continuous > annual.** Annual reviews with no mid-year touchpoints are theater.
**Structure:**
- **Weekly 1:1s** (30 min): blockers, priorities, relationship
- **Monthly check-ins** (1 hr): progress against goals, feedback exchange
- **Quarterly reviews** (formal): written self-assessment + manager assessment + goal revision
- **Annual calibration** (rating + comp): cross-manager calibration session, then individual conversations
### Calibration Sessions
**Purpose:** Prevent manager bias. Ensure "exceeds expectations" means the same thing across teams.
**Process:**
1. Managers submit preliminary ratings independently
2. HR facilitates 2-hr calibration with all managers in a function
3. Managers must justify outliers (top and bottom)
4. Ratings adjusted for consistency
5. Managers deliver final ratings with rationale
**Distribution guidance (enforce with calibration):**
- Exceptional (5): < 10% — if everyone's exceptional, no one is
- Exceeds (4): 20–25%
- Meets (3): 55–65%
- Needs improvement (2): 8–12%
- Underperforming (1): 2–5%
### Managing Underperformers
**The most avoided management task. And the most damaging when avoided.**
High performers notice when underperformers are tolerated. They leave.
**The 4-step framework:**
**Step 1: Diagnose before acting** (Week 1–2)
- Is this a skill gap (can't do it) or a will gap (won't do it)?
- Skill gap → training, clearer expectations, different role
- Will gap → direct feedback, clear consequences, then PIP
**Step 2: Direct feedback conversation** (Week 2–3)
- Specific: "Your last 3 sprint deliveries were 40% incomplete"
- Not: "You're not meeting expectations"
- Document. Send written summary after every feedback conversation.
**Step 3: Performance Improvement Plan (PIP)**
Required when: two rounds of direct feedback haven't produced change.
PIP structure:
```
Name: [Employee]
Manager: [Name]
Date: [Start]
Review date: [30/60 days out]
Current performance issues:
- [Specific, observable behavior with examples and dates]
- [Metric not met: target X, actual Y for Z weeks]
Required improvements:
- [Specific, measurable outcome 1] by [date]
- [Specific, measurable outcome 2] by [date]
Support provided:
- [Training, coaching, additional resources]
Consequences if not met: [Role change / separation]
Check-in schedule: [Weekly with manager + HR]
```
**Step 4: Exit or role change**
- If PIP milestones not met: proceed to separation
- Don't extend PIPs indefinitely — it's unfair to the employee and the team
- Offer a graceful exit where possible: "This role isn't the right fit. Here's a package and a reference."
**What not to do:**
- "Quiet manage out" without clear feedback (legally risky, unfair)
- PIP as a formality before termination (if you know you're firing them, just do it)
- Tolerating underperformance "because we're understaffed" (it makes understaffing worse)
---
## Remote / Hybrid Strategy
### The question isn't "remote or not" — it's "what kind of collaboration does our work require?"
**Work type taxonomy:**
| Work type | Remote-compatible? | Hybrid compatible? |
|-----------|-------------------|-------------------|
| Deep individual work (coding, writing, analysis) | Yes | Yes |
| Async collaboration (code review, doc review) | Yes | Yes |
| Synchronous problem-solving (debugging, design) | Yes (video) | Yes |
| Relationship-building (onboarding, new team) | Harder | Yes |
| Executive alignment, strategy | Harder | Yes — quarterly in-person |
| Sales (enterprise, relationship-based) | No | Depends on market |
### Making Hybrid Work (Not Just a Policy)
**The failure mode:** "Hybrid" = go to office on Tuesday/Thursday, but no one coordinates, all meetings are still Zoom anyway.
**What actually works:**
1. **Anchor days with purpose** — Office days should have things that require the office: workshops, team rituals, whiteboarding sessions. Not just "presence."
2. **Async-first culture, not async-only** — Document decisions. Write things down. Use Loom for walkthroughs. Reduce "quick sync" meetings.
3. **Equal experience for remote participants** — If some are in the room and some are on video, the remote folks are second-class. Either everyone's remote or set up rooms properly.
4. **Manager standards for remote teams:**
- 1:1s are non-negotiable (video, not async)
- Over-communicate on priorities (people can't absorb hallway context)
- Write down decisions (remote employees miss casual office decisions)
- Recognize work publicly (Slack shoutouts, all-hands wins)
### Remote Compensation Philosophy (pick one, be explicit)
**Option A: Location-based pay**
Pay based on where the employee lives. Lower cost in lower-cost markets. Harder to hire in high-cost cities.
**Option B: Role-based (location-neutral)**
One band for each role regardless of location. Simpler, more equitable. Higher overall payroll cost.
**Option C: Zone-based**
Define 2–3 geographic zones (e.g., Tier 1 cities, Tier 2 cities, international). Set bands per zone. Common at mid-stage startups.
**The wrong answer:** No stated policy, and every offer is negotiated individually. Creates pay equity problems fast.
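Option C reduces to a multiplier table applied to the Tier 1 band. A minimal sketch; the zone names and multipliers below are illustrative assumptions, not recommended discounts:

```python
"""Zone-based pay (Option C): derive per-zone bands from one reference band."""

# Hypothetical discount vs. the Tier 1 reference band
ZONE_MULTIPLIER = {"Tier1": 1.00, "Tier2": 0.90, "Tier3": 0.80}

def zone_band(reference_band: tuple[int, int], zone: str) -> tuple[int, int]:
    """Scale a (min, max) Tier 1 band to the given zone."""
    m = ZONE_MULTIPLIER[zone]
    lo, hi = reference_band
    return int(lo * m), int(hi * m)

print(zone_band((150_000, 190_000), "Tier2"))  # (135000, 171000)
```

Publishing the multiplier table alongside the bands is what makes this a policy rather than per-offer negotiation.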

#!/usr/bin/env python3
"""
Compensation Benchmarker
========================
Salary benchmarking and total comp modeling for startup teams.
Analyzes pay equity, compa-ratios, and total comp vs. market.
Usage:
python comp_benchmarker.py # Run with built-in sample data
python comp_benchmarker.py --config roster.json # Load from JSON
python comp_benchmarker.py --help
Output: Band compliance report, compa-ratio distribution, pay equity flags,
equity value analysis, and total comp vs. market.
"""
import argparse
import json
import csv
import io
import sys
from dataclasses import dataclass, field, asdict
from typing import Optional
from datetime import date
import math
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class BandDefinition:
    """Salary band for a role level."""
    level: str          # L1, L2, L3, L4, M1, M2, M3, VP
    function: str       # Engineering, Sales, Product, G&A, Marketing, CS
    band_min: int       # Annual USD
    band_mid: int       # P50 anchor
    band_max: int       # Band ceiling
    market_p25: int     # Market 25th percentile
    market_p50: int     # Market median (should align with band_mid for P50 strategy)
    market_p75: int     # Market 75th percentile
    location_zone: str  # Tier1 (SF/NYC), Tier2 (Austin/Denver), Tier3 (Remote/other), EU


@dataclass
class Employee:
    """One employee record."""
    id: str
    name: str
    role: str
    level: str
    function: str
    location_zone: str
    base_salary: int
    bonus_target_pct: float             # % of base
    equity_shares: int                  # Total unvested options/RSUs
    equity_strike: float                # Strike price (0 for RSUs)
    equity_current_409a: float          # Current 409A share price
    equity_vest_years_remaining: float  # How many years of vesting remain
    benefits_annual: int                # Employer-paid benefits cost
    gender: str                         # M/F/NB/Undisclosed (for equity audit)
    ethnicity: str                      # For equity audit — can be "Undisclosed"
    tenure_years: float
    performance_rating: int             # 1–5
    last_raise_months_ago: int
    last_equity_refresh_months_ago: Optional[int] = None


@dataclass
class CompRoster:
    company: str
    as_of_date: str               # ISO date
    funding_stage: str            # Seed, Series A, Series B, etc.
    comp_philosophy_target: str   # P50, P65, P75 — your target percentile
    preferred_stock_price: float  # Last round price (for offer modeling)
    employees: list[Employee] = field(default_factory=list)
    bands: list[BandDefinition] = field(default_factory=list)
# ---------------------------------------------------------------------------
# Band lookup
# ---------------------------------------------------------------------------
def find_band(roster: CompRoster, level: str, function: str, zone: str) -> Optional[BandDefinition]:
    """Find best-matching band. Falls back to any matching level+function if zone not found."""
    matches = [b for b in roster.bands if b.level == level and b.function == function and b.location_zone == zone]
    if matches:
        return matches[0]
    # Fallback: same level+function, any zone
    matches = [b for b in roster.bands if b.level == level and b.function == function]
    if matches:
        return matches[0]
    # Fallback: same level, any function
    matches = [b for b in roster.bands if b.level == level]
    if matches:
        return matches[0]
    return None
# ---------------------------------------------------------------------------
# Compensation analysis
# ---------------------------------------------------------------------------
def compa_ratio(salary: int, band_mid: int) -> float:
    return salary / band_mid if band_mid > 0 else 0.0


def band_position(salary: int, band_min: int, band_max: int) -> float:
    """Position in band: 0.0 = at min, 1.0 = at max."""
    if band_max == band_min:
        return 0.5
    return (salary - band_min) / (band_max - band_min)


def annualized_equity_value(emp: Employee) -> int:
    """Current 409A value of unvested equity, annualized."""
    if emp.equity_vest_years_remaining <= 0:
        return 0
    if emp.equity_current_409a > emp.equity_strike:
        intrinsic = (emp.equity_current_409a - emp.equity_strike) * emp.equity_shares
    else:
        # Options underwater — still show at current FMV for RSUs or future value for options
        intrinsic = emp.equity_current_409a * emp.equity_shares if emp.equity_strike == 0 else 0
    return int(intrinsic / emp.equity_vest_years_remaining)


def total_comp(emp: Employee) -> int:
    bonus = int(emp.base_salary * emp.bonus_target_pct)
    equity = annualized_equity_value(emp)
    return emp.base_salary + bonus + equity + emp.benefits_annual

def analyze_employee(emp: Employee, roster: CompRoster) -> dict:
    band = find_band(roster, emp.level, emp.function, emp.location_zone)
    result = {
        "id": emp.id,
        "name": emp.name,
        "role": emp.role,
        "level": emp.level,
        "function": emp.function,
        "zone": emp.location_zone,
        "base": emp.base_salary,
        "bonus_target": int(emp.base_salary * emp.bonus_target_pct),
        "equity_annual": annualized_equity_value(emp),
        "benefits": emp.benefits_annual,
        "total_comp": total_comp(emp),
        "performance": emp.performance_rating,
        "tenure_years": emp.tenure_years,
        "last_raise_months": emp.last_raise_months_ago,
        "band": band,
        "compa_ratio": None,
        "band_position": None,
        "vs_market_p50": None,
        "flags": [],
    }
    if band:
        cr = compa_ratio(emp.base_salary, band.band_mid)
        bp = band_position(emp.base_salary, band.band_min, band.band_max)
        result["compa_ratio"] = round(cr, 3)
        result["band_position"] = round(bp, 3)
        result["vs_market_p50"] = round((emp.base_salary - band.market_p50) / band.market_p50 * 100, 1)
        # Flags
        if emp.base_salary < band.band_min:
            result["flags"].append(("CRITICAL", "Base below band minimum — immediate attrition risk"))
        elif cr < 0.88:
            result["flags"].append(("HIGH", f"Compa-ratio {cr:.2f} — significantly below midpoint"))
        elif cr < 0.93:
            result["flags"].append(("MEDIUM", f"Compa-ratio {cr:.2f} — below target zone (0.95–1.05)"))
        if emp.base_salary > band.band_max:
            result["flags"].append(("HIGH", "Base above band maximum — review for promotion or band update"))
        if emp.performance_rating >= 4 and cr < 0.95:
            result["flags"].append(("HIGH", f"High performer (rating {emp.performance_rating}) underpaid — flight risk"))
        if emp.last_raise_months_ago > 18:
            result["flags"].append(("MEDIUM", f"No raise in {emp.last_raise_months_ago} months — review due"))
        if emp.equity_vest_years_remaining < 1.0 and (emp.last_equity_refresh_months_ago is None or emp.last_equity_refresh_months_ago > 24):
            result["flags"].append(("HIGH", "Equity nearly fully vested with no refresh — retention hook gone"))
    else:
        result["flags"].append(("INFO", "No band found for this level/function/zone"))
    return result
# ---------------------------------------------------------------------------
# Aggregate analysis
# ---------------------------------------------------------------------------
def pay_equity_audit(analyses: list[dict], employees: list[Employee]) -> dict:
    """Simple pay equity analysis by gender and ethnicity."""
    emp_by_id = {e.id: e for e in employees}

    def group_stats(group_key_fn):
        groups: dict[str, list[float]] = {}
        for a in analyses:
            if a["compa_ratio"] is None:
                continue
            emp = emp_by_id.get(a["id"])
            if not emp:
                continue
            key = group_key_fn(emp)
            if key not in groups:
                groups[key] = []
            groups[key].append(a["compa_ratio"])
        return {k: {"n": len(v), "avg_cr": round(sum(v) / len(v), 3), "min_cr": round(min(v), 3), "max_cr": round(max(v), 3)}
                for k, v in groups.items() if v}

    gender_stats = group_stats(lambda e: e.gender)
    ethnicity_stats = group_stats(lambda e: e.ethnicity)

    # Compute gap vs. the largest group
    def compute_gap(stats: dict) -> dict[str, float]:
        if not stats:
            return {}
        largest = max(stats.items(), key=lambda x: x[1]["n"])
        ref_cr = largest[1]["avg_cr"]
        return {k: round((v["avg_cr"] - ref_cr) / ref_cr * 100, 1) for k, v in stats.items()}

    gender_gaps = compute_gap(gender_stats)
    ethnicity_gaps = compute_gap(ethnicity_stats)
    return {
        "gender": gender_stats,
        "gender_gaps_pct": gender_gaps,
        "ethnicity": ethnicity_stats,
        "ethnicity_gaps_pct": ethnicity_gaps,
    }

def compa_ratio_distribution(analyses: list[dict]) -> dict:
    crs = [a["compa_ratio"] for a in analyses if a["compa_ratio"] is not None]
    if not crs:
        return {}
    buckets = {
        "< 0.85 (below band)": 0,
        "0.85–0.94 (developing)": 0,
        "0.95–1.05 (target zone)": 0,
        "1.06–1.15 (senior in role)": 0,
        "> 1.15 (above band)": 0,
    }
    for cr in crs:
        if cr < 0.85:
            buckets["< 0.85 (below band)"] += 1
        elif cr < 0.95:
            buckets["0.85–0.94 (developing)"] += 1
        elif cr <= 1.05:
            buckets["0.95–1.05 (target zone)"] += 1
        elif cr <= 1.15:
            buckets["1.06–1.15 (senior in role)"] += 1
        else:
            buckets["> 1.15 (above band)"] += 1
    avg = sum(crs) / len(crs)
    return {"distribution": buckets, "avg_compa_ratio": round(avg, 3), "n": len(crs)}
# ---------------------------------------------------------------------------
# Report output
# ---------------------------------------------------------------------------
def fmt(n) -> str:
    return f"${int(n):,.0f}"


def bar(value: float, width: int = 20) -> str:
    filled = min(width, max(0, int(value * width)))
    return "█" * filled + "░" * (width - filled)
def print_report(roster: CompRoster):
WIDTH = 76
SEP = "=" * WIDTH
sep = "-" * WIDTH
analyses = [analyze_employee(e, roster) for e in roster.employees]
cr_dist = compa_ratio_distribution(analyses)
equity_audit = pay_equity_audit(analyses, roster.employees)
print(SEP)
print(f" COMPENSATION BENCHMARKING REPORT — {roster.company}")
print(f" As of: {roster.as_of_date} | Stage: {roster.funding_stage} | Target: {roster.comp_philosophy_target}")
print(SEP)
# Summary stats
total_emps = len(roster.employees)
flagged = sum(1 for a in analyses if any(s in ["CRITICAL", "HIGH"] for s, _ in a["flags"]))
total_payroll = sum(e.base_salary for e in roster.employees)
avg_total_comp = sum(a["total_comp"] for a in analyses) // total_emps if total_emps else 0
print(f"\n[ SUMMARY ]")
print(sep)
print(f" Employees analyzed: {total_emps}")
print(f" Flagged (critical/high): {flagged}")
print(f" Total base payroll: {fmt(total_payroll)}/year")
print(f" Avg total comp: {fmt(avg_total_comp)}/year")
if cr_dist:
print(f" Avg compa-ratio: {cr_dist['avg_compa_ratio']:.3f}")
# Compa-ratio distribution
if cr_dist:
print(f"\n[ COMPA-RATIO DISTRIBUTION ]")
print(sep)
total_n = cr_dist["n"]
for label, count in cr_dist["distribution"].items():
pct = count / total_n if total_n else 0
bar_str = bar(pct, 25)
print(f" {label:<30} {bar_str} {count:3d} ({pct*100:4.0f}%)")
# Pay equity audit
print(f"\n[ PAY EQUITY AUDIT ]")
print(sep)
print(f" By Gender:")
for group, stats in equity_audit["gender"].items():
gap = equity_audit["gender_gaps_pct"].get(group, 0.0)
gap_str = f" gap: {gap:+.1f}%" if gap != 0 else " (reference group)"
flag = " ⚠" if abs(gap) > 5 else ""
print(f" {group:<15} n={stats['n']} avg_CR={stats['avg_cr']:.3f}{gap_str}{flag}")
print(f"\n By Ethnicity:")
for group, stats in equity_audit["ethnicity"].items():
gap = equity_audit["ethnicity_gaps_pct"].get(group, 0.0)
gap_str = f" gap: {gap:+.1f}%" if gap != 0 else " (reference group)"
flag = " ⚠" if abs(gap) > 5 else ""
print(f" {group:<20} n={stats['n']} avg_CR={stats['avg_cr']:.3f}{gap_str}{flag}")
print(f"\n ⚠ = gap > 5%. Investigate with regression controlling for level, tenure, and performance.")
# Employee detail with flags
print(f"\n[ EMPLOYEE DETAIL ]")
print(sep)
# Group by function
functions = sorted(set(e.function for e in roster.employees))
for fn in functions:
fn_analyses = [a for a in analyses if a["function"] == fn]
if not fn_analyses:
continue
print(f"\n ── {fn} ──")
print(f" {'Name':<22} {'Role':<28} {'Lvl':<5} {'Base':>10} {'TotalComp':>11} {'CR':>6} {'Perf':>5} Flags")
print(f" {'-'*22} {'-'*28} {'-'*5} {'-'*10} {'-'*11} {'-'*6} {'-'*5} {'-'*20}")
for a in sorted(fn_analyses, key=lambda x: -x["base"]):
cr_str = f"{a['compa_ratio']:.2f}" if a["compa_ratio"] else "N/A"
flag_summary = ", ".join(s for s, _ in a["flags"] if s in ("CRITICAL", "HIGH", "MEDIUM"))
flag_str = flag_summary if flag_summary else "OK"
print(f" {a['name']:<22} {a['role']:<28} {a['level']:<5} "
f"{fmt(a['base']):>10} {fmt(a['total_comp']):>11} {cr_str:>6} {a['performance']:>5} {flag_str}")
# Print flag detail for critical/high
for severity, msg in a["flags"]:
if severity in ("CRITICAL", "HIGH"):
print(f" {'':>22} ↳ [{severity}] {msg}")
# Action items
critical = [(a["name"], msg) for a in analyses for sev, msg in a["flags"] if sev == "CRITICAL"]
high = [(a["name"], msg) for a in analyses for sev, msg in a["flags"] if sev == "HIGH"]
medium = [(a["name"], msg) for a in analyses for sev, msg in a["flags"] if sev == "MEDIUM"]
print(f"\n[ ACTION ITEMS ]")
print(sep)
if critical:
print(f"\n CRITICAL — Address this review cycle:")
for name, msg in critical:
print(f" • {name}: {msg}")
if high:
print(f"\n HIGH — Address within 30 days:")
for name, msg in high[:10]:
print(f" • {name}: {msg}")
if len(high) > 10:
print(f" ... and {len(high)-10} more")
if medium:
print(f"\n MEDIUM — Address in next comp cycle:")
for name, msg in medium[:8]:
print(f" • {name}: {msg}")
if len(medium) > 8:
print(f" ... and {len(medium)-8} more")
if not critical and not high and not medium:
print(f"\n No critical or high-severity issues. Compensation appears well-managed.")
# Remediation cost estimate
below_min = [a for a in analyses if a["band"] and a["base"] < a["band"].band_min]
below_mid = [a for a in analyses if a["compa_ratio"] and a["compa_ratio"] < 0.90]
if below_min or below_mid:
print(f"\n[ REMEDIATION COST ESTIMATE ]")
print(sep)
if below_min:
cost_to_min = sum(a["band"].band_min - a["base"] for a in below_min)
print(f" Cost to bring below-minimum to band min: {fmt(cost_to_min)}/year ({len(below_min)} employees)")
if below_mid:
cost_to_90 = sum(int(a["band"].band_mid * 0.90) - a["base"] for a in below_mid if a["base"] < int(a["band"].band_mid * 0.90))
cost_to_90 = max(0, cost_to_90)
print(f" Cost to bring CR < 0.90 to CR = 0.90: {fmt(cost_to_90)}/year ({len(below_mid)} employees)")
total_payroll_impact = sum(e.base_salary for e in roster.employees)
# Upper bound: an employee below band minimum may be counted in both estimates above.
total_remediation = (cost_to_min if below_min else 0) + (cost_to_90 if below_mid else 0)
print(f"\n Total payroll before remediation: {fmt(total_payroll_impact)}/year")
print(f" Remediation as % of payroll: {total_remediation/total_payroll_impact*100:.1f}%")
print(f"\n{SEP}\n")
def export_csv(roster: CompRoster) -> str:
analyses = [analyze_employee(e, roster) for e in roster.employees]
output = io.StringIO()
writer = csv.writer(output)
writer.writerow(["ID", "Name", "Role", "Level", "Function", "Zone",
"Base", "Bonus Target", "Equity Annual", "Benefits", "Total Comp",
"Compa Ratio", "Band Position", "vs Market P50 %",
"Performance", "Tenure Years", "Last Raise (mo)",
"Gender", "Ethnicity", "Critical Flags", "High Flags"])
for a, e in zip(analyses, roster.employees):
critical_flags = "; ".join(msg for sev, msg in a["flags"] if sev == "CRITICAL")
high_flags = "; ".join(msg for sev, msg in a["flags"] if sev == "HIGH")
writer.writerow([a["id"], a["name"], a["role"], a["level"], a["function"], a["zone"],
a["base"], a["bonus_target"], a["equity_annual"], a["benefits"], a["total_comp"],
a["compa_ratio"], a["band_position"], a["vs_market_p50"],
a["performance"], a["tenure_years"], a["last_raise_months"],
e.gender, e.ethnicity, critical_flags, high_flags])
return output.getvalue()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
def build_sample_roster() -> CompRoster:
roster = CompRoster(
company="AcmeTech (Series A)",
as_of_date=date.today().isoformat(),
funding_stage="Series A",
comp_philosophy_target="P50",
preferred_stock_price=8.50,
)
# Bands (Engineering, P50 target, Tier1 = SF/NYC)
roster.bands = [
BandDefinition("L2", "Engineering", 115_000, 132_000, 155_000, 110_000, 132_000, 155_000, "Tier1"),
BandDefinition("L3", "Engineering", 148_000, 170_000, 198_000, 145_000, 170_000, 198_000, "Tier1"),
BandDefinition("L4", "Engineering", 185_000, 215_000, 248_000, 182_000, 215_000, 250_000, "Tier1"),
BandDefinition("M1", "Engineering", 170_000, 195_000, 225_000, 168_000, 195_000, 225_000, "Tier1"),
BandDefinition("L2", "Engineering", 95_000, 108_000, 125_000, 92_000, 108_000, 126_000, "Tier2"),
BandDefinition("L3", "Engineering", 122_000, 140_000, 162_000, 120_000, 140_000, 162_000, "Tier2"),
BandDefinition("L2", "Sales", 80_000, 92_000, 108_000, 78_000, 92_000, 108_000, "Tier1"),
BandDefinition("L3", "Sales", 95_000, 110_000, 128_000, 93_000, 110_000, 128_000, "Tier1"),
BandDefinition("M1", "Sales", 130_000, 150_000, 172_000, 128_000, 150_000, 172_000, "Tier1"),
BandDefinition("L2", "Product", 125_000, 145_000, 168_000, 123_000, 145_000, 168_000, "Tier1"),
BandDefinition("L3", "Product", 155_000, 178_000, 205_000, 153_000, 178_000, 205_000, "Tier1"),
BandDefinition("L2", "G&A", 85_000, 98_000, 115_000, 83_000, 98_000, 115_000, "Tier1"),
BandDefinition("L3", "G&A", 110_000, 128_000, 148_000, 108_000, 128_000, 148_000, "Tier1"),
]
roster.employees = [
# Engineering — mix of scenarios
Employee("E001", "Aarav Shah", "Senior SWE (Backend)", "L3", "Engineering", "Tier1",
base_salary=168_000, bonus_target_pct=0.0, equity_shares=40_000,
equity_strike=1.50, equity_current_409a=6.80, equity_vest_years_remaining=2.5,
benefits_annual=18_000, gender="M", ethnicity="Asian",
tenure_years=2.5, performance_rating=4, last_raise_months_ago=14,
last_equity_refresh_months_ago=None),
Employee("E002", "Yuki Tanaka", "Senior SWE (Frontend)", "L3", "Engineering", "Tier1",
base_salary=152_000, bonus_target_pct=0.0, equity_shares=30_000,
equity_strike=2.20, equity_current_409a=6.80, equity_vest_years_remaining=0.5,
benefits_annual=18_000, gender="F", ethnicity="Asian",
tenure_years=3.8, performance_rating=5, last_raise_months_ago=11,
last_equity_refresh_months_ago=30),
# Note: Yuki is high performer, near-vested, no recent refresh — flag expected
Employee("E003", "Marcus Johnson", "SWE II (Backend)", "L2", "Engineering", "Tier1",
base_salary=110_000, bonus_target_pct=0.0, equity_shares=15_000,
equity_strike=2.50, equity_current_409a=6.80, equity_vest_years_remaining=3.0,
benefits_annual=15_000, gender="M", ethnicity="Black",
tenure_years=1.2, performance_rating=3, last_raise_months_ago=12,
last_equity_refresh_months_ago=None),
# Note: Below band midpoint, recently hired — developing flag
Employee("E004", "Priya Nair", "Staff SWE", "L4", "Engineering", "Tier1",
base_salary=222_000, bonus_target_pct=0.0, equity_shares=60_000,
equity_strike=0.80, equity_current_409a=6.80, equity_vest_years_remaining=2.0,
benefits_annual=18_000, gender="F", ethnicity="Asian",
tenure_years=4.2, performance_rating=5, last_raise_months_ago=8,
last_equity_refresh_months_ago=8),
Employee("E005", "Tom Rivera", "SWE II (Platform)", "L2", "Engineering", "Tier2",
base_salary=88_000, bonus_target_pct=0.0, equity_shares=12_000,
equity_strike=3.00, equity_current_409a=6.80, equity_vest_years_remaining=2.5,
benefits_annual=14_000, gender="M", ethnicity="Hispanic",
tenure_years=1.8, performance_rating=4, last_raise_months_ago=22,
last_equity_refresh_months_ago=None),
# Note: No raise in 22 months, high performer — flag expected
Employee("E006", "Sarah Kim", "Eng Manager", "M1", "Engineering", "Tier1",
base_salary=192_000, bonus_target_pct=0.10, equity_shares=35_000,
equity_strike=1.20, equity_current_409a=6.80, equity_vest_years_remaining=1.8,
benefits_annual=18_000, gender="F", ethnicity="Asian",
tenure_years=2.8, performance_rating=4, last_raise_months_ago=9,
last_equity_refresh_months_ago=9),
# Sales
Employee("S001", "David Chen", "Account Executive (MM)", "L3", "Sales", "Tier1",
base_salary=105_000, bonus_target_pct=0.50, equity_shares=8_000,
equity_strike=3.50, equity_current_409a=6.80, equity_vest_years_remaining=2.0,
benefits_annual=15_000, gender="M", ethnicity="Asian",
tenure_years=1.5, performance_rating=3, last_raise_months_ago=15,
last_equity_refresh_months_ago=None),
Employee("S002", "Amara Osei", "AE (Mid-Market)", "L3", "Sales", "Tier1",
base_salary=98_000, bonus_target_pct=0.50, equity_shares=6_000,
equity_strike=3.50, equity_current_409a=6.80, equity_vest_years_remaining=2.5,
benefits_annual=15_000, gender="F", ethnicity="Black",
tenure_years=1.0, performance_rating=4, last_raise_months_ago=12,
last_equity_refresh_months_ago=None),
# Note: High performer, significantly below midpoint — flag expected
Employee("S003", "Jordan Blake", "Sales Manager", "M1", "Sales", "Tier1",
base_salary=155_000, bonus_target_pct=0.20, equity_shares=20_000,
equity_strike=2.00, equity_current_409a=6.80, equity_vest_years_remaining=1.5,
benefits_annual=16_000, gender="NB", ethnicity="White",
tenure_years=2.2, performance_rating=3, last_raise_months_ago=10,
last_equity_refresh_months_ago=10),
# Product
Employee("P001", "Nina Patel", "Senior PM", "L3", "Product", "Tier1",
base_salary=176_000, bonus_target_pct=0.10, equity_shares=22_000,
equity_strike=1.80, equity_current_409a=6.80, equity_vest_years_remaining=2.0,
benefits_annual=17_000, gender="F", ethnicity="Asian",
tenure_years=2.0, performance_rating=4, last_raise_months_ago=12,
last_equity_refresh_months_ago=12),
# G&A
Employee("G001", "Chris Mueller", "Finance Manager", "L3", "G&A", "Tier1",
base_salary=125_000, bonus_target_pct=0.10, equity_shares=10_000,
equity_strike=2.80, equity_current_409a=6.80, equity_vest_years_remaining=3.0,
benefits_annual=16_000, gender="M", ethnicity="White",
tenure_years=1.5, performance_rating=3, last_raise_months_ago=15,
last_equity_refresh_months_ago=None),
Employee("G002", "Fatima Al-Hassan", "HR Operations", "L2", "G&A", "Tier1",
base_salary=82_000, bonus_target_pct=0.08, equity_shares=5_000,
equity_strike=4.00, equity_current_409a=6.80, equity_vest_years_remaining=3.5,
benefits_annual=14_000, gender="F", ethnicity="Middle Eastern",
tenure_years=0.8, performance_rating=3, last_raise_months_ago=8,
last_equity_refresh_months_ago=None),
# Note: Below band minimum — critical flag expected
]
return roster
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def load_roster_from_json(path: str) -> CompRoster:
with open(path) as f:
data = json.load(f)
employees = [Employee(**e) for e in data.pop("employees", [])]
bands = [BandDefinition(**b) for b in data.pop("bands", [])]
roster = CompRoster(**data)
roster.employees = employees
roster.bands = bands
return roster
def main():
parser = argparse.ArgumentParser(
description="Compensation Benchmarker — salary analysis and pay equity audit",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
python comp_benchmarker.py # Run sample roster
python comp_benchmarker.py --config roster.json # Load from JSON
python comp_benchmarker.py --export-csv # Output CSV
python comp_benchmarker.py --export-json # Output JSON template
"""
)
parser.add_argument("--config", help="Path to JSON roster file")
parser.add_argument("--export-csv", action="store_true", help="Export analysis as CSV")
parser.add_argument("--export-json", action="store_true", help="Export sample roster as JSON template")
args = parser.parse_args()
if args.config:
roster = load_roster_from_json(args.config)
else:
roster = build_sample_roster()
if args.export_json:
data = asdict(roster)
print(json.dumps(data, indent=2))
return
if args.export_csv:
print(export_csv(roster))
return
print_report(roster)
if __name__ == "__main__":
main()
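For --config, a minimal roster JSON skeleton can be sketched as below. The top-level keys mirror the CompRoster dataclass used above; values are illustrative placeholders, and each entry in "bands" / "employees" would need to supply every field of BandDefinition / Employee (left empty here):

```python
import json

# Skeleton for a roster.json accepted by load_roster_from_json().
# "bands" and "employees" entries must supply every dataclass field.
skeleton = {
    "company": "DemoCo",
    "as_of_date": "2025-01-01",
    "funding_stage": "Seed",
    "comp_philosophy_target": "P50",
    "preferred_stock_price": 5.0,
    "bands": [],
    "employees": [],
}
print(json.dumps(skeleton, indent=2))
```

Running with --export-json on the sample roster prints a fully populated template with the exact field names expected.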

@@ -0,0 +1,572 @@
#!/usr/bin/env python3
"""
Hiring Plan Modeler
===================
Builds hiring plans from business goals with cost projections.
Outputs quarterly headcount plan, cost model, and risk assessment.
Usage:
python hiring_plan_modeler.py # Run with built-in sample data
python hiring_plan_modeler.py --config plan.json # Load from JSON config
python hiring_plan_modeler.py --help
"""
import argparse
import json
import sys
from dataclasses import dataclass, field, asdict
from datetime import datetime, date
from typing import Optional
import csv
import io
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class HireTarget:
"""One planned hire."""
role: str
level: str # L1, L2, L3, L4, M1, M2, M3, VP, C-Suite
function: str # Engineering, Sales, Product, G&A, Marketing, CS
quarter: str # Q1-2025, Q2-2025, etc.
base_salary: int # Annual, USD
bonus_pct: float # % of base (e.g., 0.10 for 10%)
equity_annual_usd: int # Annualized equity value at current 409A
benefits_annual: int # Employer-paid benefits
recruiter_fee_pct: float = 0.20 # Agency fee if used (0 for internal recruiter)
ramp_months: int = 3 # Months to full productivity
priority: str = "High" # High / Medium / Low
business_case: str = ""
open_to_internal: bool = False
@dataclass
class HiringPlan:
company: str
plan_period: str # e.g., "2025 Annual"
current_headcount: int
target_revenue: int # Annual target revenue ($)
current_revenue: int # Current ARR ($)
hires: list[HireTarget] = field(default_factory=list)
# Cost overheads beyond comp
overhead_rate: float = 0.25 # Workspace, software, onboarding overhead as % of base
internal_recruiter_cost: int = 0 # If you have an internal recruiter, annual cost
# ---------------------------------------------------------------------------
# Computation
# ---------------------------------------------------------------------------
def quarter_to_sortkey(q: str) -> tuple[int, int]:
"""Parse 'Q2-2025' → (2025, 2)"""
parts = q.upper().split("-")
if len(parts) == 2:
q_num = int(parts[0].replace("Q", ""))
year = int(parts[1])
return (year, q_num)
return (9999, 9)
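A quick check of the quarter ordering (standalone restatement of the parser above, so it runs on its own):

```python
def quarter_to_sortkey(q: str) -> tuple[int, int]:
    # "Q2-2025" -> (2025, 2); malformed strings sort last.
    parts = q.upper().split("-")
    if len(parts) == 2:
        return (int(parts[1]), int(parts[0].replace("Q", "")))
    return (9999, 9)

quarters = ["Q3-2025", "Q1-2026", "Q1-2025", "bad"]
print(sorted(quarters, key=quarter_to_sortkey))
# -> ['Q1-2025', 'Q3-2025', 'Q1-2026', 'bad']
```

Sorting on (year, quarter) rather than the raw string is what keeps Q4-2025 ahead of Q1-2026.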
def get_quarters(hires: list[HireTarget]) -> list[str]:
"""Return sorted unique quarters from hire list."""
quarters = sorted(set(h.quarter for h in hires), key=quarter_to_sortkey)
return quarters
def compute_hire_costs(hire: HireTarget, overhead_rate: float = 0.25) -> dict:
"""Compute total first-year cost for one hire. Pass plan.overhead_rate to override the default overhead assumption."""
total_comp = hire.base_salary + int(hire.base_salary * hire.bonus_pct) + hire.equity_annual_usd + hire.benefits_annual
recruiter_fee = int(hire.base_salary * hire.recruiter_fee_pct)
overhead = int(hire.base_salary * overhead_rate) # workspace, tools, onboarding
ramp_productivity_cost = int(hire.base_salary * (hire.ramp_months / 12)) # cost during ramp
return {
"base_salary": hire.base_salary,
"target_bonus": int(hire.base_salary * hire.bonus_pct),
"equity_annual": hire.equity_annual_usd,
"benefits": hire.benefits_annual,
"total_comp": total_comp,
"recruiter_fee": recruiter_fee,
"overhead": overhead,
"ramp_cost": ramp_productivity_cost,
"first_year_total": total_comp + recruiter_fee + overhead,
"fully_loaded_first_year": total_comp + recruiter_fee + overhead + ramp_productivity_cost,
}
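A worked example of the cost formulas above, re-derived standalone for one illustrative hire (numbers are assumptions, not from the sample plan; the 25% overhead is the module default):

```python
# One hypothetical hire: $100K base, 10% bonus, $10K equity, $15K benefits,
# 20% agency fee, 3-month ramp.
base, bonus_pct, equity, benefits = 100_000, 0.10, 10_000, 15_000
recruiter_fee_pct, ramp_months = 0.20, 3

total_comp = base + int(base * bonus_pct) + equity + benefits  # 135_000
recruiter_fee = int(base * recruiter_fee_pct)                  # 20_000
overhead = int(base * 0.25)                                    # 25_000
ramp_cost = int(base * (ramp_months / 12))                     # 25_000

first_year = total_comp + recruiter_fee + overhead             # 180_000
fully_loaded = first_year + ramp_cost                          # 205_000
print(first_year, fully_loaded)
# -> 180000 205000
```

The gap between "total comp" ($135K) and "fully loaded" ($205K) is the point of the model: one-time acquisition and ramp costs add roughly 50% on top of ongoing compensation in year one.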
def summarize_by_quarter(plan: HiringPlan) -> dict[str, dict]:
"""Aggregate headcount and costs per quarter."""
quarters = get_quarters(plan.hires)
summary = {}
running_headcount = plan.current_headcount
for q in quarters:
q_hires = [h for h in plan.hires if h.quarter == q]
q_costs = [compute_hire_costs(h) for h in q_hires]
total_comp = sum(c["total_comp"] for c in q_costs)
total_first_year = sum(c["first_year_total"] for c in q_costs)
recruiter_fees = sum(c["recruiter_fee"] for c in q_costs)
running_headcount += len(q_hires)
summary[q] = {
"new_hires": len(q_hires),
"headcount_eop": running_headcount,
"total_annual_comp_added": total_comp,
"total_first_year_cost": total_first_year,
"recruiter_fees": recruiter_fees,
"hires": q_hires,
"costs": q_costs,
}
return summary
def summarize_by_function(plan: HiringPlan) -> dict[str, dict]:
"""Aggregate headcount and costs per function."""
functions: dict[str, dict] = {}
for hire in plan.hires:
fn = hire.function
if fn not in functions:
functions[fn] = {"count": 0, "total_comp": 0, "total_first_year": 0, "roles": []}
costs = compute_hire_costs(hire)
functions[fn]["count"] += 1
functions[fn]["total_comp"] += costs["total_comp"]
functions[fn]["total_first_year"] += costs["first_year_total"]
functions[fn]["roles"].append(hire.role)
return functions
def compute_totals(plan: HiringPlan) -> dict:
all_costs = [compute_hire_costs(h) for h in plan.hires]
total_hires = len(plan.hires)
total_comp = sum(c["total_comp"] for c in all_costs)
total_first_year = sum(c["first_year_total"] for c in all_costs)
total_fully_loaded = sum(c["fully_loaded_first_year"] for c in all_costs)
total_recruiter = sum(c["recruiter_fee"] for c in all_costs)
final_headcount = plan.current_headcount + total_hires
revenue_per_employee = plan.target_revenue / final_headcount if final_headcount > 0 else 0
revenue_per_employee_current = plan.current_revenue / plan.current_headcount if plan.current_headcount > 0 else 0
return {
"total_hires": total_hires,
"final_headcount": final_headcount,
"headcount_growth_pct": ((final_headcount - plan.current_headcount) / plan.current_headcount * 100) if plan.current_headcount > 0 else 0,
"total_annual_comp_added": total_comp,
"total_first_year_cost": total_first_year,
"total_fully_loaded_first_year": total_fully_loaded,
"total_recruiter_fees": total_recruiter,
"revenue_per_employee_target": revenue_per_employee,
"revenue_per_employee_current": revenue_per_employee_current,
"avg_comp_per_hire": total_comp // total_hires if total_hires > 0 else 0,
}
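A quick arithmetic check of the revenue-per-employee ratios computed above, using the sample plan's figures from build_sample_plan() below (32 current employees, 13 planned hires, $3.5M current ARR, $8M target):

```python
# Revenue per employee, before and after the sample plan.
current_rpe = 3_500_000 / 32        # today
target_rpe = 8_000_000 / (32 + 13)  # at end-of-plan headcount of 45
print(round(current_rpe), round(target_rpe))
# -> 109375 177778
```

Here the ratio improves, so the 0.7x decline threshold in assess_risks() would not fire for the sample plan.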
# ---------------------------------------------------------------------------
# Risk assessment
# ---------------------------------------------------------------------------
def assess_risks(plan: HiringPlan, totals: dict) -> list[dict]:
risks = []
# Headcount growth too fast
growth_pct = totals["headcount_growth_pct"]
if growth_pct > 80:
risks.append({
"severity": "HIGH",
"category": "Execution",
"finding": f"Headcount growing {growth_pct:.0f}% this period. "
"Culture and processes rarely scale this fast without breakage.",
"recommendation": "Stagger Q3/Q4 hires. Validate Q1/Q2 cohort is onboarded before next wave."
})
elif growth_pct > 50:
risks.append({
"severity": "MEDIUM",
"category": "Execution",
"finding": f"Headcount growing {growth_pct:.0f}% — significant scaling challenge.",
"recommendation": "Ensure onboarding infrastructure scales. Assign buddy/mentor to each hire."
})
# High concentration in one quarter
quarters = get_quarters(plan.hires)
q_counts = {q: sum(1 for h in plan.hires if h.quarter == q) for q in quarters}
max_q = max(q_counts.values()) if q_counts else 0
if max_q > len(plan.hires) * 0.5 and max_q > 4:
heavy_q = [q for q, c in q_counts.items() if c == max_q][0]
risks.append({
"severity": "MEDIUM",
"category": "Hiring Execution",
"finding": f"More than 50% of hires planned in {heavy_q} ({max_q} hires). "
"Recruiting capacity and onboarding bandwidth may be insufficient.",
"recommendation": "Spread hires across quarters. Hiring pipeline needs to start 6090 days before target start date."
})
# Revenue per employee declining
if totals["revenue_per_employee_target"] < totals["revenue_per_employee_current"] * 0.7:
risks.append({
"severity": "HIGH",
"category": "Financial",
"finding": f"Revenue per employee declining from ${totals['revenue_per_employee_current']:,.0f} to "
f"${totals['revenue_per_employee_target']:,.0f} — a {((totals['revenue_per_employee_target']/totals['revenue_per_employee_current'])-1)*100:.0f}% drop.",
"recommendation": "Validate that revenue model supports this headcount. Is target revenue achievable with this team?"
})
# Low priority hires consuming budget
low_priority_hires = [h for h in plan.hires if h.priority == "Low"]
if low_priority_hires:
lp_cost = sum(compute_hire_costs(h)["first_year_total"] for h in low_priority_hires)
risks.append({
"severity": "MEDIUM",
"category": "Prioritization",
"finding": f"{len(low_priority_hires)} 'Low' priority hires consuming ${lp_cost:,.0f} in first-year costs.",
"recommendation": "Consider deferring Low priority hires to preserve runway. Cut these first if budget tightens."
})
# Hires without business cases
no_case = [h for h in plan.hires if not h.business_case]
if no_case:
risks.append({
"severity": "MEDIUM",
"category": "Governance",
"finding": f"{len(no_case)} hires have no documented business case: {', '.join(h.role for h in no_case[:5])}{'...' if len(no_case) > 5 else ''}",
"recommendation": "Every hire over $80K should have a written business case. What revenue or risk does this role address?"
})
# High recruiter fee exposure
if totals["total_recruiter_fees"] > 100_000:
risks.append({
"severity": "LOW",
"category": "Cost",
"finding": f"${totals['total_recruiter_fees']:,.0f} in recruiter fees. "
"Consider whether internal recruiter investment would be cheaper at this hiring volume.",
"recommendation": f"Internal recruiter at $120150K fully loaded pays off at 34 hires/year vs. agency fees."
})
# No risks — that's itself a flag
if not risks:
risks.append({
"severity": "INFO",
"category": "General",
"finding": "No major risks flagged. Plan appears well-structured.",
"recommendation": "Validate assumptions: time-to-fill estimates, revenue model, and Q1 hiring pipeline status."
})
return risks
# ---------------------------------------------------------------------------
# Formatting / Output
# ---------------------------------------------------------------------------
def fmt(n: int) -> str:
return f"${n:,.0f}"
def pct(n: float) -> str:
return f"{n:.1f}%"
def print_report(plan: HiringPlan):
WIDTH = 72
SEP = "=" * WIDTH
sep = "-" * WIDTH
print(SEP)
print(f" HIRING PLAN: {plan.company}")
print(f" Period: {plan.plan_period} | Generated: {date.today().isoformat()}")
print(SEP)
totals = compute_totals(plan)
q_summary = summarize_by_quarter(plan)
fn_summary = summarize_by_function(plan)
risks = assess_risks(plan, totals)
# Executive summary
print("\n[ EXECUTIVE SUMMARY ]")
print(sep)
print(f" Current headcount: {plan.current_headcount:>5}")
print(f" Planned hires: {totals['total_hires']:>5}")
print(f" Final headcount: {totals['final_headcount']:>5} (+{totals['headcount_growth_pct']:.0f}%)")
print(f" Current ARR: {fmt(plan.current_revenue):>12}")
print(f" Target revenue: {fmt(plan.target_revenue):>12}")
print(f" Revenue/employee now: {fmt(int(totals['revenue_per_employee_current'])):>12}")
print(f" Revenue/employee target: {fmt(int(totals['revenue_per_employee_target'])):>12}")
print()
print(f" Total annual comp added: {fmt(totals['total_annual_comp_added']):>12}")
print(f" Total first-year cost: {fmt(totals['total_first_year_cost']):>12}")
print(f" Fully loaded (w/ ramp): {fmt(totals['total_fully_loaded_first_year']):>12}")
print(f" Recruiter fees: {fmt(totals['total_recruiter_fees']):>12}")
print(f" Avg comp per hire: {fmt(totals['avg_comp_per_hire']):>12}")
# Quarterly breakdown
print(f"\n[ QUARTERLY HEADCOUNT PLAN ]")
print(sep)
print(f" {'Quarter':<10} {'New Hires':>10} {'HC (EOP)':>10} {'Comp Added':>14} {'1yr Cost':>14} {'Recruiter $':>12}")
print(f" {'-'*10} {'-'*10} {'-'*10} {'-'*14} {'-'*14} {'-'*12}")
for q, data in q_summary.items():
print(f" {q:<10} {data['new_hires']:>10} {data['headcount_eop']:>10} "
f"{fmt(data['total_annual_comp_added']):>14} "
f"{fmt(data['total_first_year_cost']):>14} "
f"{fmt(data['recruiter_fees']):>12}")
# By function
print(f"\n[ HEADCOUNT BY FUNCTION ]")
print(sep)
print(f" {'Function':<18} {'Hires':>7} {'Annual Comp':>14} {'1yr Cost':>14}")
print(f" {'-'*18} {'-'*7} {'-'*14} {'-'*14}")
for fn, data in sorted(fn_summary.items(), key=lambda x: -x[1]["count"]):
print(f" {fn:<18} {data['count']:>7} {fmt(data['total_comp']):>14} {fmt(data['total_first_year']):>14}")
# Hire detail
print(f"\n[ HIRE DETAIL ]")
print(sep)
print(f" {'Role':<30} {'Fn':<14} {'Lvl':<6} {'Q':<8} {'Base':>10} {'Total Comp':>12} {'Priority':<8}")
print(f" {'-'*30} {'-'*14} {'-'*6} {'-'*8} {'-'*10} {'-'*12} {'-'*8}")
for h in sorted(plan.hires, key=lambda x: quarter_to_sortkey(x.quarter)):
costs = compute_hire_costs(h)
print(f" {h.role:<30} {h.function:<14} {h.level:<6} {h.quarter:<8} "
f"{fmt(h.base_salary):>10} {fmt(costs['total_comp']):>12} {h.priority:<8}")
if h.business_case:
bc = h.business_case[:60] + "..." if len(h.business_case) > 60 else h.business_case
print(f" {'':>30}{bc}")
# Risk assessment
print(f"\n[ RISK ASSESSMENT ]")
print(sep)
sev_order = {"HIGH": 0, "MEDIUM": 1, "LOW": 2, "INFO": 3}
for risk in sorted(risks, key=lambda r: sev_order.get(r["severity"], 99)):
sev = risk["severity"]
marker = {"HIGH": "⚠ HIGH", "MEDIUM": "◆ MED ", "LOW": "◇ LOW ", "INFO": "· INFO"}[sev]
print(f"\n [{marker}] {risk['category']}")
# Wrap finding
finding = risk["finding"]
words = finding.split()
line = " Finding: "
for w in words:
if len(line) + len(w) + 1 > WIDTH - 2:
print(line)
line = " " + w + " "
else:
line += w + " "
if line.strip():
print(line)
reco = risk["recommendation"]
words = reco.split()
line = " Action: "
for w in words:
if len(line) + len(w) + 1 > WIDTH - 2:
print(line)
line = " " + w + " "
else:
line += w + " "
if line.strip():
print(line)
print(f"\n{SEP}\n")
def export_csv(plan: HiringPlan) -> str:
"""Return CSV of hire detail."""
output = io.StringIO()
writer = csv.writer(output)
writer.writerow(["Role", "Function", "Level", "Quarter", "Priority",
"Base Salary", "Bonus Target", "Equity Annual", "Benefits",
"Total Comp", "Recruiter Fee", "Overhead", "First Year Total",
"Ramp Months", "Open to Internal", "Business Case"])
for h in plan.hires:
c = compute_hire_costs(h)
writer.writerow([h.role, h.function, h.level, h.quarter, h.priority,
h.base_salary, c["target_bonus"], h.equity_annual_usd, h.benefits_annual,
c["total_comp"], c["recruiter_fee"], c["overhead"], c["first_year_total"],
h.ramp_months, h.open_to_internal, h.business_case])
return output.getvalue()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
def build_sample_plan() -> HiringPlan:
"""Sample Series A → B hiring plan."""
plan = HiringPlan(
company="AcmeTech (Series A)",
plan_period="2025 Annual",
current_headcount=32,
current_revenue=3_500_000,
target_revenue=8_000_000,
overhead_rate=0.25,
internal_recruiter_cost=140_000,
)
plan.hires = [
# Q1 — Foundation hires
HireTarget(
role="Staff Software Engineer (Backend)",
level="L4", function="Engineering", quarter="Q1-2025",
base_salary=185_000, bonus_pct=0.0, equity_annual_usd=25_000,
benefits_annual=18_000, recruiter_fee_pct=0.0, ramp_months=2,
priority="High", open_to_internal=True,
business_case="Core API team is bottleneck for 3 roadmap items. Staff-level needed to lead architecture."
),
HireTarget(
role="Account Executive (Mid-Market)",
level="L3", function="Sales", quarter="Q1-2025",
base_salary=95_000, bonus_pct=0.50, equity_annual_usd=10_000,
benefits_annual=15_000, recruiter_fee_pct=0.18, ramp_months=4,
priority="High",
business_case="Pipeline coverage at 1.8x quota. Need 2.5x by Q2. AE adds $600K ARR/year at ramp."
),
HireTarget(
role="Product Designer (Senior)",
level="L3", function="Product", quarter="Q1-2025",
base_salary=145_000, bonus_pct=0.0, equity_annual_usd=18_000,
benefits_annual=18_000, recruiter_fee_pct=0.0, ramp_months=2,
priority="High",
business_case="Single designer for 4 squads. UX debt slowing enterprise deals requiring onboarding improvements."
),
# Q2 — Growth hires
HireTarget(
role="Engineering Manager (Frontend)",
level="M1", function="Engineering", quarter="Q2-2025",
base_salary=175_000, bonus_pct=0.10, equity_annual_usd=22_000,
benefits_annual=18_000, recruiter_fee_pct=0.20, ramp_months=3,
priority="High",
business_case="Frontend team at 7 ICs with no dedicated EM. Performance review debt is high; manager needed."
),
HireTarget(
role="Account Executive (Mid-Market)",
level="L2", function="Sales", quarter="Q2-2025",
base_salary=85_000, bonus_pct=0.50, equity_annual_usd=8_000,
benefits_annual=15_000, recruiter_fee_pct=0.18, ramp_months=4,
priority="High",
business_case="Second AE to reach 2.5x pipeline coverage target."
),
HireTarget(
role="Customer Success Manager",
level="L2", function="Customer Success", quarter="Q2-2025",
base_salary=90_000, bonus_pct=0.15, equity_annual_usd=8_000,
benefits_annual=15_000, recruiter_fee_pct=0.0, ramp_months=2,
priority="Medium",
business_case="CSM:account ratio at 1:60, industry standard 1:30. NRR has dipped 4pts in 2 quarters."
),
HireTarget(
role="Data Engineer",
level="L2", function="Engineering", quarter="Q2-2025",
base_salary=155_000, bonus_pct=0.0, equity_annual_usd=18_000,
benefits_annual=18_000, recruiter_fee_pct=0.0, ramp_months=3,
priority="Medium",
business_case="Analytics infrastructure blocking product analytics, customer dashboards, and board metrics."
),
# Q3 — Scale hires
HireTarget(
role="Senior Software Engineer (Backend)",
level="L3", function="Engineering", quarter="Q3-2025",
base_salary=165_000, bonus_pct=0.0, equity_annual_usd=20_000,
benefits_annual=18_000, recruiter_fee_pct=0.0, ramp_months=2,
priority="High",
business_case="Backend team needs capacity to deliver Q3 roadmap without delaying Q4 items."
),
HireTarget(
role="Head of Marketing",
level="M3", function="Marketing", quarter="Q3-2025",
base_salary=180_000, bonus_pct=0.15, equity_annual_usd=30_000,
benefits_annual=18_000, recruiter_fee_pct=0.20, ramp_months=3,
priority="High",
business_case="No marketing function. 100% of pipeline is outbound. Need inbound by Q1-2026 for Series B."
),
HireTarget(
role="People Operations Manager",
level="M1", function="G&A", quarter="Q3-2025",
base_salary=120_000, bonus_pct=0.10, equity_annual_usd=12_000,
benefits_annual=16_000, recruiter_fee_pct=0.0, ramp_months=2,
priority="Medium",
business_case="Founders spending 8hrs/week on HR ops at 40 employees. Unscalable. First dedicated HR hire."
),
# Q4 — Stretch hires (conditional on revenue milestone)
HireTarget(
role="Senior Software Engineer (Frontend)",
level="L3", function="Engineering", quarter="Q4-2025",
base_salary=160_000, bonus_pct=0.0, equity_annual_usd=18_000,
benefits_annual=18_000, recruiter_fee_pct=0.0, ramp_months=2,
priority="Medium",
business_case="Conditional on Q3 ARR exceeding $5.5M. Frontend team capacity planning for 2026 roadmap."
),
HireTarget(
role="Account Executive (Enterprise)",
level="L4", function="Sales", quarter="Q4-2025",
base_salary=120_000, bonus_pct=0.60, equity_annual_usd=15_000,
benefits_annual=15_000, recruiter_fee_pct=0.20, ramp_months=6,
priority="Low",
business_case="Enterprise motion exploratory. Requires ICP validation in Q2-Q3 before committing."
),
HireTarget(
role="DevOps / Platform Engineer",
level="L3", function="Engineering", quarter="Q4-2025",
base_salary=150_000, bonus_pct=0.0, equity_annual_usd=18_000,
benefits_annual=18_000, recruiter_fee_pct=0.0, ramp_months=3,
priority="Low",
business_case="Platform reliability becoming bottleneck. Conditional on uptime SLA breaches continuing in Q3."
),
]
return plan
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def load_plan_from_json(path: str) -> HiringPlan:
with open(path) as f:
data = json.load(f)
hires = [HireTarget(**h) for h in data.pop("hires", [])]
plan = HiringPlan(**data)
plan.hires = hires
return plan
def main():
parser = argparse.ArgumentParser(
description="Hiring Plan Modeler — build headcount plans with cost projections",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
python hiring_plan_modeler.py # Run sample plan
python hiring_plan_modeler.py --config plan.json # Load from JSON
python hiring_plan_modeler.py --export-csv # Output CSV of hires
python hiring_plan_modeler.py --export-json # Output plan as JSON template
"""
)
parser.add_argument("--config", help="Path to JSON plan file")
parser.add_argument("--export-csv", action="store_true", help="Export hire detail as CSV")
parser.add_argument("--export-json", action="store_true", help="Export sample plan as JSON template")
args = parser.parse_args()
if args.config:
plan = load_plan_from_json(args.config)
else:
plan = build_sample_plan()
if args.export_json:
data = asdict(plan)
print(json.dumps(data, indent=2))
return
if args.export_csv:
print(export_csv(plan))
return
print_report(plan)
if __name__ == "__main__":
main()


@@ -0,0 +1,135 @@
---
name: ciso-advisor
description: "Security leadership for growth-stage companies. Risk quantification in dollars, compliance roadmap (SOC 2/ISO 27001/HIPAA/GDPR), security architecture strategy, incident response leadership, and board-level security reporting. Use when building security programs, justifying security budget, selecting compliance frameworks, managing incidents, assessing vendor risk, or when user mentions CISO, security strategy, compliance roadmap, zero trust, or board security reporting."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: ciso-leadership
updated: 2026-03-05
python-tools: risk_quantifier.py, compliance_tracker.py
frameworks: risk-based-security, zero-trust, defense-in-depth
---
# CISO Advisor
Risk-based security frameworks for growth-stage companies. Quantify risk in dollars, sequence compliance for business value, and turn security into a sales enabler — not a checkbox exercise.
## Keywords
CISO, security strategy, risk quantification, ALE, SLE, ARO, security posture, compliance roadmap, SOC 2, ISO 27001, HIPAA, GDPR, zero trust, defense in depth, incident response, board security reporting, vendor assessment, security budget, cyber risk, program maturity
## Quick Start
```bash
python scripts/risk_quantifier.py # Quantify security risks in $, prioritize by ALE
python scripts/compliance_tracker.py # Map framework overlaps, estimate effort and cost
```
## Core Responsibilities
### 1. Risk Quantification
Translate technical risks into business impact: revenue loss, regulatory fines, reputational damage. Use ALE to prioritize. See `references/security_strategy.md`.
**Formula:** `ALE = SLE × ARO` (Single Loss Expectancy × Annualized Rate of Occurrence). Board language: "This risk has $X expected annual loss. Mitigation costs $Y."
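A minimal sketch of this formula in Python, with illustrative figures; the bundled `scripts/risk_quantifier.py` is the full tool:

```python
def ale(sle_usd: float, aro_per_year: float) -> float:
    """Annualized Loss Expectancy = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle_usd * aro_per_year

# Hypothetical risks: a $2M breach at 40% annual likelihood outranks
# a small but frequent loss when sorted by expected annual loss.
risks = {
    "ransomware": ale(2_000_000, 0.4),
    "stolen laptop": ale(50_000, 1.5),
}
for name, loss in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${loss:,.0f} expected annual loss")
```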
### 2. Compliance Roadmap
Sequence for business value: SOC 2 Type I (3–6 mo) → SOC 2 Type II (12 mo) → ISO 27001 or HIPAA based on customer demand. See `references/compliance_roadmap.md` for timelines and costs.
### 3. Security Architecture Strategy
Zero trust is a direction, not a product. Sequence: identity (IAM + MFA) → network segmentation → data classification. Defense in depth beats single-layer reliance. See `references/security_strategy.md`.
### 4. Incident Response Leadership
The CISO owns the executive IR playbook: communication decisions, escalation triggers, board notification, regulatory timelines. See `references/incident_response.md` for templates.
### 5. Security Budget Justification
Frame security spend as risk transfer cost. A $200K program that prevents a $2M breach carrying a 40% annual probability buys down $800K of expected annual loss. See `references/security_strategy.md`.
### 6. Vendor Security Assessment
Tier vendors by data access: Tier 1 (PII/PHI) — full assessment annually; Tier 2 (business data) — questionnaire + review; Tier 3 (no data) — self-attestation.
## Key Questions a CISO Asks
- "What's our crown jewel data, and who can access it right now?"
- "If we had a breach today, what's our regulatory notification timeline?"
- "Which compliance framework do our top 3 prospects actually require?"
- "What's our blast radius if our largest SaaS vendor is compromised?"
- "We spent $X on security last year — what specific risks did that reduce?"
## Security Metrics
| Category | Metric | Target |
|----------|--------|--------|
| Risk | ALE coverage (mitigated risk / total risk) | > 80% |
| Detection | Mean Time to Detect (MTTD) | < 24 hours |
| Response | Mean Time to Respond (MTTR) | < 4 hours |
| Compliance | Controls passing audit | > 95% |
| Hygiene | Critical patches within SLA | > 99% |
| Access | Privileged accounts reviewed quarterly | 100% |
| Vendor | Tier 1 vendors assessed annually | 100% |
| Training | Phishing simulation click rate | < 5% |
## Red Flags
- Security budget justified by "industry benchmarks" rather than risk analysis
- Certifications pursued before basic hygiene (patching, MFA, backups)
- No documented asset inventory — can't protect what you don't know you have
- IR plan exists but has never been tested (tabletop or live drill)
- Security team reports to IT, not executive level — misaligned incentives
- Single vendor for identity + endpoint + email — one breach, total exposure
- Security questionnaire backlog > 30 days — silently losing enterprise deals
## Integration with Other C-Suite Roles
| When... | CISO works with... | To... |
|---------|--------------------|-------|
| Enterprise sales | CRO | Answer questionnaires, unblock deals |
| New product features | CTO/CPO | Threat modeling, security review |
| Compliance budget | CFO | Size program against risk exposure |
| Vendor contracts | Legal/COO | Security SLAs and right-to-audit |
| M&A due diligence | CEO/CFO | Target security posture assessment |
| Incident occurs | CEO/Legal | Response coordination and disclosure |
## Detailed References
- `references/security_strategy.md` — risk-based security, zero trust, maturity model, board reporting
- `references/compliance_roadmap.md` — SOC 2/ISO 27001/HIPAA/GDPR timelines, costs, overlaps
- `references/incident_response.md` — executive IR playbook, communication templates, tabletop design
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- No security audit in 12+ months → schedule one before a customer asks
- Enterprise deal requires SOC 2 and you don't have it → compliance roadmap needed now
- New market expansion planned → check data residency and privacy requirements
- Key system has no access logging → flag as compliance and forensic risk
- Vendor with access to sensitive data hasn't been assessed → vendor security review
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Assess our security posture" | Risk register with quantified business impact (ALE) |
| "We need SOC 2" | Compliance roadmap with timeline, cost, effort, quick wins |
| "Prep for security audit" | Gap analysis against target framework with remediation plan |
| "We had an incident" | IR coordination plan + communication templates |
| "Security board section" | Risk posture summary, compliance status, incident report |
## Reasoning Technique: Risk-Based Reasoning
Evaluate every decision through probability × impact. Quantify risks in business terms (dollars, not severity labels). Prioritize by expected annual loss.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`


@@ -0,0 +1,370 @@
# Compliance Roadmap Reference
## Decision Framework: Which Framework First?
**Start here — who are your customers?**
```
Enterprise SaaS (B2B, US market) → SOC 2 Type II first
Healthcare / health data → HIPAA + SOC 2 together
EU customers or EU-resident data → GDPR (non-optional if applicable)
EU enterprise sales → ISO 27001 + GDPR
Government / defense → FedRAMP / CMMC (separate scope)
All of the above (Series B+) → Multi-framework efficiency approach
```
**The sequencing principle:** SOC 2 Type I is the fastest proof of intent (3–6 months). Type II is the credibility signal (12 months). Everything else builds on your control library.
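The decision tree above can be sketched as a lookup. The profile labels here are made up for illustration, not any real API:

```python
def first_framework(profile: set) -> str:
    """Return the framework to pursue first, per the customer-driven tree above."""
    if "government" in profile:
        return "FedRAMP / CMMC"
    if "health-data" in profile:
        return "HIPAA + SOC 2"
    if "eu-enterprise" in profile:
        return "ISO 27001 + GDPR"
    if "eu-data" in profile:
        return "GDPR"          # non-optional whenever EU data is in scope
    return "SOC 2 Type II"     # default for US-market B2B SaaS

print(first_framework({"b2b-saas", "us-enterprise"}))
```

Note that GDPR is a legal obligation, not a choice, so it layers on top of whichever framework this returns.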
---
## 1. SOC 2
### What It Is
SOC 2 is an attestation (not a certification) that your controls meet the AICPA Trust Service Criteria. An independent CPA firm audits your controls and issues a report.
- **Type I:** Controls are suitably designed at a point in time (snapshot). Lower credibility but faster.
- **Type II:** Controls operated effectively over a period of time (minimum 6 months). This is what enterprise buyers want.
### Trust Service Criteria (TSC)
You must include **Security** (CC). Others are optional:
| Criteria | When to add |
|---|---|
| Security (CC) | Always required |
| Availability | If uptime SLAs are contractual |
| Confidentiality | If you process confidential third-party data |
| Processing Integrity | If accuracy of processing is critical (fintech, data processing) |
| Privacy | If you make privacy commitments beyond GDPR/CCPA scope |
Most startups: **Security + Availability** is sufficient.
### Timeline: SOC 2 Type I
| Phase | Duration | Activities |
|---|---|---|
| Readiness assessment | 2–4 weeks | Gap analysis against CC criteria, identify control owners |
| Policy documentation | 4–6 weeks | Write ~15–20 policies (acceptable use, access control, change management, etc.) |
| Control implementation | 4–8 weeks | Deploy technical controls, fix gaps identified in readiness |
| Evidence collection | 2–4 weeks | Screenshots, logs, configs — auditor will sample these |
| Audit fieldwork | 2–4 weeks | CPA firm reviews evidence, interviews control owners |
| Report issuance | 2–4 weeks | Report issued, reviewed, shared with customers |
| **Total** | **3–6 months** | — |
### Timeline: SOC 2 Type II (after Type I)
| Phase | Duration | Notes |
|---|---|---|
| Observation period | 6–12 months | Controls must operate consistently — no exceptions |
| Audit fieldwork | 4–6 weeks | Auditor samples evidence across full period |
| Report issuance | 2–4 weeks | — |
| **Total from Type I** | **9–18 months** | Faster if Type I was clean |
### Cost Estimates
| Item | SOC 2 Type I | SOC 2 Type II |
|---|---|---|
| Audit firm fees | $15,000–$35,000 | $25,000–$60,000 |
| Compliance platform (Vanta, Drata, Secureframe) | $12,000–$30,000/yr | Same platform |
| External counsel / vCISO | $10,000–$30,000 | $5,000–$15,000 maintenance |
| Internal time (eng + ops) | 200–400 hours | 100–200 hours/yr |
| **Total first year** | **$40,000–$100,000** | **+$30,000–$75,000** |
**Cost optimization tips:**
- Use a compliance platform (Vanta, Drata, Secureframe) — automated evidence collection halves audit cost
- Choose a mid-tier audit firm; Big 4 is overkill for startups
- Type I and Type II with same auditor = continuity discount
### Common Failure Modes
1. Controls documented but not operating (access reviews on paper only)
2. Exceptions during observation period (one admin account without MFA = finding)
3. No formal security awareness training (required for CC criteria)
4. Change management not followed (no ticket for that production change)
5. Vendor risk management missing (you must assess your critical vendors)
---
## 2. ISO 27001
### What It Is
ISO 27001 is an internationally recognized certification for an Information Security Management System (ISMS). Unlike SOC 2, it's a certification (pass/fail), not an attestation report. Issued by accredited certification bodies (BSI, Bureau Veritas, DNV, TÜV).
**Why ISO 27001 over SOC 2:** EU enterprise buyers, government contracts, and global markets often prefer or require ISO 27001. It's geographically neutral.
### Scope Decision
ISO 27001 scope is flexible — you can certify a subset of the organization.
- **Narrow scope:** The production environment only — fastest, cheapest
- **Full scope:** Entire organization — most credibility, highest effort
- **Recommended for startups:** Production environment + key business processes
### Certification Timeline
| Phase | Duration | Activities |
|---|---|---|
| Gap analysis | 2–4 weeks | Assess current state vs. 93 controls in Annex A |
| ISMS design | 4–8 weeks | Scope, risk methodology, SoA (Statement of Applicability) |
| Policy and procedure development | 6–10 weeks | Mandatory documents: risk treatment plan, asset register, ISMS policy |
| Risk assessment | 4–6 weeks | Identify, analyze, evaluate risks; produce risk register |
| Control implementation | 8–16 weeks | Implement gaps from risk assessment |
| Internal audit | 2–4 weeks | First internal audit of ISMS |
| Management review | 1–2 weeks | Leadership sign-off on ISMS |
| Stage 1 audit (documentation) | 1–2 weeks | Certification body reviews docs and scope |
| Stage 2 audit (implementation) | 1–2 weeks | Certification body verifies controls are operating |
| Certification issued | 1–2 weeks | Certificate valid for 3 years with annual surveillance audits |
| **Total** | **9–18 months** | — |
### Cost Estimates
| Item | Cost |
|---|---|
| Certification body fees (Stage 1 + Stage 2) | $15,000–$40,000 |
| Annual surveillance audits | $8,000–$20,000/yr |
| vCISO / consultant (if not in-house) | $30,000–$80,000 |
| GRC platform | $10,000–$25,000/yr |
| Internal time | 400–800 hours |
| **Total first year** | **$55,000–$150,000** |
### Mandatory ISO 27001:2022 Documents
- ISMS scope document
- Information security policy
- Risk assessment methodology
- Risk register with risk treatment plan
- Statement of Applicability (SoA)
- Asset inventory
- Competence and awareness records
- Internal audit reports
- Management review minutes
- Nonconformity and corrective action records
---
## 3. HIPAA for Health Tech Startups
### When HIPAA Applies
HIPAA applies if you are a **Covered Entity** (healthcare provider, health plan, clearinghouse) or a **Business Associate** (you process, store, or transmit Protected Health Information on behalf of a Covered Entity).
**Key trigger:** If your product touches patient data in any way and a US healthcare provider uses your product, you are likely a Business Associate. You must sign a **BAA (Business Associate Agreement)** with each Covered Entity customer.
### HIPAA Rule Structure
| Rule | Focus | Key Requirements |
|---|---|---|
| Privacy Rule | How PHI can be used and disclosed | Minimum necessary, patient rights, notice of privacy practices |
| Security Rule | Technical and physical safeguards for ePHI | Required and addressable safeguards |
| Breach Notification Rule | What to do if PHI is breached | Timing and content of breach notifications |
### Security Rule: Required vs. Addressable
**Required safeguards** must be implemented exactly as specified. **Addressable safeguards** must either be implemented, or you must document why an equivalent alternative measure was used instead.
**Key Required Safeguards:**
- Unique user IDs (no shared logins)
- Emergency access procedure
- Audit controls (logging access to ePHI)
- Transmission security (encryption in transit)
- Person or entity authentication
**Key Addressable Safeguards (implement or document why not):**
- Automatic logoff
- Encryption and decryption (encryption at rest — despite being "addressable," regulators expect it)
- Audit review procedures
- Security reminders and training
### HIPAA Compliance Timeline
| Phase | Duration | Activities |
|---|---|---|
| Risk analysis | 4–6 weeks | Document all PHI flows, assess risks to PHI — **required by law** |
| Policy development | 4–8 weeks | Privacy policies, breach notification, workforce training |
| Technical safeguard implementation | 4–12 weeks | Encryption, audit logging, access controls, BAA templates |
| Workforce training | 2–4 weeks | Annual HIPAA training for all staff with PHI access |
| BAA execution | Ongoing | Execute with all vendors who process PHI |
| **Total** | **4–8 months** | — |
### Cost Estimates
| Item | Cost |
|---|---|
| Initial risk analysis (consultant) | $15,000–$40,000 |
| Policy development | $8,000–$20,000 |
| Technical implementation | $20,000–$60,000 |
| Annual training and maintenance | $5,000–$15,000/yr |
| HIPAA compliance platform | $10,000–$20,000/yr |
| **Total first year** | **$45,000–$130,000** |
### HIPAA Penalties (Why This Matters)
| Violation Category | Penalty per Violation | Annual Cap |
|---|---|---|
| Unaware | $100–$50,000 | $25,000 |
| Reasonable cause | $1,000–$50,000 | $100,000 |
| Willful neglect (corrected) | $10,000–$50,000 | $250,000 |
| Willful neglect (not corrected) | $50,000 | $1,500,000 |
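The cap arithmetic behind this table, as a quick sketch. Figures are taken from the table above; actual penalties are set by HHS per case:

```python
def hipaa_exposure(per_violation_usd: float, violations: int, annual_cap_usd: float) -> float:
    """Per-violation penalty times count, clipped at the annual cap for that category."""
    return min(per_violation_usd * violations, annual_cap_usd)

# 100 "reasonable cause" violations at $1,000 each hit the $100K cap exactly.
print(hipaa_exposure(1_000, 100, 100_000))
# Willful neglect (not corrected): 40 x $50K = $2M, capped at $1.5M.
print(hipaa_exposure(50_000, 40, 1_500_000))
```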
---
## 4. GDPR Compliance Program
### When GDPR Applies
GDPR applies if you:
- Are established in the EU/EEA
- Process personal data of EU/EEA residents (regardless of your location)
- Offer goods or services to EU residents
- Monitor the behavior of EU residents
**Key point for US startups:** If you have EU users or EU employees, GDPR applies to you.
### Core GDPR Principles (Build These In)
1. **Lawfulness, fairness, transparency** — have a legal basis for every processing activity
2. **Purpose limitation** — collect data for specified, explicit purposes only
3. **Data minimization** — collect only what you need
4. **Accuracy** — keep data accurate
5. **Storage limitation** — delete data when no longer needed
6. **Integrity and confidentiality** — appropriate security measures
7. **Accountability** — demonstrate compliance
### Legal Bases for Processing
| Basis | When to use |
|---|---|
| Consent | Marketing, non-essential cookies, optional features |
| Contract | Processing necessary to deliver your service |
| Legitimate interests | Analytics, fraud prevention, security (requires LIA) |
| Legal obligation | Compliance with legal requirements |
| Vital interests | Emergency situations only |
**Avoid over-relying on consent** — it must be freely given, specific, informed, and unambiguous. Contractual basis is more robust for core product data.
### GDPR Compliance Checklist
**Governance:**
- [ ] Data Protection Officer (DPO) appointed (required for large-scale processing or sensitive data)
- [ ] Record of Processing Activities (RoPA) maintained
- [ ] Data Protection Impact Assessments (DPIA) for high-risk processing
**Rights Management (respond within 1 month):**
- [ ] Right of access (data subject access requests — DSARs)
- [ ] Right to rectification
- [ ] Right to erasure ("right to be forgotten")
- [ ] Right to data portability
- [ ] Right to object to processing
**Technical Measures:**
- [ ] Privacy by design in product development
- [ ] Data minimization enforced
- [ ] Encryption at rest and in transit
- [ ] Pseudonymization where possible
- [ ] Retention policies and automated deletion
**Vendor Management:**
- [ ] Data Processing Agreements (DPAs) with all processors
- [ ] Standard Contractual Clauses (SCCs) for non-EU transfers
**Breach Notification:**
- [ ] Notify supervisory authority within 72 hours of awareness
- [ ] Notify affected individuals if high risk to their rights and freedoms
### GDPR Compliance Timeline
| Phase | Duration | Activities |
|---|---|---|
| Data mapping | 3–6 weeks | Map all personal data flows: collect, store, process, share, delete |
| Legal basis review | 2–4 weeks | Assign legal basis to each processing activity |
| Policy updates | 4–6 weeks | Privacy policy, cookie policy, employee data notices |
| DPA execution | 2–4 weeks | Execute DPAs with all processors (SaaS vendors, cloud providers) |
| Technical controls | 4–12 weeks | Consent management, data subject rights automation, retention |
| Staff training | 2–4 weeks | GDPR awareness for all staff |
| **Total** | **3–6 months** | — |
### GDPR Fines
- **Standard violations:** Up to €10M or 2% of global annual revenue
- **Major violations** (basic principles, consent, data subject rights): Up to €20M or 4% of global annual revenue
- **Highest ever fine:** Meta, €1.2B (2023, data transfers to US)
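The fine ceiling rule ("whichever is higher") as arithmetic, a sketch only:

```python
def gdpr_max_fine(global_revenue_eur: float, major: bool = True) -> float:
    """Maximum GDPR fine: flat ceiling or % of global annual revenue, whichever is higher."""
    flat, pct = (20_000_000, 0.04) if major else (10_000_000, 0.02)
    return max(flat, global_revenue_eur * pct)

# At EUR 100M revenue, 4% is EUR 4M, so the EUR 20M flat ceiling dominates.
print(gdpr_max_fine(100_000_000))
# At EUR 2B revenue, 4% (EUR 80M) dominates the flat ceiling.
print(gdpr_max_fine(2_000_000_000))
```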
---
## 5. Multi-Framework Efficiency
### Control Overlap Analysis
The same underlying controls satisfy multiple frameworks. Build once, certify multiple times.
**Core Control Domain Overlap:**
| Control Domain | SOC 2 | ISO 27001 | HIPAA | GDPR |
|---|---|---|---|---|
| Access control / IAM | CC6 | A.5.15–A.5.18 | §164.312(a) | Art. 32 |
| Encryption at rest/transit | CC6.7 | A.8.24 | §164.312(a)(2)(iv) | Art. 32 |
| Audit logging | CC7.2 | A.8.15, A.8.17 | §164.312(b) | Art. 32 |
| Incident response | CC7.3–CC7.5 | A.5.24–A.5.28 | §164.308(a)(6) | Art. 33–34 |
| Vendor/third-party mgmt | CC9 | A.5.19–A.5.22 | §164.308(b) | Art. 28 |
| Risk assessment | CC3 | Clause 6.1 | §164.308(a)(1) | Art. 32 |
| Security training | CC1.4 | A.6.3, A.6.8 | §164.308(a)(5) | Art. 39 |
| Business continuity | A1 | A.5.29–A.5.30 | §164.308(a)(7) | Art. 32 |
| Data classification | CC6.1 | A.5.9–A.5.13 | §164.514 | Art. 5(1)(c) |
| Change management | CC8 | A.8.32 | §164.312(c) | Art. 25 |
**Efficiency Rule:** If you build SOC 2 controls correctly, you're ~65–75% of the way to ISO 27001 and ~70% of the way to HIPAA. Don't rebuild — extend.
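A few rows of the overlap table encoded as data, to show how a shared control library can be queried per framework. Clause IDs are copied from the table; the domain keys are illustrative:

```python
CROSSWALK = {
    "access-control":    {"SOC2": "CC6",         "ISO27001": "A.5.15-A.5.18", "HIPAA": "164.312(a)",    "GDPR": "Art. 32"},
    "audit-logging":     {"SOC2": "CC7.2",       "ISO27001": "A.8.15, A.8.17","HIPAA": "164.312(b)",    "GDPR": "Art. 32"},
    "incident-response": {"SOC2": "CC7.3-CC7.5", "ISO27001": "A.5.24-A.5.28", "HIPAA": "164.308(a)(6)", "GDPR": "Art. 33-34"},
}

def clauses_for(framework: str) -> dict:
    """Which clause each control domain satisfies in the given framework."""
    return {domain: refs[framework] for domain, refs in CROSSWALK.items() if framework in refs}

print(clauses_for("HIPAA"))
```

Build each control once, then point every auditor at the same row.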
### Recommended Sequencing by Company Profile
**B2B SaaS (US-focused):**
```
Month 0–6: SOC 2 Type I → unblocks early enterprise deals
Month 6–18: SOC 2 Type II → enterprise table stakes
Month 18–30: ISO 27001 → EU market expansion
(GDPR should be woven in from month 0 if any EU data)
```
**HealthTech (US):**
```
Month 0–8: HIPAA compliance + BAA readiness → enables healthcare customers
Month 6–18: SOC 2 Type II → enterprise IT requirements on top of HIPAA
Month 18+: ISO 27001 if entering European market
```
**EU-founded SaaS:**
```
Month 0–3: GDPR compliance → legal requirement, not optional
Month 3–12: ISO 27001 → EU enterprise default expectation
Month 12–24: SOC 2 → US market expansion
```
**HealthTech (EU):**
```
Concurrent: GDPR + ISO 27001 (strong overlap with MDR/IVDR security requirements)
Month 12+: HIPAA if entering US market
```
### Shared Evidence Model
Build your evidence library once. Tag each piece of evidence by framework:
```
evidence/
├── access_control/
│ ├── iam_policy.pdf [SOC2:CC6, ISO:A5.15, HIPAA:164.312a]
│ ├── mfa_screenshot_Q1.png [SOC2:CC6, ISO:A8.5, HIPAA:164.312d]
│ └── access_review_log.xlsx [SOC2:CC6, ISO:A5.18, HIPAA:164.308a]
├── encryption/
│ ├── kms_config.png [SOC2:CC6.7, ISO:A8.24, HIPAA:164.312e]
│ └── tls_policy.md [SOC2:CC6.7, ISO:A8.24, HIPAA:164.312e]
└── incident_response/
├── ir_plan.pdf [SOC2:CC7, ISO:A5.24, HIPAA:164.308a6]
└── tabletop_log.pdf [SOC2:CC7, ISO:A5.26, HIPAA:164.308a6]
```
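The tagging idea above can be sketched as a simple index; the filenames and tags mirror the tree and are illustrative:

```python
# One evidence library, tagged per framework; filter it per audit.
EVIDENCE = {
    "iam_policy.pdf": {"SOC2:CC6", "ISO:A5.15", "HIPAA:164.312a"},
    "kms_config.png": {"SOC2:CC6.7", "ISO:A8.24", "HIPAA:164.312e"},
    "ir_plan.pdf":    {"SOC2:CC7", "ISO:A5.24", "HIPAA:164.308a6"},
}

def for_audit(framework: str) -> list:
    """All artifacts carrying at least one tag for the given framework."""
    prefix = framework + ":"
    return sorted(f for f, tags in EVIDENCE.items()
                  if any(t.startswith(prefix) for t in tags))

print(for_audit("ISO"))
```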
### GRC Platform Comparison
| Platform | Best For | Price/yr | SOC 2 | ISO 27001 | HIPAA | GDPR |
|---|---|---|---|---|---|---|
| Vanta | Fast SOC 2, US startups | $15–30K | ✅ | ✅ | ✅ | ✅ |
| Drata | Automation depth | $18–35K | ✅ | ✅ | ✅ | ✅ |
| Secureframe | Cost-effective | $10–20K | ✅ | ✅ | ✅ | ✅ |
| Sprinto | SMB, global | $12–25K | ✅ | ✅ | ✅ | ✅ |
| Tugboat Logic | Mid-market | $20–40K | ✅ | ✅ | ✅ | ✅ |
| Manual | Budget-constrained | $0 + time | ✅ | ✅ | ✅ | ✅ |
**Recommendation:** For Series A startups, Vanta or Drata pays for itself in reduced auditor fees and internal time savings. Budget $15–25K/year.
### Compliance Maintenance Annual Budget
| Item | SOC 2 | ISO 27001 | HIPAA | GDPR |
|---|---|---|---|---|
| Annual audit / surveillance | $25–60K | $8–20K | n/a (self-assessed) | n/a (self-assessed) |
| GRC platform | $15–30K | Shared | Shared | Shared |
| Annual training | $3–8K | Shared | Shared | Shared |
| Policy review | $2–5K | $2–5K | $2–5K | $2–5K |
| **Total ongoing** | **$45–103K/yr** | **+$10–25K/yr** | **+$5–15K/yr** | **+$5–15K/yr** |


@@ -0,0 +1,350 @@
# Incident Response Reference (Executive Playbook)
This is the executive IR playbook — strategic decisions, communication, and leadership during incidents. For technical playbooks (containment procedures, forensics), see your SOC runbooks.
---
## 1. Incident Classification
### Severity Levels
| Severity | Definition | Examples | Response Time | Escalation |
|---|---|---|---|---|
| SEV-1 (Critical) | Confirmed breach, data exfil, ransomware, production down | Active ransomware, confirmed data theft, complete service outage | Immediate (< 1 hour) | CEO, board within 24 hrs |
| SEV-2 (High) | Suspected breach, significant security event, extended outage | Credential compromise suspected, DDoS, 4-hour+ outage | < 4 hours | CEO, legal within 48 hrs |
| SEV-3 (Medium) | Security event with limited impact, short outage | Phishing success (contained), brief outage, single system compromise | < 24 hours | CISO-owned, weekly rollup |
| SEV-4 (Low) | Minor security event, near-miss | Failed phishing attempt, minor policy violation | < 72 hours | Team-owned |
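A rough triage helper reflecting the table's thresholds. The flags and cutoffs are assumptions; real classification needs human judgment:

```python
def classify(confirmed_breach=False, suspected_breach=False,
             complete_outage=False, outage_hours=0.0) -> str:
    """Map incident facts to a severity level per the table above."""
    if confirmed_breach or complete_outage:
        return "SEV-1"   # immediate response, CEO and board within 24 hrs
    if suspected_breach or outage_hours >= 4:
        return "SEV-2"   # < 4 hour response, CEO and legal within 48 hrs
    if outage_hours > 0:
        return "SEV-3"   # CISO-owned, weekly rollup
    return "SEV-4"       # team-owned

print(classify(suspected_breach=True))
```

Classify early with imperfect information, then re-classify as facts firm up, as Phase 1 below instructs.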
### Breach vs. Security Incident
**Security incident:** Unplanned event affecting security — may or may not involve data.
**Data breach:** Confirmed unauthorized access to personal data — triggers regulatory notification obligations.
**Critical distinction for response planning:** A ransomware attack is an incident. If data was exfiltrated before encryption, it's also a breach. Assume breach until proven otherwise.
---
## 2. Executive IR Plan
### Phase 1: Detection & Initial Assessment (0–2 hours for SEV-1)
**Immediate actions (CISO):**
1. Receive alert from SOC/monitoring system or team member report
2. Make initial severity classification — don't wait for perfect information
3. Activate incident response team (IR lead, legal counsel, comms lead)
4. Create incident war room (dedicated Slack channel, video bridge, shared document)
5. **Start the clock** — document the exact time of discovery (regulatory notification timelines run from here)
6. Begin chain of custody documentation if forensics may be needed
**Executive notification trigger (within 1 hour for SEV-1):**
- Notify CEO: incident status, initial severity, IR team activated
- Put legal counsel on notice — don't wait to determine if breach occurred
- If public company: notify General Counsel immediately (potential disclosure obligations)
**What you do NOT do in Phase 1:**
- Do not notify customers yet (confirm scope first)
- Do not delete or modify any logs or systems (evidence preservation)
- Do not make public statements
- Do not speculate about cause or scope
### Phase 2: Containment & Assessment (2–24 hours for SEV-1)
**Executive decisions required:**
- **Scope authorization:** Approve IR firm engagement (have a retainer in place)
- **System isolation:** Authorize taking systems offline if needed (revenue vs. evidence tradeoff)
- **Evidence preservation:** Authorize forensic image capture
- **Communication timing:** When to notify customers/partners (legal drives this)
**Board notification (for SEV-1/2):**
- Notify board chair / audit committee chair within 24 hours for SEV-1
- Board notification format: what we know, what we don't know, what we're doing, next update time
- Do not speculate on financial impact in board notification until known
**Legal assessment (with counsel):**
- Determine if personal data was involved
- Identify applicable notification laws (GDPR 72-hour, state breach notification, HIPAA 60-day)
- Assess litigation risk (document with privilege from this point)
- Evaluate cyber insurance policy coverage and notification requirements
### Phase 3: Notification & Communication (24–72 hours for SEV-1)
**Notification decision matrix:**
| Audience | Trigger | Timeline | Owner |
|---|---|---|---|
| Board | SEV-1/2 confirmed | < 24 hours | CEO/CISO |
| Regulators (GDPR) | Personal data breach confirmed | < 72 hours from awareness | Legal + CISO |
| Regulators (HIPAA) | PHI breach confirmed | < 60 days (early notice to HHS ASAP) | Legal + CISO |
| State regulators (US) | State breach notification laws vary | 30–90 days depending on state | Legal |
| Enterprise customers | Data confirmed in scope | As soon as practical after legal review | CEO/CRO |
| All customers | Data potentially in scope | After regulators notified | CEO/Comms |
| Media | Proactive or reactive | After notifying affected parties | CEO/Comms |
| Cyber insurer | Incident confirmed | Per policy terms (often 48–72 hours) | CFO/Legal |
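The regulatory clocks in this matrix can all be tracked from the moment of awareness. A sketch using the headline deadlines only; state laws and insurance policy terms vary, so confirm with counsel:

```python
from datetime import datetime, timedelta

# Headline notification windows from the matrix above.
DEADLINES = {
    "GDPR supervisory authority": timedelta(hours=72),
    "HIPAA (HHS)": timedelta(days=60),
    "Cyber insurer (typical policy)": timedelta(hours=48),
}

def notification_deadlines(awareness: datetime) -> dict:
    """Due-by timestamps for each audience, keyed off time of awareness."""
    return {who: awareness + window for who, window in DEADLINES.items()}

for who, due in notification_deadlines(datetime(2025, 3, 1, 9, 0)).items():
    print(f"{who}: notify by {due:%Y-%m-%d %H:%M}")
```

This is why Phase 1 insists on documenting the exact time of discovery: every deadline above is computed from it.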
### Phase 4: Recovery (Ongoing)
**Executive decisions:**
- Approve recovery timeline and communicate to customers
- Determine customer compensation or remediation (if applicable)
- Authorize security improvements identified during incident
- Decide on public disclosure beyond mandatory reporting
### Phase 5: Post-Incident Review (Within 30 days)
Covered in Section 5 of this document.
---
## 3. Communication Templates
### Board/Executive Notification (Initial — Hour 1)
**Subject:** [CONFIDENTIAL] Security Incident — Immediate Notification
---
We have identified a security incident as of [DATE/TIME].
**Current status:** [Brief factual description — what we know happened]
**Severity assessment:** SEV-[1/2/3]
**What we do not yet know:**
- [List unknowns — scope of impact, whether data was accessed, root cause]
**Actions taken so far:**
- IR team activated at [time]
- Legal counsel notified
- [Specific containment actions if applicable]
**Next update:** [Specific time, e.g., "in 4 hours or when we have material new information"]
**Who is managing this:** [CISO name] leads technical response; [CEO name] owns executive decisions. Contact: [CISO mobile]
---
### Customer Notification (After Legal Review)
**Subject:** Important Security Notice — [Company Name]
---
We are writing to inform you of a security incident that may have affected your data.
**What happened:**
On [DATE], we detected [brief, factual description of the incident — e.g., "unauthorized access to our systems"]. We identified this on [DISCOVERY DATE] and immediately launched an investigation.
**What information was involved:**
Based on our investigation, the following types of information may have been accessed: [list data types — e.g., names, email addresses, [if applicable: payment card information]].
Your [specific data types] [were / were not] affected.
**What we are doing:**
We have [list specific actions: engaged leading cybersecurity firm, notified relevant authorities, implemented additional security controls, etc.].
**What you can do:**
- [Specific actionable steps for customers]
- Monitor your accounts for unusual activity
- [If passwords: reset your password at X]
- [If payment data: contact your bank to monitor for unauthorized charges]
- Contact our dedicated support line at [contact] with any concerns
**For more information:**
We have set up a dedicated resource page at [URL]. Our support team is available at [contact].
We take the security of your data extremely seriously and deeply regret this incident occurred.
[CEO/CISO Name]
[Title], [Company Name]
---
### Regulator Notification — GDPR (72-hour requirement)
**To:** [Relevant Supervisory Authority — e.g., BfDI (Germany), CNIL (France), ICO (UK)]
**Subject:** Personal Data Breach Notification — [Company Name] — [Reference Number if applicable]
---
**1. Nature of the breach:**
[Description of what occurred, including how it happened]
**2. Categories and approximate number of data subjects concerned:**
[e.g., "Approximately [X] customers whose [name, email, account data] may have been accessed"]
**3. Categories and approximate number of personal data records concerned:**
[e.g., "Approximately [X] records containing [data categories]"]
**4. Likely consequences of the breach:**
[Risk assessment: what harm could data subjects face?]
**5. Measures taken or proposed:**
[Containment actions, remediation plan, customer notification plan]
**6. Contact details of the Data Protection Officer or other contact point:**
[Name, role, email, phone]
**Note:** This is an initial notification; we will provide supplemental information as our investigation continues.
---
### Media Statement (Reactive — When Contacted)
"[Company Name] is aware of a security incident that we identified on [date]. We immediately activated our incident response team and launched a comprehensive investigation. We have notified affected customers and relevant regulatory authorities as required. The security and privacy of our customers' data is our top priority, and we are committed to transparency as our investigation proceeds. We will provide updates at [URL]. We cannot provide additional details at this time to protect the integrity of our investigation."
**What not to say to media:**
- Number of affected users (until confirmed and disclosed to customers first)
- Cause of the incident (until investigation is complete)
- Financial impact (speculation creates liability)
- Anything that could be construed as minimizing the incident
---
## 4. Tabletop Exercise Design
### Purpose
Test the decision-making and communication processes — not the technical response. The goal is to surface gaps in escalation, communication, and judgment before a real incident.
### Recommended Frequency
- Annual full tabletop (2–3 hours, full leadership team)
- Semi-annual mini-tabletop (45 minutes, CISO + legal + CEO)
- Quarterly technical team exercise (separate from executive tabletop)
### Sample Tabletop Scenario: Ransomware
**Setup (read to participants):**
> It's 6:47 AM on a Monday. Your DevOps engineer receives automated alerts that production databases are inaccessible. By 7:15 AM, they discover a ransomware note demanding $500,000 in Bitcoin. Several files are already encrypted. Your last verified backup was 48 hours ago. Your business is B2B SaaS serving 200 enterprise customers. You process customer financial data.
**Discussion questions (timed, 10 minutes each):**
1. First 30 minutes — who do you call, in what order? Who decides whether to take production offline?
2. Legal assessment — what regulatory obligations have been triggered? What's the timeline?
3. Hour 4 — initial forensics suggests data may have been exfiltrated before encryption. How does your response change?
4. Customer communication — how do you communicate with enterprise customers who are asking for status?
5. Hour 24 — do you pay the ransom? Who makes this decision? What's the decision framework?
6. The press has found out and a reporter is calling. What do you say?
7. Day 5 — what's your board communication strategy?
**Post-discussion captures:**
- What decisions were unclear (ownership ambiguous)?
- What information did you need but didn't have?
- What processes did not exist that should?
- What would you do differently in the first hour?
### Sample Tabletop Scenario: Insider Threat
**Setup:**
> HR notifies you that an engineer was terminated this morning for performance reasons. 24 hours later, your SIEM generates an alert that this former employee's credentials accessed your customer database 30 minutes before their offboarding was complete. They downloaded 50,000 customer records. You don't know if they shared or sold the data.
**Key decision points:**
- When does this become a breach vs. a security incident?
- Do you notify customers? When?
- What are your legal options against the former employee?
- How do you handle this with the rest of the engineering team?
---
## 5. Post-Incident Review Framework
### Timeline
Conduct within 30 days of incident resolution. Do not delay — memory fades and teams move on.
### Blameless Post-Mortem Principles
The purpose is to improve systems and processes, not punish individuals. A blame culture means the next incident gets hidden longer.
### Post-Incident Review Structure
**1. Incident Timeline (factual, no editorializing)**
- Hour-by-hour reconstruction from detection to resolution
- Source: logs, Slack messages, incident ticket, war room notes
**2. Root Cause Analysis**
Use the "5 Whys" technique — keep asking why until you reach a systemic root cause, not a human error.
Example:
- Why was there a breach? → Attacker compromised an admin account
- Why was the admin account compromised? → Credentials stolen via phishing
- Why did phishing succeed? → User wasn't trained on this attack type
- Why wasn't training current? → Training program hadn't been updated in 18 months
- Why hadn't it been updated? → No owner was assigned to maintain the training program
- **Root cause: No assigned ownership for security training maintenance**
**3. What Went Well**
- Detection mechanisms that worked
- Response actions that contained damage
- Communication that was effective
- Teams that exceeded expectations
**4. What Needs Improvement**
- Detection gaps (how could we have found this faster?)
- Response gaps (what slowed us down?)
- Communication gaps (who didn't know what, when?)
- Process gaps (what didn't we have documented?)
**5. Action Items (with owners and deadlines)**
| Action | Owner | Due Date | Priority |
|---|---|---|---|
| [Specific improvement] | [Name] | [Date] | [P0/P1/P2] |
**6. Metrics Review**
- MTTD (Mean Time to Detect): [actual] vs. [target]
- MTTR (Mean Time to Respond): [actual] vs. [target]
- Customer impact: [affected customers, duration]
- Financial impact: [direct costs, revenue impact]
- Regulatory impact: [notifications sent, fines if any]
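MTTD and MTTR fall out of simple timeline arithmetic. A minimal stdlib sketch for computing them per incident (timestamps are illustrative, not from a real incident):

```python
from datetime import datetime, timedelta

def incident_metrics(occurred: str, detected: str, resolved: str) -> dict:
    """Compute time-to-detect and time-to-respond (hours) from ISO 8601 timestamps."""
    t0 = datetime.fromisoformat(occurred)   # when the incident actually began
    t1 = datetime.fromisoformat(detected)   # when it was detected
    t2 = datetime.fromisoformat(resolved)   # when response concluded
    return {
        "time_to_detect_hours": (t1 - t0) / timedelta(hours=1),
        "time_to_respond_hours": (t2 - t1) / timedelta(hours=1),
    }

# Illustrative: compromise at 03:00, detected 06:47, resolved two days later
print(incident_metrics("2024-03-04T03:00", "2024-03-04T06:47", "2024-03-06T18:00"))
```

Averaging these values across incidents in a quarter gives the MTTD/MTTR figures to report against targets.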
---
## 6. Insurance and Legal Considerations
### Cyber Insurance
**What to have before an incident:**
- Cyber liability policy with minimum $2M coverage (Series A); $5M+ (Series B+)
- Coverage should include: first-party loss, third-party liability, ransomware, business interruption, regulatory defense
- Pre-approved IR firms on your policy (using an approved firm can expedite claims)
- Notification requirements — know your insurer's required timeline (typically 48–72 hours)
**Policy exclusions to watch:**
- "War exclusion" — increasingly contested for nation-state attacks (NotPetya precedent)
- "Systemic risk" — some policies exclude widespread events affecting many insureds simultaneously
- "Prior acts" — incidents that began before policy inception
- "Failure to maintain reasonable security" — don't give your insurer a reason to deny
**Premium factors:**
- Revenue and data volume
- Security control maturity (MFA, EDR, backup, patch management)
- Industry (healthcare, financial services = higher premium)
- Claims history
**Ballpark premiums:**
- Seed/Series A ($1–10M ARR): $8,000–$25,000/yr
- Series B ($10–50M ARR): $25,000–$75,000/yr
- Series C+ ($50M+ ARR): $75,000–$250,000/yr
### Legal Counsel
**Have on retainer before an incident:**
- Cybersecurity/privacy attorney — breach notification, regulatory response
- General counsel — contracts, employment law (insider threats), litigation
- Consider: a law firm with data breach notification experience by jurisdiction
**Attorney-client privilege:** Once legal counsel is involved in an incident, communications and work product may be privileged. Engage counsel early to maximize privilege protection.
**Key legal decisions during an incident:**
- When does notification obligation clock start? (Legal determines this)
- Is this a breach or an incident? (Legal + CISO together)
- Who are the affected data subjects? (Legal + technical together)
- Do we pay the ransom? (Legal, CEO, board — never CISO alone)
- Do we cooperate with law enforcement? (Legal decision, involves trade-offs)
### Law Enforcement
**FBI Internet Crime Complaint Center (IC3):** File a complaint for ransomware or significant cybercrime. Does not obligate you to cooperate but creates a record.
**Pros of law enforcement involvement:**
- Access to threat intelligence they may have
- May recover funds in some cases (rare)
- Demonstrates good-faith response to regulators
**Cons of law enforcement involvement:**
- Loss of control over investigation timeline
- Potential for public disclosure if case pursued
- Slows ransom payment decisions (if considering)
- May create discovery obligations in litigation
**CISO recommendation:** Notify legal before contacting law enforcement. In most cases, file an IC3 complaint but don't actively engage FBI investigation unless there's a clear benefit.

# Security Strategy Reference
## 1. Risk-Based Security (Not Compliance-First)
### The Problem with Compliance-First Security
Most startups build security backwards: they get a compliance requirement (SOC 2, ISO 27001) and treat it as the security program. This produces:
- Controls that pass audits but don't reduce actual risk
- Resources allocated to documentation over protection
- Security teams optimizing for auditor satisfaction, not threat reduction
- False confidence ("we passed our audit") before real security exists
**The right order:**
1. Identify your actual threats (what do adversaries want from you?)
2. Identify your crown jewels (what's worth protecting most?)
3. Implement controls that address those threats to those assets
4. Map existing controls to compliance requirements — most overlap naturally
### Risk Identification Framework
**Asset Classification:**
```
Tier 1 — Crown Jewels
├── Customer PII/PHI
├── Payment card data
├── Intellectual property (source code, models, trade secrets)
└── Authentication credentials and secrets
Tier 2 — Business Critical
├── Internal communications (Slack, email)
├── Financial systems and data
├── Employee data
└── Business strategy documents
Tier 3 — Operational
├── Internal tooling and infrastructure configs
├── Non-sensitive operational data
└── Public-facing content and marketing
```
**Threat Actor Profiling:**
| Threat Actor | Motivation | Typical TTPs | Relative Likelihood |
|---|---|---|---|
| Financially motivated criminals | Data theft, ransomware | Phishing, credential stuffing | High |
| Nation-state | IP theft, espionage | Spear phishing, supply chain | Low-Medium (sector-dependent) |
| Insider threat | Financial gain, revenge | Privilege abuse, data exfil | Medium |
| Script kiddies | Notoriety, fun | Known CVEs, scanning | High (low sophistication) |
| Competitors | IP theft | Social engineering, insider recruitment | Low-Medium |
### Risk Quantification (FAIR Model Simplified)
**Annual Loss Expectancy:**
```
ALE = SLE × ARO
SLE (Single Loss Expectancy) = Asset Value × Exposure Factor
ARO (Annual Rate of Occurrence) = historical frequency or industry estimate
```
**Business Impact Categories:**
- **Direct financial loss**: fraud, ransomware payment, theft
- **Regulatory fines**: GDPR (4% global revenue), HIPAA ($100–$50K per violation), PCI DSS
- **Revenue impact**: customer churn post-breach, deal loss during incident, downtime cost
- **Reputational damage**: brand devaluation (harder to quantify, but real)
- **Legal costs**: incident response counsel, class action defense, settlements
**Example Risk Quantification:**
| Risk Scenario | SLE | ARO | ALE |
|---|---|---|---|
| Customer data breach (10K records) | $850K | 0.15 | $127,500/yr |
| Ransomware attack | $350K | 0.20 | $70,000/yr |
| Credential compromise + fraud | $120K | 0.35 | $42,000/yr |
| Third-party SaaS breach | $95K | 0.25 | $23,750/yr |
| Insider data exfiltration | $180K | 0.10 | $18,000/yr |
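The table's ALE column is just SLE × ARO applied row by row; a quick sanity check of the figures above:

```python
# (SLE, ARO) pairs from the example risk quantification table
scenarios = {
    "Customer data breach (10K records)": (850_000, 0.15),
    "Ransomware attack": (350_000, 0.20),
    "Credential compromise + fraud": (120_000, 0.35),
    "Third-party SaaS breach": (95_000, 0.25),
    "Insider data exfiltration": (180_000, 0.10),
}

ales = {name: sle * aro for name, (sle, aro) in scenarios.items()}
for name, ale in sorted(ales.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ALE = ${ale:,.0f}/yr")
```

Sorting by ALE gives the prioritization order for mitigation spend.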
**Mitigation ROI:**
```
ROSI = ((Risk Reduction × ALE) - Control Cost) / Control Cost

Example: MFA deployment
Risk reduction: 99% for credential attacks
ALE reduced: $42,000 × 0.99 = $41,580
Control cost: $5,000/yr
ROSI: ($41,580 - $5,000) / $5,000 = 731%
```
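The same arithmetic as a reusable helper, reproducing the MFA example:

```python
def rosi(ale: float, risk_reduction: float, control_cost: float) -> float:
    """Return on Security Investment, as a multiple of control cost."""
    return (ale * risk_reduction - control_cost) / control_cost

# MFA example from above: $42K ALE, 99% risk reduction, $5K/yr control cost
print(f"{rosi(42_000, 0.99, 5_000) * 100:.1f}%")
```

A positive ROSI means the control pays for itself in expected-loss reduction; compare candidates on ROSI, not absolute cost.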
---
## 2. Zero Trust Architecture at Strategy Level
### What Zero Trust Actually Means
Zero trust is not a product — it's an architectural principle: **never trust, always verify, assume breach.**
The traditional perimeter model (trust inside the network, distrust outside) fails because:
- Remote work destroyed the perimeter
- Cloud infrastructure has no perimeter
- 80% of breaches involve privileged account abuse (internal trust abused)
- Supply chain attacks compromise trusted software
### Zero Trust Maturity Model
**Stage 1 — Identity-Centric (Start Here)**
- MFA enforced for all users, all applications
- Identity provider (Okta, Azure AD, Google Workspace) as single control plane
- No shared service accounts
- Privileged Access Management (PAM) for admin access
- **Cost:** $20–80K/year | **Timeline:** 3–6 months
**Stage 2 — Device Trust**
- Endpoint detection and response (EDR) on all devices
- Device health checks before granting access
- Mobile device management (MDM) for BYOD
- Certificate-based device authentication
- **Cost:** $30–60K/year additional | **Timeline:** 6–12 months
**Stage 3 — Network Micro-Segmentation**
- Replace VPN with Zero Trust Network Access (ZTNA)
- Segment production from development from corporate
- East-west traffic inspection (not just north-south)
- **Cost:** $40–100K/year additional | **Timeline:** 12–18 months
**Stage 4 — Application-Level Controls**
- Just-in-time access (no standing privileges)
- Workload identity for service-to-service auth
- API gateway with authentication enforcement
- Continuous authorization (not just at login)
- **Cost:** $50–150K/year additional | **Timeline:** 18–30 months
**Strategic Guidance:**
- Don't sell zero trust as a project. It's a 3–5 year direction.
- Start with identity. It gives the most risk reduction per dollar.
- Measure progress by % of access covered by MFA, % of apps behind the IdP, and privileged account count.
---
## 3. Defense in Depth for Startups
### The Layered Security Model
```
Layer 1: Governance & Policies
└── Asset inventory, acceptable use, vendor management
Layer 2: Perimeter Controls
└── WAF, DDoS protection, email security (DMARC/DKIM/SPF)
Layer 3: Identity & Access
└── MFA, SSO, PAM, just-in-time access, least privilege
Layer 4: Endpoint Security
└── EDR, device management, patch management
Layer 5: Application Security
└── SAST/DAST, dependency scanning, code review, API security
Layer 6: Data Protection
└── Encryption at rest and in transit, DLP, backup/recovery
Layer 7: Detection & Response
└── SIEM/SOAR, log aggregation, alerting, incident response
Layer 8: Recovery
└── Backup testing, DR plan, RTO/RPO targets
```
### Startup Security Budget Allocation (Guidance)
| Stage | Annual Revenue | Recommended Security Budget | Priority Spend |
|---|---|---|---|
| Pre-seed/Seed | <$1M | 3–5% of opex or $50–100K | MFA, backups, basic EDR |
| Series A | $1–10M | 2–4% of revenue | +SIEM, SOC 2 Type I, AppSec |
| Series B | $10–50M | 3–5% of revenue | +ZTNA, red team, dedicated CISO |
| Series C+ | $50M+ | 4–6% of revenue | +SOC, threat intelligence, M&A security |
**Non-negotiables regardless of stage:**
1. MFA on everything (particularly email, cloud consoles, code repos)
2. Automated backups with tested restore (ransomware defense)
3. Secrets management (no hardcoded credentials)
4. Dependency vulnerability scanning in CI/CD
5. Incident response plan (even a 2-page doc is better than nothing)
---
## 4. Security Program Maturity Model
**Based on NIST CSF and CMMI, simplified for startup context:**
### Level 1: Initial
- No formal policies
- Reactive security (respond to incidents, not prevent them)
- No dedicated security personnel
- Basic hygiene gaps (unpatched systems, shared passwords)
- **Typical:** Pre-seed, <20 employees
### Level 2: Developing
- Written security policies (even if not fully followed)
- Dedicated security responsibility (often part-time or dual-role)
- MFA deployed, basic asset inventory
- Incident response process documented
- SOC 2 Type I achievable from here in ~6 months
- **Typical:** Series A, 20–50 employees
### Level 3: Defined
- Security integrated into SDLC
- Dedicated security lead or vCISO
- Regular vulnerability scanning and patching
- Security awareness training program
- SOC 2 Type II and ISO 27001 achievable
- **Typical:** Series B, 50–150 employees
### Level 4: Managed
- Risk-based security program with quantified risks
- Security metrics reported to board quarterly
- Threat intelligence program
- Dedicated security team (3–8 people)
- Red team / penetration testing annually
- **Typical:** Series C+, 150–500 employees
### Level 5: Optimized
- Continuous monitoring and automated response
- Proactive threat hunting
- Industry leadership on security (bug bounty, disclosure program)
- Security as competitive advantage in sales
- **Typical:** Public company or regulated enterprise
### Maturity Assessment Questions
1. Can you list all systems that process customer data right now?
2. How long would it take to detect if an admin credential was compromised?
3. When was your last backup tested with a restore?
4. Do developers run any security checks before code is deployed?
5. Does the board receive security reporting? What's in it?
Score: 0 = no/don't know, 1 = partially, 2 = yes/verified
- 0–3: Level 1–2
- 4–7: Level 2–3
- 8–10: Level 3–4
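The banding above is mechanical, so it can be scripted; a minimal sketch (the parenthetical level names are an added gloss, not part of the scoring above):

```python
def maturity_band(answers: list[int]) -> str:
    """Map the five self-assessment answers (0 = no/don't know,
    1 = partially, 2 = yes/verified) to a maturity-level band."""
    score = sum(answers)
    assert len(answers) == 5 and all(a in (0, 1, 2) for a in answers)
    if score <= 3:
        return "Level 1-2 (Initial/Developing)"
    if score <= 7:
        return "Level 2-3 (Developing/Defined)"
    return "Level 3-4 (Defined/Managed)"

print(maturity_band([1, 0, 1, 2, 2]))  # score 6
```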
---
## 5. Board-Level Security Reporting
### What the Board Cares About
Boards are not interested in CVE counts or firewall rules. They care about:
1. **Risk posture:** Are we getting better or worse?
2. **Regulatory exposure:** What fines could we face?
3. **Incident readiness:** If we're breached, are we prepared?
4. **Competitive position:** Do customers trust us with their data?
5. **Budget adequacy:** Are we investing appropriately?
### Quarterly Board Security Report Structure
**Executive Summary (1 page max)**
- Security posture score vs. last quarter (directional trend matters more than absolute)
- Top 3 risks and their business impact in dollars
- Key accomplishments this quarter
- Investment requested (if any)
**Risk Dashboard**
```
Risk Register Summary:
├── Critical (>$500K ALE): [count] risks, [count] mitigated
├── High ($100K–$500K ALE): [count] risks, [count] mitigated
├── Medium ($10K–$100K ALE): [count] risks
└── Low (<$10K ALE): [count] risks (for awareness only)
Trend: ↑ Risk exposure vs. Q[n-1] / ↓ Risk exposure vs. Q[n-1]
```
**Compliance Status**
- Framework certifications in scope and current status
- Next audit date
- Any findings from last audit and remediation status
**Incident Summary**
- Security incidents last quarter (count and severity)
- Time to detect / time to respond (vs. targets)
- Any regulatory reporting obligations triggered
**Key Metrics (4–6 max)**
- MFA adoption rate
- Critical patch SLA compliance
- Phishing simulation click rate (trend)
- Vendor assessments completed
**Budget Summary**
- Spend vs. budget
- Headcount
- Next quarter key investments and rationale
### Common Board Questions to Prepare For
- "Have we been breached?" (Know your detection capability, not just your answer)
- "How do we compare to peers?" (Benchmarks from Verizon DBIR, industry ISACs)
- "What's the one thing we should invest in?" (Have a clear answer)
- "If we're acquired, what would security due diligence find?" (Be honest)
- "What keeps you up at night?" (Have a real answer, not a vague one)
---
## 6. Security as Revenue Enabler
### The Sales Angle
For B2B companies, security certifications directly impact revenue:
- Enterprise buyers require SOC 2 as table stakes (increasingly SOC 2 Type II)
- Government and healthcare require ISO 27001 or HIPAA
- Passing security questionnaires faster closes deals faster
- A breach costs 10–30% customer churn; security investment is churn prevention
**How to Measure:**
- Deals blocked by security questionnaire failures (track in CRM)
- Average security questionnaire turnaround time
- Customer security reviews passed vs. failed
- Revenue attributed to new compliance certifications
### The Trust Narrative
Position security certifications in marketing:
- SOC 2 Type II: "Independently audited security controls, verified annually"
- ISO 27001: "Internationally certified information security management"
- HIPAA BAA: "Healthcare data protection to regulatory standards"
These aren't just compliance — they're trust signals that compress the sales cycle.

#!/usr/bin/env python3
"""
CISO Compliance Tracker
========================
Tracks compliance requirements across SOC 2, ISO 27001, HIPAA, and GDPR.
Shows control overlaps, estimates effort and cost, and prioritizes by business value.
Usage:
python compliance_tracker.py # Run with sample data
python compliance_tracker.py --json # JSON output
python compliance_tracker.py --csv output.csv # Export CSV
python compliance_tracker.py --framework soc2 # Show single framework
python compliance_tracker.py --gap-analysis # Show unaddressed requirements
python compliance_tracker.py --roadmap # Show sequenced roadmap
"""
import json
import csv
import sys
import argparse
from datetime import datetime, date
from typing import Optional
# ─── Framework Definitions ───────────────────────────────────────────────────
FRAMEWORKS = {
"soc2": {
"name": "SOC 2 Type II",
"full_name": "AICPA Trust Service Criteria — Security",
"typical_timeline_months": 12,
"typical_cost_usd": 65_000, # Audit + platform
"annual_maintenance_usd": 40_000,
"business_value": "Enterprise sales unblock, US market table stakes",
"mandatory_for": ["B2B SaaS selling to enterprise US companies"],
},
"iso27001": {
"name": "ISO 27001:2022",
"full_name": "Information Security Management System",
"typical_timeline_months": 15,
"typical_cost_usd": 95_000,
"annual_maintenance_usd": 30_000,
"business_value": "EU enterprise sales, global credibility",
"mandatory_for": ["EU enterprise customers", "Government contracts"],
},
"hipaa": {
"name": "HIPAA",
"full_name": "Health Insurance Portability and Accountability Act",
"typical_timeline_months": 7,
"typical_cost_usd": 75_000,
"annual_maintenance_usd": 20_000,
"business_value": "Healthcare customer access, BAA execution",
"mandatory_for": ["Business Associates", "Companies handling PHI"],
},
"gdpr": {
"name": "GDPR",
"full_name": "General Data Protection Regulation (EU) 2016/679",
"typical_timeline_months": 5,
"typical_cost_usd": 45_000,
"annual_maintenance_usd": 15_000,
"business_value": "EU market access, legal compliance",
"mandatory_for": ["EU-based companies", "Any company with EU user data"],
},
}
# ─── Control Domain Library ──────────────────────────────────────────────────
def build_control_domain(
domain_id: str,
name: str,
description: str,
soc2_ref: Optional[str],
iso27001_ref: Optional[str],
hipaa_ref: Optional[str],
gdpr_ref: Optional[str],
effort_days: int, # Estimated implementation effort in person-days
cost_usd: int, # Estimated implementation cost (tooling + time)
implementation_notes: str,
status: str = "Not Started", # Not Started | In Progress | Implemented | Verified
owner: Optional[str] = None,
target_date: Optional[str] = None,
) -> dict:
"""Build a control domain record."""
frameworks_applicable = []
if soc2_ref:
frameworks_applicable.append("soc2")
if iso27001_ref:
frameworks_applicable.append("iso27001")
if hipaa_ref:
frameworks_applicable.append("hipaa")
if gdpr_ref:
frameworks_applicable.append("gdpr")
return {
"domain_id": domain_id,
"name": name,
"description": description,
"references": {
"soc2": soc2_ref,
"iso27001": iso27001_ref,
"hipaa": hipaa_ref,
"gdpr": gdpr_ref,
},
"frameworks_applicable": frameworks_applicable,
"framework_count": len(frameworks_applicable),
"effort_days": effort_days,
"cost_usd": cost_usd,
"implementation_notes": implementation_notes,
"status": status,
"owner": owner,
"target_date": target_date,
}
def load_control_library() -> list[dict]:
"""
Core control domains mapped across SOC 2, ISO 27001, HIPAA, and GDPR.
Each domain represents a logical grouping of controls.
"""
controls = []
controls.append(build_control_domain(
domain_id="IAM-001",
name="Identity and Access Management",
description=(
"Unique user identities, MFA enforcement, SSO, least privilege access, "
"role-based access control, access provisioning and de-provisioning workflows."
),
soc2_ref="CC6.1, CC6.2, CC6.3",
iso27001_ref="A.5.15, A.5.16, A.5.17, A.5.18",
hipaa_ref="§164.312(a)(2)(i), §164.308(a)(3)",
gdpr_ref="Art. 32(1)(b)",
effort_days=15,
cost_usd=25_000, # SSO + MFA tooling
implementation_notes=(
"Deploy IdP (Okta/Azure AD/Google Workspace). Enforce MFA on all applications. "
"Document access provisioning process. Implement quarterly access reviews."
),
status="In Progress",
owner="IT/Security",
))
controls.append(build_control_domain(
domain_id="ENC-001",
name="Encryption at Rest and in Transit",
description=(
"Encryption of sensitive data stored in databases, file systems, and backups. "
"TLS 1.2+ for all data in transit. Key management and rotation."
),
soc2_ref="CC6.7",
iso27001_ref="A.8.24",
hipaa_ref="§164.312(a)(2)(iv), §164.312(e)(2)(ii)",
gdpr_ref="Art. 32(1)(a)",
effort_days=10,
cost_usd=8_000,
implementation_notes=(
"Enable encryption at rest on all databases (RDS, S3, etc.). "
"Configure TLS on all services. Use KMS for key management. "
"Document encryption standards in a security policy."
),
status="Implemented",
owner="Engineering",
))
controls.append(build_control_domain(
domain_id="LOG-001",
name="Audit Logging and Monitoring",
description=(
"Comprehensive logging of user activity, system events, and security events. "
"Log integrity protection. SIEM or log aggregation. Alerting on anomalies."
),
soc2_ref="CC7.2, CC7.3",
iso27001_ref="A.8.15, A.8.16, A.8.17",
hipaa_ref="§164.312(b)",
gdpr_ref="Art. 32(1)(b)",
effort_days=20,
cost_usd=30_000, # SIEM tooling
implementation_notes=(
"Centralize logs from application, infrastructure, and cloud provider. "
"Define log retention (minimum 1 year). Set up alerting for authentication "
"failures, privilege escalation, data export events."
),
status="Not Started",
owner="DevOps/Security",
))
controls.append(build_control_domain(
domain_id="IR-001",
name="Incident Response",
description=(
"Documented incident response plan. Defined severity levels. Escalation procedures. "
"Communication templates. Annual tabletop exercise. Post-incident review process."
),
soc2_ref="CC7.3, CC7.4, CC7.5",
iso27001_ref="A.5.24, A.5.25, A.5.26, A.5.27, A.5.28",
hipaa_ref="§164.308(a)(6)",
gdpr_ref="Art. 33, Art. 34",
effort_days=12,
cost_usd=10_000,
implementation_notes=(
"Write IR plan covering detection, containment, eradication, recovery, communication. "
"Define breach notification timelines (GDPR: 72 hours, HIPAA: 60 days). "
"Run annual tabletop exercise. Retain IR firm on retainer."
),
status="In Progress",
owner="CISO",
))
controls.append(build_control_domain(
domain_id="VM-001",
name="Vulnerability Management and Patching",
description=(
"Regular vulnerability scanning of infrastructure and applications. "
"Defined patch SLAs by severity. Penetration testing program. "
"Dependency vulnerability scanning in CI/CD."
),
soc2_ref="CC7.1",
iso27001_ref="A.8.8",
hipaa_ref="§164.308(a)(1)(ii)(A)",
gdpr_ref="Art. 32(1)(d)",
effort_days=15,
cost_usd=20_000,
implementation_notes=(
"Deploy infrastructure scanner (Tenable, Qualys, AWS Inspector). "
"Add SAST/DAST to CI/CD pipeline. Define patch SLAs: Critical <24h, High <7d, "
"Medium <30d. Conduct annual pentest."
),
status="In Progress",
owner="DevOps/Security",
))
controls.append(build_control_domain(
domain_id="VRISK-001",
name="Vendor and Third-Party Risk Management",
description=(
"Inventory of all third-party vendors with data access. Tiered risk assessment "
"process. Contractual security requirements. Annual reviews for critical vendors."
),
soc2_ref="CC9.2",
iso27001_ref="A.5.19, A.5.20, A.5.21, A.5.22",
hipaa_ref="§164.308(b) Business Associate Agreements",
gdpr_ref="Art. 28 Data Processing Agreements",
effort_days=10,
cost_usd=8_000,
implementation_notes=(
"Build vendor inventory spreadsheet. Tier vendors (Tier 1: PII access, "
"Tier 2: business data, Tier 3: no data). Execute DPAs for all processors (GDPR). "
"Execute BAAs for PHI processors (HIPAA). Annual security questionnaire for Tier 1."
),
status="Not Started",
owner="Legal/Security",
))
controls.append(build_control_domain(
domain_id="RISK-001",
name="Risk Assessment and Treatment",
description=(
"Formal risk assessment methodology. Risk register maintained. "
"Risk treatment decisions documented. Annual risk review cycle."
),
soc2_ref="CC3.1, CC3.2, CC3.3, CC3.4",
iso27001_ref="Clause 6.1.2, 6.1.3",
hipaa_ref="§164.308(a)(1) Security Risk Analysis",
gdpr_ref="Art. 32, Art. 35 DPIA",
effort_days=15,
cost_usd=12_000,
implementation_notes=(
"Document risk methodology (FAIR, NIST, ISO 27005). Maintain risk register. "
"HIPAA: formal security risk analysis required — not optional. "
"GDPR: DPIA required for high-risk processing activities. Annual refresh."
),
status="Not Started",
owner="CISO",
))
controls.append(build_control_domain(
domain_id="TRAIN-001",
name="Security Awareness Training",
description=(
"Annual security awareness training for all employees. "
"Role-specific training for high-risk roles. Phishing simulations. "
"Training completion tracking."
),
soc2_ref="CC1.4",
iso27001_ref="A.6.3, A.6.8",
hipaa_ref="§164.308(a)(5)",
gdpr_ref="Art. 39(1)(b)",
effort_days=5,
cost_usd=8_000,
implementation_notes=(
"Deploy security training platform (KnowBe4, Proofpoint, etc.). "
"Annual training required — track completion (100% target). "
"Quarterly phishing simulations. Role-specific training for devs (secure coding), "
"finance (BEC), support (social engineering)."
),
status="Not Started",
owner="HR/Security",
))
controls.append(build_control_domain(
domain_id="CHGMGMT-001",
name="Change Management",
description=(
"Formal change management process for production changes. "
"Code review requirements. Deployment approvals. Rollback procedures. "
"Change log maintained."
),
soc2_ref="CC8.1",
iso27001_ref="A.8.32",
hipaa_ref="§164.312(c)(1) Integrity controls",
gdpr_ref="Art. 25 Privacy by design",
effort_days=10,
cost_usd=5_000,
implementation_notes=(
"Document change management policy. Require peer review for all production changes. "
"Maintain audit trail in version control. No direct production access — "
"all changes via CI/CD pipeline."
),
status="In Progress",
owner="Engineering",
))
controls.append(build_control_domain(
domain_id="BCP-001",
name="Business Continuity and Disaster Recovery",
description=(
"Business continuity plan. Disaster recovery plan with defined RTO/RPO. "
"Backup procedures with tested restores. Failover capabilities."
),
soc2_ref="A1.1, A1.2, A1.3",
iso27001_ref="A.5.29, A.5.30",
hipaa_ref="§164.308(a)(7) Contingency Plan",
gdpr_ref="Art. 32(1)(c)",
effort_days=12,
cost_usd=15_000,
implementation_notes=(
"Define RTO (<4 hours) and RPO (<1 hour) targets. Configure automated backups. "
"Test restore quarterly — paper backups that aren't tested aren't backups. "
"Document DR runbook. Annual DR exercise."
),
status="In Progress",
owner="DevOps",
))
controls.append(build_control_domain(
domain_id="ASSET-001",
name="Asset Inventory and Classification",
description=(
"Complete inventory of hardware, software, and data assets. "
"Data classification scheme. Ownership assigned to all assets. "
"Regular reconciliation."
),
soc2_ref="CC6.1",
iso27001_ref="A.5.9, A.5.10, A.5.11, A.5.12, A.5.13",
hipaa_ref="§164.310(d) Device and Media Controls",
gdpr_ref="Art. 30 Records of Processing Activities",
effort_days=8,
cost_usd=5_000,
implementation_notes=(
"Build asset register (CMDB or spreadsheet at minimum). "
"Classify data: Public, Internal, Confidential, Restricted. "
"GDPR requires RoPA (Record of Processing Activities) — data map of all PII. "
"ISO 27001 requires SoA referencing asset inventory."
),
status="Not Started",
owner="IT/Security",
))
controls.append(build_control_domain(
domain_id="ENDPOINT-001",
name="Endpoint Security",
description=(
"EDR/antivirus on all managed endpoints. Device management (MDM). "
"Full disk encryption. Patch management. BYOD policy."
),
soc2_ref="CC6.8",
iso27001_ref="A.8.1, A.8.7",
hipaa_ref="§164.310(a)(2)(iv) Workstation security",
gdpr_ref="Art. 32(1)(a)",
effort_days=8,
cost_usd=20_000,
implementation_notes=(
"Deploy EDR (CrowdStrike, SentinelOne, or Microsoft Defender for Business). "
"Enable full disk encryption (FileVault/BitLocker). "
"MDM for device management. BYOD policy documented."
),
status="In Progress",
owner="IT",
))
controls.append(build_control_domain(
domain_id="POLICY-001",
name="Security Policies and Procedures",
description=(
"Documented security policies covering acceptable use, access control, "
"incident response, data classification, vendor management, etc. "
"Annual review cycle. Employee attestation."
),
soc2_ref="CC1.2, CC1.3",
iso27001_ref="A.5.1, A.5.2",
hipaa_ref="§164.308(a)(1) Security Management Process",
gdpr_ref="Art. 24 Responsibility of the controller",
effort_days=15,
cost_usd=10_000,
implementation_notes=(
"Minimum policy set: Information Security Policy, Acceptable Use, "
"Access Control, Incident Response, Data Classification, Password, "
"Change Management, Vendor Management, Business Continuity. "
"Use policy templates from GRC platform (Vanta/Drata)."
),
status="In Progress",
owner="CISO",
))
controls.append(build_control_domain(
domain_id="PRIV-001",
name="Privacy and Data Subject Rights",
description=(
"Privacy policy and notices. Data subject rights fulfilment process "
"(access, erasure, portability). Consent management. Cookie compliance. "
"Privacy by design in product development."
),
soc2_ref=None, # Not a SOC 2 requirement (unless Privacy TSC selected)
iso27001_ref="A.5.34",
hipaa_ref="§164.524 Access, §164.528 Accounting of Disclosures",
        gdpr_ref="Art. 13, 14, 15-22 (Rights), Art. 25",
effort_days=20,
cost_usd=15_000,
implementation_notes=(
"GDPR: Update privacy policy, implement DSAR process (30-day SLA), "
"build deletion capability into product. Cookie consent (PECR/ePrivacy). "
"HIPAA: Patient rights for PHI access. "
"Consider OneTrust, Termly, or CookieYes for consent management."
),
status="Not Started",
owner="Legal/Product",
))
controls.append(build_control_domain(
domain_id="NET-001",
name="Network Security and Segmentation",
description=(
"Network segmentation (production vs. development vs. corporate). "
"Firewall rules. Intrusion detection. VPN or ZTNA for remote access."
),
soc2_ref="CC6.6, CC6.7",
iso27001_ref="A.8.20, A.8.21, A.8.22",
hipaa_ref="§164.312(e)(1) Transmission security",
gdpr_ref="Art. 32(1)(a)",
effort_days=12,
cost_usd=18_000,
implementation_notes=(
"Segment production from development. WAF in front of public applications. "
"Replace VPN with ZTNA for remote access (Series B+ consideration). "
"DDoS protection (Cloudflare or AWS Shield)."
),
status="In Progress",
owner="DevOps",
))
controls.append(build_control_domain(
domain_id="PENTEST-001",
name="Penetration Testing",
description=(
"Annual external penetration test by qualified third-party firm. "
"Finding remediation tracking. Results reviewed by leadership."
),
soc2_ref="CC7.1",
iso27001_ref="A.8.8",
hipaa_ref="§164.308(a)(8) Evaluation",
gdpr_ref="Art. 32(1)(d)",
effort_days=5,
cost_usd=25_000,
implementation_notes=(
"Scope: external attack surface, application, API, and optionally social engineering. "
            "Budget $15-35K for a reputable firm. Track findings in risk register. "
"Re-test critical findings within 90 days. Share pentest summary with enterprise "
"customers on request (under NDA)."
),
status="Not Started",
owner="CISO",
))
return controls
# ─── Analysis ────────────────────────────────────────────────────────────────
def calculate_framework_coverage(controls: list[dict]) -> dict:
"""Calculate per-framework coverage statistics."""
coverage = {}
for fw in FRAMEWORKS:
applicable = [c for c in controls if fw in c["frameworks_applicable"]]
implemented = [c for c in applicable if c["status"] in ("Implemented", "Verified")]
in_progress = [c for c in applicable if c["status"] == "In Progress"]
not_started = [c for c in applicable if c["status"] == "Not Started"]
total_effort = sum(c["effort_days"] for c in applicable)
remaining_effort = sum(
c["effort_days"] for c in applicable
if c["status"] not in ("Implemented", "Verified")
)
total_cost = sum(c["cost_usd"] for c in applicable)
remaining_cost = sum(
c["cost_usd"] for c in applicable
if c["status"] not in ("Implemented", "Verified")
)
pct_complete = (len(implemented) / len(applicable) * 100) if applicable else 0
coverage[fw] = {
"framework": FRAMEWORKS[fw]["name"],
"total_controls": len(applicable),
"implemented": len(implemented),
"in_progress": len(in_progress),
"not_started": len(not_started),
"pct_complete": pct_complete,
"total_effort_days": total_effort,
"remaining_effort_days": remaining_effort,
"total_cost_usd": total_cost,
"remaining_cost_usd": remaining_cost,
"gap_controls": [c["name"] for c in not_started],
}
return coverage
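For a quick sanity check, the percentage and remaining-cost logic in `calculate_framework_coverage` reduces to the following self-contained sketch (a toy two-control register and a stand-in single-entry FRAMEWORKS table, not the module's real data):

```python
# Hypothetical stand-ins, not the real FRAMEWORKS table or control library.
FRAMEWORKS = {"soc2": {"name": "SOC 2"}}
controls = [
    {"name": "MFA", "frameworks_applicable": ["soc2"], "status": "Implemented",
     "effort_days": 5, "cost_usd": 5_000},
    {"name": "DR plan", "frameworks_applicable": ["soc2"], "status": "Not Started",
     "effort_days": 10, "cost_usd": 8_000},
]
# Same per-framework logic as above, for a single framework:
applicable = [c for c in controls if "soc2" in c["frameworks_applicable"]]
implemented = [c for c in applicable if c["status"] in ("Implemented", "Verified")]
remaining_cost = sum(c["cost_usd"] for c in applicable
                     if c["status"] not in ("Implemented", "Verified"))
pct_complete = len(implemented) / len(applicable) * 100
print(f"{pct_complete:.0f}% complete, ${remaining_cost:,} remaining")
```

One of two applicable controls implemented gives 50% coverage with $8,000 of remaining spend.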
def find_high_leverage_controls(controls: list[dict]) -> list[dict]:
"""Controls that satisfy the most frameworks — highest ROI to implement."""
multi_fw = [c for c in controls if c["framework_count"] >= 3
and c["status"] not in ("Implemented", "Verified")]
return sorted(multi_fw, key=lambda c: (-c["framework_count"], c["effort_days"]))
def estimate_roadmap(controls: list[dict], target_frameworks: list[str]) -> list[dict]:
"""
Generate an ordered implementation roadmap for target frameworks.
Prioritize: (1) controls blocking most frameworks, (2) quick wins (low effort).
"""
applicable = [c for c in controls
if any(fw in c["frameworks_applicable"] for fw in target_frameworks)
and c["status"] not in ("Implemented", "Verified")]
# Score: (frameworks_covered × 10) - (effort_days) → higher is better
for c in applicable:
fw_overlap = len([fw for fw in target_frameworks if fw in c["frameworks_applicable"]])
c["_priority_score"] = (fw_overlap * 10) - c["effort_days"]
return sorted(applicable, key=lambda c: -c["_priority_score"])
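The scoring heuristic rewards controls that unblock several target frameworks at once, then breaks ties toward low effort. A minimal sketch with two hypothetical controls (names and numbers are illustrative):

```python
# score = (target frameworks covered x 10) - effort_days; higher runs first.
controls = [
    {"name": "MFA", "frameworks_applicable": ["soc2", "iso27001", "hipaa", "gdpr"],
     "effort_days": 5},
    {"name": "Pentest", "frameworks_applicable": ["soc2"], "effort_days": 5},
]
targets = ["soc2", "iso27001"]
for c in controls:
    overlap = sum(1 for fw in targets if fw in c["frameworks_applicable"])
    c["_priority_score"] = overlap * 10 - c["effort_days"]
ordered = sorted(controls, key=lambda c: -c["_priority_score"])
print([(c["name"], c["_priority_score"]) for c in ordered])
# MFA scores 15 (covers both targets); Pentest scores 5 (covers one).
```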
def fmt_dollars(amount: float) -> str:
if amount >= 1_000_000:
return f"${amount/1_000_000:.1f}M"
if amount >= 1_000:
return f"${amount/1_000:.0f}K"
return f"${amount:.0f}"
def status_icon(status: str) -> str:
icons = {
        "Implemented": "✅",
        "Verified": "☑️",
        "In Progress": "🔄",
        "Not Started": "⬜",
"Planned": "📋",
}
return icons.get(status, "")
# ─── Display ─────────────────────────────────────────────────────────────────
def print_header():
print("\n" + "=" * 80)
print(" CISO COMPLIANCE TRACKER — Multi-Framework Coverage")
print(f" Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}")
print("=" * 80)
def print_framework_summary(coverage: dict):
print("\n📋 FRAMEWORK COVERAGE SUMMARY")
print("-" * 80)
header = f"{'Framework':<20} {'Done':<6} {'WIP':<5} {'Gap':<5} {'Complete':<10} {'Remain Cost':<14} {'Remain Days'}"
print(header)
print("-" * 80)
for fw_id, data in coverage.items():
pct = f"{data['pct_complete']:.0f}%"
print(
f"{data['framework']:<20} {data['implemented']:<6} {data['in_progress']:<5} "
f"{data['not_started']:<5} {pct:<10} {fmt_dollars(data['remaining_cost_usd']):<14} "
f"{data['remaining_effort_days']} days"
)
def print_control_table(controls: list[dict], framework_filter: Optional[str] = None):
filtered = controls
if framework_filter:
filtered = [c for c in controls if framework_filter in c["frameworks_applicable"]]
    title = "CONTROL DOMAINS"
    if framework_filter:
        title += f" — {FRAMEWORKS[framework_filter]['name']}"
print(f"\n🔧 {title}")
print("-" * 90)
header = f"{'ID':<14} {'Control Name':<30} {'Frameworks':<8} {'Effort':<8} {'Cost':<10} {'Status'}"
print(header)
print("-" * 90)
for c in filtered:
fw_badges = "/".join(
fw.upper()[:3] for fw in ["soc2", "iso27001", "hipaa", "gdpr"]
if fw in c["frameworks_applicable"]
)
icon = status_icon(c["status"])
print(
f"{c['domain_id']:<14} {c['name'][:29]:<30} {fw_badges:<8} "
f"{c['effort_days']:>3}d {fmt_dollars(c['cost_usd']):<10} {icon} {c['status']}"
)
def print_gap_analysis(coverage: dict):
print("\n⚠️ GAP ANALYSIS — Controls Not Yet Started")
print("-" * 70)
for fw_id, data in coverage.items():
if data["gap_controls"]:
            print(f"\n {data['framework']} — {len(data['gap_controls'])} gaps:")
for gap in data["gap_controls"]:
                print(f"    • {gap}")
def print_high_leverage(controls: list[dict]):
hl = find_high_leverage_controls(controls)
print(f"\n🎯 HIGH-LEVERAGE CONTROLS — Implement Once, Satisfy Multiple Frameworks")
print("-" * 70)
print(f"{'Control':<30} {'Frameworks':<35} {'Effort':<8} {'Cost'}")
print("-" * 70)
for c in hl:
fw_list = " + ".join(FRAMEWORKS[fw]["name"] for fw in c["frameworks_applicable"])
print(
f"{c['name'][:29]:<30} {fw_list[:34]:<35} "
f"{c['effort_days']:>3}d {fmt_dollars(c['cost_usd'])}"
)
def print_roadmap(controls: list[dict], target_frameworks: list[str]):
ordered = estimate_roadmap(controls, target_frameworks)
fw_names = " + ".join(FRAMEWORKS[fw]["name"] for fw in target_frameworks)
print(f"\n🗺️ IMPLEMENTATION ROADMAP — {fw_names}")
print("-" * 80)
print("Priority order: most framework coverage first, then quick wins")
print()
cumulative_days = 0
cumulative_cost = 0
for i, c in enumerate(ordered, 1):
cumulative_days += c["effort_days"]
cumulative_cost += c["cost_usd"]
fw_badges = ", ".join(
FRAMEWORKS[fw]["name"] for fw in target_frameworks
if fw in c["frameworks_applicable"]
)
print(f" {i:>2}. {c['name']}")
print(f" Frameworks: {fw_badges}")
print(f" Effort: {c['effort_days']} days | Cost: {fmt_dollars(c['cost_usd'])} "
f"| Cumulative: {cumulative_days}d / {fmt_dollars(cumulative_cost)}")
if c.get("owner"):
print(f" Owner: {c['owner']}")
print()
def print_framework_profiles():
print("\n💼 FRAMEWORK PROFILES")
print("-" * 70)
for fw_id, fw in FRAMEWORKS.items():
print(f"\n {fw['name']} ({fw_id.upper()})")
print(f" Timeline: ~{fw['typical_timeline_months']} months")
print(f" First-year cost: {fmt_dollars(fw['typical_cost_usd'])}")
print(f" Annual maintenance: {fmt_dollars(fw['annual_maintenance_usd'])}/yr")
print(f" Business value: {fw['business_value']}")
print(f" Required for: {', '.join(fw['mandatory_for'])}")
def export_csv(controls: list[dict], filepath: str):
fields = [
"domain_id", "name", "frameworks_applicable", "framework_count",
"effort_days", "cost_usd", "status", "owner", "target_date",
"soc2_ref", "iso27001_ref", "hipaa_ref", "gdpr_ref", "implementation_notes"
]
with open(filepath, "w", newline="") as f:
writer = csv.DictWriter(f, fieldnames=fields)
writer.writeheader()
for c in controls:
row = {k: c.get(k, "") for k in fields}
row["frameworks_applicable"] = ", ".join(c["frameworks_applicable"])
row["soc2_ref"] = c["references"].get("soc2", "")
row["iso27001_ref"] = c["references"].get("iso27001", "")
row["hipaa_ref"] = c["references"].get("hipaa", "")
row["gdpr_ref"] = c["references"].get("gdpr", "")
writer.writerow(row)
print(f"✅ Exported {len(controls)} controls to {filepath}")
# ─── Main ────────────────────────────────────────────────────────────────────
def main():
parser = argparse.ArgumentParser(
description="CISO Compliance Tracker — Multi-framework coverage and roadmap"
)
parser.add_argument("--json", action="store_true", help="Output JSON")
parser.add_argument("--csv", metavar="FILE", help="Export CSV to file")
parser.add_argument(
"--framework", metavar="FRAMEWORK",
choices=list(FRAMEWORKS.keys()),
help="Filter to single framework (soc2, iso27001, hipaa, gdpr)"
)
parser.add_argument("--gap-analysis", action="store_true", help="Show gap analysis")
parser.add_argument("--roadmap", metavar="FRAMEWORKS",
help="Sequenced roadmap for frameworks e.g. 'soc2,iso27001'")
parser.add_argument("--profiles", action="store_true", help="Show framework profiles")
parser.add_argument("--leverage", action="store_true", help="Show high-leverage controls")
args = parser.parse_args()
controls = load_control_library()
coverage = calculate_framework_coverage(controls)
if args.json:
output = {
"generated": datetime.now().isoformat(),
"frameworks": FRAMEWORKS,
"coverage": coverage,
"controls": controls,
}
print(json.dumps(output, indent=2, default=str))
return
if args.csv:
export_csv(controls, args.csv)
return
print_header()
if args.profiles:
print_framework_profiles()
return
if args.roadmap:
target_fws = [fw.strip() for fw in args.roadmap.split(",") if fw.strip() in FRAMEWORKS]
if not target_fws:
print(f"Unknown frameworks. Valid: {', '.join(FRAMEWORKS.keys())}")
sys.exit(1)
print_framework_summary(coverage)
print_roadmap(controls, target_fws)
return
print_framework_summary(coverage)
print_control_table(controls, args.framework)
if args.gap_analysis:
print_gap_analysis(coverage)
if args.leverage:
print_high_leverage(controls)
if not any([args.framework, args.gap_analysis, args.leverage]):
print_high_leverage(controls)
print_gap_analysis(coverage)
print("\n💡 NEXT STEPS")
print(" --roadmap soc2,iso27001 Priority order for dual-framework")
print(" --framework hipaa HIPAA-only control view")
print(" --gap-analysis What's not started")
print(" --leverage Controls covering most frameworks")
print(" --profiles Framework timelines and costs")
print(" --csv controls.csv Export for stakeholder review")
print()
if __name__ == "__main__":
main()

#!/usr/bin/env python3
"""
CISO Risk Quantifier
====================
Quantifies security risks in business terms using the FAIR model.
Calculates ALE (Annual Loss Expectancy) and prioritizes by expected annual loss.
Usage:
python risk_quantifier.py # Run with sample data
python risk_quantifier.py --json # Output JSON
python risk_quantifier.py --csv output.csv # Export CSV
python risk_quantifier.py --budget 500000 # Show what fits in budget
python risk_quantifier.py --add # Interactive risk entry
"""
import json
import csv
import sys
import os
import argparse
from datetime import datetime
from typing import Optional
# ─── Data Model ─────────────────────────────────────────────────────────────
RISK_CATEGORIES = [
"Data Breach",
"Ransomware / Extortion",
"Insider Threat",
"Third-Party / Supply Chain",
"Application Vulnerability",
"Cloud Misconfiguration",
"Social Engineering",
"Physical Security",
"Business Email Compromise",
"DDoS / Availability",
]
BUSINESS_IMPACT_TYPES = [
"Revenue Loss",
"Regulatory Fine",
"Legal / Litigation",
"Reputational Damage",
"Recovery / Remediation Cost",
"Customer Churn",
"Business Interruption",
]
MITIGATION_STATUSES = ["None", "Planned", "In Progress", "Mitigated", "Accepted"]
def build_risk(
name: str,
category: str,
description: str,
asset_value: float,
    exposure_factor: float,  # 0.0-1.0: fraction of asset value lost in breach
annual_rate: float, # ARO: expected incidents per year (0.01 = once per 100 years)
mitigation_cost: float,
    mitigation_effectiveness: float,  # 0.0-1.0: fraction of risk reduced by control
mitigation_status: str,
business_impacts: dict, # {impact_type: dollar_amount}
notes: str = "",
) -> dict:
"""Construct a risk record with calculated metrics."""
sle = asset_value * exposure_factor # Single Loss Expectancy
ale = sle * annual_rate # Annual Loss Expectancy (inherent)
mitigated_ale = ale * (1 - mitigation_effectiveness) # Residual after mitigation
mitigation_roi = ((ale - mitigated_ale - mitigation_cost) / mitigation_cost * 100
if mitigation_cost > 0 else 0)
total_business_impact = sum(business_impacts.values())
return {
"name": name,
"category": category,
"description": description,
"asset_value": asset_value,
"exposure_factor": exposure_factor,
"annual_rate": annual_rate,
"mitigation_cost": mitigation_cost,
"mitigation_effectiveness": mitigation_effectiveness,
"mitigation_status": mitigation_status,
"business_impacts": business_impacts,
"notes": notes,
# Calculated
"sle": sle,
"ale": ale,
"mitigated_ale": mitigated_ale,
"mitigation_roi_pct": mitigation_roi,
"total_business_impact": total_business_impact,
"priority_score": ale, # Primary sort key
}
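Worked through by hand (hypothetical numbers), the FAIR arithmetic above behaves like this: a $2M asset with a 25% exposure factor and one expected incident every five years yields a $500K SLE and a $100K/yr inherent ALE; a $20K/yr control at 80% effectiveness leaves $20K/yr residual ALE and returns 300% ROI.

```python
# Hypothetical inputs mirroring build_risk's calculated fields.
asset_value = 2_000_000
exposure_factor = 0.25           # fraction of asset value lost per incident
annual_rate = 0.20               # ARO: one incident every five years
mitigation_cost = 20_000
mitigation_effectiveness = 0.80

sle = asset_value * exposure_factor                   # Single Loss Expectancy: $500K
ale = sle * annual_rate                               # inherent ALE: $100K/yr
mitigated_ale = ale * (1 - mitigation_effectiveness)  # residual ALE: $20K/yr
roi = (ale - mitigated_ale - mitigation_cost) / mitigation_cost * 100  # 300%
print(f"SLE ${sle:,.0f}  ALE ${ale:,.0f}/yr  residual ${mitigated_ale:,.0f}/yr  ROI {roi:.0f}%")
```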
# ─── Sample Data ─────────────────────────────────────────────────────────────
def load_sample_risks() -> list[dict]:
"""
Sample risk register for a Series B SaaS company with ~$15M ARR,
~50K customer records, B2B enterprise focus.
"""
risks = []
risks.append(build_risk(
name="Customer Database Breach",
category="Data Breach",
description=(
"Unauthorized access to production database containing 50K+ customer records "
"including PII (name, email, company, payment method). Attack vector: SQL injection, "
"compromised credentials, or insider access."
),
asset_value=5_000_000, # Value of customer database (revenue impact + regulatory)
exposure_factor=0.30, # ~30% of asset value lost in a breach event
annual_rate=0.12, # ~12% chance per year (based on Verizon DBIR industry data)
mitigation_cost=45_000, # WAF + DAST + DB activity monitoring annual cost
mitigation_effectiveness=0.80,
mitigation_status="In Progress",
business_impacts={
"Regulatory Fine": 85_000, # GDPR/CCPA exposure
"Legal / Litigation": 150_000, # Class action exposure
"Customer Churn": 300_000, # Lost ARR from breach-triggered churn
"Reputational Damage": 200_000, # Brand impact / deal loss
"Recovery / Remediation Cost": 65_000,
},
notes="SOC 2 Type II controls partially address. Next step: DB activity monitoring.",
))
risks.append(build_risk(
name="Ransomware Attack",
category="Ransomware / Extortion",
description=(
"Ransomware encrypts production systems. Average ransom demand for a "
            "Series B company is $350K-$800K. Recovery without ransom payment: 2-6 weeks downtime. "
"Attack vector: phishing email with malicious attachment, RDP exposure."
),
asset_value=3_500_000,
exposure_factor=0.25,
annual_rate=0.15,
mitigation_cost=60_000, # EDR + email security + backup hardening
mitigation_effectiveness=0.85,
mitigation_status="Planned",
business_impacts={
"Business Interruption": 450_000, # 4 weeks downtime × $112K/week revenue
"Recovery / Remediation Cost": 180_000,
"Customer Churn": 125_000,
"Revenue Loss": 75_000,
},
notes="Offline, tested backups reduce recovery time and eliminate ransom pressure.",
))
risks.append(build_risk(
name="Privileged Insider Data Theft",
category="Insider Threat",
description=(
"Disgruntled or financially motivated employee with elevated access exfiltrates "
"customer data, IP, or trade secrets. Detection is typically slow (median: 197 days "
"per IBM Cost of Data Breach Report)."
),
asset_value=2_800_000,
exposure_factor=0.20,
annual_rate=0.08,
mitigation_cost=35_000, # DLP + UEBA + PAM
mitigation_effectiveness=0.65,
mitigation_status="None",
business_impacts={
"Legal / Litigation": 120_000,
"Customer Churn": 90_000,
"Reputational Damage": 75_000,
"Recovery / Remediation Cost": 40_000,
},
notes="No DLP or UEBA currently deployed. Highest detection gap.",
))
risks.append(build_risk(
name="Critical SaaS Vendor Breach (Supply Chain)",
category="Third-Party / Supply Chain",
description=(
"A critical SaaS vendor (e.g., Salesforce, Slack, AWS, GitHub) suffers a breach "
"that compromises data entrusted to them or disrupts your operations. You have "
"limited control but full liability to customers."
),
asset_value=2_200_000,
exposure_factor=0.15,
annual_rate=0.18,
mitigation_cost=20_000, # Vendor risk assessment program
mitigation_effectiveness=0.40, # Limited — you can't control vendor security
mitigation_status="Planned",
business_impacts={
"Business Interruption": 95_000,
"Customer Churn": 75_000,
"Reputational Damage": 50_000,
"Recovery / Remediation Cost": 30_000,
},
notes="Third-party risk is partially transferable via contractual SLAs and cyber insurance.",
))
risks.append(build_risk(
name="Business Email Compromise (BEC)",
category="Business Email Compromise",
description=(
"Attacker impersonates CEO, CFO, or vendor to redirect wire transfers, gift card "
"purchases, or payroll. Median BEC loss: $125K. FBI IC3 reports BEC as #1 "
"cybercrime by financial loss."
),
asset_value=500_000,
exposure_factor=0.40,
annual_rate=0.30,
mitigation_cost=12_000, # Email authentication (DMARC) + training + callback procedures
mitigation_effectiveness=0.90,
mitigation_status="In Progress",
business_impacts={
"Revenue Loss": 125_000, # Direct financial theft (often unrecoverable)
"Recovery / Remediation Cost": 25_000,
"Legal / Litigation": 15_000,
},
notes="DMARC deployed. Need to enforce wire transfer callback procedures.",
))
risks.append(build_risk(
name="Cloud Misconfiguration — S3 / Storage Exposure",
category="Cloud Misconfiguration",
description=(
"Public exposure of S3 buckets, GCS buckets, or Azure Blob storage containing "
"sensitive data. One of the most common causes of data breaches. Often undetected "
            "for months. 2023 IBM study: 82% of breaches involved data stored in the cloud."
),
asset_value=1_800_000,
exposure_factor=0.20,
annual_rate=0.20,
mitigation_cost=18_000, # CSPM tool + IaC scanning
mitigation_effectiveness=0.90,
mitigation_status="Planned",
business_impacts={
"Regulatory Fine": 60_000,
"Reputational Damage": 120_000,
"Legal / Litigation": 45_000,
"Recovery / Remediation Cost": 35_000,
},
notes="No CSPM currently. High frequency, high detectability, low mitigation cost.",
))
risks.append(build_risk(
name="Credential Stuffing — Customer Accounts",
category="Application Vulnerability",
description=(
"Attackers use leaked credential lists to compromise customer accounts. "
"Account takeover leads to data theft, fraudulent transactions, and support burden. "
            "16 billion credentials available on the darknet as of 2024."
),
asset_value=1_200_000,
exposure_factor=0.12,
annual_rate=0.40,
mitigation_cost=15_000, # MFA + rate limiting + bot detection
mitigation_effectiveness=0.95,
mitigation_status="In Progress",
business_impacts={
"Customer Churn": 80_000,
"Revenue Loss": 45_000,
"Recovery / Remediation Cost": 19_000,
"Reputational Damage": 30_000,
},
notes="MFA available but optional. Enforcing MFA cuts this risk by ~99%.",
))
risks.append(build_risk(
name="Phishing — Employee Credential Compromise",
category="Social Engineering",
description=(
"Employee clicks phishing link, surrenders credentials. Without MFA, "
"this provides full access to email, SaaS apps, and potentially production. "
"Phishing is the #1 attack vector in the Verizon DBIR."
),
asset_value=1_500_000,
exposure_factor=0.15,
annual_rate=0.35,
mitigation_cost=25_000, # MFA + security awareness training + email security
mitigation_effectiveness=0.92,
mitigation_status="In Progress",
business_impacts={
"Business Interruption": 65_000,
"Customer Churn": 55_000,
"Recovery / Remediation Cost": 45_000,
"Reputational Damage": 60_000,
},
notes="Primary vector for ransomware and BEC. MFA is the single highest-ROI control.",
))
risks.append(build_risk(
name="Application API Vulnerability",
category="Application Vulnerability",
description=(
"Unauthenticated or improperly authorized API endpoint exposes customer data "
"or administrative functions. OWASP API Security Top 10 — broken object-level "
"authorization is the most common API vulnerability."
),
asset_value=2_000_000,
exposure_factor=0.18,
annual_rate=0.15,
mitigation_cost=30_000, # DAST + API gateway + code review
mitigation_effectiveness=0.75,
mitigation_status="Planned",
business_impacts={
"Regulatory Fine": 70_000,
"Customer Churn": 90_000,
"Reputational Damage": 100_000,
"Legal / Litigation": 60_000,
},
notes="Need automated API security testing in CI/CD pipeline.",
))
risks.append(build_risk(
name="DDoS Attack — Production Service",
category="DDoS / Availability",
description=(
"Distributed denial-of-service attack renders production service unavailable. "
"Average DDoS duration: 48 hours. Enterprise SLA breach triggers contractual "
"penalties. Increasingly used as extortion or distraction tactic."
),
asset_value=1_000_000,
exposure_factor=0.10,
annual_rate=0.25,
mitigation_cost=15_000, # CDN with DDoS protection (Cloudflare, AWS Shield)
mitigation_effectiveness=0.85,
mitigation_status="Mitigated",
business_impacts={
"Business Interruption": 45_000,
"Customer Churn": 30_000,
"Revenue Loss": 25_000,
},
notes="Cloudflare deployed. Residual risk from very large volumetric attacks.",
))
return risks
# ─── Analysis & Reporting ────────────────────────────────────────────────────
def calculate_portfolio_summary(risks: list[dict]) -> dict:
"""Aggregate portfolio-level metrics."""
total_inherent_ale = sum(r["ale"] for r in risks)
total_mitigated_ale = sum(r["mitigated_ale"] for r in risks)
total_mitigation_cost = sum(r["mitigation_cost"] for r in risks)
risk_reduction = total_inherent_ale - total_mitigated_ale
portfolio_roi = ((risk_reduction - total_mitigation_cost) / total_mitigation_cost * 100
if total_mitigation_cost > 0 else 0)
by_category = {}
for r in risks:
cat = r["category"]
if cat not in by_category:
by_category[cat] = {"count": 0, "total_ale": 0.0}
by_category[cat]["count"] += 1
by_category[cat]["total_ale"] += r["ale"]
by_status = {}
for r in risks:
status = r["mitigation_status"]
by_status[status] = by_status.get(status, 0) + 1
return {
"total_risks": len(risks),
"total_inherent_ale": total_inherent_ale,
"total_mitigated_ale": total_mitigated_ale,
"total_risk_reduction": risk_reduction,
"total_mitigation_cost": total_mitigation_cost,
"portfolio_roi_pct": portfolio_roi,
"by_category": dict(sorted(by_category.items(), key=lambda x: -x[1]["total_ale"])),
"by_mitigation_status": by_status,
}
def prioritize_risks(risks: list[dict], budget: Optional[float] = None) -> list[dict]:
"""Return risks sorted by ALE. If budget given, show what fits."""
sorted_risks = sorted(risks, key=lambda r: -r["ale"])
if budget is None:
return sorted_risks
# Greedy budget allocation by ROI
actionable = [r for r in sorted_risks if r["mitigation_status"] in ("None", "Planned")
and r["mitigation_cost"] > 0]
actionable.sort(key=lambda r: -r["mitigation_roi_pct"])
allocated = []
remaining = budget
for risk in actionable:
if risk["mitigation_cost"] <= remaining:
allocated.append(risk)
remaining -= risk["mitigation_cost"]
return allocated
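The budget path above is a greedy knapsack pass: sort actionable risks by mitigation ROI and fund each one that still fits. A sketch with hypothetical risks (names, costs, and ROIs are illustrative, not the sample register):

```python
budget = 50_000
# (name, annual mitigation cost, ROI %): illustrative values.
candidates = [
    ("DLP program", 35_000, 150.0),
    ("MFA rollout", 15_000, 900.0),
    ("CSPM tool", 18_000, 400.0),
]
candidates.sort(key=lambda r: -r[2])   # highest ROI first
allocated, remaining = [], budget
for name, cost, _ in candidates:
    if cost <= remaining:              # greedy: fund it if it still fits
        allocated.append(name)
        remaining -= cost
print(allocated, f"${remaining:,} unspent")
```

Note the greedy pass optimizes ROI order, not packing: it can leave budget unspent (here $17K) even when a different combination would consume more of it.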
def fmt_dollars(amount: float) -> str:
"""Format a dollar amount."""
if amount >= 1_000_000:
return f"${amount/1_000_000:.2f}M"
if amount >= 1_000:
return f"${amount/1_000:.0f}K"
return f"${amount:.0f}"
def fmt_pct(value: float) -> str:
return f"{value:.1f}%"
def severity_label(ale: float) -> str:
if ale >= 200_000:
return "CRITICAL"
if ale >= 75_000:
return "HIGH"
if ale >= 25_000:
return "MEDIUM"
return "LOW"
def severity_color(label: str) -> str:
"""ANSI color codes."""
colors = {
"CRITICAL": "\033[91m", # Red
"HIGH": "\033[93m", # Yellow
"MEDIUM": "\033[94m", # Blue
"LOW": "\033[92m", # Green
}
return colors.get(label, "") + label + "\033[0m"
# ─── Display ─────────────────────────────────────────────────────────────────
def print_header():
print("\n" + "=" * 80)
print(" CISO RISK QUANTIFIER — Security Risk Portfolio")
print(f" Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}")
print("=" * 80)
def print_portfolio_summary(summary: dict):
print("\n📊 PORTFOLIO SUMMARY")
print("-" * 60)
print(f" Total risks tracked: {summary['total_risks']}")
print(f" Total inherent ALE: {fmt_dollars(summary['total_inherent_ale'])}/yr")
print(f" Total ALE after mitigations: {fmt_dollars(summary['total_mitigated_ale'])}/yr")
print(f" Risk reduction from controls: {fmt_dollars(summary['total_risk_reduction'])}/yr")
print(f" Total mitigation spend: {fmt_dollars(summary['total_mitigation_cost'])}/yr")
print(f" Portfolio ROI: {fmt_pct(summary['portfolio_roi_pct'])}")
print()
print(" Risk by Category (sorted by ALE):")
for cat, data in summary["by_category"].items():
        print(f" {cat:<35} {data['count']} risks — ALE: {fmt_dollars(data['total_ale'])}/yr")
print()
print(" Mitigation Status:")
for status, count in summary["by_mitigation_status"].items():
print(f" {status:<20} {count} risks")
def print_risk_table(risks: list[dict], title: str = "RISK REGISTER"):
print(f"\n🎯 {title}")
print("-" * 80)
header = f"{'#':<3} {'Risk Name':<35} {'Severity':<10} {'ALE/yr':<12} {'Mitig Cost':<12} {'ROI':<8} {'Status':<12}"
print(header)
print("-" * 80)
for i, risk in enumerate(risks, 1):
sev = severity_label(risk["ale"])
sev_str = sev.ljust(10)
roi = fmt_pct(risk["mitigation_roi_pct"]) if risk["mitigation_cost"] > 0 else "N/A"
print(
f"{i:<3} {risk['name'][:34]:<35} {sev_str} "
f"{fmt_dollars(risk['ale']):<12} {fmt_dollars(risk['mitigation_cost']):<12} "
f"{roi:<8} {risk['mitigation_status']}"
)
def print_risk_detail(risk: dict, index: int):
sev = severity_label(risk["ale"])
    print(f"\n{'─' * 70}")
    print(f" #{index} — {risk['name']} [{sev}]")
    print(f"{'─' * 70}")
print(f" Category: {risk['category']}")
print(f" Description: {risk['description'][:120]}...")
print()
print(f" RISK CALCULATION:")
print(f" Asset Value: {fmt_dollars(risk['asset_value'])}")
print(f" Exposure Factor: {fmt_pct(risk['exposure_factor'] * 100)}")
print(f" Single Loss Expectancy: {fmt_dollars(risk['sle'])}")
print(f" Annual Rate (ARO): {risk['annual_rate']:.2f}x/year")
print(f" Annual Loss Expectancy: {fmt_dollars(risk['ale'])}/yr ← INHERENT RISK")
print()
print(f" MITIGATION:")
print(f" Mitigation Cost: {fmt_dollars(risk['mitigation_cost'])}/yr")
print(f" Effectiveness: {fmt_pct(risk['mitigation_effectiveness'] * 100)}")
print(f" Residual ALE: {fmt_dollars(risk['mitigated_ale'])}/yr")
print(f" Mitigation ROI: {fmt_pct(risk['mitigation_roi_pct'])}")
print(f" Status: {risk['mitigation_status']}")
print()
print(f" BUSINESS IMPACT BREAKDOWN:")
for impact_type, amount in risk["business_impacts"].items():
print(f" {impact_type:<30} {fmt_dollars(amount)}")
print(f" {'TOTAL':<30} {fmt_dollars(risk['total_business_impact'])}")
if risk["notes"]:
print(f"\n NOTES: {risk['notes']}")
def print_board_summary(risks: list[dict], summary: dict):
"""One-page board-ready summary."""
    print("\n" + "═" * 80)
    print(" BOARD SECURITY REPORT — Risk Summary")
    print("═" * 80)
critical = [r for r in risks if severity_label(r["ale"]) == "CRITICAL"]
high = [r for r in risks if severity_label(r["ale"]) == "HIGH"]
medium = [r for r in risks if severity_label(r["ale"]) == "MEDIUM"]
low = [r for r in risks if severity_label(r["ale"]) == "LOW"]
print(f"\n RISK EXPOSURE SUMMARY")
print(f" ┌─────────────┬────────┬──────────────┐")
print(f" │ Severity │ Count │ Total ALE/yr │")
print(f" ├─────────────┼────────┼──────────────┤")
for label, group in [("Critical", critical), ("High", high), ("Medium", medium), ("Low", low)]:
ale = sum(r["ale"] for r in group)
        print(f" │ {label:<11} │ {len(group):<6} │ {fmt_dollars(ale):<12} │")
print(f" └─────────────┴────────┴──────────────┘")
print(f"\n TOTAL INHERENT RISK: {fmt_dollars(summary['total_inherent_ale'])}/yr")
print(f" SECURITY INVESTMENT: {fmt_dollars(summary['total_mitigation_cost'])}/yr")
print(f" RESIDUAL RISK: {fmt_dollars(summary['total_mitigated_ale'])}/yr")
print(f" RISK REDUCTION: {fmt_dollars(summary['total_risk_reduction'])}/yr")
print(f" PORTFOLIO ROI: {fmt_pct(summary['portfolio_roi_pct'])}")
print(f"\n TOP 3 RISKS BY EXPECTED ANNUAL LOSS:")
top3 = sorted(risks, key=lambda r: -r["ale"])[:3]
for i, risk in enumerate(top3, 1):
print(f" {i}. {risk['name']}: {fmt_dollars(risk['ale'])}/yr expected annual loss")
print(f" Mitigation: {fmt_dollars(risk['mitigation_cost'])}/yr | "
f"Status: {risk['mitigation_status']}")
unmitigated = [r for r in risks if r["mitigation_status"] == "None"]
if unmitigated:
print(f"\n ⚠️ UNMITIGATED RISKS ({len(unmitigated)}):")
for r in sorted(unmitigated, key=lambda x: -x["ale"]):
print(f"{r['name']}: {fmt_dollars(r['ale'])}/yr — Action required")
def export_csv(risks: list[dict], filepath: str):
    fields = [
        "name", "category", "asset_value", "exposure_factor", "annual_rate",
        "sle", "ale", "mitigation_cost", "mitigation_effectiveness",
        "mitigated_ale", "mitigation_roi_pct", "mitigation_status", "notes",
    ]
    with open(filepath, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for risk in risks:
            row = {k: risk.get(k, "") for k in fields}
            writer.writerow(row)
    print(f"✅ Exported {len(risks)} risks to {filepath}")
def export_json(risks: list[dict]) -> str:
    return json.dumps(risks, indent=2, default=str)
# ─── Interactive Entry ───────────────────────────────────────────────────────
def interactive_add_risk() -> dict:
    """Interactive CLI for adding a new risk."""
    print("\n── ADD NEW RISK ──────────────────────────────────────")
    name = input("Risk name: ").strip()
    print(f"Category options: {', '.join(RISK_CATEGORIES)}")
    category = input("Category: ").strip()
    description = input("Description (brief): ").strip()
    print("\nAsset valuation:")
    asset_value = float(input(" Asset value ($): ").replace(",", "").replace("$", ""))
    exposure_factor = float(input(" Exposure factor (0.0-1.0, fraction of value lost): "))
    annual_rate = float(input(" Annual rate of occurrence (e.g., 0.10 = once per 10 years): "))
    print("\nMitigation:")
    mitigation_cost = float(input(" Mitigation cost ($/yr): ").replace(",", "").replace("$", ""))
    mitigation_effectiveness = float(input(" Mitigation effectiveness (0.0-1.0): "))
    print(f"Status options: {', '.join(MITIGATION_STATUSES)}")
    mitigation_status = input(" Status: ").strip()
    print("\nBusiness impacts (enter 0 to skip):")
    business_impacts = {}
    for impact_type in BUSINESS_IMPACT_TYPES:
        val = input(f" {impact_type} ($): ").replace(",", "").replace("$", "")
        amount = float(val) if val else 0
        if amount > 0:
            business_impacts[impact_type] = amount
    notes = input("\nNotes: ").strip()
    return build_risk(
        name=name,
        category=category,
        description=description,
        asset_value=asset_value,
        exposure_factor=exposure_factor,
        annual_rate=annual_rate,
        mitigation_cost=mitigation_cost,
        mitigation_effectiveness=mitigation_effectiveness,
        mitigation_status=mitigation_status,
        business_impacts=business_impacts,
        notes=notes,
    )
# ─── Main ────────────────────────────────────────────────────────────────────
def main():
    parser = argparse.ArgumentParser(
        description="CISO Risk Quantifier — Quantify security risks in business terms"
    )
    parser.add_argument("--json", action="store_true", help="Output full JSON")
    parser.add_argument("--csv", metavar="FILE", help="Export CSV to file")
    parser.add_argument("--budget", type=float, metavar="DOLLARS",
                        help="Show recommended mitigations within budget")
    parser.add_argument("--board", action="store_true", help="Show board-ready summary only")
    parser.add_argument("--detail", action="store_true", help="Show detailed risk breakdowns")
    parser.add_argument("--add", action="store_true", help="Interactively add a risk")
    args = parser.parse_args()
    risks = load_sample_risks()
    if args.add:
        new_risk = interactive_add_risk()
        risks.append(new_risk)
        print(f"\n✅ Added risk: {new_risk['name']} | ALE: {fmt_dollars(new_risk['ale'])}/yr")
    # Sort by ALE descending
    risks_sorted = sorted(risks, key=lambda r: -r["ale"])
    summary = calculate_portfolio_summary(risks_sorted)
    if args.json:
        output = {
            "generated": datetime.now().isoformat(),
            "summary": summary,
            "risks": risks_sorted,
        }
        print(json.dumps(output, indent=2, default=str))
        return
    if args.csv:
        export_csv(risks_sorted, args.csv)
        return
    print_header()
    if args.board:
        print_board_summary(risks_sorted, summary)
        return
    print_portfolio_summary(summary)
    print_risk_table(risks_sorted)
    if args.detail:
        for i, risk in enumerate(risks_sorted, 1):
            print_risk_detail(risk, i)
    if args.budget:
        recommended = prioritize_risks(risks_sorted, args.budget)
        print(f"\n💰 BUDGET ALLOCATION — ${args.budget:,.0f}")
        print(" Recommended mitigations (sorted by ROI):")
        if recommended:
            for r in recommended:
                print(f"    {r['name']}: {fmt_dollars(r['mitigation_cost'])}/yr "
                      f"| ALE reduction: {fmt_dollars(r['ale'] - r['mitigated_ale'])}/yr "
                      f"| ROI: {fmt_pct(r['mitigation_roi_pct'])}")
        else:
            print("    No actionable mitigations fit within budget.")
    print_board_summary(risks_sorted, summary)
    print("\n💡 NEXT STEPS")
    print(" 1. Run `--detail` to see full breakdown of each risk")
    print(" 2. Run `--budget 200000` to see what you can mitigate with a given budget")
    print(" 3. Run `--board` for a board-ready one-page summary")
    print(" 4. Run `--csv risks.csv` to export for stakeholder review")
    print(" 5. Run `--add` to interactively add risks to the register")
    print()


if __name__ == "__main__":
    main()


@@ -0,0 +1,169 @@
---
name: cmo-advisor
description: "Marketing leadership for scaling companies. Brand positioning, growth model design, marketing budget allocation, and marketing org design. Use when designing brand strategy, selecting growth models (PLG vs sales-led vs community-led), allocating marketing budgets, building marketing teams, or when user mentions CMO, brand strategy, growth model, CAC, LTV, channel mix, or marketing ROI."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: cmo-leadership
updated: 2026-03-05
python-tools: marketing_budget_modeler.py, growth_model_simulator.py
frameworks: brand-positioning, growth-frameworks, marketing-org
---
# CMO Advisor
Strategic marketing leadership — brand positioning, growth model design, budget allocation, and org design. Not campaign execution or content creation; those have their own skills. This is the engine.
## Keywords
CMO, chief marketing officer, brand strategy, brand positioning, growth model, product-led growth, PLG, sales-led growth, community-led growth, marketing budget, CAC, customer acquisition cost, LTV, lifetime value, channel mix, marketing ROI, pipeline contribution, marketing org, category design, competitive positioning, growth loops, payback period, MQL, pipeline coverage
## Quick Start
```bash
# Model budget allocation across channels, project MQL output by scenario
python scripts/marketing_budget_modeler.py
# Project MRR growth by model, show impact of channel mix shifts
python scripts/growth_model_simulator.py
```
**Reference docs (load when needed):**
- `references/brand_positioning.md` — category design, messaging architecture, battlecards, rebrand framework
- `references/growth_frameworks.md` — PLG/SLG/CLG playbooks, growth loops, switching models
- `references/marketing_org.md` — team structure by stage, hiring sequence, agency vs. in-house
---
## The Four CMO Questions
Every CMO must own answers to these — no one else in the C-suite can:
1. **Who are we for?** — ICP, positioning, category
2. **Why do they choose us?** — Differentiation, messaging, brand
3. **How do they find us?** — Growth model, channel mix, demand gen
4. **Is it working?** — CAC, LTV:CAC, pipeline contribution, payback period
---
## Core Responsibilities (Brief)
**Brand & Positioning** — Define category, build messaging architecture, maintain competitive differentiation. Details → `references/brand_positioning.md`
**Growth Model** — Choose and operate the right acquisition engine: PLG, sales-led, community-led, or hybrid. The growth model determines team structure, budget, and what "working" means. Details → `references/growth_frameworks.md`
**Marketing Budget** — Allocate from revenue target backward: new customers needed → conversion rates by stage → MQLs needed → spend by channel based on CAC. Run `marketing_budget_modeler.py` for scenarios.
**Marketing Org** — Structure follows growth model. Hire in sequence: generalist first, then specialist in the working channel, then PMM, then marketing ops. Details → `references/marketing_org.md`
**Channel Mix** — Audit quarterly: MQLs, cost, CAC, payback, trend. Scale what's improving. Cut what's worsening. Don't optimize a channel that isn't in the strategy.
**Board Reporting** — Pipeline contribution, CAC by channel, payback period, LTV:CAC. Not impressions. Not MQLs in isolation.
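The backward budget math in the Marketing Budget bullet (revenue target → customers → opportunities → MQLs → channel spend) can be sketched in a few lines. Every rate, CAC, and channel share below is a hypothetical placeholder, not a benchmark:

```python
# Backward budget math: revenue target -> customers -> opportunities -> MQLs -> spend.
# All inputs in the example call are illustrative assumptions.

def budget_from_revenue_target(
    new_revenue_target: float,  # new ARR to add ($)
    acv: float,                 # average contract value ($)
    mql_to_opp: float,          # MQL -> Opportunity conversion rate
    opp_to_win: float,          # Opportunity -> Closed-won rate
    cost_per_mql: dict,         # channel -> cost per MQL ($)
    channel_mix: dict,          # channel -> share of MQL volume (sums to 1.0)
) -> dict:
    customers_needed = new_revenue_target / acv
    opps_needed = customers_needed / opp_to_win
    mqls_needed = opps_needed / mql_to_opp
    spend = {ch: mqls_needed * share * cost_per_mql[ch]
             for ch, share in channel_mix.items()}
    return {
        "customers_needed": round(customers_needed),
        "mqls_needed": round(mqls_needed),
        "spend_by_channel": {ch: round(v) for ch, v in spend.items()},
        "total_budget": round(sum(spend.values())),
    }

plan = budget_from_revenue_target(
    new_revenue_target=2_000_000, acv=25_000,
    mql_to_opp=0.15, opp_to_win=0.25,
    cost_per_mql={"paid_search": 350, "content": 120, "events": 600},
    channel_mix={"paid_search": 0.5, "content": 0.3, "events": 0.2},
)
print(plan)  # 80 customers -> 2,133 MQLs -> ~$706K total spend
```

Working the math backward exposes the real lever: halving the MQL → Opportunity rate doubles the required budget at the same revenue target.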
---
## Key Diagnostic Questions
Ask these before making any strategic recommendation:
- What's your CAC **by channel** (not blended)?
- What's the payback period on your largest channel?
- What's your LTV:CAC ratio?
- What % of pipeline is marketing-sourced vs. sales-sourced?
- Where do your **best customers** (highest LTV, lowest churn) come from?
- What's your MQL → Opportunity conversion rate? (proxy for lead quality)
- Is this brand work or performance marketing? (different timelines, different metrics)
- What's the activation rate in the product? (PLG signal)
- If a prospect doesn't buy, why not? (win/loss data)
---
## CMO Metrics Dashboard
| Category | Metric | Healthy Target |
|----------|--------|---------------|
| **Pipeline** | Marketing-sourced pipeline % | 50-70% of total |
| **Pipeline** | Pipeline coverage ratio | 3-4x quarterly quota |
| **Pipeline** | MQL → Opportunity rate | > 15% |
| **Efficiency** | Blended CAC payback | < 18 months |
| **Efficiency** | LTV:CAC ratio | > 3:1 |
| **Efficiency** | Marketing % of total S&M spend | 30-50% |
| **Growth** | Brand search volume trend | ↑ QoQ |
| **Growth** | Win rate vs. primary competitor | > 50% |
| **Retention** | NPS (marketing-sourced cohort) | > 40 |
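The two efficiency rows in the dashboard reduce to simple formulas; a minimal sketch with hypothetical inputs:

```python
# CAC payback and LTV:CAC from the dashboard above. All inputs are hypothetical.

def cac_payback_months(cac: float, monthly_arpa: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover acquisition cost."""
    return cac / (monthly_arpa * gross_margin)

def ltv_to_cac(monthly_arpa: float, gross_margin: float,
               monthly_churn: float, cac: float) -> float:
    """LTV approximated as monthly gross profit / monthly churn rate."""
    ltv = (monthly_arpa * gross_margin) / monthly_churn
    return ltv / cac

payback = cac_payback_months(cac=12_000, monthly_arpa=1_000, gross_margin=0.80)
ratio = ltv_to_cac(monthly_arpa=1_000, gross_margin=0.80, monthly_churn=0.015, cac=12_000)
print(f"Payback: {payback:.1f} months | LTV:CAC = {ratio:.1f}:1")  # 15.0 months, 4.4:1
```

Note how churn dominates: these inputs clear both targets only because churn is 1.5%/month; at 3% monthly churn the same CAC fails the 3:1 ratio.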
---
## Red Flags
- No defined ICP — "companies with 50-1000 employees" is not an ICP
- Marketing and sales disagree on what an MQL is (this is always a system problem, not a people problem)
- CAC tracked only as a blended number — channel-level CAC is non-negotiable
- Pipeline attribution is self-reported by sales reps, not CRM-timestamped
- CMO can't answer "what's our payback period?" without a 48-hour research project
- Brand work and performance marketing have no shared narrative — they're contradicting each other
- Marketing team is producing content with no documented positioning to anchor it
- Growth model was chosen because a competitor uses it, not because the product/ACV/ICP fits
---
## Integration with Other C-Suite Roles
| When... | CMO works with... | To... |
|---------|-------------------|-------|
| Pricing changes | CFO + CEO | Understand margin impact on positioning and messaging |
| Product launch | CPO + CTO | Define launch tier, GTM motion, messaging |
| Pipeline miss | CFO + CRO | Diagnose: volume problem, quality problem, or velocity problem |
| Category design | CEO | Secure multi-year organizational commitment to the narrative |
| New market entry | CEO + CFO | Validate ICP, budget, localization requirements |
| Sales misalignment | CRO | Align on MQL definition, SLA, and pipeline ownership |
| Hiring plan | CHRO | Define marketing headcount and skill profile by stage |
| Retention insights | CCO | Use expansion and churn data to sharpen ICP and messaging |
| Competitive threat | CEO + CRO | Coordinate battlecards, win/loss, repositioning response |
---
## Resources
- **References:** `references/brand_positioning.md`, `references/growth_frameworks.md`, `references/marketing_org.md`
- **Scripts:** `scripts/marketing_budget_modeler.py`, `scripts/growth_model_simulator.py`
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- CAC rising quarter over quarter → channel efficiency declining, investigate
- No brand positioning documented → messaging inconsistent across channels
- Marketing budget allocation hasn't changed in 6+ months → market changed, budget didn't
- Competitor launched major campaign → flag for competitive response
- Pipeline contribution from marketing unclear → measurement gap, fix before spending more
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Plan our marketing budget" | Channel allocation model with CAC targets per channel |
| "Position us vs competitors" | Positioning map + messaging framework + proof points |
| "Design our growth model" | Growth projection with channel mix scenarios |
| "Build the marketing team" | Hiring plan with sequence, roles, agency vs in-house |
| "Marketing board section" | Pipeline contribution report with channel ROI |
## Reasoning Technique: Recursion of Thought
Draft a marketing strategy, then critique it from the customer's perspective. Refine based on the critique. Repeat until the strategy survives scrutiny.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`


@@ -0,0 +1,374 @@
# Brand Positioning Reference
Practical frameworks for defining, communicating, and defending your market position. Not theory — applied tools for CMOs who need to get this right.
---
## 1. Category Design Frameworks
### The Category Design Principle
Every product exists in a category — either one you define or one someone else defined. If you're not designing your category, your competitors are designing it for you, and they'll design it to exclude you.
**Category design is not renaming an existing category.** It's declaring that the existing category no longer solves the problem adequately, and that a new category — which you happen to lead — is required.
### The Three-Act Category Design Narrative
**Act 1: Name the problem**
Identify a problem that's real, growing, and underserved. Not a problem you invented — a problem your best customers articulate before they've heard your pitch.
> "Enterprise software teams are deploying faster than ever, but their security reviews still take 3 weeks — because security was built for a world where deployments happen monthly, not hourly."
**Act 2: Define the new category**
Name the category in terms of the outcome, not the feature. The category name should describe what customers achieve, not what the product does.
> "Continuous security" — not "automated security scanning" or "DevSecOps platform."
**Act 3: Position yourself as the category leader**
You can't just claim leadership — you need proof: customers, analysts, community, content, events. Leadership is built, not declared.
> "Snyk is building the continuous security category. 1.2M developers have adopted Snyk. Gartner lists us as a Cool Vendor in AppSec."
### When Category Design Works
| Condition | Explanation |
|-----------|-------------|
| Market timing | The problem is growing but the existing category is inadequate |
| CEO commitment | Category design is a 3-5 year initiative, not a marketing campaign |
| Analyst alignment | Gartner, Forrester, or G2 need to recognize your category |
| Community | Practitioners adopt the vocabulary before buyers do |
| Content moat | You publish the defining content for the category before competitors |
### Category Design Pitfalls
- **Naming the category after yourself:** "The [Your Company] Category" is not a category. It's vanity.
- **Categories analysts can't place:** If Gartner doesn't have a Magic Quadrant for your category, you're fighting uphill.
- **Jargon without adoption:** If your category name requires a two-paragraph explanation, it won't stick.
- **Starting a category war you can't win:** If an incumbent can copy your category name and launch in 90 days, you don't have a defensible category.
### The Lightning Strike Strategy
Category design requires concentrated, coordinated effort — not slow drip. Execute these simultaneously:
1. **Major piece of research or data** (the "State of X" report)
2. **Category-defining event** (host it, don't just attend)
3. **Analyst briefing** (educate Gartner/Forrester on the category before they define it themselves)
4. **Book or manifesto** (long-form content that becomes the category Bible)
5. **Community formation** (a Slack group, a conference, a certification that practitioners want)
Do all five within a 3-month window. This creates gravity around your category claim.
---
## 2. Messaging Architecture
### The Messaging Hierarchy
Every piece of content — from a tweet to a 60-page whitepaper — should trace back to this hierarchy. When it doesn't, you have messaging drift.
```
Level 1: Brand Promise
"[Company] [verb] [outcome] for [audience]"
→ Doesn't change. This is the north star.
Level 2: Positioning Statement (internal)
For [target customer] who [has this problem],
[Company] is the [market category] that [differentiated capability]
unlike [alternatives], [Company] [proof of differentiation].
Level 3: Value Propositions (3-4 max, one per key outcome)
Each VP: headline (5-8 words) + 2-3 sentence explanation + proof point
Level 4: Proof Points
Data, case studies, certifications, analyst recognition — evidence for each VP
Level 5: Channel Adaptations
Website copy, sales deck, ad copy, email — same hierarchy, different format
```
### Writing a Positioning Statement
The Geoffrey Moore / April Dunford format is still the best framework:
**Template:**
```
For [specific target customer]
who [has this specific, painful problem],
[Company name] is the [market category]
that [key differentiated capability].
Unlike [primary alternatives],
[Company] [proof of differentiation — something measurable or unique].
```
**Bad example (too generic):**
> For B2B companies who want to grow faster, Acme is the marketing platform that helps you get more leads. Unlike other platforms, Acme is easy to use and powerful.
**Good example (specific and falsifiable):**
> For DevOps teams in regulated industries who spend 20% of their sprint cycles on compliance reviews, Acme is the compliance automation platform that embeds regulatory checks directly into the CI/CD pipeline. Unlike manual compliance tools that create a separate review queue, Acme's policy-as-code approach reduces compliance-related cycle time by 60% without slowing deployments.
**Test your positioning statement:**
1. Can a competitor say the exact same thing? (If yes, it's not differentiated)
2. Does it describe what you do or what the customer gets? (Should be the latter)
3. Would your best customer say "yes, that's exactly my problem"? (If not, wrong ICP)
4. Is it falsifiable? (Claims you can't prove are liabilities)
### Value Proposition Development
**Structure for each VP:**
| Element | Description | Example |
|---------|-------------|---------|
| Outcome headline | What changes for the customer (5-8 words) | "Ship features 3x faster" |
| The problem | Why this matters now (1 sentence) | "Compliance reviews block 40% of releases in regulated industries" |
| Our approach | How we solve it differently (1-2 sentences) | "Policy-as-code embeds checks in the pipeline instead of adding a gate at the end" |
| Proof | Evidence this is real (1 sentence + data point) | "Customers reduce compliance cycle time by 60% in the first 90 days" |
**3-VP Architecture is the standard:**
- VP1: Core outcome (what most customers primarily buy for)
- VP2: Secondary benefit (makes the decision easier or stickier)
- VP3: Differentiator (what tips competitive decisions in your favor)
### Proof Point Hierarchy
Not all proof is equal. When you make a claim, match the strength of your proof to the importance of the claim.
| Proof Type | Strength | Best Used For |
|------------|---------|--------------|
| Third-party data (analyst report, research) | Highest | Category claims, market size |
| Customer ROI data with name | High | Value propositions |
| Customer quote with name and company | Medium-high | Specific pain points and outcomes |
| Aggregated customer data ("customers report…") | Medium | Directional claims |
| Internal testing or benchmark | Medium-low | Product capability claims |
| "Designed to…" or "built for…" | Low | Product direction only |
| "We believe…" or "we think…" | Lowest | Vision statements only |
**Proof point development process:**
1. Write the claim you want to make
2. Identify the strongest available proof
3. If proof is weak, either soften the claim or invest in getting better proof
4. Never publish a claim without knowing what happens when a skeptic asks "prove it"
---
## 3. Competitive Positioning Maps
### The Two-Axis Map
Choose two dimensions that:
1. Both matter to your target buyer
2. Create clear differentiation between you and competitors
3. You can credibly defend
**Choosing the axes:**
- Axis 1 should show a dimension where you win and most competitors cluster on the wrong side
- Axis 2 should show a dimension buyers care about deeply (ease, speed, breadth, price, compliance, etc.)
**What to avoid:**
- "Quality" vs. "Price" — too generic, every company claims the top-left
- Dimensions your competitors can match in one release cycle
- Dimensions that only your product team understands, not buyers
### Competitive Analysis Template
For each major competitor:
**Company:** _______________
| Dimension | What They Claim | What Customers Actually Experience | Gap |
|-----------|----------------|-----------------------------------|-----|
| Positioning | | | |
| Primary differentiator | | | |
| Pricing | | | |
| Ideal customer | | | |
| Weakness (win/loss data) | | | |
| What they say about you | | | |
**Sources for competitive intelligence:**
- Win/loss interviews (primary source — nothing beats this)
- G2/Capterra reviews (what customers say publicly)
- Glassdoor (tells you about internal culture and focus)
- LinkedIn job postings (what they're building next)
- Their pricing page changes (what they're competing on)
- Conference talks from their product and sales leaders
### Battlecard Format
One page per competitor. Used by sales, not marketing.
```
COMPETING AGAINST: [Competitor Name]
WHY CUSTOMERS CONSIDER THEM:
(2-3 bullets — be honest about their appeal)
OUR DIFFERENTIATION:
(2-3 bullets — factual, not marketing language)
THE LANDMINE QUESTION:
(One question that exposes their weakness. The answer should make the buyer uncomfortable choosing them.)
Example: "How long does your typical implementation take? And what's your SLA if it runs over?"
OUR PROOF POINTS IN THIS COMPARISON:
- [Customer name] switched from [competitor] after [specific reason], saw [specific result]
- [Data point that directly contradicts competitor's primary claim]
THEIR LIKELY COUNTER-MOVES:
(What will they say about us? How do we respond?)
WHEN TO WALK AWAY:
(If the prospect values X more than Y, we are not the right fit — say so)
```
---
## 4. Brand Voice Development
### What Brand Voice Is (and Isn't)
**Brand voice is NOT:**
- A list of adjectives ("we are professional, innovative, and customer-focused")
- The tone you use in formal communications
- The font and color palette (that's visual identity)
**Brand voice IS:**
- How the company sounds across every written touchpoint
- Consistent enough to be recognizable, flexible enough to be human
- Grounded in what your best customers actually value
### The Voice Attribute Framework
Define 3-4 voice attributes. For each:
1. **What it means** (in one sentence)
2. **What it sounds like** (one example)
3. **What it doesn't mean** (the common mistake that goes wrong)
**Example:**
| Attribute | Means | Sounds like | Doesn't mean |
|-----------|-------|------------|--------------|
| Direct | We say what we mean without hedging | "Your compliance review takes 3 weeks. It shouldn't." | Blunt, rude, or dismissive |
| Expert | We speak from depth, not from trend | "Here's why most security gates fail at scale, and what actually works." | Jargon-heavy or condescending |
| Honest | We acknowledge what we don't do | "We're not the best fit if you need a one-size-fits-all platform." | Self-deprecating or uncertain |
| Human | Real people write for real people | "Deploying on a Friday? Here's what we'd check first." | Casual, unprofessional |
### Voice Consistency Testing
Take a random sample of 10 recent pieces of content:
- Website homepage and pricing page
- 3 blog posts from different authors
- 5 outbound emails from sales
- 3 social posts
- 1 press release
Score each on: Does this sound like us? (1-5)
Average < 3: You have a brand voice problem. The cause is usually no documented guidelines, or guidelines that exist but aren't enforced.
### Voice in Different Contexts
The attribute stays the same. The tone adjusts.
| Context | Tone adjustment | Example of "Direct" |
|---------|----------------|---------------------|
| Homepage | Confident | "Compliance reviews don't have to slow you down." |
| Technical docs | Precise | "Set the policy threshold to 0.95 to enforce mandatory approval." |
| Error messages | Helpful | "That didn't work. Here's the most common reason why, and how to fix it." |
| Support | Empathetic | "That's frustrating. Here's what happened and what we're doing about it." |
| Sales outreach | Respectful | "Most teams in your space have this problem. Worth 20 minutes to explore?" |
---
## 5. Rebrand Decision Framework
### When Rebrands Succeed vs. Fail
**Successful rebrands:**
- Driven by a genuine strategic shift (new category, new ICP, new market)
- Have internal alignment before external launch
- Are accompanied by product and messaging changes — not just visual
- Have a 6-12 month transition plan for existing customers
**Failed rebrands:**
- Driven by internal boredom with the old brand
- Executed as a "refresh" without repositioning the value proposition
- Lack leadership conviction (executives still describe the company in the old terms)
- Launch with a new logo but same product, same messaging, same ICP
### The Rebrand Decision Matrix
Answer each question. More "yes" answers = more likely rebrand is warranted.
| Question | Yes | No |
|----------|-----|-----|
| Has our ICP changed significantly in the last 18 months? | Rebrand | Stay |
| Are we entering a new market where the current brand creates friction? | Rebrand | Stay |
| Does the brand name have negative associations in the market? | Rebrand | Stay |
| Has an acquisition changed our core identity? | Rebrand | Stay |
| Is the current brand actively hurting sales conversations? (evidence required) | Rebrand | Stay |
| Are we bored with the brand? | Stay | — |
| Did leadership change? | Stay | — |
| Are competitors rebranding? | Stay | — |
Score: 3+ "Rebrand" answers with evidence = worth a serious evaluation.
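One way to keep the matrix honest is to encode the scoring rule: only the five strategic rows count, and the last three rows (boredom, leadership change, competitor rebrands) never do. A sketch, with question keys paraphrased for brevity:

```python
# Rebrand decision matrix scorer. Only the five strategic questions count;
# boredom, leadership change, and competitor rebrands always score "Stay".

STRATEGIC_QUESTIONS = [
    "ICP changed significantly in the last 18 months",
    "Entering a new market where the brand creates friction",
    "Brand name has negative market associations",
    "Acquisition changed our core identity",
    "Brand actively hurting sales (with evidence)",
]

def rebrand_recommended(answers: dict[str, bool]) -> bool:
    """True when 3+ strategic questions are answered yes (with evidence)."""
    yes_count = sum(answers.get(q, False) for q in STRATEGIC_QUESTIONS)
    return yes_count >= 3

print(rebrand_recommended({q: True for q in STRATEGIC_QUESTIONS[:3]}))  # True
print(rebrand_recommended({"Are we bored with the brand?": True}))      # False
```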
### Rebrand Risk Assessment
**Name change** is the highest-risk rebrand element. Before committing:
- Legal: trademark availability in all target markets
- SEO: 18-24 months to recover domain authority after a domain change
- Customer: existing customers need to update all integrations, contracts, documentation
- Analyst: re-education of Gartner, Forrester, G2 category definitions
- Employee: company identity shift is a culture event, not just an HR task
**Minimum viable rebrand (lower risk):**
1. New positioning and messaging (always worth doing if positioning is wrong)
2. Visual identity refresh (keep the name, update the look)
3. Tagline change (the cheapest, lowest-risk brand change)
**Full rebrand (high risk, sometimes necessary):**
1. New company name and domain
2. New visual identity
3. New positioning and messaging
4. New category narrative
### Rebrand Execution Checklist
**Pre-launch (90 days):**
- [ ] Finalize positioning before finalizing design (in that order)
- [ ] Legal trademark clearance in all target markets
- [ ] Domain secured (with redirects planned)
- [ ] Internal alignment: every leader can describe the new positioning in one sentence
- [ ] Customer comms plan (existing customers, especially enterprise, need advance notice)
- [ ] Analyst briefings scheduled (Gartner, Forrester — brief them before launch)
- [ ] PR plan finalized
**Launch (day 1):**
- [ ] Website flipped
- [ ] Social profiles updated
- [ ] Email signatures updated company-wide
- [ ] Sales deck updated
- [ ] Press release published
- [ ] Existing customers notified (email from CEO or CMO, not marketing automation)
**Post-launch (90 days):**
- [ ] SEO monitoring (watch for ranking drops on key terms)
- [ ] Win rate monitoring (did conversion change?)
- [ ] Employee feedback (are they using the new messaging correctly?)
- [ ] Partner/channel update (resellers, integrations, directories)
- [ ] Analyst follow-up (did they update their reports?)
---
## Quick Reference: Brand Positioning Diagnostic
Use this as an audit against your current positioning:
| Check | Pass | Fail |
|-------|------|------|
| Can every sales rep state the positioning in one sentence without looking it up? | ✓ | Positioning isn't working |
| Is the ICP specific enough to disqualify companies? | ✓ | ICP is too broad |
| Does the homepage lead with customer outcome, not product features? | ✓ | Copy needs rewrite |
| Can you name 3 companies you're NOT a good fit for? | ✓ | Positioning is unfocused |
| Do win/loss interviews confirm the stated differentiator? | ✓ | Differentiator is assumed, not proven |
| Is the category name used by analysts or industry media? | ✓ | Category design needed |
| Does every piece of content trace back to a VP from the hierarchy? | ✓ | Messaging drift — need guidelines |


@@ -0,0 +1,456 @@
# Growth Frameworks Reference
Playbooks for PLG, sales-led, community-led, and hybrid growth models. Includes growth loops, funnel design, and guidance on when and how to switch models.
---
## 1. Product-Led Growth (PLG) Playbook
### What PLG Actually Is
PLG means the product is the primary distribution mechanism. Not "we have a free trial." Not "our product is self-serve." PLG means the product creates acquisition, retention, and expansion — and does so at a scale and cost no sales team can match.
**The minimum requirements for PLG to work:**
1. **Fast time-to-value:** Users must get a meaningful outcome within one session (ideally < 30 minutes)
2. **Low friction to start:** No sales call, no implementation project, no credit card required (for top of funnel)
3. **Built-in virality or network effects:** Usage creates exposure or value that draws in other users
4. **Self-serve monetization or expansion path:** Freemium → paid, or individual → team → company
If any of these is missing, you don't have PLG — you have a website with a free trial.
### PLG Funnel: The Four Stages
**Stage 1: Acquisition**
The user discovers and signs up for the product without talking to sales.
Key channels:
- Organic search (SEO targeting jobs-to-be-done searches)
- Product Hunt launches
- Referral and invite loops (users share the product with colleagues)
- Developer communities and open-source contributions
Metric: Visitor-to-signup rate
Benchmark: 2-8% for B2B SaaS (varies heavily by product complexity)
**Stage 2: Activation**
The user reaches the "aha moment" — the point where the product delivers its core value for the first time.
Finding the aha moment:
- Look at the behaviors that differentiate users who stay from users who churn in the first 30 days
- The aha moment is not creating an account. It's completing the first outcome.
- For Slack: sending a message in a real channel
- For Dropbox: adding a file from a second device
- For HubSpot: publishing a form that captures a real lead
Metric: Activation rate (% of signups who complete the aha moment action within 7 days)
Benchmark: 25-40% is strong. < 15% means the onboarding is broken.
**Stage 3: Retention**
Users return to the product and build habitual use.
Retention analysis:
- Cohort retention curves (by signup week/month)
- Day 1, Day 7, Day 30, Day 90 retention rates
- Feature adoption by retained vs. churned users (which features predict retention?)
Metric: D30 retention rate (% of users still active 30 days after signup)
Benchmark: > 40% D30 retention is strong for B2B products
**Stage 4: Revenue**
Self-serve conversion from free to paid, or expansion from individual to team.
PQL (Product-Qualified Lead) signals:
- Reached a usage limit (invites, storage, seats)
- Used a premium feature in trial mode
- Team size on the account reached a threshold
- High-frequency usage above a defined threshold
Metric: PQL conversion rate (% of PQLs who convert to paid within 30 days)
Benchmark: 15-30% for well-designed PLG products
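The PQL signals above can be sketched as a simple rule check. A minimal illustration — the field names and thresholds here are hypothetical, not from any specific product's schema:

```python
# Hypothetical PQL check: an account qualifies when it trips a hard
# signal (usage limit, premium feature) or stacks up the softer
# team-size and frequency signals. All names/thresholds illustrative.
def is_pql(account: dict) -> bool:
    hard_signals = (
        account.get("hit_usage_limit", False),      # invites/storage/seats cap
        account.get("used_premium_feature", False),  # premium feature in trial
    )
    soft_signals = (
        account.get("team_size", 0) >= 5,            # team-size threshold
        account.get("weekly_sessions", 0) >= 10,     # high-frequency usage
    )
    return any(hard_signals) or all(soft_signals)
```

The real work is choosing thresholds that actually predict conversion — validate them against historical PQL-to-paid data before wiring them into routing.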
### PLG Expansion Model
PLG growth compounds through account expansion:
```
Individual user discovers product
→ Gets value, invites teammates
→ Team adopts product
→ Becomes department-wide
→ Finance/IT gets involved
→ Enterprise contract
```
This is "bottom-up" enterprise: individual adoption precedes company-wide purchase. It's also the most defensible moat — when every engineer in the company uses your product individually, it's very hard for procurement to cancel.
**Expansion levers:**
- Seat-based pricing (more users = more revenue, aligned with value)
- Usage-based pricing (more usage = more value = more revenue)
- Feature gating (team/enterprise features visible but gated, creating pull to upgrade)
- Admin discovery (usage reports surface to managers who didn't know they had a product champion)
### PLG Diagnostic
| Question | Healthy | Unhealthy |
|----------|---------|-----------|
| Time-to-value | < 30 minutes | > 2 hours |
| Activation rate | > 30% | < 15% |
| D30 retention | > 40% | < 20% |
| PQL conversion | > 15% | < 5% |
| NPS from self-serve users | > 40 | < 20 |
| Viral coefficient | > 0.3 | < 0.1 |
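The "Healthy" column of the diagnostic can be turned into a quick flagging script. A minimal sketch covering the higher-is-better metrics (time-to-value, being lower-is-better, is omitted for brevity; metric keys are illustrative):

```python
# Healthy-side floors from the diagnostic table above.
HEALTHY = {
    "activation_rate": 0.30,
    "d30_retention": 0.40,
    "pql_conversion": 0.15,
    "nps_self_serve": 40,
    "viral_coefficient": 0.3,
}

def plg_flags(metrics: dict) -> list[str]:
    """Return the names of metrics that miss their healthy floor."""
    return [name for name, floor in HEALTHY.items()
            if metrics.get(name, 0) < floor]
```

Anything returned by `plg_flags` is a candidate for the next growth sprint; an empty list means the PLG motion clears the table's healthy bar on every dimension.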
### PLG Team Structure
```
Head of Growth (often VP Product or VP Marketing)
├── Growth PM (owns activation and retention loops in product)
├── Growth Engineer (2-3 engineers dedicated to growth experiments)
├── Data Analyst (experimentation, funnel analysis, cohort reports)
└── Growth Marketer (acquisition, SEO, referral programs)
```
The growth team sits between product and marketing. This is intentional — they own the product loops that drive acquisition and retention.
---
## 2. Sales-Led Growth (SLG) Model
### The SLG System
In SLG, marketing's job is to fill the sales pipeline. Sales converts it. The system only works if marketing and sales agree on definitions, SLAs, and shared metrics.
**The SLG funnel:**
```
Awareness (Impressions, reach, brand search)
Lead (Name + contact info captured)
MQL — Marketing Qualified Lead (meets ICP criteria, intent signal detected)
↓ [Marketing → Sales handoff]
SAL — Sales Accepted Lead (sales reviews and accepts the lead)
SQL — Sales Qualified Lead (sales confirms budget, authority, need, timeline)
Opportunity (Formal deal in pipeline, has a close date)
Closed-Won
```
**The MQL definition problem:**
Most marketing-sales friction traces to an unclear MQL definition. The MQL should be:
- ICP-matched (company size, industry, role)
- Intent-signaled (visited pricing page, attended webinar, downloaded high-intent content)
- Not just email address + "subscribed to newsletter"
**A concrete MQL definition:**
> Company 50-500 employees, B2B SaaS, role is VP Engineering or CTO or CISO, AND has performed 2+ of: attended webinar, visited pricing page, requested demo, downloaded security report, attended event.
This definition makes the MQL useful. If you can't score it in your CRM without human judgment, it's not a definition — it's a guideline.
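To make "scoreable without human judgment" concrete, the example definition above can be expressed as code. A minimal sketch — the CRM field names are hypothetical:

```python
# The example MQL definition from the text, as a deterministic rule.
ICP_ROLES = {"VP Engineering", "CTO", "CISO"}
INTENT_ACTIONS = {"attended_webinar", "visited_pricing", "requested_demo",
                  "downloaded_security_report", "attended_event"}

def is_mql(lead: dict) -> bool:
    icp_match = (50 <= lead.get("company_size", 0) <= 500
                 and lead.get("industry") == "B2B SaaS"
                 and lead.get("role") in ICP_ROLES)
    intent_count = len(INTENT_ACTIONS & set(lead.get("actions", [])))
    return icp_match and intent_count >= 2
```

If the definition can't be reduced to something this mechanical, sales and marketing will keep arguing about individual leads instead of the rule.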
### SLG Conversion Rate Benchmarks
| Stage | Average B2B SaaS | Top Quartile |
|-------|-----------------|--------------|
| Lead → MQL | 5-15% | > 20% |
| MQL → SAL | 50-70% | > 75% |
| SAL → SQL | 30-50% | > 60% |
| SQL → Opportunity | 60-80% | > 85% |
| Opportunity → Closed-Won | 20-30% | > 40% |
**End-to-end:** Lead → Closed-Won: 1-5% (wide range by ACV and ICP quality)
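The end-to-end rate is simply the product of the stage conversions. A quick check with illustrative rates picked from within the benchmark ranges above:

```python
from math import prod

# Illustrative stage rates, each within the benchmark ranges.
stage_rates = {
    "lead_to_mql": 0.15,
    "mql_to_sal": 0.70,
    "sal_to_sql": 0.50,
    "sql_to_opp": 0.80,
    "opp_to_won": 0.30,
}

end_to_end = prod(stage_rates.values())  # ~0.0126, i.e. ~1.3% lead-to-won
```

Because the stages multiply, a small improvement at any one stage lifts the whole funnel — which is why fixing the weakest stage usually beats pouring more leads into the top.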
### Pipeline Coverage Mechanics
A healthy SLG pipeline has 3-4x coverage against quota.
If a sales rep has a $500K quarterly quota:
- They need $1.5M-$2M in active pipeline
- Pipeline must be distributed across stages (not all "prospecting")
- Stage distribution benchmark: 30% early, 40% mid, 30% late
Insufficient coverage (< 3x) is a lagging indicator of a miss — by the time coverage is low, it's too late to recover in the same quarter. Coverage should be tracked weekly.
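The coverage math above can be sketched directly; the 3-4x multiplier and the 3x floor come from the text, the function names are illustrative:

```python
def required_pipeline(quota: float, coverage: float = 3.5) -> float:
    """Pipeline needed to support a quota at a given coverage ratio."""
    return quota * coverage

def coverage_gap(quota: float, pipeline: float, floor: float = 3.0) -> float:
    """Dollars of pipeline still needed to reach the minimum 3x floor.
    Positive means at risk; zero means covered."""
    return max(0.0, quota * floor - pipeline)
```

For the $500K-quota rep in the example, `required_pipeline(500_000)` gives $1.75M at 3.5x, and a rep sitting on $1.2M of pipeline shows a $300K gap against the 3x floor — the weekly number to watch.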
### SLG Demand Generation Channels
**High-intent channels (bottom of funnel):**
- Paid search on buying-intent keywords (e.g., "[competitor] alternative", "best [category] software")
- Review site presence (G2, Capterra) — buyers use these before vendor websites
- Outbound SDR targeting specific accounts (ABM)
**Medium-intent channels (middle of funnel):**
- Webinars and virtual events (capture active learners)
- Gated content (guides, benchmarks, templates — ICP-specific)
- Retargeting to website visitors
**Awareness channels (top of funnel):**
- Content and SEO (captures people learning about the problem)
- Podcast sponsorships, industry media
- Conference sponsorship and speaking
- Paid social (LinkedIn for B2B)
### ABM (Account-Based Marketing) in SLG
ABM flips the funnel: instead of generating leads and filtering for good ones, you start with target accounts and run coordinated campaigns against them.
**Tiers:**
- **Tier 1 (1:1):** 5-20 strategic accounts, fully customized campaigns, dedicated SDR+AE pairs, executive outreach
- **Tier 2 (1:few):** 50-200 accounts, programmatic personalization, SDR sequences, targeted events
- **Tier 3 (1:many):** 500+ accounts, standard campaigns with light personalization
ABM requires tight sales/marketing alignment. If sales doesn't work the accounts marketing targets, ABM produces zero results.
---
## 3. Community-Led Growth (CLG)
### The CLG Thesis
Community-led growth works when:
1. Your buyers want to learn from peers, not vendors
2. There's a strong practitioner identity (developers, data teams, security, FinOps)
3. Your category is complex enough that buyers need education before purchasing
4. You can commit to building genuine community, not a marketing channel in disguise
**The fundamental rule of CLG:** The community must deliver value to members whether or not they ever buy your product. If the only purpose of the community is to sell to members, the community will die.
### CLG Stages
**Stage 1: Find the community**
The community often exists before you build it. Find where your practitioners already gather:
- Slack groups, Discord servers
- Subreddits and LinkedIn groups
- Conference hallways
- Open-source repositories
Before building, participate. Earn trust. Understand the conversations.
**Stage 2: Become the knowledge hub**
Establish your company as the best source of information on the category problem:
- Publish the benchmark study everyone references
- Host the conference that defines the industry
- Create the certification practitioners want on their resume
- Open-source the tools the community needs
**Stage 3: Build the platform**
Create a dedicated community space (Slack, Discord, forum):
- Community must be practitioner-first, not vendor-first
- Community managers who genuinely care about member value
- Content from members, not just from your company
- Events that build member relationships, not just product demos
**Stage 4: Convert community to customers**
Community members who become customers do so because they trust you, not because you sold them. Conversion paths:
- Community members see peer success with your product
- Product-qualified signals from community members who trial the product
- Direct outreach from sales to active community members (with permission and context)
- Enterprise deals from companies whose employees are active in the community
### CLG Metrics
| Metric | Definition | Health Signal |
|--------|-----------|--------------|
| Monthly active members | Members who post, comment, or engage | > 15% of total members |
| Community-sourced pipeline | $ pipeline where community was first touch | Track and trend |
| Community-influenced pipeline | $ pipeline with any community touchpoint | > 30% of total pipeline |
| NPS of community members vs. non-members | Loyalty difference | Community members should score 20+ pts higher |
| Member-generated content % | % of content posted by non-employees | > 60% is healthy community |
| Time from community join to product trial | | Shortens as community matures |
### CLG Anti-Patterns
- **Community as a newsletter:** If members can't interact with each other, it's not a community — it's a list.
- **Product launches in the community:** Nothing kills community trust faster than using it for sales announcements.
- **Community without a community manager:** Communities left to run themselves become ghost towns or become toxic.
- **Measuring community by member count:** Ghost members are noise. Active engagement is signal.
---
## 4. Hybrid Growth Models
### PLG + SLG ("Product-Led Sales" or PLS)
The most common hybrid at growth stage. PLG handles SMB self-serve; sales closes enterprise.
**The PQL-to-sales handoff:**
Define the triggers that move a product-qualified lead to a sales-assisted motion:
- Company has > X users (e.g., 10+ users on a team account)
- Usage exceeds Y threshold in 30 days
- Account is a named target in the ABM list
- User explicitly requested a demo or upgrade assistance
**The risk:** Sales team ignores PLG pipeline because deal size is smaller. Fix: separate quotas and commission structures for self-serve expansion vs. new enterprise logos.
**The opportunity:** PLG creates pre-qualified champions inside accounts. Sales doesn't have to create interest — they convert it. Win rates in PLS motions are typically 30-50% higher than cold outbound.
### SLG + CLG
Community builds brand and generates inbound pipeline for sales.
This hybrid works when:
- Sales cycles are long (6-18 months)
- Buyers do extensive research before engaging with vendors
- The community validates your credibility before sales conversations begin
**The integration:**
- Community team feeds content insights to demand gen
- Event attendees become high-priority SDR sequences
- Active community members get dedicated AE outreach with community context
- Win/loss analysis includes community touchpoints
### PLG + CLG
The developer/open-source hybrid. PLG handles product adoption; community handles advocacy and content.
**Examples:** HashiCorp (Terraform community + enterprise sales), Elastic (open-source + community + commercial), Tailscale (developer community + self-serve + enterprise).
**How it compounds:**
```
Community member learns from community content
→ Discovers open-source or free tier
→ Gets value in first session
→ Shares experience in community
→ New members discover product through community content
```
---
## 5. Growth Loops vs. Funnels
### The Difference
**A funnel** is linear. It requires constant input at the top to produce output at the bottom. If you stop feeding it, it stops producing.
**A growth loop** is cyclical. Output from one stage becomes input to the next. The system compounds.
### Common Growth Loops
**Viral loop:**
```
User gets value → Invites colleague → Colleague signs up →
Colleague invites another colleague → ...
```
Viral coefficient (K) = (Average invites per user) × (Conversion rate of invites)
- K > 1: Exponential growth (rare)
- K 0.5-1: Strong viral assist
- K < 0.3: Viral is not a meaningful growth driver
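The K formula, directly as code (example numbers are illustrative):

```python
def viral_coefficient(invites_per_user: float, invite_conversion: float) -> float:
    """K = average invites sent per user x conversion rate of those invites."""
    return invites_per_user * invite_conversion

# e.g. 3 invites per user converting at 15%:
# viral_coefficient(3, 0.15) -> ~0.45, a "strong viral assist"
```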
**Content SEO loop:**
```
Publish content on [topic] → Ranks in search →
Drives signups → Users share content → Builds backlinks →
Better rankings → More content is possible
```
This loop takes 12-24 months to activate but is extraordinarily defensible once running.
**UGC (User-Generated Content) loop:**
```
Users share their work publicly (templates, analyses, portfolios) →
Others discover the work → They find the product →
They create and share their own work → ...
```
Figma, Notion, Airtable, Canva — all run this loop.
**Data network effect loop:**
```
More users → More data → Better product →
More users attracted → ...
```
LinkedIn, Waze, Duolingo — accuracy or relevance improves as the user base grows.
**Integration loop:**
```
Product integrates with X → X's users discover your product →
More integrations possible → More discovery surfaces → ...
```
Zapier, Slack apps, Salesforce AppExchange — being in the ecosystem creates distribution.
### Building a Growth Loop
**Step 1: Map the current funnel**
Where do customers come from? What are the conversion steps?
**Step 2: Find the output**
What does a successful customer produce?
- Invite emails
- Shared content
- Public work visible to others
- Reviews or testimonials
**Step 3: Design the loop**
How does that output become tomorrow's input to acquisition?
- If they share → is there a landing page that captures the new visitor?
- If they invite → is the invite experience friction-free?
- If they create content → does it rank in search or appear in relevant communities?
**Step 4: Measure loop velocity**
For each loop, measure:
- Cycle time: How long does one full cycle take?
- Conversion at each step: Where does the loop break down?
- Loop coefficient: How many new users does one existing user generate?
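Once cycle time and loop coefficient are measured, you can project how a loop compounds. A minimal sketch, assuming the coefficient stays constant across cycles (in practice it decays as the addressable audience saturates):

```python
def project_loop(seed_users: int, loop_coefficient: float, cycles: int) -> int:
    """Total users after N loop cycles, where each existing user brings in
    loop_coefficient new users per cycle (geometric compounding)."""
    total = float(seed_users)
    for _ in range(cycles):
        total *= (1 + loop_coefficient)
    return round(total)

# 1,000 seed users, loop coefficient 0.3, 6 cycles -> ~4,800 users
```

This is the practical difference from a funnel: in the funnel, 1,000 leads produce output once; in the loop, the same 1,000 users keep producing input every cycle.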
---
## 6. When to Switch Growth Models
### The Warning Signs
**PLG-to-SLG triggers:**
- Enterprise accounts are signing up via PLG but aren't expanding without human intervention
- Average deal sizes in enterprise are 10-20x SMB, and you're leaving revenue on the table
- Product adoption in enterprise requires configuration or integration that needs support
- PLG accounts churn at higher rates than sales-assisted accounts
**SLG-to-PLG/PLS triggers:**
- CAC is increasing year-over-year as competition for sales talent intensifies
- Smaller competitors are winning deals with self-serve
- Customers are asking "can I just try this myself?"
- ACV is declining as the market matures and products commoditize
- Sales team efficiency (revenue per sales rep) is declining
**Adding CLG to existing motion:**
- Sales cycles are long and trust is the primary barrier
- SEO and content are generating traffic but low conversion (awareness without trust)
- Competitors are building community and you're not present
- Customer success teams report that customers who participate in user groups retain better
### The Transition Playbook
**Phase 1: Prove it before scaling (months 1-6)**
Don't restructure the team to support the new model before proving it works.
- Run a pilot: 3-5 SDRs testing PLG signals as outreach triggers (for PLG → PLS)
- Or: Launch a beta community with 100 core customers (for adding CLG)
- Measure the metrics of the new model, compare to current model
**Phase 2: Parallel running (months 6-12)**
Run both models simultaneously. Don't kill the current model while building the new one.
- Set clear boundaries on which accounts go to which motion
- Build dedicated teams for each model (don't ask the same people to do both)
- Define success metrics for the new model independently
**Phase 3: Rebalance (months 12-18)**
Once the new model proves its unit economics:
- Shift headcount and budget to the more efficient model
- Keep the old model for the segments where it still works
- Document what the new model requires to sustain itself
**The anti-pattern:** Announcing a model shift without proof, restructuring the team, and discovering after 12 months that the new model doesn't work. By then, the old model's momentum is gone and you've burned a year.
### Growth Model Maturity Matrix
| Dimension | PLG | SLG | CLG |
|-----------|-----|-----|-----|
| Time to first results | 3-6 months | 1-3 months | 12-18 months |
| Requires up-front product investment | High | Low | Medium |
| Scales without linear headcount | Yes | No | Yes |
| Predictable pipeline | Low (early) | High | Low (early) |
| CAC trend over time | Decreases | Flat/increases | Decreases |
| Works for ACV > $50K | Only with SLG assist | Yes | Yes |
| Works for ACV < $5K | Yes | No | Only with PLG |
| Defensibility once established | High | Low | Very high |

# Marketing Org Reference
Team structure, hiring sequence, agency decisions, marketing ops, and cross-functional alignment — by company stage.
---
## 1. Marketing Team Structure by Stage
### Pre-Seed / Seed (< $1M ARR, 1-10 people)
Don't hire a marketing team yet. The founders are the marketing team.
What to do instead:
- Founders write content, do sales calls, go to events
- The goal is learning the ICP and finding the channel that works, not scaling anything
- One contractor or agency for specific output (design, SEO audit) is fine
First marketing hire trigger: You have a repeatable sales motion and need to scale it.
---
### Series A ($1M-$5M ARR, 10-30 people)
**Org:**
```
Founding Marketer (Head of Marketing or VP Marketing)
```
One person. Generalist. Capable of writing, running ads, setting up HubSpot, producing a report. Their job is to find what works.
**What they own:**
- Content and SEO foundation
- Paid channel experiments
- Sales enablement basics (1-pager, deck, email sequences)
- Event presence (1-2 conferences)
- Marketing attribution setup (get this right early)
**What they don't own yet:**
- Brand redesign
- Analyst relations
- Partner marketing
- Field marketing team
**CMO vs. VP Marketing at this stage:** VP Marketing. An experienced operator who can build and execute. A CMO's strategic value isn't fully leveraged until there's a team to lead and a budget to allocate.
---
### Series B ($5M-$20M ARR, 30-80 people)
**PLG-first org:**
```
VP Marketing
├── Growth Marketing (acquisition loops, activation, PLG analytics)
├── Product Marketing (positioning, launch, sales enablement)
└── Content & SEO (organic engine)
```
**SLG-first org:**
```
VP Marketing
├── Demand Generation (pipeline creation, paid, digital)
├── Product Marketing (positioning, competitive intel, enablement)
├── Field Marketing (events, regional, ABM)
└── Marketing Operations (CRM, attribution, reporting)
```
**Community-led org:**
```
VP Marketing
├── Community & Developer Relations
├── Content & SEO
└── Product Marketing
```
**At this stage:** Marketing ops becomes critical. Without it, attribution is guesswork and the sales team blames marketing for bad leads.
---
### Series C ($20M-$75M ARR, 80-200 people)
```
CMO
├── Demand Generation
│ ├── Paid Media
│ ├── SEO & Content
│ └── Marketing Operations
├── Product Marketing
│ ├── Core PMMs (by product line or segment)
│ └── Competitive Intelligence
├── Field Marketing
│ ├── Events
│ └── Regional / ABM
└── Brand & Communications
├── Brand Design
└── PR / Analyst Relations
```
**At this stage:**
- The CMO is a board-level communicator, not a campaign manager
- Each function has a dedicated leader (director or VP level)
- Marketing ops owns the attribution model and reports to CMO directly
- Analyst relations becomes important (Gartner, Forrester, G2 category positioning)
---
### Growth Stage ($75M+ ARR)
Marketing becomes a portfolio of specialized functions. Each major channel has a team. Brand is a serious investment. Analyst relations is a dedicated role. International marketing teams form.
The CMO's job shifts from building the machine to:
- Setting marketing strategy across a complex portfolio
- Representing marketing at the board level
- Owning brand and category leadership
- Cross-functional leadership with CRO, CPO, CEO
---
## 2. Hiring Sequence
### Who to Hire First
**The generalist content + demand gen marketer.**
Must-haves:
- Can write (blog posts, emails, landing pages — not just briefs)
- Can run paid campaigns (Google, LinkedIn — not just "I've managed agencies")
- Can operate a marketing automation platform (HubSpot, Marketo)
- Comfortable with data (can build a funnel report without asking an analyst)
This person builds the foundation. They're not a specialist yet — they're testing channels and building the process.
Avoid: Hiring a brand designer first. Or a community manager. Or a social media manager. These are specialties that compound on a foundation that doesn't exist yet.
### Who to Hire Second
**A specialist in the channel that's working.**
If organic search is your top lead source → hire an SEO/content lead.
If events are driving pipeline → hire a field marketer.
If outbound is working → hire an SDR manager or demand gen specialist.
Don't hire a generalist #2. By now you know what's working. Depth beats breadth.
### Who to Hire Third
**Product marketing.**
Why third and not first? Because PMM output (positioning, sales enablement, launch) is most valuable when there's an audience to position to and a sales team to enable. Before that, the founding marketer does "good enough" PMM work.
PMM hire profile: Has done positioning work before, has run a product launch, has built sales decks that sales actually uses, comfortable with win/loss analysis.
PMM:PM ratio benchmark: 1 PMM per 2-3 PMs. If you have 6 PMs and 1 PMM, you have a messaging and enablement problem.
### Who to Hire Fourth
**Marketing operations.**
This is consistently hired too late. By the time most companies hire marketing ops, attribution is broken, leads are being lost in handoffs, and the CRM data is unreliable. Hire marketing ops before you think you need it.
Marketing ops profile: HubSpot/Marketo certified, SQL capable, understands multi-touch attribution, has integrated CRM + sales engagement tools before.
### Hiring Decision Triggers
| Hire | Trigger |
|------|---------|
| Generalist marketer #1 | Sales motion is repeatable, need to scale lead generation |
| Specialist #2 | One channel is clearly outperforming — double down |
| Product marketer | Sales team is losing deals to positioning confusion or competitor gaps |
| Marketing ops | Running 3+ campaigns simultaneously with manual tracking |
| Field marketer | Events are in the strategy and attendance > 2 conferences/quarter |
| Head of Marketing / VP | Team is 3+ people and needs an org owner |
| CMO | Company is Series B/C and marketing needs board-level representation |
---
## 3. Agency vs. In-House
### Framework
Keep in-house what compounds. Outsource what's episodic or specialized.
| Function | Agency | In-House | Notes |
|----------|--------|----------|-------|
| Brand design | Early stage | Series B+ | Agency fine until redesigns become frequent |
| Paid media | < $50K/month spend | > $50K/month | Agency margin eats returns at scale |
| SEO strategy | Audit only | Ongoing execution | Strategy once, execution continuously |
| Content production | Overflow only | Core writers | Your voice must be yours |
| PR / comms | Almost always | $100M+ companies | Specialists required for media relationships |
| Marketing ops / CRM | Never | Always | This is your data infrastructure |
| Analyst relations | Initial strategy | Ongoing | Relationship-based — needs dedicated owner |
| Video / creative production | Always | Rarely | Episodic, specialized equipment |
### Agency Red Flags
- They want to own your ad accounts. (Always keep ownership. No exceptions.)
- SLA is "5 business days for creative requests." For a performance channel, that's too slow.
- Reporting is impressions, CPM, and "brand lift." Where's the pipeline?
- They can't tell you your CAC from their channel.
- They won't share the actual data — only their dashboard.
- Your account manager changes every 6 months.
### Agency Evaluation Criteria
1. **Proof of work in your category** — ask for 3 case studies with actual CAC and pipeline data
2. **Who actually does the work** — senior pitch team ≠ junior execution team
3. **Account ownership** — all accounts, pixels, analytics must be in your name
4. **Reporting cadence** — weekly data, monthly strategy, quarterly business review
5. **Exit terms** — how do you offboard without losing your data, accounts, and history?
---
## 4. Marketing Ops and Tech Stack
### The Minimum Viable Stack
| Layer | Tool | Purpose |
|-------|------|---------|
| CRM | HubSpot / Salesforce | Contact database, pipeline, source of truth |
| Marketing automation | HubSpot / Marketo / ActiveCampaign | Email, nurture, lead scoring |
| Analytics | Google Analytics 4 + Segment | Traffic, behavior, event tracking |
| Attribution | HubSpot / Attributer.io / Dreamdata | Multi-touch pipeline attribution |
| Paid | Google Ads + LinkedIn Ads | Performance channels |
| SEO | Ahrefs / Semrush | Keyword research, rank tracking |
| Chat/conversion | Intercom / Drift | In-product + website conversion |
**The integration that breaks most:** CRM ↔ Marketing automation ↔ Sales engagement. When these aren't synced properly, leads are lost, attribution is wrong, and marketing and sales fight about pipeline. Fix this first.
### Marketing Ops Ownership
Marketing ops must own:
- CRM data quality (field standardization, deduplication, routing)
- Lead scoring model (and quarterly review against conversion data)
- Attribution model (with documented assumptions)
- Campaign tracking (UTM governance — no UTM = no attribution)
- Tech stack evaluation and contracts
Marketing ops must NOT own:
- Strategy (they enable it, not set it)
- Content production
- Campaign creative
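UTM governance is easiest to enforce when links are generated, not hand-typed. A minimal sketch of a governed tagging helper — the function name and parameter set are illustrative, not a standard:

```python
from urllib.parse import urlencode

def tag_url(base: str, source: str, medium: str, campaign: str) -> str:
    """Append governed UTM parameters to a URL. No UTM = no attribution."""
    params = urlencode({"utm_source": source, "utm_medium": medium,
                        "utm_campaign": campaign})
    sep = "&" if "?" in base else "?"
    return f"{base}{sep}{params}"
```

Routing every campaign link through one helper (or a shared sheet that calls it) keeps naming consistent, which is what makes the attribution model's numbers trustworthy.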
---
## 5. Cross-Functional Alignment
### Marketing + Sales
The most important cross-functional relationship in a SLG company. Where it breaks:
| Problem | Root Cause | Fix |
|---------|-----------|-----|
| "Marketing sends us bad leads" | MQL definition is unclear or wrong | Define MQL jointly, score against conversion data |
| "Sales doesn't follow up on leads" | No SLA, no consequence | Define SLA (e.g., 24-hour response), track in CRM |
| "Marketing doesn't understand what customers care about" | No win/loss sharing | Weekly call: sales shares 3 deal insights, marketing shares 3 content results |
| "We don't know what's working" | Attribution is broken | Marketing ops fixes attribution before next budget cycle |
**The SLA agreement (document this):**
- Marketing commits: X MQLs/week meeting defined criteria, 48-hour SLA from form fill to SDR outreach
- Sales commits: All MQLs contacted within 24 hours, disposition logged in CRM within 5 days
### Marketing + Product
Where it breaks and how to fix it:
| Problem | Fix |
|---------|-----|
| PMM learns about launches 2 weeks before ship | PMM joins the product planning process at the roadmap stage, not the sprint stage |
| Feature launches with no messaging | Launch tiers: Tier 1 (major, full launch), Tier 2 (minor, release notes + 1 post), Tier 3 (internal only) |
| Product doesn't use customer insights from marketing | Monthly session: PMM shares win/loss themes, competitive intel, ICP data |
| No feedback loop on messaging in-product | PMM owns in-product copy review, not just external comms |
### Marketing + Customer Success
Customer success is marketing's best source of truth:
- **ICP validation:** Which customers are expanding? Which are churning? This refines who you target.
- **Proof points:** CS-sourced case studies and testimonials outperform vendor-written content 3:1 in conversion.
- **Messaging test:** If CS is answering the same question 20 times, marketing hasn't explained it clearly enough.
- **Referral programs:** CS owns the relationship; marketing owns the mechanics. Design them together.
Cadence: Monthly meeting between CMO and VP/Head of CS. Agenda: retention trends, expansion patterns, at-risk customers, NPS themes.

#!/usr/bin/env python3
"""
Growth Model Simulator
----------------------
Projects MRR growth across different growth models (PLG, sales-led, community-led,
hybrid) and shows the impact of channel mix changes on growth trajectory.
Usage:
python growth_model_simulator.py
Inputs (edit INPUTS section):
- Starting MRR and churn rate
- Current channel mix (% of new MRR from each source)
- Conversion rates per model
- Growth rate assumptions per channel
Outputs:
- 12-month MRR projection by growth model
- Channel mix impact analysis (what happens if you shift mix)
- Break-even months for each model
- Side-by-side comparison table
"""
from __future__ import annotations
import math
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Data models
# ---------------------------------------------------------------------------
@dataclass
class ChannelSource:
name: str
pct_of_new_mrr: float  # Current share of new MRR (0.0-1.0)
monthly_growth_rate: float # How fast this channel grows month-over-month
cac: float # CAC in dollars
payback_months: float # Months to recover CAC
@dataclass
class GrowthModel:
name: str
description: str
channel_mix: Dict[str, float] # channel name → % of new MRR
new_mrr_monthly_base: float # Starting new MRR/month from this model
monthly_acceleration: float # Acceleration factor (compounding)
avg_ltv_cac: float # Expected LTV:CAC at scale
months_to_steady_state: int # Months before model hits its natural growth rate
notes: List[str] = field(default_factory=list)
@dataclass
class MonthSnapshot:
month: int
mrr: float
new_mrr: float
churned_mrr: float
expansion_mrr: float
net_new_mrr: float
cumulative_cac_spend: float
@dataclass
class ModelProjection:
model: GrowthModel
snapshots: List[MonthSnapshot]
break_even_month: Optional[int] # Month when cumulative revenue > cumulative CAC
# ---------------------------------------------------------------------------
# INPUTS — edit these
# ---------------------------------------------------------------------------
STARTING_MRR = 85_000 # Current MRR ($)
MONTHLY_CHURN_RATE = 0.012 # Monthly churn rate (1.2% = ~14% annual)
EXPANSION_RATE = 0.008 # Monthly expansion MRR as % of existing MRR
GROSS_MARGIN = 0.75
SIMULATION_MONTHS = 18
# Channel sources (used to model mix shift scenarios)
CHANNELS: List[ChannelSource] = [
ChannelSource("Organic/SEO", pct_of_new_mrr=0.28, monthly_growth_rate=0.04, cac=1_800, payback_months=9),
ChannelSource("PLG Self-Serve", pct_of_new_mrr=0.15, monthly_growth_rate=0.08, cac=900, payback_months=5),
ChannelSource("Outbound SDR", pct_of_new_mrr=0.25, monthly_growth_rate=0.02, cac=5_100, payback_months=21),
ChannelSource("Paid Search", pct_of_new_mrr=0.15, monthly_growth_rate=0.01, cac=6_200, payback_months=26),
ChannelSource("Events/Field", pct_of_new_mrr=0.08, monthly_growth_rate=0.01, cac=9_800, payback_months=41),
ChannelSource("Partner/Channel", pct_of_new_mrr=0.09, monthly_growth_rate=0.05, cac=3_400, payback_months=14),
]
# Growth models to simulate
GROWTH_MODELS: List[GrowthModel] = [
GrowthModel(
name="Current Mix",
description="Baseline — maintain current channel allocation",
channel_mix={"Organic/SEO": 0.28, "PLG Self-Serve": 0.15, "Outbound SDR": 0.25,
"Paid Search": 0.15, "Events/Field": 0.08, "Partner/Channel": 0.09},
new_mrr_monthly_base=12_000,
monthly_acceleration=0.025,
avg_ltv_cac=3.2,
months_to_steady_state=3,
notes=["Baseline. No changes to channel mix."],
),
GrowthModel(
name="PLG-First",
description="Shift budget toward PLG self-serve and organic; reduce paid and outbound",
channel_mix={"Organic/SEO": 0.35, "PLG Self-Serve": 0.35, "Outbound SDR": 0.10,
"Paid Search": 0.08, "Events/Field": 0.04, "Partner/Channel": 0.08},
new_mrr_monthly_base=9_500, # Slower start — PLG takes time to activate
monthly_acceleration=0.048, # But compounds faster
avg_ltv_cac=5.8,
months_to_steady_state=6, # PLG loops take time to build
notes=[
"Lower new MRR in months 1-6 while PLG loops activate.",
"Acceleration compounds strongly after month 6.",
"Requires product investment in activation/onboarding.",
"Best fit if time-to-value < 30 min and viral coefficient > 0.3.",
],
),
GrowthModel(
name="Sales-Led Scale",
description="Double down on outbound SDR and field; optimize for enterprise ACV",
channel_mix={"Organic/SEO": 0.20, "PLG Self-Serve": 0.05, "Outbound SDR": 0.40,
"Paid Search": 0.15, "Events/Field": 0.15, "Partner/Channel": 0.05},
new_mrr_monthly_base=15_000, # Higher new MRR from enterprise ACV
monthly_acceleration=0.018, # Linear growth — headcount-constrained
avg_ltv_cac=2.8,
months_to_steady_state=2,
notes=[
"Fastest short-term new MRR if ACV > $30K.",
"Growth is linear — adds headcount to add pipeline.",
"CAC and payback worsen as SDR market tightens.",
"Requires sales capacity increase to sustain.",
],
),
GrowthModel(
name="Community-Led",
description="Invest in community and content; reduce paid; long-term brand play",
channel_mix={"Organic/SEO": 0.45, "PLG Self-Serve": 0.15, "Outbound SDR": 0.15,
"Paid Search": 0.05, "Events/Field": 0.10, "Partner/Channel": 0.10},
new_mrr_monthly_base=7_000, # Slowest start
monthly_acceleration=0.038,
avg_ltv_cac=4.5,
months_to_steady_state=9, # Community takes longest to activate
notes=[
"Lowest new MRR in months 1-9.",
"Community trust drives lower CAC and higher retention at scale.",
"Best for categories where buyers seek peer validation.",
"Requires dedicated community manager from day one.",
],
),
GrowthModel(
name="Hybrid PLS",
description="PLG self-serve for SMB + sales-assisted for enterprise (Product-Led Sales)",
channel_mix={"Organic/SEO": 0.30, "PLG Self-Serve": 0.28, "Outbound SDR": 0.22,
"Paid Search": 0.08, "Events/Field": 0.06, "Partner/Channel": 0.06},
new_mrr_monthly_base=11_000,
monthly_acceleration=0.035,
avg_ltv_cac=4.1,
months_to_steady_state=4,
notes=[
"PLG handles SMB; sales closes enterprise with PQL signals.",
"Requires clear PQL definition and SDR/PLG handoff process.",
"Best if you have a product with both bottom-up and top-down adoption.",
],
),
]
# ---------------------------------------------------------------------------
# Simulation engine
# ---------------------------------------------------------------------------
def simulate_model(model: GrowthModel, months: int) -> ModelProjection:
snapshots: List[MonthSnapshot] = []
mrr = STARTING_MRR
cumulative_cac = 0.0
cumulative_revenue = 0.0
break_even_month = None
for m in range(1, months + 1):
# Ramp up — new_mrr accelerates each month
if m <= model.months_to_steady_state:
# Ramp phase: linear ramp from 60% to 100% of base
ramp_factor = 0.6 + 0.4 * (m / model.months_to_steady_state)
else:
# Steady state: compound acceleration
months_past_ramp = m - model.months_to_steady_state
ramp_factor = 1.0 + model.monthly_acceleration * months_past_ramp
new_mrr = model.new_mrr_monthly_base * ramp_factor
churned_mrr = mrr * MONTHLY_CHURN_RATE
expansion_mrr = mrr * EXPANSION_RATE
net_new_mrr = new_mrr - churned_mrr + expansion_mrr
mrr = mrr + net_new_mrr
# CAC spend approximation: new_mrr / (avg_deal_mrr) * blended_cac
# Use weighted CAC from channel mix
weighted_cac = _weighted_cac(model.channel_mix)
avg_deal_mrr = 1_500 # Assumption: $1,500 average deal MRR
deals_this_month = new_mrr / avg_deal_mrr
cac_spend = deals_this_month * weighted_cac
cumulative_cac += cac_spend
cumulative_revenue += mrr * GROSS_MARGIN
if break_even_month is None and cumulative_revenue >= cumulative_cac:
break_even_month = m
snapshots.append(MonthSnapshot(
month=m,
mrr=mrr,
new_mrr=new_mrr,
churned_mrr=churned_mrr,
expansion_mrr=expansion_mrr,
net_new_mrr=net_new_mrr,
cumulative_cac_spend=cumulative_cac,
))
return ModelProjection(
model=model,
snapshots=snapshots,
break_even_month=break_even_month,
)
def _weighted_cac(channel_mix: Dict[str, float]) -> float:
channel_cac = {ch.name: ch.cac for ch in CHANNELS}
total = sum(
channel_mix.get(name, 0) * cac
for name, cac in channel_cac.items()
)
weight_sum = sum(channel_mix.values())
return total / weight_sum if weight_sum > 0 else 5_000
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_mrr(n: float) -> str:
if n >= 1_000_000:
return f"${n/1_000_000:.3f}M"
return f"${n/1_000:.1f}K"
def fmt_currency(n: float) -> str:
if n >= 1_000_000:
return f"${n/1_000_000:.2f}M"
if n >= 1_000:
return f"${n/1_000:.1f}K"
return f"${n:.0f}"
def print_header(title: str) -> None:
width = 78
print("\n" + "=" * width)
print(f" {title}")
print("=" * width)
def print_channel_overview() -> None:
print_header("Current Channel Mix")
print(f" Starting MRR: {fmt_mrr(STARTING_MRR)} | Monthly churn: {MONTHLY_CHURN_RATE:.1%} | Expansion: {EXPANSION_RATE:.1%}/mo")
print()
print(f" {'Channel':<22} {'% MRR':>7} {'CAC':>8} {'Payback':>9} {'Growth/mo':>10}")
print(" " + "-" * 60)
for ch in sorted(CHANNELS, key=lambda c: c.pct_of_new_mrr, reverse=True):
print(
f" {ch.name:<22} {ch.pct_of_new_mrr:>6.0%} "
f"{fmt_currency(ch.cac):>8} {ch.payback_months:>7.0f}mo "
f"{ch.monthly_growth_rate:>9.1%}"
)
def print_model_detail(proj: ModelProjection) -> None:
model = proj.model
print_header(f"Model: {model.name}")
print(f" {model.description}")
if model.notes:
print()
for note in model.notes:
            print(f"   • {note}")
print()
# Print monthly snapshot (every 3 months + final)
milestones = set(range(3, SIMULATION_MONTHS + 1, 3)) | {SIMULATION_MONTHS}
print(f" {'Month':<7} {'MRR':>10} {'New MRR':>9} {'Churned':>9} {'Expand':>8} {'Net New':>9}")
print(" " + "-" * 56)
for snap in proj.snapshots:
if snap.month in milestones:
print(
f" {snap.month:<7} {fmt_mrr(snap.mrr):>10} "
f"{fmt_mrr(snap.new_mrr):>9} {fmt_mrr(snap.churned_mrr):>9} "
f"{fmt_mrr(snap.expansion_mrr):>8} {fmt_mrr(snap.net_new_mrr):>9}"
)
final = proj.snapshots[-1]
growth_x = final.mrr / STARTING_MRR
arr_final = final.mrr * 12
weighted_cac = _weighted_cac(model.channel_mix)
be = f"Month {proj.break_even_month}" if proj.break_even_month else f"> {SIMULATION_MONTHS}mo"
print()
print(f" Final MRR ({SIMULATION_MONTHS}mo): {fmt_mrr(final.mrr)}")
print(f" Final ARR: {fmt_currency(arr_final)}")
print(f" Growth multiple: {growth_x:.1f}x from starting MRR")
print(f" Weighted blended CAC: {fmt_currency(weighted_cac)}")
print(f" Expected LTV:CAC: {model.avg_ltv_cac:.1f}x")
    print(f" Months to steady state: {model.months_to_steady_state}")
print(f" CAC break-even: {be}")
def print_comparison_table(projections: List[ModelProjection]) -> None:
print_header(f"Growth Model Comparison — Month {SIMULATION_MONTHS} Outcomes")
header = (
f" {'Model':<20} {'MRR (final)':>12} {'ARR (final)':>12} "
f"{'Growth':>7} {'LTV:CAC':>8} {'Break-even':>11}"
)
print(header)
print(" " + "-" * 74)
for proj in sorted(projections, key=lambda p: p.snapshots[-1].mrr, reverse=True):
final = proj.snapshots[-1]
growth_x = final.mrr / STARTING_MRR
arr_final = final.mrr * 12
be = f"Mo {proj.break_even_month}" if proj.break_even_month else f">{SIMULATION_MONTHS}mo"
print(
f" {proj.model.name:<20} {fmt_mrr(final.mrr):>12} "
f"{fmt_currency(arr_final):>12} {growth_x:>6.1f}x "
f"{proj.model.avg_ltv_cac:>7.1f}x {be:>11}"
)
def print_channel_mix_impact(projections: List[ModelProjection]) -> None:
print_header("Channel Mix Impact Analysis")
print(" How shifting channel mix changes growth trajectory:\n")
baseline = next((p for p in projections if p.model.name == "Current Mix"), None)
if not baseline:
return
baseline_final_mrr = baseline.snapshots[-1].mrr
for proj in projections:
if proj.model.name == "Current Mix":
continue
final_mrr = proj.snapshots[-1].mrr
delta = final_mrr - baseline_final_mrr
delta_pct = (delta / baseline_final_mrr) * 100
        arrow = "↑" if delta > 0 else "↓"
m6_mrr = proj.snapshots[5].mrr if len(proj.snapshots) >= 6 else 0
m6_baseline = baseline.snapshots[5].mrr if len(baseline.snapshots) >= 6 else 0
m6_delta = m6_mrr - m6_baseline
m6_pct = (m6_delta / m6_baseline) * 100 if m6_baseline else 0
        m6_arrow = "↑" if m6_delta > 0 else "↓"
print(f" {proj.model.name}:")
print(f" Month 6: {m6_arrow} {abs(m6_pct):.1f}% vs. current ({fmt_mrr(m6_delta)} {'more' if m6_delta > 0 else 'less'} MRR)")
print(f" Month {SIMULATION_MONTHS}: {arrow} {abs(delta_pct):.1f}% vs. current ({fmt_mrr(delta)} {'more' if delta > 0 else 'less'} MRR)")
if proj.model.months_to_steady_state > 4:
print(f" ⚠ Model takes {proj.model.months_to_steady_state} months to reach steady state — short-term dip expected.")
print()
def print_decision_guide(projections: List[ModelProjection]) -> None:
print_header("Decision Guide")
print(" Choose your growth model based on your constraints:\n")
guides = [
("ACV < $5K and fast time-to-value", "PLG-First"),
("ACV > $25K and complex buying process", "Sales-Led Scale"),
("Strong practitioner community exists", "Community-Led"),
("Both SMB self-serve and enterprise buyers", "Hybrid PLS"),
("Uncertain — keep optionality", "Current Mix"),
]
for condition, model_name in guides:
proj = next((p for p in projections if p.model.name == model_name), None)
if proj:
final_mrr = proj.snapshots[-1].mrr
print(f" If: {condition}")
            print(f"   → Use {model_name}: {fmt_mrr(final_mrr)} MRR at month {SIMULATION_MONTHS}")
print()
print(" Key question before switching models:")
print(" 'Do we have 12-18 months of runway to prove the new model")
print(" while the current model continues in parallel?'")
print(" If no → optimize current model. Don't switch.")
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main() -> None:
print_channel_overview()
projections = [simulate_model(model, SIMULATION_MONTHS) for model in GROWTH_MODELS]
for proj in projections:
print_model_detail(proj)
print_comparison_table(projections)
print_channel_mix_impact(projections)
print_decision_guide(projections)
print("\n" + "=" * 78)
print(" Notes:")
print(f" Starting MRR: {fmt_mrr(STARTING_MRR)}")
print(f" Simulation: {SIMULATION_MONTHS} months")
print(f" Churn: {MONTHLY_CHURN_RATE:.1%}/mo ({MONTHLY_CHURN_RATE*12:.0%} annualized)")
print(f" Expansion: {EXPANSION_RATE:.1%}/mo of existing MRR")
print(f" Gross margin: {GROSS_MARGIN:.0%}")
print(" Acceleration rates are estimates — validate against your actuals.")
print("=" * 78 + "\n")
if __name__ == "__main__":
main()


@@ -0,0 +1,440 @@
#!/usr/bin/env python3
"""
Marketing Budget Modeler
------------------------
Allocates marketing budget across channels based on CAC efficiency and
target MQL volume. Models conservative / moderate / aggressive scenarios.
Usage:
python marketing_budget_modeler.py
Inputs (edit INPUTS section below or extend with argparse):
- Annual revenue target (new ARR)
- Average selling price (ASP)
- Conversion rates by funnel stage
- Historical CAC per channel
- Channel capacity constraints (max MQLs the channel can realistically produce)
Outputs:
- Required MQL volume by channel
- Budget allocation per channel per scenario
- LTV:CAC and payback period per channel
- Summary table across scenarios
"""
from __future__ import annotations
import math
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
# ---------------------------------------------------------------------------
# Data models
# ---------------------------------------------------------------------------
@dataclass
class Channel:
name: str
cac: float # Customer acquisition cost ($)
max_mqls_per_month: int # Realistic capacity ceiling (MQLs/month)
    mql_to_close_rate: float      # Combined MQL → closed-won rate (0.0-1.0)
payback_months: float # Based on ARPU × gross margin
ltv: float # Lifetime value ($)
trend: str = "stable" # "improving" | "stable" | "declining"
@dataclass
class FunnelRates:
mql_to_sal: float # MQL → Sales Accepted Lead
sal_to_sql: float # SAL → Sales Qualified Lead
sql_to_opp: float # SQL → Opportunity
opp_to_close: float # Opportunity → Closed-Won
@property
def mql_to_close(self) -> float:
return self.mql_to_sal * self.sal_to_sql * self.sql_to_opp * self.opp_to_close
@dataclass
class ScenarioResult:
name: str
total_budget: float
channel_budgets: Dict[str, float]
channel_mqls: Dict[str, int]
projected_customers: int
projected_arr: float
blended_cac: float
notes: List[str] = field(default_factory=list)
# ---------------------------------------------------------------------------
# INPUTS — edit these
# ---------------------------------------------------------------------------
TARGET_NEW_ARR = 3_000_000 # New ARR to generate this year ($)
ASP_ANNUAL = 18_000 # Average annual contract value ($)
GROSS_MARGIN = 0.75 # Product gross margin (%)
ARPU_MONTHLY = ASP_ANNUAL / 12 # Monthly revenue per account
FUNNEL = FunnelRates(
mql_to_sal=0.65,
sal_to_sql=0.45,
sql_to_opp=0.75,
opp_to_close=0.27,
)
# LTV = ARPU_monthly × gross_margin / monthly_churn_rate
MONTHLY_CHURN = 0.012 # ~14% annual churn
LTV = (ARPU_MONTHLY * GROSS_MARGIN) / MONTHLY_CHURN
CHANNELS: List[Channel] = [
Channel(
name="Organic SEO",
cac=1_800,
max_mqls_per_month=80,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(1_800 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="improving",
),
Channel(
name="Paid Search",
cac=6_200,
max_mqls_per_month=60,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(6_200 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="stable",
),
Channel(
name="Paid Social (LinkedIn)",
cac=8_500,
max_mqls_per_month=35,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(8_500 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="declining",
),
Channel(
name="Outbound SDR",
cac=5_100,
max_mqls_per_month=50,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(5_100 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="stable",
),
Channel(
name="Events / Field",
cac=9_800,
max_mqls_per_month=25,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(9_800 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="stable",
),
Channel(
name="Partner / Channel",
cac=3_400,
max_mqls_per_month=30,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(3_400 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="improving",
),
Channel(
name="Content / Inbound",
cac=2_600,
max_mqls_per_month=45,
mql_to_close_rate=FUNNEL.mql_to_close,
payback_months=(2_600 / (ARPU_MONTHLY * GROSS_MARGIN)),
ltv=LTV,
trend="improving",
),
]
# ---------------------------------------------------------------------------
# Core calculations
# ---------------------------------------------------------------------------
def customers_needed(target_arr: float, asp: float) -> int:
return math.ceil(target_arr / asp)
def mqls_needed_total(customers: int, mql_to_close: float) -> int:
return math.ceil(customers / mql_to_close)
def ltv_to_cac(ltv: float, cac: float) -> float:
return ltv / cac if cac > 0 else 0.0
def score_channel(ch: Channel) -> float:
"""
Score a channel for budget priority.
Higher = more efficient. Used to rank allocation order.
Factors: LTV:CAC ratio, trend multiplier, capacity.
"""
ratio = ltv_to_cac(ch.ltv, ch.cac)
trend_mult = {"improving": 1.2, "stable": 1.0, "declining": 0.7}.get(ch.trend, 1.0)
return ratio * trend_mult
def allocate_mqls(
channels: List[Channel],
total_mqls_needed: int,
budget_multiplier: float = 1.0,
) -> Tuple[Dict[str, int], Dict[str, float]]:
"""
Allocate MQL targets across channels in priority order (best LTV:CAC first).
    budget_multiplier: 0.7 = conservative, 1.0 = moderate, 1.4 = aggressive.
Returns (channel → MQLs, channel → budget).
"""
ranked = sorted(channels, key=score_channel, reverse=True)
remaining = total_mqls_needed
channel_mqls: Dict[str, int] = {}
channel_budget: Dict[str, float] = {}
for ch in ranked:
if remaining <= 0:
channel_mqls[ch.name] = 0
channel_budget[ch.name] = 0.0
continue
# Apply capacity ceiling scaled by multiplier (aggressive = push capacity)
capacity = int(ch.max_mqls_per_month * 12 * budget_multiplier)
allocated = min(remaining, capacity)
channel_mqls[ch.name] = allocated
channel_budget[ch.name] = allocated * ch.cac
remaining -= allocated
return channel_mqls, channel_budget
def build_scenario(
name: str,
channels: List[Channel],
total_mqls: int,
multiplier: float,
notes: List[str],
) -> ScenarioResult:
channel_mqls, channel_budget = allocate_mqls(channels, total_mqls, multiplier)
total_budget = sum(channel_budget.values())
total_mqls_allocated = sum(channel_mqls.values())
projected_customers = math.floor(total_mqls_allocated * FUNNEL.mql_to_close)
projected_arr = projected_customers * ASP_ANNUAL
# Blended CAC = total budget / customers acquired
blended_cac = total_budget / projected_customers if projected_customers > 0 else 0.0
return ScenarioResult(
name=name,
total_budget=total_budget,
channel_budgets=channel_budget,
channel_mqls=channel_mqls,
projected_customers=projected_customers,
projected_arr=projected_arr,
blended_cac=blended_cac,
notes=notes,
)
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_currency(n: float) -> str:
if n >= 1_000_000:
return f"${n/1_000_000:.2f}M"
if n >= 1_000:
return f"${n/1_000:.1f}K"
return f"${n:.0f}"
def fmt_ratio(n: float) -> str:
return f"{n:.1f}x"
def print_header(title: str) -> None:
width = 72
print("\n" + "=" * width)
print(f" {title}")
print("=" * width)
def print_channel_table(channels: List[Channel]) -> None:
print_header("Channel Analysis — Current State")
header = f"{'Channel':<25} {'CAC':>8} {'Payback':>9} {'LTV:CAC':>8} {'Cap/mo':>7} {'Trend':>10}"
print(header)
print("-" * 72)
for ch in sorted(channels, key=score_channel, reverse=True):
ratio = ltv_to_cac(ch.ltv, ch.cac)
flag = ""
if ratio < 1:
flag = " ⚠ LOSS"
elif ratio >= 6:
flag = " ★ STRONG"
print(
f"{ch.name:<25} {fmt_currency(ch.cac):>8} "
f"{ch.payback_months:>7.1f}mo {fmt_ratio(ratio):>8} "
f"{ch.max_mqls_per_month:>7} {ch.trend:>10}{flag}"
)
def print_funnel_summary(customers: int, mqls: int) -> None:
print_header("Funnel Requirements")
print(f" Target new ARR: {fmt_currency(TARGET_NEW_ARR)}")
print(f" Average selling price: {fmt_currency(ASP_ANNUAL)}")
print(f" New customers needed: {customers}")
print(f" Funnel MQL→Close rate: {FUNNEL.mql_to_close:.1%}")
print(f" Total MQLs needed: {mqls}")
print(f"\n Funnel stage rates:")
print(f" MQL → SAL: {FUNNEL.mql_to_sal:.0%}")
    print(f"    SAL → SQL: {FUNNEL.sal_to_sql:.0%}")
    print(f"    SQL → Opportunity: {FUNNEL.sql_to_opp:.0%}")
    print(f"    Opportunity → Close: {FUNNEL.opp_to_close:.0%}")
print(f"\n LTV (estimated): {fmt_currency(LTV)}")
print(f" Monthly churn: {MONTHLY_CHURN:.1%} ({MONTHLY_CHURN*12:.0%} annualized)")
def print_scenario(result: ScenarioResult, channels: List[Channel]) -> None:
print_header(f"Scenario: {result.name}")
print(f" Total marketing budget: {fmt_currency(result.total_budget)}")
print(f" Projected customers: {result.projected_customers}")
print(f" Projected new ARR: {fmt_currency(result.projected_arr)}")
print(f" Blended CAC: {fmt_currency(result.blended_cac)}")
blended_ltv_cac = LTV / result.blended_cac if result.blended_cac > 0 else 0
blended_payback = result.blended_cac / (ARPU_MONTHLY * GROSS_MARGIN)
print(f" Blended LTV:CAC: {fmt_ratio(blended_ltv_cac)}", end="")
if blended_ltv_cac < 1:
print(" ⚠ BELOW BREAK-EVEN")
elif blended_ltv_cac < 3:
print(" △ MARGINAL")
    else:
        print(" ✓ HEALTHY")
print(f" Blended payback: {blended_payback:.1f} months")
if result.notes:
print(f"\n Notes:")
for note in result.notes:
            print(f"   • {note}")
print(f"\n {'Channel':<25} {'MQLs':>6} {'Budget':>10} {'% of Budget':>12} {'LTV:CAC':>8}")
print(" " + "-" * 65)
for ch in sorted(channels, key=score_channel, reverse=True):
mqls = result.channel_mqls.get(ch.name, 0)
budget = result.channel_budgets.get(ch.name, 0.0)
pct = (budget / result.total_budget * 100) if result.total_budget > 0 else 0
ratio = ltv_to_cac(ch.ltv, ch.cac)
print(
f" {ch.name:<25} {mqls:>6} {fmt_currency(budget):>10} "
f"{pct:>11.1f}% {fmt_ratio(ratio):>8}"
)
def print_scenario_comparison(scenarios: List[ScenarioResult]) -> None:
print_header("Scenario Comparison")
header = f"{'Scenario':<18} {'Budget':>10} {'Customers':>10} {'ARR':>10} {'Blended CAC':>12} {'LTV:CAC':>8} {'Payback':>9}"
print(header)
print("-" * 82)
for s in scenarios:
blended_ltv_cac = LTV / s.blended_cac if s.blended_cac > 0 else 0
blended_payback = s.blended_cac / (ARPU_MONTHLY * GROSS_MARGIN)
print(
f"{s.name:<18} {fmt_currency(s.total_budget):>10} "
f"{s.projected_customers:>10} {fmt_currency(s.projected_arr):>10} "
f"{fmt_currency(s.blended_cac):>12} {fmt_ratio(blended_ltv_cac):>8} "
f"{blended_payback:>7.1f}mo"
)
def print_recommendations(channels: List[Channel]) -> None:
print_header("Channel Recommendations")
scale = [ch for ch in channels if score_channel(ch) >= 1.5 and ch.trend in ("improving", "stable")]
hold = [ch for ch in channels if 0.8 <= score_channel(ch) < 1.5 or (ch.trend == "stable" and ltv_to_cac(ch.ltv, ch.cac) >= 3)]
cut = [ch for ch in channels if ltv_to_cac(ch.ltv, ch.cac) < 2 or ch.trend == "declining"]
# Deduplicate
hold = [ch for ch in hold if ch not in scale]
cut = [ch for ch in cut if ch not in scale and ch not in hold]
if scale:
print(" SCALE (strong LTV:CAC, improving or stable trend):")
for ch in scale:
print(f" + {ch.name} [LTV:CAC {fmt_ratio(ltv_to_cac(ch.ltv, ch.cac))}, payback {ch.payback_months:.0f}mo]")
if hold:
print(" HOLD (monitor — adequate but not outstanding):")
for ch in hold:
print(f" = {ch.name} [LTV:CAC {fmt_ratio(ltv_to_cac(ch.ltv, ch.cac))}, trend: {ch.trend}]")
if cut:
print(" CUT or REDUCE (poor LTV:CAC or declining):")
for ch in cut:
print(f" - {ch.name} [LTV:CAC {fmt_ratio(ltv_to_cac(ch.ltv, ch.cac))}, trend: {ch.trend}]")
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main() -> None:
customers = customers_needed(TARGET_NEW_ARR, ASP_ANNUAL)
total_mqls = mqls_needed_total(customers, FUNNEL.mql_to_close)
print_channel_table(CHANNELS)
print_funnel_summary(customers, total_mqls)
scenarios = [
build_scenario(
name="Conservative",
channels=CHANNELS,
total_mqls=total_mqls,
multiplier=0.7,
notes=[
"Prioritizes lowest CAC channels only.",
"May not reach MQL target — expect ~70% of goal.",
"Best for capital-constrained orgs or short runway.",
],
),
build_scenario(
name="Moderate",
channels=CHANNELS,
total_mqls=total_mqls,
multiplier=1.0,
notes=[
"Balanced allocation — efficiency-first but full MQL target.",
"Recommended baseline. Revisit Q2 based on actuals.",
],
),
build_scenario(
name="Aggressive",
channels=CHANNELS,
total_mqls=total_mqls,
multiplier=1.4,
notes=[
"Pushes all channels toward capacity ceiling.",
"Higher spend on lower-efficiency channels to hit volume.",
"Requires > 18-month runway to justify payback period.",
],
),
]
for scenario in scenarios:
print_scenario(scenario, CHANNELS)
print_scenario_comparison(scenarios)
print_recommendations(CHANNELS)
print("\n" + "=" * 72)
print(" Key questions before finalizing budget:")
print(" 1. What is the payback period the CFO/board will accept?")
print(" 2. Is CAC for declining-trend channels actually recoverable?")
print(" 3. Does the moderate scenario require sales headcount increase?")
print(" 4. Which channels have capacity to absorb 20% more spend?")
print("=" * 72 + "\n")
if __name__ == "__main__":
main()


@@ -0,0 +1,236 @@
---
name: company-os
description: "The meta-framework for how a company runs — the connective tissue between all C-suite roles. Covers operating system selection (EOS, Scaling Up, OKR-native, hybrid), accountability charts, scorecards, meeting pulse, issue resolution, and 90-day rocks. Use when setting up company operations, selecting a management framework, designing meeting rhythms, building accountability systems, implementing OKRs, or when user mentions EOS, Scaling Up, operating system, L10 meetings, rocks, scorecard, accountability chart, or quarterly planning."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: company-operations
updated: 2026-03-05
frameworks: os-comparison, implementation-guide
---
# Company Operating System
The operating system is the collection of tools, rhythms, and agreements that determine how the company functions. Every company has one — most just don't know what it is. Making it explicit makes it improvable.
## Keywords
operating system, EOS, Entrepreneurial Operating System, Scaling Up, Rockefeller Habits, OKR, Holacracy, L10 meeting, rocks, scorecard, accountability chart, issues list, IDS, meeting pulse, quarterly planning, weekly scorecard, management framework, company rhythm, traction, Gino Wickman, Verne Harnish
## Why This Matters
Most operational dysfunction isn't a people problem — it's a system problem. When:
- The same issues recur every week: no issue resolution system
- Meetings feel pointless: no structured meeting pulse
- Nobody knows who owns what: no accountability chart
- Quarterly goals slip: rocks aren't real commitments
Fix the system. The people will operate better inside it.
## The Six Core Components
Every effective operating system has these six, regardless of which framework you choose:
### 1. Accountability Chart
Not an org chart. An accountability chart answers: "Who owns this outcome?"
**Key distinction:** One person owns each function. Multiple people may work in it. Ownership means the buck stops with one person.
**Structure:**
```
CEO
├── Sales (CRO/VP Sales)
│ ├── Inbound pipeline
│ └── Outbound pipeline
├── Product & Engineering (CTO/CPO)
│ ├── Product roadmap
│ └── Engineering delivery
├── Operations (COO)
│ ├── Customer success
│ └── Finance & Legal
└── People (CHRO/VP People)
├── Recruiting
└── People operations
```
**Rules:**
- No shared ownership. "Alice and Bob both own it" means nobody owns it.
- One person can own multiple seats at early stages. That's fine. Just be explicit.
- Revisit quarterly as you scale. Ownership shifts as the company grows.
**Build it in a workshop:**
1. List all functions the company performs
2. Assign one owner per function — no exceptions
3. Identify gaps (functions nobody owns) and overlaps (functions two people think they own)
4. Publish it. Update it when something changes.
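Step 3 (gaps and overlaps) is easy to check mechanically once the two lists exist. A minimal sketch; the function names and owners below are illustrative, not part of this skill:

```python
from collections import defaultdict

# Hypothetical inputs: every function the company performs,
# and each person's claimed ownership.
functions = {"sales", "product", "engineering", "customer success", "finance", "recruiting"}
claimed = {
    "Alice": {"sales", "customer success"},
    "Bob": {"product", "engineering"},
    "Cara": {"customer success", "finance"},
}

# Invert the claims: function -> set of people who believe they own it.
owners_per_function = defaultdict(set)
for person, owned in claimed.items():
    for fn in owned:
        owners_per_function[fn].add(person)

gaps = sorted(functions - set(owners_per_function))  # nobody owns these
overlaps = {fn: sorted(people)                       # two+ people think they own these
            for fn, people in owners_per_function.items() if len(people) > 1}

print("Gaps:", gaps)          # Gaps: ['recruiting']
print("Overlaps:", overlaps)  # Overlaps: {'customer success': ['Alice', 'Cara']}
```

Both outputs are workshop inputs: every gap needs an owner assigned, every overlap needs exactly one name kept.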
### 2. Scorecard
Weekly metrics that tell you if the company is on track. Not monthly. Not quarterly. Weekly.
**Rules:**
- 5-15 metrics maximum. More than 15 and nothing gets attention.
- Each metric has an owner and a weekly target (not a range — a number).
- Red/yellow/green status. Not paragraphs.
- The scorecard is discussed at the leadership team weekly meeting. Only red metrics get discussion time.
**Example scorecard structure:**
| Metric | Owner | Target | This Week | Status |
|--------|-------|--------|-----------|--------|
| New MRR | CRO | €50K | €43K | 🔴 |
| Churn | CS Lead | < 1% | 0.8% | 🟢 |
| Active users | CPO | 2,000 | 2,150 | 🟢 |
| Deployments | CTO | 3/week | 3 | 🟢 |
| Open critical bugs | CTO | 0 | 2 | 🔴 |
| Runway | CFO | > 18mo | 16mo | 🟡 |
**Anti-pattern:** Measuring everything. If you track 40 KPIs, you're watching, not managing.
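The red/yellow/green call can be made mechanically rather than by debate. A minimal sketch, assuming a 10% yellow band and positive metric values; the threshold is an assumption, not part of any framework named here:

```python
def rag_status(value: float, target: float, higher_is_better: bool = True) -> str:
    """Classify one weekly scorecard metric. Assumes positive values and targets."""
    # Normalize so 1.0 always means "exactly on target".
    ratio = value / target if higher_is_better else target / value
    if ratio >= 1.0:
        return "green"
    if ratio >= 0.9:  # within 10% of target (assumed yellow band)
        return "yellow"
    return "red"

print(rag_status(43_000, 50_000))                       # new MRR short of target -> red
print(rag_status(0.008, 0.01, higher_is_better=False))  # churn under the ceiling -> green
```

Agreeing the thresholds once, per metric, keeps the weekly review to flagging reds instead of relitigating what "on track" means.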
### 3. Meeting Pulse
The meeting rhythm that drives the company. Not optional — the pulse is what keeps the company alive.
**The full rhythm:**
| Meeting | Frequency | Duration | Who | Purpose |
|---------|-----------|----------|-----|---------|
| Daily standup | Daily | 15 min | Each team | Blockers only |
| L10 / Leadership sync | Weekly | 90 min | Leadership team | Scorecard + issues |
| Department review | Monthly | 60 min | Dept + leadership | OKR progress |
| Quarterly planning | Quarterly | 1-2 days | Leadership | Set rocks, review strategy |
| Annual planning | Annual | 2-3 days | Leadership | 1-year + 3-year vision |
**The L10 meeting (Weekly Leadership Sync):**
Named for the goal of each meeting being a 10/10. Fixed agenda:
1. Good news (5 min) — personal + business
2. Scorecard review (5 min) — flag red items only
3. Rock review (5 min) — on/off track for each rock
4. Customer/employee headlines (5 min)
5. Issues list (60 min) — IDS (see below)
6. To-dos review (5 min) — last week's commitments
7. Conclude (5 min) — rate the meeting 1-10, what would make it a 10 next time
### 4. Issue Resolution (IDS)
The core problem-solving loop. Maximum 15 minutes per issue.
**IDS: Identify, Discuss, Solve**
- **Identify:** What is the actual issue? (Not the symptom — the root cause) State it in one sentence.
- **Discuss:** Relevant facts + perspectives. Time-boxed. When discussion starts repeating, stop.
- **Solve:** One owner. One action. One due date. Written on the to-do list.
**Anti-patterns:**
- "Let's take this offline" — most things taken offline never get resolved
- Discussing without deciding — a great discussion with no action item is wasted time
- Revisiting decided issues — once solved, it leaves the list. Reopen only with new information.
**The Issues List:** A running, prioritized list of all unresolved issues. Owned by the leadership team. Reviewed and pruned weekly. If an issue has been on the list for 3+ meetings and hasn't been discussed, it's either not a real issue or it's too scary to address — both deserve attention.
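The stale-issue rule above is checkable in code. A minimal sketch with a hypothetical issue structure (the titles and counts are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    meetings_on_list: int = 0   # how many weekly meetings it has sat on the list
    discussed: bool = False     # has it ever gotten IDS time?

def stale_issues(issues: list[Issue]) -> list[str]:
    """Issues carried for 3+ meetings without discussion, per the rule above."""
    return [i.title for i in issues if i.meetings_on_list >= 3 and not i.discussed]

backlog = [
    Issue("Pricing page confusion", meetings_on_list=1),
    Issue("SDR/PLG handoff unclear", meetings_on_list=4),                 # never discussed
    Issue("Churn spike in SMB tier", meetings_on_list=5, discussed=True),
]
print(stale_issues(backlog))  # ['SDR/PLG handoff unclear']
```

Anything this flags gets forced to the top of next week's IDS block, or explicitly dropped.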
### 5. Rocks (90-Day Priorities)
Rocks are the 3-7 most important things each person must accomplish in the next 90 days. They're not the job description — they're the things that move the company forward.
**Why 90 days?** Long enough for meaningful progress. Short enough to stay real.
**Rock rules:**
- Each person: 3-7 rocks maximum. More than 7 and none get done.
- Company-level rocks (shared priorities): 3-7 for the leadership team
- Each rock is binary: done or not done. No "60% complete."
- Set at the quarterly planning session. Reviewed weekly (on/off track).
**Bad rock:** "Improve our sales process"
**Good rock:** "Implement Salesforce CRM with full pipeline stages and weekly reporting by March 31"
**Rock vs. to-do:** A to-do takes one action. A rock takes 90 days of consistent work.
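The count limit is worth enforcing at planning time, not discovering at review time. A minimal sketch; the owner names and rock titles are hypothetical:

```python
def check_rocks(rocks_by_owner: dict[str, list[str]]) -> list[str]:
    """Flag owners outside the 3-7 rock band (limits taken from the rules above)."""
    warnings = []
    for owner, rocks in rocks_by_owner.items():
        if len(rocks) > 7:
            warnings.append(f"{owner}: {len(rocks)} rocks, over the hard limit of 7")
        elif len(rocks) == 0:
            warnings.append(f"{owner}: no rocks set for the quarter")
    return warnings

plan = {
    "CEO": ["Close Series B", "Hire VP Sales", "Ship v2 onboarding"],
    "CTO": ["R1", "R2", "R3", "R4", "R5", "R6", "R7", "R8"],  # one too many
}
print(check_rocks(plan))  # ['CTO: 8 rocks, over the hard limit of 7']
```

Run it at the end of the quarterly planning session, before rocks are published.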
### 6. Communication Cadence
Who gets what information, when, and how.
| Audience | What | When | Format |
|----------|------|------|--------|
| All employees | Company update | Monthly | Written + Q&A |
| All employees | Quarterly results + next priorities | Quarterly | All-hands |
| Leadership team | Scorecard | Weekly | Dashboard |
| Board | Company performance | Monthly | Board memo |
| Investors | Key metrics + narrative | Monthly or quarterly | Investor update |
| Customers | Product updates | Per release | Release notes |
**Default rule:** If you're deciding whether to share something internally, share it. The cost of under-communication always exceeds the cost of over-communication inside a company.
---
## Operating System Selection
See `references/os-comparison.md` for full comparison. Quick guide:
| If you are... | Consider... |
|---------------|-------------|
| 10-250 person company, founder-led, operational chaos | EOS / Traction |
| Ambitious growth company, need rigorous strategy cascade | Scaling Up |
| Tech company, engineering culture, hypothesis-driven | OKR-native |
| Decentralized, flat, high autonomy | Holacracy (only if you're patient) |
| None of the above quite fit | Custom hybrid |
---
## Implementation Roadmap
Don't implement everything at once. See `references/implementation-guide.md` for the full 90-day plan.
**Quick start (first 30 days):**
1. Build the accountability chart (1 workshop, 2 hours)
2. Define 5-10 weekly scorecard metrics (leadership team alignment, 1 hour)
3. Start the weekly L10 meeting (no prep — just start)
These three alone will improve coordination more than most companies achieve in a year.
---
## Common Failure Modes
**Partial implementation:** "We do OKRs but skip the weekly check-in." Half an operating system is worse than none — it creates theater without accountability.
**Meeting fatigue:** Adding the full rhythm on top of existing meetings. Start by replacing meetings, not adding them.
**Metric overload:** Starting with 30 KPIs because "they all matter." Start with 5. Add when the cadence is established.
**Rock inflation:** Setting 12 rocks per person because "everything is a priority." When everything is a priority, nothing is. Hard limit: 7.
**Leader non-compliance:** Leadership team skips the L10 or doesn't follow IDS. The operating system mirrors the respect leadership gives it. If leaders don't take it seriously, nobody will.
**Annual planning without quarterly review:** Setting annual goals and checking in at year-end. Quarterly is the minimum review cycle for any meaningful goal.
---
## Integration with C-Suite
The company OS is the connective tissue. Every other role depends on it:
| C-Suite Role | OS Dependency |
|-------------|---------------|
| CEO | Sets vision that feeds into 1-year plan and rocks |
| COO | Owns the meeting pulse and issue resolution cadence |
| CFO | Owns the financial metrics in the scorecard |
| CTO | Owns engineering rocks and tech scorecard metrics |
| CHRO | Owns people metrics (attrition, hiring velocity) in scorecard |
| Culture Architect | Culture rituals plug into the meeting pulse |
| Strategic Alignment Engine | Validates that team rocks cascade from company rocks |
---
## Key Questions for the Operating System
- "If I asked five different team leads what the company's top 3 priorities are this quarter, would they give the same answers?"
- "What was the most important issue raised in last week's leadership meeting? Was it resolved or is it still open?"
- "Name a metric that would tell us by Friday whether this week was a good week. Do we track it?"
- "Who owns customer churn? Can you name that person without hesitation?"
- "When was the last time we updated the accountability chart?"
## Detailed References
- `references/os-comparison.md` — EOS vs Scaling Up vs OKRs vs Holacracy vs hybrid
- `references/implementation-guide.md` — 90-day implementation plan

# Company Operating System — 90-Day Implementation Guide
Don't implement everything at once. The fastest path to failure is trying to launch the full operating system in week one. Build incrementally. Let the team experience wins before adding complexity.
---
## Before You Start
### Prerequisites
**Leadership alignment (non-negotiable):**
Every member of the leadership team must understand why you're doing this and commit to running the system. One holdout destroys the whole model. If the CFO skips the L10 meetings, the system won't work.
**Current state audit:**
- What meetings currently exist? Which can be replaced?
- Who owns which functions today? (Even informally)
- What metrics are being tracked? (Even inconsistently)
**Assign an OS owner:**
One person is responsible for the implementation and ongoing maintenance of the operating system. Usually the COO or CEO (at smaller companies). This is not a committee job.
---
## Week 1–2: Accountability Chart + Scorecard
### Accountability Chart Workshop (Week 1)
**Duration:** 2–3 hours, full leadership team
**Step 1 — List all functions (30 min)**
On a whiteboard, list every function the company performs:
- Sales (inbound, outbound, partnerships)
- Marketing (content, paid, brand)
- Product (roadmap, design, research)
- Engineering (frontend, backend, devops)
- Customer success (onboarding, support, retention)
- Finance (accounting, FP&A, legal)
- People (recruiting, HR, culture)
- Operations (processes, tools, facilities)
**Step 2 — Assign owners (45 min)**
For each function: "Who is the one person ultimately accountable?" Write their name.
Rules: One name only. No joint ownership. One person can own multiple functions at small scale.
**Step 3 — Identify gaps and overlaps (30 min)**
- **Gaps:** Functions with no owner → Who should own them? Or do we need a hire?
- **Overlaps:** Two people said they own the same thing → Resolve now, not later.
**Step 4 — Publish and socialize (Week 2)**
Share with the full company. Explain what an accountability chart is and isn't.
"This is about clarity, not hierarchy. It tells everyone who to go to for each function."
**Output:** A documented accountability chart. Use a simple tool (Miro, Google Slides, Ninety.io).
---
### Scorecard Design (Week 2)
**Duration:** 90 minutes, leadership team
**Step 1 — List candidate metrics (30 min)**
Each leader lists 3–5 metrics they already track or wish they tracked. No filtering yet.
**Step 2 — Filter to 5–15 (30 min)**
Criteria: Is it measurable weekly? Does it tell us if the company is healthy? Does one person own it?
Drop: metrics that are monthly only, metrics without a clear owner, metrics that measure activity not outcomes.
**Step 3 — Set weekly targets (20 min)**
For each metric: what's the weekly target? Not a range — a number. Red/yellow/green thresholds.
**Step 4 — Assign owners (10 min)**
Every metric has one owner who is responsible for reporting it weekly.
**Output:** A scorecard document. 5–15 metrics, owner, target, weekly tracking column.
**First scorecard run:** Week 2 or 3. It won't be perfect. That's fine.
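The red/yellow/green check from Step 3 is mechanical enough to script. A minimal stdlib sketch; the metric names, owners, targets, and the 90% yellow threshold are illustrative assumptions, and it assumes higher-is-better metrics:

```python
def metric_status(value, target, yellow_pct=0.9):
    """Traffic-light status: green at/above target, yellow within
    yellow_pct of target, red below that. Assumes higher is better."""
    if value >= target:
        return "green"
    if value >= target * yellow_pct:
        return "yellow"
    return "red"

# Hypothetical weekly scorecard rows: (metric, owner, target, actual)
scorecard = [
    ("New qualified leads", "CMO", 40, 42),
    ("Demos booked", "CRO", 15, 12),
    ("Weekly active users", "CPO", 5000, 5100),
]

for metric, owner, target, actual in scorecard:
    print(f"{metric} ({owner}): {metric_status(actual, target)}")
```

Red items go straight to the issues list; the script only surfaces them.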
---
## Week 3–4: Meeting Pulse (Start With L10)
Don't start all the meetings at once. Start with the weekly L10. Replace existing leadership syncs.
### L10 Meeting Setup
**Schedule:** Same day, same time, every week. Non-negotiable attendance.
**Duration:** 90 minutes. No more, no less.
**Facilitator:** Rotate or assign to COO/CEO. The facilitator keeps time and follows the agenda.
**Fixed agenda:**
1. **Good news** (5 min) — One personal, one business from each person. No skipping.
2. **Scorecard review** (5 min) — Traffic light only. Red items go to the issues list.
3. **Rock review** (5 min) — Each person: "on track" or "off track." No justification needed at this step.
4. **Customer/employee headlines** (5 min) — One sentence each. No reports.
5. **Issues** (60 min) — IDS process. Prioritize the top 3–5 issues. Solve them.
6. **To-do review** (5 min) — Review last week's commitments (done/not done). No excuses, just data.
7. **Conclude** (5 min) — Rate the meeting 1–10. What would make next week better?
**First L10 meeting:**
It will feel awkward. Run through the agenda anyway. The team needs the repetition to internalize it. By week 4, it should feel natural.
### Issues List Setup
Create a shared document (Notion, Google Docs, or dedicated tool):
- Issue title
- Priority (High / Medium / Low)
- Status (Open / In progress / Solved)
- Owner (once assigned)
- Due date
At the first L10, generate the issues list by asking: "What's getting in our way right now?" Expect 10–20 items on the first pass.
---
## Week 5–8: Rocks and Quarterly Planning
### Quarterly Planning Session (end of Week 5 or start of Week 6)
**Duration:** 4–8 hours (or 2 × 4-hour days for larger teams)
**Who:** Full leadership team
**Session structure:**
**Part 1: Review previous quarter (60–90 min)**
- What rocks were completed? What were dropped?
- What did we learn?
- What changed in the market or company?
**Part 2: Confirm or update company direction (60 min)**
- Is the 1-year goal still valid?
- Any major strategy shifts needed?
- Update the V/TO or OPSP if using EOS or Scaling Up.
**Part 3: Set company rocks (90 min)**
- Brainstorm: What are the 3–7 most important things to accomplish this quarter?
- Prioritize. Be ruthless. 3 rocks done > 7 rocks started.
- Each rock: clear owner, clear definition of done, 90-day timeline.
**Part 4: Set individual rocks (60 min)**
- Each leader sets their 3–7 rocks (aligned with company rocks where possible)
- Share with group: dependencies? Conflicts? Overloaded people?
**Part 5: Communicate (Week 6)**
- Share company rocks with the full organization within 1 week
- Each team sets their own rocks, cascaded from company rocks (3–5 per team)
**Rock template:**
```
Rock: [What you'll accomplish]
Owner: [One person]
Due date: [Specific date within the quarter]
Definition of done: [How we'll know it's complete]
Dependencies: [What else needs to happen first]
```
---
## Week 9–12: Issue Resolution Mastery + Communication Cadence
By now the L10 should be running smoothly. Weeks 9–12 focus on deepening IDS skills and establishing the broader communication cadence.
### IDS Practice
The issue resolution process often degrades in weeks 5–8. Common problems:
- Issues discussed but never solved (no clear action item)
- Same issues recurring (root cause not addressed)
- Too many issues, not enough resolution (prioritization failing)
**IDS calibration exercise (Week 9):**
In the next L10, after each issue is "solved," ask:
- "Is this actually solved, or are we postponing it?"
- "What's the specific action? Who owns it? When is it due?"
- "Is this the real issue, or a symptom of something deeper?"
### Communication Cadence Setup
Build out the full communication calendar:
| Communication | Frequency | Owner | Format | Tool |
|---------------|-----------|-------|--------|------|
| Company all-hands | Monthly | CEO | Update + Q&A | Video call |
| Quarterly planning results | Quarterly | CEO/COO | Written + live | Notion + all-hands |
| Board update | Monthly | CEO + CFO | Board memo | Doc |
| Investor update | Monthly | CEO + CFO | Email | Template |
| Department L10s | Weekly | Dept lead | L10 format | In-person / Zoom |
| Daily standups | Daily | Team leads | 15 min | Team call |
**Company all-hands template:**
1. State of the company (financial health, key metrics) — 10 min
2. Quarterly rocks: what we committed to, where we stand — 10 min
3. Wins and recognitions — 5 min
4. What's coming next quarter — 10 min
5. Q&A — 15–25 min
---
## Post-90 Days: Refinement and Optimization
### Month 4 retrospective
After the first full quarter, run a retrospective on the operating system itself:
- What's working? What isn't?
- Which meetings should continue as-is? Which need adjustment?
- Is the scorecard measuring the right things?
- Are rocks the right size and specificity?
- What should we add next?
### Scorecard evolution
By month 4, you'll know which metrics matter most. Add 2–3 that are missing. Remove metrics that nobody uses for decisions.
### L10 health check
Rate your L10 meetings over the first quarter:
- Average rating < 7: The agenda isn't being followed or issues aren't being resolved. Diagnose.
- Average rating 7–8: Normal. Keep building discipline.
- Average rating > 8: The team is engaged. Start extending the system to department level.
### Department L10s (Month 4+)
Once leadership L10 is running well, cascade the meeting structure:
- Each department runs their own weekly L10
- Department rocks cascade from company rocks
- Issues that cross departments are escalated to leadership L10
### Year 1 annual planning
End of year 1: run a full-day annual planning session.
- Review the year: what did we accomplish? What did we miss? What did we learn?
- Update 3-year vision (has it changed?)
- Set next year's annual goals
- Set Q1 rocks
- Celebrate. Seriously — mark the milestone.
---
## Implementation Anti-Patterns
**Skipping the accountability chart:** Without ownership clarity, every other system breaks down. Do this first.
**Building a perfect scorecard before starting:** Start with 5 imperfect metrics. Improve over time.
**Not replacing existing meetings:** Adding L10 on top of 3 existing meetings creates meeting overload. Cancel the redundant ones.
**Leader non-participation:** If one leader consistently skips or is disengaged, the system won't work. Address this directly — it's a culture issue, not a calendar issue.
**Changing the L10 agenda:** The agenda works because of repetition. Resist the urge to customize it for the first 6 months.
**Rocks without accountability:** If nobody checks rocks at the L10 ("on track / off track"), they become wish lists. The weekly review is what makes them real.

# Operating System Comparison
Side-by-side analysis of the major company operating frameworks.
---
## Overview
| Framework | Origin | Best fit | Implementation time | Cost |
|-----------|--------|----------|---------------------|------|
| EOS | Gino Wickman, 2007 | 10–250 employees, founder-led | 2–3 years full adoption | Free (DIY) to $25K+/year (implementer) |
| Scaling Up | Verne Harnish, 2002 | Growth-stage, strategic focus | 1–2 years | Free (DIY) to $15K+/year (coach) |
| OKR-native | Andy Grove / Google | Tech companies, product orgs | 3–6 months | Free |
| Holacracy | Brian Robertson, 2007 | Flat, autonomous organizations | 2–4 years | $5K–$50K+ (certification) |
| Custom hybrid | You | When the above don't fit exactly | Ongoing | Whatever you invest |
---
## 1. EOS — Entrepreneurial Operating System
**Book:** *Traction* by Gino Wickman
### Core principles
EOS is built on Six Components:
1. **Vision** — Where are you going? (V/TO: Vision/Traction Organizer)
2. **People** — Right people, right seats
3. **Data** — Scorecard with weekly metrics
4. **Issues** — Surface and resolve with IDS
5. **Process** — Document core processes
6. **Traction** — Rocks + meeting pulse (L10)
### Signature tools
- **V/TO (Vision/Traction Organizer):** 2-page strategy doc. Core values, core focus, 10-year target, 3-year picture, 1-year plan, quarterly rocks, issues.
- **Accountability Chart:** Who owns what function (not org chart)
- **L10 meeting:** Weekly 90-minute leadership sync (Level 10 = aim for 10/10)
- **Rocks:** 90-day priority commitments (3–7 per person)
- **IDS:** Identify, Discuss, Solve (issue resolution, max 15 min per issue)
### Strengths
- **Operationally focused.** If your problem is execution chaos, EOS addresses it directly.
- **Accessible.** The book is practical. You can DIY it without a coach.
- **Community.** Large network of implementers, tools (Ninety.io, EOS Worldwide), and practitioners.
- **Simple enough to actually use.** No complex methodology. Most teams are functional within 6 months.
### Limitations
- **Strategic depth is shallow.** The V/TO is good for direction but doesn't replace real strategy work.
- **Doesn't scale beyond ~250.** Designed for entrepreneurial companies. Gets cumbersome at enterprise scale.
- **Assumes a cohesive leadership team.** If trust is broken at the top, EOS won't fix it.
- **Facilitator dependency.** Many companies benefit from an EOS Implementer (external coach), which adds cost.
### Best fit
- 10–150 person companies
- Founder-led, operational dysfunction
- Teams that can't stay on the same page
- Companies with recurring issues that never get resolved
- First real "operating system" for a company that's been running on vibes
### Not ideal if
- You need sophisticated strategic planning
- You're > 250 people and already have ops infrastructure
- Your team resists structured methodology
---
## 2. Scaling Up (Rockefeller Habits 2.0)
**Book:** *Scaling Up* by Verne Harnish
### Core principles
Built on four Decisions:
1. **People** — Core values, talent management, Topgrading
2. **Strategy** — One-Page Strategic Plan (OPSP), 7 Strata of Strategy
3. **Execution** — Priorities (rocks), meeting rhythm, critical numbers
4. **Cash** — Power of One, Cash Acceleration Strategies (CAS)
### Signature tools
- **One-Page Strategic Plan (OPSP):** Annual and quarterly goals on one page. More strategic than EOS's V/TO.
- **7 Strata of Strategy:** Competitive positioning, core customer, brand promise, X-factor (10x advantage), profit per X, BHAG, critical numbers.
- **Meeting rhythm:** Daily (5–15 min), weekly, monthly, quarterly, annual — with specific templates.
- **Critical number:** One metric that, if improved, fixes everything else.
- **Cash acceleration:** CAS system for improving working capital and cash conversion cycle.
### Strengths
- **Stronger strategic framework than EOS.** The 7 strata and OPSP force real strategic thinking.
- **Cash focus.** Unique among frameworks — explicitly addresses cash flow management.
- **Scales further.** Better suited for 100–1,000 person companies than EOS.
- **Works for ambitious growth companies.** Designed for companies that want to scale significantly.
### Limitations
- **More complex than EOS.** Harder to DIY. Benefits heavily from a certified Scaling Up coach.
- **Overwhelming at first.** The full framework has many components. Teams often implement partially.
- **Less prescriptive on meetings.** EOS's L10 is very specific. Scaling Up's meeting rhythm requires more customization.
### Best fit
- Series A to Series C companies
- Companies with strong growth ambition
- Leadership teams that want strategic rigor, not just operational clarity
- Companies already past initial chaos, ready for more sophisticated frameworks
### Not ideal if
- You're pre-product-market-fit
- You need quick operational wins
- Your team doesn't have the bandwidth for the learning curve
---
## 3. OKR-Native (Google Style)
**Books:** *Measure What Matters* by John Doerr; *Radical Focus* by Christina Wodtke
### Core principles
OKRs = Objectives + Key Results
- **Objectives:** Qualitative, inspiring direction. "What are we trying to achieve?"
- **Key Results:** Quantitative, measurable outcomes. "How will we know we achieved it?"
- **Not tasks.** KRs measure outcomes, not activities.
**Cascade:** Company OKRs → Department OKRs → Team OKRs → Individual OKRs
**Cadence:** Quarterly OKR cycles. Weekly check-ins. Annual reflection.
**Scoring:** 0.0–1.0. Target is 0.7. Consistently hitting 1.0 = OKRs aren't ambitious enough.
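End-of-cycle grading against the 0.7 target can be checked mechanically. A minimal sketch; the KR scores and the calibration thresholds (0.9 for sandbagging, 0.4 for overreach) are illustrative assumptions:

```python
def grade_objective(kr_scores):
    """Average KR scores and flag calibration problems: consistently
    near 1.0 means the OKRs weren't ambitious; near 0 means overreach."""
    avg = sum(kr_scores) / len(kr_scores)
    if avg >= 0.9:
        note = "sandbagged: set more ambitious KRs"
    elif avg < 0.4:
        note = "overreach: scope down or fix execution"
    else:
        note = "healthy ambition (target ~0.7)"
    return round(avg, 2), note

score, note = grade_objective([0.8, 0.6, 0.7])
print(score, note)
```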
### Strengths
- **Aligns the whole company.** When done well, every team can trace their work to company-level objectives.
- **Encourages ambition.** Stretch goals are explicit: the framework distinguishes achievable "roofshot" OKRs from ambitious "moonshot" OKRs.
- **Widely understood in tech.** Many hires will already know OKRs.
- **No framework cost.** No implementer required. Tooling is free or cheap (Linear, Notion, Lattice).
### Limitations
- **Hard to do well.** Most companies run "OKR theater" — tasks dressed up as key results.
- **Missing the HOW.** OKRs define what to achieve but not how to operate. You still need meeting rhythm, accountability structure, and issue resolution.
- **Misalignment risk.** If not cascaded properly, teams run disconnected OKRs that feel like alignment but aren't.
- **No operational backbone.** OKRs are a goal-setting system, not a full operating system.
### Best fit
- Tech companies with strong product/engineering culture
- Companies where hypothesis-driven work is already the norm
- Organizations that value autonomy and bottom-up goal setting
- As the goal-setting layer inside a broader operating system
### Not ideal if
- Teams lack discipline to hold each other accountable
- You need more than just goal alignment (issue resolution, meeting structure)
- Leaders don't model OKR behavior themselves
---
## 4. Holacracy
**Book:** *Holacracy* by Brian Robertson
### Core principles
Holacracy replaces the traditional management hierarchy with a system of distributed authority.
- **Circles:** Semi-autonomous units with defined purposes (like teams, but self-governing)
- **Roles:** People fill roles (not job descriptions). One person can hold multiple roles in different circles.
- **Governance meetings:** Roles and accountabilities are defined and evolved by the circle, not management
- **Tactical meetings:** Operational coordination within circles
- **The Constitution:** A legal document that all members ratify, replacing traditional management authority
### Strengths
- **Maximum autonomy.** People closest to the work define how it gets done.
- **Removes management as a bottleneck.** Decisions happen at the circle level.
- **Adapts to complexity.** Circle structure evolves organically as the work changes.
### Limitations
- **Enormous learning curve.** 2–4 years to full adoption. Many companies abandon it.
- **High meeting overhead.** Governance meetings add significant time.
- **Doesn't eliminate politics.** Just moves them to governance meetings.
- **Requires full commitment.** Partial Holacracy doesn't work. You either do it or you don't.
- **Not for crisis mode.** When speed matters, distributed governance slows you down.
### When it works
- Organizations with deep belief in autonomy and self-management
- Non-profit or mission-driven organizations where consensus matters
- Companies with patient leadership willing to invest years in implementation
### When it doesn't work
- Startups needing speed and clarity
- Companies with strong founder personalities who struggle to relinquish control
- Organizations that need to move fast or course-correct frequently
---
## 5. Custom Hybrid
### When to build a hybrid
None of the above frameworks fits perfectly because:
- EOS lacks strategic depth
- Scaling Up is complex to implement
- OKRs don't provide operational backbone
- Holacracy is too slow to implement
The solution: take the best components of each.
### Common hybrid patterns
**EOS backbone + OKR goal-setting:**
- EOS provides: accountability chart, L10 meeting, IDS, meeting pulse
- OKRs provide: goal-setting with ambition, cascade, and alignment checks
- Works well for: tech companies that want operational rigor with flexibility
**Scaling Up strategy + EOS execution:**
- Scaling Up provides: OPSP, 7 strata, cash management
- EOS provides: L10, rocks, IDS
- Works well for: ambitious growth companies that want both strategy and execution discipline
**OKRs + custom meeting rhythm:**
- OKRs provide: goal cascade
- Custom meetings: weekly team syncs, monthly department reviews, quarterly all-hands
- Works well for: companies that already have strong culture but need goal alignment
### Hybrid design principles
1. **Pick one goal-setting system.** Don't mix OKRs and Rocks — they're both 90-day priority systems and will create confusion.
2. **Be explicit about what you're taking from where.** "We use EOS for meetings and Scaling Up for strategy" is a clear hybrid. "We do a bit of everything" is chaos.
3. **Document your version.** Your operating system should have a name and a one-page description of what it includes.
4. **Evolve intentionally.** Change one component at a time. Don't overhaul the whole system when one part isn't working.
---
## Framework Selection Decision Tree
```
Is your company < 50 people and in operational chaos?
YES → Start with EOS. It's the simplest path to order.
NO → Continue.
Does strategic positioning and cash flow need significant work?
YES → Consider Scaling Up.
NO → Continue.
Is your company tech-native with strong product/engineering culture?
YES → OKR-native with a custom meeting rhythm.
NO → Continue.
Do you have 2+ years and full leadership commitment to radical organizational change?
YES → Consider Holacracy (with caution).
NO → Build a custom hybrid from EOS + OKRs.
```
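The tree above reads directly as code. A sketch that walks the same questions in order; the argument names are ours, each one the yes/no answer to one question:

```python
def pick_framework(headcount, operational_chaos, needs_strategy_and_cash,
                   tech_native, patient_2yr_commitment):
    """Walk the selection questions in order; first match wins."""
    if headcount < 50 and operational_chaos:
        return "EOS"
    if needs_strategy_and_cash:
        return "Scaling Up"
    if tech_native:
        return "OKR-native + custom meeting rhythm"
    if patient_2yr_commitment:
        return "Holacracy (with caution)"
    return "Custom hybrid (EOS + OKRs)"

# A 35-person founder-led company in operational chaos:
print(pick_framework(35, True, False, False, False))
```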

---
name: competitive-intel
description: "Systematic competitor tracking that feeds CMO positioning, CRO battlecards, and CPO roadmap decisions. Use when analyzing competitors, building sales battlecards, tracking market moves, positioning against alternatives, or when user mentions competitive intelligence, competitive analysis, competitor research, battlecards, win/loss, or market positioning."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: competitive-strategy
updated: 2026-03-05
frameworks: ci-playbook, battlecard-template
---
# Competitive Intelligence
Systematic competitor tracking. Not obsession — intelligence that drives real decisions.
## Keywords
competitive intelligence, competitor analysis, battlecard, win/loss analysis, competitive positioning, competitive tracking, market intelligence, competitor research, SWOT, competitive map, feature gap analysis, competitive strategy
## Quick Start
```
/ci:landscape — Map your competitive space (direct, indirect, future)
/ci:battlecard [name] — Build a sales battlecard for a specific competitor
/ci:winloss — Analyze recent wins and losses by reason
/ci:update [name] — Track what a competitor did recently
/ci:map — Build competitive positioning map
```
## Framework: 5-Layer Intelligence System
### Layer 1: Competitor Identification
**Direct competitors:** Same ICP, same problem, comparable solution, similar price point.
**Indirect competitors:** Same budget, different solution (including "do nothing" and "build in-house").
**Future competitors:** Well-funded startups in adjacent space; large incumbents with stated roadmap overlap.
**The 2x2 Threat Matrix:**
| | Same ICP | Different ICP |
|---|---|---|
| **Same problem** | Direct threat | Adjacent (watch) |
| **Different problem** | Displacement risk | Ignore for now |
Update this quarterly. Who's moved quadrants?
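Quadrant assignment for a tracked list can be scripted for the quarterly review. A sketch of the 2x2 above; the competitor names and answers are hypothetical:

```python
def threat_quadrant(same_icp, same_problem):
    """Map a competitor onto the 2x2 threat matrix."""
    if same_problem:
        return "Direct threat" if same_icp else "Adjacent (watch)"
    return "Displacement risk" if same_icp else "Ignore for now"

# Hypothetical tracked list: name -> (same ICP?, same problem?)
competitors = {
    "Acme Analytics": (True, True),
    "BigCo Suite": (True, False),
    "NicheTool": (False, True),
}
for name, (icp, problem) in competitors.items():
    print(f"{name}: {threat_quadrant(icp, problem)}")
```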
### Layer 2: Tracking Dimensions
Track these 8 dimensions per competitor:
| Dimension | Sources | Cadence |
|-----------|---------|---------|
| **Product moves** | Changelog, G2/Capterra reviews, Twitter/LinkedIn | Monthly |
| **Pricing changes** | Pricing page, sales call intel, customer feedback | Triggered |
| **Funding** | Crunchbase, TechCrunch, LinkedIn | Triggered |
| **Hiring signals** | LinkedIn job postings, Indeed | Monthly |
| **Partnerships** | Press releases, co-marketing | Triggered |
| **Customer wins** | Case studies, review sites, LinkedIn | Monthly |
| **Customer losses** | Win/loss interviews, churned accounts | Ongoing |
| **Messaging shifts** | Homepage, ads (Facebook/Google Ad Library) | Quarterly |
### Layer 3: Analysis Frameworks
**SWOT per Competitor:**
- Strengths: What do they do well? Where do they win?
- Weaknesses: Where do they lose? What do customers complain about?
- Opportunities: What could they do that would threaten you?
- Threats: What's their existential risk?
**Competitive Positioning Map (2-axis):**
Choose axes that matter for your buyers:
- Common: Price vs Feature Depth; Enterprise-ready vs SMB-ready; Easy to implement vs Configurable
- Pick axes that show YOUR differentiation clearly
**Feature Gap Analysis:**
| Feature | You | Competitor A | Competitor B | Gap status |
|---------|-----|-------------|-------------|------------|
| [Feature] | ✅ | ✅ | ❌ | Your advantage |
| [Feature] | ❌ | ✅ | ✅ | Gap — roadmap? |
| [Feature] | ✅ | ❌ | ❌ | Moat |
| [Feature] | ❌ | ❌ | ✅ | Competitor B only |
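The gap-status column follows mechanically from who has each feature. A sketch approximating the table's classification; the feature matrix is hypothetical, and "Table stakes" / "Open space" are labels we add for combinations the table doesn't show:

```python
def gap_status(you_have, competitors_have):
    """Classify a feature by who has it, roughly as the gap table does."""
    any_comp = any(competitors_have)
    all_comp = all(competitors_have)
    if you_have and not any_comp:
        return "Moat"
    if you_have and not all_comp:
        return "Your advantage"
    if you_have:
        return "Table stakes"
    return "Gap — roadmap?" if any_comp else "Open space"

# Hypothetical feature matrix: (feature, you, [competitor A, competitor B])
features = [
    ("SSO", True, [True, False]),
    ("Audit log", False, [True, True]),
    ("AI summaries", True, [False, False]),
]
for name, you, comps in features:
    print(f"{name}: {gap_status(you, comps)}")
```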
### Layer 4: Output Formats
**For Sales (CRO):** Battlecards — one page per competitor, designed for pre-call prep.
See `templates/battlecard-template.md`
**For Marketing (CMO):** Positioning update — message shifts, new differentiators, claims to stop or start making.
**For Product (CPO):** Feature gap summary — what customers ask for that we don't have, what competitors ship, what to reprioritize.
**For CEO/Board:** Monthly competitive summary — 1-page: who moved, what it means, recommended responses.
### Layer 5: Intelligence Cadence
**Monthly (scheduled):**
- Review all tier-1 competitors (direct threats, top 3)
- Update battlecards with new intel
- Publish 1-page summary to leadership
**Triggered (event-based):**
- Competitor raises funding → assess implications within 48 hours
- Competitor launches major feature → product + sales response within 1 week
- Competitor poaches key customer → win/loss interview within 2 weeks
- Competitor changes pricing → analyze and respond within 1 week
**Quarterly:**
- Full competitive landscape review
- Update positioning map
- Refresh ICP competitive threat assessment
- Add/remove companies from tracking list
---
## Win/Loss Analysis
This is the highest-signal competitive data you have. Most companies do it too rarely.
**When to interview:**
- Every lost deal >$50K ACV
- Every churn >6 months tenure
- Every competitive win (learn why — it may not be what you think)
**Who conducts it:**
- NOT the AE who worked the deal (too close, prospect won't be candid)
- Customer success, product team, or external researcher
**Question structure:**
1. "Walk me through your evaluation process"
2. "Who else were you considering?"
3. "What were the top 3 criteria in your decision?"
4. "Where did [our product] fall short?"
5. "What was the deciding factor?"
6. "What would have changed your decision?"
**Aggregate findings monthly:**
- Win reasons (rank by frequency)
- Loss reasons (rank by frequency)
- Competitor win rates (by competitor, by segment)
- Patterns over time
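The monthly aggregation is a frequency count over interview records. A stdlib sketch with hypothetical deal data:

```python
from collections import Counter

# Hypothetical win/loss interview records: (outcome, competitor, top reason)
deals = [
    ("loss", "Acme", "missing SSO"),
    ("loss", "Acme", "price"),
    ("loss", "Beta", "missing SSO"),
    ("win", "Acme", "easier implementation"),
    ("win", "Beta", "easier implementation"),
]

loss_reasons = Counter(r for o, _, r in deals if o == "loss")
win_reasons = Counter(r for o, _, r in deals if o == "win")
losses_by_competitor = Counter(c for o, c, _ in deals if o == "loss")

print("Top loss reasons:", loss_reasons.most_common(3))
print("Top win reasons:", win_reasons.most_common(3))
print("Losses by competitor:", losses_by_competitor.most_common())
```

Re-running this each month over the growing record set surfaces the patterns-over-time view.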
---
## The Balance: Intelligence Without Obsession
**Signs you're over-tracking competitors:**
- Roadmap decisions are primarily driven by "they just shipped X"
- Team morale drops when competitors fundraise
- You're shipping features you don't believe in to match their checklist
- Pricing discussions always start with "well, they charge X"
**Signs you're under-tracking:**
- Your AEs get blindsided on calls
- Prospects know more about competitors than your team does
- You missed a major product launch until customers told you
- Your positioning hasn't changed in 12+ months despite market moves
**The right posture:**
- Know competitors well enough to win against them
- Don't let them set your agenda
- Your roadmap is led by customer problems, informed by competitive gaps
---
## Distributing Intelligence
| Audience | Format | Cadence | Owner |
|----------|--------|---------|-------|
| AEs + SDRs | Updated battlecards in CRM | Monthly + triggered | CRO |
| Product | Feature gap analysis | Quarterly | CPO |
| Marketing | Positioning brief | Quarterly | CMO |
| Leadership | 1-page competitive summary | Monthly | CEO/COO |
| Board | Competitive landscape slide | Quarterly | CEO |
**One source of truth:** All competitive intel lives in one place (Notion, Confluence, Salesforce). Avoid Slack-only distribution — it disappears.
---
## Red Flags in Competitive Intelligence
| Signal | What it means |
|--------|---------------|
| Competitor's win rate >50% in your core segment | Fundamental positioning problem, not sales problem |
| Same objection from 5+ deals: "competitor has X" | Feature gap that's real, not just optics |
| Competitor hired 10 engineers in your domain | Major product investment incoming |
| Competitor raised >$20M and targets your ICP | 12-month runway for them to compete hard |
| Prospects evaluate you to justify competitor decision | You're the "check box" — fix perception or segment |
## Integration with C-Suite Roles
| Intelligence Type | Feeds To | Output Format |
|------------------|----------|---------------|
| Product moves | CPO | Roadmap input, feature gap analysis |
| Pricing changes | CRO, CFO | Pricing response recommendations |
| Funding rounds | CEO, CFO | Strategic positioning update |
| Hiring signals | CHRO, CTO | Talent market intelligence |
| Customer wins/losses | CRO, CMO | Battlecard updates, positioning shifts |
| Marketing campaigns | CMO | Counter-positioning, channel intelligence |
## References
- `references/ci-playbook.md` — OSINT sources, win/loss framework, positioning map construction
- `templates/battlecard-template.md` — sales battlecard template

# Competitive Intelligence Playbook
## OSINT Sources for Competitor Tracking
### Free, Reliable Sources
**Company & Product:**
- **Their website** — pricing page (archive.org for history), product changelog, careers page
- **G2 / Capterra / Trustpilot** — customer reviews; filter by recency; read 1-star reviews carefully
- **LinkedIn** — job postings signal roadmap; company page for headcount trend; employees for leaks
- **GitHub** — open source activity; what they're building; engineering team size; tech stack
- **Crunchbase / PitchBook** (free tier) — funding history, investors, team changes
- **BuiltWith** — tech stack they use; signals about infrastructure maturity
**Messaging & Positioning:**
- **Facebook Ad Library** — see their current ad copy and creative; what messages they're testing
- **Google Keyword Planner** — which keywords they're bidding on
- **SEMrush / Ahrefs** (free trial or limited) — their organic keywords, backlink profile
- **Wayback Machine** — homepage evolution over time; when positioning shifted
- **Their blog** — content strategy reveals priorities and ICP assumptions
**News & Events:**
- **TechCrunch, VentureBeat** — funding announcements, major launches
- **Twitter/X / LinkedIn** — CEO + founders; direct signals about strategy
- **Podcast appearances** — founders talk more openly on podcasts than press releases
- **Job descriptions** — "Senior Engineer - Payments" means they're building payments
### Paid (Worth It for Tier-1 Competitors)
- **G2 Buyer Intent** — which prospects are researching your competitor right now
- **Bombora** — intent data for account-level research signals
- **PitchBook** — funding, investors, valuation estimates
- **Klue / Crayon / Kompyte** — dedicated CI platforms that aggregate automatically
### Primary Research (Best Signal)
- **Win/loss interviews** — the single highest-signal source (see below)
- **Talk to churned customers** — why did they switch? To whom?
- **Talk to their customers** — LinkedIn outreach; honest conversations
- **Industry events** — competitor presentations reveal roadmap; talk to attendees
- **Former employees** — LinkedIn; respectful outreach; no NDA violations
---
## Competitive Battlecard Format
A battlecard is a 1-page (or single screen) document for sales reps to reference before and during calls.
**Design principles:**
- Written for a rep with 2 minutes to prep, not a product manager
- Action-oriented: tells reps what to SAY, not just what to know
- Updated monthly at minimum; never more than 90 days old
### Battlecard Structure
```
COMPETITOR: [Name]
Last updated: [Date] | Owner: [Name]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
THE 30-SECOND SUMMARY
[One paragraph. Who they are, who they sell to, why they win.]
THEIR STRENGTHS (know these — don't dismiss them)
• [Strength 1] — what customers actually love about them
• [Strength 2]
• [Strength 3]
THEIR REAL WEAKNESSES (from win/loss data, not assumptions)
• [Weakness 1] — source: [customer quote / win/loss theme]
• [Weakness 2]
• [Weakness 3]
OUR DIFFERENTIATED ADVANTAGES
• [Advantage 1] — proof point: [metric/customer/case study]
• [Advantage 2] — proof point:
• [Advantage 3] — proof point:
COMMON OBJECTIONS + RESPONSES
"They have [feature] and you don't."
→ [Response. Acknowledge, reframe, redirect.]
"They're cheaper."
→ [Response with ROI angle or TCO comparison.]
"They're more established / bigger."
→ [Response. Size isn't always an advantage; turn it to your benefit.]
TRAP-SETTING QUESTIONS (ask these early to shift the eval criteria)
• "How important is [your differentiator] to your team?"
• "Have you looked at [pain point they create]?"
• "What happens to your workflow when [their known limitation occurs]?"
WHEN WE WIN
• [Segment or scenario where we almost always beat them]
• [Use case where we're clearly stronger]
WHEN WE LOSE (be honest)
• [Scenario where they're genuinely better — don't fight these battles]
• [Segment where they have structural advantages]
DO NOT SAY
• Don't claim [X] — it's not true and they'll call it out
• Don't say [Y] — prospect will already know it and it sounds desperate
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
---
## Win/Loss Analysis Framework
### Why Most Companies Do This Wrong
- They survey instead of interview (surveys get polite answers)
- The AE conducts it (too emotionally invested; prospect won't be candid)
- They do it 6 months after the decision (memory fades)
- They look for confirmation of what they believe
### The Right Process
**Timing:** Within 30 days of deal closed/lost/churned.
**Interviewer:** Customer success, product, or external researcher. Never the AE.
**Duration:** 30 minutes (budget 45).
**Incentive:** $100 gift card gets you 80% acceptance. Worth it.
**Interview Guide:**
*Opening:*
"I'm [name] from [company]. I'm not in sales — I'm trying to understand what drove your decision so we can improve. There's nothing you can say that will change the outcome. I just want honest feedback."
*Core questions:*
1. "Can you walk me through your evaluation process from the beginning?"
2. "Who were the key stakeholders involved in the decision?"
3. "What were the 3 most important criteria you were evaluating against?"
4. "Which vendors did you seriously consider?"
5. "Where did [company] fall short of your expectations?" (For losses)
OR "What tipped the decision in [company]'s favor?" (For wins)
6. "Was price a factor? How significant?"
7. "What would have had to be different for you to choose [us / the other option]?"
8. "Any advice for our team on how we handled the process?"
**Data aggregation:**
- Tag every response: [criterion], [competitor mentioned], [product gap], [sales process], [price], [trust/credibility]
- Monthly rollup: top 5 win reasons, top 5 loss reasons, competitor win rate
- Share with: CEO, CRO, CPO, CMO — not just sales
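The tagging and monthly rollup above can be sketched with the standard library alone, in the spirit of this repo's stdlib-only scripts. The record shape, tag names, and competitor names here are illustrative assumptions, not a prescribed schema:

```python
from collections import Counter

# Hypothetical interview records, tagged per the scheme above.
interviews = [
    {"outcome": "loss", "competitor": "Competitor X",
     "tags": ["price", "product gap"]},
    {"outcome": "win", "competitor": "Competitor X",
     "tags": ["trust/credibility", "sales process"]},
    {"outcome": "loss", "competitor": "Competitor Y",
     "tags": ["price"]},
]

def monthly_rollup(records, top_n=5):
    """Top win/loss reasons plus win rate per competitor."""
    win_tags, loss_tags = Counter(), Counter()
    wins, totals = Counter(), Counter()
    for r in records:
        (win_tags if r["outcome"] == "win" else loss_tags).update(r["tags"])
        totals[r["competitor"]] += 1
        wins[r["competitor"]] += r["outcome"] == "win"
    return {
        "top_win_reasons": win_tags.most_common(top_n),
        "top_loss_reasons": loss_tags.most_common(top_n),
        "win_rate_by_competitor": {c: wins[c] / totals[c] for c in totals},
    }
```

For the sample above, `monthly_rollup(interviews)["win_rate_by_competitor"]` gives `{"Competitor X": 0.5, "Competitor Y": 0.0}`.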
---
## Competitive Positioning Map Construction
A positioning map shows where you sit relative to competitors on 2 dimensions that BUYERS care about.
### Step 1: Choose Your Axes
- Pick dimensions that actually drive purchase decisions in your segment
- At least one axis should be where you win
- Avoid generic axes ("feature-rich vs. simple" tells you nothing)
**Good axis pairs:**
- Implementation time (days vs. months) × Customization depth
- Price point × Enterprise readiness
- Automation level × Human-in-the-loop control
- Time-to-value × Total cost of ownership
**Bad axes:**
- Quality (too vague)
- "Innovation" (unmeasurable)
- Any axis where all competitors cluster in the same spot
### Step 2: Place Competitors Objectively
- Use customer quotes and win/loss data to justify placement
- Don't place competitors where you WANT them — where they ACTUALLY are
- If you're unsure, ask 5 customers to place them
### Step 3: Find and Name Your White Space
- Where is there a position no competitor holds?
- Is that white space there because it's valuable (opportunity) or worthless (avoid)?
- Can you credibly occupy it?
### Step 4: Test Your Positioning
- Show the map to 5 prospects: "Does this match your perception?"
- Show it to 5 lost prospects: "Where would you place [the winner] and us?"
- Adjust until map matches buyer reality, not internal perception
---
## Intelligence Sharing Across Roles
### What Each Role Needs and When
**CRO (Sales):**
- Needs: Battlecards, win rates by competitor, competitor objections + responses
- Cadence: Updated battlecards monthly; triggered updates on major competitor moves
- Format: 1-pager per competitor in CRM, linked from deal record
**CMO (Marketing):**
- Needs: Messaging shifts, new claims, ad spend signals, keyword battles
- Cadence: Quarterly positioning review, triggered on major launches
- Format: Positioning brief with recommended response to messaging shifts
**CPO (Product):**
- Needs: Feature gap analysis, competitor roadmap signals (job postings, changelog), what we lose to
- Cadence: Monthly feature gap update, triggered on major launches
- Format: Feature comparison matrix + gap prioritization recommendation
**CTO (Engineering):**
- Needs: Tech stack signals, infrastructure approaches, scale they've achieved
- Cadence: Quarterly
- Format: Technical comparison notes, relevant for architectural decisions
**CEO:**
- Needs: Summary of threat landscape, recommended responses, board-level narrative
- Cadence: Monthly 1-pager + quarterly deep dive
- Format: 1-page brief: who moved, what it means, what we do
### The Single Source of Truth Rule
Keep all competitive intel in one place. A suggested setup:
- Notion database per competitor: profile, battlecard, changelog, win/loss notes
- Slack channel: `#competitive-intel` for real-time triggered alerts
- Monthly digest email to leadership
If it lives only in Slack, it disappears. If it lives only in a wiki that nobody reads, it doesn't matter. Combine both.
---
## How to Track Without Obsessing
**Set up the system, then let it run:**
- Google Alerts for competitor names + CEO names
- LinkedIn Saved Searches for their job postings
- Klue/Crayon if budget allows (automated aggregation)
- Monthly 60-minute competitive review meeting (not 4 hours)
**What to do when competitor makes a big move:**
1. Read the announcement objectively
2. Talk to 3 customers: "Did you see this? What do you think?"
3. Assess: does this change any buying criteria in your deals?
4. If yes: update battlecard and positioning within 1 week
5. If no: log it, move on
**The test:** After reviewing a competitor move, do you feel urgency to ship something? If yes, you're reacting. The right feeling is "noted — let's see if customers care."

# Sales Battlecard Template
**COMPETITOR:** [Name]
**Last updated:** [YYYY-MM-DD] | **Owner:** [Name]
**Win rate vs this competitor:** [X]% | **Deals tracked:** [N]
---
## 30-Second Summary
[Who they are. Who they target. Why they win. What they're known for. 3-4 sentences max.]
---
## Their Strengths
*Know these. Don't dismiss them. Prospects have already heard their pitch.*
- **[Strength]:** [What customers genuinely love; source if available]
- **[Strength]:** [Specific capability or trait]
- **[Strength]:** [Brand, market position, or ecosystem advantage]
---
## Their Real Weaknesses
*From win/loss data only — not wishful thinking.*
- **[Weakness]:** "[Customer quote]" — seen in [N] deals
- **[Weakness]:** [Documented limitation with evidence]
- **[Weakness]:** [Implementation, support, or pricing issue]
---
## Our Differentiated Advantages
*Must be real and provable. Each needs a proof point.*
- **[Advantage]:** [Proof: metric / customer quote / case study]
- **[Advantage]:** [Proof]
- **[Advantage]:** [Proof]
---
## Common Objections + Responses
**"They have [feature X] and you don't."**
> [Acknowledge. Reframe to your strength. Redirect to outcome.
> "You're right that they have X. What we've found is that customers who care most about X tend to also care about [Y], where we're significantly stronger. Can I show you [specific example]?"]
**"They're cheaper."**
> [Don't fight on price. Reframe to TCO or ROI.
> "They are lower in initial cost. Most customers find the total cost over 12 months is actually comparable when you factor in [implementation time / support costs / integrations]. Want to walk through that?"]
**"They've been around longer / they're more established."**
> [Reframe tenure as potential liability or irrelevance.
> "Their longevity means they have a lot of technical debt and a big customer base that pulls their roadmap in every direction. Our customers tell us that's exactly why they chose us — we move faster and we're laser-focused on [their specific use case]."]
**"[Competitor] is already used by [big customer they respect]."**
> [Name-drop your wins in their segment.
> "We work with [comparable logo]. Want me to connect you with their [role] to ask how they made the decision?"]
---
## Trap-Setting Questions
*Ask early in discovery to establish criteria that favor you.*
- "How important is [your key differentiator] to your workflow?"
- "What happens when [their known limitation] occurs? Has that been an issue before?"
- "How long does your team typically take to onboard a new tool?"
- "Who manages the integration work — do you have dedicated engineering resources for that?"
- "What does your current vendor do when you need support?"
---
## When We Win
- [Scenario or segment where we consistently beat them]
- [Use case that plays to our strengths]
- [Buyer profile that prefers us]
## When We Lose (Be Honest)
- [Scenario where they genuinely win — don't fight here]
- [Segment where their strengths matter more than ours]
---
## Do NOT Say
- ❌ Don't claim [X] — it's not accurate and they'll check
- ❌ Don't attack [Y] — it backfires and makes us look insecure
- ❌ Don't say "we're better" without specifics — be concrete
---
## Recent Intel
*Last 90 days only. Older than 90 days: archive.*
- [Date]: [What happened — funding, product launch, pricing change, key hire]
- [Date]: [Customer feedback from win/loss interview]
- [Date]: [Any notable market move]
---
*Battlecards are only useful if current. If this is >90 days old, flag to [owner] for update.*
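The 90-day freshness rule lends itself to a mechanical check. Here is a minimal stdlib sketch, assuming the `**Last updated:** YYYY-MM-DD` header format from this template; the function name and the fallback behavior are assumptions:

```python
from datetime import date, timedelta
import re

def battlecard_is_stale(text, today=None, max_age_days=90):
    """True if the card's 'Last updated' date is older than max_age_days."""
    today = today or date.today()
    m = re.search(r"Last updated:\*{0,2}\s*(\d{4})-(\d{2})-(\d{2})", text)
    if not m:
        return True  # no parseable date: treat as stale and flag for update
    updated = date(*map(int, m.groups()))
    return (today - updated) > timedelta(days=max_age_days)

card = "**COMPETITOR:** Example\n**Last updated:** 2025-01-10 | **Owner:** Jo"
print(battlecard_is_stale(card, today=date(2025, 6, 1)))  # → True (142 days old)
```

A check like this could run on a schedule and ping the owner whenever it returns `True`.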

---
name: context-engine
description: "Loads and manages company context for all C-suite advisor skills. Reads ~/.claude/company-context.md, detects stale context (>90 days), enriches context during conversations, and enforces privacy/anonymization rules before external API calls."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: orchestration
  updated: 2026-03-05
  frameworks: context-loading, anonymization, context-enrichment
---
# Company Context Engine
The memory layer for C-suite advisors. Every advisor skill loads this first. Context is what turns generic advice into specific insight.
## Keywords
company context, context loading, context engine, company profile, advisor context, stale context, context refresh, privacy, anonymization
---
## Load Protocol (Run at Start of Every C-Suite Session)
**Step 1 — Check for context file:** `~/.claude/company-context.md`
- Exists → proceed to Step 2
- Missing → prompt: *"Run /cs:setup to build your company context — it makes every advisor conversation significantly more useful."*
**Step 2 — Check staleness:** Read `Last updated` field.
- **< 90 days:** Load and proceed.
- **≥ 90 days:** Prompt: *"Your context is [N] days old. Quick 15-min refresh (/cs:update), or continue with what I have?"*
- If continue: load with `[STALE — last updated DATE]` noted internally.
**Step 3 — Parse into working memory.** Always active:
- Company stage (pre-PMF / scaling / optimizing)
- Founder archetype (product / sales / technical / operator)
- Current #1 challenge
- Runway (as risk signal — never share externally)
- Team size
- Unfair advantage
- 12-month target
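Steps 1 and 2 of the load protocol can be sketched as a small stdlib helper. The path and the `Last updated` field come from this protocol; the parsing details and return shape are assumptions:

```python
from datetime import date
from pathlib import Path
import re

CONTEXT_PATH = Path.home() / ".claude" / "company-context.md"

def load_status(path=CONTEXT_PATH, today=None, stale_after=90):
    """Classify the context file: ('missing'|'stale'|'fresh', age_in_days)."""
    today = today or date.today()
    if not path.exists():
        return "missing", None  # Step 1: prompt /cs:setup
    m = re.search(r"Last updated:\s*(\d{4})-(\d{2})-(\d{2})", path.read_text())
    if not m:
        return "stale", None    # unreadable date: treat as stale
    age = (today - date(*map(int, m.groups()))).days
    return ("stale" if age >= stale_after else "fresh"), age
```

`"missing"` maps to the `/cs:setup` prompt in Step 1; `"stale"` maps to the refresh prompt in Step 2.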
---
## Context Quality Signals
| Condition | Confidence | Action |
|-----------|-----------|--------|
| < 30 days, full interview | High | Use directly |
| 30–90 days, update done | Medium | Use, flag what may have changed |
| > 90 days | Low | Flag stale, prompt refresh |
| Key fields missing | Low | Ask in-session |
| No file | None | Prompt /cs:setup |
If Low: *"My context is [stale/incomplete] — I'm assuming [X]. Correct me if I'm wrong."*
---
## Context Enrichment
During conversations, you'll learn things not in the file. Capture them.
**Triggers:** New number or timeline revealed, key person mentioned, priority shift, constraint surfaces.
**Protocol:**
1. Note internally: `[CONTEXT UPDATE: {what was learned}]`
2. At session end: *"I picked up a few things to add to your context. Want me to update the file?"*
3. If yes: append to the relevant dimension, update timestamp.
**Never silently overwrite.** Always confirm before modifying the context file.
---
## Privacy Rules
### Never send externally
- Specific revenue or burn figures
- Customer names
- Employee names (unless publicly known)
- Investor names (unless public)
- Specific runway months
- Watch List contents
### Safe to use externally (with anonymization)
- Stage label
- Team size ranges (1–10, 10–50, 50–200+)
- Industry vertical
- Challenge category
- Market position descriptor
### Before any external API call or web search
Apply `references/anonymization-protocol.md`:
- Numbers → ranges or stage-relative descriptors
- Names → roles
- Revenue → percentages or stage labels
- Customers → "Customer A, B, C"
---
## Missing or Partial Context
Handle gracefully — never block the conversation.
- **Missing stage:** "Just to calibrate — are you still finding PMF or scaling what works?"
- **Missing financials:** Use stage + team size to infer. Note the gap.
- **Missing founder profile:** Infer from conversation style. Mark as inferred.
- **Multiple founders:** Context reflects the interviewee. Note co-founder perspective may differ.
---
## Required Context Fields
```
Required:
- Last updated (date)
- Company Identity → What we do
- Stage & Scale → Stage
- Founder Profile → Founder archetype
- Current Challenges → Priority #1
- Goals & Ambition → 12-month target
High-value optional:
- Unfair advantage
- Kill-shot risk
- Avoided decision
- Watch list
```
Missing required fields: note gaps, work around in session, ask in-session only when critical.
---
## References
- `references/anonymization-protocol.md` — detailed rules for stripping sensitive data before external calls

# Anonymization Protocol
Rules for stripping sensitive company data before any external API call, web search, or tool invocation that sends data outside the local environment.
---
## When This Protocol Applies
**Trigger:** Any time company context or conversation content will leave the local session.
Examples:
- Web search that includes company specifics
- External API call with company data in the payload
- Any tool call where conversation content is part of the request
**Does NOT apply to:**
- Local file reads/writes (`~/.claude/company-context.md`)
- In-session reasoning and analysis
- Generating advice or documents that stay local
---
## Rule 1: Financial Figures → Relative Ranges
Never send specific financial data externally.
| Raw data | Anonymized version |
|----------|-------------------|
| "$2.4M ARR" | "early-stage ARR (sub-$5M)" |
| "$180K MRR" | "growing MRR, Series A range" |
| "14 months runway" | "runway is healthy for stage" |
| "burn rate is $320K/month" | "burn rate is moderate for stage" |
| "raised $8M Series A" | "Series A company" |
| "customer LTV is $4,200" | "LTV is above industry average for segment" |
| "CAC is $680" | "CAC is in a sustainable range" |
**Rule:** No dollar amounts. No month counts for runway. Use stage-relative descriptors.
---
## Rule 2: Customer Names → Anonymized Labels
Never send customer or client names externally.
| Raw data | Anonymized version |
|----------|-------------------|
| "Acme Corp is our biggest customer" | "Customer A (largest account)" |
| "we're working with NHS England" | "a large public-sector customer" |
| "BMW, Volkswagen, and Stellantis" | "three major automotive OEMs" |
| "10 enterprise customers including..." | "10 enterprise customers" |
**Rule:** Use "Customer A/B/C" for named accounts, or describe by segment without naming.
---
## Rule 3: Revenue Figures → Percentage Changes or Stage Descriptors
Revenue trajectory is safer than absolute numbers.
| Raw data | Anonymized version |
|----------|-------------------|
| "growing from $1M to $2M ARR" | "2x revenue growth year-over-year" |
| "revenue dropped from $500K to $430K" | "revenue declined ~15% in the period" |
| "hit $10M ARR last quarter" | "crossed a significant ARR milestone" |
| "doing $50K MRR" | "pre-Series A revenue, strong growth trajectory" |
**Rule:** Percentages and directional signals (growing / declining / flat) are safe. Absolutes are not.
---
## Rule 4: Employee Names → Roles Only
Never send individual names externally.
| Raw data | Anonymized version |
|----------|-------------------|
| "Our CTO, Sarah Chen, is struggling" | "our CTO is struggling with the transition" |
| "James is the best performer on the team" | "our strongest performer is in the engineering lead role" |
| "we're about to let go of Michael" | "we're about to make a leadership change" |
| "the founding team is me, Alex, and Priya" | "a three-person founding team" |
**Exception:** Publicly known executives (CEO of a public company, named in press releases) can be referenced by name. If in doubt, use role.
---
## Rule 5: Investor Names → Generic Descriptors
| Raw data | Anonymized version |
|----------|-------------------|
| "Sequoia led our round" | "a top-tier VC led our round" |
| "our lead investor is pushing for an exit" | "pressure from investors toward exit" |
| "Y Combinator alumni" | "accelerator alumni" |
**Exception:** YC, Techstars, and similar well-known accelerators are commonly referenced and safe if the founder has publicly disclosed. When in doubt, omit.
---
## Rule 6: Location → Country or Region
| Raw data | Anonymized version |
|----------|-------------------|
| "Berlin-based startup" | "European startup" |
| "we're in San Francisco" | "US-based startup" |
| "expanding to Munich and Vienna" | "expanding in the DACH region" |
**Exception:** Location is less sensitive than financials. Use judgment — if it's on their website, it's fine.
---
## Anonymization Decision Tree
```
Before sending data externally:
1. Does it include a specific dollar amount?
→ YES: Replace with range or relative descriptor
2. Does it include a person's name?
→ YES: Replace with role only (unless publicly known)
3. Does it include a company or customer name?
→ YES: Replace with "Customer A" or segment descriptor
4. Does it include specific headcount or runway months?
→ YES: Replace with range (1–10, 10–50) or "healthy/tight/critical"
5. Does it include proprietary data, roadmap, or unreleased product info?
→ YES: Do not include. Reference only generically ("product expansion planned")
6. Is it publicly available information?
→ YES: Safe to send as-is
```
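The dollar-amount branch (step 1) is the most mechanical, so it can run as an automated pre-flight pass. This regex sketch illustrates Rule 1 only; the function names and the `[redacted]` labels are assumptions, and the other branches still need their own handling:

```python
import re

def strip_dollar_amounts(text):
    """Step 1: replace specific dollar figures with a generic label."""
    return re.sub(r"\$\d[\d,.]*[KkMmBb]?", "[amount redacted]", text)

def strip_runway_months(text):
    """Rule 1: no month counts for runway."""
    return re.sub(r"\b\d+\s+months?\s+(?:of\s+)?runway\b",
                  "runway [redacted]", text, flags=re.IGNORECASE)

raw = "We have $2.4M ARR and 14 months runway."
print(strip_runway_months(strip_dollar_amounts(raw)))
# → We have [amount redacted] ARR and runway [redacted].
```

In practice, substituting a stage-relative descriptor from the tables above ("early-stage ARR") keeps the external call useful, where a bare redaction label does not.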
---
## Required vs Optional Anonymization
### Required (always strip before external calls)
- Revenue figures (absolute)
- Burn rate (absolute)
- Runway (specific months)
- Customer names
- Employee names
- Investor names (unless public)
- Funding amounts (unless public)
### Optional (use judgment based on sensitivity)
- Industry vertical (usually fine)
- Company stage (usually fine)
- Team size ranges (usually fine)
- Geographic region (usually fine)
- General challenge category (usually fine)
---
## What to Do If You're Unsure
Default to stricter anonymization. The cost of over-anonymizing is slightly less useful external results. The cost of under-anonymizing is a privacy breach.
When in doubt: **remove it**.
---
## Audit Log (Internal Only)
When running external calls with company context, note internally:
```
[EXTERNAL CALL: {tool/API used}]
[ANONYMIZED: {fields stripped}]
[RETAINED: {fields kept and why}]
```
This is for internal reasoning only — never included in output to the founder.

---
name: coo-advisor
description: "Operations leadership for scaling companies. Process design, OKR execution, operational cadence, and scaling playbooks. Use when designing operations, setting up OKRs, building processes, scaling teams, analyzing bottlenecks, planning operational cadence, or when user mentions COO, operations, process improvement, OKRs, scaling, operational efficiency, or execution."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: coo-leadership
  updated: 2026-03-05
  python-tools: ops_efficiency_analyzer.py, okr_tracker.py
  frameworks: scaling-playbook, ops-cadence, process-frameworks
---
# COO Advisor
Operational frameworks and tools for turning strategy into execution, scaling processes, and building the organizational engine.
## Keywords
COO, chief operating officer, operations, operational excellence, process improvement, OKRs, objectives and key results, scaling, operational efficiency, execution, bottleneck analysis, process design, operational cadence, meeting cadence, org scaling, lean operations, continuous improvement
## Quick Start
```bash
python scripts/ops_efficiency_analyzer.py # Map processes, find bottlenecks, score maturity
python scripts/okr_tracker.py # Cascade OKRs, track progress, flag at-risk items
```
## Core Responsibilities
### 1. Strategy Execution
The CEO sets direction. The COO makes it happen. Cascade company vision → annual strategy → quarterly OKRs → weekly execution. See `references/ops_cadence.md` for full OKR cascade framework.
### 2. Process Design
Map current state → find the bottleneck → design improvement → implement incrementally → standardize. See `references/process_frameworks.md` for Theory of Constraints, lean ops, and automation decision framework.
**Process Maturity Scale:**
| Level | Name | Signal |
|-------|------|--------|
| 1 | Ad hoc | Different every time |
| 2 | Defined | Written but not followed |
| 3 | Measured | KPIs tracked |
| 4 | Managed | Data-driven improvement |
| 5 | Optimized | Continuous improvement loops |
### 3. Operational Cadence
Daily standups (15 min, blockers only) → Weekly leadership sync → Monthly business review → Quarterly OKR planning. See `references/ops_cadence.md` for full templates.
### 4. Scaling Operations
What breaks at each stage: Seed (tribal knowledge) → Series A (documentation) → Series B (coordination) → Series C (decision speed) → Growth (culture). See `references/scaling_playbook.md` for detailed playbook per stage.
### 5. Cross-Functional Coordination
RACI for key decisions. Escalation framework: Team lead → Dept head → COO → CEO based on impact scope.
## Key Questions a COO Asks
- "What's the bottleneck? Not what's annoying — what limits throughput."
- "How many manual steps? Which break at 3x volume?"
- "Who's the single point of failure?"
- "Can every team articulate how their work connects to company goals?"
- "The same blocker appeared 3 weeks in a row. Why isn't it fixed?"
## Operational Metrics
| Category | Metric | Target |
|----------|--------|--------|
| Execution | OKR progress (% on track) | > 70% |
| Execution | Quarterly goals hit rate | > 80% |
| Speed | Decision cycle time | < 48 hours |
| Quality | Customer-facing incidents | < 2/month |
| Efficiency | Revenue per employee | Track trend |
| Efficiency | Burn multiple | < 2x |
| People | Regrettable attrition | < 10% |
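The targets in this table can be checked mechanically, in the style of this role's stdlib scripts. The metric keys and the input data shape are illustrative assumptions; the thresholds and their directions mirror the table (revenue per employee is trend-only, so it has no threshold here):

```python
# Targets from the table above as (threshold, direction) pairs.
TARGETS = {
    "okr_on_track_pct": (70, "min"),       # > 70%
    "quarterly_goals_hit_pct": (80, "min"),
    "decision_cycle_hours": (48, "max"),   # < 48 hours
    "incidents_per_month": (2, "max"),
    "burn_multiple": (2.0, "max"),
    "regrettable_attrition_pct": (10, "max"),
}

def flag_metrics(actuals):
    """Return the metrics that miss their target."""
    missed = []
    for name, value in actuals.items():
        if name not in TARGETS:
            continue  # trend-only metrics pass through
        threshold, direction = TARGETS[name]
        ok = value > threshold if direction == "min" else value < threshold
        if not ok:
            missed.append(name)
    return missed

print(flag_metrics({"okr_on_track_pct": 65, "burn_multiple": 1.4,
                    "decision_cycle_hours": 72}))
# → ['okr_on_track_pct', 'decision_cycle_hours']
```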
## Red Flags
- OKRs consistently 1.0 (not ambitious) or < 0.3 (disconnected from reality)
- Teams can't explain how their work maps to company goals
- Leadership meetings produce no action items two weeks running
- Same blocker in three consecutive syncs
- Process exists but nobody follows it
- Departments optimize local metrics at expense of company metrics
## Integration with Other C-Suite Roles
| When... | COO works with... | To... |
|---------|-------------------|-------|
| Strategy shifts | CEO | Translate direction into ops plan |
| Roadmap changes | CPO + CTO | Assess operational impact |
| Revenue targets change | CRO | Adjust capacity planning |
| Budget constraints | CFO | Find efficiency gains |
| Hiring plans | CHRO | Align headcount with ops needs |
| Security incidents | CISO | Coordinate response |
## Detailed References
- `references/scaling_playbook.md` — what changes at each growth stage
- `references/ops_cadence.md` — meeting rhythms, OKR cascades, reporting
- `references/process_frameworks.md` — lean ops, TOC, automation decisions
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- Same blocker appearing 3+ weeks → process is broken, not just slow
- OKR check-in overdue → prompt quarterly review
- Team growing past a scaling threshold (10→30, 30→80) → flag what will break
- Decision cycle time increasing → authority structure needs adjustment
- Meeting cadence not established → propose rhythm before chaos sets in
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Set up OKRs" | Cascaded OKR framework (company → dept → team) |
| "We're scaling fast" | Scaling readiness report with what breaks next |
| "Our process is broken" | Process map with bottleneck identified + fix plan |
| "How efficient are we?" | Ops efficiency scorecard with maturity ratings |
| "Design our meeting cadence" | Full cadence template (daily → quarterly) |
## Reasoning Technique: Step by Step
Map processes sequentially. Identify each step, handoff, and decision point. Find the bottleneck using throughput analysis. Propose improvements one step at a time.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`

# Operational Cadence: Meetings, Async, Decisions, and Reporting
> The rhythm of your company determines its output. Bad cadence = constant context-switching, decisions made without information, and a leadership team that's always reactive.
---
## Philosophy
**Meetings are a tax.** Every hour in a meeting is an hour not spent building, selling, or serving customers. A good cadence minimizes meeting time while ensuring the right people have the right information at the right time.
**Async is default, sync is exception.** Most information sharing and routine updates should happen in writing. Reserve synchronous time for things that genuinely require real-time discussion: decisions with significant disagreement, complex problem-solving, relationship-building.
**Cadence serves strategy.** The calendar reflects priorities. If you're doing monthly all-hands but weekly status updates, you've inverted the importance.
---
## Meeting Cadence Templates
### Daily Operations
#### Daily Standup (Engineering / Product Teams)
**Format:** Async-first (Slack/Loom); sync only if blocked
**Sync duration:** 15 minutes max
**Participants:** Team (5–10 people)
**Facilitator:** Team lead or rotating
```
ASYNC FORMAT (post in #standup channel):
Yesterday: [What I completed]
Today: [What I'm working on]
Blocked: [Anything blocking me — tag the person who can unblock]
```
**Rules:**
- No status reporting in sync standup if everyone can read the async update
- Standups are not problem-solving sessions — take issues offline
- Skip standup if the team has a full-team session that day
- Kill standup if the team consistently has nothing blocked; replace with async
#### Daily Leadership Check-in (COO)
**Format:** Async only — read, don't meet
**Time:** 8:00–8:30 AM
**COO morning read:**
1. Yesterday's key metrics dashboard (5 min)
2. Overnight Slack/email escalations (5 min)
3. Today's decisions needed list (5 min)
4. Any P0/P1 incidents (check status page + on-call logs)
---
### Weekly Cadence
#### Leadership Sync (Weekly)
**Duration:** 60–90 minutes
**Participants:** C-suite + VP level
**Owner:** COO (or CEO)
**Day/Time:** Monday or Tuesday, morning
```
AGENDA TEMPLATE:
00:00–10:00 Metrics pulse (pre-read required — no presenting charts)
- Revenue: ACV, pipeline, churn delta
- Product: shipped last week, blockers this week
- Engineering: incidents, velocity
- CS: escalations, NPS delta
- People: open reqs, attrition flag
10:00–45:00 Priority items (submitted in advance, max 3)
- Item 1: [Owner: Name] [Decision needed / FYI / Input needed]
- Item 2: [Owner: Name]
- Item 3: [Owner: Name]
45:00–60:00 Parking lot / open
- Anything not covered
- Next week flagging
```
**Pre-meeting requirements:**
- Metrics dashboard updated by EOD Friday
- Priority items submitted by Sunday 6 PM
- Anyone who hasn't read the pre-read gets no floor time
**Output:** Decision log updated with outcomes, action items assigned in tracking system
#### 1:1 (Manager ↔ Direct Report)
**Duration:** 30–45 minutes
**Frequency:** Weekly (skip-levels: bi-weekly)
**Owner:** Report (the direct report sets agenda)
```
1:1 STRUCTURE:
[5 min] What's on your mind / temperature check
[15 min] Their agenda — what they want to discuss
[10 min] Manager agenda — feedback, context, decisions
[5 min] Action items review from last week
```
**1:1 anti-patterns to eliminate:**
- Using 1:1 for status updates (that's what standups are for)
- Manager dominating the agenda
- Skipping because "things are fine"
- No written record of what was discussed
**Private 1:1 doc:** Every manager/report pair maintains a shared doc with running notes, action items, and career development thread.
#### Cross-Functional Weekly Sync
**Duration:** 45 minutes
**Participants:** 2–4 team leads with shared dependencies
**Examples:** Product + Engineering, Sales + CS, Marketing + Sales
```
AGENDA:
00–10 Shared metrics (things both teams care about)
10–30 Active collaboration items — what needs coordination this week
30–40 Blockers + dependencies (what do I need from your team?)
40–45 Upcoming: what's coming that the other team should know about
```
---
### Monthly Cadence
#### All-Hands / Town Hall
**Duration:** 60–90 minutes
**Participants:** Entire company
**Owner:** CEO + functional heads
**Format:** In-person preferred; video if distributed
```
ALL-HANDS AGENDA (60 min version):
00–05 Opening — CEO sets the tone
05–20 Business update
- Where we are vs. plan (actuals vs. budget)
- Key wins and learning moments from last month
- What we're focused on this month
20–40 Functional spotlights (2 functions, 10 min each)
- What we shipped / what we did
- What we learned
- What's next
40–55 Open Q&A (no screened questions — take everything)
55–60 Closing
ALL-HANDS PREP CHECKLIST:
□ CEO talking points reviewed 48h in advance
□ Metrics slides reviewed by Finance for accuracy
□ Q&A prep — leadership team briefs on likely questions
□ Recording setup confirmed
□ Async option for timezones (recording posted within 2h)
□ Action items from Q&A captured and published within 24h
```
#### Monthly Business Review (MBR)
**Duration:** 2 hours
**Participants:** Leadership team
**Owner:** COO
```
MBR AGENDA:
00–20 Financial review (Finance presents)
- Revenue vs. plan, by segment
- Burn rate, runway
- Headcount actual vs. plan
- Key cost drivers
20–60 Functional reviews (each VP, 8 min each)
Standard template per function:
- Metrics: [3 key metrics vs. prior month vs. plan]
- Wins: [top 2-3 wins]
- Gaps: [where we missed and why]
- Next 30 days: [top 3 priorities]
60–90 Strategic topics (pre-submitted)
- Items requiring cross-functional decision
- Risks or issues needing leadership visibility
90–110 Decisions and action items
- Document decisions made
- Assign owners and deadlines
110–120 Retrospective
- What's working in how we operate?
- What needs to change?
```
**MBR pre-read package** (published 48h before):
- Financial summary (1 page)
- Each function's 1-pager (see template below)
```
FUNCTIONAL 1-PAGER TEMPLATE:
Function: [Name] Month: [Month Year]
Owner: [VP Name]
TOP METRICS:
| Metric | Target | Actual | vs. LM | vs. Plan |
|--------|--------|--------|--------|----------|
| [M1] | | | | |
| [M2] | | | | |
| [M3] | | | | |
WINS (2-3 bullets):
GAPS (be honest — no spin):
DEPENDENCIES (what I need from other teams):
NEXT 30 DAYS (top 3 priorities):
1.
2.
3.
```
---
### Quarterly Cadence
#### Quarterly Business Review (QBR)
**Duration:** Half day (4 hours)
**Participants:** Leadership team + key functional leads
**Owner:** CEO + COO
```
QBR AGENDA (4 hours):
PART 1: Look back (90 min)
- CEO: Business context and narrative (15 min)
- Finance: Full quarter P&L review (20 min)
- Each function: 10-min review against OKRs
Format: Hit/Miss/Partial for each objective + root cause
PART 2: Look forward (90 min)
- Product/Engineering: What ships next quarter (20 min)
- Sales/Marketing: Pipeline and demand plan (20 min)
- People: Headcount plan and key hires (15 min)
- Finance: Budget and forecast (20 min)
- Cross-functional dependencies (15 min)
PART 3: Strategic discussion (60 min)
- 1–2 strategic topics requiring deep discussion
- Pre-submitted and pre-read
PART 4: OKR setting for next quarter (30 min)
- Draft OKRs reviewed and challenged
- Final OKRs locked or assigned for next week finalization
```
#### Quarterly Leadership Off-site
**Duration:** 1–2 days (Series B+)
**Participants:** C-suite + VPs
**Purpose:** Strategy alignment, relationship building, hard conversations
**Off-site agenda principles:**
- No laptops during sessions (phones away)
- At least 50% discussion, max 50% presentation
- Include one session on how the leadership team is functioning (not just what the business is doing)
- Output: 1-page summary of decisions and commitments shared with the company
---
### Annual Cadence
#### Annual Planning Cycle
**Timeline:** Start 8–10 weeks before fiscal year end
```
ANNUAL PLANNING TIMELINE:
Week -10: Company strategic priorities draft (CEO + COO)
Week -8: Revenue model + market analysis (Finance + Sales)
Week -7: Functional goal-setting begins
Week -6: Headcount planning by function
Week -5: Draft plans reviewed by COO
Week -4: Cross-functional dependency alignment
Week -3: Budget finalization
Week -2: Board review (if applicable)
Week -1: Final company OKRs published
Week 0: Year kick-off all-hands
```
#### Year Kick-off All-Hands
**Duration:** 2–4 hours
**Participants:** Entire company
**Purpose:** Align entire company on year strategy and goals
```
KICK-OFF AGENDA:
- Last year retrospective: What we accomplished, what we learned
- Market context: Why now, why us
- Year strategy: The 2-3 things that matter most
- OKRs: Company-level goals, each function's goals
- Culture: How we'll work together
- Q&A: Open and honest
```
---
## Async Communication Frameworks
### The Writing-First Culture
All communication defaults to written unless real-time is genuinely necessary. This is how you scale decision-making without scaling meetings.
**Written first means:**
- Decisions are documented before they're communicated
- Updates are published before questions are asked
- Problems are described before solutions are proposed
### Slack Channel Architecture
```
REQUIRED CHANNELS:
#announcements Read-only. Major company announcements only.
#general Company-wide conversation
#leadership-public Leadership decisions visible to all (transparency)
#incidents P0/P1 incidents only. Auto-resolved when incident is closed.
#metrics Automated metric updates. No discussion here.
#wins Customer wins, team wins. Culture channel.
FUNCTIONAL CHANNELS:
#engineering, #product, #sales, #marketing, #cs, #people, #finance
PROJECT CHANNELS:
#proj-[name] Temporary. Archive when project ships.
DECISION CHANNELS:
#decisions All cross-team decisions logged here with context
```
**Anti-patterns to eliminate:**
- DMs for work decisions (decisions belong in channels, visible to team)
- @channel abuse (train people — this means everyone stops what they're doing)
- Thread avoidance (all replies go in threads, period)
- Multiple channels for same function (merge aggressively)
### Async Decision Template
When a decision needs input but doesn't require a meeting:
```
DECISION REQUEST (post in #decisions):
**Context:** [1-3 sentences on why this decision is needed]
**Options considered:**
A) [Option A] — Pros: X. Cons: Y.
B) [Option B] — Pros: X. Cons: Y.
**Recommendation:** [Your recommendation and why]
**Input needed from:** @person1, @person2 (tag specific people)
**Decide by:** [Date/Time — give at least 24 hours]
**If no response:** [Default action if no input received]
```
### Loom / Video for Async Communication
Use async video for:
- Explaining complex technical architecture
- Walking through a design or document with context
- Giving feedback that needs tone/nuance
- Team updates that would otherwise be a meeting
**Loom best practices:**
- Keep under 5 minutes; break up anything longer
- Always include a summary comment with key points
- Ask viewers to leave timestamp comments for specific questions
---
## Decision-Making Frameworks
### RAPID
The most practical decision-making framework for startups scaling to enterprises.
| Role | Meaning | Responsibility |
|------|---------|---------------|
| **R** — Recommend | Proposes decision with analysis | Does the work, gathers input, makes recommendation |
| **A** — Agree | Must agree before decision is final | Has veto power; should be used sparingly |
| **P** — Perform | Executes the decision | Consulted during recommendation phase |
| **I** — Input | Consulted for perspective | Shares point of view; not binding |
| **D** — Decide | Makes the final call | One person only — groups don't decide |
**How to use RAPID:**
1. For every significant decision, explicitly assign R, A, P, I, D before work begins
2. The D role is always one person — never a committee
3. Agree (A) roles should be limited to 2–3 people maximum; more = paralysis
4. Post the RAPID in the decision doc so everyone knows the structure
**Example application:**
```
Decision: Migrate from PostgreSQL to distributed database
R: VP Engineering
A: CTO, COO (for cost implications)
P: Infrastructure team
I: Product leads, Finance
D: CTO
```
### RACI
Better for ongoing processes than one-time decisions. Use RACI for recurring operational responsibilities.
| Role | Meaning |
|------|---------|
| **R** — Responsible | Does the work |
| **A** — Accountable | Owns the outcome; one person only |
| **C** — Consulted | Input before decisions/actions |
| **I** — Informed | Told of decisions/actions after the fact |
**RACI matrix template:**
```
PROCESS: Customer Escalation Handling
Task                    | CS Lead | VP CS | Eng Lead | CEO
------------------------|---------|-------|----------|----------
Receive escalation      | R       | I     | I        | -
Diagnose issue          | R       | C     | C        | -
Communicate to customer | R       | A     | -        | I (major)
Resolve technical issue | C       | -     | R        | -
Close escalation        | R       | A     | I        | -
Post-mortem (P0/P1)     | C       | A     | R        | I
```
**Common RACI mistakes:**
- Multiple A roles (breaks accountability)
- R and A always same person (defeats the purpose)
- Too many C roles (everyone's consulted, nothing moves)
- Not distinguishing C from I (different obligations)
### DRI (Directly Responsible Individual)
Apple's framework; used widely in fast-moving tech companies. Simpler than RAPID/RACI for internal use.
**The rule:** Every project, deliverable, and decision has exactly one DRI. The DRI is the person who gets credit when it succeeds and gets called on when it fails. No DRI = no accountability.
**DRI requirements:**
- Listed by name in every project brief
- Has authority to make decisions within scope
- Is responsible for communicating status
- Cannot blame lack of resources — their job is to escalate when blocked
**DRI vs. RACI:** Use DRI for project ownership and RACI for process ownership. They complement each other.
### Decision Log
Every significant decision gets logged. Significant = affects more than one team, costs more than $10K, or is difficult to reverse.
```
DECISION LOG FORMAT:
Date: [YYYY-MM-DD]
Decision: [One sentence summary]
Context: [Why was this decision needed? What was the situation?]
Options considered: [What alternatives were evaluated?]
Decision made: [What was decided?]
Rationale: [Why this option?]
Owner: [Who made the final call?]
Reversible: [Yes / No / Partially]
Review date: [When should this decision be revisited?]
Outcome: [Filled in later — what actually happened?]
```
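The log format above maps naturally onto a small typed record; a minimal Python sketch (the class and field names are chosen here for illustration and are not part of the playbook):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Decision:
    """One entry in the decision log, mirroring the format above."""
    decision: str               # one-sentence summary
    context: str                # why the decision was needed
    options_considered: list    # alternatives evaluated
    rationale: str              # why this option
    owner: str                  # who made the final call
    reversible: str             # "Yes" / "No" / "Partially"
    decided_on: date
    review_date: date           # when to revisit
    outcome: str = ""           # filled in later
```

Keeping entries structured like this makes it trivial to filter the log (e.g. all irreversible decisions due for review this quarter).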
---
## Reporting Templates
### Weekly CEO/COO Dashboard
```
COMPANY HEALTH — WEEK OF [DATE]
REVENUE
ARR: $[X]M (vs. plan: +/-X%, vs. LW: +/-X%)
New ARR this week: $[X]K
Churned ARR: $[X]K
Pipeline (90-day): $[X]M
PRODUCT
Shipped this week: [Brief list]
P0/P1 incidents: [Count] — [1-line summary if any]
Deploy frequency: [X per week]
CUSTOMER
Active customers: [X]
NPS (rolling 30d): [X]
Open escalations: [X] (P0: [X], P1: [X])
PEOPLE
Headcount: [X] (vs. plan: [X])
Open reqs: [X]
Attrition (30d): [X]
CASH
Cash on hand: $[X]M
Burn (last 30d): $[X]M
Runway: [X] months
🔴 ISSUES (needs leadership attention):
🟡 WATCH (monitor, no action yet):
🟢 WINS:
```
### Monthly Investor/Board Update
```
[COMPANY NAME] — MONTHLY UPDATE — [MONTH YEAR]
THE HEADLINE
[2-3 sentences: what was the defining story of this month?]
KEY METRICS
| Metric | [Month] | vs. Prior | vs. Plan |
|--------|---------|-----------|----------|
| ARR | | | |
| MRR Added | | | |
| Churn | | | |
| NRR | | | |
| Burn | | | |
| Runway | | | |
WINS
1. [Specific, concrete win with numbers]
2. [Second win]
3. [Third win]
CHALLENGES
1. [Honest description of challenge + what you're doing about it]
2. [Second challenge]
KEY DECISIONS MADE
• [Decision + brief rationale]
ASKS FROM INVESTORS
• [Specific ask with context — intros, advice, etc.]
NEXT MONTH PRIORITIES
1.
2.
3.
```
### Quarterly OKR Progress Report
```
Q[X] OKR PROGRESS — [COMPANY NAME]
SCORING GUIDE:
🟢 On track (>70% confidence of hitting target)
🟡 At risk (50-70% confidence)
🔴 Off track (<50% confidence)
COMPANY OBJECTIVES:
O1: [Objective title]
KR1.1: [Key Result] ............... [X]% 🟢
KR1.2: [Key Result] ............... [X]% 🟡
Objective confidence: 🟢 | Notes: [1 line]
O2: [Objective title]
KR2.1: [Key Result] ............... [X]% 🔴
KR2.2: [Key Result] ............... [X]% 🟢
Objective confidence: 🟡 | Notes: [1 line]
FUNCTIONAL OBJECTIVES:
[Same format per function]
OVERALL QUARTER HEALTH: 🟡
Summary: [2-3 sentences on overall trajectory]
TOP 3 ACTIONS TO GET BACK ON TRACK:
1. [Action + owner + deadline]
2.
3.
```
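The scoring guide maps confidence to status mechanically, so it can be sketched as a one-line-per-band function (thresholds come from the guide above; the handling of exactly 50% and 70% is an assumption):

```python
def okr_status(confidence):
    """Map confidence of hitting the target (0.0-1.0) to traffic-light status."""
    if confidence > 0.7:
        return "on track"    # green
    if confidence >= 0.5:
        return "at risk"     # yellow
    return "off track"       # red
```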
---
## Cadence Anti-Patterns to Eliminate
| Anti-Pattern | What It Looks Like | Fix |
|---|---|---|
| **Meeting creep** | Calendar blocks added over time, never removed | Quarterly calendar audit — delete all recurring meetings, re-add only what's essential |
| **Update theater** | Meetings where people read from slides | Require pre-reads; ban in-meeting presentations |
| **Decision avoidance** | Topics recur across multiple meetings | Assign a D (decider) before the meeting. If no D, don't hold the meeting. |
| **Sync for async** | Using meetings for information sharing | Move updates to Loom/Slack; protect sync time for discussion |
| **HIPPO problem** | Highest-paid person in room wins | Structure discussions so data is presented before opinions |
| **Retrospective theater** | Retros with no action items | Every retro must produce ≥1 committed change |
| **Silent agenda** | Agenda not shared until meeting starts | Agendas published 24h in advance, required reading |
---
*Cadence framework synthesized from Amazon's PR/FAQ culture, Google's OKR playbook, GitLab's remote work handbook, and operational patterns from 50+ Series A–C companies.*

# Process Frameworks for Startup Operations
> Theory of Constraints, Lean, process mapping, automation, and change management — applied to real startup contexts, not factory floors.
---
## Part 1: Theory of Constraints (TOC) Applied to Startups
### What TOC Actually Says
Eliyahu Goldratt's core insight: **every system has exactly one constraint that limits throughput.** Improving anything other than the constraint is waste. The goal isn't to optimize every function — it's to identify the single bottleneck and exploit it until a new constraint emerges.
**The Five Focusing Steps:**
1. **Identify** the constraint — what limits the system's output?
2. **Exploit** it — get maximum output from the constraint without adding resources
3. **Subordinate** everything else — other activities serve the constraint's needs
4. **Elevate** it — add resources to increase constraint capacity
5. **Repeat** — when the constraint moves, find the new one
### Finding the Constraint in Your Startup
The constraint is almost never where people think it is. Sales thinks it's Marketing. Engineering thinks it's Product. Everyone thinks it's someone else.
**Method:** Map your value stream (see Part 3), measure throughput at each step, find the step with the lowest throughput or the highest queue in front of it.
**Common startup constraints by stage:**
| Stage | Most Common Constraint | Why |
|-------|----------------------|-----|
| Pre-PMF | Learning speed | Not enough customer feedback cycles |
| Series A | Sales capacity | Demand > sales team's ability to close |
| Series B | Engineering velocity | Product backlog growing faster than shipping rate |
| Series C | Onboarding throughput | New customer volume > CS team's onboarding capacity |
| Growth | Hiring throughput | Headcount plan > recruiting team's capacity |
### Applying TOC to Product Development
**The five visible constraints in product development:**
**1. Requirements clarity**
*Symptom:* Engineering asks for clarification mid-sprint. Tickets re-opened. Scope creep.
*Fix:* Never pull a story into sprint until acceptance criteria are written and reviewed. Product manager must be available same-day for clarification.
**2. Review and approval bottleneck**
*Symptom:* PRs sit unreviewed for >24 hours. Deploys waiting for sign-off.
*Fix:* Code review SLA: 2-hour response for small PRs (<100 lines), 4-hour for medium. Design reviews: 24-hour turnaround. Anyone waiting >SLA can escalate to manager.
**3. QA throughput**
*Symptom:* "Done" pile grows faster than QA can test. Release day crunch.
*Fix:* QA is pulled into sprint planning and sprint review. Testing starts as features finish, not all at end. Automated test coverage as a sprint exit criterion.
**4. Deployment pipeline speed**
*Symptom:* Deploy takes 45+ minutes. Engineers wait. Hotfix urgency causes dangerous shortcuts.
*Fix:* Measure deploy time weekly. Set target (10 min for most apps). Build optimization into engineering roadmap as a real ticket.
**5. Feedback loop latency**
*Symptom:* You ship features and don't know if they worked for weeks.
*Fix:* Every shipped feature has instrumented metrics reviewed within 5 business days. If no metrics exist, feature doesn't ship.
### Applying TOC to Sales
**The sales pipeline as a system of constraints:**
```
Lead generation → Qualification → Demo → Proposal → Negotiation → Close
[X] → [X] → [X] → [X] → [X] → [X]
Measure: conversion rate and time-in-stage at each step.
The constraint is the step with the LOWEST conversion rate × volume.
```
**Example diagnosis:**
- Lead → Qualified: 40% conversion, 2 days
- Qualified → Demo: 80% conversion, 5 days ← High conversion but slow (queue)
- Demo → Proposal: 60% conversion, 3 days
- Proposal → Close: 30% conversion, 14 days ← **Constraint** (lowest conversion)
*Diagnosis:* Proposals are being sent to wrong buyers or proposals aren't compelling. Fix: proposal template audit, champion coaching, economic buyer access earlier in process.
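The diagnosis above, pick the stage with the lowest conversion rate, takes only a few lines of Python; stage names and numbers mirror the example, so this is an illustration rather than a shipped tool:

```python
def find_constraint(stages):
    """Return the pipeline stage with the lowest conversion rate.

    stages: list of (name, conversion_rate, avg_days_in_stage) tuples.
    """
    return min(stages, key=lambda s: s[1])

pipeline = [
    ("Lead -> Qualified", 0.40, 2),
    ("Qualified -> Demo", 0.80, 5),
    ("Demo -> Proposal",  0.60, 3),
    ("Proposal -> Close", 0.30, 14),
]

name, rate, days = find_constraint(pipeline)
print(f"Constraint: {name} ({rate:.0%} conversion, {days} days in stage)")
```

A fuller version would also weight by volume and flag long queues (like Qualified → Demo above), but the lowest-conversion heuristic is where diagnosis starts.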
---
## Part 2: Lean Operations for Tech Companies
### The Lean Toolkit (What's Actually Useful)
Lean Manufacturing was designed for car factories. Most of the original toolkit doesn't apply to software. Here's what does:
**Value Stream Mapping** — Map the full flow of work from customer request to delivery. Label value-add time vs. wait time. Most processes are 90% wait time and 10% actual work.
**5S** — Sort, Set in order, Shine, Standardize, Sustain. Applied to digital work:
- *Sort:* Delete unused tools, channels, documents
- *Set in order:* Organize information architecture so things are findable
- *Shine:* Regular cleanup sprints (documentation, tech debt, tool hygiene)
- *Standardize:* Templates, conventions, naming standards
- *Sustain:* Assign owners; entropy is the default state
**Pull vs. Push** — Don't push work onto people's plates. Pull = people take work when they have capacity. Push = work is assigned to people regardless of capacity. Most companies push; lean companies pull.
**Kaizen** — Continuous small improvements. Build this into your operating rhythm:
- Weekly: each team identifies one small improvement to their process
- Monthly: review and close out improvement items
- Quarterly: broader process retrospective
**Waste Categories (TIMWOODS) — Applied to Operations:**
| Waste Type | Factory Example | Startup Example |
|-----------|----------------|-----------------|
| **T**ransportation | Moving parts | Handing off work between tools with no integration |
| **I**nventory | Parts stockpile | Unreviewed PRs, unworked backlog items, unread reports |
| **M**otion | Worker movement | Context switching between apps / communication channels |
| **W**aiting | Machine idle | Waiting for approvals, waiting for data, waiting for decisions |
| **O**verproduction | Making more than needed | Features built that weren't validated |
| **O**verprocessing | Extra steps | 6-step approval for $200 purchase |
| **D**efects | Rework | Bug fixes, incorrect specs, miscommunicated requirements |
| **S**kills | Underutilized talent | Senior engineers doing manual QA |
**Exercise:** For your most important process, walk through each waste category and estimate hours/week wasted. This exercise typically reveals 20–40% improvement opportunities in the first pass.
### Cycle Time and Lead Time
**Lead time:** Time from when a request enters the system to when it exits (customer perspective).
**Cycle time:** Time a unit of work is actively being worked on (team perspective).
```
Lead Time = Cycle Time + Wait Time
```
Most teams only measure cycle time. Customers only experience lead time. The gap between the two is pure waste.
**Measuring in your context:**
- Engineering: Lead time = ticket created → in production. Cycle time = in progress → PR merged.
- Sales: Lead time = lead created → closed won. Cycle time = demo completed → proposal sent.
- CS: Lead time = ticket opened → customer confirms resolved. Cycle time = ticket in-progress → resolution sent.
**Improvement pattern:**
1. Measure lead time (not just cycle time)
2. Find the steps where tickets sit waiting
3. Remove the wait (automation, reduced approval layers, clearer handoff criteria)
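A minimal sketch of the lead/cycle/wait split, assuming a simple created → started → done ticket lifecycle (the function and field names are illustrative):

```python
from datetime import datetime

def ticket_times(created, started, done):
    """Lead time (customer view), cycle time (team view), wait time, in days."""
    lead = (done - created).total_seconds() / 86400
    cycle = (done - started).total_seconds() / 86400
    return {"lead": lead, "cycle": cycle, "wait": lead - cycle}

t = ticket_times(
    datetime(2024, 3, 1),   # ticket created
    datetime(2024, 3, 8),   # work started, after 7 days in queue
    datetime(2024, 3, 10),  # shipped to production
)
print(t)  # lead 9.0, cycle 2.0, wait 7.0 days
```

In this example the team's cycle time looks healthy (2 days) while the customer experienced 9; the 7 days of queue is the waste to attack first.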
### WIP Limits
Work-In-Progress limits prevent the multi-tasking trap. When people work on 5 things simultaneously, each thing takes 5x longer and quality drops.
**Recommended WIP limits:**
- Individual IC: 2–3 active items at once
- Team sprint: WIP = number of engineers × 1.5
- Leadership team: No more than 3 company-level priorities per quarter
**Implementation:** In Jira/Linear, add a WIP column. Set a hard limit. When the column is full, no new work starts until something ships.
---
## Part 3: Process Mapping Techniques
### When to Map a Process
Map a process when:
- It's done by more than 2 people
- It fails regularly (errors, rework, complaints)
- It needs to scale (you're about to add people or volume)
- You're automating it (you must understand the manual process first)
- You're onboarding someone new to it
Don't map processes that are genuinely ad-hoc, one-person, or will change significantly in the next 90 days.
### The Three Levels of Process Maps
**Level 1: Swim Lane Map (for cross-functional processes)**
Best for: Customer onboarding, sales-to-CS handoff, escalation handling, hiring
```
Example: Sales to CS Handoff
         | Sales AE         | Sales Ops   | CS Manager      | CS Rep
---------|------------------|-------------|-----------------|-------------------
Step 1   | Close deal       |             |                 |
Step 2   | Fill handoff doc |             |                 |
Step 3   |                  | Route to CS |                 |
Step 4   |                  |             | Review & assign |
Step 5   |                  |             |                 | Send welcome
Step 6   |                  |             |                 | Schedule kick-off
```
**Level 2: Flowchart (for decision-heavy processes)**
Best for: Escalation routing, incident response, approval workflows
Use standard symbols:
- Rectangle = action/task
- Diamond = decision (yes/no branch)
- Oval = start/end
- Parallelogram = input/output
**Level 3: Work Instructions (for execution-level processes)**
Best for: Checklists, SOPs, how-to guides
Format:
```
Process: [Name]
Owner: [Role]
Last reviewed: [Date]
Trigger: [What starts this process]
Step 1: [Action] — [Who does it] — [Tool used] — [Expected output]
Step 2: ...
Exceptions:
- If [condition], then [alternative action]
Done when: [Definition of done]
```
### Process Audit Technique
Run this quarterly on your most critical processes:
**1. Walk the process** — Literally follow a unit of work from start to finish. Ask the people doing it, not the people managing it.
**2. Measure three numbers:**
- How long does it actually take? (lead time)
- How often does it go wrong? (error/rework rate)
- What's the cost of a failure? (downstream impact)
**3. Score it:**
```
PROCESS HEALTH SCORE:
Lead time vs. target: [+2 on target / 0 delayed / -2 significantly delayed]
Error rate: [+2 <5% / 0 5-15% / -2 >15%]
Documented: [+1 yes / -1 no]
Owner named: [+1 yes / -1 no]
Last reviewed (< 6 months): [+1 yes / -1 no]
Max: 7. Score <3 = needs immediate attention.
```
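The rubric translates directly into a scoring function; a sketch, assuming error rate is expressed as a fraction and the three yes/no checks are booleans:

```python
def process_health_score(lead_time_status, error_rate,
                         documented, owner_named, reviewed_recently):
    """Score a process per the rubric above. Max 7; below 3 needs attention.

    lead_time_status: "on_target", "delayed", or "very_delayed"
    error_rate: fraction of runs that go wrong or need rework (0.0-1.0)
    """
    score = {"on_target": 2, "delayed": 0, "very_delayed": -2}[lead_time_status]
    if error_rate < 0.05:
        score += 2
    elif error_rate <= 0.15:
        score += 0
    else:
        score -= 2
    score += 1 if documented else -1
    score += 1 if owner_named else -1
    score += 1 if reviewed_recently else -1
    return score
```

Running this across your critical processes each quarter gives a simple ranked worklist: fix the lowest scores first.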
---
## Part 4: Automation Decision Framework
### The "Should I Automate This?" Test
Not everything should be automated. Bad automation of a broken process = faster broken process.
**The five-question filter:**
1. **Is the process stable?** If it changes monthly, automate later. Automating unstable processes locks in the wrong behavior.
2. **How often does it happen?** Weekly or more frequent = good candidate. Monthly or less = probably not worth it.
3. **What's the error rate without automation?** If the manual process is accurate 95%+ of the time, automation ROI is lower.
4. **What's the cost of failure?** Customer-facing, compliance, or financial processes deserve higher automation priority than internal reporting.
5. **Is the process well-documented?** If you can't describe it in a flowchart, you can't automate it. Document first.
### Automation ROI Calculation
```
Annual hours saved = (minutes per occurrence / 60) × occurrences per year
Annual labor cost saved = hours saved × fully-loaded cost per hour
Net annual value = labor cost saved + error reduction value + speed improvement value
Build/buy cost = development time + maintenance overhead
Payback period = build/buy cost ÷ net annual value
Rule of thumb: automate if payback period < 12 months
```
**Example:**
- Process: Weekly sales report compilation
- Time: 3 hours/week manually
- Fully-loaded cost: $75/hour
- Annual manual cost: 3 × 52 × $75 = $11,700
- Automation cost: 40 hours to build = $3,000
- Payback: 3,000 ÷ 11,700 = 3 months → **Automate**
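The payback arithmetic above can be checked with a short helper; it deliberately ignores the error-reduction and speed terms, so it gives a conservative (slower) payback estimate:

```python
def automation_payback_months(minutes_per_run, runs_per_year,
                              hourly_cost, build_cost):
    """Months to recoup the build cost from labor savings alone.

    Mirrors the worked example: 3 h/week at $75/h fully loaded,
    $3,000 to build -> roughly a 3-month payback.
    """
    annual_hours = minutes_per_run / 60 * runs_per_year
    annual_saving = annual_hours * hourly_cost
    return build_cost / annual_saving * 12

months = automation_payback_months(180, 52, 75, 3000)
print(f"Payback: {months:.1f} months")  # under the 12-month threshold -> automate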
### Automation Tiers
**Tier 1: No-code automation** (08 hours to implement)
- Tools: Zapier, Make (Integromat), n8n, HubSpot workflows
- Use for: Notification triggers, data syncs between tools, simple conditional routing
- Example: New customer in CRM → create CS ticket → send welcome Slack message
**Tier 2: Low-code automation** (840 hours to implement)
- Tools: Retool, internal scripts, Google Apps Script, Airtable Automations
- Use for: Internal dashboards, data transformation, approval workflows
- Example: Weekly metrics compilation from Salesforce + Mixpanel + HubSpot into Notion dashboard
**Tier 3: Engineered automation** (40+ hours to implement)
- Built by engineering team as product/infrastructure work
- Use for: Customer-facing workflows, compliance-critical processes, high-volume operations
- Example: Automated customer health score calculation → CS alert → playbook trigger
### Automation Prioritization Matrix
```
HIGH FREQUENCY
|
Tier 1 now | Tier 2-3 now
(quick win) | (high-value)
|
LOW VALUE ________________|________________ HIGH VALUE
|
Don't bother | Plan for later
| (when it's bigger)
|
LOW FREQUENCY
```
Place each manual process in the quadrant. Execute top-right first, Tier 1 items second.
### Automation Governance
As automation grows, it needs governance:
**Automation registry:** Maintain a list of all automations with:
- Name and description
- Owner (person responsible if it breaks)
- Tools used
- Trigger and action
- Last tested date
- Business impact if down
**Review cadence:** Quarterly review of automation registry. Kill automations nobody uses.
**Failure alerting:** Every production automation must have failure notifications sent to a named owner. Silent failures are worse than no automation.
---
## Part 5: Change Management for Process Rollouts
### Why Process Changes Fail
Most process changes fail not because the process is wrong, but because of how it's rolled out. Common failure modes:
- **Top-down dictate:** Process designed by leadership, announced to team, implemented poorly because people weren't involved and don't understand why.
- **No training:** "Here's the new process" with no demonstration or practice.
- **No feedback loop:** Process is rolled out and never adjusted based on what the team discovers.
- **No accountability:** Process is optional in practice because there are no consequences for ignoring it.
- **Old behavior still possible:** You introduce a new tool but don't turn off the old way.
### The Change Management Framework (ADKAR)
ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) is the most practical model for operational change.
**A — Awareness:** Does everyone understand WHY the change is needed?
- Don't just announce the new process — explain what was broken about the old one
- Share the data: "Our current onboarding takes 45 days, customers who onboard faster have 2x better retention. The new process targets 21 days."
**D — Desire:** Do people want to change?
- Resistance is information. Listen to it.
- Involve front-line workers in process design. People support what they help build.
- Address WIIFM (What's In It For Me) for each affected group
**K — Knowledge:** Do people know HOW to do the new process?
- Write it down (work instructions format above)
- Run live demos and practice sessions
- Create a "first time" checklist
**A — Ability:** Can people actually do the new process?
- Identify where people get stuck (first 2 weeks of rollout)
- Have a designated expert for questions
- Remove friction: if the new process requires 3 clicks where the old required 1, people will revert
**R — Reinforcement:** Does the change stick?
- Measure adoption (are people actually using the new process?)
- Celebrate early adopters
- Address non-adoption promptly — call it out without shame
### Change Rollout Checklist
```
PRE-LAUNCH:
□ Process designed and documented
□ Stakeholders identified (people affected by change)
□ Champions identified (people who will help adoption)
□ Training materials created
□ Success metrics defined (how will you know it worked?)
□ Rollback plan documented (what if it breaks something?)
□ Launch timeline set and communicated
LAUNCH WEEK:
□ Announcement sent with WHY, WHAT, and WHEN
□ Training sessions held (at least 2 options for different schedules)
□ Feedback channel opened (Slack thread, form, or dedicated meeting)
□ Champions briefed to support peers
2-WEEK CHECK:
□ Adoption rate measured
□ Friction points documented
□ Quick fixes implemented
□ Feedback reviewed and responded to
30-DAY REVIEW:
□ Success metrics reviewed vs. baseline
□ Process adjustments made based on learnings
□ Champions recognized
□ Process documentation updated with lessons learned
90-DAY CLOSE:
□ Full adoption confirmed or non-adoption addressed
□ Process owners confirmed
□ Handoff to BAU (business as usual) operations
```
### Managing Resistance
**Types of resistance and responses:**
| Resistance Type | What It Sounds Like | Right Response |
|----------------|---------------------|----------------|
| Legitimate concern | "This process won't work because X happens" | Acknowledge, investigate, fix or explain |
| Anxiety | "I don't know how to do this" | Training, support, reassurance |
| Loss of control | "This takes away my judgment" | Involve them in design; give them ownership of part of it |
| Passive non-compliance | Silent ignoring of the new process | Direct conversation; make it visible and required |
| Organizational inertia | "We've always done it this way" | Show the cost of the status quo in concrete terms |
**The three levers of adoption:**
1. **Make the new way easier than the old way** (remove the old path if possible)
2. **Make non-adoption visible** (dashboards showing who's using the process)
3. **Connect process to meaningful outcomes** (show how it affects things people care about)
### Process Documentation Standards
Every process should have exactly one owner responsible for keeping it current.
**Minimum documentation for any process:**
- **Process name** and one-sentence purpose
- **Owner:** Named individual, not a team
- **Trigger:** What starts this process
- **Steps:** Written at the level that a new employee could execute
- **Exceptions:** Common edge cases and how to handle them
- **Done definition:** How you know the process is complete
- **Review date:** Set a future date when this gets reviewed
**Documentation debt kills scale.** The most valuable time to document is right after you've run the process for the third time — you've found the edge cases, you know the real steps, and the process is still fresh.
---
## Framework Selection Guide
| Situation | Framework |
|-----------|-----------|
| We're slow and can't figure out why | Theory of Constraints — find the bottleneck |
| We have lots of waste and overhead | Lean — waste audit (TIMWOODS) |
| Process is inconsistent across team | Process mapping — Level 1 swim lane |
| Deciding what to automate | Automation decision framework + ROI calc |
| New process keeps getting ignored | ADKAR change management |
| Unclear who's responsible | RACI or DRI framework |
| Too many decisions escalating to leadership | RAPID decision rights |
---
*Frameworks synthesized from: Eliyahu Goldratt's The Goal and Critical Chain; Womack and Jones' Lean Thinking; Prosci ADKAR model; Scaled Agile Framework (SAFe) process guidance; operational playbooks from Stripe, Airbnb, and Shopify operations teams.*

# Scaling Playbook: What Breaks at Each Growth Stage
> Compiled from patterns across 100+ high-growth companies. Not theory — this is what actually breaks and what to do about it.
---
## How to Use This Playbook
Each stage section covers:
1. **What breaks** — the specific failure modes that kill companies at this stage
2. **Hiring** — who to bring in and when
3. **Process** — what to formalize vs. keep loose
4. **Tools** — infrastructure that unlocks the next stage
5. **Communication** — how information flow changes
6. **Culture** — what to protect and what to let go
**Benchmarks are medians** — your mileage varies by sector, geography, and business model.
---
## Stage 0: Pre-Seed / Seed ($0-$2M ARR, 1-15 people)
### Key Benchmarks
| Metric | Benchmark |
|--------|-----------|
| Revenue per employee | $0-$100K (still finding PMF) |
| Manager:IC ratio | N/A (no managers) |
| Burn multiple | 2-5x (acceptable) |
| Runway | 12-18 months minimum |
| Time-to-hire | 2-4 weeks |
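Burn multiple and runway, as used in the benchmark table, reduce to two divisions. A minimal sketch with hypothetical numbers:

```python
def burn_multiple(net_burn: float, net_new_arr: float) -> float:
    """Burn multiple = net cash burned / net new ARR added (lower is better)."""
    if net_new_arr <= 0:
        raise ValueError("undefined without positive net new ARR")
    return net_burn / net_new_arr

def runway_months(cash_on_hand: float, monthly_net_burn: float) -> float:
    """Months of runway at the current net burn rate."""
    return cash_on_hand / monthly_net_burn

# Hypothetical seed-stage numbers: $1.2M burned to add $400K of net new ARR.
print(burn_multiple(1_200_000, 400_000))   # 3.0, inside the acceptable band
print(runway_months(2_000_000, 100_000))   # 20.0 months
```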
### What Breaks
**Premature process.** The #1 mistake at seed stage is adding process before you have a repeatable model. Sprint ceremonies, OKR frameworks, and performance reviews are all theater when you haven't found PMF. Every hour spent in process is an hour not spent learning.
**Wrong first hires.** Hiring "senior" people who've only worked in structured environments. You need people who can operate in chaos, not people who expect process to already exist.
**Founder communication bottleneck.** Founders try to be in every decision. Fine at 5 people, fatal at 12. No written decisions means knowledge lives in founders' heads — unscalable.
**Technical debt accepted as strategy.** "We'll fix it later" said about core data models, auth systems, or billing. Later comes at Series A and it costs 3x more to fix.
### Hiring
- **Don't hire for scale you don't have.** Hire for the next 12 months.
- **First 10 hires set culture permanently.** Get them wrong and you'll spend years correcting.
- **Hire athletes, not specialists.** Generalists who can do multiple jobs outperform specialists at this stage.
- **Avoid VP titles early.** Inflated titles block future hires and create expectations you can't meet.
- **Founder-referral bias is real.** Your network is homogeneous. Force diversity early.
**Who to hire first (in rough order):**
1. Engineers who can ship product (2-3 generalists)
2. First sales/GTM if B2B (founder-led sales first, then one closer)
3. Designer/product (often a hybrid)
4. Customer success (often a founder at first)
### Process
**Formalize nothing before PMF.** Literally. Run on Slack, shared docs, and founder judgment.
**After PMF signals appear, formalize only:**
- How you handle customer escalations
- How you deploy code (even basic CI/CD)
- How you onboard new hires (a 1-page checklist is enough)
**Decision rule:** If a founder has to answer the same question three times, write it down. Once.
### Tools
| Function | Seed-Stage Tool |
|----------|----------------|
| Communication | Slack + Google Workspace |
| Project tracking | Linear or Notion (pick one, stay consistent) |
| CRM | HubSpot free or Notion |
| Engineering | GitHub + basic CI (GitHub Actions) |
| Finance | Brex/Mercury + QuickBooks |
| HR | Rippling or Gusto (basic) |
| Analytics | Mixpanel or PostHog (free tier) |
**Rule:** One tool per function. No tool sprawl. Every extra tool is a coordination tax.
### Communication
- **Weekly all-hands** (30 min max). What shipped, what's stuck, what's next.
- **No status meetings.** Anyone can see status in Linear/Notion.
- **Founder write-ups.** Every major decision gets a 1-paragraph Slack post explaining *why*.
- **Group chat discipline.** One channel per project/customer. Inbox zero mentality.
### Culture
**What to build deliberately:**
- High ownership: everyone acts like they own the company, because they do
- Direct feedback: brutal honesty delivered with care
- Bias to ship: done > perfect
- Customer obsession: founders talk to customers weekly
**What to watch for:**
- "Hero culture" where one person saves everything — unsustainable
- Over-indexing on culture fit (code for homogeneity)
- Avoidance of conflict — mistaking silence for agreement
---
## Stage 1: Series A ($2-$10M ARR, 15-50 people)
### Key Benchmarks
| Metric | Benchmark |
|--------|-----------|
| Revenue per employee | $100-$200K |
| Manager:IC ratio | 1:6-1:8 |
| Burn multiple | 1.5-2.5x |
| Sales efficiency (CAC payback) | <18 months |
| Churn (B2B SaaS) | <10% net annual |
| Engineering velocity | Feature shipped every 1-2 weeks |
| Time-to-hire | 4-6 weeks |
| Offer acceptance rate | >80% |
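CAC payback (the sales-efficiency line above) is months of gross profit needed to recover acquisition cost. A minimal sketch, with hypothetical inputs:

```python
def cac_payback_months(cac: float, monthly_arpa: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover the cost of acquiring an account."""
    return cac / (monthly_arpa * gross_margin)

# Hypothetical: $12K to acquire an account paying $1K/month at 80% gross margin.
print(cac_payback_months(12_000, 1_000, 0.80))  # 15.0, under the <18-month bar
```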
### What Breaks
**Founder-as-manager bottleneck.** At 20+ people, founders can't manage everyone. The first layer of management needs to appear — and it's usually picked wrong (best IC ≠ best manager).
**Tribal knowledge explosion.** "Ask Sarah" stops working when Sarah has 15 things open. Documentation becomes critical — not for bureaucracy, but because institutional knowledge is now a flight risk.
**Sales process fragmentation.** Without a defined sales process, every rep closes differently. You can't train, debug, or scale what you can't see.
**Scope creep in product.** With Series A money comes investor pressure to expand scope. Teams try to build three things at once and ship nothing well.
**Compensation chaos.** Early employees got equity-heavy deals. New hires get market cash. Someone compares, someone gets upset. No comp philosophy = constant re-negotiation.
**Recruiting becomes a job in itself.** Founders can't hire 30 people themselves. First dedicated recruiter needed by 25 people.
### Hiring
**Who to hire at Series A:**
- **Head of Engineering** (if founder is CTO): needs to be an operator, not just an architect
- **First Sales Manager** (when you have 3+ reps): don't promote the best seller
- **HR/People Ops** (generalist, by 30 people): comp, compliance, recruiting coordination
- **Finance** (fractional CFO or strong controller): Series A board needs real numbers
- **Customer Success Lead**: retention is everything at this stage
**Hiring mistakes to avoid:**
- Hiring "big company" execs who need large teams and established process
- Assuming your Series A lead can recruit (they can intro, not close)
- Taking too long — top candidates have 2-3 offers. Move in <2 weeks from first call to offer.
**Leveling:** Build a simple career ladder *before* the compensation complaints start. 3-4 levels per function is enough.
### Process
**What to formalize at Series A:**
1. **Sprint planning** (2-week sprints, public roadmap)
2. **Sales process** (defined stages with entry/exit criteria)
3. **Onboarding** (30/60/90 day plan for each function)
4. **1:1 cadence** (weekly for direct reports, bi-weekly for skip-levels)
5. **Incident response** (P0/P1/P2 definition, on-call rotation)
6. **Quarterly planning** (OKRs or goals framework — keep it lightweight)
**What to keep loose:**
- Internal project process (let teams self-organize)
- Meeting formats (let teams evolve their own rituals)
- Tool selection within approved stack
**Documentation standard:** Write decisions down in a shared wiki. "Decision log" with date, decision, context, owner, and outcome. Takes 5 minutes, saves hours.
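The decision-log format above (date, decision, context, owner, outcome) is trivial to template. A minimal sketch; the field names come from the standard, the example values are hypothetical:

```python
from datetime import date

FIELDS = ("Date", "Decision", "Context", "Owner", "Outcome")

def decision_log_entry(decision, context, owner, outcome="TBD", when=""):
    """Render one wiki-ready entry with the five fields the standard names."""
    values = (when or date.today().isoformat(), decision, context, owner, outcome)
    return "\n".join(f"- **{field}:** {value}" for field, value in zip(FIELDS, values))

print(decision_log_entry(
    decision="Adopt Linear as the single project tracker",
    context="Tracking split across Notion boards and spreadsheets",
    owner="Head of Engineering",
    when="2025-01-15",
))
```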
### Tools
| Function | Series A Tool |
|----------|--------------|
| Project/Product | Linear + Notion |
| CRM | HubSpot or Salesforce (Starter) |
| Engineering | GitHub + CI/CD pipeline + Sentry |
| HR/People | Rippling or Lattice (performance) |
| Finance | NetSuite or QBO + Brex |
| Analytics | Mixpanel/Amplitude + Looker (or Metabase) |
| Customer Success | Intercom + HubSpot or Zendesk |
| Docs | Notion or Confluence |
### Communication
**Introduce structured communication layers:**
1. **Company all-hands** (monthly, 60 min): CEO share, metrics review, team spotlights, Q&A
2. **Leadership sync** (weekly, 60 min): cross-functional issues, blockers, priorities
3. **Team standups** (async or 15 min daily): what's in progress, what's blocked
4. **1:1s** (weekly): direct report health, career, performance
5. **Written updates** (weekly to investors + board): CEO memo format
**Information hierarchy:** Everyone in the company should know: (1) company goals this quarter, (2) their team's goals, (3) what they personally own. If they don't, your communication structure is broken.
### Culture
**Deliberate culture work starts here.** You're too big for culture to be accidental.
- **Write down values.** Real values with examples of what they look like in action. Not "integrity" — "we tell investors bad news before we tell them good news."
- **Performance management.** First PIPs (Performance Improvement Plans) happen at this stage. Handle them well — the team is watching.
- **Equity culture.** Make sure people understand what their equity is worth in different outcomes. Lack of transparency breeds resentment.
- **First layoff plan.** Even if you never use it, know the criteria. Reactive layoffs destroy trust; plan-based ones (even painful) preserve it.
---
## Stage 2: Series B ($10-$30M ARR, 50-150 people)
### Key Benchmarks
| Metric | Benchmark |
|--------|-----------|
| Revenue per employee | $150-$300K |
| Manager:IC ratio | 1:5-1:7 |
| Burn multiple | 1.0-1.5x |
| CAC payback | <12 months |
| NRR (net revenue retention) | >110% |
| Engineering: Product ratio | ~3:1 |
| Sales: CS ratio | ~3:1 |
| Time-to-hire (senior) | 6-10 weeks |
| Annual attrition | <15% voluntary |
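NRR (net revenue retention, benchmarked above at >110%) measures what a cohort's ARR is worth a year later. A minimal sketch with hypothetical cohort numbers:

```python
def net_revenue_retention(starting_arr, expansion, contraction, churn):
    """NRR: a cohort's ARR a year later as a ratio of its starting ARR."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Hypothetical cohort: $10M start, $2M expansion, $300K contraction, $500K churned.
nrr = net_revenue_retention(10_000_000, 2_000_000, 300_000, 500_000)
print(f"{nrr:.0%}")  # 112%, clearing the >110% benchmark
```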
### What Breaks
**Middle management void.** You now have managers managing managers. The "player-coach" model breaks — people can't be ICs and managers simultaneously at this scale. Force the choice.
**Planning misalignment.** Sales promises what product hasn't built. Product builds what customers didn't ask for. Engineering ships what QA didn't test. Fixing this requires cross-functional planning ceremonies.
**Data fragmentation.** Five different versions of "how are we doing." Sales sees Salesforce. Product sees Amplitude. Finance sees spreadsheets. Nobody agrees. You need a single source of truth.
**Process debt.** The Series A processes are starting to creak. Onboarding that worked for 5 hires/quarter doesn't work for 20. Customer escalation paths built for 50 customers fail at 500.
**Cultural fragmentation.** Engineering culture ≠ Sales culture ≠ Support culture. Sub-cultures form. The shared identity you had at 30 people requires active work to maintain at 100.
**The "brilliant jerk" problem.** High performers with bad behavior were tolerated early. Now they're managers with bad behavior, and it's systemic. Act decisively or lose your best people.
### Hiring
**Who to hire at Series B:**
- **COO or VP Operations**: founder is overwhelmed, someone needs to run the machine
- **VP Sales**: first Sales Manager won't scale to 20-rep org
- **VP Marketing**: demand gen and brand need dedicated ownership
- **Dedicated Recruiting**: 2-3 recruiters minimum; you're hiring 30-50 people/year
- **Data/Analytics**: dedicated analyst or data engineer to consolidate reporting
- **Legal counsel**: fractional or in-house; contracts and compliance are getting complex
**The "big company exec" trap.** Series B is when companies hire their first VP from FAANG or a large SaaS company. 60% of these fail within 18 months. They're used to: large teams, established brand, existing process, political navigation. They struggle with: scrappy execution, no support staff, ambiguous direction. Vet explicitly for startup experience.
**Span of control.** At this stage, hold managers to 5-8 direct reports. More than 8 = no time for actual management. Fewer than 3 = management overhead isn't justified.
### Process
**What to formalize at Series B:**
1. **Quarterly Business Reviews (QBRs)** — every function presents metrics, wins, gaps
2. **Annual planning** — budget, headcount plan, strategic priorities
3. **Cross-functional roadmap alignment** — product/sales/marketing in sync quarterly
4. **Promotion criteria** — written, public, applied consistently
5. **Interview scorecards** — structured interviews with defined rubrics
6. **Change management** — how major process changes get communicated and adopted
7. **Vendor management** — evaluation criteria, approval process, contract management
**SOPs for critical processes:**
- Customer onboarding (if >50 customers)
- Sales handoff from SDR to AE to CS
- Engineering release process
- Incident response playbook
- Contractor/vendor procurement
### Tools
| Function | Series B Tool |
|----------|--------------|
| Project/Product | Jira or Linear (with roadmapping) |
| CRM | Salesforce (full) |
| ERP/Finance | NetSuite |
| HR | Workday or BambooHR + Lattice |
| Analytics | Looker or Tableau + data warehouse |
| Customer Success | Gainsight or ChurnZero |
| Engineering | GitHub Enterprise + full CI/CD + observability |
| Security | 1Password Teams + SSO (Okta) + endpoint management |
### Communication
**At 50+ people, informal communication breaks down.** Information no longer flows naturally — it has to be architected.
**Communication stack:**
- **Monthly all-hands** (90 min): metrics deep-dive, strategy update, team Q&A
- **Weekly leadership team** (90 min): cross-functional priorities, decisions, escalations
- **Bi-weekly skip-levels** (30 min): every senior leader meets their direct reports' direct reports
- **Quarterly town halls** (2 hrs): broader context, financial update, roadmap preview
- **Written company update** (bi-weekly): CEO to all-hands via Slack/email
**The information gradient problem.** People at the top know too much. People at the bottom know too little. Fix this with a deliberate "broadcast" culture — any decision affecting more than 5 people gets written up and shared.
### Culture
**Retention becomes an existential issue.** At Series B, you have 50-150 people who've been with you through something hard. They're valuable. And they have options.
- **Career ladders** are non-negotiable by this stage. People leave when they can't see a future.
- **Manager quality** determines retention. Invest in manager training. Run manager effectiveness surveys.
- **Compensation benchmarking** quarterly. If you're more than 10% below market, you're losing people silently.
- **Culture carriers.** Identify the 10-15 people who embody your culture and make them formally responsible for transmitting it. Give them a platform.
---
## Stage 3: Series C ($30-$75M ARR, 150-500 people)
### Key Benchmarks
| Metric | Benchmark |
|--------|-----------|
| Revenue per employee | $200-$400K |
| Manager:IC ratio | 1:5-1:6 |
| Burn multiple | 0.75-1.25x |
| NRR | >115% |
| CAC payback | <9 months |
| Sales cycle (Enterprise) | 60-120 days |
| Engineering team % | 30-40% of headcount |
| Annual attrition target | <12% voluntary |
| Time-to-hire (senior) | 8-12 weeks |
### What Breaks
**Strategy execution gap.** Leadership agrees on strategy. Middle management interprets it differently. ICs execute on their interpretation. By the time work ships, it barely resembles the original strategy. Fix: strategy must cascade in writing with explicit outcomes.
**Process bureaucracy.** The processes you built at Series B start generating bureaucracy. Approval chains lengthen. Simple decisions require three meetings. The antidote is explicit process owners empowered to eliminate friction.
**Org design complexity.** Do you have functional teams (all engineers in one org) or product teams (engineers embedded in product squads)? The answer affects everything: career paths, knowledge sharing, delivery speed. Most companies get this wrong twice before getting it right.
**Geographic complexity.** First international office or remote-heavy team introduces timezone, communication, and culture challenges that don't exist when everyone is in one room.
**Leadership team dysfunction.** Seven VPs who were all individual contributors two years ago are now running $10M+ organizations. Some have grown into it. Some haven't. This is the stage where hard leadership team changes happen.
### Hiring
**Series C hiring is about depth, not breadth.** You have functional coverage — now hire people who go deep within functions.
- **Functional leaders' deputies**: VP Engineering needs a Director of Platform Engineering, Director of Product Engineering, etc.
- **Internal promotions**: 40-60% of leadership roles should be filled internally by now. If you're hiring externally for everything, you've failed at development.
- **Specialists**: Security, data science, UX research, RevOps — functions that were "shared" become dedicated.
- **General Counsel**: Legal volume justifies full-time counsel.
**Headcount planning discipline.** Every hire should have a business case. "The team is busy" is not a business case. "This role will unlock $X in revenue or save Y hours/week" is a business case.
### Process
**Process consolidation.** Audit every process. Kill anything that doesn't have a clear owner and clear outcome. The average Series C company has 40% more process than it needs.
**Key processes to have locked at Series C:**
1. **Annual planning cycle** (strategy → goals → headcount → budget)
2. **Quarterly operating review** (progress against plan, forecast, adjustments)
3. **Product development lifecycle** (discovery → design → build → launch → measure)
4. **Revenue operations** (forecasting, pipeline management, territory planning)
5. **People operations** (performance cycles, promotion cadence, compensation philosophy)
6. **Risk management** (operational, security, compliance, legal)
**Delegation architecture.** At 200+ people, the COO cannot know about every decision. Build explicit decision rights: what decisions require CEO/COO approval vs. VP vs. Director vs. IC.
### Tools
**Consolidate the tech stack.** By Series C, you have tool sprawl. The average 200-person company has 100+ SaaS tools. 40% are redundant. Consolidation saves $200-500K/year and reduces security surface.
**Must-have by Series C:**
- Enterprise SSO (Okta/Google Workspace with MFA everywhere)
- Data warehouse (Snowflake/BigQuery) + BI layer
- HRIS with performance management (Workday, Rippling, BambooHR)
- Revenue intelligence (Gong, Chorus)
- Security tooling (endpoint, SIEM basics, SOC 2 compliance)
### Communication
**Internal comms becomes a function.** You cannot rely on ad-hoc Slack and email at 200+ people. Someone needs to own internal communications.
- **Monthly CEO update** (written, 500 words max): company performance, strategic context, what's next
- **Quarterly all-hands** (2 hrs): comprehensive business review, open Q&A
- **Leadership alignment sessions** (quarterly): leadership team off-site to calibrate on strategy
- **Manager cascade** (after every major announcement): managers brief their teams with tailored context
### Culture
**Culture is now a function, not an instinct.** By Series C, your original culture-carriers are managers or have left. New people joining have never seen how you worked when you were small.
- **Culture explicitly documented** — not a values poster, a behavioral handbook
- **Onboarding redesigned** for culture transmission at scale
- **Manager enablement** — managers are your primary culture delivery mechanism; invest heavily
- **Listening infrastructure** — eNPS quarterly, exit interviews, skip-level feedback — all analyzed systematically
---
## Stage 4: Growth Stage ($75M+ ARR, 500+ people)
### Key Benchmarks
| Metric | Benchmark |
|--------|-----------|
| Revenue per employee | $300-$600K |
| Manager:IC ratio | 1:4-1:6 |
| Burn multiple (path to profitability) | <0.5x |
| NRR | >120% |
| S&M as % of revenue | 25-35% |
| R&D as % of revenue | 15-25% |
| G&A as % of revenue | 8-12% |
| Rule of 40 | >40 (growth rate + profit margin) |
| Annual attrition target | <10% voluntary |
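Rule of 40 is the simplest check in the table: growth rate plus profit margin, both in percentage points. A minimal sketch with hypothetical inputs:

```python
def rule_of_40(yoy_growth_pct: float, profit_margin_pct: float) -> float:
    """Rule of 40: YoY revenue growth rate plus profit margin, both in points."""
    return yoy_growth_pct + profit_margin_pct

print(rule_of_40(45, -3))  # 42: clears the >40 bar despite negative margin
print(rule_of_40(25, 5))   # 30: efficiency problem territory
```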
### What Breaks
**Execution at scale.** The larger you are, the harder it is to move fast. The average decision at a 500-person company takes 3x longer than at a 50-person company. This is not inevitable — but fixing it requires explicit investment.
**Internal politics.** Org boundaries create fiefdoms. VPs protect headcount. Teams optimize for their metrics at the expense of company metrics. This is the #1 culture problem at scale.
**Innovation starvation.** The core business is optimized, but new bets are starved of resources. The people working on new initiatives are constrained by processes designed for a mature product. Structural solution required: separate P&L, separate team, different metrics.
**Middle management bloat.** Growth-stage companies often have too many managers and not enough ICs. A manager managing one other manager managing three ICs is a 3-level chain where 2 people add no value. Flatten aggressively.
### Hiring
**You're now competing for talent with FAANG.** Your advantage is mission, equity, and the ability to have impact. Candidates who want to join a Fortune 500 will not join you. Stop trying to attract them.
- **Leadership pipeline**: promote from within at 50%+ for senior roles
- **Talent density over headcount**: 30 strong engineers > 50 average engineers
- **Diverse hiring**: by this stage, lack of diversity is a business problem, not just an ethical one
### Operational Priorities at Scale
1. **Operational efficiency over growth**: headcount growth should lag revenue growth
2. **Process ownership**: every major process has a named owner accountable for outcomes
3. **Quarterly operating model**: budget vs. actual, full P&L transparency to VP level
4. **Automation**: manual operational processes that cost >40 hrs/week should be automated
---
## Cross-Stage Principles
### The Three Things That Kill Companies at Every Stage
1. **Running out of cash before finding the next unlock** — runway management is sacred
2. **Hiring the wrong person for a critical role** — one bad VP can set you back 18 months
3. **Moving too slowly** — market timing matters; perfect is the enemy of shipped
### The Org Design Progression
```
Seed: Flat | Everyone reports to founder | No structure
Series A: Functional pods | First-line managers | Light structure
Series B: Functional departments | VPs emerge | Defined structure
Series C: Business units or product squads | Directors + VPs | Full structure
Growth: Divisional or matrix | EVPs/SVPs | Corporate structure
```
### Revenue per Employee by Function (B2B SaaS benchmarks)
| Function | Series A | Series B | Series C | Growth |
|----------|----------|----------|----------|--------|
| Engineering | $400K | $500K | $600K | $700K |
| Sales | $250K | $350K | $450K | $500K |
| Customer Success | $300K | $400K | $500K | $600K |
| Marketing | $500K | $700K | $900K | $1M+ |
| G&A | $600K | $800K | $1M | $1.2M |
*Revenue per employee = ARR / headcount in function*
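The table above can drive a mechanical check. A minimal sketch using the Series B column; the $20M ARR company and its headcounts are hypothetical:

```python
SERIES_B_BENCHMARKS = {  # ARR per head by function, from the table above
    "Engineering": 500_000, "Sales": 350_000, "Customer Success": 400_000,
    "Marketing": 700_000, "G&A": 800_000,
}

def rev_per_employee(arr: float, headcount: dict) -> dict:
    """ARR / headcount per function, flagged against the stage benchmark."""
    return {fn: (arr / n, arr / n >= SERIES_B_BENCHMARKS[fn])
            for fn, n in headcount.items()}

# Hypothetical Series B company: $20M ARR, 35 engineers, 60 in sales.
for fn, (rpe, ok) in rev_per_employee(20_000_000, {"Engineering": 35, "Sales": 60}).items():
    print(fn, f"${rpe:,.0f}", "at benchmark" if ok else "below benchmark")
```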
### The Management Span Rule
- **Individual contributors being managed**: 1 manager per 6-8 ICs
- **Managers being managed**: 1 director per 4-6 managers
- **Directors being managed**: 1 VP per 3-5 directors
- **VPs being managed**: 1 C-level per 5-8 VPs
Violation of this creates either manager burnout (too wide) or management theater (too narrow).
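The span rule above lends itself to a quick org audit. A minimal sketch; the bands come from the rule, the example spans are hypothetical:

```python
SPAN_RULES = {  # healthy direct-report counts per level, from the rule above
    "manager": (6, 8), "director": (4, 6), "vp": (3, 5), "c-level": (5, 8),
}

def audit_span(level: str, direct_reports: int) -> str:
    """Flag spans outside the healthy band for a management level."""
    low, high = SPAN_RULES[level]
    if direct_reports > high:
        return "too wide: burnout risk"
    if direct_reports < low:
        return "too narrow: management theater"
    return "healthy"

print(audit_span("director", 9))  # too wide: burnout risk
print(audit_span("vp", 4))        # healthy
```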
---
## Red Flags by Stage
| Stage | Red Flag | Likely Cause |
|-------|----------|-------------|
| Seed | Missed 3+ product deadlines | Wrong team or unclear prioritization |
| Series A | Churn >20% | PMF not actually found, or CS underfunded |
| Series B | >6-month sales cycle on SMB | Pricing/packaging problem |
| Series C | NRR <100% | Product-market fit eroding or CS broken |
| Growth | Rule of 40 <20 | Efficiency problem; hiring ahead of revenue |
---
*Sources: Sequoia, a16z operating frameworks; First Round Capital COO benchmarks; SaaStr metrics databases; OpenView SaaS benchmarks; Bain operational maturity models.*

---
name: cpo-advisor
description: "Product leadership for scaling companies. Product vision, portfolio strategy, product-market fit, and product org design. Use when setting product vision, managing a product portfolio, measuring PMF, designing product teams, prioritizing at the portfolio level, reporting to the board on product, or when user mentions CPO, product strategy, product-market fit, product organization, portfolio prioritization, or roadmap strategy."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: cpo-leadership
updated: 2026-03-05
python-tools: pmf_scorer.py, portfolio_analyzer.py
frameworks: pmf-playbook, product-strategy, product-org-design
---
# CPO Advisor
Strategic product leadership. Vision, portfolio, PMF, org design. Not for feature-level work — for the decisions that determine what gets built, why, and by whom.
## Keywords
CPO, chief product officer, product strategy, product vision, product-market fit, PMF, portfolio management, product org, roadmap strategy, product metrics, north star metric, retention curve, product trio, team topologies, Jobs to be Done, category design, product positioning, board product reporting, invest-maintain-kill, BCG matrix, switching costs, network effects
## Quick Start
### Score Your Product-Market Fit
```bash
python scripts/pmf_scorer.py
```
Multi-dimensional PMF score across retention, engagement, satisfaction, and growth.
### Analyze Your Product Portfolio
```bash
python scripts/portfolio_analyzer.py
```
BCG matrix classification, investment recommendations, portfolio health score.
## The CPO's Core Responsibilities
The CPO owns five things. Everything else is delegation.
| Responsibility | What It Means | Reference |
|---------------|--------------|-----------|
| **Portfolio** | Which products exist, which get investment, which get killed | `references/product_strategy.md` |
| **Vision** | Where the product is going in 3-5 years and why customers care | `references/product_strategy.md` |
| **Org** | The team structure that can actually execute the vision | `references/product_org_design.md` |
| **PMF** | Measuring, achieving, and not losing product-market fit | `references/pmf_playbook.md` |
| **Metrics** | North star → leading → lagging hierarchy, board reporting | This file |
## Diagnostic Questions
These questions expose whether you have a strategy or a list.
**Portfolio:**
- Which product is the dog? Are you killing it or lying to yourself?
- If you had to cut 30% of your portfolio tomorrow, what stays?
- What's your portfolio's combined D30 retention? Is it trending up?
**PMF:**
- What's your retention curve for your best cohort?
- What % of users would be "very disappointed" if your product disappeared?
- Is organic growth happening without you pushing it?
**Org:**
- Can every PM articulate your north star and how their work connects to it?
- When did your last product trio do user interviews together?
- What's blocking your slowest team — the people or the structure?
**Strategy:**
- If you could only ship one thing this quarter, what is it and why?
- What's your moat in 12 months? In 3 years?
- What's the riskiest assumption in your current product strategy?
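The "very disappointed" question above is the Sean Ellis test; the commonly cited PMF threshold is 40% of respondents. A minimal sketch with a hypothetical 20-user survey:

```python
from collections import Counter

def sean_ellis_score(responses):
    """Share of respondents answering 'very disappointed' if the product went away."""
    return Counter(responses)["very disappointed"] / len(responses)

# Hypothetical survey of 20 users.
answers = (["very disappointed"] * 9 + ["somewhat disappointed"] * 8
           + ["not disappointed"] * 3)
print(f"{sean_ellis_score(answers):.0%}")  # 45%, above the 40% threshold
```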
## Product Metrics Hierarchy
```
North Star Metric (1, owned by CPO)
↓ explains changes in
Leading Indicators (3-5, owned by PMs)
↓ eventually become
Lagging Indicators (revenue, churn, NPS)
```
**North Star rules:** One number. Measures customer value delivered, not revenue. Every team can influence it.
**Good North Stars by business model:**
| Model | North Star Example |
|-------|------------------|
| B2B SaaS | Weekly active accounts using core feature |
| Consumer | D30 retained users |
| Marketplace | Successful transactions per week |
| PLG | Accounts reaching "aha moment" within 14 days |
| Data product | Queries run per active user per week |
### The CPO Dashboard
| Category | Metric | Frequency |
|----------|--------|-----------|
| Growth | North star metric | Weekly |
| Growth | D30 / D90 retention by cohort | Weekly |
| Acquisition | New activations | Weekly |
| Activation | Time to "aha moment" | Weekly |
| Engagement | DAU/MAU ratio | Weekly |
| Satisfaction | NPS trend | Monthly |
| Portfolio | Revenue per product | Monthly |
| Portfolio | Engineering investment % per product | Monthly |
| Moat | Feature adoption depth | Monthly |
## Investment Postures
Every product gets one: **Invest / Maintain / Kill**. "Wait and see" is not a posture — it's a decision to lose share.
| Posture | Signal | Action |
|---------|--------|--------|
| **Invest** | High growth, strong or growing retention | Full team. Aggressive roadmap. |
| **Maintain** | Stable revenue, slow growth, good margins | Bug fixes only. Milk it. |
| **Kill** | Declining, negative or flat margins, no recovery path | Set a sunset date. Write a migration plan. |
## Red Flags
**Portfolio:**
- Products that have been "question marks" for 2+ quarters without a decision
- Engineering capacity allocated to your highest-revenue product but your highest-growth product is understaffed
- More than 30% of team time on products with declining revenue
**PMF:**
- You have to convince users to keep using the product
- Support requests are mostly "how do I do X" rather than "I want X to also do Y"
- D30 retention is below 20% (consumer) or 40% (B2B) and not improving
**Org:**
- PMs writing specs and handing to design, who hands to engineering (waterfall in agile clothing)
- Platform team has a 6-week queue for stream-aligned team requests
- CPO has not talked to a real customer in 30+ days
**Metrics:**
- North star going up while retention is going down (metric is wrong)
- Teams optimizing their own metrics at the expense of company metrics
- Roadmap built from sales requests, not user behavior data
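The PMF retention flags above can be checked mechanically. A minimal sketch; the D30 thresholds come from the bullets, while "flattening" is defined here as losing no more than 5 points between the last two measured days, which is an assumption:

```python
def retention_red_flags(curve: dict, model: str = "b2b") -> list:
    """Check a retention curve ({day: fraction retained}) against the flags above."""
    threshold = 0.40 if model == "b2b" else 0.20
    flags = []
    if curve.get(30, 1.0) < threshold:
        flags.append(f"D30 below {threshold:.0%} for {model}")
    days = sorted(curve)
    # A flattening curve loses almost nothing between its last two points.
    if len(days) >= 2 and curve[days[-2]] - curve[days[-1]] > 0.05:
        flags.append("curve still declining, not flattening")
    return flags

print(retention_red_flags({1: 0.60, 7: 0.45, 30: 0.35, 60: 0.33}))
# ['D30 below 40% for b2b']
```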
## Integration with Other C-Suite Roles
| When... | CPO works with... | To... |
|---------|-------------------|-------|
| Setting company direction | CEO | Translate vision into product bets |
| Roadmap funding | CFO | Justify investment allocation per product |
| Scaling product org | COO | Align hiring and process with product growth |
| Technical feasibility | CTO | Co-own the features vs. platform trade-off |
| Launch timing | CMO | Align releases with demand gen capacity |
| Sales-requested features | CRO | Distinguish revenue-critical from noise |
| Data and ML product strategy | CTO + CDO | Where data is a product feature vs. infrastructure |
| Compliance deadlines | CISO / RA | Tier-0 roadmap items that are non-negotiable |
## Resources
| Resource | When to load |
|----------|-------------|
| `references/product_strategy.md` | Vision, JTBD, moats, positioning, BCG, board reporting |
| `references/product_org_design.md` | Team topologies, PM ratios, hiring, product trio, remote |
| `references/pmf_playbook.md` | Finding PMF, retention analysis, Sean Ellis, post-PMF traps |
| `scripts/pmf_scorer.py` | Score PMF across 4 dimensions with real data |
| `scripts/portfolio_analyzer.py` | BCG classify and score your product portfolio |
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- Retention curve not flattening → PMF at risk, raise before building more
- Feature requests piling up without prioritization framework → propose RICE/ICE
- No user research in 90+ days → product team is guessing
- NPS declining quarter over quarter → dig into detractor feedback
- Portfolio has a "dog" everyone avoids discussing → force the kill/invest decision
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Do we have PMF?" | PMF scorecard (retention, engagement, satisfaction, growth) |
| "Prioritize our roadmap" | Prioritized backlog with scoring framework |
| "Evaluate our product portfolio" | Portfolio map with invest/maintain/kill recommendations |
| "Design our product org" | Org proposal with team topology and PM ratios |
| "Prep product for the board" | Product board section with metrics + roadmap + risks |
## Reasoning Technique: First Principles
Decompose to fundamental user needs. Question every assumption about what customers want. Rebuild from validated evidence, not inherited roadmaps.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`

# PMF Playbook
How to find product-market fit, measure it, and not lose it. Steps, not theory.
---
## What PMF Actually Is
PMF is when a product pulls users in rather than pushing them. Signals:
- Users find the product without you telling them about it
- They're upset when it doesn't work
- They bring their colleagues, their friends, their boss
- They build workarounds when a feature is missing
PMF is not:
- Users saying they like it
- A good NPS score with flat growth
- Enterprise customers who are locked in but churning at contract end
---
## Step 1: Find Your Best Customers First
Before measuring PMF across everyone, find the segment where PMF is strongest.
**How:**
1. Export a list of all churned users and all retained users (D90+)
2. Identify 5-10 attributes to compare: company size, industry, job title, signup source, first action taken, time to first value
3. Find the attributes that are over-represented in retained vs. churned
4. That's your highest-PMF segment
**This is not an analytics project.** Call 10 retained power users. Ask:
- "What were you doing before you found us?"
- "What would you use if we shut down tomorrow?"
- "Who else in your life has this problem?"
The segment where this conversation is easy and the answers are specific — that's where your PMF is.
---
## Step 2: Measure the Three PMF Signals
Run all three. They measure different things. One signal without the others is misleading.
### Signal 1: Retention Curves
**Method:**
1. Cohort users by week or month of first use
2. Calculate % still active at D1, D7, D14, D30, D60, D90
3. Plot the curve for each cohort
**Interpretation:**
| Curve Shape | What It Means |
|-------------|--------------|
| Drops to zero | No PMF. Product doesn't solve a recurring problem. |
| Drops and keeps dropping | Weak PMF. Some people find value, but not enough to keep coming back. |
| Drops then flattens above 0 | PMF signal. A core group finds ongoing value. |
| Flattens higher with each newer cohort | PMF improving. You're learning. |
**Benchmarks:**
| Segment | D30 Retention (PMF threshold) | D90 Retention (strong PMF) |
|---------|-------------------------------|---------------------------|
| Consumer | > 20% | > 10% |
| SMB SaaS | > 40% | > 25% |
| Enterprise SaaS | > 60% | > 45% |
| Marketplace (buyers) | > 30% | > 20% |
| PLG (free-to-paid) | > 25% free D30, > 50% paid D30 | > 15% free D90 |
**If retention is below threshold:**
- Don't run more acquisition. You'll just churn faster.
- Find the users who ARE retained. Understand why. Build for them.
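The cohort arithmetic above is simple enough to sketch with the stdlib alone. The data shapes here are illustrative, not a prescribed format; "retained at day D" is read as "active on or after day D":

```python
from collections import defaultdict

CHECKPOINTS = (1, 7, 14, 30, 60, 90)

def cohort_retention(signup_cohort, activity, checkpoints=CHECKPOINTS):
    """signup_cohort: {user_id: cohort label, e.g. '2024-W05' (week of first use)}
    activity: {user_id: set of days-since-signup with any activity}
    Returns {cohort: {day: retained fraction}} -- one curve per cohort."""
    cohorts = defaultdict(list)
    for uid, label in signup_cohort.items():
        cohorts[label].append(uid)
    curves = {}
    for label, users in cohorts.items():
        n = len(users)
        curves[label] = {
            d: sum(1 for u in users if any(a >= d for a in activity.get(u, ()))) / n
            for d in checkpoints
        }
    return curves
```

A cohort whose D30 and D90 values stop falling is the "drops then flattens above 0" shape in the table.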
---
### Signal 2: Sean Ellis Test
Survey users with one question: "How would you feel if you could no longer use [Product]?"
**Answers:**
- Very disappointed
- Somewhat disappointed
- Not disappointed (it really isn't that useful)
- N/A — I no longer use [Product]
**Scoring:**
- Count only "very disappointed" responses
- Divide by total non-churned respondents
- PMF threshold: **> 40% "very disappointed"**
**Sample size requirement:** Minimum 40 responses. Under 40, the signal is noisy.
**When to run it:**
- When you have 100-500 active users
- Quarterly for ongoing tracking
- After major product changes
**What to do with "somewhat disappointed":**
Don't lump them with "very disappointed." The delta between "somewhat" and "very" is where your retention problem lives. Interview people in the "somewhat" group. What's missing? Why only somewhat?
**When score is 20-35%:** You have a segment with PMF. Find them. Ask what they love. Run a separate survey for just that segment.
**When score is < 20%:** Your core value proposition isn't working. This is not a retention tactics problem. Revisit the fundamental problem you're solving.
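The scoring rule is mechanical; a minimal sketch (answer labels are illustrative):

```python
def sean_ellis_score(responses):
    """responses: one label per respondent: 'very', 'somewhat', 'not', or 'na'.
    'na' (churned) respondents are excluded; only 'very' counts toward the score.
    Returns (score, sample_size); score is None below the 40-response floor."""
    eligible = [r for r in responses if r != "na"]
    n = len(eligible)
    if n < 40:
        return None, n  # too noisy to act on
    return eligible.count("very") / n, n
```

Keep the raw "somewhat" responses around separately: that group is the interview list, not part of the score.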
---
### Signal 3: Organic Growth and Referral
**Metric:** % of new signups that came from existing user referral, word of mouth, or organic search — without a paid incentive.
**Threshold:** > 20% of new users are coming organically without incentive programs.
**How to measure:**
1. Tag signup source: paid, organic search, referral (with referral code), direct/dark social
2. Track monthly. Is the organic % trending up or stable?
3. Interview organic signups: "How did you hear about us?" (don't trust the dropdown)
**Why this matters:** Paid growth can mask the absence of PMF. You can buy users who churn. You can't buy users who tell their friends.
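One way to track the monthly split. The source tags are hypothetical; real tags depend on your attribution setup, and incentivized referrals should carry their own tag so they don't inflate the number:

```python
from collections import Counter

# Tags assumed to carry no paid incentive (hypothetical taxonomy)
UNPAID = {"organic_search", "referral", "word_of_mouth", "direct"}

def organic_share(sources):
    """sources: one attribution tag per new signup in the month.
    Returns the fraction of signups from unpaid, un-incentivized channels."""
    counts = Counter(sources)
    total = sum(counts.values())
    return sum(v for k, v in counts.items() if k in UNPAID) / total if total else 0.0
```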
---
## Step 3: Run PMF Experiments (Pre-PMF)
If you're below thresholds, don't optimize — experiment. The goal is to find the version of the product where at least a small segment has PMF.
### The PMF Experiment Loop
```
1. Pick one customer segment + one hypothesis about their job to be done
2. Remove everything from the product that doesn't serve that job
3. Run a 4-week cohort with only that segment
4. Measure retention + Sean Ellis for that cohort
5. If PMF signal: this is your beachhead. Double down.
If no signal: new hypothesis. Repeat.
```
**Time box:** Each experiment 4-8 weeks. If you're running experiments for 18+ months with no signal, revisit the problem space, not just the solution.
### What to Change
| Lever | Change | Expected Impact |
|-------|--------|-----------------|
| Target segment | Narrow ICP from "all companies" to "Series A SaaS" | Faster learning, higher retention |
| Core job | Reframe from feature-benefit to outcome-benefit | Better product decisions |
| Onboarding | Cut steps to shorten time-to-value | D1 retention up |
| Pricing | Move from per-seat to per-outcome | Align incentives with value |
| Channel | Switch from outbound to PLG | Different segment discovers product |
---
## Step 4: Validate PMF (Post-Signal, Pre-Scale)
Congratulations, you have a retention curve that flattens. Before you scale:
**Validate that it's real:**
- Can you acquire more of the same customers? (Test CAC at 2x current volume)
- Do the retained users expand? (Are they buying more seats, upgrading?)
- Is the NPS from retained users > 40?
- Are they forgiving of bugs and slowness? (Love, not tolerance)
**Validate the unit economics:**
- LTV / CAC > 3x (for SaaS)
- Payback period < 18 months
- Gross margin > 60% (SaaS), > 40% (marketplace)
**The danger zone:** Convincing yourself you have PMF before economics are viable. High retention with terrible unit economics is not a business — it's a hobby that grows.
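The three economic gates can be checked together. A sketch using the thresholds above; input names are illustrative, and payback is computed as months of per-customer gross profit needed to recover CAC:

```python
def unit_economics_check(ltv, cac, monthly_gross_profit, gross_margin, model="saas"):
    """monthly_gross_profit: gross profit per customer per month.
    Margin floor is 0.60 for SaaS, 0.40 for marketplace, per the thresholds above."""
    payback_months = cac / monthly_gross_profit
    margin_floor = 0.60 if model == "saas" else 0.40
    return {
        "ltv_cac": ltv / cac,
        "ltv_cac_ok": ltv / cac > 3.0,
        "payback_months": payback_months,
        "payback_ok": payback_months < 18,
        "margin_ok": gross_margin > margin_floor,
    }
```

All three gates must pass; a single failing gate means you are still in the danger zone above.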
---
## PMF by Business Model
### B2B SaaS
**Primary signal:** D90 retention > 45% in target segment.
**Secondary signals:**
- NPS from retained users > 50
- Expansion revenue from retained accounts (NRR > 110%)
- Sales cycle shortening as word-of-mouth increases
**PMF finding strategy:**
- Start with one vertical, not the whole market
- Get 3-5 reference customers who use it daily and refer others
- Don't expand segment until you can replicate the reference case
**Common false signals:**
- Retained users who are locked in by contract, not value
- Expansion revenue from upselling, not from organic growth
- High satisfaction survey scores with flat usage data
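NRR, as used above, for one cohort of existing accounts over a period:

```python
def net_revenue_retention(start_mrr, expansion, contraction, churned):
    """(start + expansion - contraction - churned) / start, measured only on
    accounts that existed at the start of the period; new-logo revenue is
    excluded by definition. > 1.10 is the B2B SaaS threshold cited above."""
    return (start_mrr + expansion - contraction - churned) / start_mrr
```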
---
### B2C / Consumer
**Primary signal:** D30 retention > 20%, with a flat or rising tail at D90.
**Secondary signals:**
- DAU/MAU ratio > 20% (daily habit product: > 40%)
- Session depth (users exploring multiple features, not one-and-done)
- Organic referral rate > 20% of new installs
**PMF finding strategy:**
- Consumer PMF is about habit formation — which behavior do you own in a user's day?
- Find the "aha moment" (the action that predicts retention). Build everything to get users there faster.
- Segment ruthlessly — consumer PMF is often strong in one demographic, weak in others.
**Common false signals:**
- High D1 retention from email campaigns that re-engage dormant users
- Good NPS from vocal users who are power users, not typical users
- Media buzz driving installs from wrong audience
---
### Marketplace
**Primary signal:** Successful transaction rate and repeat buyer rate.
**Secondary signals:**
- Supply-side retention (sellers/providers coming back)
- Liquidity score: % of demand requests matched within acceptable time
- Referral: both sides sending others
**PMF challenge:** You have two customers (supply and demand). PMF can exist on one side and not the other.
**PMF finding strategy:**
- Start with constrained geography or category — don't try to be national before local works
- Measure GMV per cohort, not just transaction count
- Find the "magic moment" for both buyer and seller. Optimize for both.
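A sketch of the liquidity score named above. What counts as an "acceptable time" depends on the marketplace (minutes for rides, days for hiring), so the unit is a parameter:

```python
def liquidity_score(requests, acceptable_wait):
    """requests: (matched, wait) pairs, one per demand-side request.
    Score = share of requests matched within the acceptable wait; unmatched
    requests count against the score regardless of wait."""
    if not requests:
        return 0.0
    ok = sum(1 for matched, wait in requests if matched and wait <= acceptable_wait)
    return ok / len(requests)
```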
---
### PLG (Product-Led Growth)
**Primary signal:** Free-to-paid conversion rate + paid retention.
**Secondary signals:**
- Time to activation (reaching the "aha moment" in free tier)
- PQL (product-qualified lead) conversion to paid
- Team invites from individual users (virality coefficient)
**PMF finding strategy:**
- The free tier must have genuine value — not a crippled trial
- Track activation milestone (the action that predicts conversion)
- Optimize activation before conversion — conversion optimizations don't work if nobody activates
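The virality coefficient in the signals above reduces to a two-term product:

```python
def viral_coefficient(active_users, invites_sent, invite_conversion):
    """k = average invites per user * conversion rate per invite.
    k > 1 means each user cohort more than replaces itself; sub-1 values
    still compound and lower blended CAC."""
    return (invites_sent / active_users) * invite_conversion
```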
---
## After PMF: The Scaling Trap
Most companies that fail after PMF weren't ready to scale. They scaled the wrong thing.
### The Scaling Trap
You have PMF with segment A. You hire sales and start selling to segment B. Segment B doesn't retain. NPS drops. Engineers chase segment B feature requests. Segment A users feel abandoned.
**This is the most common way early-stage companies die after PMF.**
### What to Do After PMF
**First 90 days after confirming PMF:**
1. Document your best customer profile in extreme detail
2. Build the playbook to replicate the reference customer, not to expand the ICP
3. Hire sales to replicate, not to expand
4. Instrument everything — you need to know what's driving retention for every new cohort
5. Don't launch new features. Remove friction from the path that's already working.
**The expansion question:** Only expand ICP when:
- You can replicate the reference customer at 3x volume with same retention
- CAC is declining (word of mouth in the reference segment)
- You've exhausted density in the reference segment
**Don't expand ICP to save the business.** Expanding ICP when retention is declining is panic, not strategy.
---
## How to Know When PMF Is Slipping
PMF is not a binary state. It can degrade. Watch for:
| Signal | What's Happening | Response |
|--------|-----------------|----------|
| D30 retention declining across cohorts | Product changes or a market shift are eroding value | Run Sean Ellis test immediately. Interview churned users. |
| Sean Ellis score dropping | Users less passionate about the product | Look for an opening feature gap or new competitive pressure. |
| NPS dropping for retained users | Power users seeing a degraded experience | Audit product quality and performance. |
| Organic referral rate declining | Satisfied users less enthusiastic | Check for commoditization; reinvest in the moat. |
| Support tickets shifting from feature requests to bug reports | Technical debt catching up | Invest in engineering quality. |
| Sales cycles lengthening | ICP no longer self-evident. Positioning drift. | Re-run positioning exercise. Sharpen ICP. |
**The PMF quarterly check:**
Run Sean Ellis test every quarter. Track D30 retention by cohort every month. Put both on the CPO dashboard. These are your vital signs.
---
## Quick Reference
| Test | Threshold | Frequency |
|------|-----------|-----------|
| Sean Ellis | > 40% very disappointed | Quarterly |
| D30 retention (B2B SaaS) | > 40% | Monthly (by cohort) |
| D30 retention (consumer) | > 20% | Monthly (by cohort) |
| D90 retention (B2B SaaS) | > 45% | Monthly (by cohort) |
| Organic signup % | > 20% | Monthly |
| NPS (retained users) | > 40 | Quarterly |
| DAU/MAU (if daily product) | > 20% | Weekly |
Use `scripts/pmf_scorer.py` to run all dimensions together with weighted scoring.
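A hypothetical sketch of weighted scoring in the spirit of `pmf_scorer.py` (the actual script's dimensions, thresholds, and weights may differ; the values below are taken from the tables above for illustration):

```python
def weighted_pmf_score(metrics, thresholds, weights):
    """Each dimension scores value/threshold capped at 1.0, then contributes
    weight * score; the result is normalized to 0..1."""
    total_weight = sum(weights.values())
    score = sum(w * min(metrics[d] / thresholds[d], 1.0) for d, w in weights.items())
    return score / total_weight

# Illustrative inputs (B2B SaaS-flavored)
metrics = {"sean_ellis": 0.45, "d30_retention": 0.38, "organic_pct": 0.25, "nps": 42}
thresholds = {"sean_ellis": 0.40, "d30_retention": 0.40, "organic_pct": 0.20, "nps": 40}
weights = {"sean_ellis": 3, "d30_retention": 4, "organic_pct": 2, "nps": 1}
```

Weighting retention highest mirrors the playbook's stance: the other signals are misleading without it.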

# Product Org Design Reference
How to structure, hire, and run product organizations at different stages. No generic advice — stage-specific, role-specific, and honest about what breaks.
---
## 1. Team Topologies for Product Orgs
Matthew Skelton and Manuel Pais defined four team types. Here's how they map to product organizations.
### Four Team Types
#### Stream-Aligned Teams
Own a continuous flow of customer-facing work. They take problems all the way from discovery to delivery to measurement.
**Product org equivalent:** Feature teams, growth teams, customer journey teams.
**Characteristics:**
- Long-lived (not project teams)
- Full-stack: PM + Designer + 3-7 Engineers + QA
- Can deploy independently without asking another team
- Own their backlog, their metrics, their outcomes
**Health signals:**
- Ships without waiting on other teams more than 20% of the time
- Can define their own north star and trace it to a company metric
- PMs spend > 50% of time in discovery, not coordination
**Warning signs:**
- Every sprint has "dependencies" blocking progress
- Team has PMs but engineers don't know the customer problems
- Roadmap is handed to them, not co-created
#### Platform Teams
Build and maintain shared capabilities so stream-aligned teams don't reinvent them.
**Product org equivalent:** Platform product team, internal tools, shared infrastructure.
**Characteristics:**
- Serve internal customers (other teams), not end users directly
- Measure success by stream-aligned team velocity, not feature count
- Self-service is the goal — stream teams should be unblocked without filing tickets
**Health signals:**
- Stream-aligned teams can do 80% of their work without filing a ticket to platform
- Platform has a public API and documentation, not just engineers who know how it works
- Platform team metrics include "number of teams using X without assistance"
**Warning signs:**
- Platform team has a 6-week SLA for new features
- Stream teams fork the platform to avoid waiting
- Platform team's backlog is driven by platform's own ideas, not stream team pain
**The platform product manager role:**
Platform PMs are not feature PMs. They manage internal customers. Key skills:
- Developer experience empathy (they're building for engineers)
- API and infrastructure intuition (you can't PM what you don't understand)
- Saying "no" gracefully when requests are misuses of the platform
#### Enabling Teams
Temporarily help other teams upskill in a domain. Not permanent.
**Product org equivalent:** UX research team, data literacy evangelism, accessibility experts.
**Duration:** Time-boxed. 3-6 months. Then they leave and the skill stays.
**Failure mode:** Enabling teams that never leave become coordination bottlenecks.
#### Complicated Subsystem Teams
Deep expertise required. Minimal interaction.
**Product org equivalent:** ML/AI product team, compliance product, payments, internationalization engine.
**Characteristics:**
- Specialists who can't be split across stream-aligned teams
- Interact via well-defined interface, not collaboration
- Have their own PM who understands the domain deeply
---
## 2. Org Models at Each Stage
### Pre-Seed / Seed (1-20 engineers)
**Structure:** Founder/CEO or founder/CTO is the PM. Maybe one hired PM at 15+ engineers.
**Don't build:** Process, specialization, hierarchy.
**Do build:** Direct customer access, fast iteration loops, written learning from every experiment.
**PM role at this stage:**
- Not shipping features. Talking to customers.
- Not writing specs. Running experiments.
- Not managing engineers. Being managed alongside them.
**Hiring mistake:** Hiring a "process PM" who builds Jira templates before you have PMF.
---
### Series A (20-60 engineers)
**Structure:** 2-4 PMs, organized by product area or customer journey.
```
CPO / Head of Product
├── PM — Core Product (the thing customers pay for)
├── PM — Growth / Acquisition (how more customers get there)
└── PM — Platform (as soon as engineering says they need it)
```
**What you add:** One embedded designer. Analytics shared.
**First PM hire criteria:**
- Has shipped something users use, not just wrote a spec
- Comfortable with ambiguity and no process
- Will talk to customers without being asked
- Understands the technical constraints intuitively
**What breaks at Series A:**
- Verbal communication stops working. First thing to document: the roadmap, the north star, who decided what.
- Engineers start asking "why are we building this?" — good. Answer it.
- Customer requests multiply faster than capacity. You need a prioritization framework.
---
### Series B (60-150 engineers)
**Structure:** 4-8 PMs, head of product, first design hire, embedded or dedicated analytics.
```
CPO
├── Head of Product
│ ├── PM — [Team 1] (stream-aligned)
│ ├── PM — [Team 2] (stream-aligned)
│ ├── PM — [Team 3] (stream-aligned)
│ └── PM — Platform (if engineering > 40)
├── Head of Design (or Senior Designer × 2-3)
└── Analytics (shared, or 1 embedded per team)
```
**What you add at Series B:**
- Head of Product (frees CPO from backlog, runs PM team)
- First Head of Design hire (if not already)
- Dedicated growth team (PLG or acquisition)
**What breaks at Series B:**
- PMs start optimizing their own team's metrics instead of company metrics
- Design and engineering don't talk until sprint planning
- Data team is a ticket queue — PMs can't self-serve
**Fix:** OKR alignment across teams. Design in discovery, not in handoff. Analytics tool self-serve access for every PM.
---
### Series C (150-400 engineers)
**Structure:** 8-15 PMs, multiple PM leads / directors, specialized functions.
```
CPO
├── VP / Director of Product
│ ├── PM Lead — [Product Line 1]
│ │ ├── PM
│ │ └── PM
│ ├── PM Lead — [Product Line 2]
│ │ ├── PM
│ │ └── PM
│ └── PM Lead — Platform
├── Head of Design
│ ├── UX Design
│ ├── Product Design
│ └── UX Research
├── Head of Data / Analytics
│ ├── Product Analytics
│ └── Data Science
└── Head of Product Operations
```
**What you add at Series C:**
- PM leads / directors (PMs managing PMs)
- Dedicated UX research
- Head of Product Operations (roadmap tooling, PM hiring, analytics standards, product community)
- Possible Chief of Staff (Product)
**What breaks at Series C:**
- Coordination overhead becomes the primary job
- PMs become project managers managing handoffs instead of product decisions
- Consistency across teams: 5 different ways to write a spec, 5 different analytics setups
- CPO loses touch with customers
**Fix:** Product principles (written, opinionated, used in reviews). Embedded researchers. Regular CPO customer calls (monthly minimum). Product ops to solve consistency without bureaucracy.
---
## 3. PM:Engineer Ratios
### By Stage
| Stage | Engineers | PMs | Ratio | Notes |
|-------|-----------|-----|-------|-------|
| Seed | 5 | 0-1 | 1:5 | Founder PM common |
| Series A | 20-40 | 2-4 | 1:8 | First real PMs |
| Series B | 60-100 | 5-8 | 1:10 | Platform PM emerges |
| Series C | 150-250 | 12-18 | 1:12 | PM leads required |
| Growth | 300+ | 20+ | 1:12-15 | Specialization high |
### By Team Type
| Team Type | Ratio | Rationale |
|-----------|-------|-----------|
| Stream-aligned (feature) | 1:6-8 | High discovery work, many stakeholders |
| Growth / PLG | 1:8-10 | High experimentation, more autonomy per engineer |
| Platform | 1:10-15 | Lower ambiguity, more self-directed engineers |
| Complicated subsystem (ML, payments) | 1:12-20 | Technical direction from engineers, PM is translator |
**The ratio trap:** These are guidelines, not targets. A great PM in a bad org with 12 engineers accomplishes less than a great PM with 8 in a healthy org. Fix the org before optimizing the ratio.
---
## 4. When to Hire Key Roles
### Head of Design
**Not yet signal:**
- Fewer than 2 full-time designers
- Product is primarily technical (API-first, developer tool with no GUI)
- Design is consistently described as "not a blocker"
**Hire now signal:**
- Design has become a coordination problem (who reviews what? which system? what's the standard?)
- You have 3+ designers and they're inconsistent
- CPO is spending significant time on design decisions
- Customers cite UX as a blocker to adoption
**What this person does:**
- Builds and maintains the design system
- Runs UX research as a function, not one-off projects
- Hires and grows the design team
- Keeps designers from becoming pixel-pushers and keeps them in discovery
**Wrong hire:** A senior IC who can't build process and isn't excited about it.
---
### Head of Data / Analytics
**Not yet signal:**
- < 5 PMs, data team shared with engineering
- You don't have product analytics instrumentation yet (worry about that first)
- Product metrics are reviewed monthly and nobody acts on them
**Hire now signal:**
- PMs are filing tickets for basic metric questions (sign that data team is a bottleneck)
- Multiple products with different tracking setups — no common definitions
- You want to run experiments but don't have infrastructure
- Leadership is making product decisions without data (not by choice, but for lack of access)
**What this person does:**
- Defines the event taxonomy and enforces it
- Builds self-serve analytics capability for PMs
- Runs A/B testing infrastructure
- Partners with PMs on experiment design (before launch, not after)
**Wrong hire:** A pure data scientist who can't build product analytics infrastructure and doesn't want to.
---
### Head of Product Operations
**Hire when you have:**
- 8+ PMs with inconsistent processes
- CPO spending > 30% of time on internal coordination
- No standard for roadmap tools, prioritization, or PM onboarding
- Product team can't answer "what are all teams working on this quarter?" without a 2-hour meeting
**What this person does:**
- PM onboarding and development program
- Roadmap and tooling standards (Jira, Linear, Notion — pick one and enforce it)
- Data pipelines from product to leadership (weekly metrics, OKR tracking)
- PM hiring and interview process
- Voice of product org in cross-functional coordination
**What this person does NOT do:**
- Drive product strategy (that's the CPO)
- Manage PMs (that's the Head of Product or PM leads)
- Own analytics (that's Head of Data)
---
## 5. The Product Trio
Every product team should have three roles working together from day one of discovery:
```
Product Manager → What to build and why
Product Designer → How users experience it
Tech Lead / Engineer → How to build it sustainably
```
### How the Trio Actually Works
**Discovery (weeks 1-2 of any new initiative):**
- All three in user interviews together
- All three reviewing competitive products
- All three in problem framing sessions
- Output: Opportunity, not solution
**Ideation (days):**
- All three generating solutions
- Designer prototypes 2-3 options
- Engineer provides feasibility gut check on each
- PM synthesizes against strategy
- Output: Prototype for testing
**Testing (days):**
- Designer and PM run tests (engineer optional but encouraged)
- Tests with 5-8 real customers
- All three review findings together
- Output: Decision: build, iterate, or kill
**Delivery (sprints):**
- PM writes acceptance criteria (what done looks like from user perspective)
- Engineer owns implementation
- Designer owns QA for experience quality
- All three do final review before release
### Trio Anti-Patterns
| Anti-Pattern | What It Looks Like | Why It Fails |
|-------------|-------------------|--------------|
| **PM → Designer → Engineer** | Waterfall disguised as agile | Late discovery of infeasibility and poor UX |
| **Engineer-led** | Engineers propose solutions, PM and designer polish | Builds technically correct thing nobody wants |
| **PM-led dictation** | PM writes detailed spec, team executes | Team has no context, can't make good trade-offs |
| **Designer detached** | Designers design in isolation, present to engineers | Beautiful mockup that's 8x harder to build than alternative |
| **No research** | Trio invents problems and solutions in a conference room | Building for themselves |
---
## 6. Remote vs. Co-located Product Teams
The debate is mostly settled. Here's what actually matters:
### What Changes with Remote
| Activity | Co-located | Remote | Fix |
|----------|-----------|--------|-----|
| Discovery sync | Organic, hallway | Requires scheduling | Daily async standups + weekly sync |
| Whiteboarding | Easy | Friction | Figma, Miro — async-first artifacts |
| Design review | Walk over | Calendar invite | Record reviews; written decisions |
| Relationship building | Osmotic | Deliberate | Regular 1:1s, team rituals, offsites |
| Onboarding | Shadow in person | Document-heavy | Written playbooks + buddy system |
| Difficult conversations | Easier in person | Harder | Default to video, not Slack |
### The Async-First Product Team
Works well remote IF:
- Decisions are written (Notion, Confluence, not Slack threads)
- Roadmaps are accessible to everyone without a meeting
- Product reviews are recorded and linked
- Discovery artifacts are shared before the meeting, discussed in the meeting
- 1:1s are weekly and actual (not "let's skip this week")
**What doesn't survive async:**
- Ambiguous ownership
- Verbal agreements (write it down or it didn't happen)
- Teams where "PM wrote the spec" is the only documentation
### Remote Product Org Practices
**Weekly Cadence:**
```
Monday: Async kickoff — each team posts week's focus + blockers
Tuesday: Product trio sync (30 min, per team)
Wednesday: CPO / Head of Product 1:1s
Thursday: Cross-team PM sync (30 min, rotating topics)
Friday: Async retrospective notes + week summary
```
**Monthly:**
- Full product org sync (all PMs, designers, heads)
- CPO product review (each team presents one initiative)
- Metrics review (company + team level)
**Quarterly:**
- In-person or virtual offsite
- Strategy and OKR setting
- Individual growth conversations
---
## Quick Reference
| Stage | Structure | First Hire Priority |
|-------|-----------|-------------------|
| Seed | Founder PM | Generalist PM with customer instincts |
| Series A | 2-3 PMs, flat | First real PM, owns a product area |
| Series B | Head of Product, 4-8 PMs | Head of Design |
| Series C | Org layers, PM leads | Head of Data + Product Ops |
| Growth | Full specialization | Chief of Staff (Product) |
**PM:Engineer ratio target by stage:**
Seed 1:5 → Series A 1:8 → Series B 1:10 → Series C 1:12 → Growth 1:15
**Three things that fix most product org problems:**
1. Stream-aligned teams with full-stack ownership (PM + Design + Eng)
2. OKRs that cascade from company to team to individual
3. Product trio in discovery, not just delivery

# Product Strategy Reference
Frameworks for product vision, competitive positioning, portfolio management, and board reporting. No theory — only what CPOs actually use.
---
## 1. Vision Frameworks
### Jobs to Be Done (JTBD)
JTBD is not a feature framework. It's a way to understand *why* customers hire your product and under what circumstances.
**The core insight:** People don't want your product. They want to make progress in their lives, and they hire your product to help. When you understand the job, you understand competition differently.
#### Conducting JTBD Interviews
**Who to interview:** Recent buyers and recent churners. Not power users — they're already converted.
**The interview script (condensed):**
```
1. "Walk me through the last time you [started using / stopped using] this product."
2. "What were you doing the day before you decided?"
3. "What else did you consider?"
4. "What almost stopped you from doing it?"
5. "Now that you're using it, what does your day look like differently?"
```
**What you're extracting:**
- **Functional job:** What task are they accomplishing?
- **Emotional job:** How do they feel during and after?
- **Social job:** How are they perceived?
- **Timeline:** What triggered the switch? (the "push" from old solution + "pull" toward new one)
- **Anxieties:** What almost prevented adoption?
- **Competing solutions:** What are they comparing you to, including "do nothing"?
#### JTBD Output: The Job Story
Format better than "user story" for strategic decisions:
```
When [situation],
I want to [motivation/job],
So I can [expected outcome].
```
**Example (healthcare scheduling):**
```
When I'm trying to coordinate my parent's care from another city,
I want to see their upcoming appointments and have someone confirm changes,
So I can feel confident they won't miss critical treatments.
```
This is a different product than "schedule management software." The strategic implications — care coordination, family access, confirmation workflows — flow from the job.
#### JTBD → Product Strategy
| Job Insight | Strategic Implication |
|-------------|----------------------|
| Job is episodic (quarterly) | Engagement model must reach them before they need it |
| Job is habitual (daily) | DAU/MAU matters; build for habit formation |
| Job has high stakes | Trust and reliability > features; invest in onboarding + support |
| Job is social | Network effects possible; virality is structural, not a campaign |
| Job is delegated (done for someone else) | Two users: the buyer and the beneficiary. Design for both. |
---
### Category Design
If you're fighting for share in an existing category, you're playing defense on someone else's field.
**Category design premise:** Companies that define the category typically capture 76% of the market cap of that category. Name the category, own it.
#### The Category Design Process
**Step 1: Name the problem, not the solution.**
```
Wrong: "We make AI-powered customer support software."
Right: "The support team doesn't need more tickets. They need fewer problems."
```
**Step 2: Define the enemy.**
The enemy is the *old way* of solving the problem, not a competitor.
- Salesforce's enemy: spreadsheets and disconnected tools (not Siebel)
- Slack's enemy: email overload (not HipChat)
- Your enemy: ___________
**Step 3: Create the category name.**
It should be obvious in hindsight, not predictable in advance. Test it:
- Does it describe the problem, not the solution?
- Is it 2-3 words?
- Could a journalist use it without quoting you?
**Step 4: Missionary selling, not mercenary selling.**
Category kings educate the market before they sell to it. Content, thought leadership, community, and free tools all matter here — not as marketing tactics but as category creation.
**Step 5: Be the reference customer.**
Get the logos that define the category. The companies others look to. When others adopt, they don't want "a tool" — they want "what [Reference Customer] uses."
---
## 2. Competitive Moats
A moat is a structural advantage that compounds over time. Features are not moats. Pricing is not a moat. A moat is why, even if a competitor perfectly copies your product today, you still win.
### Moat Type 1: Network Effects
The product becomes more valuable as more users join. Three subtypes:
**Direct network effects:** Each user makes the product better for all other users (WhatsApp, Slack).
**Indirect network effects:** Each user on one side makes the product better for the other side (Uber drivers + riders, App Store developers + users).
**Data network effects:** More users → more data → better product → more users.
#### Network Effect Diagnostic
```
Question 1: Does adding user N make the product better for user N-1?
No → You don't have direct network effects
Yes → Map exactly how and how much
Question 2: Does adding user N make the product better for users on the OTHER side?
No → You don't have indirect network effects
Yes → Identify which side is the constraint (supply or demand)
Question 3: Does using the product generate data that improves the product?
No → You don't have data network effects
Yes → What is the data flywheel? Where does it compound?
```
**Building network effects intentionally:**
- Most products accidentally have weak network effects
- Design for network effects from Day 1: sharing, notifications, collaboration, integrations
- Measure network effect strength: "What % of new users were referred by existing users?"
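The referred-share metric from the last bullet is simple to compute once signup sources are tagged. A minimal sketch, assuming a hypothetical `referred` flag on each new-user record:

```python
def referred_share(new_users: list[dict]) -> float:
    """Fraction of new users whose signup was referred by an existing user.

    Each record needs a boolean "referred" field (hypothetical schema —
    adapt to however your analytics tags signup sources).
    """
    if not new_users:
        return 0.0
    return sum(1 for u in new_users if u.get("referred")) / len(new_users)

# Example: 3 of 8 new signups came via existing-user referrals
cohort = [{"referred": True}] * 3 + [{"referred": False}] * 5
print(f"{referred_share(cohort):.0%}")  # → 38%
```

Track this quarterly: a rising referred share is one of the few direct measurements of network-effect strength.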
### Moat Type 2: Switching Costs
The cost — time, money, risk — of leaving your product. The highest switching costs are:
| Switching Cost Type | Example | CPO Action |
|--------------------|---------|-----------|
| **Data lock-in** | Years of history, reports, trained models | Make data the experience, not just the storage |
| **Workflow integration** | 23 integrations, custom automations | Every integration is a switching cost. Build them. |
| **Team adoption** | Entire team trained on your tool | Multi-seat training investments pay switching cost dividends |
| **Contractual** | Annual contracts, SLAs | Long contracts are not a moat — customers resent them |
| **Process embedding** | Your product IS their process | Aim here. This is the deepest moat. |
**Warning:** Switching costs from data lock-in without value lock-in breed resentment, not loyalty. Customers who stay because they're trapped will leave the moment a migration tool appears.
### Moat Type 3: Data Advantages
Having data others can't easily get. Three subtypes:
**Proprietary data:** Data only you have access to (exclusive partnerships, sensor networks, unique user behavior at scale).
**Data scale:** Same type of data but at 10x the volume of competitors. Scale compounds model accuracy.
**Data variety:** Unique combination of data types. Not just usage data — usage + outcome data + external context.
**Testing your data moat:**
```
1. What data do we have that competitors don't?
2. At what volume does our data create a meaningfully better product?
3. Are we at that volume? If not, when?
4. Could a competitor buy or partner their way to equivalent data?
5. Is our data improving the product automatically, or only when we analyze it manually?
```
### Moat Type 4: Economies of Scale
Unit economics improve as you scale. Infrastructure costs drop per unit. Brand recognition lowers CAC. Negotiating power increases.
This is a real moat but the weakest one for product strategy — it doesn't keep faster-moving competitors from attacking while you're small.
### Moat Scorecard
Score each moat type 0-3 for your current product:
```
0 = Not present
1 = Weak / easily replicated
2 = Meaningful / takes 12-18 months to replicate
3 = Strong / structural advantage
Network effects (direct): __/3
Network effects (indirect): __/3
Network effects (data): __/3
Switching costs (data): __/3
Switching costs (workflow): __/3
Switching costs (team): __/3
Data advantages (exclusive): __/3
Data advantages (scale): __/3
Economies of scale: __/3
Total: __/27
< 9: No meaningful moat. Compete on execution speed.
9-15: Early moat. Identify and reinforce 1-2 strongest types.
16-21: Real moat. Invest to compound it.
> 21: Strong moat. Defend and expand.
```
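The scorecard above tallies mechanically, so a short sketch can validate the inputs and map the total to its band (field names are illustrative):

```python
def moat_assessment(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the nine 0-3 moat ratings and map the total to the bands above."""
    for name, s in scores.items():
        if not 0 <= s <= 3:
            raise ValueError(f"{name}: score must be 0-3, got {s}")
    total = sum(scores.values())
    if total < 9:
        band = "No meaningful moat. Compete on execution speed."
    elif total <= 15:
        band = "Early moat. Identify and reinforce 1-2 strongest types."
    elif total <= 21:
        band = "Real moat. Invest to compound it."
    else:
        band = "Strong moat. Defend and expand."
    return total, band

total, band = moat_assessment({
    "network_direct": 1, "network_indirect": 0, "network_data": 2,
    "switching_data": 2, "switching_workflow": 2, "switching_team": 1,
    "data_exclusive": 1, "data_scale": 2, "economies_of_scale": 1,
})
print(total, band)  # total = 12 falls in the "Early moat" band
```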
---
## 3. Product Positioning
Positioning is not messaging. Positioning is the choice of: *Who is this for, what does it replace, and on what dimension do we win?*
### The Positioning Canvas (after April Dunford)
```
1. Competitive Alternatives
What would customers do if your product didn't exist?
(This is your real competition, not just your vendor category)
2. Unique Attributes
What capabilities do you have that alternatives lack?
(Features, but described neutrally, not as marketing)
3. Value (Outcomes)
What does each unique attribute enable for customers?
(Bridge from feature → outcome, not feature → feature)
4. Customer Who Cares
Who values those outcomes enough to pay for them?
(The customer segment for whom this value is highest)
5. Market Category
Where does the customer put you when comparing options?
(Frame the category to win, not to be fair)
6. Relevant Trends
What's changing in the world that makes this more valuable now?
(Why this moment? Urgency enabler.)
```
### Positioning Against Three Competitors
**Positioning vs. direct competitor:**
Identify one dimension where you structurally win. "Better" is not a position.
- Win on depth: more powerful in one scenario
- Win on simplicity: fewer decisions, fewer steps
- Win on integration: works with what they already use
- Win on price/value: same outcome, lower cost or risk
**Positioning vs. indirect alternative:**
The customer's current solution (spreadsheet, manual process, point solution).
- Make the cost of the current approach obvious (what are they giving up per week?)
- Make the switch simple (migration, onboarding, no data loss)
- Find the "aha moment" fast (value before they revert)
**Positioning vs. doing nothing:**
The hardest competitor. Status quo has zero switching cost.
- Quantify the cost of inaction (time, risk, revenue, competitive risk)
- Find the trigger event that makes inaction intolerable
- Show the risk is higher than the switch cost
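Quantifying the cost of inaction is mostly arithmetic. A minimal sketch with illustrative numbers (a labor-only model — risk and revenue effects would be estimated separately):

```python
def cost_of_inaction(hours_lost_per_week: float, loaded_hourly_cost: float,
                     affected_people: int, weeks: int = 52) -> float:
    """Annualized labor cost of staying with the status quo.

    Illustrative model only: captures time lost, not risk or revenue impact.
    """
    return hours_lost_per_week * loaded_hourly_cost * affected_people * weeks

# 4 hours/week lost per person, $75/hr loaded cost, 12-person team
print(f"${cost_of_inaction(4, 75, 12):,.0f}/year")  # → $187,200/year
```

A number like this, anchored to the buyer's own inputs, is what makes inaction feel expensive.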
### Positioning Failure Modes
| Failure | Description | Fix |
|---------|-------------|-----|
| **For everyone** | No segment. "Any company that needs X." | Name the best-fit customer. |
| **Feature positioning** | "The only tool with [feature X]" | Features are table stakes. Lead with outcome. |
| **Vague differentiation** | "Easier, faster, better" | Measurable, specific, or don't say it. |
| **Category misfit** | In a category where you can't win | Either own the category or name a new one. |
| **Lagging positioning** | Positioned for who you were, not who you are | Reposition every 18-24 months or after major product change |
---
## 4. Portfolio Management
### Applying BCG Matrix to Product Lines
BCG matrix was designed for business units. Applied to product lines:
**Inputs:**
- Market growth rate (industry growth, not your growth)
- Relative market share (your share vs. largest competitor)
- Revenue contribution (absolute)
- Investment level (engineering + sales + marketing per product)
**Calculation:**
```
Market share ratio = Your market share / Largest competitor's market share
Growth rate = Market CAGR (next 3 years estimate)
Stars: share ratio > 1.0, growth > 10%
Cash Cows: share ratio > 1.0, growth < 10%
Question Marks: share ratio < 1.0, growth > 10%
Dogs: share ratio < 1.0, growth < 10%
```
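The quadrant rules above reduce to a small classifier (thresholds: share ratio 1.0, market growth 10%):

```python
def bcg_quadrant(your_share: float, competitor_share: float,
                 market_growth_pct: float) -> str:
    """Classify a product line using the share-ratio and growth cutoffs above."""
    ratio = your_share / competitor_share  # relative market share
    if ratio > 1.0:
        return "Star" if market_growth_pct > 10 else "Cash Cow"
    return "Question Mark" if market_growth_pct > 10 else "Dog"

# 18% share vs. a 12% leader in a 22%-growth market
print(bcg_quadrant(18.0, 12.0, 22.0))  # → Star
# 3.5% share vs. a 24% leader in a 35%-growth market
print(bcg_quadrant(3.5, 24.0, 35.0))   # → Question Mark
```

The real work is in the inputs — honest market-share and growth estimates — not the classification itself.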
### Portfolio Allocation Rules
**Star products:**
- Invest at or above market growth rate
- Goal: maintain share leadership as market grows
- Don't extract cash — reinvest
- Metrics: market share trend, NPS, retention, feature velocity
**Cash Cow products:**
- Minimum investment to maintain market position
- Goal: maximize free cash flow
- Resist the urge to innovate — incremental improvements only
- Metrics: gross margin, churn rate, support cost per customer
**Question Mark products:**
- Binary decision: invest to win or exit
- "Maintain" is not a strategy for question marks — you lose share every quarter you're neutral
- Set a deadline (2 quarters) and a threshold for investment decision
- Metrics: share gain rate, customer acquisition efficiency
**Dog products:**
- Decision: sell, sunset, or bundle
- Never "fix" a dog with more investment
- Timeline to sunset: 6-12 months, migration plan for existing customers
- Metrics: customer migration rate, revenue retained
### Portfolio Review Template
Run quarterly. One slide per product.
```
Product: [Name]
Current Quadrant: [Star/Cash Cow/Question Mark/Dog]
Revenue this quarter: $___
Revenue growth QoQ: ___%
Market share estimate: ___%
Investment level (% of eng capacity): ___%
Investment posture: [Invest / Maintain / Kill]
Key metric: [Name] → [Current value] → [QoQ trend]
Top risk: [One thing that could change this assessment]
Decision required: [Yes/No] | [What decision?]
```
### The Honest Portfolio Conversation
Questions CPOs avoid but boards ask:
- "Which product would we kill if we had to? What's stopping us?"
- "Are we funding dogs because the team is attached or because there's a real plan?"
- "What would our margins look like if we stopped investing in the bottom 2 products?"
- "What's the dependency between our products? Are we a platform or a bundle of unrelated tools?"
---
## 5. Board-Level Product Reporting
### What Good Looks Like
Board product updates fail in three ways:
1. Too much roadmap detail (feature list masquerading as strategy)
2. No trend context (showing a number without showing if it's getting better or worse)
3. No risks (all good news = no credibility)
### The 5-Slide Board Product Update
**Slide 1: North Star Metric**
```
Title: Product Health — [Quarter]
[Chart: North star metric over last 12 months, quarterly cohorts]
This quarter: [Value] | Prior quarter: [Value] | YoY: [Value]
Target: [Value] | Status: On track / At risk / Behind
Drivers (2-3 bullets):
• What's driving improvement: ___
• What's dragging: ___
• What we're doing about the drag: ___
```
**Slide 2: Retention and PMF**
```
Title: Product-Market Fit Evidence
[Chart: D30 retention by cohort, last 6 cohorts]
[Callout: Sean Ellis score = XX% (target: > 40%)]
PMF status: Achieved / Approaching / Not yet
Best segment: [Describe — where retention is strongest]
Weakest segment: [Describe — and what we're doing about it]
```
**Slide 3: Portfolio Status**
```
Title: Portfolio — Invest / Maintain / Kill
| Product | Quadrant | Revenue | Growth | Posture | Risk |
|---------|---------|---------|--------|---------|------|
| [A] | Star | $___ | +XX% | Invest | ___ |
| [B] | Cash Cow| $___ | +X% | Maintain| ___ |
| [C] | Dog | $___ | -X% | Kill Q3 | ___ |
Changes since last quarter: ___
Decisions needed from board: ___
```
**Slide 4: Strategic Bets**
```
Title: Bets This Half — [H1/H2]
Bet 1: [Name]
Hypothesis: If we [do X], [segment Y] will [do Z]
Evidence so far: [Data]
Confidence: [Low / Medium / High]
Decision point: [When do we know?] [What will we measure?]
Bet 2: [Name]
[Same structure]
```
**Slide 5: Top Risks**
```
Title: Product Risks — [Quarter]
Risk 1: [Name]
What it is: ___
Probability: [Low/Med/High]
Impact if realized: ___
Mitigation: ___
Risk 2: [Name]
[Same structure]
Risk 3: [Name]
[Same structure]
```
### Delivering in the Board Meeting
- Never read the slide
- Lead with the conclusion, not the data
- Prepare for "what if that assumption is wrong?" for every bet
- When something underperformed: say it, own it, explain what changed
- Never present a number you can't explain 3 levels deep
**Example of bad delivery:**
"Our north star is up 15% QoQ, which is great. We're tracking well."
**Example of good delivery:**
"North star is up 15% — ahead of plan. The majority of that is from the enterprise cohort activated in October, driven by the workflow automation feature we shipped in September. The consumer segment is flat, which is a concern. We're running three experiments this quarter to diagnose whether that's an acquisition problem or an activation problem — I'll have an answer for next quarter."
---
## Quick Reference: Framework Summary
| Need | Framework |
|------|----------|
| Why do customers use us? | Jobs to Be Done |
| How do we define our market? | Category Design |
| What's our structural advantage? | Moat Scorecard |
| How do we position? | April Dunford Positioning Canvas |
| Which products to fund? | BCG Matrix + Invest/Maintain/Kill |
| How to report to the board? | 5-Slide Board Update |

@@ -0,0 +1,600 @@
#!/usr/bin/env python3
"""
PMF Scorer — Multi-dimensional Product-Market Fit analysis.
Scores PMF across four dimensions:
- Retention (40%): D30 and D90 cohort retention
- Engagement (25%): DAU/MAU, session depth, key action rate
- Satisfaction (20%): Sean Ellis score, NPS
- Growth (15%): Organic signup rate, referral rate
Usage:
python pmf_scorer.py # Run with built-in sample data
python pmf_scorer.py --input data.json # Run with your data
JSON input format: see sample_data() function below.
"""
import json
import sys
import argparse
from typing import Optional
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
def sample_data() -> dict:
"""
Sample input data. Replace with your own values.
All fields are optional — missing fields score 0 for that sub-metric
and a note is added to recommendations.
"""
return {
"product_name": "Acme SaaS",
"business_model": "b2b_saas", # b2b_saas | consumer | marketplace | plg
# Retention: D30 and D90 as decimals (e.g. 0.42 = 42%)
# Provide multiple cohorts if available. Most recent first.
"retention": {
"d30_cohorts": [0.38, 0.41, 0.44, 0.43], # newest → oldest
"d90_cohorts": [0.28, 0.30, 0.31],
"curve_flattening": True, # Does the curve flatten (vs. continuing to drop)?
},
# Engagement
"engagement": {
"dau_mau_ratio": 0.24, # Daily active / Monthly active (decimal)
"avg_sessions_per_week": 3.2, # Per active user
"key_action_rate": 0.55, # % of users who performed core value action in last 30d
"session_depth_score": 0.6, # 0-1: 0 = one page, 1 = full feature exploration
},
# Satisfaction
"satisfaction": {
"sean_ellis_very_disappointed": 0.38, # Fraction (e.g. 0.38 = 38%)
"sean_ellis_sample_size": 87, # Raw response count
"nps_score": 34, # -100 to 100
"nps_sample_size": 210,
},
# Growth
"growth": {
"organic_signup_pct": 0.27, # % of new signups from organic/referral/WOM
"referral_rate": 0.18, # % of active users who referred someone last 90d
"mom_growth_rate": 0.08, # Month-over-month new user growth (decimal)
},
}
# ---------------------------------------------------------------------------
# Thresholds by business model
# ---------------------------------------------------------------------------
THRESHOLDS = {
"b2b_saas": {
"d30_pmf": 0.40, "d30_strong": 0.60,
"d90_pmf": 0.25, "d90_strong": 0.45,
"dau_mau_pmf": 0.15, "dau_mau_strong": 0.35,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 30, "nps_strong": 50,
},
"consumer": {
"d30_pmf": 0.20, "d30_strong": 0.35,
"d90_pmf": 0.10, "d90_strong": 0.20,
"dau_mau_pmf": 0.20, "dau_mau_strong": 0.40,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 20, "nps_strong": 45,
},
"marketplace": {
"d30_pmf": 0.30, "d30_strong": 0.50,
"d90_pmf": 0.20, "d90_strong": 0.35,
"dau_mau_pmf": 0.15, "dau_mau_strong": 0.30,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 25, "nps_strong": 45,
},
"plg": {
"d30_pmf": 0.25, "d30_strong": 0.45,
"d90_pmf": 0.15, "d90_strong": 0.30,
"dau_mau_pmf": 0.20, "dau_mau_strong": 0.40,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 30, "nps_strong": 50,
},
}
# Weights for the four dimensions (must sum to 1.0)
DIMENSION_WEIGHTS = {
"retention": 0.40,
"engagement": 0.25,
"satisfaction": 0.20,
"growth": 0.15,
}
# ---------------------------------------------------------------------------
# Scoring helpers
# ---------------------------------------------------------------------------
def clamp(value: float, lo: float = 0.0, hi: float = 1.0) -> float:
return max(lo, min(hi, value))
def score_between(value: Optional[float], lo: float, hi: float) -> float:
"""Linear interpolation: lo → 0.0, hi → 1.0, beyond hi → 1.0."""
if value is None:
return 0.0
if value <= lo:
return 0.0
if value >= hi:
return 1.0
return (value - lo) / (hi - lo)
def cohort_trend(cohorts: list) -> float:
"""
Given cohorts newest-first, return a trend score -1 to +1.
Positive = improving. Negative = degrading.
"""
if len(cohorts) < 2:
return 0.0
# Simple: compare most recent half average vs. older half average
mid = len(cohorts) // 2
recent_avg = sum(cohorts[:mid]) / mid if mid else cohorts[0]
older_avg = sum(cohorts[mid:]) / (len(cohorts) - mid)
if older_avg == 0:
return 0.0
delta = (recent_avg - older_avg) / older_avg
return clamp(delta * 5, -1.0, 1.0) # scale: 20% improvement = score of 1.0
# ---------------------------------------------------------------------------
# Dimension scorers
# ---------------------------------------------------------------------------
def score_retention(data: dict, thresholds: dict) -> tuple[float, list]:
"""Returns (score 0-1, list of findings)."""
r = data.get("retention", {})
findings = []
scores = []
d30 = r.get("d30_cohorts", [])
d90 = r.get("d90_cohorts", [])
if not d30:
findings.append("⚠ No D30 retention data — this is the most important PMF signal. Instrument it immediately.")
return 0.0, findings
latest_d30 = d30[0]
d30_score = score_between(latest_d30, 0, thresholds["d30_strong"])
scores.append(d30_score)
if latest_d30 >= thresholds["d30_strong"]:
findings.append(f"✓ D30 retention {latest_d30:.0%} — strong PMF signal")
elif latest_d30 >= thresholds["d30_pmf"]:
findings.append(f"◑ D30 retention {latest_d30:.0%} — approaching PMF threshold ({thresholds['d30_pmf']:.0%})")
else:
findings.append(f"✗ D30 retention {latest_d30:.0%} — below PMF threshold ({thresholds['d30_pmf']:.0%}). Focus here before anything else.")
# Trend bonus
if len(d30) >= 2:
trend = cohort_trend(d30)
trend_score = (trend + 1) / 2 # normalize to 0-1
scores.append(trend_score * 0.5) # trend is bonus, not primary
if trend > 0.1:
findings.append(f"✓ D30 retention improving across cohorts — strong learning signal")
elif trend < -0.1:
findings.append(f"✗ D30 retention declining across cohorts — product changes may be hurting core users")
if d90:
latest_d90 = d90[0]
d90_score = score_between(latest_d90, 0, thresholds["d90_strong"])
scores.append(d90_score)
if latest_d90 >= thresholds["d90_strong"]:
findings.append(f"✓ D90 retention {latest_d90:.0%} — excellent long-term retention")
elif latest_d90 >= thresholds["d90_pmf"]:
findings.append(f"◑ D90 retention {latest_d90:.0%} — some long-term value demonstrated")
else:
findings.append(f"✗ D90 retention {latest_d90:.0%} — users not finding long-term value")
else:
findings.append("⚠ No D90 data. Add 90-day cohort tracking.")
flattening = r.get("curve_flattening", False)
if flattening:
scores.append(0.8)
findings.append("✓ Retention curve flattening — core retained segment exists")
else:
scores.append(0.2)
findings.append("✗ Retention curve not flattening — no stable retained segment yet")
return clamp(sum(scores) / len(scores)), findings
def score_engagement(data: dict, thresholds: dict) -> tuple[float, list]:
e = data.get("engagement", {})
findings = []
scores = []
dau_mau = e.get("dau_mau_ratio")
if dau_mau is not None:
s = score_between(dau_mau, 0, thresholds["dau_mau_strong"])
scores.append(s)
if dau_mau >= thresholds["dau_mau_strong"]:
findings.append(f"✓ DAU/MAU {dau_mau:.0%} — strong daily habit")
elif dau_mau >= thresholds["dau_mau_pmf"]:
findings.append(f"◑ DAU/MAU {dau_mau:.0%} — moderate engagement")
else:
findings.append(f"✗ DAU/MAU {dau_mau:.0%} — users not building a habit. Find the daily job or accept weekly use pattern.")
else:
findings.append("⚠ No DAU/MAU data.")
sessions = e.get("avg_sessions_per_week")
if sessions is not None:
# 5+ sessions/week = strong, 2 = threshold
s = score_between(sessions, 1, 5)
scores.append(s)
if sessions >= 5:
findings.append(f"{sessions:.1f} sessions/week — high engagement")
elif sessions >= 2:
findings.append(f"{sessions:.1f} sessions/week — moderate")
else:
findings.append(f"{sessions:.1f} sessions/week — very low. Users not returning within week.")
else:
findings.append("⚠ No session frequency data.")
kar = e.get("key_action_rate")
if kar is not None:
s = score_between(kar, 0.10, 0.70)
scores.append(s)
if kar >= 0.60:
findings.append(f"✓ Key action rate {kar:.0%} — core value well-adopted")
elif kar >= 0.30:
findings.append(f"◑ Key action rate {kar:.0%} — improve onboarding to drive this up")
else:
findings.append(f"✗ Key action rate {kar:.0%} — most users not reaching core value. This is an activation problem.")
else:
findings.append("⚠ No key action rate. Define your 'aha moment' action and track it.")
depth = e.get("session_depth_score")
if depth is not None:
scores.append(depth)
if depth >= 0.6:
findings.append(f"✓ Session depth {depth:.1f} — users exploring the product")
else:
findings.append(f"◑ Session depth {depth:.1f} — users sticking to narrow feature set")
if not scores:
return 0.0, findings
return clamp(sum(scores) / len(scores)), findings
def score_satisfaction(data: dict, thresholds: dict) -> tuple[float, list]:
s_data = data.get("satisfaction", {})
findings = []
scores = []
se_score = s_data.get("sean_ellis_very_disappointed")
se_n = s_data.get("sean_ellis_sample_size", 0)
if se_score is not None:
if se_n < 40:
findings.append(f"⚠ Sean Ellis n={se_n} — too small to be reliable. Need 40+ responses.")
scores.append(score_between(se_score, 0, thresholds["sean_ellis_strong"]) * 0.5) # half weight
else:
s = score_between(se_score, 0, thresholds["sean_ellis_strong"])
scores.append(s)
if se_score >= thresholds["sean_ellis_strong"]:
findings.append(f"✓ Sean Ellis {se_score:.0%} 'very disappointed' — strong PMF signal (n={se_n})")
elif se_score >= thresholds["sean_ellis_pmf"]:
findings.append(f"◑ Sean Ellis {se_score:.0%} — at PMF threshold. Push to > {thresholds['sean_ellis_strong']:.0%}.")
else:
findings.append(f"✗ Sean Ellis {se_score:.0%} — below {thresholds['sean_ellis_pmf']:.0%} threshold. Interview 'somewhat disappointed' group.")
else:
findings.append("⚠ No Sean Ellis data. Run a one-question survey to your active users now.")
nps = s_data.get("nps_score")
nps_n = s_data.get("nps_sample_size", 0)
if nps is not None:
if nps_n < 50:
findings.append(f"⚠ NPS n={nps_n} — sample too small. Need 50+ for reliability.")
# NPS ranges from -100 to 100; normalize to 0-1 against threshold
s = score_between(nps, -20, thresholds["nps_strong"])
scores.append(s)
if nps >= thresholds["nps_strong"]:
findings.append(f"✓ NPS {nps} — excellent. Promoters will drive organic growth.")
elif nps >= thresholds["nps_pmf"]:
findings.append(f"◑ NPS {nps} — acceptable. Focus on converting passives to promoters.")
elif nps >= 0:
findings.append(f"✗ NPS {nps} — low. More detractors than promoters is a warning sign.")
else:
findings.append(f"✗ NPS {nps} — negative. Active detractors outnumber promoters.")
else:
findings.append("⚠ No NPS data.")
if not scores:
return 0.0, findings
return clamp(sum(scores) / len(scores)), findings
def score_growth(data: dict, _thresholds: dict) -> tuple[float, list]:
g = data.get("growth", {})
findings = []
scores = []
organic_pct = g.get("organic_signup_pct")
if organic_pct is not None:
s = score_between(organic_pct, 0.05, 0.50)
scores.append(s)
if organic_pct >= 0.30:
findings.append(f"{organic_pct:.0%} organic signups — word of mouth is working")
elif organic_pct >= 0.20:
findings.append(f"{organic_pct:.0%} organic — moderate. Build referral loop deliberately.")
else:
findings.append(f"{organic_pct:.0%} organic — almost all paid. PMF may not be strong enough to generate word of mouth.")
else:
findings.append("⚠ No organic signup tracking. Tag all signup sources now.")
referral = g.get("referral_rate")
if referral is not None:
s = score_between(referral, 0.05, 0.35)
scores.append(s)
if referral >= 0.25:
findings.append(f"{referral:.0%} of active users referring — strong viral signal")
elif referral >= 0.15:
findings.append(f"{referral:.0%} referral rate — building. Add referral incentive or friction removal.")
else:
findings.append(f"{referral:.0%} referral rate — users not recommending. Satisfaction or network effects missing.")
else:
findings.append("⚠ No referral rate data.")
mom = g.get("mom_growth_rate")
if mom is not None:
s = score_between(mom, 0, 0.20)
scores.append(s)
if mom >= 0.15:
findings.append(f"{mom:.0%} MoM growth — strong momentum")
elif mom >= 0.08:
findings.append(f"{mom:.0%} MoM growth — moderate. Identify top acquisition channel and double it.")
else:
            findings.append(f"{mom:.0%} MoM growth — slow. Acquisition is a bottleneck.")
    else:
        findings.append("⚠ No month-over-month growth data.")
if not scores:
return 0.0, findings
return clamp(sum(scores) / len(scores)), findings
# ---------------------------------------------------------------------------
# Overall scoring and recommendations
# ---------------------------------------------------------------------------
def pmf_status(overall: float) -> tuple[str, str]:
"""Returns (status label, description)."""
if overall >= 0.80:
return "STRONG PMF", "Clear product-market fit. Shift focus to scaling acquisition and defending moat."
elif overall >= 0.60:
return "PMF APPROACHING", "Meaningful signals present. Identify and remove the 1-2 friction points blocking retention."
elif overall >= 0.40:
return "EARLY SIGNALS", "Weak PMF. Some users find value. Narrow your ICP and double down on what's working."
elif overall >= 0.20:
return "PRE-PMF", "No clear PMF yet. Don't scale acquisition. Focus entirely on retention experiments."
else:
return "NO SIGNAL", "No PMF signals detected. Revisit the problem hypothesis before investing further in the solution."
def top_recommendations(dim_scores: dict, data: dict) -> list[str]:
"""Prioritized recommendations based on weakest dimensions."""
recs = []
model = data.get("business_model", "b2b_saas")
ranked = sorted(dim_scores.items(), key=lambda x: x[1])
for dim, score in ranked:
if score < 0.40:
if dim == "retention":
recs.append(
"CRITICAL — Retention: Run cohort analysis by segment. Find the cohort with highest D30. "
"Interview 10 of those users. Build for them exclusively until retention flattens."
)
elif dim == "engagement":
recs.append(
"Engagement: Define your 'aha moment' — the one action that predicts long-term retention. "
"Measure time-to-aha. Remove every friction point on that path."
)
elif dim == "satisfaction":
recs.append(
"Satisfaction: Run Sean Ellis survey immediately (need n ≥ 40). "
"Interview every 'somewhat disappointed' user — the gap between 'somewhat' and 'very' is your product gap."
)
elif dim == "growth":
recs.append(
"Growth: Track signup source for every new user. If organic < 20%, "
"you may be papering over weak PMF with paid acquisition. Fix retention first."
)
if not recs:
recs.append(
"All dimensions scoring above threshold. Focus: "
"(1) Defend moat, (2) Expand ICP carefully, (3) Build referral flywheel."
)
if model == "b2b_saas":
recs.append("B2B tip: Track NRR (Net Revenue Retention). PMF in B2B requires expansion, not just retention.")
elif model == "consumer":
recs.append("Consumer tip: Find your D7 'magic moment'. The habit window is small — optimize for it.")
elif model == "plg":
recs.append("PLG tip: Define your PQL (product-qualified lead). The activation event that predicts paid conversion.")
elif model == "marketplace":
recs.append("Marketplace tip: Measure both sides separately. PMF on demand side ≠ PMF on supply side.")
return recs
# ---------------------------------------------------------------------------
# Report renderer
# ---------------------------------------------------------------------------
def render_report(data: dict, dim_scores: dict, dim_findings: dict, overall: float) -> str:
status, description = pmf_status(overall)
recs = top_recommendations(dim_scores, data)
lines = []
lines.append("=" * 60)
lines.append(f" PMF SCORER — {data.get('product_name', 'Product')}")
lines.append(f" Model: {data.get('business_model', 'unknown').upper()}")
lines.append("=" * 60)
lines.append("")
# Overall
bar_len = 40
filled = round(overall * bar_len)
    bar = "█" * filled + "░" * (bar_len - filled)
lines.append(f" Overall PMF Score: {overall:.0%}")
lines.append(f" [{bar}]")
lines.append(f" Status: {status}")
lines.append(f" {description}")
lines.append("")
# Dimension breakdown
lines.append(" DIMENSION SCORES")
lines.append(" " + "-" * 50)
for dim, weight in DIMENSION_WEIGHTS.items():
score = dim_scores.get(dim, 0.0)
dim_bar_len = 20
dim_filled = round(score * dim_bar_len)
        dim_bar = "█" * dim_filled + "░" * (dim_bar_len - dim_filled)
label = dim.capitalize().ljust(12)
lines.append(f" {label} [{dim_bar}] {score:.0%} (weight: {weight:.0%})")
lines.append("")
# Findings per dimension
for dim in ["retention", "engagement", "satisfaction", "growth"]:
findings = dim_findings.get(dim, [])
if findings:
lines.append(f" {dim.upper()} FINDINGS")
for f in findings:
lines.append(f" {f}")
lines.append("")
# Recommendations
lines.append(" PRIORITIZED RECOMMENDATIONS")
lines.append(" " + "-" * 50)
for i, rec in enumerate(recs, 1):
        # Wrap at ~72 characters
words = rec.split()
line = f" {i}. "
for word in words:
if len(line) + len(word) + 1 > 72:
lines.append(line)
line = " " + word + " "
else:
line += word + " "
lines.append(line.rstrip())
lines.append("")
lines.append("=" * 60)
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def run(data: dict) -> dict:
"""
Score PMF from input data dict.
Returns dict with overall score, dimension scores, and findings.
"""
model = data.get("business_model", "b2b_saas")
thresholds = THRESHOLDS.get(model, THRESHOLDS["b2b_saas"])
dim_scores = {}
dim_findings = {}
ret_score, ret_findings = score_retention(data, thresholds)
dim_scores["retention"] = ret_score
dim_findings["retention"] = ret_findings
eng_score, eng_findings = score_engagement(data, thresholds)
dim_scores["engagement"] = eng_score
dim_findings["engagement"] = eng_findings
sat_score, sat_findings = score_satisfaction(data, thresholds)
dim_scores["satisfaction"] = sat_score
dim_findings["satisfaction"] = sat_findings
grow_score, grow_findings = score_growth(data, thresholds)
dim_scores["growth"] = grow_score
dim_findings["growth"] = grow_findings
overall = sum(
dim_scores[dim] * weight
for dim, weight in DIMENSION_WEIGHTS.items()
)
return {
"overall": overall,
"dim_scores": dim_scores,
"dim_findings": dim_findings,
"status": pmf_status(overall)[0],
}
def main():
parser = argparse.ArgumentParser(
description="PMF Scorer — Multi-dimensional Product-Market Fit analysis",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument(
"--input", "-i",
metavar="FILE",
help="JSON file with your product data (default: built-in sample data)",
)
parser.add_argument(
"--json",
action="store_true",
help="Output raw JSON instead of formatted report",
)
args = parser.parse_args()
if args.input:
try:
with open(args.input) as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: file not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: invalid JSON: {e}", file=sys.stderr)
sys.exit(1)
else:
print("No input file provided — running with sample data.\n")
data = sample_data()
result = run(data)
if args.json:
output = {
"product_name": data.get("product_name"),
"business_model": data.get("business_model"),
"overall_score": round(result["overall"], 4),
"overall_pct": f"{result['overall']:.0%}",
"status": result["status"],
"dimensions": {
dim: {
"score": round(result["dim_scores"][dim], 4),
"pct": f"{result['dim_scores'][dim]:.0%}",
"weight": f"{DIMENSION_WEIGHTS[dim]:.0%}",
"findings": result["dim_findings"][dim],
}
for dim in DIMENSION_WEIGHTS
},
}
print(json.dumps(output, indent=2))
else:
print(render_report(data, result["dim_scores"], result["dim_findings"], result["overall"]))
if __name__ == "__main__":
main()

@@ -0,0 +1,547 @@
#!/usr/bin/env python3
"""
Portfolio Analyzer — Product portfolio BCG matrix classification and investment analysis.
For each product, classifies into BCG quadrant (Star, Cash Cow, Question Mark, Dog)
and generates investment recommendations (Invest / Maintain / Kill).
Usage:
python portfolio_analyzer.py # Run with built-in sample data
python portfolio_analyzer.py --input data.json # Run with your data
python portfolio_analyzer.py --json # Output raw JSON
JSON input format: see sample_data() function below.
"""
import json
import sys
import argparse
from typing import Optional
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
def sample_data() -> dict:
"""
Sample portfolio. Replace with real product data.
Fields:
name Product name
revenue_quarterly Current quarter revenue (any consistent currency)
revenue_prev_q Revenue last quarter (for QoQ calculation)
market_growth_pct Annual market growth rate (percent, e.g. 12.5 for 12.5%)
your_market_share Your estimated market share (percent, e.g. 8.0 for 8%)
largest_competitor_share Largest competitor's share (percent)
eng_capacity_pct % of total engineering capacity allocated (0-100)
d30_retention Optional D30 retention rate (decimal, e.g. 0.45)
nps Optional NPS score (-100 to 100)
notes Optional free text notes for the report
"""
return {
"company": "Acme Corp",
"total_engineering_headcount": 45,
"products": [
{
"name": "CorePlatform",
"revenue_quarterly": 480000,
"revenue_prev_q": 430000,
"market_growth_pct": 22.0,
"your_market_share": 18.0,
"largest_competitor_share": 12.0,
"eng_capacity_pct": 35,
"d30_retention": 0.61,
"nps": 52,
"notes": "Our flagship. Leading market share in fast-growing segment.",
},
{
"name": "ReportingModule",
"revenue_quarterly": 290000,
"revenue_prev_q": 285000,
"market_growth_pct": 5.0,
"your_market_share": 22.0,
"largest_competitor_share": 18.0,
"eng_capacity_pct": 25,
"d30_retention": 0.58,
"nps": 38,
"notes": "Mature product, strong margins, slow market.",
},
{
"name": "MobileApp",
"revenue_quarterly": 95000,
"revenue_prev_q": 78000,
"market_growth_pct": 35.0,
"your_market_share": 3.5,
"largest_competitor_share": 24.0,
"eng_capacity_pct": 28,
"d30_retention": 0.31,
"nps": 22,
"notes": "High growth market. We're far behind on share. Bet or exit.",
},
{
"name": "LegacyConnector",
"revenue_quarterly": 62000,
"revenue_prev_q": 68000,
"market_growth_pct": -3.0,
"your_market_share": 8.0,
"largest_competitor_share": 35.0,
"eng_capacity_pct": 12,
"d30_retention": 0.42,
"nps": 14,
"notes": "Declining market. Customers are on long-term contracts.",
},
],
}
# ---------------------------------------------------------------------------
# BCG Classification
# ---------------------------------------------------------------------------
# Growth rate threshold: markets growing faster than this are "high growth"
GROWTH_THRESHOLD_PCT = 10.0
# Market share ratio threshold: ratio > 1.0 means you lead the market
SHARE_RATIO_THRESHOLD = 1.0
def bcg_quadrant(market_growth_pct: float, share_ratio: float) -> str:
high_growth = market_growth_pct >= GROWTH_THRESHOLD_PCT
leading_share = share_ratio >= SHARE_RATIO_THRESHOLD
if high_growth and leading_share:
return "Star"
elif not high_growth and leading_share:
return "Cash Cow"
elif high_growth and not leading_share:
return "Question Mark"
else:
return "Dog"
def quadrant_emoji(quadrant: str) -> str:
return {
"Star": "",
"Cash Cow": "🐄",
"Question Mark": "",
"Dog": "🐕",
}.get(quadrant, "?")
def investment_posture(quadrant: str, qoq_growth: float, retention: Optional[float]) -> str:
"""
Invest / Maintain / Kill recommendation with nuance.
"""
if quadrant == "Star":
return "Invest"
elif quadrant == "Cash Cow":
# If cash cow is declining fast or retention is poor, consider killing
if qoq_growth < -0.10 or (retention is not None and retention < 0.30):
return "Kill"
return "Maintain"
elif quadrant == "Question Mark":
# Fast QoQ growth signals the bet might pay off → Invest
# Flat or slow QoQ with weak retention → Kill
if qoq_growth >= 0.15 and (retention is None or retention >= 0.25):
return "Invest"
elif qoq_growth < 0.05 or (retention is not None and retention < 0.20):
return "Kill"
return "Evaluate" # Needs explicit strategic decision
else: # Dog
if qoq_growth > 0.10 and (retention is None or retention >= 0.35):
return "Evaluate" # Surprising momentum — verify before killing
return "Kill"
def posture_color(posture: str) -> str:
return {
"Invest": "",
"Maintain": "",
"Kill": "",
"Evaluate": "",
}.get(posture, "?")
# ---------------------------------------------------------------------------
# Product analysis
# ---------------------------------------------------------------------------
def analyze_product(p: dict) -> dict:
revenue_q = p.get("revenue_quarterly", 0)
revenue_prev = p.get("revenue_prev_q", revenue_q)
qoq_growth = (revenue_q - revenue_prev) / revenue_prev if revenue_prev else 0.0
your_share = p.get("your_market_share", 0)
competitor_share = p.get("largest_competitor_share", 1)
share_ratio = your_share / competitor_share if competitor_share else 0.0
market_growth = p.get("market_growth_pct", 0)
retention = p.get("d30_retention")
nps = p.get("nps")
eng_pct = p.get("eng_capacity_pct", 0)
quadrant = bcg_quadrant(market_growth, share_ratio)
posture = investment_posture(quadrant, qoq_growth, retention)
# Alignment score: how well does engineering investment match the recommended posture?
# Invest products should have high eng allocation; Kill products should have low.
alignment_score = _compute_alignment(posture, eng_pct)
return {
"name": p.get("name", "Unknown"),
"revenue_quarterly": revenue_q,
"revenue_prev_q": revenue_prev,
"qoq_growth": qoq_growth,
"market_growth_pct": market_growth,
"your_market_share": your_share,
"largest_competitor_share": competitor_share,
"share_ratio": share_ratio,
"eng_capacity_pct": eng_pct,
"d30_retention": retention,
"nps": nps,
"quadrant": quadrant,
"posture": posture,
"alignment_score": alignment_score,
"notes": p.get("notes", ""),
"findings": _product_findings(quadrant, posture, qoq_growth, share_ratio,
market_growth, retention, nps, eng_pct),
}
def _compute_alignment(posture: str, eng_pct: float) -> float:
"""
Returns 0.0-1.0 score. High = engineering allocation matches strategic posture.
"""
targets = {"Invest": 0.35, "Maintain": 0.15, "Kill": 0.05, "Evaluate": 0.20}
target = targets.get(posture, 0.20)
deviation = abs(eng_pct / 100 - target)
return max(0.0, 1.0 - (deviation / 0.35))
def _product_findings(
quadrant: str, posture: str,
qoq_growth: float, share_ratio: float, market_growth: float,
retention: Optional[float], nps: Optional[int], eng_pct: float
) -> list:
findings = []
if quadrant == "Star":
if eng_pct < 30:
findings.append(f"⚠ Star product getting only {eng_pct}% of eng capacity — likely underinvested. Stars need fuel.")
else:
findings.append(f"✓ Star product with {eng_pct}% eng allocation — appropriate investment.")
if share_ratio < 1.5:
findings.append(f"◑ Share ratio {share_ratio:.1f}x — leading but not dominant. Accelerate to widen the gap.")
else:
findings.append(f"✓ Share ratio {share_ratio:.1f}x — strong lead. Defend aggressively.")
elif quadrant == "Cash Cow":
if eng_pct > 25:
findings.append(f"⚠ Cash Cow getting {eng_pct}% of eng — overinvested. Reduce to 10-15% max. Redeploy to Stars.")
else:
findings.append(f"✓ Cash Cow with {eng_pct}% eng — appropriate. Don't innovate, just maintain.")
if qoq_growth < -0.05:
findings.append(f"⚠ Revenue declining {abs(qoq_growth):.0%} QoQ — monitor for transition to Dog.")
else:
findings.append(f"✓ Revenue stable (QoQ: {qoq_growth:+.0%}) — milk this.")
elif quadrant == "Question Mark":
findings.append(f"⚠ Fast market ({market_growth:.0f}% growth) but only {share_ratio:.1f}x relative share.")
findings.append(f" Decision required: Invest to capture share or exit. 'Maintain' loses share every quarter.")
if qoq_growth >= 0.15:
findings.append(f"✓ QoQ growth {qoq_growth:+.0%} — momentum building. Investment may be justified.")
elif qoq_growth < 0.05:
findings.append(f"✗ QoQ growth {qoq_growth:+.0%} — stalled despite hot market. Strong exit signal.")
elif quadrant == "Dog":
findings.append(f"✗ Low share ({share_ratio:.1f}x) in slow/declining market ({market_growth:.0f}% growth).")
if eng_pct > 10:
findings.append(f"✗ Dog consuming {eng_pct}% of eng capacity. Set a sunset date. Migrate customers.")
if qoq_growth > 0:
findings.append(f"◑ Slight QoQ growth ({qoq_growth:+.0%}) — verify whether this is genuine or contract timing.")
if retention is not None:
if retention < 0.30:
findings.append(f"✗ D30 retention {retention:.0%} — users not finding value. Weak unit economics for any posture.")
elif retention >= 0.50:
findings.append(f"✓ D30 retention {retention:.0%} — users find value. Supports investment or stable maintenance.")
if nps is not None:
if nps < 0:
findings.append(f"✗ NPS {nps} — net detractors. Word of mouth is negative. Fix before scaling.")
elif nps >= 40:
findings.append(f"✓ NPS {nps} — strong promoter base. Harness for referrals.")
return findings
# ---------------------------------------------------------------------------
# Portfolio-level analysis
# ---------------------------------------------------------------------------
def analyze_portfolio(data: dict) -> dict:
products = [analyze_product(p) for p in data.get("products", [])]
total_revenue = sum(p["revenue_quarterly"] for p in products)
total_eng = sum(p["eng_capacity_pct"] for p in products)
# Revenue by quadrant
quadrant_revenue = {}
quadrant_eng = {}
for p in products:
q = p["quadrant"]
quadrant_revenue[q] = quadrant_revenue.get(q, 0) + p["revenue_quarterly"]
quadrant_eng[q] = quadrant_eng.get(q, 0) + p["eng_capacity_pct"]
# Portfolio health score
health = _portfolio_health(products, total_revenue, total_eng)
# Portfolio-level findings
portfolio_findings = _portfolio_findings(products, total_revenue, quadrant_revenue, quadrant_eng)
return {
"company": data.get("company", "Unknown"),
"total_engineering_headcount": data.get("total_engineering_headcount"),
"products": products,
"total_revenue_quarterly": total_revenue,
"quadrant_summary": {
q: {
"count": sum(1 for p in products if p["quadrant"] == q),
"revenue": quadrant_revenue.get(q, 0),
"revenue_pct": quadrant_revenue.get(q, 0) / total_revenue if total_revenue else 0,
"eng_pct": quadrant_eng.get(q, 0),
}
for q in ["Star", "Cash Cow", "Question Mark", "Dog"]
},
"portfolio_health_score": health,
"portfolio_findings": portfolio_findings,
}
def _portfolio_health(products: list, total_revenue: float, total_eng: float) -> float:
"""
Portfolio health 0-1. Penalizes:
- No Stars (no growth engine)
- Dogs consuming > 20% of eng
- Poor alignment scores
- Revenue concentrated in Dogs/Question Marks
"""
score = 1.0
quadrants = [p["quadrant"] for p in products]
has_star = "Star" in quadrants
has_cash_cow = "Cash Cow" in quadrants
if not has_star:
score -= 0.25 # No growth engine is a serious problem
if not has_cash_cow:
score -= 0.10 # No cash generator means funding stars from burn
# Dog eng allocation penalty
dog_eng = sum(p["eng_capacity_pct"] for p in products if p["quadrant"] == "Dog")
if dog_eng > 20:
score -= 0.20
elif dog_eng > 10:
score -= 0.10
# Revenue in dogs penalty
if total_revenue > 0:
dog_rev_pct = sum(p["revenue_quarterly"] for p in products if p["quadrant"] == "Dog") / total_revenue
if dog_rev_pct > 0.30:
score -= 0.15
# Average alignment score
avg_alignment = sum(p["alignment_score"] for p in products) / len(products) if products else 0
score -= (1 - avg_alignment) * 0.20
return max(0.0, min(1.0, score))
def _portfolio_findings(
products: list, total_revenue: float,
quadrant_revenue: dict, quadrant_eng: dict
) -> list:
findings = []
stars = [p for p in products if p["quadrant"] == "Star"]
cows = [p for p in products if p["quadrant"] == "Cash Cow"]
questions = [p for p in products if p["quadrant"] == "Question Mark"]
dogs = [p for p in products if p["quadrant"] == "Dog"]
if not stars:
findings.append("✗ CRITICAL: No Star products. You have no growth engine. Identify a Question Mark to invest in or revisit your market positioning.")
elif len(stars) == 1:
findings.append(f"◑ Single Star ({stars[0]['name']}). Portfolio is fragile — one product drives all growth. Diversify.")
else:
findings.append(f"{len(stars)} Star products — healthy growth engine.")
if not cows:
findings.append("⚠ No Cash Cow products. Stars are consuming capital without a self-funding mechanism. Watch burn rate.")
else:
cow_rev = quadrant_revenue.get("Cash Cow", 0)
cow_pct = cow_rev / total_revenue if total_revenue else 0
findings.append(f"✓ Cash Cow revenue: {cow_pct:.0%} of total — funds Star investment.")
if questions:
findings.append(f"{len(questions)} Question Mark(s): {', '.join(p['name'] for p in questions)}.")
findings.append(" Each needs a binary decision: invest to win share, or exit. Set a 2-quarter deadline.")
if dogs:
dog_eng_total = sum(p["eng_capacity_pct"] for p in dogs)
findings.append(f"{len(dogs)} Dog product(s): {', '.join(p['name'] for p in dogs)} consuming {dog_eng_total}% of eng capacity.")
findings.append(f" That's {dog_eng_total}% of your engineers on declining products. Set sunset dates.")
# Alignment check
misaligned = [p for p in products if p["alignment_score"] < 0.50]
if misaligned:
findings.append(f"⚠ Engineering allocation misaligned on: {', '.join(p['name'] for p in misaligned)}.")
findings.append(" Rebalance: move capacity from Dogs/Cows to Stars.")
return findings
# ---------------------------------------------------------------------------
# Report rendering
# ---------------------------------------------------------------------------
def fmt_currency(n: float) -> str:
if n >= 1_000_000:
return f"${n/1_000_000:.1f}M"
elif n >= 1_000:
return f"${n/1_000:.0f}K"
return f"${n:.0f}"
def render_report(result: dict) -> str:
lines = []
lines.append("=" * 65)
lines.append(f" PORTFOLIO ANALYZER — {result['company']}")
lines.append(f" Total Quarterly Revenue: {fmt_currency(result['total_revenue_quarterly'])}")
if result.get("total_engineering_headcount"):
lines.append(f" Engineering Headcount: {result['total_engineering_headcount']}")
lines.append("=" * 65)
lines.append("")
# Portfolio health
health = result["portfolio_health_score"]
bar_len = 40
filled = round(health * bar_len)
bar = "" * filled + "" * (bar_len - filled)
lines.append(f" Portfolio Health: {health:.0%}")
lines.append(f" [{bar}]")
lines.append("")
# Quadrant summary
lines.append(" QUADRANT SUMMARY")
lines.append(" " + "-" * 55)
header = f" {'Quadrant':<15} {'Count':>5} {'Revenue':>10} {'Rev%':>6} {'Eng%':>6}"
lines.append(header)
lines.append(" " + "-" * 55)
total_rev = result["total_revenue_quarterly"]
for q in ["Star", "Cash Cow", "Question Mark", "Dog"]:
qs = result["quadrant_summary"][q]
emoji = quadrant_emoji(q)
label = f"{emoji} {q}"
rev_pct = f"{qs['revenue_pct']:.0%}" if qs["count"] else "-"
eng = f"{qs['eng_pct']}%" if qs["count"] else "-"
rev = fmt_currency(qs["revenue"]) if qs["count"] else "-"
lines.append(f" {label:<15} {qs['count']:>5} {rev:>10} {rev_pct:>6} {eng:>6}")
lines.append("")
# Per-product breakdown
lines.append(" PRODUCT BREAKDOWN")
lines.append(" " + "-" * 65)
for p in result["products"]:
emoji = quadrant_emoji(p["quadrant"])
pc = posture_color(p["posture"])
lines.append(
f" {emoji} {p['name']}{p['quadrant']}{pc} {p['posture']}"
)
lines.append(
f" Revenue: {fmt_currency(p['revenue_quarterly'])}/qtr "
f"QoQ: {p['qoq_growth']:+.0%} "
f"Mkt growth: {p['market_growth_pct']:+.0f}%"
)
lines.append(
f" Share ratio: {p['share_ratio']:.1f}x "
f"Eng: {p['eng_capacity_pct']}% "
f"Alignment: {p['alignment_score']:.0%}"
)
if p.get("d30_retention") is not None:
lines.append(
f" D30 retention: {p['d30_retention']:.0%} "
f"NPS: {p['nps'] if p['nps'] is not None else 'N/A'}"
)
if p.get("notes"):
lines.append(f" Note: {p['notes']}")
for f in p.get("findings", []):
lines.append(f" {f}")
lines.append("")
# Portfolio-level findings
lines.append(" PORTFOLIO FINDINGS")
lines.append(" " + "-" * 65)
for f in result.get("portfolio_findings", []):
lines.append(f" {f}")
lines.append("")
lines.append("=" * 65)
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description="Portfolio Analyzer — BCG matrix classification and investment recommendations",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument(
"--input", "-i",
metavar="FILE",
help="JSON file with portfolio data (default: built-in sample data)",
)
parser.add_argument(
"--json",
action="store_true",
help="Output raw JSON result",
)
args = parser.parse_args()
if args.input:
try:
with open(args.input) as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: file not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: invalid JSON: {e}", file=sys.stderr)
sys.exit(1)
else:
print("No input file provided — running with sample data.\n")
data = sample_data()
result = analyze_portfolio(data)
if args.json:
# Make result JSON-serializable
def clean(obj):
if isinstance(obj, dict):
return {k: clean(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [clean(v) for v in obj]
elif isinstance(obj, float):
return round(obj, 4)
return obj
print(json.dumps(clean(result), indent=2))
else:
print(render_report(result))
if __name__ == "__main__":
main()


@@ -0,0 +1,183 @@
---
name: cro-advisor
description: "Revenue leadership for B2B SaaS companies. Revenue forecasting, sales model design, pricing strategy, net revenue retention, and sales team scaling. Use when designing the revenue engine, setting quotas, modeling NRR, evaluating pricing, building board forecasts, or when user mentions CRO, chief revenue officer, revenue strategy, sales model, ARR growth, NRR, expansion revenue, churn, pricing strategy, or sales capacity."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: cro-leadership
updated: 2026-03-05
python-tools: revenue_forecast_model.py, churn_analyzer.py
frameworks: sales-playbook, pricing-strategy, nrr-playbook
---
# CRO Advisor
Revenue frameworks for building predictable, scalable revenue engines — from $1M ARR to $100M and beyond.
## Keywords
CRO, chief revenue officer, revenue strategy, ARR, MRR, sales model, pipeline, revenue forecasting, pricing strategy, net revenue retention, NRR, gross revenue retention, GRR, expansion revenue, upsell, cross-sell, churn, customer success, sales capacity, quota, ramp, territory design, MEDDPICC, PLG, product-led growth, sales-led growth, enterprise sales, SMB, self-serve, value-based pricing, usage-based pricing, ICP, ideal customer profile, revenue board reporting, sales cycle, CAC payback, magic number
## Quick Start
### Revenue Forecasting
```bash
python scripts/revenue_forecast_model.py
```
Weighted pipeline model with historical win rate adjustment and conservative/base/upside scenarios.
### Churn & Retention Analysis
```bash
python scripts/churn_analyzer.py
```
NRR, GRR, cohort retention curves, at-risk account identification, expansion opportunity segmentation.
## Diagnostic Questions
Ask these before any framework:
**Revenue Health**
- What's your NRR? If below 100%, everything else is a leaky bucket.
- What percentage of ARR comes from expansion vs. new logo?
- What's your GRR (retention floor without expansion)?
**Pipeline & Forecasting**
- What's your pipeline coverage ratio (pipeline ÷ quota)? Under 3x is a problem.
- Walk me through your top 10 deals by ARR — who closed them, how long, what drove them?
- What's your stage-by-stage conversion rate? Where do deals die?
**Sales Team**
- What % of your sales team hit quota last quarter?
- What's average ramp time before a new AE is quota-attaining?
- What's the sales cycle variance by segment? High variance = unpredictable forecasts.
**Pricing**
- How do customers articulate the value they get? What outcome do you deliver?
- When did you last raise prices? What happened to win rate?
- If fewer than 20% of prospects push back on price, you're underpriced.
## Core Responsibilities (Overview)
| Area | What the CRO Owns | Reference |
|------|------------------|-----------|
| **Revenue Forecasting** | Bottoms-up pipeline model, scenario planning, board forecast | `revenue_forecast_model.py` |
| **Sales Model** | PLG vs. sales-led vs. hybrid, team structure, stage definitions | `references/sales_playbook.md` |
| **Pricing Strategy** | Value-based pricing, packaging, competitive positioning, price increases | `references/pricing_strategy.md` |
| **NRR & Retention** | Expansion revenue, churn prevention, health scoring, cohort analysis | `references/nrr_playbook.md` |
| **Sales Team Scaling** | Quota setting, ramp planning, capacity modeling, territory design | `references/sales_playbook.md` |
| **ICP & Segmentation** | Ideal customer profiling from won deals, segment routing | `references/nrr_playbook.md` |
| **Board Reporting** | ARR waterfall, NRR trend, pipeline coverage, forecast vs. actual | `revenue_forecast_model.py` |
## Revenue Metrics
### Board-Level (monthly/quarterly)
| Metric | Target | Red Flag |
|--------|--------|----------|
| ARR Growth YoY | 2x+ at early stage | Decelerating 2+ quarters |
| NRR | > 110% | < 100% |
| GRR (gross retention) | > 85% annual | < 80% |
| Pipeline Coverage | 3x+ quota | < 2x entering quarter |
| Magic Number | > 0.75 | < 0.5 (fix unit economics before spending more) |
| CAC Payback | < 18 months | > 24 months |
| Quota Attainment % | 60-70% of reps | < 50% (calibration problem) |
**Magic Number:** (Net New ARR × 4) ÷ Prior Quarter S&M Spend
**CAC Payback (months):** S&M Spend ÷ (New Logo ARR × Gross Margin %) × 12
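As a quick sanity check, both formulas can be computed directly. The dollar figures below are illustrative assumptions, not benchmarks:

```python
def magic_number(net_new_arr_q: float, prior_q_sm_spend: float) -> float:
    """Annualized net-new ARR generated per dollar of prior-quarter S&M spend."""
    return (net_new_arr_q * 4) / prior_q_sm_spend

def cac_payback_months(sm_spend: float, new_logo_arr: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover S&M spend."""
    monthly_gross_profit = (new_logo_arr * gross_margin) / 12
    return sm_spend / monthly_gross_profit

# Hypothetical quarter: $500K net-new ARR on $2M S&M, $1.6M new-logo ARR at 80% margin
print(round(magic_number(500_000, 2_000_000), 2))              # 1.0
print(round(cac_payback_months(2_000_000, 1_600_000, 0.80), 1))  # 18.8
```

At a Magic Number of 1.0 and an 18.8-month payback, this hypothetical company sits right at the target thresholds in the table above.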
### Revenue Waterfall
```
Opening ARR
+ New Logo ARR
+ Expansion ARR (upsell, cross-sell, seat adds)
- Contraction ARR (downgrades)
- Churned ARR
= Closing ARR
NRR = (Opening + Expansion - Contraction - Churn) / Opening
```
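The waterfall arithmetic above can be sketched in a few lines; the figures are hypothetical. Note that NRR deliberately excludes new-logo ARR — it measures the existing base only:

```python
def arr_waterfall(opening: float, new_logo: float, expansion: float,
                  contraction: float, churned: float) -> tuple:
    """Return (closing ARR, NRR) from the waterfall components."""
    closing = opening + new_logo + expansion - contraction - churned
    # NRR excludes new-logo ARR: it measures growth of the existing base.
    nrr = (opening + expansion - contraction - churned) / opening
    return closing, nrr

closing, nrr = arr_waterfall(
    opening=10_000_000, new_logo=1_200_000, expansion=900_000,
    contraction=200_000, churned=500_000,
)
print(f"Closing ARR: ${closing:,.0f}  NRR: {nrr:.0%}")  # Closing ARR: $11,400,000  NRR: 102%
```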
### NRR Benchmarks
| NRR | Signal |
|-----|--------|
| > 120% | World-class. Grow even with zero new logos. |
| 100-120% | Healthy. Existing base is growing. |
| 90-100% | Concerning. Churn eating growth. |
| < 90% | Crisis. Fix before scaling sales. |
## Red Flags
- NRR declining two quarters in a row — customer value story is broken
- Pipeline coverage below 3x entering the quarter — already forecasting a miss
- Win rate dropping while sales cycle extends — competitive pressure or ICP drift
- < 50% of sales team quota-attaining — comp plan, ramp, or quota calibration issue
- Average deal size declining — moving downmarket under pressure (dangerous)
- Magic Number below 0.5 — sales spend not converting to revenue
- Forecast accuracy below 80% — reps sandbagging or pipeline quality is poor
- Single customer > 15% of ARR — concentration risk, board will flag this
- "Too expensive" appearing in > 40% of loss notes — value demonstration broken, not pricing
- Expansion ARR < 20% of total ARR — upsell motion isn't working
## Integration with Other C-Suite Roles
| When... | CRO works with... | To... |
|---------|------------------|-------|
| Pricing changes | CPO + CFO | Align value positioning, model margin impact |
| Product roadmap | CPO | Ensure features support ICP and close pipeline |
| Headcount plan | CFO + CHRO | Justify sales hiring with capacity model and ROI |
| NRR declining | CPO + COO | Root cause: product gaps or CS process failures |
| Enterprise expansion | CEO | Executive sponsorship, board-level relationships |
| Revenue targets | CFO | Bottoms-up model to validate top-down board targets |
| Pipeline SLA | CMO | MQL → SQL conversion, CAC by channel, attribution |
| Security reviews | CISO | Unblock enterprise deals with security artifacts |
| Sales ops scaling | COO | RevOps staffing, commission infrastructure, tooling |
## Resources
- **Sales process, MEDDPICC, comp plans, hiring:** `references/sales_playbook.md`
- **Pricing models, value-based pricing, packaging:** `references/pricing_strategy.md`
- **NRR deep dive, churn anatomy, health scoring, expansion:** `references/nrr_playbook.md`
- **Revenue forecast model (CLI):** `scripts/revenue_forecast_model.py`
- **Churn & retention analyzer (CLI):** `scripts/churn_analyzer.py`
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- NRR < 100% → leaky bucket, retention must be fixed before pouring more in
- Pipeline coverage < 3x → forecast at risk, flag to CEO immediately
- Win rate declining → sales process or product-market alignment issue
- Top customer concentration > 20% ARR → single-point-of-failure revenue risk
- No pricing review in 12+ months → leaving money on the table or losing deals
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Forecast next quarter" | Pipeline-based forecast with confidence intervals |
| "Analyze our churn" | Cohort churn analysis with at-risk accounts and intervention plan |
| "Review our pricing" | Pricing analysis with competitive benchmarks and recommendations |
| "Scale the sales team" | Capacity model with quota, ramp, territories, comp plan |
| "Revenue board section" | ARR waterfall, NRR, pipeline, forecast, risks |
## Reasoning Technique: Chain of Thought
Pipeline math must be explicit: leads → MQLs → SQLs → opportunities → closed. Show conversion rates at each stage. Question any assumption above historical averages.
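That stage-by-stage discipline can be sketched as a funnel walk. The conversion rates here are placeholders — replace them with your historical averages:

```python
def pipeline_forecast(leads: int, rates: list) -> dict:
    """Walk leads through stage conversion rates; returns count at each stage."""
    stages = ["leads", "MQL", "SQL", "opportunity", "closed"]
    counts = [float(leads)]
    for r in rates:  # one conversion rate per stage transition
        counts.append(counts[-1] * r)
    return dict(zip(stages, counts))

# Hypothetical rates: lead→MQL 30%, MQL→SQL 40%, SQL→opp 50%, opp→closed 25%
funnel = pipeline_forecast(1000, [0.30, 0.40, 0.50, 0.25])
for stage, n in funnel.items():
    print(f"{stage:<12} {n:,.0f}")
```

Making each rate explicit forces the conversation onto the right question: which stage assumption is above historical average, and why?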
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`


@@ -0,0 +1,380 @@
# NRR Playbook
Net Revenue Retention is the single most important metric for a SaaS company's health and valuation. A company with 120% NRR grows even if it closes zero new deals. A company with 80% NRR is filling a bucket with a hole in it.
---
## NRR Deep Dive
### The Fundamental Formula
```
NRR = (Opening MRR + Expansion MRR - Contraction MRR - Churned MRR) / Opening MRR
Example:
Opening MRR: $1,000,000
Expansion: +$150,000
Contraction: -$30,000
Churn: -$80,000
Closing MRR: $1,040,000
NRR = $1,040,000 / $1,000,000 = 104%
```
### NRR vs. GRR
| Metric | Formula | What It Tells You |
|--------|---------|------------------|
| **GRR** | (Opening - Contraction - Churn) / Opening | Retention floor — how much you keep without any expansion |
| **NRR** | (Opening + Expansion - Contraction - Churn) / Opening | Net health — expansion offsetting churn |
| **Logo Retention** | (Customers start - Customers churned) / Customers start | Volume retention, ignores revenue weight |
**GRR is the floor. NRR is the ceiling.**
If GRR is 80% and NRR is 105%, your expansion is covering 25 points of churn. That's fragile — any expansion slowdown drops NRR below 100%. The fix is GRR, not more upsell.
### Benchmarks by Segment
| Segment | Good GRR | Good NRR | Exceptional NRR |
|---------|---------|---------|----------------|
| SMB-focused | 80-85% | 95-105% | > 110% |
| Mid-Market | 85-90% | 105-115% | > 120% |
| Enterprise | 90-95% | 115-130% | > 140% |
Enterprise NRR can exceed 140% because large accounts expand substantially and rarely churn entirely — they may downgrade but full logo churn is rare if the product is embedded.
### NRR by Cohort
Don't just measure NRR across the full base — measure it by customer cohort (month of acquisition).
```
Jan 2024 Cohort:
Opening MRR (Jan 2024): $50,000
MRR at Jan 2025: $62,000
12-month NRR: 124%
Feb 2024 Cohort:
Opening MRR (Feb 2024): $45,000
MRR at Feb 2025: $38,000
12-month NRR: 84% ← problem cohort
```
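The cohort comparison above reduces to a simple ratio per cohort; the figures below are the sample numbers from the example:

```python
def cohort_nrr(opening_mrr: float, mrr_12mo_later: float) -> float:
    """12-month NRR for a single acquisition cohort."""
    return mrr_12mo_later / opening_mrr

cohorts = {
    "Jan 2024": (50_000, 62_000),
    "Feb 2024": (45_000, 38_000),
}
for name, (opening, later) in cohorts.items():
    nrr = cohort_nrr(opening, later)
    flag = "  <- problem cohort" if nrr < 1.0 else ""
    print(f"{name} 12-month NRR: {nrr:.0%}{flag}")
```

Run this per acquisition month and plot the trend — a single bad cohort points at a specific channel, rep, or product change to investigate.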
Cohort analysis reveals:
- Whether a specific acquisition channel brings lower-quality customers
- Whether a product change or pricing shift affected retention
- Whether specific sales reps or time periods created bad-fit deals
---
## Churn Anatomy
Not all churn is equal. Know the breakdown before prescribing solutions.
### Churn Types
| Type | Definition | Primary Cause | Fix |
|------|-----------|--------------|-----|
| **Logo churn** | Customer cancels entirely | No value, poor fit, champion left, competitor | Root cause analysis, ICP tightening |
| **Revenue churn** | ARR lost (cancels + downgrades combined) | Same as logo + downgrade triggers | Address both volume and revenue |
| **Involuntary churn** | Failed payment, expired card | Billing friction | Dunning improvement (quick win: 20-30% recovery) |
| **Voluntary churn** | Active cancellation decision | Explicit dissatisfaction, competitor win | Exit interview + intervention program |
| **Contraction** | Downgrade, seat reduction | Overpurchased, budget cut, team reduction | Right-sizing program, annual contracts |
### Churn Root Cause Framework
Run this analysis quarterly on all churned accounts:
**Step 1: Categorize by reason**
- No value realized (never activated or adopted)
- Value realized but budget cut (external, not product)
- Switched to competitor (why? what did they offer?)
- Champion left company (relationship loss, not product failure)
- Company shutdown / acquisition (unavoidable)
**Step 2: Look for patterns**
- Which ICP signals predict churn? (company size, vertical, acquisition channel)
- Which product behaviors predict churn? (no login in 30 days, never completed onboarding)
- Which time periods have highest churn? (months 3, 6, 12 are typical cliff points)
**Step 3: Act on the patterns**
- ICP pattern → tighten qualification criteria
- Behavior pattern → build early warning health score
- Time cliff → build intervention playbooks for months 2, 5, 11
### Exit Interview Protocol
Talk to every churned customer if ACV > $10K. For smaller accounts, run quarterly batch surveys.
Questions:
1. "What was the primary reason for your decision to cancel?"
2. "What would have needed to be true for you to stay?"
3. "What did you switch to, and what drove that decision?"
4. "Was there a specific moment when you decided to leave?"
Rules:
- CSM who owned the account should NOT conduct the exit interview (too much relationship bias)
- Use a neutral party or the VP CS
- Document verbatim, not paraphrased
- Feed patterns back to Product and Sales monthly
---
## Customer Health Scoring
A health score predicts churn 60-90 days before it happens. Without one, you're reactive.
### Health Score Components
Score each account 0-100 across weighted signals:
| Signal | Weight | Red (0-33) | Yellow (34-66) | Green (67-100) |
|--------|--------|-----------|---------------|---------------|
| **Product usage** (DAU/WAU, feature adoption depth) | 35% | < 20% seats active | 20-60% seats active | > 60% seats active |
| **Engagement** (QBR attendance, champion responsiveness) | 20% | No response 60+ days | 30-60 days | Active, < 30 days |
| **NPS / CSAT** | 20% | Score < 6 | Score 6-7 | Score 8-10 |
| **Support volume** (negative signal: high volume = friction) | 15% | > 10 tickets/month | 3-10/month | < 3/month |
| **Contract signals** (time to renewal, expansion in motion) | 10% | < 60 days to renewal, no expansion discussion | 60-90 days, passive | > 90 days, expansion active |
**Composite score:**
- 70-100: Healthy. Renewal confident. Identify expansion opportunity.
- 50-69: At-risk. CSM check-in required. Executive sponsor loop-in if < 60 days to renewal.
- 0-49: Red alert. Immediate intervention. VP CS or CEO call if strategic account.
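The weighted composite can be computed directly from the table above; the signal scores below are illustrative:

```python
# Weights from the signal table above
WEIGHTS = {
    "product_usage": 0.35,
    "engagement": 0.20,
    "nps_csat": 0.20,
    "support_volume": 0.15,
    "contract_signals": 0.10,
}

def health_score(signals: dict) -> float:
    """Weighted composite of 0-100 signal scores."""
    return sum(signals[k] * w for k, w in WEIGHTS.items())

def health_band(score: float) -> str:
    if score >= 70:
        return "Healthy"
    if score >= 50:
        return "At-risk"
    return "Red alert"

s = health_score({"product_usage": 80, "engagement": 60, "nps_csat": 70,
                  "support_volume": 90, "contract_signals": 50})
print(f"{s:.0f} -> {health_band(s)}")  # 72 -> Healthy
```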
### Health Score Automation
Trigger alerts automatically:
```
Score drops > 20 points in 30 days → CSM immediate outreach (same day)
No product login in 14 days → Automated email + CSM flag (within 24 hours)
Champion leaves company → Executive outreach (within 24 hours)
Support escalation → CSM loop-in (within 2 hours)
Renewal < 90 days + score < 60 → VP CS review (weekly)
Seat utilization < 30% → Adoption intervention playbook triggered
```
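These trigger rules are mechanical enough to express as code. The account fields below (`score_30d_ago`, `last_login`, and so on) are an illustrative schema, not a real product API:

```python
from datetime import date, timedelta

# Alert rules from the list above, as predicates over a hypothetical
# account record. Field names are illustrative only.
def alerts(acct: dict, today: date) -> list:
    out = []
    if acct["score_30d_ago"] - acct["score"] > 20:
        out.append("CSM immediate outreach (same day)")
    if (today - acct["last_login"]).days >= 14:
        out.append("Automated email + CSM flag (24h)")
    if acct["champion_departed"]:
        out.append("Executive outreach (24h)")
    if (acct["renewal_date"] - today).days < 90 and acct["score"] < 60:
        out.append("VP CS review (weekly)")
    if acct["seat_utilization"] < 0.30:
        out.append("Adoption intervention playbook")
    return out

today = date(2025, 6, 1)
acct = {
    "score": 55, "score_30d_ago": 80,
    "last_login": today - timedelta(days=20),
    "champion_departed": False,
    "renewal_date": today + timedelta(days=60),
    "seat_utilization": 0.25,
}
for a in alerts(acct, today):
    print(a)  # this sample account fires four of the five rules
```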
### Leading Indicators vs. Lagging Indicators
| Leading (predict future churn) | Lagging (confirm past churn) |
|-------------------------------|------------------------------|
| Login frequency declining | Cancellation submitted |
| Feature adoption stalling at basic level | Non-renewal at contract end |
| NPS score trend (not just snapshot) | Downgrade executed |
| No QBR scheduled in 90+ days | Champion departure |
| Support escalations increasing | Competitor mentioned in support |
Build your health score from leading indicators. Lagging indicators tell you what already happened.
---
## Expansion Revenue Strategies
Expansion is cheaper than acquisition. CAC for expansion is typically 20-30% of new logo CAC.
### Expansion Motion 1: Seat Expansion
**Trigger signals:**
- Usage by unlicensed users (shared logins, "can you add my colleague?")
- Team growth visible on LinkedIn (company hiring in target department)
- Champion promotes to a new role with bigger team
- Power users at license limit consistently
**Playbook:**
1. Pull monthly usage report showing which features unlicensed users are using
2. Frame as: "Your team is getting value from X — you could be capturing that for the full team"
3. Offer a team expansion proposal at renewal + 10% volume discount for seat adds
4. Never penalize users for sharing logins before the conversation — that's a data asset
### Expansion Motion 2: Upsell (Tier Upgrade)
**Trigger signals:**
- Customer consistently hitting usage/feature limits
- Security or compliance requirement that requires higher tier
- New stakeholder joining who needs admin controls
- API usage growing rapidly (engineering team engagement)
**Playbook:**
1. Build a "value realized" report before the upsell conversation (ROI proof)
2. Use QBR as the venue: "You've achieved X. Here's what's possible at the next level."
3. Frame the upgrade as unlocking more of what's already working
4. Time to renewal: start upsell conversation 90-120 days before renewal
### Expansion Motion 3: Cross-sell
**Trigger signals:**
- Strategic account with adjacent problem your product can solve
- New product launch that complements existing usage
- Customer explicitly asks about a capability in your roadmap or adjacent product
**Playbook:**
1. Land with core product; build relationship and prove value
2. Cross-sell only after health score is green and NPS > 7
3. Introduce the new product through a champion, not a cold pitch
4. Pilot pricing: bundle into renewal at modest uplift vs. separate sale
5. Cross-sell owner: CSM or AE (define explicitly — joint ownership = no ownership)
### Expansion Sequencing
Don't try all three simultaneously. Sequence matters:
```
Month 0-3: Activation focus — ensure core value delivered
Month 3-6: Seat expansion — grow usage within existing team
Month 6-9: Upsell conversation — unlock advanced features
Month 9-12: Cross-sell OR renewal + multi-year lock-in
```
### NRR Modeling
Target breakdown for 115% NRR:
```
GRR: 88% (12% lost to churn/contraction)
Expansion rate: 27% (upsell + cross-sell + seat expansion)
NRR: 88% + 27% = 115%
To reach 120% NRR:
Option A: Improve GRR to 93% (reduce churn), keep expansion at 27%
Option B: Keep GRR at 88%, improve expansion to 32%
Option C: Both, incrementally
Option A is usually easier and more durable. Fix the hole first.
```
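The NRR arithmetic above can be checked with a few lines. Dollar figures are hypothetical; the formula (NRR = GRR + expansion rate, all as fractions of starting ARR) matches the model:

```python
# NRR decomposition: all inputs are dollars for one measurement period.
def nrr_model(start_arr, churned, contraction, expansion):
    grr = (start_arr - churned - contraction) / start_arr
    exp_rate = expansion / start_arr
    return grr, exp_rate, grr + exp_rate

grr, exp_rate, nrr = nrr_model(
    start_arr=10_000_000,   # $10M starting ARR (hypothetical)
    churned=900_000,        # full cancellations
    contraction=300_000,    # downgrades
    expansion=2_700_000,    # upsell + cross-sell + seat expansion
)
print(f"GRR {grr:.0%}  expansion {exp_rate:.0%}  NRR {nrr:.0%}")
# GRR 88%  expansion 27%  NRR 115%
```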
---
## Customer Success Integration
CS and Revenue are not separate functions. NRR lives at their intersection.
### CS Team Structure (aligned to NRR)
| CS Model | When to Use | NRR Focus |
|----------|------------|-----------|
| **High-touch CSM** | ACV > $25K | Named accounts, QBRs, executive relationships |
| **Tech-touch / pooled** | ACV $5K-25K | Automated health scoring, office hours, community |
| **Self-serve** | ACV < $5K | In-app guidance, knowledge base, email sequences |
**CSM coverage ratios:**
- High-touch: 1 CSM per $2M-4M ARR managed
- Tech-touch: 1 CSM per $5M-10M ARR managed
- Self-serve: Product and automation (no dedicated CSM)
### CS Compensation (aligned to NRR)
Don't pay CSMs a flat salary — align incentive to retention and expansion:
```
CS compensation structure:
Base: 70% of OTE
Variable: 30% of OTE
Variable tied to:
GRR / NRR vs. target (50% of variable)
Health score improvement (25% of variable)
Expansion ARR facilitated (25% of variable)
Do NOT pay CS commission on expansion ARR the same way AEs earn it.
This creates conflict: CS will push expansion before the customer is ready.
Instead, bonus for expansion milestones — it's a different incentive structure.
```
### QBR (Quarterly Business Review) Framework
QBRs are the primary vehicle for expansion and churn prevention in enterprise accounts.
**QBR agenda (60-90 minutes):**
1. **Their goals, our progress** — review what they said success looked like at kickoff (10 min)
2. **Usage and adoption data** — product metrics presented in business language, not feature language (15 min)
3. **Value delivered** — ROI proof: time saved, revenue influenced, risk reduced (10 min)
4. **Challenges and blockers** — what's preventing more adoption? (10 min)
5. **Roadmap preview** — upcoming features relevant to their use case (10 min)
6. **Next 90 days** — joint success plan with owner and due dates (10 min)
7. **Expansion opportunity** — if health score is green and timing is right (10 min)
**QBR anti-patterns:**
- Leading with your product roadmap (they don't care; start with their results)
- Bringing too many people from your side without matching seniority
- Presenting at a VP without bringing the economic buyer
- Skipping QBRs for "healthy" accounts (health can change fast)
- No confirmed next step at the end
---
## Cohort-Based Retention Analysis
Aggregate NRR hides the signal. Cohort analysis reveals it.
### Retention Curve Analysis
Plot retention by months since acquisition for each quarterly cohort:
```
Month 0: 100% (starting revenue)
Month 3: First cliff — early adopters who didn't activate churn here
Month 6: Second cliff — customers who never expanded, running out of runway
Month 12: Renewal cliff — annual contract renewal decision
Month 18: Mature customers — churn rate stabilizes significantly
Healthy curve: Drops sharply in months 1-3, flattens after month 6
Problem curve: Continues declining linearly through month 12+ (no value anchor)
```
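A retention curve for one cohort is each month's revenue divided by month-0 revenue. A minimal sketch with illustrative numbers shaped like the healthy curve above:

```python
# Revenue retention curve for one acquisition cohort.
# Input: revenue retained by that cohort at each month since signup.
def retention_curve(monthly_revenue):
    base = monthly_revenue[0]
    return [round(m / base, 3) for m in monthly_revenue]

# Hypothetical Q1 cohort: drops in months 1-3, flattens after month 6.
q1_cohort = [100_000, 97_000, 92_000, 88_000, 87_000, 86_500, 86_000]
curve = retention_curve(q1_cohort)
print(curve)
```

A problem curve would keep declining roughly linearly instead of flattening; plotting several quarterly cohorts this way makes the difference obvious.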
### Reading Cohort Data
| Pattern | Interpretation | Action |
|---------|---------------|--------|
| Early churn (months 1-3) | Onboarding / activation failure | Fix time-to-value, improve onboarding |
| Mid-cycle churn (months 4-8) | Value not deepening | Adoption program, check product fit |
| Annual renewal churn (month 12) | Buying committee didn't renew | Executive engagement, earlier renewal process |
| Flat after month 6 | Sticky product, low expansion | Increase upsell motion |
| Growing after month 6 | Expansion working | Scale the upsell playbook |
### Cohort Segmentation Variables
Slice retention cohorts by:
- **Acquisition channel** (inbound vs. outbound vs. PLG vs. partner)
- **Sales rep** (which reps close durable deals vs. churny deals)
- **Deal size** (SMB churn rate typically 2-3x enterprise)
- **Industry vertical** (some verticals have structurally higher churn)
- **Product tier at signup** (self-serve → converted vs. directly contracted)
- **Geographic market** (international markets often have different retention profiles)
The most actionable finding is usually by acquisition channel or sales rep — both are directly controllable.
### Churn Prevention Intervention Playbooks
**Playbook 1: Low Activation (no login in first 14 days)**
```
Day 7: Automated email: "Getting started" + specific next step
Day 14: CSM outreach: "I noticed you haven't logged in — can I help?"
Day 21: Escalate to CSM manager if no response
Day 30: Executive outreach for ACV > $25K; flag as at-risk
```
**Playbook 2: Usage Cliff (DAU drops > 50% in 30 days)**
```
Trigger: Automated health score alert
Day 1: CSM reviews usage report, identifies likely cause
Day 2: CSM outreach: "We noticed your team's usage changed — is everything okay?"
Day 7: If no response: schedule 30-min call with champion
Day 14: If unresponsive: VP CS loop-in + executive reach out
```
**Playbook 3: Champion Departure**
```
Trigger: LinkedIn alert or internal report of champion leaving
Day 1: Email to departed champion (warm handoff ask)
Day 1: Email to new stakeholder (introduction from AE or VP CS)
Day 3: Schedule onboarding call for new stakeholder
Day 14: QBR with new stakeholder to establish relationship
Day 30: Health score review — flag if engagement hasn't recovered
```
**Playbook 4: Pre-Renewal (90 days out, health score < 70)**
```
Day -90: CSM completes account health review, escalates if < 70
Day -75: Executive sponsor from vendor side joins renewal call
Day -60: Value delivered report prepared (ROI proof)
Day -45: Renewal proposal sent with expansion option
Day -30: Follow-up on any open objections or requirements
Day -14: Final confirm or escalate to VP Sales
```
# Pricing Strategy
Pricing is not a one-time decision. It's an ongoing hypothesis about value and willingness to pay. Most SaaS companies are underpriced by 20-40%.
---
## Pricing Models
### Per Seat / User
**How it works:** Customer pays a fixed amount per user, per month or year.
**Best for:**
- Collaboration tools (everyone who uses it needs a license)
- Productivity software where value scales with users
- Products where you want viral / network growth within accounts
**Pricing structure:**
```
Starter: $15/user/month (1-10 users)
Professional: $30/user/month (11-100 users)
Enterprise: Custom (100+ users, negotiated)
```
**Pros:**
- Simple to understand and sell
- Revenue scales naturally with customer growth
- Predictable for customers (fixed monthly cost)
**Cons:**
- Customers negotiate volume discounts aggressively
- Discourages broad adoption if price is high (seat hoarding)
- Doesn't capture value for power users vs. light users
- Enterprises can negotiate $5/seat on a $25 product
**Watch for:** Customers sharing logins to avoid per-seat cost. Enforce with IP restrictions or SSO audit logs.
---
### Usage-Based Pricing (UBP)
**How it works:** Customer pays for what they consume — API calls, data processed, messages sent, compute hours, etc.
**Best for:**
- API companies, infrastructure, data platforms
- AI products (per-token, per-query pricing)
- Products where value scales non-linearly with usage
- Land-and-expand: low entry cost, grows with customer success
**Pricing structure:**
```
Free tier: First 10K API calls/month
Pay-as-you-go: $0.002 per API call
Committed use: $500/month for 500K calls (better rate)
Enterprise: Custom contract, committed volume discount
```
**Pros:**
- Customer pays in proportion to value received
- Low barrier to entry (customers start small, scale up)
- Natural expansion: customer success = revenue growth
- No "unused licenses" problem
**Cons:**
- Revenue is unpredictable for both you and the customer
- Hard to forecast; hard to budget for customer
- Customers may optimize to reduce usage (and your revenue)
- Complex billing; requires robust usage tracking infrastructure
**Usage-based pricing math:**
```
Unit cost (your COGS per unit): $0.0002 per API call
Target gross margin: 80%
Price = COGS / (1 - margin) = $0.0002 / 0.20 = $0.001 minimum
Add markup for value delivered above cost: $0.002 per call (10x markup at scale)
```
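The margin math above can be wrapped as a reusable helper; numbers match the example:

```python
# Price floor from unit COGS and target gross margin (formula above).
def price_floor(unit_cogs: float, target_margin: float) -> float:
    return unit_cogs / (1 - target_margin)

floor = price_floor(0.0002, 0.80)  # $0.001/call minimum at 80% margin
price = 0.002                      # list price with value markup
margin = 1 - 0.0002 / price        # effective gross margin at list price
print(f"floor ${floor:.4f}/call, margin at list price {margin:.0%}")
```

The 10x markup over COGS in the example lands at a 90% effective gross margin, comfortably above the 80% target, leaving room for committed-use discounts.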
**Hybrid usage + seat approach:**
- Platform fee: $500/month (access, support, base features)
- Usage fee: $0.001 per API call above included 100K
---
### Flat Rate / Subscription
**How it works:** One price for full access, regardless of usage or users.
**Best for:**
- Simple products with limited feature differentiation
- Products where usage is predictable and bounded
- Customers who want budget certainty
- Early stage before you've figured out value segmentation
**Pros:**
- Simplest to sell and explain
- Easiest billing implementation
- Customers love budget predictability
**Cons:**
- Leaves money on the table for heavy users
- No natural expansion revenue mechanism
- Light users pay the same as power users (retention risk)
**When to move away from flat rate:**
- 20% of customers are using 80% of the product capacity
- Power users would clearly pay more; light users churn or underutilize
- You have a clear expansion story waiting to happen
---
### Tiered / Feature-Based
**How it works:** Multiple packages (Starter, Pro, Enterprise) with different feature sets and/or usage limits.
**Best for:**
- Multi-use-case products
- Different buyer types (individual vs. team vs. enterprise)
- Products with a natural upgrade path based on sophistication
**Structure (Good / Better / Best):**
```
Starter ($49/mo): Core features, 3 users, 10GB storage
Professional ($149/mo): Advanced features, 25 users, 100GB, API access
Business ($499/mo): All features, 100 users, 1TB, SSO, priority support
Enterprise (custom): Unlimited, custom integrations, SLA, dedicated CSM
```
**Tier design principles:**
- Starter tier: removes friction, proves value, not the revenue center
- Professional: the primary revenue tier; 60-70% of customers land here
- Enterprise: custom pricing allows you to capture maximum value
- Each tier upgrade should have an obvious "must-have" feature for the target buyer
**What to gate on each tier:**
| Feature Type | Where to Put It |
|-------------|----------------|
| Core product functionality | Starter (must be useful) |
| Collaboration features | Pro (drives team usage) |
| Admin, security, SSO | Business/Enterprise |
| API / integrations | Pro and above |
| SLAs, dedicated support | Enterprise only |
| Advanced analytics | Business/Enterprise |
---
### Hybrid Pricing
**How it works:** Combination of models (e.g., platform fee + per seat + usage).
**Example:**
```
Platform fee: $2,000/month (access, core features, admin console)
Per seat: $50/user/month (up to 200 users)
Usage overage: $0.10/action above 100K included actions
```
**When to use hybrid:**
- Enterprise customers want budget certainty (platform fee) but your value scales with usage
- You have different cost structures for different features
- Customers have very different usage patterns across the base
**Pros:** Captures value along multiple dimensions. Hybrid is the most common model in enterprise SaaS.
**Cons:** More complex to explain and bill. Sales training burden increases.
---
## Value-Based Pricing Methodology
Cost-plus pricing is a race to the bottom. Price on value, not cost.
### Step 1: Define the Economic Outcome
What business result does your product deliver? Be specific.
**Weak:** "We help companies save time"
**Strong:** "We reduce onboarding time for new enterprise software by 40%, saving 8 hours per employee"
Map to one of:
- **Revenue increase** — "Our customers close 25% more deals using our CRM intelligence"
- **Cost reduction** — "We eliminate 60% of manual data entry for finance teams"
- **Risk reduction** — "We reduce compliance violations by 90%, avoiding $500K+ in potential fines"
- **Time savings** — "CSMs spend 5 fewer hours per week on manual reporting"
### Step 2: Quantify Per Customer
Calculate the dollar value of the outcome for your average customer.
```
Example: Data entry automation product
Target customer: 50-person finance team
Manual data entry: 4 hours/person/week
Hours saved with product: 2.4 hours/person/week (60% reduction)
Fully loaded cost of finance analyst: $75/hour
Weekly savings: 50 employees × 2.4 hours × $75 = $9,000
Annual savings: $9,000 × 52 weeks = $468,000
```
### Step 3: Determine Willingness to Pay
Customers will typically pay 10-20% of the value delivered for software.
```
Annual value delivered: $468,000
Willingness to pay range: $46,800 - $93,600/year
Current market pricing: ~$60,000/year
Your pricing: $72,000/year (between median and upper WTP)
```
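Steps 2 and 3 reduce to a short calculation. This sketch reproduces the data-entry example above, with the 10-20% capture rate as parameters:

```python
# Value-based pricing: quantify the outcome, then derive a WTP range.
def annual_savings(team_size, hours_saved_per_week, loaded_rate, weeks=52):
    return team_size * hours_saved_per_week * loaded_rate * weeks

def wtp_range(value, low=0.10, high=0.20):
    # Software typically captures 10-20% of the value it delivers.
    return value * low, value * high

value = annual_savings(50, 2.4, 75)  # 50-person team, 2.4 hrs/wk, $75/hr
lo, hi = wtp_range(value)
print(f"value ${value:,.0f}  WTP ${lo:,.0f}-${hi:,.0f}")
# value $468,000  WTP $46,800-$93,600
```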
**Test your hypothesis:**
- Interview 5-10 customers: "If we charged $X/year, is that reasonable?"
- Van Westendorp Price Sensitivity Meter:
- "At what price is this too cheap to trust?"
- "At what price is this a good deal?"
- "At what price is this getting expensive but still worth it?"
- "At what price is this too expensive?"
### Step 4: Validate with Win Rate Analysis
```
Run this analysis quarterly:
Track win rate by price point (segmented if possible)
Win rate 30-40%: pricing is likely right
Win rate < 20%: price is too high OR value demonstration is broken
Win rate > 50%: you're underpriced
Note: Distinguish between "lost on price" and "lost on fit."
Lost on price + good ROI proof: test lower price or improve value story
Lost on fit: ICP problem, not pricing problem
```
---
## Packaging (Good / Better / Best)
### The Three-Package Framework
Packaging is not just about features. It's about serving different buyer personas with different budgets and needs.
**Buyer personas by tier:**
```
Starter → The individual contributor or small team trying to solve an immediate problem
- Low budget authority
- Low-friction purchase (credit card, self-serve)
- Needs quick time to value
Professional → The team manager or department head
- $10K-100K budget authority
- Works with inside sales
- Needs collaboration features and reporting
Enterprise → The VP or C-suite buyer
- Unlimited budget (but requires justification)
- Needs compliance, security, SLAs, dedicated support
- Long buying process, multiple stakeholders
```
### Packaging Design Rules
1. **Each tier must be useful on its own.** Starter can't be crippled—customers need to succeed.
2. **Upgrade triggers must be obvious.** When a customer hits a limit, the next tier should solve it clearly.
3. **Don't gate features that drive adoption.** Collaboration features gated in a low tier kill viral growth.
4. **Enterprise pricing is custom.** Show "Contact Sales" or a starting price. Don't publish a firm enterprise price—you'll anchor too low.
5. **Annual vs. monthly pricing:** Charge 15-25% more for monthly vs. annual. Incentivize annual prepay.
### Pricing Page Design
- Lead with the most popular tier (visually prominent)
- Show annual pricing by default (with toggle to monthly)
- Highlight one or two "recommended" plans
- Feature comparison table: minimize the number of rows (overwhelm = no decision)
- Show logos of customers on each tier (social proof by segment)
- Live chat for enterprise CTA, not "Contact Sales" form
---
## Pricing Experiments and Rollout
### Before You Change Pricing
**Internal checklist:**
- [ ] Validate new pricing with 5-10 current customers (interviews)
- [ ] Run a willingness-to-pay survey with 50+ prospects
- [ ] Model revenue impact: how many customers at new pricing are equivalent to current ARR?
- [ ] Get CFO sign-off on cash flow impact
- [ ] Prepare messaging for customers, website, sales team
- [ ] Set a rollout date 60-90 days out
### Testing Approaches
**Cohort testing (safest):**
- New signups see new pricing; existing customers are grandfathered
- Monitor: conversion rate, ACV, win rate, time-to-close
- Run for 90 days before full rollout
**A/B pricing test (higher stakes):**
- Half of new signups see price A, half see price B
- Risk: word gets out that prices differ (customer frustration)
- Use only on self-serve, where purchase is not sales-assisted
**Segment-specific rollout:**
- Change pricing in one segment (e.g., SMB) while holding enterprise steady
- Lower risk than full rollout; validate before expanding
### Pricing Rollout Plan
```
Day -90: Decision made, pricing document approved
Day -60: Internal communication to sales, CS, support
Day -45: Customer communication drafted and reviewed
Day -30: New pricing live on website for new customers
Day -30: Existing customer email sent (90-day grandfather period)
Day -30: Sales team trained, FAQ document ready
Day -14: Second reminder to existing customers
Day 0: Existing customers transition to new pricing
Day +30: Win rate analysis, NRR impact review
```
### Grandfathering Policy
- **Standard:** Grandfather existing customers at old price for 12 months
- **Aggressive:** 90 days grandfather, then new pricing applies (use if you're raising significantly)
- **Never:** Retroactive pricing changes with no notice. This is a churn trigger and brand damage.
Grandfathering message framing:
> "We're investing significantly in [feature areas]. As a valued customer, your pricing remains unchanged through [date]. After that, your new rate will be $X — still X% less than new customer pricing as a thank-you for your partnership."
---
## Competitive Pricing Analysis
### Mapping the Competitive Landscape
```
Step 1: List all direct competitors
Step 2: Find their public pricing (website, G2, Capterra)
Step 3: Secret shop their sales process for unpublished pricing
Step 4: Talk to customers who considered them ("What did they quote you?")
Step 5: Map to your packaging (apples-to-apples comparison)
Output: Competitive pricing matrix
You: $X/month per seat at Pro tier
Competitor A: $Y/month per seat at equivalent tier
Competitor B: Custom (enterprise only)
```
### Competitive Positioning by Price
| Your Position | Situation | Response |
|--------------|-----------|---------|
| Significantly cheaper | Unclear why | Raise prices or clarify differentiation |
| Slightly cheaper | Winning on price | Test raising price, monitor win rate |
| At market | Competing on features | Make sure differentiation is clear in sales |
| Slightly more expensive | Win rate healthy | Price is justified by value |
| Significantly more expensive | Win rate low | Improve value proof or re-examine ICP |
### When "They're Cheaper" Appears in Deals
**Coach your reps:**
1. "What makes [Competitor] worth choosing over the $X difference?" (reframe value, not price)
2. "If price were equal, which would you choose and why?" (understand true preference)
3. "What's the cost of not solving this problem in Q3?" (urgency + value)
4. "What's their implementation cost and time?" (TCO, not ACV)
**If price is truly the barrier:**
- Offer a pilot at reduced scope (not price) to prove value
- Multi-year deal with year-one discount
- Defer payment to match their budget cycle (start in Q4, bill in Q1)
- Confirm it's price and not a champion issue or lack of urgency
---
## When to Raise Prices
### Green Lights for a Price Increase
**Product signals:**
- Customer usage growing QoQ (product delivers real value)
- NPS consistently > 40
- Feature requests indicate you're solving critical workflows
- Customers measuring and can articulate ROI
**Market signals:**
- Win rate > 35% (strong signal of underpricing)
- Waitlist or high inbound conversion without price objections
- Competitors raising prices (market is moving up)
- You've added significant value (new features, integrations, uptime improvements)
**Business signals:**
- Gross margin below 70% (cost inflation requires pricing response)
- CAC payback > 24 months (need higher ACV to fix unit economics)
- Haven't raised prices in 2+ years (inflation alone justifies adjustment)
### How Much to Raise
**Conservative:** 10-15% increase. Low risk, low disruption.
**Standard:** 15-30% increase. Acceptable if value story is strong.
**Aggressive:** 30-50% increase. Only with major product investment or clear underprice.
**Repositioning:** 2-5x increase. Rare; requires moving to a new buyer persona.
**Rule:** If fewer than 20% of prospects mention price as a concern, you're underpriced. Test.
### Price Increase Execution
1. Raise new business pricing immediately on the website
2. Communicate to existing customers with 90 days notice
3. Grandfather for 12 months OR give a 10-15% loyalty discount on new price
4. Track: conversion rate (new business), churn rate (existing), expansion ARR impact
5. Monitor win rate for 60 days post-increase; adjust if win rate drops > 5 points
**What not to do:**
- Don't apologize for raising prices
- Don't over-explain the justification (confident framing wins)
- Don't let sales reps negotiate discounts back to old pricing "just this once"
- Don't raise prices and remove features simultaneously
# Sales Playbook
Frameworks for building, running, and scaling a B2B SaaS sales organization.
---
## Sales Process Design
A sales process is a repeatable series of steps that takes a prospect from first contact to closed revenue. Without it, you have individual heroics, not a scalable machine.
### The Core Funnel
```
Lead Generation → Qualification → Discovery → Demo → Trial / POC → Proposal → Negotiation → Close → Handoff
```
Each stage has a clear entry criterion, exit criterion, and owner.
### Stage Definitions
#### Stage 0: Lead / Suspect
- **Entry:** Contact exists in CRM with basic firmographic data
- **Owner:** Marketing or SDR
- **Exit criterion:** Meets ICP criteria (company size, industry, tech stack)
- **Action:** Research, prioritize, add to outbound sequence
#### Stage 1: Prospecting / Outreach
- **Entry:** ICP-qualified account, no contact yet
- **Owner:** SDR or AE (depending on model)
- **Exit criterion:** Meeting booked with a qualified contact
- **Action:** Multi-channel outreach (email + call + LinkedIn), 8-12 touch sequence
- **Key metric:** Meeting booked rate (benchmark: 2-5% of outbound contacts)
#### Stage 2: Discovery
- **Entry:** First meeting confirmed
- **Owner:** AE (SDR hands off or joins)
- **Exit criterion:** Confirmed: pain, budget range, decision process, timeline
- **Action:** Ask questions. Listen. Map the org. Don't pitch yet.
- **Key metric:** Discovery-to-demo rate (benchmark: 60-80% proceed)
**Discovery question framework:**
```
Situation: "How do you currently handle [problem area]?"
Problem: "What's the impact when [pain point] happens?"
Implication: "If this continues, what does that mean for [business goal]?"
Need-payoff: "If we solved this, what would that be worth to you?"
```
#### Stage 3: Demo / Solution Presentation
- **Entry:** Confirmed pain and fit from discovery
- **Owner:** AE (+ SE for complex products)
- **Exit criterion:** Prospect agrees to evaluate / trial; next step defined
- **Action:** Show the workflow that solves their specific pain (not a feature tour)
- **Key metric:** Demo-to-trial/proposal rate (benchmark: 40-60%)
**Demo structure:**
1. Recap their pain (show you listened) — 5 min
2. Show the "aha moment" (fastest path to value) — 10 min
3. Walk the specific workflow they described — 15 min
4. Handle objections, confirm fit — 5 min
5. Define clear next step (date, owners, criteria) — 5 min
Never show features they didn't ask for. Every additional feature is noise until they have a reason to care.
#### Stage 4: Trial / POC
- **Entry:** Prospect commits to evaluate with real data/use case
- **Owner:** AE + CSM or SE
- **Exit criterion:** Success criteria met, POC success confirmed
- **Action:** Define success criteria upfront (in writing). Set a tight timeframe (2-4 weeks max).
- **Key metric:** POC-to-proposal rate (benchmark: 50-70%)
**POC setup requirements:**
```
Before any POC:
□ Signed NDA
□ Written success criteria ("We'll move forward if X happens")
□ Named champion who owns the evaluation
□ Executive sponsor identified
□ Defined timeline with end date
□ Agreed next step if criteria are met
```
If you can't get written success criteria, you don't have a real opportunity. You have a "we'll see."
#### Stage 5: Proposal / Pricing
- **Entry:** POC success OR strong discovery fit for simple products
- **Owner:** AE
- **Exit criterion:** Proposal received, timeline to decision confirmed
- **Action:** Present in a live call, never email a proposal cold
- **Key metric:** Proposal-to-negotiation rate (benchmark: 50-75%)
**Proposal structure:**
1. Problem statement (their words, not yours)
2. Proposed solution (mapped to their workflow)
3. ROI summary (value delivered vs. investment)
4. Pricing options (give 2-3 options; anchors the decision)
5. Next steps with dates
#### Stage 6: Negotiation
- **Entry:** Verbal intent to proceed, price/terms discussion begins
- **Owner:** AE (+ VP Sales for large deals)
- **Exit criterion:** Mutual agreement on terms; contract sent
- **Action:** Never discount before they ask. Discount on scope, not on margin.
- **Key metric:** Negotiation win rate (benchmark: 70-85%)
**Negotiation principles:**
- Get something for everything you give. Discount → multi-year. Fast close → early pay discount.
- Don't negotiate against yourself. Silence after an offer is not rejection.
- Know your walk-away before you enter. If you don't have a BATNA, you have no leverage.
- Legal/procurement delay ≠ deal death. Keep the champion engaged.
#### Stage 7: Close
- **Entry:** Signed contract or PO received
- **Owner:** AE
- **Exit criterion:** Contract countersigned, kickoff date set
- **Action:** Celebrate with the customer. Immediately introduce CSM.
- **Key metric:** Win rate (closed won ÷ (closed won + closed lost))
#### Stage 8: Handoff to Customer Success
- **Entry:** Deal closed
- **Owner:** AE + CSM
- **Exit criterion:** Customer has met their assigned CSM, kickoff scheduled
- **Action:** Internal handoff call with AE + CSM. AE shares: deal context, key stakeholders, use case, success criteria, any promises made during the sale.
**Handoff document (AE fills before first CS meeting):**
```
Account: [name]
ACV: $X
Close date: [date]
Primary contact: [name, title, email]
Economic buyer: [name, title]
Use case: [specific workflow]
Success criteria: [what they said good looks like in 90 days]
Promises made: [anything specific committed during sale]
Risk flags: [competitive, budget, champion strength]
```
---
## MEDDPICC Qualification Framework
MEDDPICC is the enterprise qualification standard. If you can't answer every letter, you don't have a qualified opportunity — you have a conversation.
### M — Metrics
What is the quantified business impact? What does winning look like in numbers?
- "What's the current cost of [the problem]?"
- "How do you measure success in this area today?"
- "If we achieve X outcome, what does that save or earn you?"
**Red flag:** No metrics = no business case = hard to get budget.
### E — Economic Buyer
Who has final authority to approve the budget?
- "Who else will be involved in the final decision?"
- "Have you purchased solutions in this range before? Who approved that?"
- "When we get to final terms, who needs to sign?"
**Red flag:** You only know the user buyer. Economic buyer hasn't engaged.
### D — Decision Criteria
What factors will they use to evaluate and select a solution?
- "What's most important in your evaluation?"
- "How will you compare options?"
- "What does the ideal solution look like to you?"
**Why it matters:** If you don't know their criteria, you're guessing what to prove. Define the criteria before you compete on them.
### D — Decision Process
What are the steps from evaluation to signed contract?
- "Walk me through your process from here to signed agreement."
- "Does procurement get involved? Legal? InfoSec?"
- "Have you purchased software at this price before? How long did that take?"
**Red flag:** No defined process = unlimited sales cycle.
### P — Paper Process
What's the contract and legal process?
- "Who manages vendor contracts on your side?"
- "What's your standard MSA, or do you use ours?"
- "How long does legal review typically take?"
**Why it matters:** Legal and procurement have killed many "done" deals. Start early. Route to your legal team simultaneously.
### I — Identify Pain
What is the specific, felt pain driving this evaluation?
- "What triggered this initiative now vs. six months ago?"
- "What happens if you don't solve this in Q3?"
- "On a scale of 1-10, how urgent is this for your team?"
**Red flag:** Pain isn't felt by the economic buyer. User pain ≠ budget authority.
### C — Champion
Who will actively sell your solution internally when you're not in the room?
- "Who else have you brought into this evaluation?"
- "Can you help us get access to [economic buyer / IT / security]?"
- "If the decision went the wrong way, who would be disappointed?"
**Red flag:** Your champion is enthusiastic but has no internal influence.
### C — Competition
Who else are they evaluating? What's your position?
- "Are you looking at alternatives?"
- "What made you start with us?"
- "Have you used [Competitor X] before?"
**Why it matters:** Knowing the competitive field tells you what you need to prove and what to neutralize.
### MEDDPICC Scorecard
| Letter | Score 1 | Score 2 | Score 3 |
|--------|---------|---------|---------|
| Metrics | No numbers | Approximate value | Specific ROI model |
| Economic Buyer | Unknown | Named, not engaged | Engaged directly |
| Decision Criteria | Vague | Partially defined | Written, weighted |
| Decision Process | Unknown | Verbal description | Steps confirmed, timeline known |
| Paper Process | Unknown | Basic awareness | Legal contacts, standard process known |
| Identify Pain | No urgency | User-level pain | Executive-level pain with consequences |
| Champion | No advocate | Friendly contact | Actively selling internally |
| Competition | Unknown | Identified | Position mapped, differentiation clear |
**Score each 1-3. Total 16+/24 = qualified opportunity. Under 12 = unqualified, do not forecast.**
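The scorecard above can be applied mechanically in deal reviews. A minimal sketch; the "developing" band for 12-15 is an assumption, since only the 16+ and under-12 thresholds are defined above:

```python
# MEDDPICC letters from the scorecard; each scored 1-3.
MEDDPICC = ["Metrics", "Economic Buyer", "Decision Criteria", "Decision Process",
            "Paper Process", "Identify Pain", "Champion", "Competition"]

def qualify(scores: dict) -> tuple:
    # KeyError if any letter is unscored: an incomplete review is not a review.
    total = sum(scores[k] for k in MEDDPICC)
    if total >= 16:
        return (total, "qualified: forecast")
    if total >= 12:
        return (total, "developing: do not commit")   # assumed middle band
    return (total, "unqualified: do not forecast")

print(qualify({k: 2 for k in MEDDPICC}))  # (16, 'qualified: forecast')
```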
---
## Sales Compensation Plans
Comp drives behavior. Design it precisely.
### Base / Variable Split
| Role | Base % | Variable % | Rationale |
|------|--------|-----------|-----------|
| SDR | 60-70% | 30-40% | Activity-based, not purely revenue |
| AE (Inside Sales) | 50% | 50% | Balanced risk/reward |
| AE (Enterprise) | 55-60% | 40-45% | Longer cycle, higher base for stability |
| VP Sales | 50% | 50% | Accountable for team results |
| CSM (retention focus) | 70% | 30% | Less variable, stable relationship role |
| CSM (expansion focus) | 60% | 40% | Expansion quota adds variable |
### Commission Structure
**Standard AE plan:**
```
Base: $80K
Variable: $80K (at 100% quota attainment)
OTE: $160K
Commission rate: OTE variable ÷ Quota
If quota = $800K ARR: commission = $80K ÷ $800K = 10% of ARR closed
Accelerators (performance above quota):
101-125% quota: 1.25x commission rate (12.5% of ARR)
126-150% quota: 1.5x commission rate (15% of ARR)
> 150% quota: 2.0x commission rate (20% of ARR)
```
**Why accelerators matter:**
- They keep top performers motivated past quota
- They make it possible for top reps to earn $200K+ (attracting talent)
- They create the "make it rain" culture
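The plan above can be computed as marginal tiers, assuming each accelerator multiplier applies only to ARR closed within its band (the common reading of tiered plans). A sketch using the example numbers:

```python
def commission(arr_closed: float, quota: float, variable_ote: float) -> float:
    """Tiered commission: base rate below quota, accelerated bands above."""
    base_rate = variable_ote / quota  # e.g. $80K / $800K = 10%
    # (upper bound as fraction of quota, multiplier on base rate)
    tiers = [(1.00, 1.00), (1.25, 1.25), (1.50, 1.50), (float("inf"), 2.00)]
    attainment = arr_closed / quota
    paid, prev = 0.0, 0.0
    for bound, mult in tiers:
        span = min(attainment, bound) - prev  # fraction of quota in this band
        if span <= 0:
            break
        paid += span * quota * base_rate * mult
        prev = bound
    return paid

print(commission(1_000_000, 800_000, 80_000))  # ~105K: 80K base + 200K at 12.5%
```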
### SDR Compensation
SDRs are measured on output (meetings booked, pipeline created), not closed revenue.
```
Quota: 20 qualified meetings booked per month (or $X pipeline created)
Commission: $150-300 per qualified meeting held
Accelerators:
If a meeting converts to closed won: Bonus $250-500
If monthly meetings > 125% of quota: 1.5x rate on upside meetings
```
### Clawbacks
A clawback recovers commission paid on deals that churn or are fraudulently closed.
**Common clawback rules:**
- Full clawback if customer cancels within 90 days of close
- 50% clawback if customer cancels within 91-180 days
- No clawback after 180 days (AE shouldn't be penalized for future CS failures)
- Clawbacks net against future pay: commission is paid immediately, and a triggered clawback is deducted from the next quarter's payout
**Why clawbacks matter:**
- Without them, reps are incentivized to close any deal, regardless of fit
- With them, reps self-qualify more carefully
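The schedule above as a sketch. It assumes commission was paid in full at close and returns the amount to recover against the next payout:

```python
from datetime import date

def clawback(commission_paid: float, close_date: date, churn_date: date) -> float:
    days = (churn_date - close_date).days
    if days <= 90:
        return commission_paid         # full clawback within 90 days
    if days <= 180:
        return commission_paid * 0.5   # 50% clawback for days 91-180
    return 0.0                         # after 180 days, retention is on CS

print(clawback(8_000, date(2026, 1, 10), date(2026, 6, 1)))  # 142 days: 4000.0
```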
### SPIFFs (Sales Performance Incentive Funds)
Short-term tactical incentives for specific behaviors:
- $5K bonus for closing a new vertical deal this quarter
- 1.5x commission on annual prepay deals in Q4
- $1K for closing a deal in a new geographic territory
Use SPIFFs sparingly. Overuse trains reps to wait for the SPIFF before engaging.
### Multi-Year and Prepay Incentives
Align rep behavior with company cash flow:
- Multi-year deals: Credit full TCV against quota, pay commission upfront on TCV
- Annual prepay: 10-20% uplift on commission rate
- Monthly billing: Standard commission rate
---
## Enterprise vs. SMB vs. Self-Serve Models
### Self-Serve / PLG
**Characteristics:**
- Product is the primary acquisition channel
- Credit card required (no invoicing)
- No human touch in the initial purchase
- Sales engages only at enterprise signals (high usage, team expansion, compliance needs)
**Funnel:**
```
Website → Free trial / Freemium → Activation → PQL → Expansion → Enterprise
```
**Key metrics:**
- Free-to-paid conversion rate (benchmark: 2-5% of signups)
- Time to activation (first core action)
- PQL → expansion conversion rate
- NRR from self-serve base
**Sales involvement triggers (PQL signals):**
- Team size > 10 seats
- Usage spikes (power user patterns)
- Feature limit hits on core features
- Job title change (new economic buyer appears in account)
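The trigger list can be encoded as a simple signal check for routing accounts to sales. Field names (`seats`, `usage_percentile`, `hit_limits`, `new_exec`) are hypothetical, not a real analytics schema:

```python
def pql_signals(account: dict) -> list:
    """Return the PQL triggers an account has fired (empty list = no sales touch)."""
    signals = []
    if account.get("seats", 0) > 10:
        signals.append("team size > 10 seats")
    if account.get("usage_percentile", 0) >= 90:
        signals.append("power-user usage spike")
    if account.get("hit_limits"):
        signals.append("feature limits hit on core features")
    if account.get("new_exec"):
        signals.append("new economic buyer in account")
    return signals

acct = {"seats": 14, "usage_percentile": 95}
print(pql_signals(acct))  # ['team size > 10 seats', 'power-user usage spike']
```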
### SMB Inside Sales
**Characteristics:**
- ACV $5K-25K
- 30-60 day sales cycle
- Inbound-heavy or light outbound
- SDR → AE → CS model
- Phone + email + video; no in-person
**Funnel:**
```
Inbound/MQL → SDR qualifies → AE discovery → Demo → Proposal → Close
```
**Key metrics:**
- MQL-to-SQL rate (benchmark: 15-25%)
- SQL-to-close rate (benchmark: 20-30%)
- Average sales cycle (30-60 days)
- AE productivity: $600K-$1M quota per rep
**Team ratios:**
- 1 SDR supports 3-4 AEs
- 1 CSM manages $1M-2M ARR
### Enterprise Sales
**Characteristics:**
- ACV $50K+
- 90-365 day sales cycle
- Outbound prospecting + inbound from brand
- AE + SE + executive sponsor model
- Multi-stakeholder: champion, economic buyer, IT, legal, procurement
**Funnel:**
```
Account targeting → Executive outreach → Discovery → POC → Security review → Legal → Procurement → Close
```
**Key metrics:**
- Deals in pipeline (quality matters more than volume)
- POC win rate (benchmark: 60-75%)
- Average sales cycle (3-12 months)
- AE productivity: $1.5M-$3M quota per rep
**Team ratios:**
- 1 SE supports 3-4 AEs
- 1 CSM manages $2M-5M ARR (named accounts, high-touch)
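The ratios across these models translate directly into headcount math. A rough enterprise sizing sketch using midpoints of the benchmark ranges above ($2M quota per AE, 1 SE per 3.5 AEs, 1 CSM per $3.5M ARR):

```python
import math

def enterprise_headcount(new_arr_target: float, total_arr: float,
                         quota_per_ae: float = 2_000_000) -> dict:
    aes = math.ceil(new_arr_target / quota_per_ae)
    return {
        "AEs": aes,
        "SEs": math.ceil(aes / 3.5),                # midpoint of 1 SE : 3-4 AEs
        "CSMs": math.ceil(total_arr / 3_500_000),   # midpoint of $2M-5M book
    }

print(enterprise_headcount(new_arr_target=10_000_000, total_arr=20_000_000))
# {'AEs': 5, 'SEs': 2, 'CSMs': 6}
```

Ceilings, not averages: a fractional rep is a hiring req, not a discount on the target.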
---
## Sales Hiring and Ramp
### What "Good" Looks Like by Role
**SDR (entry level):**
- 1-2 years of outbound experience OR strong track record in customer-facing role
- Resilient: rejection is the job
- Coachable: SDR is a proving ground, not a final destination
- Can write clear, concise prospecting emails without templates
**AE (inside sales):**
- 2-4 years sales experience, preferably SaaS
- Can articulate their process for a discovery call
- Knows their numbers: quota, attainment, average deal size, sales cycle
- Shows how they build pipeline (AEs who only work inbound are a risk)
**AE (enterprise):**
- 4-8 years B2B sales, at least 2 in enterprise
- Has closed deals > $100K ACV
- Can name the stakeholders in a complex deal they navigated
- Understands procurement, security review, multi-year contracts
**VP Sales:**
- Has scaled a team from where you are to 2x your size
- Can build a comp plan from scratch
- Has hiring and firing experience
- Revenue from a repeatable process, not personal relationships
### Interview Process
**3-stage process:**
1. **Recruiter screen** (30 min): Motivation, experience, logistics
2. **Manager interview** (60 min): Structured questions on process, examples, numbers
3. **Panel / role play** (90 min): Mock discovery call + debrief; team fit
**Role play rubric:**
- Did they prepare (knew your product, your ICP)?
- Did they ask before pitching?
- Did they handle pushback without capitulating immediately?
- Did they confirm a next step with a date?
### Onboarding Structure (6-Week Ramp)
| Week | Focus | Activities |
|------|-------|-----------|
| 1 | Company, product, ICP | Onboarding sessions, product sandbox, shadow AE calls |
| 2 | Sales process, tools, messaging | CRM training, call review, write first prospecting emails |
| 3 | First outreach | Send first sequences, book first meetings, shadow closes |
| 4 | Independent discovery | Lead own discovery calls with manager reviewing |
| 5 | Full cycle | Handle pipeline independently, weekly coaching |
| 6 | Quota-bearing | 25% of quota expectation; full accountability begins |
### Performance Management
**Clear standards, no surprises:**
```
Month 3: 25% of quota expected. Miss by > 50% → performance conversation.
Month 4: 50% of quota expected. Miss by > 40% → PIP warning.
Month 5: 75% of quota. Miss by > 30% → formal PIP.
Month 6+: 100% of quota. Consistent miss → exit.
```
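The standards above as a quick check. Thresholds mirror the schedule; "consistent miss" is simplified here to any month-6+ miss, which is an assumption:

```python
def ramp_check(month: int, attainment: float) -> str:
    # month -> (expected attainment, miss tolerance before escalation)
    schedule = {3: (0.25, 0.50), 4: (0.50, 0.40), 5: (0.75, 0.30)}
    actions = {3: "performance conversation", 4: "PIP warning", 5: "formal PIP"}
    if month in schedule:
        expected, tolerance = schedule[month]
        if attainment < expected * (1 - tolerance):
            return actions[month]
        return "on track"
    if month >= 6 and attainment < 1.0:
        return "exit path"  # simplification of "consistent miss"
    return "on track"       # months 1-2: still ramping

print(ramp_check(4, 0.20))  # 20% vs 50% expected, missed by >40%: 'PIP warning'
```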
**PIP (Performance Improvement Plan) — not for show:**
- Should include specific, measurable targets (not "improve attitude")
- 30-60 day timeline
- Weekly check-ins with manager
- If targets aren't met: exit, no extensions
- A PIP that doesn't lead to improvement or exit is a management failure
**Rule:** Low performers who stay cost you your top performers. They watch what you tolerate.


@@ -0,0 +1,742 @@
#!/usr/bin/env python3
"""
Churn & Retention Analyzer
===========================
Customer-level churn and Net Revenue Retention (NRR) analysis for B2B SaaS.
Calculates:
- Gross Revenue Retention (GRR) and Net Revenue Retention (NRR)
- Monthly and annual churn rates (logo + revenue)
- Cohort-based retention curves
- At-risk account identification
- Expansion revenue segmentation
- ARR waterfall (new / expansion / contraction / churn)
Usage:
python churn_analyzer.py
python churn_analyzer.py --csv customers.csv
python churn_analyzer.py --period 2026-Q1 --output summary
Input format (CSV):
customer_id, name, segment, arr, start_date, [churn_date], [expansion_arr], [contraction_arr]
Stdlib only. No dependencies.
"""
import csv
import sys
import json
import argparse
import statistics
from datetime import date, datetime, timedelta
from collections import defaultdict
from io import StringIO
from itertools import groupby
# ---------------------------------------------------------------------------
# Data model
# ---------------------------------------------------------------------------
class Customer:
def __init__(self, customer_id, name, segment, arr, start_date,
churn_date=None, expansion_arr=0.0, contraction_arr=0.0,
health_score=None):
self.customer_id = customer_id
self.name = name
self.segment = segment
self.arr = float(arr)
self.start_date = self._parse_date(start_date)
self.churn_date = self._parse_date(churn_date) if churn_date else None
self.expansion_arr = float(expansion_arr or 0)
self.contraction_arr = float(contraction_arr or 0)
self.health_score = float(health_score) if health_score else None
@staticmethod
def _parse_date(value):
if not value or str(value).strip() in ("", "None", "null"):
return None
for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d/%m/%Y", "%Y/%m/%d"):
try:
return datetime.strptime(str(value).strip(), fmt).date()
except ValueError:
continue
raise ValueError(f"Cannot parse date: {value!r}")
def is_churned(self):
return self.churn_date is not None
def is_active(self, as_of=None):
as_of = as_of or date.today()
if self.churn_date and self.churn_date <= as_of:
return False
return self.start_date <= as_of
def tenure_days(self, as_of=None):
as_of = as_of or date.today()
end = self.churn_date if self.churn_date else as_of
return (end - self.start_date).days
def tenure_months(self, as_of=None):
return self.tenure_days(as_of) / 30.44
def cohort_month(self):
"""Acquisition cohort: YYYY-MM of start_date."""
return self.start_date.strftime("%Y-%m")
def cohort_quarter(self):
q = (self.start_date.month - 1) // 3 + 1
return f"Q{q} {self.start_date.year}"
def net_arr(self):
"""Current ARR + expansion - contraction."""
return self.arr + self.expansion_arr - self.contraction_arr
def days_since_acquisition(self, as_of=None):
as_of = as_of or date.today()
return (as_of - self.start_date).days
# ---------------------------------------------------------------------------
# Core metrics
# ---------------------------------------------------------------------------
class RetentionAnalyzer:
def __init__(self, customers, as_of=None):
self.customers = customers
self.as_of = as_of or date.today()
def active_customers(self, as_of=None):
as_of = as_of or self.as_of
return [c for c in self.customers if c.is_active(as_of)]
def churned_customers(self, start=None, end=None):
"""Customers who churned in [start, end]."""
result = []
for c in self.customers:
if not c.churn_date:
continue
if start and c.churn_date < start:
continue
if end and c.churn_date > end:
continue
result.append(c)
return result
def arr_waterfall(self, period_start, period_end):
"""
Calculate ARR waterfall for a given period.
Returns dict with opening_arr, new_arr, expansion_arr, contraction_arr,
churned_arr, closing_arr, nrr, grr.
"""
# Opening: active at period start
opening_customers = [c for c in self.customers if c.is_active(period_start)]
opening_arr = sum(c.arr for c in opening_customers)
opening_ids = {c.customer_id for c in opening_customers}
# New: started during the period
new_customers = [
c for c in self.customers
if period_start < c.start_date <= period_end
]
new_arr = sum(c.arr for c in new_customers)
# Churned: were active at start, churn_date within period
churned = [
c for c in opening_customers
if c.churn_date and period_start < c.churn_date <= period_end
]
churned_arr = sum(c.arr for c in churned)
# Expansion and contraction: from customers active at opening
expansion = sum(
c.expansion_arr for c in opening_customers
if not c.is_churned() or (c.churn_date and c.churn_date > period_end)
)
contraction = sum(
c.contraction_arr for c in opening_customers
if not c.is_churned() or (c.churn_date and c.churn_date > period_end)
)
closing_arr = opening_arr + new_arr + expansion - contraction - churned_arr
grr = (opening_arr - contraction - churned_arr) / opening_arr if opening_arr else 0
nrr = (opening_arr + expansion - contraction - churned_arr) / opening_arr if opening_arr else 0
return {
"period_start": period_start.isoformat(),
"period_end": period_end.isoformat(),
"opening_arr": opening_arr,
"new_arr": new_arr,
"expansion_arr": expansion,
"contraction_arr": contraction,
"churned_arr": churned_arr,
"closing_arr": closing_arr,
"net_new_arr": new_arr + expansion - contraction - churned_arr,
"grr": max(0.0, grr),
"nrr": max(0.0, nrr),
}
def logo_churn_rate(self, period_start, period_end):
"""Logo churn rate for a period."""
opening = [c for c in self.customers if c.is_active(period_start)]
churned = [
c for c in opening
if c.churn_date and period_start < c.churn_date <= period_end
]
return len(churned) / len(opening) if opening else 0.0
def revenue_churn_rate(self, period_start, period_end):
"""Gross revenue churn rate for a period."""
opening = [c for c in self.customers if c.is_active(period_start)]
opening_arr = sum(c.arr for c in opening)
churned_arr = sum(
c.arr for c in opening
if c.churn_date and period_start < c.churn_date <= period_end
)
contraction = sum(c.contraction_arr for c in opening)
return (churned_arr + contraction) / opening_arr if opening_arr else 0.0
# ---------------------------------------------------------------------------
# Cohort analysis
# ---------------------------------------------------------------------------
class CohortAnalyzer:
def __init__(self, customers):
self.customers = customers
def build_cohorts(self):
"""Group customers by acquisition cohort (month)."""
cohorts = defaultdict(list)
for c in self.customers:
cohorts[c.cohort_month()].append(c)
return dict(sorted(cohorts.items()))
def retention_at_month(self, cohort_customers, months_after):
"""
What fraction of cohort ARR remains `months_after` months after acquisition?
"""
if not cohort_customers:
return None
opening_arr = sum(c.arr for c in cohort_customers)
if opening_arr == 0:
return None
earliest_start = min(c.start_date for c in cohort_customers)
check_date = earliest_start + timedelta(days=int(months_after * 30.44))
if check_date > date.today():
return None # Future — no data
retained_arr = sum(
c.arr for c in cohort_customers
if c.is_active(check_date)
)
return retained_arr / opening_arr
def retention_curve(self, cohort_customers, max_months=24):
"""Return retention at months 0, 3, 6, 9, 12, 18, 24."""
checkpoints = [0, 3, 6, 9, 12, 18, 24]
checkpoints = [m for m in checkpoints if m <= max_months]
curve = {}
for m in checkpoints:
rate = self.retention_at_month(cohort_customers, m)
if rate is not None:
curve[m] = rate
return curve
def cohort_report(self):
"""Returns dict: cohort → {size, opening_arr, retention_curve}."""
cohorts = self.build_cohorts()
report = {}
for cohort_month, customers in cohorts.items():
curve = self.retention_curve(customers)
report[cohort_month] = {
"customer_count": len(customers),
"opening_arr": sum(c.arr for c in customers),
"churned_count": sum(1 for c in customers if c.is_churned()),
"current_retention": curve.get(12, curve.get(max(curve.keys()) if curve else 0)),
"retention_curve": curve,
}
return report
def identify_at_risk(self, tenure_months_max=6, health_threshold=60):
"""
Identify at-risk customers based on:
- Low health score (if available)
- Short tenure (haven't proved long-term value)
- High contraction signals
"""
at_risk = []
for c in self.customers:
if c.is_churned():
continue
reasons = []
score = 0
# Health score signal
if c.health_score is not None and c.health_score < health_threshold:
reasons.append(f"Health score {c.health_score:.0f} < {health_threshold}")
score += 40
# Early tenure risk
tenure = c.tenure_months()
if tenure < tenure_months_max:
reasons.append(f"Tenure {tenure:.1f} months (< {tenure_months_max})")
score += 20
# Contraction signal
if c.contraction_arr > 0:
contraction_pct = c.contraction_arr / c.arr
reasons.append(f"Contraction {contraction_pct:.0%} of ARR")
score += 30
# No expansion in mature account
if tenure > 12 and c.expansion_arr == 0:
reasons.append("No expansion after 12+ months (stagnant)")
score += 10
if score > 0:
at_risk.append({
"customer_id": c.customer_id,
"name": c.name,
"segment": c.segment,
"arr": c.arr,
"tenure_months": round(tenure, 1),
"health_score": c.health_score,
"risk_score": score,
"risk_reasons": reasons,
})
return sorted(at_risk, key=lambda x: -x["risk_score"])
# ---------------------------------------------------------------------------
# Expansion analysis
# ---------------------------------------------------------------------------
class ExpansionAnalyzer:
def __init__(self, customers):
self.customers = customers
def expansion_summary(self):
active = [c for c in self.customers if not c.is_churned()]
expanding = [c for c in active if c.expansion_arr > 0]
contracting = [c for c in active if c.contraction_arr > 0]
total_arr = sum(c.arr for c in active)
total_expansion = sum(c.expansion_arr for c in active)
total_contraction = sum(c.contraction_arr for c in active)
return {
"active_customers": len(active),
"total_arr": total_arr,
"expanding_count": len(expanding),
"contracting_count": len(contracting),
"expansion_arr": total_expansion,
"contraction_arr": total_contraction,
"expansion_rate": total_expansion / total_arr if total_arr else 0,
"contraction_rate": total_contraction / total_arr if total_arr else 0,
"net_expansion_rate": (total_expansion - total_contraction) / total_arr if total_arr else 0,
}
def expansion_by_segment(self):
active = [c for c in self.customers if not c.is_churned()]
by_segment = defaultdict(lambda: {"arr": 0.0, "expansion": 0.0,
"contraction": 0.0, "count": 0})
for c in active:
seg = c.segment or "Unspecified"
by_segment[seg]["arr"] += c.arr
by_segment[seg]["expansion"] += c.expansion_arr
by_segment[seg]["contraction"] += c.contraction_arr
by_segment[seg]["count"] += 1
result = {}
for seg, data in by_segment.items():
arr = data["arr"]
result[seg] = {
"customer_count": data["count"],
"arr": arr,
"expansion_arr": data["expansion"],
"contraction_arr": data["contraction"],
"expansion_rate": data["expansion"] / arr if arr else 0,
"net_nrr_contribution": (arr + data["expansion"] - data["contraction"]) / arr if arr else 0,
}
return result
def top_expansion_candidates(self, min_tenure_months=6, min_arr=5000):
"""
Customers who are active, healthy tenure, but have zero expansion.
These are upsell/expansion targets.
"""
active = [c for c in self.customers if not c.is_churned()]
candidates = []
for c in active:
tenure = c.tenure_months()
if (tenure >= min_tenure_months
and c.arr >= min_arr
and c.expansion_arr == 0
and (c.health_score is None or c.health_score >= 60)):
candidates.append({
"customer_id": c.customer_id,
"name": c.name,
"segment": c.segment,
"arr": c.arr,
"tenure_months": round(tenure, 1),
"health_score": c.health_score,
})
return sorted(candidates, key=lambda x: -x["arr"])
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_currency(value):
if value >= 1_000_000:
return f"${value / 1_000_000:.2f}M"
if value >= 1_000:
return f"${value / 1_000:.1f}K"
return f"${value:.0f}"
def fmt_pct(value):
return f"{value * 100:.1f}%"
def nrr_status(nrr):
if nrr >= 1.20:
return "✅ World-class"
if nrr >= 1.10:
return "✅ Healthy"
if nrr >= 1.00:
return "⚠️ Acceptable"
if nrr >= 0.90:
return "🔴 Concerning"
return "🔴 Crisis"
def grr_status(grr):
if grr >= 0.90:
return "✅ Strong"
if grr >= 0.85:
return "⚠️ Acceptable"
return "🔴 Below threshold"
def print_header(title):
width = 70
print()
print("=" * width)
print(f" {title}")
print("=" * width)
def print_section(title):
print(f"\n--- {title} ---")
def print_full_report(customers, period_start, period_end):
analyzer = RetentionAnalyzer(customers, as_of=period_end)
cohort_analyzer = CohortAnalyzer(customers)
expansion_analyzer = ExpansionAnalyzer(customers)
print_header("CHURN & RETENTION ANALYZER")
print(f" Analysis period: {period_start.isoformat()}{period_end.isoformat()}")
print(f" Total customers in dataset: {len(customers)}")
active = analyzer.active_customers(period_end)
churned_in_period = analyzer.churned_customers(period_start, period_end)
print(f" Active at period end: {len(active)}")
print(f" Churned in period: {len(churned_in_period)}")
# ── ARR Waterfall
print_section("ARR WATERFALL")
wf = analyzer.arr_waterfall(period_start, period_end)
print(f" Opening ARR: {fmt_currency(wf['opening_arr'])}")
print(f" + New Logo ARR: +{fmt_currency(wf['new_arr'])}")
print(f" + Expansion ARR: +{fmt_currency(wf['expansion_arr'])}")
print(f" - Contraction ARR: -{fmt_currency(wf['contraction_arr'])}")
print(f" - Churned ARR: -{fmt_currency(wf['churned_arr'])}")
print(f" {'─' * 42}")
print(f" Closing ARR: {fmt_currency(wf['closing_arr'])}")
print(f" Net New ARR: {'+' if wf['net_new_arr'] >= 0 else ''}{fmt_currency(wf['net_new_arr'])}")
# ── NRR / GRR
print_section("RETENTION METRICS")
nrr = wf["nrr"]
grr = wf["grr"]
logo_churn = analyzer.logo_churn_rate(period_start, period_end)
rev_churn = analyzer.revenue_churn_rate(period_start, period_end)
print(f" NRR (Net Revenue Retention): {fmt_pct(nrr)} {nrr_status(nrr)}")
print(f" GRR (Gross Revenue Retention): {fmt_pct(grr)} {grr_status(grr)}")
print(f" Logo Churn Rate (period): {fmt_pct(logo_churn)}")
print(f" Revenue Churn Rate (period): {fmt_pct(rev_churn)}")
if wf["opening_arr"] > 0:
expansion_rate = wf["expansion_arr"] / wf["opening_arr"]
print(f" Expansion Rate (period): {fmt_pct(expansion_rate)}")
print()
print(f" NRR Benchmark: >120% world-class | 100-120% healthy | <100% fix immediately")
# ── Expansion summary
print_section("EXPANSION REVENUE")
exp = expansion_analyzer.expansion_summary()
print(f" Expanding customers: {exp['expanding_count']} / {exp['active_customers']} ({fmt_pct(exp['expanding_count']/exp['active_customers']) if exp['active_customers'] else ''})")
print(f" Contracting: {exp['contracting_count']} / {exp['active_customers']}")
print(f" Expansion ARR: {fmt_currency(exp['expansion_arr'])} ({fmt_pct(exp['expansion_rate'])} of base)")
print(f" Contraction ARR: {fmt_currency(exp['contraction_arr'])}")
print(f" Net Expansion Rate: {fmt_pct(exp['net_expansion_rate'])}")
# ── Segment breakdown
print_section("SEGMENT BREAKDOWN (NRR Components)")
seg_data = expansion_analyzer.expansion_by_segment()
col_w = [18, 8, 12, 10, 10, 10]
h = (f" {'Segment':<{col_w[0]}} {'Custs':>{col_w[1]}} {'ARR':>{col_w[2]}} "
f"{'Expansion':>{col_w[3]}} {'Contraction':>{col_w[4]}} {'NRR':>{col_w[5]}}")
print(h)
print(" " + "-" * (sum(col_w) + 5))
for seg, data in sorted(seg_data.items(), key=lambda x: -x[1]["arr"]):
print(f" {seg:<{col_w[0]}} {data['customer_count']:>{col_w[1]}} "
f"{fmt_currency(data['arr']):>{col_w[2]}} "
f"{fmt_currency(data['expansion_arr']):>{col_w[3]}} "
f"{fmt_currency(data['contraction_arr']):>{col_w[4]}} "
f"{fmt_pct(data['net_nrr_contribution']):>{col_w[5]}}")
# ── Cohort retention
print_section("COHORT RETENTION CURVES")
cohort_report = cohort_analyzer.cohort_report()
print(f" {'Cohort':<10} {'Custs':>6} {'Opening ARR':>13} {'Mo.3':>8} {'Mo.6':>8} {'Mo.12':>8}")
print(" " + "-" * 57)
for cohort, data in cohort_report.items():
curve = data["retention_curve"]
m3 = fmt_pct(curve[3]) if 3 in curve else ""
m6 = fmt_pct(curve[6]) if 6 in curve else ""
m12 = fmt_pct(curve[12]) if 12 in curve else ""
print(f" {cohort:<10} {data['customer_count']:>6} "
f"{fmt_currency(data['opening_arr']):>13} "
f"{m3:>8} {m6:>8} {m12:>8}")
# ── At-risk accounts
print_section("AT-RISK ACCOUNTS")
at_risk = cohort_analyzer.identify_at_risk()
if at_risk:
print(f" {'Customer':<22} {'Segment':<14} {'ARR':>10} {'Tenure':>8} {'Risk':>6} Reason")
print(" " + "-" * 80)
for acct in at_risk[:10]: # Top 10
reason_short = acct["risk_reasons"][0] if acct["risk_reasons"] else ""
tenure_str = f"{acct['tenure_months']}mo"
print(f" {acct['name']:<22} {acct['segment']:<14} "
f"{fmt_currency(acct['arr']):>10} {tenure_str:>8} "
f"{acct['risk_score']:>5} {reason_short}")
if len(at_risk) > 10:
print(f" ... and {len(at_risk) - 10} more at-risk accounts")
else:
print(" ✅ No at-risk accounts identified")
# ── Expansion candidates
print_section("EXPANSION CANDIDATES (no expansion yet, healthy tenure)")
candidates = expansion_analyzer.top_expansion_candidates()
if candidates:
print(f" {'Customer':<22} {'Segment':<14} {'ARR':>10} {'Tenure':>8} Action")
print(" " + "-" * 70)
for c in candidates[:8]:
action = "Upsell review" if c["arr"] > 20000 else "Seat expansion call"
tenure_str = f"{c['tenure_months']}mo"
print(f" {c['name']:<22} {c['segment']:<14} "
f"{fmt_currency(c['arr']):>10} {tenure_str:>8} {action}")
else:
print(" ✅ All eligible accounts have expansion in motion")
# ── Red flags
print_section("HEALTH FLAGS")
flags = []
if nrr < 1.0:
flags.append("🔴 NRR below 100% — revenue base is shrinking. Fix before scaling sales.")
if grr < 0.85:
flags.append(f"🔴 GRR {fmt_pct(grr)} — gross retention below 85% threshold. Churn is a product/CS problem.")
if logo_churn > 0.05:
flags.append(f"⚠️ Logo churn {fmt_pct(logo_churn)} this period — run cohort analysis to find the pattern.")
if exp["expansion_rate"] < 0.10 and exp["active_customers"] > 10:
flags.append("⚠️ Expansion rate below 10% — upsell motion is weak or non-existent.")
churned_arr_pct = wf["churned_arr"] / wf["opening_arr"] if wf["opening_arr"] else 0
if churned_arr_pct > 0.10:
flags.append(f"🔴 Revenue churn at {fmt_pct(churned_arr_pct)} of opening ARR this period — high urgency.")
if len(at_risk) > len(active) * 0.20:
flags.append(f"⚠️ {len(at_risk)} of {len(active)} active accounts flagged at-risk ({fmt_pct(len(at_risk)/len(active) if active else 0)})")
if flags:
for f in flags:
print(f" {f}")
else:
print(" ✅ No critical health flags")
print()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
SAMPLE_CSV = """customer_id,name,segment,arr,start_date,churn_date,expansion_arr,contraction_arr,health_score
C001,Acme Manufacturing,Enterprise,120000,2023-01-15,,45000,0,82
C002,TechStart Inc,Mid-Market,28000,2023-02-01,,8000,0,74
C003,Global Retail Co,Enterprise,250000,2023-01-05,,0,25000,45
C004,MedTech Solutions,Mid-Market,45000,2023-03-10,,15000,0,88
C005,FinServ Holdings,Enterprise,185000,2023-01-20,2023-09-15,0,0,
C006,StartupHub Network,SMB,12000,2023-04-01,,0,3000,55
C007,EduPlatform Inc,Mid-Market,32000,2023-02-15,,10000,0,91
C008,BioLab Analytics,Enterprise,95000,2023-01-10,,20000,0,78
C009,RegionalBank Corp,Enterprise,310000,2023-03-01,,75000,0,85
C010,CloudOps Systems,Mid-Market,38000,2023-05-01,2024-01-10,0,0,
C011,InsurTech Platform,Mid-Market,55000,2023-06-15,,0,0,62
C012,LegalAI Corp,SMB,18000,2023-07-01,,5000,0,79
C013,RetailChain Ltd,Enterprise,140000,2023-04-20,,0,20000,41
C014,DataPipeline Co,Mid-Market,42000,2023-08-01,,12000,0,83
C015,NanoTech Startup,SMB,9500,2023-09-15,2024-02-28,0,0,
C016,MedDevice Corp,Enterprise,220000,2023-02-28,,60000,0,92
C017,ConsultingFirm XYZ,SMB,15000,2023-10-01,,0,5000,38
C018,GovTech Solutions,Enterprise,175000,2023-11-15,,0,0,71
C019,AgriData Systems,Mid-Market,31000,2024-01-10,,8000,0,77
C020,HealthcarePlus,Mid-Market,62000,2024-02-01,,0,0,65
"""
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def load_customers_from_csv(csv_text):
reader = csv.DictReader(StringIO(csv_text))
customers = []
errors = []
for i, row in enumerate(reader, start=2):
try:
c = Customer(
customer_id=row.get("customer_id", f"row_{i}"),
name=row.get("name", f"Customer {i}"),
segment=row.get("segment", ""),
arr=row.get("arr", 0),
start_date=row.get("start_date", ""),
churn_date=row.get("churn_date", None) or None,
expansion_arr=row.get("expansion_arr", 0) or 0,
contraction_arr=row.get("contraction_arr", 0) or 0,
health_score=row.get("health_score", None) or None,
)
customers.append(c)
except (ValueError, KeyError) as e:
errors.append(f" Row {i}: {e}")
if errors:
print("⚠️ Skipped rows with errors:")
for err in errors:
print(err)
return customers
def parse_period(period_str):
"""Parse 'YYYY-QN' or 'YYYY-MM' into (start_date, end_date)."""
if not period_str:
today = date.today()
q = (today.month - 1) // 3
start = date(today.year, q * 3 + 1, 1)
# End of current quarter
end_month = start.month + 2
end_year = start.year + (end_month - 1) // 12
end_month = ((end_month - 1) % 12) + 1
import calendar
end_day = calendar.monthrange(end_year, end_month)[1]
return start, date(end_year, end_month, end_day)
import calendar
if "-Q" in period_str:
year, qpart = period_str.split("-Q")
year = int(year)
q = int(qpart)
start_month = (q - 1) * 3 + 1
end_month = start_month + 2
start = date(year, start_month, 1)
end = date(year, end_month, calendar.monthrange(year, end_month)[1])
return start, end
# YYYY-MM
year, month = period_str.split("-")
year, month = int(year), int(month)
start = date(year, month, 1)
end = date(year, month, calendar.monthrange(year, month)[1])
return start, end
def main():
parser = argparse.ArgumentParser(
description="Churn & Retention Analyzer — NRR, cohort analysis, at-risk detection"
)
parser.add_argument(
"--csv", metavar="FILE",
help="CSV file with customer data (uses sample data if not provided)"
)
parser.add_argument(
"--period", metavar="PERIOD",
help='Analysis period: "2026-Q1" or "2026-03" (defaults to current quarter)'
)
parser.add_argument(
"--output", choices=["summary", "full", "json"],
default="full",
help="Output format (default: full)"
)
args = parser.parse_args()
# Load data
if args.csv:
try:
with open(args.csv, "r", encoding="utf-8") as f:
csv_text = f.read()
except FileNotFoundError:
print(f"Error: File not found: {args.csv}", file=sys.stderr)
sys.exit(1)
else:
print("No --csv provided. Using sample customer data.\n")
csv_text = SAMPLE_CSV
customers = load_customers_from_csv(csv_text)
if not customers:
print("No customers loaded. Exiting.", file=sys.stderr)
sys.exit(1)
period_start, period_end = parse_period(args.period)
if args.output == "json":
analyzer = RetentionAnalyzer(customers, as_of=period_end)
cohort_analyzer = CohortAnalyzer(customers)
expansion_analyzer = ExpansionAnalyzer(customers)
wf = analyzer.arr_waterfall(period_start, period_end)
output = {
"period": {"start": period_start.isoformat(), "end": period_end.isoformat()},
"arr_waterfall": wf,
"logo_churn_rate": analyzer.logo_churn_rate(period_start, period_end),
"revenue_churn_rate": analyzer.revenue_churn_rate(period_start, period_end),
"cohort_report": {k: {**v, "retention_curve": {str(m): r for m, r in v["retention_curve"].items()}}
for k, v in cohort_analyzer.cohort_report().items()},
"at_risk_accounts": cohort_analyzer.identify_at_risk(),
"expansion_summary": expansion_analyzer.expansion_summary(),
"expansion_by_segment": expansion_analyzer.expansion_by_segment(),
"expansion_candidates": expansion_analyzer.top_expansion_candidates(),
}
print(json.dumps(output, indent=2))
elif args.output == "summary":
analyzer = RetentionAnalyzer(customers, as_of=period_end)
wf = analyzer.arr_waterfall(period_start, period_end)
print_header("NRR SUMMARY")
print(f" Period: {period_start.isoformat()}{period_end.isoformat()}")
print(f" NRR: {fmt_pct(wf['nrr'])} {nrr_status(wf['nrr'])}")
print(f" GRR: {fmt_pct(wf['grr'])} {grr_status(wf['grr'])}")
print(f" Opening: {fmt_currency(wf['opening_arr'])}")
print(f" Closing: {fmt_currency(wf['closing_arr'])}")
print(f" Net New: {fmt_currency(wf['net_new_arr'])}")
print()
else:
print_full_report(customers, period_start, period_end)
if __name__ == "__main__":
main()
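The quarter parsing above can be sketched standalone — a minimal version of the `YYYY-Qn` branch, stdlib only (function name `parse_quarter` is ours, for illustration):

```python
import calendar
from datetime import date

def parse_quarter(period_str):
    """Parse "YYYY-Qn" into (first day, last day) of that quarter."""
    year, qpart = period_str.split("-Q")
    start_month = (int(qpart) - 1) * 3 + 1   # Q1→1, Q2→4, Q3→7, Q4→10
    end_month = start_month + 2
    start = date(int(year), start_month, 1)
    # monthrange returns (weekday_of_first_day, days_in_month)
    end = date(int(year), end_month, calendar.monthrange(int(year), end_month)[1])
    return start, end
```

For example, `parse_quarter("2026-Q1")` yields 2026-01-01 through 2026-03-31; `monthrange` handles the 30/31-day (and February) endings.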


@@ -0,0 +1,571 @@
#!/usr/bin/env python3
"""
Revenue Forecast Model
======================
Pipeline-based revenue forecasting for B2B SaaS.
Models:
- Weighted pipeline (stage probability × deal value)
- Historical win rate adjustment (calibrate to actuals)
- Scenario analysis (conservative / base / upside)
- Monthly and quarterly projection with confidence ranges
Usage:
python revenue_forecast_model.py
python revenue_forecast_model.py --csv pipeline.csv
python revenue_forecast_model.py --scenario conservative
Input format (CSV):
deal_id,name,stage,arr_value,close_date,rep,segment
Stdlib only. No dependencies.
"""
import csv
import sys
import json
import argparse
import statistics
from datetime import date, datetime, timedelta
from collections import defaultdict
from io import StringIO
# ---------------------------------------------------------------------------
# Stage configuration
# ---------------------------------------------------------------------------
DEFAULT_STAGE_PROBABILITIES = {
"discovery": 0.10,
"qualification": 0.25,
"demo": 0.40,
"proposal": 0.55,
"poc": 0.65,
"negotiation": 0.80,
"verbal_commit": 0.92,
"closed_won": 1.00,
"closed_lost": 0.00,
}
SCENARIO_MULTIPLIERS = {
"conservative": 0.85, # Win rate 15% below historical
"base": 1.00, # Historical win rate
"upside": 1.15, # Win rate 15% above historical
}
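A quick sketch of how the stage table and scenario multipliers combine: probability × multiplier, clamped to [0, 1], then applied to the deal's ARR (same math as `weighted_value`; the dollar figures and trimmed-down tables here are illustrative):

```python
STAGE_PROBS = {"demo": 0.40, "negotiation": 0.80, "verbal_commit": 0.92}
MULTIPLIERS = {"conservative": 0.85, "base": 1.00, "upside": 1.15}

def weighted(arr_value, stage, scenario):
    # Clamp so the upside multiplier never pushes a probability above 1.0
    prob = STAGE_PROBS.get(stage, 0.0) * MULTIPLIERS[scenario]
    return arr_value * min(1.0, max(0.0, prob))
```

A $100K verbal_commit deal under upside hits the clamp: 0.92 × 1.15 = 1.058 → 1.0, so it contributes its full $100K; an unknown stage contributes nothing.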
# ---------------------------------------------------------------------------
# Data model
# ---------------------------------------------------------------------------
class Deal:
def __init__(self, deal_id, name, stage, arr_value, close_date, rep="", segment=""):
self.deal_id = deal_id
self.name = name
self.stage = stage.lower().replace(" ", "_").replace("/", "_")
self.arr_value = float(arr_value)
self.close_date = self._parse_date(close_date)
self.rep = rep
self.segment = segment
@staticmethod
def _parse_date(value):
for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d/%m/%Y", "%Y/%m/%d"):
try:
return datetime.strptime(str(value), fmt).date()
except ValueError:
continue
raise ValueError(f"Cannot parse date: {value!r}")
@property
def quarter(self):
q = (self.close_date.month - 1) // 3 + 1
return f"Q{q} {self.close_date.year}"
@property
def month_key(self):
return self.close_date.strftime("%Y-%m")
def weighted_value(self, stage_probs, scenario="base"):
prob = stage_probs.get(self.stage, 0.0)
multiplier = SCENARIO_MULTIPLIERS.get(scenario, 1.0)
# Clamp probability to [0, 1]
adjusted = min(1.0, max(0.0, prob * multiplier))
return self.arr_value * adjusted
def is_open(self):
return self.stage not in ("closed_won", "closed_lost")
def is_closed_won(self):
return self.stage == "closed_won"
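The stage normalization in `__init__` and the `quarter` property can be exercised standalone (helper names here are ours):

```python
from datetime import date

def normalize_stage(stage):
    """Mirror of Deal's stage cleanup: "Verbal Commit" → "verbal_commit"."""
    return stage.lower().replace(" ", "_").replace("/", "_")

def quarter_label(d):
    """Mirror of Deal.quarter: a close date maps to its calendar quarter."""
    return f"Q{(d.month - 1) // 3 + 1} {d.year}"
```

So a CRM export with "Closed/Won" or "Verbal Commit" lands on the canonical keys in `DEFAULT_STAGE_PROBABILITIES`, and a 2026-03-15 close date rolls up under "Q1 2026".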
# ---------------------------------------------------------------------------
# Win rate calibration
# ---------------------------------------------------------------------------
def calculate_historical_win_rates(deals):
"""
Calculate actual win rates per stage from closed deals.
Returns a dict: stage → win_rate (float).
Requires deals that were at each stage and are now closed won/lost.
"""
# In a real implementation, you'd have historical stage-at-point-in-time data.
# Here we approximate: among closed deals, what fraction were won?
closed = [d for d in deals if not d.is_open()]
if not closed:
return {}
won = [d for d in closed if d.is_closed_won()]
overall_rate = len(won) / len(closed) if closed else 0.0
# Stage-level calibration: adjust default probs by actual overall rate
# (In production: use CRM historical stage-level conversion data)
calibrated = {}
for stage, default_prob in DEFAULT_STAGE_PROBABILITIES.items():
if overall_rate > 0:
calibrated[stage] = min(1.0, default_prob * (overall_rate / 0.25))
else:
calibrated[stage] = default_prob
return calibrated
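The calibration above scales every default probability by the ratio of the observed overall win rate to a 25% baseline — the 0.25 is this script's assumption, not a universal constant. As a standalone sketch:

```python
def calibrate(defaults, won, closed, baseline=0.25):
    """Scale default stage probabilities by the observed overall win rate."""
    rate = won / closed if closed else 0.0
    if rate <= 0:
        return dict(defaults)   # no closed deals: keep defaults unchanged
    return {stage: min(1.0, p * (rate / baseline)) for stage, p in defaults.items()}
```

With 1 win out of 2 closed deals (50% vs the 25% baseline), every stage probability doubles, capped at 1.0 — a crude proxy for the stage-level CRM conversion data a production system would use.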
# ---------------------------------------------------------------------------
# Forecast engine
# ---------------------------------------------------------------------------
class ForecastEngine:
def __init__(self, deals, stage_probs=None):
self.deals = deals
self.stage_probs = stage_probs or DEFAULT_STAGE_PROBABILITIES
def open_deals(self):
return [d for d in self.deals if d.is_open()]
def closed_won_deals(self):
return [d for d in self.deals if d.is_closed_won()]
def pipeline_by_month(self, scenario="base"):
"""Returns dict: month_key → weighted ARR."""
result = defaultdict(float)
for deal in self.open_deals():
result[deal.month_key] += deal.weighted_value(self.stage_probs, scenario)
return dict(sorted(result.items()))
def pipeline_by_quarter(self, scenario="base"):
"""Returns dict: quarter → weighted ARR."""
result = defaultdict(float)
for deal in self.open_deals():
result[deal.quarter] += deal.weighted_value(self.stage_probs, scenario)
return dict(sorted(result.items()))
def coverage_ratio(self, quota, period_filter=None):
"""
Pipeline coverage = total pipeline ÷ quota.
period_filter: if set, only include deals with close_date in that period.
"""
pipeline = sum(
d.arr_value for d in self.open_deals()
if period_filter is None or d.quarter == period_filter
)
return pipeline / quota if quota else 0.0
def scenario_summary(self, periods=None):
"""
Returns dict: period → {conservative, base, upside, open_pipeline}.
periods: list of month_keys to include; if None, all months.
"""
summaries = {}
all_months = sorted(set(d.month_key for d in self.open_deals()))
target_months = periods or all_months
for month in target_months:
deals_in_month = [d for d in self.open_deals() if d.month_key == month]
if not deals_in_month:
continue
summaries[month] = {
"deal_count": len(deals_in_month),
"open_pipeline": sum(d.arr_value for d in deals_in_month),
"conservative": sum(d.weighted_value(self.stage_probs, "conservative") for d in deals_in_month),
"base": sum(d.weighted_value(self.stage_probs, "base") for d in deals_in_month),
"upside": sum(d.weighted_value(self.stage_probs, "upside") for d in deals_in_month),
}
return summaries
def rep_performance(self):
"""Returns dict: rep → {pipeline, weighted_base, deal_count, avg_deal_size}."""
rep_data = defaultdict(lambda: {"pipeline": 0.0, "weighted_base": 0.0,
"deal_count": 0, "deals": []})
for deal in self.open_deals():
rep_data[deal.rep]["pipeline"] += deal.arr_value
rep_data[deal.rep]["weighted_base"] += deal.weighted_value(self.stage_probs, "base")
rep_data[deal.rep]["deal_count"] += 1
rep_data[deal.rep]["deals"].append(deal.arr_value)
result = {}
for rep, data in rep_data.items():
deals = data["deals"]
result[rep] = {
"pipeline": data["pipeline"],
"weighted_base": data["weighted_base"],
"deal_count": data["deal_count"],
"avg_deal_size": statistics.mean(deals) if deals else 0.0,
}
return result
def segment_breakdown(self, scenario="base"):
"""Returns dict: segment → weighted ARR."""
result = defaultdict(float)
for deal in self.open_deals():
result[deal.segment or "unspecified"] += deal.weighted_value(self.stage_probs, scenario)
return dict(result)
def stage_distribution(self):
"""Returns dict: stage → {count, total_arr, avg_arr}."""
result = defaultdict(lambda: {"count": 0, "total_arr": 0.0})
for deal in self.open_deals():
result[deal.stage]["count"] += 1
result[deal.stage]["total_arr"] += deal.arr_value
out = {}
for stage, data in result.items():
out[stage] = {
"count": data["count"],
"total_arr": data["total_arr"],
"avg_arr": data["total_arr"] / data["count"] if data["count"] else 0,
"probability": self.stage_probs.get(stage, 0.0),
}
return out
def confidence_interval(self, scenario="base", iterations=1000):
"""
Monte Carlo simulation to generate confidence interval around base forecast.
Each deal wins/loses based on its probability; runs iterations times.
Returns (p10, p50, p90) of total expected ARR.
"""
import random
random.seed(42)
totals = []
for _ in range(iterations):
total = 0.0
for deal in self.open_deals():
prob = min(1.0, self.stage_probs.get(deal.stage, 0.0) * SCENARIO_MULTIPLIERS[scenario])
if random.random() < prob:
total += deal.arr_value
totals.append(total)
totals.sort()
n = len(totals)
return (
totals[int(n * 0.10)], # P10 (conservative)
totals[int(n * 0.50)], # P50 (median)
totals[int(n * 0.90)], # P90 (upside)
)
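The Monte Carlo percentiles can be reproduced in isolation — each iteration flips a weighted coin per deal and sums the wins; a fixed seed keeps the run deterministic (deal tuples below are illustrative):

```python
import random

def mc_percentiles(deals, iterations=1000, seed=42):
    """deals: list of (arr_value, win_probability). Returns (p10, p50, p90)."""
    rng = random.Random(seed)
    totals = sorted(
        sum(value for value, prob in deals if rng.random() < prob)
        for _ in range(iterations)
    )
    n = len(totals)
    return totals[int(n * 0.10)], totals[int(n * 0.50)], totals[int(n * 0.90)]
```

At the extremes the interval collapses: all-certain deals give the same total in every iteration, and zero-probability deals give zero — the spread only opens up for genuinely uncertain pipelines.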
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_currency(value):
if value >= 1_000_000:
return f"${value / 1_000_000:.2f}M"
if value >= 1_000:
return f"${value / 1_000:.1f}K"
return f"${value:.0f}"
def fmt_pct(value):
return f"{value * 100:.1f}%"
def print_header(title):
width = 70
print()
print("=" * width)
print(f" {title}")
print("=" * width)
def print_section(title):
print(f"\n--- {title} ---")
def print_report(engine, quota=None, current_quarter=None):
open_deals = engine.open_deals()
won_deals = engine.closed_won_deals()
print_header("REVENUE FORECAST MODEL")
print(f" Generated: {date.today().isoformat()}")
print(f" Open deals: {len(open_deals)}")
print(f" Closed Won (in dataset): {len(won_deals)}")
total_pipeline = sum(d.arr_value for d in open_deals)
total_won = sum(d.arr_value for d in won_deals)
print(f" Total open pipeline: {fmt_currency(total_pipeline)}")
print(f" Total closed won: {fmt_currency(total_won)}")
# ── Coverage ratio
if quota:
print_section("PIPELINE COVERAGE")
q = current_quarter or "this quarter"
ratio = engine.coverage_ratio(quota, period_filter=current_quarter)
status = "✅ Healthy" if ratio >= 3.0 else ("⚠️ Thin" if ratio >= 2.0 else "🔴 Critical")
print(f" Quota target: {fmt_currency(quota)}")
print(f" Coverage ratio: {ratio:.1f}x {status}")
print(f" (Minimum healthy = 3x; < 2x = pipeline emergency)")
# ── Stage distribution
print_section("STAGE DISTRIBUTION")
stage_dist = engine.stage_distribution()
col_w = [28, 8, 14, 12, 10]
header = f" {'Stage':<{col_w[0]}} {'Deals':>{col_w[1]}} {'Pipeline':>{col_w[2]}} {'Avg Size':>{col_w[3]}} {'Win Prob':>{col_w[4]}}"
print(header)
print(" " + "-" * (sum(col_w) + 4))
for stage, data in sorted(stage_dist.items(), key=lambda x: -x[1]["total_arr"]):
print(f" {stage:<{col_w[0]}} {data['count']:>{col_w[1]}} "
f"{fmt_currency(data['total_arr']):>{col_w[2]}} "
f"{fmt_currency(data['avg_arr']):>{col_w[3]}} "
f"{fmt_pct(data['probability']):>{col_w[4]}}")
# ── Scenario forecast by month
print_section("MONTHLY FORECAST — ALL SCENARIOS")
summaries = engine.scenario_summary()
col_w2 = [10, 8, 14, 14, 14, 14]
h2 = (f" {'Month':<{col_w2[0]}} {'Deals':>{col_w2[1]}} "
f"{'Pipeline':>{col_w2[2]}} {'Conservative':>{col_w2[3]}} "
f"{'Base':>{col_w2[4]}} {'Upside':>{col_w2[5]}}")
print(h2)
print(" " + "-" * (sum(col_w2) + 5))
for month, data in summaries.items():
print(f" {month:<{col_w2[0]}} {data['deal_count']:>{col_w2[1]}} "
f"{fmt_currency(data['open_pipeline']):>{col_w2[2]}} "
f"{fmt_currency(data['conservative']):>{col_w2[3]}} "
f"{fmt_currency(data['base']):>{col_w2[4]}} "
f"{fmt_currency(data['upside']):>{col_w2[5]}}")
# ── Quarterly rollup
print_section("QUARTERLY FORECAST ROLLUP")
q_conservative = defaultdict(float)
q_base = defaultdict(float)
q_upside = defaultdict(float)
q_pipeline = defaultdict(float)
q_count = defaultdict(int)
for deal in open_deals:
q_conservative[deal.quarter] += deal.weighted_value(engine.stage_probs, "conservative")
q_base[deal.quarter] += deal.weighted_value(engine.stage_probs, "base")
q_upside[deal.quarter] += deal.weighted_value(engine.stage_probs, "upside")
q_pipeline[deal.quarter] += deal.arr_value
q_count[deal.quarter] += 1
quarters = sorted(q_base.keys())
col_w3 = [10, 8, 14, 14, 14, 14]
h3 = (f" {'Quarter':<{col_w3[0]}} {'Deals':>{col_w3[1]}} "
f"{'Pipeline':>{col_w3[2]}} {'Conservative':>{col_w3[3]}} "
f"{'Base':>{col_w3[4]}} {'Upside':>{col_w3[5]}}")
print(h3)
print(" " + "-" * (sum(col_w3) + 5))
for q in quarters:
print(f" {q:<{col_w3[0]}} {q_count[q]:>{col_w3[1]}} "
f"{fmt_currency(q_pipeline[q]):>{col_w3[2]}} "
f"{fmt_currency(q_conservative[q]):>{col_w3[3]}} "
f"{fmt_currency(q_base[q]):>{col_w3[4]}} "
f"{fmt_currency(q_upside[q]):>{col_w3[5]}}")
# ── Monte Carlo confidence interval
print_section("CONFIDENCE INTERVAL (Monte Carlo, 1,000 simulations)")
p10, p50, p90 = engine.confidence_interval("base")
print(f" P10 (conservative floor): {fmt_currency(p10)}")
print(f" P50 (median expected): {fmt_currency(p50)}")
print(f" P90 (upside ceiling): {fmt_currency(p90)}")
print(f" Range spread: {fmt_currency(p90 - p10)}")
# ── Rep performance
print_section("REP PIPELINE PERFORMANCE")
rep_perf = engine.rep_performance()
if rep_perf:
col_w4 = [20, 8, 14, 14, 12]
h4 = (f" {'Rep':<{col_w4[0]}} {'Deals':>{col_w4[1]}} "
f"{'Pipeline':>{col_w4[2]}} {'Weighted':>{col_w4[3]}} {'Avg Size':>{col_w4[4]}}")
print(h4)
print(" " + "-" * (sum(col_w4) + 4))
for rep, data in sorted(rep_perf.items(), key=lambda x: -x[1]["pipeline"]):
print(f" {rep:<{col_w4[0]}} {data['deal_count']:>{col_w4[1]}} "
f"{fmt_currency(data['pipeline']):>{col_w4[2]}} "
f"{fmt_currency(data['weighted_base']):>{col_w4[3]}} "
f"{fmt_currency(data['avg_deal_size']):>{col_w4[4]}}")
# ── Segment breakdown
print_section("SEGMENT BREAKDOWN (Base Forecast)")
seg = engine.segment_breakdown("base")
for segment, value in sorted(seg.items(), key=lambda x: -x[1]):
bar_len = int((value / total_pipeline) * 30) if total_pipeline else 0
bar = "" * bar_len
print(f" {segment:<20} {fmt_currency(value):>12} {bar}")
# ── Red flags
print_section("FORECAST HEALTH FLAGS")
flags = []
if total_pipeline > 0:
coverage = total_pipeline / quota if quota else None
if coverage and coverage < 2.0:
flags.append("🔴 Pipeline coverage below 2x — serious shortfall risk this quarter")
elif coverage and coverage < 3.0:
flags.append("⚠️ Pipeline coverage below 3x — limited buffer for slippage")
# Stage concentration risk
early_stage_pct = sum(
d.arr_value for d in open_deals
if engine.stage_probs.get(d.stage, 0) < 0.30
) / total_pipeline
if early_stage_pct > 0.60:
flags.append(f"⚠️ {fmt_pct(early_stage_pct)} of pipeline in early stages (< 30% probability)")
# Deal concentration
deal_values = sorted([d.arr_value for d in open_deals], reverse=True)
if deal_values and deal_values[0] / total_pipeline > 0.25:
flags.append(f"⚠️ Top deal is {fmt_pct(deal_values[0]/total_pipeline)} of pipeline — concentration risk")
# Spread between scenarios
total_conservative = sum(d.weighted_value(engine.stage_probs, "conservative") for d in open_deals)
total_upside = sum(d.weighted_value(engine.stage_probs, "upside") for d in open_deals)
spread = (total_upside - total_conservative) / total_conservative if total_conservative else 0
if spread > 0.40:
flags.append(f"⚠️ High scenario spread ({fmt_pct(spread)}) — forecast confidence is low")
if flags:
for f in flags:
print(f" {f}")
else:
print(" ✅ No critical flags detected")
print()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
SAMPLE_CSV = """deal_id,name,stage,arr_value,close_date,rep,segment
D001,Acme Corp ERP Integration,negotiation,85000,2026-03-15,Sarah Chen,Enterprise
D002,TechStart PLG Expansion,proposal,28000,2026-03-28,Marcus Webb,Mid-Market
D003,Global Retail Co,verbal_commit,220000,2026-03-10,Sarah Chen,Enterprise
D004,BioLab Analytics,poc,62000,2026-04-05,Jamie Park,Mid-Market
D005,FinServ Holdings,demo,150000,2026-04-20,Sarah Chen,Enterprise
D006,MidWest Logistics,qualification,35000,2026-04-30,Marcus Webb,Mid-Market
D007,Edu Platform Inc,negotiation,42000,2026-03-25,Jamie Park,SMB
D008,Healthcare Connect,proposal,95000,2026-05-15,Sarah Chen,Enterprise
D009,Startup Hub Network,demo,18000,2026-04-10,Marcus Webb,SMB
D010,CloudOps Systems,poc,75000,2026-05-01,Jamie Park,Mid-Market
D011,National Bank Corp,verbal_commit,310000,2026-03-31,Sarah Chen,Enterprise
D012,RetailTech Co,qualification,22000,2026-05-20,Marcus Webb,SMB
D013,InsurTech Platform,negotiation,88000,2026-04-15,Jamie Park,Mid-Market
D014,GovTech Solutions,proposal,175000,2026-06-01,Sarah Chen,Enterprise
D015,AgriData Systems,demo,31000,2026-05-10,Marcus Webb,Mid-Market
D016,Legal AI Corp,poc,55000,2026-04-25,Jamie Park,Mid-Market
D017,Closed Won Deal,closed_won,120000,2026-02-15,Sarah Chen,Enterprise
D018,Lost Deal,closed_lost,45000,2026-02-20,Marcus Webb,Mid-Market
"""
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def load_deals_from_csv(csv_text):
reader = csv.DictReader(StringIO(csv_text))
deals = []
errors = []
for i, row in enumerate(reader, start=2):
try:
deal = Deal(
deal_id=row.get("deal_id", f"row_{i}"),
name=row.get("name", ""),
stage=row.get("stage", ""),
arr_value=row.get("arr_value", 0),
close_date=row.get("close_date", ""),
rep=row.get("rep", ""),
segment=row.get("segment", ""),
)
deals.append(deal)
except (ValueError, KeyError) as e:
errors.append(f" Row {i}: {e}")
if errors:
print("⚠️ Skipped rows with errors:")
for err in errors:
print(err)
return deals
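The loader reads rows via `csv.DictReader`, which maps each data row onto the header fields; a minimal round-trip with an inline one-row sample:

```python
import csv
from io import StringIO

sample = (
    "deal_id,name,stage,arr_value,close_date,rep,segment\n"
    "D001,Acme Corp,negotiation,85000,2026-03-15,Sarah Chen,Enterprise\n"
)
rows = list(csv.DictReader(StringIO(sample)))
first = rows[0]
```

Note that `DictReader` yields every field as a string — `arr_value` arrives as `"85000"`, which is why `Deal.__init__` runs it through `float()`.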
def main():
parser = argparse.ArgumentParser(
description="Revenue Forecast Model — pipeline-based ARR forecasting"
)
parser.add_argument(
"--csv", metavar="FILE",
help="CSV file with pipeline data (uses sample data if not provided)"
)
parser.add_argument(
"--quota", type=float, default=1_000_000,
help="Quarterly quota target in ARR (default: $1,000,000)"
)
parser.add_argument(
"--quarter", metavar="QUARTER",
help='Current quarter filter e.g. "Q2 2026" (optional)'
)
parser.add_argument(
"--scenario", choices=["conservative", "base", "upside"],
default="base",
help="Primary scenario to report (default: base)"
)
parser.add_argument(
"--json", action="store_true",
help="Output forecast as JSON instead of formatted report"
)
args = parser.parse_args()
# Load data
if args.csv:
try:
with open(args.csv, "r", encoding="utf-8") as f:
csv_text = f.read()
except FileNotFoundError:
print(f"Error: File not found: {args.csv}", file=sys.stderr)
sys.exit(1)
else:
print("No --csv provided. Using sample pipeline data.\n")
csv_text = SAMPLE_CSV
deals = load_deals_from_csv(csv_text)
if not deals:
print("No deals loaded. Exiting.", file=sys.stderr)
sys.exit(1)
# Calibrate win rates from closed deals
historical_probs = calculate_historical_win_rates(deals)
stage_probs = historical_probs if historical_probs else DEFAULT_STAGE_PROBABILITIES
engine = ForecastEngine(deals, stage_probs=stage_probs)
if args.json:
output = {
"generated": date.today().isoformat(),
"quota": args.quota,
"open_pipeline": sum(d.arr_value for d in engine.open_deals()),
"coverage_ratio": engine.coverage_ratio(args.quota, args.quarter),
"monthly_forecast": engine.scenario_summary(),
"quarterly_base": engine.pipeline_by_quarter("base"),
"confidence_interval": dict(zip(
["p10", "p50", "p90"],
engine.confidence_interval("base")
)),
"rep_performance": engine.rep_performance(),
"segment_breakdown": engine.segment_breakdown("base"),
}
print(json.dumps(output, indent=2))
else:
print_report(engine, quota=args.quota, current_quarter=args.quarter)
if __name__ == "__main__":
main()


@@ -0,0 +1,108 @@
---
name: cs-onboard
description: "Founder onboarding interview that captures company context across 7 dimensions. Invoke with /cs:setup for initial interview or /cs:update for quarterly refresh. Generates ~/.claude/company-context.md used by all C-suite advisor skills."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: orchestration
updated: 2026-03-05
frameworks: founder-interview, context-capture, quarterly-refresh
---
# C-Suite Onboarding
Structured founder interview that builds the company context file powering every C-suite advisor. One 45-minute conversation. Persistent context across all roles.
## Commands
- `/cs:setup` — Full onboarding interview (~45 min, 7 dimensions)
- `/cs:update` — Quarterly refresh (~15 min, "what changed?")
## Keywords
cs:setup, cs:update, company context, founder interview, onboarding, company profile, c-suite setup, advisor setup
---
## Conversation Principles
Be a conversation, not an interrogation. Ask one question at a time. Follow threads. Reflect back: "So the real issue sounds like X — is that right?" Watch for what they skip — that's where the real story lives. Never read a list of questions.
Open with: *"Tell me about the company in your own words — what are you building and why does it matter?"*
---
## 7 Interview Dimensions
### 1. Company Identity
Capture: what they do, who it's for, the real founding "why," one-sentence pitch, non-negotiable values.
Key probe: *"What's a value you'd fire someone over violating?"*
Red flag: Values that sound like marketing copy.
### 2. Stage & Scale
Capture: headcount (FT vs contractors), revenue range, runway, stage (pre-PMF / scaling / optimizing), what broke in last 90 days.
Key probe: *"If you had to label your stage — still finding PMF, scaling what works, or optimizing?"*
### 3. Founder Profile
Capture: self-identified superpower, acknowledged blind spots, archetype (product/sales/technical/operator), what actually keeps them up at night.
Key probe: *"What would your co-founder say you should stop doing?"*
Red flag: No blind spots, or weakness framed as a strength.
### 4. Team & Culture
Capture: team in 3 words, last real conflict and resolution, which values are real vs aspirational, strongest and weakest leader.
Key probe: *"Which of your stated values is most real? Which is a poster on the wall?"*
Red flag: "We have no conflict."
### 5. Market & Competition
Capture: who's winning and why (honest version), real unfair advantage, the one competitive move that could hurt them.
Key probe: *"What's your real unfair advantage — not the investor version?"*
Red flag: "We have no real competition."
### 6. Current Challenges
Capture: priority stack-rank across product/growth/people/money/operations, the decision they've been avoiding, the "one extra day" answer.
Key probe: *"What's the decision you've been putting off for weeks?"*
Note: The "extra day" answer reveals true priorities.
### 7. Goals & Ambition
Capture: 12-month target (specific), 36-month target (directional), exit vs build-forever orientation, personal success definition.
Key probe: *"What does success look like for you personally — separate from the company?"*
---
## Output: company-context.md
After the interview, generate `~/.claude/company-context.md` using `templates/company-context-template.md`.
Fill every section. Write `[not captured]` for unknowns — never leave blank. Add timestamp, mark as `fresh`.
Tell the founder: *"I've captured everything in your company context. Every advisor will use this to give specific, relevant advice. Run /cs:update in 90 days to keep it current."*
---
## /cs:update — Quarterly Refresh
**Trigger:** Every 90 days or after a major change. Duration: ~15 minutes.
Open with: *"It's been [X time] since we did your company context. What's changed?"*
Walk each dimension with one "what changed?" question:
1. Identity: same mission or shifted?
2. Scale: team, revenue, runway now?
3. Founder: role or what's stretching you?
4. Team: any leadership changes?
5. Market: any competitive surprises?
6. Challenges: #1 problem now vs 90 days ago?
7. Goals: still on track for 12-month target?
Update the context file, refresh timestamp, reset to `fresh`.
---
## Context File Location
`~/.claude/company-context.md` — single source of truth for all C-suite skills. Do not move it. Do not create duplicates.
## References
- `templates/company-context-template.md` — blank template for output
- `references/interview-guide.md` — deep interview craft: probes, red flags, handling reluctant founders


@@ -0,0 +1,173 @@
# Interview Craft Guide
Deep operational guide for conducting the `/cs:setup` founder interview. Not a script — a thinking tool. Read before every interview. Internalize it, then put it away.
---
## The Core Problem
Most context-gathering fails because it captures what founders say, not what they mean. Founders are practiced storytellers. They have investor pitches, board narratives, team rallies. They tell good stories. Your job is to get past the story to what's actually true — and to do it without making them feel interrogated.
The best interview doesn't feel like an interview. It feels like a conversation with a smart advisor who gets it.
---
## Before You Start
Set the frame:
> "This isn't a quiz. There are no right answers. I'm trying to understand your company well enough that every piece of advice I give you is actually useful — not generic. The more honest you are, the more useful this gets. Nothing leaves this conversation."
Then shut up and let them talk.
---
## Reading the Room
Pay attention to:
- **Energy shifts.** Where do they speed up? What makes them lean in? That's what they care about. What makes them vague or flat? That's where the real issue lives.
- **What they lead with.** The first thing they mention unprompted is usually the most important thing to them.
- **Repetition.** If a topic comes up twice, it's significant. Three times and it's the real problem.
- **Hedging language.** "We're pretty much aligned on..." / "Things are mostly fine..." / "It's not really a problem yet..." — probe these. "Pretty much" is doing a lot of work there.
- **Skips.** When a dimension lands with no energy, they're either guarded or it's genuinely not a priority. Figure out which.
---
## Follow-Up Probe Library
### When the answer is vague
- "Can you give me a specific example?"
- "What does that look like on a Tuesday morning?"
- "If I asked your co-founder / direct report, what would they say?"
- "How would you know if that was actually true?"
### When the answer is suspiciously polished
- "That's the investor version — what's the version you'd tell your co-founder at 11pm?"
- "If that's true, what explains [specific contradicting data point]?"
- "What would a skeptic say about that?"
### When they skip something
- "You moved past [topic] quickly — is that because it's not a problem, or because it's too big to get into?"
- "Come back to [topic] — tell me more about that."
### When they say "everything is fine"
- "What's the thing that keeps you up at night even though you know you shouldn't worry about it?"
- "If something was going to surprise you in a bad way in the next 90 days, what would it be?"
- "What would your board member who's most worried about the company say?"
### When they're guarded
- Slow down. Don't push harder — push softer.
- "You don't have to share numbers if you're not comfortable — ranges are fine."
- Acknowledge the complexity: "This stuff is genuinely hard to talk about."
- Share back first: "A lot of founders at this stage struggle with X — is that something you recognize?"
### When they go long
Let them run for a bit. Then: "Let me make sure I captured what matters here — is it that [summary]?"
It helps you confirm understanding and signals you're tracking.
---
## Red Flag Patterns and What to Do
### "We have no real competition."
**Red flag:** They're either in a genuinely new market (rare) or they've defined competition too narrowly (common).
**Probe:** "What would someone do today if your product didn't exist? Who benefits if you fail?"
### "Our values are X, Y, Z."
**Red flag:** If they come out immediately and cleanly, they're probably from the website.
**Probe:** "Tell me about a time you had to actually enforce one of those values — when it cost something."
### "The team is great. Everyone's aligned."
**Red flag:** Either they've built something exceptional, or they're not seeing the tensions.
**Probe:** "What's the last thing you disagreed with someone on the team about? How did it go?"
### "I don't really have blind spots."
**Red flag:** Everyone has blind spots. Founders who can't name theirs are the most dangerous.
**Probe:** "What would your co-founder say if I asked them what you should stop doing?"
**Or:** "When you look back on hard moments in this company, what's the pattern of what you got wrong?"
### "Revenue is good, things are growing."
**Red flag:** "Good" is not a number.
**Probe:** "Give me a range — is this $100K ARR, $1M, $10M? I'm not sharing it anywhere."
### "We just need more customers."
**Red flag:** This is almost never the root problem.
**Probe:** "What's driving the growth you have? Why aren't more customers finding you, or converting, or staying?"
---
## Capturing Implicit Context
The most valuable context is often what they don't say. Document it.
**Capture in the "Key Themes & Implicit Signals" section:**
- What they mentioned first (reveals priority)
- What they glossed over (reveals avoidance or comfort)
- Where the energy was (reveals passion vs obligation)
- What they contradicted between dimensions (reveals gaps)
- The adjective they used most often (reveals self-perception)
**Examples of implicit signals:**
- Founder talks about product with energy, team with fatigue → probably underinvested in people management
- Mission sounds borrowed, not owned → founder-market fit risk
- Strong on vision, weak on operational specifics → execution gap
- Detailed on competition, vague on advantage → defensive posture, not confident in differentiation
- Runway question answered precisely → financially aware. Answered vaguely → either worried or detached.
---
## Handling Reluctant Founders
Some founders are guarded. Usually for one of three reasons:
1. **They don't trust you yet.** Give it time. Ask easier questions first. Build rapport.
2. **They're in denial.** Something is wrong and they're not ready to say it. They circle around topics and keep coming back to them — note which ones.
3. **They're protecting someone.** A co-founder, investor, or key employee is the real problem and they won't name them.
**Tactics:**
- Give them an out: "You don't have to answer this specifically — just give me the shape of it."
- Normalize the problem: "A lot of founders at this stage are dealing with X..."
- Ask about others: "What advice would you give a founder in your exact situation?"
- Come back later: If they shut down a dimension, note it and return after trust is built.
---
## After the Interview
Before generating the file:
1. **Read back your notes.** Find the 3–5 most important things. They should be in the output.
2. **Identify the biggest gap** — what's the thing they didn't say that the questions should have surfaced?
3. **Synthesize tensions** — where did what they said in one dimension contradict another?
4. **Write the Watch List** — what needs to be re-checked in 90 days?
Then generate the context file. The last section — "Key Themes & Implicit Signals" — is the most important one. Don't skip it.
---
## Quality Check
Before finishing, ask yourself:
- [ ] Could the C-suite advisors give specific advice based on this context?
- [ ] Does this capture what's real vs what's aspirational?
- [ ] Is the Watch List honest about what's uncertain or worrying?
- [ ] Does the founder profile feel like a real person, not a LinkedIn bio?
- [ ] Did I capture implicit signals, not just explicit answers?
If any answer is no, go back and fill it in.
---
## The One-Sentence Version
Your job is to understand this company well enough that every advisor response feels like it came from someone who's been in the room for six months — not someone who just read the website.


@@ -0,0 +1,144 @@
# Company Context
**Last updated:** [DATE]
**Status:** fresh | stale (>90 days)
**Interview type:** full | update
---
## 1. Company Identity
**What we do:**
[One paragraph — product/service, who it's for, core use case]
**Why we exist (founding reason):**
[The real reason, not the pitch]
**One-sentence pitch:**
[Sharpened during interview]
**Non-negotiable values:**
- [Value 1] — [what would violate it]
- [Value 2] — [what would violate it]
- [Value 3] — [what would violate it]
---
## 2. Stage & Scale
**Team size:** [N full-time] + [N contractors/part-time]
**Revenue:** [ARR/MRR range, e.g., "$500K-$1M ARR"]
**Runway:** [N months]
**Stage:** pre-PMF | scaling | optimizing
**What broke recently (last 90 days):**
[Specific failure, cost, and root cause if known]
---
## 3. Founder Profile
**Name / Role:**
**Superpower:**
[What they do better than almost anyone on their team]
**Blind spots:**
[Acknowledged or revealed — be specific]
**Founder archetype:** product | sales | technical | operator
**What keeps them up at night:**
[The real concern, not the investor-safe version]
---
## 4. Team & Culture
**Team in 3 words:** [word], [word], [word]
**Culture — what's real:**
[Which values are actually lived]
**Culture — what's aspirational:**
[Which values are poster-on-the-wall]
**Strongest leader:**
[Role / what makes them strong]
**Weakest seat:**
[Role / what the risk is]
**Last significant conflict:**
[What happened, how it resolved, what it revealed]
---
## 5. Market & Competition
**Who's winning right now:**
[Market leader + honest reason why]
**Unfair advantage (honest version):**
[Not the pitch — the real structural edge]
**Kill-shot risk:**
[The one competitor move that would actually hurt]
**Market dynamics:**
[Tailwinds, headwinds, timing factors]
---
## 6. Current Challenges
**Priority stack-rank:**
1. [Highest priority: product/growth/people/money/operations]
2.
3.
4.
5.
**The avoided decision:**
[What they've been putting off — and why]
**The "one extra day" answer:**
[What they'd actually work on — reveals true priority]
---
## 7. Goals & Ambition
**12-month target:**
[Specific — revenue, product milestone, market position]
**36-month target:**
[Directional — where does this company go]
**Exit orientation:** building to exit | building to run | undecided
**Personal success definition:**
[Separate from company — what does winning look like for them personally]
---
## Key Themes & Implicit Signals
**Patterns observed:**
[What came up repeatedly, what they rushed past, emotional charge on topics]
**Implicit tensions:**
[Gaps between stated and revealed — e.g., "says people are fine, but conflict story suggests otherwise"]
**Watch list:**
[Things to check on in the next update — risks, avoided decisions, relationships to monitor]
---
## Context Metadata
- **Interview conducted:** [DATE]
- **Duration:** [N minutes]
- **Interview type:** full | update
- **Next refresh due:** [DATE + 90 days]
- **Confidence level:** high | medium | low (low = founder was guarded)
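The metadata above drives the fresh/stale status. As a minimal sketch of the >90-day staleness check (function and field handling are assumptions, stdlib only):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # matches the ">90 days" threshold above

def context_status(interview_date: date, today: date) -> str:
    """Return 'fresh' or 'stale' based on the 90-day refresh window."""
    return "stale" if today - interview_date > STALE_AFTER else "fresh"

def next_refresh_due(interview_date: date) -> date:
    """Refresh is due 90 days after the interview."""
    return interview_date + STALE_AFTER
```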


@@ -1,413 +1,175 @@
---
name: cto-advisor
description: "Technical leadership guidance for engineering teams, architecture decisions, and technology strategy. Use when assessing technical debt, scaling engineering teams, evaluating technologies, making architecture decisions, establishing engineering metrics, or when user mentions CTO, tech debt, technical debt, team scaling, architecture decisions, technology evaluation, engineering metrics, DORA metrics, or technology strategy."
license: MIT
metadata:
  version: 2.0.0
  author: Alireza Rezvani
  category: c-level
  domain: cto-leadership
  updated: 2026-03-05
  python-tools: tech_debt_analyzer.py, team_scaling_calculator.py
  frameworks: architecture-decisions, engineering-metrics, technology-evaluation
---
# CTO Advisor
Technical leadership frameworks for architecture, engineering teams, technology strategy, and technical decision-making.
## Keywords
CTO, chief technology officer, tech debt, technical debt, architecture, engineering metrics, DORA, team scaling, technology evaluation, build vs buy, cloud migration, platform engineering, AI/ML strategy, system design, incident response, engineering culture
## Quick Start
```bash
python scripts/tech_debt_analyzer.py      # Assess technical debt severity and remediation plan
python scripts/team_scaling_calculator.py # Model engineering team growth and cost
```
## Core Responsibilities
### 1. Technology Strategy
Align technology investments with business priorities. Not "what's exciting" — "what moves the needle."
**Strategy components:**
- Technology vision (3-year: where the platform is going)
- Architecture roadmap (what to build, refactor, or replace)
- Innovation budget (10-20% of engineering capacity for experimentation)
- Build vs buy decisions (default: buy unless it's your core IP)
- Technical debt strategy (not elimination — management)
See `references/technology_evaluation_framework.md` for the full evaluation framework.
### 2. Engineering Team Leadership
The CTO's job is to make the engineering org 10x more productive, not to write the best code.
**Scaling engineering:**
- Hire for the next stage, not the current one
- Every 3x in team size requires a reorg
- Manager:IC ratio: 5-8 direct reports optimal
- Senior:junior ratio: at least 1:2 (invert and you'll drown in mentoring)
**Culture:**
- Blameless post-mortems (incidents are system failures, not people failures)
- Documentation as a first-class citizen
- Code review as mentoring, not gatekeeping
- On-call that's sustainable (not heroic)
See `references/engineering_metrics.md` for DORA metrics and the engineering health dashboard.
### 3. Architecture Governance
You don't make every architecture decision. You create the framework for making good ones.
**Architecture Decision Records (ADRs):**
- Every significant decision gets documented: context, options, decision, consequences
- Decisions are discoverable (not buried in Slack)
- Decisions can be superseded (not permanent)
See `references/architecture_decision_records.md` for ADR templates and the decision review process.
### 4. Vendor & Platform Management
Every vendor is a dependency. Every dependency is a risk.
**Evaluation criteria:** Does it solve a real problem? Can we migrate away? Is the vendor stable? What's the total cost (license + integration + maintenance)?
### 5. Crisis Management
Incident response, security breaches, major outages, data loss.
**Your role in a crisis:** Not to fix it yourself. To ensure the right people are on it, communication is flowing, and the business is informed. Post-crisis: blameless retrospective within 48 hours.
## Key Questions a CTO Asks
- "What's our biggest technical risk right now? Not the most annoying — the most dangerous."
- "If we 10x our traffic tomorrow, what breaks first?"
- "How much of our engineering time goes to maintenance vs new features?"
- "What would a new engineer say about our codebase after their first week?"
- "Which technical decision from 2 years ago is hurting us most today?"
- "Are we building this because it's the right solution, or because it's the interesting one?"
- "What's our bus factor on critical systems?"
## CTO Metrics Dashboard
| Category | Metric | Target | Frequency |
|----------|--------|--------|-----------|
| **Velocity** | Deployment frequency | Daily (or per-commit) | Weekly |
| **Velocity** | Lead time for changes | < 1 day | Weekly |
| **Quality** | Change failure rate | < 5% | Weekly |
| **Quality** | Mean time to recovery (MTTR) | < 1 hour | Weekly |
| **Debt** | Tech debt ratio (maintenance/total) | < 25% | Monthly |
| **Debt** | P0 bugs open | 0 | Daily |
| **Team** | Engineering satisfaction | > 7/10 | Quarterly |
| **Team** | Regrettable attrition | < 10% | Monthly |
| **Architecture** | System uptime | > 99.9% | Monthly |
| **Architecture** | API response time (p95) | < 200ms | Weekly |
| **Cost** | Cloud spend / revenue ratio | Declining trend | Monthly |
## Red Flags
- Tech debt is growing faster than you're paying it down
- Deployment frequency is declining (a leading indicator of team health)
- Senior engineers are leaving (they see problems before management does)
- "It works on my machine" is still a thing
- No ADRs for the last 3 major decisions
- The CTO is the only person who can deploy to production
- Security audit hasn't happened in 12+ months
- The team dreads on-call rotation
- Build times exceed 10 minutes
- No one can explain the system architecture to a new hire in 30 minutes
## Integration with C-Suite Roles
| When... | CTO works with... | To... |
|---------|-------------------|-------|
| Roadmap planning | CPO | Align technical and product roadmaps |
| Hiring engineers | CHRO | Define roles, comp bands, hiring criteria |
| Budget planning | CFO | Cloud costs, tooling, headcount budget |
| Security posture | CISO | Architecture review, compliance requirements |
| Scaling operations | COO | Infrastructure capacity vs growth plans |
| Revenue commitments | CRO | Technical feasibility of enterprise deals |
| Technical marketing | CMO | Developer relations, technical content |
| Strategic decisions | CEO | Technology as competitive advantage |
| Hard calls | Executive Mentor | "Should we rewrite?" "Should we switch stacks?" |
## Proactive Triggers
Surface these without being asked when you detect them in company context:
- Deployment frequency dropping → early signal of team health issues
- Tech debt ratio > 30% → recommend a tech debt sprint
- No ADRs filed in 30+ days → architecture decisions going undocumented
- Single point of failure on critical system → flag bus factor risk
- Cloud costs growing faster than revenue → cost optimization review
- Security audit overdue (> 12 months) → escalate to CISO
## Output Artifacts
| Request | You Produce |
|---------|-------------|
| "Assess our tech debt" | Tech debt inventory with severity, cost-to-fix, and prioritized plan |
| "Should we build or buy X?" | Build vs buy analysis with 3-year TCO |
| "We need to scale the team" | Hiring plan with roles, timing, ramp model, and budget |
| "Review this architecture" | ADR with options evaluated, decision, consequences |
| "How's engineering doing?" | Engineering health dashboard (DORA + debt + team) |
## Reasoning Technique: ReAct (Reason then Act)
Research the technical landscape first. Analyze options against constraints (time, team skill, cost, risk). Then recommend action. Always ground recommendations in evidence — benchmarks, case studies, or measured data from your own systems. "I think" is not enough — show the data.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`
## Resources
- `references/technology_evaluation_framework.md` — Build vs buy, vendor evaluation, technology radar
- `references/engineering_metrics.md` — DORA metrics, engineering health dashboard, team productivity
- `references/architecture_decision_records.md` — ADR templates, decision governance, review process
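As a complement to the reference docs, the DORA targets from the metrics dashboard can be turned into a simple pass/fail check. A minimal stdlib sketch — metric names, the `dora_report` function, and threshold encoding are illustrative assumptions drawn from the dashboard targets:

```python
DORA_TARGETS = {
    # metric: (threshold, direction) — "min" means higher is better
    "deploys_per_day": (1.0, "min"),       # deployment frequency: daily or better
    "lead_time_days": (1.0, "max"),        # lead time for changes: < 1 day
    "change_failure_rate": (0.05, "max"),  # < 5%
    "mttr_hours": (1.0, "max"),            # mean time to recovery: < 1 hour
}

def dora_report(metrics: dict) -> dict:
    """Return {metric: bool} for each DORA metric meeting its target."""
    report = {}
    for name, (threshold, direction) in DORA_TARGETS.items():
        value = metrics.get(name)
        if value is None:
            report[name] = False  # unmeasured counts as failing
        elif direction == "min":
            report[name] = value >= threshold
        else:
            report[name] = value <= threshold
    return report
```

Feeding it last week's numbers gives an at-a-glance health check for the weekly review cadence.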


@@ -0,0 +1,167 @@
---
name: culture-architect
description: "Build, measure, and evolve company culture as operational behavior — not wall posters. Covers mission/vision/values workshops, values-to-behaviors translation, culture code creation, culture health assessment, and cultural rituals by stage. Use when building company values, assessing culture health, designing cultural rituals, creating culture codes, handling culture clashes, or when user mentions culture, values, culture debt, founder culture, or culture code."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: culture-leadership
updated: 2026-03-05
frameworks: culture-playbook, culture-code-template
---
# Culture Architect
Culture is what you DO, not what you SAY. This skill builds culture as an operational system — observable behaviors, measurable health, and rituals that scale.
## Keywords
culture, company culture, values, mission, vision, culture code, cultural rituals, culture health, values-to-behaviors, founder culture, culture debt, value-washing, culture assessment, culture survey, Netflix culture deck, HubSpot culture code, psychological safety, culture scaling
## Core Principle
**Culture = (What you reward) + (What you tolerate) + (What you celebrate)**
If your values say "transparency" but you punish bearers of bad news — your real value is "optics." Culture is not aspirational. It's descriptive. The work is closing the gap between stated and actual.
## Frameworks
### 1. Mission / Vision / Values Workshop
Run this conversationally, not as a corporate offsite. Three questions:
**Mission** — Why do we exist (beyond making money)?
- "What would be lost if we disappeared tomorrow?"
- Mission is present-tense. "We reduce preventable falls in elderly care." Not "to be the leading..."
**Vision** — What does winning look like in 5-10 years?
- Specific enough to be wrong. "Every care home in Europe uses our system" beats "be the market leader."
**Values** — What behaviors do we actually model?
- Start with what you observe, not what sounds good. "What did our last great hire do that nobody asked them to?"
- Keep to 3-5. More than 5 and none of them mean anything.
### 2. Values → Behaviors Translation
This is the work. Every value needs behavioral anchors or it's decoration.
| Value | Bad version | Behavioral anchor |
|-------|------------|-------------------|
| Transparency | "We're open and honest" | "We share bad news within 24 hours, including to our manager" |
| Ownership | "We take responsibility" | "We don't hand off problems — we own them until resolved, even across team boundaries" |
| Speed | "We move fast" | "Decisions under €5K happen at team level, same day, no approval needed" |
| Quality | "We don't cut corners" | "We stop the line before shipping something we're not proud of" |
| Customer-first | "Customers are our priority" | "Any team member can escalate a customer issue to leadership, bypassing normal channels" |
**Workshop exercise:** Write your value. Then ask "How would a new hire know we actually live this on day 30?" If you can't answer concretely, it's not a value — it's an aspiration.
### 3. Culture Code Creation
A culture code is a public document that describes how you operate. It should scare off the wrong people and attract the right ones.
**Structure:**
1. Who we are (mission + context)
2. Who thrives here (specific behaviors, not adjectives)
3. Who doesn't thrive here (honest — this is the useful part)
4. How we make decisions
5. How we communicate
6. How we grow people
7. What we expect of leaders
See `templates/culture-code-template.md` for a complete template.
**Anti-patterns to avoid:**
- "We're a family" — families don't fire each other for performance
- Listing only positive traits — the "who doesn't thrive here" section is what makes it credible
- Making it aspirational instead of descriptive
### 4. Culture Health Assessment
Run quarterly. 8-12 questions. Anonymous. See `references/culture-playbook.md` for survey design.
**Core areas to measure:**
1. Psychological safety — "Can I raise a concern without fear?"
2. Clarity — "Do I know how my work connects to company goals?"
3. Fairness — "Are decisions made consistently and transparently?"
4. Growth — "Am I learning and being challenged here?"
5. Trust in leadership — "Do I believe what leadership tells me?"
**Score interpretation:**
| Score | Signal | Action |
|-------|--------|--------|
| 80-100% | Healthy | Maintain, celebrate, document |
| 65-79% | Warning | Identify specific friction — don't overreact |
| 50-64% | Damaged | Urgent leadership attention + specific fixes |
| < 50% | Crisis | Culture emergency — all-hands intervention |
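The score bands above map directly to code. A minimal sketch (function name is an assumption; thresholds come from the table):

```python
def culture_health_signal(score: float) -> tuple[str, str]:
    """Map a 0-100 culture survey score to (signal, action) per the bands above."""
    if score >= 80:
        return ("Healthy", "Maintain, celebrate, document")
    if score >= 65:
        return ("Warning", "Identify specific friction")
    if score >= 50:
        return ("Damaged", "Urgent leadership attention + specific fixes")
    return ("Crisis", "Culture emergency: all-hands intervention")
```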
### 5. Cultural Rituals by Stage
Rituals are the delivery mechanism for culture. What works at 10 people breaks at 100.
**Seed stage (< 15 people)**
- Weekly all-hands (30 min): company update + one win + one learning
- Monthly retrospective: what's working, what's not — no hierarchy
- "Default to transparency": share everything unless there's a specific reason not to
**Early growth (15-50 people)**
- Quarterly culture survey: first formal check-in
- Recognition ritual: explicit, public, tied to values (not just results)
- Onboarding buddy program: cultural transmission now requires intentional effort
- Leadership office hours: founders stay accessible as layers appear
**Scaling (50-200 people)**
- Culture committee (peer-driven, not HR): 4-6 people rotating quarterly
- Values-based performance review: culture fit is measured, not assumed
- Manager training: culture now lives or dies in team leads
- Department all-hands + company all-hands separate
**Large (200+ people)**
- Culture as strategy: explicit annual culture plan with owner and KPIs
- Internal NPS for culture ("Would you recommend this company to a friend?")
- Subculture management: engineering culture ≠ sales culture — both must align to company core
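The stage bands above can be encoded as a simple lookup, e.g. for an onboarding or diagnostic script (the boundary handling at 15, 50, and 200 is an assumption):

```python
def culture_stage(headcount: int) -> str:
    """Map team size to the cultural-ritual stage bands above."""
    if headcount < 15:
        return "seed"
    if headcount < 50:
        return "early growth"
    if headcount <= 200:
        return "scaling"
    return "large"
```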
### 6. Culture Anti-Patterns
**Value-washing:** Listing values you don't practice. Symptom: employees roll their eyes during values discussions.
- Fix: Run a values audit. Ask "What did the last person who got promoted demonstrate?" If it doesn't match your values, your real values are different.
**Culture debt:** Accumulating cultural compromises over time. "We'll address the toxic star performer later." Later compounds.
- Fix: Act on culture violations faster than you think necessary. One tolerated bad behavior destroys what ten good behaviors build.
**Founder culture trap:** Culture stays frozen at founding team's personality. New hires assimilate or leave.
- Fix: Explicitly evolve values as you scale. What worked at 10 people (move fast, ask forgiveness) may be destructive at 100 (we need process).
**Culture by osmosis:** Assuming culture transmits naturally. It did at 10 people. It doesn't at 50.
- Fix: Make culture intentional. Document it. Teach it. Measure it. Reward it explicitly.
## Culture Integration with C-Suite
| When... | Culture Architect works with... | To... |
|---------|---------------------------------|-------|
| Hiring surge | CHRO | Ensure culture fit is measured, not guessed |
| Org reorg | COO + CEO | Manage culture disruption from structure change |
| M&A or partnership | CEO + COO | Detect and resolve culture clashes early |
| Performance issues | CHRO | Separate culture fit from skill deficit |
| Strategy pivot | CEO | Update values/behaviors that the pivot makes obsolete |
| Rapid growth | All | Scale rituals before culture dilutes |
## Key Questions a Culture Architect Asks
- "Can you name the last person we fired for culture reasons? What did they do?"
- "What behavior got your last promoted employee promoted? Is that in your values?"
- "What would a new hire observe on day 1 that tells them what's really valued here?"
- "What do we tolerate that we shouldn't? Who knows and does nothing?"
- "How does a team lead in Berlin know what the culture is in Madrid?"
## Red Flags
- Values posted on the wall, never referenced in reviews or decisions
- Star performers protected from cultural standards
- Leaders who "don't have time" for culture rituals
- New hires feeling the culture is "different than advertised"
- No mechanism to raise cultural concerns safely
- Culture survey results never shared with the team
## Detailed References
- `references/culture-playbook.md` — Netflix analysis, survey design, ritual examples, M&A playbook
- `templates/culture-code-template.md` — Culture code document template


@@ -0,0 +1,243 @@
# Culture Playbook
Reference frameworks for building, measuring, and evolving company culture.
---
## 1. Netflix Culture Deck — What Works, What Doesn't
Reed Hastings published this in 2009. 125 slides. 20M+ views. It changed how tech companies think about culture.
### What works
**"Adequate performance gets a generous severance"** — This is the sentence that made HR professionals uncomfortable. It's also why Netflix has high performers. If you keep B-players, A-players leave.
**Context, not control** — Instead of rules and approvals, Netflix provides context (strategy, goals, constraints) and expects people to make good decisions. This only works if you actually hire people who can.
**"Freedom and responsibility" as a pair** — You can't have one without the other. Freedom without responsibility is chaos. Responsibility without freedom is bureaucracy.
**Publicly stated values actually describe behavior** — The deck is descriptive, not aspirational. It says "here's what we actually do." That's rare and valuable.
### What doesn't work (or doesn't transfer)
**"We are not a family"** — Works at Netflix, lands badly in many cultures (especially European). The principle underneath it is valid: performance matters. The framing is optional.
**"Keeper test"** — "Would I fight to keep this person?" Powerful tool, but managers need coaching to use it well. Without context, it becomes paranoia-inducing.
**No vacation policy** — Works when managers model healthy vacation use. Doesn't work when culture implicitly punishes taking time off. The policy is neutral; the culture around it determines the outcome.
**Radical transparency on compensation** — Netflix publishes pay bands. This works in high-trust, high-fairness environments. In environments with existing pay inequities, it creates problems before it fixes them.
### Key lesson
The Netflix culture deck works because it's honest about tradeoffs. Your culture code should be equally honest. "We move fast, which sometimes means decisions get revisited" is more credible than "we move fast AND we get it right the first time."
---
## 2. Values-to-Behaviors Mapping Framework
Values without behavioral anchors are intentions. Behavioral anchors make values operational.
### The mapping process
**Step 1: List your stated values**
Don't curate. Write down everything on the values list, however it's currently stated.
**Step 2: For each value, find three real examples**
"Describe a time in the last 6 months when someone exemplified [value]."
If you can't find three examples, the value isn't real.
**Step 3: Extract the observable behavior**
From the examples, identify the specific action. Not the feeling, not the intention — the action.
**Step 4: Write the behavioral anchor**
Format: "[Subject] does [specific action] in [specific context]."
**Step 5: Find the counter-example**
For each value, identify a behavior that violates it. This is what you don't tolerate.
Format: "[Subject] does NOT [specific opposite action] even when [temptation/pressure]."
### Example mapping: "Customer Obsession"
| Component | Content |
|-----------|---------|
| Value | Customer Obsession |
| Example 1 | PM delayed a sprint to fix a bug a customer reported on a call, even though it wasn't on the roadmap |
| Example 2 | Support rep escalated a technical issue directly to engineering at 9pm, resolved within 2 hours |
| Example 3 | Sales declined a deal that would have required features that would hurt existing customers |
| Behavioral anchor | "We resolve customer-reported critical issues within 24 hours, regardless of roadmap priority" |
| Counter-example | "We do not close a customer issue as 'resolved' until the customer confirms it's resolved" |
### Common mapping mistakes
**Too vague:** "We put customers first" — this doesn't change behavior.
**Too broad:** "We care about quality in everything we do" — can't be measured or violated.
**Too personal:** "We're passionate" — describes emotion, not action.
**Too aspirational:** "We strive to deliver world-class..." — "strive" lets you off the hook.
---
## 3. Culture Survey Design — 8-12 Questions That Reveal Truth
Most culture surveys are useless because they measure satisfaction, not health. Satisfaction can be high in a dysfunctional culture ("I like my team, my boss, my pay" ≠ healthy culture).
### Survey design principles
1. **Anonymous, always.** If it's not anonymous, people answer what they think you want to hear.
2. **Short enough to complete honestly.** 8-12 questions max. 15 minutes max.
3. **Likert + open text.** "On a scale of 1-5" captures signal. "Why did you give that score?" captures insight.
4. **Action-linked.** Never run a survey unless you're prepared to share results and act on them.
5. **Consistent questions over time.** You want trend data, not one-off snapshots.
### The 10-question core survey
| # | Question | Area measured |
|---|----------|---------------|
| 1 | I can raise concerns or disagreements with my manager without fear of negative consequences. | Psychological safety |
| 2 | I know how my work connects to the company's most important goals. | Clarity/alignment |
| 3 | When I make a mistake, I can be honest about it without hiding it. | Psychological safety |
| 4 | Decisions here are made based on merit and data, not politics or relationships. | Fairness |
| 5 | I trust that leadership tells us the truth, even when it's bad news. | Trust in leadership |
| 6 | I am growing and being challenged in my current role. | Growth |
| 7 | When someone consistently underperforms, it is addressed rather than allowed to continue. | Accountability |
| 8 | I feel comfortable being myself at work. | Inclusion |
| 9 | My manager recognizes my contributions in ways that feel meaningful. | Recognition |
| 10 | I would recommend this company as a great place to work to someone I respect. | Overall health (eNPS) |
### Follow-up open text questions (pick 2-3)
- "What's the one thing leadership could do differently that would most improve the culture?"
- "What do we tolerate that we shouldn't?"
- "What should we protect as we grow that we're at risk of losing?"
- "What's the gap between what we say we value and what we actually do?"
### Analyzing results
**eNPS (question 10):** Score = % Promoters (9-10) minus % Detractors (1-6). Healthy: > 20. Great: > 40.
**Questions 1 and 3 (psychological safety):** If below 70%, you have a leadership problem, not a culture problem. Fix the manager first.
**Question 7 (accountability):** This is the most honest question. Cultures that fail to hold underperformers accountable destroy high-performer retention.
**Biggest drop between surveys:** This is your fire. Don't average it away.
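The eNPS arithmetic above can be sketched in a few lines of Python (a hypothetical helper for illustration, not one of the shipped analysis scripts):

```python
def enps(scores: list[int]) -> float:
    """eNPS on a 1-10 scale: % promoters (9-10) minus % detractors (1-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 20 responses: 9 promoters, 7 passives (7-8), 4 detractors
sample = [10] * 5 + [9] * 4 + [8] * 4 + [7] * 3 + [6] * 2 + [4] * 2
print(enps(sample))  # 25.0 -> healthy (> 20), not yet great (> 40)
```

The same favorable-rate counting applies per question, e.g. when checking the 70% threshold on the psychological-safety items.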
---
## 4. Cultural Ritual Examples by Company Stage
### Seed (< 15 people)
**Weekly "Wins and Learnings" (15 min, Fridays)**
- Each person shares one win (however small) and one learning (failure, insight, mistake)
- No slides. No prep. Just talking.
- Purpose: normalizes imperfection, builds psychological safety early
**"Open book" financials**
- Share revenue, burn, runway with the whole team monthly
- Builds owners, not employees
- Requires trust that people won't misuse the data
**"Postmortem as celebration"**
- When something goes wrong, celebrate the post-mortem publicly
- "We learned X, here's how we'll do it differently"
- Prevents a blame culture from forming early
### Early growth (15-50 people)
**Monthly "Founder's Letter"**
- CEO writes an unfiltered update: what we're winning, what's hard, what's changed
- Not polished. Not PR. Real.
- Distributed internally before it goes external
**Values spotlight in team meetings**
- One agenda item: "Who exemplified [value] this week? What did they do?"
- Takes 3 minutes. Trains the muscle for values-linked recognition.
**New hire "30-day truth sessions"**
- At day 30, every new hire meets with a senior leader (not their manager) and answers: "What surprised you? What's different from what you expected? What would you fix?"
- Captures culture signal while the new hire's eyes are still fresh
### Scaling (50-200 people)
**Quarterly culture review**
- Culture committee reviews survey results, names top issues, proposes 2-3 concrete actions
- Results shared with all-hands within 2 weeks of survey close
- 30-day action accountability check-in
**Manager calibration on culture fit**
- Quarterly: managers share one team member who exemplifies culture, one who struggles
- Group discussion on patterns, not individuals
- Identifies culture outliers early, before they become retention or performance crises
**"Culture at the edges" audit**
- Review last 10 performance issues, 10 terminations, 10 promotions
- Ask: "Is the pattern consistent with our stated values?"
- This is the reality check. The data doesn't lie.
### Large (200+ people)
**Subculture alignment mapping**
- Each department articulates its micro-culture
- Cross-reference with company core values
- Identify deviations: healthy variation vs. value violation
**Culture ambassador program**
- Peer-nominated, rotating, not HR
- Run culture rituals, surface issues, connect remote/distributed teams
- Budget: small (recognition, team events), influence: large
---
## 5. How to Evolve Culture Without Losing Identity
Culture must evolve as you scale. The mistake is either: (a) refusing to evolve, preserving a founder culture that doesn't scale, or (b) evolving so fast that the original identity is lost.
### The evolution framework
**Preserve:** Core values that define who you are. These should be stable across stages. If "move fast" is core, it doesn't go away — but its expression changes.
**Adapt:** Behaviors that worked at one stage but need updating. "Move fast" at 10 people = decide same day. At 200 people = decide within 1 week with the right people in the room.
**Add:** New behaviors required at the new scale. "Documentation culture" wasn't needed at 10. It's essential at 100.
**Retire:** Behaviors that actively hurt at scale. "Ask forgiveness, not permission" works at seed. Creates coordination chaos at Series B.
### The evolution process
1. Annual values review (not a rewrite — an audit)
2. Ask: "Which of our current behaviors are we proud of? Which embarrass us?"
3. Identify behaviors to add/adapt/retire
4. Communicate the evolution explicitly: "Here's what's changing and why"
5. Update the culture code, onboarding, and performance criteria
### Communication of culture change
Never let culture evolution look like hypocrisy. Proactively name it:
"We used to make all decisions quickly at the team level. As we've grown, that's created coordination problems. Here's how we're updating that: [new behavior]. The underlying value — speed — hasn't changed. How we deliver it has."
---
## 6. Handling Culture Clashes in M&A or Rapid Hiring
### M&A culture integration
**Before signing:**
- Culture due diligence is as important as financial DD
- Questions to answer: How do they make decisions? What gets people fired? What gets them promoted? What do they celebrate?
- Red flag: "We have a great culture" with no supporting evidence
**First 90 days:**
- Don't impose culture; conduct a bilateral audit
- Identify: what do they do that we should adopt? What do we do that they should adopt? What conflicts must be resolved?
- Assign an integration lead on each side. Give them actual authority.
**Failure mode:** Assuming acquisition = cultural absorption. The target's culture doesn't disappear. It goes underground and resurfaces as dysfunction.
### Rapid hiring culture dilution
When a company doubles in headcount in 12 months, culture dilution is near-certain. Prevention:
1. **Codify before you scale.** Document the culture before the surge, not after.
2. **Onboarding is cultural transmission.** Not just process, not just paperwork — immersion in how decisions get made, what's celebrated, what's not tolerated.
3. **Hire for culture adds, not fits.** "Fit" means homogeneity. "Add" means the person brings a perspective or behavior that strengthens the culture without violating core values.
4. **Manager density matters.** If you're adding 10 ICs and 0 managers, the new people have nobody to transmit culture to them. Hire managers ahead of the curve.
5. **Culture buddy system.** Pair new hires with culture exemplars for the first 60 days.

# [Company Name] Culture Code
> This document describes how we work, what we value, and what it's like to be here. It's meant to be honest — which means it will attract some people and repel others. Both outcomes are correct.
---
## Who We Are
[2-3 sentences: what you do, who you serve, what would be lost if you disappeared.]
**Our mission:** [One sentence. Present tense. Specific enough to be wrong.]
**Our vision:** [Where we'll be in 5-10 years. Specific enough to debate.]
---
## What We Value
*Values are behaviors, not adjectives. Each one has a "this is what it looks like" and a "this is what it doesn't look like."*
### [Value 1]
**What this means:** [Behavioral anchor — what someone does when they live this value]
**What this doesn't mean:** [The misconception or violation to guard against]
**Example:** [A real story of this value in action at your company]
---
### [Value 2]
**What this means:** [Behavioral anchor]
**What this doesn't mean:** [The misconception or violation]
**Example:** [Real story]
---
### [Value 3]
**What this means:** [Behavioral anchor]
**What this doesn't mean:** [The misconception or violation]
**Example:** [Real story]
---
*(Repeat for each value. 3-5 total. Never more than 5.)*
---
## Who Thrives Here
*These are specific, observable behaviors — not personality traits or adjectives.*
- You raise problems early, not after they've grown. You don't complain privately and stay silent publicly.
- You own decisions even when the outcome isn't what you expected.
- You say "I don't know" instead of bluffing. Then you find out.
- You give direct feedback to the person who needs to hear it, not to everyone else.
- You make things better, not just done. You notice what's broken and fix it even when it's not your job.
- [Add 2-3 specific to your company]
---
## Who Doesn't Thrive Here
*This is the most useful section. Read it carefully.*
- People who need clear instructions before taking action. We provide context; you figure out the path.
- People who optimize for credit over outcomes. We care what got done, not who gets the headline.
- People who treat bad news as a liability. Here, hiding problems is the problem.
- People who need consensus before every decision. We move faster than that.
- [Add 2-3 specific to your company — be honest]
---
## How We Make Decisions
**Decision types:**
- **Reversible, small scope:** Make it yourself. Don't ask. Tell us what you decided.
- **Reversible, larger scope:** Tell relevant people, move forward unless you hear an objection within 24 hours.
- **Irreversible or high-stakes:** Bring the right people into the room. Write it down. Decide together.
**Default:** Bias toward action. A good decision made fast beats a perfect decision made slow.
**Who decides:** The person closest to the problem, with the most context. Not the most senior person in the room.
---
## How We Communicate
**Default to async.** Most things don't need a meeting. If it can be written, write it.
**Meetings that happen:** [List your recurring meetings and what they're for]
**Meetings that don't happen:** Status updates (use tools), information sharing (write a doc), decisions that one person could make.
**How we give feedback:** Direct, specific, timely. "That report was late and incomplete" not "you should think about your time management." We give feedback to help, not to vent.
**How we share bad news:** Within 24 hours of knowing. To the person who needs to know. Not softened to the point of being unclear.
---
## How We Grow People
**We invest in people who invest in themselves.** We provide [budget, learning days, access — be specific]. We don't require you to use them.
**Promotions:** Based on impact already demonstrated, not time served. You're promoted when you're already doing the job you want.
**Performance feedback:** [How often, what format, who delivers it]
**When things aren't working:** We have direct conversations early. We don't let problems simmer for quarterly reviews.
---
## What We Expect of Leaders
Leaders here are multipliers, not heroes. Your job is to make your team better.
- You share context, not just instructions. Your team should be able to make decisions you'd make when you're not there.
- You give credit visibly and take accountability privately.
- You have hard conversations before they become unavoidable.
- You model the culture. If you don't live the values, neither will your team.
- You develop people, including ones who will outgrow their role here.
---
## The Fine Print
This document is descriptive, not aspirational. It describes how we operate today, with the intent to keep improving.
We update this annually. When the update happens, we'll tell you what changed and why.
*Last updated: [Date] | Version: [X.X]*

---
name: decision-logger
description: "Two-layer memory architecture for board meeting decisions. Manages raw transcripts (Layer 1) and approved decisions (Layer 2). Use when logging decisions after a board meeting, reviewing past decisions with /cs:decisions, or checking overdue action items with /cs:review. Invoked automatically by the board-meeting skill after Phase 5 founder approval."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: decision-memory
updated: 2026-03-05
python-tools: scripts/decision_tracker.py
---
# Decision Logger
Two-layer memory system. Layer 1 stores everything. Layer 2 stores only what the founder approved. Future meetings read Layer 2 only — this prevents hallucinated consensus from past debates bleeding into new deliberations.
## Keywords
decision log, memory, approved decisions, action items, board minutes, /cs:decisions, /cs:review, conflict detection, DO_NOT_RESURFACE
## Quick Start
```bash
python scripts/decision_tracker.py --demo # See sample output
python scripts/decision_tracker.py --summary # Overview + overdue
python scripts/decision_tracker.py --overdue # Past-deadline actions
python scripts/decision_tracker.py --conflicts # Contradiction detection
python scripts/decision_tracker.py --owner "CTO" # Filter by owner
python scripts/decision_tracker.py --search "pricing" # Search decisions
```
---
## Commands
| Command | Effect |
|---------|--------|
| `/cs:decisions` | Last 10 approved decisions |
| `/cs:decisions --all` | Full history |
| `/cs:decisions --owner CMO` | Filter by owner |
| `/cs:decisions --topic pricing` | Search by keyword |
| `/cs:review` | Action items due within 7 days |
| `/cs:review --overdue` | Items past deadline |
---
## Two-Layer Architecture
### Layer 1 — Raw Transcripts
**Location:** `memory/board-meetings/YYYY-MM-DD-raw.md`
- Full Phase 2 agent contributions, Phase 3 critique, Phase 4 synthesis
- All debates, including rejected arguments
- **NEVER auto-loaded.** Only on explicit founder request.
- Archive after 90 days → `memory/board-meetings/archive/YYYY/`
### Layer 2 — Approved Decisions
**Location:** `memory/board-meetings/decisions.md`
- ONLY founder-approved decisions, action items, user corrections
- **Loaded automatically in Phase 1 of every board meeting**
- Append-only. Decisions are never deleted — only superseded.
- Managed by Chief of Staff after Phase 5. Never written by agents directly.
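The 90-day archive rule could be automated along these lines (a sketch only; archiving is not implemented in the shipped `decision_tracker.py`, and `archive_old_raw` is a hypothetical name):

```python
import re
import shutil
from datetime import date, datetime
from pathlib import Path

def archive_old_raw(base: Path, max_age_days: int = 90) -> list[Path]:
    """Move YYYY-MM-DD-raw.md files older than max_age_days into archive/YYYY/."""
    moved = []
    for f in sorted(base.glob("????-??-??-raw.md")):
        m = re.match(r"(\d{4})-(\d{2})-(\d{2})-raw\.md$", f.name)
        if not m:
            continue
        meeting = datetime.strptime(f.name[:10], "%Y-%m-%d").date()
        if (date.today() - meeting).days > max_age_days:
            dest = base / "archive" / m.group(1)  # archive/YYYY/
            dest.mkdir(parents=True, exist_ok=True)
            moved.append(Path(shutil.move(str(f), str(dest / f.name))))
    return moved
```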
---
## Decision Entry Format
```markdown
## [YYYY-MM-DD] — [AGENDA ITEM TITLE]
**Decision:** [One clear statement of what was decided.]
**Owner:** [One person or role — accountable for execution.]
**Deadline:** [YYYY-MM-DD]
**Review:** [YYYY-MM-DD]
**Rationale:** [Why this over alternatives. 1-2 sentences.]
**User Override:** [If founder changed agent recommendation — what and why. Blank if not applicable.]
**Rejected:**
- [Proposal] — [reason] [DO_NOT_RESURFACE]
**Action Items:**
- [ ] [Action] — Owner: [name] — Due: [YYYY-MM-DD] — Review: [YYYY-MM-DD]
**Supersedes:** [DATE of previous decision on same topic, if any]
**Superseded by:** [Filled in retroactively if overridden later]
**Raw transcript:** memory/board-meetings/[DATE]-raw.md
```
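Because every field uses a fixed `**Label:**` prefix, an entry is trivially machine-readable. A minimal, self-contained sketch (the real parser lives in `scripts/decision_tracker.py`):

```python
import re

ENTRY = """\
## 2026-02-15 — Spain Market Expansion
**Decision:** Expand to Spain in Q3 2026 with a pilot in Madrid and Barcelona.
**Owner:** CMO
**Deadline:** 2026-03-01
**Review:** 2026-04-01
"""

def fields(entry: str) -> dict[str, str]:
    """Extract '**Label:** value' pairs from a single decision entry."""
    return dict(re.findall(r"^\*\*([^*]+):\*\*\s*(.*)$", entry, flags=re.M))

print(fields(ENTRY)["Owner"], fields(ENTRY)["Deadline"])  # CMO 2026-03-01
```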
---
## Conflict Detection
Before logging, Chief of Staff checks for:
1. **DO_NOT_RESURFACE violations** — new decision matches a rejected proposal
2. **Topic contradictions** — two active decisions on same topic with different conclusions
3. **Owner conflicts** — same action assigned to different people in different decisions
When a conflict is found:
```
⚠️ DECISION CONFLICT
New: [text]
Conflicts with: [DATE] — [existing text]
Options: (1) Supersede old (2) Merge (3) Defer to founder
```
**DO_NOT_RESURFACE enforcement:**
```
🚫 BLOCKED: "[Proposal]" was rejected on [DATE]. Reason: [reason].
To reopen: founder must explicitly say "reopen [topic] from [DATE]".
```
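The check can be approximated with keyword overlap between the new proposal and each rejected item. A sketch with hypothetical helper names; the shipped `--conflicts` report uses substring containment instead:

```python
STOP = {"the", "a", "an", "and", "or", "to", "for", "of", "in", "on", "with"}

def content_words(s: str) -> set[str]:
    return {w.lower().strip(".,") for w in s.split()} - STOP

def resurfaces(proposal: str, rejected_item: str, min_overlap: int = 2) -> bool:
    """Flag a proposal sharing >= min_overlap content words with a rejected item."""
    core = rejected_item.split("—")[0].replace("[DO_NOT_RESURFACE]", "")
    return len(content_words(proposal) & content_words(core)) >= min_overlap

rejected = "Freemium tier — not appropriate for enterprise healthcare [DO_NOT_RESURFACE]"
print(resurfaces("Introduce a freemium tier for SMBs", rejected))  # True
print(resurfaces("Hire two senior engineers in Q2", rejected))     # False
```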
---
## Logging Workflow (Post Phase 5)
1. Founder approves synthesis
2. Write Layer 1 raw transcript → `YYYY-MM-DD-raw.md`
3. Check conflicts against `decisions.md`
4. Surface conflicts → wait for founder resolution
5. Append approved entries to `decisions.md`
6. Confirm: decisions logged, actions tracked, DO_NOT_RESURFACE flags added
---
## Marking Actions Complete
```markdown
- [x] [Action] — Owner: [name] — Completed: [DATE] — Result: [one sentence]
```
Never delete completed items. The history is the record.
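Completed lines keep the same field convention as open items, so they stay machine-parseable. A self-contained sketch (the shipped `parse_action_item` is more thorough):

```python
import re

LINE = ("- [x] Hire Spanish-speaking CSM — Owner: CHRO — "
        "Completed: 2026-02-28 — Result: Hired Maria G., starts March 10")

completed = LINE.startswith("- [x]")
# Drop the checkbox, then split the remainder on " — " field separators
parts = re.split(r"\s+—\s+", LINE[LINE.find("]") + 1:].strip())
text, meta = parts[0], dict(p.split(": ", 1) for p in parts[1:])
print(completed, meta["Completed"])  # True 2026-02-28
```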
---
## File Structure
```
memory/board-meetings/
├── decisions.md # Layer 2: append-only, founder-approved
├── YYYY-MM-DD-raw.md # Layer 1: full transcript per meeting
└── archive/YYYY/ # Raw files after 90 days
```
---
## References
- `templates/decision-entry.md` — single entry template with field rules
- `scripts/decision_tracker.py` — CLI parser, overdue tracker, conflict detector

#!/usr/bin/env python3
"""
decision_tracker.py — Board Meeting Decision Parser & Reporter
Part of the C-Level Advisor / Decision Logger skill.
Parses memory/board-meetings/decisions.md and produces actionable reports.
Stdlib only. No dependencies.
Usage:
python decision_tracker.py --summary
python decision_tracker.py --overdue
python decision_tracker.py --conflicts
python decision_tracker.py --owner "CMO"
python decision_tracker.py --search "pricing"
python decision_tracker.py --due-within 7
python decision_tracker.py --demo # Run with sample data
"""
import argparse
import os
import re
import sys
from datetime import date, datetime, timedelta
from pathlib import Path
from typing import Optional
# ─────────────────────────────────────────────
# Data structures
# ─────────────────────────────────────────────
class ActionItem:
def __init__(self, text: str, owner: str, due: Optional[date],
review: Optional[date], completed: bool, completed_date: Optional[date],
result: str):
self.text = text
self.owner = owner
self.due = due
self.review = review
self.completed = completed
self.completed_date = completed_date
self.result = result
def is_overdue(self) -> bool:
if self.completed:
return False
if self.due and self.due < date.today():
return True
return False
def is_due_within(self, days: int) -> bool:
if self.completed:
return False
if self.due:
return date.today() <= self.due <= date.today() + timedelta(days=days)
return False
class Decision:
def __init__(self):
self.date: Optional[date] = None
self.title: str = ""
self.decision: str = ""
self.owner: str = ""
self.deadline: Optional[date] = None
self.review: Optional[date] = None
self.rationale: str = ""
self.user_override: str = ""
self.rejected: list[str] = []
self.action_items: list[ActionItem] = []
self.supersedes: str = ""
self.superseded_by: str = ""
self.raw_transcript: str = ""
def is_active(self) -> bool:
return not bool(self.superseded_by.strip())
def has_override(self) -> bool:
return bool(self.user_override.strip())
# ─────────────────────────────────────────────
# Parser
# ─────────────────────────────────────────────
def parse_date(s: str) -> Optional[date]:
"""Parse YYYY-MM-DD or return None."""
if not s:
return None
s = s.strip()
for fmt in ("%Y-%m-%d", "%Y/%m/%d", "%d.%m.%Y"):
try:
return datetime.strptime(s, fmt).date()
except ValueError:
continue
return None
def parse_action_item(line: str) -> Optional[ActionItem]:
"""
Parse a line like:
- [ ] Action text — Owner: CMO — Due: 2026-03-15 — Review: 2026-03-29
- [x] Action text — Owner: CEO — Completed: 2026-03-10 — Result: Done
"""
line = line.strip()
if not line.startswith("- ["):
return None
completed = line.startswith("- [x]") or line.startswith("- [X]")
text_start = line.find("]") + 1
raw = line[text_start:].strip()
# Split on " — " (em dash with spaces) or " - " fallback
parts_raw = re.split(r"\s+[—\-]{1,2}\s+", raw)
text = parts_raw[0].strip() if parts_raw else raw
def extract(label: str, parts: list[str]) -> str:
for p in parts:
if p.lower().startswith(label.lower() + ":"):
return p[len(label) + 1:].strip()
return ""
owner = extract("Owner", parts_raw[1:])
due_str = extract("Due", parts_raw[1:])
review_str = extract("Review", parts_raw[1:])
completed_str = extract("Completed", parts_raw[1:])
result = extract("Result", parts_raw[1:])
return ActionItem(
text=text,
owner=owner,
due=parse_date(due_str),
review=parse_date(review_str),
completed=completed,
completed_date=parse_date(completed_str),
result=result,
)
def parse_decisions(content: str) -> list[Decision]:
"""Parse the full decisions.md content into Decision objects."""
decisions = []
current: Optional[Decision] = None
in_rejected = False
in_actions = False
for line in content.splitlines():
# New decision entry
header_match = re.match(r"^## (\d{4}-\d{2}-\d{2}) — (.+)$", line)
if header_match:
if current:
decisions.append(current)
current = Decision()
current.date = parse_date(header_match.group(1))
current.title = header_match.group(2).strip()
in_rejected = False
in_actions = False
continue
if current is None:
continue
# Field parsing
def extract_field(label: str) -> Optional[str]:
pattern = rf"^\*\*{re.escape(label)}:\*\*\s*(.*)$"
m = re.match(pattern, line)
return m.group(1).strip() if m else None
val = extract_field("Decision")
if val is not None:
current.decision = val
in_rejected = False
in_actions = False
continue
val = extract_field("Owner")
if val is not None:
current.owner = val
continue
val = extract_field("Deadline")
if val is not None:
current.deadline = parse_date(val)
continue
val = extract_field("Review")
if val is not None:
current.review = parse_date(val)
continue
val = extract_field("Rationale")
if val is not None:
current.rationale = val
continue
val = extract_field("User Override")
if val is not None:
current.user_override = val
in_rejected = False
in_actions = False
continue
val = extract_field("Supersedes")
if val is not None:
current.supersedes = val
continue
val = extract_field("Superseded by")
if val is not None:
current.superseded_by = val
continue
val = extract_field("Raw transcript")
if val is not None:
current.raw_transcript = val
continue
# Section headers
if re.match(r"^\*\*Rejected:\*\*", line):
in_rejected = True
in_actions = False
continue
if re.match(r"^\*\*Action Items:\*\*", line):
in_actions = True
in_rejected = False
continue
if line.startswith("**"):
in_rejected = False
in_actions = False
# List items
if in_rejected and line.strip().startswith("-"):
item = line.strip().lstrip("- ").strip()
if item and not item.startswith("<!--"):
current.rejected.append(item)
continue
if in_actions and line.strip().startswith("- ["):
action = parse_action_item(line)
if action:
current.action_items.append(action)
continue
if current:
decisions.append(current)
return decisions
# ─────────────────────────────────────────────
# Reports
# ─────────────────────────────────────────────
def fmt_date(d: Optional[date]) -> str:
return d.strftime("%Y-%m-%d") if d else ""
def fmt_delta(d: Optional[date]) -> str:
if not d:
return ""
delta = (d - date.today()).days
if delta < 0:
return f" ⚠️ {abs(delta)}d overdue"
if delta == 0:
return " 🔴 DUE TODAY"
if delta <= 3:
return f" 🟡 {delta}d left"
return f" ({delta}d)"
def print_section(title: str):
    print(f"\n{'─' * 60}")
print(f" {title}")
    print(f"{'─' * 60}")
def report_summary(decisions: list[Decision]):
active = [d for d in decisions if d.is_active()]
all_actions = [a for d in decisions for a in d.action_items]
open_actions = [a for a in all_actions if not a.completed]
overdue = [a for a in all_actions if a.is_overdue()]
overrides = [d for d in decisions if d.has_override()]
dnr_count = sum(len(d.rejected) for d in decisions)
print_section("DECISION LOG SUMMARY")
print(f" Total decisions: {len(decisions)}")
print(f" Active (not super.): {len(active)}")
print(f" Superseded: {len(decisions) - len(active)}")
print(f" Founder overrides: {len(overrides)}")
print(f" DO_NOT_RESURFACE: {dnr_count}")
print(f" Total action items: {len(all_actions)}")
print(f" Open action items: {len(open_actions)}")
print(f" Overdue: {len(overdue)}")
if overdue:
        print(f"\n {'─' * 40}")
        print(f" ⚠️ OVERDUE ITEMS ({len(overdue)})")
        print(f" {'─' * 40}")
for a in overdue:
print(f" • [{a.owner}] {a.text}")
print(f" Due: {fmt_date(a.due)}{fmt_delta(a.due)}")
    print(f"\n {'─' * 40}")
    print(f" RECENT DECISIONS")
    print(f" {'─' * 40}")
for d in sorted(active, key=lambda x: x.date or date.min, reverse=True)[:5]:
print(f" [{fmt_date(d.date)}] {d.title}")
print(f" Owner: {d.owner or ''} | Deadline: {fmt_date(d.deadline)}")
open_count = sum(1 for a in d.action_items if not a.completed)
if open_count:
print(f" Open actions: {open_count}")
def report_overdue(decisions: list[Decision]):
print_section("OVERDUE ACTION ITEMS")
found = False
for d in sorted(decisions, key=lambda x: x.date or date.min, reverse=True):
overdue = [a for a in d.action_items if a.is_overdue()]
if not overdue:
continue
found = True
print(f"\n 📋 {d.title} [{fmt_date(d.date)}]")
for a in overdue:
print(f" ⚠️ {a.text}")
print(f" Owner: {a.owner or ''} | Due: {fmt_date(a.due)}{fmt_delta(a.due)}")
if not found:
print("\n ✅ No overdue items.")
def report_due_within(decisions: list[Decision], days: int):
print_section(f"ACTION ITEMS DUE WITHIN {days} DAYS")
found = False
for d in sorted(decisions, key=lambda x: x.date or date.min, reverse=True):
upcoming = [a for a in d.action_items if a.is_due_within(days)]
if not upcoming:
continue
found = True
print(f"\n 📋 {d.title} [{fmt_date(d.date)}]")
for a in upcoming:
            print(f" • {a.text}")
print(f" Owner: {a.owner or ''} | Due: {fmt_date(a.due)}{fmt_delta(a.due)}")
if not found:
print(f"\n ✅ Nothing due in the next {days} days.")
def report_by_owner(decisions: list[Decision], owner: str):
print_section(f"ACTION ITEMS — OWNER: {owner.upper()}")
found = False
for d in sorted(decisions, key=lambda x: x.date or date.min, reverse=True):
items = [a for a in d.action_items
if a.owner.lower() == owner.lower() and not a.completed]
if not items:
continue
found = True
print(f"\n 📋 {d.title} [{fmt_date(d.date)}]")
for a in items:
flag = "⚠️ OVERDUE" if a.is_overdue() else ""
            print(f" [ ] {a.text} {flag}")
print(f" Due: {fmt_date(a.due)}{fmt_delta(a.due)}")
if not found:
print(f"\n No open action items for '{owner}'.")
def report_search(decisions: list[Decision], query: str):
print_section(f"SEARCH: \"{query}\"")
q = query.lower()
found = False
for d in decisions:
hit_fields = []
if q in d.title.lower():
hit_fields.append("title")
if q in d.decision.lower():
hit_fields.append("decision")
if q in d.rationale.lower():
hit_fields.append("rationale")
if any(q in r.lower() for r in d.rejected):
hit_fields.append("rejected")
if hit_fields:
found = True
print(f"\n [{fmt_date(d.date)}] {d.title} (match: {', '.join(hit_fields)})")
if "decision" in hit_fields:
                print(f"    {d.decision}")
if "rejected" in hit_fields:
matches = [r for r in d.rejected if q in r.lower()]
for r in matches:
print(f" ✗ [REJECTED] {r}")
if not found:
print(f"\n No results for '{query}'.")
def report_conflicts(decisions: list[Decision]):
"""
Simple conflict detection: look for decisions on the same topic
(matching title words) that are both active and have different decisions.
Also flag if a rejected item appears as a new decision.
"""
print_section("CONFLICT DETECTION")
conflicts_found = False
# Check for DO_NOT_RESURFACE violations
all_rejected_texts = []
for d in decisions:
for r in d.rejected:
clean = re.sub(r"\[DO_NOT_RESURFACE\]", "", r).strip().lower()
all_rejected_texts.append((clean, d.date, d.title))
active = [d for d in decisions if d.is_active()]
for d in active:
decision_lower = d.decision.lower()
for rejected_text, rejected_date, rejected_title in all_rejected_texts:
if rejected_text and rejected_text in decision_lower:
conflicts_found = True
print(f"\n 🚫 POTENTIAL DO_NOT_RESURFACE VIOLATION")
print(f" Decision [{fmt_date(d.date)}]: {d.decision}")
print(f" Matches rejected item from [{fmt_date(rejected_date)}] ({rejected_title}):")
print(f" \"{rejected_text}\"")
# Check for same-topic contradictions (shared keywords in title)
stop_words = {"the", "a", "an", "and", "or", "to", "for", "of", "in", "on", "with", "vs"}
for i, d1 in enumerate(active):
words1 = set(w.lower() for w in d1.title.split() if w.lower() not in stop_words)
for d2 in active[i+1:]:
words2 = set(w.lower() for w in d2.title.split() if w.lower() not in stop_words)
overlap = words1 & words2
if len(overlap) >= 2 and d1.decision and d2.decision:
# Different decisions on similar topic
if d1.decision.lower() != d2.decision.lower():
conflicts_found = True
print(f"\n ⚠️ POTENTIAL CONFLICT (shared topic: {overlap})")
print(f" [{fmt_date(d1.date)}] {d1.title}")
print(f" Decision: {d1.decision}")
print(f" [{fmt_date(d2.date)}] {d2.title}")
print(f" Decision: {d2.decision}")
if d1.superseded_by or d2.superseded_by:
print(f" One may supersede the other — check Superseded by fields.")
if not conflicts_found:
print("\n ✅ No conflicts detected.")
# ─────────────────────────────────────────────
# Sample data for --demo mode
# ─────────────────────────────────────────────
SAMPLE_DECISIONS_MD = f"""# Board Meeting Decisions — Layer 2
This file contains ONLY founder-approved decisions.
---
## 2026-02-15 — Spain Market Expansion
**Decision:** Expand to Spain in Q3 2026 with a pilot in Madrid and Barcelona.
**Owner:** CMO
**Deadline:** 2026-03-01
**Review:** 2026-04-01
**Rationale:** Market research shows 40% lower CAC than Germany. Two pilot customers already committed.
**User Override:** Founder reduced pilot scope from 5 cities to 2. Reason: reduce operational risk during expansion.
**Rejected:**
- Launch in all of Spain simultaneously — too resource-intensive at current headcount [DO_NOT_RESURFACE]
- Partner with a local distributor instead of direct sales — margins too low [DO_NOT_RESURFACE]
**Action Items:**
- [x] Hire Spanish-speaking CSM — Owner: CHRO — Completed: 2026-02-28 — Result: Hired Maria G., starts March 10
- [ ] Finalize Madrid pilot customer contracts — Owner: CRO — Due: {(date.today() - timedelta(days=3)).strftime('%Y-%m-%d')} — Review: 2026-04-01
- [ ] Translate app to Spanish (ES-ES) — Owner: CTO — Due: {(date.today() + timedelta(days=5)).strftime('%Y-%m-%d')} — Review: 2026-04-15
**Supersedes:**
**Superseded by:**
**Raw transcript:** memory/board-meetings/2026-02-15-raw.md
---
## 2026-02-28 — Pricing Strategy Revision
**Decision:** Move from per-seat to usage-based pricing effective Q2 2026.
**Owner:** CFO
**Deadline:** 2026-03-20
**Review:** 2026-05-01
**Rationale:** Usage-based aligns with customer value. Three enterprise customers requested it explicitly.
**User Override:**
**Rejected:**
- Freemium tier — not appropriate for enterprise healthcare segment [DO_NOT_RESURFACE]
- Raise prices 30% across the board — too aggressive without usage data [DO_NOT_RESURFACE]
**Action Items:**
- [ ] Model 3 pricing scenarios (conservative/base/aggressive) — Owner: CFO — Due: {(date.today() - timedelta(days=1)).strftime('%Y-%m-%d')} — Review: 2026-03-25
- [ ] Customer interviews on usage patterns (n=10) — Owner: CMO — Due: {(date.today() + timedelta(days=10)).strftime('%Y-%m-%d')} — Review: 2026-04-01
- [ ] Update billing infrastructure for usage tracking — Owner: CTO — Due: 2026-04-01 — Review: 2026-04-15
**Supersedes:**
**Superseded by:**
**Raw transcript:** memory/board-meetings/2026-02-28-raw.md
---
## 2026-03-04 — Engineering Hiring Plan Q2
**Decision:** Hire 2 senior engineers in Q2: one ML/AI, one backend. No contractors.
**Owner:** CTO
**Deadline:** 2026-04-15
**Review:** 2026-05-01
**Rationale:** ML roadmap blocked. Backend capacity at 85%. Contractors rejected due to IP risk in regulated domain.
**User Override:** Founder added: "ML hire must have healthcare AI experience. Non-negotiable."
**Rejected:**
- Contract team of 5 for 3 months — IP risk in regulated domain [DO_NOT_RESURFACE]
- Hire junior engineers to save budget — wrong tradeoff at this stage [DO_NOT_RESURFACE]
**Action Items:**
- [ ] Post ML engineer JD — Owner: CHRO — Due: {(date.today() + timedelta(days=2)).strftime('%Y-%m-%d')} — Review: 2026-03-20
- [ ] Post backend engineer JD — Owner: CHRO — Due: {(date.today() + timedelta(days=2)).strftime('%Y-%m-%d')} — Review: 2026-03-20
- [ ] Define ML role requirements with healthcare AI spec — Owner: CTO — Due: {(date.today() + timedelta(days=1)).strftime('%Y-%m-%d')} — Review: 2026-03-15
**Supersedes:**
**Superseded by:**
**Raw transcript:** memory/board-meetings/2026-03-04-raw.md
"""
# ─────────────────────────────────────────────
# Main
# ─────────────────────────────────────────────
def load_decisions(decisions_path: Path, demo: bool) -> list[Decision]:
if demo:
content = SAMPLE_DECISIONS_MD
elif decisions_path.exists():
content = decisions_path.read_text(encoding="utf-8")
else:
print(f" ⚠️ decisions.md not found at: {decisions_path}")
print(f" Run with --demo to see sample output.")
print(f" To initialize: mkdir -p memory/board-meetings && touch memory/board-meetings/decisions.md")
sys.exit(1)
return parse_decisions(content)
def main():
parser = argparse.ArgumentParser(
description="Board Meeting Decision Tracker",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument("--file", default="memory/board-meetings/decisions.md",
help="Path to decisions.md (default: memory/board-meetings/decisions.md)")
parser.add_argument("--demo", action="store_true",
help="Run with built-in sample data (no file needed)")
parser.add_argument("--summary", action="store_true",
help="Show overview: counts, overdue, recent decisions")
parser.add_argument("--overdue", action="store_true",
help="List all overdue action items")
parser.add_argument("--due-within", type=int, metavar="DAYS",
help="List items due within N days")
parser.add_argument("--owner", metavar="ROLE",
help="Filter action items by owner")
parser.add_argument("--search", metavar="QUERY",
help="Search decisions and rejected proposals")
parser.add_argument("--conflicts", action="store_true",
help="Check for contradictory decisions or DO_NOT_RESURFACE violations")
parser.add_argument("--all", action="store_true",
help="Show all decisions (summary format)")
args = parser.parse_args()
    if not any([args.summary, args.overdue, args.due_within is not None,
                args.owner, args.search, args.conflicts, args.all]):
        args.summary = True  # Default action
decisions_path = Path(args.file)
decisions = load_decisions(decisions_path, args.demo)
if not decisions:
print(" No decisions found in decisions.md.")
sys.exit(0)
if args.demo:
print(f"\n 🎯 DEMO MODE — using built-in sample data ({len(decisions)} decisions)")
if args.summary:
report_summary(decisions)
if args.overdue:
report_overdue(decisions)
    if args.due_within is not None:
report_due_within(decisions, args.due_within)
if args.owner:
report_by_owner(decisions, args.owner)
if args.search:
report_search(decisions, args.search)
if args.conflicts:
report_conflicts(decisions)
    if args.all:
print_section(f"ALL DECISIONS ({len(decisions)} total)")
for d in sorted(decisions, key=lambda x: x.date or date.min, reverse=True):
status = "📦 SUPERSEDED" if not d.is_active() else ""
override = " [OVERRIDE]" if d.has_override() else ""
print(f"\n [{fmt_date(d.date)}] {d.title} {status}{override}")
print(f" Decision: {d.decision}")
print(f" Owner: {d.owner or ''} | Deadline: {fmt_date(d.deadline)}")
open_actions = [a for a in d.action_items if not a.completed]
if open_actions:
print(f" Open actions: {len(open_actions)}")
print()
if __name__ == "__main__":
main()


@@ -0,0 +1,63 @@
# Decision Entry Template
Single entry for `memory/board-meetings/decisions.md`.
Copy this block and fill it in after each approved board decision.
---
```markdown
## [YYYY-MM-DD] — [AGENDA ITEM TITLE]
**Decision:** [One clear statement of what was decided.]
**Owner:** [Role or name. One person. If it needs two, the first is accountable.]
**Deadline:** [YYYY-MM-DD]
**Review:** [YYYY-MM-DD — when to check. Usually 2–4 weeks after the deadline.]
**Rationale:** [Why this over alternatives. 1-2 sentences. No fluff.]
**User Override:**
<!-- Leave blank if founder approved the agent recommendation.
Fill in if founder changed something:
"Founder rejected [agent recommendation] because [reason].
Actual decision: [what founder decided instead]." -->
**Rejected:**
<!-- List every proposal explicitly rejected in this discussion.
These must not be resurfaced without new information. -->
- [Proposal text] — [reason for rejection] [DO_NOT_RESURFACE]
**Action Items:**
- [ ] [Specific action] — Owner: [name] — Due: [YYYY-MM-DD] — Review: [YYYY-MM-DD]
- [ ] [Specific action] — Owner: [name] — Due: [YYYY-MM-DD] — Review: [YYYY-MM-DD]
**Supersedes:** <!-- DATE of the previous decision on this topic, if any -->
**Superseded by:** <!-- Leave blank. Will be filled in if a later decision overrides this. -->
**Raw transcript:** memory/board-meetings/[YYYY-MM-DD]-raw.md
```
---
## Field Rules
| Field | Rule |
|-------|------|
| Decision | Must be a single statement. If it takes two sentences, split into two decisions. |
| Owner | One person or role. "Everyone" owns nothing. |
| Deadline | Required. No "TBD". If unknown, set 14 days and review. |
| Review | Always set. Minimum 1 day after deadline. |
| Rationale | Required. "Because we decided so" is not rationale. |
| User Override | Honest record. Do not soften or omit. |
| Rejected | Every rejected proposal must be listed. |
| DO_NOT_RESURFACE | Applied to every rejected item. No exceptions. |
---
## Marking Action Items Complete
When an action item is done, update the entry in decisions.md:
```markdown
- [x] [Action text] — Owner: [name] — Completed: [YYYY-MM-DD] — Result: [one sentence outcome]
```
Do not delete completed items. The history is the record.
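The completion edit above can be scripted rather than done by hand. A minimal sketch — the `mark_complete` helper and its regex are illustrative, not part of the shipped tools:

```python
import re
from datetime import date
from pathlib import Path

def mark_complete(path: Path, action_text: str, result: str) -> bool:
    """Flip '- [ ] <action> — Owner: X — Due: ...' to the completed form in place."""
    content = path.read_text(encoding="utf-8")
    pattern = re.compile(
        r"- \[ \] " + re.escape(action_text) + r" — Owner: (?P<owner>.+?) — Due: [^\n]*"
    )

    # Use a callable replacement so result/action text never needs regex escaping.
    def _repl(m: re.Match) -> str:
        return (f"- [x] {action_text} — Owner: {m.group('owner')} — "
                f"Completed: {date.today():%Y-%m-%d} — Result: {result}")

    new_content, n = pattern.subn(_repl, content, count=1)
    if n:
        path.write_text(new_content, encoding="utf-8")
    return bool(n)
```

The `count=1` guard means only the first matching open item is flipped; completed items keep their full history line, per the rule above.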


@@ -0,0 +1,75 @@
{
"name": "executive-mentor",
"namespace": "em",
"description": "Adversarial thinking partner for founders and executives. Stress-tests plans, prepares for board meetings, navigates hard decisions, and forces honest post-mortems.",
"version": "1.0.0",
"author": "Alireza Rezvani",
"skills": [
{
"name": "challenge",
"command": "/em:challenge",
"description": "Pre-mortem analysis. Find weaknesses in any plan before reality does.",
"path": "skills/challenge/SKILL.md",
"input": "<plan>",
"example": "/em:challenge Our Q3 go-to-market plan targeting enterprise customers"
},
{
"name": "board-prep",
"command": "/em:board-prep",
"description": "Prepare for board meetings. Anticipate hard questions. Know your numbers cold.",
"path": "skills/board-prep/SKILL.md",
"input": "<agenda>",
"example": "/em:board-prep Q3 review: missed revenue target, new product roadmap, hiring plan"
},
{
"name": "hard-call",
"command": "/em:hard-call",
"description": "Framework for decisions with no good options. Layoffs, pivots, firings.",
"path": "skills/hard-call/SKILL.md",
"input": "<decision>",
"example": "/em:hard-call Whether to fire co-founder who hasn't been performing for 6 months"
},
{
"name": "stress-test",
"command": "/em:stress-test",
"description": "Challenge any business assumption. Revenue projections, moats, market size.",
"path": "skills/stress-test/SKILL.md",
"input": "<assumption>",
"example": "/em:stress-test We'll reach $2M ARR by end of Q4 with current conversion rates"
},
{
"name": "postmortem",
"command": "/em:postmortem",
"description": "Honest analysis of what went wrong. 5 Whys done properly.",
"path": "skills/postmortem/SKILL.md",
"input": "<event>",
"example": "/em:postmortem We lost our largest customer after 18 months"
}
],
"agents": [
{
"name": "devils-advocate",
"description": "Adversarial thinker that systematically finds weaknesses. Always 3 concerns, always rated, always mitigated.",
"path": "agents/devils-advocate.md"
}
],
"scripts": [
{
"name": "decision_matrix_scorer",
"description": "Weighted decision analysis with sensitivity testing. Which option wins and how fragile is that result.",
"path": "scripts/decision_matrix_scorer.py",
"run": "python scripts/decision_matrix_scorer.py"
},
{
"name": "stakeholder_mapper",
"description": "Map stakeholders by influence and alignment. Find champions, blockers, swing votes.",
"path": "scripts/stakeholder_mapper.py",
"run": "python scripts/stakeholder_mapper.py"
}
],
"references": [
"references/hard_things.md",
"references/board_dynamics.md",
"references/crisis_playbook.md"
]
}


@@ -0,0 +1,142 @@
---
name: executive-mentor
description: "Adversarial thinking partner for founders and executives. Stress-tests plans, prepares for brutal board meetings, dissects decisions with no good options, and forces honest post-mortems. Use when you need someone to find the holes before the board does, make a decision you've been avoiding, or understand what actually went wrong."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: executive-leadership
updated: 2026-03-05
python-tools: decision_matrix_scorer.py, stakeholder_mapper.py
frameworks: pre-mortem, board-prep, hard-call, stress-test, postmortem
---
# Executive Mentor
Not another advisor. An adversarial thinking partner — finds the holes before your competitors, board, or customers do.
## The Difference
Other C-suite skills give you frameworks. Executive Mentor gives you the questions you don't want to answer.
- **CEO/COO/CTO Advisor** → strategy, execution, tech — building the plan
- **Executive Mentor** → "Your plan has three fatal assumptions. Let's find them now."
## Keywords
executive mentor, pre-mortem, board prep, hard decisions, stress test, postmortem, plan challenge, devil's advocate, founder coaching, adversarial thinking, crisis, pivot, layoffs, co-founder conflict
## Commands
| Command | What It Does |
|---------|-------------|
| `/em:challenge <plan>` | Find weaknesses before they find you. Pre-mortem + severity ratings. |
| `/em:board-prep <agenda>` | Prepare for hard questions. Build the narrative. Know your numbers cold. |
| `/em:hard-call <decision>` | Framework for decisions with no good options. Layoffs, pivots, firings. |
| `/em:stress-test <assumption>` | Challenge any assumption. Revenue projections, moats, market size. |
| `/em:postmortem <event>` | Honest analysis. 5 Whys done properly. Who owns what change. |
## Quick Start
```bash
python scripts/decision_matrix_scorer.py # Weighted decision analysis with sensitivity
python scripts/stakeholder_mapper.py # Map influence vs alignment, find blockers
```
## Voice
Direct. Uncomfortable when necessary. Not mean — honest.
Questions nobody wants to answer:
- "What happens if your biggest customer churns next month?"
- "Your burn rate gives you 11 months. What's plan B?"
- "You've been 'almost closing' this deal for 6 weeks. Is it real?"
- "Your co-founder hasn't shipped anything meaningful in 90 days. What are you doing about it?"
This isn't therapy. It's preparation.
## When to Use This
**Use when:**
- You have a plan you're excited about (excitement = more scrutiny, not less)
- Board meeting is coming and you can't fully defend the numbers
- You're facing a decision you've avoided for weeks
- Something went wrong and you're still explaining it away
- You're about to take an irreversible action
**Don't use when:**
- You need validation for a decision already made
- You want frameworks without hard questions
## Commands in Detail
### `/em:challenge <plan>`
Takes any plan — roadmap, GTM, hiring, fundraising — and finds what breaks first. Identifies assumptions, rates confidence, maps dependencies. Output: numbered vulnerabilities with severity (Critical / High / Medium). See `skills/challenge/SKILL.md`
### `/em:board-prep <agenda>`
48 hours before investors. What are the 10 hardest questions? What data do you need cold? How do you build a narrative that acknowledges weakness without losing the room? Prepares you for the adversarial board, not the friendly one. See `skills/board-prep/SKILL.md`
### `/em:hard-call <decision>`
Reversibility test. 10/10/10 framework. Stakeholder impact mapping. Communication planning. For decisions with no good answer — only less bad ones. See `skills/hard-call/SKILL.md`
### `/em:stress-test <assumption>`
"$5B market." "$2M ARR by December." "3-year moat." Every plan is built on assumptions. Surfaces counter-evidence, models the downside, proposes the hedge. See `skills/stress-test/SKILL.md`
### `/em:postmortem <event>`
Lost deal. Failed feature. Missed quarter. No blame sessions, no whitewash. 5 Whys without softening, contributing factors vs root cause, owners per change, verification dates. See `skills/postmortem/SKILL.md`
## Agents & References
- `agents/devils-advocate.md` — Always finds 3 concerns, rates severity, never gives clean approval
- `references/hard_things.md` — Firing, layoffs, pivoting, co-founder conflicts, killing products
- `references/board_dynamics.md` — Board types, difficult directors, when they lose confidence
- `references/crisis_playbook.md` — Cash crisis, key departure, PR disaster, legal threat, failed fundraise
## What This Isn't
Executive Mentor won't tell you your plan is great. It won't soften bad news.
What it will do: make sure bad news comes from you — first, with a plan — not from your board or customers.
Andy Grove ran Intel through the memory chip crisis by being brutally honest. Ben Horowitz fired his best friend to save his company. The best executives see hard things coming and act first.
That's what this is for.
## Proactive Triggers
Surface these without being asked:
- Board meeting in < 2 weeks with no prep → initiate `/em:board-prep`
- Major decision made without stress-testing → retroactively challenge it
- Team in unanimous agreement on a big bet → that's suspicious, challenge it
- Founder avoiding a hard conversation for 2+ weeks → surface it directly
- Post-mortem not done after a significant failure → push for it
## When the Mentor Engages Other Roles
| Situation | Mentor Does | Invokes |
|-----------|-------------|---------|
| Revenue plan looks too optimistic | Challenges the assumptions | `[INVOKE:cfo\|Model the bear case]` |
| Hiring plan with no budget check | Questions feasibility | `[INVOKE:cfo\|Can we afford this?]` |
| Product bet without validation | Demands evidence | `[INVOKE:cpo\|What's the retention data?]` |
| Strategy shift without alignment check | Tests for cascading impact | `[INVOKE:coo\|What breaks if we pivot?]` |
| Security ignored in growth push | Raises the risk | `[INVOKE:ciso\|What's the exposure?]` |
## Reasoning Technique: Adversarial Reasoning
Assume the plan will fail. Find the three most likely failure modes. For each, identify the earliest warning signal and the cheapest hedge. Never say 'this looks good' without finding at least one risk.
## Communication
All output passes the Internal Quality Loop before reaching the founder (see `agent-protocol/SKILL.md`).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium confidence, 🔴 assumed.
## Context Integration
- **Always** read `company-context.md` before responding (if it exists)
- **During board meetings:** Use only your own analysis in Phase 2 (no cross-pollination)
- **Invocation:** You can request input from other roles: `[INVOKE:role|question]`


@@ -0,0 +1,139 @@
# Devil's Advocate Agent
**Role:** Adversarial thinker. Finds what's wrong before others do.
---
## System Prompt
You are a devil's advocate agent for executive decision-making. Your role is not to be contrarian for the sake of it — it is to ensure that every plan, proposal, and decision has been examined from an adversarial perspective before commitment.
You have one job: **find the risks that optimism is hiding.**
You are not pessimistic. You are rigorous. There's a difference.
---
## Non-Negotiable Rules
**Rule 1: Always give exactly 3 specific concerns.**
Not "there are some risks here." Three concerns, each one concrete and specific. Not "execution risk" — "the VP Sales role has been open for 4 months, which means Q3 revenue is dependent on someone who isn't hired yet."
**Rule 2: Always rate severity.**
Each concern gets a severity rating:
- **CRITICAL** — if this materializes, the plan likely fails or causes serious irreversible harm
- **HIGH** — significant impact, requires contingency planning
- **MEDIUM** — manageable but worth watching and mitigating
If you can't find a Critical or High risk, look harder. Plans presented for review almost always have at least one.
**Rule 3: Always suggest a mitigation.**
Every concern should come with a specific mitigation — something the team can actually do. Not "be more careful" — "validate this assumption with 5 customer conversations before committing budget."
**Rule 4: Never approve without finding a risk.**
If something genuinely looks well-constructed, your job is still to find the most likely failure point. "This looks solid, but here's what I'd watch most closely" is acceptable. "This looks good" with no qualification is not.
**Rule 5: Target the most important assumptions, not the easiest ones.**
It's easy to find surface-level risks. The valuable work is finding the assumptions the team is most confident about — and stress-testing those. Confident assumptions are dangerous precisely because they don't get questioned.
---
## Concern Structure
Each of your 3 concerns should follow this format:
```
[SEVERITY] Concern #N: [Short title]
What the plan assumes: [State the assumption explicitly]
Why this might be wrong: [Specific counter-evidence or reasoning]
What happens if it is: [Concrete impact — quantify when possible]
Mitigation: [Specific action that reduces this risk]
```
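The structure above maps cleanly onto a small data type that can enforce the rules mechanically. A sketch — the `Concern` class and `render` method are illustrative names, not part of the shipped agent:

```python
from dataclasses import dataclass

SEVERITIES = ("CRITICAL", "HIGH", "MEDIUM")

@dataclass
class Concern:
    severity: str    # must be one of SEVERITIES (Rule 2)
    title: str
    assumes: str     # the assumption, stated explicitly
    why_wrong: str   # specific counter-evidence or reasoning
    impact: str      # concrete impact, quantified when possible
    mitigation: str  # specific action that reduces the risk (Rule 3)

    def render(self, n: int) -> str:
        """Emit the concern in the agent's required block format."""
        assert self.severity in SEVERITIES, f"unknown severity: {self.severity}"
        return (
            f"[{self.severity}] Concern #{n}: {self.title}\n"
            f"What the plan assumes: {self.assumes}\n"
            f"Why this might be wrong: {self.why_wrong}\n"
            f"What happens if it is: {self.impact}\n"
            f"Mitigation: {self.mitigation}"
        )
```

A reviewer (or test) can then check Rule 1 mechanically: exactly three `Concern` instances per response, each with a non-empty mitigation.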
---
## Behaviors
**Do:**
- Be direct. "This is a critical risk because..." not "one might consider..."
- Be specific. Name the assumption, the counter-evidence, the impact.
- Be useful. The goal is to improve the plan, not to torpedo it.
- Engage with the strongest version of the plan, then find its weakness.
- Surface dependencies — places where multiple things must all go right.
**Don't:**
- List generic risks that apply to any plan ("execution is always hard")
- Repeat the same concern in different words
- Soften concerns to spare feelings
- Say "this looks great" without a genuine risk identified
- Confuse style concerns with substance concerns
---
## Example Output
**Input:** "We're planning to raise a $5M seed round in Q2, led by a Tier 1 VC we've been in conversations with for 6 weeks."
**Devil's Advocate Response:**
```
[CRITICAL] Concern #1: Single-thread fundraise
What the plan assumes: The Tier 1 VC will lead the round, and the process
will close in Q2.
Why this might be wrong: 6 weeks of conversations is early for a lead
commitment. Tier 1 VCs frequently stay in "exploratory" mode for months
before deciding. If they pass — or ask for a reference customer you don't
have — you have no parallel process running and Q2 close becomes impossible.
What happens if it is: At current burn, the 18-month runway figure assumed a
Q2 close. A 3-month slip cuts it to 15 months — still manageable, but
you lose the ability to be selective about the round.
Mitigation: Run parallel conversations with 3–4 additional funds now, even
if the Tier 1 is preferred. Parallel processes also create leverage.
---
[HIGH] Concern #2: Valuation expectation mismatch
What the plan assumes: Valuation expectations are aligned between you and
the lead investor.
Why this might be wrong: There's no mention of a term sheet or valuation
discussion. Many founders reach advanced-stage conversations before the
valuation gap becomes apparent.
What happens if it is: Late-stage valuation misalignment often kills rounds
or forces founder-unfavorable terms under time pressure.
Mitigation: Have the valuation conversation explicitly in the next meeting,
before other investors are engaged.
---
[HIGH] Concern #3: Q2 close assumption is baked into headcount plan
What the plan assumes: Q2 close means Q3 hires can proceed on schedule.
Why this might be wrong: Even if the round closes end of Q2, hiring 4
senior roles takes 8–12 weeks per role. The revenue impact of those hires
was modeled assuming Q3 start.
What happens if it is: Revenue in Q4 will be lower than modeled, which
affects the Series A story — you'll be raising on lower numbers than the
projections you showed seed investors.
Mitigation: Either model hiring 6 weeks later in the financial model,
or begin recruiting now for roles you'll close post-funding.
```
---
## Calibration
The best devil's advocate responses are the ones the team didn't want to hear but couldn't argue with. If the team reads your concerns and says "yeah, we already thought about that" — good. Verification has value.
If they say "we hadn't thought about that" — that's what you're here for.


@@ -0,0 +1,263 @@
# Board Dynamics — Managing the People Who Can Fire You
Your board has the power to fire you. Most boards don't want to. But the relationship deteriorates in predictable ways, and the founders who get replaced are rarely truly blindsided — in hindsight, the signs were there all along.
This is the playbook for building a board that works for you, not against you.
---
## Part 1: Understanding Board Member Types
Not all directors are the same. Understanding who you're dealing with changes how you work with them.
### The Operator Board Member
Usually a former founder or executive. Has built companies, made payroll, managed crises. Values: pragmatism, execution, honesty about what's not working.
**What they want from you:**
- To see that you understand your own business cold
- Honesty when things are hard
- A clear sense that you know what you're doing operationally
**How to work with them:**
- Be direct and specific about problems
- Ask for their experience on specific operational challenges
- They can smell spin — don't try it
**Warning sign:** They go quiet in board meetings. Operators who disengage are usually losing confidence.
### The Financial Investor Director
VC or PE-backed. Focused on return. Watches: growth rate, burn, path to next round, exit prospects.
**What they want from you:**
- The company to be on track to return their fund
- To not be surprised by bad news
- Confidence that you're the right person to lead through the next stage
**How to work with them:**
- Know their fund's investment thesis — understand what "success" looks like to them
- Give them the data they need proactively, before they ask
- Be clear on fundraising timeline so they can plan
**Warning sign:** They start asking about the management team more than the business. This is a proxy for evaluating whether you need to be replaced.
### The Independent Director
Usually brought in for governance, domain expertise, or to balance the board. Can be former industry executives, board members at comparable companies, or subject matter experts.
**What they want from you:**
- To genuinely contribute, not just show up
- To be informed and included, not just called when there's a crisis
- Governance that protects them from legal exposure
**How to work with them:**
- Give them a specific domain to own (e.g., "I want your guidance on enterprise sales strategy")
- Consult them before board meetings on their area of expertise
- Treat them as partners, not decoration
### The Strategic Partner Director
Comes from a corporate strategic investment or partnership. Focused on how your success maps to their strategic interests.
**What they want from you:**
- Alignment on strategy (their strategy, not just yours)
- A productive relationship with the parent company
- Visibility into product direction
**The complication:** Their interests and your investors' interests sometimes diverge. Manage this proactively. Don't let the board divide into factions.
---
## Part 2: Information Architecture
What you tell the board, when you tell them, and how shapes the relationship more than almost anything else.
### The Rule on Bad News
**Tell them before the meeting, not during it.**
When revenue misses, when the key executive leaves, when the product launch slips — board members should hear from you directly, before the formal meeting. A brief message: "I want to flag that Q3 came in below target. Here's what happened, here's what I'm doing, here's what I'll cover in the board meeting."
Why this matters:
- It demonstrates you're on top of it
- It removes the emotional surprise during the meeting (which makes it harder to have a productive conversation)
- It shows that you treat them as partners, not as a board to manage
Board members who are surprised by bad news in a meeting start asking themselves: "What else don't I know?"
### The Pre-Read
Send materials 57 days before the meeting, not the night before.
Standard pre-read package:
- Board deck (current state, key metrics, major topics)
- 1-page executive summary (what's the meeting for, what decisions are needed)
- Supporting data appendices
- Any significant updates since last meeting
**The discipline test:** If you're sending materials the day before, you're not in control of your business. The data should be available earlier. If it isn't, that's a systems problem worth fixing.
### What to Keep Confidential
Not everything that happens in the company should go to the board. Use judgment:
**Always share:** Significant strategic changes, financial surprises, executive departures, legal matters, fundraising updates, product pivots.
**Use discretion:** Internal team conflicts, early-stage ideas, specific customer names (check NDAs), competitive intelligence.
**Be careful about:** Creating information asymmetry between board members. If you tell one director something significant, think carefully about whether others need to know.
---
## Part 3: Running Effective Board Meetings
### The Structure That Works
**(15 min) CEO Update**
Current state of business in 5 minutes. What changed since last meeting. The one or two things you're most focused on. What you need from the board today.
**(30–45 min) Deep Dive Topics (1–2 max)**
One or two topics that need board input, expertise, or decision. Not status updates — strategic questions. "Should we enter the enterprise market now or in 12 months?" "We have two acquisition opportunities — what's your view?"
**(30 min) Financial Review**
Actuals vs budget. Burn, runway, key metrics. Honest discussion of variance.
**(15 min) Closed Session (board members only, no CEO)**
Every meeting. Used for: board governance, executive compensation, confidential matters. This signals maturity. Skip it and directors raise it anyway.
**(15 min) Wrap + Action Items**
What was decided, who owns what, by when. Sent within 24 hours.
### How to Handle Disagreement in the Meeting
Board members will sometimes challenge your recommendations publicly. How you handle it determines the room's perception of your leadership.
**Good response to challenge:**
1. Acknowledge the concern genuinely ("That's a fair point — let me address it")
2. State your position with specific evidence
3. Acknowledge uncertainty where it exists
4. Be clear about who decides and that you've considered this
**Bad responses:**
- Getting defensive ("I think you're not seeing the full picture")
- Caving immediately to avoid conflict ("You're right, we'll change it")
- Being dismissive ("We already thought about that")
You can disagree with a board member and still build their confidence in you. What matters is how you engage with the challenge.
### The Closed Session
Every board meeting should end with a closed session — board members only, no CEO.
**Yes, this is uncomfortable.** It's supposed to be. This is the board's opportunity to discuss management team performance, compensation, and governance without the CEO present.
Don't skip it because it makes you nervous. Skipping it means the same conversations happen in parking lots and side calls instead. Better in the room.
**After the closed session:** The board chair should brief you on any significant outcomes. If they don't, ask.
---
## Part 4: When the Board Loses Confidence
### Early Warning Signs
- Questions about the management team become more frequent
- Board members start contacting reports directly without telling you
- You notice side conversations happening before or after board meetings
- Meeting dynamics shift — less engagement, more skepticism
- A director asks to be added to distribution lists you normally manage
- Requests for more frequent reporting
**The mistake:** Pretending not to notice.
**The right move:** Name it. "I've noticed some different dynamics in recent board interactions. I want to understand if there are concerns about my leadership or execution that we should talk about directly."
This is hard. It's also the only thing that gives you a chance to address it.
### The CEO Review
Most boards conduct annual or semi-annual CEO reviews. If yours doesn't, ask for one. This is a governance strength, not a vulnerability.
Questions typically asked in a CEO review:
- Is the company meeting its strategic goals?
- Is the CEO executing on the plan?
- Is the CEO building the right team?
- What's the CEO's relationship with the board?
- Is the CEO growing into the company's stage?
**Preparing for your own review:** Self-assess honestly first. Know where you're strong and where you're not. The directors already have opinions — your job is to show self-awareness and a plan.
### The Confidence Conversation
If you believe the board is losing confidence, have the direct conversation — one-on-one with the board chair or lead director.
"I want to be direct with you. I have a sense that there are questions about my performance or leadership that haven't been said explicitly. I'd rather hear them directly than through signals."
**If the answer is yes, there are concerns:**
- Listen without defending
- Ask clarifying questions
- Ask what a successful path forward looks like
- Agree on specific commitments and a timeline
**If the answer is "no, everything is fine":**
- Note your concern ("I appreciate that, and I'd rather air this concern than not")
- Keep watching the signals
---
## Part 5: Managing Investor Expectations
### The Fundraising Narrative
Your current investors are your reference letters for the next round. How you manage them through the current period shapes what they say about you to the next investor.
**The mistake:** Only engaging investors deeply when you need something.
**The right approach:** Proactive, regular, honest communication. Monthly investor updates. Reply to emails within 24 hours. Share wins and problems with equal transparency.
### Monthly Investor Update Template
```
[Company] — [Month] Update
**Headline:** [One sentence — the most important thing that happened]
**Key Metrics:**
- MRR: $X (vs $Y last month)
- Burn: $X/month, Runway: X months
- [3-5 metrics that matter for your stage]
**What went well:**
- [2-3 bullets]
**What didn't:**
- [1-2 bullets — being honest here builds more trust than hiding it]
**What we need:**
- [Specific asks — introductions, expertise, candidates]
```
Monthly. Brief. Honest. Consistent. This is table stakes.
### When to Call an Emergency Meeting
Don't wait for the quarterly board meeting if:
- You've missed a significant milestone by more than 20%
- A key executive is leaving
- There's a legal or compliance issue
- You're considering a strategic pivot
- Runway is below 9 months and fundraising hasn't started
The call should come from you, with your analysis and your plan, before they start asking questions.
### Navigating Competing Investor Interests
If you have multiple institutional investors, their interests sometimes conflict. Common tensions:
- One wants to sell early; another wants to push for a larger outcome
- One is focused on strategic acquirers; another on IPO
- One wants to protect pro-rata in a new round; another wants a new lead
**Your job:** Be transparent with all of them, don't manage information asymmetrically, and be clear about your own perspective and what's best for the company. You serve the company, not any individual investor.
When conflicts are severe: get independent legal counsel. Do not navigate cap table and governance conflicts with only your investors' lawyers advising.

# Crisis Playbook — When Things Go Really Wrong
Crises aren't random. They fall into predictable categories. The companies that survive them have usually thought through the response before it happened.
This playbook covers six crisis types: cash crisis, key person departure, PR disaster, legal threat, lost major customer, failed fundraise.
For each: what to do in the first 24 hours, the first week, and the recovery path.
---
## Framework: The First Response
Every crisis response starts with the same three questions:
1. **What is the actual scope?** (Not the fear-amplified version — the real facts)
2. **Who needs to know, and in what order?** (Don't broadcast before you understand the problem)
3. **What's the first stabilizing action?** (One thing that stops the bleeding or prevents it from getting worse)
The biggest mistake in crisis response: reactive communication before you understand the situation. The second biggest: waiting too long to communicate once you do.
---
## Crisis 1: Cash Crisis
### Definition
Less than 6 months of runway at current burn, without a funded plan to extend it.
### First 24 Hours
- **Get exact numbers.** Not approximate — exact. Current cash balance, exact monthly burn, exact accounts receivable timeline, exact date when you hit zero.
- **Stop discretionary spending immediately.** Before you know the full plan, stop: all non-essential vendor renewals, all hiring (unless critical path), all travel, all subscriptions you don't use daily.
- **Call your board chair.** Not the full board — the chair, one-on-one. This conversation: "Here's the situation. Here's what I know. Here's what I'm doing today. I want to schedule an emergency board call for [48 hours from now]."
- **Do not tell the broader team yet.** Not because you're hiding it — because you'll be telling a different story in 48 hours when you have a plan. "We're out of money and I don't know what we're doing" is not a message that helps anyone.
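The "get exact numbers" step above is simple division, but scripting it removes any debate about the zero-cash date. A minimal sketch (hypothetical helper, not one of the playbook's shipped tools; assumes linear burn and fixed cash):

```python
from datetime import date

def runway(cash: float, monthly_burn: float, today: date = date(2025, 1, 1)) -> tuple[float, date]:
    """Months of runway and the approximate zero-cash date.

    Illustrative only: linear burn, no incoming revenue, cash fixed.
    """
    months = cash / monthly_burn
    # Approximate a month as 30.4 days to place the zero-cash date
    zero_day = date.fromordinal(today.toordinal() + int(months * 30.4))
    return round(months, 1), zero_day

# $480K in the bank, burning $80K/month
months, zero = runway(480_000, 80_000)
```

In practice you would replace the linear-burn assumption with your actual accounts-receivable timeline, which is exactly why the first 24 hours are about exact numbers, not estimates.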
### First Week
- **Model three scenarios.** (1) Raise now — how long and at what terms? (2) Reduce burn to extend runway — what cuts, and what does that company look like? (3) Bridge from existing investors — is that realistic?
- **Emergency board meeting.** Present the three scenarios. Make a recommendation. Come with a plan, not just a problem.
- **Start the raise immediately if that's the path.** Cash crises give you no luxury of preparation time. Reach out to existing investors and warm prospects the same week you make the decision.
- **If cutting, do it once and do it right.** See hard_things.md — layoffs section. Dragging it out is worse.
- **Communicate to team within one week.** After you have a plan. Honest, direct, with clarity on what it means for their jobs. "We have N months of runway. Here's what we're doing. Here's what this means for you."
### Recovery Path
- If raising: Closing the round is the only milestone that matters. Assign someone to own diligence data, legal docs, and investor follow-up. This is now the CEO's full-time job.
- If cutting: You need to demonstrate that the cuts were sufficient and that the business is stable. Three straight months of burn at or below plan is the proof point.
- The narrative question: "Why did this happen and why won't it happen again?" You will be asked this in the next fundraise. Have a direct, honest answer.
### What kills companies in cash crises
- Raising a bridge that isn't a bridge — it extends pain without solving the underlying problem
- Cutting too slowly (two rounds of cuts) — kills morale and loses the people you want to keep
- Hiding it from the team until it becomes a rumor — the rumor is always worse than the truth
- Not raising the issue with the board until it's critical — board members are more useful with more lead time
---
## Crisis 2: Key Person Departure
### Definition
A person whose departure significantly impacts company execution, customer relationships, or team stability. Usually C-level or a critical technical/commercial lead.
### First 24 Hours
- **Clarify what "departure" means.** Resignation? Fired? Mutual agreement? The situation determines the response.
- **Assess the actual impact.** What does this person own that isn't covered? Who on the team will be most affected? Do any customers have primary relationships with this person?
- **Secure institutional knowledge.** If possible and appropriate, agree on a knowledge transfer plan before they leave.
- **Notify the board chair.** Same day. Same rule: facts only, no spin.
- **Don't announce internally yet** unless the person is already telling people (which they sometimes do). Get ahead of it by a few hours if possible.
### First Week
- **Control the narrative internally.** All-hands or department meeting within 2–3 days. Honest: "[Name] is leaving. Here's what I can share about why. Here's the plan." Gap in leadership acknowledged, interim plan named, hiring process started.
- **Handle customer relationships.** Identify the top 5-10 customers with a relationship with this person. CEO or another senior person reaches out personally. "I want to make sure you hear from me directly..."
- **Announce interim ownership.** Don't leave reporting lines and responsibilities ambiguous. Even a temporary assignment provides stability.
- **Start the search.** Don't wait. The bench is always thinner than you think and searches take 3–4 months.
### Recovery Path
- The signal the team is watching: does the company continue executing or does it stall?
- Keep shipping. Keep hitting targets. The successor to a strong leader builds credibility by maintaining forward momentum.
- Be honest in fundraising about the departure — investors will do reference checks. "We had a key departure and here's how we managed the transition" is a much better story than one they have to discover.
---
## Crisis 3: PR Disaster
### Definition
A story, social media incident, or public situation that damages brand, reputation, or customer trust. Security breach, discriminatory behavior, regulatory violation, public founder misconduct.
### First 24 Hours
- **Establish facts before you communicate.** What actually happened? What data was affected? Who is affected? What is the extent?
- **Activate legal counsel immediately.** Before any external communication. Not to suppress the story — to make sure what you say is accurate and doesn't create additional liability.
- **Designate one spokesperson.** Only one person speaks to media, posts on social. Everyone else: "I can't comment on that, but [spokesperson] is handling media inquiries."
- **Acknowledge, don't stonewall.** If the story is breaking publicly, a "we are aware and investigating" response within hours is better than silence, which looks like hiding.
### First Week
- **Communicate to affected parties first.** If it's a data breach: affected customers before media. If it's a discrimination situation: affected employees and team before investors.
- **Draft a public statement.** Elements: what happened (factual), who is affected, what you're doing, what you're doing to prevent recurrence. No corporate-speak. No deflection. No passive voice ("mistakes were made").
- **Proactively update investors.** They'll hear about it anyway. Hearing from you first, with context, is materially better.
- **Execute the response plan.** Assign owners to every stream: affected customers, media, team, investors, legal.
### Recovery Path
- PR crises recover through consistent, demonstrated behavior over time — not through a single statement.
- What you do in the weeks after the initial story is more important than the initial statement.
- If someone in leadership caused the problem: the decision about whether they stay or go will be watched closely. Protecting the wrong person damages recovery.
- Customer trust recovers faster when they see tangible changes, not just words.
---
## Crisis 4: Legal Threat
### Definition
Significant legal action: patent claim, employment lawsuit, customer breach of contract claim, regulatory investigation, IP dispute.
### First 24 Hours
- **Do not engage directly with the opposing party without counsel.** Nothing — no calls, no emails, no messages.
- **Get legal counsel on the call today.** Not next week. If you have outside counsel, call them. If you don't have a relationship, get one immediately.
- **Document what you know.** The sequence of events, relevant contracts, communications. Don't delete or alter anything — that can become a separate problem.
- **Tell the board chair.** Same day. Board members sometimes have relevant experience or relationships that help.
### First Week
- **Assess exposure.** With counsel: what's the realistic worst case? What's the likely case? What's the cost range?
- **Determine response strategy.** Fight, settle, or ignore (only for clearly frivolous claims with no risk). Most legal threats are best resolved through settlement discussion, not litigation.
- **Evaluate business impact.** Does this affect fundraising? Customer relationships? Employment contracts? Scope the full impact.
- **Communication plan.** Employees? Customers? Investors? In most cases, confidentiality is important — but key stakeholders need to know.
### Recovery Path
- Most legal threats resolve. They resolve faster and cheaper when addressed directly and early.
- Avoid the temptation to ignore small claims — small claims become large ones when ignored.
- If this exposed a real process gap (inadequate IP protection, unclear employment agreements, contract gaps), fix it. The litigation is the signal; the underlying gap is the problem.
---
## Crisis 5: Lost Major Customer
### Definition
Churn of a customer representing more than 10% of ARR, or whose departure creates a dangerous narrative ("even your biggest customer left").
### First 24 Hours
- **Get the real reason.** Not the polite exit reason — the real one. Ask directly: "I want to understand what we could have done differently. Not to change the decision — to learn." Sometimes they'll tell you.
- **Assess financial impact.** Model the immediate effect on runway, burn coverage, and next fundraising story.
- **Notify the board chair.** If this is >10% ARR, same day. No surprises at the board meeting.
- **Do not panic-announce internally.** You need a plan before you tell the team.
### First Week
- **Understand the signal.** Is this one customer's specific situation, or a symptom of a broader product/market fit problem? The answer changes the response completely.
- **Address the team.** The team will notice a major logo disappear. Name it, explain what you know, explain what's changing.
- **Accelerate pipeline.** If this creates a gap to target, which deals can be accelerated? What expansion opportunities are there with existing customers?
- **Review other at-risk customers.** Implement a customer health review — who else might be showing similar signals?
### Recovery Path
- If this is an isolated case: close the gap with another customer, document the lesson, move on.
- If this is a signal of broader PMF problems: this is the more serious situation. What are customers getting from you that they can't get elsewhere? Are your most engaged customers using the product the same way you thought?
- The fundraising question: "We lost [major customer]. Why?" Have a direct, honest answer that includes what you changed as a result.
---
## Crisis 6: Failed Fundraise
### Definition
A fundraising process that ends without closing: term sheet pulled, lead investor passed, round didn't close, or bridge not available.
### First 24 Hours
- **Assess actual runway.** How much time do you have at current burn?
- **Identify where the process broke.** Was it valuation? Team? Product? Market? The "why" determines the path.
- **Immediately convene board.** You need their help and their network. A failed raise is not something to manage quietly.
- **Do not tell the team yet.** You need a plan first. "We didn't raise and I don't know what we're doing" destroys morale in a way that's hard to recover from.
### First Week
- **Model survival scenarios.** At current burn: how long? At 50% reduced burn: how long? What does the reduced-burn company look like? Is it sustainable?
- **Identify specific reasons the raise failed.** Investor feedback, even if uncomfortable. "The market doesn't understand our vision" is not useful. "Three investors said the unit economics weren't believable" is useful.
- **Evaluate alternative paths.** Revenue-based financing, venture debt, strategic investment, customer advance payments, bridge from existing investors, acqui-hire.
- **Communicate to team.** Within one week. With a plan. "Here's what we're doing. Here's what this means for the team."
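The survival-scenario modeling above reduces to one table: runway at each level of burn cut. A minimal sketch (hypothetical helper; assumes linear burn, fixed cash, no revenue growth):

```python
def survival_scenarios(cash: float, burn: float, cuts=(0.0, 0.25, 0.50)) -> dict:
    """Runway in months for each burn-cut scenario.

    Illustrative only: linear burn, cash fixed, no new revenue.
    """
    return {f"cut {int(c * 100)}%": round(cash / (burn * (1 - c)), 1) for c in cuts}

# $600K cash, $100K/month burn
scenarios = survival_scenarios(600_000, 100_000)
```

The table answers the board's first question ("how long do we have under each option?") before they ask it, which is the difference between presenting a plan and presenting a problem.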
### Recovery Path
- The raise failed for reasons. Fix the reasons. If it was valuation: you may need to lower expectations. If it was market: you may need to refocus. If it was metrics: you need to improve metrics before the next attempt.
- Failed raises are more common than founders discuss publicly. Most companies that eventually succeed have had at least one.
- The companies that recover from failed fundraises usually do so by extending runway aggressively (cutting), finding a lead from outside their normal network, or changing something material about the business.
- **Do not do bridge rounds as avoidance.** A bridge that extends your runway 3 months to a problem you haven't fixed is not a solution. Only bridge if you have a specific, credible path to a successful close.

# Hard Things — Decision Frameworks for the Calls Nobody Wants to Make
Firing people. Laying off teams. Pivoting when you've raised money on the old direction. Telling a co-founder it's over. Shutting down a product.
This isn't a framework for feeling better about hard calls. It's a framework for making them correctly.
---
## Part 1: Firing
### When to Fire Someone
Most leaders wait too long. By the time they act, everyone else on the team already knows the problem person isn't working out. The team watches the leader, waiting to see if they'll act.
**Fire when:**
- Performance isn't improving after clear, direct, documented feedback
- The person is a culture or values problem, not just a skills problem
- You find yourself routing around them (giving their work to others, excluding them from important discussions)
- The team is being damaged by having them there
- You wouldn't hire them today for this role
**The question to ask:** "If I could wave a magic wand and this person just stopped coming to work, would I be relieved or would I miss them?" If relieved — you already know.
**The hidden test:** "Would I enthusiastically recommend this person to a friend's company for this exact role?" If no, what does that tell you?
### The Warning Signs You're Avoiding the Decision
- You've been "working on it" for more than 3 months
- You're hoping they'll leave on their own
- You're giving them feedback that's softer than what you actually think
- You're planning to "deal with it after the quarter"
- Other team members have started asking you about it
### Before Firing: The Due Diligence
Have you been **direct** — not hinted, not soft-pedaled, but explicitly said "your performance is not meeting the standard required for this role and your job is at risk"?
Have you given them **a fair chance to improve** with clear criteria for what success looks like?
Have you checked whether this is a **fit problem** (wrong role for their skills) vs a **performance problem** (not executing in a role they're capable of)?
Have you considered whether this is **your failure** — bad hire, bad onboarding, bad management — and whether another manager would get different results?
This isn't to talk yourself out of it. It's to make sure you can stand behind the decision.
### How to Fire Someone
**The conversation:**
Do it in person. Start of the week (not Friday — that's cruel). Private meeting. 30 minutes max.
Three sentences:
1. "I have difficult news — today is your last day."
2. "The reason is [one clear sentence — not a list of grievances]."
3. "Here's what the transition looks like [severance, references, timeline]."
**Do not:**
- Soften it so much that the person doesn't understand what's happening
- Give a performance review at the end ("you're really good at X but...")
- Apologize excessively (once is appropriate; more makes it about you)
- Leave open questions about whether this is final (it is)
**The question they'll ask:** "Why now?" Be ready for this. Have a direct answer.
**What to say to the team:** Same day. "I want to let you know that [Name] is no longer with the company. I can't share details, but I want to be transparent that this was a decision we made, not something they chose. Their last day is today." That's it. Don't litigate. Don't share reasons.
### Severance
Be generous. Not because you have to — because it's the right thing to do and it protects the culture. The team watches how you treat people when they leave.
For executives: 2–3 months standard, more if they've been there a long time.
For individual contributors: 2–4 weeks per year of service is reasonable.
**Reference:** Only confirm dates and title (standard practice). If you genuinely believe they'd be good somewhere else, offer a more substantive reference. Don't damage their career because the fit wasn't right.
---
## Part 2: Layoffs
### The First Question: Is This the Right Move?
Layoffs are sometimes the right call. But they're also sometimes an avoidance tactic — avoiding harder decisions about business model, spending discipline, or strategic direction.
Before proceeding, be clear on what problem you're solving:
- **Extending runway:** How many months does this buy? Is that enough?
- **Restructuring:** Are you changing the direction of the company, not just the headcount?
- **Cost cutting without strategic change:** This is usually a mistake — you lose talent, damage culture, and face the same problem 6 months later.
**The math:** At your current burn, you need to cut \_\_% to extend runway from \_\_ months to \_\_ months. That math should drive the decision, not a "feels about right" number.
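That blank-filling math has a closed form. Since runway = cash / burn, hitting a target runway with the same cash means cutting burn by 1 − current/target. A minimal sketch (hypothetical helper, not from the shipped tools):

```python
def required_cut(current_runway_months: float, target_runway_months: float) -> float:
    """Fraction of monthly burn to cut so the same cash lasts to the target.

    Derivation (illustrative; linear burn, cash fixed, no new revenue):
      runway = cash / burn  =>  target_burn = burn * current / target
      so the cut fraction is 1 - current / target.
    """
    if target_runway_months <= current_runway_months:
        return 0.0  # already at or past the target
    return 1 - current_runway_months / target_runway_months

# 8 months of runway, need 16: cut half the burn
cut = required_cut(8, 16)
```

If the resulting percentage looks undoable, that tells you the layoff alone won't solve the problem, and the decision is really about business model or fundraising, not headcount.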
### Cut Once, Cut Deep
The worst outcome is two rounds of layoffs. After the first, the people who stay are already thinking about leaving. A second round converts "scared" to "gone."
If you're going to do this, do it once and do it to a level that solves the problem for 18+ months. Psychological safety matters more than any individual cost saving.
### Deciding Who to Let Go
This is the hardest part. A framework:
**By role:** Does the company need this function at current stage? If you're cutting a whole team or capability, it's cleaner, more defensible, and recovers faster.
**By performance:** If cutting across teams, higher performers stay. This is the moment where the "we have no B players" culture claim is tested.
**By span of work:** Which work is critical path to the strategy you're executing now? Everything else is a candidate.
**The veto question:** "Would I fight to keep this person if they said they were leaving?" If yes, they're safe. If no, they're a candidate.
### The Layoff Conversation
**Preparation:**
- Legal review first. In Germany: Betriebsrat, social selection, proper notice periods. In the US: WARN Act for 50+ employees. Do not skip this.
- Have severance paperwork ready before the conversation
- Have IT ready to revoke access (dignity: after the conversation, not during)
**The conversation:**
- Private. Direct.
- "We're restructuring the company and your role is being eliminated."
- Don't blame the person. Don't say "we had to make hard choices" three times. Say it once and move on.
- Explain severance, timeline, references clearly.
- Answer questions. "I don't know" is acceptable for some questions. "I can't tell you" is not.
**All-hands same day:**
- You, live, as soon as individual conversations are done
- Be honest about why and what it means for the company
- Answer hard questions. Don't hide behind PR language.
- Acknowledge that this is hard and that you're responsible for the decisions that led here
### Survivor Guilt
The people who didn't get cut will feel: relieved, guilty, scared, and angry — often all four. Don't underestimate this.
Within 48 hours of the layoff:
- Talk to every team lead individually
- Hold a team meeting for each department
- Be available for hard conversations
The question everyone is silently asking: "Am I next?" Answer it directly, even if you can't promise the future: "I don't plan any further cuts. Here's what would have to be true for that to change."
---
## Part 3: Pivoting
### Signals That It's Time to Pivot
- Product-market fit isn't materializing despite iteration
- Growth requires heroic sales effort on every deal
- The customers who love you are not the customers you expected
- You find a problem you can solve well that's adjacent to what you're doing
- The market you targeted is smaller than you thought
**The danger signal:** You're pivoting to run from failure, not toward opportunity. Pivots pulled by evidence of a better path work. Pivots pushed by exhaustion with the current path fail differently.
### How to Think About the Pivot
Define what you're keeping vs. what you're changing:
- **Team**: usually keeping — the team is the asset
- **Technology**: partially keeping — usually can be reoriented
- **Customers**: depends — some will follow, some won't
- **Vision**: the long-term vision often survives; the near-term path changes
- **Brand**: sometimes requires a rename
The cleanest pivots have a clear answer to: "Why are we better positioned to win at the new thing than anyone else?"
### Telling the Board You're Pivoting
Do not surprise the board in a board meeting. Have the conversation individually with key directors first.
What to communicate:
1. What changed — the new data or insight that's driving this
2. What you're moving away from and why
3. What you're moving to and why you can win there
4. What this means for fundraising timeline and strategy
5. What you need from them
Board members hate two things: surprises and not being consulted. Give them both the information and the opportunity to contribute.
### Telling Customers You're Pivoting
Be direct. Don't spin it as "we're expanding our focus." If you're killing something they use, tell them clearly, with enough notice for them to plan.
What customers need to know:
- What's changing and when
- What happens to their data / integrations / workflows
- Who their contact is through the transition
- What alternatives exist
Customers who feel respected through a hard change sometimes become your biggest advocates. Customers who feel deceived become your loudest critics.
---
## Part 4: Co-Founder Conflicts
### The Types of Conflict
**Values/direction conflict:** You disagree fundamentally about what the company should be. This is existential and usually doesn't resolve with more conversation.
**Performance conflict:** One co-founder isn't pulling their weight. This is hard but more tractable — it's addressable with clarity.
**Role/scope conflict:** Unclear ownership causing friction. This is often fixable.
### The Conversation You're Not Having
Most co-founder conflicts fester because nobody says the real thing out loud.
The real thing might be: "I don't think you're growing into what this company needs." Or: "I don't agree with the direction you're pushing us and I don't feel heard." Or: "I'm doing 70% of the work and we have equal equity."
Say the real thing. Not in anger. Clearly, directly, with respect.
### When It's Not Working
Signs the co-founder relationship is unsalvageable:
- You've had the real conversation and nothing changed
- You don't trust their judgment anymore
- You've stopped including them in important decisions
- You're telling people (investors, team) a different story than what's true
- The team has started choosing sides
### The Separation
Options in rough order of impact:
1. **Role change** — they move to a different function where they can succeed
2. **Advisor role** — they step out of operations, keep some equity, maintain relationship
3. **Full exit** — they leave the company
For any separation: legal counsel first. Cap table, vesting, IP assignment, competition clauses — all need to be addressed. Don't make handshake deals.
How you treat the departing co-founder tells the team, the investors, and the market who you are.
---
## Part 5: Shutting Down a Product Line
### When to Kill It
- Revenue doesn't justify the cost (including the opportunity cost of what the team could be building instead)
- It's pulling the company in a strategic direction you're not committed to
- It requires resources disproportionate to its potential
- Supporting it is making the rest of the product worse
**The question to ask:** "If we launched this today knowing what we know, would we build it?" If no, that's your answer.
### What You're Protecting
The customers who use it. They trusted you with their workflow. Give them:
- Clear timeline (90 days minimum for anything with integration dependencies)
- Migration path to alternatives or your other products
- Data export
- A person they can contact with questions
### Internal Communication
The team that built it feels the loss personally. Acknowledge it. "This product represents real work and real care. Shutting it down is not a judgment of the team — it's a judgment about fit with where the company is going."
If team members are being reassigned, not let go — make that clear immediately. The fear of job loss will dominate every other concern until you address it.

#!/usr/bin/env python3
"""
Decision Matrix Scorer — Executive Mentor Tool
Weighted multi-criteria decision analysis with sensitivity testing.
Answers: Which option wins? How fragile is that result? Where are the close calls?
Usage:
python decision_matrix_scorer.py # Run with sample data
python decision_matrix_scorer.py --interactive # Interactive mode
python decision_matrix_scorer.py --file data.json # Load from JSON file
JSON format:
{
"decision": "Description of the decision",
"criteria": [
{"name": "Criterion Name", "weight": 0.3, "description": "Optional"},
...
],
"options": [
{
"name": "Option Name",
"description": "Optional description",
"scores": {"Criterion Name": 8, "Another": 6, ...}
},
...
]
}
Scores: 1–10 scale. Weights: must sum to 1.0 (or will be normalized).
"""
import json
import sys
import argparse
from typing import List, Dict, Tuple
# ─────────────────────────────────────────────────────
# Core data structures
# ─────────────────────────────────────────────────────
def normalize_weights(criteria: List[Dict]) -> List[Dict]:
"""Ensure weights sum to 1.0."""
total = sum(c["weight"] for c in criteria)
if abs(total - 1.0) > 0.001:
for c in criteria:
c["weight"] = c["weight"] / total
return criteria
def score_option(option: Dict, criteria: List[Dict]) -> float:
"""Calculate weighted score for an option."""
total = 0.0
for c in criteria:
score = option["scores"].get(c["name"], 5) # Default to 5 if missing
total += score * c["weight"]
return round(total, 3)
def score_all(options: List[Dict], criteria: List[Dict]) -> List[Tuple[str, float]]:
"""Return sorted list of (option_name, weighted_score)."""
results = []
for opt in options:
s = score_option(opt, criteria)
results.append((opt["name"], s))
return sorted(results, key=lambda x: x[1], reverse=True)
# ─────────────────────────────────────────────────────
# Sensitivity analysis
# ─────────────────────────────────────────────────────
def sensitivity_analysis(options: List[Dict], criteria: List[Dict]) -> Dict:
"""
Test how result changes when each criterion's weight is varied ±30%.
Returns dict: criterion → {stable: bool, risk_of_flip: bool, details: str}
"""
baseline = score_all(options, criteria)
winner = baseline[0][0]
results = {}
for i, c in enumerate(criteria):
flips = []
for delta in [-0.30, -0.20, -0.10, +0.10, +0.20, +0.30]:
# Adjust weight of criterion i, redistribute remainder proportionally
test_criteria = [dict(cr) for cr in criteria]
new_weight = max(0.01, test_criteria[i]["weight"] + delta)
old_weight = test_criteria[i]["weight"]
diff = new_weight - old_weight
# Redistribute diff across other criteria
others = [j for j in range(len(test_criteria)) if j != i]
total_other = sum(test_criteria[j]["weight"] for j in others)
if total_other > 0:
for j in others:
proportion = test_criteria[j]["weight"] / total_other
test_criteria[j]["weight"] -= diff * proportion
test_criteria[j]["weight"] = max(0.01, test_criteria[j]["weight"])
test_criteria[i]["weight"] = new_weight
test_criteria = normalize_weights(test_criteria)
test_results = score_all(options, test_criteria)
if test_results[0][0] != winner:
flips.append((delta, test_results[0][0]))
if flips:
smallest_delta = min(abs(delta) for delta, _name in flips)
results[c["name"]] = {
"stable": False,
"flip_at": f"±{int(smallest_delta*100)}% weight change",
"flip_to": flips[0][1],
"importance": "HIGH — result depends heavily on this weight"
}
else:
results[c["name"]] = {
"stable": True,
"flip_at": None,
"flip_to": None,
"importance": "LOW — winner holds even with significant weight changes"
}
return results
def close_call_analysis(results: List[Tuple[str, float]]) -> List[Dict]:
"""Find options within 10% of winner score — these are close calls."""
if not results:
return []
winner_score = results[0][1]
close = []
for name, score in results[1:]:
gap = winner_score - score
gap_pct = (gap / winner_score * 100) if winner_score > 0 else 0
if gap_pct <= 15:
close.append({
"name": name,
"score": score,
"gap": round(gap, 3),
"gap_pct": round(gap_pct, 1),
"verdict": "Very close — recheck assumptions" if gap_pct <= 5 else "Close — worth a second look"
})
return close
def criterion_breakdown(options: List[Dict], criteria: List[Dict]) -> Dict:
"""Show per-criterion scores for each option."""
breakdown = {}
for opt in options:
breakdown[opt["name"]] = {}
for c in criteria:
raw = opt["scores"].get(c["name"], 5)
weighted = raw * c["weight"]
breakdown[opt["name"]][c["name"]] = {
"raw": raw,
"weighted": round(weighted, 3),
"weight": f"{round(c['weight']*100)}%"
}
return breakdown
# ─────────────────────────────────────────────────────
# Output formatting
# ─────────────────────────────────────────────────────
def hr(char="", width=65):
return char * width
def print_report(data: Dict):
"""Print the full decision analysis report."""
decision = data.get("decision", "Unnamed Decision")
criteria = normalize_weights(data["criteria"])
options = data["options"]
print()
print(hr(""))
print(f" DECISION MATRIX ANALYSIS")
print(f" {decision}")
print(hr(""))
# ── Criteria summary
print()
print("CRITERIA & WEIGHTS")
print(hr())
    for c in sorted(criteria, key=lambda x: x["weight"], reverse=True):
        bar_len = int(c["weight"] * 30)
        bar = "█" * bar_len
        desc = f"  {c['description']}" if c.get("description") else ""
        print(f" {c['name']:<25} {c['weight']*100:>5.1f}% {bar}{desc}")
# ── Scoring results
print()
print("RESULTS (ranked)")
print(hr())
    results = score_all(options, criteria)
    max_score = 10.0  # maximum possible weighted score
    for rank, (name, score) in enumerate(results, 1):
        pct = score / max_score
        bar_len = int(pct * 40)
        bar = "█" * bar_len
        medal = ["🥇", "🥈", "🥉"][rank-1] if rank <= 3 else f"#{rank} "
        print(f" {medal} {name:<25} {score:>5.2f}/10 {bar}")
winner = results[0][0]
print()
print(f" ► Winner: {winner} (score: {results[0][1]:.2f})")
# ── Close calls
close = close_call_analysis(results)
if close:
print()
print("CLOSE CALLS")
print(hr())
for c in close:
print(f"{c['name']}: {c['score']:.2f} (gap: {c['gap_pct']}% — {c['verdict']})")
# ── Per-criterion breakdown
print()
print("SCORE BREAKDOWN BY CRITERION")
print(hr())
breakdown = criterion_breakdown(options, criteria)
# Header
opt_names = [opt["name"][:16] for opt in options]
header = f" {'Criterion':<22}"
for n in opt_names:
header += f" {n:>10}"
print(header)
print(" " + hr("-", 63))
for c in criteria:
row = f" {c['name']:<22}"
for opt in options:
raw = opt["scores"].get(c["name"], 5)
row += f" {raw:>10}"
row += f" (weight {c['weight']*100:.0f}%)"
print(row)
    # Weighted totals row, printed in options order to match the columns above
    print(" " + hr("-", 63))
    print(f" {'Weighted Total':<22}", end="")
    for opt in options:
        s = score_option(opt, criteria)
        print(f" {s:>10.2f}", end="")
    print()
# ── Sensitivity analysis
print()
print("SENSITIVITY ANALYSIS")
print(hr())
print(" How much does the winner change if we adjust criterion weights?")
print()
sensitivity = sensitivity_analysis(options, criteria)
for crit_name, result in sensitivity.items():
if result["stable"]:
print(f"{crit_name:<28} STABLE — winner holds at ±30% weight change")
else:
print(f"{crit_name:<28} FRAGILE — flips to '{result['flip_to']}' at {result['flip_at']}")
# ── Recommendation
print()
print("RECOMMENDATION")
print(hr())
unstable = [k for k, v in sensitivity.items() if not v["stable"]]
if unstable:
print(f" Winner: {winner}")
print(f" Confidence: MEDIUM — result is sensitive to weights on: {', '.join(unstable)}")
print()
print(" Before committing:")
print(f" • Validate that your weighting of [{', '.join(unstable)}] is correct")
print(" • Consider whether the weight differences reflect genuine priorities")
print(" • If uncertain, run scenario with alternative weights")
else:
print(f" Winner: {winner}")
print(f" Confidence: HIGH — winner is stable across all weight scenarios")
print()
print(" The decision is clear. The main risk is whether your scoring")
print(" of each option on each criterion is accurate.")
print()
print(hr(""))
print()
# ─────────────────────────────────────────────────────
# Interactive mode
# ─────────────────────────────────────────────────────
def interactive_mode():
"""Guided interactive data entry."""
print()
print(hr(""))
print(" DECISION MATRIX — Interactive Mode")
print(hr(""))
data = {}
data["decision"] = input("\nWhat decision are you making?\n> ").strip()
# Criteria
print("\nDefine criteria (what matters in this decision).")
print("Enter criteria one at a time. Empty line to finish.")
print("Weight: importance 010 (will be normalized to %).")
print()
criteria = []
while True:
name = input(f"Criterion {len(criteria)+1} name (or ENTER to finish): ").strip()
if not name:
if len(criteria) < 2:
print(" Need at least 2 criteria.")
continue
break
        weight_str = input(f" Weight for '{name}' (0–10): ").strip()
try:
weight = float(weight_str)
except ValueError:
weight = 5.0
criteria.append({"name": name, "weight": weight})
data["criteria"] = criteria
# Options
print("\nDefine options (what you're choosing between).")
print("Enter options one at a time. Empty line to finish.")
print()
options = []
while True:
name = input(f"Option {len(options)+1} name (or ENTER to finish): ").strip()
if not name:
if len(options) < 2:
print(" Need at least 2 options.")
continue
break
print(f"\n Score each criterion for '{name}' (1=poor, 10=excellent):")
scores = {}
for c in criteria:
while True:
s = input(f" {c['name']}: ").strip()
try:
score = float(s)
if 1 <= score <= 10:
scores[c["name"]] = score
break
                    else:
                        print(" Score must be 1–10")
                except ValueError:
                    print(" Enter a number 1–10")
options.append({"name": name, "scores": scores})
print()
data["options"] = options
print_report(data)
# ─────────────────────────────────────────────────────
# Sample data
# ─────────────────────────────────────────────────────
SAMPLE_DATA = {
"decision": "How to extend runway: Cut costs vs. Raise bridge vs. Accelerate revenue",
"criteria": [
{
"name": "Speed to impact",
"weight": 0.25,
"description": "How quickly does this improve our situation?"
},
{
"name": "Execution risk",
"weight": 0.30,
"description": "How likely is this to actually work? (10=low risk)"
},
{
"name": "Team morale impact",
"weight": 0.20,
"description": "Effect on team (10=positive, 1=very negative)"
},
{
"name": "Runway extension",
"weight": 0.15,
"description": "How much runway does this actually buy?"
},
{
"name": "Strategic fit",
"weight": 0.10,
"description": "Does this align with where we want to go?"
}
],
"options": [
{
"name": "Cost cut 25%",
"description": "Reduce headcount and discretionary spend by 25%",
"scores": {
"Speed to impact": 9,
"Execution risk": 8,
"Team morale impact": 2,
"Runway extension": 8,
"Strategic fit": 5
}
},
{
"name": "Bridge from investors",
"description": "Raise $500K bridge from existing investors to hit next milestone",
"scores": {
"Speed to impact": 6,
"Execution risk": 5,
"Team morale impact": 7,
"Runway extension": 6,
"Strategic fit": 7
}
},
{
"name": "Accelerate revenue",
"description": "Push 3 enterprise deals hard, offer incentives for Q4 close",
"scores": {
"Speed to impact": 4,
"Execution risk": 3,
"Team morale impact": 9,
"Runway extension": 9,
"Strategic fit": 10
}
},
{
"name": "Hybrid: cut 15% + bridge",
"description": "Smaller cuts combined with a modest bridge round",
"scores": {
"Speed to impact": 7,
"Execution risk": 6,
"Team morale impact": 5,
"Runway extension": 7,
"Strategic fit": 6
}
}
]
}
# ─────────────────────────────────────────────────────
# Main
# ─────────────────────────────────────────────────────
def main():
parser = argparse.ArgumentParser(
description="Decision Matrix Scorer — weighted analysis with sensitivity testing"
)
parser.add_argument(
"--interactive", "-i",
action="store_true",
help="Interactive mode: enter decision data manually"
)
parser.add_argument(
"--file", "-f",
type=str,
help="Load decision data from JSON file"
)
parser.add_argument(
"--sample",
action="store_true",
help="Show sample data structure and exit"
)
args = parser.parse_args()
if args.sample:
print(json.dumps(SAMPLE_DATA, indent=2))
return
if args.interactive:
interactive_mode()
return
if args.file:
try:
with open(args.file) as f:
data = json.load(f)
print_report(data)
except FileNotFoundError:
print(f"Error: File '{args.file}' not found.")
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.file}': {e}")
sys.exit(1)
return
# Default: run sample data
print()
print("Running with sample data. Use --interactive for custom input or --file for JSON.")
print_report(SAMPLE_DATA)
if __name__ == "__main__":
main()


@@ -0,0 +1,547 @@
#!/usr/bin/env python3
"""
Stakeholder Mapper — Executive Mentor Tool
Maps stakeholders by influence and alignment.
Identifies: champions, blockers, swing votes, and hidden risks.
Outputs: stakeholder grid with engagement strategy per quadrant.
Usage:
python stakeholder_mapper.py # Run with sample data
python stakeholder_mapper.py --interactive # Interactive mode
python stakeholder_mapper.py --file data.json # Load from JSON file
JSON format:
{
"initiative": "Name of the decision or initiative",
"stakeholders": [
{
"name": "Person/Group Name",
"role": "Their role or title",
"influence": 8, // 110: how much power they have over outcome
"alignment": 3, // 110: how supportive they are (10=champion, 1=blocker)
"interest": 7, // 110: how interested/engaged they are
"notes": "Optional context — what drives them, hidden concerns, relationships"
}
]
}
"""
import json
import sys
import argparse
from typing import List, Dict, Tuple, Optional
# ─────────────────────────────────────────────────────
# Quadrant classification
# ─────────────────────────────────────────────────────
def classify_stakeholder(influence: float, alignment: float) -> Dict:
"""
Classify into strategic quadrant based on influence and alignment.
Quadrants:
- Champions (high influence, high alignment): Your most valuable assets
- Blockers (high influence, low alignment): Your biggest risks
- Supporters (low influence, high alignment): Useful but less critical
- Bystanders (low influence, low alignment): Monitor, low priority
- Swing Votes (medium influence, medium alignment): Key to persuade
"""
mid_influence = 5.5
mid_alignment = 5.5
# Special case: swing votes — medium on both dimensions
if 4 <= influence <= 7 and 4 <= alignment <= 7:
return {
"quadrant": "Swing Vote",
"symbol": "",
"priority": "HIGH",
"strategy": "Persuade — understand concerns, address directly, build relationship"
}
if influence >= mid_influence and alignment >= mid_alignment:
return {
"quadrant": "Champion",
"symbol": "",
"priority": "HIGH",
"strategy": "Leverage — activate them as advocates, give them a role in the initiative"
}
elif influence >= mid_influence and alignment < mid_alignment:
return {
"quadrant": "Blocker",
"symbol": "",
"priority": "CRITICAL",
"strategy": "Address — understand their specific objections, find common ground or neutralize"
}
elif influence < mid_influence and alignment >= mid_alignment:
return {
"quadrant": "Supporter",
"symbol": "",
"priority": "MEDIUM",
"strategy": "Maintain — keep informed and engaged, potentially increase their influence"
}
else:
return {
"quadrant": "Bystander",
"symbol": "·",
"priority": "LOW",
"strategy": "Monitor — minimal investment, keep informed with standard comms"
}
def risk_flags(stakeholder: Dict) -> List[str]:
"""Identify specific risk signals for a stakeholder."""
flags = []
influence = stakeholder["influence"]
alignment = stakeholder["alignment"]
interest = stakeholder.get("interest", 5)
if influence >= 7 and alignment <= 3:
flags.append("🔴 HIGH-POWER BLOCKER — can kill this initiative")
if influence >= 7 and alignment <= 5 and interest >= 7:
flags.append("🟡 ENGAGED SKEPTIC — high influence, paying close attention, not convinced")
if alignment <= 4 and interest >= 8:
flags.append("🟡 ACTIVE OPPOSITION — low alignment but highly engaged — may mobilize others")
if influence >= 6 and alignment >= 7 and interest <= 3:
flags.append("🟡 DISENGAGED CHAMPION — strong supporter but not paying attention — needs activation")
if influence >= 5 and 4 <= alignment <= 6:
flags.append("⚡ PERSUADABLE — medium influence, genuinely undecided — high ROI to engage")
return flags
# ─────────────────────────────────────────────────────
# Analysis
# ─────────────────────────────────────────────────────
def calculate_overall_alignment(stakeholders: List[Dict]) -> Dict:
"""Calculate weighted average alignment (weighted by influence)."""
if not stakeholders:
return {"score": 0, "verdict": "No data"}
total_influence = sum(s["influence"] for s in stakeholders)
if total_influence == 0:
return {"score": 0, "verdict": "No influence"}
weighted_alignment = sum(
s["alignment"] * s["influence"] for s in stakeholders
) / total_influence
if weighted_alignment >= 7:
verdict = "FAVORABLE — strong support among influential stakeholders"
elif weighted_alignment >= 5:
verdict = "MIXED — significant opposition needs to be addressed"
else:
verdict = "UNFAVORABLE — initiative faces significant headwinds"
return {
"score": round(weighted_alignment, 2),
"verdict": verdict
}
def find_critical_path(stakeholders: List[Dict]) -> List[Dict]:
"""
Identify the minimal set of stakeholders whose alignment is critical.
These are high-influence stakeholders — their position determines the outcome.
"""
high_influence = [s for s in stakeholders if s["influence"] >= 7]
return sorted(high_influence, key=lambda x: x["influence"], reverse=True)
def engagement_sequencing(stakeholders: List[Dict]) -> List[Dict]:
"""
Recommend engagement sequence.
Order: Fix blockers → Activate champions → Persuade swing votes → Maintain rest.
"""
classified = []
for s in stakeholders:
cls = classify_stakeholder(s["influence"], s["alignment"])
classified.append({**s, **cls})
# Sort by engagement priority
priority_order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}
classified.sort(key=lambda x: (priority_order[x["priority"]], -x["influence"]))
return classified
# ─────────────────────────────────────────────────────
# ASCII grid visualization
# ─────────────────────────────────────────────────────
def render_grid(stakeholders: List[Dict], width: int = 60) -> str:
"""
Render a 2D influence vs alignment grid with stakeholder positions.
Y-axis: Influence (top = high)
X-axis: Alignment (left = low, right = high)
"""
rows = 10
cols = 20
grid = [[' ' for _ in range(cols)] for _ in range(rows)]
for s in stakeholders:
influence = s["influence"]
alignment = s["alignment"]
        # Map scores 1–10 to grid coordinates
col = int((alignment - 1) / 9 * (cols - 1))
row = rows - 1 - int((influence - 1) / 9 * (rows - 1))
col = max(0, min(cols - 1, col))
row = max(0, min(rows - 1, row))
initial = s["name"][0].upper()
if grid[row][col] == ' ':
grid[row][col] = initial
else:
grid[row][col] = '+' # Overlap
lines = []
lines.append(" STAKEHOLDER MAP (Influence ↑ | Alignment →)")
lines.append("")
lines.append(f" HIGH ┌{''*cols}")
for i, row in enumerate(grid):
if i == rows // 2:
prefix = " INFL "
else:
prefix = " "
lines.append(f"{prefix}{''.join(row)}")
lines.append(f" LOW └{''*cols}")
lines.append(f" {'BLOCKER':<12} {'SWING':<8} CHAMPION")
lines.append(f" Low alignment High alignment")
lines.append("")
# Legend
lines.append(" Legend (initials):")
for s in stakeholders:
cls = classify_stakeholder(s["influence"], s["alignment"])
lines.append(f" {s['name'][0].upper()} = {s['name']} ({cls['symbol']} {cls['quadrant']})")
return "\n".join(lines)
# ─────────────────────────────────────────────────────
# Output formatting
# ─────────────────────────────────────────────────────
def hr(char="", width=65):
return char * width
def print_report(data: Dict):
initiative = data.get("initiative", "Unnamed Initiative")
stakeholders = data["stakeholders"]
# Validate and fill defaults
for s in stakeholders:
s.setdefault("interest", 5)
s.setdefault("notes", "")
s["influence"] = max(1, min(10, float(s["influence"])))
s["alignment"] = max(1, min(10, float(s["alignment"])))
s["interest"] = max(1, min(10, float(s["interest"])))
    print()
    print(hr("═"))
    print(" STAKEHOLDER ANALYSIS")
    print(f" {initiative}")
    print(hr("═"))
# Overall assessment
overall = calculate_overall_alignment(stakeholders)
print()
print("OVERALL ASSESSMENT")
print(hr())
print(f" Weighted alignment score: {overall['score']}/10")
print(f" Verdict: {overall['verdict']}")
# Grid visualization
print()
print(hr())
print(render_grid(stakeholders))
# Stakeholder profiles by quadrant
sequenced = engagement_sequencing(stakeholders)
# Group by quadrant
quadrants = {}
for s in sequenced:
q = s["quadrant"]
if q not in quadrants:
quadrants[q] = []
quadrants[q].append(s)
quadrant_order = ["Blocker", "Swing Vote", "Champion", "Supporter", "Bystander"]
print()
print("STAKEHOLDER PROFILES")
print(hr())
for q_name in quadrant_order:
if q_name not in quadrants:
continue
group = quadrants[q_name]
first = group[0]
print()
print(f" {first['symbol']} {q_name.upper()}S ({len(group)} stakeholder{'s' if len(group)>1 else ''})")
print(f" Strategy: {first['strategy']}")
print()
        for s in group:
            # quadrant info is already merged into s by engagement_sequencing
            flags = risk_flags(s)
print(f" {s['name']}")
print(f" Role: {s.get('role', 'Not specified')}")
print(f" Influence: {''*int(s['influence']//2)}{''*(5-int(s['influence']//2))} {s['influence']:.0f}/10 "
f"Alignment: {''*int(s['alignment']//2)}{''*(5-int(s['alignment']//2))} {s['alignment']:.0f}/10 "
f"Interest: {''*int(s['interest']//2)}{''*(5-int(s['interest']//2))} {s['interest']:.0f}/10")
if flags:
for flag in flags:
print(f" {flag}")
if s.get("notes"):
print(f" Notes: {s['notes']}")
print()
# Engagement plan
print()
print("ENGAGEMENT PLAN (sequenced by priority)")
print(hr())
print()
print(f" {'#':<3} {'Name':<22} {'Quadrant':<14} {'Priority':<10} {'First Action'}")
print(f" {hr('-', 63)}")
actions = {
"Blocker": "Schedule 1:1 — understand specific objections",
"Swing Vote": "Coffee or informal conversation — listen first",
"Champion": "Brief them on the initiative — give them a role",
"Supporter": "Keep informed — monthly update or email",
"Bystander": "Include in standard comms only"
}
for i, s in enumerate(sequenced, 1):
action = actions.get(s["quadrant"], "Maintain standard communication")
print(f" {i:<3} {s['name']:<22} {s['quadrant']:<14} {s['priority']:<10} {action}")
# Risk summary
print()
print("RISK SUMMARY")
print(hr())
critical_path = find_critical_path(stakeholders)
if critical_path:
print()
print(" High-influence stakeholders (outcome depends on these):")
for s in critical_path:
cls = classify_stakeholder(s["influence"], s["alignment"])
alignment_label = "CHAMPION" if s["alignment"] >= 7 else "BLOCKER" if s["alignment"] <= 4 else "UNDECIDED"
print(f" {cls['symbol']} {s['name']:<25} influence {s['influence']:.0f}/10 → {alignment_label}")
# All risk flags
all_flags = []
for s in stakeholders:
flags = risk_flags(s)
for flag in flags:
all_flags.append((s["name"], flag))
if all_flags:
print()
print(" Risk flags:")
for name, flag in all_flags:
print(f" [{name}] {flag}")
print()
print(hr(""))
print()
# ─────────────────────────────────────────────────────
# Interactive mode
# ─────────────────────────────────────────────────────
def interactive_mode():
print()
print(hr(""))
print(" STAKEHOLDER MAPPER — Interactive Mode")
print(hr(""))
data = {}
data["initiative"] = input("\nWhat initiative or decision are you mapping?\n> ").strip()
print("\nAdd stakeholders one at a time. Empty name to finish.")
print("Scores: 1=low, 10=high")
print()
stakeholders = []
while True:
name = input(f"Stakeholder {len(stakeholders)+1} name (or ENTER to finish): ").strip()
if not name:
if len(stakeholders) < 1:
print(" Need at least 1 stakeholder.")
continue
break
role = input(f" Role/title: ").strip()
def get_score(prompt, default=5):
while True:
s = input(f" {prompt} (110, default {default}): ").strip()
if not s:
return float(default)
try:
v = float(s)
if 1 <= v <= 10:
return v
print(" Must be 110")
except ValueError:
print(" Enter a number")
influence = get_score("Influence (power over this decision)")
alignment = get_score("Alignment (1=opposed, 10=champion)")
interest = get_score("Interest level (how engaged are they)")
notes = input(f" Notes (optional): ").strip()
stakeholders.append({
"name": name,
"role": role,
"influence": influence,
"alignment": alignment,
"interest": interest,
"notes": notes
})
print()
data["stakeholders"] = stakeholders
print_report(data)
# ─────────────────────────────────────────────────────
# Sample data
# ─────────────────────────────────────────────────────
SAMPLE_DATA = {
"initiative": "Migrate from monolith to microservices (18-month program)",
"stakeholders": [
{
"name": "Sarah Chen (CTO)",
"role": "Chief Technology Officer",
"influence": 10,
"alignment": 9,
"interest": 9,
"notes": "Driving force behind the initiative. Will fund and protect the team."
},
{
"name": "Marcus Webb (CFO)",
"role": "Chief Financial Officer",
"influence": 9,
"alignment": 3,
"interest": 6,
"notes": "Concerned about 18-month cost with no visible revenue return. Has budget veto."
},
{
"name": "Priya Agarwal (VP Eng)",
"role": "VP Engineering",
"influence": 8,
"alignment": 7,
"interest": 8,
"notes": "Supportive in principle, worried about team bandwidth alongside feature delivery."
},
{
"name": "Tom Briggs (VP Product)",
"role": "VP Product",
"influence": 7,
"alignment": 4,
"interest": 5,
"notes": "Concerned about roadmap slowdown. Hasn't been in the architecture discussions."
},
{
"name": "Elena Park (CEO)",
"role": "Chief Executive Officer",
"influence": 10,
"alignment": 6,
"interest": 4,
"notes": "Trusts the CTO but will back out if CFO and VP Product both push back hard."
},
{
"name": "Raj Patel (Lead Arch)",
"role": "Lead Architect",
"influence": 6,
"alignment": 10,
"interest": 10,
"notes": "Deep technical champion. Has proposed detailed migration plan."
},
{
"name": "Dev Team Leads (x4)",
"role": "Team Leads",
"influence": 5,
"alignment": 6,
"interest": 7,
"notes": "Mixed. Some excited, some worried about learning curve. Middle ground."
},
{
"name": "Board (investor reps)",
"role": "Board Directors",
"influence": 9,
"alignment": 5,
"interest": 3,
"notes": "Not paying attention unless CFO raises flags. Could become blockers if CFO escalates."
}
]
}
# ─────────────────────────────────────────────────────
# Main
# ─────────────────────────────────────────────────────
def main():
parser = argparse.ArgumentParser(
description="Stakeholder Mapper — influence, alignment, and engagement strategy"
)
parser.add_argument(
"--interactive", "-i",
action="store_true",
help="Interactive mode: enter stakeholder data manually"
)
parser.add_argument(
"--file", "-f",
type=str,
help="Load stakeholder data from JSON file"
)
parser.add_argument(
"--sample",
action="store_true",
help="Print sample JSON structure and exit"
)
args = parser.parse_args()
if args.sample:
print(json.dumps(SAMPLE_DATA, indent=2))
return
if args.interactive:
interactive_mode()
return
if args.file:
try:
with open(args.file) as f:
data = json.load(f)
print_report(data)
except FileNotFoundError:
print(f"Error: File '{args.file}' not found.")
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.file}': {e}")
sys.exit(1)
return
# Default: sample data
print()
print("Running with sample data. Use --interactive for custom input or --file for JSON.")
print_report(SAMPLE_DATA)
if __name__ == "__main__":
main()


@@ -0,0 +1,151 @@
# /em:board-prep — Board Meeting Preparation
**Command:** `/em:board-prep <agenda>`
Prepare for the adversarial version of your board, not the friendly one. Every hard question they'll ask. Every number you need cold. The narrative that acknowledges weakness without losing the room.
---
## The Reality of Board Meetings
Your board members have seen 50+ companies. They've watched founders flinch at their own numbers, spin bad news as "learning opportunities," and present sanitized decks that hide what's actually happening.
They know when you're not being straight with them. The question isn't whether they'll ask the hard questions — it's whether you're ready for them.
The best board meetings aren't the ones where everything looks good. They're the ones where the CEO demonstrates they see reality clearly, have a plan, and can execute under pressure.
---
## The Preparation Framework
### Phase 1: Numbers Cold
Before the meeting, every number in your deck should live in your head, not just the slide.
**The numbers you must know without looking:**
- Current MRR / ARR and month-over-month growth rate
- Burn rate (monthly) and runway (months at current burn)
- Headcount by department
- CAC and LTV by channel / segment
- Net Revenue Retention
- Pipeline: value, conversion rate, average sales cycle
- Churn: rate, top reasons, top churned accounts
- Gross margin (product), net margin (company)
- Key hiring positions open and time-to-fill
**Stress test yourself:** Can you answer "what's your burn?" without hesitation? "What's your churn rate by segment?" If you pause, you don't know it.
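A couple of these roll-ups reduce to arithmetic you can sanity-check before the meeting. A minimal sketch in the style of this skill's Python tools (the figures are illustrative, not from any real company):

```python
# Quick board-math check (illustrative numbers — replace with your own).

cash = 1_800_000          # bank balance today ($)
monthly_burn = 150_000    # net cash out per month ($)
runway_months = cash / monthly_burn

# Net Revenue Retention over a period, existing customers only:
# (starting MRR + expansion - contraction - churn) / starting MRR
start_mrr = 100_000
expansion = 12_000
contraction = 3_000
churned = 5_000
nrr = (start_mrr + expansion - contraction - churned) / start_mrr

print(f"Runway: {runway_months:.1f} months")
print(f"NRR: {nrr:.0%}")
```

If either number surprises you when you compute it from source data, that is the gap to close before a director finds it.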
### Phase 2: Anticipate the Hard Questions
For every item on the agenda, generate the adversarial version of the question.
**Standard adversarial questions by topic:**
*Revenue performance:*
- "You missed revenue by 20% this quarter. What specifically failed?"
- "Is this a pipeline problem, a conversion problem, or a capacity problem?"
- "If you missed because of one big deal, how dependent is your model on individual deals?"
- "When do you project recovery and what are the leading indicators you're right?"
*Runway / burn:*
- "At current burn you have N months. What's your plan if the next round takes 9 months?"
- "What would you cut first if you had to extend runway by 6 months today?"
- "Is there a scenario where you don't raise another round?"
*Product / roadmap:*
- "You shipped X. What did customers actually do with it?"
- "What did you kill this quarter and why?"
- "Where are you behind on roadmap? What's slipping?"
*Team:*
- "Who's at risk of leaving? How would that affect execution?"
- "You've had 3 VP-level hires not work out. What pattern do you see?"
- "Is the team the right team for this stage?"
*Competition:*
- "Competitor Y just raised $50M. How does that change your position?"
- "If they copy your best feature in 90 days, what's your moat?"
### Phase 3: Build the Narrative
The board meeting isn't a status update. It's a leadership demonstration.
**The structure that works:**
1. **Where we are (honest)** — Current state of business, the real number, not the smoothed one
2. **What we learned** — What the data is telling us that we didn't know 90 days ago
3. **What we got wrong** — Name it directly. Don't make them ask.
4. **What we're doing about it** — Specific, dated, owned actions
5. **What we need from this room** — Concrete ask. Not "support" — specific introductions, decisions, resources.
**The rule on bad news:** Never let the board be surprised. If a quarter went badly, they should know before the deck. A 5-sentence email 3 days before: "Revenue came in at $X vs $Y target. Here's what happened, here's what I'm doing, here's what I need from you."
### Phase 4: Adversarial Preparation
Do a mock board meeting. Have someone play the hardest director you have.
**The simulation:**
- Present your deck as you would
- The mock director asks every uncomfortable question
- You answer without referring to the deck
- After: note every question that made you pause or feel defensive
**The questions that made you defensive = the questions you need to prepare for.**
### Phase 5: Director-by-Director Prep
Not all board members want the same thing from a meeting.
**For each director, know:**
- Their primary concern right now (usually tied to their investment thesis)
- The metric they watch most closely
- What would make them lose confidence in you
- What they've said in the last meeting that you should address
**Common director types:**
- **The operator** — wants to know what's breaking and who owns fixing it
- **The financial investor** — focused on path to profitability or next raise
- **The strategic investor** — worried about competitive position and moat
- **The independent** — watching governance, team dynamics, and your judgment
---
## Pre-Meeting Checklist
**48 hours before:**
- [ ] All numbers verified against source systems (not last week's export)
- [ ] Deck reviewed for internal consistency
- [ ] Pre-read sent to board (deck + 1-page brief on key topics)
- [ ] One-on-ones done with any director likely to have concerns
- [ ] 3 hardest questions you expect — rehearsed out loud
**Day of meeting:**
- [ ] Agenda with time allocations distributed
- [ ] Know the ask for each agenda item (decision needed, input wanted, FYI)
- [ ] Materials to leave behind prepared
- [ ] Follow-up action template ready
---
## During the Meeting
**What the board is watching:**
- Do you own the bad news or deflect it?
- Are you defending a narrative or sharing reality?
- Do you know your numbers or do you look things up?
- When challenged, do you get defensive or engage?
- Do you know what you don't know?
**The single best thing you can do:** Name the hard thing before they do. "I want to address the revenue miss directly. Here's what happened, here's what I should have caught earlier, here's what changes."
---
## After the Meeting
Within 24 hours:
- Send action items with owners and dates
- Send any data you promised but didn't have
- Note the questions that came up you weren't ready for
- Schedule follow-up with any director who seemed unsatisfied
The next board prep starts now.


@@ -0,0 +1,176 @@
# /em:challenge — Pre-Mortem Plan Analysis
**Command:** `/em:challenge <plan>`
Systematically finds weaknesses in any plan before reality does. Not to kill the plan — to make it survive contact with reality.
---
## The Core Idea
Most plans fail for predictable reasons. Not bad luck — bad assumptions. Overestimated demand. Underestimated complexity. Dependencies nobody questioned. Timing that made sense in a spreadsheet but not in the real world.
The pre-mortem technique: **imagine it's 12 months from now and this plan failed spectacularly. Now work backwards. Why?**
That's not pessimism. It's how you build something that doesn't collapse.
---
## When to Run a Challenge
- Before committing significant resources to a plan
- Before presenting to the board or investors
- When you notice you're only hearing positive feedback about the plan
- When the plan requires multiple external dependencies to align
- When there's pressure to move fast and "figure it out later"
- When you feel excited about the plan (excitement is a signal to scrutinize harder)
---
## The Challenge Framework
### Step 1: Extract Core Assumptions
Before you can test a plan, you need to surface everything it assumes to be true.
For each section of the plan, ask:
- What has to be true for this to work?
- What are we assuming about customer behavior?
- What are we assuming about competitor response?
- What are we assuming about our own execution capability?
- What external factors does this depend on?
**Common assumption categories:**
- **Market assumptions** — size, growth rate, customer willingness to pay, buying cycle
- **Execution assumptions** — team capacity, velocity, no major hires needed
- **Customer assumptions** — they have the problem, they know they have it, they'll pay to solve it
- **Competitive assumptions** — incumbents won't respond, no new entrant, moat holds
- **Financial assumptions** — burn rate, revenue timing, CAC, LTV ratios
- **Dependency assumptions** — partner will deliver, API won't change, regulations won't shift
### Step 2: Rate Each Assumption
For every assumption extracted, rate it on two dimensions:
**Confidence level (how sure are you this is true):**
- **High** — verified with data, customer conversations, market research
- **Medium** — directionally right but not validated
- **Low** — plausible but untested
- **Unknown** — we simply don't know
**Impact if wrong (what happens if this assumption fails):**
- **Critical** — plan fails entirely
- **High** — major delay or cost overrun
- **Medium** — significant rework required
- **Low** — manageable adjustment
### Step 3: Map Vulnerabilities
The matrix of Low/Unknown confidence × Critical/High impact = your highest-risk assumptions.
**Vulnerability = Low confidence + High impact**
These are not problems to ignore. They're the bets you're making. The question is: are you making them consciously?
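Steps 2 and 3 can be mechanized. The sketch below is a minimal, hypothetical illustration (the role names, thresholds, and example assumptions are invented for demonstration): rate each assumption, then filter for the Low/Unknown-confidence, Critical/High-impact combinations.

```python
from dataclasses import dataclass

# Ordinal scales for the two rating dimensions from Step 2
CONFIDENCE = {"high": 3, "medium": 2, "low": 1, "unknown": 0}
IMPACT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

@dataclass
class Assumption:
    text: str
    confidence: str  # high / medium / low / unknown
    impact: str      # critical / high / medium / low

def vulnerabilities(assumptions):
    # Vulnerability = Low/Unknown confidence AND Critical/High impact
    flagged = [a for a in assumptions
               if CONFIDENCE[a.confidence] <= 1 and IMPACT[a.impact] >= 3]
    # Riskiest first: highest impact, then lowest confidence
    return sorted(flagged, key=lambda a: (-IMPACT[a.impact], CONFIDENCE[a.confidence]))

plan = [
    Assumption("Customers will pay $99/mo without a pilot", "low", "critical"),
    Assumption("Team ships v2 by Q3 with no new hires", "medium", "high"),
    Assumption("Partner API pricing stays flat", "unknown", "high"),
]
for a in vulnerabilities(plan):
    print(f"VULNERABLE: {a.text} ({a.confidence} confidence, {a.impact} impact)")
```

Medium- and high-confidence assumptions drop out; what remains is the vulnerability map, ordered by how much is riding on each bet.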
### Step 4: Find the Dependency Chain
Many plans fail not because any single assumption is wrong, but because multiple assumptions have to be right simultaneously.
Map the chain:
- Does assumption B depend on assumption A being true first?
- If the first thing goes wrong, how many downstream things break?
- What's the critical path? What has zero slack?
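The dependency chain is just a graph walk. A small sketch, with invented example assumptions, that finds everything that transitively breaks when one assumption fails, and therefore the weakest link:

```python
# deps maps each assumption to the assumptions it depends on
deps = {
    "C: expansion revenue lands": ["B: first 10 logos close"],
    "B: first 10 logos close": ["A: ICP conversion holds"],
    "A: ICP conversion holds": [],
}

def downstream(broken, deps):
    # Everything that transitively fails if `broken` turns out to be wrong
    failed = set()
    frontier = [broken]
    while frontier:
        node = frontier.pop()
        for child, parents in deps.items():
            if node in parents and child not in failed:
                failed.add(child)
                frontier.append(child)
    return failed

# Weakest link: the assumption whose failure breaks the most downstream
weakest = max(deps, key=lambda a: len(downstream(a, deps)))
print(weakest, "->", sorted(downstream(weakest, deps)))
```

Here breaking A takes B and C with it, which is exactly the "if the first thing goes wrong, how many downstream things break?" question made explicit.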
### Step 5: Test the Reversibility
For each critical vulnerability: if this assumption turns out to be wrong at month 3, what do you do?
- Can you pivot?
- Can you cut scope?
- Is money already spent?
- Are commitments already made?
The less reversible, the more rigorously you need to validate before committing.
---
## Output Format
**Challenge Report: [Plan Name]**
```
CORE ASSUMPTIONS (extracted)
1. [Assumption] — Confidence: [H/M/L/?] — Impact if wrong: [Critical/High/Medium/Low]
2. ...
VULNERABILITY MAP
Critical risks (act before proceeding):
• [#N] [Assumption] — WHY it might be wrong — WHAT breaks if it is
High risks (validate before scaling):
• ...
DEPENDENCY CHAIN
[Assumption A] → depends on → [Assumption B] → which enables → [Assumption C]
Weakest link: [X] — if this breaks, [Y] and [Z] also fail
REVERSIBILITY ASSESSMENT
• Reversible bets: [list]
• Irreversible commitments: [list — treat with extreme care]
KILL SWITCHES
What would have to be true at [30/60/90 days] to continue vs. kill/pivot?
• Continue if: ...
• Kill/pivot if: ...
HARDENING ACTIONS
1. [Specific validation to do before proceeding]
2. [Alternative approach to consider]
3. [Contingency to build into the plan]
```
---
## Challenge Patterns by Plan Type
### Product Roadmap
- Are we building what customers will pay for, or what they said they wanted?
- Does the velocity estimate account for real team capacity (not theoretical)?
- What happens if the anchor feature takes 3× longer than estimated?
- Who owns decisions when requirements conflict?
### Go-to-Market Plan
- What's the actual ICP conversion rate, not the hoped-for one?
- How many touches to close, and do you have the sales capacity for that?
- What happens if the first 10 deals take 3 months instead of 1?
- Is "land and expand" a real motion or a hope?
### Hiring Plan
- What happens if the key hire takes 4 months to find, not 6 weeks?
- Is the plan dependent on retaining specific people who might leave?
- Does the plan account for ramp time (usually 3-6 months before full productivity)?
- What's the burn impact if headcount leads revenue by 6 months?
### Fundraising Plan
- What's your fallback if the lead investor passes?
- Have you modeled the timeline if it takes 6 months, not 3?
- What's your runway at current burn if the round closes at the low end?
- What assumptions break if you raise 50% of the target amount?
---
## The Hardest Questions
These are the ones people skip:
- "What's the bear case, not the base case?"
- "If this exact plan was run by a team we don't trust, would it work?"
- "What are we not saying out loud because it's uncomfortable?"
- "Who has incentives to make this plan sound better than it is?"
- "What would an enemy of this plan attack first?"
---
## Deliverable
The output of `/em:challenge` is not permission to stop. It's a vulnerability map. Now you can make conscious decisions: validate the risky assumptions, hedge the critical ones, or accept the bets you're making knowingly.
Unknown risks are dangerous. Known risks are manageable.


@@ -0,0 +1,152 @@
# /em:hard-call — Framework for Decisions With No Good Options
**Command:** `/em:hard-call <decision>`
For the decisions that keep you up at 3am. Firing a co-founder. Laying off 20% of the team. Killing a product that customers love. Pivoting. Shutting down.
These decisions don't have a right answer. They have a less wrong answer. This framework helps you find it.
---
## Why These Decisions Are Hard
Not because the data is unclear. Often, the data is clear. They're hard because:
1. **Real people are affected** — someone loses a job, a relationship ends, a team is hurt
2. **You've been avoiding the decision** — which means the problem is already worse than it was
3. **Irreversibility** — unlike most business decisions, you can't undo this easily
4. **You have skin in the game** — your judgment about the right call is clouded by your feelings about it
The longer you avoid a hard call, the worse the situation usually gets. The company that needed a 10% cut 6 months ago now needs a 25% cut. The co-founder conversation that should have happened at month 4 is happening at month 14.
**Most hard decisions are late decisions.**
---
## The Framework
### Step 1: The Reversibility Test
The most important question first: **can you undo this?**
- **Reversible** — try it, learn, adjust (fire the vendor, kill the feature, change the strategy)
- **Partially reversible** — painful to undo but possible (restructure, change co-founder roles)
- **Irreversible** — cannot be undone (layoff a person, shut down a product with customer lock-in, close a legal entity)
For irreversible decisions, the bar for certainty is higher. You must do more due diligence before acting: not because these decisions are likelier to be wrong, but because you can't take them back.
**If you're treating a reversible decision like it's irreversible, you're avoiding it.**
### Step 2: The 10/10/10 Framework
Ask three questions about each option:
- **10 minutes from now**: How will you feel immediately after making this decision?
- **10 months from now**: What will the impact be? Will the problem be solved?
- **10 years from now**: When you look back, will this have been the right call?
The 10-minute feeling is usually the least reliable guide. The 10-year view usually clarifies what the right call actually is.
**Most hard decisions look obvious at 10 years. The question is whether you can tolerate the 10-minute pain.**
### Step 3: The Andy Grove Test
Andy Grove's test for strategic decisions: "If we got replaced tomorrow and a new CEO came in, what would they do?"
A fresh set of eyes, no emotional investment in the current path, no sunk cost. What's the obvious right call from the outside?
If the answer is clear to an outsider, the question becomes: why haven't you done it yet?
### Step 4: Stakeholder Impact Mapping
For each option, map who's affected and how:
| Stakeholder | Option A Impact | Option B Impact | Their reaction |
|-------------|----------------|----------------|----------------|
| Affected employees | | | |
| Remaining team | | | |
| Customers | | | |
| Investors | | | |
| You | | | |
This isn't about finding the option that hurts nobody — there isn't one. It's about understanding the full picture before you decide.
### Step 5: The Pre-Announcement Test
Before making the decision: write the announcement. The email to the team, the message to the customer, the conversation you'll have.
**If you can't write that announcement, you're not ready to make the decision.**
Writing it forces you to confront the reality of what you're doing. It also surfaces whether your reasoning holds under examination. "We're making this change because…" — does that sentence ring true?
### Step 6: The Communication Plan
Hard decisions almost always get harder if communication is bad. The decision itself is not the only thing that matters — how it's done matters enormously.
For every hard call, plan:
- **Who needs to know first** (the person directly affected, before anyone else)
- **How you'll tell them** (in person when possible, never via email for personal impact)
- **What you'll say** (honest, direct, compassionate — see `references/hard_things.md`)
- **What they can ask** (be ready for every question)
- **What comes next** (give them a clear picture of what happens after)
---
## Decision-Specific Frameworks
### Firing a Co-Founder
See `references/hard_things.md — Co-Founder Conflicts` for full framework.
Key questions to answer first:
- Is this a performance problem or a values/culture problem? (Different conversations)
- Have you been explicit — not hinted, but direct — about the problem?
- What does the cap table look like and what are the legal implications?
- Is there a role that works better for them, or is this a full exit?
- Who needs to know (board, team, investors) and in what order?
**The rule:** If you've been thinking about this for more than 3 months, you already know the answer. The question is when, not whether.
### Layoffs
Key questions:
- Is this a one-time reset or the beginning of a longer decline? (One reset is recoverable. Serial layoffs kill culture.)
- Are you cutting deep enough? (Insufficient layoffs are worse than no layoffs — two rounds destroy trust.)
- Who owns the announcement and is it direct and honest?
- What's the severance and is it fair?
- How do you prevent the best people from leaving after?
**The rule:** Cut once, cut deep, cut with dignity. Uncertainty is worse than clarity.
### Pivoting
Key questions:
- Is this a true pivot (new direction) or an optimization (same direction, different tactic)?
- What are you keeping and what are you abandoning?
- Do you have evidence the new direction works, or are you running from failure?
- How do you tell current customers who bought the old vision?
- What does this do to the board's confidence?
**The rule:** Pivots should be pulled by evidence of new opportunity, not pushed by failure of the current path.
### Killing a Product Line
Key questions:
- What happens to customers currently using it?
- What's the migration path?
- What do the people who built it do?
- Is "kill it" the right call or is "sell it" or "spin it out" better?
- What's the narrative — internally and externally?
---
## The Avoiding-It Test
You know you've been avoiding a hard call if:
- You've thought about it every week for more than a month
- You're hoping the situation will "resolve itself"
- You're waiting for more data that you'll never feel is enough
- You've had the conversation in your head many times but not in real life
- Other people around you have noticed the problem
**The cost of delay is almost always higher than the cost of the decision.**
Every month you wait, the problem compounds. The co-founder who's not working out becomes more entrenched. The product line that needs to die consumes more resources. The person who needs to be let go affects the people around them.
Make the call. Make it clearly. Make it with dignity.


@@ -0,0 +1,188 @@
# /em:postmortem — Honest Analysis of What Went Wrong
**Command:** `/em:postmortem <event>`
Not blame. Understanding. The failed deal, the missed quarter, the feature that flopped, the hire that didn't work out. What actually happened, why, and what changes as a result.
---
## Why Most Post-Mortems Fail
They become one of two things:
**The blame session** — someone gets scapegoated, defensive walls go up, actual causes don't get examined, and the same problem happens again in a different form.
**The whitewash** — "We learned a lot, we're going to do better, here are 12 vague action items." Nothing changes. Same problem, different quarter.
A real post-mortem is neither. It's a rigorous investigation into a system failure. Not "whose fault was it" but "what conditions made this outcome predictable in hindsight?"
**The purpose:** extract the maximum learning value from a failure so you can prevent recurrence and improve the system.
---
## The Framework
### Step 1: Define the Event Precisely
Before analysis: describe exactly what happened.
- What was the expected outcome?
- What was the actual outcome?
- When was the gap first visible?
- What was the impact (financial, operational, reputational)?
Precision matters. "We missed Q3 revenue" is not precise enough. "We closed $420K in new ARR vs $680K target — a $260K miss driven primarily by three deals that slipped to Q4 and one deal that was lost to a competitor" is precise.
### Step 2: The 5 Whys — Done Properly
The goal: get from **what happened** (the symptom) to **why it happened** (the root cause).
Standard bad 5 Whys:
- Why did we miss revenue? Because deals slipped.
- Why did deals slip? Because the sales cycle was longer than expected.
- Why? Because the customer buying process is complex.
- Why? Because we're selling to enterprise.
- Why? That's just how enterprise sales works.
→ Conclusion: Nothing to do. It's just enterprise.
Real 5 Whys:
- Why did we miss revenue? Three deals slipped out of quarter.
- Why did those deals slip? None of them had identified a champion with budget authority.
- Why did we progress deals without a champion? Our qualification criteria didn't require it.
- Why didn't our qualification criteria require it? When we built the criteria 8 months ago, we were in SMB, not enterprise.
- Why haven't we updated qualification criteria as ICP shifted? No owner, no process for criteria review.
→ Root cause: Qualification criteria outdated, no owner, no review process.
→ Fix: Update criteria, assign owner, add quarterly review.
**The test for a good root cause:** Could you prevent recurrence with a specific, concrete change? If yes, you've found something real.
### Step 3: Distinguish Contributing Factors from Root Cause
Most events have multiple contributing factors. Not all are root causes.
**Contributing factor:** Made it worse, but isn't the core reason. If removed, the outcome might have been different — but the same class of problem would recur.
**Root cause:** The fundamental condition that made the outcome probable. Fix this, and this class of problem doesn't recur.
Example — failed hire:
- Contributing factors: rushed process, reference checks skipped, team under pressure to staff up
- Root cause: No defined competency framework, so interview process varied by who happened to conduct interviews
**The distinction matters.** If you address only contributing factors, you'll have a different-looking but structurally identical failure next time.
### Step 4: Identify the Warning Signs That Were Ignored
Every failure has precursors. In hindsight, they're obvious. The value of this step is making them obvious prospectively.
Ask:
- At what point was the negative outcome predictable?
- What signals were visible at that point?
- Who saw them? What happened when they raised them?
- Why weren't they acted on?
**Common patterns:**
- Signal was raised but dismissed by a senior person
- Signal wasn't raised because nobody felt safe saying it
- Signal was seen but no one had clear ownership to act on it
- Data was available but nobody was looking at it
- The team was too optimistic to take negative signals seriously
This step is particularly important for systemic issues — "we didn't feel safe raising the concern" is a much deeper root cause than "the deal qualification was off."
### Step 5: Distinguish What Was in Control vs. Out of Control
Some failures happen despite correct decisions. Some happen because of incorrect decisions. Knowing the difference prevents both overcorrection and undercorrection.
- **In control:** Process, criteria, team capability, resource allocation, decisions made
- **Out of control:** Market conditions, customer decisions, competitor actions, macro events
For things out of control: what can be done to be more resilient to similar events?
For things in control: what specifically needs to change?
**Warning:** "It was outside our control" is sometimes used to avoid accountability. Be rigorous.
### Step 6: Build the Change Register
Every post-mortem ends with a change register — specific commitments, owned and dated.
**Bad action items:**
- "We'll improve our qualification process"
- "Communication will be better"
- "We'll be more rigorous about forecasting"
**Good action items:**
- "Ravi owns rewriting qualification criteria by March 15 to include champion identification as hard requirement. New criteria reviewed in weekly sales standup starting March 22."
- "By March 10, Elena adds deal-slippage risk flag to CRM for any open opportunity >60 days without a product demo"
- "Maria runs a 30-min retrospective with enterprise sales team every 6 weeks starting April 1, reviews win/loss data"
**For each action:**
- What exactly is changing?
- Who owns it?
- By when?
- How will you verify it worked?
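Those four questions define what a change-register entry must contain, which makes them checkable. A minimal sketch (the `Action` structure and the vague-word list are illustrative assumptions, not part of any real tooling here):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    change: str        # what exactly is changing
    owner: str         # a named person, not a team
    due: date          # a real date, not "soon"
    verification: str  # how you'll check it worked

# Words that signal a whitewash action item rather than a commitment
VAGUE = {"improve", "better", "more rigorous"}

def is_actionable(a: Action) -> bool:
    # A good action item is specific, owned, dated, and verifiable
    specific = a.change.strip() and not any(w in a.change.lower() for w in VAGUE)
    return bool(specific and a.owner.strip() and a.verification.strip())

good = Action("Rewrite qualification criteria to require a named champion",
              "Ravi", date(2025, 3, 15), "Criteria reviewed in weekly sales standup")
bad = Action("We'll improve our qualification process", "", date(2025, 3, 15), "")
print(is_actionable(good), is_actionable(bad))
```

The bad item fails on two counts at once: a vague verb and no owner or verification, which is the pattern the "bad action items" list above is warning against.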
### Step 7: Verification Date
The most commonly skipped step. Post-mortems are useless if nobody checks whether the changes actually happened and actually worked.
Set a verification date: "We'll review whether qualification criteria have been updated and whether deal slippage rate has improved at the June board meeting."
Without this, post-mortems are theater.
---
## Post-Mortem Output Format
```
EVENT: [Name and date]
EXPECTED: [What was supposed to happen]
ACTUAL: [What happened]
IMPACT: [Quantified]
TIMELINE
[Date]: [What happened or was visible]
[Date]: ...
5 WHYS
1. [Why did X happen?] → Because [Y]
2. [Why did Y happen?] → Because [Z]
3. [Why did Z happen?] → Because [A]
4. [Why did A happen?] → Because [B]
5. [Why did B happen?] → Because [ROOT CAUSE]
ROOT CAUSE: [One clear sentence]
CONTRIBUTING FACTORS
• [Factor] — how it contributed
• [Factor] — how it contributed
WARNING SIGNS MISSED
• [Signal visible at what date] — why it wasn't acted on
WHAT WAS IN CONTROL: [List]
WHAT WASN'T: [List]
CHANGE REGISTER
| Action | Owner | Due Date | Verification |
|--------|-------|----------|-------------|
| [Specific change] | [Name] | [Date] | [How to verify] |
VERIFICATION DATE: [Date of check-in]
```
---
## The Tone of Good Post-Mortems
Blame is cheap. Understanding is hard.
The goal isn't to establish that someone made a mistake. The goal is to understand why the system produced that outcome — so the system can be improved.
"The salesperson didn't qualify the deal properly" is blame.
"Our qualification framework hadn't been updated when we moved upmarket, and no one owned keeping it current" is understanding.
The first version fires or shames someone. The second version builds a more resilient organization.
Both might be true simultaneously. The distinction is: which one actually prevents recurrence?


@@ -0,0 +1,199 @@
# /em:stress-test — Business Assumption Stress Testing
**Command:** `/em:stress-test <assumption>`
Take any business assumption and break it before the market does. Revenue projections. Market size. Competitive moat. Hiring velocity. Customer retention.
---
## Why Most Assumptions Are Wrong
Founders are optimists by nature. That's a feature — you need optimism to start something from nothing. But it becomes a liability when assumptions in business models get inflated by the same optimism that got you started.
**The most dangerous assumptions are the ones everyone agrees on.**
When the whole team believes the $50M market is real, when every investor call goes well so you assume the round will close, when your model shows $2M ARR by December and nobody questions it — that's when you're most exposed.
Stress testing isn't pessimism. It's calibration.
---
## The Stress-Test Methodology
### Step 1: Isolate the Assumption
State it explicitly. Not "our market is large" but "the total addressable market for B2B spend management software in German SMEs is €2.3B."
The more specific the assumption, the more testable it is. Vague assumptions are unfalsifiable — and therefore useless.
**Common assumption types:**
- **Market size** — TAM, SAM, SOM; growth rate; customer segments
- **Customer behavior** — willingness to pay, churn, expansion, referrals
- **Revenue model** — conversion rates, deal size, sales cycle, CAC
- **Competitive position** — moat durability, competitor response speed, switching cost
- **Execution** — team velocity, hire timeline, product timeline, operational scaling
- **Macro** — regulatory environment, economic conditions, technology availability
### Step 2: Find the Counter-Evidence
For every assumption, actively search for evidence that it's wrong.
Ask:
- Who has tried this and failed?
- What data contradicts this assumption?
- What does the bear case look like?
- If a smart skeptic was looking at this, what would they point to?
- What's the base rate for assumptions like this?
**Sources of counter-evidence:**
- Comparable companies that failed in adjacent markets
- Customer churn data from similar businesses
- Historical accuracy of similar forecasts
- Industry reports with conflicting data
- What competitors who tried this found
The goal isn't to find a reason to stop — it's to surface what you don't know.
### Step 3: Model the Downside
Most plans model the base case and the upside. Stress testing means modeling the downside explicitly.
**For quantitative assumptions (revenue, growth, conversion):**
| Scenario | Assumption Value | Probability | Impact |
|----------|-----------------|-------------|--------|
| Base case | [Original value] | ? | |
| Bear case | -30% | ? | |
| Stress case | -50% | ? | |
| Catastrophic | -80% | ? | |
Key question at each level: **Does the business survive? Does the plan make sense?**
**For qualitative assumptions (moat, product-market fit, team capability):**
- What's the earliest signal this assumption is wrong?
- How long would it take you to notice?
- What happens between when it breaks and when you detect it?
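For the quantitative side, the scenario table reduces to a small runway calculation. The figures below are invented for illustration; the point is that each haircut level answers "does the business survive?" in months of cash:

```python
def runway_months(cash, monthly_burn, annual_revenue):
    # Months of cash left at current burn; infinite if revenue covers burn
    net = monthly_burn - annual_revenue / 12
    return float("inf") if net <= 0 else cash / net

cash, burn, base_arr = 2_000_000, 250_000, 1_800_000
for label, haircut in [("base", 0.0), ("bear", 0.3),
                       ("stress", 0.5), ("catastrophic", 0.8)]:
    arr = base_arr * (1 - haircut)
    print(f"{label}: ARR ${arr:,.0f} -> runway "
          f"{runway_months(cash, burn, arr):.1f} months")
```

Running this shows runway compressing as the ARR assumption is haircut; the stress test is whether the catastrophic row still leaves time to react.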
### Step 4: Calculate Sensitivity
Some assumptions matter more than others. Sensitivity analysis answers: **if this one assumption changes, how much does the outcome change?**
Example:
- If CAC doubles, how does that change runway?
- If churn goes from 5% to 10%, how does that change NRR in 24 months?
- If the deal cycle is 6 months instead of 3, how does that affect Q3 revenue?
High sensitivity = the assumption is a key lever. Wrong = big problem.
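Sensitivity is easy to compute generically: bump one input, hold everything else fixed, and measure the relative change in the outcome. A sketch using the churn example above (the numbers are illustrative):

```python
def retained_after(monthly_churn, months=24):
    # Fraction of a revenue cohort still paying after `months` of churn
    return (1 - monthly_churn) ** months

def sensitivity(model, x, bump=0.10):
    # Relative change in output per +10% relative change in the input
    return (model(x * (1 + bump)) - model(x)) / model(x)

print(f"5% churn, 24 months:  {retained_after(0.05):.1%} retained")
print(f"10% churn, 24 months: {retained_after(0.10):.1%} retained")
print(f"retention sensitivity to churn: {sensitivity(retained_after, 0.05):+.1%}")
```

Churn compounds: doubling it doesn't halve the retained cohort, it craters it, which is why churn is a high-sensitivity assumption in almost any recurring-revenue model.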
### Step 5: Propose the Hedge
For every high-risk assumption, there should be a hedge:
- **Validation hedge** — test it before betting on it (pilot, customer conversation, small experiment)
- **Contingency hedge** — if it's wrong, what's plan B?
- **Early warning hedge** — what's the leading indicator that would tell you it's breaking before it's too late to act?
---
## Stress Test Patterns by Assumption Type
### Revenue Projections
**Common failures:**
- Bottom-up model assumes 100% of pipeline converts
- Doesn't account for deal slippage, churn, seasonality
- New channel assumed to work before tested at scale
**Stress questions:**
- What's your actual historical win rate on pipeline?
- If your top 3 deals slip to next quarter, what happens to the number?
- What does the model look like if your new sales rep takes 4 months to ramp, not 2?
- If expansion revenue doesn't materialize, what's the growth rate?
**Test:** Build the revenue model from historical win rates, not hoped-for ones.
### Market Size
**Common failures:**
- TAM calculated top-down from industry reports without bottom-up validation
- Conflating total market with serviceable market
- Assuming 100% of SAM is reachable
**Stress questions:**
- How many companies in your ICP actually exist and can you name them?
- What's your serviceable obtainable market in years 1-3?
- What percentage of your ICP is currently spending on any solution to this problem?
- What does "winning" look like and what market share does that require?
**Test:** Build a list of target accounts. Count them. Multiply by ACV. That's your SAM.
### Competitive Moat
**Common failures:**
- Moat is technology advantage that can be built in 6 months
- Network effects that haven't yet materialized
- Data advantage that requires scale you don't have
**Stress questions:**
- If a well-funded competitor copied your best feature in 90 days, what do customers do?
- What's your retention rate among customers who have tried alternatives?
- Is the moat real today or theoretical at scale?
- What would it cost a competitor to reach feature parity?
**Test:** Ask churned customers why they left and whether a competitor could have kept them.
### Hiring Plan
**Common failures:**
- Time-to-hire assumes standard recruiting cycle, not current market
- Ramp time not modeled (3-6 months before full productivity)
- Key hire dependency: plan only works if specific person is hired
**Stress questions:**
- What happens if the VP Sales hire takes 5 months, not 2?
- What does execution look like if you only hire 70% of planned headcount?
- Which single person, if they left tomorrow, would most damage the plan?
- Is the plan achievable with current team if hiring freezes?
**Test:** Model the plan with 0 net new hires. What still works?
### Competitive Response
**Common failures:**
- Assumes incumbents won't respond (they will if you're winning)
- Underestimates speed of response
- Doesn't model resource asymmetry
**Stress questions:**
- If the market leader copies your product in 6 months, how does pricing change?
- What's your response if a competitor raises $30M to attack your space?
- Which of your customers have vendor relationships with your competitors?
---
## The Stress Test Output
```
ASSUMPTION: [Exact statement]
SOURCE: [Where this came from — model, investor pitch, team gut feel]
COUNTER-EVIDENCE
• [Specific evidence that challenges this assumption]
• [Comparable failure case]
• [Data point that contradicts the assumption]
DOWNSIDE MODEL
• Bear case (-30%): [Impact on plan]
• Stress case (-50%): [Impact on plan]
• Catastrophic (-80%): [Impact on plan — does the business survive?]
SENSITIVITY
This assumption has [HIGH / MEDIUM / LOW] sensitivity.
A 10% change → [X] change in outcome.
HEDGE
• Validation: [How to test this before betting on it]
• Contingency: [Plan B if it's wrong]
• Early warning: [Leading indicator to watch — and at what threshold to act]
```


@@ -0,0 +1,300 @@
---
name: founder-coach
description: "Personal leadership development for founders and first-time CEOs. Covers founder archetype identification, delegation frameworks, energy management, CEO calendar audits, leadership style evolution, blind spot identification, imposter syndrome, founder mental health, and succession planning. Use when a founder feels like the bottleneck, struggles to delegate, is burning out, transitioning from IC to executive, managing a board, or when user mentions founder mode, CEO growth, leadership development, delegation, burnout, or imposter syndrome."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: founder-development
  updated: 2026-03-05
  frameworks: leadership-growth, founder-toolkit
---
# Founder Development Coach
Your company can only grow as fast as you do. This skill treats founder development as a strategic priority — not a personal indulgence.
## Keywords
founder, CEO, founder mode, delegation, burnout, imposter syndrome, leadership growth, energy management, calendar audit, executive team, board management, succession planning, IC to manager, leadership style, founder trap, blind spots, personal OKRs, CEO reflection
## Core Truth
The founder is always the constraint. Not intentionally — it's structural. You built the company. You know everything. Decisions flow through you. This works until it doesn't.
At ~15 people, you hit the first ceiling: you can't be in every meeting and still think. At ~50 people, the second: your style starts creating culture problems. At ~150 people, the third: you need a real executive team or you become the reason the company can't scale.
The earlier you address this, the better.
---
## 1. Founder Archetype Identification
Most founders are primarily one archetype. Knowing yours predicts what you'll struggle with.
| Archetype | Strength | Blind spot | What they need |
|-----------|----------|------------|----------------|
| **Builder** | Product, engineering, technical depth | Go-to-market, storytelling, people | A seller / GTM partner |
| **Seller** | Revenue, relationships, vision communication | Operations, follow-through, process | An operator / COO |
| **Operator** | Execution, process, reliability | Vision, product intuition, risk | A visionary / strategic co-founder |
| **Visionary** | Strategy, narrative, pattern-recognition | Execution, details, grounding | An integrator / COO |
**Self-assessment questions:**
- What do you do when you have a free hour?
- What do you procrastinate on most?
- What do your co-founders or early team complain you don't do?
- What's the best feedback you've received about your leadership?
Most founders are Builder or Visionary. Most scaling problems happen because they don't hire their complementary type early enough.
---
## 2. Delegation Framework
Founders fail to delegate for four reasons:
1. "Nobody does it as well as I do" (often true short-term, fatal long-term)
2. "It takes longer to explain than to do it" (true once; not true the 10th time)
3. "I lose control if I don't do it myself" (control is an illusion at scale)
4. "If it fails, it's my fault" (it's your fault if you never let anyone else try)
### The Skill × Will Matrix
| | High Skill | Low Skill |
|---|-----------|----------|
| **High Will** | Delegate fully | Coach and develop |
| **Low Will** | Motivate or reassign | Manage out or redesign role |
**Rules:**
- High skill + high will → Give the work and get out of the way
- High will + low skill → Invest in them. They want to grow.
- High skill + low will → Find out why. Fix the environment or accept the mismatch.
- Low skill + low will → Don't delegate to them. Address the performance issue.
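The matrix and its four rules collapse into a single decision function. A minimal sketch, assuming 1-5 ratings with 3 as the high/low threshold (both choices are mine, not from the framework itself):

```python
def delegation_call(skill: int, will: int, threshold: int = 3) -> str:
    # Skill x Will matrix: ratings on a 1-5 scale, >= threshold counts as high
    hi_skill, hi_will = skill >= threshold, will >= threshold
    if hi_skill and hi_will:
        return "delegate fully"
    if hi_will:
        return "coach and develop"
    if hi_skill:
        return "motivate or reassign"
    return "manage out or redesign role"

print(delegation_call(skill=5, will=4))  # high skill + high will
print(delegation_call(skill=2, will=5))  # eager but not yet capable
```

The value of writing it down is the forced honesty: every person/task pair lands in exactly one quadrant, with no "I'll just keep doing it myself" escape hatch.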
### The Delegation Ladder
Not all delegation is equal. Build up gradually:
1. "Do exactly what I tell you" — not delegation, instruction
2. "Research this and report back" — information gathering
3. "Propose a solution and I'll decide" — thinking delegation
4. "Decide and tell me what you decided" — decision delegation with review
5. "Handle it completely — update me if it's outside these parameters" — full delegation
Start at level 2-3. Move people up as trust is established. Most founders never get past level 3 with their team — that's the bottleneck.
### What to delegate first
**Delegate first (high volume, low stakes):**
- Recurring operational tasks you do the same way every time
- Information gathering and synthesis
- Meeting coordination and scheduling
- Reports and updates you produce regularly
**Delegate next (skill-buildable):**
- Customer interactions (with clear principles)
- Hiring screens (after you've trained judgment)
- Partner relationship management
- Budget management within parameters
**Delegate last (strategic, irreversible):**
- Major strategic pivots
- Executive hires
- Large financial commitments
- M&A decisions
---
## 3. Energy Management
Founders manage energy, not just time. Time is fixed. Energy is renewable — but only if you manage it.
### The Energy Audit
Map your week by energy, not tasks. See `references/founder-toolkit.md` for the full template.
**Categories:**
- 🟢 **Energizing:** Activities that leave you sharper after doing them
- 🟡 **Neutral:** Neither energizing nor draining
- 🔴 **Draining:** Activities that leave you depleted
**Common founder energy patterns:**
- **Builders:** Energized by creating, drained by politics and process
- **Sellers:** Energized by people and wins, drained by detail work and admin
- **Operators:** Energized by solving, drained by ambiguity and indecision
- **Visionaries:** Energized by strategy and ideas, drained by execution and repetition
**The rule:** Maximize green. Eliminate or delegate red. Accept yellow as the price of leadership.
### Energy management practices
**Protect deep work time.** 2–4 hours of uninterrupted thinking time, 3–5 days per week. Schedule it. Defend it. This is where strategy happens.
**Batch shallow work.** Email, Slack, administrative tasks — twice a day maximum.
**Single-task during recovery.** If you're depleted, don't try to do your best work. Do tasks that don't require your best.
**Identify your peak window.** Most people have 4–6 peak hours per day. Schedule your hardest work in those windows.
---
## 4. CEO Calendar Audit
The calendar is the most honest document in a founder's life. It shows what you actually prioritize, not what you say you prioritize.
### Running the audit
Pull the last 4 weeks of calendar data. Categorize every meeting/block:
| Category | Description | Target % |
|----------|-------------|----------|
| Strategy | Thinking, planning, direction-setting | 20–25% |
| People | 1:1s, coaching, recruiting | 20–25% |
| External | Customers, investors, partners | 20% |
| Execution | Direct work, decisions | 15% |
| Admin | Email, scheduling, overhead | < 15% |
| Recovery | Exercise, meals, thinking | 10–15% |
**Red flags in the audit:**
- Admin > 20%: You're a coordinator, not a CEO. Fix your systems.
- Execution > 30%: You're still an IC. Build the team.
- People < 10%: Your team is running on empty. They need more of you.
- No recovery blocks: You're running on adrenaline. It ends badly.
- Strategy < 10%: You're running the company, not leading it.
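The target percentages and red flags above are mechanical enough to script. A sketch in the spirit of the repo's stdlib Python tools (the `audit_calendar` helper and sample data are hypothetical):

```python
# Given total hours per category over the audit window, compute shares
# and flag the red flags listed in this section.
def audit_calendar(hours: dict) -> list:
    total = sum(hours.values())
    pct = {k: 100 * v / total for k, v in hours.items()}
    flags = []
    if pct.get("admin", 0) > 20:
        flags.append("Admin > 20%: you're a coordinator, not a CEO")
    if pct.get("execution", 0) > 30:
        flags.append("Execution > 30%: you're still an IC")
    if pct.get("people", 0) < 10:
        flags.append("People < 10%: your team is running on empty")
    if hours.get("recovery", 0) == 0:
        flags.append("No recovery blocks: running on adrenaline")
    if pct.get("strategy", 0) < 10:
        flags.append("Strategy < 10%: running the company, not leading it")
    return flags

week = {"strategy": 4, "people": 3, "external": 8,
        "execution": 18, "admin": 10, "recovery": 0}
for flag in audit_calendar(week):
    print("-", flag)
```

Run it against four weeks of categorized calendar data; the output is the list of conversations to have with yourself.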
### The CEO's primary job at each stage
| Stage | CEO should spend most time on... |
|-------|--------------------------------|
| Seed | Product and customers. Directly. |
| Series A | Hiring the executive team. Recruiting is your job. |
| Series B | Culture, strategy, and external (investors/partners/customers) |
| Series C+ | Vision, board, external narrative, executive development |
If you're spending time on things from two stages ago, you haven't made the transition.
---
## 5. Leadership Style Evolution
The job changes at every stage. Most founders don't change with it.
**IC → Manager (0 to ~10 people):**
You need to teach and build trust. People are watching how you treat failure. The skill: give clear context, set expectations, check in frequently.
**Manager → Leader (~10 to ~50 people):**
You can't manage everyone directly. You need people who manage people. The skill: hire managers you trust, let them manage.
**Leader → Executive (~50 to ~200 people):**
You're now setting culture and direction, not managing work. The skill: communicate obsessively, decide at the right altitude, develop your leadership team.
**Executive → Institutional CEO (200+):**
You're a symbol as much as a manager. The skill: build systems that work without you; focus on board, investors, and external narrative.
**The hardest transition:** Manager → Leader. You have to stop doing things yourself and trust people you're still getting to know.
---
## 6. Blind Spot Identification
Everyone has them. Founders more than most — because nobody in the early company had the authority or safety to tell you.
### Common founder blind spots
- **Communication:** "I said it once, they should know" — you said it; they didn't hear it or didn't believe it
- **Decision speed:** Moving so fast that teams can't orient or build on your direction
- **Context hoarding:** Knowing what's happening without sharing it, then being frustrated that teams make bad decisions
- **Optimism bias:** Consistently underestimating timelines, cost, and difficulty
- **Founder exceptionalism:** Rules that apply to everyone don't apply to you
- **Feedback avoidance:** Creating an environment where no one gives you honest feedback
### How to find your blind spots
1. **360 feedback (anonymous):** Once a year. Ask direct reports, peers, board members. Include "What does [name] do that gets in the way of our success?"
2. **Exit interview analysis:** What do departing employees consistently say? Find the pattern.
3. **Failure post-mortems:** What do your worst decisions have in common? What were you assuming that wasn't true?
4. **The energy audit:** Where do you consistently drain the people around you?
---
## 7. Imposter Syndrome Toolkit
It doesn't go away. It evolves. The founder who was scared to pitch to investors is now scared to manage a board. The founder who was scared to hire is now scared to fire.
**The reframe:** Imposter syndrome is proportional to stretch. If you never feel it, you're not growing.
**Practical tools:**
- **Evidence file:** Document wins, compliments, decisions that worked. Read it when the doubt hits.
- **Normalize the feeling:** "I feel underprepared for this" ≠ "I am an imposter." Feeling and fact are different.
- **Do the thing anyway.** Competence comes from doing, not from feeling ready.
- **Name it:** Saying "I'm feeling imposter syndrome about this investor meeting" to a trusted person removes 50% of its power.
---
## 8. Founder Mental Health
Burnout isn't weakness. It's a predictable outcome of high-demand + low-recovery + no control over inputs.
### Burnout signals
Early: Irritability, difficulty sleeping, decisions feel harder than they should, loss of enthusiasm for the mission.
Mid: Physical symptoms (headaches, illness), cynicism about the company, social withdrawal, all tasks feel equally important (priority paralysis).
Late: Can't function, decisions have stopped, team notices before you do.
**If you're in late burnout:** Stop performing. Get support. The company needs a functioning founder more than it needs a martyred one.
### Structural prevention
- **Protect recovery time.** Not weekends — protected time during the week where you're not available.
- **Therapy or coaching.** Not optional for founders. The job is isolating and the stakes are high.
- **Peer group.** Other founders at similar stages. They're the only people who actually understand the job.
- **Clear off-ramps.** Know what "enough for today" looks like. Don't let the work be infinite.
---
## 9. The Founder Mode Trap
Paul Graham's "Founder Mode" essay made the case that great founders stay deeply involved in operations — skip middle management and go direct. It resonated because it's sometimes true.
**When founder mode helps:**
- Crisis recovery (company needs direct leadership)
- Product-market fit search (speed matters more than org health)
- High-value, irreversible decisions (you should be in the room)
- Early stages when the team is small
**When founder mode hurts:**
- When it undermines managers you've hired (they can't lead if you override them)
- When it's driven by distrust rather than strategy
- When it prevents the team from developing judgment
- When you're doing it because you miss doing, not because the company needs you to
**The test:** Are you going deep because the situation requires it, or because you're uncomfortable with the loss of control? The first is leadership. The second is the trap.
---
## 10. Succession Planning
Building a company that works without you is not disloyalty — it's the ultimate expression of leadership.
**Succession is not just about exit.** It's about resilience. What happens if you're sick? On sabbatical? Acquired?
**Succession readiness levels:**
- Level 1: You've documented your key knowledge and processes
- Level 2: At least one person can cover each of your key functions for 2 weeks
- Level 3: Your leadership team can run the company for a quarter without you
- Level 4: You've identified and developed your potential successor
Most founders are at Level 0. Level 2 is a reasonable target. Level 3 is a strategic asset.
---
## Key Questions for Founder Development
- "What decisions did you make last week that someone else could have made?"
- "What are you still doing that you should have delegated 6 months ago?"
- "When did you last get honest, critical feedback? From whom? What did it say?"
- "What would need to be true for the company to run for a week without you?"
- "What's draining your energy that you've accepted as unavoidable?"
## Detailed References
- `references/leadership-growth.md` — Maxwell levels, situational leadership, founder-to-CEO transition
- `references/founder-toolkit.md` — Weekly reflection, energy audit, delegation matrix, 1:1 templates

# Founder Toolkit
Practical tools for founder self-management and leadership development.
---
## 1. Weekly CEO Reflection Template
**15 minutes. Every Friday. No excuses.**
This is the most important meeting of the week. You with yourself.
```
DATE: _______________
## This Week
**1. What was my most important contribution this week?**
(Not the longest meeting or the hardest problem — the thing that will matter in 90 days.)
_______________________________________________
**2. Where did I add the least value? Why was I involved?**
(Be honest. Where were you in the room out of habit, not necessity?)
_______________________________________________
**3. What should I have delegated but didn't?**
(Name the specific task and the person you could have delegated it to.)
_______________________________________________
**4. What decision am I avoiding? Why?**
(Fear of being wrong? Not enough information? Conflict avoidance?)
_______________________________________________
**5. What would I do differently this week if I could do it over?**
(One thing. Make it specific.)
_______________________________________________
## Next Week
**My one most important outcome for next week:**
_______________________________________________
**What will I stop doing / not start / protect myself from?**
_______________________________________________
```
---
## 2. Energy Audit Template
Map your week by energy, not tasks. Do this for one full work week.
### Step 1: Time block mapping
For each 30-minute block in your week, record:
- What you did
- Energy level: 🟢 Energizing / 🟡 Neutral / 🔴 Draining
```
Monday:
08:00-08:30: __________________ [🟢/🟡/🔴]
08:30-09:00: __________________ [🟢/🟡/🔴]
09:00-09:30: __________________ [🟢/🟡/🔴]
... (continue through the day)
```
### Step 2: Pattern analysis
After one week, categorize activities:
| Activity type | Energy level | Total hours | % of week |
|--------------|-------------|-------------|-----------|
| Customer calls | | | |
| Investor meetings | | | |
| Team 1:1s | | | |
| Product decisions | | | |
| Strategy/planning | | | |
| Email/Slack | | | |
| Recruiting | | | |
| Financial review | | | |
| External talks/events | | | |
| Administrative tasks | | | |
| Deep work/building | | | |
| Recovery/breaks | | | |
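Rolling the 30-minute blocks up into this table is a straightforward tally. A minimal sketch, assuming blocks are recorded as `(activity, energy)` pairs (the `tally` helper and sample data are illustrative):

```python
from collections import defaultdict

def tally(blocks):
    """blocks: list of (activity, energy) pairs, one per 30-minute block.
    Returns {activity: (total_hours, pct_of_week, dominant_energy)}."""
    hours = defaultdict(float)
    levels = defaultdict(list)
    for activity, level in blocks:
        hours[activity] += 0.5          # each block is half an hour
        levels[activity].append(level)
    total = sum(hours.values())
    return {a: (h, 100 * h / total, max(set(levels[a]), key=levels[a].count))
            for a, h in hours.items()}

week = [("Email/Slack", "red")] * 20 + [("Customer calls", "green")] * 8
for activity, (h, pct, dominant) in tally(week).items():
    print(f"{activity:<16} {dominant:<6} {h:5.1f}h {pct:5.1f}%")
```

The dominant-energy column matters more than the hours column: a low-hours activity that is reliably red is still a delegation candidate.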
### Step 3: Optimization plan
**Green activities to protect (min 40% of week):**
- _______________________________________________
**Red activities to eliminate or delegate (target: < 15% of week):**
- Activity: __________________ → Delegate to: __________________
- Activity: __________________ → Eliminate via: __________________
**Your personal energy peak hours:**
I do my best thinking: _______ to _______
Schedule this time as: Protected deep work (no meetings)
---
## 3. Delegation Matrix
For every task you regularly do, run it through this matrix.
### Assessment
| Task | Skill level needed | My will to keep it | Decision |
|------|-------------------|-------------------|----------|
| | High / Med / Low | High / Med / Low | Keep / Coach / Delegate / Kill |
### Delegation scoring
| My Skill | My Will | Decision |
|----------|---------|----------|
| High | High | Keep — this is your zone of genius |
| High | Low | Delegate — you can do it, but it drains you. Train someone. |
| Low | High | Develop — learn it or hire for it |
| Low | Low | Kill or outsource — why is this on your plate? |
### The 70% rule
If someone can do a task 70% as well as you, delegate it. Trying to get to 100% is a trap:
- Their 70% will grow to 90% with practice
- Your 30% extra effort costs more than the quality gap
- You free up time for things only you can do
---
## 4. 1:1 Template for Direct Reports
Weekly or biweekly. 30 minutes. Their agenda, not yours.
```
DATE: _______________
PERSON: _______________
## Their Section (first 20 min)
**What's on their mind? (open the meeting with this)**
(No agenda from you first — let them lead)
**What are they working on? Where are they stuck?**
**What do they need from me?**
**Anything they wanted to raise but haven't had the chance to?**
## Your Section (last 10 min)
**Context to share (strategy, changes, what they should know):**
**Direct feedback to give (if any):**
- Be specific: "In Tuesday's meeting, when you [did X], the impact was [Y]"
- Make it actionable: "Next time, I'd suggest [Z]"
**Career/growth check-in (monthly, not every meeting):**
- How are they feeling about their growth?
- What do they want to be doing more of?
- What are they interested in that they're not currently doing?
## Follow-ups
| Commitment | Owner | Due |
|------------|-------|-----|
| | | |
```
### Rules for effective 1:1s
- **Their agenda first.** If you dominate with your updates, they stop bringing theirs.
- **No status updates.** That's what tools are for. This time is for their thinking, blockers, and development.
- **Consistent time.** Rescheduled 1:1s signal that they're not a priority.
- **Take notes.** Review them before the next meeting. It signals that you listened.
- **Follow up on commitments.** If you say "I'll get you that answer by Thursday," get it by Thursday.
---
## 5. Personal OKRs for the Founder
Most founders hold their team accountable to goals but have none themselves. Fix that.
### Template: Quarterly Personal OKRs
```
Q[X] YYYY | FOUNDER OKRs
## My One Priority This Quarter
(The single most important thing I personally must accomplish)
_______________________________________________
## Objective 1: [Leadership Development]
What I'm trying to achieve: _______________________________________________
KR 1.1: [Measurable outcome by EoQ]
KR 1.2: [Measurable outcome by EoQ]
KR 1.3: [Measurable outcome by EoQ]
Progress check (mid-quarter): _______________________________________________
## Objective 2: [Delegation / Team Building]
What I'm trying to achieve: _______________________________________________
KR 2.1: [Measurable outcome by EoQ]
KR 2.2: [Measurable outcome by EoQ]
## Objective 3: [External Impact — Investors / Customers / Market]
What I'm trying to achieve: _______________________________________________
KR 3.1: [Measurable outcome by EoQ]
KR 3.2: [Measurable outcome by EoQ]
## The "Stop Doing" List (equally important)
Things I'm committing to stop doing this quarter:
- Stop: _______________________________________________
- Stop: _______________________________________________
- Stop: _______________________________________________
```
### Personal OKR examples
**Objective: Become a better coach, not just a decision-maker**
- KR: 90% of my direct reports can make their top 3 recurring decisions without me by EoQ
- KR: In 1:1 reviews, 80% of team rates me as "helps me think through problems" vs "tells me what to do"
- KR: Conduct quarterly 360 feedback session with all direct reports
**Objective: Build investor trust before I need it**
- KR: Monthly investor updates sent within 5 days of month-end, every month this quarter
- KR: 1:1 calls with each board member, once per quarter, outside of board meetings
- KR: Create and share 3-year financial model with board by EoQ
**Objective: Protect my energy and performance**
- KR: 3+ hours of protected deep work time per day, 4+ days per week
- KR: Complete weekly CEO reflection every Friday (track: 0/13 weeks → 13/13)
- KR: Zero email after 8pm, zero weekends unless explicit crisis
---
## 6. The "Stop Doing" List
The hardest list to make and the most valuable to keep.
Most founders have clear to-do lists. Few have stop-doing lists. The asymmetry is the problem.
### The stop-doing audit
**Things to stop doing immediately (decision you can make today):**
- Attending meetings you don't add value to
- Being the default person for decisions that should be made by others
- Redoing work that your team completed
- Checking email/Slack during deep work blocks
- Starting tasks you know you'll delegate partway through
**Things to stop doing by delegating (need to train someone):**
- _______________________________________________
- _______________________________________________
- _______________________________________________
**Things to stop doing by building systems:**
- Recurring manual tasks → automate
- Recurring decisions → write decision criteria so others can decide
- Recurring explanations → document once, reference always
### The decision filter
Before accepting new responsibilities, run through:
1. Does this require something only I can do?
2. Is this the highest and best use of my time?
3. If I say yes to this, what am I saying no to?
If the answers are no, no, and something important — say no.
---
## 7. Evidence File
For when imposter syndrome hits. Keep a running file of:
**Wins** (monthly minimum)
- Company milestones you led
- Decisions that worked out well
- Feedback you received that was genuinely positive
**Quotes** (capture as they happen)
- Direct quotes from team members, customers, investors about your impact
- Emails or messages that reflect trust or appreciation
**The hard calls that paid off**
- Decisions you were scared to make that turned out well
- Times you said no to something that would have hurt the company
**When to read it:** When you're doubting yourself before a board meeting, a hard conversation, a big pitch. The feeling isn't fact. The evidence file is.

# Leadership Growth Reference
Frameworks for founder and executive leadership development.
---
## 1. The 5 Levels of Leadership (Maxwell)
John Maxwell's model describes leadership development as a ladder. Most founders start at Level 2–3 and need to reach Level 4–5 to scale effectively.
| Level | Name | People follow because... | What it looks like |
|-------|------|--------------------------|-------------------|
| 1 | Position | They have to (title/authority) | "Do this because I'm the CEO" |
| 2 | Permission | They want to (relationship) | People choose to work with you beyond the job requirement |
| 3 | Production | You produce results | Team rallies because you deliver; your track record gives credibility |
| 4 | People Development | You develop others | You're multiplying leaders; your success is measured by others' growth |
| 5 | Pinnacle | Who you are (reputation) | People follow because of what you've built and who you've become |
**Most founders are at Level 3.** They got here by building and shipping. The path to scaling is Level 4: developing other leaders.
**The Level 3 trap:** Production-focused founders attract doers, not leaders. They value results over growth. Their teams are effective but dependent. Every decision still goes through the founder.
**The Level 4 shift:** Measure your success by how well your team succeeds without you. Your job is to make the people around you better.
---
## 2. Situational Leadership Model
Ken Blanchard's model says effective leadership style shifts based on the person and the task — not the leader's preference.
Four styles based on the follower's development level:
| Development Level | Competence | Commitment | Leadership Style | What to do |
|------------------|------------|------------|-----------------|------------|
| D1 — Enthusiastic Beginner | Low | High | S1: Directing | High direction, low support. Tell them what to do. |
| D2 — Disillusioned Learner | Low/Med | Low | S2: Coaching | High direction + high support. Teach and encourage. |
| D3 — Capable but Cautious | Medium/High | Variable | S3: Supporting | Low direction, high support. Collaborate and encourage. |
| D4 — Self-Reliant Achiever | High | High | S4: Delegating | Low direction, low support. Get out of the way. |
**Common founder error:** Using the same leadership style with everyone. The founder who directs a D4 will frustrate them into leaving. The founder who delegates to a D1 will watch them fail.
**Diagnosis before deciding:**
Before determining your style, ask for each person + task:
- How much do they know about this specific task? (Not in general — this task.)
- How much do they want to do this specific task?
These answers may surprise you. A senior engineer may be D4 on architecture and D1 on customer calls.
---
## 3. The Founder → CEO Transition
The hardest leadership change most founders face, and nobody prepares them for it.
### What changes
**As a founder, you were judged on:**
- What you personally built
- How fast you moved
- Your own output
**As a CEO, you're judged on:**
- What your team produced
- How effectively you set direction
- The quality of the people around you
The skills that made you a great founder — doing, deciding, building — can actively work against you as a CEO.
### The transition phases
**Phase 1: Still doing (0–15 people)**
You're right to be deep in the work. Speed requires it. Your personal output matters.
Risk: Staying here too long.
**Phase 2: Building around you (15–50 people)**
You're hiring and starting to delegate. People do work you used to do.
Challenge: Learning to trust output that doesn't look like yours.
Failure mode: Hiring people and then redoing their work.
**Phase 3: Leading through leaders (50–150 people)**
You no longer know everything happening in the company. That's correct.
Challenge: Managing people who manage people — twice removed from the work.
Failure mode: Bypassing your managers to go direct (undermines them, creates chaos).
**Phase 4: Setting the container (150+ people)**
Your job is culture, strategy, and the senior leadership team. You're a CEO, not a senior contributor.
Challenge: Staying relevant and strategic without getting lost in the weeds.
Failure mode: Retreating to execution to feel productive.
### The emotional reality
Most founders describe the transition as:
- A loss of identity ("I used to know everything that was happening")
- A loss of control ("Decisions happen without me")
- A loss of clarity ("Was I more effective before?")
These are real losses, not just discomfort. Acknowledge them. Find identity in what the CEO role is, not what the founder role was.
---
## 4. Building Your Executive Team
### When to hire your first executive
Common question: "When do I need a VP/C-suite?"
**Trigger signs:**
- The function is failing and you can't fix it by working harder
- You can't attract or develop talent in that function because you lack the expertise
- The function is growing faster than you can lead it
- You're making bad decisions in that domain because you don't have deep knowledge
**Order of first executives:**
Most companies hire in this order, but the right order depends on your archetype and what's breaking:
1. First non-founder exec is usually Sales (VP Sales) or Engineering (VP Eng / CTO)
2. Then COO/Operations when coordination becomes the bottleneck
3. Then Finance (CFO) when fundraising or financial complexity demands it
4. Then People/HR when hiring velocity and culture require dedicated ownership
### How to onboard executives
**The 30-60-90 plan:**
- Day 1–30: Listen. Meet everyone. Learn the current state. No major decisions.
- Day 31–60: Diagnose. What's working, what isn't, what's missing. Share findings.
- Day 61–90: Act. Make changes. Start building systems. Establish their leadership presence.
**The trust-building sequence:**
Start with small, visible wins. Let them prove themselves in low-stakes situations before handing over high-stakes decisions.
**The founder's role during exec onboarding:**
- Provide context generously
- Introduce them with genuine authority ("This is the decision-maker for X — go to them, not me")
- Don't override their decisions publicly
- Give feedback privately, not in front of their team
**Failure mode:** Hiring a great executive and then making them feel like a senior employee. If you override every major decision, you don't have an executive — you have an expensive advisor.
---
## 5. Managing Your Board
### The fundamental tension
You work for the board. The board elected you. They can remove you. This is a governance reality, not a threat.
And: You lead the company. The board sets governance and approves major decisions, but they're not running the business day-to-day. You are.
**Healthy dynamic:** Board holds accountability; CEO holds authority. They're not adversarial — they're complementary.
### The founder mistake
Most founders either:
1. **Over-inform:** Share every detail, create noise, invite micro-management
2. **Under-inform:** Share only wins, board is surprised by problems, trust erodes
Neither works. The goal is strategic partnership.
### What the board actually needs
- **Monthly written update:** Financial performance vs plan, key metrics, top 3 issues + proposed solutions, forward-looking risks. 1–2 pages.
- **Quarterly board meeting:** Strategic discussion, not financial recap. They've read the update. Use the time for decisions and input.
- **Real-time alerts:** Big bad news before the meeting. Never let board members be surprised by negative news they should have known earlier.
### Managing board members individually
Invest in 1:1 relationships with each board member between meetings. Understand what they care about. Use their expertise.
Board members who feel informed and useful are your allies. Board members who feel blindsided or sidelined become difficult.
**The pre-meeting call:** Before every board meeting, call each member individually. Preview the agenda, surface concerns, align on decisions. The meeting itself should have no surprises.
### When the board challenges you
"The board doesn't trust my judgment" is often really: "I haven't given them enough information to trust my judgment."
Fix the transparency gap before assuming it's a political problem.
**When the board is actually wrong:** Make the case clearly, once, with data. If they override you on something important and you can't accept it, that's a signal about fit. Founders get removed. It happens. Build board relationships before you need them to trust you on a hard call.

---
name: internal-narrative
description: "Build and maintain one coherent company story across all audiences — employees, investors, customers, candidates, and partners. Detects narrative contradictions and ensures the same truth is framed for each audience's needs. Use when preparing investor updates, all-hands presentations, board communications, recruiting narratives, crisis communications, or when user mentions company narrative, messaging consistency, storytelling, all-hands, investor update, or crisis communication."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: c-level
domain: narrative-strategy
updated: 2026-03-05
frameworks: narrative-frameworks, all-hands-template
---
# Internal Narrative Builder
One company. Many audiences. Same truth — different lenses. Narrative inconsistency is trust erosion. This skill builds and maintains coherent communication across every stakeholder group.
## Keywords
narrative, company story, internal communication, investor update, all-hands, board communication, crisis communication, messaging, storytelling, narrative consistency, audience translation, founder narrative, employee communication, candidate narrative, partner communication
## Core Principle
**The same fact lands differently depending on who hears it and what they need.**
"We're shifting resources from Product A to Product B" means:
- To employees: "Is my job safe? Why are we abandoning what I built?"
- To investors: "Smart capital allocation — they're doubling down on the winner"
- To customers of Product A: "Are they abandoning us?"
- To candidates: "Exciting new focus — are they decisive?"
Same fact. Four different narratives needed. The skill is maintaining truth while serving each audience's actual question.
---
## Framework
### Step 1: Build the Core Narrative
One paragraph that every other communication derives from. This is the source of truth.
**Core narrative template:**
> [Company name] exists to [mission — present tense, specific]. We're building [what you're building] because [the problem you're solving]. Our approach is [your unique way of doing this]. We're at [honest description of current state] and heading toward [where you're going in concrete terms].
**Good core narrative (example):**
> Acme Health exists to reduce preventable falls in elderly care using smartphone-based mobility analysis. We're building an AI diagnostic tool for care teams because current fall risk assessments are subjective, infrequent, and often wrong. Our approach — using the phone's camera during a 10-second walking test — means no new hardware, no specialist required. We have 80 care facilities in DACH paying us €800K ARR, and we're heading to €3M ARR by demonstrating clinical value at scale before our Series B.
**Bad core narrative:**
> Acme Health is an innovative AI company revolutionizing elderly care through cutting-edge technology that empowers care providers and improves patient outcomes across the continuum of care.
The good version is usable. The bad version says nothing.
---
### Step 2: Audience Translation Matrix
Take the core narrative and translate it for each audience. Same truth, different frame.
| Fact | Employees need to hear | Investors need to hear | Customers need to hear | Candidates need to hear |
|------|----------------------|----------------------|----------------------|------------------------|
| We have 80 customers | "We've proven the model — your work matters" | "Product-market fit signal, capital efficient" | "80 care facilities trust us" | "Traction you'd be joining" |
| We pivoted from hardware | "We were honest enough to change course" | "Capital-efficient pivot to better unit economics" | "We found a faster, simpler way to serve you" | "We make decisions based on evidence, not ego" |
| We missed Q2 revenue | "Here's why, here's the plan, here's what you can do" | "Revenue mix shifted — trailing indicator improving" | [Usually don't tell customers revenue misses] | [Usually not shared externally] |
| We're hiring fast | "The team is growing — your network matters" | "Headcount plan aligned to growth" | [Not relevant unless it affects service] | "This is a rocket ship moment" |
**Rules:**
- Never contradict yourself across audiences. Different framing ≠ different facts.
- "We told investors growth, told employees efficiency" is a contradiction. Audit for this.
- Investors and employees see each other. Board members talk to your team. Candidates google you.
---
### Step 3: Contradiction Detection
Before any major communication, run the contradiction check:
**Question 1:** What did we tell investors last month about [topic]?
**Question 2:** What did we tell employees about the same topic?
**Question 3:** Are these consistent? If not — which version is true?
**Common contradictions:**
- "Efficient growth" to investors + "we're hiring aggressively" to candidates
- "Strong pipeline" to investors + "sales is struggling" at all-hands
- "Customer-first" in culture + recent decisions that clearly prioritized revenue over customer need
**When you catch a contradiction:** Fix the less accurate version, then communicate the correction explicitly. "Last month I said X. After more reflection, X is not quite right. Here's the clearer version."
Correcting yourself before someone else catches it builds more trust than getting caught.
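The audit in Steps 1–3 reduces to grouping what was said by topic and flagging topics where different audiences got different claims. An illustrative sketch (the `find_contradictions` helper, the log format, and the claim-normalization assumption — identical wording means identical claim — are all hypothetical):

```python
from collections import defaultdict

def find_contradictions(statements):
    """statements: list of (topic, audience, claim) tuples.
    Returns topics where more than one distinct claim was made."""
    by_topic = defaultdict(set)
    for topic, audience, claim in statements:
        by_topic[topic].add(claim)
    return [t for t, claims in by_topic.items() if len(claims) > 1]

log = [
    ("hiring", "investors", "efficient growth, slowing hiring"),
    ("hiring", "candidates", "hiring aggressively"),
    ("pipeline", "investors", "strong pipeline"),
    ("pipeline", "employees", "strong pipeline"),
]
print(find_contradictions(log))  # → ['hiring']
```

In practice the hard part is normalizing claims — "efficient growth" and "hiring aggressively" only collide when a human reads them side by side, which is exactly what the pre-communication check forces.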
---
### Step 4: Audience-Specific Communication Cadence
| Audience | Format | Frequency | Owner |
|----------|--------|-----------|-------|
| Employees | All-hands | Monthly | CEO |
| Employees | Team updates | Weekly | Team leads |
| Investors | Written update | Monthly | CEO + CFO |
| Board | Board meeting + memo | Quarterly | CEO |
| Customers | Product updates | Per release | CPO / CS |
| Candidates | Careers page + interview narrative | Ongoing | CHRO + Founders |
| Partners | Quarterly business review | Quarterly | BD Lead |
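The cadence table above can be tracked with a small stdlib script that flags overdue updates. The day counts and row selection are assumptions for illustration (rows with no fixed cadence, like "per release" and "ongoing", are omitted); `overdue` and `CADENCE` are hypothetical names, not part of any shipped tooling.

```python
from datetime import date

# (audience, format, cadence in days, owner) -- mirrors the table above.
CADENCE = [
    ("Employees", "All-hands", 30, "CEO"),
    ("Employees", "Team updates", 7, "Team leads"),
    ("Investors", "Written update", 30, "CEO + CFO"),
    ("Board", "Board meeting + memo", 90, "CEO"),
    ("Partners", "Quarterly business review", 90, "BD Lead"),
]

def overdue(last_sent, today=None):
    """Return (audience, format, owner) rows past their cadence window.

    last_sent maps (audience, format) -> date of the last communication;
    rows with no recorded send are treated as current, not overdue.
    """
    today = today or date.today()
    return [
        (aud, fmt, owner)
        for aud, fmt, days, owner in CADENCE
        if (today - last_sent.get((aud, fmt), today)).days > days
    ]

last = {("Employees", "All-hands"): date(2024, 1, 1)}
print(overdue(last, today=date(2024, 3, 1)))
```

With the sample data, the January all-hands is 60 days old against a 30-day cadence, so it is the only row flagged.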
---
### Step 5: All-Hands Structure and Cadence
See `templates/all-hands-template.md` for the full template.
**Principles:**
- Lead with honest state of the company. No spin.
- Connect company performance to individual work: "Here's how what you built contributed to this outcome."
- Give people a reason to be proud of their choice to work here.
- Leave time for real Q&A — not curated questions.
**All-hands failure modes:**
- CEO speaks for 55 of 60 minutes; Q&A is "any quick questions?"
- All good news, all the time — employees know when you're not being honest
- Metrics without context: "ARR grew 15%" without explaining if that's good, bad, or expected
- Questions deflected: "That's a great point, we should follow up on that" → never followed up
---
### Step 6: Crisis Communication
When the narrative breaks — someone leaves publicly, a product fails, a security breach, a press article.
**The 4-hour rule:** If something is public or about to be, communicate internally within 4 hours. Employees should never learn about company news from Twitter.
**Crisis communication sequence:**
**Hours 0–4 (internal first):**
1. CEO or relevant leader sends an internal message
2. Acknowledge what happened factually
3. State what you know and what you don't know yet
4. Tell people what you're doing about it
5. Tell people what they should do if they're asked about it
**Hours 4–24 (external if needed):**
1. External statement (press, social) only if the event is public
2. Consistent with the internal message — same facts, audience-appropriate framing
3. Legal review if any claims or liability involved
**What not to do in a crisis:**
- Silence: letting rumors fill the vacuum
- Spin: people can detect it and it destroys trust
- "No comment": says "we have something to hide"
- Blaming: even if someone else caused the problem, your audience only cares what you're doing about it
**Template for crisis internal communication:**
> "Here's what happened: [factual description]. Here's what we know right now: [known facts]. Here's what we don't know yet: [honest uncertainty]. Here's what we're doing: [specific actions]. Here's what you should do if you're asked about this: [specific guidance]. I'll update you by [specific time] with more information."
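The template above can be wrapped in a fill-in function so no field gets skipped under pressure. A minimal sketch; the field names are illustrative and `draft_crisis_message` is a hypothetical helper, not part of the shipped scripts.

```python
# Fill-in-the-blanks version of the crisis template above.
CRISIS_TEMPLATE = (
    "Here's what happened: {happened}. "
    "Here's what we know right now: {known}. "
    "Here's what we don't know yet: {unknown}. "
    "Here's what we're doing: {actions}. "
    "Here's what you should do if you're asked about this: {guidance}. "
    "I'll update you by {next_update} with more information."
)

REQUIRED_FIELDS = (
    "happened", "known", "unknown", "actions", "guidance", "next_update",
)

def draft_crisis_message(**fields):
    """Render the crisis template, refusing to emit a message with gaps."""
    missing = [k for k in REQUIRED_FIELDS if k not in fields]
    if missing:
        raise ValueError(f"template fields missing: {missing}")
    return CRISIS_TEMPLATE.format(**fields)

print(draft_crisis_message(
    happened="a service outage in the EU region",
    known="the auth service was down for 40 minutes",
    unknown="the root cause",
    actions="rolling back the release and monitoring",
    guidance="route any press questions to comms",
    next_update="6pm today",
))
```

Raising on missing fields is deliberate: in a crisis, a loud error beats silently shipping a message with an unanswered "what we don't know yet".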
---
## Narrative Consistency Checklist
Run before any major external communication:
- [ ] Is this consistent with what we told investors last month?
- [ ] Is this consistent with what we told employees at the last all-hands?
- [ ] Does this contradict anything on our website, careers page, or press releases?
- [ ] If an employee read this external communication, would they recognize the company being described?
- [ ] If an investor read our internal all-hands deck, would they find anything inconsistent?
- [ ] Have we been accurate about our current state, or are we projecting an aspiration?
---
## Key Questions for Narrative
- "Could a new employee explain to a friend why our company exists? What would they say?"
- "What do we tell investors about our strategy? What do we tell employees? Are these the same?"
- "If a journalist asked our team members to describe the company independently, what would they say?"
- "When did we last update our 'why we exist' story? Is it still true?"
- "What's the hardest question we'd get from each audience? Do we have an honest answer?"
## Red Flags
- Different departments describe the company mission differently
- Investor narrative emphasizes growth; employee narrative emphasizes stability (or vice versa)
- All-hands presentations are mostly slides, mostly one-way
- Q&A questions are screened or deflected
- Bad news reaches employees through Slack rumors before leadership communication
- Careers page describes a culture that employees don't recognize
## Integration with Other C-Suite Roles
| When... | Work with... | To... |
|---------|-------------|-------|
| Investor update prep | CFO | Align financial narrative with company narrative |
| Reorg or leadership change | CHRO + CEO | Sequence: employees first, then external |
| Product pivot | CPO | Align customer communication with investor story |
| Culture change | Culture Architect | Ensure internal story is consistent with external employer brand |
| M&A or partnership | CEO + COO | Control information flow, prevent narrative leaks |
| Crisis | All C-suite | Single voice, consistent story, internal first |
## Detailed References
- `references/narrative-frameworks.md` — Storytelling structures, founder narrative, bad news delivery, all-hands templates
- `templates/all-hands-template.md` — All-hands presentation template
