Financial Planning Reference
Startup financial modeling frameworks. Build models that drive decisions, not models that impress investors.
1. Startup Financial Modeling
Bottoms-Up vs Top-Down
Top-down model (don't use for operating):
TAM = $10B
SOM = 1% = $100M
Revenue = $100M in year 5
This is marketing. You cannot manage a company against these numbers.
Bottoms-up model (use this):
Year 1 Revenue Build:
Sales headcount: 3 AEs by Q1, +2 in Q2, +3 in Q4
Ramp curve: Month 1-3 = 25%, Month 4-6 = 75%, Month 7+ = 100%
Quota per ramped AE: $600K ARR
Effective quota (weighted for ramp): $1.2M ARR in Year 1
Win rate: 25%
Average deal: $48K ACV
Pipeline needed: $1.2M / 25% = $4.8M ARR pipeline
Required meetings to create that pipeline: $4.8M pipeline / $48K ACV = 100 opportunities; at ~50% meeting-to-opportunity conversion = ~200 meetings
Now you have something actionable. You know how many SDR calls, how many marketing leads, what conversion rate you need to hold. Every assumption is visible and challengeable.
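The bottoms-up arithmetic above can be sketched as a quick script (illustrative numbers from the example; variable names are mine):

```python
# Bottoms-up pipeline math, working backward from the Year 1 quota target.
target_new_arr = 1_200_000   # effective ramped quota, Year 1
win_rate = 0.25              # opportunity -> closed-won
avg_acv = 48_000             # average deal size
meeting_to_opp = 0.5         # share of first meetings that become pipeline

pipeline_needed = target_new_arr / win_rate              # $4.8M pipeline
opportunities_needed = pipeline_needed / avg_acv         # 100 opportunities
meetings_needed = opportunities_needed / meeting_to_opp  # ~200 meetings

print(f"Pipeline: ${pipeline_needed:,.0f}")
print(f"Opportunities: {opportunities_needed:.0f}")
print(f"Meetings: {meetings_needed:.0f}")
```

Change any one input and the required meeting count moves — which is exactly the point: every assumption is a lever you can challenge.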
Building the Operating Model
Revenue Engine
New ARR Model (SaaS):
Month N New ARR:
= Quota-carrying reps (fully ramped equivalent)
× Attainment rate (typically 70-80% of quota)
× Average deal size
+ PLG / self-serve (if applicable)
Quota-carrying reps (ramped equivalent):
= Sum(each rep × their ramp factor)
Ramp schedule:
Month 1-2: 0% (onboarding)
Month 3: 25%
Month 4-6: 50%
Month 7-9: 75%
Month 10+: 100%
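The ramp schedule and New ARR formula above combine into a small helper (a minimal sketch; the 75% attainment and $50K monthly quota, i.e. $600K annual / 12, are illustrative):

```python
# Ramped-equivalent rep capacity per the ramp schedule above.
def ramp_factor(tenure_months):
    """Ramp factor for a rep with the given tenure in months."""
    if tenure_months <= 2:
        return 0.0   # months 1-2: onboarding
    if tenure_months == 3:
        return 0.25
    if tenure_months <= 6:
        return 0.50
    if tenure_months <= 9:
        return 0.75
    return 1.0       # month 10+: fully ramped

def new_arr_for_month(rep_tenures, monthly_quota, attainment=0.75):
    """New ARR = ramped-equivalent reps x attainment x quota."""
    ramped_equiv = sum(ramp_factor(t) for t in rep_tenures)
    return ramped_equiv * attainment * monthly_quota

# Three reps at different tenures (months): one ramped, one mid-ramp, one onboarding.
print(new_arr_for_month([12, 5, 2], 50_000))  # (1.0 + 0.5 + 0.0) x 0.75 x 50K = 56250.0
```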
ARR Bridge (most important recurring visual):
Beginning ARR
+ New ARR (new logos)
+ Expansion ARR (upsells, seat growth)
- Churned ARR (cancellations)
- Contraction ARR (downgrades)
= Ending ARR
Net ARR Added = New + Expansion - Churn - Contraction
Net Dollar Retention (NDR):
= (Beginning ARR + Expansion - Churn - Contraction) / Beginning ARR × 100
Target: > 110% for growth-stage SaaS
World-class: > 130% (Snowflake, Twilio-tier)
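The ARR bridge and NDR formula above, as a worked example (all figures illustrative):

```python
# ARR bridge and Net Dollar Retention from the definitions above.
beginning_arr = 5_000_000
new_arr = 900_000       # new logos
expansion = 600_000     # upsells, seat growth
churned = 250_000       # cancellations
contraction = 100_000   # downgrades

ending_arr = beginning_arr + new_arr + expansion - churned - contraction
net_arr_added = new_arr + expansion - churned - contraction
ndr = (beginning_arr + expansion - churned - contraction) / beginning_arr * 100

print(f"Ending ARR: ${ending_arr:,}")        # $6,150,000
print(f"Net ARR added: ${net_arr_added:,}")  # $1,150,000
print(f"NDR: {ndr:.0f}%")                    # 105% -- below the 110% target
```

Note that NDR deliberately excludes new-logo ARR: it isolates what the existing base did on its own.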
MRR and ARR Relationship:
ARR = MRR × 12 (simple, always use this)
Never mix monthly and annual contracts in MRR without normalization.
Annual contract booked = ACV / 12 = monthly contribution to MRR (the full ACV adds to ARR)
Multi-year contracts: book each year at annual value (not multi-year total)
Headcount Model
Headcount is usually 60-80% of total costs. Model it carefully.
For each role:
- Start date
- Department
- Annual salary (from salary bands)
- Loaded cost (salary × 1.25-1.45 depending on benefits + recruiting method)
- Productive from (ramp period)
- Impact on revenue (for revenue-generating roles)
Total headcount cost = Σ (each FTE × loaded cost × months active / 12)
Department headcount ratios (Series A benchmarks):
Sales (S&M): 20-30% of headcount
Engineering/Product (R&D): 40-50% of headcount
Customer Success: 15-20% of headcount
G&A: 10-15% of headcount
COGS Model
Gross margin is the most important long-term indicator of business quality.
COGS for SaaS:
1. Hosting / Infrastructure (AWS, GCP, Azure)
- Scale with customer count or usage
- Should be 5-15% of ARR for mature SaaS
- If > 20%: infrastructure optimization needed
2. Customer Success headcount
- Ratio: 1 CSM per $1M-$3M ARR (varies by segment)
- SMB: 1 CSM per $500K ARR (high-touch required)
- Enterprise: 1 CSM per $2-5M ARR (strategic accounts)
3. Third-party licensing / APIs
- Per-customer or usage-based pass-through costs
- Critical to model at scale (margin killer if not tracked)
4. Payment processing
- 2.2-2.9% of revenue for Stripe/Braintree
- Can negotiate to 1.8-2.2% at scale (> $5M ARR)
Gross Margin targets:
SaaS: > 65% acceptable, > 75% good, > 80% exceptional
Marketplace: 50-70%
Hardware + software: 40-60%
Services + software: 30-50%
If gross margin < 65%:
- Infrastructure cost optimization (rightsizing, reserved instances)
- CS headcount review (automation, pooled CSMs)
- Pricing model review (usage-based pricing if cost is usage-driven)
- Third-party cost renegotiation
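The gross margin thresholds above, as a quick check function (figures illustrative; the function name is mine):

```python
# Gross margin verdict against the SaaS targets above.
def gross_margin_verdict(revenue, cogs):
    gm = (revenue - cogs) / revenue * 100
    if gm > 80:
        verdict = "exceptional"
    elif gm > 75:
        verdict = "good"
    elif gm > 65:
        verdict = "acceptable"
    else:
        verdict = "below target -- review infra, CS ratio, pricing"
    return gm, verdict

gm, verdict = gross_margin_verdict(revenue=2_655_000, cogs=624_000)
print(f"{gm:.1f}% -- {verdict}")  # 76.5% -- good
```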
Opex Model
Sales & Marketing:
- AE/SDR/SE salaries + OTE (on-target earnings)
- Marketing programs (demand gen budget)
- Tools and technology (CRM, SEO, ads platforms)
- Events and travel
- Benchmark: 40-60% of revenue at growth stage, targeting < 30% at scale
Research & Development:
- Engineering salaries
- Product management
- Design
- Technical infrastructure for development
- Benchmark: 20-35% of revenue
General & Administrative:
- Finance, legal, HR, admin
- Office costs
- SaaS tools / software licenses
- D&O insurance
- Benchmark: 8-15% (target < 10% at scale)
Financial Model Do's and Don'ts
| Do | Don't |
|---|---|
| Build assumptions tab with all inputs | Hardcode numbers in formulas |
| Model monthly (not quarterly) at early stage | Use annual model for first 3 years |
| Start with headcount plan, build costs from it | Guess at expense line items |
| Stress-test the model internally first | Show model to investors before internal stress-test |
| Version your model | Overwrite old versions |
| Reconcile cash flow to P&L monthly | Trust P&L without cash flow model |
| Include a sensitivity table | Present single-scenario forecast |
2. Three-Statement Model for Startups
Why All Three Matter
The P&L tells you if you're profitable. The cash flow statement tells you if you're alive. The balance sheet tells you if you're solvent.
Startups that only track P&L miss the gap between revenue recognition and cash collection.
P&L Structure
| | Q1 | Q2 | Q3 | Q4 | FY |
|---|---|---|---|---|---|
| Revenue | | | | | |
| Subscription ARR | $400K | $520K | $680K | $840K | $2,440K |
| Professional Svcs | $40K | $50K | $60K | $65K | $215K |
| Total Revenue | $440K | $570K | $740K | $905K | $2,655K |
| COGS | | | | | |
| Infrastructure | $35K | $42K | $52K | $62K | $191K |
| CS Headcount | $75K | $75K | $100K | $100K | $350K |
| 3rd Party Licensing | $15K | $18K | $22K | $28K | $83K |
| Total COGS | $125K | $135K | $174K | $190K | $624K |
| Gross Profit | $315K | $435K | $566K | $715K | $2,031K |
| Gross Margin | 71.6% | 76.3% | 76.5% | 79.0% | 76.5% |
| Operating Expenses | | | | | |
| Sales & Marketing | $380K | $420K | $480K | $520K | $1,800K |
| Research & Dev | $320K | $340K | $380K | $400K | $1,440K |
| General & Admin | $120K | $130K | $140K | $150K | $540K |
| Total Opex | $820K | $890K | $1,000K | $1,070K | $3,780K |
| EBITDA | ($505K) | ($455K) | ($434K) | ($355K) | ($1,749K) |
| EBITDA Margin | (114.8%) | (79.8%) | (58.6%) | (39.2%) | (65.9%) |
Cash Flow Statement
| | Q1 | Q2 | Q3 | Q4 |
|---|---|---|---|---|
| Operating Activities | | | | |
| Net Income | ($510K) | ($460K) | ($440K) | ($360K) |
| Add: D&A | $8K | $8K | $8K | $10K |
| Working capital: AR increase | ($45K) | ($50K) | ($60K) | ($55K) |
| Working capital: AP increase | $20K | $15K | $20K | $15K |
| Working capital: Deferred Rev change | $80K | $60K | $80K | $90K |
| Operating Cash Flow | ($447K) | ($427K) | ($392K) | ($300K) |
| Investing Activities | | | | |
| Capex | ($15K) | ($8K) | ($10K) | ($12K) |
| Free Cash Flow | ($462K) | ($435K) | ($402K) | ($312K) |
| Financing Activities (none) | $0 | $0 | $0 | $0 |
| Net Change in Cash | ($462K) | ($435K) | ($402K) | ($312K) |
| Beginning Cash | $3,500K | $3,038K | $2,603K | $2,201K |
| Ending Cash | $3,038K | $2,603K | $2,201K | $1,889K |
| Runway (months) | 13.1 | 12.1 | 10.9 | 10.1 |
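Runway conventions vary: a minimal sketch using trailing average monthly burn is below (the runway figures in the statement above appear to use a forward burn forecast, which is more conservative when opex is growing):

```python
# Runway = ending cash / average monthly net burn (trailing convention;
# a forward-looking burn forecast is stricter for a scaling cost base).
quarterly_burn = [462, 435, 402, 312]  # $K, from the cash flow statement
ending_cash = 1889                     # $K at year end

avg_monthly_burn = sum(quarterly_burn) / 12
print(f"Runway: {ending_cash / avg_monthly_burn:.1f} months")
```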
Key insight from this model: The deferred revenue offset (customers paying annually upfront) is reducing cash burn by ~$80-90K/quarter versus a pure monthly billing model. This is the CFO's lever — push for annual billing.
Balance Sheet: The Startup Version
At early stage, track these specifically:
Assets:
Cash: Your lifeline. Monitor daily.
Accounts Receivable: What customers owe you. Age it monthly.
Prepaid Expenses: Software licenses, insurance paid upfront.
Liabilities:
Accounts Payable: What you owe vendors. Maximize terms.
Accrued Liabilities: Salaries owed, commissions earned but not paid.
Deferred Revenue: Customer prepayments. Liability until service delivered, but cash is yours.
Debt/Convertible Notes: Face value + interest accrual.
Equity:
Common Stock: Founder shares
Preferred Stock: Investor shares
APIC: Additional paid-in capital
Accumulated Deficit: Your running losses (expected for startups)
3. SaaS Metrics That Matter
The Hierarchy of SaaS Metrics
Tier 1 (existential): ARR, Runway, Net Dollar Retention
Tier 2 (strategic): Gross Margin, Burn Multiple, LTV:CAC
Tier 3 (operational): CAC Payback, Churn Rate, ACV
Tier 4 (diagnostic): Logo Churn vs Revenue Churn, Expansion Rate, NPS
Never report Tier 4 metrics to your board if Tier 1 metrics are off-track.
Core Metric Definitions
ARR (Annual Recurring Revenue):
ARR = Sum of all active annual contract values (normalized to annual)
What it is NOT: bookings, billings, or TCV
When to use MRR: Companies with mostly monthly contracts
When to use ARR: Companies with majority annual contracts
Net Dollar Retention (NDR / NRR):
NDR = (Beginning MRR + Expansion MRR - Churned MRR - Contraction MRR)
/ Beginning MRR × 100
The benchmark everyone quotes: 100% means existing-customer revenue is flat; above 100%, existing customers grow revenue on their own.
World-class (Snowflake, Datadog): 130%+
Why it matters: NDR > 100% means revenue growth even if you sign zero new customers.
At NDR = 120% and $5M ARR: you reach ~$7.2M ARR in 24 months without a single new sale ($5M × 1.2² = $7.2M).
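The compounding claim checks out with a two-line sketch (figures from the example above; the function name is illustrative):

```python
# NDR compounding: existing-customer revenue growth with zero new sales.

def arr_after(start_arr, ndr, years):
    """ARR after `years`, existing customers only (NDR treated as annual)."""
    return start_arr * ndr ** years

print(arr_after(5_000_000, 1.20, 2))  # ~7.2M after 24 months
```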
Gross Revenue Retention (GRR):
GRR = (Beginning MRR - Churned MRR - Contraction MRR) / Beginning MRR × 100
GRR measures the floor of your retention (ignoring expansion).
GRR is always ≤ NDR.
Target: > 85% for SMB SaaS, > 90% for mid-market, > 95% for enterprise.
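Both retention metrics fall out of the same MRR movement data. A minimal sketch of the two formulas above; all input figures are hypothetical:

```python
# NDR and GRR from one period's MRR movements (illustrative inputs).

def ndr(beginning, expansion, churned, contraction):
    return (beginning + expansion - churned - contraction) / beginning * 100

def grr(beginning, churned, contraction):
    return (beginning - churned - contraction) / beginning * 100

beginning_mrr = 400_000
n = ndr(beginning_mrr, expansion=30_000, churned=12_000, contraction=6_000)
g = grr(beginning_mrr, churned=12_000, contraction=6_000)
print(round(n, 1), round(g, 1))  # 103.0 95.5
# GRR <= NDR always holds, since GRR simply drops the expansion term.
```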
Logo Churn vs Revenue Churn:
Logo churn: % of customers who cancel (ignores size)
Revenue churn: % of ARR that cancels (accounts for size)
Why the distinction matters:
You could have 10% logo churn but 3% revenue churn (churning small customers)
Or 5% logo churn but 12% revenue churn (churning large customers) — much worse
Report both. If they diverge significantly, investigate immediately.
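The divergence is easy to see with toy data. A sketch with hypothetical account names and MRR figures:

```python
# Logo churn vs revenue churn over one period (illustrative data).
# Churning small accounts makes revenue churn far lower than logo churn.

customers = {  # hypothetical account -> MRR at start of period
    "acme": 10_000, "globex": 8_000, "initech": 500,
    "hooli": 400, "umbrella": 300,
}
churned = {"initech", "hooli"}  # two small logos cancel

logo_churn = len(churned) / len(customers) * 100
revenue_churn = sum(customers[c] for c in churned) / sum(customers.values()) * 100
print(round(logo_churn, 1), round(revenue_churn, 1))  # 40.0 4.7
```

Flip the example so "acme" churns instead and revenue churn jumps past 50% while logo churn falls, which is the far more dangerous divergence.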
ACV (Annual Contract Value):
ACV = Total contract value / contract term in years
Not to be confused with ARR (which only counts recurring, not one-time fees)
Rising ACV: You're moving upmarket (good for efficiency, check if ICP is changing)
Falling ACV: You're moving downmarket (check burn multiple — may not be economic)
Rule of 40:
Rule of 40 = Revenue Growth Rate % + EBITDA Margin %
Target: > 40%
Example: 60% growth + (-15%) EBITDA margin = 45. Passing.
Example: 20% growth + 5% EBITDA margin = 25. Failing at growth stage.
At early stage (< $5M ARR): Rule of 40 doesn't apply. Growth is the only metric.
At growth stage ($5-20M ARR): Starting to matter.
At scale ($20M+ ARR): Board and investors will hold you to this.
4. FP&A for Startups: What to Measure When
Metrics by Stage
Pre-seed / Seed (< $1M ARR):
Focus on: Cash, pipeline, customer conversations
Measure: Monthly cash burn, weeks of runway, NPS / customer satisfaction
Don't obsess over: EBITDA margin, gross margin (too early)
Frequency: Weekly cash check, monthly everything else
Series A ($1-5M ARR):
Focus on: Repeatable sales, unit economics
Measure: MRR growth, LTV:CAC, CAC payback by channel, gross margin
Don't obsess over: Profitability, G&A efficiency
Build now: Monthly financial close (< 5 business days), basic FP&A model
Frequency: Monthly board pack, weekly leadership metrics
Series B ($5-20M ARR):
Focus on: Scalable go-to-market, operational efficiency
Measure: NDR, burn multiple, revenue per FTE, OKR attainment
Start building: Budget vs actuals, department-level P&L
Build now: Finance team (first financial controller), ERP or NetSuite
Frequency: Monthly board pack + quarterly deep dive
Series C+ ($20M+ ARR):
Focus on: Path to profitability, market leadership
Measure: Rule of 40, free cash flow, CAC efficiency by segment
Must have: FP&A team, full three-statement model, 5-year plan
Frequency: Monthly financial close (< 3 business days), quarterly earnings prep
Reporting Cadence
Weekly (CFO + leadership):
- Cash balance (CFO checks daily, reports weekly)
- Pipeline / sales metrics (if in a sales-led motion)
- Any metric that changed dramatically vs. prior week
Monthly (board + leadership):
- Full financial dashboard (ARR, gross margin, burn, runway)
- Budget vs actual with explanations for > 10% variances
- Unit economics update
- Headcount change summary
Quarterly (board + investors):
- Full three-statement model vs budget
- Cohort analysis update
- Scenario planning review and trigger assessment
- Next quarter outlook
5. Budget vs Actual Analysis Framework
The Purpose of BvA
Budget vs actual is not about being right. It's about understanding why you were wrong, so you can make better decisions.
The CFO who reports "we missed budget by 15%" without explanation is failing. The CFO who says "we missed budget by 15% because enterprise deals took 30 more days to close than modeled — here's what we're doing about it" is doing their job.
BvA Template
Category Budget Actual $ Var % Var Explanation
-------------------------------------------------------------------
ARR $2,400K $2,280K ($120K) (5%) 2 enterprise deals slipped to Q1
New ARR $400K $350K ($50K) (13%) Driven by the 2 slipped deals above
Expansion ARR $120K $140K $20K 17% PLG motion outperforming
Churn ($60K) ($80K) ($20K) (33%) 2 unexpected SMB churns (now fixed)
Gross Margin 75.0% 73.2% (1.8%) n/a Infrastructure over-provisioned
S&M Spend $820K $840K ($20K) (2%) Within tolerance
R&D Spend $680K $710K ($30K) (4%) Backfill hire started month early
G&A Spend $140K $148K ($8K) (6%) Legal fees for new customer contract
Cash Burn (net) $580K $648K ($68K) (12%) Driven by ARR shortfall + costs
Runway (mo) 14.5 13.0 (1.5) n/a Tracking; fundraise target unchanged
Variance Thresholds
< ±5%: Note in appendix, no explanation needed in main pack
5-10%: One-line explanation required
> 10%: Full paragraph: what happened, why, what changes
> 20%: Board conversation required (model assumption was wrong, or unexpected event)
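These thresholds can be encoded as a simple triage helper for the BvA pack. A sketch; the function name and return strings are illustrative:

```python
# Flag each BvA line item per the variance thresholds above.

def variance_flag(budget, actual):
    """Return the required BvA treatment for a line item."""
    pct = abs(actual - budget) / abs(budget) * 100
    if pct < 5:
        return "appendix note"
    if pct <= 10:
        return "one-line explanation"
    if pct <= 20:
        return "full paragraph"
    return "board conversation"

# The 5% ARR miss from the template above lands in the one-line tier.
print(variance_flag(2_400_000, 2_280_000))  # one-line explanation
```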
Forecasting vs Budgeting
Budget: Set at start of year. Fixed expectation. Updated quarterly.
Forecast: Rolling 3-month outlook. Updated monthly. Should converge with budget over time.
Common mistake: Treating forecast as wishful thinking ("what we hope happens")
Correct approach: Forecast is your best current estimate given all known information.
If forecast diverges from budget by > 15%, the budget is wrong.
Reforecast and communicate to board.
Rolling forecast (recommended for startups):
Always have a 12-month forward model.
Update it monthly with actuals replacing the first month.
The forecast should always reflect your current operational reality, not your hope.
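The monthly roll can be sketched as follows. The flat 5% month-over-month growth assumption and all figures are illustrative, not a recommendation:

```python
# Rolling 12-month forecast: each close records the actual for the month
# just ended, drops it from the window, and appends a new month 12 so the
# model always looks 12 months forward.

def roll_forward(forecast, actual, growth=0.05):
    """Return (recorded_actual, new 12-month window).

    growth: illustrative flat month-over-month rate used to project
    the newly appended month from the prior last month.
    """
    new_tail = round(forecast[-1] * (1 + growth))
    return actual, forecast[1:] + [new_tail]

forecast = [100, 105, 110, 116, 122, 128, 134, 141, 148, 155, 163, 171]
actual, forecast = roll_forward(forecast, actual=97)
print(len(forecast), forecast[0], forecast[-1])  # 12 105 180
```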
Key Formulas Reference
# ARR and growth
ARR_growth_yoy = (ending_ARR - beginning_ARR) / beginning_ARR
# Net Dollar Retention
NDR = (beginning_MRR + expansion_MRR - churn_MRR - contraction_MRR) / beginning_MRR
# Burn Multiple
burn_multiple = net_cash_burn / net_new_ARR
# Rule of 40
rule_of_40 = revenue_growth_pct + ebitda_margin_pct
# LTV (SaaS)
LTV = (ARPA * gross_margin_pct) / monthly_churn_rate
# CAC Payback (months)
cac_payback = CAC / (ARPA * gross_margin_pct)
# Magic Number (sales efficiency)
magic_number = (net_new_ARR * 4) / prior_quarter_S_and_M_spend
# Gross margin
gross_margin = (revenue - COGS) / revenue
# Quick Ratio (growth efficiency)
quick_ratio = (new_MRR + expansion_MRR) / (churned_MRR + contraction_MRR)
# Target: > 4 for high-growth SaaS
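A quick sanity check exercising several of the formulas above; all input values are hypothetical:

```python
# Illustrative walkthrough of the formulas reference (hypothetical inputs).

ARPA = 1_000              # $/month per account
gross_margin_pct = 0.75
monthly_churn_rate = 0.02
CAC = 9_000

LTV = ARPA * gross_margin_pct / monthly_churn_rate
cac_payback = CAC / (ARPA * gross_margin_pct)   # months
burn_multiple = 600_000 / 400_000               # net burn / net new ARR
quick_ratio = (80_000 + 30_000) / (20_000 + 5_000)

print(round(LTV))    # 37500
print(cac_payback)   # 12.0
print(burn_multiple) # 1.5
print(quick_ratio)   # 4.4
```

At these inputs LTV:CAC is about 4.2 and the quick ratio clears the > 4 target, while a 12-month CAC payback sits at the edge of what most investors consider healthy.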