Files
claude-skills-reference/c-level-advisor/scenario-war-room/scripts/scenario_modeler.py
Alireza Rezvani e145ac4a1d Dev (#265)
* docs: restructure README.md — 2,539 → 209 lines (#247)

- Cut from 2,539 lines / 73 sections to 209 lines / 18 sections
- Consolidated 4 install methods into one unified section
- Moved all skill details to domain-level READMEs (linked from table)
- Front-loaded value prop and keywords for SEO
- Added POWERFUL tier highlight section
- Added skill-security-auditor showcase section
- Removed stale Q4 2025 roadmap, outdated ROI claims, duplicate content
- Fixed all internal links
- Clean heading hierarchy (H2 for main sections only)

Closes #233

Co-authored-by: Leo <leo@openclaw.ai>

* fix: enhance 5 skills with scripts, references, and Anthropic best practices (#248)

* fix(skill): enhance git-worktree-manager with scripts, references, and Anthropic best practices

* fix(skill): enhance mcp-server-builder with scripts, references, and Anthropic best practices

* fix(skill): enhance changelog-generator with scripts, references, and Anthropic best practices

* fix(skill): enhance ci-cd-pipeline-builder with scripts, references, and Anthropic best practices

* fix(skill): enhance prompt-engineer-toolkit with scripts, references, and Anthropic best practices

* docs: update README, CHANGELOG, and plugin metadata

* fix: correct marketing plugin count, expand thin references

---------

Co-authored-by: Leo <leo@openclaw.ai>

* ci: Add VirusTotal security scan for skills (#252)

* Dev (#231)

* Improve senior-fullstack skill description and workflow validation

- Expand frontmatter description with concrete actions and trigger clauses
- Add validation steps to scaffolding workflow (verify scaffold succeeded)
- Add re-run verification step to audit workflow (confirm P0 fixes)

* chore: sync codex skills symlinks [automated]

* fix(skill): normalize senior-fullstack frontmatter to inline format

Normalize YAML description from block scalar (>) to inline single-line
format matching all other 50+ skills. Align frontmatter trigger phrases
with the body's Trigger Phrases section to eliminate duplication.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ci): add GITHUB_TOKEN to checkout + restore corrupted skill descriptions

- Add token: ${{ secrets.GITHUB_TOKEN }} to actions/checkout@v4 in
  sync-codex-skills.yml so git-auto-commit-action can push back to branch
  (fixes: fatal: could not read Username, exit 128)
- Restore correct description for incident-commander (was: 'Skill from engineering-team')
- Restore correct description for senior-fullstack (was: '>')

* fix(ci): pass PROJECTS_TOKEN to fix automated commits + remove duplicate checkout

Fixes PROJECTS_TOKEN passthrough for git-auto-commit-action and removes duplicate checkout step in pr-issue-auto-close workflow.

* fix(ci): remove stray merge conflict marker in sync-codex-skills.yml (#221)

Co-authored-by: Leo <leo@leo-agent-server>

* fix(ci): fix workflow errors + add OpenClaw support (#222)

* feat: add 20 new practical skills for professional Claude Code users

New skills across 5 categories:

Engineering (12):
- git-worktree-manager: Parallel dev with port isolation & env sync
- ci-cd-pipeline-builder: Generate GitHub Actions/GitLab CI from stack analysis
- mcp-server-builder: Build MCP servers from OpenAPI specs
- changelog-generator: Conventional commits to structured changelogs
- pr-review-expert: Blast radius analysis & security scan for PRs
- api-test-suite-builder: Auto-generate test suites from API routes
- env-secrets-manager: .env management, leak detection, rotation workflows
- database-schema-designer: Requirements to migrations & types
- codebase-onboarding: Auto-generate onboarding docs from codebase
- performance-profiler: Node/Python/Go profiling & optimization
- runbook-generator: Operational runbooks from codebase analysis
- monorepo-navigator: Turborepo/Nx/pnpm workspace management

Engineering Team (2):
- stripe-integration-expert: Subscriptions, webhooks, billing patterns
- email-template-builder: React Email/MJML transactional email systems

Product Team (3):
- saas-scaffolder: Full SaaS project generation from product brief
- landing-page-generator: High-converting landing pages with copy frameworks
- competitive-teardown: Structured competitive product analysis

Business Growth (1):
- contract-and-proposal-writer: Contracts, SOWs, NDAs per jurisdiction

Marketing (1):
- prompt-engineer-toolkit: Systematic prompt development & A/B testing

Designed for daily professional use and commercial distribution.

* chore: sync codex skills symlinks [automated]

* docs: update README with 20 new skills, counts 65→86, new skills section

* docs: add commercial distribution plan (Stan Store + Gumroad)

* docs: rewrite CHANGELOG.md with v2.0.0 release (65 skills, 9 domains) (#226)

* docs: rewrite CHANGELOG.md with v2.0.0 release (65 skills, 9 domains)

- Consolidate 191 commits since v1.0.2 into proper v2.0.0 entry
- Document 12 POWERFUL-tier skills, 37 refactored skills
- Add new domains: business-growth, finance
- Document Codex support and marketplace integration
- Update version history summary table
- Clean up [Unreleased] to only planned work

* docs: add 24 POWERFUL-tier skills to plugin, fix counts to 85 across all docs

- Add engineering-advanced-skills plugin (24 POWERFUL-tier skills) to marketplace.json
- Add 13 missing skills to CHANGELOG v2.0.0 (agent-workflow-designer, api-test-suite-builder,
  changelog-generator, ci-cd-pipeline-builder, codebase-onboarding, database-schema-designer,
  env-secrets-manager, git-worktree-manager, mcp-server-builder, monorepo-navigator,
  performance-profiler, pr-review-expert, runbook-generator)
- Fix skill count: 86→85 (excl sample-skill) across README, CHANGELOG, marketplace.json
- Fix stale 53→85 references in README
- Add engineering-advanced-skills install command to README
- Update marketplace.json version to 2.0.0

---------

Co-authored-by: Leo <leo@openclaw.ai>

* feat: add skill-security-auditor POWERFUL-tier skill (#230)

Security audit and vulnerability scanner for AI agent skills before installation.

Scans for:
- Code execution risks (eval, exec, os.system, subprocess shell injection)
- Data exfiltration (outbound HTTP, credential harvesting, env var extraction)
- Prompt injection in SKILL.md (system override, role hijack, safety bypass)
- Dependency supply chain (typosquatting, unpinned versions, runtime installs)
- File system abuse (boundary violations, binaries, symlinks, hidden files)
- Privilege escalation (sudo, SUID, cron manipulation, shell config writes)
- Obfuscation (base64, hex encoding, chr chains, codecs)

Produces clear PASS/WARN/FAIL verdict with per-finding remediation guidance.
Supports local dirs, git repo URLs, JSON output, strict mode, and CI/CD integration.

Includes:
- scripts/skill_security_auditor.py (1049 lines, zero dependencies)
- references/threat-model.md (complete attack vector documentation)
- SKILL.md with usage guide and report format

Tested against: rag-architect (PASS), agent-designer (PASS), senior-secops (FAIL - correctly flagged eval/exec patterns).

Co-authored-by: Leo <leo@openclaw.ai>
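
The first scan category above (code execution risks) can be illustrated with a minimal stdlib `ast` sketch — a hypothetical, simplified stand-in, not the logic shipped in `skill_security_auditor.py`:

```python
# Hypothetical sketch: flag direct eval/exec calls in a skill's Python source.
# Simplified illustration only -- the real auditor covers far more patterns.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Return (line_number, call_name) for each direct eval/exec call."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

print(find_risky_calls("x = eval(user_input)"))  # → [(1, 'eval')]
```

A real scanner would also need to catch `os.system`, `subprocess` with `shell=True`, and aliased or attribute-based calls, which a name-only check like this misses.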

* docs: add skill-security-auditor to marketplace, README, and CHANGELOG

- Add standalone plugin entry for skill-security-auditor in marketplace.json
- Update engineering-advanced-skills plugin description to include it
- Update skill counts: 85→86 across README, CHANGELOG, marketplace
- Add install command to README Quick Install section
- Add to CHANGELOG [Unreleased] section

---------

Co-authored-by: Baptiste Fernandez <fernandez.baptiste1@gmail.com>
Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Leo <leo@leo-agent-server>
Co-authored-by: Leo <leo@openclaw.ai>

* Dev (#249)


* Dev (#250)


* ci: add VirusTotal security scan for skills

- Scans changed skill directories on PRs to dev/main
- Scans all skills on release publish
- Posts scan results as PR comment with analysis links
- Rate-limited to 4 req/min (free tier compatible)
- Appends VirusTotal links to release body on publish
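
The 4 req/min budget can be enforced by spacing successive API calls; a hedged sketch of the general technique (names and structure are illustrative — the actual workflow implements this in YAML/shell, not Python):

```python
import time

MIN_INTERVAL = 60 / 4  # seconds between call starts: 4 requests/minute (free tier)

def paced(requests, now=time.monotonic, sleep=time.sleep):
    """Run zero-argument callables, keeping >= MIN_INTERVAL between starts."""
    last = None
    for request in requests:
        if last is not None:
            wait = MIN_INTERVAL - (now() - last)
            if wait > 0:
                sleep(wait)  # stay under the rate limit
        last = now()
        yield request()
```

`now` and `sleep` are injectable here so the pacing logic can be tested without real 15-second delays.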

* fix: resolve YAML lint errors in virustotal workflow

- Add document start marker (---)
- Quote 'on' key for truthy lint rule
- Remove trailing spaces
- Break long lines under 160 char limit

---------

Co-authored-by: Baptiste Fernandez <fernandez.baptiste1@gmail.com>
Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Leo <leo@leo-agent-server>
Co-authored-by: Leo <leo@openclaw.ai>

* feat: add playwright-pro plugin — production-grade Playwright testing toolkit (#254)

Complete Claude Code plugin with:
- 9 skills (/pw:init, generate, review, fix, migrate, coverage, testrail, browserstack, report)
- 3 specialized agents (test-architect, test-debugger, migration-planner)
- 55 test case templates across 11 categories (auth, CRUD, checkout, search, forms, dashboard, settings, onboarding, notifications, API, accessibility)
- TestRail MCP server (TypeScript) — 8 tools for bidirectional sync
- BrowserStack MCP server (TypeScript) — 7 tools for cross-browser testing
- Smart hooks (auto-validate tests, auto-detect Playwright projects)
- 6 curated reference docs (golden rules, locators, assertions, fixtures, pitfalls, flaky tests)
- Leverages Claude Code built-ins (/batch, /debug, Explore subagent)
- Zero-config for core features; TestRail/BrowserStack via env vars
- Both TypeScript and JavaScript support throughout

Co-authored-by: Leo <leo@openclaw.ai>

* feat: add playwright-pro to marketplace registry (#256)

- New plugin: playwright-pro (9 skills, 3 agents, 55 templates, 2 MCP servers)
- Install: /plugin install playwright-pro@claude-code-skills
- Total marketplace plugins: 17

Co-authored-by: Leo <leo@openclaw.ai>

* fix: integrate playwright-pro across all platforms (#258)

- Add root SKILL.md for OpenClaw and ClawHub compatibility
- Add to README: Skills Overview table, install section, badge count
- Regenerate .codex/skills-index.json with playwright-pro entry
- Add .codex/skills/playwright-pro symlink for Codex CLI
- Fix YAML frontmatter (single-line description for index parsing)

Platforms verified:
- Claude Code: marketplace.json ✓ (merged in PR #256)
- Codex CLI: symlink + skills-index.json ✓
- OpenClaw: SKILL.md auto-discovered by install script ✓
- ClawHub: published as playwright-pro@1.1.0 ✓

Co-authored-by: Leo <leo@openclaw.ai>

* docs: update CLAUDE.md — reflect 87 skills across 9 domains

Sync CLAUDE.md with actual repository state: add Engineering POWERFUL tier
(25 skills), update all skill counts, add plugin registry references, and
replace stale sprint section with v2.0.0 version info.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: mention Claude Code in project description

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add self-improving-agent plugin — auto-memory curation for Claude Code (#260)

New plugin: engineering-team/self-improving-agent/
- 5 skills: /si:review, /si:promote, /si:extract, /si:status, /si:remember
- 2 agents: memory-analyst, skill-extractor
- 1 hook: PostToolUse error capture (zero overhead on success)
- 3 reference docs: memory architecture, promotion rules, rules directory patterns
- 2 templates: rule template, skill template
- 20 files, 1,829 lines

Integrates natively with Claude Code's auto-memory (v2.1.32+).
Reads from ~/.claude/projects/<path>/memory/ — no duplicate storage.
Promotes proven patterns from MEMORY.md to CLAUDE.md or .claude/rules/.

Also:
- Added to marketplace.json (18 plugins total)
- Added to README (Skills Overview + install section)
- Updated badge count to 88+
- Regenerated .codex/skills-index.json + symlink

Co-authored-by: Leo <leo@openclaw.ai>

* feat: C-Suite expansion — 8 new executive advisory roles (2→10) (#264)

* feat: C-Suite expansion — 8 new executive advisory roles

Add COO, CPO, CMO, CFO, CRO, CISO, CHRO advisors and Executive Mentor.
Expands C-level advisory from 2 to 10 roles with 74 total files.

Each role includes:
- SKILL.md (lean, <5KB, ~1200 tokens for context efficiency)
- Reference docs (loaded on demand, not at startup)
- Python analysis scripts (stdlib only, runnable CLI)

Executive Mentor features /em: slash commands (challenge, board-prep,
hard-call, stress-test, postmortem) with devil's advocate agent.

21 Python tools, 24 reference frameworks, 28,379 total lines.
All SKILL.md files combined: ~17K tokens (8.5% of 200K context window).

Badge: 88 → 116 skills

* feat: C-Suite orchestration layer + 18 complementary skills

ORCHESTRATION (new):
- cs-onboard: Founder interview → company-context.md
- chief-of-staff: Routing, synthesis, inter-agent orchestration
- board-meeting: 6-phase multi-agent deliberation protocol
- decision-logger: Two-layer memory (raw transcripts + approved decisions)
- agent-protocol: Inter-agent invocation with loop prevention
- context-engine: Company context loading + anonymization

CROSS-CUTTING CAPABILITIES (new):
- board-deck-builder: Board/investor update assembly
- scenario-war-room: Cascading multi-variable what-if modeling
- competitive-intel: Systematic competitor tracking + battlecards
- org-health-diagnostic: Cross-functional health scoring (8 dimensions)
- ma-playbook: M&A strategy (acquiring + being acquired)
- intl-expansion: International market entry frameworks

CULTURE & COLLABORATION (new):
- culture-architect: Values → behaviors, culture code, health assessment
- company-os: EOS/Scaling Up operating system selection + implementation
- founder-coach: Founder development, delegation, blind spots
- strategic-alignment: Strategy cascade, silo detection, alignment scoring
- change-management: ADKAR-based change rollout framework
- internal-narrative: One story across employees/investors/customers

UPGRADES TO EXISTING ROLES:
- All 10 roles get reasoning technique directives
- All 10 roles get company-context.md integration
- All 10 roles get board meeting isolation rules
- CEO gets stage-adaptive temporal horizons (seed→C)

Key design decisions:
- Two-layer memory prevents hallucinated consensus from rejected ideas
- Phase 2 isolation: agents think independently before cross-examination
- Executive Mentor (The Critic) sees all perspectives, others don't
- 25 Python tools total (stdlib only, no dependencies)

52 new files, 10 modified, 10,862 new lines.
Total C-suite ecosystem: 134 files, 39,131 lines.

* fix: connect all dots — Chief of Staff routes to all 28 skills

- Added complementary skills registry to routing-matrix.md
- Chief of Staff SKILL.md now lists all 28 skills in ecosystem
- Added integration tables to scenario-war-room and competitive-intel
- Badge: 116 → 134 skills
- README: C-Level Advisory count 10 → 28

Quality audit passed:
✓ All 10 roles: company-context, reasoning, isolation, invocation
✓ All 6 phases in board meeting
✓ Two-layer memory with DO_NOT_RESURFACE
✓ Loop prevention (no self-invoke, max depth 2, no circular)
✓ All /em: commands present
✓ All complementary skills cross-reference roles
✓ Chief of Staff routes to every skill in ecosystem

* refactor: CEO + CTO advisors upgraded to C-suite parity

Both roles now match the structural standard of all new roles:
- CEO: 11.7KB → 6.8KB SKILL.md (heavy content stays in references)
- CTO: 10KB → 7.2KB SKILL.md (heavy content stays in references)

Added to both:
- Integration table (who they work with and when)
- Key diagnostic questions
- Structured metrics dashboard table
- Consistent section ordering (Keywords → Quick Start → Responsibilities → Questions → Metrics → Red Flags → Integration → Reasoning → Context)

CEO additions:
- Stage-adaptive temporal horizons (seed=3m/6m/12m → B+=1y/3y/5y)
- Cross-references to culture-architect and board-deck-builder

CTO additions:
- Key Questions section (7 diagnostic questions)
- Structured metrics table (DORA + debt + team + architecture + cost)
- Cross-references to all peer roles

All 10 roles now pass structural parity: ✓ Keywords ✓ QuickStart ✓ Questions ✓ Metrics ✓ RedFlags ✓ Integration

* feat: add proactive triggers + output artifacts to all 10 roles

Every C-suite role now specifies:
- Proactive Triggers: 'surface these without being asked' — context-driven
  early warnings that make advisors proactive, not reactive
- Output Artifacts: concrete deliverables per request type (what you ask →
  what you get)

CEO: runway alerts, board prep triggers, strategy review nudges
CTO: deploy frequency monitoring, tech debt thresholds, bus factor flags
COO: blocker detection, scaling threshold warnings, cadence gaps
CPO: retention curve monitoring, portfolio dog detection, research gaps
CMO: CAC trend monitoring, positioning gaps, budget staleness
CFO: runway forecasting, burn multiple alerts, scenario planning gaps
CRO: NRR monitoring, pipeline coverage, pricing review triggers
CISO: audit overdue alerts, compliance gaps, vendor risk
CHRO: retention risk, comp band gaps, org scaling thresholds
Executive Mentor: board prep triggers, groupthink detection, hard call surfacing

This transforms the C-suite from reactive advisors into proactive partners.

* feat: User Communication Standard — structured output for all roles

Defines 3 output formats in agent-protocol/SKILL.md:

1. Standard Output: Bottom Line → What → Why → How to Act → Risks → Your Decision
2. Proactive Alert: What I Noticed → Why It Matters → Action → Urgency (🔴🟡)
3. Board Meeting: Decision Required → Perspectives → Agree/Disagree → Critic → Action Items

10 non-negotiable rules:
- Bottom line first, always
- Results and decisions only (no process narration)
- What + Why + How for every finding
- Actions have owners and deadlines ('we should consider' is banned)
- Decisions framed as options with trade-offs
- Founder is the highest authority — roles recommend, founder decides
- Risks are concrete (if X → Y, costs $Z)
- Max 5 bullets per section
- No jargon without explanation
- Silence over fabricated updates

All 10 roles reference this standard.
Chief of Staff enforces it as a quality gate.
Board meeting Phase 4 uses the Board Meeting Output format.

* feat: Internal Quality Loop — verification before delivery

No role presents to the founder without passing verification:

Step 1: Self-Verification (every role, every time)
  - Source attribution: where did each data point come from?
  - Assumption audit: [VERIFIED] vs [ASSUMED] tags on every finding
  - Confidence scoring: 🟢 high / 🟡 medium / 🔴 low per finding
  - Contradiction check against company-context + decision log
  - 'So what?' test: every finding needs a business consequence

Step 2: Peer Verification (cross-functional)
  - Financial claims → CFO validates math
  - Revenue projections → CRO validates pipeline backing
  - Technical feasibility → CTO validates
  - People/hiring impact → CHRO validates
  - Skip for single-domain, low-stakes questions

Step 3: Critic Pre-Screen (high-stakes only)
  - Irreversible decisions, >20% runway impact, strategy changes
  - Executive Mentor finds weakest point before founder sees it
  - Suspicious consensus triggers mandatory pre-screen

Step 4: Course Correction (after founder feedback)
  - Approve → log + assign actions
  - Modify → re-verify changed parts
  - Reject → DO_NOT_RESURFACE + learn why
  - 30/60/90 day post-decision review

Board meeting contributions now require self-verified format with
confidence tags and source attribution on every finding.

* fix: resolve PR review issues 1, 4, and minor observation

Issue 1: c-level-advisor/CLAUDE.md — completely rewritten
  - Was: 2 skills (CEO, CTO only), dated Nov 2025
  - Now: full 28-skill ecosystem map with architecture diagram,
    all roles/orchestration/cross-cutting/culture skills listed,
    design decisions, integration with other domains

Issue 4: Root CLAUDE.md — updated all stale counts
  - 87 → 134 skills across all 3 references
  - C-Level: 2 → 33 (10 roles + 5 mentor commands + 18 complementary)
  - Tool count: 160+ → 185+
  - Reference count: 200+ → 250+

Minor observation: Documented plugin.json convention
  - Explained in c-level-advisor/CLAUDE.md that only executive-mentor
    has plugin.json because only it has slash commands (/em: namespace)
  - Other skills are invoked by name through Chief of Staff or directly

Also fixed: README.md 88+ → 134 in two places (first line + skills section)

* fix: update all plugin/index registrations for 28-skill C-suite

1. c-level-advisor/.claude-plugin/plugin.json — v2.0.0
   - Was: 2 skills, generic description
   - Now: all 28 skills listed with descriptions, all 25 scripts,
     namespace 'cs', full ecosystem description

2. .codex/skills-index.json — added 18 complementary skills
   - Was: 10 roles only
   - Now: 28 total c-level entries (10 roles + 6 orchestration +
     6 cross-cutting + 6 culture)
   - Each with full description for skill discovery

3. .claude-plugin/marketplace.json — updated c-level-skills entry
   - Was: generic 2-skill description
   - Now: v2.0.0, full 28-skill ecosystem description,
     skills_count: 28, scripts_count: 25

* feat: add root SKILL.md for c-level-advisor ClawHub package

---------

Co-authored-by: Leo <leo@openclaw.ai>

* chore: sync codex skills symlinks [automated]

---------

Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Baptiste Fernandez <fernandez.baptiste1@gmail.com>
Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Leo <leo@leo-agent-server>
2026-03-06 01:35:45 +01:00

487 lines
17 KiB
Python

#!/usr/bin/env python3
"""
Scenario War Room — Multi-Variable Cascade Modeler
Models cascading effects of compound adversity across business domains.
Stdlib only. Run with: python scenario_modeler.py
"""
import json
import sys
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple
from enum import Enum
class Severity(Enum):
BASE = "base" # One variable hits
STRESS = "stress" # Two variables hit
SEVERE = "severe" # All variables hit
class Domain(Enum):
FINANCIAL = "Financial (CFO)"
REVENUE = "Revenue (CRO)"
PRODUCT = "Product (CPO)"
ENGINEERING = "Engineering (CTO)"
PEOPLE = "People (CHRO)"
OPERATIONS = "Operations (COO)"
SECURITY = "Security (CISO)"
MARKET = "Market (CMO)"
@dataclass
class Variable:
name: str
description: str
probability: float # 0.0-1.0
arrt_impact_pct: float # % of ARR at risk (negative = loss)
runway_impact_months: float # months lost from runway (negative = reduction)
affected_domains: List[Domain]
timeline_days: int # when it hits
@dataclass
class CascadeEffect:
trigger_domain: Domain
caused_domain: Domain
mechanism: str # how A causes B
severity_multiplier: float # compounds the base impact
@dataclass
class Hedge:
action: str
cost_usd: int
impact_description: str
owner: str
deadline_days: int
reduces_probability: float # how much it reduces scenario probability
@dataclass
class Scenario:
name: str
variables: List[Variable]
cascades: List[CascadeEffect]
hedges: List[Hedge]
# Company baseline
current_arr_usd: int = 2_000_000
current_runway_months: int = 14
monthly_burn_usd: int = 140_000
def calculate_impact(
scenario: Scenario,
severity: Severity
) -> Dict:
"""Calculate combined impact for a given severity level."""
variables = scenario.variables
# Select variables by severity
if severity == Severity.BASE:
active_vars = variables[:1]
elif severity == Severity.STRESS:
active_vars = variables[:2]
else:
active_vars = variables
# Direct impacts
total_arr_loss_pct = sum(abs(v.arrt_impact_pct) for v in active_vars)
total_runway_reduction = sum(abs(v.runway_impact_months) for v in active_vars)
arr_at_risk = scenario.current_arr_usd * (total_arr_loss_pct / 100)
new_arr = scenario.current_arr_usd - arr_at_risk
new_runway = scenario.current_runway_months - total_runway_reduction
# Cascade multiplier (stress/severe amplify via domain cascades)
cascade_multiplier = 1.0
if len(active_vars) > 1:
active_domains = set(d for v in active_vars for d in v.affected_domains)
for cascade in scenario.cascades:
if (cascade.trigger_domain in active_domains and
cascade.caused_domain in active_domains):
cascade_multiplier *= cascade.severity_multiplier
# Apply cascade
effective_arr_loss = arr_at_risk * cascade_multiplier
effective_arr = scenario.current_arr_usd - effective_arr_loss
effective_runway = max(0, new_runway - (cascade_multiplier - 1.0) * 2)
# New burn multiple
new_monthly_burn = scenario.monthly_burn_usd * cascade_multiplier
burn_multiple = (new_monthly_burn * 12) / max(effective_arr, 1)
# Affected domains
affected = set(d for v in active_vars for d in v.affected_domains)
return {
"severity": severity.value,
"active_variables": [v.name for v in active_vars],
"arr_at_risk_usd": int(effective_arr_loss),
"arr_at_risk_pct": round(effective_arr_loss / scenario.current_arr_usd * 100, 1),
"projected_arr_usd": int(effective_arr),
"runway_months": round(effective_runway, 1),
"runway_change": round(effective_runway - scenario.current_runway_months, 1),
"cascade_multiplier": round(cascade_multiplier, 2),
"new_burn_multiple": round(burn_multiple, 1),
"affected_domains": [d.value for d in affected],
"existential_risk": effective_runway < 6.0,
"board_escalation_required": effective_runway < 9.0,
}
def identify_triggers(variables: List[Variable]) -> List[Dict]:
"""Generate early warning triggers for each variable."""
triggers = []
for var in variables:
trigger = {
"variable": var.name,
"timeline": f"Watch from day 1; expect signal ~{var.timeline_days // 2} days before impact",
"signals": _generate_signals(var),
"response_owner": _domain_to_owner(var.affected_domains[0] if var.affected_domains else Domain.FINANCIAL),
}
triggers.append(trigger)
return triggers
def _generate_signals(var: Variable) -> List[str]:
"""Generate plausible early warning signals based on variable type."""
signals = []
name_lower = var.name.lower()
if any(k in name_lower for k in ["customer", "churn", "account"]):
signals = [
"Executive sponsor unreachable for >2 weeks",
"Product usage drops >20% month-over-month",
"No QBR scheduled within 90 days of contract renewal",
"Support ticket volume spikes >50% without explanation",
]
elif any(k in name_lower for k in ["fundraise", "raise", "capital", "investor"]):
signals = [
"Fewer than 3 term sheets after 60 days of active process",
"Lead investor requests 30+ day extension on diligence",
"Comparable company raises at lower valuation (market signal)",
"Investor meeting conversion rate below 20%",
]
elif any(k in name_lower for k in ["engineer", "people", "team", "resign", "quit"]):
signals = [
"2+ engineers receive above-market counter-offer in 90 days",
"Glassdoor activity increases from engineering team",
"Key person requests 1:1 to 'talk about career' unexpectedly",
"Referral interview requests from engineers increase",
]
elif any(k in name_lower for k in ["market", "competitor", "competition"]):
signals = [
"Competitor raises $10M+ funding round",
"Win/loss rate shifts >10% in 60 days",
"Multiple prospects cite competitor by name in objections",
"Competitor poaches 2+ of your customers in a quarter",
]
else:
signals = [
f"Leading indicator for '{var.name}' deteriorates 20%+ vs baseline",
"Weekly metric review shows 3-week trend in wrong direction",
"External validation from customers or partners confirms risk",
]
return signals[:3] # Top 3
def _domain_to_owner(domain: Domain) -> str:
mapping = {
Domain.FINANCIAL: "CFO",
Domain.REVENUE: "CRO",
Domain.PRODUCT: "CPO",
Domain.ENGINEERING: "CTO",
Domain.PEOPLE: "CHRO",
Domain.OPERATIONS: "COO",
Domain.SECURITY: "CISO",
Domain.MARKET: "CMO",
}
return mapping.get(domain, "CEO")
def format_currency(amount: int) -> str:
if amount >= 1_000_000:
return f"${amount / 1_000_000:.1f}M"
elif amount >= 1_000:
return f"${amount / 1_000:.0f}K"
return f"${amount}"
def print_report(scenario: Scenario) -> None:
"""Print full scenario analysis report."""
print("\n" + "=" * 70)
print(f"SCENARIO WAR ROOM: {scenario.name.upper()}")
print("=" * 70)
# Baseline
print(f"\n📊 BASELINE")
print(f" Current ARR: {format_currency(scenario.current_arr_usd)}")
print(f" Monthly Burn: {format_currency(scenario.monthly_burn_usd)}")
print(f" Runway: {scenario.current_runway_months} months")
# Variables
print(f"\n⚡ SCENARIO VARIABLES ({len(scenario.variables)})")
for i, var in enumerate(scenario.variables, 1):
prob_pct = int(var.probability * 100)
print(f"\n Variable {i}: {var.name}")
print(f" {var.description}")
print(f" Probability: {prob_pct}% | Timeline: {var.timeline_days} days")
print(f" ARR impact: -{var.arrt_impact_pct}% | "
f"Runway impact: -{var.runway_impact_months} months")
print(f" Affected: {', '.join(d.value for d in var.affected_domains)}")
# Combined probability
combined_prob = 1.0
for var in scenario.variables:
combined_prob *= var.probability
print(f"\n Combined probability (all hit): {combined_prob * 100:.1f}%")
# Severity Levels
print(f"\n{'=' * 70}")
print("SEVERITY ANALYSIS")
print("=" * 70)
for severity in Severity:
if severity == Severity.BASE and len(scenario.variables) < 1:
continue
if severity == Severity.STRESS and len(scenario.variables) < 2:
continue
impact = calculate_impact(scenario, severity)
icon = {"base": "🟡", "stress": "🔴", "severe": "💀"}[impact["severity"]]
print(f"\n{icon} {impact['severity'].upper()} SCENARIO")
print(f" Variables: {', '.join(impact['active_variables'])}")
print(f" ARR at risk: {format_currency(impact['arr_at_risk_usd'])} "
f"({impact['arr_at_risk_pct']}%)")
print(f" Projected ARR: {format_currency(impact['projected_arr_usd'])}")
print(f" Runway: {impact['runway_months']} months "
f"({impact['runway_change']:+.1f} months)")
print(f" Burn multiple: {impact['new_burn_multiple']}x")
if impact['cascade_multiplier'] > 1.0:
print(f" Cascade amplifier: {impact['cascade_multiplier']}x "
f"(domains interact)")
print(f" Board escalation: {'⚠️ YES' if impact['board_escalation_required'] else 'No'}")
print(f" Existential risk: {'🚨 YES' if impact['existential_risk'] else 'No'}")
# Cascade Map
if scenario.cascades:
print(f"\n{'=' * 70}")
print("CASCADE MAP")
print("=" * 70)
for i, cascade in enumerate(scenario.cascades, 1):
            print(f"\n [{i}] {cascade.trigger_domain.value}")
            print(f"      ↓ {cascade.mechanism}")
            print(f"      → {cascade.caused_domain.value} "
                  f"(amplified {cascade.severity_multiplier}x)")
# Early Warning Triggers
print(f"\n{'=' * 70}")
print("EARLY WARNING TRIGGERS")
print("=" * 70)
triggers = identify_triggers(scenario.variables)
for trigger in triggers:
print(f"\n 📡 {trigger['variable']}")
print(f" Watch: {trigger['timeline']}")
print(f" Owner: {trigger['response_owner']}")
for signal in trigger['signals']:
            print(f"      • {signal}")
# Hedges
if scenario.hedges:
print(f"\n{'=' * 70}")
print("HEDGING STRATEGIES (act now)")
print("=" * 70)
sorted_hedges = sorted(scenario.hedges,
key=lambda h: h.reduces_probability, reverse=True)
for hedge in sorted_hedges:
print(f"\n{hedge.action}")
print(f" Cost: {format_currency(hedge.cost_usd)}/year | "
f"Owner: {hedge.owner} | Deadline: {hedge.deadline_days} days")
print(f" Impact: {hedge.impact_description}")
print(f" Risk reduction: {int(hedge.reduces_probability * 100)}%")
print(f"\n{'=' * 70}\n")
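The combined-probability figure printed above is a plain product of independent per-variable probabilities. A minimal standalone sketch of that arithmetic, using the sample scenario's numbers (15%, 25%, 20% — illustrative values, not live data):

```python
# Sketch: combined probability of independent scenario variables.
# These probabilities mirror the sample scenario; the independence
# assumption is exactly what print_report relies on when it multiplies.
probs = [0.15, 0.25, 0.20]
combined = 1.0
for p in probs:
    combined *= p  # independence: probabilities multiply through
# All three compounding at once is rare even when each is plausible.
print(f"Combined probability (all hit): {combined * 100:.2f}%")
```

The product comes to roughly 0.75%, which is why the report treats the severe case as a tail scenario rather than the planning baseline.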
def build_sample_scenario() -> Scenario:
"""Sample: Customer churn + fundraise miss compound scenario."""
variables = [
Variable(
name="Top customer churn",
description="Largest customer (28% of ARR) gives 60-day termination notice",
probability=0.15,
arrt_impact_pct=28.0,
runway_impact_months=4.0,
affected_domains=[
Domain.FINANCIAL, Domain.REVENUE, Domain.OPERATIONS
],
timeline_days=60,
),
Variable(
name="Series A delayed 6 months",
description="Fundraise process extends beyond target close; bridge required",
probability=0.25,
arrt_impact_pct=0.0, # No ARR impact directly
runway_impact_months=3.0, # Bridge terms reduce effective runway
affected_domains=[
Domain.FINANCIAL, Domain.PEOPLE, Domain.OPERATIONS
],
timeline_days=120,
),
Variable(
name="Lead engineer resigns",
description="Engineering lead + 1 senior resign during uncertainty",
probability=0.20,
arrt_impact_pct=5.0, # Roadmap slip causes some revenue impact
runway_impact_months=1.0,
affected_domains=[
Domain.ENGINEERING, Domain.PRODUCT, Domain.REVENUE
],
timeline_days=30,
),
]
cascades = [
CascadeEffect(
trigger_domain=Domain.REVENUE,
caused_domain=Domain.FINANCIAL,
mechanism="ARR loss increases burn multiple; runway compresses",
severity_multiplier=1.3,
),
CascadeEffect(
trigger_domain=Domain.FINANCIAL,
caused_domain=Domain.PEOPLE,
mechanism="Hiring freeze + uncertainty triggers attrition risk",
severity_multiplier=1.2,
),
CascadeEffect(
trigger_domain=Domain.PEOPLE,
caused_domain=Domain.PRODUCT,
mechanism="Engineering attrition slips roadmap; customer value drops",
severity_multiplier=1.15,
),
]
hedges = [
Hedge(
action="Establish $750K revolving credit line",
cost_usd=7_500,
impact_description="Buys 4+ months if churn hits before fundraise closes",
owner="CFO",
deadline_days=45,
reduces_probability=0.40,
),
Hedge(
action="12-month retention bonuses for 3 key engineers",
cost_usd=90_000,
impact_description="Locks critical talent through fundraise uncertainty",
owner="CHRO",
deadline_days=30,
reduces_probability=0.60,
),
Hedge(
action="Diversify revenue: reduce top customer to <20% ARR in 2 quarters",
cost_usd=0,
impact_description="Structural risk reduction; takes 6+ months to achieve",
owner="CRO",
deadline_days=14,
reduces_probability=0.30,
),
Hedge(
action="Accelerate fundraise: start parallel process, compress timeline",
cost_usd=15_000,
impact_description="Closes before scenarios compound; reduces bridge risk",
owner="CEO",
deadline_days=7,
reduces_probability=0.35,
),
]
return Scenario(
name="Customer Churn + Fundraise Miss + Eng Attrition",
variables=variables,
cascades=cascades,
hedges=hedges,
current_arr_usd=2_000_000,
current_runway_months=14,
monthly_burn_usd=140_000,
)
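A back-of-envelope way to sanity-check a hedge like the retention bonuses above is to compare expected loss before and after the hedge. A hedged sketch using the sample numbers (the $100K loss is an illustrative stand-in for 5% of $2.0M ARR; pure expected value ignores the runway and cascade effects the model also tracks):

```python
# Expected-loss arithmetic for a single hedge (illustrative only; not
# part of the model). Assumed inputs: 20% event probability, ~$100K
# annual loss if it hits, and a hedge cutting that probability by 60%.
p_event = 0.20
loss_usd = 100_000
hedge_reduction = 0.60
expected_before = p_event * loss_usd
expected_after = p_event * (1 - hedge_reduction) * loss_usd
# Weigh the ~$12K expected-loss reduction against the hedge's cash cost,
# remembering that runway and cascade protection are not priced in here.
print(f"Expected loss: ${expected_before:,.0f} -> ${expected_after:,.0f}")
```

On ARR expected value alone a $90K hedge looks expensive; the case for it rests on the runway and attrition cascades it also suppresses.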
def interactive_mode() -> Scenario:
"""Simple CLI for building a custom scenario."""
print("\n🔴 SCENARIO WAR ROOM — Custom Scenario Builder")
print("=" * 50)
print("Define up to 3 scenario variables.\n")
name = input("Scenario name: ").strip() or "Custom Scenario"
current_arr = int(input("Current ARR ($): ").strip() or "2000000")
current_runway = int(input("Current runway (months): ").strip() or "14")
    monthly_burn = int(input("Monthly burn ($): ").strip() or "140000")
variables = []
for i in range(1, 4):
print(f"\nVariable {i} (press Enter to skip):")
var_name = input(" Name: ").strip()
if not var_name:
break
desc = input(" Description: ").strip() or var_name
        prob = float(input(" Probability (0-100%): ").strip() or "20") / 100
        prob = min(max(prob, 0.0), 1.0)  # clamp to a valid probability
arr_impact = float(input(" ARR impact (%): ").strip() or "10")
runway_impact = float(input(" Runway impact (months): ").strip() or "2")
timeline = int(input(" Timeline (days): ").strip() or "90")
variables.append(Variable(
name=var_name,
description=desc,
probability=prob,
arrt_impact_pct=arr_impact,
runway_impact_months=runway_impact,
affected_domains=[Domain.FINANCIAL, Domain.REVENUE],
timeline_days=timeline,
))
if not variables:
print("No variables defined. Using sample scenario.")
return build_sample_scenario()
return Scenario(
name=name,
variables=variables,
cascades=[],
hedges=[],
current_arr_usd=current_arr,
current_runway_months=current_runway,
monthly_burn_usd=monthly_burn,
)
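Cascade multipliers compound multiplicatively when several effects fire together. A minimal sketch of that idea, mirroring the `severity_multiplier` values in the sample cascades (the exact combination logic lives in `calculate_impact`, which this does not reproduce):

```python
# Sketch: how cascade amplifiers compound (illustrative; the real
# combination logic is in calculate_impact).
multipliers = [1.3, 1.2, 1.15]  # sample cascade severity multipliers
amplifier = 1.0
for m in multipliers:
    amplifier *= m
# Modest individual amplifiers compound to a much larger joint effect.
print(f"Cascade amplifier (all fire): {amplifier:.2f}x")
```

Three seemingly mild amplifiers (1.3x, 1.2x, 1.15x) combine to roughly 1.79x, which is why the report flags cascade interaction separately from the raw variable impacts.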
def main():
    """Run the sample scenario by default; supports --interactive/-i and --json."""
print("\n🔴 SCENARIO WAR ROOM")
print("Multi-variable cascade modeler for startup adversity planning\n")
if "--interactive" in sys.argv or "-i" in sys.argv:
scenario = interactive_mode()
else:
print("Running sample scenario: Customer Churn + Fundraise Miss + Eng Attrition")
print("(Use --interactive or -i for custom scenario)\n")
scenario = build_sample_scenario()
print_report(scenario)
if "--json" in sys.argv:
results = {}
for severity in Severity:
impact = calculate_impact(scenario, severity)
results[severity.value] = impact
print(json.dumps(results, indent=2))
if __name__ == "__main__":
main()