* docs: restructure README.md — 2,539 → 209 lines (#247)
  - Cut from 2,539 lines / 73 sections to 209 lines / 18 sections
  - Consolidated 4 install methods into one unified section
  - Moved all skill details to domain-level READMEs (linked from table)
  - Front-loaded value prop and keywords for SEO
  - Added POWERFUL tier highlight section
  - Added skill-security-auditor showcase section
  - Removed stale Q4 2025 roadmap, outdated ROI claims, duplicate content
  - Fixed all internal links
  - Clean heading hierarchy (H2 for main sections only)

  Closes #233
  Co-authored-by: Leo <leo@openclaw.ai>

* fix: enhance 5 skills with scripts, references, and Anthropic best practices (#248)
  - fix(skill): enhance git-worktree-manager with scripts, references, and Anthropic best practices
  - fix(skill): enhance mcp-server-builder with scripts, references, and Anthropic best practices
  - fix(skill): enhance changelog-generator with scripts, references, and Anthropic best practices
  - fix(skill): enhance ci-cd-pipeline-builder with scripts, references, and Anthropic best practices
  - fix(skill): enhance prompt-engineer-toolkit with scripts, references, and Anthropic best practices
  - docs: update README, CHANGELOG, and plugin metadata
  - fix: correct marketing plugin count, expand thin references

  ---------
  Co-authored-by: Leo <leo@openclaw.ai>

* ci: Add VirusTotal security scan for skills (#252)

* Dev (#231)
  * Improve senior-fullstack skill description and workflow validation
    - Expand frontmatter description with concrete actions and trigger clauses
    - Add validation steps to scaffolding workflow (verify scaffold succeeded)
    - Add re-run verification step to audit workflow (confirm P0 fixes)
  * chore: sync codex skills symlinks [automated]
  * fix(skill): normalize senior-fullstack frontmatter to inline format
    Normalize the YAML description from a block scalar (>) to the inline single-line format matching all other 50+ skills. Align frontmatter trigger phrases with the body's Trigger Phrases section to eliminate duplication.
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  * fix(ci): add GITHUB_TOKEN to checkout + restore corrupted skill descriptions
    - Add token: ${{ secrets.GITHUB_TOKEN }} to actions/checkout@v4 in sync-codex-skills.yml so git-auto-commit-action can push back to the branch (fixes: fatal: could not read Username, exit 128)
    - Restore correct description for incident-commander (was: 'Skill from engineering-team')
    - Restore correct description for senior-fullstack (was: '>')
  * fix(ci): pass PROJECTS_TOKEN to fix automated commits + remove duplicate checkout
    Fixes PROJECTS_TOKEN passthrough for git-auto-commit-action and removes a duplicate checkout step in the pr-issue-auto-close workflow.
  * fix(ci): remove stray merge conflict marker in sync-codex-skills.yml (#221)
    Co-authored-by: Leo <leo@leo-agent-server>
  * fix(ci): fix workflow errors + add OpenClaw support (#222)
  * feat: add 20 new practical skills for professional Claude Code users
    New skills across 5 categories:

    Engineering (12):
    - git-worktree-manager: Parallel dev with port isolation & env sync
    - ci-cd-pipeline-builder: Generate GitHub Actions/GitLab CI from stack analysis
    - mcp-server-builder: Build MCP servers from OpenAPI specs
    - changelog-generator: Conventional commits to structured changelogs
    - pr-review-expert: Blast radius analysis & security scan for PRs
    - api-test-suite-builder: Auto-generate test suites from API routes
    - env-secrets-manager: .env management, leak detection, rotation workflows
    - database-schema-designer: Requirements to migrations & types
    - codebase-onboarding: Auto-generate onboarding docs from codebase
    - performance-profiler: Node/Python/Go profiling & optimization
    - runbook-generator: Operational runbooks from codebase analysis
    - monorepo-navigator: Turborepo/Nx/pnpm workspace management

    Engineering Team (2):
    - stripe-integration-expert: Subscriptions, webhooks, billing patterns
    - email-template-builder: React Email/MJML transactional email systems

    Product Team (3):
    - saas-scaffolder: Full SaaS project generation from product brief
    - landing-page-generator: High-converting landing pages with copy frameworks
    - competitive-teardown: Structured competitive product analysis

    Business Growth (1):
    - contract-and-proposal-writer: Contracts, SOWs, NDAs per jurisdiction

    Marketing (1):
    - prompt-engineer-toolkit: Systematic prompt development & A/B testing

    Designed for daily professional use and commercial distribution.
  * chore: sync codex skills symlinks [automated]
  * docs: update README with 20 new skills, counts 65→86, new skills section
  * docs: add commercial distribution plan (Stan Store + Gumroad)
  * docs: rewrite CHANGELOG.md with v2.0.0 release (65 skills, 9 domains) (#226)
    - Consolidate 191 commits since v1.0.2 into a proper v2.0.0 entry
    - Document 12 POWERFUL-tier skills, 37 refactored skills
    - Add new domains: business-growth, finance
    - Document Codex support and marketplace integration
    - Update version history summary table
    - Clean up [Unreleased] to only planned work
  * docs: add 24 POWERFUL-tier skills to plugin, fix counts to 85 across all docs
    - Add engineering-advanced-skills plugin (24 POWERFUL-tier skills) to marketplace.json
    - Add 13 missing skills to CHANGELOG v2.0.0 (agent-workflow-designer, api-test-suite-builder, changelog-generator, ci-cd-pipeline-builder, codebase-onboarding, database-schema-designer, env-secrets-manager, git-worktree-manager, mcp-server-builder, monorepo-navigator, performance-profiler, pr-review-expert, runbook-generator)
    - Fix skill count: 86→85 (excl. sample-skill) across README, CHANGELOG, marketplace.json
    - Fix stale 53→85 references in README
    - Add engineering-advanced-skills install command to README
    - Update marketplace.json version to 2.0.0

    ---------
    Co-authored-by: Leo <leo@openclaw.ai>
  * feat: add skill-security-auditor POWERFUL-tier skill (#230)
    Security audit and vulnerability scanner for AI agent skills before installation. Scans for:
    - Code execution risks (eval, exec, os.system, subprocess shell injection)
    - Data exfiltration (outbound HTTP, credential harvesting, env var extraction)
    - Prompt injection in SKILL.md (system override, role hijack, safety bypass)
    - Dependency supply chain (typosquatting, unpinned versions, runtime installs)
    - File system abuse (boundary violations, binaries, symlinks, hidden files)
    - Privilege escalation (sudo, SUID, cron manipulation, shell config writes)
    - Obfuscation (base64, hex encoding, chr chains, codecs)

    Produces a clear PASS/WARN/FAIL verdict with per-finding remediation guidance. Supports local dirs, git repo URLs, JSON output, strict mode, and CI/CD integration.

    Includes:
    - scripts/skill_security_auditor.py (1,049 lines, zero dependencies)
    - references/threat-model.md (complete attack vector documentation)
    - SKILL.md with usage guide and report format

    Tested against: rag-architect (PASS), agent-designer (PASS), senior-secops (FAIL: correctly flagged eval/exec patterns).
    Co-authored-by: Leo <leo@openclaw.ai>
  * docs: add skill-security-auditor to marketplace, README, and CHANGELOG
    - Add standalone plugin entry for skill-security-auditor in marketplace.json
    - Update engineering-advanced-skills plugin description to include it
    - Update skill counts: 85→86 across README, CHANGELOG, marketplace
    - Add install command to README Quick Install section
    - Add to CHANGELOG [Unreleased] section

  ---------
  Co-authored-by: Baptiste Fernandez <fernandez.baptiste1@gmail.com>
  Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
  Co-authored-by: Leo <leo@leo-agent-server>
  Co-authored-by: Leo <leo@openclaw.ai>

* Dev (#249)
  * docs: restructure README.md — 2,539 → 209 lines (#247)
  * fix: enhance 5 skills with scripts, references, and Anthropic best practices (#248)

  ---------
  Co-authored-by: Leo <leo@openclaw.ai>

* Dev (#250)
  * docs: restructure README.md — 2,539 → 209 lines (#247)
  * fix: enhance 5 skills with scripts, references, and Anthropic best practices (#248)
  * ci: add VirusTotal security scan for skills
    - Scans changed skill directories on PRs to dev/main
    - Scans all skills on release publish
    - Posts scan results as a PR comment with analysis links
    - Rate-limited to 4 req/min (free-tier compatible)
    - Appends VirusTotal links to the release body on publish
  * fix: resolve YAML lint errors in virustotal workflow
    - Add document start marker (---)
    - Quote 'on' key for truthy lint rule
    - Remove trailing spaces
    - Break long lines under the 160-char limit

  ---------
  Co-authored-by: Baptiste Fernandez <fernandez.baptiste1@gmail.com>
  Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
  Co-authored-by: Leo <leo@leo-agent-server>
  Co-authored-by: Leo <leo@openclaw.ai>

* feat: add playwright-pro plugin — production-grade Playwright testing toolkit (#254)
  Complete Claude Code plugin with:
  - 9 skills (/pw:init, generate, review, fix, migrate, coverage, testrail, browserstack, report)
  - 3 specialized agents (test-architect, test-debugger, migration-planner)
  - 55 test case templates across 11 categories (auth, CRUD, checkout, search, forms, dashboard, settings, onboarding, notifications, API, accessibility)
  - TestRail MCP server (TypeScript): 8 tools for bidirectional sync
  - BrowserStack MCP server (TypeScript): 7 tools for cross-browser testing
  - Smart hooks (auto-validate tests, auto-detect Playwright projects)
  - 6 curated reference docs (golden rules, locators, assertions, fixtures, pitfalls, flaky tests)
  - Leverages Claude Code built-ins (/batch, /debug, Explore subagent)
  - Zero-config for core features; TestRail/BrowserStack via env vars
  - Both TypeScript and JavaScript support throughout

  Co-authored-by: Leo <leo@openclaw.ai>

* feat: add playwright-pro to marketplace registry (#256)
  - New plugin: playwright-pro (9 skills, 3 agents, 55 templates, 2 MCP servers)
  - Install: /plugin install playwright-pro@claude-code-skills
  - Total marketplace plugins: 17

  Co-authored-by: Leo <leo@openclaw.ai>

* fix: integrate playwright-pro across all platforms (#258)
  - Add root SKILL.md for OpenClaw and ClawHub compatibility
  - Add to README: Skills Overview table, install section, badge count
  - Regenerate .codex/skills-index.json with playwright-pro entry
  - Add .codex/skills/playwright-pro symlink for Codex CLI
  - Fix YAML frontmatter (single-line description for index parsing)

  Platforms verified:
  - Claude Code: marketplace.json ✅ (merged in PR #256)
  - Codex CLI: symlink + skills-index.json ✅
  - OpenClaw: SKILL.md auto-discovered by install script ✅
  - ClawHub: published as playwright-pro@1.1.0 ✅

  Co-authored-by: Leo <leo@openclaw.ai>

* docs: update CLAUDE.md — reflect 87 skills across 9 domains
  Sync CLAUDE.md with actual repository state: add Engineering POWERFUL tier (25 skills), update all skill counts, add plugin registry references, and replace the stale sprint section with v2.0.0 version info.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: mention Claude Code in project description
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add self-improving-agent plugin — auto-memory curation for Claude Code (#260)
  New plugin: engineering-team/self-improving-agent/
  - 5 skills: /si:review, /si:promote, /si:extract, /si:status, /si:remember
  - 2 agents: memory-analyst, skill-extractor
  - 1 hook: PostToolUse error capture (zero overhead on success)
  - 3 reference docs: memory architecture, promotion rules, rules directory patterns
  - 2 templates: rule template, skill template
  - 20 files, 1,829 lines

  Integrates natively with Claude Code's auto-memory (v2.1.32+). Reads from ~/.claude/projects/<path>/memory/ — no duplicate storage. Promotes proven patterns from MEMORY.md to CLAUDE.md or .claude/rules/.

  Also:
  - Added to marketplace.json (18 plugins total)
  - Added to README (Skills Overview + install section)
  - Updated badge count to 88+
  - Regenerated .codex/skills-index.json + symlink

  Co-authored-by: Leo <leo@openclaw.ai>

* feat: C-Suite expansion — 8 new executive advisory roles (2→10) (#264)
  * feat: C-Suite expansion — 8 new executive advisory roles
    Add COO, CPO, CMO, CFO, CRO, CISO, CHRO advisors and Executive Mentor. Expands C-level advisory from 2 to 10 roles with 74 total files.

    Each role includes:
    - SKILL.md (lean, <5KB, ~1,200 tokens for context efficiency)
    - Reference docs (loaded on demand, not at startup)
    - Python analysis scripts (stdlib only, runnable CLI)

    Executive Mentor features /em: slash commands (challenge, board-prep, hard-call, stress-test, postmortem) with a devil's advocate agent.

    21 Python tools, 24 reference frameworks, 28,379 total lines. All SKILL.md files combined: ~17K tokens (8.5% of a 200K context window). Badge: 88 → 116 skills.
  * feat: C-Suite orchestration layer + 18 complementary skills
    ORCHESTRATION (new):
    - cs-onboard: Founder interview → company-context.md
    - chief-of-staff: Routing, synthesis, inter-agent orchestration
    - board-meeting: 6-phase multi-agent deliberation protocol
    - decision-logger: Two-layer memory (raw transcripts + approved decisions)
    - agent-protocol: Inter-agent invocation with loop prevention
    - context-engine: Company context loading + anonymization

    CROSS-CUTTING CAPABILITIES (new):
    - board-deck-builder: Board/investor update assembly
    - scenario-war-room: Cascading multi-variable what-if modeling
    - competitive-intel: Systematic competitor tracking + battlecards
    - org-health-diagnostic: Cross-functional health scoring (8 dimensions)
    - ma-playbook: M&A strategy (acquiring + being acquired)
    - intl-expansion: International market entry frameworks

    CULTURE & COLLABORATION (new):
    - culture-architect: Values → behaviors, culture code, health assessment
    - company-os: EOS/Scaling Up operating system selection + implementation
    - founder-coach: Founder development, delegation, blind spots
    - strategic-alignment: Strategy cascade, silo detection, alignment scoring
    - change-management: ADKAR-based change rollout framework
    - internal-narrative: One story across employees/investors/customers

    UPGRADES TO EXISTING ROLES:
    - All 10 roles get reasoning technique directives
    - All 10 roles get company-context.md integration
    - All 10 roles get board meeting isolation rules
    - CEO gets stage-adaptive temporal horizons (seed→C)

    Key design decisions:
    - Two-layer memory prevents hallucinated consensus from rejected ideas
    - Phase 2 isolation: agents think independently before cross-examination
    - Executive Mentor (The Critic) sees all perspectives; the others don't
    - 25 Python tools total (stdlib only, no dependencies)

    52 new files, 10 modified, 10,862 new lines. Total C-suite ecosystem: 134 files, 39,131 lines.
  * fix: connect all dots — Chief of Staff routes to all 28 skills
    - Added complementary skills registry to routing-matrix.md
    - Chief of Staff SKILL.md now lists all 28 skills in the ecosystem
    - Added integration tables to scenario-war-room and competitive-intel
    - Badge: 116 → 134 skills
    - README: C-Level Advisory count 10 → 28

    Quality audit passed:
    ✅ All 10 roles: company-context, reasoning, isolation, invocation
    ✅ All 6 phases in board meeting
    ✅ Two-layer memory with DO_NOT_RESURFACE
    ✅ Loop prevention (no self-invoke, max depth 2, no circular)
    ✅ All /em: commands present
    ✅ All complementary skills cross-reference roles
    ✅ Chief of Staff routes to every skill in the ecosystem
  * refactor: CEO + CTO advisors upgraded to C-suite parity
    Both roles now match the structural standard of all new roles:
    - CEO: 11.7KB → 6.8KB SKILL.md (heavy content stays in references)
    - CTO: 10KB → 7.2KB SKILL.md (heavy content stays in references)

    Added to both:
    - Integration table (who they work with and when)
    - Key diagnostic questions
    - Structured metrics dashboard table
    - Consistent section ordering (Keywords → Quick Start → Responsibilities → Questions → Metrics → Red Flags → Integration → Reasoning → Context)

    CEO additions:
    - Stage-adaptive temporal horizons (seed=3m/6m/12m → B+=1y/3y/5y)
    - Cross-references to culture-architect and board-deck-builder

    CTO additions:
    - Key Questions section (7 diagnostic questions)
    - Structured metrics table (DORA + debt + team + architecture + cost)
    - Cross-references to all peer roles

    All 10 roles now pass structural parity: ✅ Keywords ✅ QuickStart ✅ Questions ✅ Metrics ✅ RedFlags ✅ Integration
  * feat: add proactive triggers + output artifacts to all 10 roles
    Every C-suite role now specifies:
    - Proactive Triggers: 'surface these without being asked' — context-driven early warnings that make advisors proactive, not reactive
    - Output Artifacts: concrete deliverables per request type (what you ask → what you get)

    CEO: runway alerts, board prep triggers, strategy review nudges
    CTO: deploy frequency monitoring, tech debt thresholds, bus factor flags
    COO: blocker detection, scaling threshold warnings, cadence gaps
    CPO: retention curve monitoring, portfolio dog detection, research gaps
    CMO: CAC trend monitoring, positioning gaps, budget staleness
    CFO: runway forecasting, burn multiple alerts, scenario planning gaps
    CRO: NRR monitoring, pipeline coverage, pricing review triggers
    CISO: audit overdue alerts, compliance gaps, vendor risk
    CHRO: retention risk, comp band gaps, org scaling thresholds
    Executive Mentor: board prep triggers, groupthink detection, hard call surfacing

    This transforms the C-suite from reactive advisors into proactive partners.
  * feat: User Communication Standard — structured output for all roles
    Defines 3 output formats in agent-protocol/SKILL.md:
    1. Standard Output: Bottom Line → What → Why → How to Act → Risks → Your Decision
    2. Proactive Alert: What I Noticed → Why It Matters → Action → Urgency (🔴🟡⚪)
    3. Board Meeting: Decision Required → Perspectives → Agree/Disagree → Critic → Action Items

    10 non-negotiable rules:
    - Bottom line first, always
    - Results and decisions only (no process narration)
    - What + Why + How for every finding
    - Actions have owners and deadlines ('we should consider' is banned)
    - Decisions framed as options with trade-offs
    - Founder is the highest authority: roles recommend, the founder decides
    - Risks are concrete (if X → Y, costs $Z)
    - Max 5 bullets per section
    - No jargon without explanation
    - Silence over fabricated updates

    All 10 roles reference this standard. Chief of Staff enforces it as a quality gate. Board meeting Phase 4 uses the Board Meeting Output format.
  * feat: Internal Quality Loop — verification before delivery
    No role presents to the founder without passing verification:

    Step 1: Self-Verification (every role, every time)
    - Source attribution: where did each data point come from?
    - Assumption audit: [VERIFIED] vs [ASSUMED] tags on every finding
    - Confidence scoring: 🟢 high / 🟡 medium / 🔴 low per finding
    - Contradiction check against company-context + decision log
    - 'So what?' test: every finding needs a business consequence

    Step 2: Peer Verification (cross-functional)
    - Financial claims → CFO validates math
    - Revenue projections → CRO validates pipeline backing
    - Technical feasibility → CTO validates
    - People/hiring impact → CHRO validates
    - Skip for single-domain, low-stakes questions

    Step 3: Critic Pre-Screen (high-stakes only)
    - Irreversible decisions, >20% runway impact, strategy changes
    - Executive Mentor finds the weakest point before the founder sees it
    - Suspicious consensus triggers a mandatory pre-screen

    Step 4: Course Correction (after founder feedback)
    - Approve → log + assign actions
    - Modify → re-verify changed parts
    - Reject → DO_NOT_RESURFACE + learn why
    - 30/60/90 day post-decision review

    Board meeting contributions now require a self-verified format with confidence tags and source attribution on every finding.
  * fix: resolve PR review issues 1, 4, and minor observation
    Issue 1: c-level-advisor/CLAUDE.md — completely rewritten
    - Was: 2 skills (CEO, CTO only), dated Nov 2025
    - Now: full 28-skill ecosystem map with architecture diagram, all roles/orchestration/cross-cutting/culture skills listed, design decisions, integration with other domains

    Issue 4: Root CLAUDE.md — updated all stale counts
    - 87 → 134 skills across all 3 references
    - C-Level: 2 → 33 (10 roles + 5 mentor commands + 18 complementary)
    - Tool count: 160+ → 185+
    - Reference count: 200+ → 250+

    Minor observation: documented the plugin.json convention
    - Explained in c-level-advisor/CLAUDE.md that only executive-mentor has a plugin.json, because only it has slash commands (/em: namespace)
    - Other skills are invoked by name through Chief of Staff or directly

    Also fixed: README.md 88+ → 134 in two places (first line + skills section)
  * fix: update all plugin/index registrations for 28-skill C-suite
    1. c-level-advisor/.claude-plugin/plugin.json — v2.0.0
       - Was: 2 skills, generic description
       - Now: all 28 skills listed with descriptions, all 25 scripts, namespace 'cs', full ecosystem description
    2. .codex/skills-index.json — added 18 complementary skills
       - Was: 10 roles only
       - Now: 28 total c-level entries (10 roles + 6 orchestration + 6 cross-cutting + 6 culture)
       - Each with a full description for skill discovery
    3. .claude-plugin/marketplace.json — updated c-level-skills entry
       - Was: generic 2-skill description
       - Now: v2.0.0, full 28-skill ecosystem description, skills_count: 28, scripts_count: 25
  * feat: add root SKILL.md for c-level-advisor ClawHub package

  ---------
  Co-authored-by: Leo <leo@openclaw.ai>

* chore: sync codex skills symlinks [automated]

---------
Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Baptiste Fernandez <fernandez.baptiste1@gmail.com>
Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Leo <leo@leo-agent-server>
691 lines · 28 KiB · Python
#!/usr/bin/env python3
"""
CISO Risk Quantifier
====================
Quantifies security risks in business terms using the FAIR model.
Calculates ALE (Annual Loss Expectancy) and prioritizes by expected annual loss.

Usage:
    python risk_quantifier.py                   # Run with sample data
    python risk_quantifier.py --json            # Output JSON
    python risk_quantifier.py --csv output.csv  # Export CSV
    python risk_quantifier.py --budget 500000   # Show what fits in budget
    python risk_quantifier.py --add             # Interactive risk entry
"""

import json
import csv
import sys
import os
import argparse
from datetime import datetime
from typing import Optional

# ─── Data Model ─────────────────────────────────────────────────────────────

RISK_CATEGORIES = [
    "Data Breach",
    "Ransomware / Extortion",
    "Insider Threat",
    "Third-Party / Supply Chain",
    "Application Vulnerability",
    "Cloud Misconfiguration",
    "Social Engineering",
    "Physical Security",
    "Business Email Compromise",
    "DDoS / Availability",
]

BUSINESS_IMPACT_TYPES = [
    "Revenue Loss",
    "Regulatory Fine",
    "Legal / Litigation",
    "Reputational Damage",
    "Recovery / Remediation Cost",
    "Customer Churn",
    "Business Interruption",
]

MITIGATION_STATUSES = ["None", "Planned", "In Progress", "Mitigated", "Accepted"]

def build_risk(
    name: str,
    category: str,
    description: str,
    asset_value: float,
    exposure_factor: float,           # 0.0–1.0: fraction of asset value lost in breach
    annual_rate: float,               # ARO: expected incidents per year (0.01 = once per 100 years)
    mitigation_cost: float,
    mitigation_effectiveness: float,  # 0.0–1.0: fraction of risk reduced by control
    mitigation_status: str,
    business_impacts: dict,           # {impact_type: dollar_amount}
    notes: str = "",
) -> dict:
    """Construct a risk record with calculated metrics."""
    sle = asset_value * exposure_factor                   # Single Loss Expectancy
    ale = sle * annual_rate                               # Annual Loss Expectancy (inherent)
    mitigated_ale = ale * (1 - mitigation_effectiveness)  # Residual after mitigation
    mitigation_roi = ((ale - mitigated_ale - mitigation_cost) / mitigation_cost * 100
                      if mitigation_cost > 0 else 0)
    total_business_impact = sum(business_impacts.values())

    return {
        "name": name,
        "category": category,
        "description": description,
        "asset_value": asset_value,
        "exposure_factor": exposure_factor,
        "annual_rate": annual_rate,
        "mitigation_cost": mitigation_cost,
        "mitigation_effectiveness": mitigation_effectiveness,
        "mitigation_status": mitigation_status,
        "business_impacts": business_impacts,
        "notes": notes,
        # Calculated
        "sle": sle,
        "ale": ale,
        "mitigated_ale": mitigated_ale,
        "mitigation_roi_pct": mitigation_roi,
        "total_business_impact": total_business_impact,
        "priority_score": ale,  # Primary sort key
    }

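# ─── Worked Example (illustrative) ──────────────────────────────────────────
# Hypothetical round numbers, not part of the sample register below, showing
# the FAIR arithmetic that build_risk() performs:
#   SLE      = asset_value x exposure_factor = 1,000,000 x 0.50 = 500,000
#   ALE      = SLE x ARO                     = 500,000 x 0.10   = 50,000
#   Residual = ALE x (1 - effectiveness)     = 50,000 x 0.20    = 10,000
#   ROI %    = (ALE - Residual - cost) / cost x 100
#            = (50,000 - 10,000 - 10,000) / 10,000 x 100 = 300%

def _demo_fair_math() -> dict:
    """Standalone recomputation of the FAIR metrics with round numbers."""
    asset_value, exposure_factor, annual_rate = 1_000_000, 0.50, 0.10
    mitigation_cost, mitigation_effectiveness = 10_000, 0.80
    sle = asset_value * exposure_factor              # 500,000.0
    ale = sle * annual_rate                          # ~50,000.0
    residual = ale * (1 - mitigation_effectiveness)  # ~10,000.0
    roi_pct = (ale - residual - mitigation_cost) / mitigation_cost * 100  # ~300.0
    return {"sle": sle, "ale": ale, "residual": residual, "roi_pct": roi_pct}
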
# ─── Sample Data ─────────────────────────────────────────────────────────────
|
||
|
||
def load_sample_risks() -> list[dict]:
|
||
"""
|
||
Sample risk register for a Series B SaaS company with ~$15M ARR,
|
||
~50K customer records, B2B enterprise focus.
|
||
"""
|
||
risks = []
|
||
|
||
risks.append(build_risk(
|
||
name="Customer Database Breach",
|
||
category="Data Breach",
|
||
description=(
|
||
"Unauthorized access to production database containing 50K+ customer records "
|
||
"including PII (name, email, company, payment method). Attack vector: SQL injection, "
|
||
"compromised credentials, or insider access."
|
||
),
|
||
asset_value=5_000_000, # Value of customer database (revenue impact + regulatory)
|
||
exposure_factor=0.30, # ~30% of asset value lost in a breach event
|
||
annual_rate=0.12, # ~12% chance per year (based on Verizon DBIR industry data)
|
||
mitigation_cost=45_000, # WAF + DAST + DB activity monitoring annual cost
|
||
mitigation_effectiveness=0.80,
|
||
mitigation_status="In Progress",
|
||
business_impacts={
|
||
"Regulatory Fine": 85_000, # GDPR/CCPA exposure
|
||
"Legal / Litigation": 150_000, # Class action exposure
|
||
"Customer Churn": 300_000, # Lost ARR from breach-triggered churn
|
||
"Reputational Damage": 200_000, # Brand impact / deal loss
|
||
"Recovery / Remediation Cost": 65_000,
|
||
},
|
||
notes="SOC 2 Type II controls partially address. Next step: DB activity monitoring.",
|
||
))
|
||
|
||
risks.append(build_risk(
|
||
name="Ransomware Attack",
|
||
category="Ransomware / Extortion",
|
||
description=(
|
||
"Ransomware encrypts production systems. Average ransom demand for a "
|
||
"Series B company is $350K–$800K. Recovery without ransom payment: 2–6 weeks downtime. "
|
||
"Attack vector: phishing email with malicious attachment, RDP exposure."
|
||
),
|
||
asset_value=3_500_000,
|
||
exposure_factor=0.25,
|
||
annual_rate=0.15,
|
||
mitigation_cost=60_000, # EDR + email security + backup hardening
|
||
mitigation_effectiveness=0.85,
|
||
mitigation_status="Planned",
|
||
business_impacts={
|
||
"Business Interruption": 450_000, # 4 weeks downtime × $112K/week revenue
|
||
"Recovery / Remediation Cost": 180_000,
|
||
"Customer Churn": 125_000,
|
||
"Revenue Loss": 75_000,
|
||
},
|
||
notes="Offline, tested backups reduce recovery time and eliminate ransom pressure.",
|
||
))
|
||
|
||
risks.append(build_risk(
|
||
name="Privileged Insider Data Theft",
|
||
category="Insider Threat",
|
||
description=(
|
||
"Disgruntled or financially motivated employee with elevated access exfiltrates "
|
||
"customer data, IP, or trade secrets. Detection is typically slow (median: 197 days "
|
||
"per IBM Cost of Data Breach Report)."
|
||
),
|
||
asset_value=2_800_000,
|
||
exposure_factor=0.20,
|
||
annual_rate=0.08,
|
||
mitigation_cost=35_000, # DLP + UEBA + PAM
|
||
mitigation_effectiveness=0.65,
|
||
mitigation_status="None",
|
||
business_impacts={
|
||
"Legal / Litigation": 120_000,
|
||
"Customer Churn": 90_000,
|
||
"Reputational Damage": 75_000,
|
||
"Recovery / Remediation Cost": 40_000,
|
||
},
|
||
notes="No DLP or UEBA currently deployed. Highest detection gap.",
|
||
))
|
||
|
||
risks.append(build_risk(
|
||
name="Critical SaaS Vendor Breach (Supply Chain)",
|
||
category="Third-Party / Supply Chain",
|
||
description=(
|
||
"A critical SaaS vendor (e.g., Salesforce, Slack, AWS, GitHub) suffers a breach "
|
||
"that compromises data entrusted to them or disrupts your operations. You have "
|
||
"limited control but full liability to customers."
|
||
),
|
||
asset_value=2_200_000,
|
||
exposure_factor=0.15,
|
||
annual_rate=0.18,
|
||
mitigation_cost=20_000, # Vendor risk assessment program
|
||
mitigation_effectiveness=0.40, # Limited — you can't control vendor security
|
||
mitigation_status="Planned",
|
||
business_impacts={
|
||
"Business Interruption": 95_000,
|
||
"Customer Churn": 75_000,
|
||
"Reputational Damage": 50_000,
|
||
"Recovery / Remediation Cost": 30_000,
|
||
},
|
||
notes="Third-party risk is partially transferable via contractual SLAs and cyber insurance.",
|
||
))
|
||
|
||
    risks.append(build_risk(
        name="Business Email Compromise (BEC)",
        category="Business Email Compromise",
        description=(
            "Attacker impersonates CEO, CFO, or vendor to redirect wire transfers, gift card "
            "purchases, or payroll. Median BEC loss: $125K. FBI IC3 reports BEC as #1 "
            "cybercrime by financial loss."
        ),
        asset_value=500_000,
        exposure_factor=0.40,
        annual_rate=0.30,
        mitigation_cost=12_000,  # Email authentication (DMARC) + training + callback procedures
        mitigation_effectiveness=0.90,
        mitigation_status="In Progress",
        business_impacts={
            "Revenue Loss": 125_000,  # Direct financial theft (often unrecoverable)
            "Recovery / Remediation Cost": 25_000,
            "Legal / Litigation": 15_000,
        },
        notes="DMARC deployed. Need to enforce wire transfer callback procedures.",
    ))

    risks.append(build_risk(
        name="Cloud Misconfiguration — S3 / Storage Exposure",
        category="Cloud Misconfiguration",
        description=(
            "Public exposure of S3 buckets, GCS buckets, or Azure Blob storage containing "
            "sensitive data. One of the most common causes of data breaches. Often undetected "
            "for months. 2023 IBM study: 82% of breaches involved data stored in cloud."
        ),
        asset_value=1_800_000,
        exposure_factor=0.20,
        annual_rate=0.20,
        mitigation_cost=18_000,  # CSPM tool + IaC scanning
        mitigation_effectiveness=0.90,
        mitigation_status="Planned",
        business_impacts={
            "Regulatory Fine": 60_000,
            "Reputational Damage": 120_000,
            "Legal / Litigation": 45_000,
            "Recovery / Remediation Cost": 35_000,
        },
        notes="No CSPM currently. High frequency, high detectability, low mitigation cost.",
    ))

    risks.append(build_risk(
        name="Credential Stuffing — Customer Accounts",
        category="Application Vulnerability",
        description=(
            "Attackers use leaked credential lists to compromise customer accounts. "
            "Account takeover leads to data theft, fraudulent transactions, and support burden. "
            "16 billion credentials available on darknet as of 2024."
        ),
        asset_value=1_200_000,
        exposure_factor=0.12,
        annual_rate=0.40,
        mitigation_cost=15_000,  # MFA + rate limiting + bot detection
        mitigation_effectiveness=0.95,
        mitigation_status="In Progress",
        business_impacts={
            "Customer Churn": 80_000,
            "Revenue Loss": 45_000,
            "Recovery / Remediation Cost": 19_000,
            "Reputational Damage": 30_000,
        },
        notes="MFA available but optional. Enforcing MFA cuts this risk by ~99%.",
    ))

    risks.append(build_risk(
        name="Phishing — Employee Credential Compromise",
        category="Social Engineering",
        description=(
            "Employee clicks phishing link, surrenders credentials. Without MFA, "
            "this provides full access to email, SaaS apps, and potentially production. "
            "Phishing is the #1 attack vector in the Verizon DBIR."
        ),
        asset_value=1_500_000,
        exposure_factor=0.15,
        annual_rate=0.35,
        mitigation_cost=25_000,  # MFA + security awareness training + email security
        mitigation_effectiveness=0.92,
        mitigation_status="In Progress",
        business_impacts={
            "Business Interruption": 65_000,
            "Customer Churn": 55_000,
            "Recovery / Remediation Cost": 45_000,
            "Reputational Damage": 60_000,
        },
        notes="Primary vector for ransomware and BEC. MFA is the single highest-ROI control.",
    ))

    risks.append(build_risk(
        name="Application API Vulnerability",
        category="Application Vulnerability",
        description=(
            "Unauthenticated or improperly authorized API endpoint exposes customer data "
            "or administrative functions. OWASP API Security Top 10 — broken object-level "
            "authorization is the most common API vulnerability."
        ),
        asset_value=2_000_000,
        exposure_factor=0.18,
        annual_rate=0.15,
        mitigation_cost=30_000,  # DAST + API gateway + code review
        mitigation_effectiveness=0.75,
        mitigation_status="Planned",
        business_impacts={
            "Regulatory Fine": 70_000,
            "Customer Churn": 90_000,
            "Reputational Damage": 100_000,
            "Legal / Litigation": 60_000,
        },
        notes="Need automated API security testing in CI/CD pipeline.",
    ))

    risks.append(build_risk(
        name="DDoS Attack — Production Service",
        category="DDoS / Availability",
        description=(
            "Distributed denial-of-service attack renders production service unavailable. "
            "Average DDoS duration: 4–8 hours. Enterprise SLA breach triggers contractual "
            "penalties. Increasingly used as extortion or distraction tactic."
        ),
        asset_value=1_000_000,
        exposure_factor=0.10,
        annual_rate=0.25,
        mitigation_cost=15_000,  # CDN with DDoS protection (Cloudflare, AWS Shield)
        mitigation_effectiveness=0.85,
        mitigation_status="Mitigated",
        business_impacts={
            "Business Interruption": 45_000,
            "Customer Churn": 30_000,
            "Revenue Loss": 25_000,
        },
        notes="Cloudflare deployed. Residual risk from very large volumetric attacks.",
    ))

    return risks


# ─── Analysis & Reporting ────────────────────────────────────────────────────

def calculate_portfolio_summary(risks: list[dict]) -> dict:
    """Aggregate portfolio-level metrics."""
    total_inherent_ale = sum(r["ale"] for r in risks)
    total_mitigated_ale = sum(r["mitigated_ale"] for r in risks)
    total_mitigation_cost = sum(r["mitigation_cost"] for r in risks)
    risk_reduction = total_inherent_ale - total_mitigated_ale
    portfolio_roi = ((risk_reduction - total_mitigation_cost) / total_mitigation_cost * 100
                     if total_mitigation_cost > 0 else 0)

    by_category = {}
    for r in risks:
        cat = r["category"]
        if cat not in by_category:
            by_category[cat] = {"count": 0, "total_ale": 0.0}
        by_category[cat]["count"] += 1
        by_category[cat]["total_ale"] += r["ale"]

    by_status = {}
    for r in risks:
        status = r["mitigation_status"]
        by_status[status] = by_status.get(status, 0) + 1

    return {
        "total_risks": len(risks),
        "total_inherent_ale": total_inherent_ale,
        "total_mitigated_ale": total_mitigated_ale,
        "total_risk_reduction": risk_reduction,
        "total_mitigation_cost": total_mitigation_cost,
        "portfolio_roi_pct": portfolio_roi,
        "by_category": dict(sorted(by_category.items(), key=lambda x: -x[1]["total_ale"])),
        "by_mitigation_status": by_status,
    }


def prioritize_risks(risks: list[dict], budget: Optional[float] = None) -> list[dict]:
    """Return risks sorted by ALE. If budget given, show what fits."""
    sorted_risks = sorted(risks, key=lambda r: -r["ale"])
    if budget is None:
        return sorted_risks

    # Greedy budget allocation by ROI
    actionable = [r for r in sorted_risks if r["mitigation_status"] in ("None", "Planned")
                  and r["mitigation_cost"] > 0]
    actionable.sort(key=lambda r: -r["mitigation_roi_pct"])

    allocated = []
    remaining = budget
    for risk in actionable:
        if risk["mitigation_cost"] <= remaining:
            allocated.append(risk)
            remaining -= risk["mitigation_cost"]

    return allocated


def fmt_dollars(amount: float) -> str:
    """Format a dollar amount."""
    if amount >= 1_000_000:
        return f"${amount/1_000_000:.2f}M"
    if amount >= 1_000:
        return f"${amount/1_000:.0f}K"
    return f"${amount:.0f}"


def fmt_pct(value: float) -> str:
    return f"{value:.1f}%"


def severity_label(ale: float) -> str:
    if ale >= 200_000:
        return "CRITICAL"
    if ale >= 75_000:
        return "HIGH"
    if ale >= 25_000:
        return "MEDIUM"
    return "LOW"


def severity_color(label: str) -> str:
    """ANSI color codes."""
    colors = {
        "CRITICAL": "\033[91m",  # Red
        "HIGH": "\033[93m",      # Yellow
        "MEDIUM": "\033[94m",    # Blue
        "LOW": "\033[92m",       # Green
    }
    return colors.get(label, "") + label + "\033[0m"


# ─── Display ─────────────────────────────────────────────────────────────────

def print_header():
    print("\n" + "=" * 80)
    print(" CISO RISK QUANTIFIER — Security Risk Portfolio")
    print(f" Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}")
    print("=" * 80)


def print_portfolio_summary(summary: dict):
    print("\n📊 PORTFOLIO SUMMARY")
    print("-" * 60)
    print(f" Total risks tracked: {summary['total_risks']}")
    print(f" Total inherent ALE: {fmt_dollars(summary['total_inherent_ale'])}/yr")
    print(f" Total ALE after mitigations: {fmt_dollars(summary['total_mitigated_ale'])}/yr")
    print(f" Risk reduction from controls: {fmt_dollars(summary['total_risk_reduction'])}/yr")
    print(f" Total mitigation spend: {fmt_dollars(summary['total_mitigation_cost'])}/yr")
    print(f" Portfolio ROI: {fmt_pct(summary['portfolio_roi_pct'])}")
    print()

    print(" Risk by Category (sorted by ALE):")
    for cat, data in summary["by_category"].items():
        print(f"   {cat:<35} {data['count']} risks  ALE: {fmt_dollars(data['total_ale'])}/yr")

    print()
    print(" Mitigation Status:")
    for status, count in summary["by_mitigation_status"].items():
        print(f"   {status:<20} {count} risks")


def print_risk_table(risks: list[dict], title: str = "RISK REGISTER"):
    print(f"\n🎯 {title}")
    print("-" * 80)
    header = f"{'#':<3} {'Risk Name':<35} {'Severity':<10} {'ALE/yr':<12} {'Mitig Cost':<12} {'ROI':<8} {'Status':<12}"
    print(header)
    print("-" * 80)

    for i, risk in enumerate(risks, 1):
        sev = severity_label(risk["ale"])
        sev_str = sev.ljust(10)
        roi = fmt_pct(risk["mitigation_roi_pct"]) if risk["mitigation_cost"] > 0 else "N/A"
        print(
            f"{i:<3} {risk['name'][:34]:<35} {sev_str} "
            f"{fmt_dollars(risk['ale']):<12} {fmt_dollars(risk['mitigation_cost']):<12} "
            f"{roi:<8} {risk['mitigation_status']}"
        )


def print_risk_detail(risk: dict, index: int):
    sev = severity_label(risk["ale"])
    print(f"\n{'─' * 70}")
    print(f" #{index} — {risk['name']} [{sev}]")
    print(f"{'─' * 70}")
    print(f" Category: {risk['category']}")
    print(f" Description: {risk['description'][:120]}...")
    print()
    print(" RISK CALCULATION:")
    print(f"   Asset Value: {fmt_dollars(risk['asset_value'])}")
    print(f"   Exposure Factor: {fmt_pct(risk['exposure_factor'] * 100)}")
    print(f"   Single Loss Expectancy: {fmt_dollars(risk['sle'])}")
    print(f"   Annual Rate (ARO): {risk['annual_rate']:.2f}x/year")
    print(f"   Annual Loss Expectancy: {fmt_dollars(risk['ale'])}/yr  ← INHERENT RISK")
    print()
    print(" MITIGATION:")
    print(f"   Mitigation Cost: {fmt_dollars(risk['mitigation_cost'])}/yr")
    print(f"   Effectiveness: {fmt_pct(risk['mitigation_effectiveness'] * 100)}")
    print(f"   Residual ALE: {fmt_dollars(risk['mitigated_ale'])}/yr")
    print(f"   Mitigation ROI: {fmt_pct(risk['mitigation_roi_pct'])}")
    print(f"   Status: {risk['mitigation_status']}")
    print()
    print(" BUSINESS IMPACT BREAKDOWN:")
    for impact_type, amount in risk["business_impacts"].items():
        print(f"   {impact_type:<30} {fmt_dollars(amount)}")
    print(f"   {'TOTAL':<30} {fmt_dollars(risk['total_business_impact'])}")
    if risk["notes"]:
        print(f"\n NOTES: {risk['notes']}")


def print_board_summary(risks: list[dict], summary: dict):
    """One-page board-ready summary."""
    print("\n" + "═" * 80)
    print(" BOARD SECURITY REPORT — Risk Summary")
    print("═" * 80)

    critical = [r for r in risks if severity_label(r["ale"]) == "CRITICAL"]
    high = [r for r in risks if severity_label(r["ale"]) == "HIGH"]
    medium = [r for r in risks if severity_label(r["ale"]) == "MEDIUM"]
    low = [r for r in risks if severity_label(r["ale"]) == "LOW"]

    print("\n RISK EXPOSURE SUMMARY")
    print(" ┌─────────────┬────────┬──────────────┐")
    print(" │ Severity    │ Count  │ Total ALE/yr │")
    print(" ├─────────────┼────────┼──────────────┤")
    for label, group in [("Critical", critical), ("High", high), ("Medium", medium), ("Low", low)]:
        ale = sum(r["ale"] for r in group)
        print(f" │ {label:<11} │ {len(group):<6} │ {fmt_dollars(ale):<12} │")
    print(" └─────────────┴────────┴──────────────┘")

    print(f"\n TOTAL INHERENT RISK: {fmt_dollars(summary['total_inherent_ale'])}/yr")
    print(f" SECURITY INVESTMENT: {fmt_dollars(summary['total_mitigation_cost'])}/yr")
    print(f" RESIDUAL RISK: {fmt_dollars(summary['total_mitigated_ale'])}/yr")
    print(f" RISK REDUCTION: {fmt_dollars(summary['total_risk_reduction'])}/yr")
    print(f" PORTFOLIO ROI: {fmt_pct(summary['portfolio_roi_pct'])}")

    print("\n TOP 3 RISKS BY EXPECTED ANNUAL LOSS:")
    top3 = sorted(risks, key=lambda r: -r["ale"])[:3]
    for i, risk in enumerate(top3, 1):
        print(f" {i}. {risk['name']}: {fmt_dollars(risk['ale'])}/yr expected annual loss")
        print(f"    Mitigation: {fmt_dollars(risk['mitigation_cost'])}/yr | "
              f"Status: {risk['mitigation_status']}")

    unmitigated = [r for r in risks if r["mitigation_status"] == "None"]
    if unmitigated:
        print(f"\n ⚠️ UNMITIGATED RISKS ({len(unmitigated)}):")
        for r in sorted(unmitigated, key=lambda x: -x["ale"]):
            print(f"   • {r['name']}: {fmt_dollars(r['ale'])}/yr — Action required")


def export_csv(risks: list[dict], filepath: str):
    fields = [
        "name", "category", "asset_value", "exposure_factor", "annual_rate",
        "sle", "ale", "mitigation_cost", "mitigation_effectiveness",
        "mitigated_ale", "mitigation_roi_pct", "mitigation_status", "notes",
    ]
    with open(filepath, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for risk in risks:
            row = {k: risk.get(k, "") for k in fields}
            writer.writerow(row)
    print(f"✅ Exported {len(risks)} risks to {filepath}")


def export_json(risks: list[dict]) -> str:
    return json.dumps(risks, indent=2, default=str)


# ─── Interactive Entry ───────────────────────────────────────────────────────

def interactive_add_risk() -> dict:
    """Interactive CLI for adding a new risk."""
    print("\n── ADD NEW RISK ──────────────────────────────────────")
    name = input("Risk name: ").strip()

    print(f"Category options: {', '.join(RISK_CATEGORIES)}")
    category = input("Category: ").strip()

    description = input("Description (brief): ").strip()

    print("\nAsset valuation:")
    asset_value = float(input("  Asset value ($): ").replace(",", "").replace("$", ""))
    exposure_factor = float(input("  Exposure factor (0.0–1.0, fraction of value lost): "))
    annual_rate = float(input("  Annual rate of occurrence (e.g., 0.10 = once per 10 years): "))

    print("\nMitigation:")
    mitigation_cost = float(input("  Mitigation cost ($/yr): ").replace(",", "").replace("$", ""))
    mitigation_effectiveness = float(input("  Mitigation effectiveness (0.0–1.0): "))

    print(f"Status options: {', '.join(MITIGATION_STATUSES)}")
    mitigation_status = input("  Status: ").strip()

    print("\nBusiness impacts (enter 0 to skip):")
    business_impacts = {}
    for impact_type in BUSINESS_IMPACT_TYPES:
        val = input(f"  {impact_type} ($): ").replace(",", "").replace("$", "")
        amount = float(val) if val else 0
        if amount > 0:
            business_impacts[impact_type] = amount

    notes = input("\nNotes: ").strip()

    return build_risk(
        name=name,
        category=category,
        description=description,
        asset_value=asset_value,
        exposure_factor=exposure_factor,
        annual_rate=annual_rate,
        mitigation_cost=mitigation_cost,
        mitigation_effectiveness=mitigation_effectiveness,
        mitigation_status=mitigation_status,
        business_impacts=business_impacts,
        notes=notes,
    )


# ─── Main ────────────────────────────────────────────────────────────────────

def main():
    parser = argparse.ArgumentParser(
        description="CISO Risk Quantifier — Quantify security risks in business terms"
    )
    parser.add_argument("--json", action="store_true", help="Output full JSON")
    parser.add_argument("--csv", metavar="FILE", help="Export CSV to file")
    parser.add_argument("--budget", type=float, metavar="DOLLARS",
                        help="Show recommended mitigations within budget")
    parser.add_argument("--board", action="store_true", help="Show board-ready summary only")
    parser.add_argument("--detail", action="store_true", help="Show detailed risk breakdowns")
    parser.add_argument("--add", action="store_true", help="Interactively add a risk")
    args = parser.parse_args()

    risks = load_sample_risks()

    if args.add:
        new_risk = interactive_add_risk()
        risks.append(new_risk)
        print(f"\n✅ Added risk: {new_risk['name']} | ALE: {fmt_dollars(new_risk['ale'])}/yr")

    # Sort by ALE descending
    risks_sorted = sorted(risks, key=lambda r: -r["ale"])
    summary = calculate_portfolio_summary(risks_sorted)

    if args.json:
        output = {
            "generated": datetime.now().isoformat(),
            "summary": summary,
            "risks": risks_sorted,
        }
        print(json.dumps(output, indent=2, default=str))
        return

    if args.csv:
        export_csv(risks_sorted, args.csv)
        return

    print_header()

    if args.board:
        print_board_summary(risks_sorted, summary)
        return

    print_portfolio_summary(summary)
    print_risk_table(risks_sorted)

    if args.detail:
        for i, risk in enumerate(risks_sorted, 1):
            print_risk_detail(risk, i)

    if args.budget:
        recommended = prioritize_risks(risks_sorted, args.budget)
        print(f"\n💰 BUDGET ALLOCATION — ${args.budget:,.0f}")
        print(" Recommended mitigations (sorted by ROI):")
        if recommended:
            for r in recommended:
                print(f"   • {r['name']}: {fmt_dollars(r['mitigation_cost'])}/yr "
                      f"| ALE reduction: {fmt_dollars(r['ale'] - r['mitigated_ale'])}/yr "
                      f"| ROI: {fmt_pct(r['mitigation_roi_pct'])}")
        else:
            print("   No actionable mitigations fit within budget.")

    print_board_summary(risks_sorted, summary)

    print("\n💡 NEXT STEPS")
    print(" 1. Run `--detail` to see full breakdown of each risk")
    print(" 2. Run `--budget 200000` to see what you can mitigate with a given budget")
    print(" 3. Run `--board` for a board-ready one-page summary")
    print(" 4. Run `--csv risks.csv` to export for stakeholder review")
    print(" 5. Run `--add` to interactively add risks to the register")
    print()


if __name__ == "__main__":
    main()