claude-skills-reference/c-level-advisor/executive-mentor/skills/postmortem/SKILL.md
Alireza Rezvani 466aa13a7b feat: C-Suite expansion — 8 new executive advisory roles (2→10) (#264)
* feat: C-Suite expansion — 8 new executive advisory roles

Add COO, CPO, CMO, CFO, CRO, CISO, CHRO advisors and Executive Mentor.
Expands C-level advisory from 2 to 10 roles with 74 total files.

Each role includes:
- SKILL.md (lean, <5KB, ~1200 tokens for context efficiency)
- Reference docs (loaded on demand, not at startup)
- Python analysis scripts (stdlib only, runnable CLI)

Executive Mentor features /em: slash commands (challenge, board-prep,
hard-call, stress-test, postmortem) with devil's advocate agent.

21 Python tools, 24 reference frameworks, 28,379 total lines.
All SKILL.md files combined: ~17K tokens (8.5% of 200K context window).

Badge: 88 → 116 skills

* feat: C-Suite orchestration layer + 18 complementary skills

ORCHESTRATION (new):
- cs-onboard: Founder interview → company-context.md
- chief-of-staff: Routing, synthesis, inter-agent orchestration
- board-meeting: 6-phase multi-agent deliberation protocol
- decision-logger: Two-layer memory (raw transcripts + approved decisions)
- agent-protocol: Inter-agent invocation with loop prevention
- context-engine: Company context loading + anonymization

CROSS-CUTTING CAPABILITIES (new):
- board-deck-builder: Board/investor update assembly
- scenario-war-room: Cascading multi-variable what-if modeling
- competitive-intel: Systematic competitor tracking + battlecards
- org-health-diagnostic: Cross-functional health scoring (8 dimensions)
- ma-playbook: M&A strategy (acquiring + being acquired)
- intl-expansion: International market entry frameworks

CULTURE & COLLABORATION (new):
- culture-architect: Values → behaviors, culture code, health assessment
- company-os: EOS/Scaling Up operating system selection + implementation
- founder-coach: Founder development, delegation, blind spots
- strategic-alignment: Strategy cascade, silo detection, alignment scoring
- change-management: ADKAR-based change rollout framework
- internal-narrative: One story across employees/investors/customers

UPGRADES TO EXISTING ROLES:
- All 10 roles get reasoning technique directives
- All 10 roles get company-context.md integration
- All 10 roles get board meeting isolation rules
- CEO gets stage-adaptive temporal horizons (seed→C)

Key design decisions:
- Two-layer memory prevents hallucinated consensus from rejected ideas
- Phase 2 isolation: agents think independently before cross-examination
- Executive Mentor (The Critic) sees all perspectives, others don't
- 25 Python tools total (stdlib only, no dependencies)

52 new files, 10 modified, 10,862 new lines.
Total C-suite ecosystem: 134 files, 39,131 lines.

* fix: connect all dots — Chief of Staff routes to all 28 skills

- Added complementary skills registry to routing-matrix.md
- Chief of Staff SKILL.md now lists all 28 skills in ecosystem
- Added integration tables to scenario-war-room and competitive-intel
- Badge: 116 → 134 skills
- README: C-Level Advisory count 10 → 28

Quality audit passed:
- All 10 roles: company-context, reasoning, isolation, invocation
- All 6 phases in board meeting
- Two-layer memory with DO_NOT_RESURFACE
- Loop prevention (no self-invoke, max depth 2, no circular)
- All /em: commands present
- All complementary skills cross-reference roles
- Chief of Staff routes to every skill in ecosystem

* refactor: CEO + CTO advisors upgraded to C-suite parity

Both roles now match the structural standard of all new roles:
- CEO: 11.7KB → 6.8KB SKILL.md (heavy content stays in references)
- CTO: 10KB → 7.2KB SKILL.md (heavy content stays in references)

Added to both:
- Integration table (who they work with and when)
- Key diagnostic questions
- Structured metrics dashboard table
- Consistent section ordering (Keywords → Quick Start → Responsibilities → Questions → Metrics → Red Flags → Integration → Reasoning → Context)

CEO additions:
- Stage-adaptive temporal horizons (seed=3m/6m/12m → B+=1y/3y/5y)
- Cross-references to culture-architect and board-deck-builder

CTO additions:
- Key Questions section (7 diagnostic questions)
- Structured metrics table (DORA + debt + team + architecture + cost)
- Cross-references to all peer roles

All 10 roles now pass structural parity: Keywords, Quick Start, Questions, Metrics, Red Flags, Integration

* feat: add proactive triggers + output artifacts to all 10 roles

Every C-suite role now specifies:
- Proactive Triggers: 'surface these without being asked' — context-driven
  early warnings that make advisors proactive, not reactive
- Output Artifacts: concrete deliverables per request type (what you ask →
  what you get)

CEO: runway alerts, board prep triggers, strategy review nudges
CTO: deploy frequency monitoring, tech debt thresholds, bus factor flags
COO: blocker detection, scaling threshold warnings, cadence gaps
CPO: retention curve monitoring, portfolio dog detection, research gaps
CMO: CAC trend monitoring, positioning gaps, budget staleness
CFO: runway forecasting, burn multiple alerts, scenario planning gaps
CRO: NRR monitoring, pipeline coverage, pricing review triggers
CISO: audit overdue alerts, compliance gaps, vendor risk
CHRO: retention risk, comp band gaps, org scaling thresholds
Executive Mentor: board prep triggers, groupthink detection, hard call surfacing

This transforms the C-suite from reactive advisors into proactive partners.

* feat: User Communication Standard — structured output for all roles

Defines 3 output formats in agent-protocol/SKILL.md:

1. Standard Output: Bottom Line → What → Why → How to Act → Risks → Your Decision
2. Proactive Alert: What I Noticed → Why It Matters → Action → Urgency (🔴🟡)
3. Board Meeting: Decision Required → Perspectives → Agree/Disagree → Critic → Action Items

10 non-negotiable rules:
- Bottom line first, always
- Results and decisions only (no process narration)
- What + Why + How for every finding
- Actions have owners and deadlines ('we should consider' is banned)
- Decisions framed as options with trade-offs
- Founder is the highest authority — roles recommend, founder decides
- Risks are concrete (if X → Y, costs $Z)
- Max 5 bullets per section
- No jargon without explanation
- Silence over fabricated updates

All 10 roles reference this standard.
Chief of Staff enforces it as a quality gate.
Board meeting Phase 4 uses the Board Meeting Output format.

* feat: Internal Quality Loop — verification before delivery

No role presents to the founder without passing verification:

Step 1: Self-Verification (every role, every time)
  - Source attribution: where did each data point come from?
  - Assumption audit: [VERIFIED] vs [ASSUMED] tags on every finding
  - Confidence scoring: 🟢 high / 🟡 medium / 🔴 low per finding
  - Contradiction check against company-context + decision log
  - 'So what?' test: every finding needs a business consequence

Step 2: Peer Verification (cross-functional)
  - Financial claims → CFO validates math
  - Revenue projections → CRO validates pipeline backing
  - Technical feasibility → CTO validates
  - People/hiring impact → CHRO validates
  - Skip for single-domain, low-stakes questions

Step 3: Critic Pre-Screen (high-stakes only)
  - Irreversible decisions, >20% runway impact, strategy changes
  - Executive Mentor finds weakest point before founder sees it
  - Suspicious consensus triggers mandatory pre-screen

Step 4: Course Correction (after founder feedback)
  - Approve → log + assign actions
  - Modify → re-verify changed parts
  - Reject → DO_NOT_RESURFACE + learn why
  - 30/60/90 day post-decision review

Board meeting contributions now require self-verified format with
confidence tags and source attribution on every finding.

* fix: resolve PR review issues 1, 4, and minor observation

Issue 1: c-level-advisor/CLAUDE.md — completely rewritten
  - Was: 2 skills (CEO, CTO only), dated Nov 2025
  - Now: full 28-skill ecosystem map with architecture diagram,
    all roles/orchestration/cross-cutting/culture skills listed,
    design decisions, integration with other domains

Issue 4: Root CLAUDE.md — updated all stale counts
  - 87 → 134 skills across all 3 references
  - C-Level: 2 → 33 (10 roles + 5 mentor commands + 18 complementary)
  - Tool count: 160+ → 185+
  - Reference count: 200+ → 250+

Minor observation: Documented plugin.json convention
  - Explained in c-level-advisor/CLAUDE.md that only executive-mentor
    has plugin.json because only it has slash commands (/em: namespace)
  - Other skills are invoked by name through Chief of Staff or directly

Also fixed: README.md 88+ → 134 in two places (first line + skills section)

* fix: update all plugin/index registrations for 28-skill C-suite

1. c-level-advisor/.claude-plugin/plugin.json — v2.0.0
   - Was: 2 skills, generic description
   - Now: all 28 skills listed with descriptions, all 25 scripts,
     namespace 'cs', full ecosystem description

2. .codex/skills-index.json — added 18 complementary skills
   - Was: 10 roles only
   - Now: 28 total c-level entries (10 roles + 6 orchestration +
     6 cross-cutting + 6 culture)
   - Each with full description for skill discovery

3. .claude-plugin/marketplace.json — updated c-level-skills entry
   - Was: generic 2-skill description
   - Now: v2.0.0, full 28-skill ecosystem description,
     skills_count: 28, scripts_count: 25

* feat: add root SKILL.md for c-level-advisor ClawHub package

---------

Co-authored-by: Leo <leo@openclaw.ai>
2026-03-06 01:35:08 +01:00


/em:postmortem — Honest Analysis of What Went Wrong

Command: /em:postmortem <event>

Not blame. Understanding. The failed deal, the missed quarter, the feature that flopped, the hire that didn't work out. What actually happened, why, and what changes as a result.


Why Most Post-Mortems Fail

They become one of two things:

The blame session — someone gets scapegoated, defensive walls go up, actual causes don't get examined, and the same problem happens again in a different form.

The whitewash — "We learned a lot, we're going to do better, here are 12 vague action items." Nothing changes. Same problem, different quarter.

A real post-mortem is neither. It's a rigorous investigation into a system failure. Not "whose fault was it" but "what conditions made this outcome predictable in hindsight?"

The purpose: extract the maximum learning value from a failure so you can prevent recurrence and improve the system.


The Framework

Step 1: Define the Event Precisely

Before analysis: describe exactly what happened.

  • What was the expected outcome?
  • What was the actual outcome?
  • When was the gap first visible?
  • What was the impact (financial, operational, reputational)?

Precision matters. "We missed Q3 revenue" is not precise enough. "We closed $420K in new ARR vs $680K target — a $260K miss driven primarily by three deals that slipped to Q4 and one deal that was lost to a competitor" is precise.
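A minimal sketch of Step 1 as a structured record (the class and field names are hypothetical, not part of this skill's tooling): forcing the four questions into named fields makes a vague definition visibly incomplete, and a crude digit check flags definitions that carry no quantified gap or impact.

```python
from dataclasses import dataclass

@dataclass
class EventDefinition:
    """Step 1 record: expected vs actual, when the gap showed, quantified impact."""
    expected: str       # e.g. "$680K new ARR target"
    actual: str         # e.g. "$420K closed; three deals slipped, one lost"
    first_visible: str  # when the gap was first visible
    impact: str         # quantified: financial, operational, reputational

    def is_precise(self) -> bool:
        # Heuristic: a precise definition quantifies at least one field.
        return any(ch.isdigit() for ch in self.expected + self.actual + self.impact)

q3_miss = EventDefinition(
    expected="$680K new ARR target",
    actual="$420K closed; three deals slipped to Q4, one lost to a competitor",
    first_visible="Week 8 of the quarter",
    impact="$260K revenue miss",
)
```

By contrast, `EventDefinition("We missed revenue", "Revenue below plan", "late", "a bad quarter")` fails the precision check, which is the point of writing the event down before analyzing it.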

Step 2: The 5 Whys — Done Properly

The goal: get from what happened (the symptom) to why it happened (the root cause).

Standard bad 5 Whys:

  • Why did we miss revenue? Because deals slipped.
  • Why did deals slip? Because the sales cycle was longer than expected.
  • Why? Because the customer buying process is complex.
  • Why? Because we're selling to enterprise.
  • Why? That's just how enterprise sales works.

→ Conclusion: Nothing to do. It's just enterprise.

Real 5 Whys:

  • Why did we miss revenue? Three deals slipped out of quarter.
  • Why did those deals slip? None of them had identified a champion with budget authority.
  • Why did we progress deals without a champion? Our qualification criteria didn't require it.
  • Why didn't our qualification criteria require it? When we built the criteria 8 months ago, we were in SMB, not enterprise.
  • Why haven't we updated qualification criteria as ICP shifted? No owner, no process for criteria review.

→ Root cause: Qualification criteria outdated, no owner, no review process.
→ Fix: Update criteria, assign owner, add quarterly review.

The test for a good root cause: Could you prevent recurrence with a specific, concrete change? If yes, you've found something real.
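The chain and its test can be sketched in a few lines (names and the vague-word list are illustrative assumptions, not part of this skill): the 5 Whys become question/answer pairs, and the root-cause test checks that the proposed fix names something concrete rather than a vibe.

```python
# A 5-Whys chain as (question, answer) pairs, ending at a candidate root cause.
why_chain = [
    ("Why did we miss revenue?", "Three deals slipped out of quarter"),
    ("Why did those deals slip?", "No identified champion with budget authority"),
    ("Why did we progress without a champion?", "Qualification criteria didn't require it"),
    ("Why didn't the criteria require it?", "Criteria were written for SMB, never updated"),
    ("Why weren't they updated?", "No owner, no process for criteria review"),
]

root_cause = why_chain[-1][1]
proposed_fix = "Update criteria, assign an owner, add quarterly review"

def passes_root_cause_test(fix: str) -> bool:
    """A real root cause maps to a specific change, not a vague intention."""
    vague = ("improve", "be better", "more rigorous", "communicate better")
    return bool(fix) and not any(v in fix.lower() for v in vague)
```

The bad chain above ("that's just how enterprise sales works") yields no fix at all, so it fails this test by construction; "We'll improve qualification" fails it too.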

Step 3: Distinguish Contributing Factors from Root Cause

Most events have multiple contributing factors. Not all are root causes.

Contributing factor: Made it worse, but isn't the core reason. If removed, the outcome might have been different — but the same class of problem would recur.

Root cause: The fundamental condition that made the outcome probable. Fix this, and this class of problem doesn't recur.

Example — failed hire:

  • Contributing factors: rushed process, reference checks skipped, team under pressure to staff up
  • Root cause: No defined competency framework, so interview process varied by who happened to conduct interviews

The distinction matters. If you address only contributing factors, you'll have a different-looking but structurally identical failure next time.

Step 4: Identify the Warning Signs That Were Ignored

Every failure has precursors. In hindsight, they're obvious. The value of this step is making them obvious prospectively.

Ask:

  • At what point was the negative outcome predictable?
  • What signals were visible at that point?
  • Who saw them? What happened when they raised them?
  • Why weren't they acted on?

Common patterns:

  • Signal was raised but dismissed by a senior person
  • Signal wasn't raised because nobody felt safe saying it
  • Signal was seen but no one had clear ownership to act on it
  • Data was available but nobody was looking at it
  • The team was too optimistic to take negative signals seriously

This step is particularly important for systemic issues — "we didn't feel safe raising the concern" is a much deeper root cause than "the deal qualification was off."

Step 5: Distinguish What Was in Control vs. Out of Control

Some failures happen despite correct decisions. Some happen because of incorrect decisions. Knowing the difference prevents both overcorrection and undercorrection.

  • In control: Process, criteria, team capability, resource allocation, decisions made
  • Out of control: Market conditions, customer decisions, competitor actions, macro events

For things in control: what specifically needs to change? For things out of control: what can be done to be more resilient to similar events?

Warning: "It was outside our control" is sometimes used to avoid accountability. Be rigorous.

Step 6: Build the Change Register

Every post-mortem ends with a change register — specific commitments, owned and dated.

Bad action items:

  • "We'll improve our qualification process"
  • "Communication will be better"
  • "We'll be more rigorous about forecasting"

Good action items:

  • "Ravi owns rewriting qualification criteria by March 15 to include champion identification as hard requirement. New criteria reviewed in weekly sales standup starting March 22."
  • "By March 10, Elena adds deal-slippage risk flag to CRM for any open opportunity >60 days without a product demo"
  • "Maria runs a 30-min retrospective with enterprise sales team every 6 weeks starting April 1, reviews win/loss data"

For each action:

  • What exactly is changing?
  • Who owns it?
  • By when?
  • How will you verify it worked?

Step 7: Verification Date

The most commonly skipped step. Post-mortems are useless if nobody checks whether the changes actually happened and actually worked.

Set a verification date: "We'll review whether qualification criteria have been updated and whether deal slippage rate has improved at the June board meeting."

Without this, post-mortems are theater.
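A tiny sketch of that check (a hypothetical helper, not part of this skill's scripts): given the agreed verification date, report how long until the check-in, or flag it as overdue.

```python
from datetime import date

def verification_status(verify_on: date, today: date) -> str:
    """Flag whether the post-mortem's verification check-in is pending or overdue."""
    if today < verify_on:
        return f"scheduled in {(verify_on - today).days} days"
    return "OVERDUE: verify whether the changes happened and worked"

status = verification_status(date(2026, 6, 1), date(2026, 3, 10))
```

Run weekly against the decision log, this turns "we'll review at the June board meeting" from a promise into a tracked deadline.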


Post-Mortem Output Format

EVENT: [Name and date]
EXPECTED: [What was supposed to happen]
ACTUAL: [What happened]
IMPACT: [Quantified]

TIMELINE
[Date]: [What happened or was visible]
[Date]: ...

5 WHYS
1. [Why did X happen?] → Because [Y]
2. [Why did Y happen?] → Because [Z]
3. [Why did Z happen?] → Because [A]
4. [Why did A happen?] → Because [B]
5. [Why did B happen?] → Because [ROOT CAUSE]

ROOT CAUSE: [One clear sentence]

CONTRIBUTING FACTORS
• [Factor] — how it contributed
• [Factor] — how it contributed

WARNING SIGNS MISSED
• [Signal visible at what date] — why it wasn't acted on

WHAT WAS IN CONTROL: [List]
WHAT WASN'T: [List]

CHANGE REGISTER
| Action | Owner | Due Date | Verification |
|--------|-------|----------|-------------|
| [Specific change] | [Name] | [Date] | [How to verify] |

VERIFICATION DATE: [Date of check-in]

The Tone of Good Post-Mortems

Blame is cheap. Understanding is hard.

The goal isn't to establish that someone made a mistake. The goal is to understand why the system produced that outcome — so the system can be improved.

"The salesperson didn't qualify the deal properly" is blame. "Our qualification framework hadn't been updated when we moved upmarket, and no one owned keeping it current" is understanding.

The first version fires or shames someone. The second version builds a more resilient organization.

Both might be true simultaneously. The distinction is: which one actually prevents recurrence?