Brand Positioning Reference

Practical frameworks for defining, communicating, and defending your market position. Not theory — applied tools for CMOs who need to get this right.


1. Category Design Frameworks

The Category Design Principle

Every product exists in a category — either one you define or one someone else defined. If you're not designing your category, your competitors are designing it for you, and they'll design it to exclude you.

Category design is not renaming an existing category. It's declaring that the existing category no longer solves the problem adequately, and that a new category — which you happen to lead — is required.

The Three-Act Category Design Narrative

Act 1: Name the problem
Identify a problem that's real, growing, and underserved. Not a problem you invented — a problem your best customers articulate before they've heard your pitch.

"Enterprise software teams are deploying faster than ever, but their security reviews still take 3 weeks — because security was built for a world where deployments happen monthly, not hourly."

Act 2: Define the new category
Name the category in terms of the outcome, not the feature. The category name should describe what customers achieve, not what the product does.

"Continuous security" — not "automated security scanning" or "DevSecOps platform."

Act 3: Position yourself as the category leader
You can't just claim leadership — you need proof: customers, analysts, community, content, events. Leadership is built, not declared.

"Snyk is building the continuous security category. 1.2M developers have adopted Snyk. Gartner lists us as a Cool Vendor in AppSec."

When Category Design Works

| Condition | Explanation |
| --- | --- |
| Market timing | The problem is growing but the existing category is inadequate |
| CEO commitment | Category design is a 3-5 year initiative, not a marketing campaign |
| Analyst alignment | Gartner, Forrester, or G2 need to recognize your category |
| Community | Practitioners adopt the vocabulary before buyers do |
| Content moat | You publish the defining content for the category before competitors |

Category Design Pitfalls

  • Naming the category after yourself: "The [Your Company] Category" is not a category. It's vanity.
  • Categories that don't map to analyst definitions: If Gartner doesn't have a Magic Quadrant for your category, you're fighting uphill.
  • Jargon without adoption: If your category name requires a two-paragraph explanation, it won't stick.
  • Starting a category war you can't win: If an incumbent can copy your category name and launch in 90 days, you don't have a defensible category.

The Lightning Strike Strategy

Category design requires concentrated, coordinated effort — not slow drip. Execute these simultaneously:

  1. Major piece of research or data (the "State of X" report)
  2. Category-defining event (host it, don't just attend)
  3. Analyst briefing (educate Gartner/Forrester on the category before they define it themselves)
  4. Book or manifesto (long-form content that becomes the category Bible)
  5. Community formation (a Slack group, a conference, a certification that practitioners want)

Do all five within a 3-month window. This creates gravity around your category claim.


2. Messaging Architecture

The Messaging Hierarchy

Every piece of content — from a tweet to a 60-page whitepaper — should trace back to this hierarchy. When it doesn't, you have messaging drift.

Level 1: Brand Promise
"[Company] [verb] [outcome] for [audience]"
→ Doesn't change. This is the north star.

Level 2: Positioning Statement (internal)
For [target customer] who [has this problem],
[Company] is the [market category] that [differentiated capability].
Unlike [alternatives], [Company] [proof of differentiation].

Level 3: Value Propositions (3-4 max, one per key outcome)
Each VP: headline (5-8 words) + 2-3 sentence explanation + proof point

Level 4: Proof Points
Data, case studies, certifications, analyst recognition — evidence for each VP

Level 5: Channel Adaptations
Website copy, sales deck, ad copy, email — same hierarchy, different format

Writing a Positioning Statement

The Geoffrey Moore / April Dunford format is still the best framework:

Template:

For [specific target customer]
who [has this specific, painful problem],
[Company name] is the [market category]
that [key differentiated capability].
Unlike [primary alternatives],
[Company] [proof of differentiation — something measurable or unique].

Bad example (too generic):

For B2B companies who want to grow faster, Acme is the marketing platform that helps you get more leads. Unlike other platforms, Acme is easy to use and powerful.

Good example (specific and falsifiable):

For DevOps teams in regulated industries who spend 20% of their sprint cycles on compliance reviews, Acme is the compliance automation platform that embeds regulatory checks directly into the CI/CD pipeline. Unlike manual compliance tools that create a separate review queue, Acme's policy-as-code approach reduces compliance-related cycle time by 60% without slowing deployments.

Test your positioning statement:

  1. Can a competitor say the exact same thing? (If yes, it's not differentiated)
  2. Does it describe what you do or what the customer gets? (Should be the latter)
  3. Would your best customer say "yes, that's exactly my problem"? (If not, wrong ICP)
  4. Is it falsifiable? (Claims you can't prove are liabilities)

Value Proposition Development

Structure for each VP:

| Element | Description | Example |
| --- | --- | --- |
| Outcome headline | What changes for the customer (5-8 words) | "Ship features 3x faster" |
| The problem | Why this matters now (1 sentence) | "Compliance reviews block 40% of releases in regulated industries" |
| Our approach | How we solve it differently (1-2 sentences) | "Policy-as-code embeds checks in the pipeline instead of adding a gate at the end" |
| Proof | Evidence this is real (1 sentence + data point) | "Customers reduce compliance cycle time by 60% in the first 90 days" |

3-VP Architecture is the standard:

  • VP1: Core outcome (what most customers primarily buy for)
  • VP2: Secondary benefit (makes the decision easier or stickier)
  • VP3: Differentiator (what tips competitive decisions in your favor)

Proof Point Hierarchy

Not all proof is equal. When you make a claim, match the strength of your proof to the importance of the claim.

| Proof Type | Strength | Best Used For |
| --- | --- | --- |
| Third-party data (analyst report, research) | Highest | Category claims, market size |
| Customer ROI data with name | High | Value propositions |
| Customer quote with name and company | Medium-high | Specific pain points and outcomes |
| Aggregated customer data ("customers report…") | Medium | Directional claims |
| Internal testing or benchmark | Medium-low | Product capability claims |
| "Designed to…" or "built for…" | Low | Product direction only |
| "We believe…" or "we think…" | Lowest | Vision statements only |

Proof point development process:

  1. Write the claim you want to make
  2. Identify the strongest available proof
  3. If proof is weak, either soften the claim or invest in getting better proof
  4. Never publish a claim without knowing what happens when a skeptic asks "prove it"

3. Competitive Positioning Maps

The Two-Axis Map

Choose two dimensions that:

  1. Both matter to your target buyer
  2. Create clear differentiation between you and competitors
  3. You can credibly defend

Choosing the axes:

  • Axis 1 should show a dimension where you win and most competitors cluster on the wrong side
  • Axis 2 should show a dimension buyers care about deeply (ease, speed, breadth, price, compliance, etc.)

What to avoid:

  • "Quality" vs. "Price" — too generic, every company claims the top-left
  • Dimensions your competitors can match in one release cycle
  • Dimensions that only your product team understands, not buyers

Competitive Analysis Template

For each major competitor:

Company: _______________

| Dimension | What They Claim | What Customers Actually Experience | Gap |
| --- | --- | --- | --- |
| Positioning | | | |
| Primary differentiator | | | |
| Pricing | | | |
| Ideal customer | | | |
| Weakness (win/loss data) | | | |
| What they say about you | | | |

Sources for competitive intelligence:

  • Win/loss interviews (primary source — nothing beats this)
  • G2/Capterra reviews (what customers say publicly)
  • Glassdoor (tells you about internal culture and focus)
  • LinkedIn job postings (what they're building next)
  • Their pricing page changes (what they're competing on)
  • Conference talks from their product and sales leaders

Battlecard Format

One page per competitor. Used by sales, not marketing.

COMPETING AGAINST: [Competitor Name]

WHY CUSTOMERS CONSIDER THEM:
(2-3 bullets — be honest about their appeal)

OUR DIFFERENTIATION:
(2-3 bullets — factual, not marketing language)

THE LANDMINE QUESTION:
(One question that exposes their weakness. The answer should make the buyer uncomfortable choosing them.)
Example: "How long does your typical implementation take? And what's your SLA if it runs over?"

OUR PROOF POINTS IN THIS COMPARISON:
- [Customer name] switched from [competitor] after [specific reason], saw [specific result]
- [Data point that directly contradicts competitor's primary claim]

THEIR LIKELY COUNTER-MOVES:
(What will they say about us? How do we respond?)

WHEN TO WALK AWAY:
(If the prospect values X more than Y, we are not the right fit — say so)

4. Brand Voice Development

What Brand Voice Is (and Isn't)

Brand voice is NOT:

  • A list of adjectives ("we are professional, innovative, and customer-focused")
  • The tone you use in formal communications
  • The font and color palette (that's visual identity)

Brand voice IS:

  • How the company sounds across every written touchpoint
  • Consistent enough to be recognizable, flexible enough to be human
  • Grounded in what your best customers actually value

The Voice Attribute Framework

Define 3-4 voice attributes. For each:

  1. What it means (in one sentence)
  2. What it sounds like (one example)
  3. What it doesn't mean (the common way the attribute goes wrong)

Example:

| Attribute | Means | Sounds like | Doesn't mean |
| --- | --- | --- | --- |
| Direct | We say what we mean without hedging | "Your compliance review takes 3 weeks. It shouldn't." | Blunt, rude, or dismissive |
| Expert | We speak from depth, not from trend | "Here's why most security gates fail at scale, and what actually works." | Jargon-heavy or condescending |
| Honest | We acknowledge what we don't do | "We're not the best fit if you need a one-size-fits-all platform." | Self-deprecating or uncertain |
| Human | Real people write for real people | "Deploying on a Friday? Here's what we'd check first." | Casual or unprofessional |

Voice Consistency Testing

Take a random sample of 10-15 recent pieces of content, for example:

  • Website homepage and pricing page
  • 3 blog posts from different authors
  • 5 outbound emails from sales
  • 3 social posts
  • 1 press release

Score each on: Does this sound like us? (1-5)

Average < 3: You have a brand voice problem. The cause is usually no documented guidelines, or guidelines that exist but aren't enforced.
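
A minimal sketch of this scoring step in Python (stdlib only; the sample names and scores below are hypothetical placeholders, and the "< 3" threshold is the guideline above):

```python
from statistics import mean

# Hypothetical scores (1-5) for "Does this sound like us?" per content sample.
scores = {
    "homepage": 4,
    "pricing page": 4,
    "blog post A": 3,
    "blog post B": 2,
    "blog post C": 3,
    "sales email 1": 2,
    "sales email 2": 3,
    "social post": 4,
    "press release": 2,
}

avg = mean(scores.values())
print(f"Average voice score: {avg:.1f}")

# Per the guideline above: an average below 3 signals a brand voice problem.
if avg < 3:
    print("Voice problem: document guidelines and enforce them in content review.")

# Surface the weakest pieces so the fix is concrete, not abstract.
for name, score in sorted(scores.items(), key=lambda kv: kv[1])[:3]:
    print(f"  lowest: {name} ({score}/5)")
```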

Voice in Different Contexts

The attribute stays the same. The tone adjusts.

| Context | Tone adjustment | Example of "Direct" |
| --- | --- | --- |
| Homepage | Confident | "Compliance reviews don't have to slow you down." |
| Technical docs | Precise | "Set the policy threshold to 0.95 to enforce mandatory approval." |
| Error messages | Helpful | "That didn't work. Here's the most common reason why, and how to fix it." |
| Support | Empathetic | "That's frustrating. Here's what happened and what we're doing about it." |
| Sales outreach | Respectful | "Most teams in your space have this problem. Worth 20 minutes to explore?" |

5. Rebrand Decision Framework

When Rebrands Succeed vs. Fail

Successful rebrands:

  • Driven by a genuine strategic shift (new category, new ICP, new market)
  • Have internal alignment before external launch
  • Are accompanied by product and messaging changes — not just visual
  • Have a 6-12 month transition plan for existing customers

Failed rebrands:

  • Driven by internal boredom with the old brand
  • Executed as a "refresh" without repositioning the value proposition
  • Lack leadership conviction (executives still describe the company in the old terms)
  • Launch with a new logo but same product, same messaging, same ICP

The Rebrand Decision Matrix

Answer each question; the more answers that land in the "Rebrand" column, the more likely a rebrand is warranted.

| Question | Yes | No |
| --- | --- | --- |
| Has our ICP changed significantly in the last 18 months? | Rebrand | Stay |
| Are we entering a new market where the current brand creates friction? | Rebrand | Stay |
| Does the brand name have negative associations in the market? | Rebrand | Stay |
| Has an acquisition changed our core identity? | Rebrand | Stay |
| Is the current brand actively hurting sales conversations? (evidence required) | Rebrand | Stay |
| Are we bored with the brand? | Stay | Stay |
| Did leadership change? | Stay | Stay |
| Are competitors rebranding? | Stay | Stay |

Score: 3+ "Rebrand" answers with evidence = worth a serious evaluation.
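
The tally itself is simple arithmetic. A minimal sketch, counting only the five questions whose "yes" points to Rebrand; the answers shown are hypothetical and the 3+ threshold comes from the scoring rule above:

```python
# Each entry: (question, answered_yes). Only the five rebrand-pointing questions count.
# Answers are hypothetical; the evidence-required question should only count with evidence attached.
matrix = [
    ("ICP changed significantly in the last 18 months", True),
    ("Entering a new market where the current brand creates friction", True),
    ("Brand name has negative associations in the market", False),
    ("An acquisition changed our core identity", False),
    ("Brand actively hurting sales conversations (evidence required)", True),
]

rebrand_votes = sum(1 for _, answered_yes in matrix if answered_yes)
print(f"'Rebrand' answers: {rebrand_votes} of {len(matrix)}")

# Per the scoring rule above: 3+ evidence-backed "Rebrand" answers warrant a serious evaluation.
if rebrand_votes >= 3:
    print("Worth a serious rebrand evaluation.")
else:
    print("Stay: fix positioning and messaging before touching the brand.")
```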

Rebrand Risk Assessment

Name change is the highest-risk rebrand element. Before committing:

  • Legal: trademark availability in all target markets
  • SEO: 18-24 months to recover domain authority after a domain change
  • Customer: existing customers need to update all integrations, contracts, documentation
  • Analyst: re-education of Gartner, Forrester, G2 category definitions
  • Employee: company identity shift is a culture event, not just an HR task

Minimum viable rebrand (lower risk):

  1. New positioning and messaging (always worth doing if positioning is wrong)
  2. Visual identity refresh (keep the name, update the look)
  3. Tagline change (the cheapest, lowest-risk brand change)

Full rebrand (high risk, sometimes necessary):

  1. New company name and domain
  2. New visual identity
  3. New positioning and messaging
  4. New category narrative

Rebrand Execution Checklist

Pre-launch (90 days):

  • Finalize positioning before finalizing design (in that order)
  • Legal trademark clearance in all target markets
  • Domain secured (with redirects planned)
  • Internal alignment: every leader can describe the new positioning in one sentence
  • Customer comms plan (existing customers, especially enterprise, need advance notice)
  • Analyst briefings scheduled (Gartner, Forrester — brief them before launch)
  • PR plan finalized

Launch (day 1):

  • Website flipped
  • Social profiles updated
  • Email signatures updated company-wide
  • Sales deck updated
  • Press release published
  • Existing customers notified (email from CEO or CMO, not marketing automation)

Post-launch (90 days):

  • SEO monitoring (watch for ranking drops on key terms)
  • Win rate monitoring (did conversion change?)
  • Employee feedback (are they using the new messaging correctly?)
  • Partner/channel update (resellers, integrations, directories)
  • Analyst follow-up (did they update their reports?)

Quick Reference: Brand Positioning Diagnostic

Use this as an audit against your current positioning:

| Check | If it fails |
| --- | --- |
| Can every sales rep state the positioning in one sentence without looking it up? | Positioning isn't working |
| Is the ICP specific enough to disqualify companies? | ICP is too broad |
| Does the homepage lead with customer outcome, not product features? | Copy needs rewrite |
| Can you name 3 companies you're NOT a good fit for? | Positioning is unfocused |
| Do win/loss interviews confirm the stated differentiator? | Differentiator is assumed, not proven |
| Is the category name used by analysts or industry media? | Category design needed |
| Does every piece of content trace back to a VP from the hierarchy? | Messaging drift — need guidelines |
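
If you want to run this diagnostic as a repeatable audit, here is a minimal sketch; the check names are paraphrased from the table, the pass/fail results are hypothetical, and the consequences come from the table's failure column:

```python
# Diagnostic checks and the consequence the table above assigns to each failure.
# The pass/fail results below are hypothetical placeholders for a real audit.
checks = [
    ("Sales reps can state the positioning in one sentence", True,
     "Positioning isn't working"),
    ("ICP is specific enough to disqualify companies", False,
     "ICP is too broad"),
    ("Homepage leads with customer outcome, not features", True,
     "Copy needs rewrite"),
    ("Can name 3 companies we're NOT a good fit for", False,
     "Positioning is unfocused"),
    ("Win/loss interviews confirm the stated differentiator", True,
     "Differentiator is assumed, not proven"),
    ("Category name is used by analysts or industry media", False,
     "Category design needed"),
    ("Every content piece traces back to a VP in the hierarchy", True,
     "Messaging drift: need guidelines"),
]

failures = [(name, consequence) for name, passed, consequence in checks if not passed]
print(f"Diagnostic: {len(checks) - len(failures)}/{len(checks)} checks passed")
for name, consequence in failures:
    print(f"  FAIL: {name} -> {consequence}")
```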