# Growth Frameworks Reference
Playbooks for PLG, sales-led, community-led, and hybrid growth models. Includes growth loops, funnel design, and guidance on when and how to switch models.
## 1. Product-Led Growth (PLG) Playbook

### What PLG Actually Is
PLG means the product is the primary distribution mechanism. Not "we have a free trial." Not "our product is self-serve." PLG means the product creates acquisition, retention, and expansion — and does so at a scale and cost no sales team can match.
The minimum requirements for PLG to work:
- Fast time-to-value: Users must get a meaningful outcome within one session (ideally < 30 minutes)
- Low friction to start: No sales call, no implementation project, no credit card required (for top of funnel)
- Built-in virality or network effects: Usage creates exposure or value that draws in other users
- Self-serve monetization or expansion path: Freemium → paid, or individual → team → company
If any of these is missing, you don't have PLG — you have a website with a free trial.
### PLG Funnel: The Four Stages

**Stage 1: Acquisition.** The user discovers and signs up for the product without talking to sales.
Key channels:
- Organic search (SEO targeting jobs-to-be-done searches)
- Product Hunt launches
- Referral and invite loops (users share the product with colleagues)
- Developer communities and open-source contributions
Metric: Visitor-to-signup rate
Benchmark: 2-8% for B2B SaaS (varies heavily by product complexity)
**Stage 2: Activation.** The user reaches the "aha moment" — the point where the product delivers its core value for the first time.
Finding the aha moment:
- Look at the behaviors that differentiate users who stay from users who churn in the first 30 days
- The aha moment is not creating an account. It's completing the first outcome.
- For Slack: sending a message in a real channel
- For Dropbox: adding a file from a second device
- For HubSpot: publishing a form that captures a real lead
Metric: Activation rate (% of signups who complete the aha moment action within 7 days)
Benchmark: 25-40% is strong. < 15% means the onboarding is broken.
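The activation metric above can be computed directly from event data. A minimal sketch — the data shapes and the `activation_rate` helper are illustrative, not tied to any specific analytics stack:

```python
from datetime import datetime, timedelta

def activation_rate(signups, aha_events, window_days=7):
    """Share of signups whose first aha-moment action lands within the window.

    signups:    {user_id: signup datetime}
    aha_events: {user_id: datetime of the user's first aha-moment action}
    """
    if not signups:
        return 0.0
    window = timedelta(days=window_days)
    activated = sum(
        1 for user, signed_up in signups.items()
        if user in aha_events and aha_events[user] - signed_up <= window
    )
    return activated / len(signups)

signups = {
    "u1": datetime(2024, 3, 1),
    "u2": datetime(2024, 3, 1),
    "u3": datetime(2024, 3, 2),
    "u4": datetime(2024, 3, 2),
}
aha_events = {
    "u1": datetime(2024, 3, 2),   # within 7 days: activated
    "u3": datetime(2024, 3, 15),  # outside the window: not activated
}
print(activation_rate(signups, aha_events))  # 0.25, below the healthy band
```

The key modeling choice is which event counts as the aha moment; the window and threshold only matter once that event is right.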
**Stage 3: Retention.** Users return to the product and build habitual use.
Retention analysis:
- Cohort retention curves (by signup week/month)
- Day 1, Day 7, Day 30, Day 90 retention rates
- Feature adoption by retained vs. churned users (which features predict retention?)
Metric: D30 retention rate (% of users still active 30 days after signup)
Benchmark: > 40% D30 retention is strong for B2B products
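D30 retention can be computed from raw signup and activity dates. A minimal sketch, assuming a small grace window around day 30 so a weekend gap doesn't mark a user as churned; the `grace` parameter and data shapes are illustrative:

```python
from datetime import date, timedelta

def d30_retention(signup_dates, activity_dates, n=30, grace=3):
    """% of signups with any activity in [signup + n - grace, signup + n + grace].

    signup_dates:   {user: signup date}
    activity_dates: {user: set of dates the user was active}
    """
    if not signup_dates:
        return 0.0
    retained = 0
    for user, signed_up in signup_dates.items():
        lo = signed_up + timedelta(days=n - grace)
        hi = signed_up + timedelta(days=n + grace)
        if any(lo <= d <= hi for d in activity_dates.get(user, ())):
            retained += 1
    return retained / len(signup_dates)

signups = {"a": date(2024, 1, 1), "b": date(2024, 1, 1)}
activity = {"a": {date(2024, 1, 30)}, "b": {date(2024, 1, 5)}}
print(d30_retention(signups, activity))  # 0.5 ("a" retained, "b" churned)
```

Run the same function with n=1, 7, and 90 on each signup cohort to build the retention curves described above.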
**Stage 4: Revenue.** Self-serve conversion from free to paid, or expansion from individual to team.
PQL (Product-Qualified Lead) signals:
- Reached a usage limit (invites, storage, seats)
- Used a premium feature in trial mode
- Team size on the account reached a threshold
- High-frequency usage above a defined threshold
Metric: PQL conversion rate (% of PQLs who convert to paid within 30 days)
Benchmark: 15-30% for well-designed PLG products
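These signals can be turned into a mechanical PQL flag. A sketch assuming a flat any-signal rule; real products usually weight and tune the signals, and the field names here are hypothetical, not a real CRM schema:

```python
def is_pql(account, seat_limit=5, weekly_session_threshold=20):
    """Flag an account as a Product-Qualified Lead when any signal fires."""
    signals = (
        account.get("seats_used", 0) >= seat_limit,             # reached a usage limit
        account.get("premium_features_tried", 0) > 0,           # premium feature in trial
        account.get("team_size", 0) >= 3,                       # team-size threshold
        account.get("weekly_sessions", 0) >= weekly_session_threshold,  # high frequency
    )
    return any(signals)

print(is_pql({"seats_used": 5, "team_size": 1}))  # True (seat limit reached)
print(is_pql({"weekly_sessions": 2}))             # False (no signal fires)
```

Whatever the rule, it should be scoreable without human judgment so PQLs can be routed automatically.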
### PLG Expansion Model
PLG growth compounds through account expansion:
```
Individual user discovers product
→ Gets value, invites teammates
→ Team adopts product
→ Becomes department-wide
→ Finance/IT gets involved
→ Enterprise contract
```
This is "bottom-up" enterprise: individual adoption precedes company-wide purchase. It's also the most defensible moat — when every engineer in the company uses your product individually, it is very hard for procurement to cancel it.
Expansion levers:
- Seat-based pricing (more users = more revenue, aligned with value)
- Usage-based pricing (more usage = more value = more revenue)
- Feature gating (team/enterprise features visible but gated, creating pull to upgrade)
- Admin discovery (usage reports surface to managers who didn't know they had a product champion)
### PLG Diagnostic
| Question | Healthy | Unhealthy |
|---|---|---|
| Time-to-value | < 30 minutes | > 2 hours |
| Activation rate | > 30% | < 15% |
| D30 retention | > 40% | < 20% |
| PQL conversion | > 15% | < 5% |
| NPS from self-serve users | > 40 | < 20 |
| Viral coefficient | > 0.3 | < 0.1 |
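The viral coefficient in the table is invites sent per user multiplied by the invite-to-signup rate. A quick sketch of the arithmetic:

```python
def viral_coefficient(users, invites_sent, invite_signups):
    """k = (invites per user) x (invite-to-signup rate).

    k > 1 means self-sustaining viral growth; the diagnostic above treats
    k > 0.3 as healthy for B2B, where true virality is rare.
    """
    if users == 0 or invites_sent == 0:
        return 0.0
    invites_per_user = invites_sent / users
    invite_conversion = invite_signups / invites_sent
    return invites_per_user * invite_conversion

print(viral_coefficient(users=1000, invites_sent=800, invite_signups=400))  # 0.4
```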
### PLG Team Structure

```
Head of Growth (often VP Product or VP Marketing)
├── Growth PM (owns activation and retention loops in product)
├── Growth Engineer (2-3 engineers dedicated to growth experiments)
├── Data Analyst (experimentation, funnel analysis, cohort reports)
└── Growth Marketer (acquisition, SEO, referral programs)
```
The growth team sits between product and marketing. This is intentional — they own the product loops that drive acquisition and retention.
## 2. Sales-Led Growth (SLG) Model

### The SLG System
In SLG, marketing's job is to fill the sales pipeline. Sales converts it. The system only works if marketing and sales agree on definitions, SLAs, and shared metrics.
The SLG funnel:
```
Awareness (Impressions, reach, brand search)
↓
Lead (Name + contact info captured)
↓
MQL — Marketing Qualified Lead (meets ICP criteria, intent signal detected)
↓ [Marketing → Sales handoff]
SAL — Sales Accepted Lead (sales reviews and accepts the lead)
↓
SQL — Sales Qualified Lead (sales confirms budget, authority, need, timeline)
↓
Opportunity (Formal deal in pipeline, has a close date)
↓
Closed-Won
```
**The MQL definition problem.** Most marketing-sales friction traces to an unclear MQL definition. The MQL should be:
- ICP-matched (company size, industry, role)
- Intent-signaled (visited pricing page, attended webinar, downloaded high-intent content)
- Not just email address + "subscribed to newsletter"
A concrete MQL definition:
> Company 50-500 employees, B2B SaaS, role is VP Engineering or CTO or CISO, AND has performed 2+ of: attended webinar, visited pricing page, requested demo, downloaded security report, attended event.
This definition makes the MQL useful. If you can't score it in your CRM without human judgment, it's not a definition — it's a guideline.
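The "score it in your CRM without human judgment" test is easy to apply: a real definition reduces to a boolean. A sketch of the example definition above, with hypothetical CRM field names:

```python
ICP_ROLES = {"VP Engineering", "CTO", "CISO"}
INTENT_SIGNALS = {"attended_webinar", "visited_pricing", "requested_demo",
                  "downloaded_security_report", "attended_event"}

def is_mql(lead):
    """ICP match AND 2+ intent signals, per the example definition above."""
    icp_match = (
        50 <= lead.get("company_size", 0) <= 500
        and lead.get("industry") == "B2B SaaS"
        and lead.get("role") in ICP_ROLES
    )
    signals = INTENT_SIGNALS & set(lead.get("signals", ()))
    return icp_match and len(signals) >= 2

lead = {"company_size": 120, "industry": "B2B SaaS", "role": "CTO",
        "signals": ["visited_pricing", "attended_webinar"]}
print(is_mql(lead))  # True
```

If your MQL rule cannot be written this way, sales and marketing are working from a guideline, not a definition.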
### SLG Conversion Rate Benchmarks
| Stage | Average B2B SaaS | Top Quartile |
|---|---|---|
| Lead → MQL | 5-15% | > 20% |
| MQL → SAL | 50-70% | > 75% |
| SAL → SQL | 30-50% | > 60% |
| SQL → Opportunity | 60-80% | > 85% |
| Opportunity → Closed-Won | 20-30% | > 40% |
**End-to-end (Lead → Closed-Won):** 1-5% (wide range by ACV and ICP quality)
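Because the funnel compounds multiplicatively, end-to-end conversion is far smaller than any single stage rate, and lifting one stage lifts the whole funnel proportionally. A sketch using illustrative midpoints of the average column above:

```python
from math import prod

def end_to_end(stage_rates):
    """Lead -> Closed-Won rate: the product of per-stage conversion rates."""
    return prod(stage_rates)

# Illustrative midpoints: Lead->MQL, MQL->SAL, SAL->SQL, SQL->Opp, Opp->Won
avg = [0.10, 0.60, 0.40, 0.70, 0.25]
print(f"{end_to_end(avg):.2%}")  # 0.42%
```

Doubling Lead→MQL (0.10 → 0.20) doubles end-to-end conversion, which is why the highest-leverage fix is usually the weakest stage.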
### Pipeline Coverage Mechanics
A healthy SLG pipeline has 3-4x coverage against quota.
If a sales rep has a $500K quarterly quota:
- They need $1.5M-$2M in active pipeline
- Pipeline must be distributed across stages (not all "prospecting")
- Stage distribution benchmark: 30% early, 40% mid, 30% late
Insufficient coverage (< 3x) is a leading indicator of a miss — by the time coverage is low, it's already too late to recover in the same quarter. Coverage should be tracked weekly.
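The coverage rule can be checked mechanically each week. A sketch; the report shape, field names, and thresholds are illustrative:

```python
def coverage_check(quota, early, mid, late, min_coverage=3.0):
    """Weekly pipeline-coverage check against the 3-4x rule above.

    Stage amounts are dollars of active pipeline; the stage-mix
    benchmark is roughly 30% early / 40% mid / 30% late.
    """
    total = early + mid + late
    coverage = total / quota
    mix = tuple(round(s / total, 2) for s in (early, mid, late)) if total else (0.0,) * 3
    flags = []
    if coverage < min_coverage:
        flags.append(f"coverage {coverage:.1f}x is below the {min_coverage:.0f}x floor")
    return {"coverage": round(coverage, 2), "mix": mix, "flags": flags}

report = coverage_check(quota=500_000, early=600_000, mid=500_000, late=300_000)
print(report["coverage"], report["flags"])  # 2.8x: flagged, needs more pipeline
```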
### SLG Demand Generation Channels
High-intent channels (bottom of funnel):
- Paid search on buying-intent keywords (e.g., "[competitor] alternative", "best [category] software")
- Review site presence (G2, Capterra) — buyers use these before vendor websites
- Outbound SDR targeting specific accounts (ABM)
Medium-intent channels (middle of funnel):
- Webinars and virtual events (capture active learners)
- Gated content (guides, benchmarks, templates — ICP-specific)
- Retargeting to website visitors
Awareness channels (top of funnel):
- Content and SEO (captures people learning about the problem)
- Podcast sponsorships, industry media
- Conference sponsorship and speaking
- Paid social (LinkedIn for B2B)
### ABM (Account-Based Marketing) in SLG
ABM flips the funnel: instead of generating leads and filtering for good ones, you start with target accounts and run coordinated campaigns against them.
Tiers:
- Tier 1 (1:1): 5-20 strategic accounts, fully customized campaigns, dedicated SDR+AE pairs, executive outreach
- Tier 2 (1:few): 50-200 accounts, programmatic personalization, SDR sequences, targeted events
- Tier 3 (1:many): 500+ accounts, standard campaigns with light personalization
ABM requires tight sales/marketing alignment. If sales doesn't work the accounts marketing targets, ABM produces zero results.
3. Community-Led Growth (CLG)
The CLG Thesis
Community-led growth works when:
- Your buyers want to learn from peers, not vendors
- There's a strong practitioner identity (developers, data teams, security, FinOps)
- Your category is complex enough that buyers need education before purchasing
- You can commit to building genuine community, not a marketing channel in disguise
The fundamental rule of CLG: The community must deliver value to members whether or not they ever buy your product. If the only purpose of the community is to sell to members, the community will die.
CLG Stages
**Stage 1: Find the community**
The community often exists before you build it. Find where your practitioners already gather:
- Slack groups, Discord servers
- Subreddits and LinkedIn groups
- Conference hallways
- Open-source repositories
Before building, participate. Earn trust. Understand the conversations.
**Stage 2: Become the knowledge hub**
Establish your company as the best source of information on the category problem:
- Publish the benchmark study everyone references
- Host the conference that defines the industry
- Create the certification practitioners want on their resume
- Open-source the tools the community needs
**Stage 3: Build the platform**
Create a dedicated community space (Slack, Discord, forum):
- Community must be practitioner-first, not vendor-first
- Community managers who genuinely care about member value
- Content from members, not just from your company
- Events that build member relationships, not just product demos
**Stage 4: Convert community to customers**
Community members who become customers do so because they trust you, not because you sold them. Conversion paths:
- Community members see peer success with your product
- Product-qualified signals from community members who trial the product
- Direct outreach from sales to active community members (with permission and context)
- Enterprise deals from companies whose employees are active in the community
CLG Metrics
| Metric | Definition | Health Signal |
|---|---|---|
| Monthly active members | Members who post, comment, or engage | > 15% of total members |
| Community-sourced pipeline | $ pipeline where community was first touch | Track and trend |
| Community-influenced pipeline | $ pipeline with any community touchpoint | > 30% of total pipeline |
| NPS of community members vs. non-members | Loyalty difference | Community members should score 20+ pts higher |
| Member-generated content % | % of content posted by non-employees | > 60% is healthy community |
| Time from community join to product trial | Days between joining the community and starting a product trial | Shortens as community matures |
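Two of the thresholds above (monthly active rate and member-generated content share) are simple ratios to monitor. A small health-check sketch; the function and field names are illustrative, not a standard schema:

```python
def community_health(total_members: int, active_members: int,
                     member_posts: int, employee_posts: int) -> dict:
    """Check two benchmarks: >15% monthly active, >60% member-generated content."""
    active_rate = active_members / total_members
    ugc_share = member_posts / (member_posts + employee_posts)
    return {
        "active_rate": active_rate,
        "active_ok": active_rate > 0.15,
        "ugc_share": ugc_share,
        "ugc_ok": ugc_share > 0.60,
    }

print(community_health(total_members=4_000, active_members=700,
                       member_posts=130, employee_posts=70))
```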
CLG Anti-Patterns
- Community as a newsletter: If members can't interact with each other, it's not a community — it's a list.
- Product launches in the community: Nothing kills community trust faster than using it for sales announcements.
- Community without a community manager: Communities left to run themselves become ghost towns or turn toxic.
- Measuring community by member count: Ghost members are noise. Active engagement is signal.
4. Hybrid Growth Models
PLG + SLG ("Product-Led Sales" or PLS)
The most common hybrid at growth stage. PLG handles SMB self-serve; sales closes enterprise.
The PQL-to-sales handoff:
Define the triggers that move a product-qualified lead to a sales-assisted motion:
- Company has > X users (e.g., 10+ users on a team account)
- Usage exceeds Y threshold in 30 days
- Account is a named target in the ABM list
- User explicitly requested a demo or upgrade assistance
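The triggers above can be encoded as a simple routing check that decides when an account moves from self-serve to sales-assisted. A sketch under assumed field names and thresholds, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Account:
    seats: int            # users on the team account
    events_30d: int       # usage events in the last 30 days
    is_abm_target: bool   # named account on the ABM list
    requested_demo: bool  # explicit hand-raise

def route_to_sales(acct: Account, seat_threshold: int = 10,
                   usage_threshold: int = 1_000) -> bool:
    """True if any PQL-to-sales trigger fires; otherwise stay self-serve."""
    return (
        acct.seats >= seat_threshold
        or acct.events_30d >= usage_threshold
        or acct.is_abm_target
        or acct.requested_demo
    )

# 12 seats crosses the team-size trigger, so this account routes to sales.
print(route_to_sales(Account(seats=12, events_30d=300,
                             is_abm_target=False, requested_demo=False)))
```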
The risk: Sales team ignores PLG pipeline because deal size is smaller. Fix: separate quotas and commission structures for self-serve expansion vs. new enterprise logos.
The opportunity: PLG creates pre-qualified champions inside accounts. Sales doesn't have to create interest — they convert it. Win rates in PLS motions are typically 30-50% higher than cold outbound.
SLG + CLG
Community builds brand and generates inbound pipeline for sales.
This hybrid works when:
- Sales cycles are long (6-18 months)
- Buyers do extensive research before engaging with vendors
- The community validates your credibility before sales conversations begin
The integration:
- Community team feeds content insights to demand gen
- Event attendees become high-priority SDR sequences
- Active community members get dedicated AE outreach with community context
- Win/loss analysis includes community touchpoints
PLG + CLG
The developer/open-source hybrid. PLG handles product adoption; community handles advocacy and content.
Examples: HashiCorp (Terraform community + enterprise sales), Elastic (open-source + community + commercial), Tailscale (developer community + self-serve + enterprise).
How it compounds:
Community member learns from community content
→ Discovers open-source or free tier
→ Gets value in first session
→ Shares experience in community
→ New members discover product through community content
5. Growth Loops vs. Funnels
The Difference
A funnel is linear. It requires constant input at the top to produce output at the bottom. If you stop feeding it, it stops producing.
A growth loop is cyclical. Output from one stage becomes input to the next. The system compounds.
Common Growth Loops
Viral loop:
User gets value → Invites colleague → Colleague signs up →
Colleague invites another colleague → ...
Viral coefficient (K) = (Average invites per user) × (Conversion rate of invites)
- K > 1: Exponential growth (rare)
- K 0.5-1: Strong viral assist
- K < 0.3: Viral is not a meaningful growth driver
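The K formula has a useful consequence: when K < 1 the invite chain is a geometric series, so a seed cohort tops out near seed / (1 - K) rather than growing forever. A minimal sketch with illustrative invite and conversion numbers:

```python
def viral_coefficient(avg_invites: float, invite_conversion: float) -> float:
    """K = average invites per user x conversion rate of those invites."""
    return avg_invites * invite_conversion

def cohort_after_cycles(seed: int, k: float, cycles: int) -> float:
    """Total users after n invite cycles: seed * (1 + k + k^2 + ... + k^n)."""
    return seed * sum(k ** i for i in range(cycles + 1))

k = viral_coefficient(avg_invites=3, invite_conversion=0.25)  # K = 0.75
print(round(cohort_after_cycles(1_000, k, cycles=10)))
```

With K = 0.75 the cohort lands around 3,800 after ten cycles, approaching the 1,000 / (1 - 0.75) = 4,000 ceiling: a strong viral assist, consistent with the K 0.5-1 band above, but not self-sustaining growth.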
Content SEO loop:
Publish content on [topic] → Ranks in search →
Drives signups → Users share content → Builds backlinks →
Better rankings → More content is possible
This loop takes 12-24 months to activate but is extraordinarily defensible once running.
UGC (User-Generated Content) loop:
Users share their work publicly (templates, analyses, portfolios) →
Others discover the work → They find the product →
They create and share their own work → ...
Figma, Notion, Airtable, Canva — all run this loop.
Data network effect loop:
More users → More data → Better product →
More users attracted → ...
LinkedIn, Waze, Duolingo — accuracy or relevance improves as the user base grows.
Integration loop:
Product integrates with X → X's users discover your product →
More integrations possible → More discovery surfaces → ...
Zapier, Slack apps, Salesforce AppExchange — being in the ecosystem creates distribution.
Building a Growth Loop
**Step 1: Map the current funnel**
Where do customers come from? What are the conversion steps?
**Step 2: Find the output**
What does a successful customer produce?
- Invite emails
- Shared content
- Public work visible to others
- Reviews or testimonials
**Step 3: Design the loop**
How does that output become tomorrow's input to acquisition?
- If they share → is there a landing page that captures the new visitor?
- If they invite → is the invite experience friction-free?
- If they create content → does it rank in search or appear in relevant communities?
**Step 4: Measure loop velocity**
For each loop, measure:
- Cycle time: How long does one full cycle take?
- Conversion at each step: Where does the loop break down?
- Loop coefficient: How many new users does one existing user generate?
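Cycle time and loop coefficient interact: over a fixed horizon, a faster loop can out-compound a "stronger" but slower one because it completes more cycles. An illustrative comparison (all numbers assumed for the example):

```python
def users_after_days(seed: float, loop_coefficient: float,
                     cycle_days: float, horizon_days: int) -> float:
    """Users accumulated when each completed cycle compounds at the loop coefficient."""
    cycles = int(horizon_days // cycle_days)
    return seed * sum(loop_coefficient ** i for i in range(cycles + 1))

# Weekly loop at coefficient 0.7 vs. monthly loop at 0.8, over one quarter:
fast = users_after_days(1_000, loop_coefficient=0.7, cycle_days=7, horizon_days=90)
slow = users_after_days(1_000, loop_coefficient=0.8, cycle_days=30, horizon_days=90)
print(round(fast), round(slow))  # the weekly loop wins despite the lower coefficient
```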
6. When to Switch Growth Models
The Warning Signs
PLG-to-SLG triggers:
- Enterprise accounts are signing up via PLG but aren't expanding without human intervention
- Average deal sizes in enterprise are 10-20x SMB, and you're leaving revenue on the table
- Product adoption in enterprise requires configuration or integration that needs support
- PLG accounts churn at higher rates than sales-assisted accounts
SLG-to-PLG/PLS triggers:
- CAC is increasing year-over-year as competition for sales talent intensifies
- Smaller competitors are winning deals with self-serve
- Customers are asking "can I just try this myself?"
- ACV is declining as the market matures and products commoditize
- Sales team efficiency (revenue per sales rep) is declining
Adding CLG to existing motion:
- Sales cycles are long and trust is the primary barrier
- SEO and content are generating traffic but low conversion (awareness without trust)
- Competitors are building community and you're not present
- Customer success teams report that customers who participate in user groups retain better
The Transition Playbook
**Phase 1: Prove it before scaling (months 1-6)**
Don't restructure the team to support the new model before proving it works.
- Run a pilot: 3-5 SDRs testing PLG signals as outreach triggers (for PLG → PLS)
- Or: Launch a beta community with 100 core customers (for adding CLG)
- Measure the metrics of the new model, compare to current model
**Phase 2: Parallel running (months 6-12)**
Run both models simultaneously. Don't kill the current model while building the new one.
- Set clear boundaries on which accounts go to which motion
- Build dedicated teams for each model (don't ask the same people to do both)
- Define success metrics for the new model independently
**Phase 3: Rebalance (months 12-18)**
Once the new model proves its unit economics:
- Shift headcount and budget to the more efficient model
- Keep the old model for the segments where it still works
- Document what the new model requires to sustain itself
The anti-pattern: Announcing a model shift without proof, restructuring the team, and discovering after 12 months that the new model doesn't work. By then, the old model's momentum is gone and you've burned a year.
Growth Model Maturity Matrix
| Dimension | PLG | SLG | CLG |
|---|---|---|---|
| Time to first results | 3-6 months | 1-3 months | 12-18 months |
| Requires up-front product investment | High | Low | Medium |
| Scales without linear headcount | Yes | No | Yes |
| Predictable pipeline | Low (early) | High | Low (early) |
| CAC trend over time | Decreases | Flat/increases | Decreases |
| Works for ACV > $50K | Only with SLG assist | Yes | Yes |
| Works for ACV < $5K | Yes | No | Only with PLG |
| Defensibility once established | High | Low | Very high |