Alireza Rezvani a68ae3a05e Dev (#305)
* chore: update gitignore for audit reports and playwright cache

* fix: add YAML frontmatter (name + description) to all SKILL.md files

- Added frontmatter to 34 skills that were missing it entirely (0% Tessl score)
- Fixed name field format to kebab-case across all 169 skills
- Resolves #284

* chore: sync codex skills symlinks [automated]

* fix: optimize 14 low-scoring skills via Tessl review (#290)

Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286.

* chore: sync codex skills symlinks [automated]

* fix: optimize 18 skills via Tessl review + compliance fix (closes #287) (#291)

Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287.

* feat: add scripts and references to 4 prompt-only skills + Tessl optimization (#292)

Phase 2: 3 new scripts + 2 reference files for prompt-only skills. Tessl 45-55% → 94-100%.

* feat: add 6 agents + 5 slash commands for full coverage (v2.7.0) (#293)

Phase 3: 6 new agents (all 9 categories covered) + 5 slash commands.

* fix: Phase 5 verification fixes + docs update (#294)

Phase 5 verification fixes

* chore: sync codex skills symlinks [automated]

* fix: marketplace audit — all 11 plugins validated by Claude Code (#295)

Marketplace audit: all 11 plugins validated + installed + tested in Claude Code

* fix: restore 7 removed plugins + revert playwright-pro name to pw

Reverts two overly aggressive audit changes:
- Restored content-creator, demand-gen, fullstack-engineer, aws-architect,
  product-manager, scrum-master, skill-security-auditor to marketplace
- Reverted playwright-pro plugin.json name back to 'pw' (intentional short name)

* refactor: split 21 over-500-line skills into SKILL.md + references (#296)

* chore: sync codex skills symlinks [automated]

* docs: update all documentation with accurate counts and regenerated skill pages

- Update skill count to 170, Python tools to 213, references to 314 across all docs
- Regenerate all 170 skill doc pages from latest SKILL.md sources
- Update CLAUDE.md with v2.1.1 highlights, accurate architecture tree, and roadmap
- Update README.md badges and overview table
- Update marketplace.json metadata description and version
- Update mkdocs.yml, index.md, getting-started.md with correct numbers

* fix: add root-level SKILL.md and .codex/instructions.md to all domains (#301)

Root cause: CLI tools (ai-agent-skills, agent-skills-cli) look for SKILL.md
at the specified install path. 7 of 9 domain directories were missing this
file, causing "Skill not found" errors for bundle installs like:
  npx ai-agent-skills install alirezarezvani/claude-skills/engineering-team

Fix:
- Add root-level SKILL.md with YAML frontmatter to 7 domains
- Add .codex/instructions.md to 8 domains (for Codex CLI discovery)
- Update INSTALLATION.md with accurate skill counts (53→170)
- Add troubleshooting entry for "Skill not found" error

All 9 domains now have: SKILL.md + .codex/instructions.md + plugin.json

Closes #301

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add Gemini CLI + OpenClaw support, fix Codex missing 25 skills

Gemini CLI:
- Add GEMINI.md with activation instructions
- Add scripts/gemini-install.sh setup script
- Add scripts/sync-gemini-skills.py (194 skills indexed)
- Add .gemini/skills/ with symlinks for all skills, agents, commands
- Remove phantom medium-content-pro entries from sync script
- Add top-level folder filter to prevent gitignored dirs from leaking

Codex CLI:
- Fix sync-codex-skills.py missing "engineering" domain (25 POWERFUL skills)
- Regenerate .codex/skills-index.json: 124 → 149 skills
- Add 25 new symlinks in .codex/skills/

OpenClaw:
- Add OpenClaw installation section to INSTALLATION.md
- Add ClawHub install + manual install + YAML frontmatter docs

Documentation:
- Update INSTALLATION.md with all 4 platforms + accurate counts
- Update README.md: "three platforms" → "four platforms" + Gemini quick start
- Update CLAUDE.md with Gemini CLI support in v2.1.1 highlights
- Update SKILL-AUTHORING-STANDARD.md + SKILL_PIPELINE.md with Gemini steps
- Add OpenClaw + Gemini to installation locations reference table

Marketplace: all 18 plugins validated — sources exist, SKILL.md present

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(product,pm): world-class product & PM skills audit — 6 scripts, 5 agents, 7 commands, 23 references/assets

Phase 1 — Agent & Command Foundation:
- Rewrite cs-project-manager agent (55→515 lines, 4 workflows, 6 skill integrations)
- Expand cs-product-manager agent (408→684 lines, orchestrates all 8 product skills)
- Add 7 slash commands: /rice, /okr, /persona, /user-story, /sprint-health, /project-health, /retro

Phase 2 — Script Gap Closure (2,779 lines):
- jira-expert: jql_query_builder.py (22 patterns), workflow_validator.py
- confluence-expert: space_structure_generator.py, content_audit_analyzer.py
- atlassian-admin: permission_audit_tool.py
- atlassian-templates: template_scaffolder.py (Confluence XHTML generation)

Phase 3 — Reference & Asset Enrichment:
- 9 product references (competitive-teardown, landing-page-generator, saas-scaffolder)
- 6 PM references (confluence-expert, atlassian-admin, atlassian-templates)
- 7 product assets (templates for PRD, RICE, sprint, stories, OKR, research, design system)
- 1 PM asset (permission_scheme_template.json)

Phase 4 — New Agents:
- cs-agile-product-owner, cs-product-strategist, cs-ux-researcher

Phase 5 — Integration & Polish:
- Related Skills cross-references in 8 SKILL.md files
- Updated product-team/CLAUDE.md (5→8 skills, 6→9 tools, 4 agents, 5 commands)
- Updated project-management/CLAUDE.md (0→12 scripts, 3 commands)
- Regenerated docs site (177 pages), updated homepage and getting-started

Quality audit: 31 files reviewed, 29 PASS, 2 fixed (copy-frameworks.md, governance-framework.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: audit and repair all plugins, agents, and commands

- Fix 12 command files: correct CLI arg syntax, script paths, and usage docs
- Fix 3 agents with broken script/reference paths (cs-content-creator,
  cs-demand-gen-specialist, cs-financial-analyst)
- Add complete YAML frontmatter to 5 agents (cs-growth-strategist,
  cs-engineering-lead, cs-senior-engineer, cs-financial-analyst,
  cs-quality-regulatory)
- Fix cs-ceo-advisor related agent path
- Update marketplace.json metadata counts (224 tools, 341 refs, 14 agents,
  12 commands)

Verified: all 19 scripts pass --help, all 14 agent paths resolve, mkdocs
builds clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: repair 25 Python scripts failing --help across all domains

- Fix Python 3.10+ syntax (float | None → Optional[float]) in 2 scripts
- Add argparse CLI handling to 9 marketing scripts using raw sys.argv
- Fix 10 scripts crashing at module level (wrap in __main__, add argparse)
- Make yaml/prefect/mcp imports conditional with stdlib fallbacks (4 scripts)
- Fix f-string backslash syntax in project_bootstrapper.py
- Fix -h flag conflict in pr_analyzer.py
- Fix tech-debt.md description (score → prioritize)

All 237 scripts now pass python3 --help verification.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(product-team): close 3 verified gaps in product skills

- Fix competitive-teardown/SKILL.md: replace broken references
  DATA_COLLECTION.md → references/data-collection-guide.md and
  TEMPLATES.md → references/analysis-templates.md (workflow was broken
  at steps 2 and 4)

- Upgrade landing_page_scaffolder.py: add TSX + Tailwind output format
  (--format tsx) matching SKILL.md promise of Next.js/React components.
  4 design styles (dark-saas, clean-minimal, bold-startup, enterprise).
  TSX is now default; HTML preserved via --format html

- Rewrite README.md: fix stale counts (was 5 skills/15+ tools, now
  accurately shows 8 skills/9 tools), remove 7 ghost scripts that
  never existed (sprint_planner.py, velocity_tracker.py, etc.)

- Fix tech-debt.md description (score → prioritize)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* release: v2.1.2 — landing page TSX output, brand voice integration, docs update

- Landing page generator defaults to Next.js TSX + Tailwind CSS (4 design styles)
- Brand voice analyzer integrated into landing page generation workflow
- CHANGELOG, CLAUDE.md, README.md updated for v2.1.2
- All 13 plugin.json + marketplace.json bumped to 2.1.2
- Gemini/Codex skill indexes re-synced
- Backward compatible: --format html preserved, no breaking changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 09:48:49 +01:00


---
name: competitive-teardown
description: Analyzes competitor products and companies by synthesizing data from pricing pages, app store reviews, job postings, SEO signals, and social media into structured competitive intelligence. Produces feature comparison matrices scored across 12 dimensions, SWOT analyses, positioning maps, UX audits, pricing model breakdowns, action item roadmaps, and stakeholder presentation templates. Use when conducting competitor analysis, comparing products against competitors, researching the competitive landscape, building battle cards for sales, preparing for a product strategy or roadmap session, responding to a competitor's new feature or pricing change, or performing a quarterly competitive review.
---

Competitive Teardown

Tier: POWERFUL
Category: Product Team
Domain: Competitive Intelligence, Product Strategy, Market Analysis


When to Use

  • Before a product strategy or roadmap session
  • When a competitor launches a major feature or pricing change
  • Quarterly competitive review
  • Before a sales pitch where you need battle card data
  • When entering a new market segment

Teardown Workflow

Follow these steps in sequence to produce a complete teardown:

  1. Define competitors — List 2–4 competitors to analyze. Confirm which is the primary focus.
  2. Collect data — Use references/data-collection-guide.md to gather raw signals from at least 3 sources per competitor (website, reviews, job postings, SEO, social).
    Validation checkpoint: Before proceeding, confirm you have pricing data, at least 20 reviews, and job posting counts for each competitor.
  3. Score using rubric — Apply the 12-dimension rubric below to produce a numeric scorecard for each competitor and your own product.
    Validation checkpoint: Every dimension should have a score and at least one supporting evidence note.
  4. Generate outputs — Populate the templates in references/analysis-templates.md (Feature Matrix, Pricing Analysis, SWOT, Positioning Map, UX Audit).
  5. Build action plan — Translate findings into the Action Items template (quick wins / medium-term / strategic).
  6. Package for stakeholders — Assemble the Stakeholder Presentation using outputs from steps 3–5.
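The step-2 validation checkpoint above can be expressed as a small gate function. This is an illustrative sketch, not part of the skill's scripts; the field names (`pricing`, `reviews`, `job_postings`) are assumptions about how collected data might be structured.

```python
# Hypothetical gate for the step-2 checkpoint: pricing data captured,
# at least 20 reviews, and a known job-posting count per competitor.
def checkpoint_ok(competitor: dict) -> bool:
    """Return True when a competitor has enough raw data to score."""
    return (
        bool(competitor.get("pricing"))                  # pricing tiers captured
        and len(competitor.get("reviews", [])) >= 20     # at least 20 reviews
        and competitor.get("job_postings") is not None   # posting count known
    )

acme = {"pricing": ["Free", "Pro $29"], "reviews": ["r"] * 25, "job_postings": 14}
print(checkpoint_ok(acme))  # True
```

Running this per competitor before step 3 keeps the scoring rubric from being applied to incomplete data.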

Data Collection Guide

Full executable scripts for each source are in references/data-collection-guide.md. Summaries of what to capture are below.

1. Website Analysis

Key things to capture:

  • Pricing tiers and price points
  • Feature lists per tier
  • Primary CTA and messaging
  • Case studies / customer logos (signals ICP)
  • Integration logos
  • Trust signals (certifications, compliance badges)

2. App Store Reviews

Review sentiment categories:

  • Praise → what users love (defend / strengthen these)
  • Feature requests → unmet needs (opportunity gaps)
  • Bugs → quality signals
  • UX complaints → friction points you can beat them on

Sample App Store query (iTunes Search API):

```
GET https://itunes.apple.com/search?term=<competitor_name>&entity=software&limit=1
# Extract trackId, then:
GET https://itunes.apple.com/rss/customerreviews/id=<trackId>/sortBy=mostRecent/json?l=en&limit=50
```

Parse `entry[].content.label` for review text and `entry[].im:rating.label` for star rating.
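The parsing step can be sketched as follows. This is a minimal offline sketch assuming the feed shape named above (`feed.entry[].content.label`, `entry[]["im:rating"].label`); the HTTP fetch is omitted, and entries without a rating (such as the feed's app-metadata entry) are skipped.

```python
# Extract review text and star rating from the iTunes customer-reviews
# RSS JSON shape described in the text. Offline sketch: takes a parsed dict.
def parse_reviews(feed: dict) -> list[dict]:
    entries = feed.get("feed", {}).get("entry", [])
    reviews = []
    for entry in entries:
        text = entry.get("content", {}).get("label")
        rating = entry.get("im:rating", {}).get("label")
        if text and rating:  # skip entries with no rating (e.g. app metadata)
            reviews.append({"text": text, "rating": int(rating)})
    return reviews

sample = {"feed": {"entry": [
    {"content": {"label": "Confusing navigation"}, "im:rating": {"label": "2"}},
]}}
print(parse_reviews(sample))  # [{'text': 'Confusing navigation', 'rating': 2}]
```

The extracted `rating` values feed directly into the sentiment categories above (praise vs. bugs vs. UX complaints).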

3. Job Postings (Team Size & Tech Stack Signals)

Signals from job postings:

  • Engineering volume → scaling vs. consolidating
  • Specific tech mentions → stack (React/Vue, Postgres/Mongo, AWS/GCP)
  • Sales/CS ratio → product-led vs. sales-led motion
  • Data/ML roles → upcoming AI features
  • Compliance roles → regulatory expansion
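The sales/CS-ratio signal can be turned into a rough heuristic. This is an illustrative assumption, not a rule from the skill: when open go-to-market roles outnumber open engineering roles, the motion is more likely sales-led.

```python
# Rough heuristic: infer go-to-market motion from open-role counts.
# The threshold (GTM > engineering) is an assumption for illustration.
def infer_motion(open_roles: dict[str, int]) -> str:
    eng = open_roles.get("engineering", 0)
    gtm = open_roles.get("sales", 0) + open_roles.get("customer_success", 0)
    return "sales-led" if gtm > eng else "product-led"

print(infer_motion({"engineering": 12, "sales": 3, "customer_success": 2}))  # product-led
```

Treat the output as one signal among several, to be weighed against pricing model and CTA messaging from the website analysis.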

4. SEO Analysis

SEO signals to capture:

  • Top 20 organic keywords (intent: informational / navigational / commercial)
  • Domain Authority / backlink count
  • Blog publishing cadence and topics
  • Which pages rank (product pages vs. blog vs. docs)
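Keyword intent can be bucketed with a naive classifier. The trigger-word list below is an illustrative assumption, not a standard taxonomy: brand mentions read as navigational, purchase/comparison terms as commercial, everything else as informational.

```python
# Naive keyword-intent classifier for the SEO capture step.
# COMMERCIAL trigger words are assumptions chosen for illustration.
COMMERCIAL = {"buy", "pricing", "vs", "alternative", "alternatives", "best"}

def classify_intent(keyword: str, brand: str) -> str:
    tokens = keyword.lower().split()
    if brand.lower() in tokens:
        return "navigational"
    if any(token in COMMERCIAL for token in tokens):
        return "commercial"
    return "informational"

print(classify_intent("best crm software", "acme"))  # commercial
```

Applied to a competitor's top 20 organic keywords, the resulting intent mix hints at whether their SEO strategy targets buyers or top-of-funnel readers.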

5. Social Media Sentiment

Capture recent mentions via Twitter/X API v2, Reddit, or LinkedIn. Look for recurring praise, complaints, and feature requests. See references/data-collection-guide.md for API query examples.


Scoring Rubric (12 Dimensions, 1-5)

| # | Dimension | 1 (Weak) | 3 (Average) | 5 (Best-in-class) |
|---|-----------|----------|-------------|-------------------|
| 1 | Features | Core only, many gaps | Solid coverage | Comprehensive + unique |
| 2 | Pricing | Confusing / overpriced | Market-rate, clear | Transparent, flexible, fair |
| 3 | UX | Confusing, high friction | Functional | Delightful, minimal friction |
| 4 | Performance | Slow, unreliable | Acceptable | Fast, high uptime |
| 5 | Docs | Sparse, outdated | Decent coverage | Comprehensive, searchable |
| 6 | Support | Email only, slow | Chat + email | 24/7, great response |
| 7 | Integrations | 0-5 integrations | 6-25 | 26+ or deep ecosystem |
| 8 | Security | No mentions | SOC2 claimed | SOC2 Type II, ISO 27001 |
| 9 | Scalability | No enterprise tier | Mid-market ready | Enterprise-grade |
| 10 | Brand | Generic, unmemorable | Decent positioning | Strong, differentiated |
| 11 | Community | None | Forum / Slack | Active, vibrant community |
| 12 | Innovation | No recent releases | Quarterly | Frequent, meaningful |

Example completed row (Competitor: Acme Corp, Dimension 3: UX):

| Dimension | Acme Corp Score | Evidence |
|-----------|-----------------|----------|
| UX | 2 | App Store reviews cite "confusing navigation" (38 mentions); onboarding requires 7 steps before TTFV; no onboarding wizard; CC required at signup. |

Apply this pattern to all 12 dimensions for each competitor.
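Aggregating the completed rows can be sketched as below. Equal weights are assumed because the skill text does not specify a weighting scheme; the function also enforces the checkpoint that every dimension is scored.

```python
# Minimal scorecard aggregator for the 12-dimension rubric (equal weights
# assumed). Raises if any dimension is unscored, mirroring the step-3
# validation checkpoint.
DIMENSIONS = ["Features", "Pricing", "UX", "Performance", "Docs", "Support",
              "Integrations", "Security", "Scalability", "Brand", "Community",
              "Innovation"]

def total_score(scores: dict[str, int]) -> int:
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    return sum(scores[d] for d in DIMENSIONS)  # max 60 (12 dimensions x 5)

acme = {d: 3 for d in DIMENSIONS}
acme["UX"] = 2  # from the example row above
print(total_score(acme))  # 35
```

Running the same aggregation for your own product gives the head-to-head totals used in the Feature Comparison Matrix.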


Templates

Full template markdown is in references/analysis-templates.md. Abbreviated reference below.

Feature Comparison Matrix

Rows: core features, pricing tiers, platform capabilities (web, iOS, Android, API).
Columns: your product + up to 3 competitors.
Score each cell 1–5. Sum to get a total out of 60.
Score legend: 5=Best-in-class, 4=Strong, 3=Average, 2=Below average, 1=Weak/Missing

Pricing Analysis

Capture per competitor: model type (per-seat / usage-based / flat rate / freemium), entry/mid/enterprise price points, free trial length.
Summarize: price leader, value leader, premium positioning, your position, and 2–3 pricing opportunity bullets.

SWOT Analysis

For each competitor: 3–5 bullets per quadrant (Strengths, Weaknesses, Opportunities for us, Threats to us). Anchor every bullet to a data signal (review quote, job posting count, pricing page, etc.).

Positioning Map

2x2 axes (e.g., Simple ↔ Complex / Low Value ↔ High Value). Place each competitor and your product. Bubble size = market share or funding. See references/analysis-templates.md for ASCII and editable versions.
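Placing products into the 2x2 can be reduced to a toy quadrant function. The axis names follow the Simple ↔ Complex / Low Value ↔ High Value example above; the normalized 0–1 coordinates and the 0.5 cutoff are assumptions for illustration.

```python
# Toy quadrant assignment for the 2x2 positioning map. Coordinates are
# normalized 0-1; the 0.5 midpoint split is an illustrative assumption.
def quadrant(complexity: float, value: float) -> str:
    horizontal = "Complex" if complexity >= 0.5 else "Simple"
    vertical = "High Value" if value >= 0.5 else "Low Value"
    return f"{horizontal} / {vertical}"

print(quadrant(0.2, 0.8))  # Simple / High Value
```

Bubble size (market share or funding) is layered on separately when the map is drawn.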

UX Audit Checklist

Onboarding: TTFV (minutes), steps to activation, CC-required, onboarding wizard quality.
Key workflows: steps, friction points, comparative score (yours vs. theirs).
Mobile: iOS/Android ratings, feature parity, top complaint and praise.
Navigation: global search, keyboard shortcuts, in-app help.

Action Items

| Horizon | Effort | Examples |
|---------|--------|----------|
| Quick wins (0–4 wks) | Low | Add review badges, publish comparison landing page |
| Medium-term (1–3 mo) | Moderate | Launch free tier, improve onboarding TTFV, add top-requested integration |
| Strategic (3–12 mo) | High | Enter new market, build API v2, achieve SOC2 Type II |

Stakeholder Presentation (7 slides)

  1. Executive Summary — Threat level (LOW/MEDIUM/HIGH/CRITICAL), top strength, top opportunity, recommended action
  2. Market Position — 2x2 positioning map
  3. Feature Scorecard — 12-dimension radar or table, total scores
  4. Pricing Analysis — Comparison table + key insight
  5. UX Highlights — What they do better (3 bullets) vs. where we win (3 bullets)
  6. Voice of Customer — Top 3 review complaints (quoted or paraphrased)
  7. Our Action Plan — Quick wins, medium-term, strategic priorities; Appendix with raw data

Related Skills

  • Product Strategist (product-team/product-strategist/) — Competitive insights feed OKR and strategy planning
  • Landing Page Generator (product-team/landing-page-generator/) — Competitive positioning informs landing page messaging