feat: add product discovery skill with assumption mapper
product-team/product-discovery/SKILL.md (new file, 114 lines)
@@ -0,0 +1,114 @@
---
name: product-discovery
description: Use when validating product opportunities, mapping assumptions, planning discovery sprints, or testing problem-solution fit before committing delivery resources.
---

# Product Discovery

Run structured discovery to identify high-value opportunities and de-risk product bets.

## When To Use

Use this skill for:

- Opportunity Solution Tree facilitation
- Assumption mapping and test planning
- Problem validation interviews and evidence synthesis
- Solution validation with prototypes/experiments
- Discovery sprint planning and outputs

## Core Discovery Workflow

1. Define desired outcome
   - Set one measurable outcome to improve.
   - Establish a baseline and a target horizon.

2. Build the Opportunity Solution Tree (OST)
   - Outcome -> opportunities -> solution ideas -> experiments
   - Keep opportunities grounded in user evidence, not internal opinions.

3. Map assumptions
   - Identify desirability, viability, feasibility, and usability assumptions.
   - Score assumptions by risk and certainty.

   Use:

   ```bash
   python3 scripts/assumption_mapper.py assumptions.csv
   ```
4. Validate the problem
   - Conduct interviews and behavior analysis.
   - Confirm frequency, severity, and willingness to solve.
   - Reject weak opportunities early.

5. Validate the solution
   - Prototype before building.
   - Run concept, usability, and value tests.
   - Measure behavior, not only stated preference.

6. Plan the discovery sprint
   - 1-2 week cycle with explicit hypotheses
   - Daily evidence reviews
   - End with a decision: proceed, pivot, or stop

## Opportunity Solution Tree (Teresa Torres)

Structure:

- Outcome: the metric you want to move
- Opportunities: unmet customer needs/pains
- Solutions: candidate interventions
- Experiments: fastest learning actions

Quality checks:

- At least 3 distinct opportunities before converging.
- At least 2 experiments per top opportunity.
- Tie every branch to an evidence source.
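The quality checks above can be applied mechanically to a simple tree representation. A minimal sketch under illustrative names (these classes are not part of the skill's tooling):

```python
from dataclasses import dataclass, field


@dataclass
class Opportunity:
    need: str
    evidence_source: str  # every branch must tie back to an evidence source
    solutions: list[str] = field(default_factory=list)
    experiments: list[str] = field(default_factory=list)


@dataclass
class OpportunitySolutionTree:
    outcome: str
    opportunities: list[Opportunity] = field(default_factory=list)

    def quality_issues(self) -> list[str]:
        """Report violations of the OST quality checks listed above."""
        issues: list[str] = []
        if len(self.opportunities) < 3:
            issues.append("fewer than 3 distinct opportunities")
        for opp in self.opportunities:
            if len(opp.experiments) < 2:
                issues.append(f"'{opp.need}' has fewer than 2 experiments")
            if not opp.evidence_source:
                issues.append(f"'{opp.need}' lacks an evidence source")
        return issues
```

Running `quality_issues()` after each interview or test keeps the convergence criteria honest rather than aspirational.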
## Assumption Mapping

Assumption categories:

- Desirability: users want this
- Viability: business value exists
- Feasibility: the team can build/operate it
- Usability: users can successfully use it

Prioritization rule:

- High-risk, low-certainty assumptions are tested first.
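The bundled script reduces this rule to a single score, `priority = risk * (1 - certainty)`. A worked example of how the ordering falls out:

```python
def priority_score(risk: float, certainty: float) -> float:
    # Same formula as scripts/assumption_mapper.py: high risk combined
    # with low certainty yields the highest score, so it is tested first.
    return risk * (1.0 - certainty)


# risk 0.9, certainty 0.2 -> 0.9 * 0.8 = 0.72: test this one first
# risk 0.9, certainty 0.9 -> 0.9 * 0.1 = 0.09: risky but already well understood
# risk 0.1, certainty 0.9 -> 0.1 * 0.1 = 0.01: document and move on
```

Note that the scalar score ranks purely by `risk * (1 - certainty)`; it is a simplification of the four-quadrant matrix described in the references.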
## Problem Validation Techniques

- Problem interviews focused on current behavior
- Journey friction mapping
- Support ticket and sales-call synthesis
- Behavioral analytics triangulation

Evidence threshold examples:

- The same pain repeated across multiple target users
- Observable workaround behavior
- A measurable cost of the current pain
## Solution Validation Techniques

- Concept tests (value proposition comprehension)
- Prototype usability tests (task success/time-to-complete)
- Fake-door or concierge tests (demand signal)
- Limited beta cohorts (retention/activation signals)

## Discovery Sprint Planning

Suggested 10-day structure:

- Days 1-2: Outcome + opportunity framing
- Days 3-4: Assumption mapping + test design
- Days 5-7: Problem and solution tests
- Days 8-9: Evidence synthesis + decision options
- Day 10: Stakeholder decision review
## Tooling

### `scripts/assumption_mapper.py`

CLI utility that:

- reads assumptions from CSV or inline input
- scores risk/certainty priority
- emits a prioritized test plan with suggested test types
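A sketch of typical invocations (the example rows and assumption statements are illustrative; the column names and flags match the script in this commit):

```bash
# assumptions.csv needs the columns: assumption,category,risk,certainty
# (risk and certainty are floats in [0, 1])
python3 scripts/assumption_mapper.py assumptions.csv --top 5

# Or pass assumptions inline without a file:
python3 scripts/assumption_mapper.py \
  --assumption "Users will switch from spreadsheets|desirability|0.9|0.2" \
  --assumption "Teams will pay per seat|viability|0.8|0.3"
```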
See `references/discovery-frameworks.md` for framework details.
product-team/product-discovery/references/discovery-frameworks.md (new file, 72 lines)
@@ -0,0 +1,72 @@
# Discovery Frameworks

## Opportunity Solution Tree (OST)

Purpose: continuously connect product outcomes to validated opportunities and tested solutions.

Core structure:

- Outcome (metric)
- Opportunity nodes (needs/pains)
- Solution ideas
- Experiments

OST practice tips:

- Keep the tree live; update it after each interview or test.
- Separate opportunity evidence from solution proposals.
- Avoid single-branch trees that force one solution.
## Jobs-to-be-Done (JTBD)

Use JTBD to understand the progress users seek.

JTBD template:

"When [situation], I want to [motivation], so I can [expected outcome]."

JTBD interview focus:

- Trigger moments
- Current alternatives and workarounds
- Purchase/adoption anxieties
- Desired progress and success criteria
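When generating interview guides in bulk, the template can be filled programmatically. A minimal sketch (the helper name is illustrative):

```python
def jtbd_statement(situation: str, motivation: str, expected_outcome: str) -> str:
    """Fill the JTBD template: When [situation], I want to [motivation],
    so I can [expected outcome]."""
    return f"When {situation}, I want to {motivation}, so I can {expected_outcome}."


# Hypothetical example for an internal-reporting opportunity:
example = jtbd_statement(
    "our weekly report is due",
    "pull numbers from all tools in one place",
    "spend the meeting on decisions instead of data gathering",
)
```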
## Kano Model

Classify features by their impact on satisfaction:

- Must-be: expected baseline features
- Performance: more is better
- Delighters: unexpected value multipliers
- Indifferent: low impact
- Reverse: can reduce satisfaction for some users

Use Kano when prioritizing solution concepts after problem validation.
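In a standard Kano survey, each feature gets a functional answer ("How would you feel if you had it?") and a dysfunctional answer ("...if you did not?"), and the pair maps to a category via the Kano evaluation table. A sketch of that lookup, using the commonly cited form of the table (variants exist, so verify against your preferred Kano reference):

```python
# Answer scale, from most to least positive.
ANSWERS = ["like", "expect", "neutral", "live_with", "dislike"]

# Rows: functional answer; columns: dysfunctional answer.
# A = attractive (delighter), O = one-dimensional (performance),
# M = must-be, I = indifferent, R = reverse, Q = questionable (contradictory).
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional: like
    ["R", "I", "I", "I", "M"],  # functional: expect
    ["R", "I", "I", "I", "M"],  # functional: neutral
    ["R", "I", "I", "I", "M"],  # functional: live_with
    ["R", "R", "R", "R", "Q"],  # functional: dislike
]


def kano_category(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    row = ANSWERS.index(functional)
    col = ANSWERS.index(dysfunctional)
    return KANO_TABLE[row][col]
```

Aggregating `kano_category` over all respondents (e.g. taking the modal category per feature) yields the classification used for prioritization.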
## Design Sprint Methodology

Typical phases:

1. Understand
2. Sketch
3. Decide
4. Prototype
5. Test

Discovery usage:

- Compresses the learning cycle into one week.
- Best for high-ambiguity opportunities requiring cross-functional alignment.
## Assumption Prioritization Matrix

Map assumptions on two axes:

- Risk if wrong (low -> high)
- Certainty (low -> high)

Priority order:

1. High risk, low certainty (test first)
2. High risk, high certainty (validate quickly)
3. Low risk, low certainty (defer)
4. Low risk, high certainty (document)
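A minimal sketch of the quadrant assignment, assuming a 0.5 threshold splits each axis (the threshold and function name are illustrative, not part of the bundled script):

```python
def matrix_quadrant(risk: float, certainty: float, threshold: float = 0.5) -> tuple[int, str]:
    """Return (priority rank, action) for an assumption; lower rank = test sooner."""
    high_risk = risk >= threshold
    high_certainty = certainty >= threshold
    if high_risk and not high_certainty:
        return 1, "test first"
    if high_risk and high_certainty:
        return 2, "validate quickly"
    if not high_risk and not high_certainty:
        return 3, "defer"
    return 4, "document"
```

Unlike the scalar score in `assumption_mapper.py`, the matrix keeps high-risk assumptions ahead of all low-risk ones regardless of certainty.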
## Discovery Evidence Rules

- One source is not enough for major decisions.
- Triangulate qualitative and quantitative signals.
- Predefine decision criteria before test execution.
- Archive evidence with date, segment, and method.
product-team/product-discovery/scripts/assumption_mapper.py (new executable file, 123 lines)
@@ -0,0 +1,123 @@
#!/usr/bin/env python3
"""Prioritize product assumptions and suggest validation tests."""

import argparse
import csv
from dataclasses import dataclass


@dataclass
class Assumption:
    statement: str
    category: str
    risk: float
    certainty: float

    @property
    def priority_score(self) -> float:
        # High-risk, low-certainty assumptions should be tested first.
        return self.risk * (1.0 - self.certainty)


def parse_float(value: str, field: str) -> float:
    number = float(value)
    if number < 0 or number > 1:
        raise ValueError(f"{field} must be in [0, 1]")
    return number


def suggest_test(category: str) -> str:
    category = category.lower().strip()
    if category == "desirability":
        return "problem interviews or fake-door test"
    if category == "viability":
        return "pricing/willingness-to-pay test"
    if category == "feasibility":
        return "technical spike or architecture prototype"
    if category == "usability":
        return "moderated usability test"
    return "smallest possible experiment with clear success criteria"


def load_from_csv(path: str) -> list[Assumption]:
    assumptions: list[Assumption] = []
    with open(path, "r", encoding="utf-8", newline="") as handle:
        reader = csv.DictReader(handle)
        required = {"assumption", "category", "risk", "certainty"}
        missing = required - set(reader.fieldnames or [])
        if missing:
            missing_str = ", ".join(sorted(missing))
            raise ValueError(f"Missing required columns: {missing_str}")

        for row in reader:
            assumptions.append(
                Assumption(
                    statement=(row.get("assumption") or "").strip(),
                    category=(row.get("category") or "").strip(),
                    risk=parse_float(row.get("risk") or "0", "risk"),
                    certainty=parse_float(row.get("certainty") or "0", "certainty"),
                )
            )
    return assumptions


def parse_inline(items: list[str]) -> list[Assumption]:
    assumptions: list[Assumption] = []
    for item in items:
        # format: statement|category|risk|certainty
        parts = [part.strip() for part in item.split("|")]
        if len(parts) != 4:
            raise ValueError("Inline assumption must be: statement|category|risk|certainty")
        assumptions.append(
            Assumption(
                statement=parts[0],
                category=parts[1],
                risk=parse_float(parts[2], "risk"),
                certainty=parse_float(parts[3], "certainty"),
            )
        )
    return assumptions


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Prioritize assumptions and generate test plan.")
    parser.add_argument("input", nargs="?", help="CSV file path")
    parser.add_argument(
        "--assumption",
        action="append",
        default=[],
        help="Inline assumption: statement|category|risk|certainty",
    )
    parser.add_argument("--top", type=int, default=10, help="Maximum assumptions to print")
    return parser


def main() -> int:
    parser = build_parser()
    args = parser.parse_args()

    assumptions: list[Assumption] = []
    if args.input:
        assumptions.extend(load_from_csv(args.input))
    if args.assumption:
        assumptions.extend(parse_inline(args.assumption))

    if not assumptions:
        parser.error("Provide a CSV input file or at least one --assumption value.")

    assumptions.sort(key=lambda item: item.priority_score, reverse=True)

    print("prioritized_assumption_test_plan")
    print("rank,priority_score,category,risk,certainty,test,assumption")
    for rank, item in enumerate(assumptions[: args.top], start=1):
        test = suggest_test(item.category)
        print(
            f"{rank},{item.priority_score:.4f},{item.category},{item.risk:.2f},"
            f"{item.certainty:.2f},{test},{item.statement}"
        )

    return 0


if __name__ == "__main__":
    raise SystemExit(main())