chore: sync dev with main (saas-metrics-coach + governance)

This commit is contained in:
Leo
2026-03-10 13:58:11 +01:00
12 changed files with 2716 additions and 0 deletions

View File

@@ -100,3 +100,6 @@ python financial-analyst/scripts/forecast_builder.py forecast_data.json --format
**Last Updated:** February 2026
**Skills Deployed:** 1/1 finance skills production-ready
**Total Tools:** 4 Python automation tools
## saas-metrics-coach
SaaS financial health advisor. Calculates ARR, MRR, churn, CAC, LTV, NRR, Quick Ratio. Benchmarks against industry standards. Includes 12-month projection simulator.

View File

@@ -0,0 +1,158 @@
---
name: saas-metrics-coach
description: SaaS financial health advisor. Use when a user shares revenue or customer numbers, or mentions ARR, MRR, churn, LTV, CAC, NRR, or asks how their SaaS business is doing.
license: MIT
metadata:
version: 1.0.0
author: Abbas Mir
category: finance
updated: 2026-03-08
---
# SaaS Metrics Coach
Act as a senior SaaS CFO advisor. Take raw business numbers, calculate key health metrics, benchmark against industry standards, and give prioritized actionable advice in plain English.
## Step 1 — Collect Inputs
If not already provided, ask for these in a single grouped request:
- Revenue: current MRR, MRR last month, expansion MRR, churned MRR
- Customers: total active, new this month, churned this month
- Costs: sales and marketing spend, gross margin %
Work with partial data. Be explicit about what is missing and what assumptions are being made.
## Step 2 — Calculate Metrics
Run `scripts/metrics_calculator.py` with the user's inputs. If the script is unavailable, use the formulas in `references/formulas.md`.
Always attempt to compute: ARR, MRR growth %, monthly churn rate, CAC, LTV, LTV:CAC ratio, CAC payback period, NRR.
**Additional Analysis Tools:**
- Use `scripts/quick_ratio_calculator.py` when expansion/churn MRR data is available
- Use `scripts/unit_economics_simulator.py` for forward-looking projections
## Step 3 — Benchmark Each Metric
Load `references/benchmarks.md`. For each metric show:
- The calculated value
- The relevant benchmark range for the user's segment and stage
- A plain status label: HEALTHY / WATCH / CRITICAL
Match the benchmark tier to the user's market segment (Enterprise / Mid-Market / SMB / PLG) and company stage (Early / Growth / Scale). Ask if unclear.
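A minimal sketch of that segment-aware scoring, using the monthly-churn bands from `references/benchmarks.md` (the `churn_status` helper and band structure are illustrative, not part of the skill's scripts):

```python
# Illustrative lookup: the same churn number scores differently by segment.
CHURN_BANDS = {  # segment -> (healthy below %, critical above %)
    "Enterprise": (1, 3),
    "Mid-Market": (2, 5),
    "SMB/PLG": (4, 8),
}

def churn_status(churn_pct, segment):
    healthy, critical = CHURN_BANDS[segment]
    if churn_pct > critical:
        return "CRITICAL"
    return "WATCH" if churn_pct >= healthy else "HEALTHY"

print(churn_status(5, "Enterprise"))  # CRITICAL
print(churn_status(5, "SMB/PLG"))     # WATCH
```

This is why the skill asks for segment before scoring: 5% monthly churn sits above the Enterprise critical line but inside the normal SMB/PLG band.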
## Step 4 — Prioritize and Recommend
Identify the top 2-3 metrics at WATCH or CRITICAL status. For each one state:
- What is happening (one sentence, plain English)
- Why it matters to the business
- Two or three specific actions to take this month
Order by impact — address the most damaging problem first.
## Step 5 — Output Format
Always use this exact structure:
```
# SaaS Health Report — [Month Year]
## Metrics at a Glance
| Metric | Your Value | Benchmark | Status |
|--------|------------|-----------|--------|
## Overall Picture
[2-3 sentences, plain English summary]
## Priority Issues
### 1. [Metric Name]
What is happening: ...
Why it matters: ...
Fix it this month: ...
### 2. [Metric Name]
...
## What is Working
[1-2 genuine strengths, no padding]
## 90-Day Focus
[Single metric to move + specific numeric target]
```
## Examples
**Example 1 — Partial data**
Input: "MRR is $80k, we have 200 customers, about 3 cancel each month."
Expected output: Calculates ARPA ($400), monthly churn (1.5%), ARR ($960k), LTV estimate. Flags CAC and growth rate as missing. Asks one focused follow-up question for the most impactful missing input.
**Example 2 — Critical scenario**
Input: "MRR $22k (was $23.5k), 80 customers, lost 9, gained 6, spent $15k on ads, 65% gross margin."
Expected output: Flags negative MoM growth (-6.4%), critical churn (11.25%), and LTV:CAC of 0.64:1 as CRITICAL. Recommends churn reduction as the single highest-priority action before any further growth spend.
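The Example 2 figures check out by hand (treating the 80 customers as the start-of-month count the churn formula expects):

```python
# Reproduce Example 2's numbers from its raw inputs.
mrr, mrr_last = 22_000, 23_500
customers, lost, gained = 80, 9, 6
ad_spend, gross_margin = 15_000, 0.65

growth_pct = (mrr - mrr_last) / mrr_last * 100   # -6.38 -> reported as -6.4%
churn_pct = lost / customers * 100               # 11.25%
cac = ad_spend / gained                          # $2,500 per new customer
arpa = mrr / customers                           # $275/month
ltv = (arpa / (churn_pct / 100)) * gross_margin  # ~$1,589
print(round(ltv / cac, 2))                       # 0.64 -> CRITICAL
```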
## Key Principles
- Be direct. If a metric is bad, say it is bad.
- Explain every metric in one sentence before showing the number.
- Cap priority issues at three. More than three paralyzes action.
- Context changes benchmarks. Five percent churn is catastrophic for Enterprise SaaS but normal for SMB/PLG. Always confirm the user's target market before scoring.
## Reference Files
- `references/formulas.md` — All metric formulas with worked examples
- `references/benchmarks.md` — Industry benchmark ranges by stage and segment
- `assets/input-template.md` — Blank input form to share with users
- `scripts/metrics_calculator.py` — Core metrics calculator (ARR, MRR, churn, CAC, LTV, NRR)
- `scripts/quick_ratio_calculator.py` — Growth efficiency metric (Quick Ratio)
- `scripts/unit_economics_simulator.py` — 12-month forward projection
## Tools
### 1. Metrics Calculator (`scripts/metrics_calculator.py`)
Core SaaS metrics from raw business numbers.
```bash
# Interactive mode
python scripts/metrics_calculator.py
# CLI mode
python scripts/metrics_calculator.py --mrr 50000 --customers 100 --churned 5 --json
```
### 2. Quick Ratio Calculator (`scripts/quick_ratio_calculator.py`)
Growth efficiency metric: (New MRR + Expansion) / (Churned + Contraction)
```bash
python scripts/quick_ratio_calculator.py --new-mrr 10000 --expansion 2000 --churned 3000 --contraction 500
python scripts/quick_ratio_calculator.py --new-mrr 10000 --expansion 2000 --churned 3000 --json
```
**Benchmarks:**
- < 1.0 = CRITICAL (losing faster than gaining)
- 1-2 = WATCH (marginal growth)
- 2-4 = HEALTHY (good efficiency)
- \> 4 = EXCELLENT (strong growth)
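Worked against the first CLI example above:

```python
# Quick Ratio = (New MRR + Expansion) / (Churned + Contraction)
new_mrr, expansion = 10_000, 2_000
churned, contraction = 3_000, 500

quick_ratio = (new_mrr + expansion) / (churned + contraction)
print(round(quick_ratio, 2))  # 3.43 -> HEALTHY (2-4 band)
```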
### 3. Unit Economics Simulator (`scripts/unit_economics_simulator.py`)
Project metrics forward 12 months based on growth/churn assumptions.
```bash
python scripts/unit_economics_simulator.py --mrr 50000 --growth 10 --churn 3 --cac 2000
python scripts/unit_economics_simulator.py --mrr 50000 --growth 10 --churn 3 --cac 2000 --json
```
**Use for:**
- "What if we grow at X% per month?"
- Runway projections
- Scenario planning (best/base/worst case)
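Under the hood the projection is simple compounding at the net monthly rate (growth minus churn); for the CLI example above that is a net +7% per month:

```python
# Back-of-envelope check: 12 full months of net 7% compounding.
mrr0, growth_pct, churn_pct = 50_000, 10, 3
net = 1 + (growth_pct - churn_pct) / 100  # 1.07 per month
mrr_after_12 = mrr0 * net ** 12
print(round(mrr_after_12))  # 112610
```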
## Related Skills
- **financial-analyst**: Use for DCF valuation, budget variance analysis, and traditional financial modeling. NOT for SaaS-specific metrics like CAC, LTV, or churn.
- **business-growth/customer-success**: Use for retention strategies and customer health scoring. Complements this skill when churn is flagged as CRITICAL.

View File

@@ -0,0 +1,29 @@
# SaaS Metrics — Input Template
Fill in what you know and paste to the SaaS Metrics Coach. Leave blanks empty.
---
**Context**
- Target market: [ ] Enterprise [ ] Mid-Market [ ] SMB [ ] Consumer/PLG
- Stage: [ ] Early (<$1M ARR) [ ] Growth ($1M-$10M) [ ] Scale ($10M+)
**Revenue**
- Current MRR: $
- MRR last month: $
- Expansion MRR this month (upsells/upgrades): $
- Churned MRR this month: $
- Contraction MRR (downgrades): $
**Customers**
- Total active customers:
- New customers this month:
- Churned customers this month:
**Costs**
- Sales & Marketing spend this month: $
- Gross margin %:
- Net profit margin % (optional):
---
*Partial data is fine — the coach works with whatever you have.*

View File

@@ -0,0 +1,101 @@
# SaaS Industry Benchmarks
Industry-standard benchmark ranges for SaaS metrics, segmented by company stage and market segment.
**Sources:**
- OpenView SaaS Benchmarks 2024
- Bessemer Venture Partners Cloud Index
- SaaS Capital Index
- Paddle SaaS Metrics Report 2025
**Last updated:** March 2026
## Stage Definitions
- Early: < $1M ARR
- Growth: $1M-$10M ARR
- Scale: $10M-$50M ARR
- Late: $50M+ ARR
---
## Monthly Churn Rate
| Segment | CRITICAL | WATCH | HEALTHY |
|---|---|---|---|
| Enterprise (ACV > $25k) | > 3% | 1-3% | < 1% |
| Mid-Market ($5k-$25k ACV) | > 5% | 2-5% | < 2% |
| SMB / PLG (< $5k ACV) | > 8% | 4-8% | < 4% |
| Consumer | > 10% | 5-10% | < 5% |
## LTV:CAC Ratio
| Status | Range |
|---|---|
| CRITICAL | < 1:1 — losing money on every customer |
| POOR | 1:1-2:1 — barely breaking even |
| WATCH | 2:1-3:1 — marginally viable |
| HEALTHY | 3:1-5:1 — industry standard |
| EXCELLENT | > 5:1 — strong unit economics |
| WATCH | > 8:1 — possibly under-investing in growth |
## CAC Payback Period
| Status | Range |
|---|---|
| CRITICAL | > 24 months |
| WATCH | 18-24 months |
| HEALTHY | 12-18 months |
| GOOD | 6-12 months |
| EXCELLENT | < 6 months (PLG indicator) |
## NRR (Net Revenue Retention)
| Status | Range |
|---|---|
| CRITICAL | < 80% — revenue shrinking from existing base |
| POOR | 80-90% |
| WATCH | 90-100% — flat, not expanding |
| HEALTHY | 100-110% |
| EXCELLENT | 110-120% |
| WORLD-CLASS | > 120% (Snowflake / Datadog territory) |
## MoM MRR Growth
| Stage | CRITICAL | WATCH | HEALTHY | EXCELLENT |
|---|---|---|---|---|
| Early (< $1M ARR) | < 5% | 5-10% | 10-20% | > 20% |
| Growth ($1M-$10M) | < 3% | 3-7% | 7-15% | > 15% |
| Scale ($10M+) | < 1% | 1-3% | 3-7% | > 7% |
## Gross Margin
| Status | Range |
|---|---|
| CRITICAL | < 50% |
| WATCH | 50-65% |
| HEALTHY | 65-75% |
| EXCELLENT | 75-85% |
| WORLD-CLASS | > 85% (API / infrastructure businesses) |
## Rule of 40
| Score | Status |
|---|---|
| < 20 | CONCERNING |
| 20-40 | DEVELOPING |
| 40-60 | HEALTHY |
| > 60 | EXCELLENT |
## Quick Reference Card
```
Metric Must Hit Good Great
---------------------------------------------
Monthly Churn < 5% < 3% < 1%
LTV:CAC > 3:1 > 4:1 > 5:1
CAC Payback < 18 mo < 12 mo < 6 mo
NRR > 100% > 110% > 120%
Gross Margin > 65% > 75% > 80%
MoM Growth > 5% > 10% > 15%
```

View File

@@ -0,0 +1,103 @@
# SaaS Metric Formulas
Complete reference with worked examples for all metrics calculated by the SaaS Metrics Coach.
## ARR (Annual Recurring Revenue)
```
ARR = MRR × 12
```
**Example:**
- Current MRR: $50,000
- ARR = $50,000 × 12 = **$600,000**
**When to use:** Quick snapshot of annualized revenue run rate. Not the same as actual annual revenue if you have seasonality or one-time fees.
## MoM MRR Growth Rate
```
MoM Growth % = ((MRR_now - MRR_last) / MRR_last) × 100
```
**Example:**
- Current MRR: $50,000
- Last month MRR: $45,000
- Growth = (($50,000 - $45,000) / $45,000) × 100 = **11.1%**
**Interpretation:**
- Negative = losing revenue
- 0-5% = slow growth (concerning for early stage)
- 5-15% = healthy growth
- >15% = strong growth (early stage)
## Monthly Churn Rate
```
Churn % = (Customers lost / Customers at start of month) × 100
```
**Example:**
- Customers at start of month: 100
- Customers lost during month: 5
- Churn = (5 / 100) × 100 = **5%**
**Annualized impact:** 5% monthly = ~46% annual churn (compounding effect)
**Critical context:** Churn tolerance varies by segment:
- Enterprise: >3% is critical
- SMB: >8% is critical
- Always confirm segment before judging severity
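The ~46% annualized figure above comes from compounding retention, not from multiplying monthly churn by 12:

```python
# Annual churn = 1 - (retention compounded over 12 months)
monthly_churn = 0.05
annual_churn = 1 - (1 - monthly_churn) ** 12
print(f"{annual_churn:.1%}")  # 46.0%
```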
## ARPA (Avg Revenue Per Account)
```
ARPA = MRR / Total active customers
```
## CAC (Customer Acquisition Cost)
```
CAC = Total Sales & Marketing spend / New customers acquired
```
Example: $20k spend / 10 customers → CAC $2,000
## LTV (Customer Lifetime Value)
```
LTV = (ARPA / Monthly Churn Rate) × Gross Margin %
```
**Simplified (no gross margin data):**
```
LTV = ARPA / Monthly Churn Rate
```
**Example:**
- ARPA: $500
- Monthly churn: 5% (0.05)
- Gross margin: 70% (0.70)
- LTV = ($500 / 0.05) × 0.70 = **$7,000**
**Simplified (no margin):** $500 / 0.05 = **$10,000**
**Why it matters:** LTV tells you the total revenue you can expect from an average customer. Must be at least 3x your CAC to have sustainable unit economics.
## LTV:CAC Ratio
```
LTV:CAC = LTV / CAC
```
Example: LTV $10k / CAC $2k = 5:1
## CAC Payback Period
```
Payback (months) = CAC / (ARPA × Gross Margin %)
Simplified: Payback = CAC / ARPA
```
Example: CAC $2k / ARPA $500 = 4 months
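The same numbers under the margin-adjusted formula give a longer payback than the simplified 4 months:

```python
cac, arpa, gross_margin = 2_000, 500, 0.70
print(cac / arpa)                             # 4.0 months (simplified)
print(round(cac / (arpa * gross_margin), 1))  # 5.7 months (margin-adjusted)
```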
## NRR (Net Revenue Retention)
```
NRR % = ((MRR_start + Expansion MRR - Churned MRR - Contraction MRR) / MRR_start) × 100
```
Simplified (no expansion data): NRR ≈ (1 - Revenue Churn Rate) × 100
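A worked NRR example (the dollar figures here are hypothetical, chosen only to illustrate the formula):

```python
# Hypothetical month: $100k starting base, $8k upsells, $4k churned, $1k downgrades.
mrr_start, expansion, churned, contraction = 100_000, 8_000, 4_000, 1_000
nrr = (mrr_start + expansion - churned - contraction) / mrr_start * 100
print(round(nrr, 1))  # 103.0 -> HEALTHY (base is expanding)
```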
## Rule of 40
```
Score = Annualized Growth % + Net Profit Margin %
(Annualized growth ≈ MoM growth % × 12, a simple non-compounding approximation)
Healthy: ≥ 40
```
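As implemented in `scripts/metrics_calculator.py`, the score is MoM growth × 12 plus net profit margin (the 4%/month growth and 10% margin inputs below are hypothetical):

```python
mom_growth_pct, net_profit_margin_pct = 4.0, 10.0  # hypothetical inputs
score = mom_growth_pct * 12 + net_profit_margin_pct
print(score)  # 58.0 -> clears the 40 threshold
```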

View File

@@ -0,0 +1,217 @@
#!/usr/bin/env python3
"""
SaaS Metrics Calculator — zero external dependencies (stdlib only).
Usage (interactive): python metrics_calculator.py
Usage (CLI): python metrics_calculator.py --mrr 48000 --customers 160 --json
Usage (import):
from metrics_calculator import calculate, report
results = calculate(mrr=48000, mrr_last=42000, customers=160,
churned=4, new_customers=22, sm_spend=18000,
gross_margin=0.72)
print(report(results))
"""
import json
import sys
def calculate(
mrr=None,
mrr_last=None,
customers=None,
churned=None,
new_customers=None,
sm_spend=None,
gross_margin=0.70,
expansion_mrr=0,
churned_mrr=0,
contraction_mrr=0,
profit_margin=None,
):
r, missing = {}, []
# ── Core revenue ─────────────────────────────────────────────────────────
if mrr is not None:
r["MRR"] = round(mrr, 2)
r["ARR"] = round(mrr * 12, 2)
else:
missing.append("ARR/MRR — need current MRR")
if mrr and customers:
r["ARPA"] = round(mrr / customers, 2)
else:
missing.append("ARPA — need MRR + customer count")
# ── Growth ────────────────────────────────────────────────────────────────
if mrr and mrr_last and mrr_last > 0:
r["MoM_Growth_Pct"] = round(((mrr - mrr_last) / mrr_last) * 100, 2)
else:
missing.append("MoM Growth — need last month MRR")
# ── Churn ─────────────────────────────────────────────────────────────────
if churned is not None and customers:
r["Churn_Pct"] = round((churned / customers) * 100, 2)
else:
missing.append("Churn Rate — need churned + total customers")
# ── CAC ───────────────────────────────────────────────────────────────────
if sm_spend and new_customers and new_customers > 0:
r["CAC"] = round(sm_spend / new_customers, 2)
else:
missing.append("CAC — need S&M spend + new customers")
# ── LTV ───────────────────────────────────────────────────────────────────
arpa = r.get("ARPA")
churn_dec = r.get("Churn_Pct", 0) / 100
if arpa and churn_dec > 0:
r["LTV"] = round((arpa / churn_dec) * gross_margin, 2)
else:
missing.append("LTV — need ARPA and churn rate")
# ── LTV:CAC ───────────────────────────────────────────────────────────────
if r.get("LTV") and r.get("CAC") and r["CAC"] > 0:
r["LTV_CAC"] = round(r["LTV"] / r["CAC"], 2)
else:
missing.append("LTV:CAC — need both LTV and CAC")
# ── Payback ───────────────────────────────────────────────────────────────
if r.get("CAC") and arpa and arpa > 0:
r["Payback_Months"] = round(r["CAC"] / (arpa * gross_margin), 1)
else:
missing.append("Payback Period — need CAC and ARPA")
# ── NRR ───────────────────────────────────────────────────────────────────
if mrr_last and mrr_last > 0 and (expansion_mrr or churned_mrr or contraction_mrr):
nrr = ((mrr_last + expansion_mrr - churned_mrr - contraction_mrr) / mrr_last) * 100
r["NRR_Pct"] = round(nrr, 2)
elif r.get("Churn_Pct"):
r["NRR_Est_Pct"] = round((1 - r["Churn_Pct"] / 100) * 100, 2)
missing.append("NRR (accurate) — using churn-only estimate; provide expansion MRR for full NRR")
# ── Rule of 40 ────────────────────────────────────────────────────────────
if r.get("MoM_Growth_Pct") is not None and profit_margin is not None:
r["Rule_of_40"] = round(r["MoM_Growth_Pct"] * 12 + profit_margin, 1)
r["_missing"] = missing
r["_gross_margin"] = gross_margin
return r
def report(r):
labels = [
("MRR", "Monthly Recurring Revenue", "$"),
("ARR", "Annual Recurring Revenue", "$"),
("ARPA", "Avg Revenue Per Account/mo", "$"),
("MoM_Growth_Pct", "MoM MRR Growth", "%"),
("Churn_Pct", "Monthly Churn Rate", "%"),
("CAC", "Customer Acquisition Cost", "$"),
("LTV", "Customer Lifetime Value", "$"),
("LTV_CAC", "LTV:CAC Ratio", ":1"),
("Payback_Months", "CAC Payback Period", " months"),
("NRR_Pct", "NRR (Net Revenue Retention)", "%"),
("NRR_Est_Pct", "NRR Estimate (churn-only)", "%"),
("Rule_of_40", "Rule of 40 Score", ""),
]
lines = ["=" * 54, " SAAS METRICS CALCULATOR", "=" * 54, ""]
for key, label, unit in labels:
val = r.get(key)
if val is None:
continue
if unit == "$":
fmt = f"${val:,.2f}"
elif unit == "%":
fmt = f"{val}%"
elif unit == ":1":
fmt = f"{val}:1"
else:
fmt = f"{val}{unit}"
lines.append(f" {label:<40} {fmt}")
if r.get("_missing"):
lines += ["", " Missing / estimated:"]
for m in r["_missing"]:
lines.append(f" - {m}")
lines.append("=" * 54)
return "\n".join(lines)
# ── Interactive mode ──────────────────────────────────────────────────────────
def _ask(prompt, required=False):
while True:
v = input(f" {prompt}: ").strip()
if not v:
if required:
print(" Required — please enter a value.")
continue
return None
try:
return float(v)
except ValueError:
print(" Enter a number (e.g. 48000 or 72).")
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="SaaS Metrics Calculator")
parser.add_argument("--mrr", type=float, help="Current MRR")
parser.add_argument("--mrr-last", type=float, help="MRR last month")
parser.add_argument("--customers", type=int, help="Total active customers")
parser.add_argument("--churned", type=int, help="Customers churned this month")
parser.add_argument("--new-customers", type=int, help="New customers acquired")
parser.add_argument("--sm-spend", type=float, help="Sales & Marketing spend")
parser.add_argument("--gross-margin", type=float, default=70, help="Gross margin %% (default: 70)")
parser.add_argument("--expansion-mrr", type=float, default=0, help="Expansion MRR")
parser.add_argument("--churned-mrr", type=float, default=0, help="Churned MRR")
parser.add_argument("--contraction-mrr", type=float, default=0, help="Contraction MRR")
parser.add_argument("--profit-margin", type=float, help="Net profit margin %%")
parser.add_argument("--json", action="store_true", help="Output JSON format")
args = parser.parse_args()
# CLI mode
if args.mrr is not None:
inputs = {
"mrr": args.mrr,
"mrr_last": args.mrr_last,
"customers": args.customers,
"churned": args.churned,
"new_customers": args.new_customers,
"sm_spend": args.sm_spend,
"gross_margin": args.gross_margin / 100 if args.gross_margin > 1 else args.gross_margin,
"expansion_mrr": args.expansion_mrr,
"churned_mrr": args.churned_mrr,
"contraction_mrr": args.contraction_mrr,
"profit_margin": args.profit_margin,
}
result = calculate(**inputs)
if args.json:
print(json.dumps(result, indent=2))
else:
print("\n" + report(result))
sys.exit(0)
# Interactive mode
print("\nSaaS Metrics Calculator (press Enter to skip)\n")
gm = _ask("Gross margin % (default 70)", required=False) or 70
inputs = dict(
mrr=_ask("Current MRR ($)", required=True),
mrr_last=_ask("MRR last month ($)"),
customers=_ask("Total active customers"),
churned=_ask("Customers churned this month"),
new_customers=_ask("New customers acquired this month"),
sm_spend=_ask("Sales & Marketing spend this month ($)"),
gross_margin=gm / 100 if gm > 1 else gm,
expansion_mrr=_ask("Expansion MRR (upsells) ($)") or 0,
churned_mrr=_ask("Churned MRR ($)") or 0,
contraction_mrr=_ask("Contraction MRR (downgrades) ($)") or 0,
profit_margin=_ask("Net profit margin % (for Rule of 40, optional)"),
)
print("\n" + report(calculate(**inputs)))

View File

@@ -0,0 +1,173 @@
#!/usr/bin/env python3
"""
Quick Ratio Calculator - SaaS growth efficiency metric.
Quick Ratio = (New MRR + Expansion MRR) / (Churned MRR + Contraction MRR)
A ratio > 4 indicates healthy, efficient growth.
A ratio < 1 means you're losing revenue faster than gaining it.
Usage:
python quick_ratio_calculator.py --new-mrr 10000 --expansion 2000 --churned 3000 --contraction 500
python quick_ratio_calculator.py --new-mrr 10000 --expansion 2000 --churned 3000 --contraction 500 --json
"""
import json
import sys
import argparse
def calculate_quick_ratio(new_mrr, expansion_mrr, churned_mrr, contraction_mrr):
"""
Calculate Quick Ratio and provide interpretation.
Args:
new_mrr: New MRR from new customers
expansion_mrr: Expansion MRR from existing customers (upsells)
churned_mrr: MRR lost from churned customers
contraction_mrr: MRR lost from downgrades
Returns:
dict with quick ratio and analysis
"""
# Calculate components
growth_mrr = new_mrr + expansion_mrr
lost_mrr = churned_mrr + contraction_mrr
# Quick Ratio
if lost_mrr == 0:
quick_ratio = float('inf') if growth_mrr > 0 else 0
quick_ratio_display = "∞" if growth_mrr > 0 else "0"
else:
quick_ratio = growth_mrr / lost_mrr
quick_ratio_display = f"{quick_ratio:.2f}"
# Status assessment
if lost_mrr == 0 and growth_mrr > 0:
status = "EXCELLENT"
interpretation = "No revenue loss - perfect retention with growth"
elif quick_ratio >= 4:
status = "EXCELLENT"
interpretation = "Strong, efficient growth - gaining revenue 4x faster than losing it"
elif quick_ratio >= 2:
status = "HEALTHY"
interpretation = "Good growth efficiency - gaining revenue 2x+ faster than losing it"
elif quick_ratio >= 1:
status = "WATCH"
interpretation = "Marginal growth - barely gaining more than losing"
else:
status = "CRITICAL"
interpretation = "Losing revenue faster than gaining - growth is unsustainable"
# Breakdown percentages
if growth_mrr > 0:
new_pct = (new_mrr / growth_mrr) * 100
expansion_pct = (expansion_mrr / growth_mrr) * 100
else:
new_pct = expansion_pct = 0
if lost_mrr > 0:
churned_pct = (churned_mrr / lost_mrr) * 100
contraction_pct = (contraction_mrr / lost_mrr) * 100
else:
churned_pct = contraction_pct = 0
results = {
"quick_ratio": quick_ratio if quick_ratio != float('inf') else None,
"quick_ratio_display": quick_ratio_display,
"status": status,
"interpretation": interpretation,
"components": {
"growth_mrr": round(growth_mrr, 2),
"lost_mrr": round(lost_mrr, 2),
"new_mrr": round(new_mrr, 2),
"expansion_mrr": round(expansion_mrr, 2),
"churned_mrr": round(churned_mrr, 2),
"contraction_mrr": round(contraction_mrr, 2),
},
"breakdown": {
"new_mrr_pct": round(new_pct, 1),
"expansion_mrr_pct": round(expansion_pct, 1),
"churned_mrr_pct": round(churned_pct, 1),
"contraction_mrr_pct": round(contraction_pct, 1),
},
}
return results
def format_report(results):
"""Format quick ratio results as human-readable report."""
lines = []
lines.append("\n" + "=" * 70)
lines.append("QUICK RATIO ANALYSIS")
lines.append("=" * 70)
# Quick Ratio
lines.append(f"\n⚡ QUICK RATIO: {results['quick_ratio_display']}")
lines.append(f" Status: {results['status']}")
lines.append(f" {results['interpretation']}")
# Components
comp = results["components"]
lines.append("\n📊 COMPONENTS")
lines.append(f" Growth MRR (New + Expansion): ${comp['growth_mrr']:,.2f}")
lines.append(f" • New MRR: ${comp['new_mrr']:,.2f}")
lines.append(f" • Expansion MRR: ${comp['expansion_mrr']:,.2f}")
lines.append(f" Lost MRR (Churned + Contraction): ${comp['lost_mrr']:,.2f}")
lines.append(f" • Churned MRR: ${comp['churned_mrr']:,.2f}")
lines.append(f" • Contraction MRR: ${comp['contraction_mrr']:,.2f}")
# Breakdown
bd = results["breakdown"]
lines.append("\n📈 GROWTH BREAKDOWN")
lines.append(f" New customers: {bd['new_mrr_pct']:.1f}%")
lines.append(f" Expansion: {bd['expansion_mrr_pct']:.1f}%")
lines.append("\n📉 LOSS BREAKDOWN")
lines.append(f" Churn: {bd['churned_mrr_pct']:.1f}%")
lines.append(f" Contraction: {bd['contraction_mrr_pct']:.1f}%")
# Benchmarks
lines.append("\n🎯 BENCHMARKS")
lines.append(" < 1.0 = CRITICAL (losing revenue faster than gaining)")
lines.append(" 1-2 = WATCH (marginal growth)")
lines.append(" 2-4 = HEALTHY (good growth efficiency)")
lines.append(" > 4 = EXCELLENT (strong, efficient growth)")
lines.append("\n" + "=" * 70 + "\n")
return "\n".join(lines)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Calculate SaaS Quick Ratio (growth efficiency metric)"
)
parser.add_argument(
"--new-mrr", type=float, required=True, help="New MRR from new customers"
)
parser.add_argument(
"--expansion", type=float, default=0, help="Expansion MRR from upsells (default: 0)"
)
parser.add_argument(
"--churned", type=float, required=True, help="Churned MRR from lost customers"
)
parser.add_argument(
"--contraction", type=float, default=0, help="Contraction MRR from downgrades (default: 0)"
)
parser.add_argument("--json", action="store_true", help="Output JSON format")
args = parser.parse_args()
results = calculate_quick_ratio(
new_mrr=args.new_mrr,
expansion_mrr=args.expansion,
churned_mrr=args.churned,
contraction_mrr=args.contraction,
)
if args.json:
print(json.dumps(results, indent=2))
else:
print(format_report(results))

View File

@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""
Unit Economics Simulator - Project SaaS metrics forward 12 months.
Usage:
python unit_economics_simulator.py --mrr 50000 --growth 10 --churn 3 --cac 2000
python unit_economics_simulator.py --mrr 50000 --growth 10 --churn 3 --cac 2000 --json
"""
import json
import sys
import argparse
def simulate(
mrr,
monthly_growth_pct,
monthly_churn_pct,
cac,
gross_margin=0.70,
sm_spend_pct=0.30,
months=12,
):
"""
Simulate unit economics forward.
Args:
mrr: Starting MRR
monthly_growth_pct: Expected monthly growth rate (%)
monthly_churn_pct: Expected monthly churn rate (%)
cac: Customer acquisition cost (echoed in results; not used in the projection math)
gross_margin: Gross margin (0-1)
sm_spend_pct: Sales & marketing as % of revenue (0-1)
months: Number of months to project
Returns:
dict with monthly projections and summary
"""
results = {
"inputs": {
"starting_mrr": mrr,
"monthly_growth_pct": monthly_growth_pct,
"monthly_churn_pct": monthly_churn_pct,
"cac": cac,
"gross_margin": gross_margin,
"sm_spend_pct": sm_spend_pct,
},
"projections": [],
"summary": {},
}
current_mrr = mrr
cumulative_sm_spend = 0
cumulative_gross_profit = 0
for month in range(1, months + 1):
# Calculate growth and churn
growth_rate = monthly_growth_pct / 100
churn_rate = monthly_churn_pct / 100
# Net growth = growth - churn
net_growth_rate = growth_rate - churn_rate
new_mrr = current_mrr * (1 + net_growth_rate)
# Revenue and costs
monthly_revenue = current_mrr
gross_profit = monthly_revenue * gross_margin
sm_spend = monthly_revenue * sm_spend_pct
net_profit = gross_profit - sm_spend
# Accumulate
cumulative_sm_spend += sm_spend
cumulative_gross_profit += gross_profit
# ARR
arr = current_mrr * 12
results["projections"].append({
"month": month,
"mrr": round(current_mrr, 2),
"arr": round(arr, 2),
"monthly_revenue": round(monthly_revenue, 2),
"gross_profit": round(gross_profit, 2),
"sm_spend": round(sm_spend, 2),
"net_profit": round(net_profit, 2),
"growth_rate_pct": round(net_growth_rate * 100, 2),
})
current_mrr = new_mrr
# Summary
final_mrr = results["projections"][-1]["mrr"]
final_arr = results["projections"][-1]["arr"]
total_revenue = sum(p["monthly_revenue"] for p in results["projections"])
total_net_profit = sum(p["net_profit"] for p in results["projections"])
results["summary"] = {
"starting_mrr": mrr,
"ending_mrr": round(final_mrr, 2),
"ending_arr": round(final_arr, 2),
"mrr_growth_pct": round(((final_mrr - mrr) / mrr) * 100, 2),
"total_revenue_12m": round(total_revenue, 2),
"total_gross_profit_12m": round(cumulative_gross_profit, 2),
"total_sm_spend_12m": round(cumulative_sm_spend, 2),
"total_net_profit_12m": round(total_net_profit, 2),
"avg_monthly_growth_pct": round((monthly_growth_pct - monthly_churn_pct), 2),
}
return results
def format_report(results):
"""Format simulation results as human-readable report."""
lines = []
lines.append("\n" + "=" * 70)
lines.append("UNIT ECONOMICS SIMULATION - 12 MONTH PROJECTION")
lines.append("=" * 70)
# Inputs
inputs = results["inputs"]
lines.append("\n📊 INPUTS")
lines.append(f" Starting MRR: ${inputs['starting_mrr']:,.0f}")
lines.append(f" Monthly Growth: {inputs['monthly_growth_pct']}%")
lines.append(f" Monthly Churn: {inputs['monthly_churn_pct']}%")
lines.append(f" CAC: ${inputs['cac']:,.0f}")
lines.append(f" Gross Margin: {inputs['gross_margin']*100:.0f}%")
lines.append(f" S&M Spend: {inputs['sm_spend_pct']*100:.0f}% of revenue")
# Summary
summary = results["summary"]
lines.append("\n📈 12-MONTH SUMMARY")
lines.append(f" Starting MRR: ${summary['starting_mrr']:,.0f}")
lines.append(f" Ending MRR: ${summary['ending_mrr']:,.0f}")
lines.append(f" Ending ARR: ${summary['ending_arr']:,.0f}")
lines.append(f" MRR Growth: {summary['mrr_growth_pct']:+.1f}%")
lines.append(f" Total Revenue: ${summary['total_revenue_12m']:,.0f}")
lines.append(f" Total Gross Profit: ${summary['total_gross_profit_12m']:,.0f}")
lines.append(f" Total S&M Spend: ${summary['total_sm_spend_12m']:,.0f}")
lines.append(f" Total Net Profit: ${summary['total_net_profit_12m']:,.0f}")
# Monthly breakdown (first 3, last 3)
lines.append("\n📅 MONTHLY PROJECTIONS")
lines.append(f"{'Month':<8} {'MRR':<12} {'ARR':<12} {'Revenue':<12} {'Net Profit':<12}")
lines.append("-" * 70)
projs = results["projections"]
for p in projs[:3]:
lines.append(
f"{p['month']:<8} ${p['mrr']:<11,.0f} ${p['arr']:<11,.0f} "
f"${p['monthly_revenue']:<11,.0f} ${p['net_profit']:<11,.0f}"
)
if len(projs) > 6:
lines.append(" ...")
for p in projs[-3:]:
lines.append(
f"{p['month']:<8} ${p['mrr']:<11,.0f} ${p['arr']:<11,.0f} "
f"${p['monthly_revenue']:<11,.0f} ${p['net_profit']:<11,.0f}"
)
lines.append("\n" + "=" * 70 + "\n")
return "\n".join(lines)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Simulate SaaS unit economics over 12 months"
)
parser.add_argument("--mrr", type=float, required=True, help="Starting MRR")
parser.add_argument(
"--growth", type=float, required=True, help="Monthly growth rate (%)"
)
parser.add_argument(
"--churn", type=float, required=True, help="Monthly churn rate (%)"
)
parser.add_argument("--cac", type=float, required=True, help="Customer acquisition cost")
parser.add_argument(
"--gross-margin", type=float, default=70, help="Gross margin %% (default: 70)"
)
parser.add_argument(
"--sm-spend", type=float, default=30, help="S&M spend as %% of revenue (default: 30)"
)
parser.add_argument(
"--months", type=int, default=12, help="Months to project (default: 12)"
)
parser.add_argument("--json", action="store_true", help="Output JSON format")
args = parser.parse_args()
results = simulate(
mrr=args.mrr,
monthly_growth_pct=args.growth,
monthly_churn_pct=args.churn,
cac=args.cac,
gross_margin=args.gross_margin / 100 if args.gross_margin > 1 else args.gross_margin,
sm_spend_pct=args.sm_spend / 100 if args.sm_spend > 1 else args.sm_spend,
months=args.months,
)
if args.json:
print(json.dumps(results, indent=2))
else:
print(format_report(results))

View File

@@ -0,0 +1,393 @@
---
name: seek-and-analyze-video
description: Video intelligence and content analysis using Memories.ai LVMM. Discover videos on TikTok, YouTube, Instagram by topic or creator. Analyze video content, summarize meetings, build searchable knowledge bases across multiple videos. Use for video research, competitor content analysis, meeting notes, lecture summaries, or building video knowledge libraries.
license: MIT
metadata:
version: 1.0.0
author: Kenny Zheng
category: marketing-skill
updated: 2026-03-09
triggers:
- analyze video
- video content analysis
- summarize video
- meeting notes from video
- search TikTok videos
- search YouTube videos
- video knowledge base
- competitor video analysis
- extract video insights
- video research
- video intelligence
- cross-video search
---
# Seek and Analyze Video
You are an expert in video intelligence and content analysis. Your goal is to help users discover, analyze, and build knowledge from video content across social platforms using Memories.ai's Large Visual Memory Model (LVMM).
## Before Starting
**Check for context first:**
If `marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
**API Setup Required:**
This skill requires a Memories.ai API key. Guide users to:
1. Visit https://memories.ai to create an account
2. Get API key from dashboard (free tier: 100 credits, Plus: $15/month for 5,000 credits)
3. Set environment variable: `export MEMORIES_API_KEY=your_key_here`
Gather this context (ask if not provided):
### 1. Current State
- What video content do they need to analyze?
- What platforms are they researching? (YouTube, TikTok, Instagram, Vimeo)
- Do they have existing video libraries or starting fresh?
### 2. Goals
- What insights are they extracting? (summaries, action items, competitive analysis)
- Do they need one-time analysis or persistent knowledge base?
- Are they analyzing individual videos or building cross-video research?
### 3. Video-Specific Context
- What topics, hashtags, or creators are they tracking?
- What's their use case? (competitor research, content strategy, meeting notes, training materials)
- Do they need organized namespaces for team collaboration?
## How This Skill Works
This skill supports 5 primary modes:
### Mode 1: Quick Video Analysis
When you need one-time video analysis without persistent storage.
- Use `caption_video` for instant summaries
- Best for: ad-hoc analysis, quick insights, testing content
### Mode 2: Social Media Research
When discovering and analyzing videos across platforms.
- Search by topic, hashtag, or creator
- Import and analyze in bulk
- Best for: competitor analysis, trend research, content inspiration
### Mode 3: Knowledge Base Building
When creating searchable libraries from video content.
- Index videos with semantic search
- Query across multiple videos simultaneously
- Best for: training materials, research repositories, content archives
### Mode 4: Meeting & Lecture Notes
When extracting structured notes from recordings.
- Generate transcripts with visual descriptions
- Extract action items and key points
- Best for: meeting summaries, educational content, presentations
### Mode 5: Memory Management
When organizing text insights and cross-video knowledge.
- Store notes with tags for retrieval
- Search across videos and text memories
- Best for: research notes, insights collection, knowledge management
## Core Workflows
### Workflow 1: Analyze a Video URL
**When to use:** User provides a YouTube, TikTok, Instagram, or Vimeo URL
**Process:**
1. Validate URL format and platform support
2. Choose analysis mode:
- **Quick analysis:** `caption_video(url)` - instant summary, no storage
- **Persistent analysis:** `import_video(url)` - index for future queries
3. Extract key information (summary, transcript, action items)
4. Generate structured output (see Output Artifacts)
**Example:**
```python
# Quick analysis (no storage)
result = caption_video("https://youtube.com/watch?v=...")
# Persistent indexing (builds knowledge base)
video_id = import_video("https://youtube.com/watch?v=...")
summary = query_video(video_id, "Summarize the key points")
```
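Step 1's URL validation can be a lightweight check before spending credits. This is a hedged sketch: the hostname patterns below are illustrative assumptions, not part of the Memories.ai client:

```python
import re
from typing import Optional

# Hypothetical helper — hostname list is an assumption based on the
# platforms this skill supports, not an official API.
SUPPORTED_HOSTS = {
    "youtube.com": "youtube", "youtu.be": "youtube",
    "tiktok.com": "tiktok", "instagram.com": "instagram",
    "vimeo.com": "vimeo",
}

def detect_platform(url: str) -> Optional[str]:
    """Return the platform for a supported public video URL, else None."""
    match = re.match(r"https?://(?:www\.)?([^/]+)/", url)
    if not match:
        return None
    return SUPPORTED_HOSTS.get(match.group(1))
```

A `None` result means the URL should be rejected before calling `caption_video` or `import_video`.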
### Workflow 2: Social Media Video Research
**When to use:** User wants to find and analyze videos by topic, hashtag, or creator
**Process:**
1. Define search parameters:
- Platform: tiktok, youtube, instagram
- Query: topic, hashtag, or creator handle
- Count: number of videos to analyze
2. Execute search: `search_social(platform, query, count)`
3. Import discovered videos for deep analysis
4. Generate competitive insights or trend report
**Example:**
```python
# Find competitor content
videos = search_social("tiktok", "#SaaSmarketing", count=20)
# Analyze top performers
for video in videos[:5]:
import_video(video['url'])
# Cross-video analysis
insights = chat_personal("What content themes are working?")
```
### Workflow 3: Build Video Knowledge Base
**When to use:** User needs searchable library across multiple videos
**Process:**
1. Import videos with tags for organization
2. Store supplementary text memories (notes, insights)
3. Enable cross-video semantic search
4. Query entire library for insights
**Example:**
```python
# Import video library with tags
import_video(url1, tags=["product-demo", "Q1-2026"])
import_video(url2, tags=["product-demo", "Q2-2026"])
# Store text insights
create_memory("Key insight from demos...", tags=["product-demo"])
# Query across all tagged content
insights = chat_personal("Compare Q1 vs Q2 product demos")
```
### Workflow 4: Extract Meeting Notes
**When to use:** User needs structured notes from recorded meetings or lectures
**Process:**
1. Import meeting recording
2. Request structured extraction:
- Action items with owners
- Key decisions made
- Discussion topics
- Timestamps for important moments
3. Format as meeting minutes
4. Store for future reference
**Example:**
```python
video_id = import_video("meeting_recording.mp4")
notes = query_video(video_id, """
Extract:
1. Action items with owners
2. Key decisions
3. Discussion topics
4. Important timestamps
""")
```
### Workflow 5: Competitor Content Analysis
**When to use:** Analyzing competitor video strategies across platforms
**Process:**
1. Search for competitor content by creator handle
2. Import their top-performing videos
3. Analyze patterns:
- Content themes and formats
- Messaging strategies
- Production quality
- Engagement tactics
4. Generate competitive intelligence report
**Example:**
```python
# Find competitor videos
competitor_videos = search_social("youtube", "@competitor_handle", count=30)
# Import for analysis
for video in competitor_videos:
import_video(video['url'], tags=["competitor-X"])
# Extract insights
analysis = chat_personal("Analyze competitor-X content strategy and gaps")
```
## Command Reference
### Video Operations
| Command | Purpose | Storage |
|---------|---------|---------|
| `caption_video(url)` | Quick video summary | No |
| `import_video(url, tags=[])` | Index video for queries | Yes |
| `query_video(video_id, question)` | Ask about specific video | - |
| `list_videos(tags=[])` | List indexed videos | - |
| `delete_video(video_id)` | Remove from library | - |
### Social Media Search
| Command | Purpose |
|---------|---------|
| `search_social(platform, query, count)` | Find videos by topic/creator |
| `search_personal(query, filters={})` | Search your indexed videos |
Platforms: `tiktok`, `youtube`, `instagram`
### Memory Management
| Command | Purpose |
|---------|---------|
| `create_memory(text, tags=[])` | Store text insight |
| `search_memories(query)` | Find stored memories |
| `list_memories(tags=[])` | List all memories |
| `delete_memory(memory_id)` | Remove memory |
### Cross-Content Queries
| Command | Purpose |
|---------|---------|
| `chat_personal(question)` | Query across ALL videos and memories |
| `chat_video(video_id, question)` | Focus on specific video |
### Vision Tasks
| Command | Purpose |
|---------|---------|
| `caption_image(image_url)` | Describe image using AI vision |
| `import_image(image_url, tags=[])` | Index image for queries |
## Proactive Triggers
Surface these issues WITHOUT being asked when you notice them in context:
- **User requests video analysis without API key** → Guide them to memories.ai setup
- **Repeated similar queries across videos** → Suggest building knowledge base instead
- **Analyzing competitor content** → Recommend systematic tracking with tags
- **Meeting recording shared** → Offer structured note extraction
- **Multiple one-off analyses** → Suggest import_video for persistent reference
- **Large video libraries without tags** → Recommend tag organization strategy
## Output Artifacts
| When you ask for... | You get... |
|---------------------|------------|
| "Analyze this video" | Structured summary with key points, themes, action items, and timestamps |
| "Competitor content research" | Competitive analysis report with content themes, gaps, and recommendations |
| "Meeting notes from recording" | Meeting minutes with action items, decisions, discussion topics, and owners |
| "Video knowledge base" | Searchable library with semantic search across videos and memories |
| "Social media video research" | Platform research report with top videos, trends, and content insights |
## Communication
All output follows the structured communication standard:
- **Bottom line first** — answer before explanation
- **What + Why + How** — every finding has all three
- **Actions have owners and deadlines** — no "we should consider"
- **Confidence tagging** — 🟢 verified / 🟡 medium / 🔴 assumed
**Example output format:**
```
BOTTOM LINE: Competitor X focuses on product demos (60%) and customer stories (30%)
WHAT:
• 18/30 videos are product demos with detailed walkthroughs — 🟢 verified
• 9/30 videos are customer success stories with ROI metrics — 🟢 verified
• Average video length: 3:24 (demos), 2:15 (stories) — 🟢 verified
• Consistent posting: 2-3 videos/week on Tuesday/Thursday — 🟢 verified
WHY THIS MATTERS:
They're driving bottom-of-funnel conversions with proof over awareness content.
Your current mix (80% thought leadership) leaves a conversion gap.
HOW TO ACT:
1. Create 10 product demo videos → [Owner] → [2 weeks]
2. Record 5 customer case studies → [Owner] → [3 weeks]
3. Test demo video performance vs current content → [Owner] → [4 weeks]
YOUR DECISION:
Option A: Match their demo focus — higher conversion, lower reach
Option B: Hybrid approach (50% demos, 50% thought leadership) — balanced
```
## Technical Details
**Repository:** https://github.com/kennyzheng-builds/seek-and-analyze-video
**Requirements:**
- Python 3.8+
- Memories.ai API key (free tier or $15/month Plus)
- Environment variable: `MEMORIES_API_KEY`
**Installation:**
```bash
# Via Claude Code
claude skill install kennyzheng-builds/seek-and-analyze-video
# Or manual
git clone https://github.com/kennyzheng-builds/seek-and-analyze-video.git
export MEMORIES_API_KEY=your_key_here
```
**Pricing:**
- Free tier: 100 credits (testing and light use)
- Plus: $15/month for 5,000 credits (power users)
**Supported Platforms:**
- YouTube (all public videos)
- TikTok (public videos)
- Instagram (public videos and reels)
- Vimeo (public videos)
## Key Differentiators
**vs ChatGPT/Gemini Video Analysis:**
- Persistent memory (query anytime, not just during upload)
- Cross-video search (query 100s of videos simultaneously)
- Social media discovery (find videos, don't just analyze provided URLs)
- Knowledge base building (organize with tags, semantic search)
**vs Manual Video Research:**
- 40x faster video analysis
- Automatic transcript + visual description
- Semantic search across libraries
- Scalable to hundreds of videos
**vs Traditional Video Tools:**
- AI-native queries (ask questions vs manual review)
- Cross-platform support (TikTok, YouTube, Instagram unified)
- Zero-dependency Python client (works across Claude Code, OpenClaw, HappyCapy)
- Workflow automation (upload → analyze → store in one command)
## Best Practices
### Tagging Strategy
- Use consistent tag naming (kebab-case recommended)
- Tag by: content-type, date-range, platform, topic, campaign
- Example: `["competitor-analysis", "Q1-2026", "tiktok", "product-demo"]`
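A tiny normalizer keeps tags consistent with the kebab-case convention above — a minimal sketch, not part of the Memories.ai client:

```python
import re

def make_tag(raw: str) -> str:
    """Normalize free text to a kebab-case tag, per the strategy above."""
    return re.sub(r"[^a-z0-9]+", "-", raw.lower()).strip("-")

tags = [make_tag(t) for t in ["Competitor Analysis", "Q1 2026", "TikTok"]]
```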
### Credit Management
- Quick analysis (`caption_video`): ~2 credits per video
- Import + indexing (`import_video`): ~5 credits per video
- Queries (`chat_personal`, `query_video`): ~1 credit per query
- Plan accordingly based on tier (free: 100, Plus: 5,000/month)
### Query Optimization
- Be specific in questions (better results, same credits)
- Use filtered searches when possible (faster, more relevant)
- Batch similar queries (analyze pattern, then ask once)
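The batching advice can look like this in practice — one combined cross-content call instead of several. The prompt format is an assumption; only the final (commented) `chat_personal` call is from this skill's command set:

```python
def build_batched_prompt(questions):
    """Combine related questions into one cross-content prompt."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return "Answer each question separately:\n" + numbered

prompt = build_batched_prompt([
    "What content themes appear most often?",
    "What is the average video length?",
    "How often do they post?",
])
# report = chat_personal(prompt)  # one ~2-5 credit call instead of three
```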
### Organization
- Create a namespace strategy for teams (use tags for isolation)
- Prune old content (delete unused videos to reduce noise)
- Document video IDs for important content (VI... identifiers)
## Related Skills
- **social-media-analyzer**: For quantitative social media metrics. Use this skill for qualitative video content analysis.
- **content-strategy**: For planning content themes. Use this skill to research what's working in your niche.
- **competitor-alternatives**: For competitive positioning. Use this skill for competitor content intelligence.
- **marketing-context**: Provides audience and brand context. Use before running video research.
- **content-production**: For creating content. Use this skill to research successful formats first.
- **campaign-analytics**: For campaign performance data. Combine with this skill for qualitative video insights.


@@ -0,0 +1,251 @@
#!/usr/bin/env python3
"""
Example workflow demonstrating seek-and-analyze-video skill capabilities.
Shows competitive video analysis pipeline with Memories.ai LVMM.
Usage:
python example-workflow.py --mode [quick|full]
Modes:
quick: Run with demo data (no API calls)
full: Execute full workflow (requires MEMORIES_API_KEY)
"""
import json
import os
import sys
from datetime import datetime
def validate_api_key() -> bool:
"""Check if API key is configured."""
api_key = os.getenv("MEMORIES_API_KEY")
if not api_key:
print("❌ MEMORIES_API_KEY not set")
print("\nSetup instructions:")
print("1. Visit https://memories.ai and create account")
print("2. Get API key from dashboard")
print("3. Run: export MEMORIES_API_KEY=your_key_here")
return False
return True
def demo_mode():
"""Run demonstration with mock data (no API calls)."""
print("🎬 Running in DEMO mode (no API calls)")
print("=" * 60)
# Mock competitor discovery
print("\n📍 Stage 1: Discovering competitor content...")
mock_videos = [
{
"url": "https://youtube.com/watch?v=demo1",
"title": "Competitor A - Product Demo",
"views": 125000,
"likes": 8500,
"creator": "@competitor_a",
},
{
"url": "https://youtube.com/watch?v=demo2",
"title": "Competitor A - Pricing Guide",
"views": 98000,
"likes": 6200,
"creator": "@competitor_a",
},
{
"url": "https://youtube.com/watch?v=demo3",
"title": "Competitor A - Customer Success Story",
"views": 156000,
"likes": 12000,
"creator": "@competitor_a",
},
]
print(f"Found {len(mock_videos)} videos")
for video in mock_videos:
print(f" - {video['title']} ({video['views']:,} views)")
# Mock import
print("\n📥 Stage 2: Importing top performers...")
for video in mock_videos:
mock_video_id = f"VI_{video['title'][:10].replace(' ', '_')}"
print(f" ✓ Imported: {video['title']}{mock_video_id}")
# Mock content analysis
print("\n🔬 Stage 3: Analyzing content patterns...")
mock_analysis = {
"content_themes": {
"product_demos": "60%",
"customer_stories": "30%",
"thought_leadership": "10%",
},
"average_length": "3:24",
"hook_patterns": [
"Here's what nobody tells you about...",
"3 mistakes I see founders make...",
"Watch this before choosing...",
],
"posting_frequency": "2-3 videos per week (Tuesday/Thursday)",
}
print(json.dumps(mock_analysis, indent=2))
# Mock messaging analysis
print("\n💬 Stage 4: Extracting messaging...")
mock_messaging = {
"core_pillars": [
"ROI in first 90 days",
"Enterprise-grade security",
"No-code setup",
],
"pain_points_addressed": [
"Manual workflows wasting time",
"Security compliance complexity",
"Integration headaches",
],
"proof_elements": [
"Customer logos (Fortune 500)",
"ROI calculators with real data",
"Case studies with metrics",
],
}
print(json.dumps(mock_messaging, indent=2))
# Mock gap identification
print("\n🎯 Stage 5: Identifying opportunities...")
mock_gaps = {
"uncovered_topics": [
"Migration from legacy systems (high search volume)",
"Team training and onboarding",
"Advanced API usage",
],
"missed_angles": [
"Product demos focus on features, not workflows",
"Customer stories lack technical depth",
"No content for technical evaluators",
],
"format_opportunities": [
"Short-form TikTok/Reels (competitors use YouTube only)",
"Live Q&A sessions (no one doing this)",
"Comparison videos (avoided by competitors)",
],
}
print(json.dumps(mock_gaps, indent=2))
# Mock recommendations
print("\n📋 Stage 6: Generating recommendations...")
mock_recommendations = {
"quick_wins": [
{
"action": "Create 3 short-form product demos for TikTok/Reels",
"rationale": "Competitors only on YouTube, capture short-form audience",
"timeline": "2 weeks",
},
{
"action": "Record migration guide video",
"rationale": "High search demand, zero competition",
"timeline": "1 week",
},
],
"strategic_bets": [
{
"action": "Launch weekly live Q&A series",
"rationale": "Build community, no competitors doing this",
"timeline": "Q2 2026",
},
{
"action": "Create technical deep-dive series for evaluators",
"rationale": "Gap in competitor content, address technical audience",
"timeline": "Q2 2026",
},
],
"avoid": [
"Generic thought leadership (saturated)",
"Feature-focused demos without use cases (not resonating)",
],
"differentiation": [
"Lead with workflow outcomes, not features",
"Show migration path from specific competitors",
"Target technical evaluators ignored by competitors",
],
}
print(json.dumps(mock_recommendations, indent=2))
print("\n" + "=" * 60)
print("✅ Demo complete!")
print("\nTo run with real data:")
print("1. Set MEMORIES_API_KEY environment variable")
print("2. Run: python example-workflow.py --mode full")
def full_mode():
"""Execute full workflow with actual API calls."""
if not validate_api_key():
return
print("🚀 Running FULL workflow with Memories.ai API")
print("=" * 60)
print("\n⚠️ This will consume API credits:")
print(" - Discovery: ~1 credit per 10 videos")
print(" - Import: ~5 credits per video")
print(" - Queries: ~1-5 credits per query")
print("\nEstimated total: ~50-100 credits")
response = input("\nProceed? (yes/no): ").strip().lower()
if response != "yes":
print("Cancelled.")
return
print("\n📍 Stage 1: Discovering competitor content...")
print("(Implementation would call Memories.ai API here)")
# In real implementation, would import and use the Memories.ai client
# from seek_and_analyze_video import search_social, import_video, chat_personal
print("\nFull implementation requires:")
print("1. Clone: https://github.com/kennyzheng-builds/seek-and-analyze-video")
print("2. Import client from skill repository")
print("3. Execute workflow with actual API calls")
def main():
"""Main entry point."""
mode = "quick"
# Parse arguments
if len(sys.argv) > 1:
if sys.argv[1] == "--mode" and len(sys.argv) > 2:
mode = sys.argv[2]
elif sys.argv[1] in ["--help", "-h"]:
print(__doc__)
return
if mode not in ["quick", "full"]:
print(f"❌ Invalid mode: {mode}")
print("Valid modes: quick, full")
print("\nRun with --help for usage information")
return
print(f"""
╔════════════════════════════════════════════════════════════╗
║ Seek and Analyze Video - Example Workflow ║
║ Competitive Video Analysis ║
╚════════════════════════════════════════════════════════════╝
Mode: {mode.upper()}
Date: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
""")
if mode == "quick":
demo_mode()
else:
full_mode()
if __name__ == "__main__":
main()


@@ -0,0 +1,445 @@
# Memories.ai API Command Reference
Reference for the 15 API commands available through the Memories.ai LVMM.
---
## Video Operations
### caption_video(url: str) → dict
Quick video analysis without persistent storage. Best for one-time summaries.
**Parameters:**
- `url`: Video URL (YouTube, TikTok, Instagram, Vimeo)
**Returns:**
```python
{
"summary": "Video summary text",
"duration": "3:24",
"platform": "youtube"
}
```
**Credits:** ~2 per video
**Use when:** Ad-hoc analysis, testing content, no need for future queries
---
### import_video(url: str, tags: list = []) → str
Index video for persistent queries. Returns video ID (VI...) for future reference.
**Parameters:**
- `url`: Video URL
- `tags`: Optional list of organization tags (e.g., `["competitor", "Q1-2026"]`)
**Returns:** Video ID string (e.g., `"VI_abc123def456"`)
**Credits:** ~5 per video
**Use when:** Building knowledge base, need cross-video search, repeated queries
**Example:**
```python
video_id = import_video(
"https://youtube.com/watch?v=dQw4w9WgXcQ",
tags=["product-demo", "competitor-A", "2026-03"]
)
# Returns: "VI_abc123def456"
```
---
### query_video(video_id: str, question: str) → str
Ask questions about a specific indexed video.
**Parameters:**
- `video_id`: Video ID from import_video
- `question`: Natural language question
**Returns:** Answer text
**Credits:** ~1 per query
**Example:**
```python
answer = query_video("VI_abc123def456", "What are the main action items?")
```
---
### list_videos(tags: list = []) → list
List all indexed videos, optionally filtered by tags.
**Parameters:**
- `tags`: Optional filter tags (returns videos matching ANY tag)
**Returns:**
```python
[
{
"video_id": "VI_abc123",
"url": "https://youtube.com/...",
"imported_at": "2026-03-09T10:30:00Z",
"tags": ["product-demo", "competitor-A"]
}
]
```
**Credits:** 0 (metadata only)
---
### delete_video(video_id: str) → bool
Remove video from your library. Cannot be undone.
**Parameters:**
- `video_id`: Video ID to delete
**Returns:** `True` if successful
**Credits:** 0
---
## Social Media Search
### search_social(platform: str, query: str, count: int = 10) → list
Discover public videos by topic, hashtag, or creator.
**Parameters:**
- `platform`: `"tiktok"`, `"youtube"`, or `"instagram"`
- `query`: Topic, hashtag (with #), or creator handle (with @)
- `count`: Number of results (default: 10, max: 50)
**Returns:**
```python
[
{
"url": "https://tiktok.com/@creator/video/123",
"title": "Video title",
"creator": "@creator",
"views": 125000,
"likes": 8500,
"published": "2026-03-08"
}
]
```
**Credits:** ~1 per 10 videos
**Examples:**
```python
# Topic search
videos = search_social("youtube", "SaaS pricing strategies", count=20)
# Hashtag search
videos = search_social("tiktok", "#contentmarketing", count=30)
# Creator search
videos = search_social("instagram", "@competitor_handle", count=15)
```
---
### search_personal(query: str, filters: dict = {}) → list
Search your indexed videos with semantic search.
**Parameters:**
- `query`: Natural language search query
- `filters`: Optional filters (`{"tags": ["tag1"], "date_from": "2026-01-01"}`)
**Returns:**
```python
[
{
"video_id": "VI_abc123",
"relevance_score": 0.92,
"snippet": "...relevant content snippet...",
"tags": ["product-demo"]
}
]
```
**Credits:** ~1 per query
**Example:**
```python
results = search_personal(
"product pricing discussions",
filters={"tags": ["competitor-A"], "date_from": "2026-03-01"}
)
```
---
## Memory Management
### create_memory(text: str, tags: list = []) → str
Store text insights for future retrieval.
**Parameters:**
- `text`: Note or insight text
- `tags`: Optional organization tags
**Returns:** Memory ID (e.g., `"MEM_xyz789"`)
**Credits:** ~1 per memory
**Use when:** Storing research notes, insights, key quotes not directly in videos
**Example:**
```python
memory_id = create_memory(
"Competitor A focuses on enterprise pricing tier, starts at $99/seat",
tags=["competitor-A", "pricing", "insight"]
)
```
---
### search_memories(query: str) → list
Search stored text memories with semantic search.
**Parameters:**
- `query`: Natural language search query
**Returns:**
```python
[
{
"memory_id": "MEM_xyz789",
"text": "Memory content...",
"relevance_score": 0.88,
"tags": ["pricing", "insight"],
"created_at": "2026-03-09T10:30:00Z"
}
]
```
**Credits:** ~1 per query
---
### list_memories(tags: list = []) → list
List all stored memories, optionally filtered by tags.
**Parameters:**
- `tags`: Optional filter tags
**Returns:** List of memory objects (same structure as search_memories)
**Credits:** 0 (metadata only)
---
### delete_memory(memory_id: str) → bool
Delete stored memory. Cannot be undone.
**Parameters:**
- `memory_id`: Memory ID to delete
**Returns:** `True` if successful
**Credits:** 0
---
## Cross-Content Queries
### chat_personal(question: str) → str
Query across ALL indexed videos and memories simultaneously.
**Parameters:**
- `question`: Natural language question
**Returns:** Answer synthesized from entire knowledge base
**Credits:** ~2-5 depending on complexity
**Use when:** Asking questions that require cross-video analysis
**Example:**
```python
insight = chat_personal("""
Compare competitor A and B's pricing strategies.
What are the key differences and which approach is more effective?
""")
```
---
### chat_video(video_id: str, question: str) → str
Interactive chat focused on specific video (alternative to query_video).
**Parameters:**
- `video_id`: Video ID
- `question`: Natural language question
**Returns:** Answer text
**Credits:** ~1 per query
**Note:** Functionally similar to `query_video`; the two can be used interchangeably.
---
## Vision Tasks
### caption_image(image_url: str) → str
Describe image content using AI vision.
**Parameters:**
- `image_url`: Public image URL (JPEG, PNG, WebP)
**Returns:** Image description text
**Credits:** ~1 per image
**Use when:** Analyzing thumbnails, screenshots, visual content
**Example:**
```python
description = caption_image("https://example.com/thumbnail.jpg")
# Returns: "A person presenting a pricing slide with three tiers..."
```
---
### import_image(image_url: str, tags: list = []) → str
Index image for persistent queries (similar to import_video for images).
**Parameters:**
- `image_url`: Public image URL
- `tags`: Optional organization tags
**Returns:** Image ID (e.g., `"IMG_def456"`)
**Credits:** ~2 per image
**Use when:** Building visual libraries, need repeated queries on images
---
## Advanced Usage Patterns
### Pattern 1: Bulk Import with Error Handling
```python
def import_video_batch(urls, tag_prefix):
"""Import multiple videos with error handling"""
results = []
for idx, url in enumerate(urls):
try:
video_id = import_video(url, tags=[tag_prefix, f"batch-{idx}"])
results.append({"url": url, "video_id": video_id, "status": "success"})
except Exception as e:
results.append({"url": url, "error": str(e), "status": "failed"})
return results
```
### Pattern 2: Smart Tag Organization
```python
# Hierarchical tagging strategy
tags = [
    platform,      # youtube, tiktok, instagram
    content_type,  # product-demo, tutorial, case-study
    date_range,    # Q1-2026, 2026-03
    campaign,      # launch-campaign-X
    source_type,   # competitor, internal, partner
]
video_id = import_video(url, tags=tags)
```
### Pattern 3: Progressive Research
```python
# Stage 1: Discover
videos = search_social("youtube", "@competitor", count=50)
# Stage 2: Import top performers (by views/likes)
top_videos = sorted(videos, key=lambda x: x['views'], reverse=True)[:10]
for video in top_videos:
import_video(video['url'], tags=["competitor", "top-performer"])
# Stage 3: Cross-video analysis
insights = chat_personal("What makes their top 10 videos successful?")
```
### Pattern 4: Meeting Intelligence
```python
# Import meeting recording
meeting_id = import_video(recording_url, tags=["team-meeting", "2026-03-09"])
# Extract structured data
action_items = query_video(meeting_id, "List all action items with owners")
decisions = query_video(meeting_id, "What decisions were made?")
topics = query_video(meeting_id, "What were the main discussion topics?")
# Store supplementary notes
create_memory(f"Meeting {date}: Key outcomes and next steps",
tags=["team-meeting", "summary"])
```
---
## Credit Usage Guidelines
| Operation | Credits | Recommendation |
|-----------|---------|----------------|
| Quick caption | 2 | Use for testing/one-off |
| Import video | 5 | Build library strategically |
| Query (simple) | 1 | Ask specific questions |
| Cross-video query | 2-5 | Batch similar questions |
| Image caption | 1 | Use sparingly |
| Social search | 0.1/video | Discover before importing |
| Memory operations | 1 | Store key insights only |
**Free Tier Strategy (100 credits):**
- Import ~15 key videos (75 credits)
- Query ~25 times (25 credits)
**Plus Tier Strategy (5,000 credits/month):**
- Import ~800 videos (4,000 credits)
- Query ~1,000 times (1,000 credits)
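The tier arithmetic above can be checked with a small planner. The per-operation costs are the approximate figures from the credit table, not guaranteed pricing:

```python
# Approximate per-operation costs from the table above (not guaranteed pricing).
CREDIT_COSTS = {"caption": 2, "import": 5, "query": 1}

def plan_budget(budget: int, imports: int, queries: int) -> dict:
    """Estimate credit spend and remaining balance for a usage plan."""
    spend = imports * CREDIT_COSTS["import"] + queries * CREDIT_COSTS["query"]
    return {"spend": spend, "remaining": budget - spend, "fits": spend <= budget}

# Free tier strategy above: 15 imports + 25 queries = exactly 100 credits.
print(plan_budget(100, imports=15, queries=25))
```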
---
## Error Handling
Common errors and solutions:
**InvalidAPIKey**
- Check `MEMORIES_API_KEY` environment variable is set
- Verify key is active on memories.ai dashboard
**UnsupportedPlatform**
- Only YouTube, TikTok, Instagram, Vimeo supported
- Ensure URL is public (not private/unlisted)
**CreditLimitExceeded**
- Check usage on memories.ai dashboard
- Upgrade to Plus tier or wait for monthly reset
**VideoNotFound**
- Video may be deleted, private, or region-restricted
- Verify URL is accessible in browser
**RateLimitExceeded**
- Slow down request rate (max ~10 requests/second)
- Consider batching operations
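For `RateLimitExceeded`, a simple exponential-backoff wrapper helps. This is a generic sketch: the exception name is matched loosely because the client's exact exception classes are an assumption here:

```python
import time

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry a zero-arg callable with exponential backoff on rate limits."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:  # e.g. a RateLimitExceeded from the client
            if "RateLimit" not in type(exc).__name__ or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Usage: `with_backoff(lambda: import_video(url))` retries only rate-limit errors and re-raises everything else immediately.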
---
## API Changelog
**v1.0.0 (Current)**
- 15 commands across 5 categories
- Support for YouTube, TikTok, Instagram, Vimeo
- Semantic search across videos and memories
- Tag-based organization system
- Cross-video chat functionality


@@ -0,0 +1,638 @@
# Use Cases and Examples
Real-world applications of video intelligence with Memories.ai LVMM.
---
## Table of Contents
- [Competitor Content Intelligence](#competitor-content-intelligence)
- [Content Strategy Research](#content-strategy-research)
- [Meeting and Training Intelligence](#meeting-and-training-intelligence)
- [Social Media Monitoring](#social-media-monitoring)
- [Knowledge Base Management](#knowledge-base-management)
- [Creator and Influencer Research](#creator-and-influencer-research)
---
## Competitor Content Intelligence
### Use Case: Analyze Competitor Video Strategy
**Scenario:** You want to understand how Competitor X uses video content to drive conversions.
**Workflow:**
```python
# Stage 1: Discover their content
videos = search_social("youtube", "@competitor_x", count=50)
# Stage 2: Import their library
for video in videos:
import_video(video['url'], tags=["competitor-x", "analysis-2026-q1"])
# Stage 3: Content pattern analysis
themes = chat_personal("""
Tags: competitor-x
Question: What are the main content themes and formats?
Break down by frequency and video type.
""")
# Stage 4: Messaging analysis
messaging = chat_personal("""
Tags: competitor-x
Question: What value propositions do they emphasize?
What pain points do they address?
""")
# Stage 5: Production insights
production = chat_personal("""
Tags: competitor-x
Question: What's their production quality level?
Average video length? Consistent branding elements?
""")
# Stage 6: Identify gaps
gaps = chat_personal("""
Compare competitor-x videos to our content library (tag: our-content).
What topics do they cover that we don't?
What angles are they using successfully?
""")
```
**Expected Output:**
- Content theme breakdown (60% product demos, 30% customer stories, 10% thought leadership)
- Key messaging pillars (ROI, ease of use, enterprise security)
- Production specs (3:24 avg length, professional editing, consistent intro/outro)
- Content gaps in your strategy
**ROI:** 20 hours of manual analysis → 2 hours automated
---
### Use Case: Competitive Pricing Intelligence
**Scenario:** Extract pricing information from competitor product videos.
**Workflow:**
```python
# Import competitor product demo videos
competitor_demos = search_social("youtube", "competitor pricing demo", count=20)
for video in competitor_demos[:10]:
import_video(video['url'], tags=["competitor-pricing"])
# Extract pricing mentions
pricing_data = chat_personal("""
Tags: competitor-pricing
Question: Extract all pricing information mentioned.
Include: tiers, price points, billing cycles, discounts, enterprise pricing.
""")
# Analyze pricing strategy
strategy = chat_personal("""
Tags: competitor-pricing
Question: What pricing strategy are they using?
Value-based, cost-plus, competition-based, penetration?
How do they position their tiers?
""")
```
**Expected Output:**
- Pricing tier structure (Starter $49, Pro $99, Enterprise custom)
- Positioning strategy (value-based with ROI calculators)
- Competitive differentiation (monthly vs annual pricing emphasis)
---
## Content Strategy Research
### Use Case: Identify High-Performing Content Formats
**Scenario:** Research what video formats are working in your niche.
**Workflow:**
```python
# Search for top content in your niche
niche_videos = search_social("tiktok", "#SaaSmarketing", count=100)
# Import top performers (by engagement)
top_50 = sorted(niche_videos, key=lambda x: x['likes'] + x['views'], reverse=True)[:50]
for video in top_50:
import_video(video['url'], tags=["niche-research", "top-performer"])
# Analyze successful patterns
format_analysis = chat_personal("""
Tags: top-performer
Question: What video formats are most successful?
Break down by: length, hook style, content structure, CTA approach.
""")
# Identify successful hooks
hooks = chat_personal("""
Tags: top-performer
Question: Extract the first 3 seconds (hook) from each video.
What patterns make them effective?
""")
# Production requirements
production = chat_personal("""
Tags: top-performer
Question: What's the production quality distribution?
Can successful content be made with smartphone + basic editing?
""")
```
**Expected Output:**
- Winning formats (60-second problem-solution, 15-second quick tips)
- Hook patterns ("Here's what nobody tells you about...", "3 mistakes I made...")
- Production level (70% smartphone-quality acceptable, 30% professional)
**ROI:** Validate content strategy before investing in production
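Sorting by `likes + views`, as above, lets raw view counts swamp likes entirely. One hedged alternative is to weight reach by like rate; a sketch assuming each result dict carries `likes` and `views` counts:

```python
def engagement_score(video, min_views=1):
    """Rank by views weighted by like rate, so raw reach doesn't dominate."""
    views = max(video.get("views", 0), min_views)
    like_rate = video.get("likes", 0) / views
    # Boost videos whose audiences engage unusually heavily; the 10x weight is a tunable assumption
    return views * (1 + like_rate * 10)

def top_performers(videos, n=50):
    return sorted(videos, key=engagement_score, reverse=True)[:n]
```

With this scoring, a 900-view video with 200 likes outranks a 1,000-view video with 10 likes.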
---
### Use Case: Topic Gap Analysis
**Scenario:** Find content opportunities your competitors aren't covering.
**Workflow:**
```python
# Import your content and competitor content
# (Assume already done with tags: "our-content", "competitor-a", "competitor-b")
# Identify covered topics
competitor_topics = chat_personal("""
Tags: competitor-a, competitor-b
Question: List all topics covered. Group by category.
""")
# Find gaps
gaps = chat_personal("""
Compare topics from competitors (tags: competitor-a, competitor-b)
vs audience questions (tag: customer-questions)
What topics are customers asking about that competitors haven't covered?
""")
# Opportunity sizing
opportunities = chat_personal("""
For each gap identified, search social platforms:
How many searches/hashtags exist for that topic?
Is there existing demand?
""")
```
**Expected Output:**
- 15 topic gaps with high demand, low competition
- Prioritized by search volume and strategic fit
- Content angle recommendations
---
## Meeting and Training Intelligence
### Use Case: Extract Action Items from Meetings
**Scenario:** Convert recorded meetings into structured action items.
**Workflow:**
```python
# Import meeting recording
meeting_id = import_video(
"internal_recording.mp4",
tags=["team-meeting", "product-planning", "2026-03-09"]
)
# Extract action items
action_items = query_video(meeting_id, """
Extract all action items mentioned in the meeting.
Format as:
- [ ] Action item description | Owner: Name | Due: Date | Context: Why needed
""")
# Extract decisions
decisions = query_video(meeting_id, """
List all decisions made during the meeting.
Format as:
DECISION: [Description]
RATIONALE: [Why]
OWNER: [Who's accountable]
IMPACT: [What changes]
""")
# Generate meeting summary
summary = query_video(meeting_id, """
Create executive summary:
1. Key topics discussed
2. Decisions made
3. Action items (grouped by owner)
4. Blockers identified
5. Next meeting agenda items
""")
# Store for future reference (define the date rather than relying on an undefined variable)
from datetime import date
create_memory(
    f"Meeting Summary {date.today().isoformat()}: {summary}",
    tags=["meeting-summary", "product-planning"]
)
```
**Expected Output:**
```
ACTION ITEMS:
- [ ] Update pricing page with new tier | Owner: Sarah | Due: 2026-03-15 | Context: Launch prep
- [ ] Schedule user interviews | Owner: Mike | Due: 2026-03-12 | Context: Validate feature priority
DECISIONS:
- Push mobile app launch to Q2 (Rationale: Backend infrastructure not ready)
- Focus Q1 on enterprise features (Rationale: 3 pilot customers waiting)
```
**ROI:** 30 minutes of manual note-taking → 2 minutes automated
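Because the prompt pins down a strict line format, the action-item output above parses cleanly into structured records for a task tracker. A minimal sketch (field names follow the prompt template; anything else is an assumption):

```python
def parse_action_items(text):
    """Parse '- [ ] desc | Owner: X | Due: Y | Context: Z' lines into dicts."""
    items = []
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("- [ ]"):
            continue  # ignore headings and non-checklist lines
        parts = [p.strip() for p in line[5:].split("|")]
        item = {"task": parts[0], "done": False}
        for part in parts[1:]:
            key, _, value = part.partition(":")
            item[key.strip().lower()] = value.strip()
        items.append(item)
    return items
```

Each record then maps directly onto most issue trackers' create-task APIs (title, assignee, due date).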
---
### Use Case: Training Material Knowledge Base
**Scenario:** Build searchable library from training videos and courses.
**Workflow:**
```python
# Import all training videos
training_videos = [
"onboarding_day1.mp4",
"onboarding_day2.mp4",
"product_training_basics.mp4",
"product_training_advanced.mp4",
"sales_process_training.mp4"
]
for video_url in training_videos:
    import_video(video_url, tags=["training", "onboarding"])
# Create searchable knowledge base
# New employees can now ask questions:
answer = chat_personal("How do I handle objections about pricing?")
answer = chat_personal("What's our product positioning vs competitors?")
answer = chat_personal("Walk me through the sales process step by step")
```
**Expected Output:**
- Instant answers to onboarding questions
- Reference to specific training video timestamps
- Consistent knowledge across team
**ROI:** Reduce onboarding time 40%, improve knowledge retention
---
## Social Media Monitoring
### Use Case: Track Brand Mentions Across Platforms
**Scenario:** Monitor videos mentioning your brand or product.
**Workflow:**
```python
# Search across platforms
tiktok_mentions = search_social("tiktok", "#YourBrand", count=50)
youtube_mentions = search_social("youtube", "YourBrand review", count=50)
instagram_mentions = search_social("instagram", "@yourbrand", count=50)
# Import for analysis
all_mentions = tiktok_mentions + youtube_mentions + instagram_mentions
for video in all_mentions:
    import_video(video['url'], tags=["brand-mention", video['platform']])
# Sentiment analysis
sentiment = chat_personal("""
Tags: brand-mention
Question: Analyze sentiment across all brand mentions.
Positive, neutral, negative breakdown.
Common praise points and complaints.
""")
# Feature requests
requests = chat_personal("""
Tags: brand-mention
Question: Extract all feature requests or improvement suggestions.
Rank by frequency mentioned.
""")
# Competitive comparisons
comparisons = chat_personal("""
Tags: brand-mention
Question: When creators compare us to competitors, what do they say?
What are our perceived strengths and weaknesses?
""")
```
**Expected Output:**
- Sentiment: 70% positive, 20% neutral, 10% negative
- Top feature requests: Mobile app (15 mentions), API access (12 mentions)
- Competitive position: "Easier to use than X, but lacks Y feature"
**ROI:** Real-time feedback loop, inform product roadmap
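The "rank by frequency" step above can also be done deterministically once individual requests are extracted as strings. A small sketch using `collections.Counter` (lowercase normalization is an assumption; near-duplicate phrasing may need fuzzier grouping):

```python
from collections import Counter

def rank_requests(requests):
    """Rank normalized feature-request strings by mention count, most frequent first."""
    counts = Counter(r.strip().lower() for r in requests)
    return counts.most_common()
```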
---
### Use Case: Influencer Partnership Research
**Scenario:** Identify and vet potential influencer partners.
**Workflow:**
```python
# Find creators in your niche
creators = search_social("youtube", "SaaS founder", count=100)
# Filter to top performers
top_creators = sorted(creators, key=lambda x: x['views'], reverse=True)[:20]
# Import their content
for creator in top_creators:
    videos = search_social("youtube", f"@{creator['handle']}", count=10)
    for video in videos:
        import_video(video['url'], tags=["influencer-research", creator['handle']])
# Analyze each creator
for creator in top_creators:
    profile = chat_personal(f"""
    Tags: {creator['handle']}
    Question: Analyze this creator's content:
    - Main topics covered
    - Audience demographic (based on comments/content)
    - Brand alignment with our values
    - Engagement quality (comments depth)
    - Partnership potential (do they do sponsorships?)
    """)
    create_memory(profile, tags=["influencer-profile", creator['handle']])
```
**Expected Output:**
- Vetted list of 5 high-fit influencers
- Audience alignment scores
- Estimated reach and engagement
- Partnership readiness assessment
---
## Knowledge Base Management
### Use Case: Customer Research Repository
**Scenario:** Build searchable library of customer interviews and feedback videos.
**Workflow:**
```python
# Import customer interview recordings
interviews = [
"customer_interview_acme_corp.mp4",
"customer_interview_tech_startup.mp4",
"user_testing_session_1.mp4"
]
for video_url in interviews:
    import_video(video_url, tags=["customer-research", "interview"])
# Import product feedback videos
feedback_videos = search_social("youtube", "ProductName feedback", count=30)
for video in feedback_videos:
    import_video(video['url'], tags=["customer-research", "feedback"])
# Cross-interview insights
pain_points = chat_personal("""
Tags: customer-research
Question: What are the top pain points mentioned across all interviews?
Rank by frequency and severity.
""")
feature_value = chat_personal("""
Tags: customer-research
Question: Which features do customers mention as most valuable?
What outcomes do they achieve?
""")
use_cases = chat_personal("""
Tags: customer-research
Question: What are the main use cases customers describe?
Group by industry or company size.
""")
# Store insights (define the date rather than relying on an undefined variable)
from datetime import date
create_memory(f"Customer Research Synthesis {date.today().isoformat()}: {pain_points}",
              tags=["research-insight", "product-roadmap"])
```
**Expected Output:**
- Top 10 pain points ranked
- Feature value hierarchy
- Use case taxonomy
- Product roadmap implications
**ROI:** Centralize customer knowledge, inform product decisions
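"Rank by frequency and severity" implies a weighted score once pain points are extracted. A minimal sketch, assuming each extracted pain point carries a mention count and a 1-5 severity rating (both field names are assumptions for illustration):

```python
def rank_pain_points(points):
    """points: list of {'name': str, 'mentions': int, 'severity': int 1-5}.
    Rank by mentions x severity so rare-but-severe issues still surface."""
    return sorted(points, key=lambda p: p["mentions"] * p["severity"], reverse=True)
```

A multiplicative score is one reasonable choice; teams that want severity to dominate can square it instead.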
---
### Use Case: Competitive Intelligence Database
**Scenario:** Maintain up-to-date competitive intelligence from video sources.
**Workflow:**
```python
# Weekly competitor monitoring (automate with cron)
competitors = ["@competitor_a", "@competitor_b", "@competitor_c"]
for competitor in competitors:
    # Search for new videos
    new_videos = search_social("youtube", competitor, count=10)
    # Import only videos from the last 7 days (is_within_last_week: user-defined date helper)
    recent = [v for v in new_videos if is_within_last_week(v['published'])]
    for video in recent:
        import_video(video['url'], tags=["competitive-intel", competitor, "2026-q1"])
# Weekly intelligence report
report = chat_personal("""
Tags: competitive-intel, 2026-q1
Filter: last 7 days
Question: Generate competitive intelligence summary:
1. New product announcements or features
2. Pricing changes
3. Marketing message shifts
4. Partnership announcements
5. Strategic moves (funding, acquisitions, etc.)
""")
# Send to stakeholders (define the date rather than relying on an undefined variable)
from datetime import date
create_memory(f"Weekly Competitive Intel {date.today().isoformat()}: {report}",
              tags=["intelligence-report", "weekly"])
```
**Expected Output:**
- Automated weekly competitive briefing
- Early detection of competitive moves
- Strategic planning inputs
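`is_within_last_week` in the workflow above is not a built-in; it's a small date helper you supply. A minimal sketch, assuming publish dates arrive as ISO-8601 strings (the timestamp format is an assumption about the search results):

```python
from datetime import datetime, timedelta, timezone

def is_within_last_week(published, now=None):
    """True if an ISO-8601 date string falls within the last 7 days."""
    now = now or datetime.now(timezone.utc)
    ts = datetime.fromisoformat(published)
    if ts.tzinfo is None:
        # Assume naive timestamps are UTC so the subtraction is well-defined
        ts = ts.replace(tzinfo=timezone.utc)
    return now - ts <= timedelta(days=7)
```

The `now` parameter exists so the cutoff is testable; in the cron job you'd simply call `is_within_last_week(v['published'])`.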
---
## Creator and Influencer Research
### Use Case: Content Creator Trend Analysis
**Scenario:** Identify emerging content trends in your industry.
**Workflow:**
```python
# Search across platforms for industry hashtags
hashtags = ["#SaaSmarketing", "#ProductManagement", "#StartupTips"]
all_videos = []
for tag in hashtags:
    tiktok = search_social("tiktok", tag, count=100)
    youtube = search_social("youtube", tag.replace("#", ""), count=100)
    all_videos.extend(tiktok + youtube)
# Import recent content (last 30 days; is_recent: user-defined date helper)
recent_videos = [v for v in all_videos if is_recent(v['published'], days=30)]
for video in recent_videos:
    import_video(video['url'], tags=["trend-research", "2026-q1"])
# Trend analysis
trends = chat_personal("""
Tags: trend-research, 2026-q1
Question: What are the emerging content trends?
Look for:
- Topics gaining traction (mentioned in 5+ videos)
- Format innovations (new video structures)
- Messaging shifts (new angles on old topics)
- Platform-specific trends (what works on TikTok vs YouTube)
""")
# Validate trend strength
validation = chat_personal("""
Tags: trend-research
Question: For each identified trend, assess:
- Growth trajectory (increasing or peak?)
- Audience engagement (comments, shares)
- Creator adoption (how many creators using this trend?)
- Longevity prediction (fad or sustainable?)
""")
```
**Expected Output:**
- 5-10 emerging trends with growth metrics
- Format innovations to test
- Timing recommendations (early mover vs wait and see)
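Because the same video can match several hashtags and appear on multiple searches, the combined result list above will contain duplicates; dedup before importing to avoid redundant imports. A minimal sketch, assuming each result dict has a `url` key:

```python
def dedupe_by_url(videos):
    """Drop repeat search results, keeping the first occurrence of each URL."""
    seen = set()
    unique = []
    for v in videos:
        if v["url"] not in seen:
            seen.add(v["url"])
            unique.append(v)
    return unique
```

Run `recent_videos = dedupe_by_url(recent_videos)` before the import loop.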
---
## Advanced Workflows
### Multi-Stage Research Pipeline
**Complete competitive research workflow:**
```python
# Stage 1: Discovery
print("🔍 Stage 1: Discovering competitor content...")
competitors = ["@competitor_a", "@competitor_b"]
all_videos = []
for comp in competitors:
    videos = search_social("youtube", comp, count=50)
    all_videos.extend([(v, comp) for v in videos])
print(f"Found {len(all_videos)} videos")
# Stage 2: Import top performers
print("📥 Stage 2: Importing top performers...")
top_videos = sorted(all_videos, key=lambda x: x[0]['views'], reverse=True)[:30]
for video, comp in top_videos:
    import_video(video['url'], tags=["competitor", comp, "top-performer"])
# Stage 3: Content analysis
print("🔬 Stage 3: Analyzing content patterns...")
content_analysis = chat_personal("""
Tags: competitor, top-performer
Question: Comprehensive content analysis:
1. Content themes (with % breakdown)
2. Average video length by theme
3. Hook patterns (first 5 seconds)
4. CTA strategies
5. Production quality levels
6. Posting frequency
""")
# Stage 4: Messaging extraction
print("💬 Stage 4: Extracting messaging...")
messaging = chat_personal("""
Tags: competitor, top-performer
Question: What are their core messaging pillars?
What customer pain points do they address?
What value propositions do they emphasize?
What proof/credibility elements do they use?
""")
# Stage 5: Gap identification
print("🎯 Stage 5: Identifying opportunities...")
gaps = chat_personal("""
Tags: competitor, top-performer
Question: Based on their content coverage, identify:
1. Topics they're NOT covering (search-demand exists)
2. Angles they're missing on covered topics
3. Audience questions unanswered
4. Format opportunities (they use X, but Y format might work)
""")
# Stage 6: Actionable recommendations
print("📋 Stage 6: Generating recommendations...")
recommendations = chat_personal("""
Based on the competitive analysis (tags: competitor, top-performer),
generate actionable content strategy recommendations:
1. QUICK WINS: What can we do in next 2 weeks?
2. STRATEGIC BETS: What should we invest in next quarter?
3. AVOID: What are they doing that's not working?
4. DIFFERENTIATION: How can we stand out?
Format with specific video ideas and rationale.
""")
# Stage 7: Report generation
print("📊 Stage 7: Compiling final report...")
from datetime import date
current_date = date.today().isoformat()
final_report = f"""
COMPETITIVE CONTENT INTELLIGENCE REPORT
Date: {current_date}
Scope: {len(all_videos)} videos analyzed from {len(competitors)} competitors
{content_analysis}
{messaging}
{gaps}
{recommendations}
"""
create_memory(final_report, tags=["competitive-report", "strategy"])
print("✅ Complete! Report stored in knowledge base.")
```
**Timeline:** 40 hours manual → 3 hours automated
**Output:** Comprehensive competitive intelligence report with actionable recommendations
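One caveat in Stage 2: a global top-30 cut by views can be dominated by a single high-reach competitor. A per-competitor quota keeps coverage balanced; a sketch over the same `(video, competitor)` pairs the pipeline builds:

```python
from collections import defaultdict

def balanced_top(pairs, per_competitor=15):
    """pairs: list of (video_dict, competitor). Take the top N per competitor by views."""
    by_comp = defaultdict(list)
    for video, comp in pairs:
        by_comp[comp].append((video, comp))
    selected = []
    for comp, items in by_comp.items():
        items.sort(key=lambda x: x[0]["views"], reverse=True)
        selected.extend(items[:per_competitor])
    return selected
```

Swap `top_videos = sorted(...)[:30]` for `top_videos = balanced_top(all_videos, per_competitor=15)` to guarantee each competitor is represented.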
---
## ROI Summary
| Use Case | Manual Time | Automated Time | Time Saved | Quality Improvement |
|----------|-------------|----------------|------------|---------------------|
| Competitor Analysis | 40 hours | 3 hours | 37 hours | +50% depth |
| Content Research | 20 hours | 2 hours | 18 hours | +70% coverage |
| Meeting Notes | 30 min/meeting | 2 min/meeting | 28 min | +90% completeness |
| Brand Monitoring | 10 hours/week | 1 hour/week | 9 hours | Real-time vs weekly |
| Training KB | N/A | 3 hours setup | N/A | Instant access |
| Influencer Research | 15 hours | 2 hours | 13 hours | +60% data depth |
**Average ROI:** roughly 10-15x time savings per use case, with substantial quality gains