Files
claude-skills-reference/marketing-skill/signup-flow-cro/scripts/funnel_drop_analyzer.py
Alireza Rezvani 52321c86bc feat: Marketing Division expansion — 7 → 42 skills (#266)
* feat: Skill Authoring Standard + Marketing Expansion plans

SKILL-AUTHORING-STANDARD.md — the DNA of every skill in this repo:
10 universal patterns codified from C-Suite innovations + Corey Haines' marketingskills patterns:

1. Context-First: check domain context, ask only for gaps
2. Practitioner Voice: expert persona, goal-oriented, not textbook
3. Multi-Mode Workflows: build from scratch / optimize existing / situation-specific
4. Related Skills Navigation: when to use, when NOT to, bidirectional
5. Reference Separation: SKILL.md lean (≤10KB), refs deep
6. Proactive Triggers: surface issues without being asked
7. Output Artifacts: request → specific deliverable mapping
8. Quality Loop: self-verify, confidence tagging
9. Communication Standard: bottom line first, structured output
10. Python Tools: stdlib-only, CLI-first, JSON output, sample data
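The Pattern-10 contract (stdlib-only, CLI-first, JSON output, bundled sample data) can be sketched as a minimal tool skeleton. Everything below is illustrative only; the names, flags, and sample data are hypothetical and not taken from any script in the repo:

```python
#!/usr/bin/env python3
"""Minimal sketch of the Pattern-10 tool shape: stdlib-only, CLI-first,
JSON output, bundled sample data. All names here are illustrative."""
import argparse
import json
import sys

SAMPLE = {"metric": "signup_rate", "value": 0.18}  # bundled demo data


def analyze(data: dict) -> dict:
    # trivial placeholder analysis: scale the metric to a 0-100 score
    return {"input": data, "score": round(data["value"] * 100)}


def main(argv=None) -> None:
    parser = argparse.ArgumentParser(description="Demo of the stdlib CLI pattern")
    parser.add_argument("--json", action="store_true", help="emit JSON")
    args = parser.parse_args(argv)
    result = analyze(SAMPLE)
    if args.json:
        json.dump(result, sys.stdout, indent=2)
    else:
        print(f"score: {result['score']}/100")


if __name__ == "__main__":
    main([])  # run in demo mode regardless of caller argv
```

Running with no input falls back to the bundled sample, mirroring the "sample data" half of the pattern; --json switches to machine-readable output.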

Marketing expansion plans for 40-skill marketing division build.

* feat: marketing foundation — context + ops router + authoring standard

marketing-context/: Foundation skill every marketing skill reads first
  - SKILL.md: 3 modes (auto-draft, guided interview, update)
  - templates/marketing-context-template.md: 14 sections covering
    product, audience, personas, pain points, competitive landscape,
    differentiation, objections, switching dynamics, customer language
    (verbatim), brand voice, style guide, proof points, SEO context, goals
  - scripts/context_validator.py: Scores completeness 0-100, section-by-section

marketing-ops/: Central router for 40-skill marketing ecosystem
  - Full routing matrix: 7 pods + cross-domain routing to 6 skills in
    business-growth, product-team, engineering-team, c-level-advisor
  - Campaign orchestration sequences (launch, content, CRO sprint)
  - Quality gate matching C-Suite standard
  - scripts/campaign_tracker.py: Campaign status tracking with progress,
    overdue detection, pod coverage, blocker identification

SKILL-AUTHORING-STANDARD.md: Universal DNA for all skills
  - 10 patterns: context-first, practitioner voice, multi-mode workflows,
    related skills navigation, reference separation, proactive triggers,
    output artifacts, quality loop, communication standard, python tools
  - Quality checklist for skill completion verification
  - Domain context file mapping for all 5 domains

* feat: import 20 workspace marketing skills + standard sections

Imported 20 marketing skills from OpenClaw workspace into repo:

Content Pod (5):
  content-strategy, copywriting, copy-editing, social-content, marketing-ideas

SEO Pod (2):
  seo-audit (+ references enriched by subagent), programmatic-seo (+ refs)

CRO Pod (6):
  page-cro, form-cro, signup-flow-cro, onboarding-cro, popup-cro, paywall-upgrade-cro

Channels Pod (2):
  email-sequence, paid-ads

Growth + Intel + GTM (5):
  ab-test-setup, competitor-alternatives, marketing-psychology, launch-strategy, brand-guidelines

All 29 skills now have standard sections per SKILL-AUTHORING-STANDARD.md:
  - Proactive Triggers (4-5 per skill)
  - Output Artifacts table
  - Communication standard reference
  - Related Skills with WHEN/NOT disambiguation

Subagents enriched 8 skills with additional reference docs:
  seo-audit, programmatic-seo, page-cro, form-cro,
  onboarding-cro, popup-cro, paywall-upgrade-cro, email-sequence

43 files, 10,566 lines added.

* feat: build 13 new marketing skills + social-media-manager upgrade

All skills are 100% original work — inspired by industry best practices,
written from scratch in our own voice following SKILL-AUTHORING-STANDARD.md.

NEW Content Pod (2):
  content-production — full research→draft→optimize pipeline, content_scorer.py
  content-humanizer — AI pattern detection + voice injection, humanizer_scorer.py

NEW SEO Pod (3):
  ai-seo — AI search optimization (AEO/GEO/LLMO), entirely new category
  schema-markup — JSON-LD structured data, schema_validator.py
  site-architecture — URL structure + internal linking, sitemap_analyzer.py

NEW Channels Pod (2):
  cold-email — B2B outreach (distinct from email-sequence lifecycle)
  ad-creative — bulk ad generation + platform specs, ad_copy_validator.py

NEW Growth Pod (3):
  churn-prevention — cancel flows + save offers + dunning, churn_impact_calculator.py
  referral-program — referral + affiliate programs
  free-tool-strategy — engineering as marketing

NEW Intelligence Pod (1):
  analytics-tracking — GA4/GTM setup + event taxonomy, tracking_plan_generator.py

NEW Sales Pod (1):
  pricing-strategy — pricing, packaging, monetization

UPGRADED:
  social-media-analyzer → social-media-manager (strategy, calendar, community)

Totals: 42 skills, 27 Python scripts, 60 reference docs, 163 files, 43,265 lines

* feat: update index, marketplace, README for 42 marketing skills

- skills-index.json: 89 → 124 skills (42 marketing entries)
- marketplace.json: marketing-skills v2.0.0 (42 skills, 27 tools)
- README.md: badge 134 → 169, marketing row updated
- prompt-engineer-toolkit: added YAML frontmatter
- Removed build logs from repo
- Parity check: 42/42 passed (YAML + Related + Proactive + Output + Communication)

* fix: merge content-creator into content-production, split marketing-psychology

Quality audit fixes:

1. content-creator → DEPRECATED redirect
   - Scripts (brand_voice_analyzer.py, seo_optimizer.py) moved to content-production
   - SKILL.md replaced with redirect to content-production + content-strategy
   - Eliminates duplicate routing confusion

2. marketing-psychology → 24KB split to 6.8KB + reference
   - 70+ mental models moved to references/mental-models-catalog.md (397 lines)
   - SKILL.md now lean: categories overview, most-used models, quick reference
   - Saves ~4,300 tokens per invocation

* feat: add plugin configs, Codex/OpenClaw compatibility, ClawHub packaging

- marketing-skill/SKILL.md: ClawHub-compatible root with Quick Start for Claude Code, Codex CLI, OpenClaw
- marketing-skill/CLAUDE.md: Agent instructions (routing, context, anti-patterns)
- marketing-skill/.codex/instructions.md: Codex CLI skill routing
- .claude-plugin/marketplace.json: deduplicated, marketing-skills v2.0.0
- .codex/skills-index.json: content-creator marked deprecated, psychology updated
- Total: 42 skills, 27 Python tools, 60 references, 18 plugins

* feat: add 16 Python tools to knowledge-only skills

Enriched 12 previously tool-less skills with practical Python scripts:
- seo-audit/seo_checker.py — HTML on-page SEO analysis (0-100)
- copywriting/headline_scorer.py — headline quality scoring (0-100)
- copy-editing/readability_scorer.py — Flesch + passive + filler detection
- content-strategy/topic_cluster_mapper.py — keyword clustering
- page-cro/conversion_audit.py — HTML CRO signal analysis (0-100)
- paid-ads/roas_calculator.py — ROAS/CPA/CPL calculator
- email-sequence/sequence_analyzer.py — email sequence scoring (0-100)
- form-cro/form_field_analyzer.py — form field CRO audit (0-100)
- onboarding-cro/activation_funnel_analyzer.py — funnel drop-off analysis
- programmatic-seo/url_pattern_generator.py — URL pattern planning
- ab-test-setup/sample_size_calculator.py — statistical sample sizing
- signup-flow-cro/funnel_drop_analyzer.py — signup funnel analysis
- launch-strategy/launch_readiness_scorer.py — launch checklist scoring
- competitor-alternatives/comparison_matrix_builder.py — feature comparison
- social-media-manager/social_calendar_generator.py — content calendar
- readability_scorer.py — fixed demo mode for non-TTY execution

All 43/43 scripts pass execution. All stdlib-only, zero pip installs.
Total: 42 skills, 43 Python tools, 60+ reference docs.
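For a sense of the math behind one of these tools, the sample sizing in ab-test-setup presumably reduces to something like Lehr's rule of thumb (16·p·(1−p)/d² per variant, for roughly 80% power at a 5% significance level). This sketch is an assumption about the approach, not the script's actual implementation:

```python
import math


def sample_size_per_variant(baseline_rate: float, min_detectable_lift: float) -> int:
    """Lehr's approximation: visitors needed per variant for ~80% power
    at alpha = 0.05. Illustrative only; the repo's sample_size_calculator.py
    may use exact z-scores instead."""
    p = baseline_rate
    delta = baseline_rate * min_detectable_lift  # absolute effect size
    return math.ceil(16 * p * (1 - p) / delta ** 2)  # round up to whole visitors


# 5% baseline conversion, detecting a 20% relative lift
print(sample_size_per_variant(0.05, 0.20))
```

Small baselines and small lifts blow the requirement up quickly, which is exactly the kind of trade-off a stdlib calculator can make visible before a test is launched.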

* feat: add 3 more Python tools + improve 6 existing scripts

New tools from build agent:
- email-sequence/scripts/sequence_analyzer.py — email sequence scoring (91/100 demo)
- paid-ads/scripts/roas_calculator.py — ROAS/CPA/CPL/break-even calculator
- competitor-alternatives/scripts/comparison_matrix_builder.py — feature matrix

Improved scripts (better demo modes, fuller analysis):
- seo_checker.py, headline_scorer.py, readability_scorer.py,
  conversion_audit.py, topic_cluster_mapper.py, launch_readiness_scorer.py

Total: 42 skills, 47 Python tools, all passing.

* fix: remove duplicate scripts from deprecated content-creator

Scripts already live in content-production/scripts/. The content-creator
directory is now a pure redirect (SKILL.md only + legacy assets/refs).

* fix: scope VirusTotal scan to executable files only

Skip scanning .md, .py, .json, .yml — they're plain text files
that VirusTotal can't meaningfully analyze. This prevents 429 rate
limit errors on PRs with many text file changes (like 42 marketing skills).

Scan still covers: .js, .ts, .sh, .mjs, .cjs, .exe, .dll, .so, .bin, .wasm

---------

Co-authored-by: Leo <leo@openclaw.ai>
2026-03-06 03:56:16 +01:00

#!/usr/bin/env python3
"""
funnel_drop_analyzer.py — Signup Funnel Drop-Off Analyzer

100% stdlib, no pip installs required.

Usage:
    python3 funnel_drop_analyzer.py                      # demo mode
    python3 funnel_drop_analyzer.py --steps steps.json
    python3 funnel_drop_analyzer.py --steps steps.json --json
    echo '[{"step":"Visit","count":10000}]' | python3 funnel_drop_analyzer.py --stdin

steps.json format:
    [
        {"step": "Landing Page Visit", "count": 10000},
        {"step": "Clicked Sign Up", "count": 4200},
        {"step": "Filled Form", "count": 2800},
        {"step": "Email Verified", "count": 1900},
        {"step": "Onboarding Done", "count": 1100}
    ]
"""
import argparse
import json
import math
import sys
# ---------------------------------------------------------------------------
# Recommendation engine
# ---------------------------------------------------------------------------
RECOMMENDATIONS = {
    "high_drop": {
        "threshold": 0.50,  # >50% drop
        "landing_page": [
            "Value proposition may be unclear — run a 5-second test.",
            "Add social proof (testimonials, logos, user count) above the fold.",
            "Ensure CTA button is prominent and benefit-focused ('Start Free' not 'Submit').",
        ],
        "clicked_sign_up": [
            "CTA label or placement may not resonate — A/B test button copy and colour.",
            "Users may not trust the product — add trust badges and reviews near CTA.",
            "Consider a sticky header CTA for long landing pages.",
        ],
        "filled_form": [
            "Form has too many fields — reduce to email + password minimum.",
            "Try progressive disclosure: collect extra info post-signup.",
            "Add inline validation so errors appear in real-time, not on submit.",
            "Show a progress indicator if multi-step.",
        ],
        "email_verified": [
            "Verification email may land in spam — check SPF/DKIM/DMARC.",
            "Send a plain-text follow-up 30 min after signup nudging verification.",
            "Consider SMS or magic-link alternatives to email verification.",
            "Reduce time-to-value: show a useful screen before requiring verification.",
        ],
        "default": [
            "Significant drop detected — instrument with session recordings (Hotjar/FullStory).",
            "Run exit surveys at this step to capture qualitative reasons.",
            "Check for UI bugs or broken flows on mobile.",
        ],
    },
    "medium_drop": {
        "threshold": 0.25,  # 25–50% drop
        "default": [
            "Moderate friction — review copy and UX at this step.",
            "Ensure mobile experience is frictionless (test on real devices).",
            "Add micro-copy explaining why information is requested.",
        ],
    },
    "healthy": {
        "default": [
            "Step conversion is healthy — focus optimisation effort elsewhere.",
        ],
    },
}
def classify_step_name(name: str) -> str:
    """Map step name to a known category for targeted recommendations."""
    n = name.lower()
    if any(k in n for k in ["land", "visit", "page", "home"]):
        return "landing_page"
    if any(k in n for k in ["cta", "click", "signup", "sign up", "register", "start"]):
        return "clicked_sign_up"
    if any(k in n for k in ["form", "fill", "detail", "info", "enter"]):
        return "filled_form"
    if any(k in n for k in ["email", "verif", "confirm", "activate"]):
        return "email_verified"
    return "default"


def get_recommendation(step_name: str, drop_rate: float) -> list:
    if drop_rate > RECOMMENDATIONS["high_drop"]["threshold"]:
        bucket = RECOMMENDATIONS["high_drop"]
        cat = classify_step_name(step_name)
        return bucket.get(cat, bucket["default"])
    elif drop_rate > RECOMMENDATIONS["medium_drop"]["threshold"]:
        return RECOMMENDATIONS["medium_drop"]["default"]
    else:
        return RECOMMENDATIONS["healthy"]["default"]
# ---------------------------------------------------------------------------
# Core analysis
# ---------------------------------------------------------------------------
def analyze_funnel(steps: list) -> dict:
    """
    Analyse a funnel step list and return full metrics + recommendations.
    Each step: {"step": <str>, "count": <int>}
    """
    if not steps:
        raise ValueError("steps list is empty")
    if len(steps) < 2:
        raise ValueError("Need at least 2 steps to analyse a funnel")
    top_count = steps[0]["count"]
    if top_count <= 0:
        raise ValueError("Top-of-funnel count must be > 0")

    step_metrics = []
    worst_step = None
    worst_drop_rate = -1.0
    for i, s in enumerate(steps):
        name = s["step"]
        count = s["count"]
        cumulative_rate = count / top_count
        if i == 0:
            step_to_step_rate = 1.0
            drop_count = 0
            drop_rate = 0.0
            recommendations = ["Top of funnel — all visitors enter here."]
        else:
            prev_count = steps[i - 1]["count"]
            step_to_step_rate = count / prev_count if prev_count > 0 else 0.0
            drop_count = prev_count - count
            drop_rate = 1 - step_to_step_rate
            recommendations = get_recommendation(name, drop_rate)
            if drop_rate > worst_drop_rate:
                worst_drop_rate = drop_rate
                worst_step = name
        step_metrics.append({
            "step": name,
            "count": count,
            "step_conversion_pct": round(step_to_step_rate * 100, 2),
            "step_drop_pct": round(drop_rate * 100, 2),
            "drop_count": drop_count,
            "cumulative_conversion_pct": round(cumulative_rate * 100, 2),
            "recommendations": recommendations,
        })

    # Overall funnel health score (0-100)
    overall_conv = steps[-1]["count"] / top_count
    score = _funnel_score(step_metrics, overall_conv)
    return {
        "summary": {
            "total_steps": len(steps),
            "top_of_funnel_count": top_count,
            "bottom_of_funnel_count": steps[-1]["count"],
            "overall_conversion_pct": round(overall_conv * 100, 2),
            "worst_performing_step": worst_step,
            "worst_step_drop_pct": round(worst_drop_rate * 100, 2),
            "funnel_health_score": score,
            "funnel_health_label": _score_label(score),
        },
        "steps": step_metrics,
        "top_priority": _top_priority(step_metrics),
    }
def _funnel_score(step_metrics: list, overall_conv: float) -> int:
    """
    Score = 100 * overall_conversion adjusted for worst-step severity.
    - Base: log-scale overall conversion (capped at a 10% target = 100 pts)
    - Penalty: each step with >60% drop deducts points
    """
    target_conv = 0.10  # 10% overall = score 100
    base = min(100, math.log1p(overall_conv) / math.log1p(target_conv) * 100)
    penalty = 0
    for m in step_metrics[1:]:
        if m["step_drop_pct"] > 60:
            penalty += 10
        elif m["step_drop_pct"] > 40:
            penalty += 5
    score = max(0, round(base - penalty))
    return score


def _score_label(s: int) -> str:
    if s >= 80: return "Excellent"
    if s >= 60: return "Good"
    if s >= 40: return "Fair"
    if s >= 20: return "Poor"
    return "Critical"


def _top_priority(step_metrics: list) -> dict:
    """Return the single highest-impact step to fix first."""
    # Pick step with largest absolute drop count (not just rate)
    candidates = step_metrics[1:]
    if not candidates:
        return {}
    top = max(candidates, key=lambda m: m["drop_count"])
    return {
        "step": top["step"],
        "drop_count": top["drop_count"],
        "drop_pct": top["step_drop_pct"],
        "why": "Largest absolute visitor loss — highest revenue impact.",
        "quick_wins": top["recommendations"],
    }
# ---------------------------------------------------------------------------
# Pretty-print
# ---------------------------------------------------------------------------
def pretty_print(result: dict) -> None:
    s = result["summary"]
    tp = result["top_priority"]
    print("\n" + "=" * 65)
    print("  SIGNUP FUNNEL DROP-OFF ANALYZER")
    print("=" * 65)
    print("\n📊 FUNNEL OVERVIEW")
    print(f"  Top of funnel      : {s['top_of_funnel_count']:,} visitors")
    print(f"  Bottom of funnel   : {s['bottom_of_funnel_count']:,} converted")
    print(f"  Overall conversion : {s['overall_conversion_pct']}%")
    print(f"  Funnel health      : {s['funnel_health_score']}/100 ({s['funnel_health_label']})")
    print(f"  Worst step         : {s['worst_performing_step']} "
          f"({s['worst_step_drop_pct']}% drop)")
    print(f"\n{'Step':<28} {'Count':>8} {'Step Conv':>10} {'Step Drop':>10} {'Cumul Conv':>10}")
    print("─" * 75)
    for m in result["steps"]:
        bar = "█" * int(m["cumulative_conversion_pct"] / 5)
        print(f"  {m['step']:<26} {m['count']:>8,} "
              f"{m['step_conversion_pct']:>9.1f}% "
              f"{m['step_drop_pct']:>9.1f}% "
              f"{m['cumulative_conversion_pct']:>9.1f}% {bar}")
    print(f"\n🚨 TOP PRIORITY FIX: {tp.get('step', 'N/A')}")
    print(f"  Lost visitors : {tp.get('drop_count', 0):,} ({tp.get('drop_pct', 0)}% drop)")
    print(f"  Why fix first : {tp.get('why', '')}")
    print("  Quick wins:")
    for qw in tp.get("quick_wins", []):
        print(f"    • {qw}")
    print("\n💡 STEP-BY-STEP RECOMMENDATIONS")
    for m in result["steps"][1:]:
        if m["step_drop_pct"] > 10:
            print(f"\n  [{m['step']}] ↓{m['step_drop_pct']}% drop")
            for r in m["recommendations"]:
                print(f"    • {r}")
    print()
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
DEMO_STEPS = [
    {"step": "Landing Page Visit", "count": 12000},
    {"step": "Clicked Sign Up CTA", "count": 4560},
    {"step": "Filled Registration", "count": 2800},
    {"step": "Email Verified", "count": 1540},
    {"step": "Onboarding Completed", "count": 880},
    {"step": "First Core Action", "count": 420},
]


def parse_args():
    parser = argparse.ArgumentParser(
        description="Analyse signup funnel drop-off by step (stdlib only).",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    parser.add_argument("--steps", type=str, default=None,
                        help="Path to JSON file with funnel steps")
    parser.add_argument("--stdin", action="store_true",
                        help="Read steps JSON from stdin")
    parser.add_argument("--json", action="store_true",
                        help="Output results as JSON")
    return parser.parse_args()
def main():
    args = parse_args()
    steps = None
    if args.stdin:
        steps = json.load(sys.stdin)
    elif args.steps:
        with open(args.steps) as f:
            steps = json.load(f)
    else:
        print("🔬 DEMO MODE — using sample SaaS signup funnel\n")
        steps = DEMO_STEPS
    result = analyze_funnel(steps)
    if args.json:
        print(json.dumps(result, indent=2))
    else:
        pretty_print(result)


if __name__ == "__main__":
    main()
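As a sanity check on the _funnel_score math above, the log-scaled base term can be recomputed in isolation for the demo funnel (420 of 12,000 visitors convert, i.e. 3.5% overall):

```python
import math

# Base term of _funnel_score for the bundled demo funnel:
# overall conversion = 420 / 12000 = 3.5%; a 10% target maps to 100 points
overall_conv = 420 / 12000
target_conv = 0.10
base = min(100, math.log1p(overall_conv) / math.log1p(target_conv) * 100)
print(round(base, 1))  # log-scaled base, before per-step drop penalties
```

With the demo data, the per-step penalties (one drop above 60%, three in the 40-60% band, so 10 + 5 + 5 + 5 = 25 points) are then subtracted from this base, which is why the demo run lands in the "Critical" band.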