* chore: update gitignore for audit reports and playwright cache * fix: add YAML frontmatter (name + description) to all SKILL.md files - Added frontmatter to 34 skills that were missing it entirely (0% Tessl score) - Fixed name field format to kebab-case across all 169 skills - Resolves #284 * chore: sync codex skills symlinks [automated] * fix: optimize 14 low-scoring skills via Tessl review (#290) Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286. * chore: sync codex skills symlinks [automated] * fix: optimize 18 skills via Tessl review + compliance fix (closes #287) (#291) Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287. * feat: add scripts and references to 4 prompt-only skills + Tessl optimization (#292) Phase 2: 3 new scripts + 2 reference files for prompt-only skills. Tessl 45-55% → 94-100%. * feat: add 6 agents + 5 slash commands for full coverage (v2.7.0) (#293) Phase 3: 6 new agents (all 9 categories covered) + 5 slash commands. * fix: Phase 5 verification fixes + docs update (#294) Phase 5 verification fixes * chore: sync codex skills symlinks [automated] * fix: marketplace audit — all 11 plugins validated by Claude Code (#295) Marketplace audit: all 11 plugins validated + installed + tested in Claude Code * fix: restore 7 removed plugins + revert playwright-pro name to pw Reverts two overly aggressive audit changes: - Restored content-creator, demand-gen, fullstack-engineer, aws-architect, product-manager, scrum-master, skill-security-auditor to marketplace - Reverted playwright-pro plugin.json name back to 'pw' (intentional short name) * refactor: split 21 over-500-line skills into SKILL.md + references (#296) * chore: sync codex skills symlinks [automated] * docs: update all documentation with accurate counts and regenerated skill pages - Update skill count to 170, Python tools to 213, references to 314 across all docs - Regenerate all 170 skill doc pages from latest SKILL.md sources - Update CLAUDE.md with v2.1.1 highlights, accurate architecture tree, and roadmap - Update README.md badges and overview table - Update marketplace.json metadata description and version - Update mkdocs.yml, index.md, getting-started.md with correct numbers * fix: add root-level SKILL.md and .codex/instructions.md to all domains (#301) Root cause: CLI tools (ai-agent-skills, agent-skills-cli) look for SKILL.md at the specified install path. 7 of 9 domain directories were missing this file, causing "Skill not found" errors for bundle installs like: npx ai-agent-skills install alirezarezvani/claude-skills/engineering-team Fix: - Add root-level SKILL.md with YAML frontmatter to 7 domains - Add .codex/instructions.md to 8 domains (for Codex CLI discovery) - Update INSTALLATION.md with accurate skill counts (53→170) - Add troubleshooting entry for "Skill not found" error All 9 domains now have: SKILL.md + .codex/instructions.md + plugin.json Closes #301 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * feat: add Gemini CLI + OpenClaw support, fix Codex missing 25 skills Gemini CLI: - Add GEMINI.md with activation instructions - Add scripts/gemini-install.sh setup script - Add scripts/sync-gemini-skills.py (194 skills indexed) - Add .gemini/skills/ with symlinks for all skills, agents, commands - Remove phantom medium-content-pro entries from sync script - Add top-level folder filter to prevent gitignored dirs from leaking Codex CLI: - Fix sync-codex-skills.py missing "engineering" domain (25 POWERFUL skills) - Regenerate .codex/skills-index.json: 124 → 149 skills - Add 25 new symlinks in .codex/skills/ OpenClaw: - Add OpenClaw installation section to INSTALLATION.md - Add ClawHub install + manual install + YAML frontmatter docs Documentation: - Update INSTALLATION.md with all 4 platforms + accurate counts - Update README.md: "three platforms" → "four platforms" + Gemini quick start - Update CLAUDE.md with Gemini CLI support in v2.1.1 highlights - Update SKILL-AUTHORING-STANDARD.md + SKILL_PIPELINE.md with Gemini steps - Add OpenClaw + Gemini to installation locations reference table Marketplace: all 18 plugins validated — sources exist, SKILL.md present Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * feat(product,pm): world-class product & PM skills audit — 6 scripts, 5 agents, 7 commands, 23 references/assets Phase 1 — Agent & Command Foundation: - Rewrite cs-project-manager agent (55→515 lines, 4 workflows, 6 skill integrations) - Expand cs-product-manager agent (408→684 lines, orchestrates all 8 product skills) - Add 7 slash commands: /rice, /okr, /persona, /user-story, /sprint-health, /project-health, /retro Phase 2 — Script Gap Closure (2,779 lines): - jira-expert: jql_query_builder.py (22 patterns), workflow_validator.py - confluence-expert: space_structure_generator.py, content_audit_analyzer.py - atlassian-admin: permission_audit_tool.py - atlassian-templates: template_scaffolder.py (Confluence XHTML generation) Phase 3 — Reference & Asset Enrichment: - 9 product references (competitive-teardown, landing-page-generator, saas-scaffolder) - 6 PM references (confluence-expert, atlassian-admin, atlassian-templates) - 7 product assets (templates for PRD, RICE, sprint, stories, OKR, research, design system) - 1 PM asset (permission_scheme_template.json) Phase 4 — New Agents: - cs-agile-product-owner, cs-product-strategist, cs-ux-researcher Phase 5 — Integration & Polish: - Related Skills cross-references in 8 SKILL.md files - Updated product-team/CLAUDE.md (5→8 skills, 6→9 tools, 4 agents, 5 commands) - Updated project-management/CLAUDE.md (0→12 scripts, 3 commands) - Regenerated docs site (177 pages), updated homepage and getting-started Quality audit: 31 files reviewed, 29 PASS, 2 fixed (copy-frameworks.md, governance-framework.md) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: audit and repair all plugins, agents, and commands - Fix 12 command files: correct CLI arg syntax, script paths, and usage docs - Fix 3 agents with broken script/reference paths (cs-content-creator, cs-demand-gen-specialist, cs-financial-analyst) - Add complete YAML frontmatter to 5 agents (cs-growth-strategist, cs-engineering-lead, cs-senior-engineer, cs-financial-analyst, cs-quality-regulatory) - Fix cs-ceo-advisor related agent path - Update marketplace.json metadata counts (224 tools, 341 refs, 14 agents, 12 commands) Verified: all 19 scripts pass --help, all 14 agent paths resolve, mkdocs builds clean. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: repair 25 Python scripts failing --help across all domains - Fix Python 3.10+ syntax (float | None → Optional[float]) in 2 scripts - Add argparse CLI handling to 9 marketing scripts using raw sys.argv - Fix 10 scripts crashing at module level (wrap in __main__, add argparse) - Make yaml/prefect/mcp imports conditional with stdlib fallbacks (4 scripts) - Fix f-string backslash syntax in project_bootstrapper.py - Fix -h flag conflict in pr_analyzer.py - Fix tech-debt.md description (score → prioritize) All 237 scripts now pass python3 --help verification. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(product-team): close 3 verified gaps in product skills - Fix competitive-teardown/SKILL.md: replace broken references DATA_COLLECTION.md → references/data-collection-guide.md and TEMPLATES.md → references/analysis-templates.md (workflow was broken at steps 2 and 4) - Upgrade landing_page_scaffolder.py: add TSX + Tailwind output format (--format tsx) matching SKILL.md promise of Next.js/React components. 4 design styles (dark-saas, clean-minimal, bold-startup, enterprise). TSX is now default; HTML preserved via --format html - Rewrite README.md: fix stale counts (was 5 skills/15+ tools, now accurately shows 8 skills/9 tools), remove 7 ghost scripts that never existed (sprint_planner.py, velocity_tracker.py, etc.) - Fix tech-debt.md description (score → prioritize) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * release: v2.1.2 — landing page TSX output, brand voice integration, docs update - Landing page generator defaults to Next.js TSX + Tailwind CSS (4 design styles) - Brand voice analyzer integrated into landing page generation workflow - CHANGELOG, CLAUDE.md, README.md updated for v2.1.2 - All 13 plugin.json + marketplace.json bumped to 2.1.2 - Gemini/Codex skill indexes re-synced - Backward compatible: --format html preserved, no breaking changes Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com> Co-authored-by: Leo <leo@openclaw.ai> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
518 lines
19 KiB
Python
518 lines
19 KiB
Python
#!/usr/bin/env python3
|
|
"""
|
|
email_sequence_analyzer.py — Analyzes a cold email sequence for quality signals.
|
|
|
|
Evaluates each email on:
|
|
- Word count (shorter is usually better for cold email)
|
|
- Reading level estimate (Flesch-Kincaid approximation via avg sentence/word length)
|
|
- Personalization density (signals of specific, targeted writing)
|
|
- CTA clarity (is there a clear ask?)
|
|
- Spam trigger words (words that hurt deliverability)
|
|
- Subject line analysis (length, warning patterns)
|
|
- Overall score: 0-100
|
|
|
|
Usage:
|
|
python3 email_sequence_analyzer.py [sequence.json]
|
|
cat sequence.json | python3 email_sequence_analyzer.py
|
|
|
|
If no file provided, runs on embedded sample sequence.
|
|
|
|
Input format (JSON):
|
|
[
|
|
{
|
|
"email": 1,
|
|
"subject": "...",
|
|
"body": "..."
|
|
},
|
|
...
|
|
]
|
|
|
|
Stdlib only — no external dependencies.
|
|
"""
|
|
|
|
import json
|
|
import re
|
|
import sys
|
|
import math
|
|
import select
|
|
from typing import List, Dict, Any
|
|
|
|
|
|
# ─── Spam trigger words ───────────────────────────────────────────────────────
|
|
|
|
SPAM_TRIGGERS = [
|
|
"free", "guaranteed", "no obligation", "act now", "limited time",
|
|
"click here", "earn money", "make money", "risk-free", "special offer",
|
|
"no cost", "winner", "congratulations", "you've been selected",
|
|
"once in a lifetime", "urgent", "don't miss out", "buy now",
|
|
"order now", "100%", "best price", "lowest price", "incredible deal",
|
|
"amazing offer", "cash bonus", "extra cash", "fast cash",
|
|
"you have been chosen", "exclusive deal", "as seen on",
|
|
"dear friend", "valued customer",
|
|
]
|
|
|
|
# ─── Personalization signals ──────────────────────────────────────────────────
|
|
|
|
PERSONALIZATION_SIGNALS = [
|
|
# Direct references to "you"
|
|
r'\byou(?:r|rs|\'re|\'ve|\'d|\'ll)?\b',
|
|
# Trigger references
|
|
r'\b(?:saw|noticed|read|heard|saw|found|noted)\b',
|
|
# Named observation patterns
|
|
r'\b(?:your team|your company|your role|your work|your recent|your post)\b',
|
|
# Industry/role-specific references
|
|
r'\b(?:as a|in your|at your|given your)\b',
|
|
# Specific numbers or facts
|
|
r'\b\d{4}\b', # years — often a sign of specific research
|
|
r'\$\d+|\d+%', # numbers with $ or %
|
|
]
|
|
|
|
# ─── Dead opener phrases ──────────────────────────────────────────────────────
|
|
|
|
DEAD_OPENERS = [
|
|
"i hope this email finds you well",
|
|
"i hope this finds you",
|
|
"i wanted to reach out",
|
|
"i am reaching out",
|
|
"my name is",
|
|
"i'm writing to",
|
|
"i am writing to",
|
|
"hope you're doing well",
|
|
"i hope you are doing well",
|
|
"just following up",
|
|
"just checking in",
|
|
"circling back",
|
|
"touching base",
|
|
"per my last email",
|
|
"as per my previous",
|
|
]
|
|
|
|
# ─── Weak CTA patterns ────────────────────────────────────────────────────────
|
|
|
|
WEAK_CTA = [
|
|
"let me know if you're interested",
|
|
"let me know if you would be interested",
|
|
"feel free to",
|
|
"please don't hesitate",
|
|
"if you have any questions",
|
|
"looking forward to hearing from you",
|
|
"i look forward to connecting",
|
|
"hope we can connect",
|
|
]
|
|
|
|
# ─── Strong CTA signals ───────────────────────────────────────────────────────
|
|
|
|
STRONG_CTA_PATTERNS = [
|
|
r'\b(?:15|20|30|45|60)[\s-]?minute\b', # time-specific meeting ask
|
|
r'\b(?:call|chat|talk|speak|connect|meet)\b.*\?', # question + meeting word
|
|
r'worth\s+(?:a|an)\b', # "worth a call?"
|
|
r'\?$', # ends with question
|
|
r'\buseful\b\s*\?', # "useful?"
|
|
r'\b(?:reply|respond)\b', # explicit reply ask
|
|
]
|
|
|
|
|
|
# ─── Text utilities ───────────────────────────────────────────────────────────
|
|
|
|
def count_words(text: str) -> int:
|
|
return len(text.split())
|
|
|
|
|
|
def count_sentences(text: str) -> int:
|
|
"""Rough sentence count by terminal punctuation."""
|
|
sentences = re.split(r'[.!?]+', text)
|
|
return max(1, len([s for s in sentences if s.strip()]))
|
|
|
|
|
|
def avg_words_per_sentence(text: str) -> float:
|
|
words = count_words(text)
|
|
sentences = count_sentences(text)
|
|
return words / sentences if sentences else words
|
|
|
|
|
|
def avg_chars_per_word(text: str) -> float:
|
|
words = text.split()
|
|
if not words:
|
|
return 0
|
|
return sum(len(w.strip('.,!?;:')) for w in words) / len(words)
|
|
|
|
|
|
def flesch_reading_ease(text: str) -> float:
|
|
"""
|
|
Approximate Flesch Reading Ease score.
|
|
206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
|
|
We approximate syllables as: max(1, len(word) * 0.4) for each word.
|
|
"""
|
|
words = text.split()
|
|
if not words:
|
|
return 0
|
|
sentences = count_sentences(text)
|
|
syllables = sum(max(1, int(len(re.sub(r'[^aeiouAEIOU]', '', w)) * 1.2) or 1) for w in words)
|
|
asl = len(words) / sentences # avg sentence length
|
|
asw = syllables / len(words) # avg syllables per word
|
|
score = 206.835 - (1.015 * asl) - (84.6 * asw)
|
|
return max(0, min(100, score))
|
|
|
|
|
|
def grade_reading_level(fre_score: float) -> str:
|
|
"""Convert Flesch Reading Ease to a human label."""
|
|
if fre_score >= 70:
|
|
return "Easy (conversational)"
|
|
if fre_score >= 60:
|
|
return "Plain English"
|
|
if fre_score >= 50:
|
|
return "Fairly difficult"
|
|
return "Difficult (too complex for cold email)"
|
|
|
|
|
|
# ─── Analysis functions ───────────────────────────────────────────────────────
|
|
|
|
def analyze_subject_line(subject: str) -> Dict:
|
|
issues = []
|
|
warnings = []
|
|
|
|
if not subject:
|
|
return {"length": 0, "issues": ["No subject line provided"], "score": 0}
|
|
|
|
length = len(subject)
|
|
|
|
if length > 60:
|
|
issues.append(f"Too long ({length} chars) — aim for under 50")
|
|
if length > 50:
|
|
warnings.append("Subject is getting long — shorter subjects get more opens")
|
|
|
|
if subject.isupper():
|
|
issues.append("All caps subject lines trigger spam filters")
|
|
|
|
if re.search(r'!!!|!{2,}', subject):
|
|
issues.append("Multiple exclamation points look like spam")
|
|
|
|
if subject.startswith("Re:") or subject.startswith("Fwd:"):
|
|
lower = subject.lower()
|
|
if lower.startswith("re:") or lower.startswith("fwd:"):
|
|
warnings.append("Fake Re:/Fwd: subjects feel deceptive — people have learned this trick")
|
|
|
|
if re.search(r'[A-Z]{4,}', subject) and not subject.isupper():
|
|
warnings.append("SHOUTING words in subject lines look like spam")
|
|
|
|
if re.search(r'[\U0001F600-\U0001FFFF]', subject):
|
|
warnings.append("Emojis in subject lines are polarizing and often spam-filtered for B2B")
|
|
|
|
if '?' in subject:
|
|
warnings.append("Question mark in subject can feel like an ad — test without")
|
|
|
|
# Spam trigger check in subject
|
|
subject_lower = subject.lower()
|
|
triggered = [w for w in SPAM_TRIGGERS if w in subject_lower]
|
|
if triggered:
|
|
issues.append(f"Spam trigger words in subject: {', '.join(triggered)}")
|
|
|
|
# Score
|
|
score = 100
|
|
score -= len(issues) * 20
|
|
score -= len(warnings) * 10
|
|
score = max(0, min(100, score))
|
|
|
|
return {
|
|
"length": length,
|
|
"issues": issues,
|
|
"warnings": warnings,
|
|
"score": score,
|
|
}
|
|
|
|
|
|
def analyze_body(body: str) -> Dict:
|
|
body_lower = body.lower()
|
|
findings = []
|
|
deductions = []
|
|
|
|
word_count = count_words(body)
|
|
fre = flesch_reading_ease(body)
|
|
reading_level = grade_reading_level(fre)
|
|
avg_wps = avg_words_per_sentence(body)
|
|
|
|
# Word count scoring
|
|
if word_count > 200:
|
|
deductions.append(("word_count", 15, f"Too long ({word_count} words) — cold emails should be under 150"))
|
|
elif word_count > 150:
|
|
deductions.append(("word_count", 5, f"Getting long ({word_count} words) — aim for under 150"))
|
|
elif word_count < 30:
|
|
deductions.append(("word_count", 10, f"Very short ({word_count} words) — may lack enough context"))
|
|
|
|
# Sentence length
|
|
if avg_wps > 25:
|
|
deductions.append(("readability", 10, f"Sentences average {avg_wps:.0f} words — too complex, aim for 15-20"))
|
|
|
|
# Dead opener check
|
|
for opener in DEAD_OPENERS:
|
|
if opener in body_lower:
|
|
deductions.append(("opener", 20, f"Dead opener detected: '{opener}' — rewrite the opening"))
|
|
break
|
|
|
|
# Personalization density
|
|
pers_matches = 0
|
|
for pattern in PERSONALIZATION_SIGNALS:
|
|
matches = re.findall(pattern, body_lower)
|
|
pers_matches += len(matches)
|
|
|
|
pers_density = pers_matches / word_count * 100 if word_count else 0
|
|
if pers_density < 5:
|
|
deductions.append(("personalization", 10, "Low personalization signals — email may feel generic"))
|
|
|
|
# Spam trigger words in body
|
|
triggered = [w for w in SPAM_TRIGGERS if w in body_lower]
|
|
if triggered:
|
|
deductions.append(("spam", len(triggered) * 5, f"Spam trigger words: {', '.join(triggered[:5])}"))
|
|
|
|
# Weak CTA check
|
|
for weak in WEAK_CTA:
|
|
if weak in body_lower:
|
|
deductions.append(("cta", 10, f"Weak CTA: '{weak}' — be more direct"))
|
|
break
|
|
|
|
# Strong CTA check
|
|
has_strong_cta = any(re.search(p, body_lower) for p in STRONG_CTA_PATTERNS)
|
|
if not has_strong_cta:
|
|
deductions.append(("cta", 15, "No clear CTA detected — every cold email needs a single, direct ask"))
|
|
|
|
# HTML check
|
|
if re.search(r'<html|<body|<table|<div|style="|font-family:', body_lower):
|
|
deductions.append(("format", 20, "HTML detected — plain text emails get better deliverability for cold outreach"))
|
|
|
|
# Multiple links
|
|
links = re.findall(r'https?://', body)
|
|
if len(links) > 2:
|
|
deductions.append(("links", 10, f"{len(links)} links detected — keep to 1-2 max for cold email"))
|
|
|
|
# Calculate score
|
|
total_deduction = sum(d[1] for d in deductions)
|
|
score = max(0, min(100, 100 - total_deduction))
|
|
|
|
return {
|
|
"word_count": word_count,
|
|
"reading_ease_score": round(fre, 1),
|
|
"reading_level": reading_level,
|
|
"avg_words_per_sentence": round(avg_wps, 1),
|
|
"personalization_density": round(pers_density, 1),
|
|
"has_strong_cta": has_strong_cta,
|
|
"spam_triggers": triggered,
|
|
"deductions": [(d[2], d[1]) for d in deductions],
|
|
"score": score,
|
|
}
|
|
|
|
|
|
# ─── Report printer ───────────────────────────────────────────────────────────
|
|
|
|
def grade(score: int) -> str:
|
|
if score >= 85:
|
|
return "🟢 Strong"
|
|
if score >= 65:
|
|
return "🟡 Decent"
|
|
if score >= 45:
|
|
return "🟠 Needs work"
|
|
return "🔴 Rewrite"
|
|
|
|
|
|
def print_report(results: List[Dict]) -> None:
|
|
print("\n" + "═" * 64)
|
|
print(" COLD EMAIL SEQUENCE ANALYSIS")
|
|
print("═" * 64)
|
|
|
|
scores = []
|
|
for r in results:
|
|
email_num = r["email"]
|
|
subj = r["subject_analysis"]
|
|
body = r["body_analysis"]
|
|
overall = r["overall_score"]
|
|
scores.append(overall)
|
|
|
|
print(f"\n── Email {email_num}: \"{r['subject']}\" ──")
|
|
print(f" Overall: {overall}/100 {grade(overall)}")
|
|
|
|
print(f"\n Subject ({subj['length']} chars): {subj['score']}/100")
|
|
for issue in subj.get("issues", []):
|
|
print(f" ❌ {issue}")
|
|
for warn in subj.get("warnings", []):
|
|
print(f" ⚠️ {warn}")
|
|
|
|
print(f"\n Body Analysis:")
|
|
print(f" Words: {body['word_count']} | "
|
|
f"Reading: {body['reading_level']} | "
|
|
f"Avg sentence: {body['avg_words_per_sentence']} words | "
|
|
f"Personalization density: {body['personalization_density']}%")
|
|
print(f" CTA: {'✅ Clear ask detected' if body['has_strong_cta'] else '❌ No clear CTA found'}")
|
|
|
|
if body.get("spam_triggers"):
|
|
print(f" ⚠️ Spam triggers: {', '.join(body['spam_triggers'])}")
|
|
|
|
if body.get("deductions"):
|
|
print(f"\n Issues found:")
|
|
for desc, pts in body["deductions"]:
|
|
print(f" [-{pts:2d}] {desc}")
|
|
|
|
avg = sum(scores) // len(scores) if scores else 0
|
|
print(f"\n{'═' * 64}")
|
|
print(f" SEQUENCE OVERALL: {avg}/100 {grade(avg)}")
|
|
print(f" Emails analyzed: {len(results)}")
|
|
|
|
# Sequence-level observations
|
|
print("\n Sequence observations:")
|
|
word_counts = [r["body_analysis"]["word_count"] for r in results]
|
|
if all(abs(word_counts[i] - word_counts[i-1]) < 20 for i in range(1, len(word_counts))):
|
|
print(" ⚠️ All emails are similar length — vary length across sequence")
|
|
|
|
if len(results) > 1:
|
|
last_body = results[-1]["body_analysis"]
|
|
if last_body["word_count"] > 100:
|
|
print(" ⚠️ Final email (breakup) should be shorter — 3-5 sentences max")
|
|
|
|
print("═" * 64 + "\n")
|
|
|
|
|
|
# ─── Sample data ──────────────────────────────────────────────────────────────
|
|
|
|
SAMPLE_SEQUENCE = [
|
|
{
|
|
"email": 1,
|
|
"subject": "your SDR team expansion",
|
|
"body": (
|
|
"Saw you're hiring four SDRs simultaneously — that's a significant scale-up.\n\n"
|
|
"The challenge most teams hit at this stage isn't recruiting — it's ramp time. "
|
|
"When you're adding four people at once, the gaps in your onboarding process "
|
|
"become very expensive very fast. The average ramp in your segment is around "
|
|
"4.5 months; the fastest teams we've seen get it to 2.5.\n\n"
|
|
"We've helped three similar-sized SaaS teams compress that gap. Happy to share "
|
|
"what worked if it's useful.\n\n"
|
|
"Worth 15 minutes to compare notes?"
|
|
),
|
|
},
|
|
{
|
|
"email": 2,
|
|
"subject": "re: your onboarding stack",
|
|
"body": (
|
|
"I hope this email finds you well. I wanted to follow up on my previous email.\n\n"
|
|
"Just checking in to see if you had a chance to review what I sent. "
|
|
"As mentioned, our platform offers a comprehensive suite of tools designed to "
|
|
"help sales teams of all sizes achieve unprecedented growth through our "
|
|
"revolutionary AI-powered onboarding solution.\n\n"
|
|
"I'd love to schedule a 45-minute product demo at your earliest convenience. "
|
|
"Please don't hesitate to reach out if you have any questions. "
|
|
"I look forward to hearing from you!\n\n"
|
|
"Click here to book a time: https://calendly.com/example"
|
|
),
|
|
},
|
|
{
|
|
"email": 3,
|
|
"subject": "SDR ramp benchmark",
|
|
"body": (
|
|
"One data point that might be useful: across the 40 SaaS teams we've benchmarked, "
|
|
"the ones with the fastest SDR ramp time don't hire the most experienced reps — "
|
|
"they invest more heavily in structured onboarding in the first 30 days.\n\n"
|
|
"Happy to share the full breakdown. No catch — just thought it might be relevant "
|
|
"given where you're headed.\n\n"
|
|
"Useful?"
|
|
),
|
|
},
|
|
{
|
|
"email": 4,
|
|
"subject": "quick question",
|
|
"body": (
|
|
"Is SDR onboarding actually a priority right now, or is the timing just off?\n\n"
|
|
"No judgment either way — just helps me know whether it's worth staying in touch."
|
|
),
|
|
},
|
|
{
|
|
"email": 5,
|
|
"subject": "last one",
|
|
"body": (
|
|
"I'll stop cluttering your inbox after this one.\n\n"
|
|
"If scaling your SDR ramp time ever becomes a priority, happy to reconnect — "
|
|
"just reply here.\n\n"
|
|
"If there's someone else at your company who owns sales enablement, "
|
|
"a name would go a long way.\n\n"
|
|
"Either way, good luck with the expansion."
|
|
),
|
|
},
|
|
]
|
|
|
|
|
|
# ─── Main ─────────────────────────────────────────────────────────────────────
|
|
|
|
def main():
|
|
import argparse
|
|
|
|
parser = argparse.ArgumentParser(
|
|
description="Analyzes a cold email sequence for quality signals. "
|
|
"Evaluates word count, reading level, personalization, CTA clarity, "
|
|
"spam triggers, and subject lines. Scores each email 0-100."
|
|
)
|
|
parser.add_argument(
|
|
"file", nargs="?", default=None,
|
|
help="Path to a JSON file containing the email sequence. "
|
|
"Use '-' to read from stdin. If omitted, runs embedded sample."
|
|
)
|
|
args = parser.parse_args()
|
|
|
|
if args.file:
|
|
if args.file == "-":
|
|
sequence = json.load(sys.stdin)
|
|
else:
|
|
try:
|
|
with open(args.file, "r", encoding="utf-8") as f:
|
|
sequence = json.load(f)
|
|
except FileNotFoundError:
|
|
print(f"Error: File not found: {args.file}", file=sys.stderr)
|
|
sys.exit(1)
|
|
except json.JSONDecodeError as e:
|
|
print(f"Error: Invalid JSON: {e}", file=sys.stderr)
|
|
sys.exit(1)
|
|
else:
|
|
print("No file provided — running on embedded sample sequence.\n")
|
|
sequence = SAMPLE_SEQUENCE
|
|
|
|
results = []
|
|
for email in sequence:
|
|
subject = email.get("subject", "")
|
|
body = email.get("body", "")
|
|
email_num = email.get("email", len(results) + 1)
|
|
|
|
subject_analysis = analyze_subject_line(subject)
|
|
body_analysis = analyze_body(body)
|
|
|
|
# Overall score: 30% subject, 70% body
|
|
overall = int(subject_analysis["score"] * 0.3 + body_analysis["score"] * 0.7)
|
|
|
|
results.append({
|
|
"email": email_num,
|
|
"subject": subject,
|
|
"subject_analysis": subject_analysis,
|
|
"body_analysis": body_analysis,
|
|
"overall_score": overall,
|
|
})
|
|
|
|
print_report(results)
|
|
|
|
# JSON output for programmatic use
|
|
summary = {
|
|
"emails_analyzed": len(results),
|
|
"average_score": sum(r["overall_score"] for r in results) // len(results) if results else 0,
|
|
"results": [
|
|
{
|
|
"email": r["email"],
|
|
"subject": r["subject"],
|
|
"score": r["overall_score"],
|
|
"word_count": r["body_analysis"]["word_count"],
|
|
"has_strong_cta": r["body_analysis"]["has_strong_cta"],
|
|
"spam_triggers": r["body_analysis"]["spam_triggers"],
|
|
"subject_score": r["subject_analysis"]["score"],
|
|
}
|
|
for r in results
|
|
],
|
|
}
|
|
print("── JSON Output ──")
|
|
print(json.dumps(summary, indent=2))
|
|
|
|
|
|
if __name__ == "__main__":
|
|
main()
|