Merge pull request #311 from alirezarezvani/feat/x-twitter-growth
This commit is contained in:
1
marketing-skill/x-twitter-growth/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
.growth-data/
226
marketing-skill/x-twitter-growth/SKILL.md
Normal file
@@ -0,0 +1,226 @@
---
name: "x-twitter-growth"
description: "X/Twitter growth engine for building audience, crafting viral content, and analyzing engagement. Use when the user wants to grow on X/Twitter, write tweets or threads, analyze their X profile, research competitors on X, plan a posting strategy, or optimize engagement. Complements social-content (generic multi-platform) with X-specific depth: algorithm mechanics, thread engineering, reply strategy, profile optimization, and competitive intelligence via web search."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: marketing
  updated: 2026-03-10
---

# X/Twitter Growth Engine

X-specific growth skill. For general social media content across platforms, see `social-content`. For social strategy and calendar planning, see `social-media-manager`. This skill goes deep on X.

## When to Use This vs Other Skills

| Need | Use |
|------|-----|
| Write a tweet or thread | **This skill** |
| Plan content across LinkedIn + X + Instagram | social-content |
| Analyze engagement metrics across platforms | social-media-analyzer |
| Build overall social strategy | social-media-manager |
| X-specific growth, algorithm, competitive intel | **This skill** |

---

## Step 1 — Profile Audit

Before any growth work, audit the current X presence. Run `scripts/profile_auditor.py` with the handle, or manually assess:

### Bio Checklist
- [ ] Clear value proposition in the first line (who you help + how)
- [ ] Specific niche — not "entrepreneur | thinker | builder"
- [ ] Social proof element (followers, title, metric, brand)
- [ ] CTA or link (newsletter, product, site)
- [ ] No hashtags in the bio (signals amateur)

### Pinned Tweet
- [ ] Exists and is less than 30 days old
- [ ] Showcases best work or strongest hook
- [ ] Has a clear CTA (follow, subscribe, read)

### Recent Activity (last 30 posts)
- [ ] Posting frequency: minimum 1x/day, ideally 3-5x/day
- [ ] Mix of formats: tweets, threads, replies, quotes
- [ ] Reply ratio: >30% of activity should be replies
- [ ] Engagement trend: improving, flat, or declining

Run: `python3 scripts/profile_auditor.py --handle @username`
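The recent-activity checks can be approximated with a small helper. This is an illustrative sketch, not part of `profile_auditor.py`; the post-dict shape (`kind`, `posted_at`) is an assumption:

```python
from datetime import datetime

def activity_summary(posts: list) -> dict:
    """posts: [{"kind": "tweet"|"reply"|"quote"|"thread", "posted_at": iso8601}, ...]"""
    if not posts:
        return {"posts_per_day": 0.0, "reply_ratio": 0.0}
    times = sorted(datetime.fromisoformat(p["posted_at"]) for p in posts)
    # Span of the sample in days; at least 1 to avoid division by zero.
    span_days = max((times[-1] - times[0]).days, 1)
    replies = sum(1 for p in posts if p["kind"] == "reply")
    return {
        "posts_per_day": round(len(posts) / span_days, 1),
        "reply_ratio": round(replies / len(posts), 2),  # checklist target: > 0.30
    }
```

Feed it the last 30 posts and compare the two numbers against the checklist thresholds above.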

---

## Step 2 — Competitive Intelligence

Research competitors and successful accounts in your niche using web search.

### Process
1. Search `site:x.com "topic" min_faves:100` via Brave to find high-performing content
2. Identify 5-10 accounts in your niche with strong engagement
3. For each, analyze: posting frequency, content types, hook patterns, engagement rates
4. Run: `python3 scripts/competitor_analyzer.py --handles @acc1 @acc2 @acc3`

### What to Extract
- **Hook patterns** — How do top posts start? A question? A bold claim? A statistic?
- **Content themes** — Which 3-5 topics get the most engagement?
- **Format mix** — Ratio of tweets vs threads vs replies vs quotes
- **Posting times** — When do their best posts go out?
- **Engagement triggers** — What makes people reply vs like vs retweet?
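Engagement rate, as computed by `scripts/competitor_analyzer.py`, is average engagements per post as a percentage of followers. A minimal standalone version of that calculation:

```python
def engagement_rate(avg_likes: float, avg_replies: float, avg_retweets: float,
                    followers: int) -> float:
    """Average engagements per post as a percentage of followers."""
    if followers <= 0:
        return 0.0
    return (avg_likes + avg_replies + avg_retweets) / followers * 100
```

For example, an account with 25K followers averaging 150 likes, 30 replies, and 20 retweets per post sits at roughly 0.8%, a useful benchmark when comparing competitors.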

---

## Step 3 — Content Creation

### Tweet Types (ordered by growth impact)

#### 1. Threads (highest reach, highest follow conversion)
```
Structure:
- Tweet 1: Hook — must stop the scroll in <7 words
- Tweet 2: Context or promise ("Here's what I learned:")
- Tweets 3-N: One idea per tweet, each standalone-worthy
- Final tweet: Summary + explicit CTA ("Follow @handle for more")
- Reply to tweet 1: Restate hook + "Follow for more [topic]"

Rules:
- 5-12 tweets is optimal (under 5 feels thin, over 12 loses people)
- Each tweet should make sense if read alone
- Use line breaks for readability
- No tweet should be a wall of text (3-4 lines max)
- Number the tweets or use "↓" in tweet 1
```
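The thread rules are mechanical enough to lint automatically. A sketch of a draft-thread checker; the function name and exact thresholds follow the rules above, but the helper itself is illustrative and not one of this skill's scripts:

```python
def lint_thread(tweets: list) -> list:
    """Return a list of rule violations for a draft thread (list of tweet strings)."""
    issues = []
    if not 5 <= len(tweets) <= 12:
        issues.append(f"{len(tweets)} tweets (optimal is 5-12)")
    hook = tweets[0] if tweets else ""
    if len(hook.split()) >= 7:
        issues.append("hook is 7+ words; tighten it")
    for i, t in enumerate(tweets, 1):
        if len(t) > 280:
            issues.append(f"tweet {i} exceeds 280 characters")
        if t.count("\n") > 3:  # more than 4 lines reads as a wall of text
            issues.append(f"tweet {i} is a wall of text (4+ line breaks)")
    return issues
```

An empty return means the draft passes the structural rules; it says nothing about whether the hook actually stops the scroll.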

#### 2. Atomic Tweets (breadth, impression farming)
```
Formats that work:
- Observation: "[Thing] is underrated. Here's why:"
- Listicle: "10 tools I use daily:\n\n1. X — for Y"
- Contrarian: "Unpopular opinion: [statement]"
- Lesson: "I [did X] for [time]. Biggest lesson:"
- Framework: "[Concept] explained in 30 seconds:"

Rules:
- Under 200 characters gets more engagement
- One idea per tweet
- No links in the tweet body (kills reach — put the link in a reply)
- Question tweets drive replies (the algorithm loves replies)
```

#### 3. Quote Tweets (authority building)
```
Formula: Original tweet + your unique take
- Add data the original missed
- Provide a counterpoint or nuance
- Share personal experience that validates or contradicts
- Never just say "This" or "So true"
```

#### 4. Replies (network growth, fastest path to visibility)
```
Strategy:
- Reply to accounts 2-10x your size
- Add genuine value, not "great post!"
- Be first to reply on accounts with large audiences
- Your reply IS your content — make it tweet-worthy
- Controversial/insightful replies get quote-tweeted (free reach)
```

Run: `python3 scripts/tweet_composer.py --type thread --topic "your topic" --audience "your audience"`

---

## Step 4 — Algorithm Mechanics

### What X rewards (2025-2026)
| Signal | Weight | Action |
|--------|--------|--------|
| Replies received | Very high | Write reply-worthy content (questions, debates) |
| Time spent reading | High | Threads, longer tweets with line breaks |
| Profile visits from tweet | High | Curiosity gaps, tease expertise |
| Bookmarks | High | Tactical, save-worthy content (lists, frameworks) |
| Retweets/Quotes | Medium | Shareable insights, bold takes |
| Likes | Low-medium | Easy agreement, relatable content |
| Link clicks | Low (penalized) | Never put links in the tweet body — use a reply |

### What kills reach
- Links in the tweet body (put them in the first reply instead)
- Editing tweets within 30 minutes of posting
- Posting and immediately going offline (no early engagement)
- More than 2 hashtags
- Tagging people who don't engage back
- Threads with inconsistent quality (one weak tweet tanks the whole thread)

### Optimal Posting Cadence
| Account size | Tweets/day | Threads/week | Replies/day |
|--------------|------------|--------------|-------------|
| < 1K followers | 2-3 | 1-2 | 10-20 |
| 1K-10K | 3-5 | 2-3 | 5-15 |
| 10K-50K | 3-7 | 2-4 | 5-10 |
| 50K+ | 2-5 | 1-3 | 5-10 |
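The cadence table translates directly into a tier lookup. The tiers and ranges below mirror the table; the function itself is an illustrative sketch, not one of this skill's scripts:

```python
def recommended_cadence(followers: int) -> dict:
    """Map a follower count onto the posting-cadence tiers from the table."""
    tiers = [
        (1_000, {"tweets_per_day": "2-3", "threads_per_week": "1-2", "replies_per_day": "10-20"}),
        (10_000, {"tweets_per_day": "3-5", "threads_per_week": "2-3", "replies_per_day": "5-15"}),
        (50_000, {"tweets_per_day": "3-7", "threads_per_week": "2-4", "replies_per_day": "5-10"}),
    ]
    for ceiling, cadence in tiers:
        if followers < ceiling:
            return cadence
    # 50K+ tier: volume tapers because each post reaches a larger audience.
    return {"tweets_per_day": "2-5", "threads_per_week": "1-3", "replies_per_day": "5-10"}
```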

---

## Step 5 — Growth Playbook

### Week 1-2: Foundation
1. Optimize your bio and pinned tweet (Step 1)
2. Identify 20 accounts in your niche to engage with daily
3. Reply 10-20 times per day to larger accounts (genuine value only)
4. Post 2-3 atomic tweets per day, testing different formats
5. Publish 1 thread

### Week 3-4: Pattern Recognition
1. Review which formats got the most engagement
2. Double down on your top 2 content formats
3. Increase to 3-5 posts per day
4. Publish 2-3 threads per week
5. Start quote-tweeting relevant content daily

### Month 2+: Scale
1. Develop 3-5 recurring content series (e.g., "Friday Framework")
2. Cross-pollinate: repurpose threads as LinkedIn posts and newsletter content
3. Build reply relationships with 5-10 accounts your size (mutual engagement)
4. Experiment with Spaces/audio if relevant to your niche
5. Run: `python3 scripts/growth_tracker.py --handle @username --period 30d`

---

## Step 6 — Content Calendar Generation

Run: `python3 scripts/content_planner.py --niche "your niche" --frequency 5 --weeks 2`

Generates a 2-week posting plan with:
- Daily tweet topics with hook suggestions
- Thread outlines (2-3 per week)
- Reply targets (accounts to engage with)
- Optimal posting times based on your niche

---

## Scripts

| Script | Purpose |
|--------|---------|
| `scripts/profile_auditor.py` | Audit an X profile: bio, pinned tweet, activity patterns |
| `scripts/tweet_composer.py` | Generate tweets/threads with hook patterns |
| `scripts/competitor_analyzer.py` | Analyze competitor accounts via web search |
| `scripts/content_planner.py` | Generate weekly/monthly content calendars |
| `scripts/growth_tracker.py` | Track follower growth and engagement trends |

## Common Pitfalls

1. **Posting links directly** — Always put links in the first reply, never in the tweet body
2. **A weak thread opener** — If the hook doesn't stop the scroll, nothing else matters
3. **Inconsistent posting** — The algorithm rewards daily consistency over occasional bangers
4. **Only broadcasting** — Replies and engagement drive 50%+ of growth; posting alone isn't enough
5. **A generic bio** — "Helping people do things" tells nobody anything
6. **Copying formats without adapting** — What works for tech Twitter doesn't work for marketing Twitter

## Related Skills

- `social-content` — Multi-platform content creation
- `social-media-manager` — Overall social strategy
- `social-media-analyzer` — Cross-platform analytics
- `content-production` — Long-form content that feeds X threads
- `copywriting` — Headline and hook writing techniques
@@ -0,0 +1,70 @@
# X/Twitter Algorithm Signals (2025-2026)

## Ranking Factors by Weight

### Tier 1 — Strongest Signals
| Signal | Impact | How to Optimize |
|--------|--------|-----------------|
| Replies received | Very high | Ask questions, make controversial/insightful points |
| Dwell time (time reading) | Very high | Threads, longer tweets with line breaks |
| Profile clicks from tweet | High | Create curiosity gaps, tease expertise |
| Bookmarks | High | Tactical content (lists, frameworks, templates) |

### Tier 2 — Moderate Signals
| Signal | Impact | How to Optimize |
|--------|--------|-----------------|
| Retweets/Quotes | Medium | Shareable insights, bold takes, data |
| Likes | Medium-low | Easy agreement, relatable content |
| Follows from tweet | Medium | Thread CTAs, high-value niche content |

### Tier 3 — Negative Signals
| Signal | Impact | How to Avoid |
|--------|--------|--------------|
| Link in tweet body | Reach penalty | Put links in the first reply |
| Edit within 30 min | Suppresses distribution | Don't edit — delete and repost if needed |
| Low early engagement | Decay | Stay online 30 min after posting; engage with replies |
| Hashtag spam (3+) | Spam signal | Max 1-2 hashtags, or zero |
| Tagging non-engagers | Negative | Only tag people likely to engage |
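To make the tiers concrete, here is a toy relevance score. The weights are assumptions chosen only to reflect the relative ordering of the tiers above; X's real ranking model is far more complex and these numbers carry no authority:

```python
# Illustrative weights loosely following the tier ordering above; not X's actual model.
SIGNAL_WEIGHTS = {
    "replies": 13.5, "dwell_seconds": 0.1, "profile_clicks": 12.0,
    "bookmarks": 10.0, "retweets": 1.0, "likes": 0.5, "link_in_body": -20.0,
}

def toy_score(signals: dict) -> float:
    """Sum weighted engagement signals for a single post."""
    return sum(SIGNAL_WEIGHTS.get(k, 0.0) * v for k, v in signals.items())
```

The point of the exercise: two replies outweigh ten likes, and a link in the body can wipe out a healthy amount of engagement.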

## Content Format Performance (ranked)

1. **Threads** — Highest reach potential, best for follower conversion
2. **Image tweets** — 2-3x the engagement of text-only
3. **Quote tweets** — Network effect (appear in two audiences)
4. **Text tweets** — Baseline; best for hot takes and questions
5. **Polls** — High engagement but low follower conversion
6. **Link tweets** — Lowest reach (the algorithm penalizes external links)

## Optimal Timing

| Time Slot (UTC) | Why |
|-----------------|-----|
| 12:00-14:00 | US East Coast morning, EU afternoon |
| 16:00-18:00 | US afternoon peak |
| 21:00-23:00 | US evening, high scroll time |
| 07:00-08:00 | EU morning, commute scrolling |

Best days: Tuesday-Thursday for B2B; Saturday-Sunday for consumer/lifestyle.

## Thread-Specific Mechanics

- Tweet 1 gets 10-50x the impressions of tweet 5+
- Hook quality determines 90% of thread performance
- Numbered threads (1/, 2/, etc.) signal commitment — the algorithm boosts them
- Self-reply threads perform better than tweetstorm threads
- The last tweet should restate the hook and carry a CTA for fast scrollers

## Premium/Blue Subscriber Advantages

- Longer tweets (up to 4,000 chars for Premium+)
- Edit button (use sparingly — edits can suppress reach)
- Higher reply ranking
- Revenue-sharing eligibility
- Analytics access

## Sources

- X Engineering Blog (algorithm open-source release, 2023)
- Community testing and experimentation (ongoing)
- Creator program documentation
- Third-party analytics platforms (Typefully, Hypefury, Shield)
235
marketing-skill/x-twitter-growth/scripts/competitor_analyzer.py
Normal file
@@ -0,0 +1,235 @@
#!/usr/bin/env python3
"""
X/Twitter Competitor Analyzer — Analyze competitor profiles for content strategy insights.

Takes competitor handles and available data, produces a competitive
intelligence report with content patterns, engagement strategies, and gaps.

Usage:
    python3 competitor_analyzer.py --handles @user1 @user2 @user3
    python3 competitor_analyzer.py --handles @user1 --followers 50000 --niche "AI"
    python3 competitor_analyzer.py --import data.json
"""

import argparse
import json
import sys
from dataclasses import dataclass, field, asdict


@dataclass
class CompetitorProfile:
    handle: str
    followers: int = 0
    following: int = 0
    posts_per_week: float = 0
    avg_likes: float = 0
    avg_replies: float = 0
    avg_retweets: float = 0
    thread_frequency: str = ""  # daily, weekly, rarely
    top_topics: list = field(default_factory=list)
    content_mix: dict = field(default_factory=dict)  # format: percentage
    posting_times: list = field(default_factory=list)
    bio: str = ""
    notes: str = ""


@dataclass
class CompetitiveInsight:
    category: str
    finding: str
    opportunity: str
    priority: str  # HIGH, MEDIUM, LOW


def calculate_engagement_rate(profile: CompetitorProfile) -> float:
    if profile.followers <= 0:
        return 0.0
    total_engagement = profile.avg_likes + profile.avg_replies + profile.avg_retweets
    return (total_engagement / profile.followers) * 100

def analyze_competitors(competitors: list) -> list:
    insights = []

    # Engagement comparison
    engagement_rates = []
    for c in competitors:
        er = calculate_engagement_rate(c)
        engagement_rates.append((c.handle, er))

    if engagement_rates:
        top = max(engagement_rates, key=lambda x: x[1])
        if top[1] > 0:
            insights.append(CompetitiveInsight(
                "Engagement", f"Highest engagement: {top[0]} ({top[1]:.2f}%)",
                "Study their top posts — what format and topics drive replies?",
                "HIGH"
            ))

    # Posting frequency
    frequencies = [(c.handle, c.posts_per_week) for c in competitors if c.posts_per_week > 0]
    if frequencies:
        avg_freq = sum(f for _, f in frequencies) / len(frequencies)
        insights.append(CompetitiveInsight(
            "Frequency", f"Average posting: {avg_freq:.0f}/week across competitors",
            f"Match or exceed {avg_freq:.0f} posts/week to compete for mindshare",
            "HIGH"
        ))

    # Thread usage
    thread_users = [c.handle for c in competitors if c.thread_frequency in ("daily", "weekly")]
    if thread_users:
        insights.append(CompetitiveInsight(
            "Format", f"Active thread users: {', '.join(thread_users)}",
            "Threads are a proven growth lever in your niche. Publish 2-3/week minimum.",
            "HIGH"
        ))

    # Reply engagement
    reply_heavy = [(c.handle, c.avg_replies) for c in competitors if c.avg_replies > c.avg_likes * 0.3]
    if reply_heavy:
        names = [h for h, _ in reply_heavy]
        insights.append(CompetitiveInsight(
            "Community", f"High reply ratios: {', '.join(names)}",
            "These accounts build community through conversation. Ask more questions in your tweets.",
            "MEDIUM"
        ))

    # Follower/following ratio
    for c in competitors:
        if c.followers > 0 and c.following > 0:
            ratio = c.followers / c.following
            if ratio > 10:
                insights.append(CompetitiveInsight(
                    "Authority", f"{c.handle} has {ratio:.0f}x follower/following ratio",
                    "Strong authority signal — they attract followers without follow-backs",
                    "LOW"
                ))

    # Topic gaps
    all_topics = []
    for c in competitors:
        all_topics.extend(c.top_topics)

    if all_topics:
        from collections import Counter
        common = Counter(all_topics).most_common(5)
        insights.append(CompetitiveInsight(
            "Topics", f"Most covered topics: {', '.join(t for t, _ in common)}",
            "Cover these topics to compete, but find unique angles. What are they NOT covering?",
            "MEDIUM"
        ))

    return insights

def print_report(competitors: list, insights: list):
    print(f"\n{'='*70}")
    print("  COMPETITIVE ANALYSIS REPORT")
    print(f"{'='*70}")

    # Profile summary table
    print(f"\n  {'Handle':<20} {'Followers':>10} {'Posts/wk':>10} {'Eng Rate':>10}")
    print(f"  {'─'*20} {'─'*10} {'─'*10} {'─'*10}")
    for c in competitors:
        er = calculate_engagement_rate(c)
        print(f"  {c.handle:<20} {c.followers:>10,} {c.posts_per_week:>10.0f} {er:>9.2f}%")

    # Insights
    if insights:
        print(f"\n  {'─'*66}")
        print("  KEY INSIGHTS\n")

        priority_order = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}
        sorted_insights = sorted(insights, key=lambda x: priority_order.get(x.priority, 3))

        for i in sorted_insights:
            icon = {"HIGH": "🔴", "MEDIUM": "🟡", "LOW": "⚪"}.get(i.priority, "❓")
            print(f"  {icon} [{i.category}] {i.finding}")
            print(f"     → {i.opportunity}")
            print()

    # Action items
    print(f"  {'─'*66}")
    print("  NEXT STEPS\n")
    print("  1. Search each competitor's profile on X — note their pinned tweet and bio")
    print("  2. Read their last 20 posts — categorize by format and topic")
    print("  3. Identify their top 3 performing posts — what made them work?")
    print("  4. Find gaps — what topics do they NOT cover that you can own?")
    print("  5. Set engagement targets based on their metrics as benchmarks")
    print(f"\n{'='*70}\n")

def main():
    parser = argparse.ArgumentParser(
        description="Analyze X/Twitter competitors for content strategy insights",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s --handles @user1 @user2
  %(prog)s --import competitors.json

JSON format for --import:
  [{"handle": "@user1", "followers": 50000, "posts_per_week": 14, ...}]
""")

    parser.add_argument("--handles", nargs="+", default=[], help="Competitor handles")
    parser.add_argument("--import", dest="import_file", help="Import from JSON file")
    parser.add_argument("--json", action="store_true", help="Output JSON")

    args = parser.parse_args()

    competitors = []

    if args.import_file:
        with open(args.import_file) as f:
            data = json.load(f)
        for item in data:
            competitors.append(CompetitorProfile(**item))
    elif args.handles:
        for handle in args.handles:
            if not handle.startswith("@"):
                handle = f"@{handle}"
            competitors.append(CompetitorProfile(handle=handle))

    # Check for empty input first; otherwise the no-data branch below would
    # swallow the error (all() is True for an empty list).
    if not competitors:
        print("Error: provide --handles or --import", file=sys.stderr)
        sys.exit(1)

    if all(c.followers == 0 for c in competitors):
        print(f"\n  ℹ️  Handles registered: {', '.join(c.handle for c in competitors)}")
        print("  To get full analysis, provide data via JSON import:")
        print("  1. Research each profile on X")
        print("  2. Create a JSON file with follower counts, posting frequency, etc.")
        print(f"  3. Run: {sys.argv[0]} --import data.json")
        print("\n  Example JSON:")
        example = [asdict(CompetitorProfile(
            handle="@example",
            followers=25000,
            following=1200,
            posts_per_week=14,
            avg_likes=150,
            avg_replies=30,
            avg_retweets=20,
            thread_frequency="weekly",
            top_topics=["AI", "startups", "engineering"],
        ))]
        print(f"  {json.dumps(example, indent=2)}")
        print()
        return

    insights = analyze_competitors(competitors)

    if args.json:
        print(json.dumps({
            "competitors": [asdict(c) for c in competitors],
            "insights": [asdict(i) for i in insights],
        }, indent=2))
    else:
        print_report(competitors, insights)


if __name__ == "__main__":
    main()
210
marketing-skill/x-twitter-growth/scripts/content_planner.py
Normal file
@@ -0,0 +1,210 @@
#!/usr/bin/env python3
"""
X/Twitter Content Planner — Generate weekly posting calendars.

Creates structured content plans with topic suggestions, format mix,
optimal posting times, and engagement targets.

Usage:
    python3 content_planner.py --niche "AI engineering" --frequency 5 --weeks 2
    python3 content_planner.py --niche "SaaS growth" --frequency 3 --weeks 1 --json
"""

import argparse
import json
from datetime import datetime, timedelta
from dataclasses import dataclass, field, asdict

CONTENT_FORMATS = {
    "atomic_tweet": {"growth_weight": 0.3, "effort": "low", "description": "Single tweet — observation, tip, or hot take"},
    "thread": {"growth_weight": 0.35, "effort": "high", "description": "5-12 tweet deep dive — highest reach potential"},
    "question": {"growth_weight": 0.15, "effort": "low", "description": "Engagement bait — drives replies"},
    "quote_tweet": {"growth_weight": 0.10, "effort": "low", "description": "Add value to someone else's content"},
    "reply_session": {"growth_weight": 0.10, "effort": "medium", "description": "30 min focused engagement on target accounts"},
}

OPTIMAL_TIMES = {
    "weekday": ["07:00-08:00", "12:00-13:00", "17:00-18:00", "20:00-21:00"],
    "weekend": ["09:00-10:00", "14:00-15:00", "19:00-20:00"],
}

TOPIC_ANGLES = [
    "Lessons learned (personal experience)",
    "Framework/system breakdown",
    "Tool recommendation (with honest take)",
    "Myth busting (challenge common belief)",
    "Behind the scenes (process, workflow)",
    "Industry trend analysis",
    "Beginner guide (explain like I'm 5)",
    "Comparison (X vs Y — which is better?)",
    "Prediction (what's coming next)",
    "Case study (real example with numbers)",
    "Mistake I made (vulnerability + lesson)",
    "Quick tip (tactical, immediately useful)",
    "Controversial take (spicy but defensible)",
    "Curated list (best resources, tools, accounts)",
]

@dataclass
class DayPlan:
    date: str
    day_of_week: str
    posts: list = field(default_factory=list)
    engagement_target: str = ""


@dataclass
class PostSlot:
    time: str
    format: str
    topic_angle: str
    topic_suggestion: str
    notes: str = ""


@dataclass
class WeekPlan:
    week_number: int
    start_date: str
    end_date: str
    days: list = field(default_factory=list)
    thread_count: int = 0
    total_posts: int = 0
    focus_theme: str = ""

def generate_plan(niche: str, posts_per_day: int, weeks: int, start_date: datetime) -> list:
    plans = []
    angle_idx = 0

    for week in range(weeks):
        week_start = start_date + timedelta(weeks=week)
        week_end = week_start + timedelta(days=6)

        week_plan = WeekPlan(
            week_number=week + 1,
            start_date=week_start.strftime("%Y-%m-%d"),
            end_date=week_end.strftime("%Y-%m-%d"),
            focus_theme=TOPIC_ANGLES[week % len(TOPIC_ANGLES)],
        )

        for day in range(7):
            current = week_start + timedelta(days=day)
            day_name = current.strftime("%A")
            is_weekend = day >= 5

            times = OPTIMAL_TIMES["weekend" if is_weekend else "weekday"]
            actual_posts = max(1, posts_per_day - (1 if is_weekend else 0))

            day_plan = DayPlan(
                date=current.strftime("%Y-%m-%d"),
                day_of_week=day_name,
                engagement_target="15 min reply session" if is_weekend else "30 min reply session",
            )

            for p in range(actual_posts):
                # Determine format based on day position
                if day in (1, 3) and p == 0:  # Tue/Thu first slot = thread
                    fmt = "thread"
                elif p == actual_posts - 1 and not is_weekend:
                    fmt = "question"  # Last post = engagement driver
                elif day == 4 and p == 0:  # Friday first = quote tweet
                    fmt = "quote_tweet"
                else:
                    fmt = "atomic_tweet"

                angle = TOPIC_ANGLES[angle_idx % len(TOPIC_ANGLES)]
                angle_idx += 1

                slot = PostSlot(
                    time=times[p % len(times)],
                    format=fmt,
                    topic_angle=angle,
                    topic_suggestion=f"{angle} about {niche}",
                    notes="Pin if performs well" if fmt == "thread" else "",
                )
                day_plan.posts.append(asdict(slot))

                if fmt == "thread":
                    week_plan.thread_count += 1
                week_plan.total_posts += 1

            week_plan.days.append(asdict(day_plan))

        plans.append(asdict(week_plan))

    return plans

def print_plan(plans: list, niche: str):
    print(f"\n{'='*70}")
    print(f"  X/TWITTER CONTENT PLAN — {niche.upper()}")
    print(f"{'='*70}")

    for week in plans:
        print(f"\n  WEEK {week['week_number']} ({week['start_date']} to {week['end_date']})")
        print(f"  Theme: {week['focus_theme']}")
        print(f"  Posts: {week['total_posts']} | Threads: {week['thread_count']}")
        print(f"  {'─'*66}")

        for day in week['days']:
            print(f"\n  {day['day_of_week']:9} {day['date']}")
            for post in day['posts']:
                fmt_icon = {
                    "thread": "🧵",
                    "atomic_tweet": "💬",
                    "question": "❓",
                    "quote_tweet": "🔄",
                    "reply_session": "💬",
                }.get(post['format'], "📝")

                print(f"    {fmt_icon} {post['time']:12} [{post['format']:<14}] {post['topic_angle']}")
                if post['notes']:
                    print(f"       ℹ️  {post['notes']}")

            print(f"    📊 Engagement: {day['engagement_target']}")

    print(f"\n{'='*70}")
    print("  WEEKLY TARGETS")
    print("  • Reply to 10+ accounts in your niche daily")
    print("  • Quote tweet 2-3 relevant posts per week")
    print("  • Update pinned tweet if a thread outperforms current pin")
    print("  • Review analytics every Sunday — double down on what works")
    print(f"{'='*70}\n")

def main():
    parser = argparse.ArgumentParser(
        description="Generate X/Twitter content calendars",
        formatter_class=argparse.RawDescriptionHelpFormatter)

    parser.add_argument("--niche", required=True, help="Your content niche")
    parser.add_argument("--frequency", type=int, default=3, help="Posts per day (default: 3)")
    parser.add_argument("--weeks", type=int, default=2, help="Weeks to plan (default: 2)")
    parser.add_argument("--start", default="", help="Start date YYYY-MM-DD (default: next Monday)")
    parser.add_argument("--json", action="store_true", help="Output JSON")

    args = parser.parse_args()

    if args.start:
        start = datetime.strptime(args.start, "%Y-%m-%d")
    else:
        today = datetime.now()
        days_until_monday = (7 - today.weekday()) % 7
        if days_until_monday == 0:
            days_until_monday = 7
        start = today + timedelta(days=days_until_monday)

    plans = generate_plan(args.niche, args.frequency, args.weeks, start)

    if args.json:
        print(json.dumps(plans, indent=2))
    else:
        print_plan(plans, args.niche)


if __name__ == "__main__":
    main()
259
marketing-skill/x-twitter-growth/scripts/growth_tracker.py
Normal file
@@ -0,0 +1,259 @@
#!/usr/bin/env python3
"""
X/Twitter Growth Tracker — Track and analyze account growth over time.

Stores periodic snapshots of account metrics and calculates growth trends,
engagement patterns, and milestone projections.

Usage:
    python3 growth_tracker.py --record --handle @user --followers 5200 --eng-rate 2.1
    python3 growth_tracker.py --report --handle @user
    python3 growth_tracker.py --report --handle @user --period 30d --json
    python3 growth_tracker.py --milestone --handle @user --target 10000
"""

import argparse
import json
import os
import sys
from datetime import datetime, timedelta
from pathlib import Path

DATA_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", ".growth-data")

def get_data_file(handle: str) -> str:
    clean = handle.lstrip("@").lower()
    os.makedirs(DATA_DIR, exist_ok=True)
    return os.path.join(DATA_DIR, f"{clean}.jsonl")


def record_snapshot(handle: str, followers: int, following: int = 0,
                    eng_rate: float = 0, posts_week: float = 0, notes: str = ""):
    entry = {
        "timestamp": datetime.now().isoformat(),
        "handle": handle,
        "followers": followers,
        "following": following,
        "engagement_rate": eng_rate,
        "posts_per_week": posts_week,
        "notes": notes,
    }

    filepath = get_data_file(handle)
    with open(filepath, "a") as f:
        f.write(json.dumps(entry) + "\n")

    return entry

def load_snapshots(handle: str, period_days: int = 0) -> list:
    filepath = get_data_file(handle)
    if not os.path.exists(filepath):
        return []

    entries = []
    cutoff = None
    if period_days > 0:
        cutoff = datetime.now() - timedelta(days=period_days)

    with open(filepath) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            if cutoff:
                ts = datetime.fromisoformat(entry["timestamp"])
                if ts < cutoff:
                    continue
            entries.append(entry)

    return entries

def generate_report(handle: str, entries: list) -> dict:
|
||||
if not entries:
|
||||
return {"handle": handle, "error": "No data found"}
|
||||
|
||||
report = {
|
||||
"handle": handle,
|
||||
"data_points": len(entries),
|
||||
"first_record": entries[0]["timestamp"],
|
||||
"last_record": entries[-1]["timestamp"],
|
||||
"current_followers": entries[-1]["followers"],
|
||||
}
|
||||
|
||||
if len(entries) >= 2:
|
||||
first = entries[0]
|
||||
last = entries[-1]
|
||||
|
||||
follower_change = last["followers"] - first["followers"]
|
||||
days_span = (datetime.fromisoformat(last["timestamp"]) -
|
||||
datetime.fromisoformat(first["timestamp"])).days
|
||||
days_span = max(days_span, 1)
|
||||
|
||||
report["follower_change"] = follower_change
|
||||
report["days_tracked"] = days_span
|
||||
report["daily_growth"] = round(follower_change / days_span, 1)
|
||||
report["weekly_growth"] = round((follower_change / days_span) * 7, 1)
|
||||
report["monthly_projection"] = round((follower_change / days_span) * 30)
|
||||
|
||||
if first["followers"] > 0:
|
||||
pct_change = ((last["followers"] - first["followers"]) / first["followers"]) * 100
|
||||
report["growth_percent"] = round(pct_change, 1)
|
||||
|
||||
# Engagement trend
|
||||
eng_rates = [e["engagement_rate"] for e in entries if e.get("engagement_rate", 0) > 0]
|
||||
if len(eng_rates) >= 2:
|
||||
mid = len(eng_rates) // 2
|
||||
first_half_avg = sum(eng_rates[:mid]) / mid
|
||||
second_half_avg = sum(eng_rates[mid:]) / (len(eng_rates) - mid)
|
||||
report["engagement_trend"] = "improving" if second_half_avg > first_half_avg else "declining"
|
||||
report["avg_engagement_rate"] = round(sum(eng_rates) / len(eng_rates), 2)
|
||||
|
||||
return report
|
||||
|
||||
|
||||
def project_milestone(handle: str, entries: list, target: int) -> dict:
|
||||
if len(entries) < 2:
|
||||
return {"error": "Need at least 2 data points for projection"}
|
||||
|
||||
current = entries[-1]["followers"]
|
||||
if current >= target:
|
||||
return {"handle": handle, "target": target, "status": "Already reached!"}
|
||||
|
||||
first = entries[0]
|
||||
last = entries[-1]
|
||||
days_span = (datetime.fromisoformat(last["timestamp"]) -
|
||||
datetime.fromisoformat(first["timestamp"])).days
|
||||
days_span = max(days_span, 1)
|
||||
|
||||
daily_growth = (last["followers"] - first["followers"]) / days_span
|
||||
|
||||
if daily_growth <= 0:
|
||||
return {"handle": handle, "target": target, "status": "Not growing — can't project",
|
||||
"daily_growth": round(daily_growth, 1)}
|
||||
|
||||
remaining = target - current
|
||||
days_needed = remaining / daily_growth
|
||||
target_date = datetime.now() + timedelta(days=days_needed)
|
||||
|
||||
return {
|
||||
"handle": handle,
|
||||
"current": current,
|
||||
"target": target,
|
||||
"remaining": remaining,
|
||||
"daily_growth": round(daily_growth, 1),
|
||||
"days_needed": round(days_needed),
|
||||
"projected_date": target_date.strftime("%Y-%m-%d"),
|
||||
}
|
||||
|
||||
|
||||
def print_report(report: dict):
|
||||
print(f"\n{'='*60}")
|
||||
print(f" GROWTH REPORT — {report['handle']}")
|
||||
print(f"{'='*60}")
|
||||
|
||||
if "error" in report:
|
||||
print(f"\n ⚠️ {report['error']}")
|
||||
print(f" Record data first: python3 growth_tracker.py --record --handle {report['handle']} --followers N")
|
||||
print()
|
||||
return
|
||||
|
||||
print(f"\n Current followers: {report['current_followers']:,}")
|
||||
print(f" Data points: {report['data_points']}")
|
||||
print(f" Tracking since: {report['first_record'][:10]}")
|
||||
|
||||
if "follower_change" in report:
|
||||
change_icon = "📈" if report["follower_change"] > 0 else "📉" if report["follower_change"] < 0 else "➡️"
|
||||
print(f"\n {change_icon} Change: {report['follower_change']:+,} followers over {report['days_tracked']} days")
|
||||
print(f" Daily avg: {report.get('daily_growth', 0):+.1f}/day")
|
||||
print(f" Weekly avg: {report.get('weekly_growth', 0):+.1f}/week")
|
||||
print(f" 30-day projection: {report.get('monthly_projection', 0):+,}")
|
||||
|
||||
if "growth_percent" in report:
|
||||
print(f" Growth rate: {report['growth_percent']:+.1f}%")
|
||||
|
||||
if "engagement_trend" in report:
|
||||
trend_icon = "📈" if report["engagement_trend"] == "improving" else "📉"
|
||||
print(f" Engagement: {trend_icon} {report['engagement_trend']} (avg {report['avg_engagement_rate']}%)")
|
||||
|
||||
print(f"\n{'='*60}\n")
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Track X/Twitter account growth over time",
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter)
|
||||
|
||||
parser.add_argument("--record", action="store_true", help="Record a new snapshot")
|
||||
parser.add_argument("--report", action="store_true", help="Generate growth report")
|
||||
parser.add_argument("--milestone", action="store_true", help="Project when target will be reached")
|
||||
|
||||
parser.add_argument("--handle", required=True, help="X handle")
|
||||
parser.add_argument("--followers", type=int, default=0, help="Current follower count")
|
||||
parser.add_argument("--following", type=int, default=0, help="Current following count")
|
||||
parser.add_argument("--eng-rate", type=float, default=0, help="Current engagement rate (pct)")
|
||||
parser.add_argument("--posts-week", type=float, default=0, help="Posts per week")
|
||||
parser.add_argument("--notes", default="", help="Notes for this snapshot")
|
||||
parser.add_argument("--period", default="all", help="Report period: 7d, 30d, 90d, all")
|
||||
parser.add_argument("--target", type=int, default=0, help="Follower milestone target")
|
||||
parser.add_argument("--json", action="store_true", help="Output JSON")
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if not args.handle.startswith("@"):
|
||||
args.handle = f"@{args.handle}"
|
||||
|
||||
if args.record:
|
||||
if args.followers <= 0:
|
||||
print("Error: --followers required for recording", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
entry = record_snapshot(args.handle, args.followers, args.following,
|
||||
args.eng_rate, args.posts_week, args.notes)
|
||||
if args.json:
|
||||
print(json.dumps(entry, indent=2))
|
||||
else:
|
||||
print(f" ✅ Recorded: {args.handle} — {args.followers:,} followers")
|
||||
print(f" File: {get_data_file(args.handle)}")
|
||||
|
||||
elif args.report:
|
||||
period_days = 0
|
||||
if args.period != "all":
|
||||
period_days = int(args.period.rstrip("d"))
|
||||
entries = load_snapshots(args.handle, period_days)
|
||||
report = generate_report(args.handle, entries)
|
||||
if args.json:
|
||||
print(json.dumps(report, indent=2))
|
||||
else:
|
||||
print_report(report)
|
||||
|
||||
elif args.milestone:
|
||||
if args.target <= 0:
|
||||
print("Error: --target required for milestone projection", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
entries = load_snapshots(args.handle)
|
||||
result = project_milestone(args.handle, entries, args.target)
|
||||
if args.json:
|
||||
print(json.dumps(result, indent=2))
|
||||
else:
|
||||
if "error" in result:
|
||||
print(f" ⚠️ {result['error']}")
|
||||
elif "status" in result and "days_needed" not in result:
|
||||
print(f" 🎉 {result['status']}")
|
||||
else:
|
||||
print(f"\n 🎯 Milestone Projection: {result['handle']}")
|
||||
print(f" Current: {result['current']:,}")
|
||||
print(f" Target: {result['target']:,}")
|
||||
print(f" Gap: {result['remaining']:,}")
|
||||
print(f" Growth: {result['daily_growth']:+.1f}/day")
|
||||
print(f" ETA: {result['projected_date']} (~{result['days_needed']} days)")
|
||||
print()
|
||||
|
||||
else:
|
||||
parser.print_help()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
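The milestone projection in `project_milestone` is plain linear extrapolation: average daily growth from the first and last snapshots, then remaining followers divided by that rate. A minimal standalone sketch of the same arithmetic (the follower counts here are illustrative, not real data):

```python
# Two hypothetical snapshots taken 30 days apart (made-up numbers).
first_followers, last_followers, days_span = 4_000, 5_200, 30
target = 10_000

# Same math as project_milestone: average daily gain, then days to close the gap.
daily_growth = (last_followers - first_followers) / days_span
remaining = target - last_followers
days_needed = remaining / daily_growth

print(round(daily_growth, 1), remaining, round(days_needed))  # → 40.0 4800 120
```

Like the script, this assumes growth stays linear, so projections far beyond the tracked window should be treated as rough estimates.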
294
marketing-skill/x-twitter-growth/scripts/profile_auditor.py
Normal file
@@ -0,0 +1,294 @@
#!/usr/bin/env python3
"""
X/Twitter Profile Auditor — Audit any X profile for growth readiness.

Checks bio quality, pinned tweet, posting patterns, and provides
actionable recommendations. Works without API access by analyzing
profile data you provide or scraping public info via web search.

Usage:
    python3 profile_auditor.py --handle @username
    python3 profile_auditor.py --handle @username --json
    python3 profile_auditor.py --bio "current bio text" --followers 5000 --posts-per-week 10
"""

import argparse
import json
import re
from dataclasses import dataclass, field, asdict


@dataclass
class ProfileData:
    handle: str = ""
    bio: str = ""
    followers: int = 0
    following: int = 0
    posts_per_week: float = 0
    reply_ratio: float = 0  # fraction of posts that are replies
    thread_ratio: float = 0  # fraction of posts that are threads
    has_pinned: bool = False
    pinned_age_days: int = 0
    has_link: bool = False
    has_newsletter: bool = False
    avg_engagement_rate: float = 0  # (likes + replies + RTs) / followers


@dataclass
class AuditFinding:
    area: str
    status: str  # GOOD, WARN, CRITICAL
    message: str
    fix: str = ""


@dataclass
class AuditReport:
    handle: str
    score: int = 0
    max_score: int = 100
    grade: str = ""
    findings: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)


def audit_bio(profile: ProfileData) -> list:
    findings = []
    bio = profile.bio.strip()

    if not bio:
        findings.append(AuditFinding("Bio", "CRITICAL", "No bio provided for audit",
                                     "Provide bio text with --bio flag"))
        return findings

    # Length check
    if len(bio) < 30:
        findings.append(AuditFinding("Bio", "WARN", f"Bio too short ({len(bio)} chars)",
                                     "Aim for 100-160 characters with clear value prop"))
    elif len(bio) > 160:
        findings.append(AuditFinding("Bio", "WARN", f"Bio may be too long ({len(bio)} chars)",
                                     "Keep under 160 chars for readability"))
    else:
        findings.append(AuditFinding("Bio", "GOOD", f"Bio length OK ({len(bio)} chars)"))

    # Hashtag check
    hashtags = re.findall(r'#\w+', bio)
    if hashtags:
        findings.append(AuditFinding("Bio", "WARN", f"Hashtags in bio ({', '.join(hashtags)})",
                                     "Remove hashtags — signals amateur. Use plain text."))
    else:
        findings.append(AuditFinding("Bio", "GOOD", "No hashtags in bio"))

    # Buzzword check
    buzzwords = ['entrepreneur', 'guru', 'ninja', 'rockstar', 'visionary', 'hustler',
                 'thought leader', 'serial entrepreneur', 'dreamer', 'doer']
    found = [bw for bw in buzzwords if bw.lower() in bio.lower()]
    if found:
        findings.append(AuditFinding("Bio", "WARN", f"Buzzwords detected: {', '.join(found)}",
                                     "Replace with specific, concrete descriptions of what you do"))

    # Specificity check — pipes and slashes often signal unfocused bios
    if bio.count('|') >= 3 or bio.count('/') >= 3:
        findings.append(AuditFinding("Bio", "WARN", "Bio may lack focus (too many roles/identities)",
                                     "Lead with ONE clear identity. What's the #1 thing you want to be known for?"))

    # Social proof check
    proof_patterns = [r'\d+[kKmM]?\+?\s*(followers|subscribers|readers|users|customers)',
                      r'(founder|ceo|cto|vp|head|director|lead)\s+(of|at|@)',
                      r'(author|writer)\s+of', r'featured\s+in', r'ex-\w+']
    has_proof = any(re.search(p, bio, re.IGNORECASE) for p in proof_patterns)
    if has_proof:
        findings.append(AuditFinding("Bio", "GOOD", "Social proof detected"))
    else:
        findings.append(AuditFinding("Bio", "WARN", "No obvious social proof in bio",
                                     "Add a credential: title, metric, brand association, or achievement"))

    # CTA/link check
    if profile.has_link:
        findings.append(AuditFinding("Bio", "GOOD", "Profile has a link"))
    else:
        findings.append(AuditFinding("Bio", "WARN", "No link in profile",
                                     "Add a link to newsletter, product, or portfolio"))

    return findings


def audit_activity(profile: ProfileData) -> list:
    findings = []

    # Posting frequency
    if profile.posts_per_week <= 0:
        findings.append(AuditFinding("Activity", "CRITICAL", "No posting data provided",
                                     "Provide --posts-per-week estimate"))
    elif profile.posts_per_week < 3:
        findings.append(AuditFinding("Activity", "CRITICAL",
                                     f"Very low posting ({profile.posts_per_week:.0f}/week)",
                                     "Minimum 7 posts/week (1/day). Aim for 14-21."))
    elif profile.posts_per_week < 7:
        findings.append(AuditFinding("Activity", "WARN",
                                     f"Low posting ({profile.posts_per_week:.0f}/week)",
                                     "Aim for 2-3 posts per day for consistent growth"))
    elif profile.posts_per_week < 21:
        findings.append(AuditFinding("Activity", "GOOD",
                                     f"Good posting cadence ({profile.posts_per_week:.0f}/week)"))
    else:
        findings.append(AuditFinding("Activity", "GOOD",
                                     f"High posting cadence ({profile.posts_per_week:.0f}/week)"))

    # Reply ratio
    if profile.reply_ratio > 0:
        if profile.reply_ratio < 0.2:
            findings.append(AuditFinding("Activity", "WARN",
                                         f"Low reply ratio ({profile.reply_ratio:.0%})",
                                         "Aim for 30%+ replies. Engage with others, don't just broadcast."))
        elif profile.reply_ratio >= 0.3:
            findings.append(AuditFinding("Activity", "GOOD",
                                         f"Healthy reply ratio ({profile.reply_ratio:.0%})"))

    # Follower/following ratio
    if profile.followers > 0 and profile.following > 0:
        ratio = profile.followers / profile.following
        if ratio < 0.5:
            findings.append(AuditFinding("Profile", "WARN",
                                         f"Low follower/following ratio ({ratio:.1f}x)",
                                         "Unfollow inactive accounts. Ratio should trend toward 2:1+"))
        elif ratio >= 2:
            findings.append(AuditFinding("Profile", "GOOD",
                                         f"Healthy follower/following ratio ({ratio:.1f}x)"))

    # Pinned tweet
    if profile.has_pinned:
        if profile.pinned_age_days > 30:
            findings.append(AuditFinding("Profile", "WARN",
                                         f"Pinned tweet is {profile.pinned_age_days} days old",
                                         "Update pinned tweet monthly with your latest best content"))
        else:
            findings.append(AuditFinding("Profile", "GOOD", "Pinned tweet is recent"))
    else:
        findings.append(AuditFinding("Profile", "WARN", "No pinned tweet",
                                     "Pin your best-performing tweet or thread. It's your landing page."))

    return findings


def calculate_score(findings: list) -> tuple:
    total = len(findings)
    if total == 0:
        return 0, "F"

    good = sum(1 for f in findings if f.status == "GOOD")
    score = int((good / total) * 100)

    if score >= 90:
        grade = "A"
    elif score >= 75:
        grade = "B"
    elif score >= 60:
        grade = "C"
    elif score >= 40:
        grade = "D"
    else:
        grade = "F"

    return score, grade


def generate_recommendations(findings: list, profile: ProfileData) -> list:
    recs = []
    criticals = [f for f in findings if f.status == "CRITICAL"]
    warns = [f for f in findings if f.status == "WARN"]

    for f in criticals:
        if f.fix:
            recs.append(f"🔴 {f.fix}")

    for f in warns[:3]:  # Top 3 warnings
        if f.fix:
            recs.append(f"🟡 {f.fix}")

    # Stage-specific advice
    if profile.followers < 1000:
        recs.append("📈 Growth phase: Focus 70% on replies to larger accounts, 30% on your own posts")
    elif profile.followers < 10000:
        recs.append("📈 Momentum phase: 2-3 threads/week + daily engagement. Start a recurring series.")
    else:
        recs.append("📈 Scale phase: Leverage audience with cross-platform repurposing + newsletter growth")

    return recs


def main():
    parser = argparse.ArgumentParser(
        description="Audit an X/Twitter profile for growth readiness",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s --handle @rezarezvani --bio "CTO building AI products" --followers 5000
  %(prog)s --bio "Entrepreneur | Dreamer | Hustle" --followers 200 --posts-per-week 3
  %(prog)s --handle @example --followers 50000 --posts-per-week 21 --reply-ratio 0.4 --json
""")

    parser.add_argument("--handle", default="@unknown", help="X handle")
    parser.add_argument("--bio", default="", help="Current bio text")
    parser.add_argument("--followers", type=int, default=0, help="Follower count")
    parser.add_argument("--following", type=int, default=0, help="Following count")
    parser.add_argument("--posts-per-week", type=float, default=0, help="Average posts per week")
    parser.add_argument("--reply-ratio", type=float, default=0, help="Fraction of posts that are replies (0-1)")
    parser.add_argument("--has-pinned", action="store_true", help="Has a pinned tweet")
    parser.add_argument("--pinned-age-days", type=int, default=0, help="Age of pinned tweet in days")
    parser.add_argument("--has-link", action="store_true", help="Has link in profile")
    parser.add_argument("--json", action="store_true", help="Output JSON")

    args = parser.parse_args()

    profile = ProfileData(
        handle=args.handle,
        bio=args.bio,
        followers=args.followers,
        following=args.following,
        posts_per_week=args.posts_per_week,
        reply_ratio=args.reply_ratio,
        has_pinned=args.has_pinned,
        pinned_age_days=args.pinned_age_days,
        has_link=args.has_link,
    )

    findings = audit_bio(profile) + audit_activity(profile)
    score, grade = calculate_score(findings)
    recs = generate_recommendations(findings, profile)

    report = AuditReport(
        handle=profile.handle,
        score=score,
        grade=grade,
        findings=[asdict(f) for f in findings],
        recommendations=recs,
    )

    if args.json:
        print(json.dumps(asdict(report), indent=2))
    else:
        print(f"\n{'='*60}")
        print(f" X PROFILE AUDIT — {report.handle}")
        print(f"{'='*60}")
        print(f"\n Score: {report.score}/100 (Grade: {report.grade})\n")

        for f in findings:
            icon = {"GOOD": "✅", "WARN": "⚠️", "CRITICAL": "🔴"}.get(f.status, "❓")
            print(f" {icon} [{f.area}] {f.message}")
            if f.fix and f.status != "GOOD":
                print(f"    → {f.fix}")

        if recs:
            print(f"\n {'─'*56}")
            print(" TOP RECOMMENDATIONS\n")
            for i, r in enumerate(recs, 1):
                print(f" {i}. {r}")

        print(f"\n{'='*60}\n")


if __name__ == "__main__":
    main()
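`calculate_score` grades purely on the share of GOOD findings, with integer truncation before the letter-grade cutoffs. A condensed sketch of the same rounding and thresholds (the helper name `score_and_grade` is mine, not from the script):

```python
def score_and_grade(good: int, total: int) -> tuple:
    """Mirror the auditor's scoring: truncated percent of GOOD findings, then cutoffs."""
    if total == 0:
        return 0, "F"
    score = int((good / total) * 100)  # truncates, e.g. 71.4 -> 71
    for cutoff, grade in ((90, "A"), (75, "B"), (60, "C"), (40, "D")):
        if score >= cutoff:
            return score, grade
    return score, "F"

print(score_and_grade(5, 7))  # → (71, 'C')
```

Because WARN and CRITICAL both count equally against the score, a single CRITICAL finding weighs the same as a cosmetic WARN; keep that in mind when reading the grade.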
290
marketing-skill/x-twitter-growth/scripts/tweet_composer.py
Normal file
@@ -0,0 +1:290 @@
#!/usr/bin/env python3
"""
Tweet Composer — Generate structured tweets and threads with proven hook patterns.

Provides templates, character counting, thread formatting, and hook generation
for different content types. No API required — pure content scaffolding.

Usage:
    python3 tweet_composer.py --type tweet --topic "AI in healthcare"
    python3 tweet_composer.py --type thread --topic "lessons from scaling" --tweets 8
    python3 tweet_composer.py --type hooks --topic "startup mistakes" --count 10
    python3 tweet_composer.py --validate "your tweet text here"
"""

import argparse
import json
import re
import sys
from dataclasses import dataclass, field, asdict

MAX_TWEET_CHARS = 280

HOOK_PATTERNS = {
    "listicle": [
        "{n} {topic} that changed how I {verb}:",
        "The {n} biggest mistakes in {topic}:",
        "{n} {topic} most people don't know about:",
        "I spent {time} studying {topic}. Here are {n} lessons:",
        "{n} signs your {topic} needs work:",
    ],
    "contrarian": [
        "Unpopular opinion: {claim}",
        "Hot take: {claim}",
        "Everyone says {common_belief}. They're wrong.",
        "Stop {common_action}. Here's what to do instead:",
        "The {topic} advice you keep hearing is backwards.",
    ],
    "story": [
        "I {did_thing} and it completely changed my {outcome}.",
        "Last {timeframe}, I made a mistake with {topic}. Here's what happened:",
        "3 years ago I was {before_state}. Now I'm {after_state}. Here's the playbook:",
        "I almost {near_miss}. Then I discovered {topic}.",
        "The best {topic} advice I ever got came from {unexpected_source}.",
    ],
    "observation": [
        "{topic} is underrated. Here's why:",
        "Nobody talks about this part of {topic}:",
        "The gap between {thing_a} and {thing_b} is where the money is.",
        "If you're struggling with {topic}, you're probably {mistake}.",
        "The secret to {topic} isn't what you think.",
    ],
    "framework": [
        "The {name} framework for {topic} (save this):",
        "How to {outcome} in {timeframe} (step by step):",
        "{topic} explained in 60 seconds:",
        "The only {n} things that matter for {topic}:",
        "A simple system for {topic} that actually works:",
    ],
    "question": [
        "What's the most underrated {topic}?",
        "If you could only {do_one_thing} for {topic}, what would it be?",
        "What {topic} advice would you give your younger self?",
        "Real question: why do most people {common_mistake}?",
        "What's one {topic} that completely changed your perspective?",
    ],
}

# Reference template for manual thread planning (not rendered by the CLI below —
# generate_thread_outline builds the outline directly).
THREAD_STRUCTURE = """
Thread Outline: {topic}
==================================================

Tweet 1 (HOOK — most important):
  Pattern: {hook_pattern}
  Draft: {hook_draft}
  Chars: {hook_chars}/280

Tweet 2 (CONTEXT):
  Purpose: Set up why this matters
  Suggestion: "Here's what most people get wrong about {topic}:"
  OR: "I spent [time] learning this. Here's the breakdown:"

Tweets 3-{n} (BODY — one idea per tweet):
{body_suggestions}

Tweet {n_plus_1} (CLOSE):
  Purpose: Summarize + CTA
  Suggestion: "TL;DR:\\n\\n[3 bullet summary]\\n\\nFollow @handle for more on {topic}"

Reply to Tweet 1 (ENGAGEMENT BAIT):
  Purpose: Resurface the thread
  Suggestion: "What's your experience with {topic}? Drop it below 👇"
"""


@dataclass
class TweetDraft:
    text: str
    char_count: int
    over_limit: bool
    warnings: list = field(default_factory=list)


def validate_tweet(text: str) -> TweetDraft:
    """Validate a tweet and return analysis."""
    char_count = len(text)
    over_limit = char_count > MAX_TWEET_CHARS
    warnings = []

    if over_limit:
        warnings.append(f"Over limit by {char_count - MAX_TWEET_CHARS} characters")

    # Check for links in body
    if re.search(r'https?://\S+', text):
        warnings.append("Contains URL — consider moving link to reply (hurts reach)")

    # Check for hashtags
    hashtags = re.findall(r'#\w+', text)
    if len(hashtags) > 2:
        warnings.append(f"Too many hashtags ({len(hashtags)}) — max 1-2, ideally 0")
    elif len(hashtags) > 0:
        warnings.append(f"Has {len(hashtags)} hashtag(s) — consider removing for cleaner look")

    # Check for @mentions at start
    if text.startswith('@'):
        warnings.append("Starts with @ — will be treated as reply, not shown in timeline")

    # Readability
    lines = text.strip().split('\n')
    long_lines = [l for l in lines if len(l) > 70]
    if long_lines:
        warnings.append("Long unbroken lines — add line breaks for mobile readability")

    return TweetDraft(text=text, char_count=char_count, over_limit=over_limit, warnings=warnings)


def generate_hooks(topic: str, count: int = 10) -> list:
    """Generate hook variations for a topic."""
    # Generic placeholder fills — customize per topic for real drafts.
    fills = {
        "{topic}": topic,
        "{n}": "7",
        "{time}": "6 months",
        "{timeframe}": "month",
        "{claim}": f"{topic} is overrated",
        "{common_belief}": f"{topic} is simple",
        "{common_action}": f"overthinking {topic}",
        "{outcome}": "approach",
        "{verb}": "think",
        "{name}": "3-Step",
        "{did_thing}": f"changed my {topic} strategy",
        "{before_state}": "stuck",
        "{after_state}": "thriving",
        "{near_miss}": f"gave up on {topic}",
        "{unexpected_source}": "a complete beginner",
        "{thing_a}": "theory",
        "{thing_b}": "execution",
        "{mistake}": "overcomplicating it",
        "{common_mistake}": f"ignore {topic}",
        "{do_one_thing}": "change one thing",
    }

    hooks = []
    for pattern_type, patterns in HOOK_PATTERNS.items():
        for p in patterns:
            hook = p
            for placeholder, value in fills.items():
                hook = hook.replace(placeholder, value)
            hooks.append({"type": pattern_type, "hook": hook, "chars": len(hook)})
            if len(hooks) >= count:
                return hooks
    return hooks


def generate_thread_outline(topic: str, num_tweets: int = 8) -> str:
    """Generate a thread structure outline."""
    hooks = generate_hooks(topic, 3)
    best_hook = hooks[0]["hook"] if hooks else f"Everything I know about {topic}:"

    body = []
    suggestions = [
        "Key insight or surprising fact",
        "Common mistake people make",
        "The counterintuitive truth",
        "A practical example or case study",
        "The framework or system",
        "Implementation steps",
        "Results or evidence",
        "The nuance most people miss",
    ]

    for i, s in enumerate(suggestions[:num_tweets - 3], 3):
        body.append(f"  Tweet {i}: [{s}]")

    body_text = "\n".join(body)

    return f"""
{'='*60}
  THREAD OUTLINE: {topic}
{'='*60}

  Tweet 1 (HOOK):
    "{best_hook}"
    Chars: {len(best_hook)}/280

  Tweet 2 (CONTEXT):
    "Here's what most people get wrong about {topic}:"

{body_text}

  Tweet {num_tweets - 1} (CLOSE):
    "TL;DR:

     • [Key takeaway 1]
     • [Key takeaway 2]
     • [Key takeaway 3]

     Follow for more on {topic}"

  Reply to Tweet 1 (BOOST):
    "What's your biggest challenge with {topic}? 👇"

{'='*60}
  RULES:
  - Each tweet must stand alone (people read out of order)
  - Max 3-4 lines per tweet (mobile readability)
  - No filler tweets — cut anything that doesn't add value
  - Hook tweet determines 90% of thread performance
{'='*60}
"""


def main():
    parser = argparse.ArgumentParser(
        description="Generate tweets, threads, and hooks with proven patterns",
        formatter_class=argparse.RawDescriptionHelpFormatter)

    parser.add_argument("--type", choices=["tweet", "thread", "hooks", "validate"],
                        default="hooks", help="Content type to generate")
    parser.add_argument("--topic", default="", help="Topic for content generation")
    parser.add_argument("--tweets", type=int, default=8, help="Number of tweets in thread")
    parser.add_argument("--count", type=int, default=10, help="Number of hooks to generate")
    parser.add_argument("--validate", nargs="?", const="", help="Tweet text to validate")
    parser.add_argument("--json", action="store_true", help="Output JSON")

    args = parser.parse_args()

    if args.type == "validate" or args.validate is not None:
        text = args.validate or args.topic
        if not text:
            print("Error: provide tweet text to validate", file=sys.stderr)
            sys.exit(1)
        result = validate_tweet(text)
        if args.json:
            print(json.dumps(asdict(result), indent=2))
        else:
            icon = "🔴" if result.over_limit else "✅"
            print(f"\n {icon} {result.char_count}/{MAX_TWEET_CHARS} characters")
            if result.warnings:
                for w in result.warnings:
                    print(f" ⚠️ {w}")
            else:
                print(" No issues found.")
            print()

    elif args.type == "hooks":
        if not args.topic:
            print("Error: --topic required for hook generation", file=sys.stderr)
            sys.exit(1)
        hooks = generate_hooks(args.topic, args.count)
        if args.json:
            print(json.dumps(hooks, indent=2))
        else:
            print(f"\n{'='*60}")
            print(f" HOOK IDEAS: {args.topic}")
            print(f"{'='*60}\n")
            for i, h in enumerate(hooks, 1):
                print(f" {i:2d}. [{h['type']:<12}] {h['hook']}")
                print(f"      ({h['chars']} chars)")
            print()

    elif args.type == "thread":
        if not args.topic:
            print("Error: --topic required for thread generation", file=sys.stderr)
            sys.exit(1)
        outline = generate_thread_outline(args.topic, args.tweets)
        print(outline)

    elif args.type == "tweet":
        if not args.topic:
            print("Error: --topic required", file=sys.stderr)
            sys.exit(1)
        hooks = generate_hooks(args.topic, 5)
        print(f"\n 5 tweet drafts for: {args.topic}\n")
        for i, h in enumerate(hooks, 1):
            print(f" {i}. {h['hook']}")
            print(f"    ({h['chars']} chars)\n")


if __name__ == "__main__":
    main()
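The composer's `validate_tweet` relies on simple length and regex heuristics rather than real X API rules (note that X counts every URL as 23 characters, which plain `len()` does not model). A condensed sketch of the same checks, using thresholds matching the script; the helper name `quick_checks` is mine:

```python
import re

def quick_checks(text: str) -> list:
    """Flag the same issues tweet_composer warns about, as short codes."""
    warnings = []
    if len(text) > 280:                        # naive count; X counts URLs as 23 chars
        warnings.append("over_limit")
    if re.search(r'https?://\S+', text):       # links in the body tend to hurt reach
        warnings.append("url_in_body")
    if len(re.findall(r'#\w+', text)) > 2:     # more than 2 hashtags reads spammy
        warnings.append("too_many_hashtags")
    if text.startswith('@'):                   # leading @ makes it a reply
        warnings.append("leading_mention")
    return warnings

print(quick_checks("@someone check https://example.com #a #b #c"))
# → ['url_in_body', 'too_many_hashtags', 'leading_mention']
```

A clean tweet returns an empty list; anything else maps directly onto one of the script's warning messages.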