* feat: Skill Authoring Standard + Marketing Expansion plans
SKILL-AUTHORING-STANDARD.md — the DNA of every skill in this repo:
10 universal patterns codified from C-Suite innovations + Corey Haines' marketingskills patterns:
1. Context-First: check domain context, ask only for gaps
2. Practitioner Voice: expert persona, goal-oriented, not textbook
3. Multi-Mode Workflows: build from scratch / optimize existing / situation-specific
4. Related Skills Navigation: when to use, when NOT to, bidirectional
5. Reference Separation: SKILL.md lean (≤10KB), refs deep
6. Proactive Triggers: surface issues without being asked
7. Output Artifacts: request → specific deliverable mapping
8. Quality Loop: self-verify, confidence tagging
9. Communication Standard: bottom line first, structured output
10. Python Tools: stdlib-only, CLI-first, JSON output, sample data
Marketing expansion plans for 40-skill marketing division build.
* feat: marketing foundation — context + ops router + authoring standard
marketing-context/: Foundation skill every marketing skill reads first
- SKILL.md: 3 modes (auto-draft, guided interview, update)
- templates/marketing-context-template.md: 14 sections covering
product, audience, personas, pain points, competitive landscape,
differentiation, objections, switching dynamics, customer language
(verbatim), brand voice, style guide, proof points, SEO context, goals
- scripts/context_validator.py: Scores completeness 0-100, section-by-section
marketing-ops/: Central router for 40-skill marketing ecosystem
- Full routing matrix: 7 pods + cross-domain routing to 6 skills in
business-growth, product-team, engineering-team, c-level-advisor
- Campaign orchestration sequences (launch, content, CRO sprint)
- Quality gate matching C-Suite standard
- scripts/campaign_tracker.py: Campaign status tracking with progress,
overdue detection, pod coverage, blocker identification
SKILL-AUTHORING-STANDARD.md: Universal DNA for all skills
- 10 patterns: context-first, practitioner voice, multi-mode workflows,
related skills navigation, reference separation, proactive triggers,
output artifacts, quality loop, communication standard, python tools
- Quality checklist for skill completion verification
- Domain context file mapping for all 5 domains
* feat: import 20 workspace marketing skills + standard sections
Imported 20 marketing skills from OpenClaw workspace into repo:
Content Pod (5):
content-strategy, copywriting, copy-editing, social-content, marketing-ideas
SEO Pod (2):
seo-audit (+ references enriched by subagent), programmatic-seo (+ refs)
CRO Pod (6):
page-cro, form-cro, signup-flow-cro, onboarding-cro, popup-cro, paywall-upgrade-cro
Channels Pod (2):
email-sequence, paid-ads
Growth + Intel + GTM (5):
ab-test-setup, competitor-alternatives, marketing-psychology, launch-strategy, brand-guidelines
All 29 skills now have standard sections per SKILL-AUTHORING-STANDARD.md:
✅ Proactive Triggers (4-5 per skill)
✅ Output Artifacts table
✅ Communication standard reference
✅ Related Skills with WHEN/NOT disambiguation
Subagents enriched 8 skills with additional reference docs:
seo-audit, programmatic-seo, page-cro, form-cro,
onboarding-cro, popup-cro, paywall-upgrade-cro, email-sequence
43 files, 10,566 lines added.
* feat: build 13 new marketing skills + social-media-manager upgrade
All skills are 100% original work — inspired by industry best practices,
written from scratch in our own voice following SKILL-AUTHORING-STANDARD.md.
NEW Content Pod (2):
content-production — full research→draft→optimize pipeline, content_scorer.py
content-humanizer — AI pattern detection + voice injection, humanizer_scorer.py
NEW SEO Pod (3):
ai-seo — AI search optimization (AEO/GEO/LLMO), entirely new category
schema-markup — JSON-LD structured data, schema_validator.py
site-architecture — URL structure + internal linking, sitemap_analyzer.py
NEW Channels Pod (2):
cold-email — B2B outreach (distinct from email-sequence lifecycle)
ad-creative — bulk ad generation + platform specs, ad_copy_validator.py
NEW Growth Pod (3):
churn-prevention — cancel flows + save offers + dunning, churn_impact_calculator.py
referral-program — referral + affiliate programs
free-tool-strategy — engineering as marketing
NEW Intelligence Pod (1):
analytics-tracking — GA4/GTM setup + event taxonomy, tracking_plan_generator.py
NEW Sales Pod (1):
pricing-strategy — pricing, packaging, monetization
UPGRADED:
social-media-analyzer → social-media-manager (strategy, calendar, community)
Totals: 42 skills, 27 Python scripts, 60 reference docs, 163 files, 43,265 lines
* feat: update index, marketplace, README for 42 marketing skills
- skills-index.json: 89 → 124 skills (42 marketing entries)
- marketplace.json: marketing-skills v2.0.0 (42 skills, 27 tools)
- README.md: badge 134 → 169, marketing row updated
- prompt-engineer-toolkit: added YAML frontmatter
- Removed build logs from repo
- Parity check: 42/42 passed (YAML + Related + Proactive + Output + Communication)
* fix: merge content-creator into content-production, split marketing-psychology
Quality audit fixes:
1. content-creator → DEPRECATED redirect
- Scripts (brand_voice_analyzer.py, seo_optimizer.py) moved to content-production
- SKILL.md replaced with redirect to content-production + content-strategy
- Eliminates duplicate routing confusion
2. marketing-psychology → 24KB split to 6.8KB + reference
- 70+ mental models moved to references/mental-models-catalog.md (397 lines)
- SKILL.md now lean: categories overview, most-used models, quick reference
- Saves ~4,300 tokens per invocation
* feat: add plugin configs, Codex/OpenClaw compatibility, ClawHub packaging
- marketing-skill/SKILL.md: ClawHub-compatible root with Quick Start for Claude Code, Codex CLI, OpenClaw
- marketing-skill/CLAUDE.md: Agent instructions (routing, context, anti-patterns)
- marketing-skill/.codex/instructions.md: Codex CLI skill routing
- .claude-plugin/marketplace.json: deduplicated, marketing-skills v2.0.0
- .codex/skills-index.json: content-creator marked deprecated, psychology updated
- Total: 42 skills, 27 Python tools, 60 references, 18 plugins
* feat: add 16 Python tools to knowledge-only skills
Enriched 12 previously tool-less skills with practical Python scripts:
- seo-audit/seo_checker.py — HTML on-page SEO analysis (0-100)
- copywriting/headline_scorer.py — headline quality scoring (0-100)
- copy-editing/readability_scorer.py — Flesch + passive + filler detection
- content-strategy/topic_cluster_mapper.py — keyword clustering
- page-cro/conversion_audit.py — HTML CRO signal analysis (0-100)
- paid-ads/roas_calculator.py — ROAS/CPA/CPL calculator
- email-sequence/sequence_analyzer.py — email sequence scoring (0-100)
- form-cro/form_field_analyzer.py — form field CRO audit (0-100)
- onboarding-cro/activation_funnel_analyzer.py — funnel drop-off analysis
- programmatic-seo/url_pattern_generator.py — URL pattern planning
- ab-test-setup/sample_size_calculator.py — statistical sample sizing
- signup-flow-cro/funnel_drop_analyzer.py — signup funnel analysis
- launch-strategy/launch_readiness_scorer.py — launch checklist scoring
- competitor-alternatives/comparison_matrix_builder.py — feature comparison
- social-media-manager/social_calendar_generator.py — content calendar
- readability_scorer.py — fixed demo mode for non-TTY execution
All 43/43 scripts pass execution. All stdlib-only, zero pip installs.
Total: 42 skills, 43 Python tools, 60+ reference docs.
* feat: add 3 more Python tools + improve 6 existing scripts
New tools from build agent:
- email-sequence/scripts/sequence_analyzer.py — email sequence scoring (91/100 demo)
- paid-ads/scripts/roas_calculator.py — ROAS/CPA/CPL/break-even calculator
- competitor-alternatives/scripts/comparison_matrix_builder.py — feature matrix
Improved scripts (better demo modes, fuller analysis):
- seo_checker.py, headline_scorer.py, readability_scorer.py,
conversion_audit.py, topic_cluster_mapper.py, launch_readiness_scorer.py
Total: 42 skills, 47 Python tools, all passing.
* fix: remove duplicate scripts from deprecated content-creator
Scripts already live in content-production/scripts/. The content-creator
directory is now a pure redirect (SKILL.md only + legacy assets/refs).
* fix: scope VirusTotal scan to executable files only
Skip scanning .md, .py, .json, .yml — they're plain text files
that VirusTotal can't meaningfully analyze. This prevents 429 rate
limit errors on PRs with many text file changes (like 42 marketing skills).
Scan still covers: .js, .ts, .sh, .mjs, .cjs, .exe, .dll, .so, .bin, .wasm
---------
Co-authored-by: Leo <leo@openclaw.ai>
# AI Visibility Monitoring Guide
How to track whether your content is getting cited by AI search engines — and what to do when citations change.
The honest truth: AI citation monitoring is immature. There's no Google Search Console equivalent for Perplexity or ChatGPT. Most tracking is manual today. This guide covers what works now and what to watch for as tooling matures.
## What You're Tracking
Goal: Know when you appear in AI answers, for which queries, on which platforms — and detect changes before your traffic is affected.
The challenge: Most AI search platforms don't give publishers visibility into their citation data. You're reverse-engineering your presence through manual testing and indirect signals.
Four things to track:
- Citation presence — are you appearing at all?
- Citation consistency — do you appear most of the time or occasionally?
- Competitor citations — who else is cited for your target queries?
- Traffic signals — is AI-driven traffic changing?
## Platform-by-Platform Monitoring
### Google AI Overviews — Best Current Tooling
Google Search Console is the best data source available for any AI platform:
Setup:
- Open Google Search Console → Performance → Search results
- Add filter: "Search type" → "AI Overviews"
- Set date range to last 90 days minimum
What you see:
- Queries where your pages appeared in AI Overviews
- Impressions from AI Overviews
- Clicks from AI Overviews (usually much lower than organic — users get the answer in the AI box)
- CTR from AI Overviews
What to do with it:
- Sort by impressions: these are your current AI Overview presences
- Sort by clicks: these are the queries where users still clicked through (high-value)
- Identify queries where you have impressions but zero clicks — consider whether that's acceptable or if you need to gate more value behind the click
- Watch for queries where impressions drop sharply — you may have lost an AI Overview position
Frequency: Weekly check. Pull a CSV monthly for trend analysis.
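The "impressions up, clicks down" check above is easy to script against two GSC exports. A minimal sketch, assuming you have already reduced each export to a `{query: (impressions, clicks)}` mapping; the input shape and the `min_impressions` threshold are illustrative choices, not a GSC API:

```python
# Flag queries where AI Overview impressions grew while clicks fell,
# the signal that the AI box is answering without a click-through.

def flag_zero_click_risk(prev, curr, min_impressions=50):
    """prev/curr map query -> (impressions, clicks) for two periods.
    Returns queries with impression growth but click decline."""
    flagged = []
    for query, (imp_now, clicks_now) in curr.items():
        imp_before, clicks_before = prev.get(query, (0, 0))
        if (imp_now >= min_impressions
                and imp_now > imp_before
                and clicks_now < clicks_before):
            flagged.append(query)
    return flagged

if __name__ == "__main__":
    prev = {"how to reduce saas churn": (400, 60),
            "saas churn rate benchmark": (300, 45)}
    curr = {"how to reduce saas churn": (650, 30),   # impressions up, clicks down
            "saas churn rate benchmark": (310, 50)}  # clicks up: healthy
    print(flag_zero_click_risk(prev, curr))
```

Run it on last month's and this month's CSVs; anything it flags is a candidate for the "gate more value behind the click" decision above.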
### Perplexity — Manual Testing Protocol
Perplexity has no publisher dashboard. Manual testing is the only reliable method.
Weekly test protocol:
- Identify your 10-20 highest-priority target queries
- Search each query on perplexity.ai in an incognito window
- Check the Sources panel on the right side
- Record: cited (yes/no), position in sources (1st, 2nd, 3rd...), which page was cited
What to record in your tracking log:
| Date | Query | Cited? | Position | Cited URL | Top Competitor |
|---|---|---|---|---|---|
| 2026-03-06 | "how to reduce SaaS churn" | Yes | 2 | /blog/churn-reduction | competitor.com |
| 2026-03-06 | "SaaS churn rate benchmark" | No | — | — | competitor.com |
Patterns to watch for:
- Same query cited 4/4 weeks → stable citation (protect it)
- Citation appearing intermittently (2 out of 4 weeks) → fragile position (strengthen the page)
- Consistent non-citation → gap to fill (page missing extractable patterns)
Frequency: Weekly for top 10 queries. Monthly for the full list.
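Once the tracking log has a few weeks of data, the stable/fragile/gap buckets above can be computed mechanically. A sketch, assuming the log is a list of `(query, cited)` tuples, one per weekly test; the bucket labels mirror the patterns above and the thresholds are a judgment call:

```python
from collections import defaultdict

def classify_citations(log):
    """log: (query, cited) tuples from weekly manual tests.
    Buckets each query using the stable / fragile / gap patterns."""
    hits, tests = defaultdict(int), defaultdict(int)
    for query, cited in log:
        tests[query] += 1
        hits[query] += int(cited)
    buckets = {}
    for query, n in tests.items():
        if hits[query] == n:            # cited every week
            buckets[query] = "stable: protect it"
        elif hits[query] > 0:           # intermittent citation
            buckets[query] = "fragile: strengthen the page"
        else:                           # never cited
            buckets[query] = "gap: add extractable patterns"
    return buckets

if __name__ == "__main__":
    log = ([("how to reduce saas churn", True)] * 4
           + [("saas churn rate benchmark", True),
              ("saas churn rate benchmark", False)] * 2
           + [("churn playbook", False)] * 4)
    print(classify_citations(log))
```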
### ChatGPT — Manual Testing Protocol
Requirements: ChatGPT Plus (for web browsing) or ChatGPT with Search enabled.
Test protocol:
- Start a new conversation (fresh context window)
- Enable browsing / search mode
- Ask your target query as a natural question
- Check citations in the response
- Click through to verify which pages are cited
Note: ChatGPT citations vary by session. The same query may cite different sources on consecutive days. This is by design — treat it as probabilistic. Your goal is to appear in the citation set, not to appear every time.
What to test:
- Exact keyword queries ("best email marketing software")
- Natural question queries ("what's the best email marketing software for small teams?")
- Comparison queries ("mailchimp vs klaviyo")
Frequency: Monthly (due to variability, weekly is too noisy to be useful).
### Microsoft Copilot — Manual Testing Protocol
Access at copilot.microsoft.com or via Edge sidebar.
Same protocol as ChatGPT. Look for source cards that appear with citations. Copilot integrates Bing's index, so if your Bing presence is strong, Copilot citations follow.
Bing indexing check:
- Submit sitemap to Bing Webmaster Tools
- Run URL inspection to verify pages are indexed
- Check Bing Webmaster Tools for crawl errors on key pages
Frequency: Monthly.
## Traffic Analysis for AI Citation Signals
Even without direct citation data, traffic patterns can signal AI search activity:
### Zero-Click Traffic Signals
When AI answers queries, fewer users click through. Watch for:
Impression growth + traffic decline: If Google Search Console shows impressions growing for a keyword but organic clicks dropping, an AI Overview may be answering the query. You're being cited but not visited.
Query pattern in GSC: If informational queries show impression growth but navigational/commercial queries stay flat, AI Overviews are likely answering the informational queries.
### Direct Traffic Anomalies
Some AI platforms (Claude, Gemini) show traffic as "direct" since users often copy/paste URLs rather than clicking. An increase in direct traffic to specific content pages (not your homepage) can signal AI-driven attention.
### Referral Traffic from AI Platforms
Perplexity, ChatGPT, and Claude all send some referral traffic when users click cited sources. Set up in Google Analytics 4:
- Create a custom dimension tracking referral source
- Filter for: `perplexity.ai`, `chat.openai.com`, `claude.ai`, `copilot.microsoft.com`
- Track monthly — expect low absolute numbers but high engagement (these visitors are already pre-qualified)
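If you are post-processing exported hit data rather than filtering inside GA4, the same classification is a hostname check. A sketch; the domain list matches the filter above, and the subdomain handling (e.g. `www.perplexity.ai`) is an assumption about how these referrers appear in practice:

```python
from urllib.parse import urlparse

AI_PLATFORMS = {"perplexity.ai", "chat.openai.com",
                "claude.ai", "copilot.microsoft.com"}

def is_ai_referral(referrer_url):
    """True when the referrer hostname is an AI platform or a subdomain of one."""
    host = urlparse(referrer_url).netloc.lower().split(":")[0]
    return host in AI_PLATFORMS or any(
        host.endswith("." + domain) for domain in AI_PLATFORMS)
```

Usage: `is_ai_referral("https://www.perplexity.ai/search?q=churn")` returns `True`, while an ordinary Google referrer returns `False`.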
## Tracking Template
Weekly AI Citation Tracker (copy this structure):
Week of: [DATE]
GOOGLE AI OVERVIEWS (from Search Console):
- New queries with AI Overview impressions: [list]
- Queries that dropped out: [list]
- Top performing query: [query] — [# impressions] impressions
PERPLEXITY (manual tests):
Query: [query 1] → Cited: Y/N → Position: [#] → Competitor: [domain]
Query: [query 2] → Cited: Y/N → Position: [#] → Competitor: [domain]
Query: [query 3] → Cited: Y/N → Position: [#] → Competitor: [domain]
NOTABLE CHANGES:
- [Describe any significant wins or losses]
ACTIONS FROM LAST WEEK:
- [What we optimized] → [Result this week]
ACTIONS FOR NEXT WEEK:
- [Page to optimize]: [Specific change to make]
## When Citations Drop
### Immediate Diagnostic
If you notice a citation you had has disappeared:
- Check robots.txt — Did someone accidentally block an AI crawler? Check `yourdomain.com/robots.txt` and test each bot.
- Check the page itself — Did the page structure change? Was the definition block moved? Was the FAQ section deleted in an edit?
- Check competitor pages — Did a competitor publish a more extractable version of the same content? Search the query and see who now appears.
- Check page performance — Is the page load slower? Did it get added to a noindex? Did canonical tags change?
- Check domain authority signals — Did you lose significant backlinks? Authority drops can affect AI citations on competitive queries.
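The first diagnostic, testing each bot against robots.txt, can be automated with the stdlib robot-file parser. A sketch; the user-agent tokens below are the commonly published AI crawler names, which you should verify against each vendor's current documentation:

```python
from urllib.robotparser import RobotFileParser

# Commonly published AI crawler tokens (verify against vendor docs).
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def ai_bot_access(robots_txt, url):
    """Map each AI crawler token to whether robots.txt lets it fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in AI_BOTS}

if __name__ == "__main__":
    sample = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nDisallow:\n"
    # GPTBot is blocked site-wide here; everyone else falls through to '*'.
    print(ai_bot_access(sample, "https://yourdomain.com/blog/churn-reduction"))
```

In the sample above, `GPTBot` maps to `False` (blocked) and the other bots to `True`; fetch your live robots.txt and pass it in to check a real page.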
### Response Playbook
| Root cause | Fix |
|---|---|
| AI bot blocked | Update robots.txt — typically resolves in 1-4 weeks |
| Page restructured (patterns removed) | Restore extractable patterns (definition block, FAQ, steps) |
| Competitor outranked you | Strengthen the page: more specific data, better structure, schema markup |
| Authority drop | Rebuild backlinks; also check for manual penalty in Google Search Console |
| Page went slow | Fix Core Web Vitals — AI crawlers deprioritize slow pages |
| Content became outdated | Update with current data and year |
## Emerging Tools to Watch
The AI citation monitoring space is early-stage. Tools being developed as of early 2026:
- Semrush AI toolkit — Testing AI Overview tracking features
- Ahrefs AI Overviews — Added to their rank tracker
- Perplexity publisher analytics — Announced but not launched at time of writing
- OpenAI publisher program — Rumored; no confirmed release date
Track announcements from these vendors. First-mover advantage on publisher analytics will be significant.
Until then: Manual testing + Google Search Console is the most reliable stack available. Don't let perfect be the enemy of done — weekly manual testing surfaces 80% of what you need to know.