* chore: update gitignore for audit reports and playwright cache

* fix: add YAML frontmatter (name + description) to all SKILL.md files

- Added frontmatter to 34 skills that were missing it entirely (0% Tessl score)
- Fixed name field format to kebab-case across all 169 skills
- Resolves #284
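
The frontmatter shape added here, as a minimal sketch (the field values are illustrative, not copied from a specific skill):

```yaml
---
name: skill-security-auditor   # kebab-case, matching the skill directory name
description: Audits skill definitions for unsafe patterns before publishing.
---
```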

* chore: sync codex skills symlinks [automated]

* fix: optimize 14 low-scoring skills via Tessl review (#290)

Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286.

* chore: sync codex skills symlinks [automated]

* fix: optimize 18 skills via Tessl review + compliance fix (closes #287) (#291)

Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287.

* feat: add scripts and references to 4 prompt-only skills + Tessl optimization (#292)

Phase 2: 3 new scripts + 2 reference files for prompt-only skills. Tessl 45-55% → 94-100%.

* feat: add 6 agents + 5 slash commands for full coverage (v2.7.0) (#293)

Phase 3: 6 new agents (all 9 categories covered) + 5 slash commands.

* fix: Phase 5 verification fixes + docs update (#294)

Phase 5 verification fixes

* chore: sync codex skills symlinks [automated]

* fix: marketplace audit — all 11 plugins validated by Claude Code (#295)

Marketplace audit: all 11 plugins validated + installed + tested in Claude Code

* fix: restore 7 removed plugins + revert playwright-pro name to pw

Reverts two overly aggressive audit changes:
- Restored content-creator, demand-gen, fullstack-engineer, aws-architect,
  product-manager, scrum-master, skill-security-auditor to marketplace
- Reverted playwright-pro plugin.json name back to 'pw' (intentional short name)

* refactor: split 21 over-500-line skills into SKILL.md + references (#296)

* chore: sync codex skills symlinks [automated]

* docs: update all documentation with accurate counts and regenerated skill pages

- Update skill count to 170, Python tools to 213, references to 314 across all docs
- Regenerate all 170 skill doc pages from latest SKILL.md sources
- Update CLAUDE.md with v2.1.1 highlights, accurate architecture tree, and roadmap
- Update README.md badges and overview table
- Update marketplace.json metadata description and version
- Update mkdocs.yml, index.md, getting-started.md with correct numbers

* fix: add root-level SKILL.md and .codex/instructions.md to all domains (#301)

Root cause: CLI tools (ai-agent-skills, agent-skills-cli) look for SKILL.md
at the specified install path. 7 of 9 domain directories were missing this
file, causing "Skill not found" errors for bundle installs like:
  npx ai-agent-skills install alirezarezvani/claude-skills/engineering-team

Fix:
- Add root-level SKILL.md with YAML frontmatter to 7 domains
- Add .codex/instructions.md to 8 domains (for Codex CLI discovery)
- Update INSTALLATION.md with accurate skill counts (53→170)
- Add troubleshooting entry for "Skill not found" error

All 9 domains now have: SKILL.md + .codex/instructions.md + plugin.json

Closes #301

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add Gemini CLI + OpenClaw support, fix Codex missing 25 skills

Gemini CLI:
- Add GEMINI.md with activation instructions
- Add scripts/gemini-install.sh setup script
- Add scripts/sync-gemini-skills.py (194 skills indexed)
- Add .gemini/skills/ with symlinks for all skills, agents, commands
- Remove phantom medium-content-pro entries from sync script
- Add top-level folder filter to prevent gitignored dirs from leaking
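
The folder filter works roughly like this minimal sketch (the IGNORED names and function shape are assumptions for illustration, not the actual sync-gemini-skills.py code):

```python
from pathlib import Path

# Top-level directories that must never leak into the symlink index.
# Illustrative set; the real script derives these from .gitignore.
IGNORED = {"node_modules", ".venv", "audit-reports", ".playwright-cache"}

def top_level_dirs(root: Path) -> list[str]:
    """Return top-level skill directories, skipping ignored and hidden ones."""
    return sorted(
        p.name for p in root.iterdir()
        if p.is_dir() and p.name not in IGNORED and not p.name.startswith(".")
    )
```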

Codex CLI:
- Fix sync-codex-skills.py missing "engineering" domain (25 POWERFUL skills)
- Regenerate .codex/skills-index.json: 124 → 149 skills
- Add 25 new symlinks in .codex/skills/

OpenClaw:
- Add OpenClaw installation section to INSTALLATION.md
- Add ClawHub install + manual install + YAML frontmatter docs

Documentation:
- Update INSTALLATION.md with all 4 platforms + accurate counts
- Update README.md: "three platforms" → "four platforms" + Gemini quick start
- Update CLAUDE.md with Gemini CLI support in v2.1.1 highlights
- Update SKILL-AUTHORING-STANDARD.md + SKILL_PIPELINE.md with Gemini steps
- Add OpenClaw + Gemini to installation locations reference table

Marketplace: all 18 plugins validated — sources exist, SKILL.md present

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(product,pm): world-class product & PM skills audit — 6 scripts, 5 agents, 7 commands, 23 references/assets

Phase 1 — Agent & Command Foundation:
- Rewrite cs-project-manager agent (55→515 lines, 4 workflows, 6 skill integrations)
- Expand cs-product-manager agent (408→684 lines, orchestrates all 8 product skills)
- Add 7 slash commands: /rice, /okr, /persona, /user-story, /sprint-health, /project-health, /retro

Phase 2 — Script Gap Closure (2,779 lines):
- jira-expert: jql_query_builder.py (22 patterns), workflow_validator.py
- confluence-expert: space_structure_generator.py, content_audit_analyzer.py
- atlassian-admin: permission_audit_tool.py
- atlassian-templates: template_scaffolder.py (Confluence XHTML generation)
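
Confluence's storage format is XHTML with `ac:` structured macros, so the scaffolder's output plausibly looks like this sketch (the helper name and section layout are illustrative, not the actual template_scaffolder.py API; the `toc` macro name follows Confluence's documented storage format):

```python
from xml.sax.saxutils import escape

def page_scaffold(title: str, sections: list[str]) -> str:
    """Build a minimal Confluence storage-format (XHTML) page body."""
    toc = '<ac:structured-macro ac:name="toc"/>'  # table-of-contents macro
    body = "".join(f"<h2>{escape(s)}</h2><p/>" for s in sections)
    return f"<h1>{escape(title)}</h1>{toc}{body}"
```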

Phase 3 — Reference & Asset Enrichment:
- 9 product references (competitive-teardown, landing-page-generator, saas-scaffolder)
- 6 PM references (confluence-expert, atlassian-admin, atlassian-templates)
- 7 product assets (templates for PRD, RICE, sprint, stories, OKR, research, design system)
- 1 PM asset (permission_scheme_template.json)

Phase 4 — New Agents:
- cs-agile-product-owner, cs-product-strategist, cs-ux-researcher

Phase 5 — Integration & Polish:
- Related Skills cross-references in 8 SKILL.md files
- Updated product-team/CLAUDE.md (5→8 skills, 6→9 tools, 4 agents, 5 commands)
- Updated project-management/CLAUDE.md (0→12 scripts, 3 commands)
- Regenerated docs site (177 pages), updated homepage and getting-started

Quality audit: 31 files reviewed, 29 PASS, 2 fixed (copy-frameworks.md, governance-framework.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: audit and repair all plugins, agents, and commands

- Fix 12 command files: correct CLI arg syntax, script paths, and usage docs
- Fix 3 agents with broken script/reference paths (cs-content-creator,
  cs-demand-gen-specialist, cs-financial-analyst)
- Add complete YAML frontmatter to 5 agents (cs-growth-strategist,
  cs-engineering-lead, cs-senior-engineer, cs-financial-analyst,
  cs-quality-regulatory)
- Fix cs-ceo-advisor related agent path
- Update marketplace.json metadata counts (224 tools, 341 refs, 14 agents,
  12 commands)

Verified: all 19 scripts pass --help, all 14 agent paths resolve, mkdocs
builds clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: repair 25 Python scripts failing --help across all domains

- Fix Python 3.10+ syntax (float | None → Optional[float]) in 2 scripts
- Add argparse CLI handling to 9 marketing scripts using raw sys.argv
- Fix 10 scripts crashing at module level (wrap in __main__, add argparse)
- Make yaml/prefect/mcp imports conditional with stdlib fallbacks (4 scripts)
- Fix f-string backslash syntax in project_bootstrapper.py
- Fix -h flag conflict in pr_analyzer.py
- Fix tech-debt.md description (score → prioritize)

All 237 scripts now pass --help verification under Python 3.
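
The recurring repair pattern, sketched as a composite (this is not any one of the repaired scripts): Optional[...] instead of the 3.10-only `X | None` union, a conditional yaml import with a stdlib fallback, and module-level work moved under a __main__ guard with argparse:

```python
import argparse
import json
import sys
from typing import Optional  # works on Python < 3.10, unlike `float | None`

try:
    import yaml  # optional dependency

    def load(text: str):
        return yaml.safe_load(text)
except ImportError:
    def load(text: str):
        return json.loads(text)  # stdlib fallback (JSON is valid YAML)

def score(value: Optional[float]) -> float:
    """Example of the Optional[...] annotation the fix swapped in."""
    return 0.0 if value is None else value

def main(argv: Optional[list] = None) -> int:
    parser = argparse.ArgumentParser(description="Example repaired CLI.")
    parser.add_argument("file", nargs="?", default=None, help="optional input file")
    parser.parse_args(argv)
    return 0

if __name__ == "__main__":  # nothing runs at import time any more
    sys.exit(main())
```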

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(product-team): close 3 verified gaps in product skills

- Fix competitive-teardown/SKILL.md: replace broken references
  DATA_COLLECTION.md → references/data-collection-guide.md and
  TEMPLATES.md → references/analysis-templates.md (workflow was broken
  at steps 2 and 4)

- Upgrade landing_page_scaffolder.py: add TSX + Tailwind output format
  (--format tsx) matching SKILL.md promise of Next.js/React components.
  4 design styles (dark-saas, clean-minimal, bold-startup, enterprise).
  TSX is now default; HTML preserved via --format html

- Rewrite README.md: fix stale counts (was 5 skills/15+ tools, now
  accurately 8 skills/9 tools) and remove references to 7 ghost
  scripts that never existed (sprint_planner.py, velocity_tracker.py, etc.)

- Fix tech-debt.md description (score → prioritize)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* release: v2.1.2 — landing page TSX output, brand voice integration, docs update

- Landing page generator defaults to Next.js TSX + Tailwind CSS (4 design styles)
- Brand voice analyzer integrated into landing page generation workflow
- CHANGELOG, CLAUDE.md, README.md updated for v2.1.2
- All 13 plugin.json + marketplace.json bumped to 2.1.2
- Gemini/Codex skill indexes re-synced
- Backward compatible: --format html preserved, no breaking changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Author: Alireza Rezvani
Committed: 2026-03-10 09:48:49 +01:00 (by GitHub)
Parent: f6fe59aac4
Commit: a68ae3a05e
140 changed files with 10924 additions and 891 deletions


@@ -1,7 +1,7 @@
 {
   "name": "marketing-skills",
   "description": "42 production-ready marketing skills across 7 pods: Content (copywriting, content strategy, content production), SEO (audits, schema markup, programmatic SEO, site architecture), CRO (A/B testing, forms, popups, signup flows, pricing, onboarding), Channels (email sequences, social media, paid ads, cold email), Growth (launch strategy, referral programs, free tools), Intelligence (competitor analysis, marketing psychology, analytics tracking), and Sales enablement",
-  "version": "2.1.1",
+  "version": "2.1.2",
  "author": {
    "name": "Alireza Rezvani",
    "url": "https://alirezarezvani.com"


@@ -416,12 +416,25 @@ SAMPLE_ADS = [
 # ---------------------------------------------------------------------------
 def main():
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Validates ad copy against platform specs. "
+                    "Checks character counts, rejection triggers, and scores each ad 0-100."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a JSON file containing ad data. "
+             "If omitted, reads from stdin or runs embedded sample."
+    )
+    args = parser.parse_args()
     # Load from file or stdin, else use sample
     ads = None
-    if len(sys.argv) > 1:
+    if args.file:
         try:
-            with open(sys.argv[1]) as f:
+            with open(args.file) as f:
                 data = json.load(f)
                 ads = data if isinstance(data, list) else [data]
         except Exception as e:


@@ -352,18 +352,33 @@ def print_report(result, inputs):
 def main():
-    if len(sys.argv) > 1 and sys.argv[1] != "--json":
-        with open(sys.argv[1]) as f:
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Tracking plan generator — produces event taxonomy, GTM config, and GA4 dimension recommendations."
+    )
+    parser.add_argument(
+        "input_file", nargs="?", default=None,
+        help="JSON file with business config (default: run with sample SaaS data)"
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Output full config as JSON"
+    )
+    args = parser.parse_args()
+    if args.input_file:
+        with open(args.input_file) as f:
             inputs = json.load(f)
     else:
-        if "--json" not in sys.argv:
+        if not args.json:
             print("No input file provided. Running with sample data...\n")
         inputs = SAMPLE_INPUT
     result = generate_tracking_plan(inputs)
     print_report(result, inputs)
-    if "--json" in sys.argv:
+    if args.json:
         print(json.dumps(result, indent=2))


@@ -164,8 +164,23 @@ def print_report(result):
 def main():
-    if len(sys.argv) > 1:
-        with open(sys.argv[1]) as f:
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Churn impact calculator — models revenue impact of churn reduction improvements."
+    )
+    parser.add_argument(
+        "input_file", nargs="?", default=None,
+        help="JSON file with churn metrics (default: run with sample data)"
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Output results as JSON"
+    )
+    args = parser.parse_args()
+    if args.input_file:
+        with open(args.input_file) as f:
             inputs = json.load(f)
     else:
         print("No input file provided. Running with sample data...\n")
@@ -176,8 +191,7 @@ def main():
     result = calculate(inputs)
     print_report(result)
-    # Also dump JSON for programmatic use
-    if "--json" in sys.argv:
+    if args.json:
         print(json.dumps(result, indent=2))


@@ -439,16 +439,29 @@ SAMPLE_SEQUENCE = [
 # ─── Main ─────────────────────────────────────────────────────────────────────
 def main():
-    if len(sys.argv) > 1:
-        arg = sys.argv[1]
-        if arg == "-":
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Analyzes a cold email sequence for quality signals. "
+                    "Evaluates word count, reading level, personalization, CTA clarity, "
+                    "spam triggers, and subject lines. Scores each email 0-100."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a JSON file containing the email sequence. "
+             "Use '-' to read from stdin. If omitted, runs embedded sample."
+    )
+    args = parser.parse_args()
+    if args.file:
+        if args.file == "-":
             sequence = json.load(sys.stdin)
         else:
             try:
-                with open(arg, "r", encoding="utf-8") as f:
+                with open(args.file, "r", encoding="utf-8") as f:
                     sequence = json.load(f)
             except FileNotFoundError:
-                print(f"Error: File not found: {arg}", file=sys.stderr)
+                print(f"Error: File not found: {args.file}", file=sys.stderr)
                 sys.exit(1)
             except json.JSONDecodeError as e:
                 print(f"Error: Invalid JSON: {e}", file=sys.stderr)


@@ -448,7 +448,25 @@ def print_report(result: dict, label: str = "") -> None:
 def main():
-    if len(sys.argv) == 1:
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Scores content 0-100 on 'humanity' by detecting AI writing patterns. "
+                    "Checks AI vocabulary, sentence variance, passive voice, hedging, "
+                    "em-dash overuse, and paragraph variety."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a text file to analyze. If omitted, runs demo comparing "
+             "human vs AI sample content."
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Also output results as JSON."
+    )
+    args = parser.parse_args()
+    if args.file is None:
         # Demo mode: compare human vs AI sample
         print("[Demo mode — comparing human vs AI sample content]")
         print()
@@ -468,18 +486,17 @@ def main():
         print(f"  Difference: {r1['humanity_score'] - r2['humanity_score']} points")
         print()
     else:
-        filepath = sys.argv[1]
         try:
-            with open(filepath, 'r', encoding='utf-8') as f:
+            with open(args.file, 'r', encoding='utf-8') as f:
                 text = f.read()
         except FileNotFoundError:
-            print(f"Error: file not found: {filepath}", file=sys.stderr)
+            print(f"Error: file not found: {args.file}", file=sys.stderr)
             sys.exit(1)
         result = score_humanity(text)
-        print_report(result, filepath)
+        print_report(result, args.file)
-        if "--json" in sys.argv:
+        if args.json:
             print(json.dumps(result, indent=2))


@@ -174,12 +174,24 @@ def analyze_content(content: str, output_format: str = 'json') -> str:
 if __name__ == "__main__":
     import sys
-    if len(sys.argv) > 1:
-        with open(sys.argv[1], 'r') as f:
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Brand Voice Analyzer - Analyzes content to establish and maintain brand voice consistency"
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Text file to analyze"
+    )
+    parser.add_argument(
+        "--format", choices=["json", "text"], default="text",
+        help="Output format (default: text)"
+    )
+    args = parser.parse_args()
+    if args.file:
+        with open(args.file, 'r') as f:
             content = f.read()
-        output_format = sys.argv[2] if len(sys.argv) > 2 else 'text'
-        print(analyze_content(content, output_format))
+        print(analyze_content(content, args.format))
     else:
-        print("Usage: python brand_voice_analyzer.py <file> [json|text]")
+        print("Usage: python brand_voice_analyzer.py <file> [--format json|text]")


@@ -401,11 +401,31 @@ def print_report(result: dict, title: str, keyword: str) -> None:
 def main():
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Scores content 0-100 on readability, SEO, structure, and engagement."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a text/markdown file to analyze. If omitted, runs demo "
+             "with embedded sample content."
+    )
+    parser.add_argument(
+        "keyword", nargs="?", default="",
+        help="Target SEO keyword to check density and placement."
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Also output results as JSON."
+    )
+    args = parser.parse_args()
     title = ""
-    keyword = ""
+    keyword = args.keyword
     text = ""
-    if len(sys.argv) == 1:
+    if args.file is None:
         # Demo mode — use embedded sample
         print("[Demo mode — using embedded sample content]")
         text = SAMPLE_CONTENT
@@ -413,13 +433,11 @@ def main():
         keyword = SAMPLE_KEYWORD
     else:
         # Read from file
-        filepath = sys.argv[1]
-        keyword = sys.argv[2] if len(sys.argv) > 2 else ""
         try:
-            with open(filepath, 'r', encoding='utf-8') as f:
+            with open(args.file, 'r', encoding='utf-8') as f:
                 text = f.read()
         except FileNotFoundError:
-            print(f"Error: file not found: {filepath}", file=sys.stderr)
+            print(f"Error: file not found: {args.file}", file=sys.stderr)
             sys.exit(1)
     # Extract title from first H1 or first line
@@ -438,7 +456,7 @@ def main():
     print_report(result, title, keyword)
     # JSON output for programmatic use
-    if "--json" in sys.argv:
+    if args.json:
         print(json.dumps(result, indent=2))


@@ -406,14 +406,28 @@ def optimize_content(content: str, keyword: str = None,
 if __name__ == "__main__":
     import sys
-    if len(sys.argv) > 1:
-        with open(sys.argv[1], 'r') as f:
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="SEO Content Optimizer - Analyzes and optimizes content for SEO"
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Text file to analyze"
+    )
+    parser.add_argument(
+        "--keyword", "-k", default=None,
+        help="Primary keyword to optimize for"
+    )
+    parser.add_argument(
+        "--secondary", "-s", default=None,
+        help="Comma-separated secondary keywords"
+    )
+    args = parser.parse_args()
+    if args.file:
+        with open(args.file, 'r') as f:
             content = f.read()
-        keyword = sys.argv[2] if len(sys.argv) > 2 else None
-        secondary = sys.argv[3] if len(sys.argv) > 3 else None
-        print(optimize_content(content, keyword, secondary))
+        print(optimize_content(content, args.keyword, args.secondary))
     else:
-        print("Usage: python seo_optimizer.py <file> [primary_keyword] [secondary_keywords]")
+        print("Usage: python seo_optimizer.py <file> [--keyword primary] [--secondary kw1,kw2]")


@@ -330,11 +330,25 @@ DEFAULT_PARAMS = {
 # ---------------------------------------------------------------------------
 def main():
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Estimates ROI of building a free marketing tool. "
+                    "Models return given build cost, maintenance, traffic, "
+                    "conversion rate, and lead value."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a JSON file with tool parameters. "
+             "If omitted, reads from stdin or runs embedded sample."
+    )
+    args = parser.parse_args()
     params = None
-    if len(sys.argv) > 1:
+    if args.file:
         try:
-            with open(sys.argv[1]) as f:
+            with open(args.file) as f:
                 params = json.load(f)
         except Exception as e:
             print(f"Error reading file: {e}", file=sys.stderr)


@@ -123,8 +123,25 @@ def print_report(results: dict):
 def main():
-    if len(sys.argv) > 1:
-        filepath = Path(sys.argv[1])
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Validates marketing context completeness. "
+                    "Scores 0-100 based on required and optional section coverage."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a marketing context markdown file. "
+             "If omitted, runs demo with embedded sample data."
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Also output results as JSON."
+    )
+    args = parser.parse_args()
+    if args.file:
+        filepath = Path(args.file)
         if not filepath.exists():
             print(f"Error: File not found: {filepath}", file=sys.stderr)
             sys.exit(1)
@@ -194,7 +211,7 @@ def main():
     results = validate_context(content)
     print_report(results)
-    if "--json" in sys.argv:
+    if args.json:
         print(f"\n{json.dumps(results, indent=2)}")


@@ -119,8 +119,23 @@ def print_report(analysis: dict):
 def main():
-    if len(sys.argv) > 1:
-        filepath = Path(sys.argv[1])
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Track campaign status across marketing skills — tasks, owners, deadlines."
+    )
+    parser.add_argument(
+        "input_file", nargs="?", default=None,
+        help="JSON file with campaign data (default: run with sample data)"
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Also output results as JSON"
+    )
+    args = parser.parse_args()
+    if args.input_file:
+        filepath = Path(args.input_file)
         if filepath.exists():
             campaign = json.loads(filepath.read_text())
         else:
@@ -133,7 +148,7 @@ def main():
     analysis = analyze_campaign(campaign)
     print_report(analysis)
-    if "--json" in sys.argv:
+    if args.json:
         print(f"\n{json.dumps(analysis, indent=2)}")


@@ -207,11 +207,26 @@ def print_report(result, inputs):
 def main():
-    if len(sys.argv) > 1 and sys.argv[1] != "--json":
-        with open(sys.argv[1]) as f:
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Pricing modeler — projects revenue at different price points and recommends tier structure."
+    )
+    parser.add_argument(
+        "input_file", nargs="?", default=None,
+        help="JSON file with pricing data (default: run with sample data)"
+    )
+    parser.add_argument(
+        "--json", action="store_true",
+        help="Output results as JSON"
+    )
+    args = parser.parse_args()
+    if args.input_file:
+        with open(args.input_file) as f:
             inputs = json.load(f)
     else:
-        if "--json" not in sys.argv:
+        if not args.json:
             print("No input file provided. Running with sample data...\n")
         inputs = SAMPLE_INPUT
@@ -260,7 +275,7 @@ def main():
     print_report(result, inputs)
-    if "--json" in sys.argv:
+    if args.json:
         print(json.dumps(result, indent=2))


@@ -353,11 +353,25 @@ def run(params):
 # ---------------------------------------------------------------------------
 def main():
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Calculates referral program ROI. "
+                    "Models economics given LTV, CAC, referral rate, reward cost, "
+                    "and conversion rate."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a JSON file with referral program parameters. "
+             "If omitted, reads from stdin or runs embedded sample."
+    )
+    args = parser.parse_args()
     params = None
-    if len(sys.argv) > 1:
+    if args.file:
         try:
-            with open(sys.argv[1]) as f:
+            with open(args.file) as f:
                 params = json.load(f)
         except Exception as e:
             print(f"Error reading file: {e}", file=sys.stderr)


@@ -390,16 +390,28 @@ SAMPLE_HTML = """<!DOCTYPE html>
 # ─── Main ─────────────────────────────────────────────────────────────────────
 def main():
-    if len(sys.argv) > 1:
-        arg = sys.argv[1]
-        if arg == "-":
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Extracts and validates JSON-LD structured data from HTML. "
+                    "Scores 0-100 per schema block based on required/recommended field coverage."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to an HTML file to validate. "
+             "Use '-' to read from stdin. If omitted, runs embedded sample."
+    )
+    args = parser.parse_args()
+    if args.file:
+        if args.file == "-":
             html = sys.stdin.read()
         else:
             try:
-                with open(arg, "r", encoding="utf-8") as f:
+                with open(args.file, "r", encoding="utf-8") as f:
                     html = f.read()
             except FileNotFoundError:
-                print(f"Error: File not found: {arg}", file=sys.stderr)
+                print(f"Error: File not found: {args.file}", file=sys.stderr)
                 sys.exit(1)
     else:
         print("No file provided — running on embedded sample HTML.\n")


@@ -328,12 +328,24 @@ def load_content(source: str) -> str:
 def main():
-    if len(sys.argv) > 1:
-        arg = sys.argv[1]
-        if arg == "-":
+    import argparse
+    parser = argparse.ArgumentParser(
+        description="Analyzes sitemap.xml files for structure, depth, and potential issues. "
+                    "Reports depth distribution, URL patterns, orphan candidates, and duplicates."
+    )
+    parser.add_argument(
+        "file", nargs="?", default=None,
+        help="Path to a sitemap.xml file or URL (https://...). "
+             "Use '-' to read from stdin. If omitted, runs embedded sample."
+    )
+    args = parser.parse_args()
+    if args.file:
+        if args.file == "-":
             content = sys.stdin.read()
         else:
-            content = load_content(arg)
+            content = load_content(args.file)
     else:
        print("No file or URL provided — running on embedded sample sitemap.\n")
        content = SAMPLE_SITEMAP