* chore: update gitignore for audit reports and playwright cache

* fix: add YAML frontmatter (name + description) to all SKILL.md files

  - Added frontmatter to 34 skills that were missing it entirely (0% Tessl score)
  - Fixed name field format to kebab-case across all 169 skills (a minimal check is sketched below)
  - Resolves #284
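  A minimal sketch of the kebab-case check involved (the frontmatter fields come from this entry; the helper name and the naive field scan are illustrative, not the actual tooling):

      import re
      from pathlib import Path

      KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

      def frontmatter_problems(skill_md: Path) -> list:
          """Report missing frontmatter or a non-kebab-case name in a SKILL.md."""
          text = skill_md.read_text(encoding="utf-8")
          if not text.startswith("---"):
              return ["missing YAML frontmatter"]
          # Naive field scan; a real pass would parse the YAML block properly.
          header = text.split("---", 2)[1]
          fields = dict(line.split(":", 1) for line in header.splitlines() if ":" in line)
          problems = []
          name = fields.get("name", "").strip()
          if not name:
              problems.append("missing name field")
          elif not KEBAB.match(name):
              problems.append(f"name is not kebab-case: {name!r}")
          if not fields.get("description", "").strip():
              problems.append("missing description field")
          return problems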
* chore: sync codex skills symlinks [automated]

* fix: optimize 14 low-scoring skills via Tessl review (#290)

  Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286.

* chore: sync codex skills symlinks [automated]

* fix: optimize 18 skills via Tessl review + compliance fix (closes #287) (#291)

  Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287.

* feat: add scripts and references to 4 prompt-only skills + Tessl optimization (#292)

  Phase 2: 3 new scripts + 2 reference files for prompt-only skills. Tessl 45-55% → 94-100%.

* feat: add 6 agents + 5 slash commands for full coverage (v2.7.0) (#293)

  Phase 3: 6 new agents (all 9 categories covered) + 5 slash commands.

* fix: Phase 5 verification fixes + docs update (#294)

* chore: sync codex skills symlinks [automated]

* fix: marketplace audit — all 11 plugins validated by Claude Code (#295)

  Marketplace audit: all 11 plugins validated + installed + tested in Claude Code.

* fix: restore 7 removed plugins + revert playwright-pro name to pw

  Reverts two overly aggressive audit changes:

  - Restored content-creator, demand-gen, fullstack-engineer, aws-architect, product-manager, scrum-master, and skill-security-auditor to the marketplace
  - Reverted the playwright-pro plugin.json name back to 'pw' (intentional short name)

* refactor: split 21 over-500-line skills into SKILL.md + references (#296)

* chore: sync codex skills symlinks [automated]

* docs: update all documentation with accurate counts and regenerated skill pages

  - Update skill count to 170, Python tools to 213, references to 314 across all docs
  - Regenerate all 170 skill doc pages from latest SKILL.md sources
  - Update CLAUDE.md with v2.1.1 highlights, accurate architecture tree, and roadmap
  - Update README.md badges and overview table
  - Update marketplace.json metadata description and version
  - Update mkdocs.yml, index.md, getting-started.md with correct numbers

* fix: add root-level SKILL.md and .codex/instructions.md to all domains (#301)

  Root cause: CLI tools (ai-agent-skills, agent-skills-cli) look for SKILL.md at the specified install path. 7 of 9 domain directories were missing this file, causing "Skill not found" errors for bundle installs like:

      npx ai-agent-skills install alirezarezvani/claude-skills/engineering-team

  Fix (a presence check is sketched below):

  - Add root-level SKILL.md with YAML frontmatter to 7 domains
  - Add .codex/instructions.md to 8 domains (for Codex CLI discovery)
  - Update INSTALLATION.md with accurate skill counts (53 → 170)
  - Add troubleshooting entry for the "Skill not found" error

  All 9 domains now have: SKILL.md + .codex/instructions.md + plugin.json

  Closes #301

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
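  A rough sketch of the presence audit that surfaces such gaps (the checkout root and the required-file list are assumptions based on this entry, not the repository's actual tooling):

      from pathlib import Path

      REQUIRED = ("SKILL.md", ".codex/instructions.md", "plugin.json")
      repo = Path(".")  # hypothetical checkout root

      # Treat each top-level non-hidden directory as a domain and report gaps.
      for domain in sorted(p for p in repo.iterdir()
                           if p.is_dir() and not p.name.startswith(".")):
          missing = [name for name in REQUIRED if not (domain / name).exists()]
          print(f"{domain.name}: {'OK' if not missing else 'missing ' + ', '.join(missing)}")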
* feat: add Gemini CLI + OpenClaw support, fix Codex missing 25 skills

  Gemini CLI:

  - Add GEMINI.md with activation instructions
  - Add scripts/gemini-install.sh setup script
  - Add scripts/sync-gemini-skills.py (194 skills indexed; simplified sketch below)
  - Add .gemini/skills/ with symlinks for all skills, agents, commands
  - Remove phantom medium-content-pro entries from sync script
  - Add top-level folder filter to prevent gitignored dirs from leaking

  Codex CLI:

  - Fix sync-codex-skills.py missing "engineering" domain (25 POWERFUL skills)
  - Regenerate .codex/skills-index.json: 124 → 149 skills
  - Add 25 new symlinks in .codex/skills/

  OpenClaw:

  - Add OpenClaw installation section to INSTALLATION.md
  - Add ClawHub install + manual install + YAML frontmatter docs

  Documentation:

  - Update INSTALLATION.md with all 4 platforms + accurate counts
  - Update README.md: "three platforms" → "four platforms" + Gemini quick start
  - Update CLAUDE.md with Gemini CLI support in v2.1.1 highlights
  - Update SKILL-AUTHORING-STANDARD.md + SKILL_PIPELINE.md with Gemini steps
  - Add OpenClaw + Gemini to installation locations reference table

  Marketplace: all 18 plugins validated — sources exist, SKILL.md present

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
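  The sync scripts essentially walk the skill tree, create symlinks, and write an index. A simplified sketch, assuming a flat skills/ source tree and a JSON index (the real scripts/sync-gemini-skills.py and index shape may differ):

      import json
      from pathlib import Path

      skills_root = Path("skills")        # hypothetical source tree
      target = Path(".gemini/skills")
      target.mkdir(parents=True, exist_ok=True)

      index = []
      for skill_md in sorted(skills_root.rglob("SKILL.md")):
          skill_dir = skill_md.parent
          link = target / skill_dir.name
          if not link.exists():
              # Symlink the whole skill directory into the CLI's discovery path.
              link.symlink_to(skill_dir.resolve(), target_is_directory=True)
          index.append({"name": skill_dir.name, "path": str(skill_dir)})

      Path(".gemini/skills-index.json").write_text(json.dumps(index, indent=2))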
* feat(product,pm): world-class product & PM skills audit — 6 scripts, 5 agents, 7 commands, 23 references/assets

  Phase 1 — Agent & Command Foundation:

  - Rewrite cs-project-manager agent (55 → 515 lines, 4 workflows, 6 skill integrations)
  - Expand cs-product-manager agent (408 → 684 lines, orchestrates all 8 product skills)
  - Add 7 slash commands: /rice, /okr, /persona, /user-story, /sprint-health, /project-health, /retro

  Phase 2 — Script Gap Closure (2,779 lines):

  - jira-expert: jql_query_builder.py (22 patterns), workflow_validator.py
  - confluence-expert: space_structure_generator.py, content_audit_analyzer.py
  - atlassian-admin: permission_audit_tool.py
  - atlassian-templates: template_scaffolder.py (Confluence XHTML generation)

  Phase 3 — Reference & Asset Enrichment:

  - 9 product references (competitive-teardown, landing-page-generator, saas-scaffolder)
  - 6 PM references (confluence-expert, atlassian-admin, atlassian-templates)
  - 7 product assets (templates for PRD, RICE, sprint, stories, OKR, research, design system)
  - 1 PM asset (permission_scheme_template.json)

  Phase 4 — New Agents:

  - cs-agile-product-owner, cs-product-strategist, cs-ux-researcher

  Phase 5 — Integration & Polish:

  - Related Skills cross-references in 8 SKILL.md files
  - Updated product-team/CLAUDE.md (5 → 8 skills, 6 → 9 tools, 4 agents, 5 commands)
  - Updated project-management/CLAUDE.md (0 → 12 scripts, 3 commands)
  - Regenerated docs site (177 pages), updated homepage and getting-started

  Quality audit: 31 files reviewed, 29 PASS, 2 fixed (copy-frameworks.md, governance-framework.md)

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: audit and repair all plugins, agents, and commands

  - Fix 12 command files: correct CLI arg syntax, script paths, and usage docs
  - Fix 3 agents with broken script/reference paths (cs-content-creator, cs-demand-gen-specialist, cs-financial-analyst)
  - Add complete YAML frontmatter to 5 agents (cs-growth-strategist, cs-engineering-lead, cs-senior-engineer, cs-financial-analyst, cs-quality-regulatory)
  - Fix cs-ceo-advisor related agent path
  - Update marketplace.json metadata counts (224 tools, 341 refs, 14 agents, 12 commands)

  Verified: all 19 scripts pass --help, all 14 agent paths resolve, mkdocs builds clean.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: repair 25 Python scripts failing --help across all domains

  - Fix Python 3.10+ syntax (float | None → Optional[float]) in 2 scripts
  - Add argparse CLI handling to 9 marketing scripts that used raw sys.argv
  - Fix 10 scripts crashing at module level (wrap in __main__, add argparse)
  - Make yaml/prefect/mcp imports conditional with stdlib fallbacks (4 scripts)
  - Fix f-string backslash syntax in project_bootstrapper.py
  - Fix -h flag conflict in pr_analyzer.py
  - Fix tech-debt.md description (score → prioritize)

  All 237 scripts now pass python3 --help verification. Two of the recurring fix patterns are sketched below.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
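  Two of the recurring fix patterns, sketched with illustrative names (the real scripts differ):

      from typing import Optional

      # Before (Python 3.10+ only syntax):  def score(x: float | None) -> float:
      # After, compatible with older interpreters:
      def score(x: Optional[float]) -> float:
          return x if x is not None else 0.0

      # Conditional import with a stdlib fallback, so `--help` still works
      # when the optional dependency is not installed:
      try:
          import yaml

          def load_config(path: str) -> dict:
              with open(path, encoding="utf-8") as f:
                  return yaml.safe_load(f)
      except ImportError:
          import json

          def load_config(path: str) -> dict:
              with open(path, encoding="utf-8") as f:
                  return json.load(f)

  The argparse pattern itself is visible in the script below, whose __main__ block wraps the analyzer in a proper CLI.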
* fix(product-team): close 3 verified gaps in product skills

  - Fix competitive-teardown/SKILL.md: replace broken references DATA_COLLECTION.md → references/data-collection-guide.md and TEMPLATES.md → references/analysis-templates.md (the workflow was broken at steps 2 and 4)
  - Upgrade landing_page_scaffolder.py: add TSX + Tailwind output format (--format tsx) matching the SKILL.md promise of Next.js/React components, with 4 design styles (dark-saas, clean-minimal, bold-startup, enterprise); TSX is now the default, HTML preserved via --format html
  - Rewrite README.md: fix stale counts (was 5 skills/15+ tools, now accurately shows 8 skills/9 tools), remove 7 ghost scripts that never existed (sprint_planner.py, velocity_tracker.py, etc.)
  - Fix tech-debt.md description (score → prioritize)

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* release: v2.1.2 — landing page TSX output, brand voice integration, docs update

  - Landing page generator defaults to Next.js TSX + Tailwind CSS (4 design styles)
  - Brand voice analyzer integrated into landing page generation workflow
  - CHANGELOG, CLAUDE.md, README.md updated for v2.1.2
  - All 13 plugin.json + marketplace.json bumped to 2.1.2
  - Gemini/Codex skill indexes re-synced
  - Backward compatible: --format html preserved, no breaking changes

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
#!/usr/bin/env python3
"""
SEO Content Optimizer - Analyzes and optimizes content for SEO
"""

import re
from typing import Dict, List, Optional, Union
class SEOOptimizer:
    def __init__(self):
        # Common stop words to filter out of keyword extraction
        self.stop_words = {
            'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for',
            'of', 'with', 'by', 'from', 'as', 'is', 'was', 'are', 'were', 'be',
            'been', 'being', 'have', 'has', 'had', 'do', 'does', 'did', 'will',
            'would', 'could', 'should', 'may', 'might', 'must', 'can', 'shall'
        }

        # SEO best-practice targets (ranges are (min, max))
        self.best_practices = {
            'title_length': (50, 60),
            'meta_description_length': (150, 160),
            'url_length': (50, 60),
            'paragraph_length': (40, 150),
            'heading_keyword_placement': True,
            'keyword_density': (0.01, 0.03)  # 1-3%
        }
    def analyze(self, content: str, target_keyword: Optional[str] = None,
                secondary_keywords: Optional[List[str]] = None) -> Dict:
        """Analyze content for SEO optimization"""

        analysis = {
            'content_length': len(content.split()),
            'keyword_analysis': {},
            'structure_analysis': self._analyze_structure(content),
            'readability': self._analyze_readability(content),
            'meta_suggestions': {},
            'optimization_score': 0,
            'recommendations': []
        }

        # Keyword analysis
        if target_keyword:
            analysis['keyword_analysis'] = self._analyze_keywords(
                content, target_keyword, secondary_keywords or []
            )

        # Generate meta suggestions
        analysis['meta_suggestions'] = self._generate_meta_suggestions(
            content, target_keyword
        )

        # Calculate optimization score
        analysis['optimization_score'] = self._calculate_seo_score(analysis)

        # Generate recommendations
        analysis['recommendations'] = self._generate_recommendations(analysis)

        return analysis
    def _analyze_keywords(self, content: str, primary: str,
                          secondary: List[str]) -> Dict:
        """Analyze keyword usage and density"""
        content_lower = content.lower()
        word_count = len(content.split())

        results = {
            'primary_keyword': {
                'keyword': primary,
                'count': content_lower.count(primary.lower()),
                'density': 0,
                'in_title': False,
                'in_headings': False,
                'in_first_paragraph': False
            },
            'secondary_keywords': [],
            'lsi_keywords': []
        }

        # Calculate primary keyword metrics
        if word_count > 0:
            results['primary_keyword']['density'] = (
                results['primary_keyword']['count'] / word_count
            )

        # Check keyword placement in headings; the first H1 is treated as the
        # title (these fields were previously declared but never populated)
        heading_lines = [l for l in content.split('\n') if l.lstrip().startswith('#')]
        results['primary_keyword']['in_headings'] = any(
            primary.lower() in l.lower() for l in heading_lines
        )
        h1_lines = [l for l in heading_lines if l.lstrip().startswith('# ')]
        if h1_lines:
            results['primary_keyword']['in_title'] = primary.lower() in h1_lines[0].lower()

        # Check keyword placement in the opening paragraph
        first_para = content.split('\n\n')[0] if '\n\n' in content else content[:200]
        results['primary_keyword']['in_first_paragraph'] = (
            primary.lower() in first_para.lower()
        )

        # Analyze secondary keywords
        for keyword in secondary:
            count = content_lower.count(keyword.lower())
            results['secondary_keywords'].append({
                'keyword': keyword,
                'count': count,
                'density': count / word_count if word_count > 0 else 0
            })

        # Extract potential LSI keywords
        results['lsi_keywords'] = self._extract_lsi_keywords(content, primary)

        return results
    def _analyze_structure(self, content: str) -> Dict:
        """Analyze content structure for SEO"""
        lines = content.split('\n')

        structure = {
            'headings': {'h1': 0, 'h2': 0, 'h3': 0, 'total': 0},
            'paragraphs': 0,
            'lists': 0,
            'images': 0,
            'links': {'internal': 0, 'external': 0},
            'avg_paragraph_length': 0
        }

        paragraphs = []
        current_para = []

        for line in lines:
            # Count headings
            if line.startswith('# '):
                structure['headings']['h1'] += 1
                structure['headings']['total'] += 1
            elif line.startswith('## '):
                structure['headings']['h2'] += 1
                structure['headings']['total'] += 1
            elif line.startswith('### '):
                structure['headings']['h3'] += 1
                structure['headings']['total'] += 1

            # Count list items
            if line.strip().startswith(('- ', '* ', '1. ')):
                structure['lists'] += 1

            # Count markdown links (internal start with /, external with http)
            structure['links']['internal'] += len(re.findall(r'\[.*?\]\(/.*?\)', line))
            structure['links']['external'] += len(re.findall(r'\[.*?\]\(https?://.*?\)', line))

            # Track paragraphs (blank lines and headings close a paragraph)
            if line.strip() and not line.startswith('#'):
                current_para.append(line)
            elif current_para:
                paragraphs.append(' '.join(current_para))
                current_para = []

        if current_para:
            paragraphs.append(' '.join(current_para))

        structure['paragraphs'] = len(paragraphs)

        if paragraphs:
            avg_length = sum(len(p.split()) for p in paragraphs) / len(paragraphs)
            structure['avg_paragraph_length'] = round(avg_length, 1)

        return structure
    def _analyze_readability(self, content: str) -> Dict:
        """Analyze content readability"""
        # Drop empty fragments so trailing punctuation doesn't skew the average
        sentences = [s for s in re.split(r'[.!?]+', content) if s.strip()]
        words = content.split()

        if not sentences or not words:
            return {'score': 0, 'level': 'Unknown', 'avg_sentence_length': 0}

        avg_sentence_length = len(words) / len(sentences)

        # Simple readability scoring based on average sentence length
        if avg_sentence_length < 15:
            level = 'Easy'
            score = 90
        elif avg_sentence_length < 20:
            level = 'Moderate'
            score = 70
        elif avg_sentence_length < 25:
            level = 'Difficult'
            score = 50
        else:
            level = 'Very Difficult'
            score = 30

        return {
            'score': score,
            'level': level,
            'avg_sentence_length': round(avg_sentence_length, 1)
        }
    def _extract_lsi_keywords(self, content: str, primary_keyword: str) -> List[str]:
        """Extract potential LSI (semantically related) keywords"""
        words = re.findall(r'\b[a-z]+\b', content.lower())
        word_freq = {}

        # Count word frequencies, ignoring stop words and very short words
        for word in words:
            if word not in self.stop_words and len(word) > 3:
                word_freq[word] = word_freq.get(word, 0) + 1

        # Sort by frequency and return the top related terms
        sorted_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)

        # Filter out the primary keyword and return up to 10 terms
        lsi_keywords = []
        for word, count in sorted_words:
            if word != primary_keyword.lower() and count > 1:
                lsi_keywords.append(word)
            if len(lsi_keywords) >= 10:
                break

        return lsi_keywords
    def _generate_meta_suggestions(self, content: str, keyword: Optional[str] = None) -> Dict:
        """Generate SEO meta tag suggestions"""
        # Use the first sentence as the base for the description
        sentences = re.split(r'[.!?]+', content)
        first_sentence = sentences[0].strip() if sentences else content[:160]

        suggestions = {
            'title': '',
            'meta_description': '',
            'url_slug': '',
            'og_title': '',
            'og_description': ''
        }

        if keyword:
            # Title suggestion, truncated to the ~60-character limit
            suggestions['title'] = f"{keyword.title()} - Complete Guide"
            if len(suggestions['title']) > 60:
                suggestions['title'] = keyword.title()[:57] + "..."

            # Meta description, truncated to the ~160-character limit
            desc_base = f"Learn everything about {keyword}. {first_sentence}"
            if len(desc_base) > 160:
                desc_base = desc_base[:157] + "..."
            suggestions['meta_description'] = desc_base

            # URL slug: lowercase, hyphen-separated
            suggestions['url_slug'] = re.sub(r'[^a-z0-9-]+', '-',
                                             keyword.lower()).strip('-')

            # Open Graph tags mirror the title and description
            suggestions['og_title'] = suggestions['title']
            suggestions['og_description'] = suggestions['meta_description']

        return suggestions
    def _calculate_seo_score(self, analysis: Dict) -> int:
        """Calculate overall SEO optimization score"""
        score = 0
        max_score = 100

        # Content length scoring (20 points)
        if 300 <= analysis['content_length'] <= 2500:
            score += 20
        elif 200 <= analysis['content_length'] < 300:
            score += 10
        elif analysis['content_length'] > 2500:
            score += 15

        # Keyword optimization (30 points)
        if analysis['keyword_analysis']:
            kw_data = analysis['keyword_analysis']['primary_keyword']

            # Density scoring
            if 0.01 <= kw_data['density'] <= 0.03:
                score += 15
            elif 0.005 <= kw_data['density'] < 0.01:
                score += 8

            # Placement scoring
            if kw_data['in_first_paragraph']:
                score += 10
            if kw_data.get('in_headings'):
                score += 5

        # Structure scoring (25 points)
        struct = analysis['structure_analysis']
        if struct['headings']['total'] > 0:
            score += 10
        if struct['paragraphs'] >= 3:
            score += 10
        if struct['links']['internal'] > 0 or struct['links']['external'] > 0:
            score += 5

        # Readability scoring (25 points)
        readability_score = analysis['readability']['score']
        score += int(readability_score * 0.25)

        return min(score, max_score)
    def _generate_recommendations(self, analysis: Dict) -> List[str]:
        """Generate SEO improvement recommendations"""
        recommendations = []

        # Content length recommendations
        if analysis['content_length'] < 300:
            recommendations.append(
                f"Increase content length to at least 300 words (currently {analysis['content_length']})"
            )
        elif analysis['content_length'] > 3000:
            recommendations.append(
                "Consider breaking long content into multiple pages or adding a table of contents"
            )

        # Keyword recommendations
        if analysis['keyword_analysis']:
            kw_data = analysis['keyword_analysis']['primary_keyword']

            if kw_data['density'] < 0.01:
                recommendations.append(
                    f"Increase keyword density for '{kw_data['keyword']}' (currently {kw_data['density']:.2%})"
                )
            elif kw_data['density'] > 0.03:
                recommendations.append(
                    f"Reduce keyword density to avoid over-optimization (currently {kw_data['density']:.2%})"
                )

            if not kw_data['in_first_paragraph']:
                recommendations.append(
                    "Include primary keyword in the first paragraph"
                )

        # Structure recommendations
        struct = analysis['structure_analysis']
        if struct['headings']['total'] == 0:
            recommendations.append("Add headings (H1, H2, H3) to improve content structure")
        if struct['links']['internal'] == 0:
            recommendations.append("Add internal links to related content")
        if struct['avg_paragraph_length'] > 150:
            recommendations.append("Break up long paragraphs for better readability")

        # Readability recommendations
        if analysis['readability']['avg_sentence_length'] > 20:
            recommendations.append("Simplify sentences for better readability")

        return recommendations
def optimize_content(content: str, keyword: Optional[str] = None,
                     secondary_keywords: Optional[Union[str, List[str]]] = None) -> str:
    """Analyze content and return a formatted SEO report"""
    optimizer = SEOOptimizer()

    # Accept secondary keywords as a comma-separated string or a list
    if secondary_keywords and isinstance(secondary_keywords, str):
        secondary_keywords = [kw.strip() for kw in secondary_keywords.split(',')]

    results = optimizer.analyze(content, keyword, secondary_keywords)

    # Format output
    output = [
        "=== SEO Content Analysis ===",
        f"Overall SEO Score: {results['optimization_score']}/100",
        f"Content Length: {results['content_length']} words",
        "",
        "Content Structure:",
        f"  Headings: {results['structure_analysis']['headings']['total']}",
        f"  Paragraphs: {results['structure_analysis']['paragraphs']}",
        f"  Avg Paragraph Length: {results['structure_analysis']['avg_paragraph_length']} words",
        f"  Internal Links: {results['structure_analysis']['links']['internal']}",
        f"  External Links: {results['structure_analysis']['links']['external']}",
        "",
        f"Readability: {results['readability']['level']} (Score: {results['readability']['score']})",
        ""
    ]

    if results['keyword_analysis']:
        kw = results['keyword_analysis']['primary_keyword']
        output.extend([
            "Keyword Analysis:",
            f"  Primary Keyword: {kw['keyword']}",
            f"  Count: {kw['count']}",
            f"  Density: {kw['density']:.2%}",
            f"  In First Paragraph: {'Yes' if kw['in_first_paragraph'] else 'No'}",
            ""
        ])

        if results['keyword_analysis']['lsi_keywords']:
            output.append("  Related Keywords Found:")
            for lsi in results['keyword_analysis']['lsi_keywords'][:5]:
                output.append(f"    • {lsi}")
            output.append("")

    if results['meta_suggestions']:
        output.extend([
            "Meta Tag Suggestions:",
            f"  Title: {results['meta_suggestions']['title']}",
            f"  Description: {results['meta_suggestions']['meta_description']}",
            f"  URL Slug: {results['meta_suggestions']['url_slug']}",
            ""
        ])

    output.append("Recommendations:")
    for rec in results['recommendations']:
        output.append(f"  • {rec}")

    return '\n'.join(output)
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        description="SEO Content Optimizer - Analyzes and optimizes content for SEO"
    )
    parser.add_argument(
        "file", nargs="?", default=None,
        help="Text file to analyze"
    )
    parser.add_argument(
        "--keyword", "-k", default=None,
        help="Primary keyword to optimize for"
    )
    parser.add_argument(
        "--secondary", "-s", default=None,
        help="Comma-separated secondary keywords"
    )
    args = parser.parse_args()

    if args.file:
        with open(args.file, 'r', encoding='utf-8') as f:
            content = f.read()
        print(optimize_content(content, args.keyword, args.secondary))
    else:
        # Let argparse print the canonical usage line instead of a
        # hand-maintained duplicate
        parser.print_usage()
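A minimal programmatic invocation, for reference (assuming the script is saved as seo_optimizer.py; the sample text and keywords are illustrative):

from seo_optimizer import optimize_content

sample = """# Markdown Checklists

Markdown checklists keep drafts organized and reviews fast.

## Why it helps

Short paragraphs and [internal links](/docs/style) improve readability.
"""

print(optimize_content(sample, keyword="markdown checklists",
                       secondary_keywords="drafts, reviews"))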