feat(bundles): add editorial bundle plugins
This commit is contained in:
@@ -0,0 +1,33 @@
{
  "name": "antigravity-bundle-mobile-developer",
  "version": "8.10.0",
  "description": "Install the \"Mobile Developer\" editorial skill bundle from Antigravity Awesome Skills.",
  "author": {
    "name": "sickn33 and contributors",
    "url": "https://github.com/sickn33/antigravity-awesome-skills"
  },
  "homepage": "https://github.com/sickn33/antigravity-awesome-skills",
  "repository": "https://github.com/sickn33/antigravity-awesome-skills",
  "license": "MIT",
  "keywords": [
    "codex",
    "skills",
    "bundle",
    "mobile-developer",
    "productivity"
  ],
  "skills": "./skills/",
  "interface": {
    "displayName": "Mobile Developer",
    "shortDescription": "Specialized Packs · 5 curated skills",
    "longDescription": "For iOS, Android, and cross-platform apps. Covers Mobile Developer, React Native Architecture, and 3 more skills.",
    "developerName": "sickn33 and contributors",
    "category": "Specialized Packs",
    "capabilities": [
      "Interactive",
      "Write"
    ],
    "websiteURL": "https://github.com/sickn33/antigravity-awesome-skills",
    "brandColor": "#111827"
  }
}
@@ -0,0 +1,281 @@
# How to Use the App Store Optimization Skill

Hey Claude—I just added the "app-store-optimization" skill. Can you help me optimize my app's presence on the App Store and Google Play?

## Example Invocations

### Keyword Research

**Example 1: Basic Keyword Research**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you research the best keywords for my productivity app? I'm targeting professionals who need task management and team collaboration features.
```

**Example 2: Competitive Keyword Analysis**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze keywords that Todoist, Asana, and Monday.com are using? I want to find gaps and opportunities for my project management app.
```
### Metadata Optimization

**Example 3: Optimize App Title**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you optimize my app title for the Apple App Store? My app is called "TaskFlow" and I want to rank for "task manager", "productivity", and "team collaboration". The title needs to be under 30 characters.
```

**Example 4: Full Metadata Package**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you create optimized metadata for both Apple App Store and Google Play Store? Here's my app info:
- Name: TaskFlow
- Category: Productivity
- Key features: AI task prioritization, team collaboration, calendar integration
- Target keywords: task manager, productivity app, team tasks
```
### Competitor Analysis

**Example 5: Analyze Top Competitors**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze the ASO strategies of the top 5 productivity apps in the App Store? I want to understand their title strategies, keyword usage, and visual asset approaches.
```

**Example 6: Identify Competitive Gaps**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you compare my app's ASO performance against competitors and identify what I'm missing? Here's my current metadata: [paste metadata]
```
### ASO Score Calculation

**Example 7: Calculate Overall ASO Health**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you calculate my app's ASO health score? Here are my metrics:
- Average rating: 4.2 stars
- Total ratings: 3,500
- Keywords in top 10: 3
- Keywords in top 50: 12
- Conversion rate: 4.5%
```

**Example 8: Identify Improvement Areas**
```
Hey Claude—I just added the "app-store-optimization" skill. My ASO score is 62/100. Can you tell me which areas I should focus on first to improve my rankings and downloads?
```
### A/B Testing

**Example 9: Plan Icon Test**
```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test two different app icons. My current conversion rate is 5%. Can you help me plan the test, calculate required sample size, and determine how long to run it?
```

**Example 10: Analyze Test Results**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze my A/B test results?
- Variant A (control): 2,500 visitors, 125 installs
- Variant B (new icon): 2,500 visitors, 150 installs
Is this statistically significant? Should I implement variant B?
```
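Example 10 reduces to a two-proportion z-test. As a standalone sanity check (this is an illustrative sketch, not the skill's actual code; the function name is an assumption), the math needs only the standard library:

```python
import math

def two_proportion_z_test(installs_a, visitors_a, installs_b, visitors_b):
    """Two-sided z-test for a difference in install conversion rates."""
    p_a = installs_a / visitors_a
    p_b = installs_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (installs_a + installs_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (built from math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example 10's numbers: 125/2,500 (5.0%) vs. 150/2,500 (6.0%).
z, p = two_proportion_z_test(125, 2500, 150, 2500)
```

At these numbers z is roughly 1.55 and the p-value roughly 0.12, so the observed lift from 5.0% to 6.0% would not clear the usual 95% significance bar; the test would need more traffic before implementing variant B.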
### Localization

**Example 11: Plan Localization Strategy**
```
Hey Claude—I just added the "app-store-optimization" skill. I currently only have English metadata. Which markets should I localize for first? I'm a bootstrapped startup with a moderate budget.
```

**Example 12: Translate Metadata**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you help me translate my app metadata to Spanish for the Mexico market? Here's my English metadata: [paste metadata]. Check if it fits within character limits.
```
### Review Analysis

**Example 13: Analyze User Reviews**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze my recent reviews and tell me:
- Overall sentiment (positive/negative ratio)
- Most common complaints
- Most requested features
- Bugs that need immediate fixing
```

**Example 14: Generate Review Response Templates**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you create professional response templates for:
- Users reporting crashes
- Feature requests
- Positive 5-star reviews
- General complaints
```
### Launch Planning

**Example 15: Pre-Launch Checklist**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you generate a comprehensive pre-launch checklist for both Apple App Store and Google Play Store? My launch date is December 1, 2025.
```

**Example 16: Optimize Launch Timing**
```
Hey Claude—I just added the "app-store-optimization" skill. What's the best day and time to launch my fitness app? I want to maximize visibility and downloads in the first week.
```

**Example 17: Plan Seasonal Campaign**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you identify seasonal opportunities for my fitness app? It's currently October—what campaigns should I run for the next 6 months?
```
## What to Provide

### For Keyword Research
- App name and category
- Target audience description
- Key features and unique value proposition
- Competitor apps (optional)
- Geographic markets to target

### For Metadata Optimization
- Current app name
- Platform (Apple, Google, or both)
- Target keywords (prioritized list)
- Key features and benefits
- Target audience
- Current metadata (for optimization)

### For Competitor Analysis
- Your app category
- List of competitor app names or IDs
- Platform (Apple or Google)
- Specific aspects to analyze (keywords, visuals, ratings)

### For ASO Score Calculation
- Metadata quality metrics (title length, description length, keyword density)
- Rating data (average rating, total ratings, recent ratings)
- Keyword rankings (top 10, top 50, top 100 counts)
- Conversion metrics (impression-to-install rate, downloads)

### For A/B Testing
- Test type (icon, screenshot, title, description)
- Control variant details
- Test variant details
- Baseline conversion rate
- For results analysis: visitor and conversion counts for both variants

### For Localization
- Current market and language
- Budget level (low, medium, high)
- Target number of markets
- Current metadata text for translation

### For Review Analysis
- Recent reviews (text, rating, date)
- Platform (Apple or Google)
- Time period to analyze
- Specific focus (bugs, features, sentiment)

### For Launch Planning
- Platform (Apple, Google, or both)
- Target launch date
- App category
- App information (name, features, target audience)
## What You'll Get

### Keyword Research Output
- Prioritized keyword list with search volume estimates
- Competition level analysis
- Relevance scores
- Long-tail keyword opportunities
- Strategic recommendations

### Metadata Optimization Output
- Optimized titles (multiple options)
- Optimized descriptions (short and full)
- Keyword field optimization (Apple)
- Character count validation
- Keyword density analysis
- Before/after comparison

### Competitor Analysis Output
- Competitors ranked by ASO strength
- Common keyword patterns
- Keyword gaps and opportunities
- Visual asset assessment
- Best practices identified
- Actionable recommendations

### ASO Score Output
- Overall score (0-100)
- Breakdown by category (metadata, ratings, keywords, conversion)
- Strengths and weaknesses
- Prioritized action items
- Expected impact of improvements

### A/B Test Output
- Test design with hypothesis
- Required sample size calculation
- Duration estimates
- Statistical significance analysis
- Implementation recommendations
- Learnings and insights

### Localization Output
- Prioritized target markets
- Estimated translation costs
- ROI projections
- Character limit validation for each language
- Cultural adaptation recommendations
- Phased implementation plan

### Review Analysis Output
- Sentiment distribution (positive/neutral/negative)
- Common themes and topics
- Top issues requiring fixes
- Most requested features
- Response templates
- Trend analysis over time

### Launch Planning Output
- Platform-specific checklists (Apple, Google, Universal)
- Timeline with milestones
- Compliance validation
- Optimal launch timing recommendations
- Seasonal campaign opportunities
- Update cadence planning
## Tips for Best Results

1. **Be Specific**: Provide as much detail about your app as possible
2. **Include Context**: Share your goals (increase downloads, improve ranking, boost conversion)
3. **Provide Data**: Real metrics enable more accurate analysis
4. **Iterate**: Start with keyword research, then optimize metadata, then test
5. **Track Results**: Monitor changes after implementing recommendations
6. **Stay Compliant**: Always verify recommendations against current App Store/Play Store guidelines
7. **Test First**: Use A/B testing before making major metadata changes
8. **Localize Strategically**: Start with the highest-ROI markets first
9. **Respond to Reviews**: Use the provided templates to engage with users
10. **Plan Ahead**: Use launch checklists and timelines to avoid last-minute rushes
## Common Workflows

### New App Launch
Keyword research → Competitor analysis → Metadata optimization → Pre-launch checklist → Launch timing optimization

### Improving an Existing App
ASO score calculation → Identify gaps → Metadata optimization → A/B testing → Review analysis → Implement changes

### International Expansion
Localization planning → Market prioritization → Metadata translation → ROI analysis → Phased rollout

### Ongoing Optimization
Monthly keyword ranking tracking → Quarterly metadata updates → Continuous A/B testing → Review monitoring → Seasonal campaigns
## Need Help?

If you need clarification on any aspect of ASO or want to combine multiple analyses, just ask! For example:

```
Hey Claude—I just added the "app-store-optimization" skill. Can you create a complete ASO strategy for my new productivity app? I need keyword research, optimized metadata for both stores, a pre-launch checklist, and launch timing recommendations.
```

The skill can handle comprehensive, multi-phase ASO projects as well as specific tactical optimizations.
@@ -0,0 +1,430 @@
# App Store Optimization (ASO) Skill

**Version**: 1.0.0
**Last Updated**: November 7, 2025
**Author**: Claude Skills Factory

## Overview

A comprehensive App Store Optimization (ASO) skill for researching, optimizing, and tracking mobile app performance on the Apple App Store and Google Play Store. It helps app developers and marketers maximize their app's visibility, downloads, and conversions in competitive app marketplaces.

## What This Skill Does

This skill provides end-to-end ASO capabilities across seven key areas:

1. **Research & Analysis**: Keyword research, competitor analysis, market trends, review sentiment
2. **Metadata Optimization**: Title, description, keywords with platform-specific character limits
3. **Conversion Optimization**: A/B testing framework, visual asset optimization
4. **Rating & Review Management**: Sentiment analysis, response strategies, issue identification
5. **Launch & Update Strategies**: Pre-launch checklists, timing optimization, update planning
6. **Analytics & Tracking**: ASO scoring, keyword rankings, performance benchmarking
7. **Localization**: Multi-language strategy, translation management, ROI analysis
## Key Features

### Comprehensive Keyword Research
- Search volume and competition analysis
- Long-tail keyword discovery
- Competitor keyword extraction
- Keyword difficulty scoring
- Strategic prioritization

### Platform-Specific Metadata Optimization
- **Apple App Store**:
  - Title (30 chars)
  - Subtitle (30 chars)
  - Promotional Text (170 chars)
  - Description (4000 chars)
  - Keywords field (100 chars)
- **Google Play Store**:
  - Title (50 chars)
  - Short Description (80 chars)
  - Full Description (4000 chars)
- Character limit validation
- Keyword density analysis
- Multiple optimization strategies

### Competitor Intelligence
- Automated competitor discovery
- Metadata strategy analysis
- Visual asset assessment
- Gap identification
- Competitive positioning

### ASO Health Scoring
- 0-100 overall score
- Four-category breakdown (Metadata, Ratings, Keywords, Conversion)
- Strengths and weaknesses identification
- Prioritized action recommendations
- Expected impact estimates

### Scientific A/B Testing
- Test design and hypothesis formulation
- Sample size calculation
- Statistical significance analysis
- Duration estimation
- Implementation recommendations

### Global Localization
- Market prioritization (Tier 1/2/3)
- Translation cost estimation
- Character limit adaptation by language
- Cultural keyword considerations
- ROI analysis

### Review Intelligence
- Sentiment analysis
- Common theme extraction
- Bug and issue identification
- Feature request clustering
- Professional response templates

### Launch Planning
- Platform-specific checklists
- Timeline generation
- Compliance validation
- Optimal timing recommendations
- Seasonal campaign planning
## Python Modules

This skill includes eight Python modules:

### 1. keyword_analyzer.py
**Purpose**: Analyzes keywords for search volume, competition, and relevance

**Key Functions**:
- `analyze_keyword()`: Single keyword analysis
- `compare_keywords()`: Multi-keyword comparison and ranking
- `find_long_tail_opportunities()`: Generate long-tail variations
- `calculate_keyword_density()`: Analyze keyword usage in text
- `extract_keywords_from_text()`: Extract keywords from reviews/descriptions
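To illustrate what a density calculation involves, here is a minimal phrase-aware sketch (an assumption for illustration only; the module's real `calculate_keyword_density()` signature and scoring may differ):

```python
import re

def calculate_keyword_density(text, keywords):
    """Share of total words accounted for by each target keyword phrase."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    total = len(words)
    density = {}
    for kw in keywords:
        kw_tokens = kw.lower().split()
        # Count occurrences of the phrase in the token stream.
        hits = sum(
            1 for i in range(total - len(kw_tokens) + 1)
            if words[i:i + len(kw_tokens)] == kw_tokens
        )
        density[kw] = hits * len(kw_tokens) / total if total else 0.0
    return density

d = calculate_keyword_density(
    "TaskFlow is a task manager for teams. The task manager syncs tasks.",
    ["task manager", "teams"],
)
```

Here "task manager" appears twice in a 12-word description, so 4 of 12 words (about 33%) are keyword words, which is the kind of over-stuffing the analyzer is meant to flag.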
### 2. metadata_optimizer.py
**Purpose**: Optimizes titles, descriptions, and keywords with character limit validation

**Key Functions**:
- `optimize_title()`: Generate optimal title options
- `optimize_description()`: Create conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure platform compliance
- `calculate_keyword_density()`: Analyze keyword integration
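Character-limit validation is mechanical once the platform limits are tabulated. A minimal sketch, using the limits documented elsewhere in this README (the `LIMITS` table, field names, and return shape are assumptions, not the module's actual API):

```python
# Platform limits as documented in the Key Features section above.
LIMITS = {
    "apple": {"title": 30, "subtitle": 30, "promotional_text": 170,
              "description": 4000, "keywords": 100},
    "google": {"title": 50, "short_description": 80, "full_description": 4000},
}

def validate_character_limits(platform, metadata):
    """Return {field: (length, limit, within_limit)} for each known field."""
    limits = LIMITS[platform]
    return {
        field: (len(value), limits[field], len(value) <= limits[field])
        for field, value in metadata.items()
        if field in limits
    }

report = validate_character_limits("apple", {"title": "FitFlow - Home Workouts"})
```

"FitFlow - Home Workouts" is 23 characters, so it passes Apple's 30-character title limit; the same string would also fit Google's 50-character limit.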
### 3. competitor_analyzer.py
**Purpose**: Analyzes competitor ASO strategies

**Key Functions**:
- `analyze_competitor()`: Single competitor deep-dive
- `compare_competitors()`: Multi-competitor analysis
- `identify_gaps()`: Find competitive opportunities
- `_calculate_competitive_strength()`: Score competitor ASO quality
### 4. aso_scorer.py
**Purpose**: Calculates a comprehensive ASO health score

**Key Functions**:
- `calculate_overall_score()`: 0-100 ASO health score
- `score_metadata_quality()`: Evaluate metadata optimization
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate conversion rates
- `generate_recommendations()`: Prioritized improvement actions
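The 0-100 score with four categories suggests a simple additive 4 × 25 model, which is how it is presented in Example 3 later in this README. A hedged sketch (the real module may weight or normalize categories differently):

```python
def calculate_overall_score(metadata, ratings, keywords, conversion):
    """Combine four 0-25 category scores into a 0-100 ASO health score."""
    categories = {
        "metadata": metadata,
        "ratings": ratings,
        "keywords": keywords,
        "conversion": conversion,
    }
    for name, score in categories.items():
        if not 0 <= score <= 25:
            raise ValueError(f"{name} score must be between 0 and 25")
    overall = sum(categories.values())
    # The weakest category is the natural first target for improvement.
    weakest = min(categories, key=categories.get)
    return overall, weakest

score, focus_area = calculate_overall_score(18, 20, 12, 14)
```

With category scores of 18/20/12/14 the overall score is 64, and "keywords" is flagged as the area to work on first.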
### 5. ab_test_planner.py
**Purpose**: Plans and tracks A/B tests for ASO elements

**Key Functions**:
- `design_test()`: Create test hypothesis and structure
- `calculate_sample_size()`: Determine required visitors
- `calculate_significance()`: Assess statistical validity
- `track_test_results()`: Monitor ongoing tests
- `generate_test_report()`: Create comprehensive test reports
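Sample-size math for a two-proportion test can be sketched with the standard normal approximation (illustrative only; the module's actual implementation and defaults are not documented here):

```python
import math

def calculate_sample_size(baseline_rate, min_detectable_effect,
                          alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-proportion conversion test.

    Uses z-values for the common alpha/power choices hard-coded below.
    """
    z_alpha = {0.10: 1.645, 0.05: 1.960, 0.01: 2.576}[alpha]  # two-sided
    z_beta = {0.80: 0.842, 0.90: 1.282}[power]
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (min_detectable_effect ** 2)
    return math.ceil(n)

# Detecting a lift from 5.0% to 6.0% at 95% confidence, 80% power.
n_per_variant = calculate_sample_size(0.05, 0.01)
```

For that scenario the formula gives roughly 8,200 visitors per variant, which is why the Limitations section notes that A/B testing requires significant traffic.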
### 6. localization_helper.py
**Purpose**: Manages multi-language ASO optimization

**Key Functions**:
- `identify_target_markets()`: Prioritize localization markets
- `translate_metadata()`: Adapt metadata for languages
- `adapt_keywords()`: Cultural keyword adaptation
- `validate_translations()`: Character limit validation
- `calculate_localization_roi()`: Estimate investment returns
### 7. review_analyzer.py
**Purpose**: Analyzes user reviews for actionable insights

**Key Functions**:
- `analyze_sentiment()`: Calculate sentiment distribution
- `extract_common_themes()`: Identify frequent topics
- `identify_issues()`: Surface bugs and problems
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor changes over time
- `generate_response_templates()`: Create review responses
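A rating-based proxy is the simplest way to get a sentiment distribution. A sketch under that assumption (the real analyzer presumably also mines the review text, so treat this as illustrative only):

```python
def analyze_sentiment(reviews):
    """Classify reviews by star rating and return the distribution.

    Proxy rule: 4-5 stars positive, 3 neutral, 1-2 negative.
    """
    buckets = {"positive": 0, "neutral": 0, "negative": 0}
    for review in reviews:
        stars = review["rating"]
        if stars >= 4:
            buckets["positive"] += 1
        elif stars == 3:
            buckets["neutral"] += 1
        else:
            buckets["negative"] += 1
    total = len(reviews) or 1  # avoid division by zero on empty input
    return {label: count / total for label, count in buckets.items()}

dist = analyze_sentiment([
    {"rating": 5, "text": "Love it"},
    {"rating": 4, "text": "Solid app"},
    {"rating": 2, "text": "Crashes on launch"},
    {"rating": 3, "text": "It's fine"},
])
```

On this toy input the distribution is 50% positive, 25% neutral, 25% negative, the positive/negative ratio asked for in Example 13 of HOW_TO_USE.md.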
### 8. launch_checklist.py
**Purpose**: Generates comprehensive launch and update checklists

**Key Functions**:
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check guidelines compliance
- `create_update_plan()`: Plan update cadence
- `optimize_launch_timing()`: Recommend launch dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities
## Installation

### For Claude Code (Desktop/CLI)

#### Project-Level Installation
```bash
# Copy skill folder to project
cp -r app-store-optimization /path/to/your/project/.claude/skills/

# Claude will auto-load the skill when working in this project
```

#### User-Level Installation (Available in All Projects)
```bash
# Copy skill folder to user-level skills
cp -r app-store-optimization ~/.claude/skills/

# Claude will load this skill in all your projects
```

### For Claude Apps (Browser)

1. Use the `skill-creator` skill to import the skill
2. Or manually import via the Claude Apps interface

### Verification

To verify installation:
```bash
# Check if the skill folder exists
ls ~/.claude/skills/app-store-optimization/

# You should see:
# SKILL.md
# keyword_analyzer.py
# metadata_optimizer.py
# competitor_analyzer.py
# aso_scorer.py
# ab_test_planner.py
# localization_helper.py
# review_analyzer.py
# launch_checklist.py
# sample_input.json
# expected_output.json
# HOW_TO_USE.md
# README.md
```
## Usage Examples

### Example 1: Complete Keyword Research

```
Hey Claude—I just added the "app-store-optimization" skill. Can you research keywords for my fitness app? I'm targeting people who want home workouts, yoga, and meal planning. Analyze top competitors like Nike Training Club and Peloton.
```

**What Claude will do**:
- Use `keyword_analyzer.py` to research keywords
- Use `competitor_analyzer.py` to analyze Nike Training Club and Peloton
- Provide a prioritized keyword list with search volumes and competition levels
- Identify gaps and long-tail opportunities
- Recommend primary keywords for the title and secondary keywords for the description

### Example 2: Optimize App Store Metadata

```
Hey Claude—I just added the "app-store-optimization" skill. Optimize my app's metadata for both Apple App Store and Google Play Store:
- App: FitFlow
- Category: Health & Fitness
- Features: AI workout plans, nutrition tracking, progress photos
- Keywords: fitness app, workout planner, home fitness
```

**What Claude will do**:
- Use `metadata_optimizer.py` to create optimized titles (multiple options)
- Generate platform-specific descriptions (short and full)
- Optimize Apple's 100-character keyword field
- Validate all character limits
- Calculate keyword density
- Provide a before/after comparison
### Example 3: Calculate ASO Health Score

```
Hey Claude—I just added the "app-store-optimization" skill. Calculate my app's ASO score:
- Average rating: 4.3 stars (8,200 ratings)
- Keywords in top 10: 4
- Keywords in top 50: 15
- Conversion rate: 3.8%
- Title: "FitFlow - Home Workouts"
- Description: 1,500 characters with 3 keyword mentions
```

**What Claude will do**:
- Use `aso_scorer.py` to calculate the overall score (0-100)
- Break it down by category (Metadata: X/25, Ratings: X/25, Keywords: X/25, Conversion: X/25)
- Identify strengths and weaknesses
- Generate prioritized recommendations
- Estimate the impact of improvements

### Example 4: A/B Test Planning

```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon. My current conversion rate is 4.2%. How many visitors do I need and how long should I run the test?
```

**What Claude will do**:
- Use `ab_test_planner.py` to design the test
- Calculate the required sample size (based on the minimum detectable effect)
- Estimate test duration for low/medium/high traffic scenarios
- Provide the test structure and success metrics
- Explain how to analyze results
### Example 5: Review Sentiment Analysis

```
Hey Claude—I just added the "app-store-optimization" skill. Analyze my last 500 reviews and tell me:
- Overall sentiment
- Most common complaints
- Top feature requests
- Bugs needing immediate fixes
```

**What Claude will do**:
- Use `review_analyzer.py` to process the reviews
- Calculate the sentiment distribution
- Extract common themes
- Identify and prioritize issues
- Cluster feature requests
- Generate response templates

### Example 6: Pre-Launch Checklist

```
Hey Claude—I just added the "app-store-optimization" skill. Generate a complete pre-launch checklist for both app stores. My launch date is March 15, 2026.
```

**What Claude will do**:
- Use `launch_checklist.py` to generate checklists
- Create an Apple App Store checklist (metadata, assets, technical, legal)
- Create a Google Play Store checklist (metadata, assets, technical, legal)
- Add a universal checklist (marketing, QA, support)
- Generate a timeline with milestones
- Calculate completion percentage
## Best Practices

### Keyword Research
1. Start with 20-30 seed keywords
2. Analyze the top 5 competitors in your category
3. Balance high-volume and long-tail keywords
4. Prioritize relevance over search volume
5. Update keyword research quarterly

### Metadata Optimization
1. Front-load keywords in the title (the first 15 characters matter most)
2. Use every available character (don't waste space)
3. Write for humans first, search engines second
4. A/B test major changes before committing
5. Update descriptions with each major release

### A/B Testing
1. Test one element at a time (icon vs. screenshots vs. title)
2. Run tests to statistical significance (90%+ confidence)
3. Test high-impact elements first (the icon has the biggest impact)
4. Allow sufficient duration (at least 1 week, preferably 2-3)
5. Document learnings for future tests

### Localization
1. Start with the top 5 revenue markets (US, China, Japan, Germany, UK)
2. Use professional translators, not machine translation
3. Test translations with native speakers
4. Adapt keywords for cultural context
5. Monitor ROI by market

### Review Management
1. Respond to reviews within 24-48 hours
2. Always be professional, even with negative reviews
3. Address the specific issues raised
4. Thank users for positive feedback
5. Use insights to prioritize product improvements
## Technical Requirements

- **Python**: 3.7+ (for the Python modules)
- **Platform Support**: Apple App Store, Google Play Store
- **Data Formats**: JSON input/output
- **Dependencies**: Standard library only (no external packages required)

## Limitations

### Data Dependencies
- Keyword search volumes are estimates (no official Apple/Google data)
- Competitor data is limited to publicly available information
- Review analysis requires access to public reviews
- Historical data may not be available for new apps

### Platform Constraints
- Apple: Metadata changes require an app submission (except Promotional Text)
- Google: Metadata changes take 1-2 hours to index
- A/B testing requires significant traffic for statistical significance
- Store algorithms are proprietary and change without notice

### Scope
- Does not include paid user acquisition (Apple Search Ads, Google Ads)
- Does not cover in-app analytics implementation
- Does not handle technical app development
- Focuses on organic discovery and conversion optimization
## Troubleshooting

### Issue: Python modules not found
**Solution**: Ensure all .py files are in the same directory as SKILL.md

### Issue: Character limit validation failing
**Solution**: Check that you're using the correct platform ('apple' or 'google')

### Issue: Keyword research returning limited results
**Solution**: Provide more context about your app, features, and target audience

### Issue: ASO score seems inaccurate
**Solution**: Ensure you're providing accurate metrics (ratings, keyword rankings, conversion rate)
## Version History

### Version 1.0.0 (November 7, 2025)
- Initial release
- 8 Python modules with comprehensive ASO capabilities
- Support for both Apple App Store and Google Play Store
- Keyword research, metadata optimization, competitor analysis
- ASO scoring, A/B testing, localization, review analysis
- Launch planning and seasonal campaign tools

## Support & Feedback

This skill is designed to help app developers and marketers succeed in competitive app marketplaces. For the best results:

1. Provide detailed context about your app
2. Include specific metrics when available
3. Ask follow-up questions for clarification
4. Iterate based on results

## Credits

Developed by Claude Skills Factory.
Based on industry-standard ASO best practices.
Platform requirements current as of November 2025.

## License

This skill is provided as-is for use with Claude Code and Claude Apps. Customize and extend it as needed for your specific use cases.

---

**Ready to optimize your app?** Start with keyword research, then move to metadata optimization, and finally implement A/B testing for continuous improvement. The skill handles everything from pre-launch planning to ongoing optimization.

For detailed usage examples, see [HOW_TO_USE.md](HOW_TO_USE.md).
@@ -0,0 +1,409 @@
---
name: app-store-optimization
description: "Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store"
risk: unknown
source: community
date_added: "2026-02-27"
---
# App Store Optimization (ASO) Skill

This comprehensive skill provides complete ASO capabilities for successfully launching and optimizing mobile applications on the Apple App Store and Google Play Store.

## Capabilities

### Research & Analysis
- **Keyword Research**: Analyze keyword volume, competition, and relevance for app discovery
- **Competitor Analysis**: Deep-dive into top-performing apps in your category
- **Market Trend Analysis**: Identify emerging trends and opportunities in your app category
- **Review Sentiment Analysis**: Extract insights from user reviews to identify strengths and issues
- **Category Analysis**: Evaluate optimal category and subcategory placement strategies

### Metadata Optimization
- **Title Optimization**: Create compelling titles with optimal keyword placement (platform-specific character limits)
- **Description Optimization**: Craft both short and full descriptions that convert and rank
- **Subtitle/Promotional Text**: Optimize Apple-specific subtitle (30 chars) and promotional text (170 chars)
- **Keyword Field**: Maximize Apple's 100-character keyword field with strategic selection
- **Category Selection**: Data-driven recommendations for primary and secondary categories
- **Icon Best Practices**: Guidelines for designing high-converting app icons
- **Screenshot Optimization**: Strategies for creating screenshots that drive installs
- **Preview Video**: Best practices for app preview videos
- **Localization**: Multi-language optimization strategies for global reach

### Conversion Optimization
- **A/B Testing Framework**: Plan and track metadata experiments for continuous improvement
- **Visual Asset Testing**: Test icons, screenshots, and videos for maximum conversion
- **Store Listing Optimization**: Comprehensive page optimization for impression-to-install conversion
- **Call-to-Action**: Optimize CTAs in descriptions and promotional materials

### Rating & Review Management
- **Review Monitoring**: Track and analyze user reviews for actionable insights
- **Response Strategies**: Templates and best practices for responding to reviews
- **Rating Improvement**: Tactical approaches to improve app ratings organically
- **Issue Identification**: Surface common problems and feature requests from reviews

### Launch & Update Strategies
- **Pre-Launch Checklist**: Complete validation before submitting to stores
- **Launch Timing**: Optimize release timing for maximum visibility and downloads
- **Update Cadence**: Plan optimal update frequency and feature rollouts
- **Feature Announcements**: Craft "What's New" sections that re-engage users
- **Seasonal Optimization**: Leverage seasonal trends and events

### Analytics & Tracking
- **ASO Score**: Calculate overall ASO health score across multiple factors
- **Keyword Rankings**: Track keyword position changes over time
- **Conversion Metrics**: Monitor impression-to-install conversion rates
- **Download Velocity**: Track download trends and momentum
- **Performance Benchmarking**: Compare against category averages and competitors

### Platform-Specific Requirements
- **Apple App Store**:
  - Title: 30 characters
  - Subtitle: 30 characters
  - Promotional Text: 170 characters (editable without an app update)
  - Description: 4,000 characters
  - Keywords: 100 characters (comma-separated, no spaces)
  - What's New: 4,000 characters
- **Google Play Store**:
  - Title: 30 characters (reduced from 50 in 2021)
  - Short Description: 80 characters
  - Full Description: 4,000 characters
  - No separate keyword field (keywords are extracted from the title and description)

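These limits can be checked programmatically before submission. A minimal sketch, where the `CHAR_LIMITS` table and `validate_field` helper are illustrative names and not part of this skill's scripts:

```python
# Illustrative helper: platform character limits from the table above.
CHAR_LIMITS = {
    "apple": {"title": 30, "subtitle": 30, "promotional_text": 170,
              "description": 4000, "keywords": 100, "whats_new": 4000},
    "google": {"title": 30, "short_description": 80, "full_description": 4000},
}

def validate_field(platform: str, field: str, text: str) -> bool:
    """Return True if `text` fits the given platform's limit for `field`."""
    return len(text) <= CHAR_LIMITS[platform][field]
```

Run a check like `validate_field("apple", "title", proposed_title)` for every field before uploading, since the stores reject over-limit metadata.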
## Input Requirements

### Keyword Research
```json
{
  "app_name": "MyApp",
  "category": "Productivity",
  "target_keywords": ["task manager", "productivity", "todo list"],
  "competitors": ["Todoist", "Any.do", "Microsoft To Do"],
  "language": "en-US"
}
```

### Metadata Optimization
```json
{
  "platform": "apple" | "google",
  "app_info": {
    "name": "MyApp",
    "category": "Productivity",
    "target_audience": "Professionals aged 25-45",
    "key_features": ["Task management", "Team collaboration", "AI assistance"],
    "unique_value": "AI-powered task prioritization"
  },
  "current_metadata": {
    "title": "Current Title",
    "subtitle": "Current Subtitle",
    "description": "Current description..."
  },
  "target_keywords": ["productivity", "task manager", "todo"]
}
```

### Review Analysis
```json
{
  "app_id": "com.myapp.app",
  "platform": "apple" | "google",
  "date_range": "last_30_days" | "last_90_days" | "all_time",
  "rating_filter": [1, 2, 3, 4, 5],
  "language": "en"
}
```

### ASO Score Calculation
```json
{
  "metadata": {
    "title_quality": 0.8,
    "description_quality": 0.7,
    "keyword_density": 0.6
  },
  "ratings": {
    "average_rating": 4.5,
    "total_ratings": 15000
  },
  "conversion": {
    "impression_to_install": 0.05
  },
  "keyword_rankings": {
    "top_10": 5,
    "top_50": 12,
    "top_100": 18
  }
}
```

## Output Formats

### Keyword Research Report
- List of recommended keywords with search volume estimates
- Competition level analysis (low/medium/high)
- Relevance scores for each keyword
- Strategic recommendations for primary vs. secondary keywords
- Long-tail keyword opportunities

### Optimized Metadata Package
- Platform-specific title (with character count validation)
- Subtitle/promotional text (Apple)
- Short description (Google)
- Full description (both platforms)
- Keyword field (Apple - 100 chars)
- Character count validation for all fields
- Keyword density analysis
- Before/after comparison

### Competitor Analysis Report
- Top 10 competitors in category
- Their metadata strategies
- Keyword overlap analysis
- Visual asset assessment
- Rating and review volume comparison
- Identified gaps and opportunities

### ASO Health Score
- Overall score (0-100)
- Category breakdown:
  - Metadata Quality (0-25)
  - Ratings & Reviews (0-25)
  - Keyword Performance (0-25)
  - Conversion Metrics (0-25)
- Specific improvement recommendations
- Priority action items

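The four 0-25 subscores sum to the overall 0-100 score. A minimal sketch of that aggregation; the function name and the clamping behavior are illustrative assumptions, not the implementation in the skill's `aso_scorer.py`:

```python
def overall_aso_score(metadata: float, ratings: float,
                      keywords: float, conversion: float) -> int:
    """Sum four subscores, each clamped to its 0-25 band, into a 0-100 score."""
    def clamp(subscore: float) -> float:
        return max(0.0, min(25.0, subscore))
    return round(clamp(metadata) + clamp(ratings) + clamp(keywords) + clamp(conversion))
```

Clamping keeps a single strong dimension from masking weakness elsewhere, which is the point of the equal 25-point bands.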
### A/B Test Plan
- Hypothesis and test variables
- Test duration recommendations
- Success metrics definition
- Sample size calculations
- Statistical significance thresholds

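The sample size calculation follows the standard two-proportion formula n = (z_alpha + z_beta)^2 * 2*p_bar*(1 - p_bar) / (p_b - p_a)^2. A standalone sketch mirroring the approach used in the skill's `ab_test_planner.py`; the default z-values assume ~90% confidence (two-tailed) and 80% power:

```python
import math

def sample_size_per_variant(baseline: float, mde: float,
                            z_alpha: float = 1.645, z_beta: float = 0.84) -> int:
    """Visitors needed per variant to detect a relative lift `mde` over `baseline`."""
    p_b = baseline * (1 + mde)          # expected rate for the variation
    p_bar = (baseline + p_b) / 2        # pooled rate
    variance = 2 * p_bar * (1 - p_bar)  # pooled variance term
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_b - baseline) ** 2)
```

For a 5% baseline conversion and a 10% relative lift this lands on the order of 25,000 visitors per variant, which is why icon and screenshot tests (larger effect sizes) finish much sooner than description tests.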
### Launch Checklist
- Pre-submission validation (all required assets, metadata)
- Store compliance verification
- Testing checklist (devices, OS versions)
- Marketing preparation items
- Post-launch monitoring plan

## How to Use

### Keyword Research
```
Hey Claude—I just added the "app-store-optimization" skill. Can you research the best keywords for a productivity app targeting professionals? Focus on keywords with good search volume but lower competition.
```

### Optimize App Store Listing
```
Hey Claude—I just added the "app-store-optimization" skill. Can you optimize my app's metadata for the Apple App Store? Here's my current listing: [provide current metadata]. I want to rank for "task management" and "productivity tools".
```

### Analyze Competitor Strategy
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze the ASO strategies of Todoist, Any.do, and Microsoft To Do? I want to understand what they're doing well and where there are opportunities.
```

### Review Sentiment Analysis
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze recent reviews for my app (com.myapp.ios) and identify the most common user complaints and feature requests?
```

### Calculate ASO Score
```
Hey Claude—I just added the "app-store-optimization" skill. Can you calculate my app's overall ASO health score and provide specific recommendations for improvement?
```

### Plan A/B Test
```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon and first screenshot. Can you help me design the test and determine how long to run it?
```

### Pre-Launch Checklist
```
Hey Claude—I just added the "app-store-optimization" skill. Can you generate a comprehensive pre-launch checklist for submitting my app to both Apple App Store and Google Play Store?
```

## Scripts

### keyword_analyzer.py
Analyzes keywords for search volume, competition, and relevance. Provides strategic recommendations for primary and secondary keywords.

**Key Functions:**
- `analyze_keyword()`: Analyze single keyword metrics
- `compare_keywords()`: Compare multiple keywords
- `find_long_tail()`: Discover long-tail keyword opportunities
- `calculate_keyword_difficulty()`: Assess competition level

### metadata_optimizer.py
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.

**Key Functions:**
- `optimize_title()`: Create compelling, keyword-rich titles
- `optimize_description()`: Generate conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure compliance with platform limits
- `calculate_keyword_density()`: Analyze keyword usage in metadata

### competitor_analyzer.py
Analyzes top competitors' ASO strategies and identifies opportunities.

**Key Functions:**
- `get_top_competitors()`: Identify category leaders
- `analyze_competitor_metadata()`: Extract and analyze competitor keywords
- `compare_visual_assets()`: Evaluate icons and screenshots
- `identify_gaps()`: Find competitive opportunities

### aso_scorer.py
Calculates a comprehensive ASO health score across multiple dimensions.

**Key Functions:**
- `calculate_overall_score()`: Compute 0-100 ASO score
- `score_metadata_quality()`: Evaluate title, description, keywords
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate impression-to-install rates
- `generate_recommendations()`: Provide prioritized action items

### ab_test_planner.py
Plans and tracks A/B tests for metadata and visual assets.

**Key Functions:**
- `design_test()`: Create test hypothesis and variables
- `calculate_sample_size()`: Determine required test duration
- `calculate_significance()`: Assess statistical significance
- `track_test_results()`: Monitor test performance
- `generate_test_report()`: Summarize test outcomes

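The significance check in `ab_test_planner.py` is a standard two-proportion z-test. A self-contained sketch of the core computation (a simplified standalone version, not the module itself):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-score for the difference between two conversion rates (B minus A)."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference between the two observed rates
    se = math.sqrt(rate_a * (1 - rate_a) / n_a + rate_b * (1 - rate_b) / n_b)
    return (rate_b - rate_a) / se
```

For example, 500/10,000 installs on the control vs. 600/10,000 on the variation gives z of roughly 3.1, well past the 1.96 threshold for 95% confidence.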
### localization_helper.py
Manages multi-language ASO optimization strategies.

**Key Functions:**
- `identify_target_markets()`: Recommend localization priorities
- `translate_metadata()`: Generate localized metadata
- `adapt_keywords()`: Research locale-specific keywords
- `validate_translations()`: Check character limits per language
- `calculate_localization_roi()`: Estimate impact of localization

### review_analyzer.py
Analyzes user reviews for sentiment, issues, and feature requests.

**Key Functions:**
- `analyze_sentiment()`: Calculate positive/negative/neutral ratios
- `extract_common_themes()`: Identify frequently mentioned topics
- `identify_issues()`: Surface bugs and user complaints
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor sentiment over time
- `generate_response_templates()`: Create review response drafts

### launch_checklist.py
Generates comprehensive pre-launch and update checklists.

**Key Functions:**
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check Apple guidelines
- `validate_play_store_compliance()`: Check Google policies
- `create_update_plan()`: Plan update cadence and features
- `optimize_launch_timing()`: Recommend release dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities

## Best Practices

### Keyword Research
1. **Volume vs. Competition**: Balance high-volume keywords with achievable rankings
2. **Relevance First**: Only target keywords genuinely relevant to your app
3. **Long-Tail Strategy**: Include 3-4 word phrases with lower competition
4. **Continuous Research**: Keyword trends change—research quarterly
5. **Competitor Keywords**: Don't copy blindly; ensure relevance to your features

### Metadata Optimization
1. **Front-Load Keywords**: Place the most important keywords early in the title/description
2. **Natural Language**: Write for humans first, SEO second
3. **Feature Benefits**: Focus on user benefits, not just features
4. **A/B Test Everything**: Test titles, descriptions, screenshots systematically
5. **Update Regularly**: Refresh metadata with every major update
6. **Character Limits**: Use every character—don't waste valuable space
7. **Apple Keyword Field**: No plurals, duplicates, or spaces between commas

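Rule 7 can be automated: pack deduplicated keywords into the 100-character field with no spaces after the commas. A minimal sketch; the `pack_keyword_field` helper is illustrative and not one of the skill's scripts:

```python
def pack_keyword_field(keywords: list[str], limit: int = 100) -> str:
    """Join unique keywords with bare commas, skipping any that would exceed the limit."""
    packed, seen = [], set()
    for kw in keywords:
        kw = kw.strip().lower()
        if not kw or kw in seen:
            continue  # drop blanks and duplicates (rule 7)
        candidate = ",".join(packed + [kw])
        if len(candidate) <= limit:
            packed.append(kw)
            seen.add(kw)
    return ",".join(packed)
```

Order the input list by priority: once the 100-character budget is spent, later keywords are dropped.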
### Visual Assets
1. **Icon**: Must be recognizable at small sizes (60x60px)
2. **Screenshots**: First 2-3 are critical—most users don't scroll
3. **Captions**: Use screenshot captions to tell your value story
4. **Consistency**: Match visual style to app design
5. **A/B Test Icons**: The icon is the single most important visual element

### Reviews & Ratings
1. **Respond Quickly**: Reply to reviews within 24-48 hours
2. **Professional Tone**: Always courteous, even with negative reviews
3. **Address Issues**: Show you're actively fixing reported problems
4. **Thank Supporters**: Acknowledge positive reviews
5. **Prompt Strategically**: Ask for ratings after positive experiences

### Launch Strategy
1. **Soft Launch**: Consider launching in smaller markets first
2. **PR Timing**: Coordinate press coverage with launch
3. **Update Frequently**: Initial updates signal active development
4. **Monitor Closely**: Track metrics daily for the first 2 weeks
5. **Iterate Quickly**: Fix critical issues immediately

### Localization
1. **Prioritize Markets**: Start with English, Spanish, Chinese, French, German
2. **Native Speakers**: Use professional translators, not machine translation
3. **Cultural Adaptation**: Some features resonate differently by culture
4. **Test Locally**: Have native speakers review before publishing
5. **Measure ROI**: Track downloads by locale to assess impact

## Limitations

### Data Dependencies
- Keyword search volume estimates are approximate (no official data from Apple/Google)
- Competitor data may be incomplete for private apps
- Review analysis is limited to public reviews (can't access private feedback)
- Historical data may not be available for new apps

### Platform Constraints
- Apple App Store keyword changes require an app submission (except Promotional Text)
- Google Play Store metadata changes take 1-2 hours to index
- A/B testing requires significant traffic for statistical significance
- Store algorithms are proprietary and change without notice

### Industry Variability
- ASO benchmarks vary significantly by category (games vs. utilities)
- Seasonality affects different categories differently
- Geographic markets have different competitive landscapes
- Cultural preferences impact what works in different countries

### Scope Boundaries
- Does not include paid user acquisition strategies (Apple Search Ads, Google Ads)
- Does not cover app development or UI/UX optimization
- Does not include app analytics implementation (use Firebase, Mixpanel, etc.)
- Does not handle app submission technical issues (provisioning profiles, certificates)

### When NOT to Use This Skill
- For web apps (different SEO strategies apply)
- For enterprise apps not in public stores
- For apps in beta/TestFlight only
- If you need paid advertising strategies (use marketing skills instead)

## Integration with Other Skills

This skill works well with:
- **Content Strategy Skills**: For creating app descriptions and marketing copy
- **Analytics Skills**: For analyzing download and engagement data
- **Localization Skills**: For managing multi-language content
- **Design Skills**: For creating optimized visual assets
- **Marketing Skills**: For coordinating broader launch campaigns

## Version & Updates

This skill is based on current Apple App Store and Google Play Store requirements as of November 2025. Store policies and best practices evolve—verify current requirements before major launches.

**Key Updates to Monitor:**
- Apple App Store Connect updates (apple.com/app-store/review/guidelines)
- Google Play Console updates (play.google.com/console/about/guides/releasewithconfidence)
- iOS/Android version adoption rates (affects device testing)
- Store algorithm changes (follow ASO blogs and communities)

## When to Use

Use this skill whenever you need keyword research, metadata optimization, competitor or review analysis, ASO scoring, A/B test planning, or launch preparation for an app on the Apple App Store or Google Play Store.

@@ -0,0 +1,662 @@
"""
A/B testing module for App Store Optimization.
Plans and tracks A/B tests for metadata and visual assets.
"""

from typing import Dict, List, Any, Optional
import math


class ABTestPlanner:
    """Plans and tracks A/B tests for ASO elements."""

    # Minimum detectable effect sizes (conservative estimates)
    MIN_EFFECT_SIZES = {
        'icon': 0.10,        # 10% conversion improvement
        'screenshot': 0.08,  # 8% conversion improvement
        'title': 0.05,       # 5% conversion improvement
        'description': 0.03  # 3% conversion improvement
    }

    # Statistical confidence levels
    CONFIDENCE_LEVELS = {
        'high': 0.95,        # 95% confidence
        'standard': 0.90,    # 90% confidence
        'exploratory': 0.80  # 80% confidence
    }

    def __init__(self):
        """Initialize A/B test planner."""
        self.active_tests = []

    def design_test(
        self,
        test_type: str,
        variant_a: Dict[str, Any],
        variant_b: Dict[str, Any],
        hypothesis: str,
        success_metric: str = 'conversion_rate'
    ) -> Dict[str, Any]:
        """
        Design an A/B test with hypothesis and variables.

        Args:
            test_type: Type of test ('icon', 'screenshot', 'title', 'description')
            variant_a: Control variant details
            variant_b: Test variant details
            hypothesis: Expected outcome hypothesis
            success_metric: Metric to optimize

        Returns:
            Test design with configuration
        """
        test_design = {
            'test_id': self._generate_test_id(test_type),
            'test_type': test_type,
            'hypothesis': hypothesis,
            'variants': {
                'a': {
                    'name': 'Control',
                    'details': variant_a,
                    'traffic_split': 0.5
                },
                'b': {
                    'name': 'Variation',
                    'details': variant_b,
                    'traffic_split': 0.5
                }
            },
            'success_metric': success_metric,
            'secondary_metrics': self._get_secondary_metrics(test_type),
            'minimum_effect_size': self.MIN_EFFECT_SIZES.get(test_type, 0.05),
            'recommended_confidence': 'standard',
            'best_practices': self._get_test_best_practices(test_type)
        }

        self.active_tests.append(test_design)
        return test_design

    def calculate_sample_size(
        self,
        baseline_conversion: float,
        minimum_detectable_effect: float,
        confidence_level: str = 'standard',
        power: float = 0.80
    ) -> Dict[str, Any]:
        """
        Calculate required sample size for statistical significance.

        Args:
            baseline_conversion: Current conversion rate (0-1)
            minimum_detectable_effect: Minimum effect size to detect (0-1)
            confidence_level: 'high', 'standard', or 'exploratory'
            power: Statistical power (typically 0.80 or 0.90)

        Returns:
            Sample size calculation with duration estimates
        """
        alpha = 1 - self.CONFIDENCE_LEVELS[confidence_level]
        beta = 1 - power

        # Expected conversion for variant B
        expected_conversion_b = baseline_conversion * (1 + minimum_detectable_effect)

        # Z-scores for alpha and beta; round so float arithmetic still
        # matches the lookup keys in _get_z_score
        z_alpha = self._get_z_score(round(1 - alpha / 2, 3))  # Two-tailed test
        z_beta = self._get_z_score(power)

        # Pooled standard deviation
        p_pooled = (baseline_conversion + expected_conversion_b) / 2
        sd_pooled = math.sqrt(2 * p_pooled * (1 - p_pooled))

        # Sample size per variant
        n_per_variant = math.ceil(
            ((z_alpha + z_beta) ** 2 * sd_pooled ** 2) /
            ((expected_conversion_b - baseline_conversion) ** 2)
        )

        total_sample_size = n_per_variant * 2

        # Estimate duration based on typical traffic
        duration_estimates = self._estimate_test_duration(
            total_sample_size,
            baseline_conversion
        )

        return {
            'sample_size_per_variant': n_per_variant,
            'total_sample_size': total_sample_size,
            'baseline_conversion': baseline_conversion,
            'expected_conversion_improvement': minimum_detectable_effect,
            'expected_conversion_b': expected_conversion_b,
            'confidence_level': confidence_level,
            'statistical_power': power,
            'duration_estimates': duration_estimates,
            'recommendations': self._generate_sample_size_recommendations(
                n_per_variant,
                duration_estimates
            )
        }

    def calculate_significance(
        self,
        variant_a_conversions: int,
        variant_a_visitors: int,
        variant_b_conversions: int,
        variant_b_visitors: int
    ) -> Dict[str, Any]:
        """
        Calculate statistical significance of test results.

        Args:
            variant_a_conversions: Conversions for control
            variant_a_visitors: Visitors for control
            variant_b_conversions: Conversions for variation
            variant_b_visitors: Visitors for variation

        Returns:
            Significance analysis with decision recommendation
        """
        # Calculate conversion rates
        rate_a = variant_a_conversions / variant_a_visitors if variant_a_visitors > 0 else 0
        rate_b = variant_b_conversions / variant_b_visitors if variant_b_visitors > 0 else 0

        # Calculate improvement
        if rate_a > 0:
            relative_improvement = (rate_b - rate_a) / rate_a
        else:
            relative_improvement = 0

        absolute_improvement = rate_b - rate_a

        # Calculate standard error
        se_a = math.sqrt(rate_a * (1 - rate_a) / variant_a_visitors) if variant_a_visitors > 0 else 0
        se_b = math.sqrt(rate_b * (1 - rate_b) / variant_b_visitors) if variant_b_visitors > 0 else 0
        se_diff = math.sqrt(se_a**2 + se_b**2)

        # Calculate z-score
        z_score = absolute_improvement / se_diff if se_diff > 0 else 0

        # Calculate p-value (two-tailed)
        p_value = 2 * (1 - self._standard_normal_cdf(abs(z_score)))

        # Determine significance
        is_significant_95 = p_value < 0.05
        is_significant_90 = p_value < 0.10

        # Generate decision
        decision = self._generate_test_decision(
            relative_improvement,
            is_significant_95,
            is_significant_90,
            variant_a_visitors + variant_b_visitors
        )

        return {
            'variant_a': {
                'conversions': variant_a_conversions,
                'visitors': variant_a_visitors,
                'conversion_rate': round(rate_a, 4)
            },
            'variant_b': {
                'conversions': variant_b_conversions,
                'visitors': variant_b_visitors,
                'conversion_rate': round(rate_b, 4)
            },
            'improvement': {
                'absolute': round(absolute_improvement, 4),
                'relative_percentage': round(relative_improvement * 100, 2)
            },
            'statistical_analysis': {
                'z_score': round(z_score, 3),
                'p_value': round(p_value, 4),
                'is_significant_95': is_significant_95,
                'is_significant_90': is_significant_90,
                'confidence_level': '95%' if is_significant_95 else ('90%' if is_significant_90 else 'Not significant')
            },
            'decision': decision
        }

    def track_test_results(
        self,
        test_id: str,
        results_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Track ongoing test results and provide recommendations.

        Args:
            test_id: Test identifier
            results_data: Current test results

        Returns:
            Test tracking report with next steps
        """
        # Find test
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        # Calculate significance
        significance = self.calculate_significance(
            results_data['variant_a_conversions'],
            results_data['variant_a_visitors'],
            results_data['variant_b_conversions'],
            results_data['variant_b_visitors']
        )

        # Calculate test progress
        total_visitors = results_data['variant_a_visitors'] + results_data['variant_b_visitors']
        required_sample = results_data.get('required_sample_size', 10000)
        progress_percentage = min((total_visitors / required_sample) * 100, 100)

        # Generate recommendations
        recommendations = self._generate_tracking_recommendations(
            significance,
            progress_percentage,
            test['test_type']
        )

        return {
            'test_id': test_id,
            'test_type': test['test_type'],
            'progress': {
                'total_visitors': total_visitors,
                'required_sample_size': required_sample,
                'progress_percentage': round(progress_percentage, 1),
                'is_complete': progress_percentage >= 100
            },
            'current_results': significance,
            'recommendations': recommendations,
            'next_steps': self._determine_next_steps(
                significance,
                progress_percentage
            )
        }

    def generate_test_report(
        self,
        test_id: str,
        final_results: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Generate final test report with insights and recommendations.

        Args:
            test_id: Test identifier
            final_results: Final test results

        Returns:
            Comprehensive test report
        """
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        significance = self.calculate_significance(
            final_results['variant_a_conversions'],
            final_results['variant_a_visitors'],
            final_results['variant_b_conversions'],
            final_results['variant_b_visitors']
        )

        # Generate insights
        insights = self._generate_test_insights(
            test,
            significance,
            final_results
        )

        # Implementation plan
        implementation_plan = self._create_implementation_plan(
            test,
            significance
        )

        return {
            'test_summary': {
                'test_id': test_id,
                'test_type': test['test_type'],
                'hypothesis': test['hypothesis'],
                'duration_days': final_results.get('duration_days', 'N/A')
            },
            'results': significance,
            'insights': insights,
            'implementation_plan': implementation_plan,
            'learnings': self._extract_learnings(test, significance)
        }

    def _generate_test_id(self, test_type: str) -> str:
        """Generate unique test ID."""
        import time
        timestamp = int(time.time())
        return f"{test_type}_{timestamp}"

    def _get_secondary_metrics(self, test_type: str) -> List[str]:
        """Get secondary metrics to track for test type."""
        metrics_map = {
            'icon': ['tap_through_rate', 'impression_count', 'brand_recall'],
            'screenshot': ['tap_through_rate', 'time_on_page', 'scroll_depth'],
            'title': ['impression_count', 'tap_through_rate', 'search_visibility'],
            'description': ['time_on_page', 'scroll_depth', 'tap_through_rate']
        }
        return metrics_map.get(test_type, ['tap_through_rate'])

    def _get_test_best_practices(self, test_type: str) -> List[str]:
        """Get best practices for specific test type."""
        practices_map = {
            'icon': [
                'Test only one element at a time (color vs. style vs. symbolism)',
                'Ensure icon is recognizable at small sizes (60x60px)',
                'Consider cultural context for global audience',
                'Test against top competitor icons'
            ],
            'screenshot': [
                'Test order of screenshots (users see first 2-3)',
                'Use captions to tell story',
                'Show key features and benefits',
                'Test with and without device frames'
            ],
            'title': [
                'Test keyword variations, not major rebrand',
                'Keep brand name consistent',
                'Ensure title fits within character limits',
                'Test on both search and browse contexts'
            ],
            'description': [
                'Test structure (bullet points vs. paragraphs)',
                'Test call-to-action placement',
                'Test feature vs. benefit focus',
                'Maintain keyword density'
            ]
        }
        return practices_map.get(test_type, ['Test one variable at a time'])

def _estimate_test_duration(
|
||||
self,
|
||||
required_sample_size: int,
|
||||
baseline_conversion: float
|
||||
) -> Dict[str, Any]:
|
||||
"""Estimate test duration based on typical traffic levels."""
|
||||
# Assume different daily traffic scenarios
|
||||
traffic_scenarios = {
|
||||
'low': 100, # 100 page views/day
|
||||
'medium': 1000, # 1000 page views/day
|
||||
'high': 10000 # 10000 page views/day
|
||||
}
|
||||
|
||||
estimates = {}
|
||||
for scenario, daily_views in traffic_scenarios.items():
|
||||
days = math.ceil(required_sample_size / daily_views)
|
||||
estimates[scenario] = {
|
||||
'daily_page_views': daily_views,
|
||||
'estimated_days': days,
|
||||
'estimated_weeks': round(days / 7, 1)
|
||||
}
|
||||
|
||||
return estimates
|
||||
|
||||
    def _generate_sample_size_recommendations(
        self,
        sample_size: int,
        duration_estimates: Dict[str, Any]
    ) -> List[str]:
        """Generate recommendations based on sample size."""
        recommendations = []

        if sample_size > 50000:
            recommendations.append(
                "Large sample size required - consider testing smaller effect size or increasing traffic"
            )

        if duration_estimates['medium']['estimated_days'] > 30:
            recommendations.append(
                "Long test duration - consider higher minimum detectable effect or focus on high-impact changes"
            )

        if duration_estimates['low']['estimated_days'] > 60:
            recommendations.append(
                "Insufficient traffic for reliable testing - consider user acquisition or broader targeting"
            )

        if not recommendations:
            recommendations.append("Sample size and duration are reasonable for this test")

        return recommendations

    def _get_z_score(self, percentile: float) -> float:
        """Get z-score for given percentile (approximation)."""
        # Common z-scores
        z_scores = {
            0.80: 0.84,
            0.85: 1.04,
            0.90: 1.28,
            0.95: 1.645,
            0.975: 1.96,
            0.99: 2.33
        }
        return z_scores.get(percentile, 1.96)

    def _standard_normal_cdf(self, z: float) -> float:
        """Approximate standard normal cumulative distribution function."""
        # Using error function approximation
        t = 1.0 / (1.0 + 0.2316419 * abs(z))
        d = 0.3989423 * math.exp(-z * z / 2.0)
        p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))))

        if z > 0:
            return 1.0 - p
        else:
            return p

    def _generate_test_decision(
        self,
        improvement: float,
        is_significant_95: bool,
        is_significant_90: bool,
        total_visitors: int
    ) -> Dict[str, Any]:
        """Generate test decision and recommendation."""
        if total_visitors < 1000:
            return {
                'decision': 'continue',
                'rationale': 'Insufficient data - continue test to reach minimum sample size',
                'action': 'Keep test running'
            }

        if is_significant_95:
            if improvement > 0:
                return {
                    'decision': 'implement_b',
                    'rationale': f'Variant B shows {improvement*100:.1f}% improvement with 95% confidence',
                    'action': 'Implement Variant B'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 95% confidence',
                    'action': 'Keep current version (A)'
                }
        elif is_significant_90:
            if improvement > 0:
                return {
                    'decision': 'implement_b_cautiously',
                    'rationale': f'Variant B shows {improvement*100:.1f}% improvement with 90% confidence',
                    'action': 'Consider implementing B, monitor closely'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 90% confidence',
                    'action': 'Keep current version (A)'
                }
        else:
            return {
                'decision': 'inconclusive',
                'rationale': 'No statistically significant difference detected',
                'action': 'Either keep A or test different hypothesis'
            }

    def _generate_tracking_recommendations(
        self,
        significance: Dict[str, Any],
        progress: float,
        test_type: str
    ) -> List[str]:
        """Generate recommendations for ongoing test."""
        recommendations = []

        if progress < 50:
            recommendations.append(
                f"Test is {progress:.0f}% complete - continue collecting data"
            )

        if progress >= 100:
            if significance['statistical_analysis']['is_significant_95']:
                recommendations.append(
                    "Sufficient data collected with significant results - ready to conclude test"
                )
            else:
                recommendations.append(
                    "Sample size reached but no significant difference - consider extending test or concluding"
                )

        return recommendations

    def _determine_next_steps(
        self,
        significance: Dict[str, Any],
        progress: float
    ) -> str:
        """Determine next steps for test."""
        if progress < 100:
            return f"Continue test until reaching 100% sample size (currently {progress:.0f}%)"

        decision = significance.get('decision', {}).get('decision', 'inconclusive')

        if decision == 'implement_b':
            return "Implement Variant B and monitor metrics for 2 weeks"
        elif decision == 'keep_a':
            return "Keep Variant A and design new test with different hypothesis"
        else:
            return "Test inconclusive - either keep A or design new test"

    def _generate_test_insights(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any],
        results: Dict[str, Any]
    ) -> List[str]:
        """Generate insights from test results."""
        insights = []

        improvement = significance['improvement']['relative_percentage']

        if significance['statistical_analysis']['is_significant_95']:
            insights.append(
                f"Strong evidence: Variant B {'improved' if improvement > 0 else 'decreased'} "
                f"conversion by {abs(improvement):.1f}% with 95% confidence"
            )

        insights.append(
            f"Tested {test['test_type']} changes: {test['hypothesis']}"
        )

        # Add context-specific insights
        if test['test_type'] == 'icon' and improvement > 5:
            insights.append(
                "Icon change had substantial impact - visual first impression is critical"
            )

        return insights

    def _create_implementation_plan(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[Dict[str, str]]:
        """Create implementation plan for winning variant."""
        plan = []

        if significance.get('decision', {}).get('decision') == 'implement_b':
            plan.append({
                'step': '1. Update store listing',
                'details': f"Replace {test['test_type']} with Variant B across all platforms"
            })
            plan.append({
                'step': '2. Monitor metrics',
                'details': 'Track conversion rate for 2 weeks to confirm sustained improvement'
            })
            plan.append({
                'step': '3. Document learnings',
                'details': 'Record insights for future optimization'
            })

        return plan

    def _extract_learnings(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[str]:
        """Extract key learnings from test."""
        learnings = []

        improvement = significance['improvement']['relative_percentage']

        learnings.append(
            f"Testing {test['test_type']} can yield {abs(improvement):.1f}% conversion change"
        )

        if test['test_type'] == 'title':
            learnings.append(
                "Title changes affect search visibility and user perception"
            )
        elif test['test_type'] == 'screenshot':
            learnings.append(
                "First 2-3 screenshots are critical for conversion"
            )

        return learnings


def plan_ab_test(
    test_type: str,
    variant_a: Dict[str, Any],
    variant_b: Dict[str, Any],
    hypothesis: str,
    baseline_conversion: float
) -> Dict[str, Any]:
    """
    Convenience function to plan an A/B test.

    Args:
        test_type: Type of test
        variant_a: Control variant
        variant_b: Test variant
        hypothesis: Test hypothesis
        baseline_conversion: Current conversion rate

    Returns:
        Complete test plan
    """
    planner = ABTestPlanner()

    test_design = planner.design_test(
        test_type,
        variant_a,
        variant_b,
        hypothesis
    )

    sample_size = planner.calculate_sample_size(
        baseline_conversion,
        planner.MIN_EFFECT_SIZES.get(test_type, 0.05)
    )

    return {
        'test_design': test_design,
        'sample_size_requirements': sample_size
    }
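The duration and recommendation helpers above are driven by a required per-variant sample size. As a rough, standalone illustration of the standard two-proportion z-test formula such planners are typically built on (a sketch assuming α = 0.05 two-sided and 80% power, matching this module's z-score table; it is not the module's own `calculate_sample_size` implementation):

```python
import math

def approx_sample_size(baseline: float, relative_effect: float) -> int:
    """Per-variant sample size for a two-proportion z-test (sketch).

    Assumes alpha = 0.05 two-sided (z = 1.96) and power = 0.80 (z = 0.84).
    """
    z_alpha = 1.96
    z_beta = 0.84
    p1 = baseline
    p2 = baseline * (1 + relative_effect)  # relative minimum detectable effect
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 3% baseline needs roughly 50k
# visitors per variant - the scale at which the recommendation helper
# starts flagging "large sample size required".
n = approx_sample_size(0.03, 0.10)
```

Halving the minimum detectable effect roughly quadruples the required sample, which is why the recommendations trade off effect size against test duration.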
@@ -0,0 +1,482 @@
"""
ASO scoring module for App Store Optimization.

Calculates comprehensive ASO health score across multiple dimensions.
"""

from typing import Dict, List, Any, Optional


class ASOScorer:
    """Calculates overall ASO health score and provides recommendations."""

    # Score weights for different components (total = 100)
    WEIGHTS = {
        'metadata_quality': 25,
        'ratings_reviews': 25,
        'keyword_performance': 25,
        'conversion_metrics': 25
    }

    # Benchmarks for scoring
    BENCHMARKS = {
        'title_keyword_usage': {'min': 1, 'target': 2},
        'description_length': {'min': 500, 'target': 2000},
        'keyword_density': {'min': 2, 'optimal': 5, 'max': 8},
        'average_rating': {'min': 3.5, 'target': 4.5},
        'ratings_count': {'min': 100, 'target': 5000},
        'keywords_top_10': {'min': 2, 'target': 10},
        'keywords_top_50': {'min': 5, 'target': 20},
        'conversion_rate': {'min': 0.02, 'target': 0.10}
    }

    def __init__(self):
        """Initialize ASO scorer."""
        self.score_breakdown = {}

    def calculate_overall_score(
        self,
        metadata: Dict[str, Any],
        ratings: Dict[str, Any],
        keyword_performance: Dict[str, Any],
        conversion: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Calculate comprehensive ASO score (0-100).

        Args:
            metadata: Title, description quality metrics
            ratings: Rating average and count
            keyword_performance: Keyword ranking data
            conversion: Impression-to-install metrics

        Returns:
            Overall score with detailed breakdown
        """
        # Calculate component scores
        metadata_score = self.score_metadata_quality(metadata)
        ratings_score = self.score_ratings_reviews(ratings)
        keyword_score = self.score_keyword_performance(keyword_performance)
        conversion_score = self.score_conversion_metrics(conversion)

        # Calculate weighted overall score
        overall_score = (
            metadata_score * (self.WEIGHTS['metadata_quality'] / 100) +
            ratings_score * (self.WEIGHTS['ratings_reviews'] / 100) +
            keyword_score * (self.WEIGHTS['keyword_performance'] / 100) +
            conversion_score * (self.WEIGHTS['conversion_metrics'] / 100)
        )

        # Store breakdown
        self.score_breakdown = {
            'metadata_quality': {
                'score': metadata_score,
                'weight': self.WEIGHTS['metadata_quality'],
                'weighted_contribution': round(metadata_score * (self.WEIGHTS['metadata_quality'] / 100), 1)
            },
            'ratings_reviews': {
                'score': ratings_score,
                'weight': self.WEIGHTS['ratings_reviews'],
                'weighted_contribution': round(ratings_score * (self.WEIGHTS['ratings_reviews'] / 100), 1)
            },
            'keyword_performance': {
                'score': keyword_score,
                'weight': self.WEIGHTS['keyword_performance'],
                'weighted_contribution': round(keyword_score * (self.WEIGHTS['keyword_performance'] / 100), 1)
            },
            'conversion_metrics': {
                'score': conversion_score,
                'weight': self.WEIGHTS['conversion_metrics'],
                'weighted_contribution': round(conversion_score * (self.WEIGHTS['conversion_metrics'] / 100), 1)
            }
        }

        # Generate recommendations
        recommendations = self.generate_recommendations(
            metadata_score,
            ratings_score,
            keyword_score,
            conversion_score
        )

        # Assess overall health
        health_status = self._assess_health_status(overall_score)

        return {
            'overall_score': round(overall_score, 1),
            'health_status': health_status,
            'score_breakdown': self.score_breakdown,
            'recommendations': recommendations,
            'priority_actions': self._prioritize_actions(recommendations),
            'strengths': self._identify_strengths(self.score_breakdown),
            'weaknesses': self._identify_weaknesses(self.score_breakdown)
        }

    def score_metadata_quality(self, metadata: Dict[str, Any]) -> float:
        """
        Score metadata quality (0-100).

        Evaluates:
        - Title optimization
        - Description quality
        - Keyword usage
        """
        scores = []

        # Title score (0-35 points)
        title_keywords = metadata.get('title_keyword_count', 0)
        title_length = metadata.get('title_length', 0)

        title_score = 0
        if title_keywords >= self.BENCHMARKS['title_keyword_usage']['target']:
            title_score = 35
        elif title_keywords >= self.BENCHMARKS['title_keyword_usage']['min']:
            title_score = 25
        else:
            title_score = 10

        # Penalize titles that leave most of the available space unused
        if title_length <= 25:
            title_score -= 5

        scores.append(min(title_score, 35))

        # Description score (0-35 points)
        desc_length = metadata.get('description_length', 0)
        desc_quality = metadata.get('description_quality', 0.0)  # 0-1 scale

        desc_score = 0
        if desc_length >= self.BENCHMARKS['description_length']['target']:
            desc_score = 25
        elif desc_length >= self.BENCHMARKS['description_length']['min']:
            desc_score = 15
        else:
            desc_score = 5

        # Add quality bonus
        desc_score += desc_quality * 10
        scores.append(min(desc_score, 35))

        # Keyword density score (0-30 points)
        keyword_density = metadata.get('keyword_density', 0.0)

        if self.BENCHMARKS['keyword_density']['min'] <= keyword_density <= self.BENCHMARKS['keyword_density']['optimal']:
            density_score = 30
        elif keyword_density < self.BENCHMARKS['keyword_density']['min']:
            # Too low - proportional scoring
            density_score = (keyword_density / self.BENCHMARKS['keyword_density']['min']) * 20
        else:
            # Too high (keyword stuffing) - penalty
            excess = keyword_density - self.BENCHMARKS['keyword_density']['optimal']
            density_score = max(30 - (excess * 5), 0)

        scores.append(density_score)

        return round(sum(scores), 1)

    def score_ratings_reviews(self, ratings: Dict[str, Any]) -> float:
        """
        Score ratings and reviews (0-100).

        Evaluates:
        - Average rating
        - Total ratings count
        - Review velocity
        """
        average_rating = ratings.get('average_rating', 0.0)
        total_ratings = ratings.get('total_ratings', 0)
        recent_ratings = ratings.get('recent_ratings_30d', 0)

        # Rating quality score (0-50 points)
        if average_rating >= self.BENCHMARKS['average_rating']['target']:
            rating_quality_score = 50
        elif average_rating >= self.BENCHMARKS['average_rating']['min']:
            # Proportional scoring between min and target
            proportion = (average_rating - self.BENCHMARKS['average_rating']['min']) / \
                         (self.BENCHMARKS['average_rating']['target'] - self.BENCHMARKS['average_rating']['min'])
            rating_quality_score = 30 + (proportion * 20)
        elif average_rating >= 3.0:
            rating_quality_score = 20
        else:
            rating_quality_score = 10

        # Rating volume score (0-30 points)
        if total_ratings >= self.BENCHMARKS['ratings_count']['target']:
            rating_volume_score = 30
        elif total_ratings >= self.BENCHMARKS['ratings_count']['min']:
            # Proportional scoring
            proportion = (total_ratings - self.BENCHMARKS['ratings_count']['min']) / \
                         (self.BENCHMARKS['ratings_count']['target'] - self.BENCHMARKS['ratings_count']['min'])
            rating_volume_score = 15 + (proportion * 15)
        else:
            # Very low volume
            rating_volume_score = (total_ratings / self.BENCHMARKS['ratings_count']['min']) * 15

        # Rating velocity score (0-20 points)
        if recent_ratings > 100:
            velocity_score = 20
        elif recent_ratings > 50:
            velocity_score = 15
        elif recent_ratings > 10:
            velocity_score = 10
        else:
            velocity_score = 5

        total_score = rating_quality_score + rating_volume_score + velocity_score

        return round(min(total_score, 100), 1)

    def score_keyword_performance(self, keyword_performance: Dict[str, Any]) -> float:
        """
        Score keyword ranking performance (0-100).

        Evaluates:
        - Top 10 rankings
        - Top 50 rankings
        - Ranking trends
        """
        top_10_count = keyword_performance.get('top_10', 0)
        top_50_count = keyword_performance.get('top_50', 0)
        top_100_count = keyword_performance.get('top_100', 0)
        improving_keywords = keyword_performance.get('improving_keywords', 0)

        # Top 10 score (0-50 points) - most valuable rankings
        if top_10_count >= self.BENCHMARKS['keywords_top_10']['target']:
            top_10_score = 50
        elif top_10_count >= self.BENCHMARKS['keywords_top_10']['min']:
            proportion = (top_10_count - self.BENCHMARKS['keywords_top_10']['min']) / \
                         (self.BENCHMARKS['keywords_top_10']['target'] - self.BENCHMARKS['keywords_top_10']['min'])
            top_10_score = 25 + (proportion * 25)
        else:
            top_10_score = (top_10_count / self.BENCHMARKS['keywords_top_10']['min']) * 25

        # Top 50 score (0-30 points)
        if top_50_count >= self.BENCHMARKS['keywords_top_50']['target']:
            top_50_score = 30
        elif top_50_count >= self.BENCHMARKS['keywords_top_50']['min']:
            proportion = (top_50_count - self.BENCHMARKS['keywords_top_50']['min']) / \
                         (self.BENCHMARKS['keywords_top_50']['target'] - self.BENCHMARKS['keywords_top_50']['min'])
            top_50_score = 15 + (proportion * 15)
        else:
            top_50_score = (top_50_count / self.BENCHMARKS['keywords_top_50']['min']) * 15

        # Coverage score (0-10 points) - based on top 100
        coverage_score = min((top_100_count / 30) * 10, 10)

        # Trend score (0-10 points) - are rankings improving?
        if improving_keywords > 5:
            trend_score = 10
        elif improving_keywords > 0:
            trend_score = 5
        else:
            trend_score = 0

        total_score = top_10_score + top_50_score + coverage_score + trend_score

        return round(min(total_score, 100), 1)

    def score_conversion_metrics(self, conversion: Dict[str, Any]) -> float:
        """
        Score conversion performance (0-100).

        Evaluates:
        - Impression-to-install conversion rate
        - Download velocity
        """
        conversion_rate = conversion.get('impression_to_install', 0.0)
        downloads_30d = conversion.get('downloads_last_30_days', 0)
        downloads_trend = conversion.get('downloads_trend', 'stable')  # 'up', 'stable', 'down'

        # Conversion rate score (0-70 points)
        if conversion_rate >= self.BENCHMARKS['conversion_rate']['target']:
            conversion_score = 70
        elif conversion_rate >= self.BENCHMARKS['conversion_rate']['min']:
            proportion = (conversion_rate - self.BENCHMARKS['conversion_rate']['min']) / \
                         (self.BENCHMARKS['conversion_rate']['target'] - self.BENCHMARKS['conversion_rate']['min'])
            conversion_score = 35 + (proportion * 35)
        else:
            conversion_score = (conversion_rate / self.BENCHMARKS['conversion_rate']['min']) * 35

        # Download velocity score (0-20 points)
        if downloads_30d > 10000:
            velocity_score = 20
        elif downloads_30d > 1000:
            velocity_score = 15
        elif downloads_30d > 100:
            velocity_score = 10
        else:
            velocity_score = 5

        # Trend bonus (0-10 points)
        if downloads_trend == 'up':
            trend_score = 10
        elif downloads_trend == 'stable':
            trend_score = 5
        else:
            trend_score = 0

        total_score = conversion_score + velocity_score + trend_score

        return round(min(total_score, 100), 1)

    def generate_recommendations(
        self,
        metadata_score: float,
        ratings_score: float,
        keyword_score: float,
        conversion_score: float
    ) -> List[Dict[str, Any]]:
        """Generate prioritized recommendations based on scores."""
        recommendations = []

        # Metadata recommendations
        if metadata_score < 60:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'high',
                'action': 'Optimize app title and description',
                'details': 'Add more keywords to title, expand description to 1500-2000 characters, improve keyword density to 3-5%',
                'expected_impact': 'Improve discoverability and ranking potential'
            })
        elif metadata_score < 80:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'medium',
                'action': 'Refine metadata for better keyword targeting',
                'details': 'Test variations of title/subtitle, optimize keyword field for Apple',
                'expected_impact': 'Incremental ranking improvements'
            })

        # Ratings recommendations
        if ratings_score < 60:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'high',
                'action': 'Improve rating quality and volume',
                'details': 'Address top user complaints, implement in-app rating prompts, respond to negative reviews',
                'expected_impact': 'Better conversion rates and trust signals'
            })
        elif ratings_score < 80:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'medium',
                'action': 'Increase rating velocity',
                'details': 'Optimize timing of rating requests, encourage satisfied users to rate',
                'expected_impact': 'Sustained rating quality'
            })

        # Keyword performance recommendations
        if keyword_score < 60:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'high',
                'action': 'Improve keyword rankings',
                'details': 'Target long-tail keywords with lower competition, update metadata with high-potential keywords, build backlinks',
                'expected_impact': 'Significant improvement in organic visibility'
            })
        elif keyword_score < 80:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'medium',
                'action': 'Expand keyword coverage',
                'details': 'Target additional related keywords, test seasonal keywords, localize for new markets',
                'expected_impact': 'Broader reach and more discovery opportunities'
            })

        # Conversion recommendations
        if conversion_score < 60:
            recommendations.append({
                'category': 'conversion_metrics',
                'priority': 'high',
                'action': 'Optimize store listing for conversions',
                'details': 'Improve screenshots and icon, strengthen value proposition in description, add video preview',
                'expected_impact': 'Higher impression-to-install conversion'
            })
        elif conversion_score < 80:
            recommendations.append({
                'category': 'conversion_metrics',
                'priority': 'medium',
                'action': 'Test visual asset variations',
                'details': 'A/B test different icon designs and screenshot sequences',
                'expected_impact': 'Incremental conversion improvements'
            })

        return recommendations

    def _assess_health_status(self, overall_score: float) -> str:
        """Assess overall ASO health status."""
        if overall_score >= 80:
            return "Excellent - Top-tier ASO performance"
        elif overall_score >= 65:
            return "Good - Competitive ASO with room for improvement"
        elif overall_score >= 50:
            return "Fair - Needs strategic improvements"
        else:
            return "Poor - Requires immediate ASO overhaul"

    def _prioritize_actions(
        self,
        recommendations: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize actions by impact and urgency."""
        # Sort by priority (high first) and expected impact
        priority_order = {'high': 0, 'medium': 1, 'low': 2}

        sorted_recommendations = sorted(
            recommendations,
            key=lambda x: priority_order[x['priority']]
        )

        return sorted_recommendations[:3]  # Top 3 priority actions

    def _identify_strengths(self, score_breakdown: Dict[str, Any]) -> List[str]:
        """Identify areas of strength (scores >= 75)."""
        strengths = []

        for category, data in score_breakdown.items():
            if data['score'] >= 75:
                strengths.append(
                    f"{category.replace('_', ' ').title()}: {data['score']}/100"
                )

        return strengths if strengths else ["Focus on building strengths across all areas"]

    def _identify_weaknesses(self, score_breakdown: Dict[str, Any]) -> List[str]:
        """Identify areas needing improvement (scores < 60)."""
        weaknesses = []

        for category, data in score_breakdown.items():
            if data['score'] < 60:
                weaknesses.append(
                    f"{category.replace('_', ' ').title()}: {data['score']}/100 - needs improvement"
                )

        return weaknesses if weaknesses else ["All areas performing adequately"]


def calculate_aso_score(
    metadata: Dict[str, Any],
    ratings: Dict[str, Any],
    keyword_performance: Dict[str, Any],
    conversion: Dict[str, Any]
) -> Dict[str, Any]:
    """
    Convenience function to calculate ASO score.

    Args:
        metadata: Metadata quality metrics
        ratings: Ratings data
        keyword_performance: Keyword ranking data
        conversion: Conversion metrics

    Returns:
        Complete ASO score report
    """
    scorer = ASOScorer()
    return scorer.calculate_overall_score(
        metadata,
        ratings,
        keyword_performance,
        conversion
    )
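To make the weighting scheme concrete: with all four `WEIGHTS` entries equal to 25, the weighted sum in `calculate_overall_score` reduces to a plain average of the component scores. A minimal standalone sketch of that aggregation (the component values below are illustrative, not part of the module):

```python
# Mirror of ASOScorer.WEIGHTS (total = 100)
WEIGHTS = {
    'metadata_quality': 25,
    'ratings_reviews': 25,
    'keyword_performance': 25,
    'conversion_metrics': 25,
}

def weighted_overall(component_scores: dict) -> float:
    """Combine 0-100 component scores using the weights above."""
    return round(
        sum(component_scores[name] * (weight / 100)
            for name, weight in WEIGHTS.items()),
        1,
    )

# Hypothetical app: strong ratings, weak keyword rankings
scores = {
    'metadata_quality': 72.0,
    'ratings_reviews': 85.5,
    'keyword_performance': 40.0,
    'conversion_metrics': 60.0,
}
overall = weighted_overall(scores)  # 64.4, which falls in the "Fair" band (50-65)
```

Adjusting a weight (say, raising `conversion_metrics` to 40 and lowering the others) shifts the overall score toward that component without changing any component scorer.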
@@ -0,0 +1,577 @@
"""
Competitor analysis module for App Store Optimization.

Analyzes top competitors' ASO strategies and identifies opportunities.
"""

from typing import Dict, List, Any, Optional
from collections import Counter
import re


class CompetitorAnalyzer:
    """Analyzes competitor apps to identify ASO opportunities."""

    def __init__(self, category: str, platform: str = 'apple'):
        """
        Initialize competitor analyzer.

        Args:
            category: App category (e.g., "Productivity", "Games")
            platform: 'apple' or 'google'
        """
        self.category = category
        self.platform = platform
        self.competitors = []

    def analyze_competitor(
        self,
        app_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Analyze a single competitor's ASO strategy.

        Args:
            app_data: Dictionary with app_name, title, description, rating, ratings_count, keywords

        Returns:
            Comprehensive competitor analysis
        """
        app_name = app_data.get('app_name', '')
        title = app_data.get('title', '')
        description = app_data.get('description', '')
        rating = app_data.get('rating', 0.0)
        ratings_count = app_data.get('ratings_count', 0)
        keywords = app_data.get('keywords', [])

        analysis = {
            'app_name': app_name,
            'title_analysis': self._analyze_title(title),
            'description_analysis': self._analyze_description(description),
            'keyword_strategy': self._extract_keyword_strategy(title, description, keywords),
            'rating_metrics': {
                'rating': rating,
                'ratings_count': ratings_count,
                'rating_quality': self._assess_rating_quality(rating, ratings_count)
            },
            'competitive_strength': self._calculate_competitive_strength(
                rating,
                ratings_count,
                len(description)
            ),
            'key_differentiators': self._identify_differentiators(description)
        }

        self.competitors.append(analysis)
        return analysis

    def compare_competitors(
        self,
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Compare multiple competitors and identify patterns.

        Args:
            competitors_data: List of competitor data dictionaries

        Returns:
            Comparative analysis with insights
        """
        # Analyze each competitor
        analyses = []
        for comp_data in competitors_data:
            analysis = self.analyze_competitor(comp_data)
            analyses.append(analysis)

        # Extract common keywords across competitors
        all_keywords = []
        for analysis in analyses:
            all_keywords.extend(analysis['keyword_strategy']['primary_keywords'])

        common_keywords = self._find_common_keywords(all_keywords)

        # Identify keyword gaps (used by some but not all)
        keyword_gaps = self._identify_keyword_gaps(analyses)

        # Rank competitors by strength
        ranked_competitors = sorted(
            analyses,
            key=lambda x: x['competitive_strength'],
            reverse=True
        )

        # Analyze rating distribution
        rating_analysis = self._analyze_rating_distribution(analyses)

        # Identify best practices
        best_practices = self._identify_best_practices(ranked_competitors)

        return {
            'category': self.category,
            'platform': self.platform,
            'competitors_analyzed': len(analyses),
            'ranked_competitors': ranked_competitors,
            'common_keywords': common_keywords,
            'keyword_gaps': keyword_gaps,
            'rating_analysis': rating_analysis,
            'best_practices': best_practices,
            'opportunities': self._identify_opportunities(
                analyses,
                common_keywords,
                keyword_gaps
            )
        }

    def identify_gaps(
        self,
        your_app_data: Dict[str, Any],
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Identify gaps between your app and competitors.

        Args:
            your_app_data: Your app's data
            competitors_data: List of competitor data

        Returns:
            Gap analysis with actionable recommendations
        """
        # Analyze your app
        your_analysis = self.analyze_competitor(your_app_data)

        # Analyze competitors
        competitor_comparison = self.compare_competitors(competitors_data)

        # Identify keyword gaps
        your_keywords = set(your_analysis['keyword_strategy']['primary_keywords'])
        competitor_keywords = set(competitor_comparison['common_keywords'])
        missing_keywords = competitor_keywords - your_keywords

        # Identify rating gap
        avg_competitor_rating = competitor_comparison['rating_analysis']['average_rating']
        rating_gap = avg_competitor_rating - your_analysis['rating_metrics']['rating']

        # Identify description length gap
        avg_competitor_desc_length = sum(
            len(comp['description_analysis']['text'])
            for comp in competitor_comparison['ranked_competitors']
        ) / len(competitor_comparison['ranked_competitors'])
        your_desc_length = len(your_analysis['description_analysis']['text'])
        desc_length_gap = avg_competitor_desc_length - your_desc_length

        return {
            'your_app': your_analysis,
            'keyword_gaps': {
                'missing_keywords': list(missing_keywords)[:10],
                'recommendations': self._generate_keyword_recommendations(missing_keywords)
            },
            'rating_gap': {
                'your_rating': your_analysis['rating_metrics']['rating'],
                'average_competitor_rating': avg_competitor_rating,
                'gap': round(rating_gap, 2),
                'action_items': self._generate_rating_improvement_actions(rating_gap)
            },
            'content_gap': {
                'your_description_length': your_desc_length,
                'average_competitor_length': int(avg_competitor_desc_length),
                'gap': int(desc_length_gap),
                'recommendations': self._generate_content_recommendations(desc_length_gap)
            },
            'competitive_positioning': self._assess_competitive_position(
                your_analysis,
                competitor_comparison
            )
        }

    def _analyze_title(self, title: str) -> Dict[str, Any]:
        """Analyze title structure and keyword usage."""
        # Split on the common title separators: hyphen, colon, pipe
        parts = re.split(r'[-:|]', title)

        return {
            'title': title,
            'length': len(title),
            'has_brand': len(parts) > 0,
            'has_keywords': len(parts) > 1,
            'components': [part.strip() for part in parts],
            'word_count': len(title.split()),
            'strategy': 'brand_plus_keywords' if len(parts) > 1 else 'brand_only'
        }
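The separator handling above can be exercised in isolation: `re.split(r'[-:|]', ...)` treats hyphen, colon, and pipe as equivalent brand/keyword boundaries. A minimal sketch (sample titles are illustrative):

```python
import re

def split_title(title: str) -> list:
    # Hyphen, colon, and pipe all mark a brand/keyword boundary.
    return [part.strip() for part in re.split(r'[-:|]', title)]

print(split_title("TaskFlow - AI Task Manager"))    # ['TaskFlow', 'AI Task Manager']
print(split_title("TaskFlow: Smart Todo & Tasks"))  # ['TaskFlow', 'Smart Todo & Tasks']
```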
    def _analyze_description(self, description: str) -> Dict[str, Any]:
        """Analyze description structure and content."""
        lines = description.split('\n')
        word_count = len(description.split())

        # Check for structural elements
        has_bullet_points = '•' in description or '*' in description
        has_sections = any(line.isupper() for line in lines if len(line) > 0)
        has_call_to_action = any(
            cta in description.lower()
            for cta in ['download', 'try', 'get', 'start', 'join']
        )

        # Extract features mentioned
        features = self._extract_features(description)

        return {
            'text': description,
            'length': len(description),
            'word_count': word_count,
            'structure': {
                'has_bullet_points': has_bullet_points,
                'has_sections': has_sections,
                'has_call_to_action': has_call_to_action
            },
            'features_mentioned': features,
            'readability': 'good' if 50 <= word_count <= 300 else 'needs_improvement'
        }

    def _extract_keyword_strategy(
        self,
        title: str,
        description: str,
        explicit_keywords: List[str]
    ) -> Dict[str, Any]:
        """Extract keyword strategy from metadata."""
        # Extract keywords from title
        title_keywords = [word.lower() for word in title.split() if len(word) > 3]

        # Extract frequently used words from description
        desc_words = re.findall(r'\b\w{4,}\b', description.lower())
        word_freq = Counter(desc_words)
        frequent_words = [word for word, count in word_freq.most_common(15) if count > 2]

        # Combine with explicit keywords
        all_keywords = list(set(title_keywords + frequent_words + explicit_keywords))

        return {
            'primary_keywords': title_keywords,
            'description_keywords': frequent_words[:10],
            'explicit_keywords': explicit_keywords,
            'total_unique_keywords': len(all_keywords),
            'keyword_focus': self._assess_keyword_focus(title_keywords, frequent_words)
        }

    def _assess_rating_quality(self, rating: float, ratings_count: int) -> str:
        """Assess the quality of ratings."""
        if ratings_count < 100:
            return 'insufficient_data'
        elif rating >= 4.5 and ratings_count > 1000:
            return 'excellent'
        elif rating >= 4.0 and ratings_count > 500:
            return 'good'
        elif rating >= 3.5:
            return 'average'
        else:
            return 'poor'
    def _calculate_competitive_strength(
        self,
        rating: float,
        ratings_count: int,
        description_length: int
    ) -> float:
        """
        Calculate overall competitive strength (0-100).

        Factors:
        - Rating quality (40%)
        - Rating volume (30%)
        - Metadata quality (30%)
        """
        # Rating quality score (0-40)
        rating_score = (rating / 5.0) * 40

        # Rating volume score (0-30)
        volume_score = min((ratings_count / 10000) * 30, 30)

        # Metadata quality score (0-30)
        metadata_score = min((description_length / 2000) * 30, 30)

        total_score = rating_score + volume_score + metadata_score

        return round(total_score, 1)
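A minimal standalone sketch of the weighting above (40% rating quality, 30% rating volume, 30% metadata depth), using the same saturation caps as the method; the sample inputs are illustrative:

```python
def competitive_strength(rating: float, ratings_count: int, description_length: int) -> float:
    # Rating quality: up to 40 points, linear in the 0-5 star rating.
    rating_score = (rating / 5.0) * 40
    # Rating volume: up to 30 points, saturating at 10,000 ratings.
    volume_score = min((ratings_count / 10000) * 30, 30)
    # Metadata quality: up to 30 points, saturating at 2,000 characters.
    metadata_score = min((description_length / 2000) * 30, 30)
    return round(rating_score + volume_score + metadata_score, 1)

print(competitive_strength(4.5, 5000, 1000))  # 66.0
```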
    def _identify_differentiators(self, description: str) -> List[str]:
        """Identify key differentiators from description."""
        differentiator_keywords = [
            'unique', 'only', 'first', 'best', 'leading', 'exclusive',
            'revolutionary', 'innovative', 'patent', 'award'
        ]

        differentiators = []
        sentences = description.split('.')

        for sentence in sentences:
            sentence_lower = sentence.lower()
            if any(keyword in sentence_lower for keyword in differentiator_keywords):
                differentiators.append(sentence.strip())

        return differentiators[:5]

    def _find_common_keywords(self, all_keywords: List[str]) -> List[str]:
        """Find keywords used by multiple competitors."""
        keyword_counts = Counter(all_keywords)
        # Return keywords used by at least 2 competitors
        common = [kw for kw, count in keyword_counts.items() if count >= 2]
        return sorted(common, key=lambda x: keyword_counts[x], reverse=True)[:20]

    def _identify_keyword_gaps(self, analyses: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Identify keywords used by some competitors but not others."""
        all_keywords_by_app = {}

        for analysis in analyses:
            app_name = analysis['app_name']
            keywords = analysis['keyword_strategy']['primary_keywords']
            all_keywords_by_app[app_name] = set(keywords)

        # Find keywords used by some but not all
        all_keywords_set = set()
        for keywords in all_keywords_by_app.values():
            all_keywords_set.update(keywords)

        gaps = []
        for keyword in all_keywords_set:
            using_apps = [
                app for app, keywords in all_keywords_by_app.items()
                if keyword in keywords
            ]
            if 1 < len(using_apps) < len(analyses):
                gaps.append({
                    'keyword': keyword,
                    'used_by': using_apps,
                    'usage_percentage': round(len(using_apps) / len(analyses) * 100, 1)
                })

        return sorted(gaps, key=lambda x: x['usage_percentage'], reverse=True)[:15]

    def _analyze_rating_distribution(self, analyses: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Analyze rating distribution across competitors."""
        ratings = [a['rating_metrics']['rating'] for a in analyses]
        ratings_counts = [a['rating_metrics']['ratings_count'] for a in analyses]

        return {
            'average_rating': round(sum(ratings) / len(ratings), 2),
            'highest_rating': max(ratings),
            'lowest_rating': min(ratings),
            'average_ratings_count': int(sum(ratings_counts) / len(ratings_counts)),
            'total_ratings_in_category': sum(ratings_counts)
        }

    def _identify_best_practices(self, ranked_competitors: List[Dict[str, Any]]) -> List[str]:
        """Identify best practices from top competitors."""
        if not ranked_competitors:
            return []

        top_competitor = ranked_competitors[0]
        practices = []

        # Title strategy
        title_analysis = top_competitor['title_analysis']
        if title_analysis['has_keywords']:
            practices.append(
                f"Title Strategy: Include primary keyword in title (e.g., '{title_analysis['title']}')"
            )

        # Description structure
        desc_analysis = top_competitor['description_analysis']
        if desc_analysis['structure']['has_bullet_points']:
            practices.append("Description: Use bullet points to highlight key features")

        if desc_analysis['structure']['has_sections']:
            practices.append("Description: Organize content with clear section headers")

        # Rating strategy
        rating_quality = top_competitor['rating_metrics']['rating_quality']
        if rating_quality in ['excellent', 'good']:
            practices.append(
                f"Ratings: Maintain high rating quality ({top_competitor['rating_metrics']['rating']}★) "
                f"with significant volume ({top_competitor['rating_metrics']['ratings_count']} ratings)"
            )

        return practices[:5]

    def _identify_opportunities(
        self,
        analyses: List[Dict[str, Any]],
        common_keywords: List[str],
        keyword_gaps: List[Dict[str, Any]]
    ) -> List[str]:
        """Identify ASO opportunities based on competitive analysis."""
        opportunities = []

        # Keyword opportunities from gaps
        if keyword_gaps:
            underutilized_keywords = [
                gap['keyword'] for gap in keyword_gaps
                if gap['usage_percentage'] < 50
            ]
            if underutilized_keywords:
                opportunities.append(
                    f"Target underutilized keywords: {', '.join(underutilized_keywords[:5])}"
                )

        # Rating opportunity
        avg_rating = sum(a['rating_metrics']['rating'] for a in analyses) / len(analyses)
        if avg_rating < 4.5:
            opportunities.append(
                f"Category average rating is {avg_rating:.1f} - opportunity to differentiate with higher ratings"
            )

        # Content depth opportunity
        avg_desc_length = sum(
            a['description_analysis']['length'] for a in analyses
        ) / len(analyses)
        if avg_desc_length < 1500:
            opportunities.append(
                "Competitors have relatively short descriptions - opportunity to provide more comprehensive information"
            )

        return opportunities[:5]

    def _extract_features(self, description: str) -> List[str]:
        """Extract feature mentions from description."""
        # Look for bullet points or numbered lists
        lines = description.split('\n')
        features = []

        for line in lines:
            line = line.strip()
            # Check if line starts with bullet or number
            if line and (line[0] in ['•', '*', '-', '✓'] or line[0].isdigit()):
                # Clean the line
                cleaned = re.sub(r'^[•*\-✓\d.)\s]+', '', line)
                if cleaned:
                    features.append(cleaned)

        return features[:10]

    def _assess_keyword_focus(
        self,
        title_keywords: List[str],
        description_keywords: List[str]
    ) -> str:
        """Assess keyword focus strategy."""
        overlap = set(title_keywords) & set(description_keywords)

        if len(overlap) >= 3:
            return 'consistent_focus'
        elif len(overlap) >= 1:
            return 'moderate_focus'
        else:
            return 'broad_focus'

    def _generate_keyword_recommendations(self, missing_keywords: set) -> List[str]:
        """Generate recommendations for missing keywords."""
        if not missing_keywords:
            return ["Your keyword coverage is comprehensive"]

        recommendations = []
        missing_list = list(missing_keywords)[:5]

        recommendations.append(
            f"Consider adding these competitor keywords: {', '.join(missing_list)}"
        )
        recommendations.append(
            "Test keyword variations in subtitle/promotional text first"
        )
        recommendations.append(
            "Monitor competitor keyword changes monthly"
        )

        return recommendations

    def _generate_rating_improvement_actions(self, rating_gap: float) -> List[str]:
        """Generate actions to improve ratings."""
        actions = []

        if rating_gap > 0.5:
            actions.append("CRITICAL: Significant rating gap - prioritize user satisfaction improvements")
            actions.append("Analyze negative reviews to identify top issues")
            actions.append("Implement in-app rating prompts after positive experiences")
            actions.append("Respond to all negative reviews professionally")
        elif rating_gap > 0.2:
            actions.append("Focus on incremental improvements to close rating gap")
            actions.append("Optimize timing of rating requests")
        else:
            actions.append("Ratings are competitive - maintain quality and continue improvements")

        return actions

    def _generate_content_recommendations(self, desc_length_gap: int) -> List[str]:
        """Generate content recommendations based on length gap."""
        recommendations = []

        if desc_length_gap > 500:
            recommendations.append("Expand description to match competitor detail level")
            recommendations.append("Add use case examples and success stories")
            recommendations.append("Include more feature explanations and benefits")
        elif desc_length_gap < -500:
            recommendations.append("Consider condensing description for better readability")
            recommendations.append("Focus on most important features first")
        else:
            recommendations.append("Description length is competitive")

        return recommendations

    def _assess_competitive_position(
        self,
        your_analysis: Dict[str, Any],
        competitor_comparison: Dict[str, Any]
    ) -> str:
        """Assess your competitive position."""
        your_strength = your_analysis['competitive_strength']
        competitors = competitor_comparison['ranked_competitors']

        if not competitors:
            return "No comparison data available"

        # Find where you'd rank
        better_than_count = sum(
            1 for comp in competitors
            if your_strength > comp['competitive_strength']
        )

        position_percentage = (better_than_count / len(competitors)) * 100

        if position_percentage >= 75:
            return "Strong Position: Top quartile in competitive strength"
        elif position_percentage >= 50:
            return "Competitive Position: Above average, opportunities for improvement"
        elif position_percentage >= 25:
            return "Challenging Position: Below average, requires strategic improvements"
        else:
            return "Weak Position: Bottom quartile, major ASO overhaul needed"
def analyze_competitor_set(
    category: str,
    competitors_data: List[Dict[str, Any]],
    platform: str = 'apple'
) -> Dict[str, Any]:
    """
    Convenience function to analyze a set of competitors.

    Args:
        category: App category
        competitors_data: List of competitor data
        platform: 'apple' or 'google'

    Returns:
        Complete competitive analysis
    """
    analyzer = CompetitorAnalyzer(category, platform)
    return analyzer.compare_competitors(competitors_data)
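The ranked-competitor ordering produced above reduces to a reverse sort on the strength score. A minimal standalone sketch with illustrative scores (app names and values are made up for demonstration):

```python
def rank_by_strength(analyses: list) -> list:
    # Highest competitive_strength first, matching the report's ordering.
    return sorted(analyses, key=lambda a: a['competitive_strength'], reverse=True)

apps = [
    {'app_name': 'A', 'competitive_strength': 72.5},
    {'app_name': 'B', 'competitive_strength': 88.1},
    {'app_name': 'C', 'competitive_strength': 64.0},
]
print([a['app_name'] for a in rank_by_strength(apps)])  # ['B', 'A', 'C']
```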
@@ -0,0 +1,170 @@
{
  "request_type": "keyword_research",
  "app_name": "TaskFlow Pro",
  "keyword_analysis": {
    "total_keywords_analyzed": 25,
    "primary_keywords": [
      {
        "keyword": "task manager",
        "search_volume": 45000,
        "competition_level": "high",
        "relevance_score": 0.95,
        "difficulty_score": 72.5,
        "potential_score": 78.3,
        "recommendation": "High priority - target immediately"
      },
      {
        "keyword": "productivity app",
        "search_volume": 38000,
        "competition_level": "high",
        "relevance_score": 0.90,
        "difficulty_score": 68.2,
        "potential_score": 75.1,
        "recommendation": "High priority - target immediately"
      },
      {
        "keyword": "todo list",
        "search_volume": 52000,
        "competition_level": "very_high",
        "relevance_score": 0.85,
        "difficulty_score": 78.9,
        "potential_score": 71.4,
        "recommendation": "High priority - target immediately"
      }
    ],
    "secondary_keywords": [
      {
        "keyword": "team task manager",
        "search_volume": 8500,
        "competition_level": "medium",
        "relevance_score": 0.88,
        "difficulty_score": 42.3,
        "potential_score": 68.7,
        "recommendation": "Good opportunity - include in metadata"
      },
      {
        "keyword": "project planning app",
        "search_volume": 12000,
        "competition_level": "medium",
        "relevance_score": 0.75,
        "difficulty_score": 48.1,
        "potential_score": 64.2,
        "recommendation": "Good opportunity - include in metadata"
      }
    ],
    "long_tail_keywords": [
      {
        "keyword": "ai task prioritization",
        "search_volume": 2800,
        "competition_level": "low",
        "relevance_score": 0.95,
        "difficulty_score": 25.4,
        "potential_score": 82.6,
        "recommendation": "Excellent long-tail opportunity"
      },
      {
        "keyword": "team productivity tool",
        "search_volume": 3500,
        "competition_level": "low",
        "relevance_score": 0.85,
        "difficulty_score": 28.7,
        "potential_score": 79.3,
        "recommendation": "Excellent long-tail opportunity"
      }
    ]
  },
  "competitor_insights": {
    "competitors_analyzed": 4,
    "common_keywords": [
      "task",
      "todo",
      "list",
      "productivity",
      "organize",
      "manage"
    ],
    "keyword_gaps": [
      {
        "keyword": "ai prioritization",
        "used_by": ["None of the major competitors"],
        "opportunity": "Unique positioning opportunity"
      },
      {
        "keyword": "smart task manager",
        "used_by": ["Things 3"],
        "opportunity": "Underutilized by most competitors"
      }
    ]
  },
  "metadata_recommendations": {
    "apple_app_store": {
      "title_options": [
        {
          "title": "TaskFlow - AI Task Manager",
          "length": 26,
          "keywords_included": ["task manager", "ai"],
          "strategy": "brand_plus_primary"
        },
        {
          "title": "TaskFlow: Smart Todo & Tasks",
          "length": 28,
          "keywords_included": ["todo", "tasks"],
          "strategy": "brand_plus_multiple"
        }
      ],
      "subtitle_recommendation": "AI-Powered Team Productivity",
      "keyword_field": "productivity,organize,planner,schedule,workflow,reminders,collaboration,calendar,sync,priorities",
      "description_focus": "Lead with AI differentiation, emphasize team features"
    },
    "google_play_store": {
      "title_options": [
        {
          "title": "TaskFlow - AI Task Manager & Team Productivity",
          "length": 46,
          "keywords_included": ["task manager", "ai", "team", "productivity"],
          "strategy": "keyword_rich"
        }
      ],
      "short_description_recommendation": "AI task manager - Organize, prioritize, and collaborate with your team",
      "description_focus": "Keywords naturally integrated throughout 4000 character description"
    }
  },
  "strategic_recommendations": [
    "Focus on 'AI prioritization' as unique differentiator - low competition, high relevance",
    "Target 'team task manager' and 'team productivity' keywords - good search volume, lower competition than generic terms",
    "Include long-tail keywords in description for additional discovery opportunities",
    "Test title variations with A/B testing after launch",
    "Monitor competitor keyword changes quarterly"
  ],
  "priority_actions": [
    {
      "action": "Optimize app title with primary keyword",
      "priority": "high",
      "expected_impact": "15-25% improvement in search visibility"
    },
    {
      "action": "Create description highlighting AI features with natural keyword integration",
      "priority": "high",
      "expected_impact": "10-15% improvement in conversion rate"
    },
    {
      "action": "Plan A/B tests for icon and screenshots post-launch",
      "priority": "medium",
      "expected_impact": "5-10% improvement in conversion rate"
    }
  ],
  "aso_health_estimate": {
    "current_score": "N/A (pre-launch)",
    "potential_score_with_optimizations": "75-80/100",
    "key_strengths": [
      "Unique AI differentiation",
      "Clear target audience",
      "Strong feature set"
    ],
    "areas_to_develop": [
      "Build rating volume post-launch",
      "Monitor and respond to reviews",
      "Continuous keyword optimization"
    ]
  }
}
@@ -0,0 +1,406 @@
"""
Keyword analysis module for App Store Optimization.
Analyzes keyword search volume, competition, and relevance for app discovery.
"""

from typing import Dict, List, Any, Optional, Tuple
import re
from collections import Counter


class KeywordAnalyzer:
    """Analyzes keywords for ASO effectiveness."""

    # Competition level thresholds (based on number of competing apps)
    COMPETITION_THRESHOLDS = {
        'low': 1000,
        'medium': 5000,
        'high': 10000
    }

    # Search volume categories (monthly searches estimate)
    VOLUME_CATEGORIES = {
        'very_low': 1000,
        'low': 5000,
        'medium': 20000,
        'high': 100000,
        'very_high': 500000
    }

    def __init__(self):
        """Initialize keyword analyzer."""
        self.analyzed_keywords = {}
    def analyze_keyword(
        self,
        keyword: str,
        search_volume: int = 0,
        competing_apps: int = 0,
        relevance_score: float = 0.0
    ) -> Dict[str, Any]:
        """
        Analyze a single keyword for ASO potential.

        Args:
            keyword: The keyword to analyze
            search_volume: Estimated monthly search volume
            competing_apps: Number of apps competing for this keyword
            relevance_score: Relevance to your app (0.0-1.0)

        Returns:
            Dictionary with keyword analysis
        """
        competition_level = self._calculate_competition_level(competing_apps)
        volume_category = self._categorize_search_volume(search_volume)
        difficulty_score = self._calculate_keyword_difficulty(
            search_volume,
            competing_apps
        )

        # Calculate potential score (0-100)
        potential_score = self._calculate_potential_score(
            search_volume,
            competing_apps,
            relevance_score
        )

        analysis = {
            'keyword': keyword,
            'search_volume': search_volume,
            'volume_category': volume_category,
            'competing_apps': competing_apps,
            'competition_level': competition_level,
            'relevance_score': relevance_score,
            'difficulty_score': difficulty_score,
            'potential_score': potential_score,
            'recommendation': self._generate_recommendation(
                potential_score,
                difficulty_score,
                relevance_score
            ),
            'keyword_length': len(keyword.split()),
            'is_long_tail': len(keyword.split()) >= 3
        }

        self.analyzed_keywords[keyword] = analysis
        return analysis

    def compare_keywords(self, keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
        """
        Compare multiple keywords and rank by potential.

        Args:
            keywords_data: List of dicts with keyword, search_volume, competing_apps, relevance_score

        Returns:
            Comparison report with ranked keywords
        """
        analyses = []
        for kw_data in keywords_data:
            analysis = self.analyze_keyword(
                keyword=kw_data['keyword'],
                search_volume=kw_data.get('search_volume', 0),
                competing_apps=kw_data.get('competing_apps', 0),
                relevance_score=kw_data.get('relevance_score', 0.0)
            )
            analyses.append(analysis)

        # Sort by potential score (descending)
        ranked_keywords = sorted(
            analyses,
            key=lambda x: x['potential_score'],
            reverse=True
        )

        # Categorize keywords
        primary_keywords = [
            kw for kw in ranked_keywords
            if kw['potential_score'] >= 70 and kw['relevance_score'] >= 0.8
        ]

        secondary_keywords = [
            kw for kw in ranked_keywords
            if 50 <= kw['potential_score'] < 70 and kw['relevance_score'] >= 0.6
        ]

        long_tail_keywords = [
            kw for kw in ranked_keywords
            if kw['is_long_tail'] and kw['relevance_score'] >= 0.7
        ]

        return {
            'total_keywords_analyzed': len(analyses),
            'ranked_keywords': ranked_keywords,
            'primary_keywords': primary_keywords[:5],  # Top 5
            'secondary_keywords': secondary_keywords[:10],  # Top 10
            'long_tail_keywords': long_tail_keywords[:10],  # Top 10
            'summary': self._generate_comparison_summary(
                primary_keywords,
                secondary_keywords,
                long_tail_keywords
            )
        }

    def find_long_tail_opportunities(
        self,
        base_keyword: str,
        modifiers: List[str]
    ) -> List[Dict[str, Any]]:
        """
        Generate long-tail keyword variations.

        Args:
            base_keyword: Core keyword (e.g., "task manager")
            modifiers: List of modifiers (e.g., ["free", "simple", "team"])

        Returns:
            List of long-tail keyword suggestions
        """
        long_tail_keywords = []

        # Generate combinations
        for modifier in modifiers:
            # Modifier + base
            variation1 = f"{modifier} {base_keyword}"
            long_tail_keywords.append({
                'keyword': variation1,
                'pattern': 'modifier_base',
                'estimated_competition': 'low',
                'rationale': f"Less competitive variation of '{base_keyword}'"
            })

            # Base + modifier
            variation2 = f"{base_keyword} {modifier}"
            long_tail_keywords.append({
                'keyword': variation2,
                'pattern': 'base_modifier',
                'estimated_competition': 'low',
                'rationale': f"Specific use-case variation of '{base_keyword}'"
            })

        # Add question-based long-tail
        question_words = ['how', 'what', 'best', 'top']
        for q_word in question_words:
            question_keyword = f"{q_word} {base_keyword}"
            long_tail_keywords.append({
                'keyword': question_keyword,
                'pattern': 'question_based',
                'estimated_competition': 'very_low',
                'rationale': "Informational search query"
            })

        return long_tail_keywords
    def extract_keywords_from_text(
        self,
        text: str,
        min_word_length: int = 3
    ) -> List[Tuple[str, int]]:
        """
        Extract potential keywords from text (descriptions, reviews).

        Args:
            text: Text to analyze
            min_word_length: Minimum word length to consider

        Returns:
            List of (keyword, frequency) tuples
        """
        # Clean and normalize text
        text = text.lower()
        text = re.sub(r'[^\w\s]', ' ', text)

        # Extract words
        words = text.split()

        # Filter by length
        words = [w for w in words if len(w) >= min_word_length]

        # Remove common stop words
        stop_words = {
            'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
            'but', 'not', 'you', 'all', 'can', 'are', 'was', 'were', 'been'
        }
        words = [w for w in words if w not in stop_words]

        # Count frequency
        word_counts = Counter(words)

        # Extract 2-word phrases
        phrases = []
        for i in range(len(words) - 1):
            phrase = f"{words[i]} {words[i+1]}"
            phrases.append(phrase)

        phrase_counts = Counter(phrases)

        # Combine and sort
        all_keywords = list(word_counts.items()) + list(phrase_counts.items())
        all_keywords.sort(key=lambda x: x[1], reverse=True)

        return all_keywords[:50]  # Top 50
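The word/bigram counting above can be exercised standalone with `collections.Counter`; note that because bigrams are formed after stop-word removal, they may join words that were not adjacent in the raw text:

```python
from collections import Counter

# A pre-cleaned, stop-word-filtered token list (illustrative sample).
words = ['task', 'manager', 'task', 'manager', 'productivity']
word_counts = Counter(words)
bigrams = Counter(f"{words[i]} {words[i+1]}" for i in range(len(words) - 1))

print(word_counts.most_common(1))  # [('task', 2)]
print(bigrams.most_common(1))      # [('task manager', 2)]
```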
    def calculate_keyword_density(
        self,
        text: str,
        target_keywords: List[str]
    ) -> Dict[str, float]:
        """
        Calculate keyword density in text.

        Args:
            text: Text to analyze (title, description)
            target_keywords: Keywords to check density for

        Returns:
            Dictionary of keyword: density (percentage)
        """
        text_lower = text.lower()
        total_words = len(text_lower.split())

        densities = {}
        for keyword in target_keywords:
            keyword_lower = keyword.lower()
            occurrences = text_lower.count(keyword_lower)
            density = (occurrences / total_words) * 100 if total_words > 0 else 0
            densities[keyword] = round(density, 2)

        return densities
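Density here is substring occurrences over total word count, so a multi-word keyword counts once per phrase occurrence rather than once per word. A standalone sketch of the same arithmetic (the sample text is illustrative):

```python
def keyword_density(text: str, keyword: str) -> float:
    # Occurrences of the keyword phrase per 100 words of text.
    words = text.lower().split()
    occurrences = text.lower().count(keyword.lower())
    return round(occurrences / len(words) * 100, 2) if words else 0.0

# 2 phrase occurrences in 10 words -> 20.0
print(keyword_density("Task manager for teams. A task manager you will love.", "task manager"))
```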
    def _calculate_competition_level(self, competing_apps: int) -> str:
        """Determine competition level based on number of competing apps."""
        if competing_apps < self.COMPETITION_THRESHOLDS['low']:
            return 'low'
        elif competing_apps < self.COMPETITION_THRESHOLDS['medium']:
            return 'medium'
        elif competing_apps < self.COMPETITION_THRESHOLDS['high']:
            return 'high'
        else:
            return 'very_high'

    def _categorize_search_volume(self, search_volume: int) -> str:
        """Categorize search volume."""
        if search_volume < self.VOLUME_CATEGORIES['very_low']:
            return 'very_low'
        elif search_volume < self.VOLUME_CATEGORIES['low']:
            return 'low'
        elif search_volume < self.VOLUME_CATEGORIES['medium']:
            return 'medium'
        elif search_volume < self.VOLUME_CATEGORIES['high']:
            return 'high'
        else:
            return 'very_high'

    def _calculate_keyword_difficulty(
        self,
        search_volume: int,
        competing_apps: int
    ) -> float:
        """
        Calculate keyword difficulty score (0-100).
        Higher score = harder to rank.
        """
        if competing_apps == 0:
            return 0.0

        # Competition factor (0-1)
        competition_factor = min(competing_apps / 50000, 1.0)

        # Volume factor (0-1) - higher volume = more difficulty
        volume_factor = min(search_volume / 1000000, 1.0)

        # Difficulty score (weighted average)
        difficulty = (competition_factor * 0.7 + volume_factor * 0.3) * 100

        return round(difficulty, 1)
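The difficulty blend above (70% competition, 30% volume, both saturating) in isolation, using the same 50,000-app and 1,000,000-search caps; the sample inputs are illustrative:

```python
def keyword_difficulty(search_volume: int, competing_apps: int) -> float:
    # No competitors means no ranking difficulty.
    if competing_apps == 0:
        return 0.0
    # Both factors saturate at 1.0.
    competition_factor = min(competing_apps / 50000, 1.0)
    volume_factor = min(search_volume / 1000000, 1.0)
    # Competition dominates the blend (70/30 weighting).
    return round((competition_factor * 0.7 + volume_factor * 0.3) * 100, 1)

print(keyword_difficulty(100000, 25000))  # (0.5 * 0.7 + 0.1 * 0.3) * 100 = 38.0
```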
    def _calculate_potential_score(
        self,
        search_volume: int,
        competing_apps: int,
        relevance_score: float
    ) -> float:
        """
        Calculate overall keyword potential (0-100).
        Higher score = better opportunity.
        """
        # Volume score (0-40 points)
        volume_score = min((search_volume / 100000) * 40, 40)

        # Competition score (0-30 points) - inverse relationship
        if competing_apps > 0:
            competition_score = max(30 - (competing_apps / 500), 0)
        else:
            competition_score = 30

        # Relevance score (0-30 points)
        relevance_points = relevance_score * 30

        total_score = volume_score + competition_score + relevance_points

        return round(min(total_score, 100), 1)
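A standalone sketch of the potential formula above (40 points for volume, 30 for low competition, 30 for relevance), with the same caps; the low-competition, high-relevance example mirrors why long-tail keywords score well:

```python
def keyword_potential(search_volume: int, competing_apps: int, relevance: float) -> float:
    # Volume: up to 40 points, saturating at 100,000 monthly searches.
    volume_score = min(search_volume / 100000 * 40, 40)
    # Competition: up to 30 points, losing 1 point per 500 competing apps.
    competition_score = max(30 - competing_apps / 500, 0) if competing_apps > 0 else 30
    # Relevance (0.0-1.0): up to 30 points.
    return round(min(volume_score + competition_score + relevance * 30, 100), 1)

print(keyword_potential(2800, 900, 0.95))  # 1.12 + 28.2 + 28.5 = 57.8
```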
def _generate_recommendation(
|
||||
self,
|
||||
potential_score: float,
|
||||
difficulty_score: float,
|
||||
relevance_score: float
|
||||
) -> str:
|
||||
"""Generate actionable recommendation for keyword."""
|
||||
if relevance_score < 0.5:
|
||||
return "Low relevance - avoid targeting"
|
||||
|
||||
if potential_score >= 70:
|
||||
return "High priority - target immediately"
|
||||
elif potential_score >= 50:
|
||||
if difficulty_score < 50:
|
||||
return "Good opportunity - include in metadata"
|
||||
else:
|
||||
return "Competitive - use in description, not title"
|
||||
elif potential_score >= 30:
|
||||
return "Secondary keyword - use for long-tail variations"
|
||||
else:
|
||||
return "Low potential - deprioritize"
|
||||
|
||||
def _generate_comparison_summary(
|
||||
self,
|
||||
primary_keywords: List[Dict[str, Any]],
|
||||
secondary_keywords: List[Dict[str, Any]],
|
||||
long_tail_keywords: List[Dict[str, Any]]
|
||||
) -> str:
|
||||
"""Generate summary of keyword comparison."""
|
||||
summary_parts = []
|
||||
|
||||
summary_parts.append(
|
||||
f"Identified {len(primary_keywords)} high-priority primary keywords."
|
||||
)
|
||||
|
||||
if primary_keywords:
|
||||
top_keyword = primary_keywords[0]['keyword']
|
||||
summary_parts.append(
|
||||
f"Top recommendation: '{top_keyword}' (potential score: {primary_keywords[0]['potential_score']})."
|
||||
)
|
||||
|
||||
summary_parts.append(
|
||||
f"Found {len(secondary_keywords)} secondary keywords for description and metadata."
|
||||
)
|
||||
|
||||
summary_parts.append(
|
||||
f"Discovered {len(long_tail_keywords)} long-tail opportunities with lower competition."
|
||||
)
|
||||
|
||||
return " ".join(summary_parts)
|
||||
|
||||
|
||||
def analyze_keyword_set(keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""
|
||||
Convenience function to analyze a set of keywords.
|
||||
|
||||
Args:
|
||||
keywords_data: List of keyword data dictionaries
|
||||
|
||||
Returns:
|
||||
Complete analysis report
|
||||
"""
|
||||
analyzer = KeywordAnalyzer()
|
||||
return analyzer.compare_keywords(keywords_data)
|
||||
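The difficulty and potential formulas above can be exercised on their own. A minimal sketch (the standalone helper names below are hypothetical; the real versions are methods on `KeywordAnalyzer`) for a sample keyword with 40k monthly searches and 12k competing apps:

```python
# Standalone reproduction of the scoring math in _calculate_keyword_difficulty
# and _calculate_potential_score (helper names here are illustrative only).

def keyword_difficulty(search_volume: int, competing_apps: int) -> float:
    if competing_apps == 0:
        return 0.0
    competition_factor = min(competing_apps / 50000, 1.0)
    volume_factor = min(search_volume / 1000000, 1.0)
    return round((competition_factor * 0.7 + volume_factor * 0.3) * 100, 1)

def keyword_potential(search_volume: int, competing_apps: int, relevance: float) -> float:
    volume_score = min((search_volume / 100000) * 40, 40)       # 0-40 points
    competition_score = max(30 - (competing_apps / 500), 0) if competing_apps > 0 else 30
    return round(min(volume_score + competition_score + relevance * 30, 100), 1)

# "task manager": 40k searches/month, 12k competing apps, relevance 0.9
print(keyword_difficulty(40000, 12000))      # → 18.0
print(keyword_potential(40000, 12000, 0.9))  # → 49.0
```

Note how the 0.7 competition weighting dominates difficulty, while potential rewards volume and relevance more heavily than low competition.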
@@ -0,0 +1,739 @@
"""
Launch checklist module for App Store Optimization.
Generates comprehensive pre-launch and update checklists.
"""

from typing import Dict, List, Any, Optional
from datetime import datetime, timedelta


class LaunchChecklistGenerator:
    """Generates comprehensive checklists for app launches and updates."""

    def __init__(self, platform: str = 'both'):
        """
        Initialize checklist generator.

        Args:
            platform: 'apple', 'google', or 'both'
        """
        if platform not in ['apple', 'google', 'both']:
            raise ValueError("Platform must be 'apple', 'google', or 'both'")

        self.platform = platform

    def generate_prelaunch_checklist(
        self,
        app_info: Dict[str, Any],
        launch_date: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Generate comprehensive pre-launch checklist.

        Args:
            app_info: App information (name, category, target_audience)
            launch_date: Target launch date (YYYY-MM-DD)

        Returns:
            Complete pre-launch checklist
        """
        checklist = {
            'app_info': app_info,
            'launch_date': launch_date,
            'checklists': {}
        }

        # Generate platform-specific checklists
        if self.platform in ['apple', 'both']:
            checklist['checklists']['apple'] = self._generate_apple_checklist(app_info)

        if self.platform in ['google', 'both']:
            checklist['checklists']['google'] = self._generate_google_checklist(app_info)

        # Add universal checklist items
        checklist['checklists']['universal'] = self._generate_universal_checklist(app_info)

        # Generate timeline
        if launch_date:
            checklist['timeline'] = self._generate_launch_timeline(launch_date)

        # Calculate completion status
        checklist['summary'] = self._calculate_checklist_summary(checklist['checklists'])

        return checklist

    def validate_app_store_compliance(
        self,
        app_data: Dict[str, Any],
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Validate compliance with app store guidelines.

        Args:
            app_data: App data including metadata, privacy policy, etc.
            platform: 'apple' or 'google'

        Returns:
            Compliance validation report
        """
        validation_results = {
            'platform': platform,
            'is_compliant': True,
            'errors': [],
            'warnings': [],
            'recommendations': []
        }

        if platform == 'apple':
            self._validate_apple_compliance(app_data, validation_results)
        elif platform == 'google':
            self._validate_google_compliance(app_data, validation_results)

        # Determine overall compliance
        validation_results['is_compliant'] = len(validation_results['errors']) == 0

        return validation_results

    def create_update_plan(
        self,
        current_version: str,
        planned_features: List[str],
        update_frequency: str = 'monthly'
    ) -> Dict[str, Any]:
        """
        Create update cadence and feature rollout plan.

        Args:
            current_version: Current app version
            planned_features: List of planned features
            update_frequency: 'weekly', 'biweekly', 'monthly', 'quarterly'

        Returns:
            Update plan with cadence and feature schedule
        """
        # Calculate next versions
        next_versions = self._calculate_next_versions(
            current_version,
            update_frequency,
            len(planned_features)
        )

        # Distribute features across versions
        feature_schedule = self._distribute_features(
            planned_features,
            next_versions
        )

        # Generate "What's New" templates
        whats_new_templates = [
            self._generate_whats_new_template(version_data)
            for version_data in feature_schedule
        ]

        return {
            'current_version': current_version,
            'update_frequency': update_frequency,
            'planned_updates': len(feature_schedule),
            'feature_schedule': feature_schedule,
            'whats_new_templates': whats_new_templates,
            'recommendations': self._generate_update_recommendations(update_frequency)
        }

    def optimize_launch_timing(
        self,
        app_category: str,
        target_audience: str,
        current_date: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Recommend optimal launch timing.

        Args:
            app_category: App category
            target_audience: Target audience description
            current_date: Current date (YYYY-MM-DD), defaults to today

        Returns:
            Launch timing recommendations
        """
        if not current_date:
            current_date = datetime.now().strftime('%Y-%m-%d')

        # Analyze launch timing factors
        day_of_week_rec = self._recommend_day_of_week(app_category)
        seasonal_rec = self._recommend_seasonal_timing(app_category, current_date)
        competitive_rec = self._analyze_competitive_timing(app_category)

        # Calculate optimal dates
        optimal_dates = self._calculate_optimal_dates(
            current_date,
            day_of_week_rec,
            seasonal_rec
        )

        return {
            'current_date': current_date,
            'optimal_launch_dates': optimal_dates,
            'day_of_week_recommendation': day_of_week_rec,
            'seasonal_considerations': seasonal_rec,
            'competitive_timing': competitive_rec,
            'final_recommendation': self._generate_timing_recommendation(
                optimal_dates,
                seasonal_rec
            )
        }

    def plan_seasonal_campaigns(
        self,
        app_category: str,
        current_month: Optional[int] = None
    ) -> Dict[str, Any]:
        """
        Identify seasonal opportunities for ASO campaigns.

        Args:
            app_category: App category
            current_month: Current month (1-12), defaults to current

        Returns:
            Seasonal campaign opportunities
        """
        if not current_month:
            current_month = datetime.now().month

        # Identify relevant seasonal events
        seasonal_opportunities = self._identify_seasonal_opportunities(
            app_category,
            current_month
        )

        # Generate campaign ideas
        campaigns = [
            self._generate_seasonal_campaign(opportunity)
            for opportunity in seasonal_opportunities
        ]

        return {
            'current_month': current_month,
            'category': app_category,
            'seasonal_opportunities': seasonal_opportunities,
            'campaign_ideas': campaigns,
            'implementation_timeline': self._create_seasonal_timeline(campaigns)
        }

    def _generate_apple_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate Apple App Store specific checklist."""
        return [
            {
                'category': 'App Store Connect Setup',
                'items': [
                    {'task': 'App Store Connect account created', 'status': 'pending'},
                    {'task': 'App bundle ID registered', 'status': 'pending'},
                    {'task': 'App Privacy declarations completed', 'status': 'pending'},
                    {'task': 'Age rating questionnaire completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Metadata (Apple)',
                'items': [
                    {'task': 'App title (30 chars max)', 'status': 'pending'},
                    {'task': 'Subtitle (30 chars max)', 'status': 'pending'},
                    {'task': 'Promotional text (170 chars max)', 'status': 'pending'},
                    {'task': 'Description (4000 chars max)', 'status': 'pending'},
                    {'task': 'Keywords (100 chars, comma-separated)', 'status': 'pending'},
                    {'task': 'Category selection (primary + secondary)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Visual Assets (Apple)',
                'items': [
                    {'task': 'App icon (1024x1024px)', 'status': 'pending'},
                    {'task': 'Screenshots (iPhone 6.7" required)', 'status': 'pending'},
                    {'task': 'Screenshots (iPhone 5.5" required)', 'status': 'pending'},
                    {'task': 'Screenshots (iPad Pro 12.9" if iPad app)', 'status': 'pending'},
                    {'task': 'App preview video (optional but recommended)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Technical Requirements (Apple)',
                'items': [
                    {'task': 'Build uploaded to App Store Connect', 'status': 'pending'},
                    {'task': 'TestFlight testing completed', 'status': 'pending'},
                    {'task': 'App tested on required iOS versions', 'status': 'pending'},
                    {'task': 'Crash-free rate > 99%', 'status': 'pending'},
                    {'task': 'All links in app/metadata working', 'status': 'pending'}
                ]
            },
            {
                'category': 'Legal & Privacy (Apple)',
                'items': [
                    {'task': 'Privacy Policy URL provided', 'status': 'pending'},
                    {'task': 'Terms of Service URL (if applicable)', 'status': 'pending'},
                    {'task': 'Data collection declarations accurate', 'status': 'pending'},
                    {'task': 'Third-party SDKs disclosed', 'status': 'pending'}
                ]
            }
        ]

    def _generate_google_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate Google Play Store specific checklist."""
        return [
            {
                'category': 'Play Console Setup',
                'items': [
                    {'task': 'Google Play Console account created', 'status': 'pending'},
                    {'task': 'Developer profile completed', 'status': 'pending'},
                    {'task': 'Payment merchant account linked (if paid app)', 'status': 'pending'},
                    {'task': 'Content rating questionnaire completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Metadata (Google)',
                'items': [
                    {'task': 'App title (50 chars max)', 'status': 'pending'},
                    {'task': 'Short description (80 chars max)', 'status': 'pending'},
                    {'task': 'Full description (4000 chars max)', 'status': 'pending'},
                    {'task': 'Category selection', 'status': 'pending'},
                    {'task': 'Tags (up to 5)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Visual Assets (Google)',
                'items': [
                    {'task': 'App icon (512x512px)', 'status': 'pending'},
                    {'task': 'Feature graphic (1024x500px)', 'status': 'pending'},
                    {'task': 'Screenshots (2-8 required, phone)', 'status': 'pending'},
                    {'task': 'Screenshots (tablet, if applicable)', 'status': 'pending'},
                    {'task': 'Promo video (YouTube link, optional)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Technical Requirements (Google)',
                'items': [
                    {'task': 'APK/AAB uploaded to Play Console', 'status': 'pending'},
                    {'task': 'Internal testing completed', 'status': 'pending'},
                    {'task': 'App tested on required Android versions', 'status': 'pending'},
                    {'task': 'Target API level meets requirements', 'status': 'pending'},
                    {'task': 'All permissions justified', 'status': 'pending'}
                ]
            },
            {
                'category': 'Legal & Privacy (Google)',
                'items': [
                    {'task': 'Privacy Policy URL provided', 'status': 'pending'},
                    {'task': 'Data safety section completed', 'status': 'pending'},
                    {'task': 'Ads disclosure (if applicable)', 'status': 'pending'},
                    {'task': 'In-app purchase disclosure (if applicable)', 'status': 'pending'}
                ]
            }
        ]

    def _generate_universal_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate universal (both platforms) checklist."""
        return [
            {
                'category': 'Pre-Launch Marketing',
                'items': [
                    {'task': 'Landing page created', 'status': 'pending'},
                    {'task': 'Social media accounts setup', 'status': 'pending'},
                    {'task': 'Press kit prepared', 'status': 'pending'},
                    {'task': 'Beta tester feedback collected', 'status': 'pending'},
                    {'task': 'Launch announcement drafted', 'status': 'pending'}
                ]
            },
            {
                'category': 'ASO Preparation',
                'items': [
                    {'task': 'Keyword research completed', 'status': 'pending'},
                    {'task': 'Competitor analysis done', 'status': 'pending'},
                    {'task': 'A/B test plan created for post-launch', 'status': 'pending'},
                    {'task': 'Analytics tracking configured', 'status': 'pending'}
                ]
            },
            {
                'category': 'Quality Assurance',
                'items': [
                    {'task': 'All core features tested', 'status': 'pending'},
                    {'task': 'User flows validated', 'status': 'pending'},
                    {'task': 'Performance testing completed', 'status': 'pending'},
                    {'task': 'Accessibility features tested', 'status': 'pending'},
                    {'task': 'Security audit completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Support Infrastructure',
                'items': [
                    {'task': 'Support email/system setup', 'status': 'pending'},
                    {'task': 'FAQ page created', 'status': 'pending'},
                    {'task': 'Documentation for users prepared', 'status': 'pending'},
                    {'task': 'Team trained on handling reviews', 'status': 'pending'}
                ]
            }
        ]

    def _generate_launch_timeline(self, launch_date: str) -> List[Dict[str, Any]]:
        """Generate timeline with milestones leading to launch."""
        launch_dt = datetime.strptime(launch_date, '%Y-%m-%d')

        milestones = [
            {
                'date': (launch_dt - timedelta(days=90)).strftime('%Y-%m-%d'),
                'milestone': '90 days before: Complete keyword research and competitor analysis'
            },
            {
                'date': (launch_dt - timedelta(days=60)).strftime('%Y-%m-%d'),
                'milestone': '60 days before: Finalize metadata and visual assets'
            },
            {
                'date': (launch_dt - timedelta(days=45)).strftime('%Y-%m-%d'),
                'milestone': '45 days before: Begin beta testing program'
            },
            {
                'date': (launch_dt - timedelta(days=30)).strftime('%Y-%m-%d'),
                'milestone': '30 days before: Submit app for review (Apple review typically takes 1-2 days; Google can take several days)'
            },
            {
                'date': (launch_dt - timedelta(days=14)).strftime('%Y-%m-%d'),
                'milestone': '14 days before: Prepare launch marketing materials'
            },
            {
                'date': (launch_dt - timedelta(days=7)).strftime('%Y-%m-%d'),
                'milestone': '7 days before: Set up analytics and monitoring'
            },
            {
                'date': launch_dt.strftime('%Y-%m-%d'),
                'milestone': 'Launch Day: Release app and execute marketing plan'
            },
            {
                'date': (launch_dt + timedelta(days=7)).strftime('%Y-%m-%d'),
                'milestone': '7 days after: Monitor metrics, respond to reviews, address critical issues'
            },
            {
                'date': (launch_dt + timedelta(days=30)).strftime('%Y-%m-%d'),
                'milestone': '30 days after: Analyze launch metrics, plan first update'
            }
        ]

        return milestones

    def _calculate_checklist_summary(self, checklists: Dict[str, List[Dict[str, Any]]]) -> Dict[str, Any]:
        """Calculate completion summary."""
        total_items = 0
        completed_items = 0

        for platform, categories in checklists.items():
            for category in categories:
                for item in category['items']:
                    total_items += 1
                    if item['status'] == 'completed':
                        completed_items += 1

        completion_percentage = (completed_items / total_items * 100) if total_items > 0 else 0

        return {
            'total_items': total_items,
            'completed_items': completed_items,
            'pending_items': total_items - completed_items,
            'completion_percentage': round(completion_percentage, 1),
            'is_ready_to_launch': completion_percentage == 100
        }

    def _validate_apple_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Apple App Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('app_icon'):
            validation_results['errors'].append("App icon (1024x1024px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 30:
            validation_results['errors'].append(f"Title exceeds 30 characters ({len(title)})")

        # Warnings for best practices
        subtitle = app_data.get('subtitle', '')
        if not subtitle:
            validation_results['warnings'].append("Subtitle is empty - consider adding for better discoverability")

        keywords = app_data.get('keywords', '')
        if len(keywords) < 80:
            validation_results['warnings'].append(
                f"Keywords field underutilized ({len(keywords)}/100 chars) - add more keywords"
            )

    def _validate_google_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Google Play Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('feature_graphic'):
            validation_results['errors'].append("Feature graphic (1024x500px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 50:
            validation_results['errors'].append(f"Title exceeds 50 characters ({len(title)})")

        short_desc = app_data.get('short_description', '')
        if len(short_desc) > 80:
            validation_results['errors'].append(f"Short description exceeds 80 characters ({len(short_desc)})")

        # Warnings
        if not short_desc:
            validation_results['warnings'].append("Short description is empty")

    def _calculate_next_versions(
        self,
        current_version: str,
        update_frequency: str,
        feature_count: int
    ) -> List[str]:
        """Calculate next version numbers."""
        # Parse current version (assume semantic versioning)
        parts = current_version.split('.')
        major, minor, patch = int(parts[0]), int(parts[1]), int(parts[2] if len(parts) > 2 else 0)

        versions = []
        for i in range(feature_count):
            if update_frequency == 'weekly':
                patch += 1
            elif update_frequency == 'biweekly':
                patch += 1
            elif update_frequency == 'monthly':
                minor += 1
                patch = 0
            else:  # quarterly
                minor += 1
                patch = 0

            versions.append(f"{major}.{minor}.{patch}")

        return versions

    def _distribute_features(
        self,
        features: List[str],
        versions: List[str]
    ) -> List[Dict[str, Any]]:
        """Distribute features across versions."""
        features_per_version = max(1, len(features) // len(versions))

        schedule = []
        for i, version in enumerate(versions):
            start_idx = i * features_per_version
            end_idx = start_idx + features_per_version if i < len(versions) - 1 else len(features)

            schedule.append({
                'version': version,
                'features': features[start_idx:end_idx],
                'release_priority': 'high' if i == 0 else ('medium' if i < len(versions) // 2 else 'low')
            })

        return schedule

    def _generate_whats_new_template(self, version_data: Dict[str, Any]) -> Dict[str, str]:
        """Generate What's New template for version."""
        features_list = '\n'.join([f"• {feature}" for feature in version_data['features']])

        template = f"""Version {version_data['version']}

{features_list}

We're constantly improving your experience. Thanks for using [App Name]!

Have feedback? Contact us at support@[company].com"""

        return {
            'version': version_data['version'],
            'template': template
        }

    def _generate_update_recommendations(self, update_frequency: str) -> List[str]:
        """Generate recommendations for update strategy."""
        recommendations = []

        if update_frequency == 'weekly':
            recommendations.append("Weekly updates show active development but ensure quality doesn't suffer")
        elif update_frequency == 'monthly':
            recommendations.append("Monthly updates are optimal for most apps - balance features and stability")

        recommendations.extend([
            "Include bug fixes in every update",
            "Update 'What's New' section with each release",
            "Respond to reviews mentioning fixed issues"
        ])

        return recommendations

    def _recommend_day_of_week(self, app_category: str) -> Dict[str, Any]:
        """Recommend best day of week to launch."""
        # General recommendations based on category
        if app_category.lower() in ['games', 'entertainment']:
            return {
                'recommended_day': 'Thursday',
                'rationale': 'People download entertainment apps before weekend'
            }
        elif app_category.lower() in ['productivity', 'business']:
            return {
                'recommended_day': 'Tuesday',
                'rationale': 'Business users most active mid-week'
            }
        else:
            return {
                'recommended_day': 'Wednesday',
                'rationale': 'Mid-week provides good balance and review potential'
            }

    def _recommend_seasonal_timing(self, app_category: str, current_date: str) -> Dict[str, Any]:
        """Recommend seasonal timing considerations."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')
        month = current_dt.month

        # Avoid certain periods
        avoid_periods = []
        if month == 12:
            avoid_periods.append("Late December - low user engagement during holidays")
        if month in [7, 8]:
            avoid_periods.append("Summer months - some categories see lower engagement")

        # Recommend periods
        good_periods = []
        if month in [1, 9]:
            good_periods.append("New Year/Back-to-school - high user engagement")
        if month in [10, 11]:
            good_periods.append("Pre-holiday season - good for shopping/gift apps")

        return {
            'current_month': month,
            'avoid_periods': avoid_periods,
            'good_periods': good_periods
        }

    def _analyze_competitive_timing(self, app_category: str) -> Dict[str, str]:
        """Analyze competitive timing considerations."""
        return {
            'recommendation': 'Research competitor launch schedules in your category',
            'strategy': 'Avoid launching same week as major competitor updates'
        }

    def _calculate_optimal_dates(
        self,
        current_date: str,
        day_rec: Dict[str, Any],
        seasonal_rec: Dict[str, Any]
    ) -> List[str]:
        """Calculate optimal launch dates."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')

        # Find next occurrence of recommended day
        target_day = day_rec['recommended_day']
        days_map = {'Monday': 0, 'Tuesday': 1, 'Wednesday': 2, 'Thursday': 3, 'Friday': 4}
        target_day_num = days_map.get(target_day, 2)

        days_ahead = (target_day_num - current_dt.weekday()) % 7
        if days_ahead == 0:
            days_ahead = 7

        next_target_date = current_dt + timedelta(days=days_ahead)

        optimal_dates = [
            next_target_date.strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=7)).strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=14)).strftime('%Y-%m-%d')
        ]

        return optimal_dates

    def _generate_timing_recommendation(
        self,
        optimal_dates: List[str],
        seasonal_rec: Dict[str, Any]
    ) -> str:
        """Generate final timing recommendation."""
        if seasonal_rec['avoid_periods']:
            return f"Consider launching on {optimal_dates[1]} to avoid {seasonal_rec['avoid_periods'][0]}"
        elif seasonal_rec['good_periods']:
            return f"Launch on {optimal_dates[0]} to capitalize on {seasonal_rec['good_periods'][0]}"
        else:
            return f"Recommended launch date: {optimal_dates[0]}"

    def _identify_seasonal_opportunities(
        self,
        app_category: str,
        current_month: int
    ) -> List[Dict[str, Any]]:
        """Identify seasonal opportunities for category."""
        opportunities = []

        # Universal opportunities
        if current_month == 1:
            opportunities.append({
                'event': 'New Year Resolutions',
                'dates': 'January 1-31',
                'relevance': 'high' if app_category.lower() in ['health', 'fitness', 'productivity'] else 'medium'
            })

        if current_month in [11, 12]:
            opportunities.append({
                'event': 'Holiday Shopping Season',
                'dates': 'November-December',
                'relevance': 'high' if app_category.lower() in ['shopping', 'gifts'] else 'low'
            })

        # Category-specific
        if app_category.lower() == 'education' and current_month in [8, 9]:
            opportunities.append({
                'event': 'Back to School',
                'dates': 'August-September',
                'relevance': 'high'
            })

        return opportunities

    def _generate_seasonal_campaign(self, opportunity: Dict[str, Any]) -> Dict[str, Any]:
        """Generate campaign idea for seasonal opportunity."""
        return {
            'event': opportunity['event'],
            'campaign_idea': f"Create themed visuals and messaging for {opportunity['event']}",
            'metadata_updates': 'Update app description and screenshots with seasonal themes',
            'promotion_strategy': 'Consider limited-time features or discounts'
        }

    def _create_seasonal_timeline(self, campaigns: List[Dict[str, Any]]) -> List[str]:
        """Create implementation timeline for campaigns."""
        return [
            f"30 days before: Plan {campaign['event']} campaign strategy"
            for campaign in campaigns
        ]


def generate_launch_checklist(
    platform: str,
    app_info: Dict[str, Any],
    launch_date: Optional[str] = None
) -> Dict[str, Any]:
    """
    Convenience function to generate launch checklist.

    Args:
        platform: Platform ('apple', 'google', or 'both')
        app_info: App information
        launch_date: Target launch date

    Returns:
        Complete launch checklist
    """
    generator = LaunchChecklistGenerator(platform)
    return generator.generate_prelaunch_checklist(app_info, launch_date)
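The semantic-version cadence implemented by `_calculate_next_versions` can be sketched standalone. The helper below is a hypothetical reproduction of that logic: weekly or biweekly releases bump the patch number, while monthly or quarterly releases bump the minor number and reset the patch.

```python
# Standalone sketch of the version-bumping rule in
# LaunchChecklistGenerator._calculate_next_versions (helper name is illustrative).

def next_versions(current: str, frequency: str, count: int) -> list:
    parts = current.split('.')
    major, minor = int(parts[0]), int(parts[1])
    patch = int(parts[2]) if len(parts) > 2 else 0  # tolerate "1.4" style versions

    out = []
    for _ in range(count):
        if frequency in ('weekly', 'biweekly'):
            patch += 1          # frequent releases: patch-level bumps
        else:                   # monthly/quarterly: feature (minor) releases
            minor += 1
            patch = 0
        out.append(f"{major}.{minor}.{patch}")
    return out

print(next_versions("1.4.2", "monthly", 3))  # → ['1.5.0', '1.6.0', '1.7.0']
print(next_versions("1.4.2", "weekly", 2))   # → ['1.4.3', '1.4.4']
```

One consequence of this scheme: the major number never changes, so breaking-change releases would need to be planned outside the automatic cadence.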
@@ -0,0 +1,588 @@
"""
Localization helper module for App Store Optimization.
Manages multi-language ASO optimization strategies.
"""

from typing import Dict, List, Any, Optional, Tuple


class LocalizationHelper:
    """Helps manage multi-language ASO optimization."""

    # Priority markets by language (based on app store revenue and user base)
    PRIORITY_MARKETS = {
        'tier_1': [
            {'language': 'en-US', 'market': 'United States', 'revenue_share': 0.25},
            {'language': 'zh-CN', 'market': 'China', 'revenue_share': 0.20},
            {'language': 'ja-JP', 'market': 'Japan', 'revenue_share': 0.10},
            {'language': 'de-DE', 'market': 'Germany', 'revenue_share': 0.08},
            {'language': 'en-GB', 'market': 'United Kingdom', 'revenue_share': 0.06}
        ],
        'tier_2': [
            {'language': 'fr-FR', 'market': 'France', 'revenue_share': 0.05},
            {'language': 'ko-KR', 'market': 'South Korea', 'revenue_share': 0.05},
            {'language': 'es-ES', 'market': 'Spain', 'revenue_share': 0.03},
            {'language': 'it-IT', 'market': 'Italy', 'revenue_share': 0.03},
            {'language': 'pt-BR', 'market': 'Brazil', 'revenue_share': 0.03}
        ],
        'tier_3': [
            {'language': 'ru-RU', 'market': 'Russia', 'revenue_share': 0.02},
            {'language': 'es-MX', 'market': 'Mexico', 'revenue_share': 0.02},
            {'language': 'nl-NL', 'market': 'Netherlands', 'revenue_share': 0.02},
            {'language': 'sv-SE', 'market': 'Sweden', 'revenue_share': 0.01},
            {'language': 'pl-PL', 'market': 'Poland', 'revenue_share': 0.01}
        ]
    }

    # Character limit multipliers by language (some languages need more/less space)
    CHAR_MULTIPLIERS = {
        'en': 1.0,
        'zh': 0.6,  # Chinese characters are more compact
        'ja': 0.7,  # Japanese uses kanji
        'ko': 0.8,  # Korean is relatively compact
        'de': 1.3,  # German words are typically longer
        'fr': 1.2,  # French tends to be longer
        'es': 1.1,  # Spanish slightly longer
        'pt': 1.1,  # Portuguese similar to Spanish
        'ru': 1.1,  # Russian similar length
        'ar': 1.0,  # Arabic varies
        'it': 1.1   # Italian similar to Spanish
    }

    def __init__(self, app_category: str = 'general'):
        """
        Initialize localization helper.

        Args:
            app_category: App category to prioritize relevant markets
        """
        self.app_category = app_category
        self.localization_plans = []

    def identify_target_markets(
        self,
        current_market: str = 'en-US',
        budget_level: str = 'medium',
        target_market_count: int = 5
    ) -> Dict[str, Any]:
        """
        Recommend priority markets for localization.

        Args:
            current_market: Current/primary market
            budget_level: 'low', 'medium', or 'high'
            target_market_count: Number of markets to target

        Returns:
            Prioritized market recommendations
        """
        # Determine tier priorities based on budget
        if budget_level == 'low':
            priority_tiers = ['tier_1']
            max_markets = min(target_market_count, 3)
        elif budget_level == 'medium':
            priority_tiers = ['tier_1', 'tier_2']
            max_markets = min(target_market_count, 8)
        else:  # high budget
            priority_tiers = ['tier_1', 'tier_2', 'tier_3']
            max_markets = target_market_count

        # Collect markets from priority tiers
        recommended_markets = []
        for tier in priority_tiers:
            for market in self.PRIORITY_MARKETS[tier]:
                if market['language'] != current_market:
                    recommended_markets.append({
                        **market,
                        'tier': tier,
                        'estimated_translation_cost': self._estimate_translation_cost(
                            market['language']
                        )
                    })

        # Sort by revenue share and limit
        recommended_markets.sort(key=lambda x: x['revenue_share'], reverse=True)
        recommended_markets = recommended_markets[:max_markets]

        # Calculate potential ROI
        total_potential_revenue_share = sum(m['revenue_share'] for m in recommended_markets)

        return {
            'recommended_markets': recommended_markets,
            'total_markets': len(recommended_markets),
            'estimated_total_revenue_lift': f"{total_potential_revenue_share*100:.1f}%",
            'estimated_cost': self._estimate_total_localization_cost(recommended_markets),
            'implementation_priority': self._prioritize_implementation(recommended_markets)
        }

    def translate_metadata(
        self,
        source_metadata: Dict[str, str],
        source_language: str,
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Generate localized metadata with character limit considerations.

        Args:
            source_metadata: Original metadata (title, description, etc.)
            source_language: Source language code (e.g., 'en')
            target_language: Target language code (e.g., 'es')
            platform: 'apple' or 'google'

        Returns:
            Localized metadata with character limit validation
        """
        # Get character multiplier
        target_lang_code = target_language.split('-')[0]
        char_multiplier = self.CHAR_MULTIPLIERS.get(target_lang_code, 1.0)

        # Platform-specific limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        localized_metadata = {}
        warnings = []

        for field, text in source_metadata.items():
            if field not in limits:
                continue

            # Estimate target length
            estimated_length = int(len(text) * char_multiplier)
            limit = limits[field]

            localized_metadata[field] = {
                'original_text': text,
                'original_length': len(text),
                'estimated_target_length': estimated_length,
                'character_limit': limit,
                'fits_within_limit': estimated_length <= limit,
                'translation_notes': self._get_translation_notes(
                    field,
                    target_language,
                    estimated_length,
                    limit
                )
            }

            if estimated_length > limit:
                warnings.append(
                    f"{field}: Estimated length ({estimated_length}) may exceed limit ({limit}) - "
                    f"condensing may be required"
                )

        return {
            'source_language': source_language,
            'target_language': target_language,
            'platform': platform,
            'localized_fields': localized_metadata,
            'character_multiplier': char_multiplier,
            'warnings': warnings,
            'recommendations': self._generate_translation_recommendations(
                target_language,
                warnings
            )
        }

    def adapt_keywords(
        self,
        source_keywords: List[str],
        source_language: str,
        target_language: str,
        target_market: str
    ) -> Dict[str, Any]:
        """
        Adapt keywords for target market (not just direct translation).

        Args:
            source_keywords: Original keywords
            source_language: Source language code
            target_language: Target language code
            target_market: Target market (e.g., 'France', 'Japan')

        Returns:
            Adapted keyword recommendations
        """
        # Cultural adaptation considerations (market-level notes, returned as a list)
        cultural_notes = self._get_cultural_keyword_considerations(target_market)

        # Search behavior differences
        search_patterns = self._get_search_patterns(target_market)

        adapted_keywords = []
        for keyword in source_keywords:
            adapted_keywords.append({
                'source_keyword': keyword,
                'adaptation_strategy': self._determine_adaptation_strategy(
                    keyword,
                    target_market
                ),
                'cultural_considerations': cultural_notes,
                'priority': 'high' if keyword in source_keywords[:3] else 'medium'
            })

        return {
            'source_language': source_language,
            'target_language': target_language,
            'target_market': target_market,
            'adapted_keywords': adapted_keywords,
            'search_behavior_notes': search_patterns,
            'recommendations': [
                'Use native speakers for keyword research',
                'Test keywords with local users before finalizing',
                'Consider local competitors\' keyword strategies',
                'Monitor search trends in target market'
            ]
        }

    def validate_translations(
        self,
        translated_metadata: Dict[str, str],
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Validate translated metadata for character limits and quality.

        Args:
            translated_metadata: Translated text fields
            target_language: Target language code
            platform: 'apple' or 'google'

        Returns:
            Validation report
        """
        # Platform limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        validation_results = {
            'is_valid': True,
            'field_validations': {},
            'errors': [],
            'warnings': []
        }

        for field, text in translated_metadata.items():
            if field not in limits:
                continue

            actual_length = len(text)
            limit = limits[field]
            is_within_limit = actual_length <= limit

            validation_results['field_validations'][field] = {
                'text': text,
                'length': actual_length,
                'limit': limit,
                'is_valid': is_within_limit,
                'usage_percentage': round((actual_length / limit) * 100, 1)
            }

            if not is_within_limit:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"{field} exceeds limit: {actual_length}/{limit} characters"
                )

        # Quality checks
        quality_issues = self._check_translation_quality(
            translated_metadata,
            target_language
        )

        validation_results['quality_checks'] = quality_issues

        if quality_issues:
            validation_results['warnings'].extend(
                [f"Quality issue: {issue}" for issue in quality_issues]
            )

        return validation_results

    def calculate_localization_roi(
        self,
        target_markets: List[str],
        current_monthly_downloads: int,
        localization_cost: float,
        expected_lift_percentage: float = 0.15
    ) -> Dict[str, Any]:
        """
        Estimate ROI of localization investment.

        Args:
            target_markets: List of market codes
            current_monthly_downloads: Current monthly downloads
            localization_cost: Total cost to localize
            expected_lift_percentage: Expected download increase (default 15%)

        Returns:
            ROI analysis
        """
        # Estimate market-specific lift
        market_data = []
        total_expected_lift = 0

        for market_code in target_markets:
            # Find market in priority lists
            market_info = None
            for tier_name, markets in self.PRIORITY_MARKETS.items():
                for m in markets:
                    if m['language'] == market_code:
                        market_info = m
                        break
                if market_info:  # stop scanning remaining tiers once found
                    break

            if not market_info:
                continue

            # Estimate downloads from this market
            market_downloads = int(current_monthly_downloads * market_info['revenue_share'])
            expected_increase = int(market_downloads * expected_lift_percentage)
            total_expected_lift += expected_increase

            market_data.append({
                'market': market_info['market'],
                'current_monthly_downloads': market_downloads,
                'expected_increase': expected_increase,
                'revenue_potential': market_info['revenue_share']
            })

        # Calculate payback period (assuming $2 revenue per download)
        revenue_per_download = 2.0
        monthly_additional_revenue = total_expected_lift * revenue_per_download
        payback_months = (localization_cost / monthly_additional_revenue) if monthly_additional_revenue > 0 else float('inf')

        return {
            'markets_analyzed': len(market_data),
            'market_breakdown': market_data,
            'total_expected_monthly_lift': total_expected_lift,
            'expected_monthly_revenue_increase': f"${monthly_additional_revenue:,.2f}",
            'localization_cost': f"${localization_cost:,.2f}",
            'payback_period_months': round(payback_months, 1) if payback_months != float('inf') else 'N/A',
            'annual_roi': f"{((monthly_additional_revenue * 12 - localization_cost) / localization_cost * 100):.1f}%" if payback_months != float('inf') else 'Negative',
            'recommendation': self._generate_roi_recommendation(payback_months)
        }

    def _estimate_translation_cost(self, language: str) -> Dict[str, float]:
        """Estimate translation cost for a language."""
        # Base cost per word (professional translation)
        base_cost_per_word = 0.12

        # Language-specific multipliers
        multipliers = {
            'zh-CN': 1.5,  # Chinese requires specialist
            'ja-JP': 1.5,  # Japanese requires specialist
            'ko-KR': 1.3,
            'ar-SA': 1.4,  # Arabic (right-to-left)
            'default': 1.0
        }

        multiplier = multipliers.get(language, multipliers['default'])

        # Typical word counts for app store metadata
        typical_word_counts = {
            'title': 5,
            'subtitle': 5,
            'description': 300,
            'keywords': 20,
            'screenshots': 50  # Caption text
        }

        total_words = sum(typical_word_counts.values())
        estimated_cost = total_words * base_cost_per_word * multiplier

        return {
            'cost_per_word': base_cost_per_word * multiplier,
            'total_words': total_words,
            'estimated_cost': round(estimated_cost, 2)
        }

    def _estimate_total_localization_cost(self, markets: List[Dict[str, Any]]) -> str:
        """Estimate total cost for multiple markets."""
        total = sum(m['estimated_translation_cost']['estimated_cost'] for m in markets)
        return f"${total:,.2f}"

    def _prioritize_implementation(self, markets: List[Dict[str, Any]]) -> List[Dict[str, str]]:
        """Create phased implementation plan."""
        phases = []

        # Phase 1: Top revenue markets
        phase_1 = markets[:3]
        if phase_1:
            phases.append({
                'phase': 'Phase 1 (First 30 days)',
                'markets': ', '.join(m['market'] for m in phase_1),
                'rationale': 'Highest revenue potential markets'
            })

        # Phase 2: Remaining tier 1 and top tier 2
        phase_2 = markets[3:6]
        if phase_2:
            phases.append({
                'phase': 'Phase 2 (Days 31-60)',
                'markets': ', '.join(m['market'] for m in phase_2),
                'rationale': 'Strong revenue markets with good ROI'
            })

        # Phase 3: Remaining markets
        phase_3 = markets[6:]
        if phase_3:
            phases.append({
                'phase': 'Phase 3 (Days 61-90)',
                'markets': ', '.join(m['market'] for m in phase_3),
                'rationale': 'Complete global coverage'
            })

        return phases

    def _get_translation_notes(
        self,
        field: str,
        target_language: str,
        estimated_length: int,
        limit: int
    ) -> List[str]:
        """Get translation-specific notes for field."""
        notes = []

        if estimated_length > limit:
            notes.append(f"Condensing required - aim for {limit - 10} characters to allow buffer")

        if field == 'title' and target_language.startswith('zh'):
            notes.append("Chinese characters convey more meaning - may need fewer characters")

        if field == 'keywords' and target_language.startswith('de'):
            notes.append("German compound words may be longer - prioritize shorter keywords")

        return notes

    def _generate_translation_recommendations(
        self,
        target_language: str,
        warnings: List[str]
    ) -> List[str]:
        """Generate translation recommendations."""
        recommendations = [
            "Use professional native speakers for translation",
            "Test translations with local users before finalizing"
        ]

        if warnings:
            recommendations.append("Work with translator to condense text while preserving meaning")

        if target_language.startswith('zh') or target_language.startswith('ja'):
            recommendations.append("Consider cultural context and local idioms")

        return recommendations

    def _get_cultural_keyword_considerations(self, target_market: str) -> List[str]:
        """Get cultural considerations for keywords by market."""
        # Simplified example - real implementation would be more comprehensive
        considerations = {
            'China': ['Avoid politically sensitive terms', 'Consider local alternatives to blocked services'],
            'Japan': ['Honorific language important', 'Technical terms often use katakana'],
            'Germany': ['Privacy and security terms resonate', 'Efficiency and quality valued'],
            'France': ['French language protection laws', 'Prefer French terms over English'],
            'default': ['Research local search behavior', 'Test with native speakers']
        }

        return considerations.get(target_market, considerations['default'])

    def _get_search_patterns(self, target_market: str) -> List[str]:
        """Get search pattern notes for market."""
        patterns = {
            'China': ['Use both simplified characters and romanization', 'Brand names often romanized'],
            'Japan': ['Mix of kanji, hiragana, and katakana', 'English words common in tech'],
            'Germany': ['Compound words common', 'Specific technical terminology'],
            'default': ['Research local search trends', 'Monitor competitor keywords']
        }

        return patterns.get(target_market, patterns['default'])

    def _determine_adaptation_strategy(self, keyword: str, target_market: str) -> str:
        """Determine how to adapt keyword for market."""
        # Simplified logic
        if target_market in ['China', 'Japan', 'Korea']:
            return 'full_localization'  # Complete translation needed
        elif target_market in ['Germany', 'France', 'Spain']:
            return 'adapt_and_translate'  # Some adaptation needed
        else:
            return 'direct_translation'  # Direct translation usually sufficient

    def _check_translation_quality(
        self,
        translated_metadata: Dict[str, str],
        target_language: str
    ) -> List[str]:
        """Basic quality checks for translations."""
        issues = []

        # Check for untranslated placeholders
        for field, text in translated_metadata.items():
            if '[' in text or '{' in text or 'TODO' in text.upper():
                issues.append(f"{field} contains placeholder text")

        # Check for excessive punctuation
        for field, text in translated_metadata.items():
            if text.count('!') > 3:
                issues.append(f"{field} has excessive exclamation marks")

        return issues

    def _generate_roi_recommendation(self, payback_months: float) -> str:
        """Generate ROI recommendation."""
        if payback_months <= 3:
            return "Excellent ROI - proceed immediately"
        elif payback_months <= 6:
            return "Good ROI - recommended investment"
        elif payback_months <= 12:
            return "Moderate ROI - consider if strategic market"
        else:
            return "Low ROI - reconsider or focus on higher-priority markets first"


def plan_localization_strategy(
    current_market: str,
    budget_level: str,
    monthly_downloads: int
) -> Dict[str, Any]:
    """
    Convenience function to plan localization strategy.

    Args:
        current_market: Current market code
        budget_level: Budget level ('low', 'medium', or 'high')
        monthly_downloads: Current monthly downloads

    Returns:
        Complete localization plan
    """
    helper = LocalizationHelper()

    target_markets = helper.identify_target_markets(
        current_market=current_market,
        budget_level=budget_level
    )

    # Extract market codes
    market_codes = [m['language'] for m in target_markets['recommended_markets']]

    # Calculate ROI
    estimated_cost = float(target_markets['estimated_cost'].replace('$', '').replace(',', ''))

    roi_analysis = helper.calculate_localization_roi(
        market_codes,
        monthly_downloads,
        estimated_cost
    )

    return {
        'target_markets': target_markets,
        'roi_analysis': roi_analysis
    }
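
# A minimal, self-contained sketch of the payback arithmetic that
# calculate_localization_roi performs per market. The input figures below are
# illustrative assumptions (including the $2 revenue-per-download figure the
# method hardcodes); real numbers will vary by app and market.
if __name__ == "__main__":
    monthly_downloads = 50000
    market_revenue_share = 0.05      # e.g. a tier-2 market's share of revenue
    expected_lift = 0.15             # default lift assumption
    revenue_per_download = 2.0       # assumed, as in calculate_localization_roi
    localization_cost = 1500.0

    market_downloads = int(monthly_downloads * market_revenue_share)  # 2500
    extra_downloads = int(market_downloads * expected_lift)           # 375
    extra_revenue = extra_downloads * revenue_per_download            # 750.0
    payback_months = localization_cost / extra_revenue                # 2.0

    print(f"Payback: {payback_months:.1f} months")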
@@ -0,0 +1,581 @@
"""
Metadata optimization module for App Store Optimization.
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.
"""

from typing import Dict, List, Any, Optional, Tuple
import re


class MetadataOptimizer:
    """Optimizes app store metadata for maximum discoverability and conversion."""

    # Platform-specific character limits
    CHAR_LIMITS = {
        'apple': {
            'title': 30,
            'subtitle': 30,
            'promotional_text': 170,
            'description': 4000,
            'keywords': 100,
            'whats_new': 4000
        },
        'google': {
            'title': 50,
            'short_description': 80,
            'full_description': 4000
        }
    }

    def __init__(self, platform: str = 'apple'):
        """
        Initialize metadata optimizer.

        Args:
            platform: 'apple' or 'google'
        """
        if platform not in ['apple', 'google']:
            raise ValueError("Platform must be 'apple' or 'google'")

        self.platform = platform
        self.limits = self.CHAR_LIMITS[platform]

    def optimize_title(
        self,
        app_name: str,
        target_keywords: List[str],
        include_brand: bool = True
    ) -> Dict[str, Any]:
        """
        Optimize app title with keyword integration.

        Args:
            app_name: Your app's brand name
            target_keywords: List of keywords to potentially include
            include_brand: Whether to include brand name

        Returns:
            Optimized title options with analysis
        """
        max_length = self.limits['title']

        title_options = []

        # Option 1: Brand name only
        if include_brand:
            option1 = app_name[:max_length]
            title_options.append({
                'title': option1,
                'length': len(option1),
                'remaining_chars': max_length - len(option1),
                'keywords_included': [],
                'strategy': 'brand_only',
                'pros': ['Maximum brand recognition', 'Clean and simple'],
                'cons': ['No keyword targeting', 'Lower discoverability']
            })

        # Option 2: Brand + Primary Keyword
        if target_keywords:
            primary_keyword = target_keywords[0]
            option2 = self._build_title_with_keywords(
                app_name,
                [primary_keyword],
                max_length
            )
            if option2:
                title_options.append({
                    'title': option2,
                    'length': len(option2),
                    'remaining_chars': max_length - len(option2),
                    'keywords_included': [primary_keyword],
                    'strategy': 'brand_plus_primary',
                    'pros': ['Targets main keyword', 'Maintains brand identity'],
                    'cons': ['Limited keyword coverage']
                })

        # Option 3: Brand + Multiple Keywords (if space allows)
        if len(target_keywords) > 1:
            option3 = self._build_title_with_keywords(
                app_name,
                target_keywords[:2],
                max_length
            )
            if option3:
                title_options.append({
                    'title': option3,
                    'length': len(option3),
                    'remaining_chars': max_length - len(option3),
                    'keywords_included': target_keywords[:2],
                    'strategy': 'brand_plus_multiple',
                    'pros': ['Multiple keyword targets', 'Better discoverability'],
                    'cons': ['May feel cluttered', 'Less brand focus']
                })

        # Option 4: Keyword-first approach (for new apps)
        if target_keywords and not include_brand:
            option4 = " ".join(target_keywords[:2])[:max_length]
            title_options.append({
                'title': option4,
                'length': len(option4),
                'remaining_chars': max_length - len(option4),
                'keywords_included': target_keywords[:2],
                'strategy': 'keyword_first',
                'pros': ['Maximum SEO benefit', 'Clear functionality'],
                'cons': ['No brand recognition', 'Generic appearance']
            })

        return {
            'platform': self.platform,
            'max_length': max_length,
            'options': title_options,
            'recommendation': self._recommend_title_option(title_options)
        }

    def optimize_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str],
        description_type: str = 'full'
    ) -> Dict[str, Any]:
        """
        Optimize app description with keyword integration and conversion focus.

        Args:
            app_info: Dict with 'name', 'key_features', 'unique_value', 'target_audience'
            target_keywords: List of keywords to integrate naturally
            description_type: 'full', 'short' (Google), 'subtitle' (Apple)

        Returns:
            Optimized description with analysis
        """
        if description_type == 'short' and self.platform == 'google':
            return self._optimize_short_description(app_info, target_keywords)
        elif description_type == 'subtitle' and self.platform == 'apple':
            return self._optimize_subtitle(app_info, target_keywords)
        else:
            return self._optimize_full_description(app_info, target_keywords)

    def optimize_keyword_field(
        self,
        target_keywords: List[str],
        app_title: str = "",
        app_description: str = ""
    ) -> Dict[str, Any]:
        """
        Optimize Apple's 100-character keyword field.

        Rules:
        - No spaces between commas
        - No plural forms if singular exists
        - No duplicates
        - Keywords in title/subtitle are already indexed

        Args:
            target_keywords: List of target keywords
            app_title: Current app title (to avoid duplication)
            app_description: Current description (to check coverage)

        Returns:
            Optimized keyword field (comma-separated, no spaces)
        """
        if self.platform != 'apple':
            return {'error': 'Keyword field optimization only applies to Apple App Store'}

        max_length = self.limits['keywords']

        # Extract words already in title (these don't need to be in keyword field)
        title_words = set(app_title.lower().split()) if app_title else set()

        # Process keywords
        processed_keywords = []
        for keyword in target_keywords:
            keyword_lower = keyword.lower().strip()

            # Skip if already in title
            if keyword_lower in title_words:
                continue

            # Remove duplicates and process
            words = keyword_lower.split()
            for word in words:
                if word not in processed_keywords and word not in title_words:
                    processed_keywords.append(word)

        # Remove plurals if singular exists
        deduplicated = self._remove_plural_duplicates(processed_keywords)

        # Build keyword field within 100 character limit
        keyword_field = self._build_keyword_field(deduplicated, max_length)

        # Calculate keyword density in description
        density = self._calculate_coverage(target_keywords, app_description)

        return {
            'keyword_field': keyword_field,
            'length': len(keyword_field),
            'remaining_chars': max_length - len(keyword_field),
            'keywords_included': keyword_field.split(','),
            'keywords_count': len(keyword_field.split(',')),
            'keywords_excluded': [kw for kw in target_keywords if kw.lower() not in keyword_field],
            'description_coverage': density,
            'optimization_tips': [
                'Keywords in title are auto-indexed - no need to repeat',
                'Use singular forms only (Apple indexes plurals automatically)',
                'No spaces between commas to maximize character usage',
                'Update keyword field with each app update to test variations'
            ]
        }

    def validate_character_limits(
        self,
        metadata: Dict[str, str]
    ) -> Dict[str, Any]:
        """
        Validate all metadata fields against platform character limits.

        Args:
            metadata: Dictionary of field_name: value

        Returns:
            Validation report with errors and warnings
        """
        validation_results = {
            'is_valid': True,
            'errors': [],
            'warnings': [],
            'field_status': {}
        }

        for field_name, value in metadata.items():
            if field_name not in self.limits:
                validation_results['warnings'].append(
                    f"Unknown field '{field_name}' for {self.platform} platform"
                )
                continue

            max_length = self.limits[field_name]
            actual_length = len(value)
            remaining = max_length - actual_length

            field_status = {
                'value': value,
                'length': actual_length,
                'limit': max_length,
                'remaining': remaining,
                'is_valid': actual_length <= max_length,
                'usage_percentage': round((actual_length / max_length) * 100, 1)
            }

            validation_results['field_status'][field_name] = field_status

            if actual_length > max_length:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"'{field_name}' exceeds limit: {actual_length}/{max_length} chars"
                )
            elif remaining > max_length * 0.2:  # More than 20% unused
                validation_results['warnings'].append(
                    f"'{field_name}' under-utilizes space: {remaining} chars remaining"
                )

        return validation_results

    def calculate_keyword_density(
        self,
        text: str,
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """
        Calculate keyword density in text.

        Args:
            text: Text to analyze
            target_keywords: Keywords to check

        Returns:
            Density analysis
        """
        text_lower = text.lower()
        total_words = len(text_lower.split())

        keyword_densities = {}
        for keyword in target_keywords:
            keyword_lower = keyword.lower()
            count = text_lower.count(keyword_lower)
            density = (count / total_words * 100) if total_words > 0 else 0

            keyword_densities[keyword] = {
                'occurrences': count,
                'density_percentage': round(density, 2),
                'status': self._assess_density(density)
            }

        # Overall assessment
        total_keyword_occurrences = sum(kw['occurrences'] for kw in keyword_densities.values())
        overall_density = (total_keyword_occurrences / total_words * 100) if total_words > 0 else 0

        return {
            'total_words': total_words,
            'keyword_densities': keyword_densities,
            'overall_keyword_density': round(overall_density, 2),
            'assessment': self._assess_overall_density(overall_density),
            'recommendations': self._generate_density_recommendations(keyword_densities)
        }

    def _build_title_with_keywords(
        self,
        app_name: str,
        keywords: List[str],
        max_length: int
    ) -> Optional[str]:
        """Build title combining app name and keywords within limit."""
        separators = [' - ', ': ', ' | ']

        # Try all keywords first, then fall back to fewer until something fits
        for count in range(len(keywords), 0, -1):
            for sep in separators:
                title = f"{app_name}{sep}{' '.join(keywords[:count])}"
                if len(title) <= max_length:
                    return title

        return None

    def _optimize_short_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Google Play short description (80 chars)."""
        max_length = self.limits['short_description']

        # Focus on unique value proposition with primary keyword
        unique_value = app_info.get('unique_value', '')
        primary_keyword = target_keywords[0] if target_keywords else ''

        # Template: [Primary Keyword] - [Unique Value]
        short_desc = f"{primary_keyword.title()} - {unique_value}"[:max_length]

        return {
            'short_description': short_desc,
            'length': len(short_desc),
            'remaining_chars': max_length - len(short_desc),
            'keywords_included': [primary_keyword] if primary_keyword and primary_keyword in short_desc.lower() else [],
            'strategy': 'keyword_value_proposition'
        }

    def _optimize_subtitle(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Apple App Store subtitle (30 chars)."""
        max_length = self.limits['subtitle']

        # Very concise - primary keyword or key feature
        primary_keyword = target_keywords[0] if target_keywords else ''
        key_feature = app_info.get('key_features', [''])[0] if app_info.get('key_features') else ''

        options = [
            primary_keyword[:max_length],
            key_feature[:max_length],
            f"{primary_keyword} App"[:max_length]
        ]
        non_empty_options = [opt for opt in options if opt]

        return {
            'subtitle_options': non_empty_options,
            'max_length': max_length,
            'recommendation': non_empty_options[0] if non_empty_options else ''
        }

    def _optimize_full_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize full app description (4000 chars for both platforms)."""
        max_length = self.limits.get('description', self.limits.get('full_description', 4000))

        # Structure: Hook → Features → Benefits → Social Proof → CTA
        sections = []

        # Hook (with primary keyword)
        primary_keyword = target_keywords[0] if target_keywords else ''
        unique_value = app_info.get('unique_value', '')
        hook = f"{unique_value} {primary_keyword.title()} that helps you achieve more.\n\n"
        sections.append(hook)

        # Features (with keywords naturally integrated)
        features = app_info.get('key_features', [])
        if features:
            sections.append("KEY FEATURES:\n")
            for i, feature in enumerate(features[:5], 1):
                # Integrate keywords naturally
                feature_text = f"• {feature}"
                if i <= len(target_keywords):
                    keyword = target_keywords[i-1]
                    if keyword.lower() not in feature.lower():
                        feature_text = f"• {feature} with {keyword}"
                sections.append(f"{feature_text}\n")
            sections.append("\n")

        # Benefits
        target_audience = app_info.get('target_audience', 'users')
        sections.append(f"PERFECT FOR:\n{target_audience}\n\n")

        # Social proof placeholder
        sections.append("WHY USERS LOVE US:\n")
        sections.append("Join thousands of satisfied users who have transformed their workflow.\n\n")

        # CTA
        sections.append("Download now and start experiencing the difference!")

        # Combine and validate length
        full_description = "".join(sections)
        if len(full_description) > max_length:
            full_description = full_description[:max_length-3] + "..."

        # Calculate keyword density
        density = self.calculate_keyword_density(full_description, target_keywords)

        return {
            'full_description': full_description,
            'length': len(full_description),
            'remaining_chars': max_length - len(full_description),
            'keyword_analysis': density,
            'structure': {
                'has_hook': True,
                'has_features': len(features) > 0,
                'has_benefits': True,
                'has_cta': True
            }
        }

    def _remove_plural_duplicates(self, keywords: List[str]) -> List[str]:
        """Remove plural forms if singular exists."""
        keyword_set = set(keywords)
        deduplicated = []
        seen = set()

        for keyword in keywords:
            # Drop a trailing-'s' plural only when its singular form is also present
            if keyword.endswith('s') and len(keyword) > 1 and keyword[:-1] in keyword_set:
                continue
            if keyword not in seen:
                deduplicated.append(keyword)
                seen.add(keyword)

        return deduplicated
    def _build_keyword_field(self, keywords: List[str], max_length: int) -> str:
        """Build comma-separated keyword field within character limit."""
        keyword_field = ""

        for keyword in keywords:
            test_field = f"{keyword_field},{keyword}" if keyword_field else keyword
            if len(test_field) <= max_length:
                keyword_field = test_field
            else:
                break

        return keyword_field
    def _calculate_coverage(self, keywords: List[str], text: str) -> Dict[str, int]:
        """Calculate how many keywords are covered in text."""
        text_lower = text.lower()
        coverage = {}

        for keyword in keywords:
            coverage[keyword] = text_lower.count(keyword.lower())

        return coverage
    def _assess_density(self, density: float) -> str:
        """Assess individual keyword density."""
        if density < 0.5:
            return "too_low"
        elif density <= 2.5:
            return "optimal"
        else:
            return "too_high"
    def _assess_overall_density(self, density: float) -> str:
        """Assess overall keyword density."""
        if density < 2:
            return "Under-optimized: Consider adding more keyword variations"
        elif density <= 5:
            return "Optimal: Good keyword integration without stuffing"
        elif density <= 8:
            return "High: Approaching keyword stuffing - reduce keyword usage"
        else:
            return "Too High: Keyword stuffing detected - rewrite for natural flow"
    def _generate_density_recommendations(
        self,
        keyword_densities: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations based on keyword density analysis."""
        recommendations = []

        for keyword, data in keyword_densities.items():
            if data['status'] == 'too_low':
                recommendations.append(
                    f"Increase usage of '{keyword}' - currently only {data['occurrences']} times"
                )
            elif data['status'] == 'too_high':
                recommendations.append(
                    f"Reduce usage of '{keyword}' - appears {data['occurrences']} times (keyword stuffing risk)"
                )

        if not recommendations:
            recommendations.append("Keyword density is well-balanced")

        return recommendations
    def _recommend_title_option(self, options: List[Dict[str, Any]]) -> str:
        """Recommend best title option based on strategy."""
        if not options:
            return "No valid options available"

        # Prefer brand_plus_primary for established apps
        for option in options:
            if option['strategy'] == 'brand_plus_primary':
                return f"Recommended: '{option['title']}' (Balance of brand and SEO)"

        # Fallback to first option
        return f"Recommended: '{options[0]['title']}' ({options[0]['strategy']})"


def optimize_app_metadata(
    platform: str,
    app_info: Dict[str, Any],
    target_keywords: List[str]
) -> Dict[str, Any]:
    """
    Convenience function to optimize all metadata fields.

    Args:
        platform: 'apple' or 'google'
        app_info: App information dictionary
        target_keywords: Target keywords list

    Returns:
        Complete metadata optimization package
    """
    optimizer = MetadataOptimizer(platform)

    return {
        'platform': platform,
        'title': optimizer.optimize_title(
            app_info['name'],
            target_keywords
        ),
        'description': optimizer.optimize_description(
            app_info,
            target_keywords,
            'full'
        ),
        'keyword_field': optimizer.optimize_keyword_field(
            target_keywords
        ) if platform == 'apple' else None
    }
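The keyword-field logic in this module is a greedy packer: deduplicate trailing-'s' plurals whose singular is already present, then comma-join keywords until the limit (100 characters for Apple's keyword field) is reached. A minimal standalone sketch, with illustrative names mirroring the private helpers above:

```python
def remove_plural_duplicates(keywords):
    """Drop a trailing-'s' plural when its singular form is also in the list."""
    present = set(keywords)
    result = []
    for kw in keywords:
        if kw.endswith('s') and kw[:-1] in present:
            continue  # the singular already covers this plural
        if kw not in result:
            result.append(kw)
    return result


def build_keyword_field(keywords, max_length=100):
    """Greedily join keywords with commas while staying within max_length."""
    field = ""
    for kw in keywords:
        candidate = f"{field},{kw}" if field else kw
        if len(candidate) <= max_length:
            field = candidate
        else:
            break  # same early exit the module uses
    return field


print(build_keyword_field(remove_plural_duplicates(["task", "tasks", "todo"])))
```

Because the packer stops at the first keyword that does not fit, keyword order matters: put the highest-value terms first.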
@@ -0,0 +1,714 @@
"""
Review analysis module for App Store Optimization.
Analyzes user reviews for sentiment, issues, and feature requests.
"""

from typing import Dict, List, Any, Optional, Tuple
from collections import Counter
import re


class ReviewAnalyzer:
    """Analyzes user reviews for actionable insights."""

    # Sentiment keywords
    POSITIVE_KEYWORDS = [
        'great', 'awesome', 'excellent', 'amazing', 'love', 'best', 'perfect',
        'fantastic', 'wonderful', 'brilliant', 'outstanding', 'superb'
    ]

    NEGATIVE_KEYWORDS = [
        'bad', 'terrible', 'awful', 'horrible', 'hate', 'worst', 'useless',
        'broken', 'crash', 'bug', 'slow', 'disappointing', 'frustrating'
    ]

    # Issue indicators
    ISSUE_KEYWORDS = [
        'crash', 'bug', 'error', 'broken', 'not working', 'doesnt work',
        'freezes', 'slow', 'laggy', 'glitch', 'problem', 'issue', 'fail'
    ]

    # Feature request indicators
    FEATURE_REQUEST_KEYWORDS = [
        'wish', 'would be nice', 'should add', 'need', 'want', 'hope',
        'please add', 'missing', 'lacks', 'feature request'
    ]

    def __init__(self, app_name: str):
        """
        Initialize review analyzer.

        Args:
            app_name: Name of the app
        """
        self.app_name = app_name
        self.reviews = []
        self.analysis_cache = {}
    def analyze_sentiment(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Analyze sentiment across reviews.

        Args:
            reviews: List of review dicts with 'text', 'rating', 'date'

        Returns:
            Sentiment analysis summary
        """
        self.reviews = reviews

        sentiment_counts = {
            'positive': 0,
            'neutral': 0,
            'negative': 0
        }

        detailed_sentiments = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Calculate sentiment score
            sentiment_score = self._calculate_sentiment_score(text, rating)
            sentiment_category = self._categorize_sentiment(sentiment_score)

            sentiment_counts[sentiment_category] += 1

            detailed_sentiments.append({
                'review_id': review.get('id', ''),
                'rating': rating,
                'sentiment_score': sentiment_score,
                'sentiment': sentiment_category,
                'text_preview': text[:100] + '...' if len(text) > 100 else text
            })

        # Calculate percentages
        total = len(reviews)
        sentiment_distribution = {
            'positive': round((sentiment_counts['positive'] / total) * 100, 1) if total > 0 else 0,
            'neutral': round((sentiment_counts['neutral'] / total) * 100, 1) if total > 0 else 0,
            'negative': round((sentiment_counts['negative'] / total) * 100, 1) if total > 0 else 0
        }

        # Calculate average rating
        avg_rating = sum(r.get('rating', 0) for r in reviews) / total if total > 0 else 0

        return {
            'total_reviews_analyzed': total,
            'average_rating': round(avg_rating, 2),
            'sentiment_distribution': sentiment_distribution,
            'sentiment_counts': sentiment_counts,
            'sentiment_trend': self._assess_sentiment_trend(sentiment_distribution),
            'detailed_sentiments': detailed_sentiments[:50]  # Limit output
        }
    def extract_common_themes(
        self,
        reviews: List[Dict[str, Any]],
        min_mentions: int = 3
    ) -> Dict[str, Any]:
        """
        Extract frequently mentioned themes and topics.

        Args:
            reviews: List of review dicts
            min_mentions: Minimum mentions to be considered common

        Returns:
            Common themes analysis
        """
        # Extract all words from reviews
        all_words = []
        all_phrases = []

        for review in reviews:
            text = review.get('text', '').lower()
            # Clean text
            text = re.sub(r'[^\w\s]', ' ', text)
            words = text.split()

            # Filter out common words
            stop_words = {
                'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
                'app', 'apps', 'very', 'really', 'just', 'but', 'not', 'you'
            }
            words = [w for w in words if w not in stop_words and len(w) > 3]

            all_words.extend(words)

            # Extract two-word phrases
            for i in range(len(words) - 1):
                phrase = f"{words[i]} {words[i+1]}"
                all_phrases.append(phrase)

        # Count frequency
        word_freq = Counter(all_words)
        phrase_freq = Counter(all_phrases)

        # Filter by min_mentions
        common_words = [
            {'word': word, 'mentions': count}
            for word, count in word_freq.most_common(30)
            if count >= min_mentions
        ]

        common_phrases = [
            {'phrase': phrase, 'mentions': count}
            for phrase, count in phrase_freq.most_common(20)
            if count >= min_mentions
        ]

        # Categorize themes
        themes = self._categorize_themes(common_words, common_phrases)

        return {
            'common_words': common_words,
            'common_phrases': common_phrases,
            'identified_themes': themes,
            'insights': self._generate_theme_insights(themes)
        }
    def identify_issues(
        self,
        reviews: List[Dict[str, Any]],
        rating_threshold: int = 3
    ) -> Dict[str, Any]:
        """
        Identify bugs, crashes, and other issues from reviews.

        Args:
            reviews: List of review dicts
            rating_threshold: Only analyze reviews at or below this rating

        Returns:
            Issue identification report
        """
        issues = []

        for review in reviews:
            rating = review.get('rating', 5)
            if rating > rating_threshold:
                continue

            text = review.get('text', '').lower()

            # Check for issue keywords
            mentioned_issues = []
            for keyword in self.ISSUE_KEYWORDS:
                if keyword in text:
                    mentioned_issues.append(keyword)

            if mentioned_issues:
                issues.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'issue_keywords': mentioned_issues,
                    'text': text[:200] + '...' if len(text) > 200 else text
                })

        # Group by issue type
        issue_frequency = Counter()
        for issue in issues:
            for keyword in issue['issue_keywords']:
                issue_frequency[keyword] += 1

        # Categorize issues
        categorized_issues = self._categorize_issues(issues)

        # Calculate issue severity
        severity_scores = self._calculate_issue_severity(
            categorized_issues,
            len(reviews)
        )

        return {
            'total_issues_found': len(issues),
            'issue_frequency': dict(issue_frequency.most_common(15)),
            'categorized_issues': categorized_issues,
            'severity_scores': severity_scores,
            'top_issues': self._rank_issues_by_severity(severity_scores),
            'recommendations': self._generate_issue_recommendations(
                categorized_issues,
                severity_scores
            )
        }
    def find_feature_requests(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Extract feature requests and desired improvements.

        Args:
            reviews: List of review dicts

        Returns:
            Feature request analysis
        """
        feature_requests = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Check for feature request indicators
            is_feature_request = any(
                keyword in text
                for keyword in self.FEATURE_REQUEST_KEYWORDS
            )

            if is_feature_request:
                # Extract the specific request
                request_text = self._extract_feature_request_text(text)

                feature_requests.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'request_text': request_text,
                    'full_review': text[:200] + '...' if len(text) > 200 else text
                })

        # Cluster similar requests
        clustered_requests = self._cluster_feature_requests(feature_requests)

        # Prioritize based on frequency and rating context
        prioritized_requests = self._prioritize_feature_requests(clustered_requests)

        return {
            'total_feature_requests': len(feature_requests),
            'clustered_requests': clustered_requests,
            'prioritized_requests': prioritized_requests,
            'implementation_recommendations': self._generate_feature_recommendations(
                prioritized_requests
            )
        }
    def track_sentiment_trends(
        self,
        reviews_by_period: Dict[str, List[Dict[str, Any]]]
    ) -> Dict[str, Any]:
        """
        Track sentiment changes over time.

        Args:
            reviews_by_period: Dict of period_name: reviews

        Returns:
            Trend analysis
        """
        trends = []

        for period, reviews in reviews_by_period.items():
            sentiment = self.analyze_sentiment(reviews)

            trends.append({
                'period': period,
                'total_reviews': len(reviews),
                'average_rating': sentiment['average_rating'],
                'positive_percentage': sentiment['sentiment_distribution']['positive'],
                'negative_percentage': sentiment['sentiment_distribution']['negative']
            })

        # Calculate trend direction
        if len(trends) >= 2:
            first_period = trends[0]
            last_period = trends[-1]

            rating_change = last_period['average_rating'] - first_period['average_rating']
            sentiment_change = last_period['positive_percentage'] - first_period['positive_percentage']

            trend_direction = self._determine_trend_direction(
                rating_change,
                sentiment_change
            )
        else:
            trend_direction = 'insufficient_data'

        return {
            'periods_analyzed': len(trends),
            'trend_data': trends,
            'trend_direction': trend_direction,
            'insights': self._generate_trend_insights(trends, trend_direction)
        }
    def generate_response_templates(
        self,
        issue_category: str
    ) -> List[Dict[str, str]]:
        """
        Generate response templates for common review scenarios.

        Args:
            issue_category: Category of issue ('crash', 'feature_request', 'positive', etc.)

        Returns:
            Response templates
        """
        templates = {
            'crash': [
                {
                    'scenario': 'App crash reported',
                    'template': "Thank you for bringing this to our attention. We're sorry you experienced a crash. "
                                "Our team is investigating this issue. Could you please share more details about when "
                                "this occurred (device model, iOS/Android version) by contacting support@[company].com? "
                                "We're committed to fixing this quickly."
                },
                {
                    'scenario': 'Crash already fixed',
                    'template': "Thank you for your feedback. We've identified and fixed this crash issue in version [X.X]. "
                                "Please update to the latest version. If the problem persists, please reach out to "
                                "support@[company].com and we'll help you directly."
                }
            ],
            'bug': [
                {
                    'scenario': 'Bug reported',
                    'template': "Thanks for reporting this bug. We take these issues seriously. Our team is looking into it "
                                "and we'll have a fix in an upcoming update. We appreciate your patience and will notify you "
                                "when it's resolved."
                }
            ],
            'feature_request': [
                {
                    'scenario': 'Feature request received',
                    'template': "Thank you for this suggestion! We're always looking to improve [app_name]. We've added your "
                                "request to our roadmap and will consider it for a future update. Follow us @[social] for "
                                "updates on new features."
                },
                {
                    'scenario': 'Feature already planned',
                    'template': "Great news! This feature is already on our roadmap and we're working on it. Stay tuned for "
                                "updates in the coming months. Thanks for your feedback!"
                }
            ],
            'positive': [
                {
                    'scenario': 'Positive review',
                    'template': "Thank you so much for your kind words! We're thrilled that you're enjoying [app_name]. "
                                "Reviews like yours motivate our team to keep improving. If you ever have suggestions, "
                                "we'd love to hear them!"
                }
            ],
            'negative_general': [
                {
                    'scenario': 'General complaint',
                    'template': "We're sorry to hear you're not satisfied with your experience. We'd like to make this right. "
                                "Please contact us at support@[company].com so we can understand the issue better and help "
                                "you directly. Thank you for giving us a chance to improve."
                }
            ]
        }

        return templates.get(issue_category, templates['negative_general'])
    def _calculate_sentiment_score(self, text: str, rating: int) -> float:
        """Calculate sentiment score (-1 to 1)."""
        # Start with rating-based score
        rating_score = (rating - 3) / 2  # Convert 1-5 to -1 to 1

        # Adjust based on text sentiment
        positive_count = sum(1 for keyword in self.POSITIVE_KEYWORDS if keyword in text)
        negative_count = sum(1 for keyword in self.NEGATIVE_KEYWORDS if keyword in text)

        text_score = (positive_count - negative_count) / 10  # Normalize

        # Weighted average (60% rating, 40% text)
        final_score = (rating_score * 0.6) + (text_score * 0.4)

        return max(min(final_score, 1.0), -1.0)
    def _categorize_sentiment(self, score: float) -> str:
        """Categorize sentiment score."""
        if score > 0.3:
            return 'positive'
        elif score < -0.3:
            return 'negative'
        else:
            return 'neutral'
    def _assess_sentiment_trend(self, distribution: Dict[str, float]) -> str:
        """Assess overall sentiment trend."""
        positive = distribution['positive']
        negative = distribution['negative']

        if positive > 70:
            return 'very_positive'
        elif positive > 50:
            return 'positive'
        elif negative > 50:
            return 'critical'
        elif negative > 30:
            return 'concerning'
        else:
            return 'mixed'
    def _categorize_themes(
        self,
        common_words: List[Dict[str, Any]],
        common_phrases: List[Dict[str, Any]]
    ) -> Dict[str, List[str]]:
        """Categorize themes from words and phrases."""
        themes = {
            'features': [],
            'performance': [],
            'usability': [],
            'support': [],
            'pricing': []
        }

        # Keywords for each category
        feature_keywords = {'feature', 'functionality', 'option', 'tool'}
        performance_keywords = {'fast', 'slow', 'crash', 'lag', 'speed', 'performance'}
        usability_keywords = {'easy', 'difficult', 'intuitive', 'confusing', 'interface', 'design'}
        support_keywords = {'support', 'help', 'customer', 'service', 'response'}
        pricing_keywords = {'price', 'cost', 'expensive', 'cheap', 'subscription', 'free'}

        for word_data in common_words:
            word = word_data['word']
            if any(kw in word for kw in feature_keywords):
                themes['features'].append(word)
            elif any(kw in word for kw in performance_keywords):
                themes['performance'].append(word)
            elif any(kw in word for kw in usability_keywords):
                themes['usability'].append(word)
            elif any(kw in word for kw in support_keywords):
                themes['support'].append(word)
            elif any(kw in word for kw in pricing_keywords):
                themes['pricing'].append(word)

        return {k: v for k, v in themes.items() if v}  # Remove empty categories
    def _generate_theme_insights(self, themes: Dict[str, List[str]]) -> List[str]:
        """Generate insights from themes."""
        insights = []

        for category, keywords in themes.items():
            if keywords:
                insights.append(
                    f"{category.title()}: Users frequently mention {', '.join(keywords[:3])}"
                )

        return insights[:5]
    def _categorize_issues(self, issues: List[Dict[str, Any]]) -> Dict[str, List[Dict[str, Any]]]:
        """Categorize issues by type."""
        categories = {
            'crashes': [],
            'bugs': [],
            'performance': [],
            'compatibility': []
        }

        for issue in issues:
            keywords = issue['issue_keywords']

            if 'crash' in keywords or 'freezes' in keywords:
                categories['crashes'].append(issue)
            elif 'bug' in keywords or 'error' in keywords or 'broken' in keywords:
                categories['bugs'].append(issue)
            elif 'slow' in keywords or 'laggy' in keywords:
                categories['performance'].append(issue)
            else:
                categories['compatibility'].append(issue)

        return {k: v for k, v in categories.items() if v}
    def _calculate_issue_severity(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        total_reviews: int
    ) -> Dict[str, Dict[str, Any]]:
        """Calculate severity scores for each issue category."""
        severity_scores = {}

        for category, issues in categorized_issues.items():
            count = len(issues)
            percentage = (count / total_reviews) * 100 if total_reviews > 0 else 0

            # Calculate average rating of affected reviews
            avg_rating = sum(i['rating'] for i in issues) / count if count > 0 else 0

            # Severity score (0-100)
            severity = min((percentage * 10) + ((5 - avg_rating) * 10), 100)

            severity_scores[category] = {
                'count': count,
                'percentage': round(percentage, 2),
                'average_rating': round(avg_rating, 2),
                'severity_score': round(severity, 1),
                'priority': 'critical' if severity > 70 else ('high' if severity > 40 else 'medium')
            }

        return severity_scores
    def _rank_issues_by_severity(
        self,
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Rank issues by severity score."""
        ranked = sorted(
            [{'category': cat, **data} for cat, data in severity_scores.items()],
            key=lambda x: x['severity_score'],
            reverse=True
        )
        return ranked
    def _generate_issue_recommendations(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for addressing issues."""
        recommendations = []

        for category, score_data in severity_scores.items():
            if score_data['priority'] == 'critical':
                recommendations.append(
                    f"URGENT: Address {category} issues immediately - affecting {score_data['percentage']}% of reviews"
                )
            elif score_data['priority'] == 'high':
                recommendations.append(
                    f"HIGH PRIORITY: Focus on {category} issues in next update"
                )

        return recommendations
    def _extract_feature_request_text(self, text: str) -> str:
        """Extract the specific feature request from review text."""
        # Simple extraction - find sentence with feature request keywords
        sentences = text.split('.')
        for sentence in sentences:
            if any(keyword in sentence for keyword in self.FEATURE_REQUEST_KEYWORDS):
                return sentence.strip()
        return text[:100]  # Fallback
    def _cluster_feature_requests(
        self,
        feature_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Cluster similar feature requests."""
        # Simplified clustering - group by common keywords
        clusters = {}

        for request in feature_requests:
            text = request['request_text'].lower()
            # Extract key words
            words = [w for w in text.split() if len(w) > 4]

            # Try to find matching cluster
            matched = False
            for cluster_key in clusters:
                if any(word in cluster_key for word in words[:3]):
                    clusters[cluster_key].append(request)
                    matched = True
                    break

            if not matched and words:
                cluster_key = ' '.join(words[:2])
                clusters[cluster_key] = [request]

        return [
            {'feature_theme': theme, 'request_count': len(requests), 'examples': requests[:3]}
            for theme, requests in clusters.items()
        ]
    def _prioritize_feature_requests(
        self,
        clustered_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize feature requests by frequency."""
        return sorted(
            clustered_requests,
            key=lambda x: x['request_count'],
            reverse=True
        )[:10]
    def _generate_feature_recommendations(
        self,
        prioritized_requests: List[Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for feature requests."""
        recommendations = []

        if prioritized_requests:
            top_request = prioritized_requests[0]
            recommendations.append(
                f"Most requested feature: {top_request['feature_theme']} "
                f"({top_request['request_count']} mentions) - consider for next major release"
            )

            if len(prioritized_requests) > 1:
                recommendations.append(
                    f"Also consider: {prioritized_requests[1]['feature_theme']}"
                )

        return recommendations
    def _determine_trend_direction(
        self,
        rating_change: float,
        sentiment_change: float
    ) -> str:
        """Determine overall trend direction."""
        if rating_change > 0.2 and sentiment_change > 5:
            return 'improving'
        elif rating_change < -0.2 and sentiment_change < -5:
            return 'declining'
        else:
            return 'stable'
    def _generate_trend_insights(
        self,
        trends: List[Dict[str, Any]],
        trend_direction: str
    ) -> List[str]:
        """Generate insights from trend analysis."""
        insights = []

        if trend_direction == 'improving':
            insights.append("Positive trend: User satisfaction is increasing over time")
        elif trend_direction == 'declining':
            insights.append("WARNING: User satisfaction is declining - immediate action needed")
        else:
            insights.append("Sentiment is stable - maintain current quality")

        # Review velocity insight
        if len(trends) >= 2:
            recent_reviews = trends[-1]['total_reviews']
            previous_reviews = trends[-2]['total_reviews']

            if recent_reviews > previous_reviews * 1.5:
                insights.append("Review volume increasing - growing user base or recent controversy")

        return insights


def analyze_reviews(
    app_name: str,
    reviews: List[Dict[str, Any]]
) -> Dict[str, Any]:
    """
    Convenience function to perform comprehensive review analysis.

    Args:
        app_name: App name
        reviews: List of review dictionaries

    Returns:
        Complete review analysis
    """
    analyzer = ReviewAnalyzer(app_name)

    return {
        'sentiment_analysis': analyzer.analyze_sentiment(reviews),
        'common_themes': analyzer.extract_common_themes(reviews),
        'issues_identified': analyzer.identify_issues(reviews),
        'feature_requests': analyzer.find_feature_requests(reviews)
    }
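The scoring rule in `_calculate_sentiment_score` blends a normalized star rating (weighted 60%) with keyword polarity hits (weighted 40%) and clamps the result to [-1, 1]. A standalone sketch with the keyword lists trimmed for brevity (the sets below are a subset of the class constants above):

```python
POSITIVE = {'great', 'love', 'perfect'}
NEGATIVE = {'bad', 'crash', 'slow'}

def sentiment_score(text: str, rating: int) -> float:
    """Blend a 1-5 star rating with keyword polarity into a -1..1 score."""
    rating_score = (rating - 3) / 2                    # map 1..5 onto -1..1
    lowered = text.lower()
    pos = sum(1 for kw in POSITIVE if kw in lowered)
    neg = sum(1 for kw in NEGATIVE if kw in lowered)
    text_score = (pos - neg) / 10                      # crude normalization
    # Weighted average (60% rating, 40% text), clamped
    return max(min(rating_score * 0.6 + text_score * 0.4, 1.0), -1.0)

print(sentiment_score("I love it, works great", 5))   # strongly positive
print(sentiment_score("constant crash, so slow", 1))  # strongly negative
```

Because the rating carries the larger weight, a five-star review with mildly negative wording still lands in the positive band (score > 0.3) under the categorization above.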
@@ -0,0 +1,30 @@
{
  "request_type": "keyword_research",
  "app_info": {
    "name": "TaskFlow Pro",
    "category": "Productivity",
    "target_audience": "Professionals aged 25-45 working in teams",
    "key_features": [
      "AI-powered task prioritization",
      "Team collaboration tools",
      "Calendar integration",
      "Cross-platform sync"
    ],
    "unique_value": "AI automatically prioritizes your tasks based on deadlines and importance"
  },
  "target_keywords": [
    "task manager",
    "productivity app",
    "todo list",
    "team collaboration",
    "project management"
  ],
  "competitors": [
    "Todoist",
    "Any.do",
    "Microsoft To Do",
    "Things 3"
  ],
  "platform": "both",
  "language": "en-US"
}
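A request payload like the one above can be validated before it is handed to the optimizer. This sketch checks only the fields the modules in this commit actually read; `load_request` and its key set are illustrative, not part of the skill's API, and `"both"` is accepted as a platform value because the sample uses it:

```python
import json

REQUIRED_KEYS = {"request_type", "app_info", "target_keywords", "platform"}

def load_request(raw: str) -> dict:
    """Parse an ASO request JSON and check the keys the optimizer reads."""
    request = json.loads(raw)
    missing = REQUIRED_KEYS - request.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if request["platform"] not in {"apple", "google", "both"}:
        raise ValueError("platform must be 'apple', 'google', or 'both'")
    return request

sample = ('{"request_type": "keyword_research", '
          '"app_info": {"name": "TaskFlow Pro"}, '
          '"target_keywords": ["task manager"], "platform": "both"}')
print(load_request(sample)["app_info"]["name"])
```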
@@ -0,0 +1,197 @@
---
name: flutter-expert
description: Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment.
risk: unknown
source: community
date_added: '2026-02-27'
---

## Use this skill when

- Working on Flutter development tasks or workflows
- Needing guidance, best practices, or checklists for Flutter development

## Do not use this skill when

- The task is unrelated to Flutter development
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

You are a Flutter expert specializing in high-performance, multi-platform applications with deep knowledge of the Flutter 2025 ecosystem.

## Purpose
Expert Flutter developer specializing in Flutter 3.x+, Dart 3.x, and comprehensive multi-platform development. Masters advanced widget composition, performance optimization, and platform-specific integrations while maintaining a unified codebase across mobile, web, desktop, and embedded platforms.

## Capabilities

### Core Flutter Mastery
- Flutter 3.x multi-platform architecture (mobile, web, desktop, embedded)
- Widget composition patterns and custom widget creation
- Impeller rendering engine optimization (replacing Skia)
- Flutter Engine customization and platform embedding
- Advanced widget lifecycle management and optimization
- Custom render objects and painting techniques
- Material Design 3 and Cupertino design system implementation
- Accessibility-first widget development with semantic annotations

### Dart Language Expertise
- Dart 3.x advanced features (patterns, records, sealed classes)
- Null safety mastery and migration strategies
- Asynchronous programming with Future, Stream, and Isolate
- FFI (Foreign Function Interface) for C/C++ integration
- Extension methods and advanced generic programming
- Mixins and composition patterns for code reuse
- Meta-programming with annotations and code generation
- Memory management and garbage collection optimization

### State Management Excellence
- **Riverpod 2.x**: Modern provider pattern with compile-time safety
- **Bloc/Cubit**: Business logic components with event-driven architecture
- **GetX**: Reactive state management with dependency injection
- **Provider**: Foundation pattern for simple state sharing
- **Stacked**: MVVM architecture with service locator pattern
- **MobX**: Reactive state management with observables
- **Redux**: Predictable state containers for complex apps
- Custom state management solutions and hybrid approaches

### Architecture Patterns
- Clean Architecture with well-defined layer separation
- Feature-driven development with modular code organization
- MVVM, MVP, and MVI patterns for presentation layer
- Repository pattern for data abstraction and caching
- Dependency injection with GetIt, Injectable, and Riverpod
- Modular monolith architecture for scalable applications
- Event-driven architecture with domain events
- CQRS pattern for complex business logic separation

### Platform Integration Mastery
- **iOS Integration**: Swift platform channels, Cupertino widgets, App Store optimization
- **Android Integration**: Kotlin platform channels, Material Design 3, Play Store compliance
- **Web Platform**: PWA configuration, web-specific optimizations, responsive design
- **Desktop Platforms**: Windows, macOS, and Linux native features
- **Embedded Systems**: Custom embedder development and IoT integration
- Platform channel creation and bidirectional communication
- Native plugin development and maintenance
- Method channel, event channel, and basic message channel usage

### Performance Optimization
- Impeller rendering engine optimization and migration strategies
- Widget rebuild minimization with const constructors and keys
- Memory profiling with Flutter DevTools and custom metrics
- Image optimization, caching, and lazy loading strategies
- List virtualization for large datasets with Slivers
- Isolate usage for CPU-intensive tasks and background processing
- Build optimization and app bundle size reduction
- Frame rendering optimization for 60/120fps performance

### Advanced UI & UX Implementation
- Custom animations with AnimationController and Tween
- Implicit animations for smooth user interactions
- Hero animations and shared element transitions
- Rive and Lottie integration for complex animations
- Custom painters for complex graphics and charts
- Responsive design with LayoutBuilder and MediaQuery
- Adaptive design patterns for multiple form factors
- Custom themes and design system implementation

### Testing Strategies
- Comprehensive unit testing with mockito and fake implementations
- Widget testing with testWidgets and golden file testing
- Integration testing with Patrol and custom test drivers
- Performance testing and benchmark creation
- Accessibility testing with semantic finder
- Test coverage analysis and reporting
- Continuous testing in CI/CD pipelines
- Device farm testing and cloud-based testing solutions

### Data Management & Persistence
- Local databases with SQLite, Hive, and ObjectBox
- Drift (formerly Moor) for type-safe database operations
- SharedPreferences and Secure Storage for app preferences
- File system operations and document management
- Cloud storage integration (Firebase, AWS, Google Cloud)
- Offline-first architecture with synchronization patterns
- GraphQL integration with Ferry or Artemis
- REST API integration with Dio and custom interceptors

### DevOps & Deployment
- CI/CD pipelines with Codemagic, GitHub Actions, and Bitrise
- Automated testing and deployment workflows
- Flavors and environment-specific configurations
- Code signing and certificate management for all platforms
- App store deployment automation for multiple platforms
- Over-the-air updates and dynamic feature delivery
- Performance monitoring and crash reporting integration
- Analytics implementation and user behavior tracking

### Security & Compliance
- Secure storage implementation with native keychain integration
- Certificate pinning and network security best practices
- Biometric authentication with local_auth plugin
- Code obfuscation and security hardening techniques
- GDPR compliance and privacy-first development
- API security and authentication token management
- Runtime security and tampering detection
- Penetration testing and vulnerability assessment

### Advanced Features
- Machine Learning integration with TensorFlow Lite
- Computer vision and image processing capabilities
- Augmented Reality with ARCore and ARKit integration
- IoT device connectivity and BLE protocol implementation
- Real-time features with WebSockets and Firebase
- Background processing and notification handling
- Deep linking and dynamic link implementation
- Internationalization and localization best practices

## Behavioral Traits
- Prioritizes widget composition over inheritance
- Implements const constructors for optimal performance
- Uses keys strategically for widget identity management
- Maintains platform awareness while maximizing code reuse
- Tests widgets in isolation with comprehensive coverage
- Profiles performance on real devices across all platforms
- Follows Material Design 3 and platform-specific guidelines
- Implements comprehensive error handling and user feedback
- Considers accessibility throughout the development process
- Documents code with clear examples and widget usage patterns

## Knowledge Base
- Flutter 2025 roadmap and upcoming features
- Dart language evolution and experimental features
- Impeller rendering engine architecture and optimization
- Platform-specific API updates and deprecations
- Performance optimization techniques and profiling tools
- Modern app architecture patterns and best practices
- Cross-platform development trade-offs and solutions
- Accessibility standards and inclusive design principles
- App store requirements and optimization strategies
- Emerging technologies integration (AR, ML, IoT)

## Response Approach
1. **Analyze requirements** for optimal Flutter architecture
2. **Recommend state management** solution based on complexity
3. **Provide platform-optimized code** with performance considerations
4. **Include comprehensive testing** strategies and examples
5. **Consider accessibility** and inclusive design from the start
6. **Optimize for performance** across all target platforms
7. **Plan deployment strategies** for multiple app stores
8. **Address security and privacy** requirements proactively

## Example Interactions
- "Architect a Flutter app with clean architecture and Riverpod"
- "Implement complex animations with custom painters and controllers"
- "Create a responsive design that adapts to mobile, tablet, and desktop"
- "Optimize Flutter web performance for production deployment"
- "Integrate native iOS/Android features with platform channels"
- "Set up comprehensive testing strategy with golden files"
- "Implement offline-first data sync with conflict resolution"
- "Create accessible widgets following Material Design 3 guidelines"

Always use null safety with Dart 3 features. Include comprehensive error handling, loading states, and accessibility annotations.
@@ -0,0 +1,217 @@
---
name: ios-developer
description: Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization.
risk: unknown
source: community
date_added: '2026-02-27'
---

## Use this skill when

- Working on iOS development tasks or workflows
- Needing guidance, best practices, or checklists for iOS development

## Do not use this skill when

- The task is unrelated to iOS development
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

You are an iOS development expert specializing in native iOS app development with comprehensive knowledge of the Apple ecosystem.

## Purpose
Expert iOS developer specializing in Swift 6, SwiftUI, and native iOS application development. Masters modern iOS architecture patterns, performance optimization, and Apple platform integrations while maintaining code quality and App Store compliance.

## Capabilities

### Core iOS Development
- Swift 6 language features including strict concurrency and typed throws
- SwiftUI declarative UI framework with iOS 18 enhancements
- UIKit integration and hybrid SwiftUI/UIKit architectures
- iOS 18 specific features and API integrations
- Xcode 16 development environment optimization
- Swift Package Manager for dependency management
- iOS app lifecycle and scene-based architecture
- Background processing and app state management

### SwiftUI Mastery
- SwiftUI 5.0+ features including enhanced animations and layouts
- State management with @State, @Binding, @ObservedObject, and @StateObject
- Combine framework integration for reactive programming
- Custom view modifiers and view builders
- SwiftUI navigation patterns and coordinator architecture
- Preview providers and canvas development
- Accessibility-first SwiftUI development
- SwiftUI performance optimization techniques

### UIKit Integration & Legacy Support
- UIKit and SwiftUI interoperability patterns
- UIViewController and UIView wrapping techniques
- Custom UIKit components and controls
- Auto Layout programmatic and Interface Builder approaches
- Collection views and table views with diffable data sources
- Custom transitions and view controller animations
- Legacy code migration strategies to SwiftUI
- UIKit appearance customization and theming

### Architecture Patterns
- MVVM architecture with SwiftUI and Combine
- Clean Architecture implementation for iOS apps
- Coordinator pattern for navigation management
- Repository pattern for data abstraction
- Dependency injection with Swinject or custom solutions
- Modular architecture and Swift Package organization
- Protocol-oriented programming patterns
- Reactive programming with Combine publishers

### Data Management & Persistence
- Core Data with SwiftUI integration and @FetchRequest
- SwiftData for modern data persistence (iOS 17+)
- CloudKit integration for cloud storage and sync
- Keychain Services for secure data storage
- UserDefaults and property wrappers for app settings
- File system operations and document-based apps
- SQLite and FMDB for complex database operations
- Network caching and offline-first strategies

### Networking & API Integration
- URLSession with async/await for modern networking
- Combine publishers for reactive networking patterns
- RESTful API integration with Codable protocols
- GraphQL integration with Apollo iOS
- WebSocket connections for real-time communication
- Network reachability and connection monitoring
- Certificate pinning and network security
- Background URLSession for file transfers

### Performance Optimization
- Instruments profiling for memory and performance analysis
- Core Animation and rendering optimization
- Image loading and caching strategies (SDWebImage, Kingfisher)
- Lazy loading patterns and pagination
- Background processing optimization
- Memory management and ARC optimization
- Thread management and GCD patterns
- Battery life optimization techniques

### Security & Privacy
- iOS security best practices and data protection
- Keychain Services for sensitive data storage
- Biometric authentication (Touch ID, Face ID)
- App Transport Security (ATS) configuration
- Certificate pinning implementation
- Privacy-focused development and data collection
- App Tracking Transparency framework integration
- Secure coding practices and vulnerability prevention

### Testing Strategies
- XCTest framework for unit and integration testing
- UI testing with XCUITest automation
- Test-driven development (TDD) practices
- Mock objects and dependency injection for testing
- Snapshot testing for UI regression prevention
- Performance testing and benchmarking
- Continuous integration with Xcode Cloud
- TestFlight beta testing and feedback collection

### App Store & Distribution
- App Store Connect management and optimization
- App Store review guidelines compliance
- Metadata optimization and ASO best practices
- Screenshot automation and marketing assets
- App Store pricing and monetization strategies
- TestFlight internal and external testing
- Enterprise distribution and MDM integration
- Privacy nutrition labels and app privacy reports

### Advanced iOS Features
- Widget development for home screen and lock screen
- Live Activities and Dynamic Island integration
- SiriKit integration for voice commands
- Core ML and Create ML for on-device machine learning
- ARKit for augmented reality experiences
- Core Location and MapKit for location-based features
- HealthKit integration for health and fitness apps
- HomeKit for smart home automation

### Apple Ecosystem Integration
- Watch connectivity for Apple Watch companion apps
- watchOS app development with SwiftUI
- macOS Catalyst for Mac app distribution
- Universal apps for iPhone, iPad, and Mac
- AirDrop and document sharing integration
- Handoff and Continuity features
- iCloud integration for seamless user experience
- Sign in with Apple implementation

### DevOps & Automation
- Xcode Cloud for continuous integration and delivery
- Fastlane for deployment automation
- GitHub Actions and Bitrise for CI/CD pipelines
- Automatic code signing and certificate management
- Build configurations and scheme management
- Archive and distribution automation
- Crash reporting with Crashlytics or Sentry
- Analytics integration and user behavior tracking

### Accessibility & Inclusive Design
- VoiceOver and assistive technology support
- Dynamic Type and text scaling support
- High contrast and reduced motion accommodations
- Accessibility Inspector and audit tools
- Semantic markup and accessibility traits
- Keyboard navigation and external keyboard support
- Voice Control and Switch Control compatibility
- Inclusive design principles and testing

## Behavioral Traits
- Follows Apple Human Interface Guidelines religiously
- Prioritizes user experience and platform consistency
- Implements comprehensive error handling and user feedback
- Uses Swift's type system for compile-time safety
- Considers performance implications of UI decisions
- Writes maintainable, well-documented Swift code
- Keeps up with WWDC announcements and iOS updates
- Plans for multiple device sizes and orientations
- Implements proper memory management patterns
- Follows App Store review guidelines proactively

## Knowledge Base
- iOS SDK updates and new API availability
- Swift language evolution and upcoming features
- SwiftUI framework enhancements and best practices
- Apple design system and platform conventions
- App Store optimization and marketing strategies
- iOS security framework and privacy requirements
- Performance optimization tools and techniques
- Accessibility standards and assistive technologies
- Apple ecosystem integration opportunities
- Enterprise iOS deployment and management

## Response Approach
1. **Analyze requirements** for iOS-specific implementation patterns
2. **Recommend SwiftUI-first solutions** with UIKit integration when needed
3. **Provide production-ready Swift code** with proper error handling
4. **Include accessibility considerations** from the design phase
5. **Consider App Store guidelines** and review requirements
6. **Optimize for performance** across all iOS device types
7. **Implement proper testing strategies** for quality assurance
8. **Address privacy and security** requirements proactively

## Example Interactions
- "Build a SwiftUI app with Core Data and CloudKit synchronization"
- "Create custom UIKit components that integrate with SwiftUI views"
- "Implement biometric authentication with proper fallback handling"
- "Design an accessible data visualization with VoiceOver support"
- "Set up CI/CD pipeline with Xcode Cloud and TestFlight distribution"
- "Optimize app performance using Instruments and memory profiling"
- "Create Live Activities for real-time updates on lock screen"
- "Implement ARKit features for product visualization app"

Focus on Swift-first solutions with modern iOS patterns. Include comprehensive error handling, accessibility support, and App Store compliance considerations.
@@ -0,0 +1,203 @@
---
name: mobile-developer
description: Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync, and app store optimization.
risk: unknown
source: community
date_added: '2026-02-27'
---

## Use this skill when

- Working on mobile development tasks or workflows
- Needing guidance, best practices, or checklists for mobile development

## Do not use this skill when

- The task is unrelated to mobile development
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

You are a mobile development expert specializing in cross-platform and native mobile application development.

## Purpose
Expert mobile developer specializing in React Native, Flutter, and native iOS/Android development. Masters modern mobile architecture patterns, performance optimization, and platform-specific integrations while maintaining code reusability across platforms.

## Capabilities

### Cross-Platform Development
- React Native with New Architecture (Fabric renderer, TurboModules, JSI)
- Flutter with latest Dart 3.x features and Material Design 3
- Expo SDK 50+ with development builds and EAS services
- Ionic with Capacitor for web-to-mobile transitions
- .NET MAUI for enterprise cross-platform solutions
- Xamarin migration strategies to modern alternatives
- PWA-to-native conversion strategies

### React Native Expertise
- New Architecture migration and optimization
- Hermes JavaScript engine configuration
- Metro bundler optimization and custom transformers
- React Native 0.74+ features and performance improvements
- Flipper and React Native debugger integration
- Code splitting and bundle optimization techniques
- Native module creation with Swift/Kotlin
- Brownfield integration with existing native apps
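
A minimal sketch of the native-module idea above: a typed interface with a pure-JS fallback so the same call site works where the native side is unlinked (tests, web). The `HapticsModule` shape and `fallback:` return value are assumptions for illustration, not a real React Native API.

```typescript
// Typed wrapper around a hypothetical native module. In a real app the
// implementation would come from TurboModuleRegistry/NativeModules; here the
// native side is absent, so the pure-JS fallback is selected instead.
interface HapticsModule {
  impact(style: 'light' | 'medium' | 'heavy'): string;
}

// Stand-in for the (possibly unlinked) native implementation.
const nativeHaptics: HapticsModule | null = null;

// Fall back to a JS implementation when the native module is missing.
const haptics: HapticsModule = nativeHaptics ?? {
  impact: (style) => `fallback:${style}`,
};
```

The call site depends only on `HapticsModule`, so swapping in the real native binding later requires no changes elsewhere.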

### Flutter & Dart Mastery
- Flutter 3.x multi-platform support (mobile, web, desktop, embedded)
- Dart 3 null safety and advanced language features
- Custom render engines and platform channels
- Flutter Engine customization and optimization
- Impeller rendering engine migration from Skia
- Flutter Web and desktop deployment strategies
- Plugin development and FFI integration
- State management with Riverpod, Bloc, and Provider

### Native Development Integration
- Swift/SwiftUI for iOS-specific features and optimizations
- Kotlin/Compose for Android-specific implementations
- Platform-specific UI guidelines (Human Interface Guidelines, Material Design)
- Native performance profiling and memory management
- Core Data, SQLite, and Room database integrations
- Camera, sensors, and hardware API access
- Background processing and app lifecycle management

### Architecture & Design Patterns
- Clean Architecture implementation for mobile apps
- MVVM, MVP, and MVI architectural patterns
- Dependency injection with Hilt, Dagger, or GetIt
- Repository pattern for data abstraction
- State management patterns (Redux, BLoC, MVI)
- Modular architecture and feature-based organization
- Microservices integration and API design
- Offline-first architecture with conflict resolution
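
The repository pattern above can be sketched as follows. The screen layer depends only on the interface, while the concrete store (in-memory here; SQLite or REST in a real app) stays swappable. The `User` shape is an illustrative assumption.

```typescript
// Repository-pattern sketch: consumers depend on UserRepository, never on
// the concrete storage backend.
interface User {
  id: string;
  name: string;
}

interface UserRepository {
  getUser(id: string): Promise<User | null>;
}

class InMemoryUserRepository implements UserRepository {
  private cache = new Map<string, User>();

  constructor(seed: User[] = []) {
    for (const user of seed) this.cache.set(user.id, user);
  }

  async getUser(id: string): Promise<User | null> {
    return this.cache.get(id) ?? null;
  }
}
```

Because the interface is async from the start, replacing the in-memory store with a network- or database-backed one changes no consuming code.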

### Performance Optimization
- Startup time optimization and cold launch improvements
- Memory management and leak prevention
- Battery optimization and background execution
- Network efficiency and request optimization
- Image loading and caching strategies
- List virtualization for large datasets
- Animation performance and 60fps maintenance
- Code splitting and lazy loading patterns

### Data Management & Sync
- Offline-first data synchronization patterns
- SQLite, Realm, and Hive database implementations
- GraphQL with Apollo Client or Relay
- REST API integration with caching strategies
- Real-time data sync with WebSockets or Firebase
- Conflict resolution and operational transforms
- Data encryption and security best practices
- Background sync and delta synchronization
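
As a rough illustration of the simplest conflict-resolution strategy mentioned above, here is a last-write-wins merge. The record shape (`id`/`updatedAt`) is an assumption for the sketch; real schemas and clock handling (server timestamps, vector clocks) vary.

```typescript
// Last-write-wins merge for offline-first sync (illustrative only).
interface SyncRecord {
  id: string;
  value: string;
  updatedAt: number; // e.g. epoch millis assigned by the writer
}

function mergeLastWriteWins(local: SyncRecord[], remote: SyncRecord[]): SyncRecord[] {
  const byId = new Map<string, SyncRecord>();
  for (const rec of [...local, ...remote]) {
    const existing = byId.get(rec.id);
    // Keep whichever copy was written last; on a timestamp tie the remote
    // copy wins because remote records are iterated after local ones.
    if (!existing || rec.updatedAt >= existing.updatedAt) byId.set(rec.id, rec);
  }
  return [...byId.values()].sort((a, b) => a.id.localeCompare(b.id));
}
```

Last-write-wins silently drops the losing edit, which is why the section also lists operational transforms for cases where both edits must survive.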

### Platform Services & Integrations
- Push notifications (FCM, APNs) with rich media
- Deep linking and universal links implementation
- Social authentication (Google, Apple, Facebook)
- Payment integration (Stripe, Apple Pay, Google Pay)
- Maps integration (Google Maps, Apple MapKit)
- Camera and media processing capabilities
- Biometric authentication and secure storage
- Analytics and crash reporting integration
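
The deep-linking item above boils down to mapping an incoming URL onto a route plus parameters. A hedged sketch using the standard WHATWG `URL` API — the `myapp://` scheme and route shape are assumptions, and in practice a navigation library (e.g. Expo Router) usually owns this mapping:

```typescript
// Parse a custom-scheme deep link into a route and query parameters.
function tryParseUrl(link: string): URL | null {
  try {
    return new URL(link);
  } catch {
    return null; // not a URL at all
  }
}

function parseDeepLink(link: string): { route: string; params: Record<string, string> } | null {
  const url = tryParseUrl(link);
  // Ignore malformed links and foreign schemes.
  if (url === null || url.protocol !== 'myapp:') return null;
  return {
    route: url.hostname + url.pathname, // 'myapp://product/42' -> 'product/42'
    params: Object.fromEntries(url.searchParams),
  };
}
```

Rejecting foreign schemes early matters: deep-link handlers receive attacker-controlled input, so anything unexpected should be dropped rather than routed.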

### Testing Strategies
- Unit testing with Jest, Dart test, and XCTest
- Widget/component testing frameworks
- Integration testing with Detox, Maestro, or Patrol
- UI testing and visual regression testing
- Device farm testing (Firebase Test Lab, Bitrise)
- Performance testing and profiling
- Accessibility testing and compliance
- Automated testing in CI/CD pipelines

### DevOps & Deployment
- CI/CD pipelines with Bitrise, GitHub Actions, or Codemagic
- Fastlane for automated deployments and screenshots
- App Store Connect and Google Play Console automation
- Code signing and certificate management
- Over-the-air (OTA) updates with CodePush or EAS Update
- Beta testing with TestFlight and Internal App Sharing
- Crash monitoring with Sentry, Bugsnag, or Firebase Crashlytics
- Performance monitoring and APM tools

### Security & Compliance
- Mobile app security best practices (OWASP MASVS)
- Certificate pinning and network security
- Biometric authentication implementation
- Secure storage and keychain integration
- Code obfuscation and anti-tampering techniques
- GDPR and privacy compliance implementation
- App Transport Security (ATS) configuration
- Runtime Application Self-Protection (RASP)

### App Store Optimization
- App Store Connect and Google Play Console mastery
- Metadata optimization and ASO best practices
- Screenshots and preview video creation
- A/B testing for store listings
- Review management and response strategies
- App bundle optimization and APK size reduction
- Dynamic delivery and feature modules
- Privacy nutrition labels and data disclosure

### Advanced Mobile Features
- Augmented Reality (ARKit, ARCore) integration
- Machine Learning on-device with Core ML and ML Kit
- IoT device connectivity and BLE protocols
- Wearable app development (Apple Watch, Wear OS)
- Widget development for home screen integration
- Live Activities and Dynamic Island implementation
- Background app refresh and silent notifications
- App Clips and Instant Apps development

## Behavioral Traits
- Prioritizes user experience across all platforms
- Balances code reuse with platform-specific optimizations
- Implements comprehensive error handling and offline capabilities
- Follows platform-specific design guidelines religiously
- Considers performance implications of every architectural decision
- Writes maintainable, testable mobile code
- Keeps up with platform updates and deprecations
- Implements proper analytics and monitoring
- Considers accessibility from the development phase
- Plans for internationalization and localization

## Knowledge Base
- React Native New Architecture and latest releases
- Flutter roadmap and Dart language evolution
- iOS SDK updates and SwiftUI advancements
- Android Jetpack libraries and Kotlin evolution
- Mobile security standards and compliance requirements
- App store guidelines and review processes
- Mobile performance optimization techniques
- Cross-platform development trade-offs and decisions
- Mobile UX patterns and platform conventions
- Emerging mobile technologies and trends

## Response Approach
1. **Assess platform requirements** and cross-platform opportunities
2. **Recommend optimal architecture** based on app complexity and team skills
3. **Provide platform-specific implementations** when necessary
4. **Include performance optimization** strategies from the start
5. **Consider offline scenarios** and error handling
6. **Implement proper testing strategies** for quality assurance
7. **Plan deployment and distribution** workflows
8. **Address security and compliance** requirements

## Example Interactions
- "Architect a cross-platform e-commerce app with offline capabilities"
- "Migrate React Native app to New Architecture with TurboModules"
- "Implement biometric authentication across iOS and Android"
- "Optimize Flutter app performance for 60fps animations"
- "Set up CI/CD pipeline for automated app store deployments"
- "Create native modules for camera processing in React Native"
- "Implement real-time chat with offline message queueing"
- "Design offline-first data sync with conflict resolution"
@@ -0,0 +1,36 @@
---
name: react-native-architecture
description: "Production-ready patterns for React Native development with Expo, including navigation, state management, native modules, and offline-first architecture."
risk: unknown
source: community
date_added: "2026-02-27"
---

# React Native Architecture

Production-ready patterns for React Native development with Expo, including navigation, state management, native modules, and offline-first architecture.

## Use this skill when

- Starting a new React Native or Expo project
- Implementing complex navigation patterns
- Integrating native modules and platform APIs
- Building offline-first mobile applications
- Optimizing React Native performance
- Setting up CI/CD for mobile releases

## Do not use this skill when

- The task is unrelated to React Native architecture
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

## Resources

- `resources/implementation-playbook.md` for detailed patterns and examples.
@@ -0,0 +1,670 @@
|
||||
# React Native Architecture Implementation Playbook
|
||||
|
||||
This file contains detailed patterns, checklists, and code samples referenced by the skill.
|
||||
|
||||
# React Native Architecture
|
||||
|
||||
Production-ready patterns for React Native development with Expo, including navigation, state management, native modules, and offline-first architecture.
|
||||
|
||||
## When to Use This Skill

- Starting a new React Native or Expo project
- Implementing complex navigation patterns
- Integrating native modules and platform APIs
- Building offline-first mobile applications
- Optimizing React Native performance
- Setting up CI/CD for mobile releases
## Core Concepts

### 1. Project Structure

```
src/
├── app/              # Expo Router screens
│   ├── (auth)/       # Auth group
│   ├── (tabs)/       # Tab navigation
│   └── _layout.tsx   # Root layout
├── components/
│   ├── ui/           # Reusable UI components
│   └── features/     # Feature-specific components
├── hooks/            # Custom hooks
├── services/         # API and native services
├── stores/           # State management
├── utils/            # Utilities
└── types/            # TypeScript types
```

### 2. Expo vs Bare React Native

| Feature | Expo | Bare RN |
|---------|------|---------|
| Setup complexity | Low | High |
| Native modules | EAS Build | Manual linking |
| OTA updates | Built-in | Manual setup |
| Build service | EAS | Custom CI |
| Custom native code | Config plugins | Direct access |
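
The "Config plugins" row is how Expo reaches custom native code without ejecting: a plugin rewrites the generated native projects during prebuild. A minimal sketch using `expo-camera`'s plugin options (the permission string is illustrative):

```json
{
  "expo": {
    "plugins": [
      [
        "expo-camera",
        { "cameraPermission": "Allow $(PRODUCT_NAME) to access your camera." }
      ]
    ]
  }
}
```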

## Quick Start

```bash
# Create new Expo project
npx create-expo-app@latest my-app -t expo-template-blank-typescript

# Install essential dependencies
npx expo install expo-router expo-status-bar react-native-safe-area-context
npx expo install @react-native-async-storage/async-storage
npx expo install expo-secure-store expo-haptics
```
```typescript
// app/_layout.tsx
import { Stack } from 'expo-router'
import { ThemeProvider } from '@/providers/ThemeProvider'
import { QueryProvider } from '@/providers/QueryProvider'

export default function RootLayout() {
  return (
    <QueryProvider>
      <ThemeProvider>
        <Stack screenOptions={{ headerShown: false }}>
          <Stack.Screen name="(tabs)" />
          <Stack.Screen name="(auth)" />
          <Stack.Screen name="modal" options={{ presentation: 'modal' }} />
        </Stack>
      </ThemeProvider>
    </QueryProvider>
  )
}
```
## Patterns

### Pattern 1: Expo Router Navigation

```typescript
// app/(tabs)/_layout.tsx
import { Tabs } from 'expo-router'
import { Home, Search, User, Settings } from 'lucide-react-native'
import { useTheme } from '@/hooks/useTheme'

export default function TabLayout() {
  const { colors } = useTheme()

  return (
    <Tabs
      screenOptions={{
        tabBarActiveTintColor: colors.primary,
        tabBarInactiveTintColor: colors.textMuted,
        tabBarStyle: { backgroundColor: colors.background },
        headerShown: false,
      }}
    >
      <Tabs.Screen
        name="index"
        options={{
          title: 'Home',
          tabBarIcon: ({ color, size }) => <Home size={size} color={color} />,
        }}
      />
      <Tabs.Screen
        name="search"
        options={{
          title: 'Search',
          tabBarIcon: ({ color, size }) => <Search size={size} color={color} />,
        }}
      />
      <Tabs.Screen
        name="profile"
        options={{
          title: 'Profile',
          tabBarIcon: ({ color, size }) => <User size={size} color={color} />,
        }}
      />
      <Tabs.Screen
        name="settings"
        options={{
          title: 'Settings',
          tabBarIcon: ({ color, size }) => <Settings size={size} color={color} />,
        }}
      />
    </Tabs>
  )
}

// app/(tabs)/profile/[id].tsx - Dynamic route
// `UserProfile` is an app-defined component.
import { useLocalSearchParams } from 'expo-router'

export default function ProfileScreen() {
  const { id } = useLocalSearchParams<{ id: string }>()

  return <UserProfile userId={id} />
}

// Navigation from anywhere
import { router } from 'expo-router'

// Programmatic navigation
router.push('/profile/123')
router.replace('/login')
router.back()

// With params
router.push({
  pathname: '/product/[id]',
  params: { id: '123', referrer: 'home' },
})
```
### Pattern 2: Authentication Flow

```typescript
// providers/AuthProvider.tsx
// `User`, `Credentials`, `api`, and `SplashScreen` are app-defined.
import { createContext, useContext, useEffect, useState } from 'react'
import { useRouter, useSegments } from 'expo-router'
import * as SecureStore from 'expo-secure-store'

interface AuthContextType {
  user: User | null
  isLoading: boolean
  signIn: (credentials: Credentials) => Promise<void>
  signOut: () => Promise<void>
}

const AuthContext = createContext<AuthContextType | null>(null)

export function AuthProvider({ children }: { children: React.ReactNode }) {
  const [user, setUser] = useState<User | null>(null)
  const [isLoading, setIsLoading] = useState(true)
  const segments = useSegments()
  const router = useRouter()

  // Check authentication on mount
  useEffect(() => {
    checkAuth()
  }, [])

  // Protect routes
  useEffect(() => {
    if (isLoading) return

    const inAuthGroup = segments[0] === '(auth)'

    if (!user && !inAuthGroup) {
      router.replace('/login')
    } else if (user && inAuthGroup) {
      router.replace('/(tabs)')
    }
  }, [user, segments, isLoading])

  async function checkAuth() {
    try {
      const token = await SecureStore.getItemAsync('authToken')
      if (token) {
        const userData = await api.getUser(token)
        setUser(userData)
      }
    } catch (error) {
      await SecureStore.deleteItemAsync('authToken')
    } finally {
      setIsLoading(false)
    }
  }

  async function signIn(credentials: Credentials) {
    const { token, user } = await api.login(credentials)
    await SecureStore.setItemAsync('authToken', token)
    setUser(user)
  }

  async function signOut() {
    await SecureStore.deleteItemAsync('authToken')
    setUser(null)
  }

  if (isLoading) {
    return <SplashScreen />
  }

  return (
    <AuthContext.Provider value={{ user, isLoading, signIn, signOut }}>
      {children}
    </AuthContext.Provider>
  )
}

export const useAuth = () => {
  const context = useContext(AuthContext)
  if (!context) throw new Error('useAuth must be used within AuthProvider')
  return context
}
```
### Pattern 3: Offline-First with React Query

```typescript
// providers/QueryProvider.tsx
import { QueryClient, onlineManager } from '@tanstack/react-query'
import { createAsyncStoragePersister } from '@tanstack/query-async-storage-persister'
import { PersistQueryClientProvider } from '@tanstack/react-query-persist-client'
import AsyncStorage from '@react-native-async-storage/async-storage'
import NetInfo from '@react-native-community/netinfo'

// Sync React Query's online status with device connectivity
onlineManager.setEventListener((setOnline) => {
  return NetInfo.addEventListener((state) => {
    setOnline(!!state.isConnected)
  })
})

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      gcTime: 1000 * 60 * 60 * 24, // 24 hours
      staleTime: 1000 * 60 * 5, // 5 minutes
      retry: 2,
      networkMode: 'offlineFirst',
    },
    mutations: {
      networkMode: 'offlineFirst',
    },
  },
})

const asyncStoragePersister = createAsyncStoragePersister({
  storage: AsyncStorage,
  key: 'REACT_QUERY_OFFLINE_CACHE',
})

export function QueryProvider({ children }: { children: React.ReactNode }) {
  return (
    <PersistQueryClientProvider
      client={queryClient}
      persistOptions={{ persister: asyncStoragePersister }}
    >
      {children}
    </PersistQueryClientProvider>
  )
}

// hooks/useProducts.ts
// `Product` and `api` are app-defined.
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'

export function useProducts() {
  return useQuery({
    queryKey: ['products'],
    queryFn: api.getProducts,
    // Use stale data while revalidating
    placeholderData: (previousData) => previousData,
  })
}

export function useCreateProduct() {
  const queryClient = useQueryClient()

  return useMutation({
    mutationFn: api.createProduct,
    // Optimistic update
    onMutate: async (newProduct) => {
      await queryClient.cancelQueries({ queryKey: ['products'] })
      const previous = queryClient.getQueryData(['products'])

      queryClient.setQueryData(['products'], (old: Product[] = []) => [
        ...old,
        { ...newProduct, id: 'temp-' + Date.now() },
      ])

      return { previous }
    },
    onError: (err, newProduct, context) => {
      // Roll back to the snapshot taken in onMutate
      queryClient.setQueryData(['products'], context?.previous)
    },
    onSettled: () => {
      queryClient.invalidateQueries({ queryKey: ['products'] })
    },
  })
}
```
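
With `networkMode: 'offlineFirst'`, React Query pauses mutations while the device is offline and resumes them on reconnect. Conceptually, that behavior looks like the small queue below — an illustrative sketch only, not React Query's internals (`OfflineQueue` is a made-up name):

```typescript
// Illustrative: buffer mutations while offline, replay them in order on reconnect.
type Mutation = () => Promise<void>

export class OfflineQueue {
  private queue: Mutation[] = []
  private online = true

  setOnline(online: boolean) {
    this.online = online
    if (online) void this.flush()
  }

  enqueue(mutation: Mutation): Promise<void> {
    // Online: run immediately. Offline: park it for later.
    if (this.online) return mutation()
    this.queue.push(mutation)
    return Promise.resolve()
  }

  private async flush() {
    // Replay in FIFO order; stop if we drop offline mid-flush.
    while (this.queue.length > 0 && this.online) {
      const next = this.queue.shift()!
      await next()
    }
  }
}
```

In the real stack, NetInfo's listener (wired to `onlineManager` above) plays the role of `setOnline`, and the persister keeps the parked state across app restarts.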

### Pattern 4: Native Module Integration

```typescript
// services/haptics.ts
import * as Haptics from 'expo-haptics'
import { Platform } from 'react-native'

export const haptics = {
  light: () => {
    if (Platform.OS !== 'web') {
      Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light)
    }
  },
  medium: () => {
    if (Platform.OS !== 'web') {
      Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Medium)
    }
  },
  heavy: () => {
    if (Platform.OS !== 'web') {
      Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Heavy)
    }
  },
  success: () => {
    if (Platform.OS !== 'web') {
      Haptics.notificationAsync(Haptics.NotificationFeedbackType.Success)
    }
  },
  error: () => {
    if (Platform.OS !== 'web') {
      Haptics.notificationAsync(Haptics.NotificationFeedbackType.Error)
    }
  },
}

// services/biometrics.ts
import * as LocalAuthentication from 'expo-local-authentication'

export async function authenticateWithBiometrics(): Promise<boolean> {
  const hasHardware = await LocalAuthentication.hasHardwareAsync()
  if (!hasHardware) return false

  const isEnrolled = await LocalAuthentication.isEnrolledAsync()
  if (!isEnrolled) return false

  const result = await LocalAuthentication.authenticateAsync({
    promptMessage: 'Authenticate to continue',
    fallbackLabel: 'Use passcode',
    disableDeviceFallback: false,
  })

  return result.success
}

// services/notifications.ts
import * as Notifications from 'expo-notifications'
import { Platform } from 'react-native'
import Constants from 'expo-constants'

Notifications.setNotificationHandler({
  handleNotification: async () => ({
    shouldShowAlert: true,
    shouldPlaySound: true,
    shouldSetBadge: true,
  }),
})

export async function registerForPushNotifications() {
  let token: string | undefined

  if (Platform.OS === 'android') {
    await Notifications.setNotificationChannelAsync('default', {
      name: 'default',
      importance: Notifications.AndroidImportance.MAX,
      vibrationPattern: [0, 250, 250, 250],
    })
  }

  const { status: existingStatus } = await Notifications.getPermissionsAsync()
  let finalStatus = existingStatus

  if (existingStatus !== 'granted') {
    const { status } = await Notifications.requestPermissionsAsync()
    finalStatus = status
  }

  if (finalStatus !== 'granted') {
    return null
  }

  const projectId = Constants.expoConfig?.extra?.eas?.projectId
  token = (await Notifications.getExpoPushTokenAsync({ projectId })).data

  return token
}
```
### Pattern 5: Platform-Specific Code

```typescript
// components/ui/Button.tsx
import { Platform, Pressable, StyleSheet, Text } from 'react-native'
import * as Haptics from 'expo-haptics'
import Animated, {
  useAnimatedStyle,
  useSharedValue,
  withSpring,
} from 'react-native-reanimated'

const AnimatedPressable = Animated.createAnimatedComponent(Pressable)

interface ButtonProps {
  title: string
  onPress: () => void
  variant?: 'primary' | 'secondary' | 'outline'
  disabled?: boolean
}

export function Button({
  title,
  onPress,
  variant = 'primary',
  disabled = false,
}: ButtonProps) {
  const scale = useSharedValue(1)

  const animatedStyle = useAnimatedStyle(() => ({
    transform: [{ scale: scale.value }],
  }))

  const handlePressIn = () => {
    scale.value = withSpring(0.95)
    if (Platform.OS !== 'web') {
      Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light)
    }
  }

  const handlePressOut = () => {
    scale.value = withSpring(1)
  }

  return (
    <AnimatedPressable
      onPress={onPress}
      onPressIn={handlePressIn}
      onPressOut={handlePressOut}
      disabled={disabled}
      style={[
        styles.button,
        styles[variant],
        disabled && styles.disabled,
        animatedStyle,
      ]}
    >
      <Text style={[styles.text, styles[`${variant}Text`]]}>{title}</Text>
    </AnimatedPressable>
  )
}

// Platform-specific files (Metro resolves the right one automatically):
// Button.ios.tsx - iOS-specific implementation
// Button.android.tsx - Android-specific implementation
// Button.web.tsx - Web-specific implementation

// Or use Platform.select
const styles = StyleSheet.create({
  button: {
    paddingVertical: 12,
    paddingHorizontal: 24,
    borderRadius: 8,
    alignItems: 'center',
    ...Platform.select({
      ios: {
        shadowColor: '#000',
        shadowOffset: { width: 0, height: 2 },
        shadowOpacity: 0.1,
        shadowRadius: 4,
      },
      android: {
        elevation: 4,
      },
    }),
  },
  primary: {
    backgroundColor: '#007AFF',
  },
  secondary: {
    backgroundColor: '#5856D6',
  },
  outline: {
    backgroundColor: 'transparent',
    borderWidth: 1,
    borderColor: '#007AFF',
  },
  disabled: {
    opacity: 0.5,
  },
  text: {
    fontSize: 16,
    fontWeight: '600',
  },
  primaryText: {
    color: '#FFFFFF',
  },
  secondaryText: {
    color: '#FFFFFF',
  },
  outlineText: {
    color: '#007AFF',
  },
})
```
### Pattern 6: Performance Optimization

```typescript
// components/ProductList.tsx
// `Product` is an app-defined type.
import { FlashList } from '@shopify/flash-list'
import { memo, useCallback } from 'react'
import { Pressable, StyleSheet, Text } from 'react-native'
import FastImage from 'react-native-fast-image' // or expo-image in managed Expo

interface ProductListProps {
  products: Product[]
  onProductPress: (id: string) => void
  onRefresh?: () => void
  isRefreshing?: boolean
}

// Memoize list item
const ProductItem = memo(function ProductItem({
  item,
  onPress,
}: {
  item: Product
  onPress: (id: string) => void
}) {
  const handlePress = useCallback(() => onPress(item.id), [item.id, onPress])

  return (
    <Pressable onPress={handlePress} style={styles.item}>
      <FastImage
        source={{ uri: item.image }}
        style={styles.image}
        resizeMode="cover"
      />
      <Text style={styles.title}>{item.name}</Text>
      <Text style={styles.price}>${item.price}</Text>
    </Pressable>
  )
})

export function ProductList({
  products,
  onProductPress,
  onRefresh,
  isRefreshing,
}: ProductListProps) {
  const renderItem = useCallback(
    ({ item }: { item: Product }) => (
      <ProductItem item={item} onPress={onProductPress} />
    ),
    [onProductPress]
  )

  const keyExtractor = useCallback((item: Product) => item.id, [])

  return (
    <FlashList
      data={products}
      renderItem={renderItem}
      keyExtractor={keyExtractor}
      estimatedItemSize={100}
      // FlashList recycles views internally, so FlatList tuning props
      // (windowSize, maxToRenderPerBatch, removeClippedSubviews) are not needed
      // Pull to refresh
      onRefresh={onRefresh}
      refreshing={isRefreshing}
    />
  )
}

const styles = StyleSheet.create({
  item: { padding: 12 },
  image: { width: '100%', height: 120, borderRadius: 8 },
  title: { fontSize: 16, fontWeight: '600' },
  price: { fontSize: 14, color: '#6B7280' },
})
```
## EAS Build & Submit

```json
// eas.json
{
  "cli": { "version": ">= 5.0.0" },
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal",
      "ios": { "simulator": true }
    },
    "preview": {
      "distribution": "internal",
      "android": { "buildType": "apk" }
    },
    "production": {
      "autoIncrement": true
    }
  },
  "submit": {
    "production": {
      "ios": { "appleId": "your@email.com", "ascAppId": "123456789" },
      "android": { "serviceAccountKeyPath": "./play-service-account.json" }
    }
  }
}
```

Note: `serviceAccountKeyPath` points at a Google Play service account key, which is a different file from Firebase's `google-services.json`.
```bash
# Build commands
eas build --platform ios --profile development
eas build --platform android --profile preview
eas build --platform all --profile production

# Submit to stores
eas submit --platform ios
eas submit --platform android

# OTA updates
eas update --branch production --message "Bug fixes"
```
## Best Practices

### Do's

- **Use Expo** - Faster development, OTA updates, managed native code
- **FlashList over FlatList** - Better performance for long lists
- **Memoize components** - Prevent unnecessary re-renders
- **Use Reanimated** - 60fps animations on native thread
- **Test on real devices** - Simulators miss real-world issues

### Don'ts

- **Don't inline styles** - Use StyleSheet.create for performance
- **Don't fetch in render** - Use useEffect or React Query
- **Don't ignore platform differences** - Test on both iOS and Android
- **Don't store secrets in code** - Use environment variables
- **Don't skip error boundaries** - Mobile crashes are unforgiving
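
On the environment-variable rule: Expo inlines any `EXPO_PUBLIC_*` variable into the client bundle at build time, so only non-secret config belongs there; true secrets must stay server-side. A small sketch of a fail-fast guard for required variables (`requireEnv` and the variable name are hypothetical, not an Expo API):

```typescript
// Hypothetical helper: fail fast at startup if a required public
// environment variable was not provided at build time.
// EXPO_PUBLIC_* values ship inside the JS bundle, so use this for
// config (API URLs, feature flags), never for secrets.
export function requireEnv(name: `EXPO_PUBLIC_${string}`): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`)
  }
  return value
}

// Usage (hypothetical variable name):
// const apiUrl = requireEnv('EXPO_PUBLIC_API_URL')
```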

## Resources

- [Expo Documentation](https://docs.expo.dev/)
- [Expo Router](https://docs.expo.dev/router/introduction/)
- [React Native Performance](https://reactnative.dev/docs/performance)
- [FlashList](https://shopify.github.io/flash-list/)