feat(marketing): add seek-and-analyze-video skill (#300)

Add video intelligence and content analysis skill using Memories.ai LVMM.
Enables agents to discover videos on TikTok/YouTube/Instagram, analyze
content, summarize meetings, and build searchable knowledge bases across
multiple videos.

Features:
- 21 API commands organized into workflow-oriented reference guides
- Quick video analysis and persistent knowledge base modes
- Social media video research and competitor analysis
- Meeting and lecture note extraction
- Cross-video semantic search and Q&A
- Memory management for text insights

Skill includes:
- Comprehensive SKILL.md following repository standards
- API command reference documentation
- Use cases and examples for 6 primary workflows
- Example workflow script demonstrating competitive analysis

Skill repo: https://github.com/kennyzheng-builds/seek-and-analyze-video

Co-authored-by: kennyzheng-builds <kennyzheng-builds@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Author: Kennyzheng, 2026-03-10 00:25:53 +08:00, committed via GitHub
parent b01d5d2703
commit f6fe59aac4
4 changed files with 1727 additions and 0 deletions


@@ -0,0 +1,393 @@
---
name: seek-and-analyze-video
description: Video intelligence and content analysis using Memories.ai LVMM. Discover videos on TikTok, YouTube, Instagram by topic or creator. Analyze video content, summarize meetings, build searchable knowledge bases across multiple videos. Use for video research, competitor content analysis, meeting notes, lecture summaries, or building video knowledge libraries.
license: MIT
metadata:
  version: 1.0.0
  author: Kenny Zheng
  category: marketing-skill
  updated: 2026-03-09
triggers:
  - analyze video
  - video content analysis
  - summarize video
  - meeting notes from video
  - search TikTok videos
  - search YouTube videos
  - video knowledge base
  - competitor video analysis
  - extract video insights
  - video research
  - video intelligence
  - cross-video search
---
# Seek and Analyze Video
You are an expert in video intelligence and content analysis. Your goal is to help users discover, analyze, and build knowledge from video content across social platforms using Memories.ai's Large Visual Memory Model (LVMM).
## Before Starting
**Check for context first:**
If `marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
**API Setup Required:**
This skill requires a Memories.ai API key. Guide users to:
1. Visit https://memories.ai to create an account
2. Get API key from dashboard (free tier: 100 credits, Plus: $15/month for 5,000 credits)
3. Set environment variable: `export MEMORIES_API_KEY=your_key_here`
Gather this context (ask if not provided):
### 1. Current State
- What video content do they need to analyze?
- What platforms are they researching? (YouTube, TikTok, Instagram, Vimeo)
- Do they have existing video libraries or starting fresh?
### 2. Goals
- What insights are they extracting? (summaries, action items, competitive analysis)
- Do they need one-time analysis or persistent knowledge base?
- Are they analyzing individual videos or building cross-video research?
### 3. Video-Specific Context
- What topics, hashtags, or creators are they tracking?
- What's their use case? (competitor research, content strategy, meeting notes, training materials)
- Do they need organized namespaces for team collaboration?
## How This Skill Works
This skill supports 5 primary modes:
### Mode 1: Quick Video Analysis
When you need one-time video analysis without persistent storage.
- Use `caption_video` for instant summaries
- Best for: ad-hoc analysis, quick insights, testing content
### Mode 2: Social Media Research
When discovering and analyzing videos across platforms.
- Search by topic, hashtag, or creator
- Import and analyze in bulk
- Best for: competitor analysis, trend research, content inspiration
### Mode 3: Knowledge Base Building
When creating searchable libraries from video content.
- Index videos with semantic search
- Query across multiple videos simultaneously
- Best for: training materials, research repositories, content archives
### Mode 4: Meeting & Lecture Notes
When extracting structured notes from recordings.
- Generate transcripts with visual descriptions
- Extract action items and key points
- Best for: meeting summaries, educational content, presentations
### Mode 5: Memory Management
When organizing text insights and cross-video knowledge.
- Store notes with tags for retrieval
- Search across videos and text memories
- Best for: research notes, insights collection, knowledge management
## Core Workflows
### Workflow 1: Analyze a Video URL
**When to use:** User provides a YouTube, TikTok, Instagram, or Vimeo URL
**Process:**
1. Validate URL format and platform support
2. Choose analysis mode:
- **Quick analysis:** `caption_video(url)` - instant summary, no storage
- **Persistent analysis:** `import_video(url)` - index for future queries
3. Extract key information (summary, transcript, action items)
4. Generate structured output (see Output Artifacts)
**Example:**
```python
# Quick analysis (no storage)
result = caption_video("https://youtube.com/watch?v=...")
# Persistent indexing (builds knowledge base)
video_id = import_video("https://youtube.com/watch?v=...")
summary = query_video(video_id, "Summarize the key points")
```
### Workflow 2: Social Media Video Research
**When to use:** User wants to find and analyze videos by topic, hashtag, or creator
**Process:**
1. Define search parameters:
- Platform: tiktok, youtube, instagram
- Query: topic, hashtag, or creator handle
- Count: number of videos to analyze
2. Execute search: `search_social(platform, query, count)`
3. Import discovered videos for deep analysis
4. Generate competitive insights or trend report
**Example:**
```python
# Find competitor content
videos = search_social("tiktok", "#SaaSmarketing", count=20)
# Analyze top performers
for video in videos[:5]:
    import_video(video['url'])
# Cross-video analysis
insights = chat_personal("What content themes are working?")
```
### Workflow 3: Build Video Knowledge Base
**When to use:** User needs searchable library across multiple videos
**Process:**
1. Import videos with tags for organization
2. Store supplementary text memories (notes, insights)
3. Enable cross-video semantic search
4. Query entire library for insights
**Example:**
```python
# Import video library with tags
import_video(url1, tags=["product-demo", "Q1-2026"])
import_video(url2, tags=["product-demo", "Q2-2026"])
# Store text insights
create_memory("Key insight from demos...", tags=["product-demo"])
# Query across all tagged content
insights = chat_personal("Compare Q1 vs Q2 product demos")
```
### Workflow 4: Extract Meeting Notes
**When to use:** User needs structured notes from recorded meetings or lectures
**Process:**
1. Import meeting recording
2. Request structured extraction:
- Action items with owners
- Key decisions made
- Discussion topics
- Timestamps for important moments
3. Format as meeting minutes
4. Store for future reference
**Example:**
```python
video_id = import_video("meeting_recording.mp4")
notes = query_video(video_id, """
Extract:
1. Action items with owners
2. Key decisions
3. Discussion topics
4. Important timestamps
""")
```
### Workflow 5: Competitor Content Analysis
**When to use:** Analyzing competitor video strategies across platforms
**Process:**
1. Search for competitor content by creator handle
2. Import their top-performing videos
3. Analyze patterns:
- Content themes and formats
- Messaging strategies
- Production quality
- Engagement tactics
4. Generate competitive intelligence report
**Example:**
```python
# Find competitor videos
competitor_videos = search_social("youtube", "@competitor_handle", count=30)
# Import for analysis
for video in competitor_videos:
    import_video(video['url'], tags=["competitor-X"])
# Extract insights
analysis = chat_personal("Analyze competitor-X content strategy and gaps")
```
## Command Reference
### Video Operations
| Command | Purpose | Storage |
|---------|---------|---------|
| `caption_video(url)` | Quick video summary | No |
| `import_video(url, tags=[])` | Index video for queries | Yes |
| `query_video(video_id, question)` | Ask about specific video | - |
| `list_videos(tags=[])` | List indexed videos | - |
| `delete_video(video_id)` | Remove from library | - |
### Social Media Search
| Command | Purpose |
|---------|---------|
| `search_social(platform, query, count)` | Find videos by topic/creator |
| `search_personal(query, filters={})` | Search your indexed videos |
Platforms: `tiktok`, `youtube`, `instagram`
### Memory Management
| Command | Purpose |
|---------|---------|
| `create_memory(text, tags=[])` | Store text insight |
| `search_memories(query)` | Find stored memories |
| `list_memories(tags=[])` | List all memories |
| `delete_memory(memory_id)` | Remove memory |
### Cross-Content Queries
| Command | Purpose |
|---------|---------|
| `chat_personal(question)` | Query across ALL videos and memories |
| `chat_video(video_id, question)` | Focus on specific video |
### Vision Tasks
| Command | Purpose |
|---------|---------|
| `caption_image(image_url)` | Describe image using AI vision |
| `import_image(image_url, tags=[])` | Index image for queries |
## Proactive Triggers
Surface these issues WITHOUT being asked when you notice them in context:
- **User requests video analysis without API key** → Guide them to memories.ai setup
- **Repeated similar queries across videos** → Suggest building knowledge base instead
- **Analyzing competitor content** → Recommend systematic tracking with tags
- **Meeting recording shared** → Offer structured note extraction
- **Multiple one-off analyses** → Suggest import_video for persistent reference
- **Large video libraries without tags** → Recommend tag organization strategy
## Output Artifacts
| When you ask for... | You get... |
|---------------------|------------|
| "Analyze this video" | Structured summary with key points, themes, action items, and timestamps |
| "Competitor content research" | Competitive analysis report with content themes, gaps, and recommendations |
| "Meeting notes from recording" | Meeting minutes with action items, decisions, discussion topics, and owners |
| "Video knowledge base" | Searchable library with semantic search across videos and memories |
| "Social media video research" | Platform research report with top videos, trends, and content insights |
## Communication
All output follows the structured communication standard:
- **Bottom line first** — answer before explanation
- **What + Why + How** — every finding has all three
- **Actions have owners and deadlines** — no "we should consider"
- **Confidence tagging** — 🟢 verified / 🟡 medium / 🔴 assumed
**Example output format:**
```
BOTTOM LINE: Competitor X focuses on product demos (60%) and customer stories (30%)
WHAT:
• 18/30 videos are product demos with detailed walkthroughs — 🟢 verified
• 9/30 videos are customer success stories with ROI metrics — 🟢 verified
• Average video length: 3:24 (demos), 2:15 (stories) — 🟢 verified
• Consistent posting: 2-3 videos/week on Tuesday/Thursday — 🟢 verified
WHY THIS MATTERS:
They're driving bottom-of-funnel conversions with proof over awareness content.
Your current 80% thought-leadership mix leaves a conversion gap.
HOW TO ACT:
1. Create 10 product demo videos → [Owner] → [2 weeks]
2. Record 5 customer case studies → [Owner] → [3 weeks]
3. Test demo video performance vs current content → [Owner] → [4 weeks]
YOUR DECISION:
Option A: Match their demo focus — higher conversion, lower reach
Option B: Hybrid approach (50% demos, 50% thought leadership) — balanced
```
## Technical Details
**Repository:** https://github.com/kennyzheng-builds/seek-and-analyze-video
**Requirements:**
- Python 3.8+
- Memories.ai API key (free tier or $15/month Plus)
- Environment variable: `MEMORIES_API_KEY`
**Installation:**
```bash
# Via Claude Code
claude skill install kennyzheng-builds/seek-and-analyze-video
# Or manual
git clone https://github.com/kennyzheng-builds/seek-and-analyze-video.git
export MEMORIES_API_KEY=your_key_here
```
**Pricing:**
- Free tier: 100 credits (testing and light use)
- Plus: $15/month for 5,000 credits (power users)
**Supported Platforms:**
- YouTube (all public videos)
- TikTok (public videos)
- Instagram (public videos and reels)
- Vimeo (public videos)
## Key Differentiators
**vs ChatGPT/Gemini Video Analysis:**
- Persistent memory (query anytime, not just during upload)
- Cross-video search (query 100s of videos simultaneously)
- Social media discovery (find videos, don't just analyze provided URLs)
- Knowledge base building (organize with tags, semantic search)
**vs Manual Video Research:**
- 40x faster video analysis
- Automatic transcript + visual description
- Semantic search across libraries
- Scalable to hundreds of videos
**vs Traditional Video Tools:**
- AI-native queries (ask questions vs manual review)
- Cross-platform support (TikTok, YouTube, Instagram unified)
- Zero-dependency Python client (works across Claude Code, OpenClaw, HappyCapy)
- Workflow automation (upload → analyze → store in one command)
## Best Practices
### Tagging Strategy
- Use consistent tag naming (kebab-case recommended)
- Tag by: content-type, date-range, platform, topic, campaign
- Example: `["competitor-analysis", "Q1-2026", "tiktok", "product-demo"]`
### Credit Management
- Quick analysis (`caption_video`): ~2 credits per video
- Import + indexing (`import_video`): ~5 credits per video
- Queries (`chat_personal`, `query_video`): ~1 credit per query
- Plan accordingly based on tier (free: 100, Plus: 5,000/month)
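The arithmetic above can be turned into a quick pre-flight budget check. A minimal sketch, using the approximate per-operation costs from this guide (estimates, not guaranteed pricing):

```python
# Approximate credit costs per operation (estimates from this guide).
COSTS = {"caption": 2, "import": 5, "query": 1}

def estimate_credits(captions=0, imports=0, queries=0):
    """Rough credit estimate for a planned batch of operations."""
    return (captions * COSTS["caption"]
            + imports * COSTS["import"]
            + queries * COSTS["query"])

# A 10-video import plus 20 queries fits comfortably in the free tier (100).
print(estimate_credits(imports=10, queries=20))  # 70
```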
### Query Optimization
- Be specific in questions (better results, same credits)
- Use filtered searches when possible (faster, more relevant)
- Batch similar queries (analyze pattern, then ask once)
### Organization
- Create namespace strategy for teams (use tags for isolation)
- Archive old content (delete unused videos to reduce noise)
- Document video IDs for important content (VI... identifiers)
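One lightweight way to get tag-based team namespaces is a small helper that prefixes every tag with the team name. The `team:` / `team/tag` convention below is illustrative, not part of the API:

```python
def namespaced_tags(team, tags):
    """Prefix tags with a team namespace so team libraries stay isolated."""
    return [f"team:{team}"] + [f"{team}/{tag}" for tag in tags]

tags = namespaced_tags("growth", ["competitor-A", "Q1-2026"])
print(tags)  # ['team:growth', 'growth/competitor-A', 'growth/Q1-2026']
# video_id = import_video(url, tags=tags)  # then pass to import_video as usual
```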
## Related Skills
- **social-media-analyzer**: For quantitative social media metrics. Use this skill for qualitative video content analysis.
- **content-strategy**: For planning content themes. Use this skill to research what's working in your niche.
- **competitor-alternatives**: For competitive positioning. Use this skill for competitor content intelligence.
- **marketing-context**: Provides audience and brand context. Use before running video research.
- **content-production**: For creating content. Use this skill to research successful formats first.
- **campaign-analytics**: For campaign performance data. Combine with this skill for qualitative video insights.


@@ -0,0 +1,251 @@
#!/usr/bin/env python3
"""
Example workflow demonstrating seek-and-analyze-video skill capabilities.
Shows competitive video analysis pipeline with Memories.ai LVMM.
Usage:
python example-workflow.py --mode [quick|full]
Modes:
quick: Run with demo data (no API calls)
full: Execute full workflow (requires MEMORIES_API_KEY)
"""
import json
import os
import sys
from datetime import datetime
from typing import Dict, List
def validate_api_key() -> bool:
"""Check if API key is configured."""
api_key = os.getenv("MEMORIES_API_KEY")
if not api_key:
print("❌ MEMORIES_API_KEY not set")
print("\nSetup instructions:")
print("1. Visit https://memories.ai and create account")
print("2. Get API key from dashboard")
print("3. Run: export MEMORIES_API_KEY=your_key_here")
return False
return True
def demo_mode():
"""Run demonstration with mock data (no API calls)."""
print("🎬 Running in DEMO mode (no API calls)")
print("=" * 60)
# Mock competitor discovery
print("\n📍 Stage 1: Discovering competitor content...")
mock_videos = [
{
"url": "https://youtube.com/watch?v=demo1",
"title": "Competitor A - Product Demo",
"views": 125000,
"likes": 8500,
"creator": "@competitor_a",
},
{
"url": "https://youtube.com/watch?v=demo2",
"title": "Competitor A - Pricing Guide",
"views": 98000,
"likes": 6200,
"creator": "@competitor_a",
},
{
"url": "https://youtube.com/watch?v=demo3",
"title": "Competitor A - Customer Success Story",
"views": 156000,
"likes": 12000,
"creator": "@competitor_a",
},
]
print(f"Found {len(mock_videos)} videos")
for video in mock_videos:
print(f" - {video['title']} ({video['views']:,} views)")
# Mock import
print("\n📥 Stage 2: Importing top performers...")
for video in mock_videos:
mock_video_id = f"VI_{video['title'][:10].replace(' ', '_')}"
print(f" ✓ Imported: {video['title']}{mock_video_id}")
# Mock content analysis
print("\n🔬 Stage 3: Analyzing content patterns...")
mock_analysis = {
"content_themes": {
"product_demos": "60%",
"customer_stories": "30%",
"thought_leadership": "10%",
},
"average_length": "3:24",
"hook_patterns": [
"Here's what nobody tells you about...",
"3 mistakes I see founders make...",
"Watch this before choosing...",
],
"posting_frequency": "2-3 videos per week (Tuesday/Thursday)",
}
print(json.dumps(mock_analysis, indent=2))
# Mock messaging analysis
print("\n💬 Stage 4: Extracting messaging...")
mock_messaging = {
"core_pillars": [
"ROI in first 90 days",
"Enterprise-grade security",
"No-code setup",
],
"pain_points_addressed": [
"Manual workflows wasting time",
"Security compliance complexity",
"Integration headaches",
],
"proof_elements": [
"Customer logos (Fortune 500)",
"ROI calculators with real data",
"Case studies with metrics",
],
}
print(json.dumps(mock_messaging, indent=2))
# Mock gap identification
print("\n🎯 Stage 5: Identifying opportunities...")
mock_gaps = {
"uncovered_topics": [
"Migration from legacy systems (high search volume)",
"Team training and onboarding",
"Advanced API usage",
],
"missed_angles": [
"Product demos focus on features, not workflows",
"Customer stories lack technical depth",
"No content for technical evaluators",
],
"format_opportunities": [
"Short-form TikTok/Reels (competitors use YouTube only)",
"Live Q&A sessions (no one doing this)",
"Comparison videos (avoided by competitors)",
],
}
print(json.dumps(mock_gaps, indent=2))
# Mock recommendations
print("\n📋 Stage 6: Generating recommendations...")
mock_recommendations = {
"quick_wins": [
{
"action": "Create 3 short-form product demos for TikTok/Reels",
"rationale": "Competitors only on YouTube, capture short-form audience",
"timeline": "2 weeks",
},
{
"action": "Record migration guide video",
"rationale": "High search demand, zero competition",
"timeline": "1 week",
},
],
"strategic_bets": [
{
"action": "Launch weekly live Q&A series",
"rationale": "Build community, no competitors doing this",
"timeline": "Q2 2026",
},
{
"action": "Create technical deep-dive series for evaluators",
"rationale": "Gap in competitor content, address technical audience",
"timeline": "Q2 2026",
},
],
"avoid": [
"Generic thought leadership (saturated)",
"Feature-focused demos without use cases (not resonating)",
],
"differentiation": [
"Lead with workflow outcomes, not features",
"Show migration path from specific competitors",
"Target technical evaluators ignored by competitors",
],
}
print(json.dumps(mock_recommendations, indent=2))
print("\n" + "=" * 60)
print("✅ Demo complete!")
print("\nTo run with real data:")
print("1. Set MEMORIES_API_KEY environment variable")
print("2. Run: python example-workflow.py --mode full")
def full_mode():
"""Execute full workflow with actual API calls."""
if not validate_api_key():
return
print("🚀 Running FULL workflow with Memories.ai API")
print("=" * 60)
print("\n⚠️ This will consume API credits:")
print(" - Discovery: ~1 credit per 10 videos")
print(" - Import: ~5 credits per video")
print(" - Queries: ~1-5 credits per query")
print("\nEstimated total: ~50-100 credits")
response = input("\nProceed? (yes/no): ").strip().lower()
if response != "yes":
print("Cancelled.")
return
print("\n📍 Stage 1: Discovering competitor content...")
print("(Implementation would call Memories.ai API here)")
# In real implementation, would import and use the Memories.ai client
# from seek_and_analyze_video import search_social, import_video, chat_personal
print("\nFull implementation requires:")
print("1. Clone: https://github.com/kennyzheng-builds/seek-and-analyze-video")
print("2. Import client from skill repository")
print("3. Execute workflow with actual API calls")
def main():
"""Main entry point."""
mode = "quick"
# Parse arguments
if len(sys.argv) > 1:
if sys.argv[1] == "--mode" and len(sys.argv) > 2:
mode = sys.argv[2]
elif sys.argv[1] in ["--help", "-h"]:
print(__doc__)
return
if mode not in ["quick", "full"]:
print(f"❌ Invalid mode: {mode}")
print("Valid modes: quick, full")
print("\nRun with --help for usage information")
return
print(f"""
╔════════════════════════════════════════════════════════════╗
║ Seek and Analyze Video - Example Workflow ║
║ Competitive Video Analysis ║
╚════════════════════════════════════════════════════════════╝
Mode: {mode.upper()}
Date: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
""")
if mode == "quick":
demo_mode()
else:
full_mode()
if __name__ == "__main__":
main()


@@ -0,0 +1,445 @@
# Memories.ai API Command Reference
Complete reference for all 21 API commands available through the Memories.ai LVMM.
---
## Video Operations
### caption_video(url: str) → dict
Quick video analysis without persistent storage. Best for one-time summaries.
**Parameters:**
- `url`: Video URL (YouTube, TikTok, Instagram, Vimeo)
**Returns:**
```python
{
    "summary": "Video summary text",
    "duration": "3:24",
    "platform": "youtube"
}
```
**Credits:** ~2 per video
**Use when:** Ad-hoc analysis, testing content, no need for future queries
---
### import_video(url: str, tags: list = []) → str
Index video for persistent queries. Returns video ID (VI...) for future reference.
**Parameters:**
- `url`: Video URL
- `tags`: Optional list of organization tags (e.g., `["competitor", "Q1-2026"]`)
**Returns:** Video ID string (e.g., `"VI_abc123def456"`)
**Credits:** ~5 per video
**Use when:** Building knowledge base, need cross-video search, repeated queries
**Example:**
```python
video_id = import_video(
    "https://youtube.com/watch?v=dQw4w9WgXcQ",
    tags=["product-demo", "competitor-A", "2026-03"]
)
# Returns: "VI_abc123def456"
```
---
### query_video(video_id: str, question: str) → str
Ask questions about a specific indexed video.
**Parameters:**
- `video_id`: Video ID from import_video
- `question`: Natural language question
**Returns:** Answer text
**Credits:** ~1 per query
**Example:**
```python
answer = query_video("VI_abc123def456", "What are the main action items?")
```
---
### list_videos(tags: list = []) → list
List all indexed videos, optionally filtered by tags.
**Parameters:**
- `tags`: Optional filter tags (returns videos matching ANY tag)
**Returns:**
```python
[
    {
        "video_id": "VI_abc123",
        "url": "https://youtube.com/...",
        "imported_at": "2026-03-09T10:30:00Z",
        "tags": ["product-demo", "competitor-A"]
    }
]
```
**Credits:** 0 (metadata only)
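Because tag filtering matches ANY tag, a common follow-up is narrowing results client-side to videos carrying ALL of a tag set. A sketch against the return shape above (the sample data is illustrative):

```python
def filter_all_tags(videos, required):
    """Keep only videos whose tags include every required tag."""
    required = set(required)
    return [v for v in videos if required.issubset(v["tags"])]

videos = [
    {"video_id": "VI_abc123", "tags": ["product-demo", "competitor-A"]},
    {"video_id": "VI_def456", "tags": ["product-demo"]},
]
print([v["video_id"] for v in filter_all_tags(videos, ["product-demo", "competitor-A"])])
# ['VI_abc123']
```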
---
### delete_video(video_id: str) → bool
Remove video from your library. Cannot be undone.
**Parameters:**
- `video_id`: Video ID to delete
**Returns:** `True` if successful
**Credits:** 0
---
## Social Media Search
### search_social(platform: str, query: str, count: int = 10) → list
Discover public videos by topic, hashtag, or creator.
**Parameters:**
- `platform`: `"tiktok"`, `"youtube"`, or `"instagram"`
- `query`: Topic, hashtag (with #), or creator handle (with @)
- `count`: Number of results (default: 10, max: 50)
**Returns:**
```python
[
    {
        "url": "https://tiktok.com/@creator/video/123",
        "title": "Video title",
        "creator": "@creator",
        "views": 125000,
        "likes": 8500,
        "published": "2026-03-08"
    }
]
```
**Credits:** ~1 per 10 videos
**Examples:**
```python
# Topic search
videos = search_social("youtube", "SaaS pricing strategies", count=20)
# Hashtag search
videos = search_social("tiktok", "#contentmarketing", count=30)
# Creator search
videos = search_social("instagram", "@competitor_handle", count=15)
```
---
### search_personal(query: str, filters: dict = {}) → list
Search your indexed videos with semantic search.
**Parameters:**
- `query`: Natural language search query
- `filters`: Optional filters (`{"tags": ["tag1"], "date_from": "2026-01-01"}`)
**Returns:**
```python
[
{
"video_id": "VI_abc123",
"relevance_score": 0.92,
"snippet": "...relevant content snippet...",
"tags": ["product-demo"]
}
]
```
**Credits:** ~1 per query
**Example:**
```python
results = search_personal(
    "product pricing discussions",
    filters={"tags": ["competitor-A"], "date_from": "2026-03-01"}
)
```
---
## Memory Management
### create_memory(text: str, tags: list = []) → str
Store text insights for future retrieval.
**Parameters:**
- `text`: Note or insight text
- `tags`: Optional organization tags
**Returns:** Memory ID (e.g., `"MEM_xyz789"`)
**Credits:** ~1 per memory
**Use when:** Storing research notes, insights, key quotes not directly in videos
**Example:**
```python
memory_id = create_memory(
    "Competitor A focuses on enterprise pricing tier, starts at $99/seat",
    tags=["competitor-A", "pricing", "insight"]
)
```
---
### search_memories(query: str) → list
Search stored text memories with semantic search.
**Parameters:**
- `query`: Natural language search query
**Returns:**
```python
[
    {
        "memory_id": "MEM_xyz789",
        "text": "Memory content...",
        "relevance_score": 0.88,
        "tags": ["pricing", "insight"],
        "created_at": "2026-03-09T10:30:00Z"
    }
]
```
**Credits:** ~1 per query
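Since each result carries a `relevance_score`, it is often worth dropping weak matches before acting on them. A small post-processing sketch (the 0.8 threshold is a judgment call, and the sample data is illustrative):

```python
def top_hits(results, min_score=0.8):
    """Sort memory search results by relevance and drop weak matches."""
    strong = [r for r in results if r["relevance_score"] >= min_score]
    return sorted(strong, key=lambda r: r["relevance_score"], reverse=True)

hits = top_hits([
    {"memory_id": "MEM_xyz789", "relevance_score": 0.88},
    {"memory_id": "MEM_abc123", "relevance_score": 0.52},
])
print([h["memory_id"] for h in hits])  # ['MEM_xyz789']
```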
---
### list_memories(tags: list = []) → list
List all stored memories, optionally filtered by tags.
**Parameters:**
- `tags`: Optional filter tags
**Returns:** List of memory objects (same structure as search_memories)
**Credits:** 0 (metadata only)
---
### delete_memory(memory_id: str) → bool
Delete stored memory. Cannot be undone.
**Parameters:**
- `memory_id`: Memory ID to delete
**Returns:** `True` if successful
**Credits:** 0
---
## Cross-Content Queries
### chat_personal(question: str) → str
Query across ALL indexed videos and memories simultaneously.
**Parameters:**
- `question`: Natural language question
**Returns:** Answer synthesized from entire knowledge base
**Credits:** ~2-5 depending on complexity
**Use when:** Asking questions that require cross-video analysis
**Example:**
```python
insight = chat_personal("""
Compare competitor A and B's pricing strategies.
What are the key differences and which approach is more effective?
""")
```
---
### chat_video(video_id: str, question: str) → str
Interactive chat focused on specific video (alternative to query_video).
**Parameters:**
- `video_id`: Video ID
- `question`: Natural language question
**Returns:** Answer text
**Credits:** ~1 per query
**Note:** Functionally similar to `query_video`; the two can be used interchangeably.
---
## Vision Tasks
### caption_image(image_url: str) → str
Describe image content using AI vision.
**Parameters:**
- `image_url`: Public image URL (JPEG, PNG, WebP)
**Returns:** Image description text
**Credits:** ~1 per image
**Use when:** Analyzing thumbnails, screenshots, visual content
**Example:**
```python
description = caption_image("https://example.com/thumbnail.jpg")
# Returns: "A person presenting a pricing slide with three tiers..."
```
---
### import_image(image_url: str, tags: list = []) → str
Index image for persistent queries (similar to import_video for images).
**Parameters:**
- `image_url`: Public image URL
- `tags`: Optional organization tags
**Returns:** Image ID (e.g., `"IMG_def456"`)
**Credits:** ~2 per image
**Use when:** Building visual libraries, need repeated queries on images
---
## Advanced Usage Patterns
### Pattern 1: Bulk Import with Error Handling
```python
def import_video_batch(urls, tag_prefix):
    """Import multiple videos with error handling."""
    results = []
    for idx, url in enumerate(urls):
        try:
            video_id = import_video(url, tags=[tag_prefix, f"batch-{idx}"])
            results.append({"url": url, "video_id": video_id, "status": "success"})
        except Exception as e:
            results.append({"url": url, "error": str(e), "status": "failed"})
    return results
```
### Pattern 2: Smart Tag Organization
```python
# Hierarchical tagging strategy
tags = [
    platform,      # youtube, tiktok, instagram
    content_type,  # product-demo, tutorial, case-study
    date_range,    # Q1-2026, 2026-03
    campaign,      # launch-campaign-X
    source_type,   # competitor, internal, partner
]
video_id = import_video(url, tags=tags)
```
### Pattern 3: Progressive Research
```python
# Stage 1: Discover
videos = search_social("youtube", "@competitor", count=50)
# Stage 2: Import top performers (by views/likes)
top_videos = sorted(videos, key=lambda x: x['views'], reverse=True)[:10]
for video in top_videos:
    import_video(video['url'], tags=["competitor", "top-performer"])
# Stage 3: Cross-video analysis
insights = chat_personal("What makes their top 10 videos successful?")
```
### Pattern 4: Meeting Intelligence
```python
# Import meeting recording
meeting_id = import_video(recording_url, tags=["team-meeting", "2026-03-09"])
# Extract structured data
action_items = query_video(meeting_id, "List all action items with owners")
decisions = query_video(meeting_id, "What decisions were made?")
topics = query_video(meeting_id, "What were the main discussion topics?")
# Store supplementary notes
create_memory(f"Meeting {date}: Key outcomes and next steps",
              tags=["team-meeting", "summary"])
```
---
## Credit Usage Guidelines
| Operation | Credits | Recommendation |
|-----------|---------|----------------|
| Quick caption | 2 | Use for testing/one-off |
| Import video | 5 | Build library strategically |
| Query (simple) | 1 | Ask specific questions |
| Cross-video query | 2-5 | Batch similar questions |
| Image caption | 1 | Use sparingly |
| Social search | 0.1/video | Discover before importing |
| Memory operations | 1 | Store key insights only |
**Free Tier Strategy (100 credits):**
- Import ~15 key videos (75 credits)
- Query ~25 times (25 credits)
**Plus Tier Strategy (5,000 credits/month):**
- Import ~800 videos (4,000 credits)
- Query ~1,000 times (1,000 credits)
---
## Error Handling
Common errors and solutions:
**InvalidAPIKey**
- Check `MEMORIES_API_KEY` environment variable is set
- Verify key is active on memories.ai dashboard
**UnsupportedPlatform**
- Only YouTube, TikTok, Instagram, Vimeo supported
- Ensure URL is public (not private/unlisted)
**CreditLimitExceeded**
- Check usage on memories.ai dashboard
- Upgrade to Plus tier or wait for monthly reset
**VideoNotFound**
- Video may be deleted, private, or region-restricted
- Verify URL is accessible in browser
**RateLimitExceeded**
- Slow down request rate (max ~10 requests/second)
- Consider batching operations
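For rate limits specifically, a simple exponential-backoff wrapper is usually enough. The exception class below is a stand-in; substitute whatever error type the Memories.ai client actually raises:

```python
import time

class RateLimitExceeded(Exception):
    """Stand-in for the client's rate-limit error type."""

def with_backoff(fn, *args, retries=4, base_delay=1.0, **kwargs):
    """Call fn, retrying on rate-limit errors with doubling delays."""
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except RateLimitExceeded:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch: with_backoff(import_video, url, tags=["competitor"])
```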
---
## API Changelog
**v1.0.0 (Current)**
- 21 commands across 6 categories
- Support for YouTube, TikTok, Instagram, Vimeo
- Semantic search across videos and memories
- Tag-based organization system
- Cross-video chat functionality


@@ -0,0 +1,638 @@
# Use Cases and Examples
Real-world applications of video intelligence with Memories.ai LVMM.
---
## Table of Contents
- [Competitor Content Intelligence](#competitor-content-intelligence)
- [Content Strategy Research](#content-strategy-research)
- [Meeting and Training Intelligence](#meeting-and-training-intelligence)
- [Social Media Monitoring](#social-media-monitoring)
- [Knowledge Base Management](#knowledge-base-management)
- [Creator and Influencer Research](#creator-and-influencer-research)
---
## Competitor Content Intelligence
### Use Case: Analyze Competitor Video Strategy
**Scenario:** You want to understand how Competitor X uses video content to drive conversions.
**Workflow:**
```python
# Stage 1: Discover their content
videos = search_social("youtube", "@competitor_x", count=50)
# Stage 2: Import their library
for video in videos:
    import_video(video['url'], tags=["competitor-x", "analysis-2026-q1"])
# Stage 3: Content pattern analysis
themes = chat_personal("""
Tags: competitor-x
Question: What are the main content themes and formats?
Break down by frequency and video type.
""")
# Stage 4: Messaging analysis
messaging = chat_personal("""
Tags: competitor-x
Question: What value propositions do they emphasize?
What pain points do they address?
""")
# Stage 5: Production insights
production = chat_personal("""
Tags: competitor-x
Question: What's their production quality level?
Average video length? Consistent branding elements?
""")
# Stage 6: Identify gaps
gaps = chat_personal("""
Compare competitor-x videos to our content library (tag: our-content).
What topics do they cover that we don't?
What angles are they using successfully?
""")
```
**Expected Output:**
- Content theme breakdown (60% product demos, 30% customer stories, 10% thought leadership)
- Key messaging pillars (ROI, ease of use, enterprise security)
- Production specs (3:24 avg length, professional editing, consistent intro/outro)
- Content gaps in your strategy
**ROI:** 20 hours of manual analysis → 2 hours automated
---
### Use Case: Competitive Pricing Intelligence
**Scenario:** Extract pricing information from competitor product videos.
**Workflow:**
```python
# Import competitor product demo videos
competitor_demos = search_social("youtube", "competitor pricing demo", count=20)
for video in competitor_demos[:10]:
import_video(video['url'], tags=["competitor-pricing"])
# Extract pricing mentions
pricing_data = chat_personal("""
Tags: competitor-pricing
Question: Extract all pricing information mentioned.
Include: tiers, price points, billing cycles, discounts, enterprise pricing.
""")
# Analyze pricing strategy
strategy = chat_personal("""
Tags: competitor-pricing
Question: What pricing strategy are they using?
Value-based, cost-plus, competition-based, penetration?
How do they position their tiers?
""")
```
**Expected Output:**
- Pricing tier structure (Starter $49, Pro $99, Enterprise custom)
- Positioning strategy (value-based with ROI calculators)
- Competitive differentiation (monthly vs annual pricing emphasis)
---
## Content Strategy Research
### Use Case: Identify High-Performing Content Formats
**Scenario:** Research what video formats are working in your niche.
**Workflow:**
```python
# Search for top content in your niche
niche_videos = search_social("tiktok", "#SaaSmarketing", count=100)
# Import top performers (by engagement)
top_50 = sorted(niche_videos, key=lambda x: x['likes'] + x['views'], reverse=True)[:50]
for video in top_50:
import_video(video['url'], tags=["niche-research", "top-performer"])
# Analyze successful patterns
format_analysis = chat_personal("""
Tags: top-performer
Question: What video formats are most successful?
Break down by: length, hook style, content structure, CTA approach.
""")
# Identify successful hooks
hooks = chat_personal("""
Tags: top-performer
Question: Extract the first 3 seconds (hook) from each video.
What patterns make them effective?
""")
# Production requirements
production = chat_personal("""
Tags: top-performer
Question: What's the production quality distribution?
Can successful content be made with smartphone + basic editing?
""")
```
**Expected Output:**
- Winning formats (60-second problem-solution, 15-second quick tips)
- Hook patterns ("Here's what nobody tells you about...", "3 mistakes I made...")
- Production level (70% smartphone-quality acceptable, 30% professional)
**ROI:** Validate content strategy before investing in production
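A caveat on the `likes + views` sort used above: view counts usually dwarf like counts by orders of magnitude, so the raw sum is effectively a view sort. A weighted score is a common alternative; the 10x like weight here is an assumption to tune for your niche:

```python
# Hedged sketch: weight likes more heavily than views when ranking
# "top performers", since raw views otherwise dominate the sum.
def engagement_score(video, like_weight=10):
    return video.get("views", 0) + like_weight * video.get("likes", 0)

videos = [
    {"url": "a", "views": 100_000, "likes": 200},    # score 102,000
    {"url": "b", "views": 40_000, "likes": 9_000},   # score 130,000
]
top = sorted(videos, key=engagement_score, reverse=True)
print([v["url"] for v in top])  # ['b', 'a']
```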
---
### Use Case: Topic Gap Analysis
**Scenario:** Find content opportunities your competitors aren't covering.
**Workflow:**
```python
# Import your content and competitor content
# (Assume already done with tags: "our-content", "competitor-a", "competitor-b")
# Identify covered topics
competitor_topics = chat_personal("""
Tags: competitor-a, competitor-b
Question: List all topics covered. Group by category.
""")
# Find gaps
gaps = chat_personal("""
Compare topics from competitors (tags: competitor-a, competitor-b)
vs audience questions (tag: customer-questions)
What topics are customers asking about that competitors haven't covered?
""")
# Opportunity sizing
opportunities = chat_personal("""
For each gap identified, search social platforms:
How many searches/hashtags exist for that topic?
Is there existing demand?
""")
```
**Expected Output:**
- 15 topic gaps with high demand, low competition
- Prioritized by search volume and strategic fit
- Content angle recommendations
---
## Meeting and Training Intelligence
### Use Case: Extract Action Items from Meetings
**Scenario:** Convert recorded meetings into structured action items.
**Workflow:**
```python
# Import meeting recording
meeting_id = import_video(
"internal_recording.mp4",
tags=["team-meeting", "product-planning", "2026-03-09"]
)
# Extract action items
action_items = query_video(meeting_id, """
Extract all action items mentioned in the meeting.
Format as:
- [ ] Action item description | Owner: Name | Due: Date | Context: Why needed
""")
# Extract decisions
decisions = query_video(meeting_id, """
List all decisions made during the meeting.
Format as:
DECISION: [Description]
RATIONALE: [Why]
OWNER: [Who's accountable]
IMPACT: [What changes]
""")
# Generate meeting summary
summary = query_video(meeting_id, """
Create executive summary:
1. Key topics discussed
2. Decisions made
3. Action items (grouped by owner)
4. Blockers identified
5. Next meeting agenda items
""")
# Store for future reference
create_memory(
f"Meeting Summary {date}: {summary}",
tags=["meeting-summary", "product-planning"]
)
```
**Expected Output:**
```
ACTION ITEMS:
- [ ] Update pricing page with new tier | Owner: Sarah | Due: 2026-03-15 | Context: Launch prep
- [ ] Schedule user interviews | Owner: Mike | Due: 2026-03-12 | Context: Validate feature priority
DECISIONS:
- Push mobile app launch to Q2 (Rationale: Backend infrastructure not ready)
- Focus Q1 on enterprise features (Rationale: 3 pilot customers waiting)
```
**ROI:** 30 minutes of manual note-taking → 2 minutes automated
---
### Use Case: Training Material Knowledge Base
**Scenario:** Build searchable library from training videos and courses.
**Workflow:**
```python
# Import all training videos
training_videos = [
"onboarding_day1.mp4",
"onboarding_day2.mp4",
"product_training_basics.mp4",
"product_training_advanced.mp4",
"sales_process_training.mp4"
]
for video_url in training_videos:
import_video(video_url, tags=["training", "onboarding"])
# Create searchable knowledge base
# New employees can now ask questions:
answer = chat_personal("How do I handle objections about pricing?")
answer = chat_personal("What's our product positioning vs competitors?")
answer = chat_personal("Walk me through the sales process step by step")
```
**Expected Output:**
- Instant answers to onboarding questions
- Reference to specific training video timestamps
- Consistent knowledge across team
**ROI:** Reduce onboarding time 40%, improve knowledge retention
---
## Social Media Monitoring
### Use Case: Track Brand Mentions Across Platforms
**Scenario:** Monitor videos mentioning your brand or product.
**Workflow:**
```python
# Search across platforms
tiktok_mentions = search_social("tiktok", "#YourBrand", count=50)
youtube_mentions = search_social("youtube", "YourBrand review", count=50)
instagram_mentions = search_social("instagram", "@yourbrand", count=50)
# Import for analysis
all_mentions = tiktok_mentions + youtube_mentions + instagram_mentions
for video in all_mentions:
import_video(video['url'], tags=["brand-mention", video['platform']])
# Sentiment analysis
sentiment = chat_personal("""
Tags: brand-mention
Question: Analyze sentiment across all brand mentions.
Positive, neutral, negative breakdown.
Common praise points and complaints.
""")
# Feature requests
requests = chat_personal("""
Tags: brand-mention
Question: Extract all feature requests or improvement suggestions.
Rank by frequency mentioned.
""")
# Competitive comparisons
comparisons = chat_personal("""
Tags: brand-mention
Question: When creators compare us to competitors, what do they say?
What are our perceived strengths and weaknesses?
""")
```
**Expected Output:**
- Sentiment: 70% positive, 20% neutral, 10% negative
- Top feature requests: Mobile app (15 mentions), API access (12 mentions)
- Competitive position: "Easier to use than X, but lacks Y feature"
**ROI:** Real-time feedback loop, inform product roadmap
---
### Use Case: Influencer Partnership Research
**Scenario:** Identify and vet potential influencer partners.
**Workflow:**
```python
# Find creators in your niche
creators = search_social("youtube", "SaaS founder", count=100)
# Filter to top performers
top_creators = sorted(creators, key=lambda x: x['views'], reverse=True)[:20]
# Import their content
for creator in top_creators:
videos = search_social("youtube", f"@{creator['handle']}", count=10)
for video in videos:
import_video(video['url'], tags=["influencer-research", creator['handle']])
# Analyze each creator
for creator in top_creators:
profile = chat_personal(f"""
Tags: {creator['handle']}
Question: Analyze this creator's content:
- Main topics covered
- Audience demographic (based on comments/content)
- Brand alignment with our values
- Engagement quality (comments depth)
- Partnership potential (do they do sponsorships?)
""")
create_memory(profile, tags=["influencer-profile", creator['handle']])
```
**Expected Output:**
- Vetted list of 5 high-fit influencers
- Audience alignment scores
- Estimated reach and engagement
- Partnership readiness assessment
---
## Knowledge Base Management
### Use Case: Customer Research Repository
**Scenario:** Build searchable library of customer interviews and feedback videos.
**Workflow:**
```python
# Import customer interview recordings
interviews = [
"customer_interview_acme_corp.mp4",
"customer_interview_tech_startup.mp4",
"user_testing_session_1.mp4"
]
for video_url in interviews:
import_video(video_url, tags=["customer-research", "interview"])
# Import product feedback videos
feedback_videos = search_social("youtube", "ProductName feedback", count=30)
for video in feedback_videos:
import_video(video['url'], tags=["customer-research", "feedback"])
# Cross-interview insights
pain_points = chat_personal("""
Tags: customer-research
Question: What are the top pain points mentioned across all interviews?
Rank by frequency and severity.
""")
feature_value = chat_personal("""
Tags: customer-research
Question: Which features do customers mention as most valuable?
What outcomes do they achieve?
""")
use_cases = chat_personal("""
Tags: customer-research
Question: What are the main use cases customers describe?
Group by industry or company size.
""")
# Store insights
create_memory(f"Customer Research Synthesis {date}: {pain_points}",
tags=["research-insight", "product-roadmap"])
```
**Expected Output:**
- Top 10 pain points ranked
- Feature value hierarchy
- Use case taxonomy
- Product roadmap implications
**ROI:** Centralize customer knowledge, inform product decisions
---
### Use Case: Competitive Intelligence Database
**Scenario:** Maintain up-to-date competitive intelligence from video sources.
**Workflow:**
```python
# Weekly competitor monitoring (automate with cron)
competitors = ["@competitor_a", "@competitor_b", "@competitor_c"]
for competitor in competitors:
# Search for new videos
new_videos = search_social("youtube", competitor, count=10)
# Import only videos from last 7 days
recent = [v for v in new_videos if is_within_last_week(v['published'])]
for video in recent:
import_video(video['url'], tags=["competitive-intel", competitor, "2026-q1"])
# Weekly intelligence report
report = chat_personal("""
Tags: competitive-intel, 2026-q1
Filter: last 7 days
Question: Generate competitive intelligence summary:
1. New product announcements or features
2. Pricing changes
3. Marketing message shifts
4. Partnership announcements
5. Strategic moves (funding, acquisitions, etc.)
""")
# Send to stakeholders
create_memory(f"Weekly Competitive Intel {date}: {report}",
tags=["intelligence-report", "weekly"])
```
**Expected Output:**
- Automated weekly competitive briefing
- Early detection of competitive moves
- Strategic planning inputs
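The snippet above calls `is_within_last_week` (and the trend workflow below calls `is_recent`) without defining them. A minimal sketch, assuming search results carry an ISO-8601 `published` timestamp — adapt the parsing if the API returns a different format:

```python
from datetime import datetime, timedelta, timezone

# Hedged sketch of the date filters used by the monitoring workflows.
# Assumes ISO-8601 timestamps, e.g. "2026-03-05T00:00:00Z".
def is_recent(published, days=30, now=None):
    """True if the timestamp falls within the last `days` days."""
    now = now or datetime.now(timezone.utc)
    ts = datetime.fromisoformat(published.replace("Z", "+00:00"))
    return now - ts <= timedelta(days=days)

def is_within_last_week(published, now=None):
    return is_recent(published, days=7, now=now)
```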
---
## Creator and Influencer Research
### Use Case: Content Creator Trend Analysis
**Scenario:** Identify emerging content trends in your industry.
**Workflow:**
```python
# Search across platforms for industry hashtags
hashtags = ["#SaaSmarketing", "#ProductManagement", "#StartupTips"]
all_videos = []
for tag in hashtags:
tiktok = search_social("tiktok", tag, count=100)
youtube = search_social("youtube", tag.replace("#", ""), count=100)
all_videos.extend(tiktok + youtube)
# Import recent content (last 30 days)
recent_videos = [v for v in all_videos if is_recent(v['published'], days=30)]
for video in recent_videos:
import_video(video['url'], tags=["trend-research", "2026-q1"])
# Trend analysis
trends = chat_personal("""
Tags: trend-research, 2026-q1
Question: What are the emerging content trends?
Look for:
- Topics gaining traction (mentioned in 5+ videos)
- Format innovations (new video structures)
- Messaging shifts (new angles on old topics)
- Platform-specific trends (what works on TikTok vs YouTube)
""")
# Validate trend strength
validation = chat_personal("""
Tags: trend-research
Question: For each identified trend, assess:
- Growth trajectory (increasing or peak?)
- Audience engagement (comments, shares)
- Creator adoption (how many creators using this trend?)
- Longevity prediction (fad or sustainable?)
""")
```
**Expected Output:**
- 5-10 emerging trends with growth metrics
- Format innovations to test
- Timing recommendations (early mover vs wait and see)
---
## Advanced Workflows
### Multi-Stage Research Pipeline
**Complete competitive research workflow:**
```python
# Stage 1: Discovery
print("🔍 Stage 1: Discovering competitor content...")
competitors = ["@competitor_a", "@competitor_b"]
all_videos = []
for comp in competitors:
videos = search_social("youtube", comp, count=50)
all_videos.extend([(v, comp) for v in videos])
print(f"Found {len(all_videos)} videos")
# Stage 2: Import top performers
print("📥 Stage 2: Importing top performers...")
top_videos = sorted(all_videos, key=lambda x: x[0]['views'], reverse=True)[:30]
for video, comp in top_videos:
import_video(video['url'], tags=["competitor", comp, "top-performer"])
# Stage 3: Content analysis
print("🔬 Stage 3: Analyzing content patterns...")
content_analysis = chat_personal("""
Tags: competitor, top-performer
Question: Comprehensive content analysis:
1. Content themes (with % breakdown)
2. Average video length by theme
3. Hook patterns (first 5 seconds)
4. CTA strategies
5. Production quality levels
6. Posting frequency
""")
# Stage 4: Messaging extraction
print("💬 Stage 4: Extracting messaging...")
messaging = chat_personal("""
Tags: competitor, top-performer
Question: What are their core messaging pillars?
What customer pain points do they address?
What value propositions do they emphasize?
What proof/credibility elements do they use?
""")
# Stage 5: Gap identification
print("🎯 Stage 5: Identifying opportunities...")
gaps = chat_personal("""
Tags: competitor, top-performer
Question: Based on their content coverage, identify:
1. Topics they're NOT covering (search-demand exists)
2. Angles they're missing on covered topics
3. Audience questions unanswered
4. Format opportunities (they use X, but Y format might work)
""")
# Stage 6: Actionable recommendations
print("📋 Stage 6: Generating recommendations...")
recommendations = chat_personal("""
Based on the competitive analysis (tags: competitor, top-performer),
generate actionable content strategy recommendations:
1. QUICK WINS: What can we do in next 2 weeks?
2. STRATEGIC BETS: What should we invest in next quarter?
3. AVOID: What are they doing that's not working?
4. DIFFERENTIATION: How can we stand out?
Format with specific video ideas and rationale.
""")
# Stage 7: Report generation
print("📊 Stage 7: Compiling final report...")
final_report = f"""
COMPETITIVE CONTENT INTELLIGENCE REPORT
Date: {current_date}
Scope: {len(all_videos)} videos analyzed from {len(competitors)} competitors
{content_analysis}
{messaging}
{gaps}
{recommendations}
"""
create_memory(final_report, tags=["competitive-report", "strategy"])
print("✅ Complete! Report stored in knowledge base.")
```
**Timeline:** 40 hours manual → 3 hours automated
**Output:** Comprehensive competitive intelligence report with actionable recommendations
---
## ROI Summary
| Use Case | Manual Time | Automated Time | Time Saved | Quality Improvement |
|----------|-------------|----------------|------------|---------------------|
| Competitor Analysis | 40 hours | 3 hours | 37 hours | +50% depth |
| Content Research | 20 hours | 2 hours | 18 hours | +70% coverage |
| Meeting Notes | 30 min/meeting | 2 min/meeting | 28 min | +90% completeness |
| Brand Monitoring | 10 hours/week | 1 hour/week | 9 hours | Real-time vs weekly |
| Training KB | N/A | 3 hours setup | N/A | Instant access |
| Influencer Research | 15 hours | 2 hours | 13 hours | +60% data depth |
**Average ROI:** ~10x time savings, ~60% quality improvement