diff --git a/CATALOG.md b/CATALOG.md index 8963c506..b509a814 100644 --- a/CATALOG.md +++ b/CATALOG.md @@ -2,14 +2,14 @@ Generated at: 2026-02-08T00:00:00.000Z -Total skills: 956 +Total skills: 966 -## architecture (66) +## architecture (62) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | -| `angular` | Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns. | angular | angular, v20, deep, knowledge, signals, standalone, components, zoneless, applications, ssr, hydration, reactive | | `angular-state-management` | Master modern Angular state management with Signals, NgRx, and RxJS. Use when setting up global state, managing component stores, choosing between state solu... | angular, state | angular, state, signals, ngrx, rxjs, setting, up, global, managing, component, stores, choosing | +| `apify-audience-analysis` | Understand audience demographics, preferences, behavior patterns, and engagement quality across Facebook, Instagram, YouTube, and TikTok. | apify, audience | apify, audience, analysis, understand, demographics, preferences, behavior, engagement, quality, facebook, instagram, youtube | | `architect-review` | Master software architect specializing in modern architecture | | architect, review, software, specializing, architecture | | `architecture` | Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing ... | architecture | architecture, architectural, decision, making, framework, requirements, analysis, trade, off, evaluation, adr, documentation | | `architecture-decision-records` | Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant techn... | architecture, decision, records | architecture, decision, records, write, maintain, adrs, following, technical, documentation, documenting, significant, decisions | @@ -20,10 +20,10 @@ Total skills: 956 | `brainstorming` | Use before creative or constructive work (features, architecture, behavior). Transforms vague ideas into validated designs through disciplined reasoning and ... | brainstorming | brainstorming, before, creative, constructive, work, features, architecture, behavior, transforms, vague, ideas, validated | | `browser-extension-builder` | Expert in building browser extensions that solve real problems - Chrome, Firefox, and cross-browser extensions. Covers extension architecture, manifest v3, c... | browser, extension, builder | browser, extension, builder, building, extensions, solve, real, problems, chrome, firefox, cross, covers | | `c4-architecture-c4-architecture` | Generate comprehensive C4 architecture documentation for an existing repository/codebase using a bottom-up analysis approach. | c4, architecture | c4, architecture, generate, documentation, existing, repository, codebase, bottom, up, analysis, approach | -| `c4-code` | Expert C4 Code-level documentation specialist. Analyzes code directories to create comprehensive C4 code-level documentation including function signatures, a... | c4, code | c4, code, level, documentation, analyzes, directories, including, function, signatures, arguments, dependencies, structure | -| `c4-component` | Expert C4 Component-level documentation specialist. Synthesizes C4 Code-level documentation into Component-level architecture, defining component boundaries,... | c4, component | c4, component, level, documentation, synthesizes, code, architecture, defining, boundaries, interfaces, relationships | -| `c4-container` | Expert C4 Container-level documentation specialist. | c4, container | c4, container, level, documentation | -| `c4-context` | Expert C4 Context-level documentation specialist. Creates high-level system context diagrams, documents personas, user journeys, system features, and externa... | c4 | c4, context, level, documentation, creates, high, diagrams, documents, personas, user, journeys, features | +| `c4-code` | | c4, code | c4, code | +| `c4-component` | | c4, component | c4, component | +| `c4-container` | | c4, container | c4, container | +| `c4-context` | | c4 | c4, context | | `calendly-automation` | Automate Calendly scheduling, event management, invitee tracking, availability checks, and organization administration via Rube MCP (Composio). Always search... | calendly | calendly, automation, automate, scheduling, event, invitee, tracking, availability, checks, organization, administration, via | | `cloudformation-best-practices` | CloudFormation template optimization, nested stacks, drift detection, and production-ready patterns. Use when writing or reviewing CF templates. | cloudformation, best, practices | cloudformation, best, practices, optimization, nested, stacks, drift, detection, writing, reviewing, cf | | `code-refactoring-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | code, refactoring, refactor, clean | code, refactoring, refactor, clean, specializing, principles, solid, software, engineering, analyze, provided, improve | @@ -35,17 +35,13 @@ Total skills: 956 | `ddd-strategic-design` | Design DDD strategic artifacts including subdomains, bounded contexts, and ubiquitous language for complex business domains. | [ddd, strategic-design, bounded-context, ubiquitous-language] | [ddd, strategic-design, bounded-context, ubiquitous-language], ddd, strategic, artifacts, including, subdomains, bounded, contexts, ubiquitous | | `ddd-tactical-patterns` | Apply DDD tactical patterns in code using entities, value objects, aggregates, repositories, and domain events with explicit invariants. | [ddd, tactical, aggregates, value-objects, domain-events] | [ddd, tactical, aggregates, value-objects, domain-events], ddd, apply, code, entities, value, objects, repositories | | `doc-coauthoring` | Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision do... | doc, coauthoring | doc, coauthoring, users, through, structured, co, authoring, documentation, user, wants, write, proposals | -| `docs-architect` | Creates comprehensive technical documentation from existing codebases. Analyzes architecture, design patterns, and implementation details to produce long-for... | docs | docs, architect, creates, technical, documentation, existing, codebases, analyzes, architecture, details, produce, long | | `domain-driven-design` | Plan and route Domain-Driven Design work from strategic modeling to tactical implementation and evented architecture patterns. | [ddd, domain, bounded-context, architecture] | [ddd, domain, bounded-context, architecture], driven, plan, route, work, strategic, modeling, tactical, evented | -| `elixir-pro` | Write idiomatic Elixir code with OTP patterns, supervision trees, and Phoenix LiveView. Masters concurrency, fault tolerance, and distributed systems. | elixir | elixir, pro, write, idiomatic, code, otp, supervision, trees, phoenix, liveview, masters, concurrency | -| `error-detective` | Search logs and codebases for error patterns, stack traces, and anomalies. Correlates errors across systems and identifies root causes. | error, detective | error, detective, search, logs, codebases, stack, traces, anomalies, correlates, errors, identifies, root | | `error-handling-patterns` | Master error handling patterns across languages including exceptions, Result types, error propagation, and graceful degradation to build resilient applicatio... | error, handling | error, handling, languages, including, exceptions, result, types, propagation, graceful, degradation, resilient, applications | | `event-sourcing-architect` | Expert in event sourcing, CQRS, and event-driven architecture patterns. Masters event store design, projection building, saga orchestration, and eventual con... | event, sourcing | event, sourcing, architect, cqrs, driven, architecture, masters, store, projection, building, saga, orchestration | | `event-store-design` | Design and implement event stores for event-sourced systems. Use when building event sourcing infrastructure, choosing event store technologies, or implement... | event, store | event, store, stores, sourced, building, sourcing, infrastructure, choosing, technologies, implementing, persistence | | `game-development/multiplayer` | Multiplayer game development principles. Architecture, networking, synchronization. | game, development/multiplayer | game, development/multiplayer, multiplayer, development, principles, architecture, networking, synchronization | | `godot-gdscript-patterns` | Master Godot 4 GDScript patterns including signals, scenes, state machines, and optimization. Use when building Godot games, implementing game systems, or le... | godot, gdscript | godot, gdscript, including, signals, scenes, state, machines, optimization, building, games, implementing, game | -| `hig-inputs` | Apple HIG guidance for input methods and interaction patterns: gestures, Apple Pencil, keyboards, game controllers, pointers, Digital Crown, eye tracking, fo... | hig, inputs | hig, inputs, apple, guidance, input, methods, interaction, gestures, pencil, keyboards, game, controllers | -| `hig-patterns` | Apple Human Interface Guidelines interaction and UX patterns. | hig | hig, apple, human, interface, guidelines, interaction, ux | +| `hig-patterns` | | hig | hig | | `i18n-localization` | Internationalization and localization patterns. Detecting hardcoded strings, managing translations, locale files, RTL support. | i18n, localization | i18n, localization, internationalization, detecting, hardcoded, strings, managing, translations, locale, files, rtl | | `inngest` | Inngest expert for serverless-first background jobs, event-driven workflows, and durable execution without managing queues or workers. Use when: inngest, ser... | inngest | inngest, serverless, first, background, jobs, event, driven, durable, execution, without, managing, queues | | `kotlin-coroutines-expert` | Expert patterns for Kotlin Coroutines and Flow, covering structured concurrency, error handling, and testing. | kotlin, coroutines | kotlin, coroutines, flow, covering, structured, concurrency, error, handling, testing | @@ -72,64 +68,61 @@ Total skills: 956 | `wcag-audit-patterns` | Conduct WCAG 2.2 accessibility audits with automated testing, manual verification, and remediation guidance. Use when auditing websites for accessibility, fi... | wcag, audit | wcag, audit, conduct, accessibility, audits, automated, testing, manual, verification, remediation, guidance, auditing | | `wordpress-theme-development` | WordPress theme development workflow covering theme architecture, template hierarchy, custom post types, block editor support, and responsive design. | wordpress, theme | wordpress, theme, development, covering, architecture, hierarchy, custom, post, types, block, editor, responsive | | `workflow-orchestration-patterns` | Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism cons... | | orchestration, durable, temporal, distributed, covers, vs, activity, separation, saga, state, determinism, constraints | -| `workflow-patterns` | Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding th... | | skill, implementing, tasks, according, conductor, tdd, handling, phase, checkpoints, managing, git, commits | +| `workflow-patterns` | | | | | `zapier-make-patterns` | No-code automation democratizes workflow building. Zapier and Make (formerly Integromat) let non-developers automate business processes without writing code.... | zapier, make | zapier, make, no, code, automation, democratizes, building, formerly, integromat, let, non, developers | -## business (41) +## business (46) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | -| `competitive-landscape` | This skill should be used when the user asks to \\\"analyze competitors", "assess competitive landscape", "identify differentiation", "evaluate market positi... | competitive, landscape | competitive, landscape, skill, should, used, user, asks, analyze, competitors, assess, identify, differentiation | +| `apify-competitor-intelligence` | Analyze competitor strategies, content, pricing, ads, and market positioning across Google Maps, Booking.com, Facebook, Instagram, YouTube, and TikTok. | apify, competitor, intelligence | apify, competitor, intelligence, analyze, content, pricing, ads, market, positioning, google, maps, booking | +| `apify-market-research` | Analyze market conditions, geographic opportunities, pricing, consumer behavior, and product validation across Google Maps, Facebook, Instagram, Booking.com,... | apify, market, research | apify, market, research, analyze, conditions, geographic, opportunities, pricing, consumer, behavior, product, validation | +| `business-analyst` | | business, analyst | business, analyst | | `competitor-alternatives` | When the user wants to create competitor comparison or alternative pages for SEO and sales enablement. Also use when the user mentions 'alternative page,' 'v... | competitor, alternatives | competitor, alternatives, user, wants, comparison, alternative, pages, seo, sales, enablement, mentions, page | -| `conductor-setup` | Initialize project with Conductor artifacts (product definition, -tech stack, workflow, style guides) | conductor, setup | conductor, setup, initialize, artifacts, product, definition, tech, stack, style, guides | | `content-creator` | Create SEO-optimized marketing content with consistent brand voice. Includes brand voice analyzer, SEO optimizer, content frameworks, and social media templa... | content, creator | content, creator, seo, optimized, marketing, consistent, brand, voice, includes, analyzer, optimizer, frameworks | -| `context-driven-development` | Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship be... | driven | driven, context, development, skill, working, conductor, methodology, managing, artifacts, understanding, relationship, between | | `copy-editing` | When the user wants to edit, review, or improve existing marketing copy. Also use when the user mentions 'edit this copy,' 'review my copy,' 'copy feedback,'... | copy, editing | copy, editing, user, wants, edit, review, improve, existing, marketing, mentions, my, feedback | | `copywriting` | Write rigorous, conversion-focused marketing copy for landing pages and emails. Enforces brief confirmation and strict no-fabrication rules. | copywriting | copywriting, write, rigorous, conversion, marketing, copy, landing, pages, emails, enforces, brief, confirmation | +| `customer-support` | | customer, support | customer, support | | `deep-research` | Execute autonomous multi-step research using Google Gemini Deep Research Agent. Use for: market analysis, competitive landscaping, literature reviews, techni... | deep, research | deep, research, execute, autonomous, multi, step, google, gemini, agent, market, analysis, competitive | | `defi-protocol-templates` | Implement DeFi protocols with production-ready templates for staking, AMMs, governance, and lending systems. Use when building decentralized finance applicat... | defi, protocol | defi, protocol, protocols, staking, amms, governance, lending, building, decentralized, finance, applications, smart | | `email-systems` | Email has the highest ROI of any marketing channel. $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, ... | email | email, highest, roi, any, marketing, channel, 36, every, spent, yet, most, startups | | `employment-contract-templates` | Create employment contracts, offer letters, and HR policy documents following legal best practices. Use when drafting employment agreements, creating HR poli... | employment, contract | employment, contract, contracts, offer, letters, hr, policy, documents, following, legal, drafting, agreements | | `framework-migration-legacy-modernize` | Orchestrate a comprehensive legacy system modernization using the strangler fig pattern, enabling gradual replacement of outdated components while maintainin... | framework, migration, legacy, modernize | framework, migration, legacy, modernize, orchestrate, modernization, strangler, fig, enabling, gradual, replacement, outdated | | `free-tool-strategy` | When the user wants to plan, evaluate, or build a free tool for marketing purposes — lead generation, SEO value, or brand awareness. Also use when the user m... | free | free, user, wants, plan, evaluate, marketing, purposes, lead, generation, seo, value, brand | -| `hr-pro` | Professional, ethical HR partner for hiring, onboarding/offboarding, PTO and leave, performance, compliant policies, and employee relations. | hr | hr, pro, professional, ethical, partner, hiring, onboarding, offboarding, pto, leave, performance, compliant | +| `hr-pro` | | hr | hr, pro | +| `legal-advisor` | | legal, advisor | legal, advisor | | `linkedin-cli` | Use when automating LinkedIn via CLI: fetch profiles, search people/companies, send messages, manage connections, create posts, and Sales Navigator. | linkedin, cli | linkedin, cli, automating, via, fetch, profiles, search, people, companies, send, messages, connections | | `local-legal-seo-audit` | Audit and improve local SEO for law firms, attorneys, forensic experts and legal/professional services sites with local presence, focusing on GBP, directorie... | local, legal, seo, audit | local, legal, seo, audit, improve, law, firms, attorneys, forensic, experts, professional, sites | -| `market-sizing-analysis` | This skill should be used when the user asks to \\\"calculate TAM\\\", "determine SAM", "estimate SOM", "size the market", "calculate market opportunity", "w... | market, sizing | market, sizing, analysis, skill, should, used, user, asks, calculate, tam, determine, sam | +| `market-sizing-analysis` | | market, sizing | market, sizing, analysis | | `marketing-ideas` | Provide proven marketing strategies and growth ideas for SaaS and software products, prioritized using a marketing feasibility scoring system. | marketing, ideas | marketing, ideas, provide, proven, growth, saas, software, products, prioritized, feasibility, scoring | | `marketing-psychology` | Apply behavioral science and mental models to marketing decisions, prioritized using a psychological leverage and feasibility scoring system. | marketing, psychology | marketing, psychology, apply, behavioral, science, mental, models, decisions, prioritized, psychological, leverage, feasibility | | `notion-template-business` | Expert in building and selling Notion templates as a business - not just making templates, but building a sustainable digital product business. Covers templa... | notion, business | notion, business, building, selling, just, making, sustainable, digital, product, covers, pricing, marketplaces | | `pricing-strategy` | Design pricing, packaging, and monetization strategies based on value, customer willingness to pay, and growth objectives. | pricing | pricing, packaging, monetization, value, customer, willingness, pay, growth, objectives | -| `sales-automator` | Draft cold emails, follow-ups, and proposal templates. Creates -pricing pages, case studies, and sales scripts. Use PROACTIVELY for sales -outreach or lead nur... | sales, automator | sales, automator, draft, cold, emails, follow, ups, proposal, creates, pricing, pages, case | +| `programmatic-seo` | | programmatic, seo | programmatic, seo | +| `sales-automator` | | sales, automator | sales, automator | | `screenshots` | Generate marketing screenshots of your app using Playwright. Use when the user wants to create screenshots for Product Hunt, social media, landing pages, or ... | screenshots | screenshots, generate, marketing, app, playwright, user, wants, product, hunt, social, media, landing | | `scroll-experience` | Expert in building immersive scroll-driven experiences - parallax storytelling, scroll animations, interactive narratives, and cinematic web experiences. Lik... | scroll, experience | scroll, experience, building, immersive, driven, experiences, parallax, storytelling, animations, interactive, narratives, cinematic | -| `seo-audit` | Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance. | seo, audit | seo, audit, diagnose, issues, affecting, crawlability, indexation, rankings, organic, performance | -| `seo-cannibalization-detector` | Analyzes multiple provided pages to identify keyword overlap and potential cannibalization issues. Suggests differentiation strategies. Use PROACTIVELY when ... | seo, cannibalization, detector | seo, cannibalization, detector, analyzes, multiple, provided, pages, identify, keyword, overlap, potential, issues | -| `seo-content-auditor` | Analyzes provided content for quality, E-E-A-T signals, and SEO best practices. Scores content and provides improvement recommendations based on established ... | seo, content, auditor | seo, content, auditor, analyzes, provided, quality, signals, scores, provides, improvement, recommendations, established | -| `seo-content-planner` | Creates comprehensive content outlines and topic clusters for SEO. -Plans content calendars and identifies topic gaps. Use PROACTIVELY for content -strategy an... | seo, content, planner | seo, content, planner, creates, outlines, topic, clusters, plans, calendars, identifies, gaps, proactively | -| `seo-content-refresher` | Identifies outdated elements in provided content and suggests updates to maintain freshness. Finds statistics, dates, and examples that need updating. Use PR... | seo, content, refresher | seo, content, refresher, identifies, outdated, elements, provided, suggests, updates, maintain, freshness, finds | -| `seo-content-writer` | Writes SEO-optimized content based on provided keywords and topic briefs. Creates engaging, comprehensive content following best practices. Use PROACTIVELY f... | seo, content, writer | seo, content, writer, writes, optimized, provided, keywords, topic, briefs, creates, engaging, following | -| `seo-fundamentals` | Core principles of SEO including E-E-A-T, Core Web Vitals, technical foundations, content quality, and how modern search engines evaluate pages. | seo, fundamentals | seo, fundamentals, core, principles, including, web, vitals, technical, foundations, content, quality, how | -| `seo-keyword-strategist` | Analyzes keyword usage in provided content, calculates density, suggests semantic variations and LSI keywords based on the topic. Prevents over-optimization.... | seo, keyword, strategist | seo, keyword, strategist, analyzes, usage, provided, content, calculates, density, suggests, semantic, variations | -| `seo-meta-optimizer` | Creates optimized meta titles, descriptions, and URL suggestions based on character limits and best practices. Generates compelling, keyword-rich metadata. U... | seo, meta, optimizer | seo, meta, optimizer, creates, optimized, titles, descriptions, url, suggestions, character, limits, generates | -| `seo-snippet-hunter` | Formats content to be eligible for featured snippets and SERP features. Creates snippet-optimized content blocks based on best practices. Use PROACTIVELY for... | seo, snippet, hunter | seo, snippet, hunter, formats, content, eligible, featured, snippets, serp, features, creates, optimized | -| `seo-structure-architect` | Analyzes and optimizes content structure including header hierarchy, suggests schema markup, and internal linking opportunities. Creates search-friendly cont... | seo, structure | seo, structure, architect, analyzes, optimizes, content, including, header, hierarchy, suggests, schema, markup | -| `startup-analyst` | Expert startup business analyst specializing in market sizing, financial modeling, competitive analysis, and strategic planning for early-stage companies. | startup, analyst | startup, analyst, business, specializing, market, sizing, financial, modeling, competitive, analysis, strategic, planning | -| `startup-business-analyst-business-case` | Generate comprehensive investor-ready business case document with -market, solution, financials, and strategy | startup, business, analyst, case | startup, business, analyst, case, generate, investor, document, market, solution, financials | -| `startup-business-analyst-financial-projections` | Create detailed 3-5 year financial model with revenue, costs, cash -flow, and scenarios | startup, business, analyst, financial, projections | startup, business, analyst, financial, projections, detailed, year, model, revenue, costs, cash, flow | -| `startup-business-analyst-market-opportunity` | Generate comprehensive market opportunity analysis with TAM/SAM/SOM -calculations | startup, business, analyst, market, opportunity | startup, business, analyst, market, opportunity, generate, analysis, tam, sam, som, calculations | -| `startup-financial-modeling` | This skill should be used when the user asks to \\\"create financial projections", "build a financial model", "forecast revenue", "calculate burn rate", "est... | startup, financial, modeling | startup, financial, modeling, skill, should, used, user, asks, projections, model, forecast, revenue | +| `seo-audit` | | seo, audit | seo, audit | +| `seo-authority-builder` | | seo, authority, builder | seo, authority, builder | +| `seo-cannibalization-detector` | | seo, cannibalization, detector | seo, cannibalization, detector | +| `seo-content-auditor` | | seo, content, auditor | seo, content, auditor | +| `seo-content-planner` | | seo, content, planner | seo, content, planner | +| `seo-content-refresher` | | seo, content, refresher | seo, content, refresher | +| `seo-content-writer` | | seo, content, writer | seo, content, writer | +| `seo-fundamentals` | | seo, fundamentals | seo, fundamentals | +| `seo-keyword-strategist` | | seo, keyword, strategist | seo, keyword, strategist | +| `seo-meta-optimizer` | | seo, meta, optimizer | seo, meta, optimizer | +| `seo-snippet-hunter` | | seo, snippet, hunter | seo, snippet, hunter | +| `seo-structure-architect` | | seo, structure | seo, structure, architect | +| `startup-analyst` | | startup, analyst | startup, analyst | +| `startup-business-analyst-business-case` | | startup, business, analyst, case | startup, business, analyst, case | +| `startup-business-analyst-financial-projections` | | startup, business, analyst, financial, projections | startup, business, analyst, financial, projections | +| `startup-business-analyst-market-opportunity` | | startup, business, analyst, market, opportunity | startup, business, analyst, market, opportunity | +| `startup-financial-modeling` | | startup, financial, modeling | startup, financial, modeling | +| `startup-metrics-framework` | | startup, metrics, framework | startup, metrics, framework | | `whatsapp-automation` | Automate WhatsApp Business tasks via Rube MCP (Composio): send messages, manage templates, upload media, and handle contacts. Always search tools first for c... | whatsapp | whatsapp, automation, automate, business, tasks, via, rube, mcp, composio, send, messages, upload | -## data-ai (174) +## data-ai (153) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -140,72 +133,58 @@ calculations | startup, business, analyst, market, opportunity | startup, busine | `agents-v2-py` | Build container-based Foundry Agents with Azure AI Projects SDK (ImageBasedHostedAgentDefinition). Use when creating hosted agents with custom container imag... | agents, v2, py | agents, v2, py, container, foundry, azure, ai, sdk, imagebasedhostedagentdefinition, creating, hosted, custom | | `ai-agent-development` | AI agent development workflow for building autonomous agents, multi-agent systems, and agent orchestration with CrewAI, LangGraph, and custom agents. | ai, agent | ai, agent, development, building, autonomous, agents, multi, orchestration, crewai, langgraph, custom | | `ai-agents-architect` | Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build ... | ai, agents | ai, agents, architect, designing, building, autonomous, masters, memory, planning, multi, agent, orchestration | -| `ai-engineer` | Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and ente... | ai | ai, engineer, llm, applications, rag, intelligent, agents, implements, vector, search, multimodal, agent | +| `ai-engineer` | | ai | ai, engineer | | `ai-ml` | AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features. | ai, ml | ai, ml, machine, learning, covering, llm, application, development, rag, agent, architecture, pipelines | -| `ai-product` | Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integrat... | ai, product | ai, product, every, powered, question, whether, ll, right, ship, demo, falls, apart | +| `ai-product` | Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integra... | ai, product | ai, product, every, powered, question, whether, ll, right, ship, demo, falls, apart | | `ai-wrapper-product` | Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products t... | ai, wrapper, product | ai, wrapper, product, building, products, wrap, apis, openai, anthropic, etc, people, pay | -| `analytics-tracking` | Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data. | analytics, tracking | analytics, tracking, audit, improve, produce, reliable, decision, data | +| `analytics-tracking` | | analytics, tracking | analytics, tracking | | `angular-ui-patterns` | Modern Angular UI patterns for loading states, error handling, and data display. Use when building UI components, handling async data, or managing component ... | angular, ui | angular, ui, loading, states, error, handling, data, display, building, components, async, managing | -| `api-documenter` | Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build com... | api, documenter | api, documenter, documentation, openapi, ai, powered, developer, experience, interactive, docs, generate, sdks | +| `apify-content-analytics` | Track engagement metrics, measure campaign ROI, and analyze content performance across Instagram, Facebook, YouTube, and TikTok. | apify, content, analytics | apify, content, analytics, track, engagement, metrics, measure, campaign, roi, analyze, performance, instagram | +| `apify-ecommerce` | Scrape e-commerce data for pricing intelligence, customer reviews, and seller discovery across Amazon, Walmart, eBay, IKEA, and 50+ marketplaces. Use when us... | apify, ecommerce | apify, ecommerce, scrape, commerce, data, pricing, intelligence, customer, reviews, seller, discovery, amazon | +| `apify-ultimate-scraper` | Universal AI-powered web scraper for any platform. Scrape data from Instagram, Facebook, TikTok, YouTube, Google Maps, Google Search, Google Trends, Booking.... | apify, ultimate, scraper | apify, ultimate, scraper, universal, ai, powered, web, any, platform, scrape, data, instagram | | `appdeploy` | Deploy web apps with backend APIs, database, and file storage. Use when the user asks to deploy or publish a website or web app and wants a public URL. Uses ... | appdeploy | appdeploy, deploy, web, apps, backend, apis, database, file, storage, user, asks, publish | | `audio-transcriber` | Transform audio recordings into professional Markdown documentation with intelligent summaries using LLM integration | [audio, transcription, whisper, meeting-minutes, speech-to-text] | [audio, transcription, whisper, meeting-minutes, speech-to-text], audio, transcriber, transform, recordings, professional, markdown, documentation | | `autonomous-agent-patterns` | Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use ... | autonomous, agent | autonomous, agent, building, coding, agents, covers, integration, permission, browser, automation, human, loop | | `autonomous-agents` | Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The c... | autonomous, agents | autonomous, agents, ai, independently, decompose, goals, plan, actions, execute, self, correct, without | -| `azure-ai-agents-persistent-dotnet` | Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. | azure, ai, agents, persistent, dotnet | azure, ai, agents, persistent, dotnet, sdk, net, low, level, creating, managing, threads | -| `azure-ai-agents-persistent-java` | Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. | azure, ai, agents, persistent, java | azure, ai, agents, persistent, java, sdk, low, level, creating, managing, threads, messages | +| `azure-ai-agents-persistent-dotnet` | | azure, ai, agents, persistent, dotnet | azure, ai, agents, persistent, dotnet | +| `azure-ai-agents-persistent-java` | | azure, ai, agents, persistent, java | azure, ai, agents, persistent, java | | `azure-ai-contentsafety-java` | Build content moderation applications with Azure AI Content Safety SDK for Java. Use when implementing text/image analysis, blocklist management, or harm det... | azure, ai, contentsafety, java | azure, ai, contentsafety, java, content, moderation, applications, safety, sdk, implementing, text, image | -| `azure-ai-contentsafety-py` | Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification. | azure, ai, contentsafety, py | azure, ai, contentsafety, py, content, safety, sdk, python, detecting, harmful, text, images | +| `azure-ai-contentsafety-py` | | azure, ai, contentsafety, py | azure, ai, contentsafety, py | | `azure-ai-contentsafety-ts` | Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detect... | azure, ai, contentsafety, ts | azure, ai, contentsafety, ts, analyze, text, images, harmful, content, safety, rest, moderating | -| `azure-ai-contentunderstanding-py` | Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video. | azure, ai, contentunderstanding, py | azure, ai, contentunderstanding, py, content, understanding, sdk, python, multimodal, extraction, documents, images | -| `azure-ai-document-intelligence-dotnet` | Azure AI Document Intelligence SDK for .NET. Extract text, tables, and structured data from documents using prebuilt and custom models. | azure, ai, document, intelligence, dotnet | azure, ai, document, intelligence, dotnet, sdk, net, extract, text, tables, structured, data | +| `azure-ai-contentunderstanding-py` | | azure, ai, contentunderstanding, py | azure, ai, contentunderstanding, py | +| `azure-ai-document-intelligence-dotnet` | | azure, ai, document, intelligence, dotnet | azure, ai, document, intelligence, dotnet | | `azure-ai-document-intelligence-ts` | Extract text, tables, and structured data from documents using Azure Document Intelligence (@azure-rest/ai-document-intelligence). Use when processing invoic... | azure, ai, document, intelligence, ts | azure, ai, document, intelligence, ts, extract, text, tables, structured, data, documents, rest | | `azure-ai-formrecognizer-java` | Build document analysis applications with Azure Document Intelligence (Form Recognizer) SDK for Java. Use when extracting text, tables, key-value pairs from ... | azure, ai, formrecognizer, java | azure, ai, formrecognizer, java, document, analysis, applications, intelligence, form, recognizer, sdk, extracting | -| `azure-ai-ml-py` | Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines. | azure, ai, ml, py | azure, ai, ml, py, machine, learning, sdk, v2, python, workspaces, jobs, models | -| `azure-ai-openai-dotnet` | Azure OpenAI SDK for .NET. Client library for Azure OpenAI and OpenAI services. Use for chat completions, embeddings, image generation, audio transcription, ... | azure, ai, openai, dotnet | azure, ai, openai, dotnet, sdk, net, client, library, chat, completions, embeddings, image | -| `azure-ai-projects-dotnet` | Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes. | azure, ai, dotnet | azure, ai, dotnet, sdk, net, high, level, client, foundry, including, agents, connections | -| `azure-ai-projects-java` | Azure AI Projects SDK for Java. High-level SDK for Azure AI Foundry project management including connections, datasets, indexes, and evaluations. | azure, ai, java | azure, ai, java, sdk, high, level, foundry, including, connections, datasets, indexes, evaluations | +| `azure-ai-ml-py` | | azure, ai, ml, py | azure, ai, ml, py | +| `azure-ai-openai-dotnet` | | azure, ai, openai, dotnet | azure, ai, openai, dotnet | +| `azure-ai-projects-dotnet` | | azure, ai, dotnet | azure, ai, dotnet | +| `azure-ai-projects-java` | | azure, ai, java | azure, ai, java | | `azure-ai-projects-py` | Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents wi... | azure, ai, py | azure, ai, py, applications, python, sdk, working, foundry, clients, creating, versioned, agents | | `azure-ai-projects-ts` | Build AI applications using Azure AI Projects SDK for JavaScript (@azure/ai-projects). Use when working with Foundry project clients, agents, connections, de... | azure, ai, ts | azure, ai, ts, applications, sdk, javascript, working, foundry, clients, agents, connections, deployments | -| `azure-ai-textanalytics-py` | Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language pr... | azure, ai, textanalytics, py | azure, ai, textanalytics, py, text, analytics, sdk, sentiment, analysis, entity, recognition, key | -| `azure-ai-transcription-py` | Azure AI Transcription SDK for Python. Use for real-time and batch speech-to-text transcription with timestamps and diarization. | azure, ai, transcription, py | azure, ai, transcription, py, sdk, python, real, time, batch, speech, text, timestamps | -| `azure-ai-translation-document-py` | Azure AI Document Translation SDK for batch translation of documents with format preservation. Use for translating Word, PDF, Excel, PowerPoint, and other do... | azure, ai, translation, document, py | azure, ai, translation, document, py, sdk, batch, documents, format, preservation, translating, word | -| `azure-ai-translation-text-py` | Azure AI Text Translation SDK for real-time text translation, transliteration, language detection, and dictionary lookup. Use for translating text content in... | azure, ai, translation, text, py | azure, ai, translation, text, py, sdk, real, time, transliteration, language, detection, dictionary | +| `azure-ai-textanalytics-py` | | azure, ai, textanalytics, py | azure, ai, textanalytics, py | +| `azure-ai-transcription-py` | | azure, ai, transcription, py | azure, ai, transcription, py | +| `azure-ai-translation-document-py` | | azure, ai, translation, document, py | azure, ai, translation, document, py | +| `azure-ai-translation-text-py` | | azure, ai, translation, text, py | azure, ai, translation, text, py | | `azure-ai-translation-ts` | Build translation applications using Azure Translation SDKs for JavaScript (@azure-rest/ai-translation-text, @azure-rest/ai-translation-document). Use when i... | azure, ai, translation, ts | azure, ai, translation, ts, applications, sdks, javascript, rest, text, document, implementing, transliter | | `azure-ai-vision-imageanalysis-java` | Build image analysis applications with Azure AI Vision SDK for Java. Use when implementing image captioning, OCR text extraction, object detection, tagging, ... | azure, ai, vision, imageanalysis, java | azure, ai, vision, imageanalysis, java, image, analysis, applications, sdk, implementing, captioning, ocr | -| `azure-ai-vision-imageanalysis-py` | Azure AI Vision Image Analysis SDK for captions, tags, objects, OCR, people detection, and smart cropping. Use for computer vision and image understanding ta... | azure, ai, vision, imageanalysis, py | azure, ai, vision, imageanalysis, py, image, analysis, sdk, captions, tags, objects, ocr | -| `azure-ai-voicelive-dotnet` | Azure AI Voice Live SDK for .NET. Build real-time voice AI applications with bidirectional WebSocket communication. | azure, ai, voicelive, dotnet | azure, ai, voicelive, dotnet, voice, live, sdk, net, real, time, applications, bidirectional | -| `azure-ai-voicelive-java` | Azure AI VoiceLive SDK for Java. Real-time bidirectional voice conversations with AI assistants using WebSocket. | azure, ai, voicelive, java | azure, ai, voicelive, java, sdk, real, time, bidirectional, voice, conversations, assistants, websocket | +| `azure-ai-vision-imageanalysis-py` | | azure, ai, vision, imageanalysis, py | azure, ai, vision, imageanalysis, py | +| `azure-ai-voicelive-dotnet` | | azure, ai, voicelive, dotnet | azure, ai, voicelive, dotnet | +| `azure-ai-voicelive-java` | | azure, ai, voicelive, java | azure, ai, voicelive, java | | `azure-ai-voicelive-py` | Build real-time voice AI applications using Azure AI Voice Live SDK (azure-ai-voicelive). Use this skill when creating Python applications that need real-tim... | azure, ai, voicelive, py | azure, ai, voicelive, py, real, time, voice, applications, live, sdk, skill, creating | -| `azure-ai-voicelive-ts` | Azure AI Voice Live SDK for JavaScript/TypeScript. Build real-time voice AI applications with bidirectional WebSocket communication. | azure, ai, voicelive, ts | azure, ai, voicelive, ts, voice, live, sdk, javascript, typescript, real, time, applications | +| `azure-ai-voicelive-ts` | | azure, ai, voicelive, ts | azure, ai, voicelive, ts | | `azure-communication-callautomation-java` | Build call automation workflows with Azure Communication Services Call Automation Java SDK. Use when implementing IVR systems, call routing, call recording, ... | azure, communication, callautomation, java | azure, communication, callautomation, java, call, automation, sdk, implementing, ivr, routing, recording, dtmf | -| `azure-cosmos-java` | Azure Cosmos DB SDK for Java. NoSQL database operations with global distribution, multi-model support, and reactive patterns. | azure, cosmos, java | azure, cosmos, java, db, sdk, nosql, database, operations, global, distribution, multi, model | -| `azure-cosmos-py` | Azure Cosmos DB SDK for Python (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data. | azure, cosmos, py | azure, cosmos, py, db, sdk, python, nosql, api, document, crud, queries, containers | -| `azure-cosmos-rust` | Azure Cosmos DB SDK for Rust (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data. | azure, cosmos, rust | azure, cosmos, rust, db, sdk, nosql, api, document, crud, queries, containers, globally | -| `azure-cosmos-ts` | Azure Cosmos DB JavaScript/TypeScript SDK (@azure/cosmos) for data plane operations. Use for CRUD operations on documents, queries, bulk operations, and cont... | azure, cosmos, ts | azure, cosmos, ts, db, javascript, typescript, sdk, data, plane, operations, crud, documents | | `azure-data-tables-java` | Build table storage applications with Azure Tables SDK for Java. Use when working with Azure Table Storage or Cosmos DB Table API for NoSQL key-value data, s... | azure, data, tables, java | azure, data, tables, java, table, storage, applications, sdk, working, cosmos, db, api | -| `azure-data-tables-py` | Azure Tables SDK for Python (Storage and Cosmos DB). Use for NoSQL key-value storage, entity CRUD, and batch operations. | azure, data, tables, py | azure, data, tables, py, sdk, python, storage, cosmos, db, nosql, key, value | +| `azure-data-tables-py` | | azure, data, tables, py | azure, data, tables, py | | `azure-eventhub-java` | Build real-time streaming applications with Azure Event Hubs SDK for Java. Use when implementing event streaming, high-throughput data ingestion, or building... | azure, eventhub, java | azure, eventhub, java, real, time, streaming, applications, event, hubs, sdk, implementing, high | -| `azure-eventhub-rust` | Azure Event Hubs SDK for Rust. Use for sending and receiving events, streaming data ingestion. | azure, eventhub, rust | azure, eventhub, rust, event, hubs, sdk, sending, receiving, events, streaming, data, ingestion | | `azure-eventhub-ts` | Build event streaming applications using Azure Event Hubs SDK for JavaScript (@azure/event-hubs). Use when implementing high-throughput event ingestion, real... | azure, eventhub, ts | azure, eventhub, ts, event, streaming, applications, hubs, sdk, javascript, implementing, high, throughput | -| `azure-maps-search-dotnet` | Azure Maps SDK for .NET. Location-based services including geocoding, routing, rendering, geolocation, and weather. Use for address search, directions, map t... | azure, maps, search, dotnet | azure, maps, search, dotnet, sdk, net, location, including, geocoding, routing, rendering, geolocation | -| `azure-monitor-ingestion-java` | Azure Monitor Ingestion SDK for Java. Send custom logs to Azure Monitor via Data Collection Rules (DCR) and Data Collection Endpoints (DCE). | azure, monitor, ingestion, java | azure, monitor, ingestion, java, sdk, send, custom, logs, via, data, collection, rules | -| `azure-monitor-ingestion-py` | Azure Monitor Ingestion SDK for Python. Use for sending custom logs to Log Analytics workspace via Logs Ingestion API. | azure, monitor, ingestion, py | azure, monitor, ingestion, py, sdk, python, sending, custom, logs, log, analytics, workspace | -| `azure-monitor-query-java` | Azure Monitor Query SDK for Java. Execute Kusto queries against Log Analytics workspaces and query metrics from Azure resources. | azure, monitor, query, java | azure, monitor, query, java, sdk, execute, kusto, queries, against, log, analytics, workspaces | -| `azure-monitor-query-py` | Azure Monitor Query SDK for Python. Use for querying Log Analytics workspaces and Azure Monitor metrics. | azure, monitor, query, py | azure, monitor, query, py, sdk, python, querying, log, analytics, workspaces, metrics | -| `azure-postgres-ts` | Connect to Azure Database for PostgreSQL Flexible Server from Node.js/TypeScript using the pg (node-postgres) package. | azure, postgres, ts | azure, postgres, ts, connect, database, postgresql, flexible, server, node, js, typescript, pg | -| `azure-resource-manager-cosmosdb-dotnet` | Azure Resource Manager SDK for Cosmos DB in .NET. | azure, resource, manager, cosmosdb, dotnet | azure, resource, manager, cosmosdb, dotnet, sdk, cosmos, db, net | -| `azure-resource-manager-mysql-dotnet` | Azure MySQL Flexible Server SDK for .NET. Database management for MySQL Flexible Server deployments. | azure, resource, manager, mysql, dotnet | azure, resource, manager, mysql, dotnet, flexible, server, sdk, net, database, deployments | -| `azure-resource-manager-postgresql-dotnet` | Azure PostgreSQL Flexible Server SDK for .NET. Database management for PostgreSQL Flexible Server deployments. | azure, resource, manager, postgresql, dotnet | azure, resource, manager, postgresql, dotnet, flexible, server, sdk, net, database, deployments | -| `azure-resource-manager-sql-dotnet` | Azure Resource Manager SDK for Azure SQL in .NET. | azure, resource, manager, sql, dotnet | azure, resource, manager, sql, dotnet, sdk, net | -| `azure-search-documents-dotnet` | Azure AI Search SDK for .NET (Azure.Search.Documents). Use for building search applications with full-text, vector, semantic, and hybrid search. | azure, search, documents, dotnet | azure, search, documents, dotnet, ai, sdk, net, building, applications, full, text, vector | -| `azure-search-documents-py` | Azure AI Search SDK for Python. Use for vector search, hybrid search, semantic ranking, indexing, and skillsets. | azure, search, documents, py | azure, search, documents, py, ai, sdk, python, vector, hybrid, semantic, ranking, indexing | +| `azure-postgres-ts` | | azure, postgres, ts | azure, postgres, ts | +| `azure-resource-manager-mysql-dotnet` | | azure, resource, manager, mysql, dotnet | azure, resource, manager, mysql, dotnet | +| `azure-resource-manager-sql-dotnet` | | azure, resource, manager, sql, dotnet | azure, resource, manager, sql, dotnet | | `azure-search-documents-ts` | Build search applications using Azure AI Search SDK for JavaScript (@azure/search-documents). Use when creating/managing indexes, implementing vector/hybrid ... | azure, search, documents, ts | azure, search, documents, ts, applications, ai, sdk, javascript, creating, managing, indexes, implementing | -| `azure-storage-file-datalake-py` | Azure Data Lake Storage Gen2 SDK for Python. Use for hierarchical file systems, big data analytics, and file/directory operations. | azure, storage, file, datalake, py | azure, storage, file, datalake, py, data, lake, gen2, sdk, python, hierarchical, big | | `beautiful-prose` | Hard-edged writing style contract for timeless, forceful English prose without AI tics | beautiful, prose | beautiful, prose, hard, edged, writing, style, contract, timeless, forceful, english, without, ai | | `behavioral-modes` | AI operational modes (brainstorm, implement, debug, review, teach, ship, orchestrate). Use to adapt behavior based on task type. | behavioral, modes | behavioral, modes, ai, operational, brainstorm, debug, review, teach, ship, orchestrate, adapt, behavior | | `blockrun` | Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models (\"blockrun\", \"use grok\"... | blockrun | blockrun, user, capabilities, claude, lacks, image, generation, real, time, twitter, data, explicitly | | `browser-automation` | Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to underst... | browser | browser, automation, powers, web, testing, scraping, ai, agent, interactions, difference, between, flaky | -| `business-analyst` | Master modern business analysis with AI-powered analytics, real-time dashboards, and data-driven insights. Build comprehensive KPI frameworks, predictive mod... | business, analyst | business, analyst, analysis, ai, powered, analytics, real, time, dashboards, data, driven, insights | | `cc-skill-backend-patterns` | Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes. | cc, skill, backend | cc, skill, backend, architecture, api, database, optimization, server, side, node, js, express | | `cc-skill-clickhouse-io` | ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads. | cc, skill, clickhouse, io | cc, skill, clickhouse, io, database, query, optimization, analytics, data, engineering, high, performance | | `clarity-gate` | Pre-ingestion verification for epistemic quality in RAG systems with 9-point verification and Two-Round HITL workflow | clarity, gate | clarity, gate, pre, ingestion, verification, epistemic, quality, rag, point, two, round, hitl | @@ -213,20 +192,19 @@ calculations | startup, business, analyst, market, opportunity | startup, busine | `code-reviewer` | Elite code review expert specializing in modern AI-powered code | code | code, reviewer, elite, review, specializing, ai, powered | | `codex-review` | Professional code review with auto CHANGELOG generation, integrated with Codex AI | codex | codex, review, professional, code, auto, changelog, generation, integrated, ai | | `computer-use-agents` | Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer... | computer, use, agents | computer, use, agents, ai, interact, computers, like, humans, do, viewing, screens, moving | -| `content-marketer` | Elite content marketing strategist specializing in AI-powered content creation, omnichannel distribution, SEO optimization, and data-driven performance marke... | content, marketer | content, marketer, elite, marketing, strategist, specializing, ai, powered, creation, omnichannel, distribution, seo | -| `context-manager` | Elite AI context engineering specialist mastering dynamic context management, vector databases, knowledge graphs, and intelligent memory systems. | manager | manager, context, elite, ai, engineering, mastering, dynamic, vector, databases, knowledge, graphs, intelligent | | `context-window-management` | Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot Use when: context window, token limit, conte... | window | window, context, managing, llm, windows, including, summarization, trimming, routing, avoiding, rot, token | | `conversation-memory` | Persistent memory systems for LLM conversations including short-term, long-term, and entity-based memory Use when: conversation memory, remember, memory pers... | conversation, memory | conversation, memory, persistent, llm, conversations, including, short, term, long, entity, remember, persistence | -| `customer-support` | Elite AI-powered customer support specialist mastering conversational AI, automated ticketing, sentiment analysis, and omnichannel support experiences. | customer, support | customer, support, elite, ai, powered, mastering, conversational, automated, ticketing, sentiment, analysis, omnichannel | +| `data-engineer` | | data | data, engineer | | `data-engineering-data-driven-feature` | Build features guided by data insights, A/B testing, and continuous measurement using specialized agents for analysis, implementation, and experimentation. | data, engineering, driven | data, engineering, driven, feature, features, guided, insights, testing, continuous, measurement, specialized, agents | | `data-quality-frameworks` | Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation r... | data, quality, frameworks | data, quality, frameworks, validation, great, expectations, dbt, tests, contracts, building, pipelines, implementing | -| `data-scientist` | Expert data scientist for advanced analytics, machine learning, and statistical modeling. Handles complex data analysis, predictive modeling, and business in... | data, scientist | data, scientist, analytics, machine, learning, statistical, modeling, complex, analysis, predictive, business, intelligence | +| `data-scientist` | | data, scientist | data, scientist | | `data-storytelling` | Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating dat... | data, storytelling | data, storytelling, transform, compelling, narratives, visualization, context, persuasive, structure, presenting, analytics, stakeholders | | `data-structure-protocol` | Give agents persistent structural memory of a codebase — navigate dependencies, track public APIs, and understand why connections exist without re-reading th... | data, structure, protocol | data, structure, protocol, give, agents, persistent, structural, memory, codebase, navigate, dependencies, track | | `database` | Database development and operations workflow covering SQL, NoSQL, database design, migrations, optimization, and data engineering. | database | database, development, operations, covering, sql, nosql, migrations, optimization, data, engineering | -| `database-architect` | Expert database architect specializing in data layer design from scratch, technology selection, schema modeling, and scalable database architectures. | database | database, architect, specializing, data, layer, scratch, technology, selection, schema, modeling, scalable, architectures | +| `database-admin` | | database, admin | database, admin | +| `database-architect` | | database | database, architect | | `database-design` | Database design principles and decision-making. Schema design, indexing strategy, ORM selection, serverless databases. | database | database, principles, decision, making, schema, indexing, orm, selection, serverless, databases | -| `database-optimizer` | Expert database optimizer specializing in modern performance tuning, query optimization, and scalable architectures. | database, optimizer | database, optimizer, specializing, performance, tuning, query, optimization, scalable, architectures | +| `database-optimizer` | | database, optimizer | database, optimizer | | `dbt-transformation-patterns` | Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data tr... | dbt, transformation | dbt, transformation, data, analytics, engineering, model, organization, testing, documentation, incremental, building, transformations | | `documentation-generation-doc-generate` | You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user g... | documentation, generation, doc, generate | documentation, generation, doc, generate, specializing, creating, maintainable, code, api, docs, architecture, diagrams | | `documentation-templates` | Documentation templates and structure guidelines. README, API docs, code comments, and AI-friendly documentation. | documentation | documentation, structure, guidelines, readme, api, docs, code, comments, ai, friendly | @@ -243,11 +221,8 @@ calculations | startup, business, analyst, market, opportunity | startup, busine | `google-analytics-automation` | Automate Google Analytics tasks via Rube MCP (Composio): run reports, list accounts/properties, funnels, pivots, key events. Always search tools first for cu... | google, analytics | google, analytics, automation, automate, tasks, via, rube, mcp, composio, run, reports, list | | `googlesheets-automation` | Automate Google Sheets operations (read, write, format, filter, manage spreadsheets) via Rube MCP (Composio). Read/write data, manage tabs, apply formatting,... | googlesheets | googlesheets, automation, automate, google, sheets, operations, read, write, format, filter, spreadsheets, via | | `graphql` | GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful al... | graphql | graphql, gives, clients, exactly, data, no, less, one, endpoint, typed, schema, introspection | -| `hig-technologies` | Apple HIG guidance for Apple technology integrations: Siri, Apple Pay, HealthKit, HomeKit, ARKit, machine learning, generative AI, iCloud, Sign in with Apple... | hig, technologies | hig, technologies, apple, guidance, technology, integrations, siri, pay, healthkit, homekit, arkit, machine | | `hosted-agents-v2-py` | Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. Use when creating container-based agents in Azure AI Foundry. | hosted, agents, v2, py | hosted, agents, v2, py, azure, ai, sdk, imagebasedhostedagentdefinition, creating, container, foundry | | `hybrid-search-implementation` | Combine vector and keyword search for improved retrieval. Use when implementing RAG systems, building search engines, or when neither approach alone provides... | hybrid, search | hybrid, search, combine, vector, keyword, improved, retrieval, implementing, rag, building, engines, neither | -| `imagen` | AI image generation skill powered by Google Gemini, enabling seamless visual content creation for UI placeholders, documentation, and design assets. | imagen | imagen, ai, image, generation, skill, powered, google, gemini, enabling, seamless, visual, content | -| `ios-developer` | Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization. | ios | ios, developer, develop, native, applications, swift, swiftui, masters, 18, uikit, integration, core | | `langchain-architecture` | Design LLM applications using the LangChain framework with agents, memory, and tool integration patterns. Use when building LangChain applications, implement... | langchain, architecture | langchain, architecture, llm, applications, framework, agents, memory, integration, building, implementing, ai, creating | | `langgraph` | Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. Covers graph construction, state management, cycles ... | langgraph | langgraph, grade, framework, building, stateful, multi, actor, ai, applications, covers, graph, construction | | `libreoffice/base` | Database management, forms, reports, and data operations with LibreOffice Base. | libreoffice/base | libreoffice/base, base, database, forms, reports, data, operations, libreoffice | @@ -258,29 +233,23 @@ calculations | startup, business, analyst, market, opportunity | startup, busine | `llm-application-dev-prompt-optimize` | You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thoug... | llm, application, dev, prompt, optimize | llm, application, dev, prompt, optimize, engineer, specializing, crafting, effective, prompts, llms, through | | `llm-evaluation` | Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performanc... | llm, evaluation | llm, evaluation, applications, automated, metrics, human, feedback, benchmarking, testing, performance, measuring, ai | | `mailchimp-automation` | Automate Mailchimp email marketing including campaigns, audiences, subscribers, segments, and analytics via Rube MCP (Composio). Always search tools first fo... | mailchimp | mailchimp, automation, automate, email, marketing, including, campaigns, audiences, subscribers, segments, analytics, via | -| `mlops-engineer` | Build comprehensive ML pipelines, experiment tracking, and model registries with MLflow, Kubeflow, and modern MLOps tools. | mlops | mlops, engineer, ml, pipelines, experiment, tracking, model, registries, mlflow, kubeflow | +| `ml-engineer` | | ml | ml, engineer | | `nanobanana-ppt-skills` | AI-powered PPT generation with document analysis and styled images | nanobanana, ppt, skills | nanobanana, ppt, skills, ai, powered, generation, document, analysis, styled, images | | `neon-postgres` | Expert patterns for Neon serverless Postgres, branching, connection pooling, and Prisma/Drizzle integration Use when: neon database, serverless postgres, dat... | neon, postgres | neon, postgres, serverless, branching, connection, pooling, prisma, drizzle, integration, database | | `nextjs-app-router-patterns` | Master Next.js 14+ App Router with Server Components, streaming, parallel routes, and advanced data fetching. Use when building Next.js applications, impleme... | nextjs, app, router | nextjs, app, router, next, js, 14, server, components, streaming, parallel, routes, data | | `nextjs-best-practices` | Next.js App Router principles. Server Components, data fetching, routing patterns. | nextjs, best, practices | nextjs, best, practices, next, js, app, router, principles, server, components, data, fetching | | `nodejs-backend-patterns` | Build production-ready Node.js backend services with Express/Fastify, implementing middleware patterns, error handling, authentication, database integration,... | nodejs, backend | nodejs, backend, node, js, express, fastify, implementing, middleware, error, handling, authentication, database | -| `php-pro` | Write idiomatic PHP code with generators, iterators, SPL data -structures, and modern OOP features. Use PROACTIVELY for high-performance PHP -applications. | php | php, pro, write, idiomatic, code, generators, iterators, spl, data, structures, oop, features | | `podcast-generation` | Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, aud... | podcast, generation | podcast, generation, generate, ai, powered, style, audio, narratives, azure, openai, gpt, realtime | | `postgres-best-practices` | Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, o... | postgres, best, practices | postgres, best, practices, performance, optimization, supabase, skill, writing, reviewing, optimizing, queries, schema | | `postgresql` | Design a PostgreSQL-specific schema. Covers best-practices, data types, indexing, constraints, performance patterns, and advanced features | postgresql | postgresql, specific, schema, covers, data, types, indexing, constraints, performance, features | | `postgresql-optimization` | PostgreSQL database optimization workflow for query tuning, indexing strategies, performance analysis, and production database management. | postgresql, optimization | postgresql, optimization, database, query, tuning, indexing, performance, analysis | | `prisma-expert` | Prisma ORM expert for schema design, migrations, query optimization, relations modeling, and database operations. Use PROACTIVELY for Prisma schema issues, m... | prisma | prisma, orm, schema, migrations, query, optimization, relations, modeling, database, operations, proactively, issues | -| `programmatic-seo` | Design and evaluate programmatic SEO strategies for creating SEO-driven pages at scale using templates and structured data. | programmatic, seo | programmatic, seo, evaluate, creating, driven, pages, scale, structured, data | | `prompt-caching` | Caching strategies for LLM prompts including Anthropic prompt caching, response caching, and CAG (Cache Augmented Generation) Use when: prompt caching, cache... | prompt, caching | prompt, caching, llm, prompts, including, anthropic, response, cag, cache, augmented, generation, augm | | `prompt-engineering-patterns` | Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, impro... | prompt, engineering | prompt, engineering, techniques, maximize, llm, performance, reliability, controllability, optimizing, prompts, improving, outputs | | `pydantic-models-py` | Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. Use when defining API request/response schem... | pydantic, models, py | pydantic, models, py, following, multi, model, base, update, response, indb, variants, defining | | `rag-engineer` | Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LL... | rag | rag, engineer, building, retrieval, augmented, generation, masters, embedding, models, vector, databases, chunking | | `rag-implementation` | RAG (Retrieval-Augmented Generation) implementation workflow covering embedding selection, vector database setup, chunking strategies, and retrieval optimiza... | rag | rag, retrieval, augmented, generation, covering, embedding, selection, vector, database, setup, chunking, optimization | | `react-ui-patterns` | Modern React UI patterns for loading states, error handling, and data fetching. Use when building UI components, handling async data, or managing UI states. | react, ui | react, ui, loading, states, error, handling, data, fetching, building, components, async, managing | -| `scala-pro` | Master enterprise-grade Scala development with functional programming, distributed systems, and big data processing. Expert in Apache Pekko, Akka, Spark, ZIO... | scala | scala, pro, enterprise, grade, development, functional, programming, distributed, big, data, processing, apache | -| `schema-markup` | Design, validate, and optimize schema.org structured data for eligibility, correctness, and measurable SEO impact. | schema, markup | schema, markup, validate, optimize, org, structured, data, eligibility, correctness, measurable, seo, impact | | `segment-cdp` | Expert patterns for Segment Customer Data Platform including Analytics.js, server-side tracking, tracking plans with Protocols, identity resolution, destinat... | segment, cdp | segment, cdp, customer, data, platform, including, analytics, js, server, side, tracking, plans | | `sendgrid-automation` | Automate SendGrid email operations including sending emails, managing contacts/lists, sender identities, templates, and analytics via Rube MCP (Composio). Al... | sendgrid | sendgrid, automation, automate, email, operations, including, sending, emails, managing, contacts, lists, sender | | `senior-architect` | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, F... | senior | senior, architect, software, architecture, skill, designing, scalable, maintainable, reactjs, nextjs, nodejs, express | @@ -290,6 +259,7 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `spark-optimization` | Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or... | spark, optimization | spark, optimization, optimize, apache, jobs, partitioning, caching, shuffle, memory, tuning, improving, performance | | `sql-injection-testing` | This skill should be used when the user asks to "test for SQL injection vulnerabilities", "perform SQLi attacks", "bypass authentication using SQL injection"... | sql, injection | sql, injection, testing, skill, should, used, user, asks, test, vulnerabilities, perform, sqli | | `sql-optimization-patterns` | Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when de... | sql, optimization | sql, optimization, query, indexing, explain, analysis, dramatically, improve, database, performance, eliminate, slow | +| `sql-pro` | | sql | sql, pro | | `sqlmap-database-pentesting` | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap,... | sqlmap, database, pentesting | sqlmap, database, pentesting, skill, should, used, user, asks, automate, sql, injection, testing | | `stitch-ui-design` | Expert guide for creating effective prompts for Google Stitch AI UI design tool. Use when user wants to design UI/UX in Stitch, create app interfaces, genera... | stitch, ui | stitch, ui, creating, effective, prompts, google, ai, user, wants, ux, app, interfaces | | `supabase-automation` | Automate Supabase database queries, table management, project administration, storage, edge functions, and SQL execution via Rube MCP (Composio). Always sear... | supabase | supabase, automation, automate, database, queries, table, administration, storage, edge, functions, sql, execution | @@ -310,7 +280,7 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `xlsx-official` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx, official | xlsx, official, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work | | `youtube-automation` | Automate YouTube tasks via Rube MCP (Composio): upload videos, manage playlists, search content, get analytics, and handle comments. Always search tools firs... | youtube | youtube, automation, automate, tasks, via, rube, mcp, composio, upload, videos, playlists, search | -## development (145) +## development (150) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -321,53 +291,54 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `api-design-principles` | Master REST and GraphQL API design principles to build intuitive, scalable, and maintainable APIs that delight developers. Use when designing new APIs, revie... | api, principles | api, principles, rest, graphql, intuitive, scalable, maintainable, apis, delight, developers, designing, new | | `api-documentation` | API documentation workflow for generating OpenAPI specs, creating developer guides, and maintaining comprehensive API documentation. | api, documentation | api, documentation, generating, openapi, specs, creating, developer, guides, maintaining | | `api-documentation-generator` | Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices | api, documentation, generator | api, documentation, generator, generate, developer, friendly, code, including, endpoints, parameters, examples | +| `api-documenter` | | api, documenter | api, documenter | | `api-patterns` | API design principles and decision-making. REST vs GraphQL vs tRPC selection, response formats, versioning, pagination. | api | api, principles, decision, making, rest, vs, graphql, trpc, selection, response, formats, versioning | | `app-store-optimization` | Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store | app, store, optimization | app, store, optimization, complete, aso, toolkit, researching, optimizing, tracking, mobile, performance, apple | | `architecture-patterns` | Implement proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design. Use when architecting complex ... | architecture | architecture, proven, backend, including, clean, hexagonal, domain, driven, architecting, complex, refactoring, existing | | `async-python-patterns` | Master Python asyncio, concurrent programming, and async/await patterns for high-performance applications. Use when building async APIs, concurrent systems, ... | async, python | async, python, asyncio, concurrent, programming, await, high, performance, applications, building, apis, bound | -| `azure-appconfiguration-java` | Azure App Configuration SDK for Java. Centralized application configuration management with key-value settings, feature flags, and snapshots. | azure, appconfiguration, java | azure, appconfiguration, java, app, configuration, sdk, centralized, application, key, value, settings, feature | -| `azure-appconfiguration-py` | Azure App Configuration SDK for Python. Use for centralized configuration management, feature flags, and dynamic settings. | azure, appconfiguration, py | azure, appconfiguration, py, app, configuration, sdk, python, centralized, feature, flags, dynamic, settings | +| `azure-appconfiguration-java` | | azure, appconfiguration, java | azure, appconfiguration, java | | `azure-appconfiguration-ts` | Build applications using Azure App Configuration SDK for JavaScript (@azure/app-configuration). Use when working with configuration settings, feature flags, ... | azure, appconfiguration, ts | azure, appconfiguration, ts, applications, app, configuration, sdk, javascript, working, settings, feature, flags | | `azure-communication-callingserver-java` | Azure Communication Services CallingServer (legacy) Java SDK. Note - This SDK is deprecated. Use azure-communication-callautomation instead for new projects.... | azure, communication, callingserver, java | azure, communication, callingserver, java, legacy, sdk, note, deprecated, callautomation, instead, new, skill | | `azure-communication-chat-java` | Build real-time chat applications with Azure Communication Services Chat Java SDK. Use when implementing chat threads, messaging, participants, read receipts... | azure, communication, chat, java | azure, communication, chat, java, real, time, applications, sdk, implementing, threads, messaging, participants | | `azure-communication-common-java` | Azure Communication Services common utilities for Java. Use when working with CommunicationTokenCredential, user identifiers, token refresh, or shared authen... | azure, communication, common, java | azure, communication, common, java, utilities, working, communicationtokencredential, user, identifiers, token, refresh, shared | | `azure-communication-sms-java` | Send SMS messages with Azure Communication Services SMS Java SDK. Use when implementing SMS notifications, alerts, OTP delivery, bulk messaging, or delivery ... | azure, communication, sms, java | azure, communication, sms, java, send, messages, sdk, implementing, notifications, alerts, otp, delivery | -| `azure-compute-batch-java` | Azure Batch SDK for Java. Run large-scale parallel and HPC batch jobs with pools, jobs, tasks, and compute nodes. | azure, compute, batch, java | azure, compute, batch, java, sdk, run, large, scale, parallel, hpc, jobs, pools | -| `azure-containerregistry-py` | Azure Container Registry SDK for Python. Use for managing container images, artifacts, and repositories. | azure, containerregistry, py | azure, containerregistry, py, container, registry, sdk, python, managing, images, artifacts, repositories | -| `azure-eventgrid-dotnet` | Azure Event Grid SDK for .NET. Client library for publishing and consuming events with Azure Event Grid. Use for event-driven architectures, pub/sub messagin... | azure, eventgrid, dotnet | azure, eventgrid, dotnet, event, grid, sdk, net, client, library, publishing, consuming, events | +| `azure-compute-batch-java` | | azure, compute, batch, java | azure, compute, batch, java | +| `azure-cosmos-java` | | azure, cosmos, java | azure, cosmos, java | +| `azure-cosmos-rust` | | azure, cosmos, rust | azure, cosmos, rust | +| `azure-eventgrid-dotnet` | | azure, eventgrid, dotnet | azure, eventgrid, dotnet | | `azure-eventgrid-java` | Build event-driven applications with Azure Event Grid SDK for Java. Use when publishing events, implementing pub/sub patterns, or integrating with Azure serv... | azure, eventgrid, java | azure, eventgrid, java, event, driven, applications, grid, sdk, publishing, events, implementing, pub | -| `azure-eventgrid-py` | Azure Event Grid SDK for Python. Use for publishing events, handling CloudEvents, and event-driven architectures. | azure, eventgrid, py | azure, eventgrid, py, event, grid, sdk, python, publishing, events, handling, cloudevents, driven | -| `azure-eventhub-dotnet` | Azure Event Hubs SDK for .NET. | azure, eventhub, dotnet | azure, eventhub, dotnet, event, hubs, sdk, net | -| `azure-eventhub-py` | Azure Event Hubs SDK for Python streaming. Use for high-throughput event ingestion, producers, consumers, and checkpointing. | azure, eventhub, py | azure, eventhub, py, event, hubs, sdk, python, streaming, high, throughput, ingestion, producers | +| `azure-eventhub-dotnet` | | azure, eventhub, dotnet | azure, eventhub, dotnet | +| `azure-eventhub-rust` | | azure, eventhub, rust | azure, eventhub, rust | | `azure-functions` | Expert patterns for Azure Functions development including isolated worker model, Durable Functions orchestration, cold start optimization, and production pat... | azure, functions | azure, functions, development, including, isolated, worker, model, durable, orchestration, cold, start, optimization | -| `azure-identity-rust` | Azure Identity SDK for Rust authentication. Use for DeveloperToolsCredential, ManagedIdentityCredential, ClientSecretCredential, and token-based authentication. | azure, identity, rust | azure, identity, rust, sdk, authentication, developertoolscredential, managedidentitycredential, clientsecretcredential, token | -| `azure-keyvault-certificates-rust` | Azure Key Vault Certificates SDK for Rust. Use for creating, importing, and managing certificates. | azure, keyvault, certificates, rust | azure, keyvault, certificates, rust, key, vault, sdk, creating, importing, managing | -| `azure-keyvault-keys-rust` | Azure Key Vault Keys SDK for Rust. Use for creating, managing, and using cryptographic keys. Triggers: "keyvault keys rust", "KeyClient rust", "create key ru... | azure, keyvault, keys, rust | azure, keyvault, keys, rust, key, vault, sdk, creating, managing, cryptographic, triggers, keyclient | +| `azure-identity-dotnet` | | azure, identity, dotnet | azure, identity, dotnet | +| `azure-identity-rust` | | azure, identity, rust | azure, identity, rust | +| `azure-keyvault-certificates-rust` | | azure, keyvault, certificates, rust | azure, keyvault, certificates, rust | +| `azure-keyvault-keys-rust` | | azure, keyvault, keys, rust | azure, keyvault, keys, rust | | `azure-keyvault-keys-ts` | Manage cryptographic keys using Azure Key Vault Keys SDK for JavaScript (@azure/keyvault-keys). Use when creating, encrypting/decrypting, signing, or rotatin... | azure, keyvault, keys, ts | azure, keyvault, keys, ts, cryptographic, key, vault, sdk, javascript, creating, encrypting, decrypting | +| `azure-maps-search-dotnet` | | azure, maps, search, dotnet | azure, maps, search, dotnet | | `azure-messaging-webpubsub-java` | Build real-time web applications with Azure Web PubSub SDK for Java. Use when implementing WebSocket-based messaging, live updates, chat applications, or ser... | azure, messaging, webpubsub, java | azure, messaging, webpubsub, java, real, time, web, applications, pubsub, sdk, implementing, websocket | -| `azure-mgmt-apicenter-dotnet` | Azure API Center SDK for .NET. Centralized API inventory management with governance, versioning, and discovery. | azure, mgmt, apicenter, dotnet | azure, mgmt, apicenter, dotnet, api, center, sdk, net, centralized, inventory, governance, versioning | -| `azure-mgmt-apicenter-py` | Azure API Center Management SDK for Python. Use for managing API inventory, metadata, and governance across your organization. | azure, mgmt, apicenter, py | azure, mgmt, apicenter, py, api, center, sdk, python, managing, inventory, metadata, governance | -| `azure-mgmt-apimanagement-dotnet` | Azure Resource Manager SDK for API Management in .NET. | azure, mgmt, apimanagement, dotnet | azure, mgmt, apimanagement, dotnet, resource, manager, sdk, api, net | -| `azure-mgmt-apimanagement-py` | Azure API Management SDK for Python. Use for managing APIM services, APIs, products, subscriptions, and policies. | azure, mgmt, apimanagement, py | azure, mgmt, apimanagement, py, api, sdk, python, managing, apim, apis, products, subscriptions | -| `azure-mgmt-fabric-dotnet` | Azure Resource Manager SDK for Fabric in .NET. | azure, mgmt, fabric, dotnet | azure, mgmt, fabric, dotnet, resource, manager, sdk, net | -| `azure-mgmt-fabric-py` | Azure Fabric Management SDK for Python. Use for managing Microsoft Fabric capacities and resources. | azure, mgmt, fabric, py | azure, mgmt, fabric, py, sdk, python, managing, microsoft, capacities, resources | +| `azure-mgmt-apicenter-dotnet` | | azure, mgmt, apicenter, dotnet | azure, mgmt, apicenter, dotnet | +| `azure-mgmt-apimanagement-dotnet` | | azure, mgmt, apimanagement, dotnet | azure, mgmt, apimanagement, dotnet | +| `azure-mgmt-applicationinsights-dotnet` | | azure, mgmt, applicationinsights, dotnet | azure, mgmt, applicationinsights, dotnet | +| `azure-mgmt-arizeaiobservabilityeval-dotnet` | | azure, mgmt, arizeaiobservabilityeval, dotnet | azure, mgmt, arizeaiobservabilityeval, dotnet | +| `azure-mgmt-botservice-dotnet` | | azure, mgmt, botservice, dotnet | azure, mgmt, botservice, dotnet | +| `azure-mgmt-fabric-dotnet` | | azure, mgmt, fabric, dotnet | azure, mgmt, fabric, dotnet | | `azure-mgmt-mongodbatlas-dotnet` | Manage MongoDB Atlas Organizations as Azure ARM resources using Azure.ResourceManager.MongoDBAtlas SDK. Use when creating, updating, listing, or deleting Mon... | azure, mgmt, mongodbatlas, dotnet | azure, mgmt, mongodbatlas, dotnet, mongodb, atlas, organizations, arm, resources, resourcemanager, sdk, creating | -| `azure-monitor-opentelemetry-exporter-java` | Azure Monitor OpenTelemetry Exporter for Java. Export OpenTelemetry traces, metrics, and logs to Azure Monitor/Application Insights. | azure, monitor, opentelemetry, exporter, java | azure, monitor, opentelemetry, exporter, java, export, traces, metrics, logs, application, insights | -| `azure-monitor-opentelemetry-exporter-py` | Azure Monitor OpenTelemetry Exporter for Python. Use for low-level OpenTelemetry export to Application Insights. | azure, monitor, opentelemetry, exporter, py | azure, monitor, opentelemetry, exporter, py, python, low, level, export, application, insights | -| `azure-monitor-opentelemetry-py` | Azure Monitor OpenTelemetry Distro for Python. Use for one-line Application Insights setup with auto-instrumentation. | azure, monitor, opentelemetry, py | azure, monitor, opentelemetry, py, distro, python, one, line, application, insights, setup, auto | -| `azure-resource-manager-durabletask-dotnet` | Azure Resource Manager SDK for Durable Task Scheduler in .NET. | azure, resource, manager, durabletask, dotnet | azure, resource, manager, durabletask, dotnet, sdk, durable, task, scheduler, net | -| `azure-resource-manager-playwright-dotnet` | Azure Resource Manager SDK for Microsoft Playwright Testing in .NET. | azure, resource, manager, playwright, dotnet | azure, resource, manager, playwright, dotnet, sdk, microsoft, testing, net | -| `azure-resource-manager-redis-dotnet` | Azure Resource Manager SDK for Redis in .NET. | azure, resource, manager, redis, dotnet | azure, resource, manager, redis, dotnet, sdk, net | -| `azure-speech-to-text-rest-py` | Azure Speech to Text REST API for short audio (Python). Use for simple speech recognition of audio files up to 60 seconds without the Speech SDK. | azure, speech, to, text, rest, py | azure, speech, to, text, rest, py, api, short, audio, python, simple, recognition | +| `azure-mgmt-weightsandbiases-dotnet` | | azure, mgmt, weightsandbiases, dotnet | azure, mgmt, weightsandbiases, dotnet | +| `azure-monitor-ingestion-java` | | azure, monitor, ingestion, java | azure, monitor, ingestion, java | +| `azure-monitor-opentelemetry-exporter-java` | | azure, monitor, opentelemetry, exporter, java | azure, monitor, opentelemetry, exporter, java | +| `azure-monitor-query-java` | | azure, monitor, query, java | azure, monitor, query, java | +| `azure-resource-manager-cosmosdb-dotnet` | | azure, resource, manager, cosmosdb, dotnet | azure, resource, manager, cosmosdb, dotnet | +| `azure-resource-manager-durabletask-dotnet` | | azure, resource, manager, durabletask, dotnet | azure, resource, manager, durabletask, dotnet | +| `azure-resource-manager-playwright-dotnet` | | azure, resource, manager, playwright, dotnet | azure, resource, manager, playwright, dotnet | +| `azure-resource-manager-postgresql-dotnet` | | azure, resource, manager, postgresql, dotnet | azure, resource, manager, postgresql, dotnet | +| `azure-resource-manager-redis-dotnet` | | azure, resource, manager, redis, dotnet | azure, resource, manager, redis, dotnet | +| `azure-search-documents-dotnet` | | azure, search, documents, dotnet | azure, search, documents, dotnet | +| `azure-servicebus-dotnet` | | azure, servicebus, dotnet | azure, servicebus, dotnet | | `azure-storage-blob-java` | Build blob storage applications with Azure Storage Blob SDK for Java. Use when uploading, downloading, or managing files in Azure Blob Storage, working with ... | azure, storage, blob, java | azure, storage, blob, java, applications, sdk, uploading, downloading, managing, files, working, containers | -| `azure-storage-blob-py` | Azure Blob Storage SDK for Python. Use for uploading, downloading, listing blobs, managing containers, and blob lifecycle. | azure, storage, blob, py | azure, storage, blob, py, sdk, python, uploading, downloading, listing, blobs, managing, containers | -| `azure-storage-blob-rust` | Azure Blob Storage SDK for Rust. Use for uploading, downloading, and managing blobs and containers. | azure, storage, blob, rust | azure, storage, blob, rust, sdk, uploading, downloading, managing, blobs, containers | -| `azure-storage-blob-ts` | Azure Blob Storage JavaScript/TypeScript SDK (@azure/storage-blob) for blob operations. Use for uploading, downloading, listing, and managing blobs and conta... | azure, storage, blob, ts | azure, storage, blob, ts, javascript, typescript, sdk, operations, uploading, downloading, listing, managing | -| `azure-storage-file-share-ts` | Azure File Share JavaScript/TypeScript SDK (@azure/storage-file-share) for SMB file share operations. | azure, storage, file, share, ts | azure, storage, file, share, ts, javascript, typescript, sdk, smb, operations | -| `azure-storage-queue-py` | Azure Queue Storage SDK for Python. Use for reliable message queuing, task distribution, and asynchronous processing. | azure, storage, queue, py | azure, storage, queue, py, sdk, python, reliable, message, queuing, task, distribution, asynchronous | -| `azure-storage-queue-ts` | Azure Queue Storage JavaScript/TypeScript SDK (@azure/storage-queue) for message queue operations. Use for sending, receiving, peeking, and deleting messages... | azure, storage, queue, ts | azure, storage, queue, ts, javascript, typescript, sdk, message, operations, sending, receiving, peeking | +| `azure-storage-blob-rust` | | azure, storage, blob, rust | azure, storage, blob, rust | | `azure-web-pubsub-ts` | Build real-time messaging applications using Azure Web PubSub SDKs for JavaScript (@azure/web-pubsub, @azure/web-pubsub-client). Use when implementing WebSoc... | azure, web, pubsub, ts | azure, web, pubsub, ts, real, time, messaging, applications, sdks, javascript, client, implementing | -| `backend-architect` | Expert backend architect specializing in scalable API design, microservices architecture, and distributed systems. | backend | backend, architect, specializing, scalable, api, microservices, architecture, distributed | +| `backend-architect` | | backend | backend, architect | | `backend-dev-guidelines` | Opinionated backend development standards for Node.js + Express + TypeScript microservices. Covers layered architecture, BaseController pattern, dependency i... | backend, dev, guidelines | backend, dev, guidelines, opinionated, development, standards, node, js, express, typescript, microservices, covers | | `bevy-ecs-expert` | Master Bevy's Entity Component System (ECS) in Rust, covering Systems, Queries, Resources, and parallel scheduling. | bevy, ecs | bevy, ecs, entity, component, rust, covering, queries, resources, parallel, scheduling | | `bullmq-specialist` | BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull que... | bullmq | bullmq, redis, backed, job, queues, background, processing, reliable, async, execution, node, js | @@ -376,24 +347,26 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `cc-skill-frontend-patterns` | Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices. | cc, skill, frontend | cc, skill, frontend, development, react, next, js, state, performance, optimization, ui | | `context7-auto-research` | Automatically fetch latest library/framework documentation for Claude Code via Context7 API | context7, auto, research | context7, auto, research, automatically, fetch, latest, library, framework, documentation, claude, code, via | | `copilot-sdk` | Build applications powered by GitHub Copilot using the Copilot SDK. Use when creating programmatic integrations with Copilot across Node.js/TypeScript, Pytho... | copilot, sdk | copilot, sdk, applications, powered, github, creating, programmatic, integrations, node, js, typescript, python | -| `csharp-pro` | Write modern C# code with advanced features like records, pattern matching, and async/await. Optimizes .NET applications, implements enterprise patterns, and... | csharp | csharp, pro, write, code, features, like, records, matching, async, await, optimizes, net | +| `csharp-pro` | | csharp | csharp, pro | | `dbos-golang` | DBOS Go SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing Go code with DBOS, creating workflows and ... | dbos, golang | dbos, golang, go, sdk, building, reliable, fault, tolerant, applications, durable, skill, writing | | `dbos-python` | DBOS Python SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing Python code with DBOS, creating workfl... | dbos, python | dbos, python, sdk, building, reliable, fault, tolerant, applications, durable, skill, writing, code | | `dbos-typescript` | DBOS TypeScript SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing TypeScript code with DBOS, creatin... | dbos, typescript | dbos, typescript, sdk, building, reliable, fault, tolerant, applications, durable, skill, writing, code | | `development` | Comprehensive web, mobile, and backend development workflow bundling frontend, backend, full-stack, and mobile development skills for end-to-end application ... | | development, web, mobile, backend, bundling, frontend, full, stack, skills, application, delivery | | `discord-bot-architect` | Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactiv... | discord, bot | discord, bot, architect, specialized, skill, building, bots, covers, js, javascript, pycord, python | +| `django-pro` | | django | django, pro | | `documentation` | Documentation generation workflow covering API docs, architecture docs, README files, code comments, and technical writing. | documentation | documentation, generation, covering, api, docs, architecture, readme, files, code, comments, technical, writing | -| `dotnet-architect` | Expert .NET backend architect specializing in C#, ASP.NET Core, Entity Framework, Dapper, and enterprise application patterns. | dotnet | dotnet, architect, net, backend, specializing, asp, core, entity, framework, dapper, enterprise, application | +| `dotnet-architect` | | dotnet | dotnet, architect | | `dotnet-backend-patterns` | Master C#/.NET backend development patterns for building robust APIs, MCP servers, and enterprise applications. Covers async/await, dependency injection, Ent... | dotnet, backend | dotnet, backend, net, development, building, robust, apis, mcp, servers, enterprise, applications, covers | | `exa-search` | Semantic search, similar content discovery, and structured research using Exa API | exa, search | exa, search, semantic, similar, content, discovery, structured, research, api | -| `fastapi-pro` | Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns. | fastapi | fastapi, pro, high, performance, async, apis, sqlalchemy, pydantic, v2, microservices, websockets, python | +| `fastapi-pro` | | fastapi | fastapi, pro | | `fastapi-router-py` | Create FastAPI routers with CRUD operations, authentication dependencies, and proper response models. Use when building REST API endpoints, creating new rout... | fastapi, router, py | fastapi, router, py, routers, crud, operations, authentication, dependencies, proper, response, models, building | | `fastapi-templates` | Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applicati... | fastapi | fastapi, async, dependency, injection, error, handling, building, new, applications, setting, up, backend | | `firecrawl-scraper` | Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API | firecrawl, scraper | firecrawl, scraper, deep, web, scraping, screenshots, pdf, parsing, website, crawling, api | +| `flutter-expert` | | flutter | flutter | | `fp-ts-errors` | Handle errors as values using fp-ts Either and TaskEither for cleaner, more predictable TypeScript code. Use when implementing error handling patterns with f... | fp, ts, errors | fp, ts, errors, handle, values, either, taskeither, cleaner, predictable, typescript, code, implementing | | `fp-ts-pragmatic` | A practical, jargon-free guide to fp-ts functional programming - the 80/20 approach that gets results without the academic overhead. Use when writing TypeScr... | fp, ts, pragmatic | fp, ts, pragmatic, practical, jargon, free, functional, programming, 80, 20, approach, gets | | `frontend-design` | Create distinctive, production-grade frontend interfaces with intentional aesthetics, high craft, and non-generic visual identity. Use when building or styli... | frontend | frontend, distinctive, grade, interfaces, intentional, aesthetics, high, craft, non, generic, visual, identity | -| `frontend-developer` | Build React components, implement responsive layouts, and handle client-side state management. Masters React 19, Next.js 15, and modern frontend architecture. | frontend | frontend, developer, react, components, responsive, layouts, handle, client, side, state, masters, 19 | +| `frontend-developer` | | frontend | frontend, developer | | `frontend-mobile-development-component-scaffold` | You are a React component architecture expert specializing in scaffolding production-ready, accessible, and performant components. Generate complete componen... | frontend, mobile, component | frontend, mobile, component, development, scaffold, react, architecture, specializing, scaffolding, accessible, performant, components | | `frontend-slides` | Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a... | frontend, slides | frontend, slides, stunning, animation, rich, html, presentations, scratch, converting, powerpoint, files, user | | `game-development/mobile-games` | Mobile game development principles. Touch input, battery, performance, app stores. | game, development/mobile, games | game, development/mobile, games, mobile, development, principles, touch, input, battery, performance, app, stores | @@ -401,31 +374,34 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `go-concurrency-patterns` | Master Go concurrency with goroutines, channels, sync primitives, and context. Use when building concurrent Go applications, implementing worker pools, or de... | go, concurrency | go, concurrency, goroutines, channels, sync, primitives, context, building, concurrent, applications, implementing, worker | | `go-playwright` | Expert capability for robust, stealthy, and efficient browser automation using Playwright Go. | go, playwright | go, playwright, capability, robust, stealthy, efficient, browser, automation | | `go-rod-master` | Comprehensive guide for browser automation and web scraping with go-rod (Chrome DevTools Protocol) including stealth anti-bot-detection patterns. | go, rod, master | go, rod, master, browser, automation, web, scraping, chrome, devtools, protocol, including, stealth | -| `golang-pro` | Master Go 1.21+ with modern patterns, advanced concurrency, performance optimization, and production-ready microservices. | golang | golang, pro, go, 21, concurrency, performance, optimization, microservices | +| `golang-pro` | | golang | golang, pro | | `hubspot-integration` | Expert patterns for HubSpot CRM integration including OAuth authentication, CRM objects, associations, batch operations, webhooks, and custom objects. Covers... | hubspot, integration | hubspot, integration, crm, including, oauth, authentication, objects, associations, batch, operations, webhooks, custom | +| `ios-developer` | | ios | ios, developer | +| `java-pro` | | java | java, pro | | `javascript-mastery` | Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced pa... | javascript, mastery | javascript, mastery, reference, covering, 33, essential, concepts, every, developer, should, know, fundamentals | -| `javascript-pro` | Master modern JavaScript with ES6+, async patterns, and Node.js APIs. Handles promises, event loops, and browser/Node compatibility. | javascript | javascript, pro, es6, async, node, js, apis, promises, event, loops, browser, compatibility | +| `javascript-pro` | | javascript | javascript, pro | | `javascript-testing-patterns` | Implement comprehensive testing strategies using Jest, Vitest, and Testing Library for unit tests, integration tests, and end-to-end testing with mocking, fi... | javascript | javascript, testing, jest, vitest, library, unit, tests, integration, mocking, fixtures, test, driven | | `javascript-typescript-typescript-scaffold` | You are a TypeScript project architecture expert specializing in scaffolding production-ready Node.js and frontend applications. Generate complete project st... | javascript, typescript | javascript, typescript, scaffold, architecture, specializing, scaffolding, node, js, frontend, applications, generate, complete | | `launch-strategy` | When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature r... | launch | launch, user, wants, plan, product, feature, announcement, release, mentions, hunt, go, market | -| `m365-agents-ts` | Microsoft 365 Agents SDK for TypeScript/Node.js. | m365, agents, ts | m365, agents, ts, microsoft, 365, sdk, typescript, node, js | +| `m365-agents-dotnet` | | m365, agents, dotnet | m365, agents, dotnet | | `makepad-skills` | Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting. | makepad, skills | makepad, skills, ui, development, rust, apps, setup, shaders, packaging, troubleshooting | | `memory-safety-patterns` | Implement memory-safe programming with RAII, ownership, smart pointers, and resource management across Rust, C++, and C. Use when writing safe systems code, ... | memory, safety | memory, safety, safe, programming, raii, ownership, smart, pointers, resource, rust, writing, code | -| `microsoft-azure-webjobs-extensions-authentication-events-dotnet` | Microsoft Entra Authentication Events SDK for .NET. Azure Functions triggers for custom authentication extensions. | microsoft, azure, webjobs, extensions, authentication, events, dotnet | microsoft, azure, webjobs, extensions, authentication, events, dotnet, entra, sdk, net, functions, triggers | +| `microsoft-azure-webjobs-extensions-authentication-events-dotnet` | | microsoft, azure, webjobs, extensions, authentication, events, dotnet | microsoft, azure, webjobs, extensions, authentication, events, dotnet | | `mobile-design` | Mobile-first design and engineering doctrine for iOS and Android apps. Covers touch interaction, performance, platform conventions, offline behavior, and mob... | mobile | mobile, first, engineering, doctrine, ios, android, apps, covers, touch, interaction, performance, platform | -| `mobile-developer` | Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync... | mobile | mobile, developer, develop, react, native, flutter, apps, architecture, masters, cross, platform, development | +| `mobile-developer` | | mobile | mobile, developer | | `modern-javascript-patterns` | Master ES6+ features including async/await, destructuring, spread operators, arrow functions, promises, modules, iterators, generators, and functional progra... | modern, javascript | modern, javascript, es6, features, including, async, await, destructuring, spread, operators, arrow, functions | | `multi-platform-apps-multi-platform` | Build and deploy the same feature consistently across web, mobile, and desktop platforms using API-first architecture and parallel implementation strategies. | multi, platform, apps | multi, platform, apps, deploy, same, feature, consistently, web, mobile, desktop, platforms, api | | `n8n-code-python` | Write Python code in n8n Code nodes. Use when writing Python in n8n, using _input/_json/_node syntax, working with standard library, or need to understand Py... | n8n, code, python | n8n, code, python, write, nodes, writing, input, json, node, syntax, working, standard | | `n8n-node-configuration` | Operation-aware node configuration guidance. Use when configuring nodes, understanding property dependencies, determining required fields, choosing between g... | n8n, node, configuration | n8n, node, configuration, operation, aware, guidance, configuring, nodes, understanding, property, dependencies, determining | | `observe-whatsapp` | Observe and troubleshoot WhatsApp in Kapso: debug message delivery, inspect webhook deliveries/retries, triage API errors, and run health checks. Use when in... | observe, whatsapp | observe, whatsapp, troubleshoot, kapso, debug, message, delivery, inspect, webhook, deliveries, retries, triage | +| `php-pro` | | php | php, pro | | `product-manager-toolkit` | Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market ... | product, manager | product, manager, toolkit, managers, including, rice, prioritization, customer, interview, analysis, prd, discovery | | `python-development-python-scaffold` | You are a Python project architecture expert specializing in scaffolding production-ready Python applications. Generate complete project structures with mode... | python | python, development, scaffold, architecture, specializing, scaffolding, applications, generate, complete, structures, tooling, uv | | `python-fastapi-development` | Python FastAPI backend development with async patterns, SQLAlchemy, Pydantic, authentication, and production API patterns. | python, fastapi | python, fastapi, development, backend, async, sqlalchemy, pydantic, authentication, api | | `python-packaging` | Create distributable Python packages with proper project structure, setup.py/pyproject.toml, and publishing to PyPI. Use when packaging Python libraries, cre... | python, packaging | python, packaging, distributable, packages, proper, structure, setup, py, pyproject, toml, publishing, pypi | | `python-patterns` | Python development principles and decision-making. Framework selection, async patterns, type hints, project structure. Teaches thinking, not copying. | python | python, development, principles, decision, making, framework, selection, async, type, hints, structure, teaches | | `python-performance-optimization` | Profile and optimize Python code using cProfile, memory profilers, and performance best practices. Use when debugging slow Python code, optimizing bottleneck... | python, performance, optimization | python, performance, optimization, profile, optimize, code, cprofile, memory, profilers, debugging, slow, optimizing | -| `python-pro` | Master Python 3.12+ with modern features, async programming, performance optimization, and production-ready practices. Expert in the latest Python ecosystem ... | python | python, pro, 12, features, async, programming, performance, optimization, latest, ecosystem, including, uv | +| `python-pro` | | python | python, pro | | `python-testing-patterns` | Implement comprehensive testing strategies with pytest, fixtures, mocking, and test-driven development. Use when writing Python tests, setting up test suites... | python | python, testing, pytest, fixtures, mocking, test, driven, development, writing, tests, setting, up | | `react-best-practices` | React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.j... | react, best, practices | react, best, practices, next, js, performance, optimization, guidelines, vercel, engineering, skill, should | | `react-flow-architect` | Expert ReactFlow architect for building interactive graph applications with hierarchical node-edge systems, performance optimization, and auto-layout integra... | react, flow | react, flow, architect, reactflow, building, interactive, graph, applications, hierarchical, node, edge, performance | @@ -435,60 +411,89 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `react-nextjs-development` | React and Next.js 14+ application development with App Router, Server Components, TypeScript, Tailwind CSS, and modern frontend patterns. | react, nextjs | react, nextjs, development, next, js, 14, application, app, router, server, components, typescript | | `react-patterns` | Modern React patterns and principles. Hooks, composition, performance, TypeScript best practices. | react | react, principles, hooks, composition, performance, typescript | | `react-state-management` | Master modern React state management with Redux Toolkit, Zustand, Jotai, and React Query. Use when setting up global state, managing server state, or choosin... | react, state | react, state, redux, toolkit, zustand, jotai, query, setting, up, global, managing, server | -| `reference-builder` | Creates exhaustive technical references and API documentation. Generates comprehensive parameter listings, configuration guides, and searchable reference mat... | reference, builder | reference, builder, creates, exhaustive, technical, references, api, documentation, generates, parameter, listings, configuration | | `remotion-best-practices` | Best practices for Remotion - Video creation in React | remotion, video, react, animation, composition | remotion, video, react, animation, composition, creation | -| `ruby-pro` | Write idiomatic Ruby code with metaprogramming, Rails patterns, and performance optimization. Specializes in Ruby on Rails, gem development, and testing fram... | ruby | ruby, pro, write, idiomatic, code, metaprogramming, rails, performance, optimization, specializes, gem, development | +| `ruby-pro` | | ruby | ruby, pro | | `rust-async-patterns` | Master Rust async programming with Tokio, async traits, error handling, and concurrent patterns. Use when building async Rust applications, implementing conc... | rust, async | rust, async, programming, tokio, traits, error, handling, concurrent, building, applications, implementing, debugging | -| `rust-pro` | Master Rust 1.75+ with modern async patterns, advanced type system features, and production-ready systems programming. | rust | rust, pro, 75, async, type, features, programming | +| `rust-pro` | | rust | rust, pro | | `senior-fullstack` | Comprehensive fullstack development skill for building complete web applications with React, Next.js, Node.js, GraphQL, and PostgreSQL. Includes project scaf... | senior, fullstack | senior, fullstack, development, skill, building, complete, web, applications, react, next, js, node | | `shopify-apps` | Expert patterns for Shopify app development including Remix/React Router apps, embedded apps with App Bridge, webhook handling, GraphQL Admin API, Polaris co... | shopify, apps | shopify, apps, app, development, including, remix, react, router, embedded, bridge, webhook, handling | -| `shopify-development` | Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid. | shopify | shopify, development, apps, extensions, themes, graphql, admin, api, cli, polaris, ui, liquid | | `slack-automation` | Automate Slack messaging, channel management, search, reactions, and threads via Rube MCP (Composio). Send messages, search conversations, manage channels/us... | slack | slack, automation, automate, messaging, channel, search, reactions, threads, via, rube, mcp, composio | | `slack-bot-builder` | Build Slack apps using the Bolt framework across Python, JavaScript, and Java. Covers Block Kit for rich UIs, interactive components, slash commands, event h... | slack, bot, builder | slack, bot, builder, apps, bolt, framework, python, javascript, java, covers, block, kit | | `swiftui-expert-skill` | Write, review, or improve SwiftUI code following best practices for state management, view composition, performance, modern APIs, Swift concurrency, and iOS ... | swiftui, skill | swiftui, skill, write, review, improve, code, following, state, view, composition, performance, apis | | `systems-programming-rust-project` | You are a Rust project architecture expert specializing in scaffolding production-ready Rust applications. Generate complete project structures with cargo to... | programming, rust | programming, rust, architecture, specializing, scaffolding, applications, generate, complete, structures, cargo, tooling, proper | | `tavily-web` | Web search, content extraction, crawling, and research capabilities using Tavily API | tavily, web | tavily, web, search, content, extraction, crawling, research, capabilities, api | | `telegram-mini-app` | Expert in building Telegram Mini Apps (TWA) - web apps that run inside Telegram with native-like experience. Covers the TON ecosystem, Telegram Web App API, ... | telegram, mini, app | telegram, mini, app, building, apps, twa, web, run, inside, native, like, experience | +| `temporal-python-pro` | | temporal, python | temporal, python, pro | | `temporal-python-testing` | Test Temporal workflows with pytest, time-skipping, and mocking strategies. Covers unit testing, integration testing, replay testing, and local development s... | temporal, python | temporal, python, testing, test, pytest, time, skipping, mocking, covers, unit, integration, replay | | `twilio-communications` | Build communication features with Twilio: SMS messaging, voice calls, WhatsApp Business API, and user verification (2FA). Covers the full spectrum from simpl... | twilio, communications | twilio, communications, communication, features, sms, messaging, voice, calls, whatsapp, business, api, user | | `typescript-advanced-types` | Master TypeScript's advanced type system including generics, conditional types, mapped types, template literals, and utility types for building type-safe app... | typescript, advanced, types | typescript, advanced, types, type, including, generics, conditional, mapped, literals, utility, building, safe | -| `typescript-expert` | TypeScript and JavaScript expert with deep knowledge of type-level programming, performance optimization, monorepo management, migration strategies, and mode... | typescript | typescript, javascript, deep, knowledge, type, level, programming, performance, optimization, monorepo, migration, tooling | -| `typescript-pro` | Master TypeScript with advanced types, generics, and strict type safety. Handles complex type systems, decorators, and enterprise-grade patterns. | typescript | typescript, pro, types, generics, strict, type, safety, complex, decorators, enterprise, grade | +| `typescript-expert` | | typescript | typescript | +| `typescript-pro` | | typescript | typescript, pro | | `ui-ux-pro-max` | UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 9 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwi... | ui, ux, max | ui, ux, max, pro, intelligence, 50, styles, 21, palettes, font, pairings, 20 | | `uv-package-manager` | Master the uv package manager for fast Python dependency management, virtual environments, and modern Python project workflows. Use when setting up Python pr... | uv, package, manager | uv, package, manager, fast, python, dependency, virtual, environments, setting, up, managing, dependencies | | `viral-generator-builder` | Expert in building shareable generator tools that go viral - name generators, quiz makers, avatar creators, personality tests, and calculator tools. Covers t... | viral, generator, builder | viral, generator, builder, building, shareable, go, name, generators, quiz, makers, avatar, creators | | `webapp-testing` | Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing... | webapp | webapp, testing, toolkit, interacting, local, web, applications, playwright, supports, verifying, frontend, functionality | | `zustand-store-ts` | Create Zustand stores with TypeScript, subscribeWithSelector middleware, and proper state/action separation. Use when building React state management, creati... | zustand, store, ts | zustand, store, ts, stores, typescript, subscribewithselector, middleware, proper, state, action, separation, building | -## general (187) +## general (243) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | | `00-andruia-consultant` | Arquitecto de Soluciones Principal y Consultor Tecnológico de Andru.ia. Diagnostica y traza la hoja de ruta óptima para proyectos de IA en español. | 00, andruia, consultant | 00, andruia, consultant, arquitecto, de, soluciones, principal, consultor, tecnol, gico, andru, ia | -| `10-andruia-skill-smith` | Ingeniero de Sistemas de Andru.ia. Diseña, redacta y despliega nuevas habilidades (skills) dentro del repositorio siguiendo el Estándar de Diamante. | 10, andruia, skill, smith | 10, andruia, skill, smith, ingeniero, de, sistemas, andru, ia, dise, redacta, despliega | | `20-andruia-niche-intelligence` | Estratega de Inteligencia de Dominio de Andru.ia. Analiza el nicho específico de un proyecto para inyectar conocimientos, regulaciones y estándares únicos de... | 20, andruia, niche, intelligence | 20, andruia, niche, intelligence, estratega, de, inteligencia, dominio, andru, ia, analiza, el | | `address-github-comments` | Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI. | address, github, comments | address, github, comments, review, issue, open, pull, request, gh, cli | | `agent-manager-skill` | Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling. | agent, manager, skill | agent, manager, skill, multiple, local, cli, agents, via, tmux, sessions, start, stop | | `algorithmic-art` | Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, gener... | algorithmic, art | algorithmic, art, creating, p5, js, seeded, randomness, interactive, parameter, exploration, users, request | +| `angular` | | angular | angular | | `angular-best-practices` | Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and... | angular, best, practices | angular, best, practices, performance, optimization, writing, reviewing, refactoring, code, optimal, bundle, size | | `angular-migration` | Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applicat... | angular, migration | angular, migration, migrate, angularjs, hybrid, mode, incremental, component, rewriting, dependency, injection, updates | | `anti-reversing-techniques` | Understand anti-reversing, obfuscation, and protection techniques encountered during software analysis. Use when analyzing protected binaries, bypassing anti... | anti, reversing, techniques | anti, reversing, techniques, understand, obfuscation, protection, encountered, during, software, analysis, analyzing, protected | +| `apify-lead-generation` | Generates B2B/B2C leads by scraping Google Maps, websites, Instagram, TikTok, Facebook, LinkedIn, YouTube, and Google Search. Use when user asks to find lead... | apify, lead, generation | apify, lead, generation, generates, b2b, b2c, leads, scraping, google, maps, websites, instagram | +| `apify-trend-analysis` | Discover and track emerging trends across Google Trends, Instagram, Facebook, YouTube, and TikTok to inform content strategy. | apify, trend | apify, trend, analysis, discover, track, emerging, trends, google, instagram, facebook, youtube, tiktok | | `app-builder` | Main application building orchestrator. Creates full-stack applications from natural language requests. Determines project type, selects tech stack, coordina... | app, builder | app, builder, main, application, building, orchestrator, creates, full, stack, applications, natural, language | | `app-builder/templates` | Project scaffolding templates for new applications. Use when creating new projects from scratch. Contains 12 templates for various tech stacks. | app, builder/templates | app, builder/templates, scaffolding, new, applications, creating, scratch, contains, 12, various, tech, stacks | -| `arm-cortex-expert` | Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD). | arm, cortex | arm, cortex, senior, embedded, software, engineer, specializing, firmware, driver, development, microcontrollers, teensy | +| `arm-cortex-expert` | | arm, cortex | arm, cortex | | `avalonia-layout-zafiro` | Guidelines for modern Avalonia UI layout using Zafiro.Avalonia, emphasizing shared styles, generic components, and avoiding XAML redundancy. | avalonia, layout, zafiro | avalonia, layout, zafiro, guidelines, ui, emphasizing, shared, styles, generic, components, avoiding, xaml | | `avalonia-zafiro-development` | Mandatory skills, conventions, and behavioral rules for Avalonia UI development using the Zafiro toolkit. | avalonia, zafiro | avalonia, zafiro, development, mandatory, skills, conventions, behavioral, rules, ui, toolkit | | `aws-cost-cleanup` | Automated cleanup of unused AWS resources to reduce costs | aws, cost, cleanup | aws, cost, cleanup, automated, unused, resources, reduce, costs | | `aws-cost-optimizer` | Comprehensive AWS cost analysis and optimization recommendations using AWS CLI and Cost Explorer | aws, cost, optimizer | aws, cost, optimizer, analysis, optimization, recommendations, cli, explorer | +| `azure-appconfiguration-py` | | azure, appconfiguration, py | azure, appconfiguration, py | +| `azure-containerregistry-py` | | azure, containerregistry, py | azure, containerregistry, py | +| `azure-cosmos-py` | | azure, cosmos, py | azure, cosmos, py | +| `azure-cosmos-ts` | | azure, cosmos, ts | azure, cosmos, ts | +| `azure-eventgrid-py` | | azure, eventgrid, py | azure, eventgrid, py | +| `azure-eventhub-py` | | azure, eventhub, py | azure, eventhub, py | +| `azure-identity-py` | | azure, identity, py | azure, identity, py | +| `azure-keyvault-py` | | azure, keyvault, py | azure, keyvault, py | +| `azure-messaging-webpubsubservice-py` | | azure, messaging, webpubsubservice, py | azure, messaging, webpubsubservice, py | +| `azure-mgmt-apicenter-py` | | azure, mgmt, apicenter, py | azure, mgmt, apicenter, py | +| `azure-mgmt-apimanagement-py` | | azure, mgmt, apimanagement, py | azure, mgmt, apimanagement, py | +| `azure-mgmt-botservice-py` | | azure, mgmt, botservice, py | azure, mgmt, botservice, py | +| `azure-mgmt-fabric-py` | | azure, mgmt, fabric, py | azure, mgmt, fabric, py | +| `azure-monitor-ingestion-py` | | azure, monitor, ingestion, py | azure, monitor, ingestion, py | +| `azure-monitor-opentelemetry-exporter-py` | | azure, monitor, opentelemetry, exporter, py | azure, monitor, opentelemetry, exporter, py | +| `azure-monitor-opentelemetry-py` | | azure, monitor, opentelemetry, py | azure, monitor, opentelemetry, py | +| `azure-monitor-query-py` | | azure, monitor, query, py | azure, monitor, query, py | +| `azure-search-documents-py` | | azure, search, documents, py | azure, search, documents, py | +| `azure-servicebus-py` | | azure, servicebus, py | azure, servicebus, py | +| `azure-speech-to-text-rest-py` | | azure, speech, to, text, rest, py | azure, speech, to, text, rest, py | +| `azure-storage-blob-py` | | azure, storage, blob, py | azure, storage, blob, py | +| `azure-storage-blob-ts` | | azure, storage, blob, ts | azure, storage, blob, ts | +| `azure-storage-file-datalake-py` | | azure, storage, file, datalake, py | azure, storage, file, datalake, py | +| `azure-storage-file-share-py` | | azure, storage, file, share, py | azure, storage, file, share, py | +| `azure-storage-file-share-ts` | | azure, storage, file, share, ts | azure, storage, file, share, ts | +| `azure-storage-queue-py` | | azure, storage, queue, py | azure, storage, queue, py | +| `azure-storage-queue-ts` | | azure, storage, queue, ts | azure, storage, queue, ts | | `backtesting-frameworks` | Build robust backtesting systems for trading strategies with proper handling of look-ahead bias, survivorship bias, and transaction costs. Use when developin... | backtesting, frameworks | backtesting, frameworks, robust, trading, proper, handling, look, ahead, bias, survivorship, transaction, costs | +| `bash-pro` | | bash | bash, pro | | `bazel-build-optimization` | Optimize Bazel builds for large-scale monorepos. Use when configuring Bazel, implementing remote execution, or optimizing build performance for enterprise co... | bazel, build, optimization | bazel, build, optimization, optimize, large, scale, monorepos, configuring, implementing, remote, execution, optimizing | -| `blockchain-developer` | Build production-ready Web3 applications, smart contracts, and decentralized systems. Implements DeFi protocols, NFT platforms, DAOs, and enterprise blockcha... | blockchain | blockchain, developer, web3, applications, smart, contracts, decentralized, implements, defi, protocols, nft, platforms | +| `blockchain-developer` | | blockchain | blockchain, developer | | `brand-guidelines-anthropic` | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand co... | brand, guidelines, anthropic | brand, guidelines, anthropic, applies, official, colors, typography, any, sort, artifact, may, benefit | | `brand-guidelines-community` | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand co... | brand, guidelines, community | brand, guidelines, community, applies, anthropic, official, colors, typography, any, sort, artifact, may | | `busybox-on-windows` | How to use a Win32 build of BusyBox to run many of the standard UNIX command line tools on Windows. | busybox, on, windows | busybox, on, windows, how, win32, run, many, standard, unix, command, line | | `c-pro` | Write efficient C code with proper memory management, pointer | c | c, pro, write, efficient, code, proper, memory, pointer | | `canvas-design` | Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art... | canvas | canvas, beautiful, visual, art, png, pdf, documents, philosophy, should, skill, user, asks | -| `carrier-relationship-management` | Codified expertise for managing carrier portfolios, negotiating freight rates, tracking carrier performance, allocating freight, and maintaining strategic ca... | carrier, relationship | carrier, relationship, codified, expertise, managing, portfolios, negotiating, freight, rates, tracking, performance, allocating | +| `carrier-relationship-management` | | carrier, relationship | carrier, relationship | | `cc-skill-continuous-learning` | Development skill from everything-claude-code | cc, skill, continuous, learning | cc, skill, continuous, learning, development, everything, claude, code | | `cc-skill-project-guidelines-example` | Project Guidelines Skill (Example) | cc, skill, guidelines, example | cc, skill, guidelines, example | | `cc-skill-strategic-compact` | Development skill from everything-claude-code | cc, skill, strategic, compact | cc, skill, strategic, compact, development, everything, claude, code | @@ -505,29 +510,38 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `code-review-excellence` | Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use wh... | code, excellence | code, excellence, review, effective, provide, constructive, feedback, catch, bugs, early, foster, knowledge | | `codebase-cleanup-tech-debt` | You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncov... | codebase, cleanup, tech, debt | codebase, cleanup, tech, debt, technical, specializing, identifying, quantifying, prioritizing, software, analyze, uncover | | `commit` | Create commit messages following Sentry conventions. Use when committing code changes, writing commit messages, or formatting git history. Follows convention... | commit | commit, messages, following, sentry, conventions, committing, code, changes, writing, formatting, git, history | +| `competitive-landscape` | | competitive, landscape | competitive, landscape | | `comprehensive-review-full-review` | Use when working with comprehensive review full review | comprehensive, full | comprehensive, full, review, working | | `comprehensive-review-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | comprehensive, pr, enhance | comprehensive, pr, enhance, review, optimization, specializing, creating, high, quality, pull, requests, facilitate | | `computer-vision-expert` | SOTA Computer Vision Expert (2026). Specialized in YOLO26, Segment Anything 3 (SAM 3), Vision Language Models, and real-time spatial analysis. | computer, vision | computer, vision, sota, 2026, specialized, yolo26, segment, anything, sam, language, models, real | | `concise-planning` | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | concise, planning | concise, planning, user, asks, plan, coding, task, generate, clear, actionable, atomic, checklist | +| `content-marketer` | | content, marketer | content, marketer | | `context-compression` | Design and evaluate compression strategies for long-running sessions | compression | compression, context, evaluate, long, running, sessions | +| `context-driven-development` | | driven | driven, context, development | | `context-fundamentals` | Understand what context is, why it matters, and the anatomy of context in agent systems | fundamentals | fundamentals, context, understand, what, why, matters, anatomy, agent | | `context-management-context-restore` | Use when working with context management context restore | restore | restore, context, working | | `context-management-context-save` | Use when working with context management context save | save | save, context, working | +| `context-manager` | | manager | manager, context | | `context-optimization` | Apply compaction, masking, and caching strategies | optimization | optimization, context, apply, compaction, masking, caching | -| `cpp-pro` | Write idiomatic C++ code with modern features, RAII, smart pointers, and STL algorithms. Handles templates, move semantics, and performance optimization. | cpp | cpp, pro, write, idiomatic, code, features, raii, smart, pointers, stl, algorithms, move | +| `cpp-pro` | | cpp | cpp, pro | | `create-pr` | Create pull requests following Sentry conventions. Use when opening PRs, writing PR descriptions, or preparing changes for review. Follows Sentry's code revi... | create, pr | create, pr, pull, requests, following, sentry, conventions, opening, prs, writing, descriptions, preparing | +| `crypto-bd-agent` | | crypto, bd, agent | crypto, bd, agent | | `culture-index` | Index and search culture documentation | culture, index | culture, index, search, documentation | | `daily-news-report` | Scrapes content based on a preset URL list, filters high-quality technical information, and generates daily Markdown reports. | daily, news, report | daily, news, report, scrapes, content, preset, url, list, filters, high, quality, technical | +| `debugger` | | debugger | debugger | | `debugging-strategies` | Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use ... | debugging, strategies | debugging, strategies, systematic, techniques, profiling, root, cause, analysis, efficiently, track, down, bugs | | `debugging-toolkit-smart-debug` | Use when working with debugging toolkit smart debug | debugging, debug | debugging, debug, toolkit, smart, working | | `design-md` | Analyze Stitch projects and synthesize a semantic design system into DESIGN.md files | md | md, analyze, stitch, synthesize, semantic, files | | `dispatching-parallel-agents` | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies | dispatching, parallel, agents | dispatching, parallel, agents, facing, independent, tasks, worked, without, shared, state, sequential, dependencies | +| `docs-architect` | | docs | docs, architect | | `docx-official` | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude ... | docx, official | docx, official, document, creation, editing, analysis, tracked, changes, comments, formatting, preservation, text | -| `dx-optimizer` | Developer Experience specialist. Improves tooling, setup, and workflows. Use PROACTIVELY when setting up new projects, after team feedback, or when developme... | dx, optimizer | dx, optimizer, developer, experience, improves, tooling, setup, proactively, setting, up, new, after | +| `dx-optimizer` | | dx, optimizer | dx, optimizer | +| `elixir-pro` | | elixir | elixir, pro | | `email-sequence` | When the user wants to create or optimize an email sequence, drip campaign, automated email flow, or lifecycle email program. Also use when the user mentions... | email, sequence | email, sequence, user, wants, optimize, drip, campaign, automated, flow, lifecycle, program, mentions | -| `energy-procurement` | Codified expertise for electricity and gas procurement, tariff optimisation, demand charge management, renewable PPA evaluation, and multi-facility energy co... | energy, procurement | energy, procurement, codified, expertise, electricity, gas, tariff, optimisation, demand, charge, renewable, ppa | +| `energy-procurement` | | energy, procurement | energy, procurement | | `environment-setup-guide` | Guide developers through setting up development environments with proper tools, dependencies, and configurations | environment, setup | environment, setup, developers, through, setting, up, development, environments, proper, dependencies, configurations | | `error-debugging-multi-agent-review` | Use when working with error debugging multi agent review | error, debugging, multi, agent | error, debugging, multi, agent, review, working | +| `error-detective` | | error, detective | error, detective | | `error-diagnostics-smart-debug` | Use when working with error diagnostics smart debug | error, diagnostics, debug | error, diagnostics, debug, smart, working | | `evaluation` | Build evaluation frameworks for agent systems | evaluation | evaluation, frameworks, agent | | `executing-plans` | Use when you have a written implementation plan to execute in a separate session with review checkpoints | executing, plans | executing, plans, written, plan, execute, separate, session, review, checkpoints | @@ -535,8 +549,9 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `ffuf-claude-skill` | Web fuzzing with ffuf | ffuf, claude, skill | ffuf, claude, skill, web, fuzzing | | `file-organizer` | Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures. Use when user wants ... | file, organizer | file, organizer, intelligently, organizes, files, folders, understanding, context, finding, duplicates, suggesting, better | | `finishing-a-development-branch` | Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting s... | finishing, a, branch | finishing, a, branch, development, complete, all, tests, pass, decide, how, integrate, work | +| `firmware-analyst` | | firmware, analyst | firmware, analyst | | `fix-review` | Verify fix commits address audit findings without new bugs | fix | fix, review, verify, commits, address, audit, findings, without, new, bugs | -| `form-cro` | Optimize any form that is NOT signup or account registration — including lead capture, contact, demo request, application, survey, quote, and checkout forms. | form, cro | form, cro, optimize, any, signup, account, registration, including, lead, capture, contact, demo | +| `form-cro` | | form, cro | form, cro | | `framework-migration-code-migrate` | You are a code migration expert specializing in transitioning codebases between frameworks, languages, versions, and platforms. Generate comprehensive migrat... | framework, migration, code, migrate | framework, migration, code, migrate, specializing, transitioning, codebases, between, frameworks, languages, versions, platforms | | `game-development` | Game development orchestrator. Routes to platform-specific skills based on project needs. | game | game, development, orchestrator, routes, platform, specific, skills | | `game-development/2d-games` | 2D game development principles. Sprites, tilemaps, physics, camera. | game, development/2d, games | game, development/2d, games, 2d, development, principles, sprites, tilemaps, physics, camera | @@ -552,40 +567,47 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `git-pushing` | Stage, commit, and push git changes with conventional commit messages. Use when user wants to commit and push changes, mentions pushing to remote, or asks to... | git, pushing | git, pushing, stage, commit, push, changes, conventional, messages, user, wants, mentions, remote | | `github-issue-creator` | Convert raw notes, error logs, voice dictation, or screenshots into crisp GitHub-flavored markdown issue reports. Use when the user pastes bug info, error me... | github, issue, creator | github, issue, creator, convert, raw, notes, error, logs, voice, dictation, screenshots, crisp | | `godot-4-migration` | Specialized guide for migrating Godot 3.x projects to Godot 4 (GDScript 2.0), covering syntax changes, Tweens, and exports. | godot, 4, migration | godot, 4, migration, specialized, migrating, gdscript, covering, syntax, changes, tweens, exports | +| `graphql-architect` | | graphql | graphql, architect | | `haskell-pro` | Expert Haskell engineer specializing in advanced type systems, pure | haskell | haskell, pro, engineer, specializing, type, pure | | `hierarchical-agent-memory` | Scoped CLAUDE.md memory system that reduces context token spend. Creates directory-level context files, tracks savings via dashboard, and routes agents to th... | hierarchical, agent, memory | hierarchical, agent, memory, scoped, claude, md, reduces, context, token, spend, creates, directory | -| `hig-components-content` | Apple Human Interface Guidelines for content display components. | hig, components, content | hig, components, content, apple, human, interface, guidelines, display | -| `hig-components-controls` | Apple HIG guidance for selection and input controls including pickers, toggles, sliders, steppers, segmented controls, combo boxes, text fields, text views, ... | hig, components, controls | hig, components, controls, apple, guidance, selection, input, including, pickers, toggles, sliders, steppers | -| `hig-components-dialogs` | Apple HIG guidance for presentation components including alerts, action sheets, popovers, sheets, and digit entry views. | hig, components, dialogs | hig, components, dialogs, apple, guidance, presentation, including, alerts, action, sheets, popovers, digit | -| `hig-components-layout` | Apple Human Interface Guidelines for layout and navigation components. | hig, components, layout | hig, components, layout, apple, human, interface, guidelines, navigation | -| `hig-components-menus` | Apple HIG guidance for menu and button components including menus, context menus, dock menus, edit menus, the menu bar, toolbars, action buttons, pop-up butt... | hig, components, menus | hig, components, menus, apple, guidance, menu, button, including, context, dock, edit, bar | -| `hig-components-search` | Apple HIG guidance for navigation-related components including search fields, page controls, and path controls. | hig, components, search | hig, components, search, apple, guidance, navigation, related, including, fields, page, controls, path | -| `hig-components-status` | Apple HIG guidance for status and progress UI components including progress indicators, status bars, and activity rings. | hig, components, status | hig, components, status, apple, guidance, progress, ui, including, indicators, bars, activity, rings | -| `hig-components-system` | Apple HIG guidance for system experience components: widgets, live activities, notifications, complications, home screen quick actions, top shelf, watch face... | hig, components | hig, components, apple, guidance, experience, widgets, live, activities, notifications, complications, home, screen | -| `hig-foundations` | Apple Human Interface Guidelines design foundations. | hig, foundations | hig, foundations, apple, human, interface, guidelines | -| `hig-platforms` | Apple Human Interface Guidelines for platform-specific design. | hig, platforms | hig, platforms, apple, human, interface, guidelines, platform, specific | -| `hig-project-context` | Create or update a shared Apple design context document that other HIG skills use to tailor guidance. | hig | hig, context, update, shared, apple, document, other, skills, tailor, guidance | +| `hig-components-content` | | hig, components, content | hig, components, content | +| `hig-components-controls` | | hig, components, controls | hig, components, controls | +| `hig-components-dialogs` | | hig, components, dialogs | hig, components, dialogs | +| `hig-components-layout` | | hig, components, layout | hig, components, layout | +| `hig-components-menus` | | hig, components, menus | hig, components, menus | +| `hig-components-search` | | hig, components, search | hig, components, search | +| `hig-components-status` | | hig, components, status | hig, components, status | +| `hig-components-system` | | hig, components | hig, components | +| `hig-foundations` | | hig, foundations | hig, foundations | +| `hig-inputs` | | hig, inputs | hig, inputs | +| `hig-platforms` | | hig, platforms | hig, platforms | +| `hig-project-context` | | hig | hig, context | +| `hig-technologies` | | hig, technologies | hig, technologies | | `hugging-face-cli` | Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create ... | hugging, face, cli | hugging, face, cli, execute, hub, operations, hf, user, download, models, datasets, spaces | | `hugging-face-jobs` | This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, ... | hugging, face, jobs | hugging, face, jobs, skill, should, used, users, want, run, any, workload, infrastructure | +| `imagen` | | imagen | imagen | | `infinite-gratitude` | Multi-agent research skill for parallel research execution (10 agents, battle-tested with real case studies). | infinite, gratitude | infinite, gratitude, multi, agent, research, skill, parallel, execution, 10, agents, battle, tested | | `interactive-portfolio` | Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, d... | interactive, portfolio | interactive, portfolio, building, portfolios, actually, land, jobs, clients, just, showing, work, creating | | `internal-comms-anthropic` | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenev... | internal, comms, anthropic | internal, comms, anthropic, set, resources, me, write, all, kinds, communications, formats, my | | `internal-comms-community` | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenev... | internal, comms, community | internal, comms, community, set, resources, me, write, all, kinds, communications, formats, my | -| `inventory-demand-planning` | Codified expertise for demand forecasting, safety stock optimisation, replenishment planning, and promotional lift estimation at multi-location retailers. | inventory, demand, planning | inventory, demand, planning, codified, expertise, forecasting, safety, stock, optimisation, replenishment, promotional, lift | -| `julia-pro` | Master Julia 1.10+ with modern features, performance optimization, multiple dispatch, and production-ready practices. | julia | julia, pro, 10, features, performance, optimization, multiple, dispatch | +| `inventory-demand-planning` | | inventory, demand, planning | inventory, demand, planning | +| `julia-pro` | | julia | julia, pro | | `last30days` | Research a topic from the last 30 days on Reddit + X + Web, become an expert, and write copy-paste-ready prompts for the user's target tool. | last30days | last30days, research, topic, last, 30, days, reddit, web, become, write, copy, paste | -| `legacy-modernizer` | Refactor legacy codebases, migrate outdated frameworks, and implement gradual modernization. Handles technical debt, dependency updates, and backward compati... | legacy, modernizer | legacy, modernizer, refactor, codebases, migrate, outdated, frameworks, gradual, modernization, technical, debt, dependency | +| `legacy-modernizer` | | legacy, modernizer | legacy, modernizer | | `linear-claude-skill` | Manage Linear issues, projects, and teams | linear, claude, skill | linear, claude, skill, issues, teams | | `lint-and-validate` | Automatic quality control, linting, and static analysis procedures. Use after every code modification to ensure syntax correctness and project standards. Tri... | lint, and, validate | lint, and, validate, automatic, quality, control, linting, static, analysis, procedures, after, every | | `linux-privilege-escalation` | This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "a... | linux, privilege, escalation | linux, privilege, escalation, skill, should, used, user, asks, escalate, privileges, find, privesc | | `linux-shell-scripting` | This skill should be used when the user asks to "create bash scripts", "automate Linux tasks", "monitor system resources", "backup files", "manage users", or... | linux, shell, scripting | linux, shell, scripting, skill, should, used, user, asks, bash, scripts, automate, tasks | -| `logistics-exception-management` | Codified expertise for handling freight exceptions, shipment delays, damages, losses, and carrier disputes. Informed by logistics professionals with 15+ year... | logistics, exception | logistics, exception, codified, expertise, handling, freight, exceptions, shipment, delays, damages, losses, carrier | +| `logistics-exception-management` | | logistics, exception | logistics, exception | +| `m365-agents-py` | | m365, agents, py | m365, agents, py | +| `m365-agents-ts` | | m365, agents, ts | m365, agents, ts | | `mcp-builder` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder | mcp, builder, creating, high, quality, model, context, protocol, servers, enable, llms, interact | | `mcp-builder-ms` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder, ms | mcp, builder, ms, creating, high, quality, model, context, protocol, servers, enable, llms | | `memory-systems` | Design short-term, long-term, and graph-based memory architectures | memory | memory, short, term, long, graph, architectures | -| `mermaid-expert` | Create Mermaid diagrams for flowcharts, sequences, ERDs, and architectures. Masters syntax for all diagram types and styling. | mermaid | mermaid, diagrams, flowcharts, sequences, erds, architectures, masters, syntax, all, diagram, types, styling | +| `mermaid-expert` | | mermaid | mermaid | | `micro-saas-launcher` | Expert in launching small, focused SaaS products fast - the indie hacker approach to building profitable software. Covers idea validation, MVP development, p... | micro, saas, launcher | micro, saas, launcher, launching, small, products, fast, indie, hacker, approach, building, profitable | -| `minecraft-bukkit-pro` | Master Minecraft server plugin development with Bukkit, Spigot, and Paper APIs. | minecraft, bukkit | minecraft, bukkit, pro, server, plugin, development, spigot, paper, apis | +| `minecraft-bukkit-pro` | | minecraft, bukkit | minecraft, bukkit, pro | +| `mlops-engineer` | | mlops | mlops, engineer | | `monorepo-management` | Master monorepo management with Turborepo, Nx, and pnpm workspaces to build efficient, scalable multi-package repositories with optimized builds and dependen... | monorepo | monorepo, turborepo, nx, pnpm, workspaces, efficient, scalable, multi, package, repositories, optimized, dependency | | `n8n-mcp-tools-expert` | Expert guide for using n8n-mcp MCP tools effectively. Use when searching for nodes, validating configurations, accessing templates, managing workflows, or us... | n8n, mcp | n8n, mcp, effectively, searching, nodes, validating, configurations, accessing, managing, any, provides, sele | | `nft-standards` | Implement NFT standards (ERC-721, ERC-1155) with proper metadata handling, minting strategies, and marketplace integration. Use when creating NFT contracts, ... | nft, standards | nft, standards, erc, 721, 1155, proper, metadata, handling, minting, marketplace, integration, creating | @@ -593,8 +615,9 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `obsidian-clipper-template-creator` | Guide for creating templates for the Obsidian Web Clipper. Use when you want to create a new clipping template, understand available variables, or format cli... | obsidian, clipper, creator | obsidian, clipper, creator, creating, web, want, new, clipping, understand, available, variables, format | | `onboarding-cro` | When the user wants to optimize post-signup onboarding, user activation, first-run experience, or time-to-value. Also use when the user mentions "onboarding ... | onboarding, cro | onboarding, cro, user, wants, optimize, post, signup, activation, first, run, experience, time | | `oss-hunter` | Automatically hunt for high-impact OSS contribution opportunities in trending repositories. | oss, hunter | oss, hunter, automatically, hunt, high, impact, contribution, opportunities, trending, repositories | -| `page-cro` | Analyze and optimize individual pages for conversion performance. | page, cro | page, cro, analyze, optimize, individual, pages, conversion, performance | +| `page-cro` | | page, cro | page, cro | | `paid-ads` | When the user wants help with paid advertising campaigns on Google Ads, Meta (Facebook/Instagram), LinkedIn, Twitter/X, or other ad platforms. Also use when ... | paid, ads | paid, ads, user, wants, advertising, campaigns, google, meta, facebook, instagram, linkedin, twitter | +| `payment-integration` | | payment, integration | payment, integration | | `paypal-integration` | Integrate PayPal payment processing with support for express checkout, subscriptions, and refund management. Use when implementing PayPal payments, processin... | paypal, integration | paypal, integration, integrate, payment, processing, express, checkout, subscriptions, refund, implementing, payments, online | | `paywall-upgrade-cro` | When the user wants to create or optimize in-app paywalls, upgrade screens, upsell modals, or feature gates. Also use when the user mentions "paywall," "upgr... | paywall, upgrade, cro | paywall, upgrade, cro, user, wants, optimize, app, paywalls, screens, upsell, modals, feature | | `pdf-official` | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. When Claude needs ... | pdf, official | pdf, official, manipulation, toolkit, extracting, text, tables, creating, new, pdfs, merging, splitting | @@ -603,25 +626,30 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `personal-tool-builder` | Expert in building custom tools that solve your own problems first. The best products often start as personal tools - scratch your own itch, build for yourse... | personal, builder | personal, builder, building, custom, solve, own, problems, first, products, often, start, scratch | | `plan-writing` | Structured task planning with clear breakdowns, dependencies, and verification criteria. Use when implementing features, refactoring, or any multi-step work. | plan, writing | plan, writing, structured, task, planning, clear, breakdowns, dependencies, verification, criteria, implementing, features | | `planning-with-files` | Implements Manus-style file-based planning for complex tasks. Creates task_plan.md, findings.md, and progress.md. Use when starting complex multi-step tasks,... | planning, with, files | planning, with, files, implements, manus, style, file, complex, tasks, creates, task, plan | -| `posix-shell-pro` | Expert in strict POSIX sh scripting for maximum portability across Unix-like systems. Specializes in shell scripts that run on any POSIX-compliant shell (das... | posix, shell | posix, shell, pro, strict, sh, scripting, maximum, portability, unix, like, specializes, scripts | +| `posix-shell-pro` | | posix, shell | posix, shell, pro | | `pptx-official` | Presentation creation, editing, and analysis. When Claude needs to work with presentations (.pptx files) for: (1) Creating new presentations, (2) Modifying o... | pptx, official | pptx, official, presentation, creation, editing, analysis, claude, work, presentations, files, creating, new | | `privilege-escalation-methods` | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploi... | privilege, escalation, methods | privilege, escalation, methods, skill, should, used, user, asks, escalate, privileges, get, root | -| `production-scheduling` | Codified expertise for production scheduling, job sequencing, line balancing, changeover optimisation, and bottleneck resolution in discrete and batch manufa... | production, scheduling | production, scheduling, codified, expertise, job, sequencing, line, balancing, changeover, optimisation, bottleneck, resolution | +| `production-scheduling` | | production, scheduling | production, scheduling | | `prompt-engineer` | Transforms user prompts into optimized prompts using frameworks (RTF, RISEN, Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW) | [prompt-engineering, optimization, frameworks, ai-enhancement] | [prompt-engineering, optimization, frameworks, ai-enhancement], prompt, engineer, transforms, user, prompts, optimized, rtf, risen | | `prompt-library` | Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use... | prompt, library | prompt, library, curated, collection, high, quality, prompts, various, cases, includes, role, task | -| `quality-nonconformance` | Codified expertise for quality control, non-conformance investigation, root cause analysis, corrective action, and supplier quality management in regulated m... | quality, nonconformance | quality, nonconformance, codified, expertise, control, non, conformance, investigation, root, cause, analysis, corrective | +| `quality-nonconformance` | | quality, nonconformance | quality, nonconformance | +| `quant-analyst` | | quant, analyst | quant, analyst | | `readme` | When the user wants to create or update a README.md file for a project. Also use when the user says 'write readme,' 'create readme,' 'document this project,'... | readme | readme, user, wants, update, md, file, says, write, document, documentation, asks, he | | `receiving-code-review` | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technic... | receiving, code | receiving, code, review, feedback, before, implementing, suggestions, especially, seems, unclear, technically, questionable | | `red-team-tools` | This skill should be used when the user asks to "follow red team methodology", "perform bug bounty hunting", "automate reconnaissance", "hunt for XSS vulnera... | red, team | red, team, skill, should, used, user, asks, follow, methodology, perform, bug, bounty | +| `reference-builder` | | reference, builder | reference, builder | | `referral-program` | When the user wants to create, optimize, or analyze a referral program, affiliate program, or word-of-mouth strategy. Also use when the user mentions 'referr... | referral, program | referral, program, user, wants, optimize, analyze, affiliate, word, mouth, mentions, ambassador | | `requesting-code-review` | Use when completing tasks, implementing major features, or before merging to verify work meets requirements | requesting, code | requesting, code, review, completing, tasks, implementing, major, features, before, merging, verify, work | -| `returns-reverse-logistics` | Codified expertise for returns authorisation, receipt and inspection, disposition decisions, refund processing, fraud detection, and warranty claims management. | returns, reverse, logistics | returns, reverse, logistics, codified, expertise, authorisation, receipt, inspection, disposition, decisions, refund, processing | -| `reverse-engineer` | Expert reverse engineer specializing in binary analysis, disassembly, decompilation, and software analysis. Masters IDA Pro, Ghidra, radare2, x64dbg, and mod... | reverse | reverse, engineer, specializing, binary, analysis, disassembly, decompilation, software, masters, ida, pro, ghidra | +| `returns-reverse-logistics` | | returns, reverse, logistics | returns, reverse, logistics | +| `reverse-engineer` | | reverse | reverse, engineer | +| `scala-pro` | | scala | scala, pro | +| `schema-markup` | | schema, markup | schema, markup | | `search-specialist` | Expert web researcher using advanced search techniques and | search | search, web, researcher, techniques | | `shader-programming-glsl` | Expert guide for writing efficient GLSL shaders (Vertex/Fragment) for web and game engines, covering syntax, uniforms, and common effects. | shader, programming, glsl | shader, programming, glsl, writing, efficient, shaders, vertex, fragment, web, game, engines, covering | | `sharp-edges` | Identify error-prone APIs and dangerous configurations | sharp, edges | sharp, edges, identify, error, prone, apis, dangerous, configurations | | `shellcheck-configuration` | Master ShellCheck static analysis configuration and usage for shell script quality. Use when setting up linting infrastructure, fixing code issues, or ensuri... | shellcheck, configuration | shellcheck, configuration, static, analysis, usage, shell, script, quality, setting, up, linting, infrastructure | | `shodan-reconnaissance` | This skill should be used when the user asks to "search for exposed devices on the internet," "perform Shodan reconnaissance," "find vulnerable services usin... | shodan, reconnaissance | shodan, reconnaissance, skill, should, used, user, asks, search, exposed, devices, internet, perform | +| `shopify-development` | | shopify | shopify, development | | `signup-flow-cro` | When the user wants to optimize signup, registration, account creation, or trial activation flows. Also use when the user mentions "signup conversions," "reg... | signup, flow, cro | signup, flow, cro, user, wants, optimize, registration, account, creation, trial, activation, flows | | `skill-creator` | This skill should be used when the user asks to create a new skill, build a skill, make a custom skill, develop a CLI skill, or wants to extend the CLI with ... | [automation, scaffolding, skill-creation, meta-skill] | [automation, scaffolding, skill-creation, meta-skill], skill, creator, should, used, user, asks, new, custom | | `skill-rails-upgrade` | Analyze Rails apps and provide upgrade assessments | skill, rails, upgrade | skill, rails, upgrade, analyze, apps, provide, assessments | @@ -629,13 +657,16 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `social-content` | When the user wants help creating, scheduling, or optimizing social media content for LinkedIn, Twitter/X, Instagram, TikTok, Facebook, or other platforms. A... | social, content | social, content, user, wants, creating, scheduling, optimizing, media, linkedin, twitter, instagram, tiktok | | `subagent-driven-development` | Use when executing implementation plans with independent tasks in the current session | subagent, driven | subagent, driven, development, executing, plans, independent, tasks, current, session | | `superpowers-lab` | Lab environment for Claude superpowers | superpowers, lab | superpowers, lab, environment, claude | -| `team-composition-analysis` | This skill should be used when the user asks to \\\"plan team structure", "determine hiring needs", "design org chart", "calculate compensation", "plan equit... | team, composition | team, composition, analysis, skill, should, used, user, asks, plan, structure, determine, hiring | +| `team-composition-analysis` | | team, composition | team, composition, analysis | | `theme-factory` | Toolkit for styling artifacts with a theme. These artifacts can be slides, docs, reportings, HTML landing pages, etc. There are 10 pre-set themes with colors... | theme, factory | theme, factory, toolkit, styling, artifacts, these, slides, docs, reportings, html, landing, pages | | `threejs-skills` | Create 3D scenes, interactive experiences, and visual effects using Three.js. Use when user requests 3D graphics, WebGL experiences, 3D visualizations, anima... | threejs, skills | threejs, skills, 3d, scenes, interactive, experiences, visual, effects, three, js, user, requests | +| `track-management` | | track | track | | `turborepo-caching` | Configure Turborepo for efficient monorepo builds with local and remote caching. Use when setting up Turborepo, optimizing build pipelines, or implementing d... | turborepo, caching | turborepo, caching, configure, efficient, monorepo, local, remote, setting, up, optimizing, pipelines, implementing | -| `tutorial-engineer` | Creates step-by-step tutorials and educational content from code. Transforms complex concepts into progressive learning experiences with hands-on examples. | tutorial | tutorial, engineer, creates, step, tutorials, educational, content, code, transforms, complex, concepts, progressive | +| `tutorial-engineer` | | tutorial | tutorial, engineer | | `ui-skills` | Opinionated, evolving constraints to guide agents when building interfaces | ui, skills | ui, skills, opinionated, evolving, constraints, agents, building, interfaces | -| `ui-ux-designer` | Create interface designs, wireframes, and design systems. Masters user research, accessibility standards, and modern design tools. | ui, ux, designer | ui, ux, designer, interface, designs, wireframes, masters, user, research, accessibility, standards | +| `ui-ux-designer` | | ui, ux, designer | ui, ux, designer | +| `ui-visual-validator` | | ui, visual, validator | ui, visual, validator | +| `unity-developer` | | unity | unity, developer | | `upgrading-expo` | Upgrade Expo SDK versions | upgrading, expo | upgrading, expo, upgrade, sdk, versions | | `upstash-qstash` | Upstash QStash expert for serverless message queues, scheduled jobs, and reliable HTTP-based task delivery without managing infrastructure. Use when: qstash,... | upstash, qstash | upstash, qstash, serverless, message, queues, scheduled, jobs, reliable, http, task, delivery, without | | `using-git-worktrees` | Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with sma... | using, git, worktrees | using, git, worktrees, starting, feature, work, isolation, current, workspace, before, executing, plans | @@ -652,60 +683,47 @@ applications. | php | php, pro, write, idiomatic, code, generators, iterators, s | `x-article-publisher-skill` | Publish articles to X/Twitter | x, article, publisher, skill | x, article, publisher, skill, publish, articles, twitter | | `youtube-summarizer` | Extract transcripts from YouTube videos and generate comprehensive, detailed summaries using intelligent analysis frameworks | [video, summarization, transcription, youtube, content-analysis] | [video, summarization, transcription, youtube, content-analysis], summarizer, extract, transcripts, videos, generate, detailed, summaries | -## infrastructure (111) +## infrastructure (94) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | | `agent-evaluation` | Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents... | agent, evaluation | agent, evaluation, testing, benchmarking, llm, agents, including, behavioral, capability, assessment, reliability, metrics | | `airflow-dag-patterns` | Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating wor... | airflow, dag | airflow, dag, apache, dags, operators, sensors, testing, deployment, creating, data, pipelines, orchestrating | | `api-testing-observability-api-mock` | You are an API mocking expert specializing in realistic mock services for development, testing, and demos. Design mocks that simulate real API behavior and e... | api, observability, mock | api, observability, mock, testing, mocking, specializing, realistic, development, demos, mocks, simulate, real | +| `apify-actor-development` | Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifyin... | apify, actor | apify, actor, development, develop, debug, deploy, actors, serverless, cloud, programs, web, scraping | +| `apify-actorization` | Convert existing projects into Apify Actors - serverless cloud programs. Actorize JavaScript/TypeScript (SDK with Actor.init/exit), Python (async context man... | apify, actorization | apify, actorization, convert, existing, actors, serverless, cloud, programs, actorize, javascript, typescript, sdk | +| `apify-brand-reputation-monitoring` | Track reviews, ratings, sentiment, and brand mentions across Google Maps, Booking.com, TripAdvisor, Facebook, Instagram, YouTube, and TikTok. Use when user a... | apify, brand, reputation, monitoring | apify, brand, reputation, monitoring, track, reviews, ratings, sentiment, mentions, google, maps, booking | | `application-performance-performance-optimization` | Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across... | application, performance, optimization | application, performance, optimization, optimize, profiling, observability, backend, frontend, tuning, coordinating, stack | | `aws-serverless` | Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns... | aws, serverless | aws, serverless, specialized, skill, building, applications, covers, lambda, functions, api, gateway, dynamodb | | `aws-skills` | AWS development with infrastructure automation and cloud architecture patterns | aws, skills | aws, skills, development, infrastructure, automation, cloud, architecture | | `azd-deployment` | Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration... | azd, deployment | azd, deployment, deploy, containerized, applications, azure, container, apps, developer, cli, setting, up | | `azure-ai-anomalydetector-java` | Build anomaly detection applications with Azure AI Anomaly Detector SDK for Java. Use when implementing univariate/multivariate anomaly detection, time-serie... | azure, ai, anomalydetector, java | azure, ai, anomalydetector, java, anomaly, detection, applications, detector, sdk, implementing, univariate, multivariate | -| `azure-identity-dotnet` | Azure Identity SDK for .NET. Authentication library for Azure SDK clients using Microsoft Entra ID. Use for DefaultAzureCredential, managed identity, service... | azure, identity, dotnet | azure, identity, dotnet, sdk, net, authentication, library, clients, microsoft, entra, id, defaultazurecredential | | `azure-identity-java` | Azure Identity Java SDK for authentication with Azure services. Use when implementing DefaultAzureCredential, managed identity, service principal, or any Azu... | azure, identity, java | azure, identity, java, sdk, authentication, implementing, defaultazurecredential, managed, principal, any, applic | -| `azure-identity-py` | Azure Identity SDK for Python authentication. Use for DefaultAzureCredential, managed identity, service principals, and token caching. | azure, identity, py | azure, identity, py, sdk, python, authentication, defaultazurecredential, managed, principals, token, caching | | `azure-identity-ts` | Authenticate to Azure services using Azure Identity SDK for JavaScript (@azure/identity). Use when configuring authentication with DefaultAzureCredential, ma... | azure, identity, ts | azure, identity, ts, authenticate, sdk, javascript, configuring, authentication, defaultazurecredential, managed, principals | -| `azure-messaging-webpubsubservice-py` | Azure Web PubSub Service SDK for Python. Use for real-time messaging, WebSocket connections, and pub/sub patterns. | azure, messaging, webpubsubservice, py | azure, messaging, webpubsubservice, py, web, pubsub, sdk, python, real, time, websocket, connections | -| `azure-mgmt-applicationinsights-dotnet` | Azure Application Insights SDK for .NET. Application performance monitoring and observability resource management. | azure, mgmt, applicationinsights, dotnet | azure, mgmt, applicationinsights, dotnet, application, insights, sdk, net, performance, monitoring, observability, resource | -| `azure-mgmt-arizeaiobservabilityeval-dotnet` | Azure Resource Manager SDK for Arize AI Observability and Evaluation (.NET). | azure, mgmt, arizeaiobservabilityeval, dotnet | azure, mgmt, arizeaiobservabilityeval, dotnet, resource, manager, sdk, arize, ai, observability, evaluation, net | -| `azure-mgmt-botservice-dotnet` | Azure Resource Manager SDK for Bot Service in .NET. Management plane operations for creating and managing Azure Bot resources, channels (Teams, DirectLine, S... | azure, mgmt, botservice, dotnet | azure, mgmt, botservice, dotnet, resource, manager, sdk, bot, net, plane, operations, creating | -| `azure-mgmt-botservice-py` | Azure Bot Service Management SDK for Python. Use for creating, managing, and configuring Azure Bot Service resources. | azure, mgmt, botservice, py | azure, mgmt, botservice, py, bot, sdk, python, creating, managing, configuring, resources | -| `azure-mgmt-weightsandbiases-dotnet` | Azure Weights & Biases SDK for .NET. ML experiment tracking and model management via Azure Marketplace. Use for creating W&B instances, managing SSO, marketp... | azure, mgmt, weightsandbiases, dotnet | azure, mgmt, weightsandbiases, dotnet, weights, biases, sdk, net, ml, experiment, tracking, model | | `azure-microsoft-playwright-testing-ts` | Run Playwright tests at scale using Azure Playwright Workspaces (formerly Microsoft Playwright Testing). Use when scaling browser tests across cloud-hosted b... | azure, microsoft, playwright, ts | azure, microsoft, playwright, ts, testing, run, tests, scale, workspaces, formerly, scaling, browser | | `azure-monitor-opentelemetry-ts` | Instrument applications with Azure Monitor and OpenTelemetry for JavaScript (@azure/monitor-opentelemetry). Use when adding distributed tracing, metrics, and... | azure, monitor, opentelemetry, ts | azure, monitor, opentelemetry, ts, instrument, applications, javascript, adding, distributed, tracing, metrics, logs | -| `azure-servicebus-dotnet` | Azure Service Bus SDK for .NET. Enterprise messaging with queues, topics, subscriptions, and sessions. | azure, servicebus, dotnet | azure, servicebus, dotnet, bus, sdk, net, enterprise, messaging, queues, topics, subscriptions, sessions | -| `azure-servicebus-py` | Azure Service Bus SDK for Python messaging. Use for queues, topics, subscriptions, and enterprise messaging patterns. | azure, servicebus, py | azure, servicebus, py, bus, sdk, python, messaging, queues, topics, subscriptions, enterprise | | `azure-servicebus-ts` | Build messaging applications using Azure Service Bus SDK for JavaScript (@azure/service-bus). Use when implementing queues, topics/subscriptions, message ses... | azure, servicebus, ts | azure, servicebus, ts, messaging, applications, bus, sdk, javascript, implementing, queues, topics, subscriptions | -| `azure-storage-file-share-py` | Azure Storage File Share SDK for Python. Use for SMB file shares, directories, and file operations in the cloud. | azure, storage, file, share, py | azure, storage, file, share, py, sdk, python, smb, shares, directories, operations, cloud | | `backend-development-feature-development` | Orchestrate end-to-end backend feature development from requirements to deployment. Use when coordinating multi-phase feature delivery across teams and servi... | backend | backend, development, feature, orchestrate, requirements, deployment, coordinating, multi, phase, delivery, teams | | `bash-defensive-patterns` | Master defensive Bash programming techniques for production-grade scripts. Use when writing robust shell scripts, CI/CD pipelines, or system utilities requir... | bash, defensive | bash, defensive, programming, techniques, grade, scripts, writing, robust, shell, ci, cd, pipelines | -| `bash-pro` | Master of defensive Bash scripting for production automation, CI/CD -pipelines, and system utilities. Expert in safe, portable, and testable shell -scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines, utilities, safe, portable, testable | | `bats-testing-patterns` | Master Bash Automated Testing System (Bats) for comprehensive shell script testing. Use when writing tests for shell scripts, CI/CD pipelines, or requiring t... | bats | bats, testing, bash, automated, shell, script, writing, tests, scripts, ci, cd, pipelines | | `box-automation` | Automate Box cloud storage operations including file upload/download, search, folder management, sharing, collaborations, and metadata queries via Rube MCP (... | box | box, automation, automate, cloud, storage, operations, including, file, upload, download, search, folder | | `cdk-patterns` | Common AWS CDK patterns and constructs for building cloud infrastructure with TypeScript, Python, or Java. Use when designing reusable CDK stacks and L3 cons... | cdk | cdk, common, aws, constructs, building, cloud, infrastructure, typescript, python, java, designing, reusable | | `chrome-extension-developer` | Expert in building Chrome Extensions using Manifest V3. Covers background scripts, service workers, content scripts, and cross-context communication. | chrome, extension | chrome, extension, developer, building, extensions, manifest, v3, covers, background, scripts, workers, content | | `cicd-automation-workflow-automate` | You are a workflow automation expert specializing in creating efficient CI/CD pipelines, GitHub Actions workflows, and automated development processes. Desig... | cicd, automate | cicd, automate, automation, specializing, creating, efficient, ci, cd, pipelines, github, actions, automated | | `claude-d3js-skill` | Creating interactive data visualisations using d3.js. This skill should be used when creating custom charts, graphs, network diagrams, geographic visualisati... | claude, d3js, skill | claude, d3js, skill, creating, interactive, data, visualisations, d3, js, should, used, custom | -| `cloud-architect` | Expert cloud architect specializing in AWS/Azure/GCP multi-cloud infrastructure design, advanced IaC (Terraform/OpenTofu/CDK), FinOps cost optimization, and ... | cloud | cloud, architect, specializing, aws, azure, gcp, multi, infrastructure, iac, terraform, opentofu, cdk | +| `cloud-architect` | | cloud | cloud, architect | | `cloud-devops` | Cloud infrastructure and DevOps workflow covering AWS, Azure, GCP, Kubernetes, Terraform, CI/CD, monitoring, and cloud-native development. | cloud, devops | cloud, devops, infrastructure, covering, aws, azure, gcp, kubernetes, terraform, ci, cd, monitoring | | `code-review-ai-ai-review` | You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Levera... | code, ai | code, ai, review, powered, combining, automated, static, analysis, intelligent, recognition, devops, leverage | | `cost-optimization` | Optimize cloud costs through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing... | cost, optimization | cost, optimization, optimize, cloud, costs, through, resource, rightsizing, tagging, reserved, instances, spending | -| `data-engineer` | Build scalable data pipelines, modern data warehouses, and real-time streaming architectures. Implements Apache Spark, dbt, Airflow, and cloud-native data pl... | data | data, engineer, scalable, pipelines, warehouses, real, time, streaming, architectures, implements, apache, spark | | `data-engineering-data-pipeline` | You are a data pipeline architecture expert specializing in scalable, reliable, and cost-effective data pipelines for batch and streaming data processing. | data, engineering, pipeline | data, engineering, pipeline, architecture, specializing, scalable, reliable, cost, effective, pipelines, batch, streaming | -| `database-admin` | Expert database administrator specializing in modern cloud databases, automation, and reliability engineering. | database, admin | database, admin, administrator, specializing, cloud, databases, automation, reliability, engineering | | `database-cloud-optimization-cost-optimize` | You are a cloud cost optimization expert specializing in reducing infrastructure expenses while maintaining performance and reliability. Analyze cloud spendi... | database, cloud, optimization, cost, optimize | database, cloud, optimization, cost, optimize, specializing, reducing, infrastructure, expenses, while, maintaining, performance | | `database-migrations-migration-observability` | Migration monitoring, CDC, and observability infrastructure | database, cdc, debezium, kafka, prometheus, grafana, monitoring | database, cdc, debezium, kafka, prometheus, grafana, monitoring, migrations, migration, observability, infrastructure | -| `deployment-engineer` | Expert deployment engineer specializing in modern CI/CD pipelines, GitOps workflows, and advanced deployment automation. | deployment | deployment, engineer, specializing, ci, cd, pipelines, gitops, automation | +| `deployment-engineer` | | deployment | deployment, engineer | | `deployment-procedures` | Production deployment principles and decision-making. Safe deployment workflows, rollback strategies, and verification. Teaches thinking, not scripts. | deployment, procedures | deployment, procedures, principles, decision, making, safe, rollback, verification, teaches, thinking, scripts | | `deployment-validation-config-validate` | You are a configuration management expert specializing in validating, testing, and ensuring the correctness of application configurations. Create comprehensi... | deployment, validation, config, validate | deployment, validation, config, validate, configuration, specializing, validating, testing, ensuring, correctness, application, configurations | +| `devops-troubleshooter` | | devops, troubleshooter | devops, troubleshooter | | `distributed-debugging-debug-trace` | You are a debugging expert specializing in setting up comprehensive debugging environments, distributed tracing, and diagnostic tools. Configure debugging wo... | distributed, debugging, debug, trace | distributed, debugging, debug, trace, specializing, setting, up, environments, tracing, diagnostic, configure, solutions | | `distributed-tracing` | Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microserv... | distributed, tracing | distributed, tracing, jaeger, tempo, track, requests, microservices, identify, performance, bottlenecks, debugging, analyzing | -| `django-pro` | Master Django 5.x with async views, DRF, Celery, and Django Channels. Build scalable web applications with proper architecture, testing, and deployment. | django | django, pro, async, views, drf, celery, channels, scalable, web, applications, proper, architecture | | `e2e-testing` | End-to-end testing workflow with Playwright for browser automation, visual regression, cross-browser testing, and CI/CD integration. | e2e | e2e, testing, playwright, browser, automation, visual, regression, cross, ci, cd, integration | | `e2e-testing-patterns` | Master end-to-end testing with Playwright and Cypress to build reliable test suites that catch bugs, improve confidence, and enable fast deployment. Use when... | e2e | e2e, testing, playwright, cypress, reliable, test, suites, catch, bugs, improve, confidence, enable | | `error-debugging-error-analysis` | You are an expert error analysis specialist with deep expertise in debugging distributed systems, analyzing production incidents, and implementing comprehens... | error, debugging | error, debugging, analysis, deep, expertise, distributed, analyzing, incidents, implementing, observability, solutions | @@ -714,7 +732,6 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `error-diagnostics-error-trace` | You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, conf... | error, diagnostics, trace | error, diagnostics, trace, tracking, observability, specializing, implementing, monitoring, solutions, set, up, configure | | `expo-deployment` | Deploy Expo apps to production | expo, deployment | expo, deployment, deploy, apps | | `file-uploads` | Expert at handling file uploads and cloud storage. Covers S3, Cloudflare R2, presigned URLs, multipart uploads, and image optimization. Knows how to handle l... | file, uploads | file, uploads, handling, cloud, storage, covers, s3, cloudflare, r2, presigned, urls, multipart | -| `flutter-expert` | Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment. | flutter | flutter, development, dart, widgets, multi, platform, deployment | | `freshservice-automation` | Automate Freshservice ITSM tasks via Rube MCP (Composio): create/update tickets, bulk operations, service requests, and outbound emails. Always search tools ... | freshservice | freshservice, automation, automate, itsm, tasks, via, rube, mcp, composio, update, tickets, bulk | | `game-development/game-art` | Game art principles. Visual style selection, asset pipeline, animation workflow. | game, development/game, art | game, development/game, art, principles, visual, style, selection, asset, pipeline, animation | | `gcp-cloud-run` | Specialized skill for building production-ready serverless applications on GCP. Covers Cloud Run services (containerized), Cloud Run Functions (event-driven)... | gcp, cloud, run | gcp, cloud, run, specialized, skill, building, serverless, applications, covers, containerized, functions, event | @@ -726,13 +743,12 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `gitops-workflow` | Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes deployments with continuous reconciliation. Use when implementing GitOp... | gitops | gitops, argocd, flux, automated, declarative, kubernetes, deployments, continuous, reconciliation, implementing, automating, deplo | | `grafana-dashboards` | Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visua... | grafana, dashboards | grafana, dashboards, real, time, visualization, application, metrics, building, monitoring, visualizing, creating, operational | | `helm-chart-scaffolding` | Design, organize, and manage Helm charts for templating and packaging Kubernetes applications with reusable configurations. Use when creating Helm charts, pa... | helm, chart | helm, chart, scaffolding, organize, charts, templating, packaging, kubernetes, applications, reusable, configurations, creating | -| `hybrid-cloud-architect` | Expert hybrid cloud architect specializing in complex multi-cloud solutions across AWS/Azure/GCP and private clouds (OpenStack/VMware). | hybrid, cloud | hybrid, cloud, architect, specializing, complex, multi, solutions, aws, azure, gcp, private, clouds | +| `hybrid-cloud-architect` | | hybrid, cloud | hybrid, cloud, architect | | `hybrid-cloud-networking` | Configure secure, high-performance connectivity between on-premises infrastructure and cloud platforms using VPN and dedicated connections. Use when building... | hybrid, cloud, networking | hybrid, cloud, networking, configure, secure, high, performance, connectivity, between, premises, infrastructure, platforms | | `istio-traffic-management` | Configure Istio traffic management including routing, load balancing, circuit breakers, and canary deployments. Use when implementing service mesh traffic po... | istio, traffic | istio, traffic, configure, including, routing, load, balancing, circuit, breakers, canary, deployments, implementing | | `iterate-pr` | Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automa... | iterate, pr | iterate, pr, until, ci, passes, fix, failures, address, review, feedback, continuously, push | -| `java-pro` | Master Java 21+ with modern features like virtual threads, pattern matching, and Spring Boot 3.x. Expert in the latest Java ecosystem including GraalVM, Proj... | java | java, pro, 21, features, like, virtual, threads, matching, spring, boot, latest, ecosystem | | `kpi-dashboard-design` | Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboard... | kpi, dashboard | kpi, dashboard, effective, dashboards, metrics, selection, visualization, real, time, monitoring, building, business | -| `kubernetes-architect` | Expert Kubernetes architect specializing in cloud-native infrastructure, advanced GitOps workflows (ArgoCD/Flux), and enterprise container orchestration. | kubernetes | kubernetes, architect, specializing, cloud, native, infrastructure, gitops, argocd, flux, enterprise, container, orchestration | +| `kubernetes-architect` | | kubernetes | kubernetes, architect | | `kubernetes-deployment` | Kubernetes deployment workflow for container orchestration, Helm charts, service mesh, and production-ready K8s configurations. | kubernetes, deployment | kubernetes, deployment, container, orchestration, helm, charts, mesh, k8s, configurations | | `langfuse` | Expert in Langfuse - the open-source LLM observability platform. Covers tracing, prompt management, evaluation, datasets, and integration with LangChain, Lla... | langfuse | langfuse, open, source, llm, observability, platform, covers, tracing, prompt, evaluation, datasets, integration | | `linux-troubleshooting` | Linux system troubleshooting workflow for diagnosing and resolving system issues, performance problems, and service failures. | linux, troubleshooting | linux, troubleshooting, diagnosing, resolving, issues, performance, problems, failures | @@ -740,11 +756,12 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `machine-learning-ops-ml-pipeline` | Design and implement a complete ML pipeline for: $ARGUMENTS | machine, learning, ops, ml, pipeline | machine, learning, ops, ml, pipeline, complete, arguments | | `manifest` | Install and configure the Manifest observability plugin for your agents. Use when setting up telemetry, configuring API keys, or troubleshooting the plugin. | manifest | manifest, install, configure, observability, plugin, agents, setting, up, telemetry, configuring, api, keys | | `microservices-patterns` | Design microservices architectures with service boundaries, event-driven communication, and resilience patterns. Use when building distributed systems, decom... | microservices | microservices, architectures, boundaries, event, driven, communication, resilience, building, distributed, decomposing, monoliths, implementing | -| `ml-engineer` | Build production ML systems with PyTorch 2.x, TensorFlow, and modern ML frameworks. Implements model serving, feature engineering, A/B testing, and monitoring. | ml | ml, engineer, pytorch, tensorflow, frameworks, implements, model, serving, feature, engineering, testing, monitoring | | `ml-pipeline-workflow` | Build end-to-end MLOps pipelines from data preparation through model training, validation, and production deployment. Use when creating ML pipelines, impleme... | ml, pipeline | ml, pipeline, mlops, pipelines, data, preparation, through, model, training, validation, deployment, creating | | `moodle-external-api-development` | Create custom external web service APIs for Moodle LMS. Use when implementing web services for course management, user tracking, quiz operations, or custom p... | moodle, external, api | moodle, external, api, development, custom, web, apis, lms, implementing, course, user, tracking | | `multi-cloud-architecture` | Design multi-cloud architectures using a decision framework to select and integrate services across AWS, Azure, and GCP. Use when building multi-cloud system... | multi, cloud, architecture | multi, cloud, architecture, architectures, decision, framework, select, integrate, aws, azure, gcp, building | | `network-101` | This skill should be used when the user asks to "set up a web server", "configure HTTP or HTTPS", "perform SNMP enumeration", "configure SMB shares", "test n... | network, 101 | network, 101, skill, should, used, user, asks, set, up, web, server, configure | +| `network-engineer` | | network | network, engineer | +| `observability-engineer` | | observability | observability, engineer | | `observability-monitoring-monitor-setup` | You are a monitoring and observability expert specializing in implementing comprehensive monitoring solutions. Set up metrics collection, distributed tracing... | observability, monitoring, monitor, setup | observability, monitoring, monitor, setup, specializing, implementing, solutions, set, up, metrics, collection, distributed | | `observability-monitoring-slo-implement` | You are an SLO (Service Level Objective) expert specializing in implementing reliability standards and error budget-based practices. Design SLO frameworks, d... | observability, monitoring, slo, implement | observability, monitoring, slo, implement, level, objective, specializing, implementing, reliability, standards, error, budget | | `performance-engineer` | Expert performance engineer specializing in modern observability, | performance | performance, engineer, specializing, observability | @@ -755,22 +772,17 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `server-management` | Server management principles and decision-making. Process management, monitoring strategy, and scaling decisions. Teaches thinking, not commands. | server | server, principles, decision, making, process, monitoring, scaling, decisions, teaches, thinking, commands | | `service-mesh-observability` | Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debu... | service, mesh, observability | service, mesh, observability, meshes, including, distributed, tracing, metrics, visualization, setting, up, monitoring | | `slo-implementation` | Define and implement Service Level Indicators (SLIs) and Service Level Objectives (SLOs) with error budgets and alerting. Use when establishing reliability t... | slo | slo, define, level, indicators, slis, objectives, slos, error, budgets, alerting, establishing, reliability | -| `sql-pro` | Master modern SQL with cloud-native databases, OLTP/OLAP optimization, and advanced query techniques. Expert in performance tuning, data modeling, and hybrid... | sql | sql, pro, cloud, native, databases, oltp, olap, optimization, query, techniques, performance, tuning | -| `temporal-python-pro` | Master Temporal workflow orchestration with Python SDK. Implements durable workflows, saga patterns, and distributed transactions. Covers async/await, testin... | temporal, python | temporal, python, pro, orchestration, sdk, implements, durable, saga, distributed, transactions, covers, async | | `terraform-aws-modules` | Terraform module creation for AWS — reusable modules, state management, and HCL best practices. Use when building or reviewing Terraform AWS infrastructure. | terraform, aws, modules | terraform, aws, modules, module, creation, reusable, state, hcl, building, reviewing, infrastructure | | `terraform-infrastructure` | Terraform infrastructure as code workflow for provisioning cloud resources, creating reusable modules, and managing infrastructure at scale. | terraform, infrastructure | terraform, infrastructure, code, provisioning, cloud, resources, creating, reusable, modules, managing, scale | | `terraform-module-library` | Build reusable Terraform modules for AWS, Azure, and GCP infrastructure following infrastructure-as-code best practices. Use when creating infrastructure mod... | terraform, module, library | terraform, module, library, reusable, modules, aws, azure, gcp, infrastructure, following, code, creating | | `terraform-skill` | Terraform infrastructure as code best practices | terraform, skill | terraform, skill, infrastructure, code | -| `terraform-specialist` | Expert Terraform/OpenTofu specialist mastering advanced IaC automation, state management, and enterprise infrastructure patterns. | terraform | terraform, opentofu, mastering, iac, automation, state, enterprise, infrastructure | -| `test-automator` | Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with a... | automator | automator, test, ai, powered, automation, frameworks, self, healing, tests, quality, engineering, scalable | -| `unity-developer` | Build Unity games with optimized C# scripts, efficient rendering, and proper asset management. Masters Unity 6 LTS, URP/HDRP pipelines, and cross-platform de... | unity | unity, developer, games, optimized, scripts, efficient, rendering, proper, asset, masters, lts, urp | +| `terraform-specialist` | | terraform | terraform | | `vercel-deploy-claimable` | Deploy applications and websites to Vercel. Use this skill when the user requests deployment actions such as 'Deploy my app', 'Deploy this to production', 'C... | vercel, deploy, claimable | vercel, deploy, claimable, applications, websites, skill, user, requests, deployment, actions, such, my | | `vercel-deployment` | Expert knowledge for deploying to Vercel with Next.js Use when: vercel, deploy, deployment, hosting, production. | vercel, deployment | vercel, deployment, knowledge, deploying, next, js, deploy, hosting | | `wireshark-analysis` | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow... | wireshark | wireshark, analysis, skill, should, used, user, asks, analyze, network, traffic, capture, packets | | `workflow-automation` | Workflow automation is the infrastructure that makes AI agents reliable. Without durable execution, a network hiccup during a 10-step payment flow means lost... | | automation, infrastructure, makes, ai, agents, reliable, without, durable, execution, network, hiccup, during | -| `x-twitter-scraper` | X (Twitter) data platform skill — tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction too... | [twitter, x-api, scraping, mcp, social-media, data-extraction, giveaway, monitoring, webhooks] | [twitter, x-api, scraping, mcp, social-media, data-extraction, giveaway, monitoring, webhooks], twitter, scraper, data | -## security (114) +## security (100) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -785,13 +797,12 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `auth-implementation-patterns` | Master authentication and authorization patterns including JWT, OAuth2, session management, and RBAC to build secure, scalable access control systems. Use wh... | auth | auth, authentication, authorization, including, jwt, oauth2, session, rbac, secure, scalable, access, control | | `aws-penetration-testing` | This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalatio... | aws, penetration | aws, penetration, testing, skill, should, used, user, asks, pentest, test, security, enumerate | | `azure-cosmos-db-py` | Build Azure Cosmos DB NoSQL services with Python/FastAPI following production-grade patterns. Use when implementing database client setup with dual auth (Def... | azure, cosmos, db, py | azure, cosmos, db, py, nosql, python, fastapi, following, grade, implementing, database, client | -| `azure-keyvault-py` | Azure Key Vault SDK for Python. Use for secrets, keys, and certificates management with secure storage. | azure, keyvault, py | azure, keyvault, py, key, vault, sdk, python, secrets, keys, certificates, secure, storage | -| `azure-keyvault-secrets-rust` | Azure Key Vault Secrets SDK for Rust. Use for storing and retrieving secrets, passwords, and API keys. Triggers: "keyvault secrets rust", "SecretClient rust"... | azure, keyvault, secrets, rust | azure, keyvault, secrets, rust, key, vault, sdk, storing, retrieving, passwords, api, keys | +| `azure-keyvault-secrets-rust` | | azure, keyvault, secrets, rust | azure, keyvault, secrets, rust | | `azure-keyvault-secrets-ts` | Manage secrets using Azure Key Vault Secrets SDK for JavaScript (@azure/keyvault-secrets). Use when storing and retrieving application secrets or configurati... | azure, keyvault, secrets, ts | azure, keyvault, secrets, ts, key, vault, sdk, javascript, storing, retrieving, application, configuration | -| `azure-security-keyvault-keys-dotnet` | Azure Key Vault Keys SDK for .NET. Client library for managing cryptographic keys in Azure Key Vault and Managed HSM. Use for key creation, rotation, encrypt... | azure, security, keyvault, keys, dotnet | azure, security, keyvault, keys, dotnet, key, vault, sdk, net, client, library, managing | +| `azure-security-keyvault-keys-dotnet` | | azure, security, keyvault, keys, dotnet | azure, security, keyvault, keys, dotnet | | `azure-security-keyvault-keys-java` | Azure Key Vault Keys Java SDK for cryptographic key management. Use when creating, managing, or using RSA/EC keys, performing encrypt/decrypt/sign/verify ope... | azure, security, keyvault, keys, java | azure, security, keyvault, keys, java, key, vault, sdk, cryptographic, creating, managing, rsa | | `azure-security-keyvault-secrets-java` | Azure Key Vault Secrets Java SDK for secret management. Use when storing, retrieving, or managing passwords, API keys, connection strings, or other sensitive... | azure, security, keyvault, secrets, java | azure, security, keyvault, secrets, java, key, vault, sdk, secret, storing, retrieving, managing | -| `backend-security-coder` | Expert in secure backend coding practices specializing in input validation, authentication, and API security. Use PROACTIVELY for backend security implementa... | backend, security, coder | backend, security, coder, secure, coding, specializing, input, validation, authentication, api, proactively, implementations | +| `backend-security-coder` | | backend, security, coder | backend, security, coder | | `broken-authentication` | This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential s... | broken, authentication | broken, authentication, skill, should, used, user, asks, test, vulnerabilities, assess, session, security | | `burp-suite-testing` | This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability sca... | burp, suite | burp, suite, testing, skill, should, used, user, asks, intercept, http, traffic, modify | | `cc-skill-security-review` | Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Pro... | cc, skill, security | cc, skill, security, review, adding, authentication, handling, user, input, working, secrets, creating | @@ -800,26 +811,22 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `code-review-checklist` | Comprehensive checklist for conducting thorough code reviews covering functionality, security, performance, and maintainability | code, checklist | code, checklist, review, conducting, thorough, reviews, covering, functionality, security, performance, maintainability | | `codebase-cleanup-deps-audit` | You are a dependency security expert specializing in vulnerability scanning, license compliance, and supply chain security. Analyze project dependencies for ... | codebase, cleanup, deps, audit | codebase, cleanup, deps, audit, dependency, security, specializing, vulnerability, scanning, license, compliance, supply | | `convex` | Convex reactive backend expert: schema design, TypeScript functions, real-time subscriptions, auth, file storage, scheduling, and deployment. | convex | convex, reactive, backend, schema, typescript, functions, real, time, subscriptions, auth, file, storage | -| `crypto-bd-agent` | Autonomous crypto business development patterns — multi-chain token discovery, 100-point scoring with wallet forensics, x402 micropayments, ERC-8004 on-chain... | crypto, bd, agent | crypto, bd, agent, autonomous, business, development, multi, chain, token, discovery, 100, point | -| `customs-trade-compliance` | Codified expertise for customs documentation, tariff classification, duty optimisation, restricted party screening, and regulatory compliance across multiple... | customs, trade, compliance | customs, trade, compliance, codified, expertise, documentation, tariff, classification, duty, optimisation, restricted, party | +| `customs-trade-compliance` | | customs, trade, compliance | customs, trade, compliance | | `database-migration` | Execute database migrations across ORMs and platforms with zero-downtime strategies, data transformation, and rollback procedures. Use when migrating databas... | database, migration | database, migration, execute, migrations, orms, platforms, zero, downtime, data, transformation, rollback, procedures | | `database-migrations-sql-migrations` | SQL database migrations with zero-downtime strategies for PostgreSQL, MySQL, and SQL Server. Focus on data integrity and rollback plans. | database, migrations, sql | database, migrations, sql, zero, downtime, postgresql, mysql, server, data, integrity, rollback, plans | | `dependency-management-deps-audit` | You are a dependency security expert specializing in vulnerability scanning, license compliance, and supply chain security. Analyze project dependencies for ... | dependency, deps, audit | dependency, deps, audit, security, specializing, vulnerability, scanning, license, compliance, supply, chain, analyze | | `deployment-pipeline-design` | Design multi-stage CI/CD pipelines with approval gates, security checks, and deployment orchestration. Use when architecting deployment workflows, setting up... | deployment, pipeline | deployment, pipeline, multi, stage, ci, cd, pipelines, approval, gates, security, checks, orchestration | -| `devops-troubleshooter` | Expert DevOps troubleshooter specializing in rapid incident response, advanced debugging, and modern observability. | devops, troubleshooter | devops, troubleshooter, specializing, rapid, incident, response, debugging, observability | | `docker-expert` | Docker containerization expert with deep knowledge of multi-stage builds, image optimization, container security, Docker Compose orchestration, and productio... | docker | docker, containerization, deep, knowledge, multi, stage, image, optimization, container, security, compose, orchestration | | `dotnet-backend` | Build ASP.NET Core 8+ backend services with EF Core, auth, background jobs, and production API patterns. | dotnet, backend | dotnet, backend, asp, net, core, ef, auth, background, jobs, api | | `ethical-hacking-methodology` | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct secur... | ethical, hacking, methodology | ethical, hacking, methodology, skill, should, used, user, asks, learn, understand, penetration, testing | | `find-bugs` | Find bugs, security vulnerabilities, and code quality issues in local branch changes. Use when asked to review changes, find bugs, security review, or audit ... | find, bugs | find, bugs, security, vulnerabilities, code, quality, issues, local, branch, changes, asked, review | | `firebase` | Firebase gives you a complete backend in minutes - auth, database, storage, functions, hosting. But the ease of setup hides real complexity. Security rules a... | firebase | firebase, gives, complete, backend, minutes, auth, database, storage, functions, hosting, ease, setup | -| `firmware-analyst` | Expert firmware analyst specializing in embedded systems, IoT security, and hardware reverse engineering. | firmware, analyst | firmware, analyst, specializing, embedded, iot, security, hardware, reverse, engineering | | `framework-migration-deps-upgrade` | You are a dependency management expert specializing in safe, incremental upgrades of project dependencies. Plan and execute dependency updates with minimal r... | framework, migration, deps, upgrade | framework, migration, deps, upgrade, dependency, specializing, safe, incremental, upgrades, dependencies, plan, execute | | `frontend-mobile-security-xss-scan` | You are a frontend security specialist focusing on Cross-Site Scripting (XSS) vulnerability detection and prevention. Analyze React, Vue, Angular, and vanill... | frontend, mobile, security, xss, scan | frontend, mobile, security, xss, scan, focusing, cross, site, scripting, vulnerability, detection, prevention | -| `frontend-security-coder` | Expert in secure frontend coding practices specializing in XSS prevention, output sanitization, and client-side security patterns. | frontend, security, coder | frontend, security, coder, secure, coding, specializing, xss, prevention, output, sanitization, client, side | +| `frontend-security-coder` | | frontend, security, coder | frontend, security, coder | | `gdpr-data-handling` | Implement GDPR-compliant data handling with consent management, data subject rights, and privacy by design. Use when building systems that process EU persona... | gdpr, data, handling | gdpr, data, handling, compliant, consent, subject, rights, privacy, building, process, eu, personal | -| `graphql-architect` | Master modern GraphQL with federation, performance optimization, and enterprise security. Build scalable schemas, implement advanced caching, and design real... | graphql | graphql, architect, federation, performance, optimization, enterprise, security, scalable, schemas, caching, real, time | | `grpc-golang` | Build production-ready gRPC services in Go with mTLS, streaming, and observability. Use when designing Protobuf contracts with Buf or implementing secure ser... | grpc, golang | grpc, golang, go, mtls, streaming, observability, designing, protobuf, contracts, buf, implementing, secure | -| `incident-responder` | Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management. | incident, responder | incident, responder, sre, specializing, rapid, problem, resolution, observability | +| `incident-responder` | | incident, responder | incident, responder | | `incident-response-incident-response` | Use when working with incident response incident response | incident, response | incident, response, working | | `incident-response-smart-fix` | [Extended thinking: This workflow implements a sophisticated debugging and resolution pipeline that leverages AI-assisted debugging tools and observability p... | incident, response, fix | incident, response, fix, smart, extended, thinking, implements, sophisticated, debugging, resolution, pipeline, leverages | | `incident-runbook-templates` | Create structured incident response runbooks with step-by-step procedures, escalation paths, and recovery actions. Use when building runbooks, responding to ... | incident, runbook | incident, runbook, structured, response, runbooks, step, procedures, escalation, paths, recovery, actions, building | @@ -827,52 +834,41 @@ scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines | `k8s-security-policies` | Implement Kubernetes security policies including NetworkPolicy, PodSecurityPolicy, and RBAC for production-grade security. Use when securing Kubernetes clust... | k8s, security, policies | k8s, security, policies, kubernetes, including, networkpolicy, podsecuritypolicy, rbac, grade, securing, clusters, implementing | | `laravel-expert` | Senior Laravel Engineer role for production-grade, maintainable, and idiomatic Laravel solutions. Focuses on clean architecture, security, performance, and m... | laravel | laravel, senior, engineer, role, grade, maintainable, idiomatic, solutions, clean, architecture, security, performance | | `laravel-security-audit` | Security auditor for Laravel applications. Analyzes code for vulnerabilities, misconfigurations, and insecure practices using OWASP standards and Laravel sec... | laravel, security, audit | laravel, security, audit, auditor, applications, analyzes, code, vulnerabilities, misconfigurations, insecure, owasp, standards | -| `legal-advisor` | Draft privacy policies, terms of service, disclaimers, and legal notices. Creates GDPR-compliant texts, cookie policies, and data processing agreements. | legal, advisor | legal, advisor, draft, privacy, policies, terms, disclaimers, notices, creates, gdpr, compliant, texts | | `linkerd-patterns` | Implement Linkerd service mesh patterns for lightweight, security-focused service mesh deployments. Use when setting up Linkerd, configuring traffic policies... | linkerd | linkerd, mesh, lightweight, security, deployments, setting, up, configuring, traffic, policies, implementing, zero | | `loki-mode` | Multi-agent autonomous startup system for Claude Code. Triggers on "Loki Mode". Orchestrates 100+ specialized agents across engineering, QA, DevOps, security... | loki, mode | loki, mode, multi, agent, autonomous, startup, claude, code, triggers, orchestrates, 100, specialized | -| `m365-agents-dotnet` | Microsoft 365 Agents SDK for .NET. Build multichannel agents for Teams/M365/Copilot Studio with ASP.NET Core hosting, AgentApplication routing, and MSAL-base... | m365, agents, dotnet | m365, agents, dotnet, microsoft, 365, sdk, net, multichannel, teams, copilot, studio, asp | -| `m365-agents-py` | Microsoft 365 Agents SDK for Python. Build multichannel agents for Teams/M365/Copilot Studio with aiohttp hosting, AgentApplication routing, streaming respon... | m365, agents, py | m365, agents, py, microsoft, 365, sdk, python, multichannel, teams, copilot, studio, aiohttp | -| `malware-analyst` | Expert malware analyst specializing in defensive malware research, threat intelligence, and incident response. Masters sandbox analysis, behavioral analysis,... | malware, analyst | malware, analyst, specializing, defensive, research, threat, intelligence, incident, response, masters, sandbox, analysis | +| `malware-analyst` | | malware, analyst | malware, analyst | | `memory-forensics` | Master memory forensics techniques including memory acquisition, process analysis, and artifact extraction using Volatility and related tools. Use when analy... | memory, forensics | memory, forensics, techniques, including, acquisition, process, analysis, artifact, extraction, volatility, related, analyzing | -| `mobile-security-coder` | Expert in secure mobile coding practices specializing in input validation, WebView security, and mobile-specific security patterns. | mobile, security, coder | mobile, security, coder, secure, coding, specializing, input, validation, webview, specific | +| `mobile-security-coder` | | mobile, security, coder | mobile, security, coder | | `mtls-configuration` | Configure mutual TLS (mTLS) for zero-trust service-to-service communication. Use when implementing zero-trust networking, certificate management, or securing... | mtls, configuration | mtls, configuration, configure, mutual, tls, zero, trust, communication, implementing, networking, certificate, securing | | `nestjs-expert` | Nest.js framework expert specializing in module architecture, dependency injection, middleware, guards, interceptors, testing with Jest/Supertest, TypeORM/Mo... | nestjs | nestjs, nest, js, framework, specializing, module, architecture, dependency, injection, middleware, guards, interceptors | -| `network-engineer` | Expert network engineer specializing in modern cloud networking, security architectures, and performance optimization. | network | network, engineer, specializing, cloud, networking, security, architectures, performance, optimization | | `nextjs-supabase-auth` | Expert integration of Supabase Auth with Next.js App Router Use when: supabase auth next, authentication next.js, login supabase, auth middleware, protected ... | nextjs, supabase, auth | nextjs, supabase, auth, integration, next, js, app, router, authentication, login, middleware, protected | | `nodejs-best-practices` | Node.js development principles and decision-making. Framework selection, async patterns, security, and architecture. Teaches thinking, not copying. | nodejs, best, practices | nodejs, best, practices, node, js, development, principles, decision, making, framework, selection, async | | `notebooklm` | Use this skill to query your Google NotebookLM notebooks directly from Claude Code for source-grounded, citation-backed answers from Gemini. Browser automati... | notebooklm | notebooklm, skill, query, google, notebooks, directly, claude, code, source, grounded, citation, backed | -| `observability-engineer` | Build production-ready monitoring, logging, and tracing systems. Implements comprehensive observability strategies, SLI/SLO management, and incident response... | observability | observability, engineer, monitoring, logging, tracing, implements, sli, slo, incident, response | | `openapi-spec-generation` | Generate and maintain OpenAPI 3.1 specifications from code, design-first specs, and validation patterns. Use when creating API documentation, generating SDKs... | openapi, spec, generation | openapi, spec, generation, generate, maintain, specifications, code, first, specs, validation, creating, api | -| `payment-integration` | Integrate Stripe, PayPal, and payment processors. Handles checkout flows, subscriptions, webhooks, and PCI compliance. Use PROACTIVELY when implementing paym... | payment, integration | payment, integration, integrate, stripe, paypal, processors, checkout, flows, subscriptions, webhooks, pci, compliance | | `pci-compliance` | Implement PCI DSS compliance requirements for secure handling of payment card data and payment systems. Use when securing payment processing, achieving PCI c... | pci, compliance | pci, compliance, dss, requirements, secure, handling, payment, card, data, securing, processing, achieving | | `pentest-checklist` | This skill should be used when the user asks to "plan a penetration test", "create a security assessment checklist", "prepare for penetration testing", "defi... | pentest, checklist | pentest, checklist, skill, should, used, user, asks, plan, penetration, test, security, assessment | | `plaid-fintech` | Expert patterns for Plaid API integration including Link token flows, transactions sync, identity verification, Auth for ACH, balance checks, webhook handlin... | plaid, fintech | plaid, fintech, api, integration, including, link, token, flows, transactions, sync, identity, verification | | `popup-cro` | Create and optimize popups, modals, overlays, slide-ins, and banners to increase conversions without harming user experience or brand trust. | popup, cro | popup, cro, optimize, popups, modals, overlays, slide, ins, banners, increase, conversions, without | | `postmortem-writing` | Write effective blameless postmortems with root cause analysis, timelines, and action items. Use when conducting incident reviews, writing postmortem documen... | postmortem, writing | postmortem, writing, write, effective, blameless, postmortems, root, cause, analysis, timelines, action, items | -| `quant-analyst` | Build financial models, backtest trading strategies, and analyze market data. Implements risk metrics, portfolio optimization, and statistical arbitrage. | quant, analyst | quant, analyst, financial, models, backtest, trading, analyze, market, data, implements, risk, metrics | | `red-team-tactics` | Red team tactics principles based on MITRE ATT&CK. Attack phases, detection evasion, reporting. | red, team, tactics | red, team, tactics, principles, mitre, att, ck, attack, phases, detection, evasion, reporting | | `research-engineer` | An uncompromising Academic Research Engineer. Operates with absolute scientific rigor, objective criticism, and zero flair. Focuses on theoretical correctnes... | research | research, engineer, uncompromising, academic, operates, absolute, scientific, rigor, objective, criticism, zero, flair | -| `risk-manager` | Monitor portfolio risk, R-multiples, and position limits. Creates hedging strategies, calculates expectancy, and implements stop-losses. | risk, manager | risk, manager, monitor, portfolio, multiples, position, limits, creates, hedging, calculates, expectancy, implements | +| `risk-manager` | | risk, manager | risk, manager | | `risk-metrics-calculation` | Calculate portfolio risk metrics including VaR, CVaR, Sharpe, Sortino, and drawdown analysis. Use when measuring portfolio risk, implementing risk limits, or... | risk, metrics, calculation | risk, metrics, calculation, calculate, portfolio, including, var, cvar, sharpe, sortino, drawdown, analysis | | `sast-configuration` | Configure Static Application Security Testing (SAST) tools for automated vulnerability detection in application code. Use when setting up security scanning, ... | sast, configuration | sast, configuration, configure, static, application, security, testing, automated, vulnerability, detection, code, setting | | `scanning-tools` | This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wi... | scanning | scanning, skill, should, used, user, asks, perform, vulnerability, scan, networks, open, ports | | `secrets-management` | Implement secure secrets management for CI/CD pipelines using Vault, AWS Secrets Manager, or native platform solutions. Use when handling sensitive credentia... | secrets | secrets, secure, ci, cd, pipelines, vault, aws, manager, native, platform, solutions, handling | | `security-audit` | Comprehensive security auditing workflow covering web application testing, API security, penetration testing, vulnerability scanning, and security hardening. | security, audit | security, audit, auditing, covering, web, application, testing, api, penetration, vulnerability, scanning, hardening | -| `security-auditor` | Expert security auditor specializing in DevSecOps, comprehensive cybersecurity, and compliance frameworks. | security, auditor | security, auditor, specializing, devsecops, cybersecurity, compliance, frameworks | +| `security-auditor` | | security, auditor | security, auditor | | `security-bluebook-builder` | Build security Blue Books for sensitive apps | security, bluebook, builder | security, bluebook, builder, blue, books, sensitive, apps | | `security-compliance-compliance-check` | You are a compliance expert specializing in regulatory requirements for software systems including GDPR, HIPAA, SOC2, PCI-DSS, and other industry standards. ... | security, compliance, check | security, compliance, check, specializing, regulatory, requirements, software, including, gdpr, hipaa, soc2, pci | | `security-requirement-extraction` | Derive security requirements from threat models and business context. Use when translating threats into actionable requirements, creating security user stori... | security, requirement, extraction | security, requirement, extraction, derive, requirements, threat, models, business, context, translating, threats, actionable | | `security-scanning-security-dependencies` | You are a security expert specializing in dependency vulnerability analysis, SBOM generation, and supply chain security. Scan project dependencies across eco... | security, scanning, dependencies | security, scanning, dependencies, specializing, dependency, vulnerability, analysis, sbom, generation, supply, chain, scan | | `security-scanning-security-hardening` | Coordinate multi-layer security scanning and hardening across application, infrastructure, and compliance controls. | security, scanning, hardening | security, scanning, hardening, coordinate, multi, layer, application, infrastructure, compliance, controls | -| `security-scanning-security-sast` | Static Application Security Testing (SAST) for code vulnerability -analysis across multiple languages and frameworks | security, scanning, sast | security, scanning, sast, static, application, testing, code, vulnerability, analysis, multiple, languages, frameworks | +| `security-scanning-security-sast` | | security, scanning, sast | security, scanning, sast | | `security/aws-compliance-checker` | Automated compliance checking against CIS, PCI-DSS, HIPAA, and SOC 2 benchmarks | [aws, compliance, audit, cis, pci-dss, hipaa, kiro-cli] | [aws, compliance, audit, cis, pci-dss, hipaa, kiro-cli], aws, checker, automated, checking, against | | `security/aws-iam-best-practices` | IAM policy review, hardening, and least privilege implementation | [aws, iam, security, access-control, kiro-cli, least-privilege] | [aws, iam, security, access-control, kiro-cli, least-privilege], aws, policy, review, hardening, least, privilege | | `security/aws-secrets-rotation` | Automate AWS secrets rotation for RDS, API keys, and credentials | [aws, secrets-manager, security, automation, kiro-cli, credentials] | [aws, secrets-manager, security, automation, kiro-cli, credentials], aws, secrets, rotation, automate, rds, api | | `security/aws-security-audit` | Comprehensive AWS security posture assessment using AWS CLI and security best practices | [aws, security, audit, compliance, kiro-cli, security-assessment] | [aws, security, audit, compliance, kiro-cli, security-assessment], aws, posture, assessment, cli | -| `seo-authority-builder` | Analyzes content for E-E-A-T signals and suggests improvements to -build authority and trust. Identifies missing credibility elements. Use -PROACTIVELY for YMY... | seo, authority, builder | seo, authority, builder, analyzes, content, signals, suggests, improvements, trust, identifies, missing, credibility | | `seo-forensic-incident-response` | Investigate sudden drops in organic traffic or rankings and run a structured forensic SEO incident response with triage, root-cause analysis and recovery plan. | seo, forensic, incident, response | seo, forensic, incident, response, investigate, sudden, drops, organic, traffic, rankings, run, structured | | `service-mesh-expert` | Expert service mesh architect specializing in Istio, Linkerd, and cloud-native networking patterns. Masters traffic management, security policies, observabil... | service, mesh | service, mesh, architect, specializing, istio, linkerd, cloud, native, networking, masters, traffic, security | | `solidity-security` | Master smart contract security best practices to prevent common vulnerabilities and implement secure Solidity patterns. Use when writing smart contracts, aud... | solidity, security | solidity, security, smart, contract, prevent, common, vulnerabilities, secure, writing, contracts, auditing, existing | @@ -882,7 +878,6 @@ PROACTIVELY for YMY... | seo, authority, builder | seo, authority, builder, anal | `threat-mitigation-mapping` | Map identified threats to appropriate security controls and mitigations. Use when prioritizing security investments, creating remediation plans, or validatin... | threat, mitigation, mapping | threat, mitigation, mapping, map, identified, threats, appropriate, security, controls, mitigations, prioritizing, investments | | `threat-modeling-expert` | Expert in threat modeling methodologies, security architecture review, and risk assessment. Masters STRIDE, PASTA, attack trees, and security requirement ext... | threat, modeling | threat, modeling, methodologies, security, architecture, review, risk, assessment, masters, stride, pasta, attack | | `top-web-vulnerabilities` | This skill should be used when the user asks to "identify web application vulnerabilities", "explain common security flaws", "understand vulnerability catego... | top, web, vulnerabilities | top, web, vulnerabilities, skill, should, used, user, asks, identify, application, explain, common | -| `ui-visual-validator` | Rigorous visual validation expert specializing in UI testing, design system compliance, and accessibility verification. | ui, visual, validator | ui, visual, validator, rigorous, validation, specializing, testing, compliance, accessibility, verification | | `varlock-claude-skill` | Secure environment variable management ensuring secrets are never exposed in Claude sessions, terminals, logs, or git commits | varlock, claude, skill | varlock, claude, skill, secure, environment, variable, ensuring, secrets, never, exposed, sessions, terminals | | `vulnerability-scanner` | Advanced vulnerability analysis principles. OWASP 2025, Supply Chain Security, attack surface mapping, risk prioritization. | vulnerability, scanner | vulnerability, scanner, analysis, principles, owasp, 2025, supply, chain, security, attack, surface, mapping | | `web-design-guidelines` | Review UI code for Web Interface Guidelines compliance. Use when asked to \"review my UI\", \"check accessibility\", \"audit design\", \"review UX\", or \"ch... | web, guidelines | web, guidelines, review, ui, code, interface, compliance, asked, my, check, accessibility, audit | @@ -892,7 +887,7 @@ PROACTIVELY for YMY... | seo, authority, builder | seo, authority, builder, anal | `wordpress` | Complete WordPress development workflow covering theme development, plugin creation, WooCommerce integration, performance optimization, and security hardening. | wordpress | wordpress, complete, development, covering, theme, plugin, creation, woocommerce, integration, performance, optimization, security | | `wordpress-plugin-development` | WordPress plugin development workflow covering plugin architecture, hooks, admin interfaces, REST API, and security best practices. | wordpress, plugin | wordpress, plugin, development, covering, architecture, hooks, admin, interfaces, rest, api, security | -## testing (32) +## testing (31) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -900,8 +895,6 @@ PROACTIVELY for YMY... | seo, authority, builder | seo, authority, builder, anal | `circleci-automation` | Automate CircleCI tasks via Rube MCP (Composio): trigger pipelines, monitor workflows/jobs, retrieve artifacts and test metadata. Always search tools first f... | circleci | circleci, automation, automate, tasks, via, rube, mcp, composio, trigger, pipelines, monitor, jobs | | `conductor-implement` | Execute tasks from a track's implementation plan following TDD workflow | conductor, implement | conductor, implement, execute, tasks, track, plan, following, tdd | | `conductor-revert` | Git-aware undo by logical work unit (track, phase, or task) | conductor, revert | conductor, revert, git, aware, undo, logical, work, unit, track, phase, task | -| `debugger` | Debugging specialist for errors, test failures, and unexpected -behavior. Use proactively when encountering any issues. | debugger | debugger, debugging, errors, test, failures, unexpected, behavior, proactively, encountering, any, issues | | `dependency-upgrade` | Manage major dependency version upgrades with compatibility analysis, staged rollout, and comprehensive testing. Use when upgrading framework versions, updat... | dependency, upgrade | dependency, upgrade, major, version, upgrades, compatibility, analysis, staged, rollout, testing, upgrading, framework | | `file-path-traversal` | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web a... | file, path, traversal | file, path, traversal, skill, should, used, user, asks, test, directory, exploit, vulnerabilities | | `html-injection-testing` | This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applic... | html, injection | html, injection, testing, skill, should, used, user, asks, test, inject, web, pages | @@ -913,14 +906,14 @@ behavior. Use proactively when encountering any issues. | debugger | debugger, d | `screen-reader-testing` | Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issue... | screen, reader | screen, reader, testing, test, web, applications, readers, including, voiceover, nvda, jaws, validating | | `smtp-penetration-testing` | This skill should be used when the user asks to "perform SMTP penetration testing", "enumerate email users", "test for open mail relays", "grab SMTP banners"... | smtp, penetration | smtp, penetration, testing, skill, should, used, user, asks, perform, enumerate, email, users | | `ssh-penetration-testing` | This skill should be used when the user asks to "pentest SSH services", "enumerate SSH configurations", "brute force SSH credentials", "exploit SSH vulnerabi... | ssh, penetration | ssh, penetration, testing, skill, should, used, user, asks, pentest, enumerate, configurations, brute | -| `startup-metrics-framework` | This skill should be used when the user asks about \\\"key startup metrics", "SaaS metrics", "CAC and LTV", "unit economics", "burn multiple", "rule of 40", ... | startup, metrics, framework | startup, metrics, framework, skill, should, used, user, asks, about, key, saas, cac | | `systematic-debugging` | Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes | systematic, debugging | systematic, debugging, encountering, any, bug, test, failure, unexpected, behavior, before, proposing, fixes | -| `tdd-orchestrator` | Master TDD orchestrator specializing in red-green-refactor discipline, multi-agent workflow coordination, and comprehensive test-driven development practices. | tdd, orchestrator | tdd, orchestrator, specializing, red, green, refactor, discipline, multi, agent, coordination, test, driven | +| `tdd-orchestrator` | | tdd, orchestrator | tdd, orchestrator | | `tdd-workflow` | Test-Driven Development workflow principles. RED-GREEN-REFACTOR cycle. | tdd | tdd, test, driven, development, principles, red, green, refactor, cycle | | `tdd-workflows-tdd-cycle` | Use when working with tdd workflows tdd cycle | tdd, cycle | tdd, cycle, working | | `tdd-workflows-tdd-green` | Implement the minimal code needed to make failing tests pass in the TDD green phase. | tdd, green | tdd, green, minimal, code, needed, failing, tests, pass, phase | | `tdd-workflows-tdd-red` | Generate failing tests for the TDD red phase to define expected behavior and edge cases. | tdd, red | tdd, red, generate, failing, tests, phase, define, expected, behavior, edge, cases | | `tdd-workflows-tdd-refactor` | Use when working with tdd workflows tdd refactor | tdd, refactor | tdd, refactor, working | +| `test-automator` | | automator | automator, test | | `test-driven-development` | Use when implementing any feature or bugfix, before writing implementation code | driven | driven, test, development, implementing, any, feature, bugfix, before, writing, code | | `test-fixing` | Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test sui... | fixing | fixing, test, run, tests, systematically, fix, all, failing, smart, error, grouping, user | | `testing-qa` | Comprehensive testing and QA workflow covering unit testing, integration testing, E2E testing, browser automation, and quality assurance. | qa | qa, testing, covering, unit, integration, e2e, browser, automation, quality, assurance | @@ -930,7 +923,7 @@ behavior. Use proactively when encountering any issues. | debugger | debugger, d | `wordpress-penetration-testing` | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugi... | wordpress, penetration | wordpress, penetration, testing, skill, should, used, user, asks, pentest, sites, scan, vulnerabilities | | `xss-html-injection` | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exp... | xss, html, injection | xss, html, injection, skill, should, used, user, asks, test, vulnerabilities, perform, cross | -## workflow (86) +## workflow (87) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -939,6 +932,7 @@ behavior. Use proactively when encountering any issues. | debugger | debugger, d | `agent-orchestration-multi-agent-optimize` | Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughpu... | agent, multi, optimize | agent, multi, optimize, orchestration, coordinated, profiling, workload, distribution, cost, aware, improving, performance | | `airtable-automation` | Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas. | airtable | airtable, automation, automate, tasks, via, rube, mcp, composio, records, bases, tables, fields | | `amplitude-automation` | Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas. | amplitude | amplitude, automation, automate, tasks, via, rube, mcp, composio, events, user, activity, cohorts | +| `apify-influencer-discovery` | Find and evaluate influencers for brand partnerships, verify authenticity, and track collaboration performance across Instagram, Facebook, YouTube, and TikTok. | apify, influencer, discovery | apify, influencer, discovery, find, evaluate, influencers, brand, partnerships, verify, authenticity, track, collaboration | | `asana-automation` | Automate Asana tasks via Rube MCP (Composio): tasks, projects, sections, teams, workspaces. Always search tools first for current schemas. | asana | asana, automation, automate, tasks, via, rube, mcp, composio, sections, teams, workspaces, always | | `automate-whatsapp` | Build WhatsApp automations with Kapso workflows: configure WhatsApp triggers, edit workflow graphs, manage executions, deploy functions, and use databases/in... | automate, whatsapp | automate, whatsapp, automations, kapso, configure, triggers, edit, graphs, executions, deploy, functions, databases | | `bamboohr-automation` | Automate BambooHR tasks via Rube MCP (Composio): employees, time-off, benefits, dependents, employee updates. Always search tools first for current schemas. | bamboohr | bamboohr, automation, automate, tasks, via, rube, mcp, composio, employees, time, off, benefits | @@ -954,15 +948,14 @@ behavior. Use proactively when encountering any issues. | debugger | debugger, d | `coda-automation` | Automate Coda tasks via Rube MCP (Composio): manage docs, pages, tables, rows, formulas, permissions, and publishing. Always search tools first for current s... | coda | coda, automation, automate, tasks, via, rube, mcp, composio, docs, pages, tables, rows | | `conductor-manage` | Manage track lifecycle: archive, restore, delete, rename, and cleanup | conductor, manage | conductor, manage, track, lifecycle, archive, restore, delete, rename, cleanup | | `conductor-new-track` | Create a new track with specification and phased implementation plan | conductor, new, track | conductor, new, track, specification, phased, plan | +| `conductor-setup` | | conductor, setup | conductor, setup | | `conductor-status` | Display project status, active tracks, and next actions | conductor, status | conductor, status, display, active, tracks, next, actions | -| `conductor-validator` | Validates Conductor project artifacts for completeness, -consistency, and correctness. Use after setup, when diagnosing issues, or -before implementation to ve... | conductor, validator | conductor, validator, validates, artifacts, completeness, consistency, correctness, after, setup, diagnosing, issues, before | +| `conductor-validator` | | conductor, validator | conductor, validator | | `confluence-automation` | Automate Confluence page creation, content search, space management, labels, and hierarchy navigation via Rube MCP (Composio). Always search tools first for ... | confluence | confluence, automation, automate, page, creation, content, search, space, labels, hierarchy, navigation, via | | `convertkit-automation` | Automate ConvertKit (Kit) tasks via Rube MCP (Composio): manage subscribers, tags, broadcasts, and broadcast stats. Always search tools first for current sch... | convertkit | convertkit, automation, automate, kit, tasks, via, rube, mcp, composio, subscribers, tags, broadcasts | | `crewai` | Expert in CrewAI - the leading role-based multi-agent framework used by 60% of Fortune 500 companies. Covers agent design with roles and goals, task definiti... | crewai | crewai, leading, role, multi, agent, framework, used, 60, fortune, 500, companies, covers | | `datadog-automation` | Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools firs... | datadog | datadog, automation, automate, tasks, via, rube, mcp, composio, query, metrics, search, logs | -| `design-orchestration` | Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order. | | orchestration, orchestrates, routing, work, through, brainstorming, multi, agent, review, execution, readiness, correct | +| `design-orchestration` | Ensure that ideas become designs, designs are reviewed, and only validated designs reach implementation. | | orchestration, ideas, become, designs, reviewed, validated, reach | | `discord-automation` | Automate Discord tasks via Rube MCP (Composio): messages, channels, roles, webhooks, reactions. Always search tools first for current schemas. | discord | discord, automation, automate, tasks, via, rube, mcp, composio, messages, channels, roles, webhooks | | `docusign-automation` | Automate DocuSign tasks via Rube MCP (Composio): templates, envelopes, signatures, document management. Always search tools first for current schemas. | docusign | docusign, automation, automate, tasks, via, rube, mcp, composio, envelopes, signatures, document, always | | `dropbox-automation` | Automate Dropbox file management, sharing, search, uploads, downloads, and folder operations via Rube MCP (Composio). Always search tools first for current s... | dropbox | dropbox, automation, automate, file, sharing, search, uploads, downloads, folder, operations, via, rube | @@ -989,7 +982,7 @@ before implementation to ve... | conductor, validator | conductor, validator, va | `miro-automation` | Automate Miro tasks via Rube MCP (Composio): boards, items, sticky notes, frames, sharing, connectors. Always search tools first for current schemas. | miro | miro, automation, automate, tasks, via, rube, mcp, composio, boards, items, sticky, notes | | `mixpanel-automation` | Automate Mixpanel tasks via Rube MCP (Composio): events, segmentation, funnels, cohorts, user profiles, JQL queries. Always search tools first for current sc... | mixpanel | mixpanel, automation, automate, tasks, via, rube, mcp, composio, events, segmentation, funnels, cohorts | | `monday-automation` | Automate Monday.com work management including boards, items, columns, groups, subitems, and updates via Rube MCP (Composio). Always search tools first for cu... | monday | monday, automation, automate, com, work, including, boards, items, columns, groups, subitems, updates | -| `multi-agent-brainstorming` | Simulate a structured peer-review process using multiple specialized agents to validate designs, surface hidden assumptions, and identify failure modes befor... | multi, agent, brainstorming | multi, agent, brainstorming, simulate, structured, peer, review, process, multiple, specialized, agents, validate | +| `multi-agent-brainstorming` | Transform a single-agent design into a robust, review-validated design by simulating a formal peer-review process using multiple constrained agents. | multi, agent, brainstorming | multi, agent, brainstorming, transform, single, robust, review, validated, simulating, formal, peer, process | | `nerdzao-elite-gemini-high` | Modo Elite Coder + UX Pixel-Perfect otimizado especificamente para Gemini 3.1 Pro High. Workflow completo com foco em qualidade máxima e eficiência de tokens. | nerdzao, elite, gemini, high | nerdzao, elite, gemini, high, modo, coder, ux, pixel, perfect, otimizado, especificamente, para | | `notion-automation` | Automate Notion tasks via Rube MCP (Composio): pages, databases, blocks, comments, users. Always search tools first for current schemas. | notion | notion, automation, automate, tasks, via, rube, mcp, composio, pages, databases, blocks, comments | | `office-productivity` | Office productivity workflow covering document creation, spreadsheet automation, presentation generation, and integration with LibreOffice and Microsoft Offi... | office, productivity | office, productivity, covering, document, creation, spreadsheet, automation, presentation, generation, integration, libreoffice, microsoft | @@ -1012,7 +1005,6 @@ before implementation to ve... | conductor, validator | conductor, validator, va | `telegram-automation` | Automate Telegram tasks via Rube MCP (Composio): send messages, manage chats, share photos/documents, and handle bot commands. Always search tools first for ... | telegram | telegram, automation, automate, tasks, via, rube, mcp, composio, send, messages, chats, share | | `tiktok-automation` | Automate TikTok tasks via Rube MCP (Composio): upload/publish videos, post photos, manage content, and view user profiles/stats. Always search tools first fo... | tiktok | tiktok, automation, automate, tasks, via, rube, mcp, composio, upload, publish, videos, post | | `todoist-automation` | Automate Todoist task management, projects, sections, filtering, and bulk operations via Rube MCP (Composio). Always search tools first for current schemas. | todoist | todoist, automation, automate, task, sections, filtering, bulk, operations, via, rube, mcp, composio | -| `track-management` | Use this skill when creating, managing, or working with Conductor tracks - the logical work units for features, bugs, and refactors. Applies to spec.md, plan... | track | track, skill, creating, managing, working, conductor, tracks, logical, work, units, features, bugs | | `trello-automation` | Automate Trello boards, cards, and workflows via Rube MCP (Composio). Create cards, manage lists, assign members, and search across boards programmatically. | trello | trello, automation, automate, boards, cards, via, rube, mcp, composio, lists, assign, members | | `twitter-automation` | Automate Twitter/X tasks via Rube MCP (Composio): posts, search, users, bookmarks, lists, media. Always search tools first for current schemas. | twitter | twitter, automation, automate, tasks, via, rube, mcp, composio, posts, search, users, bookmarks | | `vercel-automation` | Automate Vercel tasks via Rube MCP (Composio): manage deployments, domains, DNS, env vars, projects, and teams. Always search tools first for current schemas. | vercel | vercel, automation, automate, tasks, via, rube, mcp, composio, deployments, domains, dns, env | diff --git a/README.md b/README.md index e1a0a867..775c60cc 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ -# 🌌 Antigravity Awesome Skills: 956+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More +# 🌌 Antigravity Awesome Skills: 966+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More -> **The Ultimate Collection of 956+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL** +> **The Ultimate Collection of 966+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL** [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Claude Code](https://img.shields.io/badge/Claude%20Code-Anthropic-purple)](https://claude.ai) @@ -30,7 +30,7 @@ If this project helps you, you can [support it here](https://buymeacoffee.com/si - ⚪ **OpenCode** (Open-source CLI) - 🌸 **AdaL CLI** (Self-evolving Coding Agent) -This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Microsoft**, **Supabase**, and **Vercel Labs**. +This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Microsoft**, **Supabase**, **Apify**, and **Vercel Labs**. ## Table of Contents @@ -42,7 +42,7 @@ This repository provides essential skills to transform your AI assistant into a - [🎁 Curated Collections (Bundles)](#curated-collections) - [🧭 Antigravity Workflows](#antigravity-workflows) - [📦 Features & Categories](#features--categories) -- [📚 Browse 956+ Skills](#browse-956-skills) +- [📚 Browse 966+ Skills](#browse-966-skills) - [🤝 How to Contribute](#how-to-contribute) - [💬 Community](#community) - [☕ Support the Project](#support-the-project) @@ -341,7 +341,7 @@ The repository is organized into specialized domains to transform your AI into a Counts change as new skills are added. For the current full registry, see [CATALOG.md](CATALOG.md). -## Browse 956+ Skills +## Browse 966+ Skills We have moved the full skill registry to a dedicated catalog to keep this README clean, and we've also introduced an interactive **Web App**! @@ -472,6 +472,7 @@ This collection would not be possible without the incredible work of the Claude - **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Supabase official skills - Postgres Best Practices. - **[microsoft/skills](https://github.com/microsoft/skills)**: Official Microsoft skills - Azure cloud services, Bot Framework, Cognitive Services, and enterprise development patterns across .NET, Python, TypeScript, Go, Rust, and Java. - **[google-gemini/gemini-skills](https://github.com/google-gemini/gemini-skills)**: Official Gemini skills - Gemini API, SDK and model interactions. +- **[apify/agent-skills](https://github.com/apify/agent-skills)**: Official Apify skills - Web scraping, data extraction and automation. ### Community Contributors @@ -499,8 +500,6 @@ This collection would not be possible without the incredible work of the Claude - **[nedcodes-ok/rule-porter](https://github.com/nedcodes-ok/rule-porter)**: Bidirectional rule converter between Cursor (.mdc), Claude Code (CLAUDE.md), GitHub Copilot, Windsurf, and legacy .cursorrules formats. Zero dependencies. - **[SSOJet/skills](https://github.com/ssojet/skills)**: Production-ready SSOJet skills and integration guides for popular frameworks and platforms — Node.js, Next.js, React, Java, .NET Core, Go, iOS, Android, and more. Works seamlessly with SSOJet SAML, OIDC, and enterprise SSO flows. Works with Cursor, Antigravity, Claude Code, and Windsurf. - **[MojoAuth/skills](https://github.com/MojoAuth/skills)**: Production-ready MojoAuth guides and examples for popular frameworks like Node.js, Next.js, React, Java, .NET Core, Go, iOS, and Android. -- **[Xquik-dev/x-twitter-scraper](https://github.com/Xquik-dev/x-twitter-scraper)**: X (Twitter) data platform — tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, MCP server. -- **[shmlkv/dna-claude-analysis](https://github.com/shmlkv/dna-claude-analysis)**: Personal genome analysis toolkit — Python scripts analyzing raw DNA data across 17 categories (health risks, ancestry, pharmacogenomics, nutrition, psychology, etc.) with terminal-style single-page HTML visualization. ### Inspirations @@ -569,8 +568,6 @@ We officially thank the following contributors for their help in making this rep - [@zinzied](https://github.com/zinzied) - [@code-vj](https://github.com/code-vj) - [@thuanlm](https://github.com/thuanlm) -- [@shmlkv](https://github.com/shmlkv) -- [@kriptoburak](https://github.com/kriptoburak) --- diff --git a/data/aliases.json b/data/aliases.json index aa254eac..5fe7bb69 100644 --- a/data/aliases.json +++ b/data/aliases.json @@ -7,6 +7,7 @@ "agent-orchestration-optimize": "agent-orchestration-multi-agent-optimize", "android-jetpack-expert": "android-jetpack-compose-expert", "api-testing-mock": "api-testing-observability-api-mock", + "apify-brand-monitoring": "apify-brand-reputation-monitoring", "templates": "app-builder/templates", "application-performance-optimization": "application-performance-performance-optimization", "azure-ai-dotnet": "azure-ai-agents-persistent-dotnet", diff --git a/data/bundles.json b/data/bundles.json index 8f56ca44..cc48f014 100644 --- a/data/bundles.json +++ b/data/bundles.json @@ -18,6 +18,7 @@ "api-security-best-practices", "api-security-testing", "api-testing-observability-api-mock", + "apify-actorization", "app-store-optimization", "appdeploy", "application-performance-performance-optimization", @@ -385,6 +386,10 @@ "airflow-dag-patterns", "analytics-tracking", "angular-ui-patterns", + "apify-actor-development", + "apify-content-analytics", + "apify-ecommerce", + "apify-ultimate-scraper", "appdeploy", "azure-ai-document-intelligence-dotnet", "azure-ai-document-intelligence-ts", @@ -489,6 +494,7 @@ "agent-evaluation", "airflow-dag-patterns", "api-testing-observability-api-mock", + "apify-brand-reputation-monitoring", "application-performance-performance-optimization", "aws-serverless", "azd-deployment", diff --git a/data/catalog.json b/data/catalog.json index 15393e71..618cc018 100644 --- a/data/catalog.json +++ b/data/catalog.json @@ -1,6 +1,6 @@ { "generatedAt": "2026-02-08T00:00:00.000Z", - "total": 956, + "total": 966, "skills": [ { "id": "00-andruia-consultant", @@ -28,33 +28,6 @@ ], "path": "skills/00-andruia-consultant/SKILL.md" }, - { - "id": "10-andruia-skill-smith", - "name": "10-andruia-skill-smith", - "description": "Ingeniero de Sistemas de Andru.ia. Diseña, redacta y despliega nuevas habilidades (skills) dentro del repositorio siguiendo el Estándar de Diamante.", - "category": "general", - "tags": [ - "10", - "andruia", - "skill", - "smith" - ], - "triggers": [ - "10", - "andruia", - "skill", - "smith", - "ingeniero", - "de", - "sistemas", - "andru", - "ia", - "dise", - "redacta", - "despliega" - ], - "path": "skills/10-andruia-skill-smith/SKILL.md" - }, { "id": "20-andruia-niche-intelligence", "name": "20-andruia-niche-intelligence", @@ -538,24 +511,14 @@ { "id": "ai-engineer", "name": "ai-engineer", - "description": "Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.", + "description": "", "category": "data-ai", "tags": [ "ai" ], "triggers": [ "ai", - "engineer", - "llm", - "applications", - "rag", - "intelligent", - "agents", - "implements", - "vector", - "search", - "multimodal", - "agent" + "engineer" ], "path": "skills/ai-engineer/SKILL.md" }, @@ -587,7 +550,7 @@ { "id": "ai-product", "name": "ai-product", - "description": "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...", + "description": "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...", "category": "data-ai", "tags": [ "ai", @@ -759,7 +722,7 @@ { "id": "analytics-tracking", "name": "analytics-tracking", - "description": "Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data.", + "description": "", "category": "data-ai", "tags": [ "analytics", @@ -767,13 +730,7 @@ ], "triggers": [ "analytics", - "tracking", - "audit", - "improve", - "produce", - "reliable", - "decision", - "data" + "tracking" ], "path": "skills/analytics-tracking/SKILL.md" }, @@ -825,24 +782,13 @@ { "id": "angular", "name": "angular", - "description": "Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns.", - "category": "architecture", + "description": "", + "category": "general", "tags": [ "angular" ], "triggers": [ - "angular", - "v20", - "deep", - "knowledge", - "signals", - "standalone", - "components", - "zoneless", - "applications", - "ssr", - "hydration", - "reactive" + "angular" ], "path": "skills/angular/SKILL.md" }, @@ -1072,25 +1018,15 @@ { "id": "api-documenter", "name": "api-documenter", - "description": "Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "api", "documenter" ], "triggers": [ "api", - "documenter", - "documentation", - "openapi", - "ai", - "powered", - "developer", - "experience", - "interactive", - "docs", - "generate", - "sdks" + "documenter" ], "path": "skills/api-documenter/SKILL.md" }, @@ -1223,6 +1159,314 @@ ], "path": "skills/api-testing-observability-api-mock/SKILL.md" }, + { + "id": "apify-actor-development", + "name": "apify-actor-development", + "description": "Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifying existing ones, or troubleshooting Acto...", + "category": "infrastructure", + "tags": [ + "apify", + "actor" + ], + "triggers": [ + "apify", + "actor", + "development", + "develop", + "debug", + "deploy", + "actors", + "serverless", + "cloud", + "programs", + "web", + "scraping" + ], + "path": "skills/apify-actor-development/SKILL.md" + }, + { + "id": "apify-actorization", + "name": "apify-actorization", + "description": "Convert existing projects into Apify Actors - serverless cloud programs. Actorize JavaScript/TypeScript (SDK with Actor.init/exit), Python (async context manager), or any language (CLI wrapper). Us...", + "category": "infrastructure", + "tags": [ + "apify", + "actorization" + ], + "triggers": [ + "apify", + "actorization", + "convert", + "existing", + "actors", + "serverless", + "cloud", + "programs", + "actorize", + "javascript", + "typescript", + "sdk" + ], + "path": "skills/apify-actorization/SKILL.md" + }, + { + "id": "apify-audience-analysis", + "name": "apify-audience-analysis", + "description": "Understand audience demographics, preferences, behavior patterns, and engagement quality across Facebook, Instagram, YouTube, and TikTok.", + "category": "architecture", + "tags": [ + "apify", + "audience" + ], + "triggers": [ + "apify", + "audience", + "analysis", + "understand", + "demographics", + "preferences", + "behavior", + "engagement", + "quality", + "facebook", + "instagram", + "youtube" + ], + "path": "skills/apify-audience-analysis/SKILL.md" + }, + { + "id": "apify-brand-reputation-monitoring", + "name": "apify-brand-reputation-monitoring", + "description": "Track reviews, ratings, sentiment, and brand mentions across Google Maps, Booking.com, TripAdvisor, Facebook, Instagram, YouTube, and TikTok. Use when user asks to monitor brand reputation, analyze...", + "category": "infrastructure", + "tags": [ + "apify", + "brand", + "reputation", + "monitoring" + ], + "triggers": [ + "apify", + "brand", + "reputation", + "monitoring", + "track", + "reviews", + "ratings", + "sentiment", + "mentions", + "google", + "maps", + "booking" + ], + "path": "skills/apify-brand-reputation-monitoring/SKILL.md" + }, + { + "id": "apify-competitor-intelligence", + "name": "apify-competitor-intelligence", + "description": "Analyze competitor strategies, content, pricing, ads, and market positioning across Google Maps, Booking.com, Facebook, Instagram, YouTube, and TikTok.", + "category": "business", + "tags": [ + "apify", + "competitor", + "intelligence" + ], + "triggers": [ + "apify", + "competitor", + "intelligence", + "analyze", + "content", + "pricing", + "ads", + "market", + "positioning", + "google", + "maps", + "booking" + ], + "path": "skills/apify-competitor-intelligence/SKILL.md" + }, + { + "id": "apify-content-analytics", + "name": "apify-content-analytics", + "description": "Track engagement metrics, measure campaign ROI, and analyze content performance across Instagram, Facebook, YouTube, and TikTok.", + "category": "data-ai", + "tags": [ + "apify", + "content", + "analytics" + ], + "triggers": [ + "apify", + "content", + "analytics", + "track", + "engagement", + "metrics", + "measure", + "campaign", + "roi", + "analyze", + "performance", + "instagram" + ], + "path": "skills/apify-content-analytics/SKILL.md" + }, + { + "id": "apify-ecommerce", + "name": "apify-ecommerce", + "description": "Scrape e-commerce data for pricing intelligence, customer reviews, and seller discovery across Amazon, Walmart, eBay, IKEA, and 50+ marketplaces. Use when user asks to monitor prices, track competi...", + "category": "data-ai", + "tags": [ + "apify", + "ecommerce" + ], + "triggers": [ + "apify", + "ecommerce", + "scrape", + "commerce", + "data", + "pricing", + "intelligence", + "customer", + "reviews", + "seller", + "discovery", + "amazon" + ], + "path": "skills/apify-ecommerce/SKILL.md" + }, + { + "id": "apify-influencer-discovery", + "name": "apify-influencer-discovery", + "description": "Find and evaluate influencers for brand partnerships, verify authenticity, and track collaboration performance across Instagram, Facebook, YouTube, and TikTok.", + "category": "workflow", + "tags": [ + "apify", + "influencer", + "discovery" + ], + "triggers": [ + "apify", + "influencer", + "discovery", + "find", + "evaluate", + "influencers", + "brand", + "partnerships", + "verify", + "authenticity", + "track", + "collaboration" + ], + "path": "skills/apify-influencer-discovery/SKILL.md" + }, + { + "id": "apify-lead-generation", + "name": "apify-lead-generation", + "description": "Generates B2B/B2C leads by scraping Google Maps, websites, Instagram, TikTok, Facebook, LinkedIn, YouTube, and Google Search. Use when user asks to find leads, prospects, businesses, build lead lis...", + "category": "general", + "tags": [ + "apify", + "lead", + "generation" + ], + "triggers": [ + "apify", + "lead", + "generation", + "generates", + "b2b", + "b2c", + "leads", + "scraping", + "google", + "maps", + "websites", + "instagram" + ], + "path": "skills/apify-lead-generation/SKILL.md" + }, + { + "id": "apify-market-research", + "name": "apify-market-research", + "description": "Analyze market conditions, geographic opportunities, pricing, consumer behavior, and product validation across Google Maps, Facebook, Instagram, Booking.com, and TripAdvisor.", + "category": "business", + "tags": [ + "apify", + "market", + "research" + ], + "triggers": [ + "apify", + "market", + "research", + "analyze", + "conditions", + "geographic", + "opportunities", + "pricing", + "consumer", + "behavior", + "product", + "validation" + ], + "path": "skills/apify-market-research/SKILL.md" + }, + { + "id": "apify-trend-analysis", + "name": "apify-trend-analysis", + "description": "Discover and track emerging trends across Google Trends, Instagram, Facebook, YouTube, and TikTok to inform content strategy.", + "category": "general", + "tags": [ + "apify", + "trend" + ], + "triggers": [ + "apify", + "trend", + "analysis", + "discover", + "track", + "emerging", + "trends", + "google", + "instagram", + "facebook", + "youtube", + "tiktok" + ], + "path": "skills/apify-trend-analysis/SKILL.md" + }, + { + "id": "apify-ultimate-scraper", + "name": "apify-ultimate-scraper", + "description": "Universal AI-powered web scraper for any platform. Scrape data from Instagram, Facebook, TikTok, YouTube, Google Maps, Google Search, Google Trends, Booking.com, and TripAdvisor. Use for lead gener...", + "category": "data-ai", + "tags": [ + "apify", + "ultimate", + "scraper" + ], + "triggers": [ + "apify", + "ultimate", + "scraper", + "universal", + "ai", + "powered", + "web", + "any", + "platform", + "scrape", + "data", + "instagram" + ], + "path": "skills/apify-ultimate-scraper/SKILL.md" + }, { "id": "app-builder", "name": "app-builder", @@ -1440,7 +1684,7 @@ { "id": "arm-cortex-expert", "name": "arm-cortex-expert", - "description": "Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD).", + "description": "", "category": "general", "tags": [ "arm", @@ -1448,17 +1692,7 @@ ], "triggers": [ "arm", - "cortex", - "senior", - "embedded", - "software", - "engineer", - "specializing", - "firmware", - "driver", - "development", - "microcontrollers", - "teensy" + "cortex" ], "path": "skills/arm-cortex-expert/SKILL.md" }, @@ -1877,7 +2111,7 @@ { "id": "azure-ai-agents-persistent-dotnet", "name": "azure-ai-agents-persistent-dotnet", - "description": "Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -1891,21 +2125,14 @@ "ai", "agents", "persistent", - "dotnet", - "sdk", - "net", - "low", - "level", - "creating", - "managing", - "threads" + "dotnet" ], "path": "skills/azure-ai-agents-persistent-dotnet/SKILL.md" }, { "id": "azure-ai-agents-persistent-java", "name": "azure-ai-agents-persistent-java", - "description": "Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -1919,14 +2146,7 @@ "ai", "agents", "persistent", - "java", - "sdk", - "low", - "level", - "creating", - "managing", - "threads", - "messages" + "java" ], "path": "skills/azure-ai-agents-persistent-java/SKILL.md" }, @@ -1987,7 +2207,7 @@ { "id": "azure-ai-contentsafety-py", "name": "azure-ai-contentsafety-py", - "description": "Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -1999,15 +2219,7 @@ "azure", "ai", "contentsafety", - "py", - "content", - "safety", - "sdk", - "python", - "detecting", - "harmful", - "text", - "images" + "py" ], "path": "skills/azure-ai-contentsafety-py/SKILL.md" }, @@ -2041,7 +2253,7 @@ { "id": "azure-ai-contentunderstanding-py", "name": "azure-ai-contentunderstanding-py", - "description": "Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2053,22 +2265,14 @@ "azure", "ai", "contentunderstanding", - "py", - "content", - "understanding", - "sdk", - "python", - "multimodal", - "extraction", - "documents", - "images" + "py" ], "path": "skills/azure-ai-contentunderstanding-py/SKILL.md" }, { "id": "azure-ai-document-intelligence-dotnet", "name": "azure-ai-document-intelligence-dotnet", - "description": "Azure AI Document Intelligence SDK for .NET. Extract text, tables, and structured data from documents using prebuilt and custom models.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2082,14 +2286,7 @@ "ai", "document", "intelligence", - "dotnet", - "sdk", - "net", - "extract", - "text", - "tables", - "structured", - "data" + "dotnet" ], "path": "skills/azure-ai-document-intelligence-dotnet/SKILL.md" }, @@ -2151,7 +2348,7 @@ { "id": "azure-ai-ml-py", "name": "azure-ai-ml-py", - "description": "Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2163,22 +2360,14 @@ "azure", "ai", "ml", - "py", - "machine", - "learning", - "sdk", - "v2", - "python", - "workspaces", - "jobs", - "models" + "py" ], "path": "skills/azure-ai-ml-py/SKILL.md" }, { "id": "azure-ai-openai-dotnet", "name": "azure-ai-openai-dotnet", - "description": "Azure OpenAI SDK for .NET. Client library for Azure OpenAI and OpenAI services. Use for chat completions, embeddings, image generation, audio transcription, and assistants.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2190,22 +2379,14 @@ "azure", "ai", "openai", - "dotnet", - "sdk", - "net", - "client", - "library", - "chat", - "completions", - "embeddings", - "image" + "dotnet" ], "path": "skills/azure-ai-openai-dotnet/SKILL.md" }, { "id": "azure-ai-projects-dotnet", "name": "azure-ai-projects-dotnet", - "description": "Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2215,23 +2396,14 @@ "triggers": [ "azure", "ai", - "dotnet", - "sdk", - "net", - "high", - "level", - "client", - "foundry", - "including", - "agents", - "connections" + "dotnet" ], "path": "skills/azure-ai-projects-dotnet/SKILL.md" }, { "id": "azure-ai-projects-java", "name": "azure-ai-projects-java", - "description": "Azure AI Projects SDK for Java. High-level SDK for Azure AI Foundry project management including connections, datasets, indexes, and evaluations.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2241,16 +2413,7 @@ "triggers": [ "azure", "ai", - "java", - "sdk", - "high", - "level", - "foundry", - "including", - "connections", - "datasets", - "indexes", - "evaluations" + "java" ], "path": "skills/azure-ai-projects-java/SKILL.md" }, @@ -2309,7 +2472,7 @@ { "id": "azure-ai-textanalytics-py", "name": "azure-ai-textanalytics-py", - "description": "Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language processing on text.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2321,22 +2484,14 @@ "azure", "ai", "textanalytics", - "py", - "text", - "analytics", - "sdk", - "sentiment", - "analysis", - "entity", - "recognition", - "key" + "py" ], "path": "skills/azure-ai-textanalytics-py/SKILL.md" }, { "id": "azure-ai-transcription-py", "name": "azure-ai-transcription-py", - "description": "Azure AI Transcription SDK for Python. Use for real-time and batch speech-to-text transcription with timestamps and diarization.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2348,22 +2503,14 @@ "azure", "ai", "transcription", - "py", - "sdk", - "python", - "real", - "time", - "batch", - "speech", - "text", - "timestamps" + "py" ], "path": "skills/azure-ai-transcription-py/SKILL.md" }, { "id": "azure-ai-translation-document-py", "name": "azure-ai-translation-document-py", - "description": "Azure AI Document Translation SDK for batch translation of documents with format preservation. Use for translating Word, PDF, Excel, PowerPoint, and other document formats at scale.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2377,21 +2524,14 @@ "ai", "translation", "document", - "py", - "sdk", - "batch", - "documents", - "format", - "preservation", - "translating", - "word" + "py" ], "path": "skills/azure-ai-translation-document-py/SKILL.md" }, { "id": "azure-ai-translation-text-py", "name": "azure-ai-translation-text-py", - "description": "Azure AI Text Translation SDK for real-time text translation, transliteration, language detection, and dictionary lookup. Use for translating text content in applications.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2405,14 +2545,7 @@ "ai", "translation", "text", - "py", - "sdk", - "real", - "time", - "transliteration", - "language", - "detection", - "dictionary" + "py" ], "path": "skills/azure-ai-translation-text-py/SKILL.md" }, @@ -2474,7 +2607,7 @@ { "id": "azure-ai-vision-imageanalysis-py", "name": "azure-ai-vision-imageanalysis-py", - "description": "Azure AI Vision Image Analysis SDK for captions, tags, objects, OCR, people detection, and smart cropping. Use for computer vision and image understanding tasks.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2488,21 +2621,14 @@ "ai", "vision", "imageanalysis", - "py", - "image", - "analysis", - "sdk", - "captions", - "tags", - "objects", - "ocr" + "py" ], "path": "skills/azure-ai-vision-imageanalysis-py/SKILL.md" }, { "id": "azure-ai-voicelive-dotnet", "name": "azure-ai-voicelive-dotnet", - "description": "Azure AI Voice Live SDK for .NET. Build real-time voice AI applications with bidirectional WebSocket communication.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2514,22 +2640,14 @@ "azure", "ai", "voicelive", - "dotnet", - "voice", - "live", - "sdk", - "net", - "real", - "time", - "applications", - "bidirectional" + "dotnet" ], "path": "skills/azure-ai-voicelive-dotnet/SKILL.md" }, { "id": "azure-ai-voicelive-java", "name": "azure-ai-voicelive-java", - "description": "Azure AI VoiceLive SDK for Java. Real-time bidirectional voice conversations with AI assistants using WebSocket.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2541,15 +2659,7 @@ "azure", "ai", "voicelive", - "java", - "sdk", - "real", - "time", - "bidirectional", - "voice", - "conversations", - "assistants", - "websocket" + "java" ], "path": "skills/azure-ai-voicelive-java/SKILL.md" }, @@ -2583,7 +2693,7 @@ { "id": "azure-ai-voicelive-ts", "name": "azure-ai-voicelive-ts", - "description": "Azure AI Voice Live SDK for JavaScript/TypeScript. Build real-time voice AI applications with bidirectional WebSocket communication.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -2595,22 +2705,14 @@ "azure", "ai", "voicelive", - "ts", - "voice", - "live", - "sdk", - "javascript", - "typescript", - "real", - "time", - "applications" + "ts" ], "path": "skills/azure-ai-voicelive-ts/SKILL.md" }, { "id": "azure-appconfiguration-java", "name": "azure-appconfiguration-java", - "description": "Azure App Configuration SDK for Java. Centralized application configuration management with key-value settings, feature flags, and snapshots.", + "description": "", "category": "development", "tags": [ "azure", @@ -2620,24 +2722,15 @@ "triggers": [ "azure", "appconfiguration", - "java", - "app", - "configuration", - "sdk", - "centralized", - "application", - "key", - "value", - "settings", - "feature" + "java" ], "path": "skills/azure-appconfiguration-java/SKILL.md" }, { "id": "azure-appconfiguration-py", "name": "azure-appconfiguration-py", - "description": "Azure App Configuration SDK for Python. Use for centralized configuration management, feature flags, and dynamic settings.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "appconfiguration", @@ -2646,16 +2739,7 @@ "triggers": [ "azure", "appconfiguration", - "py", - "app", - "configuration", - "sdk", - "python", - "centralized", - "feature", - "flags", - "dynamic", - "settings" + "py" ], "path": "skills/azure-appconfiguration-py/SKILL.md" }, @@ -2823,7 +2907,7 @@ { "id": "azure-compute-batch-java", "name": "azure-compute-batch-java", - "description": "Azure Batch SDK for Java. Run large-scale parallel and HPC batch jobs with pools, jobs, tasks, and compute nodes.", + "description": "", "category": "development", "tags": [ "azure", @@ -2835,23 +2919,15 @@ "azure", "compute", "batch", - "java", - "sdk", - "run", - "large", - "scale", - "parallel", - "hpc", - "jobs", - "pools" + "java" ], "path": "skills/azure-compute-batch-java/SKILL.md" }, { "id": "azure-containerregistry-py", "name": "azure-containerregistry-py", - "description": "Azure Container Registry SDK for Python. Use for managing container images, artifacts, and repositories.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "containerregistry", @@ -2860,15 +2936,7 @@ "triggers": [ "azure", "containerregistry", - "py", - "container", - "registry", - "sdk", - "python", - "managing", - "images", - "artifacts", - "repositories" + "py" ], "path": "skills/azure-containerregistry-py/SKILL.md" }, @@ -2902,8 +2970,8 @@ { "id": "azure-cosmos-java", "name": "azure-cosmos-java", - "description": "Azure Cosmos DB SDK for Java. NoSQL database operations with global distribution, multi-model support, and reactive patterns.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "cosmos", @@ -2912,24 +2980,15 @@ "triggers": [ "azure", "cosmos", - "java", - "db", - "sdk", - "nosql", - "database", - "operations", - "global", - "distribution", - "multi", - "model" + "java" ], "path": "skills/azure-cosmos-java/SKILL.md" }, { "id": "azure-cosmos-py", "name": "azure-cosmos-py", - "description": "Azure Cosmos DB SDK for Python (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "azure", "cosmos", @@ -2938,24 +2997,15 @@ "triggers": [ "azure", "cosmos", - "py", - "db", - "sdk", - "python", - "nosql", - "api", - "document", - "crud", - "queries", - "containers" + "py" ], "path": "skills/azure-cosmos-py/SKILL.md" }, { "id": "azure-cosmos-rust", "name": "azure-cosmos-rust", - "description": "Azure Cosmos DB SDK for Rust (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "cosmos", @@ -2964,24 +3014,15 @@ "triggers": [ "azure", "cosmos", - "rust", - "db", - "sdk", - "nosql", - "api", - "document", - "crud", - "queries", - "containers", - "globally" + "rust" ], "path": "skills/azure-cosmos-rust/SKILL.md" }, { "id": "azure-cosmos-ts", "name": "azure-cosmos-ts", - "description": "Azure Cosmos DB JavaScript/TypeScript SDK (@azure/cosmos) for data plane operations. Use for CRUD operations on documents, queries, bulk operations, and container management.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "azure", "cosmos", @@ -2990,16 +3031,7 @@ "triggers": [ "azure", "cosmos", - "ts", - "db", - "javascript", - "typescript", - "sdk", - "data", - "plane", - "operations", - "crud", - "documents" + "ts" ], "path": "skills/azure-cosmos-ts/SKILL.md" }, @@ -3033,7 +3065,7 @@ { "id": "azure-data-tables-py", "name": "azure-data-tables-py", - "description": "Azure Tables SDK for Python (Storage and Cosmos DB). Use for NoSQL key-value storage, entity CRUD, and batch operations.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -3045,22 +3077,14 @@ "azure", "data", "tables", - "py", - "sdk", - "python", - "storage", - "cosmos", - "db", - "nosql", - "key", - "value" + "py" ], "path": "skills/azure-data-tables-py/SKILL.md" }, { "id": "azure-eventgrid-dotnet", "name": "azure-eventgrid-dotnet", - "description": "Azure Event Grid SDK for .NET. Client library for publishing and consuming events with Azure Event Grid. Use for event-driven architectures, pub/sub messaging, CloudEvents, and EventGridEvents.", + "description": "", "category": "development", "tags": [ "azure", @@ -3070,16 +3094,7 @@ "triggers": [ "azure", "eventgrid", - "dotnet", - "event", - "grid", - "sdk", - "net", - "client", - "library", - "publishing", - "consuming", - "events" + "dotnet" ], "path": "skills/azure-eventgrid-dotnet/SKILL.md" }, @@ -3112,8 +3127,8 @@ { "id": "azure-eventgrid-py", "name": "azure-eventgrid-py", - "description": "Azure Event Grid SDK for Python. Use for publishing events, handling CloudEvents, and event-driven architectures.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "eventgrid", @@ -3122,23 +3137,14 @@ "triggers": [ "azure", "eventgrid", - "py", - "event", - "grid", - "sdk", - "python", - "publishing", - "events", - "handling", - "cloudevents", - "driven" + "py" ], "path": "skills/azure-eventgrid-py/SKILL.md" }, { "id": "azure-eventhub-dotnet", "name": "azure-eventhub-dotnet", - "description": "Azure Event Hubs SDK for .NET.", + "description": "", "category": "development", "tags": [ "azure", @@ -3148,11 +3154,7 @@ "triggers": [ "azure", "eventhub", - "dotnet", - "event", - "hubs", - "sdk", - "net" + "dotnet" ], "path": "skills/azure-eventhub-dotnet/SKILL.md" }, @@ -3185,8 +3187,8 @@ { "id": "azure-eventhub-py", "name": "azure-eventhub-py", - "description": "Azure Event Hubs SDK for Python streaming. Use for high-throughput event ingestion, producers, consumers, and checkpointing.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "eventhub", @@ -3195,24 +3197,15 @@ "triggers": [ "azure", "eventhub", - "py", - "event", - "hubs", - "sdk", - "python", - "streaming", - "high", - "throughput", - "ingestion", - "producers" + "py" ], "path": "skills/azure-eventhub-py/SKILL.md" }, { "id": "azure-eventhub-rust", "name": "azure-eventhub-rust", - "description": "Azure Event Hubs SDK for Rust. Use for sending and receiving events, streaming data ingestion.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "eventhub", @@ -3221,16 +3214,7 @@ "triggers": [ "azure", "eventhub", - "rust", - "event", - "hubs", - "sdk", - "sending", - "receiving", - "events", - "streaming", - "data", - "ingestion" + "rust" ], "path": "skills/azure-eventhub-rust/SKILL.md" }, @@ -3288,8 +3272,8 @@ { "id": "azure-identity-dotnet", "name": "azure-identity-dotnet", - "description": "Azure Identity SDK for .NET. Authentication library for Azure SDK clients using Microsoft Entra ID. Use for DefaultAzureCredential, managed identity, service principals, and developer credentials.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "azure", "identity", @@ -3298,16 +3282,7 @@ "triggers": [ "azure", "identity", - "dotnet", - "sdk", - "net", - "authentication", - "library", - "clients", - "microsoft", - "entra", - "id", - "defaultazurecredential" + "dotnet" ], "path": "skills/azure-identity-dotnet/SKILL.md" }, @@ -3339,8 +3314,8 @@ { "id": "azure-identity-py", "name": "azure-identity-py", - "description": "Azure Identity SDK for Python authentication. Use for DefaultAzureCredential, managed identity, service principals, and token caching.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "azure", "identity", @@ -3349,22 +3324,14 @@ "triggers": [ "azure", "identity", - "py", - "sdk", - "python", - "authentication", - "defaultazurecredential", - "managed", - "principals", - "token", - "caching" + "py" ], "path": "skills/azure-identity-py/SKILL.md" }, { "id": "azure-identity-rust", "name": "azure-identity-rust", - "description": "Azure Identity SDK for Rust authentication. Use for DeveloperToolsCredential, ManagedIdentityCredential, ClientSecretCredential, and token-based authentication.", + "description": "", "category": "development", "tags": [ "azure", @@ -3374,13 +3341,7 @@ "triggers": [ "azure", "identity", - "rust", - "sdk", - "authentication", - "developertoolscredential", - "managedidentitycredential", - "clientsecretcredential", - "token" + "rust" ], "path": "skills/azure-identity-rust/SKILL.md" }, @@ -3412,7 +3373,7 @@ { "id": "azure-keyvault-certificates-rust", "name": "azure-keyvault-certificates-rust", - "description": "Azure Key Vault Certificates SDK for Rust. Use for creating, importing, and managing certificates.", + "description": "", "category": "development", "tags": [ "azure", @@ -3424,20 +3385,14 @@ "azure", "keyvault", "certificates", - "rust", - "key", - "vault", - "sdk", - "creating", - "importing", - "managing" + "rust" ], "path": "skills/azure-keyvault-certificates-rust/SKILL.md" }, { "id": "azure-keyvault-keys-rust", "name": "azure-keyvault-keys-rust", - "description": "Azure Key Vault Keys SDK for Rust. Use for creating, managing, and using cryptographic keys. Triggers: \"keyvault keys rust\", \"KeyClient rust\", \"create key rust\", \"encrypt rust\", \"sign rust\".", + "description": "", "category": "development", "tags": [ "azure", @@ -3449,15 +3404,7 @@ "azure", "keyvault", "keys", - "rust", - "key", - "vault", - "sdk", - "creating", - "managing", - "cryptographic", - "triggers", - "keyclient" + "rust" ], "path": "skills/azure-keyvault-keys-rust/SKILL.md" }, @@ -3491,8 +3438,8 @@ { "id": "azure-keyvault-py", "name": "azure-keyvault-py", - "description": "Azure Key Vault SDK for Python. Use for secrets, keys, and certificates management with secure storage.", - "category": "security", + "description": "", + "category": "general", "tags": [ "azure", "keyvault", @@ -3501,23 +3448,14 @@ "triggers": [ "azure", "keyvault", - "py", - "key", - "vault", - "sdk", - "python", - "secrets", - "keys", - "certificates", - "secure", - "storage" + "py" ], "path": "skills/azure-keyvault-py/SKILL.md" }, { "id": "azure-keyvault-secrets-rust", "name": "azure-keyvault-secrets-rust", - "description": "Azure Key Vault Secrets SDK for Rust. Use for storing and retrieving secrets, passwords, and API keys. Triggers: \"keyvault secrets rust\", \"SecretClient rust\", \"get secret rust\", \"set secret rust\".", + "description": "", "category": "security", "tags": [ "azure", @@ -3529,15 +3467,7 @@ "azure", "keyvault", "secrets", - "rust", - "key", - "vault", - "sdk", - "storing", - "retrieving", - "passwords", - "api", - "keys" + "rust" ], "path": "skills/azure-keyvault-secrets-rust/SKILL.md" }, @@ -3571,8 +3501,8 @@ { "id": "azure-maps-search-dotnet", "name": "azure-maps-search-dotnet", - "description": "Azure Maps SDK for .NET. Location-based services including geocoding, routing, rendering, geolocation, and weather. Use for address search, directions, map tiles, IP geolocation, and weather data.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "maps", @@ -3583,15 +3513,7 @@ "azure", "maps", "search", - "dotnet", - "sdk", - "net", - "location", - "including", - "geocoding", - "routing", - "rendering", - "geolocation" + "dotnet" ], "path": "skills/azure-maps-search-dotnet/SKILL.md" }, @@ -3625,8 +3547,8 @@ { "id": "azure-messaging-webpubsubservice-py", "name": "azure-messaging-webpubsubservice-py", - "description": "Azure Web PubSub Service SDK for Python. Use for real-time messaging, WebSocket connections, and pub/sub patterns.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "azure", "messaging", @@ -3637,22 +3559,14 @@ "azure", "messaging", "webpubsubservice", - "py", - "web", - "pubsub", - "sdk", - "python", - "real", - "time", - "websocket", - "connections" + "py" ], "path": "skills/azure-messaging-webpubsubservice-py/SKILL.md" }, { "id": "azure-mgmt-apicenter-dotnet", "name": "azure-mgmt-apicenter-dotnet", - "description": "Azure API Center SDK for .NET. Centralized API inventory management with governance, versioning, and discovery.", + "description": "", "category": "development", "tags": [ "azure", @@ -3664,23 +3578,15 @@ "azure", "mgmt", "apicenter", - "dotnet", - "api", - "center", - "sdk", - "net", - "centralized", - "inventory", - "governance", - "versioning" + "dotnet" ], "path": "skills/azure-mgmt-apicenter-dotnet/SKILL.md" }, { "id": "azure-mgmt-apicenter-py", "name": "azure-mgmt-apicenter-py", - "description": "Azure API Center Management SDK for Python. Use for managing API inventory, metadata, and governance across your organization.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "mgmt", @@ -3691,22 +3597,14 @@ "azure", "mgmt", "apicenter", - "py", - "api", - "center", - "sdk", - "python", - "managing", - "inventory", - "metadata", - "governance" + "py" ], "path": "skills/azure-mgmt-apicenter-py/SKILL.md" }, { "id": "azure-mgmt-apimanagement-dotnet", "name": "azure-mgmt-apimanagement-dotnet", - "description": "Azure Resource Manager SDK for API Management in .NET.", + "description": "", "category": "development", "tags": [ "azure", @@ -3718,20 +3616,15 @@ "azure", "mgmt", "apimanagement", - "dotnet", - "resource", - "manager", - "sdk", - "api", - "net" + "dotnet" ], "path": "skills/azure-mgmt-apimanagement-dotnet/SKILL.md" }, { "id": "azure-mgmt-apimanagement-py", "name": "azure-mgmt-apimanagement-py", - "description": "Azure API Management SDK for Python. Use for managing APIM services, APIs, products, subscriptions, and policies.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "mgmt", @@ -3742,23 +3635,15 @@ "azure", "mgmt", "apimanagement", - "py", - "api", - "sdk", - "python", - "managing", - "apim", - "apis", - "products", - "subscriptions" + "py" ], "path": "skills/azure-mgmt-apimanagement-py/SKILL.md" }, { "id": "azure-mgmt-applicationinsights-dotnet", "name": "azure-mgmt-applicationinsights-dotnet", - "description": "Azure Application Insights SDK for .NET. Application performance monitoring and observability resource management.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "azure", "mgmt", @@ -3769,23 +3654,15 @@ "azure", "mgmt", "applicationinsights", - "dotnet", - "application", - "insights", - "sdk", - "net", - "performance", - "monitoring", - "observability", - "resource" + "dotnet" ], "path": "skills/azure-mgmt-applicationinsights-dotnet/SKILL.md" }, { "id": "azure-mgmt-arizeaiobservabilityeval-dotnet", "name": "azure-mgmt-arizeaiobservabilityeval-dotnet", - "description": "Azure Resource Manager SDK for Arize AI Observability and Evaluation (.NET).", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "azure", "mgmt", @@ -3796,23 +3673,15 @@ "azure", "mgmt", "arizeaiobservabilityeval", - "dotnet", - "resource", - "manager", - "sdk", - "arize", - "ai", - "observability", - "evaluation", - "net" + "dotnet" ], "path": "skills/azure-mgmt-arizeaiobservabilityeval-dotnet/SKILL.md" }, { "id": "azure-mgmt-botservice-dotnet", "name": "azure-mgmt-botservice-dotnet", - "description": "Azure Resource Manager SDK for Bot Service in .NET. Management plane operations for creating and managing Azure Bot resources, channels (Teams, DirectLine, Slack), and connection settings.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "azure", "mgmt", @@ -3823,23 +3692,15 @@ "azure", "mgmt", "botservice", - "dotnet", - "resource", - "manager", - "sdk", - "bot", - "net", - "plane", - "operations", - "creating" + "dotnet" ], "path": "skills/azure-mgmt-botservice-dotnet/SKILL.md" }, { "id": "azure-mgmt-botservice-py", "name": "azure-mgmt-botservice-py", - "description": "Azure Bot Service Management SDK for Python. Use for creating, managing, and configuring Azure Bot Service resources.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "azure", "mgmt", @@ -3850,21 +3711,14 @@ "azure", "mgmt", "botservice", - "py", - "bot", - "sdk", - "python", - "creating", - "managing", - "configuring", - "resources" + "py" ], "path": "skills/azure-mgmt-botservice-py/SKILL.md" }, { "id": "azure-mgmt-fabric-dotnet", "name": "azure-mgmt-fabric-dotnet", - "description": "Azure Resource Manager SDK for Fabric in .NET.", + "description": "", "category": "development", "tags": [ "azure", @@ -3876,19 +3730,15 @@ "azure", "mgmt", "fabric", - "dotnet", - "resource", - "manager", - "sdk", - "net" + "dotnet" ], "path": "skills/azure-mgmt-fabric-dotnet/SKILL.md" }, { "id": "azure-mgmt-fabric-py", "name": "azure-mgmt-fabric-py", - "description": "Azure Fabric Management SDK for Python. Use for managing Microsoft Fabric capacities and resources.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "mgmt", @@ -3899,13 +3749,7 @@ "azure", "mgmt", "fabric", - "py", - "sdk", - "python", - "managing", - "microsoft", - "capacities", - "resources" + "py" ], "path": "skills/azure-mgmt-fabric-py/SKILL.md" }, @@ -3939,8 +3783,8 @@ { "id": "azure-mgmt-weightsandbiases-dotnet", "name": "azure-mgmt-weightsandbiases-dotnet", - "description": "Azure Weights & Biases SDK for .NET. ML experiment tracking and model management via Azure Marketplace. Use for creating W&B instances, managing SSO, marketplace integration, and ML observability.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "azure", "mgmt", @@ -3951,15 +3795,7 @@ "azure", "mgmt", "weightsandbiases", - "dotnet", - "weights", - "biases", - "sdk", - "net", - "ml", - "experiment", - "tracking", - "model" + "dotnet" ], "path": "skills/azure-mgmt-weightsandbiases-dotnet/SKILL.md" }, @@ -3993,8 +3829,8 @@ { "id": "azure-monitor-ingestion-java", "name": "azure-monitor-ingestion-java", - "description": "Azure Monitor Ingestion SDK for Java. Send custom logs to Azure Monitor via Data Collection Rules (DCR) and Data Collection Endpoints (DCE).", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "monitor", @@ -4005,23 +3841,15 @@ "azure", "monitor", "ingestion", - "java", - "sdk", - "send", - "custom", - "logs", - "via", - "data", - "collection", - "rules" + "java" ], "path": "skills/azure-monitor-ingestion-java/SKILL.md" }, { "id": "azure-monitor-ingestion-py", "name": "azure-monitor-ingestion-py", - "description": "Azure Monitor Ingestion SDK for Python. Use for sending custom logs to Log Analytics workspace via Logs Ingestion API.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "azure", "monitor", @@ -4032,22 +3860,14 @@ "azure", "monitor", "ingestion", - "py", - "sdk", - "python", - "sending", - "custom", - "logs", - "log", - "analytics", - "workspace" + "py" ], "path": "skills/azure-monitor-ingestion-py/SKILL.md" }, { "id": "azure-monitor-opentelemetry-exporter-java", "name": "azure-monitor-opentelemetry-exporter-java", - "description": "Azure Monitor OpenTelemetry Exporter for Java. Export OpenTelemetry traces, metrics, and logs to Azure Monitor/Application Insights.", + "description": "", "category": "development", "tags": [ "azure", @@ -4061,21 +3881,15 @@ "monitor", "opentelemetry", "exporter", - "java", - "export", - "traces", - "metrics", - "logs", - "application", - "insights" + "java" ], "path": "skills/azure-monitor-opentelemetry-exporter-java/SKILL.md" }, { "id": "azure-monitor-opentelemetry-exporter-py", "name": "azure-monitor-opentelemetry-exporter-py", - "description": "Azure Monitor OpenTelemetry Exporter for Python. Use for low-level OpenTelemetry export to Application Insights.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "monitor", @@ -4088,21 +3902,15 @@ "monitor", "opentelemetry", "exporter", - "py", - "python", - "low", - "level", - "export", - "application", - "insights" + "py" ], "path": "skills/azure-monitor-opentelemetry-exporter-py/SKILL.md" }, { "id": "azure-monitor-opentelemetry-py", "name": "azure-monitor-opentelemetry-py", - "description": "Azure Monitor OpenTelemetry Distro for Python. Use for one-line Application Insights setup with auto-instrumentation.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "monitor", @@ -4113,15 +3921,7 @@ "azure", "monitor", "opentelemetry", - "py", - "distro", - "python", - "one", - "line", - "application", - "insights", - "setup", - "auto" + "py" ], "path": "skills/azure-monitor-opentelemetry-py/SKILL.md" }, @@ -4155,8 +3955,8 @@ { "id": "azure-monitor-query-java", "name": "azure-monitor-query-java", - "description": "Azure Monitor Query SDK for Java. Execute Kusto queries against Log Analytics workspaces and query metrics from Azure resources.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "monitor", @@ -4167,23 +3967,15 @@ "azure", "monitor", "query", - "java", - "sdk", - "execute", - "kusto", - "queries", - "against", - "log", - "analytics", - "workspaces" + "java" ], "path": "skills/azure-monitor-query-java/SKILL.md" }, { "id": "azure-monitor-query-py", "name": "azure-monitor-query-py", - "description": "Azure Monitor Query SDK for Python. Use for querying Log Analytics workspaces and Azure Monitor metrics.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "azure", "monitor", @@ -4194,21 +3986,14 @@ "azure", "monitor", "query", - "py", - "sdk", - "python", - "querying", - "log", - "analytics", - "workspaces", - "metrics" + "py" ], "path": "skills/azure-monitor-query-py/SKILL.md" }, { "id": "azure-postgres-ts", "name": "azure-postgres-ts", - "description": "Connect to Azure Database for PostgreSQL Flexible Server from Node.js/TypeScript using the pg (node-postgres) package.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -4218,24 +4003,15 @@ "triggers": [ "azure", "postgres", - "ts", - "connect", - "database", - "postgresql", - "flexible", - "server", - "node", - "js", - "typescript", - "pg" + "ts" ], "path": "skills/azure-postgres-ts/SKILL.md" }, { "id": "azure-resource-manager-cosmosdb-dotnet", "name": "azure-resource-manager-cosmosdb-dotnet", - "description": "Azure Resource Manager SDK for Cosmos DB in .NET.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "resource", @@ -4248,18 +4024,14 @@ "resource", "manager", "cosmosdb", - "dotnet", - "sdk", - "cosmos", - "db", - "net" + "dotnet" ], "path": "skills/azure-resource-manager-cosmosdb-dotnet/SKILL.md" }, { "id": "azure-resource-manager-durabletask-dotnet", "name": "azure-resource-manager-durabletask-dotnet", - "description": "Azure Resource Manager SDK for Durable Task Scheduler in .NET.", + "description": "", "category": "development", "tags": [ "azure", @@ -4273,19 +4045,14 @@ "resource", "manager", "durabletask", - "dotnet", - "sdk", - "durable", - "task", - "scheduler", - "net" + "dotnet" ], "path": "skills/azure-resource-manager-durabletask-dotnet/SKILL.md" }, { "id": "azure-resource-manager-mysql-dotnet", "name": "azure-resource-manager-mysql-dotnet", - "description": "Azure MySQL Flexible Server SDK for .NET. Database management for MySQL Flexible Server deployments.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -4299,20 +4066,14 @@ "resource", "manager", "mysql", - "dotnet", - "flexible", - "server", - "sdk", - "net", - "database", - "deployments" + "dotnet" ], "path": "skills/azure-resource-manager-mysql-dotnet/SKILL.md" }, { "id": "azure-resource-manager-playwright-dotnet", "name": "azure-resource-manager-playwright-dotnet", - "description": "Azure Resource Manager SDK for Microsoft Playwright Testing in .NET.", + "description": "", "category": "development", "tags": [ "azure", @@ -4326,19 +4087,15 @@ "resource", "manager", "playwright", - "dotnet", - "sdk", - "microsoft", - "testing", - "net" + "dotnet" ], "path": "skills/azure-resource-manager-playwright-dotnet/SKILL.md" }, { "id": "azure-resource-manager-postgresql-dotnet", "name": "azure-resource-manager-postgresql-dotnet", - "description": "Azure PostgreSQL Flexible Server SDK for .NET. Database management for PostgreSQL Flexible Server deployments.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "resource", @@ -4351,20 +4108,14 @@ "resource", "manager", "postgresql", - "dotnet", - "flexible", - "server", - "sdk", - "net", - "database", - "deployments" + "dotnet" ], "path": "skills/azure-resource-manager-postgresql-dotnet/SKILL.md" }, { "id": "azure-resource-manager-redis-dotnet", "name": "azure-resource-manager-redis-dotnet", - "description": "Azure Resource Manager SDK for Redis in .NET.", + "description": "", "category": "development", "tags": [ "azure", @@ -4378,16 +4129,14 @@ "resource", "manager", "redis", - "dotnet", - "sdk", - "net" + "dotnet" ], "path": "skills/azure-resource-manager-redis-dotnet/SKILL.md" }, { "id": "azure-resource-manager-sql-dotnet", "name": "azure-resource-manager-sql-dotnet", - "description": "Azure Resource Manager SDK for Azure SQL in .NET.", + "description": "", "category": "data-ai", "tags": [ "azure", @@ -4401,17 +4150,15 @@ "resource", "manager", "sql", - "dotnet", - "sdk", - "net" + "dotnet" ], "path": "skills/azure-resource-manager-sql-dotnet/SKILL.md" }, { "id": "azure-search-documents-dotnet", "name": "azure-search-documents-dotnet", - "description": "Azure AI Search SDK for .NET (Azure.Search.Documents). Use for building search applications with full-text, vector, semantic, and hybrid search.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "azure", "search", @@ -4422,23 +4169,15 @@ "azure", "search", "documents", - "dotnet", - "ai", - "sdk", - "net", - "building", - "applications", - "full", - "text", - "vector" + "dotnet" ], "path": "skills/azure-search-documents-dotnet/SKILL.md" }, { "id": "azure-search-documents-py", "name": "azure-search-documents-py", - "description": "Azure AI Search SDK for Python. Use for vector search, hybrid search, semantic ranking, indexing, and skillsets.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "azure", "search", @@ -4449,15 +4188,7 @@ "azure", "search", "documents", - "py", - "ai", - "sdk", - "python", - "vector", - "hybrid", - "semantic", - "ranking", - "indexing" + "py" ], "path": "skills/azure-search-documents-py/SKILL.md" }, @@ -4491,7 +4222,7 @@ { "id": "azure-security-keyvault-keys-dotnet", "name": "azure-security-keyvault-keys-dotnet", - "description": "Azure Key Vault Keys SDK for .NET. Client library for managing cryptographic keys in Azure Key Vault and Managed HSM. Use for key creation, rotation, encryption, decryption, signing, and verification.", + "description": "", "category": "security", "tags": [ "azure", @@ -4505,14 +4236,7 @@ "security", "keyvault", "keys", - "dotnet", - "key", - "vault", - "sdk", - "net", - "client", - "library", - "managing" + "dotnet" ], "path": "skills/azure-security-keyvault-keys-dotnet/SKILL.md" }, @@ -4575,8 +4299,8 @@ { "id": "azure-servicebus-dotnet", "name": "azure-servicebus-dotnet", - "description": "Azure Service Bus SDK for .NET. Enterprise messaging with queues, topics, subscriptions, and sessions.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "azure", "servicebus", @@ -4585,24 +4309,15 @@ "triggers": [ "azure", "servicebus", - "dotnet", - "bus", - "sdk", - "net", - "enterprise", - "messaging", - "queues", - "topics", - "subscriptions", - "sessions" + "dotnet" ], "path": "skills/azure-servicebus-dotnet/SKILL.md" }, { "id": "azure-servicebus-py", "name": "azure-servicebus-py", - "description": "Azure Service Bus SDK for Python messaging. Use for queues, topics, subscriptions, and enterprise messaging patterns.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "azure", "servicebus", @@ -4611,15 +4326,7 @@ "triggers": [ "azure", "servicebus", - "py", - "bus", - "sdk", - "python", - "messaging", - "queues", - "topics", - "subscriptions", - "enterprise" + "py" ], "path": "skills/azure-servicebus-py/SKILL.md" }, @@ -4652,8 +4359,8 @@ { "id": "azure-speech-to-text-rest-py", "name": "azure-speech-to-text-rest-py", - "description": "Azure Speech to Text REST API for short audio (Python). Use for simple speech recognition of audio files up to 60 seconds without the Speech SDK.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "speech", @@ -4668,13 +4375,7 @@ "to", "text", "rest", - "py", - "api", - "short", - "audio", - "python", - "simple", - "recognition" + "py" ], "path": "skills/azure-speech-to-text-rest-py/SKILL.md" }, @@ -4708,8 +4409,8 @@ { "id": "azure-storage-blob-py", "name": "azure-storage-blob-py", - "description": "Azure Blob Storage SDK for Python. Use for uploading, downloading, listing blobs, managing containers, and blob lifecycle.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4720,22 +4421,14 @@ "azure", "storage", "blob", - "py", - "sdk", - "python", - "uploading", - "downloading", - "listing", - "blobs", - "managing", - "containers" + "py" ], "path": "skills/azure-storage-blob-py/SKILL.md" }, { "id": "azure-storage-blob-rust", "name": "azure-storage-blob-rust", - "description": "Azure Blob Storage SDK for Rust. Use for uploading, downloading, and managing blobs and containers.", + "description": "", "category": "development", "tags": [ "azure", @@ -4747,21 +4440,15 @@ "azure", "storage", "blob", - "rust", - "sdk", - "uploading", - "downloading", - "managing", - "blobs", - "containers" + "rust" ], "path": "skills/azure-storage-blob-rust/SKILL.md" }, { "id": "azure-storage-blob-ts", "name": "azure-storage-blob-ts", - "description": "Azure Blob Storage JavaScript/TypeScript SDK (@azure/storage-blob) for blob operations. Use for uploading, downloading, listing, and managing blobs and containers.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4772,23 +4459,15 @@ "azure", "storage", "blob", - "ts", - "javascript", - "typescript", - "sdk", - "operations", - "uploading", - "downloading", - "listing", - "managing" + "ts" ], "path": "skills/azure-storage-blob-ts/SKILL.md" }, { "id": "azure-storage-file-datalake-py", "name": "azure-storage-file-datalake-py", - "description": "Azure Data Lake Storage Gen2 SDK for Python. Use for hierarchical file systems, big data analytics, and file/directory operations.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4801,22 +4480,15 @@ "storage", "file", "datalake", - "py", - "data", - "lake", - "gen2", - "sdk", - "python", - "hierarchical", - "big" + "py" ], "path": "skills/azure-storage-file-datalake-py/SKILL.md" }, { "id": "azure-storage-file-share-py", "name": "azure-storage-file-share-py", - "description": "Azure Storage File Share SDK for Python. Use for SMB file shares, directories, and file operations in the cloud.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4829,22 +4501,15 @@ "storage", "file", "share", - "py", - "sdk", - "python", - "smb", - "shares", - "directories", - "operations", - "cloud" + "py" ], "path": "skills/azure-storage-file-share-py/SKILL.md" }, { "id": "azure-storage-file-share-ts", "name": "azure-storage-file-share-ts", - "description": "Azure File Share JavaScript/TypeScript SDK (@azure/storage-file-share) for SMB file share operations.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4857,20 +4522,15 @@ "storage", "file", "share", - "ts", - "javascript", - "typescript", - "sdk", - "smb", - "operations" + "ts" ], "path": "skills/azure-storage-file-share-ts/SKILL.md" }, { "id": "azure-storage-queue-py", "name": "azure-storage-queue-py", - "description": "Azure Queue Storage SDK for Python. Use for reliable message queuing, task distribution, and asynchronous processing.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4881,23 +4541,15 @@ "azure", "storage", "queue", - "py", - "sdk", - "python", - "reliable", - "message", - "queuing", - "task", - "distribution", - "asynchronous" + "py" ], "path": "skills/azure-storage-queue-py/SKILL.md" }, { "id": "azure-storage-queue-ts", "name": "azure-storage-queue-ts", - "description": "Azure Queue Storage JavaScript/TypeScript SDK (@azure/storage-queue) for message queue operations. Use for sending, receiving, peeking, and deleting messages in queues.", - "category": "development", + "description": "", + "category": "general", "tags": [ "azure", "storage", @@ -4908,15 +4560,7 @@ "azure", "storage", "queue", - "ts", - "javascript", - "typescript", - "sdk", - "message", - "operations", - "sending", - "receiving", - "peeking" + "ts" ], "path": "skills/azure-storage-queue-ts/SKILL.md" }, @@ -4950,20 +4594,14 @@ { "id": "backend-architect", "name": "backend-architect", - "description": "Expert backend architect specializing in scalable API design, microservices architecture, and distributed systems.", + "description": "", "category": "development", "tags": [ "backend" ], "triggers": [ "backend", - "architect", - "specializing", - "scalable", - "api", - "microservices", - "architecture", - "distributed" + "architect" ], "path": "skills/backend-architect/SKILL.md" }, @@ -5019,7 +4657,7 @@ { "id": "backend-security-coder", "name": "backend-security-coder", - "description": "Expert in secure backend coding practices specializing in input validation, authentication, and API security. Use PROACTIVELY for backend security implementations or security code reviews.", + "description": "", "category": "security", "tags": [ "backend", @@ -5029,16 +4667,7 @@ "triggers": [ "backend", "security", - "coder", - "secure", - "coding", - "specializing", - "input", - "validation", - "authentication", - "api", - "proactively", - "implementations" + "coder" ], "path": "skills/backend-security-coder/SKILL.md" }, @@ -5167,24 +4796,14 @@ { "id": "bash-pro", "name": "bash-pro", - "description": "Master of defensive Bash scripting for production automation, CI/CD\npipelines, and system utilities. Expert in safe, portable, and testable shell\nscripts.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "bash" ], "triggers": [ "bash", - "pro", - "defensive", - "scripting", - "automation", - "ci", - "cd", - "pipelines", - "utilities", - "safe", - "portable", - "testable" + "pro" ], "path": "skills/bash-pro/SKILL.md" }, @@ -5408,24 +5027,14 @@ { "id": "blockchain-developer", "name": "blockchain-developer", - "description": "Build production-ready Web3 applications, smart contracts, and decentralized systems. Implements DeFi protocols, NFT platforms, DAOs, and enterprise blockchain integrations.", + "description": "", "category": "general", "tags": [ "blockchain" ], "triggers": [ "blockchain", - "developer", - "web3", - "applications", - "smart", - "contracts", - "decentralized", - "implements", - "defi", - "protocols", - "nft", - "platforms" + "developer" ], "path": "skills/blockchain-developer/SKILL.md" }, @@ -5728,25 +5337,15 @@ { "id": "business-analyst", "name": "business-analyst", - "description": "Master modern business analysis with AI-powered analytics, real-time dashboards, and data-driven insights. Build comprehensive KPI frameworks, predictive models, and strategic recommendations.", - "category": "data-ai", + "description": "", + "category": "business", "tags": [ "business", "analyst" ], "triggers": [ "business", - "analyst", - "analysis", - "ai", - "powered", - "analytics", - "real", - "time", - "dashboards", - "data", - "driven", - "insights" + "analyst" ], "path": "skills/business-analyst/SKILL.md" }, @@ -5822,7 +5421,7 @@ { "id": "c4-code", "name": "c4-code", - "description": "Expert C4 Code-level documentation specialist. Analyzes code directories to create comprehensive C4 code-level documentation including function signatures, arguments, dependencies, and code structure.", + "description": "", "category": "architecture", "tags": [ "c4", @@ -5830,24 +5429,14 @@ ], "triggers": [ "c4", - "code", - "level", - "documentation", - "analyzes", - "directories", - "including", - "function", - "signatures", - "arguments", - "dependencies", - "structure" + "code" ], "path": "skills/c4-code/SKILL.md" }, { "id": "c4-component", "name": "c4-component", - "description": "Expert C4 Component-level documentation specialist. Synthesizes C4 Code-level documentation into Component-level architecture, defining component boundaries, interfaces, and relationships.", + "description": "", "category": "architecture", "tags": [ "c4", @@ -5855,23 +5444,14 @@ ], "triggers": [ "c4", - "component", - "level", - "documentation", - "synthesizes", - "code", - "architecture", - "defining", - "boundaries", - "interfaces", - "relationships" + "component" ], "path": "skills/c4-component/SKILL.md" }, { "id": "c4-container", "name": "c4-container", - "description": "Expert C4 Container-level documentation specialist.", + "description": "", "category": "architecture", "tags": [ "c4", @@ -5879,33 +5459,21 @@ ], "triggers": [ "c4", - "container", - "level", - "documentation" + "container" ], "path": "skills/c4-container/SKILL.md" }, { "id": "c4-context", "name": "c4-context", - "description": "Expert C4 Context-level documentation specialist. Creates high-level system context diagrams, documents personas, user journeys, system features, and external dependencies.", + "description": "", "category": "architecture", "tags": [ "c4" ], "triggers": [ "c4", - "context", - "level", - "documentation", - "creates", - "high", - "diagrams", - "documents", - "personas", - "user", - "journeys", - "features" + "context" ], "path": "skills/c4-context/SKILL.md" }, @@ -6009,7 +5577,7 @@ { "id": "carrier-relationship-management", "name": "carrier-relationship-management", - "description": "Codified expertise for managing carrier portfolios, negotiating freight rates, tracking carrier performance, allocating freight, and maintaining strategic carrier relationships.", + "description": "", "category": "general", "tags": [ "carrier", @@ -6017,17 +5585,7 @@ ], "triggers": [ "carrier", - "relationship", - "codified", - "expertise", - "managing", - "portfolios", - "negotiating", - "freight", - "rates", - "tracking", - "performance", - "allocating" + "relationship" ], "path": "skills/carrier-relationship-management/SKILL.md" }, @@ -6618,24 +6176,14 @@ { "id": "cloud-architect", "name": "cloud-architect", - "description": "Expert cloud architect specializing in AWS/Azure/GCP multi-cloud infrastructure design, advanced IaC (Terraform/OpenTofu/CDK), FinOps cost optimization, and modern architectural patterns.", + "description": "", "category": "infrastructure", "tags": [ "cloud" ], "triggers": [ "cloud", - "architect", - "specializing", - "aws", - "azure", - "gcp", - "multi", - "infrastructure", - "iac", - "terraform", - "opentofu", - "cdk" + "architect" ], "path": "skills/cloud-architect/SKILL.md" }, @@ -7111,25 +6659,15 @@ { "id": "competitive-landscape", "name": "competitive-landscape", - "description": "This skill should be used when the user asks to \\\\\\\"analyze competitors\", \"assess competitive landscape\", \"identify differentiation\", \"evaluate market positioning\", \"apply Porter's Five Forces\",...", - "category": "business", + "description": "", + "category": "general", "tags": [ "competitive", "landscape" ], "triggers": [ "competitive", - "landscape", - "skill", - "should", - "used", - "user", - "asks", - "analyze", - "competitors", - "assess", - "identify", - "differentiation" + "landscape" ], "path": "skills/competitive-landscape/SKILL.md" }, @@ -7367,23 +6905,15 @@ { "id": "conductor-setup", "name": "conductor-setup", - "description": "Initialize project with Conductor artifacts (product definition,\ntech stack, workflow, style guides)", - "category": "business", + "description": "", + "category": "workflow", "tags": [ "conductor", "setup" ], "triggers": [ "conductor", - "setup", - "initialize", - "artifacts", - "product", - "definition", - "tech", - "stack", - "style", - "guides" + "setup" ], "path": "skills/conductor-setup/SKILL.md" }, @@ -7410,7 +6940,7 @@ { "id": "conductor-validator", "name": "conductor-validator", - "description": "Validates Conductor project artifacts for completeness,\nconsistency, and correctness. Use after setup, when diagnosing issues, or\nbefore implementation to verify project context.", + "description": "", "category": "workflow", "tags": [ "conductor", @@ -7418,17 +6948,7 @@ ], "triggers": [ "conductor", - "validator", - "validates", - "artifacts", - "completeness", - "consistency", - "correctness", - "after", - "setup", - "diagnosing", - "issues", - "before" + "validator" ], "path": "skills/conductor-validator/SKILL.md" }, @@ -7484,25 +7004,15 @@ { "id": "content-marketer", "name": "content-marketer", - "description": "Elite content marketing strategist specializing in AI-powered content creation, omnichannel distribution, SEO optimization, and data-driven performance marketing.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "content", "marketer" ], "triggers": [ "content", - "marketer", - "elite", - "marketing", - "strategist", - "specializing", - "ai", - "powered", - "creation", - "omnichannel", - "distribution", - "seo" + "marketer" ], "path": "skills/content-marketer/SKILL.md" }, @@ -7548,24 +7058,15 @@ { "id": "context-driven-development", "name": "context-driven-development", - "description": "Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship between product.md, tech-stack.md, and...", - "category": "business", + "description": "", + "category": "general", "tags": [ "driven" ], "triggers": [ "driven", "context", - "development", - "skill", - "working", - "conductor", - "methodology", - "managing", - "artifacts", - "understanding", - "relationship", - "between" + "development" ], "path": "skills/context-driven-development/SKILL.md" }, @@ -7622,24 +7123,14 @@ { "id": "context-manager", "name": "context-manager", - "description": "Elite AI context engineering specialist mastering dynamic context management, vector databases, knowledge graphs, and intelligent memory systems.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "manager" ], "triggers": [ "manager", - "context", - "elite", - "ai", - "engineering", - "mastering", - "dynamic", - "vector", - "databases", - "knowledge", - "graphs", - "intelligent" + "context" ], "path": "skills/context-manager/SKILL.md" }, @@ -7907,24 +7398,14 @@ { "id": "cpp-pro", "name": "cpp-pro", - "description": "Write idiomatic C++ code with modern features, RAII, smart pointers, and STL algorithms. Handles templates, move semantics, and performance optimization.", + "description": "", "category": "general", "tags": [ "cpp" ], "triggers": [ "cpp", - "pro", - "write", - "idiomatic", - "code", - "features", - "raii", - "smart", - "pointers", - "stl", - "algorithms", - "move" + "pro" ], "path": "skills/cpp-pro/SKILL.md" }, @@ -8004,8 +7485,8 @@ { "id": "crypto-bd-agent", "name": "crypto-bd-agent", - "description": "Autonomous crypto business development patterns — multi-chain token discovery, 100-point scoring with wallet forensics, x402 micropayments, ERC-8004 on-chain identity, LLM cascade routing, and...", - "category": "security", + "description": "", + "category": "general", "tags": [ "crypto", "bd", @@ -8014,40 +7495,21 @@ "triggers": [ "crypto", "bd", - "agent", - "autonomous", - "business", - "development", - "multi", - "chain", - "token", - "discovery", - "100", - "point" + "agent" ], "path": "skills/crypto-bd-agent/SKILL.md" }, { "id": "csharp-pro", "name": "csharp-pro", - "description": "Write modern C# code with advanced features like records, pattern matching, and async/await. Optimizes .NET applications, implements enterprise patterns, and ensures comprehensive testing.", + "description": "", "category": "development", "tags": [ "csharp" ], "triggers": [ "csharp", - "pro", - "write", - "code", - "features", - "like", - "records", - "matching", - "async", - "await", - "optimizes", - "net" + "pro" ], "path": "skills/csharp-pro/SKILL.md" }, @@ -8071,32 +7533,22 @@ { "id": "customer-support", "name": "customer-support", - "description": "Elite AI-powered customer support specialist mastering conversational AI, automated ticketing, sentiment analysis, and omnichannel support experiences.", - "category": "data-ai", + "description": "", + "category": "business", "tags": [ "customer", "support" ], "triggers": [ "customer", - "support", - "elite", - "ai", - "powered", - "mastering", - "conversational", - "automated", - "ticketing", - "sentiment", - "analysis", - "omnichannel" + "support" ], "path": "skills/customer-support/SKILL.md" }, { "id": "customs-trade-compliance", "name": "customs-trade-compliance", - "description": "Codified expertise for customs documentation, tariff classification, duty optimisation, restricted party screening, and regulatory compliance across multiple jurisdictions.", + "description": "", "category": "security", "tags": [ "customs", @@ -8106,16 +7558,7 @@ "triggers": [ "customs", "trade", - "compliance", - "codified", - "expertise", - "documentation", - "tariff", - "classification", - "duty", - "optimisation", - "restricted", - "party" + "compliance" ], "path": "skills/customs-trade-compliance/SKILL.md" }, @@ -8148,24 +7591,14 @@ { "id": "data-engineer", "name": "data-engineer", - "description": "Build scalable data pipelines, modern data warehouses, and real-time streaming architectures. Implements Apache Spark, dbt, Airflow, and cloud-native data platforms.", - "category": "infrastructure", + "description": "", + "category": "data-ai", "tags": [ "data" ], "triggers": [ "data", - "engineer", - "scalable", - "pipelines", - "warehouses", - "real", - "time", - "streaming", - "architectures", - "implements", - "apache", - "spark" + "engineer" ], "path": "skills/data-engineer/SKILL.md" }, @@ -8250,7 +7683,7 @@ { "id": "data-scientist", "name": "data-scientist", - "description": "Expert data scientist for advanced analytics, machine learning, and statistical modeling. Handles complex data analysis, predictive modeling, and business intelligence.", + "description": "", "category": "data-ai", "tags": [ "data", @@ -8258,17 +7691,7 @@ ], "triggers": [ "data", - "scientist", - "analytics", - "machine", - "learning", - "statistical", - "modeling", - "complex", - "analysis", - "predictive", - "business", - "intelligence" + "scientist" ], "path": "skills/data-scientist/SKILL.md" }, @@ -8348,46 +7771,29 @@ { "id": "database-admin", "name": "database-admin", - "description": "Expert database administrator specializing in modern cloud databases, automation, and reliability engineering.", - "category": "infrastructure", + "description": "", + "category": "data-ai", "tags": [ "database", "admin" ], "triggers": [ "database", - "admin", - "administrator", - "specializing", - "cloud", - "databases", - "automation", - "reliability", - "engineering" + "admin" ], "path": "skills/database-admin/SKILL.md" }, { "id": "database-architect", "name": "database-architect", - "description": "Expert database architect specializing in data layer design from scratch, technology selection, schema modeling, and scalable database architectures.", + "description": "", "category": "data-ai", "tags": [ "database" ], "triggers": [ "database", - "architect", - "specializing", - "data", - "layer", - "scratch", - "technology", - "selection", - "schema", - "modeling", - "scalable", - "architectures" + "architect" ], "path": "skills/database-architect/SKILL.md" }, @@ -8524,7 +7930,7 @@ { "id": "database-optimizer", "name": "database-optimizer", - "description": "Expert database optimizer specializing in modern performance tuning, query optimization, and scalable architectures.", + "description": "", "category": "data-ai", "tags": [ "database", @@ -8532,14 +7938,7 @@ ], "triggers": [ "database", - "optimizer", - "specializing", - "performance", - "tuning", - "query", - "optimization", - "scalable", - "architectures" + "optimizer" ], "path": "skills/database-optimizer/SKILL.md" }, @@ -8752,23 +8151,13 @@ { "id": "debugger", "name": "debugger", - "description": "Debugging specialist for errors, test failures, and unexpected\nbehavior. Use proactively when encountering any issues.", - "category": "testing", + "description": "", + "category": "general", "tags": [ "debugger" ], "triggers": [ - "debugger", - "debugging", - "errors", - "test", - "failures", - "unexpected", - "behavior", - "proactively", - "encountering", - "any", - "issues" + "debugger" ], "path": "skills/debugger/SKILL.md" }, @@ -8919,20 +8308,14 @@ { "id": "deployment-engineer", "name": "deployment-engineer", - "description": "Expert deployment engineer specializing in modern CI/CD pipelines, GitOps workflows, and advanced deployment automation.", + "description": "", "category": "infrastructure", "tags": [ "deployment" ], "triggers": [ "deployment", - "engineer", - "specializing", - "ci", - "cd", - "pipelines", - "gitops", - "automation" + "engineer" ], "path": "skills/deployment-engineer/SKILL.md" }, @@ -9033,22 +8416,17 @@ { "id": "design-orchestration", "name": "design-orchestration", - "description": "Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order.", + "description": "Ensure that ideas become designs, designs are reviewed, and only validated designs reach implementation.", "category": "workflow", "tags": [], "triggers": [ "orchestration", - "orchestrates", - "routing", - "work", - "through", - "brainstorming", - "multi", - "agent", - "review", - "execution", - "readiness", - "correct" + "ideas", + "become", + "designs", + "reviewed", + "validated", + "reach" ], "path": "skills/design-orchestration/SKILL.md" }, @@ -9076,21 +8454,15 @@ { "id": "devops-troubleshooter", "name": "devops-troubleshooter", - "description": "Expert DevOps troubleshooter specializing in rapid incident response, advanced debugging, and modern observability.", - "category": "security", + "description": "", + "category": "infrastructure", "tags": [ "devops", "troubleshooter" ], "triggers": [ "devops", - "troubleshooter", - "specializing", - "rapid", - "incident", - "response", - "debugging", - "observability" + "troubleshooter" ], "path": "skills/devops-troubleshooter/SKILL.md" }, @@ -9224,24 +8596,14 @@ { "id": "django-pro", "name": "django-pro", - "description": "Master Django 5.x with async views, DRF, Celery, and Django Channels. Build scalable web applications with proper architecture, testing, and deployment.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "django" ], "triggers": [ "django", - "pro", - "async", - "views", - "drf", - "celery", - "channels", - "scalable", - "web", - "applications", - "proper", - "architecture" + "pro" ], "path": "skills/django-pro/SKILL.md" }, @@ -9297,24 +8659,14 @@ { "id": "docs-architect", "name": "docs-architect", - "description": "Creates comprehensive technical documentation from existing codebases. Analyzes architecture, design patterns, and implementation details to produce long-form technical manuals and ebooks.", - "category": "architecture", + "description": "", + "category": "general", "tags": [ "docs" ], "triggers": [ "docs", - "architect", - "creates", - "technical", - "documentation", - "existing", - "codebases", - "analyzes", - "architecture", - "details", - "produce", - "long" + "architect" ], "path": "skills/docs-architect/SKILL.md" }, @@ -9470,24 +8822,14 @@ { "id": "dotnet-architect", "name": "dotnet-architect", - "description": "Expert .NET backend architect specializing in C#, ASP.NET Core, Entity Framework, Dapper, and enterprise application patterns.", + "description": "", "category": "development", "tags": [ "dotnet" ], "triggers": [ "dotnet", - "architect", - "net", - "backend", - "specializing", - "asp", - "core", - "entity", - "framework", - "dapper", - "enterprise", - "application" + "architect" ], "path": "skills/dotnet-architect/SKILL.md" }, @@ -9566,7 +8908,7 @@ { "id": "dx-optimizer", "name": "dx-optimizer", - "description": "Developer Experience specialist. Improves tooling, setup, and workflows. Use PROACTIVELY when setting up new projects, after team feedback, or when development friction is noticed.", + "description": "", "category": "general", "tags": [ "dx", @@ -9574,17 +8916,7 @@ ], "triggers": [ "dx", - "optimizer", - "developer", - "experience", - "improves", - "tooling", - "setup", - "proactively", - "setting", - "up", - "new", - "after" + "optimizer" ], "path": "skills/dx-optimizer/SKILL.md" }, @@ -9638,24 +8970,14 @@ { "id": "elixir-pro", "name": "elixir-pro", - "description": "Write idiomatic Elixir code with OTP patterns, supervision trees, and Phoenix LiveView. Masters concurrency, fault tolerance, and distributed systems.", - "category": "architecture", + "description": "", + "category": "general", "tags": [ "elixir" ], "triggers": [ "elixir", - "pro", - "write", - "idiomatic", - "code", - "otp", - "supervision", - "trees", - "phoenix", - "liveview", - "masters", - "concurrency" + "pro" ], "path": "skills/elixir-pro/SKILL.md" }, @@ -9687,7 +9009,7 @@ { "id": "email-systems", "name": "email-systems", - "description": "Email has the highest ROI of any marketing channel. $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, landing in spam folders. This skill cov...", + "description": "Email has the highest ROI of any marketing channel. $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, landing in spam folders. This skill cov...", "category": "business", "tags": [ "email" @@ -9761,7 +9083,7 @@ { "id": "energy-procurement", "name": "energy-procurement", - "description": "Codified expertise for electricity and gas procurement, tariff optimisation, demand charge management, renewable PPA evaluation, and multi-facility energy cost management.", + "description": "", "category": "general", "tags": [ "energy", @@ -9769,17 +9091,7 @@ ], "triggers": [ "energy", - "procurement", - "codified", - "expertise", - "electricity", - "gas", - "tariff", - "optimisation", - "demand", - "charge", - "renewable", - "ppa" + "procurement" ], "path": "skills/energy-procurement/SKILL.md" }, @@ -9881,25 +9193,15 @@ { "id": "error-detective", "name": "error-detective", - "description": "Search logs and codebases for error patterns, stack traces, and anomalies. Correlates errors across systems and identifies root causes.", - "category": "architecture", + "description": "", + "category": "general", "tags": [ "error", "detective" ], "triggers": [ "error", - "detective", - "search", - "logs", - "codebases", - "stack", - "traces", - "anomalies", - "correlates", - "errors", - "identifies", - "root" + "detective" ], "path": "skills/error-detective/SKILL.md" }, @@ -10273,24 +9575,14 @@ { "id": "fastapi-pro", "name": "fastapi-pro", - "description": "Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns.", + "description": "", "category": "development", "tags": [ "fastapi" ], "triggers": [ "fastapi", - "pro", - "high", - "performance", - "async", - "apis", - "sqlalchemy", - "pydantic", - "v2", - "microservices", - "websockets", - "python" + "pro" ], "path": "skills/fastapi-pro/SKILL.md" }, @@ -10565,22 +9857,15 @@ { "id": "firmware-analyst", "name": "firmware-analyst", - "description": "Expert firmware analyst specializing in embedded systems, IoT security, and hardware reverse engineering.", - "category": "security", + "description": "", + "category": "general", "tags": [ "firmware", "analyst" ], "triggers": [ "firmware", - "analyst", - "specializing", - "embedded", - "iot", - "security", - "hardware", - "reverse", - "engineering" + "analyst" ], "path": "skills/firmware-analyst/SKILL.md" }, @@ -10609,26 +9894,20 @@ { "id": "flutter-expert", "name": "flutter-expert", - "description": "Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "flutter" ], "triggers": [ - "flutter", - "development", - "dart", - "widgets", - "multi", - "platform", - "deployment" + "flutter" ], "path": "skills/flutter-expert/SKILL.md" }, { "id": "form-cro", "name": "form-cro", - "description": "Optimize any form that is NOT signup or account registration — including lead capture, contact, demo request, application, survey, quote, and checkout forms.", + "description": "", "category": "general", "tags": [ "form", @@ -10636,17 +9915,7 @@ ], "triggers": [ "form", - "cro", - "optimize", - "any", - "signup", - "account", - "registration", - "including", - "lead", - "capture", - "contact", - "demo" + "cro" ], "path": "skills/form-cro/SKILL.md" }, @@ -10934,24 +10203,14 @@ { "id": "frontend-developer", "name": "frontend-developer", - "description": "Build React components, implement responsive layouts, and handle client-side state management. Masters React 19, Next.js 15, and modern frontend architecture.", + "description": "", "category": "development", "tags": [ "frontend" ], "triggers": [ "frontend", - "developer", - "react", - "components", - "responsive", - "layouts", - "handle", - "client", - "side", - "state", - "masters", - "19" + "developer" ], "path": "skills/frontend-developer/SKILL.md" }, @@ -11012,7 +10271,7 @@ { "id": "frontend-security-coder", "name": "frontend-security-coder", - "description": "Expert in secure frontend coding practices specializing in XSS prevention, output sanitization, and client-side security patterns.", + "description": "", "category": "security", "tags": [ "frontend", @@ -11022,16 +10281,7 @@ "triggers": [ "frontend", "security", - "coder", - "secure", - "coding", - "specializing", - "xss", - "prevention", - "output", - "sanitization", - "client", - "side" + "coder" ], "path": "skills/frontend-security-coder/SKILL.md" }, @@ -11913,20 +11163,14 @@ { "id": "golang-pro", "name": "golang-pro", - "description": "Master Go 1.21+ with modern patterns, advanced concurrency, performance optimization, and production-ready microservices.", + "description": "", "category": "development", "tags": [ "golang" ], "triggers": [ "golang", - "pro", - "go", - "21", - "concurrency", - "performance", - "optimization", - "microservices" + "pro" ], "path": "skills/golang-pro/SKILL.md" }, @@ -12081,24 +11325,14 @@ { "id": "graphql-architect", "name": "graphql-architect", - "description": "Master modern GraphQL with federation, performance optimization, and enterprise security. Build scalable schemas, implement advanced caching, and design real-time systems.", - "category": "security", + "description": "", + "category": "general", "tags": [ "graphql" ], "triggers": [ "graphql", - "architect", - "federation", - "performance", - "optimization", - "enterprise", - "security", - "scalable", - "schemas", - "caching", - "real", - "time" + "architect" ], "path": "skills/graphql-architect/SKILL.md" }, @@ -12223,7 +11457,7 @@ { "id": "hig-components-content", "name": "hig-components-content", - "description": "Apple Human Interface Guidelines for content display components.", + "description": "", "category": "general", "tags": [ "hig", @@ -12233,19 +11467,14 @@ "triggers": [ "hig", "components", - "content", - "apple", - "human", - "interface", - "guidelines", - "display" + "content" ], "path": "skills/hig-components-content/SKILL.md" }, { "id": "hig-components-controls", "name": "hig-components-controls", - "description": "Apple HIG guidance for selection and input controls including pickers, toggles, sliders, steppers, segmented controls, combo boxes, text fields, text views, labels, token fields, virtual...", + "description": "", "category": "general", "tags": [ "hig", @@ -12255,23 +11484,14 @@ "triggers": [ "hig", "components", - "controls", - "apple", - "guidance", - "selection", - "input", - "including", - "pickers", - "toggles", - "sliders", - "steppers" + "controls" ], "path": "skills/hig-components-controls/SKILL.md" }, { "id": "hig-components-dialogs", "name": "hig-components-dialogs", - "description": "Apple HIG guidance for presentation components including alerts, action sheets, popovers, sheets, and digit entry views.", + "description": "", "category": "general", "tags": [ "hig", @@ -12281,23 +11501,14 @@ "triggers": [ "hig", "components", - "dialogs", - "apple", - "guidance", - "presentation", - "including", - "alerts", - "action", - "sheets", - "popovers", - "digit" + "dialogs" ], "path": "skills/hig-components-dialogs/SKILL.md" }, { "id": "hig-components-layout", "name": "hig-components-layout", - "description": "Apple Human Interface Guidelines for layout and navigation components.", + "description": "", "category": "general", "tags": [ "hig", @@ -12307,19 +11518,14 @@ "triggers": [ "hig", "components", - "layout", - "apple", - "human", - "interface", - "guidelines", - "navigation" + "layout" ], "path": "skills/hig-components-layout/SKILL.md" }, { "id": "hig-components-menus", "name": "hig-components-menus", - "description": "Apple HIG guidance for menu and button components including menus, context menus, dock menus, edit menus, the menu bar, toolbars, action buttons, pop-up buttons, pull-down buttons, disclosure...", + "description": "", "category": "general", "tags": [ "hig", @@ -12329,23 +11535,14 @@ "triggers": [ "hig", "components", - "menus", - "apple", - "guidance", - "menu", - "button", - "including", - "context", - "dock", - "edit", - "bar" + "menus" ], "path": "skills/hig-components-menus/SKILL.md" }, { "id": "hig-components-search", "name": "hig-components-search", - "description": "Apple HIG guidance for navigation-related components including search fields, page controls, and path controls.", + "description": "", "category": "general", "tags": [ "hig", @@ -12355,23 +11552,14 @@ "triggers": [ "hig", "components", - "search", - "apple", - "guidance", - "navigation", - "related", - "including", - "fields", - "page", - "controls", - "path" + "search" ], "path": "skills/hig-components-search/SKILL.md" }, { "id": "hig-components-status", "name": "hig-components-status", - "description": "Apple HIG guidance for status and progress UI components including progress indicators, status bars, and activity rings.", + "description": "", "category": "general", "tags": [ "hig", @@ -12381,23 +11569,14 @@ "triggers": [ "hig", "components", - "status", - "apple", - "guidance", - "progress", - "ui", - "including", - "indicators", - "bars", - "activity", - "rings" + "status" ], "path": "skills/hig-components-status/SKILL.md" }, { "id": "hig-components-system", "name": "hig-components-system", - "description": "Apple HIG guidance for system experience components: widgets, live activities, notifications, complications, home screen quick actions, top shelf, watch faces, app clips, and app shortcuts.", + "description": "", "category": "general", "tags": [ "hig", @@ -12405,24 +11584,14 @@ ], "triggers": [ "hig", - "components", - "apple", - "guidance", - "experience", - "widgets", - "live", - "activities", - "notifications", - "complications", - "home", - "screen" + "components" ], "path": "skills/hig-components-system/SKILL.md" }, { "id": "hig-foundations", "name": "hig-foundations", - "description": "Apple Human Interface Guidelines design foundations.", + "description": "", "category": "general", "tags": [ "hig", @@ -12430,62 +11599,42 @@ ], "triggers": [ "hig", - "foundations", - "apple", - "human", - "interface", - "guidelines" + "foundations" ], "path": "skills/hig-foundations/SKILL.md" }, { "id": "hig-inputs", "name": "hig-inputs", - "description": "Apple HIG guidance for input methods and interaction patterns: gestures, Apple Pencil, keyboards, game controllers, pointers, Digital Crown, eye tracking, focus system, remotes, spatial...", - "category": "architecture", + "description": "", + "category": "general", "tags": [ "hig", "inputs" ], "triggers": [ "hig", - "inputs", - "apple", - "guidance", - "input", - "methods", - "interaction", - "gestures", - "pencil", - "keyboards", - "game", - "controllers" + "inputs" ], "path": "skills/hig-inputs/SKILL.md" }, { "id": "hig-patterns", "name": "hig-patterns", - "description": "Apple Human Interface Guidelines interaction and UX patterns.", + "description": "", "category": "architecture", "tags": [ "hig" ], "triggers": [ - "hig", - "apple", - "human", - "interface", - "guidelines", - "interaction", - "ux" + "hig" ], "path": "skills/hig-patterns/SKILL.md" }, { "id": "hig-platforms", "name": "hig-platforms", - "description": "Apple Human Interface Guidelines for platform-specific design.", + "description": "", "category": "general", "tags": [ "hig", @@ -12493,60 +11642,36 @@ ], "triggers": [ "hig", - "platforms", - "apple", - "human", - "interface", - "guidelines", - "platform", - "specific" + "platforms" ], "path": "skills/hig-platforms/SKILL.md" }, { "id": "hig-project-context", "name": "hig-project-context", - "description": "Create or update a shared Apple design context document that other HIG skills use to tailor guidance.", + "description": "", "category": "general", "tags": [ "hig" ], "triggers": [ "hig", - "context", - "update", - "shared", - "apple", - "document", - "other", - "skills", - "tailor", - "guidance" + "context" ], "path": "skills/hig-project-context/SKILL.md" }, { "id": "hig-technologies", "name": "hig-technologies", - "description": "Apple HIG guidance for Apple technology integrations: Siri, Apple Pay, HealthKit, HomeKit, ARKit, machine learning, generative AI, iCloud, Sign in with Apple, SharePlay, CarPlay, Game Center,...", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "hig", "technologies" ], "triggers": [ "hig", - "technologies", - "apple", - "guidance", - "technology", - "integrations", - "siri", - "pay", - "healthkit", - "homekit", - "arkit", - "machine" + "technologies" ], "path": "skills/hig-technologies/SKILL.md" }, @@ -12579,24 +11704,14 @@ { "id": "hr-pro", "name": "hr-pro", - "description": "Professional, ethical HR partner for hiring, onboarding/offboarding, PTO and leave, performance, compliant policies, and employee relations.", + "description": "", "category": "business", "tags": [ "hr" ], "triggers": [ "hr", - "pro", - "professional", - "ethical", - "partner", - "hiring", - "onboarding", - "offboarding", - "pto", - "leave", - "performance", - "compliant" + "pro" ], "path": "skills/hr-pro/SKILL.md" }, @@ -12729,7 +11844,7 @@ { "id": "hybrid-cloud-architect", "name": "hybrid-cloud-architect", - "description": "Expert hybrid cloud architect specializing in complex multi-cloud solutions across AWS/Azure/GCP and private clouds (OpenStack/VMware).", + "description": "", "category": "infrastructure", "tags": [ "hybrid", @@ -12738,16 +11853,7 @@ "triggers": [ "hybrid", "cloud", - "architect", - "specializing", - "complex", - "multi", - "solutions", - "aws", - "azure", - "gcp", - "private", - "clouds" + "architect" ], "path": "skills/hybrid-cloud-architect/SKILL.md" }, @@ -12853,31 +11959,20 @@ { "id": "imagen", "name": "imagen", - "description": "AI image generation skill powered by Google Gemini, enabling seamless visual content creation for UI placeholders, documentation, and design assets.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "imagen" ], "triggers": [ - "imagen", - "ai", - "image", - "generation", - "skill", - "powered", - "google", - "gemini", - "enabling", - "seamless", - "visual", - "content" + "imagen" ], "path": "skills/imagen/SKILL.md" }, { "id": "incident-responder", "name": "incident-responder", - "description": "Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management.", + "description": "", "category": "security", "tags": [ "incident", @@ -12885,13 +11980,7 @@ ], "triggers": [ "incident", - "responder", - "sre", - "specializing", - "rapid", - "problem", - "resolution", - "observability" + "responder" ], "path": "skills/incident-responder/SKILL.md" }, @@ -13139,7 +12228,7 @@ { "id": "inventory-demand-planning", "name": "inventory-demand-planning", - "description": "Codified expertise for demand forecasting, safety stock optimisation, replenishment planning, and promotional lift estimation at multi-location retailers.", + "description": "", "category": "general", "tags": [ "inventory", @@ -13149,40 +12238,21 @@ "triggers": [ "inventory", "demand", - "planning", - "codified", - "expertise", - "forecasting", - "safety", - "stock", - "optimisation", - "replenishment", - "promotional", - "lift" + "planning" ], "path": "skills/inventory-demand-planning/SKILL.md" }, { "id": "ios-developer", "name": "ios-developer", - "description": "Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "ios" ], "triggers": [ "ios", - "developer", - "develop", - "native", - "applications", - "swift", - "swiftui", - "masters", - "18", - "uikit", - "integration", - "core" + "developer" ], "path": "skills/ios-developer/SKILL.md" }, @@ -13239,24 +12309,14 @@ { "id": "java-pro", "name": "java-pro", - "description": "Master Java 21+ with modern features like virtual threads, pattern matching, and Spring Boot 3.x. Expert in the latest Java ecosystem including GraalVM, Project Loom, and cloud-native patterns.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "java" ], "triggers": [ "java", - "pro", - "21", - "features", - "like", - "virtual", - "threads", - "matching", - "spring", - "boot", - "latest", - "ecosystem" + "pro" ], "path": "skills/java-pro/SKILL.md" }, @@ -13288,24 +12348,14 @@ { "id": "javascript-pro", "name": "javascript-pro", - "description": "Master modern JavaScript with ES6+, async patterns, and Node.js APIs. Handles promises, event loops, and browser/Node compatibility.", + "description": "", "category": "development", "tags": [ "javascript" ], "triggers": [ "javascript", - "pro", - "es6", - "async", - "node", - "js", - "apis", - "promises", - "event", - "loops", - "browser", - "compatibility" + "pro" ], "path": "skills/javascript-pro/SKILL.md" }, @@ -13385,20 +12435,14 @@ { "id": "julia-pro", "name": "julia-pro", - "description": "Master Julia 1.10+ with modern features, performance optimization, multiple dispatch, and production-ready practices.", + "description": "", "category": "general", "tags": [ "julia" ], "triggers": [ "julia", - "pro", - "10", - "features", - "performance", - "optimization", - "multiple", - "dispatch" + "pro" ], "path": "skills/julia-pro/SKILL.md" }, @@ -13552,24 +12596,14 @@ { "id": "kubernetes-architect", "name": "kubernetes-architect", - "description": "Expert Kubernetes architect specializing in cloud-native infrastructure, advanced GitOps workflows (ArgoCD/Flux), and enterprise container orchestration.", + "description": "", "category": "infrastructure", "tags": [ "kubernetes" ], "triggers": [ "kubernetes", - "architect", - "specializing", - "cloud", - "native", - "infrastructure", - "gitops", - "argocd", - "flux", - "enterprise", - "container", - "orchestration" + "architect" ], "path": "skills/kubernetes-architect/SKILL.md" }, @@ -13769,7 +12803,7 @@ { "id": "legacy-modernizer", "name": "legacy-modernizer", - "description": "Refactor legacy codebases, migrate outdated frameworks, and implement gradual modernization. Handles technical debt, dependency updates, and backward compatibility.", + "description": "", "category": "general", "tags": [ "legacy", @@ -13777,42 +12811,22 @@ ], "triggers": [ "legacy", - "modernizer", - "refactor", - "codebases", - "migrate", - "outdated", - "frameworks", - "gradual", - "modernization", - "technical", - "debt", - "dependency" + "modernizer" ], "path": "skills/legacy-modernizer/SKILL.md" }, { "id": "legal-advisor", "name": "legal-advisor", - "description": "Draft privacy policies, terms of service, disclaimers, and legal notices. Creates GDPR-compliant texts, cookie policies, and data processing agreements.", - "category": "security", + "description": "", + "category": "business", "tags": [ "legal", "advisor" ], "triggers": [ "legal", - "advisor", - "draft", - "privacy", - "policies", - "terms", - "disclaimers", - "notices", - "creates", - "gdpr", - "compliant", - "texts" + "advisor" ], "path": "skills/legal-advisor/SKILL.md" }, @@ -14308,7 +13322,7 @@ { "id": "logistics-exception-management", "name": "logistics-exception-management", - "description": "Codified expertise for handling freight exceptions, shipment delays, damages, losses, and carrier disputes. Informed by logistics professionals with 15+ years operational experience.", + "description": "", "category": "general", "tags": [ "logistics", @@ -14316,17 +13330,7 @@ ], "triggers": [ "logistics", - "exception", - "codified", - "expertise", - "handling", - "freight", - "exceptions", - "shipment", - "delays", - "damages", - "losses", - "carrier" + "exception" ], "path": "skills/logistics-exception-management/SKILL.md" }, @@ -14358,8 +13362,8 @@ { "id": "m365-agents-dotnet", "name": "m365-agents-dotnet", - "description": "Microsoft 365 Agents SDK for .NET. Build multichannel agents for Teams/M365/Copilot Studio with ASP.NET Core hosting, AgentApplication routing, and MSAL-based auth.", - "category": "security", + "description": "", + "category": "development", "tags": [ "m365", "agents", @@ -14368,24 +13372,15 @@ "triggers": [ "m365", "agents", - "dotnet", - "microsoft", - "365", - "sdk", - "net", - "multichannel", - "teams", - "copilot", - "studio", - "asp" + "dotnet" ], "path": "skills/m365-agents-dotnet/SKILL.md" }, { "id": "m365-agents-py", "name": "m365-agents-py", - "description": "Microsoft 365 Agents SDK for Python. Build multichannel agents for Teams/M365/Copilot Studio with aiohttp hosting, AgentApplication routing, streaming responses, and MSAL-based auth.", - "category": "security", + "description": "", + "category": "general", "tags": [ "m365", "agents", @@ -14394,24 +13389,15 @@ "triggers": [ "m365", "agents", - "py", - "microsoft", - "365", - "sdk", - "python", - "multichannel", - "teams", - "copilot", - "studio", - "aiohttp" + "py" ], "path": "skills/m365-agents-py/SKILL.md" }, { "id": "m365-agents-ts", "name": "m365-agents-ts", - "description": "Microsoft 365 Agents SDK for TypeScript/Node.js.", - "category": "development", + "description": "", + "category": "general", "tags": [ "m365", "agents", @@ -14420,13 +13406,7 @@ "triggers": [ "m365", "agents", - "ts", - "microsoft", - "365", - "sdk", - "typescript", - "node", - "js" + "ts" ], "path": "skills/m365-agents-ts/SKILL.md" }, @@ -14527,7 +13507,7 @@ { "id": "malware-analyst", "name": "malware-analyst", - "description": "Expert malware analyst specializing in defensive malware research, threat intelligence, and incident response. Masters sandbox analysis, behavioral analysis, and malware family identification.", + "description": "", "category": "security", "tags": [ "malware", @@ -14535,17 +13515,7 @@ ], "triggers": [ "malware", - "analyst", - "specializing", - "defensive", - "research", - "threat", - "intelligence", - "incident", - "response", - "masters", - "sandbox", - "analysis" + "analyst" ], "path": "skills/malware-analyst/SKILL.md" }, @@ -14576,7 +13546,7 @@ { "id": "market-sizing-analysis", "name": "market-sizing-analysis", - "description": "This skill should be used when the user asks to \\\\\\\"calculate TAM\\\\\\\", \"determine SAM\", \"estimate SOM\", \"size the market\", \"calculate market opportunity\", \"what's the total addressable market\", or...", + "description": "", "category": "business", "tags": [ "market", @@ -14585,16 +13555,7 @@ "triggers": [ "market", "sizing", - "analysis", - "skill", - "should", - "used", - "user", - "asks", - "calculate", - "tam", - "determine", - "sam" + "analysis" ], "path": "skills/market-sizing-analysis/SKILL.md" }, @@ -14769,24 +13730,13 @@ { "id": "mermaid-expert", "name": "mermaid-expert", - "description": "Create Mermaid diagrams for flowcharts, sequences, ERDs, and architectures. Masters syntax for all diagram types and styling.", + "description": "", "category": "general", "tags": [ "mermaid" ], "triggers": [ - "mermaid", - "diagrams", - "flowcharts", - "sequences", - "erds", - "architectures", - "masters", - "syntax", - "all", - "diagram", - "types", - "styling" + "mermaid" ], "path": "skills/mermaid-expert/SKILL.md" }, @@ -14868,7 +13818,7 @@ { "id": "microsoft-azure-webjobs-extensions-authentication-events-dotnet", "name": "microsoft-azure-webjobs-extensions-authentication-events-dotnet", - "description": "Microsoft Entra Authentication Events SDK for .NET. Azure Functions triggers for custom authentication extensions.", + "description": "", "category": "development", "tags": [ "microsoft", @@ -14886,12 +13836,7 @@ "extensions", "authentication", "events", - "dotnet", - "entra", - "sdk", - "net", - "functions", - "triggers" + "dotnet" ], "path": "skills/microsoft-azure-webjobs-extensions-authentication-events-dotnet/SKILL.md" }, @@ -14923,7 +13868,7 @@ { "id": "minecraft-bukkit-pro", "name": "minecraft-bukkit-pro", - "description": "Master Minecraft server plugin development with Bukkit, Spigot, and Paper APIs.", + "description": "", "category": "general", "tags": [ "minecraft", @@ -14932,13 +13877,7 @@ "triggers": [ "minecraft", "bukkit", - "pro", - "server", - "plugin", - "development", - "spigot", - "paper", - "apis" + "pro" ], "path": "skills/minecraft-bukkit-pro/SKILL.md" }, @@ -14993,24 +13932,14 @@ { "id": "ml-engineer", "name": "ml-engineer", - "description": "Build production ML systems with PyTorch 2.x, TensorFlow, and modern ML frameworks. Implements model serving, feature engineering, A/B testing, and monitoring.", - "category": "infrastructure", + "description": "", + "category": "data-ai", "tags": [ "ml" ], "triggers": [ "ml", - "engineer", - "pytorch", - "tensorflow", - "frameworks", - "implements", - "model", - "serving", - "feature", - "engineering", - "testing", - "monitoring" + "engineer" ], "path": "skills/ml-engineer/SKILL.md" }, @@ -15042,22 +13971,14 @@ { "id": "mlops-engineer", "name": "mlops-engineer", - "description": "Build comprehensive ML pipelines, experiment tracking, and model registries with MLflow, Kubeflow, and modern MLOps tools.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "mlops" ], "triggers": [ "mlops", - "engineer", - "ml", - "pipelines", - "experiment", - "tracking", - "model", - "registries", - "mlflow", - "kubeflow" + "engineer" ], "path": "skills/mlops-engineer/SKILL.md" }, @@ -15088,31 +14009,21 @@ { "id": "mobile-developer", "name": "mobile-developer", - "description": "Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync, and app store optimization.", + "description": "", "category": "development", "tags": [ "mobile" ], "triggers": [ "mobile", - "developer", - "develop", - "react", - "native", - "flutter", - "apps", - "architecture", - "masters", - "cross", - "platform", - "development" + "developer" ], "path": "skills/mobile-developer/SKILL.md" }, { "id": "mobile-security-coder", "name": "mobile-security-coder", - "description": "Expert in secure mobile coding practices specializing in input validation, WebView security, and mobile-specific security patterns.", + "description": "", "category": "security", "tags": [ "mobile", @@ -15122,14 +14033,7 @@ "triggers": [ "mobile", "security", - "coder", - "secure", - "coding", - "specializing", - "input", - "validation", - "webview", - "specific" + "coder" ], "path": "skills/mobile-security-coder/SKILL.md" }, @@ -15284,7 +14188,7 @@ { "id": "multi-agent-brainstorming", "name": "multi-agent-brainstorming", - "description": "Simulate a structured peer-review process using multiple specialized agents to validate designs, surface hidden assumptions, and identify failure modes before implementation.", + "description": "Transform a single-agent design into a robust, review-validated design by simulating a formal peer-review process using multiple constrained agents.", "category": "workflow", "tags": [ "multi", @@ -15295,15 +14199,15 @@ "multi", "agent", "brainstorming", - "simulate", - "structured", - "peer", + "transform", + "single", + "robust", "review", - "process", - "multiple", - "specialized", - "agents", - "validate" + "validated", + "simulating", + "formal", + "peer", + "process" ], "path": "skills/multi-agent-brainstorming/SKILL.md" }, @@ -15606,21 +14510,14 @@ { "id": "network-engineer", "name": "network-engineer", - "description": "Expert network engineer specializing in modern cloud networking, security architectures, and performance optimization.", - "category": "security", + "description": "", + "category": "infrastructure", "tags": [ "network" ], "triggers": [ "network", - "engineer", - "specializing", - "cloud", - "networking", - "security", - "architectures", - "performance", - "optimization" + "engineer" ], "path": "skills/network-engineer/SKILL.md" }, @@ -15903,22 +14800,14 @@ { "id": "observability-engineer", "name": "observability-engineer", - "description": "Build production-ready monitoring, logging, and tracing systems. Implements comprehensive observability strategies, SLI/SLO management, and incident response workflows.", - "category": "security", + "description": "", + "category": "infrastructure", "tags": [ "observability" ], "triggers": [ "observability", - "engineer", - "monitoring", - "logging", - "tracing", - "implements", - "sli", - "slo", - "incident", - "response" + "engineer" ], "path": "skills/observability-engineer/SKILL.md" }, @@ -16254,7 +15143,7 @@ { "id": "page-cro", "name": "page-cro", - "description": "Analyze and optimize individual pages for conversion performance.", + "description": "", "category": "general", "tags": [ "page", @@ -16262,13 +15151,7 @@ ], "triggers": [ "page", - "cro", - "analyze", - "optimize", - "individual", - "pages", - "conversion", - "performance" + "cro" ], "path": "skills/page-cro/SKILL.md" }, @@ -16349,25 +15232,15 @@ { "id": "payment-integration", "name": "payment-integration", - "description": "Integrate Stripe, PayPal, and payment processors. Handles checkout flows, subscriptions, webhooks, and PCI compliance. Use PROACTIVELY when implementing payments, billing, or subscription features.", - "category": "security", + "description": "", + "category": "general", "tags": [ "payment", "integration" ], "triggers": [ "payment", - "integration", - "integrate", - "stripe", - "paypal", - "processors", - "checkout", - "flows", - "subscriptions", - "webhooks", - "pci", - "compliance" + "integration" ], "path": "skills/payment-integration/SKILL.md" }, @@ -16631,24 +15504,14 @@ { "id": "php-pro", "name": "php-pro", - "description": "Write idiomatic PHP code with generators, iterators, SPL data\nstructures, and modern OOP features. Use PROACTIVELY for high-performance PHP\napplications.", - "category": "data-ai", + "description": "", + "category": "development", "tags": [ "php" ], "triggers": [ "php", - "pro", - "write", - "idiomatic", - "code", - "generators", - "iterators", - "spl", - "data", - "structures", - "oop", - "features" + "pro" ], "path": "skills/php-pro/SKILL.md" }, @@ -16830,7 +15693,7 @@ { "id": "posix-shell-pro", "name": "posix-shell-pro", - "description": "Expert in strict POSIX sh scripting for maximum portability across Unix-like systems. Specializes in shell scripts that run on any POSIX-compliant shell (dash, ash, sh, bash --posix).", + "description": "", "category": "general", "tags": [ "posix", @@ -16839,16 +15702,7 @@ "triggers": [ "posix", "shell", - "pro", - "strict", - "sh", - "scripting", - "maximum", - "portability", - "unix", - "like", - "specializes", - "scripts" + "pro" ], "path": "skills/posix-shell-pro/SKILL.md" }, @@ -17165,7 +16019,7 @@ { "id": "production-scheduling", "name": "production-scheduling", - "description": "Codified expertise for production scheduling, job sequencing, line balancing, changeover optimisation, and bottleneck resolution in discrete and batch manufacturing.", + "description": "", "category": "general", "tags": [ "production", @@ -17173,39 +16027,22 @@ ], "triggers": [ "production", - "scheduling", - "codified", - "expertise", - "job", - "sequencing", - "line", - "balancing", - "changeover", - "optimisation", - "bottleneck", - "resolution" + "scheduling" ], "path": "skills/production-scheduling/SKILL.md" }, { "id": "programmatic-seo", "name": "programmatic-seo", - "description": "Design and evaluate programmatic SEO strategies for creating SEO-driven pages at scale using templates and structured data.", - "category": "data-ai", + "description": "", + "category": "business", "tags": [ "programmatic", "seo" ], "triggers": [ "programmatic", - "seo", - "evaluate", - "creating", - "driven", - "pages", - "scale", - "structured", - "data" + "seo" ], "path": "skills/programmatic-seo/SKILL.md" }, @@ -17579,24 +16416,14 @@ { "id": "python-pro", "name": "python-pro", - "description": "Master Python 3.12+ with modern features, async programming, performance optimization, and production-ready practices. Expert in the latest Python ecosystem including uv, ruff, pydantic, and FastAPI.", + "description": "", "category": "development", "tags": [ "python" ], "triggers": [ "python", - "pro", - "12", - "features", - "async", - "programming", - "performance", - "optimization", - "latest", - "ecosystem", - "including", - "uv" + "pro" ], "path": "skills/python-pro/SKILL.md" }, @@ -17627,7 +16454,7 @@ { "id": "quality-nonconformance", "name": "quality-nonconformance", - "description": "Codified expertise for quality control, non-conformance investigation, root cause analysis, corrective action, and supplier quality management in regulated manufacturing.", + "description": "", "category": "general", "tags": [ "quality", @@ -17635,42 +16462,22 @@ ], "triggers": [ "quality", - "nonconformance", - "codified", - "expertise", - "control", - "non", - "conformance", - "investigation", - "root", - "cause", - "analysis", - "corrective" + "nonconformance" ], "path": "skills/quality-nonconformance/SKILL.md" }, { "id": "quant-analyst", "name": "quant-analyst", - "description": "Build financial models, backtest trading strategies, and analyze market data. Implements risk metrics, portfolio optimization, and statistical arbitrage.", - "category": "security", + "description": "", + "category": "general", "tags": [ "quant", "analyst" ], "triggers": [ "quant", - "analyst", - "financial", - "models", - "backtest", - "trading", - "analyze", - "market", - "data", - "implements", - "risk", - "metrics" + "analyst" ], "path": "skills/quant-analyst/SKILL.md" }, @@ -18095,25 +16902,15 @@ { "id": "reference-builder", "name": "reference-builder", - "description": "Creates exhaustive technical references and API documentation. Generates comprehensive parameter listings, configuration guides, and searchable reference materials.", - "category": "development", + "description": "", + "category": "general", "tags": [ "reference", "builder" ], "triggers": [ "reference", - "builder", - "creates", - "exhaustive", - "technical", - "references", - "api", - "documentation", - "generates", - "parameter", - "listings", - "configuration" + "builder" ], "path": "skills/reference-builder/SKILL.md" }, @@ -18239,7 +17036,7 @@ { "id": "returns-reverse-logistics", "name": "returns-reverse-logistics", - "description": "Codified expertise for returns authorisation, receipt and inspection, disposition decisions, refund processing, fraud detection, and warranty claims management.", + "description": "", "category": "general", "tags": [ "returns", @@ -18249,47 +17046,28 @@ "triggers": [ "returns", "reverse", - "logistics", - "codified", - "expertise", - "authorisation", - "receipt", - "inspection", - "disposition", - "decisions", - "refund", - "processing" + "logistics" ], "path": "skills/returns-reverse-logistics/SKILL.md" }, { "id": "reverse-engineer", "name": "reverse-engineer", - "description": "Expert reverse engineer specializing in binary analysis, disassembly, decompilation, and software analysis. Masters IDA Pro, Ghidra, radare2, x64dbg, and modern RE toolchains.", + "description": "", "category": "general", "tags": [ "reverse" ], "triggers": [ "reverse", - "engineer", - "specializing", - "binary", - "analysis", - "disassembly", - "decompilation", - "software", - "masters", - "ida", - "pro", - "ghidra" + "engineer" ], "path": "skills/reverse-engineer/SKILL.md" }, { "id": "risk-manager", "name": "risk-manager", - "description": "Monitor portfolio risk, R-multiples, and position limits. Creates hedging strategies, calculates expectancy, and implements stop-losses.", + "description": "", "category": "security", "tags": [ "risk", @@ -18297,17 +17075,7 @@ ], "triggers": [ "risk", - "manager", - "monitor", - "portfolio", - "multiples", - "position", - "limits", - "creates", - "hedging", - "calculates", - "expectancy", - "implements" + "manager" ], "path": "skills/risk-manager/SKILL.md" }, @@ -18340,24 +17108,14 @@ { "id": "ruby-pro", "name": "ruby-pro", - "description": "Write idiomatic Ruby code with metaprogramming, Rails patterns, and performance optimization. Specializes in Ruby on Rails, gem development, and testing frameworks.", + "description": "", "category": "development", "tags": [ "ruby" ], "triggers": [ "ruby", - "pro", - "write", - "idiomatic", - "code", - "metaprogramming", - "rails", - "performance", - "optimization", - "specializes", - "gem", - "development" + "pro" ], "path": "skills/ruby-pro/SKILL.md" }, @@ -18389,19 +17147,14 @@ { "id": "rust-pro", "name": "rust-pro", - "description": "Master Rust 1.75+ with modern async patterns, advanced type system features, and production-ready systems programming.", + "description": "", "category": "development", "tags": [ "rust" ], "triggers": [ "rust", - "pro", - "75", - "async", - "type", - "features", - "programming" + "pro" ], "path": "skills/rust-pro/SKILL.md" }, @@ -18432,7 +17185,7 @@ { "id": "sales-automator", "name": "sales-automator", - "description": "Draft cold emails, follow-ups, and proposal templates. Creates\npricing pages, case studies, and sales scripts. Use PROACTIVELY for sales\noutreach or lead nurturing.", + "description": "", "category": "business", "tags": [ "sales", @@ -18440,17 +17193,7 @@ ], "triggers": [ "sales", - "automator", - "draft", - "cold", - "emails", - "follow", - "ups", - "proposal", - "creates", - "pricing", - "pages", - "case" + "automator" ], "path": "skills/sales-automator/SKILL.md" }, @@ -18530,24 +17273,14 @@ { "id": "scala-pro", "name": "scala-pro", - "description": "Master enterprise-grade Scala development with functional programming, distributed systems, and big data processing. Expert in Apache Pekko, Akka, Spark, ZIO/Cats Effect, and reactive architectures.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "scala" ], "triggers": [ "scala", - "pro", - "enterprise", - "grade", - "development", - "functional", - "programming", - "distributed", - "big", - "data", - "processing", - "apache" + "pro" ], "path": "skills/scala-pro/SKILL.md" }, @@ -18578,25 +17311,15 @@ { "id": "schema-markup", "name": "schema-markup", - "description": "Design, validate, and optimize schema.org structured data for eligibility, correctness, and measurable SEO impact.", - "category": "data-ai", + "description": "", + "category": "general", "tags": [ "schema", "markup" ], "triggers": [ "schema", - "markup", - "validate", - "optimize", - "org", - "structured", - "data", - "eligibility", - "correctness", - "measurable", - "seo", - "impact" + "markup" ], "path": "skills/schema-markup/SKILL.md" }, @@ -18742,7 +17465,7 @@ { "id": "security-auditor", "name": "security-auditor", - "description": "Expert security auditor specializing in DevSecOps, comprehensive cybersecurity, and compliance frameworks.", + "description": "", "category": "security", "tags": [ "security", @@ -18750,12 +17473,7 @@ ], "triggers": [ "security", - "auditor", - "specializing", - "devsecops", - "cybersecurity", - "compliance", - "frameworks" + "auditor" ], "path": "skills/security-auditor/SKILL.md" }, @@ -18885,7 +17603,7 @@ { "id": "security-scanning-security-sast", "name": "security-scanning-security-sast", - "description": "Static Application Security Testing (SAST) for code vulnerability\nanalysis across multiple languages and frameworks", + "description": "", "category": "security", "tags": [ "security", @@ -18895,16 +17613,7 @@ "triggers": [ "security", "scanning", - "sast", - "static", - "application", - "testing", - "code", - "vulnerability", - "analysis", - "multiple", - "languages", - "frameworks" + "sast" ], "path": "skills/security-scanning-security-sast/SKILL.md" }, @@ -19172,7 +17881,7 @@ { "id": "seo-audit", "name": "seo-audit", - "description": "Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance.", + "description": "", "category": "business", "tags": [ "seo", @@ -19180,23 +17889,15 @@ ], "triggers": [ "seo", - "audit", - "diagnose", - "issues", - "affecting", - "crawlability", - "indexation", - "rankings", - "organic", - "performance" + "audit" ], "path": "skills/seo-audit/SKILL.md" }, { "id": "seo-authority-builder", "name": "seo-authority-builder", - "description": "Analyzes content for E-E-A-T signals and suggests improvements to\nbuild authority and trust. Identifies missing credibility elements. Use\nPROACTIVELY for YMYL topics.", - "category": "security", + "description": "", + "category": "business", "tags": [ "seo", "authority", @@ -19205,23 +17906,14 @@ "triggers": [ "seo", "authority", - "builder", - "analyzes", - "content", - "signals", - "suggests", - "improvements", - "trust", - "identifies", - "missing", - "credibility" + "builder" ], "path": "skills/seo-authority-builder/SKILL.md" }, { "id": "seo-cannibalization-detector", "name": "seo-cannibalization-detector", - "description": "Analyzes multiple provided pages to identify keyword overlap and potential cannibalization issues. Suggests differentiation strategies. Use PROACTIVELY when reviewing similar content.", + "description": "", "category": "business", "tags": [ "seo", @@ -19231,23 +17923,14 @@ "triggers": [ "seo", "cannibalization", - "detector", - "analyzes", - "multiple", - "provided", - "pages", - "identify", - "keyword", - "overlap", - "potential", - "issues" + "detector" ], "path": "skills/seo-cannibalization-detector/SKILL.md" }, { "id": "seo-content-auditor", "name": "seo-content-auditor", - "description": "Analyzes provided content for quality, E-E-A-T signals, and SEO best practices. Scores content and provides improvement recommendations based on established guidelines.", + "description": "", "category": "business", "tags": [ "seo", @@ -19257,23 +17940,14 @@ "triggers": [ "seo", "content", - "auditor", - "analyzes", - "provided", - "quality", - "signals", - "scores", - "provides", - "improvement", - "recommendations", - "established" + "auditor" ], "path": "skills/seo-content-auditor/SKILL.md" }, { "id": "seo-content-planner", "name": "seo-content-planner", - "description": "Creates comprehensive content outlines and topic clusters for SEO.\nPlans content calendars and identifies topic gaps. Use PROACTIVELY for content\nstrategy and planning.", + "description": "", "category": "business", "tags": [ "seo", @@ -19283,23 +17957,14 @@ "triggers": [ "seo", "content", - "planner", - "creates", - "outlines", - "topic", - "clusters", - "plans", - "calendars", - "identifies", - "gaps", - "proactively" + "planner" ], "path": "skills/seo-content-planner/SKILL.md" }, { "id": "seo-content-refresher", "name": "seo-content-refresher", - "description": "Identifies outdated elements in provided content and suggests updates to maintain freshness. Finds statistics, dates, and examples that need updating. Use PROACTIVELY for older content.", + "description": "", "category": "business", "tags": [ "seo", @@ -19309,23 +17974,14 @@ "triggers": [ "seo", "content", - "refresher", - "identifies", - "outdated", - "elements", - "provided", - "suggests", - "updates", - "maintain", - "freshness", - "finds" + "refresher" ], "path": "skills/seo-content-refresher/SKILL.md" }, { "id": "seo-content-writer", "name": "seo-content-writer", - "description": "Writes SEO-optimized content based on provided keywords and topic briefs. Creates engaging, comprehensive content following best practices. Use PROACTIVELY for content creation tasks.", + "description": "", "category": "business", "tags": [ "seo", @@ -19335,16 +17991,7 @@ "triggers": [ "seo", "content", - "writer", - "writes", - "optimized", - "provided", - "keywords", - "topic", - "briefs", - "creates", - "engaging", - "following" + "writer" ], "path": "skills/seo-content-writer/SKILL.md" }, @@ -19378,7 +18025,7 @@ { "id": "seo-fundamentals", "name": "seo-fundamentals", - "description": "Core principles of SEO including E-E-A-T, Core Web Vitals, technical foundations, content quality, and how modern search engines evaluate pages.", + "description": "", "category": "business", "tags": [ "seo", @@ -19386,24 +18033,14 @@ ], "triggers": [ "seo", - "fundamentals", - "core", - "principles", - "including", - "web", - "vitals", - "technical", - "foundations", - "content", - "quality", - "how" + "fundamentals" ], "path": "skills/seo-fundamentals/SKILL.md" }, { "id": "seo-keyword-strategist", "name": "seo-keyword-strategist", - "description": "Analyzes keyword usage in provided content, calculates density, suggests semantic variations and LSI keywords based on the topic. Prevents over-optimization. Use PROACTIVELY for content optimization.", + "description": "", "category": "business", "tags": [ "seo", @@ -19413,23 +18050,14 @@ "triggers": [ "seo", "keyword", - "strategist", - "analyzes", - "usage", - "provided", - "content", - "calculates", - "density", - "suggests", - "semantic", - "variations" + "strategist" ], "path": "skills/seo-keyword-strategist/SKILL.md" }, { "id": "seo-meta-optimizer", "name": "seo-meta-optimizer", - "description": "Creates optimized meta titles, descriptions, and URL suggestions based on character limits and best practices. Generates compelling, keyword-rich metadata. Use PROACTIVELY for new content.", + "description": "", "category": "business", "tags": [ "seo", @@ -19439,23 +18067,14 @@ "triggers": [ "seo", "meta", - "optimizer", - "creates", - "optimized", - "titles", - "descriptions", - "url", - "suggestions", - "character", - "limits", - "generates" + "optimizer" ], "path": "skills/seo-meta-optimizer/SKILL.md" }, { "id": "seo-snippet-hunter", "name": "seo-snippet-hunter", - "description": "Formats content to be eligible for featured snippets and SERP features. Creates snippet-optimized content blocks based on best practices. Use PROACTIVELY for question-based content.", + "description": "", "category": "business", "tags": [ "seo", @@ -19465,23 +18084,14 @@ "triggers": [ "seo", "snippet", - "hunter", - "formats", - "content", - "eligible", - "featured", - "snippets", - "serp", - "features", - "creates", - "optimized" + "hunter" ], "path": "skills/seo-snippet-hunter/SKILL.md" }, { "id": "seo-structure-architect", "name": "seo-structure-architect", - "description": "Analyzes and optimizes content structure including header hierarchy, suggests schema markup, and internal linking opportunities. Creates search-friendly content organization.", + "description": "", "category": "business", "tags": [ "seo", @@ -19490,16 +18100,7 @@ "triggers": [ "seo", "structure", - "architect", - "analyzes", - "optimizes", - "content", - "including", - "header", - "hierarchy", - "suggests", - "schema", - "markup" + "architect" ], "path": "skills/seo-structure-architect/SKILL.md" }, @@ -19726,24 +18327,14 @@ { "id": "shopify-development", "name": "shopify-development", - "description": "Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid.", - "category": "development", + "description": "", + "category": "general", "tags": [ "shopify" ], "triggers": [ "shopify", - "development", - "apps", - "extensions", - "themes", - "graphql", - "admin", - "api", - "cli", - "polaris", - "ui", - "liquid" + "development" ], "path": "skills/shopify-development/SKILL.md" }, @@ -20199,24 +18790,14 @@ { "id": "sql-pro", "name": "sql-pro", - "description": "Master modern SQL with cloud-native databases, OLTP/OLAP optimization, and advanced query techniques. Expert in performance tuning, data modeling, and hybrid analytical systems.", - "category": "infrastructure", + "description": "", + "category": "data-ai", "tags": [ "sql" ], "triggers": [ "sql", - "pro", - "cloud", - "native", - "databases", - "oltp", - "olap", - "optimization", - "query", - "techniques", - "performance", - "tuning" + "pro" ], "path": "skills/sql-pro/SKILL.md" }, @@ -20298,7 +18879,7 @@ { "id": "startup-analyst", "name": "startup-analyst", - "description": "Expert startup business analyst specializing in market sizing, financial modeling, competitive analysis, and strategic planning for early-stage companies.", + "description": "", "category": "business", "tags": [ "startup", @@ -20306,24 +18887,14 @@ ], "triggers": [ "startup", - "analyst", - "business", - "specializing", - "market", - "sizing", - "financial", - "modeling", - "competitive", - "analysis", - "strategic", - "planning" + "analyst" ], "path": "skills/startup-analyst/SKILL.md" }, { "id": "startup-business-analyst-business-case", "name": "startup-business-analyst-business-case", - "description": "Generate comprehensive investor-ready business case document with\nmarket, solution, financials, and strategy", + "description": "", "category": "business", "tags": [ "startup", @@ -20335,20 +18906,14 @@ "startup", "business", "analyst", - "case", - "generate", - "investor", - "document", - "market", - "solution", - "financials" + "case" ], "path": "skills/startup-business-analyst-business-case/SKILL.md" }, { "id": "startup-business-analyst-financial-projections", "name": "startup-business-analyst-financial-projections", - "description": "Create detailed 3-5 year financial model with revenue, costs, cash\nflow, and scenarios", + "description": "", "category": "business", "tags": [ "startup", @@ -20362,21 +18927,14 @@ "business", "analyst", "financial", - "projections", - "detailed", - "year", - "model", - "revenue", - "costs", - "cash", - "flow" + "projections" ], "path": "skills/startup-business-analyst-financial-projections/SKILL.md" }, { "id": "startup-business-analyst-market-opportunity", "name": "startup-business-analyst-market-opportunity", - "description": "Generate comprehensive market opportunity analysis with TAM/SAM/SOM\ncalculations", + "description": "", "category": "business", "tags": [ "startup", @@ -20390,20 +18948,14 @@ "business", "analyst", "market", - "opportunity", - "generate", - "analysis", - "tam", - "sam", - "som", - "calculations" + "opportunity" ], "path": "skills/startup-business-analyst-market-opportunity/SKILL.md" }, { "id": "startup-financial-modeling", "name": "startup-financial-modeling", - "description": "This skill should be used when the user asks to \\\\\\\"create financial projections\", \"build a financial model\", \"forecast revenue\", \"calculate burn rate\", \"estimate runway\", \"model cash flow\", or...", + "description": "", "category": "business", "tags": [ "startup", @@ -20413,24 +18965,15 @@ "triggers": [ "startup", "financial", - "modeling", - "skill", - "should", - "used", - "user", - "asks", - "projections", - "model", - "forecast", - "revenue" + "modeling" ], "path": "skills/startup-financial-modeling/SKILL.md" }, { "id": "startup-metrics-framework", "name": "startup-metrics-framework", - "description": "This skill should be used when the user asks about \\\\\\\"key startup metrics\", \"SaaS metrics\", \"CAC and LTV\", \"unit economics\", \"burn multiple\", \"rule of 40\", \"marketplace metrics\", or requests...", - "category": "testing", + "description": "", + "category": "business", "tags": [ "startup", "metrics", @@ -20439,16 +18982,7 @@ "triggers": [ "startup", "metrics", - "framework", - "skill", - "should", - "used", - "user", - "asks", - "about", - "key", - "saas", - "cac" + "framework" ], "path": "skills/startup-metrics-framework/SKILL.md" }, @@ -20758,7 +19292,7 @@ { "id": "tdd-orchestrator", "name": "tdd-orchestrator", - "description": "Master TDD orchestrator specializing in red-green-refactor discipline, multi-agent workflow coordination, and comprehensive test-driven development practices.", + "description": "", "category": "testing", "tags": [ "tdd", @@ -20766,17 +19300,7 @@ ], "triggers": [ "tdd", - "orchestrator", - "specializing", - "red", - "green", - "refactor", - "discipline", - "multi", - "agent", - "coordination", - "test", - "driven" + "orchestrator" ], "path": "skills/tdd-orchestrator/SKILL.md" }, @@ -20935,7 +19459,7 @@ { "id": "team-composition-analysis", "name": "team-composition-analysis", - "description": "This skill should be used when the user asks to \\\\\\\"plan team structure\", \"determine hiring needs\", \"design org chart\", \"calculate compensation\", \"plan equity allocation\", or requests...", + "description": "", "category": "general", "tags": [ "team", @@ -20944,16 +19468,7 @@ "triggers": [ "team", "composition", - "analysis", - "skill", - "should", - "used", - "user", - "asks", - "plan", - "structure", - "determine", - "hiring" + "analysis" ], "path": "skills/team-composition-analysis/SKILL.md" }, @@ -21061,8 +19576,8 @@ { "id": "temporal-python-pro", "name": "temporal-python-pro", - "description": "Master Temporal workflow orchestration with Python SDK. Implements durable workflows, saga patterns, and distributed transactions. Covers async/await, testing strategies, and production deployment.", - "category": "infrastructure", + "description": "", + "category": "development", "tags": [ "temporal", "python" @@ -21070,16 +19585,7 @@ "triggers": [ "temporal", "python", - "pro", - "orchestration", - "sdk", - "implements", - "durable", - "saga", - "distributed", - "transactions", - "covers", - "async" + "pro" ], "path": "skills/temporal-python-pro/SKILL.md" }, @@ -21203,44 +19709,27 @@ { "id": "terraform-specialist", "name": "terraform-specialist", - "description": "Expert Terraform/OpenTofu specialist mastering advanced IaC automation, state management, and enterprise infrastructure patterns.", + "description": "", "category": "infrastructure", "tags": [ "terraform" ], "triggers": [ - "terraform", - "opentofu", - "mastering", - "iac", - "automation", - "state", - "enterprise", - "infrastructure" + "terraform" ], "path": "skills/terraform-specialist/SKILL.md" }, { "id": "test-automator", "name": "test-automator", - "description": "Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with advanced CI/CD integration.", - "category": "infrastructure", + "description": "", + "category": "testing", "tags": [ "automator" ], "triggers": [ "automator", - "test", - "ai", - "powered", - "automation", - "frameworks", - "self", - "healing", - "tests", - "quality", - "engineering", - "scalable" + "test" ], "path": "skills/test-automator/SKILL.md" }, @@ -21527,24 +20016,13 @@ { "id": "track-management", "name": "track-management", - "description": "Use this skill when creating, managing, or working with Conductor tracks - the logical work units for features, bugs, and refactors. Applies to spec.md, plan.md, and track lifecycle operations.", - "category": "workflow", + "description": "", + "category": "general", "tags": [ "track" ], "triggers": [ - "track", - "skill", - "creating", - "managing", - "working", - "conductor", - "tracks", - "logical", - "work", - "units", - "features", - "bugs" + "track" ], "path": "skills/track-management/SKILL.md" }, @@ -21625,24 +20103,14 @@ { "id": "tutorial-engineer", "name": "tutorial-engineer", - "description": "Creates step-by-step tutorials and educational content from code. Transforms complex concepts into progressive learning experiences with hands-on examples.", + "description": "", "category": "general", "tags": [ "tutorial" ], "triggers": [ "tutorial", - "engineer", - "creates", - "step", - "tutorials", - "educational", - "content", - "code", - "transforms", - "complex", - "concepts", - "progressive" + "engineer" ], "path": "skills/tutorial-engineer/SKILL.md" }, @@ -21724,47 +20192,27 @@ { "id": "typescript-expert", "name": "typescript-expert", - "description": "TypeScript and JavaScript expert with deep knowledge of type-level programming, performance optimization, monorepo management, migration strategies, and modern tooling.", + "description": "", "category": "development", "tags": [ "typescript" ], "triggers": [ - "typescript", - "javascript", - "deep", - "knowledge", - "type", - "level", - "programming", - "performance", - "optimization", - "monorepo", - "migration", - "tooling" + "typescript" ], "path": "skills/typescript-expert/SKILL.md" }, { "id": "typescript-pro", "name": "typescript-pro", - "description": "Master TypeScript with advanced types, generics, and strict type safety. Handles complex type systems, decorators, and enterprise-grade patterns.", + "description": "", "category": "development", "tags": [ "typescript" ], "triggers": [ "typescript", - "pro", - "types", - "generics", - "strict", - "type", - "safety", - "complex", - "decorators", - "enterprise", - "grade" + "pro" ], "path": "skills/typescript-pro/SKILL.md" }, @@ -21792,7 +20240,7 @@ { "id": "ui-ux-designer", "name": "ui-ux-designer", - "description": "Create interface designs, wireframes, and design systems. Masters user research, accessibility standards, and modern design tools.", + "description": "", "category": "general", "tags": [ "ui", @@ -21802,15 +20250,7 @@ "triggers": [ "ui", "ux", - "designer", - "interface", - "designs", - "wireframes", - "masters", - "user", - "research", - "accessibility", - "standards" + "designer" ], "path": "skills/ui-ux-designer/SKILL.md" }, @@ -21843,8 +20283,8 @@ { "id": "ui-visual-validator", "name": "ui-visual-validator", - "description": "Rigorous visual validation expert specializing in UI testing, design system compliance, and accessibility verification.", - "category": "security", + "description": "", + "category": "general", "tags": [ "ui", "visual", @@ -21853,14 +20293,7 @@ "triggers": [ "ui", "visual", - "validator", - "rigorous", - "validation", - "specializing", - "testing", - "compliance", - "accessibility", - "verification" + "validator" ], "path": "skills/ui-visual-validator/SKILL.md" }, @@ -21891,24 +20324,14 @@ { "id": "unity-developer", "name": "unity-developer", - "description": "Build Unity games with optimized C# scripts, efficient rendering, and proper asset management. Masters Unity 6 LTS, URP/HDRP pipelines, and cross-platform deployment.", - "category": "infrastructure", + "description": "", + "category": "general", "tags": [ "unity" ], "triggers": [ "unity", - "developer", - "games", - "optimized", - "scripts", - "efficient", - "rendering", - "proper", - "asset", - "masters", - "lts", - "urp" + "developer" ], "path": "skills/unity-developer/SKILL.md" }, @@ -23102,23 +21525,10 @@ { "id": "workflow-patterns", "name": "workflow-patterns", - "description": "Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding the verification protocol.", + "description": "", "category": "architecture", "tags": [], - "triggers": [ - "skill", - "implementing", - "tasks", - "according", - "conductor", - "tdd", - "handling", - "phase", - "checkpoints", - "managing", - "git", - "commits" - ], + "triggers": [], "path": "skills/workflow-patterns/SKILL.md" }, { @@ -23209,38 +21619,6 @@ ], "path": "skills/x-article-publisher-skill/SKILL.md" }, - { - "id": "x-twitter-scraper", - "name": "x-twitter-scraper", - "description": "X (Twitter) data platform skill — tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, MCP server.", - "category": "infrastructure", - "tags": [ - "[twitter", - "x-api", - "scraping", - "mcp", - "social-media", - "data-extraction", - "giveaway", - "monitoring", - "webhooks]" - ], - "triggers": [ - "[twitter", - "x-api", - "scraping", - "mcp", - "social-media", - "data-extraction", - "giveaway", - "monitoring", - "webhooks]", - "twitter", - "scraper", - "data" - ], - "path": "skills/x-twitter-scraper/SKILL.md" - }, { "id": "xlsx-official", "name": "xlsx-official", diff --git a/docs/SOURCES.md b/docs/SOURCES.md index 992d33ff..3a0c5027 100644 --- a/docs/SOURCES.md +++ b/docs/SOURCES.md @@ -3,16 +3,16 @@ We believe in giving credit where credit is due. If you recognize your work here and it is not properly attributed, please open an Issue. -| Skill / Category | Original Source | License | Notes | -| :-------------------------- | :----------------------------------------------------------------- | :------------- | :---------------------------- | -| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. | -| `active-directory-attacks` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. | -| `owasp-top-10` | [OWASP](https://owasp.org/) | CC-BY-SA | Methodology adapted. | -| `burp-suite-testing` | [PortSwigger](https://portswigger.net/burp) | N/A | Usage guide only (no binary). | -| `crewai` | [CrewAI](https://github.com/joaomdmoura/crewAI) | MIT | Framework guides. | -| `langgraph` | [LangGraph](https://github.com/langchain-ai/langgraph) | MIT | Framework guides. | -| `react-patterns` | [React Docs](https://react.dev/) | CC-BY | Official patterns. | -| **All Official Skills** | [Anthropic / Google / OpenAI / Microsoft / Supabase / Vercel Labs] | Proprietary | Usage encouraged by vendors. | +| Skill / Category | Original Source | License | Notes | +| :-------------------------- | :------------------------------------------------------------------------- | :------------- | :---------------------------- | +| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. | +| `active-directory-attacks` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. | +| `owasp-top-10` | [OWASP](https://owasp.org/) | CC-BY-SA | Methodology adapted. | +| `burp-suite-testing` | [PortSwigger](https://portswigger.net/burp) | N/A | Usage guide only (no binary). | +| `crewai` | [CrewAI](https://github.com/joaomdmoura/crewAI) | MIT | Framework guides. | +| `langgraph` | [LangGraph](https://github.com/langchain-ai/langgraph) | MIT | Framework guides. | +| `react-patterns` | [React Docs](https://react.dev/) | CC-BY | Official patterns. | +| **All Official Skills** | [Anthropic / Google / OpenAI / Microsoft / Supabase / Apify / Vercel Labs] | Proprietary | Usage encouraged by vendors. | ## Skills from VoltAgent/awesome-agent-skills diff --git a/docs/vietnamese/README.vi.md b/docs/vietnamese/README.vi.md index a50f76e2..6b9e9b2a 100644 --- a/docs/vietnamese/README.vi.md +++ b/docs/vietnamese/README.vi.md @@ -30,7 +30,7 @@ Các trợ lý AI (như Claude Code, Cursor, hoặc Gemini) rất thông minh, nhưng chúng thiếu các **công cụ chuyên biệt**. Chúng không biết "Quy trình Triển khai" của công ty bạn hoặc cú pháp cụ thể cho "AWS CloudFormation". **Skills** là các tệp markdown nhỏ dạy cho chúng cách thực hiện những tác vụ cụ thể này một cách chính xác trong mọi lần thực thi. ... -Repository này cung cấp các kỹ năng thiết yếu để biến trợ lý AI của bạn thành một **đội ngũ chuyên gia số toàn năng**, bao gồm các khả năng chính thức từ **Anthropic**, **OpenAI**, **Google**, **Supabase**, và **Vercel Labs**. +Repository này cung cấp các kỹ năng thiết yếu để biến trợ lý AI của bạn thành một **đội ngũ chuyên gia số toàn năng**, bao gồm các khả năng chính thức từ **Anthropic**, **OpenAI**, **Google**, **Supabase**, **Apify**, và **Vercel Labs**. ... Cho dù bạn đang sử dụng **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, hay **OpenCode**, những kỹ năng này được thiết kế để có thể sử dụng ngay lập tức và tăng cường sức mạnh cho trợ lý AI của bạn. @@ -40,17 +40,17 @@ Repository này tập hợp những khả năng tốt nhất từ khắp cộng Repository được tổ chức thành các lĩnh vực chuyên biệt để biến AI của bạn thành một chuyên gia trên toàn bộ vòng đời phát triển phần mềm: -| Danh mục | Trọng tâm | Ví dụ kỹ năng | -| :--- | :--- | :--- | -| Kiến trúc (52) | Thiết kế hệ thống, ADRs, C4 và các mẫu có thể mở rộng | `architecture`, `c4-context`, `senior-architect` | -| Kinh doanh (35) | Tăng trưởng, định giá, CRO, SEO và thâm nhập thị trường | `copywriting`, `pricing-strategy`, `seo-audit` | -| Dữ liệu & AI (81) | Ứng dụng LLM, RAG, agents, khả năng quan sát, phân tích | `rag-engineer`, `prompt-engineer`, `langgraph` | -| Phát triển (72) | Làm chủ ngôn ngữ, mẫu thiết kế framework, chất lượng code | `typescript-expert`, `python-patterns`, `react-patterns` | -| Tổng quát (95) | Lập kế hoạch, tài liệu, vận hành sản phẩm, viết bài, hướng dẫn | `brainstorming`, `doc-coauthoring`, `writing-plans` | -| Hạ tầng (72) | DevOps, cloud, serverless, triển khai, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` | -| Bảo mật (107) | AppSec, pentesting, phân tích lỗ hổng, tuân thủ | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` | -| Kiểm thử (21) | TDD, thiết kế kiểm thử, sửa lỗi, quy trình QA | `test-driven-development`, `testing-patterns`, `test-fixing` | -| Quy trình (17) | Tự động hóa, điều phối, công việc, agents | `workflow-automation`, `inngest`, `trigger-dev` | +| Danh mục | Trọng tâm | Ví dụ kỹ năng | +| :---------------- | :------------------------------------------------------------- | :------------------------------------------------------------------------------ | +| Kiến trúc (52) | Thiết kế hệ thống, ADRs, C4 và các mẫu có thể mở rộng | `architecture`, `c4-context`, `senior-architect` | +| Kinh doanh (35) | Tăng trưởng, định giá, CRO, SEO và thâm nhập thị trường | `copywriting`, `pricing-strategy`, `seo-audit` | +| Dữ liệu & AI (81) | Ứng dụng LLM, RAG, agents, khả năng quan sát, phân tích | `rag-engineer`, `prompt-engineer`, `langgraph` | +| Phát triển (72) | Làm chủ ngôn ngữ, mẫu thiết kế framework, chất lượng code | `typescript-expert`, `python-patterns`, `react-patterns` | +| Tổng quát (95) | Lập kế hoạch, tài liệu, vận hành sản phẩm, viết bài, hướng dẫn | `brainstorming`, `doc-coauthoring`, `writing-plans` | +| Hạ tầng (72) | DevOps, cloud, serverless, triển khai, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` | +| Bảo mật (107) | AppSec, pentesting, phân tích lỗ hổng, tuân thủ | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` | +| Kiểm thử (21) | TDD, thiết kế kiểm thử, sửa lỗi, quy trình QA | `test-driven-development`, `testing-patterns`, `test-fixing` | +| Quy trình (17) | Tự động hóa, điều phối, công việc, agents | `workflow-automation`, `inngest`, `trigger-dev` | ## Bộ sưu tập Tuyển chọn @@ -119,6 +119,7 @@ Bộ sưu tập này sẽ không thể hình thành nếu không có công việ - **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Skills chính thức của Vercel Labs - Thực hành tốt nhất cho React, Hướng dẫn thiết kế Web. - **[openai/skills](https://github.com/openai/skills)**: Danh mục skill của OpenAI Codex - Các kỹ năng của Agent, Trình tạo Skill, Lập kế hoạch Súc tích. - **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Skills chính thức của Supabase - Thực hành tốt nhất cho Postgres. +- **[apify/agent-skills](https://github.com/apify/agent-skills)**: Skills chính thức của Apify - Web scraping, data extraction and automation. ### Những người đóng góp từ Cộng đồng diff --git a/scripts/validate_skills.py b/scripts/validate_skills.py index 5f641518..e56b394d 100644 --- a/scripts/validate_skills.py +++ b/scripts/validate_skills.py @@ -2,79 +2,66 @@ import os import re import argparse import sys -import io - - -def configure_utf8_output() -> None: - """Best-effort UTF-8 stdout/stderr on Windows without dropping diagnostics.""" - if sys.platform != "win32": - return - - for stream_name in ("stdout", "stderr"): - stream = getattr(sys, stream_name) - try: - stream.reconfigure(encoding="utf-8", errors="backslashreplace") - continue - except Exception: - pass - - buffer = getattr(stream, "buffer", None) - if buffer is not None: - setattr( - sys, - stream_name, - io.TextIOWrapper(buffer, encoding="utf-8", errors="backslashreplace"), - ) WHEN_TO_USE_PATTERNS = [ re.compile(r"^##\s+When\s+to\s+Use", re.MULTILINE | re.IGNORECASE), - re.compile(r"^##\s+Use\s+this\s+skill\s+when", re.MULTILINE | re.IGNORECASE), - re.compile(r"^##\s+When\s+to\s+Use\s+This\s+Skill", re.MULTILINE | re.IGNORECASE), + re.compile(r"^##\s+Use\s+this\s+skill\s+when", + re.MULTILINE | re.IGNORECASE), + re.compile(r"^##\s+When\s+to\s+Use\s+This\s+Skill", + re.MULTILINE | re.IGNORECASE), ] + def has_when_to_use_section(content): return any(pattern.search(content) for pattern in WHEN_TO_USE_PATTERNS) -import yaml def parse_frontmatter(content, rel_path=None): """ - Parse frontmatter using PyYAML for robustness. - Returns a dict of key-values and a list of error messages. + Simple frontmatter parser using regex to avoid external dependencies. + Returns a dict of key-values. """ fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL) if not fm_match: - return None, ["Missing or malformed YAML frontmatter"] - + return None, [] + fm_text = fm_match.group(1) + metadata = {} + lines = fm_text.split('\n') fm_errors = [] - try: - metadata = yaml.safe_load(fm_text) or {} - - # Identification of the specific regression issue for better reporting - if "description" in metadata: - desc = metadata["description"] - if not desc or (isinstance(desc, str) and not desc.strip()): - fm_errors.append("description field is empty or whitespace only.") - elif desc == "|": - fm_errors.append("description contains only the YAML block indicator '|', likely due to a parsing regression.") - - return metadata, fm_errors - except yaml.YAMLError as e: - return None, [f"YAML Syntax Error: {e}"] + + for i, line in enumerate(lines): + if ':' in line: + key, val = line.split(':', 1) + metadata[key.strip()] = val.strip().strip('"').strip("'") + + # Check for multi-line description issue (problem identification for the user) + if key.strip() == "description": + stripped_val = val.strip() + if (stripped_val.startswith('"') and stripped_val.endswith('"')) or \ + (stripped_val.startswith("'") and stripped_val.endswith("'")): + if i + 1 < len(lines) and lines[i+1].startswith(' '): + fm_errors.append( + f"description is wrapped in quotes but followed by indented lines. This causes YAML truncation.") + + # Check for literal indicators wrapped in quotes + if stripped_val in ['"|"', "'>'", '"|"', "'>'"]: + fm_errors.append( + f"description uses a block indicator {stripped_val} inside quotes. Remove quotes for proper YAML block behavior.") + return metadata, fm_errors + def validate_skills(skills_dir, strict_mode=False): - configure_utf8_output() - print(f"🔍 Validating skills in: {skills_dir}") print(f"⚙️ Mode: {'STRICT (CI)' if strict_mode else 'Standard (Dev)'}") - + errors = [] warnings = [] skill_count = 0 - + # Pre-compiled regex - security_disclaimer_pattern = re.compile(r"AUTHORIZED USE ONLY", re.IGNORECASE) + security_disclaimer_pattern = re.compile( + r"AUTHORIZED USE ONLY", re.IGNORECASE) valid_risk_levels = ["none", "safe", "critical", "offensive", "unknown"] date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}$') # YYYY-MM-DD format @@ -82,25 +69,26 @@ def validate_skills(skills_dir, strict_mode=False): for root, dirs, files in os.walk(skills_dir): # Skip .disabled or hidden directories dirs[:] = [d for d in dirs if not d.startswith('.')] - + if "SKILL.md" in files: skill_count += 1 skill_path = os.path.join(root, "SKILL.md") rel_path = os.path.relpath(skill_path, skills_dir) - + try: with open(skill_path, 'r', encoding='utf-8') as f: content = f.read() except Exception as e: errors.append(f"❌ {rel_path}: Unreadable file - {str(e)}") continue - + # 1. Frontmatter Check metadata, fm_errors = parse_frontmatter(content, rel_path) if not metadata: - errors.append(f"❌ {rel_path}: Missing or malformed YAML frontmatter") - continue # Cannot proceed without metadata - + errors.append( + f"❌ {rel_path}: Missing or malformed YAML frontmatter") + continue # Cannot proceed without metadata + if fm_errors: for fe in fm_errors: errors.append(f"❌ {rel_path}: YAML Structure Error - {fe}") @@ -109,51 +97,64 @@ def validate_skills(skills_dir, strict_mode=False): if "name" not in metadata: errors.append(f"❌ {rel_path}: Missing 'name' in frontmatter") elif metadata["name"] != os.path.basename(root): - errors.append(f"❌ {rel_path}: Name '{metadata['name']}' does not match folder name '{os.path.basename(root)}'") + errors.append( + f"❌ {rel_path}: Name '{metadata['name']}' does not match folder name '{os.path.basename(root)}'") - if "description" not in metadata or metadata["description"] is None: - errors.append(f"❌ {rel_path}: Missing 'description' in frontmatter") + if "description" not in metadata: + errors.append( + f"❌ {rel_path}: Missing 'description' in frontmatter") else: # agentskills-ref checks for short descriptions - desc = metadata["description"] - if not isinstance(desc, str): - errors.append(f"❌ {rel_path}: 'description' must be a string, got {type(desc).__name__}") - elif len(desc) > 300: # increased limit for multi-line support - errors.append(f"❌ {rel_path}: Description is oversized ({len(desc)} chars). Must be concise.") + if len(metadata["description"]) > 200: + errors.append( + f"❌ {rel_path}: Description is oversized ({len(metadata['description'])} chars). Must be concise.") + elif not len(metadata["description"]): + errors.append( + f"❌ {rel_path}: Description is empty.") # Risk Validation (Quality Bar) if "risk" not in metadata: msg = f"⚠️ {rel_path}: Missing 'risk' label (defaulting to 'unknown')" - if strict_mode: errors.append(msg.replace("⚠️", "❌")) - else: warnings.append(msg) + if strict_mode: + errors.append(msg.replace("⚠️", "❌")) + else: + warnings.append(msg) elif metadata["risk"] not in valid_risk_levels: - errors.append(f"❌ {rel_path}: Invalid risk level '{metadata['risk']}'. Must be one of {valid_risk_levels}") + errors.append( + f"❌ {rel_path}: Invalid risk level '{metadata['risk']}'. Must be one of {valid_risk_levels}") # Source Validation if "source" not in metadata: msg = f"⚠️ {rel_path}: Missing 'source' attribution" - if strict_mode: errors.append(msg.replace("⚠️", "❌")) - else: warnings.append(msg) + if strict_mode: + errors.append(msg.replace("⚠️", "❌")) + else: + warnings.append(msg) # Date Added Validation (optional field) if "date_added" in metadata: if not date_pattern.match(metadata["date_added"]): - errors.append(f"❌ {rel_path}: Invalid 'date_added' format. Must be YYYY-MM-DD (e.g., '2024-01-15'), got '{metadata['date_added']}'") + errors.append( + f"❌ {rel_path}: Invalid 'date_added' format. Must be YYYY-MM-DD (e.g., '2024-01-15'), got '{metadata['date_added']}'") else: msg = f"ℹ️ {rel_path}: Missing 'date_added' field (optional, but recommended)" - if strict_mode: warnings.append(msg) + if strict_mode: + warnings.append(msg) # In normal mode, we just silently skip this # 3. Content Checks (Triggers) if not has_when_to_use_section(content): msg = f"⚠️ {rel_path}: Missing '## When to Use' section" - if strict_mode: errors.append(msg.replace("⚠️", "❌")) - else: warnings.append(msg) + if strict_mode: + errors.append(msg.replace("⚠️", "❌")) + else: + warnings.append(msg) # 4. Security Guardrails if metadata.get("risk") == "offensive": if not security_disclaimer_pattern.search(content): - errors.append(f"🚨 {rel_path}: OFFENSIVE SKILL MISSING SECURITY DISCLAIMER! (Must contain 'AUTHORIZED USE ONLY')") + errors.append( + f"🚨 {rel_path}: OFFENSIVE SKILL MISSING SECURITY DISCLAIMER! (Must contain 'AUTHORIZED USE ONLY')") # 5. Dangling Links Validation # Look for markdown links: [text](href) @@ -165,15 +166,16 @@ def validate_skills(skills_dir, strict_mode=False): continue if os.path.isabs(link_clean): continue - + # Check if file exists relative to this skill file target_path = os.path.normpath(os.path.join(root, link_clean)) if not os.path.exists(target_path): - errors.append(f"❌ {rel_path}: Dangling link detected. Path '{link_clean}' (from '...({link})') does not exist locally.") + errors.append( + f"❌ {rel_path}: Dangling link detected. Path '{link_clean}' (from '...({link})') does not exist locally.") # Reporting print(f"\n📊 Checked {skill_count} skills.") - + if warnings: print(f"\n⚠️ Found {len(warnings)} Warnings:") for w in warnings: @@ -192,14 +194,16 @@ def validate_skills(skills_dir, strict_mode=False): print("\n✨ All skills passed validation!") return True + if __name__ == "__main__": parser = argparse.ArgumentParser(description="Validate Antigravity Skills") - parser.add_argument("--strict", action="store_true", help="Fail on warnings (for CI)") + parser.add_argument("--strict", action="store_true", + help="Fail on warnings (for CI)") args = parser.parse_args() base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) skills_path = os.path.join(base_dir, "skills") - + success = validate_skills(skills_path, strict_mode=args.strict) if not success: sys.exit(1) diff --git a/skills/apify-actor-development/SKILL.md b/skills/apify-actor-development/SKILL.md new file mode 100644 index 00000000..3df9de06 --- /dev/null +++ b/skills/apify-actor-development/SKILL.md @@ -0,0 +1,218 @@ +--- +name: apify-actor-development +description: "Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifying existing ones, or troubleshooting Acto..." +--- + +# Apify Actor Development + +**Important:** Before you begin, fill in the `generatedBy` property in the meta section of `.actor/actor.json`. Replace it with the tool and model you're currently using, such as "Claude Code with Claude Sonnet 4.5". This helps Apify monitor and improve AGENTS.md for specific AI tools and models. + +## What are Apify Actors? + +Actors are serverless programs inspired by the UNIX philosophy - programs that do one thing well and can be easily combined to build complex systems. They're packaged as Docker images and run in isolated containers in the cloud. + +**Core Concepts:** +- Accept well-defined JSON input +- Perform isolated tasks (web scraping, automation, data processing) +- Produce structured JSON output to datasets and/or store data in key-value stores +- Can run from seconds to hours or even indefinitely +- Persist state and can be restarted + +## Prerequisites & Setup (MANDATORY) + +Before creating or modifying actors, verify that `apify` CLI is installed `apify --help`. + +If it is not installed, use one of these methods (listed in order of preference): + +```bash +# Preferred: install via a package manager (provides integrity checks) +npm install -g apify-cli + +# Or (Mac): brew install apify-cli +``` + +> **Security note:** Do NOT install the CLI by piping remote scripts to a shell +> (e.g. `curl … | bash` or `irm … | iex`). Always use a package manager. + +When the apify CLI is installed, check that it is logged in with: + +```bash +apify info # Should return your username +``` + +If it is not logged in, check if the `APIFY_TOKEN` environment variable is defined (if not, ask the user to generate one on https://console.apify.com/settings/integrations and then define `APIFY_TOKEN` with it). + +Then authenticate using one of these methods: + +```bash +# Option 1 (preferred): The CLI automatically reads APIFY_TOKEN from the environment. +# Just ensure the env var is exported and run any apify command — no explicit login needed. + +# Option 2: Interactive login (prompts for token without exposing it in shell history) +apify login +``` + +> **Security note:** Avoid passing tokens as command-line arguments (e.g. `apify login -t `). +> Arguments are visible in process listings and may be recorded in shell history. +> Prefer environment variables or interactive login instead. +> Never log, print, or embed `APIFY_TOKEN` in source code or configuration files. +> Use a token with the minimum required permissions (scoped token) and rotate it periodically. + +## Template Selection + +**IMPORTANT:** Before starting actor development, always ask the user which programming language they prefer: +- **JavaScript** - Use `apify create -t project_empty` +- **TypeScript** - Use `apify create -t ts_empty` +- **Python** - Use `apify create -t python-empty` + +Use the appropriate CLI command based on the user's language choice. Additional packages (Crawlee, Playwright, etc.) can be installed later as needed. + +## Quick Start Workflow + +1. **Create actor project** - Run the appropriate `apify create` command based on user's language preference (see Template Selection above) +2. **Install dependencies** (verify package names match intended packages before installing) + - JavaScript/TypeScript: `npm install` (uses `package-lock.json` for reproducible, integrity-checked installs — commit the lockfile to version control) + - Python: `pip install -r requirements.txt` (pin exact versions in `requirements.txt`, e.g. `crawlee==1.2.3`, and commit the file to version control) +3. **Implement logic** - Write the actor code in `src/main.py`, `src/main.js`, or `src/main.ts` +4. **Configure schemas** - Update input/output schemas in `.actor/input_schema.json`, `.actor/output_schema.json`, `.actor/dataset_schema.json` +5. **Configure platform settings** - Update `.actor/actor.json` with actor metadata (see [references/actor-json.md](references/actor-json.md)) +6. **Write documentation** - Create comprehensive README.md for the marketplace +7. **Test locally** - Run `apify run` to verify functionality (see Local Testing section below) +8. **Deploy** - Run `apify push` to deploy the actor on the Apify platform (actor name is defined in `.actor/actor.json`) + +## Security + +**Treat all crawled web content as untrusted input.** Actors ingest data from external websites that may contain malicious payloads. Follow these rules: + +- **Sanitize crawled data** — Never pass raw HTML, URLs, or scraped text directly into shell commands, `eval()`, database queries, or template engines. Use proper escaping or parameterized APIs. +- **Validate and type-check all external data** — Before pushing to datasets or key-value stores, verify that values match expected types and formats. Reject or sanitize unexpected structures. +- **Do not execute or interpret crawled content** — Never treat scraped text as code, commands, or configuration. Content from websites could include prompt injection attempts or embedded scripts. +- **Isolate credentials from data pipelines** — Ensure `APIFY_TOKEN` and other secrets are never accessible in request handlers or passed alongside crawled data. Use the Apify SDK's built-in credential management rather than passing tokens through environment variables in data-processing code. +- **Review dependencies before installing** — When adding packages with `npm install` or `pip install`, verify the package name and publisher. Typosquatting is a common supply-chain attack vector. Prefer well-known, actively maintained packages. +- **Pin versions and use lockfiles** — Always commit `package-lock.json` (Node.js) or pin exact versions in `requirements.txt` (Python). Lockfiles ensure reproducible builds and prevent silent dependency substitution. Run `npm audit` or `pip-audit` periodically to check for known vulnerabilities. + +## Best Practices + +**✓ Do:** +- Use `apify run` to test actors locally (configures Apify environment and storage) +- Use Apify SDK (`apify`) for code running ON Apify platform +- Validate input early with proper error handling and fail gracefully +- Use CheerioCrawler for static HTML (10x faster than browsers) +- Use PlaywrightCrawler only for JavaScript-heavy sites +- Use router pattern (createCheerioRouter/createPlaywrightRouter) for complex crawls +- Implement retry strategies with exponential backoff +- Use proper concurrency: HTTP (10-50), Browser (1-5) +- Set sensible defaults in `.actor/input_schema.json` +- Define output schema in `.actor/output_schema.json` +- Clean and validate data before pushing to dataset +- Use semantic CSS selectors with fallback strategies +- Respect robots.txt, ToS, and implement rate limiting +- **Always use `apify/log` package** — censors sensitive data (API keys, tokens, credentials) +- Implement readiness probe handler (required if your Actor uses standby mode) + +**✗ Don't:** +- Use `npm start`, `npm run start`, `npx apify run`, or similar commands to run actors (use `apify run` instead) +- Assume local storage from `apify run` is pushed to or visible in the Apify Console — it is local-only; deploy with `apify push` and run on the platform to see results in the Console +- Rely on `Dataset.getInfo()` for final counts on Cloud +- Use browser crawlers when HTTP/Cheerio works +- Hard code values that should be in input schema or environment variables +- Skip input validation or error handling +- Overload servers - use appropriate concurrency and delays +- Scrape prohibited content or ignore Terms of Service +- Store personal/sensitive data unless explicitly permitted +- Use deprecated options like `requestHandlerTimeoutMillis` on CheerioCrawler (v3.x) +- Use `additionalHttpHeaders` - use `preNavigationHooks` instead +- Pass raw crawled content into shell commands, `eval()`, or code-generation functions +- Use `console.log()` or `print()` instead of the Apify logger — these bypass credential censoring +- Disable standby mode without explicit permission + +## Logging + +See [references/logging.md](references/logging.md) for complete logging documentation including available log levels and best practices for JavaScript/TypeScript and Python. + +Check `usesStandbyMode` in `.actor/actor.json` - only implement if set to `true`. + +## Commands + +```bash +apify run # Run Actor locally +apify login # Authenticate account +apify push # Deploy to Apify platform (uses name from .actor/actor.json) +apify help # List all commands +``` + +**IMPORTANT:** Always use `apify run` to test actors locally. Do not use `npm run start`, `npm start`, `yarn start`, or other package manager commands - these will not properly configure the Apify environment and storage. + +## Local Testing + +When testing an actor locally with `apify run`, provide input data by creating a JSON file at: + +``` +storage/key_value_stores/default/INPUT.json +``` + +This file should contain the input parameters defined in your `.actor/input_schema.json`. The actor will read this input when running locally, mirroring how it receives input on the Apify platform. + +**IMPORTANT - Local storage is NOT synced to the Apify Console:** +- Running `apify run` stores all data (datasets, key-value stores, request queues) **only on your local filesystem** in the `storage/` directory. +- This data is **never** automatically uploaded or pushed to the Apify platform. It exists only on your machine. +- To verify results on the Apify Console, you must deploy the Actor with `apify push` and then run it on the platform. +- Do **not** rely on checking the Apify Console to verify results from local runs — instead, inspect the local `storage/` directory or check the Actor's log output. + +## Standby Mode + +See [references/standby-mode.md](references/standby-mode.md) for complete standby mode documentation including readiness probe implementation for JavaScript/TypeScript and Python. + +## Project Structure + +``` +.actor/ +├── actor.json # Actor config: name, version, env vars, runtime +├── input_schema.json # Input validation & Console form definition +└── output_schema.json # Output storage and display templates +src/ +└── main.js/ts/py # Actor entry point +storage/ # Local-only storage (NOT synced to Apify Console) +├── datasets/ # Output items (JSON objects) +├── key_value_stores/ # Files, config, INPUT +└── request_queues/ # Pending crawl requests +Dockerfile # Container image definition +``` + +## Actor Configuration + +See [references/actor-json.md](references/actor-json.md) for complete actor.json structure and configuration options. + +## Input Schema + +See [references/input-schema.md](references/input-schema.md) for input schema structure and examples. + +## Output Schema + +See [references/output-schema.md](references/output-schema.md) for output schema structure, examples, and template variables. + +## Dataset Schema + +See [references/dataset-schema.md](references/dataset-schema.md) for dataset schema structure, configuration, and display properties. + +## Key-Value Store Schema + +See [references/key-value-store-schema.md](references/key-value-store-schema.md) for key-value store schema structure, collections, and configuration. + + +## Apify MCP Tools + +If MCP server is configured, use these tools for documentation: + +- `search-apify-docs` - Search documentation +- `fetch-apify-docs` - Get full doc pages + +Otherwise, the MCP Server url: `https://mcp.apify.com/?tools=docs`. + +## Resources + +- [docs.apify.com/llms.txt](https://docs.apify.com/llms.txt) - Apify quick reference documentation +- [docs.apify.com/llms-full.txt](https://docs.apify.com/llms-full.txt) - Apify complete documentation +- [https://crawlee.dev/llms.txt](https://crawlee.dev/llms.txt) - Crawlee quick reference documentation +- [https://crawlee.dev/llms-full.txt](https://crawlee.dev/llms-full.txt) - Crawlee complete documentation +- [whitepaper.actor](https://raw.githubusercontent.com/apify/actor-whitepaper/refs/heads/master/README.md) - Complete Actor specification diff --git a/skills/apify-actor-development/references/actor-json.md b/skills/apify-actor-development/references/actor-json.md new file mode 100644 index 00000000..f698139f --- /dev/null +++ b/skills/apify-actor-development/references/actor-json.md @@ -0,0 +1,66 @@ +# Actor Configuration (actor.json) + +The `.actor/actor.json` file contains the Actor's configuration including metadata, schema references, and platform settings. + +## Structure + +```json +{ + "actorSpecification": 1, + "name": "project-name", + "title": "Project Title", + "description": "Actor description", + "version": "0.0", + "meta": { + "templateId": "template-id", + "generatedBy": "" + }, + "input": "./input_schema.json", + "output": "./output_schema.json", + "storages": { + "dataset": "./dataset_schema.json" + }, + "dockerfile": "../Dockerfile" +} +``` + +## Example + +```json +{ + "actorSpecification": 1, + "name": "project-cheerio-crawler-javascript", + "title": "Project Cheerio Crawler Javascript", + "description": "Crawlee and Cheerio project in javascript.", + "version": "0.0", + "meta": { + "templateId": "js-crawlee-cheerio", + "generatedBy": "Claude Code with Claude Sonnet 4.5" + }, + "input": "./input_schema.json", + "output": "./output_schema.json", + "storages": { + "dataset": "./dataset_schema.json" + }, + "dockerfile": "../Dockerfile" +} +``` + +## Properties + +- `actorSpecification` (integer, required) - Version of actor specification (currently 1) +- `name` (string, required) - Actor identifier (lowercase, hyphens allowed) +- `title` (string, required) - Human-readable title displayed in UI +- `description` (string, optional) - Actor description for marketplace +- `version` (string, required) - Semantic version number +- `meta` (object, optional) - Metadata about actor generation + - `templateId` (string) - ID of template used to create the actor + - `generatedBy` (string) - Tool and model name that generated/modified the actor (e.g., "Claude Code with Claude Sonnet 4.5") +- `input` (string, optional) - Path to input schema file +- `output` (string, optional) - Path to output schema file +- `storages` (object, optional) - Storage schema references + - `dataset` (string) - Path to dataset schema file + - `keyValueStore` (string) - Path to key-value store schema file +- `dockerfile` (string, optional) - Path to Dockerfile + +**Important:** Always fill in the `generatedBy` property with the tool and model you're currently using (e.g., "Claude Code with Claude Sonnet 4.5") to help Apify improve documentation. diff --git a/skills/apify-actor-development/references/dataset-schema.md b/skills/apify-actor-development/references/dataset-schema.md new file mode 100644 index 00000000..c61a8cea --- /dev/null +++ b/skills/apify-actor-development/references/dataset-schema.md @@ -0,0 +1,209 @@ +# Dataset Schema Reference + +The dataset schema defines how your Actor's output data is structured, transformed, and displayed in the Output tab in the Apify Console. + +## Examples + +### JavaScript and TypeScript + +Consider an example Actor that calls `Actor.pushData()` to store data into dataset: + +```javascript +import { Actor } from 'apify'; +// Initialize the JavaScript SDK +await Actor.init(); + +/** + * Actor code + */ +await Actor.pushData({ + numericField: 10, + pictureUrl: 'https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_92x30dp.png', + linkUrl: 'https://google.com', + textField: 'Google', + booleanField: true, + dateField: new Date(), + arrayField: ['#hello', '#world'], + objectField: {}, +}); + +// Exit successfully +await Actor.exit(); +``` + +### Python + +Consider an example Actor that calls `Actor.push_data()` to store data into dataset: + +```python +# Dataset push example (Python) +import asyncio +from datetime import datetime +from apify import Actor + +async def main(): + await Actor.init() + + # Actor code + await Actor.push_data({ + 'numericField': 10, + 'pictureUrl': 'https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_92x30dp.png', + 'linkUrl': 'https://google.com', + 'textField': 'Google', + 'booleanField': True, + 'dateField': datetime.now().isoformat(), + 'arrayField': ['#hello', '#world'], + 'objectField': {}, + }) + + # Exit successfully + await Actor.exit() + +if __name__ == '__main__': + asyncio.run(main()) +``` + +## Configuration + +To set up the Actor's output tab UI, reference a dataset schema file in `.actor/actor.json`: + +```json +{ + "actorSpecification": 1, + "name": "book-library-scraper", + "title": "Book Library Scraper", + "version": "1.0.0", + "storages": { + "dataset": "./dataset_schema.json" + } +} +``` + +Then create the dataset schema in `.actor/dataset_schema.json`: + +```json +{ + "actorSpecification": 1, + "fields": {}, + "views": { + "overview": { + "title": "Overview", + "transformation": { + "fields": [ + "pictureUrl", + "linkUrl", + "textField", + "booleanField", + "arrayField", + "objectField", + "dateField", + "numericField" + ] + }, + "display": { + "component": "table", + "properties": { + "pictureUrl": { + "label": "Image", + "format": "image" + }, + "linkUrl": { + "label": "Link", + "format": "link" + }, + "textField": { + "label": "Text", + "format": "text" + }, + "booleanField": { + "label": "Boolean", + "format": "boolean" + }, + "arrayField": { + "label": "Array", + "format": "array" + }, + "objectField": { + "label": "Object", + "format": "object" + }, + "dateField": { + "label": "Date", + "format": "date" + }, + "numericField": { + "label": "Number", + "format": "number" + } + } + } + } + } +} +``` + +## Structure + +```json +{ + "actorSpecification": 1, + "fields": {}, + "views": { + "": { + "title": "string (required)", + "description": "string (optional)", + "transformation": { + "fields": ["string (required)"], + "unwind": ["string (optional)"], + "flatten": ["string (optional)"], + "omit": ["string (optional)"], + "limit": "integer (optional)", + "desc": "boolean (optional)" + }, + "display": { + "component": "table (required)", + "properties": { + "": { + "label": "string (optional)", + "format": "text|number|date|link|boolean|image|array|object (optional)" + } + } + } + } + } +} +``` + +## Properties + +### Dataset Schema Properties + +- `actorSpecification` (integer, required) - Specifies the version of dataset schema structure document (currently only version 1) +- `fields` (JSONSchema object, required) - Schema of one dataset object (use JsonSchema Draft 2020-12 or compatible) +- `views` (DatasetView object, required) - Object with API and UI views description + +### DatasetView Properties + +- `title` (string, required) - Visible in UI Output tab and API +- `description` (string, optional) - Only available in API response +- `transformation` (ViewTransformation object, required) - Data transformation applied when loading from Dataset API +- `display` (ViewDisplay object, required) - Output tab UI visualization definition + +### ViewTransformation Properties + +- `fields` (string[], required) - Fields to present in output (order matches column order) +- `unwind` (string[], optional) - Deconstructs nested children into parent object +- `flatten` (string[], optional) - Transforms nested object into flat structure +- `omit` (string[], optional) - Removes specified fields from output +- `limit` (integer, optional) - Maximum number of results (default: all) +- `desc` (boolean, optional) - Sort order (true = newest first) + +### ViewDisplay Properties + +- `component` (string, required) - Only `table` is available +- `properties` (Object, optional) - Keys matching `transformation.fields` with ViewDisplayProperty values + +### ViewDisplayProperty Properties + +- `label` (string, optional) - Table column header +- `format` (string, optional) - One of: `text`, `number`, `date`, `link`, `boolean`, `image`, `array`, `object` diff --git a/skills/apify-actor-development/references/input-schema.md b/skills/apify-actor-development/references/input-schema.md new file mode 100644 index 00000000..0acfeb07 --- /dev/null +++ b/skills/apify-actor-development/references/input-schema.md @@ -0,0 +1,66 @@ +# Input Schema Reference + +The input schema defines the input parameters for an Actor. It's a JSON object comprising various field types supported by the Apify platform. + +## Structure + +```json +{ + "title": "", + "type": "object", + "schemaVersion": 1, + "properties": { + /* define input fields here */ + }, + "required": [] +} +``` + +## Example + +```json +{ + "title": "E-commerce Product Scraper Input", + "type": "object", + "schemaVersion": 1, + "properties": { + "startUrls": { + "title": "Start URLs", + "type": "array", + "description": "URLs to start scraping from (category pages or product pages)", + "editor": "requestListSources", + "default": [{ "url": "https://example.com/category" }], + "prefill": [{ "url": "https://example.com/category" }] + }, + "followVariants": { + "title": "Follow Product Variants", + "type": "boolean", + "description": "Whether to scrape product variants (different colors, sizes)", + "default": true + }, + "maxRequestsPerCrawl": { + "title": "Max Requests per Crawl", + "type": "integer", + "description": "Maximum number of pages to scrape (0 = unlimited)", + "default": 1000, + "minimum": 0 + }, + "proxyConfiguration": { + "title": "Proxy Configuration", + "type": "object", + "description": "Proxy settings for anti-bot protection", + "editor": "proxy", + "default": { "useApifyProxy": false } + }, + "locale": { + "title": "Locale", + "type": "string", + "description": "Language/country code for localized content", + "default": "cs", + "enum": ["cs", "en", "de", "sk"], + "enumTitles": ["Czech", "English", "German", "Slovak"] + } + }, + "required": ["startUrls"] +} +``` diff --git a/skills/apify-actor-development/references/key-value-store-schema.md b/skills/apify-actor-development/references/key-value-store-schema.md new file mode 100644 index 00000000..81b588f5 --- /dev/null +++ b/skills/apify-actor-development/references/key-value-store-schema.md @@ -0,0 +1,129 @@ +# Key-Value Store Schema Reference + +The key-value store schema organizes keys into logical groups called collections for easier data management. + +## Examples + +### JavaScript and TypeScript + +Consider an example Actor that calls `Actor.setValue()` to save records into the key-value store: + +```javascript +import { Actor } from 'apify'; +// Initialize the JavaScript SDK +await Actor.init(); + +/** + * Actor code + */ +await Actor.setValue('document-1', 'my text data', { contentType: 'text/plain' }); + +await Actor.setValue(`image-${imageID}`, imageBuffer, { contentType: 'image/jpeg' }); + +// Exit successfully +await Actor.exit(); +``` + +### Python + +Consider an example Actor that calls `Actor.set_value()` to save records into the key-value store: + +```python +# Key-Value Store set example (Python) +import asyncio +from apify import Actor + +async def main(): + await Actor.init() + + # Actor code + await Actor.set_value('document-1', 'my text data', content_type='text/plain') + + image_id = '123' # example placeholder + image_buffer = b'...' # bytes buffer with image data + await Actor.set_value(f'image-{image_id}', image_buffer, content_type='image/jpeg') + + # Exit successfully + await Actor.exit() + +if __name__ == '__main__': + asyncio.run(main()) +``` + +## Configuration + +To configure the key-value store schema, reference a schema file in `.actor/actor.json`: + +```json +{ + "actorSpecification": 1, + "name": "data-collector", + "title": "Data Collector", + "version": "1.0.0", + "storages": { + "keyValueStore": "./key_value_store_schema.json" + } +} +``` + +Then create the key-value store schema in `.actor/key_value_store_schema.json`: + +```json +{ + "actorKeyValueStoreSchemaVersion": 1, + "title": "Key-Value Store Schema", + "collections": { + "documents": { + "title": "Documents", + "description": "Text documents stored by the Actor", + "keyPrefix": "document-" + }, + "images": { + "title": "Images", + "description": "Images stored by the Actor", + "keyPrefix": "image-", + "contentTypes": ["image/jpeg"] + } + } +} +``` + +## Structure + +```json +{ + "actorKeyValueStoreSchemaVersion": 1, + "title": "string (required)", + "description": "string (optional)", + "collections": { + "": { + "title": "string (required)", + "description": "string (optional)", + "key": "string (conditional - use key OR keyPrefix)", + "keyPrefix": "string (conditional - use key OR keyPrefix)", + "contentTypes": ["string (optional)"], + "jsonSchema": "object (optional)" + } + } +} +``` + +## Properties + +### Key-Value Store Schema Properties + +- `actorKeyValueStoreSchemaVersion` (integer, required) - Version of key-value store schema structure document (currently only version 1) +- `title` (string, required) - Title of the schema +- `description` (string, optional) - Description of the schema +- `collections` (Object, required) - Object where each key is a collection ID and value is a Collection object + +### Collection Properties + +- `title` (string, required) - Collection title shown in UI tabs +- `description` (string, optional) - Description appearing in UI tooltips +- `key` (string, conditional) - Single specific key for this collection +- `keyPrefix` (string, conditional) - Prefix for keys included in this collection +- `contentTypes` (string[], optional) - Allowed content types for validation +- `jsonSchema` (object, optional) - JSON Schema Draft 07 format for `application/json` content type validation + +Either `key` or `keyPrefix` must be specified for each collection, but not both. diff --git a/skills/apify-actor-development/references/logging.md b/skills/apify-actor-development/references/logging.md new file mode 100644 index 00000000..cc39bf3a --- /dev/null +++ b/skills/apify-actor-development/references/logging.md @@ -0,0 +1,50 @@ +# Actor Logging Reference + +## JavaScript and TypeScript + +**ALWAYS use the `apify/log` package for logging** - This package contains critical security logic including censoring sensitive data (Apify tokens, API keys, credentials) to prevent accidental exposure in logs. + +### Available Log Levels in `apify/log` + +The Apify log package provides the following methods for logging: + +- `log.debug()` - Debug level logs (detailed diagnostic information) +- `log.info()` - Info level logs (general informational messages) +- `log.warning()` - Warning level logs (warning messages for potentially problematic situations) +- `log.warningOnce()` - Warning level logs (same warning message logged only once) +- `log.error()` - Error level logs (error messages for failures) +- `log.exception()` - Exception level logs (for exceptions with stack traces) +- `log.perf()` - Performance level logs (performance metrics and timing information) +- `log.deprecated()` - Deprecation level logs (warnings about deprecated code) +- `log.softFail()` - Soft failure logs (non-critical failures that don't stop execution, e.g., input validation errors, skipped items) +- `log.internal()` - Internal level logs (internal/system messages) + +### Best Practices + +- Use `log.debug()` for detailed operation-level diagnostics (inside functions) +- Use `log.info()` for general informational messages (API requests, successful operations) +- Use `log.warning()` for potentially problematic situations (validation failures, unexpected states) +- Use `log.error()` for actual errors and failures +- Use `log.exception()` for caught exceptions with stack traces + +## Python + +**ALWAYS use `Actor.log` for logging** - This logger contains critical security logic including censoring sensitive data (Apify tokens, API keys, credentials) to prevent accidental exposure in logs. + +### Available Log Levels + +The Apify Actor logger provides the following methods for logging: + +- `Actor.log.debug()` - Debug level logs (detailed diagnostic information) +- `Actor.log.info()` - Info level logs (general informational messages) +- `Actor.log.warning()` - Warning level logs (warning messages for potentially problematic situations) +- `Actor.log.error()` - Error level logs (error messages for failures) +- `Actor.log.exception()` - Exception level logs (for exceptions with stack traces) + +### Best Practices + +- Use `Actor.log.debug()` for detailed operation-level diagnostics (inside functions) +- Use `Actor.log.info()` for general informational messages (API requests, successful operations) +- Use `Actor.log.warning()` for potentially problematic situations (validation failures, unexpected states) +- Use `Actor.log.error()` for actual errors and failures +- Use `Actor.log.exception()` for caught exceptions with stack traces diff --git a/skills/apify-actor-development/references/output-schema.md b/skills/apify-actor-development/references/output-schema.md new file mode 100644 index 00000000..89e439ca --- /dev/null +++ b/skills/apify-actor-development/references/output-schema.md @@ -0,0 +1,49 @@ +# Output Schema Reference + +The Actor output schema builds upon the schemas for the dataset and key-value store. It specifies where an Actor stores its output and defines templates for accessing that output. Apify Console uses these output definitions to display run results. + +## Structure + +```json +{ + "actorOutputSchemaVersion": 1, + "title": "", + "properties": { + /* define your outputs here */ + } +} +``` + +## Example + +```json +{ + "actorOutputSchemaVersion": 1, + "title": "Output schema of the files scraper", + "properties": { + "files": { + "type": "string", + "title": "Files", + "template": "{{links.apiDefaultKeyValueStoreUrl}}/keys" + }, + "dataset": { + "type": "string", + "title": "Dataset", + "template": "{{links.apiDefaultDatasetUrl}}/items" + } + } +} +``` + +## Output Schema Template Variables + +- `links` (object) - Contains quick links to most commonly used URLs +- `links.publicRunUrl` (string) - Public run url in format `https://console.apify.com/view/runs/:runId` +- `links.consoleRunUrl` (string) - Console run url in format `https://console.apify.com/actors/runs/:runId` +- `links.apiRunUrl` (string) - API run url in format `https://api.apify.com/v2/actor-runs/:runId` +- `links.apiDefaultDatasetUrl` (string) - API url of default dataset in format `https://api.apify.com/v2/datasets/:defaultDatasetId` +- `links.apiDefaultKeyValueStoreUrl` (string) - API url of default key-value store in format `https://api.apify.com/v2/key-value-stores/:defaultKeyValueStoreId` +- `links.containerRunUrl` (string) - URL of a webserver running inside the run in format `https://.runs.apify.net/` +- `run` (object) - Contains information about the run same as it is returned from the `GET Run` API endpoint +- `run.defaultDatasetId` (string) - ID of the default dataset +- `run.defaultKeyValueStoreId` (string) - ID of the default key-value store diff --git a/skills/apify-actor-development/references/standby-mode.md b/skills/apify-actor-development/references/standby-mode.md new file mode 100644 index 00000000..73d60252 --- /dev/null +++ b/skills/apify-actor-development/references/standby-mode.md @@ -0,0 +1,61 @@ +# Actor Standby Mode Reference + +## JavaScript and TypeScript + +- **NEVER disable standby mode (`usesStandbyMode: false`) in `.actor/actor.json` without explicit permission** - Actor Standby mode solves this problem by letting you have the Actor ready in the background, waiting for the incoming HTTP requests. In a sense, the Actor behaves like a real-time web server or standard API server instead of running the logic once to process everything in batch. Always keep `usesStandbyMode: true` unless there is a specific documented reason to disable it +- **ALWAYS implement readiness probe handler for standby Actors** - Handle the `x-apify-container-server-readiness-probe` header at GET / endpoint to ensure proper Actor lifecycle management + +You can recognize a standby Actor by checking the `usesStandbyMode` property in `.actor/actor.json`. Only implement the readiness probe if this property is set to `true`. + +### Readiness Probe Implementation Example + +```javascript +// Apify standby readiness probe at root path +app.get('/', (req, res) => { + res.writeHead(200, { 'Content-Type': 'text/plain' }); + if (req.headers['x-apify-container-server-readiness-probe']) { + res.end('Readiness probe OK\n'); + } else { + res.end('Actor is ready\n'); + } +}); +``` + +Key points: + +- Detect the `x-apify-container-server-readiness-probe` header in incoming requests +- Respond with HTTP 200 status code for both readiness probe and normal requests +- This enables proper Actor lifecycle management in standby mode + +## Python + +- **NEVER disable standby mode (`usesStandbyMode: false`) in `.actor/actor.json` without explicit permission** - Actor Standby mode solves this problem by letting you have the Actor ready in the background, waiting for the incoming HTTP requests. In a sense, the Actor behaves like a real-time web server or standard API server instead of running the logic once to process everything in batch. Always keep `usesStandbyMode: true` unless there is a specific documented reason to disable it +- **ALWAYS implement readiness probe handler for standby Actors** - Handle the `x-apify-container-server-readiness-probe` header at GET / endpoint to ensure proper Actor lifecycle management + +You can recognize a standby Actor by checking the `usesStandbyMode` property in `.actor/actor.json`. Only implement the readiness probe if this property is set to `true`. + +### Readiness Probe Implementation Example + +```python +# Apify standby readiness probe +from http.server import SimpleHTTPRequestHandler + +class GetHandler(SimpleHTTPRequestHandler): + def do_GET(self): + # Handle Apify standby readiness probe + if 'x-apify-container-server-readiness-probe' in self.headers: + self.send_response(200) + self.end_headers() + self.wfile.write(b'Readiness probe OK') + return + + self.send_response(200) + self.end_headers() + self.wfile.write(b'Actor is ready') +``` + +Key points: + +- Detect the `x-apify-container-server-readiness-probe` header in incoming requests +- Respond with HTTP 200 status code for both readiness probe and normal requests +- This enables proper Actor lifecycle management in standby mode diff --git a/skills/apify-actorization/SKILL.md b/skills/apify-actorization/SKILL.md new file mode 100644 index 00000000..4f90b1d0 --- /dev/null +++ b/skills/apify-actorization/SKILL.md @@ -0,0 +1,184 @@ +--- +name: apify-actorization +description: "Convert existing projects into Apify Actors - serverless cloud programs. Actorize JavaScript/TypeScript (SDK with Actor.init/exit), Python (async context manager), or any language (CLI wrapper). Us..." +--- + +# Apify Actorization + +Actorization converts existing software into reusable serverless applications compatible with the Apify platform. Actors are programs packaged as Docker images that accept well-defined JSON input, perform an action, and optionally produce structured JSON output. + +## Quick Start + +1. Run `apify init` in project root +2. Wrap code with SDK lifecycle (see language-specific section below) +3. Configure `.actor/input_schema.json` +4. Test with `apify run --input '{"key": "value"}'` +5. Deploy with `apify push` + +## When to Use This Skill + +- Converting an existing project to run on Apify platform +- Adding Apify SDK integration to a project +- Wrapping a CLI tool or script as an Actor +- Migrating a Crawlee project to Apify + +## Prerequisites + +Verify `apify` CLI is installed: + +```bash +apify --help +``` + +If not installed: + +```bash +curl -fsSL https://apify.com/install-cli.sh | bash + +# Or (Mac): brew install apify-cli +# Or (Windows): irm https://apify.com/install-cli.ps1 | iex +# Or: npm install -g apify-cli +``` + +Verify CLI is logged in: + +```bash +apify info # Should return your username +``` + +If not logged in, check if `APIFY_TOKEN` environment variable is defined. If not, ask the user to generate one at https://console.apify.com/settings/integrations, then: + +```bash +apify login -t $APIFY_TOKEN +``` + +## Actorization Checklist + +Copy this checklist to track progress: + +- [ ] Step 1: Analyze project (language, entry point, inputs, outputs) +- [ ] Step 2: Run `apify init` to create Actor structure +- [ ] Step 3: Apply language-specific SDK integration +- [ ] Step 4: Configure `.actor/input_schema.json` +- [ ] Step 5: Configure `.actor/output_schema.json` (if applicable) +- [ ] Step 6: Update `.actor/actor.json` metadata +- [ ] Step 7: Test locally with `apify run` +- [ ] Step 8: Deploy with `apify push` + +## Step 1: Analyze the Project + +Before making changes, understand the project: + +1. **Identify the language** - JavaScript/TypeScript, Python, or other +2. **Find the entry point** - The main file that starts execution +3. **Identify inputs** - Command-line arguments, environment variables, config files +4. **Identify outputs** - Files, console output, API responses +5. **Check for state** - Does it need to persist data between runs? + +## Step 2: Initialize Actor Structure + +Run in the project root: + +```bash +apify init +``` + +This creates: +- `.actor/actor.json` - Actor configuration and metadata +- `.actor/input_schema.json` - Input definition for the Apify Console +- `Dockerfile` (if not present) - Container image definition + +## Step 3: Apply Language-Specific Changes + +Choose based on your project's language: + +- **JavaScript/TypeScript**: See [js-ts-actorization.md](references/js-ts-actorization.md) +- **Python**: See [python-actorization.md](references/python-actorization.md) +- **Other Languages (CLI-based)**: See [cli-actorization.md](references/cli-actorization.md) + +### Quick Reference + +| Language | Install | Wrap Code | +|----------|---------|-----------| +| JS/TS | `npm install apify` | `await Actor.init()` ... `await Actor.exit()` | +| Python | `pip install apify` | `async with Actor:` | +| Other | Use CLI in wrapper script | `apify actor:get-input` / `apify actor:push-data` | + +## Steps 4-6: Configure Schemas + +See [schemas-and-output.md](references/schemas-and-output.md) for detailed configuration of: +- Input schema (`.actor/input_schema.json`) +- Output schema (`.actor/output_schema.json`) +- Actor configuration (`.actor/actor.json`) +- State management (request queues, key-value stores) + +Validate schemas against `@apify/json_schemas` npm package. + +## Step 7: Test Locally + +Run the actor with inline input (for JS/TS and Python actors): + +```bash +apify run --input '{"startUrl": "https://example.com", "maxItems": 10}' +``` + +Or use an input file: + +```bash +apify run --input-file ./test-input.json +``` + +**Important:** Always use `apify run`, not `npm start` or `python main.py`. The CLI sets up the proper environment and storage. + +## Step 8: Deploy + +```bash +apify push +``` + +This uploads and builds your actor on the Apify platform. + +## Monetization (Optional) + +After deploying, you can monetize your actor in the Apify Store. The recommended model is **Pay Per Event (PPE)**: + +- Per result/item scraped +- Per page processed +- Per API call made + +Configure PPE in the Apify Console under Actor > Monetization. Charge for events in your code with `await Actor.charge('result')`. + +Other options: **Rental** (monthly subscription) or **Free** (open source). + +## Pre-Deployment Checklist + +- [ ] `.actor/actor.json` exists with correct name and description +- [ ] `.actor/actor.json` validates against `@apify/json_schemas` (`actor.schema.json`) +- [ ] `.actor/input_schema.json` defines all required inputs +- [ ] `.actor/input_schema.json` validates against `@apify/json_schemas` (`input.schema.json`) +- [ ] `.actor/output_schema.json` defines output structure (if applicable) +- [ ] `.actor/output_schema.json` validates against `@apify/json_schemas` (`output.schema.json`) +- [ ] `Dockerfile` is present and builds successfully +- [ ] `Actor.init()` / `Actor.exit()` wraps main code (JS/TS) +- [ ] `async with Actor:` wraps main code (Python) +- [ ] Inputs are read via `Actor.getInput()` / `Actor.get_input()` +- [ ] Outputs use `Actor.pushData()` or key-value store +- [ ] `apify run` executes successfully with test input +- [ ] `generatedBy` is set in actor.json meta section + +## Apify MCP Tools + +If MCP server is configured, use these tools for documentation: + +- `search-apify-docs` - Search documentation +- `fetch-apify-docs` - Get full doc pages + +Otherwise, the MCP Server url: `https://mcp.apify.com/?tools=docs`. + +## Resources + +- [Actorization Academy](https://docs.apify.com/academy/actorization) - Comprehensive guide +- [Apify SDK for JavaScript](https://docs.apify.com/sdk/js) - Full SDK reference +- [Apify SDK for Python](https://docs.apify.com/sdk/python) - Full SDK reference +- [Apify CLI Reference](https://docs.apify.com/cli) - CLI commands +- [Actor Specification](https://raw.githubusercontent.com/apify/actor-whitepaper/refs/heads/master/README.md) - Complete specification diff --git a/skills/apify-actorization/references/cli-actorization.md b/skills/apify-actorization/references/cli-actorization.md new file mode 100644 index 00000000..73b4ca6b --- /dev/null +++ b/skills/apify-actorization/references/cli-actorization.md @@ -0,0 +1,81 @@ +# CLI-Based Actorization + +For languages without an SDK (Go, Rust, Java, etc.), create a wrapper script that uses the Apify CLI. + +## Create Wrapper Script + +Create `start.sh` in project root: + +```bash +#!/bin/bash +set -e + +# Get input from Apify key-value store +INPUT=$(apify actor:get-input) + +# Parse input values (adjust based on your input schema) +MY_PARAM=$(echo "$INPUT" | jq -r '.myParam // "default"') + +# Run your application with the input +./your-application --param "$MY_PARAM" + +# If your app writes to a file, push it to key-value store +# apify actor:set-value OUTPUT --contentType application/json < output.json + +# Or push structured data to dataset +# apify actor:push-data '{"result": "value"}' +``` + +## Update Dockerfile + +Reference the [cli-start template Dockerfile](https://github.com/apify/actor-templates/blob/master/templates/cli-start/Dockerfile) which includes the `ubi` utility for installing binaries from GitHub releases. + +```dockerfile +FROM apify/actor-node:20 + +# Install ubi for easy GitHub release installation +RUN curl --silent --location \ + https://raw.githubusercontent.com/houseabsolute/ubi/master/bootstrap/bootstrap-ubi.sh | sh + +# Install your CLI tool from GitHub releases (example) +# RUN ubi --project your-org/your-tool --in /usr/local/bin + +# Or install apify-cli and jq manually +RUN npm install -g apify-cli +RUN apt-get update && apt-get install -y jq + +# Copy your application +COPY . . + +# Build your application if needed +# RUN ./build.sh + +# Make start script executable +RUN chmod +x start.sh + +# Run the wrapper script +CMD ["./start.sh"] +``` + +## Testing CLI-Based Actors + +For CLI-based actors (shell wrapper scripts), you may need to test the underlying application directly with mock input, as `apify run` requires a Node.js or Python entry point. + +Test your wrapper script locally: + +```bash +# Set up mock input +export INPUT='{"myParam": "test-value"}' + +# Run wrapper script +./start.sh +``` + +## CLI Commands Reference + +| Command | Description | +|---------|-------------| +| `apify actor:get-input` | Get input JSON from key-value store | +| `apify actor:set-value KEY` | Store value in key-value store | +| `apify actor:push-data JSON` | Push data to dataset | +| `apify actor:get-value KEY` | Retrieve value from key-value store | diff --git a/skills/apify-actorization/references/js-ts-actorization.md b/skills/apify-actorization/references/js-ts-actorization.md new file mode 100644 index 00000000..2b2c894d --- /dev/null +++ b/skills/apify-actorization/references/js-ts-actorization.md @@ -0,0 +1,111 @@ +# JavaScript/TypeScript Actorization + +## Install the Apify SDK + +```bash +npm install apify +``` + +## Wrap Main Code with Actor Lifecycle + +```javascript +import { Actor } from 'apify'; + +// Initialize connection to Apify platform +await Actor.init(); + +// ============================================ +// Your existing code goes here +// ============================================ + +// Example: Get input from Apify Console or API +const input = await Actor.getInput(); +console.log('Input:', input); + +// Example: Your crawler or processing logic +// const crawler = new PlaywrightCrawler({ ... }); +// await crawler.run([input.startUrl]); + +// Example: Push results to dataset +// await Actor.pushData({ result: 'data' }); + +// ============================================ +// End of your code +// ============================================ + +// Graceful shutdown +await Actor.exit(); +``` + +## Key Points + +- `Actor.init()` configures storage to use Apify API when running on platform +- `Actor.exit()` handles graceful shutdown and cleanup +- Both calls must be awaited +- Local execution remains unchanged - the SDK automatically detects the environment + +## Crawlee Projects + +Crawlee projects require minimal changes - just wrap with Actor lifecycle: + +```javascript +import { Actor } from 'apify'; +import { PlaywrightCrawler } from 'crawlee'; + +await Actor.init(); + +// Get and validate input +const input = await Actor.getInput(); +const { + startUrl = 'https://example.com', + maxItems = 100, +} = input ?? {}; + +let itemCount = 0; + +const crawler = new PlaywrightCrawler({ + requestHandler: async ({ page, request, pushData }) => { + if (itemCount >= maxItems) return; + + const title = await page.title(); + await pushData({ url: request.url, title }); + itemCount++; + }, +}); + +await crawler.run([startUrl]); + +await Actor.exit(); +``` + +## Express/HTTP Servers + +For web servers, use standby mode in actor.json: + +```json +{ + "actorSpecification": 1, + "name": "my-api", + "usesStandbyMode": true +} +``` + +Then implement readiness probe. See [standby-mode.md](../../apify-actor-development/references/standby-mode.md). + +## Batch Processing Scripts + +```javascript +import { Actor } from 'apify'; + +await Actor.init(); + +const input = await Actor.getInput(); +const items = input.items || []; + +for (const item of items) { + const result = processItem(item); + await Actor.pushData(result); +} + +await Actor.exit(); +``` diff --git a/skills/apify-actorization/references/python-actorization.md b/skills/apify-actorization/references/python-actorization.md new file mode 100644 index 00000000..b536206d --- /dev/null +++ b/skills/apify-actorization/references/python-actorization.md @@ -0,0 +1,95 @@ +# Python Actorization + +## Install the Apify SDK + +```bash +pip install apify +``` + +## Wrap Main Function with Actor Context Manager + +```python +import asyncio +from apify import Actor + +async def main() -> None: + async with Actor: + # ============================================ + # Your existing code goes here + # ============================================ + + # Example: Get input from Apify Console or API + actor_input = await Actor.get_input() + print(f'Input: {actor_input}') + + # Example: Your crawler or processing logic + # crawler = PlaywrightCrawler(...) + # await crawler.run([actor_input.get('startUrl')]) + + # Example: Push results to dataset + # await Actor.push_data({'result': 'data'}) + + # ============================================ + # End of your code + # ============================================ + +if __name__ == '__main__': + asyncio.run(main()) +``` + +## Key Points + +- `async with Actor:` handles both initialization and cleanup +- Automatically manages platform event listeners and graceful shutdown +- Local execution remains unchanged - the SDK automatically detects the environment + +## Crawlee Python Projects + +```python +import asyncio +from apify import Actor +from crawlee.playwright_crawler import PlaywrightCrawler + +async def main() -> None: + async with Actor: + # Get and validate input + actor_input = await Actor.get_input() or {} + start_url = actor_input.get('startUrl', 'https://example.com') + max_items = actor_input.get('maxItems', 100) + + item_count = 0 + + async def request_handler(context): + nonlocal item_count + if item_count >= max_items: + return + + title = await context.page.title() + await context.push_data({'url': context.request.url, 'title': title}) + item_count += 1 + + crawler = PlaywrightCrawler(request_handler=request_handler) + await crawler.run([start_url]) + +if __name__ == '__main__': + asyncio.run(main()) +``` + +## Batch Processing Scripts + +```python +import asyncio +from apify import Actor + +async def main() -> None: + async with Actor: + actor_input = await Actor.get_input() or {} + items = actor_input.get('items', []) + + for item in items: + result = process_item(item) + await Actor.push_data(result) + +if __name__ == '__main__': + asyncio.run(main()) +``` diff --git a/skills/apify-actorization/references/schemas-and-output.md b/skills/apify-actorization/references/schemas-and-output.md new file mode 100644 index 00000000..a8387681 --- /dev/null +++ b/skills/apify-actorization/references/schemas-and-output.md @@ -0,0 +1,140 @@ +# Schemas and Output Configuration + +## Input Schema + +Map your application's inputs to `.actor/input_schema.json`. Validate against the JSON Schema from the `@apify/json_schemas` npm package (`input.schema.json`). + +```json +{ + "title": "My Actor Input", + "type": "object", + "schemaVersion": 1, + "properties": { + "startUrl": { + "title": "Start URL", + "type": "string", + "description": "The URL to start processing from", + "editor": "textfield", + "prefill": "https://example.com" + }, + "maxItems": { + "title": "Max Items", + "type": "integer", + "description": "Maximum number of items to process", + "default": 100, + "minimum": 1 + } + }, + "required": ["startUrl"] +} +``` + +### Mapping Guidelines + +- Command-line arguments → input schema properties +- Environment variables → input schema or Actor env vars in actor.json +- Config files → input schema with object/array types +- Flatten deeply nested structures for better UX + +## Output Schema + +Define output structure in `.actor/output_schema.json`. Validate against the JSON Schema from the `@apify/json_schemas` npm package (`output.schema.json`). + +### For Table-Like Data (Multiple Items) + +- Use `Actor.pushData()` (JS) or `Actor.push_data()` (Python) +- Each item becomes a row in the dataset + +### For Single Files or Blobs + +- Use key-value store: `Actor.setValue()` / `Actor.set_value()` +- Get the public URL and include it in the dataset: + +```javascript +// Store file with public access +await Actor.setValue('report.pdf', pdfBuffer, { contentType: 'application/pdf' }); + +// Get the public URL +const storeInfo = await Actor.openKeyValueStore(); +const publicUrl = `https://api.apify.com/v2/key-value-stores/${storeInfo.id}/records/report.pdf`; + +// Include URL in dataset output +await Actor.pushData({ reportUrl: publicUrl }); +``` + +### For Multiple Files with a Common Prefix (Collections) + +```javascript +// Store multiple files with a prefix +for (const [name, data] of files) { + await Actor.setValue(`screenshots/${name}`, data, { contentType: 'image/png' }); +} +// Files are accessible at: .../records/screenshots%2F{name} +``` + +## Actor Configuration (actor.json) + +Configure `.actor/actor.json`. Validate against the JSON Schema from the `@apify/json_schemas` npm package (`actor.schema.json`). + +```json +{ + "actorSpecification": 1, + "name": "my-actor", + "title": "My Actor", + "description": "Brief description of what the actor does", + "version": "1.0.0", + "meta": { + "templateId": "ts_empty", + "generatedBy": "Claude Code with Claude Opus 4.5" + }, + "input": "./input_schema.json", + "dockerfile": "../Dockerfile" +} +``` + +**Important:** Fill in the `generatedBy` property with the tool/model used. + +## State Management + +### Request Queue - For Pausable Task Processing + +The request queue works for any task processing, not just web scraping. Use a dummy URL with custom `uniqueKey` and `userData` for non-URL tasks: + +```javascript +const requestQueue = await Actor.openRequestQueue(); + +// Add tasks to the queue (works for any processing, not just URLs) +await requestQueue.addRequest({ + url: 'https://placeholder.local', // Dummy URL for non-scraping tasks + uniqueKey: `task-${taskId}`, // Unique identifier for deduplication + userData: { itemId: 123, action: 'process' }, // Your custom task data +}); + +// Process tasks from the queue (with Crawlee) +const crawler = new BasicCrawler({ + requestQueue, + requestHandler: async ({ request }) => { + const { itemId, action } = request.userData; + // Process your task using userData + await processTask(itemId, action); + }, +}); +await crawler.run(); + +// Or manually consume without Crawlee: +let request; +while ((request = await requestQueue.fetchNextRequest())) { + await processTask(request.userData); + await requestQueue.markRequestHandled(request); +} +``` + +### Key-Value Store - For Checkpoint State + +```javascript +// Save state +await Actor.setValue('STATE', { processedCount: 100 }); + +// Restore state on restart +const state = await Actor.getValue('STATE') || { processedCount: 0 }; +``` diff --git a/skills/apify-audience-analysis/SKILL.md b/skills/apify-audience-analysis/SKILL.md new file mode 100644 index 00000000..7ce31aa7 --- /dev/null +++ b/skills/apify-audience-analysis/SKILL.md @@ -0,0 +1,121 @@ +--- +name: apify-audience-analysis +description: Understand audience demographics, preferences, behavior patterns, and engagement quality across Facebook, Instagram, YouTube, and TikTok. +--- + +# Audience Analysis + +Analyze and understand your audience using Apify Actors to extract follower demographics, engagement patterns, and behavior data from multiple platforms. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Identify audience analysis type (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the analysis script +- [ ] Step 5: Summarize findings +``` + +### Step 1: Identify Audience Analysis Type + +Select the appropriate Actor based on analysis needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Facebook follower demographics | `apify/facebook-followers-following-scraper` | FB followers/following lists | +| Facebook engagement behavior | `apify/facebook-likes-scraper` | FB post likes analysis | +| Facebook video audience | `apify/facebook-reels-scraper` | FB Reels viewers | +| Facebook comment analysis | `apify/facebook-comments-scraper` | FB post/video comments | +| Facebook content engagement | `apify/facebook-posts-scraper` | FB post engagement metrics | +| Instagram audience sizing | `apify/instagram-profile-scraper` | IG profile demographics | +| Instagram location-based | `apify/instagram-search-scraper` | IG geo-tagged audience | +| Instagram tagged network | `apify/instagram-tagged-scraper` | IG tag network analysis | +| Instagram comprehensive | `apify/instagram-scraper` | Full IG audience data | +| Instagram API-based | `apify/instagram-api-scraper` | IG API access | +| Instagram follower counts | `apify/instagram-followers-count-scraper` | IG follower tracking | +| Instagram comment export | `apify/export-instagram-comments-posts` | IG comment bulk export | +| Instagram comment analysis | `apify/instagram-comment-scraper` | IG comment sentiment | +| YouTube viewer feedback | `streamers/youtube-comments-scraper` | YT comment analysis | +| YouTube channel audience | `streamers/youtube-channel-scraper` | YT channel subscribers | +| TikTok follower demographics | `clockworks/tiktok-followers-scraper` | TT follower lists | +| TikTok profile analysis | `clockworks/tiktok-profile-scraper` | TT profile demographics | +| TikTok comment analysis | `clockworks/tiktok-comments-scraper` | TT comment engagement | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `apify/facebook-followers-following-scraper`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Findings + +After completion, report: +- Number of audience members/profiles analyzed +- File location and name +- Key demographic insights +- Suggested next steps (deeper analysis, segmentation) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-audience-analysis/reference/scripts/run_actor.js b/skills/apify-audience-analysis/reference/scripts/run_actor.js new file mode 100644 index 00000000..1a283920 --- /dev/null +++ b/skills/apify-audience-analysis/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-audience-analysis-1.0.1'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-brand-reputation-monitoring/SKILL.md b/skills/apify-brand-reputation-monitoring/SKILL.md new file mode 100644 index 00000000..e38a8d4a --- /dev/null +++ b/skills/apify-brand-reputation-monitoring/SKILL.md @@ -0,0 +1,121 @@ +--- +name: apify-brand-reputation-monitoring +description: "Track reviews, ratings, sentiment, and brand mentions across Google Maps, Booking.com, TripAdvisor, Facebook, Instagram, YouTube, and TikTok. Use when user asks to monitor brand reputation, analyze..." +--- + +# Brand Reputation Monitoring + +Scrape reviews, ratings, and brand mentions from multiple platforms using Apify Actors. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Determine data source (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the monitoring script +- [ ] Step 5: Summarize results +``` + +### Step 1: Determine Data Source + +Select the appropriate Actor based on user needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Google Maps reviews | `compass/crawler-google-places` | Business reviews, ratings | +| Google Maps review export | `compass/Google-Maps-Reviews-Scraper` | Dedicated review scraping | +| Booking.com hotels | `voyager/booking-scraper` | Hotel data, scores | +| Booking.com reviews | `voyager/booking-reviews-scraper` | Detailed hotel reviews | +| TripAdvisor reviews | `maxcopell/tripadvisor-reviews` | Attraction/restaurant reviews | +| Facebook reviews | `apify/facebook-reviews-scraper` | Page reviews | +| Facebook comments | `apify/facebook-comments-scraper` | Post comment monitoring | +| Facebook page metrics | `apify/facebook-pages-scraper` | Page ratings overview | +| Facebook reactions | `apify/facebook-likes-scraper` | Reaction type analysis | +| Instagram comments | `apify/instagram-comment-scraper` | Comment sentiment | +| Instagram hashtags | `apify/instagram-hashtag-scraper` | Brand hashtag monitoring | +| Instagram search | `apify/instagram-search-scraper` | Brand mention discovery | +| Instagram tagged posts | `apify/instagram-tagged-scraper` | Brand tag tracking | +| Instagram export | `apify/export-instagram-comments-posts` | Bulk comment export | +| Instagram comprehensive | `apify/instagram-scraper` | Full Instagram monitoring | +| Instagram API | `apify/instagram-api-scraper` | API-based monitoring | +| YouTube comments | `streamers/youtube-comments-scraper` | Video comment sentiment | +| TikTok comments | `clockworks/tiktok-comments-scraper` | TikTok sentiment | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `compass/crawler-google-places`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Results + +After completion, report: +- Number of reviews/mentions found +- File location and name +- Key fields available +- Suggested next steps (sentiment analysis, filtering) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-brand-reputation-monitoring/reference/scripts/run_actor.js b/skills/apify-brand-reputation-monitoring/reference/scripts/run_actor.js new file mode 100644 index 00000000..edc49c68 --- /dev/null +++ b/skills/apify-brand-reputation-monitoring/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-brand-reputation-monitoring-1.1.1'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-competitor-intelligence/SKILL.md b/skills/apify-competitor-intelligence/SKILL.md new file mode 100644 index 00000000..eb5bdc34 --- /dev/null +++ b/skills/apify-competitor-intelligence/SKILL.md @@ -0,0 +1,131 @@ +--- +name: apify-competitor-intelligence +description: Analyze competitor strategies, content, pricing, ads, and market positioning across Google Maps, Booking.com, Facebook, Instagram, YouTube, and TikTok. +--- + +# Competitor Intelligence + +Analyze competitors using Apify Actors to extract data from multiple platforms. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Identify competitor analysis type (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the analysis script +- [ ] Step 5: Summarize findings +``` + +### Step 1: Identify Competitor Analysis Type + +Select the appropriate Actor based on analysis needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Competitor business data | `compass/crawler-google-places` | Location analysis | +| Competitor contact discovery | `poidata/google-maps-email-extractor` | Email extraction | +| Feature benchmarking | `compass/google-maps-extractor` | Detailed business data | +| Competitor review analysis | `compass/Google-Maps-Reviews-Scraper` | Review comparison | +| Hotel competitor data | `voyager/booking-scraper` | Hotel benchmarking | +| Hotel review comparison | `voyager/booking-reviews-scraper` | Review analysis | +| Competitor ad strategies | `apify/facebook-ads-scraper` | Ad creative analysis | +| Competitor page metrics | `apify/facebook-pages-scraper` | Page performance | +| Competitor content analysis | `apify/facebook-posts-scraper` | Post strategies | +| Competitor reels performance | `apify/facebook-reels-scraper` | Reels analysis | +| Competitor audience analysis | `apify/facebook-comments-scraper` | Comment sentiment | +| Competitor event monitoring | `apify/facebook-events-scraper` | Event tracking | +| Competitor audience overlap | `apify/facebook-followers-following-scraper` | Follower analysis | +| Competitor review benchmarking | `apify/facebook-reviews-scraper` | Review comparison | +| Competitor ad monitoring | `apify/facebook-search-scraper` | Ad discovery | +| Competitor profile metrics | `apify/instagram-profile-scraper` | Profile analysis | +| Competitor content monitoring | `apify/instagram-post-scraper` | Post tracking | +| Competitor engagement analysis | `apify/instagram-comment-scraper` | Comment analysis | +| Competitor reel performance | `apify/instagram-reel-scraper` | Reel metrics | +| Competitor growth tracking | `apify/instagram-followers-count-scraper` | Follower tracking | +| Comprehensive competitor data | `apify/instagram-scraper` | Full analysis | +| API-based competitor analysis | `apify/instagram-api-scraper` | API access | +| Competitor video analysis | `streamers/youtube-scraper` | Video metrics | +| Competitor sentiment analysis | `streamers/youtube-comments-scraper` | Comment sentiment | +| Competitor channel metrics | `streamers/youtube-channel-scraper` | Channel analysis | +| TikTok competitor analysis | `clockworks/tiktok-scraper` | TikTok data | +| Competitor video strategies | `clockworks/tiktok-video-scraper` | Video analysis | +| Competitor TikTok profiles | `clockworks/tiktok-profile-scraper` | Profile data | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `compass/crawler-google-places`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Findings + +After completion, report: +- Number of competitors analyzed +- File location and name +- Key competitive insights +- Suggested next steps (deeper analysis, benchmarking) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-competitor-intelligence/reference/scripts/run_actor.js b/skills/apify-competitor-intelligence/reference/scripts/run_actor.js new file mode 100644 index 00000000..6f373dd1 --- /dev/null +++ b/skills/apify-competitor-intelligence/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-competitor-intelligence-1.0.1'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-content-analytics/SKILL.md b/skills/apify-content-analytics/SKILL.md new file mode 100644 index 00000000..021eeb5c --- /dev/null +++ b/skills/apify-content-analytics/SKILL.md @@ -0,0 +1,120 @@ +--- +name: apify-content-analytics +description: Track engagement metrics, measure campaign ROI, and analyze content performance across Instagram, Facebook, YouTube, and TikTok. +--- + +# Content Analytics + +Track and analyze content performance using Apify Actors to extract engagement metrics from multiple platforms. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Identify content analytics type (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the analytics script +- [ ] Step 5: Summarize findings +``` + +### Step 1: Identify Content Analytics Type + +Select the appropriate Actor based on analytics needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Post engagement metrics | `apify/instagram-post-scraper` | Post performance | +| Reel performance | `apify/instagram-reel-scraper` | Reel analytics | +| Follower growth tracking | `apify/instagram-followers-count-scraper` | Growth metrics | +| Comment engagement | `apify/instagram-comment-scraper` | Comment analysis | +| Hashtag performance | `apify/instagram-hashtag-scraper` | Branded hashtags | +| Mention tracking | `apify/instagram-tagged-scraper` | Tag tracking | +| Comprehensive metrics | `apify/instagram-scraper` | Full data | +| API-based analytics | `apify/instagram-api-scraper` | API access | +| Facebook post performance | `apify/facebook-posts-scraper` | Post metrics | +| Reaction analysis | `apify/facebook-likes-scraper` | Engagement types | +| Facebook Reels metrics | `apify/facebook-reels-scraper` | Reels performance | +| Ad performance tracking | `apify/facebook-ads-scraper` | Ad analytics | +| Facebook comment analysis | `apify/facebook-comments-scraper` | Comment engagement | +| Page performance audit | `apify/facebook-pages-scraper` | Page metrics | +| YouTube video metrics | `streamers/youtube-scraper` | Video performance | +| YouTube Shorts analytics | `streamers/youtube-shorts-scraper` | Shorts performance | +| TikTok content metrics | `clockworks/tiktok-scraper` | TikTok analytics | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `apify/instagram-post-scraper`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Findings + +After completion, report: +- Number of content pieces analyzed +- File location and name +- Key performance insights +- Suggested next steps (deeper analysis, content optimization) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-content-analytics/reference/scripts/run_actor.js b/skills/apify-content-analytics/reference/scripts/run_actor.js new file mode 100644 index 00000000..418bc07f --- /dev/null +++ b/skills/apify-content-analytics/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-content-analytics-1.0.0'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-ecommerce/SKILL.md b/skills/apify-ecommerce/SKILL.md new file mode 100644 index 00000000..0e2dc9e6 --- /dev/null +++ b/skills/apify-ecommerce/SKILL.md @@ -0,0 +1,263 @@ +--- +name: apify-ecommerce +description: "Scrape e-commerce data for pricing intelligence, customer reviews, and seller discovery across Amazon, Walmart, eBay, IKEA, and 50+ marketplaces. Use when user asks to monitor prices, track competi..." +--- + +# E-commerce Data Extraction + +Extract product data, prices, reviews, and seller information from any e-commerce platform using Apify's E-commerce Scraping Tool. + +## Prerequisites + +- `.env` file with `APIFY_TOKEN` (at `~/.claude/.env`) +- Node.js 20.6+ (for native `--env-file` support) + +## Workflow Selection + +| User Need | Workflow | Best For | +|-----------|----------|----------| +| Track prices, compare products | Workflow 1: Products & Pricing | Price monitoring, MAP compliance, competitor analysis. Add AI summary for insights. | +| Analyze reviews (sentiment or quality) | Workflow 2: Reviews | Brand perception, customer sentiment, quality issues, defect patterns | +| Find sellers across stores | Workflow 3: Sellers | Unauthorized resellers, vendor discovery via Google Shopping | + +## Progress Tracking + +``` +Task Progress: +- [ ] Step 1: Select workflow and determine data source +- [ ] Step 2: Configure Actor input +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the extraction script +- [ ] Step 5: Summarize results +``` + +--- + +## Workflow 1: Products & Pricing + +**Use case:** Extract product data, prices, and stock status. Track competitor prices, detect MAP violations, benchmark products, or research markets. + +**Best for:** Pricing analysts, product managers, market researchers. + +### Input Options + +| Input Type | Field | Description | +|------------|-------|-------------| +| Product URLs | `detailsUrls` | Direct URLs to product pages (use object format) | +| Category URLs | `listingUrls` | URLs to category/search result pages | +| Keyword Search | `keyword` + `marketplaces` | Search term across selected marketplaces | + +### Example - Product URLs +```json +{ + "detailsUrls": [ + {"url": "https://www.amazon.com/dp/B09V3KXJPB"}, + {"url": "https://www.walmart.com/ip/123456789"} + ], + "additionalProperties": true +} +``` + +### Example - Keyword Search +```json +{ + "keyword": "Samsung Galaxy S24", + "marketplaces": ["www.amazon.com", "www.walmart.com"], + "additionalProperties": true, + "maxProductResults": 50 +} +``` + +### Optional: AI Summary + +Add these fields to get AI-generated insights: + +| Field | Description | +|-------|-------------| +| `fieldsToAnalyze` | Data points to analyze: `["name", "offers", "brand", "description"]` | +| `customPrompt` | Custom analysis instructions | + +**Example with AI summary:** +```json +{ + "keyword": "robot vacuum", + "marketplaces": ["www.amazon.com"], + "maxProductResults": 50, + "additionalProperties": true, + "fieldsToAnalyze": ["name", "offers", "brand"], + "customPrompt": "Summarize price range and identify top brands" +} +``` + +### Output Fields +- `name` - Product name +- `url` - Product URL +- `offers.price` - Current price +- `offers.priceCurrency` - Currency code (may vary by seller region) +- `brand.slogan` - Brand name (nested in object) +- `image` - Product image URL +- Additional seller/stock info when `additionalProperties: true` + +> **Note:** Currency may vary in results even for US searches, as prices reflect different seller regions. + +--- + +## Workflow 2: Customer Reviews + +**Use case:** Extract reviews for sentiment analysis, brand perception monitoring, or quality issue detection. + +**Best for:** Brand managers, customer experience teams, QA teams, product managers. + +### Input Options + +| Input Type | Field | Description | +|------------|-------|-------------| +| Product URLs | `reviewListingUrls` | Product pages to extract reviews from | +| Keyword Search | `keywordReviews` + `marketplacesReviews` | Search for product reviews by keyword | + +### Example - Extract Reviews from Product +```json +{ + "reviewListingUrls": [ + {"url": "https://www.amazon.com/dp/B09V3KXJPB"} + ], + "sortReview": "Most recent", + "additionalReviewProperties": true, + "maxReviewResults": 500 +} +``` + +### Example - Keyword Search +```json +{ + "keywordReviews": "wireless earbuds", + "marketplacesReviews": ["www.amazon.com"], + "sortReview": "Most recent", + "additionalReviewProperties": true, + "maxReviewResults": 200 +} +``` + +### Sort Options +- `Most recent` - Latest reviews first (recommended) +- `Most relevant` - Platform default relevance +- `Most helpful` - Highest voted reviews +- `Highest rated` - 5-star reviews first +- `Lowest rated` - 1-star reviews first + +> **Note:** The `sortReview: "Lowest rated"` option may not work consistently across all marketplaces. For quality analysis, collect a large sample and filter by rating in post-processing. + +### Quality Analysis Tips +- Set high `maxReviewResults` for statistical significance +- Look for recurring keywords: "broke", "defect", "quality", "returned" +- Filter results by rating if sorting doesn't work as expected +- Cross-reference with competitor products for benchmarking + +--- + +## Workflow 3: Seller Intelligence + +**Use case:** Find sellers across stores, discover unauthorized resellers, evaluate vendor options. + +**Best for:** Brand protection teams, procurement, supply chain managers. + +> **Note:** This workflow uses Google Shopping to find sellers across stores. Direct seller profile URLs are not reliably supported. + +### Input Configuration +```json +{ + "googleShoppingSearchKeyword": "Nike Air Max 90", + "scrapeSellersFromGoogleShopping": true, + "countryCode": "us", + "maxGoogleShoppingSellersPerProduct": 20, + "maxGoogleShoppingResults": 100 +} +``` + +### Options +| Field | Description | +|-------|-------------| +| `googleShoppingSearchKeyword` | Product name to search | +| `scrapeSellersFromGoogleShopping` | Set to `true` to extract sellers | +| `scrapeProductsFromGoogleShopping` | Set to `true` to also extract product details | +| `countryCode` | Target country (e.g., `us`, `uk`, `de`) | +| `maxGoogleShoppingSellersPerProduct` | Max sellers per product | +| `maxGoogleShoppingResults` | Total result limit | + +--- + +## Supported Marketplaces + +### Amazon (20+ regions) +`www.amazon.com`, `www.amazon.co.uk`, `www.amazon.de`, `www.amazon.fr`, `www.amazon.it`, `www.amazon.es`, `www.amazon.ca`, `www.amazon.com.au`, `www.amazon.co.jp`, `www.amazon.in`, `www.amazon.com.br`, `www.amazon.com.mx`, `www.amazon.nl`, `www.amazon.pl`, `www.amazon.se`, `www.amazon.ae`, `www.amazon.sa`, `www.amazon.sg`, `www.amazon.com.tr`, `www.amazon.eg` + +### Major US Retailers +`www.walmart.com`, `www.costco.com`, `www.costco.ca`, `www.homedepot.com` + +### European Retailers +`allegro.pl`, `allegro.cz`, `allegro.sk`, `www.alza.cz`, `www.alza.sk`, `www.alza.de`, `www.alza.at`, `www.alza.hu`, `www.kaufland.de`, `www.kaufland.pl`, `www.kaufland.cz`, `www.kaufland.sk`, `www.kaufland.at`, `www.kaufland.fr`, `www.kaufland.it`, `www.cdiscount.com` + +### IKEA (40+ country/language combinations) +Supports all major IKEA regional sites with multiple language options. + +### Google Shopping +Use for seller discovery across multiple stores. + +--- + +## Running the Extraction + +### Step 1: Set Skill Path +```bash +SKILL_PATH=~/.claude/skills/apify-ecommerce +``` + +### Step 2: Run Script + +**Quick answer (display in chat):** +```bash +node --env-file=~/.claude/.env $SKILL_PATH/reference/scripts/run_actor.js \ + --actor "apify/e-commerce-scraping-tool" \ + --input 'JSON_INPUT' +``` + +**CSV export:** +```bash +node --env-file=~/.claude/.env $SKILL_PATH/reference/scripts/run_actor.js \ + --actor "apify/e-commerce-scraping-tool" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_filename.csv \ + --format csv +``` + +**JSON export:** +```bash +node --env-file=~/.claude/.env $SKILL_PATH/reference/scripts/run_actor.js \ + --actor "apify/e-commerce-scraping-tool" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_filename.json \ + --format json +``` + +### Step 3: Summarize Results + +Report: +- Number of items extracted +- File location (if exported) +- Key insights based on workflow: + - **Products:** Price range, outliers, MAP violations + - **Reviews:** Average rating, sentiment trends, quality issues + - **Sellers:** Seller count, unauthorized sellers found + +--- + +## Error Handling + +| Error | Solution | +|-------|----------| +| `APIFY_TOKEN not found` | Ensure `~/.claude/.env` contains `APIFY_TOKEN=your_token` | +| `Actor not found` | Verify Actor ID: `apify/e-commerce-scraping-tool` | +| `Run FAILED` | Check Apify console link in error output | +| `Timeout` | Reduce `maxProductResults` or increase `--timeout` | +| `No results` | Verify URLs are valid and accessible | +| `Invalid marketplace` | Check marketplace value matches supported list exactly | diff --git a/skills/apify-ecommerce/reference/scripts/package.json b/skills/apify-ecommerce/reference/scripts/package.json new file mode 100644 index 00000000..3dbc1ca5 --- /dev/null +++ b/skills/apify-ecommerce/reference/scripts/package.json @@ -0,0 +1,3 @@ +{ + "type": "module" +} diff --git a/skills/apify-ecommerce/reference/scripts/run_actor.js b/skills/apify-ecommerce/reference/scripts/run_actor.js new file mode 100644 index 00000000..9c67d2ea --- /dev/null +++ b/skills/apify-ecommerce/reference/scripts/run_actor.js @@ -0,0 +1,369 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output data.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-ecommerce-1.0.0'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., apify/e-commerce-scraping-tool) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 products + node --env-file=.env scripts/run_actor.js \\ + --actor "apify/e-commerce-scraping-tool" \\ + --input '{"keyword": "bluetooth headphones", "marketplaces": ["www.amazon.com"], "maxProductResults": 10}' + + # Export prices to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "apify/e-commerce-scraping-tool" \\ + --input '{"detailsUrls": ["https://amazon.com/dp/B09V3KXJPB"]}' \\ + --output prices.csv --format csv + + # Export reviews to JSON + node --env-file=.env scripts/run_actor.js \\ + --actor "apify/e-commerce-scraping-tool" \\ + --input '{"reviewListingUrls": ["https://amazon.com/dp/B09V3KXJPB"], "maxReviewResults": 100}' \\ + --output reviews.json --format json +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-influencer-discovery/SKILL.md b/skills/apify-influencer-discovery/SKILL.md new file mode 100644 index 00000000..12404a0b --- /dev/null +++ b/skills/apify-influencer-discovery/SKILL.md @@ -0,0 +1,118 @@ +--- +name: apify-influencer-discovery +description: Find and evaluate influencers for brand partnerships, verify authenticity, and track collaboration performance across Instagram, Facebook, YouTube, and TikTok. +--- + +# Influencer Discovery + +Discover and analyze influencers across multiple platforms using Apify Actors. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Determine discovery source (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the discovery script +- [ ] Step 5: Summarize results +``` + +### Step 1: Determine Discovery Source + +Select the appropriate Actor based on user needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Influencer profiles | `apify/instagram-profile-scraper` | Profile metrics, bio, follower counts | +| Find by hashtag | `apify/instagram-hashtag-scraper` | Discover influencers using specific hashtags | +| Reel engagement | `apify/instagram-reel-scraper` | Analyze reel performance and engagement | +| Discovery by niche | `apify/instagram-search-scraper` | Search for influencers by keyword/niche | +| Brand mentions | `apify/instagram-tagged-scraper` | Track who tags brands/products | +| Comprehensive data | `apify/instagram-scraper` | Full profile, posts, comments analysis | +| API-based discovery | `apify/instagram-api-scraper` | Fast API-based data extraction | +| Engagement analysis | `apify/export-instagram-comments-posts` | Export comments for sentiment analysis | +| Facebook content | `apify/facebook-posts-scraper` | Analyze Facebook post performance | +| Micro-influencers | `apify/facebook-groups-scraper` | Find influencers in niche groups | +| Influential pages | `apify/facebook-search-scraper` | Search for influential pages | +| YouTube creators | `streamers/youtube-channel-scraper` | Channel metrics and subscriber data | +| TikTok influencers | `clockworks/tiktok-scraper` | Comprehensive TikTok data extraction | +| TikTok (free) | `clockworks/free-tiktok-scraper` | Free TikTok data extractor | +| Live streamers | `clockworks/tiktok-live-scraper` | Discover live streaming influencers | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `apify/instagram-profile-scraper`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Results + +After completion, report: +- Number of influencers found +- File location and name +- Key metrics available (followers, engagement rate, etc.) +- Suggested next steps (filtering, outreach, deeper analysis) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-influencer-discovery/reference/scripts/run_actor.js b/skills/apify-influencer-discovery/reference/scripts/run_actor.js new file mode 100644 index 00000000..e600ded2 --- /dev/null +++ b/skills/apify-influencer-discovery/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-influencer-discovery-1.0.0'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-lead-generation/SKILL.md b/skills/apify-lead-generation/SKILL.md new file mode 100644 index 00000000..18d01f3e --- /dev/null +++ b/skills/apify-lead-generation/SKILL.md @@ -0,0 +1,120 @@ +--- +name: apify-lead-generation +description: "Generates B2B/B2C leads by scraping Google Maps, websites, Instagram, TikTok, Facebook, LinkedIn, YouTube, and Google Search. Use when user asks to find leads, prospects, businesses, build lead lis..." +--- + +# Lead Generation + +Scrape leads from multiple platforms using Apify Actors. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Determine lead source (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the lead finder script +- [ ] Step 5: Summarize results +``` + +### Step 1: Determine Lead Source + +Select the appropriate Actor based on user needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Local businesses | `compass/crawler-google-places` | Restaurants, gyms, shops | +| Contact enrichment | `vdrmota/contact-info-scraper` | Emails, phones from URLs | +| Instagram profiles | `apify/instagram-profile-scraper` | Influencer discovery | +| Instagram posts/comments | `apify/instagram-scraper` | Posts, comments, hashtags, places | +| Instagram search | `apify/instagram-search-scraper` | Places, users, hashtags discovery | +| TikTok videos/hashtags | `clockworks/tiktok-scraper` | Comprehensive TikTok data extraction | +| TikTok hashtags/profiles | `clockworks/free-tiktok-scraper` | Free TikTok data extractor | +| TikTok user search | `clockworks/tiktok-user-search-scraper` | Find users by keywords | +| TikTok profiles | `clockworks/tiktok-profile-scraper` | Creator outreach | +| TikTok followers/following | `clockworks/tiktok-followers-scraper` | Audience analysis, segmentation | +| Facebook pages | `apify/facebook-pages-scraper` | Business contacts | +| Facebook page contacts | `apify/facebook-page-contact-information` | Extract emails, phones, addresses | +| Facebook groups | `apify/facebook-groups-scraper` | Buying intent signals | +| Facebook events | `apify/facebook-events-scraper` | Event networking, partnerships | +| Google Search | `apify/google-search-scraper` | Broad lead discovery | +| YouTube channels | `streamers/youtube-scraper` | Creator partnerships | +| Google Maps emails | `poidata/google-maps-email-extractor` | Direct email extraction | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `compass/crawler-google-places`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Results + +After completion, report: +- Number of leads found +- File location and name +- Key fields available +- Suggested next steps (filtering, enrichment) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-lead-generation/reference/scripts/run_actor.js b/skills/apify-lead-generation/reference/scripts/run_actor.js new file mode 100644 index 00000000..6cd4acc2 --- /dev/null +++ b/skills/apify-lead-generation/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-lead-generation-1.1.11'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-market-research/SKILL.md b/skills/apify-market-research/SKILL.md new file mode 100644 index 00000000..95e926b4 --- /dev/null +++ b/skills/apify-market-research/SKILL.md @@ -0,0 +1,119 @@ +--- +name: apify-market-research +description: Analyze market conditions, geographic opportunities, pricing, consumer behavior, and product validation across Google Maps, Facebook, Instagram, Booking.com, and TripAdvisor. +--- + +# Market Research + +Conduct market research using Apify Actors to extract data from multiple platforms. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Identify market research type (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the analysis script +- [ ] Step 5: Summarize findings +``` + +### Step 1: Identify Market Research Type + +Select the appropriate Actor based on research needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Market density | `compass/crawler-google-places` | Location analysis | +| Geospatial analysis | `compass/google-maps-extractor` | Business mapping | +| Regional interest | `apify/google-trends-scraper` | Trend data | +| Pricing and demand | `apify/facebook-marketplace-scraper` | Market pricing | +| Event market | `apify/facebook-events-scraper` | Event analysis | +| Consumer needs | `apify/facebook-groups-scraper` | Group research | +| Market landscape | `apify/facebook-pages-scraper` | Business pages | +| Business density | `apify/facebook-page-contact-information` | Contact data | +| Cultural insights | `apify/facebook-photos-scraper` | Visual research | +| Niche targeting | `apify/instagram-hashtag-scraper` | Hashtag research | +| Hashtag stats | `apify/instagram-hashtag-stats` | Market sizing | +| Market activity | `apify/instagram-reel-scraper` | Activity analysis | +| Market intelligence | `apify/instagram-scraper` | Full data | +| Product launch research | `apify/instagram-api-scraper` | API access | +| Hospitality market | `voyager/booking-scraper` | Hotel data | +| Tourism insights | `maxcopell/tripadvisor-reviews` | Review analysis | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `compass/crawler-google-places`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Findings + +After completion, report: +- Number of results found +- File location and name +- Key market insights +- Suggested next steps (deeper analysis, validation) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-market-research/reference/scripts/run_actor.js b/skills/apify-market-research/reference/scripts/run_actor.js new file mode 100644 index 00000000..7a0a904b --- /dev/null +++ b/skills/apify-market-research/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-market-research-1.0.0'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-trend-analysis/SKILL.md b/skills/apify-trend-analysis/SKILL.md new file mode 100644 index 00000000..7692cde3 --- /dev/null +++ b/skills/apify-trend-analysis/SKILL.md @@ -0,0 +1,122 @@ +--- +name: apify-trend-analysis +description: Discover and track emerging trends across Google Trends, Instagram, Facebook, YouTube, and TikTok to inform content strategy. +--- + +# Trend Analysis + +Discover and track emerging trends using Apify Actors to extract data from multiple platforms. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Identify trend type (select Actor) +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the analysis script +- [ ] Step 5: Summarize findings +``` + +### Step 1: Identify Trend Type + +Select the appropriate Actor based on research needs: + +| User Need | Actor ID | Best For | +|-----------|----------|----------| +| Search trends | `apify/google-trends-scraper` | Google Trends data | +| Hashtag tracking | `apify/instagram-hashtag-scraper` | Hashtag content | +| Hashtag metrics | `apify/instagram-hashtag-stats` | Performance stats | +| Visual trends | `apify/instagram-post-scraper` | Post analysis | +| Trending discovery | `apify/instagram-search-scraper` | Search trends | +| Comprehensive tracking | `apify/instagram-scraper` | Full data | +| API-based trends | `apify/instagram-api-scraper` | API access | +| Engagement trends | `apify/export-instagram-comments-posts` | Comment tracking | +| Product trends | `apify/facebook-marketplace-scraper` | Marketplace data | +| Visual analysis | `apify/facebook-photos-scraper` | Photo trends | +| Community trends | `apify/facebook-groups-scraper` | Group monitoring | +| YouTube Shorts | `streamers/youtube-shorts-scraper` | Short-form trends | +| YouTube hashtags | `streamers/youtube-video-scraper-by-hashtag` | Hashtag videos | +| TikTok hashtags | `clockworks/tiktok-hashtag-scraper` | Hashtag content | +| Trending sounds | `clockworks/tiktok-sound-scraper` | Audio trends | +| TikTok ads | `clockworks/tiktok-ads-scraper` | Ad trends | +| Discover page | `clockworks/tiktok-discover-scraper` | Discover trends | +| Explore trends | `clockworks/tiktok-explore-scraper` | Explore content | +| Trending content | `clockworks/tiktok-trends-scraper` | Viral content | + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `apify/google-trends-scraper`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Findings + +After completion, report: +- Number of results found +- File location and name +- Key trend insights +- Suggested next steps (deeper analysis, content opportunities) + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-trend-analysis/reference/scripts/run_actor.js b/skills/apify-trend-analysis/reference/scripts/run_actor.js new file mode 100644 index 00000000..55124270 --- /dev/null +++ b/skills/apify-trend-analysis/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-trend-analysis-1.0.0'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/apify-ultimate-scraper/SKILL.md b/skills/apify-ultimate-scraper/SKILL.md new file mode 100644 index 00000000..b41a22ca --- /dev/null +++ b/skills/apify-ultimate-scraper/SKILL.md @@ -0,0 +1,230 @@ +--- +name: apify-ultimate-scraper +description: "Universal AI-powered web scraper for any platform. Scrape data from Instagram, Facebook, TikTok, YouTube, Google Maps, Google Search, Google Trends, Booking.com, and TripAdvisor. Use for lead gener..." +--- + +# Universal Web Scraper + +AI-driven data extraction from 55+ Actors across all major platforms. This skill automatically selects the best Actor for your task. + +## Prerequisites +(No need to check it upfront) + +- `.env` file with `APIFY_TOKEN` +- Node.js 20.6+ (for native `--env-file` support) +- `mcpc` CLI tool: `npm install -g @apify/mcpc` + +## Workflow + +Copy this checklist and track progress: + +``` +Task Progress: +- [ ] Step 1: Understand user goal and select Actor +- [ ] Step 2: Fetch Actor schema via mcpc +- [ ] Step 3: Ask user preferences (format, filename) +- [ ] Step 4: Run the scraper script +- [ ] Step 5: Summarize results and offer follow-ups +``` + +### Step 1: Understand User Goal and Select Actor + +First, understand what the user wants to achieve. Then select the best Actor from the options below. + +#### Instagram Actors (12) + +| Actor ID | Best For | +|----------|----------| +| `apify/instagram-profile-scraper` | Profile data, follower counts, bio info | +| `apify/instagram-post-scraper` | Individual post details, engagement metrics | +| `apify/instagram-comment-scraper` | Comment extraction, sentiment analysis | +| `apify/instagram-hashtag-scraper` | Hashtag content, trending topics | +| `apify/instagram-hashtag-stats` | Hashtag performance metrics | +| `apify/instagram-reel-scraper` | Reels content and metrics | +| `apify/instagram-search-scraper` | Search users, places, hashtags | +| `apify/instagram-tagged-scraper` | Posts tagged with specific accounts | +| `apify/instagram-followers-count-scraper` | Follower count tracking | +| `apify/instagram-scraper` | Comprehensive Instagram data | +| `apify/instagram-api-scraper` | API-based Instagram access | +| `apify/export-instagram-comments-posts` | Bulk comment/post export | + +#### Facebook Actors (14) + +| Actor ID | Best For | +|----------|----------| +| `apify/facebook-pages-scraper` | Page data, metrics, contact info | +| `apify/facebook-page-contact-information` | Emails, phones, addresses from pages | +| `apify/facebook-posts-scraper` | Post content and engagement | +| `apify/facebook-comments-scraper` | Comment extraction | +| `apify/facebook-likes-scraper` | Reaction analysis | +| `apify/facebook-reviews-scraper` | Page reviews | +| `apify/facebook-groups-scraper` | Group content and members | +| `apify/facebook-events-scraper` | Event data | +| `apify/facebook-ads-scraper` | Ad creative and targeting | +| `apify/facebook-search-scraper` | Search results | +| `apify/facebook-reels-scraper` | Reels content | +| `apify/facebook-photos-scraper` | Photo extraction | +| `apify/facebook-marketplace-scraper` | Marketplace listings | +| `apify/facebook-followers-following-scraper` | Follower/following lists | + +#### TikTok Actors (14) + +| Actor ID | Best For | +|----------|----------| +| `clockworks/tiktok-scraper` | Comprehensive TikTok data | +| `clockworks/free-tiktok-scraper` | Free TikTok extraction | +| `clockworks/tiktok-profile-scraper` | Profile data | +| `clockworks/tiktok-video-scraper` | Video details and metrics | +| `clockworks/tiktok-comments-scraper` | Comment extraction | +| `clockworks/tiktok-followers-scraper` | Follower lists | +| `clockworks/tiktok-user-search-scraper` | Find users by keywords | +| `clockworks/tiktok-hashtag-scraper` | Hashtag content | +| `clockworks/tiktok-sound-scraper` | Trending sounds | +| `clockworks/tiktok-ads-scraper` | Ad content | +| `clockworks/tiktok-discover-scraper` | Discover page content | +| `clockworks/tiktok-explore-scraper` | Explore content | +| `clockworks/tiktok-trends-scraper` | Trending content | +| `clockworks/tiktok-live-scraper` | Live stream data | + +#### YouTube Actors (5) + +| Actor ID | Best For | +|----------|----------| +| `streamers/youtube-scraper` | Video data and metrics | +| `streamers/youtube-channel-scraper` | Channel information | +| `streamers/youtube-comments-scraper` | Comment extraction | +| `streamers/youtube-shorts-scraper` | Shorts content | +| `streamers/youtube-video-scraper-by-hashtag` | Videos by hashtag | + +#### Google Maps Actors (4) + +| Actor ID | Best For | +|----------|----------| +| `compass/crawler-google-places` | Business listings, ratings, contact info | +| `compass/google-maps-extractor` | Detailed business data | +| `compass/Google-Maps-Reviews-Scraper` | Review extraction | +| `poidata/google-maps-email-extractor` | Email discovery from listings | + +#### Other Actors (6) + +| Actor ID | Best For | +|----------|----------| +| `apify/google-search-scraper` | Google search results | +| `apify/google-trends-scraper` | Google Trends data | +| `voyager/booking-scraper` | Booking.com hotel data | +| `voyager/booking-reviews-scraper` | Booking.com reviews | +| `maxcopell/tripadvisor-reviews` | TripAdvisor reviews | +| `vdrmota/contact-info-scraper` | Contact enrichment from URLs | + +--- + +#### Actor Selection by Use Case + +| Use Case | Primary Actors | +|----------|---------------| +| **Lead Generation** | `compass/crawler-google-places`, `poidata/google-maps-email-extractor`, `vdrmota/contact-info-scraper` | +| **Influencer Discovery** | `apify/instagram-profile-scraper`, `clockworks/tiktok-profile-scraper`, `streamers/youtube-channel-scraper` | +| **Brand Monitoring** | `apify/instagram-tagged-scraper`, `apify/instagram-hashtag-scraper`, `compass/Google-Maps-Reviews-Scraper` | +| **Competitor Analysis** | `apify/facebook-pages-scraper`, `apify/facebook-ads-scraper`, `apify/instagram-profile-scraper` | +| **Content Analytics** | `apify/instagram-post-scraper`, `clockworks/tiktok-scraper`, `streamers/youtube-scraper` | +| **Trend Research** | `apify/google-trends-scraper`, `clockworks/tiktok-trends-scraper`, `apify/instagram-hashtag-stats` | +| **Review Analysis** | `compass/Google-Maps-Reviews-Scraper`, `voyager/booking-reviews-scraper`, `maxcopell/tripadvisor-reviews` | +| **Audience Analysis** | `apify/instagram-followers-count-scraper`, `clockworks/tiktok-followers-scraper`, `apify/facebook-followers-following-scraper` | + +--- + +#### Multi-Actor Workflows + +For complex tasks, chain multiple Actors: + +| Workflow | Step 1 | Step 2 | +|----------|--------|--------| +| **Lead enrichment** | `compass/crawler-google-places` → | `vdrmota/contact-info-scraper` | +| **Influencer vetting** | `apify/instagram-profile-scraper` → | `apify/instagram-comment-scraper` | +| **Competitor deep-dive** | `apify/facebook-pages-scraper` → | `apify/facebook-posts-scraper` | +| **Local business analysis** | `compass/crawler-google-places` → | `compass/Google-Maps-Reviews-Scraper` | + +#### Can't Find a Suitable Actor? + +If none of the Actors above match the user's request, search the Apify Store directly: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call search-actors keywords:="SEARCH_KEYWORDS" limit:=10 offset:=0 category:="" | jq -r '.content[0].text' +``` + +Replace `SEARCH_KEYWORDS` with 1-3 simple terms (e.g., "LinkedIn profiles", "Amazon products", "Twitter"). + +### Step 2: Fetch Actor Schema + +Fetch the Actor's input schema and details dynamically using mcpc: + +```bash +export $(grep APIFY_TOKEN .env | xargs) && mcpc --json mcp.apify.com --header "Authorization: Bearer $APIFY_TOKEN" tools-call fetch-actor-details actor:="ACTOR_ID" | jq -r ".content" +``` + +Replace `ACTOR_ID` with the selected Actor (e.g., `compass/crawler-google-places`). + +This returns: +- Actor description and README +- Required and optional input parameters +- Output fields (if available) + +### Step 3: Ask User Preferences + +Before running, ask: +1. **Output format**: + - **Quick answer** - Display top few results in chat (no file saved) + - **CSV** - Full export with all fields + - **JSON** - Full export in JSON format +2. **Number of results**: Based on character of use case + +### Step 4: Run the Script + +**Quick answer (display in chat, no file):** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' +``` + +**CSV:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.csv \ + --format csv +``` + +**JSON:** +```bash +node --env-file=.env ${CLAUDE_PLUGIN_ROOT}/reference/scripts/run_actor.js \ + --actor "ACTOR_ID" \ + --input 'JSON_INPUT' \ + --output YYYY-MM-DD_OUTPUT_FILE.json \ + --format json +``` + +### Step 5: Summarize Results and Offer Follow-ups + +After completion, report: +- Number of results found +- File location and name +- Key fields available +- **Suggested follow-up workflows** based on results: + +| If User Got | Suggest Next | +|-------------|--------------| +| Business listings | Enrich with `vdrmota/contact-info-scraper` or get reviews | +| Influencer profiles | Analyze engagement with comment scrapers | +| Competitor pages | Deep-dive with post/ad scrapers | +| Trend data | Validate with platform-specific hashtag scrapers | + + +## Error Handling + +`APIFY_TOKEN not found` - Ask user to create `.env` with `APIFY_TOKEN=your_token` +`mcpc not found` - Ask user to install `npm install -g @apify/mcpc` +`Actor not found` - Check Actor ID spelling +`Run FAILED` - Ask user to check Apify console link in error output +`Timeout` - Reduce input size or increase `--timeout` diff --git a/skills/apify-ultimate-scraper/reference/scripts/run_actor.js b/skills/apify-ultimate-scraper/reference/scripts/run_actor.js new file mode 100644 index 00000000..9a964576 --- /dev/null +++ b/skills/apify-ultimate-scraper/reference/scripts/run_actor.js @@ -0,0 +1,363 @@ +#!/usr/bin/env node +/** + * Apify Actor Runner - Runs Apify actors and exports results. + * + * Usage: + * # Quick answer (display in chat, no file saved) + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + * + * # Export to file + * node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' --output leads.csv --format csv + */ + +import { parseArgs } from 'node:util'; +import { writeFileSync, statSync } from 'node:fs'; + +// User-Agent for tracking skill usage in Apify analytics +const USER_AGENT = 'apify-agent-skills/apify-ultimate-scraper-1.3.0'; + +// Parse command-line arguments +function parseCliArgs() { + const options = { + actor: { type: 'string', short: 'a' }, + input: { type: 'string', short: 'i' }, + output: { type: 'string', short: 'o' }, + format: { type: 'string', short: 'f', default: 'csv' }, + timeout: { type: 'string', short: 't', default: '600' }, + 'poll-interval': { type: 'string', default: '5' }, + help: { type: 'boolean', short: 'h' }, + }; + + const { values } = parseArgs({ options, allowPositionals: false }); + + if (values.help) { + printHelp(); + process.exit(0); + } + + if (!values.actor) { + console.error('Error: --actor is required'); + printHelp(); + process.exit(1); + } + + if (!values.input) { + console.error('Error: --input is required'); + printHelp(); + process.exit(1); + } + + return { + actor: values.actor, + input: values.input, + output: values.output, + format: values.format || 'csv', + timeout: parseInt(values.timeout, 10), + pollInterval: parseInt(values['poll-interval'], 10), + }; +} + +function printHelp() { + console.log(` +Apify Actor Runner - Run Apify actors and export results + +Usage: + node --env-file=.env scripts/run_actor.js --actor ACTOR_ID --input '{}' + +Options: + --actor, -a Actor ID (e.g., compass/crawler-google-places) [required] + --input, -i Actor input as JSON string [required] + --output, -o Output file path (optional - if not provided, displays quick answer) + --format, -f Output format: csv, json (default: csv) + --timeout, -t Max wait time in seconds (default: 600) + --poll-interval Seconds between status checks (default: 5) + --help, -h Show this help message + +Output Formats: + JSON (all data) --output file.json --format json + CSV (all data) --output file.csv --format csv + Quick answer (no --output) - displays top 5 in chat + +Examples: + # Quick answer - display top 5 in chat + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' + + # Export all data to CSV + node --env-file=.env scripts/run_actor.js \\ + --actor "compass/crawler-google-places" \\ + --input '{"searchStringsArray": ["coffee shops"], "locationQuery": "Seattle, USA"}' \\ + --output leads.csv --format csv +`); +} + +// Start an actor run and return { runId, datasetId } +async function startActor(token, actorId, inputJson) { + // Convert "author/actor" format to "author~actor" for API compatibility + const apiActorId = actorId.replace('/', '~'); + const url = `https://api.apify.com/v2/acts/${apiActorId}/runs?token=${encodeURIComponent(token)}`; + + let data; + try { + data = JSON.parse(inputJson); + } catch (e) { + console.error(`Error: Invalid JSON input: ${e.message}`); + process.exit(1); + } + + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'User-Agent': `${USER_AGENT}/start_actor`, + }, + body: JSON.stringify(data), + }); + + if (response.status === 404) { + console.error(`Error: Actor '${actorId}' not found`); + process.exit(1); + } + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: API request failed (${response.status}): ${text}`); + process.exit(1); + } + + const result = await response.json(); + return { + runId: result.data.id, + datasetId: result.data.defaultDatasetId, + }; +} + +// Poll run status until complete or timeout +async function pollUntilComplete(token, runId, timeout, interval) { + const url = `https://api.apify.com/v2/actor-runs/${runId}?token=${encodeURIComponent(token)}`; + const startTime = Date.now(); + let lastStatus = null; + + while (true) { + const response = await fetch(url); + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to get run status: ${text}`); + process.exit(1); + } + + const result = await response.json(); + const status = result.data.status; + + // Only print when status changes + if (status !== lastStatus) { + console.log(`Status: ${status}`); + lastStatus = status; + } + + if (['SUCCEEDED', 'FAILED', 'ABORTED', 'TIMED-OUT'].includes(status)) { + return status; + } + + const elapsed = (Date.now() - startTime) / 1000; + if (elapsed > timeout) { + console.error(`Warning: Timeout after ${timeout}s, actor still running`); + return 'TIMED-OUT'; + } + + await sleep(interval * 1000); + } +} + +// Download dataset items +async function downloadResults(token, datasetId, outputPath, format) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/download_${format}`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + + if (format === 'json') { + writeFileSync(outputPath, JSON.stringify(data, null, 2)); + } else { + // CSV output + if (data.length > 0) { + const fieldnames = Object.keys(data[0]); + const csvLines = [fieldnames.join(',')]; + + for (const row of data) { + const values = fieldnames.map((key) => { + let value = row[key]; + + // Truncate long text fields + if (typeof value === 'string' && value.length > 200) { + value = value.slice(0, 200) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + value = JSON.stringify(value) || ''; + } + + // CSV escape: wrap in quotes if contains comma, quote, or newline + if (value === null || value === undefined) { + return ''; + } + const strValue = String(value); + if (strValue.includes(',') || strValue.includes('"') || strValue.includes('\n')) { + return `"${strValue.replace(/"/g, '""')}"`; + } + return strValue; + }); + csvLines.push(values.join(',')); + } + + writeFileSync(outputPath, csvLines.join('\n')); + } else { + writeFileSync(outputPath, ''); + } + } + + console.log(`Saved to: ${outputPath}`); +} + +// Display top 5 results in chat format +async function displayQuickAnswer(token, datasetId) { + const url = `https://api.apify.com/v2/datasets/${datasetId}/items?token=${encodeURIComponent(token)}&format=json`; + + const response = await fetch(url, { + headers: { + 'User-Agent': `${USER_AGENT}/quick_answer`, + }, + }); + + if (!response.ok) { + const text = await response.text(); + console.error(`Error: Failed to download results: ${text}`); + process.exit(1); + } + + const data = await response.json(); + const total = data.length; + + if (total === 0) { + console.log('\nNo results found.'); + return; + } + + // Display top 5 + console.log(`\n${'='.repeat(60)}`); + console.log(`TOP 5 RESULTS (of ${total} total)`); + console.log('='.repeat(60)); + + for (let i = 0; i < Math.min(5, data.length); i++) { + const item = data[i]; + console.log(`\n--- Result ${i + 1} ---`); + + for (const [key, value] of Object.entries(item)) { + let displayValue = value; + + // Truncate long values + if (typeof value === 'string' && value.length > 100) { + displayValue = value.slice(0, 100) + '...'; + } else if (Array.isArray(value) || (typeof value === 'object' && value !== null)) { + const jsonStr = JSON.stringify(value); + displayValue = jsonStr.length > 100 ? jsonStr.slice(0, 100) + '...' : jsonStr; + } + + console.log(` ${key}: ${displayValue}`); + } + } + + console.log(`\n${'='.repeat(60)}`); + if (total > 5) { + console.log(`Showing 5 of ${total} results.`); + } + console.log(`Full data available at: https://console.apify.com/storage/datasets/${datasetId}`); + console.log('='.repeat(60)); +} + +// Report summary of downloaded data +function reportSummary(outputPath, format) { + const stats = statSync(outputPath); + const size = stats.size; + + let count; + try { + const content = require('fs').readFileSync(outputPath, 'utf-8'); + if (format === 'json') { + const data = JSON.parse(content); + count = Array.isArray(data) ? data.length : 1; + } else { + // CSV - count lines minus header + const lines = content.split('\n').filter((line) => line.trim()); + count = Math.max(0, lines.length - 1); + } + } catch { + count = 'unknown'; + } + + console.log(`Records: ${count}`); + console.log(`Size: ${size.toLocaleString()} bytes`); +} + +// Helper: sleep for ms +function sleep(ms) { + return new Promise((resolve) => setTimeout(resolve, ms)); +} + +// Main function +async function main() { + // Parse args first so --help works without token + const args = parseCliArgs(); + + // Check for APIFY_TOKEN + const token = process.env.APIFY_TOKEN; + if (!token) { + console.error('Error: APIFY_TOKEN not found in .env file'); + console.error(''); + console.error('Add your token to .env file:'); + console.error(' APIFY_TOKEN=your_token_here'); + console.error(''); + console.error('Get your token: https://console.apify.com/account/integrations'); + process.exit(1); + } + + // Start the actor run + console.log(`Starting actor: ${args.actor}`); + const { runId, datasetId } = await startActor(token, args.actor, args.input); + console.log(`Run ID: ${runId}`); + console.log(`Dataset ID: ${datasetId}`); + + // Poll for completion + const status = await pollUntilComplete(token, runId, args.timeout, args.pollInterval); + + if (status !== 'SUCCEEDED') { + console.error(`Error: Actor run ${status}`); + console.error(`Details: https://console.apify.com/actors/runs/${runId}`); + process.exit(1); + } + + // Determine output mode + if (args.output) { + // File output mode + await downloadResults(token, datasetId, args.output, args.format); + reportSummary(args.output, args.format); + } else { + // Quick answer mode - display in chat + await displayQuickAnswer(token, datasetId); + } +} + +main().catch((err) => { + console.error(`Error: ${err.message}`); + process.exit(1); +}); diff --git a/skills/design-orchestration/SKILL.md b/skills/design-orchestration/SKILL.md index df877fd4..a37c825d 100644 --- a/skills/design-orchestration/SKILL.md +++ b/skills/design-orchestration/SKILL.md @@ -1,9 +1,9 @@ --- name: design-orchestration -description: Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order. +description: Ensure that ideas become designs, designs are reviewed, and only validated designs reach implementation. risk: unknown source: community -date_added: '2026-02-27' +date_added: "2026-02-27" --- # Design Orchestration (Meta-Skill) @@ -23,6 +23,7 @@ It **controls the flow between other skills**. This is a **routing and enforcement skill**, not a creative one. It decides: + - which skill must run next - whether escalation is required - whether execution is permitted @@ -42,6 +43,7 @@ This meta-skill coordinates the following: ## Entry Conditions Invoke this skill when: + - a user proposes a new feature, system, or change - a design decision carries meaningful risk - correctness matters more than speed @@ -73,6 +75,7 @@ After brainstorming completes, classify the design as: - **High risk** Use factors such as: + - user impact - irreversibility - operational cost @@ -102,11 +105,13 @@ Skipping escalation when required is prohibited. If `multi-agent-brainstorming` is run: Require: + - completed Understanding Lock - current Design - Decision Log Do NOT allow: + - new ideation - scope expansion - reopening problem definition @@ -120,12 +125,14 @@ Only critique, revision, and decision resolution are allowed. Before allowing implementation: Confirm: + - design is approved (single-agent or multi-agent) - Decision Log is complete - major assumptions are documented - known risks are acknowledged If any condition fails: + - block execution - return to the appropriate skill @@ -143,19 +150,23 @@ If any condition fails: ## Exit Conditions This meta-skill exits ONLY when: + - the next step is explicitly identified, AND - all required prior steps are complete Possible exits: + - “Proceed to implementation planning” - “Run multi-agent-brainstorming” - “Return to brainstorming for clarification” - "If a reviewed design reports a final disposition of APPROVED, REVISE, or REJECT, you MUST route the workflow accordingly and state the chosen next step explicitly." + --- ## Design Philosophy This skill exists to: + - slow down the right decisions - speed up the right execution - prevent costly mistakes @@ -166,4 +177,5 @@ Bad systems fail in production. This meta-skill exists to enforce the former. ## When to Use + This skill is applicable to execute the workflow or actions described in the overview. diff --git a/skills/multi-agent-brainstorming/SKILL.md b/skills/multi-agent-brainstorming/SKILL.md index dbdbebd0..31b5883b 100644 --- a/skills/multi-agent-brainstorming/SKILL.md +++ b/skills/multi-agent-brainstorming/SKILL.md @@ -1,6 +1,6 @@ --- name: multi-agent-brainstorming -description: "Simulate a structured peer-review process using multiple specialized agents to validate designs, surface hidden assumptions, and identify failure modes before implementation." +description: Transform a single-agent design into a robust, review-validated design by simulating a formal peer-review process using multiple constrained agents. risk: unknown source: community date_added: "2026-02-27" @@ -14,6 +14,7 @@ Transform a single-agent design into a **robust, review-validated design** by simulating a formal peer-review process using multiple constrained agents. This skill exists to: + - surface hidden assumptions - identify failure modes early - validate non-functional constraints @@ -44,16 +45,19 @@ Each agent operates under a **hard scope limit**. ### 1️⃣ Primary Designer (Lead Agent) **Role:** + - Owns the design - Runs the standard `brainstorming` skill - Maintains the Decision Log **May:** + - Ask clarification questions - Propose designs and alternatives - Revise designs based on feedback **May NOT:** + - Self-approve the final design - Ignore reviewer objections - Invent requirements post-lock @@ -63,21 +67,25 @@ Each agent operates under a **hard scope limit**. ### 2️⃣ Skeptic / Challenger Agent **Role:** + - Assume the design will fail - Identify weaknesses and risks **May:** + - Question assumptions - Identify edge cases - Highlight ambiguity or overconfidence - Flag YAGNI violations **May NOT:** + - Propose new features - Redesign the system - Offer alternative architectures Prompting guidance: + > “Assume this design fails in production. Why?” --- @@ -85,9 +93,11 @@ Prompting guidance: ### 3️⃣ Constraint Guardian Agent **Role:** + - Enforce non-functional and real-world constraints Focus areas: + - performance - scalability - reliability @@ -96,10 +106,12 @@ Focus areas: - operational cost **May:** + - Reject designs that violate constraints - Request clarification of limits **May NOT:** + - Debate product goals - Suggest feature changes - Optimize beyond stated requirements @@ -109,9 +121,11 @@ Focus areas: ### 4️⃣ User Advocate Agent **Role:** + - Represent the end user Focus areas: + - cognitive load - usability - clarity of flows @@ -119,10 +133,12 @@ Focus areas: - mismatch between intent and experience **May:** + - Identify confusing or misleading aspects - Flag poor defaults or unclear behavior **May NOT:** + - Redesign architecture - Add features - Override stated user goals @@ -132,16 +148,19 @@ Focus areas: ### 5️⃣ Integrator / Arbiter Agent **Role:** + - Resolve conflicts - Finalize decisions - Enforce exit criteria **May:** + - Accept or reject objections - Require design revisions - Declare the design complete **May NOT:** + - Invent new ideas - Add requirements - Reopen locked decisions without cause @@ -170,11 +189,13 @@ Agents are invoked **one at a time**, in the following order: 3. User Advocate For each reviewer: + - Feedback must be explicit and scoped - Objections must reference assumptions or decisions - No new features may be introduced Primary Designer must: + - Respond to each objection - Revise the design if required - Update the Decision Log @@ -184,11 +205,13 @@ Primary Designer must: ### Phase 3 — Integration & Arbitration The Integrator / Arbiter reviews: + - the final design - the Decision Log - unresolved objections The Arbiter must explicitly decide: + - which objections are accepted - which are rejected (with rationale) @@ -216,11 +239,11 @@ You may exit multi-agent brainstorming **only when all are true**: - All objections are resolved or explicitly rejected - Decision Log is complete - Arbiter has declared the design acceptable -- -If any criterion is unmet: +- If any criterion is unmet: - Continue review - Do NOT proceed to implementation -If this skill was invoked by a routing or orchestration layer, you MUST report the final disposition explicitly as one of: APPROVED, REVISE, or REJECT, with a brief rationale. + If this skill was invoked by a routing or orchestration layer, you MUST report the final disposition explicitly as one of: APPROVED, REVISE, or REJECT, with a brief rationale. + --- ## Failure Modes This Skill Prevents @@ -252,6 +275,6 @@ This skill exists to answer one question with confidence: If the answer is unclear, **do not exit this skill**. - ## When to Use + This skill is applicable to execute the workflow or actions described in the overview. diff --git a/skills_index.json b/skills_index.json index 43201848..d214f303 100644 --- a/skills_index.json +++ b/skills_index.json @@ -9,16 +9,6 @@ "source": "personal", "date_added": "2026-02-27" }, - { - "id": "10-andruia-skill-smith", - "path": "skills/10-andruia-skill-smith", - "category": "andruia", - "name": "10-andruia-skill-smith", - "description": "Ingeniero de Sistemas de Andru.ia. Dise\u00f1a, redacta y despliega nuevas habilidades (skills) dentro del repositorio siguiendo el Est\u00e1ndar de Diamante.", - "risk": "safe", - "source": "personal", - "date_added": "2026-02-25" - }, { "id": "20-andruia-niche-intelligence", "path": "skills/20-andruia-niche-intelligence", @@ -234,7 +224,7 @@ "path": "skills/ai-engineer", "category": "uncategorized", "name": "ai-engineer", - "description": "Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.", + "description": "You are an AI engineer specializing in production-grade LLM applications, generative AI systems, and intelligent agent architectures.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -254,7 +244,7 @@ "path": "skills/ai-product", "category": "uncategorized", "name": "ai-product", - "description": "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...", + "description": "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...", "risk": "unknown", "source": "vibeship-spawner-skills (Apache 2.0)", "date_added": "2026-02-27" @@ -324,7 +314,7 @@ "path": "skills/analytics-tracking", "category": "uncategorized", "name": "analytics-tracking", - "description": "Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data.", + "description": "You are an expert in **analytics implementation and measurement design**. Your goal is to ensure tracking produces **trustworthy signals that directly support decisions** across marketing, product, and growth.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -347,14 +337,14 @@ "description": "Automated end-to-end UI testing and verification on an Android Emulator using ADB.", "risk": "safe", "source": "community", - "date_added": "2026-02-28" + "date_added": null }, { "id": "angular", "path": "skills/angular", "category": "uncategorized", "name": "angular", - "description": "Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns.", + "description": "Master modern Angular development with Signals, Standalone Components, Zoneless applications, SSR/Hydration, and the latest reactive patterns.", "risk": "safe", "source": "self", "date_added": "2026-02-27" @@ -454,7 +444,7 @@ "path": "skills/api-documenter", "category": "uncategorized", "name": "api-documenter", - "description": "Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals.", + "description": "You are an expert API documentation specialist mastering modern developer experience through comprehensive, interactive, and AI-enhanced documentation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -509,6 +499,126 @@ "source": "community", "date_added": "2026-02-27" }, + { + "id": "apify-actor-development", + "path": "skills/apify-actor-development", + "category": "uncategorized", + "name": "apify-actor-development", + "description": "Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifying existing ones, or troubleshooting Acto...", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-actorization", + "path": "skills/apify-actorization", + "category": "uncategorized", + "name": "apify-actorization", + "description": "Convert existing projects into Apify Actors - serverless cloud programs. Actorize JavaScript/TypeScript (SDK with Actor.init/exit), Python (async context manager), or any language (CLI wrapper). Us...", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-audience-analysis", + "path": "skills/apify-audience-analysis", + "category": "uncategorized", + "name": "apify-audience-analysis", + "description": "Understand audience demographics, preferences, behavior patterns, and engagement quality across Facebook, Instagram, YouTube, and TikTok.", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-brand-reputation-monitoring", + "path": "skills/apify-brand-reputation-monitoring", + "category": "uncategorized", + "name": "apify-brand-reputation-monitoring", + "description": "Track reviews, ratings, sentiment, and brand mentions across Google Maps, Booking.com, TripAdvisor, Facebook, Instagram, YouTube, and TikTok. Use when user asks to monitor brand reputation, analyze...", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-competitor-intelligence", + "path": "skills/apify-competitor-intelligence", + "category": "uncategorized", + "name": "apify-competitor-intelligence", + "description": "Analyze competitor strategies, content, pricing, ads, and market positioning across Google Maps, Booking.com, Facebook, Instagram, YouTube, and TikTok.", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-content-analytics", + "path": "skills/apify-content-analytics", + "category": "uncategorized", + "name": "apify-content-analytics", + "description": "Track engagement metrics, measure campaign ROI, and analyze content performance across Instagram, Facebook, YouTube, and TikTok.", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-ecommerce", + "path": "skills/apify-ecommerce", + "category": "uncategorized", + "name": "apify-ecommerce", + "description": "Scrape e-commerce data for pricing intelligence, customer reviews, and seller discovery across Amazon, Walmart, eBay, IKEA, and 50+ marketplaces. Use when user asks to monitor prices, track competi...", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-influencer-discovery", + "path": "skills/apify-influencer-discovery", + "category": "uncategorized", + "name": "apify-influencer-discovery", + "description": "Find and evaluate influencers for brand partnerships, verify authenticity, and track collaboration performance across Instagram, Facebook, YouTube, and TikTok.", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-lead-generation", + "path": "skills/apify-lead-generation", + "category": "uncategorized", + "name": "apify-lead-generation", + "description": "Generates B2B/B2C leads by scraping Google Maps, websites, Instagram, TikTok, Facebook, LinkedIn, YouTube, and Google Search. Use when user asks to find leads, prospects, businesses, build lead lis...", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-market-research", + "path": "skills/apify-market-research", + "category": "uncategorized", + "name": "apify-market-research", + "description": "Analyze market conditions, geographic opportunities, pricing, consumer behavior, and product validation across Google Maps, Facebook, Instagram, Booking.com, and TripAdvisor.", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-trend-analysis", + "path": "skills/apify-trend-analysis", + "category": "uncategorized", + "name": "apify-trend-analysis", + "description": "Discover and track emerging trends across Google Trends, Instagram, Facebook, YouTube, and TikTok to inform content strategy.", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, + { + "id": "apify-ultimate-scraper", + "path": "skills/apify-ultimate-scraper", + "category": "uncategorized", + "name": "apify-ultimate-scraper", + "description": "Universal AI-powered web scraper for any platform. Scrape data from Instagram, Facebook, TikTok, YouTube, Google Maps, Google Search, Google Trends, Booking.com, and TripAdvisor. Use for lead gener...", + "risk": "unknown", + "source": "unknown", + "date_added": null + }, { "id": "app-builder", "path": "skills/app-builder", @@ -594,7 +704,7 @@ "path": "skills/arm-cortex-expert", "category": "uncategorized", "name": "arm-cortex-expert", - "description": "Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD).", + "description": "- Working on @arm-cortex-expert tasks or workflows - Needing guidance, best practices, or checklists for @arm-cortex-expert", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -814,7 +924,7 @@ "path": "skills/azure-ai-agents-persistent-dotnet", "category": "uncategorized", "name": "azure-ai-agents-persistent-dotnet", - "description": "Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.", + "description": "Low-level SDK for creating and managing persistent AI agents with threads, messages, runs, and tools.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -824,7 +934,7 @@ "path": "skills/azure-ai-agents-persistent-java", "category": "uncategorized", "name": "azure-ai-agents-persistent-java", - "description": "Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.", + "description": "Low-level SDK for creating and managing persistent AI agents with threads, messages, runs, and tools.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -854,7 +964,7 @@ "path": "skills/azure-ai-contentsafety-py", "category": "uncategorized", "name": "azure-ai-contentsafety-py", - "description": "Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.", + "description": "Detect harmful user-generated and AI-generated content in applications.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -874,7 +984,7 @@ "path": "skills/azure-ai-contentunderstanding-py", "category": "uncategorized", "name": "azure-ai-contentunderstanding-py", - "description": "Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video.", + "description": "Multimodal AI service that extracts semantic content from documents, video, audio, and image files for RAG and automated workflows.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -884,7 +994,7 @@ "path": "skills/azure-ai-document-intelligence-dotnet", "category": "uncategorized", "name": "azure-ai-document-intelligence-dotnet", - "description": "Azure AI Document Intelligence SDK for .NET. Extract text, tables, and structured data from documents using prebuilt and custom models.", + "description": "Extract text, tables, and structured data from documents using prebuilt and custom models.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -914,7 +1024,7 @@ "path": "skills/azure-ai-ml-py", "category": "uncategorized", "name": "azure-ai-ml-py", - "description": "Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines.", + "description": "Client library for managing Azure ML resources: workspaces, jobs, models, data, and compute.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -924,7 +1034,7 @@ "path": "skills/azure-ai-openai-dotnet", "category": "uncategorized", "name": "azure-ai-openai-dotnet", - "description": "Azure OpenAI SDK for .NET. Client library for Azure OpenAI and OpenAI services. Use for chat completions, embeddings, image generation, audio transcription, and assistants.", + "description": "Client library for Azure OpenAI Service providing access to OpenAI models including GPT-4, GPT-4o, embeddings, DALL-E, and Whisper.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -934,7 +1044,7 @@ "path": "skills/azure-ai-projects-dotnet", "category": "uncategorized", "name": "azure-ai-projects-dotnet", - "description": "Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes.", + "description": "High-level SDK for Azure AI Foundry project operations including agents, connections, datasets, deployments, evaluations, and indexes.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -944,7 +1054,7 @@ "path": "skills/azure-ai-projects-java", "category": "uncategorized", "name": "azure-ai-projects-java", - "description": "Azure AI Projects SDK for Java. High-level SDK for Azure AI Foundry project management including connections, datasets, indexes, and evaluations.", + "description": "High-level SDK for Azure AI Foundry project management with access to connections, datasets, indexes, and evaluations.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -974,7 +1084,7 @@ "path": "skills/azure-ai-textanalytics-py", "category": "uncategorized", "name": "azure-ai-textanalytics-py", - "description": "Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language processing on text.", + "description": "Client library for Azure AI Language service NLP capabilities including sentiment, entities, key phrases, and more.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -984,7 +1094,7 @@ "path": "skills/azure-ai-transcription-py", "category": "uncategorized", "name": "azure-ai-transcription-py", - "description": "Azure AI Transcription SDK for Python. Use for real-time and batch speech-to-text transcription with timestamps and diarization.", + "description": "Client library for Azure AI Transcription (speech-to-text) with real-time and batch transcription.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -994,7 +1104,7 @@ "path": "skills/azure-ai-translation-document-py", "category": "uncategorized", "name": "azure-ai-translation-document-py", - "description": "Azure AI Document Translation SDK for batch translation of documents with format preservation. Use for translating Word, PDF, Excel, PowerPoint, and other document formats at scale.", + "description": "Client library for Azure AI Translator document translation service for batch document translation with format preservation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1004,7 +1114,7 @@ "path": "skills/azure-ai-translation-text-py", "category": "uncategorized", "name": "azure-ai-translation-text-py", - "description": "Azure AI Text Translation SDK for real-time text translation, transliteration, language detection, and dictionary lookup. Use for translating text content in applications.", + "description": "Client library for Azure AI Translator text translation service for real-time text translation, transliteration, and language operations.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1034,7 +1144,7 @@ "path": "skills/azure-ai-vision-imageanalysis-py", "category": "uncategorized", "name": "azure-ai-vision-imageanalysis-py", - "description": "Azure AI Vision Image Analysis SDK for captions, tags, objects, OCR, people detection, and smart cropping. Use for computer vision and image understanding tasks.", + "description": "Client library for Azure AI Vision 4.0 image analysis including captions, tags, objects, OCR, and more.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1044,7 +1154,7 @@ "path": "skills/azure-ai-voicelive-dotnet", "category": "uncategorized", "name": "azure-ai-voicelive-dotnet", - "description": "Azure AI Voice Live SDK for .NET. Build real-time voice AI applications with bidirectional WebSocket communication.", + "description": "Real-time voice AI SDK for building bidirectional voice assistants with Azure AI.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1054,7 +1164,7 @@ "path": "skills/azure-ai-voicelive-java", "category": "uncategorized", "name": "azure-ai-voicelive-java", - "description": "Azure AI VoiceLive SDK for Java. Real-time bidirectional voice conversations with AI assistants using WebSocket.", + "description": "Real-time, bidirectional voice conversations with AI assistants using WebSocket technology.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1074,7 +1184,7 @@ "path": "skills/azure-ai-voicelive-ts", "category": "uncategorized", "name": "azure-ai-voicelive-ts", - "description": "Azure AI Voice Live SDK for JavaScript/TypeScript. Build real-time voice AI applications with bidirectional WebSocket communication.", + "description": "Real-time voice AI SDK for building bidirectional voice assistants with Azure AI in Node.js and browser environments.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1084,7 +1194,7 @@ "path": "skills/azure-appconfiguration-java", "category": "uncategorized", "name": "azure-appconfiguration-java", - "description": "Azure App Configuration SDK for Java. Centralized application configuration management with key-value settings, feature flags, and snapshots.", + "description": "Client library for Azure App Configuration, a managed service for centralizing application configurations.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1094,7 +1204,7 @@ "path": "skills/azure-appconfiguration-py", "category": "uncategorized", "name": "azure-appconfiguration-py", - "description": "Azure App Configuration SDK for Python. Use for centralized configuration management, feature flags, and dynamic settings.", + "description": "Centralized configuration management with feature flags and dynamic settings.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1164,7 +1274,7 @@ "path": "skills/azure-compute-batch-java", "category": "uncategorized", "name": "azure-compute-batch-java", - "description": "Azure Batch SDK for Java. Run large-scale parallel and HPC batch jobs with pools, jobs, tasks, and compute nodes.", + "description": "Client library for running large-scale parallel and high-performance computing (HPC) batch jobs in Azure.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1174,7 +1284,7 @@ "path": "skills/azure-containerregistry-py", "category": "uncategorized", "name": "azure-containerregistry-py", - "description": "Azure Container Registry SDK for Python. Use for managing container images, artifacts, and repositories.", + "description": "Manage container images, artifacts, and repositories in Azure Container Registry.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1194,7 +1304,7 @@ "path": "skills/azure-cosmos-java", "category": "uncategorized", "name": "azure-cosmos-java", - "description": "Azure Cosmos DB SDK for Java. NoSQL database operations with global distribution, multi-model support, and reactive patterns.", + "description": "Client library for Azure Cosmos DB NoSQL API with global distribution and reactive patterns.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1204,7 +1314,7 @@ "path": "skills/azure-cosmos-py", "category": "uncategorized", "name": "azure-cosmos-py", - "description": "Azure Cosmos DB SDK for Python (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data.", + "description": "Client library for Azure Cosmos DB NoSQL API \u2014 globally distributed, multi-model database.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1214,7 +1324,7 @@ "path": "skills/azure-cosmos-rust", "category": "uncategorized", "name": "azure-cosmos-rust", - "description": "Azure Cosmos DB SDK for Rust (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data.", + "description": "Client library for Azure Cosmos DB NoSQL API \u2014 globally distributed, multi-model database.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1224,7 +1334,7 @@ "path": "skills/azure-cosmos-ts", "category": "uncategorized", "name": "azure-cosmos-ts", - "description": "Azure Cosmos DB JavaScript/TypeScript SDK (@azure/cosmos) for data plane operations. Use for CRUD operations on documents, queries, bulk operations, and container management.", + "description": "Data plane SDK for Azure Cosmos DB NoSQL API operations \u2014 CRUD on documents, queries, bulk operations.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1244,7 +1354,7 @@ "path": "skills/azure-data-tables-py", "category": "uncategorized", "name": "azure-data-tables-py", - "description": "Azure Tables SDK for Python (Storage and Cosmos DB). Use for NoSQL key-value storage, entity CRUD, and batch operations.", + "description": "NoSQL key-value store for structured data (Azure Storage Tables or Cosmos DB Table API).", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1254,7 +1364,7 @@ "path": "skills/azure-eventgrid-dotnet", "category": "uncategorized", "name": "azure-eventgrid-dotnet", - "description": "Azure Event Grid SDK for .NET. Client library for publishing and consuming events with Azure Event Grid. Use for event-driven architectures, pub/sub messaging, CloudEvents, and EventGridEvents.", + "description": "Client library for publishing events to Azure Event Grid topics, domains, and namespaces.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1274,7 +1384,7 @@ "path": "skills/azure-eventgrid-py", "category": "uncategorized", "name": "azure-eventgrid-py", - "description": "Azure Event Grid SDK for Python. Use for publishing events, handling CloudEvents, and event-driven architectures.", + "description": "Event routing service for building event-driven applications with pub/sub semantics.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1284,7 +1394,7 @@ "path": "skills/azure-eventhub-dotnet", "category": "uncategorized", "name": "azure-eventhub-dotnet", - "description": "Azure Event Hubs SDK for .NET.", + "description": "High-throughput event streaming SDK for sending and receiving events via Azure Event Hubs.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1304,7 +1414,7 @@ "path": "skills/azure-eventhub-py", "category": "uncategorized", "name": "azure-eventhub-py", - "description": "Azure Event Hubs SDK for Python streaming. Use for high-throughput event ingestion, producers, consumers, and checkpointing.", + "description": "Big data streaming platform for high-throughput event ingestion.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1314,7 +1424,7 @@ "path": "skills/azure-eventhub-rust", "category": "uncategorized", "name": "azure-eventhub-rust", - "description": "Azure Event Hubs SDK for Rust. Use for sending and receiving events, streaming data ingestion.", + "description": "Client library for Azure Event Hubs \u2014 big data streaming platform and event ingestion service.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1344,7 +1454,7 @@ "path": "skills/azure-identity-dotnet", "category": "uncategorized", "name": "azure-identity-dotnet", - "description": "Azure Identity SDK for .NET. Authentication library for Azure SDK clients using Microsoft Entra ID. Use for DefaultAzureCredential, managed identity, service principals, and developer credentials.", + "description": "Authentication library for Azure SDK clients using Microsoft Entra ID (formerly Azure AD).", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1364,7 +1474,7 @@ "path": "skills/azure-identity-py", "category": "uncategorized", "name": "azure-identity-py", - "description": "Azure Identity SDK for Python authentication. Use for DefaultAzureCredential, managed identity, service principals, and token caching.", + "description": "Authentication library for Azure SDK clients using Microsoft Entra ID (formerly Azure AD).", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1374,7 +1484,7 @@ "path": "skills/azure-identity-rust", "category": "uncategorized", "name": "azure-identity-rust", - "description": "Azure Identity SDK for Rust authentication. Use for DeveloperToolsCredential, ManagedIdentityCredential, ClientSecretCredential, and token-based authentication.", + "description": "Authentication library for Azure SDK clients using Microsoft Entra ID (formerly Azure AD).", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1394,7 +1504,7 @@ "path": "skills/azure-keyvault-certificates-rust", "category": "uncategorized", "name": "azure-keyvault-certificates-rust", - "description": "Azure Key Vault Certificates SDK for Rust. Use for creating, importing, and managing certificates.", + "description": "Client library for Azure Key Vault Certificates \u2014 secure storage and management of certificates.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1404,7 +1514,7 @@ "path": "skills/azure-keyvault-keys-rust", "category": "uncategorized", "name": "azure-keyvault-keys-rust", - "description": "Azure Key Vault Keys SDK for Rust. Use for creating, managing, and using cryptographic keys. Triggers: \"keyvault keys rust\", \"KeyClient rust\", \"create key rust\", \"encrypt rust\", \"sign rust\".", + "description": "Client library for Azure Key Vault Keys \u2014 secure storage and management of cryptographic keys.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1424,7 +1534,7 @@ "path": "skills/azure-keyvault-py", "category": "uncategorized", "name": "azure-keyvault-py", - "description": "Azure Key Vault SDK for Python. Use for secrets, keys, and certificates management with secure storage.", + "description": "Secure storage and management for secrets, cryptographic keys, and certificates.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1434,7 +1544,7 @@ "path": "skills/azure-keyvault-secrets-rust", "category": "uncategorized", "name": "azure-keyvault-secrets-rust", - "description": "Azure Key Vault Secrets SDK for Rust. Use for storing and retrieving secrets, passwords, and API keys. Triggers: \"keyvault secrets rust\", \"SecretClient rust\", \"get secret rust\", \"set secret rust\".", + "description": "Client library for Azure Key Vault Secrets \u2014 secure storage for passwords, API keys, and other secrets.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1454,7 +1564,7 @@ "path": "skills/azure-maps-search-dotnet", "category": "uncategorized", "name": "azure-maps-search-dotnet", - "description": "Azure Maps SDK for .NET. Location-based services including geocoding, routing, rendering, geolocation, and weather. Use for address search, directions, map tiles, IP geolocation, and weather data.", + "description": "Azure Maps SDK for .NET providing location-based services: geocoding, routing, rendering, geolocation, and weather.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1474,7 +1584,7 @@ "path": "skills/azure-messaging-webpubsubservice-py", "category": "uncategorized", "name": "azure-messaging-webpubsubservice-py", - "description": "Azure Web PubSub Service SDK for Python. Use for real-time messaging, WebSocket connections, and pub/sub patterns.", + "description": "Real-time messaging with WebSocket connections at scale.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1484,7 +1594,7 @@ "path": "skills/azure-mgmt-apicenter-dotnet", "category": "uncategorized", "name": "azure-mgmt-apicenter-dotnet", - "description": "Azure API Center SDK for .NET. Centralized API inventory management with governance, versioning, and discovery.", + "description": "Centralized API inventory and governance SDK for managing APIs across your organization.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1494,7 +1604,7 @@ "path": "skills/azure-mgmt-apicenter-py", "category": "uncategorized", "name": "azure-mgmt-apicenter-py", - "description": "Azure API Center Management SDK for Python. Use for managing API inventory, metadata, and governance across your organization.", + "description": "Manage API inventory, metadata, and governance in Azure API Center.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1504,7 +1614,7 @@ "path": "skills/azure-mgmt-apimanagement-dotnet", "category": "uncategorized", "name": "azure-mgmt-apimanagement-dotnet", - "description": "Azure Resource Manager SDK for API Management in .NET.", + "description": "Management plane SDK for provisioning and managing Azure API Management resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1514,7 +1624,7 @@ "path": "skills/azure-mgmt-apimanagement-py", "category": "uncategorized", "name": "azure-mgmt-apimanagement-py", - "description": "Azure API Management SDK for Python. Use for managing APIM services, APIs, products, subscriptions, and policies.", + "description": "Manage Azure API Management services, APIs, products, and policies.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1524,7 +1634,7 @@ "path": "skills/azure-mgmt-applicationinsights-dotnet", "category": "uncategorized", "name": "azure-mgmt-applicationinsights-dotnet", - "description": "Azure Application Insights SDK for .NET. Application performance monitoring and observability resource management.", + "description": "Azure Resource Manager SDK for managing Application Insights resources for application performance monitoring.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1534,7 +1644,7 @@ "path": "skills/azure-mgmt-arizeaiobservabilityeval-dotnet", "category": "uncategorized", "name": "azure-mgmt-arizeaiobservabilityeval-dotnet", - "description": "Azure Resource Manager SDK for Arize AI Observability and Evaluation (.NET).", + "description": ".NET SDK for managing Arize AI Observability and Evaluation resources on Azure.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1544,7 +1654,7 @@ "path": "skills/azure-mgmt-botservice-dotnet", "category": "uncategorized", "name": "azure-mgmt-botservice-dotnet", - "description": "Azure Resource Manager SDK for Bot Service in .NET. Management plane operations for creating and managing Azure Bot resources, channels (Teams, DirectLine, Slack), and connection settings.", + "description": "Management plane SDK for provisioning and managing Azure Bot Service resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1554,7 +1664,7 @@ "path": "skills/azure-mgmt-botservice-py", "category": "uncategorized", "name": "azure-mgmt-botservice-py", - "description": "Azure Bot Service Management SDK for Python. Use for creating, managing, and configuring Azure Bot Service resources.", + "description": "Manage Azure Bot Service resources including bots, channels, and connections.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1564,7 +1674,7 @@ "path": "skills/azure-mgmt-fabric-dotnet", "category": "uncategorized", "name": "azure-mgmt-fabric-dotnet", - "description": "Azure Resource Manager SDK for Fabric in .NET.", + "description": "Management plane SDK for provisioning and managing Microsoft Fabric capacity resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1574,7 +1684,7 @@ "path": "skills/azure-mgmt-fabric-py", "category": "uncategorized", "name": "azure-mgmt-fabric-py", - "description": "Azure Fabric Management SDK for Python. Use for managing Microsoft Fabric capacities and resources.", + "description": "Manage Microsoft Fabric capacities and resources programmatically.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1594,7 +1704,7 @@ "path": "skills/azure-mgmt-weightsandbiases-dotnet", "category": "uncategorized", "name": "azure-mgmt-weightsandbiases-dotnet", - "description": "Azure Weights & Biases SDK for .NET. ML experiment tracking and model management via Azure Marketplace. Use for creating W&B instances, managing SSO, marketplace integration, and ML observability.", + "description": "Azure Resource Manager SDK for deploying and managing Weights & Biases ML experiment tracking instances via Azure Marketplace.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1614,7 +1724,7 @@ "path": "skills/azure-monitor-ingestion-java", "category": "uncategorized", "name": "azure-monitor-ingestion-java", - "description": "Azure Monitor Ingestion SDK for Java. Send custom logs to Azure Monitor via Data Collection Rules (DCR) and Data Collection Endpoints (DCE).", + "description": "Client library for sending custom logs to Azure Monitor using the Logs Ingestion API via Data Collection Rules.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1624,7 +1734,7 @@ "path": "skills/azure-monitor-ingestion-py", "category": "uncategorized", "name": "azure-monitor-ingestion-py", - "description": "Azure Monitor Ingestion SDK for Python. Use for sending custom logs to Log Analytics workspace via Logs Ingestion API.", + "description": "Send custom logs to Azure Monitor Log Analytics workspace using the Logs Ingestion API.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1634,7 +1744,7 @@ "path": "skills/azure-monitor-opentelemetry-exporter-java", "category": "uncategorized", "name": "azure-monitor-opentelemetry-exporter-java", - "description": "Azure Monitor OpenTelemetry Exporter for Java. Export OpenTelemetry traces, metrics, and logs to Azure Monitor/Application Insights.", + "description": "> **\u26a0\ufe0f DEPRECATION NOTICE**: This package is deprecated. Migrate to `azure-monitor-opentelemetry-autoconfigure`. > > See [Migration Guide](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-opentelemetry-exporter/MIGRATIO", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1644,7 +1754,7 @@ "path": "skills/azure-monitor-opentelemetry-exporter-py", "category": "uncategorized", "name": "azure-monitor-opentelemetry-exporter-py", - "description": "Azure Monitor OpenTelemetry Exporter for Python. Use for low-level OpenTelemetry export to Application Insights.", + "description": "Low-level exporter for sending OpenTelemetry traces, metrics, and logs to Application Insights.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1654,7 +1764,7 @@ "path": "skills/azure-monitor-opentelemetry-py", "category": "uncategorized", "name": "azure-monitor-opentelemetry-py", - "description": "Azure Monitor OpenTelemetry Distro for Python. Use for one-line Application Insights setup with auto-instrumentation.", + "description": "One-line setup for Application Insights with OpenTelemetry auto-instrumentation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1674,7 +1784,7 @@ "path": "skills/azure-monitor-query-java", "category": "uncategorized", "name": "azure-monitor-query-java", - "description": "Azure Monitor Query SDK for Java. Execute Kusto queries against Log Analytics workspaces and query metrics from Azure resources.", + "description": "> **DEPRECATION NOTICE**: This package is deprecated in favor of: > - `azure-monitor-query-logs` \u2014 For Log Analytics queries > - `azure-monitor-query-metrics` \u2014 For metrics queries > > See migration guides: [Logs Migration](https://github.com/Azure/a", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1684,7 +1794,7 @@ "path": "skills/azure-monitor-query-py", "category": "uncategorized", "name": "azure-monitor-query-py", - "description": "Azure Monitor Query SDK for Python. Use for querying Log Analytics workspaces and Azure Monitor metrics.", + "description": "Query logs and metrics from Azure Monitor and Log Analytics workspaces.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1694,7 +1804,7 @@ "path": "skills/azure-postgres-ts", "category": "uncategorized", "name": "azure-postgres-ts", - "description": "Connect to Azure Database for PostgreSQL Flexible Server from Node.js/TypeScript using the pg (node-postgres) package.", + "description": "Connect to Azure Database for PostgreSQL Flexible Server using the `pg` (node-postgres) package with support for password and Microsoft Entra ID (passwordless) authentication.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1704,7 +1814,7 @@ "path": "skills/azure-resource-manager-cosmosdb-dotnet", "category": "uncategorized", "name": "azure-resource-manager-cosmosdb-dotnet", - "description": "Azure Resource Manager SDK for Cosmos DB in .NET.", + "description": "Management plane SDK for provisioning and managing Azure Cosmos DB resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1714,7 +1824,7 @@ "path": "skills/azure-resource-manager-durabletask-dotnet", "category": "uncategorized", "name": "azure-resource-manager-durabletask-dotnet", - "description": "Azure Resource Manager SDK for Durable Task Scheduler in .NET.", + "description": "Management plane SDK for provisioning and managing Azure Durable Task Scheduler resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1724,7 +1834,7 @@ "path": "skills/azure-resource-manager-mysql-dotnet", "category": "uncategorized", "name": "azure-resource-manager-mysql-dotnet", - "description": "Azure MySQL Flexible Server SDK for .NET. Database management for MySQL Flexible Server deployments.", + "description": "Azure Resource Manager SDK for managing MySQL Flexible Server deployments.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1734,7 +1844,7 @@ "path": "skills/azure-resource-manager-playwright-dotnet", "category": "uncategorized", "name": "azure-resource-manager-playwright-dotnet", - "description": "Azure Resource Manager SDK for Microsoft Playwright Testing in .NET.", + "description": "Management plane SDK for provisioning and managing Microsoft Playwright Testing workspaces via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1744,7 +1854,7 @@ "path": "skills/azure-resource-manager-postgresql-dotnet", "category": "uncategorized", "name": "azure-resource-manager-postgresql-dotnet", - "description": "Azure PostgreSQL Flexible Server SDK for .NET. Database management for PostgreSQL Flexible Server deployments.", + "description": "Azure Resource Manager SDK for managing PostgreSQL Flexible Server deployments.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1754,7 +1864,7 @@ "path": "skills/azure-resource-manager-redis-dotnet", "category": "uncategorized", "name": "azure-resource-manager-redis-dotnet", - "description": "Azure Resource Manager SDK for Redis in .NET.", + "description": "Management plane SDK for provisioning and managing Azure Cache for Redis resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1764,7 +1874,7 @@ "path": "skills/azure-resource-manager-sql-dotnet", "category": "uncategorized", "name": "azure-resource-manager-sql-dotnet", - "description": "Azure Resource Manager SDK for Azure SQL in .NET.", + "description": "Management plane SDK for provisioning and managing Azure SQL resources via Azure Resource Manager.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1774,7 +1884,7 @@ "path": "skills/azure-search-documents-dotnet", "category": "uncategorized", "name": "azure-search-documents-dotnet", - "description": "Azure AI Search SDK for .NET (Azure.Search.Documents). Use for building search applications with full-text, vector, semantic, and hybrid search.", + "description": "Build search applications with full-text, vector, semantic, and hybrid search capabilities.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1784,7 +1894,7 @@ "path": "skills/azure-search-documents-py", "category": "uncategorized", "name": "azure-search-documents-py", - "description": "Azure AI Search SDK for Python. Use for vector search, hybrid search, semantic ranking, indexing, and skillsets.", + "description": "Full-text, vector, and hybrid search with AI enrichment capabilities.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1804,7 +1914,7 @@ "path": "skills/azure-security-keyvault-keys-dotnet", "category": "uncategorized", "name": "azure-security-keyvault-keys-dotnet", - "description": "Azure Key Vault Keys SDK for .NET. Client library for managing cryptographic keys in Azure Key Vault and Managed HSM. Use for key creation, rotation, encryption, decryption, signing, and verification.", + "description": "Client library for managing cryptographic keys in Azure Key Vault and Managed HSM.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1834,7 +1944,7 @@ "path": "skills/azure-servicebus-dotnet", "category": "uncategorized", "name": "azure-servicebus-dotnet", - "description": "Azure Service Bus SDK for .NET. Enterprise messaging with queues, topics, subscriptions, and sessions.", + "description": "Enterprise messaging SDK for reliable message delivery with queues, topics, subscriptions, and sessions.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1844,7 +1954,7 @@ "path": "skills/azure-servicebus-py", "category": "uncategorized", "name": "azure-servicebus-py", - "description": "Azure Service Bus SDK for Python messaging. Use for queues, topics, subscriptions, and enterprise messaging patterns.", + "description": "Enterprise messaging for reliable cloud communication with queues and pub/sub topics.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1864,7 +1974,7 @@ "path": "skills/azure-speech-to-text-rest-py", "category": "uncategorized", "name": "azure-speech-to-text-rest-py", - "description": "Azure Speech to Text REST API for short audio (Python). Use for simple speech recognition of audio files up to 60 seconds without the Speech SDK.", + "description": "Simple REST API for speech-to-text transcription of short audio files (up to 60 seconds). No SDK required - just HTTP requests.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1884,7 +1994,7 @@ "path": "skills/azure-storage-blob-py", "category": "uncategorized", "name": "azure-storage-blob-py", - "description": "Azure Blob Storage SDK for Python. Use for uploading, downloading, listing blobs, managing containers, and blob lifecycle.", + "description": "Client library for Azure Blob Storage \u2014 object storage for unstructured data.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1894,7 +2004,7 @@ "path": "skills/azure-storage-blob-rust", "category": "uncategorized", "name": "azure-storage-blob-rust", - "description": "Azure Blob Storage SDK for Rust. Use for uploading, downloading, and managing blobs and containers.", + "description": "Client library for Azure Blob Storage \u2014 Microsoft's object storage solution for the cloud.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1904,7 +2014,7 @@ "path": "skills/azure-storage-blob-ts", "category": "uncategorized", "name": "azure-storage-blob-ts", - "description": "Azure Blob Storage JavaScript/TypeScript SDK (@azure/storage-blob) for blob operations. Use for uploading, downloading, listing, and managing blobs and containers.", + "description": "SDK for Azure Blob Storage operations \u2014 upload, download, list, and manage blobs and containers.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1914,7 +2024,7 @@ "path": "skills/azure-storage-file-datalake-py", "category": "uncategorized", "name": "azure-storage-file-datalake-py", - "description": "Azure Data Lake Storage Gen2 SDK for Python. Use for hierarchical file systems, big data analytics, and file/directory operations.", + "description": "Hierarchical file system for big data analytics workloads.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1924,7 +2034,7 @@ "path": "skills/azure-storage-file-share-py", "category": "uncategorized", "name": "azure-storage-file-share-py", - "description": "Azure Storage File Share SDK for Python. Use for SMB file shares, directories, and file operations in the cloud.", + "description": "Manage SMB file shares for cloud-native and lift-and-shift scenarios.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1934,7 +2044,7 @@ "path": "skills/azure-storage-file-share-ts", "category": "uncategorized", "name": "azure-storage-file-share-ts", - "description": "Azure File Share JavaScript/TypeScript SDK (@azure/storage-file-share) for SMB file share operations.", + "description": "SDK for Azure File Share operations \u2014 SMB file shares, directories, and file operations.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1944,7 +2054,7 @@ "path": "skills/azure-storage-queue-py", "category": "uncategorized", "name": "azure-storage-queue-py", - "description": "Azure Queue Storage SDK for Python. Use for reliable message queuing, task distribution, and asynchronous processing.", + "description": "Simple, cost-effective message queuing for asynchronous communication.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1954,7 +2064,7 @@ "path": "skills/azure-storage-queue-ts", "category": "uncategorized", "name": "azure-storage-queue-ts", - "description": "Azure Queue Storage JavaScript/TypeScript SDK (@azure/storage-queue) for message queue operations. Use for sending, receiving, peeking, and deleting messages in queues.", + "description": "SDK for Azure Queue Storage operations \u2014 send, receive, peek, and manage messages in queues.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -1974,7 +2084,7 @@ "path": "skills/backend-architect", "category": "uncategorized", "name": "backend-architect", - "description": "Expert backend architect specializing in scalable API design, microservices architecture, and distributed systems.", + "description": "You are a backend system architect specializing in scalable, resilient, and maintainable backend systems and APIs.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2004,7 +2114,7 @@ "path": "skills/backend-security-coder", "category": "uncategorized", "name": "backend-security-coder", - "description": "Expert in secure backend coding practices specializing in input validation, authentication, and API security. Use PROACTIVELY for backend security implementations or security code reviews.", + "description": "- Working on backend security coder tasks or workflows - Needing guidance, best practices, or checklists for backend security coder", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2074,7 +2184,7 @@ "path": "skills/bash-pro", "category": "uncategorized", "name": "bash-pro", - "description": "Master of defensive Bash scripting for production automation, CI/CD\npipelines, and system utilities. Expert in safe, portable, and testable shell\nscripts.\n", + "description": "- Writing or reviewing Bash scripts for automation, CI/CD, or ops - Hardening shell scripts for safety and portability", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2174,7 +2284,7 @@ "path": "skills/blockchain-developer", "category": "uncategorized", "name": "blockchain-developer", - "description": "Build production-ready Web3 applications, smart contracts, and decentralized systems. Implements DeFi protocols, NFT platforms, DAOs, and enterprise blockchain integrations.", + "description": "- Working on blockchain developer tasks or workflows - Needing guidance, best practices, or checklists for blockchain developer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2304,7 +2414,7 @@ "path": "skills/business-analyst", "category": "uncategorized", "name": "business-analyst", - "description": "Master modern business analysis with AI-powered analytics, real-time dashboards, and data-driven insights. Build comprehensive KPI frameworks, predictive models, and strategic recommendations.", + "description": "- Working on business analyst tasks or workflows - Needing guidance, best practices, or checklists for business analyst", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2344,7 +2454,7 @@ "path": "skills/c4-code", "category": "uncategorized", "name": "c4-code", - "description": "Expert C4 Code-level documentation specialist. Analyzes code directories to create comprehensive C4 code-level documentation including function signatures, arguments, dependencies, and code structure.", + "description": "- Working on c4 code level: [directory name] tasks or workflows - Needing guidance, best practices, or checklists for c4 code level: [directory name]", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2354,7 +2464,7 @@ "path": "skills/c4-component", "category": "uncategorized", "name": "c4-component", - "description": "Expert C4 Component-level documentation specialist. Synthesizes C4 Code-level documentation into Component-level architecture, defining component boundaries, interfaces, and relationships.", + "description": "- Working on c4 component level: [component name] tasks or workflows - Needing guidance, best practices, or checklists for c4 component level: [component name]", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2364,7 +2474,7 @@ "path": "skills/c4-container", "category": "uncategorized", "name": "c4-container", - "description": "Expert C4 Container-level documentation specialist.", + "description": "- Working on c4 container level: system deployment tasks or workflows - Needing guidance, best practices, or checklists for c4 container level: system deployment", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2374,7 +2484,7 @@ "path": "skills/c4-context", "category": "uncategorized", "name": "c4-context", - "description": "Expert C4 Context-level documentation specialist. Creates high-level system context diagrams, documents personas, user journeys, system features, and external dependencies.", + "description": "- Working on c4 context level: system context tasks or workflows - Needing guidance, best practices, or checklists for c4 context level: system context", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2434,7 +2544,7 @@ "path": "skills/carrier-relationship-management", "category": "uncategorized", "name": "carrier-relationship-management", - "description": "Codified expertise for managing carrier portfolios, negotiating freight rates, tracking carrier performance, allocating freight, and maintaining strategic carrier relationships.", + "description": "Use this skill when building and managing a carrier network, conducting freight RFPs, negotiating linehaul and accessorial rates, tracking carrier KPIs via scorecards, or ensuring regulatory compliance of transportation partners.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -2684,7 +2794,7 @@ "path": "skills/cloud-architect", "category": "uncategorized", "name": "cloud-architect", - "description": "Expert cloud architect specializing in AWS/Azure/GCP multi-cloud infrastructure design, advanced IaC (Terraform/OpenTofu/CDK), FinOps cost optimization, and modern architectural patterns.", + "description": "- Working on cloud architect tasks or workflows - Needing guidance, best practices, or checklists for cloud architect", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2884,7 +2994,7 @@ "path": "skills/competitive-landscape", "category": "uncategorized", "name": "competitive-landscape", - "description": "This skill should be used when the user asks to \\\\\\\"analyze competitors\", \"assess competitive landscape\", \"identify differentiation\", \"evaluate market positioning\", \"apply Porter's Five Forces\",...", + "description": "Comprehensive frameworks for analyzing competition, identifying differentiation opportunities, and developing winning market positioning strategies.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -2994,7 +3104,7 @@ "path": "skills/conductor-setup", "category": "uncategorized", "name": "conductor-setup", - "description": "Initialize project with Conductor artifacts (product definition,\ntech stack, workflow, style guides)\n", + "description": "Initialize or resume Conductor project setup. This command creates foundational project documentation through interactive Q&A.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3014,7 +3124,7 @@ "path": "skills/conductor-validator", "category": "uncategorized", "name": "conductor-validator", - "description": "Validates Conductor project artifacts for completeness,\nconsistency, and correctness. Use after setup, when diagnosing issues, or\nbefore implementation to verify project context.\n", + "description": "ls -la conductor/", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3044,7 +3154,7 @@ "path": "skills/content-marketer", "category": "uncategorized", "name": "content-marketer", - "description": "Elite content marketing strategist specializing in AI-powered content creation, omnichannel distribution, SEO optimization, and data-driven performance marketing.", + "description": "- Working on content marketer tasks or workflows - Needing guidance, best practices, or checklists for content marketer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3074,7 +3184,7 @@ "path": "skills/context-driven-development", "category": "uncategorized", "name": "context-driven-development", - "description": "Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship between product.md, tech-stack.md, and...", + "description": "Guide for implementing and maintaining context as a managed artifact alongside code, enabling consistent AI interactions and team alignment through structured project documentation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3114,7 +3224,7 @@ "path": "skills/context-manager", "category": "uncategorized", "name": "context-manager", - "description": "Elite AI context engineering specialist mastering dynamic context management, vector databases, knowledge graphs, and intelligent memory systems.", + "description": "- Working on context manager tasks or workflows - Needing guidance, best practices, or checklists for context manager", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3234,7 +3344,7 @@ "path": "skills/cpp-pro", "category": "uncategorized", "name": "cpp-pro", - "description": "Write idiomatic C++ code with modern features, RAII, smart pointers, and STL algorithms. Handles templates, move semantics, and performance optimization.", + "description": "- Working on cpp pro tasks or workflows - Needing guidance, best practices, or checklists for cpp pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3274,7 +3384,7 @@ "path": "skills/crypto-bd-agent", "category": "uncategorized", "name": "crypto-bd-agent", - "description": "Autonomous crypto business development patterns \u2014 multi-chain token discovery, 100-point scoring with wallet forensics, x402 micropayments, ERC-8004 on-chain identity, LLM cascade routing, and...", + "description": "> Production-tested patterns for building AI agents that autonomously discover, > evaluate, and acquire token listings for cryptocurrency exchanges.", "risk": "safe", "source": "community", "date_added": "2026-02-27" @@ -3284,7 +3394,7 @@ "path": "skills/csharp-pro", "category": "uncategorized", "name": "csharp-pro", - "description": "Write modern C# code with advanced features like records, pattern matching, and async/await. Optimizes .NET applications, implements enterprise patterns, and ensures comprehensive testing.", + "description": "- Working on csharp pro tasks or workflows - Needing guidance, best practices, or checklists for csharp pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3304,7 +3414,7 @@ "path": "skills/customer-support", "category": "uncategorized", "name": "customer-support", - "description": "Elite AI-powered customer support specialist mastering conversational AI, automated ticketing, sentiment analysis, and omnichannel support experiences.", + "description": "- Working on customer support tasks or workflows - Needing guidance, best practices, or checklists for customer support", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3314,7 +3424,7 @@ "path": "skills/customs-trade-compliance", "category": "uncategorized", "name": "customs-trade-compliance", - "description": "Codified expertise for customs documentation, tariff classification, duty optimisation, restricted party screening, and regulatory compliance across multiple jurisdictions.", + "description": "Use this skill when navigating international trade regulations, classifying goods under HS codes, determining appropriate Incoterms, managing import/export documentation, or optimizing customs duty payments through Free Trade Agreements.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -3334,7 +3444,7 @@ "path": "skills/data-engineer", "category": "uncategorized", "name": "data-engineer", - "description": "Build scalable data pipelines, modern data warehouses, and real-time streaming architectures. Implements Apache Spark, dbt, Airflow, and cloud-native data platforms.", + "description": "You are a data engineer specializing in scalable data pipelines, modern data architecture, and analytics infrastructure.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3374,7 +3484,7 @@ "path": "skills/data-scientist", "category": "uncategorized", "name": "data-scientist", - "description": "Expert data scientist for advanced analytics, machine learning, and statistical modeling. Handles complex data analysis, predictive modeling, and business intelligence.", + "description": "- Working on data scientist tasks or workflows - Needing guidance, best practices, or checklists for data scientist", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3414,7 +3524,7 @@ "path": "skills/database-admin", "category": "uncategorized", "name": "database-admin", - "description": "Expert database administrator specializing in modern cloud databases, automation, and reliability engineering.", + "description": "- Working on database admin tasks or workflows - Needing guidance, best practices, or checklists for database admin", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3424,7 +3534,7 @@ "path": "skills/database-architect", "category": "uncategorized", "name": "database-architect", - "description": "Expert database architect specializing in data layer design from scratch, technology selection, schema modeling, and scalable database architectures.", + "description": "You are a database architect specializing in designing scalable, performant, and maintainable data layers from the ground up.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3484,7 +3594,7 @@ "path": "skills/database-optimizer", "category": "uncategorized", "name": "database-optimizer", - "description": "Expert database optimizer specializing in modern performance tuning, query optimization, and scalable architectures.", + "description": "- Working on database optimizer tasks or workflows - Needing guidance, best practices, or checklists for database optimizer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3574,7 +3684,7 @@ "path": "skills/debugger", "category": "uncategorized", "name": "debugger", - "description": "Debugging specialist for errors, test failures, and unexpected\nbehavior. Use proactively when encountering any issues.\n", + "description": "- Working on debugger tasks or workflows - Needing guidance, best practices, or checklists for debugger", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3644,7 +3754,7 @@ "path": "skills/deployment-engineer", "category": "uncategorized", "name": "deployment-engineer", - "description": "Expert deployment engineer specializing in modern CI/CD pipelines, GitOps workflows, and advanced deployment automation.", + "description": "You are a deployment engineer specializing in modern CI/CD pipelines, GitOps workflows, and advanced deployment automation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3694,7 +3804,7 @@ "path": "skills/design-orchestration", "category": "uncategorized", "name": "design-orchestration", - "description": "Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order.", + "description": "Ensure that ideas become designs, designs are reviewed, and only validated designs reach implementation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3714,7 +3824,7 @@ "path": "skills/devops-troubleshooter", "category": "uncategorized", "name": "devops-troubleshooter", - "description": "Expert DevOps troubleshooter specializing in rapid incident response, advanced debugging, and modern observability.", + "description": "- Working on devops troubleshooter tasks or workflows - Needing guidance, best practices, or checklists for devops troubleshooter", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3774,7 +3884,7 @@ "path": "skills/django-pro", "category": "uncategorized", "name": "django-pro", - "description": "Master Django 5.x with async views, DRF, Celery, and Django Channels. Build scalable web applications with proper architecture, testing, and deployment.", + "description": "- Working on django pro tasks or workflows - Needing guidance, best practices, or checklists for django pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3804,7 +3914,7 @@ "path": "skills/docs-architect", "category": "uncategorized", "name": "docs-architect", - "description": "Creates comprehensive technical documentation from existing codebases. Analyzes architecture, design patterns, and implementation details to produce long-form technical manuals and ebooks.", + "description": "- Working on docs architect tasks or workflows - Needing guidance, best practices, or checklists for docs architect", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3874,7 +3984,7 @@ "path": "skills/dotnet-architect", "category": "uncategorized", "name": "dotnet-architect", - "description": "Expert .NET backend architect specializing in C#, ASP.NET Core, Entity Framework, Dapper, and enterprise application patterns.", + "description": "- Working on dotnet architect tasks or workflows - Needing guidance, best practices, or checklists for dotnet architect", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3924,7 +4034,7 @@ "path": "skills/dx-optimizer", "category": "uncategorized", "name": "dx-optimizer", - "description": "Developer Experience specialist. Improves tooling, setup, and workflows. Use PROACTIVELY when setting up new projects, after team feedback, or when development friction is noticed.", + "description": "- Working on dx optimizer tasks or workflows - Needing guidance, best practices, or checklists for dx optimizer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3954,7 +4064,7 @@ "path": "skills/elixir-pro", "category": "uncategorized", "name": "elixir-pro", - "description": "Write idiomatic Elixir code with OTP patterns, supervision trees, and Phoenix LiveView. Masters concurrency, fault tolerance, and distributed systems.", + "description": "- Working on elixir pro tasks or workflows - Needing guidance, best practices, or checklists for elixir pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -3974,7 +4084,7 @@ "path": "skills/email-systems", "category": "uncategorized", "name": "email-systems", - "description": "Email has the highest ROI of any marketing channel. $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, landing in spam folders. This skill cov...", + "description": "Email has the highest ROI of any marketing channel. $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, landing in spam folders. This skill cov...", "risk": "unknown", "source": "vibeship-spawner-skills (Apache 2.0)", "date_added": "2026-02-27" @@ -4004,7 +4114,7 @@ "path": "skills/energy-procurement", "category": "uncategorized", "name": "energy-procurement", - "description": "Codified expertise for electricity and gas procurement, tariff optimisation, demand charge management, renewable PPA evaluation, and multi-facility energy cost management.", + "description": "Use this skill when managing energy procurement tasks, such as optimizing electricity or gas tariffs, evaluating Power Purchase Agreements (PPAs), or developing long-term energy cost management strategies for commercial or industrial facilities.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -4054,7 +4164,7 @@ "path": "skills/error-detective", "category": "uncategorized", "name": "error-detective", - "description": "Search logs and codebases for error patterns, stack traces, and anomalies. Correlates errors across systems and identifies root causes.", + "description": "- Working on error detective tasks or workflows - Needing guidance, best practices, or checklists for error detective", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4234,7 +4344,7 @@ "path": "skills/fastapi-pro", "category": "uncategorized", "name": "fastapi-pro", - "description": "Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns.", + "description": "- Working on fastapi pro tasks or workflows - Needing guidance, best practices, or checklists for fastapi pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4354,7 +4464,7 @@ "path": "skills/firmware-analyst", "category": "uncategorized", "name": "firmware-analyst", - "description": "Expert firmware analyst specializing in embedded systems, IoT security, and hardware reverse engineering.", + "description": "wget http://vendor.com/firmware/update.bin", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4374,7 +4484,7 @@ "path": "skills/flutter-expert", "category": "uncategorized", "name": "flutter-expert", - "description": "Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment.", + "description": "- Working on flutter expert tasks or workflows - Needing guidance, best practices, or checklists for flutter expert", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4384,7 +4494,7 @@ "path": "skills/form-cro", "category": "uncategorized", "name": "form-cro", - "description": "Optimize any form that is NOT signup or account registration \u2014 including lead capture, contact, demo request, application, survey, quote, and checkout forms.", + "description": "You are an expert in **form optimization and friction reduction**. Your goal is to **maximize form completion while preserving data usefulness**.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4504,7 +4614,7 @@ "path": "skills/frontend-developer", "category": "uncategorized", "name": "frontend-developer", - "description": "Build React components, implement responsive layouts, and handle client-side state management. Masters React 19, Next.js 15, and modern frontend architecture.", + "description": "You are a frontend development expert specializing in modern React applications, Next.js, and cutting-edge frontend architecture.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4534,7 +4644,7 @@ "path": "skills/frontend-security-coder", "category": "uncategorized", "name": "frontend-security-coder", - "description": "Expert in secure frontend coding practices specializing in XSS prevention, output sanitization, and client-side security patterns.", + "description": "- Working on frontend security coder tasks or workflows - Needing guidance, best practices, or checklists for frontend security coder", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4834,7 +4944,7 @@ "path": "skills/golang-pro", "category": "uncategorized", "name": "golang-pro", - "description": "Master Go 1.21+ with modern patterns, advanced concurrency, performance optimization, and production-ready microservices.", + "description": "You are a Go expert specializing in modern Go 1.21+ development with advanced concurrency patterns, performance optimization, and production-ready system design.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4904,7 +5014,7 @@ "path": "skills/graphql-architect", "category": "uncategorized", "name": "graphql-architect", - "description": "Master modern GraphQL with federation, performance optimization, and enterprise security. Build scalable schemas, implement advanced caching, and design real-time systems.", + "description": "- Working on graphql architect tasks or workflows - Needing guidance, best practices, or checklists for graphql architect", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4964,7 +5074,7 @@ "path": "skills/hig-components-content", "category": "uncategorized", "name": "hig-components-content", - "description": "Apple Human Interface Guidelines for content display components.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4974,7 +5084,7 @@ "path": "skills/hig-components-controls", "category": "uncategorized", "name": "hig-components-controls", - "description": "Apple HIG guidance for selection and input controls including pickers, toggles, sliders, steppers, segmented controls, combo boxes, text fields, text views, labels, token fields, virtual...", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4984,7 +5094,7 @@ "path": "skills/hig-components-dialogs", "category": "uncategorized", "name": "hig-components-dialogs", - "description": "Apple HIG guidance for presentation components including alerts, action sheets, popovers, sheets, and digit entry views.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -4994,7 +5104,7 @@ "path": "skills/hig-components-layout", "category": "uncategorized", "name": "hig-components-layout", - "description": "Apple Human Interface Guidelines for layout and navigation components.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5004,7 +5114,7 @@ "path": "skills/hig-components-menus", "category": "uncategorized", "name": "hig-components-menus", - "description": "Apple HIG guidance for menu and button components including menus, context menus, dock menus, edit menus, the menu bar, toolbars, action buttons, pop-up buttons, pull-down buttons, disclosure...", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5014,7 +5124,7 @@ "path": "skills/hig-components-search", "category": "uncategorized", "name": "hig-components-search", - "description": "Apple HIG guidance for navigation-related components including search fields, page controls, and path controls.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5024,7 +5134,7 @@ "path": "skills/hig-components-status", "category": "uncategorized", "name": "hig-components-status", - "description": "Apple HIG guidance for status and progress UI components including progress indicators, status bars, and activity rings.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5034,7 +5144,7 @@ "path": "skills/hig-components-system", "category": "uncategorized", "name": "hig-components-system", - "description": "Apple HIG guidance for system experience components: widgets, live activities, notifications, complications, home screen quick actions, top shelf, watch faces, app clips, and app shortcuts.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5044,7 +5154,7 @@ "path": "skills/hig-foundations", "category": "uncategorized", "name": "hig-foundations", - "description": "Apple Human Interface Guidelines design foundations.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5054,7 +5164,7 @@ "path": "skills/hig-inputs", "category": "uncategorized", "name": "hig-inputs", - "description": "Apple HIG guidance for input methods and interaction patterns: gestures, Apple Pencil, keyboards, game controllers, pointers, Digital Crown, eye tracking, focus system, remotes, spatial...", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5064,7 +5174,7 @@ "path": "skills/hig-patterns", "category": "uncategorized", "name": "hig-patterns", - "description": "Apple Human Interface Guidelines interaction and UX patterns.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5074,7 +5184,7 @@ "path": "skills/hig-platforms", "category": "uncategorized", "name": "hig-platforms", - "description": "Apple Human Interface Guidelines for platform-specific design.", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5084,7 +5194,7 @@ "path": "skills/hig-project-context", "category": "uncategorized", "name": "hig-project-context", - "description": "Create or update a shared Apple design context document that other HIG skills use to tailor guidance.", + "description": "Create and maintain `.claude/apple-design-context.md` so other HIG skills can skip redundant questions.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5094,7 +5204,7 @@ "path": "skills/hig-technologies", "category": "uncategorized", "name": "hig-technologies", - "description": "Apple HIG guidance for Apple technology integrations: Siri, Apple Pay, HealthKit, HomeKit, ARKit, machine learning, generative AI, iCloud, Sign in with Apple, SharePlay, CarPlay, Game Center,...", + "description": "Check for `.claude/apple-design-context.md` before asking questions. Use existing context and only ask for information not already covered.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5114,7 +5224,7 @@ "path": "skills/hr-pro", "category": "uncategorized", "name": "hr-pro", - "description": "Professional, ethical HR partner for hiring, onboarding/offboarding, PTO and leave, performance, compliant policies, and employee relations.", + "description": "- Working on hr pro tasks or workflows - Needing guidance, best practices, or checklists for hr pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5174,7 +5284,7 @@ "path": "skills/hybrid-cloud-architect", "category": "uncategorized", "name": "hybrid-cloud-architect", - "description": "Expert hybrid cloud architect specializing in complex multi-cloud solutions across AWS/Azure/GCP and private clouds (OpenStack/VMware).", + "description": "- Working on hybrid cloud architect tasks or workflows - Needing guidance, best practices, or checklists for hybrid cloud architect", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5224,7 +5334,7 @@ "path": "skills/imagen", "category": "uncategorized", "name": "imagen", - "description": "AI image generation skill powered by Google Gemini, enabling seamless visual content creation for UI placeholders, documentation, and design assets.", + "description": "This skill generates images using Google Gemini's image generation model (`gemini-3-pro-image-preview`). It enables seamless image creation during any Claude Code session - whether you're building frontend UIs, creating documentation, or need visual", "risk": "safe", "source": "https://github.com/sanjay3290/ai-skills/tree/main/skills/imagen", "date_added": "2026-02-27" @@ -5244,7 +5354,7 @@ "path": "skills/incident-responder", "category": "uncategorized", "name": "incident-responder", - "description": "Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management.", + "description": "- Working on incident responder tasks or workflows - Needing guidance, best practices, or checklists for incident responder", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5354,7 +5464,7 @@ "path": "skills/inventory-demand-planning", "category": "uncategorized", "name": "inventory-demand-planning", - "description": "Codified expertise for demand forecasting, safety stock optimisation, replenishment planning, and promotional lift estimation at multi-location retailers.", + "description": "Use this skill when forecasting product demand, calculating optimal safety stock levels, planning inventory replenishment cycles, estimating the impact of retail promotions, or conducting ABC/XYZ inventory segmentation.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -5364,7 +5474,7 @@ "path": "skills/ios-developer", "category": "uncategorized", "name": "ios-developer", - "description": "Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization.", + "description": "- Working on ios developer tasks or workflows - Needing guidance, best practices, or checklists for ios developer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5394,7 +5504,7 @@ "path": "skills/java-pro", "category": "uncategorized", "name": "java-pro", - "description": "Master Java 21+ with modern features like virtual threads, pattern matching, and Spring Boot 3.x. Expert in the latest Java ecosystem including GraalVM, Project Loom, and cloud-native patterns.", + "description": "- Working on java pro tasks or workflows - Needing guidance, best practices, or checklists for java pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5414,7 +5524,7 @@ "path": "skills/javascript-pro", "category": "uncategorized", "name": "javascript-pro", - "description": "Master modern JavaScript with ES6+, async patterns, and Node.js APIs. Handles promises, event loops, and browser/Node compatibility.", + "description": "You are a JavaScript expert specializing in modern JS and async programming.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5454,7 +5564,7 @@ "path": "skills/julia-pro", "category": "uncategorized", "name": "julia-pro", - "description": "Master Julia 1.10+ with modern features, performance optimization, multiple dispatch, and production-ready practices.", + "description": "- Working on julia pro tasks or workflows - Needing guidance, best practices, or checklists for julia pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5524,7 +5634,7 @@ "path": "skills/kubernetes-architect", "category": "uncategorized", "name": "kubernetes-architect", - "description": "Expert Kubernetes architect specializing in cloud-native infrastructure, advanced GitOps workflows (ArgoCD/Flux), and enterprise container orchestration.", + "description": "You are a Kubernetes architect specializing in cloud-native infrastructure, modern GitOps workflows, and enterprise container orchestration at scale.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5614,7 +5724,7 @@ "path": "skills/legacy-modernizer", "category": "uncategorized", "name": "legacy-modernizer", - "description": "Refactor legacy codebases, migrate outdated frameworks, and implement gradual modernization. Handles technical debt, dependency updates, and backward compatibility.", + "description": "- Working on legacy modernizer tasks or workflows - Needing guidance, best practices, or checklists for legacy modernizer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5624,7 +5734,7 @@ "path": "skills/legal-advisor", "category": "uncategorized", "name": "legal-advisor", - "description": "Draft privacy policies, terms of service, disclaimers, and legal notices. Creates GDPR-compliant texts, cookie policies, and data processing agreements.", + "description": "- Working on legal advisor tasks or workflows - Needing guidance, best practices, or checklists for legal advisor", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5784,7 +5894,7 @@ "path": "skills/logistics-exception-management", "category": "uncategorized", "name": "logistics-exception-management", - "description": "Codified expertise for handling freight exceptions, shipment delays, damages, losses, and carrier disputes. Informed by logistics professionals with 15+ years operational experience.", + "description": "Use this skill when dealing with deviations from planned logistics operations, such as transit delays, damaged shipments, lost cargo, or when initiating and managing claims and disputes with freight carriers.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -5804,7 +5914,7 @@ "path": "skills/m365-agents-dotnet", "category": "uncategorized", "name": "m365-agents-dotnet", - "description": "Microsoft 365 Agents SDK for .NET. Build multichannel agents for Teams/M365/Copilot Studio with ASP.NET Core hosting, AgentApplication routing, and MSAL-based auth.", + "description": "Build enterprise agents for Microsoft 365, Teams, and Copilot Studio using the Microsoft.Agents SDK with ASP.NET Core hosting, agent routing, and MSAL-based authentication.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5814,7 +5924,7 @@ "path": "skills/m365-agents-py", "category": "uncategorized", "name": "m365-agents-py", - "description": "Microsoft 365 Agents SDK for Python. Build multichannel agents for Teams/M365/Copilot Studio with aiohttp hosting, AgentApplication routing, streaming responses, and MSAL-based auth.", + "description": "Build enterprise agents for Microsoft 365, Teams, and Copilot Studio using the Microsoft Agents SDK with aiohttp hosting, AgentApplication routing, streaming responses, and MSAL-based authentication.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5824,7 +5934,7 @@ "path": "skills/m365-agents-ts", "category": "uncategorized", "name": "m365-agents-ts", - "description": "Microsoft 365 Agents SDK for TypeScript/Node.js.", + "description": "Build enterprise agents for Microsoft 365, Teams, and Copilot Studio using the Microsoft 365 Agents SDK with Express hosting, AgentApplication routing, streaming responses, and Copilot Studio client integrations.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5874,7 +5984,7 @@ "path": "skills/malware-analyst", "category": "uncategorized", "name": "malware-analyst", - "description": "Expert malware analyst specializing in defensive malware research, threat intelligence, and incident response. Masters sandbox analysis, behavioral analysis, and malware family identification.", + "description": "file sample.exe sha256sum sample.exe", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5894,7 +6004,7 @@ "path": "skills/market-sizing-analysis", "category": "uncategorized", "name": "market-sizing-analysis", - "description": "This skill should be used when the user asks to \\\\\\\"calculate TAM\\\\\\\", \"determine SAM\", \"estimate SOM\", \"size the market\", \"calculate market opportunity\", \"what's the total addressable market\", or...", + "description": "Comprehensive market sizing methodologies for calculating Total Addressable Market (TAM), Serviceable Available Market (SAM), and Serviceable Obtainable Market (SOM) for startup opportunities.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -5974,7 +6084,7 @@ "path": "skills/mermaid-expert", "category": "uncategorized", "name": "mermaid-expert", - "description": "Create Mermaid diagrams for flowcharts, sequences, ERDs, and architectures. Masters syntax for all diagram types and styling.", + "description": "- Working on mermaid expert tasks or workflows - Needing guidance, best practices, or checklists for mermaid expert", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6014,7 +6124,7 @@ "path": "skills/microsoft-azure-webjobs-extensions-authentication-events-dotnet", "category": "uncategorized", "name": "microsoft-azure-webjobs-extensions-authentication-events-dotnet", - "description": "Microsoft Entra Authentication Events SDK for .NET. Azure Functions triggers for custom authentication extensions.", + "description": "Azure Functions extension for handling Microsoft Entra ID custom authentication events.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6034,7 +6144,7 @@ "path": "skills/minecraft-bukkit-pro", "category": "uncategorized", "name": "minecraft-bukkit-pro", - "description": "Master Minecraft server plugin development with Bukkit, Spigot, and Paper APIs.", + "description": "- Working on minecraft bukkit pro tasks or workflows - Needing guidance, best practices, or checklists for minecraft bukkit pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6064,7 +6174,7 @@ "path": "skills/ml-engineer", "category": "uncategorized", "name": "ml-engineer", - "description": "Build production ML systems with PyTorch 2.x, TensorFlow, and modern ML frameworks. Implements model serving, feature engineering, A/B testing, and monitoring.", + "description": "- Working on ml engineer tasks or workflows - Needing guidance, best practices, or checklists for ml engineer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6084,7 +6194,7 @@ "path": "skills/mlops-engineer", "category": "uncategorized", "name": "mlops-engineer", - "description": "Build comprehensive ML pipelines, experiment tracking, and model registries with MLflow, Kubeflow, and modern MLOps tools.", + "description": "- Working on mlops engineer tasks or workflows - Needing guidance, best practices, or checklists for mlops engineer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6104,7 +6214,7 @@ "path": "skills/mobile-developer", "category": "uncategorized", "name": "mobile-developer", - "description": "Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync, and app store optimization.", + "description": "- Working on mobile developer tasks or workflows - Needing guidance, best practices, or checklists for mobile developer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6124,7 +6234,7 @@ "path": "skills/mobile-security-coder", "category": "uncategorized", "name": "mobile-security-coder", - "description": "Expert in secure mobile coding practices specializing in input validation, WebView security, and mobile-specific security patterns.", + "description": "- Working on mobile security coder tasks or workflows - Needing guidance, best practices, or checklists for mobile security coder", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6194,7 +6304,7 @@ "path": "skills/multi-agent-brainstorming", "category": "uncategorized", "name": "multi-agent-brainstorming", - "description": "Simulate a structured peer-review process using multiple specialized agents to validate designs, surface hidden assumptions, and identify failure modes before implementation.", + "description": "Transform a single-agent design into a robust, review-validated design by simulating a formal peer-review process using multiple constrained agents.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6334,7 +6444,7 @@ "path": "skills/network-engineer", "category": "uncategorized", "name": "network-engineer", - "description": "Expert network engineer specializing in modern cloud networking, security architectures, and performance optimization.", + "description": "- Working on network engineer tasks or workflows - Needing guidance, best practices, or checklists for network engineer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6454,7 +6564,7 @@ "path": "skills/observability-engineer", "category": "uncategorized", "name": "observability-engineer", - "description": "Build production-ready monitoring, logging, and tracing systems. Implements comprehensive observability strategies, SLI/SLO management, and incident response workflows.", + "description": "You are an observability engineer specializing in production-grade monitoring, logging, tracing, and reliability systems for enterprise-scale applications.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6594,7 +6704,7 @@ "path": "skills/page-cro", "category": "uncategorized", "name": "page-cro", - "description": "Analyze and optimize individual pages for conversion performance.", + "description": "You are an expert in **page-level conversion optimization**. Your goal is to **diagnose why a page is or is not converting**, assess readiness for optimization, and provide **prioritized, evidence-based recommendations**. You do **not** guarantee con", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6634,7 +6744,7 @@ "path": "skills/payment-integration", "category": "uncategorized", "name": "payment-integration", - "description": "Integrate Stripe, PayPal, and payment processors. Handles checkout flows, subscriptions, webhooks, and PCI compliance. Use PROACTIVELY when implementing payments, billing, or subscription features.", + "description": "- Working on payment integration tasks or workflows - Needing guidance, best practices, or checklists for payment integration", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6764,7 +6874,7 @@ "path": "skills/php-pro", "category": "uncategorized", "name": "php-pro", - "description": "Write idiomatic PHP code with generators, iterators, SPL data\nstructures, and modern OOP features. Use PROACTIVELY for high-performance PHP\napplications.\n", + "description": "- Working on php pro tasks or workflows - Needing guidance, best practices, or checklists for php pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6844,7 +6954,7 @@ "path": "skills/posix-shell-pro", "category": "uncategorized", "name": "posix-shell-pro", - "description": "Expert in strict POSIX sh scripting for maximum portability across Unix-like systems. Specializes in shell scripts that run on any POSIX-compliant shell (dash, ash, sh, bash --posix).", + "description": "- Working on posix shell pro tasks or workflows - Needing guidance, best practices, or checklists for posix shell pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -6984,7 +7094,7 @@ "path": "skills/production-scheduling", "category": "uncategorized", "name": "production-scheduling", - "description": "Codified expertise for production scheduling, job sequencing, line balancing, changeover optimisation, and bottleneck resolution in discrete and batch manufacturing.", + "description": "Use this skill when planning manufacturing operations, sequencing jobs to minimize changeover times, balancing production lines, resolving factory bottlenecks, or responding to unexpected equipment downtime and supply disruptions.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -6994,7 +7104,7 @@ "path": "skills/programmatic-seo", "category": "uncategorized", "name": "programmatic-seo", - "description": "Design and evaluate programmatic SEO strategies for creating SEO-driven pages at scale using templates and structured data.", + "description": "---", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7154,7 +7264,7 @@ "path": "skills/python-pro", "category": "uncategorized", "name": "python-pro", - "description": "Master Python 3.12+ with modern features, async programming, performance optimization, and production-ready practices. Expert in the latest Python ecosystem including uv, ruff, pydantic, and FastAPI.", + "description": "You are a Python expert specializing in modern Python 3.12+ development with cutting-edge tools and practices from the 2024/2025 ecosystem.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7174,7 +7284,7 @@ "path": "skills/quality-nonconformance", "category": "uncategorized", "name": "quality-nonconformance", - "description": "Codified expertise for quality control, non-conformance investigation, root cause analysis, corrective action, and supplier quality management in regulated manufacturing.", + "description": "Use this skill when investigating product defects or process deviations, performing root cause analysis (RCA), managing Corrective and Preventive Actions (CAPA), interpreting Statistical Process Control (SPC) data, or auditing supplier quality.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -7184,7 +7294,7 @@ "path": "skills/quant-analyst", "category": "uncategorized", "name": "quant-analyst", - "description": "Build financial models, backtest trading strategies, and analyze market data. Implements risk metrics, portfolio optimization, and statistical arbitrage.", + "description": "- Working on quant analyst tasks or workflows - Needing guidance, best practices, or checklists for quant analyst", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7364,7 +7474,7 @@ "path": "skills/reference-builder", "category": "uncategorized", "name": "reference-builder", - "description": "Creates exhaustive technical references and API documentation. Generates comprehensive parameter listings, configuration guides, and searchable reference materials.", + "description": "- Working on reference builder tasks or workflows - Needing guidance, best practices, or checklists for reference builder", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7424,7 +7534,7 @@ "path": "skills/returns-reverse-logistics", "category": "uncategorized", "name": "returns-reverse-logistics", - "description": "Codified expertise for returns authorisation, receipt and inspection, disposition decisions, refund processing, fraud detection, and warranty claims management.", + "description": "Use this skill when managing the product return lifecycle, including authorization, physical inspection, making disposition decisions (e.g., restock vs. liquidator), detecting return fraud, or processing warranty claims.", "risk": "safe", "source": "https://github.com/ai-evos/agent-skills", "date_added": "2026-02-27" @@ -7434,7 +7544,7 @@ "path": "skills/reverse-engineer", "category": "uncategorized", "name": "reverse-engineer", - "description": "Expert reverse engineer specializing in binary analysis, disassembly, decompilation, and software analysis. Masters IDA Pro, Ghidra, radare2, x64dbg, and modern RE toolchains.", + "description": "- IDAPython (IDA Pro scripting) - Ghidra scripting (Java/Python via Jython) - r2pipe (radare2 Python API) - pwntools (CTF/exploitation toolkit) - capstone (disassembly framework) - keystone (assembly framework) - unicorn (CPU emulator framework) - an", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7444,7 +7554,7 @@ "path": "skills/risk-manager", "category": "uncategorized", "name": "risk-manager", - "description": "Monitor portfolio risk, R-multiples, and position limits. Creates hedging strategies, calculates expectancy, and implements stop-losses.", + "description": "- Working on risk manager tasks or workflows - Needing guidance, best practices, or checklists for risk manager", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7464,7 +7574,7 @@ "path": "skills/ruby-pro", "category": "uncategorized", "name": "ruby-pro", - "description": "Write idiomatic Ruby code with metaprogramming, Rails patterns, and performance optimization. Specializes in Ruby on Rails, gem development, and testing frameworks.", + "description": "- Working on ruby pro tasks or workflows - Needing guidance, best practices, or checklists for ruby pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7484,7 +7594,7 @@ "path": "skills/rust-pro", "category": "uncategorized", "name": "rust-pro", - "description": "Master Rust 1.75+ with modern async patterns, advanced type system features, and production-ready systems programming.", + "description": "You are a Rust expert specializing in modern Rust 1.75+ development with advanced async programming, systems-level performance, and production-ready applications.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7504,7 +7614,7 @@ "path": "skills/sales-automator", "category": "uncategorized", "name": "sales-automator", - "description": "Draft cold emails, follow-ups, and proposal templates. Creates\npricing pages, case studies, and sales scripts. Use PROACTIVELY for sales\noutreach or lead nurturing.\n", + "description": "- Working on sales automator tasks or workflows - Needing guidance, best practices, or checklists for sales automator", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7544,7 +7654,7 @@ "path": "skills/scala-pro", "category": "uncategorized", "name": "scala-pro", - "description": "Master enterprise-grade Scala development with functional programming, distributed systems, and big data processing. Expert in Apache Pekko, Akka, Spark, ZIO/Cats Effect, and reactive architectures.", + "description": "- Working on scala pro tasks or workflows - Needing guidance, best practices, or checklists for scala pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7564,7 +7674,7 @@ "path": "skills/schema-markup", "category": "uncategorized", "name": "schema-markup", - "description": "Design, validate, and optimize schema.org structured data for eligibility, correctness, and measurable SEO impact.", + "description": "---", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7634,7 +7744,7 @@ "path": "skills/security-auditor", "category": "uncategorized", "name": "security-auditor", - "description": "Expert security auditor specializing in DevSecOps, comprehensive cybersecurity, and compliance frameworks.", + "description": "You are a security auditor specializing in DevSecOps, application security, and comprehensive cybersecurity practices.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7694,7 +7804,7 @@ "path": "skills/security-scanning-security-sast", "category": "uncategorized", "name": "security-scanning-security-sast", - "description": "Static Application Security Testing (SAST) for code vulnerability\nanalysis across multiple languages and frameworks\n", + "description": "Static Application Security Testing (SAST) for comprehensive code vulnerability detection across multiple languages, frameworks, and security patterns.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7764,7 +7874,7 @@ "path": "skills/seo-audit", "category": "uncategorized", "name": "seo-audit", - "description": "Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance.", + "description": "You are an **SEO diagnostic specialist**. Your role is to **identify, explain, and prioritize SEO issues** that affect organic visibility\u2014**not to implement fixes unless explicitly requested**.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7774,7 +7884,7 @@ "path": "skills/seo-authority-builder", "category": "uncategorized", "name": "seo-authority-builder", - "description": "Analyzes content for E-E-A-T signals and suggests improvements to\nbuild authority and trust. Identifies missing credibility elements. Use\nPROACTIVELY for YMYL topics.\n", + "description": "- Working on seo authority builder tasks or workflows - Needing guidance, best practices, or checklists for seo authority builder", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7784,7 +7894,7 @@ "path": "skills/seo-cannibalization-detector", "category": "uncategorized", "name": "seo-cannibalization-detector", - "description": "Analyzes multiple provided pages to identify keyword overlap and potential cannibalization issues. Suggests differentiation strategies. Use PROACTIVELY when reviewing similar content.", + "description": "- Working on seo cannibalization detector tasks or workflows - Needing guidance, best practices, or checklists for seo cannibalization detector", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7794,7 +7904,7 @@ "path": "skills/seo-content-auditor", "category": "uncategorized", "name": "seo-content-auditor", - "description": "Analyzes provided content for quality, E-E-A-T signals, and SEO best practices. Scores content and provides improvement recommendations based on established guidelines.", + "description": "- Working on seo content auditor tasks or workflows - Needing guidance, best practices, or checklists for seo content auditor", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7804,7 +7914,7 @@ "path": "skills/seo-content-planner", "category": "uncategorized", "name": "seo-content-planner", - "description": "Creates comprehensive content outlines and topic clusters for SEO.\nPlans content calendars and identifies topic gaps. Use PROACTIVELY for content\nstrategy and planning.\n", + "description": "- Working on seo content planner tasks or workflows - Needing guidance, best practices, or checklists for seo content planner", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7814,7 +7924,7 @@ "path": "skills/seo-content-refresher", "category": "uncategorized", "name": "seo-content-refresher", - "description": "Identifies outdated elements in provided content and suggests updates to maintain freshness. Finds statistics, dates, and examples that need updating. Use PROACTIVELY for older content.", + "description": "- Working on seo content refresher tasks or workflows - Needing guidance, best practices, or checklists for seo content refresher", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7824,7 +7934,7 @@ "path": "skills/seo-content-writer", "category": "uncategorized", "name": "seo-content-writer", - "description": "Writes SEO-optimized content based on provided keywords and topic briefs. Creates engaging, comprehensive content following best practices. Use PROACTIVELY for content creation tasks.", + "description": "- Working on seo content writer tasks or workflows - Needing guidance, best practices, or checklists for seo content writer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7844,7 +7954,7 @@ "path": "skills/seo-fundamentals", "category": "uncategorized", "name": "seo-fundamentals", - "description": "Core principles of SEO including E-E-A-T, Core Web Vitals, technical foundations, content quality, and how modern search engines evaluate pages.", + "description": "---", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7854,7 +7964,7 @@ "path": "skills/seo-keyword-strategist", "category": "uncategorized", "name": "seo-keyword-strategist", - "description": "Analyzes keyword usage in provided content, calculates density, suggests semantic variations and LSI keywords based on the topic. Prevents over-optimization. Use PROACTIVELY for content optimization.", + "description": "- Working on seo keyword strategist tasks or workflows - Needing guidance, best practices, or checklists for seo keyword strategist", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7864,7 +7974,7 @@ "path": "skills/seo-meta-optimizer", "category": "uncategorized", "name": "seo-meta-optimizer", - "description": "Creates optimized meta titles, descriptions, and URL suggestions based on character limits and best practices. Generates compelling, keyword-rich metadata. Use PROACTIVELY for new content.", + "description": "- Working on seo meta optimizer tasks or workflows - Needing guidance, best practices, or checklists for seo meta optimizer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7874,7 +7984,7 @@ "path": "skills/seo-snippet-hunter", "category": "uncategorized", "name": "seo-snippet-hunter", - "description": "Formats content to be eligible for featured snippets and SERP features. Creates snippet-optimized content blocks based on best practices. Use PROACTIVELY for question-based content.", + "description": "- Working on seo snippet hunter tasks or workflows - Needing guidance, best practices, or checklists for seo snippet hunter", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7884,7 +7994,7 @@ "path": "skills/seo-structure-architect", "category": "uncategorized", "name": "seo-structure-architect", - "description": "Analyzes and optimizes content structure including header hierarchy, suggests schema markup, and internal linking opportunities. Creates search-friendly content organization.", + "description": "- Working on seo structure architect tasks or workflows - Needing guidance, best practices, or checklists for seo structure architect", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -7984,7 +8094,7 @@ "path": "skills/shopify-development", "category": "uncategorized", "name": "shopify-development", - "description": "Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid.", + "description": "Use this skill when the user asks about:", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8174,7 +8284,7 @@ "path": "skills/sql-pro", "category": "uncategorized", "name": "sql-pro", - "description": "Master modern SQL with cloud-native databases, OLTP/OLAP optimization, and advanced query techniques. Expert in performance tuning, data modeling, and hybrid analytical systems.", + "description": "You are an expert SQL specialist mastering modern database systems, performance optimization, and advanced analytical techniques across cloud-native and hybrid OLTP/OLAP environments.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8214,7 +8324,7 @@ "path": "skills/startup-analyst", "category": "uncategorized", "name": "startup-analyst", - "description": "Expert startup business analyst specializing in market sizing, financial modeling, competitive analysis, and strategic planning for early-stage companies.", + "description": "- Working on startup analyst tasks or workflows - Needing guidance, best practices, or checklists for startup analyst", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8224,7 +8334,7 @@ "path": "skills/startup-business-analyst-business-case", "category": "uncategorized", "name": "startup-business-analyst-business-case", - "description": "Generate comprehensive investor-ready business case document with\nmarket, solution, financials, and strategy\n", + "description": "Generate a comprehensive, investor-ready business case document covering market opportunity, solution, competitive landscape, financial projections, team, risks, and funding ask for startup fundraising and strategic planning.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8234,7 +8344,7 @@ "path": "skills/startup-business-analyst-financial-projections", "category": "uncategorized", "name": "startup-business-analyst-financial-projections", - "description": "Create detailed 3-5 year financial model with revenue, costs, cash\nflow, and scenarios\n", + "description": "Create a comprehensive 3-5 year financial model with revenue projections, cost structure, headcount planning, cash flow analysis, and three-scenario modeling (conservative, base, optimistic) for startup financial planning and fundraising.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8244,7 +8354,7 @@ "path": "skills/startup-business-analyst-market-opportunity", "category": "uncategorized", "name": "startup-business-analyst-market-opportunity", - "description": "Generate comprehensive market opportunity analysis with TAM/SAM/SOM\ncalculations\n", + "description": "Generate a comprehensive market opportunity analysis for a startup, including Total Addressable Market (TAM), Serviceable Available Market (SAM), and Serviceable Obtainable Market (SOM) calculations using both bottom-up and top-down methodologies.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8254,7 +8364,7 @@ "path": "skills/startup-financial-modeling", "category": "uncategorized", "name": "startup-financial-modeling", - "description": "This skill should be used when the user asks to \\\\\\\"create financial projections\", \"build a financial model\", \"forecast revenue\", \"calculate burn rate\", \"estimate runway\", \"model cash flow\", or...", + "description": "Build comprehensive 3-5 year financial models with revenue projections, cost structures, cash flow analysis, and scenario planning for early-stage startups.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8264,7 +8374,7 @@ "path": "skills/startup-metrics-framework", "category": "uncategorized", "name": "startup-metrics-framework", - "description": "This skill should be used when the user asks about \\\\\\\"key startup metrics\", \"SaaS metrics\", \"CAC and LTV\", \"unit economics\", \"burn multiple\", \"rule of 40\", \"marketplace metrics\", or requests...", + "description": "Comprehensive guide to tracking, calculating, and optimizing key performance metrics for different startup business models from seed through Series A.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8404,7 +8514,7 @@ "path": "skills/tdd-orchestrator", "category": "uncategorized", "name": "tdd-orchestrator", - "description": "Master TDD orchestrator specializing in red-green-refactor discipline, multi-agent workflow coordination, and comprehensive test-driven development practices.", + "description": "- Working on tdd orchestrator tasks or workflows - Needing guidance, best practices, or checklists for tdd orchestrator", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8484,7 +8594,7 @@ "path": "skills/team-composition-analysis", "category": "uncategorized", "name": "team-composition-analysis", - "description": "This skill should be used when the user asks to \\\\\\\"plan team structure\", \"determine hiring needs\", \"design org chart\", \"calculate compensation\", \"plan equity allocation\", or requests...", + "description": "Design optimal team structures, hiring plans, compensation strategies, and equity allocation for early-stage startups from pre-seed through Series A.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8544,7 +8654,7 @@ "path": "skills/temporal-python-pro", "category": "uncategorized", "name": "temporal-python-pro", - "description": "Master Temporal workflow orchestration with Python SDK. Implements durable workflows, saga patterns, and distributed transactions. Covers async/await, testing strategies, and production deployment.", + "description": "- Working on temporal python pro tasks or workflows - Needing guidance, best practices, or checklists for temporal python pro", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8604,7 +8714,7 @@ "path": "skills/terraform-specialist", "category": "uncategorized", "name": "terraform-specialist", - "description": "Expert Terraform/OpenTofu specialist mastering advanced IaC automation, state management, and enterprise infrastructure patterns.", + "description": "You are a Terraform/OpenTofu specialist focused on advanced infrastructure automation, state management, and modern IaC practices.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8614,7 +8724,7 @@ "path": "skills/test-automator", "category": "uncategorized", "name": "test-automator", - "description": "Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with advanced CI/CD integration.", + "description": "- Working on test automator tasks or workflows - Needing guidance, best practices, or checklists for test automator", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8744,7 +8854,7 @@ "path": "skills/track-management", "category": "uncategorized", "name": "track-management", - "description": "Use this skill when creating, managing, or working with Conductor tracks - the logical work units for features, bugs, and refactors. Applies to spec.md, plan.md, and track lifecycle operations.", + "description": "Guide for creating, managing, and completing Conductor tracks - the logical work units that organize features, bugs, and refactors through specification, planning, and implementation phases.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8784,7 +8894,7 @@ "path": "skills/tutorial-engineer", "category": "uncategorized", "name": "tutorial-engineer", - "description": "Creates step-by-step tutorials and educational content from code. Transforms complex concepts into progressive learning experiences with hands-on examples.", + "description": "- Working on tutorial engineer tasks or workflows - Needing guidance, best practices, or checklists for tutorial engineer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8824,7 +8934,7 @@ "path": "skills/typescript-expert", "category": "framework", "name": "typescript-expert", - "description": "TypeScript and JavaScript expert with deep knowledge of type-level programming, performance optimization, monorepo management, migration strategies, and modern tooling.", + "description": "You are an advanced TypeScript expert with deep, practical knowledge of type-level programming, performance optimization, and real-world problem solving based on current best practices.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8834,7 +8944,7 @@ "path": "skills/typescript-pro", "category": "uncategorized", "name": "typescript-pro", - "description": "Master TypeScript with advanced types, generics, and strict type safety. Handles complex type systems, decorators, and enterprise-grade patterns.", + "description": "You are a TypeScript expert specializing in advanced typing and enterprise-grade development.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8854,7 +8964,7 @@ "path": "skills/ui-ux-designer", "category": "uncategorized", "name": "ui-ux-designer", - "description": "Create interface designs, wireframes, and design systems. Masters user research, accessibility standards, and modern design tools.", + "description": "- Working on ui ux designer tasks or workflows - Needing guidance, best practices, or checklists for ui ux designer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8874,7 +8984,7 @@ "path": "skills/ui-visual-validator", "category": "uncategorized", "name": "ui-visual-validator", - "description": "Rigorous visual validation expert specializing in UI testing, design system compliance, and accessibility verification.", + "description": "- Working on ui visual validator tasks or workflows - Needing guidance, best practices, or checklists for ui visual validator", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -8894,7 +9004,7 @@ "path": "skills/unity-developer", "category": "uncategorized", "name": "unity-developer", - "description": "Build Unity games with optimized C# scripts, efficient rendering, and proper asset management. Masters Unity 6 LTS, URP/HDRP pipelines, and cross-platform deployment.", + "description": "- Working on unity developer tasks or workflows - Needing guidance, best practices, or checklists for unity developer", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -9067,7 +9177,7 @@ "description": "Audit rapidly generated or AI-produced code for structural flaws, fragility, and production risks.", "risk": "safe", "source": "original", - "date_added": "2026-02-28" + "date_added": null }, { "id": "videodb-skills", @@ -9404,7 +9514,7 @@ "path": "skills/workflow-patterns", "category": "uncategorized", "name": "workflow-patterns", - "description": "Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding the verification protocol.", + "description": "Guide for implementing tasks using Conductor's TDD workflow, managing phase checkpoints, handling git commits, and executing the verification protocol that ensures quality throughout implementation.", "risk": "unknown", "source": "community", "date_added": "2026-02-27" @@ -9459,16 +9569,6 @@ "source": "https://github.com/wshuyi/x-article-publisher-skill", "date_added": "2026-02-27" }, - { - "id": "x-twitter-scraper", - "path": "skills/x-twitter-scraper", - "category": "data", - "name": "x-twitter-scraper", - "description": "X (Twitter) data platform skill \u2014 tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, MCP server.", - "risk": "safe", - "source": "community", - "date_added": "2026-02-28" - }, { "id": "xlsx-official", "path": "skills/xlsx-official",