diff --git a/CATALOG.md b/CATALOG.md index ad8a24da..55cd66e3 100644 --- a/CATALOG.md +++ b/CATALOG.md @@ -2,7 +2,7 @@ Generated at: 2026-02-08T00:00:00.000Z -Total skills: 845 +Total skills: 856 ## architecture (64) @@ -116,7 +116,7 @@ Total skills: 845 | `team-composition-analysis` | This skill should be used when the user asks to "plan team structure", "determine hiring needs", "design org chart", "calculate compensation", "plan equity a... | team, composition | team, composition, analysis, skill, should, used, user, asks, plan, structure, determine, hiring | | `whatsapp-automation` | Automate WhatsApp Business tasks via Rube MCP (Composio): send messages, manage templates, upload media, and handle contacts. Always search tools first for c... | whatsapp | whatsapp, automation, automate, business, tasks, via, rube, mcp, composio, send, messages, upload | -## data-ai (153) +## data-ai (159) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -134,6 +134,9 @@ Use when creating hosted agents that run custom code i... | agents, v2, py | age | `audio-transcriber` | Transform audio recordings into professional Markdown documentation with intelligent summaries using LLM integration | audio, transcription, whisper, meeting-minutes, speech-to-text | audio, transcription, whisper, meeting-minutes, speech-to-text, transcriber, transform, recordings, professional, markdown, documentation, intelligent | | `autonomous-agent-patterns` | Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use ... | autonomous, agent | autonomous, agent, building, coding, agents, covers, integration, permission, browser, automation, human, loop | | `autonomous-agents` | Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The c... 
| autonomous, agents | autonomous, agents, ai, independently, decompose, goals, plan, actions, execute, self, correct, without | +| `azure-ai-agents-persistent-dotnet` | Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Use for agent CRUD, conve... | azure, ai, agents, persistent, dotnet | azure, ai, agents, persistent, dotnet, sdk, net, low, level, creating, managing, threads | +| `azure-ai-agents-persistent-java` | Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. +Triggers: "PersistentAgen... | azure, ai, agents, persistent, java | azure, ai, agents, persistent, java, sdk, low, level, creating, managing, threads, messages | | `azure-ai-contentsafety-java` | Build content moderation applications with Azure AI Content Safety SDK for Java. Use when implementing text/image analysis, blocklist management, or harm det... | azure, ai, contentsafety, java | azure, ai, contentsafety, java, content, moderation, applications, safety, sdk, implementing, text, image | | `azure-ai-contentsafety-py` | Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification. Triggers: "azure-ai-contents... | azure, ai, contentsafety, py | azure, ai, contentsafety, py, content, safety, sdk, python, detecting, harmful, text, images | @@ -240,6 +243,8 @@ Triggers: "data lake", "Da... | azure, storage, file, datalake, py | azure, stor | `google-analytics-automation` | Automate Google Analytics tasks via Rube MCP (Composio): run reports, list accounts/properties, funnels, pivots, key events. Always search tools first for cu... 
| google, analytics | google, analytics, automation, automate, tasks, via, rube, mcp, composio, run, reports, list | | `googlesheets-automation` | Automate Google Sheets operations (read, write, format, filter, manage spreadsheets) via Rube MCP (Composio). Read/write data, manage tabs, apply formatting,... | googlesheets | googlesheets, automation, automate, google, sheets, operations, read, write, format, filter, spreadsheets, via | | `graphql` | GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful al... | graphql | graphql, gives, clients, exactly, data, no, less, one, endpoint, typed, schema, introspection | +| `hosted-agents-v2-py` | Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. +Use when creating container-based agents that run custom code in Azure ... | hosted, agents, v2, py | hosted, agents, v2, py, azure, ai, sdk, imagebasedhostedagentdefinition, creating, container, run, custom | | `hybrid-search-implementation` | Combine vector and keyword search for improved retrieval. Use when implementing RAG systems, building search engines, or when neither approach alone provides... | hybrid, search | hybrid, search, combine, vector, keyword, improved, retrieval, implementing, rag, building, engines, neither | | `ios-developer` | Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization. Use PROACT... | ios | ios, developer, develop, native, applications, swift, swiftui, masters, 18, uikit, integration, core | | `langchain-architecture` | Design LLM applications using the LangChain framework with agents, memory, and tool integration patterns. Use when building LangChain applications, implement... 
| langchain, architecture | langchain, architecture, llm, applications, framework, agents, memory, integration, building, implementing, ai, creating | @@ -255,12 +260,14 @@ Triggers: "data lake", "Da... | azure, storage, file, datalake, py | azure, stor | `nextjs-best-practices` | Next.js App Router principles. Server Components, data fetching, routing patterns. | nextjs, best, practices | nextjs, best, practices, next, js, app, router, principles, server, components, data, fetching | | `nodejs-backend-patterns` | Build production-ready Node.js backend services with Express/Fastify, implementing middleware patterns, error handling, authentication, database integration,... | nodejs, backend | nodejs, backend, node, js, express, fastify, implementing, middleware, error, handling, authentication, database | | `php-pro` | Write idiomatic PHP code with generators, iterators, SPL data structures, and modern OOP features. Use PROACTIVELY for high-performance PHP applications. | php | php, pro, write, idiomatic, code, generators, iterators, spl, data, structures, oop, features | +| `podcast-generation` | Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, aud... | podcast, generation | podcast, generation, generate, ai, powered, style, audio, narratives, azure, openai, gpt, realtime | | `postgres-best-practices` | Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, o... | postgres, best, practices | postgres, best, practices, supabase, performance, optimization, skill, writing, reviewing, optimizing, queries, schema | | `postgresql` | Design a PostgreSQL-specific schema. 
Covers best-practices, data types, indexing, constraints, performance patterns, and advanced features | postgresql | postgresql, specific, schema, covers, data, types, indexing, constraints, performance, features | | `prisma-expert` | Prisma ORM expert for schema design, migrations, query optimization, relations modeling, and database operations. Use PROACTIVELY for Prisma schema issues, m... | prisma | prisma, orm, schema, migrations, query, optimization, relations, modeling, database, operations, proactively, issues | | `programmatic-seo` | Design and evaluate programmatic SEO strategies for creating SEO-driven pages at scale using templates and structured data. Use when the user mentions progra... | programmatic, seo | programmatic, seo, evaluate, creating, driven, pages, scale, structured, data, user, mentions, directory | | `prompt-caching` | Caching strategies for LLM prompts including Anthropic prompt caching, response caching, and CAG (Cache Augmented Generation) Use when: prompt caching, cache... | prompt, caching | prompt, caching, llm, prompts, including, anthropic, response, cag, cache, augmented, generation | | `prompt-engineering-patterns` | Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, impro... | prompt, engineering | prompt, engineering, techniques, maximize, llm, performance, reliability, controllability, optimizing, prompts, improving, outputs | +| `pydantic-models-py` | Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. Use when defining API request/response schem... | pydantic, models, py | pydantic, models, py, following, multi, model, base, update, response, indb, variants, defining | | `rag-engineer` | Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LL... 
| rag | rag, engineer, building, retrieval, augmented, generation, masters, embedding, models, vector, databases, chunking | | `rag-implementation` | Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded A... | rag | rag, retrieval, augmented, generation, llm, applications, vector, databases, semantic, search, implementing, knowledge | | `react-best-practices` | React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.j... | react, best, practices | react, best, practices, vercel, next, js, performance, optimization, guidelines, engineering, skill, should | @@ -272,6 +279,7 @@ Triggers: "data lake", "Da... | azure, storage, file, datalake, py | azure, stor | `senior-architect` | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, F... | senior | senior, architect, software, architecture, skill, designing, scalable, maintainable, reactjs, nextjs, nodejs, express | | `seo-audit` | Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance. Use when the user asks for an SEO audit, technical SEO r... | seo, audit | seo, audit, diagnose, issues, affecting, crawlability, indexation, rankings, organic, performance, user, asks | | `similarity-search-patterns` | Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieva... | similarity, search | similarity, search, efficient, vector, databases, building, semantic, implementing, nearest, neighbor, queries, optimizing | +| `skill-creator-ms` | Guide for creating effective skills for AI coding agents working with Azure SDKs and Microsoft Foundry services. 
Use when creating new skills or updating exi... | skill, creator, ms | skill, creator, ms, creating, effective, skills, ai, coding, agents, working, azure, sdks | | `skill-seekers` | -Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes. | skill, seekers | skill, seekers, automatically, convert, documentation, websites, github, repositories, pdfs, claude, ai, skills | | `spark-optimization` | Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or... | spark, optimization | spark, optimization, optimize, apache, jobs, partitioning, caching, shuffle, memory, tuning, improving, performance | | `sql-optimization-patterns` | Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when de... | sql, optimization | sql, optimization, query, indexing, explain, analysis, dramatically, improve, database, performance, eliminate, slow | @@ -292,7 +300,7 @@ Triggers: "data lake", "Da... | azure, storage, file, datalake, py | azure, stor | `xlsx-official` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx, official | xlsx, official, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work | | `youtube-automation` | Automate YouTube tasks via Rube MCP (Composio): upload videos, manage playlists, search content, get analytics, and handle comments. Always search tools firs... | youtube | youtube, automation, automate, tasks, via, rube, mcp, composio, upload, videos, playlists, search | -## development (124) +## development (127) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -364,12 +372,14 @@ Triggers: "queue storage", "QueueServic... 
| azure, storage, queue, py | azure, | `cc-skill-coding-standards` | Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development. | cc, skill, coding, standards | cc, skill, coding, standards, universal, typescript, javascript, react, node, js, development | | `cc-skill-frontend-patterns` | Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices. | cc, skill, frontend | cc, skill, frontend, development, react, next, js, state, performance, optimization, ui | | `context7-auto-research` | Automatically fetch latest library/framework documentation for Claude Code via Context7 API | context7, auto, research | context7, auto, research, automatically, fetch, latest, library, framework, documentation, claude, code, via | +| `copilot-sdk` | Build applications powered by GitHub Copilot using the Copilot SDK. Use when creating programmatic integrations with Copilot across Node.js/TypeScript, Pytho... | copilot, sdk | copilot, sdk, applications, powered, github, creating, programmatic, integrations, node, js, typescript, python | | `csharp-pro` | Write modern C# code with advanced features like records, pattern matching, and async/await. Optimizes .NET applications, implements enterprise patterns, and... | csharp | csharp, pro, write, code, features, like, records, matching, async, await, optimizes, net | | `discord-bot-architect` | Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactiv... | discord, bot | discord, bot, architect, specialized, skill, building, bots, covers, js, javascript, pycord, python | | `dotnet-architect` | Expert .NET backend architect specializing in C#, ASP.NET Core, Entity Framework, Dapper, and enterprise application patterns. Masters async/await, dependenc... 
| dotnet | dotnet, architect, net, backend, specializing, asp, core, entity, framework, dapper, enterprise, application | | `dotnet-backend-patterns` | Master C#/.NET backend development patterns for building robust APIs, MCP servers, and enterprise applications. Covers async/await, dependency injection, Ent... | dotnet, backend | dotnet, backend, net, development, building, robust, apis, mcp, servers, enterprise, applications, covers | | `exa-search` | Semantic search, similar content discovery, and structured research using Exa API | exa, search | exa, search, semantic, similar, content, discovery, structured, research, api | | `fastapi-pro` | Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns. Use PROA... | fastapi | fastapi, pro, high, performance, async, apis, sqlalchemy, pydantic, v2, microservices, websockets, python | +| `fastapi-router-py` | Create FastAPI routers with CRUD operations, authentication dependencies, and proper response models. Use when building REST API endpoints, creating new rout... | fastapi, router, py | fastapi, router, py, routers, crud, operations, authentication, dependencies, proper, response, models, building | | `fastapi-templates` | Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applicati... | fastapi | fastapi, async, dependency, injection, error, handling, building, new, applications, setting, up, backend | | `firecrawl-scraper` | Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API | firecrawl, scraper | firecrawl, scraper, deep, web, scraping, screenshots, pdf, parsing, website, crawling, api | | `fp-ts-errors` | Handle errors as values using fp-ts Either and TaskEither for cleaner, more predictable TypeScript code. Use when implementing error handling patterns with f... 
| fp, ts, errors | fp, ts, errors, handle, values, either, taskeither, cleaner, predictable, typescript, code, implementing | @@ -392,6 +402,7 @@ Triggers: "queue storage", "QueueServic... | azure, storage, queue, py | azure, | `m365-agents-ts` | Microsoft 365 Agents SDK for TypeScript/Node.js. Build multichannel agents for Teams/M365/Copilot Studio with AgentApplication routing, Express hosting, stre... | m365, agents, ts | m365, agents, ts, microsoft, 365, sdk, typescript, node, js, multichannel, teams, copilot | | `makepad-skills` | Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting. | makepad, skills | makepad, skills, ui, development, rust, apps, setup, shaders, packaging, troubleshooting | | `mcp-builder` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder | mcp, builder, creating, high, quality, model, context, protocol, servers, enable, llms, interact | +| `mcp-builder-ms` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder, ms | mcp, builder, ms, creating, high, quality, model, context, protocol, servers, enable, llms | | `memory-safety-patterns` | Implement memory-safe programming with RAII, ownership, smart pointers, and resource management across Rust, C++, and C. Use when writing safe systems code, ... | memory, safety | memory, safety, safe, programming, raii, ownership, smart, pointers, resource, rust, writing, code | | `microsoft-azure-webjobs-extensions-authentication-events-dotnet` | Microsoft Entra Authentication Events SDK for .NET. Azure Functions triggers for custom authentication extensions. Use for token enrichment, custom claims, a... 
| microsoft, azure, webjobs, extensions, authentication, events, dotnet | microsoft, azure, webjobs, extensions, authentication, events, dotnet, entra, sdk, net, functions, triggers | | `mobile-design` | Mobile-first design and engineering doctrine for iOS and Android apps. Covers touch interaction, performance, platform conventions, offline behavior, and mob... | mobile | mobile, first, engineering, doctrine, ios, android, apps, covers, touch, interaction, performance, platform | @@ -439,7 +450,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify, | `webapp-testing` | Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing... | webapp | webapp, testing, toolkit, interacting, local, web, applications, playwright, supports, verifying, frontend, functionality | | `zustand-store-ts` | Create Zustand stores with TypeScript, subscribeWithSelector middleware, and proper state/action separation. Use when building React state management, creati... | zustand, store, ts | zustand, store, ts, stores, typescript, subscribewithselector, middleware, proper, state, action, separation, building | -## general (134) +## general (135) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -517,6 +528,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify, | `git-advanced-workflows` | Master advanced Git workflows including rebasing, cherry-picking, bisect, worktrees, and reflog to maintain clean history and recover from any situation. Use... | git, advanced | git, advanced, including, rebasing, cherry, picking, bisect, worktrees, reflog, maintain, clean, history | | `git-pr-workflows-onboard` | You are an **expert onboarding specialist and knowledge transfer architect** with deep experience in remote-first organizations, technical team integration, ... 
| git, pr, onboard | git, pr, onboard, onboarding, knowledge, transfer, architect, deep, experience, remote, first, organizations | | `git-pr-workflows-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | git, pr, enhance | git, pr, enhance, optimization, specializing, creating, high, quality, pull, requests, facilitate, efficient | +| `github-issue-creator` | Convert raw notes, error logs, voice dictation, or screenshots into crisp GitHub-flavored markdown issue reports. Use when the user pastes bug info, error me... | github, issue, creator | github, issue, creator, convert, raw, notes, error, logs, voice, dictation, screenshots, crisp | | `imagen` | | imagen | imagen | | `infinite-gratitude` | Multi-agent research skill for parallel research execution (10 agents, battle-tested with real case studies). | infinite, gratitude | infinite, gratitude, multi, agent, research, skill, parallel, execution, 10, agents, battle, tested | | `interactive-portfolio` | Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, d... | interactive, portfolio | interactive, portfolio, building, portfolios, actually, land, jobs, clients, just, showing, work, creating | @@ -578,7 +590,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... 
| shopify | shopify, | `x-article-publisher-skill` | Publish articles to X/Twitter | x, article, publisher, skill | x, article, publisher, skill, publish, articles, twitter | | `youtube-summarizer` | Extract transcripts from YouTube videos and generate comprehensive, detailed summaries using intelligent analysis frameworks | video, summarization, transcription, youtube, content-analysis | video, summarization, transcription, youtube, content-analysis, summarizer, extract, transcripts, videos, generate, detailed, summaries | -## infrastructure (101) +## infrastructure (102) | Skill | Description | Tags | Triggers | | --- | --- | --- | --- | @@ -588,6 +600,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify, | `application-performance-performance-optimization` | Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across... | application, performance, optimization | application, performance, optimization, optimize, profiling, observability, backend, frontend, tuning, coordinating, stack | | `aws-serverless` | Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns... | aws, serverless | aws, serverless, specialized, skill, building, applications, covers, lambda, functions, api, gateway, dynamodb | | `aws-skills` | AWS development with infrastructure automation and cloud architecture patterns | aws, skills | aws, skills, development, infrastructure, automation, cloud, architecture | +| `azd-deployment` | Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration... 
| azd, deployment | azd, deployment, deploy, containerized, applications, azure, container, apps, developer, cli, setting, up | | `azure-ai-anomalydetector-java` | Build anomaly detection applications with Azure AI Anomaly Detector SDK for Java. Use when implementing univariate/multivariate anomaly detection, time-serie... | azure, ai, anomalydetector, java | azure, ai, anomalydetector, java, anomaly, detection, applications, detector, sdk, implementing, univariate, multivariate | | `azure-identity-java` | Azure Identity Java SDK for authentication with Azure services. Use when implementing DefaultAzureCredential, managed identity, service principal, or any Azu... | azure, identity, java | azure, identity, java, sdk, authentication, implementing, defaultazurecredential, managed, principal, any, applications | | `azure-identity-py` | Azure Identity SDK for Python authentication. Use for DefaultAzureCredential, managed identity, service principals, and token caching. diff --git a/README.md b/README.md index a842111d..5603ea2b 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ -# 🌌 Antigravity Awesome Skills: 845+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More +# 🌌 Antigravity Awesome Skills: 856+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More -> **The Ultimate Collection of 845+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL** +> **The Ultimate Collection of 856+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL** [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Claude Code](https://img.shields.io/badge/Claude%20Code-Anthropic-purple)](https://claude.ai) @@ -16,7 +16,7 @@ If this project helps you, you can [support it here](https://buymeacoffee.com/sickn33) or simply ⭐
the repo. -**Antigravity Awesome Skills** is a curated, battle-tested library of **845 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants: +**Antigravity Awesome Skills** is a curated, battle-tested library of **856 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants: - 🟣 **Claude Code** (Anthropic CLI) - 🔵 **Gemini CLI** (Google DeepMind) @@ -38,7 +38,7 @@ This repository provides essential skills to transform your AI assistant into a - [🎁 Curated Collections (Bundles)](#curated-collections) - [🧭 Antigravity Workflows](#antigravity-workflows) - [📦 Features & Categories](#features--categories) -- [📚 Browse 845+ Skills](#browse-845-skills) +- [📚 Browse 856+ Skills](#browse-856-skills) - [🤝 How to Contribute](#how-to-contribute) - [🤝 Community](#community) - [☕ Support the Project](#support-the-project) @@ -280,7 +280,7 @@ The repository is organized into specialized domains to transform your AI into a Counts change as new skills are added. For the current full registry, see [CATALOG.md](CATALOG.md). -## Browse 845+ Skills +## Browse 856+ Skills We have moved the full skill registry to a dedicated catalog to keep this README clean.
diff --git a/data/aliases.json b/data/aliases.json index be8ae269..6cefe6e2 100644 --- a/data/aliases.json +++ b/data/aliases.json @@ -10,9 +10,9 @@ "templates": "app-builder/templates", "application-performance-optimization": "application-performance-performance-optimization", "aws penetration testing": "aws-penetration-testing", - "azure-ai-java": "azure-ai-anomalydetector-java", + "azure-ai-dotnet": "azure-ai-agents-persistent-dotnet", + "azure-ai-java": "azure-ai-agents-persistent-java", "azure-ai-py": "azure-ai-contentunderstanding-py", - "azure-ai-dotnet": "azure-ai-document-intelligence-dotnet", "azure-ai-ts": "azure-ai-document-intelligence-ts", "azure-communication-java": "azure-communication-callautomation-java", "azure-keyvault-rust": "azure-keyvault-certificates-rust", diff --git a/data/bundles.json b/data/bundles.json index 87f02ed6..24cc8558 100644 --- a/data/bundles.json +++ b/data/bundles.json @@ -20,6 +20,7 @@ "async-python-patterns", "autonomous-agents", "aws-serverless", + "azure-ai-agents-persistent-java", "azure-ai-anomalydetector-java", "azure-ai-contentsafety-java", "azure-ai-contentsafety-py", @@ -117,6 +118,7 @@ "claude-d3js-skill", "code-documentation-doc-generate", "context7-auto-research", + "copilot-sdk", "discord-bot-architect", "django-pro", "documentation-generation-doc-generate", @@ -126,6 +128,7 @@ "dotnet-backend-patterns", "exa-search", "fastapi-pro", + "fastapi-router-py", "fastapi-templates", "firebase", "firecrawl-scraper", @@ -161,6 +164,7 @@ "m365-agents-ts", "makepad-skills", "mcp-builder", + "mcp-builder-ms", "memory-safety-patterns", "mobile-design", "mobile-developer", @@ -179,7 +183,9 @@ "openapi-spec-generation", "php-pro", "plaid-fintech", + "podcast-generation", "product-manager-toolkit", + "pydantic-models-py", "python-development-python-scaffold", "python-packaging", "python-patterns", @@ -339,6 +345,7 @@ "k8s-core": { "description": "Kubernetes and service mesh essentials.", "skills": [ + "azd-deployment", 
"azure-cosmos-db-py", "azure-identity-dotnet", "azure-identity-java", @@ -456,6 +463,7 @@ "postgresql", "prisma-expert", "programmatic-seo", + "pydantic-models-py", "quant-analyst", "react-best-practices", "react-ui-patterns", @@ -486,6 +494,7 @@ "api-testing-observability-api-mock", "application-performance-performance-optimization", "aws-serverless", + "azd-deployment", "azure-ai-anomalydetector-java", "azure-mgmt-applicationinsights-dotnet", "azure-mgmt-arizeaiobservabilityeval-dotnet", diff --git a/data/catalog.json b/data/catalog.json index 783abe33..5c98eb37 100644 --- a/data/catalog.json +++ b/data/catalog.json @@ -1,6 +1,6 @@ { "generatedAt": "2026-02-08T00:00:00.000Z", - "total": 845, + "total": 856, "skills": [ { "id": "3d-web-experience", @@ -1550,6 +1550,87 @@ ], "path": "skills/aws-skills/SKILL.md" }, + { + "id": "azd-deployment", + "name": "azd-deployment", + "description": "Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration, creating Bicep infrastructure for Container Apps, configuring remote builds with ACR, implementing idempotent deployments, managing environment variables across local/.azure/Bicep, or troubleshooting azd up failures. Triggers on requests for azd configuration, Container Apps deployment, multi-service deployments, and infrastructure-as-code with Bicep.", + "category": "infrastructure", + "tags": [ + "azd", + "deployment" + ], + "triggers": [ + "azd", + "deployment", + "deploy", + "containerized", + "applications", + "azure", + "container", + "apps", + "developer", + "cli", + "setting", + "up" + ], + "path": "skills/azd-deployment/SKILL.md" + }, + { + "id": "azure-ai-agents-persistent-dotnet", + "name": "azure-ai-agents-persistent-dotnet", + "description": "Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. 
Use for agent CRUD, conversation threads, streaming responses, function calling, file search, and code interpreter. Triggers: \"PersistentAgentsClient\", \"persistent agents\", \"agent threads\", \"agent runs\", \"streaming agents\", \"function calling agents .NET\".", + "category": "data-ai", + "tags": [ + "azure", + "ai", + "agents", + "persistent", + "dotnet" + ], + "triggers": [ + "azure", + "ai", + "agents", + "persistent", + "dotnet", + "sdk", + "net", + "low", + "level", + "creating", + "managing", + "threads" + ], + "path": "skills/azure-ai-agents-persistent-dotnet/SKILL.md" + }, + { + "id": "azure-ai-agents-persistent-java", + "name": "azure-ai-agents-persistent-java", + "description": "Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.\nTriggers: \"PersistentAgentsClient\", \"persistent agents java\", \"agent threads java\", \"agent runs java\", \"streaming agents java\".", + "category": "data-ai", + "tags": [ + "azure", + "ai", + "agents", + "persistent", + "java" + ], + "triggers": [ + "azure", + "ai", + "agents", + "persistent", + "java", + "sdk", + "low", + "level", + "creating", + "managing", + "threads", + "messages" + ], + "path": "skills/azure-ai-agents-persistent-java/SKILL.md" + }, { "id": "azure-ai-anomalydetector-java", "name": "azure-ai-anomalydetector-java", @@ -7259,6 +7340,31 @@ ], "path": "skills/convertkit-automation/SKILL.md" }, + { + "id": "copilot-sdk", + "name": "copilot-sdk", + "description": "Build applications powered by GitHub Copilot using the Copilot SDK. Use when creating programmatic integrations with Copilot across Node.js/TypeScript, Python, Go, or .NET. Covers session management, custom tools, streaming, hooks, MCP servers, BYOK providers, session persistence, and custom agents. 
Requires GitHub Copilot CLI installed and a GitHub Copilot subscription (unless using BYOK).", + "category": "development", + "tags": [ + "copilot", + "sdk" + ], + "triggers": [ + "copilot", + "sdk", + "applications", + "powered", + "github", + "creating", + "programmatic", + "integrations", + "node", + "js", + "typescript", + "python" + ], + "path": "skills/copilot-sdk/SKILL.md" + }, { "id": "copy-editing", "name": "copy-editing", @@ -9387,6 +9493,32 @@ ], "path": "skills/fastapi-pro/SKILL.md" }, + { + "id": "fastapi-router-py", + "name": "fastapi-router-py", + "description": "Create FastAPI routers with CRUD operations, authentication dependencies, and proper response models. Use when building REST API endpoints, creating new routes, implementing CRUD operations, or adding authenticated endpoints in FastAPI applications.", + "category": "development", + "tags": [ + "fastapi", + "router", + "py" + ], + "triggers": [ + "fastapi", + "router", + "py", + "routers", + "crud", + "operations", + "authentication", + "dependencies", + "proper", + "response", + "models", + "building" + ], + "path": "skills/fastapi-router-py/SKILL.md" + }, { "id": "fastapi-templates", "name": "fastapi-templates", @@ -10716,6 +10848,32 @@ ], "path": "skills/github-automation/SKILL.md" }, + { + "id": "github-issue-creator", + "name": "github-issue-creator", + "description": "Convert raw notes, error logs, voice dictation, or screenshots into crisp GitHub-flavored markdown issue reports. Use when the user pastes bug info, error messages, or informal descriptions and wants a structured GitHub issue. 
Supports images/GIFs for visual evidence.", + "category": "general", + "tags": [ + "github", + "issue", + "creator" + ], + "triggers": [ + "github", + "issue", + "creator", + "convert", + "raw", + "notes", + "error", + "logs", + "voice", + "dictation", + "screenshots", + "crisp" + ], + "path": "skills/github-issue-creator/SKILL.md" + }, { "id": "github-workflow-automation", "name": "github-workflow-automation", @@ -11177,6 +11335,33 @@ ], "path": "skills/helpdesk-automation/SKILL.md" }, + { + "id": "hosted-agents-v2-py", + "name": "hosted-agents-v2-py", + "description": "Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition.\nUse when creating container-based agents that run custom code in Azure AI Foundry.\nTriggers: \"ImageBasedHostedAgentDefinition\", \"hosted agent\", \"container agent\", \n\"create_version\", \"ProtocolVersionRecord\", \"AgentProtocol.RESPONSES\".", + "category": "data-ai", + "tags": [ + "hosted", + "agents", + "v2", + "py" + ], + "triggers": [ + "hosted", + "agents", + "v2", + "py", + "azure", + "ai", + "sdk", + "imagebasedhostedagentdefinition", + "creating", + "container", + "run", + "custom" + ], + "path": "skills/hosted-agents-v2-py/SKILL.md" + }, { "id": "hr-pro", "name": "hr-pro", @@ -12915,6 +13100,32 @@ ], "path": "skills/mcp-builder/SKILL.md" }, + { + "id": "mcp-builder-ms", + "name": "mcp-builder", + "description": "Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. 
Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP), Node/TypeScript (MCP SDK), or C#/.NET (Microsoft MCP SDK).", + "category": "development", + "tags": [ + "mcp", + "builder", + "ms" + ], + "triggers": [ + "mcp", + "builder", + "ms", + "creating", + "high", + "quality", + "model", + "context", + "protocol", + "servers", + "enable", + "llms" + ], + "path": "skills/mcp-builder-ms/SKILL.md" + }, { "id": "memory-forensics", "name": "memory-forensics", @@ -14916,6 +15127,31 @@ ], "path": "skills/playwright-skill/SKILL.md" }, + { + "id": "podcast-generation", + "name": "podcast-generation", + "description": "Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, audio narrative generation, podcast creation from content, or integrating with Azure OpenAI Realtime API for real audio output. Covers full-stack implementation from React frontend to Python FastAPI backend with WebSocket streaming.", + "category": "data-ai", + "tags": [ + "podcast", + "generation" + ], + "triggers": [ + "podcast", + "generation", + "generate", + "ai", + "powered", + "style", + "audio", + "narratives", + "azure", + "openai", + "gpt", + "realtime" + ], + "path": "skills/podcast-generation/SKILL.md" + }, { "id": "popup-cro", "name": "popup-cro", @@ -15481,6 +15717,32 @@ ], "path": "skills/protocol-reverse-engineering/SKILL.md" }, + { + "id": "pydantic-models-py", + "name": "pydantic-models-py", + "description": "Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. 
Use when defining API request/response schemas, database models, or data validation in Python applications using Pydantic v2.", + "category": "data-ai", + "tags": [ + "pydantic", + "models", + "py" + ], + "triggers": [ + "pydantic", + "models", + "py", + "following", + "multi", + "model", + "base", + "update", + "response", + "indb", + "variants", + "defining" + ], + "path": "skills/pydantic-models-py/SKILL.md" + }, { "id": "pypict-skill", "name": "pypict-skill", @@ -17574,6 +17836,32 @@ ], "path": "skills/skill-creator/SKILL.md" }, + { + "id": "skill-creator-ms", + "name": "skill-creator", + "description": "Guide for creating effective skills for AI coding agents working with Azure SDKs and Microsoft Foundry services. Use when creating new skills or updating existing skills.", + "category": "data-ai", + "tags": [ + "skill", + "creator", + "ms" + ], + "triggers": [ + "skill", + "creator", + "ms", + "creating", + "effective", + "skills", + "ai", + "coding", + "agents", + "working", + "azure", + "sdks" + ], + "path": "skills/skill-creator-ms/SKILL.md" + }, { "id": "skill-developer", "name": "skill-developer", diff --git a/docs/microsoft-skills-attribution.json b/docs/microsoft-skills-attribution.json index 262a0dc6..e633a563 100644 --- a/docs/microsoft-skills-attribution.json +++ b/docs/microsoft-skills-attribution.json @@ -2,7 +2,7 @@ "source": "microsoft/skills", "repository": "https://github.com/microsoft/skills", "license": "MIT", - "synced_skills": 129, + "synced_skills": 140, "structure": "flat (frontmatter name as directory name)", "skills": [ { @@ -649,6 +649,61 @@ "flat_name": "wiki-changelog", "original_path": "plugins/wiki-changelog", "source": "microsoft/skills (plugin)" + }, + { + "flat_name": "fastapi-router-py", + "original_path": ".github/skills/fastapi-router-py", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "azd-deployment", + "original_path": ".github/skills/azd-deployment", + "source": "microsoft/skills (.github/skills)" 
+ }, + { + "flat_name": "copilot-sdk", + "original_path": ".github/skills/copilot-sdk", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "azure-ai-agents-persistent-dotnet", + "original_path": ".github/skills/azure-ai-agents-persistent-dotnet", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "hosted-agents-v2-py", + "original_path": ".github/skills/hosted-agents-v2-py", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "pydantic-models-py", + "original_path": ".github/skills/pydantic-models-py", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "skill-creator-ms", + "original_path": ".github/skills/skill-creator", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "podcast-generation", + "original_path": ".github/skills/podcast-generation", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "github-issue-creator", + "original_path": ".github/skills/github-issue-creator", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "azure-ai-agents-persistent-java", + "original_path": ".github/skills/azure-ai-agents-persistent-java", + "source": "microsoft/skills (.github/skills)" + }, + { + "flat_name": "mcp-builder-ms", + "original_path": ".github/skills/mcp-builder", + "source": "microsoft/skills (.github/skills)" } ] } \ No newline at end of file diff --git a/scripts/sync_microsoft_skills.py b/scripts/sync_microsoft_skills.py index 47cee38b..4b665f0a 100644 --- a/scripts/sync_microsoft_skills.py +++ b/scripts/sync_microsoft_skills.py @@ -158,11 +158,46 @@ def find_plugin_skills(source_dir: Path, already_synced_names: set): return results +def find_github_skills(source_dir: Path, already_synced_names: set): + """Find skills in .github/skills/ not reachable via the skills/ symlink tree.""" + results = [] + github_skills = source_dir / ".github" / "skills" + + if not github_skills.exists(): + return results + + for 
skill_dir in github_skills.iterdir(): + if not skill_dir.is_dir() or not (skill_dir / "SKILL.md").exists(): + continue + + if skill_dir.name not in already_synced_names: + results.append({ + "relative_path": Path(".github/skills") / skill_dir.name, + "skill_md": skill_dir / "SKILL.md", + "source_dir": skill_dir, + }) + + return results + + def sync_skills_flat(source_dir: Path, target_dir: Path): """ Sync all Microsoft skills into a flat structure under skills/. Uses frontmatter 'name' as directory name, with collision detection. + Protects existing non-Microsoft skills from being overwritten. """ + # Load previous attribution to know which dirs are Microsoft-owned + previously_synced_names = set() + if ATTRIBUTION_FILE.exists(): + try: + with open(ATTRIBUTION_FILE) as f: + prev = json.load(f) + previously_synced_names = { + s["flat_name"] for s in prev.get("skills", []) if s.get("flat_name") + } + except (json.JSONDecodeError, OSError): + pass + all_skill_entries = find_skills_in_directory(source_dir) print(f" πŸ“‚ Found {len(all_skill_entries)} skills in skills/ directory") @@ -179,16 +214,23 @@ def sync_skills_flat(source_dir: Path, target_dir: Path): print( f" ⚠️ No frontmatter name for {entry['relative_path']}, using fallback: {skill_name}") - # Collision detection + # Internal collision detection (two Microsoft skills with same name) if skill_name in used_names: original = used_names[skill_name] print( f" ⚠️ Name collision '{skill_name}': {entry['relative_path']} vs {original}") - # Append language prefix from path to disambiguate lang = entry["relative_path"].parts[0] if entry["relative_path"].parts else "unknown" skill_name = f"{skill_name}-{lang}" print(f" Resolved to: {skill_name}") + # Protect existing non-Microsoft skills from being overwritten + target_skill_dir = target_dir / skill_name + if target_skill_dir.exists() and skill_name not in previously_synced_names: + original_name = skill_name + skill_name = f"{skill_name}-ms" + print( + f" ⚠️ 
'{original_name}' exists as a non-Microsoft skill, using: {skill_name}")
+
     used_names[skill_name] = str(entry["relative_path"])
 
         # Create flat target directory
@@ -212,10 +254,13 @@ def sync_skills_flat(source_dir: Path, target_dir: Path):
         synced_count += 1
         print(f"   βœ… {entry['relative_path']} β†’ skills/{skill_name}/")
 
-    # Sync plugin skills
+    # Collect all source directory names already synced (for dedup)
     synced_names = set(used_names.keys())
-    plugin_entries = find_plugin_skills(
-        source_dir, {e["source_dir"].name for e in all_skill_entries})
+    already_synced_dir_names = {
+        e["source_dir"].name for e in all_skill_entries}
+
+    # Sync plugin skills from .github/plugins/
+    plugin_entries = find_plugin_skills(source_dir, already_synced_dir_names)
 
     if plugin_entries:
         print(f"\n  πŸ“¦ Found {len(plugin_entries)} additional plugin skills")
@@ -227,9 +272,18 @@
         if skill_name in synced_names:
             skill_name = f"{skill_name}-plugin"
 
-        synced_names.add(skill_name)
-
+        # Protect existing non-Microsoft skills
         target_skill_dir = target_dir / skill_name
+        if target_skill_dir.exists() and skill_name not in previously_synced_names:
+            original_name = skill_name
+            skill_name = f"{skill_name}-ms"
+            target_skill_dir = target_dir / skill_name
+            print(
+                f"   ⚠️ '{original_name}' exists as a non-Microsoft skill, using: {skill_name}")
+
+        synced_names.add(skill_name)
+        already_synced_dir_names.add(entry["source_dir"].name)
+
         target_skill_dir.mkdir(parents=True, exist_ok=True)
 
         shutil.copy2(entry["skill_md"], target_skill_dir / "SKILL.md")
@@ -247,6 +301,49 @@
         synced_count += 1
         print(f"   βœ… {entry['relative_path']} β†’ skills/{skill_name}/")
 
+    # Sync skills in .github/skills/ not reachable via the skills/ symlink tree
+    github_skill_entries = find_github_skills(
+        source_dir, already_synced_dir_names)
+
+    if github_skill_entries:
+        print(
+            f"\n  πŸ“‚ Found 
{len(github_skill_entries)} skills in .github/skills/ not linked from skills/") + for entry in github_skill_entries: + skill_name = extract_skill_name(entry["skill_md"]) + if not skill_name: + skill_name = entry["source_dir"].name + + if skill_name in synced_names: + skill_name = f"{skill_name}-github" + + # Protect existing non-Microsoft skills + target_skill_dir = target_dir / skill_name + if target_skill_dir.exists() and skill_name not in previously_synced_names: + original_name = skill_name + skill_name = f"{skill_name}-ms" + target_skill_dir = target_dir / skill_name + print( + f" ⚠️ '{original_name}' exists as a non-Microsoft skill, using: {skill_name}") + + synced_names.add(skill_name) + + target_skill_dir.mkdir(parents=True, exist_ok=True) + + shutil.copy2(entry["skill_md"], target_skill_dir / "SKILL.md") + + for file_item in entry["source_dir"].iterdir(): + if file_item.name != "SKILL.md" and file_item.is_file(): + shutil.copy2(file_item, target_skill_dir / file_item.name) + + skill_metadata.append({ + "flat_name": skill_name, + "original_path": str(entry["relative_path"]), + "source": "microsoft/skills (.github/skills)", + }) + + synced_count += 1 + print(f" βœ… {entry['relative_path']} β†’ skills/{skill_name}/") + return synced_count, skill_metadata diff --git a/skills/azd-deployment/SKILL.md b/skills/azd-deployment/SKILL.md new file mode 100644 index 00000000..88ead12f --- /dev/null +++ b/skills/azd-deployment/SKILL.md @@ -0,0 +1,296 @@ +--- +name: azd-deployment +description: Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration, creating Bicep infrastructure for Container Apps, configuring remote builds with ACR, implementing idempotent deployments, managing environment variables across local/.azure/Bicep, or troubleshooting azd up failures. 
Triggers on requests for azd configuration, Container Apps deployment, multi-service deployments, and infrastructure-as-code with Bicep.
+---
+
+# Azure Developer CLI (azd) Container Apps Deployment
+
+Deploy containerized frontend + backend applications to Azure Container Apps with remote builds, managed identity, and idempotent infrastructure.
+
+## Quick Start
+
+```bash
+# Initialize and deploy
+azd auth login
+azd init          # Creates azure.yaml and .azure/ folder
+azd env new       # Create environment (dev, staging, prod)
+azd up            # Provision infra + build + deploy
+```
+
+## Core File Structure
+
+```
+project/
+β”œβ”€β”€ azure.yaml                # azd service definitions + hooks
+β”œβ”€β”€ infra/
+β”‚   β”œβ”€β”€ main.bicep            # Root infrastructure module
+β”‚   β”œβ”€β”€ main.parameters.json  # Parameter injection from env vars
+β”‚   └── modules/
+β”‚       β”œβ”€β”€ container-apps-environment.bicep
+β”‚       └── container-app.bicep
+β”œβ”€β”€ .azure/
+β”‚   β”œβ”€β”€ config.json           # Default environment pointer
+β”‚   └── <environment-name>/
+β”‚       β”œβ”€β”€ .env              # Environment-specific values (azd-managed)
+β”‚       └── config.json       # Environment metadata
+└── src/
+    β”œβ”€β”€ frontend/Dockerfile
+    └── backend/Dockerfile
+```
+
+## azure.yaml Configuration
+
+### Minimal Configuration
+
+```yaml
+name: azd-deployment
+services:
+  backend:
+    project: ./src/backend
+    language: python
+    host: containerapp
+    docker:
+      path: ./Dockerfile
+      remoteBuild: true
+```
+
+### Full Configuration with Hooks
+
+```yaml
+name: azd-deployment
+metadata:
+  template: my-project@1.0.0
+
+infra:
+  provider: bicep
+  path: ./infra
+
+azure:
+  location: eastus2
+
+services:
+  frontend:
+    project: ./src/frontend
+    language: ts
+    host: containerapp
+    docker:
+      path: ./Dockerfile
+      context: .
+      remoteBuild: true
+
+  backend:
+    project: ./src/backend
+    language: python
+    host: containerapp
+    docker:
+      path: ./Dockerfile
+      context: .
+      remoteBuild: true
+
+hooks:
+  preprovision:
+    shell: sh
+    run: |
+      echo "Before provisioning..."
+
+  postprovision:
+    shell: sh
+    run: |
+      echo "After provisioning - set up RBAC, etc."
+
+  postdeploy:
+    shell: sh
+    run: |
+      echo "Frontend: ${SERVICE_FRONTEND_URI}"
+      echo "Backend: ${SERVICE_BACKEND_URI}"
+```
+
+### Key azure.yaml Options
+
+| Option | Description |
+|--------|-------------|
+| `remoteBuild: true` | Build images in Azure Container Registry (recommended) |
+| `context: .` | Docker build context relative to project path |
+| `host: containerapp` | Deploy to Azure Container Apps |
+| `infra.provider: bicep` | Use Bicep for infrastructure |
+
+## Environment Variables Flow
+
+### Three-Level Configuration
+
+1. **Local `.env`** - For local development only
+2. **`.azure/<env-name>/.env`** - azd-managed, auto-populated from Bicep outputs
+3. **`main.parameters.json`** - Maps env vars to Bicep parameters
+
+### Parameter Injection Pattern
+
+```json
+// infra/main.parameters.json
+{
+  "parameters": {
+    "environmentName": { "value": "${AZURE_ENV_NAME}" },
+    "location": { "value": "${AZURE_LOCATION=eastus2}" },
+    "azureOpenAiEndpoint": { "value": "${AZURE_OPENAI_ENDPOINT}" }
+  }
+}
+```
+
+Syntax: `${VAR_NAME}` or `${VAR_NAME=default_value}`
+
+### Setting Environment Variables
+
+```bash
+# Set for current environment
+azd env set AZURE_OPENAI_ENDPOINT "https://my-openai.openai.azure.com"
+azd env set AZURE_SEARCH_ENDPOINT "https://my-search.search.windows.net"
+
+# Set during init
+azd env new prod
+azd env set AZURE_OPENAI_ENDPOINT "..."
+```
+
+### Bicep Output β†’ Environment Variable
+
+```bicep
+// In main.bicep - outputs auto-populate .azure/<env-name>/.env
+output SERVICE_FRONTEND_URI string = frontend.outputs.uri
+output SERVICE_BACKEND_URI string = backend.outputs.uri
+output BACKEND_PRINCIPAL_ID string = backend.outputs.principalId
+```
+
+## Idempotent Deployments
+
+### Why azd up is Idempotent
+
+1. **Bicep is declarative** - Resources reconcile to desired state
+2. **Remote builds tag uniquely** - Image tags include deployment timestamp
+3. 
**ACR reuses layers** - Only changed layers upload + +### Preserving Manual Changes + +Custom domains added via Portal can be lost on redeploy. Preserve with hooks: + +```yaml +hooks: + preprovision: + shell: sh + run: | + # Save custom domains before provision + if az containerapp show --name "$FRONTEND_NAME" -g "$RG" &>/dev/null; then + az containerapp show --name "$FRONTEND_NAME" -g "$RG" \ + --query "properties.configuration.ingress.customDomains" \ + -o json > /tmp/domains.json + fi + + postprovision: + shell: sh + run: | + # Verify/restore custom domains + if [ -f /tmp/domains.json ]; then + echo "Saved domains: $(cat /tmp/domains.json)" + fi +``` + +### Handling Existing Resources + +```bicep +// Reference existing ACR (don't recreate) +resource containerRegistry 'Microsoft.ContainerRegistry/registries@2023-07-01' existing = { + name: containerRegistryName +} + +// Set customDomains to null to preserve Portal-added domains +customDomains: empty(customDomainsParam) ? null : customDomainsParam +``` + +## Container App Service Discovery + +Internal HTTP routing between Container Apps in same environment: + +```bicep +// Backend reference in frontend env vars +env: [ + { + name: 'BACKEND_URL' + value: 'http://ca-backend-${resourceToken}' // Internal DNS + } +] +``` + +Frontend nginx proxies to internal URL: +```nginx +location /api { + proxy_pass $BACKEND_URL; +} +``` + +## Managed Identity & RBAC + +### Enable System-Assigned Identity + +```bicep +resource containerApp 'Microsoft.App/containerApps@2024-03-01' = { + identity: { + type: 'SystemAssigned' + } +} + +output principalId string = containerApp.identity.principalId +``` + +### Post-Provision RBAC Assignment + +```yaml +hooks: + postprovision: + shell: sh + run: | + PRINCIPAL_ID="${BACKEND_PRINCIPAL_ID}" + + # Azure OpenAI access + az role assignment create \ + --assignee-object-id "$PRINCIPAL_ID" \ + --assignee-principal-type ServicePrincipal \ + --role "Cognitive Services OpenAI User" \ + --scope 
"$OPENAI_RESOURCE_ID" 2>/dev/null || true + + # Azure AI Search access + az role assignment create \ + --assignee-object-id "$PRINCIPAL_ID" \ + --role "Search Index Data Reader" \ + --scope "$SEARCH_RESOURCE_ID" 2>/dev/null || true +``` + +## Common Commands + +```bash +# Environment management +azd env list # List environments +azd env select # Switch environment +azd env get-values # Show all env vars +azd env set KEY value # Set variable + +# Deployment +azd up # Full provision + deploy +azd provision # Infrastructure only +azd deploy # Code deployment only +azd deploy --service backend # Deploy single service + +# Debugging +azd show # Show project status +az containerapp logs show -n -g --follow # Stream logs +``` + +## Reference Files + +- **Bicep patterns**: See [references/bicep-patterns.md](references/bicep-patterns.md) for Container Apps modules +- **Troubleshooting**: See [references/troubleshooting.md](references/troubleshooting.md) for common issues +- **azure.yaml schema**: See [references/azure-yaml-schema.md](references/azure-yaml-schema.md) for full options + +## Critical Reminders + +1. **Always use `remoteBuild: true`** - Local builds fail on M1/ARM Macs deploying to AMD64 +2. **Bicep outputs auto-populate .azure//.env** - Don't manually edit +3. **Use `azd env set` for secrets** - Not main.parameters.json defaults +4. **Service tags (`azd-service-name`)** - Required for azd to find Container Apps +5. **`|| true` in hooks** - Prevent RBAC "already exists" errors from failing deploy diff --git a/skills/azure-ai-agents-persistent-dotnet/SKILL.md b/skills/azure-ai-agents-persistent-dotnet/SKILL.md new file mode 100644 index 00000000..90075e1b --- /dev/null +++ b/skills/azure-ai-agents-persistent-dotnet/SKILL.md @@ -0,0 +1,349 @@ +--- +name: azure-ai-agents-persistent-dotnet +description: | + Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. 
Use for agent CRUD, conversation threads, streaming responses, function calling, file search, and code interpreter. Triggers: "PersistentAgentsClient", "persistent agents", "agent threads", "agent runs", "streaming agents", "function calling agents .NET".
+package: Azure.AI.Agents.Persistent
+---
+
+# Azure.AI.Agents.Persistent (.NET)
+
+Low-level SDK for creating and managing persistent AI agents with threads, messages, runs, and tools.
+
+## Installation
+
+```bash
+dotnet add package Azure.AI.Agents.Persistent --prerelease
+dotnet add package Azure.Identity
+```
+
+**Current Versions**: Stable v1.1.0, Preview v1.2.0-beta.8
+
+## Environment Variables
+
+```bash
+PROJECT_ENDPOINT=https://<your-resource>.services.ai.azure.com/api/projects/<your-project>
+MODEL_DEPLOYMENT_NAME=gpt-4o-mini
+AZURE_BING_CONNECTION_ID=<your-bing-connection-id>
+AZURE_AI_SEARCH_CONNECTION_ID=<your-search-connection-id>
+```
+
+## Authentication
+
+```csharp
+using Azure.AI.Agents.Persistent;
+using Azure.Identity;
+
+var projectEndpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
+PersistentAgentsClient client = new(projectEndpoint, new DefaultAzureCredential());
+```
+
+## Client Hierarchy
+
+```
+PersistentAgentsClient
+β”œβ”€β”€ Administration β†’ Agent CRUD operations
+β”œβ”€β”€ Threads        β†’ Thread management
+β”œβ”€β”€ Messages       β†’ Message operations
+β”œβ”€β”€ Runs           β†’ Run execution and streaming
+β”œβ”€β”€ Files          β†’ File upload/download
+└── VectorStores   β†’ Vector store management
+```
+
+## Core Workflow
+
+### 1. Create Agent
+
+```csharp
+var modelDeploymentName = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
+
+PersistentAgent agent = await client.Administration.CreateAgentAsync(
+    model: modelDeploymentName,
+    name: "Math Tutor",
+    instructions: "You are a personal math tutor. Write and run code to answer math questions.",
+    tools: [new CodeInterpreterToolDefinition()]
+);
+```
+
+### 2. 
Create Thread and Message
+
+```csharp
+// Create thread
+PersistentAgentThread thread = await client.Threads.CreateThreadAsync();
+
+// Create message
+await client.Messages.CreateMessageAsync(
+    thread.Id,
+    MessageRole.User,
+    "I need to solve the equation `3x + 11 = 14`. Can you help me?"
+);
+```
+
+### 3. Run Agent (Polling)
+
+```csharp
+// Create run
+ThreadRun run = await client.Runs.CreateRunAsync(
+    thread.Id,
+    agent.Id,
+    additionalInstructions: "Please address the user as Jane Doe."
+);
+
+// Poll for completion
+do
+{
+    await Task.Delay(TimeSpan.FromMilliseconds(500));
+    run = await client.Runs.GetRunAsync(thread.Id, run.Id);
+}
+while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress);
+
+// Retrieve messages
+await foreach (PersistentThreadMessage message in client.Messages.GetMessagesAsync(
+    threadId: thread.Id,
+    order: ListSortOrder.Ascending))
+{
+    Console.Write($"{message.Role}: ");
+    foreach (MessageContent content in message.ContentItems)
+    {
+        if (content is MessageTextContent textContent)
+            Console.WriteLine(textContent.Text);
+    }
+}
+```
+
+### 4. Streaming Response
+
+```csharp
+AsyncCollectionResult<StreamingUpdate> stream = client.Runs.CreateRunStreamingAsync(
+    thread.Id,
+    agent.Id
+);
+
+await foreach (StreamingUpdate update in stream)
+{
+    if (update.UpdateKind == StreamingUpdateReason.RunCreated)
+    {
+        Console.WriteLine("--- Run started! ---");
+    }
+    else if (update is MessageContentUpdate contentUpdate)
+    {
+        Console.Write(contentUpdate.Text);
+    }
+    else if (update.UpdateKind == StreamingUpdateReason.RunCompleted)
+    {
+        Console.WriteLine("\n--- Run completed! ---");
+    }
+}
+```
+
+### 5. 
Function Calling
+
+```csharp
+// Define function tool
+FunctionToolDefinition weatherTool = new(
+    name: "getCurrentWeather",
+    description: "Gets the current weather at a location.",
+    parameters: BinaryData.FromObjectAsJson(new
+    {
+        Type = "object",
+        Properties = new
+        {
+            Location = new { Type = "string", Description = "City and state, e.g. San Francisco, CA" },
+            Unit = new { Type = "string", Enum = new[] { "c", "f" } }
+        },
+        Required = new[] { "location" }
+    }, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
+);
+
+// Create agent with function
+PersistentAgent agent = await client.Administration.CreateAgentAsync(
+    model: modelDeploymentName,
+    name: "Weather Bot",
+    instructions: "You are a weather bot.",
+    tools: [weatherTool]
+);
+
+// Handle function calls during polling
+do
+{
+    await Task.Delay(500);
+    run = await client.Runs.GetRunAsync(thread.Id, run.Id);
+
+    if (run.Status == RunStatus.RequiresAction
+        && run.RequiredAction is SubmitToolOutputsAction submitAction)
+    {
+        List<ToolOutput> outputs = [];
+        foreach (RequiredToolCall toolCall in submitAction.ToolCalls)
+        {
+            if (toolCall is RequiredFunctionToolCall funcCall)
+            {
+                // Execute function and get result
+                string result = ExecuteFunction(funcCall.Name, funcCall.Arguments);
+                outputs.Add(new ToolOutput(toolCall, result));
+            }
+        }
+        run = await client.Runs.SubmitToolOutputsToRunAsync(run, outputs, toolApprovals: null);
+    }
+}
+while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress);
+```
+
+### 6. 
File Search with Vector Store + +```csharp +// Upload file +PersistentAgentFileInfo file = await client.Files.UploadFileAsync( + filePath: "document.txt", + purpose: PersistentAgentFilePurpose.Agents +); + +// Create vector store +PersistentAgentsVectorStore vectorStore = await client.VectorStores.CreateVectorStoreAsync( + fileIds: [file.Id], + name: "my_vector_store" +); + +// Create file search resource +FileSearchToolResource fileSearchResource = new(); +fileSearchResource.VectorStoreIds.Add(vectorStore.Id); + +// Create agent with file search +PersistentAgent agent = await client.Administration.CreateAgentAsync( + model: modelDeploymentName, + name: "Document Assistant", + instructions: "You help users find information in documents.", + tools: [new FileSearchToolDefinition()], + toolResources: new ToolResources { FileSearch = fileSearchResource } +); +``` + +### 7. Bing Grounding + +```csharp +var bingConnectionId = Environment.GetEnvironmentVariable("AZURE_BING_CONNECTION_ID"); + +BingGroundingToolDefinition bingTool = new( + new BingGroundingSearchToolParameters( + [new BingGroundingSearchConfiguration(bingConnectionId)] + ) +); + +PersistentAgent agent = await client.Administration.CreateAgentAsync( + model: modelDeploymentName, + name: "Search Agent", + instructions: "Use Bing to answer questions about current events.", + tools: [bingTool] +); +``` + +### 8. Azure AI Search + +```csharp +AzureAISearchToolResource searchResource = new( + connectionId: searchConnectionId, + indexName: "my_index", + topK: 5, + filter: "category eq 'documentation'", + queryType: AzureAISearchQueryType.Simple +); + +PersistentAgent agent = await client.Administration.CreateAgentAsync( + model: modelDeploymentName, + name: "Search Agent", + instructions: "Search the documentation index to answer questions.", + tools: [new AzureAISearchToolDefinition()], + toolResources: new ToolResources { AzureAISearch = searchResource } +); +``` + +### 9. 
Cleanup + +```csharp +await client.Threads.DeleteThreadAsync(thread.Id); +await client.Administration.DeleteAgentAsync(agent.Id); +await client.VectorStores.DeleteVectorStoreAsync(vectorStore.Id); +await client.Files.DeleteFileAsync(file.Id); +``` + +## Available Tools + +| Tool | Class | Purpose | +|------|-------|---------| +| Code Interpreter | `CodeInterpreterToolDefinition` | Execute Python code, generate visualizations | +| File Search | `FileSearchToolDefinition` | Search uploaded files via vector stores | +| Function Calling | `FunctionToolDefinition` | Call custom functions | +| Bing Grounding | `BingGroundingToolDefinition` | Web search via Bing | +| Azure AI Search | `AzureAISearchToolDefinition` | Search Azure AI Search indexes | +| OpenAPI | `OpenApiToolDefinition` | Call external APIs via OpenAPI spec | +| Azure Functions | `AzureFunctionToolDefinition` | Invoke Azure Functions | +| MCP | `MCPToolDefinition` | Model Context Protocol tools | +| SharePoint | `SharepointToolDefinition` | Access SharePoint content | +| Microsoft Fabric | `MicrosoftFabricToolDefinition` | Access Fabric data | + +## Streaming Update Types + +| Update Type | Description | +|-------------|-------------| +| `StreamingUpdateReason.RunCreated` | Run started | +| `StreamingUpdateReason.RunInProgress` | Run processing | +| `StreamingUpdateReason.RunCompleted` | Run finished | +| `StreamingUpdateReason.RunFailed` | Run errored | +| `MessageContentUpdate` | Text content chunk | +| `RunStepUpdate` | Step status change | + +## Key Types Reference + +| Type | Purpose | +|------|---------| +| `PersistentAgentsClient` | Main entry point | +| `PersistentAgent` | Agent with model, instructions, tools | +| `PersistentAgentThread` | Conversation thread | +| `PersistentThreadMessage` | Message in thread | +| `ThreadRun` | Execution of agent against thread | +| `RunStatus` | Queued, InProgress, RequiresAction, Completed, Failed | +| `ToolResources` | Combined tool resources | +| `ToolOutput` | 
Function call response | + +## Best Practices + +1. **Always dispose clients** β€” Use `using` statements or explicit disposal +2. **Poll with appropriate delays** β€” 500ms recommended between status checks +3. **Clean up resources** β€” Delete threads and agents when done +4. **Handle all run statuses** β€” Check for `RequiresAction`, `Failed`, `Cancelled` +5. **Use streaming for real-time UX** β€” Better user experience than polling +6. **Store IDs not objects** β€” Reference agents/threads by ID +7. **Use async methods** β€” All operations should be async + +## Error Handling + +```csharp +using Azure; + +try +{ + var agent = await client.Administration.CreateAgentAsync(...); +} +catch (RequestFailedException ex) when (ex.Status == 404) +{ + Console.WriteLine("Resource not found"); +} +catch (RequestFailedException ex) +{ + Console.WriteLine($"Error: {ex.Status} - {ex.ErrorCode}: {ex.Message}"); +} +``` + +## Related SDKs + +| SDK | Purpose | Install | +|-----|---------|---------| +| `Azure.AI.Agents.Persistent` | Low-level agents (this SDK) | `dotnet add package Azure.AI.Agents.Persistent` | +| `Azure.AI.Projects` | High-level project client | `dotnet add package Azure.AI.Projects` | + +## Reference Links + +| Resource | URL | +|----------|-----| +| NuGet Package | https://www.nuget.org/packages/Azure.AI.Agents.Persistent | +| API Reference | https://learn.microsoft.com/dotnet/api/azure.ai.agents.persistent | +| GitHub Source | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Agents.Persistent | +| Samples | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Agents.Persistent/samples | diff --git a/skills/azure-ai-agents-persistent-java/SKILL.md b/skills/azure-ai-agents-persistent-java/SKILL.md new file mode 100644 index 00000000..1d36b02c --- /dev/null +++ b/skills/azure-ai-agents-persistent-java/SKILL.md @@ -0,0 +1,137 @@ +--- +name: azure-ai-agents-persistent-java +description: | + Azure AI Agents Persistent SDK for 
Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.
+  Triggers: "PersistentAgentsClient", "persistent agents java", "agent threads java", "agent runs java", "streaming agents java".
+package: com.azure:azure-ai-agents-persistent
+---
+
+# Azure AI Agents Persistent SDK for Java
+
+Low-level SDK for creating and managing persistent AI agents with threads, messages, runs, and tools.
+
+## Installation
+
+```xml
+<dependency>
+    <groupId>com.azure</groupId>
+    <artifactId>azure-ai-agents-persistent</artifactId>
+    <version>1.0.0-beta.1</version>
+</dependency>
+```
+
+## Environment Variables
+
+```bash
+PROJECT_ENDPOINT=https://<your-resource>.services.ai.azure.com/api/projects/<your-project>
+MODEL_DEPLOYMENT_NAME=gpt-4o-mini
+```
+
+## Authentication
+
+```java
+import com.azure.ai.agents.persistent.PersistentAgentsClient;
+import com.azure.ai.agents.persistent.PersistentAgentsClientBuilder;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+
+String endpoint = System.getenv("PROJECT_ENDPOINT");
+PersistentAgentsClient client = new PersistentAgentsClientBuilder()
+    .endpoint(endpoint)
+    .credential(new DefaultAzureCredentialBuilder().build())
+    .buildClient();
+```
+
+## Key Concepts
+
+The Azure AI Agents Persistent SDK provides a low-level API for managing persistent agents that can be reused across sessions.
+
+### Client Hierarchy
+
+| Client | Purpose |
+|--------|---------|
+| `PersistentAgentsClient` | Sync client for agent operations |
+| `PersistentAgentsAsyncClient` | Async client for agent operations |
+
+## Core Workflow
+
+### 1. Create Agent
+
+```java
+// Create an agent with model, name, and instructions
+PersistentAgent agent = client.createAgent(
+    modelDeploymentName,
+    "Math Tutor",
+    "You are a personal math tutor."
+);
+```
+
+### 2. Create Thread
+
+```java
+PersistentAgentThread thread = client.createThread();
+```
+
+### 3. Add Message
+
+```java
+client.createMessage(
+    thread.getId(),
+    MessageRole.USER,
+    "I need help with equations."
+);
+```
+
+### 4.
Run Agent
+
+```java
+ThreadRun run = client.createRun(thread.getId(), agent.getId());
+
+// Poll for completion
+while (run.getStatus() == RunStatus.QUEUED || run.getStatus() == RunStatus.IN_PROGRESS) {
+    Thread.sleep(500);
+    run = client.getRun(thread.getId(), run.getId());
+}
+```
+
+### 5. Get Response
+
+```java
+PagedIterable<PersistentThreadMessage> messages = client.listMessages(thread.getId());
+for (PersistentThreadMessage message : messages) {
+    System.out.println(message.getRole() + ": " + message.getContent());
+}
+```
+
+### 6. Cleanup
+
+```java
+client.deleteThread(thread.getId());
+client.deleteAgent(agent.getId());
+```
+
+## Best Practices
+
+1. **Use DefaultAzureCredential** for production authentication
+2. **Poll with appropriate delays** — 500ms recommended between status checks
+3. **Clean up resources** — Delete threads and agents when done
+4. **Handle all run statuses** — Check for `REQUIRES_ACTION`, `FAILED`, `CANCELLED`
+5. **Use async client** for better throughput in high-concurrency scenarios
+
+## Error Handling
+
+```java
+import com.azure.core.exception.HttpResponseException;
+
+try {
+    PersistentAgent agent = client.createAgent(modelName, name, instructions);
+} catch (HttpResponseException e) {
+    System.err.println("Error: " + e.getResponse().getStatusCode() + " - " + e.getMessage());
+}
+```
+
+## Reference Links
+
+| Resource | URL |
+|----------|-----|
+| Maven Package | https://central.sonatype.com/artifact/com.azure/azure-ai-agents-persistent |
+| GitHub Source | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-agents-persistent |
diff --git a/skills/copilot-sdk/SKILL.md b/skills/copilot-sdk/SKILL.md
new file mode 100644
index 00000000..6caeeea7
--- /dev/null
+++ b/skills/copilot-sdk/SKILL.md
@@ -0,0 +1,510 @@
+---
+name: copilot-sdk
+description: Build applications powered by GitHub Copilot using the Copilot SDK. Use when creating programmatic integrations with Copilot across Node.js/TypeScript, Python, Go, or .NET.
Covers session management, custom tools, streaming, hooks, MCP servers, BYOK providers, session persistence, and custom agents. Requires GitHub Copilot CLI installed and a GitHub Copilot subscription (unless using BYOK). +--- + +# GitHub Copilot SDK + +Build applications that programmatically interact with GitHub Copilot. The SDK wraps the Copilot CLI via JSON-RPC, providing session management, custom tools, hooks, MCP server integration, and streaming across Node.js, Python, Go, and .NET. + +## Prerequisites + +- **GitHub Copilot CLI** installed and authenticated (`copilot --version` to verify) +- **GitHub Copilot subscription** (Individual, Business, or Enterprise) β€” not required for BYOK +- **Runtime:** Node.js 18+ / Python 3.8+ / Go 1.21+ / .NET 8.0+ + +## Installation + +| Language | Package | Install | +|----------|---------|---------| +| Node.js | `@github/copilot-sdk` | `npm install @github/copilot-sdk` | +| Python | `github-copilot-sdk` | `pip install github-copilot-sdk` | +| Go | `github.com/github/copilot-sdk/go` | `go get github.com/github/copilot-sdk/go` | +| .NET | `GitHub.Copilot.SDK` | `dotnet add package GitHub.Copilot.SDK` | + +--- + +## Core Pattern: Client β†’ Session β†’ Message + +All SDK usage follows this pattern: create a client, create a session, send messages. + +### Node.js / TypeScript + +```typescript +import { CopilotClient } from "@github/copilot-sdk"; + +const client = new CopilotClient(); +const session = await client.createSession({ model: "gpt-4.1" }); + +const response = await session.sendAndWait({ prompt: "What is 2 + 2?" 
}); +console.log(response?.data.content); + +await client.stop(); +``` + +### Python + +```python +import asyncio +from copilot import CopilotClient + +async def main(): + client = CopilotClient() + await client.start() + session = await client.create_session({"model": "gpt-4.1"}) + response = await session.send_and_wait({"prompt": "What is 2 + 2?"}) + print(response.data.content) + await client.stop() + +asyncio.run(main()) +``` + +### Go + +```go +client := copilot.NewClient(nil) +if err := client.Start(ctx); err != nil { log.Fatal(err) } +defer client.Stop() + +session, _ := client.CreateSession(ctx, &copilot.SessionConfig{Model: "gpt-4.1"}) +response, _ := session.SendAndWait(ctx, copilot.MessageOptions{Prompt: "What is 2 + 2?"}) +fmt.Println(*response.Data.Content) +``` + +### .NET + +```csharp +await using var client = new CopilotClient(); +await using var session = await client.CreateSessionAsync(new SessionConfig { Model = "gpt-4.1" }); +var response = await session.SendAndWaitAsync(new MessageOptions { Prompt = "What is 2 + 2?" }); +Console.WriteLine(response?.Data.Content); +``` + +--- + +## Streaming Responses + +Enable real-time output by setting `streaming: true` and subscribing to delta events. 
+
+```typescript
+const session = await client.createSession({ model: "gpt-4.1", streaming: true });
+
+session.on("assistant.message_delta", (event) => {
+  process.stdout.write(event.data.deltaContent);
+});
+session.on("session.idle", () => console.log());
+
+await session.sendAndWait({ prompt: "Tell me a joke" });
+```
+
+**Python equivalent:**
+
+```python
+import sys
+
+from copilot.generated.session_events import SessionEventType
+
+session = await client.create_session({"model": "gpt-4.1", "streaming": True})
+
+def handle_event(event):
+    if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
+        sys.stdout.write(event.data.delta_content)
+        sys.stdout.flush()
+
+session.on(handle_event)
+await session.send_and_wait({"prompt": "Tell me a joke"})
+```
+
+### Event Subscription
+
+| Method | Description |
+|--------|-------------|
+| `on(handler)` | Subscribe to all events; returns unsubscribe function |
+| `on(eventType, handler)` | Subscribe to specific event type (Node.js only) |
+
+---
+
+## Custom Tools
+
+Define tools that Copilot can call to extend its capabilities.
+ +### Node.js + +```typescript +import { CopilotClient, defineTool } from "@github/copilot-sdk"; + +const getWeather = defineTool("get_weather", { + description: "Get the current weather for a city", + parameters: { + type: "object", + properties: { city: { type: "string", description: "The city name" } }, + required: ["city"], + }, + handler: async ({ city }) => ({ city, temperature: "72Β°F", condition: "sunny" }), +}); + +const session = await client.createSession({ + model: "gpt-4.1", + tools: [getWeather], +}); +``` + +### Python + +```python +from copilot.tools import define_tool +from pydantic import BaseModel, Field + +class GetWeatherParams(BaseModel): + city: str = Field(description="The city name") + +@define_tool(description="Get the current weather for a city") +async def get_weather(params: GetWeatherParams) -> dict: + return {"city": params.city, "temperature": "72Β°F", "condition": "sunny"} + +session = await client.create_session({"model": "gpt-4.1", "tools": [get_weather]}) +``` + +### Go + +```go +type WeatherParams struct { + City string `json:"city" jsonschema:"The city name"` +} + +getWeather := copilot.DefineTool("get_weather", "Get weather for a city", + func(params WeatherParams, inv copilot.ToolInvocation) (WeatherResult, error) { + return WeatherResult{City: params.City, Temperature: "72Β°F"}, nil + }, +) + +session, _ := client.CreateSession(ctx, &copilot.SessionConfig{ + Model: "gpt-4.1", + Tools: []copilot.Tool{getWeather}, +}) +``` + +### .NET + +```csharp +var getWeather = AIFunctionFactory.Create( + ([Description("The city name")] string city) => new { city, temperature = "72Β°F" }, + "get_weather", "Get the current weather for a city"); + +await using var session = await client.CreateSessionAsync(new SessionConfig { + Model = "gpt-4.1", Tools = [getWeather], +}); +``` + +--- + +## Hooks + +Intercept and customize session behavior at key lifecycle points. 
+ +| Hook | Trigger | Use Case | +|------|---------|----------| +| `onPreToolUse` | Before tool executes | Permission control, argument modification | +| `onPostToolUse` | After tool executes | Result transformation, logging | +| `onUserPromptSubmitted` | User sends message | Prompt modification, filtering | +| `onSessionStart` | Session begins | Add context, configure session | +| `onSessionEnd` | Session ends | Cleanup, analytics | +| `onErrorOccurred` | Error happens | Custom error handling, retry logic | + +### Example: Tool Permission Control + +```typescript +const session = await client.createSession({ + hooks: { + onPreToolUse: async (input) => { + if (["shell", "bash"].includes(input.toolName)) { + return { permissionDecision: "deny", permissionDecisionReason: "Shell access not permitted" }; + } + return { permissionDecision: "allow" }; + }, + }, +}); +``` + +### Pre-Tool Use Output + +| Field | Type | Description | +|-------|------|-------------| +| `permissionDecision` | `"allow"` \| `"deny"` \| `"ask"` | Whether to allow the tool call | +| `permissionDecisionReason` | string | Explanation for deny/ask | +| `modifiedArgs` | object | Modified arguments to pass | +| `additionalContext` | string | Extra context for conversation | +| `suppressOutput` | boolean | Hide tool output from conversation | + +--- + +## MCP Server Integration + +Connect to MCP servers for pre-built tool capabilities. 
+ +### Remote HTTP Server + +```typescript +const session = await client.createSession({ + mcpServers: { + github: { type: "http", url: "https://api.githubcopilot.com/mcp/" }, + }, +}); +``` + +### Local Stdio Server + +```typescript +const session = await client.createSession({ + mcpServers: { + filesystem: { + type: "local", + command: "npx", + args: ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"], + tools: ["*"], + }, + }, +}); +``` + +### MCP Config Fields + +| Field | Type | Description | +|-------|------|-------------| +| `type` | `"local"` \| `"http"` | Server transport type | +| `command` | string | Executable path (local) | +| `args` | string[] | Command arguments (local) | +| `url` | string | Server URL (http) | +| `tools` | string[] | `["*"]` or specific tool names | +| `env` | object | Environment variables | +| `cwd` | string | Working directory (local) | +| `timeout` | number | Timeout in milliseconds | + +--- + +## Authentication + +### Methods (Priority Order) + +1. **Explicit token** β€” `githubToken` in constructor +2. **Environment variables** β€” `COPILOT_GITHUB_TOKEN` β†’ `GH_TOKEN` β†’ `GITHUB_TOKEN` +3. **Stored OAuth** β€” From `copilot auth login` +4. **GitHub CLI** β€” `gh auth` credentials + +### Programmatic Token + +```typescript +const client = new CopilotClient({ githubToken: process.env.GITHUB_TOKEN }); +``` + +### BYOK (Bring Your Own Key) + +Use your own API keys β€” no Copilot subscription required. 
+ +```typescript +const session = await client.createSession({ + model: "gpt-5.2-codex", + provider: { + type: "openai", + baseUrl: "https://your-resource.openai.azure.com/openai/v1/", + wireApi: "responses", + apiKey: process.env.FOUNDRY_API_KEY, + }, +}); +``` + +| Provider | Type | Notes | +|----------|------|-------| +| OpenAI | `"openai"` | OpenAI API and compatible endpoints | +| Azure OpenAI | `"azure"` | Native Azure endpoints (don't include `/openai/v1`) | +| Azure AI Foundry | `"openai"` | OpenAI-compatible Foundry endpoints | +| Anthropic | `"anthropic"` | Claude models | +| Ollama | `"openai"` | Local models, no API key needed | + +**Wire API:** Use `"responses"` for GPT-5 series, `"completions"` (default) for others. + +--- + +## Session Persistence + +Resume sessions across restarts by providing your own session ID. + +```typescript +// Create with explicit ID +const session = await client.createSession({ + sessionId: "user-123-task-456", + model: "gpt-4.1", +}); + +// Resume later +const resumed = await client.resumeSession("user-123-task-456"); +await resumed.sendAndWait({ prompt: "What did we discuss?" }); +``` + +**Session management:** + +```typescript +const sessions = await client.listSessions(); // List all +await client.deleteSession("user-123-task-456"); // Delete +await session.destroy(); // Destroy active +``` + +**BYOK sessions:** Must re-provide `provider` config on resume (keys are not persisted). 
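Since the keys are never stored, resuming a BYOK session means rebuilding the provider block every time. A minimal Python sketch of a helper that does this (the helper name and the `FOUNDRY_API_KEY` environment variable are illustrative; the config keys mirror the snippet above):

```python
import os

def byok_resume_config(session_id: str) -> dict:
    """Rebuild the session config needed to resume a BYOK session.

    The provider block must be supplied again on every resume,
    because API keys are never persisted with the session.
    """
    return {
        "sessionId": session_id,
        "provider": {
            "type": "openai",
            "baseUrl": "https://your-resource.openai.azure.com/openai/v1/",
            "wireApi": "responses",
            # Read the key from the environment at resume time
            "apiKey": os.environ.get("FOUNDRY_API_KEY", ""),
        },
    }
```

Only the session ID belongs in persistent storage; the key is pulled from the environment (or a secret store) each time the session is resumed.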
+ +### Infinite Sessions + +For long-running workflows that may exceed context limits: + +```typescript +const session = await client.createSession({ + infiniteSessions: { + enabled: true, + backgroundCompactionThreshold: 0.80, + bufferExhaustionThreshold: 0.95, + }, +}); +``` + +--- + +## Custom Agents + +Define specialized AI personas: + +```typescript +const session = await client.createSession({ + customAgents: [{ + name: "pr-reviewer", + displayName: "PR Reviewer", + description: "Reviews pull requests for best practices", + prompt: "You are an expert code reviewer. Focus on security, performance, and maintainability.", + }], +}); +``` + +--- + +## System Message + +Control AI behavior and personality: + +```typescript +const session = await client.createSession({ + systemMessage: { content: "You are a helpful assistant. Always be concise." }, +}); +``` + +--- + +## Skills Integration + +Load skill directories to extend Copilot's capabilities: + +```typescript +const session = await client.createSession({ + skillDirectories: ["./skills/code-review", "./skills/documentation"], + disabledSkills: ["experimental-feature"], +}); +``` + +--- + +## Permission & Input Handlers + +Handle tool permissions and user input requests programmatically: + +```typescript +const session = await client.createSession({ + onPermissionRequest: async (request) => { + // Auto-approve git commands only + if (request.kind === "shell") { + return { approved: request.command.startsWith("git") }; + } + return { approved: true }; + }, + onUserInputRequest: async (request) => { + // Handle ask_user tool calls + return { response: "yes" }; + }, +}); +``` + +--- + +## External CLI Server + +Connect to a separately running CLI instead of auto-managing the process: + +```bash +copilot --headless --port 4321 +``` + +```typescript +const client = new CopilotClient({ cliUrl: "localhost:4321" }); +``` + +--- + +## Client Configuration + +| Option | Type | Description | 
+|--------|------|-------------| +| `cliPath` | string | Path to Copilot CLI executable | +| `cliUrl` | string | URL of external CLI server | +| `githubToken` | string | GitHub token for auth | +| `useLoggedInUser` | boolean | Use stored CLI credentials (default: true) | +| `logLevel` | string | `"none"` \| `"error"` \| `"warning"` \| `"info"` \| `"debug"` | +| `autoRestart` | boolean | Auto-restart CLI on crash (default: true) | +| `useStdio` | boolean | Use stdio transport (default: true) | + +## Session Configuration + +| Option | Type | Description | +|--------|------|-------------| +| `model` | string | Model to use (e.g., `"gpt-4.1"`) | +| `sessionId` | string | Custom ID for resumable sessions | +| `streaming` | boolean | Enable streaming responses | +| `tools` | Tool[] | Custom tools | +| `mcpServers` | object | MCP server configurations | +| `hooks` | object | Session hooks | +| `provider` | object | BYOK provider config | +| `customAgents` | object[] | Custom agent definitions | +| `systemMessage` | object | System message override | +| `skillDirectories` | string[] | Directories to load skills from | +| `disabledSkills` | string[] | Skills to disable | +| `reasoningEffort` | string | Reasoning effort level | +| `availableTools` | string[] | Restrict available tools | +| `excludedTools` | string[] | Exclude specific tools | +| `infiniteSessions` | object | Auto-compaction config | +| `workingDirectory` | string | Working directory | + +--- + +## Debugging + +Enable debug logging to troubleshoot issues: + +```typescript +const client = new CopilotClient({ logLevel: "debug" }); +``` + +**Common issues:** +- `CLI not found` β†’ Install CLI or set `cliPath` +- `Not authenticated` β†’ Run `copilot auth login` or provide `githubToken` +- `Session not found` β†’ Don't use session after `destroy()` +- `Connection refused` β†’ Check CLI process, enable `autoRestart` + +--- + +## Key API Summary + +| Language | Client | Session Create | Send | Stop | 
+|----------|--------|---------------|------|------| +| Node.js | `new CopilotClient()` | `client.createSession()` | `session.sendAndWait()` | `client.stop()` | +| Python | `CopilotClient()` | `client.create_session()` | `session.send_and_wait()` | `client.stop()` | +| Go | `copilot.NewClient(nil)` | `client.CreateSession()` | `session.SendAndWait()` | `client.Stop()` | +| .NET | `new CopilotClient()` | `client.CreateSessionAsync()` | `session.SendAndWaitAsync()` | `client.DisposeAsync()` | + +## References + +- [GitHub Copilot SDK](https://github.com/github/copilot-sdk) +- [Copilot CLI Installation](https://docs.github.com/en/copilot/how-tos/set-up/install-copilot-cli) +- [MCP Protocol Specification](https://modelcontextprotocol.io) diff --git a/skills/fastapi-router-py/SKILL.md b/skills/fastapi-router-py/SKILL.md new file mode 100644 index 00000000..ed3cf1cf --- /dev/null +++ b/skills/fastapi-router-py/SKILL.md @@ -0,0 +1,52 @@ +--- +name: fastapi-router-py +description: Create FastAPI routers with CRUD operations, authentication dependencies, and proper response models. Use when building REST API endpoints, creating new routes, implementing CRUD operations, or adding authenticated endpoints in FastAPI applications. +--- + +# FastAPI Router + +Create FastAPI routers following established patterns with proper authentication, response models, and HTTP status codes. 
+ +## Quick Start + +Copy the template from [assets/template.py](assets/template.py) and replace placeholders: +- `{{ResourceName}}` β†’ PascalCase name (e.g., `Project`) +- `{{resource_name}}` β†’ snake_case name (e.g., `project`) +- `{{resource_plural}}` β†’ plural form (e.g., `projects`) + +## Authentication Patterns + +```python +# Optional auth - returns None if not authenticated +current_user: Optional[User] = Depends(get_current_user) + +# Required auth - raises 401 if not authenticated +current_user: User = Depends(get_current_user_required) +``` + +## Response Models + +```python +@router.get("/items/{item_id}", response_model=Item) +async def get_item(item_id: str) -> Item: + ... + +@router.get("/items", response_model=list[Item]) +async def list_items() -> list[Item]: + ... +``` + +## HTTP Status Codes + +```python +@router.post("/items", status_code=status.HTTP_201_CREATED) +@router.delete("/items/{id}", status_code=status.HTTP_204_NO_CONTENT) +``` + +## Integration Steps + +1. Create router in `src/backend/app/routers/` +2. Mount in `src/backend/app/main.py` +3. Create corresponding Pydantic models +4. Create service layer if needed +5. Add frontend API functions diff --git a/skills/github-issue-creator/SKILL.md b/skills/github-issue-creator/SKILL.md new file mode 100644 index 00000000..4351dcd3 --- /dev/null +++ b/skills/github-issue-creator/SKILL.md @@ -0,0 +1,137 @@ +--- +name: github-issue-creator +description: Convert raw notes, error logs, voice dictation, or screenshots into crisp GitHub-flavored markdown issue reports. Use when the user pastes bug info, error messages, or informal descriptions and wants a structured GitHub issue. Supports images/GIFs for visual evidence. +--- + +# GitHub Issue Creator + +Transform messy input (error logs, voice notes, screenshots) into clean, actionable GitHub issues. 
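Conceptually, the transformation maps extracted facts into the template that follows. A trimmed-down Python sketch of that mapping (the helper and its field set are illustrative and cover only a few template sections):

```python
from datetime import date

def render_issue(summary: str, severity: str, steps: list[str],
                 expected: str, actual: str) -> tuple[str, str]:
    """Render a minimal issue body plus its /issues/ filename,
    following the YYYY-MM-DD-short-description.md convention."""
    # Slug from the first few words of the summary
    slug = "-".join(summary.lower().split()[:5])
    filename = f"{date.today():%Y-%m-%d}-{slug}.md"
    steps_md = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    body = (
        f"## Summary\n{summary}\n\n"
        f"## Reproduction Steps\n{steps_md}\n\n"
        f"## Expected Behavior\n{expected}\n\n"
        f"## Actual Behavior\n{actual}\n\n"
        f"## Impact\n**{severity}**\n"
    )
    return filename, body
```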
+
+## Output Template
+
+````markdown
+## Summary
+[One-line description of the issue]
+
+## Environment
+- **Product/Service**:
+- **Region/Version**:
+- **Browser/OS**: (if relevant)
+
+## Reproduction Steps
+1. [Step]
+2. [Step]
+3. [Step]
+
+## Expected Behavior
+[What should happen]
+
+## Actual Behavior
+[What actually happens]
+
+## Error Details
+```
+[Error message/code if applicable]
+```
+
+## Visual Evidence
+[Reference to attached screenshots/GIFs]
+
+## Impact
+[Severity: Critical/High/Medium/Low + brief explanation]
+
+## Additional Context
+[Any other relevant details]
+````
+
+## Output Location
+
+**Create issues as markdown files** in `/issues/` directory at the repo root. Use naming convention: `YYYY-MM-DD-short-description.md`
+
+## Guidelines
+
+**Be crisp**: No fluff. Every word should add value.
+
+**Extract structure from chaos**: Voice dictation and raw notes often contain the facts buried in casual language. Pull them out.
+
+**Infer missing context**: If user mentions "same project" or "the dashboard", use context from conversation or memory to fill in specifics.
+
+**Placeholder sensitive data**: Use `[PROJECT_NAME]`, `[USER_ID]`, etc. for anything that might be sensitive.
+
+**Match severity to impact**:
+- Critical: Service down, data loss, security issue
+- High: Major feature broken, no workaround
+- Medium: Feature impaired, workaround exists
+- Low: Minor inconvenience, cosmetic
+
+**Image/GIF handling**: Reference attachments inline.
Format: `![Description](attachment-name.png)`
+
+## Examples
+
+**Input (voice dictation)**:
+> so I was trying to deploy the agent and it just failed silently no error nothing the workflow ran but then poof gone from the list had to refresh and try again three times
+
+**Output**:
+````markdown
+## Summary
+Agent deployment fails silently - no error displayed, agent disappears from list
+
+## Environment
+- **Product/Service**: Azure AI Foundry
+- **Region/Version**: westus2
+
+## Reproduction Steps
+1. Navigate to agent deployment
+2. Configure and deploy agent
+3. Observe workflow completes
+4. Check agent list
+
+## Expected Behavior
+Agent appears in list with deployment status, errors shown if deployment fails
+
+## Actual Behavior
+Agent disappears from list. No error message. Requires page refresh and retry.
+
+## Impact
+**High** - Blocks agent deployment workflow, no feedback on failure cause
+
+## Additional Context
+Required 3 retry attempts before successful deployment
+````
+
+---
+
+**Input (error paste)**:
+> Error: PERMISSION_DENIED when publishing to Teams channel. Code: 403. Was working yesterday.
+
+**Output**:
+````markdown
+## Summary
+403 PERMISSION_DENIED error when publishing to Teams channel
+
+## Environment
+- **Product/Service**: Copilot Studio → Teams integration
+- **Region/Version**: [REGION]
+
+## Reproduction Steps
+1. Configure agent for Teams channel
+2. Attempt to publish
+
+## Expected Behavior
+Agent publishes successfully to Teams channel
+
+## Actual Behavior
+Returns `PERMISSION_DENIED` with code 403
+
+## Error Details
+```
+Error: PERMISSION_DENIED
+Code: 403
+```
+
+## Impact
+**High** - Blocks Teams integration, regression from previous working state
+
+## Additional Context
+Was working yesterday - possible permission/config change or service regression
+````
\ No newline at end of file
diff --git a/skills/hosted-agents-v2-py/SKILL.md b/skills/hosted-agents-v2-py/SKILL.md
new file mode 100644
index 00000000..ee9739b3
--- /dev/null
+++ b/skills/hosted-agents-v2-py/SKILL.md
@@ -0,0 +1,325 @@
+---
+name: hosted-agents-v2-py
+description: |
+  Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition.
+  Use when creating container-based agents that run custom code in Azure AI Foundry.
+  Triggers: "ImageBasedHostedAgentDefinition", "hosted agent", "container agent",
+  "create_version", "ProtocolVersionRecord", "AgentProtocol.RESPONSES".
+package: azure-ai-projects
+---
+
+# Azure AI Hosted Agents (Python)
+
+Build container-based hosted agents using `ImageBasedHostedAgentDefinition` from the Azure AI Projects SDK.
+
+## Installation
+
+```bash
+pip install "azure-ai-projects>=2.0.0b3" azure-identity
+```
+
+**Minimum SDK Version:** `2.0.0b3` or later required for hosted agent support.
+
+## Environment Variables
+
+```bash
+AZURE_AI_PROJECT_ENDPOINT=https://<your-resource>.services.ai.azure.com/api/projects/<your-project>
+```
+
+## Prerequisites
+
+Before creating hosted agents:
+
+1. **Container Image** - Build and push to Azure Container Registry (ACR)
+2. **ACR Pull Permissions** - Grant your project's managed identity `AcrPull` role on the ACR
+3. **Capability Host** - Account-level capability host with `enablePublicHostingEnvironment=true`
+4.
**SDK Version** - Ensure `azure-ai-projects>=2.0.0b3` + +## Authentication + +Always use `DefaultAzureCredential`: + +```python +from azure.identity import DefaultAzureCredential +from azure.ai.projects import AIProjectClient + +credential = DefaultAzureCredential() +client = AIProjectClient( + endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"], + credential=credential +) +``` + +## Core Workflow + +### 1. Imports + +```python +import os +from azure.identity import DefaultAzureCredential +from azure.ai.projects import AIProjectClient +from azure.ai.projects.models import ( + ImageBasedHostedAgentDefinition, + ProtocolVersionRecord, + AgentProtocol, +) +``` + +### 2. Create Hosted Agent + +```python +client = AIProjectClient( + endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"], + credential=DefaultAzureCredential() +) + +agent = client.agents.create_version( + agent_name="my-hosted-agent", + definition=ImageBasedHostedAgentDefinition( + container_protocol_versions=[ + ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1") + ], + cpu="1", + memory="2Gi", + image="myregistry.azurecr.io/my-agent:latest", + tools=[{"type": "code_interpreter"}], + environment_variables={ + "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"], + "MODEL_NAME": "gpt-4o-mini" + } + ) +) + +print(f"Created agent: {agent.name} (version: {agent.version})") +``` + +### 3. List Agent Versions + +```python +versions = client.agents.list_versions(agent_name="my-hosted-agent") +for version in versions: + print(f"Version: {version.version}, State: {version.state}") +``` + +### 4. 
Delete Agent Version + +```python +client.agents.delete_version( + agent_name="my-hosted-agent", + version=agent.version +) +``` + +## ImageBasedHostedAgentDefinition Parameters + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `container_protocol_versions` | `list[ProtocolVersionRecord]` | Yes | Protocol versions the agent supports | +| `image` | `str` | Yes | Full container image path (registry/image:tag) | +| `cpu` | `str` | No | CPU allocation (e.g., "1", "2") | +| `memory` | `str` | No | Memory allocation (e.g., "2Gi", "4Gi") | +| `tools` | `list[dict]` | No | Tools available to the agent | +| `environment_variables` | `dict[str, str]` | No | Environment variables for the container | + +## Protocol Versions + +The `container_protocol_versions` parameter specifies which protocols your agent supports: + +```python +from azure.ai.projects.models import ProtocolVersionRecord, AgentProtocol + +# RESPONSES protocol - standard agent responses +container_protocol_versions=[ + ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1") +] +``` + +**Available Protocols:** +| Protocol | Description | +|----------|-------------| +| `AgentProtocol.RESPONSES` | Standard response protocol for agent interactions | + +## Resource Allocation + +Specify CPU and memory for your container: + +```python +definition=ImageBasedHostedAgentDefinition( + container_protocol_versions=[...], + image="myregistry.azurecr.io/my-agent:latest", + cpu="2", # 2 CPU cores + memory="4Gi" # 4 GiB memory +) +``` + +**Resource Limits:** +| Resource | Min | Max | Default | +|----------|-----|-----|---------| +| CPU | 0.5 | 4 | 1 | +| Memory | 1Gi | 8Gi | 2Gi | + +## Tools Configuration + +Add tools to your hosted agent: + +### Code Interpreter + +```python +tools=[{"type": "code_interpreter"}] +``` + +### MCP Tools + +```python +tools=[ + {"type": "code_interpreter"}, + { + "type": "mcp", + "server_label": "my-mcp-server", + "server_url": 
"https://my-mcp-server.example.com" + } +] +``` + +### Multiple Tools + +```python +tools=[ + {"type": "code_interpreter"}, + {"type": "file_search"}, + { + "type": "mcp", + "server_label": "custom-tool", + "server_url": "https://custom-tool.example.com" + } +] +``` + +## Environment Variables + +Pass configuration to your container: + +```python +environment_variables={ + "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"], + "MODEL_NAME": "gpt-4o-mini", + "LOG_LEVEL": "INFO", + "CUSTOM_CONFIG": "value" +} +``` + +**Best Practice:** Never hardcode secrets. Use environment variables or Azure Key Vault. + +## Complete Example + +```python +import os +from azure.identity import DefaultAzureCredential +from azure.ai.projects import AIProjectClient +from azure.ai.projects.models import ( + ImageBasedHostedAgentDefinition, + ProtocolVersionRecord, + AgentProtocol, +) + +def create_hosted_agent(): + """Create a hosted agent with custom container image.""" + + client = AIProjectClient( + endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"], + credential=DefaultAzureCredential() + ) + + agent = client.agents.create_version( + agent_name="data-processor-agent", + definition=ImageBasedHostedAgentDefinition( + container_protocol_versions=[ + ProtocolVersionRecord( + protocol=AgentProtocol.RESPONSES, + version="v1" + ) + ], + image="myregistry.azurecr.io/data-processor:v1.0", + cpu="2", + memory="4Gi", + tools=[ + {"type": "code_interpreter"}, + {"type": "file_search"} + ], + environment_variables={ + "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"], + "MODEL_NAME": "gpt-4o-mini", + "MAX_RETRIES": "3" + } + ) + ) + + print(f"Created hosted agent: {agent.name}") + print(f"Version: {agent.version}") + print(f"State: {agent.state}") + + return agent + +if __name__ == "__main__": + create_hosted_agent() +``` + +## Async Pattern + +```python +import os +from azure.identity.aio import DefaultAzureCredential +from azure.ai.projects.aio import 
AIProjectClient +from azure.ai.projects.models import ( + ImageBasedHostedAgentDefinition, + ProtocolVersionRecord, + AgentProtocol, +) + +async def create_hosted_agent_async(): + """Create a hosted agent asynchronously.""" + + async with DefaultAzureCredential() as credential: + async with AIProjectClient( + endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"], + credential=credential + ) as client: + agent = await client.agents.create_version( + agent_name="async-agent", + definition=ImageBasedHostedAgentDefinition( + container_protocol_versions=[ + ProtocolVersionRecord( + protocol=AgentProtocol.RESPONSES, + version="v1" + ) + ], + image="myregistry.azurecr.io/async-agent:latest", + cpu="1", + memory="2Gi" + ) + ) + return agent +``` + +## Common Errors + +| Error | Cause | Solution | +|-------|-------|----------| +| `ImagePullBackOff` | ACR pull permission denied | Grant `AcrPull` role to project's managed identity | +| `InvalidContainerImage` | Image not found | Verify image path and tag exist in ACR | +| `CapabilityHostNotFound` | No capability host configured | Create account-level capability host | +| `ProtocolVersionNotSupported` | Invalid protocol version | Use `AgentProtocol.RESPONSES` with version `"v1"` | + +## Best Practices + +1. **Version Your Images** - Use specific tags, not `latest` in production +2. **Minimal Resources** - Start with minimum CPU/memory, scale up as needed +3. **Environment Variables** - Use for all configuration, never hardcode +4. **Error Handling** - Wrap agent creation in try/except blocks +5. 
**Cleanup** - Delete unused agent versions to free resources + +## Reference Links + +- [Azure AI Projects SDK](https://pypi.org/project/azure-ai-projects/) +- [Hosted Agents Documentation](https://learn.microsoft.com/azure/ai-services/agents/how-to/hosted-agents) +- [Azure Container Registry](https://learn.microsoft.com/azure/container-registry/) diff --git a/skills/mcp-builder-ms/SKILL.md b/skills/mcp-builder-ms/SKILL.md new file mode 100644 index 00000000..79263539 --- /dev/null +++ b/skills/mcp-builder-ms/SKILL.md @@ -0,0 +1,303 @@ +--- +name: mcp-builder +description: Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP), Node/TypeScript (MCP SDK), or C#/.NET (Microsoft MCP SDK). +--- + +# MCP Server Development Guide + +## Overview + +Create MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks. + +--- + +## Microsoft MCP Ecosystem + +Microsoft provides extensive MCP infrastructure for Azure and Foundry services. Understanding this ecosystem helps you decide whether to build custom servers or leverage existing ones. + +### Server Types + +| Type | Transport | Use Case | Example | +|------|-----------|----------|---------| +| **Local** | stdio | Desktop apps, single-user, local dev | Azure MCP Server via NPM/Docker | +| **Remote** | Streamable HTTP | Cloud services, multi-tenant, Agent Service | `https://mcp.ai.azure.com` (Foundry) | + +### Microsoft MCP Servers + +Before building a custom server, check if Microsoft already provides one: + +| Server | Type | Description | +|--------|------|-------------| +| **Azure MCP** | Local | 48+ Azure services (Storage, KeyVault, Cosmos, SQL, etc.) 
| +| **Foundry MCP** | Remote | `https://mcp.ai.azure.com` - Models, deployments, evals, agents | +| **Fabric MCP** | Local | Microsoft Fabric APIs, OneLake, item definitions | +| **Playwright MCP** | Local | Browser automation and testing | +| **GitHub MCP** | Remote | `https://api.githubcopilot.com/mcp` | + +**Full ecosystem:** See [🔷 Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) for complete server catalog and patterns. + +### When to Use Microsoft vs Custom + +| Scenario | Recommendation | +|----------|----------------| +| Azure service integration | Use **Azure MCP Server** (48+ services covered) | +| AI Foundry agents/evals | Use **Foundry MCP** remote server | +| Custom internal APIs | Build **custom server** (this guide) | +| Third-party SaaS integration | Build **custom server** (this guide) | +| Extending Azure MCP | Follow [Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) | + +--- + +# Process + +## 🚀 High-Level Workflow + +Creating a high-quality MCP server involves four main phases: + +### Phase 1: Deep Research and Planning + +#### 1.1 Understand Modern MCP Design + +**API Coverage vs. Workflow Tools:** +Balance comprehensive API endpoint coverage with specialized workflow tools. Workflow tools can be more convenient for specific tasks, while comprehensive coverage gives agents flexibility to compose operations. Performance varies by client: some clients benefit from code execution that combines basic tools, while others work better with higher-level workflows. When uncertain, prioritize comprehensive API coverage. + +**Tool Naming and Discoverability:** +Clear, descriptive tool names help agents find the right tools quickly. Use consistent prefixes (e.g., `github_create_issue`, `github_list_repos`) and action-oriented naming. + +**Context Management:** +Agents benefit from concise tool descriptions and the ability to filter/paginate results. Design tools that return focused, relevant data.
Some clients support code execution which can help agents filter and process data efficiently. + +**Actionable Error Messages:** +Error messages should guide agents toward solutions with specific suggestions and next steps. + +#### 1.2 Study MCP Protocol Documentation + +**Navigate the MCP specification:** + +Start with the sitemap to find relevant pages: `https://modelcontextprotocol.io/sitemap.xml` + +Then fetch specific pages with `.md` suffix for markdown format (e.g., `https://modelcontextprotocol.io/specification/draft.md`). + +Key pages to review: +- Specification overview and architecture +- Transport mechanisms (streamable HTTP, stdio) +- Tool, resource, and prompt definitions + +#### 1.3 Study Framework Documentation + +**Language Selection:** + +| Language | Best For | SDK | +|----------|----------|-----| +| **TypeScript** (recommended) | General MCP servers, broad compatibility | `@modelcontextprotocol/sdk` | +| **Python** | Data/ML pipelines, FastAPI integration | `mcp` (FastMCP) | +| **C#/.NET** | Azure/Microsoft ecosystem, enterprise | `Microsoft.Mcp.Core` | + +**Transport Selection:** + +| Transport | Use Case | Characteristics | +|-----------|----------|-----------------| +| **Streamable HTTP** | Remote servers, multi-tenant, Agent Service | Stateless, scalable, requires auth | +| **stdio** | Local servers, desktop apps | Simple, single-user, no network | + +**Load framework documentation:** + +- **MCP Best Practices**: [πŸ“‹ View Best Practices](./reference/mcp_best_practices.md) - Core guidelines + +**For TypeScript (recommended):** +- **TypeScript SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md` +- [⚑ TypeScript Guide](./reference/node_mcp_server.md) - TypeScript patterns and examples + +**For Python:** +- **Python SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md` +- [🐍 Python Guide](./reference/python_mcp_server.md) - 
Python patterns and examples + +**For C#/.NET (Microsoft ecosystem):** +- [πŸ”· Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) - C# patterns, Azure MCP architecture, command hierarchy + +#### 1.4 Plan Your Implementation + +**Understand the API:** +Review the service's API documentation to identify key endpoints, authentication requirements, and data models. Use web search and WebFetch as needed. + +**Tool Selection:** +Prioritize comprehensive API coverage. List endpoints to implement, starting with the most common operations. + +--- + +### Phase 2: Implementation + +#### 2.1 Set Up Project Structure + +See language-specific guides for project setup: +- [⚑ TypeScript Guide](./reference/node_mcp_server.md) - Project structure, package.json, tsconfig.json +- [🐍 Python Guide](./reference/python_mcp_server.md) - Module organization, dependencies +- [πŸ”· Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) - C# project structure, command hierarchy + +#### 2.2 Implement Core Infrastructure + +Create shared utilities: +- API client with authentication +- Error handling helpers +- Response formatting (JSON/Markdown) +- Pagination support + +#### 2.3 Implement Tools + +For each tool: + +**Input Schema:** +- Use Zod (TypeScript) or Pydantic (Python) +- Include constraints and clear descriptions +- Add examples in field descriptions + +**Output Schema:** +- Define `outputSchema` where possible for structured data +- Use `structuredContent` in tool responses (TypeScript SDK feature) +- Helps clients understand and process tool outputs + +**Tool Description:** +- Concise summary of functionality +- Parameter descriptions +- Return type schema + +**Implementation:** +- Async/await for I/O operations +- Proper error handling with actionable messages +- Support pagination where applicable +- Return both text content and structured data when using modern SDKs + +**Annotations:** +- `readOnlyHint`: true/false +- `destructiveHint`: true/false +- 
`idempotentHint`: true/false +- `openWorldHint`: true/false + +--- + +### Phase 3: Review and Test + +#### 3.1 Code Quality + +Review for: +- No duplicated code (DRY principle) +- Consistent error handling +- Full type coverage +- Clear tool descriptions + +#### 3.2 Build and Test + +**TypeScript:** +- Run `npm run build` to verify compilation +- Test with MCP Inspector: `npx @modelcontextprotocol/inspector` + +**Python:** +- Verify syntax: `python -m py_compile your_server.py` +- Test with MCP Inspector + +See language-specific guides for detailed testing approaches and quality checklists. + +--- + +### Phase 4: Create Evaluations + +After implementing your MCP server, create comprehensive evaluations to test its effectiveness. + +**Load [βœ… Evaluation Guide](./reference/evaluation.md) for complete evaluation guidelines.** + +#### 4.1 Understand Evaluation Purpose + +Use evaluations to test whether LLMs can effectively use your MCP server to answer realistic, complex questions. + +#### 4.2 Create 10 Evaluation Questions + +To create effective evaluations, follow the process outlined in the evaluation guide: + +1. **Tool Inspection**: List available tools and understand their capabilities +2. **Content Exploration**: Use READ-ONLY operations to explore available data +3. **Question Generation**: Create 10 complex, realistic questions +4. 
**Answer Verification**: Solve each question yourself to verify answers + +#### 4.3 Evaluation Requirements + +Ensure each question is: +- **Independent**: Not dependent on other questions +- **Read-only**: Only non-destructive operations required +- **Complex**: Requiring multiple tool calls and deep exploration +- **Realistic**: Based on real use cases humans would care about +- **Verifiable**: Single, clear answer that can be verified by string comparison +- **Stable**: Answer won't change over time + +#### 4.4 Output Format + +Create an XML file with this structure: + +```xml +<evaluation> +  <qa_pair> +    <question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question> +    <answer>3</answer> +  </qa_pair> +  <!-- additional qa_pair entries --> +</evaluation> +``` + +--- + +# Reference Files + +## 📚 Documentation Library + +Load these resources as needed during development: + +### Core MCP Documentation (Load First) +- **MCP Protocol**: Start with sitemap at `https://modelcontextprotocol.io/sitemap.xml`, then fetch specific pages with `.md` suffix +- [📋 MCP Best Practices](./reference/mcp_best_practices.md) - Universal MCP guidelines including: + - Server and tool naming conventions + - Response format guidelines (JSON vs Markdown) + - Pagination best practices + - Transport selection (streamable HTTP vs stdio) + - Security and error handling standards + +### Microsoft MCP Documentation (For Azure/Foundry) +- [🔷 Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) - Microsoft-specific patterns including: + - Azure MCP Server architecture (48+ Azure services) + - C#/.NET command implementation patterns + - Remote MCP with Foundry Agent Service + - Authentication (Entra ID, OBO flow, Managed Identity) + - Testing infrastructure with Bicep templates + +### SDK Documentation (Load During Phase 1/2) +- **Python SDK**: Fetch from 
`https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md` +- **TypeScript SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md` +- **Microsoft MCP SDK**: See [Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) for C#/.NET + +### Language-Specific Implementation Guides (Load During Phase 2) +- [🐍 Python Implementation Guide](./reference/python_mcp_server.md) - Complete Python/FastMCP guide with: + - Server initialization patterns + - Pydantic model examples + - Tool registration with `@mcp.tool` + - Complete working examples + - Quality checklist + +- [⚑ TypeScript Implementation Guide](./reference/node_mcp_server.md) - Complete TypeScript guide with: + - Project structure + - Zod schema patterns + - Tool registration with `server.registerTool` + - Complete working examples + - Quality checklist + +- [πŸ”· Microsoft MCP Patterns](./reference/microsoft_mcp_patterns.md) - Complete C#/.NET guide with: + - Command hierarchy (BaseCommand β†’ GlobalCommand β†’ SubscriptionCommand) + - Naming conventions (`{Resource}{Operation}Command`) + - Option handling with `.AsRequired()` / `.AsOptional()` + - Azure Functions remote MCP deployment + - Live test patterns with Bicep + +### Evaluation Guide (Load During Phase 4) +- [βœ… Evaluation Guide](./reference/evaluation.md) - Complete evaluation creation guide with: + - Question creation guidelines + - Answer verification strategies + - XML format specifications + - Example questions and answers + - Running an evaluation with the provided scripts diff --git a/skills/podcast-generation/SKILL.md b/skills/podcast-generation/SKILL.md new file mode 100644 index 00000000..7f2e7205 --- /dev/null +++ b/skills/podcast-generation/SKILL.md @@ -0,0 +1,121 @@ +--- +name: podcast-generation +description: Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. 
Use when building text-to-speech features, audio narrative generation, podcast creation from content, or integrating with Azure OpenAI Realtime API for real audio output. Covers full-stack implementation from React frontend to Python FastAPI backend with WebSocket streaming. +--- + +# Podcast Generation with GPT Realtime Mini + +Generate real audio narratives from text content using Azure OpenAI's Realtime API. + +## Quick Start + +1. Configure environment variables for Realtime API +2. Connect via WebSocket to Azure OpenAI Realtime endpoint +3. Send text prompt, collect PCM audio chunks + transcript +4. Convert PCM to WAV format +5. Return base64-encoded audio to frontend for playback + +## Environment Configuration + +```env +AZURE_OPENAI_AUDIO_API_KEY=your_realtime_api_key +AZURE_OPENAI_AUDIO_ENDPOINT=https://your-resource.cognitiveservices.azure.com +AZURE_OPENAI_AUDIO_DEPLOYMENT=gpt-realtime-mini +``` + +**Note**: Endpoint should NOT include `/openai/v1/` - just the base URL. + +## Core Workflow + +### Backend Audio Generation + +```python +from openai import AsyncOpenAI +import base64 + +# Convert HTTPS endpoint to WebSocket URL +ws_url = endpoint.replace("https://", "wss://") + "/openai/v1" + +client = AsyncOpenAI( + websocket_base_url=ws_url, + api_key=api_key +) + +audio_chunks = [] +transcript_parts = [] + +async with client.realtime.connect(model="gpt-realtime-mini") as conn: + # Configure for audio-only output + await conn.session.update(session={ + "output_modalities": ["audio"], + "instructions": "You are a narrator. Speak naturally." 
+ }) + + # Send text to narrate + await conn.conversation.item.create(item={ + "type": "message", + "role": "user", + "content": [{"type": "input_text", "text": prompt}] + }) + + await conn.response.create() + + # Collect streaming events + async for event in conn: + if event.type == "response.output_audio.delta": + audio_chunks.append(base64.b64decode(event.delta)) + elif event.type == "response.output_audio_transcript.delta": + transcript_parts.append(event.delta) + elif event.type == "response.done": + break + +# Convert PCM to WAV (see scripts/pcm_to_wav.py) +pcm_audio = b''.join(audio_chunks) +wav_audio = pcm_to_wav(pcm_audio, sample_rate=24000) +``` + +### Frontend Audio Playback + +```javascript +// Convert base64 WAV to playable blob +const base64ToBlob = (base64, mimeType) => { + const bytes = atob(base64); + const arr = new Uint8Array(bytes.length); + for (let i = 0; i < bytes.length; i++) arr[i] = bytes.charCodeAt(i); + return new Blob([arr], { type: mimeType }); +}; + +const audioBlob = base64ToBlob(response.audio_data, 'audio/wav'); +const audioUrl = URL.createObjectURL(audioBlob); +new Audio(audioUrl).play(); +``` + +## Voice Options + +| Voice | Character | +|-------|-----------| +| alloy | Neutral | +| echo | Warm | +| fable | Expressive | +| onyx | Deep | +| nova | Friendly | +| shimmer | Clear | + +## Realtime API Events + +- `response.output_audio.delta` - Base64 audio chunk +- `response.output_audio_transcript.delta` - Transcript text +- `response.done` - Generation complete +- `error` - Handle with `event.error.message` + +## Audio Format + +- **Input**: Text prompt +- **Output**: PCM audio (24kHz, 16-bit, mono) +- **Storage**: Base64-encoded WAV + +## References + +- **Full architecture**: See [references/architecture.md](references/architecture.md) for complete stack design +- **Code examples**: See [references/code-examples.md](references/code-examples.md) for production patterns +- **PCM conversion**: Use 
[scripts/pcm_to_wav.py](scripts/pcm_to_wav.py) for audio format conversion diff --git a/skills/pydantic-models-py/SKILL.md b/skills/pydantic-models-py/SKILL.md new file mode 100644 index 00000000..b46dc1ef --- /dev/null +++ b/skills/pydantic-models-py/SKILL.md @@ -0,0 +1,58 @@ +--- +name: pydantic-models-py +description: Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. Use when defining API request/response schemas, database models, or data validation in Python applications using Pydantic v2. +--- + +# Pydantic Models + +Create Pydantic models following the multi-model pattern for clean API contracts. + +## Quick Start + +Copy the template from [assets/template.py](assets/template.py) and replace placeholders: +- `{{ResourceName}}` β†’ PascalCase name (e.g., `Project`) +- `{{resource_name}}` β†’ snake_case name (e.g., `project`) + +## Multi-Model Pattern + +| Model | Purpose | +|-------|---------| +| `Base` | Common fields shared across models | +| `Create` | Request body for creation (required fields) | +| `Update` | Request body for updates (all optional) | +| `Response` | API response with all fields | +| `InDB` | Database document with `doc_type` | + +## camelCase Aliases + +```python +class MyModel(BaseModel): + workspace_id: str = Field(..., alias="workspaceId") + created_at: datetime = Field(..., alias="createdAt") + + class Config: + populate_by_name = True # Accept both snake_case and camelCase +``` + +## Optional Update Fields + +```python +class MyUpdate(BaseModel): + """All fields optional for PATCH requests.""" + name: Optional[str] = Field(None, min_length=1) + description: Optional[str] = None +``` + +## Database Document + +```python +class MyInDB(MyResponse): + """Adds doc_type for Cosmos DB queries.""" + doc_type: str = "my_resource" +``` + +## Integration Steps + +1. Create models in `src/backend/app/models/` +2. Export from `src/backend/app/models/__init__.py` +3. 
Add corresponding TypeScript types diff --git a/skills/skill-creator-ms/SKILL.md b/skills/skill-creator-ms/SKILL.md new file mode 100644 index 00000000..1f27c72a --- /dev/null +++ b/skills/skill-creator-ms/SKILL.md @@ -0,0 +1,613 @@ +--- +name: skill-creator +description: Guide for creating effective skills for AI coding agents working with Azure SDKs and Microsoft Foundry services. Use when creating new skills or updating existing skills. +--- + +# Skill Creator + +Guide for creating skills that extend AI agent capabilities, with emphasis on Azure SDKs and Microsoft Foundry. + +> **Required Context:** When creating SDK or API skills, users MUST provide the SDK package name, documentation URL, or repository reference for the skill to be based on. + +## About Skills + +Skills are modular knowledge packages that transform general-purpose agents into specialized experts: + +1. **Procedural knowledge** — Multi-step workflows for specific domains +2. **SDK expertise** — API patterns, authentication, error handling for Azure services +3. **Domain context** — Schemas, business logic, company-specific patterns +4. **Bundled resources** — Scripts, references, templates for complex tasks + +--- + +## Core Principles + +### 1. Concise is Key + +The context window is a shared resource. Challenge each piece: "Does this justify its token cost?" + +**Default assumption: Agents are already capable.** Only add what they don't already know. + +### 2. Fresh Documentation First + +**Azure SDKs change constantly.** Skills should instruct agents to verify documentation: + +```markdown +## Before Implementation + +Search `microsoft-docs` MCP for current API patterns: +- Query: "[SDK name] [operation] python" +- Verify: Parameters match your installed SDK version +``` + +### 3. 
Degrees of Freedom + +Match specificity to task fragility: + +| Freedom | When | Example | +|---------|------|---------| +| **High** | Multiple valid approaches | Text guidelines | +| **Medium** | Preferred pattern with variation | Pseudocode | +| **Low** | Must be exact | Specific scripts | + +### 4. Progressive Disclosure + +Skills load in three levels: + +1. **Metadata** (~100 words) — Always in context +2. **SKILL.md body** (<5k words) — When skill triggers +3. **References** (unlimited) — As needed + +**Keep SKILL.md under 500 lines.** Split into reference files when approaching this limit. + +--- + +## Skill Structure + +``` +skill-name/ +├── SKILL.md (required) +│   ├── YAML frontmatter (name, description) +│   └── Markdown instructions +└── Bundled Resources (optional) +    ├── scripts/ — Executable code +    ├── references/ — Documentation loaded as needed +    └── assets/ — Output resources (templates, images) +``` + +### SKILL.md + +- **Frontmatter**: `name` and `description`. The description is the trigger mechanism. +- **Body**: Instructions loaded only after triggering. + +### Bundled Resources + +| Type | Purpose | When to Include | +|------|---------|-----------------| +| `scripts/` | Deterministic operations | Same code rewritten repeatedly | +| `references/` | Detailed patterns | API docs, schemas, detailed guides | +| `assets/` | Output resources | Templates, images, boilerplate | + +**Don't include**: README.md, CHANGELOG.md, installation guides. + +--- + +## Creating Azure SDK Skills + +When creating skills for Azure SDKs, follow these patterns consistently. + +### Skill Section Order + +Follow this structure (based on existing Azure SDK skills): + +1. **Title** — `# SDK Name` +2. **Installation** — `pip install`, `npm install`, etc. +3. **Environment Variables** — Required configuration +4. **Authentication** — Always `DefaultAzureCredential` +5. **Core Workflow** — Minimal viable example +6. 
**Feature Tables** — Clients, methods, tools +7. **Best Practices** — Numbered list +8. **Reference Links** — Table linking to `/references/*.md` + +### Authentication Pattern (All Languages) + +Always use `DefaultAzureCredential`: + +```python +# Python +from azure.identity import DefaultAzureCredential +credential = DefaultAzureCredential() +client = ServiceClient(endpoint, credential) +``` + +```csharp +// C# +var credential = new DefaultAzureCredential(); +var client = new ServiceClient(new Uri(endpoint), credential); +``` + +```java +// Java +TokenCredential credential = new DefaultAzureCredentialBuilder().build(); +ServiceClient client = new ServiceClientBuilder() + .endpoint(endpoint) + .credential(credential) + .buildClient(); +``` + +```typescript +// TypeScript +import { DefaultAzureCredential } from "@azure/identity"; +const credential = new DefaultAzureCredential(); +const client = new ServiceClient(endpoint, credential); +``` + +**Never hardcode credentials. Use environment variables.** + +### Standard Verb Patterns + +Azure SDKs use consistent verbs across all languages: + +| Verb | Behavior | +|------|----------| +| `create` | Create new; fail if exists | +| `upsert` | Create or update | +| `get` | Retrieve; error if missing | +| `list` | Return collection | +| `delete` | Succeed even if missing | +| `begin` | Start long-running operation | + +### Language-Specific Patterns + +See `references/azure-sdk-patterns.md` for detailed patterns including: + +- **Python**: `ItemPaged`, `LROPoller`, context managers, Sphinx docstrings +- **.NET**: `Response<T>`, `Pageable<T>`, `Operation<T>`, mocking support +- **Java**: Builder pattern, `PagedIterable`/`PagedFlux`, Reactor types +- **TypeScript**: `PagedAsyncIterableIterator`, `AbortSignal`, browser considerations + +### Example: Azure SDK Skill Structure + +```markdown +--- +name: azure-ai-example-py +description: | + Azure AI Example SDK for Python. Use for [specific service features]. 
+ Triggers: "example service", "create example", "list examples". +--- + +# Azure AI Example SDK + +## Installation + +\`\`\`bash +pip install azure-ai-example +\`\`\` + +## Environment Variables + +\`\`\`bash +AZURE_EXAMPLE_ENDPOINT=https://<your-resource>.example.azure.com +\`\`\` + +## Authentication + +\`\`\`python +import os + +from azure.identity import DefaultAzureCredential +from azure.ai.example import ExampleClient + +credential = DefaultAzureCredential() +client = ExampleClient( + endpoint=os.environ["AZURE_EXAMPLE_ENDPOINT"], + credential=credential +) +\`\`\` + +## Core Workflow + +\`\`\`python +# Create +item = client.create_item(name="example", data={...}) + +# List (pagination handled automatically) +for item in client.list_items(): + print(item.name) + +# Long-running operation +poller = client.begin_process(item_id) +result = poller.result() + +# Cleanup +client.delete_item(item_id) +\`\`\` + +## Reference Files + +| File | Contents | +|------|----------| +| [references/tools.md](references/tools.md) | Tool integrations | +| [references/streaming.md](references/streaming.md) | Event streaming patterns | +``` + +--- + +## Skill Creation Process + +1. **Gather SDK Context** — User provides SDK/API reference (REQUIRED) +2. **Understand** — Research SDK patterns from official docs +3. **Plan** — Identify reusable resources and product area category +4. **Create** — Write SKILL.md in `.github/skills/<skill-name>/` +5. **Categorize** — Create symlink in `skills/<language>/<category>/` +6. **Test** — Create acceptance criteria and test scenarios +7. **Document** — Update README.md skill catalog +8. 
**Iterate** β€” Refine based on real usage + +### Step 1: Gather SDK Context (REQUIRED) + +**Before creating any SDK skill, the user MUST provide:** + +| Required | Example | Purpose | +|----------|---------|---------| +| **SDK Package** | `azure-ai-agents`, `Azure.AI.OpenAI` | Identifies the exact SDK | +| **Documentation URL** | `https://learn.microsoft.com/en-us/azure/ai-services/...` | Primary source of truth | +| **Repository** (optional) | `Azure/azure-sdk-for-python` | For code patterns | + +**Prompt the user if not provided:** +``` +To create this skill, I need: +1. The SDK package name (e.g., azure-ai-projects) +2. The Microsoft Learn documentation URL or GitHub repo +3. The target language (py/dotnet/ts/java) +``` + +**Search official docs first:** +```bash +# Use microsoft-docs MCP to get current API patterns +# Query: "[SDK name] [operation] [language]" +# Verify: Parameters match the latest SDK version +``` + +### Step 2: Understand the Skill + +Gather concrete examples: + +- "What SDK operations should this skill cover?" +- "What triggers should activate this skill?" +- "What errors do developers commonly encounter?" + +| Example Task | Reusable Resource | +|--------------|-------------------| +| Same auth code each time | Code example in SKILL.md | +| Complex streaming patterns | `references/streaming.md` | +| Tool configurations | `references/tools.md` | +| Error handling patterns | `references/error-handling.md` | + +### Step 3: Plan Product Area Category + +Skills are organized by **language** and **product area** in the `skills/` directory via symlinks. 
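The language/category layout above can be sanity-checked mechanically. A minimal sketch (hypothetical throwaway paths, assuming a POSIX shell with GNU `find`): a category symlink whose skill directory was renamed or removed becomes a dangling link, which `find -xtype l` reports.

```shell
# Sketch (hypothetical paths): verify category symlinks resolve.
# Lay out one real skill, one good symlink, and one dangling symlink,
# then list symlinks whose targets are missing (GNU find's -xtype l).
mkdir -p /tmp/skills-demo/.github/skills/azure-ai-agents-py
mkdir -p /tmp/skills-demo/skills/python/foundry
cd /tmp/skills-demo/skills/python/foundry
ln -sfn ../../../.github/skills/azure-ai-agents-py agents   # target exists
ln -sfn ../../../.github/skills/azure-missing-skill broken  # target absent
find /tmp/skills-demo/skills -xtype l   # prints only the dangling link
```

Running this check before committing catches symlinks broken by renames in `.github/skills/`.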
+ +**Product Area Categories:** + +| Category | Description | Examples | +|----------|-------------|----------| +| `foundry` | AI Foundry, agents, projects, inference | `azure-ai-agents-py`, `azure-ai-projects-py` | +| `data` | Storage, Cosmos DB, Tables, Data Lake | `azure-cosmos-py`, `azure-storage-blob-py` | +| `messaging` | Event Hubs, Service Bus, Event Grid | `azure-eventhub-py`, `azure-servicebus-py` | +| `monitoring` | OpenTelemetry, App Insights, Query | `azure-monitor-opentelemetry-py` | +| `identity` | Authentication, DefaultAzureCredential | `azure-identity-py` | +| `security` | Key Vault, secrets, keys, certificates | `azure-keyvault-py` | +| `integration` | API Management, App Configuration | `azure-appconfiguration-py` | +| `compute` | Batch, ML compute | `azure-compute-batch-java` | +| `container` | Container Registry, ACR | `azure-containerregistry-py` | + +**Determine the category** based on: +1. Azure service family (Storage → `data`, Event Hubs → `messaging`) +2. Primary use case (AI agents → `foundry`) +3. Existing skills in the same service area + +### Step 4: Create the Skill + +**Location:** `.github/skills/<skill-name>/SKILL.md` + +**Naming convention:** +- `azure-<service>-<language>` +- Examples: `azure-ai-agents-py`, `azure-cosmos-java`, `azure-storage-blob-ts` + +**For Azure SDK skills:** + +1. Search `microsoft-docs` MCP for current API patterns +2. Verify against installed SDK version +3. Follow the section order above +4. Include cleanup code in examples +5. Add feature comparison tables + +**Write bundled resources first**, then SKILL.md. + +**Frontmatter:** + +```yaml +--- +name: skill-name-py +description: | + Azure Service SDK for Python. Use for [specific features]. + Triggers: "service name", "create resource", "specific operation". 
+--- +``` + +### Step 5: Categorize with Symlinks + +After creating the skill in `.github/skills/`, create a symlink in the appropriate category: + +```bash +# Pattern: skills/<language>/<category>/<name> -> ../../../.github/skills/<skill-name> + +# Example for azure-ai-agents-py in python/foundry: +cd skills/python/foundry +ln -s ../../../.github/skills/azure-ai-agents-py agents + +# Example for azure-cosmos-db-py in python/data: +cd skills/python/data +ln -s ../../../.github/skills/azure-cosmos-db-py cosmos-db +``` + +**Symlink naming:** +- Use short, descriptive names (e.g., `agents`, `cosmos`, `blob`) +- Remove the `azure-` prefix and language suffix +- Match existing patterns in the category + +**Verify the symlink:** +```bash +ls -la skills/python/foundry/agents +# Should show: agents -> ../../../.github/skills/azure-ai-agents-py +``` + +### Step 6: Create Tests + +**Every skill MUST have acceptance criteria and test scenarios.** + +#### 6.1 Create Acceptance Criteria + +**Location:** `.github/skills/<skill-name>/references/acceptance-criteria.md` + +**Source materials** (in priority order): +1. Official Microsoft Learn docs (via `microsoft-docs` MCP) +2. SDK source code from the repository +3. Existing reference files in the skill + +**Format:** +```markdown +# Acceptance Criteria: <skill-name> + +**SDK**: `package-name` +**Repository**: https://github.com/Azure/azure-sdk-for-<language> +**Purpose**: Skill testing acceptance criteria + +--- + +## 1. Correct Import Patterns + +### 1.1 Client Imports + +#### ✅ CORRECT: Main Client +\`\`\`python +from azure.ai.mymodule import MyClient +from azure.identity import DefaultAzureCredential +\`\`\` + +#### ❌ INCORRECT: Wrong Module Path +\`\`\`python +from azure.ai.mymodule.models import MyClient # Wrong - Client is not in models +\`\`\` +``` + +## 2. 
Authentication Patterns + +#### ✅ CORRECT: DefaultAzureCredential +\`\`\`python +credential = DefaultAzureCredential() +client = MyClient(endpoint, credential) +\`\`\` + +#### ❌ INCORRECT: Hardcoded Credentials +\`\`\`python +client = MyClient(endpoint, api_key="hardcoded") # Security risk +\`\`\` +``` + +**Critical patterns to document:** +- Import paths (these vary significantly between Azure SDKs) +- Authentication patterns +- Client initialization +- Async variants (`.aio` modules) +- Common anti-patterns + +#### 6.2 Create Test Scenarios + +**Location:** `tests/scenarios/<skill-name>/scenarios.yaml` + +```yaml +config: + model: gpt-4 + max_tokens: 2000 + temperature: 0.3 + +scenarios: + - name: basic_client_creation + prompt: | + Create a basic example using the Azure SDK. + Include proper authentication and client initialization. + expected_patterns: + - "DefaultAzureCredential" + - "MyClient" + forbidden_patterns: + - "api_key=" + - "hardcoded" + tags: + - basic + - authentication + mock_response: | + import os + from azure.identity import DefaultAzureCredential + from azure.ai.mymodule import MyClient + + credential = DefaultAzureCredential() + client = MyClient( + endpoint=os.environ["AZURE_ENDPOINT"], + credential=credential + ) + # ... 
rest of working example
+```
+
+**Scenario design principles:**
+- Each scenario tests ONE specific pattern or feature
+- `expected_patterns` — patterns that MUST appear
+- `forbidden_patterns` — common mistakes that must NOT appear
+- `mock_response` — complete, working code that passes all checks
+- `tags` — for filtering (`basic`, `async`, `streaming`, `tools`)
+
+#### 6.3 Run Tests
+
+```bash
+cd tests
+pnpm install
+
+# Check skill is discovered
+pnpm harness --list
+
+# Run in mock mode (fast, deterministic)
+pnpm harness --mock --verbose
+
+# Run with Ralph Loop (iterative improvement)
+pnpm harness --ralph --mock --max-iterations 5 --threshold 85
+```
+
+**Success criteria:**
+- All scenarios pass (100% pass rate)
+- No false positives (mock responses always pass)
+- Patterns catch real mistakes
+
+### Step 7: Update Documentation
+
+After creating the skill:
+
+1. **Update README.md** — Add the skill to the appropriate language section in the Skill Catalog
+   - Update total skill count (line ~73: `> N skills in...`)
+   - Update Skill Explorer link count (line ~15: `Browse all N skills`)
+   - Update language count table (lines ~77-83)
+   - Update language section count (e.g., `> N skills • suffix: -py`)
+   - Update category count (e.g., `Foundry & AI (N skills)`)
+   - Add skill row in alphabetical order within its category
+   - Update test coverage summary (line ~622: `**N skills with N test scenarios**`)
+   - Update test coverage table — update skill count, scenario count, and top skills for the language
+
+2. **Regenerate GitHub Pages data** — Run the extraction script to update the docs site
+   ```bash
+   cd docs-site && npx tsx scripts/extract-skills.ts
+   ```
+   This updates `docs-site/src/data/skills.json`, which feeds the Astro-based docs site.
+   Then rebuild the docs site:
+   ```bash
+   cd docs-site && npm run build
+   ```
+   This outputs to `docs/`, which is served by GitHub Pages.
+
+3. 
**Verify AGENTS.md** — Ensure the skill count is accurate
+
+---
+
+## Progressive Disclosure Patterns
+
+### Pattern 1: High-Level Guide with References
+
+```markdown
+# SDK Name
+
+## Quick Start
+[Minimal example]
+
+## Advanced Features
+- **Streaming**: See [references/streaming.md](references/streaming.md)
+- **Tools**: See [references/tools.md](references/tools.md)
+```
+
+### Pattern 2: Language Variants
+
+```
+azure-service-skill/
+├── SKILL.md (overview + language selection)
+└── references/
+    ├── python.md
+    ├── dotnet.md
+    ├── java.md
+    └── typescript.md
+```
+
+### Pattern 3: Feature Organization
+
+```
+azure-ai-agents/
+├── SKILL.md (core workflow)
+└── references/
+    ├── tools.md
+    ├── streaming.md
+    ├── async-patterns.md
+    └── error-handling.md
+```
+
+---
+
+## Design Pattern References
+
+| Reference | Contents |
+|-----------|----------|
+| `references/workflows.md` | Sequential and conditional workflows |
+| `references/output-patterns.md` | Templates and examples |
+| `references/azure-sdk-patterns.md` | Language-specific Azure SDK patterns |
+
+---
+
+## Anti-Patterns
+
+| Don't | Why |
+|-------|-----|
+| Create skill without SDK context | Users must provide package name/docs URL |
+| Put "when to use" in body | Body loads AFTER triggering |
+| Hardcode credentials | Security risk |
+| Skip authentication section | Agents will improvise poorly |
+| Use outdated SDK patterns | APIs change; search docs first |
+| Include README.md | Agents don't need meta-docs |
+| Deeply nest references | Keep one level deep |
+| Skip acceptance criteria | Skills without tests can't be validated |
+| Skip symlink categorization | Skills won't be discoverable by category |
+| Use wrong import paths | Azure SDKs have specific module structures |
+
+---
+
+## Checklist
+
+Before completing a skill:
+
+**Prerequisites:**
+- [ ] User provided SDK package name or documentation URL
+- [ ] Verified SDK patterns via 
`microsoft-docs` MCP
+
+**Skill Creation:**
+- [ ] Description includes what AND when (trigger phrases)
+- [ ] SKILL.md under 500 lines
+- [ ] Authentication uses `DefaultAzureCredential`
+- [ ] Includes cleanup/delete in examples
+- [ ] References organized by feature
+
+**Categorization:**
+- [ ] Skill created in `.github/skills/<skill-name>/`
+- [ ] Symlink created in `skills/<language>/<category>/<short-name>`
+- [ ] Symlink points to `../../../.github/skills/<skill-name>`
+
+**Testing:**
+- [ ] `references/acceptance-criteria.md` created with correct/incorrect patterns
+- [ ] `tests/scenarios/<skill-name>/scenarios.yaml` created
+- [ ] All scenarios pass (`pnpm harness --mock`)
+- [ ] Import paths documented precisely
+
+**Documentation:**
+- [ ] README.md skill catalog updated
+- [ ] SKILL.md instructs agents to search `microsoft-docs` MCP for current APIs
diff --git a/skills_index.json b/skills_index.json
index 3ec67ca0..34f366d8 100644
--- a/skills_index.json
+++ b/skills_index.json
@@ -566,6 +566,33 @@
     "risk": "safe",
     "source": "https://github.com/zxkane/aws-skills"
   },
+  {
+    "id": "azd-deployment",
+    "path": "skills/azd-deployment",
+    "category": "uncategorized",
+    "name": "azd-deployment",
+    "description": "Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration, creating Bicep infrastructure for Container Apps, configuring remote builds with ACR, implementing idempotent deployments, managing environment variables across local/.azure/Bicep, or troubleshooting azd up failures. Triggers on requests for azd configuration, Container Apps deployment, multi-service deployments, and infrastructure-as-code with Bicep.",
+    "risk": "unknown",
+    "source": "unknown"
+  },
+  {
+    "id": "azure-ai-agents-persistent-dotnet",
+    "path": "skills/azure-ai-agents-persistent-dotnet",
+    "category": "uncategorized",
+    "name": "azure-ai-agents-persistent-dotnet",
+    "description": "Azure AI Agents Persistent SDK for .NET. 
Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Use for agent CRUD, conversation threads, streaming responses, function calling, file search, and code interpreter. Triggers: \"PersistentAgentsClient\", \"persistent agents\", \"agent threads\", \"agent runs\", \"streaming agents\", \"function calling agents .NET\".\n", + "risk": "unknown", + "source": "unknown" + }, + { + "id": "azure-ai-agents-persistent-java", + "path": "skills/azure-ai-agents-persistent-java", + "category": "uncategorized", + "name": "azure-ai-agents-persistent-java", + "description": "Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.\nTriggers: \"PersistentAgentsClient\", \"persistent agents java\", \"agent threads java\", \"agent runs java\", \"streaming agents java\".\n", + "risk": "unknown", + "source": "unknown" + }, { "id": "azure-ai-anomalydetector-java", "path": "skills/azure-ai-anomalydetector-java", @@ -2555,6 +2582,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "copilot-sdk", + "path": "skills/copilot-sdk", + "category": "uncategorized", + "name": "copilot-sdk", + "description": "Build applications powered by GitHub Copilot using the Copilot SDK. Use when creating programmatic integrations with Copilot across Node.js/TypeScript, Python, Go, or .NET. Covers session management, custom tools, streaming, hooks, MCP servers, BYOK providers, session persistence, and custom agents. 
Requires GitHub Copilot CLI installed and a GitHub Copilot subscription (unless using BYOK).", + "risk": "unknown", + "source": "unknown" + }, { "id": "copy-editing", "path": "skills/copy-editing", @@ -3374,6 +3410,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "fastapi-router-py", + "path": "skills/fastapi-router-py", + "category": "uncategorized", + "name": "fastapi-router-py", + "description": "Create FastAPI routers with CRUD operations, authentication dependencies, and proper response models. Use when building REST API endpoints, creating new routes, implementing CRUD operations, or adding authenticated endpoints in FastAPI applications.", + "risk": "unknown", + "source": "unknown" + }, { "id": "fastapi-templates", "path": "skills/fastapi-templates", @@ -3806,6 +3851,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "github-issue-creator", + "path": "skills/github-issue-creator", + "category": "uncategorized", + "name": "github-issue-creator", + "description": "Convert raw notes, error logs, voice dictation, or screenshots into crisp GitHub-flavored markdown issue reports. Use when the user pastes bug info, error messages, or informal descriptions and wants a structured GitHub issue. 
Supports images/GIFs for visual evidence.", + "risk": "unknown", + "source": "unknown" + }, { "id": "github-workflow-automation", "path": "skills/github-workflow-automation", @@ -3977,6 +4031,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "hosted-agents-v2-py", + "path": "skills/hosted-agents-v2-py", + "category": "uncategorized", + "name": "hosted-agents-v2-py", + "description": "Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition.\nUse when creating container-based agents that run custom code in Azure AI Foundry.\nTriggers: \"ImageBasedHostedAgentDefinition\", \"hosted agent\", \"container agent\", \n\"create_version\", \"ProtocolVersionRecord\", \"AgentProtocol.RESPONSES\".\n", + "risk": "unknown", + "source": "unknown" + }, { "id": "hr-pro", "path": "skills/hr-pro", @@ -4616,6 +4679,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "mcp-builder-ms", + "path": "skills/mcp-builder-ms", + "category": "uncategorized", + "name": "mcp-builder", + "description": "Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP), Node/TypeScript (MCP SDK), or C#/.NET (Microsoft MCP SDK).", + "risk": "unknown", + "source": "unknown" + }, { "id": "memory-forensics", "path": "skills/memory-forensics", @@ -5372,6 +5444,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "podcast-generation", + "path": "skills/podcast-generation", + "category": "uncategorized", + "name": "podcast-generation", + "description": "Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, audio narrative generation, podcast creation from content, or integrating with Azure OpenAI Realtime API for real audio output. 
Covers full-stack implementation from React frontend to Python FastAPI backend with WebSocket streaming.", + "risk": "unknown", + "source": "unknown" + }, { "id": "popup-cro", "path": "skills/popup-cro", @@ -5570,6 +5651,15 @@ "risk": "unknown", "source": "unknown" }, + { + "id": "pydantic-models-py", + "path": "skills/pydantic-models-py", + "category": "uncategorized", + "name": "pydantic-models-py", + "description": "Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. Use when defining API request/response schemas, database models, or data validation in Python applications using Pydantic v2.", + "risk": "unknown", + "source": "unknown" + }, { "id": "pypict-skill", "path": "skills/pypict-skill", @@ -6335,6 +6425,15 @@ "risk": "safe", "source": "unknown" }, + { + "id": "skill-creator-ms", + "path": "skills/skill-creator-ms", + "category": "uncategorized", + "name": "skill-creator", + "description": "Guide for creating effective skills for AI coding agents working with Azure SDKs and Microsoft Foundry services. Use when creating new skills or updating existing skills.", + "risk": "unknown", + "source": "unknown" + }, { "id": "skill-developer", "path": "skills/skill-developer",