.github/workflows/ci.yml (2 changes)
@@ -50,6 +50,8 @@ jobs:
         continue-on-error: true

       - name: Run tests
+        env:
+          ENABLE_NETWORK_TESTS: "1"
         run: npm run test

       - name: 📦 Build catalog
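
The two added lines gate network-dependent tests behind an environment variable. A sketch of the consuming side, written in Python purely for illustration (the repo's actual suite runs via `npm run test`, so the class and test names here are hypothetical):

```python
import os
import unittest

# CI exports ENABLE_NETWORK_TESTS=1; local runs without it skip the slow tests.
NETWORK_ENABLED = os.environ.get("ENABLE_NETWORK_TESTS") == "1"

class RegistryNetworkTests(unittest.TestCase):
    @unittest.skipUnless(NETWORK_ENABLED, "set ENABLE_NETWORK_TESTS=1 to run")
    def test_fetch_remote_catalog(self):
        # Would hit the network; only runs when the CI flag is exported.
        self.assertTrue(NETWORK_ENABLED)

# Run the suite programmatically; skipped tests still count as a pass.
suite = unittest.TestLoader().loadTestsFromTestCase(RegistryNetworkTests)
result = unittest.TestResult()
suite.run(result)
print("skipped" if result.skipped else "ran")
```

The `continue-on-error: true` on the preceding step keeps the job green even if that step fails; the env-gated tests above fail the job normally when the flag is set.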
.gitignore (10 changes)
@@ -1,6 +1,12 @@
 node_modules/
 __pycache__/
 .ruff_cache/
 .worktrees/
 .tmp/
 .DS_Store
+
+# npm pack artifacts
+antigravity-awesome-skills-*.tgz
+
+walkthrough.md
+.agent/rules/
@@ -29,3 +35,7 @@ scripts/*count*.py
 
 # Optional baseline for legacy JS validator (scripts/validate-skills.js)
 validation-baseline.json
+
+# Web app generated assets (from npm run app:setup)
+web-app/public/skills/
+web-app/public/skills.json
CATALOG.md (551 changes; diff suppressed because it is too large)
CHANGELOG.md (385 changes)
@@ -7,6 +7,389 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

---

## [6.7.0] - 2026-03-01 - "Intelligence Extraction & Automation"

> **New skills for web scraping (Apify), X/Twitter extraction, genomic analysis, and hardened registry infrastructure.**

This release integrates 14 new specialized agent skills. Highlights include the official Apify collection for web scraping and data extraction, a high-performance X/Twitter scraper, and a comprehensive genomic analysis toolkit. The registry infrastructure has been hardened with hermetic testing and secure YAML parsing.

## 🚀 New Skills

### 🕷️ [apify-agent-skills](skills/apify-actorization/)

**12 official Apify skills for web scraping and automation.**
Scale data extraction using Apify Actors. Includes specialized skills for e-commerce, lead generation, social media analysis, and market research.

### 🐦 [x-twitter-scraper](skills/x-twitter-scraper/)

**High-performance X (Twitter) data extraction.**
Search tweets, fetch profiles, and extract media/engagement metrics without complex API setups.

### 🧬 [dna-claude-analysis](skills/dna-claude-analysis/)

**Personal genome analysis toolkit.**
Analyze raw DNA data across 17 categories (health, ancestry, pharmacogenomics) with interactive HTML visualization.

---

## 📦 Improvements

- **Registry Hardening**: Migrated all registry maintenance scripts to `PyYAML` for safe, lossless metadata handling. (PR #168)
- **Hermetic Testing**: Implemented environment-agnostic registry tests to prevent CI drift.
- **Contributor Sync**: Fully synchronized the Repo Contributors list in README.md from git history (69 total contributors).
- **Documentation**: Standardized H2 headers in README.md (no emojis) for clean Table of Contents anchors, following Maintenance V5 rules.
- **Skill Metadata**: Enhanced description validation and category consistency across 968 skills.
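
As a sketch of what "hermetic" means in practice (hypothetical validator and layout, not the repo's actual code): the test constructs its own registry fixture in a temporary directory, so results cannot depend on the developer's checkout, working directory, or environment.

```python
from pathlib import Path

def validate_registry(registry_dir: Path) -> list[str]:
    """Hypothetical validator: every skill directory must contain SKILL.md."""
    errors = []
    for skill_dir in sorted(p for p in registry_dir.iterdir() if p.is_dir()):
        if not (skill_dir / "SKILL.md").exists():
            errors.append(f"{skill_dir.name}: missing SKILL.md")
    return errors

def test_validator_is_hermetic(tmp_path):
    # Build the fixture inside the test: no dependence on the real repo,
    # CWD, or environment variables, so CI and local runs cannot drift.
    (tmp_path / "good-skill").mkdir()
    (tmp_path / "good-skill" / "SKILL.md").write_text("---\nname: good-skill\n---\n")
    (tmp_path / "bad-skill").mkdir()
    errors = validate_registry(tmp_path)
    assert errors == ["bad-skill: missing SKILL.md"]
```

With pytest, `tmp_path` provides the per-test temporary directory automatically.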

## 👥 Credits

A huge shoutout to our community contributors:

- **@ar27111994** for the 12 Apify skills and registry hardening (PRs #165 and #168)
- **@kriptoburak** for `x-twitter-scraper` (PR #164)
- **@shmlkv** for `dna-claude-analysis` (PR #167)

---

## [6.6.0] - 2026-02-28 - "Community Skills & Quality"

> **New skills for Android UI verification, memory handling, video manipulation, vibe-code auditing, and essential fixes.**

This release integrates major community contributions, adding skills for Android testing, scoped agent memory, vibe-code quality auditing, and the VideoDB SDK. It also addresses issues with skill metadata validation and enhances documentation consistency.

## 🚀 New Skills

### 📱 [android_ui_verification](skills/android_ui_verification/)

**Automated end-to-end UI testing on Android emulators.**
Detect layout issues, verify app state, and capture screenshots directly via ADB.

### 🧠 [hierarchical-agent-memory](skills/hierarchical-agent-memory/)

**Scoped CLAUDE.md memory system.**
Directory-level context files with a dashboard, significantly reducing token spend on repetitive queries.

### 🎥 [videodb-skills](skills/videodb-skills/)

**The ultimate video processing toolkit.**
Upload, stream, search, edit, transcribe, and generate AI video/audio using the VideoDB SDK.

### 🕵️ [vibe-code-auditor](skills/vibe-code-auditor/)

**AI-code-specific quality assessments.**
Check prototypes and generated code for structural flaws, hidden technical debt, and production risks.

---

## 📦 Improvements

- **Skill Description Restoration**: Recovered 223+ truncated descriptions from git history that were corrupted in release 6.5.0.
- **Robust YAML Tooling**: Replaced fragile regex parsing with `PyYAML` across all maintenance scripts (`manage_skill_dates.py`, `validate_skills.py`, etc.) to prevent future data loss.
- **Refined Descriptions**: Standardized all skill descriptions to be under 200 characters while maintaining grammatical correctness and functional value.
- **Cross-Platform Index**: Normalized `skills_index.json` to use forward slashes for universal path compatibility.
- **Skill Validation Fixes**: Corrected invalid description lengths and `risk` fields in `copywriting`, `videodb-skills`, and `vibe-code-auditor`. (Fixes #157, #158)
- **Documentation**: New dedicated `docs/SEC_SKILLS.md` indexing all 128 security skills.
- **README Quality**: Cleaned up inconsistencies, deduplicated lists, updated stats (954+ total skills).
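
A minimal sketch of the regex-to-PyYAML switch, assuming a conventional `---`-fenced frontmatter block (the field names are illustrative): `yaml.safe_load` refuses arbitrary object tags, and `safe_dump` round-trips the metadata without the corruption that regex splicing can cause.

```python
import yaml  # PyYAML

SKILL_MD = """---
name: example-skill
description: Does a thing.
risk: safe
---
# Example skill body
"""

def load_frontmatter(text: str) -> dict:
    """Parse the YAML frontmatter between the first two '---' fences."""
    _, fm, _body = text.split("---", 2)
    # safe_load never instantiates arbitrary Python objects,
    # unlike yaml.load with a full Loader.
    return yaml.safe_load(fm)

def dump_frontmatter(meta: dict) -> str:
    # sort_keys=False keeps the author's field order, so edits stay lossless.
    return yaml.safe_dump(meta, sort_keys=False, allow_unicode=True)

meta = load_frontmatter(SKILL_MD)
meta["date_added"] = "2026-02-28"
print(dump_frontmatter(meta))
```

A regex that assumed `description:` fits on one line is exactly how multi-line descriptions get truncated; a real parser sidesteps that whole failure class.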

## 👥 Credits

A huge shoutout to our community contributors:

- **@alexmvie** for `android_ui_verification`
- **@talesperito** for `vibe-code-auditor`
- **@djmahe4** for `docs/SEC_SKILLS.md`
- **@kromahlusenii-ops** for `hierarchical-agent-memory`
- **@0xrohitgarg** for `videodb-skills`
- **@nedcodes-ok** for the `rule-porter` addition
- **@acbhatt12** for `README.md` improvements (PR #162)

---

## [6.5.0] - 2026-02-27 - "Community & Experience"

> **Major UX upgrade: Stars feature, auto-updates, interactive prompts, and complete date tracking for all 950+ skills.**

This release introduces significant community-driven enhancements to the web application alongside comprehensive metadata improvements. Users can now upvote skills, build contextual prompts interactively, and benefit from automatic skill updates. All skills now include date tracking for better discoverability.

## 🚀 New Features

### ⭐ Stars & Community Upvotes

**Community-driven skill discovery with a star/upvote system.**

- Upvote skills you find valuable, visible to all users
- Star counts persist via a Supabase backend
- One upvote per browser (localStorage deduplication)
- Discover popular skills through community ratings

> **Try it:** Browse to any skill and click the ⭐ button to upvote!

### 🔄 Auto-Update Mechanism

**Seamless skill updates via START_APP.bat.**

- Automatic skill synchronization on app startup
- Git-based fast updates when available
- PowerShell HTTPS fallback for non-Git environments
- Surgical updates: only the `/skills/` folder is touched, to avoid conflicts

> **Try it:** Run `START_APP.bat` to automatically fetch the latest 950+ skills!

### 🛠️ Interactive Prompt Builder

**Build contextual prompts directly in skill detail pages.**

- Add custom context to any skill (e.g., "Use React 19 and Tailwind")
- Copy a formatted prompt combining the skill invocation and your context
- Copy the full skill content with a context overlay
- Streamlined workflow for AI assistant interactions

> **Try it:** Visit any skill, add context in the text box, click "Copy @Skill"!

### 📅 Date Tracking for All Skills

**Complete `date_added` metadata across the entire registry.**

- All 950+ skills now include a `date_added` field
- Visible badges in skill detail pages
- Filter and sort by recency
- Better discoverability of new capabilities
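
For illustration, filtering and sorting by `date_added` could look like the following (the record shape below is hypothetical; consult `skills_index.json` for the real schema):

```python
from datetime import date

# Hypothetical excerpt of the skills index; the real schema may differ.
index = [
    {"name": "x-twitter-scraper", "date_added": "2026-03-01"},
    {"name": "convex", "date_added": "2026-02-27"},
    {"name": "agentfolio", "date_added": "2026-02-25"},
]

# "Filter and sort by recency": newest first, cut off at a chosen date.
# ISO 8601 dates sort correctly as plain strings, too.
recent = sorted(
    (s for s in index if date.fromisoformat(s["date_added"]) >= date(2026, 2, 27)),
    key=lambda s: s["date_added"],
    reverse=True,
)
print([s["name"] for s in recent])  # → ['x-twitter-scraper', 'convex']
```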

## 📦 Improvements

- **Smart Auto-Categorization**: Categories sorted by skill count, with "uncategorized" at the end
- **Category Stats**: Dropdown shows the skill count per category
- **Enhanced Home Page**: Risk-level badges and date display on skill cards
- **Complete Date Coverage**: All skills updated with `date_added` metadata
- **Web App Dependencies**: Automatic `@supabase/supabase-js` installation

## 👥 Credits

A huge shoutout to our community contributors:

- **@zinzied** for the comprehensive UX enhancement (Stars, Auto-Update, Prompt Builder, Date Tracking, Auto-Categorization; PR #150)

---

## [6.4.1] - 2026-02-27 - "Temporal & Convex Backend Hotfix"

> **Hotfix release: Temporal Go expert skill, Convex reactive backend, and strict-compliant SEO incident/local audit fixes.**

This release builds on 6.4.0 by adding a Temporal Go SDK pro skill and a comprehensive Convex reactive backend skill, and by aligning the new SEO incident/local audit skills with the strict validation rules so they ship cleanly via npm.

## 🚀 New Skills

### ⏱️ [temporal-golang-pro](skills/temporal-golang-pro/)

**Temporal Go SDK expert for durable distributed systems.**
Guides production-grade Temporal Go usage with deterministic workflow rules, mTLS worker configuration, interceptors, testing strategies, and advanced patterns.

- **Key Feature 1**: Covers workflow determinism, versioning, durable concurrency, and long-running workflow patterns.
- **Key Feature 2**: Provides mTLS-secured worker setup, interceptors, and replay/time-skipping test strategies.

> **Try it:** `Use temporal-golang-pro to design a durable subscription billing workflow with safe versioning and mTLS workers.`

### 🔄 [convex](skills/convex/)

**Convex reactive backend for schema, functions, and real-time apps.**
Full-stack backend skill covering Convex schema design, TypeScript query/mutation/action functions, real-time subscriptions, auth, file storage, scheduling, and deployment flows.

- **Key Feature 1**: End-to-end examples for schema validators, function types, pagination, and client integration.
- **Key Feature 2**: Documents auth options (Convex Auth, Clerk, Better Auth) and operational patterns (cron, storage, environments).

> **Try it:** `Use convex to design a schema and function set for a real-time dashboard with authenticated users and file uploads.`

## 📦 Improvements

- **Strict SEO Skills Compliance**: `seo-forensic-incident-response` and `local-legal-seo-audit` now include `## When to Use` sections and concise descriptions, and use `risk: safe`, fully passing `validate_skills.py --strict`.
- **Catalog & Index Sync**: Updated `CATALOG.md`, `data/catalog.json`, `skills_index.json`, `data/bundles.json`, `data/aliases.json`, and `README.md` to track **950+ skills**, including `temporal-golang-pro`, `convex`, and the new SEO skills.

## 👥 Credits

- **@HuynhNhatKhanh** for the Temporal Go SDK expert skill (`temporal-golang-pro`, PR #148).
- **@chauey** for the Convex reactive backend skill (`convex`, PR #152).
- **@talesperito** for the SEO forensic incident response and local legal SEO skills, and for collaboration on the strict-compliant refinements (PRs #153 / #154).

---

## [6.4.0] - 2026-02-27 - "SEO Incident Response & Legal Local Audit"

> **Focused release: specialized SEO incident response and legal local SEO audit skills, plus catalog sync.**

This release adds two advanced SEO skills, for handling organic traffic incidents and for auditing legal/professional-services sites, and updates the public catalog to keep discovery aligned with the registry.

## 🚀 New Skills

### 🧪 [seo-forensic-incident-response](skills/seo-forensic-incident-response/)

**Forensic SEO incident response for sudden drops in organic traffic or rankings.**
Guides structured triage, hypothesis-driven investigation, evidence collection, and phased recovery plans using GSC, analytics, logs, and deployment history.

- **Key Feature 1**: Classifies incidents across algorithmic, technical, manual-action, content, and demand-change buckets.
- **Key Feature 2**: Produces a forensic report with 0–3 day, 3–14 day, and 2–8 week action plans plus monitoring.

> **Try it:** `We lost 40% of organic traffic last week. Use seo-forensic-incident-response to investigate and propose a recovery plan.`

### ⚖️ [local-legal-seo-audit](skills/local-legal-seo-audit/)

**Local SEO auditing for law firms and legal/professional services.**
Specialized audit framework for YMYL legal sites covering GBP, E‑E‑A‑T, practice-area pages, NAP consistency, legal directories, and reputation.

- **Key Feature 1**: Step-by-step GBP, directory, and NAP audit tailored to legal practices.
- **Key Feature 2**: Generates a prioritized action plan and content strategy for legal/local search.

> **Try it:** `Audit the local SEO of this law firm website using local-legal-seo-audit and propose the top 10 fixes.`

## 📦 Improvements

- **Catalog Sync**: Updated `CATALOG.md` and `data/catalog.json` to track 947 skills and include `10-andruia-skill-smith` in the general category listing.
- **Documentation**: README now references the MojoAuth implementation skill in the integrations list.

## 👥 Credits

A huge shoutout to our community contributors:

- **@talesperito** for the SEO forensic incident response and legal local SEO audit skills (PRs #153 / #154).
- **@developer-victor** for the MojoAuth implementation README integration (PR #149).

---

## [6.3.1] - 2026-02-25 - "Validation & Multi-Protocol Hotfix"

> **Hotfix release to restore missing skills, correct industrial risk labels, and harden validation across the registry.**

This release fixes critical validation errors introduced in previous PRs, ensures full compliance with the strict CI registry checks, and restores two high-demand developer skills.

## 🚀 New Skills

### 🧩 [chrome-extension-developer](skills/chrome-extension-developer/)

**Expert in building Chrome extensions with Manifest V3.**
Senior expertise in modern extension architecture, focusing on Manifest V3, service workers, and production-ready security practices.

- **Key Feature 1**: Comprehensive coverage of Manifest V3 service workers and their lifecycle.
- **Key Feature 2**: Production-ready patterns for cross-context message passing.

> **Try it:** `Help me design a Manifest V3 extension that monitors network requests using declarativeNetRequest.`

### ☁️ [cloudflare-workers-expert](skills/cloudflare-workers-expert/)

**Senior expertise for serverless edge computing on Cloudflare.**
Specialized in edge architectures, performance optimization, and the full Cloudflare developer ecosystem (Wrangler, KV, D1, R2).

- **Key Feature 1**: Optimized patterns for 0ms cold starts and edge-side storage.
- **Key Feature 2**: Implementation guides for Durable Objects and R2 storage integration.

> **Try it:** `Build a Cloudflare Worker that modifies response headers and caches fragmented data in KV.`

---

## 📦 Improvements

- **Registry Update**: Now tracking 946+ high-performance skills.
- **Validation Hardening**: Resolved missing "When to Use" sections for 11 critical skills (Andru.ia, Logistics, Energy).
- **Risk Label Corrections**: Corrected risk levels to `safe` for `linkedin-cli`, `00-andruia-consultant`, and `20-andruia-niche-intelligence`.

## 👥 Credits

A huge shoutout to our community contributors:

- **@itsmeares** for PR #139 validation fixes and "When to Use" improvements.

---

_Upgrade now: `git pull origin main` to fetch the latest skills._
## [6.3.0] - 2026-02-25 - "Agent Discovery & Operational Excellence"

> **Feature release: AgentFolio discovery skill, LinkedIn CLI automation, Evos operational skills, Andru.ia consulting roles, and hardened validation for new contributors.**

## 🚀 New Skills

### 🔍 [agentfolio](skills/agentfolio/)

**Discover and research autonomous AI agents.**
Skill for discovering and researching autonomous AI agents, tools, and ecosystems using the AgentFolio directory.

- **Key Feature 1**: Discover agents for specific use cases.
- **Key Feature 2**: Collect concrete examples and benchmarks for agent capabilities.

> **Try it:** `Use AgentFolio to find 3 autonomous AI agents focused on code review.`

### 💼 [linkedin-cli](skills/linkedin-cli/)

**Automate LinkedIn operations via CLI.**
CLI-based LinkedIn automation skill using `@linkedapi/linkedin-cli` for profile enrichment, outreach, Sales Navigator, and workflow execution.

- **Key Feature 1**: Fetch profiles and search people/companies.
- **Key Feature 2**: Manage connections and send messages via Sales Navigator.

> **Try it:** `Use linkedin-cli to search for PMs in San Francisco.`

### 🚀 [appdeploy](skills/appdeploy/)

**Deploy full-stack web apps.**
Deploy web apps with backend APIs, a database, and file storage via an HTTP API, and get an instant public URL.

- **Key Feature 1**: Chat-native deployment orchestrator.
- **Key Feature 2**: Support for frontend-only and frontend+backend architectures.

> **Try it:** `Deploy this React-Vite dashboard using appdeploy.`

### 🐹 [grpc-golang](skills/grpc-golang/)

**Production-grade gRPC patterns in Go.**
Build robust microservice communication using Protobuf, with mTLS, streaming, and observability configurations.

- **Key Feature 1**: Standardize API contracts with Protobuf and Buf.
- **Key Feature 2**: Implement service-to-service authentication and structured metrics.

> **Try it:** `Use grpc-golang to define a user service streaming endpoint with mTLS.`

### 📦 [logistics-exception-management](skills/logistics-exception-management/)

**Expertise for handling freight and carrier disputes.**
Deeply codified operational playbook for handling shipping exceptions, delays, damages, and claims. Part of the Evos operational domain expertise suite, alongside `carrier-relationship-management`, `customs-trade-compliance`, `inventory-demand-planning`, `production-scheduling`, `returns-reverse-logistics`, `energy-procurement`, and `quality-nonconformance`.

- **Key Feature 1**: Provides escalation protocols and severity classification for exceptions.
- **Key Feature 2**: Delivers templates and decision frameworks for claim management across various delivery modes.

> **Try it:** `We have a delayed LTL shipment for a key customer; how should we handle it per logistics-exception-management?`

### 🏗️ [00-andruia-consultant](skills/00-andruia-consultant/)

**Spanish-language solutions architect.**
Diagnoses AI projects and charts their optimal roadmap, in Spanish. Companion skill: `20-andruia-niche-intelligence`.

- **Key Feature 1**: Provides diagnostic interviews for greenfield or existing projects.
- **Key Feature 2**: Proposes the required squad of experts and generates backlog artifacts in Spanish.

> **Try it:** `Actúa como 00-andruia-consultant y diagnostica este nuevo workspace.`

## 📦 Improvements

- **Validation & Quality Bar**:
  - Normalized `risk:` labels for new skills to conform to the allowed set (`none`, `safe`, `critical`, `offensive`, `unknown`).
  - Added explicit `## When to Use` sections to new operational and contributor skills to keep the registry strictly compatible with `python3 scripts/validate_skills.py --strict`.
- **Interactive Web App**: The auto-updating local web app launcher and **Interactive Prompt Builder** enhancements (PR #137) now ship as part of the v6.3.0 baseline.
- **Registry**: The validation chain (`npm run chain` + `npm run validate:strict`) runs clean at 6.3.0, with all new skills indexed in `skills_index.json`, `data/catalog.json`, and `CATALOG.md`.
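
An illustrative reduction of those strict rules (this is not the actual `scripts/validate_skills.py`; the function signature and messages are made up, only the rule set is taken from the notes above):

```python
import re

# Allowed risk labels, per the validation rules described in this changelog.
ALLOWED_RISK = {"none", "safe", "critical", "offensive", "unknown"}

def strict_check(name: str, description: str, risk: str, body: str) -> list[str]:
    """Illustrative subset of the --strict checks."""
    errors = []
    if risk not in ALLOWED_RISK:
        errors.append(f"{name}: risk '{risk}' not in allowed set")
    if len(description) > 200:
        errors.append(f"{name}: description exceeds 200 characters")
    if not re.search(r"^## When to Use\b", body, flags=re.MULTILINE):
        errors.append(f"{name}: missing '## When to Use' section")
    return errors

errors = strict_check(
    "example-skill",
    "Automate LinkedIn operations via CLI.",
    "experimental",                      # not in the allowed set
    "# example-skill\n\nNo usage section here.\n",
)
print(errors)  # two findings: bad risk label, missing section
```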

## 👥 Credits

- **@bobrenze-bot** for proposing the AgentFolio integration (Issue #136).
- **@vprudnikoff** for the `linkedin-cli` skill (PR #131).
- **@Onsraa** for the Bevy ECS documentation update around Required Components (PR #132).
- **@Abdulrahmansoliman** for the AdaL CLI README instructions (PR #133).
- **@avimak** for the `appdeploy` deployment skill (PR #134).
- **@HuynhNhatKhanh** for the gRPC Go production patterns skill (PR #135).
- **@zinzied** for the auto-updating web app launcher & Interactive Prompt Builder (PR #137).
- **@nocodemf** for the Evos operational domain skills (PR #138).

---

## [6.2.0] - 2026-02-24 - "Interactive Web App & AWS IaC"

> **Feature release: Interactive Skills Web App, AWS Infrastructure as Code skills, and Chrome Extension / Cloudflare Workers developer skills.**

@@ -370,6 +753,8 @@ This release significantly upgrades our 3D visualization capabilities with a com

---

## [5.2.0] - 2026-02-13 - "Podcast Generation & Azure AI Skills"

> **New AI capabilities: Podcast Generation, Azure Identity, and Self-Evolving Agents.**

### Added
README.md (219 changes)
@@ -1,6 +1,6 @@
-# 🌌 Antigravity Awesome Skills: 930+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More
+# 🌌 Antigravity Awesome Skills: 968+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More
 
-> **The Ultimate Collection of 930+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL**
+> **The Ultimate Collection of 968+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL**
 
 [](https://opensource.org/licenses/MIT)
 [](https://claude.ai)
@@ -17,7 +17,7 @@
 
 If this project helps you, you can [support it here](https://buymeacoffee.com/sickn33) or simply ⭐ the repo.
 
-**Antigravity Awesome Skills** is a curated, battle-tested library of **930 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:
+**Antigravity Awesome Skills** is a curated, battle-tested library of **968+ high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:
 
 - 🟣 **Claude Code** (Anthropic CLI)
 - 🔵 **Gemini CLI** (Google DeepMind)
@@ -30,7 +30,7 @@ If this project helps you, you can [support it here](https://buymeacoffee.com/si
 - ⚪ **OpenCode** (Open-source CLI)
 - 🌸 **AdaL CLI** (Self-evolving Coding Agent)
 
-This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Microsoft**, **Supabase**, and **Vercel Labs**.
+This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Microsoft**, **Supabase**, **Apify**, and **Vercel Labs**.
 
 ## Table of Contents
@@ -42,25 +42,24 @@ This repository provides essential skills to transform your AI assistant into a
 - [🎁 Curated Collections (Bundles)](#curated-collections)
 - [🧭 Antigravity Workflows](#antigravity-workflows)
 - [📦 Features & Categories](#features--categories)
-- [📚 Browse 930+ Skills](#browse-930-skills)
+- [📚 Browse 968+ Skills](#browse-968-skills)
 - [🤝 How to Contribute](#how-to-contribute)
-- [🤝 Community](#community)
+- [💬 Community](#community)
 - [☕ Support the Project](#support-the-project)
-- [👥 Contributors & Credits](#credits--sources)
+- [🏆 Credits & Sources](#credits--sources)
+- [👥 Repo Contributors](#repo-contributors)
 - [⚖️ License](#license)
 - [🌟 Star History](#star-history)
 - [🏷️ GitHub Topics](#github-topics)
 
 ---
 
 ## New Here? Start Here!
 
-**Welcome to the V6.2.0 Interactive Web Edition.** This isn't just a list of scripts; it's a complete operating system for your AI Agent.
+**Welcome to the V6.7.0 Interactive Web Edition.** This isn't just a list of scripts; it's a complete operating system for your AI Agent.
 
 ### 1. 🐣 Context: What is this?
 
-**Antigravity Awesome Skills** (Release 6.2.0) is a massive upgrade to your AI's capabilities.
+**Antigravity Awesome Skills** (Release 6.5.0) is a massive upgrade to your AI's capabilities.
 
 AI Agents (like Claude Code, Cursor, or Gemini) are smart, but they lack **specific tools**. They don't know your company's "Deployment Protocol" or the specific syntax for "AWS CloudFormation".
 **Skills** are small markdown files that teach them how to do these specific tasks perfectly, every time.
@@ -108,13 +107,13 @@ Once installed, just ask your agent naturally:
 
 These skills follow the universal **SKILL.md** format and work with any AI coding assistant that supports agentic skills.
 
-| Tool            | Type | Invocation Example                | Path              |
-| :-------------- | :--- | :-------------------------------- | :---------------- |
-| **Claude Code** | CLI  | `>> /skill-name help me...`       | `.claude/skills/` |
-| **Gemini CLI**  | CLI  | `(User Prompt) Use skill-name...` | `.gemini/skills/` |
-| **Codex CLI**   | CLI  | `(User Prompt) Use skill-name...` | `.codex/skills/`  |
-| **Kiro CLI**    | CLI  | `(Auto) Skills load on-demand`    | Global: `~/.kiro/skills/` · Workspace: `.kiro/skills/` |
-| **Kiro IDE**    | IDE  | `/skill-name or (Auto)`           | Global: `~/.kiro/skills/` · Workspace: `.kiro/skills/` |
+| Tool            | Type | Invocation Example                | Path                                                                  |
+| :-------------- | :--- | :-------------------------------- | :-------------------------------------------------------------------- |
+| **Claude Code** | CLI  | `>> /skill-name help me...`       | `.claude/skills/`                                                     |
+| **Gemini CLI**  | CLI  | `(User Prompt) Use skill-name...` | `.gemini/skills/`                                                     |
+| **Codex CLI**   | CLI  | `(User Prompt) Use skill-name...` | `.codex/skills/`                                                      |
+| **Kiro CLI**    | CLI  | `(Auto) Skills load on-demand`    | Global: `~/.kiro/skills/` · Workspace: `.kiro/skills/`                |
+| **Kiro IDE**    | IDE  | `/skill-name or (Auto)`           | Global: `~/.kiro/skills/` · Workspace: `.kiro/skills/`                |
 | **Antigravity** | IDE  | `(Agent Mode) Use skill...`       | Global: `~/.gemini/antigravity/skills/` · Workspace: `.agent/skills/` |
 | **Cursor**      | IDE  | `@skill-name (in Chat)`           | `.cursor/skills/`                                                     |
 | **Copilot**     | Ext  | `(Paste content manually)`        | N/A                                                                   |
@@ -168,6 +167,9 @@ npx antigravity-awesome-skills --kiro
 # OpenCode
 npx antigravity-awesome-skills --path .agents/skills
 
+# AdaL CLI
+npx antigravity-awesome-skills --path .adal/skills
+
 # Workspace-specific (e.g. .agent/skills for Antigravity workspace)
 npx antigravity-awesome-skills --path ~/.agent/skills
@@ -191,9 +193,6 @@ git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skill
 # Kiro CLI/IDE global
 git clone https://github.com/sickn33/antigravity-awesome-skills.git ~/.kiro/skills
 
-# Kiro CLI/IDE workspace
-git clone https://github.com/sickn33/antigravity-awesome-skills.git .kiro/skills
-
 # Claude Code specific
 git clone https://github.com/sickn33/antigravity-awesome-skills.git .claude/skills
@@ -203,14 +202,14 @@ git clone https://github.com/sickn33/antigravity-awesome-skills.git .gemini/skil
 # Codex CLI specific
 git clone https://github.com/sickn33/antigravity-awesome-skills.git .codex/skills
 
 # Kiro CLI specific
 git clone https://github.com/sickn33/antigravity-awesome-skills.git .kiro/skills
 
 # Cursor specific
 git clone https://github.com/sickn33/antigravity-awesome-skills.git .cursor/skills
 
 # OpenCode
 git clone https://github.com/sickn33/antigravity-awesome-skills.git .agents/skills
 
 # AdaL CLI specific
 git clone https://github.com/sickn33/antigravity-awesome-skills.git .adal/skills
 ```
 
 ### Option C: Kiro IDE Import (GUI)
@@ -252,7 +251,10 @@ Install to the tool-specific path. Use installer flags: `--antigravity` (default
 
 **Good news!** You no longer need to manually run `git pull` or `npx antigravity-awesome-skills` to update your skills.
 
-Simply double-click **`START_APP.bat`** (or run it in your terminal). It will automatically fetch and merge the latest skills from the original repository every time you open the Web App, ensuring you always have the most up-to-date catalog!
+- **Windows:** Double-click **`START_APP.bat`** (or run it in your terminal).
+- **macOS/Linux:** Run `cd web-app && npm run app:dev` from the repo root.
+
+Both methods automatically fetch and merge the latest skills from the original repository every time you open the Web App, ensuring you always have the most up-to-date catalog.
 
 ### Reinstall from scratch
@@ -267,7 +269,7 @@ npx antigravity-awesome-skills
 
 **Bundles** are curated groups of skills for a specific role or goal (for example: `Web Wizard`, `Security Engineer`, `OSS Maintainer`).
 
-They help you avoid picking from 927+ skills one by one.
+They help you avoid picking from 950+ skills one by one.
 
 ### ⚠️ Important: Bundles Are NOT Separate Installations!
@@ -339,24 +341,62 @@ The repository is organized into specialized domains to transform your AI into a

Counts change as new skills are added. For the current full registry, see [CATALOG.md](CATALOG.md).

## Browse 968+ Skills

We have moved the full skill registry to a dedicated catalog to keep this README clean, and we've also introduced an interactive **Web App**!
### 🌐 Interactive Skills Web App

A modern web interface to explore, search, and use the 950+ skills directly from your browser.

#### ✨ Features

- 🔍 **Full-text search** – Search skills by name, description, or content
- 🏷️ **Category filters** – Frontend, Backend, Security, DevOps, etc.
- 📝 **Markdown rendering** – View complete documentation with syntax highlighting
- 📋 **Copy buttons** – Copy `@skill-name` or full content in one click
- 🛠️ **Prompt Builder** – Add custom context before copying
- 🌙 **Dark mode** – Adaptive light/dark interface
- ⚡ **Auto-update** – Automatically syncs with the upstream repo
#### 🚀 Quick Start

**Windows:**

```bash
# Double-click or run from a terminal
START_APP.bat
```

**macOS/Linux:**

```bash
# 1. Install dependencies (first time)
cd web-app && npm install

# 2. Set up assets and launch
npm run app:dev
```

**Available npm commands:**

```bash
npm run app:setup    # Copy skills to web-app/public/
npm run app:dev      # Start the dev server
npm run app:build    # Production build
npm run app:preview  # Preview the production build
```

The app automatically opens at `http://localhost:5173` (or an alternative port).

#### 🛠️ Interactive Prompt Builder

On each skill page you'll find the **Interactive Prompt Builder**. Instead of manually copying `@skill-name` and writing your requirements separately in your IDE:

1. Type your specific project constraints into the text box (e.g., "Use React 19, TypeScript and Tailwind").
2. Click **Copy Prompt** to copy a fully formatted, ready-to-run prompt combining `@skill-name` and your custom context.
3. Or click **Copy Full Content** to copy the full skill documentation.
4. Paste into your AI assistant (Claude, Cursor, Gemini, etc.).
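Under the hood, the builder does little more than prepend the skill handle to your text. A minimal sketch of the idea (the function name and exact output format are hypothetical, not the app's actual code):

```python
def build_prompt(skill: str, context: str) -> str:
    # Hypothetical reconstruction: the skill invocation first, then the
    # user's project constraints, separated by a blank line.
    return f"@{skill}\n\n{context}"

print(build_prompt("react-ui-patterns", "Use React 19, TypeScript and Tailwind."))
```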

👉 **[View the Complete Skill Catalog (CATALOG.md)](CATALOG.md)**

@@ -432,12 +472,12 @@ This collection would not be possible without the incredible work of the Claude

- **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Supabase official skills - Postgres Best Practices.
- **[microsoft/skills](https://github.com/microsoft/skills)**: Official Microsoft skills - Azure cloud services, Bot Framework, Cognitive Services, and enterprise development patterns across .NET, Python, TypeScript, Go, Rust, and Java.
- **[google-gemini/gemini-skills](https://github.com/google-gemini/gemini-skills)**: Official Gemini skills - Gemini API, SDK, and model interactions.
- **[apify/agent-skills](https://github.com/apify/agent-skills)**: Official Apify skills - Web scraping, data extraction, and automation.

### Community Contributors
- **[rmyndharis/antigravity-skills](https://github.com/rmyndharis/antigravity-skills)**: For the massive contribution of 300+ Enterprise skills and the catalog generation logic.
- **[amartelr/antigravity-workspace-manager](https://github.com/amartelr/antigravity-workspace-manager)**: Official Workspace Manager CLI companion to dynamically auto-provision subsets of skills across unlimited local development environments.
- **[obra/superpowers](https://github.com/obra/superpowers)**: The original "Superpowers" by Jesse Vincent.
- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: Core Antigravity extensions.
- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure and Backend/Frontend Guidelines.

@@ -457,7 +497,11 @@ This collection would not be possible without the incredible work of the Claude

- **[webzler/agentMemory](https://github.com/webzler/agentMemory)**: Source for the agent-memory-mcp skill.
- **[sstklen/claude-api-cost-optimization](https://github.com/sstklen/claude-api-cost-optimization)**: Save 50-90% on Claude API costs with smart optimization strategies (MIT).
- **[Wittlesus/cursorrules-pro](https://github.com/Wittlesus/cursorrules-pro)**: Professional .cursorrules configurations for 8 frameworks - Next.js, React, Python, Go, Rust, and more. Works with Cursor, Claude Code, and Windsurf.
- **[SSOJet/skills](https://github.com/ssojet/skills)**: Production-ready SSOJet skills and integration guides for popular frameworks and platforms — Node.js, Next.js, React, Java, .NET Core, Go, iOS, Android, and more. Works seamlessly with SSOJet SAML, OIDC, and enterprise SSO flows. Works with Cursor, Antigravity, Claude Code, and Windsurf.
- **[nedcodes-ok/rule-porter](https://github.com/nedcodes-ok/rule-porter)**: Bidirectional rule converter between Cursor (.mdc), Claude Code (CLAUDE.md), GitHub Copilot, Windsurf, and legacy .cursorrules formats. Zero dependencies.
- **[MojoAuth/skills](https://github.com/MojoAuth/skills)**: Production-ready MojoAuth guides and examples for popular frameworks like Node.js, Next.js, React, Java, .NET Core, Go, iOS, and Android.
- **[Xquik-dev/x-twitter-scraper](https://github.com/Xquik-dev/x-twitter-scraper)**: X (Twitter) data platform — tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, MCP server.
- **[shmlkv/dna-claude-analysis](https://github.com/shmlkv/dna-claude-analysis)**: Personal genome analysis toolkit — Python scripts analyzing raw DNA data across 17 categories (health risks, ancestry, pharmacogenomics, nutrition, psychology, etc.) with terminal-style single-page HTML visualization.

### Inspirations

@@ -469,63 +513,82 @@ This collection would not be possible without the incredible work of the Claude

## Repo Contributors

<a href="https://github.com/sickn33/antigravity-awesome-skills/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=sickn33/antigravity-awesome-skills" alt="Repository contributors" />
</a>

Made with [contrib.rocks](https://contrib.rocks).

We officially thank the following contributors for their help in making this repository awesome!

- [@sck000](https://github.com/sck000)
- [@munir-abbasi](https://github.com/munir-abbasi)
- [@sickn33](https://github.com/sickn33)
- [@ssumanbiswas](https://github.com/ssumanbiswas)
- [@zinzied](https://github.com/zinzied)
- [@Mohammad-Faiz-Cloud-Engineer](https://github.com/Mohammad-Faiz-Cloud-Engineer)
- [@Dokhacgiakhoa](https://github.com/Dokhacgiakhoa)
- [@IanJ332](https://github.com/IanJ332)
- [@chauey](https://github.com/chauey)
- [@PabloSMD](https://github.com/PabloSMD)
- [@GuppyTheCat](https://github.com/GuppyTheCat)
- [@Tiger-Foxx](https://github.com/Tiger-Foxx)
- [@arathiesh](https://github.com/arathiesh)
- [@liyin2015](https://github.com/liyin2015)
- [@1bcMax](https://github.com/1bcMax)
- [@ALEKGG1](https://github.com/ALEKGG1)
- [@ar27111994](https://github.com/ar27111994)
- [@BenedictKing](https://github.com/BenedictKing)
- [@whatiskadudoing](https://github.com/whatiskadudoing)
- [@LocNguyenSGU](https://github.com/LocNguyenSGU)
- [@yubing744](https://github.com/yubing744)
- [@8hrsk](https://github.com/8hrsk)
- [@itsmeares](https://github.com/itsmeares)
- [@fernandorych](https://github.com/fernandorych)
- [@nikolasdehor](https://github.com/nikolasdehor)
- [@talesperito](https://github.com/talesperito)
- [@jackjin1997](https://github.com/jackjin1997)
- [@HuynhNhatKhanh](https://github.com/HuynhNhatKhanh)
- [@Musayrlsms](https://github.com/Musayrlsms)
- [@sohamganatra](https://github.com/sohamganatra)
- [@SuperJMN](https://github.com/SuperJMN)
- [@SebConejo](https://github.com/SebConejo)
- [@Onsraa](https://github.com/Onsraa)
- [@truongnmt](https://github.com/truongnmt)
- [@code-vj](https://github.com/code-vj)
- [@viktor-ferenczi](https://github.com/viktor-ferenczi)
- [@vprudnikoff](https://github.com/vprudnikoff)
- [@Vonfry](https://github.com/Vonfry)
- [@Wittlesus](https://github.com/Wittlesus)
- [@avimak](https://github.com/avimak)
- [@buzzbysolcex](https://github.com/buzzbysolcex)
- [@c1c3ru](https://github.com/c1c3ru)
- [@ckdwns9121](https://github.com/ckdwns9121)
- [@developer-victor](https://github.com/developer-victor)
- [@fbientrigo](https://github.com/fbientrigo)
- [@junited31](https://github.com/junited31)
- [@KrisnaSantosa15](https://github.com/KrisnaSantosa15)
- [@nocodemf](https://github.com/nocodemf)
- [@sstklen](https://github.com/sstklen)
- [@taksrules](https://github.com/taksrules)
- [@thuanlm215](https://github.com/thuanlm215)
- [@zebbern](https://github.com/zebbern)
- [@vuth-dogo](https://github.com/vuth-dogo)
- [@mvanhorn](https://github.com/mvanhorn)
- [@rookie-ricardo](https://github.com/rookie-ricardo)
- [@evandro-miguel](https://github.com/evandro-miguel)
- [@raeef1001](https://github.com/raeef1001)
- [@devchangjun](https://github.com/devchangjun)
- [@ericgandrade](https://github.com/ericgandrade)
- [@Nguyen-Van-Chan](https://github.com/Nguyen-Van-Chan)
- [@amartelr](https://github.com/amartelr)
- [@GeekLuffy](https://github.com/GeekLuffy)
- [@thuanlm](https://github.com/thuanlm)
- [@Abdulrahmansoliman](https://github.com/Abdulrahmansoliman)
- [@alexmvie](https://github.com/alexmvie)
- [@Andruia](https://github.com/Andruia)
- [@acbhatt12](https://github.com/acbhatt12)
- [@rcigor](https://github.com/rcigor)
- [@k-kolomeitsev](https://github.com/k-kolomeitsev)
- [@Krishna-Modi12](https://github.com/Krishna-Modi12)
- [@kromahlusenii-ops](https://github.com/kromahlusenii-ops)
- [@djmahe4](https://github.com/djmahe4)
- [@maxdml](https://github.com/maxdml)
- [@mertbaskurt](https://github.com/mertbaskurt)
- [@nedcodes-ok](https://github.com/nedcodes-ok)
- [@KhaiTrang1995](https://github.com/KhaiTrang1995)
- [@sharmanilay](https://github.com/sharmanilay)
- [@PabloASMD](https://github.com/PabloASMD)
- [@0xrohitgarg](https://github.com/0xrohitgarg)
- [@Silverov](https://github.com/Silverov)
- [@shmlkv](https://github.com/shmlkv)
- [@kriptoburak](https://github.com/kriptoburak)

---

@@ -539,16 +602,6 @@ MIT License. See [LICENSE](LICENSE) for details.

[](https://www.star-history.com/#sickn33/antigravity-awesome-skills&type=date&legend=top-left)

If Antigravity Awesome Skills has been useful, consider ⭐ starring the repo!

---

<!-- GitHub Topics (for maintainers): claude-code, gemini-cli, codex-cli, antigravity, cursor, github-copilot, opencode, agentic-skills, ai-coding, llm-tools, ai-agents, autonomous-coding, mcp, ai-developer-tools, ai-pair-programming, vibe-coding, skill, skills, SKILL.md, rules.md, CLAUDE.md, GEMINI.md, CURSOR.md -->

@@ -14,15 +14,101 @@ IF %ERRORLEVEL% NEQ 0 (
    exit /b 1
)

:: ===== Auto-Update Skills from GitHub =====
echo [INFO] Checking for skill updates...

:: Method 1: Try Git first (if available)
WHERE git >nul 2>nul
IF %ERRORLEVEL% EQU 0 goto :USE_GIT

:: Method 2: Try PowerShell download (fallback)
echo [INFO] Git not found. Using alternative download method...
goto :USE_POWERSHELL

:USE_GIT
:: Add upstream remote if not already set
git remote get-url upstream >nul 2>nul
IF %ERRORLEVEL% EQU 0 goto :DO_FETCH
echo [INFO] Adding upstream remote...
git remote add upstream https://github.com/sickn33/antigravity-awesome-skills.git

:DO_FETCH
echo [INFO] Fetching latest skills from original repo...
git fetch upstream >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :FETCH_FAIL
goto :DO_MERGE

:FETCH_FAIL
echo [WARN] Could not fetch updates via Git. Trying alternative method...
goto :USE_POWERSHELL

:DO_MERGE
:: Surgically extract ONLY the /skills/ folder from upstream to avoid merge conflicts
git checkout upstream/main -- skills >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :MERGE_FAIL

:: Save the updated skills to local history silently
git commit -m "auto-update: sync latest skills from upstream" >nul 2>nul
echo [INFO] Skills updated successfully from original repo!
goto :SKIP_UPDATE

:MERGE_FAIL
echo [WARN] Could not update skills via Git. Trying alternative method...
goto :USE_POWERSHELL

:USE_POWERSHELL
echo [INFO] Downloading latest skills via HTTPS...
if exist "update_temp" rmdir /S /Q "update_temp" >nul 2>nul
if exist "update.zip" del "update.zip" >nul 2>nul

:: Download the latest repository as a ZIP
powershell -Command "Invoke-WebRequest -Uri 'https://github.com/sickn33/antigravity-awesome-skills/archive/refs/heads/main.zip' -OutFile 'update.zip' -UseBasicParsing" >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :DOWNLOAD_FAIL

:: Extract and update skills
echo [INFO] Extracting latest skills...
powershell -Command "Expand-Archive -Path 'update.zip' -DestinationPath 'update_temp' -Force" >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :EXTRACT_FAIL

:: Copy only the skills folder
if exist "update_temp\antigravity-awesome-skills-main\skills" (
    echo [INFO] Updating skills directory...
    xcopy /E /Y /I "update_temp\antigravity-awesome-skills-main\skills" "skills" >nul 2>nul
    echo [INFO] Skills updated successfully without Git!
) else (
    echo [WARN] Could not find skills folder in downloaded archive.
    goto :UPDATE_FAIL
)

:: Cleanup
del "update.zip" >nul 2>nul
rmdir /S /Q "update_temp" >nul 2>nul
goto :SKIP_UPDATE

:DOWNLOAD_FAIL
echo [WARN] Failed to download skills update (network issue or no internet).
goto :UPDATE_FAIL

:EXTRACT_FAIL
echo [WARN] Failed to extract downloaded skills archive.
goto :UPDATE_FAIL

:UPDATE_FAIL
echo [INFO] Continuing with local skills version...
echo [INFO] To manually update skills later, run: npm run update:skills

:SKIP_UPDATE

:: Check/Install dependencies
cd web-app

:CHECK_DEPS
if not exist "node_modules\" (
    echo [INFO] Dependencies not found. Installing...
    goto :INSTALL_DEPS
)

:: Verify dependencies aren't corrupted (e.g. esbuild arch mismatch after update)
echo [INFO] Verifying app dependencies...
call npx -y vite --version >nul 2>nul
if %ERRORLEVEL% NEQ 0 (
@@ -52,7 +138,6 @@ call npm run app:setup
:: Start App
echo [INFO] Starting Web App...
echo [INFO] Opening default browser...
echo [INFO] Use the Sync Skills button in the app to update skills from GitHub!
cd web-app
call npx -y vite --open
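For macOS/Linux, the same git-first, ZIP-fallback sync could be sketched cross-platform in Python (illustrative only; this script is not part of the repo and the function names are hypothetical):

```python
"""Hypothetical cross-platform analogue of START_APP.bat's skills auto-update."""
import io
import shutil
import subprocess
import urllib.request
import zipfile

REPO = "https://github.com/sickn33/antigravity-awesome-skills"

def choose_update_method() -> str:
    # Mirror the .bat logic: prefer git, fall back to a ZIP download.
    return "git" if shutil.which("git") else "zip"

def update_via_git() -> None:
    # Surgically pull only the skills/ folder from upstream, as the .bat does.
    remotes = subprocess.run(["git", "remote"], capture_output=True, text=True).stdout.split()
    if "upstream" not in remotes:
        subprocess.run(["git", "remote", "add", "upstream", REPO + ".git"], check=True)
    subprocess.run(["git", "fetch", "upstream"], check=True)
    subprocess.run(["git", "checkout", "upstream/main", "--", "skills"], check=True)

def update_via_zip() -> None:
    # Download the repo archive and copy out only the skills/ folder.
    data = urllib.request.urlopen(REPO + "/archive/refs/heads/main.zip").read()
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            if "/skills/" in name:
                zf.extract(name, "update_temp")
    shutil.copytree("update_temp/antigravity-awesome-skills-main/skills",
                    "skills", dirs_exist_ok=True)
    shutil.rmtree("update_temp")

# Usage (network required):
#   (update_via_git if choose_update_method() == "git" else update_via_zip)()
```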

Binary file not shown.
Before Width: | Height: | Size: 52 KiB After Width: | Height: | Size: 50 KiB
@@ -1,11 +1,13 @@
{
  "generatedAt": "2026-02-08T00:00:00.000Z",
  "aliases": {
    "20-andruia-intelligence": "20-andruia-niche-intelligence",
    "accessibility-compliance-audit": "accessibility-compliance-accessibility-audit",
    "agent-orchestration-improve": "agent-orchestration-improve-agent",
    "agent-orchestration-optimize": "agent-orchestration-multi-agent-optimize",
    "android-jetpack-expert": "android-jetpack-compose-expert",
    "api-testing-mock": "api-testing-observability-api-mock",
    "apify-brand-monitoring": "apify-brand-reputation-monitoring",
    "templates": "app-builder/templates",
    "application-performance-optimization": "application-performance-performance-optimization",
    "azure-ai-dotnet": "azure-ai-agents-persistent-dotnet",
@@ -104,6 +106,7 @@
    "security/aws-iam-practices": "security/aws-iam-best-practices",
    "aws-secrets-rotation": "security/aws-secrets-rotation",
    "aws-security-audit": "security/aws-security-audit",
    "seo-forensic-response": "seo-forensic-incident-response",
    "startup-business-case": "startup-business-analyst-business-case",
    "startup-business-projections": "startup-business-analyst-financial-projections",
    "startup-business-opportunity": "startup-business-analyst-market-opportunity",
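Tools that consume the alias map above can resolve a legacy skill name with a plain dictionary lookup (an illustrative sketch; `ALIASES` is a hand-copied excerpt, not the full file, and `resolve_skill` is a hypothetical helper name):

```python
# Hand-copied excerpt of the alias map shown in the registry diff above.
ALIASES = {
    "api-testing-mock": "api-testing-observability-api-mock",
    "android-jetpack-expert": "android-jetpack-compose-expert",
    "apify-brand-monitoring": "apify-brand-reputation-monitoring",
}

def resolve_skill(name: str) -> str:
    # Canonical names pass through unchanged; legacy names map to new IDs.
    return ALIASES.get(name, name)

print(resolve_skill("api-testing-mock"))  # resolves a legacy alias
print(resolve_skill("docker-expert"))     # already canonical, unchanged
```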
@@ -8,6 +8,7 @@
    "agent-framework-azure-ai-py",
    "algolia-search",
    "android-jetpack-compose-expert",
    "android_ui_verification",
    "api-design-principles",
    "api-documentation",
    "api-documentation-generator",
@@ -17,7 +18,9 @@
    "api-security-best-practices",
    "api-security-testing",
    "api-testing-observability-api-mock",
    "apify-actorization",
    "app-store-optimization",
    "appdeploy",
    "application-performance-performance-optimization",
    "architecture-patterns",
    "async-python-patterns",
@@ -25,15 +28,21 @@
    "azure-ai-agents-persistent-java",
    "azure-ai-anomalydetector-java",
    "azure-ai-contentsafety-java",
    "azure-ai-contentsafety-py",
    "azure-ai-contentunderstanding-py",
    "azure-ai-formrecognizer-java",
    "azure-ai-ml-py",
    "azure-ai-projects-java",
    "azure-ai-projects-py",
    "azure-ai-projects-ts",
    "azure-ai-transcription-py",
    "azure-ai-translation-ts",
    "azure-ai-vision-imageanalysis-java",
    "azure-ai-voicelive-java",
    "azure-ai-voicelive-py",
    "azure-ai-voicelive-ts",
    "azure-appconfiguration-java",
    "azure-appconfiguration-py",
    "azure-appconfiguration-ts",
    "azure-communication-callautomation-java",
    "azure-communication-callingserver-java",
@@ -41,34 +50,64 @@
    "azure-communication-common-java",
    "azure-communication-sms-java",
    "azure-compute-batch-java",
    "azure-containerregistry-py",
    "azure-cosmos-db-py",
    "azure-cosmos-java",
    "azure-cosmos-py",
    "azure-cosmos-rust",
    "azure-cosmos-ts",
    "azure-data-tables-java",
    "azure-data-tables-py",
    "azure-eventgrid-java",
    "azure-eventgrid-py",
    "azure-eventhub-java",
    "azure-eventhub-py",
    "azure-eventhub-rust",
    "azure-eventhub-ts",
    "azure-functions",
    "azure-identity-java",
    "azure-identity-py",
    "azure-identity-rust",
    "azure-identity-ts",
    "azure-keyvault-certificates-rust",
    "azure-keyvault-keys-rust",
    "azure-keyvault-keys-ts",
    "azure-keyvault-py",
    "azure-keyvault-secrets-rust",
    "azure-keyvault-secrets-ts",
    "azure-messaging-webpubsub-java",
    "azure-messaging-webpubsubservice-py",
    "azure-mgmt-apicenter-dotnet",
    "azure-mgmt-apicenter-py",
    "azure-mgmt-apimanagement-dotnet",
    "azure-mgmt-apimanagement-py",
    "azure-mgmt-botservice-py",
    "azure-mgmt-fabric-py",
    "azure-monitor-ingestion-java",
    "azure-monitor-ingestion-py",
    "azure-monitor-opentelemetry-exporter-java",
    "azure-monitor-opentelemetry-exporter-py",
    "azure-monitor-opentelemetry-py",
    "azure-monitor-opentelemetry-ts",
    "azure-monitor-query-java",
    "azure-monitor-query-py",
    "azure-postgres-ts",
    "azure-search-documents-py",
    "azure-search-documents-ts",
    "azure-security-keyvault-keys-java",
    "azure-security-keyvault-secrets-java",
    "azure-servicebus-py",
    "azure-servicebus-ts",
    "azure-speech-to-text-rest-py",
    "azure-storage-blob-java",
    "azure-storage-blob-py",
    "azure-storage-blob-rust",
    "azure-storage-blob-ts",
    "azure-storage-file-datalake-py",
    "azure-storage-file-share-py",
    "azure-storage-file-share-ts",
    "azure-storage-queue-py",
    "azure-storage-queue-ts",
    "azure-web-pubsub-ts",
    "backend-architect",
    "backend-dev-guidelines",
@@ -84,6 +123,7 @@
    "cdk-patterns",
    "code-documentation-doc-generate",
    "context7-auto-research",
    "convex",
    "copilot-sdk",
    "dbos-golang",
    "dbos-python",
@@ -121,6 +161,7 @@
    "go-playwright",
    "go-rod-master",
    "golang-pro",
    "grpc-golang",
    "hubspot-integration",
    "ios-developer",
    "java-pro",
@@ -129,6 +170,8 @@
    "javascript-testing-patterns",
    "javascript-typescript-typescript-scaffold",
    "launch-strategy",
    "m365-agents-py",
    "m365-agents-ts",
    "makepad-skills",
    "manifest",
    "memory-safety-patterns",
@@ -186,6 +229,7 @@
    "tavily-web",
    "telegram-bot-builder",
    "telegram-mini-app",
    "temporal-golang-pro",
    "temporal-python-pro",
    "temporal-python-testing",
    "trigger-dev",
@@ -215,6 +259,7 @@
    "auth-implementation-patterns",
    "aws-penetration-testing",
    "azure-cosmos-db-py",
    "azure-keyvault-py",
    "azure-keyvault-secrets-rust",
    "azure-keyvault-secrets-ts",
    "azure-security-keyvault-keys-dotnet",
@@ -227,20 +272,22 @@
    "clerk-auth",
    "cloud-penetration-testing",
    "code-review-checklist",
    "code-reviewer",
    "codebase-cleanup-deps-audit",
    "convex",
    "customs-trade-compliance",
    "dependency-management-deps-audit",
    "deployment-pipeline-design",
    "design-orchestration",
    "docker-expert",
    "dotnet-backend",
    "ethical-hacking-methodology",
    "find-bugs",
    "firebase",
    "firmware-analyst",
    "framework-migration-deps-upgrade",
    "frontend-mobile-security-xss-scan",
    "frontend-security-coder",
    "gdpr-data-handling",
    "graphql-architect",
    "k8s-manifest-generator",
    "k8s-security-policies",
    "laravel-expert",
@@ -248,16 +295,21 @@
    "legal-advisor",
    "linkerd-patterns",
    "loki-mode",
    "m365-agents-dotnet",
    "m365-agents-py",
    "malware-analyst",
    "mobile-security-coder",
    "multi-agent-brainstorming",
    "nestjs-expert",
    "network-engineer",
    "nextjs-supabase-auth",
    "nodejs-best-practices",
    "notebooklm",
    "openapi-spec-generation",
    "payment-integration",
    "pci-compliance",
    "pentest-checklist",
    "plaid-fintech",
    "quant-analyst",
    "risk-manager",
    "risk-metrics-calculation",
    "sast-configuration",
@@ -282,6 +334,7 @@
    "threat-mitigation-mapping",
    "threat-modeling-expert",
    "top-web-vulnerabilities",
    "ui-visual-validator",
    "varlock-claude-skill",
    "vulnerability-scanner",
    "web-design-guidelines",
@@ -294,12 +347,21 @@
    "description": "Kubernetes and service mesh essentials.",
    "skills": [
      "azure-cosmos-db-py",
      "azure-identity-dotnet",
      "azure-identity-java",
      "azure-identity-py",
      "azure-identity-ts",
      "azure-messaging-webpubsubservice-py",
      "azure-mgmt-botservice-dotnet",
      "azure-mgmt-botservice-py",
      "azure-servicebus-dotnet",
      "azure-servicebus-py",
      "azure-servicebus-ts",
      "chrome-extension-developer",
      "cloud-devops",
      "freshservice-automation",
      "gitops-workflow",
      "grpc-golang",
      "helm-chart-scaffolding",
      "istio-traffic-management",
      "k8s-manifest-generator",
@@ -324,21 +386,41 @@
      "airflow-dag-patterns",
      "analytics-tracking",
      "angular-ui-patterns",
      "apify-actor-development",
      "apify-content-analytics",
      "apify-ecommerce",
      "apify-ultimate-scraper",
      "appdeploy",
      "azure-ai-document-intelligence-dotnet",
      "azure-ai-document-intelligence-ts",
      "azure-ai-textanalytics-py",
      "azure-cosmos-db-py",
      "azure-cosmos-java",
      "azure-cosmos-py",
      "azure-cosmos-rust",
      "azure-cosmos-ts",
      "azure-data-tables-java",
      "azure-data-tables-py",
      "azure-eventhub-java",
      "azure-eventhub-rust",
      "azure-eventhub-ts",
      "azure-maps-search-dotnet",
      "azure-monitor-ingestion-java",
      "azure-monitor-ingestion-py",
      "azure-monitor-query-java",
      "azure-monitor-query-py",
      "azure-postgres-ts",
      "azure-resource-manager-mysql-dotnet",
      "azure-resource-manager-postgresql-dotnet",
      "azure-resource-manager-sql-dotnet",
      "azure-security-keyvault-secrets-java",
      "azure-storage-file-datalake-py",
      "blockrun",
      "business-analyst",
      "cc-skill-backend-patterns",
      "cc-skill-clickhouse-io",
      "claude-d3js-skill",
      "content-marketer",
      "data-engineer",
      "data-engineering-data-driven-feature",
      "data-engineering-data-pipeline",
@@ -364,7 +446,9 @@
      "google-analytics-automation",
      "googlesheets-automation",
      "graphql",
      "ios-developer",
      "kpi-dashboard-design",
      "legal-advisor",
      "libreoffice/base",
      "libreoffice/calc",
      "loki-mode",
@@ -380,9 +464,13 @@
      "postgresql",
      "postgresql-optimization",
      "prisma-expert",
      "programmatic-seo",
      "pydantic-models-py",
      "quant-analyst",
      "rag-implementation",
      "react-ui-patterns",
      "scala-pro",
      "schema-markup",
      "segment-cdp",
      "sendgrid-automation",
      "senior-architect",
@@ -395,6 +483,7 @@
      "unity-ecs-patterns",
      "using-neon",
      "vector-database-engineer",
      "x-twitter-scraper",
      "xlsx-official",
      "youtube-automation"
    ]
@@ -405,16 +494,21 @@
      "agent-evaluation",
      "airflow-dag-patterns",
      "api-testing-observability-api-mock",
      "apify-brand-reputation-monitoring",
      "application-performance-performance-optimization",
      "aws-serverless",
      "azd-deployment",
      "azure-ai-anomalydetector-java",
      "azure-mgmt-applicationinsights-dotnet",
      "azure-mgmt-arizeaiobservabilityeval-dotnet",
      "azure-mgmt-weightsandbiases-dotnet",
      "azure-microsoft-playwright-testing-ts",
      "azure-monitor-opentelemetry-ts",
      "backend-development-feature-development",
      "cicd-automation-workflow-automate",
      "cloud-devops",
      "code-review-ai-ai-review",
      "convex",
      "data-engineering-data-pipeline",
      "database-migrations-migration-observability",
      "deployment-engineer",
@@ -424,6 +518,7 @@
      "devops-troubleshooter",
      "distributed-debugging-debug-trace",
      "distributed-tracing",
      "django-pro",
      "docker-expert",
      "e2e-testing-patterns",
      "error-debugging-error-analysis",
@@ -431,22 +526,27 @@
      "error-diagnostics-error-analysis",
      "error-diagnostics-error-trace",
      "expo-deployment",
      "flutter-expert",
      "game-development/game-art",
      "git-pr-workflows-git-workflow",
      "gitlab-ci-patterns",
      "gitops-workflow",
      "grafana-dashboards",
      "grpc-golang",
      "incident-responder",
      "incident-response-incident-response",
      "incident-response-smart-fix",
      "incident-runbook-templates",
      "kpi-dashboard-design",
      "kubernetes-architect",
      "kubernetes-deployment",
      "langfuse",
      "llm-app-patterns",
      "loki-mode",
      "machine-learning-ops-ml-pipeline",
      "malware-analyst",
      "manifest",
      "ml-engineer",
      "ml-pipeline-workflow",
      "observability-engineer",
      "observability-monitoring-monitor-setup",
@@ -457,12 +557,16 @@
      "postmortem-writing",
      "prometheus-configuration",
      "risk-metrics-calculation",
      "seo-forensic-incident-response",
      "server-management",
      "service-mesh-expert",
      "service-mesh-observability",
      "slo-implementation",
      "temporal-python-pro",
      "unity-developer",
      "vercel-deploy-claimable",
      "vercel-deployment",
      "x-twitter-scraper"
    ]
  }
},
|
||||
3561 data/catalog.json
File diff suppressed because it is too large
@@ -5,6 +5,7 @@
## 🚀 Quick Start

1. **Install the repository:**

```bash
npx antigravity-awesome-skills
# or clone manually
@@ -421,21 +422,25 @@ Keep a small list of high-frequency skills and reuse it across tasks to reduce c
### Beginner → Intermediate → Advanced

**Web Development:**

1. Start: `Essentials` → `Web Wizard`
2. Grow: `Full-Stack Developer` → `Architecture & Design`
3. Master: `Observability & Monitoring` → `Security Developer`

**AI/ML:**

1. Start: `Essentials` → `Agent Architect`
2. Grow: `LLM Application Developer` → `Data Engineering`
3. Master: Advanced RAG and agent orchestration

**Security:**

1. Start: `Essentials` → `Security Developer`
2. Grow: `Security Engineer` → Advanced pentesting
3. Master: Red team tactics and threat modeling

**Open Source Maintenance:**

1. Start: `Essentials` → `OSS Maintainer`
2. Grow: `Architecture & Design` → `QA & Testing`
3. Master: `Skill Author` + release automation workflows
@@ -456,4 +461,4 @@ Found a skill that should be in a bundle? Or want to create a new bundle? [Open

---

_Last updated: February 2026 | Total Skills: 713+ | Total Bundles: 26_
_Last updated: February 2026 | Total Skills: 954+ | Total Bundles: 26_
@@ -1,4 +1,4 @@
# Getting Started with Antigravity Awesome Skills (V4)
# Getting Started with Antigravity Awesome Skills (V6.5.0)

**New here? This guide will help you supercharge your AI Agent in 5 minutes.**

@@ -17,7 +17,7 @@ AI Agents (like **Claude Code**, **Gemini**, **Cursor**) are smart, but they lac
## ⚡️ Quick Start: The "Starter Packs"

Don't panic about the 700+ skills. You don't need them all at once.
Don't panic about the 954+ skills. You don't need them all at once.
We have curated **Starter Packs** to get you running immediately.

You **install the full repo once** (npx or clone); Starter Packs are curated lists to help you **pick which skills to use** by role (e.g. Web Wizard, Hacker Pack)—they are not a different way to install.
@@ -30,7 +30,7 @@ You **install the full repo once** (npx or clone); Starter Packs are curated lis
npx antigravity-awesome-skills
```

This clones to `~/.agent/skills` by default. Use `--cursor`, `--claude`, `--gemini`, or `--codex` to install for a specific tool, or `--path <dir>` for a custom location. Run `npx antigravity-awesome-skills --help` for details.
This clones to `~/.gemini/antigravity/skills` by default. Use `--cursor`, `--claude`, `--gemini`, `--codex`, or `--kiro` to install for a specific tool, or `--path <dir>` for a custom location. Run `npx antigravity-awesome-skills --help` for details.

If you see a 404 error, use: `npx github:sickn33/antigravity-awesome-skills`

@@ -95,14 +95,18 @@ Once installed, just talk to your AI naturally.

## 🔌 Supported Tools

| Tool            | Status          | Path              |
| :-------------- | :-------------- | :---------------- |
| **Claude Code** | ✅ Full Support | `.claude/skills/` |
| **Gemini CLI**  | ✅ Full Support | `.gemini/skills/` |
| **Codex CLI**   | ✅ Full Support | `.codex/skills/`  |
| **Antigravity** | ✅ Native       | `.agent/skills/`  |
| **Cursor**      | ✅ Native       | `.cursor/skills/` |
| **Copilot**     | ⚠️ Text Only    | Manual copy-paste |
| Tool            | Status          | Path                                                                  |
| :-------------- | :-------------- | :-------------------------------------------------------------------- |
| **Claude Code** | ✅ Full Support | `.claude/skills/`                                                     |
| **Gemini CLI**  | ✅ Full Support | `.gemini/skills/`                                                     |
| **Codex CLI**   | ✅ Full Support | `.codex/skills/`                                                      |
| **Kiro CLI**    | ✅ Full Support | Global: `~/.kiro/skills/` · Workspace: `.kiro/skills/`                |
| **Kiro IDE**    | ✅ Full Support | Global: `~/.kiro/skills/` · Workspace: `.kiro/skills/`                |
| **Antigravity** | ✅ Native       | Global: `~/.gemini/antigravity/skills/` · Workspace: `.agent/skills/` |
| **Cursor**      | ✅ Native       | `.cursor/skills/`                                                     |
| **OpenCode**    | ✅ Full Support | `.agents/skills/`                                                     |
| **AdaL CLI**    | ✅ Full Support | `.adal/skills/`                                                       |
| **Copilot**     | ⚠️ Text Only    | Manual copy-paste                                                     |

---

@@ -120,7 +124,7 @@ _Check the [Skill Catalog](../CATALOG.md) for the full list._

## ❓ FAQ

**Q: Do I need to install all 700+ skills?**
**Q: Do I need to install all 954+ skills?**
A: You clone the whole repo once; your AI only _reads_ the skills you invoke (or that are relevant), so it stays lightweight. **Starter Packs** in [BUNDLES.md](BUNDLES.md) are curated lists to help you discover the right skills for your role—they don't change how you install.

**Q: Can I make my own skills?**
@@ -7,6 +7,7 @@ This guide explains how to use Antigravity Awesome Skills with **Kiro CLI**, AWS
## What is Kiro?

Kiro is AWS's agentic AI IDE that combines:

- **Autonomous coding agents** that work independently for extended periods
- **Context-aware assistance** with deep understanding of your codebase
- **AWS service integration** with native support for CDK, SAM, and Terraform
@@ -16,7 +17,8 @@ Kiro is AWS's agentic AI IDE that combines:
## Why Use Skills with Kiro?

Kiro's agentic capabilities are enhanced by skills that provide:
- **Domain expertise** across 883+ specialized areas

- **Domain expertise** across 954+ specialized areas
- **Best practices** from Anthropic, OpenAI, Google, Microsoft, and AWS
- **Workflow automation** for common development tasks
- **AWS-specific patterns** for serverless, infrastructure, and cloud architecture
@@ -68,6 +70,7 @@ Run @security-audit on my CDK stack
### Recommended Skills for Kiro Users

#### AWS & Cloud Infrastructure

- `@aws-serverless` - Serverless architecture patterns
- `@aws-cdk` - AWS CDK best practices
- `@aws-sam` - SAM template patterns
@@ -76,24 +79,28 @@ Run @security-audit on my CDK stack
- `@kubernetes-expert` - K8s deployment patterns

#### Architecture & Design

- `@architecture` - System design and ADRs
- `@c4-context` - C4 model diagrams
- `@senior-architect` - Scalable architecture patterns
- `@microservices-patterns` - Microservices design

#### Security

- `@api-security-best-practices` - API security hardening
- `@vulnerability-scanner` - Security vulnerability detection
- `@owasp-top-10` - OWASP security patterns
- `@aws-security-best-practices` - AWS security configuration

#### Development

- `@typescript-expert` - TypeScript best practices
- `@python-patterns` - Python design patterns
- `@react-patterns` - React component patterns
- `@test-driven-development` - TDD workflows

#### DevOps & Automation

- `@ci-cd-pipeline` - CI/CD automation
- `@github-actions` - GitHub Actions workflows
- `@monitoring-observability` - Observability patterns
@@ -134,12 +141,14 @@ Run @security-audit on my CDK stack
### MCP Integration

Kiro's MCP support allows skills to:

- Call external APIs securely
- Query databases with context
- Integrate with AWS services
- Access documentation in real-time

Skills that leverage MCP:

- `@rag-engineer` - RAG system implementation
- `@langgraph` - Agent workflow orchestration
- `@prompt-engineer` - LLM prompt optimization
@@ -149,8 +158,8 @@ Skills that leverage MCP:
Kiro can work independently for extended periods. Use skills to guide long-running tasks:

```
Use @systematic-debugging to investigate and fix all TypeScript errors in the codebase,
then apply @test-driven-development to add missing tests, and finally run @documentation
Use @systematic-debugging to investigate and fix all TypeScript errors in the codebase,
then apply @test-driven-development to add missing tests, and finally run @documentation
to update all README files.
```

@@ -159,8 +168,8 @@ to update all README files.
Kiro maintains deep context. Reference multiple skills in complex workflows:

```
I'm building a SaaS application. Use @brainstorming for the MVP plan,
@aws-serverless for the backend, @react-patterns for the frontend,
I'm building a SaaS application. Use @brainstorming for the MVP plan,
@aws-serverless for the backend, @react-patterns for the frontend,
@stripe-integration for payments, and @security-audit for hardening.
```

@@ -169,6 +178,7 @@ I'm building a SaaS application. Use @brainstorming for the MVP plan,
Pre-curated skill collections optimized for common Kiro use cases:

### AWS Developer Bundle

- `@aws-serverless`
- `@aws-cdk`
- `@aws-sam`
@@ -177,6 +187,7 @@ Pre-curated skill collections optimized for common Kiro use cases:
- `@api-gateway-patterns`

### Full-Stack AWS Bundle

- `@aws-serverless`
- `@react-patterns`
- `@typescript-expert`
@@ -185,6 +196,7 @@ Pre-curated skill collections optimized for common Kiro use cases:
- `@ci-cd-pipeline`

### DevOps & Infrastructure Bundle

- `@terraform-expert`
- `@docker-expert`
- `@kubernetes-expert`
@@ -238,8 +250,8 @@ chmod -R 755 ~/.kiro/skills/
```
I need to build a REST API for a todo application using AWS Lambda and DynamoDB.

Use @brainstorming to design the architecture, then apply @aws-serverless
to implement the Lambda functions, @dynamodb-patterns for data modeling,
Use @brainstorming to design the architecture, then apply @aws-serverless
to implement the Lambda functions, @dynamodb-patterns for data modeling,
and @api-security-best-practices for security hardening.

Generate the infrastructure using @aws-cdk and add tests with @test-driven-development.
@@ -250,8 +262,8 @@ Generate the infrastructure using @aws-cdk and add tests with @test-driven-devel
```
I want to break down this monolithic application into microservices.

Use @architecture to create an ADR for the migration strategy,
apply @microservices-patterns for service boundaries,
Use @architecture to create an ADR for the migration strategy,
apply @microservices-patterns for service boundaries,
@docker-expert for containerization, and @kubernetes-expert for orchestration.

Document the migration plan with @documentation.
@@ -262,8 +274,8 @@ Document the migration plan with @documentation.
```
Perform a comprehensive security audit of this application.

Use @security-audit to scan for vulnerabilities, @owasp-top-10 to check
for common issues, @api-security-best-practices for API hardening,
Use @security-audit to scan for vulnerabilities, @owasp-top-10 to check
for common issues, @api-security-best-practices for API hardening,
and @aws-security-best-practices for cloud configuration.

Generate a report with findings and remediation steps.
@@ -3,16 +3,16 @@
We believe in giving credit where credit is due.
If you recognize your work here and it is not properly attributed, please open an Issue.

| Skill / Category            | Original Source                                                    | License        | Notes                         |
| :-------------------------- | :----------------------------------------------------------------- | :------------- | :---------------------------- |
| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/)                         | MIT / CC-BY-SA | Adapted for agentic use.      |
| `active-directory-attacks`  | [HackTricks](https://book.hacktricks.xyz/)                         | MIT / CC-BY-SA | Adapted for agentic use.      |
| `owasp-top-10`              | [OWASP](https://owasp.org/)                                        | CC-BY-SA       | Methodology adapted.          |
| `burp-suite-testing`        | [PortSwigger](https://portswigger.net/burp)                        | N/A            | Usage guide only (no binary). |
| `crewai`                    | [CrewAI](https://github.com/joaomdmoura/crewAI)                    | MIT            | Framework guides.             |
| `langgraph`                 | [LangGraph](https://github.com/langchain-ai/langgraph)             | MIT            | Framework guides.             |
| `react-patterns`            | [React Docs](https://react.dev/)                                   | CC-BY          | Official patterns.            |
| **All Official Skills**     | [Anthropic / Google / OpenAI / Microsoft / Supabase / Vercel Labs] | Proprietary    | Usage encouraged by vendors.  |
| Skill / Category            | Original Source                                                            | License        | Notes                         |
| :-------------------------- | :------------------------------------------------------------------------- | :------------- | :---------------------------- |
| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/)                                 | MIT / CC-BY-SA | Adapted for agentic use.      |
| `active-directory-attacks`  | [HackTricks](https://book.hacktricks.xyz/)                                 | MIT / CC-BY-SA | Adapted for agentic use.      |
| `owasp-top-10`              | [OWASP](https://owasp.org/)                                                | CC-BY-SA       | Methodology adapted.          |
| `burp-suite-testing`        | [PortSwigger](https://portswigger.net/burp)                                | N/A            | Usage guide only (no binary). |
| `crewai`                    | [CrewAI](https://github.com/joaomdmoura/crewAI)                            | MIT            | Framework guides.             |
| `langgraph`                 | [LangGraph](https://github.com/langchain-ai/langgraph)                     | MIT            | Framework guides.             |
| `react-patterns`            | [React Docs](https://react.dev/)                                           | CC-BY          | Official patterns.            |
| **All Official Skills**     | [Anthropic / Google / OpenAI / Microsoft / Supabase / Apify / Vercel Labs] | Proprietary    | Usage encouraged by vendors.  |

## Skills from VoltAgent/awesome-agent-skills
@@ -12,7 +12,7 @@ Great question! Here's what just happened and what to do next:

When you ran `npx antigravity-awesome-skills` or cloned the repository, you:

✅ **Downloaded 883+ skill files** to your computer (default: `~/.gemini/antigravity/skills/`; or `~/.agent/skills/` if you used `--path`)
✅ **Downloaded 954+ skill files** to your computer (default: `~/.gemini/antigravity/skills/`; or `~/.agent/skills/` if you used `--path`)
✅ **Made them available** to your AI assistant
❌ **Did NOT enable them all automatically** (they're just sitting there, waiting)

@@ -30,8 +30,9 @@ Think of it like installing a toolbox. You have all the tools now, but you need

Bundles are **recommended lists** of skills grouped by role. They help you decide which skills to start using.

**Analogy:**
- You installed a toolbox with 883+ tools (✅ done)
**Analogy:**

- You installed a toolbox with 954+ tools (✅ done)
- Bundles are like **labeled organizer trays** saying: "If you're a carpenter, start with these 10 tools"
- You don't install bundles—you **pick skills from them**

@@ -44,6 +45,7 @@ Bundles are **recommended lists** of skills grouped by role. They help you decid
### Example: The "Web Wizard" Bundle

When you see the [Web Wizard bundle](BUNDLES.md#-the-web-wizard-pack), it lists:

- `frontend-design`
- `react-best-practices`
- `tailwind-patterns`
@@ -66,30 +68,35 @@ This is the part that should have been explained better! Here's how to use skill
The exact syntax varies by tool, but it's always simple:

#### Claude Code (CLI)

```bash
# In your terminal/chat with Claude Code:
>> Use @brainstorming to help me design a todo app
```

#### Cursor (IDE)

```bash
# In the Cursor chat panel:
@brainstorming help me design a todo app
```

#### Gemini CLI

```bash
# In your conversation with Gemini:
Use the brainstorming skill to help me plan my app
```

#### Codex CLI

```bash
# In your conversation with Codex:
Apply @brainstorming to design a new feature
```

#### Antigravity IDE

```bash
# In agent mode:
Use @brainstorming to plan this feature
@@ -105,10 +112,12 @@ Here are **real-world examples** of good prompts:

### Example 1: Starting a New Project

**Bad Prompt:**
**Bad Prompt:**

> "Help me build a todo app"

**Good Prompt:**
**Good Prompt:**

> "Use @brainstorming to help me design a todo app with user authentication and cloud sync"

**Why it's better:** You're explicitly invoking the skill and providing context.
@@ -117,10 +126,12 @@ Here are **real-world examples** of good prompts:

### Example 2: Reviewing Code

**Bad Prompt:**
**Bad Prompt:**

> "Check my code"

**Good Prompt:**
**Good Prompt:**

> "Use @lint-and-validate to check `src/components/Button.tsx` for issues"

**Why it's better:** Specific skill + specific file = precise results.
@@ -129,10 +140,12 @@ Here are **real-world examples** of good prompts:

### Example 3: Security Audit

**Bad Prompt:**
**Bad Prompt:**

> "Make my API secure"

**Good Prompt:**
**Good Prompt:**

> "Use @api-security-best-practices to review my REST endpoints in `routes/api/users.js`"

**Why it's better:** The AI knows exactly which skill's standards to apply.
@@ -141,7 +154,8 @@ Here are **real-world examples** of good prompts:

### Example 4: Combining Multiple Skills

**Good Prompt:**
**Good Prompt:**

> "Use @brainstorming to design a payment flow, then apply @stripe-integration to implement it"

**Why it's good:** You can chain skills together in a single prompt!
@@ -159,6 +173,7 @@ Let's actually use a skill right now. Follow these steps:
2. **Open your AI assistant** (Claude Code, Cursor, etc.)

3. **Type this exact prompt:**

```
Use @brainstorming to help me design a user profile page for my app
```
@@ -177,17 +192,18 @@ Let's actually use a skill right now. Follow these steps:

## 🗂️ Step 5: Picking Your First Skills (Practical Advice)

Don't try to use all 883+ skills! Here's a sensible approach:
Don't try to use all 954+ skills! Here's a sensible approach:

### Start with "The Essentials" (5 skills, everyone needs these)

1. **`@brainstorming`** - Plan before you build
2. **`@lint-and-validate`** - Keep code clean
3. **`@git-pushing`** - Save work safely
3. **`@git-pushing`** - Save work safely
4. **`@systematic-debugging`** - Fix bugs faster
5. **`@concise-planning`** - Organize tasks

**How to use them:**

- Before writing new code → `@brainstorming`
- After writing code → `@lint-and-validate`
- Before committing → `@git-pushing`
@@ -198,12 +214,14 @@ Don't try to use all 883+ skills! Here's a sensible approach:
Find your role in [BUNDLES.md](BUNDLES.md) and pick 5-10 skills from that bundle.

**Example for Web Developer:**

- `@frontend-design`
- `@react-best-practices`
- `@tailwind-patterns`
- `@seo-audit`

**Example for Security Engineer:**

- `@api-security-best-practices`
- `@vulnerability-scanner`
- `@ethical-hacking-methodology`
@@ -224,6 +242,7 @@ Let's walk through a realistic scenario:
### Task: "Add a blog to my Next.js website"

#### Step 1: Plan (use @brainstorming)

```
You: Use @brainstorming to design a blog system for my Next.js site

@@ -233,6 +252,7 @@ AI: [Produces detailed design spec]
```

#### Step 2: Implement (use @nextjs-best-practices)

```
You: Use @nextjs-best-practices to scaffold the blog with App Router

@@ -240,6 +260,7 @@ AI: [Creates file structure, sets up routes, adds components]
```

#### Step 3: Style (use @tailwind-patterns)

```
You: Use @tailwind-patterns to make the blog posts look modern

@@ -247,6 +268,7 @@ AI: [Applies Tailwind styling with responsive design]
```

#### Step 4: SEO (use @seo-audit)

```
You: Use @seo-audit to optimize the blog for search engines

@@ -254,6 +276,7 @@ AI: [Adds meta tags, sitemaps, structured data]
```

#### Step 5: Test & Deploy

```
You: Use @test-driven-development to add tests, then @vercel-deployment to deploy

@@ -269,6 +292,7 @@ AI: [Creates tests, sets up CI/CD, deploys to Vercel]

### "Which tool should I use? Claude Code, Cursor, Gemini?"

**Any of them!** Skills work universally. Pick the tool you already use or prefer:

- **Claude Code** - Best for terminal/CLI workflows
- **Cursor** - Best for IDE integration
- **Gemini CLI** - Best for Google ecosystem
@@ -277,6 +301,7 @@ AI: [Creates tests, sets up CI/CD, deploys to Vercel]

### "Can I see all available skills?"

Yes! Three ways:

1. Browse [CATALOG.md](../CATALOG.md) (searchable list)
2. Run `ls ~/.agent/skills/` (if installed there)
3. Ask your AI: "What skills do you have for [topic]?"
@@ -284,6 +309,7 @@ Yes! Three ways:

### "Do I need to restart my IDE after installing?"

Usually no, but if your AI doesn't recognize a skill:

1. Try restarting your IDE/CLI
2. Check the installation path matches your tool
3. Try the explicit path: `npx antigravity-awesome-skills --claude` (or `--cursor`, `--gemini`, etc.)
@@ -291,6 +317,7 @@ Usually no, but if your AI doesn't recognize a skill:

### "Can I create my own skills?"

Yes! Use the `@skill-creator` skill:

```
Use @skill-creator to help me build a custom skill for [your task]
```
@@ -307,15 +334,15 @@ Use @skill-creator to help me build a custom skill for [your task]

**Save this for quick lookup:**

| Task | Skill to Use | Example Prompt |
|------|-------------|----------------|
| Plan new feature | `@brainstorming` | `Use @brainstorming to design a login system` |
| Review code | `@lint-and-validate` | `Use @lint-and-validate on src/app.js` |
| Debug issue | `@systematic-debugging` | `Use @systematic-debugging to fix login error` |
| Security audit | `@api-security-best-practices` | `Use @api-security-best-practices on my API routes` |
| SEO check | `@seo-audit` | `Use @seo-audit on my landing page` |
| React component | `@react-patterns` | `Use @react-patterns to build a form component` |
| Deploy app | `@vercel-deployment` | `Use @vercel-deployment to ship this to production` |
| Task             | Skill to Use                   | Example Prompt                                      |
| ---------------- | ------------------------------ | --------------------------------------------------- |
| Plan new feature | `@brainstorming`               | `Use @brainstorming to design a login system`       |
| Review code      | `@lint-and-validate`           | `Use @lint-and-validate on src/app.js`              |
| Debug issue      | `@systematic-debugging`        | `Use @systematic-debugging to fix login error`      |
| Security audit   | `@api-security-best-practices` | `Use @api-security-best-practices on my API routes` |
| SEO check        | `@seo-audit`                   | `Use @seo-audit on my landing page`                 |
| React component  | `@react-patterns`              | `Use @react-patterns to build a form component`     |
| Deploy app       | `@vercel-deployment`           | `Use @vercel-deployment to ship this to production` |

---

@@ -333,19 +360,24 @@ Now that you understand how to use skills:

## 💡 Pro Tips for Maximum Effectiveness

### Tip 1: Start Every Feature with @brainstorming

> Before writing code, use `@brainstorming` to plan. You'll save hours of refactoring.

### Tip 2: Chain Skills in Order

> Don't try to do everything at once. Use skills sequentially: Plan → Build → Test → Deploy

### Tip 3: Be Specific in Prompts

> Bad: "Use @react-patterns"
> Good: "Use @react-patterns to build a modal component with animations"

### Tip 4: Reference File Paths

> Help the AI focus: "Use @security-auditor on routes/api/auth.js"

### Tip 5: Combine Skills for Complex Tasks

> "Use @brainstorming to design, then @test-driven-development to implement with tests"

---
@@ -30,7 +30,7 @@

AI assistants (like Claude Code, Cursor, or Gemini) are smart, but they lack **specialized tools**. They don't know your company's "Deployment Process" or the specific syntax for "AWS CloudFormation".
**Skills** are small markdown files that teach them how to perform these specific tasks correctly on every run.
...
This repository provides the essential skills to turn your AI assistant into an **all-round digital expert team**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Supabase**, and **Vercel Labs**.
This repository provides the essential skills to turn your AI assistant into an **all-round digital expert team**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Supabase**, **Apify**, and **Vercel Labs**.
...
Whether you are using **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, or **OpenCode**, these skills are designed to be usable immediately and to supercharge your AI assistant.

@@ -40,17 +40,17 @@ This repository brings together the best capabilities from across the community

The repository is organized into specialized domains to turn your AI into an expert across the entire software development lifecycle:

| Category | Focus | Example skills |
| :--- | :--- | :--- |
| Architecture (52) | System design, ADRs, C4, and scalable patterns | `architecture`, `c4-context`, `senior-architect` |
| Business (35) | Growth, pricing, CRO, SEO, and market penetration | `copywriting`, `pricing-strategy`, `seo-audit` |
| Data & AI (81) | LLM apps, RAG, agents, observability, analytics | `rag-engineer`, `prompt-engineer`, `langgraph` |
| Development (72) | Language mastery, framework design patterns, code quality | `typescript-expert`, `python-patterns`, `react-patterns` |
| General (95) | Planning, documentation, product ops, writing, guidance | `brainstorming`, `doc-coauthoring`, `writing-plans` |
| Infrastructure (72) | DevOps, cloud, serverless, deployment, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` |
| Security (107) | AppSec, pentesting, vulnerability analysis, compliance | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` |
| Testing (21) | TDD, test design, bug fixing, QA workflows | `test-driven-development`, `testing-patterns`, `test-fixing` |
| Workflows (17) | Automation, orchestration, jobs, agents | `workflow-automation`, `inngest`, `trigger-dev` |
| Category            | Focus                                                     | Example skills                                                                  |
| :------------------ | :-------------------------------------------------------- | :------------------------------------------------------------------------------ |
| Architecture (52)   | System design, ADRs, C4, and scalable patterns            | `architecture`, `c4-context`, `senior-architect`                                 |
| Business (35)       | Growth, pricing, CRO, SEO, and market penetration         | `copywriting`, `pricing-strategy`, `seo-audit`                                   |
| Data & AI (81)      | LLM apps, RAG, agents, observability, analytics           | `rag-engineer`, `prompt-engineer`, `langgraph`                                   |
| Development (72)    | Language mastery, framework design patterns, code quality | `typescript-expert`, `python-patterns`, `react-patterns`                         |
| General (95)        | Planning, documentation, product ops, writing, guidance   | `brainstorming`, `doc-coauthoring`, `writing-plans`                              |
| Infrastructure (72) | DevOps, cloud, serverless, deployment, CI/CD              | `docker-expert`, `aws-serverless`, `vercel-deployment`                           |
| Security (107)      | AppSec, pentesting, vulnerability analysis, compliance    | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner`  |
| Testing (21)        | TDD, test design, bug fixing, QA workflows                | `test-driven-development`, `testing-patterns`, `test-fixing`                     |
| Workflows (17)      | Automation, orchestration, jobs, agents                   | `workflow-automation`, `inngest`, `trigger-dev`                                  |

## Curated Collections

@@ -119,6 +119,7 @@ This collection would not exist without the work of

- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Official Vercel Labs skills - React best practices, web design guidelines.
- **[openai/skills](https://github.com/openai/skills)**: OpenAI Codex skill catalog - Agent skills, Skill Creator, Concise Planning.
- **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Official Supabase skills - Postgres best practices.
- **[apify/agent-skills](https://github.com/apify/agent-skills)**: Official Apify skills - Web scraping, data extraction and automation.

### Community Contributors
4 package-lock.json generated
@@ -1,12 +1,12 @@
{
"name": "antigravity-awesome-skills",
"version": "5.9.0",
"version": "6.6.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "antigravity-awesome-skills",
"version": "5.9.0",
"version": "6.6.0",
"license": "MIT",
"bin": {
"antigravity-awesome-skills": "bin/install.js"
22 package.json
@@ -1,20 +1,22 @@
{
"name": "antigravity-awesome-skills",
"version": "6.2.0",
"description": "927+ agentic skills for Claude Code, Gemini CLI, Cursor, Antigravity & more. Installer CLI.",
"version": "6.7.0",
"description": "900+ agentic skills for Claude Code, Gemini CLI, Cursor, Antigravity & more. Installer CLI.",
"license": "MIT",
"scripts": {
"validate": "python3 scripts/validate_skills.py",
"validate:strict": "python3 scripts/validate_skills.py --strict",
"index": "python3 scripts/generate_index.py",
"readme": "python3 scripts/update_readme.py",
"validate": "node scripts/run-python.js scripts/validate_skills.py",
"validate:strict": "node scripts/run-python.js scripts/validate_skills.py --strict",
"index": "node scripts/run-python.js scripts/generate_index.py",
"readme": "node scripts/run-python.js scripts/update_readme.py",
"chain": "npm run validate && npm run index && npm run readme",
"catalog": "node scripts/build-catalog.js",
"build": "npm run chain && npm run catalog",
"test": "node scripts/tests/validate_skills_headings.test.js && python3 scripts/tests/test_validate_skills_headings.py && python3 scripts/tests/inspect_microsoft_repo.py && python3 scripts/tests/test_comprehensive_coverage.py",
"sync:microsoft": "python3 scripts/sync_microsoft_skills.py",
"test": "node scripts/tests/run-test-suite.js",
"test:local": "node scripts/tests/run-test-suite.js --local",
"test:network": "node scripts/tests/run-test-suite.js --network",
"sync:microsoft": "node scripts/run-python.js scripts/sync_microsoft_skills.py",
"sync:all-official": "npm run sync:microsoft && npm run chain",
"update:skills": "python3 scripts/generate_index.py && copy skills_index.json web-app/public/skills.json",
"update:skills": "node scripts/run-python.js scripts/generate_index.py && node scripts/copy-file.js skills_index.json web-app/public/skills.json",
"app:setup": "node scripts/setup_web.js",
"app:install": "cd web-app && npm install",
"app:dev": "npm run app:setup && cd web-app && npm run dev",
@@ -42,4 +44,4 @@
"agentic-skills",
"ai-coding"
]
}
}
1 requirements.txt (new file)

@@ -0,0 +1 @@
+pyyaml>=6.0
@@ -128,8 +128,10 @@ def categorize_skill(skill_name, description):

     return None

+import yaml
+
 def auto_categorize(skills_dir, dry_run=False):
-    """Auto-categorize skills and update generate_index.py"""
+    """Auto-categorize skills and update SKILL.md files"""
     skills = []
     categorized_count = 0
     already_categorized = 0
@@ -146,17 +148,19 @@ def auto_categorize(skills_dir, dry_run=False):
         with open(skill_path, 'r', encoding='utf-8') as f:
             content = f.read()

-        # Extract name and description from frontmatter
+        # Extract frontmatter and body
         fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
         if not fm_match:
             continue

         fm_text = fm_match.group(1)
-        metadata = {}
-        for line in fm_text.split('\n'):
-            if ':' in line and not line.strip().startswith('#'):
-                key, val = line.split(':', 1)
-                metadata[key.strip()] = val.strip().strip('"').strip("'")
+        body = content[fm_match.end():]
+
+        try:
+            metadata = yaml.safe_load(fm_text) or {}
+        except yaml.YAMLError as e:
+            print(f"⚠️ {skill_id}: YAML error - {e}")
+            continue

         skill_name = metadata.get('name', skill_id)
         description = metadata.get('description', '')
@@ -186,32 +190,12 @@ def auto_categorize(skills_dir, dry_run=False):
             })

         if not dry_run:
-            # Update the SKILL.md file - add or replace category
-            fm_start = content.find('---')
-            fm_end = content.find('---', fm_start + 3)
+            metadata['category'] = new_category
+            new_fm = yaml.dump(metadata, sort_keys=False, allow_unicode=True, width=1000).strip()
+            new_content = f"---\n{new_fm}\n---" + body

-            if fm_start >= 0 and fm_end > fm_start:
-                frontmatter = content[fm_start:fm_end+3]
-                body = content[fm_end+3:]
-
-                # Check if category exists in frontmatter
-                if 'category:' in frontmatter:
-                    # Replace existing category
-                    new_frontmatter = re.sub(
-                        r'category:\s*\w+',
-                        f'category: {new_category}',
-                        frontmatter
-                    )
-                else:
-                    # Add category before the closing ---
-                    new_frontmatter = frontmatter.replace(
-                        '\n---',
-                        f'\ncategory: {new_category}\n---'
-                    )
-
-                new_content = new_frontmatter + body
-                with open(skill_path, 'w', encoding='utf-8') as f:
-                    f.write(new_content)
+            with open(skill_path, 'w', encoding='utf-8') as f:
+                f.write(new_content)

             categorized_count += 1
         else:
@@ -628,7 +628,8 @@ function buildCatalog() {
       category,
       tags,
       triggers,
-      path: path.relative(ROOT, skill.path),
+      // Normalize separators for deterministic cross-platform output.
+      path: path.relative(ROOT, skill.path).split(path.sep).join("/"),
     });
   }
71 scripts/copy-file.js (new file)

@@ -0,0 +1,71 @@
#!/usr/bin/env node

'use strict';

const fs = require('node:fs');
const path = require('node:path');

const args = process.argv.slice(2);

if (args.length !== 2) {
  console.error('Usage: node scripts/copy-file.js <source> <destination>');
  process.exit(1);
}

const [sourceInput, destinationInput] = args;
const projectRoot = path.resolve(__dirname, '..');
const sourcePath = path.resolve(projectRoot, sourceInput);
const destinationPath = path.resolve(projectRoot, destinationInput);
const destinationDir = path.dirname(destinationPath);

function fail(message) {
  console.error(message);
  process.exit(1);
}

function isInsideProjectRoot(targetPath) {
  const relativePath = path.relative(projectRoot, targetPath);
  return relativePath === '' || (!relativePath.startsWith('..') && !path.isAbsolute(relativePath));
}

if (!isInsideProjectRoot(sourcePath) || !isInsideProjectRoot(destinationPath)) {
  fail('Source and destination must resolve inside the project root.');
}

if (sourcePath === destinationPath) {
  fail('Source and destination must be different files.');
}

if (!fs.existsSync(sourcePath)) {
  fail(`Source file not found: ${sourceInput}`);
}

let sourceStats;
try {
  sourceStats = fs.statSync(sourcePath);
} catch (error) {
  fail(`Unable to read source file "${sourceInput}": ${error.message}`);
}

if (!sourceStats.isFile()) {
  fail(`Source is not a file: ${sourceInput}`);
}

let destinationDirStats;
try {
  destinationDirStats = fs.statSync(destinationDir);
} catch {
  fail(`Destination directory not found: ${path.relative(projectRoot, destinationDir)}`);
}

if (!destinationDirStats.isDirectory()) {
  fail(`Destination parent is not a directory: ${path.relative(projectRoot, destinationDir)}`);
}

try {
  fs.copyFileSync(sourcePath, destinationPath);
} catch (error) {
  fail(`Copy failed (${sourceInput} -> ${destinationInput}): ${error.message}`);
}

console.log(`Copied ${sourceInput} -> ${destinationInput}`);
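The containment guard above (`isInsideProjectRoot`) is the safety-critical piece: both paths must resolve under the project root before anything is copied. A minimal Python sketch of the same check, under my own function name (not part of the repo):

```python
import os


def is_inside_root(root: str, target: str) -> bool:
    """Mirror of copy-file.js's isInsideProjectRoot: target must resolve under root."""
    rel = os.path.relpath(os.path.abspath(target), os.path.abspath(root))
    # The root itself yields '.', escapes start with '..' or stay absolute
    # (e.g. a different drive on Windows).
    return rel == "." or not (rel.startswith("..") or os.path.isabs(rel))


print(is_inside_root("/repo", "/repo/skills/a.md"))   # True
print(is_inside_root("/repo", "/repo/../etc/passwd"))  # False
```

Like the JS original, this rejects traversal after resolution, so `a/../../x` is caught even though it contains no absolute prefix.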
@@ -1,5 +1,6 @@
 import os
 import re
+import yaml

 def fix_skills(skills_dir):
     for root, dirs, files in os.walk(skills_dir):
@@ -14,33 +15,31 @@ def fix_skills(skills_dir):
             continue

         fm_text = fm_match.group(1)
+        body = content[fm_match.end():]
         folder_name = os.path.basename(root)
-        new_fm_lines = []

+        try:
+            metadata = yaml.safe_load(fm_text) or {}
+        except yaml.YAMLError as e:
+            print(f"⚠️ {skill_path}: YAML error - {e}")
+            continue
+
         changed = False

-        for line in fm_text.split('\n'):
-            if line.startswith('name:'):
-                old_name = line.split(':', 1)[1].strip().strip('"').strip("'")
-                if old_name != folder_name:
-                    new_fm_lines.append(f"name: {folder_name}")
-                    changed = True
-                else:
-                    new_fm_lines.append(line)
-            elif line.startswith('description:'):
-                desc = line.split(':', 1)[1].strip().strip('"').strip("'")
-                if len(desc) > 200:
-                    # trim to 197 chars and add "..."
-                    short_desc = desc[:197] + "..."
-                    new_fm_lines.append(f'description: "{short_desc}"')
-                    changed = True
-                else:
-                    new_fm_lines.append(line)
-            else:
-                new_fm_lines.append(line)
+        # 1. Fix Name
+        if metadata.get('name') != folder_name:
+            metadata['name'] = folder_name
+            changed = True
+
+        # 2. Fix Description length
+        desc = metadata.get('description', '')
+        if isinstance(desc, str) and len(desc) > 200:
+            metadata['description'] = desc[:197] + "..."
+            changed = True

         if changed:
-            new_fm_text = '\n'.join(new_fm_lines)
-            new_content = content[:fm_match.start(1)] + new_fm_text + content[fm_match.end(1):]
+            new_fm = yaml.dump(metadata, sort_keys=False, allow_unicode=True, width=1000).strip()
+            new_content = f"---\n{new_fm}\n---" + body
             with open(skill_path, 'w', encoding='utf-8') as f:
                 f.write(new_content)
             print(f"Fixed {skill_path}")
@@ -1,9 +1,9 @@
 import os
 import re
-import json
+import yaml

 def fix_yaml_quotes(skills_dir):
-    print(f"Scanning for YAML quoting errors in {skills_dir}...")
+    print(f"Normalizing YAML frontmatter in {skills_dir}...")
     fixed_count = 0

     for root, dirs, files in os.walk(skills_dir):
@@ -21,42 +21,24 @@ def fix_yaml_quotes(skills_dir):
             continue

         fm_text = fm_match.group(1)
-        new_fm_lines = []
-        changed = False
+        body = content[fm_match.end():]

-        for line in fm_text.split('\n'):
-            if line.startswith('description:'):
-                key, val = line.split(':', 1)
-                val = val.strip()
-
-                # Store original to check if it matches the fixed version
-                orig_val = val
-
-                # Strip matching outer quotes if they exist
-                if val.startswith('"') and val.endswith('"') and len(val) >= 2:
-                    val = val[1:-1]
-                elif val.startswith("'") and val.endswith("'") and len(val) >= 2:
-                    val = val[1:-1]
-
-                # Now safely encode using JSON to handle internal escapes
-                safe_val = json.dumps(val)
-
-                if safe_val != orig_val:
-                    new_line = f"description: {safe_val}"
-                    new_fm_lines.append(new_line)
-                    changed = True
-                    continue
-            new_fm_lines.append(line)
+        try:
+            # safe_load and then dump will normalize quoting automatically
+            metadata = yaml.safe_load(fm_text) or {}
+            new_fm = yaml.dump(metadata, sort_keys=False, allow_unicode=True, width=1000).strip()

-        if changed:
-            new_fm_text = '\n'.join(new_fm_lines)
-            new_content = content[:fm_match.start(1)] + new_fm_text + content[fm_match.end(1):]
-            with open(file_path, 'w', encoding='utf-8') as f:
-                f.write(new_content)
-            print(f"Fixed quotes in {os.path.relpath(file_path, skills_dir)}")
-            fixed_count += 1
+            # Check if it actually changed something significant (beyond just style)
+            # but normalization is good anyway. We'll just compare the fm_text.
+            if new_fm.strip() != fm_text.strip():
+                new_content = f"---\n{new_fm}\n---" + body
+                with open(file_path, 'w', encoding='utf-8') as f:
+                    f.write(new_content)
+                fixed_count += 1
+        except yaml.YAMLError as e:
+            print(f"⚠️ {file_path}: YAML error - {e}")

-    print(f"Total files fixed: {fixed_count}")
+    print(f"Total files normalized: {fixed_count}")

 if __name__ == '__main__':
     base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
@@ -59,9 +59,11 @@ def generate_index(skills_dir, output_file):
         parent_dir = os.path.basename(os.path.dirname(root))

         # Default values
+        rel_path = os.path.relpath(root, os.path.dirname(skills_dir))
+        # Force forward slashes for cross-platform JSON compatibility
         skill_info = {
             "id": dir_name,
-            "path": os.path.relpath(root, os.path.dirname(skills_dir)),
+            "path": rel_path.replace(os.sep, '/'),
             "category": parent_dir if parent_dir != "skills" else None,  # Will be overridden by frontmatter if present
             "name": dir_name.replace("-", " ").title(),
             "description": "",
@@ -117,7 +119,7 @@ def generate_index(skills_dir, output_file):
     # Sort validation: by name
     skills.sort(key=lambda x: (x["name"].lower(), x["id"].lower()))

-    with open(output_file, 'w', encoding='utf-8') as f:
+    with open(output_file, 'w', encoding='utf-8', newline='\n') as f:
        json.dump(skills, f, indent=2)

    print(f"✅ Generated rich index with {len(skills)} skills at: {output_file}")
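Both edits in this hunk serve determinism: the frontmatter regex is unchanged, while emitted paths are forced to forward slashes (and newlines to `\n`) so the JSON index is byte-identical on Windows and POSIX. A standalone sketch of the two helpers involved, with illustrative names of my own:

```python
import os
import re


def extract_frontmatter(content: str):
    """Return the raw YAML frontmatter block from a SKILL.md, or None."""
    match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
    return match.group(1) if match else None


def portable_path(path: str, base: str) -> str:
    """Relative path with forward slashes, stable across platforms."""
    return os.path.relpath(path, base).replace(os.sep, "/")


doc = "---\nname: demo-skill\n---\n# Body"
print(extract_frontmatter(doc))                      # name: demo-skill
print(portable_path("/repo/skills/demo", "/repo"))   # skills/demo
```

On POSIX the `replace` is a no-op; on Windows it rewrites `\` separators, which is exactly why the generated `skills.json` stopped differing between CI runners.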
@@ -18,20 +18,19 @@ def get_project_root():
     """Get the project root directory."""
     return os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

+import yaml
+
 def parse_frontmatter(content):
-    """Parse frontmatter from SKILL.md content."""
+    """Parse frontmatter from SKILL.md content using PyYAML."""
     fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
     if not fm_match:
         return None

     fm_text = fm_match.group(1)
-    metadata = {}
-    for line in fm_text.split('\n'):
-        if ':' in line and not line.strip().startswith('#'):
-            key, val = line.split(':', 1)
-            metadata[key.strip()] = val.strip().strip('"').strip("'")
-
-    return metadata
+    try:
+        return yaml.safe_load(fm_text) or {}
+    except yaml.YAMLError:
+        return None

 def generate_skills_report(output_file=None, sort_by='date'):
     """Generate a report of all skills with their metadata."""
@@ -26,45 +26,39 @@ def get_project_root():
     """Get the project root directory."""
     return os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

+import yaml
+
 def parse_frontmatter(content):
-    """Parse frontmatter from SKILL.md content."""
+    """Parse frontmatter from SKILL.md content using PyYAML."""
     fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
     if not fm_match:
         return None, content

     fm_text = fm_match.group(1)
-    metadata = {}
-    for line in fm_text.split('\n'):
-        if ':' in line and not line.strip().startswith('#'):
-            key, val = line.split(':', 1)
-            metadata[key.strip()] = val.strip().strip('"').strip("'")
-
-    return metadata, content
+    try:
+        metadata = yaml.safe_load(fm_text) or {}
+        return metadata, content
+    except yaml.YAMLError as e:
+        print(f"⚠️ YAML parsing error: {e}")
+        return None, content

 def reconstruct_frontmatter(metadata):
-    """Reconstruct frontmatter from metadata dict."""
-    lines = ["---"]
-
-    # Order: id, name, description, category, risk, source, tags, date_added
-    priority_keys = ['id', 'name', 'description', 'category', 'risk', 'source', 'tags']
+    """Reconstruct frontmatter from metadata dict using PyYAML."""
+    # Ensure important keys are at the top if they exist
+    ordered = {}
+    priority_keys = ['id', 'name', 'description', 'category', 'risk', 'source', 'tags', 'date_added']

     for key in priority_keys:
         if key in metadata:
-            val = metadata[key]
-            if isinstance(val, list):
-                # Handle list fields like tags
-                lines.append(f'{key}: {val}')
-            elif ' ' in str(val) or any(c in str(val) for c in ':#"'):
-                lines.append(f'{key}: "{val}"')
-            else:
-                lines.append(f'{key}: {val}')
+            ordered[key] = metadata[key]

-    # Add date_added at the end
-    if 'date_added' in metadata:
-        lines.append(f'date_added: "{metadata["date_added"]}"')
-
-    lines.append("---")
-    return '\n'.join(lines)
+    # Add any remaining keys
+    for key, value in metadata.items():
+        if key not in ordered:
+            ordered[key] = value
+
+    fm_text = yaml.dump(ordered, sort_keys=False, allow_unicode=True, width=1000).strip()
+    return f"---\n{fm_text}\n---"

 def update_skill_frontmatter(skill_path, metadata):
     """Update a skill's frontmatter with new metadata."""
@@ -14,6 +14,9 @@ const ALLOWED_FIELDS = new Set([
   'compatibility',
   'metadata',
   'allowed-tools',
+  'date_added',
+  'category',
+  'id',
 ]);

 function isPlainObject(value) {
@@ -122,7 +125,8 @@ function normalizeSkill(skillId) {
   if (!modified) return false;

   const ordered = {};
-  for (const key of ['name', 'description', 'license', 'compatibility', 'allowed-tools', 'metadata']) {
+  const order = ['id', 'name', 'description', 'category', 'risk', 'source', 'license', 'compatibility', 'date_added', 'allowed-tools', 'metadata'];
+  for (const key of order) {
     if (updated[key] !== undefined) {
       ordered[key] = updated[key];
     }
90 scripts/run-python.js (new file)

@@ -0,0 +1,90 @@
#!/usr/bin/env node

'use strict';

const { spawn, spawnSync } = require('node:child_process');

const args = process.argv.slice(2);

if (args.length === 0) {
  console.error('Usage: node scripts/run-python.js <script.py> [args...]');
  process.exit(1);
}

function uniqueCandidates(candidates) {
  const seen = new Set();
  const unique = [];

  for (const candidate of candidates) {
    const key = candidate.join('\u0000');
    if (!seen.has(key)) {
      seen.add(key);
      unique.push(candidate);
    }
  }

  return unique;
}

function getPythonCandidates() {
  // Optional override for CI/local pinning without editing scripts.
  const configuredPython =
    process.env.ANTIGRAVITY_PYTHON || process.env.npm_config_python;
  const candidates = [
    configuredPython ? [configuredPython] : null,
    // Keep this ordered list easy to update if project requirements change.
    ['python3'],
    ['python'],
    ['py', '-3'],
  ].filter(Boolean);

  return uniqueCandidates(candidates);
}

function canRun(candidate) {
  const [command, ...baseArgs] = candidate;
  const probe = spawnSync(
    command,
    [...baseArgs, '-c', 'import sys; raise SystemExit(0 if sys.version_info[0] == 3 else 1)'],
    {
      stdio: 'ignore',
      shell: false,
    },
  );

  return probe.error == null && probe.status === 0;
}

const pythonCandidates = getPythonCandidates();
const selected = pythonCandidates.find(canRun);

if (!selected) {
  console.error(
    'Unable to find a Python 3 interpreter. Tried: python3, python, py -3',
  );
  process.exit(1);
}

const [command, ...baseArgs] = selected;
const child = spawn(command, [...baseArgs, ...args], {
  stdio: 'inherit',
  shell: false,
});

child.on('error', (error) => {
  console.error(`Failed to start Python interpreter "${command}": ${error.message}`);
  process.exit(1);
});

child.on('exit', (code, signal) => {
  if (signal) {
    try {
      process.kill(process.pid, signal);
    } catch {
      process.exit(1);
    }
    return;
  }

  process.exit(code ?? 1);
});
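The probe strategy above — run each candidate interpreter with a tiny `-c` program and accept the first that exits 0 — is what lets the same npm scripts work on Linux (`python3`), macOS (`python3`/`python`), and Windows (`py -3`). The same idea in Python, as a hedged sketch with my own function name, using only the standard library:

```python
import shutil
import subprocess

CANDIDATES = (["python3"], ["python"], ["py", "-3"])
PROBE = "import sys; raise SystemExit(0 if sys.version_info[0] == 3 else 1)"


def find_python3():
    """Return the first candidate command that actually runs Python 3, else None."""
    for candidate in CANDIDATES:
        if shutil.which(candidate[0]) is None:
            continue  # binary not on PATH at all, skip the probe
        result = subprocess.run(candidate + ["-c", PROBE], capture_output=True)
        if result.returncode == 0:
            return candidate
    return None


print(find_python3())
```

The probe deliberately checks the *major version*, not just that the binary exists: on some systems `python` is still Python 2, which would pass a plain existence check but fail the `SystemExit(... == 3 ...)` test.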
@@ -28,16 +28,21 @@ const destSkills = path.join(WEB_APP_PUBLIC, 'skills');

 console.log(`Copying skills directory...`);

-// Recursive copy function
+// Recursive copy function (follows symlinks to copy resolved content)
 function copyFolderSync(from, to) {
-  if (!fs.existsSync(to)) fs.mkdirSync(to);
+  if (!fs.existsSync(to)) fs.mkdirSync(to, { recursive: true });

   fs.readdirSync(from).forEach(element => {
-    if (fs.lstatSync(path.join(from, element)).isFile()) {
-      fs.copyFileSync(path.join(from, element), path.join(to, element));
-    } else {
-      copyFolderSync(path.join(from, element), path.join(to, element));
+    const srcPath = path.join(from, element);
+    const destPath = path.join(to, element);
+    const stat = fs.statSync(srcPath); // statSync follows symlinks

+    if (stat.isFile()) {
+      fs.copyFileSync(srcPath, destPath);
+    } else if (stat.isDirectory()) {
+      copyFolderSync(srcPath, destPath);
     }
+    // Skip other types (e.g. sockets, FIFOs)
   });
 }
@@ -59,8 +59,10 @@ def cleanup_previous_sync():
     return removed_count


+import yaml
+
 def extract_skill_name(skill_md_path: Path) -> str | None:
-    """Extract the 'name' field from SKILL.md YAML frontmatter."""
+    """Extract the 'name' field from SKILL.md YAML frontmatter using PyYAML."""
     try:
         content = skill_md_path.read_text(encoding="utf-8")
     except Exception:
@@ -70,13 +72,11 @@ def extract_skill_name(skill_md_path: Path) -> str | None:
     if not fm_match:
         return None

-    for line in fm_match.group(1).splitlines():
-        match = re.match(r"^name:\s*(.+)$", line)
-        if match:
-            value = match.group(1).strip().strip("\"'")
-            if value:
-                return value
-    return None
+    try:
+        data = yaml.safe_load(fm_match.group(1)) or {}
+        return data.get('name')
+    except Exception:
+        return None


 def generate_fallback_name(relative_path: Path) -> str:
@@ -5,13 +5,61 @@ Shows the repository layout, skill locations, and what flat names would be generated.
 """

 import re
+import io
+import shutil
 import subprocess
 import sys
 import tempfile
+import traceback
+import uuid
 from pathlib import Path

 MS_REPO = "https://github.com/microsoft/skills.git"


+def create_clone_target(prefix: str) -> Path:
+    """Return a writable, non-existent path for git clone destination."""
+    repo_tmp_root = Path(__file__).resolve().parents[2] / ".tmp" / "tests"
+    candidate_roots = (repo_tmp_root, Path(tempfile.gettempdir()))
+    last_error: OSError | None = None
+
+    for root in candidate_roots:
+        try:
+            root.mkdir(parents=True, exist_ok=True)
+            probe_file = root / f".{prefix}write-probe-{uuid.uuid4().hex}.tmp"
+            with probe_file.open("xb"):
+                pass
+            probe_file.unlink()
+            return root / f"{prefix}{uuid.uuid4().hex}"
+        except OSError as exc:
+            last_error = exc
+
+    if last_error is not None:
+        raise last_error
+    raise OSError("Unable to determine clone destination")
+
+
+def configure_utf8_output() -> None:
+    """Best-effort UTF-8 stdout/stderr on Windows without dropping diagnostics."""
+    for stream_name in ("stdout", "stderr"):
+        stream = getattr(sys, stream_name)
+        try:
+            stream.reconfigure(encoding="utf-8", errors="backslashreplace")
+            continue
+        except Exception:
+            pass
+
+        buffer = getattr(stream, "buffer", None)
+        if buffer is not None:
+            setattr(
+                sys,
+                stream_name,
+                io.TextIOWrapper(
+                    buffer, encoding="utf-8", errors="backslashreplace"
+                ),
+            )
+
+
 def extract_skill_name(skill_md_path: Path) -> str | None:
     """Extract the 'name' field from SKILL.md YAML frontmatter."""
     try:
@@ -37,18 +85,26 @@ def inspect_repo():
     print("🔍 Inspecting Microsoft Skills Repository Structure")
     print("=" * 60)

-    with tempfile.TemporaryDirectory() as temp_dir:
-        temp_path = Path(temp_dir)
+    repo_path: Path | None = None
+    try:
+        repo_path = create_clone_target(prefix="ms-skills-")

         print("\n1️⃣ Cloning repository...")
-        subprocess.run(
-            ["git", "clone", "--depth", "1", MS_REPO, str(temp_path)],
-            check=True,
-            capture_output=True,
-        )
+        try:
+            subprocess.run(
+                ["git", "clone", "--depth", "1", MS_REPO, str(repo_path)],
+                check=True,
+                capture_output=True,
+                text=True,
+            )
+        except subprocess.CalledProcessError as exc:
+            print("\n❌ git clone failed.", file=sys.stderr)
+            if exc.stderr:
+                print(exc.stderr.strip(), file=sys.stderr)
+            raise

         # Find all SKILL.md files
-        all_skill_mds = list(temp_path.rglob("SKILL.md"))
+        all_skill_mds = list(repo_path.rglob("SKILL.md"))
         print(f"\n2️⃣ Total SKILL.md files found: {len(all_skill_mds)}")

         # Show flat name mapping
@@ -59,7 +115,7 @@
         for skill_md in sorted(all_skill_mds, key=lambda p: str(p)):
             try:
-                rel = skill_md.parent.relative_to(temp_path)
+                rel = skill_md.parent.relative_to(repo_path)
             except ValueError:
                 rel = skill_md.parent

@@ -87,12 +143,18 @@
             f"\n4️⃣ ✅ No name collisions — all {len(names_seen)} names are unique!")

         print("\n✨ Inspection complete!")
+    finally:
+        if repo_path is not None:
+            shutil.rmtree(repo_path, ignore_errors=True)


 if __name__ == "__main__":
+    configure_utf8_output()
     try:
         inspect_repo()
+    except subprocess.CalledProcessError as exc:
+        sys.exit(exc.returncode or 1)
     except Exception as e:
-        print(f"\n❌ Error: {e}")
-        import traceback
-        traceback.print_exc()
+        print(f"\n❌ Error: {e}", file=sys.stderr)
+        traceback.print_exc(file=sys.stderr)
         sys.exit(1)
76 scripts/tests/run-test-suite.js (new file)

@@ -0,0 +1,76 @@
#!/usr/bin/env node

const { spawnSync } = require("child_process");

const NETWORK_TEST_ENV = "ENABLE_NETWORK_TESTS";
const ENABLED_VALUES = new Set(["1", "true", "yes", "on"]);
const LOCAL_TEST_COMMANDS = [
  ["scripts/tests/validate_skills_headings.test.js"],
  ["scripts/run-python.js", "scripts/tests/test_validate_skills_headings.py"],
];
const NETWORK_TEST_COMMANDS = [
  ["scripts/run-python.js", "scripts/tests/inspect_microsoft_repo.py"],
  ["scripts/run-python.js", "scripts/tests/test_comprehensive_coverage.py"],
];

function isNetworkTestsEnabled() {
  const value = process.env[NETWORK_TEST_ENV];
  if (!value) {
    return false;
  }
  return ENABLED_VALUES.has(String(value).trim().toLowerCase());
}

function runNodeCommand(args) {
  const result = spawnSync(process.execPath, args, { stdio: "inherit" });

  if (result.error) {
    throw result.error;
  }

  if (result.signal) {
    process.kill(process.pid, result.signal);
  }

  if (typeof result.status !== "number") {
    process.exit(1);
  }

  if (result.status !== 0) {
    process.exit(result.status);
  }
}

function runCommandSet(commands) {
  for (const commandArgs of commands) {
    runNodeCommand(commandArgs);
  }
}

function main() {
  const mode = process.argv[2];

  if (mode === "--local") {
    runCommandSet(LOCAL_TEST_COMMANDS);
    return;
  }

  if (mode === "--network") {
    runCommandSet(NETWORK_TEST_COMMANDS);
    return;
  }

  runCommandSet(LOCAL_TEST_COMMANDS);

  if (!isNetworkTestsEnabled()) {
    console.log(
      `[tests] Skipping network integration tests. Set ${NETWORK_TEST_ENV}=1 to enable.`,
    );
    return;
  }

  console.log(`[tests] ${NETWORK_TEST_ENV} enabled; running network integration tests.`);
  runCommandSet(NETWORK_TEST_COMMANDS);
}

main();
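The hermetic/network split hinges on one small predicate: the network suite only runs when `ENABLE_NETWORK_TESTS` is an explicitly truthy string (the CI workflow sets it to `"1"`). The equivalent gate in Python, as a sketch mirroring `isNetworkTestsEnabled`:

```python
ENABLED_VALUES = {"1", "true", "yes", "on"}


def network_tests_enabled(value):
    """True only for explicitly truthy ENABLE_NETWORK_TESTS values."""
    if not value:
        return False  # unset or empty string means "skip network tests"
    return str(value).strip().lower() in ENABLED_VALUES


print(network_tests_enabled("1"))     # True
print(network_tests_enabled("TRUE"))  # True
print(network_tests_enabled("0"))     # False
print(network_tests_enabled(None))    # False
```

Using an allow-list rather than plain truthiness means `ENABLE_NETWORK_TESTS=0` does what a reader expects, instead of counting as "set, therefore enabled".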
@@ -5,14 +5,62 @@ Ensures all skills are captured and no directory name collisions exist.
|
||||
"""
|
||||
|
||||
import re
|
||||
import io
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
import tempfile
|
||||
import traceback
|
||||
import uuid
|
||||
from pathlib import Path
|
||||
from collections import defaultdict
|
||||
|
||||
MS_REPO = "https://github.com/microsoft/skills.git"
|
||||
|
||||
|
||||
def create_clone_target(prefix: str) -> Path:
|
||||
"""Return a writable, non-existent path for git clone destination."""
|
||||
repo_tmp_root = Path(__file__).resolve().parents[2] / ".tmp" / "tests"
|
||||
candidate_roots = (repo_tmp_root, Path(tempfile.gettempdir()))
|
||||
last_error: OSError | None = None
|
||||
|
||||
for root in candidate_roots:
|
||||
try:
|
||||
root.mkdir(parents=True, exist_ok=True)
|
||||
probe_file = root / f".{prefix}write-probe-{uuid.uuid4().hex}.tmp"
|
||||
with probe_file.open("xb"):
|
||||
pass
|
||||
probe_file.unlink()
|
||||
return root / f"{prefix}{uuid.uuid4().hex}"
|
||||
except OSError as exc:
|
||||
last_error = exc
|
||||
|
||||
if last_error is not None:
|
||||
raise last_error
|
||||
raise OSError("Unable to determine clone destination")
|
||||
|
||||
|
||||
def configure_utf8_output() -> None:
|
||||
"""Best-effort UTF-8 stdout/stderr on Windows without dropping diagnostics."""
|
||||
for stream_name in ("stdout", "stderr"):
|
||||
stream = getattr(sys, stream_name)
|
||||
try:
|
||||
stream.reconfigure(encoding="utf-8", errors="backslashreplace")
|
||||
continue
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
buffer = getattr(stream, "buffer", None)
|
||||
if buffer is not None:
|
||||
setattr(
|
||||
sys,
|
||||
stream_name,
|
||||
io.TextIOWrapper(
|
||||
buffer, encoding="utf-8", errors="backslashreplace"
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
def extract_skill_name(skill_md_path: Path) -> str | None:
|
||||
"""Extract the 'name' field from SKILL.md YAML frontmatter."""
|
||||
try:
|
||||
@@ -41,27 +89,35 @@ def analyze_skill_locations():
|
||||
print("🔬 Comprehensive Skill Coverage & Uniqueness Analysis")
|
||||
print("=" * 60)
|
||||
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
temp_path = Path(temp_dir)
|
||||
repo_path: Path | None = None
|
||||
try:
|
||||
repo_path = create_clone_target(prefix="ms-skills-")
|
||||
|
||||
print("\n1️⃣ Cloning repository...")
|
||||
subprocess.run(
|
||||
["git", "clone", "--depth", "1", MS_REPO, str(temp_path)],
|
||||
check=True,
|
||||
capture_output=True,
|
||||
)
|
||||
try:
|
||||
subprocess.run(
|
||||
["git", "clone", "--depth", "1", MS_REPO, str(repo_path)],
|
||||
check=True,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
except subprocess.CalledProcessError as exc:
|
||||
print("\n❌ git clone failed.", file=sys.stderr)
|
||||
if exc.stderr:
|
||||
print(exc.stderr.strip(), file=sys.stderr)
|
||||
raise
|
||||
|
||||
# Find ALL SKILL.md files
|
||||
all_skill_files = list(temp_path.rglob("SKILL.md"))
|
||||
all_skill_files = list(repo_path.rglob("SKILL.md"))
|
||||
print(f"\n2️⃣ Total SKILL.md files found: {len(all_skill_files)}")
|
||||
|
||||
# Categorize by location
|
||||
location_types = defaultdict(list)
|
||||
for skill_file in all_skill_files:
|
||||
path_str = str(skill_file)
|
||||
if ".github/skills" in path_str:
|
||||
path_str = skill_file.as_posix()
|
||||
if ".github/skills/" in path_str:
|
||||
location_types["github_skills"].append(skill_file)
|
||||
elif ".github/plugins" in path_str:
|
||||
elif ".github/plugins/" in path_str:
|
||||
location_types["github_plugins"].append(skill_file)
|
||||
elif "/skills/" in path_str:
|
||||
location_types["skills_dir"].append(skill_file)
|
||||
@@ -81,7 +137,7 @@ def analyze_skill_locations():
|
||||
|
||||
for skill_file in all_skill_files:
|
||||
try:
|
||||
rel = skill_file.parent.relative_to(temp_path)
|
||||
rel = skill_file.parent.relative_to(repo_path)
|
||||
except ValueError:
|
||||
rel = skill_file.parent
|
||||
|
||||
@@ -163,9 +219,13 @@ def analyze_skill_locations():
|
||||
"invalid_names": len(invalid_names),
|
||||
"passed": is_pass,
|
||||
}
|
||||
finally:
|
||||
if repo_path is not None:
|
||||
shutil.rmtree(repo_path, ignore_errors=True)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
configure_utf8_output()
|
||||
try:
|
||||
results = analyze_skill_locations()
|
||||
|
||||
@@ -176,14 +236,18 @@ if __name__ == "__main__":
|
||||
if results["passed"]:
|
||||
print("\n✅ V4 FLAT STRUCTURE IS VALID")
|
||||
print(" All names are unique and valid directory names!")
|
||||
sys.exit(0)
|
||||
else:
|
||||
print("\n⚠️ V4 FLAT STRUCTURE NEEDS FIXES")
|
||||
if results["collisions"] > 0:
|
||||
print(f" {results['collisions']} name collisions to resolve")
|
||||
if results["invalid_names"] > 0:
|
||||
print(f" {results['invalid_names']} invalid directory names")
|
||||
sys.exit(1)
|
||||
|
||||
except subprocess.CalledProcessError as exc:
|
||||
sys.exit(exc.returncode or 1)
|
||||
except Exception as e:
|
||||
print(f"\n❌ Error: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
print(f"\n❌ Error: {e}", file=sys.stderr)
|
||||
traceback.print_exc(file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
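The `errors="backslashreplace"` handler used throughout these scripts can be sketched in isolation. This is a minimal illustration, not part of the commit: a byte buffer stands in for a console stream with a narrow codepage (ASCII here, purely for demonstration), and unencodable characters come out as escape sequences instead of raising `UnicodeEncodeError`.

```python
import io

# A narrow-encoding text stream: characters outside ASCII are escaped, not fatal.
buf = io.BytesIO()
wrapper = io.TextIOWrapper(buf, encoding="ascii", errors="backslashreplace")
wrapper.write("check \u2705 done")
wrapper.flush()
print(buf.getvalue())  # b'check \\u2705 done'
```

This is why the scripts prefer `reconfigure(..., errors="backslashreplace")` and fall back to wrapping the raw buffer: diagnostics degrade to escapes rather than being dropped.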
@@ -1,7 +1,31 @@
#!/usr/bin/env python3
import io
import json
import os
import re
import sys


def configure_utf8_output() -> None:
    """Best-effort UTF-8 stdout/stderr on Windows without dropping diagnostics."""
    if sys.platform != "win32":
        return

    for stream_name in ("stdout", "stderr"):
        stream = getattr(sys, stream_name)
        try:
            stream.reconfigure(encoding="utf-8", errors="backslashreplace")
            continue
        except Exception:
            pass

        buffer = getattr(stream, "buffer", None)
        if buffer is not None:
            setattr(
                sys,
                stream_name,
                io.TextIOWrapper(buffer, encoding="utf-8", errors="backslashreplace"),
            )


def update_readme():
@@ -55,11 +79,12 @@ def update_readme():
        content,
    )

    with open(readme_path, "w", encoding="utf-8") as f:
    with open(readme_path, "w", encoding="utf-8", newline="\n") as f:
        f.write(content)

    print("✅ README.md updated successfully.")


if __name__ == "__main__":
    configure_utf8_output()
    update_readme()

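The `newline="\n"` argument added above matters mostly on Windows, where the default text mode translates `\n` to `\r\n` on write; with `newline="\n"` the file keeps LF endings on every platform. A quick sketch (the file path is a throwaway temp file, not one used by the repo):

```python
import os
import tempfile

# Write with newline="\n" so no newline translation happens on any platform.
path = os.path.join(tempfile.mkdtemp(), "README.md")
with open(path, "w", encoding="utf-8", newline="\n") as f:
    f.write("# Title\nUpdated by script.\n")

with open(path, "rb") as f:
    raw = f.read()
print(b"\r" in raw)  # False: the bytes on disk contain only LF
```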
@@ -2,6 +2,29 @@ import os
import re
import argparse
import sys
import io


def configure_utf8_output() -> None:
    """Best-effort UTF-8 stdout/stderr on Windows without dropping diagnostics."""
    if sys.platform != "win32":
        return

    for stream_name in ("stdout", "stderr"):
        stream = getattr(sys, stream_name)
        try:
            stream.reconfigure(encoding="utf-8", errors="backslashreplace")
            continue
        except Exception:
            pass

        buffer = getattr(stream, "buffer", None)
        if buffer is not None:
            setattr(
                sys,
                stream_name,
                io.TextIOWrapper(buffer, encoding="utf-8", errors="backslashreplace"),
            )

WHEN_TO_USE_PATTERNS = [
    re.compile(r"^##\s+When\s+to\s+Use", re.MULTILINE | re.IGNORECASE),
@@ -12,24 +35,37 @@ WHEN_TO_USE_PATTERNS = [
def has_when_to_use_section(content):
    return any(pattern.search(content) for pattern in WHEN_TO_USE_PATTERNS)

def parse_frontmatter(content):
import yaml

def parse_frontmatter(content, rel_path=None):
    """
    Simple frontmatter parser using regex to avoid external dependencies.
    Returns a dict of key-values.
    Parse frontmatter using PyYAML for robustness.
    Returns a dict of key-values and a list of error messages.
    """
    fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
    if not fm_match:
        return None
        return None, ["Missing or malformed YAML frontmatter"]

    fm_text = fm_match.group(1)
    metadata = {}
    for line in fm_text.split('\n'):
        if ':' in line:
            key, val = line.split(':', 1)
            metadata[key.strip()] = val.strip().strip('"').strip("'")
    return metadata
    fm_errors = []
    try:
        metadata = yaml.safe_load(fm_text) or {}

        # Identification of the specific regression issue for better reporting
        if "description" in metadata:
            desc = metadata["description"]
            if not desc or (isinstance(desc, str) and not desc.strip()):
                fm_errors.append("description field is empty or whitespace only.")
            elif desc == "|":
                fm_errors.append("description contains only the YAML block indicator '|', likely due to a parsing regression.")

        return metadata, fm_errors
    except yaml.YAMLError as e:
        return None, [f"YAML Syntax Error: {e}"]

def validate_skills(skills_dir, strict_mode=False):
    configure_utf8_output()

    print(f"🔍 Validating skills in: {skills_dir}")
    print(f"⚙️ Mode: {'STRICT (CI)' if strict_mode else 'Standard (Dev)'}")

@@ -60,10 +96,14 @@ def validate_skills(skills_dir, strict_mode=False):
            continue

        # 1. Frontmatter Check
        metadata = parse_frontmatter(content)
        metadata, fm_errors = parse_frontmatter(content, rel_path)
        if not metadata:
            errors.append(f"❌ {rel_path}: Missing or malformed YAML frontmatter")
            continue  # Cannot proceed without metadata

        if fm_errors:
            for fe in fm_errors:
                errors.append(f"❌ {rel_path}: YAML Structure Error - {fe}")

        # 2. Metadata Schema Checks
        if "name" not in metadata:
@@ -71,12 +111,15 @@ def validate_skills(skills_dir, strict_mode=False):
        elif metadata["name"] != os.path.basename(root):
            errors.append(f"❌ {rel_path}: Name '{metadata['name']}' does not match folder name '{os.path.basename(root)}'")

        if "description" not in metadata:
        if "description" not in metadata or metadata["description"] is None:
            errors.append(f"❌ {rel_path}: Missing 'description' in frontmatter")
        else:
            # agentskills-ref checks for short descriptions
            if len(metadata["description"]) > 200:
                errors.append(f"❌ {rel_path}: Description is oversized ({len(metadata['description'])} chars). Must be concise.")
            desc = metadata["description"]
            if not isinstance(desc, str):
                errors.append(f"❌ {rel_path}: 'description' must be a string, got {type(desc).__name__}")
            elif len(desc) > 300:  # increased limit for multi-line support
                errors.append(f"❌ {rel_path}: Description is oversized ({len(desc)} chars). Must be concise.")

        # Risk Validation (Quality Bar)
        if "risk" not in metadata:

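Both the old regex parser and the new PyYAML-based one share the same fence-matching step: a regex that grabs the text between the opening and closing `---` markers. A minimal stdlib-only sketch (the sample document is invented for illustration):

```python
import re

# Extract the YAML frontmatter block between the "---" fences.
content = """---
name: example-skill
description: "A sample skill."
---

# Example Skill
"""
fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
print(fm_match.group(1))
```

The captured group is then handed to `yaml.safe_load` in the new version; `safe_load` (rather than `load`) keeps arbitrary Python object construction out of the parsing path.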
@@ -4,7 +4,14 @@
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>web-app</title>
    <meta name="description" content="Antigravity Awesome Skills - 950+ agentic skills catalog for Claude Code, Gemini, Cursor, Copilot. Search, filter, and copy prompts instantly." />
    <meta name="keywords" content="AI skills, Claude Code, Gemini CLI, Cursor, Copilot, agentic skills, coding assistant" />
    <meta name="author" content="Niccolò Abate (@sickn33)" />
    <meta property="og:title" content="Antigravity Awesome Skills Catalog" />
    <meta property="og:description" content="Browse 950+ battle-tested agentic skills for AI coding assistants" />
    <meta property="og:type" content="website" />
    <meta name="twitter:card" content="summary_large_image" />
    <title>Antigravity Skills | 950+ AI Agentic Skills Catalog</title>
  </head>
  <body>
    <div id="root"></div>

97
web-app/package-lock.json
generated
@@ -12,11 +12,13 @@
        "clsx": "^2.1.1",
        "framer-motion": "^12.34.2",
        "github-markdown-css": "^5.9.0",
        "highlight.js": "^11.11.1",
        "lucide-react": "^0.574.0",
        "react": "^19.2.0",
        "react-dom": "^19.2.0",
        "react-markdown": "^10.1.0",
        "react-router-dom": "^7.13.0",
        "rehype-highlight": "^7.0.2",
        "tailwind-merge": "^3.5.0"
      },
      "devDependencies": {
@@ -79,7 +81,6 @@
      "integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
      "dev": true,
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "@babel/code-frame": "^7.29.0",
        "@babel/generator": "^7.29.0",
@@ -1860,7 +1861,6 @@
      "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz",
      "integrity": "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w==",
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "csstype": "^3.2.2"
      }
@@ -1923,7 +1923,6 @@
      "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
      "dev": true,
      "license": "MIT",
      "peer": true,
      "bin": {
        "acorn": "bin/acorn"
      },
@@ -2076,7 +2075,6 @@
        }
      ],
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "baseline-browser-mapping": "^2.9.0",
        "caniuse-lite": "^1.0.30001759",
@@ -2437,7 +2435,6 @@
      "integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
      "dev": true,
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "@eslint-community/eslint-utils": "^4.8.0",
        "@eslint-community/regexpp": "^4.12.1",
@@ -2843,6 +2840,19 @@
        "node": ">=8"
      }
    },
    "node_modules/hast-util-is-element": {
      "version": "3.0.0",
      "resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-3.0.0.tgz",
      "integrity": "sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g==",
      "license": "MIT",
      "dependencies": {
        "@types/hast": "^3.0.0"
      },
      "funding": {
        "type": "opencollective",
        "url": "https://opencollective.com/unified"
      }
    },
    "node_modules/hast-util-to-jsx-runtime": {
      "version": "2.3.6",
      "resolved": "https://registry.npmjs.org/hast-util-to-jsx-runtime/-/hast-util-to-jsx-runtime-2.3.6.tgz",
@@ -2870,6 +2880,22 @@
        "url": "https://opencollective.com/unified"
      }
    },
    "node_modules/hast-util-to-text": {
      "version": "4.0.2",
      "resolved": "https://registry.npmjs.org/hast-util-to-text/-/hast-util-to-text-4.0.2.tgz",
      "integrity": "sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A==",
      "license": "MIT",
      "dependencies": {
        "@types/hast": "^3.0.0",
        "@types/unist": "^3.0.0",
        "hast-util-is-element": "^3.0.0",
        "unist-util-find-after": "^5.0.0"
      },
      "funding": {
        "type": "opencollective",
        "url": "https://opencollective.com/unified"
      }
    },
    "node_modules/hast-util-whitespace": {
      "version": "3.0.0",
      "resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-3.0.0.tgz",
@@ -2900,6 +2926,15 @@
        "hermes-estree": "0.25.1"
      }
    },
    "node_modules/highlight.js": {
      "version": "11.11.1",
      "resolved": "https://registry.npmjs.org/highlight.js/-/highlight.js-11.11.1.tgz",
      "integrity": "sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w==",
      "license": "BSD-3-Clause",
      "engines": {
        "node": ">=12.0.0"
      }
    },
    "node_modules/html-url-attributes": {
      "version": "3.0.1",
      "resolved": "https://registry.npmjs.org/html-url-attributes/-/html-url-attributes-3.0.1.tgz",
@@ -3443,6 +3478,21 @@
        "url": "https://github.com/sponsors/wooorm"
      }
    },
    "node_modules/lowlight": {
      "version": "3.3.0",
      "resolved": "https://registry.npmjs.org/lowlight/-/lowlight-3.3.0.tgz",
      "integrity": "sha512-0JNhgFoPvP6U6lE/UdVsSq99tn6DhjjpAj5MxG49ewd2mOBVtwWYIT8ClyABhq198aXXODMU6Ox8DrGy/CpTZQ==",
      "license": "MIT",
      "dependencies": {
        "@types/hast": "^3.0.0",
        "devlop": "^1.0.0",
        "highlight.js": "~11.11.0"
      },
      "funding": {
        "type": "github",
        "url": "https://github.com/sponsors/wooorm"
      }
    },
    "node_modules/lru-cache": {
      "version": "5.1.1",
      "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz",
@@ -4255,7 +4305,6 @@
      "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
      "dev": true,
      "license": "MIT",
      "peer": true,
      "engines": {
        "node": ">=12"
      },
@@ -4283,7 +4332,6 @@
        }
      ],
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "nanoid": "^3.3.11",
        "picocolors": "^1.1.1",
@@ -4335,7 +4383,6 @@
      "resolved": "https://registry.npmjs.org/react/-/react-19.2.4.tgz",
      "integrity": "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==",
      "license": "MIT",
      "peer": true,
      "engines": {
        "node": ">=0.10.0"
      }
@@ -4345,7 +4392,6 @@
      "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.4.tgz",
      "integrity": "sha512-AXJdLo8kgMbimY95O2aKQqsz2iWi9jMgKJhRBAxECE4IFxfcazB2LmzloIoibJI3C12IlY20+KFaLv+71bUJeQ==",
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "scheduler": "^0.27.0"
      },
@@ -4428,6 +4474,23 @@
        "react-dom": ">=18"
      }
    },
    "node_modules/rehype-highlight": {
      "version": "7.0.2",
      "resolved": "https://registry.npmjs.org/rehype-highlight/-/rehype-highlight-7.0.2.tgz",
      "integrity": "sha512-k158pK7wdC2qL3M5NcZROZ2tR/l7zOzjxXd5VGdcfIyoijjQqpHd3JKtYSBDpDZ38UI2WJWuFAtkMDxmx5kstA==",
      "license": "MIT",
      "dependencies": {
        "@types/hast": "^3.0.0",
        "hast-util-to-text": "^4.0.0",
        "lowlight": "^3.0.0",
        "unist-util-visit": "^5.0.0",
        "vfile": "^6.0.0"
      },
      "funding": {
        "type": "opencollective",
        "url": "https://opencollective.com/unified"
      }
    },
    "node_modules/remark-parse": {
      "version": "11.0.0",
      "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-11.0.0.tgz",
@@ -4751,6 +4814,20 @@
        "url": "https://opencollective.com/unified"
      }
    },
    "node_modules/unist-util-find-after": {
      "version": "5.0.0",
      "resolved": "https://registry.npmjs.org/unist-util-find-after/-/unist-util-find-after-5.0.0.tgz",
      "integrity": "sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ==",
      "license": "MIT",
      "dependencies": {
        "@types/unist": "^3.0.0",
        "unist-util-is": "^6.0.0"
      },
      "funding": {
        "type": "opencollective",
        "url": "https://opencollective.com/unified"
      }
    },
    "node_modules/unist-util-is": {
      "version": "6.0.1",
      "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-6.0.1.tgz",
@@ -4894,7 +4971,6 @@
      "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==",
      "dev": true,
      "license": "MIT",
      "peer": true,
      "dependencies": {
        "esbuild": "^0.27.0",
        "fdir": "^6.5.0",
@@ -5037,7 +5113,6 @@
      "integrity": "sha512-rftlrkhHZOcjDwkGlnUtZZkvaPHCsDATp4pGpuOOMDaTdDDXF91wuVDJoWoPsKX/3YPQ5fHuF3STjcYyKr+Qhg==",
      "dev": true,
      "license": "MIT",
      "peer": true,
      "funding": {
        "url": "https://github.com/sponsors/colinhacks"
      }

@@ -14,11 +14,13 @@
    "clsx": "^2.1.1",
    "framer-motion": "^12.34.2",
    "github-markdown-css": "^5.9.0",
    "highlight.js": "^11.11.1",
    "lucide-react": "^0.574.0",
    "react": "^19.2.0",
    "react-dom": "^19.2.0",
    "react-markdown": "^10.1.0",
    "react-router-dom": "^7.13.0",
    "rehype-highlight": "^7.0.2",
    "tailwind-merge": "^3.5.0"
  },
  "devDependencies": {

@@ -5,7 +5,6 @@ description: "Arquitecto de Soluciones Principal y Consultor Tecnológico de And
category: andruia
risk: safe
source: personal
date_added: "2026-02-27"
---

## When to Use

@@ -3,16 +3,12 @@ id: 10-andruia-skill-smith
name: 10-andruia-skill-smith
description: "Ingeniero de Sistemas de Andru.ia. Diseña, redacta y despliega nuevas habilidades (skills) dentro del repositorio siguiendo el Estándar de Diamante."
category: andruia
risk: safe
risk: official
source: personal
date_added: "2026-02-25"
---

# 🔨 Andru.ia Skill-Smith (The Forge)

## When to Use
Esta habilidad es aplicable para ejecutar el flujo de trabajo o las acciones descritas en la descripción general.


## 📝 Descripción
Soy el Ingeniero de Sistemas de Andru.ia. Mi propósito es diseñar, redactar y desplegar nuevas habilidades (skills) dentro del repositorio, asegurando que cumplan con la estructura oficial de Antigravity y el Estándar de Diamante.
@@ -42,4 +38,4 @@ Generar el código para los siguientes archivos:

## ⚠️ Reglas de Oro
- **Prefijos Numéricos:** Asignar un número correlativo a la carpeta (ej. 11, 12, 13) para mantener el orden.
- **Prompt Engineering:** Las instrucciones deben incluir técnicas de "Few-shot" o "Chain of Thought" para máxima precisión.
- **Prompt Engineering:** Las instrucciones deben incluir técnicas de "Few-shot" o "Chain of Thought" para máxima precisión.
@@ -5,7 +5,6 @@ description: "Estratega de Inteligencia de Dominio de Andru.ia. Analiza el nicho
category: andruia
risk: safe
source: personal
date_added: "2026-02-27"
---

## When to Use

@@ -1,9 +1,8 @@
---
name: 3d-web-experience
description: "Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portfolios, immersive websites, and bringing ..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# 3D Web Experience

@@ -3,7 +3,6 @@ name: ab-test-setup
description: "Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness."
risk: unknown
source: community
date_added: "2026-02-27"
---

# A/B Test Setup

@@ -3,7 +3,6 @@ name: accessibility-compliance-accessibility-audit
description: "You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers, and provide remediation guidance."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Accessibility Audit and Testing

@@ -1,9 +1,11 @@
---
name: active-directory-attacks
description: "This skill should be used when the user asks to \"attack Active Directory\", \"exploit AD\", \"Kerberoasting\", \"DCSync\", \"pass-the-hash\", \"BloodHound enumeration\", \"Golden Ticket\", ..."
metadata:
  author: zebbern
  version: "1.1"
risk: unknown
source: community
date_added: "2026-02-27"
---

# Active Directory Attacks

@@ -1,9 +1,10 @@
---
name: activecampaign-automation
description: "Automate ActiveCampaign tasks via Rube MCP (Composio): manage contacts, tags, list subscriptions, automation enrollment, and tasks. Always search tools first for current schemas."
requires:
  mcp: [rube]
risk: unknown
source: community
date_added: "2026-02-27"
---

# ActiveCampaign Automation via Rube MCP

@@ -3,7 +3,6 @@ name: address-github-comments
description: "Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Address GitHub Comments

@@ -1,9 +1,8 @@
---
name: agent-evaluation
description: "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring\u2014where even top agents achieve less than 50% on re..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Evaluation

@@ -1,9 +1,9 @@
---
name: agent-framework-azure-ai-py
description: "Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgentsProvider, using hosted tools (code int..."
package: agent-framework-azure-ai
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Framework Azure Hosted Agents

@@ -3,7 +3,6 @@ name: agent-manager-skill
description: "Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Manager Skill

@@ -1,9 +1,9 @@
---
name: agent-memory-mcp
author: Amit Rathiesh
description: "A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions)."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Memory Skill

@@ -1,9 +1,8 @@
---
name: agent-memory-systems
description: "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector s..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Memory Systems

@@ -3,7 +3,6 @@ name: agent-orchestration-improve-agent
description: "Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Performance Optimization Workflow

@@ -3,7 +3,6 @@ name: agent-orchestration-multi-agent-optimize
description: "Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughput, or reliability."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Multi-Agent Optimization Toolkit

@@ -1,9 +1,8 @@
---
name: agent-tool-builder
description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessar..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Tool Builder

@@ -1,9 +1,8 @@
---
name: agentfolio
description: "Skill for discovering and researching autonomous AI agents, tools, and ecosystems using the AgentFolio directory."
risk: unknown
source: agentfolio.io
date_added: "2026-02-27"
risk: unknown
---

# AgentFolio

@@ -1,9 +1,9 @@
---
name: agents-v2-py
description: "Build container-based Foundry Agents with Azure AI Projects SDK (ImageBasedHostedAgentDefinition). Use when creating hosted agents with custom container images in Azure AI Foundry."
package: azure-ai-projects
risk: unknown
source: community
date_added: "2026-02-27"
---

# Azure AI Hosted Agents (Python)

@@ -1,10 +1,11 @@
---
name: ai-agent-development
description: "AI agent development workflow for building autonomous agents, multi-agent systems, and agent orchestration with CrewAI, LangGraph, and custom agents."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
risk: safe
domain: ai-ml
category: granular-workflow-bundle
version: 1.0.0
---

# AI Agent Development Workflow

@@ -1,9 +1,8 @@
---
name: ai-agents-architect
description: "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool ..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# AI Agents Architect

@@ -1,9 +1,14 @@
---
name: ai-engineer
description: Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.
description: |
  Build production-ready LLM applications, advanced RAG systems, and
  intelligent agents. Implements vector search, multimodal AI, agent
  orchestration, and enterprise AI integrations. Use PROACTIVELY for LLM
  features, chatbots, AI agents, or AI-powered applications.
metadata:
  model: inherit
risk: unknown
source: community
date_added: '2026-02-27'
---

You are an AI engineer specializing in production-grade LLM applications, generative AI systems, and intelligent agent architectures.

@@ -1,10 +1,11 @@
|
||||
---
|
||||
name: ai-ml
|
||||
description: "AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features."
|
||||
category: workflow-bundle
|
||||
risk: safe
|
||||
source: personal
|
||||
date_added: "2026-02-27"
|
||||
risk: safe
|
||||
domain: artificial-intelligence
|
||||
category: workflow-bundle
|
||||
version: 1.0.0
|
||||
---
|
||||
|
||||
# AI/ML Workflow Bundle
|
||||
|
||||
@@ -1,9 +1,8 @@
|
||||
---
|
||||
name: ai-product
|
||||
description: Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...
|
||||
risk: unknown
|
||||
description: "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ..."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
date_added: '2026-02-27'
|
||||
risk: unknown
|
||||
---
|
||||
|
||||
# AI Product Development
|
||||
|
||||
@@ -1,9 +1,8 @@
|
||||
---
|
||||
name: ai-wrapper-product
|
||||
description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Cov..."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
risk: unknown
|
||||
source: "vibeship-spawner-skills (Apache 2.0)"
|
||||
date_added: "2026-02-27"
|
||||
---
|
||||
|
||||
# AI Wrapper Product
|
||||
|
||||
@@ -3,7 +3,6 @@ name: airflow-dag-patterns
|
||||
description: "Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating workflows, or scheduling batch jobs."
|
||||
risk: unknown
|
||||
source: community
|
||||
date_added: "2026-02-27"
|
||||
---
|
||||
|
||||
# Apache Airflow DAG Patterns
|
||||
|
||||
@@ -1,9 +1,10 @@
|
||||
---
|
||||
name: airtable-automation
|
||||
description: "Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas."
|
||||
requires:
|
||||
mcp: [rube]
|
||||
risk: unknown
|
||||
source: community
|
||||
date_added: "2026-02-27"
|
||||
---
|
||||
|
||||
# Airtable Automation via Rube MCP
|
||||
|
||||
@@ -1,9 +1,8 @@
|
||||
---
|
||||
name: algolia-search
|
||||
description: "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning Use when: adding search to, algolia, instantsearch, search api, search functionality."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
risk: unknown
|
||||
source: "vibeship-spawner-skills (Apache 2.0)"
|
||||
date_added: "2026-02-27"
|
||||
---
|
||||
|
||||
# Algolia Search Integration
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
---
|
||||
name: algorithmic-art
|
||||
description: "Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields,..."
|
||||
license: Complete terms in LICENSE.txt
|
||||
risk: unknown
|
||||
source: community
|
||||
date_added: "2026-02-27"
|
||||
---
|
||||
|
||||
Algorithmic philosophies are computational aesthetic movements that are then expressed through code. Output .md files (philosophy), .html files (interactive viewer), and .js files (generative algorithms).
|
||||
|
||||
@@ -1,9 +1,10 @@
---
name: amplitude-automation
description: "Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas."
requires:
  mcp: [rube]
risk: unknown
source: community
date_added: "2026-02-27"
---

# Amplitude Automation via Rube MCP

@@ -1,9 +1,13 @@
---
name: analytics-tracking
description: Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data.
description: >
  Design, audit, and improve analytics tracking systems that produce reliable,
  decision-ready data. Use when the user wants to set up, fix, or evaluate
  analytics tracking (GA4, GTM, product analytics, events, conversions, UTMs).
  This skill focuses on measurement strategy, signal quality, and validation—
  not just firing events.
risk: unknown
source: community
date_added: '2026-02-27'
---

# Analytics Tracking & Measurement Strategy

@@ -1,9 +1,8 @@
---
name: android-jetpack-compose-expert
description: "Expert guidance for building modern Android UIs with Jetpack Compose, covering state management, navigation, performance, and Material Design 3."
description: Expert guidance for building modern Android UIs with Jetpack Compose, covering state management, navigation, performance, and Material Design 3.
risk: safe
source: community
date_added: "2026-02-27"
---

# Android Jetpack Compose Expert

@@ -3,7 +3,6 @@ name: angular-best-practices
description: "Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and rendering efficiency."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular Best Practices

@@ -3,7 +3,6 @@ name: angular-migration
description: "Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applications, planning framework migrations, or ..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Angular Migration

@@ -3,7 +3,6 @@ name: angular-state-management
description: "Master modern Angular state management with Signals, NgRx, and RxJS. Use when setting up global state, managing component stores, choosing between state solutions, or migrating from legacy patterns."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular State Management

@@ -3,7 +3,6 @@ name: angular-ui-patterns
description: "Modern Angular UI patterns for loading states, error handling, and data display. Use when building UI components, handling async data, or managing component states."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular UI Patterns

@@ -3,7 +3,6 @@ name: anti-reversing-techniques
description: "Understand anti-reversing, obfuscation, and protection techniques encountered during software analysis. Use when analyzing protected binaries, bypassing anti-debugging for authorized analysis, or u..."
risk: unknown
source: community
date_added: "2026-02-27"
---

> **AUTHORIZED USE ONLY**: This skill contains dual-use security techniques. Before proceeding with any bypass or analysis:

@@ -3,7 +3,6 @@ name: api-design-principles
description: "Master REST and GraphQL API design principles to build intuitive, scalable, and maintainable APIs that delight developers. Use when designing new APIs, reviewing API specifications, or establishing..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Design Principles

@@ -3,7 +3,6 @@ name: api-documentation-generator
description: "Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Documentation Generator

@@ -1,10 +1,11 @@
---
name: api-documentation
description: "API documentation workflow for generating OpenAPI specs, creating developer guides, and maintaining comprehensive API documentation."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
risk: safe
domain: documentation
category: granular-workflow-bundle
version: 1.0.0
---

# API Documentation Workflow

@@ -1,9 +1,14 @@
---
name: api-documenter
description: Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals.
description: |
  Master API documentation with OpenAPI 3.1, AI-powered tools, and
  modern developer experience practices. Create interactive docs, generate SDKs,
  and build comprehensive developer portals. Use PROACTIVELY for API
  documentation or developer portal creation.
metadata:
  model: sonnet
risk: unknown
source: community
date_added: '2026-02-27'
---
You are an expert API documentation specialist mastering modern developer experience through comprehensive, interactive, and AI-enhanced documentation.


@@ -1,9 +1,11 @@
---
name: api-fuzzing-bug-bounty
description: "This skill should be used when the user asks to \"test API security\", \"fuzz APIs\", \"find IDOR vulnerabilities\", \"test REST API\", \"test GraphQL\", \"API penetration testing\", \"bug b..."
metadata:
  author: zebbern
  version: "1.1"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Fuzzing for Bug Bounty

@@ -1,9 +1,9 @@
---
name: api-patterns
description: "API design principles and decision-making. REST vs GraphQL vs tRPC selection, response formats, versioning, pagination."
allowed-tools: Read, Write, Edit, Glob, Grep
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Patterns

@@ -3,7 +3,6 @@ name: api-security-best-practices
description: "Implement secure API design patterns including authentication, authorization, input validation, rate limiting, and protection against common API vulnerabilities"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Security Best Practices

@@ -1,10 +1,11 @@
---
name: api-security-testing
description: "API security testing workflow for REST and GraphQL APIs covering authentication, authorization, rate limiting, input validation, and security best practices."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
risk: safe
domain: security
category: granular-workflow-bundle
version: 1.0.0
---

# API Security Testing Workflow

@@ -3,7 +3,6 @@ name: api-testing-observability-api-mock
description: "You are an API mocking expert specializing in realistic mock services for development, testing, and demos. Design mocks that simulate real API behavior and enable parallel development."
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Mocking Framework

@@ -1,9 +1,9 @@
---
name: app-builder
description: "Main application building orchestrator. Creates full-stack applications from natural language requests. Determines project type, selects tech stack, coordinates agents."
allowed-tools: Read, Write, Edit, Glob, Grep, Bash, Agent
risk: unknown
source: community
date_added: "2026-02-27"
---

# App Builder - Application Building Orchestrator

@@ -1,9 +1,9 @@
---
name: templates
description: "Project scaffolding templates for new applications. Use when creating new projects from scratch. Contains 12 templates for various tech stacks."
allowed-tools: Read, Glob, Grep
risk: unknown
source: community
date_added: "2026-02-27"
---

# Project Templates

@@ -3,7 +3,6 @@ name: app-store-optimization
description: "Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store"
risk: unknown
source: community
date_added: "2026-02-27"
---

# App Store Optimization (ASO) Skill

@@ -1,9 +1,13 @@
---
name: appdeploy
description: "Deploy web apps with backend APIs, database, and file storage. Use when the user asks to deploy or publish a website or web app and wants a public URL. Uses HTTP API via curl."
description: Deploy web apps with backend APIs, database, and file storage. Use when the user asks to deploy or publish a website or web app and wants a public URL. Uses HTTP API via curl.
allowed-tools:
  - Bash
risk: safe
source: "AppDeploy (MIT)"
date_added: "2026-02-27"
source: AppDeploy (MIT)
metadata:
  author: appdeploy
  version: "1.0.5"
---

# AppDeploy Skill

@@ -3,7 +3,6 @@ name: application-performance-performance-optimization
description: "Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack."
risk: unknown
source: community
date_added: "2026-02-27"
---

Optimize application performance end-to-end using specialized performance and optimization agents:

@@ -1,9 +1,13 @@
---
name: architect-review
description: "Master software architect specializing in modern architecture"
description: Master software architect specializing in modern architecture
  patterns, clean architecture, microservices, event-driven systems, and DDD.
  Reviews system designs and code changes for architectural integrity,
  scalability, and maintainability. Use PROACTIVELY for architectural decisions.
metadata:
  model: opus
risk: unknown
source: community
date_added: "2026-02-27"
---
You are a master software architect specializing in modern software architecture patterns, clean architecture principles, and distributed systems design.


@@ -3,7 +3,6 @@ name: architecture-decision-records
description: "Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant technical decisions, reviewing past architect..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Architecture Decision Records

@@ -3,7 +3,6 @@ name: architecture-patterns
description: "Implement proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design. Use when architecting complex backend systems or refactoring existing ..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Architecture Patterns

@@ -1,9 +1,9 @@
---
name: architecture
description: "Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing system design."
allowed-tools: Read, Glob, Grep
risk: unknown
source: community
date_added: "2026-02-27"
---

# Architecture Decision Framework

@@ -1,9 +1,15 @@
---
name: arm-cortex-expert
description: Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD).
description: >
  Senior embedded software engineer specializing in firmware and driver
  development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD).
  Decades of experience writing reliable, optimized, and maintainable embedded
  code with deep expertise in memory barriers, DMA/cache coherency,
  interrupt-driven I/O, and peripheral drivers.
metadata:
  model: inherit
risk: unknown
source: community
date_added: '2026-02-27'
---

# @arm-cortex-expert

@@ -1,9 +1,10 @@
---
name: asana-automation
description: "Automate Asana tasks via Rube MCP (Composio): tasks, projects, sections, teams, workspaces. Always search tools first for current schemas."
requires:
  mcp: [rube]
risk: unknown
source: community
date_added: "2026-02-27"
---

# Asana Automation via Rube MCP

@@ -3,7 +3,6 @@ name: async-python-patterns
description: "Master Python asyncio, concurrent programming, and async/await patterns for high-performance applications. Use when building async APIs, concurrent systems, or I/O-bound applications requiring non-..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Async Python Patterns

@@ -3,7 +3,6 @@ name: attack-tree-construction
description: "Build comprehensive attack trees to visualize threat paths. Use when mapping attack scenarios, identifying defense gaps, or communicating security risks to stakeholders."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Attack Tree Construction

@@ -1,11 +1,15 @@
---
name: audio-transcriber
description: "Transform audio recordings into professional Markdown documentation with intelligent summaries using LLM integration"
version: 1.2.0
author: Eric Andrade
created: 2025-02-01
updated: 2026-02-04
platforms: [github-copilot-cli, claude-code, codex]
category: content
tags: [audio, transcription, whisper, meeting-minutes, speech-to-text]
risk: safe
source: community
tags: "[audio, transcription, whisper, meeting-minutes, speech-to-text]"
date_added: "2026-02-27"
---

## Purpose

@@ -3,7 +3,6 @@ name: auth-implementation-patterns
description: "Master authentication and authorization patterns including JWT, OAuth2, session management, and RBAC to build secure, scalable access control systems. Use when implementing auth systems, securing A..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Authentication & Authorization Implementation Patterns

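The frontmatter fields repeated across these hunks (name, description, risk, source, date_added) lend themselves to a small lint pass. Below is a minimal sketch under stated assumptions: the field list comes from the diff, but the function name, the duplicate-key rule, and the simple `key: value` parsing are illustrative, not the registry's actual validator (which the release notes say uses secure YAML parsing).

```python
# Hypothetical lint for one SKILL.md frontmatter block. Deliberately handles
# only flat "key: value" lines, not full YAML; nested/indented lines are
# skipped. REQUIRED mirrors the fields normalized in the hunks above.

REQUIRED = {"name", "description", "risk", "source", "date_added"}

def lint_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in the frontmatter of one SKILL.md."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing opening '---'"]
    problems, seen = [], set()
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        if ":" not in line or line.startswith((" ", "\t")):
            continue  # blank, nested, or continuation line: out of scope here
        key = line.split(":", 1)[0].strip()
        if key in seen:
            problems.append(f"duplicate key: {key}")  # e.g. two 'source:' lines
        seen.add(key)
    else:
        return ["missing closing '---'"]
    problems += [f"missing field: {k}" for k in sorted(REQUIRED - seen)]
    return problems

good = """---
name: angular-best-practices
description: "Angular performance optimization and best practices guide."
risk: safe
source: self
date_added: "2026-02-27"
---"""
bad = "---\nname: x\nsource: a\nsource: b\n---"

print(lint_frontmatter(good))  # → []
print(lint_frontmatter(bad))
```

For the `bad` block this reports the duplicate `source:` key plus the missing `description`, `risk`, and `date_added` fields, which matches the kind of frontmatter cleanup the hunks above perform.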
Some files were not shown because too many files have changed in this diff.