feat: add 61 new skills from VoltAgent repository
- 27 official team skills (Sentry, Trail of Bits, Expo, Hugging Face, etc.)
- 34 community skills including context engineering suite
- All skills validated and compliant with V4 quality bar
- Complete source attribution maintained

Skills added:
- Official: commit, create-pr, find-bugs, iterate-pr, culture-index, fix-review, sharp-edges, expo-deployment, upgrading-expo, hugging-face-cli, hugging-face-jobs, vercel-deploy-claimable, design-md, using-neon, n8n-*, swiftui-expert-skill, fal-*, deep-research, imagen, readme, screenshots
- Community: frontend-slides, linear-claude-skill, skill-rails-upgrade, context-*, multi-agent-patterns, tool-design, evaluation, memory-systems, terraform-skill, and more
skills/automate-whatsapp/SKILL.md (new file, 257 lines)
@@ -0,0 +1,257 @@
---
name: automate-whatsapp
description: "Build WhatsApp automations with Kapso workflows: configure WhatsApp triggers, edit workflow graphs, manage executions, deploy functions, and use databases/integrations for state. Use when automating WhatsApp conversations and event handling."
source: "https://github.com/gokapso/agent-skills/tree/master/skills/automate-whatsapp"
risk: safe
---

# Automate WhatsApp

## When to use

Use this skill to build and run WhatsApp automations: workflow CRUD, graph edits, triggers, executions, function management, app integrations, and D1 database operations.

## Setup

Env vars:
- `KAPSO_API_BASE_URL` (host only, no `/platform/v1`)
- `KAPSO_API_KEY`
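For example, a minimal environment setup might look like this (both values below are placeholders, not real endpoints or credentials):

```shell
# Placeholder values - substitute your own Kapso host and API key
export KAPSO_API_BASE_URL="https://example.kapso.host"   # host only, no /platform/v1
export KAPSO_API_KEY="your-api-key"
```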
## How to

### Edit a workflow graph

1. Fetch graph: `node scripts/get-graph.js <workflow_id>` (note the `lock_version`)
2. Edit the JSON (see graph rules below)
3. Validate: `node scripts/validate-graph.js --definition-file <path>`
4. Update: `node scripts/update-graph.js <workflow_id> --expected-lock-version <n> --definition-file <path>`
5. Re-fetch to confirm

For small edits, use `edit-graph.js` with `--old-file` and `--new-file` instead.

If you get a `lock_version` conflict: re-fetch, re-apply your changes, and retry with the new `lock_version`.

### Manage triggers

1. List: `node scripts/list-triggers.js <workflow_id>`
2. Create: `node scripts/create-trigger.js <workflow_id> --trigger-type <type> --phone-number-id <id>`
3. Toggle: `node scripts/update-trigger.js --trigger-id <id> --active true|false`
4. Delete: `node scripts/delete-trigger.js --trigger-id <id>`

For `inbound_message` triggers, first run `node scripts/list-whatsapp-phone-numbers.js` to get the `phone_number_id`.

### Debug executions

1. List: `node scripts/list-executions.js <workflow_id>`
2. Inspect: `node scripts/get-execution.js <execution-id>`
3. Get value: `node scripts/get-context-value.js <execution-id> --variable-path vars.foo`
4. Events: `node scripts/list-execution-events.js <execution-id>`

### Create and deploy a function

1. Write code with the handler signature (see function rules below)
2. Create: `node scripts/create-function.js --name <name> --code-file <path>`
3. Deploy: `node scripts/deploy-function.js --function-id <id>`
4. Verify: `node scripts/get-function.js --function-id <id>`

### Set up an agent node with app integrations

1. Find model: `node scripts/list-provider-models.js`
2. Find account: `node scripts/list-accounts.js --app-slug <slug>` (use `pipedream_account_id`)
3. Find action: `node scripts/search-actions.js --query <word> --app-slug <slug>` (`action_id` = `key`)
4. Create integration: `node scripts/create-integration.js --action-id <id> --app-slug <slug> --account-id <id> --configured-props <json>`
5. Add tools to the agent node via `flow_agent_app_integration_tools`

### Database CRUD

1. List tables: `node scripts/list-tables.js`
2. Query: `node scripts/query-rows.js --table <name> --filters <json>`
3. Create/update/delete with the row scripts

## Graph rules

- Exactly one start node with `id` = `start`
- Never change existing node IDs
- Use `{node_type}_{timestamp_ms}` for new node IDs
- Non-decide nodes have 0 or 1 outgoing `next` edge
- Decide edge labels must match `conditions[].label`
- Edge keys are `source`/`target`/`label` (not `from`/`to`)
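To make the rules concrete, a minimal graph might look like the hypothetical sketch below. Only the `start` id, the `{node_type}_{timestamp_ms}` id pattern, and the `source`/`target`/`label` edge keys come from the rules above; the node `type` values and any other fields are assumptions, so check `references/graph-contract.md` for the real schema:

```json
{
  "nodes": [
    { "id": "start", "type": "start" },
    { "id": "send_message_1730000000000", "type": "send_message" }
  ],
  "edges": [
    { "source": "start", "target": "send_message_1730000000000", "label": "next" }
  ]
}
```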
For full schema details, see `references/graph-contract.md`.

## Function rules

```js
async function handler(request, env) {
  // Parse input
  const body = await request.json();
  // Use env.KV and env.DB as needed
  return new Response(JSON.stringify({ result: "ok" }));
}
```

- Do NOT use `export`, `export default`, or arrow functions
- Return a `Response` object

## Execution context

Always use this structure:
- `vars` - user-defined variables
- `system` - system variables
- `context` - channel data
- `metadata` - request metadata
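As an illustration, a function handler might read values from this structure if the execution context arrives on the request payload. The exact payload shape is defined in `references/functions-payloads.md`, so treat the field access (and the `user_name` variable) below as assumptions:

```js
// Sketch only: assumes the request body carries the execution context
// under the structure above (vars/system/context/metadata).
async function handler(request, env) {
  const body = await request.json();
  const vars = body.vars || {};              // user-defined variables
  const name = vars.user_name || "there";    // hypothetical variable
  return new Response(JSON.stringify({ greeting: "Hi " + name }));
}
```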
## Scripts

### Workflows

| Script | Purpose |
|--------|---------|
| `list-workflows.js` | List workflows (metadata only) |
| `get-workflow.js` | Get workflow metadata |
| `create-workflow.js` | Create a workflow |
| `update-workflow-settings.js` | Update workflow settings |

### Graph

| Script | Purpose |
|--------|---------|
| `get-graph.js` | Get workflow graph + `lock_version` |
| `edit-graph.js` | Patch graph via string replacement |
| `update-graph.js` | Replace entire graph |
| `validate-graph.js` | Validate graph structure locally |

### Triggers

| Script | Purpose |
|--------|---------|
| `list-triggers.js` | List triggers for a workflow |
| `create-trigger.js` | Create a trigger |
| `update-trigger.js` | Enable/disable a trigger |
| `delete-trigger.js` | Delete a trigger |
| `list-whatsapp-phone-numbers.js` | List phone numbers for trigger setup |

### Executions

| Script | Purpose |
|--------|---------|
| `list-executions.js` | List executions |
| `get-execution.js` | Get execution details |
| `get-context-value.js` | Read a value from the execution context |
| `update-execution-status.js` | Force execution state |
| `resume-execution.js` | Resume a waiting execution |
| `list-execution-events.js` | List execution events |
### Functions

| Script | Purpose |
|--------|---------|
| `list-functions.js` | List project functions |
| `get-function.js` | Get function details + code |
| `create-function.js` | Create a function |
| `update-function.js` | Update function code |
| `deploy-function.js` | Deploy function to runtime |
| `invoke-function.js` | Invoke function with payload |
| `list-function-invocations.js` | List function invocations |

### App integrations

| Script | Purpose |
|--------|---------|
| `list-apps.js` | Search integration apps |
| `search-actions.js` | Search actions (`action_id` = `key`) |
| `get-action-schema.js` | Get action JSON schema |
| `list-accounts.js` | List connected accounts |
| `create-connect-token.js` | Create OAuth connect link |
| `configure-prop.js` | Resolve `remote_options` for a prop |
| `reload-props.js` | Reload dynamic props |
| `list-integrations.js` | List saved integrations |
| `create-integration.js` | Create an integration |
| `update-integration.js` | Update an integration |
| `delete-integration.js` | Delete an integration |

### Databases

| Script | Purpose |
|--------|---------|
| `list-tables.js` | List D1 tables |
| `get-table.js` | Get table schema + sample rows |
| `query-rows.js` | Query rows with filters |
| `create-row.js` | Create a row |
| `update-row.js` | Update rows |
| `upsert-row.js` | Upsert a row |
| `delete-row.js` | Delete rows |

### OpenAPI

| Script | Purpose |
|--------|---------|
| `openapi-explore.mjs` | Explore OpenAPI (search/op/schema/where) |

Install deps (once):

```bash
npm i
```

Examples:

```bash
node scripts/openapi-explore.mjs --spec workflows search "variables"
node scripts/openapi-explore.mjs --spec workflows op getWorkflowVariables
node scripts/openapi-explore.mjs --spec platform op queryDatabaseRows
```
## Notes

- Prefer file paths over inline JSON (`--definition-file`, `--code-file`)
- `action_id` is the same as `key` from `search-actions`
- `--account-id` uses `pipedream_account_id` from `list-accounts`
- Variable CRUD (`variables-set.js`, `variables-delete.js`) is blocked - the Platform API doesn't support it
- Raw SQL execution is not supported via the Platform API

## References

Read before editing:
- [references/graph-contract.md](references/graph-contract.md) - Graph schema, computed vs editable fields, lock_version
- [references/node-types.md](references/node-types.md) - Node types and config shapes
- [references/workflow-overview.md](references/workflow-overview.md) - Execution flow and states

Other references:
- [references/execution-context.md](references/execution-context.md) - Context structure and variable substitution
- [references/triggers.md](references/triggers.md) - Trigger types and setup
- [references/app-integrations.md](references/app-integrations.md) - App integration and variable_definitions
- [references/functions-reference.md](references/functions-reference.md) - Function management
- [references/functions-payloads.md](references/functions-payloads.md) - Payload shapes for functions
- [references/databases-reference.md](references/databases-reference.md) - Database operations

## Assets

| File | Description |
|------|-------------|
| `workflow-linear.json` | Minimal linear workflow |
| `workflow-decision.json` | Minimal branching workflow |
| `workflow-agent-simple.json` | Minimal agent workflow |
| `workflow-customer-support-intake-agent.json` | Customer support intake |
| `workflow-interactive-buttons-decide-function.json` | Interactive buttons + decide (function) |
| `workflow-interactive-buttons-decide-ai.json` | Interactive buttons + decide (AI) |
| `workflow-api-template-wait-agent.json` | API trigger + template + agent |
| `function-decide-route-interactive-buttons.json` | Function for button routing |
| `agent-app-integration-example.json` | Agent node with app integrations |

## Related skills

- `integrate-whatsapp` - Onboarding, webhooks, messaging, templates, flows
- `observe-whatsapp` - Debugging, logs, health checks

<!-- FILEMAP:BEGIN -->
```text
[automate-whatsapp file map]|root: .
|.:{package.json,SKILL.md}
|assets:{agent-app-integration-example.json,databases-example.json,function-decide-route-interactive-buttons.json,functions-example.json,workflow-agent-simple.json,workflow-api-template-wait-agent.json,workflow-customer-support-intake-agent.json,workflow-decision.json,workflow-interactive-buttons-decide-ai.json,workflow-interactive-buttons-decide-function.json,workflow-linear.json}
|references:{app-integrations.md,databases-reference.md,execution-context.md,function-contracts.md,functions-payloads.md,functions-reference.md,graph-contract.md,node-types.md,triggers.md,workflow-overview.md,workflow-reference.md}
|scripts:{configure-prop.js,create-connect-token.js,create-function.js,create-integration.js,create-row.js,create-trigger.js,create-workflow.js,delete-integration.js,delete-row.js,delete-trigger.js,deploy-function.js,edit-graph.js,get-action-schema.js,get-context-value.js,get-execution-event.js,get-execution.js,get-function.js,get-graph.js,get-table.js,get-workflow.js,invoke-function.js,list-accounts.js,list-apps.js,list-execution-events.js,list-executions.js,list-function-invocations.js,list-functions.js,list-integrations.js,list-provider-models.js,list-tables.js,list-triggers.js,list-whatsapp-phone-numbers.js,list-workflows.js,openapi-explore.mjs,query-rows.js,reload-props.js,resume-execution.js,search-actions.js,update-execution-status.js,update-function.js,update-graph.js,update-integration.js,update-row.js,update-trigger.js,update-workflow-settings.js,upsert-row.js,validate-graph.js,variables-delete.js,variables-list.js,variables-set.js}
|scripts/lib/databases:{args.js,filters.js,kapso-api.js}
|scripts/lib/functions:{args.js,kapso-api.js}
|scripts/lib/workflows:{args.js,kapso-api.js,result.js}
```
<!-- FILEMAP:END -->
skills/aws-skills/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: aws-skills
description: "AWS development with infrastructure automation and cloud architecture patterns"
source: "https://github.com/zxkane/aws-skills"
risk: safe
---

# AWS Skills

## Overview

AWS development with infrastructure automation and cloud architecture patterns.

## When to Use This Skill

Use this skill when you need guidance on AWS development, infrastructure automation, or cloud architecture patterns.

## Instructions

This skill provides guidance and patterns for AWS development, covering infrastructure automation and cloud architecture.

For more information, see the [source repository](https://github.com/zxkane/aws-skills).
skills/beautiful-prose/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: beautiful-prose
description: "Hard-edged writing style contract for timeless, forceful English prose without AI tics"
source: "https://github.com/SHADOWPR0/beautiful_prose"
risk: safe
---

# Beautiful Prose

## Overview

Hard-edged writing style contract for timeless, forceful English prose without AI tics.

## When to Use This Skill

Use this skill when you need a hard-edged writing style contract for timeless, forceful English prose without AI tics.

## Instructions

This skill provides a style contract and patterns for writing timeless, forceful English prose free of AI tics.

For more information, see the [source repository](https://github.com/SHADOWPR0/beautiful_prose).
skills/clarity-gate/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: clarity-gate
description: "Pre-ingestion verification for epistemic quality in RAG systems with 9-point verification and Two-Round HITL workflow"
source: "https://github.com/frmoretto/clarity-gate"
risk: safe
---

# Clarity Gate

## Overview

Pre-ingestion verification for epistemic quality in RAG systems, with a 9-point verification checklist and a Two-Round HITL (human-in-the-loop) workflow.

## When to Use This Skill

Use this skill when you need to verify the epistemic quality of documents before ingestion into a RAG system.

## Instructions

This skill provides guidance and patterns for pre-ingestion verification, using 9-point verification and a Two-Round HITL workflow.

For more information, see the [source repository](https://github.com/frmoretto/clarity-gate).
skills/claude-ally-health/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: claude-ally-health
description: "A health assistant skill for medical information analysis, symptom tracking, and wellness guidance."
source: "https://github.com/huifer/Claude-Ally-Health"
risk: safe
---

# Claude Ally Health

## Overview

A health assistant skill for medical information analysis, symptom tracking, and wellness guidance.

## When to Use This Skill

Use this skill when you need medical information analysis, symptom tracking, or wellness guidance.

## Instructions

This skill provides guidance and patterns for medical information analysis, symptom tracking, and wellness guidance.

For more information, see the [source repository](https://github.com/huifer/Claude-Ally-Health).
skills/claude-scientific-skills/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: claude-scientific-skills
description: "Scientific research and analysis skills"
source: "https://github.com/K-Dense-AI/claude-scientific-skills"
risk: safe
---

# Claude Scientific Skills

## Overview

Scientific research and analysis skills.

## When to Use This Skill

Use this skill when you need support with scientific research and analysis.

## Instructions

This skill provides guidance and patterns for scientific research and analysis.

For more information, see the [source repository](https://github.com/K-Dense-AI/claude-scientific-skills).
skills/claude-speed-reader/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: claude-speed-reader
description: "Speed read Claude's responses at 600+ WPM using RSVP with Spritz-style ORP highlighting"
source: "https://github.com/SeanZoR/claude-speed-reader"
risk: safe
---

# Claude Speed Reader

## Overview

Speed read Claude's responses at 600+ WPM using RSVP (rapid serial visual presentation) with Spritz-style ORP highlighting.

## When to Use This Skill

Use this skill when you want to speed read Claude's responses at 600+ WPM using RSVP with Spritz-style ORP highlighting.

## Instructions

This skill provides guidance and patterns for speed reading Claude's responses via RSVP with Spritz-style ORP highlighting.

For more information, see the [source repository](https://github.com/SeanZoR/claude-speed-reader).
skills/claude-win11-speckit-update-skill/SKILL.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
name: claude-win11-speckit-update-skill
description: "Windows 11 system management"
source: "https://github.com/NotMyself/claude-win11-speckit-update-skill"
risk: safe
---

# Claude Win11 Speckit Update Skill

## Overview

Windows 11 system management.

## When to Use This Skill

Use this skill when you need to work with Windows 11 system management.

## Instructions

This skill provides guidance and patterns for Windows 11 system management.

For more information, see the [source repository](https://github.com/NotMyself/claude-win11-speckit-update-skill).
skills/commit/SKILL.md (new file, 171 lines)
@@ -0,0 +1,171 @@
---
name: commit
description: "Create commit messages following Sentry conventions. Use when committing code changes, writing commit messages, or formatting git history. Follows conventional commits with Sentry-specific issue references."
source: "https://github.com/getsentry/skills/tree/main/plugins/sentry-skills/skills/commit"
risk: safe
---

# Sentry Commit Messages

Follow these conventions when creating commits for Sentry projects.

## When to Use This Skill

Use this skill when:
- Committing code changes
- Writing commit messages
- Formatting git history
- Following Sentry commit conventions
- Referencing Sentry issues in commits

## Prerequisites

Before committing, ensure you're working on a feature branch, not the main branch.

```bash
# Check current branch
git branch --show-current
```

If you're on `main` or `master`, create a new branch first:

```bash
# Create and switch to a new branch
git checkout -b <type>/<short-description>
```

Branch naming should follow the pattern `<type>/<short-description>`, where type matches the commit type (e.g., `feat/add-user-auth`, `fix/null-pointer-error`, `ref/extract-validation`).

## Format

```
<type>(<scope>): <subject>

<body>

<footer>
```

The header is required. Scope is optional. All lines must stay under 100 characters.

## Commit Types

| Type | Purpose |
|------|---------|
| `feat` | New feature |
| `fix` | Bug fix |
| `ref` | Refactoring (no behavior change) |
| `perf` | Performance improvement |
| `docs` | Documentation only |
| `test` | Test additions or corrections |
| `build` | Build system or dependencies |
| `ci` | CI configuration |
| `chore` | Maintenance tasks |
| `style` | Code formatting (no logic change) |
| `meta` | Repository metadata |
| `license` | License changes |

## Subject Line Rules

- Use imperative, present tense: "Add feature" not "Added feature"
- Capitalize the first letter
- No period at the end
- Maximum 70 characters
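As a rough sketch, the header rules can be checked mechanically. The regex below encodes the type table, the optional scope, the `!` breaking-change marker, capitalization, the no-trailing-period rule, and the 70-character subject cap; it is an illustration, not an official Sentry lint:

```python
import re

# <type>(<scope>)!?: <Subject> - capitalized, no trailing period, <= 70 chars
HEADER = re.compile(
    r"^(feat|fix|ref|perf|docs|test|build|ci|chore|style|meta|license)"
    r"(\([\w-]+\))?!?: (?P<subject>[A-Z][^\n]*)$"
)

def valid_header(line):
    """Return True if the commit header follows the rules above."""
    m = HEADER.match(line)
    if not m:
        return False
    subject = m.group("subject")
    return len(subject) <= 70 and not subject.endswith(".")
```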
## Body Guidelines

- Explain **what** and **why**, not how
- Use imperative mood and present tense
- Include motivation for the change
- Contrast with previous behavior when relevant

## Footer: Issue References

Reference issues in the footer using these patterns:

```
Fixes GH-1234
Fixes #1234
Fixes SENTRY-1234
Refs LINEAR-ABC-123
```

- `Fixes` closes the issue when merged
- `Refs` links without closing

## AI-Generated Changes

When changes were primarily generated by a coding agent (like Claude Code), include the Co-Authored-By attribution in the commit footer:

```
Co-Authored-By: Claude <noreply@anthropic.com>
```

This is the only indicator of AI involvement that should appear in commits. Do not add phrases like "Generated by AI", "Written with Claude", or similar markers in the subject, body, or anywhere else in the commit message.

## Examples

### Simple fix

```
fix(api): Handle null response in user endpoint

The user API could return null for deleted accounts, causing a crash
in the dashboard. Add null check before accessing user properties.

Fixes SENTRY-5678
Co-Authored-By: Claude <noreply@anthropic.com>
```

### Feature with scope

```
feat(alerts): Add Slack thread replies for alert updates

When an alert is updated or resolved, post a reply to the original
Slack thread instead of creating a new message. This keeps related
notifications grouped together.

Refs GH-1234
```

### Refactor

```
ref: Extract common validation logic to shared module

Move duplicate validation code from three endpoints into a shared
validator class. No behavior change.
```

### Breaking change

```
feat(api)!: Remove deprecated v1 endpoints

Remove all v1 API endpoints that were deprecated in version 23.1.
Clients should migrate to v2 endpoints.

BREAKING CHANGE: v1 endpoints no longer available
Fixes SENTRY-9999
```

## Revert Format

```
revert: feat(api): Add new endpoint

This reverts commit abc123def456.

Reason: Caused performance regression in production.
```

## Principles

- Each commit should be a single, stable change
- Commits should be independently reviewable
- The repository should be in a working state after each commit

## References

- [Sentry Commit Messages](https://develop.sentry.dev/engineering-practices/commit-messages/)
skills/context-compression/SKILL.md (new file, 266 lines)
@@ -0,0 +1,266 @@
---
name: context-compression
description: "Design and evaluate compression strategies for long-running sessions"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/context-compression"
risk: safe
---

# Context Compression Strategies

When agent sessions generate millions of tokens of conversation history, compression becomes mandatory. The naive approach is aggressive compression to minimize tokens per request. The correct optimization target is tokens per task: the total tokens consumed to complete a task, including re-fetching costs when compression loses critical information.

## When to Activate

Activate this skill when:
- Agent sessions exceed context window limits
- Codebases exceed context windows (5M+ token systems)
- Designing conversation summarization strategies
- Debugging cases where agents "forget" what files they modified
- Building evaluation frameworks for compression quality

## Core Concepts

Context compression trades token savings against information loss. Three production-ready approaches exist:

1. **Anchored Iterative Summarization**: Maintain structured, persistent summaries with explicit sections for session intent, file modifications, decisions, and next steps. When compression triggers, summarize only the newly truncated span and merge it with the existing summary. Structure forces preservation by dedicating sections to specific information types.

2. **Opaque Compression**: Produce compressed representations optimized for reconstruction fidelity. Achieves the highest compression ratios (99%+) but sacrifices interpretability. Cannot verify what was preserved.

3. **Regenerative Full Summary**: Generate detailed structured summaries on each compression. Produces readable output but may lose details across repeated compression cycles due to full regeneration rather than incremental merging.

The critical insight: structure forces preservation. Dedicated sections act as checklists that the summarizer must populate, preventing silent information drift.
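A minimal sketch of the anchored-merge step, assuming summaries are kept as a dict mapping section names (following the structure described here) to lists of bullet items:

```python
def merge_summary(anchor, span_summary):
    """Anchored iterative merge: fold a newly truncated span's summary into
    the persistent anchor summary section by section, instead of
    regenerating the whole summary from scratch."""
    merged = {section: list(items) for section, items in anchor.items()}
    for section, items in span_summary.items():
        bucket = merged.setdefault(section, [])
        for item in items:
            if item not in bucket:  # keep earlier entries; append only new facts
                bucket.append(item)
    return merged
```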
## Detailed Topics

### Why Tokens-Per-Task Matters

Traditional compression metrics target tokens-per-request. This is the wrong optimization. When compression loses critical details like file paths or error messages, the agent must re-fetch information, re-explore approaches, and waste tokens recovering context.

The right metric is tokens-per-task: total tokens consumed from task start to completion. A compression strategy saving 0.5% more tokens but causing 20% more re-fetching costs more overall.
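A toy comparison with made-up numbers illustrates the trade-off:

```python
def tokens_per_task(tokens_per_request, requests, refetch_overhead):
    """Total tokens from task start to completion, including re-fetch waste."""
    return tokens_per_request * requests * (1 + refetch_overhead)

# Hypothetical numbers: aggressive compression shaves 0.5% off each request
# but loses details, forcing 20% extra tokens spent re-fetching context.
aggressive = tokens_per_task(9_950, 10, 0.20)   # 119,400 tokens per task
anchored = tokens_per_task(10_000, 10, 0.00)    # 100,000 tokens per task
```

Despite smaller requests, the aggressive strategy spends more tokens per task.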
### The Artifact Trail Problem

Artifact trail integrity is the weakest dimension across all compression methods, scoring 2.2-2.5 out of 5.0 in evaluations. Even structured summarization with explicit file sections struggles to maintain complete file tracking across long sessions.

Coding agents need to know:
- Which files were created
- Which files were modified and what changed
- Which files were read but not changed
- Function names, variable names, error messages

This problem likely requires specialized handling beyond general summarization: a separate artifact index or explicit file-state tracking in agent scaffolding.

### Structured Summary Sections

Effective structured summaries include explicit sections:

```markdown
## Session Intent
[What the user is trying to accomplish]

## Files Modified
- auth.controller.ts: Fixed JWT token generation
- config/redis.ts: Updated connection pooling
- tests/auth.test.ts: Added mock setup for new config

## Decisions Made
- Using Redis connection pool instead of per-request connections
- Retry logic with exponential backoff for transient failures

## Current State
- 14 tests passing, 2 failing
- Remaining: mock setup for session service tests

## Next Steps
1. Fix remaining test failures
2. Run full test suite
3. Update documentation
```

This structure prevents silent loss of file paths or decisions because each section must be explicitly addressed.

### Compression Trigger Strategies

When to trigger compression matters as much as how to compress:

| Strategy | Trigger Point | Trade-off |
|----------|---------------|-----------|
| Fixed threshold | 70-80% context utilization | Simple but may compress too early |
| Sliding window | Keep last N turns + summary | Predictable context size |
| Importance-based | Compress low-relevance sections first | Complex but preserves signal |
| Task-boundary | Compress at logical task completions | Clean summaries but unpredictable timing |

The sliding window approach with structured summaries provides the best balance of predictability and quality for most coding agent use cases.
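A sketch of the sliding-window strategy, assuming turns are strings and the summary is a list of bullets; the fold step here is a placeholder for a call to the structured summarizer:

```python
def compress_window(turns, summary, keep_last=10):
    """Sliding window: keep the last N turns verbatim and fold older
    turns into the persistent summary."""
    if len(turns) <= keep_last:
        return turns, summary
    folded, kept = turns[:-keep_last], turns[-keep_last:]
    # In practice this line would run the structured summarizer over `folded`.
    new_summary = summary + ["[folded %d older turns]" % len(folded)]
    return kept, new_summary
```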

### Probe-Based Evaluation

Traditional metrics like ROUGE or embedding similarity fail to capture functional compression quality. A summary may score high on lexical overlap while missing the one file path the agent needs.

Probe-based evaluation directly measures functional quality by asking questions after compression:

| Probe Type | What It Tests | Example Question |
|------------|---------------|------------------|
| Recall | Factual retention | "What was the original error message?" |
| Artifact | File tracking | "Which files have we modified?" |
| Continuation | Task planning | "What should we do next?" |
| Decision | Reasoning chain | "What did we decide about the Redis issue?" |

If compression preserved the right information, the agent answers correctly. If not, it guesses or hallucinates.
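
A minimal probe harness might look like the following sketch. The `ask_agent` callable and the expected-detail lists are illustrative assumptions; a production harness would typically use an LLM judge rather than substring matching:

```python
# Sketch: a minimal probe harness. `ask_agent` is a hypothetical function
# that queries the agent with the compressed context plus a probe question.
PROBES = [
    ("recall", "What was the original error message?", ["401", "/api/auth/login"]),
    ("artifact", "Which files have we modified?", ["config/redis.ts"]),
    ("continuation", "What should we do next?", ["test"]),
]

def run_probes(compressed_context, ask_agent, probes=PROBES):
    """Score each probe by the fraction of expected details in the answer."""
    results = {}
    for kind, question, expected in probes:
        answer = ask_agent(compressed_context, question)
        hits = sum(1 for term in expected if term.lower() in answer.lower())
        results[kind] = hits / len(expected)  # fraction of details preserved
    return results
```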

### Evaluation Dimensions

Six dimensions capture compression quality for coding agents:

1. **Accuracy**: Are technical details correct? File paths, function names, error codes.
2. **Context Awareness**: Does the response reflect current conversation state?
3. **Artifact Trail**: Does the agent know which files were read or modified?
4. **Completeness**: Does the response address all parts of the question?
5. **Continuity**: Can work continue without re-fetching information?
6. **Instruction Following**: Does the response respect stated constraints?

Accuracy shows the largest variation between compression methods (0.6 point gap). Artifact trail is universally weak (2.2-2.5 range).

## Practical Guidance

### Three-Phase Compression Workflow

For large codebases or agent systems exceeding context windows, apply compression through three phases:

1. **Research Phase**: Produce a research document from architecture diagrams, documentation, and key interfaces. Compress exploration into a structured analysis of components and dependencies. Output: single research document.

2. **Planning Phase**: Convert research into implementation specification with function signatures, type definitions, and data flow. A 5M token codebase compresses to approximately 2,000 words of specification.

3. **Implementation Phase**: Execute against the specification. Context remains focused on the spec rather than raw codebase exploration.

### Using Example Artifacts as Seeds

When provided with a manual migration example or reference PR, use it as a template to understand the target pattern. The example reveals constraints that static analysis cannot surface: which invariants must hold, which services break on changes, and what a clean migration looks like.

This is particularly important when the agent cannot distinguish essential complexity (business requirements) from accidental complexity (legacy workarounds). The example artifact encodes that distinction.

### Implementing Anchored Iterative Summarization

1. Define explicit summary sections matching your agent's needs
2. On first compression trigger, summarize truncated history into sections
3. On subsequent compressions, summarize only new truncated content
4. Merge new summary into existing sections rather than regenerating
5. Track which information came from which compression cycle for debugging
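
The merge and tracking steps above can be sketched roughly as follows. The section names and the shape of `new_sections` are illustrative assumptions:

```python
# Sketch: merging an incremental summary into anchored sections.
# Section names and the shape of `new_sections` are illustrative assumptions.
SECTIONS = ("session_intent", "files_modified", "decisions", "next_steps")

def merge_summary(existing, new_sections, cycle):
    """Append new items to each anchored section instead of regenerating.

    `existing` maps section name -> list of (cycle, item) tuples, so every
    entry records which compression cycle produced it (for debugging).
    """
    merged = {name: list(existing.get(name, [])) for name in SECTIONS}
    for name in SECTIONS:
        for item in new_sections.get(name, []):
            if all(item != prev for _, prev in merged[name]):  # skip duplicates
                merged[name].append((cycle, item))
    return merged
```

Because sections are merged rather than regenerated, information anchored in an earlier cycle cannot be silently dropped by a later summarization pass.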

### When to Use Each Approach

**Use anchored iterative summarization when:**
- Sessions are long-running (100+ messages)
- File tracking matters (coding, debugging)
- You need to verify what was preserved

**Use opaque compression when:**
- Maximum token savings required
- Sessions are relatively short
- Re-fetching costs are low

**Use regenerative summaries when:**
- Summary interpretability is critical
- Sessions have clear phase boundaries
- Full context review is acceptable on each compression

### Compression Ratio Considerations

| Method | Compression Ratio | Quality Score | Trade-off |
|--------|-------------------|---------------|-----------|
| Anchored Iterative | 98.6% | 3.70 | Best quality, slightly less compression |
| Regenerative | 98.7% | 3.44 | Good quality, moderate compression |
| Opaque | 99.3% | 3.35 | Best compression, quality loss |

The 0.7% additional tokens retained by structured summarization buys 0.35 quality points. For any task where re-fetching costs matter, this trade-off favors structured approaches.

## Examples

**Example 1: Debugging Session Compression**

Original context (89,000 tokens, 178 messages):
- 401 error on /api/auth/login endpoint
- Traced through auth controller, middleware, session store
- Found stale Redis connection
- Fixed connection pooling, added retry logic
- 14 tests passing, 2 failing

Structured summary after compression:

```markdown
## Session Intent
Debug 401 Unauthorized error on /api/auth/login despite valid credentials.

## Root Cause
Stale Redis connection in session store. JWT generated correctly but session could not be persisted.

## Files Modified
- auth.controller.ts: No changes (read only)
- middleware/cors.ts: No changes (examined)
- config/redis.ts: Fixed connection pooling configuration
- services/session.service.ts: Added retry logic for transient failures
- tests/auth.test.ts: Updated mock setup

## Test Status
14 passing, 2 failing (mock setup issues)

## Next Steps
1. Fix remaining test failures (mock session service)
2. Run full test suite
3. Deploy to staging
```

**Example 2: Probe Response Quality**

After compression, asking "What was the original error?":

Good response (structured summarization):
> "The original error was a 401 Unauthorized response from the /api/auth/login endpoint. Users received this error with valid credentials. Root cause was stale Redis connection in session store."

Poor response (aggressive compression):
> "We were debugging an authentication issue. The login was failing. We fixed some configuration problems."

The structured response preserves endpoint, error code, and root cause. The aggressive response loses all technical detail.

## Guidelines

1. Optimize for tokens-per-task, not tokens-per-request
2. Use structured summaries with explicit sections for file tracking
3. Trigger compression at 70-80% context utilization
4. Implement incremental merging rather than full regeneration
5. Test compression quality with probe-based evaluation
6. Track artifact trail separately if file tracking is critical
7. Accept slightly lower compression ratios for better quality retention
8. Monitor re-fetching frequency as a compression quality signal

## Integration

This skill connects to several others in the collection:

- context-degradation - Compression is a mitigation strategy for degradation
- context-optimization - Compression is one optimization technique among many
- evaluation - Probe-based evaluation applies to compression testing
- memory-systems - Compression relates to scratchpad and summary memory patterns

## References

Internal reference:
- [Evaluation Framework Reference](./references/evaluation-framework.md) - Detailed probe types and scoring rubrics

Related skills in this collection:
- context-degradation - Understanding what compression prevents
- context-optimization - Broader optimization strategies
- evaluation - Building evaluation frameworks

External resources:
- Factory Research: Evaluating Context Compression for AI Agents (December 2025)
- Research on LLM-as-judge evaluation methodology (Zheng et al., 2023)
- Netflix Engineering: "The Infinite Software Crisis" - Three-phase workflow and context compression at scale (AI Summit 2025)

---

## Skill Metadata

**Created**: 2025-12-22
**Last Updated**: 2025-12-26
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.1.0

238
skills/context-degradation/SKILL.md
Normal file
@@ -0,0 +1,238 @@
---
name: context-degradation
description: "Recognize patterns of context failure: lost-in-middle, poisoning, distraction, and clash"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/context-degradation"
risk: safe
---

## When to Use This Skill

Use this skill to recognize patterns of context failure: lost-in-middle, poisoning, distraction, and clash.

# Context Degradation Patterns

Language models exhibit predictable degradation patterns as context length increases. Understanding these patterns is essential for diagnosing failures and designing resilient systems. Context degradation is not a binary state but a continuum of performance loss that manifests in several distinct ways.

## When to Activate

Activate this skill when:
- Agent performance degrades unexpectedly during long conversations
- Debugging cases where agents produce incorrect or irrelevant outputs
- Designing systems that must handle large contexts reliably
- Evaluating context engineering choices for production systems
- Investigating "lost in middle" phenomena in agent outputs
- Analyzing context-related failures in agent behavior

## Core Concepts

Context degradation manifests through several distinct patterns. The lost-in-middle phenomenon causes information in the center of context to receive less attention. Context poisoning occurs when errors compound through repeated reference. Context distraction happens when irrelevant information overwhelms relevant content. Context confusion arises when the model cannot determine which context applies. Context clash develops when accumulated information directly conflicts.

These patterns are predictable and can be mitigated through architectural patterns like compaction, masking, partitioning, and isolation.

## Detailed Topics

### The Lost-in-Middle Phenomenon

The most well-documented degradation pattern is the "lost-in-middle" effect, where models demonstrate U-shaped attention curves. Information at the beginning and end of context receives reliable attention, while information buried in the middle suffers from dramatically reduced recall accuracy.

**Empirical Evidence**
Research demonstrates that relevant information placed in the middle of context experiences 10-40% lower recall accuracy compared to the same information at the beginning or end. This is not a failure of the model but a consequence of attention mechanics and training data distributions.

Models allocate massive attention to the first token (often the BOS token) to stabilize internal states. This creates an "attention sink" that soaks up attention budget. As context grows, the limited budget is stretched thinner, and middle tokens fail to garner sufficient attention weight for reliable retrieval.

**Practical Implications**
Design context placement with attention patterns in mind. Place critical information at the beginning or end of context. Consider whether information will be queried directly or needs to support reasoning; if the latter, placement matters less but overall signal quality matters more.

For long documents or conversations, use summary structures that surface key information at attention-favored positions. Use explicit section headers and transitions to help models navigate structure.

### Context Poisoning

Context poisoning occurs when hallucinations, errors, or incorrect information enters context and compounds through repeated reference. Once poisoned, context creates feedback loops that reinforce incorrect beliefs.

**How Poisoning Occurs**
Poisoning typically enters through three pathways. First, tool outputs may contain errors or unexpected formats that models accept as ground truth. Second, retrieved documents may contain incorrect or outdated information that models incorporate into reasoning. Third, model-generated summaries or intermediate outputs may introduce hallucinations that persist in context.

The compounding effect is severe. If an agent's goals section becomes poisoned, it develops strategies that take substantial effort to undo. Each subsequent decision references the poisoned content, reinforcing incorrect assumptions.

**Detection and Recovery**
Watch for symptoms including degraded output quality on tasks that previously succeeded, tool misalignment where agents call wrong tools or parameters, and hallucinations that persist despite correction attempts. When these symptoms appear, consider context poisoning.

Recovery requires removing or replacing poisoned content. This may involve truncating context to before the poisoning point, explicitly noting the poisoning in context and asking for re-evaluation, or restarting with clean context and preserving only verified information.

### Context Distraction

Context distraction emerges when context grows so long that models over-focus on provided information at the expense of their training knowledge. The model attends to everything in context regardless of relevance, and this creates pressure to use provided information even when internal knowledge is more accurate.

**The Distractor Effect**
Research shows that even a single irrelevant document in context reduces performance on tasks involving relevant documents. Multiple distractors compound degradation. The effect is not about noise in absolute terms but about attention allocation: irrelevant information competes with relevant information for limited attention budget.

Models do not have a mechanism to "skip" irrelevant context. They must attend to everything provided, and this obligation creates distraction even when the irrelevant information is clearly not useful.

**Mitigation Strategies**
Mitigate distraction through careful curation of what enters context. Apply relevance filtering before loading retrieved documents. Use namespacing and organization to make irrelevant sections easy to ignore structurally. Consider whether information truly needs to be in context or can be accessed through tool calls instead.
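
A relevance-filtering step can be sketched as follows. The `score` callable is a hypothetical relevance scorer (for example, a cross-encoder or cosine similarity over embeddings), and the threshold is illustrative:

```python
# Sketch: relevance filtering before documents enter context.
# `score` is a hypothetical relevance scorer; the threshold is illustrative.
def filter_documents(query, documents, score, threshold=0.5, max_docs=5):
    """Keep only the highest-scoring documents above a relevance floor.

    Dropping low-relevance documents matters because even a single
    distractor measurably degrades performance.
    """
    scored = [(score(query, doc), doc) for doc in documents]
    relevant = [(s, d) for s, d in scored if s >= threshold]
    relevant.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in relevant[:max_docs]]
```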

### Context Confusion

Context confusion arises when irrelevant information influences responses in ways that degrade quality. This is related to distraction but distinct: confusion concerns the influence of context on model behavior rather than attention allocation.

If you put something in context, the model has to pay attention to it. The model may incorporate irrelevant information, use inappropriate tool definitions, or apply constraints that came from different contexts. Confusion is especially problematic when context contains multiple task types or when switching between tasks within a single session.

**Signs of Confusion**
Watch for responses that address the wrong aspect of a query, tool calls that seem appropriate for a different task, or outputs that mix requirements from multiple sources. These indicate confusion about what context applies to the current situation.

**Architectural Solutions**
Architectural solutions include explicit task segmentation where different tasks get different context windows, clear transitions between task contexts, and state management that isolates context for different objectives.

### Context Clash

Context clash develops when accumulated information directly conflicts, creating contradictory guidance that derails reasoning. This differs from poisoning, where one piece of information is incorrect; in clash, multiple correct pieces of information contradict each other.

**Sources of Clash**
Clash commonly arises from multi-source retrieval where different sources have contradictory information, version conflicts where outdated and current information both appear in context, and perspective conflicts where different viewpoints are valid but incompatible.

**Resolution Approaches**
Resolution approaches include explicit conflict marking that identifies contradictions and requests clarification, priority rules that establish which source takes precedence, and version filtering that excludes outdated information from context.

### Empirical Benchmarks and Thresholds

Research provides concrete data on degradation patterns that inform design decisions.

**RULER Benchmark Findings**
The RULER benchmark delivers sobering findings: only 50% of models claiming 32K+ context maintain satisfactory performance at 32K tokens. GPT-5.2 shows the least degradation among current models, while many still drop 30+ points at extended contexts. Near-perfect scores on simple needle-in-haystack tests do not translate to real long-context understanding.

**Model-Specific Degradation Thresholds**
| Model | Degradation Onset | Severe Degradation | Notes |
|-------|-------------------|--------------------|-------|
| GPT-5.2 | ~64K tokens | ~200K tokens | Best overall degradation resistance with thinking mode |
| Claude Opus 4.5 | ~100K tokens | ~180K tokens | 200K context window, strong attention management |
| Claude Sonnet 4.5 | ~80K tokens | ~150K tokens | Optimized for agents and coding tasks |
| Gemini 3 Pro | ~500K tokens | ~800K tokens | 1M context window, native multimodality |
| Gemini 3 Flash | ~300K tokens | ~600K tokens | 3x speed of Gemini 2.5, 81.2% MMMU-Pro |

**Model-Specific Behavior Patterns**
Different models exhibit distinct failure modes under context pressure:

- **Claude 4.5 series**: Lowest hallucination rates with calibrated uncertainty. Claude Opus 4.5 achieves 80.9% on SWE-bench Verified. Tends to refuse or ask clarification rather than fabricate.
- **GPT-5.2**: Two modes available - instant (fast) and thinking (reasoning). Thinking mode reduces hallucination through step-by-step verification but increases latency.
- **Gemini 3 Pro/Flash**: Native multimodality with 1M context window. Gemini 3 Flash offers 3x speed improvement over previous generation. Strong at multi-modal reasoning across text, code, images, audio, and video.

These patterns inform model selection for different use cases. High-stakes tasks benefit from Claude 4.5's conservative approach or GPT-5.2's thinking mode; speed-critical tasks may use instant modes.

### Counterintuitive Findings

Research reveals several counterintuitive patterns that challenge assumptions about context management.

**Shuffled Haystacks Outperform Coherent Ones**
Studies found that shuffled (incoherent) haystacks produce better performance than logically coherent ones. This suggests that coherent context may create false associations that confuse retrieval, while incoherent context forces models to rely on exact matching.

**Single Distractors Have Outsized Impact**
Even a single irrelevant document reduces performance significantly. The effect is not proportional to the amount of noise but follows a step function where the presence of any distractor triggers degradation.

**Needle-Question Similarity Correlation**
Lower similarity between needle and question pairs shows faster degradation with context length. Tasks requiring inference across dissimilar content are particularly vulnerable.

### When Larger Contexts Hurt

Larger context windows do not uniformly improve performance. In many cases, larger contexts create new problems that outweigh benefits.

**Performance Degradation Curves**
Models exhibit non-linear degradation with context length. Performance remains stable up to a threshold, then degrades rapidly. The threshold varies by model and task complexity. For many models, meaningful degradation begins around 8,000-16,000 tokens even when context windows support much larger sizes.

**Cost Implications**
Processing cost grows disproportionately with context length. The cost to process a 400K token context is far more than double the cost of 200K: attention scales quadratically with sequence length, so time and computing resources grow superlinearly. For many applications, this makes large-context processing economically impractical.

**Cognitive Load Metaphor**
Even with an infinite context, asking a single model to maintain consistent quality across dozens of independent tasks creates a cognitive bottleneck. The model must constantly switch context between items, maintain a comparative framework, and ensure stylistic consistency. This is not a problem that more context solves.

## Practical Guidance

### The Four-Bucket Approach

Four strategies address different aspects of context degradation:

**Write**: Save context outside the window using scratchpads, file systems, or external storage. This keeps active context lean while preserving information access.

**Select**: Pull relevant context into the window through retrieval, filtering, and prioritization. This addresses distraction by excluding irrelevant information.

**Compress**: Reduce tokens while preserving information through summarization, abstraction, and observation masking. This extends effective context capacity.

**Isolate**: Split context across sub-agents or sessions to prevent any single context from growing large enough to degrade. This is the most aggressive strategy but often the most effective.

### Architectural Patterns

Implement these strategies through specific architectural patterns. Use just-in-time context loading to retrieve information only when needed. Use observation masking to replace verbose tool outputs with compact references. Use sub-agent architectures to isolate context for different tasks. Use compaction to summarize growing context before it exceeds limits.
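
Observation masking, one of the patterns above, can be sketched roughly as follows. The store and the reference format are illustrative assumptions, not a specific framework's API:

```python
# Sketch: observation masking. Verbose tool outputs are stored externally
# and replaced in context with a compact reference the agent can expand
# later. The store and reference format are illustrative assumptions.
class ObservationStore:
    def __init__(self, max_inline_chars=500):
        self.max_inline = max_inline_chars
        self._store = {}

    def mask(self, tool_name, output):
        """Return the output verbatim if small, else a retrievable reference."""
        if len(output) <= self.max_inline:
            return output
        ref = f"obs-{len(self._store) + 1}"
        self._store[ref] = output
        head = output[: self.max_inline]
        return f"[{tool_name} output truncated, {len(output)} chars, ref={ref}]\n{head}"

    def expand(self, ref):
        """Re-fetch the full output on demand (just-in-time loading)."""
        return self._store[ref]
```

The truncated head keeps enough signal for the agent to decide whether expanding the reference is worth the tokens.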

## Examples

**Example 1: Detecting Degradation**
```yaml
# Context grows during long conversation
turn_1: 1000 tokens
turn_5: 8000 tokens
turn_10: 25000 tokens
turn_20: 60000 tokens (degradation begins)
turn_30: 90000 tokens (significant degradation)
```

**Example 2: Mitigating Lost-in-Middle**
```markdown
# Organize context with critical info at edges

[CURRENT TASK] # At start
- Goal: Generate quarterly report
- Deadline: End of week

[DETAILED CONTEXT] # Middle (less attention)
- 50 pages of data
- Multiple analysis sections
- Supporting evidence

[KEY FINDINGS] # At end
- Revenue up 15%
- Costs down 8%
- Growth in Region A
```

## Guidelines

1. Monitor context length and performance correlation during development
2. Place critical information at beginning or end of context
3. Implement compaction triggers before degradation becomes severe
4. Validate retrieved documents for accuracy before adding to context
5. Use versioning to prevent outdated information from causing clash
6. Segment tasks to prevent context confusion across different objectives
7. Design for graceful degradation rather than assuming perfect conditions
8. Test with progressively larger contexts to find degradation thresholds

## Integration

This skill builds on context-fundamentals and should be studied after understanding basic context concepts. It connects to:

- context-optimization - Techniques for mitigating degradation
- multi-agent-patterns - Using isolation to prevent degradation
- evaluation - Measuring and detecting degradation in production

## References

Internal reference:
- [Degradation Patterns Reference](./references/patterns.md) - Detailed technical reference

Related skills in this collection:
- context-fundamentals - Context basics
- context-optimization - Mitigation techniques
- evaluation - Detection and measurement

External resources:
- Research on attention mechanisms and context window limitations
- Studies on the "lost-in-middle" phenomenon
- Production engineering guides from AI labs

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0

192
skills/context-fundamentals/SKILL.md
Normal file
@@ -0,0 +1,192 @@
---
name: context-fundamentals
description: "Understand what context is, why it matters, and the anatomy of context in agent systems"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/context-fundamentals"
risk: safe
---

## When to Use This Skill

Use this skill to understand what context is, why it matters, and the anatomy of context in agent systems.

# Context Engineering Fundamentals

Context is the complete state available to a language model at inference time. It includes everything the model can attend to when generating responses: system instructions, tool definitions, retrieved documents, message history, and tool outputs. Understanding context fundamentals is a prerequisite to effective context engineering.

## When to Activate

Activate this skill when:
- Designing new agent systems or modifying existing architectures
- Debugging unexpected agent behavior that may relate to context
- Optimizing context usage to reduce token costs or improve performance
- Onboarding new team members to context engineering concepts
- Reviewing context-related design decisions

## Core Concepts

Context comprises several distinct components, each with different characteristics and constraints. The attention mechanism creates a finite budget that constrains effective context usage. Progressive disclosure manages this constraint by loading information only as needed. The engineering discipline is curating the smallest high-signal token set that achieves desired outcomes.

## Detailed Topics

### The Anatomy of Context

**System Prompts**
System prompts establish the agent's core identity, constraints, and behavioral guidelines. They are loaded once at session start and typically persist throughout the conversation. System prompts should be extremely clear and use simple, direct language at the right altitude for the agent.

The right altitude balances two failure modes. At one extreme, engineers hardcode complex brittle logic that creates fragility and maintenance burden. At the other extreme, engineers provide vague high-level guidance that fails to give concrete signals for desired outputs or falsely assumes shared context. The optimal altitude strikes a balance: specific enough to guide behavior effectively, yet flexible enough to provide strong heuristics.

Organize prompts into distinct sections using XML tagging or Markdown headers to delineate background information, instructions, tool guidance, and output description. The exact formatting matters less as models become more capable, but structural clarity remains valuable.

**Tool Definitions**
Tool definitions specify the actions an agent can take. Each tool includes a name, description, parameters, and return format. Tool definitions live near the front of context after serialization, typically before or after the system prompt.

Tool descriptions collectively steer agent behavior. Poor descriptions force agents to guess; optimized descriptions include usage context, examples, and defaults. The consolidation principle states that if a human engineer cannot definitively say which tool should be used in a given situation, an agent cannot be expected to do better.
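
As a rough illustration, a tool definition with an optimized description might look like the following. The schema shape follows common function-calling conventions and the tool itself is hypothetical:

```python
# Sketch: a hypothetical tool definition. The description carries usage
# context, an example, and defaults so the agent does not have to guess.
SEARCH_LOGS_TOOL = {
    "name": "search_logs",
    "description": (
        "Search application logs by keyword and time range. Use this when "
        "diagnosing runtime errors, not for reading source code. Example: "
        "search_logs(query='401 /api/auth/login', hours=24). Defaults to "
        "the last 24 hours if no range is given."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Keywords or error text"},
            "hours": {"type": "integer", "description": "Lookback window", "default": 24},
        },
        "required": ["query"],
    },
}
```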

**Retrieved Documents**
Retrieved documents provide domain-specific knowledge, reference materials, or task-relevant information. Agents use retrieval augmented generation to pull relevant documents into context at runtime rather than pre-loading all possible information.

The just-in-time approach maintains lightweight identifiers (file paths, stored queries, web links) and uses these references to load data into context dynamically. This mirrors human cognition: we generally do not memorize entire corpuses of information but rather use external organization and indexing systems to retrieve relevant information on demand.
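
A just-in-time reference registry can be sketched as follows; the registry class and its method names are illustrative assumptions:

```python
# Sketch: just-in-time context loading. Context holds lightweight
# references; content is read only when the agent actually needs it.
# The registry class and its method names are illustrative assumptions.
import pathlib

class ReferenceRegistry:
    def __init__(self):
        self._refs = {}

    def register(self, name, path):
        """Store only the identifier, not the content."""
        self._refs[name] = pathlib.Path(path)

    def listing(self):
        """What goes in context up front: names and paths only."""
        return [f"{name}: {p}" for name, p in self._refs.items()]

    def load(self, name):
        """Pull full content into context on demand."""
        return self._refs[name].read_text()
```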

**Message History**
Message history contains the conversation between the user and agent, including previous queries, responses, and reasoning. For long-running tasks, message history can grow to dominate context usage.

Message history serves as scratchpad memory where agents track progress, maintain task state, and preserve reasoning across turns. Effective management of message history is critical for long-horizon task completion.

**Tool Outputs**
Tool outputs are the results of agent actions: file contents, search results, command execution output, API responses, and similar data. Tool outputs comprise the majority of tokens in typical agent trajectories, with research showing observations (tool outputs) can reach 83.9% of total context usage.

Tool outputs consume context whether they are relevant to current decisions or not. This creates pressure for strategies like observation masking, compaction, and selective tool result retention.
|
||||
|
||||
### Context Windows and Attention Mechanics
|
||||
|
||||
**The Attention Budget Constraint**
|
||||
Language models process tokens through attention mechanisms that create pairwise relationships between all tokens in context. For n tokens, this creates n² relationships that must be computed and stored. As context length increases, the model's ability to capture these relationships gets stretched thin.

Models develop attention patterns from training data distributions where shorter sequences predominate. This means models have less experience with and fewer specialized parameters for context-wide dependencies. The result is an "attention budget" that depletes as context grows.

**Position Encoding and Context Extension**

Position encoding interpolation allows models to handle longer sequences by adapting them to the smaller context lengths they were originally trained on. However, this adaptation reduces the precision of the model's sense of token position. Models remain highly capable at longer contexts but show reduced precision for information retrieval and long-range reasoning compared to their performance on shorter contexts.

**The Progressive Disclosure Principle**

Progressive disclosure manages context efficiently by loading information only as needed. At startup, agents load only skill names and descriptions—sufficient to know when a skill might be relevant. Full content loads only when a skill is activated for specific tasks.

This approach keeps agents fast while giving them access to more context on demand. The principle applies at multiple levels: skill selection, document loading, and even tool result retrieval.
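
A minimal sketch of this two-stage loading, using the `skills/<name>/SKILL.md` layout from this repository (the `SKILLS` table is illustrative):

```python
from pathlib import Path

SKILLS = {
    "automate-whatsapp": "Build WhatsApp automations with Kapso workflows.",
    "create-pr": "Create pull requests following Sentry conventions.",
}

def startup_context() -> str:
    """At startup, carry only names and descriptions, enough to route."""
    return "\n".join(f"- {name}: {desc}" for name, desc in SKILLS.items())

def activate(name: str) -> str:
    """Load the full skill body only when a task actually needs it."""
    return (Path("skills") / name / "SKILL.md").read_text()
```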

### Context Quality Versus Context Quantity

The assumption that larger context windows solve memory problems has been empirically debunked. Context engineering means finding the smallest possible set of high-signal tokens that maximizes the likelihood of desired outcomes.

Several factors create pressure for context efficiency. Processing cost grows disproportionately with context length—doubling the tokens more than doubles the time and compute required. Model performance degrades beyond certain context lengths even when the window technically supports more tokens. Long inputs remain expensive even with prefix caching.

The guiding principle is informativity over exhaustiveness. Include what matters for the decision at hand, exclude what does not, and design systems that can access additional information on demand.

### Context as Finite Resource

Context must be treated as a finite resource with diminishing marginal returns. Like humans with limited working memory, language models have an attention budget that is drawn on when parsing large volumes of context.

Every new token introduced depletes this budget by some amount. This creates the need for careful curation of available tokens. The engineering problem is optimizing utility against inherent constraints.

Context engineering is iterative: curation happens each time you decide what to pass to the model. It is not a one-time prompt-writing exercise but an ongoing discipline of context management.

## Practical Guidance

### File-System-Based Access

Agents with filesystem access can use progressive disclosure naturally. Store reference materials, documentation, and data externally. Load files only when needed using standard filesystem operations. This pattern avoids stuffing context with information that may not be relevant.

The file system itself provides structure that agents can navigate. File sizes suggest complexity; naming conventions hint at purpose; timestamps serve as proxies for relevance. This file metadata lets an agent refine its behavior efficiently without reading file contents.
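
A sketch of surveying metadata before reading anything (paths and the ranking step are up to the agent):

```python
from pathlib import Path

def survey(root: str) -> list[tuple[str, int, float]]:
    """List name, size, and mtime: cheap signals of purpose and relevance."""
    return [
        (p.name, p.stat().st_size, p.stat().st_mtime)
        for p in sorted(Path(root).rglob("*.md"))
    ]

# An agent can rank candidates by these signals and read only the winners,
# instead of loading every document into context up front.
```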

### Hybrid Strategies

The most effective agents employ hybrid strategies. Pre-load some context for speed (like CLAUDE.md files or project rules), but enable autonomous exploration for additional context as needed. The decision boundary depends on task characteristics and context dynamics.

For contexts with less dynamic content, pre-loading more upfront makes sense. For rapidly changing or highly specific information, just-in-time loading avoids stale context.

### Context Budgeting

Design with explicit context budgets in mind. Know the effective context limit for your model and task. Monitor context usage during development. Implement compaction triggers at appropriate thresholds. Design systems assuming context will degrade rather than hoping it will not.

Effective context budgeting requires understanding not just raw token counts but also attention distribution patterns. The middle of context receives less attention than the beginning and end. Place critical information at attention-favored positions.
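
One way to make the budget and the placement rule explicit (the numbers are placeholders, not recommendations):

```python
CONTEXT_LIMIT = 128_000          # effective limit for the chosen model
COMPACTION_THRESHOLD = 0.8       # trigger compaction at 80% utilization

def needs_compaction(used_tokens: int) -> bool:
    return used_tokens / CONTEXT_LIMIT > COMPACTION_THRESHOLD

def assemble(critical: list[str], middle: list[str]) -> list[str]:
    """Put critical items at the attention-favored head and tail."""
    head, tail = critical[:1], critical[1:]
    return head + middle + tail
```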

## Examples

**Example 1: Organizing System Prompts**

```markdown
<BACKGROUND_INFORMATION>
You are a Python expert helping a development team.
Current project: Data processing pipeline in Python 3.9+
</BACKGROUND_INFORMATION>

<INSTRUCTIONS>
- Write clean, idiomatic Python code
- Include type hints for function signatures
- Add docstrings for public functions
- Follow PEP 8 style guidelines
</INSTRUCTIONS>

<TOOL_GUIDANCE>
Use bash for shell operations, python for code tasks.
File operations should use pathlib for cross-platform compatibility.
</TOOL_GUIDANCE>

<OUTPUT_DESCRIPTION>
Provide code blocks with syntax highlighting.
Explain non-obvious decisions in comments.
</OUTPUT_DESCRIPTION>
```

**Example 2: Progressive Document Loading**

```markdown
# Instead of loading all documentation at once:

# Step 1: Load summary
docs/api_summary.md            # Lightweight overview

# Step 2: Load specific section as needed
docs/api/endpoints.md          # Only when API calls needed
docs/api/authentication.md     # Only when auth context needed
```

## Guidelines

1. Treat context as a finite resource with diminishing returns
2. Place critical information at attention-favored positions (beginning and end)
3. Use progressive disclosure to defer loading until needed
4. Organize system prompts with clear section boundaries
5. Monitor context usage during development
6. Implement compaction triggers at 70-80% utilization
7. Design for context degradation rather than hoping to avoid it
8. Prefer smaller high-signal context over larger low-signal context

## Integration

This skill provides foundational context that all other skills build upon. It should be studied first before exploring:

- context-degradation - Understanding how context fails
- context-optimization - Techniques for extending context capacity
- multi-agent-patterns - How context isolation enables multi-agent systems
- tool-design - How tool definitions interact with context

## References

Internal reference:
- [Context Components Reference](./references/context-components.md) - Detailed technical reference

Related skills in this collection:
- context-degradation - Understanding context failure patterns
- context-optimization - Techniques for efficient context use

External resources:
- Research on transformer attention mechanisms
- Production engineering guides from leading AI labs
- Framework documentation on context window management

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0

186
skills/context-optimization/SKILL.md
Normal file
186
skills/context-optimization/SKILL.md
Normal file
@@ -0,0 +1,186 @@
---
name: context-optimization
description: "Apply compaction, masking, and caching strategies"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/context-optimization"
risk: safe
---

# Context Optimization Techniques

Context optimization extends the effective capacity of limited context windows through strategic compression, masking, caching, and partitioning. The goal is not to magically increase context windows but to make better use of available capacity. Effective optimization can double or triple effective context capacity without requiring larger models or longer contexts.

## When to Use This Skill

Use this skill when:
- Context limits constrain task complexity
- Optimizing for cost reduction (fewer tokens = lower costs)
- Reducing latency for long conversations
- Implementing long-running agent systems
- Needing to handle larger documents or conversations
- Building production systems at scale
## Core Concepts

Context optimization extends effective capacity through four primary strategies: compaction (summarizing context near limits), observation masking (replacing verbose outputs with references), KV-cache optimization (reusing cached computations), and context partitioning (splitting work across isolated contexts).

The key insight is that context quality matters more than quantity. Optimization preserves signal while reducing noise. The art lies in selecting what to keep versus what to discard, and when to apply each technique.

## Detailed Topics

### Compaction Strategies

**What is Compaction**

Compaction is the practice of summarizing a context window's contents when approaching limits, then reinitializing a new context window with the summary. This distills the contents of a context window in a high-fidelity manner, enabling the agent to continue with minimal performance degradation.

Compaction typically serves as the first lever in context optimization. The art lies in selecting what to keep versus what to discard.

**Compaction Implementation**

Compaction works by identifying sections that can be compressed, generating summaries that capture essential points, and replacing full content with summaries. Priority for compression goes to tool outputs (replace with summaries), old turns (summarize early conversation), and retrieved documents (summarize when the source can be fetched again); never compress the system prompt.
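
The priority order above can be sketched as a loop that compresses one category at a time until the budget is met (the `summarize` stub stands in for a model call):

```python
def summarize(text: str) -> str:
    return text[:80] + "…"  # stand-in for an LLM summarization call

PRIORITY = ("tool_output", "old_turn", "retrieved_doc")  # never "system"

def compact(messages: list[dict], budget: int) -> list[dict]:
    """Compress messages in priority order until under budget."""
    for kind in PRIORITY:
        if sum(len(m["text"]) for m in messages) <= budget:
            break
        for m in messages:
            if m["kind"] == kind and len(m["text"]) > 100:
                m["text"] = summarize(m["text"])
    return messages
```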

**Summary Generation**

Effective summaries preserve different elements depending on message type:

Tool outputs: Preserve key findings, metrics, and conclusions. Remove verbose raw output.

Conversational turns: Preserve key decisions, commitments, and context shifts. Remove filler and back-and-forth.

Retrieved documents: Preserve key facts and claims. Remove supporting evidence and elaboration.
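
These per-type rules can be encoded directly in the summarization prompt (a sketch; the wording is illustrative):

```python
PRESERVE = {
    "tool_output": "key findings, metrics, and conclusions",
    "turn": "key decisions, commitments, and context shifts",
    "retrieved_doc": "key facts and claims",
}

def summary_prompt(kind: str, text: str) -> str:
    """Build a type-aware instruction for the summarizer model."""
    return (
        f"Summarize the following {kind}. "
        f"Preserve {PRESERVE[kind]}; drop everything else.\n\n{text}"
    )
```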

### Observation Masking

**The Observation Problem**

Tool outputs can comprise 80%+ of token usage in agent trajectories. Much of this is verbose output that has already served its purpose. Once an agent has used a tool output to make a decision, keeping the full output provides diminishing value while consuming significant context.

Observation masking replaces verbose tool outputs with compact references. The information remains accessible if needed but does not consume context continuously.

**Masking Strategy Selection**

Not all observations should be masked equally:

Never mask: Observations critical to current task, observations from the most recent turn, observations used in active reasoning.

Consider masking: Observations from 3+ turns ago, verbose outputs with key points extractable, observations whose purpose has been served.

Always mask: Repeated outputs, boilerplate headers/footers, outputs already summarized in conversation.
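
The three tiers reduce to a small decision function (the observation field names are assumptions):

```python
def should_mask(obs: dict, current_turn: int) -> bool:
    """Apply the never/always/consider tiers to one observation."""
    if obs["turn"] == current_turn or obs.get("in_active_reasoning"):
        return False                              # never mask
    if obs.get("repeated") or obs.get("already_summarized"):
        return True                               # always mask
    age = current_turn - obs["turn"]
    return age >= 3 and obs["tokens"] > 500       # consider masking
```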

### KV-Cache Optimization

**Understanding KV-Cache**

The KV-cache stores Key and Value tensors computed during inference, growing linearly with sequence length. Caching the KV-cache across requests sharing identical prefixes avoids recomputation.

Prefix caching reuses KV blocks across requests with identical prefixes using hash-based block matching. This dramatically reduces cost and latency for requests with common prefixes like system prompts.
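
A toy version of hash-based block matching (real engines operate on KV tensors; this only shows the lookup logic, with a 16-token block size chosen for illustration):

```python
import hashlib

BLOCK = 16  # tokens per cache block
cache: dict[str, object] = {}

def block_hashes(tokens: list[str]) -> list[str]:
    """Hash each full block chained with everything before it."""
    hashes, h = [], hashlib.sha256()
    for i in range(0, len(tokens) - len(tokens) % BLOCK, BLOCK):
        h.update(" ".join(tokens[i:i + BLOCK]).encode())
        hashes.append(h.copy().hexdigest())
    return hashes

def cached_prefix_blocks(tokens: list[str]) -> int:
    """Count leading blocks whose KV entries can be reused; store the rest."""
    n, hit = 0, True
    for hsh in block_hashes(tokens):
        if hit and hsh in cache:
            n += 1
        else:
            hit = False
            cache[hsh] = "kv-block"  # stand-in for stored tensors
    return n
```

Because hashes are chained, a request that diverges mid-sequence still reuses every block before the divergence point.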

**Cache Optimization Patterns**

Optimize for caching by ordering context elements to maximize cache hits. Place stable elements first (system prompt, tool definitions), then frequently reused elements, then unique elements last.

Design prompts to maximize cache stability: avoid dynamic content like timestamps, use consistent formatting, and keep structure stable across sessions.

### Context Partitioning

**Sub-Agent Partitioning**

The most aggressive form of context optimization is partitioning work across sub-agents with isolated contexts. Each sub-agent operates in a clean context focused on its subtask without carrying accumulated context from other subtasks.

This approach achieves separation of concerns—the detailed search context remains isolated within sub-agents while the coordinator focuses on synthesis and analysis.

**Result Aggregation**

Aggregate results from partitioned subtasks by validating that all partitions completed, merging compatible results, and summarizing if the merged result is still too large.
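
A sketch of the coordinator side: each subtask runs against its own fresh context, and only compact results flow back (the `run_subagent` stub stands in for a real agent call):

```python
def run_subagent(subtask: str) -> dict:
    """Stand-in: a sub-agent explores in an isolated context and
    returns only a compact result, not its working context."""
    return {"subtask": subtask, "status": "done",
            "summary": f"findings for {subtask}"}

def coordinate(subtasks: list[str], max_chars: int = 2_000) -> str:
    results = [run_subagent(s) for s in subtasks]
    assert all(r["status"] == "done" for r in results)  # validate partitions
    merged = "\n".join(r["summary"] for r in results)   # merge compatible results
    return merged[:max_chars]                           # summarize/trim if too large
```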

### Budget Management

**Context Budget Allocation**

Design explicit context budgets. Allocate tokens to categories: system prompt, tool definitions, retrieved docs, message history, and a reserved buffer. Monitor usage against the budget and trigger optimization when approaching limits.
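
A sketch of explicit category budgets (the split is a placeholder to adapt per task):

```python
BUDGET = {                      # fractions of the usable window
    "system_prompt": 0.05,
    "tool_definitions": 0.05,
    "retrieved_docs": 0.30,
    "message_history": 0.45,
    "reserved_buffer": 0.15,
}

def over_budget(usage: dict[str, int], limit: int) -> list[str]:
    """Return the categories that exceed their allocation."""
    return [k for k, tokens in usage.items() if tokens > BUDGET[k] * limit]
```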

**Trigger-Based Optimization**

Monitor signals for optimization triggers: token utilization above 80%, degradation indicators, and performance drops. Apply the appropriate optimization techniques based on context composition.

## Practical Guidance

### Optimization Decision Framework

When to optimize:
- Context utilization exceeds 70%
- Response quality degrades as conversations extend
- Costs increase due to long contexts
- Latency increases with conversation length

What to apply:
- Tool outputs dominate: observation masking
- Retrieved documents dominate: summarization or partitioning
- Message history dominates: compaction with summarization
- Multiple components: combine strategies
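
The table above reduces to a small dispatch on whichever component dominates (a sketch; the 50% dominance cutoff is an assumption):

```python
STRATEGY = {
    "tool_outputs": "observation_masking",
    "retrieved_docs": "summarization_or_partitioning",
    "message_history": "compaction",
}

def choose_strategy(composition: dict[str, int]) -> str:
    """Pick a technique based on the dominant context component."""
    top, tokens = max(composition.items(), key=lambda kv: kv[1])
    if tokens < 0.5 * sum(composition.values()):
        return "combine_strategies"        # no single component dominates
    return STRATEGY[top]
```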

### Performance Considerations

Compaction should achieve 50-70% token reduction with less than 5% quality degradation. Masking should achieve 60-80% reduction in masked observations. Cache optimization should achieve a 70%+ hit rate for stable workloads.

Monitor and iterate on optimization strategies based on measured effectiveness.

## Examples

**Example 1: Compaction Trigger**

```python
if context_tokens / context_limit > 0.8:
    context = compact_context(context)
```

**Example 2: Observation Masking**

```python
if len(observation) > max_length:
    ref_id = store_observation(observation)
    return f"[Obs:{ref_id} elided. Key: {extract_key(observation)}]"
```

**Example 3: Cache-Friendly Ordering**

```python
# Stable content first
context = [system_prompt, tool_definitions]  # Cacheable
context += [reused_templates]                # Reusable
context += [unique_content]                  # Unique
```

## Guidelines

1. Measure before optimizing—know your current state
2. Apply compaction before masking when possible
3. Design for cache stability with consistent prompts
4. Partition before context becomes problematic
5. Monitor optimization effectiveness over time
6. Balance token savings against quality preservation
7. Test optimization at production scale
8. Implement graceful degradation for edge cases

## Integration

This skill builds on context-fundamentals and context-degradation. It connects to:

- multi-agent-patterns - Partitioning as isolation
- evaluation - Measuring optimization effectiveness
- memory-systems - Offloading context to memory

## References

Internal reference:
- [Optimization Techniques Reference](./references/optimization_techniques.md) - Detailed technical reference

Related skills in this collection:
- context-fundamentals - Context basics
- context-degradation - Understanding when to optimize
- evaluation - Measuring optimization

External resources:
- Research on context window limitations
- KV-cache optimization techniques
- Production engineering guides

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0

192
skills/create-pr/SKILL.md
Normal file
192
skills/create-pr/SKILL.md
Normal file
@@ -0,0 +1,192 @@
---
name: create-pr
description: "Create pull requests following Sentry conventions. Use when opening PRs, writing PR descriptions, or preparing changes for review. Follows Sentry's code review guidelines."
source: "https://github.com/getsentry/skills/tree/main/plugins/sentry-skills/skills/create-pr"
risk: safe
---

# Create Pull Request

Create pull requests following Sentry's engineering practices.

## When to Use This Skill

Use this skill when:
- Opening pull requests
- Writing PR descriptions
- Preparing changes for review
- Following Sentry's code review guidelines
- Creating PRs that follow best practices

**Requires**: GitHub CLI (`gh`) authenticated and available.

## Prerequisites

Before creating a PR, ensure all changes are committed. If there are uncommitted changes, run the `sentry-skills:commit` skill first to commit them properly.

```bash
# Check for uncommitted changes
git status --porcelain
```

If the output shows any uncommitted changes (modified, added, or untracked files that should be included), invoke the `sentry-skills:commit` skill before proceeding.

## Process

### Step 1: Verify Branch State

```bash
# Detect the default branch
BASE=$(gh repo view --json defaultBranchRef --jq '.defaultBranchRef.name')

# Check current branch and status
git status
git log $BASE..HEAD --oneline
```

Ensure:
- All changes are committed
- Branch is up to date with remote
- Changes are rebased on the base branch if needed

### Step 2: Analyze Changes

Review what will be included in the PR:

```bash
# See all commits that will be in the PR
git log $BASE..HEAD

# See the full diff
git diff $BASE...HEAD
```

Understand the scope and purpose of all changes before writing the description.

### Step 3: Write the PR Description

Use this structure for PR descriptions (ignoring any repository PR templates):

```markdown
<brief description of what the PR does>

<why these changes are being made - the motivation>

<alternative approaches considered, if any>

<any additional context reviewers need>
```

**Do NOT include:**
- "Test plan" sections
- Checkbox lists of testing steps
- Redundant summaries of the diff

**Do include:**
- Clear explanation of what and why
- Links to relevant issues or tickets
- Context that isn't obvious from the code
- Notes on specific areas that need careful review

### Step 4: Create the PR

```bash
gh pr create --draft --title "<type>(<scope>): <description>" --body "$(cat <<'EOF'
<description body here>
EOF
)"
```

**Title format** follows commit conventions:
- `feat(scope): Add new feature`
- `fix(scope): Fix the bug`
- `ref: Refactor something`

## PR Description Examples

### Feature PR

```markdown
Add Slack thread replies for alert notifications

When an alert is updated or resolved, we now post a reply to the original
Slack thread instead of creating a new message. This keeps related
notifications grouped and reduces channel noise.

Previously considered posting edits to the original message, but threading
better preserves the timeline of events and works when the original message
is older than Slack's edit window.

Refs SENTRY-1234
```

### Bug Fix PR

```markdown
Handle null response in user API endpoint

The user endpoint could return null for soft-deleted accounts, causing
dashboard crashes when accessing user properties. This adds a null check
and returns a proper 404 response.

Found while investigating SENTRY-5678.

Fixes SENTRY-5678
```

### Refactor PR

```markdown
Extract validation logic to shared module

Moves duplicate validation code from the alerts, issues, and projects
endpoints into a shared validator class. No behavior change.

This prepares for adding new validation rules in SENTRY-9999 without
duplicating logic across endpoints.
```

## Issue References

Reference issues in the PR body:

| Syntax | Effect |
|--------|--------|
| `Fixes #1234` | Closes GitHub issue on merge |
| `Fixes SENTRY-1234` | Closes Sentry issue |
| `Refs GH-1234` | Links without closing |
| `Refs LINEAR-ABC-123` | Links Linear issue |

## Guidelines

- **One PR per feature/fix** - Don't bundle unrelated changes
- **Keep PRs reviewable** - Smaller PRs get faster, better reviews
- **Explain the why** - Code shows what; description explains why
- **Mark WIP early** - Use draft PRs for early feedback

## Editing Existing PRs

If you need to update a PR after creation, use `gh api` instead of `gh pr edit`:

```bash
# Update PR description
gh api -X PATCH repos/{owner}/{repo}/pulls/PR_NUMBER -f body="$(cat <<'EOF'
Updated description here
EOF
)"

# Update PR title
gh api -X PATCH repos/{owner}/{repo}/pulls/PR_NUMBER -f title='new: Title here'

# Update both
gh api -X PATCH repos/{owner}/{repo}/pulls/PR_NUMBER \
  -f title='new: Title' \
  -f body='New description'
```

Note: `gh pr edit` is currently broken due to GitHub's Projects (classic) deprecation.

## References

- [Sentry Code Review Guidelines](https://develop.sentry.dev/engineering-practices/code-review/)
- [Sentry Commit Messages](https://develop.sentry.dev/engineering-practices/commit-messages/)

43
skills/culture-index/SKILL.md
Normal file
43
skills/culture-index/SKILL.md
Normal file
@@ -0,0 +1,43 @@
---
name: culture-index
description: "Index and search culture documentation"
source: "https://github.com/trailofbits/skills/tree/main/plugins/culture-index"
risk: safe
---

# Culture Index

## Overview

Index and search culture documentation to help teams understand organizational values, practices, and guidelines.

## When to Use This Skill

Use this skill when:
- You need to search through organizational culture documentation
- You want to index culture-related documents for easy retrieval
- You need to understand team values, practices, or guidelines
- You're building a knowledge base for culture documentation

## Instructions

This skill provides capabilities for indexing and searching culture documentation. It helps teams:

1. **Index Culture Documents**: Organize and index culture-related documentation
2. **Search Functionality**: Quickly find relevant culture information
3. **Knowledge Retrieval**: Access organizational values and practices efficiently

## Usage

When working with culture documentation:

1. Identify the culture documents to index
2. Use the indexing functionality to organize the content
3. Search through indexed documents to find relevant information
4. Retrieve specific culture guidelines or practices as needed

## Resources

For more information, see the [source repository](https://github.com/trailofbits/skills/tree/main/plugins/culture-index).

114
skills/deep-research/SKILL.md
Normal file
114
skills/deep-research/SKILL.md
Normal file
@@ -0,0 +1,114 @@
---
name: deep-research
description: "Execute autonomous multi-step research using Google Gemini Deep Research Agent. Use for: market analysis, competitive landscaping, literature reviews, technical research, due diligence. Takes 2-10 minutes but produces detailed, cited reports. Costs $2-5 per task."
source: "https://github.com/sanjay3290/ai-skills/tree/main/skills/deep-research"
risk: safe
---

# Gemini Deep Research Skill

Run autonomous research tasks that plan, search, read, and synthesize information into comprehensive reports.

## When to Use This Skill

Use this skill when:
- Performing market analysis
- Conducting competitive landscaping
- Creating literature reviews
- Doing technical research
- Performing due diligence
- Needing detailed, cited research reports

## Requirements

- Python 3.8+
- httpx: `pip install -r requirements.txt`
- GEMINI_API_KEY environment variable

## Setup

1. Get a Gemini API key from [Google AI Studio](https://aistudio.google.com/)
2. Set the environment variable:

   ```bash
   export GEMINI_API_KEY=your-api-key-here
   ```

   Or create a `.env` file in the skill directory.

## Usage

### Start a research task

```bash
python3 scripts/research.py --query "Research the history of Kubernetes"
```

### With structured output format

```bash
python3 scripts/research.py --query "Compare Python web frameworks" \
  --format "1. Executive Summary\n2. Comparison Table\n3. Recommendations"
```

### Stream progress in real-time

```bash
python3 scripts/research.py --query "Analyze EV battery market" --stream
```

### Start without waiting

```bash
python3 scripts/research.py --query "Research topic" --no-wait
```

### Check status of running research

```bash
python3 scripts/research.py --status <interaction_id>
```

### Wait for completion

```bash
python3 scripts/research.py --wait <interaction_id>
```

### Continue from previous research

```bash
python3 scripts/research.py --query "Elaborate on point 2" --continue <interaction_id>
```

### List recent research

```bash
python3 scripts/research.py --list
```

## Output Formats

- **Default**: Human-readable markdown report
- **JSON** (`--json`): Structured data for programmatic use
- **Raw** (`--raw`): Unprocessed API response

## Cost & Time

| Metric | Value |
|--------|-------|
| Time | 2-10 minutes per task |
| Cost | $2-5 per task (varies by complexity) |
| Token usage | ~250k-900k input, ~60k-80k output |

## Best Use Cases

- Market analysis and competitive landscaping
- Technical literature reviews
- Due diligence research
- Historical research and timelines
- Comparative analysis (frameworks, products, technologies)

## Workflow

1. User requests research → Run `--query "..."`
2. Inform user of estimated time (2-10 minutes)
3. Monitor with `--stream` or poll with `--status`
4. Return formatted results
5. Use `--continue` for follow-up questions

## Exit Codes

- **0**: Success
- **1**: Error (API error, config issue, timeout)
- **130**: Cancelled by user (Ctrl+C)

178
skills/design-md/SKILL.md
Normal file
178
skills/design-md/SKILL.md
Normal file
@@ -0,0 +1,178 @@
---
name: design-md
description: "Analyze Stitch projects and synthesize a semantic design system into DESIGN.md files"
source: "https://github.com/google-labs-code/stitch-skills/tree/main/skills/design-md"
risk: safe
---

# Stitch DESIGN.md Skill

You are an expert Design Systems Lead. Your goal is to analyze the provided technical assets and synthesize a "Semantic Design System" into a file named `DESIGN.md`.

## When to Use This Skill

Use this skill when:
- Analyzing Stitch projects
- Creating DESIGN.md files
- Synthesizing semantic design systems
- Working with Stitch design language
- Generating design documentation for Stitch projects

## Overview

This skill helps you create `DESIGN.md` files that serve as the "source of truth" for prompting Stitch to generate new screens that align perfectly with existing design language. Stitch interprets design through "Visual Descriptions" supported by specific color values.

## Prerequisites

- Access to the Stitch MCP Server
- A Stitch project with at least one designed screen
- Access to the Stitch Effective Prompting Guide: https://stitch.withgoogle.com/docs/learn/prompting/

## The Goal

The `DESIGN.md` file serves as the single source of truth when prompting Stitch for new screens: every new screen should be describable in its terms so that generated output stays consistent with the existing design language.

## Retrieval and Networking

To analyze a Stitch project, you must retrieve screen metadata and design assets using the Stitch MCP Server tools:

1. **Namespace discovery**: Run `list_tools` to find the Stitch MCP prefix. Use this prefix (e.g., `mcp_stitch:`) for all subsequent calls.

2. **Project lookup** (if Project ID is not provided):
   - Call `[prefix]:list_projects` with `filter: "view=owned"` to retrieve all user projects
   - Identify the target project by title or URL pattern
   - Extract the Project ID from the `name` field (e.g., `projects/13534454087919359824`)

3. **Screen lookup** (if Screen ID is not provided):
   - Call `[prefix]:list_screens` with the `projectId` (just the numeric ID, not the full path)
   - Review screen titles to identify the target screen (e.g., "Home", "Landing Page")
   - Extract the Screen ID from the screen's `name` field

4. **Metadata fetch**:
   - Call `[prefix]:get_screen` with both `projectId` and `screenId` (both as numeric IDs only)
   - This returns the complete screen object including:
     - `screenshot.downloadUrl` - Visual reference of the design
     - `htmlCode.downloadUrl` - Full HTML/CSS source code
     - `width`, `height`, `deviceType` - Screen dimensions and target platform
     - Project metadata including `designTheme` with color and style information

5. **Asset download**:
   - Use `web_fetch` or `read_url_content` to download the HTML code from `htmlCode.downloadUrl`
   - Optionally download the screenshot from `screenshot.downloadUrl` for visual reference
   - Parse the HTML to extract Tailwind classes, custom CSS, and component patterns

6. **Project metadata extraction**:
   - Call `[prefix]:get_project` with the project `name` (full path: `projects/{id}`) to get:
     - `designTheme` object with color mode, fonts, roundness, custom colors
     - Project-level design guidelines and descriptions
     - Device type preferences and layout principles
|
||||
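The ID handling in the lookup steps above can be sketched as a tiny helper. This is illustrative only; the function name is an assumption, not part of the Stitch API:

```python
def extract_numeric_id(resource_name: str) -> str:
    """Return the trailing numeric ID from a resource name.

    e.g. "projects/13534454087919359824" -> "13534454087919359824"
    """
    return resource_name.rstrip("/").split("/")[-1]

# get_screen takes bare numeric IDs, while get_project takes the full
# "projects/{id}" path, so it helps to keep both forms around.
project_name = "projects/13534454087919359824"
project_id = extract_numeric_id(project_name)  # "13534454087919359824"
```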
## Analysis & Synthesis Instructions

### 1. Extract Project Identity (JSON)

- Locate the Project Title
- Locate the specific Project ID (e.g., from the `name` field in the JSON)

### 2. Define the Atmosphere (Image/HTML)

Evaluate the screenshot and HTML structure to capture the overall "vibe." Use evocative adjectives to describe the mood (e.g., "Airy," "Dense," "Minimalist," "Utilitarian").

### 3. Map the Color Palette (Tailwind Config/JSON)

Identify the key colors in the system. For each color, provide:

- A descriptive, natural language name that conveys its character (e.g., "Deep Muted Teal-Navy")
- The specific hex code in parentheses for precision (e.g., "#294056")
- Its specific functional role (e.g., "Used for primary actions")

### 4. Translate Geometry & Shape (CSS/Tailwind)

Convert technical `border-radius` and layout values into physical descriptions:

- Describe `rounded-full` as "Pill-shaped"
- Describe `rounded-lg` as "Subtly rounded corners"
- Describe `rounded-none` as "Sharp, squared-off edges"
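A small lookup table makes this translation mechanical. This is a sketch; the wording on the right-hand side is a suggestion, not a Stitch requirement:

```python
# Hypothetical mapping from Tailwind radius classes to physical descriptions.
RADIUS_DESCRIPTIONS = {
    "rounded-none": "Sharp, squared-off edges",
    "rounded-lg": "Subtly rounded corners",
    "rounded-full": "Pill-shaped",
}

def describe_radius(tailwind_class: str) -> str:
    # Fall back to the raw class name so unknown values stay visible.
    return RADIUS_DESCRIPTIONS.get(tailwind_class, tailwind_class)
```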
### 5. Describe Depth & Elevation

Explain how the UI handles layers. Describe the presence and quality of shadows (e.g., "Flat," "Whisper-soft diffused shadows," or "Heavy, high-contrast drop shadows").
## Output Guidelines

- **Language:** Use descriptive design terminology and natural language exclusively
- **Format:** Generate a clean Markdown file following the structure below
- **Precision:** Include exact hex codes for colors while using descriptive names
- **Context:** Explain the "why" behind design decisions, not just the "what"
## Output Format (DESIGN.md Structure)

```markdown
# Design System: [Project Title]

**Project ID:** [Insert Project ID Here]

## 1. Visual Theme & Atmosphere
(Description of the mood, density, and aesthetic philosophy.)

## 2. Color Palette & Roles
(List colors by Descriptive Name + Hex Code + Functional Role.)

## 3. Typography Rules
(Description of font family, weight usage for headers vs. body, and letter-spacing character.)

## 4. Component Stylings
* **Buttons:** (Shape description, color assignment, behavior).
* **Cards/Containers:** (Corner roundness description, background color, shadow depth).
* **Inputs/Forms:** (Stroke style, background).

## 5. Layout Principles
(Description of whitespace strategy, margins, and grid alignment.)
```
## Usage Example

To use this skill for the Furniture Collection project:

1. **Retrieve project information:**

   ```
   Use the Stitch MCP Server to get the Furniture Collection project
   ```

2. **Get the Home page screen details:**

   ```
   Retrieve the Home page screen's code, image, and screen object information
   ```

3. **Reference best practices:**

   ```
   Review the Stitch Effective Prompting Guide at:
   https://stitch.withgoogle.com/docs/learn/prompting/
   ```

4. **Analyze and synthesize:**
   - Extract all relevant design tokens from the screen
   - Translate technical values into descriptive language
   - Organize information according to the DESIGN.md structure

5. **Generate the file:**
   - Create `DESIGN.md` in the project directory
   - Follow the prescribed format exactly
   - Ensure all color codes are accurate
   - Use evocative, designer-friendly language
## Best Practices

- **Be Descriptive:** Avoid generic terms like "blue" or "rounded." Use "Ocean-deep Cerulean (#0077B6)" or "Gently curved edges"
- **Be Functional:** Always explain what each design element is used for
- **Be Consistent:** Use the same terminology throughout the document
- **Be Visual:** Help readers visualize the design through your descriptions
- **Be Precise:** Include exact values (hex codes, pixel values) in parentheses after natural language descriptions
## Tips for Success

1. **Start with the big picture:** Understand the overall aesthetic before diving into details
2. **Look for patterns:** Identify consistent spacing, sizing, and styling patterns
3. **Think semantically:** Name colors by their purpose, not just their appearance
4. **Consider hierarchy:** Document how visual weight and importance are communicated
5. **Reference the guide:** Use language and patterns from the Stitch Effective Prompting Guide
## Common Pitfalls to Avoid

- ❌ Using technical jargon without translation (e.g., "rounded-xl" instead of "generously rounded corners")
- ❌ Omitting color codes or using only descriptive names
- ❌ Forgetting to explain functional roles of design elements
- ❌ Being too vague in atmosphere descriptions
- ❌ Ignoring subtle design details like shadows or spacing patterns
238
skills/evaluation/SKILL.md
Normal file
@@ -0,0 +1,238 @@
---
name: evaluation
description: "Build evaluation frameworks for agent systems"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/evaluation"
risk: safe
---

# Evaluation Methods for Agent Systems

## When to Use This Skill

Use this skill when building evaluation frameworks for agent systems.

Evaluation of agent systems requires different approaches than traditional software or even standard language model applications. Agents make dynamic decisions, are non-deterministic between runs, and often lack single correct answers. Effective evaluation must account for these characteristics while providing actionable feedback. A robust evaluation framework enables continuous improvement, catches regressions, and validates that context engineering choices achieve intended effects.
## When to Activate

Activate this skill when:
- Testing agent performance systematically
- Validating context engineering choices
- Measuring improvements over time
- Catching regressions before deployment
- Building quality gates for agent pipelines
- Comparing different agent configurations
- Evaluating production systems continuously
## Core Concepts

Agent evaluation requires outcome-focused approaches that account for non-determinism and multiple valid paths. Multi-dimensional rubrics capture various quality aspects: factual accuracy, completeness, citation accuracy, source quality, and tool efficiency. LLM-as-judge provides scalable evaluation while human evaluation catches edge cases.

The key insight is that agents may find alternative paths to goals: the evaluation should judge whether they achieve right outcomes while following reasonable processes.

**Performance Drivers: The 95% Finding**

Research on the BrowseComp evaluation (which tests browsing agents' ability to locate hard-to-find information) found that three factors explain 95% of performance variance:

| Factor | Variance Explained | Implication |
|--------|-------------------|-------------|
| Token usage | 80% | More tokens = better performance |
| Number of tool calls | ~10% | More exploration helps |
| Model choice | ~5% | Better models multiply efficiency |

This finding has significant implications for evaluation design:

- **Token budgets matter**: Evaluate agents with realistic token budgets, not unlimited resources
- **Model upgrades beat token increases**: Upgrading to Claude Sonnet 4.5 or GPT-5.2 provides larger gains than doubling token budgets on previous versions
- **Multi-agent validation**: The finding validates architectures that distribute work across agents with separate context windows
## Detailed Topics

### Evaluation Challenges

**Non-Determinism and Multiple Valid Paths**

Agents may take completely different valid paths to reach goals. One agent might search three sources while another searches ten. They might use different tools to find the same answer. Traditional evaluations that check for specific steps fail in this context.

The solution is outcome-focused evaluation that judges whether agents achieve right outcomes while following reasonable processes.

**Context-Dependent Failures**

Agent failures often depend on context in subtle ways. An agent might succeed on simple queries but fail on complex ones. It might work well with one tool set but fail with another. Failures may emerge only after extended interaction when context accumulates.

Evaluation must cover a range of complexity levels and test extended interactions, not just isolated queries.

**Composite Quality Dimensions**

Agent quality is not a single dimension. It includes factual accuracy, completeness, coherence, tool efficiency, and process quality. An agent might score high on accuracy but low on efficiency, or vice versa.

Evaluation rubrics must capture multiple dimensions with appropriate weighting for the use case.
### Evaluation Rubric Design

**Multi-Dimensional Rubric**

Effective rubrics cover key dimensions with descriptive levels:

- Factual accuracy: Claims match ground truth (excellent to failed)
- Completeness: Output covers requested aspects (excellent to failed)
- Citation accuracy: Citations match claimed sources (excellent to failed)
- Source quality: Uses appropriate primary sources (excellent to failed)
- Tool efficiency: Uses right tools a reasonable number of times (excellent to failed)

**Rubric Scoring**

Convert dimension assessments to numeric scores (0.0 to 1.0) with appropriate weighting. Calculate weighted overall scores. Determine the passing threshold based on use case requirements.
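The scoring rule above can be sketched in a few lines. This is a minimal illustration, assuming per-dimension scores in 0.0-1.0 and one weight per dimension; the 0.7 threshold is an example, not a recommendation:

```python
def weighted_overall(scores, weights):
    """Combine per-dimension scores (0.0-1.0) into one weighted score."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

scores = {"accuracy": 0.9, "completeness": 0.7, "efficiency": 0.5}
weights = {"accuracy": 0.5, "completeness": 0.3, "efficiency": 0.2}
overall = weighted_overall(scores, weights)  # 0.76
passed = overall >= 0.7
```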
### Evaluation Methodologies

**LLM-as-Judge**

LLM-based evaluation scales to large test sets and provides consistent judgments. The key is designing effective evaluation prompts that capture the dimensions of interest.

Provide a clear task description, the agent output, ground truth (if available), and an evaluation scale with level descriptions, then request a structured judgment.

**Human Evaluation**

Human evaluation catches what automation misses. Humans notice hallucinated answers on unusual queries, system failures, and subtle biases that automated evaluation misses.

Effective human evaluation covers edge cases, samples systematically, tracks patterns, and provides contextual understanding.

**End-State Evaluation**

For agents that mutate persistent state, end-state evaluation focuses on whether the final state matches expectations rather than how the agent got there.
### Test Set Design

**Sample Selection**

Start with small samples during development. Early in agent development, changes have dramatic impacts because there is abundant low-hanging fruit. Small test sets reveal large effects.

Sample from real usage patterns. Add known edge cases. Ensure coverage across complexity levels.

**Complexity Stratification**

Test sets should span complexity levels: simple (single tool call), medium (multiple tool calls), complex (many tool calls, significant ambiguity), and very complex (extended interaction, deep reasoning).
### Context Engineering Evaluation

**Testing Context Strategies**

Context engineering choices should be validated through systematic evaluation. Run agents with different context strategies on the same test set. Compare quality scores, token usage, and efficiency metrics.

**Degradation Testing**

Test how context degradation affects performance by running agents at different context sizes. Identify performance cliffs where context becomes problematic. Establish safe operating limits.
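The degradation sweep described above can be sketched as a search for the first context size whose score falls below a quality floor. Fake measurements stand in for real evaluation runs, and the 0.7 floor is an example assumption:

```python
def find_performance_cliff(sizes_to_scores, floor=0.7):
    """Return the smallest context size scoring below `floor`, or None."""
    for size in sorted(sizes_to_scores):
        if sizes_to_scores[size] < floor:
            return size
    return None

# Fake measurements standing in for real evaluation runs per context size.
measured = {8_000: 0.91, 32_000: 0.88, 64_000: 0.79, 128_000: 0.55}
cliff = find_performance_cliff(measured)  # 128000 in this fake data
```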
### Continuous Evaluation

**Evaluation Pipeline**

Build evaluation pipelines that run automatically on agent changes. Track results over time. Compare versions to identify improvements or regressions.

**Monitoring Production**

Track evaluation metrics in production by sampling interactions and evaluating randomly. Set alerts for quality drops. Maintain dashboards for trend analysis.
## Practical Guidance

### Building Evaluation Frameworks

1. Define quality dimensions relevant to your use case
2. Create rubrics with clear, actionable level descriptions
3. Build test sets from real usage patterns and edge cases
4. Implement automated evaluation pipelines
5. Establish baseline metrics before making changes
6. Run evaluations on all significant changes
7. Track metrics over time for trend analysis
8. Supplement automated evaluation with human review
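Step 6 often takes the form of a regression gate that compares a candidate's scores against a stored baseline. A sketch, with the 0.05 tolerance as an assumption:

```python
def find_regressions(baseline, candidate, tolerance=0.05):
    """Return dimensions where the candidate dropped more than `tolerance`."""
    return {
        dim: (baseline[dim], candidate.get(dim, 0.0))
        for dim in baseline
        if candidate.get(dim, 0.0) < baseline[dim] - tolerance
    }

baseline = {"accuracy": 0.90, "completeness": 0.80}
candidate = {"accuracy": 0.91, "completeness": 0.70}
regressions = find_regressions(baseline, candidate)
# {"completeness": (0.80, 0.70)} -> block the change and investigate
```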
### Avoiding Evaluation Pitfalls

- **Overfitting to specific paths**: Evaluate outcomes, not specific steps.
- **Ignoring edge cases**: Include diverse test scenarios.
- **Single-metric obsession**: Use multi-dimensional rubrics.
- **Neglecting context effects**: Test with realistic context sizes.
- **Skipping human evaluation**: Automated evaluation misses subtle issues.
## Examples

**Example 1: Simple Evaluation**

```python
def evaluate_agent_response(response, expected):
    rubric = load_rubric()  # {dimension: {"weight": ..., ...}}
    scores = {}
    for dimension in rubric:
        scores[dimension] = assess_dimension(response, expected, dimension)
    # Each rubric entry carries its own weight for the overall score.
    weights = {dim: cfg["weight"] for dim, cfg in rubric.items()}
    overall = weighted_average(scores, weights)
    return {"passed": overall >= 0.7, "scores": scores}
```
**Example 2: Test Set Structure**

Test sets should span multiple complexity levels to ensure comprehensive evaluation:

```python
test_set = [
    {
        "name": "simple_lookup",
        "input": "What is the capital of France?",
        "expected": {"type": "fact", "answer": "Paris"},
        "complexity": "simple",
        "description": "Single tool call, factual lookup"
    },
    {
        "name": "medium_query",
        "input": "Compare the revenue of Apple and Microsoft last quarter",
        "complexity": "medium",
        "description": "Multiple tool calls, comparison logic"
    },
    {
        "name": "multi_step_reasoning",
        "input": "Analyze sales data from Q1-Q4 and create a summary report with trends",
        "complexity": "complex",
        "description": "Many tool calls, aggregation, analysis"
    },
    {
        "name": "research_synthesis",
        "input": "Research emerging AI technologies, evaluate their potential impact, and recommend adoption strategy",
        "complexity": "very_complex",
        "description": "Extended interaction, deep reasoning, synthesis"
    }
]
```
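When sampling from a test set like the one above, it helps to check coverage across complexity levels before running anything. A minimal sketch:

```python
from collections import defaultdict

def stratify_by_complexity(test_set):
    """Group test case names by their complexity label."""
    groups = defaultdict(list)
    for case in test_set:
        groups[case["complexity"]].append(case["name"])
    return dict(groups)

sample = [
    {"name": "simple_lookup", "complexity": "simple"},
    {"name": "medium_query", "complexity": "medium"},
    {"name": "multi_step_reasoning", "complexity": "complex"},
]
coverage = stratify_by_complexity(sample)
# Any missing level is a gap in the test set's coverage.
```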
## Guidelines

1. Use multi-dimensional rubrics, not single metrics
2. Evaluate outcomes, not specific execution paths
3. Cover complexity levels from simple to complex
4. Test with realistic context sizes and histories
5. Run evaluations continuously, not just before release
6. Supplement LLM evaluation with human review
7. Track metrics over time for trend detection
8. Set clear pass/fail thresholds based on use case
## Integration

This skill connects to all other skills as a cross-cutting concern:

- context-fundamentals - Evaluating context usage
- context-degradation - Detecting degradation
- context-optimization - Measuring optimization effectiveness
- multi-agent-patterns - Evaluating coordination
- tool-design - Evaluating tool effectiveness
- memory-systems - Evaluating memory quality
## References

Internal reference:
- [Metrics Reference](./references/metrics.md) - Detailed evaluation metrics and implementation

Internal skills:
- All other skills connect to evaluation for quality measurement

External resources:
- LLM evaluation benchmarks
- Agent evaluation research papers
- Production monitoring practices
---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0
72
skills/expo-deployment/SKILL.md
Normal file
@@ -0,0 +1,72 @@
---
name: expo-deployment
description: "Deploy Expo apps to production"
source: "https://github.com/expo/skills/tree/main/plugins/expo-deployment"
risk: safe
---

# Expo Deployment

## Overview

Deploy Expo applications to production environments, including app stores and over-the-air updates.

## When to Use This Skill

Use this skill when:
- Deploying Expo apps to production
- Publishing to app stores (iOS App Store, Google Play)
- Setting up over-the-air (OTA) updates
- Configuring production build settings
- Managing release channels and versions
## Instructions

This skill provides guidance for deploying Expo apps:

1. **Build Configuration**: Set up production build settings
2. **App Store Submission**: Prepare and submit to app stores
3. **OTA Updates**: Configure over-the-air update channels
4. **Release Management**: Manage versions and release channels
5. **Production Optimization**: Optimize apps for production
## Deployment Workflow

### Pre-Deployment

1. Ensure all tests pass
2. Update version numbers
3. Configure production environment variables
4. Review and optimize app bundle size
5. Test production builds locally

### App Store Deployment

1. Build production binaries (iOS/Android)
2. Configure app store metadata
3. Submit to App Store Connect / Google Play Console
4. Manage app store listings and screenshots
5. Handle app review process

### OTA Updates

1. Configure update channels (production, staging, etc.)
2. Build and publish updates
3. Manage rollout strategies
4. Monitor update adoption
5. Handle rollbacks if needed
## Best Practices

- Use EAS Build for reliable production builds
- Test production builds before submission
- Implement proper error tracking and analytics
- Use release channels for staged rollouts
- Keep app store metadata up to date
- Monitor app performance in production

## Resources

For more information, see the [source repository](https://github.com/expo/skills/tree/main/plugins/expo-deployment).
22
skills/fal-audio/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: fal-audio
description: "Text-to-speech and speech-to-text using fal.ai audio models"
source: "https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-audio/SKILL.md"
risk: safe
---

# Fal Audio

## Overview

Text-to-speech and speech-to-text using fal.ai audio models.

## When to Use This Skill

Use this skill when you need text-to-speech or speech-to-text using fal.ai audio models.

## Instructions

This skill provides guidance and patterns for text-to-speech and speech-to-text using fal.ai audio models.

For more information, see the [source repository](https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-audio/SKILL.md).
22
skills/fal-generate/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: fal-generate
description: "Generate images and videos using fal.ai AI models"
source: "https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-generate/SKILL.md"
risk: safe
---

# Fal Generate

## Overview

Generate images and videos using fal.ai AI models.

## When to Use This Skill

Use this skill when you need to generate images or videos using fal.ai AI models.

## Instructions

This skill provides guidance and patterns for generating images and videos using fal.ai AI models.

For more information, see the [source repository](https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-generate/SKILL.md).
22
skills/fal-image-edit/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: fal-image-edit
description: "AI-powered image editing with style transfer and object removal"
source: "https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-image-edit/SKILL.md"
risk: safe
---

# Fal Image Edit

## Overview

AI-powered image editing with style transfer and object removal.

## When to Use This Skill

Use this skill when you need AI-powered image editing, including style transfer and object removal.

## Instructions

This skill provides guidance and patterns for AI-powered image editing with style transfer and object removal.

For more information, see the [source repository](https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-image-edit/SKILL.md).
22
skills/fal-platform/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: fal-platform
description: "Platform APIs for model management, pricing, and usage tracking"
source: "https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-platform/SKILL.md"
risk: safe
---

# Fal Platform

## Overview

Platform APIs for model management, pricing, and usage tracking.

## When to Use This Skill

Use this skill when you need to work with the fal.ai platform APIs for model management, pricing, and usage tracking.

## Instructions

This skill provides guidance and patterns for using the platform APIs for model management, pricing, and usage tracking.

For more information, see the [source repository](https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-platform/SKILL.md).
22
skills/fal-upscale/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: fal-upscale
description: "Upscale and enhance image and video resolution using AI"
source: "https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-upscale/SKILL.md"
risk: safe
---

# Fal Upscale

## Overview

Upscale and enhance image and video resolution using AI.

## When to Use This Skill

Use this skill when you need to upscale or enhance image and video resolution using AI.

## Instructions

This skill provides guidance and patterns for upscaling and enhancing image and video resolution using AI.

For more information, see the [source repository](https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-upscale/SKILL.md).
22
skills/fal-workflow/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: fal-workflow
description: "Generate workflow JSON files for chaining AI models"
source: "https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-workflow/SKILL.md"
risk: safe
---

# Fal Workflow

## Overview

Generate workflow JSON files for chaining AI models.

## When to Use This Skill

Use this skill when you need to generate workflow JSON files for chaining AI models.

## Instructions

This skill provides guidance and patterns for generating workflow JSON files that chain AI models.

For more information, see the [source repository](https://github.com/fal-ai-community/skills/blob/main/skills/claude.ai/fal-workflow/SKILL.md).
22
skills/ffuf-claude-skill/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: ffuf-claude-skill
description: "Web fuzzing with ffuf"
source: "https://github.com/jthack/ffuf_claude_skill"
risk: safe
---

# Ffuf Claude Skill

## Overview

Web fuzzing with ffuf.

## When to Use This Skill

Use this skill when you need to perform web fuzzing with ffuf.

## Instructions

This skill provides guidance and patterns for web fuzzing with ffuf.

For more information, see the [source repository](https://github.com/jthack/ffuf_claude_skill).
86
skills/find-bugs/SKILL.md
Normal file
@@ -0,0 +1,86 @@
---
name: find-bugs
description: "Find bugs, security vulnerabilities, and code quality issues in local branch changes. Use when asked to review changes, find bugs, security review, or audit code on the current branch."
source: "https://github.com/getsentry/skills/tree/main/plugins/sentry-skills/skills/find-bugs"
risk: safe
---

# Find Bugs

Review changes on this branch for bugs, security vulnerabilities, and code quality issues.

## When to Use This Skill

Use this skill when:
- Asked to review changes
- Finding bugs in code
- Performing security reviews
- Auditing code on the current branch
- Reviewing pull request changes
## Phase 1: Complete Input Gathering

1. Get the FULL diff: `git diff $(gh repo view --json defaultBranchRef --jq '.defaultBranchRef.name')...HEAD`
2. If output is truncated, read each changed file individually until you have seen every changed line
3. List all files modified in this branch before proceeding
## Phase 2: Attack Surface Mapping

For each changed file, identify and list:

* All user inputs (request params, headers, body, URL components)
* All database queries
* All authentication/authorization checks
* All session/state operations
* All external calls
* All cryptographic operations
## Phase 3: Security Checklist (check EVERY item for EVERY file)

* [ ] **Injection**: SQL, command, template, header injection
* [ ] **XSS**: All outputs in templates properly escaped?
* [ ] **Authentication**: Auth checks on all protected operations?
* [ ] **Authorization/IDOR**: Access control verified, not just auth?
* [ ] **CSRF**: State-changing operations protected?
* [ ] **Race conditions**: TOCTOU in any read-then-write patterns?
* [ ] **Session**: Fixation, expiration, secure flags?
* [ ] **Cryptography**: Secure random, proper algorithms, no secrets in logs?
* [ ] **Information disclosure**: Error messages, logs, timing attacks?
* [ ] **DoS**: Unbounded operations, missing rate limits, resource exhaustion?
* [ ] **Business logic**: Edge cases, state machine violations, numeric overflow?
## Phase 4: Verification

For each potential issue:

* Check if it's already handled elsewhere in the changed code
* Search for existing tests covering the scenario
* Read surrounding context to verify the issue is real
## Phase 5: Pre-Conclusion Audit

Before finalizing, you MUST:

1. List every file you reviewed and confirm you read it completely
2. List every checklist item and note whether you found issues or confirmed it's clean
3. List any areas you could NOT fully verify and why
4. Only then provide your final findings
## Output Format

**Prioritize**: security vulnerabilities > bugs > code quality

**Skip**: stylistic/formatting issues

For each issue:

* **File:Line** - Brief description
* **Severity**: Critical/High/Medium/Low
* **Problem**: What's wrong
* **Evidence**: Why this is real (not already fixed, no existing test, etc.)
* **Fix**: Concrete suggestion
* **References**: OWASP, RFCs, or other standards if applicable

If you find nothing significant, say so - don't invent issues.

Do not make changes - just report findings. I'll decide what to address.
53
skills/fix-review/SKILL.md
Normal file
@@ -0,0 +1,53 @@
---
name: fix-review
description: "Verify fix commits address audit findings without new bugs"
source: "https://github.com/trailofbits/skills/tree/main/plugins/fix-review"
risk: safe
---

# Fix Review

## Overview

Verify that fix commits properly address audit findings without introducing new bugs or security vulnerabilities.

## When to Use This Skill

Use this skill when:
- Reviewing commits that address security audit findings
- Verifying that fixes don't introduce new vulnerabilities
- Ensuring code changes properly resolve identified issues
- Validating that remediation efforts are complete and correct
## Instructions

This skill helps verify that fix commits properly address audit findings:

1. **Review Fix Commits**: Analyze commits that claim to fix audit findings
2. **Verify Resolution**: Ensure the original issue is properly addressed
3. **Check for Regressions**: Verify no new bugs or vulnerabilities are introduced
4. **Validate Completeness**: Ensure all aspects of the finding are resolved

## Review Process

When reviewing fix commits:

1. Compare the fix against the original audit finding
2. Verify the fix addresses the root cause, not just symptoms
3. Check for potential side effects or new issues
4. Validate that tests cover the fixed scenario
5. Ensure no similar vulnerabilities exist elsewhere
## Best Practices
|
||||
|
||||
- Review fixes in context of the full codebase
|
||||
- Verify test coverage for the fixed issue
|
||||
- Check for similar patterns that might need fixing
|
||||
- Ensure fixes follow security best practices
|
||||
- Document the resolution approach
|
||||
|
||||
## Resources
|
||||
|
||||
For more information, see the [source repository](https://github.com/trailofbits/skills/tree/main/plugins/fix-review).
|
||||
775
skills/frontend-slides/SKILL.md
Normal file
@@ -0,0 +1,775 @@
---
name: frontend-slides
description: "Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a PPT/PPTX to web, or create slides for a talk/pitch. Helps non-designers discover their aesthetic through visual exploration rather than abstract choices."
source: "https://github.com/zarazhangrui/frontend-slides"
risk: safe
---

# Frontend Slides Skill

Create zero-dependency, animation-rich HTML presentations that run entirely in the browser. This skill helps non-designers discover their preferred aesthetic through visual exploration ("show, don't tell"), then generates production-quality slide decks.

## Core Philosophy

1. **Zero Dependencies** — Single HTML files with inline CSS/JS. No npm, no build tools.
2. **Show, Don't Tell** — People don't know what they want until they see it. Generate visual previews, not abstract choices.
3. **Distinctive Design** — Avoid generic "AI slop" aesthetics. Every presentation should feel custom-crafted.
4. **Production Quality** — Code should be well-commented, accessible, and performant.

---

## Phase 0: Detect Mode

First, determine what the user wants:

**Mode A: New Presentation**
- User wants to create slides from scratch
- Proceed to Phase 1 (Content Discovery)

**Mode B: PPT Conversion**
- User has a PowerPoint file (.ppt, .pptx) to convert
- Proceed to Phase 4 (PPT Extraction)

**Mode C: Existing Presentation Enhancement**
- User has an HTML presentation and wants to improve it
- Read the existing file, understand the structure, then enhance

---

## Phase 1: Content Discovery (New Presentations)

Before designing, understand the content. Ask via AskUserQuestion:

### Step 1.1: Presentation Context

**Question 1: Purpose**
- Header: "Purpose"
- Question: "What is this presentation for?"
- Options:
  - "Pitch deck" — Selling an idea, product, or company to investors/clients
  - "Teaching/Tutorial" — Explaining concepts, how-to guides, educational content
  - "Conference talk" — Speaking at an event, tech talk, keynote
  - "Internal presentation" — Team updates, strategy meetings, company updates

**Question 2: Slide Count**
- Header: "Length"
- Question: "Approximately how many slides?"
- Options:
  - "Short (5-10)" — Quick pitch, lightning talk
  - "Medium (10-20)" — Standard presentation
  - "Long (20+)" — Deep dive, comprehensive talk

**Question 3: Content**
- Header: "Content"
- Question: "Do you have the content ready, or do you need help structuring it?"
- Options:
  - "I have all content ready" — Just need to design the presentation
  - "I have rough notes" — Need help organizing into slides
  - "I have a topic only" — Need help creating the full outline

If user has content, ask them to share it (text, bullet points, images, etc.).

---

## Phase 2: Style Discovery (Visual Exploration)

**CRITICAL: This is the "show, don't tell" phase.**

Most people can't articulate design preferences in words. Instead of asking "do you want minimalist or bold?", we generate mini-previews and let them react.

### Step 2.1: Mood Selection

**Question 1: Feeling**
- Header: "Vibe"
- Question: "What feeling should the audience have when viewing your slides?"
- Options:
  - "Impressed/Confident" — Professional, trustworthy, this team knows what they're doing
  - "Excited/Energized" — Innovative, bold, this is the future
  - "Calm/Focused" — Clear, thoughtful, easy to follow
  - "Inspired/Moved" — Emotional, storytelling, memorable
- multiSelect: true (can choose up to 2)

### Step 2.2: Generate Style Previews

Based on their mood selection, generate **3 distinct style previews** as mini HTML files in a temporary directory. Each preview should be a single title slide showing:

- Typography (font choices, heading/body hierarchy)
- Color palette (background, accent, text colors)
- Animation style (how elements enter)
- Overall aesthetic feel

**Preview Styles to Consider (pick 3 based on mood):**

| Mood | Style Options |
|------|---------------|
| Impressed/Confident | "Corporate Elegant", "Dark Executive", "Clean Minimal" |
| Excited/Energized | "Neon Cyber", "Bold Gradients", "Kinetic Motion" |
| Calm/Focused | "Paper & Ink", "Soft Muted", "Swiss Minimal" |
| Inspired/Moved | "Cinematic Dark", "Warm Editorial", "Atmospheric" |

**IMPORTANT: Never use these generic patterns:**
- Purple gradients on white backgrounds
- Inter, Roboto, or system fonts
- Standard blue primary colors
- Predictable hero layouts

**Instead, use distinctive choices:**
- Unique font pairings (Clash Display, Satoshi, Cormorant Garamond, DM Sans, etc.)
- Cohesive color themes with personality
- Atmospheric backgrounds (gradients, subtle patterns, depth)
- Signature animation moments

### Step 2.3: Present Previews

Create the previews in: `.claude-design/slide-previews/`

```
.claude-design/slide-previews/
├── style-a.html   # First style option
├── style-b.html   # Second style option
├── style-c.html   # Third style option
└── assets/        # Any shared assets
```

Each preview file should be:
- Self-contained (inline CSS/JS)
- A single "title slide" showing the aesthetic
- Animated to demonstrate motion style
- ~50-100 lines, not a full presentation

Present to user:
```
I've created 3 style previews for you to compare:

**Style A: [Name]** — [1 sentence description]
**Style B: [Name]** — [1 sentence description]
**Style C: [Name]** — [1 sentence description]

Open each file to see them in action:
- .claude-design/slide-previews/style-a.html
- .claude-design/slide-previews/style-b.html
- .claude-design/slide-previews/style-c.html

Take a look and tell me:
1. Which style resonates most?
2. What do you like about it?
3. Anything you'd change?
```

Then use AskUserQuestion:

**Question: Pick Your Style**
- Header: "Style"
- Question: "Which style preview do you prefer?"
- Options:
  - "Style A: [Name]" — [Brief description]
  - "Style B: [Name]" — [Brief description]
  - "Style C: [Name]" — [Brief description]
  - "Mix elements" — Combine aspects from different styles

If "Mix elements", ask for specifics.

---

## Phase 3: Generate Presentation

Now generate the full presentation based on:
- Content from Phase 1
- Style from Phase 2

### File Structure

For single presentations:
```
presentation.html   # Self-contained presentation
assets/             # Images, if any
```

For projects with multiple presentations:
```
[presentation-name].html
[presentation-name]-assets/
```

### HTML Architecture

Follow this structure for all presentations:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Presentation Title</title>

  <!-- Fonts (use Fontshare or Google Fonts) -->
  <link rel="stylesheet" href="https://api.fontshare.com/v2/css?f[]=...">

  <style>
    /* ===========================================
       CSS CUSTOM PROPERTIES (THEME)
       Easy to modify: change these to change the whole look
       =========================================== */
    :root {
      /* Colors */
      --bg-primary: #0a0f1c;
      --bg-secondary: #111827;
      --text-primary: #ffffff;
      --text-secondary: #9ca3af;
      --accent: #00ffcc;
      --accent-glow: rgba(0, 255, 204, 0.3);

      /* Typography */
      --font-display: 'Clash Display', sans-serif;
      --font-body: 'Satoshi', sans-serif;

      /* Spacing */
      --slide-padding: clamp(2rem, 5vw, 4rem);

      /* Animation */
      --ease-out-expo: cubic-bezier(0.16, 1, 0.3, 1);
      --duration-normal: 0.6s;
    }

    /* ===========================================
       BASE STYLES
       =========================================== */
    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }

    html {
      scroll-behavior: smooth;
      scroll-snap-type: y mandatory;
    }

    body {
      font-family: var(--font-body);
      background: var(--bg-primary);
      color: var(--text-primary);
      overflow-x: hidden;
    }

    /* ===========================================
       SLIDE CONTAINER
       Each section is one slide
       =========================================== */
    .slide {
      min-height: 100vh;
      padding: var(--slide-padding);
      scroll-snap-align: start;
      display: flex;
      flex-direction: column;
      justify-content: center;
      position: relative;
      overflow: hidden;
    }

    /* ===========================================
       ANIMATIONS
       Trigger via .visible class (added by JS on scroll)
       =========================================== */
    .reveal {
      opacity: 0;
      transform: translateY(30px);
      transition: opacity var(--duration-normal) var(--ease-out-expo),
                  transform var(--duration-normal) var(--ease-out-expo);
    }

    .slide.visible .reveal {
      opacity: 1;
      transform: translateY(0);
    }

    /* Stagger children */
    .reveal:nth-child(1) { transition-delay: 0.1s; }
    .reveal:nth-child(2) { transition-delay: 0.2s; }
    .reveal:nth-child(3) { transition-delay: 0.3s; }
    .reveal:nth-child(4) { transition-delay: 0.4s; }

    /* ... more styles ... */
  </style>
</head>
<body>
  <!-- Progress bar (optional) -->
  <div class="progress-bar"></div>

  <!-- Navigation dots (optional) -->
  <nav class="nav-dots">
    <!-- Generated by JS -->
  </nav>

  <!-- Slides -->
  <section class="slide title-slide">
    <h1 class="reveal">Presentation Title</h1>
    <p class="reveal">Subtitle or author</p>
  </section>

  <section class="slide">
    <h2 class="reveal">Slide Title</h2>
    <p class="reveal">Content...</p>
  </section>

  <!-- More slides... -->

  <script>
    /* ===========================================
       SLIDE PRESENTATION CONTROLLER
       Handles navigation, animations, and interactions
       =========================================== */

    class SlidePresentation {
      constructor() {
        // ... initialization
      }

      // ... methods
    }

    // Initialize
    new SlidePresentation();
  </script>
</body>
</html>
```

### Required JavaScript Features

Every presentation should include:

1. **SlidePresentation Class** — Main controller
   - Keyboard navigation (arrows, space)
   - Touch/swipe support
   - Mouse wheel navigation
   - Progress bar updates
   - Navigation dots

2. **Intersection Observer** — For scroll-triggered animations
   - Add `.visible` class when slides enter viewport
   - Trigger CSS animations efficiently

3. **Optional Enhancements** (based on style):
   - Custom cursor with trail
   - Particle system background (canvas)
   - Parallax effects
   - 3D tilt on hover
   - Magnetic buttons
   - Counter animations
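The controller and observer requirements above can be sketched as follows. This is a minimal illustration, not a full implementation: the `.slide`/`.visible` class names match the HTML architecture template, while `clampIndex` is a hypothetical helper introduced here so the navigation logic stays testable outside a browser.

```javascript
// Minimal sketch of the SlidePresentation requirements.
// Pure navigation logic is kept separate from DOM wiring.

// Clamp a slide index after a keyboard/wheel step (illustrative helper).
function clampIndex(current, delta, total) {
  return Math.min(total - 1, Math.max(0, current + delta));
}

// Browser-only wiring: guarded so the pure part also runs under Node.
if (typeof document !== 'undefined') {
  const slides = Array.from(document.querySelectorAll('.slide'));
  let index = 0;

  const go = (delta) => {
    index = clampIndex(index, delta, slides.length);
    slides[index].scrollIntoView({ behavior: 'smooth' });
  };

  // Keyboard navigation (arrows, space)
  document.addEventListener('keydown', (e) => {
    if (e.key === 'ArrowDown' || e.key === 'ArrowRight' || e.key === ' ') go(1);
    if (e.key === 'ArrowUp' || e.key === 'ArrowLeft') go(-1);
  });

  // Intersection Observer: add .visible when a slide enters the viewport
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) entry.target.classList.add('visible');
    });
  }, { threshold: 0.5 });

  slides.forEach((slide) => observer.observe(slide));
}
```

Touch/swipe, wheel handling, progress bar, and nav dots would attach to the same `go`/`index` state in the same guarded block.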
### Code Quality Requirements

**Comments:**
Every section should have clear comments explaining:
- What it does
- Why it exists
- How to modify it

```javascript
/* ===========================================
   CUSTOM CURSOR
   Creates a stylized cursor that follows mouse with a trail effect.
   - Uses lerp (linear interpolation) for smooth movement
   - Grows larger when hovering over interactive elements
   =========================================== */
class CustomCursor {
  constructor() {
    // ...
  }
}
```

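For the lerp-based movement the comment above describes, a minimal sketch might look like the following. The `.cursor` selector and the 0.15 smoothing factor are illustrative choices, not part of the skeleton above; the interpolation math is kept as a standalone function.

```javascript
// Sketch of lerp-driven cursor trailing (assumed element: .cursor).

// Standard linear interpolation: t=0 returns start, t=1 returns end.
function lerp(start, end, t) {
  return start + (end - start) * t;
}

if (typeof document !== 'undefined') {
  const cursor = document.querySelector('.cursor'); // assumed element
  let x = 0, y = 0;             // rendered position
  let targetX = 0, targetY = 0; // latest mouse position

  document.addEventListener('mousemove', (e) => {
    targetX = e.clientX;
    targetY = e.clientY;
  });

  (function render() {
    // Move 15% of the remaining distance each frame -> smooth trail
    x = lerp(x, targetX, 0.15);
    y = lerp(y, targetY, 0.15);
    if (cursor) cursor.style.transform = `translate(${x}px, ${y}px)`;
    requestAnimationFrame(render);
  })();
}
```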
**Accessibility:**
- Semantic HTML (`<section>`, `<nav>`, `<main>`)
- Keyboard navigation works
- ARIA labels where needed
- Reduced motion support

```css
@media (prefers-reduced-motion: reduce) {
  .reveal {
    transition: opacity 0.3s ease;
    transform: none;
  }
}
```

**Responsive:**
- Mobile-friendly (single column, adjusted spacing)
- Disable heavy effects on mobile
- Touch-friendly interactions

```css
@media (max-width: 768px) {
  .nav-dots,
  .keyboard-hint {
    display: none;
  }
}
```

---

## Phase 4: PPT Conversion

When converting PowerPoint files:

### Step 4.1: Extract Content

Use Python with `python-pptx` to extract:

```python
from pptx import Presentation
from pptx.enum.shapes import MSO_SHAPE_TYPE
import os

def extract_pptx(file_path, output_dir):
    """
    Extract all content from a PowerPoint file.
    Returns a list of slide dicts with text, images, and notes.
    """
    prs = Presentation(file_path)
    slides_data = []

    # Create assets directory
    assets_dir = os.path.join(output_dir, 'assets')
    os.makedirs(assets_dir, exist_ok=True)

    for slide_num, slide in enumerate(prs.slides):
        slide_data = {
            'number': slide_num + 1,
            'title': '',
            'content': [],
            'images': [],
            'notes': ''
        }

        for shape in slide.shapes:
            # Extract title and body text
            if shape.has_text_frame:
                if shape == slide.shapes.title:
                    slide_data['title'] = shape.text
                else:
                    slide_data['content'].append({
                        'type': 'text',
                        'content': shape.text
                    })

            # Extract images
            if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
                image = shape.image
                image_bytes = image.blob
                image_ext = image.ext
                image_name = f"slide{slide_num + 1}_img{len(slide_data['images']) + 1}.{image_ext}"
                image_path = os.path.join(assets_dir, image_name)

                with open(image_path, 'wb') as f:
                    f.write(image_bytes)

                slide_data['images'].append({
                    'path': f"assets/{image_name}",
                    'width': shape.width,
                    'height': shape.height
                })

        # Extract speaker notes
        if slide.has_notes_slide:
            notes_frame = slide.notes_slide.notes_text_frame
            slide_data['notes'] = notes_frame.text

        slides_data.append(slide_data)

    return slides_data
```

### Step 4.2: Confirm Content Structure

Present the extracted content to the user:

```
I've extracted the following from your PowerPoint:

**Slide 1: [Title]**
- [Content summary]
- Images: [count]

**Slide 2: [Title]**
- [Content summary]
- Images: [count]

...

All images have been saved to the assets folder.

Does this look correct? Should I proceed with style selection?
```

### Step 4.3: Style Selection

Proceed to Phase 2 (Style Discovery) with the extracted content in mind.

### Step 4.4: Generate HTML

Convert the extracted content into the chosen style, preserving:
- All text content
- All images (referenced from assets folder)
- Slide order
- Any speaker notes (as HTML comments or separate file)

---

## Phase 5: Delivery

### Final Output

When the presentation is complete:

1. **Clean up temporary files**
   - Delete `.claude-design/slide-previews/` if it exists

2. **Open the presentation**
   - Use `open [filename].html` to launch in the browser (macOS; use `xdg-open` on Linux)

3. **Provide summary**
   ```
   Your presentation is ready!

   📁 File: [filename].html
   🎨 Style: [Style Name]
   📊 Slides: [count]

   **Navigation:**
   - Arrow keys (← →) or Space to navigate
   - Scroll/swipe also works
   - Click the dots on the right to jump to a slide

   **To customize:**
   - Colors: Look for `:root` CSS variables at the top
   - Fonts: Change the Fontshare/Google Fonts link
   - Animations: Modify `.reveal` class timings

   Would you like me to make any adjustments?
   ```

---

## Style Reference: Effect → Feeling Mapping

Use this guide to match animations to intended feelings:

### Dramatic / Cinematic
- Slow fade-ins (1-1.5s)
- Large scale transitions (0.9 → 1)
- Dark backgrounds with spotlight effects
- Parallax scrolling
- Full-bleed images

### Techy / Futuristic
- Neon glow effects (box-shadow with accent color)
- Particle systems (canvas background)
- Grid patterns
- Monospace fonts for accents
- Glitch or scramble text effects
- Cyan, magenta, electric blue palette

### Playful / Friendly
- Bouncy easing (spring physics)
- Rounded corners (large radius)
- Pastel or bright colors
- Floating/bobbing animations
- Hand-drawn or illustrated elements

### Professional / Corporate
- Subtle, fast animations (200-300ms)
- Clean sans-serif fonts
- Navy, slate, or charcoal backgrounds
- Precise spacing and alignment
- Minimal decorative elements
- Data visualization focus

### Calm / Minimal
- Very slow, subtle motion
- High whitespace
- Muted color palette
- Serif typography
- Generous padding
- Content-focused, no distractions

### Editorial / Magazine
- Strong typography hierarchy
- Pull quotes and callouts
- Image-text interplay
- Grid-breaking layouts
- Serif headlines, sans-serif body
- Black and white with one accent

---

## Animation Patterns Reference

### Entrance Animations

```css
/* Fade + Slide Up (most common) */
.reveal {
  opacity: 0;
  transform: translateY(30px);
  transition: opacity 0.6s var(--ease-out-expo),
              transform 0.6s var(--ease-out-expo);
}

.visible .reveal {
  opacity: 1;
  transform: translateY(0);
}

/* Scale In */
.reveal-scale {
  opacity: 0;
  transform: scale(0.9);
  transition: opacity 0.6s, transform 0.6s var(--ease-out-expo);
}

/* Slide from Left */
.reveal-left {
  opacity: 0;
  transform: translateX(-50px);
  transition: opacity 0.6s, transform 0.6s var(--ease-out-expo);
}

/* Blur In */
.reveal-blur {
  opacity: 0;
  filter: blur(10px);
  transition: opacity 0.8s, filter 0.8s var(--ease-out-expo);
}
```

### Background Effects

```css
/* Gradient Mesh */
.gradient-bg {
  background:
    radial-gradient(ellipse at 20% 80%, rgba(120, 0, 255, 0.3) 0%, transparent 50%),
    radial-gradient(ellipse at 80% 20%, rgba(0, 255, 200, 0.2) 0%, transparent 50%),
    var(--bg-primary);
}

/* Noise Texture */
.noise-bg {
  background-image: url("data:image/svg+xml,..."); /* Inline SVG noise */
}

/* Grid Pattern */
.grid-bg {
  background-image:
    linear-gradient(rgba(255,255,255,0.03) 1px, transparent 1px),
    linear-gradient(90deg, rgba(255,255,255,0.03) 1px, transparent 1px);
  background-size: 50px 50px;
}
```

### Interactive Effects

```javascript
/* 3D Tilt on Hover */
class TiltEffect {
  constructor(element) {
    this.element = element;
    this.element.style.transformStyle = 'preserve-3d';
    this.bindEvents();
  }

  bindEvents() {
    this.element.addEventListener('mousemove', (e) => {
      const rect = this.element.getBoundingClientRect();
      const x = (e.clientX - rect.left) / rect.width - 0.5;
      const y = (e.clientY - rect.top) / rect.height - 0.5;

      // perspective() must be part of the transform for the tilt to have depth
      this.element.style.transform = `
        perspective(1000px)
        rotateY(${x * 10}deg)
        rotateX(${-y * 10}deg)
      `;
    });

    this.element.addEventListener('mouseleave', () => {
      this.element.style.transform = 'perspective(1000px) rotateY(0) rotateX(0)';
    });
  }
}
```

---

## Troubleshooting

### Common Issues

**Fonts not loading:**
- Check Fontshare/Google Fonts URL
- Ensure font names match in CSS

**Animations not triggering:**
- Verify Intersection Observer is running
- Check that `.visible` class is being added

**Scroll snap not working:**
- Ensure `scroll-snap-type` on html/body
- Each slide needs `scroll-snap-align: start`

**Mobile issues:**
- Disable heavy effects at 768px breakpoint
- Test touch events
- Reduce particle count or disable canvas

**Performance issues:**
- Use `will-change` sparingly
- Prefer `transform` and `opacity` animations
- Throttle scroll/mousemove handlers
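One way to throttle scroll/mousemove handlers, as suggested above, is a small wrapper that drops calls arriving within a time window. A minimal sketch (the 100 ms limit is an arbitrary example):

```javascript
// Returns a wrapped function that invokes `fn` at most once per
// `limit` milliseconds; calls inside the window are simply dropped.
function throttle(fn, limit) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= limit) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Usage sketch (browser only):
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', throttle(() => {
    // e.g. update the progress bar here
  }, 100));
}
```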

---

## Related Skills

- **learn** — Generate FORZARA.md documentation for the presentation
- **frontend-design** — For more complex interactive pages beyond slides
- **design-and-refine:design-lab** — For iterating on component designs

---

## Example Session Flow

1. User: "I want to create a pitch deck for my AI startup"
2. Skill asks about purpose, length, content
3. User shares their bullet points and key messages
4. Skill asks about desired feeling (Impressed + Excited)
5. Skill generates 3 style previews
6. User picks Style B (Neon Cyber), asks for darker background
7. Skill generates full presentation with all slides
8. Skill opens the presentation in browser
9. User requests tweaks to specific slides
10. Final presentation delivered

---

## Conversion Session Flow

1. User: "Convert my slides.pptx to a web presentation"
2. Skill extracts content and images from PPT
3. Skill confirms extracted content with user
4. Skill asks about desired feeling/style
5. Skill generates style previews
6. User picks a style
7. Skill generates HTML presentation with preserved assets
8. Final presentation delivered

198
skills/hugging-face-cli/SKILL.md
Normal file
@@ -0,0 +1,198 @@
---
name: hugging-face-cli
description: "Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create repos, manage local cache, or run compute jobs on HF infrastructure. Covers authentication, file transfers, repository creation, cache operations, and cloud compute."
source: "https://github.com/huggingface/skills/tree/main/skills/hugging-face-cli"
risk: safe
---

# Hugging Face CLI

The `hf` CLI provides direct terminal access to the Hugging Face Hub for downloading, uploading, and managing repositories, cache, and compute resources.

## When to Use This Skill

Use this skill when:
- User needs to download models, datasets, or spaces
- Uploading files to Hub repositories
- Creating Hugging Face repositories
- Managing local cache
- Running compute jobs on HF infrastructure
- Working with Hugging Face Hub authentication

## Quick Command Reference

| Task | Command |
|------|---------|
| Login | `hf auth login` |
| Download model | `hf download <repo_id>` |
| Download to folder | `hf download <repo_id> --local-dir ./path` |
| Upload folder | `hf upload <repo_id> . .` |
| Create repo | `hf repo create <name>` |
| Create tag | `hf repo tag create <repo_id> <tag>` |
| Delete files | `hf repo-files delete <repo_id> <files>` |
| List cache | `hf cache ls` |
| Remove from cache | `hf cache rm <repo_or_revision>` |
| List models | `hf models ls` |
| Get model info | `hf models info <model_id>` |
| List datasets | `hf datasets ls` |
| Get dataset info | `hf datasets info <dataset_id>` |
| List spaces | `hf spaces ls` |
| Get space info | `hf spaces info <space_id>` |
| List endpoints | `hf endpoints ls` |
| Run GPU job | `hf jobs run --flavor a10g-small <image> <cmd>` |
| Environment info | `hf env` |

## Core Commands

### Authentication
```bash
hf auth login                    # Interactive login
hf auth login --token $HF_TOKEN  # Non-interactive
hf auth whoami                   # Check current user
hf auth list                     # List stored tokens
hf auth switch                   # Switch between tokens
hf auth logout                   # Log out
```

### Download
```bash
hf download <repo_id>                            # Full repo to cache
hf download <repo_id> file.safetensors           # Specific file
hf download <repo_id> --local-dir ./models       # To local directory
hf download <repo_id> --include "*.safetensors"  # Filter by pattern
hf download <repo_id> --repo-type dataset        # Dataset
hf download <repo_id> --revision v1.0            # Specific version
```

### Upload
```bash
hf upload <repo_id> . .                         # Current dir to root
hf upload <repo_id> ./models /weights           # Folder to path
hf upload <repo_id> model.safetensors           # Single file
hf upload <repo_id> . . --repo-type dataset     # Dataset
hf upload <repo_id> . . --create-pr             # Create PR
hf upload <repo_id> . . --commit-message="msg"  # Custom message
```

### Repository Management
```bash
hf repo create <name>                      # Create model repo
hf repo create <name> --repo-type dataset  # Create dataset
hf repo create <name> --private            # Private repo
hf repo create <name> --repo-type space --space_sdk gradio  # Gradio space
hf repo delete <repo_id>                   # Delete repo
hf repo move <from_id> <to_id>             # Move repo to new namespace
hf repo settings <repo_id> --private true  # Update repo settings
hf repo list --repo-type model             # List repos
hf repo branch create <repo_id> release-v1 # Create branch
hf repo branch delete <repo_id> release-v1 # Delete branch
hf repo tag create <repo_id> v1.0          # Create tag
hf repo tag list <repo_id>                 # List tags
hf repo tag delete <repo_id> v1.0          # Delete tag
```

### Delete Files from Repo
```bash
hf repo-files delete <repo_id> folder/  # Delete folder
hf repo-files delete <repo_id> "*.txt"  # Delete with pattern
```

### Cache Management
```bash
hf cache ls                  # List cached repos
hf cache ls --revisions      # Include individual revisions
hf cache rm model/gpt2       # Remove cached repo
hf cache rm <revision_hash>  # Remove cached revision
hf cache prune               # Remove detached revisions
hf cache verify gpt2         # Verify checksums from cache
```

### Browse Hub
```bash
# Models
hf models ls                                        # List top trending models
hf models ls --search "MiniMax" --author MiniMaxAI  # Search models
hf models ls --filter "text-generation" --limit 20  # Filter by task
hf models info MiniMaxAI/MiniMax-M2.1               # Get model info

# Datasets
hf datasets ls                                       # List top trending datasets
hf datasets ls --search "finepdfs" --sort downloads  # Search datasets
hf datasets info HuggingFaceFW/finepdfs              # Get dataset info

# Spaces
hf spaces ls                           # List top trending spaces
hf spaces ls --filter "3d" --limit 10  # Filter by 3D modeling spaces
hf spaces info enzostvs/deepsite       # Get space info
```

### Jobs (Cloud Compute)
```bash
hf jobs run python:3.12 python script.py       # Run on CPU
hf jobs run --flavor a10g-small <image> <cmd>  # Run on GPU
hf jobs run --secrets HF_TOKEN <image> <cmd>   # With HF token
hf jobs ps                                     # List jobs
hf jobs logs <job_id>                          # View logs
hf jobs cancel <job_id>                        # Cancel job
```

### Inference Endpoints
```bash
hf endpoints ls                         # List endpoints
hf endpoints deploy my-endpoint \
  --repo openai/gpt-oss-120b \
  --framework vllm \
  --accelerator gpu \
  --instance-size x4 \
  --instance-type nvidia-a10g \
  --region us-east-1 \
  --vendor aws
hf endpoints describe my-endpoint       # Show endpoint details
hf endpoints pause my-endpoint          # Pause endpoint
hf endpoints resume my-endpoint         # Resume endpoint
hf endpoints scale-to-zero my-endpoint  # Scale to zero
hf endpoints delete my-endpoint --yes   # Delete endpoint
```

**Job flavors (`--flavor`):** `cpu-basic`, `cpu-upgrade`, `cpu-xl`, `t4-small`, `t4-medium`, `l4x1`, `l4x4`, `l40sx1`, `l40sx4`, `l40sx8`, `a10g-small`, `a10g-large`, `a10g-largex2`, `a10g-largex4`, `a100-large`, `h100`, `h100x8`

## Common Patterns
|
||||
|
||||
### Download and Use Model Locally
|
||||
```bash
|
||||
# Download to local directory for deployment
|
||||
hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./model
|
||||
|
||||
# Or use cache and get path
|
||||
MODEL_PATH=$(hf download meta-llama/Llama-3.2-1B-Instruct --quiet)
|
||||
```
|
||||
|
||||
### Publish Model/Dataset
|
||||
```bash
|
||||
hf repo create my-username/my-model --private
|
||||
hf upload my-username/my-model ./output . --commit-message="Initial release"
|
||||
hf repo tag create my-username/my-model v1.0
|
||||
```
|
||||
|
||||
### Sync Space with Local
|
||||
```bash
|
||||
hf upload my-username/my-space . . --repo-type space \
|
||||
--exclude="logs/*" --delete="*" --commit-message="Sync"
|
||||
```
|
||||
|
||||
### Check Cache Usage
|
||||
```bash
|
||||
hf cache ls # See all cached repos and sizes
|
||||
hf cache rm model/gpt2 # Remove a repo from cache
|
||||
```
|
||||
|
||||
## Key Options
|
||||
|
||||
- `--repo-type`: `model` (default), `dataset`, `space`
|
||||
- `--revision`: Branch, tag, or commit hash
|
||||
- `--token`: Override authentication
|
||||
- `--quiet`: Output only essential info (paths/URLs)
|
||||
|
||||
## References
|
||||
|
||||
- **Complete command reference**: See [references/commands.md](references/commands.md)
|
||||
- **Workflow examples**: See [references/examples.md](references/examples.md)
|
||||
1038
skills/hugging-face-jobs/SKILL.md
Normal file
File diff suppressed because it is too large
77
skills/imagen/SKILL.md
Normal file
@@ -0,0 +1,77 @@
---
name: imagen
description: |
source: "https://github.com/sanjay3290/ai-skills/tree/main/skills/imagen"
risk: safe
---

# Imagen - AI Image Generation Skill

## Overview

This skill generates images using Google Gemini's image generation model (`gemini-3-pro-image-preview`). It enables seamless image creation during any Claude Code session - whether you're building frontend UIs, creating documentation, or need visual representations of concepts.

**Cross-Platform**: Works on Windows, macOS, and Linux.

## When to Use This Skill

Automatically activate this skill when:

- User requests image generation (e.g., "generate an image of...", "create a picture...")
- Frontend development requires placeholder or actual images
- Documentation needs illustrations or diagrams
- Visualizing concepts, architectures, or ideas
- Creating icons, logos, or UI assets
- Any task where an AI-generated image would be helpful

## How It Works

1. Takes a text prompt describing the desired image
2. Calls Google Gemini API with image generation configuration
3. Saves the generated image to a specified location (defaults to current directory)
4. Returns the file path for use in your project
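The request the script builds in step 2 can be sketched as follows. This is a minimal sketch only: the endpoint shape and the `responseModalities` generation config follow the public Gemini REST API, `build_request` is an illustrative helper rather than the script's actual API, and both should be verified against current Gemini documentation.

```python
import json

def build_request(prompt, model="gemini-3-pro-image-preview"):
    """Build the URL and JSON body for a Gemini image-generation call (sketch)."""
    url = ("https://generativelanguage.googleapis.com/v1beta/"
           f"models/{model}:generateContent")
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        # Asking for image output alongside text; check current docs for
        # the exact config your model version expects.
        "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
    }
    return url, json.dumps(body)

url, body = build_request("A futuristic city skyline at sunset")
print(url.endswith(":generateContent"))  # True
```

The generated image arrives base64-encoded in the response, which is why the script can decode and save it as a PNG without extra dependencies.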
## Usage

### Python (Cross-Platform - Recommended)

```bash
# Basic usage
python scripts/generate_image.py "A futuristic city skyline at sunset"

# With custom output path
python scripts/generate_image.py "A minimalist app icon for a music player" "./assets/icons/music-icon.png"

# With custom size
python scripts/generate_image.py --size 2K "High resolution landscape" "./wallpaper.png"
```

## Requirements

- `GEMINI_API_KEY` environment variable must be set
- Python 3.6+ (uses standard library only, no pip install needed)

## Output

Generated images are saved as PNG files. The script returns:

- Success: Path to the generated image
- Failure: Error message with details

## Examples

### Frontend Development

```
User: "I need a hero image for my landing page - something abstract and tech-focused"
-> Generates and saves image, provides path for use in HTML/CSS
```

### Documentation

```
User: "Create a diagram showing microservices architecture"
-> Generates visual representation, ready for README or docs
```

### UI Assets

```
User: "Generate a placeholder avatar image for the user profile component"
-> Creates image in appropriate size for component use
```
150
skills/iterate-pr/SKILL.md
Normal file
@@ -0,0 +1,150 @@
---
name: iterate-pr
description: "Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automates the feedback-fix-push-wait cycle."
source: "https://github.com/getsentry/skills/tree/main/plugins/sentry-skills/skills/iterate-pr"
risk: safe
---

# Iterate on PR Until CI Passes

Continuously iterate on the current branch until all CI checks pass and review feedback is addressed.

## When to Use This Skill

Use this skill when:

- Fixing CI failures
- Addressing review feedback
- Continuously pushing fixes until all checks are green
- Automating the feedback-fix-push-wait cycle
- Ensuring the PR meets all quality gates

**Requires**: GitHub CLI (`gh`) authenticated and available.

## Process

### Step 1: Identify the PR

```bash
gh pr view --json number,url,headRefName,baseRefName
```

If no PR exists for the current branch, stop and inform the user.

### Step 2: Check CI Status First

Always check CI/GitHub Actions status before looking at review feedback:

```bash
gh pr checks --json name,state,bucket,link,workflow
```

The `bucket` field categorizes state into: `pass`, `fail`, `pending`, `skipping`, or `cancel`.

**Important:** If any of these checks are still `pending`, wait before proceeding:

- `sentry` / `sentry-io`
- `codecov`
- `cursor` / `bugbot` / `seer`
- Any linter or code analysis checks

These bots may post additional feedback comments once their checks complete. Waiting avoids duplicate work.

### Step 3: Gather Review Feedback

Once CI checks have completed (or at least the bot-related checks), gather human and bot feedback:

**Review Comments and Status:**

```bash
gh pr view --json reviews,comments,reviewDecision
```

**Inline Code Review Comments:**

```bash
gh api repos/{owner}/{repo}/pulls/{pr_number}/comments
```

**PR Conversation Comments (includes bot comments):**

```bash
gh api repos/{owner}/{repo}/issues/{pr_number}/comments
```

Look for bot comments from: Sentry, Codecov, Cursor, Bugbot, Seer, and other automated tools.

### Step 4: Investigate Failures

For each CI failure, get the actual logs:

```bash
# List recent runs for this branch
gh run list --branch $(git branch --show-current) --limit 5 --json databaseId,name,status,conclusion

# View failed logs for a specific run
gh run view <run-id> --log-failed
```

Do NOT assume what failed based on the check name alone. Always read the actual logs.

### Step 5: Validate Feedback

For each piece of feedback (CI failure or review comment):

1. **Read the relevant code** - Understand the context before making changes
2. **Verify the issue is real** - Not all feedback is correct; reviewers and bots can be wrong
3. **Check if already addressed** - The issue may have been fixed in a subsequent commit
4. **Skip invalid feedback** - If the concern is not legitimate, move on

### Step 6: Address Valid Issues

Make minimal, targeted code changes. Only fix what is actually broken.

### Step 7: Commit and Push

```bash
git add -A
git commit -m "fix: <descriptive message of what was fixed>"
git push origin $(git branch --show-current)
```

### Step 8: Wait for CI

Use the built-in watch functionality:

```bash
gh pr checks --watch --interval 30
```

This waits until all checks complete. Exit code 0 means all passed, exit code 1 means failures.

Alternatively, poll manually if you need more control:

```bash
gh pr checks --json name,state,bucket | jq '.[] | select(.bucket != "pass")'
```

### Step 9: Repeat

Return to Step 2 if:

- Any CI checks failed
- New review feedback appeared

Continue until all checks pass and no unaddressed feedback remains.
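The decision to loop back in Steps 2 and 9 reduces to checking whether any check's `bucket` is unfinished or failing. A minimal sketch over the `gh pr checks --json` output (the helper name is illustrative, not part of any tool):

```python
import json

def needs_iteration(checks_json):
    """Return True if any check is failing, canceled, or still pending."""
    checks = json.loads(checks_json)
    return any(c["bucket"] in ("fail", "pending", "cancel") for c in checks)

sample = json.dumps([
    {"name": "tests", "state": "SUCCESS", "bucket": "pass"},
    {"name": "lint", "state": "FAILURE", "bucket": "fail"},
])
print(needs_iteration(sample))  # True
```

Feeding this the output of `gh pr checks --json name,state,bucket` gives a single boolean for the fix-push-wait loop; `skipping` is deliberately not counted as a reason to iterate.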
## Exit Conditions

**Success:**

- All CI checks are green (`bucket: pass`)
- No unaddressed human review feedback

**Ask for Help:**

- Same failure persists after 3 attempts (likely a flaky test or deeper issue)
- Review feedback requires clarification or a decision from the user
- CI failure is unrelated to branch changes (infrastructure issue)

**Stop Immediately:**

- No PR exists for the current branch
- Branch is out of sync and needs a rebase (inform the user)

## Tips

- Use `gh pr checks --required` to focus only on required checks
- Use `gh run view <run-id> --verbose` to see all job steps, not just failures
- If a check is from an external service, the `link` field in the checks JSON provides the URL to investigate
543
skills/linear-claude-skill/SKILL.md
Normal file
@@ -0,0 +1,543 @@
---
name: linear-claude-skill
description: "Manage Linear issues, projects, and teams"
allowed-tools:
  - WebFetch(domain: linear.app)
source: "https://github.com/wrsmith108/linear-claude-skill"
risk: safe
---

## When to Use This Skill

Use this skill when managing Linear issues, projects, and teams.

# Linear

Tools and workflows for managing issues, projects, and teams in Linear.

---

## ⚠️ Tool Availability (READ FIRST)

**This skill supports multiple tool backends. Use whichever is available:**

1. **MCP Tools (mcp__linear)** - Use if available in your tool set
2. **Linear CLI (`linear` command)** - Always available via Bash
3. **Helper Scripts** - For complex operations

**If MCP tools are NOT available**, use the Linear CLI via Bash:

```bash
# View an issue
linear issues view ENG-123

# Create an issue
linear issues create --title "Issue title" --description "Description"

# Update issue status (get state IDs first)
linear issues update ENG-123 -s "STATE_ID"

# Add a comment
linear issues comment add ENG-123 -m "Comment text"

# List issues
linear issues list
```

**Do NOT report "MCP tools not available" as a blocker** - use the CLI instead.

---

## 🔐 Security: Varlock Integration

**CRITICAL**: Never expose API keys in terminal output or Claude's context.

### Safe Commands (Always Use)

```bash
# Validate LINEAR_API_KEY is set (masked output)
varlock load 2>&1 | grep LINEAR

# Run commands with secrets injected
varlock run -- npx tsx scripts/query.ts "query { viewer { name } }"

# Check schema (safe - no values)
cat .env.schema | grep LINEAR
```

### Unsafe Commands (NEVER Use)

```bash
# ❌ NEVER - exposes key to Claude's context
linear config show
echo $LINEAR_API_KEY
printenv | grep LINEAR
cat .env
```

### Setup for New Projects

1. Create `.env.schema` with the `@sensitive` annotation:

   ```bash
   # @type=string(startsWith=lin_api_) @required @sensitive
   LINEAR_API_KEY=
   ```

2. Add `LINEAR_API_KEY` to `.env` (never commit this file)

3. Configure MCP to use the environment variable:

   ```json
   {
     "mcpServers": {
       "linear": {
         "env": { "LINEAR_API_KEY": "${LINEAR_API_KEY}" }
       }
     }
   }
   ```

4. Use `varlock load` to validate before operations

---

## Quick Start (First-Time Users)

### 1. Check Your Setup

Run the setup check to verify your configuration:

```bash
npx tsx ~/.claude/skills/linear/scripts/setup.ts
```

This will check:

- LINEAR_API_KEY is set and valid
- @linear/sdk is installed
- Linear CLI availability (optional)
- MCP configuration (optional)

### 2. Get API Key (If Needed)

If setup reports a missing API key:

1. Open [Linear](https://linear.app) in your browser
2. Go to **Settings** (gear icon) -> **Security & access** -> **Personal API keys**
3. Click **Create key** and copy the key (starts with `lin_api_`)
4. Add it to your environment:

```bash
# Option A: Add to shell profile (~/.zshrc or ~/.bashrc)
export LINEAR_API_KEY="lin_api_your_key_here"

# Option B: Add to Claude Code environment
echo 'LINEAR_API_KEY=lin_api_your_key_here' >> ~/.claude/.env

# Then reload your shell or restart Claude Code
```

### 3. Test Connection

Verify everything works:

```bash
npx tsx ~/.claude/skills/linear/scripts/query.ts "query { viewer { name } }"
```

You should see your name from Linear.

### 4. Common Operations

```bash
# Create issue in a project
npx tsx scripts/linear-ops.ts create-issue "Project" "Title" "Description"

# Update issue status
npx tsx scripts/linear-ops.ts status Done ENG-123 ENG-124

# Create sub-issue
npx tsx scripts/linear-ops.ts create-sub-issue ENG-100 "Sub-task" "Details"

# Update project status
npx tsx scripts/linear-ops.ts project-status "Phase 1" completed

# Show all commands
npx tsx scripts/linear-ops.ts help
```

See [Project Management Commands](#project-management-commands) for the full reference.

---

## Project Planning Workflow

### Create Issues in the Correct Project from the Start

**Best Practice**: When planning a new phase or initiative, create the project and its issues together in a single planning session. Avoid creating issues in a catch-all project and moving them later.

#### Recommended Workflow

1. **Create the project first**:

   ```bash
   npx tsx scripts/linear-ops.ts create-project "Phase X: Feature Name" "My Initiative"
   ```

2. **Set project state to Planned**:

   ```bash
   npx tsx scripts/linear-ops.ts project-status "Phase X: Feature Name" planned
   ```

3. **Create issues directly in the project**:

   ```bash
   npx tsx scripts/linear-ops.ts create-issue "Phase X: Feature Name" "Parent task" "Description"
   npx tsx scripts/linear-ops.ts create-sub-issue ENG-XXX "Sub-task 1" "Description"
   npx tsx scripts/linear-ops.ts create-sub-issue ENG-XXX "Sub-task 2" "Description"
   ```

4. **Update project state when work begins**:

   ```bash
   npx tsx scripts/linear-ops.ts project-status "Phase X: Feature Name" in-progress
   ```

#### Why This Matters

- **Traceability**: Issues are linked to their project from creation
- **Metrics**: Project progress tracking is accurate from day one
- **Workflow**: No time wasted moving issues between projects
- **Organization**: Linear views and filters work correctly

#### Anti-Pattern to Avoid

❌ Creating issues in a "holding" project and moving them later:

```bash
# Don't do this
create-issue "Phase 6A" "New feature"   # Wrong project
# Later: manually move to Phase X       # Extra work
```

---

## Project Management Commands

### project-status

Update a project's state in Linear. Accepts user-friendly terminology that maps to Linear's API.

```bash
npx tsx scripts/linear-ops.ts project-status <project-name> <state>
```

**Valid States:**

| Input | Description | API Value |
|-------|-------------|-----------|
| `backlog` | Not yet started | backlog |
| `planned` | Scheduled for future | planned |
| `in-progress` | Currently active | started |
| `paused` | Temporarily on hold | paused |
| `completed` | Successfully finished | completed |
| `canceled` | Will not be done | canceled |

**Examples:**

```bash
# Start working on a project
npx tsx scripts/linear-ops.ts project-status "Phase 8: MCP Decision Engine" in-progress

# Mark project complete
npx tsx scripts/linear-ops.ts project-status "Phase 8" completed

# Partial name matching works
npx tsx scripts/linear-ops.ts project-status "Phase 8" paused
```
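The translation in the Valid States table is a straight lookup. A language-neutral sketch of that mapping (`to_api_state` is illustrative and not part of `linear-ops.ts`):

```python
# User-friendly state names -> Linear API values, per the table above.
STATE_MAP = {
    "backlog": "backlog",
    "planned": "planned",
    "in-progress": "started",
    "paused": "paused",
    "completed": "completed",
    "canceled": "canceled",
}

def to_api_state(user_state):
    """Translate a user-friendly state into the Linear API value."""
    state = user_state.strip().lower()
    if state not in STATE_MAP:
        raise ValueError(f"Unknown state: {user_state!r}")
    return STATE_MAP[state]

print(to_api_state("In-Progress"))  # started
```

Note that `in-progress` is the only entry where the friendly name and the API value differ, which is exactly why the command accepts the friendly form.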
### link-initiative

Link an existing project to an initiative.

```bash
npx tsx scripts/linear-ops.ts link-initiative <project-name> <initiative-name>
```

**Examples:**

```bash
# Link a project to an initiative
npx tsx scripts/linear-ops.ts link-initiative "Phase 8: MCP Decision Engine" "Q1 Goals"

# Partial matching works
npx tsx scripts/linear-ops.ts link-initiative "Phase 8" "Q1 Goals"
```

### unlink-initiative

Remove a project from an initiative.

```bash
npx tsx scripts/linear-ops.ts unlink-initiative <project-name> <initiative-name>
```

**Examples:**

```bash
# Remove incorrect link
npx tsx scripts/linear-ops.ts unlink-initiative "Phase 8" "Linear Skill"

# Clean up test links
npx tsx scripts/linear-ops.ts unlink-initiative "Test Project" "Q1 Goals"
```

**Error Handling:**

- Returns an error if the project is not linked to the specified initiative
- Returns an error if the project or initiative is not found

### Complete Project Lifecycle Example

```bash
# 1. Create project linked to initiative
npx tsx scripts/linear-ops.ts create-project "Phase 11: New Feature" "Q1 Goals"

# 2. Set state to planned
npx tsx scripts/linear-ops.ts project-status "Phase 11" planned

# 3. Create issues in the project
npx tsx scripts/linear-ops.ts create-issue "Phase 11" "Parent task" "Description"
npx tsx scripts/linear-ops.ts create-sub-issue ENG-XXX "Sub-task 1" "Details"

# 4. Start work - update to in-progress
npx tsx scripts/linear-ops.ts project-status "Phase 11" in-progress

# 5. Mark issues done
npx tsx scripts/linear-ops.ts status Done ENG-XXX ENG-YYY

# 6. Complete project
npx tsx scripts/linear-ops.ts project-status "Phase 11" completed

# 7. (Optional) Link to additional initiative
npx tsx scripts/linear-ops.ts link-initiative "Phase 11" "Q2 Goals"
```

---

## Tool Selection

Choose the right tool for the task:

| Tool | When to Use |
|------|-------------|
| **MCP (Official Server)** | Most operations - PREFERRED |
| **Helper Scripts** | Bulk operations, when MCP unavailable |
| **SDK scripts** | Complex operations (loops, conditionals) |
| **GraphQL API** | Operations not supported by MCP/SDK |

### MCP Server Configuration

**Use the official Linear MCP server** at `mcp.linear.app`:

```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.linear.app/sse"],
      "env": { "LINEAR_API_KEY": "your_api_key" }
    }
  }
}
```

> **WARNING**: Do NOT use deprecated community servers. See [troubleshooting.md](troubleshooting.md) for details.

### MCP Reliability (Official Server)

| Operation | Reliability | Notes |
|-----------|-------------|-------|
| Create issue | ✅ High | Full support |
| Update status | ✅ High | Use `state: "Done"` directly |
| List/Search issues | ✅ High | Supports filters, queries |
| Add comment | ✅ High | Works with issue IDs |

### Quick Status Update

```bash
# Via MCP - use human-readable state names
update_issue with id="issue-uuid", state="Done"

# Via helper script (bulk operations)
node scripts/linear-helpers.mjs update-status Done 123 124 125
```

### Helper Script Reference

For detailed helper script usage, see **[troubleshooting.md](troubleshooting.md)**.

### Parallel Agent Execution

For bulk operations or background execution, use the `Linear-specialist` subagent:

```javascript
Task({
  description: "Update Linear issues",
  prompt: "Mark ENG-101, ENG-102, ENG-103 as Done",
  subagent_type: "Linear-specialist"
})
```

**When to use `Linear-specialist` (parallel):**

- Bulk status updates (3+ issues)
- Project status changes
- Creating multiple issues
- Sync operations after code changes

**When to use direct execution:**

- Single issue queries
- Viewing issue details
- Quick status checks
- Operations needing immediate results

See **[sync.md](sync.md)** for parallel execution patterns.

## Critical Requirements

### Issues → Projects → Initiatives

**Every issue MUST be attached to a project. Every project MUST be linked to an initiative.**

| Entity | Must Link To | If Missing |
|--------|--------------|------------|
| Issue | Project | Not visible in project board |
| Project | Initiative | Not visible in roadmap |

See **[projects.md](projects.md)** for the complete project creation checklist.

---

## Conventions

### Issue Status

- **Assigned to me**: Set `state: "Todo"`
- **Unassigned**: Set `state: "Backlog"`

### Labels

Uses a **domain-based label taxonomy**. See [docs/labels.md](docs/labels.md).

**Key rules:**

- ONE Type label: `feature`, `bug`, `refactor`, `chore`, `spike`
- 1-2 Domain labels: `security`, `backend`, `frontend`, etc.
- Scope labels when applicable: `blocked`, `breaking-change`, `tech-debt`

```bash
# Validate labels
npx tsx scripts/linear-ops.ts labels validate "feature,security"

# Suggest labels for issue
npx tsx scripts/linear-ops.ts labels suggest "Fix XSS vulnerability"
```

## SDK Automation Scripts

**Use only when MCP tools are insufficient.** For complex operations involving loops, mapping, or bulk updates, write TypeScript scripts using `@linear/sdk`. See `sdk.md` for:

- Complete script patterns and templates
- Common automation examples (bulk updates, filtering, reporting)
- Tool selection criteria

Scripts provide full type hints and are easier to debug than raw GraphQL for multi-step operations.

## GraphQL API

**Fallback only.** Use when operations aren't supported by MCP or SDK.

See **[api.md](api.md)** for complete documentation including:

- Authentication and setup
- Example queries and mutations
- Timeout handling patterns
- MCP timeout workarounds
- Shell script compatibility

**Quick ad-hoc query:**

```bash
npx tsx ~/.claude/skills/linear/scripts/query.ts "query { viewer { name } }"
```

## Projects & Initiatives

For advanced project and initiative management patterns, see **[projects.md](projects.md)**.

**Quick reference** - common project commands:

```bash
# Create project linked to initiative
npx tsx scripts/linear-ops.ts create-project "Phase X: Name" "My Initiative"

# Update project status
npx tsx scripts/linear-ops.ts project-status "Phase X" in-progress
npx tsx scripts/linear-ops.ts project-status "Phase X" completed

# Link/unlink projects to initiatives
npx tsx scripts/linear-ops.ts link-initiative "Phase X" "My Initiative"
npx tsx scripts/linear-ops.ts unlink-initiative "Phase X" "Old Initiative"
```

**Key topics in projects.md:**

- Project creation checklist (mandatory steps)
- Content vs Description fields
- Discovery before creation
- Codebase verification before work
- Sub-issue management
- Project status updates
- Project updates (status reports)

---

## Sync Patterns (Bulk Operations)

For bulk synchronization of code changes to Linear, see **[sync.md](sync.md)**.

**Quick sync commands:**

```bash
# Bulk update issues to Done
npx tsx scripts/linear-ops.ts status Done ENG-101 ENG-102 ENG-103

# Update project status
npx tsx scripts/linear-ops.ts project-status "My Project" completed
```

---

## Reference

| Document | Purpose |
|----------|---------|
| [api.md](api.md) | GraphQL API reference, timeout handling |
| [sdk.md](sdk.md) | SDK automation patterns |
| [sync.md](sync.md) | Bulk sync patterns |
| [projects.md](projects.md) | Project & initiative management |
| [troubleshooting.md](troubleshooting.md) | Common issues, MCP debugging |
| [docs/labels.md](docs/labels.md) | Label taxonomy |

**External:** [Linear MCP Documentation](https://linear.app/docs/mcp.md)
22
skills/makepad-skills/SKILL.md
Normal file
@@ -0,0 +1,22 @@
|
||||
---
|
||||
name: makepad-skills
|
||||
description: "Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting."
|
||||
source: "https://github.com/ZhangHanDong/makepad-skills"
|
||||
risk: safe
|
||||
---
|
||||
|
||||
# Makepad Skills
|
||||
|
||||
## Overview
|
||||
|
||||
Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
Use this skill when you need to work with makepad ui development skills for rust apps: setup, patterns, shaders, packaging, and troubleshooting..
|
||||
|
||||
## Instructions
|
||||
|
||||
This skill provides guidance and patterns for makepad ui development skills for rust apps: setup, patterns, shaders, packaging, and troubleshooting..
|
||||
|
||||
For more information, see the [source repository](https://github.com/ZhangHanDong/makepad-skills).
|
||||
228
skills/memory-systems/SKILL.md
Normal file
@@ -0,0 +1,228 @@
---
|
||||
name: memory-systems
|
||||
description: "Design short-term, long-term, and graph-based memory architectures"
|
||||
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/memory-systems"
|
||||
risk: safe
|
||||
---
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
Design short-term, long-term, and graph-based memory architectures
|
||||
|
||||
Use this skill when working with design short-term, long-term, and graph-based memory architectures.
|
||||
# Memory System Design
|
||||
|
||||
Memory provides the persistence layer that allows agents to maintain continuity across sessions and reason over accumulated knowledge. Simple agents rely entirely on context for memory, losing all state when sessions end. Sophisticated agents implement layered memory architectures that balance immediate context needs with long-term knowledge retention. The evolution from vector stores to knowledge graphs to temporal knowledge graphs represents increasing investment in structured memory for improved retrieval and reasoning.

## When to Activate

Activate this skill when:

- Building agents that must persist across sessions
- Needing to maintain entity consistency across conversations
- Implementing reasoning over accumulated knowledge
- Designing systems that learn from past interactions
- Creating knowledge bases that grow over time
- Building temporal-aware systems that track state changes

## Core Concepts

Memory exists on a spectrum from immediate context to permanent storage. At one extreme, working memory in the context window provides zero-latency access but vanishes when sessions end. At the other extreme, permanent storage persists indefinitely but requires retrieval to enter context.

Simple vector stores lack relationship and temporal structure. Knowledge graphs preserve relationships for reasoning. Temporal knowledge graphs add validity periods for time-aware queries. Implementation choices depend on query complexity, infrastructure constraints, and accuracy requirements.

## Detailed Topics

### Memory Architecture Fundamentals

**The Context-Memory Spectrum**

Memory exists on a spectrum from immediate context to permanent storage. At one extreme, working memory in the context window provides zero-latency access but vanishes when sessions end. At the other extreme, permanent storage persists indefinitely but requires retrieval to enter context. Effective architectures use multiple layers along this spectrum.

The spectrum includes working memory (context window, zero latency, volatile), short-term memory (session-persistent, searchable, volatile), long-term memory (cross-session persistent, structured, semi-permanent), and permanent memory (archival, queryable, permanent). Each layer has different latency, capacity, and persistence characteristics.
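
The layers above can be sketched as a toy layered store (class and method names are illustrative, not from any framework):

```python
from datetime import datetime, timezone

class LayeredMemory:
    """Toy model of the context-memory spectrum: each layer trades latency for persistence."""

    def __init__(self):
        self.working = []        # context window: volatile, zero-latency
        self.short_term = {}     # session store: searchable, cleared at session end
        self.long_term = {}      # cross-session store: persists indefinitely

    def observe(self, fact: str):
        """New information enters working memory first."""
        self.working.append(fact)

    def end_turn(self):
        """Spill working memory into the session store, keyed by timestamp."""
        ts = datetime.now(timezone.utc).isoformat()
        self.short_term[ts] = list(self.working)
        self.working.clear()

    def end_session(self, key: str):
        """Consolidate the session into long-term memory, then discard it."""
        self.long_term[key] = [f for facts in self.short_term.values() for f in facts]
        self.short_term.clear()

memory = LayeredMemory()
memory.observe("user prefers dark mode")
memory.end_turn()
memory.end_session("session-1")
```

The point of the sketch is the promotion path: facts flow downward through layers at well-defined boundaries (turn end, session end) rather than being copied everywhere at once.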

**Why Simple Vector Stores Fall Short**

Vector RAG provides semantic retrieval by embedding queries and documents in a shared embedding space. Similarity search retrieves the most semantically similar documents. This works well for document retrieval but lacks structure for agent memory.

Vector stores lose relationship information. If an agent learns that "Customer X purchased Product Y on Date Z," a vector store can retrieve this fact if asked directly. But it cannot answer "What products did customers who purchased Product Y also buy?" because relationship structure is not preserved.

Vector stores also struggle with temporal validity. Facts change over time, but vector stores provide no mechanism to distinguish "current fact" from "outdated fact" except through explicit metadata and filtering.

**The Move to Graph-Based Memory**

Knowledge graphs preserve relationships between entities. Instead of isolated document chunks, graphs encode that Entity A has Relationship R to Entity B. This enables queries that traverse relationships rather than just similarity.

Temporal knowledge graphs add validity periods to facts. Each fact has a "valid from" and optionally "valid until" timestamp. This enables time-travel queries that reconstruct knowledge at specific points in time.

**Benchmark Performance Comparison**

The Deep Memory Retrieval (DMR) benchmark provides concrete performance data across memory architectures:

| Memory System | DMR Accuracy | Retrieval Latency | Notes |
|---------------|--------------|-------------------|-------|
| Zep (Temporal KG) | 94.8% | 2.58s | Best accuracy, fast retrieval |
| MemGPT | 93.4% | Variable | Good general performance |
| GraphRAG | ~75-85% | Variable | 20-35% gains over baseline RAG |
| Vector RAG | ~60-70% | Fast | Loses relationship structure |
| Recursive Summarization | 35.3% | Low | Severe information loss |

Zep demonstrated 90% reduction in retrieval latency compared to full-context baselines (2.58s vs 28.9s for GPT-5.2). This efficiency comes from retrieving only relevant subgraphs rather than entire context history.

GraphRAG achieves approximately 20-35% accuracy gains over baseline RAG in complex reasoning tasks and reduces hallucination by up to 30% through community-based summarization.

### Memory Layer Architecture

**Layer 1: Working Memory**

Working memory is the context window itself. It provides immediate access to information currently being processed but has limited capacity and vanishes when sessions end.

Working memory usage patterns include scratchpad calculations where agents track intermediate results, conversation history that preserves dialogue for the current task, current task state that tracks progress on active objectives, and active retrieved documents that hold information currently being used.

Optimize working memory by keeping only active information, summarizing completed work before it falls out of attention, and using attention-favored positions for critical information.

**Layer 2: Short-Term Memory**

Short-term memory persists across the current session but not across sessions. It provides search and retrieval capabilities without the latency of permanent storage.

Common implementations include session-scoped databases that persist until session end, file-system storage in designated session directories, and in-memory caches keyed by session ID.

Short-term memory use cases include tracking conversation state across turns without stuffing context, storing intermediate results from tool calls that may be needed later, maintaining task checklists and progress tracking, and caching retrieved information within sessions.

**Layer 3: Long-Term Memory**

Long-term memory persists across sessions indefinitely. It enables agents to learn from past interactions and build knowledge over time.

Long-term memory implementations range from simple key-value stores to sophisticated graph databases. The choice depends on the complexity of relationships to model, the query patterns required, and acceptable infrastructure complexity.

Long-term memory use cases include learning user preferences across sessions, building domain knowledge bases that grow over time, maintaining entity registries with relationship history, and storing successful patterns that can be reused.

**Layer 4: Entity Memory**

Entity memory specifically tracks information about entities (people, places, concepts, objects) to maintain consistency. This creates a rudimentary knowledge graph where entities are recognized across multiple interactions.

Entity memory maintains entity identity by tracking that "John Doe" mentioned in one conversation is the same person in another. It maintains entity properties by storing facts discovered about entities over time. It maintains entity relationships by tracking relationships between entities as they are discovered.

**Layer 5: Temporal Knowledge Graphs**

Temporal knowledge graphs extend entity memory with explicit validity periods. Facts are not just true or false but true during specific time ranges.

This enables queries like "What was the user's address on Date X?" by retrieving facts valid during that date range. It prevents context clash when outdated information contradicts new data. It enables temporal reasoning about how entities changed over time.

### Memory Implementation Patterns

**Pattern 1: File-System-as-Memory**

The file system itself can serve as a memory layer. This pattern is simple, requires no additional infrastructure, and enables the same just-in-time loading that makes file-system-based context effective.

Implementation uses the file system hierarchy for organization. Use naming conventions that convey meaning. Store facts in structured formats (JSON, YAML). Use timestamps in filenames or metadata for temporal tracking.

Advantages: Simplicity, transparency, portability.

Disadvantages: No semantic search, no relationship tracking, manual organization required.
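
A minimal sketch of the pattern (the directory layout and timestamp-based naming are one possible convention, not a prescribed one):

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

# Illustrative layout: memory/<entity>/<timestamp>.json -- one fact per file.
root = Path(tempfile.mkdtemp()) / "memory"

def store_fact(entity: str, fact: dict) -> Path:
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    path = root / entity / f"{ts}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(fact))
    return path

def load_facts(entity: str) -> list:
    # Lexicographic filename order equals chronological order with this timestamp format
    entity_dir = root / entity
    if not entity_dir.exists():
        return []
    return [json.loads(p.read_text()) for p in sorted(entity_dir.glob("*.json"))]

store_fact("customer-x", {"purchased": "product-y"})
facts = load_facts("customer-x")
```

Because files are plain JSON, the store is transparent and portable, which is exactly the trade-off the pattern describes.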

**Pattern 2: Vector RAG with Metadata**

Vector stores enhanced with rich metadata provide semantic search with filtering capabilities.

Implementation embeds facts or documents and stores them with metadata including entity tags, temporal validity, source attribution, and confidence scores. Queries include metadata filters alongside semantic search.
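
A toy sketch of the pattern using hand-written vectors (a real system would use an embedding model and a vector database; the store layout and facts here are illustrative):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy store: (embedding, text, metadata) triples.
store = [
    ([1.0, 0.0], "User lives in Berlin", {"entity": "user", "valid": True}),
    ([0.9, 0.1], "User lived in Paris", {"entity": "user", "valid": False}),
    ([0.0, 1.0], "Invoice #42 is unpaid", {"entity": "invoice", "valid": True}),
]

def retrieve(query_vec, metadata_filter, k=1):
    # Apply the metadata filter first, then rank survivors by similarity.
    candidates = [
        (cosine(query_vec, vec), text)
        for vec, text, meta in store
        if all(meta.get(key) == val for key, val in metadata_filter.items())
    ]
    return [text for _, text in sorted(candidates, reverse=True)[:k]]

# The metadata filter excludes the outdated Paris fact despite its high similarity.
result = retrieve([1.0, 0.0], {"entity": "user", "valid": True})
```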

**Pattern 3: Knowledge Graph**

Knowledge graphs explicitly model entities and relationships. Implementation defines entity types and relationship types, uses graph database or property graph storage, and maintains indexes for common query patterns.

**Pattern 4: Temporal Knowledge Graph**

Temporal knowledge graphs add validity periods to facts, enabling time-travel queries and preventing context clash from outdated information.
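
A minimal sketch of temporal facts with validity intervals (the triple layout and data are illustrative):

```python
from datetime import date

# Each fact is a subject-predicate-object triple plus a validity interval;
# valid_until=None means "still true".
facts = [
    {"s": "user-1", "p": "LIVES_AT", "o": "12 Oak St",
     "valid_from": date(2022, 3, 1), "valid_until": date(2024, 6, 1)},
    {"s": "user-1", "p": "LIVES_AT", "o": "9 Elm Ave",
     "valid_from": date(2024, 6, 1), "valid_until": None},
]

def query_at(subject, predicate, as_of):
    """Return objects whose validity interval covers `as_of` (a time-travel query)."""
    return [
        f["o"] for f in facts
        if f["s"] == subject and f["p"] == predicate
        and f["valid_from"] <= as_of
        and (f["valid_until"] is None or as_of < f["valid_until"])
    ]

past = query_at("user-1", "LIVES_AT", date(2024, 1, 15))
current = query_at("user-1", "LIVES_AT", date(2025, 1, 1))
```

Because the old address's interval was closed when the new one opened, the two facts never contradict each other: each query date selects exactly one.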

### Memory Retrieval Patterns

**Semantic Retrieval**

Retrieve memories semantically similar to the current query using embedding similarity search.

**Entity-Based Retrieval**

Retrieve all memories related to specific entities by traversing graph relationships.

**Temporal Retrieval**

Retrieve memories valid at a specific time or within a time range using validity period filters.

### Memory Consolidation

Memories accumulate over time and require consolidation to prevent unbounded growth and remove outdated information.

**Consolidation Triggers**

Trigger consolidation after significant memory accumulation, when retrieval returns too many outdated results, periodically on a schedule, or when explicit consolidation is requested.

**Consolidation Process**

Identify outdated facts, merge related facts, update validity periods, archive or delete obsolete facts, and rebuild indexes.
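
The update-validity-and-archive steps of that process can be sketched as follows (the data and field names are illustrative):

```python
from datetime import date

facts = [
    {"key": "user.city", "value": "Paris", "valid_from": date(2023, 1, 1), "valid_until": None},
    {"key": "user.city", "value": "Berlin", "valid_from": date(2024, 5, 1), "valid_until": None},
]

def consolidate(facts):
    """For each key, close the validity of superseded facts and archive them."""
    by_key = {}
    for f in sorted(facts, key=lambda f: f["valid_from"]):
        by_key.setdefault(f["key"], []).append(f)
    active, archive = [], []
    for versions in by_key.values():
        for older, newer in zip(versions, versions[1:]):
            older["valid_until"] = newer["valid_from"]  # close the superseded interval
            archive.append(older)
        active.append(versions[-1])  # latest version stays active
    return active, archive

active, archive = consolidate(facts)
```

Nothing is deleted outright: superseded facts move to an archive with a closed interval, so time-travel queries still work after consolidation.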

## Practical Guidance

### Integration with Context

Memories must integrate with context systems to be useful. Use just-in-time memory loading to retrieve relevant memories when needed. Use strategic injection to place memories in attention-favored positions.

### Memory System Selection

Choose memory architecture based on requirements:

- Simple persistence needs: File-system memory
- Semantic search needs: Vector RAG with metadata
- Relationship reasoning needs: Knowledge graph
- Temporal validity needs: Temporal knowledge graph

## Examples

**Example 1: Entity Tracking**

```python
# Track an entity across conversations.
# "memory" stands in for whatever store backs long-term memory.
def remember_entity(entity_id, properties):
    memory.store({
        "type": "entity",
        "id": entity_id,
        "properties": properties,
        "last_updated": now()
    })

def get_entity(entity_id):
    return memory.retrieve_entity(entity_id)
```

**Example 2: Temporal Query**

```python
# What was the user's address on January 15, 2024?
def query_address_at_time(user_id, query_time):
    return temporal_graph.query("""
        MATCH (user)-[r:LIVES_AT]->(address)
        WHERE user.id = $user_id
          AND r.valid_from <= $query_time
          AND (r.valid_until IS NULL OR r.valid_until > $query_time)
        RETURN address
    """, {"user_id": user_id, "query_time": query_time})
```

## Guidelines

1. Match memory architecture to query requirements
2. Implement progressive disclosure for memory access
3. Use temporal validity to prevent outdated information conflicts
4. Consolidate memories periodically to prevent unbounded growth
5. Handle memory retrieval failures gracefully
6. Consider privacy implications of persistent memory
7. Implement backup and recovery for critical memories
8. Monitor memory growth and performance over time

## Integration

This skill builds on context-fundamentals. It connects to:

- multi-agent-patterns - Shared memory across agents
- context-optimization - Memory-based context loading
- evaluation - Evaluating memory quality

## References

Internal reference:
- [Implementation Reference](./references/implementation.md) - Detailed implementation patterns

Related skills in this collection:
- context-fundamentals - Context basics
- multi-agent-patterns - Cross-agent memory

External resources:
- Graph database documentation (Neo4j, etc.)
- Vector store documentation (Pinecone, Weaviate, etc.)
- Research on knowledge graphs and reasoning

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0
262
skills/multi-agent-patterns/SKILL.md
Normal file
@@ -0,0 +1,262 @@
---
name: multi-agent-patterns
description: "Master orchestrator, peer-to-peer, and hierarchical multi-agent architectures"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/multi-agent-patterns"
risk: safe
---

# Multi-Agent Architecture Patterns

## When to Use This Skill

Use this skill to master orchestrator, peer-to-peer, and hierarchical multi-agent architectures.

Multi-agent architectures distribute work across multiple language model instances, each with its own context window. When designed well, this distribution enables capabilities beyond single-agent limits. When designed poorly, it introduces coordination overhead that negates benefits. The critical insight is that sub-agents exist primarily to isolate context, not to anthropomorphize role division.

## When to Activate

Activate this skill when:

- Single-agent context limits constrain task complexity
- Tasks decompose naturally into parallel subtasks
- Different subtasks require different tool sets or system prompts
- Building systems that must handle multiple domains simultaneously
- Scaling agent capabilities beyond single-context limits
- Designing production agent systems with multiple specialized components

## Core Concepts

Multi-agent systems address single-agent context limitations through distribution. Three dominant patterns exist: supervisor/orchestrator for centralized control, peer-to-peer/swarm for flexible handoffs, and hierarchical for layered abstraction. The critical design principle is context isolation—sub-agents exist primarily to partition context rather than to simulate organizational roles.

Effective multi-agent systems require explicit coordination protocols, consensus mechanisms that avoid sycophancy, and careful attention to failure modes including bottlenecks, divergence, and error propagation.

## Detailed Topics

### Why Multi-Agent Architectures

**The Context Bottleneck**

Single agents face inherent ceilings in reasoning capability, context management, and tool coordination. As tasks grow more complex, context windows fill with accumulated history, retrieved documents, and tool outputs. Performance degrades according to predictable patterns: the lost-in-middle effect, attention scarcity, and context poisoning.

Multi-agent architectures address these limitations by partitioning work across multiple context windows. Each agent operates in a clean context focused on its subtask. Results aggregate at a coordination layer without any single context bearing the full burden.

**The Token Economics Reality**

Multi-agent systems consume significantly more tokens than single-agent approaches. Production data shows:

| Architecture | Token Multiplier | Use Case |
|--------------|------------------|----------|
| Single agent chat | 1× baseline | Simple queries |
| Single agent with tools | ~4× baseline | Tool-using tasks |
| Multi-agent system | ~15× baseline | Complex research/coordination |

Research on the BrowseComp evaluation found that three factors explain 95% of performance variance: token usage (80% of variance), number of tool calls, and model choice. This validates the multi-agent approach of distributing work across agents with separate context windows to add capacity for parallel reasoning.

Critically, upgrading to better models often provides larger performance gains than doubling token budgets. Claude Sonnet 4.5 showed larger gains than doubling tokens on earlier Sonnet versions. GPT-5.2's thinking mode similarly outperforms raw token increases. This suggests model selection and multi-agent architecture are complementary strategies.

**The Parallelization Argument**

Many tasks contain parallelizable subtasks that a single agent must execute sequentially. A research task might require searching multiple independent sources, analyzing different documents, or comparing competing approaches. A single agent processes these sequentially, accumulating context with each step.

Multi-agent architectures assign each subtask to a dedicated agent with a fresh context. All agents work simultaneously, then return results to a coordinator. The total real-world time approaches the duration of the longest subtask rather than the sum of all subtasks.

**The Specialization Argument**

Different tasks benefit from different agent configurations: different system prompts, different tool sets, different context structures. A general-purpose agent must carry all possible configurations in context. Specialized agents carry only what they need.

Multi-agent architectures enable specialization without combinatorial explosion. The coordinator routes to specialized agents; each agent operates with lean context optimized for its domain.

### Architectural Patterns

**Pattern 1: Supervisor/Orchestrator**

The supervisor pattern places a central agent in control, delegating to specialists and synthesizing results. The supervisor maintains global state and trajectory, decomposes user objectives into subtasks, and routes to appropriate workers.

```
User Query -> Supervisor -> [Specialist, Specialist, Specialist] -> Aggregation -> Final Output
```

When to use: Complex tasks with clear decomposition, tasks requiring coordination across domains, tasks where human oversight is important.

Advantages: Strict control over workflow, easier to implement human-in-the-loop interventions, ensures adherence to predefined plans.

Disadvantages: Supervisor context becomes a bottleneck, supervisor failures cascade to all workers, "telephone game" problem where supervisors paraphrase sub-agent responses incorrectly.

**The Telephone Game Problem and Solution**

LangGraph benchmarks found supervisor architectures initially performed 50% worse than optimized versions due to the "telephone game" problem, where supervisors paraphrase sub-agent responses incorrectly and lose fidelity.

The fix: implement a `forward_message` tool allowing sub-agents to pass responses directly to users:

```python
def forward_message(message: str, to_user: bool = True):
    """
    Forward sub-agent response directly to user without supervisor synthesis.

    Use when:
    - Sub-agent response is final and complete
    - Supervisor synthesis would lose important details
    - Response format must be preserved exactly
    """
    if to_user:
        return {"type": "direct_response", "content": message}
    return {"type": "supervisor_input", "content": message}
```

With this pattern, swarm architectures slightly outperform supervisors because sub-agents respond directly to users, eliminating translation errors.

Implementation note: Implement direct pass-through mechanisms allowing sub-agents to pass responses directly to users rather than through supervisor synthesis when appropriate.

**Pattern 2: Peer-to-Peer/Swarm**

The peer-to-peer pattern removes central control, allowing agents to communicate directly based on predefined protocols. Any agent can transfer control to any other through explicit handoff mechanisms.

```python
def transfer_to_agent_b():
    return agent_b  # Handoff via function return

agent_a = Agent(
    name="Agent A",
    functions=[transfer_to_agent_b]
)
```

When to use: Tasks requiring flexible exploration, tasks where rigid planning is counterproductive, tasks with emergent requirements that defy upfront decomposition.

Advantages: No single point of failure, scales effectively for breadth-first exploration, enables emergent problem-solving behaviors.

Disadvantages: Coordination complexity increases with agent count, risk of divergence without central state keeper, requires robust convergence constraints.

Implementation note: Define explicit handoff protocols with state passing. Ensure agents can communicate their context needs to receiving agents.

**Pattern 3: Hierarchical**

Hierarchical structures organize agents into layers of abstraction: strategic, planning, and execution layers. Strategy layer agents define goals and constraints; planning layer agents break goals into actionable plans; execution layer agents perform atomic tasks.

```
Strategy Layer (Goal Definition) -> Planning Layer (Task Decomposition) -> Execution Layer (Atomic Tasks)
```

When to use: Large-scale projects with clear hierarchical structure, enterprise workflows with management layers, tasks requiring both high-level planning and detailed execution.

Advantages: Mirrors organizational structures, clear separation of concerns, enables different context structures at different levels.

Disadvantages: Coordination overhead between layers, potential for misalignment between strategy and execution, complex error propagation.

### Context Isolation as Design Principle

The primary purpose of multi-agent architectures is context isolation. Each sub-agent operates in a clean context window focused on its subtask without carrying accumulated context from other subtasks.

**Isolation Mechanisms**

Full context delegation: For complex tasks where the sub-agent needs complete understanding, the planner shares its entire context. The sub-agent has its own tools and instructions but receives full context for its decisions.

Instruction passing: For simple, well-defined subtasks, the planner creates instructions via function call. The sub-agent receives only the instructions needed for its specific task.

File system memory: For complex tasks requiring shared state, agents read and write to persistent storage. The file system serves as the coordination mechanism, avoiding context bloat from shared state passing.

**Isolation Trade-offs**

Full context delegation provides maximum capability but defeats the purpose of sub-agents. Instruction passing maintains isolation but limits sub-agent flexibility. File system memory enables shared state without context passing but introduces latency and consistency challenges.

The right choice depends on task complexity, coordination needs, and acceptable latency.
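
A minimal sketch of instruction passing (the function names and stub sub-agent are invented for illustration; a real implementation would invoke a model behind `run_subagent`):

```python
def run_subagent(instructions: str, tools: list) -> dict:
    """Stand-in for a sub-agent call: it receives only the task instructions,
    never the supervisor's accumulated history, and returns a distilled result."""
    # A real implementation would call a model here; this stub echoes its scope.
    return {"task": instructions, "summary": f"done: {instructions}", "tools_used": tools}

def supervisor(objective: str) -> list:
    # Decompose, then delegate each subtask with a fresh, minimal context.
    subtasks = [f"{objective} (part {i})" for i in (1, 2)]
    return [run_subagent(task, tools=["search"]) for task in subtasks]

results = supervisor("summarize quarterly reports")
```

The isolation is structural: `run_subagent`'s signature simply has no parameter through which the supervisor's history could leak.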

### Consensus and Coordination

**The Voting Problem**

Simple majority voting treats hallucinations from weak models as equal to reasoning from strong models. Without intervention, multi-agent discussions devolve into consensus on false premises due to inherent bias toward agreement.

**Weighted Voting**

Weight agent votes by confidence or expertise. Agents with higher confidence or domain expertise carry more weight in final decisions.

**Debate Protocols**

Debate protocols require agents to critique each other's outputs over multiple rounds. Adversarial critique often yields higher accuracy on complex reasoning than collaborative consensus.

**Trigger-Based Intervention**

Monitor multi-agent interactions for specific behavioral markers. Stall triggers activate when discussions make no progress. Sycophancy triggers detect when agents mimic each other's answers without unique reasoning.
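
A minimal sketch of weighted voting (the weights are assumed to be confidence scores supplied alongside each agent's answer):

```python
from collections import defaultdict

def weighted_vote(proposals):
    """Aggregate (answer, weight) pairs; weight may encode confidence or expertise."""
    totals = defaultdict(float)
    for answer, weight in proposals:
        totals[answer] += weight
    return max(totals, key=totals.get)

# Two low-confidence agents agree on one answer; one high-confidence expert disagrees.
proposals = [("A", 0.3), ("A", 0.3), ("B", 0.9)]
winner = weighted_vote(proposals)
```

A simple majority here would pick "A"; weighting lets the expert's single vote outweigh two weak agreements, which is exactly the failure mode the voting problem describes.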

### Framework Considerations

Different frameworks implement these patterns with different philosophies. LangGraph uses graph-based state machines with explicit nodes and edges. AutoGen uses conversational/event-driven patterns with GroupChat. CrewAI uses role-based process flows with hierarchical crew structures.

## Practical Guidance

### Failure Modes and Mitigations

**Failure: Supervisor Bottleneck**

The supervisor accumulates context from all workers, becoming susceptible to saturation and degradation.

Mitigation: Implement output schema constraints so workers return only distilled summaries. Use checkpointing to persist supervisor state without carrying full history.

**Failure: Coordination Overhead**

Agent communication consumes tokens and introduces latency. Complex coordination can negate parallelization benefits.

Mitigation: Minimize communication through clear handoff protocols. Batch results where possible. Use asynchronous communication patterns.

**Failure: Divergence**

Agents pursuing different goals without central coordination can drift from intended objectives.

Mitigation: Define clear objective boundaries for each agent. Implement convergence checks that verify progress toward shared goals. Use time-to-live limits on agent execution.

**Failure: Error Propagation**

Errors in one agent's output propagate to downstream agents that consume that output.

Mitigation: Validate agent outputs before passing to consumers. Implement retry logic with circuit breakers. Use idempotent operations where possible.
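
The validate-then-retry mitigation for error propagation can be sketched as follows (the output schema and the flaky stub agent are invented for illustration):

```python
def validate(output: dict) -> bool:
    """Schema check applied before an output is passed downstream."""
    return isinstance(output.get("summary"), str) and bool(output["summary"])

def call_with_retries(agent, task, max_attempts=3):
    """Retry on invalid output; open the circuit (raise) after max_attempts failures."""
    for attempt in range(1, max_attempts + 1):
        output = agent(task)
        if validate(output):
            return output
    raise RuntimeError(f"circuit open after {max_attempts} invalid outputs")

# Flaky stub agent: produces empty summaries twice, then a valid output.
calls = {"n": 0}
def flaky_agent(task):
    calls["n"] += 1
    return {"summary": "ok"} if calls["n"] >= 3 else {"summary": ""}

result = call_with_retries(flaky_agent, "analyze")
```

The circuit breaker turns a silent bad output into a loud failure at a known boundary, so downstream agents never consume it.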

## Examples

**Example 1: Research Team Architecture**

```text
Supervisor
├── Researcher (web search, document retrieval)
├── Analyzer (data analysis, statistics)
├── Fact-checker (verification, validation)
└── Writer (report generation, formatting)
```

**Example 2: Handoff Protocol**

```python
def handle_customer_request(request):
    if request.type == "billing":
        return transfer_to(billing_agent)
    elif request.type == "technical":
        return transfer_to(technical_agent)
    elif request.type == "sales":
        return transfer_to(sales_agent)
    else:
        return handle_general(request)
```

## Guidelines

1. Design for context isolation as the primary benefit of multi-agent systems
2. Choose architecture pattern based on coordination needs, not organizational metaphor
3. Implement explicit handoff protocols with state passing
4. Use weighted voting or debate protocols for consensus
5. Monitor for supervisor bottlenecks and implement checkpointing
6. Validate outputs before passing between agents
7. Set time-to-live limits to prevent infinite loops
8. Test failure scenarios explicitly

## Integration

This skill builds on context-fundamentals and context-degradation. It connects to:

- memory-systems - Shared state management across agents
- tool-design - Tool specialization per agent
- context-optimization - Context partitioning strategies

## References

Internal reference:
- [Frameworks Reference](./references/frameworks.md) - Detailed framework implementation patterns

Related skills in this collection:
- context-fundamentals - Context basics
- memory-systems - Cross-agent memory
- context-optimization - Partitioning strategies

External resources:
- [LangGraph Documentation](https://langchain-ai.github.io/langgraph/) - Multi-agent patterns and state management
- [AutoGen Framework](https://microsoft.github.io/autogen/) - GroupChat and conversational patterns
- [CrewAI Documentation](https://docs.crewai.com/) - Hierarchical agent processes
- [Research on Multi-Agent Coordination](https://arxiv.org/abs/2308.00352) - Survey of multi-agent systems

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0
750
skills/n8n-code-python/SKILL.md
Normal file
@@ -0,0 +1,750 @@
---
name: n8n-code-python
description: "Write Python code in n8n Code nodes. Use when writing Python in n8n, using _input/_json/_node syntax, working with standard library, or need to understand Python limitations in n8n Code nodes."
source: "https://github.com/czlonkowski/n8n-skills/tree/main/skills/n8n-code-python"
risk: safe
---

# Python Code Node (Beta)

Expert guidance for writing Python code in n8n Code nodes.

---

## ⚠️ Important: JavaScript First

**Recommendation**: Use **JavaScript for 95% of use cases**. Only use Python when:
- You need specific Python standard library functions
- You're significantly more comfortable with Python syntax
- You're doing data transformations better suited to Python

**Why JavaScript is preferred:**
- Full n8n helper functions ($helpers.httpRequest, etc.)
- Luxon DateTime library for advanced date/time operations
- No external library limitations
- Better n8n documentation and community support

---

## Quick Start

```python
# Basic template for Python Code nodes
from datetime import datetime  # standard library imports must be explicit

items = _input.all()

# Process data
processed = []
for item in items:
    processed.append({
        "json": {
            **item["json"],
            "processed": True,
            "timestamp": datetime.now().isoformat()
        }
    })

return processed
```

### Essential Rules

1. **Consider JavaScript first** - Use Python only when necessary
2. **Access data**: `_input.all()`, `_input.first()`, or `_input.item`
3. **CRITICAL**: Must return `[{"json": {...}}]` format
4. **CRITICAL**: Webhook data is under `_json["body"]` (not `_json` directly)
5. **CRITICAL LIMITATION**: **No external libraries** (no requests, pandas, numpy)
6. **Standard library only**: json, datetime, re, base64, hashlib, urllib.parse, math, random, statistics
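
A short illustration of rule 4, using a simulated webhook item (the payload fields are invented for the example; inside a real Code node, `_json` is provided by n8n):

```python
# Simulated webhook item as the Code node would see it
_json = {
    "headers": {"content-type": "application/json"},
    "params": {},
    "body": {"name": "Ada", "email": "ada@example.com"},
}

# Webhook fields live under "body", not at the top level of _json
name = _json["body"]["name"]   # correct
missing = _json.get("name")    # wrong level: returns None

result = [{"json": {"name": name}}]
```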

---

## Mode Selection Guide

Same as JavaScript - choose based on your use case:

### Run Once for All Items (Recommended - Default)

**Use this mode for:** 95% of use cases

- **How it works**: Code executes **once** regardless of input count
- **Data access**: `_input.all()` or `_items` array (Native mode)
- **Best for**: Aggregation, filtering, batch processing, transformations
- **Performance**: Faster for multiple items (single execution)

```python
# Example: Calculate total from all items
all_items = _input.all()
total = sum(item["json"].get("amount", 0) for item in all_items)

return [{
    "json": {
        "total": total,
        "count": len(all_items),
        "average": total / len(all_items) if all_items else 0
    }
}]
```

### Run Once for Each Item

**Use this mode for:** Specialized cases only

- **How it works**: Code executes **separately** for each input item
- **Data access**: `_input.item` or `_item` (Native mode)
- **Best for**: Item-specific logic, independent operations, per-item validation
- **Performance**: Slower for large datasets (multiple executions)

```python
# Example: Add processing timestamp to each item
from datetime import datetime

item = _input.item

return [{
    "json": {
        **item["json"],
        "processed": True,
        "processed_at": datetime.now().isoformat()
    }
}]
```
---
|
||||
|
||||
## Python Modes: Beta vs Native

n8n offers two Python execution modes:

### Python (Beta) - Recommended
- **Use**: `_input`, `_json`, `_node` helper syntax
- **Best for**: Most Python use cases
- **Helpers available**: `_now`, `_today`, `_jmespath()`
- **Import**: `from datetime import datetime`

```python
# Python (Beta) example
items = _input.all()
now = _now  # Built-in datetime object

return [{
    "json": {
        "count": len(items),
        "timestamp": now.isoformat()
    }
}]
```

### Python (Native) (Beta)
- **Use**: `_items`, `_item` variables only
- **No helpers**: No `_input`, `_now`, etc.
- **More limited**: Standard Python only
- **Use when**: Need pure Python without n8n helpers

```python
# Python (Native) example
processed = []

for item in _items:
    processed.append({
        "json": {
            "id": item["json"].get("id"),
            "processed": True
        }
    })

return processed
```

**Recommendation**: Use **Python (Beta)** for better n8n integration.

---
## Data Access Patterns

### Pattern 1: _input.all() - Most Common

**Use when**: Processing arrays, batch operations, aggregations

```python
# Get all items from previous node
all_items = _input.all()

# Filter, transform as needed
valid = [item for item in all_items if item["json"].get("status") == "active"]

processed = []
for item in valid:
    processed.append({
        "json": {
            "id": item["json"]["id"],
            "name": item["json"]["name"]
        }
    })

return processed
```
### Pattern 2: _input.first() - Very Common

**Use when**: Working with single objects, API responses

```python
# Get first item only
from datetime import datetime

first_item = _input.first()
data = first_item["json"]

return [{
    "json": {
        "result": process_data(data),  # process_data: your own transformation
        "processed_at": datetime.now().isoformat()
    }
}]
```
### Pattern 3: _input.item - Each Item Mode Only

**Use when**: In "Run Once for Each Item" mode

```python
# Current item in loop (Each Item mode only)
current_item = _input.item

return [{
    "json": {
        **current_item["json"],
        "item_processed": True
    }
}]
```

### Pattern 4: _node - Reference Other Nodes

**Use when**: Need data from specific nodes in workflow

```python
# Get output from specific node
webhook_data = _node["Webhook"]["json"]
http_data = _node["HTTP Request"]["json"]

return [{
    "json": {
        "combined": {
            "webhook": webhook_data,
            "api": http_data
        }
    }
}]
```

**See**: [DATA_ACCESS.md](DATA_ACCESS.md) for comprehensive guide

---
## Critical: Webhook Data Structure

**MOST COMMON MISTAKE**: Webhook data is nested under `["body"]`

```python
# ❌ WRONG - Will raise KeyError
name = _json["name"]
email = _json["email"]

# ✅ CORRECT - Webhook data is under ["body"]
name = _json["body"]["name"]
email = _json["body"]["email"]

# ✅ SAFER - Use .get() for safe access
webhook_data = _json.get("body", {})
name = webhook_data.get("name")
```
**Why**: The Webhook node nests the request payload under the `body` property: POST form data and JSON payloads land there, while headers and query parameters arrive under separate keys.

**See**: [DATA_ACCESS.md](DATA_ACCESS.md) for full webhook structure details

---
## Return Format Requirements

**CRITICAL RULE**: Always return list of dictionaries with `"json"` key

### Correct Return Formats

```python
# ✅ Single result
return [{
    "json": {
        "field1": value1,
        "field2": value2
    }
}]

# ✅ Multiple results
return [
    {"json": {"id": 1, "data": "first"}},
    {"json": {"id": 2, "data": "second"}}
]

# ✅ List comprehension
transformed = [
    {"json": {"id": item["json"]["id"], "processed": True}}
    for item in _input.all()
    if item["json"].get("valid")
]
return transformed

# ✅ Empty result (when no data to return)
return []

# ✅ Conditional return
if should_process:
    return [{"json": processed_data}]
else:
    return []
```
### Incorrect Return Formats

```python
# ❌ WRONG: Dictionary without list wrapper
return {
    "json": {"field": value}
}

# ❌ WRONG: List without json wrapper
return [{"field": value}]

# ❌ WRONG: Plain string
return "processed"

# ❌ WRONG: Incomplete structure
return [{"data": value}]  # Should be {"json": value}
```

**Why it matters**: Downstream nodes expect the list format; returning anything else causes the workflow execution to fail.

**See**: [ERROR_PATTERNS.md](ERROR_PATTERNS.md) #2 for detailed error solutions

---
## Critical Limitation: No External Libraries

**MOST IMPORTANT PYTHON LIMITATION**: Cannot import external packages

### What's NOT Available

```python
# ❌ NOT AVAILABLE - Will raise ModuleNotFoundError
import requests                # ❌ No
import pandas                  # ❌ No
import numpy                   # ❌ No
import scipy                   # ❌ No
from bs4 import BeautifulSoup  # ❌ No
import lxml                    # ❌ No
```

### What IS Available (Standard Library)

```python
# ✅ AVAILABLE - Standard library only
import json          # ✅ JSON parsing
import datetime      # ✅ Date/time operations
import re            # ✅ Regular expressions
import base64        # ✅ Base64 encoding/decoding
import hashlib       # ✅ Hashing functions
import urllib.parse  # ✅ URL parsing
import math          # ✅ Math functions
import random        # ✅ Random numbers
import statistics    # ✅ Statistical functions
```
### Workarounds

**Need HTTP requests?**
- ✅ Use **HTTP Request node** before Code node
- ✅ Or switch to **JavaScript** and use `$helpers.httpRequest()`

**Need data analysis (pandas/numpy)?**
- ✅ Use Python **statistics** module for basic stats
- ✅ Or switch to **JavaScript** for most operations
- ✅ Manual calculations with lists and dictionaries

**Need web scraping (BeautifulSoup)?**
- ✅ Use **HTTP Request node** + **HTML Extract node**
- ✅ Or switch to **JavaScript** with regex/string methods
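As a sketch of the "manual calculations" workaround, a pandas-style group-by-sum needs only a plain dictionary; the input items below are hypothetical, shaped like `_input.all()` output:

```python
# Hypothetical input items, shaped like _input.all() output
items = [
    {"json": {"region": "EU", "amount": 10}},
    {"json": {"region": "US", "amount": 5}},
    {"json": {"region": "EU", "amount": 7}},
]

# Group-by-sum with a plain dict (no pandas needed)
totals = {}
for item in items:
    region = item["json"].get("region", "unknown")
    totals[region] = totals.get(region, 0) + item["json"].get("amount", 0)

# One output item per group, in the required {"json": ...} shape
grouped = [{"json": {"region": r, "total": t}} for r, t in totals.items()]
# In the Code node, end with: return grouped
```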
**See**: [STANDARD_LIBRARY.md](STANDARD_LIBRARY.md) for complete reference

---
## Common Patterns Overview

Based on production workflows, here are the most useful Python patterns:

### 1. Data Transformation
Transform all items with list comprehensions

```python
items = _input.all()

return [
    {
        "json": {
            "id": item["json"].get("id"),
            "name": item["json"].get("name", "Unknown").upper(),
            "processed": True
        }
    }
    for item in items
]
```

### 2. Filtering & Aggregation
Sum, filter, count with built-in functions

```python
items = _input.all()
total = sum(item["json"].get("amount", 0) for item in items)
valid_items = [item for item in items if item["json"].get("amount", 0) > 0]

return [{
    "json": {
        "total": total,
        "count": len(valid_items)
    }
}]
```
### 3. String Processing with Regex
Extract patterns from text

```python
import re

items = _input.all()
email_pattern = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b'

all_emails = []
for item in items:
    text = item["json"].get("text", "")
    emails = re.findall(email_pattern, text)
    all_emails.extend(emails)

# Remove duplicates
unique_emails = list(set(all_emails))

return [{
    "json": {
        "emails": unique_emails,
        "count": len(unique_emails)
    }
}]
```
### 4. Data Validation
Validate and clean data

```python
items = _input.all()
validated = []

for item in items:
    data = item["json"]
    errors = []

    # Validate fields
    if not data.get("email"):
        errors.append("Email required")
    if not data.get("name"):
        errors.append("Name required")

    validated.append({
        "json": {
            **data,
            "valid": len(errors) == 0,
            "errors": errors if errors else None
        }
    })

return validated
```
### 5. Statistical Analysis
Calculate statistics with statistics module

```python
from statistics import mean, median, stdev

items = _input.all()
values = [item["json"].get("value", 0) for item in items if "value" in item["json"]]

if values:
    return [{
        "json": {
            "mean": mean(values),
            "median": median(values),
            "stdev": stdev(values) if len(values) > 1 else 0,
            "min": min(values),
            "max": max(values),
            "count": len(values)
        }
    }]
else:
    return [{"json": {"error": "No values found"}}]
```

**See**: [COMMON_PATTERNS.md](COMMON_PATTERNS.md) for 10 detailed Python patterns

---
## Error Prevention - Top 5 Mistakes

### #1: Importing External Libraries (Python-Specific!)

```python
# ❌ WRONG: Trying to import external library
import requests  # ModuleNotFoundError!

# ✅ CORRECT: Use HTTP Request node or JavaScript
# Add HTTP Request node before Code node
# OR switch to JavaScript and use $helpers.httpRequest()
```

### #2: Empty Code or Missing Return

```python
# ❌ WRONG: No return statement
items = _input.all()
# Processing...
# Forgot to return!

# ✅ CORRECT: Always return data
items = _input.all()
# Processing...
return [{"json": item["json"]} for item in items]
```

### #3: Incorrect Return Format

```python
# ❌ WRONG: Returning dict instead of list
return {"json": {"result": "success"}}

# ✅ CORRECT: List wrapper required
return [{"json": {"result": "success"}}]
```

### #4: KeyError on Dictionary Access

```python
# ❌ WRONG: Direct access crashes if missing
name = _json["user"]["name"]  # KeyError!

# ✅ CORRECT: Use .get() for safe access
name = _json.get("user", {}).get("name", "Unknown")
```

### #5: Webhook Body Nesting

```python
# ❌ WRONG: Direct access to webhook data
email = _json["email"]  # KeyError!

# ✅ CORRECT: Webhook data under ["body"]
email = _json["body"]["email"]

# ✅ BETTER: Safe access with .get()
email = _json.get("body", {}).get("email", "no-email")
```

**See**: [ERROR_PATTERNS.md](ERROR_PATTERNS.md) for comprehensive error guide

---
## Standard Library Reference

### Most Useful Modules

```python
# JSON operations
import json
data = json.loads(json_string)
json_output = json.dumps({"key": "value"})

# Date/time
from datetime import datetime, timedelta
now = datetime.now()
tomorrow = now + timedelta(days=1)
formatted = now.strftime("%Y-%m-%d")

# Regular expressions
import re
matches = re.findall(r'\d+', text)
cleaned = re.sub(r'[^\w\s]', '', text)

# Base64 encoding (input must be bytes)
import base64
encoded = base64.b64encode(data).decode()
decoded = base64.b64decode(encoded)

# Hashing
import hashlib
hash_value = hashlib.sha256(text.encode()).hexdigest()

# URL parsing
import urllib.parse
params = urllib.parse.urlencode({"key": "value"})
parsed = urllib.parse.urlparse(url)

# Statistics
from statistics import mean, median, stdev
average = mean([1, 2, 3, 4, 5])
```

**See**: [STANDARD_LIBRARY.md](STANDARD_LIBRARY.md) for complete reference

---
## Best Practices

### 1. Always Use .get() for Dictionary Access

```python
# ✅ SAFE: Won't crash if field missing
value = item["json"].get("field", "default")

# ❌ RISKY: Crashes if field doesn't exist
value = item["json"]["field"]
```

### 2. Handle None/Null Values Explicitly

```python
# ✅ GOOD: Default to 0 if None
amount = item["json"].get("amount") or 0

# ✅ GOOD: Check for None explicitly
text = item["json"].get("text")
if text is None:
    text = ""
```

### 3. Use List Comprehensions for Filtering

```python
# ✅ PYTHONIC: List comprehension
valid = [item for item in items if item["json"].get("active")]

# ❌ VERBOSE: Manual loop
valid = []
for item in items:
    if item["json"].get("active"):
        valid.append(item)
```

### 4. Return Consistent Structure

```python
# ✅ CONSISTENT: Always list with "json" key
return [{"json": result}]  # Single result
return results             # Multiple results (already formatted)
return []                  # No results
```

### 5. Debug with print() Statements

```python
# Debug output appears in the browser console (F12)
items = _input.all()
print(f"Processing {len(items)} items")
print(f"First item: {items[0] if items else 'None'}")
```

---
## When to Use Python vs JavaScript

### Use Python When:
- ✅ You need `statistics` module for statistical operations
- ✅ You're significantly more comfortable with Python syntax
- ✅ Your logic maps well to list comprehensions
- ✅ You need specific standard library functions

### Use JavaScript When:
- ✅ You need HTTP requests (`$helpers.httpRequest()`)
- ✅ You need advanced date/time (DateTime/Luxon)
- ✅ You want better n8n integration
- ✅ **For 95% of use cases** (recommended)

### Consider Other Nodes When:
- ❌ Simple field mapping → Use **Set** node
- ❌ Basic filtering → Use **Filter** node
- ❌ Simple conditionals → Use **IF** or **Switch** node
- ❌ HTTP requests only → Use **HTTP Request** node

---

## Integration with Other Skills

### Works With:

**n8n Expression Syntax**:
- Expressions use `{{ }}` syntax in other nodes
- Code nodes use Python directly (no `{{ }}`)
- When to use expressions vs code

**n8n MCP Tools Expert**:
- How to find Code node: `search_nodes({query: "code"})`
- Get configuration help: `get_node_essentials("nodes-base.code")`
- Validate code: `validate_node_operation()`

**n8n Node Configuration**:
- Mode selection (All Items vs Each Item)
- Language selection (Python vs JavaScript)
- Understanding property dependencies

**n8n Workflow Patterns**:
- Code nodes in transformation step
- When to use Python vs JavaScript in patterns

**n8n Validation Expert**:
- Validate Code node configuration
- Handle validation errors
- Auto-fix common issues

**n8n Code JavaScript**:
- When to use JavaScript instead
- Comparison of JavaScript vs Python features
- Migration from Python to JavaScript

---

## Quick Reference Checklist

Before deploying Python Code nodes, verify:

- [ ] **Considered JavaScript first** - Using Python only when necessary
- [ ] **Code is not empty** - Must have meaningful logic
- [ ] **Return statement exists** - Must return list of dictionaries
- [ ] **Proper return format** - Each item: `{"json": {...}}`
- [ ] **Data access correct** - Using `_input.all()`, `_input.first()`, or `_input.item`
- [ ] **No external imports** - Only standard library (json, datetime, re, etc.)
- [ ] **Safe dictionary access** - Using `.get()` to avoid KeyError
- [ ] **Webhook data** - Access via `["body"]` if from webhook
- [ ] **Mode selection** - "All Items" for most cases
- [ ] **Output consistent** - All code paths return same structure
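The checklist compresses into a short template; a sketch that satisfies every item, with `_json` stubbed so it runs outside n8n:

```python
from datetime import datetime  # standard library only, no external imports

# Stub of the n8n-injected webhook variable, for illustration
_json = {"body": {"email": "user@example.com"}}

# Safe access with .get(); webhook payload lives under ["body"]
email = _json.get("body", {}).get("email", "no-email")

# All code paths return the same structure: a list of {"json": ...} dicts
if email != "no-email":
    result = [{"json": {"email": email, "processed_at": datetime.now().isoformat()}}]
else:
    result = []
# In the Code node, end with: return result
```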
---

## Additional Resources

### Related Files
- [DATA_ACCESS.md](DATA_ACCESS.md) - Comprehensive Python data access patterns
- [COMMON_PATTERNS.md](COMMON_PATTERNS.md) - 10 Python patterns for n8n
- [ERROR_PATTERNS.md](ERROR_PATTERNS.md) - Top 5 errors and solutions
- [STANDARD_LIBRARY.md](STANDARD_LIBRARY.md) - Complete standard library reference

### n8n Documentation
- Code Node Guide: https://docs.n8n.io/code/code-node/
- Python in n8n: https://docs.n8n.io/code/builtin/python-modules/

---

**Ready to write Python in n8n Code nodes - but consider JavaScript first!** Use Python for specific needs, reference the error patterns guide to avoid common mistakes, and leverage the standard library effectively.
654
skills/n8n-mcp-tools-expert/SKILL.md
Normal file
@@ -0,0 +1,654 @@
---
name: n8n-mcp-tools-expert
description: "Expert guide for using n8n-mcp MCP tools effectively. Use when searching for nodes, validating configurations, accessing templates, managing workflows, or using any n8n-mcp tool. Provides tool selection guidance, parameter formats, and common patterns."
source: "https://github.com/czlonkowski/n8n-skills/tree/main/skills/n8n-mcp-tools-expert"
risk: safe
---
# n8n MCP Tools Expert

Master guide for using n8n-mcp MCP server tools to build workflows.

## When to Use This Skill

Use this skill when:
- Searching for n8n nodes
- Validating n8n configurations
- Accessing n8n templates
- Managing n8n workflows
- Using any n8n-mcp tool
- Needing guidance on tool selection or parameter formats

---
## Tool Categories

n8n-mcp provides tools organized into categories:

1. **Node Discovery** → [SEARCH_GUIDE.md](SEARCH_GUIDE.md)
2. **Configuration Validation** → [VALIDATION_GUIDE.md](VALIDATION_GUIDE.md)
3. **Workflow Management** → [WORKFLOW_GUIDE.md](WORKFLOW_GUIDE.md)
4. **Template Library** - Search and deploy 2,700+ real workflows
5. **Documentation & Guides** - Tool docs, AI agent guide, Code node guides
---

## Quick Reference

### Most Used Tools (by success rate)

| Tool | Use When | Speed |
|------|----------|-------|
| `search_nodes` | Finding nodes by keyword | <20ms |
| `get_node` | Understanding node operations (detail="standard") | <10ms |
| `validate_node` | Checking configurations (mode="full") | <100ms |
| `n8n_create_workflow` | Creating workflows | 100-500ms |
| `n8n_update_partial_workflow` | Editing workflows (MOST USED!) | 50-200ms |
| `validate_workflow` | Checking complete workflow | 100-500ms |
| `n8n_deploy_template` | Deploy template to n8n instance | 200-500ms |
---

## Tool Selection Guide

### Finding the Right Node

**Workflow**:
```
1. search_nodes({query: "keyword"})
2. get_node({nodeType: "nodes-base.name"})
3. [Optional] get_node({nodeType: "nodes-base.name", mode: "docs"})
```

**Example**:
```javascript
// Step 1: Search
search_nodes({query: "slack"})
// Returns: nodes-base.slack

// Step 2: Get details
get_node({nodeType: "nodes-base.slack"})
// Returns: operations, properties, examples (standard detail)

// Step 3: Get readable documentation
get_node({nodeType: "nodes-base.slack", mode: "docs"})
// Returns: markdown documentation
```

**Common pattern**: search → get_node (18s average)
### Validating Configuration

**Workflow**:
```
1. validate_node({nodeType, config: {}, mode: "minimal"}) - Check required fields
2. validate_node({nodeType, config, profile: "runtime"}) - Full validation
3. [Repeat] Fix errors, validate again
```

**Common pattern**: validate → fix → validate (23s thinking, 58s fixing per cycle)

### Managing Workflows

**Workflow**:
```
1. n8n_create_workflow({name, nodes, connections})
2. n8n_validate_workflow({id})
3. n8n_update_partial_workflow({id, operations: [...]})
4. n8n_validate_workflow({id}) again
5. n8n_update_partial_workflow({id, operations: [{type: "activateWorkflow"}]})
```

**Common pattern**: iterative updates (56s average between edits)

---
## Critical: nodeType Formats

**Two different formats** for different tools!

### Format 1: Search/Validate Tools
```javascript
// Use SHORT prefix
"nodes-base.slack"
"nodes-base.httpRequest"
"nodes-base.webhook"
"nodes-langchain.agent"
```

**Tools that use this**:
- search_nodes (returns this format)
- get_node
- validate_node
- validate_workflow

### Format 2: Workflow Tools
```javascript
// Use FULL prefix
"n8n-nodes-base.slack"
"n8n-nodes-base.httpRequest"
"n8n-nodes-base.webhook"
"@n8n/n8n-nodes-langchain.agent"
```

**Tools that use this**:
- n8n_create_workflow
- n8n_update_partial_workflow

### Conversion

```javascript
// search_nodes returns BOTH formats
{
  "nodeType": "nodes-base.slack",            // For search/validate tools
  "workflowNodeType": "n8n-nodes-base.slack" // For workflow tools
}
```
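When you only have the short form, the mapping to the workflow form is mechanical. A small helper (hypothetical, not part of n8n-mcp — prefer the `workflowNodeType` field when you have the search result in hand) based on the prefixes shown above:

```javascript
// Convert a search/validate nodeType to the workflow-tool format.
// Hypothetical helper for illustration only.
function toWorkflowNodeType(nodeType) {
  if (nodeType.startsWith("nodes-langchain.")) {
    return "@n8n/n8n-" + nodeType;  // "@n8n/n8n-nodes-langchain.agent"
  }
  if (nodeType.startsWith("nodes-base.")) {
    return "n8n-" + nodeType;       // "n8n-nodes-base.slack"
  }
  return nodeType;                  // already in full form
}
```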
---

## Common Mistakes

### Mistake 1: Wrong nodeType Format

**Problem**: "Node not found" error

```javascript
// WRONG
get_node({nodeType: "slack"})                 // Missing prefix
get_node({nodeType: "n8n-nodes-base.slack"})  // Wrong prefix

// CORRECT
get_node({nodeType: "nodes-base.slack"})
```
### Mistake 2: Using detail="full" by Default

**Problem**: Huge payload, slower response, token waste

```javascript
// WRONG - Returns 3-8K tokens, use sparingly
get_node({nodeType: "nodes-base.slack", detail: "full"})

// CORRECT - Returns 1-2K tokens, covers 95% of use cases
get_node({nodeType: "nodes-base.slack"})  // detail="standard" is default
get_node({nodeType: "nodes-base.slack", detail: "standard"})
```

**When to use detail="full"**:
- Debugging complex configuration issues
- Need complete property schema with all nested options
- Exploring advanced features

**Better alternatives**:
1. `get_node({detail: "standard"})` - for operations list (default)
2. `get_node({mode: "docs"})` - for readable documentation
3. `get_node({mode: "search_properties", propertyQuery: "auth"})` - for specific property
### Mistake 3: Not Using Validation Profiles

**Problem**: Too many false positives OR missing real errors

**Profiles**:
- `minimal` - Only required fields (fast, permissive)
- `runtime` - Values + types (recommended for pre-deployment)
- `ai-friendly` - Reduce false positives (for AI configuration)
- `strict` - Maximum validation (for production)

```javascript
// WRONG - Uses default profile
validate_node({nodeType, config})

// CORRECT - Explicit profile
validate_node({nodeType, config, profile: "runtime"})
```
### Mistake 4: Ignoring Auto-Sanitization

**What happens**: ALL nodes sanitized on ANY workflow update

**Auto-fixes**:
- Binary operators (equals, contains) → removes singleValue
- Unary operators (isEmpty, isNotEmpty) → adds singleValue: true
- IF/Switch nodes → adds missing metadata

**Cannot fix**:
- Broken connections
- Branch count mismatches
- Paradoxical corrupt states

```javascript
// After ANY update, auto-sanitization runs on ALL nodes
n8n_update_partial_workflow({id, operations: [...]})
// → Automatically fixes operator structures
```
### Mistake 5: Not Using Smart Parameters

**Problem**: Complex sourceIndex calculations for multi-output nodes

**Old way** (manual):
```javascript
// IF node connection
{
  type: "addConnection",
  source: "IF",
  target: "Handler",
  sourceIndex: 0  // Which output? Hard to remember!
}
```

**New way** (smart parameters):
```javascript
// IF node - semantic branch names
{
  type: "addConnection",
  source: "IF",
  target: "True Handler",
  branch: "true"  // Clear and readable!
}

{
  type: "addConnection",
  source: "IF",
  target: "False Handler",
  branch: "false"
}

// Switch node - semantic case numbers
{
  type: "addConnection",
  source: "Switch",
  target: "Handler A",
  case: 0
}
```
### Mistake 6: Not Using intent Parameter

**Problem**: Less helpful tool responses

```javascript
// WRONG - No context for response
n8n_update_partial_workflow({
  id: "abc",
  operations: [{type: "addNode", node: {...}}]
})

// CORRECT - Better AI responses
n8n_update_partial_workflow({
  id: "abc",
  intent: "Add error handling for API failures",
  operations: [{type: "addNode", node: {...}}]
})
```
---

## Tool Usage Patterns

### Pattern 1: Node Discovery (Most Common)

**Common workflow**: 18s average between steps

```javascript
// Step 1: Search (fast!)
const results = await search_nodes({
  query: "slack",
  mode: "OR",  // Default: any word matches
  limit: 20
});
// → Returns: nodes-base.slack, nodes-base.slackTrigger

// Step 2: Get details (~18s later, user reviewing results)
const details = await get_node({
  nodeType: "nodes-base.slack",
  includeExamples: true  // Get real template configs
});
// → Returns: operations, properties, metadata
```
### Pattern 2: Validation Loop

**Typical cycle**: 23s thinking, 58s fixing

```javascript
// Step 1: Validate
const result = await validate_node({
  nodeType: "nodes-base.slack",
  config: {
    resource: "channel",
    operation: "create"
  },
  profile: "runtime"
});

// Step 2: Check errors (~23s thinking)
if (!result.valid) {
  console.log(result.errors);  // "Missing required field: name"
}

// Step 3: Fix config (~58s fixing)
config.name = "general";

// Step 4: Validate again
await validate_node({...});  // Repeat until clean
```
### Pattern 3: Workflow Editing

**Most used update tool**: 99.0% success rate, 56s average between edits

```javascript
// Iterative workflow building (NOT one-shot!)
// Edit 1
await n8n_update_partial_workflow({
  id: "workflow-id",
  intent: "Add webhook trigger",
  operations: [{type: "addNode", node: {...}}]
});

// ~56s later...

// Edit 2
await n8n_update_partial_workflow({
  id: "workflow-id",
  intent: "Connect webhook to processor",
  operations: [{type: "addConnection", source: "...", target: "..."}]
});

// ~56s later...

// Edit 3 (validation)
await n8n_validate_workflow({id: "workflow-id"});

// Ready? Activate!
await n8n_update_partial_workflow({
  id: "workflow-id",
  intent: "Activate workflow for production",
  operations: [{type: "activateWorkflow"}]
});
```
---

## Detailed Guides

### Node Discovery Tools
See [SEARCH_GUIDE.md](SEARCH_GUIDE.md) for:
- search_nodes
- get_node with detail levels (minimal, standard, full)
- get_node modes (info, docs, search_properties, versions)

### Validation Tools
See [VALIDATION_GUIDE.md](VALIDATION_GUIDE.md) for:
- Validation profiles explained
- validate_node with modes (minimal, full)
- validate_workflow complete structure
- Auto-sanitization system
- Handling validation errors

### Workflow Management
See [WORKFLOW_GUIDE.md](WORKFLOW_GUIDE.md) for:
- n8n_create_workflow
- n8n_update_partial_workflow (17 operation types!)
- Smart parameters (branch, case)
- AI connection types (8 types)
- Workflow activation (activateWorkflow/deactivateWorkflow)
- n8n_deploy_template
- n8n_workflow_versions
---

## Template Usage

### Search Templates

```javascript
// Search by keyword (default mode)
search_templates({
  query: "webhook slack",
  limit: 20
});

// Search by node types
search_templates({
  searchMode: "by_nodes",
  nodeTypes: ["n8n-nodes-base.httpRequest", "n8n-nodes-base.slack"]
});

// Search by task type
search_templates({
  searchMode: "by_task",
  task: "webhook_processing"
});

// Search by metadata (complexity, setup time)
search_templates({
  searchMode: "by_metadata",
  complexity: "simple",
  maxSetupMinutes: 15
});
```
### Get Template Details
|
||||
|
||||
```javascript
|
||||
get_template({
|
||||
templateId: 2947,
|
||||
mode: "structure" // nodes+connections only
|
||||
});
|
||||
|
||||
get_template({
|
||||
templateId: 2947,
|
||||
mode: "full" // complete workflow JSON
|
||||
});
|
||||
```
|
||||
|
||||
### Deploy Template Directly
|
||||
|
||||
```javascript
|
||||
// Deploy template to your n8n instance
|
||||
n8n_deploy_template({
|
||||
templateId: 2947,
|
||||
name: "My Weather to Slack", // Custom name (optional)
|
||||
autoFix: true, // Auto-fix common issues (default)
|
||||
autoUpgradeVersions: true // Upgrade node versions (default)
|
||||
});
|
||||
// Returns: workflow ID, required credentials, fixes applied
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Self-Help Tools
|
||||
|
||||
### Get Tool Documentation
|
||||
|
||||
```javascript
|
||||
// Overview of all tools
|
||||
tools_documentation()
|
||||
|
||||
// Specific tool details
|
||||
tools_documentation({
|
||||
topic: "search_nodes",
|
||||
depth: "full"
|
||||
})
|
||||
|
||||
// Code node guides
|
||||
tools_documentation({topic: "javascript_code_node_guide", depth: "full"})
|
||||
tools_documentation({topic: "python_code_node_guide", depth: "full"})
|
||||
```
|
||||
|
||||
### AI Agent Guide
|
||||
|
||||
```javascript
|
||||
// Comprehensive AI workflow guide
|
||||
ai_agents_guide()
|
||||
// Returns: Architecture, connections, tools, validation, best practices
|
||||
```
|
||||
|
||||
### Health Check
|
||||
|
||||
```javascript
|
||||
// Quick health check
|
||||
n8n_health_check()
|
||||
|
||||
// Detailed diagnostics
|
||||
n8n_health_check({mode: "diagnostic"})
|
||||
// → Returns: status, env vars, tool status, API connectivity
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Tool Availability
|
||||
|
||||
**Always Available** (no n8n API needed):
|
||||
- search_nodes, get_node
|
||||
- validate_node, validate_workflow
|
||||
- search_templates, get_template
|
||||
- tools_documentation, ai_agents_guide
|
||||
|
||||
**Requires n8n API** (N8N_API_URL + N8N_API_KEY):
|
||||
- n8n_create_workflow
|
||||
- n8n_update_partial_workflow
|
||||
- n8n_validate_workflow (by ID)
|
||||
- n8n_list_workflows, n8n_get_workflow
|
||||
- n8n_test_workflow
|
||||
- n8n_executions
|
||||
- n8n_deploy_template
|
||||
- n8n_workflow_versions
|
||||
- n8n_autofix_workflow
|
||||
|
||||
If API tools unavailable, use templates and validation-only workflows.
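The fallback rule above can be expressed as a tiny availability check. This is a sketch: only the environment variable names come from this document; the function and tier labels are illustrative.

```javascript
// Pick a tool tier from the environment.
// N8N_API_URL / N8N_API_KEY are the env vars named above;
// the tier labels are made up for this sketch.
function availableTier(env) {
  const hasApi = Boolean(env.N8N_API_URL && env.N8N_API_KEY);
  return hasApi ? "full" : "templates-and-validation-only";
}
```

With both variables set, the management tools apply; otherwise stay on the always-available discovery, template, and validation tools.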

---

## Unified Tool Reference

### get_node (Unified Node Information)

**Detail Levels** (mode="info", default):
- `minimal` (~200 tokens) - Basic metadata only
- `standard` (~1-2K tokens) - Essential properties + operations (RECOMMENDED)
- `full` (~3-8K tokens) - Complete schema (use sparingly)

**Operation Modes**:
- `info` (default) - Node schema at the chosen detail level
- `docs` - Readable markdown documentation
- `search_properties` - Find specific properties (use with propertyQuery)
- `versions` - List all versions with breaking changes
- `compare` - Compare two versions
- `breaking` - Show only breaking changes
- `migrations` - Show auto-migratable changes

```javascript
// Standard (recommended)
get_node({nodeType: "nodes-base.httpRequest"})

// Get documentation
get_node({nodeType: "nodes-base.webhook", mode: "docs"})

// Search for properties
get_node({nodeType: "nodes-base.httpRequest", mode: "search_properties", propertyQuery: "auth"})

// Check versions
get_node({nodeType: "nodes-base.executeWorkflow", mode: "versions"})
```

### validate_node (Unified Validation)

**Modes**:
- `full` (default) - Comprehensive validation with errors/warnings/suggestions
- `minimal` - Quick required-fields check only

**Profiles** (for mode="full"):
- `minimal` - Very lenient
- `runtime` - Standard (default, recommended)
- `ai-friendly` - Balanced for AI workflows
- `strict` - Most thorough (production)

```javascript
// Full validation with the runtime profile
validate_node({nodeType: "nodes-base.slack", config: {...}, profile: "runtime"})

// Quick required-fields check
validate_node({nodeType: "nodes-base.webhook", config: {}, mode: "minimal"})
```

---

## Performance Characteristics

| Tool | Response Time | Payload Size |
|------|---------------|--------------|
| search_nodes | <20ms | Small |
| get_node (standard) | <10ms | ~1-2KB |
| get_node (full) | <100ms | 3-8KB |
| validate_node (minimal) | <50ms | Small |
| validate_node (full) | <100ms | Medium |
| validate_workflow | 100-500ms | Medium |
| n8n_create_workflow | 100-500ms | Medium |
| n8n_update_partial_workflow | 50-200ms | Small |
| n8n_deploy_template | 200-500ms | Medium |

---

## Best Practices

### Do
- Use `get_node({detail: "standard"})` for most use cases
- Specify the validation profile explicitly (`profile: "runtime"`)
- Use smart parameters (`branch`, `case`) for clarity
- Include the `intent` parameter in workflow updates
- Follow the search → get_node → validate workflow
- Iterate workflows (avg 56s between edits)
- Validate after every significant change
- Use `includeExamples: true` for real configs
- Use `n8n_deploy_template` for quick starts

### Don't
- Use `detail: "full"` unless necessary (it wastes tokens)
- Forget the nodeType prefix (`nodes-base.*`)
- Skip validation profiles
- Try to build workflows in one shot (iterate!)
- Ignore auto-sanitization behavior
- Use the full prefix (`n8n-nodes-base.*`) with search/validate tools
- Forget to activate workflows after building

---

## Summary

**Most Important**:
1. Use **get_node** with `detail: "standard"` (default) - covers 95% of use cases
2. nodeType formats differ: `nodes-base.*` (search/validate) vs `n8n-nodes-base.*` (workflows)
3. Specify **validation profiles** (`runtime` recommended)
4. Use **smart parameters** (`branch="true"`, `case=0`)
5. Include the **intent** parameter in workflow updates
6. **Auto-sanitization** runs on ALL nodes during updates
7. Workflows can be **activated via API** (the `activateWorkflow` operation)
8. Workflows are built **iteratively** (56s avg between edits)
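Point 2's prefix difference is easy to get wrong; a helper like the following keeps the two forms straight. This is an illustrative sketch, not part of the MCP toolset.

```javascript
// Short form ("nodes-base.*") is used by search/validate tools;
// the full form ("n8n-nodes-base.*") appears inside workflow JSON.
function toWorkflowType(shortType) {
  return shortType.startsWith("n8n-") ? shortType : "n8n-" + shortType;
}

function toSearchType(fullType) {
  return fullType.startsWith("n8n-") ? fullType.slice("n8n-".length) : fullType;
}
```

Both functions are idempotent, so passing an already-converted type is harmless.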

**Common Workflow**:
1. search_nodes → find node
2. get_node → understand config
3. validate_node → check config
4. n8n_create_workflow → build
5. n8n_validate_workflow → verify
6. n8n_update_partial_workflow → iterate
7. activateWorkflow → go live!

For details, see:
- [SEARCH_GUIDE.md](SEARCH_GUIDE.md) - Node discovery
- [VALIDATION_GUIDE.md](VALIDATION_GUIDE.md) - Configuration validation
- [WORKFLOW_GUIDE.md](WORKFLOW_GUIDE.md) - Workflow management

---

**Related Skills**:
- n8n Expression Syntax - Write expressions in workflow fields
- n8n Workflow Patterns - Architectural patterns from templates
- n8n Validation Expert - Interpret validation errors
- n8n Node Configuration - Operation-specific requirements
- n8n Code JavaScript - Write JavaScript in Code nodes
- n8n Code Python - Write Python in Code nodes
796
skills/n8n-node-configuration/SKILL.md
Normal file
@@ -0,0 +1,796 @@

---
name: n8n-node-configuration
description: "Operation-aware node configuration guidance. Use when configuring nodes, understanding property dependencies, determining required fields, choosing between get_node detail levels, or learning common configuration patterns by node type."
source: "https://github.com/czlonkowski/n8n-skills/tree/main/skills/n8n-node-configuration"
risk: safe
---

# n8n Node Configuration

Expert guidance for operation-aware node configuration with property dependencies.

## When to Use This Skill

Use this skill when:
- Configuring n8n nodes
- Understanding property dependencies
- Determining required fields
- Choosing between get_node detail levels
- Learning common configuration patterns by node type

---

## Configuration Philosophy

**Progressive disclosure**: Start minimal, add complexity as needed

Configuration best practices:
- `get_node` with `detail: "standard"` is the most used discovery pattern
- 56 seconds average between configuration edits
- Covers 95% of use cases with a 1-2K token response

**Key insight**: Most configurations need only standard detail, not the full schema!

---

## Core Concepts

### 1. Operation-Aware Configuration

**Not all fields are always required** - it depends on the operation!

**Example**: Slack node
```javascript
// For operation='post'
{
  "resource": "message",
  "operation": "post",
  "channel": "#general", // Required for post
  "text": "Hello!" // Required for post
}

// For operation='update'
{
  "resource": "message",
  "operation": "update",
  "messageId": "123", // Required for update (different!)
  "text": "Updated!" // Required for update
  // channel NOT required for update
}
```

**Key**: Resource + operation determine which fields are required!

### 2. Property Dependencies

**Fields appear/disappear based on other field values**

**Example**: HTTP Request node
```javascript
// When method='GET'
{
  "method": "GET",
  "url": "https://api.example.com"
  // sendBody not shown (GET doesn't have a body)
}

// When method='POST'
{
  "method": "POST",
  "url": "https://api.example.com",
  "sendBody": true, // Now visible!
  "body": { // Required when sendBody=true
    "contentType": "json",
    "content": {...}
  }
}
```

**Mechanism**: displayOptions control field visibility

### 3. Progressive Discovery

**Use the right detail level**:

1. **get_node({detail: "standard"})** - DEFAULT
   - Quick overview (~1-2K tokens)
   - Required fields + common options
   - **Use first** - covers 95% of needs

2. **get_node({mode: "search_properties", propertyQuery: "..."})** - for finding specific fields
   - Find properties by name
   - Use when looking for auth, body, headers, etc.

3. **get_node({detail: "full"})** - complete schema
   - All properties (~3-8K tokens)
   - Use only when standard detail is insufficient

---

## Configuration Workflow

### Standard Process

```
1. Identify node type and operation
   ↓
2. Use get_node (standard detail is the default)
   ↓
3. Configure required fields
   ↓
4. Validate the configuration
   ↓
5. If a field is unclear → get_node({mode: "search_properties"})
   ↓
6. Add optional fields as needed
   ↓
7. Validate again
   ↓
8. Deploy
```

### Example: Configuring HTTP Request

**Step 1**: Identify what you need
```javascript
// Goal: POST JSON to an API
```

**Step 2**: Get node info
```javascript
const info = get_node({
  nodeType: "nodes-base.httpRequest"
});

// Returns: method, url, sendBody, body, authentication required/optional
```

**Step 3**: Minimal config
```javascript
{
  "method": "POST",
  "url": "https://api.example.com/create",
  "authentication": "none"
}
```

**Step 4**: Validate
```javascript
validate_node({
  nodeType: "nodes-base.httpRequest",
  config,
  profile: "runtime"
});
// → Error: "sendBody required for POST"
```

**Step 5**: Add the required field
```javascript
{
  "method": "POST",
  "url": "https://api.example.com/create",
  "authentication": "none",
  "sendBody": true
}
```

**Step 6**: Validate again
```javascript
validate_node({...});
// → Error: "body required when sendBody=true"
```

**Step 7**: Complete the configuration
```javascript
{
  "method": "POST",
  "url": "https://api.example.com/create",
  "authentication": "none",
  "sendBody": true,
  "body": {
    "contentType": "json",
    "content": {
      "name": "={{$json.name}}",
      "email": "={{$json.email}}"
    }
  }
}
```

**Step 8**: Final validation
```javascript
validate_node({...});
// → Valid! ✅
```

---

## get_node Detail Levels

### Standard Detail (DEFAULT - Use This!)

**✅ Starting configuration**
```javascript
get_node({
  nodeType: "nodes-base.slack"
});
// detail="standard" is the default
```

**Returns** (~1-2K tokens):
- Required fields
- Common options
- Operation list
- Metadata

**Use**: 95% of configuration needs

### Full Detail (Use Sparingly)

**✅ When standard isn't enough**
```javascript
get_node({
  nodeType: "nodes-base.slack",
  detail: "full"
});
```

**Returns** (~3-8K tokens):
- Complete schema
- All properties
- All nested options

**Warning**: Large response; use only when standard detail is insufficient

### Search Properties Mode

**✅ Looking for a specific field**
```javascript
get_node({
  nodeType: "nodes-base.httpRequest",
  mode: "search_properties",
  propertyQuery: "auth"
});
```

**Use**: Find authentication, headers, body fields, etc.

### Decision Tree

```
┌─────────────────────────────────┐
│ Starting a new node config?     │
├─────────────────────────────────┤
│ YES → get_node (standard)       │
└─────────────────────────────────┘
                ↓
┌─────────────────────────────────┐
│ Standard has what you need?     │
├─────────────────────────────────┤
│ YES → Configure with it         │
│ NO  → Continue                  │
└─────────────────────────────────┘
                ↓
┌─────────────────────────────────┐
│ Looking for a specific field?   │
├─────────────────────────────────┤
│ YES → search_properties mode    │
│ NO  → Continue                  │
└─────────────────────────────────┘
                ↓
┌─────────────────────────────────┐
│ Still need more details?        │
├─────────────────────────────────┤
│ YES → get_node({detail: "full"})│
└─────────────────────────────────┘
```

---

## Property Dependencies Deep Dive

### displayOptions Mechanism

**Fields have visibility rules**:

```javascript
{
  "name": "body",
  "displayOptions": {
    "show": {
      "sendBody": [true],
      "method": ["POST", "PUT", "PATCH"]
    }
  }
}
```

**Translation**: the "body" field shows when:
- sendBody = true AND
- method = POST, PUT, or PATCH
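A minimal evaluator for such a `show` rule makes the mechanism concrete. The rule shape is taken from the snippet above; the evaluation function itself is an assumption, sketched for illustration rather than copied from n8n internals.

```javascript
// True when every field named in displayOptions.show matches the config.
function isVisible(displayOptions, config) {
  if (!displayOptions || !displayOptions.show) return true;
  return Object.entries(displayOptions.show).every(
    ([field, allowed]) => allowed.includes(config[field])
  );
}

// The "body" rule from the example above.
const bodyRule = {
  show: { sendBody: [true], method: ["POST", "PUT", "PATCH"] }
};
```

Under this rule, `{sendBody: true, method: "POST"}` shows the body field, while `{sendBody: true, method: "GET"}` hides it.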

### Common Dependency Patterns

#### Pattern 1: Boolean Toggle

**Example**: HTTP Request sendBody
```javascript
// sendBody controls body visibility
{
  "sendBody": true // → body field appears
}
```

#### Pattern 2: Operation Switch

**Example**: Slack resource/operation
```javascript
// Different operations → different fields
{
  "resource": "message",
  "operation": "post"
  // → Shows: channel, text, attachments, etc.
}

{
  "resource": "message",
  "operation": "update"
  // → Shows: messageId, text (different fields!)
}
```

#### Pattern 3: Type Selection

**Example**: IF node conditions
```javascript
{
  "type": "string",
  "operation": "contains"
  // → Shows: value1, value2
}

{
  "type": "boolean",
  "operation": "equals"
  // → Shows: value1, value2, different operators
}
```

### Finding Property Dependencies

**Use get_node with search_properties mode**:
```javascript
get_node({
  nodeType: "nodes-base.httpRequest",
  mode: "search_properties",
  propertyQuery: "body"
});

// Returns property paths matching "body" with descriptions
```

**Or use full detail for the complete schema**:
```javascript
get_node({
  nodeType: "nodes-base.httpRequest",
  detail: "full"
});

// Returns the complete schema with displayOptions rules
```

**Use this when**: Validation fails and you don't understand why a field is missing/required

---

## Common Node Patterns

### Pattern 1: Resource/Operation Nodes

**Examples**: Slack, Google Sheets, Airtable

**Structure**:
```javascript
{
  "resource": "<entity>", // What type of thing
  "operation": "<action>", // What to do with it
  // ... operation-specific fields
}
```

**How to configure**:
1. Choose a resource
2. Choose an operation
3. Use get_node to see operation-specific requirements
4. Configure required fields

### Pattern 2: HTTP-Based Nodes

**Examples**: HTTP Request, Webhook

**Structure**:
```javascript
{
  "method": "<HTTP_METHOD>",
  "url": "<endpoint>",
  "authentication": "<type>",
  // ... method-specific fields
}
```

**Dependencies**:
- POST/PUT/PATCH → sendBody available
- sendBody=true → body required
- authentication != "none" → credentials required

### Pattern 3: Database Nodes

**Examples**: Postgres, MySQL, MongoDB

**Structure**:
```javascript
{
  "operation": "<query|insert|update|delete>",
  // ... operation-specific fields
}
```

**Dependencies**:
- operation="executeQuery" → query required
- operation="insert" → table + values required
- operation="update" → table + values + where required
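The three dependency rules above can be tabulated. This sketch shows how a pre-flight check might use them; the table and function names are illustrative, not part of any n8n API.

```javascript
// operation → required fields, per the database-node rules above.
const REQUIRED_BY_OP = {
  executeQuery: ["query"],
  insert: ["table", "values"],
  update: ["table", "values", "where"],
};

// Return the required fields that the config is still missing.
function missingFields(operation, config) {
  return (REQUIRED_BY_OP[operation] || []).filter((f) => !(f in config));
}
```

Running this before validation catches the obvious gaps early; the real `validate_node` call remains the authoritative check.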

### Pattern 4: Conditional Logic Nodes

**Examples**: IF, Switch, Merge

**Structure**:
```javascript
{
  "conditions": {
    "<type>": [
      {
        "operation": "<operator>",
        "value1": "...",
        "value2": "..." // Only for binary operators
      }
    ]
  }
}
```

**Dependencies**:
- Binary operators (equals, contains, etc.) → value1 + value2
- Unary operators (isEmpty, isNotEmpty) → value1 only + singleValue: true

---

## Operation-Specific Configuration

### Slack Node Examples

#### Post Message
```javascript
{
  "resource": "message",
  "operation": "post",
  "channel": "#general", // Required
  "text": "Hello!", // Required
  "attachments": [], // Optional
  "blocks": [] // Optional
}
```

#### Update Message
```javascript
{
  "resource": "message",
  "operation": "update",
  "messageId": "1234567890", // Required (different from post!)
  "text": "Updated!", // Required
  "channel": "#general" // Optional (can be inferred)
}
```

#### Create Channel
```javascript
{
  "resource": "channel",
  "operation": "create",
  "name": "new-channel", // Required
  "isPrivate": false // Optional
  // Note: text NOT required for this operation
}
```

### HTTP Request Node Examples

#### GET Request
```javascript
{
  "method": "GET",
  "url": "https://api.example.com/users",
  "authentication": "predefinedCredentialType",
  "nodeCredentialType": "httpHeaderAuth",
  "sendQuery": true, // Optional
  "queryParameters": { // Shows when sendQuery=true
    "parameters": [
      {
        "name": "limit",
        "value": "100"
      }
    ]
  }
}
```

#### POST with JSON
```javascript
{
  "method": "POST",
  "url": "https://api.example.com/users",
  "authentication": "none",
  "sendBody": true, // Required for POST
  "body": { // Required when sendBody=true
    "contentType": "json",
    "content": {
      "name": "John Doe",
      "email": "john@example.com"
    }
  }
}
```

### IF Node Examples

#### String Comparison (Binary)
```javascript
{
  "conditions": {
    "string": [
      {
        "value1": "={{$json.status}}",
        "operation": "equals",
        "value2": "active" // Binary: needs value2
      }
    ]
  }
}
```

#### Empty Check (Unary)
```javascript
{
  "conditions": {
    "string": [
      {
        "value1": "={{$json.email}}",
        "operation": "isEmpty",
        // No value2 - unary operator
        "singleValue": true // Auto-added by sanitization
      }
    ]
  }
}
```

---

## Handling Conditional Requirements

### Example: HTTP Request Body

**Scenario**: the body field is required, but only sometimes

**Rule**:
```
body is required when:
- sendBody = true AND
- method IN (POST, PUT, PATCH, DELETE)
```

**How to discover**:
```javascript
// Option 1: Read the validation error
validate_node({...});
// Error: "body required when sendBody=true"

// Option 2: Search for the property
get_node({
  nodeType: "nodes-base.httpRequest",
  mode: "search_properties",
  propertyQuery: "body"
});
// Shows: body property with displayOptions rules

// Option 3: Try a minimal config and iterate
// Start without body; validation will tell you if it's needed
```

### Example: IF Node singleValue

**Scenario**: the singleValue property applies to unary operators

**Rule**:
```
singleValue should be true when:
- operation IN (isEmpty, isNotEmpty, true, false)
```
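The rule reduces to a set-membership test. The operator names come from the rule above; the helper itself is an illustrative sketch.

```javascript
// Unary operators take one value, so singleValue must be true for them
// (operator list from the rule above).
const UNARY_OPS = new Set(["isEmpty", "isNotEmpty", "true", "false"]);

function needsSingleValue(operation) {
  return UNARY_OPS.has(operation);
}
```

Since auto-sanitization already sets this flag on save, the check is mainly useful for reasoning about validation errors, not for patching configs by hand.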

**Good news**: Auto-sanitization fixes this!

**Manual check**:
```javascript
get_node({
  nodeType: "nodes-base.if",
  detail: "full"
});
// Shows the complete schema with operator-specific rules
```

---

## Configuration Anti-Patterns

### ❌ Don't: Over-configure Upfront

**Bad**:
```javascript
// Adding every possible field
{
  "method": "GET",
  "url": "...",
  "sendQuery": false,
  "sendHeaders": false,
  "sendBody": false,
  "timeout": 10000,
  "ignoreResponseCode": false,
  // ... 20 more optional fields
}
```

**Good**:
```javascript
// Start minimal
{
  "method": "GET",
  "url": "...",
  "authentication": "none"
}
// Add fields only when needed
```

### ❌ Don't: Skip Validation

**Bad**:
```javascript
// Configure and deploy without validating
const config = {...};
n8n_update_partial_workflow({...}); // YOLO
```

**Good**:
```javascript
// Validate before deploying
const config = {...};
const result = validate_node({...});
if (result.valid) {
  n8n_update_partial_workflow({...});
}
```

### ❌ Don't: Ignore Operation Context

**Bad**:
```javascript
// Same config for all Slack operations
{
  "resource": "message",
  "operation": "post",
  "channel": "#general",
  "text": "..."
}

// Then switching the operation without updating the config
{
  "resource": "message",
  "operation": "update", // Changed
  "channel": "#general", // Wrong field for update!
  "text": "..."
}
```

**Good**:
```javascript
// Check requirements when changing the operation
get_node({
  nodeType: "nodes-base.slack"
});
// See what the update operation needs (messageId, not channel)
```

---

## Best Practices

### ✅ Do

1. **Start with get_node (standard detail)**
   - ~1-2K token response
   - Covers 95% of configuration needs
   - Default detail level

2. **Validate iteratively**
   - Configure → Validate → Fix → Repeat
   - An average of 2-3 iterations is normal
   - Read validation errors carefully

3. **Use search_properties mode when stuck**
   - If a field seems missing, search for it
   - Understand what controls field visibility
   - `get_node({mode: "search_properties", propertyQuery: "..."})`

4. **Respect operation context**
   - Different operations = different requirements
   - Always check get_node when changing the operation
   - Don't assume configs are transferable

5. **Trust auto-sanitization**
   - Operator structure is fixed automatically
   - Don't manually add/remove singleValue
   - IF/Switch metadata is added on save

### ❌ Don't

1. **Jump to detail="full" immediately**
   - Try standard detail first
   - Only escalate if needed
   - The full schema is 3-8K tokens

2. **Configure blindly**
   - Always validate before deploying
   - Understand why fields are required
   - Use search_properties for conditional fields

3. **Copy configs without understanding**
   - Different operations need different fields
   - Validate after copying
   - Adjust for the new context

4. **Manually fix auto-sanitization issues**
   - Let auto-sanitization handle operator structure
   - Focus on business logic
   - Save and let the system fix the structure

---

## Detailed References

For comprehensive guides on specific topics:

- **[DEPENDENCIES.md](DEPENDENCIES.md)** - Deep dive into property dependencies and displayOptions
- **[OPERATION_PATTERNS.md](OPERATION_PATTERNS.md)** - Common configuration patterns by node type

---

## Summary

**Configuration Strategy**:
1. Start with `get_node` (standard detail is the default)
2. Configure the required fields for the operation
3. Validate the configuration
4. Search properties if stuck
5. Iterate until valid (avg 2-3 cycles)
6. Deploy with confidence

**Key Principles**:
- **Operation-aware**: Different operations = different requirements
- **Progressive disclosure**: Start minimal, add as needed
- **Dependency-aware**: Understand field visibility rules
- **Validation-driven**: Let validation guide configuration

**Related Skills**:
- **n8n MCP Tools Expert** - How to use discovery tools correctly
- **n8n Validation Expert** - Interpret validation errors
- **n8n Expression Syntax** - Configure expression fields
- **n8n Workflow Patterns** - Apply patterns with proper configuration
22
skills/nanobanana-ppt-skills/SKILL.md
Normal file
@@ -0,0 +1,22 @@

---
name: nanobanana-ppt-skills
description: "AI-powered PPT generation with document analysis and styled images"
source: "https://github.com/op7418/NanoBanana-PPT-Skills"
risk: safe
---

# Nanobanana PPT Skills

## Overview

AI-powered PPT generation with document analysis and styled images.

## When to Use This Skill

Use this skill when you need AI-powered PPT generation with document analysis and styled images.

## Instructions

This skill provides guidance and patterns for AI-powered PPT generation with document analysis and styled images.

For more information, see the [source repository](https://github.com/op7418/NanoBanana-PPT-Skills).
109
skills/observe-whatsapp/SKILL.md
Normal file
@@ -0,0 +1,109 @@

---
name: observe-whatsapp
description: "Observe and troubleshoot WhatsApp in Kapso: debug message delivery, inspect webhook deliveries/retries, triage API errors, and run health checks. Use when investigating production issues, message failures, or webhook delivery problems."
source: "https://github.com/gokapso/agent-skills/tree/master/skills/observe-whatsapp"
risk: safe
---

# Observe WhatsApp

## When to use

Use this skill for operational diagnostics: message delivery investigation, webhook delivery debugging, error triage, and WhatsApp health checks.

## Setup

Env vars:
- `KAPSO_API_BASE_URL` (host only, no `/platform/v1`)
- `KAPSO_API_KEY`
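A minimal shell setup for these variables, assuming a POSIX shell; both values below are placeholders, not real endpoints or credentials.

```shell
# Placeholder values — substitute your own Kapso host and API key.
export KAPSO_API_BASE_URL="https://example.kapso.host"   # host only, no /platform/v1 suffix
export KAPSO_API_KEY="replace-with-your-key"
```

The skill's scripts read both variables from the environment, so export them once per session before running any of the commands below.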
## How to

### Investigate message delivery

1. List messages: `node scripts/messages.js --phone-number-id <id>`
2. Inspect message: `node scripts/message-details.js --message-id <id>`
3. Find conversation: `node scripts/lookup-conversation.js --phone-number <e164>`

### Triage errors

1. Message errors: `node scripts/errors.js`
2. API logs: `node scripts/api-logs.js`
3. Webhook deliveries: `node scripts/webhook-deliveries.js`

### Run health checks

1. Project overview: `node scripts/overview.js`
2. Phone number health: `node scripts/whatsapp-health.js --phone-number-id <id>`

## Scripts

### Messages

| Script | Purpose |
|--------|---------|
| `messages.js` | List messages |
| `message-details.js` | Get message details |
| `lookup-conversation.js` | Find conversation by phone or ID |

### Errors and logs

| Script | Purpose |
|--------|---------|
| `errors.js` | List message errors |
| `api-logs.js` | List external API logs |
| `webhook-deliveries.js` | List webhook delivery attempts |

### Health

| Script | Purpose |
|--------|---------|
| `overview.js` | Project overview |
| `whatsapp-health.js` | Phone number health check |

### OpenAPI

| Script | Purpose |
|--------|---------|
| `openapi-explore.mjs` | Explore OpenAPI (search/op/schema/where) |

Install deps (once):
```bash
npm i
```

Examples:
```bash
node scripts/openapi-explore.mjs --spec platform search "webhook deliveries"
node scripts/openapi-explore.mjs --spec platform op listWebhookDeliveries
node scripts/openapi-explore.mjs --spec platform schema WebhookDelivery
```

## Notes

- For webhook setup (create/update/delete, signature verification, event types), use `integrate-whatsapp`.

## References

- [references/message-debugging-reference.md](references/message-debugging-reference.md) - Message debugging guide
- [references/triage-reference.md](references/triage-reference.md) - Error triage guide
- [references/health-reference.md](references/health-reference.md) - Health check guide

## Related skills

- `integrate-whatsapp` - Onboarding, webhooks, messaging, templates, flows
- `automate-whatsapp` - Workflows, agents, and automations

<!-- FILEMAP:BEGIN -->
```text
[observe-whatsapp file map]|root: .
|.:{package.json,SKILL.md}
|assets:{health-example.json,message-debugging-example.json,triage-example.json}
|references:{health-reference.md,message-debugging-reference.md,triage-reference.md}
|scripts:{api-logs.js,errors.js,lookup-conversation.js,message-details.js,messages.js,openapi-explore.mjs,overview.js,webhook-deliveries.js,whatsapp-health.js}
|scripts/lib/messages:{args.js,kapso-api.js}
|scripts/lib/status:{args.js,kapso-api.js}
|scripts/lib/triage:{args.js,kapso-api.js}
```
<!-- FILEMAP:END -->
22
skills/pypict-skill/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: pypict-skill
description: "Pairwise test generation"
source: "https://github.com/omkamal/pypict-claude-skill/blob/main/SKILL.md"
risk: safe
---

# Pypict Skill

## Overview

Pairwise test generation

## When to Use This Skill

Use this skill when you need to generate pairwise (all-pairs) test combinations.

## Instructions

This skill provides guidance and patterns for pairwise test generation.
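As an illustration of the underlying idea, here is a naive greedy all-pairs sketch in plain Python (this is not the pypict API itself; the parameter names are made up):

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise cover: repeatedly pick the candidate row that
    covers the most parameter-value pairs not yet seen."""
    values = list(params.values())
    idx_pairs = list(combinations(range(len(values)), 2))
    # Every cross-parameter value pair that must appear in some test.
    uncovered = {(i, va, j, vb)
                 for i, j in idx_pairs
                 for va, vb in product(values[i], values[j])}
    candidates = list(product(*values))
    tests = []
    while uncovered:
        best = max(candidates,
                   key=lambda t: len({(i, t[i], j, t[j]) for i, j in idx_pairs}
                                     & uncovered))
        covered = {(i, best[i], j, best[j]) for i, j in idx_pairs} & uncovered
        if not covered:
            break  # defensive; a full cartesian candidate set always has gain
        tests.append(best)
        uncovered -= covered
    return tests

params = {"os": ["linux", "mac", "win"],
          "browser": ["chrome", "firefox"],
          "locale": ["en", "de"]}
suite = all_pairs(params)
# Far fewer rows than the 12-row full cartesian product, yet every pair of
# values across any two parameters appears in at least one test.
```

Tools like PICT implement much smarter versions of this covering strategy, plus constraints and seeding.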
For more information, see the [source repository](https://github.com/omkamal/pypict-claude-skill/blob/main/SKILL.md).
775
skills/readme/SKILL.md
Normal file
@@ -0,0 +1,775 @@
---
name: readme
description: "When the user wants to create or update a README.md file for a project. Also use when the user says 'write readme,' 'create readme,' 'document this project,' 'project documentation,' or asks for help with README.md. This skill creates absurdly thorough documentation covering local setup, architecture, and deployment."
source: "https://github.com/Shpigford/skills/tree/main/readme"
risk: safe
---

# README Generator

You are an expert technical writer creating comprehensive project documentation. Your goal is to write a README.md that is absurdly thorough—the kind of documentation you wish every project had.

## When to Use This Skill

Use this skill when:
- User wants to create or update a README.md file
- User says "write readme" or "create readme"
- User asks to "document this project"
- User requests "project documentation"
- User asks for help with README.md

## The Three Purposes of a README

1. **Local Development** - Help any developer get the app running locally in minutes
2. **Understanding the System** - Explain in great detail how the app works
3. **Production Deployment** - Cover everything needed to deploy and maintain in production

---

## Before Writing

### Step 1: Deep Codebase Exploration

Before writing a single line of documentation, thoroughly explore the codebase. You MUST understand:

**Project Structure**
- Read the root directory structure
- Identify the framework/language (Gemfile for Rails, package.json, go.mod, requirements.txt, etc.)
- Find the main entry point(s)
- Map out the directory organization

**Configuration Files**
- .env.example, .env.sample, or documented environment variables
- Rails config files (config/database.yml, config/application.rb, config/environments/)
- Credentials setup (config/credentials.yml.enc, config/master.key)
- Docker files (Dockerfile, docker-compose.yml)
- CI/CD configs (.github/workflows/, .gitlab-ci.yml, etc.)
- Deployment configs (config/deploy.yml for Kamal, fly.toml, render.yaml, Procfile, etc.)

**Database**
- db/schema.rb or db/structure.sql
- Migrations in db/migrate/
- Seeds in db/seeds.rb
- Database type from config/database.yml

**Key Dependencies**
- Gemfile and Gemfile.lock for Ruby gems
- package.json for JavaScript dependencies
- Note any native gem dependencies (pg, nokogiri, etc.)

**Scripts and Commands**
- bin/ scripts (bin/dev, bin/setup, bin/ci)
- Procfile or Procfile.dev
- Rake tasks (lib/tasks/)

### Step 2: Identify Deployment Target

Look for these files to determine the deployment platform and tailor instructions:

- `Dockerfile` / `docker-compose.yml` → Docker-based deployment
- `vercel.json` / `.vercel/` → Vercel
- `netlify.toml` → Netlify
- `fly.toml` → Fly.io
- `railway.json` / `railway.toml` → Railway
- `render.yaml` → Render
- `app.yaml` → Google App Engine
- `Procfile` → Heroku or Heroku-like platforms
- `.ebextensions/` → AWS Elastic Beanstalk
- `serverless.yml` → Serverless Framework
- `terraform/` / `*.tf` → Terraform/Infrastructure as Code
- `k8s/` / `kubernetes/` → Kubernetes

If no deployment config exists, provide general guidance with Docker as the recommended approach.
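The detection table above can be sketched as a small helper (marker files follow the table, plus `config/deploy.yml` for Kamal; the output labels are arbitrary):

```shell
# Print a best-guess deployment target based on marker files in the
# current directory. First match wins; extend as needed.
detect_deploy_target() {
  if   [ -f config/deploy.yml ]; then echo "kamal"
  elif [ -f fly.toml ]; then echo "fly.io"
  elif [ -f render.yaml ]; then echo "render"
  elif [ -f vercel.json ] || [ -d .vercel ]; then echo "vercel"
  elif [ -f netlify.toml ]; then echo "netlify"
  elif [ -f Procfile ]; then echo "heroku"
  elif [ -f Dockerfile ] || [ -f docker-compose.yml ]; then echo "docker"
  else echo "unknown"
  fi
}

detect_deploy_target
```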
### Step 3: Ask Only If Critical

Only ask the user questions if you cannot determine:
- What the project does (if not obvious from code)
- Specific deployment credentials or URLs needed
- Business context that affects documentation

Otherwise, proceed with exploration and writing.

---

## README Structure

Write the README with these sections in order:

### 1. Project Title and Overview

```markdown
# Project Name

Brief description of what the project does and who it's for. 2-3 sentences max.

## Key Features

- Feature 1
- Feature 2
- Feature 3
```

### 2. Tech Stack

List all major technologies:

```markdown
## Tech Stack

- **Language**: Ruby 3.3+
- **Framework**: Rails 7.2+
- **Frontend**: Inertia.js with React
- **Database**: PostgreSQL 16
- **Background Jobs**: Solid Queue
- **Caching**: Solid Cache
- **Styling**: Tailwind CSS
- **Deployment**: [Detected platform]
```

### 3. Prerequisites

What must be installed before starting:

```markdown
## Prerequisites

- Node.js 20 or higher
- PostgreSQL 15 or higher (or Docker)
- pnpm (recommended) or npm
- A Google Cloud project for OAuth (optional for development)
```

### 4. Getting Started

The complete local development guide:

```markdown
## Getting Started

### 1. Clone the Repository

\`\`\`bash
git clone https://github.com/user/repo.git
cd repo
\`\`\`

### 2. Install Ruby Dependencies

Ensure you have Ruby 3.3+ installed (via rbenv, asdf, or mise):

\`\`\`bash
bundle install
\`\`\`

### 3. Install JavaScript Dependencies

\`\`\`bash
yarn install
\`\`\`

### 4. Environment Setup

Copy the example environment file:

\`\`\`bash
cp .env.example .env
\`\`\`

Configure the following variables:

| Variable | Description | Example |
|----------|-------------|---------|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://localhost/myapp_development` |
| `REDIS_URL` | Redis connection (if used) | `redis://localhost:6379/0` |
| `SECRET_KEY_BASE` | Rails secret key | `bin/rails secret` |
| `RAILS_MASTER_KEY` | For credentials encryption | Check `config/master.key` |

### 5. Database Setup

Start PostgreSQL (if using Docker):

\`\`\`bash
docker run --name postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:16
\`\`\`

Create and set up the database:

\`\`\`bash
bin/rails db:setup
\`\`\`

This runs `db:create`, `db:schema:load`, and `db:seed`.

For existing databases, run migrations:

\`\`\`bash
bin/rails db:migrate
\`\`\`

### 6. Start Development Server

Using Foreman/Overmind (recommended, runs Rails + Vite):

\`\`\`bash
bin/dev
\`\`\`

Or manually:

\`\`\`bash
# Terminal 1: Rails server
bin/rails server

# Terminal 2: Vite dev server (for Inertia/React)
bin/vite dev
\`\`\`

Open [http://localhost:3000](http://localhost:3000) in your browser.
```
Include every step. Assume the reader is setting up on a fresh machine.

### 5. Architecture Overview

This is where you go absurdly deep:

```markdown
## Architecture

### Directory Structure

\`\`\`
├── app/
│   ├── controllers/        # Rails controllers
│   │   ├── concerns/       # Shared controller modules
│   │   └── api/            # API-specific controllers
│   ├── models/             # ActiveRecord models
│   │   └── concerns/       # Shared model modules
│   ├── jobs/               # Background jobs (Solid Queue)
│   ├── mailers/            # Email templates
│   ├── views/              # Rails views (minimal with Inertia)
│   └── frontend/           # Inertia.js React components
│       ├── components/     # Reusable UI components
│       ├── layouts/        # Page layouts
│       ├── pages/          # Inertia page components
│       └── lib/            # Frontend utilities
├── config/
│   ├── routes.rb           # Route definitions
│   ├── database.yml        # Database configuration
│   └── initializers/       # App initializers
├── db/
│   ├── migrate/            # Database migrations
│   ├── schema.rb           # Current schema
│   └── seeds.rb            # Seed data
├── lib/
│   └── tasks/              # Custom Rake tasks
└── public/                 # Static assets
\`\`\`

### Request Lifecycle

1. Request hits Rails router (`config/routes.rb`)
2. Middleware stack processes request (authentication, sessions, etc.)
3. Controller action executes
4. Models interact with PostgreSQL via ActiveRecord
5. Inertia renders React component with props
6. Response sent to browser

### Data Flow

\`\`\`
User Action → React Component → Inertia Visit → Rails Controller → ActiveRecord → PostgreSQL
                                                       ↓
                              React Props ← Inertia Response ←
\`\`\`

### Key Components

**Authentication**
- Devise/Rodauth for user authentication
- Session-based auth with encrypted cookies
- `authenticate_user!` before_action for protected routes

**Inertia.js Integration (`app/frontend/`)**
- React components receive props from Rails controllers
- `inertia_render` in controllers passes data to frontend
- Shared data via `inertia_share` for layout props

**Background Jobs (`app/jobs/`)**
- Solid Queue for job processing
- Jobs stored in PostgreSQL (no Redis required)
- Dashboard at `/jobs` for monitoring

**Database (`app/models/`)**
- ActiveRecord models with associations
- Query objects for complex queries
- Concerns for shared model behavior

### Database Schema

\`\`\`
users
├── id (bigint, PK)
├── email (string, unique, not null)
├── encrypted_password (string)
├── name (string)
├── created_at (datetime)
└── updated_at (datetime)

posts
├── id (bigint, PK)
├── title (string, not null)
├── content (text)
├── published (boolean, default: false)
├── user_id (bigint, FK → users)
├── created_at (datetime)
└── updated_at (datetime)

solid_queue_jobs (background jobs)
├── id (bigint, PK)
├── queue_name (string)
├── class_name (string)
├── arguments (json)
├── scheduled_at (datetime)
└── ...
\`\`\`
```
### 6. Environment Variables

Complete reference for all env vars:

```markdown
## Environment Variables

### Required

| Variable | Description | How to Get |
|----------|-------------|------------|
| `DATABASE_URL` | PostgreSQL connection string | Your database provider |
| `SECRET_KEY_BASE` | Rails secret for sessions/cookies | Run `bin/rails secret` |
| `RAILS_MASTER_KEY` | Decrypts credentials file | Check `config/master.key` (not in git) |

### Optional

| Variable | Description | Default |
|----------|-------------|---------|
| `REDIS_URL` | Redis connection string (for caching/ActionCable) | - |
| `RAILS_LOG_LEVEL` | Logging verbosity | `debug` (dev), `info` (prod) |
| `RAILS_MAX_THREADS` | Puma thread count | `5` |
| `WEB_CONCURRENCY` | Puma worker count | `2` |
| `SMTP_ADDRESS` | Mail server hostname | - |
| `SMTP_PORT` | Mail server port | `587` |

### Rails Credentials

Sensitive values should be stored in Rails encrypted credentials:

\`\`\`bash
# Edit credentials (opens in $EDITOR)
bin/rails credentials:edit

# Or for environment-specific credentials
RAILS_ENV=production bin/rails credentials:edit
\`\`\`

Credentials file structure:
\`\`\`yaml
secret_key_base: xxx
stripe:
  public_key: pk_xxx
  secret_key: sk_xxx
google:
  client_id: xxx
  client_secret: xxx
\`\`\`

Access in code: `Rails.application.credentials.stripe[:secret_key]`

### Environment-Specific

**Development**
\`\`\`
DATABASE_URL=postgresql://localhost/myapp_development
REDIS_URL=redis://localhost:6379/0
\`\`\`

**Production**
\`\`\`
DATABASE_URL=<production-connection-string>
RAILS_ENV=production
RAILS_SERVE_STATIC_FILES=true
\`\`\`
```
### 7. Available Scripts

```markdown
## Available Scripts

| Command | Description |
|---------|-------------|
| `bin/dev` | Start development server (Rails + Vite via Foreman) |
| `bin/rails server` | Start Rails server only |
| `bin/vite dev` | Start Vite dev server only |
| `bin/rails console` | Open Rails console (IRB with app loaded) |
| `bin/rails db:migrate` | Run pending database migrations |
| `bin/rails db:rollback` | Rollback last migration |
| `bin/rails db:seed` | Run database seeds |
| `bin/rails db:reset` | Drop, create, migrate, and seed database |
| `bin/rails routes` | List all routes |
| `bin/rails test` | Run test suite (Minitest) |
| `bundle exec rspec` | Run test suite (RSpec, if used) |
| `bin/rails assets:precompile` | Compile assets for production |
| `bin/rubocop` | Run Ruby linter |
| `yarn lint` | Run JavaScript/TypeScript linter |
```
### 8. Testing

```markdown
## Testing

### Running Tests

\`\`\`bash
# Run all tests (Minitest)
bin/rails test

# Run all tests (RSpec, if used)
bundle exec rspec

# Run specific test file
bin/rails test test/models/user_test.rb
bundle exec rspec spec/models/user_spec.rb

# Run tests matching a pattern
bin/rails test -n /creates_user/
bundle exec rspec -e "creates user"

# Run system tests (browser tests)
bin/rails test:system

# Run with coverage (SimpleCov)
COVERAGE=true bin/rails test
\`\`\`

### Test Structure

\`\`\`
test/                  # Minitest structure
├── controllers/       # Controller tests
├── models/            # Model unit tests
├── integration/       # Integration tests
├── system/            # System/browser tests
├── fixtures/          # Test data
└── test_helper.rb     # Test configuration

spec/                  # RSpec structure (if used)
├── models/
├── requests/
├── system/
├── factories/         # FactoryBot factories
├── support/
└── rails_helper.rb
\`\`\`

### Writing Tests

**Minitest example:**
\`\`\`ruby
require "test_helper"

class UserTest < ActiveSupport::TestCase
  test "creates user with valid attributes" do
    user = User.new(email: "test@example.com", name: "Test User")
    assert user.valid?
  end

  test "requires email" do
    user = User.new(name: "Test User")
    assert_not user.valid?
    assert_includes user.errors[:email], "can't be blank"
  end
end
\`\`\`

**RSpec example:**
\`\`\`ruby
require "rails_helper"

RSpec.describe User, type: :model do
  describe "validations" do
    it "is valid with valid attributes" do
      user = build(:user)
      expect(user).to be_valid
    end

    it "requires an email" do
      user = build(:user, email: nil)
      expect(user).not_to be_valid
      expect(user.errors[:email]).to include("can't be blank")
    end
  end
end
\`\`\`

### Frontend Testing

For Inertia/React components:

\`\`\`bash
yarn test
\`\`\`

\`\`\`typescript
import { render, screen } from '@testing-library/react'
import { Dashboard } from './Dashboard'

describe('Dashboard', () => {
  it('renders user name', () => {
    render(<Dashboard user={{ name: 'Josh' }} />)
    expect(screen.getByText('Josh')).toBeInTheDocument()
  })
})
\`\`\`
```
### 9. Deployment

Tailor this to the detected platform (look for Dockerfile, fly.toml, render.yaml, kamal/, etc.):

```markdown
## Deployment

### Kamal (Recommended for Rails)

If using Kamal for deployment:

\`\`\`bash
# Setup Kamal (first time)
kamal setup

# Deploy
kamal deploy

# Rollback to previous version
kamal rollback

# View logs
kamal app logs

# Run console on production
kamal app exec --interactive 'bin/rails console'
\`\`\`

Configuration lives in `config/deploy.yml`.

### Docker

Build and run:

\`\`\`bash
# Build image
docker build -t myapp .

# Run with environment variables
docker run -p 3000:3000 \
  -e DATABASE_URL=postgresql://... \
  -e SECRET_KEY_BASE=... \
  -e RAILS_ENV=production \
  myapp
\`\`\`

### Heroku

\`\`\`bash
# Create app
heroku create myapp

# Add PostgreSQL
heroku addons:create heroku-postgresql:mini

# Set environment variables
heroku config:set SECRET_KEY_BASE=$(bin/rails secret)
heroku config:set RAILS_MASTER_KEY=$(cat config/master.key)

# Deploy
git push heroku main

# Run migrations
heroku run bin/rails db:migrate
\`\`\`

### Fly.io

\`\`\`bash
# Launch (first time)
fly launch

# Deploy
fly deploy

# Run migrations
fly ssh console -C "bin/rails db:migrate"

# Open console
fly ssh console -C "bin/rails console"
\`\`\`

### Render

If `render.yaml` exists, connect your repo to Render and it will auto-deploy.

Manual setup:
1. Create new Web Service
2. Connect GitHub repository
3. Set build command: `bundle install && bin/rails assets:precompile`
4. Set start command: `bin/rails server`
5. Add environment variables in dashboard

### Manual/VPS Deployment

\`\`\`bash
# On the server:

# Pull latest code
git pull origin main

# Install dependencies
bundle install --deployment

# Compile assets
RAILS_ENV=production bin/rails assets:precompile

# Run migrations
RAILS_ENV=production bin/rails db:migrate

# Restart application server (e.g., Puma via systemd)
sudo systemctl restart myapp
\`\`\`
```
### 10. Troubleshooting

```markdown
## Troubleshooting

### Database Connection Issues

**Error:** `could not connect to server: Connection refused`

**Solution:**
1. Verify PostgreSQL is running: `pg_isready` or `docker ps`
2. Check `DATABASE_URL` format: `postgresql://USER:PASSWORD@HOST:PORT/DATABASE`
3. Ensure database exists: `bin/rails db:create`

### Pending Migrations

**Error:** `Migrations are pending`

**Solution:**
\`\`\`bash
bin/rails db:migrate
\`\`\`

### Asset Compilation Issues

**Error:** `The asset "application.css" is not present in the asset pipeline`

**Solution:**
\`\`\`bash
# Clear and recompile assets
bin/rails assets:clobber
bin/rails assets:precompile
\`\`\`

### Bundle Install Failures

**Error:** Native extension build failures

**Solution:**
1. Ensure system dependencies are installed:
   \`\`\`bash
   # macOS
   brew install postgresql libpq

   # Ubuntu
   sudo apt-get install libpq-dev
   \`\`\`
2. Try again: `bundle install`

### Credentials Issues

**Error:** `ActiveSupport::MessageEncryptor::InvalidMessage`

**Solution:**
The master key doesn't match the credentials file. Either:
1. Get the correct `config/master.key` from another team member
2. Or regenerate credentials: `rm config/credentials.yml.enc && bin/rails credentials:edit`

### Vite/Inertia Issues

**Error:** `Vite Ruby - Build failed`

**Solution:**
\`\`\`bash
# Clear Vite cache
rm -rf node_modules/.vite

# Reinstall JS dependencies
rm -rf node_modules && yarn install
\`\`\`

### Solid Queue Issues

**Error:** Jobs not processing

**Solution:**
Ensure the queue worker is running:
\`\`\`bash
bin/jobs
# or
bin/rails solid_queue:start
\`\`\`
```
### 11. Contributing (Optional)

Include if open source or team project.

### 12. License (Optional)

---

## Writing Principles

1. **Be Absurdly Thorough** - When in doubt, include it. More detail is always better.

2. **Use Code Blocks Liberally** - Every command should be copy-pasteable.

3. **Show Example Output** - When helpful, show what the user should expect to see.

4. **Explain the Why** - Don't just say "run this command," explain what it does.

5. **Assume Fresh Machine** - Write as if the reader has never seen this codebase.

6. **Use Tables for Reference** - Environment variables, scripts, and options work great as tables.

7. **Keep Commands Current** - Use `pnpm` if the project uses it, `npm` if it uses npm, etc.

8. **Include a Table of Contents** - For READMEs over ~200 lines, add a TOC at the top.
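For the last principle, a small helper can generate a linked TOC from second- and third-level headings (a sketch: the slug rule approximates GitHub's anchors and ignores headings inside code fences):

```python
import re

def github_slug(heading):
    """Approximate GitHub's heading anchor: lowercase, strip punctuation,
    collapse whitespace to hyphens. Good enough for typical headings."""
    slug = re.sub(r"[^\w\s-]", "", heading.strip().lower())
    return re.sub(r"\s+", "-", slug)

def toc(markdown):
    """Build a nested bullet TOC from ## and ### headings."""
    lines = []
    for m in re.finditer(r"^(#{2,3})\s+(.+?)\s*$", markdown, re.M):
        indent = "  " * (len(m.group(1)) - 2)
        title = m.group(2)
        lines.append(f"{indent}- [{title}](#{github_slug(title)})")
    return "\n".join(lines)

print(toc("## Getting Started\n### 1. Clone the Repository\n## Tech Stack\n"))
```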
---

## Output Format

Generate a complete README.md file with:
- Proper markdown formatting
- Code blocks with language hints (```bash, ```typescript, etc.)
- Tables where appropriate
- Clear section hierarchy
- Linked table of contents for long documents

Write the README directly to `README.md` in the project root.
401
skills/screenshots/SKILL.md
Normal file
@@ -0,0 +1,401 @@
---
name: screenshots
description: "Generate marketing screenshots of your app using Playwright. Use when the user wants to create screenshots for Product Hunt, social media, landing pages, or documentation."
source: "https://github.com/Shpigford/skills/tree/main/screenshots"
risk: safe
---

# Screenshots

Generate marketing-quality screenshots of your app using Playwright directly. Screenshots are captured at true HiDPI (2x retina) resolution using `deviceScaleFactor: 2`.

## When to Use This Skill

Use this skill when:
- User wants to create screenshots for Product Hunt
- Creating screenshots for social media
- Generating images for landing pages
- Creating documentation screenshots
- User requests marketing-quality app screenshots

## Prerequisites

Playwright must be available. Check for it:
```bash
npx playwright --version 2>/dev/null || npm ls playwright 2>/dev/null | grep playwright
```

If not found, inform the user:
> Playwright is required. Install it with: `npm install -D playwright` or `npm install -D @playwright/test`

## Step 1: Determine App URL

If `$1` is provided, use it as the app URL.

If no URL is provided:
1. Check if a dev server is likely running by looking for `package.json` scripts
2. Use `AskUserQuestion` to ask the user for the URL or offer to help start the dev server

Common default URLs to suggest:
- `http://localhost:3000` (Next.js, Create React App, Rails)
- `http://localhost:5173` (Vite)
- `http://localhost:4000` (Phoenix)
- `http://localhost:8080` (Vue CLI, generic)
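The common ports above can also be probed directly to suggest a URL (a sketch that assumes `curl` is available; it prints nothing when no dev server responds):

```shell
# Probe common dev-server ports; print the first URL that responds.
probe_dev_server() {
  for port in 3000 5173 4000 8080; do
    if curl -sf -o /dev/null --max-time 1 "http://localhost:$port"; then
      echo "http://localhost:$port"
      return 0
    fi
  done
  return 1
}

probe_dev_server || echo "no dev server detected"
```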
|
||||
|
||||
## Step 2: Gather Requirements
|
||||
|
||||
Use `AskUserQuestion` with the following questions:
|
||||
|
||||
**Question 1: Screenshot count**
|
||||
- Header: "Count"
|
||||
- Question: "How many screenshots do you need?"
|
||||
- Options:
|
||||
- "3-5" - Quick set of key features
|
||||
- "5-10" - Comprehensive feature coverage
|
||||
- "10+" - Full marketing suite
|
||||
|
||||
**Question 2: Purpose**
|
||||
- Header: "Purpose"
|
||||
- Question: "What will these screenshots be used for?"
|
||||
- Options:
|
||||
- "Product Hunt" - Hero shots and feature highlights
|
||||
- "Social media" - Eye-catching feature demos
|
||||
- "Landing page" - Marketing sections and benefits
|
||||
- "Documentation" - UI reference and tutorials
|
||||
|
||||
**Question 3: Authentication**
|
||||
- Header: "Auth"
|
||||
- Question: "Does the app require login to access the features you want to screenshot?"
|
||||
- Options:
|
||||
- "No login needed" - Public pages only
|
||||
- "Yes, I'll provide credentials" - Need to log in first
|
||||
|
||||
If user selects "Yes, I'll provide credentials", ask follow-up questions:
|
||||
- "What is the login page URL?" (e.g., `/login`, `/sign-in`)
|
||||
- "What is the email/username?"
|
||||
- "What is the password?"
|
||||
|
||||
The script will automatically detect login form fields using Playwright's smart locators.

## Step 3: Analyze Codebase for Features

Thoroughly explore the codebase to understand the app and identify screenshot opportunities.

### 3.1: Read Documentation First

**Always start by reading these files** to understand what the app does:

1. **README.md** (and any README files in subdirectories) - Read the full README to understand:
   - What the app is and what problem it solves
   - Key features and capabilities
   - Screenshots or feature descriptions already documented

2. **CHANGELOG.md** or **HISTORY.md** - Recent features worth highlighting

3. **docs/** directory - Any additional documentation about features

### 3.2: Analyze Routes to Find Pages

Read the routing configuration to discover all available pages:

| Framework | File to Read | What to Look For |
|-----------|--------------|------------------|
| **Next.js App Router** | `app/` directory structure | Each folder with `page.tsx` is a route |
| **Next.js Pages Router** | `pages/` directory | Each file is a route |
| **Rails** | `config/routes.rb` | Read the entire file for all routes |
| **React Router** | Search for `createBrowserRouter` or `<Route` | Route definitions with paths |
| **Vue Router** | `src/router/index.js` or `router.js` | Routes array with path definitions |
| **SvelteKit** | `src/routes/` directory | Each folder with `+page.svelte` is a route |
| **Remix** | `app/routes/` directory | File-based routing |
| **Laravel** | `routes/web.php` | Route definitions |
| **Django** | `urls.py` files | URL patterns |
| **Express** | Search for `app.get`, `router.get` | Route handlers |

**Important**: Actually read these files, don't just check if they exist. The route definitions tell you what pages are available for screenshots.

### 3.3: Identify Key Components

Look for components that represent screenshottable features:

- Dashboard components
- Feature sections with distinct UI
- Forms and interactive inputs
- Data visualizations (charts, graphs, tables)
- Modals and dialogs
- Navigation and sidebars
- Settings panels
- User profile sections

### 3.4: Check for Marketing Assets

Look for existing marketing content that hints at key features:
- Landing page components (often in `components/landing/` or `components/marketing/`)
- Feature list components
- Pricing tables
- Testimonial sections

### 3.5: Build Feature List

Create a comprehensive list of discovered features with:
- Feature name (from README or component name)
- URL path (from routes)
- CSS selector to focus on (from component structure)
- Required UI state (logged in, data populated, modal open, specific tab selected)

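The feature list above can be kept as simple structured data that later feeds the screenshot script. A minimal sketch — the feature names, paths, and selectors here are hypothetical examples, not taken from any real app:

```javascript
// Hypothetical feature list built during codebase analysis.
// Each entry records what the screenshot script will need later:
// a URL path, an optional CSS selector, and the required UI state.
const features = [
  {
    name: 'Dashboard overview',
    path: '/dashboard',
    selector: 'main.dashboard',
    state: { loggedIn: true },
  },
  {
    name: 'Analytics',
    path: '/analytics',
    selector: '.charts-grid',
    state: { loggedIn: true, dataPopulated: true },
  },
];

// Features that require login determine whether the auth step runs.
const needsAuth = features.filter((f) => f.state.loggedIn).length;
console.log(`${features.length} features, ${needsAuth} need login`);
```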
## Step 4: Plan Screenshots with User

Present the discovered features to the user and ask them to confirm or modify the list.

Use `AskUserQuestion`:
- Header: "Features"
- Question: "I found these features in your codebase. Which would you like to screenshot?"
- Options: List 3-4 key features discovered, plus "Let me pick specific ones"

If user wants specific ones, ask follow-up questions to clarify exactly what to capture.

## Step 5: Create Screenshots Directory

```bash
mkdir -p screenshots
```

## Step 6: Generate and Run Playwright Script

Create a Node.js script that uses Playwright with proper HiDPI settings. The script should:

1. **Use `deviceScaleFactor: 2`** for true retina resolution
2. **Set viewport to 1440x900** (produces 2880x1800 pixel images)
3. **Handle authentication** if credentials were provided
4. **Navigate to each page** and capture screenshots

### Script Template

Write this script to a temporary file (e.g., `screenshot-script.mjs`) and execute it:

```javascript
import { chromium } from 'playwright';

const BASE_URL = '[APP_URL]';
const SCREENSHOTS_DIR = './screenshots';

// Authentication config (if needed)
const AUTH = {
  needed: [true|false],
  loginUrl: '[LOGIN_URL]',
  email: '[EMAIL]',
  password: '[PASSWORD]',
};

// Screenshots to capture
const SCREENSHOTS = [
  { name: '01-feature-name', url: '/path', waitFor: '[optional-selector]' },
  { name: '02-another-feature', url: '/another-path' },
  // ... add all planned screenshots
];

async function main() {
  const browser = await chromium.launch();

  // Create context with HiDPI settings
  const context = await browser.newContext({
    viewport: { width: 1440, height: 900 },
    deviceScaleFactor: 2, // This is the key for true retina screenshots
  });

  const page = await context.newPage();

  // Handle authentication if needed
  if (AUTH.needed) {
    console.log('Logging in...');
    await page.goto(AUTH.loginUrl);

    // Smart login: try multiple common patterns for email/username field
    const emailField = page.locator([
      'input[type="email"]',
      'input[name="email"]',
      'input[id="email"]',
      'input[placeholder*="email" i]',
      'input[name="username"]',
      'input[id="username"]',
      'input[type="text"]',
    ].join(', ')).first();
    await emailField.fill(AUTH.email);

    // Smart login: try multiple common patterns for password field
    const passwordField = page.locator([
      'input[type="password"]',
      'input[name="password"]',
      'input[id="password"]',
    ].join(', ')).first();
    await passwordField.fill(AUTH.password);

    // Smart login: try multiple common patterns for submit button
    const submitButton = page.locator([
      'button[type="submit"]',
      'input[type="submit"]',
      'button:has-text("Sign in")',
      'button:has-text("Log in")',
      'button:has-text("Login")',
      'button:has-text("Submit")',
    ].join(', ')).first();
    await submitButton.click();

    await page.waitForLoadState('networkidle');
    console.log('Login complete');
  }

  // Capture each screenshot
  for (const shot of SCREENSHOTS) {
    console.log(`Capturing: ${shot.name}`);
    await page.goto(`${BASE_URL}${shot.url}`);
    await page.waitForLoadState('networkidle');

    // Optional: wait for specific element
    if (shot.waitFor) {
      await page.waitForSelector(shot.waitFor);
    }

    // Optional: perform actions before screenshot
    if (shot.actions) {
      for (const action of shot.actions) {
        if (action.click) await page.click(action.click);
        if (action.fill) await page.fill(action.fill.selector, action.fill.value);
        if (action.wait) await page.waitForTimeout(action.wait);
      }
    }

    await page.screenshot({
      path: `${SCREENSHOTS_DIR}/${shot.name}.png`,
      fullPage: shot.fullPage || false,
    });
    console.log(`  Saved: ${shot.name}.png`);
  }

  await browser.close();
  console.log('Done!');
}

main().catch(console.error);
```

### Running the Script

```bash
node screenshot-script.mjs
```

After running, clean up the temporary script:
```bash
rm screenshot-script.mjs
```

## Step 7: Advanced Screenshot Options

### Element-Focused Screenshots

To screenshot a specific element instead of the full viewport:

```javascript
const element = await page.locator('[CSS_SELECTOR]');
await element.screenshot({ path: `${SCREENSHOTS_DIR}/element.png` });
```

### Full Page Screenshots

For scrollable content, capture the entire page:

```javascript
await page.screenshot({
  path: `${SCREENSHOTS_DIR}/full-page.png`,
  fullPage: true
});
```

### Waiting for Animations

If the page has animations, wait for them to complete:

```javascript
await page.waitForTimeout(500); // Wait 500ms for animations
```

### Clicking Elements Before Screenshot

To capture a modal, dropdown, or hover state:

```javascript
await page.click('button.open-modal');
await page.waitForSelector('.modal-content');
await page.screenshot({ path: `${SCREENSHOTS_DIR}/modal.png` });
```

### Dark Mode Screenshots

If the app supports dark mode:

```javascript
// Set dark mode preference
const context = await browser.newContext({
  viewport: { width: 1440, height: 900 },
  deviceScaleFactor: 2,
  colorScheme: 'dark',
});
```

## Step 8: File Naming Convention

Use descriptive, kebab-case filenames with numeric prefixes for ordering:

| Feature | Filename |
|---------|----------|
| Dashboard overview | `01-dashboard-overview.png` |
| Link management | `02-link-inbox.png` |
| Edition editor | `03-edition-editor.png` |
| Analytics | `04-analytics.png` |
| Settings | `05-settings.png` |

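The naming convention above can be applied mechanically when building the `SCREENSHOTS` array. A small helper sketch — the feature names are illustrative, not from any real app:

```javascript
// Turn a feature name and 1-based index into the filename
// convention above: zero-padded numeric prefix + kebab-case slug.
function screenshotName(index, feature) {
  const prefix = String(index).padStart(2, '0');
  const slug = feature
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // non-alphanumeric runs become a hyphen
    .replace(/^-+|-+$/g, '');    // trim any stray leading/trailing hyphens
  return `${prefix}-${slug}.png`;
}

console.log(screenshotName(1, 'Dashboard overview')); // 01-dashboard-overview.png
console.log(screenshotName(4, 'Analytics'));          // 04-analytics.png
```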
## Step 9: Verify and Summarize

After capturing all screenshots, verify the results:

```bash
ls -la screenshots/*.png
sips -g pixelWidth -g pixelHeight screenshots/*.png 2>/dev/null || file screenshots/*.png
```

Provide a summary to the user:

1. List all generated files with their paths
2. Confirm the resolution (should be 2880x1800 for 2x retina at 1440x900 viewport)
3. Mention total file sizes
4. Suggest any follow-up actions

Example output:
```
Generated 5 marketing screenshots:

screenshots/
├── 01-dashboard-overview.png (1.2 MB, 2880x1800 @ 2x)
├── 02-link-inbox.png (456 KB, 2880x1800 @ 2x)
├── 03-edition-editor.png (890 KB, 2880x1800 @ 2x)
├── 04-analytics.png (567 KB, 2880x1800 @ 2x)
└── 05-settings.png (234 KB, 2880x1800 @ 2x)

All screenshots are true retina-quality (2x deviceScaleFactor) and ready for marketing use.
```

## Error Handling

- **Playwright not found**: Suggest `npm install -D playwright`
- **Page not loading**: Check if the dev server is running, suggest starting it
- **Login failed**: The smart locators try common patterns but may fail on unusual login forms. If login fails, analyze the login page HTML to find the correct selectors and customize the script.
- **Element not found**: Verify the CSS selector, offer to take a full page screenshot instead
- **Screenshot failed**: Check disk space, verify write permissions to screenshots directory

## Tips for Best Results

1. **Clean UI state**: Use demo/seed data for realistic content
2. **Consistent sizing**: Use the same viewport for all screenshots
3. **Wait for content**: Use `waitForLoadState('networkidle')` to ensure all content loads
4. **Hide dev tools**: Ensure no browser extensions or dev overlays are visible
5. **Dark mode variants**: Consider capturing both light and dark mode if supported
22
skills/security-bluebook-builder/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: security-bluebook-builder
description: "Build security Blue Books for sensitive apps"
source: "https://github.com/SHADOWPR0/security-bluebook-builder"
risk: safe
---

# Security Bluebook Builder

## Overview

Build security Blue Books for sensitive apps.

## When to Use This Skill

Use this skill when you need to build a security Blue Book for a sensitive application.

## Instructions

This skill provides guidance and patterns for building security Blue Books for sensitive applications.

For more information, see the [source repository](https://github.com/SHADOWPR0/security-bluebook-builder).
70
skills/sharp-edges/SKILL.md
Normal file
@@ -0,0 +1,70 @@
---
name: sharp-edges
description: "Identify error-prone APIs and dangerous configurations"
source: "https://github.com/trailofbits/skills/tree/main/plugins/sharp-edges"
risk: safe
---

# Sharp Edges

## Overview

Identify error-prone APIs and dangerous configurations that could lead to bugs, security vulnerabilities, or system failures.

## When to Use This Skill

Use this skill when:
- Reviewing code for potentially dangerous API usage
- Identifying configurations that could cause issues
- Analyzing code for error-prone patterns
- Assessing risk in API design or configuration choices
- Performing security audits focused on API misuse

## Instructions

This skill helps identify problematic APIs and configurations:

1. **API Analysis**: Review API usage for error-prone patterns
2. **Configuration Review**: Identify dangerous or risky configurations
3. **Pattern Recognition**: Spot common mistakes and pitfalls
4. **Risk Assessment**: Evaluate the potential impact of identified issues

## Common Sharp Edges

### Error-Prone APIs

- APIs with complex parameter requirements
- APIs with non-obvious failure modes
- APIs that require careful resource management
- APIs with timing or concurrency issues
- APIs with unclear error handling

### Dangerous Configurations

- Default settings that are insecure
- Configurations that bypass security controls
- Settings that enable dangerous features
- Options that reduce system reliability
- Parameters that affect performance negatively

## Detection Strategies

1. **Code Review**: Look for known problematic patterns
2. **Static Analysis**: Use tools to identify risky API usage
3. **Configuration Audits**: Review configuration files for dangerous settings
4. **Documentation Review**: Check for warnings about API usage
5. **Experience-Based**: Leverage knowledge of common pitfalls

## Best Practices

- Document identified sharp edges
- Provide clear guidance on safe usage
- Create examples of correct vs. incorrect usage
- Recommend safer alternatives when available
- Update documentation with findings

## Resources

For more information, see the [source repository](https://github.com/trailofbits/skills/tree/main/plugins/sharp-edges).
408
skills/skill-rails-upgrade/SKILL.md
Normal file
@@ -0,0 +1,408 @@
---
name: skill-rails-upgrade
description: "Analyze Rails apps and provide upgrade assessments"
source: "https://github.com/robzolkos/skill-rails-upgrade"
risk: safe
---

# Rails Upgrade Analyzer

Analyze the current Rails application and provide a comprehensive upgrade assessment with selective file merging.

## When to Use This Skill

Use this skill when you need to analyze a Rails application and produce an upgrade assessment.

## Step 1: Verify Rails Application

Check that we're in a Rails application by looking for these files:
- `Gemfile` (must exist and contain 'rails')
- `config/application.rb` (Rails application config)
- `config/environment.rb` (Rails environment)

If any of these are missing or don't indicate a Rails app, stop and inform the user this doesn't appear to be a Rails application.

## Step 2: Get Current Rails Version

Extract the current Rails version from:
1. First, check `Gemfile.lock` for the exact installed version (look for `rails (x.y.z)`)
2. If not found, check `Gemfile` for the version constraint

Report the exact current version (e.g., `7.1.3`).

## Step 3: Find Latest Rails Version

Use the GitHub CLI to fetch the latest Rails release:

```bash
gh api repos/rails/rails/releases/latest --jq '.tag_name'
```

This returns the latest stable version tag (e.g., `v8.0.1`). Strip the 'v' prefix for comparison.

Also check recent tags to understand the release landscape:

```bash
gh api repos/rails/rails/tags --jq '.[0:10] | .[].name'
```

## Step 4: Determine Upgrade Type

Compare current and latest versions to classify the upgrade:

- **Patch upgrade**: Same major.minor, different patch (e.g., 7.1.3 → 7.1.5)
- **Minor upgrade**: Same major, different minor (e.g., 7.1.3 → 7.2.0)
- **Major upgrade**: Different major version (e.g., 7.1.3 → 8.0.0)

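The classification above is simple enough to automate. A hedged sketch in JavaScript — semver handling is deliberately simplified and assumes plain `x.y.z` strings with no pre-release tags:

```javascript
// Classify a Rails upgrade by comparing major and minor version parts.
// Simplified: assumes plain "x.y.z" version strings.
function upgradeType(current, target) {
  const [cMaj, cMin] = current.split('.').map(Number);
  const [tMaj, tMin] = target.split('.').map(Number);
  if (cMaj !== tMaj) return 'major';
  if (cMin !== tMin) return 'minor';
  return 'patch';
}

console.log(upgradeType('7.1.3', '8.0.0')); // major
console.log(upgradeType('7.1.3', '7.2.0')); // minor
console.log(upgradeType('7.1.3', '7.1.5')); // patch
```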
## Step 5: Fetch Upgrade Guide

Use WebFetch to get the official Rails upgrade guide:

URL: `https://guides.rubyonrails.org/upgrading_ruby_on_rails.html`

Look for sections relevant to the version jump. The guide is organized by target version with sections like:
- "Upgrading from Rails X.Y to Rails X.Z"
- Breaking changes
- Deprecation warnings
- Configuration changes
- Required migrations

Extract and summarize the relevant sections for the user's specific upgrade path.

## Step 6: Fetch Rails Diff

Use WebFetch to get the diff between versions from railsdiff.org:

URL: `https://railsdiff.org/{current_version}/{target_version}`

For example: `https://railsdiff.org/7.1.3/8.0.0`

This shows:
- Changes to default configuration files
- New files that need to be added
- Modified initializers
- Updated dependencies
- Changes to bin/ scripts

Summarize the key file changes.

## Step 7: Check JavaScript Dependencies

Rails applications often include JavaScript packages that should be updated alongside Rails. Check for and report on these dependencies.

### 7.1: Identify JS Package Manager

Check which package manager the app uses:

```bash
# Check for package.json (npm/yarn)
ls package.json 2>/dev/null

# Check for importmap (Rails 7+)
ls config/importmap.rb 2>/dev/null
```

### 7.2: Check Rails-Related JS Packages

If `package.json` exists, check for these Rails-related packages:

```bash
# Extract current versions of Rails-related packages
cat package.json | grep -E '"@hotwired/|"@rails/|"stimulus"|"turbo-rails"' || echo "No Rails JS packages found"
```

**Key packages to check:**

| Package | Purpose | Version Alignment |
|---------|---------|-------------------|
| `@hotwired/turbo-rails` | Turbo Drive/Frames/Streams | Should match Rails version era |
| `@hotwired/stimulus` | Stimulus JS framework | Generally stable across Rails versions |
| `@rails/actioncable` | WebSocket support | Should match Rails version |
| `@rails/activestorage` | Direct uploads | Should match Rails version |
| `@rails/actiontext` | Rich text editing | Should match Rails version |
| `@rails/request.js` | Rails UJS replacement | Should match Rails version era |

### 7.3: Check for Updates

For npm/yarn projects, check for available updates:

```bash
# Using npm
npm outdated @hotwired/turbo-rails @hotwired/stimulus @rails/actioncable @rails/activestorage 2>/dev/null

# Or check latest versions directly
npm view @hotwired/turbo-rails version 2>/dev/null
npm view @rails/actioncable version 2>/dev/null
```

### 7.4: Check Importmap Pins (if applicable)

If the app uses importmap-rails, check `config/importmap.rb` for pinned versions:

```bash
cat config/importmap.rb | grep -E 'pin.*turbo|pin.*stimulus|pin.*@rails' || echo "No importmap pins found"
```

To update importmap pins:
```bash
bin/importmap pin @hotwired/turbo-rails
bin/importmap pin @hotwired/stimulus
```

### 7.5: JS Dependency Summary

Include in the upgrade summary:

```
### JavaScript Dependencies

**Package Manager**: [npm/yarn/importmap/none]

| Package | Current | Latest | Action |
|---------|---------|--------|--------|
| @hotwired/turbo-rails | 8.0.4 | 8.0.12 | Update recommended |
| @rails/actioncable | 7.1.0 | 8.0.0 | Update with Rails |
| ... | ... | ... | ... |

**Recommended JS Updates:**
- Run `npm update @hotwired/turbo-rails` (or yarn equivalent)
- Run `npm update @rails/actioncable @rails/activestorage` to match Rails version
```

---

## Step 8: Generate Upgrade Summary

Provide a comprehensive summary including all findings from Steps 1-7:

### Version Information
- Current version: X.Y.Z
- Latest version: A.B.C
- Upgrade type: [Patch/Minor/Major]

### Upgrade Complexity Assessment

Rate the upgrade as **Small**, **Medium**, or **Large** based on:

| Factor | Small | Medium | Large |
|--------|-------|--------|-------|
| Version jump | Patch only | Minor version | Major version |
| Breaking changes | None | Few, well-documented | Many, significant |
| Config changes | Minimal | Moderate | Extensive |
| Deprecations | None active | Some to address | Many requiring refactoring |
| Dependencies | Compatible | Some updates needed | Major dependency updates |

### Key Changes to Address

List the most important changes the user needs to handle:
1. Configuration file updates
2. Deprecated methods/features to update
3. New required dependencies
4. Database migrations needed
5. Breaking API changes

### Recommended Upgrade Steps

1. Update test suite and ensure passing
2. Review deprecation warnings in current version
3. Update Gemfile with new Rails version
4. Run `bundle update rails`
5. Update JavaScript dependencies (see JS Dependencies section)
6. **DO NOT run `rails app:update` directly** - use the selective merge process below
7. Run database migrations
8. Run test suite
9. Review and update deprecated code

### Resources

- Rails Upgrade Guide: https://guides.rubyonrails.org/upgrading_ruby_on_rails.html
- Rails Diff: https://railsdiff.org/{current}/{target}
- Release Notes: https://github.com/rails/rails/releases/tag/v{target}

---

## Step 9: Selective File Update (replaces `rails app:update`)

**IMPORTANT:** Do NOT run `rails app:update` as it overwrites files without considering local customizations. Instead, follow this selective merge process:

### 9.1: Detect Local Customizations

Before any upgrade, identify files with local customizations:

```bash
# Check for uncommitted changes
git status

# List config files that differ from a fresh Rails app
# These are the files we need to be careful with
git diff HEAD --name-only -- config/ bin/ public/
```

Create a mental list of files in these categories:
- **Custom config files**: Files with project-specific settings (i18n, mailer, etc.)
- **Modified bin scripts**: Scripts with custom behavior (bin/dev with foreman, etc.)
- **Standard files**: Files that haven't been customized

### 9.2: Analyze Required Changes from Railsdiff

Based on the railsdiff output from Step 6, categorize each changed file:

| Category | Action | Example |
|----------|--------|---------|
| **New files** | Create directly | `config/initializers/new_framework_defaults_X_Y.rb` |
| **Unchanged locally** | Safe to overwrite | `public/404.html` (if not customized) |
| **Customized locally** | Manual merge needed | `config/application.rb`, `bin/dev` |
| **Comment-only changes** | Usually skip | Minor comment updates in config files |

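Given the list of files touched by railsdiff and the output of `git diff HEAD --name-only`, this categorization can be sketched mechanically. The file names below are illustrative:

```javascript
// Categorize railsdiff files against the local working tree.
// `existingFiles` is what is already in the repo; `locallyModified`
// would come from `git diff HEAD --name-only`.
function categorize(diffFiles, existingFiles, locallyModified) {
  const modified = new Set(locallyModified);
  const existing = new Set(existingFiles);
  return diffFiles.map((file) => {
    if (!existing.has(file)) return { file, action: 'create' };
    if (modified.has(file)) return { file, action: 'manual-merge' };
    return { file, action: 'overwrite' };
  });
}

const plan = categorize(
  ['config/initializers/new_framework_defaults_8_0.rb', 'public/404.html', 'bin/dev'],
  ['public/404.html', 'bin/dev'],
  ['bin/dev'],
);
console.log(plan);
```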
### 9.3: Create Upgrade Plan

Present the user with a clear upgrade plan:

```
## Upgrade Plan: Rails X.Y.Z → A.B.C

### New Files (will be created):
- config/initializers/new_framework_defaults_A_B.rb
- bin/ci (new CI script)

### Safe to Update (no local customizations):
- public/400.html
- public/404.html
- public/500.html

### Needs Manual Merge (local customizations detected):
- config/application.rb
  └─ Local: i18n configuration
  └─ Rails: [describe new Rails changes if any]

- config/environments/development.rb
  └─ Local: letter_opener mailer config
  └─ Rails: [describe new Rails changes]

- bin/dev
  └─ Local: foreman + Procfile.dev setup
  └─ Rails: changed to simple ruby script

### Skip (comment-only or irrelevant changes):
- config/puma.rb (only comment changes)
```

### 9.4: Execute Upgrade Plan

After user confirms the plan:

#### For New Files:
Create them directly using the content from railsdiff or by extracting from a fresh Rails app:

```bash
# Generate a temporary fresh Rails app to extract new files
cd /tmp && rails new rails_template --skip-git --skip-bundle
# Then copy needed files
```

Or use the Rails generator for specific files:
```bash
bin/rails app:update:configs  # Only updates config files, still interactive
```

#### For Safe Updates:
Overwrite these files as they have no local customizations.

#### For Manual Merges:
For each file needing merge, show the user:

1. **Current local version** (their customizations)
2. **New Rails default** (from railsdiff)
3. **Suggested merged version** that:
   - Keeps all local customizations
   - Adds only essential new Rails functionality
   - Removes deprecated settings

Example merge for `config/application.rb`:
```ruby
# KEEP local customizations:
config.i18n.available_locales = [:de, :en]
config.i18n.default_locale = :de
config.i18n.fallbacks = [:en]

# ADD new Rails 8.1 settings if needed:
# (usually none required - new defaults come via new_framework_defaults file)
```

### 9.5: Handle Active Storage Migrations

After file updates, run any new migrations:

```bash
bin/rails db:migrate
```

Check for new migrations that were added:
```bash
ls -la db/migrate/ | tail -10
```

### 9.6: Verify Upgrade

After completing the merge:

1. Start the Rails server and check for errors:
```bash
bin/dev  # or bin/rails server
```

2. Check the Rails console:
```bash
bin/rails console
```

3. Run the test suite:
```bash
bin/rails test
```

4. Review deprecation warnings in logs

---

## Step 10: Finalize Framework Defaults

After verifying the app works:

1. Review `config/initializers/new_framework_defaults_X_Y.rb`
2. Enable each new default one by one, testing after each
3. Once all defaults are enabled and tested, update `config/application.rb`:
```ruby
config.load_defaults X.Y  # Update to new version
```
4. Delete the `new_framework_defaults_X_Y.rb` file

---

## Error Handling

- If `gh` CLI is not authenticated, instruct the user to run `gh auth login`
- If railsdiff.org doesn't have the exact versions, try with major.minor.0 versions
- If the app is already on the latest version, congratulate the user and note any upcoming releases
- If local customizations would be lost, ALWAYS stop and show the user what would be overwritten before proceeding

## Key Principles

1. **Never overwrite without checking** - Always check for local customizations first
2. **Preserve user intent** - Local customizations exist for a reason
3. **Minimal changes** - Only add what's necessary for the new Rails version
4. **Transparency** - Show the user exactly what will change before doing it
5. **Reversibility** - User should be able to `git checkout` to restore if needed
22
skills/skill-seekers/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: skill-seekers
description: "Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes."
source: "https://github.com/yusufkaraaslan/Skill_Seekers"
risk: safe
---

# Skill Seekers

## Overview

Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes.

## When to Use This Skill

Use this skill when you need to convert a documentation website, GitHub repository, or PDF into a Claude AI skill.

## Instructions

This skill provides guidance and patterns for automatically converting documentation websites, GitHub repositories, and PDFs into Claude AI skills.

For more information, see the [source repository](https://github.com/yusufkaraaslan/Skill_Seekers).
22
skills/superpowers-lab/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: superpowers-lab
description: "Lab environment for Claude superpowers"
source: "https://github.com/obra/superpowers-lab"
risk: safe
---

# Superpowers Lab

## Overview

Lab environment for Claude superpowers.

## When to Use This Skill

Use this skill when you need a lab environment for experimenting with Claude superpowers.

## Instructions

This skill provides guidance and patterns for working in a lab environment for Claude superpowers.

For more information, see the [source repository](https://github.com/obra/superpowers-lab).
275
skills/swiftui-expert-skill/SKILL.md
Normal file
@@ -0,0 +1,275 @@
---
name: swiftui-expert-skill
description: "Write, review, or improve SwiftUI code following best practices for state management, view composition, performance, modern APIs, Swift concurrency, and iOS 26+ Liquid Glass adoption. Use when building new SwiftUI features, refactoring existing views, reviewing code quality, or adopting modern SwiftUI patterns."
source: "https://github.com/AvdLee/SwiftUI-Agent-Skill/tree/main/swiftui-expert-skill"
risk: safe
---

# SwiftUI Expert Skill

## Overview

Use this skill to build, review, or improve SwiftUI features with correct state management, modern API usage, Swift concurrency best practices, optimal view composition, and iOS 26+ Liquid Glass styling. Prioritize native APIs, Apple design guidance, and performance-conscious patterns. This skill focuses on facts and best practices without enforcing specific architectural patterns.

## When to Use This Skill

Use this skill when:
- Building new SwiftUI features
- Refactoring existing SwiftUI views
- Reviewing SwiftUI code quality
- Adopting modern SwiftUI patterns
- Working with SwiftUI state management
- Implementing iOS 26+ Liquid Glass styling

## Workflow Decision Tree

### 1) Review existing SwiftUI code
- Check property wrapper usage against the selection guide (see `references/state-management.md`)
- Verify modern API usage (see `references/modern-apis.md`)
- Verify view composition follows extraction rules (see `references/view-structure.md`)
- Check performance patterns are applied (see `references/performance-patterns.md`)
- Verify list patterns use stable identity (see `references/list-patterns.md`)
- Inspect Liquid Glass usage for correctness and consistency (see `references/liquid-glass.md`)
- Validate iOS 26+ availability handling with sensible fallbacks

### 2) Improve existing SwiftUI code
- Audit state management for correct wrapper selection (prefer `@Observable` over `ObservableObject`)
- Replace deprecated APIs with modern equivalents (see `references/modern-apis.md`)
- Extract complex views into separate subviews (see `references/view-structure.md`)
- Refactor hot paths to minimize redundant state updates (see `references/performance-patterns.md`)
- Ensure ForEach uses stable identity (see `references/list-patterns.md`)
- Suggest image downsampling when `UIImage(data:)` is used (as an optional optimization, see `references/image-optimization.md`)
- Adopt Liquid Glass only when explicitly requested by the user

### 3) Implement new SwiftUI feature
- Design data flow first: identify owned vs injected state (see `references/state-management.md`)
- Use modern APIs (no deprecated modifiers or patterns, see `references/modern-apis.md`)
- Use `@Observable` for shared state (with `@MainActor` if not using default actor isolation)
- Structure views for optimal diffing (extract subviews early, keep views small, see `references/view-structure.md`)
|
||||
- Separate business logic into testable models (see `references/layout-best-practices.md`)
|
||||
- Apply glass effects after layout/appearance modifiers (see `references/liquid-glass.md`)
|
||||
- Gate iOS 26+ features with `#available` and provide fallbacks
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
### State Management
|
||||
- **Always prefer `@Observable` over `ObservableObject`** for new code
|
||||
- **Mark `@Observable` classes with `@MainActor`** unless using default actor isolation
|
||||
- **Always mark `@State` and `@StateObject` as `private`** (makes dependencies clear)
|
||||
- **Never declare passed values as `@State` or `@StateObject`** (they only accept initial values)
|
||||
- Use `@State` with `@Observable` classes (not `@StateObject`)
|
||||
- `@Binding` only when child needs to **modify** parent state
|
||||
- `@Bindable` for injected `@Observable` objects needing bindings
|
||||
- Use `let` for read-only values; `var` + `.onChange()` for reactive reads
|
||||
- Legacy: `@StateObject` for owned `ObservableObject`; `@ObservedObject` for injected
|
||||
- Nested `ObservableObject` doesn't work (pass nested objects directly); `@Observable` handles nesting fine
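
The state-management rules above can be sketched as follows (all type and property names are illustrative): an `@Observable` model owned with `@State` by the parent and injected into a child via `@Bindable`.

```swift
import SwiftUI
import Observation

@MainActor
@Observable
final class CounterModel {
    var count = 0
}

struct CounterScreen: View {
    // Owned @Observable state: @State (private), not @StateObject
    @State private var model = CounterModel()

    var body: some View {
        CounterControls(model: model)
    }
}

struct CounterControls: View {
    // Injected @Observable that needs bindings: @Bindable
    @Bindable var model: CounterModel

    var body: some View {
        Stepper("Count: \(model.count)", value: $model.count)
    }
}
```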

### Modern APIs
- Use `foregroundStyle()` instead of `foregroundColor()`
- Use `clipShape(.rect(cornerRadius:))` instead of `cornerRadius()`
- Use the `Tab` API instead of `tabItem()`
- Use `Button` instead of `onTapGesture()` (unless you need the location or tap count)
- Use `NavigationStack` instead of `NavigationView`
- Use `navigationDestination(for:)` for type-safe navigation
- Use the two-parameter or no-parameter `onChange()` variant
- Use `ImageRenderer` for rendering SwiftUI views
- Use `.sheet(item:)` instead of `.sheet(isPresented:)` for model-based content
- Sheets should own their actions and call `dismiss()` internally
- Use `ScrollViewReader` for programmatic scrolling with stable IDs
- Avoid `UIScreen.main.bounds` for sizing
- Avoid `GeometryReader` when alternatives exist (e.g., `containerRelativeFrame()`)

### Swift Best Practices
- Use modern `Text` formatting (`format:` parameters, not `String(format:)`)
- Use `localizedStandardContains()` for user-input filtering (not `contains()`)
- Prefer static member lookup (`.blue` vs. `Color.blue`)
- Use the `.task` modifier for automatic cancellation of async work
- Use `.task(id:)` for value-dependent tasks
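
A minimal sketch of `.task(id:)` (the `fetchResults` helper is hypothetical, standing in for any async data source): the task is cancelled and restarted whenever the id value changes.

```swift
import SwiftUI

struct SearchResults: View {
    var query: String
    @State private var results: [String] = []

    var body: some View {
        List(results, id: \.self) { Text($0) }
            // Cancels the in-flight task and re-runs when `query` changes
            .task(id: query) {
                results = await fetchResults(for: query)
            }
    }

    // Hypothetical async search; replace with your own data source
    func fetchResults(for query: String) async -> [String] {
        []
    }
}
```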

### View Composition
- **Prefer modifiers over conditional views** for state changes (maintains view identity)
- Extract complex views into separate subviews for better readability and performance
- Keep views small for optimal performance
- Keep the view `body` simple and pure (no side effects or complex logic)
- Use `@ViewBuilder` functions only for small, simple sections
- Prefer `@ViewBuilder let content: Content` over closure-based content properties
- Separate business logic into testable models (this is not about enforcing architectures)
- Action handlers should reference methods, not contain inline logic
- Use relative layout over hard-coded constants
- Views should work in any context (don't assume screen size or presentation style)
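
The `@ViewBuilder let content: Content` pattern above can be sketched as a small container (the `Card` name and styling are illustrative, and `.rect(cornerRadius:)` assumes iOS 17+):

```swift
import SwiftUI

// Container that takes content via `@ViewBuilder let`, not a stored closure
struct Card<Content: View>: View {
    @ViewBuilder let content: Content

    var body: some View {
        VStack(alignment: .leading) {
            content
        }
        .padding()
        .background(.thinMaterial, in: .rect(cornerRadius: 12))
    }
}

// Usage: the trailing closure feeds the synthesized @ViewBuilder initializer
struct CardExample: View {
    var body: some View {
        Card {
            Text("Title").bold()
            Text("Detail")
        }
    }
}
```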

### Performance
- Pass only needed values to views (avoid large "config" or "context" objects)
- Eliminate unnecessary dependencies to reduce update fan-out
- Check for value changes before assigning state in hot paths
- Avoid redundant state updates in `onReceive`, `onChange`, and scroll handlers
- Minimize work in frequently executed code paths
- Use `LazyVStack`/`LazyHStack` for large lists
- Use stable identity for `ForEach` (never `.indices` for dynamic content)
- Ensure a constant number of views per `ForEach` element
- Avoid inline filtering in `ForEach` (prefilter and cache)
- Avoid `AnyView` in list rows
- Consider POD (plain-old-data) views for fast diffing (or wrap expensive views in POD parents)
- Suggest image downsampling when `UIImage(data:)` is encountered (as an optional optimization)
- Avoid layout thrash (deep hierarchies, excessive `GeometryReader`)
- Gate frequent geometry updates by thresholds
- Use `Self._printChanges()` to debug unexpected view updates

### Liquid Glass (iOS 26+)
**Only adopt when explicitly requested by the user.**
- Use native `glassEffect`, `GlassEffectContainer`, and glass button styles
- Wrap multiple glass elements in `GlassEffectContainer`
- Apply `.glassEffect()` after layout and visual modifiers
- Use `.interactive()` only for tappable/focusable elements
- Use `glassEffectID` with `@Namespace` for morphing transitions

## Quick Reference

### Property Wrapper Selection (Modern)

| Wrapper | Use When |
|---------|----------|
| `@State` | Internal view state (must be `private`), or an owned `@Observable` class |
| `@Binding` | Child modifies parent's state |
| `@Bindable` | Injected `@Observable` needing bindings |
| `let` | Read-only value from parent |
| `var` | Read-only value watched via `.onChange()` |

**Legacy (Pre-iOS 17):**

| Wrapper | Use When |
|---------|----------|
| `@StateObject` | View owns an `ObservableObject` (use `@State` with `@Observable` instead) |
| `@ObservedObject` | View receives an `ObservableObject` |

### Modern API Replacements

| Deprecated | Modern Alternative |
|------------|-------------------|
| `foregroundColor()` | `foregroundStyle()` |
| `cornerRadius()` | `clipShape(.rect(cornerRadius:))` |
| `tabItem()` | `Tab` API |
| `onTapGesture()` | `Button` (unless you need the location or tap count) |
| `NavigationView` | `NavigationStack` |
| `onChange(of:) { value in }` | `onChange(of:) { old, new in }` or `onChange(of:) { }` |
| `fontWeight(.bold)` | `bold()` |
| `GeometryReader` | `containerRelativeFrame()` or `visualEffect()` |
| `showsIndicators: false` | `.scrollIndicators(.hidden)` |
| `String(format: "%.2f", value)` | `Text(value, format: .number.precision(.fractionLength(2)))` |
| `string.contains(search)` | `string.localizedStandardContains(search)` (for user input) |
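
Several of these replacements combined in one small view (the `ModernizedRow` name and styling are illustrative): format-based `Text`, `foregroundStyle()`, `clipShape(.rect(cornerRadius:))`, and the two-parameter `onChange()` variant.

```swift
import SwiftUI

struct ModernizedRow: View {
    var price: Double

    var body: some View {
        // Format-based Text instead of String(format: "%.2f", price)
        Text(price, format: .number.precision(.fractionLength(2)))
            .foregroundStyle(.secondary)          // not foregroundColor()
            .padding(8)
            .clipShape(.rect(cornerRadius: 8))    // not cornerRadius()
            .onChange(of: price) { oldValue, newValue in
                // Two-parameter onChange variant; react to the value change here
            }
    }
}
```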

### Liquid Glass Patterns
```swift
// Basic glass effect with fallback
if #available(iOS 26, *) {
    content
        .padding()
        .glassEffect(.regular.interactive(), in: .rect(cornerRadius: 16))
} else {
    content
        .padding()
        .background(.ultraThinMaterial, in: RoundedRectangle(cornerRadius: 16))
}

// Grouped glass elements
GlassEffectContainer(spacing: 24) {
    HStack(spacing: 24) {
        GlassButton1()
        GlassButton2()
    }
}

// Glass buttons
Button("Confirm") { }
    .buttonStyle(.glassProminent)
```

## Review Checklist

### State Management
- [ ] Using `@Observable` instead of `ObservableObject` for new code
- [ ] `@Observable` classes marked with `@MainActor` (if needed)
- [ ] Using `@State` with `@Observable` classes (not `@StateObject`)
- [ ] `@State` and `@StateObject` properties are `private`
- [ ] Passed values NOT declared as `@State` or `@StateObject`
- [ ] `@Binding` only where the child modifies parent state
- [ ] `@Bindable` for injected `@Observable` needing bindings
- [ ] Nested `ObservableObject` avoided (or passed directly to child views)

### Modern APIs (see `references/modern-apis.md`)
- [ ] Using `foregroundStyle()` instead of `foregroundColor()`
- [ ] Using `clipShape(.rect(cornerRadius:))` instead of `cornerRadius()`
- [ ] Using the `Tab` API instead of `tabItem()`
- [ ] Using `Button` instead of `onTapGesture()` (unless you need the location or tap count)
- [ ] Using `NavigationStack` instead of `NavigationView`
- [ ] Avoiding `UIScreen.main.bounds`
- [ ] Using alternatives to `GeometryReader` when possible
- [ ] Button images include text labels for accessibility

### Sheets & Navigation (see `references/sheet-navigation-patterns.md`)
- [ ] Using `.sheet(item:)` for model-based sheets
- [ ] Sheets own their actions and dismiss internally
- [ ] Using `navigationDestination(for:)` for type-safe navigation

### ScrollView (see `references/scroll-patterns.md`)
- [ ] Using `ScrollViewReader` with stable IDs for programmatic scrolling
- [ ] Using `.scrollIndicators(.hidden)` instead of the initializer parameter

### Text & Formatting (see `references/text-formatting.md`)
- [ ] Using modern `Text` formatting (not `String(format:)`)
- [ ] Using `localizedStandardContains()` for search filtering

### View Structure (see `references/view-structure.md`)
- [ ] Using modifiers instead of conditionals for state changes
- [ ] Complex views extracted to separate subviews
- [ ] Views kept small for performance
- [ ] Container views use `@ViewBuilder let content: Content`

### Performance (see `references/performance-patterns.md`)
- [ ] View `body` kept simple and pure (no side effects)
- [ ] Passing only needed values (not large config objects)
- [ ] Eliminating unnecessary dependencies
- [ ] State updates check for value changes before assigning
- [ ] Hot paths minimize state updates
- [ ] No object creation in `body`
- [ ] Heavy computation moved out of `body`

### List Patterns (see `references/list-patterns.md`)
- [ ] `ForEach` uses stable identity (not `.indices`)
- [ ] Constant number of views per `ForEach` element
- [ ] No inline filtering in `ForEach`
- [ ] No `AnyView` in list rows

### Layout (see `references/layout-best-practices.md`)
- [ ] Avoiding layout thrash (deep hierarchies, excessive `GeometryReader`)
- [ ] Gating frequent geometry updates by thresholds
- [ ] Business logic separated into testable models
- [ ] Action handlers reference methods (not inline logic)
- [ ] Using relative layout (not hard-coded constants)
- [ ] Views work in any context (context-agnostic)

### Liquid Glass (iOS 26+)
- [ ] `#available(iOS 26, *)` with a fallback for Liquid Glass
- [ ] Multiple glass views wrapped in `GlassEffectContainer`
- [ ] `.glassEffect()` applied after layout/appearance modifiers
- [ ] `.interactive()` only on user-interactable elements
- [ ] Shapes and tints consistent across related elements

## References
- `references/state-management.md` - Property wrappers and data flow (prefer `@Observable`)
- `references/view-structure.md` - View composition, extraction, and container patterns
- `references/performance-patterns.md` - Performance optimization techniques and anti-patterns
- `references/list-patterns.md` - ForEach identity, stability, and list best practices
- `references/layout-best-practices.md` - Layout patterns, context-agnostic views, and testability
- `references/modern-apis.md` - Modern API usage and deprecated replacements
- `references/sheet-navigation-patterns.md` - Sheet presentation and navigation patterns
- `references/scroll-patterns.md` - ScrollView patterns and programmatic scrolling
- `references/text-formatting.md` - Modern text formatting and string operations
- `references/image-optimization.md` - AsyncImage, image downsampling, and optimization
- `references/liquid-glass.md` - iOS 26+ Liquid Glass API

## Philosophy

This skill focuses on **facts and best practices**, not architectural opinions:
- We don't enforce specific architectures (e.g., MVVM, VIPER)
- We do encourage separating business logic for testability
- We prioritize modern APIs over deprecated ones
- We emphasize thread safety with `@MainActor` and `@Observable`
- We optimize for performance and maintainability
- We follow Apple's Human Interface Guidelines and API design patterns

517
skills/terraform-skill/SKILL.md
Normal file
@@ -0,0 +1,517 @@
---
name: terraform-skill
description: "Terraform infrastructure as code best practices"
license: Apache-2.0
metadata:
  author: "Anton Babenko"
  version: 1.5.0
source: "https://github.com/antonbabenko/terraform-skill"
risk: safe
---

# Terraform Skill for Claude

Comprehensive Terraform and OpenTofu guidance covering testing, modules, CI/CD, and production patterns. Based on terraform-best-practices.com and enterprise experience.

## When to Use This Skill

**Activate this skill when:**
- Creating new Terraform or OpenTofu configurations or modules
- Setting up testing infrastructure for IaC code
- Deciding between testing approaches (validate, plan, frameworks)
- Structuring multi-environment deployments
- Implementing CI/CD for infrastructure as code
- Reviewing or refactoring existing Terraform/OpenTofu projects
- Choosing between module patterns or state management approaches

**Don't use this skill for:**
- Basic Terraform/OpenTofu syntax questions (Claude knows this)
- Provider-specific API reference (link to docs instead)
- Cloud platform questions unrelated to Terraform/OpenTofu

## Core Principles

### 1. Code Structure Philosophy

**Module Hierarchy:**

| Type | When to Use | Scope |
|------|-------------|-------|
| **Resource Module** | Single logical group of connected resources | VPC + subnets, security group + rules |
| **Infrastructure Module** | Collection of resource modules for a purpose | Multiple resource modules in one region/account |
| **Composition** | Complete infrastructure | Spans multiple regions/accounts |

**Hierarchy:** Resource → Resource Module → Infrastructure Module → Composition

**Directory Structure:**
```
environments/   # Environment-specific configurations
├── prod/
├── staging/
└── dev/

modules/        # Reusable modules
├── networking/
├── compute/
└── data/

examples/       # Module usage examples (also serve as tests)
├── complete/
└── minimal/
```

**Key principles from terraform-best-practices.com:**
- Separate **environments** (prod, staging) from **modules** (reusable components)
- Use **examples/** as both documentation and integration-test fixtures
- Keep modules small and focused (single responsibility)

**For detailed module architecture, see:** [Code Patterns: Module Types & Hierarchy](references/code-patterns.md)

### 2. Naming Conventions

**Resources:**
```hcl
# Good: Descriptive, contextual
resource "aws_instance" "web_server" { }
resource "aws_s3_bucket" "application_logs" { }

# Good: "this" for singleton resources (only one of that type)
resource "aws_vpc" "this" { }
resource "aws_security_group" "this" { }

# Avoid: Generic names for non-singletons
resource "aws_instance" "main" { }
resource "aws_s3_bucket" "bucket" { }
```

**Singleton Resources:**

Use `"this"` when your module creates only one resource of that type:

✅ DO:
```hcl
resource "aws_vpc" "this" {}            # Module creates one VPC
resource "aws_security_group" "this" {} # Module creates one SG
```

❌ DON'T use `"this"` for multiple resources:
```hcl
resource "aws_subnet" "this" {} # If creating multiple subnets
```

Use descriptive names when creating multiple resources of the same type.

**Variables:**
```hcl
# Prefix with context when needed
var.vpc_cidr_block          # Not just "cidr"
var.database_instance_class # Not just "instance_class"
```

**Files:**
- `main.tf` - Primary resources
- `variables.tf` - Input variables
- `outputs.tf` - Output values
- `versions.tf` - Provider versions
- `data.tf` - Data sources (optional)

## Testing Strategy Framework

### Decision Matrix: Which Testing Approach?

| Your Situation | Recommended Approach | Tools | Cost |
|----------------|---------------------|-------|------|
| **Quick syntax check** | Static analysis | `terraform validate`, `fmt` | Free |
| **Pre-commit validation** | Static + lint | `validate`, `tflint`, `trivy`, `checkov` | Free |
| **Terraform 1.6+, simple logic** | Native test framework | Built-in `terraform test` | Free-Low |
| **Pre-1.6, or Go expertise** | Integration testing | Terratest | Low-Med |
| **Security/compliance focus** | Policy as code | OPA, Sentinel | Free |
| **Cost-sensitive workflow** | Mock providers (1.7+) | Native tests + mocking | Free |
| **Multi-cloud, complex** | Full integration | Terratest + real infra | Med-High |

### Testing Pyramid for Infrastructure

```
        /\
       /  \        End-to-End Tests (Expensive)
      /____\       - Full environment deployment
     /      \      - Production-like setup
    /________\
   /          \    Integration Tests (Moderate)
  /____________\   - Module testing in isolation
 /              \  - Real resources in a test account
/________________\ Static Analysis (Cheap)
                   - validate, fmt, lint
                   - Security scanning
```

### Native Test Best Practices (1.6+)

**Before generating test code:**

1. **Validate schemas with the Terraform MCP:**
   ```
   Search provider docs → Get resource schema → Identify block types
   ```

2. **Choose the correct command mode:**
   - `command = plan` - Fast; for input validation
   - `command = apply` - Required for computed values and set-type blocks

3. **Handle set-type blocks correctly:**
   - Cannot index with `[0]`
   - Use `for` expressions to iterate
   - Or use `command = apply` to materialize

**Common patterns:**
- S3 encryption rules: **set** (use `for` expressions)
- Lifecycle transitions: **set** (use `for` expressions)
- IAM policy statements: **set** (use `for` expressions)
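
The rules above can be sketched in a native test file. This is a hedged sketch: the module outputs, variable names, and resource addresses are illustrative and assume the module under test declares them.

```hcl
# tests/defaults.tftest.hcl — output/variable/resource names are illustrative

run "validates_inputs" {
  # plan is fast and sufficient for input validation
  command = plan

  variables {
    environment = "dev"
  }

  assert {
    condition     = output.environment == "dev"
    error_message = "Environment output should echo the input."
  }
}

run "encryption_rules" {
  # apply is required to materialize computed values and set-type blocks
  command = apply

  assert {
    # Set-type blocks cannot be indexed with [0]; iterate with a for expression
    condition = alltrue([
      for r in aws_s3_bucket_server_side_encryption_configuration.this.rule :
      r.bucket_key_enabled
    ])
    error_message = "All encryption rules must enable bucket keys."
  }
}
```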
|
||||
|
||||
**For detailed testing guides, see:**
|
||||
- **[Testing Frameworks Guide](references/testing-frameworks.md)** - Deep dive into static analysis, native tests, and Terratest
|
||||
- **[Quick Reference](references/quick-reference.md#testing-approach-selection)** - Decision flowchart and command cheat sheet
|
||||
|
||||
## Code Structure Standards
|
||||
|
||||
### Resource Block Ordering
|
||||
|
||||
**Strict ordering for consistency:**
|
||||
1. `count` or `for_each` FIRST (blank line after)
|
||||
2. Other arguments
|
||||
3. `tags` as last real argument
|
||||
4. `depends_on` after tags (if needed)
|
||||
5. `lifecycle` at the very end (if needed)
|
||||
|
||||
```hcl
|
||||
# ✅ GOOD - Correct ordering
|
||||
resource "aws_nat_gateway" "this" {
|
||||
count = var.create_nat_gateway ? 1 : 0
|
||||
|
||||
allocation_id = aws_eip.this[0].id
|
||||
subnet_id = aws_subnet.public[0].id
|
||||
|
||||
tags = {
|
||||
Name = "${var.name}-nat"
|
||||
}
|
||||
|
||||
depends_on = [aws_internet_gateway.this]
|
||||
|
||||
lifecycle {
|
||||
create_before_destroy = true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Variable Block Ordering
|
||||
|
||||
1. `description` (ALWAYS required)
|
||||
2. `type`
|
||||
3. `default`
|
||||
4. `validation`
|
||||
5. `nullable` (when setting to false)
|
||||
|
||||
```hcl
|
||||
variable "environment" {
|
||||
description = "Environment name for resource tagging"
|
||||
type = string
|
||||
default = "dev"
|
||||
|
||||
validation {
|
||||
condition = contains(["dev", "staging", "prod"], var.environment)
|
||||
error_message = "Environment must be one of: dev, staging, prod."
|
||||
}
|
||||
|
||||
nullable = false
|
||||
}
|
||||
```
|
||||
|
||||
**For complete structure guidelines, see:** [Code Patterns: Block Ordering & Structure](references/code-patterns.md#block-ordering--structure)
|
||||
|
||||
## Count vs For_Each: When to Use Each
|
||||
|
||||
### Quick Decision Guide
|
||||
|
||||
| Scenario | Use | Why |
|
||||
|----------|-----|-----|
|
||||
| Boolean condition (create or don't) | `count = condition ? 1 : 0` | Simple on/off toggle |
|
||||
| Simple numeric replication | `count = 3` | Fixed number of identical resources |
|
||||
| Items may be reordered/removed | `for_each = toset(list)` | Stable resource addresses |
|
||||
| Reference by key | `for_each = map` | Named access to resources |
|
||||
| Multiple named resources | `for_each` | Better maintainability |
|
||||
|
||||
### Common Patterns
|
||||
|
||||
**Boolean conditions:**
|
||||
```hcl
|
||||
# ✅ GOOD - Boolean condition
|
||||
resource "aws_nat_gateway" "this" {
|
||||
count = var.create_nat_gateway ? 1 : 0
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
**Stable addressing with for_each:**
|
||||
```hcl
|
||||
# ✅ GOOD - Removing "us-east-1b" only affects that subnet
|
||||
resource "aws_subnet" "private" {
|
||||
for_each = toset(var.availability_zones)
|
||||
|
||||
availability_zone = each.key
|
||||
# ...
|
||||
}
|
||||
|
||||
# ❌ BAD - Removing middle AZ recreates all subsequent subnets
|
||||
resource "aws_subnet" "private" {
|
||||
count = length(var.availability_zones)
|
||||
|
||||
availability_zone = var.availability_zones[count.index]
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
**For migration guides and detailed examples, see:** [Code Patterns: Count vs For_Each](references/code-patterns.md#count-vs-for_each-deep-dive)
|
||||
|
||||
## Locals for Dependency Management
|
||||
|
||||
**Use locals to ensure correct resource deletion order:**
|
||||
|
||||
```hcl
|
||||
# Problem: Subnets might be deleted after CIDR blocks, causing errors
|
||||
# Solution: Use try() in locals to hint deletion order
|
||||
|
||||
locals {
|
||||
# References secondary CIDR first, falling back to VPC
|
||||
# Forces Terraform to delete subnets before CIDR association
|
||||
vpc_id = try(
|
||||
aws_vpc_ipv4_cidr_block_association.this[0].vpc_id,
|
||||
aws_vpc.this.id,
|
||||
""
|
||||
)
|
||||
}
|
||||
|
||||
resource "aws_vpc" "this" {
|
||||
cidr_block = "10.0.0.0/16"
|
||||
}
|
||||
|
||||
resource "aws_vpc_ipv4_cidr_block_association" "this" {
|
||||
count = var.add_secondary_cidr ? 1 : 0
|
||||
|
||||
vpc_id = aws_vpc.this.id
|
||||
cidr_block = "10.1.0.0/16"
|
||||
}
|
||||
|
||||
resource "aws_subnet" "public" {
|
||||
vpc_id = local.vpc_id # Uses local, not direct reference
|
||||
cidr_block = "10.1.0.0/24"
|
||||
}
|
||||
```
|
||||
|
||||
**Why this matters:**
|
||||
- Prevents deletion errors when destroying infrastructure
|
||||
- Ensures correct dependency order without explicit `depends_on`
|
||||
- Particularly useful for VPC configurations with secondary CIDR blocks
|
||||
|
||||
**For detailed examples, see:** [Code Patterns: Locals for Dependency Management](references/code-patterns.md#locals-for-dependency-management)
|
||||
|
||||
## Module Development
|
||||
|
||||
### Standard Module Structure
|
||||
|
||||
```
|
||||
my-module/
|
||||
├── README.md # Usage documentation
|
||||
├── main.tf # Primary resources
|
||||
├── variables.tf # Input variables with descriptions
|
||||
├── outputs.tf # Output values
|
||||
├── versions.tf # Provider version constraints
|
||||
├── examples/
|
||||
│ ├── minimal/ # Minimal working example
|
||||
│ └── complete/ # Full-featured example
|
||||
└── tests/ # Test files
|
||||
└── module_test.tftest.hcl # Or .go
|
||||
```
|
||||
|
||||
### Best Practices Summary
|
||||
|
||||
**Variables:**
|
||||
- ✅ Always include `description`
|
||||
- ✅ Use explicit `type` constraints
|
||||
- ✅ Provide sensible `default` values where appropriate
|
||||
- ✅ Add `validation` blocks for complex constraints
|
||||
- ✅ Use `sensitive = true` for secrets
|
||||
|
||||
**Outputs:**
|
||||
- ✅ Always include `description`
|
||||
- ✅ Mark sensitive outputs with `sensitive = true`
|
||||
- ✅ Consider returning objects for related values
|
||||
- ✅ Document what consumers should do with each output
|
||||
|
||||
**For detailed module patterns, see:**
|
||||
- **[Module Patterns Guide](references/module-patterns.md)** - Variable best practices, output design, ✅ DO vs ❌ DON'T patterns
|
||||
- **[Quick Reference](references/quick-reference.md#common-patterns)** - Resource naming, variable naming, file organization
|
||||
|
||||
## CI/CD Integration
|
||||
|
||||
### Recommended Workflow Stages
|
||||
|
||||
1. **Validate** - Format check + syntax validation + linting
|
||||
2. **Test** - Run automated tests (native or Terratest)
|
||||
3. **Plan** - Generate and review execution plan
|
||||
4. **Apply** - Execute changes (with approvals for production)
|
||||
|
||||
### Cost Optimization Strategy
|
||||
|
||||
1. **Use mocking for PR validation** (free)
|
||||
2. **Run integration tests only on main branch** (controlled cost)
|
||||
3. **Implement auto-cleanup** (prevent orphaned resources)
|
||||
4. **Tag all test resources** (track spending)
|
||||
|
||||
**For complete CI/CD templates, see:**
|
||||
- **[CI/CD Workflows Guide](references/ci-cd-workflows.md)** - GitHub Actions, GitLab CI, Atlantis integration, cost optimization
|
||||
- **[Quick Reference](references/quick-reference.md#troubleshooting-guide)** - Common CI/CD issues and solutions
|
||||
|
||||
## Security & Compliance
|
||||
|
||||
### Essential Security Checks
|
||||
|
||||
```bash
|
||||
# Static security scanning
|
||||
trivy config .
|
||||
checkov -d .
|
||||
```
|
||||
|
||||
### Common Issues to Avoid
|
||||
|
||||
❌ **Don't:**
|
||||
- Store secrets in variables
|
||||
- Use default VPC
|
||||
- Skip encryption
|
||||
- Open security groups to 0.0.0.0/0
|
||||
|
||||
✅ **Do:**
|
||||
- Use AWS Secrets Manager / Parameter Store
|
||||
- Create dedicated VPCs
|
||||
- Enable encryption at rest
|
||||
- Use least-privilege security groups
|
||||
|
||||
**For detailed security guidance, see:**
|
||||
- **[Security & Compliance Guide](references/security-compliance.md)** - Trivy/Checkov integration, secrets management, state file security, compliance testing
|
||||
|
||||
## Version Management
|
||||
|
||||
### Version Constraint Syntax
|
||||
|
||||
```hcl
|
||||
version = "5.0.0" # Exact (avoid - inflexible)
|
||||
version = "~> 5.0" # Recommended: 5.0.x only
|
||||
version = ">= 5.0" # Minimum (risky - breaking changes)
|
||||
```
|
||||
|
||||
### Strategy by Component
|
||||
|
||||
| Component | Strategy | Example |
|
||||
|-----------|----------|---------|
|
||||
| **Terraform** | Pin minor version | `required_version = "~> 1.9"` |
|
||||
| **Providers** | Pin major version | `version = "~> 5.0"` |
|
||||
| **Modules (prod)** | Pin exact version | `version = "5.1.2"` |
|
||||
| **Modules (dev)** | Allow patch updates | `version = "~> 5.1"` |
|
||||
|
||||
### Update Workflow
|
||||
|
||||
```bash
|
||||
# Lock versions initially
|
||||
terraform init # Creates .terraform.lock.hcl
|
||||
|
||||
# Update to latest within constraints
|
||||
terraform init -upgrade # Updates providers
|
||||
|
||||
# Review and test
|
||||
terraform plan
|
||||
```
|
||||
|
||||
**For detailed version management, see:** [Code Patterns: Version Management](references/code-patterns.md#version-management)
|
||||
|
||||
## Modern Terraform Features (1.0+)
|
||||
|
||||
### Feature Availability by Version
|
||||
|
||||
| Feature | Version | Use Case |
|
||||
|---------|---------|----------|
|
||||
| `try()` function | 0.13+ | Safe fallbacks, replaces `element(concat())` |
|
||||
| `nullable = false` | 1.1+ | Prevent null values in variables |
|
||||
| `moved` blocks | 1.1+ | Refactor without destroy/recreate |
|
||||
| `optional()` with defaults | 1.3+ | Optional object attributes |
|
||||
| Native testing | 1.6+ | Built-in test framework |
|
||||
| Mock providers | 1.7+ | Cost-free unit testing |
|
||||
| Provider functions | 1.8+ | Provider-specific data transformation |
|
||||
| Cross-variable validation | 1.9+ | Validate relationships between variables |
|
||||
| Write-only arguments | 1.11+ | Secrets never stored in state |
|
||||
|
||||
### Quick Examples
|
||||
|
||||
```hcl
|
||||
# try() - Safe fallbacks (0.13+)
|
||||
output "sg_id" {
|
||||
value = try(aws_security_group.this[0].id, "")
|
||||
}
|
||||
|
||||
# optional() - Optional attributes with defaults (1.3+)
|
||||
variable "config" {
|
||||
type = object({
|
||||
name = string
|
||||
timeout = optional(number, 300) # Default: 300
|
||||
})
|
||||
}
|
||||
|
||||
# Cross-variable validation (1.9+)
|
||||
variable "environment" { type = string }
|
||||
variable "backup_days" {
|
||||
type = number
|
||||
validation {
|
||||
condition = var.environment == "prod" ? var.backup_days >= 7 : true
|
||||
error_message = "Production requires backup_days >= 7"
|
||||
}
|
||||
}
|
||||
```

**For complete patterns and examples, see:** [Code Patterns: Modern Terraform Features](references/code-patterns.md#modern-terraform-features-10)

## Version-Specific Guidance

### Terraform 1.0-1.5
- Use Terratest for testing
- No native testing framework available
- Focus on static analysis and plan validation

### Terraform 1.6+ / OpenTofu 1.6+
- **New:** Native `terraform test` / `tofu test` command
- Consider migrating from external frameworks for simple tests
- Keep Terratest only for complex integration tests

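The native test command reads `*.tftest.hcl` files; a minimal sketch (file, resource, and attribute names are hypothetical):

```hcl
# tests/plan.tftest.hcl (Terraform 1.6+ / OpenTofu 1.6+)
run "plan_is_valid" {
  command = plan

  assert {
    condition     = aws_instance.this.instance_type == "t3.micro"
    error_message = "Unexpected instance type."
  }
}
```
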
### Terraform 1.7+ / OpenTofu 1.7+
- **New:** Mock providers for unit testing
- Reduce cost by mocking external dependencies
- Use real integration tests for final validation

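Mock providers let `terraform test` run without real credentials; a minimal sketch (file name hypothetical):

```hcl
# tests/unit.tftest.hcl (Terraform 1.7+ / OpenTofu 1.7+)
# All aws resources receive mocked values; no cloud API calls are made.
mock_provider "aws" {}

run "unit_test" {
  command = plan
}
```
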
### Terraform vs OpenTofu

Both are fully supported by this skill. For licensing, governance, and feature comparison, see [Quick Reference: Terraform vs OpenTofu](references/quick-reference.md#terraform-vs-opentofu-comparison).

## Detailed Guides

This skill uses **progressive disclosure**: essential information lives in this main file, while detailed guides are loaded when needed.

📚 **Reference Files:**
- **[Testing Frameworks](references/testing-frameworks.md)** - In-depth guide to static analysis, native tests, and Terratest
- **[Module Patterns](references/module-patterns.md)** - Module structure, variable/output best practices, ✅ DO vs ❌ DON'T patterns
- **[CI/CD Workflows](references/ci-cd-workflows.md)** - GitHub Actions, GitLab CI templates, cost optimization, automated cleanup
- **[Security & Compliance](references/security-compliance.md)** - Trivy/Checkov integration, secrets management, compliance testing
- **[Quick Reference](references/quick-reference.md)** - Command cheat sheets, decision flowcharts, troubleshooting guide

**How to use:** When you need detailed information on a topic, reference the appropriate guide. Claude will load it on demand to provide comprehensive guidance.

## License

This skill is licensed under the **Apache License 2.0**. See the LICENSE file for full terms.

**Copyright © 2026 Anton Babenko**
22
skills/threejs-skills/SKILL.md
Normal file
22
skills/threejs-skills/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: threejs-skills
description: "Three.js skills for creating 3D elements and interactive experiences"
source: "https://github.com/CloudAI-X/threejs-skills"
risk: safe
---

# Threejs Skills

## Overview

Three.js skills for creating 3D elements and interactive experiences.

## When to Use This Skill

Use this skill when creating 3D elements or interactive experiences with Three.js.

## Instructions

This skill provides guidance and patterns for building 3D elements and interactive experiences with Three.js.

For more information, see the [source repository](https://github.com/CloudAI-X/threejs-skills).
318
skills/tool-design/SKILL.md
Normal file
318
skills/tool-design/SKILL.md
Normal file
@@ -0,0 +1,318 @@
---
name: tool-design
description: "Build tools that agents can use effectively, including architectural reduction patterns"
source: "https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/tree/main/skills/tool-design"
risk: safe
---

## When to Use This Skill

Use this skill when designing, debugging, or optimizing tools for agent systems, including applying architectural reduction patterns.

# Tool Design for Agents

Tools are the primary mechanism through which agents interact with the world. They define the contract between deterministic systems and non-deterministic agents. Unlike traditional software APIs designed for developers, tool APIs must be designed for language models that reason about intent, infer parameter values, and generate calls from natural language requests. Poor tool design creates failure modes that no amount of prompt engineering can fix. Effective tool design follows specific principles that account for how agents perceive and use tools.

## When to Activate

Activate this skill when:
- Creating new tools for agent systems
- Debugging tool-related failures or misuse
- Optimizing existing tool sets for better agent performance
- Designing tool APIs from scratch
- Evaluating third-party tools for agent integration
- Standardizing tool conventions across a codebase

## Core Concepts

Tools are contracts between deterministic systems and non-deterministic agents. The consolidation principle states that if a human engineer cannot definitively say which tool should be used in a given situation, an agent cannot be expected to do better. Effective tool descriptions are prompt engineering that shapes agent behavior.

Key principles include: clear descriptions that answer what, when, and what returns; response formats that balance completeness and token efficiency; error messages that enable recovery; and consistent conventions that reduce cognitive load.

## Detailed Topics

### The Tool-Agent Interface

**Tools as Contracts**
Tools are contracts between deterministic systems and non-deterministic agents. When humans call APIs, they understand the contract and make appropriate requests. Agents must infer the contract from descriptions and generate calls that match expected formats.

This fundamental difference requires rethinking API design. The contract must be unambiguous, examples must illustrate expected patterns, and error messages must guide correction. Every ambiguity in tool definitions becomes a potential failure mode.

**Tool Description as Prompt**
Tool descriptions are loaded into agent context and collectively steer behavior. The descriptions are not just documentation: they are prompt engineering that shapes how agents reason about tool use.

Poor descriptions like "Search the database" with cryptic parameter names force agents to guess. Optimized descriptions include usage context, examples, and defaults. The description answers: what the tool does, when to use it, and what it produces.

**Namespacing and Organization**
As tool collections grow, organization becomes critical. Namespacing groups related tools under common prefixes, helping agents select appropriate tools at the right time.

Namespacing creates clear boundaries between functionality. When an agent needs database information, it routes to the database namespace. When it needs web search, it routes to the web namespace.
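
A minimal sketch of the idea in Python (tool names and descriptions are invented for illustration):

```python
# Hypothetical namespaced tool registry: common prefixes group related tools.
TOOLS = {
    "db.query": "Run a read-only SQL query against the analytics database.",
    "db.schema": "Return the schema for a named table.",
    "web.search": "Search the public web and return top results.",
    "web.fetch": "Fetch a URL and return its text content.",
}

def tools_in_namespace(namespace: str) -> list[str]:
    """Return the tool names under a namespace prefix, e.g. 'db'."""
    return [name for name in TOOLS if name.startswith(namespace + ".")]

print(tools_in_namespace("db"))  # → ['db.query', 'db.schema']
```

When the agent needs database information, only the `db.*` descriptions compete for its attention.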

### The Consolidation Principle

**Single Comprehensive Tools**
The consolidation principle states that if a human engineer cannot definitively say which tool should be used in a given situation, an agent cannot be expected to do better. This leads to a preference for single comprehensive tools over multiple narrow tools.

Instead of implementing list_users, list_events, and create_event, implement schedule_event, which finds availability and schedules the event. The comprehensive tool handles the full workflow internally rather than requiring agents to chain multiple calls.
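
A hedged sketch of such a consolidated tool; the calendar backend is stubbed with an in-memory dict purely for illustration:

```python
# Busy slots per user; a stand-in for a real calendar backend.
CALENDARS = {"alice": ["09:00"], "bob": ["10:00"]}

def schedule_event(attendees, time, title):
    """Find availability and schedule in one call, instead of making the
    agent chain list_users -> list_events -> create_event."""
    for person in attendees:
        if time in CALENDARS.get(person, []):
            return {"status": "conflict", "person": person}
    for person in attendees:
        CALENDARS.setdefault(person, []).append(time)
    return {"status": "scheduled", "title": title, "time": time}

print(schedule_event(["alice", "bob"], "11:00", "Sync")["status"])  # → scheduled
```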

**Why Consolidation Works**
Agents have limited context and attention. Each tool in the collection competes for attention in the tool selection phase. Each tool adds description tokens that consume context budget. Overlapping functionality creates ambiguity about which tool to use.

Consolidation reduces token consumption by eliminating redundant descriptions. It eliminates ambiguity by having one tool cover each workflow. It reduces tool selection complexity by shrinking the effective tool set.

**When Not to Consolidate**
Consolidation is not universally correct. Tools with fundamentally different behaviors should remain separate. Tools used in different contexts benefit from separation. Tools that might be called independently should not be artificially bundled.

### Architectural Reduction

The consolidation principle, taken to its logical extreme, leads to architectural reduction: removing most specialized tools in favor of primitive, general-purpose capabilities. Production evidence shows this approach can outperform sophisticated multi-tool architectures.

**The File System Agent Pattern**
Instead of building custom tools for data exploration, schema lookup, and query validation, provide direct file system access through a single command execution tool. The agent uses standard Unix utilities (grep, cat, find, ls) to explore, understand, and operate on your system.

This works because:
1. File systems are a proven abstraction that models understand deeply
2. Standard tools have predictable, well-documented behavior
3. The agent can chain primitives flexibly rather than being constrained to predefined workflows
4. Good documentation in files replaces the need for summarization tools
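
The single command execution tool behind this pattern can be as small as the following sketch (the timeout value and return shape are design choices, not a standard):

```python
import subprocess

def run_command(command: str, timeout: int = 30) -> dict:
    """The one primitive tool: execute a shell command and return its
    output so the agent can chain grep/cat/find/ls itself."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {
        "stdout": result.stdout,
        "stderr": result.stderr,
        "exit_code": result.returncode,
    }

print(run_command("echo hello")["stdout"].strip())  # → hello
```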

**When Reduction Outperforms Complexity**
Reduction works when:
- Your data layer is well-documented and consistently structured
- The model has sufficient reasoning capability to navigate complexity
- Your specialized tools were constraining rather than enabling the model
- You're spending more time maintaining scaffolding than improving outcomes

Reduction fails when:
- Your underlying data is messy, inconsistent, or poorly documented
- The domain requires specialized knowledge the model lacks
- Safety constraints require limiting what the agent can do
- Operations are truly complex and benefit from structured workflows

**Stop Constraining Reasoning**
A common anti-pattern is building tools to "protect" the model from complexity: pre-filtering context, constraining options, wrapping interactions in validation logic. These guardrails often become liabilities as models improve.

The question to ask: are your tools enabling new capabilities, or are they constraining reasoning the model could handle on its own?

**Build for Future Models**
Models improve faster than tooling can keep up. An architecture optimized for today's model may be over-constrained for tomorrow's. Build minimal architectures that can benefit from model improvements rather than sophisticated architectures that lock in current limitations.

See [Architectural Reduction Case Study](./references/architectural_reduction.md) for production evidence.

### Tool Description Engineering

**Description Structure**
Effective tool descriptions answer four questions:

What does the tool do? Clear, specific description of functionality. Avoid vague language like "helps with" or "can be used for." State exactly what the tool accomplishes.

When should it be used? Specific triggers and contexts. Include both direct triggers ("User asks about pricing") and indirect signals ("Need current market rates").

What inputs does it accept? Parameter descriptions with types, constraints, and defaults. Explain what each parameter controls.

What does it return? Output format and structure. Include examples of successful responses and error conditions.

**Default Parameter Selection**
Defaults should reflect common use cases. They reduce agent burden by eliminating unnecessary parameter specification. They prevent errors from omitted parameters.

### Response Format Optimization

Tool response size significantly impacts context usage. Implementing response format options gives agents control over verbosity.

Concise format returns essential fields only, appropriate for confirmation or basic information. Detailed format returns complete objects with all fields, appropriate when full context is needed for decisions.

Include guidance in tool descriptions about when to use each format. Agents learn to select appropriate formats based on task requirements.
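
A minimal sketch of such a format option (field names are illustrative):

```python
# Stand-in record; a real tool would look this up by customer_id.
CUSTOMER = {
    "id": "CUST-000001", "name": "Ada", "email": "ada@example.com",
    "address": "1 Main St", "order_history": [], "notes": "",
}

def get_customer(customer_id: str, format: str = "concise") -> dict:
    """'concise' returns key fields only; 'detailed' returns the full record."""
    record = dict(CUSTOMER)
    if format == "concise":
        return {k: record[k] for k in ("id", "name", "email")}
    return record
```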

### Error Message Design

Error messages serve two audiences: developers debugging issues and agents recovering from failures. For agents, error messages must be actionable. They must tell the agent what went wrong and how to correct it.

Design error messages that enable recovery. For retryable errors, include retry guidance. For input errors, include the corrected format. For missing data, include what's needed.
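
One way to sketch a recovery-oriented error payload (tool and field names are invented):

```python
def lookup_order(order_id: str) -> dict:
    """Return order data, or an error payload the agent can act on."""
    if not order_id.startswith("ORD-"):
        return {
            "error": "INVALID_FORMAT",
            "message": "order_id must match 'ORD-######'.",
            "example": "ORD-000042",  # the corrected format to copy
            "retryable": True,        # safe to fix the input and retry
        }
    return {"order_id": order_id, "status": "shipped"}

print(lookup_order("42")["error"])  # → INVALID_FORMAT
```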

### Tool Definition Schema

Use a consistent schema across all tools. Establish naming conventions: a verb-noun pattern for tool names, consistent parameter names across tools, and consistent return field names.
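
A sketch of one such shared shape (the exact fields are a design choice, not a standard):

```python
# Every tool definition carries the same top-level keys.
TOOL_SCHEMA = {
    "name": "get_customer",  # verb-noun naming convention
    "description": "Retrieve customer information by ID.",
    "parameters": {
        "customer_id": {"type": "string", "pattern": "CUST-######"},
        "format": {
            "type": "string",
            "enum": ["concise", "detailed"],
            "default": "concise",
        },
    },
    "returns": "Customer object with requested fields",
}

print(sorted(TOOL_SCHEMA))  # → ['description', 'name', 'parameters', 'returns']
```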

### Tool Collection Design

Research shows tool description overlap causes model confusion. More tools do not always lead to better outcomes. A reasonable guideline is 10-20 tools for most applications. If more are needed, use namespacing to create logical groupings.

Implement mechanisms to help agents select the right tool: tool grouping, example-based selection, and hierarchy with umbrella tools that route to specialized sub-tools.

### MCP Tool Naming Requirements

When using MCP (Model Context Protocol) tools, always use fully qualified tool names to avoid "tool not found" errors.

Format: `ServerName:tool_name`

```python
# Correct: Fully qualified names
"Use the BigQuery:bigquery_schema tool to retrieve table schemas."
"Use the GitHub:create_issue tool to create issues."

# Incorrect: Unqualified names
"Use the bigquery_schema tool..."  # May fail with multiple servers
```

Without the server prefix, agents may fail to locate tools, especially when multiple MCP servers are available. Establish naming conventions that include server context in all tool references.

### Using Agents to Optimize Tools

Claude can optimize its own tools. When given a tool and observed failure modes, it diagnoses issues and suggests improvements. Production testing shows this approach achieves a 40% reduction in task completion time by helping future agents avoid mistakes.

**The Tool-Testing Agent Pattern**:

```python
def optimize_tool_description(tool_spec, failure_examples):
    """
    Use an agent to analyze tool failures and improve descriptions.

    Process:
    1. Agent attempts to use tool across diverse tasks
    2. Collect failure modes and friction points
    3. Agent analyzes failures and proposes improvements
    4. Test improved descriptions against same tasks
    """
    prompt = f"""
    Analyze this tool specification and the observed failures.

    Tool: {tool_spec}

    Failures observed:
    {failure_examples}

    Identify:
    1. Why agents are failing with this tool
    2. What information is missing from the description
    3. What ambiguities cause incorrect usage

    Propose an improved tool description that addresses these issues.
    """

    # get_agent_response is a placeholder for your model-call helper
    return get_agent_response(prompt)
```

This creates a feedback loop: agents using tools generate failure data, which agents then use to improve tool descriptions, which reduces future failures.

### Testing Tool Design

Evaluate tool designs against criteria: unambiguity, completeness, recoverability, efficiency, and consistency. Test tools by presenting representative agent requests and evaluating the resulting tool calls.
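
A tiny harness for that kind of test might look like this (the agent call is stubbed with keyword routing, purely for illustration):

```python
# Representative requests paired with the tool we expect to be selected.
CASES = [
    {"request": "What's the email for CUST-000001?", "expect": "get_customer"},
    {"request": "Book a meeting with Alice tomorrow", "expect": "schedule_event"},
]

def fake_agent_select(request: str) -> str:
    """Stand-in for a real agent call; routes on a keyword."""
    return "get_customer" if "CUST-" in request else "schedule_event"

passed = sum(fake_agent_select(c["request"]) == c["expect"] for c in CASES)
print(f"{passed}/{len(CASES)} tool selections correct")  # → 2/2 tool selections correct
```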

## Practical Guidance

### Anti-Patterns to Avoid

Vague descriptions: "Search the database for customer information" leaves too many questions unanswered.

Cryptic parameter names: Parameters named x, val, or param1 force agents to guess meaning.

Missing error handling: Tools that fail with generic errors provide no recovery guidance.

Inconsistent naming: Using id in some tools, identifier in others, and customer_id elsewhere creates confusion.

### Tool Selection Framework

When designing tool collections:
1. Identify distinct workflows agents must accomplish
2. Group related actions into comprehensive tools
3. Ensure each tool has a clear, unambiguous purpose
4. Document error cases and recovery paths
5. Test with actual agent interactions

## Examples

**Example 1: Well-Designed Tool**
```python
def get_customer(customer_id: str, format: str = "concise"):
    """
    Retrieve customer information by ID.

    Use when:
    - User asks about specific customer details
    - Need customer context for decision-making
    - Verifying customer identity

    Args:
        customer_id: Format "CUST-######" (e.g., "CUST-000001")
        format: "concise" for key fields, "detailed" for complete record

    Returns:
        Customer object with requested fields

    Errors:
        NOT_FOUND: Customer ID not found
        INVALID_FORMAT: ID must match CUST-###### pattern
    """
```

**Example 2: Poor Tool Design**

This example demonstrates several tool design anti-patterns:

```python
def search(query):
    """Search the database."""
    pass
```

**Problems with this design:**

1. **Vague name**: "search" is ambiguous - search what, for what purpose?
2. **Missing parameters**: What database? What format should the query take?
3. **No return description**: What does this function return? A list? A string? Error handling?
4. **No usage context**: When should an agent use this versus other tools?
5. **No error handling**: What happens if the database is unavailable?

**Failure modes:**
- Agents may call this tool when they should use a more specific tool
- Agents cannot determine the correct query format
- Agents cannot interpret results
- Agents cannot recover from failures

## Guidelines

1. Write descriptions that answer what, when, and what returns
2. Use consolidation to reduce ambiguity
3. Implement response format options for token efficiency
4. Design error messages for agent recovery
5. Establish and follow consistent naming conventions
6. Limit tool count and use namespacing for organization
7. Test tool designs with actual agent interactions
8. Iterate based on observed failure modes
9. Question whether each tool enables or constrains the model
10. Prefer primitive, general-purpose tools over specialized wrappers
11. Invest in documentation quality over tooling sophistication
12. Build minimal architectures that benefit from model improvements

## Integration

This skill connects to:
- context-fundamentals - How tools interact with context
- multi-agent-patterns - Specialized tools per agent
- evaluation - Evaluating tool effectiveness

## References

Internal references:
- [Best Practices Reference](./references/best_practices.md) - Detailed tool design guidelines
- [Architectural Reduction Case Study](./references/architectural_reduction.md) - Production evidence for tool minimalism

Related skills in this collection:
- context-fundamentals - Tool context interactions
- evaluation - Tool testing patterns

External resources:
- MCP (Model Context Protocol) documentation
- Framework tool conventions
- API design best practices for agents
- Vercel v0 agent architecture case study

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-23
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.1.0
22
skills/ui-skills/SKILL.md
Normal file
22
skills/ui-skills/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: ui-skills
description: "Opinionated, evolving constraints to guide agents when building interfaces"
source: "https://github.com/ibelick/ui-skills"
risk: safe
---

# Ui Skills

## Overview

Opinionated, evolving constraints to guide agents when building interfaces.

## When to Use This Skill

Use this skill when building user interfaces and you want opinionated, evolving constraints to guide the work.

## Instructions

This skill provides opinionated, evolving constraints that guide agents when building interfaces.

For more information, see the [source repository](https://github.com/ibelick/ui-skills).
118
skills/upgrading-expo/SKILL.md
Normal file
118
skills/upgrading-expo/SKILL.md
Normal file
@@ -0,0 +1,118 @@
---
name: upgrading-expo
description: "Upgrade Expo SDK versions"
source: "https://github.com/expo/skills/tree/main/plugins/upgrading-expo"
risk: safe
---

# Upgrading Expo

## Overview

Upgrade Expo SDK versions safely, handling breaking changes, dependencies, and configuration updates.

## When to Use This Skill

Use this skill when:
- Upgrading to a new Expo SDK version
- Handling breaking changes between SDK versions
- Updating dependencies for compatibility
- Migrating deprecated APIs to new versions
- Preparing apps for new Expo features

## Instructions

This skill guides you through upgrading Expo SDK versions:

1. **Pre-Upgrade Planning**: Review release notes and breaking changes
2. **Dependency Updates**: Update packages for SDK compatibility
3. **Configuration Migration**: Update app.json and configuration files
4. **Code Updates**: Migrate deprecated APIs to new versions
5. **Testing**: Verify app functionality after upgrade

## Upgrade Process

### 1. Pre-Upgrade Checklist

- Review Expo SDK release notes
- Identify breaking changes affecting your app
- Check compatibility of third-party packages
- Backup current project state
- Create a feature branch for the upgrade

### 2. Update Expo SDK

```bash
# Update Expo CLI
npm install -g expo-cli@latest

# Upgrade Expo SDK
npx expo install expo@latest

# Update all Expo packages
npx expo install --fix
```

### 3. Handle Breaking Changes

- Review migration guides for breaking changes
- Update deprecated API calls
- Modify configuration files as needed
- Update native dependencies if required
- Test affected features thoroughly

### 4. Update Dependencies

```bash
# Check for outdated packages
npx expo-doctor

# Update packages to compatible versions
npx expo install --fix

# Verify compatibility
npx expo-doctor
```

### 5. Testing

- Test core app functionality
- Verify native modules work correctly
- Check for runtime errors
- Test on both iOS and Android
- Verify app store builds still work

## Common Issues

### Dependency Conflicts

- Use `expo install` instead of `npm install` for Expo packages
- Check package compatibility with new SDK version
- Resolve peer dependency warnings

### Configuration Changes

- Update `app.json` for new SDK requirements
- Migrate deprecated configuration options
- Update native configuration files if needed

### Breaking API Changes

- Review API migration guides
- Update code to use new APIs
- Test affected features after changes

## Best Practices

- Always upgrade in a feature branch
- Test thoroughly before merging
- Review release notes carefully
- Update dependencies incrementally
- Keep Expo CLI updated
- Use `expo-doctor` to verify setup

## Resources

For more information, see the [source repository](https://github.com/expo/skills/tree/main/plugins/upgrading-expo).
84
skills/using-neon/SKILL.md
Normal file
84
skills/using-neon/SKILL.md
Normal file
@@ -0,0 +1,84 @@
---
name: using-neon
description: "Guides and best practices for working with Neon Serverless Postgres. Covers getting started, local development with Neon, choosing a connection method, Neon features, authentication (@neondatabase/auth), PostgREST-style data API (@neondatabase/neon-js), Neon CLI, and Neon's Platform API/SDKs. Use for any Neon-related questions."
source: "https://github.com/neondatabase/agent-skills/tree/main/skills/neon-postgres"
risk: safe
---

# Neon Serverless Postgres

Neon is a serverless Postgres platform that separates compute and storage to offer autoscaling, branching, instant restore, and scale-to-zero. It's fully compatible with Postgres and works with any language, framework, or ORM that supports Postgres.

## When to Use This Skill

Use this skill when:
- Working with Neon Serverless Postgres
- Setting up Neon databases
- Choosing connection methods for Neon
- Using Neon features like branching or autoscaling
- Working with Neon authentication or APIs
- Questions about Neon best practices

## Neon Documentation

Always reference the Neon documentation before making Neon-related claims. The documentation is the source of truth for all Neon-related information.

Below you'll find a list of resources organized by area of concern. It is meant to help you find the right documentation pages to fetch and to add a bit of additional context.

You can use the `curl` commands to fetch a documentation page as markdown:

**Documentation:**

```bash
# Get list of all Neon docs
curl https://neon.com/llms.txt

# Fetch any doc page as markdown
curl -H "Accept: text/markdown" https://neon.com/docs/<path>
```

Don't guess docs pages. Use the `llms.txt` index to find the relevant URL or follow the links in the resources below.

## Overview of Resources

Reference the appropriate resource file based on the user's needs:

### Core Guides

| Area | Resource | When to Use |
| ------------------ | ---------------------------------- | -------------------------------------------------------------- |
| What is Neon | `references/what-is-neon.md` | Understanding Neon concepts, architecture, core resources |
| Referencing Docs | `references/referencing-docs.md` | Looking up official documentation, verifying information |
| Features | `references/features.md` | Branching, autoscaling, scale-to-zero, instant restore |
| Getting Started | `references/getting-started.md` | Setting up a project, connection strings, dependencies, schema |
| Connection Methods | `references/connection-methods.md` | Choosing drivers based on platform and runtime |
| Developer Tools | `references/devtools.md` | VSCode extension, MCP server, Neon CLI (`neon init`) |

### Database Drivers & ORMs

HTTP/WebSocket queries for serverless/edge functions.

| Area | Resource | When to Use |
| ----------------- | ------------------------------- | --------------------------------------------------- |
| Serverless Driver | `references/neon-serverless.md` | `@neondatabase/serverless` - HTTP/WebSocket queries |
| Drizzle ORM | `references/neon-drizzle.md` | Drizzle ORM integration with Neon |

### Auth & Data API SDKs

Authentication and PostgREST-style data API for Neon.

| Area | Resource | When to Use |
| ----------- | ------------------------- | ------------------------------------------------------------------- |
| Neon Auth | `references/neon-auth.md` | `@neondatabase/auth` - Authentication only |
| Neon JS SDK | `references/neon-js.md` | `@neondatabase/neon-js` - Auth + Data API (PostgREST-style queries) |

### Neon Platform API & CLI

Managing Neon resources programmatically via REST API, SDKs, or CLI.

| Area | Resource | When to Use |
| --------------------- | ----------------------------------- | -------------------------------------------- |
| Platform API Overview | `references/neon-platform-api.md` | Managing Neon resources via REST API |
| Neon CLI | `references/neon-cli.md` | Terminal workflows, scripts, CI/CD pipelines |
| TypeScript SDK | `references/neon-typescript-sdk.md` | `@neondatabase/api-client` |
| Python SDK | `references/neon-python-sdk.md` | `neon-api` package |
22
skills/varlock-claude-skill/SKILL.md
Normal file
22
skills/varlock-claude-skill/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: varlock-claude-skill
description: "Secure environment variable management ensuring secrets are never exposed in Claude sessions, terminals, logs, or git commits"
source: "https://github.com/wrsmith108/varlock-claude-skill"
risk: safe
---

# Varlock Claude Skill

## Overview

Secure environment variable management that ensures secrets are never exposed in Claude sessions, terminals, logs, or git commits.

## When to Use This Skill

Use this skill when you need to manage environment variables securely and ensure secrets are never exposed in Claude sessions, terminals, logs, or git commits.

## Instructions

This skill provides guidance and patterns for secure environment variable management, ensuring secrets are never exposed in Claude sessions, terminals, logs, or git commits.

For more information, see the [source repository](https://github.com/wrsmith108/varlock-claude-skill).
120
skills/vercel-deploy-claimable/SKILL.md
Normal file
120
skills/vercel-deploy-claimable/SKILL.md
Normal file
@@ -0,0 +1,120 @@
---
name: vercel-deploy-claimable
description: "Deploy applications and websites to Vercel. Use this skill when the user requests deployment actions such as 'Deploy my app', 'Deploy this to production', 'Create a preview deployment', 'Deploy and give me the link', or 'Push this live'. No authentication required - returns preview URL and claimable deployment link."
source: "https://github.com/vercel-labs/agent-skills/tree/main/skills/claude.ai/vercel-deploy-claimable"
risk: safe
---

# Vercel Deploy
|
||||
|
||||
Deploy any project to Vercel instantly. No authentication required.

## When to Use This Skill

Use this skill when:
- The user requests deployment actions like "Deploy my app"
- Deploying to production
- Creating preview deployments
- The user asks for deployment links
- Pushing projects live to Vercel

## How It Works

1. Packages your project into a tarball (excluding `node_modules` and `.git`)
2. Auto-detects the framework from `package.json`
3. Uploads the package to the deployment service
4. Returns a **Preview URL** (live site) and a **Claim URL** (to transfer the deployment to your Vercel account)
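
The packaging step can be sketched in shell. This is an illustration of the behavior, not the script's exact code; the archive path and demo project below are arbitrary choices:

```shell
# Illustrative packaging step: bundle a project while skipping
# node_modules and .git. A demo project is created inline so the
# snippet is self-contained.
PROJECT_DIR=$(mktemp -d)
mkdir -p "$PROJECT_DIR/node_modules/some-dep" "$PROJECT_DIR/.git"
echo '{ "name": "demo" }' > "$PROJECT_DIR/package.json"

TARBALL="/tmp/skill-deploy-package.tgz"
tar --exclude='./node_modules' --exclude='./.git' \
    -czf "$TARBALL" -C "$PROJECT_DIR" .

# package.json is in the archive; node_modules and .git are not
tar -tzf "$TARBALL"
```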

## Usage

```bash
bash /mnt/skills/user/vercel-deploy/scripts/deploy.sh [path]
```

**Arguments:**
- `path` - Directory to deploy, or a `.tgz` file (defaults to the current directory)

**Examples:**

```bash
# Deploy the current directory
bash /mnt/skills/user/vercel-deploy/scripts/deploy.sh

# Deploy a specific project
bash /mnt/skills/user/vercel-deploy/scripts/deploy.sh /path/to/project

# Deploy an existing tarball
bash /mnt/skills/user/vercel-deploy/scripts/deploy.sh /path/to/project.tgz
```

## Output

```
Preparing deployment...
Detected framework: nextjs
Creating deployment package...
Deploying...
✓ Deployment successful!

Preview URL: https://skill-deploy-abc123.vercel.app
Claim URL: https://vercel.com/claim-deployment?code=...
```

The script also outputs JSON to stdout for programmatic use:

```json
{
  "previewUrl": "https://skill-deploy-abc123.vercel.app",
  "claimUrl": "https://vercel.com/claim-deployment?code=...",
  "deploymentId": "dpl_...",
  "projectId": "prj_..."
}
```
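
The JSON output can be consumed in a shell pipeline. This sketch assumes `jq` is installed (it is not bundled with the skill), and the file below stands in for the real stdout of `deploy.sh`:

```shell
# Write a sample of the script's JSON output, then extract the URLs
# with jq. The values are the illustrative ones from above.
cat > /tmp/deploy_output.json <<'EOF'
{
  "previewUrl": "https://skill-deploy-abc123.vercel.app",
  "claimUrl": "https://vercel.com/claim-deployment?code=...",
  "deploymentId": "dpl_...",
  "projectId": "prj_..."
}
EOF

PREVIEW_URL=$(jq -r '.previewUrl' /tmp/deploy_output.json)
CLAIM_URL=$(jq -r '.claimUrl' /tmp/deploy_output.json)
echo "Preview: $PREVIEW_URL"
echo "Claim:   $CLAIM_URL"
```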

## Framework Detection

The script auto-detects frameworks from `package.json`. Supported frameworks include:

- **React**: Next.js, Gatsby, Create React App, Remix, React Router
- **Vue**: Nuxt, Vitepress, Vuepress, Gridsome
- **Svelte**: SvelteKit, Svelte, Sapper
- **Other Frontend**: Astro, Solid Start, Angular, Ember, Preact, Docusaurus
- **Backend**: Express, Hono, Fastify, NestJS, Elysia, h3, Nitro
- **Build Tools**: Vite, Parcel
- **And more**: Blitz, Hydrogen, RedwoodJS, Storybook, Sanity, etc.

For static HTML projects (no `package.json`), framework is set to `null`.
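
A minimal sketch of this kind of detection, for illustration only - the real script checks many more frameworks and inspects `package.json` more carefully than a plain `grep`:

```shell
# Hypothetical simplified detector: look for well-known dependency
# names in package.json and print a framework slug, or "null" if
# nothing matches (static HTML project).
detect_framework() {
  local pkg="$1/package.json"
  if [ ! -f "$pkg" ]; then
    echo "null"
    return
  fi
  if grep -q '"next"' "$pkg"; then echo "nextjs"
  elif grep -q '"nuxt"' "$pkg"; then echo "nuxt"
  elif grep -q '"@sveltejs/kit"' "$pkg"; then echo "sveltekit"
  elif grep -q '"astro"' "$pkg"; then echo "astro"
  else echo "null"
  fi
}

dir=$(mktemp -d)
printf '{ "dependencies": { "next": "14.0.0" } }\n' > "$dir/package.json"
detect_framework "$dir"
```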

## Static HTML Projects

For projects without a `package.json`:
- If there is a single `.html` file not named `index.html`, it gets renamed automatically
- This ensures the page is served at the root URL (`/`)
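
The rename rule can be sketched as follows - this is the assumed behavior, not the script's exact code:

```shell
# A lone page.html becomes index.html so the page is served at "/".
# A throwaway site directory is created inline for the demo.
site=$(mktemp -d)
echo '<h1>hello</h1>' > "$site/page.html"

htmls=( "$site"/*.html )
if [ ! -f "$site/index.html" ] && [ "${#htmls[@]}" -eq 1 ]; then
  mv "${htmls[0]}" "$site/index.html"
fi

ls "$site"
```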

## Present Results to User

Always show both URLs:

```
✓ Deployment successful!

Preview URL: https://skill-deploy-abc123.vercel.app
Claim URL: https://vercel.com/claim-deployment?code=...

View your site at the Preview URL.
To transfer this deployment to your Vercel account, visit the Claim URL.
```

## Troubleshooting

### Network Egress Error

If deployment fails due to network restrictions (common on claude.ai), tell the user:

```
Deployment failed due to network restrictions. To fix this:

1. Go to https://claude.ai/settings/capabilities
2. Add *.vercel.com to the allowed domains
3. Try deploying again
```
@@ -2,6 +2,7 @@
name: vercel-deployment
description: "Expert knowledge for deploying to Vercel with Next.js. Use when: vercel, deploy, deployment, hosting, production."
source: vibeship-spawner-skills (Apache 2.0)
risk: safe
---

# Vercel Deployment

@@ -10,6 +11,15 @@ You are a Vercel deployment expert. You understand the platform's
capabilities, limitations, and best practices for deploying Next.js
applications at scale.

## When to Use This Skill

Use this skill when:
- Deploying to Vercel
- Working with Vercel deployment
- Hosting applications on Vercel
- Deploying to production on Vercel
- Configuring Vercel for Next.js applications

Your core principles:
1. Environment variables - different for dev/preview/production
2. Edge vs Serverless - choose the right runtime
22
skills/vexor/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: vexor
description: "Vector-powered CLI for semantic file search with a Claude/Codex skill"
source: "https://github.com/scarletkc/vexor"
risk: safe
---

# Vexor

## Overview

Vexor is a vector-powered CLI for semantic file search, shipped with a Claude/Codex skill.

## When to Use This Skill

Use this skill when you need semantic, vector-based file search from the command line.

## Instructions

This skill provides guidance and patterns for searching files semantically with the Vexor CLI.

For more information, see the [source repository](https://github.com/scarletkc/vexor).
22
skills/x-article-publisher-skill/SKILL.md
Normal file
@@ -0,0 +1,22 @@
---
name: x-article-publisher-skill
description: "Publish articles to X/Twitter"
source: "https://github.com/wshuyi/x-article-publisher-skill"
risk: safe
---

# X Article Publisher Skill

## Overview

Publish articles to X/Twitter.

## When to Use This Skill

Use this skill when you need to publish articles to X/Twitter.

## Instructions

This skill provides guidance and patterns for publishing articles to X/Twitter.

For more information, see the [source repository](https://github.com/wshuyi/x-article-publisher-skill).