docs(skills): Add 4 production skills from antigravity reference

Skills added:
- cloudflare-workers-expert: For servers-api Worker and edge computing
- n8n-workflow-patterns: For self-hosted n8n on Command Center
- nodejs-backend-patterns: For Arbiter Express backend work
- git-advanced-workflows: Git learning (rebase, cherry-pick, recovery)

All sourced from antigravity-skills-reference (MIT license)

Chronicler #73
Claude
2026-04-09 13:49:53 +00:00
parent e9744a5b16
commit 086c70fbe1
6 changed files with 2069 additions and 0 deletions


@@ -72,6 +72,95 @@
---
### cloudflare-workers-expert
**Location:** `docs/skills/cloudflare-workers-expert/SKILL.md`
**Source:** antigravity-skills-reference (MIT)
**Triggers:** Cloudflare Workers, Wrangler, KV, D1, Durable Objects, R2, edge computing
**Purpose:** Expert guidance for Cloudflare Workers and edge computing ecosystem
**What It Covers:**
- Wrangler CLI and `wrangler.toml` configuration
- KV, D1, Queues, Durable Objects bindings
- Edge-side caching and response modification
- Web Fetch API (not Node.js globals)
- `waitUntil()` for async tasks
**Read This When:**
- Working on `servers-api` Cloudflare Worker
- Building new Workers for Firefrost
- Moving logic to the edge for performance
---
### n8n-workflow-patterns
**Location:** `docs/skills/n8n-workflow-patterns/SKILL.md`
**Source:** antigravity-skills-reference (MIT)
**Triggers:** n8n, workflow automation, self-hosted automation
**Purpose:** Patterns for building robust n8n workflows
**What It Covers:**
- Workflow architecture patterns
- Error handling and retry logic
- Webhook integrations
- Data transformation between nodes
**Read This When:**
- Building automations on Command Center's n8n instance
- Connecting services via webhooks
- Troubleshooting workflow failures
---
### nodejs-backend-patterns
**Location:** `docs/skills/nodejs-backend-patterns/SKILL.md`
**Source:** antigravity-skills-reference (MIT)
**Triggers:** Node.js, Express, REST API, backend architecture, Arbiter
**Purpose:** Production patterns for Node.js backend applications
**What It Covers:**
- REST API and GraphQL server patterns
- Authentication and authorization
- Middleware and error handling
- Database integration (SQL/NoSQL)
- WebSockets and real-time apps
- Background job processing
**Read This When:**
- Working on Arbiter's Express backend
- Adding new API endpoints
- Implementing Task #87 lifecycle handlers
- Refactoring backend architecture
**Resources:** `resources/implementation-playbook.md` for detailed examples
---
### git-advanced-workflows
**Location:** `docs/skills/git-advanced-workflows/SKILL.md`
**Source:** antigravity-skills-reference (MIT)
**Triggers:** git, rebase, cherry-pick, bisect, worktrees, branch management
**Purpose:** Advanced Git techniques for clean history and recovery
**What It Covers:**
- Interactive rebase (squash, fixup, reword)
- Cherry-picking commits across branches
- Git bisect for finding bugs
- Working with multiple worktrees
- Recovering from mistakes (reflog)
- Complex branch workflows
**Read This When:**
- Learning Git beyond basics
- Cleaning up commit history before merging
- Recovering from Git mistakes
- Managing feature branches
---
### tea-cli
**Location:** `docs/skills/tea-cli/`
**Source:** skill.fish


@@ -0,0 +1,89 @@
---
name: cloudflare-workers-expert
description: "Expert in Cloudflare Workers and the Edge Computing ecosystem. Covers Wrangler, KV, D1, Durable Objects, and R2 storage."
risk: safe
source: community
date_added: "2026-02-27"
---
You are a senior Cloudflare Workers Engineer specializing in edge computing architectures, performance optimization at the edge, and the full Cloudflare developer ecosystem (Wrangler, KV, D1, Queues, etc.).
## Use this skill when
- Designing and deploying serverless functions to Cloudflare's Edge
- Implementing edge-side data storage using KV, D1, or Durable Objects
- Optimizing application latency by moving logic to the edge
- Building full-stack apps with Cloudflare Pages and Workers
- Handling request/response modification, security headers, and edge-side caching
## Do not use this skill when
- The task is for traditional Node.js/Express apps run on servers
- Targeting AWS Lambda or Google Cloud Functions (use their respective skills)
- General frontend development that doesn't utilize edge features
## Instructions
1. **Wrangler Ecosystem**: Use `wrangler.toml` for configuration and `npx wrangler dev` for local testing.
2. **Fetch API**: Remember that Workers use the Web standard Fetch API, not Node.js globals.
3. **Bindings**: Define all bindings (KV, D1, secrets) in `wrangler.toml` and access them through the `env` parameter in the `fetch` handler.
4. **Cold Starts**: Workers have 0ms cold starts, but keep the bundle size small to stay within the 1MB limit for the free tier.
5. **Durable Objects**: Use Durable Objects for stateful coordination and high-concurrency needs.
6. **Error Handling**: Use `waitUntil()` for non-blocking asynchronous tasks (logging, analytics) that should run after the response is sent.
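As a sketch of point 3, a `wrangler.toml` declaring KV and D1 bindings might look like this (names and IDs are placeholders; the KV binding name matches Example 1 below):

```toml
# Hypothetical wrangler.toml — all names and IDs are placeholders.
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2026-02-27"

[[kv_namespaces]]
binding = "MY_KV_NAMESPACE"   # exposed as env.MY_KV_NAMESPACE in the fetch handler
id = "<kv-namespace-id>"

[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "<d1-database-id>"
```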
## Examples
### Example 1: Basic Worker with KV Binding
```typescript
export interface Env {
MY_KV_NAMESPACE: KVNamespace;
}
export default {
async fetch(
request: Request,
env: Env,
ctx: ExecutionContext,
): Promise<Response> {
const value = await env.MY_KV_NAMESPACE.get("my-key");
if (!value) {
return new Response("Not Found", { status: 404 });
}
return new Response(`Stored Value: ${value}`);
},
};
```
### Example 2: Edge Response Modification
```javascript
export default {
async fetch(request, env, ctx) {
const response = await fetch(request);
const newResponse = new Response(response.body, response);
// Add security headers at the edge
newResponse.headers.set("X-Content-Type-Options", "nosniff");
newResponse.headers.set(
"Content-Security-Policy",
"upgrade-insecure-requests",
);
return newResponse;
},
};
```
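### Example 3: Deferred Work with `waitUntil()`
A minimal sketch of instruction 6 above. The logging function is illustrative (a real Worker would POST to an analytics service), and the `export default` line is left as a comment so the sketch stays self-contained:

```javascript
// Sketch: defer non-blocking work with ctx.waitUntil() so it runs
// after the response is sent, without delaying the client.
const worker = {
  async fetch(request, env, ctx) {
    ctx.waitUntil(
      logRequest(request).catch(() => {}), // never let logging break the response
    );
    return new Response("OK");
  },
};

async function logRequest(request) {
  // Placeholder: a real Worker would call an analytics endpoint here.
  console.log("logged:", new URL(request.url).pathname);
}

// export default worker;  // uncomment in an actual Worker module
```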
## Best Practices
- **Do:** Use `env.VAR_NAME` for secrets and environment variables.
- **Do:** Use `Response.redirect()` for clean edge-side redirects.
- **Do:** Use `wrangler tail` for live production debugging.
- **Don't:** Import large libraries; Workers have limited memory and CPU time.
- **Don't:** Use Node.js-specific libraries (like `fs`, `path`) unless using Node.js compatibility mode.
## Troubleshooting
**Problem:** Request exceeded CPU time limit.
**Solution:** Optimize loops, reduce the number of await calls, and move synchronous heavy lifting out of the request/response path. Use `ctx.waitUntil()` for tasks that don't block the response.


@@ -0,0 +1,415 @@
---
name: git-advanced-workflows
description: "Master advanced Git techniques to maintain clean history, collaborate effectively, and recover from any situation with confidence."
risk: critical
source: community
date_added: "2026-02-27"
---
# Git Advanced Workflows
Master advanced Git techniques to maintain clean history, collaborate effectively, and recover from any situation with confidence.
## Do not use this skill when
- The task is unrelated to git advanced workflows
- You need a different domain or tool outside this scope
## Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.
## Use this skill when
- Cleaning up commit history before merging
- Applying specific commits across branches
- Finding commits that introduced bugs
- Working on multiple features simultaneously
- Recovering from Git mistakes or lost commits
- Managing complex branch workflows
- Preparing clean PRs for review
- Synchronizing diverged branches
## Core Concepts
### 1. Interactive Rebase
Interactive rebase is the Swiss Army knife of Git history editing.
**Common Operations:**
- `pick`: Keep commit as-is
- `reword`: Change commit message
- `edit`: Amend commit content
- `squash`: Combine with previous commit
- `fixup`: Like squash but discard message
- `drop`: Remove commit entirely
**Basic Usage:**
```bash
# Rebase last 5 commits
git rebase -i HEAD~5
# Rebase all commits on current branch
git rebase -i $(git merge-base HEAD main)
# Rebase onto specific commit
git rebase -i abc123
```
### 2. Cherry-Picking
Apply specific commits from one branch to another without merging entire branches.
```bash
# Cherry-pick single commit
git cherry-pick abc123
# Cherry-pick range of commits (exclusive start)
git cherry-pick abc123..def456
# Cherry-pick without committing (stage changes only)
git cherry-pick -n abc123
# Cherry-pick and edit commit message
git cherry-pick -e abc123
```
### 3. Git Bisect
Binary search through commit history to find the commit that introduced a bug.
```bash
# Start bisect
git bisect start
# Mark current commit as bad
git bisect bad
# Mark known good commit
git bisect good v1.0.0
# Git will checkout middle commit - test it
# Then mark as good or bad
git bisect good # or: git bisect bad
# Continue until bug found
# When done
git bisect reset
```
**Automated Bisect:**
```bash
# Use script to test automatically
git bisect start HEAD v1.0.0
git bisect run ./test.sh
# test.sh should exit 0 for good, 1-127 (except 125) for bad
```
### 4. Worktrees
Work on multiple branches simultaneously without stashing or switching.
```bash
# List existing worktrees
git worktree list
# Add new worktree for feature branch
git worktree add ../project-feature feature/new-feature
# Add worktree and create new branch
git worktree add -b bugfix/urgent ../project-hotfix main
# Remove worktree
git worktree remove ../project-feature
# Prune stale worktrees
git worktree prune
```
### 5. Reflog
Your safety net - tracks all ref movements, even deleted commits.
```bash
# View reflog
git reflog
# View reflog for specific branch
git reflog show feature/branch
# Restore deleted commit
git reflog
# Find commit hash
git checkout abc123
git branch recovered-branch
# Restore deleted branch
git reflog
git branch deleted-branch abc123
```
## Practical Workflows
### Workflow 1: Clean Up Feature Branch Before PR
```bash
# Start with feature branch
git checkout feature/user-auth
# Interactive rebase to clean history
git rebase -i main
# Example rebase operations:
# - Squash "fix typo" commits
# - Reword commit messages for clarity
# - Reorder commits logically
# - Drop unnecessary commits
# Force push cleaned branch (safe if no one else is using it)
git push --force-with-lease origin feature/user-auth
```
### Workflow 2: Apply Hotfix to Multiple Releases
```bash
# Create fix on main
git checkout main
git commit -m "fix: critical security patch"
# Apply to release branches
git checkout release/2.0
git cherry-pick abc123
git checkout release/1.9
git cherry-pick abc123
# Handle conflicts if they arise
git cherry-pick --continue
# or
git cherry-pick --abort
```
### Workflow 3: Find Bug Introduction
```bash
# Start bisect
git bisect start
git bisect bad HEAD
git bisect good v2.1.0
# Git checks out middle commit - run tests
npm test
# If tests fail
git bisect bad
# If tests pass
git bisect good
# Git will automatically checkout next commit to test
# Repeat until bug found
# Automated version
git bisect start HEAD v2.1.0
git bisect run npm test
```
### Workflow 4: Multi-Branch Development
```bash
# Main project directory
cd ~/projects/myapp
# Create worktree for urgent bugfix
git worktree add ../myapp-hotfix hotfix/critical-bug
# Work on hotfix in separate directory
cd ../myapp-hotfix
# Make changes, commit
git commit -m "fix: resolve critical bug"
git push origin hotfix/critical-bug
# Return to main work without interruption
cd ~/projects/myapp
git fetch origin
git cherry-pick hotfix/critical-bug
# Clean up when done
git worktree remove ../myapp-hotfix
```
### Workflow 5: Recover from Mistakes
```bash
# Accidentally reset to wrong commit
git reset --hard HEAD~5 # Oh no!
# Use reflog to find lost commits
git reflog
# Output shows:
# abc123 HEAD@{0}: reset: moving to HEAD~5
# def456 HEAD@{1}: commit: my important changes
# Recover lost commits
git reset --hard def456
# Or create branch from lost commit
git branch recovery def456
```
## Advanced Techniques
### Rebase vs Merge Strategy
**When to Rebase:**
- Cleaning up local commits before pushing
- Keeping feature branch up-to-date with main
- Creating linear history for easier review
**When to Merge:**
- Integrating completed features into main
- Preserving exact history of collaboration
- Public branches used by others
```bash
# Update feature branch with main changes (rebase)
git checkout feature/my-feature
git fetch origin
git rebase origin/main
# Handle conflicts
git status
# Fix conflicts in files
git add .
git rebase --continue
# Or merge instead
git merge origin/main
```
### Autosquash Workflow
Automatically squash fixup commits during rebase.
```bash
# Make initial commit
git commit -m "feat: add user authentication"
# Later, fix something in that commit
# Stage changes
git commit --fixup HEAD # or specify commit hash
# Make more changes
git commit --fixup abc123
# Rebase with autosquash
git rebase -i --autosquash main
# Git automatically marks fixup commits
```
### Split Commit
Break one commit into multiple logical commits.
```bash
# Start interactive rebase
git rebase -i HEAD~3
# Mark commit to split with 'edit'
# Git will stop at that commit
# Reset commit but keep changes
git reset HEAD^
# Stage and commit in logical chunks
git add file1.py
git commit -m "feat: add validation"
git add file2.py
git commit -m "feat: add error handling"
# Continue rebase
git rebase --continue
```
### Partial Cherry-Pick
Cherry-pick only specific files from a commit.
```bash
# Show files in commit
git show --name-only abc123
# Checkout specific files from commit
git checkout abc123 -- path/to/file1.py path/to/file2.py
# Stage and commit
git commit -m "cherry-pick: apply specific changes from abc123"
```
## Best Practices
1. **Always Use --force-with-lease**: Safer than --force, prevents overwriting others' work
2. **Rebase Only Local Commits**: Don't rebase commits that have been pushed and shared
3. **Descriptive Commit Messages**: Future you will thank present you
4. **Atomic Commits**: Each commit should be a single logical change
5. **Test Before Force Push**: Ensure history rewrite didn't break anything
6. **Keep Reflog Aware**: Remember reflog is your safety net for 90 days
7. **Branch Before Risky Operations**: Create backup branch before complex rebases
```bash
# Safe force push
git push --force-with-lease origin feature/branch
# Create backup before risky operation
git branch backup-branch
git rebase -i main
# If something goes wrong
git reset --hard backup-branch
```
## Common Pitfalls
- **Rebasing Public Branches**: Causes history conflicts for collaborators
- **Force Pushing Without Lease**: Can overwrite teammate's work
- **Losing Work in Rebase**: Resolve conflicts carefully, test after rebase
- **Forgetting Worktree Cleanup**: Orphaned worktrees consume disk space
- **Not Backing Up Before Experiment**: Always create safety branch
- **Bisect on Dirty Working Directory**: Commit or stash before bisecting
## Recovery Commands
```bash
# Abort operations in progress
git rebase --abort
git merge --abort
git cherry-pick --abort
git bisect reset
# Restore file to version from specific commit
git restore --source=abc123 path/to/file
# Undo last commit but keep changes
git reset --soft HEAD^
# Undo last commit and discard changes
git reset --hard HEAD^
# Recover deleted branch (within 90 days)
git reflog
git branch recovered-branch abc123
```
## Resources
- **references/git-rebase-guide.md**: Deep dive into interactive rebase
- **references/git-conflict-resolution.md**: Advanced conflict resolution strategies
- **references/git-history-rewriting.md**: Safely rewriting Git history
- **assets/git-workflow-checklist.md**: Pre-PR cleanup checklist
- **assets/git-aliases.md**: Useful Git aliases for advanced workflows
- **scripts/git-clean-branches.sh**: Clean up merged and stale branches


@@ -0,0 +1,419 @@
---
name: n8n-workflow-patterns
description: "Proven architectural patterns for building n8n workflows."
risk: unknown
source: community
---
# n8n Workflow Patterns
Proven architectural patterns for building n8n workflows.
## When to Use
- You need to choose an architectural pattern for an n8n workflow before building it.
- The task involves webhook processing, API integration, scheduled jobs, database sync, or AI-agent workflow design.
- You want a high-level workflow structure rather than node-by-node troubleshooting.
---
## The 5 Core Patterns
Based on analysis of real workflow usage:
1. **Webhook Processing** (Most Common)
- Receive HTTP requests → Process → Output
- Pattern: Webhook → Validate → Transform → Respond/Notify
2. **[HTTP API Integration]**
- Fetch from REST APIs → Transform → Store/Use
- Pattern: Trigger → HTTP Request → Transform → Action → Error Handler
3. **Database Operations**
- Read/Write/Sync database data
- Pattern: Schedule → Query → Transform → Write → Verify
4. **AI Agent Workflow**
- AI agents with tools and memory
- Pattern: Trigger → AI Agent (Model + Tools + Memory) → Output
5. **Scheduled Tasks**
- Recurring automation workflows
- Pattern: Schedule → Fetch → Process → Deliver → Log
---
## Pattern Selection Guide
### When to use each pattern:
**Webhook Processing** - Use when:
- Receiving data from external systems
- Building integrations (Slack commands, form submissions, GitHub webhooks)
- Need instant response to events
- Example: "Receive Stripe payment webhook → Update database → Send confirmation"
**HTTP API Integration** - Use when:
- Fetching data from external APIs
- Synchronizing with third-party services
- Building data pipelines
- Example: "Fetch GitHub issues → Transform → Create Jira tickets"
**Database Operations** - Use when:
- Syncing between databases
- Running database queries on schedule
- ETL workflows
- Example: "Read Postgres records → Transform → Write to MySQL"
**AI Agent Workflow** - Use when:
- Building conversational AI
- Need AI with tool access
- Multi-step reasoning tasks
- Example: "Chat with AI that can search docs, query database, send emails"
**Scheduled Tasks** - Use when:
- Recurring reports or summaries
- Periodic data fetching
- Maintenance tasks
- Example: "Daily: Fetch analytics → Generate report → Email team"
---
## Common Workflow Components
All patterns share these building blocks:
### 1. Triggers
- **Webhook** - HTTP endpoint (instant)
- **Schedule** - Cron-based timing (periodic)
- **Manual** - Click to execute (testing)
- **Polling** - Check for changes (intervals)
### 2. Data Sources
- **HTTP Request** - REST APIs
- **Database nodes** - Postgres, MySQL, MongoDB
- **Service nodes** - Slack, Google Sheets, etc.
- **Code** - Custom JavaScript/Python
### 3. Transformation
- **Set** - Map/transform fields
- **Code** - Complex logic
- **IF/Switch** - Conditional routing
- **Merge** - Combine data streams
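The **Code** option above usually does per-item mapping. A hedged sketch of that logic (field names are invented; inside n8n the `$input` helper is provided by the runtime and the node would `return items` instead of logging):

```javascript
// Mock of n8n's $input helper so this snippet runs outside n8n.
// In a real Code node, $input is supplied by the runtime.
const $input = {
  all: () => [
    { json: { first_name: "Ada", last_name: "Lovelace" } },
    { json: { first_name: "Alan", last_name: "Turing" } },
  ],
};

// Typical Code-node body: every returned object wraps its data in `json`.
const items = $input.all().map((item) => ({
  json: {
    fullName: `${item.json.first_name} ${item.json.last_name}`,
  },
}));

console.log(items); // in n8n: return items;
```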
### 4. Outputs
- **HTTP Request** - Call APIs
- **Database** - Write data
- **Communication** - Email, Slack, Discord
- **Storage** - Files, cloud storage
### 5. Error Handling
- **Error Trigger** - Catch workflow errors
- **IF** - Check for error conditions
- **Stop and Error** - Explicit failure
- **Continue On Fail** - Per-node setting
---
## Workflow Creation Checklist
When building ANY workflow, follow this checklist:
### Planning Phase
- [ ] Identify the pattern (webhook, API, database, AI, scheduled)
- [ ] List required nodes (use search_nodes)
- [ ] Understand data flow (input → transform → output)
- [ ] Plan error handling strategy
### Implementation Phase
- [ ] Create workflow with appropriate trigger
- [ ] Add data source nodes
- [ ] Configure authentication/credentials
- [ ] Add transformation nodes (Set, Code, IF)
- [ ] Add output/action nodes
- [ ] Configure error handling
### Validation Phase
- [ ] Validate each node configuration (validate_node)
- [ ] Validate complete workflow (validate_workflow)
- [ ] Test with sample data
- [ ] Handle edge cases (empty data, errors)
### Deployment Phase
- [ ] Review workflow settings (execution order, timeout, error handling)
- [ ] Activate workflow using `activateWorkflow` operation
- [ ] Monitor first executions
- [ ] Document workflow purpose and data flow
---
## Data Flow Patterns
### Linear Flow
```
Trigger → Transform → Action → End
```
**Use when**: Simple workflows with single path
### Branching Flow
```
Trigger → IF → [True Path]
└→ [False Path]
```
**Use when**: Different actions based on conditions
### Parallel Processing
```
Trigger → [Branch 1] → Merge
└→ [Branch 2] ↗
```
**Use when**: Independent operations that can run simultaneously
### Loop Pattern
```
Trigger → Split in Batches → Process → Loop (until done)
```
**Use when**: Processing large datasets in chunks
### Error Handler Pattern
```
Main Flow → [Success Path]
└→ [Error Trigger → Error Handler]
```
**Use when**: Need separate error handling workflow
---
## Common Gotchas
### 1. Webhook Data Structure
**Problem**: Can't access webhook payload data
**Solution**: Data is nested under `$json.body`
```javascript
{{$json.email}}        // ✗ wrong — the field is not at the top level
{{$json.body.email}}   // ✓ correct — webhook payload is nested under body
```
See: n8n Expression Syntax skill
### 2. Multiple Input Items
**Problem**: Node processes all input items, but I only want one
**Solution**: Use "Execute Once" mode or process first item only
```javascript
{{$json[0].field}} // First item only
```
### 3. Authentication Issues
**Problem**: API calls failing with 401/403
**Solution**:
- Configure credentials properly
- Use the "Credentials" section, not parameters
- Test credentials before workflow activation
### 4. Node Execution Order
**Problem**: Nodes executing in unexpected order
**Solution**: Check workflow settings → Execution Order
- v0: Top-to-bottom (legacy)
- v1: Connection-based (recommended)
### 5. Expression Errors
**Problem**: Expressions showing as literal text
**Solution**: Use {{}} around expressions
- See n8n Expression Syntax skill for details
---
## Integration with Other Skills
These skills work together with Workflow Patterns:
**n8n MCP Tools Expert** - Use to:
- Find nodes for your pattern (search_nodes)
- Understand node operations (get_node)
- Create workflows (n8n_create_workflow)
- Deploy templates (n8n_deploy_template)
- Use ai_agents_guide for AI pattern guidance
**n8n Expression Syntax** - Use to:
- Write expressions in transformation nodes
- Access webhook data correctly ({{$json.body.field}})
- Reference previous nodes ({{$node["Node Name"].json.field}})
**n8n Node Configuration** - Use to:
- Configure specific operations for pattern nodes
- Understand node-specific requirements
**n8n Validation Expert** - Use to:
- Validate workflow structure
- Fix validation errors
- Ensure workflow correctness before deployment
---
## Pattern Statistics
Common workflow patterns:
**Most Common Triggers**:
1. Webhook - 35%
2. Schedule (periodic tasks) - 28%
3. Manual (testing/admin) - 22%
4. Service triggers (Slack, email, etc.) - 15%
**Most Common Transformations**:
1. Set (field mapping) - 68%
2. Code (custom logic) - 42%
3. IF (conditional routing) - 38%
4. Switch (multi-condition) - 18%
**Most Common Outputs**:
1. HTTP Request (APIs) - 45%
2. Slack - 32%
3. Database writes - 28%
4. Email - 24%
**Average Workflow Complexity**:
- Simple (3-5 nodes): 42%
- Medium (6-10 nodes): 38%
- Complex (11+ nodes): 20%
---
## Quick Start Examples
### Example 1: Simple Webhook → Slack
```
1. Webhook (path: "form-submit", POST)
2. Set (map form fields)
3. Slack (post message to #notifications)
```
### Example 2: Scheduled Report
```
1. Schedule (daily at 9 AM)
2. HTTP Request (fetch analytics)
3. Code (aggregate data)
4. Email (send formatted report)
5. Error Trigger → Slack (notify on failure)
```
### Example 3: Database Sync
```
1. Schedule (every 15 minutes)
2. Postgres (query new records)
3. IF (check if records exist)
4. MySQL (insert records)
5. Postgres (update sync timestamp)
```
### Example 4: AI Assistant
```
1. Webhook (receive chat message)
2. AI Agent
├─ OpenAI Chat Model (ai_languageModel)
├─ HTTP Request Tool (ai_tool)
├─ Database Tool (ai_tool)
└─ Window Buffer Memory (ai_memory)
3. Webhook Response (send AI reply)
```
### Example 5: API Integration
```
1. Manual Trigger (for testing)
2. HTTP Request (GET /api/users)
3. Split In Batches (process 100 at a time)
4. Set (transform user data)
5. Postgres (upsert users)
6. Loop (back to step 3 until done)
```
---
## Detailed Pattern Files
For comprehensive guidance on each pattern:
- **webhook_processing.md** - Webhook patterns, data structure, response handling
- **http_api_integration.md** - REST APIs, authentication, pagination, retries
- **database_operations.md** - Queries, sync, transactions, batch processing
- **ai_agent_workflow.md** - AI agents, tools, memory, langchain nodes
- **scheduled_tasks.md** - Cron schedules, reports, maintenance tasks
---
## Real Template Examples
From n8n template library:
**Template #2947**: Weather to Slack
- Pattern: Scheduled Task
- Nodes: Schedule → HTTP Request (weather API) → Set → Slack
- Complexity: Simple (4 nodes)
**Webhook Processing**: Most common pattern
- Most common: Form submissions, payment webhooks, chat integrations
**HTTP API**: Common pattern
- Most common: Data fetching, third-party integrations
**Database Operations**: Common pattern
- Most common: ETL, data sync, backup workflows
**AI Agents**: Growing in usage
- Most common: Chatbots, content generation, data analysis
Use `search_templates` and `get_template` from n8n-mcp tools to find examples!
---
## Best Practices
### ✅ Do
- Start with the simplest pattern that solves your problem
- Plan your workflow structure before building
- Use error handling on all workflows
- Test with sample data before activation
- Follow the workflow creation checklist
- Use descriptive node names
- Document complex workflows (notes field)
- Monitor workflow executions after deployment
### ❌ Don't
- Build workflows in one shot (iterate! avg 56s between edits)
- Skip validation before activation
- Ignore error scenarios
- Use complex patterns when simple ones suffice
- Hardcode credentials in parameters
- Forget to handle empty data cases
- Mix multiple patterns without clear boundaries
- Deploy without testing
---
## Summary
**Key Points**:
1. **5 core patterns** cover 90%+ of workflow use cases
2. **Webhook processing** is the most common pattern
3. Use the **workflow creation checklist** for every workflow
4. **Plan pattern** → **Select nodes** → **Build** → **Validate** → **Deploy**
5. Integrate with other skills for complete workflow development
**Next Steps**:
1. Identify your use case pattern
2. Read the detailed pattern file
3. Use n8n MCP Tools Expert to find nodes
4. Follow the workflow creation checklist
5. Use n8n Validation Expert to validate
**Related Skills**:
- n8n MCP Tools Expert - Find and configure nodes
- n8n Expression Syntax - Write expressions correctly
- n8n Validation Expert - Validate and fix errors
- n8n Node Configuration - Configure specific operations


@@ -0,0 +1,38 @@
---
name: nodejs-backend-patterns
description: "Comprehensive guidance for building scalable, maintainable, and production-ready Node.js backend applications with modern frameworks, architectural patterns, and best practices."
risk: safe
source: community
date_added: "2026-02-27"
---
# Node.js Backend Patterns
Comprehensive guidance for building scalable, maintainable, and production-ready Node.js backend applications with modern frameworks, architectural patterns, and best practices.
## Use this skill when
- Building REST APIs or GraphQL servers
- Creating microservices with Node.js
- Implementing authentication and authorization
- Designing scalable backend architectures
- Setting up middleware and error handling
- Integrating databases (SQL and NoSQL)
- Building real-time applications with WebSockets
- Implementing background job processing
## Do not use this skill when
- The task is unrelated to node.js backend patterns
- You need a different domain or tool outside this scope
## Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.
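As an illustration of the middleware and error-handling bullet above, here is a framework-free sketch of Express-style middleware chaining with a centralized error handler (all names are invented; Express itself supplies `app.use` and the `(err, req, res, next)` signature):

```javascript
// Minimal sketch of middleware chaining with centralized error handling.
// Not a real framework — shapes are illustrative only.
function runMiddleware(middlewares, errorHandler, req, res) {
  let i = 0;
  function next(err) {
    if (err) return errorHandler(err, req, res); // short-circuit to error handler
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

const authenticate = (req, res, next) => {
  if (!req.headers.authorization) return next(new Error("unauthorized"));
  next();
};
const handler = (req, res, next) => {
  res.body = { ok: true };
  next();
};
const errorHandler = (err, req, res) => {
  res.status = 401;
  res.body = { error: err.message };
};

// Simulated request with no auth header — flows into the error handler.
const res = {};
runMiddleware([authenticate, handler], errorHandler, { headers: {} }, res);
console.log(res.status, res.body);
```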
## Resources
- `resources/implementation-playbook.md` for detailed patterns and examples.

File diff suppressed because it is too large