From 086c70fbe1b6b231659397a80f84b82bb519376c Mon Sep 17 00:00:00 2001 From: Claude Date: Thu, 9 Apr 2026 13:49:53 +0000 Subject: [PATCH] docs(skills): Add 4 production skills from antigravity reference Skills added: - cloudflare-workers-expert: For servers-api Worker and edge computing - n8n-workflow-patterns: For self-hosted n8n on Command Center - nodejs-backend-patterns: For Arbiter Express backend work - git-advanced-workflows: Git learning (rebase, cherry-pick, recovery) All sourced from antigravity-skills-reference (MIT license) Chronicler #73 --- docs/skills/SKILLS-INDEX.md | 89 ++ .../skills/cloudflare-workers-expert/SKILL.md | 89 ++ docs/skills/git-advanced-workflows/SKILL.md | 415 +++++++ docs/skills/n8n-workflow-patterns/SKILL.md | 419 +++++++ docs/skills/nodejs-backend-patterns/SKILL.md | 38 + .../resources/implementation-playbook.md | 1019 +++++++++++++++++ 6 files changed, 2069 insertions(+) create mode 100644 docs/skills/cloudflare-workers-expert/SKILL.md create mode 100644 docs/skills/git-advanced-workflows/SKILL.md create mode 100644 docs/skills/n8n-workflow-patterns/SKILL.md create mode 100644 docs/skills/nodejs-backend-patterns/SKILL.md create mode 100644 docs/skills/nodejs-backend-patterns/resources/implementation-playbook.md diff --git a/docs/skills/SKILLS-INDEX.md b/docs/skills/SKILLS-INDEX.md index ad73911..d7a684b 100644 --- a/docs/skills/SKILLS-INDEX.md +++ b/docs/skills/SKILLS-INDEX.md @@ -72,6 +72,95 @@ --- +### cloudflare-workers-expert +**Location:** `docs/skills/cloudflare-workers-expert/SKILL.md` +**Source:** antigravity-skills-reference (MIT) +**Triggers:** Cloudflare Workers, Wrangler, KV, D1, Durable Objects, R2, edge computing + +**Purpose:** Expert guidance for Cloudflare Workers and edge computing ecosystem + +**What It Covers:** +- Wrangler CLI and `wrangler.toml` configuration +- KV, D1, Queues, Durable Objects bindings +- Edge-side caching and response modification +- Web Fetch API (not Node.js globals) +- `waitUntil()` for 
async tasks + +**Read This When:** +- Working on `servers-api` Cloudflare Worker +- Building new Workers for Firefrost +- Moving logic to the edge for performance + +--- + +### n8n-workflow-patterns +**Location:** `docs/skills/n8n-workflow-patterns/SKILL.md` +**Source:** antigravity-skills-reference (MIT) +**Triggers:** n8n, workflow automation, self-hosted automation + +**Purpose:** Patterns for building robust n8n workflows + +**What It Covers:** +- Workflow architecture patterns +- Error handling and retry logic +- Webhook integrations +- Data transformation between nodes + +**Read This When:** +- Building automations on Command Center's n8n instance +- Connecting services via webhooks +- Troubleshooting workflow failures + +--- + +### nodejs-backend-patterns +**Location:** `docs/skills/nodejs-backend-patterns/SKILL.md` +**Source:** antigravity-skills-reference (MIT) +**Triggers:** Node.js, Express, REST API, backend architecture, Arbiter + +**Purpose:** Production patterns for Node.js backend applications + +**What It Covers:** +- REST API and GraphQL server patterns +- Authentication and authorization +- Middleware and error handling +- Database integration (SQL/NoSQL) +- WebSockets and real-time apps +- Background job processing + +**Read This When:** +- Working on Arbiter's Express backend +- Adding new API endpoints +- Implementing Task #87 lifecycle handlers +- Refactoring backend architecture + +**Resources:** `resources/implementation-playbook.md` for detailed examples + +--- + +### git-advanced-workflows +**Location:** `docs/skills/git-advanced-workflows/SKILL.md` +**Source:** antigravity-skills-reference (MIT) +**Triggers:** git, rebase, cherry-pick, bisect, worktrees, branch management + +**Purpose:** Advanced Git techniques for clean history and recovery + +**What It Covers:** +- Interactive rebase (squash, fixup, reword) +- Cherry-picking commits across branches +- Git bisect for finding bugs +- Working with multiple worktrees +- Recovering from 
mistakes (reflog) +- Complex branch workflows + +**Read This When:** +- Learning Git beyond basics +- Cleaning up commit history before merging +- Recovering from Git mistakes +- Managing feature branches + +--- + ### tea-cli **Location:** `docs/skills/tea-cli/` **Source:** skill.fish diff --git a/docs/skills/cloudflare-workers-expert/SKILL.md b/docs/skills/cloudflare-workers-expert/SKILL.md new file mode 100644 index 0000000..653428b --- /dev/null +++ b/docs/skills/cloudflare-workers-expert/SKILL.md @@ -0,0 +1,89 @@ +--- +name: cloudflare-workers-expert +description: "Expert in Cloudflare Workers and the Edge Computing ecosystem. Covers Wrangler, KV, D1, Durable Objects, and R2 storage." +risk: safe +source: community +date_added: "2026-02-27" +--- + +You are a senior Cloudflare Workers Engineer specializing in edge computing architectures, performance optimization at the edge, and the full Cloudflare developer ecosystem (Wrangler, KV, D1, Queues, etc.). + +## Use this skill when + +- Designing and deploying serverless functions to Cloudflare's Edge +- Implementing edge-side data storage using KV, D1, or Durable Objects +- Optimizing application latency by moving logic to the edge +- Building full-stack apps with Cloudflare Pages and Workers +- Handling request/response modification, security headers, and edge-side caching + +## Do not use this skill when + +- The task is for traditional Node.js/Express apps run on servers +- Targeting AWS Lambda or Google Cloud Functions (use their respective skills) +- General frontend development that doesn't utilize edge features + +## Instructions + +1. **Wrangler Ecosystem**: Use `wrangler.toml` for configuration and `npx wrangler dev` for local testing. +2. **Fetch API**: Remember that Workers use the Web standard Fetch API, not Node.js globals. +3. **Bindings**: Define all bindings (KV, D1, secrets) in `wrangler.toml` and access them through the `env` parameter in the `fetch` handler. +4. 
**Cold Starts**: Workers have 0ms cold starts, but keep the bundle size small to stay within the 1MB limit for the free tier.
+5. **Durable Objects**: Use Durable Objects for stateful coordination and high-concurrency needs.
+6. **Error Handling**: Use `waitUntil()` for non-blocking asynchronous tasks (logging, analytics) that should run after the response is sent.
+
+## Examples
+
+### Example 1: Basic Worker with KV Binding
+
+```typescript
+export interface Env {
+  MY_KV_NAMESPACE: KVNamespace;
+}
+
+export default {
+  async fetch(
+    request: Request,
+    env: Env,
+    ctx: ExecutionContext,
+  ): Promise<Response> {
+    const value = await env.MY_KV_NAMESPACE.get("my-key");
+    if (!value) {
+      return new Response("Not Found", { status: 404 });
+    }
+    return new Response(`Stored Value: ${value}`);
+  },
+};
+```
+
+### Example 2: Edge Response Modification
+
+```javascript
+export default {
+  async fetch(request, env, ctx) {
+    const response = await fetch(request);
+    const newResponse = new Response(response.body, response);
+
+    // Add security headers at the edge
+    newResponse.headers.set("X-Content-Type-Options", "nosniff");
+    newResponse.headers.set(
+      "Content-Security-Policy",
+      "upgrade-insecure-requests",
+    );
+
+    return newResponse;
+  },
+};
+```
+
+## Best Practices
+
+- ✅ **Do:** Use `env.VAR_NAME` for secrets and environment variables.
+- ✅ **Do:** Use `Response.redirect()` for clean edge-side redirects.
+- ✅ **Do:** Use `wrangler tail` for live production debugging.
+- ❌ **Don't:** Import large libraries; Workers have limited memory and CPU time.
+- ❌ **Don't:** Use Node.js specific libraries (like `fs`, `path`) unless using Node.js compatibility mode.
+
+## Troubleshooting
+
+**Problem:** Request exceeded CPU time limit.
+**Solution:** Optimize loops, reduce the number of await calls, and move synchronous heavy lifting out of the request/response path. Use `ctx.waitUntil()` for tasks that don't block the response.
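The `waitUntil()` guidance above can be made concrete with a small sketch. This is an illustration, not code from any actual Worker: `recordMetric` and its artificial delay stand in for a real logging or analytics call, and the `ExecutionContext` interface is redeclared locally so the snippet type-checks outside the Workers runtime.

```typescript
// Minimal stand-in for the Workers ExecutionContext type, so this
// sketch compiles outside the Cloudflare runtime.
interface ExecutionContext {
  waitUntil(promise: Promise<unknown>): void;
}

// Hypothetical slow side task (e.g. pushing a metric to an analytics store).
const recorded: string[] = [];
async function recordMetric(path: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10));
  recorded.push(path);
}

const worker = {
  async fetch(
    request: Request,
    _env: unknown,
    ctx: ExecutionContext,
  ): Promise<Response> {
    // The response is returned immediately; the metric write is allowed
    // to finish after the response has been sent.
    ctx.waitUntil(recordMetric(new URL(request.url).pathname));
    return new Response("OK");
  },
};

export default worker;
```

In a deployed Worker the runtime supplies `ctx`; in a test, a fake that collects the pending promises is enough to verify the side task completes.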
diff --git a/docs/skills/git-advanced-workflows/SKILL.md b/docs/skills/git-advanced-workflows/SKILL.md new file mode 100644 index 0000000..78c1687 --- /dev/null +++ b/docs/skills/git-advanced-workflows/SKILL.md @@ -0,0 +1,415 @@ +--- +name: git-advanced-workflows +description: "Master advanced Git techniques to maintain clean history, collaborate effectively, and recover from any situation with confidence." +risk: critical +source: community +date_added: "2026-02-27" +--- + +# Git Advanced Workflows + +Master advanced Git techniques to maintain clean history, collaborate effectively, and recover from any situation with confidence. + +## Do not use this skill when + +- The task is unrelated to git advanced workflows +- You need a different domain or tool outside this scope + +## Instructions + +- Clarify goals, constraints, and required inputs. +- Apply relevant best practices and validate outcomes. +- Provide actionable steps and verification. +- If detailed examples are required, open `resources/implementation-playbook.md`. + +## Use this skill when + +- Cleaning up commit history before merging +- Applying specific commits across branches +- Finding commits that introduced bugs +- Working on multiple features simultaneously +- Recovering from Git mistakes or lost commits +- Managing complex branch workflows +- Preparing clean PRs for review +- Synchronizing diverged branches + +## Core Concepts + +### 1. Interactive Rebase + +Interactive rebase is the Swiss Army knife of Git history editing. + +**Common Operations:** +- `pick`: Keep commit as-is +- `reword`: Change commit message +- `edit`: Amend commit content +- `squash`: Combine with previous commit +- `fixup`: Like squash but discard message +- `drop`: Remove commit entirely + +**Basic Usage:** +```bash +# Rebase last 5 commits +git rebase -i HEAD~5 + +# Rebase all commits on current branch +git rebase -i $(git merge-base HEAD main) + +# Rebase onto specific commit +git rebase -i abc123 +``` + +### 2. 
Cherry-Picking + +Apply specific commits from one branch to another without merging entire branches. + +```bash +# Cherry-pick single commit +git cherry-pick abc123 + +# Cherry-pick range of commits (exclusive start) +git cherry-pick abc123..def456 + +# Cherry-pick without committing (stage changes only) +git cherry-pick -n abc123 + +# Cherry-pick and edit commit message +git cherry-pick -e abc123 +``` + +### 3. Git Bisect + +Binary search through commit history to find the commit that introduced a bug. + +```bash +# Start bisect +git bisect start + +# Mark current commit as bad +git bisect bad + +# Mark known good commit +git bisect good v1.0.0 + +# Git will checkout middle commit - test it +# Then mark as good or bad +git bisect good # or: git bisect bad + +# Continue until bug found +# When done +git bisect reset +``` + +**Automated Bisect:** +```bash +# Use script to test automatically +git bisect start HEAD v1.0.0 +git bisect run ./test.sh + +# test.sh should exit 0 for good, 1-127 (except 125) for bad +``` + +### 4. Worktrees + +Work on multiple branches simultaneously without stashing or switching. + +```bash +# List existing worktrees +git worktree list + +# Add new worktree for feature branch +git worktree add ../project-feature feature/new-feature + +# Add worktree and create new branch +git worktree add -b bugfix/urgent ../project-hotfix main + +# Remove worktree +git worktree remove ../project-feature + +# Prune stale worktrees +git worktree prune +``` + +### 5. Reflog + +Your safety net - tracks all ref movements, even deleted commits. 
+ +```bash +# View reflog +git reflog + +# View reflog for specific branch +git reflog show feature/branch + +# Restore deleted commit +git reflog +# Find commit hash +git checkout abc123 +git branch recovered-branch + +# Restore deleted branch +git reflog +git branch deleted-branch abc123 +``` + +## Practical Workflows + +### Workflow 1: Clean Up Feature Branch Before PR + +```bash +# Start with feature branch +git checkout feature/user-auth + +# Interactive rebase to clean history +git rebase -i main + +# Example rebase operations: +# - Squash "fix typo" commits +# - Reword commit messages for clarity +# - Reorder commits logically +# - Drop unnecessary commits + +# Force push cleaned branch (safe if no one else is using it) +git push --force-with-lease origin feature/user-auth +``` + +### Workflow 2: Apply Hotfix to Multiple Releases + +```bash +# Create fix on main +git checkout main +git commit -m "fix: critical security patch" + +# Apply to release branches +git checkout release/2.0 +git cherry-pick abc123 + +git checkout release/1.9 +git cherry-pick abc123 + +# Handle conflicts if they arise +git cherry-pick --continue +# or +git cherry-pick --abort +``` + +### Workflow 3: Find Bug Introduction + +```bash +# Start bisect +git bisect start +git bisect bad HEAD +git bisect good v2.1.0 + +# Git checks out middle commit - run tests +npm test + +# If tests fail +git bisect bad + +# If tests pass +git bisect good + +# Git will automatically checkout next commit to test +# Repeat until bug found + +# Automated version +git bisect start HEAD v2.1.0 +git bisect run npm test +``` + +### Workflow 4: Multi-Branch Development + +```bash +# Main project directory +cd ~/projects/myapp + +# Create worktree for urgent bugfix +git worktree add ../myapp-hotfix hotfix/critical-bug + +# Work on hotfix in separate directory +cd ../myapp-hotfix +# Make changes, commit +git commit -m "fix: resolve critical bug" +git push origin hotfix/critical-bug + +# Return to main work without 
interruption +cd ~/projects/myapp +git fetch origin +git cherry-pick hotfix/critical-bug + +# Clean up when done +git worktree remove ../myapp-hotfix +``` + +### Workflow 5: Recover from Mistakes + +```bash +# Accidentally reset to wrong commit +git reset --hard HEAD~5 # Oh no! + +# Use reflog to find lost commits +git reflog +# Output shows: +# abc123 HEAD@{0}: reset: moving to HEAD~5 +# def456 HEAD@{1}: commit: my important changes + +# Recover lost commits +git reset --hard def456 + +# Or create branch from lost commit +git branch recovery def456 +``` + +## Advanced Techniques + +### Rebase vs Merge Strategy + +**When to Rebase:** +- Cleaning up local commits before pushing +- Keeping feature branch up-to-date with main +- Creating linear history for easier review + +**When to Merge:** +- Integrating completed features into main +- Preserving exact history of collaboration +- Public branches used by others + +```bash +# Update feature branch with main changes (rebase) +git checkout feature/my-feature +git fetch origin +git rebase origin/main + +# Handle conflicts +git status +# Fix conflicts in files +git add . +git rebase --continue + +# Or merge instead +git merge origin/main +``` + +### Autosquash Workflow + +Automatically squash fixup commits during rebase. + +```bash +# Make initial commit +git commit -m "feat: add user authentication" + +# Later, fix something in that commit +# Stage changes +git commit --fixup HEAD # or specify commit hash + +# Make more changes +git commit --fixup abc123 + +# Rebase with autosquash +git rebase -i --autosquash main + +# Git automatically marks fixup commits +``` + +### Split Commit + +Break one commit into multiple logical commits. 
+ +```bash +# Start interactive rebase +git rebase -i HEAD~3 + +# Mark commit to split with 'edit' +# Git will stop at that commit + +# Reset commit but keep changes +git reset HEAD^ + +# Stage and commit in logical chunks +git add file1.py +git commit -m "feat: add validation" + +git add file2.py +git commit -m "feat: add error handling" + +# Continue rebase +git rebase --continue +``` + +### Partial Cherry-Pick + +Cherry-pick only specific files from a commit. + +```bash +# Show files in commit +git show --name-only abc123 + +# Checkout specific files from commit +git checkout abc123 -- path/to/file1.py path/to/file2.py + +# Stage and commit +git commit -m "cherry-pick: apply specific changes from abc123" +``` + +## Best Practices + +1. **Always Use --force-with-lease**: Safer than --force, prevents overwriting others' work +2. **Rebase Only Local Commits**: Don't rebase commits that have been pushed and shared +3. **Descriptive Commit Messages**: Future you will thank present you +4. **Atomic Commits**: Each commit should be a single logical change +5. **Test Before Force Push**: Ensure history rewrite didn't break anything +6. **Keep Reflog Aware**: Remember reflog is your safety net for 90 days +7. 
**Branch Before Risky Operations**: Create backup branch before complex rebases + +```bash +# Safe force push +git push --force-with-lease origin feature/branch + +# Create backup before risky operation +git branch backup-branch +git rebase -i main +# If something goes wrong +git reset --hard backup-branch +``` + +## Common Pitfalls + +- **Rebasing Public Branches**: Causes history conflicts for collaborators +- **Force Pushing Without Lease**: Can overwrite teammate's work +- **Losing Work in Rebase**: Resolve conflicts carefully, test after rebase +- **Forgetting Worktree Cleanup**: Orphaned worktrees consume disk space +- **Not Backing Up Before Experiment**: Always create safety branch +- **Bisect on Dirty Working Directory**: Commit or stash before bisecting + +## Recovery Commands + +```bash +# Abort operations in progress +git rebase --abort +git merge --abort +git cherry-pick --abort +git bisect reset + +# Restore file to version from specific commit +git restore --source=abc123 path/to/file + +# Undo last commit but keep changes +git reset --soft HEAD^ + +# Undo last commit and discard changes +git reset --hard HEAD^ + +# Recover deleted branch (within 90 days) +git reflog +git branch recovered-branch abc123 +``` + +## Resources + +- **references/git-rebase-guide.md**: Deep dive into interactive rebase +- **references/git-conflict-resolution.md**: Advanced conflict resolution strategies +- **references/git-history-rewriting.md**: Safely rewriting Git history +- **assets/git-workflow-checklist.md**: Pre-PR cleanup checklist +- **assets/git-aliases.md**: Useful Git aliases for advanced workflows +- **scripts/git-clean-branches.sh**: Clean up merged and stale branches diff --git a/docs/skills/n8n-workflow-patterns/SKILL.md b/docs/skills/n8n-workflow-patterns/SKILL.md new file mode 100644 index 0000000..dbbe3e4 --- /dev/null +++ b/docs/skills/n8n-workflow-patterns/SKILL.md @@ -0,0 +1,419 @@ +--- +name: n8n-workflow-patterns +description: "Proven architectural 
patterns for building n8n workflows."
+risk: unknown
+source: community
+---
+
+# n8n Workflow Patterns
+
+Proven architectural patterns for building n8n workflows.
+
+## When to Use
+
+- You need to choose an architectural pattern for an n8n workflow before building it.
+- The task involves webhook processing, API integration, scheduled jobs, database sync, or AI-agent workflow design.
+- You want a high-level workflow structure rather than node-by-node troubleshooting.
+
+---
+
+## The 5 Core Patterns
+
+Based on analysis of real workflow usage:
+
+1. **Webhook Processing** (Most Common)
+   - Receive HTTP requests → Process → Output
+   - Pattern: Webhook → Validate → Transform → Respond/Notify
+
+2. **HTTP API Integration**
+   - Fetch from REST APIs → Transform → Store/Use
+   - Pattern: Trigger → HTTP Request → Transform → Action → Error Handler
+
+3. **Database Operations**
+   - Read/Write/Sync database data
+   - Pattern: Schedule → Query → Transform → Write → Verify
+
+4. **AI Agent Workflow**
+   - AI agents with tools and memory
+   - Pattern: Trigger → AI Agent (Model + Tools + Memory) → Output
+
+5. 
**Scheduled Tasks** + - Recurring automation workflows + - Pattern: Schedule → Fetch → Process → Deliver → Log + +--- + +## Pattern Selection Guide + +### When to use each pattern: + +**Webhook Processing** - Use when: +- Receiving data from external systems +- Building integrations (Slack commands, form submissions, GitHub webhooks) +- Need instant response to events +- Example: "Receive Stripe payment webhook → Update database → Send confirmation" + +**HTTP API Integration** - Use when: +- Fetching data from external APIs +- Synchronizing with third-party services +- Building data pipelines +- Example: "Fetch GitHub issues → Transform → Create Jira tickets" + +**Database Operations** - Use when: +- Syncing between databases +- Running database queries on schedule +- ETL workflows +- Example: "Read Postgres records → Transform → Write to MySQL" + +**AI Agent Workflow** - Use when: +- Building conversational AI +- Need AI with tool access +- Multi-step reasoning tasks +- Example: "Chat with AI that can search docs, query database, send emails" + +**Scheduled Tasks** - Use when: +- Recurring reports or summaries +- Periodic data fetching +- Maintenance tasks +- Example: "Daily: Fetch analytics → Generate report → Email team" + +--- + +## Common Workflow Components + +All patterns share these building blocks: + +### 1. Triggers +- **Webhook** - HTTP endpoint (instant) +- **Schedule** - Cron-based timing (periodic) +- **Manual** - Click to execute (testing) +- **Polling** - Check for changes (intervals) + +### 2. Data Sources +- **HTTP Request** - REST APIs +- **Database nodes** - Postgres, MySQL, MongoDB +- **Service nodes** - Slack, Google Sheets, etc. +- **Code** - Custom JavaScript/Python + +### 3. Transformation +- **Set** - Map/transform fields +- **Code** - Complex logic +- **IF/Switch** - Conditional routing +- **Merge** - Combine data streams + +### 4. 
Outputs +- **HTTP Request** - Call APIs +- **Database** - Write data +- **Communication** - Email, Slack, Discord +- **Storage** - Files, cloud storage + +### 5. Error Handling +- **Error Trigger** - Catch workflow errors +- **IF** - Check for error conditions +- **Stop and Error** - Explicit failure +- **Continue On Fail** - Per-node setting + +--- + +## Workflow Creation Checklist + +When building ANY workflow, follow this checklist: + +### Planning Phase +- [ ] Identify the pattern (webhook, API, database, AI, scheduled) +- [ ] List required nodes (use search_nodes) +- [ ] Understand data flow (input → transform → output) +- [ ] Plan error handling strategy + +### Implementation Phase +- [ ] Create workflow with appropriate trigger +- [ ] Add data source nodes +- [ ] Configure authentication/credentials +- [ ] Add transformation nodes (Set, Code, IF) +- [ ] Add output/action nodes +- [ ] Configure error handling + +### Validation Phase +- [ ] Validate each node configuration (validate_node) +- [ ] Validate complete workflow (validate_workflow) +- [ ] Test with sample data +- [ ] Handle edge cases (empty data, errors) + +### Deployment Phase +- [ ] Review workflow settings (execution order, timeout, error handling) +- [ ] Activate workflow using `activateWorkflow` operation +- [ ] Monitor first executions +- [ ] Document workflow purpose and data flow + +--- + +## Data Flow Patterns + +### Linear Flow +``` +Trigger → Transform → Action → End +``` +**Use when**: Simple workflows with single path + +### Branching Flow +``` +Trigger → IF → [True Path] + └→ [False Path] +``` +**Use when**: Different actions based on conditions + +### Parallel Processing +``` +Trigger → [Branch 1] → Merge + └→ [Branch 2] ↗ +``` +**Use when**: Independent operations that can run simultaneously + +### Loop Pattern +``` +Trigger → Split in Batches → Process → Loop (until done) +``` +**Use when**: Processing large datasets in chunks + +### Error Handler Pattern +``` +Main Flow → [Success 
Path] + └→ [Error Trigger → Error Handler] +``` +**Use when**: Need separate error handling workflow + +--- + +## Common Gotchas + +### 1. Webhook Data Structure +**Problem**: Can't access webhook payload data + +**Solution**: Data is nested under `$json.body` +```javascript +❌ {{$json.email}} +✅ {{$json.body.email}} +``` +See: n8n Expression Syntax skill + +### 2. Multiple Input Items +**Problem**: Node processes all input items, but I only want one + +**Solution**: Use "Execute Once" mode or process first item only +```javascript +{{$json[0].field}} // First item only +``` + +### 3. Authentication Issues +**Problem**: API calls failing with 401/403 + +**Solution**: +- Configure credentials properly +- Use the "Credentials" section, not parameters +- Test credentials before workflow activation + +### 4. Node Execution Order +**Problem**: Nodes executing in unexpected order + +**Solution**: Check workflow settings → Execution Order +- v0: Top-to-bottom (legacy) +- v1: Connection-based (recommended) + +### 5. 
Expression Errors +**Problem**: Expressions showing as literal text + +**Solution**: Use {{}} around expressions +- See n8n Expression Syntax skill for details + +--- + +## Integration with Other Skills + +These skills work together with Workflow Patterns: + +**n8n MCP Tools Expert** - Use to: +- Find nodes for your pattern (search_nodes) +- Understand node operations (get_node) +- Create workflows (n8n_create_workflow) +- Deploy templates (n8n_deploy_template) +- Use ai_agents_guide for AI pattern guidance + +**n8n Expression Syntax** - Use to: +- Write expressions in transformation nodes +- Access webhook data correctly ({{$json.body.field}}) +- Reference previous nodes ({{$node["Node Name"].json.field}}) + +**n8n Node Configuration** - Use to: +- Configure specific operations for pattern nodes +- Understand node-specific requirements + +**n8n Validation Expert** - Use to: +- Validate workflow structure +- Fix validation errors +- Ensure workflow correctness before deployment + +--- + +## Pattern Statistics + +Common workflow patterns: + +**Most Common Triggers**: +1. Webhook - 35% +2. Schedule (periodic tasks) - 28% +3. Manual (testing/admin) - 22% +4. Service triggers (Slack, email, etc.) - 15% + +**Most Common Transformations**: +1. Set (field mapping) - 68% +2. Code (custom logic) - 42% +3. IF (conditional routing) - 38% +4. Switch (multi-condition) - 18% + +**Most Common Outputs**: +1. HTTP Request (APIs) - 45% +2. Slack - 32% +3. Database writes - 28% +4. Email - 24% + +**Average Workflow Complexity**: +- Simple (3-5 nodes): 42% +- Medium (6-10 nodes): 38% +- Complex (11+ nodes): 20% + +--- + +## Quick Start Examples + +### Example 1: Simple Webhook → Slack +``` +1. Webhook (path: "form-submit", POST) +2. Set (map form fields) +3. Slack (post message to #notifications) +``` + +### Example 2: Scheduled Report +``` +1. Schedule (daily at 9 AM) +2. HTTP Request (fetch analytics) +3. Code (aggregate data) +4. Email (send formatted report) +5. 
Error Trigger → Slack (notify on failure) +``` + +### Example 3: Database Sync +``` +1. Schedule (every 15 minutes) +2. Postgres (query new records) +3. IF (check if records exist) +4. MySQL (insert records) +5. Postgres (update sync timestamp) +``` + +### Example 4: AI Assistant +``` +1. Webhook (receive chat message) +2. AI Agent + ├─ OpenAI Chat Model (ai_languageModel) + ├─ HTTP Request Tool (ai_tool) + ├─ Database Tool (ai_tool) + └─ Window Buffer Memory (ai_memory) +3. Webhook Response (send AI reply) +``` + +### Example 5: API Integration +``` +1. Manual Trigger (for testing) +2. HTTP Request (GET /api/users) +3. Split In Batches (process 100 at a time) +4. Set (transform user data) +5. Postgres (upsert users) +6. Loop (back to step 3 until done) +``` + +--- + +## Detailed Pattern Files + +For comprehensive guidance on each pattern: + +- **webhook_processing.md** - Webhook patterns, data structure, response handling +- **http_api_integration** - REST APIs, authentication, pagination, retries +- **database_operations.md** - Queries, sync, transactions, batch processing +- **ai_agent_workflow.md** - AI agents, tools, memory, langchain nodes +- **scheduled_tasks.md** - Cron schedules, reports, maintenance tasks + +--- + +## Real Template Examples + +From n8n template library: + +**Template #2947**: Weather to Slack +- Pattern: Scheduled Task +- Nodes: Schedule → HTTP Request (weather API) → Set → Slack +- Complexity: Simple (4 nodes) + +**Webhook Processing**: Most common pattern +- Most common: Form submissions, payment webhooks, chat integrations + +**HTTP API**: Common pattern +- Most common: Data fetching, third-party integrations + +**Database Operations**: Common pattern +- Most common: ETL, data sync, backup workflows + +**AI Agents**: Growing in usage +- Most common: Chatbots, content generation, data analysis + +Use `search_templates` and `get_template` from n8n-mcp tools to find examples! 
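To make the transformation step in examples like these concrete, here is a sketch of the JavaScript a Code node might run between an HTTP Request node and a database write. The field names (`id`, `email`, `first_name`, `last_name`) are hypothetical; in an actual Code node the items would come from `$input.all()`.

```javascript
// Sketch of a Code-node transformation: n8n passes items shaped as
// { json: {...} } and expects the same shape back.
function transformUsers(items) {
  return items
    .filter((item) => Boolean(item.json.email)) // drop records without an email
    .map((item) => ({
      json: {
        id: item.json.id,
        email: item.json.email.toLowerCase(),
        full_name: `${item.json.first_name || ""} ${item.json.last_name || ""}`.trim(),
      },
    }));
}

// Inside an actual n8n Code node, the final statement would be:
// return transformUsers($input.all());
```

Keeping the transformation in one pure function like this also makes the logic easy to test outside n8n before pasting it into the node.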
+ +--- + +## Best Practices + +### ✅ Do + +- Start with the simplest pattern that solves your problem +- Plan your workflow structure before building +- Use error handling on all workflows +- Test with sample data before activation +- Follow the workflow creation checklist +- Use descriptive node names +- Document complex workflows (notes field) +- Monitor workflow executions after deployment + +### ❌ Don't + +- Build workflows in one shot (iterate! avg 56s between edits) +- Skip validation before activation +- Ignore error scenarios +- Use complex patterns when simple ones suffice +- Hardcode credentials in parameters +- Forget to handle empty data cases +- Mix multiple patterns without clear boundaries +- Deploy without testing + +--- + +## Summary + +**Key Points**: +1. **5 core patterns** cover 90%+ of workflow use cases +2. **Webhook processing** is the most common pattern +3. Use the **workflow creation checklist** for every workflow +4. **Plan pattern** → **Select nodes** → **Build** → **Validate** → **Deploy** +5. Integrate with other skills for complete workflow development + +**Next Steps**: +1. Identify your use case pattern +2. Read the detailed pattern file +3. Use n8n MCP Tools Expert to find nodes +4. Follow the workflow creation checklist +5. 
Use n8n Validation Expert to validate + +**Related Skills**: +- n8n MCP Tools Expert - Find and configure nodes +- n8n Expression Syntax - Write expressions correctly +- n8n Validation Expert - Validate and fix errors +- n8n Node Configuration - Configure specific operations diff --git a/docs/skills/nodejs-backend-patterns/SKILL.md b/docs/skills/nodejs-backend-patterns/SKILL.md new file mode 100644 index 0000000..b9d01b1 --- /dev/null +++ b/docs/skills/nodejs-backend-patterns/SKILL.md @@ -0,0 +1,38 @@ +--- +name: nodejs-backend-patterns +description: "Comprehensive guidance for building scalable, maintainable, and production-ready Node.js backend applications with modern frameworks, architectural patterns, and best practices." +risk: safe +source: community +date_added: "2026-02-27" +--- + +# Node.js Backend Patterns + +Comprehensive guidance for building scalable, maintainable, and production-ready Node.js backend applications with modern frameworks, architectural patterns, and best practices. + +## Use this skill when + +- Building REST APIs or GraphQL servers +- Creating microservices with Node.js +- Implementing authentication and authorization +- Designing scalable backend architectures +- Setting up middleware and error handling +- Integrating databases (SQL and NoSQL) +- Building real-time applications with WebSockets +- Implementing background job processing + +## Do not use this skill when + +- The task is unrelated to node.js backend patterns +- You need a different domain or tool outside this scope + +## Instructions + +- Clarify goals, constraints, and required inputs. +- Apply relevant best practices and validate outcomes. +- Provide actionable steps and verification. +- If detailed examples are required, open `resources/implementation-playbook.md`. + +## Resources + +- `resources/implementation-playbook.md` for detailed patterns and examples. 
diff --git a/docs/skills/nodejs-backend-patterns/resources/implementation-playbook.md b/docs/skills/nodejs-backend-patterns/resources/implementation-playbook.md new file mode 100644 index 0000000..84446bf --- /dev/null +++ b/docs/skills/nodejs-backend-patterns/resources/implementation-playbook.md @@ -0,0 +1,1019 @@ +# Node.js Backend Patterns Implementation Playbook + +This file contains detailed patterns, checklists, and code samples referenced by the skill. + +# Node.js Backend Patterns + +Comprehensive guidance for building scalable, maintainable, and production-ready Node.js backend applications with modern frameworks, architectural patterns, and best practices. + +## When to Use This Skill + +- Building REST APIs or GraphQL servers +- Creating microservices with Node.js +- Implementing authentication and authorization +- Designing scalable backend architectures +- Setting up middleware and error handling +- Integrating databases (SQL and NoSQL) +- Building real-time applications with WebSockets +- Implementing background job processing + +## Core Frameworks + +### Express.js - Minimalist Framework + +**Basic Setup:** +```typescript +import express, { Request, Response, NextFunction } from 'express'; +import helmet from 'helmet'; +import cors from 'cors'; +import compression from 'compression'; + +const app = express(); + +// Security middleware +app.use(helmet()); +app.use(cors({ origin: process.env.ALLOWED_ORIGINS?.split(',') })); +app.use(compression()); + +// Body parsing +app.use(express.json({ limit: '10mb' })); +app.use(express.urlencoded({ extended: true, limit: '10mb' })); + +// Request logging +app.use((req: Request, res: Response, next: NextFunction) => { + console.log(`${req.method} ${req.path}`); + next(); +}); + +const PORT = process.env.PORT || 3000; +app.listen(PORT, () => { + console.log(`Server running on port ${PORT}`); +}); +``` + +### Fastify - High Performance Framework + +**Basic Setup:** +```typescript +import Fastify from 'fastify'; 
+import helmet from '@fastify/helmet'; +import cors from '@fastify/cors'; +import compress from '@fastify/compress'; + +const fastify = Fastify({ + logger: { + level: process.env.LOG_LEVEL || 'info', + transport: { + target: 'pino-pretty', + options: { colorize: true } + } + } +}); + +// Plugins +await fastify.register(helmet); +await fastify.register(cors, { origin: true }); +await fastify.register(compress); + +// Type-safe routes with schema validation +fastify.post<{ + Body: { name: string; email: string }; + Reply: { id: string; name: string }; +}>('/users', { + schema: { + body: { + type: 'object', + required: ['name', 'email'], + properties: { + name: { type: 'string', minLength: 1 }, + email: { type: 'string', format: 'email' } + } + } + } +}, async (request, reply) => { + const { name, email } = request.body; + return { id: '123', name }; +}); + +await fastify.listen({ port: 3000, host: '0.0.0.0' }); +``` + +## Architectural Patterns + +### Pattern 1: Layered Architecture + +**Structure:** +``` +src/ +├── controllers/ # Handle HTTP requests/responses +├── services/ # Business logic +├── repositories/ # Data access layer +├── models/ # Data models +├── middleware/ # Express/Fastify middleware +├── routes/ # Route definitions +├── utils/ # Helper functions +├── config/ # Configuration +└── types/ # TypeScript types +``` + +**Controller Layer:** +```typescript +// controllers/user.controller.ts +import { Request, Response, NextFunction } from 'express'; +import { UserService } from '../services/user.service'; +import { CreateUserDTO, UpdateUserDTO } from '../types/user.types'; + +export class UserController { + constructor(private userService: UserService) {} + + async createUser(req: Request, res: Response, next: NextFunction) { + try { + const userData: CreateUserDTO = req.body; + const user = await this.userService.createUser(userData); + res.status(201).json(user); + } catch (error) { + next(error); + } + } + + async getUser(req: Request, res: Response, 
next: NextFunction) { + try { + const { id } = req.params; + const user = await this.userService.getUserById(id); + res.json(user); + } catch (error) { + next(error); + } + } + + async updateUser(req: Request, res: Response, next: NextFunction) { + try { + const { id } = req.params; + const updates: UpdateUserDTO = req.body; + const user = await this.userService.updateUser(id, updates); + res.json(user); + } catch (error) { + next(error); + } + } + + async deleteUser(req: Request, res: Response, next: NextFunction) { + try { + const { id } = req.params; + await this.userService.deleteUser(id); + res.status(204).send(); + } catch (error) { + next(error); + } + } +} +``` + +**Service Layer:** +```typescript +// services/user.service.ts +import { UserRepository } from '../repositories/user.repository'; +import { CreateUserDTO, UpdateUserDTO, User } from '../types/user.types'; +import { NotFoundError, ValidationError } from '../utils/errors'; +import bcrypt from 'bcrypt'; + +export class UserService { + constructor(private userRepository: UserRepository) {} + + async createUser(userData: CreateUserDTO): Promise<User> { + // Validation + const existingUser = await this.userRepository.findByEmail(userData.email); + if (existingUser) { + throw new ValidationError('Email already exists'); + } + + // Hash password + const hashedPassword = await bcrypt.hash(userData.password, 10); + + // Create user + const user = await this.userRepository.create({ + ...userData, + password: hashedPassword + }); + + // Remove password from response + const { password, ...userWithoutPassword } = user; + return userWithoutPassword as User; + } + + async getUserById(id: string): Promise<User> { + const user = await this.userRepository.findById(id); + if (!user) { + throw new NotFoundError('User not found'); + } + const { password, ...userWithoutPassword } = user; + return userWithoutPassword as User; + } + + async updateUser(id: string, updates: UpdateUserDTO): Promise<User> { + const user = await
this.userRepository.update(id, updates); + if (!user) { + throw new NotFoundError('User not found'); + } + const { password, ...userWithoutPassword } = user; + return userWithoutPassword as User; + } + + async deleteUser(id: string): Promise<void> { + const deleted = await this.userRepository.delete(id); + if (!deleted) { + throw new NotFoundError('User not found'); + } + } +} +``` + +**Repository Layer:** +```typescript +// repositories/user.repository.ts +import { Pool } from 'pg'; +import { CreateUserDTO, UpdateUserDTO, UserEntity } from '../types/user.types'; + +export class UserRepository { + constructor(private db: Pool) {} + + async create(userData: CreateUserDTO & { password: string }): Promise<UserEntity> { + const query = ` + INSERT INTO users (name, email, password) + VALUES ($1, $2, $3) + RETURNING id, name, email, password, created_at, updated_at + `; + const { rows } = await this.db.query(query, [ + userData.name, + userData.email, + userData.password + ]); + return rows[0]; + } + + async findById(id: string): Promise<UserEntity | null> { + const query = 'SELECT * FROM users WHERE id = $1'; + const { rows } = await this.db.query(query, [id]); + return rows[0] || null; + } + + async findByEmail(email: string): Promise<UserEntity | null> { + const query = 'SELECT * FROM users WHERE email = $1'; + const { rows } = await this.db.query(query, [email]); + return rows[0] || null; + } + + async update(id: string, updates: UpdateUserDTO): Promise<UserEntity | null> { + const fields = Object.keys(updates); + const values = Object.values(updates); + + const setClause = fields + .map((field, idx) => `${field} = $${idx + 2}`) + .join(', '); + + const query = ` + UPDATE users + SET ${setClause}, updated_at = CURRENT_TIMESTAMP + WHERE id = $1 + RETURNING * + `; + + const { rows } = await this.db.query(query, [id, ...values]); + return rows[0] || null; + } + + async delete(id: string): Promise<boolean> { + const query = 'DELETE FROM users WHERE id = $1'; + const { rowCount } = await this.db.query(query, [id]); + return (rowCount ?? 0) > 0; // rowCount is typed number | null in pg + } +} +``` + 
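The three layers above compose without any framework: the service depends only on the repository's method shapes, not on PostgreSQL. A minimal, dependency-free sketch of that seam (the `InMemoryUserRepository` and `SimpleUserService` names are illustrative, not part of the playbook's API):

```typescript
// Illustrative only: an in-memory stand-in for the repository layer,
// showing that the service layer depends on behavior, not on a database.
interface UserRecord {
  id: string;
  name: string;
  email: string;
}

class InMemoryUserRepository {
  private users = new Map<string, UserRecord>();

  async create(data: Omit<UserRecord, 'id'>): Promise<UserRecord> {
    const user: UserRecord = { id: String(this.users.size + 1), ...data };
    this.users.set(user.id, user);
    return user;
  }

  async findByEmail(email: string): Promise<UserRecord | null> {
    for (const user of this.users.values()) {
      if (user.email === email) return user;
    }
    return null;
  }
}

class SimpleUserService {
  constructor(private repo: InMemoryUserRepository) {}

  async register(name: string, email: string): Promise<UserRecord> {
    // Same uniqueness rule as UserService.createUser above
    if (await this.repo.findByEmail(email)) {
      throw new Error('Email already exists');
    }
    return this.repo.create({ name, email });
  }
}

const service = new SimpleUserService(new InMemoryUserRepository());
service.register('Ada', 'ada@example.com').then(u => console.log(u.id)); // prints "1"
```

Swapping the in-memory stand-in for the PostgreSQL-backed `UserRepository` is then a single change at the composition root, which is exactly what a dependency-injection pattern formalizes.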
+### Pattern 2: Dependency Injection + +**DI Container:** +```typescript +// di-container.ts +import { Pool } from 'pg'; +import { UserRepository } from './repositories/user.repository'; +import { UserService } from './services/user.service'; +import { UserController } from './controllers/user.controller'; +import { AuthService } from './services/auth.service'; + +class Container { + private instances = new Map<string, () => any>(); + + register<T>(key: string, factory: () => T): void { + this.instances.set(key, factory); + } + + resolve<T>(key: string): T { + const factory = this.instances.get(key); + if (!factory) { + throw new Error(`No factory registered for ${key}`); + } + return factory(); + } + + singleton<T>(key: string, factory: () => T): void { + let instance: T; + this.instances.set(key, () => { + if (!instance) { + instance = factory(); + } + return instance; + }); + } +} + +export const container = new Container(); + +// Register dependencies +container.singleton('db', () => new Pool({ + host: process.env.DB_HOST, + port: parseInt(process.env.DB_PORT || '5432'), + database: process.env.DB_NAME, + user: process.env.DB_USER, + password: process.env.DB_PASSWORD, + max: 20, + idleTimeoutMillis: 30000, + connectionTimeoutMillis: 2000, +})); + +container.singleton('userRepository', () => + new UserRepository(container.resolve<Pool>('db')) +); + +container.singleton('userService', () => + new UserService(container.resolve<UserRepository>('userRepository')) +); + +container.register('userController', () => + new UserController(container.resolve<UserService>('userService')) +); + +container.singleton('authService', () => + new AuthService(container.resolve<UserRepository>('userRepository')) +); +``` + +## Middleware Patterns + +### Authentication Middleware + +```typescript +// middleware/auth.middleware.ts +import { Request, Response, NextFunction } from 'express'; +import jwt from 'jsonwebtoken'; +import { UnauthorizedError } from '../utils/errors'; + +interface JWTPayload { + userId: string; + email: string; + roles?: string[]; // checked by authorize() below +} + +declare global { + 
namespace Express { + interface Request { + user?: JWTPayload; + } + } +} + +export const authenticate = async ( + req: Request, + res: Response, + next: NextFunction +) => { + try { + const token = req.headers.authorization?.replace('Bearer ', ''); + + if (!token) { + throw new UnauthorizedError('No token provided'); + } + + const payload = jwt.verify( + token, + process.env.JWT_SECRET! + ) as JWTPayload; + + req.user = payload; + next(); + } catch (error) { + next(new UnauthorizedError('Invalid token')); + } +}; + +export const authorize = (...roles: string[]) => { + return async (req: Request, res: Response, next: NextFunction) => { + if (!req.user) { + return next(new UnauthorizedError('Not authenticated')); + } + + // Check if user has required role + const hasRole = roles.some(role => + req.user?.roles?.includes(role) + ); + + if (!hasRole) { + return next(new UnauthorizedError('Insufficient permissions')); + } + + next(); + }; +}; +``` + +### Validation Middleware + +```typescript +// middleware/validation.middleware.ts +import { Request, Response, NextFunction } from 'express'; +import { AnyZodObject, ZodError } from 'zod'; +import { ValidationError } from '../utils/errors'; + +export const validate = (schema: AnyZodObject) => { + return async (req: Request, res: Response, next: NextFunction) => { + try { + await schema.parseAsync({ + body: req.body, + query: req.query, + params: req.params + }); + next(); + } catch (error) { + if (error instanceof ZodError) { + const errors = error.errors.map(err => ({ + field: err.path.join('.'), + message: err.message + })); + next(new ValidationError('Validation failed', errors)); + } else { + next(error); + } + } + }; +}; + +// Usage with Zod (assumes an Express `router` and `userController` in scope) +import { z } from 'zod'; + +const createUserSchema = z.object({ + body: z.object({ + name: z.string().min(1), + email: z.string().email(), + password: z.string().min(8) + }) +}); + +router.post('/users', validate(createUserSchema), userController.createUser); +``` + +### Rate Limiting Middleware + +```typescript +// middleware/rate-limit.middleware.ts +import rateLimit from 'express-rate-limit'; +import RedisStore from 'rate-limit-redis'; +import Redis from 'ioredis'; + +const redis = new Redis({ + host: process.env.REDIS_HOST, + port: parseInt(process.env.REDIS_PORT || '6379') +}); + +export const apiLimiter = rateLimit({ + store: new RedisStore({ + client: redis, + prefix: 'rl:', + }), + windowMs: 15 * 60 * 1000, // 15 minutes + max: 100, // Limit each IP to 100 requests per windowMs + message: 'Too many requests from this IP, please try again later', + standardHeaders: true, + legacyHeaders: false, +}); + +export const authLimiter = rateLimit({ + store: new RedisStore({ + client: redis, + prefix: 'rl:auth:', + }), + windowMs: 15 * 60 * 1000, + max: 5, // Stricter limit for auth endpoints + skipSuccessfulRequests: true, +}); +``` + +### Request Logging Middleware + +```typescript +// middleware/logger.middleware.ts +import { Request, Response, NextFunction } from 'express'; +import pino from 'pino'; + +const logger = pino({ + level: process.env.LOG_LEVEL || 'info', + transport: { + target: 'pino-pretty', + options: { colorize: true } + } +}); + +export const requestLogger = ( + req: Request, + res: Response, + next: NextFunction +) => { + const start = Date.now(); + + // Log response when finished + res.on('finish', () => { + const duration = Date.now() - start; + logger.info({ + method: req.method, + url: req.url, + status: res.statusCode, + duration: `${duration}ms`, + userAgent: req.headers['user-agent'], + ip: req.ip + }); + }); + + next(); +}; + +export { logger }; +``` + +## Error Handling + +### Custom Error Classes + +```typescript +// utils/errors.ts +export class AppError extends Error { + constructor( + public message: string, + public statusCode: number = 500, + public isOperational: boolean = true + ) { + super(message); + Object.setPrototypeOf(this, AppError.prototype); + Error.captureStackTrace(this, this.constructor);
+ } +} + +export class ValidationError extends AppError { + constructor(message: string, public errors?: any[]) { + super(message, 400); + } +} + +export class NotFoundError extends AppError { + constructor(message: string = 'Resource not found') { + super(message, 404); + } +} + +export class UnauthorizedError extends AppError { + constructor(message: string = 'Unauthorized') { + super(message, 401); + } +} + +export class ForbiddenError extends AppError { + constructor(message: string = 'Forbidden') { + super(message, 403); + } +} + +export class ConflictError extends AppError { + constructor(message: string) { + super(message, 409); + } +} +``` + +### Global Error Handler + +```typescript +// middleware/error-handler.ts +import { Request, Response, NextFunction } from 'express'; +import { AppError, ValidationError } from '../utils/errors'; +import { logger } from './logger.middleware'; + +export const errorHandler = ( + err: Error, + req: Request, + res: Response, + next: NextFunction +) => { + if (err instanceof AppError) { + return res.status(err.statusCode).json({ + status: 'error', + message: err.message, + ...(err instanceof ValidationError && { errors: err.errors }) + }); + } + + // Log unexpected errors + logger.error({ + error: err.message, + stack: err.stack, + url: req.url, + method: req.method + }); + + // Don't leak error details in production + const message = process.env.NODE_ENV === 'production' + ? 
'Internal server error' + : err.message; + + res.status(500).json({ + status: 'error', + message + }); +}; + +// Async error wrapper +export const asyncHandler = ( + fn: (req: Request, res: Response, next: NextFunction) => Promise<any> +) => { + return (req: Request, res: Response, next: NextFunction) => { + Promise.resolve(fn(req, res, next)).catch(next); + }; +}; +``` + +## Database Patterns + +### PostgreSQL with Connection Pool + +```typescript +// config/database.ts +import { Pool, PoolConfig } from 'pg'; + +const poolConfig: PoolConfig = { + host: process.env.DB_HOST, + port: parseInt(process.env.DB_PORT || '5432'), + database: process.env.DB_NAME, + user: process.env.DB_USER, + password: process.env.DB_PASSWORD, + max: 20, + idleTimeoutMillis: 30000, + connectionTimeoutMillis: 2000, +}; + +export const pool = new Pool(poolConfig); + +// Connection lifecycle logging +pool.on('connect', () => { + console.log('Database connected'); +}); + +pool.on('error', (err) => { + console.error('Unexpected database error', err); + process.exit(-1); +}); + +// Graceful shutdown +export const closeDatabase = async () => { + await pool.end(); + console.log('Database connection closed'); +}; +``` + +### MongoDB with Mongoose + +```typescript +// config/mongoose.ts +import mongoose from 'mongoose'; + +const connectDB = async () => { + try { + await mongoose.connect(process.env.MONGODB_URI!, { + maxPoolSize: 10, + serverSelectionTimeoutMS: 5000, + socketTimeoutMS: 45000, + }); + + console.log('MongoDB connected'); + } catch (error) { + console.error('MongoDB connection error:', error); + process.exit(1); + } +}; + +mongoose.connection.on('disconnected', () => { + console.log('MongoDB disconnected'); +}); + +mongoose.connection.on('error', (err) => { + console.error('MongoDB error:', err); +}); + +export { connectDB }; + +// Model example +import { Schema, model, Document } from 'mongoose'; + +interface IUser extends Document { + name: string; + email: string; + password: string; + createdAt: 
Date; + updatedAt: Date; +} + +const userSchema = new Schema<IUser>({ + name: { type: String, required: true }, + email: { type: String, required: true, unique: true }, + password: { type: String, required: true }, +}, { + timestamps: true +}); + +// Note: `unique: true` above already creates the email index; +// a separate userSchema.index({ email: 1 }) call would duplicate it. + +export const User = model<IUser>('User', userSchema); +``` + +### Transaction Pattern + +```typescript +// services/order.service.ts +import { Pool } from 'pg'; + +export class OrderService { + constructor(private db: Pool) {} + + async createOrder(userId: string, items: any[]) { + const client = await this.db.connect(); + + try { + await client.query('BEGIN'); + + // Create order (calculateTotal is assumed to be defined elsewhere) + const orderResult = await client.query( + 'INSERT INTO orders (user_id, total) VALUES ($1, $2) RETURNING id', + [userId, calculateTotal(items)] + ); + const orderId = orderResult.rows[0].id; + + // Create order items + for (const item of items) { + await client.query( + 'INSERT INTO order_items (order_id, product_id, quantity, price) VALUES ($1, $2, $3, $4)', + [orderId, item.productId, item.quantity, item.price] + ); + + // Update inventory + await client.query( + 'UPDATE products SET stock = stock - $1 WHERE id = $2', + [item.quantity, item.productId] + ); + } + + await client.query('COMMIT'); + return orderId; + } catch (error) { + await client.query('ROLLBACK'); + throw error; + } finally { + client.release(); + } + } +} +``` + +## Authentication & Authorization + +### JWT Authentication + +```typescript +// services/auth.service.ts +import jwt from 'jsonwebtoken'; +import bcrypt from 'bcrypt'; +import { UserRepository } from '../repositories/user.repository'; +import { UnauthorizedError } from '../utils/errors'; + +export class AuthService { + constructor(private userRepository: UserRepository) {} + + async login(email: string, password: string) { + const user = await this.userRepository.findByEmail(email); + + if (!user) { + throw new UnauthorizedError('Invalid credentials'); + } + + const isValid = await 
bcrypt.compare(password, user.password); + + if (!isValid) { + throw new UnauthorizedError('Invalid credentials'); + } + + const token = this.generateToken({ + userId: user.id, + email: user.email + }); + + const refreshToken = this.generateRefreshToken({ + userId: user.id + }); + + return { + token, + refreshToken, + user: { + id: user.id, + name: user.name, + email: user.email + } + }; + } + + async refreshToken(refreshToken: string) { + try { + const payload = jwt.verify( + refreshToken, + process.env.REFRESH_TOKEN_SECRET! + ) as { userId: string }; + + const user = await this.userRepository.findById(payload.userId); + + if (!user) { + throw new UnauthorizedError('User not found'); + } + + const token = this.generateToken({ + userId: user.id, + email: user.email + }); + + return { token }; + } catch (error) { + throw new UnauthorizedError('Invalid refresh token'); + } + } + + private generateToken(payload: any): string { + return jwt.sign(payload, process.env.JWT_SECRET!, { + expiresIn: '15m' + }); + } + + private generateRefreshToken(payload: any): string { + return jwt.sign(payload, process.env.REFRESH_TOKEN_SECRET!, { + expiresIn: '7d' + }); + } +} +``` + +## Caching Strategies + +```typescript +// utils/cache.ts +import Redis from 'ioredis'; + +const redis = new Redis({ + host: process.env.REDIS_HOST, + port: parseInt(process.env.REDIS_PORT || '6379'), + retryStrategy: (times) => { + const delay = Math.min(times * 50, 2000); + return delay; + } +}); + +export class CacheService { + async get<T>(key: string): Promise<T | null> { + const data = await redis.get(key); + return data ? 
JSON.parse(data) : null; + } + + async set(key: string, value: any, ttl?: number): Promise<void> { + const serialized = JSON.stringify(value); + if (ttl) { + await redis.setex(key, ttl, serialized); + } else { + await redis.set(key, serialized); + } + } + + async delete(key: string): Promise<void> { + await redis.del(key); + } + + async invalidatePattern(pattern: string): Promise<void> { + // Note: KEYS blocks Redis on large keyspaces; prefer SCAN in production + const keys = await redis.keys(pattern); + if (keys.length > 0) { + await redis.del(...keys); + } + } +} + +// Cache decorator +export function Cacheable(ttl: number = 300) { + return function ( + target: any, + propertyKey: string, + descriptor: PropertyDescriptor + ) { + const originalMethod = descriptor.value; + + descriptor.value = async function (...args: any[]) { + const cache = new CacheService(); + const cacheKey = `${propertyKey}:${JSON.stringify(args)}`; + + const cached = await cache.get(cacheKey); + if (cached) { + return cached; + } + + const result = await originalMethod.apply(this, args); + await cache.set(cacheKey, result, ttl); + + return result; + }; + + return descriptor; + }; +} +``` + +## API Response Format + +```typescript +// utils/response.ts +import { Response } from 'express'; + +export class ApiResponse { + static success<T>(res: Response, data: T, message?: string, statusCode = 200) { + return res.status(statusCode).json({ + status: 'success', + message, + data + }); + } + + static error(res: Response, message: string, statusCode = 500, errors?: any) { + return res.status(statusCode).json({ + status: 'error', + message, + ...(errors && { errors }) + }); + } + + static paginated<T>( + res: Response, + data: T[], + page: number, + limit: number, + total: number + ) { + return res.json({ + status: 'success', + data, + pagination: { + page, + limit, + total, + pages: Math.ceil(total / limit) + } + }); + } +} +``` + +## Best Practices + +1. **Use TypeScript**: Type safety prevents runtime errors +2. **Implement proper error handling**: Use custom error classes +3. **Validate input**: Use libraries like Zod or Joi +4. **Use environment variables**: Never hardcode secrets +5. **Implement logging**: Use structured logging (Pino, Winston) +6. **Add rate limiting**: Prevent abuse +7. **Use HTTPS**: Always in production +8. **Implement CORS properly**: Don't use `*` in production +9. **Use dependency injection**: Easier testing and maintenance +10. **Write tests**: Unit, integration, and E2E tests +11. **Handle graceful shutdown**: Clean up resources +12. **Use connection pooling**: For databases +13. **Implement health checks**: For monitoring +14. **Use compression**: Reduce response size +15. **Monitor performance**: Use APM tools + +## Testing Patterns + +See `javascript-testing-patterns` skill for comprehensive testing guidance. + +## Resources + +- **Node.js Best Practices**: https://github.com/goldbergyoni/nodebestpractices +- **Express.js Guide**: https://expressjs.com/en/guide/ +- **Fastify Documentation**: https://www.fastify.io/docs/ +- **TypeScript Node Starter**: https://github.com/microsoft/TypeScript-Node-Starter
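Two items from the best-practices checklist (graceful shutdown, #11, and health checks, #13) are easy to get wrong, so here is a framework-free sketch using only Node's built-in `http` module. The 10-second drain timeout is an illustrative choice; real deployments should also close database pools (e.g. the `closeDatabase` helper shown earlier) before exiting:

```typescript
import http from 'node:http';

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    // Liveness endpoint for load balancers and orchestrators
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok', uptime: process.uptime() }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(process.env.PORT || 3000);

function shutdown(signal: string) {
  console.log(`${signal} received, draining connections...`);
  // Stop accepting new connections; the callback fires once
  // all in-flight requests have completed.
  server.close(() => process.exit(0));
  // Safety net: force-exit if draining hangs (e.g. idle keep-alive sockets).
  // unref() lets the process exit earlier if draining finishes first.
  setTimeout(() => process.exit(1), 10_000).unref();
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
```

Process managers and orchestrators (systemd, Kubernetes) send `SIGTERM` before killing a container, so handling it is what makes zero-downtime deploys possible.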