feat: add DBOS skills for TypeScript, Python, and Go (#94)
Add three DBOS SDK skills with reference documentation for building reliable, fault-tolerant applications with durable workflows.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
skills/dbos-golang/AGENTS.md (new file, 92 lines)
@@ -0,0 +1,92 @@
# dbos-golang

> **Note:** `CLAUDE.md` is a symlink to this file.

## Overview

DBOS Go SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing Go code with DBOS, creating workflows and steps, using queues, using the DBOS Client from external applications, or building Go applications that need to be resilient to failures.

## Structure

```
dbos-golang/
  SKILL.md       # Main skill file - read this first
  AGENTS.md      # This navigation guide
  CLAUDE.md      # Symlink to AGENTS.md
  references/    # Detailed reference files
```

## Usage

1. Read `SKILL.md` for the main skill instructions
2. Browse `references/` for detailed documentation on specific topics
3. Reference files are loaded on-demand - read only what you need

## Reference Categories

| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Lifecycle | CRITICAL | `lifecycle-` |
| 2 | Workflow | CRITICAL | `workflow-` |
| 3 | Step | HIGH | `step-` |
| 4 | Queue | HIGH | `queue-` |
| 5 | Communication | MEDIUM | `comm-` |
| 6 | Pattern | MEDIUM | `pattern-` |
| 7 | Testing | LOW-MEDIUM | `test-` |
| 8 | Client | MEDIUM | `client-` |
| 9 | Advanced | LOW | `advanced-` |

Reference files are named `{prefix}-{topic}.md` (e.g., `workflow-determinism.md`).

## Available References

**Advanced** (`advanced-`):
- `references/advanced-patching.md`
- `references/advanced-versioning.md`

**Client** (`client-`):
- `references/client-enqueue.md`
- `references/client-setup.md`

**Communication** (`comm-`):
- `references/comm-events.md`
- `references/comm-messages.md`
- `references/comm-streaming.md`

**Lifecycle** (`lifecycle-`):
- `references/lifecycle-config.md`

**Pattern** (`pattern-`):
- `references/pattern-debouncing.md`
- `references/pattern-idempotency.md`
- `references/pattern-scheduled.md`
- `references/pattern-sleep.md`

**Queue** (`queue-`):
- `references/queue-basics.md`
- `references/queue-concurrency.md`
- `references/queue-deduplication.md`
- `references/queue-listening.md`
- `references/queue-partitioning.md`
- `references/queue-priority.md`
- `references/queue-rate-limiting.md`

**Step** (`step-`):
- `references/step-basics.md`
- `references/step-concurrency.md`
- `references/step-retries.md`

**Testing** (`test-`):
- `references/test-setup.md`

**Workflow** (`workflow-`):
- `references/workflow-background.md`
- `references/workflow-constraints.md`
- `references/workflow-control.md`
- `references/workflow-determinism.md`
- `references/workflow-introspection.md`
- `references/workflow-timeout.md`

---

*29 reference files across 9 categories*
skills/dbos-golang/CLAUDE.md (new symbolic link, 1 line)
@@ -0,0 +1 @@
AGENTS.md
skills/dbos-golang/SKILL.md (new file, 133 lines)
@@ -0,0 +1,133 @@
---
name: dbos-golang
description: DBOS Go SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing Go code with DBOS, creating workflows and steps, using queues, using the DBOS Client from external applications, or building Go applications that need to be resilient to failures.
risk: safe
source: https://docs.dbos.dev/
license: MIT
metadata:
  author: dbos
  version: "1.0.0"
  organization: DBOS
  date: February 2026
  abstract: Comprehensive guide for building fault-tolerant Go applications with DBOS. Covers workflows, steps, queues, communication patterns, and best practices for durable execution.
---

# DBOS Go Best Practices

Guide for building reliable, fault-tolerant Go applications with DBOS durable workflows.

## When to Use

Reference these guidelines when:
- Adding DBOS to existing Go code
- Creating workflows and steps
- Using queues for concurrency control
- Implementing workflow communication (events, messages, streams)
- Configuring and launching DBOS applications
- Using the DBOS Client from external applications
- Testing DBOS applications

## Rule Categories by Priority

| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Lifecycle | CRITICAL | `lifecycle-` |
| 2 | Workflow | CRITICAL | `workflow-` |
| 3 | Step | HIGH | `step-` |
| 4 | Queue | HIGH | `queue-` |
| 5 | Communication | MEDIUM | `comm-` |
| 6 | Pattern | MEDIUM | `pattern-` |
| 7 | Testing | LOW-MEDIUM | `test-` |
| 8 | Client | MEDIUM | `client-` |
| 9 | Advanced | LOW | `advanced-` |

## Critical Rules

### Installation

Install the DBOS Go module:

```bash
go get github.com/dbos-inc/dbos-transact-golang/dbos@latest
```

### DBOS Configuration and Launch

A DBOS application MUST create a context, register workflows, and launch before running any workflows:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/dbos-inc/dbos-transact-golang/dbos"
)

func main() {
	ctx, err := dbos.NewDBOSContext(context.Background(), dbos.Config{
		AppName:     "my-app",
		DatabaseURL: os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer dbos.Shutdown(ctx, 30*time.Second)

	dbos.RegisterWorkflow(ctx, myWorkflow)

	if err := dbos.Launch(ctx); err != nil {
		log.Fatal(err)
	}
}
```

### Workflow and Step Structure

Workflows are composed of steps. Any function performing complex operations or accessing external services must run as a step using `dbos.RunAsStep`:

```go
func fetchData(ctx context.Context) (string, error) {
	resp, err := http.Get("https://api.example.com/data")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	result, err := dbos.RunAsStep(ctx, fetchData, dbos.WithStepName("fetchData"))
	if err != nil {
		return "", err
	}
	return result, nil
}
```

### Key Constraints

- Do NOT start or enqueue workflows from within steps
- Do NOT use uncontrolled goroutines to start workflows - use `dbos.RunWorkflow` with queues or `dbos.Go`/`dbos.Select` for concurrent steps
- Workflows MUST be deterministic - non-deterministic operations go in steps
- Do NOT modify global variables from workflows or steps
- All workflows and queues MUST be registered before calling `Launch()`
## How to Use

Read individual rule files for detailed explanations and examples:

```
references/lifecycle-config.md
references/workflow-determinism.md
references/queue-concurrency.md
```

## References

- https://docs.dbos.dev/
- https://github.com/dbos-inc/dbos-transact-golang
skills/dbos-golang/references/_sections.md (new file, 41 lines)
@@ -0,0 +1,41 @@
# Section Definitions

This file defines the rule categories for DBOS Go best practices. Rules are automatically assigned to sections based on their filename prefix.

---

## 1. Lifecycle (lifecycle)
**Impact:** CRITICAL
**Description:** DBOS configuration, initialization, and launch patterns. Foundation for all DBOS applications.

## 2. Workflow (workflow)
**Impact:** CRITICAL
**Description:** Workflow creation, determinism requirements, background execution, and workflow IDs.

## 3. Step (step)
**Impact:** HIGH
**Description:** Step creation, retries, concurrent steps with Go/Select, and when to use steps vs workflows.

## 4. Queue (queue)
**Impact:** HIGH
**Description:** Queue creation, concurrency limits, rate limiting, partitioning, and priority.

## 5. Communication (comm)
**Impact:** MEDIUM
**Description:** Workflow events, messages, and streaming for inter-workflow communication.

## 6. Pattern (pattern)
**Impact:** MEDIUM
**Description:** Common patterns including idempotency, scheduled workflows, debouncing, and durable sleep.

## 7. Testing (test)
**Impact:** LOW-MEDIUM
**Description:** Testing DBOS applications with Go's testing package, mocks, and integration test setup.

## 8. Client (client)
**Impact:** MEDIUM
**Description:** DBOS Client for interacting with DBOS from external applications.

## 9. Advanced (advanced)
**Impact:** LOW
**Description:** Workflow versioning, patching, and safe code upgrades.
skills/dbos-golang/references/advanced-patching.md (new file, 86 lines)
@@ -0,0 +1,86 @@
---
title: Use Patching for Safe Workflow Upgrades
impact: LOW
impactDescription: Safely deploy breaking workflow changes without disrupting in-progress workflows
tags: advanced, patching, upgrade, breaking-change
---

## Use Patching for Safe Workflow Upgrades

Use `dbos.Patch` to safely deploy breaking changes to workflow code. Breaking changes alter which steps run or their order, which can cause recovery failures.

**Incorrect (breaking change without patching):**

```go
// BEFORE: original workflow
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	result, _ := dbos.RunAsStep(ctx, foo, dbos.WithStepName("foo"))
	_, _ = dbos.RunAsStep(ctx, bar, dbos.WithStepName("bar"))
	return result, nil
}

// AFTER: breaking change - recovery will fail for in-progress workflows!
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	result, _ := dbos.RunAsStep(ctx, baz, dbos.WithStepName("baz")) // Changed step
	_, _ = dbos.RunAsStep(ctx, bar, dbos.WithStepName("bar"))
	return result, nil
}
```

**Correct (using patch):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	useBaz, err := dbos.Patch(ctx, "use-baz")
	if err != nil {
		return "", err
	}
	var result string
	if useBaz {
		result, _ = dbos.RunAsStep(ctx, baz, dbos.WithStepName("baz")) // New workflows
	} else {
		result, _ = dbos.RunAsStep(ctx, foo, dbos.WithStepName("foo")) // Old workflows
	}
	_, _ = dbos.RunAsStep(ctx, bar, dbos.WithStepName("bar"))
	return result, nil
}
```

`dbos.Patch` returns `true` for new workflows and `false` for workflows that started before the patch.

**Deprecating patches (after all old workflows complete):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	dbos.DeprecatePatch(ctx, "use-baz") // Always takes the new path
	result, _ := dbos.RunAsStep(ctx, baz, dbos.WithStepName("baz"))
	_, _ = dbos.RunAsStep(ctx, bar, dbos.WithStepName("bar"))
	return result, nil
}
```

**Removing patches (after all workflows using DeprecatePatch complete):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	result, _ := dbos.RunAsStep(ctx, baz, dbos.WithStepName("baz"))
	_, _ = dbos.RunAsStep(ctx, bar, dbos.WithStepName("bar"))
	return result, nil
}
```

Lifecycle: `Patch()` → deploy → wait for old workflows → `DeprecatePatch()` → deploy → wait → remove patch entirely.

**Required configuration** (patching must be explicitly enabled):

```go
ctx, _ := dbos.NewDBOSContext(context.Background(), dbos.Config{
	AppName:        "my-app",
	DatabaseURL:    os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
	EnablePatching: true, // Required for dbos.Patch and dbos.DeprecatePatch
})
```

Without `EnablePatching: true`, calls to `dbos.Patch` and `dbos.DeprecatePatch` will fail.

Reference: [Patching](https://docs.dbos.dev/golang/tutorials/upgrading-workflows#patching)
skills/dbos-golang/references/advanced-versioning.md (new file, 62 lines)
@@ -0,0 +1,62 @@
---
title: Use Versioning for Blue-Green Deployments
impact: LOW
impactDescription: Enables safe deployment of new code versions alongside old ones
tags: advanced, versioning, blue-green, deployment
---

## Use Versioning for Blue-Green Deployments

Set `ApplicationVersion` in configuration to tag workflows with a version. DBOS only recovers workflows matching the current application version, preventing code mismatches during recovery.

**Incorrect (deploying new code that breaks in-progress workflows):**

```go
ctx, _ := dbos.NewDBOSContext(context.Background(), dbos.Config{
	AppName:     "my-app",
	DatabaseURL: os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
	// No version set - version auto-computed from binary hash
	// Old workflows will be recovered with new code, which may break
})
```

**Correct (versioned deployment):**

```go
ctx, _ := dbos.NewDBOSContext(context.Background(), dbos.Config{
	AppName:            "my-app",
	DatabaseURL:        os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
	ApplicationVersion: "2.0.0",
})
```

By default, the application version is automatically computed from a SHA-256 hash of the executable binary. Set it explicitly for more control.
**Blue-green deployment strategy:**

1. Deploy new version (v2) alongside old version (v1)
2. Direct new traffic to v2 processes
3. Let v1 processes "drain" (complete in-progress workflows)
4. Check for remaining v1 workflows:

```go
oldWorkflows, _ := dbos.ListWorkflows(ctx,
	dbos.WithAppVersion("1.0.0"),
	dbos.WithStatus([]dbos.WorkflowStatusType{dbos.WorkflowStatusPending}),
)
```

5. Once all v1 workflows are complete, retire v1 processes

**Fork to new version (for stuck workflows):**

```go
// Fork a workflow from a failed step to run on the new version
handle, _ := dbos.ForkWorkflow[string](ctx, dbos.ForkWorkflowInput{
	OriginalWorkflowID: workflowID,
	StartStep:          failedStepID,
	ApplicationVersion: "2.0.0",
})
```

Reference: [Versioning](https://docs.dbos.dev/golang/tutorials/upgrading-workflows#versioning)
skills/dbos-golang/references/client-enqueue.md (new file, 65 lines)
@@ -0,0 +1,65 @@
---
title: Enqueue Workflows from External Applications
impact: HIGH
impactDescription: Enables external services to submit work to DBOS queues
tags: client, enqueue, external, queue
---

## Enqueue Workflows from External Applications

Use `client.Enqueue()` to submit workflows from outside your DBOS application. Since the Client runs externally, workflow and queue metadata must be specified explicitly by name.

**Incorrect (trying to use RunWorkflow from external code):**

```go
// RunWorkflow requires a full DBOS context with registered workflows
dbos.RunWorkflow(ctx, processTask, "data", dbos.WithQueue("myQueue"))
```

**Correct (using Client.Enqueue):**

```go
client, err := dbos.NewClient(context.Background(), dbos.ClientConfig{
	DatabaseURL: os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
})
if err != nil {
	log.Fatal(err)
}
defer client.Shutdown(10 * time.Second)

// Basic enqueue - specify workflow and queue by name
handle, err := client.Enqueue("task_queue", "processTask", "task-data")
if err != nil {
	log.Fatal(err)
}

// Wait for the result
result, err := handle.GetResult()
```

**Enqueue with options:**

```go
handle, err := client.Enqueue("task_queue", "processTask", "task-data",
	dbos.WithEnqueueWorkflowID("custom-id"),
	dbos.WithEnqueueDeduplicationID("unique-id"),
	dbos.WithEnqueuePriority(10),
	dbos.WithEnqueueTimeout(5*time.Minute),
	dbos.WithEnqueueQueuePartitionKey("user-123"),
	dbos.WithEnqueueApplicationVersion("2.0.0"),
)
```

Enqueue options:
- `WithEnqueueWorkflowID`: Custom workflow ID
- `WithEnqueueDeduplicationID`: Prevent duplicate enqueues
- `WithEnqueuePriority`: Queue priority (lower = higher priority)
- `WithEnqueueTimeout`: Workflow timeout
- `WithEnqueueQueuePartitionKey`: Partition key for partitioned queues
- `WithEnqueueApplicationVersion`: Override application version

The workflow name must match the registered name or custom name set with `WithWorkflowName` during registration.

Always call `client.Shutdown()` when done.

Reference: [DBOS Client Enqueue](https://docs.dbos.dev/golang/reference/client#enqueue)
skills/dbos-golang/references/client-setup.md (new file, 65 lines)
@@ -0,0 +1,65 @@
---
title: Initialize Client for External Access
impact: HIGH
impactDescription: Enables external applications to interact with DBOS workflows
tags: client, external, setup, initialization
---

## Initialize Client for External Access

Use `dbos.NewClient` to interact with DBOS from external applications like API servers, CLI tools, or separate services. The Client connects directly to the DBOS system database.

**Incorrect (using full DBOS context from an external app):**

```go
// Full DBOS context requires Launch() - too heavy for external clients
ctx, _ := dbos.NewDBOSContext(context.Background(), config)
dbos.Launch(ctx)
```

**Correct (using Client):**

```go
client, err := dbos.NewClient(context.Background(), dbos.ClientConfig{
	DatabaseURL: os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
})
if err != nil {
	log.Fatal(err)
}
defer client.Shutdown(10 * time.Second)

// Send a message to a workflow
err = client.Send(workflowID, "notification", "topic")

// Get an event from a workflow
event, err := client.GetEvent(workflowID, "status", 60*time.Second)

// Retrieve a workflow handle
handle, err := client.RetrieveWorkflow(workflowID)
result, err := handle.GetResult()

// List workflows
workflows, err := client.ListWorkflows(
	dbos.WithStatus([]dbos.WorkflowStatusType{dbos.WorkflowStatusError}),
)

// Workflow management
err = client.CancelWorkflow(workflowID)
handle, err = client.ResumeWorkflow(workflowID)

// Read a stream
values, closed, err := client.ClientReadStream(workflowID, "results")

// Read a stream asynchronously
ch, err := client.ClientReadStreamAsync(workflowID, "results")
```

ClientConfig options:
- `DatabaseURL` (required unless `SystemDBPool` is set): PostgreSQL connection string
- `SystemDBPool`: Custom `*pgxpool.Pool`
- `DatabaseSchema`: Schema name (default: `"dbos"`)
- `Logger`: Custom `*slog.Logger`

Always call `client.Shutdown()` when done.

Reference: [DBOS Client](https://docs.dbos.dev/golang/reference/client)
skills/dbos-golang/references/comm-events.md (new file, 69 lines)
@@ -0,0 +1,69 @@
---
title: Use Events for Workflow Status Publishing
impact: MEDIUM
impactDescription: Enables real-time progress monitoring and interactive workflows
tags: communication, events, status, key-value
---

## Use Events for Workflow Status Publishing

Workflows can publish events (key-value pairs) with `dbos.SetEvent`. Other code can read events with `dbos.GetEvent`. Events are persisted and useful for real-time progress monitoring.

**Incorrect (using external state for progress):**

```go
var progress int // Global variable - not durable!

func processData(ctx dbos.DBOSContext, input string) (string, error) {
	progress = 50 // Not persisted, lost on restart
	return input, nil
}
```

**Correct (using events):**

```go
func processData(ctx dbos.DBOSContext, input string) (string, error) {
	dbos.SetEvent(ctx, "status", "processing")
	_, err := dbos.RunAsStep(ctx, stepOne, dbos.WithStepName("stepOne"))
	if err != nil {
		return "", err
	}
	dbos.SetEvent(ctx, "progress", 50)
	_, err = dbos.RunAsStep(ctx, stepTwo, dbos.WithStepName("stepTwo"))
	if err != nil {
		return "", err
	}
	dbos.SetEvent(ctx, "progress", 100)
	dbos.SetEvent(ctx, "status", "complete")
	return "done", nil
}

// Read events from outside the workflow
status, err := dbos.GetEvent[string](ctx, workflowID, "status", 60*time.Second)
progress, err := dbos.GetEvent[int](ctx, workflowID, "progress", 60*time.Second)
```

Events are useful for interactive workflows. For example, a checkout workflow can publish a payment URL for the caller to redirect to:

```go
func checkoutWorkflow(ctx dbos.DBOSContext, order Order) (string, error) {
	paymentURL, err := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
		return createPayment(order)
	}, dbos.WithStepName("createPayment"))
	if err != nil {
		return "", err
	}
	dbos.SetEvent(ctx, "paymentURL", paymentURL)
	// Continue processing...
	return "success", nil
}

// HTTP handler starts workflow and reads the payment URL
handle, _ := dbos.RunWorkflow(ctx, checkoutWorkflow, order)
url, _ := dbos.GetEvent[string](ctx, handle.GetWorkflowID(), "paymentURL", 300*time.Second)
```

`GetEvent` blocks until the event is set or the timeout expires. It returns the zero value of the type if the timeout is reached.

Reference: [Workflow Events](https://docs.dbos.dev/golang/tutorials/workflow-communication#workflow-events)
skills/dbos-golang/references/comm-messages.md (new file, 57 lines)
@@ -0,0 +1,57 @@
---
title: Use Messages for Workflow Notifications
impact: MEDIUM
impactDescription: Enables reliable inter-workflow and external-to-workflow communication
tags: communication, messages, send, recv, notification
---

## Use Messages for Workflow Notifications

Use `dbos.Send` to send messages to a workflow and `dbos.Recv` to receive them. Messages are queued per topic and persisted for reliable delivery.

**Incorrect (using external messaging for workflow communication):**

```go
// External message queue is not integrated with workflow recovery
ch := make(chan string) // Not durable!
```

**Correct (using DBOS messages):**

```go
func checkoutWorkflow(ctx dbos.DBOSContext, orderID string) (string, error) {
	// Wait for payment notification (timeout 120 seconds)
	notification, err := dbos.Recv[string](ctx, "payment_status", 120*time.Second)
	if err != nil {
		return "", err
	}

	if notification == "paid" {
		_, err = dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
			return fulfillOrder(orderID)
		}, dbos.WithStepName("fulfillOrder"))
		return "fulfilled", err
	}
	_, err = dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
		return cancelOrder(orderID)
	}, dbos.WithStepName("cancelOrder"))
	return "cancelled", err
}

// Send a message from a webhook handler
func paymentWebhook(ctx dbos.DBOSContext, workflowID, status string) error {
	return dbos.Send(ctx, workflowID, status, "payment_status")
}
```

Key behaviors:
- `Recv` waits for and consumes the next message for the specified topic
- Returns the zero value and a `DBOSError` with code `TimeoutError` if the wait times out
- Messages without a topic can only be received by `Recv` without a topic
- Messages are queued per-topic (FIFO)
**Reliability guarantees:**
- All messages are persisted to the database
- Messages sent from workflows are delivered exactly-once

Reference: [Workflow Messaging and Notifications](https://docs.dbos.dev/golang/tutorials/workflow-communication#workflow-messaging-and-notifications)
skills/dbos-golang/references/comm-streaming.md (new file, 75 lines)
@@ -0,0 +1,75 @@
---
title: Use Streams for Real-Time Data
impact: MEDIUM
impactDescription: Enables streaming results from long-running workflows
tags: communication, stream, real-time, channel
---

## Use Streams for Real-Time Data

Workflows can stream data to clients in real-time using `dbos.WriteStream`, `dbos.CloseStream`, and `dbos.ReadStream`/`dbos.ReadStreamAsync`. Useful for LLM output streaming or progress reporting.

**Incorrect (accumulating results then returning at end):**

```go
func processWorkflow(ctx dbos.DBOSContext, items []string) ([]string, error) {
	var results []string
	for _, item := range items {
		result, _ := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
			return processItem(item)
		}, dbos.WithStepName("process"))
		results = append(results, result)
	}
	return results, nil // Client must wait for entire workflow to complete
}
```

**Correct (streaming results as they become available):**

```go
func processWorkflow(ctx dbos.DBOSContext, items []string) (string, error) {
	for _, item := range items {
		result, err := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
			return processItem(item)
		}, dbos.WithStepName("process"))
		if err != nil {
			return "", err
		}
		dbos.WriteStream(ctx, "results", result)
	}
	dbos.CloseStream(ctx, "results") // Signal completion
	return "done", nil
}

// Read the stream synchronously (blocks until closed)
handle, _ := dbos.RunWorkflow(ctx, processWorkflow, items)
values, closed, err := dbos.ReadStream[string](ctx, handle.GetWorkflowID(), "results")
```

**Async stream reading with channels:**

```go
ch, err := dbos.ReadStreamAsync[string](ctx, handle.GetWorkflowID(), "results")
if err != nil {
	log.Fatal(err)
}
for sv := range ch {
	if sv.Err != nil {
		log.Fatal(sv.Err)
	}
	if sv.Closed {
		break
	}
	fmt.Println("Received:", sv.Value)
}
```

Key behaviors:
- A workflow may have any number of streams, each identified by a unique key
- Streams are immutable and append-only
- Writes from workflows happen exactly-once
- Streams are automatically closed when the workflow terminates
- `ReadStream` blocks until the workflow is inactive or the stream is closed
- `ReadStreamAsync` returns a channel of `StreamValue[R]` for non-blocking reads

Reference: [Workflow Streaming](https://docs.dbos.dev/golang/tutorials/workflow-communication#workflow-streaming)
skills/dbos-golang/references/lifecycle-config.md (new file, 70 lines)
@@ -0,0 +1,70 @@
---
title: Configure and Launch DBOS Properly
impact: CRITICAL
impactDescription: Application won't function without proper setup
tags: configuration, launch, setup, initialization
---

## Configure and Launch DBOS Properly

Every DBOS application must create a context, register workflows and queues, then launch before running any workflows.

**Incorrect (missing configuration or launch):**

```go
// No context or launch!
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	return input, nil
}

func main() {
	// This will fail - DBOS is not initialized or launched
	dbos.RegisterWorkflow(nil, myWorkflow) // panic: ctx cannot be nil
}
```

**Correct (create context, register, launch):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
	return input, nil
}

func main() {
	ctx, err := dbos.NewDBOSContext(context.Background(), dbos.Config{
		AppName:     "my-app",
		DatabaseURL: os.Getenv("DBOS_SYSTEM_DATABASE_URL"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer dbos.Shutdown(ctx, 30*time.Second)

	dbos.RegisterWorkflow(ctx, myWorkflow)

	if err := dbos.Launch(ctx); err != nil {
		log.Fatal(err)
	}

	handle, err := dbos.RunWorkflow(ctx, myWorkflow, "hello")
	if err != nil {
		log.Fatal(err)
	}
	result, err := handle.GetResult()
	fmt.Println(result) // "hello"
}
```

Config fields:
- `AppName` (required): Application identifier
- `DatabaseURL` (required unless `SystemDBPool` is set): PostgreSQL connection string
- `SystemDBPool`: Custom `*pgxpool.Pool` (takes precedence over `DatabaseURL`)
- `DatabaseSchema`: Schema name (default: `"dbos"`)
- `Logger`: Custom `*slog.Logger` (defaults to stdout)
- `AdminServer`: Enable HTTP admin server (default: `false`)
- `AdminServerPort`: Admin server port (default: `3001`)
- `ApplicationVersion`: App version (auto-computed from binary hash if not set)
- `ExecutorID`: Executor identifier (default: `"local"`)
- `EnablePatching`: Enable code patching system (default: `false`)
|
||||
Reference: [Integrating DBOS](https://docs.dbos.dev/golang/integrating-dbos)
|
||||
47
skills/dbos-golang/references/pattern-debouncing.md
Normal file
@@ -0,0 +1,47 @@
---
title: Debounce Workflows to Prevent Wasted Work
impact: MEDIUM
impactDescription: Prevents redundant workflow executions during rapid triggers
tags: pattern, debounce, delay, efficiency
---

## Debounce Workflows to Prevent Wasted Work

Use `dbos.NewDebouncer` to delay workflow execution until some time has passed since the last trigger. This prevents wasted work when a workflow is triggered multiple times in quick succession.

**Incorrect (executing on every trigger):**

```go
// Every keystroke triggers a new workflow - wasteful!
func onInputChange(ctx dbos.DBOSContext, userInput string) {
    dbos.RunWorkflow(ctx, processInput, userInput)
}
```

**Correct (using Debouncer):**

```go
// In main, create the debouncer before Launch()
debouncer := dbos.NewDebouncer(ctx, processInput,
    dbos.WithDebouncerTimeout(120*time.Second), // Max wait: 2 minutes
)

// Then, on each trigger:
func onInputChange(ctx dbos.DBOSContext, userID, userInput string) error {
    // Delays execution by 60 seconds from the last call
    // Uses the LAST set of inputs when finally executing
    _, err := debouncer.Debounce(ctx, userID, 60*time.Second, userInput)
    return err
}
```

Key behaviors:
- The first argument to `Debounce` is the debounce key, grouping executions together (e.g., per user)
- The second argument is the delay duration from the last call
- `WithDebouncerTimeout` sets a max wait time since the first trigger
- When the workflow finally executes, it uses the **last** set of inputs
- After execution begins, the next `Debounce` call starts a new cycle
- Debouncers must be created **before** `Launch()`

Type signature: `Debouncer[P any, R any]` — the type parameters match the target workflow.

Reference: [Debouncing Workflows](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#debouncing)
63
skills/dbos-golang/references/pattern-idempotency.md
Normal file
@@ -0,0 +1,63 @@
---
title: Use Workflow IDs for Idempotency
impact: MEDIUM
impactDescription: Prevents duplicate side effects like double payments
tags: pattern, idempotency, workflow-id, deduplication
---

## Use Workflow IDs for Idempotency

Assign a workflow ID to ensure a workflow executes only once, even if called multiple times. This prevents duplicate side effects like double payments.

**Incorrect (no idempotency):**

```go
func processPayment(ctx dbos.DBOSContext, orderID string) (string, error) {
    _, err := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
        return chargeCard(orderID)
    }, dbos.WithStepName("chargeCard"))
    return "charged", err
}

// Multiple calls could charge the card multiple times!
dbos.RunWorkflow(ctx, processPayment, "order-123")
dbos.RunWorkflow(ctx, processPayment, "order-123") // Double charge!
```

**Correct (with workflow ID):**

```go
func processPayment(ctx dbos.DBOSContext, orderID string) (string, error) {
    _, err := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
        return chargeCard(orderID)
    }, dbos.WithStepName("chargeCard"))
    return "charged", err
}

// Same workflow ID = only one execution
workflowID := fmt.Sprintf("payment-%s", orderID)
dbos.RunWorkflow(ctx, processPayment, "order-123",
    dbos.WithWorkflowID(workflowID),
)
dbos.RunWorkflow(ctx, processPayment, "order-123",
    dbos.WithWorkflowID(workflowID),
)
// Second call returns the result of the first execution
```

Access the current workflow ID inside a workflow:

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    currentID, err := dbos.GetWorkflowID(ctx)
    if err != nil {
        return "", err
    }
    fmt.Printf("Running workflow: %s\n", currentID)
    return input, nil
}
```

Workflow IDs must be **globally unique** for your application. If not set, a random UUID is generated.

Reference: [Workflow IDs and Idempotency](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#workflow-ids-and-idempotency)
69
skills/dbos-golang/references/pattern-scheduled.md
Normal file
@@ -0,0 +1,69 @@
---
title: Create Scheduled Workflows
impact: MEDIUM
impactDescription: Enables recurring tasks with exactly-once-per-interval guarantees
tags: pattern, scheduled, cron, recurring
---

## Create Scheduled Workflows

Use `dbos.WithSchedule` when registering a workflow to run it on a cron schedule. Each scheduled invocation runs exactly once per interval.

**Incorrect (manual scheduling with goroutine):**

```go
// Manual scheduling is not durable and misses intervals during downtime
go func() {
    for {
        generateReport()
        time.Sleep(60 * time.Second)
    }
}()
```

**Correct (using WithSchedule):**

```go
// Scheduled workflow must accept time.Time as input
func everyThirtySeconds(ctx dbos.DBOSContext, scheduledTime time.Time) (string, error) {
    fmt.Println("Running scheduled task at:", scheduledTime)
    return "done", nil
}

func dailyReport(ctx dbos.DBOSContext, scheduledTime time.Time) (string, error) {
    _, err := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
        return generateReport()
    }, dbos.WithStepName("generateReport"))
    return "report generated", err
}

func main() {
    ctx, _ := dbos.NewDBOSContext(context.Background(), config)
    defer dbos.Shutdown(ctx, 30*time.Second)

    dbos.RegisterWorkflow(ctx, everyThirtySeconds,
        dbos.WithSchedule("*/30 * * * * *"),
    )
    dbos.RegisterWorkflow(ctx, dailyReport,
        dbos.WithSchedule("0 0 9 * * *"), // 9 AM daily
    )

    dbos.Launch(ctx)
    select {} // Block forever
}
```

Scheduled workflows must accept exactly one parameter of type `time.Time` representing the scheduled execution time.

DBOS crontab uses 6 fields with second precision:
```text
┌────────────── second
│ ┌──────────── minute
│ │ ┌────────── hour
│ │ │ ┌──────── day of month
│ │ │ │ ┌────── month
│ │ │ │ │ ┌──── day of week
* * * * * *
```

Reference: [Scheduled Workflows](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#scheduled-workflows)
52
skills/dbos-golang/references/pattern-sleep.md
Normal file
@@ -0,0 +1,52 @@
---
title: Use Durable Sleep for Delayed Execution
impact: MEDIUM
impactDescription: Enables reliable scheduling across restarts
tags: pattern, sleep, delay, durable, schedule
---

## Use Durable Sleep for Delayed Execution

Use `dbos.Sleep` for durable delays within workflows. The wakeup time is stored in the database, so the sleep survives restarts.

**Incorrect (non-durable sleep):**

```go
func delayedTask(ctx dbos.DBOSContext, input string) (string, error) {
    // time.Sleep is not durable - lost on restart!
    time.Sleep(60 * time.Second)
    result, err := dbos.RunAsStep(ctx, doWork, dbos.WithStepName("doWork"))
    return result, err
}
```

**Correct (durable sleep):**

```go
func delayedTask(ctx dbos.DBOSContext, input string) (string, error) {
    // Durable sleep - survives restarts
    _, err := dbos.Sleep(ctx, 60*time.Second)
    if err != nil {
        return "", err
    }
    result, err := dbos.RunAsStep(ctx, doWork, dbos.WithStepName("doWork"))
    return result, err
}
```

`dbos.Sleep` takes a `time.Duration`. It returns the remaining sleep duration (zero if completed normally).

Use cases:
- Scheduling tasks to run in the future
- Implementing retry delays
- Delays spanning hours, days, or weeks

```go
func scheduledTask(ctx dbos.DBOSContext, task string) (string, error) {
    // Sleep for one week
    dbos.Sleep(ctx, 7*24*time.Hour)
    return processTask(task)
}
```

Reference: [Durable Sleep](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#durable-sleep)
53
skills/dbos-golang/references/queue-basics.md
Normal file
@@ -0,0 +1,53 @@
---
title: Use Queues for Concurrent Workflows
impact: HIGH
impactDescription: Queues provide managed concurrency and flow control
tags: queue, concurrency, enqueue, workflow
---

## Use Queues for Concurrent Workflows

Queues run many workflows concurrently with managed flow control. Use them when you need to control how many workflows run at once.

**Incorrect (uncontrolled concurrency):**

```go
// Starting many workflows without control - could overwhelm resources
for _, task := range tasks {
    dbos.RunWorkflow(ctx, processTask, task)
}
```

**Correct (using a queue):**

```go
// In main, create the queue before Launch()
queue := dbos.NewWorkflowQueue(ctx, "task_queue")

func processAllTasks(ctx dbos.DBOSContext, tasks []string) ([]string, error) {
    var handles []dbos.WorkflowHandle[string]
    for _, task := range tasks {
        handle, err := dbos.RunWorkflow(ctx, processTask, task,
            dbos.WithQueue(queue.Name),
        )
        if err != nil {
            return nil, err
        }
        handles = append(handles, handle)
    }
    // Wait for all tasks
    var results []string
    for _, h := range handles {
        result, err := h.GetResult()
        if err != nil {
            return nil, err
        }
        results = append(results, result)
    }
    return results, nil
}
```

Queues process workflows in FIFO order. All queues must be created with `dbos.NewWorkflowQueue` before `Launch()`.

Reference: [DBOS Queues](https://docs.dbos.dev/golang/tutorials/queue-tutorial)
49
skills/dbos-golang/references/queue-concurrency.md
Normal file
@@ -0,0 +1,49 @@
---
title: Control Queue Concurrency
impact: HIGH
impactDescription: Prevents resource exhaustion with concurrent limits
tags: queue, concurrency, workerConcurrency, limits
---

## Control Queue Concurrency

Queues support worker-level and global concurrency limits to prevent resource exhaustion.

**Incorrect (no concurrency control):**

```go
queue := dbos.NewWorkflowQueue(ctx, "heavy_tasks") // No limits - could exhaust memory
```

**Correct (worker concurrency):**

```go
// Each process runs at most 5 tasks from this queue
queue := dbos.NewWorkflowQueue(ctx, "heavy_tasks",
    dbos.WithWorkerConcurrency(5),
)
```

**Correct (global concurrency):**

```go
// At most 10 tasks run across ALL processes
queue := dbos.NewWorkflowQueue(ctx, "limited_tasks",
    dbos.WithGlobalConcurrency(10),
)
```

**In-order processing (sequential):**

```go
// Only one task at a time - guarantees order
serialQueue := dbos.NewWorkflowQueue(ctx, "sequential_queue",
    dbos.WithGlobalConcurrency(1),
)
```

Worker concurrency is recommended for most use cases. Take care with global concurrency, as any `PENDING` workflow on the queue counts toward the limit, including workflows from previous application versions.

When using worker concurrency, each process must have a unique `ExecutorID` set in configuration (this is automatic with DBOS Conductor or Cloud).

Reference: [Managing Concurrency](https://docs.dbos.dev/golang/tutorials/queue-tutorial#managing-concurrency)
52
skills/dbos-golang/references/queue-deduplication.md
Normal file
@@ -0,0 +1,52 @@
---
title: Deduplicate Queued Workflows
impact: HIGH
impactDescription: Prevents duplicate workflow executions
tags: queue, deduplication, idempotent, duplicate
---

## Deduplicate Queued Workflows

Set a deduplication ID when enqueuing to prevent duplicate workflow executions. If a workflow with the same deduplication ID is already enqueued or executing, a `DBOSError` with code `QueueDeduplicated` is returned.

**Incorrect (no deduplication):**

```go
// Multiple calls could enqueue duplicates
func handleClick(ctx dbos.DBOSContext, userID, task string) error {
    _, err := dbos.RunWorkflow(ctx, processTask, task,
        dbos.WithQueue(queue.Name),
    )
    return err
}
```

**Correct (with deduplication):**

```go
func handleClick(ctx dbos.DBOSContext, userID, task string) error {
    _, err := dbos.RunWorkflow(ctx, processTask, task,
        dbos.WithQueue(queue.Name),
        dbos.WithDeduplicationID(userID),
    )
    if err != nil {
        // Check if it was deduplicated
        var dbosErr *dbos.DBOSError
        if errors.As(err, &dbosErr) && dbosErr.Code == dbos.QueueDeduplicated {
            fmt.Println("Task already in progress for user:", userID)
            return nil
        }
        return err
    }
    return nil
}
```

Deduplication is per-queue. The deduplication ID is active while the workflow has status `ENQUEUED` or `PENDING`. Once the workflow completes, a new workflow with the same deduplication ID can be enqueued.

This is useful for:
- Ensuring one active task per user
- Preventing duplicate form submissions
- Idempotent event processing

Reference: [Deduplication](https://docs.dbos.dev/golang/tutorials/queue-tutorial#deduplication)
49
skills/dbos-golang/references/queue-listening.md
Normal file
@@ -0,0 +1,49 @@
---
title: Control Which Queues a Worker Listens To
impact: HIGH
impactDescription: Enables heterogeneous worker pools
tags: queue, listen, worker, process, configuration
---

## Control Which Queues a Worker Listens To

Use `ListenQueues` to make a process only dequeue from specific queues. This enables heterogeneous worker pools.

**Incorrect (all workers process all queues):**

```go
cpuQueue := dbos.NewWorkflowQueue(ctx, "cpu_queue")
gpuQueue := dbos.NewWorkflowQueue(ctx, "gpu_queue")

// Every worker processes both CPU and GPU tasks
// GPU tasks on CPU workers will fail or be slow!
dbos.Launch(ctx)
```

**Correct (selective queue listening):**

```go
cpuQueue := dbos.NewWorkflowQueue(ctx, "cpu_queue")
gpuQueue := dbos.NewWorkflowQueue(ctx, "gpu_queue")

workerType := os.Getenv("WORKER_TYPE") // "cpu" or "gpu"

if workerType == "gpu" {
    ctx.ListenQueues(ctx, gpuQueue)
} else if workerType == "cpu" {
    ctx.ListenQueues(ctx, cpuQueue)
}

dbos.Launch(ctx)
```

`ListenQueues` only controls dequeuing. A CPU worker can still enqueue tasks onto the GPU queue:

```go
// From a CPU worker, enqueue onto the GPU queue
dbos.RunWorkflow(ctx, gpuTask, "data",
    dbos.WithQueue(gpuQueue.Name),
)
```

Reference: [Listening to Specific Queues](https://docs.dbos.dev/golang/tutorials/queue-tutorial#listening-to-specific-queues)
42
skills/dbos-golang/references/queue-partitioning.md
Normal file
@@ -0,0 +1,42 @@
---
title: Partition Queues for Per-Entity Limits
impact: HIGH
impactDescription: Enables per-entity concurrency control
tags: queue, partition, per-user, dynamic
---

## Partition Queues for Per-Entity Limits

Partitioned queues apply flow control limits per partition key instead of the entire queue. Each partition acts as a dynamic "subqueue".

**Incorrect (global concurrency for per-user limits):**

```go
// Global concurrency=1 blocks ALL users, not per-user
queue := dbos.NewWorkflowQueue(ctx, "tasks",
    dbos.WithGlobalConcurrency(1),
)
```

**Correct (partitioned queue):**

```go
queue := dbos.NewWorkflowQueue(ctx, "tasks",
    dbos.WithPartitionQueue(),
    dbos.WithGlobalConcurrency(1),
)

func onUserTask(ctx dbos.DBOSContext, userID, task string) error {
    // Each user gets their own partition - at most 1 task per user
    // but tasks from different users can run concurrently
    _, err := dbos.RunWorkflow(ctx, processTask, task,
        dbos.WithQueue(queue.Name),
        dbos.WithQueuePartitionKey(userID),
    )
    return err
}
```

When a queue has `WithPartitionQueue()` enabled, you **must** provide a `WithQueuePartitionKey()` when enqueuing. Partition keys and deduplication IDs cannot be used together.

Reference: [Partitioning Queues](https://docs.dbos.dev/golang/tutorials/queue-tutorial#partitioning-queues)
45
skills/dbos-golang/references/queue-priority.md
Normal file
@@ -0,0 +1,45 @@
---
title: Set Queue Priority for Workflows
impact: HIGH
impactDescription: Prioritizes important workflows over lower-priority ones
tags: queue, priority, ordering, importance
---

## Set Queue Priority for Workflows

Enable priority on a queue to process higher-priority workflows first. Lower numbers indicate higher priority.

**Incorrect (no priority - FIFO only):**

```go
queue := dbos.NewWorkflowQueue(ctx, "tasks")
// All tasks processed in FIFO order regardless of importance
```

**Correct (priority-enabled queue):**

```go
queue := dbos.NewWorkflowQueue(ctx, "tasks",
    dbos.WithPriorityEnabled(),
)

// High priority task (lower number = higher priority)
dbos.RunWorkflow(ctx, processTask, "urgent-task",
    dbos.WithQueue(queue.Name),
    dbos.WithPriority(1),
)

// Low priority task
dbos.RunWorkflow(ctx, processTask, "background-task",
    dbos.WithQueue(queue.Name),
    dbos.WithPriority(100),
)
```

Priority rules:
- Range: `1` to `2,147,483,647`
- Lower number = higher priority
- Workflows **without** assigned priorities have the highest priority (run first)
- Workflows with the same priority are dequeued in FIFO order

Reference: [Priority](https://docs.dbos.dev/golang/tutorials/queue-tutorial#priority)
50
skills/dbos-golang/references/queue-rate-limiting.md
Normal file
@@ -0,0 +1,50 @@
---
title: Rate Limit Queue Execution
impact: HIGH
impactDescription: Prevents overwhelming external APIs with too many requests
tags: queue, rate-limit, throttle, api
---

## Rate Limit Queue Execution

Set rate limits on a queue to control how many workflows start in a given period. Rate limits are global across all DBOS processes.

**Incorrect (no rate limiting):**

```go
queue := dbos.NewWorkflowQueue(ctx, "llm_tasks")
// Could send hundreds of requests per second to a rate-limited API
```

**Correct (rate-limited queue):**

```go
queue := dbos.NewWorkflowQueue(ctx, "llm_tasks",
    dbos.WithRateLimiter(&dbos.RateLimiter{
        Limit:  50,
        Period: 30 * time.Second,
    }),
)
```

This queue starts at most 50 workflows per 30 seconds.

**Combining rate limiting with concurrency:**

```go
// At most 5 concurrent and 50 per 30 seconds
queue := dbos.NewWorkflowQueue(ctx, "api_tasks",
    dbos.WithWorkerConcurrency(5),
    dbos.WithRateLimiter(&dbos.RateLimiter{
        Limit:  50,
        Period: 30 * time.Second,
    }),
)
```

Common use cases:
- LLM API rate limiting (OpenAI, Anthropic, etc.)
- Third-party API throttling
- Preventing database overload

Reference: [Rate Limiting](https://docs.dbos.dev/golang/tutorials/queue-tutorial#rate-limiting)
81
skills/dbos-golang/references/step-basics.md
Normal file
@@ -0,0 +1,81 @@
---
title: Use Steps for External Operations
impact: HIGH
impactDescription: Steps enable recovery by checkpointing results
tags: step, external, api, checkpoint
---

## Use Steps for External Operations

Any function that performs complex operations, accesses external APIs, or has side effects should be a step. Step results are checkpointed, enabling workflow recovery.

**Incorrect (external call in workflow):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    // External API call directly in workflow - not checkpointed!
    resp, err := http.Get("https://api.example.com/data")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    return string(body), nil
}
```

**Correct (external call in step using `dbos.RunAsStep`):**

```go
func fetchData(ctx context.Context) (string, error) {
    resp, err := http.Get("https://api.example.com/data")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    return string(body), nil
}

func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    data, err := dbos.RunAsStep(ctx, fetchData, dbos.WithStepName("fetchData"))
    if err != nil {
        return "", err
    }
    return data, nil
}
```

`dbos.RunAsStep` can also accept an inline closure:

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    data, err := dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
        resp, err := http.Get("https://api.example.com/data")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return string(body), nil
    }, dbos.WithStepName("fetchData"))
    return data, err
}
```

Step type signature: `type Step[R any] func(ctx context.Context) (R, error)`

Step requirements:
- The function must accept a `context.Context` parameter — use the one provided, not the workflow's context
- Inputs and outputs must be serializable to JSON
- Cannot start or enqueue workflows from within steps
- Calling a step from within another step makes the inner call part of the outer step's execution

When to use steps:
- API calls to external services
- File system operations
- Random number generation
- Getting current time
- Any non-deterministic operation

Reference: [DBOS Steps](https://docs.dbos.dev/golang/tutorials/step-tutorial)
79
skills/dbos-golang/references/step-concurrency.md
Normal file
@@ -0,0 +1,79 @@
---
title: Run Concurrent Steps with Go and Select
impact: HIGH
impactDescription: Enables parallel execution of steps with durable checkpointing
tags: step, concurrency, goroutine, select, parallel
---

## Run Concurrent Steps with Go and Select

Use `dbos.Go` to run steps concurrently in goroutines and `dbos.Select` to durably select the first completed result. Both operations are checkpointed for recovery.

**Incorrect (raw goroutines without checkpointing):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    // Raw goroutines are not checkpointed - recovery breaks!
    ch := make(chan string, 2)
    go func() { ch <- callAPI1() }()
    go func() { ch <- callAPI2() }()
    return <-ch, nil
}
```

**Correct (using dbos.Go for concurrent steps):**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    // Start steps concurrently
    ch1, err := dbos.Go(ctx, func(ctx context.Context) (string, error) {
        return callAPI1(ctx)
    }, dbos.WithStepName("api1"))
    if err != nil {
        return "", err
    }

    ch2, err := dbos.Go(ctx, func(ctx context.Context) (string, error) {
        return callAPI2(ctx)
    }, dbos.WithStepName("api2"))
    if err != nil {
        return "", err
    }

    // Wait for the first result (durable select)
    result, err := dbos.Select(ctx, []<-chan dbos.StepOutcome[string]{ch1, ch2})
    if err != nil {
        return "", err
    }
    return result, nil
}
```

**Waiting for all concurrent steps:**

```go
func myWorkflow(ctx dbos.DBOSContext, input string) ([]string, error) {
    ch1, _ := dbos.Go(ctx, step1, dbos.WithStepName("step1"))
    ch2, _ := dbos.Go(ctx, step2, dbos.WithStepName("step2"))
    ch3, _ := dbos.Go(ctx, step3, dbos.WithStepName("step3"))

    // Collect all results
    results := make([]string, 3)
    for i, ch := range []<-chan dbos.StepOutcome[string]{ch1, ch2, ch3} {
        outcome := <-ch
        if outcome.Err != nil {
            return nil, outcome.Err
        }
        results[i] = outcome.Result
    }
    return results, nil
}
```

Key behaviors:
- `dbos.Go` starts a step in a goroutine and returns a channel of `StepOutcome[R]`
- `dbos.Select` durably selects the first completed result and checkpoints which channel was selected
- On recovery, `Select` replays the same selection, maintaining determinism
- Steps started with `Go` follow the same retry and checkpointing rules as `RunAsStep`

Reference: [Concurrent Steps](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#concurrent-steps)
66
skills/dbos-golang/references/step-retries.md
Normal file
@@ -0,0 +1,66 @@
---
title: Configure Step Retries for Transient Failures
impact: HIGH
impactDescription: Automatic retries handle transient failures without manual code
tags: step, retry, exponential-backoff, resilience
---

## Configure Step Retries for Transient Failures

Steps can automatically retry on failure with exponential backoff. This handles transient failures like network issues.

**Incorrect (manual retry logic):**

```go
func fetchData(ctx context.Context) (string, error) {
    var lastErr error
    for attempt := 0; attempt < 3; attempt++ {
        resp, err := http.Get("https://api.example.com")
        if err == nil {
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            return string(body), nil
        }
        lastErr = err
        time.Sleep(time.Duration(math.Pow(2, float64(attempt))) * time.Second)
    }
    return "", lastErr
}
```

**Correct (built-in retries with `dbos.RunAsStep`):**

```go
func fetchData(ctx context.Context) (string, error) {
    resp, err := http.Get("https://api.example.com")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    return string(body), nil
}

func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    data, err := dbos.RunAsStep(ctx, fetchData,
        dbos.WithStepName("fetchData"),
        dbos.WithStepMaxRetries(10),
        dbos.WithBaseInterval(500*time.Millisecond),
        dbos.WithBackoffFactor(2.0),
        dbos.WithMaxInterval(5*time.Second),
    )
    return data, err
}
```

Retry parameters:
- `WithStepMaxRetries(n)`: Maximum retry attempts (default: `0` — no retries)
- `WithBaseInterval(d)`: Initial delay between retries (default: `100ms`)
- `WithBackoffFactor(f)`: Multiplier for exponential backoff (default: `2.0`)
- `WithMaxInterval(d)`: Maximum delay between retries (default: `5s`)

With defaults, retry delays are: 100ms, 200ms, 400ms, 800ms, 1.6s, 3.2s, 5s, 5s...

If all retries are exhausted, a `DBOSError` with code `MaxStepRetriesExceeded` is returned to the calling workflow.

Reference: [Configurable Retries](https://docs.dbos.dev/golang/tutorials/step-tutorial#configurable-retries)
90
skills/dbos-golang/references/test-setup.md
Normal file
90
skills/dbos-golang/references/test-setup.md
Normal file
@@ -0,0 +1,90 @@
---
title: Use Proper Test Setup for DBOS
impact: LOW-MEDIUM
impactDescription: Ensures consistent test results with proper DBOS lifecycle management
tags: testing, go-test, setup, integration, mock
---

## Use Proper Test Setup for DBOS

DBOS applications can be tested with unit tests (mocking DBOSContext) or integration tests (real Postgres database).

**Incorrect (no lifecycle management between tests):**

```go
// Tests share state - results are inconsistent!
func TestOne(t *testing.T) {
    myWorkflow(ctx, "input")
}

func TestTwo(t *testing.T) {
    // Previous test's state leaks into this test
    myWorkflow(ctx, "input")
}
```

**Correct (unit testing with mocks):**

The `DBOSContext` interface is fully mockable. Use a mocking library like `testify/mock` or `mockery`:

```go
func TestWorkflow(t *testing.T) {
    mockCtx := mocks.NewMockDBOSContext(t)

    // Mock RunAsStep to return a canned value
    mockCtx.On("RunAsStep", mockCtx, mock.Anything, mock.Anything).
        Return("mock-result", nil)

    result, err := myWorkflow(mockCtx, "input")
    assert.NoError(t, err)
    assert.Equal(t, "expected", result)

    mockCtx.AssertExpectations(t)
}
```

**Correct (integration testing with Postgres):**

```go
func setupDBOS(t *testing.T) dbos.DBOSContext {
    t.Helper()
    databaseURL := os.Getenv("DBOS_TEST_DATABASE_URL")
    if databaseURL == "" {
        t.Skip("DBOS_TEST_DATABASE_URL not set")
    }

    ctx, err := dbos.NewDBOSContext(context.Background(), dbos.Config{
        AppName:     "test-" + t.Name(),
        DatabaseURL: databaseURL,
    })
    require.NoError(t, err)

    dbos.RegisterWorkflow(ctx, myWorkflow)

    err = dbos.Launch(ctx)
    require.NoError(t, err)

    t.Cleanup(func() {
        dbos.Shutdown(ctx, 10*time.Second)
    })
    return ctx
}

func TestWorkflowIntegration(t *testing.T) {
    ctx := setupDBOS(t)

    handle, err := dbos.RunWorkflow(ctx, myWorkflow, "test-input")
    require.NoError(t, err)

    result, err := handle.GetResult()
    require.NoError(t, err)
    assert.Equal(t, "expected-output", result)
}
```

Key points:
- Use `t.Cleanup` to ensure `Shutdown` is called after each test
- Use unique `AppName` per test to avoid collisions
- Mock `DBOSContext` for fast unit tests without Postgres
- Use real Postgres for integration tests that verify durable behavior

Reference: [Testing DBOS](https://docs.dbos.dev/golang/tutorials/testing)
64
skills/dbos-golang/references/workflow-background.md
Normal file
@@ -0,0 +1,64 @@
---
title: Start Workflows in Background
impact: CRITICAL
impactDescription: Background workflows enable reliable async processing
tags: workflow, background, handle, async
---

## Start Workflows in Background

Use `dbos.RunWorkflow` to start a workflow and get a handle to track it. The workflow is guaranteed to run to completion even if the app is interrupted.

**Incorrect (no way to track background work):**

```go
func processData(ctx dbos.DBOSContext, data string) (string, error) {
    // ...
    return "processed: " + data, nil
}

// Fire and forget in a goroutine - no durability, no tracking
go func() {
    processData(ctx, data)
}()
```

**Correct (using RunWorkflow):**

```go
func processData(ctx dbos.DBOSContext, data string) (string, error) {
    return "processed: " + data, nil
}

func main() {
    // ... setup and launch ...

    // Start workflow, get handle
    handle, err := dbos.RunWorkflow(ctx, processData, "input")
    if err != nil {
        log.Fatal(err)
    }

    // Get the workflow ID
    fmt.Println(handle.GetWorkflowID())

    // Wait for result
    result, err := handle.GetResult()

    // Check status
    status, err := handle.GetStatus()
}
```

Retrieve a handle later by workflow ID:

```go
handle, err := dbos.RetrieveWorkflow[string](ctx, workflowID)
result, err := handle.GetResult()
```

`GetResult` supports options:
- `dbos.WithHandleTimeout(timeout)`: Return a timeout error if the workflow doesn't complete within the duration
- `dbos.WithHandlePollingInterval(interval)`: Control how often the database is polled for completion

Reference: [Workflows](https://docs.dbos.dev/golang/tutorials/workflow-tutorial)
68
skills/dbos-golang/references/workflow-constraints.md
Normal file
@@ -0,0 +1,68 @@
---
title: Follow Workflow Constraints
impact: CRITICAL
impactDescription: Violating constraints breaks recovery and durability guarantees
tags: workflow, constraints, rules, best-practices
---

## Follow Workflow Constraints

Workflows have specific constraints to maintain durability guarantees. Violating them can break recovery.

**Incorrect (starting workflows from steps):**

```go
func myStep(ctx context.Context) (string, error) {
    // Don't start workflows from steps!
    // The step's context.Context does not support workflow operations
    return "", nil
}

func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    // Starting a child workflow inside a step breaks determinism
    dbos.RunAsStep(ctx, func(ctx context.Context) (string, error) {
        handle, _ := dbos.RunWorkflow(ctx.(dbos.DBOSContext), otherWorkflow, "data") // WRONG
        return handle.GetWorkflowID(), nil
    })
    return "", nil
}
```

**Correct (workflow operations only from workflows):**

```go
func fetchData(ctx context.Context) (string, error) {
    // Steps only do external operations
    resp, err := http.Get("https://api.example.com")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    return string(body), nil
}

func myWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    data, err := dbos.RunAsStep(ctx, fetchData, dbos.WithStepName("fetchData"))
    if err != nil {
        return "", err
    }
    // Start child workflows from the parent workflow
    handle, err := dbos.RunWorkflow(ctx, otherWorkflow, data)
    if err != nil {
        return "", err
    }
    // Receive messages from the workflow
    msg, err := dbos.Recv[string](ctx, "topic", 60*time.Second)
    // Set events from the workflow
    dbos.SetEvent(ctx, "status", "done")
    return data, nil
}
```

Additional constraints:
- Don't modify global variables from workflows or steps
- All workflows and queues must be registered **before** `Launch()`
- Concurrent steps must start in deterministic order using `dbos.Go`/`dbos.Select`

Reference: [Workflow Guarantees](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#workflow-guarantees)
53
skills/dbos-golang/references/workflow-control.md
Normal file
@@ -0,0 +1,53 @@
---
title: Cancel, Resume, and Fork Workflows
impact: MEDIUM
impactDescription: Enables operational control over long-running workflows
tags: workflow, cancel, resume, fork, management
---

## Cancel, Resume, and Fork Workflows

DBOS provides functions to cancel, resume, and fork workflows for operational control.

**Incorrect (no way to handle stuck or failed workflows):**

```go
// Workflow is stuck or failed - no recovery mechanism
handle, _ := dbos.RunWorkflow(ctx, processTask, "data")
// If the workflow fails, there's no way to retry or recover
```

**Correct (using cancel, resume, and fork):**

```go
// Cancel a workflow - stops at its next step
err := dbos.CancelWorkflow(ctx, workflowID)

// Resume from the last completed step
handle, err := dbos.ResumeWorkflow[string](ctx, workflowID)
result, err := handle.GetResult()
```

Cancellation sets the workflow status to `CANCELLED` and preempts execution at the beginning of the next step. Cancelling also cancels all child workflows.

Resume restarts a workflow from its last completed step. Use this for workflows that are cancelled or have exceeded their maximum recovery attempts. You can also use this to start an enqueued workflow immediately, bypassing its queue.

Fork a workflow from a specific step:

```go
// List steps to find the right step ID
steps, err := dbos.GetWorkflowSteps(ctx, workflowID)

// Fork from a specific step
forkHandle, err := dbos.ForkWorkflow[string](ctx, dbos.ForkWorkflowInput{
    OriginalWorkflowID: workflowID,
    StartStep:          2,           // Fork from step 2
    ForkedWorkflowID:   "new-wf-id", // Optional
    ApplicationVersion: "2.0.0",     // Optional
})
result, err := forkHandle.GetResult()
```

Forking creates a new workflow with a new ID, copying the original workflow's inputs and step outputs up to the selected step.

Reference: [Workflow Management](https://docs.dbos.dev/golang/tutorials/workflow-management)
51
skills/dbos-golang/references/workflow-determinism.md
Normal file
@@ -0,0 +1,51 @@
---
title: Keep Workflows Deterministic
impact: CRITICAL
impactDescription: Non-deterministic workflows cannot recover correctly
tags: workflow, determinism, recovery, reliability
---

## Keep Workflows Deterministic

Workflow functions must be deterministic: given the same inputs and step return values, they must invoke the same steps in the same order. Non-deterministic operations must be moved to steps.

**Incorrect (non-deterministic workflow):**

```go
func exampleWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    // Random value in workflow breaks recovery!
    // On replay, rand.Intn returns a different value,
    // so the workflow may take a different branch.
    if rand.Intn(2) == 0 {
        return stepOne(ctx)
    }
    return stepTwo(ctx)
}
```

**Correct (non-determinism in step):**

```go
func exampleWorkflow(ctx dbos.DBOSContext, input string) (string, error) {
    // Step result is checkpointed - replay uses the saved value
    choice, err := dbos.RunAsStep(ctx, func(ctx context.Context) (int, error) {
        return rand.Intn(2), nil
    }, dbos.WithStepName("generateChoice"))
    if err != nil {
        return "", err
    }
    if choice == 0 {
        return stepOne(ctx)
    }
    return stepTwo(ctx)
}
```

Non-deterministic operations that must be in steps:
- Random number generation
- Getting current time (`time.Now()`)
- Accessing external APIs (`http.Get`, etc.)
- Reading files
- Database queries

Reference: [Workflow Determinism](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#determinism)
64
skills/dbos-golang/references/workflow-introspection.md
Normal file
@@ -0,0 +1,64 @@
---
title: List and Inspect Workflows
impact: MEDIUM
impactDescription: Enables monitoring and debugging of workflow executions
tags: workflow, list, inspect, status, monitoring
---

## List and Inspect Workflows

Use `dbos.ListWorkflows` to query workflow executions by status, name, time range, and other criteria.

**Incorrect (no monitoring of workflow state):**

```go
// Start workflow with no way to check on it later
dbos.RunWorkflow(ctx, processTask, "data")
// If something goes wrong, no way to find or debug it
```

**Correct (listing and inspecting workflows):**

```go
// List workflows by status
erroredWorkflows, err := dbos.ListWorkflows(ctx,
    dbos.WithStatus([]dbos.WorkflowStatusType{dbos.WorkflowStatusError}),
)

for _, wf := range erroredWorkflows {
    fmt.Printf("Workflow %s: %s - %v\n", wf.ID, wf.Name, wf.Error)
}
```

List workflows with multiple filters:

```go
workflows, err := dbos.ListWorkflows(ctx,
    dbos.WithName("processOrder"),
    dbos.WithStatus([]dbos.WorkflowStatusType{dbos.WorkflowStatusSuccess}),
    dbos.WithLimit(100),
    dbos.WithSortDesc(),
    dbos.WithLoadOutput(true),
)
```

List workflow steps:

```go
steps, err := dbos.GetWorkflowSteps(ctx, workflowID)
for _, step := range steps {
    fmt.Printf("Step %d: %s\n", step.StepID, step.StepName)
    if step.Error != nil {
        fmt.Printf("  Error: %v\n", step.Error)
    }
    if step.ChildWorkflowID != "" {
        fmt.Printf("  Child: %s\n", step.ChildWorkflowID)
    }
}
```

Workflow status values: `WorkflowStatusPending`, `WorkflowStatusEnqueued`, `WorkflowStatusSuccess`, `WorkflowStatusError`, `WorkflowStatusCancelled`, `WorkflowStatusMaxRecoveryAttemptsExceeded`

To optimize performance, avoid loading inputs/outputs when you don't need them (they are not loaded by default).

Reference: [Workflow Management](https://docs.dbos.dev/golang/tutorials/workflow-management#listing-workflows)
38
skills/dbos-golang/references/workflow-timeout.md
Normal file
@@ -0,0 +1,38 @@
---
title: Set Workflow Timeouts
impact: CRITICAL
impactDescription: Prevents workflows from running indefinitely
tags: workflow, timeout, cancellation, duration
---

## Set Workflow Timeouts

Set a timeout for a workflow by using Go's `context.WithTimeout` or `dbos.WithTimeout` on the DBOS context. When the timeout expires, the workflow and all its children are cancelled.

**Incorrect (no timeout for potentially long workflow):**

```go
// No timeout - could run indefinitely
handle, err := dbos.RunWorkflow(ctx, processTask, "data")
```

**Correct (with timeout):**

```go
// Create a context with a 5-minute timeout
timedCtx, cancel := dbos.WithTimeout(ctx, 5*time.Minute)
defer cancel()

handle, err := dbos.RunWorkflow(timedCtx, processTask, "data")
if err != nil {
    log.Fatal(err)
}
```

Key timeout behaviors:
- Timeouts are **start-to-completion**: the timeout begins when the workflow starts execution, not when it's enqueued
- Timeouts are **durable**: they persist across restarts, so workflows can have very long timeouts (hours, days, weeks)
- Cancellation happens at the **beginning of the next step** - the current step completes first
- Cancelling a workflow also cancels all **child workflows**

Reference: [Workflow Timeouts](https://docs.dbos.dev/golang/tutorials/workflow-tutorial#workflow-timeouts)
95
skills/dbos-python/AGENTS.md
Normal file
@@ -0,0 +1,95 @@
# dbos-python

> **Note:** `CLAUDE.md` is a symlink to this file.

## Overview

DBOS Python SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing Python code with DBOS, creating workflows and steps, using queues, using DBOSClient from external applications, or building applications that need to be resilient to failures.

## Structure

```
dbos-python/
  SKILL.md          # Main skill file - read this first
  AGENTS.md         # This navigation guide
  CLAUDE.md         # Symlink to AGENTS.md
  references/       # Detailed reference files
```

## Usage

1. Read `SKILL.md` for the main skill instructions
2. Browse `references/` for detailed documentation on specific topics
3. Reference files are loaded on-demand - read only what you need

## Reference Categories

| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Lifecycle | CRITICAL | `lifecycle-` |
| 2 | Workflow | CRITICAL | `workflow-` |
| 3 | Step | HIGH | `step-` |
| 4 | Queue | HIGH | `queue-` |
| 5 | Communication | MEDIUM | `comm-` |
| 6 | Pattern | MEDIUM | `pattern-` |
| 7 | Testing | LOW-MEDIUM | `test-` |
| 8 | Client | MEDIUM | `client-` |
| 9 | Advanced | LOW | `advanced-` |

Reference files are named `{prefix}-{topic}.md` (e.g., `workflow-determinism.md`).

## Available References

**Advanced** (`advanced-`):
- `references/advanced-async.md`
- `references/advanced-patching.md`
- `references/advanced-versioning.md`

**Client** (`client-`):
- `references/client-enqueue.md`
- `references/client-setup.md`

**Communication** (`comm-`):
- `references/comm-events.md`
- `references/comm-messages.md`
- `references/comm-streaming.md`

**Lifecycle** (`lifecycle-`):
- `references/lifecycle-config.md`
- `references/lifecycle-fastapi.md`

**Pattern** (`pattern-`):
- `references/pattern-classes.md`
- `references/pattern-debouncing.md`
- `references/pattern-idempotency.md`
- `references/pattern-scheduled.md`
- `references/pattern-sleep.md`

**Queue** (`queue-`):
- `references/queue-basics.md`
- `references/queue-concurrency.md`
- `references/queue-deduplication.md`
- `references/queue-listening.md`
- `references/queue-partitioning.md`
- `references/queue-priority.md`
- `references/queue-rate-limiting.md`

**Step** (`step-`):
- `references/step-basics.md`
- `references/step-retries.md`
- `references/step-transactions.md`

**Testing** (`test-`):
- `references/test-fixtures.md`

**Workflow** (`workflow-`):
- `references/workflow-background.md`
- `references/workflow-constraints.md`
- `references/workflow-control.md`
- `references/workflow-determinism.md`
- `references/workflow-introspection.md`
- `references/workflow-timeout.md`

---

*32 reference files across 9 categories*
1
skills/dbos-python/CLAUDE.md
Symbolic link
@@ -0,0 +1 @@
AGENTS.md
102
skills/dbos-python/SKILL.md
Normal file
@@ -0,0 +1,102 @@
---
name: dbos-python
description: DBOS Python SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing Python code with DBOS, creating workflows and steps, using queues, using DBOSClient from external applications, or building applications that need to be resilient to failures.
risk: safe
source: https://docs.dbos.dev/
license: MIT
metadata:
  author: dbos
  version: "1.0.0"
  organization: DBOS
  date: January 2026
  abstract: Comprehensive guide for building fault-tolerant Python applications with DBOS. Covers workflows, steps, queues, communication patterns, and best practices for durable execution.
---

# DBOS Python Best Practices

Guide for building reliable, fault-tolerant Python applications with DBOS durable workflows.

## When to Use

Reference these guidelines when:
- Adding DBOS to existing Python code
- Creating workflows and steps
- Using queues for concurrency control
- Implementing workflow communication (events, messages, streams)
- Configuring and launching DBOS applications
- Using DBOSClient from external applications
- Testing DBOS applications

## Rule Categories by Priority

| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Lifecycle | CRITICAL | `lifecycle-` |
| 2 | Workflow | CRITICAL | `workflow-` |
| 3 | Step | HIGH | `step-` |
| 4 | Queue | HIGH | `queue-` |
| 5 | Communication | MEDIUM | `comm-` |
| 6 | Pattern | MEDIUM | `pattern-` |
| 7 | Testing | LOW-MEDIUM | `test-` |
| 8 | Client | MEDIUM | `client-` |
| 9 | Advanced | LOW | `advanced-` |

## Critical Rules

### DBOS Configuration and Launch

A DBOS application MUST configure and launch DBOS inside its main function:

```python
import os
from dbos import DBOS, DBOSConfig

@DBOS.workflow()
def my_workflow():
    pass

if __name__ == "__main__":
    config: DBOSConfig = {
        "name": "my-app",
        "system_database_url": os.environ.get("DBOS_SYSTEM_DATABASE_URL"),
    }
    DBOS(config=config)
    DBOS.launch()
```

### Workflow and Step Structure

Workflows are composed of steps. Any function performing complex operations or accessing external services must be a step:

```python
@DBOS.step()
def call_external_api():
    return requests.get("https://api.example.com").json()

@DBOS.workflow()
def my_workflow():
    result = call_external_api()
    return result
```

### Key Constraints

- Do NOT call `DBOS.start_workflow` or `DBOS.recv` from a step
- Do NOT use threads to start workflows - use `DBOS.start_workflow` or queues
- Workflows MUST be deterministic - non-deterministic operations go in steps
- Do NOT create/update global variables from workflows or steps

## How to Use

Read individual rule files for detailed explanations and examples:

```
references/lifecycle-config.md
references/workflow-determinism.md
references/queue-concurrency.md
```

## References

- https://docs.dbos.dev/
- https://github.com/dbos-inc/dbos-transact-py
41
skills/dbos-python/references/_sections.md
Normal file
@@ -0,0 +1,41 @@
# Section Definitions

This file defines the rule categories for DBOS Python best practices. Rules are automatically assigned to sections based on their filename prefix.

---

## 1. Lifecycle (lifecycle)
**Impact:** CRITICAL
**Description:** DBOS configuration, initialization, and launch patterns. Foundation for all DBOS applications.

## 2. Workflow (workflow)
**Impact:** CRITICAL
**Description:** Workflow creation, determinism requirements, background execution, and workflow IDs.

## 3. Step (step)
**Impact:** HIGH
**Description:** Step creation, retries, transactions, and when to use steps vs workflows.

## 4. Queue (queue)
**Impact:** HIGH
**Description:** Queue creation, concurrency limits, rate limiting, partitioning, and priority.

## 5. Communication (comm)
**Impact:** MEDIUM
**Description:** Workflow events, messages, and streaming for inter-workflow communication.

## 6. Pattern (pattern)
**Impact:** MEDIUM
**Description:** Common patterns including idempotency, scheduled workflows, debouncing, and classes.

## 7. Testing (test)
**Impact:** LOW-MEDIUM
**Description:** Testing DBOS applications with pytest, fixtures, and best practices.

## 8. Client (client)
**Impact:** MEDIUM
**Description:** DBOSClient for interacting with DBOS from external applications.

## 9. Advanced (advanced)
**Impact:** LOW
**Description:** Async workflows, workflow versioning, patching, and code upgrades.
101
skills/dbos-python/references/advanced-async.md
Normal file
@@ -0,0 +1,101 @@
---
title: Use Async Workflows Correctly
impact: LOW
impactDescription: Enables non-blocking I/O in workflows
tags: async, coroutine, await, asyncio
---

## Use Async Workflows Correctly

Coroutine (async) functions can be DBOS workflows. Use async-specific methods and patterns.

**Incorrect (mixing sync and async):**

```python
@DBOS.workflow()
async def async_workflow():
    # Don't use sync sleep in async workflow!
    DBOS.sleep(10)

    # Don't use sync start_workflow for async workflows
    handle = DBOS.start_workflow(other_async_workflow)
```

**Correct (async patterns):**

```python
import asyncio
import aiohttp

@DBOS.step()
async def fetch_async():
    async with aiohttp.ClientSession() as session:
        async with session.get("https://example.com") as response:
            return await response.text()

@DBOS.workflow()
async def async_workflow():
    # Use async sleep
    await DBOS.sleep_async(10)

    # Await async steps
    result = await fetch_async()

    # Use async start_workflow
    handle = await DBOS.start_workflow_async(other_async_workflow)

    return result
```

### Running Async Steps In Parallel

You can run async steps in parallel if they are started in **deterministic order**:

**Correct (deterministic start order):**

```python
@DBOS.workflow()
async def parallel_workflow():
    # Start steps in deterministic order, then await together
    tasks = [
        asyncio.create_task(step1("arg1")),
        asyncio.create_task(step2("arg2")),
        asyncio.create_task(step3("arg3")),
    ]
    # Use return_exceptions=True for proper error handling
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return results
```

**Incorrect (non-deterministic order):**

```python
@DBOS.workflow()
async def bad_parallel_workflow():
    async def seq_a():
        await step1("arg1")
        await step2("arg2")  # Order depends on step1 timing

    async def seq_b():
        await step3("arg3")
        await step4("arg4")  # Order depends on step3 timing

    # step2 and step4 may run in either order - non-deterministic!
    await asyncio.gather(seq_a(), seq_b())
```

If you need concurrent sequences, use child workflows instead of interleaving steps.

For transactions in async workflows, use `asyncio.to_thread`:

```python
@DBOS.transaction()
def sync_transaction(data):
    DBOS.sql_session.execute(...)

@DBOS.workflow()
async def async_workflow():
    result = await asyncio.to_thread(sync_transaction, data)
```

Reference: [Async Workflows](https://docs.dbos.dev/python/tutorials/workflow-tutorial#coroutine-async-workflows)
68
skills/dbos-python/references/advanced-patching.md
Normal file
@@ -0,0 +1,68 @@
---
title: Use Patching for Safe Workflow Upgrades
impact: LOW
impactDescription: Deploy breaking changes without disrupting in-progress workflows
tags: patching, upgrade, versioning, migration
---

## Use Patching for Safe Workflow Upgrades

Use `DBOS.patch()` to safely deploy breaking workflow changes. Breaking changes alter what steps run or their order.

**Incorrect (breaking change without patch):**

```python
# Original
@DBOS.workflow()
def workflow():
    foo()
    bar()

# Updated - breaks in-progress workflows!
@DBOS.workflow()
def workflow():
    baz()  # Replaced foo() - checkpoints don't match
    bar()
```

**Correct (using patch):**

```python
# Enable patching in config
config: DBOSConfig = {
    "name": "my-app",
    "enable_patching": True,
}
DBOS(config=config)

@DBOS.workflow()
def workflow():
    if DBOS.patch("use-baz"):
        baz()  # New workflows use baz
    else:
        foo()  # Old workflows continue with foo
    bar()
```

Deprecating patches after all old workflows complete:

```python
# Step 1: Deprecate (all workflows now run the new code; the patch marker is no longer inserted)
@DBOS.workflow()
def workflow():
    DBOS.deprecate_patch("use-baz")
    baz()
    bar()

# Step 2: Remove entirely (after all deprecated workflows complete)
@DBOS.workflow()
def workflow():
    baz()
    bar()
```

`DBOS.patch(name)` returns:
- `True` for new workflows (started after patch deployed)
- `False` for old workflows (started before patch deployed)

Reference: [Patching](https://docs.dbos.dev/python/tutorials/upgrading-workflows#patching)
66
skills/dbos-python/references/advanced-versioning.md
Normal file
@@ -0,0 +1,66 @@
---
title: Use Versioning for Blue-Green Deployments
impact: LOW
impactDescription: Safely deploy new code with version tagging
tags: versioning, blue-green, deployment, recovery
---

## Use Versioning for Blue-Green Deployments

DBOS versions workflows to prevent unsafe recovery. Use blue-green deployments to safely upgrade.

**Incorrect (deploying breaking changes without versioning):**

```python
# Deploying new code directly kills in-progress workflows
# because their checkpoints don't match the new code

# Old code
@DBOS.workflow()
def workflow():
    step_a()
    step_b()

# New code replaces old immediately - breaks recovery!
@DBOS.workflow()
def workflow():
    step_a()
    step_c()  # Changed step - old workflows can't recover
```

**Correct (using versioning with blue-green deployment):**

```python
# Set explicit version in config
config: DBOSConfig = {
    "name": "my-app",
    "application_version": "2.0.0",  # New version
}
DBOS(config=config)

# Deploy new version alongside old version
# New traffic goes to v2.0.0, old workflows drain on v1.0.0

# Check for remaining old workflows before retiring v1.0.0
old_workflows = DBOS.list_workflows(
    app_version="1.0.0",
    status=["PENDING", "ENQUEUED"],
)

if len(old_workflows) == 0:
    # Safe to retire old version
    pass
```

Fork a workflow to run on a new version:

```python
# Fork workflow from step 5 on version 2.0.0
new_handle = DBOS.fork_workflow(
    workflow_id="old-workflow-id",
    start_step=5,
    application_version="2.0.0",
)
```

Reference: [Versioning](https://docs.dbos.dev/python/tutorials/upgrading-workflows#versioning)
54
skills/dbos-python/references/client-enqueue.md
Normal file
@@ -0,0 +1,54 @@
---
title: Enqueue Workflows from External Applications
impact: HIGH
impactDescription: Enables decoupled architecture with separate API and worker services
tags: client, enqueue, workflow, external
---

## Enqueue Workflows from External Applications

Use `client.enqueue()` to submit workflows from outside the DBOS application. Must specify workflow and queue names explicitly.

**Incorrect (missing required options):**

```python
from dbos import DBOSClient

client = DBOSClient(system_database_url=db_url)

# Missing workflow_name and queue_name!
handle = client.enqueue({}, task_data)
```

**Correct (with required options):**

```python
from dbos import DBOSClient, EnqueueOptions

client = DBOSClient(system_database_url=db_url)

options: EnqueueOptions = {
    "workflow_name": "process_task",  # Required
    "queue_name": "task_queue",  # Required
}
handle = client.enqueue(options, task_data)
result = handle.get_result()
client.destroy()
```

With optional parameters:

```python
options: EnqueueOptions = {
    "workflow_name": "process_task",
    "queue_name": "task_queue",
    "workflow_id": "custom-id-123",
    "workflow_timeout": 300,
    "deduplication_id": "user-123",
    "priority": 1,
}
```

Limitation: Cannot enqueue workflows that are methods on Python classes.

Reference: [DBOSClient.enqueue](https://docs.dbos.dev/python/reference/client#enqueue)
57
skills/dbos-python/references/client-setup.md
Normal file
@@ -0,0 +1,57 @@
---
title: Initialize DBOSClient for External Access
impact: HIGH
impactDescription: Enables external applications to interact with DBOS
tags: client, setup, initialization, external
---

## Initialize DBOSClient for External Access

Use `DBOSClient` to interact with DBOS from external applications (API servers, CLI tools, etc.).

**Incorrect (no cleanup):**

```python
from dbos import DBOSClient

client = DBOSClient(system_database_url=db_url)
handle = client.enqueue(options, data)
# Connection leaked - no destroy()!
```

**Correct (with cleanup):**

```python
import os
from dbos import DBOSClient

client = DBOSClient(
    system_database_url=os.environ["DBOS_SYSTEM_DATABASE_URL"]
)

try:
    handle = client.enqueue(options, data)
    result = handle.get_result()
finally:
    client.destroy()
```

Constructor parameters:

- `system_database_url`: Connection string to the DBOS system database
- `serializer`: Must match the DBOS application's serializer (default: pickle)

## API Reference

Beyond `enqueue`, DBOSClient mirrors the DBOS API. Use the same patterns from other reference files:

| DBOSClient method | Same as DBOS method |
|-------------------|---------------------|
| `client.send()` | `DBOS.send()` - add `idempotency_key` for exactly-once |
| `client.get_event()` | `DBOS.get_event()` |
| `client.read_stream()` | `DBOS.read_stream()` |
| `client.list_workflows()` | `DBOS.list_workflows()` |
| `client.cancel_workflow()` | `DBOS.cancel_workflow()` |
| `client.resume_workflow()` | `DBOS.resume_workflow()` |
| `client.retrieve_workflow()` | `DBOS.retrieve_workflow()` |

Reference: [DBOSClient](https://docs.dbos.dev/python/reference/client)
61
skills/dbos-python/references/comm-events.md
Normal file
@@ -0,0 +1,61 @@
---
title: Use Events for Workflow Status Publishing
impact: MEDIUM
impactDescription: Enables real-time workflow status monitoring
tags: events, set_event, get_event, status
---

## Use Events for Workflow Status Publishing

Workflows can publish key-value events that clients can read. Events are persisted and useful for status updates.

**Incorrect (no way to monitor progress):**

```python
@DBOS.workflow()
def long_workflow():
    step_one()
    step_two()  # Client can't see progress
    step_three()
    return "done"
```

**Correct (publishing events):**

```python
@DBOS.workflow()
def long_workflow():
    DBOS.set_event("status", "starting")

    step_one()
    DBOS.set_event("status", "step_one_complete")

    step_two()
    DBOS.set_event("status", "step_two_complete")

    step_three()
    DBOS.set_event("status", "finished")
    return "done"

# Client code to read events
@app.post("/start")
def start_workflow():
    handle = DBOS.start_workflow(long_workflow)
    return {"workflow_id": handle.get_workflow_id()}

@app.get("/status/{workflow_id}")
def get_status(workflow_id: str):
    status = DBOS.get_event(workflow_id, "status", timeout_seconds=0) or "not started"
    return {"status": status}
```

Get all events from a workflow:

```python
all_events = DBOS.get_all_events(workflow_id)
# Returns: {"status": "finished", "other_key": "value"}
```

`set_event` can be called from workflows or steps.

Reference: [Workflow Events](https://docs.dbos.dev/python/tutorials/workflow-communication#workflow-events)
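
The key-value semantics above (each `set_event` overwrites the previous value for its key, and readers see the latest value) can be illustrated with a plain-Python sketch. `EventStore` and its methods are hypothetical stand-ins for the behavior, not the DBOS implementation, which persists events in the system database:

```python
class EventStore:
    """Toy model of per-workflow key-value events: set overwrites, get returns latest."""

    def __init__(self):
        self._events = {}  # (workflow_id, key) -> latest value

    def set_event(self, workflow_id, key, value):
        self._events[(workflow_id, key)] = value

    def get_event(self, workflow_id, key, default=None):
        return self._events.get((workflow_id, key), default)

    def get_all_events(self, workflow_id):
        return {k: v for (wf, k), v in self._events.items() if wf == workflow_id}

store = EventStore()
store.set_event("wf-1", "status", "starting")
store.set_event("wf-1", "status", "finished")  # overwrites "starting"
print(store.get_event("wf-1", "status"))       # latest value wins
print(store.get_all_events("wf-1"))
```
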
56
skills/dbos-python/references/comm-messages.md
Normal file
@@ -0,0 +1,56 @@
---
title: Use Messages for Workflow Notifications
impact: MEDIUM
impactDescription: Enables external signals to control workflow execution
tags: messages, send, recv, notifications
---

## Use Messages for Workflow Notifications

Send messages to workflows to signal or notify them while running. Messages are persisted and queued per topic.

**Incorrect (polling external state):**

```python
import time

@DBOS.workflow()
def payment_workflow():
    # Polling is inefficient and not durable
    while True:
        status = check_payment_status()
        if status == "paid":
            break
        time.sleep(1)
```

**Correct (using messages):**

```python
PAYMENT_STATUS = "payment_status"

@DBOS.workflow()
def payment_workflow():
    # Process order...
    DBOS.set_event("payment_id", payment_id)

    # Wait for payment notification (60 second timeout)
    payment_status = DBOS.recv(PAYMENT_STATUS, timeout_seconds=60)

    if payment_status == "paid":
        fulfill_order()
    else:
        cancel_order()

# Webhook endpoint to receive payment notification
@app.post("/payment_webhook/{workflow_id}/{status}")
def payment_webhook(workflow_id: str, status: str):
    DBOS.send(workflow_id, status, PAYMENT_STATUS)
    return {"ok": True}
```

Key points:
- `DBOS.recv()` can only be called from workflows
- Messages are queued per topic
- `recv()` returns `None` on timeout
- Messages are persisted for exactly-once delivery

Reference: [Workflow Messaging](https://docs.dbos.dev/python/tutorials/workflow-communication#workflow-messaging-and-notifications)
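
The per-topic FIFO queueing and `None`-on-timeout behavior in the key points can be modeled in plain Python. `Mailbox` is a hypothetical illustration of the semantics only, not the DBOS implementation: the real `recv` blocks durably on the database rather than polling an in-memory queue:

```python
from collections import defaultdict, deque

class Mailbox:
    """Toy model of workflow messaging: FIFO queue per topic, None when nothing arrived."""

    def __init__(self):
        self._topics = defaultdict(deque)

    def send(self, topic, message):
        self._topics[topic].append(message)

    def recv(self, topic):
        # Return the oldest message for the topic, or None if none arrived
        queue = self._topics[topic]
        return queue.popleft() if queue else None

box = Mailbox()
box.send("payment_status", "paid")
box.send("payment_status", "refunded")
print(box.recv("payment_status"))  # messages drain in FIFO order
print(box.recv("other_topic"))     # no message: None, like recv() timing out
```
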
57
skills/dbos-python/references/comm-streaming.md
Normal file
@@ -0,0 +1,57 @@
---
title: Use Streams for Real-Time Data
impact: MEDIUM
impactDescription: Enables real-time progress and LLM streaming
tags: streaming, write_stream, read_stream, realtime
---

## Use Streams for Real-Time Data

Workflows can stream data in real-time to clients. Useful for LLM responses, progress reporting, or long-running results.

**Incorrect (returning all data at end):**

```python
@DBOS.workflow()
def llm_workflow(prompt):
    # Client waits for entire response
    response = call_llm(prompt)
    return response
```

**Correct (streaming results):**

```python
from fastapi.responses import StreamingResponse

@DBOS.workflow()
def llm_workflow(prompt):
    for chunk in call_llm_streaming(prompt):
        DBOS.write_stream("response", chunk)
    DBOS.close_stream("response")
    return "complete"

# Client reads stream
@app.get("/stream/{workflow_id}")
def stream_response(workflow_id: str):
    def generate():
        for value in DBOS.read_stream(workflow_id, "response"):
            yield value
    return StreamingResponse(generate())
```

Stream characteristics:
- Streams are immutable and append-only
- Writes from workflows happen exactly-once
- Writes from steps happen at-least-once (may duplicate on retry)
- Streams auto-close when the workflow terminates

Close streams explicitly when done:

```python
@DBOS.workflow()
def producer():
    DBOS.write_stream("data", {"step": 1})
    DBOS.write_stream("data", {"step": 2})
    DBOS.close_stream("data")  # Signal completion
```

Reference: [Workflow Streaming](https://docs.dbos.dev/python/tutorials/workflow-communication#workflow-streaming)
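
The append-only, close-to-terminate semantics above can be sketched in plain Python. `Stream` is a hypothetical model of the behavior, not DBOS's database-backed implementation:

```python
class Stream:
    """Toy model of a workflow stream: append-only values plus a close marker."""

    _CLOSED = object()  # sentinel marking the end of the stream

    def __init__(self):
        self._values = []

    def write(self, value):
        if self._values and self._values[-1] is self._CLOSED:
            raise RuntimeError("cannot write to a closed stream")
        self._values.append(value)

    def close(self):
        self._values.append(self._CLOSED)

    def read(self):
        # Yield every value written before the close marker
        for value in self._values:
            if value is self._CLOSED:
                return
            yield value

s = Stream()
s.write({"step": 1})
s.write({"step": 2})
s.close()
chunks = list(s.read())
print(chunks)
```
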
74
skills/dbos-python/references/lifecycle-config.md
Normal file
@@ -0,0 +1,74 @@
---
title: Configure and Launch DBOS Properly
impact: CRITICAL
impactDescription: Application won't function without proper setup
tags: configuration, launch, setup, initialization
---

## Configure and Launch DBOS Properly

Every DBOS application must configure and launch DBOS inside the main function.

**Incorrect (configuration at module level):**

```python
from dbos import DBOS, DBOSConfig

# Don't configure at module level!
config: DBOSConfig = {
    "name": "my-app",
}
DBOS(config=config)

@DBOS.workflow()
def my_workflow():
    pass

if __name__ == "__main__":
    DBOS.launch()
    my_workflow()
```

**Correct (configuration in main):**

```python
import os
from dbos import DBOS, DBOSConfig

@DBOS.workflow()
def my_workflow():
    pass

if __name__ == "__main__":
    config: DBOSConfig = {
        "name": "my-app",
        "system_database_url": os.environ.get("DBOS_SYSTEM_DATABASE_URL"),
    }
    DBOS(config=config)
    DBOS.launch()
    my_workflow()
```

For scheduled-only applications (no HTTP server), block the main thread:

```python
import os
import threading
from dbos import DBOS, DBOSConfig

@DBOS.scheduled("* * * * *")
@DBOS.workflow()
def scheduled_task(scheduled_time, actual_time):
    pass

if __name__ == "__main__":
    config: DBOSConfig = {
        "name": "my-app",
        "system_database_url": os.environ.get("DBOS_SYSTEM_DATABASE_URL"),
    }
    DBOS(config=config)
    DBOS.launch()
    threading.Event().wait()  # Block forever
```

Reference: [DBOS Configuration](https://docs.dbos.dev/python/reference/configuration)
66
skills/dbos-python/references/lifecycle-fastapi.md
Normal file
@@ -0,0 +1,66 @@
---
title: Integrate DBOS with FastAPI
impact: CRITICAL
impactDescription: Proper integration ensures workflows survive server restarts
tags: fastapi, http, server, integration
---

## Integrate DBOS with FastAPI

When using DBOS with FastAPI, configure and launch DBOS inside the main function before starting uvicorn.

**Incorrect (configuration at module level):**

```python
from fastapi import FastAPI
from dbos import DBOS, DBOSConfig

app = FastAPI()

# Don't configure at module level!
config: DBOSConfig = {"name": "my-app"}
DBOS(config=config)

@app.get("/")
@DBOS.workflow()
def endpoint():
    return {"status": "ok"}

if __name__ == "__main__":
    DBOS.launch()
    uvicorn.run(app)
```

**Correct (configuration in main):**

```python
import os
from fastapi import FastAPI
from dbos import DBOS, DBOSConfig
import uvicorn

app = FastAPI()

@DBOS.step()
def process_data():
    return "processed"

@app.get("/")
@DBOS.workflow()
def endpoint():
    result = process_data()
    return {"result": result}

if __name__ == "__main__":
    config: DBOSConfig = {
        "name": "my-app",
        "system_database_url": os.environ.get("DBOS_SYSTEM_DATABASE_URL"),
    }
    DBOS(config=config)
    DBOS.launch()
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

The workflow decorator can be combined with FastAPI route decorators. The FastAPI decorator should come first (outermost).

Reference: [DBOS with FastAPI](https://docs.dbos.dev/python/tutorials/workflow-tutorial)
61
skills/dbos-python/references/pattern-classes.md
Normal file
@@ -0,0 +1,61 @@
---
title: Use DBOS Decorators with Classes
impact: MEDIUM
impactDescription: Enables stateful workflow patterns with class instances
tags: classes, dbos_class, instance, oop
---

## Use DBOS Decorators with Classes

DBOS decorators work with class methods. Workflow classes must inherit from `DBOSConfiguredInstance`.

**Incorrect (missing class setup):**

```python
class MyService:
    def __init__(self, url):
        self.url = url

    @DBOS.workflow()  # Won't work without proper setup
    def fetch_data(self):
        return self.fetch()
```

**Correct (proper class setup):**

```python
import requests
from dbos import DBOS, DBOSConfiguredInstance

@DBOS.dbos_class()
class URLFetcher(DBOSConfiguredInstance):
    def __init__(self, url: str):
        self.url = url
        # instance_name must be unique and passed to super()
        super().__init__(instance_name=url)

    @DBOS.workflow()
    def fetch_workflow(self):
        return self.fetch_url()

    @DBOS.step()
    def fetch_url(self):
        return requests.get(self.url).text

# Instantiate BEFORE DBOS.launch()
example_fetcher = URLFetcher("https://example.com")
api_fetcher = URLFetcher("https://api.example.com")

if __name__ == "__main__":
    DBOS.launch()
    print(example_fetcher.fetch_workflow())
```

Requirements:
- Class must be decorated with `@DBOS.dbos_class()`
- Class must inherit from `DBOSConfiguredInstance`
- `instance_name` must be unique and passed to `super().__init__()`
- All instances must be created before `DBOS.launch()`

Steps can be added to any class without these requirements.

Reference: [Python Classes](https://docs.dbos.dev/python/tutorials/classes)
59
skills/dbos-python/references/pattern-debouncing.md
Normal file
@@ -0,0 +1,59 @@
---
title: Debounce Workflows to Prevent Wasted Work
impact: MEDIUM
impactDescription: Reduces redundant executions during rapid input
tags: debounce, throttle, input, optimization
---

## Debounce Workflows to Prevent Wasted Work

Debouncing delays workflow execution until some time has passed since the last trigger. Useful for user input processing.

**Incorrect (processing every input):**

```python
@DBOS.workflow()
def process_input(user_input):
    # Expensive processing
    analyze(user_input)

@app.post("/input")
def on_input(user_id: str, input: str):
    # Every keystroke triggers processing!
    DBOS.start_workflow(process_input, input)
```

**Correct (debounced processing):**

```python
from dbos import Debouncer

@DBOS.workflow()
def process_input(user_input):
    analyze(user_input)

# Create a debouncer for the workflow
debouncer = Debouncer.create(process_input)

@app.post("/input")
def on_input(user_id: str, input: str):
    # Wait 5 seconds after last input before processing
    debounce_key = user_id  # Debounce per user
    debounce_period = 5.0  # Seconds
    handle = debouncer.debounce(debounce_key, debounce_period, input)
    return {"workflow_id": handle.get_workflow_id()}
```

Debouncer with timeout (max wait time):

```python
# Process after 5s idle OR 60s max wait
debouncer = Debouncer.create(process_input, debounce_timeout_sec=60)

def on_input(user_id: str, input: str):
    debouncer.debounce(user_id, 5.0, input)
```

When the workflow executes, it uses the **last** inputs passed to `debounce`.

Reference: [Debouncing Workflows](https://docs.dbos.dev/python/tutorials/workflow-tutorial#debouncing-workflows)
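
The last-input-wins timing rule can be sketched deterministically in plain Python. `Debounce` below is a hypothetical model driven by an explicit clock argument, not the DBOS `Debouncer` API:

```python
class Debounce:
    """Toy debounce model: an explicit clock replaces real timers."""

    def __init__(self, action):
        self._action = action
        self._pending = {}  # key -> (deadline, last_input)

    def debounce(self, key, period, value, now):
        # Each call pushes the deadline out and replaces the stored input
        self._pending[key] = (now + period, value)

    def advance(self, now):
        # Fire every entry whose quiet period has elapsed
        fired = []
        for key, (deadline, value) in list(self._pending.items()):
            if now >= deadline:
                fired.append(self._action(value))
                del self._pending[key]
        return fired

results = []
d = Debounce(lambda v: results.append(v) or v)
d.debounce("user-1", 5.0, "he", now=0.0)
d.debounce("user-1", 5.0, "hello", now=2.0)  # resets the 5s quiet window
assert d.advance(now=5.0) == []              # only 3s since last input
d.advance(now=7.0)                           # 5s quiet: fires with the last input
print(results)
```
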
52
skills/dbos-python/references/pattern-idempotency.md
Normal file
@@ -0,0 +1,52 @@
---
title: Use Workflow IDs for Idempotency
impact: MEDIUM
impactDescription: Prevents duplicate executions of critical operations
tags: idempotency, workflow-id, deduplication, exactly-once
---

## Use Workflow IDs for Idempotency

Set workflow IDs to make operations idempotent. A workflow with the same ID executes only once.

**Incorrect (duplicate payments possible):**

```python
@app.post("/pay/{order_id}")
def process_payment(order_id: str):
    # Multiple clicks = multiple payments!
    handle = DBOS.start_workflow(payment_workflow, order_id)
    return handle.get_result()
```

**Correct (idempotent with workflow ID):**

```python
from dbos import SetWorkflowID

@app.post("/pay/{order_id}")
def process_payment(order_id: str):
    # Same order_id = same workflow ID = only one execution
    with SetWorkflowID(f"payment-{order_id}"):
        handle = DBOS.start_workflow(payment_workflow, order_id)
    return handle.get_result()

@DBOS.workflow()
def payment_workflow(order_id: str):
    charge_customer(order_id)
    send_confirmation(order_id)
    return "success"
```

Access the workflow ID inside workflows:

```python
@DBOS.workflow()
def my_workflow():
    current_id = DBOS.workflow_id
    DBOS.logger.info(f"Running workflow {current_id}")
```

Workflow IDs must be globally unique. Duplicate IDs return the existing workflow's result without re-executing.

Reference: [Workflow IDs and Idempotency](https://docs.dbos.dev/python/tutorials/workflow-tutorial#workflow-ids-and-idempotency)
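
The run-once-per-ID guarantee can be illustrated with a plain-Python sketch. `run_once` is a hypothetical in-memory model of the semantics; DBOS persists this bookkeeping in its system database so it survives restarts:

```python
_completed = {}  # workflow_id -> stored result

def run_once(workflow_id, func, *args):
    """Execute func at most once per workflow_id; repeats return the stored result."""
    if workflow_id in _completed:
        return _completed[workflow_id]
    result = func(*args)
    _completed[workflow_id] = result
    return result

calls = []

def charge(order_id):
    calls.append(order_id)  # side effect we want exactly once
    return f"charged-{order_id}"

first = run_once("payment-42", charge, "42")
second = run_once("payment-42", charge, "42")  # same ID: no second charge
print(first, second, len(calls))
```
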
56
skills/dbos-python/references/pattern-scheduled.md
Normal file
@@ -0,0 +1,56 @@
---
title: Create Scheduled Workflows
impact: MEDIUM
impactDescription: Run workflows exactly once per time interval
tags: scheduled, cron, recurring, timer
---

## Create Scheduled Workflows

Use `@DBOS.scheduled` to run workflows on a schedule. Workflows run exactly once per interval.

**Incorrect (manual scheduling):**

```python
# Don't use external cron or manual timers
import schedule
schedule.every().minute.do(my_task)
```

**Correct (DBOS scheduled workflow):**

```python
@DBOS.scheduled("* * * * *")  # Every minute
@DBOS.workflow()
def run_every_minute(scheduled_time, actual_time):
    print(f"Running at {scheduled_time}")
    do_maintenance_task()

@DBOS.scheduled("0 */6 * * *")  # Every 6 hours
@DBOS.workflow()
def periodic_cleanup(scheduled_time, actual_time):
    cleanup_old_records()
```

Scheduled workflow requirements:
- Must have the `@DBOS.scheduled` decorator with crontab syntax
- Must accept two arguments: `scheduled_time` and `actual_time` (both `datetime`)
- The main thread must stay alive for scheduled workflows to run

For apps with only scheduled workflows (no HTTP server):

```python
import threading

if __name__ == "__main__":
    DBOS.launch()
    threading.Event().wait()  # Block forever
```

Crontab format: `minute hour day month weekday`
- `* * * * *` = every minute
- `0 * * * *` = every hour
- `0 0 * * *` = daily at midnight
- `0 0 * * 0` = weekly on Sunday

Reference: [Scheduled Workflows](https://docs.dbos.dev/python/tutorials/scheduled-workflows)
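
The five-field format above can be made concrete with a small matcher. This is a simplified sketch supporting only `*`, exact numbers, and `*/n` steps; real crontab syntax also allows ranges, lists, and names:

```python
from datetime import datetime

def field_matches(field, value):
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr, dt):
    """True if dt matches a 5-field crontab expression (minute hour day month weekday)."""
    minute, hour, day, month, weekday = expr.split()
    cron_dow = (dt.weekday() + 1) % 7  # crontab counts Sunday as 0
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(day, dt.day)
            and field_matches(month, dt.month)
            and field_matches(weekday, cron_dow))

sunday_midnight = datetime(2024, 1, 7, 0, 0)  # 2024-01-07 is a Sunday
print(cron_matches("* * * * *", sunday_midnight))   # every minute: always True
print(cron_matches("0 0 * * 0", sunday_midnight))   # weekly on Sunday at midnight
print(cron_matches("0 */6 * * *", datetime(2024, 1, 7, 18, 0)))  # every 6 hours
```
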
58
skills/dbos-python/references/pattern-sleep.md
Normal file
@@ -0,0 +1,58 @@
---
title: Use Durable Sleep for Delayed Execution
impact: MEDIUM
impactDescription: Survives restarts and can span days or weeks
tags: sleep, delay, schedule, durable
---

## Use Durable Sleep for Delayed Execution

Use `DBOS.sleep()` for durable delays that survive restarts. The wakeup time is persisted in the database.

**Incorrect (regular sleep):**

```python
import time

@DBOS.workflow()
def delayed_task(delay_seconds, task):
    # Regular sleep is lost on restart!
    time.sleep(delay_seconds)
    run_task(task)
```

**Correct (durable sleep):**

```python
@DBOS.workflow()
def delayed_task(delay_seconds, task):
    # Durable sleep - survives restarts
    DBOS.sleep(delay_seconds)
    run_task(task)
```

Use cases for durable sleep:
- Schedule a task for the future
- Wait between retries
- Implement delays spanning hours, days, or weeks

Example: Schedule a reminder:

```python
@DBOS.workflow()
def send_reminder(user_id: str, message: str, delay_days: int):
    # Sleep for days - survives any restart
    DBOS.sleep(delay_days * 24 * 60 * 60)
    send_notification(user_id, message)
```

For async workflows, use `DBOS.sleep_async()`:

```python
@DBOS.workflow()
async def async_delayed_task():
    await DBOS.sleep_async(60)
    await run_async_task()
```

Reference: [Durable Sleep](https://docs.dbos.dev/python/tutorials/workflow-tutorial#durable-sleep)
60
skills/dbos-python/references/queue-basics.md
Normal file
@@ -0,0 +1,60 @@
---
title: Use Queues for Concurrent Workflows
impact: HIGH
impactDescription: Queues provide managed concurrency and flow control
tags: queue, concurrency, enqueue, workflow
---

## Use Queues for Concurrent Workflows

Queues run many workflows concurrently with managed flow control. Use them when you need to control how many workflows run at once.

**Incorrect (uncontrolled concurrency):**

```python
@DBOS.workflow()
def process_task(task):
    pass

# Starting many workflows without control
for task in tasks:
    DBOS.start_workflow(process_task, task)  # Could overwhelm resources
```

**Correct (using queue):**

```python
from dbos import Queue

queue = Queue("task_queue")

@DBOS.workflow()
def process_task(task):
    pass

@DBOS.workflow()
def process_all_tasks(tasks):
    handles = []
    for task in tasks:
        # Queue manages concurrency
        handle = queue.enqueue(process_task, task)
        handles.append(handle)
    # Wait for all tasks
    return [h.get_result() for h in handles]
```

Queues process workflows in FIFO order. You can enqueue both workflows and steps.

```python
queue = Queue("example_queue")

@DBOS.step()
def my_step(data):
    return process(data)

# Enqueue a step
handle = queue.enqueue(my_step, data)
result = handle.get_result()
```

Reference: [DBOS Queues](https://docs.dbos.dev/python/tutorials/queue-tutorial)
57
skills/dbos-python/references/queue-concurrency.md
Normal file
@@ -0,0 +1,57 @@
---
title: Control Queue Concurrency
impact: HIGH
impactDescription: Prevents resource exhaustion with concurrent limits
tags: queue, concurrency, worker_concurrency, limits
---

## Control Queue Concurrency

Queues support worker-level and global concurrency limits to prevent resource exhaustion.

**Incorrect (no concurrency control):**

```python
queue = Queue("heavy_tasks")  # No limits - could exhaust memory

@DBOS.workflow()
def memory_intensive_task(data):
    # Uses lots of memory
    pass
```

**Correct (worker concurrency):**

```python
# Each process runs at most 5 tasks from this queue
queue = Queue("heavy_tasks", worker_concurrency=5)

@DBOS.workflow()
def memory_intensive_task(data):
    pass
```

**Correct (global concurrency):**

```python
# At most 10 tasks run across ALL processes
queue = Queue("limited_tasks", concurrency=10)
```

**In-order processing (sequential):**

```python
# Only one task at a time - guarantees order
queue = Queue("sequential_queue", concurrency=1)

@DBOS.step()
def process_event(event):
    pass

def handle_event(event):
    queue.enqueue(process_event, event)
```

Worker concurrency is recommended for most use cases. Global concurrency should be used carefully, as pending workflows count toward the limit.

Reference: [Managing Concurrency](https://docs.dbos.dev/python/tutorials/queue-tutorial#managing-concurrency)
51
skills/dbos-python/references/queue-deduplication.md
Normal file
@@ -0,0 +1,51 @@
---
title: Deduplicate Queued Workflows
impact: HIGH
impactDescription: Prevents duplicate work and resource waste
tags: queue, deduplication, duplicate, idempotent
---

## Deduplicate Queued Workflows

Use deduplication IDs to ensure only one workflow with a given ID is active in a queue at a time.

**Incorrect (duplicate workflows possible):**

```python
queue = Queue("user_tasks")

@app.post("/process/{user_id}")
def process_for_user(user_id: str):
    # Multiple requests = multiple workflows for same user!
    queue.enqueue(process_workflow, user_id)
```

**Correct (deduplicated by user):**

```python
from dbos import Queue, SetEnqueueOptions
from dbos import error as dboserror

queue = Queue("user_tasks")

@app.post("/process/{user_id}")
def process_for_user(user_id: str):
    with SetEnqueueOptions(deduplication_id=user_id):
        try:
            handle = queue.enqueue(process_workflow, user_id)
            return {"workflow_id": handle.get_workflow_id()}
        except dboserror.DBOSQueueDeduplicatedError:
            return {"status": "already processing"}
```

Deduplication behavior:
- If a workflow with the same deduplication ID is `ENQUEUED` or `PENDING`, a new enqueue raises `DBOSQueueDeduplicatedError`
- Once the workflow completes, a new workflow with the same ID can be enqueued
- Deduplication is per-queue (the same ID can exist in different queues)

Use cases:
- One active task per user
- Preventing duplicate job submissions
- Rate limiting by entity

Reference: [Queue Deduplication](https://docs.dbos.dev/python/tutorials/queue-tutorial#deduplication)
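
The behavior bullets can be modeled in plain Python. `DedupQueue` is a hypothetical illustration of the per-queue bookkeeping only, not the DBOS implementation (which raises `DBOSQueueDeduplicatedError` rather than `ValueError`):

```python
class DedupQueue:
    """Toy model: at most one active workflow per deduplication ID per queue."""

    def __init__(self, name):
        self.name = name
        self._active = set()  # deduplication IDs currently ENQUEUED or PENDING

    def enqueue(self, dedup_id):
        if dedup_id in self._active:
            raise ValueError(f"duplicate in {self.name}: {dedup_id}")
        self._active.add(dedup_id)

    def complete(self, dedup_id):
        # Completion frees the ID for a future enqueue
        self._active.discard(dedup_id)

q1 = DedupQueue("user_tasks")
q2 = DedupQueue("other_tasks")
q1.enqueue("user-123")
q2.enqueue("user-123")  # per-queue: same ID allowed in a different queue
try:
    q1.enqueue("user-123")  # still active in q1: rejected
except ValueError as e:
    print(e)
q1.complete("user-123")
q1.enqueue("user-123")  # completed, so the ID can be reused
```
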
64
skills/dbos-python/references/queue-listening.md
Normal file
@@ -0,0 +1,64 @@
---
title: Control Which Queues a Worker Listens To
impact: HIGH
impactDescription: Enables heterogeneous worker pools (CPU/GPU)
tags: queue, listen, worker, heterogeneous
---

## Control Which Queues a Worker Listens To

Use `DBOS.listen_queues()` to make a process only handle specific queues. Useful for CPU vs GPU workers.

**Incorrect (all workers handle all queues):**

```python
cpu_queue = Queue("cpu_tasks")
gpu_queue = Queue("gpu_tasks")

# Every worker processes both queues
# GPU tasks may run on CPU-only machines!
if __name__ == "__main__":
    DBOS(config=config)
    DBOS.launch()
```

**Correct (workers listen to specific queues):**

```python
import os
from dbos import DBOS, DBOSConfig, Queue

cpu_queue = Queue("cpu_queue")
gpu_queue = Queue("gpu_queue")

@DBOS.workflow()
def cpu_task(data):
    pass

@DBOS.workflow()
def gpu_task(data):
    pass

if __name__ == "__main__":
    worker_type = os.environ.get("WORKER_TYPE")  # "cpu" or "gpu"
    config: DBOSConfig = {"name": "worker"}
    DBOS(config=config)

    if worker_type == "gpu":
        DBOS.listen_queues([gpu_queue])
    elif worker_type == "cpu":
        DBOS.listen_queues([cpu_queue])

    DBOS.launch()
```

Key points:
- Call `DBOS.listen_queues()` **before** `DBOS.launch()`
- Workers can still **enqueue** to any queue; they just won't **dequeue** from others
- By default, workers listen to all declared queues

Use cases:
- CPU vs GPU workers
- Memory-intensive vs lightweight tasks
- Geographic task routing

Reference: [Explicit Queue Listening](https://docs.dbos.dev/python/tutorials/queue-tutorial#explicit-queue-listening)
62
skills/dbos-python/references/queue-partitioning.md
Normal file
@@ -0,0 +1,62 @@
---
title: Partition Queues for Per-Entity Limits
impact: HIGH
impactDescription: Enables per-user or per-entity flow control
tags: queue, partition, per-user, flow-control
---

## Partition Queues for Per-Entity Limits

Partitioned queues apply flow-control limits per partition, not globally. This is useful for per-user or per-entity concurrency limits.

**Incorrect (global limit affects all users):**

```python
queue = Queue("user_tasks", concurrency=1)  # Only 1 task total

def handle_user_task(user_id, task):
    # One user blocks all other users!
    queue.enqueue(process_task, task)
```

**Correct (per-user limits with partitioning):**

```python
from dbos import DBOS, Queue, SetEnqueueOptions

# Partitioned queue with concurrency=1 per partition
queue = Queue("user_tasks", partition_queue=True, concurrency=1)

@DBOS.workflow()
def process_task(task):
    pass

def handle_user_task(user_id: str, task):
    # Each user gets their own "subqueue" with concurrency=1
    with SetEnqueueOptions(queue_partition_key=user_id):
        queue.enqueue(process_task, task)
```

For both per-partition AND global limits, use two-level queueing:

```python
# Global limit of 5 concurrent tasks
global_queue = Queue("global_queue", concurrency=5)
# Per-user limit of 1 concurrent task
user_queue = Queue("user_queue", partition_queue=True, concurrency=1)

def handle_task(user_id: str, task):
    with SetEnqueueOptions(queue_partition_key=user_id):
        user_queue.enqueue(concurrency_manager, task)

@DBOS.workflow()
def concurrency_manager(task):
    # Enforces the global limit
    return global_queue.enqueue(process_task, task).get_result()

@DBOS.workflow()
def process_task(task):
    pass
```

Reference: [Partitioning Queues](https://docs.dbos.dev/python/tutorials/queue-tutorial#partitioning-queues)

62
skills/dbos-python/references/queue-priority.md
Normal file
@@ -0,0 +1,62 @@
---
title: Set Queue Priority for Workflows
impact: HIGH
impactDescription: Ensures important work runs first
tags: queue, priority, ordering, scheduling
---

## Set Queue Priority for Workflows

Use priority to control which workflows run first. Lower numbers mean higher priority.

**Incorrect (no priority control):**

```python
queue = Queue("tasks")

# All tasks treated equally - urgent tasks may wait
for task in tasks:
    queue.enqueue(process_task, task)
```

**Correct (with priority):**

```python
from dbos import DBOS, Queue, SetEnqueueOptions

# Must enable priority on the queue
queue = Queue("tasks", priority_enabled=True)

@DBOS.workflow()
def process_task(task):
    pass

def enqueue_task(task, is_urgent: bool):
    # Priority 1 = highest, runs before priority 10
    priority = 1 if is_urgent else 10
    with SetEnqueueOptions(priority=priority):
        queue.enqueue(process_task, task)
```

Priority behavior:

- Range: 1 to 2,147,483,647 (lower = higher priority)
- Workflows enqueued without a priority have the highest priority (they run first)
- Same priority = FIFO order
- Must set `priority_enabled=True` on the queue

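The ordering rules can be pictured with a plain-Python model of the drain order (an illustration of the semantics, not the actual DBOS scheduler):

```python
def drain_order(entries):
    """entries: list of (task_name, priority_or_None) in enqueue order.

    Models dequeue order: entries without a priority run first, then
    ascending priority; ties are broken FIFO by enqueue position.
    """
    def key(item):
        index, (name, priority) = item
        return (0, 0, index) if priority is None else (1, priority, index)
    return [name for _, (name, _) in sorted(enumerate(entries), key=key)]
```

For example, `drain_order([("a", 10), ("b", 1), ("c", None), ("d", 10)])` drains `c` (no priority) first, then `b`, then `a` before `d` (FIFO within priority 10).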
Example with multiple priority levels:

```python
queue = Queue("jobs", priority_enabled=True)

PRIORITY_CRITICAL = 1
PRIORITY_HIGH = 10
PRIORITY_NORMAL = 100
PRIORITY_LOW = 1000

def enqueue_job(job, level):
    with SetEnqueueOptions(priority=level):
        queue.enqueue(process_job, job)
```

Reference: [Queue Priority](https://docs.dbos.dev/python/tutorials/queue-tutorial#priority)

55
skills/dbos-python/references/queue-rate-limiting.md
Normal file
@@ -0,0 +1,55 @@
---
title: Rate Limit Queue Execution
impact: HIGH
impactDescription: Prevents hitting API rate limits
tags: queue, rate-limit, api, throttle
---

## Rate Limit Queue Execution

Use rate limits when working with rate-limited APIs (like LLM APIs). Limits are global across all processes.

**Incorrect (no rate limiting):**

```python
queue = Queue("llm_tasks")

@DBOS.step()
def call_llm(prompt):
    # May hit rate limits if too many calls
    return openai.chat.completions.create(...)
```

**Correct (with rate limit):**

```python
# Max 50 tasks started per 30 seconds
queue = Queue("llm_tasks", limiter={"limit": 50, "period": 30})

@DBOS.step()
def call_llm(prompt):
    return openai.chat.completions.create(...)

@DBOS.workflow()
def process_prompts(prompts):
    handles = []
    for prompt in prompts:
        # Queue enforces rate limit
        handle = queue.enqueue(call_llm, prompt)
        handles.append(handle)
    return [h.get_result() for h in handles]
```

Rate limit parameters:

- `limit`: Maximum number of functions to start in the period
- `period`: Time period in seconds

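As a rough mental model, assume a backlog is enqueued all at once and started as fast as the limiter allows: tasks start in waves of `limit` per `period`. (This is a simplification; the real limiter accounts for actual start times.)

```python
def earliest_start(i: int, limit: int, period: int) -> int:
    # Earliest start time, in seconds after the first dequeue,
    # of the i-th task in the backlog (0-indexed)
    return (i // limit) * period

# With limit=2, period=30: two tasks start at 0s, two at 30s, one at 60s
starts = [earliest_start(i, limit=2, period=30) for i in range(5)]
```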
Rate limits can be combined with concurrency limits:

```python
queue = Queue("api_tasks",
              worker_concurrency=5,
              limiter={"limit": 100, "period": 60})
```

Reference: [Rate Limiting](https://docs.dbos.dev/python/tutorials/queue-tutorial#rate-limiting)

53
skills/dbos-python/references/step-basics.md
Normal file
@@ -0,0 +1,53 @@
---
title: Use Steps for External Operations
impact: HIGH
impactDescription: Steps enable recovery by checkpointing results
tags: step, external, api, checkpoint
---

## Use Steps for External Operations

Any function that performs complex operations, accesses external APIs, or has side effects should be a step. Step results are checkpointed, enabling workflow recovery.

**Incorrect (external call in workflow):**

```python
import requests

@DBOS.workflow()
def my_workflow():
    # External API call directly in workflow - not checkpointed!
    response = requests.get("https://api.example.com/data")
    return response.json()
```

**Correct (external call in step):**

```python
import requests

@DBOS.step()
def fetch_data():
    response = requests.get("https://api.example.com/data")
    return response.json()

@DBOS.workflow()
def my_workflow():
    # Step result is checkpointed for recovery
    data = fetch_data()
    return data
```

Step requirements:

- Inputs and outputs must be serializable
- Should not modify global state
- Can be retried on failure (configurable)

When to use steps:

- API calls to external services
- File system operations
- Random number generation
- Getting the current time
- Any non-deterministic operation

Reference: [DBOS Steps](https://docs.dbos.dev/python/tutorials/step-tutorial)

44
skills/dbos-python/references/step-retries.md
Normal file
@@ -0,0 +1,44 @@
---
title: Configure Step Retries for Transient Failures
impact: HIGH
impactDescription: Automatic retries handle transient failures without manual code
tags: step, retry, exponential-backoff, resilience
---

## Configure Step Retries for Transient Failures

Steps can automatically retry on failure with exponential backoff. This handles transient failures like network issues.

**Incorrect (manual retry logic):**

```python
import time

import requests

@DBOS.step()
def fetch_data():
    # Manual retry logic is error-prone
    for attempt in range(3):
        try:
            return requests.get("https://api.example.com").json()
        except Exception:
            if attempt == 2:
                raise
            time.sleep(2 ** attempt)
```

**Correct (built-in retries):**

```python
import requests

@DBOS.step(retries_allowed=True, max_attempts=10, interval_seconds=1.0, backoff_rate=2.0)
def fetch_data():
    # Retries handled automatically
    return requests.get("https://api.example.com").json()
```

Retry parameters:

- `retries_allowed`: Enable automatic retries (default: False)
- `max_attempts`: Maximum retry attempts (default: 3)
- `interval_seconds`: Initial delay between retries (default: 1.0)
- `backoff_rate`: Multiplier for exponential backoff (default: 2.0)

With default interval and backoff, retry delays are: 1s, 2s, 4s, 8s, 16s...

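The schedule is a geometric progression of the two parameters; a quick sketch (assuming no cap on the delay):

```python
def retry_delays(interval_seconds: float = 1.0,
                 backoff_rate: float = 2.0,
                 attempts: int = 5) -> list:
    # Delay before the n-th retry: interval_seconds * backoff_rate ** n
    return [interval_seconds * backoff_rate ** n for n in range(attempts)]

# retry_delays() -> [1.0, 2.0, 4.0, 8.0, 16.0]
```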
Reference: [Configurable Retries](https://docs.dbos.dev/python/tutorials/step-tutorial#configurable-retries)

58
skills/dbos-python/references/step-transactions.md
Normal file
@@ -0,0 +1,58 @@
---
title: Use Transactions for Database Operations
impact: HIGH
impactDescription: Transactions provide atomic database operations
tags: transaction, database, postgres, sqlalchemy
---

## Use Transactions for Database Operations

Transactions are a special type of step optimized for database access. They execute as a single database transaction. Use them only with Postgres.

**Incorrect (database access in a regular step):**

```python
@DBOS.step()
def save_to_db(data):
    # For Postgres, use transactions instead of steps
    # This doesn't get transactional guarantees
    engine.execute("INSERT INTO table VALUES (?)", data)
```

**Correct (using a transaction):**

```python
from sqlalchemy import text

@DBOS.transaction()
def save_to_db(name: str, value: str) -> None:
    sql = text("INSERT INTO my_table (name, value) VALUES (:name, :value)")
    DBOS.sql_session.execute(sql, {"name": name, "value": value})

@DBOS.transaction()
def get_from_db(name: str) -> str | None:
    sql = text("SELECT value FROM my_table WHERE name = :name LIMIT 1")
    row = DBOS.sql_session.execute(sql, {"name": name}).first()
    return row[0] if row else None
```

With the SQLAlchemy ORM:

```python
from sqlalchemy import Table, Column, String, MetaData

greetings = Table("greetings", MetaData(),
                  Column("name", String),
                  Column("note", String))

@DBOS.transaction()
def insert_greeting(name: str, note: str) -> None:
    DBOS.sql_session.execute(greetings.insert().values(name=name, note=note))
```

Important:

- Only use transactions with Postgres databases
- For other databases, use regular steps
- Never use `async def` with transactions

Reference: [DBOS Transactions](https://docs.dbos.dev/python/reference/decorators#transactions)

63
skills/dbos-python/references/test-fixtures.md
Normal file
@@ -0,0 +1,63 @@
---
title: Use Proper Test Fixtures for DBOS
impact: LOW-MEDIUM
impactDescription: Ensures clean state between tests
tags: testing, pytest, fixtures, reset
---

## Use Proper Test Fixtures for DBOS

Use pytest fixtures to properly reset DBOS state between tests.

**Incorrect (no reset between tests):**

```python
def test_workflow_one():
    DBOS.launch()
    result = my_workflow()
    assert result == "expected"

def test_workflow_two():
    # DBOS state from previous test!
    result = another_workflow()
```

**Correct (reset fixture):**

```python
import pytest
import os
from dbos import DBOS, DBOSConfig

@pytest.fixture()
def reset_dbos():
    DBOS.destroy()
    config: DBOSConfig = {
        "name": "test-app",
        "database_url": os.environ.get("TESTING_DATABASE_URL"),
    }
    DBOS(config=config)
    DBOS.reset_system_database()
    DBOS.launch()
    yield
    DBOS.destroy()

def test_workflow_one(reset_dbos):
    result = my_workflow()
    assert result == "expected"

def test_workflow_two(reset_dbos):
    # Clean DBOS state
    result = another_workflow()
    assert result == "other_expected"
```

The fixture:

1. Destroys any existing DBOS instance
2. Creates a fresh configuration
3. Resets the system database
4. Launches DBOS
5. Yields for test execution
6. Cleans up after the test

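The setup/teardown ordering of a yield fixture can be seen with a plain generator, which is schematically what pytest drives under the hood:

```python
def reset_like_fixture(log):
    log.append("setup")      # everything before yield runs as setup
    yield
    log.append("teardown")   # everything after yield runs as teardown

log = []
fixture = reset_like_fixture(log)
next(fixture)            # setup runs; the test body would execute now
log.append("test")
next(fixture, None)      # teardown runs (StopIteration swallowed)
```

After this runs, `log` is `["setup", "test", "teardown"]` - the same order pytest applies `reset_dbos` around each test.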
Reference: [Testing DBOS](https://docs.dbos.dev/python/tutorials/testing)

58
skills/dbos-python/references/workflow-background.md
Normal file
@@ -0,0 +1,58 @@
---
title: Start Workflows in Background
impact: CRITICAL
impactDescription: Background workflows survive crashes and restarts
tags: workflow, background, start_workflow, handle
---

## Start Workflows in Background

Use `DBOS.start_workflow` to run workflows in the background. This returns a handle to monitor the workflow or retrieve its result.

**Incorrect (using threads):**

```python
import threading

@DBOS.workflow()
def long_task(data):
    # Long-running work
    pass

# Don't use threads for DBOS workflows!
thread = threading.Thread(target=long_task, args=(data,))
thread.start()
```

**Correct (using start_workflow):**

```python
from dbos import DBOS, WorkflowHandle

@DBOS.workflow()
def long_task(data):
    # Long-running work
    return "done"

# Start workflow in background
handle: WorkflowHandle = DBOS.start_workflow(long_task, data)

# Later, get the result
result = handle.get_result()

# Or check status
status = handle.get_status()
```

You can retrieve a workflow handle later using its ID:

```python
# Get workflow ID
workflow_id = handle.get_workflow_id()

# Later, retrieve the handle
handle = DBOS.retrieve_workflow(workflow_id)
result = handle.get_result()
```

Reference: [Starting Workflows](https://docs.dbos.dev/python/tutorials/workflow-tutorial#starting-workflows-in-the-background)

70
skills/dbos-python/references/workflow-constraints.md
Normal file
@@ -0,0 +1,70 @@
---
title: Follow Workflow Constraints
impact: CRITICAL
impactDescription: Violating constraints causes failures or incorrect behavior
tags: workflow, step, constraints, rules
---

## Follow Workflow Constraints

DBOS workflows and steps have specific constraints that must be followed for correct operation.

**Incorrect (calling start_workflow from a step):**

```python
@DBOS.step()
def my_step():
    # Never start workflows from inside a step!
    DBOS.start_workflow(another_workflow)
```

**Incorrect (modifying global state):**

```python
results = []  # Global variable

@DBOS.workflow()
def my_workflow():
    # Don't modify globals from workflows!
    results.append("done")
```

**Incorrect (using recv outside a workflow):**

```python
@DBOS.step()
def my_step():
    # recv can only be called from workflows!
    msg = DBOS.recv("topic")
```

**Correct (following constraints):**

```python
@DBOS.workflow()
def parent_workflow():
    result = my_step()
    # Start child workflow from workflow, not step
    handle = DBOS.start_workflow(child_workflow, result)
    # Use recv from workflow
    msg = DBOS.recv("topic")
    return handle.get_result()

@DBOS.step()
def my_step():
    # Steps just do their work and return
    return process_data()

@DBOS.workflow()
def child_workflow(data):
    return transform(data)
```

Key constraints:

- Do NOT call `DBOS.start_workflow` from a step
- Do NOT call `DBOS.recv` from a step
- Do NOT call `DBOS.set_event` from outside a workflow
- Do NOT modify global variables from workflows or steps
- Do NOT use threads to start workflows

Reference: [DBOS Workflows](https://docs.dbos.dev/python/tutorials/workflow-tutorial)

77
skills/dbos-python/references/workflow-control.md
Normal file
@@ -0,0 +1,77 @@
---
title: Cancel, Resume, and Fork Workflows
impact: MEDIUM
impactDescription: Control running workflows and recover from failures
tags: workflow, cancel, resume, fork, control
---

## Cancel, Resume, and Fork Workflows

Use these methods to control workflow execution: stop runaway workflows, retry failed ones, or restart from a specific step.

**Incorrect (expecting immediate cancellation):**

```python
DBOS.cancel_workflow(workflow_id)
# Wrong: assuming the workflow stopped immediately
cleanup_resources()  # May race with workflow still running its current step
```

**Correct (wait for cancellation to complete):**

```python
import time

DBOS.cancel_workflow(workflow_id)
# Cancellation happens at the START of the next step.
# Wait for the workflow to actually stop.
handle = DBOS.retrieve_workflow(workflow_id)
status = handle.get_status()
while status.status == "PENDING":
    time.sleep(0.5)
    status = handle.get_status()
# Now safe to clean up
cleanup_resources()
```

### Cancel

Stop a workflow and remove it from its queue:

```python
DBOS.cancel_workflow(workflow_id)  # Cancels workflow and all children
```

### Resume

Restart a stopped workflow from its last completed step:

```python
# Resume a cancelled or failed workflow
handle = DBOS.resume_workflow(workflow_id)
result = handle.get_result()

# Can also bypass the queue for an enqueued workflow
handle = DBOS.resume_workflow(enqueued_workflow_id)
```

### Fork

Start a new workflow from a specific step of an existing one:

```python
# Get steps to find the right starting point
steps = DBOS.list_workflow_steps(workflow_id)
for step in steps:
    print(f"Step {step['function_id']}: {step['function_name']}")

# Fork from step 3 (skips steps 1-2, uses their saved results)
new_handle = DBOS.fork_workflow(workflow_id, start_step=3)

# Fork to run on a new application version (useful for patching bugs)
new_handle = DBOS.fork_workflow(
    workflow_id,
    start_step=3,
    application_version="2.0.0"
)
```

Reference: [Workflow Management](https://docs.dbos.dev/python/tutorials/workflow-management)

53
skills/dbos-python/references/workflow-determinism.md
Normal file
@@ -0,0 +1,53 @@
---
title: Keep Workflows Deterministic
impact: CRITICAL
impactDescription: Non-deterministic workflows cannot recover correctly
tags: workflow, determinism, recovery, reliability
---

## Keep Workflows Deterministic

Workflow functions must be deterministic: given the same inputs and step return values, they must invoke the same steps in the same order. Non-deterministic operations must be moved to steps.

**Incorrect (non-deterministic workflow):**

```python
import random

@DBOS.workflow()
def example_workflow():
    # Random number in workflow breaks recovery!
    choice = random.randint(0, 1)
    if choice == 0:
        step_one()
    else:
        step_two()
```

**Correct (non-determinism in step):**

```python
import random

@DBOS.step()
def generate_choice():
    return random.randint(0, 1)

@DBOS.workflow()
def example_workflow():
    # Random number generated in step - result is saved
    choice = generate_choice()
    if choice == 0:
        step_one()
    else:
        step_two()
```

Non-deterministic operations that must be in steps:

- Random number generation
- Getting the current time
- Accessing external APIs
- Reading files
- Database queries (use transactions or steps)

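Why determinism matters can be sketched with a toy replay model (a deliberate simplification of durable execution: recovery re-runs the workflow code against checkpointed step results):

```python
def example_workflow(next_step):
    # Workflow logic, parameterized by a step-runner
    choice = next_step("generate_choice")
    return "one" if choice == 0 else "two"

def replay(recorded_steps):
    """Re-run the workflow against checkpointed (step_name, result) pairs."""
    calls = iter(recorded_steps)
    def next_step(expected_name):
        name, result = next(calls)
        # Deterministic code must request the same steps in the same order,
        # or the checkpoint log no longer matches the execution
        assert name == expected_name, f"non-deterministic: {name} != {expected_name}"
        return result
    return example_workflow(next_step)
```

Because the random draw lives in a step, replaying the saved log reproduces the original path exactly; if the workflow itself called `random.randint`, the replayed branch could diverge from the log.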
Reference: [Workflow Determinism](https://docs.dbos.dev/python/tutorials/workflow-tutorial#determinism)

68
skills/dbos-python/references/workflow-introspection.md
Normal file
@@ -0,0 +1,68 @@
---
title: List and Inspect Workflows
impact: MEDIUM
impactDescription: Enables monitoring and management of workflow state
tags: workflow, list, introspection, status, monitoring
---

## List and Inspect Workflows

Use `DBOS.list_workflows()` to query workflows by status, name, queue, or other criteria.

**Incorrect (loading unnecessary data):**

```python
# Loading inputs/outputs when not needed is slow
workflows = DBOS.list_workflows(status="PENDING")
for w in workflows:
    print(w.workflow_id)  # Only using ID
```

**Correct (optimize with load flags):**

```python
# Disable loading inputs/outputs for better performance
workflows = DBOS.list_workflows(
    status="PENDING",
    load_input=False,
    load_output=False
)
for w in workflows:
    print(f"{w.workflow_id}: {w.status}")
```

Common queries:

```python
# Find failed workflows
failed = DBOS.list_workflows(status="ERROR", limit=100)

# Find workflows by name
processing = DBOS.list_workflows(
    name="process_task",
    status=["PENDING", "ENQUEUED"]
)

# Find workflows on a specific queue
queued = DBOS.list_workflows(queue_name="high_priority")

# Only queued workflows (shortcut)
queued = DBOS.list_queued_workflows(queue_name="task_queue")

# Find old-version workflows for blue-green deploys
old = DBOS.list_workflows(
    app_version="1.0.0",
    status=["PENDING", "ENQUEUED"]
)

# Get workflow steps
steps = DBOS.list_workflow_steps(workflow_id)
for step in steps:
    print(f"Step {step['function_id']}: {step['function_name']}")
```

WorkflowStatus fields: `workflow_id`, `status`, `name`, `queue_name`, `created_at`, `input`, `output`, `error`

Status values: `ENQUEUED`, `PENDING`, `SUCCESS`, `ERROR`, `CANCELLED`, `MAX_RECOVERY_ATTEMPTS_EXCEEDED`

Reference: [Workflow Management](https://docs.dbos.dev/python/tutorials/workflow-management)

59
skills/dbos-python/references/workflow-timeout.md
Normal file
@@ -0,0 +1,59 @@
---
title: Set Workflow Timeouts
impact: CRITICAL
impactDescription: Prevents runaway workflows from consuming resources
tags: timeout, cancel, deadline, limits
---

## Set Workflow Timeouts

Use `SetWorkflowTimeout` to limit workflow execution time. Timed-out workflows are cancelled.

**Incorrect (no timeout):**

```python
@DBOS.workflow()
def potentially_long_workflow():
    # Could run forever!
    while not done:
        process_next()
```

**Correct (with timeout):**

```python
from dbos import SetWorkflowTimeout

@DBOS.workflow()
def bounded_workflow():
    while not done:
        process_next()

# Workflow must complete within 60 seconds
with SetWorkflowTimeout(60):
    bounded_workflow()

# Or with start_workflow
with SetWorkflowTimeout(60):
    handle = DBOS.start_workflow(bounded_workflow)
```

Timeout behavior:

- Timeout is **start-to-completion** (doesn't count queue wait time)
- Timeouts are **durable** (persist across restarts)
- Cancellation happens at the **beginning of the next step**
- **All child workflows** are also cancelled

With queues:

```python
queue = Queue("example_queue")

# Timeout starts when dequeued, not when enqueued
with SetWorkflowTimeout(30):
    queue.enqueue(my_workflow)
```

Timeouts work with long durations (hours, days, weeks) since they're stored in the database.

Reference: [Workflow Timeouts](https://docs.dbos.dev/python/tutorials/workflow-tutorial#workflow-timeouts)

94
skills/dbos-typescript/AGENTS.md
Normal file
@@ -0,0 +1,94 @@
# dbos-typescript

> **Note:** `CLAUDE.md` is a symlink to this file.

## Overview

DBOS TypeScript SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing TypeScript code with DBOS, creating workflows and steps, using queues, using DBOSClient from external applications, or building applications that need to be resilient to failures.

## Structure

```
dbos-typescript/
  SKILL.md      # Main skill file - read this first
  AGENTS.md     # This navigation guide
  CLAUDE.md     # Symlink to AGENTS.md
  references/   # Detailed reference files
```

## Usage

1. Read `SKILL.md` for the main skill instructions
2. Browse `references/` for detailed documentation on specific topics
3. Reference files are loaded on-demand - read only what you need

## Reference Categories

| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Lifecycle | CRITICAL | `lifecycle-` |
| 2 | Workflow | CRITICAL | `workflow-` |
| 3 | Step | HIGH | `step-` |
| 4 | Queue | HIGH | `queue-` |
| 5 | Communication | MEDIUM | `comm-` |
| 6 | Pattern | MEDIUM | `pattern-` |
| 7 | Testing | LOW-MEDIUM | `test-` |
| 8 | Client | MEDIUM | `client-` |
| 9 | Advanced | LOW | `advanced-` |

Reference files are named `{prefix}-{topic}.md` (e.g., `queue-rate-limiting.md`).

## Available References

**Advanced** (`advanced-`):
- `references/advanced-patching.md`
- `references/advanced-versioning.md`

**Client** (`client-`):
- `references/client-enqueue.md`
- `references/client-setup.md`

**Communication** (`comm-`):
- `references/comm-events.md`
- `references/comm-messages.md`
- `references/comm-streaming.md`

**Lifecycle** (`lifecycle-`):
- `references/lifecycle-config.md`
- `references/lifecycle-express.md`

**Pattern** (`pattern-`):
- `references/pattern-classes.md`
- `references/pattern-debouncing.md`
- `references/pattern-idempotency.md`
- `references/pattern-scheduled.md`
- `references/pattern-sleep.md`

**Queue** (`queue-`):
- `references/queue-basics.md`
- `references/queue-concurrency.md`
- `references/queue-deduplication.md`
- `references/queue-listening.md`
- `references/queue-partitioning.md`
- `references/queue-priority.md`
- `references/queue-rate-limiting.md`

**Step** (`step-`):
- `references/step-basics.md`
- `references/step-retries.md`
- `references/step-transactions.md`

**Testing** (`test-`):
- `references/test-setup.md`

**Workflow** (`workflow-`):
- `references/workflow-background.md`
- `references/workflow-constraints.md`
- `references/workflow-control.md`
- `references/workflow-determinism.md`
- `references/workflow-introspection.md`
- `references/workflow-timeout.md`

---

*31 reference files across 9 categories*

1
skills/dbos-typescript/CLAUDE.md
Symbolic link
@@ -0,0 +1 @@
AGENTS.md

111
skills/dbos-typescript/SKILL.md
Normal file
@@ -0,0 +1,111 @@
---
name: dbos-typescript
description: DBOS TypeScript SDK for building reliable, fault-tolerant applications with durable workflows. Use this skill when writing TypeScript code with DBOS, creating workflows and steps, using queues, using DBOSClient from external applications, or building applications that need to be resilient to failures.
risk: safe
source: https://docs.dbos.dev/
license: MIT
metadata:
  author: dbos
  version: "1.0.0"
  organization: DBOS
  date: January 2026
  abstract: Comprehensive guide for building fault-tolerant TypeScript applications with DBOS. Covers workflows, steps, queues, communication patterns, and best practices for durable execution.
---

# DBOS TypeScript Best Practices

Guide for building reliable, fault-tolerant TypeScript applications with DBOS durable workflows.

## When to Use

Reference these guidelines when:

- Adding DBOS to existing TypeScript code
- Creating workflows and steps
- Using queues for concurrency control
- Implementing workflow communication (events, messages, streams)
- Configuring and launching DBOS applications
- Using DBOSClient from external applications
- Testing DBOS applications

## Rule Categories by Priority

| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Lifecycle | CRITICAL | `lifecycle-` |
| 2 | Workflow | CRITICAL | `workflow-` |
| 3 | Step | HIGH | `step-` |
| 4 | Queue | HIGH | `queue-` |
| 5 | Communication | MEDIUM | `comm-` |
| 6 | Pattern | MEDIUM | `pattern-` |
| 7 | Testing | LOW-MEDIUM | `test-` |
| 8 | Client | MEDIUM | `client-` |
| 9 | Advanced | LOW | `advanced-` |

## Critical Rules

### Installation

Always install the latest version of DBOS:

```bash
npm install @dbos-inc/dbos-sdk@latest
```

### DBOS Configuration and Launch

A DBOS application MUST configure and launch DBOS before running any workflows:

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

async function main() {
  DBOS.setConfig({
    name: "my-app",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  await myWorkflow();
}

main().catch(console.log);
|
||||
```
|
||||
|
||||
### Workflow and Step Structure
|
||||
|
||||
Workflows are comprised of steps. Any function performing complex operations or accessing external services must be run as a step using `DBOS.runStep`:
|
||||
|
||||
```typescript
|
||||
import { DBOS } from "@dbos-inc/dbos-sdk";
|
||||
|
||||
async function fetchData() {
|
||||
return await fetch("https://api.example.com").then(r => r.json());
|
||||
}
|
||||
|
||||
async function myWorkflowFn() {
|
||||
const result = await DBOS.runStep(fetchData, { name: "fetchData" });
|
||||
return result;
|
||||
}
|
||||
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
|
||||
```
|
||||
|
||||
### Key Constraints
|
||||
|
||||
- Do NOT call, start, or enqueue workflows from within steps
|
||||
- Do NOT use threads or uncontrolled concurrency to start workflows - use `DBOS.startWorkflow` or queues
|
||||
- Workflows MUST be deterministic - non-deterministic operations go in steps
|
||||
- Do NOT modify global variables from workflows or steps
|
||||
|
||||
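The determinism constraint in practice: anything that can produce a different value on re-execution (time, randomness, network calls) belongs in a step, so its result is checkpointed once and replayed identically during recovery. A minimal sketch (the workflow and step names are illustrative, not from the DBOS docs):

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

async function coinFlipFn() {
  // Incorrect: calling Math.random() directly in the workflow body would
  // make it non-deterministic and break recovery.
  // Correct: run it as a step so the value is recorded and reused on replay.
  const roll = await DBOS.runStep(async () => Math.random(), { name: "roll" });
  return roll < 0.5 ? "heads" : "tails";
}
const coinFlip = DBOS.registerWorkflow(coinFlipFn);
```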
## How to Use

Read individual rule files for detailed explanations and examples:

```
references/lifecycle-config.md
references/workflow-determinism.md
references/queue-concurrency.md
```

## References

- https://docs.dbos.dev/
- https://github.com/dbos-inc/dbos-transact-ts
41
skills/dbos-typescript/references/_sections.md
Normal file
@@ -0,0 +1,41 @@
# Section Definitions

This file defines the rule categories for DBOS TypeScript best practices. Rules are automatically assigned to sections based on their filename prefix.

---

## 1. Lifecycle (lifecycle)

**Impact:** CRITICAL
**Description:** DBOS configuration, initialization, and launch patterns. Foundation for all DBOS applications.

## 2. Workflow (workflow)

**Impact:** CRITICAL
**Description:** Workflow creation, determinism requirements, background execution, and workflow IDs.

## 3. Step (step)

**Impact:** HIGH
**Description:** Step creation, retries, transactions via datasources, and when to use steps vs workflows.

## 4. Queue (queue)

**Impact:** HIGH
**Description:** WorkflowQueue creation, concurrency limits, rate limiting, partitioning, and priority.

## 5. Communication (comm)

**Impact:** MEDIUM
**Description:** Workflow events, messages, and streaming for inter-workflow communication.

## 6. Pattern (pattern)

**Impact:** MEDIUM
**Description:** Common patterns including idempotency, scheduled workflows, debouncing, and class instances.

## 7. Testing (test)

**Impact:** LOW-MEDIUM
**Description:** Testing DBOS applications with Jest, mocking, and integration test setup.

## 8. Client (client)

**Impact:** MEDIUM
**Description:** DBOSClient for interacting with DBOS from external applications.

## 9. Advanced (advanced)

**Impact:** LOW
**Description:** Workflow versioning, patching, and safe code upgrades.
72
skills/dbos-typescript/references/advanced-patching.md
Normal file
@@ -0,0 +1,72 @@
---
title: Use Patching for Safe Workflow Upgrades
impact: LOW
impactDescription: Safely deploy breaking workflow changes without disrupting in-progress workflows
tags: advanced, patching, upgrade, breaking-change
---

## Use Patching for Safe Workflow Upgrades

Use `DBOS.patch()` to safely deploy breaking changes to workflow code. Breaking changes alter which steps run or their order, which can cause recovery failures.

**Incorrect (breaking change without patching):**

```typescript
// BEFORE: original workflow
async function workflowFn() {
  await foo();
  await bar();
}
const workflow = DBOS.registerWorkflow(workflowFn);

// AFTER: breaking change - recovery will fail for in-progress workflows!
async function workflowFn() {
  await baz(); // Changed step
  await bar();
}
const workflow = DBOS.registerWorkflow(workflowFn);
```

**Correct (using patch):**

```typescript
async function workflowFn() {
  if (await DBOS.patch("use-baz")) {
    await baz(); // New workflows run this
  } else {
    await foo(); // Old workflows continue with original code
  }
  await bar();
}
const workflow = DBOS.registerWorkflow(workflowFn);
```

`DBOS.patch()` returns `true` for new workflows and `false` for workflows that started before the patch.

**Deprecating patches (after all old workflows complete):**

```typescript
async function workflowFn() {
  if (await DBOS.deprecatePatch("use-baz")) { // Always returns true
    await baz();
  }
  await bar();
}
const workflow = DBOS.registerWorkflow(workflowFn);
```

**Removing patches (after all workflows using deprecatePatch complete):**

```typescript
async function workflowFn() {
  await baz();
  await bar();
}
const workflow = DBOS.registerWorkflow(workflowFn);
```

Lifecycle: `patch()` → deploy → wait for old workflows → `deprecatePatch()` → deploy → wait → remove patch entirely.

Use `DBOS.listWorkflows` to check for active old workflows before deprecating or removing patches.
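One way to sketch that check, under the assumption that filtering `DBOS.listWorkflows` by `status` and a creation-time upper bound (`endTime`) fits your deployment; the timestamp and patch name are hypothetical:

```typescript
// Workflows created before the patch deployed and still running
const patchDeployedAt = new Date("2026-01-15T00:00:00Z"); // hypothetical deploy time
const inFlight = await DBOS.listWorkflows({
  status: "PENDING",
  endTime: patchDeployedAt.toISOString(), // only workflows started before the deploy
});
if (inFlight.length === 0) {
  // Safe to replace patch("use-baz") with deprecatePatch("use-baz")
}
```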
Reference: [Patching](https://docs.dbos.dev/typescript/tutorials/upgrading-workflows#patching)
61
skills/dbos-typescript/references/advanced-versioning.md
Normal file
@@ -0,0 +1,61 @@
---
title: Use Versioning for Blue-Green Deployments
impact: LOW
impactDescription: Enables safe deployment of new code versions alongside old ones
tags: advanced, versioning, blue-green, deployment
---

## Use Versioning for Blue-Green Deployments

Set `applicationVersion` in configuration to tag workflows with a version. DBOS only recovers workflows matching the current application version, preventing code mismatches during recovery.

**Incorrect (deploying new code that breaks in-progress workflows):**

```typescript
DBOS.setConfig({
  name: "my-app",
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  // No version set - all workflows recovered regardless of code version
});
```

**Correct (versioned deployment):**

```typescript
DBOS.setConfig({
  name: "my-app",
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  applicationVersion: "2.0.0",
});
```

By default, the application version is automatically computed from a hash of workflow source code. Set it explicitly for more control.

**Blue-green deployment strategy:**

1. Deploy new version (v2) alongside old version (v1)
2. Direct new traffic to v2 processes
3. Let v1 processes "drain" (complete in-progress workflows)
4. Check for remaining v1 workflows:

```typescript
const oldWorkflows = await DBOS.listWorkflows({
  applicationVersion: "1.0.0",
  status: "PENDING",
});
```

5. Once all v1 workflows are complete, retire v1 processes

**Fork to new version (for stuck workflows):**

```typescript
// Fork a workflow from a failed step to run on the new version
const handle = await DBOS.forkWorkflow<string>(
  workflowID,
  failedStepID,
  { applicationVersion: "2.0.0" }
);
```

Reference: [Versioning](https://docs.dbos.dev/typescript/tutorials/upgrading-workflows#versioning)
75
skills/dbos-typescript/references/client-enqueue.md
Normal file
@@ -0,0 +1,75 @@
---
title: Enqueue Workflows from External Applications
impact: MEDIUM
impactDescription: Enables external services to submit work to DBOS queues
tags: client, enqueue, external, queue
---

## Enqueue Workflows from External Applications

Use `client.enqueue()` to submit workflows from outside your DBOS application. Since `DBOSClient` runs externally, workflow and queue metadata must be specified explicitly.

**Incorrect (trying to use DBOS.startWorkflow from external code):**

```typescript
// DBOS.startWorkflow requires a full DBOS setup
await DBOS.startWorkflow(processTask, { queueName: "myQueue" })("data");
```

**Correct (using DBOSClient.enqueue):**

```typescript
import { DBOSClient } from "@dbos-inc/dbos-sdk";

const client = await DBOSClient.create({
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
});

// Basic enqueue
const handle = await client.enqueue(
  {
    workflowName: "processTask",
    queueName: "task_queue",
  },
  "task-data"
);

// Wait for the result
const result = await handle.getResult();
```

**Type-safe enqueue:**

```typescript
// Import or declare the workflow type
declare class Tasks {
  static processTask(data: string): Promise<string>;
}

const handle = await client.enqueue<typeof Tasks.processTask>(
  {
    workflowName: "processTask",
    workflowClassName: "Tasks",
    queueName: "task_queue",
  },
  "task-data"
);

// TypeScript infers the result type
const result = await handle.getResult(); // type: string
```

**Enqueue options:**

- `workflowName` (required): Name of the workflow function
- `queueName` (required): Name of the queue
- `workflowClassName`: Class name if the workflow is a class method
- `workflowConfigName`: Instance name if using `ConfiguredInstance`
- `workflowID`: Custom workflow ID
- `workflowTimeoutMS`: Timeout in milliseconds
- `deduplicationID`: Prevent duplicate enqueues
- `priority`: Queue priority (lower = higher priority)
- `queuePartitionKey`: Partition key for partitioned queues
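A sketch combining several of these options in one call, continuing the `client` created above (the IDs and values are hypothetical):

```typescript
const handle = await client.enqueue(
  {
    workflowName: "processTask",
    queueName: "task_queue",
    workflowID: "task-order-123", // idempotent: re-enqueueing reuses this workflow
    deduplicationID: "order-123", // rejects duplicates while one is enqueued
    priority: 1,                  // lower number = higher priority
    workflowTimeoutMS: 60000,     // give up after one minute
  },
  "task-data"
);
```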
Always call `client.destroy()` when done.

Reference: [DBOS Client Enqueue](https://docs.dbos.dev/typescript/reference/client#enqueue)
60
skills/dbos-typescript/references/client-setup.md
Normal file
@@ -0,0 +1,60 @@
---
title: Initialize DBOSClient for External Access
impact: MEDIUM
impactDescription: Enables external applications to interact with DBOS workflows
tags: client, external, setup, initialization
---

## Initialize DBOSClient for External Access

Use `DBOSClient` to interact with DBOS from external applications like API servers, CLI tools, or separate services. `DBOSClient` connects directly to the DBOS system database.

**Incorrect (using DBOS directly from an external app):**

```typescript
// DBOS requires full setup with launch() - too heavy for external clients
DBOS.setConfig({ ... });
await DBOS.launch();
```

**Correct (using DBOSClient):**

```typescript
import { DBOSClient } from "@dbos-inc/dbos-sdk";

const client = await DBOSClient.create({
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
});

// Send a message to a workflow
await client.send(workflowID, "notification", "topic");

// Get an event from a workflow
const event = await client.getEvent<string>(workflowID, "status");

// Read a stream from a workflow
for await (const value of client.readStream(workflowID, "results")) {
  console.log(value);
}

// Retrieve a workflow handle
const handle = client.retrieveWorkflow<string>(workflowID);
const result = await handle.getResult();

// List workflows
const workflows = await client.listWorkflows({ status: "ERROR" });

// Workflow management
await client.cancelWorkflow(workflowID);
await client.resumeWorkflow(workflowID);

// Always destroy when done
await client.destroy();
```

Constructor options:

- `systemDatabaseUrl`: Connection string to the Postgres system database (required)
- `systemDatabasePool`: Optional custom `node-postgres` connection pool
- `serializer`: Optional custom serializer (must match the DBOS application's serializer)
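A sketch of supplying a custom pool, assuming the node-postgres (`pg`) package; the pool sizing is illustrative:

```typescript
import { Pool } from "pg";
import { DBOSClient } from "@dbos-inc/dbos-sdk";

const pool = new Pool({
  connectionString: process.env.DBOS_SYSTEM_DATABASE_URL,
  max: 5, // keep the external client's connection footprint small
});
const client = await DBOSClient.create({ systemDatabasePool: pool });
```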
Reference: [DBOS Client](https://docs.dbos.dev/typescript/reference/client)
57
skills/dbos-typescript/references/comm-events.md
Normal file
@@ -0,0 +1,57 @@
---
title: Use Events for Workflow Status Publishing
impact: MEDIUM
impactDescription: Enables real-time progress monitoring and interactive workflows
tags: communication, events, status, key-value
---

## Use Events for Workflow Status Publishing

Workflows can publish events (key-value pairs) with `DBOS.setEvent`. Other code can read events with `DBOS.getEvent`. Events are persisted and useful for real-time progress monitoring.

**Incorrect (using external state for progress):**

```typescript
let progress = 0; // Global variable - not durable!

async function processDataFn() {
  progress = 50; // Not persisted, lost on restart
}
const processData = DBOS.registerWorkflow(processDataFn);
```

**Correct (using events):**

```typescript
async function processDataFn() {
  await DBOS.setEvent("status", "processing");
  await DBOS.runStep(stepOne, { name: "stepOne" });
  await DBOS.setEvent("progress", 50);
  await DBOS.runStep(stepTwo, { name: "stepTwo" });
  await DBOS.setEvent("progress", 100);
  await DBOS.setEvent("status", "complete");
}
const processData = DBOS.registerWorkflow(processDataFn);

// Read events from outside the workflow
const status = await DBOS.getEvent<string>(workflowID, "status", 0);
const progress = await DBOS.getEvent<number>(workflowID, "progress", 0);
// Returns null if the event doesn't exist within the timeout (default 60s)
```

Events are useful for interactive workflows. For example, a checkout workflow can publish a payment URL for the caller to redirect to:

```typescript
async function checkoutWorkflowFn() {
  const paymentURL = await DBOS.runStep(createPayment, { name: "createPayment" });
  await DBOS.setEvent("paymentURL", paymentURL);
  // Continue processing...
}
const checkoutWorkflow = DBOS.registerWorkflow(checkoutWorkflowFn);

// HTTP handler starts workflow and reads the payment URL
const handle = await DBOS.startWorkflow(checkoutWorkflow)();
const url = await DBOS.getEvent<string>(handle.workflowID, "paymentURL", 300);
```

Reference: [Workflow Events](https://docs.dbos.dev/typescript/tutorials/workflow-communication#workflow-events)
55
skills/dbos-typescript/references/comm-messages.md
Normal file
@@ -0,0 +1,55 @@
---
title: Use Messages for Workflow Notifications
impact: MEDIUM
impactDescription: Enables reliable inter-workflow and external-to-workflow communication
tags: communication, messages, send, recv, notification
---

## Use Messages for Workflow Notifications

Use `DBOS.send` to send messages to a workflow and `DBOS.recv` to receive them. Messages are queued per topic and persisted for reliable delivery.

**Incorrect (using external messaging for workflow communication):**

```typescript
// External message queue is not integrated with workflow recovery
import { Queue } from "some-external-queue";
```

**Correct (using DBOS messages):**

```typescript
async function checkoutWorkflowFn() {
  // Wait for payment notification (timeout 120 seconds)
  const notification = await DBOS.recv<string>("payment_status", 120);

  if (notification === "paid") {
    await DBOS.runStep(fulfillOrder, { name: "fulfillOrder" });
  } else {
    await DBOS.runStep(cancelOrder, { name: "cancelOrder" });
  }
}
const checkoutWorkflow = DBOS.registerWorkflow(checkoutWorkflowFn);

// Send a message from a webhook handler
async function paymentWebhook(workflowID: string, status: string) {
  await DBOS.send(workflowID, status, "payment_status");
}
```

Key behaviors:

- `recv` waits for and consumes the next message for the specified topic
- Returns `null` if the wait times out (default timeout: 60 seconds)
- Messages without a topic can only be received by `recv` without a topic
- Messages are queued per-topic (FIFO)

**Reliability guarantees:**

- All messages are persisted to the database
- Messages sent from workflows are delivered exactly-once
- Messages sent from non-workflow code can use an idempotency key:

```typescript
await DBOS.send(workflowID, message, "topic", "idempotency-key-123");
```

Reference: [Workflow Messaging](https://docs.dbos.dev/typescript/tutorials/workflow-communication#workflow-messaging-and-notifications)
53
skills/dbos-typescript/references/comm-streaming.md
Normal file
@@ -0,0 +1,53 @@
---
title: Use Streams for Real-Time Data
impact: MEDIUM
impactDescription: Enables streaming results from long-running workflows
tags: communication, stream, real-time, async-generator
---

## Use Streams for Real-Time Data

Workflows can stream data to clients in real-time using `DBOS.writeStream`, `DBOS.closeStream`, and `DBOS.readStream`. Useful for LLM output streaming or progress reporting.

**Incorrect (accumulating results, then returning at the end):**

```typescript
async function processWorkflowFn() {
  const results: string[] = [];
  for (const chunk of data) {
    results.push(await processChunk(chunk));
  }
  return results; // Client must wait for entire workflow to complete
}
```

**Correct (streaming results as they become available):**

```typescript
async function processWorkflowFn() {
  for (const chunk of data) {
    const result = await DBOS.runStep(() => processChunk(chunk), { name: "process" });
    await DBOS.writeStream("results", result);
  }
  await DBOS.closeStream("results"); // Signal completion
}
const processWorkflow = DBOS.registerWorkflow(processWorkflowFn);

// Read the stream from outside
const handle = await DBOS.startWorkflow(processWorkflow)();
for await (const value of DBOS.readStream<string>(handle.workflowID, "results")) {
  console.log(`Received: ${value}`);
}
```

Key behaviors:

- A workflow may have any number of streams, each identified by a unique key
- Streams are immutable and append-only
- Writes from workflows happen exactly-once
- Writes from steps happen at-least-once (retried steps may write duplicates)
- Streams are automatically closed when the workflow terminates
- `readStream` returns an async generator that yields values until the stream is closed

You can also read streams from outside the DBOS application using `DBOSClient.readStream`.

Reference: [Workflow Streaming](https://docs.dbos.dev/typescript/tutorials/workflow-communication#workflow-streaming)
47
skills/dbos-typescript/references/lifecycle-config.md
Normal file
@@ -0,0 +1,47 @@
---
title: Configure and Launch DBOS Properly
impact: CRITICAL
impactDescription: Application won't function without proper setup
tags: configuration, launch, setup, initialization
---

## Configure and Launch DBOS Properly

Every DBOS application must configure and launch DBOS before running any workflows. All workflows and steps must be registered before calling `DBOS.launch()`.

**Incorrect (missing configuration or launch):**

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

// No configuration or launch!
async function myWorkflowFn() {
  // This will fail - DBOS is not launched
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
await myWorkflow();
```

**Correct (configure and launch in main):**

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

async function myWorkflowFn() {
  // workflow logic
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);

async function main() {
  DBOS.setConfig({
    name: "my-app",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  await myWorkflow();
}

main().catch(console.log);
```

Reference: [DBOS Lifecycle](https://docs.dbos.dev/typescript/reference/dbos-class)
61
skills/dbos-typescript/references/lifecycle-express.md
Normal file
@@ -0,0 +1,61 @@
---
title: Integrate DBOS with Express
impact: CRITICAL
impactDescription: Proper integration ensures workflows survive server restarts
tags: express, http, integration, server
---

## Integrate DBOS with Express

Configure and launch DBOS before starting your Express server. Register all workflows and steps before calling `DBOS.launch()`.

**Incorrect (DBOS not launched before server starts):**

```typescript
import express from "express";
import { DBOS } from "@dbos-inc/dbos-sdk";

const app = express();

async function processTaskFn(data: string) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFn);

// Server starts without launching DBOS!
app.listen(3000);
```

**Correct (launch DBOS first, then start Express):**

```typescript
import express from "express";
import { DBOS } from "@dbos-inc/dbos-sdk";

const app = express();

async function processTaskFn(data: string) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFn);

app.post("/process", async (req, res) => {
  const handle = await DBOS.startWorkflow(processTask)(req.body.data);
  res.json({ workflowID: handle.workflowID });
});

async function main() {
  DBOS.setConfig({
    name: "my-app",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  app.listen(3000, () => {
    console.log("Server running on port 3000");
  });
}

main().catch(console.log);
```

Reference: [Integrating DBOS](https://docs.dbos.dev/typescript/integrating-dbos)
67
skills/dbos-typescript/references/pattern-classes.md
Normal file
@@ -0,0 +1,67 @@
---
title: Use DBOS with Class Instances
impact: MEDIUM
impactDescription: Enables configurable workflow instances with recovery support
tags: pattern, class, instance, ConfiguredInstance
---

## Use DBOS with Class Instances

Class instance methods can be workflows and steps. Classes with workflow methods must extend `ConfiguredInstance` to enable recovery.

**Incorrect (instance workflows without ConfiguredInstance):**

```typescript
class MyWorker {
  constructor(private config: any) {}

  @DBOS.workflow()
  async processTask(task: string) {
    // Recovery won't work - DBOS can't find the instance after restart
  }
}
```

**Correct (extending ConfiguredInstance):**

```typescript
import { DBOS, ConfiguredInstance } from "@dbos-inc/dbos-sdk";

class MyWorker extends ConfiguredInstance {
  cfg: WorkerConfig;

  constructor(name: string, config: WorkerConfig) {
    super(name); // Unique name required for recovery
    this.cfg = config;
  }

  override async initialize(): Promise<void> {
    // Optional: validate config at DBOS.launch() time
  }

  @DBOS.workflow()
  async processTask(task: string): Promise<void> {
    // Can use this.cfg safely - instance is recoverable
    const result = await DBOS.runStep(
      () => fetch(this.cfg.apiUrl).then(r => r.text()),
      { name: "callApi" }
    );
  }
}

// Create instances BEFORE DBOS.launch()
const worker1 = new MyWorker("worker-us", { apiUrl: "https://us.api.com" });
const worker2 = new MyWorker("worker-eu", { apiUrl: "https://eu.api.com" });

// Then launch
await DBOS.launch();
```

Key requirements:

- `ConfiguredInstance` constructor requires a unique `name` per class
- All instances must be created **before** `DBOS.launch()`
- The `initialize()` method is called during launch for validation
- Use `DBOS.runStep` inside instance workflows for step operations
- Event registration decorators like `@DBOS.scheduled` cannot be applied to instance methods

Reference: [Using TypeScript Objects](https://docs.dbos.dev/typescript/tutorials/instantiated-objects)
56
skills/dbos-typescript/references/pattern-debouncing.md
Normal file
@@ -0,0 +1,56 @@
---
title: Debounce Workflows to Prevent Wasted Work
impact: MEDIUM
impactDescription: Prevents redundant workflow executions during rapid triggers
tags: pattern, debounce, delay, efficiency
---

## Debounce Workflows to Prevent Wasted Work

Use `Debouncer` to delay workflow execution until some time has passed since the last trigger. This prevents wasted work when a workflow is triggered multiple times in quick succession.

**Incorrect (executing on every trigger):**

```typescript
async function processInputFn(userInput: string) {
  // Expensive processing
}
const processInput = DBOS.registerWorkflow(processInputFn);

// Every keystroke triggers a new workflow - wasteful!
async function onInputChange(userInput: string) {
  await processInput(userInput);
}
```

**Correct (using Debouncer):**

```typescript
import { DBOS, Debouncer } from "@dbos-inc/dbos-sdk";

async function processInputFn(userInput: string) {
  // Expensive processing
}
const processInput = DBOS.registerWorkflow(processInputFn);

const debouncer = new Debouncer({
  workflow: processInput,
  debounceTimeoutMs: 120000, // Max wait: 2 minutes
});

async function onInputChange(userId: string, userInput: string) {
  // Delays execution by 60 seconds from the last call
  // Uses the LAST set of inputs when finally executing
  await debouncer.debounce(userId, 60000, userInput);
}
```

Key behaviors:

- `debounceKey` groups executions that are debounced together (e.g., per user)
- `debouncePeriodMs` delays execution by this amount from the last call
- `debounceTimeoutMs` sets a max wait time since the first trigger
- When the workflow finally executes, it uses the **last** set of inputs
- After execution begins, the next `debounce` call starts a new cycle
- Workflows from `ConfiguredInstance` classes cannot be debounced

Reference: [Debouncing Workflows](https://docs.dbos.dev/typescript/tutorials/workflow-tutorial#debouncing-workflows)
53
skills/dbos-typescript/references/pattern-idempotency.md
Normal file
@@ -0,0 +1,53 @@
---
title: Use Workflow IDs for Idempotency
impact: MEDIUM
impactDescription: Prevents duplicate side effects like double payments
tags: pattern, idempotency, workflow-id, deduplication
---

## Use Workflow IDs for Idempotency

Assign a workflow ID to ensure a workflow executes only once, even if called multiple times. This prevents duplicate side effects like double payments.

**Incorrect (no idempotency):**

```typescript
async function processPaymentFn(orderId: string, amount: number) {
  await DBOS.runStep(() => chargeCard(amount), { name: "chargeCard" });
  await DBOS.runStep(() => updateOrder(orderId), { name: "updateOrder" });
}
const processPayment = DBOS.registerWorkflow(processPaymentFn);

// Multiple calls could charge the card multiple times!
await processPayment("order-123", 50);
await processPayment("order-123", 50); // Double charge!
```

**Correct (with workflow ID):**

```typescript
async function processPaymentFn(orderId: string, amount: number) {
  await DBOS.runStep(() => chargeCard(amount), { name: "chargeCard" });
  await DBOS.runStep(() => updateOrder(orderId), { name: "updateOrder" });
}
const processPayment = DBOS.registerWorkflow(processPaymentFn);

// Same workflow ID = only one execution
const orderId = "order-123";
const workflowID = `payment-${orderId}`;
await DBOS.startWorkflow(processPayment, { workflowID })(orderId, 50);
await DBOS.startWorkflow(processPayment, { workflowID })(orderId, 50);
// Second call returns the result of the first execution
```

Access the current workflow ID inside a workflow:

```typescript
async function myWorkflowFn() {
  const currentID = DBOS.workflowID;
  console.log(`Running workflow: ${currentID}`);
}
```

Workflow IDs must be **globally unique** for your application. If not set, a random UUID is generated.
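A common convention is to derive the workflow ID deterministically from a business key, so that retries of the same logical operation collide on the same ID. A minimal illustrative helper (the name and scheme here are assumptions, not part of the DBOS API):

```typescript
// Hypothetical helper: build a deterministic workflow ID from a business key.
// Every retry for the same order produces the same ID, so DBOS runs it once.
function paymentWorkflowId(orderId: string): string {
  return `payment-${orderId}`;
}
```

Passing `paymentWorkflowId(order.id)` as the `workflowID` makes the payment idempotent per order.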

Reference: [Workflow IDs and Idempotency](https://docs.dbos.dev/typescript/tutorials/workflow-tutorial#workflow-ids-and-idempotency)
69
skills/dbos-typescript/references/pattern-scheduled.md
Normal file
@@ -0,0 +1,69 @@
---
title: Create Scheduled Workflows
impact: MEDIUM
impactDescription: Enables recurring tasks with exactly-once-per-interval guarantees
tags: pattern, scheduled, cron, recurring
---

## Create Scheduled Workflows

Use `DBOS.registerScheduled` to run workflows on a cron schedule. Each scheduled invocation runs exactly once per interval.

**Incorrect (manual scheduling with setInterval):**

```typescript
// Manual scheduling is not durable and misses intervals during downtime
setInterval(async () => {
  await generateReport();
}, 60000);
```

**Correct (using DBOS.registerScheduled):**

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

async function everyThirtySecondsFn(scheduledTime: Date, actualTime: Date) {
  DBOS.logger.info("Running scheduled task");
}
const everyThirtySeconds = DBOS.registerWorkflow(everyThirtySecondsFn);
DBOS.registerScheduled(everyThirtySeconds, { crontab: "*/30 * * * * *" });

async function dailyReportFn(scheduledTime: Date, actualTime: Date) {
  await DBOS.runStep(generateReport, { name: "generateReport" });
}
const dailyReport = DBOS.registerWorkflow(dailyReportFn);
DBOS.registerScheduled(dailyReport, { crontab: "0 9 * * *" });
```

Scheduled workflows must accept exactly two parameters: `scheduledTime` (Date) and `actualTime` (Date).

DBOS crontab supports 5 or 6 fields (optional seconds):
```text
┌────────────── second (optional)
│ ┌──────────── minute
│ │ ┌────────── hour
│ │ │ ┌──────── day of month
│ │ │ │ ┌────── month
│ │ │ │ │ ┌──── day of week
* * * * * *
```
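To make the field layout concrete, here is a small standalone parser for the 5- or 6-field form (illustrative only — it assumes a missing seconds field means "at second 0", and it is not the SDK's parser):

```typescript
interface CronFields {
  second: string;
  minute: string;
  hour: string;
  dayOfMonth: string;
  month: string;
  dayOfWeek: string;
}

// Split a crontab expression into named fields.
function parseCrontab(expr: string): CronFields {
  const fields = expr.trim().split(/\s+/);
  if (fields.length === 5) fields.unshift("0"); // assumed default: fire at second 0
  if (fields.length !== 6) throw new Error(`expected 5 or 6 fields, got ${fields.length}`);
  const [second, minute, hour, dayOfMonth, month, dayOfWeek] = fields;
  return { second, minute, hour, dayOfMonth, month, dayOfWeek };
}
```

For instance, `parseCrontab("0 9 * * *")` yields `hour: "9"`, matching the daily-report schedule above.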

Retroactive execution (for missed intervals):

```typescript
import { DBOS, SchedulerMode } from "@dbos-inc/dbos-sdk";

async function fridayNightJobFn(scheduledTime: Date, actualTime: Date) {
  // Runs even if the app was offline during the scheduled time
}
const fridayNightJob = DBOS.registerWorkflow(fridayNightJobFn);
DBOS.registerScheduled(fridayNightJob, {
  crontab: "0 21 * * 5",
  mode: SchedulerMode.ExactlyOncePerInterval,
});
```

Scheduled workflows cannot be applied to instance methods.

Reference: [Scheduled Workflows](https://docs.dbos.dev/typescript/tutorials/scheduled-workflows)
59
skills/dbos-typescript/references/pattern-sleep.md
Normal file
@@ -0,0 +1,59 @@
---
title: Use Durable Sleep for Delayed Execution
impact: MEDIUM
impactDescription: Enables reliable scheduling across restarts
tags: pattern, sleep, delay, durable, schedule
---

## Use Durable Sleep for Delayed Execution

Use `DBOS.sleep()` for durable delays within workflows. The wakeup time is stored in the database, so the sleep survives restarts.

**Incorrect (non-durable sleep):**

```typescript
async function delayedTaskFn() {
  // setTimeout is not durable - lost on restart!
  await new Promise(r => setTimeout(r, 60000));
  await DBOS.runStep(doWork, { name: "doWork" });
}
const delayedTask = DBOS.registerWorkflow(delayedTaskFn);
```

**Correct (durable sleep):**

```typescript
async function delayedTaskFn() {
  // Durable sleep - survives restarts
  await DBOS.sleep(60000); // 60 seconds in milliseconds
  await DBOS.runStep(doWork, { name: "doWork" });
}
const delayedTask = DBOS.registerWorkflow(delayedTaskFn);
```

`DBOS.sleep()` takes milliseconds (unlike Python, which takes seconds).
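The durability comes from checkpointing an absolute wakeup time rather than a remaining duration, so that after a restart only the remainder is slept. A minimal sketch of that idea (illustrative, not the SDK's implementation):

```typescript
// Given a checkpointed absolute wakeup time, compute how long is left to
// sleep after a restart. If the deadline already passed, resume immediately.
function remainingSleepMs(checkpointedWakeupMs: number, nowMs: number): number {
  return Math.max(0, checkpointedWakeupMs - nowMs);
}
```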

Use cases:
- Scheduling tasks to run in the future
- Implementing retry delays
- Delays spanning hours, days, or weeks

```typescript
async function scheduledTaskFn(task: string) {
  // Sleep for one week
  await DBOS.sleep(7 * 24 * 60 * 60 * 1000);
  await processTask(task);
}
```

For getting the current time durably, use `DBOS.now()`:

```typescript
async function myWorkflowFn() {
  const now = await DBOS.now(); // Checkpointed as a step
  // For random UUIDs:
  const id = await DBOS.randomUUID(); // Checkpointed as a step
}
```

Reference: [Durable Sleep](https://docs.dbos.dev/typescript/tutorials/workflow-tutorial#durable-sleep)
59
skills/dbos-typescript/references/queue-basics.md
Normal file
@@ -0,0 +1,59 @@
---
title: Use Queues for Concurrent Workflows
impact: HIGH
impactDescription: Queues provide managed concurrency and flow control
tags: queue, concurrency, enqueue, workflow
---

## Use Queues for Concurrent Workflows

Queues run many workflows concurrently with managed flow control. Use them when you need to control how many workflows run at once.

**Incorrect (uncontrolled concurrency):**

```typescript
async function processTaskFn(task: string) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFn);

// Starting many workflows without control - could overwhelm resources
for (const task of tasks) {
  await DBOS.startWorkflow(processTask)(task);
}
```

**Correct (using a queue):**

```typescript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const queue = new WorkflowQueue("task_queue");

async function processTaskFn(task: string) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFn);

async function processAllTasksFn(tasks: string[]) {
  const handles = [];
  for (const task of tasks) {
    // Enqueue by passing queueName to startWorkflow
    const handle = await DBOS.startWorkflow(processTask, {
      queueName: queue.name,
    })(task);
    handles.push(handle);
  }
  // Wait for all tasks
  const results = [];
  for (const h of handles) {
    results.push(await h.getResult());
  }
  return results;
}
const processAllTasks = DBOS.registerWorkflow(processAllTasksFn);
```

Queues process workflows in FIFO order. All queues should be created before `DBOS.launch()`.

Reference: [DBOS Queues](https://docs.dbos.dev/typescript/tutorials/queue-tutorial)
53
skills/dbos-typescript/references/queue-concurrency.md
Normal file
@@ -0,0 +1,53 @@
---
title: Control Queue Concurrency
impact: HIGH
impactDescription: Prevents resource exhaustion with concurrent limits
tags: queue, concurrency, workerConcurrency, limits
---

## Control Queue Concurrency

Queues support worker-level and global concurrency limits to prevent resource exhaustion.

**Incorrect (no concurrency control):**

```typescript
const queue = new WorkflowQueue("heavy_tasks"); // No limits - could exhaust memory
```

**Correct (worker concurrency):**

```typescript
// Each process runs at most 5 tasks from this queue
const queue = new WorkflowQueue("heavy_tasks", { workerConcurrency: 5 });
```

**Correct (global concurrency):**

```typescript
// At most 10 tasks run across ALL processes
const queue = new WorkflowQueue("limited_tasks", { concurrency: 10 });
```

**In-order processing (sequential):**

```typescript
// Only one task at a time - guarantees order
const serialQueue = new WorkflowQueue("sequential_queue", { concurrency: 1 });

async function processEventFn(event: string) {
  // ...
}
const processEvent = DBOS.registerWorkflow(processEventFn);

app.post("/events", async (req, res) => {
  await DBOS.startWorkflow(processEvent, { queueName: serialQueue.name })(req.body.event);
  res.send("Queued!");
});
```

Worker concurrency is recommended for most use cases. Take care with global concurrency: any `PENDING` workflow on the queue counts toward the limit, including workflows from previous application versions.

When using worker concurrency, each process must have a unique `executorID` set in configuration (this is automatic with DBOS Conductor or DBOS Cloud).

Reference: [Managing Concurrency](https://docs.dbos.dev/typescript/tutorials/queue-tutorial#managing-concurrency)
51
skills/dbos-typescript/references/queue-deduplication.md
Normal file
@@ -0,0 +1,51 @@
---
title: Deduplicate Queued Workflows
impact: HIGH
impactDescription: Prevents duplicate workflow executions
tags: queue, deduplication, idempotent, duplicate
---

## Deduplicate Queued Workflows

Set a deduplication ID when enqueuing to prevent duplicate workflow executions. If a workflow with the same deduplication ID is already enqueued or executing, a `DBOSQueueDuplicatedError` is thrown.

**Incorrect (no deduplication):**

```typescript
// Multiple clicks could enqueue duplicates
async function handleClick(userId: string) {
  await DBOS.startWorkflow(processTask, { queueName: queue.name })("task");
}
```

**Correct (with deduplication):**

```typescript
const queue = new WorkflowQueue("task_queue");

async function processTaskFn(task: string) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFn);

async function handleClick(userId: string) {
  try {
    await DBOS.startWorkflow(processTask, {
      queueName: queue.name,
      enqueueOptions: { deduplicationID: userId },
    })("task");
  } catch (e) {
    // DBOSQueueDuplicatedError - workflow already active for this user
    console.log("Task already in progress for user:", userId);
  }
}
```

Deduplication is per-queue. The deduplication ID is active while the workflow has status `ENQUEUED` or `PENDING`. Once the workflow completes, a new workflow with the same deduplication ID can be enqueued.
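This lifecycle can be pictured as a per-queue registry in which an ID is held while its workflow is `ENQUEUED` or `PENDING` and released on completion. A simulated sketch of that bookkeeping (illustrative, not DBOS internals):

```typescript
// Simulated per-queue deduplication registry.
class DedupRegistry {
  private active = new Set<string>();

  // Throws if a workflow with this deduplication ID is already active.
  enqueue(dedupId: string): void {
    if (this.active.has(dedupId)) {
      throw new Error("duplicate: workflow already ENQUEUED or PENDING");
    }
    this.active.add(dedupId);
  }

  // Once the workflow completes, the ID becomes reusable.
  complete(dedupId: string): void {
    this.active.delete(dedupId);
  }
}
```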

This is useful for:
- Ensuring one active task per user
- Preventing duplicate form submissions
- Idempotent event processing

Reference: [Deduplication](https://docs.dbos.dev/typescript/tutorials/queue-tutorial#deduplication)
63
skills/dbos-typescript/references/queue-listening.md
Normal file
@@ -0,0 +1,63 @@
---
title: Control Which Queues a Worker Listens To
impact: HIGH
impactDescription: Enables heterogeneous worker pools
tags: queue, listen, worker, process, configuration
---

## Control Which Queues a Worker Listens To

Configure `listenQueues` in the DBOS configuration to make a process dequeue only from specific queues. This enables heterogeneous worker pools.

**Incorrect (all workers process all queues):**

```typescript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const cpuQueue = new WorkflowQueue("cpu_queue");
const gpuQueue = new WorkflowQueue("gpu_queue");

// Every worker processes both CPU and GPU tasks
// GPU tasks on CPU workers will fail or be slow!
DBOS.setConfig({
  name: "my-app",
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
});
await DBOS.launch();
```

**Correct (selective queue listening):**

```typescript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const cpuQueue = new WorkflowQueue("cpu_queue");
const gpuQueue = new WorkflowQueue("gpu_queue");

async function main() {
  const workerType = process.env.WORKER_TYPE; // "cpu" or "gpu"

  const config: any = {
    name: "my-app",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  };

  if (workerType === "gpu") {
    config.listenQueues = [gpuQueue];
  } else if (workerType === "cpu") {
    config.listenQueues = [cpuQueue];
  }

  DBOS.setConfig(config);
  await DBOS.launch();
}
```

`listenQueues` only controls dequeuing. A CPU worker can still enqueue tasks onto the GPU queue:

```typescript
// From a CPU worker, enqueue onto the GPU queue
await DBOS.startWorkflow(gpuTask, { queueName: gpuQueue.name })("data");
```

Reference: [Explicit Queue Listening](https://docs.dbos.dev/typescript/tutorials/queue-tutorial#explicit-queue-listening)
63
skills/dbos-typescript/references/queue-partitioning.md
Normal file
@@ -0,0 +1,63 @@
---
title: Partition Queues for Per-Entity Limits
impact: HIGH
impactDescription: Enables per-entity concurrency control
tags: queue, partition, per-user, dynamic
---

## Partition Queues for Per-Entity Limits

Partitioned queues apply flow control limits per partition key instead of to the entire queue. Each partition acts as a dynamic "subqueue".

**Incorrect (global concurrency for per-user limits):**

```typescript
// Global concurrency=1 blocks ALL users, not per-user
const queue = new WorkflowQueue("tasks", { concurrency: 1 });
```

**Correct (partitioned queue):**

```typescript
const queue = new WorkflowQueue("tasks", {
  partitionQueue: true,
  concurrency: 1,
});

async function onUserTask(userID: string, task: string) {
  // Each user gets their own partition - at most 1 task per user,
  // but tasks from different users can run concurrently
  await DBOS.startWorkflow(processTask, {
    queueName: queue.name,
    enqueueOptions: { queuePartitionKey: userID },
  })(task);
}
```

**Two-level queueing (per-user + global limits):**

```typescript
const concurrencyQueue = new WorkflowQueue("concurrency-queue", { concurrency: 5 });
const partitionedQueue = new WorkflowQueue("partitioned-queue", {
  partitionQueue: true,
  concurrency: 1,
});

// At most 1 task per user AND at most 5 tasks globally
async function onUserTask(userID: string, task: string) {
  await DBOS.startWorkflow(concurrencyManager, {
    queueName: partitionedQueue.name,
    enqueueOptions: { queuePartitionKey: userID },
  })(task);
}

async function concurrencyManagerFn(task: string) {
  const handle = await DBOS.startWorkflow(processTask, {
    queueName: concurrencyQueue.name,
  })(task);
  return await handle.getResult();
}
const concurrencyManager = DBOS.registerWorkflow(concurrencyManagerFn);
```

Reference: [Partitioning Queues](https://docs.dbos.dev/typescript/tutorials/queue-tutorial#partitioning-queues)
48
skills/dbos-typescript/references/queue-priority.md
Normal file
@@ -0,0 +1,48 @@
---
title: Set Queue Priority for Workflows
impact: HIGH
impactDescription: Prioritizes important workflows over lower-priority ones
tags: queue, priority, ordering, importance
---

## Set Queue Priority for Workflows

Enable priority on a queue to process higher-priority workflows first. Lower numbers indicate higher priority.

**Incorrect (no priority - FIFO only):**

```typescript
const queue = new WorkflowQueue("tasks");
// All tasks processed in FIFO order regardless of importance
```

**Correct (priority-enabled queue):**

```typescript
const queue = new WorkflowQueue("tasks", { priorityEnabled: true });

async function processTaskFn(task: string) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFn);

// High priority task (lower number = higher priority)
await DBOS.startWorkflow(processTask, {
  queueName: queue.name,
  enqueueOptions: { priority: 1 },
})("urgent-task");

// Low priority task
await DBOS.startWorkflow(processTask, {
  queueName: queue.name,
  enqueueOptions: { priority: 100 },
})("background-task");
```

Priority rules:
- Range: `1` to `2,147,483,647`
- Lower number = higher priority
- Workflows **without** assigned priorities have the highest priority (run first)
- Workflows with the same priority are dequeued in FIFO order
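Taken together, these rules amount to the following dequeue-order comparator (an illustrative sketch, not the SDK's internals; it models an unassigned priority as 0, below the valid range of 1 and up):

```typescript
interface QueuedTask {
  priority?: number; // 1 to 2,147,483,647; undefined = highest priority
  enqueuedAt: number;
}

// Sort order for dequeuing: unassigned priorities first, then lower
// numbers, with ties broken FIFO by enqueue time.
function dequeueOrder(a: QueuedTask, b: QueuedTask): number {
  const pa = a.priority ?? 0;
  const pb = b.priority ?? 0;
  if (pa !== pb) return pa - pb;
  return a.enqueuedAt - b.enqueuedAt;
}
```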

Reference: [Priority](https://docs.dbos.dev/typescript/tutorials/queue-tutorial#priority)
44
skills/dbos-typescript/references/queue-rate-limiting.md
Normal file
@@ -0,0 +1,44 @@
---
title: Rate Limit Queue Execution
impact: HIGH
impactDescription: Prevents overwhelming external APIs with too many requests
tags: queue, rate-limit, throttle, api
---

## Rate Limit Queue Execution

Set rate limits on a queue to control how many workflows start in a given period. Rate limits are global across all DBOS processes.

**Incorrect (no rate limiting):**

```typescript
const queue = new WorkflowQueue("llm_tasks");
// Could send hundreds of requests per second to a rate-limited API
```

**Correct (rate-limited queue):**

```typescript
const queue = new WorkflowQueue("llm_tasks", {
  rateLimit: { limitPerPeriod: 50, periodSec: 30 },
});
```

This queue starts at most 50 workflows per 30 seconds.
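Conceptually this is sliding-window accounting over workflow start times; a minimal standalone sketch of the bookkeeping (illustrative only, not DBOS's implementation):

```typescript
// Allow at most limitPerPeriod starts within any trailing periodSec window.
class SlidingWindowLimiter {
  private startTimes: number[] = [];

  constructor(private limitPerPeriod: number, private periodSec: number) {}

  tryStart(nowSec: number): boolean {
    // Forget starts that have fallen out of the window.
    this.startTimes = this.startTimes.filter(t => nowSec - t < this.periodSec);
    if (this.startTimes.length >= this.limitPerPeriod) return false;
    this.startTimes.push(nowSec);
    return true;
  }
}
```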

**Combining rate limiting with concurrency:**

```typescript
// At most 5 concurrent per worker and 50 starts per 30 seconds
const queue = new WorkflowQueue("api_tasks", {
  workerConcurrency: 5,
  rateLimit: { limitPerPeriod: 50, periodSec: 30 },
});
```

Common use cases:
- LLM API rate limiting (OpenAI, Anthropic, etc.)
- Third-party API throttling
- Preventing database overload

Reference: [Rate Limiting](https://docs.dbos.dev/typescript/tutorials/queue-tutorial#rate-limiting)
63
skills/dbos-typescript/references/step-basics.md
Normal file
@@ -0,0 +1,63 @@
---
title: Use Steps for External Operations
impact: HIGH
impactDescription: Steps enable recovery by checkpointing results
tags: step, external, api, checkpoint
---

## Use Steps for External Operations

Any function that performs complex operations, accesses external APIs, or has side effects should be a step. Step results are checkpointed, enabling workflow recovery.

**Incorrect (external call in workflow):**

```typescript
async function myWorkflowFn() {
  // External API call directly in workflow - not checkpointed!
  const response = await fetch("https://api.example.com/data");
  return await response.json();
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
```

**Correct (external call in step using `DBOS.runStep`):**

```typescript
async function fetchData() {
  return await fetch("https://api.example.com/data").then(r => r.json());
}

async function myWorkflowFn() {
  const data = await DBOS.runStep(fetchData, { name: "fetchData" });
  return data;
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
```

`DBOS.runStep` can also accept an inline arrow function:

```typescript
async function myWorkflowFn() {
  const data = await DBOS.runStep(
    () => fetch("https://api.example.com/data").then(r => r.json()),
    { name: "fetchData" }
  );
  return data;
}
```

Alternatively, you can pre-register a step with `DBOS.registerStep` or use the `@DBOS.step()` class decorator, but `DBOS.runStep` is preferred for most use cases.

Step requirements:
- Inputs and outputs must be serializable to JSON
- Cannot call, start, or enqueue workflows from within steps
- Calling a step from another step makes the called step part of the calling step's execution

When to use steps:
- API calls to external services
- File system operations
- Random number generation
- Getting current time
- Any non-deterministic operation

Reference: [DBOS Steps](https://docs.dbos.dev/typescript/tutorials/step-tutorial)
67
skills/dbos-typescript/references/step-retries.md
Normal file
@@ -0,0 +1,67 @@
---
title: Configure Step Retries for Transient Failures
impact: HIGH
impactDescription: Automatic retries handle transient failures without manual code
tags: step, retry, exponential-backoff, resilience
---

## Configure Step Retries for Transient Failures

Steps can automatically retry on failure with exponential backoff. This handles transient failures like network issues.

**Incorrect (manual retry logic):**

```typescript
async function fetchData() {
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      return await fetch("https://api.example.com").then(r => r.json());
    } catch (e) {
      if (attempt === 2) throw e;
      await new Promise(r => setTimeout(r, 2 ** attempt * 1000));
    }
  }
}
```

**Correct (built-in retries with `DBOS.runStep`):**

```typescript
async function fetchData() {
  return await fetch("https://api.example.com").then(r => r.json());
}

async function myWorkflowFn() {
  const data = await DBOS.runStep(fetchData, {
    name: "fetchData",
    retriesAllowed: true,
    maxAttempts: 10,
    intervalSeconds: 1,
    backoffRate: 2,
  });
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
```

With an inline arrow function:

```typescript
async function myWorkflowFn() {
  const data = await DBOS.runStep(
    () => fetch("https://api.example.com").then(r => r.json()),
    { name: "fetchData", retriesAllowed: true, maxAttempts: 10 }
  );
}
```

Retry parameters:
- `retriesAllowed`: Enable automatic retries (default: `false`)
- `maxAttempts`: Maximum retry attempts (default: `3`)
- `intervalSeconds`: Initial delay between retries in seconds (default: `1`)
- `backoffRate`: Multiplier for exponential backoff (default: `2`)

With defaults, retry delays are: 1s, 2s, 4s, 8s, 16s...
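That schedule follows directly from multiplying the initial interval by the backoff rate after each attempt; a quick standalone check of the arithmetic (plain TypeScript, independent of the SDK):

```typescript
// Delay (in seconds) before each retry, following the
// intervalSeconds * backoffRate^attempt schedule described above.
function retryDelays(maxAttempts: number, intervalSeconds: number, backoffRate: number): number[] {
  const delays: number[] = [];
  // One delay precedes each retry after the initial attempt.
  for (let attempt = 0; attempt < maxAttempts - 1; attempt++) {
    delays.push(intervalSeconds * backoffRate ** attempt);
  }
  return delays;
}
```

For example, `retryDelays(6, 1, 2)` gives `[1, 2, 4, 8, 16]`, matching the default schedule.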

If all retries are exhausted, a `DBOSMaxStepRetriesError` is thrown to the calling workflow.

Reference: [Configurable Retries](https://docs.dbos.dev/typescript/tutorials/step-tutorial#configurable-retries)
68
skills/dbos-typescript/references/step-transactions.md
Normal file
@@ -0,0 +1,68 @@
---
title: Use Transactions for Database Operations
impact: HIGH
impactDescription: Transactions provide exactly-once database execution within workflows
tags: step, transaction, database, datasource
---

## Use Transactions for Database Operations

Use datasource transactions for database operations within workflows. Transactions commit exactly once and are checkpointed for recovery.

**Incorrect (raw database query in workflow):**

```typescript
import { Pool } from "pg";
const pool = new Pool();

async function myWorkflowFn() {
  // Direct database access in workflow - not checkpointed!
  const result = await pool.query("INSERT INTO orders ...");
}
```

**Correct (using a datasource transaction):**

Install a datasource package (e.g., Knex):
```shell
npm i @dbos-inc/knex-datasource
```

Configure the datasource:
```typescript
import { KnexDataSource } from "@dbos-inc/knex-datasource";

const config = { client: "pg", connection: process.env.DBOS_DATABASE_URL };
const dataSource = new KnexDataSource("app-db", config);
```

Run transactions inline with `runTransaction`:
```typescript
async function insertOrderFn(userId: string, amount: number) {
  const rows = await dataSource
    .client("orders")
    .insert({ user_id: userId, amount })
    .returning("id");
  return rows[0].id;
}

async function myWorkflowFn(userId: string, amount: number) {
  const orderId = await dataSource.runTransaction(
    () => insertOrderFn(userId, amount),
    { name: "insertOrder" }
  );
  return orderId;
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
```

You can also pre-register a transaction function with `dataSource.registerTransaction`:
```typescript
const insertOrder = dataSource.registerTransaction(insertOrderFn);
```

Available datasource packages: `@dbos-inc/knex-datasource`, `@dbos-inc/kysely-datasource`, `@dbos-inc/drizzle-datasource`, `@dbos-inc/typeorm-datasource`, `@dbos-inc/prisma-datasource`, `@dbos-inc/nodepg-datasource`, `@dbos-inc/postgres-datasource`.

Datasources require installing the DBOS schema (the `transaction_completion` table) via `initializeDBOSSchema`.

Reference: [Transactions & Datasources](https://docs.dbos.dev/typescript/tutorials/transaction-tutorial)
104
skills/dbos-typescript/references/test-setup.md
Normal file
@@ -0,0 +1,104 @@
---
title: Use Proper Test Setup for DBOS
impact: LOW-MEDIUM
impactDescription: Ensures consistent test results with proper DBOS lifecycle management
tags: testing, jest, setup, integration, mock
---

## Use Proper Test Setup for DBOS

DBOS applications can be tested with unit tests (mocking DBOS) or integration tests (a real Postgres database).

**Incorrect (no lifecycle management between tests):**

```typescript
// Tests share state - results are inconsistent!
describe("tests", () => {
  it("test one", async () => {
    await myWorkflow("input");
  });
  it("test two", async () => {
    // Previous test's state leaks into this test
    await myWorkflow("input");
  });
});
```

**Correct (unit testing with mocks):**

```typescript
import { DBOS } from "@dbos-inc/dbos-sdk";

// Mock DBOS - no Postgres required
jest.mock("@dbos-inc/dbos-sdk", () => ({
  DBOS: {
    registerWorkflow: jest.fn((fn) => fn),
    runStep: jest.fn((fn) => fn()),
    setEvent: jest.fn(),
    recv: jest.fn(),
    startWorkflow: jest.fn(),
    workflowID: "test-workflow-id",
  },
}));

describe("workflow unit tests", () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it("should process data", async () => {
    jest.mocked(DBOS.recv).mockResolvedValue("success");
    await myWorkflow("input");
    expect(DBOS.setEvent).toHaveBeenCalledWith("status", "done");
  });
});
```

Mock `registerWorkflow` to return the function directly (not wrapped with durable workflow code).

**Correct (integration testing with Postgres):**

```typescript
import { DBOS, DBOSConfig } from "@dbos-inc/dbos-sdk";
import { Client } from "pg";

async function resetDatabase(databaseUrl: string) {
  const dbName = new URL(databaseUrl).pathname.slice(1);
  const postgresDatabaseUrl = new URL(databaseUrl);
  postgresDatabaseUrl.pathname = "/postgres";
  const client = new Client({ connectionString: postgresDatabaseUrl.toString() });
  await client.connect();
  try {
    await client.query(`DROP DATABASE IF EXISTS ${dbName} WITH (FORCE)`);
    await client.query(`CREATE DATABASE ${dbName}`);
  } finally {
    await client.end();
  }
}

describe("integration tests", () => {
  beforeEach(async () => {
    const databaseUrl = process.env.DBOS_TEST_DATABASE_URL;
    if (!databaseUrl) throw new Error("DBOS_TEST_DATABASE_URL must be set");
    await DBOS.shutdown();
    await resetDatabase(databaseUrl);
    DBOS.setConfig({ name: "my-integration-test", systemDatabaseUrl: databaseUrl });
    await DBOS.launch();
  }, 10000);

  afterEach(async () => {
    await DBOS.shutdown();
  });

  it("should complete workflow", async () => {
    const result = await myWorkflow("test-input");
    expect(result).toBe("expected-output");
  });
});
```

Key points:
- Call `DBOS.shutdown()` before resetting and reconfiguring
- Reset the database between tests for isolation
- Set a generous `beforeEach` timeout (10s) for database setup
- Use `DBOS.shutdown({ deregister: true })` if re-registering functions

Reference: [Testing & Mocking](https://docs.dbos.dev/typescript/tutorials/testing)
54
skills/dbos-typescript/references/workflow-background.md
Normal file
@@ -0,0 +1,54 @@
---
title: Start Workflows in Background
impact: CRITICAL
impactDescription: Background workflows enable reliable async processing
tags: workflow, background, handle, async
---

## Start Workflows in Background

Use `DBOS.startWorkflow` to start a workflow in the background and get a handle to track it. The workflow is guaranteed to run to completion even if the app is interrupted.

**Incorrect (no way to track background work):**

```typescript
async function processDataFn(data: string) {
  // ...
}
const processData = DBOS.registerWorkflow(processDataFn);

// Fire and forget - no way to track or get result
processData(data);
```

**Correct (using startWorkflow):**

```typescript
async function processDataFn(data: string) {
  return "processed: " + data;
}
const processData = DBOS.registerWorkflow(processDataFn);

async function main() {
  // Start workflow in background, get handle
  const handle = await DBOS.startWorkflow(processData)("input");

  // Get the workflow ID
  console.log(handle.workflowID);

  // Wait for result
  const result = await handle.getResult();

  // Check status
  const status = await handle.getStatus();
}
```

Retrieve a handle later by workflow ID:

```typescript
const handle = DBOS.retrieveWorkflow<string>(workflowID);
const result = await handle.getResult();
```
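What a handle buys you over fire-and-forget can be illustrated with a dependency-free sketch. The `Handle` type, `inMemoryStart` helper, and `registry` below are hypothetical illustrations of the pattern, not DBOS APIs (DBOS persists handles in Postgres, not in memory):

```typescript
// Hypothetical in-memory sketch of the handle pattern - not a DBOS API.
type Handle<T> = {
  workflowID: string;
  getResult: () => Promise<T>;
  getStatus: () => "PENDING" | "SUCCESS";
};

const registry = new Map<string, Handle<unknown>>();

function inMemoryStart<T>(id: string, fn: () => Promise<T>): Handle<T> {
  let status: "PENDING" | "SUCCESS" = "PENDING";
  // Start the work immediately; the handle tracks it without blocking the caller.
  const result = fn().then((r) => {
    status = "SUCCESS";
    return r;
  });
  const handle: Handle<T> = {
    workflowID: id,
    getResult: () => result,
    getStatus: () => status,
  };
  registry.set(id, handle as Handle<unknown>);
  return handle;
}

// Start in the background, then retrieve the same handle later by ID.
const handle = inMemoryStart("wf-1", async () => "processed: input");
const later = registry.get("wf-1");
```

The ID is the key piece: any other process (or the same one after a restart) can look the workflow up and await its result, which is exactly what `DBOS.retrieveWorkflow` provides durably.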
Reference: [Starting Workflows in Background](https://docs.dbos.dev/typescript/tutorials/workflow-tutorial#starting-workflows-in-the-background)

65
skills/dbos-typescript/references/workflow-constraints.md
Normal file
@@ -0,0 +1,65 @@
---
title: Follow Workflow Constraints
impact: CRITICAL
impactDescription: Violating constraints breaks recovery and durability guarantees
tags: workflow, constraints, rules, best-practices
---

## Follow Workflow Constraints

Workflows have specific constraints to maintain durability guarantees. Violating them can break recovery.

**Incorrect (starting workflows from steps):**

```typescript
async function myStep() {
  // Don't start workflows from steps!
  await DBOS.startWorkflow(otherWorkflow)();
}

async function myOtherStep() {
  // Don't call recv from steps!
  const msg = await DBOS.recv("topic");
}

async function myWorkflowFn() {
  await DBOS.runStep(myStep, { name: "myStep" });
}
```

**Correct (workflow operations only from workflows):**

```typescript
async function fetchData() {
  // Steps only do external operations
  return await fetch("https://api.example.com").then(r => r.json());
}

async function myWorkflowFn() {
  await DBOS.runStep(fetchData, { name: "fetchData" });
  // Start child workflows from the parent workflow
  await DBOS.startWorkflow(otherWorkflow)();
  // Receive messages from the workflow
  const msg = await DBOS.recv("topic");
  // Set events from the workflow
  await DBOS.setEvent("status", "done");
}
const myWorkflow = DBOS.registerWorkflow(myWorkflowFn);
```

Additional constraints:
- Don't modify global variables from workflows or steps
- Steps run in parallel must start in a deterministic order:

```typescript
// CORRECT - deterministic start order
const results = await Promise.allSettled([
  DBOS.runStep(() => step1("arg1"), { name: "step1" }),
  DBOS.runStep(() => step2("arg2"), { name: "step2" }),
  DBOS.runStep(() => step3("arg3"), { name: "step3" }),
]);
```

Use `Promise.allSettled` instead of `Promise.all` to safely handle errors without crashing the Node.js process.
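The difference matters because `Promise.all` rejects as soon as any promise rejects, while `Promise.allSettled` always resolves with one outcome record per promise. A dependency-free sketch (the `flaky` function is a hypothetical stand-in for a failing step; no DBOS APIs are involved):

```typescript
// Dependency-free illustration: allSettled reports every outcome instead of rejecting.
async function flaky(n: number): Promise<number> {
  if (n === 2) throw new Error("step 2 failed");
  return n * 10;
}

async function demo() {
  const results = await Promise.allSettled([flaky(1), flaky(2), flaky(3)]);
  // Each entry is { status: "fulfilled", value } or { status: "rejected", reason }
  for (const r of results) {
    if (r.status === "fulfilled") {
      console.log("ok:", r.value);
    } else {
      console.log("failed:", (r.reason as Error).message);
    }
  }
  return results;
}
```

With `Promise.all`, the rejection from the second promise would propagate and any unhandled sibling rejection could crash the process; with `Promise.allSettled`, the workflow inspects each result and decides how to proceed.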
Reference: [Workflow Guarantees](https://docs.dbos.dev/typescript/tutorials/workflow-tutorial#workflow-guarantees)