Max dml 7e5abd504f feat: add DBOS skills for TypeScript, Python, and Go (#94)
Add three DBOS SDK skills with reference documentation for building
reliable, fault-tolerant applications with durable workflows.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 23:26:51 +01:00


title: Rate Limit Queue Execution
impact: HIGH
impactDescription: Prevents hitting API rate limits
tags: queue, rate-limit, api, throttle

Rate Limit Queue Execution

Use queue rate limits when calling rate-limited APIs (such as LLM APIs). The limit is enforced globally, across all processes consuming the queue.

Incorrect (no rate limiting):

queue = Queue("llm_tasks")

@DBOS.step()
def call_llm(prompt):
    # May hit rate limits if too many calls
    return openai.chat.completions.create(...)

Correct (with rate limit):

from dbos import DBOS, Queue

# Max 50 tasks started per 30 seconds
queue = Queue("llm_tasks", limiter={"limit": 50, "period": 30})

@DBOS.step()
def call_llm(prompt):
    return openai.chat.completions.create(...)

@DBOS.workflow()
def process_prompts(prompts):
    handles = []
    for prompt in prompts:
        # Queue enforces rate limit
        handle = queue.enqueue(call_llm, prompt)
        handles.append(handle)
    return [h.get_result() for h in handles]

Rate limit parameters:

  • limit: Maximum number of functions allowed to start within one period
  • period: Length of the period, in seconds
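The limit/period semantics can be pictured as a sliding-window counter: a task may start only if fewer than `limit` starts have occurred in the preceding `period` seconds. A minimal sketch of that idea (the `SlidingWindowLimiter` class below is a hypothetical illustration, not part of the DBOS API):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Illustrative only: at most `limit` starts in any rolling
    `period`-second window."""

    def __init__(self, limit, period, clock=time.monotonic):
        self.limit = limit
        self.period = period
        self.clock = clock
        self.starts = deque()  # timestamps of recent task starts

    def try_start(self):
        now = self.clock()
        # Evict starts that have fallen out of the rolling window
        while self.starts and now - self.starts[0] >= self.period:
            self.starts.popleft()
        if len(self.starts) < self.limit:
            self.starts.append(now)
            return True
        return False  # rate-limited: caller should wait and retry
```

A queue governed by such a limiter would hold enqueued tasks until `try_start` succeeds, which is why enqueuing more than `limit` tasks at once is safe: the excess simply starts in later windows.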

Rate limits can be combined with concurrency limits:

queue = Queue("api_tasks",
    worker_concurrency=5,
    limiter={"limit": 100, "period": 60})
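With both settings, a task starts only when a concurrency slot is free AND the rate window has room; finishing a task frees its slot but does not refund the rate window. A sketch of that interaction (the `CombinedGate` class is a hypothetical illustration, not DBOS internals):

```python
import threading
import time
from collections import deque

class CombinedGate:
    """Illustrative only: admit a task start only if both a
    concurrency slot and rolling rate-window capacity are available."""

    def __init__(self, concurrency, limit, period, clock=time.monotonic):
        self.slots = threading.Semaphore(concurrency)
        self.limit = limit
        self.period = period
        self.clock = clock
        self.starts = deque()  # timestamps of successful starts

    def try_start(self):
        if not self.slots.acquire(blocking=False):
            return False  # all concurrency slots in use
        now = self.clock()
        while self.starts and now - self.starts[0] >= self.period:
            self.starts.popleft()
        if len(self.starts) >= self.limit:
            self.slots.release()  # give the slot back; rate-limited
            return False
        self.starts.append(now)
        return True

    def finish(self):
        self.slots.release()  # frees a slot; rate window is unaffected
```

Note the asymmetry: `finish` restores concurrency capacity immediately, while rate capacity only returns as old starts age out of the window.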

Reference: Rate Limiting