CONCEPTS

Understanding OpenQueue and event-driven architecture

What is Event-Driven Architecture?

Event-driven architecture (EDA) is a software design pattern where components communicate by producing and consuming events. Instead of direct, synchronous calls between services, systems emit events when something happens, and other services react to those events asynchronously.

Traditional (Synchronous)

Client → API → Processing

Client waits for processing to complete

Event-Driven

Client → Queue → Worker

Client gets immediate confirmation, work happens in background

Why Event-Driven?

Decoupling

Producers and workers don't need to know about each other. Just enqueue work and move on.

Scalability

Add more workers to handle increased load. The queue absorbs spikes gracefully.

Reliability

Jobs persist in the queue until processed. No work is lost if a worker crashes.

Responsiveness

Clients get immediate responses. Heavy processing happens asynchronously in the background.

Ordering

Jobs can be prioritized. Critical tasks jump to the front of the line.

Scheduling

Delay execution with run_at. Perfect for cron jobs, retries, and deferred work.

How OpenQueue Supports Event-Driven Architecture

OpenQueue implements a job queue pattern - one of the most common ways to achieve event-driven processing. Here's how each concept maps to OpenQueue:

Producer

Any service that creates work. It enqueues a job with a payload and gets back a job ID immediately. The producer doesn't care who processes it or when.

# Producer enqueues work
job_id = client.enqueue(
    queue_name="send_email",
    payload={"to": "user@example.com", "subject": "Welcome!"}
)
# Returns immediately - job is queued

Worker

Services that consume and process work. Workers lease jobs from the queue, process them, and report success or failure. You can run multiple workers for parallel processing.

# Worker leases and processes
leased = client.lease(queue_name="send_email", worker_id="worker-1")
if leased:
    send_email(leased.job.payload)
    client.ack(leased.job.id, leased.lease_token)
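In practice a worker wraps lease/process/report in a small loop. The sketch below is illustrative rather than a prescribed API: it assumes the client methods shown above plus a client.nack counterpart to ack (NACK is described under Core Concepts), and takes the job handler as a parameter:

```python
def process_one(client, queue_name, worker_id, handler):
    """Lease one job, run handler on its payload, and report the outcome.

    Returns True if a job was processed, False if the queue was empty.
    """
    leased = client.lease(queue_name=queue_name, worker_id=worker_id)
    if leased is None:
        return False  # nothing to do; caller can sleep and retry
    try:
        handler(leased.job.payload)
    except Exception:
        # Report failure; OpenQueue retries with exponential backoff.
        client.nack(leased.job.id, leased.lease_token)
    else:
        client.ack(leased.job.id, leased.lease_token)
    return True
```

A long-running worker would call this in a `while True` loop, sleeping briefly whenever it returns False.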

Queue

The buffer between producers and workers. OpenQueue uses PostgreSQL as the backing store - jobs are rows in a table, giving you durability, querying, and ACID guarantees.

PostgreSQL-backed | ACID compliant | Queryable
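Because jobs are table rows, a lease is conceptually a single atomic UPDATE. The table and column names below are hypothetical - a sketch of the standard FOR UPDATE SKIP LOCKED pattern, not OpenQueue's actual schema or query:

```python
# Hypothetical sketch of a lease as one atomic SQL statement.
# Table and column names are illustrative, not OpenQueue's real schema.
LEASE_SQL = """
UPDATE jobs
SET status = 'processing',
    lease_expires_at = now() + interval '30 seconds'
WHERE id = (
    SELECT id
    FROM jobs
    WHERE queue_name = %(queue)s
      AND status = 'pending'
      AND run_at <= now()
    ORDER BY priority DESC, id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;
"""
```

Because SKIP LOCKED skips rows another transaction has already locked, two workers running this statement concurrently can never claim the same job.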

Core Concepts

Leasing

Workers atomically "claim" a job using database row locking (FOR UPDATE SKIP LOCKED). This prevents two workers from processing the same job - no race conditions, no duplicates.

Lease Token

A unique token returned when leasing. Required for ack/nack operations. Prevents stale workers from updating a job after their lease expires.

Visibility Timeout

If a worker crashes, the job automatically becomes available again after the lease expires. This is "visibility timeout" - the job becomes "visible" to other workers.

Heartbeat

For long-running jobs, workers can send heartbeats to extend their lease. The job stays locked while processing continues.

ACK / NACK

ACK marks a job as completed successfully. NACK marks it as failed - OpenQueue will retry automatically with exponential backoff.

Dead Letter Queue

Jobs that exhaust all retries go to a "dead" state. They're not deleted - you can inspect them, understand what went wrong, and manually replay if needed.

Priority

Jobs have priority (higher = more urgent). When leasing, OpenQueue picks the highest priority job first. Critical tasks jump the queue.

Scheduled Jobs

The run_at parameter delays execution. Jobs stay pending until their scheduled time, then become available for leasing. No cron needed.
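Priority and scheduling interact in job selection: only jobs whose run_at has passed are eligible, and the highest-priority eligible job is leased first. Here is a toy in-memory model of that selection rule - not OpenQueue's implementation, which lives in PostgreSQL:

```python
import time

class ToyQueue:
    """Toy model of OpenQueue's job selection: only jobs whose run_at
    has passed are eligible, and the highest-priority eligible job
    is leased first (ties broken by insertion order)."""

    def __init__(self):
        self._jobs = []
        self._next_id = 0

    def enqueue(self, payload, priority=0, run_at=None):
        job = {"id": self._next_id, "payload": payload,
               "priority": priority,
               "run_at": run_at if run_at is not None else time.time(),
               "status": "pending"}
        self._next_id += 1
        self._jobs.append(job)
        return job["id"]

    def lease(self, now=None):
        now = now if now is not None else time.time()
        eligible = [j for j in self._jobs
                    if j["status"] == "pending" and j["run_at"] <= now]
        if not eligible:
            return None
        # Highest priority first; ties broken by lowest id.
        job = min(eligible, key=lambda j: (-j["priority"], j["id"]))
        job["status"] = "processing"
        return job
```

A job scheduled an hour ahead stays invisible to lease() until that time, even if a lower-priority job is sitting behind it.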
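The heartbeat pattern for a long-running job can be sketched with a background thread. The client.heartbeat call here is an assumed method name for illustration - the text above only says workers can send heartbeats to extend a lease:

```python
import threading

def process_with_heartbeat(client, leased, handler, interval=10.0):
    """Run a long job while a background thread keeps the lease alive."""
    stop = threading.Event()

    def beat():
        # Periodically extend the lease until the job finishes.
        # client.heartbeat is a hypothetical method name.
        while not stop.wait(interval):
            client.heartbeat(leased.job.id, leased.lease_token)

    beater = threading.Thread(target=beat, daemon=True)
    beater.start()
    try:
        handler(leased.job.payload)  # the long-running work
        client.ack(leased.job.id, leased.lease_token)
    finally:
        stop.set()
        beater.join()
```

If the handler raises, the heartbeat stops and no ack is sent, so the lease eventually expires and the job becomes visible to other workers.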

Job Lifecycle

PENDING - waiting in the queue
PROCESSING - leased by a worker
COMPLETED - processed successfully
FAILED - failed, waiting to be retried
DEAD - retries exhausted (dead letter queue)

Pathways:
PENDING → lease → PROCESSING → ack → COMPLETED
PENDING → lease → PROCESSING → nack → FAILED → retry → PENDING
PENDING → lease → PROCESSING → nack (retries exhausted) → DEAD
PENDING → cancel → CANCELLED

Ready to get started?