Open source · PostgreSQL-backed · Production ready

Background jobs. No Redis required.

Most job queue solutions force you to deploy and operate Redis alongside your database. OpenQueue runs entirely on the PostgreSQL you already have - giving you reliable background jobs, automatic retries, scheduled execution, and a real-time dashboard without adding a single extra service.

Multi-tenant by design. Python & TypeScript SDKs. Dead-letter queues built in.

View on GitHub

Zero

Extra infrastructure

PostgreSQL only

< 5ms

Job lease latency

p99, under load

2

SDK languages

Python & TypeScript

MIT

License

Free forever

The problem

Adding a job queue shouldn't mean
adding a Redis operations team

Every major job queue library today assumes Redis. That means a second database to provision, monitor, scale, back up, and pay for - before you process a single job.

⚙️

Another service to operate

Redis must be deployed separately, kept alive, monitored for memory pressure, and backed up. On managed cloud it adds $50–$200/mo before your queue does anything useful.

🔲

Observability is a black box

Redis-based queues store jobs in opaque data structures. You can't write a JOIN, you can't run an EXPLAIN, and you can't use the SQL tools your team already knows.

🔓

Multi-tenancy is bolted on

Bull, Celery, and RQ have no concept of users. Isolating one tenant's jobs from another requires custom key-namespacing, wrapper logic, and discipline - every time.

The OpenQueue approach

Use the PostgreSQL database you're already running. Your jobs live next to your application data - queryable, observable, and isolated per user out of the box.

Features

Everything you need, nothing you don't

A complete job queue system - retries, scheduling, dead-letter queues, multi-tenancy, and observability - all without leaving your existing PostgreSQL setup.

Atomic job leasing

PostgreSQL's SELECT FOR UPDATE SKIP LOCKED makes job pickup atomic and contention-free. Two workers can never receive the same job - no races, no duplicates, no manual lock management.
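A plausible sketch of the leasing query in the SKIP LOCKED pattern. The table and column names here are illustrative, not OpenQueue's actual schema:

```python
# Illustrative lease query in the SKIP LOCKED pattern. Table and column
# names are hypothetical - OpenQueue's real schema may differ.
LEASE_SQL = """
UPDATE jobs
SET status = 'processing',
    lease_expires_at = now() + interval '30 seconds'
WHERE id = (
    SELECT id FROM jobs
    WHERE queue = %(queue)s
      AND status = 'pending'
      AND run_at <= now()
    ORDER BY priority DESC, created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
"""
```

SKIP LOCKED is the key: a worker whose candidate row is already locked by another transaction simply moves on to the next eligible row instead of blocking.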

Multi-tenant by design

Every job is scoped to a user via API token at the schema level. One tenant's queue can never read or interfere with another's - no key-namespacing tricks required.

Automatic retries & DLQ

Set max_retries per job. Failed jobs are retried automatically. Once retries are exhausted, jobs land in a dead-letter queue for manual inspection - never silently lost.
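The retry/dead-letter decision on nack can be sketched in a few lines. The function and its semantics are illustrative, not OpenQueue's internal code:

```python
# Minimal sketch of the nack decision: retry until max_retries is
# exhausted, then move to the dead-letter queue. Illustrative only.
def on_nack(attempts: int, max_retries: int) -> str:
    """Return the job's next state after a failed attempt."""
    if attempts < max_retries:
        return "pending"  # eligible for another lease
    return "dead"         # exhausted - lands in the dead-letter queue

states = [on_nack(n, max_retries=3) for n in range(1, 5)]
print(states)  # ['pending', 'pending', 'dead', 'dead']
```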

Scheduled execution

Pass a run_at timestamp to delay any job. Jobs stay pending until their scheduled time, then become eligible for leasing. No separate cron service needed.

Real-time dashboard

Per-queue stats, per-job timelines, payload & result inspection, error traces, and manual cancellation - all in a built-in terminal-style UI. No third-party add-on required.

Python & TypeScript SDKs

Official clients for Python and TypeScript/Node.js with full lifecycle support: enqueue, lease, ack, nack, heartbeat, cancel. Typed, tested, and published to PyPI & npm.

Full SQL observability

Jobs are rows. Query them with JOIN, WHERE, GROUP BY. Plug in any SQL client, write custom reports, trace slow jobs, or export to your analytics stack - no special tooling needed.
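An example of the kind of report plain SQL makes possible when jobs are rows. The table and column names are hypothetical:

```python
# Example ad-hoc report: queue depth broken down by status.
# Table and column names are hypothetical, not OpenQueue's schema.
QUEUE_DEPTH_SQL = """
SELECT queue, status, count(*) AS jobs, max(created_at) AS newest
FROM jobs
GROUP BY queue, status
ORDER BY queue, status;
"""
```

Any SQL client can run a query like this directly against the jobs table - no queue-specific tooling involved.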

Priority queues

Assign an integer priority to any job. Higher-priority jobs are always leased first within a queue, giving you fine-grained control over processing order without multiple queues.
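The resulting lease order can be sketched as a sort: highest priority first, oldest first within a priority level. Field names here are illustrative:

```python
# Sketch of lease ordering: priority descending, then creation order.
# Field names are illustrative.
jobs = [
    {"id": "a", "priority": 0, "created_at": 1},
    {"id": "b", "priority": 5, "created_at": 3},
    {"id": "c", "priority": 5, "created_at": 2},
]
lease_order = sorted(jobs, key=lambda j: (-j["priority"], j["created_at"]))
print([j["id"] for j in lease_order])  # ['c', 'b', 'a']
```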

Visibility timeouts & heartbeats

Leased jobs have a configurable visibility timeout. Long-running workers send heartbeats to extend their lease. Crashed workers release their jobs automatically - no manual cleanup.

Use cases

What can you build with OpenQueue?

Any task that shouldn't block a web request belongs in a queue. Here's what you can do:

Transactional Emails

Offload sign-up confirmations, password resets, and order receipts to a queue. If your email provider is slow or down, jobs retry automatically - users never see a slow response.

welcome emails · receipts · alerts

Webhook Delivery

Deliver outbound webhooks reliably. Failed deliveries are retried with back-off, dead jobs are inspected from the dashboard, and every attempt is logged - no lost events.

outbound events · retries · delivery logs
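Capped exponential backoff is one common retry schedule for webhook delivery. This sketch shows the idea; OpenQueue's actual delay policy may differ:

```python
# One common retry schedule: capped exponential backoff between attempts.
# The base and cap values are illustrative, not OpenQueue defaults.
def backoff_seconds(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    return min(cap, base * 2 ** (attempt - 1))

print([backoff_seconds(n) for n in range(1, 6)])  # [2.0, 4.0, 8.0, 16.0, 32.0]
```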

Media Processing

Resize images, generate thumbnails, transcode video, or run OCR in the background. Queue the work immediately on upload, process asynchronously at your own pace.

image resize · thumbnails · transcoding

Scheduled Reports

Use run_at to schedule daily digests, weekly analytics emails, or monthly billing summaries. No separate cron service - the queue is the scheduler.

daily digests · billing runs · cron replacement

AI & LLM Pipelines

Queue document processing, embedding generation, or LLM inference jobs. Handle bursts without overloading your inference budget - workers consume at a controlled rate.

embeddings · LLM calls · batch inference

Data Sync & ETL

Sync third-party APIs, run import jobs, or trigger data pipeline steps as queue jobs. Each step is observable, retryable, and scoped per tenant - no shared state issues.

API sync · imports · ETL steps

How it works

Three API calls. That's the whole model.

Producers enqueue. Workers lease and process. The queue handles retries, timeouts, and dead letters automatically.

01

Enqueue a job

POST to /jobs with a queue name, JSON payload, priority, max retries, and an optional run_at timestamp for deferred execution. Returns a job ID immediately.

Your web request returns instantly. The work happens in the background.
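A hypothetical request body for POST /jobs, assembled from the fields named above. The exact field names may differ from the real API:

```python
import json

# Hypothetical POST /jobs body built from the fields described above;
# exact field names may differ from the real API.
body = {
    "queue": "emails",
    "payload": {"to": "user@example.com"},
    "priority": 5,
    "max_retries": 3,
    "run_at": "2025-01-01T09:00:00Z",  # optional: omit for immediate execution
}
print(json.dumps(body, indent=2))
```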

02

Lease & process

Workers call /lease on a queue. OpenQueue atomically selects the highest-priority eligible job and locks it with a visibility timeout - guaranteed to one worker only.

Workers can send heartbeats to extend the lease on long-running jobs.

03

Ack or nack

On success, call /ack to mark the job completed. On failure, call /nack - OpenQueue retries up to max_retries times, then moves the job to the dead-letter queue.

Dead jobs are visible in the dashboard and can be re-queued manually.

Job lifecycle

pending → processing → completed
on failure → pending (retry) → … → dead (DLQ)

Retries on nack: configurable per-job retry limit
Visibility timeout: crashed workers release jobs automatically
Heartbeat support: long jobs extend their own lease
Scheduled execution: run_at delays jobs until the right moment

Comparison

How OpenQueue stacks up

Compared to the most common alternatives for background job processing.

Capability            | OpenQueue       | Redis / BullMQ     | Celery                        | RQ
Infrastructure needed | PostgreSQL only | PostgreSQL + Redis | PostgreSQL + Redis / RabbitMQ | PostgreSQL + Redis
Multi-tenancy         | ✓ Built-in      | ✗ Manual           | ✗ Manual                      | ✗ Manual
Built-in dashboard    | ✓ Included      | Needs Bull Board   | Needs Flower                  | ✗ None
Dead-letter queue     | ✓ Built-in      | Partial            | Partial                       | ✗ Manual
Visibility timeout    | ✓ Built-in      |                    |                               |
Scheduled jobs        | ✓ Native        | ✓ via cron         | ✓ via Beat                    | Partial
SQL observability     | ✓ Full          | ✗                  | ✗                             | ✗
Priority queues       | ✓ Built-in      | ✓ Limited          |                               |
Python SDK            | ✓               |                    | ✓                             | ✓
TypeScript SDK        | ✓               | ✓                  | ✗                             | ✗

Comparison reflects typical default configurations. Some capabilities vary by version or plugin.

SDKs

Up and running in minutes

Official clients for Python and TypeScript. Install, point at your API URL, and start enqueueing - no broker configuration, no connection pools to manage.

Python
pip install openqueue-client
import time

from openqueue import OpenQueue

oq = OpenQueue(
    base_url="https://open-queue-ivory.vercel.app",
    api_token="oq_live_...",
)

# Enqueue a job
job = oq.enqueue("emails", {"to": "user@example.com"})

# Worker loop
while True:
    leased = oq.lease("emails", worker_id="w1")
    if leased:
        process(leased.job.payload)  # your job handler
        oq.ack(leased.job.id, leased.lease_token)
    else:
        time.sleep(1)  # queue empty - back off before polling again
TypeScript
npm i @ravin-d-27/openqueue
import { OpenQueue } from "@ravin-d-27/openqueue";

const oq = new OpenQueue({
  baseUrl: "https://open-queue-ivory.vercel.app",
  apiToken: "oq_live_...",
});

// Enqueue a job
const job = await oq.enqueue("emails", {
  to: "user@example.com",
});

// Worker loop
const leased = await oq.lease("emails", "worker-1");
if (leased) {
  await process(leased.job.payload);
  await oq.ack(leased.job.id, leased.leaseToken);
}

Get started

Ship your background jobs today

Sign up, get your API token, and start enqueueing jobs in under five minutes. No Redis. No Celery. No ops overhead.

Free to use · Open source · MIT license

View source