Most job queue solutions force you to deploy and operate Redis alongside your database. OpenQueue runs entirely on the PostgreSQL you already have - giving you reliable background jobs, automatic retries, scheduled execution, and a real-time dashboard without adding a single extra service.
Multi-tenant by design. Python & TypeScript SDKs. Dead-letter queues built in.
- Zero extra infrastructure - PostgreSQL only
- < 5 ms job lease latency - p99, under load
- 2 SDK languages - Python & TypeScript
- MIT license - free forever
The problem
Every major job queue library today assumes Redis. That means a second database to provision, monitor, scale, back up, and pay for - before you process a single job.
Redis must be deployed separately, kept alive, monitored for memory pressure, and backed up. On managed cloud it adds $50–$200/mo before your queue does anything useful.
Redis-based queues store jobs in opaque data structures. You can't write a JOIN, you can't run an EXPLAIN, and you can't use the SQL tools your team already knows.
Bull, Celery, and RQ have no concept of users. Isolating one tenant's jobs from another requires custom key-namespacing, wrapper logic, and discipline - every time.
The OpenQueue approach
Use the PostgreSQL database you're already running. Your jobs live next to your application data - queryable, observable, and isolated per user out of the box.
Features
A complete job queue system - retries, scheduling, dead-letter queues, multi-tenancy, and observability - all without leaving your existing PostgreSQL setup.
PostgreSQL's SELECT FOR UPDATE SKIP LOCKED makes job pickup atomic and contention-free. Two workers can never receive the same job - no application-side locking, no races, no duplicates.
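A sketch of the leasing rule described above. The SQL is an assumption about how such a query could look, not OpenQueue's actual schema; `pick_job` mirrors the same selection logic in pure Python so it runs anywhere.

```python
# Hypothetical leasing query: oldest eligible job with the highest priority,
# skipping rows other workers have already locked. Schema is illustrative.
LEASE_SQL = """
SELECT id FROM jobs
WHERE queue = %(queue)s
  AND status = 'pending'
  AND run_at <= now()
ORDER BY priority DESC, created_at
FOR UPDATE SKIP LOCKED
LIMIT 1
"""

def pick_job(jobs, queue, now):
    """Pure-Python mirror of the selection rule: highest priority first,
    then oldest; only pending jobs whose run_at has passed are eligible."""
    eligible = [
        j for j in jobs
        if j["queue"] == queue and j["status"] == "pending" and j["run_at"] <= now
    ]
    eligible.sort(key=lambda j: (-j["priority"], j["created_at"]))
    return eligible[0] if eligible else None
```

`SKIP LOCKED` is what makes this contention-free: a second worker running the same query simply skips the row the first worker locked, instead of blocking on it.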
Every job is scoped to a user via API token at the schema level. One tenant's queue can never read or interfere with another's - no key-namespacing tricks required.
Set max_retries per job. Failed jobs are retried automatically. Once retries are exhausted, jobs land in a dead-letter queue for manual inspection - never silently lost.
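The retry/dead-letter rule above can be sketched in a few lines. Field and status names here are illustrative assumptions, not OpenQueue's schema:

```python
def handle_nack(job):
    """Re-queue a failed job until max_retries is exhausted,
    then move it to the dead-letter queue."""
    job["attempts"] += 1
    if job["attempts"] >= job["max_retries"]:
        job["status"] = "dead"      # lands in the dead-letter queue
    else:
        job["status"] = "pending"   # eligible to be leased again
    return job
```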
Pass a run_at timestamp to delay any job. Jobs stay pending until their scheduled time, then become eligible for leasing. No separate cron service needed.
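The eligibility rule is simple: a job whose `run_at` lies in the future stays pending until that moment passes. A minimal sketch (`is_eligible` is an illustrative helper, not part of the SDK):

```python
from datetime import datetime, timedelta, timezone

def is_eligible(run_at, now):
    """A job may be leased only once its run_at timestamp has passed."""
    return run_at <= now

# e.g. defer a job by one hour
now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
run_at = now + timedelta(hours=1)
```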
Per-queue stats, per-job timelines, payload & result inspection, error traces, and manual cancellation - all in a built-in terminal-style UI. No third-party add-on required.
Official clients for Python and TypeScript/Node.js with full lifecycle support: enqueue, lease, ack, nack, heartbeat, cancel. Typed, tested, and published to PyPI & npm.
Jobs are rows. Query them with JOIN, WHERE, GROUP BY. Plug in any SQL client, write custom reports, trace slow jobs, or export to your analytics stack - no special tooling needed.
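Because jobs are ordinary rows, everyday SQL answers operational questions. This self-contained demo uses `sqlite3` only so it runs anywhere; the same query works against the PostgreSQL tables. The schema is illustrative, not OpenQueue's actual one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id TEXT, queue TEXT, status TEXT, attempts INT)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?, ?)",
    [
        ("a", "emails", "completed", 1),
        ("b", "emails", "dead", 3),
        ("c", "webhooks", "pending", 0),
    ],
)

# How many jobs are in each state, per queue?
rows = conn.execute(
    "SELECT queue, status, COUNT(*) FROM jobs "
    "GROUP BY queue, status ORDER BY queue, status"
).fetchall()
```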
Assign an integer priority to any job. Higher-priority jobs are always leased first within a queue, giving you fine-grained control over processing order without multiple queues.
Leased jobs have a configurable visibility timeout. Long-running workers send heartbeats to extend their lease. Crashed workers release their jobs automatically - no manual cleanup.
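The lease/heartbeat mechanics above, as a small simulation. Function and field names are illustrative, not OpenQueue's API:

```python
def heartbeat(lease, now, visibility_timeout):
    """Extend a live lease from the current time."""
    lease["expires_at"] = now + visibility_timeout
    return lease

def reap_expired(jobs, now):
    """Return jobs whose lease has expired (crashed worker) to pending."""
    for job in jobs:
        if job["status"] == "leased" and job["expires_at"] <= now:
            job["status"] = "pending"
    return jobs
```

A worker that keeps heartbeating keeps its lease indefinitely; one that crashes stops extending it, and the job becomes leasable again once the timeout passes.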
Use cases
Any task that shouldn't block a web request belongs in a queue. Here's what you can do:
Offload sign-up confirmations, password resets, and order receipts to a queue. If your email provider is slow or down, jobs retry automatically - users never see a slow response.
Deliver outbound webhooks reliably. Failed deliveries are retried with back-off, dead jobs are inspected from the dashboard, and every attempt is logged - no lost events.
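One common shape for the retry back-off mentioned above is an exponential schedule with a cap. The base delay and cap here are assumptions for illustration, not OpenQueue defaults:

```python
def backoff_seconds(attempt, base=5, cap=3600):
    """Delay before retry N: 5 s, 10 s, 20 s, ... capped at one hour."""
    return min(base * 2 ** attempt, cap)
```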
Resize images, generate thumbnails, transcode video, or run OCR in the background. Queue the work immediately on upload, process asynchronously at your own pace.
Use run_at to schedule daily digests, weekly analytics emails, or monthly billing summaries. No separate cron service - the queue is the scheduler.
Queue document processing, embedding generation, or LLM inference jobs. Handle bursts without overloading your inference budget - workers consume at a controlled rate.
Sync third-party APIs, run import jobs, or trigger data pipeline steps as queue jobs. Each step is observable, retryable, and scoped per tenant - no shared state issues.
How it works
Producers enqueue. Workers lease and process. The queue handles retries, timeouts, and dead letters automatically.
POST to /jobs with a queue name, JSON payload, priority, max retries, and an optional run_at timestamp for deferred execution. Returns a job ID immediately.
Your web request returns instantly. The work happens in the background.
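A plausible request body for this step. The field names follow the description above; the exact wire format is an assumption:

```python
import json

body = {
    "queue": "emails",
    "payload": {"to": "user@example.com"},
    "priority": 5,
    "max_retries": 3,
    "run_at": "2025-06-01T09:00:00Z",  # optional: defer execution
}
wire = json.dumps(body)  # sent as the JSON body of POST /jobs
```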
Workers call /lease on a queue. OpenQueue atomically selects the highest-priority eligible job and locks it with a visibility timeout - each job goes to exactly one worker.
Workers can send heartbeats to extend the lease on long-running jobs.
On success, call /ack to mark the job completed. On failure, call /nack - OpenQueue retries up to max_retries times, then moves the job to the dead-letter queue.
Dead jobs are visible in the dashboard and can be re-queued manually.
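Manual re-queue of a dead-lettered job amounts to resetting its state, roughly like this (names are illustrative, not OpenQueue's API):

```python
def requeue(dead_job):
    """Re-queue a dead-lettered job for another round of processing,
    resetting its attempt count."""
    if dead_job["status"] != "dead":
        raise ValueError("only dead-lettered jobs can be re-queued this way")
    dead_job["status"] = "pending"
    dead_job["attempts"] = 0
    return dead_job
```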
Job lifecycle
Comparison
How OpenQueue compares to the most common alternatives for background job processing.
| Capability | OpenQueue | BullMQ | Celery | RQ |
|---|---|---|---|---|
| Infrastructure needed | PostgreSQL only | PostgreSQL + Redis | PostgreSQL + Redis / RabbitMQ | PostgreSQL + Redis |
| Multi-tenancy | ✓ Built-in | ✗ Manual | ✗ Manual | ✗ Manual |
| Built-in dashboard | ✓ Included | Needs Bull Board | Needs Flower | ✗ None |
| Dead-letter queue | ✓ Built-in | Partial | Partial | ✗ Manual |
| Visibility timeout | ✓ | ✓ | ✗ | ✗ |
| Scheduled jobs | ✓ Native | ✓ via cron | ✓ via Beat | Partial |
| SQL observability | ✓ Full | ✗ | ✗ | ✗ |
| Priority queues | ✓ Built-in | ✓ | ✓ Limited | ✗ |
| Python SDK | ✓ | ✗ | ✓ | ✓ |
| TypeScript SDK | ✓ | ✓ | ✗ | ✗ |
Comparison reflects typical default configurations. Some capabilities vary by version or plugin.
SDKs
Official clients for Python and TypeScript. Install, point at your API URL, and start enqueueing - no broker configuration, no connection pools to manage.
Python

```bash
pip install openqueue-client
```

```python
import time

from openqueue import OpenQueue

oq = OpenQueue(
    base_url="https://open-queue-ivory.vercel.app",
    api_token="oq_live_...",
)

# Enqueue a job
job = oq.enqueue("emails", {"to": "user@example.com"})

# Worker loop
while True:
    leased = oq.lease("emails", worker_id="w1")
    if leased:
        process(leased.job.payload)
        oq.ack(leased.job.id, leased.lease_token)
    else:
        time.sleep(1)  # queue empty; back off briefly
```

TypeScript

```bash
npm i @ravin-d-27/openqueue
```

```typescript
import { OpenQueue } from "@ravin-d-27/openqueue";

const oq = new OpenQueue({
  baseUrl: "https://open-queue-ivory.vercel.app",
  apiToken: "oq_live_...",
});

// Enqueue a job
const job = await oq.enqueue("emails", {
  to: "user@example.com",
});

// Worker loop
const leased = await oq.lease("emails", "worker-1");
if (leased) {
  await process(leased.job.payload);
  await oq.ack(leased.job.id, leased.leaseToken);
}
```

Get started
Sign up, get your API token, and start enqueueing jobs in under five minutes. No Redis. No Celery. No ops overhead.
Free to use · Open source · MIT license