TL;DR: BullMQ is a job queue library for Node.js backed by Redis. Your app adds jobs to the queue (send this email, resize this image, run this report), and a separate worker process picks them up and does the work. Redis is the storage layer that holds the queue while jobs wait. BullMQ handles retries, delays, scheduling, and concurrency limits automatically. It's the classic, battle-tested way to run background work when you have a persistent Node.js server. If you're on serverless/Vercel, Inngest is usually a better fit — but understanding BullMQ makes you fluent in a pattern that shows up everywhere.
Why You Need Background Jobs at All
Every web app eventually hits the same wall. A user does something — signs up, uploads a file, completes a purchase — and your app needs to do more than just save a record to the database. It needs to send an email. Resize a photo. Generate a PDF. Sync data to another service.
The naive approach: do it all inside the same API route that handled the user's request. The user clicks "Submit" and waits while your server makes three external API calls in sequence. If any one of them takes 5 seconds, the user sits staring at a loading spinner. If one fails, the whole thing might crash and the user gets an error — even though their data was actually saved just fine.
The problem gets worse on modern hosting. Serverless functions time out after 10–30 seconds. Even traditional servers have limits. And you can't predict when an email service will be slow or when an image processing library will take longer than expected.
The solution is background jobs: finish the user's request instantly, then hand off the slow work to a separate process that runs independently. The user gets their success screen immediately. The email goes out 2 seconds later. If the email fails, BullMQ retries it automatically. Nothing blocks anything.
This pattern is so common that there are dedicated tools for it. BullMQ is one of the most widely used in the Node.js ecosystem. If you want a broader overview of the concept first, read what background jobs are and why you need them.
What BullMQ Actually Is
BullMQ is a Node.js library that implements a job queue — a structured list of tasks waiting to be processed. The "MQ" stands for Message Queue, which is the broader category of systems BullMQ belongs to.
There are three pieces to understand:
The Queue. This is where your app deposits work. When something needs to happen in the background, your app creates a job and adds it to the queue. The queue is just a named list — you might have a queue called email, another called image-processing, another called reports. Jobs sit in the queue until a worker picks them up.
The Worker. This is a separate process — or multiple processes — that watches the queue and runs jobs one by one (or in parallel, if you configure it). The worker contains the actual code that sends the email, resizes the image, or generates the report. It never talks to users directly; it just processes queue items.
Redis. This is the database that stores the queue between your app and your worker. When your app adds a job, BullMQ writes it to Redis. When a worker is ready for more work, it reads from Redis. Redis is what allows your app and your worker to be completely separate processes — they both connect to Redis, but they don't need to know about each other directly.
Think of it like a restaurant kitchen. The front-of-house (your app) takes orders and writes them on a ticket (Redis). The kitchen (the worker) reads the tickets and cooks. The dining room doesn't need to know how the kitchen works, and the kitchen doesn't need to talk to customers. The ticket system (BullMQ + Redis) is what makes it work smoothly.
What Redis Is in This Context
Redis is a fast, in-memory database. You don't need to understand the internals to work with BullMQ — but you do need to know it exists and why BullMQ requires it.
The key properties that make Redis ideal for a job queue:
- Fast. Redis handles tens of thousands of operations per second. Adding a job to the queue is nearly instantaneous — it won't slow down your user-facing request.
- Persistent. Redis runs as its own process, so queued jobs survive if your Node.js app crashes or restarts. And even though Redis keeps data in memory, it can also write to disk, so jobs can survive a Redis restart. The work isn't lost.
- Shared. Multiple processes can all connect to the same Redis instance. Your web server, your workers, and any other services can all read from and write to the same queue without conflicts.
When you're developing locally, you typically run Redis on your machine (often via Docker). In production, you use a managed Redis service like Redis Cloud, Upstash, or the Redis service offered by Railway, Render, or similar platforms.
BullMQ is built entirely on top of Redis — it's not optional. No Redis means no BullMQ. For a deeper look at what Redis does across different use cases, see our explainer on Redis.
Real Scenarios Where AI Generates BullMQ Code
When you describe certain tasks to an AI coding assistant, it will often reach for BullMQ. Here are the most common scenarios:
Sending emails after user actions
Welcome emails, password reset emails, notification emails. These involve an external API call (to SendGrid, Resend, Postmark, etc.) and don't need to block the user's request. Your app adds a job to an email queue with the recipient and template data; the worker sends it.
Image and file processing
When a user uploads a photo, you often need to resize it to multiple dimensions, strip metadata, convert formats, or upload it to a CDN. Image processing is CPU-intensive and can take several seconds. BullMQ lets you accept the upload instantly and process it asynchronously — showing the original image right away and swapping in the processed version when it's ready.
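A sketch of that hand-off (the imageQueue name, the upload object, and the sizes are assumptions for illustration):

import { Queue } from 'bullmq';
import { redisConnection } from './redis';

const imageQueue = new Queue('image-processing', { connection: redisConnection });

// After the raw upload is stored, enqueue the heavy work instead of doing it inline
await imageQueue.add('resize', {
  imageUrl: upload.url,     // hypothetical: wherever your storage layer put the file
  sizes: [256, 512, 1024]   // target widths for the worker to generate
});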
Scheduled and recurring tasks
BullMQ has built-in support for repeating jobs — the equivalent of cron jobs. Send a weekly digest email every Monday at 8am. Run a database cleanup job every night at 2am. Check for overdue subscriptions every hour. These are defined once and BullMQ handles the scheduling.
Webhooks and third-party sync
When your app receives a webhook from Stripe, GitHub, or another service, you want to acknowledge it instantly (return a 200 status) and then process it asynchronously. BullMQ is a clean way to do this — receive the webhook, add the payload as a job, return 200 immediately, process the job in the background.
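A minimal sketch of that flow, assuming an existing Express app (app) and a queue named webhooks, both made up for illustration:

import { Queue } from 'bullmq';
import { redisConnection } from './redis';

const webhookQueue = new Queue('webhooks', { connection: redisConnection });

// Acknowledge fast, process later
app.post('/webhooks/stripe', async (req, res) => {
  await webhookQueue.add('stripe-event', { payload: req.body });
  res.sendStatus(200); // Stripe sees success immediately and won't keep retrying
});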
AI-heavy operations
Generating text with an LLM, running an image through a vision model, processing a document through an AI pipeline — these operations can take 10–60 seconds. You never want a user's browser sitting there waiting. Queue the job, return a job ID, and let the frontend poll for completion or use a websocket to push the result when it's ready.
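A hedged sketch of the queue-and-poll pattern, again assuming Express; the routes and queue name are invented for illustration:

import { Queue } from 'bullmq';
import { redisConnection } from './redis';

const aiQueue = new Queue('ai-generation', { connection: redisConnection });

// Kick off the slow work and hand back a job ID
app.post('/api/generate', async (req, res) => {
  const job = await aiQueue.add('generate-summary', { documentId: req.body.documentId });
  res.json({ jobId: job.id });
});

// The frontend polls this until the state is 'completed'
app.get('/api/jobs/:id', async (req, res) => {
  const job = await aiQueue.getJob(req.params.id);
  if (!job) return res.status(404).json({ error: 'Job not found' });
  res.json({ state: await job.getState(), result: job.returnvalue });
});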
Reading BullMQ Code Your AI Generated
Here's what a typical BullMQ setup looks like. Don't worry about memorizing it — the goal is to be able to read it when your AI generates it.
Adding a job to the queue (inside your API route or wherever the triggering event happens):
import { Queue } from 'bullmq';
import { redisConnection } from './redis';
// Create a named queue
const emailQueue = new Queue('email', { connection: redisConnection });
// Inside your API handler — after saving the user to the database
await emailQueue.add('send-welcome', {
to: user.email,
name: user.name,
plan: user.plan
});
The Queue is named 'email'. The job is named 'send-welcome'. The object you pass is the job's data — whatever the worker will need to do the work.
The worker — usually a separate file, often called worker.ts or workers/email.ts:
import { Worker } from 'bullmq';
import { redisConnection } from './redis';
import { sendEmail } from './email';
const worker = new Worker('email', async (job) => {
if (job.name === 'send-welcome') {
await sendEmail({
to: job.data.to,
subject: 'Welcome!',
template: 'welcome',
data: { name: job.data.name }
});
}
}, { connection: redisConnection });
worker.on('completed', (job) => {
console.log(`Job ${job.id} completed`);
});
worker.on('failed', (job, err) => {
console.error(`Job ${job?.id} failed:`, err);
});
The worker listens on the same 'email' queue. When a job arrives, it runs the function you provided. job.data is whatever you passed when you added the job. The event listeners at the bottom surface completions and failures in your logs.
The Redis connection config — usually a small shared file:
// redis.ts
export const redisConnection = {
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
password: process.env.REDIS_PASSWORD
};
Both the Queue and the Worker import this same connection config so they're both pointing at the same Redis instance.
Common BullMQ Patterns AI Will Generate
Beyond the basics, here are the patterns that show up constantly in AI-generated BullMQ code:
Job options — retries and delays
await emailQueue.add('send-welcome', { to: user.email }, {
  attempts: 3,       // Run the job up to 3 times total before giving up
  backoff: {
    type: 'exponential',
    delay: 1000      // Wait 1s before the first retry, then double it each time
  },
  delay: 5000        // Wait 5 seconds before even starting the job
});
attempts is the total number of times BullMQ will run the job before giving up (the first run counts as attempt one, so attempts: 3 allows at most two retries). backoff controls how long to wait between retries; exponential backoff means each retry waits longer than the last, which is the right default for external APIs that might be temporarily overloaded. delay postpones the job's first run.
Repeating (scheduled) jobs
await reportsQueue.add(
'weekly-digest',
{ reportType: 'user-summary' },
{
repeat: {
pattern: '0 8 * * 1' // Every Monday at 8am (cron syntax)
}
}
);
The pattern field takes standard cron syntax. BullMQ stores this in Redis and automatically re-adds the job after each run.
Job priority
// High-priority job — gets processed first
await emailQueue.add('password-reset', { to: user.email }, {
priority: 1
});
// Low-priority — can wait
await emailQueue.add('weekly-newsletter', { segment: 'all' }, {
priority: 10
});
Lower numbers = higher priority. Critical transactional emails (password reset, 2FA codes) jump the queue ahead of bulk marketing sends.
Concurrency control
const worker = new Worker('image-processing', async (job) => {
await resizeImage(job.data.imageUrl, job.data.sizes);
}, {
connection: redisConnection,
concurrency: 3 // Process 3 jobs at a time, never more
});
Image processing is CPU-heavy. Setting concurrency: 3 means this worker will handle 3 jobs simultaneously but won't take on a 4th until one finishes. This prevents your server from being overwhelmed.
BullMQ vs. Inngest vs. setTimeout: The Real Comparison
When your AI reaches for a background job solution, it's usually choosing between three options. Here's when each makes sense:
setTimeout / setInterval
The quick hack that feels like it should work. Schedule something to run 5 seconds from now using setTimeout. The problem: setTimeout only lives in memory. If your Node.js process restarts — and in production, processes restart all the time — the scheduled work is gone, silently. No retries, no persistence, no visibility. Fine for toy apps. A liability for anything real.
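For illustration, this is the kind of code that looks harmless but loses work on every restart (sendWelcomeEmail is a hypothetical helper):

// Lives only in this process's memory
setTimeout(() => {
  sendWelcomeEmail(user.email);
}, 5000);
// If the process restarts or crashes in those 5 seconds, the email silently never sends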
BullMQ
Battle-tested, feature-complete, widely deployed. Persists jobs in Redis, handles retries, supports scheduling, gives you concurrency control, and has a UI dashboard (Bull Board) for inspecting queue state. The cost: you have to run and manage Redis yourself, and your worker needs to be a persistent Node.js process. Best fit for Express/Fastify/NestJS apps running on a VPS, Railway, Render, or a dedicated server.
Inngest
The managed, serverless alternative. No Redis to run. No worker process to manage. Your job functions are just API routes that Inngest calls. Works natively with Next.js, Remix, and other serverless frameworks. Costs money at scale and adds an external dependency. Best fit for Next.js apps on Vercel or any serverless deployment.
The short version: if you already have a persistent server and need maximum control, use BullMQ. If you're serverless and want zero infrastructure overhead, use Inngest. If you're in doubt and just getting started, Inngest has a lower barrier to entry. Either way, avoid setTimeout for anything that matters.
For a deeper comparison of the broader category, read what message queues are and when you need them.
What AI Gets Wrong About BullMQ
AI coding assistants generate competent BullMQ code most of the time, but there are a handful of mistakes that show up repeatedly. Worth knowing about before you ship.
Running the worker inside the web server process
AI sometimes puts the worker initialization right inside your main server.ts or app.ts file. This works locally but causes problems in production: heavy jobs compete with user requests for the same CPU and memory, every deploy or crash of the web server interrupts in-flight jobs, and if you scale the web server to multiple instances you're silently scaling your workers too, with no independent control over job throughput. Workers should run as a separate process, separate deployment, or separate Dockerfile, as sketched below.
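A minimal sketch of what "separate process" means in practice; the file names and the processor function are assumptions:

// workers/index.ts: started on its own (e.g. node dist/workers/index.js),
// or deployed as its own service on Railway/Render
import { Worker } from 'bullmq';
import { redisConnection } from '../redis';
import { processEmailJob } from './email'; // hypothetical processor function

const worker = new Worker('email', processEmailJob, { connection: redisConnection });

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err);
});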
Forgetting the failed event listener
The most common reason BullMQ failures are invisible: no worker.on('failed', ...) listener. BullMQ catches errors inside job processors and stores them as failed jobs in Redis — it won't crash your app or surface anything in your logs unless you explicitly listen for the 'failed' event. Always add it. Better yet, integrate it with an error monitoring service like Sentry so failures alert you automatically.
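A sketch of what that looks like with Sentry, assuming @sentry/node is already installed and initialized elsewhere in the worker process:

import * as Sentry from '@sentry/node';

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err);
  // Attach job context so the alert tells you which job broke and with what data
  Sentry.captureException(err, {
    extra: { jobId: job?.id, jobName: job?.name, data: job?.data }
  });
});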
Not configuring Redis for production
AI-generated Redis connection config often defaults to localhost:6379 with no authentication. Fine for local development. In production, you need a real Redis URL (usually from an environment variable), a password, and often TLS enabled. Look for REDIS_URL in your environment variables — if it's not there, the job queue will silently fail to connect in production.
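One reasonable pattern, sketched under the assumption that your provider hands you a single REDIS_URL such as rediss://default:password@host:6379:

// redis.ts
const url = new URL(process.env.REDIS_URL || 'redis://localhost:6379');

export const redisConnection = {
  host: url.hostname,
  port: Number(url.port) || 6379,
  username: url.username || undefined,
  password: url.password || undefined,
  // rediss:// means the provider requires TLS
  tls: url.protocol === 'rediss:' ? {} : undefined
};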
Adding jobs without error handling
The queue.add() call can fail if Redis is unavailable. AI often generates it without a try/catch. If your Redis connection drops, jobs added during that window are lost. Wrap critical job additions in error handling and consider whether you need a fallback strategy.
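A sketch of the defensive version; the fallback is a placeholder, since what makes sense depends on your app:

try {
  await emailQueue.add('send-welcome', { to: user.email });
} catch (err) {
  // Redis was unreachable; don't let this crash the user's request
  console.error('Failed to enqueue welcome email', err);
  // Optional fallback: persist the intent somewhere durable and re-enqueue later
}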
Using the wrong job ID for deduplication
BullMQ lets you specify a custom job ID to prevent duplicate jobs. AI sometimes misunderstands this and sets IDs in ways that accidentally deduplicate jobs that should run multiple times, or allows duplicates when they should be prevented. If deduplication matters for your use case (e.g., "only send one welcome email per user"), verify the job ID logic explicitly.
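The deduplication hook is the jobId option: if a job with the same ID already exists in Redis, the new add is silently ignored. A sketch for the welcome-email case:

// One welcome email per user: a second add with the same jobId is a no-op
await emailQueue.add(
  'send-welcome',
  { to: user.email },
  { jobId: `welcome-${user.id}` }
);

Note that the guard only holds while a job with that ID still exists in Redis; if completed jobs are removed immediately, the same ID can be reused later.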
Troubleshooting When Jobs Fail Silently
Silent failures are the #1 BullMQ pain point. The job doesn't run, nothing crashes, nothing appears in the logs — it just... doesn't happen. Here's how to diagnose it.
Step 1: Verify Redis is actually running
Locally, run redis-cli ping in your terminal. If you get back PONG, Redis is running. If you get an error, Redis isn't running — start it with Docker (docker run -p 6379:6379 redis) or install it directly. In production, check your environment variables and verify the Redis service is healthy in your hosting dashboard.
Step 2: Confirm the worker process is running
The worker is a separate process. If you only start your web server but not your worker, jobs accumulate in the queue but never get processed. Check your package.json for scripts — there's often a "worker" or "start:worker" script. You need to run both your server and your worker.
Step 3: Check for failed jobs in Redis
BullMQ stores failed jobs in Redis with error details. Install Bull Board (the queue dashboard) to see queue state visually, or use the BullMQ API to query failed jobs programmatically:
const failedJobs = await myQueue.getFailed();
console.log(failedJobs.map(j => ({
id: j.id,
name: j.name,
failedReason: j.failedReason
})));
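Once the underlying issue is fixed, failed jobs can be retried from the same API; retry() moves a failed job back into the queue for another attempt:

for (const job of failedJobs) {
  await job.retry();
}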
Step 4: Add explicit event listeners
If you're not already listening to worker events, add them now:
worker.on('active', (job) => {
console.log(`Processing job ${job.id}: ${job.name}`);
});
worker.on('completed', (job) => {
console.log(`Job ${job.id} completed`);
});
worker.on('failed', (job, err) => {
console.error(`Job ${job?.id} failed with: ${err.message}`);
console.error(err.stack);
});
worker.on('error', (err) => {
// Worker-level errors (e.g. Redis connection lost)
console.error('Worker error:', err);
});
The 'error' event is different from 'failed'. 'failed' fires when a job processor throws. 'error' fires for infrastructure-level problems like a Redis disconnection.
Step 5: Check queue and worker name mismatch
The queue name in your app code and the queue name in your worker must be identical — case-sensitive, exact match. A mismatch means jobs go into one queue and the worker is listening on a different one. It's a silent failure by design — BullMQ won't warn you that a queue has no workers.
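For illustration, here's the bug in two lines plus the usual guard against it (the shared constant is just a convention, not a BullMQ feature):

import { Queue, Worker } from 'bullmq';
import { redisConnection } from './redis';

// Bug: the app writes to 'email' but the worker listens on 'emails', so nothing ever runs
const queue = new Queue('email', { connection: redisConnection });
const worker = new Worker('emails', async (job) => { /* never called */ }, { connection: redisConnection });

// Guard: define the name once and import it everywhere it's needed
export const EMAIL_QUEUE_NAME = 'email';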
Frequently Asked Questions
What is BullMQ in simple terms?
BullMQ is a library that lets your Node.js app hand off slow or heavy tasks to a background worker, so users don't have to wait for them. You add a job to a queue (like adding a ticket to a to-do pile), and a separate worker process picks it up and handles it. Redis — a fast in-memory database — is the middleman that stores the queue. BullMQ handles automatic retries, job priorities, delays, and scheduled tasks on top of that.
Why does BullMQ need Redis?
Redis is the storage layer for the queue. When your app adds a job, BullMQ writes it to Redis. When a worker is ready, it reads the next job from Redis and processes it. Redis is fast enough to handle thousands of job operations per second without slowing things down, and it persists jobs so they survive if your Node.js process crashes. Without Redis, BullMQ has nowhere to store the queue — it's a hard requirement, not an optional add-on.
What's the difference between BullMQ and Inngest?
BullMQ runs inside your own infrastructure — you manage a Redis instance and a worker process yourself. It gives you more control and can handle very high throughput, but requires more setup. Inngest is a managed cloud service that handles all that infrastructure for you; your job functions are just API routes that Inngest calls. For serverless apps (Next.js on Vercel), Inngest is almost always simpler. For apps with a persistent Node.js server where you want full control, BullMQ is a solid choice.
Why is my BullMQ job failing silently?
Silent failures are the most common BullMQ pain point. The job runs in a worker process that's separate from your main app — any errors thrown inside a job processor function are caught by BullMQ and stored in Redis as a failed job, but they won't crash your app or show up in your main server logs unless you explicitly set up event listeners. Add a worker.on('failed', (job, err) => console.error(job, err)) handler to surface failures. Better yet, integrate an error monitoring tool like Sentry.
Can I use BullMQ with Next.js?
Technically yes, but it's awkward. BullMQ workers need a long-running Node.js process to process jobs, and Next.js on Vercel runs as serverless functions that spin up and down on demand. You can add jobs to a BullMQ queue from a Next.js API route, but the worker has to run somewhere persistent — a separate server, a background process on a VPS, or a dedicated worker deployment. If you're fully serverless on Next.js, Inngest is a much better fit. BullMQ shines when you have a traditional Node.js server (Express, Fastify, NestJS) running 24/7.
Keep Building
Now that you understand what BullMQ does, here's where to go next:
- What Are Background Jobs? — the broader concept BullMQ implements
- What Is Inngest? — the serverless alternative, better for Next.js
- What Is Redis? — the database BullMQ depends on
- What Are Message Queues? — the architecture pattern BullMQ belongs to
- What Is Error Monitoring? — how to catch silent BullMQ failures before users notice