TL;DR — The Quick Verdict: If you're building a Next.js app on Vercel or any serverless platform, use Inngest. It's easier to set up, doesn't need Redis, and fits the serverless model perfectly. If you're running a traditional Node.js server (Express, Fastify, NestJS) on a VPS or dedicated host and want full control over your job processing, use BullMQ. It's free, battle-tested, and handles massive throughput. For most AI-enabled builders shipping their first few apps? Start with Inngest. You can always graduate to BullMQ when you need the power.

Why This Comparison Matters for AI-Enabled Builders

Here's something that trips up a lot of vibe coders: you ask your AI to "add email sending to my app" or "process uploaded images in the background," and the AI makes a choice for you. It picks a queue processing system, wires it up, and moves on. You might not even realize a decision was made.

But that decision has real consequences. It determines whether you need to run a Redis server. Whether your background jobs work on Vercel or only on a VPS. Whether you're paying $0/month or $50/month. Whether debugging a failed job takes 30 seconds or 30 minutes.

The two most common systems AI reaches for in the Node.js ecosystem are BullMQ and Inngest. They solve the same core problem — running work outside the request-response cycle — but they approach it from opposite directions. Understanding the difference means you can guide your AI to the right choice instead of discovering three weeks in that you picked the wrong one.

This isn't academic. I've seen builders spend a week setting up BullMQ with Redis on Vercel (it doesn't work well) and others overpaying for Inngest when they're already running a $5 VPS that could handle everything for free. Let's make sure you don't make either mistake.

The Two Contenders: A Quick Introduction

BullMQ — The Self-Hosted Powerhouse

BullMQ is an open-source job queue library for Node.js. It uses Redis as its storage layer — every job you create gets written to Redis, and worker processes read from Redis to pick up and execute jobs. It's been around since 2020 (evolved from the original Bull library, which dates back to 2013), and it's one of the most downloaded job queue packages in the Node.js ecosystem.

Think of it like a to-do list that lives in Redis. Your app adds items to the list. Separate worker processes — which can run on the same server or different servers — check the list, grab the next item, and do the work. If a job fails, BullMQ can automatically retry it. If your server crashes, the jobs are still safe in Redis waiting to be picked up when the worker comes back online.

The key word is self-hosted. You run Redis. You run the workers. You manage the infrastructure. In exchange, you get complete control, zero per-job fees, and the ability to process thousands of jobs per second.

Inngest — The Managed Serverless Approach

Inngest takes the opposite approach. Instead of you running infrastructure, Inngest is a managed service that orchestrates your background jobs by calling your API endpoints via HTTP. You define functions in your codebase, deploy your app normally, and Inngest's cloud service handles the scheduling, retries, and orchestration.

Think of it like hiring a project manager. You tell Inngest "when a user signs up, call this function, then wait 24 hours, then call this other function." Inngest handles the timing, the retries if something fails, and the tracking of what ran and what didn't. Your code just needs to be deployed somewhere that accepts HTTP requests — which includes Vercel, Netlify, Railway, or any hosting platform.

The key word is managed. No Redis. No workers. No background processes to keep alive. You write functions, Inngest calls them. In exchange, you're relying on a third-party service and paying per function run once you exceed the free tier.

Head-to-Head: Five Things That Actually Matter

1. AI Code Quality — What Your AI Actually Generates

This is the comparison nobody else makes, and it's the one that matters most if you're building with AI.

BullMQ wins on code accuracy. When you ask Claude or GPT to set up BullMQ, the generated code is almost always correct. BullMQ has been in training data since 2020, the API has been stable for years, and the patterns are dead simple: create a Queue, create a Worker, call queue.add(). Here's what typical AI-generated BullMQ code looks like:

// queue.js — AI gets this right almost every time
import { Queue } from 'bullmq';
import { redis } from './redis';

export const emailQueue = new Queue('emails', { connection: redis });

// worker.js
import { Worker } from 'bullmq';
import { redis } from './redis';

const worker = new Worker('emails', async (job) => {
  await sendEmail(job.data.to, job.data.subject, job.data.body);
}, { connection: redis });

// In your API route
await emailQueue.add('welcome-email', {
  to: user.email,
  subject: 'Welcome!',
  body: 'Thanks for signing up.'
});

Clean, correct, predictable. AI has seen this pattern millions of times.

Inngest is trickier for AI. Inngest's SDK has gone through significant API changes across versions. The v2 and v3 SDKs look quite different, and AI models sometimes generate a mashup of old and new syntax. Function definitions, event schemas, and the serve handler have all changed. Here's what correct modern Inngest code looks like — but your AI might not generate exactly this:

// inngest/client.ts
import { Inngest } from 'inngest';
export const inngest = new Inngest({ id: 'my-app' });

// inngest/functions/welcome-email.ts
export const sendWelcomeEmail = inngest.createFunction(
  { id: 'send-welcome-email' },
  { event: 'user/signed.up' },
  async ({ event, step }) => {
    await step.run('send-email', async () => {
      await sendEmail(event.data.email, 'Welcome!', 'Thanks for signing up.');
    });
  }
);

// app/api/inngest/route.ts (Next.js App Router)
import { serve } from 'inngest/next';
import { inngest } from '@/inngest/client';
import { sendWelcomeEmail } from '@/inngest/functions/welcome-email';

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [sendWelcomeEmail],
});

See how there's more structure? More files? More concepts (events, steps, serve handlers)? AI gets this wrong more often — not because it's harder conceptually, but because the training data includes outdated SDK patterns. Pro tip: always paste the current Inngest docs or specify the SDK version in your prompt.

2. Setup Complexity — How Fast Can You Ship?

Inngest wins on time-to-working. Here's what each one requires to go from zero to a working background job:

BullMQ setup checklist:

  • Install BullMQ package (npm install bullmq)
  • Install and run Redis (locally, Docker, or managed service)
  • Configure Redis connection (host, port, password)
  • Create queue definition file
  • Create worker file
  • Start worker as a separate process (or thread)
  • Add queue.add() calls to your API routes
  • Set up worker process management (PM2, systemd, or Docker)
  • Configure Redis persistence so jobs survive restarts
  • Optional: install Bull Board or Arena for a dashboard

That's 9 required steps (10 with the dashboard) before your first job runs. If you've never set up Redis, add another hour of troubleshooting.
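
One piece worth pinning down early: the ./redis file the earlier queue and worker snippets import. This is the config AI most often hard-codes. A minimal sketch, assuming the ioredis client (the library BullMQ uses internally) and a REDIS_URL environment variable:

// redis.js — one shared connection config, read from the environment
import IORedis from 'ioredis';

// BullMQ requires maxRetriesPerRequest: null for its blocking commands
export const redis = new IORedis(
  process.env.REDIS_URL ?? 'redis://localhost:6379',
  { maxRetriesPerRequest: null }
);

Locally the fallback points at a default Redis install; in production, REDIS_URL comes from Upstash, ElastiCache, or wherever your managed Redis lives.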

Inngest setup checklist:

  • Install Inngest package (npm install inngest)
  • Create Inngest client
  • Create function file
  • Add serve route to your app
  • Run Inngest dev server (npx inngest-cli@latest dev)
  • Send an event to trigger the function

That's 6 steps, and none of them involve setting up external infrastructure. The Inngest dev server gives you a local dashboard for free. For most vibe coders building their first background job system, this simplicity is a massive advantage.
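
The last item — sending an event — is a single call from any API route or server action. A minimal sketch that triggers the sendWelcomeEmail function from the earlier example (the event name and data shape are just the ones that example uses):

// app/api/signup/route.ts — fire the event after creating the user
import { inngest } from '@/inngest/client';

// `user` comes from your own signup handler
await inngest.send({
  name: 'user/signed.up',
  data: { email: user.email },
});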

When you ask Claude to "add background email processing to my Next.js app" and it picks Inngest, you can have jobs running in under 10 minutes. With BullMQ, expect 30–60 minutes — more if Redis gives you trouble.

3. Pricing — What It Actually Costs to Run

This is where the conversation gets real. Let's break it down honestly.

BullMQ is free software. The library costs nothing. Your costs are infrastructure:

  • Local Redis: $0 (development only)
  • Managed Redis (Upstash free tier): $0 for up to 10,000 commands/day
  • Managed Redis (small instance): $10–30/month
  • VPS to run workers: $5–20/month (can share with your app server)
  • Total for a small-to-medium app: $5–50/month, or effectively $0 extra if Redis runs on a server you're already paying for

Inngest pricing is per function run:

  • Free tier: 25,000 function runs/month (generous for MVPs)
  • Pro plan: starts at $50/month for higher volume
  • Additional runs: billed per run beyond your plan's allowance
  • No infrastructure to manage: $0 on hosting for the job system itself

The real comparison: For a hobby project processing a few hundred jobs a day, both are effectively free. For a medium-scale app processing 50,000+ jobs/month, BullMQ on a $10 VPS is dramatically cheaper than Inngest's paid tier. For enterprise scale, BullMQ wins on raw cost but loses on the DevOps hours you'll spend managing Redis clusters, worker scaling, and monitoring.

The question isn't just "what does the software cost?" It's "what does your time cost?" If you're a solo builder and every hour you spend on infrastructure is an hour not building features, Inngest's managed approach might save you money even though it costs more on paper.

4. Scaling — What Happens When Your App Grows

BullMQ scales with muscle. Need more throughput? Add more workers. Need faster processing? Increase concurrency per worker. Need to handle spikes? Redis handles thousands of operations per second without breaking a sweat. BullMQ has been used in production systems processing millions of jobs per day. The scaling model is straightforward: more hardware = more capacity.
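
Both of those knobs are just worker options. A minimal sketch reusing the email worker from earlier — the concurrency number is arbitrary, and sendEmail is your own email helper:

// worker.js — same worker, tuned for more throughput
import { Worker } from 'bullmq';
import { redis } from './redis';

const worker = new Worker('emails', async (job) => {
  await sendEmail(job.data.to, job.data.subject, job.data.body);
}, {
  connection: redis,
  concurrency: 10, // process up to 10 jobs in parallel inside this one process
});
// Need more than one machine can handle? Start more copies of this process.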

But scaling BullMQ means scaling Redis too. A single Redis instance has limits. At very high volumes, you might need Redis Cluster (which BullMQ supports but adds complexity). You need to monitor Redis memory usage, set up persistence correctly, and handle failover if Redis goes down.

Inngest scales invisibly. Because Inngest calls your functions via HTTP, scaling is about your app's ability to handle incoming requests — which your hosting platform (Vercel, Railway, Fly.io) usually handles automatically. Inngest manages its own infrastructure for scheduling and orchestration. You don't think about it.

The limitation? Inngest has rate limits based on your plan. And because every job is an HTTP request, each job invocation has the overhead of an HTTP round-trip — which is negligible for most use cases but matters when you're processing thousands of tiny jobs per second.

For the vast majority of AI-enabled builders — apps processing a few hundred to a few thousand jobs per day — both scale fine. This only becomes a differentiator at serious volume.

5. Debugging — What Happens When Jobs Fail

This is where Inngest genuinely shines, and it's worth highlighting because debugging background jobs is one of the most frustrating experiences for any builder.

BullMQ debugging is manual. When a job fails, BullMQ stores the error in Redis. To see it, you need to either:

  • Add event listeners in your code (worker.on('failed', ...)) to log failures
  • Connect to Redis directly and inspect failed jobs
  • Set up a dashboard like Bull Board, Arena, or Bull Monitor
  • Integrate with an error monitoring service like Sentry

Without explicit setup, BullMQ jobs fail silently. This is the single biggest pain point, and it catches almost every builder the first time. Your worker crashes, Redis has the error, but your app logs show nothing. You have no idea anything went wrong until a user complains that they never got their welcome email.
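
The listener approach from the first bullet above is only a few lines — a minimal sketch, with console.error standing in for your logger or Sentry:

// worker.js — make failures visible instead of silent
worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} (${job?.name}) failed:`, err.message);
});

worker.on('error', (err) => {
  console.error('Worker error:', err);
});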

Inngest debugging is built in. The Inngest dashboard — both local dev and cloud — shows every function run with its status, input data, output, errors, and retry history. You can see exactly what happened, when, and why. Failed runs show the full error message and stack trace. You can replay failed runs with one click.

For a vibe coder who's not used to managing background processes, Inngest's observability is a massive quality-of-life upgrade. You're not digging through Redis keys or setting up monitoring dashboards. It just works.

The Prompt Card: Getting Your AI to Pick the Right One

🤖 Copy-Paste Prompt: Background Jobs Decision

When you want BullMQ:

I need background job processing for my [Express/Fastify/NestJS] app
running on [my VPS / a persistent Node.js server]. Use BullMQ with Redis.

Requirements:
- Set up a Redis connection config (I'm using [local Redis / Upstash / 
  AWS ElastiCache])
- Create a queue for [describe your jobs: emails, image processing, etc.]
- Create a worker in a separate file
- Add error handling with worker.on('failed') logging
- Include graceful shutdown handling
- Show me how to start the worker alongside my app

Do NOT use Inngest. I want self-hosted job processing.

When you want Inngest:

I need background job processing for my Next.js app on Vercel. 
Use Inngest (latest SDK version — check docs at inngest.com/docs).

Requirements:
- Set up Inngest client and serve handler for Next.js App Router
- Create a function for [describe your jobs: emails, image processing, etc.]
- Use step.run() for each discrete operation
- Include proper event typing
- Show the inngest dev server command for local testing

Do NOT use BullMQ or Redis. I want serverless-compatible job processing.
Use ONLY the current Inngest SDK syntax (v3+).

Why specifying the SDK version matters: AI models are trained on data that includes years of blog posts, tutorials, and Stack Overflow answers. BullMQ's API has been stable, so old examples still work. But Inngest's SDK changed significantly between v1, v2, and v3. If you don't tell your AI which version to use, you might get code that mixes syntax from different eras — and the error messages won't be obvious about why. Always say "latest SDK" or "v3+" in your Inngest prompts.

What AI Gets Wrong About BullMQ and Inngest

AI models have predictable blind spots with both of these tools. Knowing them in advance saves you hours of debugging.

BullMQ — Common AI Mistakes

  • Forgetting the Redis connection. AI sometimes creates the queue and worker but hard-codes localhost:6379 instead of using an environment variable. This works in dev and breaks immediately in production. Always look for a Redis connection config that reads from process.env.REDIS_URL.
  • Not handling worker shutdown. AI rarely includes graceful shutdown code. Without it, killing your worker process while a job is mid-execution can leave that job in a "stuck" state. You need worker.close() on SIGTERM — see the sketch after this list.
  • Missing error listeners. AI generates the happy path. It creates the worker, processes jobs, and moves on. It almost never adds worker.on('failed') or worker.on('error') listeners. Those failures then vanish into Redis with no logging.
  • Putting the worker in the same process. AI sometimes imports the worker file directly into your API server. This technically works but defeats the purpose — a CPU-heavy job will block your API responses. Workers should run as a separate process.
  • Using the old Bull package. Some AI responses still use const Bull = require('bull') instead of BullMQ. Bull (without the "MQ") is the older version. It works, but BullMQ has better TypeScript support, better performance, and active maintenance. Always check that the import says bullmq.
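
For the shutdown point above, a minimal sketch — assuming the worker runs as its own Node process:

// worker.js — let in-flight jobs finish before the process exits
const shutdown = async () => {
  await worker.close(); // stops picking up new jobs and waits for active ones
  process.exit(0);
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);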

Inngest — Common AI Mistakes

  • Wrong SDK version syntax. The most common error. AI mixes createFunction syntax from different SDK versions. If you see createStepFunction or inngest.send() with an unexpected signature, you're probably looking at v1 or v2 code.
  • Missing the serve handler. AI sometimes creates Inngest functions but forgets the API route that serves them. Without the serve endpoint, Inngest can't discover or call your functions. In Next.js App Router, this is the app/api/inngest/route.ts file.
  • Not wrapping work in step.run(). Inngest's step functions need side effects wrapped in step.run() for reliable retry behavior. AI sometimes puts async work directly in the function body without the step wrapper, which makes retries replay the entire function instead of resuming from the failed step.
  • Forgetting environment variables. Inngest needs an INNGEST_EVENT_KEY and INNGEST_SIGNING_KEY in production. AI sets up the code but rarely reminds you to configure these in your hosting platform.
  • Not registering functions. AI creates the function files but doesn't add them to the functions array in the serve handler. Inngest won't know about functions that aren't registered.

When to Choose Each: The Decision Framework

Choose BullMQ When:

  • You run a persistent Node.js server — Express, Fastify, NestJS, or similar on a VPS, dedicated server, or container platform (Docker, Kubernetes). You have a long-running process that can host workers.
  • You already have Redis — if Redis is already in your stack for caching or sessions, adding BullMQ is almost free in terms of additional infrastructure.
  • You need high throughput — processing thousands of jobs per second, or you have CPU-intensive jobs that benefit from dedicated worker processes.
  • Budget is tight — BullMQ is free. Redis on the same VPS is free. For bootstrapped projects where every dollar counts, this matters.
  • You want full control — you need custom retry logic, priority queues, rate limiting, job dependencies, or other advanced features that BullMQ supports natively.
  • You're building with cron jobs — BullMQ has excellent built-in support for repeatable jobs on cron schedules, all managed through the same queue system.

Choose Inngest When:

  • You're on a serverless platform — Vercel, Netlify, Cloudflare Workers, or similar. No persistent process = no BullMQ workers. Inngest was designed for this world.
  • You want the fastest path to working — Inngest has less setup, built-in dev tools, and a dashboard that works out of the box. For MVPs and prototypes, speed matters.
  • You need multi-step workflows — Inngest's step functions make it natural to build "do A, then wait, then do B, then do C" workflows with automatic retry at each step.
  • You're a solo builder — no DevOps experience, no desire to manage Redis, no interest in monitoring infrastructure. Inngest handles all of that.
  • Observability matters — if you need to see what's running, what failed, and why — without setting up separate monitoring tools — Inngest's dashboard is a major advantage.
  • Event-driven architecture — if your app is built around events ("user signed up," "payment completed," "file uploaded"), Inngest's event-trigger model maps naturally to your mental model.

The real-world test: Ask yourself two questions. (1) "Does my app run on Vercel or a similar serverless platform?" If yes → Inngest. (2) "Am I already running a VPS with Redis?" If yes → BullMQ. If neither applies, Inngest is the safer default for most AI-enabled builders because it requires less infrastructure knowledge.

Side-by-Side at a Glance

Feature | BullMQ | Inngest
Type | Self-hosted library | Managed service
Requires Redis | Yes (hard requirement) | No
Works on Vercel | Awkward (no persistent workers) | Yes (designed for it)
Setup time | 30–60 minutes | 10–15 minutes
Cost | Free (+ Redis hosting) | Free tier, then $50+/mo
AI code accuracy | High (stable API) | Medium (SDK version issues)
Built-in dashboard | No (third-party add-ons) | Yes (local + cloud)
Multi-step workflows | Manual (chain jobs yourself) | Built-in (step.run)
Cron/scheduled jobs | Built-in repeatable jobs | Built-in cron triggers
Best for | VPS/container deployments | Serverless deployments
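
On the cron row specifically, both sides are a few lines of code. A hedged sketch of each — the queue, schedule, and report helper here are placeholders, not part of the earlier examples:

// BullMQ: a repeatable job, re-enqueued automatically on a cron schedule
await reportQueue.add('daily-report', {}, {
  repeat: { pattern: '0 9 * * *' }, // every day at 09:00
});

// Inngest: a cron trigger instead of an event trigger
export const dailyReport = inngest.createFunction(
  { id: 'daily-report' },
  { cron: '0 9 * * *' },
  async ({ step }) => {
    await step.run('build-report', async () => {
      await buildDailyReport(); // placeholder for your own report logic
    });
  }
);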

A Real Scenario: User Sign-Up Flow

Let's make this concrete. You're building a SaaS app. When a user signs up, you need to:

  1. Send a welcome email
  2. Create a Stripe customer record
  3. Wait 24 hours, then send an onboarding tip email
  4. Wait 3 days, then send a "how's it going?" email

With BullMQ, you'd create a queue, add the first two jobs immediately, then add delayed jobs for steps 3 and 4:

// On user signup
await emailQueue.add('welcome', { userId: user.id });
await stripeQueue.add('create-customer', { userId: user.id });
await emailQueue.add('onboarding-tip', { userId: user.id }, {
  delay: 24 * 60 * 60 * 1000 // 24 hours in milliseconds
});
await emailQueue.add('check-in', { userId: user.id }, {
  delay: 3 * 24 * 60 * 60 * 1000 // 3 days
});

Each job runs independently. If the Stripe call fails, it retries without affecting the emails. But if you need step 3 to depend on step 2 (only send the tip if Stripe setup succeeded), you have to wire that logic yourself — checking status, chaining jobs manually.
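
One way to wire that dependency by hand: drop the onboarding-tip add from the signup route and have the Stripe worker enqueue it once its own job succeeds — a minimal sketch, reusing the queues from above (createStripeCustomer is your own helper):

// stripe-worker.js — chain the next step manually, only on success
import { Worker } from 'bullmq';
import { redis } from './redis';
import { emailQueue } from './queue';

// 'stripe' = whatever name your stripeQueue was created with
const stripeWorker = new Worker('stripe', async (job) => {
  await createStripeCustomer(job.data.userId);
  // Stripe setup succeeded, so it's safe to schedule the onboarding tip now
  await emailQueue.add('onboarding-tip', { userId: job.data.userId }, {
    delay: 24 * 60 * 60 * 1000,
  });
}, { connection: redis });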

With Inngest, the same flow is one function with steps:

export const onboardingFlow = inngest.createFunction(
  { id: 'user-onboarding' },
  { event: 'user/signed.up' },
  async ({ event, step }) => {
    await step.run('send-welcome', async () => {
      await sendWelcomeEmail(event.data.userId);
    });

    const customer = await step.run('create-stripe', async () => {
      return await createStripeCustomer(event.data.userId);
    });

    await step.sleep('wait-24h', '24h');

    await step.run('send-onboarding-tip', async () => {
      await sendOnboardingTip(event.data.userId);
    });

    await step.sleep('wait-3d', '3d');

    await step.run('send-check-in', async () => {
      await sendCheckIn(event.data.userId);
    });
  }
);

Notice the difference. With Inngest, it reads like a story: do this, then wait, then do that. Each step is individually retryable. If step 3 fails, Inngest retries from step 3 — not from the beginning. The step.sleep() calls handle the waiting without you managing delayed jobs.

This workflow pattern is where Inngest's design really shines. BullMQ can do all of this, but you're assembling it from primitives instead of declaring it as a flow.

Frequently Asked Questions

Is BullMQ or Inngest better for a Next.js app on Vercel?

Inngest is almost always the better choice for Next.js on Vercel. BullMQ needs a persistent Node.js process and a Redis server — neither of which Vercel provides natively. Inngest works by calling your API routes as HTTP endpoints, which fits the serverless model perfectly. You can have background jobs running in minutes without any infrastructure to manage. The only exception: if you're already running a separate backend server alongside your Next.js frontend, BullMQ on that server is a viable option.

Can I use BullMQ without Redis?

No. Redis is a hard requirement for BullMQ — it's the storage layer that holds your entire job queue. Without Redis running and accessible, BullMQ won't even start. You need to either run Redis locally (free), use a managed Redis service like Upstash or AWS ElastiCache ($0–30/month), or spin up Redis in Docker. If managing Redis feels like too much overhead for your project, that's a strong signal that Inngest is the better fit.

Which one does AI generate better code for?

AI models generally produce more reliable BullMQ code because BullMQ has been around since 2020 (with its predecessor Bull dating to 2013) and has a massive presence in training data. The patterns are simple and stable: create a queue, create a worker, add jobs. Inngest code is trickier because the SDK has changed significantly across versions, and AI tends to mix syntax from different eras. Always specify the Inngest SDK version in your prompt, or paste the current docs directly into your AI's context window.

How much does each one cost to run?

BullMQ itself is free and open-source. Your cost is hosting Redis — free locally, around $10–30/month for a managed instance. If Redis runs on the same VPS as your app, the additional cost is effectively zero. Inngest has a generous free tier (25,000 function runs/month) and paid plans starting at $50/month. For hobby projects and MVPs, both can run at zero cost. At production scale, BullMQ is significantly cheaper in raw hosting costs but more expensive in time spent managing infrastructure.

Can I switch from BullMQ to Inngest later (or vice versa)?

Yes, but it's not a drop-in swap. Your core job logic — the actual work your functions do (sending emails, processing images, etc.) — is completely portable. But the wiring is different. BullMQ uses queue.add() and worker processor functions; Inngest uses inngest.createFunction() and event triggers. Plan for a few hours of refactoring per job type. The good news: your AI can handle most of the migration if you give it the existing job code and the target SDK's current documentation.