TL;DR: Serverless functions are tiny pieces of backend code that run on-demand — you don't manage servers, you don't pay when they're idle, and Vercel/Netlify deploy them automatically when your AI writes an API route. Every file inside the api/ directory of a Next.js app (or server/api/ in a Nuxt app) becomes one. You wake it up by making an HTTP request to it, it does its job, and it goes back to sleep. The platform handles all the infrastructure. You handle the logic.
Why AI Coders Need to Understand This
Here's something that surprises a lot of vibe coders: you've almost certainly been using serverless functions for months without knowing it.
Every time you told your AI to "add a backend route," "create an API endpoint," "handle form submissions," or "connect to a database," and you deployed to Vercel or Netlify — the AI was writing serverless functions. The platforms were quietly running them for you. No server setup. No DevOps. Just magic.
That magic is mostly great. But it has specific rules, and when your AI doesn't follow them, things break in ways that are genuinely confusing if you don't know what's happening. Functions time out. Files disappear. API keys stop working in production. The logs say things like "Task timed out after 10.00 seconds" and you have no idea why it worked locally.
Understanding what serverless functions actually are — what they do, what they can't do, and how they fail — means you can fix those problems instead of asking AI to fix them over and over.
This isn't about becoming an infrastructure engineer. It's about having just enough context to know when something your AI wrote is going to break in production, and why. Think of it the same way you'd think about understanding environment variables — you don't need a computer science degree to understand them, but if you don't know they exist, you'll be confused forever.
The Real Scenario: From "Add a Contact Form" to Production
Let's trace exactly what happens when you ask AI to build something that involves a backend.
You're building a portfolio site. It's static — just HTML, CSS, some React. Then you decide you want a contact form so visitors can email you. You ask your AI:
Your prompt:
"Add a contact form to my site. When someone submits it, send me an email using Resend."
The AI generates a few things. A React form component in your frontend. And — here's the important part — a new file: /api/contact.js or /api/contact.ts.
That second file is the backend. It's the code that actually sends the email. And it can't run in your user's browser — it needs a server, because it has to call the Resend API using your secret API key, and you can't put secret keys in frontend code where anyone can see them.
So where does the server come from?
You deploy to Vercel. Vercel sees the api/ directory. Vercel thinks: "I know what to do with this." It takes each file in that directory and wraps it in its own serverless function, deployed globally across its infrastructure. Now when someone submits your form, their browser makes a request to yoursite.com/api/contact. Vercel spins up the function, it runs your code, it calls Resend, the email arrives in your inbox. Done.
You didn't provision a server. You didn't configure a runtime. You didn't write a Dockerfile. You wrote a JavaScript file and deployed it. The platform handled the rest.
That's serverless. The name is slightly misleading — there's absolutely a server involved, you just don't have to manage it.
What AI Generated: Three Real Examples
Let's look at the actual code your AI produces when it writes serverless functions. Understanding what you're looking at helps you spot when something is off.
Example 1: Vercel / Next.js API Route
This is the most common pattern if you're building with Next.js and deploying to Vercel. The file lives at /app/api/contact/route.ts (App Router) or /pages/api/contact.ts (Pages Router).
```typescript
// /app/api/contact/route.ts
// Next.js App Router API Route — becomes a serverless function on Vercel
import { Resend } from 'resend';

const resend = new Resend(process.env.RESEND_API_KEY);

export async function POST(request: Request) {
  const { name, email, message } = await request.json();

  if (!name || !email || !message) {
    return Response.json(
      { error: 'Missing required fields' },
      { status: 400 }
    );
  }

  const { data, error } = await resend.emails.send({
    from: 'contact@yoursite.com',
    to: 'you@yourpersonalemail.com',
    subject: `New message from ${name}`,
    text: `From: ${name} (${email})\n\n${message}`,
  });

  if (error) {
    return Response.json({ error: 'Failed to send email' }, { status: 500 });
  }

  return Response.json({ success: true });
}
```
Notice a few things. There's no app.listen(3000). No server startup code. No Express instance. You export a function (POST) that accepts a request and returns a response. Vercel handles everything else — starting the runtime, routing the request to this function, scaling it, shutting it down.
The process.env.RESEND_API_KEY part is critical. That's your secret API key. It exists in your .env.local file when you're developing locally. In production, you have to add it manually in the Vercel dashboard. If you forget, the function runs but the email never sends — and the error message often won't be obvious about why.
Example 2: Netlify Function
If you're deploying to Netlify instead, the structure is slightly different. Your functions live in a /netlify/functions/ directory (or a custom path you configure).
```javascript
// /netlify/functions/contact.js
// Netlify Function — same idea, different format
const { Resend } = require('resend');

const resend = new Resend(process.env.RESEND_API_KEY);

exports.handler = async (event, context) => {
  if (event.httpMethod !== 'POST') {
    return { statusCode: 405, body: 'Method Not Allowed' };
  }

  const { name, email, message } = JSON.parse(event.body);

  if (!name || !email || !message) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: 'Missing required fields' }),
    };
  }

  try {
    await resend.emails.send({
      from: 'contact@yoursite.com',
      to: 'you@yourpersonalemail.com',
      subject: `New message from ${name}`,
      text: `From: ${name} (${email})\n\n${message}`,
    });

    return {
      statusCode: 200,
      body: JSON.stringify({ success: true }),
    };
  } catch (err) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Failed to send email' }),
    };
  }
};
```
Netlify uses the exports.handler pattern. The function receives an event object (which contains the HTTP method, headers, and body) and a context object (which contains runtime info you'll rarely need). You return an object with a statusCode and body.
Different format, same concept: one file, one function, runs on-demand when someone hits the endpoint.
Example 3: AWS Lambda (What's Underneath)
Both Vercel and Netlify are built on top of AWS Lambda — Amazon's original serverless function platform. If you ever work directly with Lambda, or if your AI generates AWS-specific code, you'll see this pattern:
```javascript
// AWS Lambda handler
// This is what Vercel and Netlify are abstracting for you
exports.handler = async (event) => {
  const body = JSON.parse(event.body || '{}');
  const { name, email, message } = body;

  if (!name || !email || !message) {
    return {
      statusCode: 400,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ error: 'Missing required fields' }),
    };
  }

  // ... your logic here

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ success: true }),
  };
};
```
This looks nearly identical to the Netlify format — because Netlify's functions are thin wrappers around Lambda. The main difference is that you have to manually add the Content-Type header in Lambda, while Netlify and Vercel handle some of that for you.
You probably won't write raw Lambda code unless you're using AWS directly or a framework like SST. But seeing it demystifies what's happening under the hood at Vercel and Netlify.
Understanding Each Part: How Serverless Functions Actually Work
You don't need to know every technical detail — but understanding a few key concepts will save you hours of debugging.
The Request/Response Cycle
Every serverless function does the same basic thing: it receives an HTTP request, runs some code, and returns an HTTP response. That's it. The lifecycle looks like this:
- Request arrives — a browser, a form submission, a webhook, or any HTTP client sends a request to your function's URL.
- Platform spins up the function — Vercel or Netlify allocates a container, loads your code, and starts running it.
- Your code executes — you read the request, do something (call an API, query a database, send an email), and build a response.
- Response goes back — your function returns the response, the platform sends it to the caller, and the function shuts down (or stays warm briefly in case another request comes in).
The whole thing can happen in under 100 milliseconds for a simple function. For something that calls an external API, more like 300–800ms. For something that queries a database on the other side of the world, potentially a few seconds.
Cold Starts
This is the most misunderstood quirk of serverless, and the one most likely to bite you.
When a serverless function hasn't been called recently, the platform recycles its resources. There's no container sitting around waiting. The next time a request comes in, the platform has to start fresh: allocate a new container, load the Node.js runtime, download your code, and then — finally — run it.
That startup process is called a cold start. It adds latency. For a lightweight function, a cold start might add 200–400ms. For a function that imports a heavy library (like an ORM or a large SDK), it can add 1–2 seconds.
On Vercel, cold starts happen when your function hasn't been called in a few minutes. On Netlify, it's similar. AWS Lambda is more aggressive about recycling.
For most use cases — form submissions, webhooks, background tasks — cold starts don't matter. Nobody is measuring response time on a contact form. Where cold starts become a problem is when you're using serverless functions for user-facing interactions where the user is literally waiting for the response to render something on screen.
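You can observe cold versus warm starts yourself, because module-scope code runs exactly once per container. A minimal sketch (the handler shape here is hypothetical, but the trick works in any Node-based serverless runtime):

```typescript
// Module scope runs once per container: this state survives warm
// invocations and resets on every cold start.
const bootedAt = Date.now();
let invocations = 0;

export function handler(): { cold: boolean; containerAgeMs: number } {
  invocations += 1;
  return {
    cold: invocations === 1,               // first request after a cold start
    containerAgeMs: Date.now() - bootedAt, // near zero when cold, grows while warm
  };
}
```

Log those two fields in production and you can see exactly how often your traffic pattern is paying the cold-start tax.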
Execution Limits
Serverless functions have hard time limits. If your function doesn't finish in time, the platform kills it and returns an error. These limits vary by platform:
| Platform | Free Tier Limit | Paid Tier Max |
|---|---|---|
| Vercel | 10 seconds | 300 seconds (Pro) |
| Netlify | 10 seconds | 26 seconds |
| AWS Lambda | 15 minutes | 15 minutes |
Ten seconds sounds like a lot. But if your function is doing something slow — generating an image, processing a large file, making several sequential API calls, or running a database query that doesn't have proper indexes — it can hit that limit faster than you'd expect.
The timeout error message is typically something like: Task timed out after 10.00 seconds or Function exceeded maximum duration. If you see that, the problem isn't a bug in your logic — the function is just taking too long.
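If work might run long, one defensive pattern is to race it against your own shorter budget, so the function can respond gracefully before the platform kills it mid-flight. A sketch, assuming you size the budget under your plan's limit (say 8 seconds under Vercel's 10); `withBudget` is a hypothetical helper, not a platform API:

```typescript
// Race real work against a self-imposed deadline. Returns the result if
// it finishes in time, or the string 'timed-out' if the budget expires.
export async function withBudget<T>(
  work: Promise<T>,
  ms: number
): Promise<T | 'timed-out'> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<'timed-out'>((resolve) => {
    timer = setTimeout(() => resolve('timed-out'), ms);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    clearTimeout(timer); // don't keep the container alive on the timer
  }
}
```

On `'timed-out'`, the handler can return a 504 or a 202 "still processing" response with a pointer to check back later, instead of the opaque platform kill.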
Environment Variables
This is the number one reason APIs work locally but break in production.
Your .env.local file is a local-only secret. It never gets deployed. It never gets committed to git (your .gitignore should exclude it). When you deploy to Vercel or Netlify, that file doesn't come with you.
Instead, you have to manually add each environment variable in the platform dashboard. Vercel: Project Settings → Environment Variables. Netlify: Site Configuration → Environment Variables. Every key your function references with process.env.SOMETHING has to exist there.
If it doesn't exist, process.env.SOMETHING is undefined. Your function might crash immediately, or it might run but silently fail (passing undefined as an API key to an SDK, which returns an authentication error).
You can learn more about how environment variables work across different deployment contexts in the full environment variables explainer.
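A habit that makes this failure mode obvious instead of silent: validate required variables at the top of the function and fail with a named error. `requireEnv` below is a hypothetical helper, not a platform API; a sketch:

```typescript
// Fail fast with a message that names the missing key, so the log says
// what's actually wrong instead of surfacing later as a cryptic
// authentication error from an SDK that received undefined.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing environment variable ${name}: add it in your platform dashboard`);
  }
  return value;
}

// Usage inside a handler:
// const apiKey = requireEnv('RESEND_API_KEY');
```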
What AI Gets Wrong About Serverless Functions
AI coding tools are genuinely good at writing serverless functions. The patterns are well-documented, they're in the training data, and the basic structure is simple enough that the AI rarely makes structural mistakes.
But there are four specific failure modes that show up repeatedly — patterns the AI writes that work locally but fail or degrade in production.
1. Functions That Take Too Long
This is the most common problem. AI will happily write a function that does something slow — generating a PDF, resizing an image, sending bulk emails, crawling a URL — without any regard for the 10-second execution limit.
The function runs fine locally because there's no timeout. You deploy it. A user triggers it. Ten seconds later they get a timeout error. The function was killed mid-execution. The task never finished.
What AI writes:
```typescript
// This will time out for anything slow
export async function POST(request: Request) {
  const { url } = await request.json();

  // AI happily scrapes a page, processes content,
  // generates a summary, saves to database —
  // might take 30+ seconds. Timeout. Crash.
  const content = await scrapeAndProcess(url);
  await saveToDatabase(content);

  return Response.json({ success: true });
}
```
What it should do: For anything that might take more than a few seconds, the function should kick off the work asynchronously and return immediately. The actual processing happens in a background job (a queue, a cron job, or a separate long-running service). The API route just says "I got your request, I'll process it." You might need to check out what CI/CD pipelines and background workers look like if you're building anything with serious async processing needs.
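The "accept now, process later" shape can be sketched like this. The in-memory array is a stand-in for durable storage (a hosted queue, or a database table drained by a cron job); real serverless code must persist the job externally, because in-memory state does not survive across invocations:

```typescript
type Job = { url: string };

// Stand-in for a durable queue. In production this would be a write to a
// queue service or a jobs table, NOT module-level memory.
export const pendingJobs: Job[] = [];

// Hypothetical handler: record the work and answer immediately.
export function handler(body: Job): { status: number; accepted: boolean } {
  pendingJobs.push(body); // real code: enqueue to durable storage here
  return { status: 202, accepted: true }; // 202 Accepted: "got it, working on it"
}
```

A separate worker (cron-triggered function, queue consumer, or long-running service) then does the slow part on its own schedule, outside the request's 10-second window.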
2. Trying to Use the Filesystem
Serverless functions run in ephemeral containers — disposable environments that spin up for one request and disappear. Any files you write to the filesystem during one function invocation are gone by the next invocation. Or they might vanish mid-invocation if the platform recycles the container.
AI often forgets this. It'll write code that saves an uploaded image to /tmp/image.png, processes it, and then tries to read it back — only to discover it's gone. Or it'll write a caching strategy that saves data to a local file, which works locally but breaks in production where each function invocation might run on a completely different machine.
What AI writes:
```typescript
import fs from 'fs';
import path from 'path';

export async function POST(request: Request) {
  const formData = await request.formData();
  const file = formData.get('image') as File;

  const bytes = await file.arrayBuffer();
  const buffer = Buffer.from(bytes);

  // PROBLEM: Writing to filesystem in a serverless function.
  // This "works" but you cannot rely on this file persisting.
  const tempPath = path.join('/tmp', file.name);
  fs.writeFileSync(tempPath, buffer);

  // Process the file...
  const result = await processImage(tempPath);

  return Response.json(result);
}
```
The rule: Anything you need to persist — uploaded files, generated assets, cached data — has to go to external storage. S3, Cloudflare R2, Supabase Storage, Vercel Blob. Not the filesystem. The /tmp directory technically exists and technically works for temporary scratch space within a single invocation, but you cannot count on it existing across invocations.
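Used within one invocation, /tmp is legitimate scratch space. A sketch of the safe shape: write, use, and ship anything worth keeping to external storage before returning (`processWithScratch` is a hypothetical name):

```typescript
import { writeFileSync, readFileSync, mkdtempSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Scratch space is safe only within a single invocation: everything
// happens between receiving the request and returning the response.
export function processWithScratch(data: Buffer): string {
  const dir = mkdtempSync(join(tmpdir(), 'upload-')); // unique dir per invocation
  const tempPath = join(dir, 'input.bin');

  writeFileSync(tempPath, data);               // fine: same invocation
  const roundTripped = readFileSync(tempPath); // fine: still the same invocation

  // Anything worth keeping would now be uploaded to S3/R2/Supabase
  // Storage/Vercel Blob before this function returns.
  return roundTripped.toString('utf8');
}
```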
3. Missing Environment Variables in Production
Already covered this above, but it bears repeating because it's so common. AI writes process.env.STRIPE_SECRET_KEY and it just works locally. You deploy. Stripe calls fail. You spend an hour debugging the logic before realizing the key doesn't exist in your Vercel project settings.
AI almost never reminds you to add the environment variables to your deployment platform. It knows the code is right. It doesn't know your deployment configuration.
The habit to build: Every time AI writes a function that uses process.env.ANYTHING, immediately open your Vercel or Netlify dashboard and add that variable. Do it before you even test the function. Don't wait until it's broken in production to go looking for why.
For a deeper look at how environment variables work across local, staging, and production environments, see the environment variables guide.
4. Not Handling Cold Starts
AI writes functions that initialize heavy dependencies at the module level — connecting to databases, instantiating large SDK clients, loading configuration. This is usually fine and often correct. But it can dramatically worsen cold start times when those dependencies are large or when the initialization involves async work that has to complete before the function can handle its first request.
More importantly, AI rarely accounts for connection pooling. Each concurrent serverless invocation can open its own database connection. If you have 50 concurrent requests, you can have 50 open connections. Most databases have connection limits. You'll hit them on a moderately trafficked app.
What AI writes:
// ai generates this innocently enough
import { PrismaClient } from '@prisma/client';
// New PrismaClient on every cold start — fine.
// But on Vercel, each function invocation in development
// can create a new client, exhausting connections fast.
const prisma = new PrismaClient();
export async function GET() {
const users = await prisma.user.findMany();
return Response.json(users);
}
What it should do: Use Vercel's recommended singleton pattern for Prisma, or use a serverless-native database that handles connection pooling at the infrastructure level (PlanetScale, Neon, Supabase with the connection pooler enabled).
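The singleton idea looks like this. `FakeClient` is a stand-in so the sketch runs on its own; with `PrismaClient` in its place, this matches the shape Vercel's Prisma guidance recommends:

```typescript
// Cache the client on globalThis so dev hot reloads and warm invocations
// reuse one instance instead of constructing fresh clients (and opening
// fresh connections) each time the module is re-evaluated.
class FakeClient {
  readonly createdAt = Date.now();
}

const globalForClient = globalThis as unknown as { __client?: FakeClient };

export function getClient(): FakeClient {
  if (!globalForClient.__client) {
    globalForClient.__client = new FakeClient(); // at most once per process
  }
  return globalForClient.__client;
}
```

This helps within a single container; across many concurrent containers you still want pooling at the infrastructure level, which is what the serverless-native databases above provide.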
How to Debug: When Your Serverless Functions Break
Local development with serverless functions is straightforward — npm run dev and you can hit your API routes at localhost:3000/api/contact. Problems almost always appear in production. Here's how to find them.
Vercel Function Logs
Vercel has a built-in log viewer. Go to your project → the "Deployments" tab → click on a deployment → click "Functions." You'll see a list of all your serverless functions and can view their logs in real time.
Alternatively, go to the "Logs" tab at the top of your project dashboard. You can filter by function, by log level (errors vs. info), and by time range. Every console.log() in your function shows up here.
The most useful things to log when debugging:
```typescript
export async function POST(request: Request) {
  console.log('Function started');

  const body = await request.json();
  console.log('Request body:', JSON.stringify(body));

  // Log environment variable presence (NOT the value)
  console.log('API key present:', !!process.env.RESEND_API_KEY);

  try {
    const result = await doSomething(body);
    console.log('Success:', result);
    return Response.json({ success: true });
  } catch (err) {
    console.error('Error:', err);
    return Response.json({ error: 'Something went wrong' }, { status: 500 });
  }
}
```
Notice the pattern for environment variables: log whether the key exists (!!process.env.KEY returns true or false) rather than logging the actual value. Never log secret keys — your logs are stored and accessible to anyone with dashboard access.
Netlify Function Logs
Netlify's logs are in the dashboard under "Functions" in the left sidebar. Click on a specific function to see its logs. Netlify also has a CLI for tailing logs locally during development:
```bash
netlify dev          # runs your site locally with functions
netlify logs:stream  # streams function logs in your terminal
```
Common Error Messages and What They Mean
These are the error messages you'll see most often and what's actually happening:
| Error Message | What It Means | How to Fix |
|---|---|---|
| `Task timed out after 10.00 seconds` | Function exceeded the execution time limit | Optimize slow code, move heavy processing to a background job, or upgrade to a plan with longer limits |
| `Cannot read properties of undefined` | Usually a missing environment variable returning `undefined` | Check that every `process.env` key your code uses is set in your platform dashboard |
| `500 Internal Server Error` with no details | Unhandled exception in your function | Add a try/catch block and log the error — the real error is being swallowed |
| `FUNCTION_INVOCATION_FAILED` | Vercel-specific: function crashed on startup or during execution | Check function logs — usually a missing dependency, bad import, or uncaught error at module level |
| `429 Too Many Requests` from an API | Your function is calling an external API too frequently | Add rate limiting, caching, or check if cold starts are creating duplicate calls |
Test Your Functions Locally Before Deploying
Both Vercel and Netlify have local development tools that simulate the serverless environment on your machine:
```bash
# Vercel — runs your app with API routes working exactly as in production
npx vercel dev

# Netlify — runs your site with Netlify Functions working locally
netlify dev
```
These are more accurate than next dev alone because they simulate the serverless function environment — including the timeout limits and the environment variable handling. If a function works in netlify dev or vercel dev but fails in production, the problem is almost certainly environment variables.
What to Learn Next
Serverless functions are one building block in a larger infrastructure picture. Here's what connects to them and what to explore next.
Edge Functions
Think of edge functions as serverless functions on steroids for speed. They run in dozens of locations simultaneously — near your users instead of in a single region. Almost no cold starts. Sub-50ms latency. The tradeoff: a much more restricted runtime. Most Node.js APIs are unavailable, npm package support is very limited, and there's no filesystem access at all.
Vercel Edge Functions and Cloudflare Workers are the main players. If you're using serverless functions for anything performance-sensitive — auth checks, redirects, personalization, A/B testing — edge functions are worth learning. Check out the edge computing explainer for the full picture.
CDNs
Serverless functions handle your dynamic requests. CDNs (Content Delivery Networks) handle your static assets — your HTML, CSS, JavaScript, and images. Together, they form the delivery layer of a modern web app. Understanding how they interact helps you make sense of caching behavior, why some changes don't appear immediately after deploy, and how to get your app loading as fast as possible. The CDN explainer is a good next read.
Docker and Containers
Serverless abstracts away the server completely. Docker gives you direct control over the server environment — you define exactly what's installed, what version of Node.js you're running, and how your app starts. For complex apps, long-running processes, apps that need persistent connections, or anything with unusual dependencies, Docker is often the better choice than serverless. The Docker explainer covers when you'd reach for it.
CI/CD Pipelines
When you push code to GitHub and Vercel automatically deploys it — that's the simplest form of CI/CD. As your app grows, you'll want to add automated testing, staging environments, and more controlled deploy processes. Understanding CI/CD pipelines helps you build confidence that your serverless functions work before they reach production.
Serverless vs. Containers
The decision between serverless functions and a traditional containerized server comes up frequently as apps grow. Serverless wins on simplicity and cost at low scale. Containers win on flexibility, predictability, and cost efficiency at high scale. There's a full breakdown in the serverless vs. containers comparison.
Frequently Asked Questions
Are serverless functions really free?
Free tiers exist and they're generous for small apps. Vercel gives you 100GB-hours of function execution per month on the free plan. Netlify gives you 125,000 function invocations per month. AWS Lambda gives you 1 million free requests per month. For a contact form, a small API, or a personal project, you'll likely never pay a cent.
You start paying when you scale — which is actually the point. A traditional server costs money 24/7 whether it's handling one request or ten thousand. A serverless function costs nothing when idle and scales up automatically when traffic spikes. For most early-stage apps, that math is overwhelmingly in your favor.
What is a cold start and should I worry about it?
A cold start happens when a function hasn't been called recently and the provider has to spin up a fresh environment to run it. This adds latency — usually 100ms to 500ms, sometimes more for heavy runtimes and large dependency sets.
For most vibe coding use cases — contact forms, webhooks, background API calls — users won't notice. For user-facing interactions where speed matters (search results, real-time features), a cold start can feel sluggish. The solution is either edge functions (which almost never cold start because they're always warm in nearby locations), keeping your functions lightweight, or using a paid plan that keeps functions warm. If cold starts are a serious pain point, that's a signal to look at persistent server options like a VPS or container.
Can I use serverless functions with a database?
Yes, but you need the right kind of database. Traditional databases that use persistent connections (like a self-hosted PostgreSQL) can get overwhelmed by serverless, because each function invocation opens a new connection. With 100 concurrent requests, you have 100 open connections — and most databases have limits well below that.
Use a database designed for serverless: PlanetScale, Neon, Supabase, or Turso handle this well. If you're using Prisma with Vercel, enable the Accelerate add-on for connection pooling. The AI will often write the database connection code correctly but may not automatically add pooling — check the database provider's serverless documentation to make sure you're set up right.
How do I add environment variables to my serverless functions?
In local development, create a .env.local file in your project root and add your variables there. This file never gets committed to git and never gets deployed — it's local only.
In production on Vercel: go to your project dashboard → Settings → Environment Variables. On Netlify: Site Configuration → Environment Variables. Add each key-value pair there. The variable names must match exactly what's in your code — process.env.RESEND_API_KEY needs an environment variable named RESEND_API_KEY in the dashboard.
This is the most common reason an API route works locally but fails in production. When in doubt, log whether the key exists (not its value) at the start of your function to confirm it's being picked up.
What's the difference between serverless functions and edge functions?
Serverless functions run in a single region on a full Node.js runtime. You get access to the complete Node.js API, most npm packages, and generous execution limits. Edge functions run in dozens of locations simultaneously — much closer to your users — using a lightweight runtime (based on the browser's Service Worker API rather than Node.js).
Edge functions are faster (almost no cold starts, much lower latency for global users) but more restricted — you can't use most npm packages, you have no filesystem access, and certain Node.js APIs aren't available. For a contact form or a database query: serverless is the right choice. For auth checks, redirect logic, or personalization that needs to be fast for every user regardless of location: edge functions are worth the tradeoff.