TL;DR: A reverse proxy is a server (usually nginx) that sits between the internet and your app. Visitors hit nginx on port 80/443, and nginx forwards their requests to your app running on a local port like 3000. It handles SSL, security headers, caching, and load balancing so your app doesn't have to. Every AI-generated VPS deployment uses one.
Why AI Coders Need to Know This
Here's a pattern you've probably already lived through: you build a Next.js app with Claude or Cursor, it works perfectly on localhost:3000, and then you ask your AI to deploy it to a VPS. Suddenly your instructions include installing nginx, writing a config file with proxy_pass, setting up SSL certificates, and reloading services. You paste it all in, it works — until it doesn't.
The reverse proxy is the invisible layer between the internet and your app. According to the W3Techs 2025 web server survey, nginx powers about 34% of all websites. When your AI sets up a deployment, it's configuring a reverse proxy roughly 9 times out of 10. If you don't understand what it does, you can't debug it when things break.
For vibe coders, the reverse proxy is one of those concepts that separates "I can build locally" from "I can ship to production." Knowing what it does means:
- You understand why visitors see your domain while your app quietly runs on port 3000
- You can read the nginx config your AI generates instead of blindly pasting it
- You can fix 502 Bad Gateway errors in minutes instead of hours
- You understand how SSL/HTTPS connects to your application
- You know what to tell your AI when WebSocket connections or real-time features stop working through the proxy
The Construction Analogy
Think of it like a reception desk at a large office building. Visitors don't walk directly to every employee's desk — they go to the front desk first. The receptionist checks who they need, directs them to the right floor, and handles security. The employees never deal with random strangers walking in.
A reverse proxy is that receptionist for your web application:
- The building = your server
- The receptionist = nginx (the reverse proxy)
- The employees = your app (Next.js, Flask, Express, etc.)
- The visitors = users hitting your website
The visitors never interact with employees directly. They only talk to the receptionist. And that's exactly how a reverse proxy works — the internet only talks to nginx, and nginx talks to your app behind the scenes.
Forward Proxy vs. Reverse Proxy (30-Second Version)
You might hear "proxy" used in two ways. Here's the quick difference:
- Forward proxy: Sits in front of clients (users). Like a VPN — your browser talks to the proxy, and the proxy makes requests to websites on your behalf. The website doesn't know who you are.
- Reverse proxy: Sits in front of servers (your app). The user talks to nginx, and nginx makes requests to your app on the user's behalf. The user doesn't know about your internal server.
When AI deploys your app, it always sets up a reverse proxy. You almost never need to think about forward proxies.
Real Scenario
You've built a Next.js app. It works great locally. Now you want it live on a $5/month VPS. You open Claude Code and type:
What You Tell Claude
Deploy my Next.js app to my Ubuntu VPS at 203.0.113.50.
Domain: myvibeapp.com
Set up SSL with Let's Encrypt.
Make sure it stays running after reboots.
Claude generates a deployment script. Somewhere in the middle, it creates an nginx config file. That config file is the reverse proxy. Here's what it looks like and why every line exists.
What AI Generated
Here's a representative nginx reverse proxy config for a Next.js deployment like this one, close to what Claude generates. Comments mark what each section does:
# /etc/nginx/sites-available/myvibeapp.com

# ── Redirect all HTTP traffic to HTTPS ──────────────────
server {
    listen 80;
    server_name myvibeapp.com www.myvibeapp.com;
    return 301 https://myvibeapp.com$request_uri;
}

# ── Redirect www to non-www (HTTPS) ─────────────────────
server {
    listen 443 ssl;
    server_name www.myvibeapp.com;
    ssl_certificate /etc/letsencrypt/live/myvibeapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myvibeapp.com/privkey.pem;
    return 301 https://myvibeapp.com$request_uri;
}

# ── Main reverse proxy config ───────────────────────────
server {
    listen 443 ssl http2;
    server_name myvibeapp.com;

    # SSL certificates from Let's Encrypt
    ssl_certificate /etc/letsencrypt/live/myvibeapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myvibeapp.com/privkey.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Strict-Transport-Security "max-age=63072000" always;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_types text/plain text/css application/json application/javascript text/xml;

    # ── THIS IS THE REVERSE PROXY PART ──────────────────
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 60s;
        proxy_connect_timeout 10s;
    }

    # Cache static assets aggressively
    location /_next/static/ {
        proxy_pass http://127.0.0.1:3000;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Serve Next.js image optimization
    location /_next/image {
        proxy_pass http://127.0.0.1:3000;
        proxy_read_timeout 30s;
    }
}
Understanding Each Part
Let's walk through the reverse proxy pieces of that config. You don't need to memorize any of this — but you do need to recognize it when your AI generates it, and know what to check when something breaks.
proxy_pass — The Heart of the Reverse Proxy
proxy_pass http://127.0.0.1:3000;
This single line is what makes nginx a reverse proxy instead of just a web server. It says: "Don't serve files from a directory. Instead, forward every request to the application running at 127.0.0.1:3000."
127.0.0.1 means "this same machine" (you'll also see localhost). 3000 is the port your Next.js app listens on. When a visitor requests https://myvibeapp.com/about, nginx receives that request on port 443 and forwards it to http://127.0.0.1:3000/about. Your Next.js app processes it and sends the response back through nginx to the visitor.
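To make that forwarding step concrete, here's a toy reverse proxy in pure Python. This is an illustration only, not how nginx is built: the port numbers are arbitrary stand-ins for 3000 and 80/443, and real nginx adds buffering, SSL termination, and caching on top of this one forwarding step.

```python
import http.server
import threading
import urllib.request

BACKEND_PORT = 13000  # stands in for your app's port (like 3000)
PROXY_PORT = 18080    # stands in for nginx's public port (like 80/443)

class Backend(http.server.BaseHTTPRequestHandler):
    """The 'app': only reachable on a local port."""
    def do_GET(self):
        body = f"app saw path {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

class Proxy(http.server.BaseHTTPRequestHandler):
    """The 'nginx': receives the request, re-issues it to the backend."""
    def do_GET(self):
        # This is the proxy_pass moment: forward the same path to the app
        with urllib.request.urlopen(
            f"http://127.0.0.1:{BACKEND_PORT}{self.path}"
        ) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

def serve(handler, port):
    srv = http.server.HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

backend = serve(Backend, BACKEND_PORT)
proxy = serve(Proxy, PROXY_PORT)

# The client only ever talks to the proxy port...
with urllib.request.urlopen(f"http://127.0.0.1:{PROXY_PORT}/about") as resp:
    answer = resp.read().decode()
print(answer)  # ...but the response was produced by the backend

backend.shutdown()
proxy.shutdown()
```

The client never touches the backend port directly, yet the response comes from the backend. That's the whole trick behind proxy_pass.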
proxy_set_header — Passing Information Through
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
When nginx forwards a request to your app, your app needs to know certain things about the original visitor. Without these headers:
- Host: Your app wouldn't know which domain the visitor typed. It would just see a request from 127.0.0.1.
- X-Real-IP: Your app would think every request comes from 127.0.0.1 (nginx itself) instead of the visitor's actual IP address.
- X-Forwarded-For: Similar to X-Real-IP, but preserves the full chain if there are multiple proxies.
- X-Forwarded-Proto: Tells your app whether the original request was HTTP or HTTPS. Without this, your app might generate http:// links even though visitors are on HTTPS.
These headers are like the receptionist telling the employee: "This person came in through the main entrance, their name badge says John, and they have a VIP pass."
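As a sketch of what your app does on the receiving end, here's how a backend might recover the real client IP from these headers. This is a simplified illustration; frameworks like Express do this for you when you enable their trust-proxy setting, and you should only trust these headers when the request genuinely came through your own proxy.

```python
def client_ip(headers: dict) -> str:
    """Recover the visitor's IP from reverse proxy headers (sketch).

    X-Forwarded-For is a comma-separated chain: "client, proxy1, proxy2".
    The leftmost entry is the original client.
    """
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        return forwarded.split(",")[0].strip()
    # Fall back to X-Real-IP, then to the direct peer (nginx itself)
    return headers.get("X-Real-IP", "127.0.0.1")

# Behind the proxy: the chain reveals the real visitor
print(client_ip({"X-Forwarded-For": "203.0.113.7, 127.0.0.1"}))  # 203.0.113.7
# No proxy headers at all: we only see the direct connection
print(client_ip({}))  # 127.0.0.1
```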
WebSocket Headers — For Real-Time Features
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
If your app uses WebSockets (real-time chat, live notifications, collaborative editing), these lines are critical. WebSockets start as a normal HTTP request and then "upgrade" to a persistent connection. These headers tell nginx to pass that upgrade request through to your app instead of blocking it.
Without them, your app loads fine but real-time features silently fail. This is one of the most common issues in AI-generated reverse proxy configs — and we'll cover it more in the "What AI Gets Wrong" section.
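A refinement you may also see (it's not in the config above) is a map block that only sends Connection: upgrade when the client actually asked for a WebSocket upgrade, so plain HTTP requests keep a normal connection:

```nginx
# Goes at the http level (e.g. in /etc/nginx/nginx.conf or a conf.d file)
map $http_upgrade $connection_upgrade {
    default upgrade;   # client sent an Upgrade header: pass it through
    ''      close;     # no Upgrade header: ordinary request
}

# Then, inside the location block, use the variable instead of 'upgrade':
#   proxy_set_header Connection $connection_upgrade;
```

Both forms work for WebSockets; the map version is just more precise about non-WebSocket traffic.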
location Blocks — Routing Traffic
location / {
    proxy_pass http://127.0.0.1:3000;
}

location /_next/static/ {
    proxy_pass http://127.0.0.1:3000;
    expires 1y;
    add_header Cache-Control "public, immutable";
}
Location blocks tell nginx how to handle different URL paths. The first block (/) catches all traffic and forwards it to your app. The second block (/_next/static/) catches Next.js static assets specifically and adds aggressive caching headers — so browsers cache JavaScript and CSS files for a year instead of re-downloading them every visit.
Think of it like the receptionist having different instructions for different types of visitors: regular clients go to the main floor, delivery drivers go to the loading dock.
upstream Blocks — Multiple App Servers
For more advanced setups, your AI might generate an upstream block:
upstream nextjs_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    location / {
        proxy_pass http://nextjs_app;
    }
}
This tells nginx to distribute requests across three instances of your app — that's load balancing. If one instance crashes, nginx automatically stops sending traffic to it. You probably won't need this for your first deployment, but it's good to recognize when your AI suggests it for scaling.
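A slightly more defensive version of that upstream block uses standard nginx parameters to control how failures are handled. This is a sketch; tune the numbers to your app:

```nginx
upstream nextjs_app {
    least_conn;                                   # prefer the least-busy instance
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 backup;                 # only used if the others are down
}
```

max_fails and fail_timeout mean: after 3 failed attempts, stop sending this instance traffic for 30 seconds, then try again.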
Timeouts — How Long to Wait
proxy_read_timeout 60s;
proxy_connect_timeout 10s;
proxy_connect_timeout is how long nginx waits to establish a connection with your app (10 seconds is generous — if your app doesn't respond in 10 seconds, something is very wrong). proxy_read_timeout is how long nginx waits for your app to send back a response. For most web pages, 60 seconds is plenty. If you have long-running API calls (AI processing, large file uploads), you might need to increase proxy_read_timeout.
What AI Gets Wrong
AI tools are remarkably good at generating nginx reverse proxy configs. But they have consistent blind spots. Here are the ones that will bite you:
1. WebSocket Proxy Issues
This is the #1 problem. Your AI generates a clean reverse proxy config, your app loads perfectly, but WebSocket connections fail silently. The symptoms:
- Chat messages don't appear in real time
- Socket.io falls back to long-polling (slow)
- Live notifications don't work
- The browser console shows WebSocket connection errors
The fix: Make sure your config includes all four of these lines in the location block that handles WebSocket traffic:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_cache_bypass $http_upgrade;
Sometimes AI puts these in the main location / block but forgets them in a separate location /socket.io/ or location /ws block. If your app uses a specific WebSocket path, make sure that path's location block also has these headers.
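For example, if your app uses Socket.io on its default path, a dedicated location block might look like this (a sketch; /socket.io/ is Socket.io's default path, so adjust it to your setup):

```nginx
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_read_timeout 3600s;   # WebSocket connections are long-lived
}
```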
2. Missing X-Forwarded Headers
AI sometimes generates a minimal config with just proxy_pass and nothing else. This "works" — pages load — but causes subtle bugs:
- Your app logs every request as coming from 127.0.0.1 (useless for analytics)
- Rate limiting by IP breaks (everyone appears to be the same user)
- OAuth redirects fail because the callback URL uses http:// instead of https://
- CSRF protection gets confused about the request origin
The fix: Always include the full set of proxy_set_header directives shown in the config above. If your AI skips them, ask it to add them.
3. CORS Issues Through the Proxy
If your frontend and API are on different subdomains (like app.mysite.com and api.mysite.com), each going through its own reverse proxy, CORS headers can get duplicated or stripped. Symptoms:
- Browser shows "Access-Control-Allow-Origin" errors
- Preflight OPTIONS requests fail
- Your API works in Postman but not from the browser
The fix: Handle CORS in one place — either in your app code or in the nginx config, not both. If nginx adds CORS headers and your Express app also adds them, the browser receives duplicate headers and rejects the response. Tell your AI: "Handle CORS only in [nginx/the app], not both."
4. The Trailing Slash Problem
This one is subtle and maddening:
# These are NOT the same:
location /api/ {
    proxy_pass http://127.0.0.1:3000/;  # Trailing slash = strips /api prefix
}

location /api/ {
    proxy_pass http://127.0.0.1:3000;   # No trailing slash = keeps /api prefix
}
With the trailing slash on proxy_pass, a request to /api/users is forwarded as /users. Without it, the request is forwarded as /api/users. AI tools frequently get this wrong, especially when your backend expects routes with or without a prefix. If your API returns 404 for every route through the proxy but works fine when you hit it directly, check the trailing slash.
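The rule can be modeled in a few lines of Python. This is a simplified sketch of nginx's behavior for plain prefix locations; regex locations and proxy_pass with variables follow different rules:

```python
def forwarded_path(request_path: str, location: str, proxy_pass: str) -> str:
    """Which path does the backend see? Simplified model of nginx's rule:
    if proxy_pass includes a URI part (even just a trailing "/"), that URI
    replaces the matched location prefix; if not, the path passes unchanged.
    """
    host_and_path = proxy_pass.split("//", 1)[1]  # drop the "http://" scheme
    slash = host_and_path.find("/")
    if slash == -1:
        return request_path                        # no URI part: path unchanged
    uri = host_and_path[slash:]                    # URI part, e.g. "/"
    return uri + request_path[len(location):]      # location prefix replaced

# Trailing slash on proxy_pass: the /api/ prefix is stripped
print(forwarded_path("/api/users", "/api/", "http://127.0.0.1:3000/"))  # /users
# No trailing slash: the path is forwarded as-is
print(forwarded_path("/api/users", "/api/", "http://127.0.0.1:3000"))   # /api/users
```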
5. Forgetting to Increase Timeouts for AI/Upload Endpoints
Default proxy_read_timeout is 60 seconds. If your app has an endpoint that calls an AI API (like OpenAI or Anthropic), the response might take 90+ seconds for complex prompts. AI rarely sets a longer timeout for these specific routes. Your endpoint times out with a 504 error, but the AI call was still processing successfully in the background.
The fix: Add a separate location block for slow endpoints:
location /api/generate {
    proxy_pass http://127.0.0.1:3000;
    proxy_read_timeout 300s;  # 5 minutes for AI endpoints
    proxy_set_header Host $host;
    # ... other headers
}
How to Debug with AI
When your reverse proxy isn't working, here's how to get your AI tool to actually help instead of generating another broken config.
Cursor / Windsurf
Open your nginx config file directly in the editor. Highlight the problematic section and ask in the inline chat:
- "My app works on localhost:3000 but returns 502 through this proxy. What's wrong with this location block?"
- "WebSockets work locally but not through this proxy. What headers am I missing?"
- "My API returns CORS errors through this proxy but works in Postman. Fix the CORS handling."
The key is giving Cursor the actual config file as context, not just describing the problem. The AI can see exactly what directives are present or missing.
Claude Code
Claude Code can SSH into your server and check things directly. Effective prompts:
- "SSH into my VPS, check if nginx is running, test the config with nginx -t, and show me the last 20 lines of /var/log/nginx/error.log"
- "My Next.js app on port 3000 works with curl localhost:3000 but returns 502 through nginx. Debug it."
- "Compare my nginx config at /etc/nginx/sites-available/myapp with what's actually enabled in sites-enabled. Are they linked correctly?"
Claude Code's strength is running diagnostic commands and reading log files in sequence — exactly what you need for proxy debugging.
The Debug Checklist (Give This to Any AI)
When your reverse proxy breaks, paste this checklist into your AI tool to get a structured diagnosis:
Debug my nginx reverse proxy. Check these in order:
1. Is nginx running? (systemctl status nginx)
2. Does the config pass validation? (nginx -t)
3. Is my app actually running on the expected port? (curl localhost:3000)
4. What do the nginx error logs say? (tail -50 /var/log/nginx/error.log)
5. Is the sites-available config symlinked to sites-enabled?
6. Are there conflicting server blocks for the same domain?
7. Is the firewall blocking internal connections?
This checklist catches about 95% of reverse proxy issues. The most common answer? Step 3 — the app isn't running.
Why Not Just Expose Your App Directly?
You might wonder: "If my Next.js app runs on port 3000, why not just open port 3000 to the internet and skip nginx entirely?" Technically you can. Here's why you shouldn't:
- SSL/TLS: nginx handles HTTPS termination. Your app doesn't need to manage certificates.
- Security: nginx filters malicious requests, limits request sizes, and rate-limits bad actors before they reach your app code.
- Performance: nginx serves static files significantly faster than Node.js. It also handles gzip compression, caching headers, and connection management.
- Stability: Node.js runs your code on a single thread, so slow clients can tie it up. nginx handles thousands of concurrent connections with an event-driven architecture and buffers slow clients so your app doesn't stall.
- Multiple apps: One nginx server can reverse-proxy to multiple apps on different ports — a Next.js frontend on 3000, an API on 4000, an admin panel on 5000 — all under different paths or subdomains on port 443.
This is why AI always includes nginx. It's not overhead — it's a critical layer that makes everything else work reliably. If you want to understand nginx itself in depth, check out our complete nginx guide.
Quick Note: It's Not Only nginx
nginx is the most common reverse proxy you'll encounter in AI-generated deployments, but it's not the only one:
- Caddy: Automatic HTTPS with zero config. Gaining popularity, especially in Docker setups.
- Traefik: Container-native. Auto-discovers services in Docker Compose and Kubernetes.
- Apache (mod_proxy): The old guard. Still runs millions of sites but less common in new AI-generated deployments.
- Cloudflare: A reverse proxy as a service — sits between the internet and your server at the DNS level.
The concepts from this article apply to all of them. The syntax changes (Caddy calls the directive reverse_proxy, Traefik configures routes through labels), but the underlying forwarding model is the same.
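For a taste of the difference, here's roughly what the entire nginx config from this article collapses to in a Caddyfile. This is a sketch; Caddy obtains and renews the Let's Encrypt certificate automatically, which is why there's no SSL block:

```caddyfile
myvibeapp.com {
    reverse_proxy 127.0.0.1:3000
}

www.myvibeapp.com {
    redir https://myvibeapp.com{uri}
}
```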
What to Learn Next
Now that you understand reverse proxies, here's the learning path that builds on this knowledge:
- What Is nginx? — Deep dive into nginx itself: static file serving, server blocks, SSL setup, and the full config reference for AI-assisted developers.
- What Is Load Balancing? — When one server isn't enough, load balancing distributes traffic across multiple instances. This builds directly on the upstream block concept from this article.
- What Is DNS? — How your domain name connects to the server running your reverse proxy. The missing piece between "I bought a domain" and "visitors can reach my app."
- What Is Docker? — In containerized deployments, the reverse proxy often runs as its own container. Docker adds another layer to understand.
- What Is HTTPS/SSL? — The reverse proxy handles SSL termination. This guide explains what that means and how Let's Encrypt certificates work.
Frequently Asked Questions
What is a reverse proxy, in plain terms?
A reverse proxy is a server that sits between the internet and your application. Visitors connect to the reverse proxy (usually nginx on port 80/443), and it forwards their requests to your actual app running on a local port like 3000. The visitor never knows about the internal app — they only see your public domain. Think of it as a receptionist directing visitors to the right person inside an office building.

What's the difference between a forward proxy and a reverse proxy?
A forward proxy sits in front of clients (users) and makes requests on their behalf — like a VPN. The website doesn't know who the user is. A reverse proxy sits in front of servers and receives requests on their behalf — the user doesn't know about the internal server. When AI deploys your app, it always sets up a reverse proxy, not a forward proxy.

Why do AI tools always put nginx in front of my app?
Node.js, Python, and other app servers aren't designed to face the public internet directly. They lack efficient SSL handling, rate limiting, static file caching, and protection against slow clients. nginx handles all of that efficiently as a reverse proxy, so AI tools configure it in virtually every VPS deployment. It's a best practice, not just a preference.

What does proxy_pass actually do?
proxy_pass is the nginx directive that makes nginx a reverse proxy. It tells nginx where to forward incoming requests. For example, proxy_pass http://localhost:3000 means nginx sends each request to your app running on port 3000 on the same server. Without proxy_pass, nginx only serves static files — with it, nginx becomes a reverse proxy.

How do I fix a 502 Bad Gateway error?
A 502 means nginx is running but can't reach your backend app. Check three things in order: (1) Is your app actually running? Use pm2 status or systemctl status your-app. (2) Does the port in proxy_pass match where your app actually listens? (3) Check nginx error logs at /var/log/nginx/error.log for the specific connection error. The most common cause is simply that the app isn't running.