⚡ TL;DR

AI-generated code often has security holes that look completely intentional — missing auth checks on routes, hardcoded secrets, no rate limiting, and insecure session defaults. You don't need a security degree to catch them. Use the checklist in this article every time you ship AI-generated code that handles user data, authentication, or payments.

Why AI Coders Need to Know This

Here's the uncomfortable truth about building with AI: the code it generates is almost always functionally correct. It works. The login form logs people in. The API returns data. The database saves records. You test it, it does what you asked, and you move on.

The problem is what you didn't ask for.

A Stanford study found that developers using AI coding assistants produced code with more security vulnerabilities than developers writing code manually — and here's the kicker — they were more confident their code was secure. The AI's clean output and confident comments created a false sense of safety.

This matters for you specifically as a vibe coder because:

  • You're shipping faster than ever. AI lets you build in hours what used to take weeks. That speed is a superpower — but it means security gaps reach production faster too.
  • You're building real apps with real users. If someone trusts your app with their email, password, or payment info, you're responsible for protecting it.
  • AI doesn't think like an attacker. It generates the "happy path" — what happens when everything works correctly. Attackers exploit what happens when things go wrong.
  • The code looks SO good you don't question it. Beautiful variable names, helpful comments, proper structure. It all screams "this is production-ready." It's not.

You don't need to become a cybersecurity expert. You need a systematic way to catch the stuff AI consistently gets wrong. That's what this article gives you.

Real Scenario: The Auth Code That Looked Perfect

You're building a project management app. You tell Cursor: "Build me user authentication with login, signup, and a dashboard that shows only the logged-in user's projects."

Cursor generates beautiful code. Login works. Signup works. The dashboard shows projects. You test it with two different accounts — each user sees only their own projects. Ship it, right?

Wrong. Here's what actually happened:

🚨 The Hidden Vulnerability

The AI built authentication (verifying who you are) but skipped authorization on the API routes (verifying what you're allowed to access). Anyone with a valid login token can access any user's projects by changing the user ID in the API request. The dashboard UI only shows your projects because the frontend filters them — but the API happily serves everyone's data to anyone who asks.

This is the exact scenario trending on r/ChatGPTCoding right now. A developer posted: "How do you catch auth bypass risks in generated code that looks completely correct?" Dozens of comments confirmed they'd shipped similar bugs. The code worked. The tests passed. The vulnerability was invisible unless you knew where to look.

Let's look at exactly what the AI generated and where the flaws hide.

What AI Generated — Code That Looks Right But Isn't

Here's a simplified version of what AI typically generates when you ask for authenticated API routes. This is based on Express.js (Node.js), but the same patterns show up in Python/Flask, Next.js API routes, and every other framework.

The Login Route (This Part Is Usually Fine)

// POST /api/login - AI generates this correctly most of the time
app.post('/api/login', async (req, res) => {
  const { email, password } = req.body;

  const user = await User.findOne({ email });
  if (!user) return res.status(401).json({ error: 'Invalid credentials' });

  const valid = await bcrypt.compare(password, user.passwordHash);
  if (!valid) return res.status(401).json({ error: 'Invalid credentials' });

  const token = jwt.sign({ userId: user.id }, process.env.JWT_SECRET, {
    expiresIn: '7d'
  });

  res.json({ token, user: { id: user.id, name: user.name, email: user.email } });
});

This looks correct — and for the most part, it is. The AI used bcrypt for password comparison, created a JWT token, and didn't leak the password hash in the response. Good.

Now here's where things go wrong:

The Projects Route (Here's the Bug)

// GET /api/projects/:userId — VULNERABLE!
// The AI added auth middleware but forgot authorization
app.get('/api/projects/:userId', authenticateToken, async (req, res) => {
  const projects = await Project.find({ owner: req.params.userId });
  res.json(projects);
});
🔓 The Flaw

See it? The authenticateToken middleware checks that you're logged in — but the route uses req.params.userId (from the URL) instead of req.user.userId (from the token). Any logged-in user can change the :userId in the URL and access anyone else's projects. This is called an Insecure Direct Object Reference (IDOR) — one of the most common security flaws in web apps.

The Auth Middleware (Looks Good, Missing Pieces)

// Auth middleware — AI-generated, common version
function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];

  if (!token) return res.sendStatus(401);

  jwt.verify(token, process.env.JWT_SECRET, (err, user) => {
    if (err) return res.sendStatus(403);
    req.user = user;
    next();
  });
}

This middleware works, but it's missing several things a security-conscious developer would add:

  • No token revocation check. If a user logs out or you need to ban them, their token still works until it expires (7 days!).
  • No token type validation. It doesn't distinguish access tokens from refresh tokens, so an attacker could replay a refresh token anywhere an access token is expected.
  • Overly informative error responses. Returning distinct 401 vs. 403 codes tells attackers whether a token is missing or merely invalid, giving them information to refine their attack.

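The first two gaps can be closed with a couple of extra checks after jwt.verify succeeds. This is a sketch under assumptions: it presumes you add a type claim and a unique jti (token ID) when signing, and the in-memory revokedTokens set stands in for a shared store like Redis in a real deployment.

```javascript
// Sketch: extra checks a hardened authenticateToken could run after
// jwt.verify succeeds. The claim names (type, jti) are illustrative.
const revokedTokens = new Set(); // add a token's jti on logout or ban

function checkTokenClaims(payload) {
  // Reject refresh tokens used where an access token is expected
  if (payload.type !== 'access') {
    return { ok: false, reason: 'wrong token type' };
  }
  // Reject tokens revoked by logout or an account ban
  if (payload.jti && revokedTokens.has(payload.jti)) {
    return { ok: false, reason: 'revoked' };
  }
  return { ok: true };
}
```

Call this inside the jwt.verify callback before setting req.user, and reject the request if it returns ok: false.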
The Admin Route (The Worst One)

// DELETE /api/projects/:id — AI forgot auth entirely!
app.delete('/api/projects/:id', async (req, res) => {
  await Project.findByIdAndDelete(req.params.id);
  res.json({ message: 'Project deleted' });
});
🔓 No Auth At All

This one is worse. The AI generated the delete route without any authentication middleware. Anyone, even without logging in, can delete any project if they know (or guess) its ID. AI tools sometimes "forget" to add middleware to routes generated in a different prompt or later in the conversation.

Understanding Each Part

Let's break down what each piece of this code does and where the security boundaries should be:

Authentication vs. Authorization

This is the single most important distinction in security, and AI tools blur it constantly:

  • Authentication = "Who are you?" (Login, tokens, sessions). Proving your identity.
  • Authorization = "What are you allowed to do?" (Permissions, roles, ownership). Proving your access rights.

AI is great at authentication. It knows how to hash passwords, create tokens, and verify logins. It's terrible at authorization — checking whether the logged-in user should actually be allowed to perform the specific action they're requesting.

The JWT Token

jwt.sign({ userId: user.id }, process.env.JWT_SECRET, { expiresIn: '7d' })

This creates a signed "pass" that contains the user's ID. When the user sends this token with future requests, the server can verify who they are without hitting the database every time. Note that the token is signed, not encrypted: the signature proves it hasn't been tampered with, but the payload is readable by anyone who holds it, so never put secrets inside. The expiresIn: '7d' means the token works for 7 days, which is convenient but means a stolen token is dangerous for a full week.

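Because the payload is only base64-encoded, anyone can read it with nothing but Node built-ins. A quick sketch (decodeJwtPayload is a throwaway helper for illustration, not a library function):

```javascript
// Decode a JWT's payload WITHOUT verifying it — this shows the payload
// is readable by anyone who holds the token.
function decodeJwtPayload(token) {
  const payloadB64 = token.split('.')[1]; // JWTs are header.payload.signature
  return JSON.parse(Buffer.from(payloadB64, 'base64url').toString('utf8'));
}
```

Only jwt.verify proves a token is authentic; decoding without verification is just a reminder to treat tokens like passwords and send them only over HTTPS.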
The Middleware Chain

app.get('/api/projects/:userId', authenticateToken, async (req, res) => { ... })

The authenticateToken in the middle is "middleware" — code that runs before your route handler. Think of it as a security guard at the door. The problem is the AI only put one guard (authentication) when you need two (authentication + authorization). Here's what the fixed version looks like:

// FIXED: Uses the token's userId, not the URL parameter
app.get('/api/projects', authenticateToken, async (req, res) => {
  // req.user.userId comes from the verified JWT token — can't be faked
  const projects = await Project.find({ owner: req.user.userId });
  res.json(projects);
});

Notice two changes: we removed :userId from the URL entirely (no need for it), and we pull the user ID from the verified token (req.user.userId) instead of the URL. Now users can only access their own projects.

The Delete Route Fix

// FIXED: Auth middleware + ownership check
app.delete('/api/projects/:id', authenticateToken, async (req, res) => {
  const project = await Project.findById(req.params.id);

  if (!project) return res.status(404).json({ error: 'Not found' });

  // Authorization: verify the logged-in user owns this project
  if (project.owner.toString() !== req.user.userId) {
    return res.status(403).json({ error: 'Not authorized' });
  }

  await Project.findByIdAndDelete(req.params.id);
  res.json({ message: 'Project deleted' });
});

Now the route has three layers of protection: authentication (valid token required), existence check (project must exist), and authorization (you must own it).

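Since every resource route needs the same ownership check, one option is to package it as reusable middleware so it is harder to forget. This is a sketch following this article's examples (requireOwnership, the owner field, and the Mongoose-style Model.findById call are assumptions, not a standard API):

```javascript
// Reusable authorization middleware: assumes authenticateToken has already
// set req.user, and that documents store their owner's ID in `owner`.
function requireOwnership(Model) {
  return async (req, res, next) => {
    const doc = await Model.findById(req.params.id);
    if (!doc) return res.status(404).json({ error: 'Not found' });
    if (doc.owner.toString() !== req.user.userId) {
      return res.status(403).json({ error: 'Not authorized' });
    }
    req.doc = doc; // the handler can reuse the fetched document
    next();
  };
}

// Usage sketch:
// app.delete('/api/projects/:id', authenticateToken,
//            requireOwnership(Project), deleteHandler);
```

With this pattern, a route missing its second guard stands out visually in the route definitions, which makes Step 1 of the checklist below much faster.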
The Security Review Checklist

Run through this checklist every time you ship AI-generated code that handles user data, authentication, payments, or sensitive operations. It takes 15–20 minutes and catches the vast majority of AI-generated security flaws.

Step 1: Check Every Route for Auth

Open every file that defines API routes or server endpoints. For each one, ask:

  • ✅ Does this route have authentication middleware?
  • ✅ Does this route check authorization (not just authentication)?
  • ✅ Does it use the user ID from the token, not from the URL or request body?
  • ✅ Are admin-only routes checking for admin roles?
🔍 Quick Method

Search your codebase for app.get, app.post, app.put, app.delete (or your framework's equivalent). Make a list of every route. Check each one for auth middleware. Any route without it is a potential vulnerability.

Step 2: Search for Hardcoded Secrets

AI loves putting secrets directly in code. Search your entire project for:

# Run these searches in your project root
grep -r "sk-" .              # OpenAI API keys
grep -r "sk_live" .          # Stripe live keys
grep -r "password" .         # Hardcoded passwords
grep -r "secret" .           # JWT secrets, API secrets
grep -r "Bearer " .          # Hardcoded auth tokens
grep -r "mongodb+srv" .      # Database connection strings
grep -r "API_KEY" .          # Any API key references

Every secret should be in a .env file (and that .env file should be in your .gitignore). If you find secrets in actual code files, move them immediately.

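To keep that rule enforced, make sure your .gitignore covers the usual env-file names (these are common conventions; match whatever filenames your project actually uses):

```
# .gitignore: keep secret-bearing files out of version control
.env
.env.local
.env.*.local
```

If a secret has already been committed, deleting the file isn't enough: git history keeps old versions, so rotate (regenerate) the secret as well.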
Step 3: Check Input Validation

For every route that accepts user input, check:

  • ✅ Is the input validated before it's used? (Type checks, length limits, format validation)
  • ✅ Is the input sanitized before being inserted into a database query? (Prevents SQL injection)
  • ✅ Is user-supplied content escaped before being rendered in HTML? (Prevents XSS attacks)
  • ✅ Are file uploads restricted by type and size?

AI frequently generates routes that take req.body and pass it directly to database queries without any validation. This is like leaving your front door open and hoping only friends walk in.

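A hand-rolled check is often enough to start. The field names below are illustrative, and in practice a schema library (zod, joi, express-validator) does this more thoroughly:

```javascript
// Validate a project-creation body before it touches the database.
// Returns an array of error messages; empty means the input passed.
function validateProjectInput(body) {
  const errors = [];
  if (typeof body.name !== 'string' || body.name.length < 1 || body.name.length > 100) {
    errors.push('name must be a string of 1-100 characters');
  }
  if (body.description !== undefined &&
      (typeof body.description !== 'string' || body.description.length > 2000)) {
    errors.push('description must be a string of at most 2000 characters');
  }
  return errors;
}
```

In the route handler, reject with a 400 if the array is non-empty, and only then pass the validated fields (never the raw body) into the query.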
Step 4: Review Error Handling

Check what happens when things go wrong:

  • ✅ Do error responses avoid leaking internal details? (No stack traces, file paths, or database errors sent to users)
  • ✅ Is there a global error handler that catches unhandled exceptions?
  • ✅ Do database errors get caught and return generic messages?

AI often generates catch(err) { res.json({ error: err.message }) } — which sends internal error details straight to the user. An attacker can use those details to understand your database structure, file layout, and technology stack.

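The safe pattern fits in a small function: log the real error server-side, return a generic message to the client. The ValidationError name here is illustrative:

```javascript
// Decide what the client sees vs. what stays in server logs.
function toClientError(err) {
  console.error(err); // full details (stack, query, paths) stay server-side
  if (err.name === 'ValidationError') {
    return { status: 400, body: { error: 'Invalid input' } };
  }
  return { status: 500, body: { error: 'Something went wrong' } };
}
```

In Express, the same idea typically lives in a global four-argument error handler (app.use((err, req, res, next) => { ... })) registered after all routes.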
Step 5: Verify Rate Limiting Exists

Check these endpoints specifically:

  • ✅ Login route — can someone try 10,000 passwords?
  • ✅ Signup route — can someone create 10,000 accounts?
  • ✅ Password reset — can someone spam reset emails?
  • ✅ Any route that sends emails, texts, or costs you money

AI almost never adds rate limiting unless you specifically ask for it. A single missing rate limit on your login route means anyone can brute-force passwords.

Step 6: Test with Unexpected Inputs

Try sending requests your app doesn't expect:

  • Send an empty body to POST routes
  • Send a string where a number is expected
  • Send an extremely long string (100,000+ characters)
  • Send special characters: '; DROP TABLE users;--
  • Send a valid token but change the user ID in the request
  • Try accessing endpoints without a token

If any of these crash your server or return unexpected data, you've found a vulnerability.

What AI Gets Wrong About Security

After reviewing thousands of AI-generated code samples, these are the patterns that show up consistently. Knowing these categories means you know where to look.

1. Auth Bypass (Most Common)

As we covered: AI builds login systems but skips per-route authorization. It confuses "is this person logged in?" with "is this person allowed to do this?" Every route that accesses data needs both checks.

What to look for: Routes using URL parameters (:userId, :id) to look up resources instead of the authenticated user's ID from the token.

2. Missing Rate Limiting

AI generates zero rate limiting by default. Your login endpoint, signup endpoint, password reset, email sending — all wide open for abuse. A brute-force attack on an unprotected login route can try thousands of passwords per minute.

What to look for: Any route that accepts credentials or triggers external actions (emails, payments, API calls) without express-rate-limit or equivalent middleware.

// Add this to protect your login route
const rateLimit = require('express-rate-limit');

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,  // 15 minutes
  max: 5,                     // 5 attempts per window
  message: { error: 'Too many login attempts. Try again in 15 minutes.' }
});

app.post('/api/login', loginLimiter, async (req, res) => {
  // ... login logic
});

3. Exposed API Keys and Secrets

AI regularly puts secrets directly in code — especially in example code, configuration files, and client-side JavaScript. Common offenders:

  • JWT secrets hardcoded in the file instead of process.env.JWT_SECRET
  • Database connection strings with passwords inline
  • API keys for OpenAI, Stripe, or other services in frontend code
  • .env files that aren't in .gitignore (so they get pushed to GitHub)

What to look for: Any string that looks like a key or password that isn't coming from process.env. Check your .gitignore includes .env.

4. Insecure Session Handling

AI generates sessions and tokens with insecure defaults:

  • Tokens that never expire — or expire in 30 days instead of hours
  • No token refresh mechanism — users keep the same token forever
  • Session cookies without security flags — missing httpOnly, secure, and sameSite attributes
  • No logout invalidation — "logging out" just deletes the token from the browser, but the token still works if someone intercepted it

// AI generates this (insecure):
res.cookie('session', token);

// You need this (secure):
res.cookie('session', token, {
  httpOnly: true,   // JavaScript can't read this cookie
  secure: true,     // Only sent over HTTPS
  sameSite: 'lax',  // Prevents CSRF attacks
  maxAge: 3600000   // Expires in 1 hour, not 7 days
});

5. Missing CORS Configuration

AI often sets CORS to allow all origins — meaning any website in the world can make requests to your API:

// AI generates this (too permissive):
app.use(cors());

// You need this (specific origins):
app.use(cors({
  origin: ['https://yourdomain.com', 'http://localhost:3000'],
  credentials: true
}));

6. No Input Sanitization

AI takes user input and passes it straight to database queries, HTML templates, or system commands without cleaning it first. This opens the door to SQL injection, cross-site scripting (XSS), and command injection attacks.

What to look for: Any place where req.body, req.params, or req.query values go directly into a database query or HTML output without validation or sanitization.

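For XSS specifically, "escaping" means converting HTML-significant characters into harmless entities before rendering. Template engines and frameworks like React do this automatically for normal output; when you build HTML strings by hand, you need something like this sketch:

```javascript
// Escape the five HTML-significant characters so user input renders
// as text instead of markup.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Note that & is replaced first so already-escaped entities aren't double-mangled, and that this only covers HTML body context; attributes, URLs, and inline scripts each need their own escaping rules.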
How to Debug with AI — Security Audit Prompts

Here's the twist: AI is actually great at finding security bugs — it's just bad at preventing them during initial generation. Use these specific prompts to turn your AI tool into a security auditor.

Cursor Security Audit

Cursor Prompt — Full Security Review
Review this codebase for security vulnerabilities. Specifically check:
1. Every API route — does it have authentication AND authorization?
2. Any hardcoded secrets, API keys, or passwords in code files
3. Missing rate limiting on login, signup, and password reset routes
4. Input validation — is user input sanitized before database queries?
5. Error handling — are internal error details exposed to users?
6. CORS configuration — is it too permissive?
7. Session/token settings — are cookies httpOnly and secure?

For each issue found, show me the exact file and line, explain the risk
in plain English, and provide the fixed code.

Claude Code Security Audit

Claude Code Prompt — Auth-Specific Review
Act as a penetration tester reviewing this application. I need you to:

1. List every API endpoint and whether it has auth middleware
2. For each protected endpoint, verify it checks authorization (not just
   authentication) — does it verify the requesting user owns the resource?
3. Check for IDOR vulnerabilities — any route using URL params to look up
   resources instead of the authenticated user's ID
4. Check if there's a way to access admin functionality as a regular user
5. Look for routes where the AI may have forgotten to add middleware

Output a table: Route | Method | Auth? | Authz? | Risk Level | Issue

Windsurf Security Audit

Windsurf Prompt — Dependency and Config Check
Scan this project for security issues in configuration and dependencies:

1. Check package.json for known vulnerable packages
2. Verify .env is in .gitignore
3. Check all environment variable usage — are any secrets hardcoded?
4. Review CORS, helmet, and security middleware configuration
5. Check cookie/session settings for security flags
6. Look for any TODO, FIXME, or HACK comments related to security
7. Verify HTTPS is enforced in production configuration

For each issue, rate it Critical/High/Medium/Low and explain the fix.
💡 Pro Tip: Cross-AI Review

If Cursor generated your code, paste it into Claude for the security review (or vice versa). Different AI models have different blind spots. A second AI reviewing the first AI's code consistently catches more issues than asking the same AI to review its own output. Think of it as getting a second opinion.

The "Attacker's Perspective" Prompt

This is the most powerful security prompt you can use with any AI tool:

Any AI Tool — Attacker Simulation
Pretend you're a malicious hacker trying to break into this application.
You have access to the frontend code and can see the API endpoints.

1. What would you try first?
2. What data could you access that you shouldn't be able to?
3. How would you escalate from a regular user to an admin?
4. What would you do to cause the most damage?
5. Which endpoint is the weakest?

Be specific — show me the exact curl commands or API calls you'd use.

This prompt forces the AI to think adversarially — exactly the mindset it's missing when it generates code. The responses are often eye-opening, revealing attack vectors you never considered.

Putting It All Together — Your Security Workflow

Here's how to build security review into your AI coding workflow without slowing you down:

  1. Generate code normally. Don't try to prompt-engineer perfect security on the first pass. Let the AI build the feature.
  2. Run the 6-step checklist from this article. Takes 15–20 minutes per feature. Focus on auth, secrets, rate limiting, input validation, error handling, and unexpected inputs.
  3. Use a second AI for review. Paste the code into a different AI tool with one of the security audit prompts above. Cross-AI review catches more bugs.
  4. Fix issues immediately. Don't add them to a backlog. Security bugs in production are infinitely harder to fix after a breach.
  5. Test the fixes. After applying security patches, verify they work by trying the attacks yourself (empty tokens, wrong user IDs, missing auth headers).

This workflow adds about 30 minutes per feature. It's the difference between shipping a side project and shipping something you can confidently put real users on.

📊 By the Numbers

According to the OWASP Top 10, Broken Access Control is the #1 web application security risk; in OWASP's testing, 94% of applications had some form of broken access control. IBM's Cost of a Data Breach Report puts the average cost of a breach at $4.88 million. Even for small apps, a breach can mean legal liability, lost trust, and months of cleanup.

What to Learn Next

Security is a skill that builds on itself. Now that you know how to review AI-generated code, go deeper on the specific vulnerabilities AI creates most often:

  • Security Basics for AI Coders — The foundation. Covers HTTPS, encryption, hashing, and the security mindset every vibe coder needs.
  • What Is SQL Injection? — The #1 database attack. AI generates unparameterized queries more often than you'd think. Learn how to spot and prevent them.
  • What Is XSS (Cross-Site Scripting)? — When user input gets rendered as HTML without sanitization, attackers can inject malicious scripts. AI-generated frontend code is especially vulnerable.
  • What Is Authentication? — A deeper dive into tokens, sessions, OAuth, and the auth patterns AI generates. Understand the building blocks so you can spot when they're assembled wrong.
📚 Recommended Learning Path

Start with Security Basics → then Authentication → then come back to this checklist with fresh eyes. You'll catch twice as many issues once you understand what the code is supposed to be protecting against.

Frequently Asked Questions

Is AI-generated code less secure than code written by humans?

Not necessarily less secure, but differently insecure. AI tends to produce code that works perfectly for the "happy path" — when everything goes right. The security gaps show up in edge cases: what happens when someone sends an empty token, hits the endpoint 10,000 times, or passes unexpected data types. A Stanford study found that developers using AI assistants were more likely to produce insecure code while believing it was secure. The issue isn't that AI writes bad code — it's that it writes code that looks so correct you don't question it.

What is the most common security flaw in AI-generated code?

Missing or incomplete authorization checks. AI is great at generating authentication (login/signup) code, but frequently forgets to add authorization (permission) checks on individual routes. It'll build you a login system and then leave API endpoints accessible to anyone with a valid token — regardless of whether they should have access to that specific resource. Always check: does every route verify not just "is this person logged in" but "is this person allowed to do THIS specific thing?"

How do I review AI-generated code for security if I'm not a security expert?

Use the checklist approach from this article: check every route for auth, search for hardcoded secrets, look for missing rate limiting, verify input validation exists, and test what happens with unexpected inputs. You don't need to be a security expert — you need to be systematic. The biggest wins come from simply asking "what happens if someone sends bad data here?" for every endpoint. Also, use AI itself as a reviewer: paste your code into a second AI session and ask it to find security vulnerabilities.

Should I use a different AI to review my AI-generated code?

Yes — this is one of the most effective strategies. Different AI models have different blind spots. If Cursor generated your code, paste it into Claude and ask for a security review (or vice versa). The reviewing AI will often catch issues the generating AI missed. Think of it like getting a second opinion from a different doctor. The specific prompts in this article's "How to Debug with AI" section are designed for exactly this cross-review workflow.

How often should I do a security review?

Do a quick check every time you add authentication, payment processing, or user data handling. Do a full review before any deployment to production. And do a comprehensive audit at least monthly for any app that handles real user data. The security review checklist in this article takes about 15–20 minutes for a typical feature. That small investment catches issues that could take weeks to fix after a breach. Build it into your workflow: generate code, review for security, then ship.