TL;DR: The OWASP Top 10 is a checklist of the 10 most common web security vulnerabilities. AI tools generate code that's vulnerable to most of them — especially SQL injection, broken authentication, XSS, and security misconfigurations. You don't need to become a security expert. You just need to know what to ask your AI to include. This guide gives you those exact prompts, plus a quick-reference checklist to run before you ship.
Why AI Coders Need This
Here's the uncomfortable truth: AI tools are great at making code that works. They're not so great at making code that's secure. A Stanford research study found that developers using AI coding assistants produced significantly more security vulnerabilities than developers writing code by hand. The AI isn't trying to hurt you — it just learned from millions of code examples, and a lot of that code was insecure.
The OWASP Top 10 is your shield. OWASP stands for Open Worldwide Application Security Project — a nonprofit that tracks how web apps get attacked. Every few years they update their list of the 10 most critical vulnerabilities. These aren't theoretical risks. They're the actual methods attackers use on actual apps right now.
The good news: once you know what to watch for, you can tell your AI exactly what to do. AI is actually great at writing secure code — when you ask for it specifically. Most vibe coders don't ask. This article changes that.
Who this is for: You're building real apps with AI tools (Claude, Cursor, ChatGPT, Copilot). You don't have a security background. You want to ship something you won't regret. This guide assumes zero prior security knowledge.
The OWASP Top 10 — Explained for Vibe Coders
Each item below has three parts: what it is in plain English, a real-world example of how it gets exploited, and the exact prompt to give your AI so it builds protection in from the start. The ones marked ⭐ are the most common in AI-generated code — pay extra attention to those.
A01: Broken Access Control
What it is: Your app lets users do things they shouldn't be able to do. User A can see User B's data. A regular user can access the admin panel. Someone can delete another person's account just by changing a number in the URL.
This is the #1 OWASP vulnerability — not because it's the most technically complex, but because it's the most forgotten. You build a feature, it works, you ship it. You never tested what happens when someone changes /account/123 to /account/124.
Your app has an API endpoint: GET /api/orders/456. User A is logged in and their order is #456. But User A also discovers they can fetch /api/orders/457 — which is User B's order. No error, no warning. Full order details, address, everything. This is called an "IDOR" (Insecure Direct Object Reference) and it's everywhere in AI-generated code.
The prompt: "For every API endpoint that returns user data, add server-side authorization checks that verify the logged-in user owns the requested resource. Never trust the ID in the URL or request body — always cross-check it against the authenticated user's session. Throw a 403 error if they don't match."
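A sketch of what that authorization check looks like in practice. The Express route and the `db.orders` helper below are hypothetical, shown only in comments; the ownership comparison itself is the point:

```javascript
// Pure ownership check: the authenticated user must own the record.
// Never trust the ID in the URL alone.
function authorizeOwner(sessionUserId, resourceOwnerId) {
  return sessionUserId != null && sessionUserId === resourceOwnerId;
}

// Hypothetical Express-style usage (route shape and db.orders are assumptions):
//
// app.get('/api/orders/:id', async (req, res) => {
//   const order = await db.orders.findById(req.params.id);
//   if (!order) return res.status(404).json({ error: 'Not found' });
//   if (!authorizeOwner(req.session.userId, order.userId)) {
//     return res.status(403).json({ error: 'Forbidden' });
//   }
//   res.json(order); // only reached when the session user owns order :id
// });
```

Run this check on every endpoint that touches user data, not just the obvious ones.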
A02: Cryptographic Failures
What it is: Sensitive data — passwords, credit cards, health info — gets stored or transmitted without proper protection. This includes storing passwords in plain text, using outdated encryption like MD5, or sending sensitive data over HTTP instead of HTTPS.
You don't need to understand cryptography. You just need to know which tools to use and which ones to avoid. AI often picks outdated or weak options because they were common in older training data.
Your app stores user passwords. AI generates code that uses md5(password) before storing. MD5 was cracked in 2004. If your database gets stolen (and eventually it might), every single password can be reversed using publicly available lookup tables in seconds. This happens constantly in quick-prototype code.
The prompt: "Store all passwords using bcrypt or Argon2id (never MD5, SHA1, or SHA256 alone). Use HTTPS for all traffic — no HTTP endpoints that transmit user data. Never log passwords, tokens, or credit card numbers. Store sensitive data encrypted at rest using AES-256."
⭐ A03: Injection (SQL, Command, LDAP)
What it is: An attacker tricks your app into executing their code by putting it into an input field. The most famous type is SQL injection — where someone types SQL commands into a form and your app runs them against your database. But injection can also happen with shell commands, LDAP queries, and more.
This is the one that causes headlines. SQL injection can dump your entire database, delete all your data, or let someone log in as any user — including admin. And AI generates SQL injection-vulnerable code constantly.
AI writes a login query like: db.query("SELECT * FROM users WHERE email='" + email + "'"). An attacker enters the email: ' OR '1'='1. The resulting query becomes: SELECT * FROM users WHERE email='' OR '1'='1'. Since '1'='1' is always true, this returns every user in your database — and logs the attacker in as the first one (usually your admin account).
The prompt: "Use parameterized queries (prepared statements) for all database interactions — never build SQL strings using string concatenation with user input. If using Node.js, use an ORM like Prisma or Drizzle that handles this automatically. If writing raw SQL, always use the ? placeholder syntax. For shell commands, never pass user input to exec() or spawn() — use an allowlist of permitted operations instead."
→ Deep dive: What Is SQL Injection? The Complete Guide for AI Coders
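To make the mechanics concrete, this sketch contrasts the two query styles. The actual database calls are left as comments because driver APIs vary; only the vulnerable string-building runs here:

```javascript
// ❌ VULNERABLE: attacker input becomes part of the SQL itself.
function unsafeQuery(email) {
  return "SELECT * FROM users WHERE email='" + email + "'";
}

// ✅ SAFE: parameterized queries send SQL and data separately, so the
// driver never interprets the data as SQL. Sketched per common APIs:
//
//   db.query("SELECT * FROM users WHERE email = ?", [email]);   // mysql2-style
//   prisma.user.findUnique({ where: { email } });               // Prisma ORM
```

Feeding the payload from the example above into the unsafe version produces exactly the always-true query the attacker wants.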
A04: Insecure Design
What it is: The app was designed without security in mind. This isn't about a specific code bug — it's about architecture decisions that made the whole thing vulnerable before a line of code was written. No rate limiting on login attempts. Password reset that can be guessed. APIs with no authentication layer at all.
For AI coders, this often shows up as: you asked AI to "build a login system" and it did — but it didn't add account lockout, brute force protection, or email verification. The code works, the design is broken.
AI builds a password reset feature. User enters their email, gets a 4-digit code. Sounds fine. But there's no limit on how many times you can try codes. An attacker can run a script that tries all 10,000 possible codes in minutes. The feature works perfectly — and is also completely bypassable.
The prompt: "Add rate limiting to all authentication endpoints — maximum 5 failed attempts per IP per 15 minutes, then temporary lockout. Password reset codes should be at least 6 characters, cryptographically random (not sequential), expire after 15 minutes, and be one-time-use only. Add CAPTCHA to high-risk forms."
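A minimal, single-process illustration of that rate-limit rule. Real apps should use a maintained package such as express-rate-limit or a Redis-backed limiter, since an in-memory map resets on restart and does not work across multiple servers:

```javascript
// In-memory fixed-window limiter sketch: 5 attempts per IP per 15 minutes.
const attempts = new Map(); // ip -> { count, windowStart }

function isRateLimited(ip, now = Date.now(), max = 5, windowMs = 15 * 60 * 1000) {
  const entry = attempts.get(ip);
  if (!entry || now - entry.windowStart > windowMs) {
    attempts.set(ip, { count: 1, windowStart: now }); // fresh window
    return false;
  }
  entry.count += 1;
  return entry.count > max; // the 6th attempt inside the window is blocked
}
```

Wire it into every authentication endpoint, including forgot-password and token refresh, not just the login form.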
⭐ A05: Security Misconfiguration
What it is: Your app or server is configured in an insecure way — often because the default settings are insecure and nobody changed them. Debug mode left on in production. Default admin passwords still active. Detailed error messages that show your file paths and database structure to anyone who gets an error. Open cloud storage buckets. Unnecessary services running on your server.
This is the one AI coders get hit by most often — especially in infrastructure and deployment code. AI sets things up for development convenience, not production security.
AI scaffolds your Express.js app and includes app.use(morgan('dev')) — which logs every request in detail. Fine for development. In production, this means your server logs contain user email addresses, query parameters, and sometimes tokens. If anyone gets your logs (a common breach vector), they get all of that. AI also frequently leaves NODE_ENV=development in deployment configs, which enables verbose error pages that reveal your entire stack trace to users when something breaks.
The prompt: "For production: disable debug mode, remove all development middleware, use generic error messages (never expose stack traces to users), set NODE_ENV=production, add security headers using helmet.js, disable directory listing, remove default admin accounts, and ensure no development/test credentials are present. Add a checklist comment block at the top of any deployment config listing these items."
→ Related: Security Basics Every AI Coder Needs to Know
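One way to enforce that checklist is a fail-fast startup guard. The environment variable names here (JWT_SECRET, CORS_ORIGIN, DEBUG) are assumptions for illustration; adapt them to your own app:

```javascript
// Startup guard sketch: refuse to boot if dev-only settings leak into production.
function checkProductionConfig(env) {
  const problems = [];
  if (env.NODE_ENV !== 'production') problems.push('NODE_ENV is not "production"');
  if (!env.JWT_SECRET || env.JWT_SECRET.length < 32) problems.push('JWT_SECRET missing or too short');
  if (env.DEBUG) problems.push('DEBUG mode is enabled');
  if (env.CORS_ORIGIN === '*') problems.push('CORS is wide open');
  return problems; // empty array = safe to boot
}

// In your entry point (helmet wiring sketched, per its docs):
//   const problems = checkProductionConfig(process.env);
//   if (process.env.NODE_ENV === 'production' && problems.length) {
//     console.error('Refusing to start:', problems);
//     process.exit(1);
//   }
//   app.use(require('helmet')()); // security headers in one line
```

Failing loudly at startup beats discovering the misconfiguration in a breach report.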
⭐ A06: Vulnerable and Outdated Components
What it is: You're using libraries, packages, or frameworks that have known security vulnerabilities. This is one of the most common real-world attack vectors — hackers don't need to find a bug in YOUR code. They find a bug in a library you imported, and they exploit that.
AI models have a training cutoff. When AI suggests npm install [package]@version, that version might be months or years old — and might have known CVEs (Common Vulnerabilities and Exposures) with public exploit code available.
In 2021, a vulnerability called Log4Shell was discovered in the Log4j Java library. It affected hundreds of thousands of apps. Companies had to scramble to update a dependency they didn't even know they were using (it was a dependency of a dependency). In Node.js, the node-serialize package had a critical remote code execution vulnerability used in multiple real-world attacks. AI tools frequently suggest packages that haven't been audited for years.
The prompt: "When suggesting npm packages, always recommend the latest stable version. After generating package.json, remind me to run npm audit and npm audit fix. Prefer packages with active maintenance (recent commits, many downloads, responsive maintainers). Avoid packages that haven't been updated in over a year. Set up Dependabot or Renovate for automated dependency updates."
→ Related: What Is npm audit? How to Find and Fix Vulnerable Dependencies
⭐ A07: Identification and Authentication Failures
What it is: Your login system can be bypassed or broken. This includes: no limits on login attempts (brute force), weak session management (sessions that don't expire), allowing weak passwords, not having multi-factor authentication for sensitive actions, and improper logout (old session tokens still work after logout).
Authentication is one of the hardest things to build correctly — and AI's shortcut is to build it simply, not securely. The result is usually code that looks like authentication but has critical gaps.
AI builds a JWT authentication system. The tokens work — but the secret key is hardcoded as "secret" (literally). There's also no expiration on the tokens, so a stolen token works forever. And there's no token invalidation on logout (JWTs are stateless — the only way to invalidate them is a blocklist or short expiry). An attacker who gets one token has permanent access.
The prompt: "For authentication: use a JWT secret that's at least 256 bits of randomness from an environment variable (never hardcoded). Set token expiry to 15 minutes for access tokens, 7 days for refresh tokens. Implement token rotation. Add account lockout after 5 failed attempts. Enforce minimum password length of 12 characters. Consider using an auth service like Clerk, Auth0, or Supabase Auth rather than rolling your own — they handle all of this correctly by default."
A08: Software and Data Integrity Failures
What it is: Your app trusts code or data that it shouldn't. This includes: loading JavaScript from third-party CDNs without verification (what if the CDN gets hacked?), auto-updating from untrusted sources, deserializing data from untrusted inputs, or having a CI/CD pipeline that anyone can push code to without review.
For most vibe coders, the most relevant risk here is loading scripts from external CDNs without Subresource Integrity (SRI) checks, and using an insecure deployment pipeline.
You load jQuery from a CDN: <script src="https://cdn.example.com/jquery.min.js">. If that CDN gets compromised (it's happened to major CDNs), the attacker can serve a modified version of jQuery to every visitor of your site — containing malware, keyloggers, or crypto miners. The 2018 British Airways breach used a closely related technique: attackers injected malicious code into a JavaScript file the site loaded, skimming payment card details of roughly 500,000 customers.
The prompt: "For any script loaded from an external CDN, add Subresource Integrity (SRI) attributes — include the integrity hash and crossorigin attribute. Self-host critical JavaScript when possible. For deployment pipelines, require code review before any merge to main, and use signed commits. Never auto-deploy from any branch without review."
A09: Security Logging and Monitoring Failures
What it is: When an attack happens, you can't detect it, investigate it, or respond to it because you have no logs. No alerts when someone tries to brute-force your login. No record of who changed what in the database. No notification when someone exports a large amount of data. An attacker can be inside your system for weeks before you notice.
Most AI-generated apps have zero security logging. The app works — there's just no paper trail when something goes wrong.
The average time to detect a breach is 194 days. That's over 6 months where an attacker is in your system, reading data, potentially exfiltrating it — and you have no idea. With basic security logging, that timeline collapses dramatically. Login from a new country at 3am? Alert. Hundred failed login attempts in a minute? Alert. Admin account exported a database table? Alert.
The prompt: "Add security event logging for: all failed login attempts (with IP and timestamp), successful logins from new IP addresses, password change events, admin actions, any large data exports, and API rate limit violations. Log these to a separate, append-only log store. Set up basic alerting (email or Slack) for anything suspicious. Never log passwords or full credit card numbers."
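A minimal shape for that event log. The in-memory array and the alert hook are placeholders; real code would write to a separate append-only store and wire up email or Slack alerts:

```javascript
// Append-only security event log sketch.
const securityLog = []; // placeholder: use a dedicated log store in production

function countRecentFailures(ip) {
  return securityLog.filter((e) => e.type === 'login_failed' && e.ip === ip).length;
}

function logSecurityEvent(type, details) {
  const event = { type, ...details, at: new Date().toISOString() };
  securityLog.push(event); // append-only: events are never edited or removed
  if (type === 'login_failed' && countRecentFailures(details.ip) >= 5) {
    // sendAlert(...) goes here: a hypothetical email/Slack hook, not implemented
  }
  return event;
}
```

Note what is deliberately absent: passwords, tokens, and card numbers never enter the log in the first place.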
A10: Server-Side Request Forgery (SSRF)
What it is: Your server fetches a URL on behalf of the user — and an attacker tricks it into fetching URLs it shouldn't. Like internal services that aren't exposed to the internet. Or cloud metadata endpoints that contain credentials. Your server becomes a proxy for the attacker to explore your internal network.
This is most relevant when your app has a feature like "import from URL," "fetch preview," "webhook," or any feature where you enter a URL and the server fetches it.
Your app lets users enter a URL to preview a link. AI writes: const response = await fetch(userProvidedUrl). An attacker enters: http://169.254.169.254/latest/meta-data/iam/security-credentials/ — the AWS metadata endpoint. Your server dutifully fetches it and returns the AWS credentials for your entire account. This exact vulnerability was used in the 2019 Capital One breach, which exposed 100 million customer records.
The prompt: "For any feature that fetches user-provided URLs: validate and allowlist the URL (only allow http:// and https://, block private IP ranges like 10.x.x.x, 172.16.x.x, 192.168.x.x, and 169.254.x.x). Use a DNS validation library to prevent DNS rebinding. Set strict timeouts. Log all external fetch requests. Consider using a dedicated URL-fetching service that handles SSRF protection."
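A first-pass version of that URL guard, implementing the scheme allowlist and private-range blocks from the prompt. Note that this checks the hostname as written; a production guard must also resolve DNS and re-check the resulting IP to stop DNS rebinding:

```javascript
// Reject URLs that point at private networks or cloud metadata endpoints.
function isSafeUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return false;
  const privatePatterns = [
    /^localhost$/i,
    /^127\./,                      // loopback
    /^10\./,                       // RFC 1918
    /^172\.(1[6-9]|2\d|3[01])\./,  // RFC 1918 (172.16–172.31)
    /^192\.168\./,                 // RFC 1918
    /^169\.254\./,                 // link-local, incl. cloud metadata IP
    /^0\./,
    /^\[?::1\]?$/,                 // IPv6 loopback
  ];
  return !privatePatterns.some((re) => re.test(url.hostname));
}
```

With this in place, the Capital One-style metadata fetch from the example above is rejected before any request leaves your server.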
Wait — what about XSS? Cross-Site Scripting (XSS) is technically covered under A03 Injection in the 2021 list, but it's distinct enough — and common enough in AI-generated front-end code — that it deserves a dedicated mention. XSS is when an attacker injects malicious JavaScript into your page that runs in other users' browsers. The prevention: always escape user-generated content before rendering it in HTML, use Content-Security-Policy headers, and use frameworks like React that escape by default (but don't use dangerouslySetInnerHTML with user data). → Full guide: What Is XSS and How to Prevent It
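For vanilla JS or server-side templates, the escaping rule can be as small as this helper (frameworks like React apply the equivalent automatically on normal rendering paths):

```javascript
// Escape the five HTML-significant characters so user content renders as
// inert text instead of executing as markup or script.
function escapeHtml(unsafe) {
  return String(unsafe)
    .replace(/&/g, '&amp;') // must run first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Usage in a template string:
//   `<p>${escapeHtml(userComment)}</p>`  // injected <script> tags become text
```

Escaping is context-specific: this covers HTML body and attribute values, but content placed inside a `<script>` block or a URL needs different handling.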
What AI Gets Wrong (3 Mistakes That Keep Showing Up)
Even when you ask AI for secure code, it makes specific recurring mistakes. Knowing these three means you can catch them in review.
It Secures the Happy Path, Not the Edge Cases
AI builds the login form with proper password hashing — great. But it forgets to add rate limiting to the forgot-password endpoint, the API token refresh endpoint, or the "check if username exists" endpoint. Attackers don't use the front door. They use every side entrance. Always ask: "What security controls should be on EVERY endpoint, not just the main ones?"
It Leaves Development Settings in Production Code
AI writes code for your prompt — which is usually a development scenario. It leaves debug mode on, uses weak secrets like "dev-secret-key", adds verbose logging that exposes sensitive data, and keeps CORS wide open (Access-Control-Allow-Origin: *). Before you deploy anything AI wrote, explicitly ask: "Review this code for any settings that are appropriate for development but dangerous in production."
It Forgets Authorization Even When Authentication Is Correct
Authentication = "Who are you?" Authorization = "What are you allowed to do?" AI is getting better at building authentication. It still frequently forgets authorization. A logged-in user can access any other user's data because the API checks "are you logged in?" but not "do you own this resource?" Always ask: "For every API endpoint that accesses user data, verify the authenticated user is authorized to access the specific requested resource."
The Pre-Launch Security Checklist
Run through this before you ship anything that handles real users or real data. It's not exhaustive — but it covers the highest-impact items. Every "yes" here blocks a real attack vector.
- Injection: no SQL is built with string concatenation. Search your codebase for db.query( and queryRawUnsafe.
- XSS: no dangerouslySetInnerHTML or innerHTML with user data. Content-Security-Policy header set.
- Secrets: nothing hardcoded; every key and password lives in a .env file that's in .gitignore.
- Dependencies: you've run npm audit (or equivalent). No high/critical vulnerabilities. Dependencies are relatively recent versions.
- CORS: not left wide open in production (no Access-Control-Allow-Origin: *).
The power move: Paste your entire authentication/database file into Claude and ask: "Review this code for OWASP Top 10 vulnerabilities. List each one you find, explain why it's a problem, and show me the corrected code." AI is excellent at reviewing code for security issues. What it struggles with is generating secure code from scratch.
Going Deeper on the High-Risk Items
These are the vulnerabilities most common in AI-generated code. Each has a dedicated guide with more examples, more explanation, and more specific fixes.
- What Is SQL Injection? Complete Guide for AI Coders — The most exploited database vulnerability, why AI generates vulnerable code, and how ORMs save you
- What Is XSS (Cross-Site Scripting)? — How attackers inject JavaScript into your pages and how React, Vue, and proper escaping prevent it
- What Is CSRF? — The attack that tricks users into making requests they didn't intend, and the token-based defense
- What Is Input Validation? — Why "never trust user input" is the most important security rule, and how to enforce it
- What Is npm audit? — The one command that checks all your dependencies for known vulnerabilities
- What Is Password Hashing? — Why you never store passwords directly and how bcrypt works without needing to understand it
- Secrets Management for AI Coders — Where to put API keys, database passwords, and other secrets so they don't end up on GitHub
- How to Review AI-Generated Code for Security Issues — A practical checklist for auditing what your AI wrote
Frequently Asked Questions
What is the OWASP Top 10?
The OWASP Top 10 is a regularly updated list of the 10 most critical web application security risks, published by the Open Worldwide Application Security Project. It's the closest thing the security world has to an official checklist of how web apps get attacked. It's not a law or certification — it's a practical guide used by developers and security teams worldwide. The current edition is from 2021, with a 2025 update expected.
Does AI-generated code have OWASP vulnerabilities?
Yes, frequently. Studies show AI coding assistants regularly produce code vulnerable to SQL injection, XSS, broken authentication, and security misconfigurations. AI tools optimize for code that works, not code that's secure. The good news: if you know what to ask for, AI can also generate secure code. The prompts in this guide tell you exactly what to say.
Which OWASP vulnerabilities are most common in AI-generated code?
The most common in AI-generated code are: Injection (A03) — AI often uses string concatenation instead of parameterized queries; Security Misconfiguration (A05) — AI leaves debug mode on and default credentials in place; Vulnerable Dependencies (A06) — AI suggests outdated packages; XSS — AI rarely sanitizes output in vanilla JS or template literals; and Broken Authentication (A07) — AI uses weak secrets and no rate limiting. These five account for the vast majority of real-world vulnerabilities found in AI-assisted projects.
Do I need to implement all 10 for a small project?
For any app that handles real users or real data, yes — but the implementation is lighter than you think. Most OWASP protections are one-time configuration choices, not ongoing work. Use an ORM (prevents injection), add helmet.js (handles several headers at once), use a managed auth service like Clerk or Auth0 (prevents broken auth), and run npm audit regularly. You can cover 80% of the Top 10 in an afternoon. The security checklist in this article gives you the prioritized starting point.
How do I get AI to write more secure code by default?
Add a security context to your system prompt or project instructions. Something like: "Always follow OWASP Top 10 best practices. Use parameterized queries for all database operations. Add rate limiting to all authentication endpoints. Never hardcode secrets. Validate and sanitize all user inputs. Add security headers using helmet.js." When you frame the entire project with security requirements up front, AI follows them throughout — rather than you having to remember to ask for each one.
You've got this. Security feels overwhelming until you realize it's mostly about a handful of specific decisions made at specific moments. You don't need to become a security engineer. You need to know what to ask for, what to look for in review, and how to run through a checklist before you ship. That's all this is. You now know more about web security than most people who built the apps you use every day.