TL;DR: AI generates code that works but often leaves security doors wide open. This checklist covers 15 things to verify before you ship — from exposed API keys and unhashed passwords to SQL injection, missing HTTPS, and vulnerable dependencies. Think of it as a building inspection for your app. Print it. Use it. Every time.

Why AI Coders Need This Checklist

Here's something nobody tells you when you start building with AI: the code works, but it's not safe.

When you ask Claude or ChatGPT to build a login page, it builds a login page. It focuses on making it functional — does the form submit? Does the user get created? Can they log in? And it nails all of that. What it doesn't do — unless you specifically ask — is lock the doors behind it.

A 2023 Stanford study found that developers using AI assistants produced code with security vulnerabilities just as often as those writing by hand — but here's the kicker — they were more confident their code was secure. The AI gave them working code, and working code feels like secure code. It's not.

Think about it in construction terms. When a general contractor builds a house, the framing goes up and the house looks great. But before anyone moves in, an inspector walks through with a checklist. Are the electrical outlets grounded? Is the plumbing up to code? Are the smoke detectors wired? The house "works" without those things — until there's a fire.

Your app is the same. AI built the house. This checklist is the inspection.

If you're new to security as an AI-assisted builder, don't worry — you don't need to become a cybersecurity expert. You just need to know what to check and how to tell your AI to fix it.

The Scenario That Should Scare You

You asked your AI to build a user signup page. It created a beautiful form with email and password fields, connected it to a database, and even added a "Welcome!" confirmation email. Everything worked perfectly in testing.

Here's what it didn't do:

  • It stored passwords as plain text (anyone who sees the database sees every password)
  • It put your database connection string directly in the code (push to GitHub and it's public)
  • It didn't limit how many times someone can try to log in (attackers can guess passwords forever)
  • It didn't check what users typed into the form (attackers can inject code through the email field)
  • It didn't set up HTTPS (passwords travel over the internet like postcards — anyone can read them)

The app worked. It just wasn't safe. And you wouldn't know until something went wrong.

This isn't hypothetical. This is what AI-generated code looks like every single day. The good news? Every one of these problems has a straightforward fix. That's what the checklist is for.

The 15-Point Security Checklist

Walk through each item before you ship. If you can check all 15, you're ahead of most developers — including the ones with CS degrees.

🔐 Authentication & Access (Items 1–4)

This category is about who can get in and what they can do. Think of it as the locks, keys, and security cameras for your building.

1. Are your API keys and secrets hidden?

API keys are like the master key to your building. They give access to services you're paying for — your database, your email sender, your payment processor. AI loves to put these keys right in the code, because that's the fastest way to make things work.

The problem: if your code ends up on GitHub (or anywhere public), those keys are exposed. Bots scan GitHub constantly looking for API keys. They find yours, and suddenly someone is sending 50,000 emails through your account or running up your cloud bill.

What to check: Search your entire project for any string that looks like a password, key, or token. Search for sk-, api_key, password, secret, token. If you find any, move them to environment variables (a .env file that never gets uploaded to GitHub). For a deeper dive, see our API security guide.

Tell your AI: "Move all API keys, passwords, and secrets into environment variables. Add .env to .gitignore. Show me where each one was hardcoded."
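
Here's what the fix looks like in practice. This is a sketch, not a library function — `requireEnv` is a hypothetical helper, and `STRIPE_SECRET_KEY` is just an example variable name:

```javascript
// Sketch: read secrets from environment variables instead of hardcoding them.
// requireEnv is a hypothetical helper you'd write yourself.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup instead of crashing deep inside a request later
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const stripeKey = requireEnv('STRIPE_SECRET_KEY');
// The real value lives in .env locally and in your host's secrets settings.
```

The fail-fast check matters: a missing variable should stop the app at boot with a clear message, not silently connect with `undefined`.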

2. Are passwords being hashed?

When a user creates an account, their password needs to be scrambled before you store it. This scrambling is called "hashing." It turns MyPassword123 into something like $2b$10$X7z3... — a string that can't be reversed back into the original password.

If you store passwords as plain text (exactly what the user typed), anyone who gets access to your database — a hacker, a rogue employee, even you — can see every user's password. And since most people reuse passwords, you've just compromised their bank account, email, and everything else.

What to check: Look at the code that saves new users. Is there a hashing function (like bcrypt.hash() or argon2.hash()) being called before the password hits the database? If the password goes straight from the form into the database without transformation, it's not hashed.

Tell your AI: "Are passwords being hashed before storage? Use bcrypt with at least 10 salt rounds. Show me the signup and login code."

3. Is there rate limiting on login attempts?

Rate limiting means putting a cap on how many times someone can try something in a given period. Without it, an attacker can try thousands of passwords per minute against your login page until they guess the right one. This is called a "brute force" attack — it's the digital equivalent of trying every key on a keyring until one opens the door.

What to check: Try logging in with a wrong password 20 times in a row. Does anything stop you? If you can keep guessing forever, you need rate limiting. A good rule: after 5 failed attempts, lock the account or add a delay for 15 minutes.

Tell your AI: "Add rate limiting to the login endpoint. Maximum 5 failed attempts per IP address per 15 minutes. Return a clear error message when the limit is hit."
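
The core logic is simple enough to sketch. In production you'd typically use a library like express-rate-limit or a Redis-backed store so limits survive restarts; this in-memory version just shows the idea (the constants mirror the 5-per-15-minutes rule above):

```javascript
// Minimal in-memory rate limiter sketch: track failure timestamps per IP,
// refuse once there are too many inside the window.
const MAX_ATTEMPTS = 5;
const WINDOW_MS = 15 * 60 * 1000; // 15 minutes

const attempts = new Map(); // ip -> array of failure timestamps

function isRateLimited(ip, now = Date.now()) {
  // Keep only failures that are still inside the window
  const recent = (attempts.get(ip) || []).filter(t => now - t < WINDOW_MS);
  attempts.set(ip, recent);
  return recent.length >= MAX_ATTEMPTS;
}

function recordFailure(ip, now = Date.now()) {
  const list = attempts.get(ip) || [];
  list.push(now);
  attempts.set(ip, list);
}
```

On each login attempt: check isRateLimited first, and call recordFailure whenever the password is wrong.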

4. Are user permissions actually enforced?

This one is subtle but dangerous. Say your app has regular users and admin users. Can a regular user access admin pages just by typing the URL directly? AI often builds the UI so that regular users don't see the admin button — but it doesn't always block the actual route.

It's like building an "Employees Only" door but not putting a lock on it. The sign keeps honest people out. It doesn't stop anyone who tries the handle.

What to check: Log in as a regular user. Then manually type the URL for admin pages or API endpoints. If you can access them, your permissions are cosmetic, not real.

Tell your AI: "Add server-side authorization checks to every admin route. Don't just hide the UI — block the request if the user doesn't have the right role."
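
Here's what a real lock on that door looks like — an Express-style middleware sketch. It assumes your auth layer has already set `req.user`; the names are illustrative, not from any specific framework's API:

```javascript
// Sketch of server-side authorization: the request is rejected on the
// server regardless of what the UI shows. Assumes req.user was populated
// by your authentication middleware.
function requireRole(role) {
  return function (req, res, next) {
    if (!req.user || req.user.role !== role) {
      res.status(403).json({ error: 'Forbidden' });
      return; // stop here — never fall through to the handler
    }
    next(); // authorized: continue to the actual route handler
  };
}

// Usage (Express-style): app.get('/admin', requireRole('admin'), handler)
```

Hiding the admin button is cosmetic; this check is what actually stops someone typing the URL by hand.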

🛡️ Data Protection (Items 5–8)

This category is about what happens when data comes into your app. Untrusted data is like an unknown person walking onto your jobsite — you need to check their credentials before letting them touch anything.

5. Is user input being validated?

Input validation means checking that the data users type into your forms is actually what you expect. An email field should contain an email. An age field should contain a number. A name field shouldn't contain JavaScript code.

Without validation, attackers can type anything into your forms — including code that your app will accidentally execute. It's like accepting a delivery without checking the manifest. Most deliveries are fine. The one that isn't could contain something that takes down your whole operation.

What to check: Go to every form in your app and type something unexpected. Put <script>alert('hi')</script> in the name field. Put '; DROP TABLE users;-- in the search bar. If the app accepts this without complaint, your inputs aren't validated.

Tell your AI: "Add input validation to every form field and API endpoint. Validate type, length, and format. Reject anything that doesn't match expected patterns."
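
To show what "validate type, length, and format" means in code, here's a hand-rolled sketch for a signup payload. Real projects usually reach for a schema library like zod or joi; the specific field rules here are illustrative:

```javascript
// Server-side validation sketch: check type, format, and length, and
// reject anything unexpected with a clear list of errors.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateSignup(body) {
  const errors = [];
  if (typeof body.email !== 'string' || !EMAIL_RE.test(body.email)) {
    errors.push('email must be a valid email address');
  }
  if (typeof body.password !== 'string' ||
      body.password.length < 8 || body.password.length > 128) {
    errors.push('password must be 8-128 characters');
  }
  if (typeof body.name !== 'string' || body.name.length > 100 ||
      /[<>]/.test(body.name)) {
    errors.push('name must be under 100 characters with no angle brackets');
  }
  return { ok: errors.length === 0, errors };
}
```

The key habit: validate on the server even if the form also validates in the browser — attackers can skip your form entirely and hit the API directly.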

6. Are you protected against SQL injection?

SQL injection is when an attacker types database commands into your form fields, and your app accidentally runs them. Imagine you have a search box. A normal user types "blue shoes." An attacker types ' OR 1=1; DROP TABLE users;-- and suddenly your entire user database is deleted.

This happens when your code takes what the user typed and plugs it directly into a database query without any protection. AI does this constantly because it's the simplest way to write the code.

What to check: Look at your database queries. If you see user input being concatenated (glued) directly into SQL strings with + or template literals, you're vulnerable. Safe code uses "parameterized queries" or an ORM (a tool that builds queries safely for you).

Tell your AI: "Review all database queries for SQL injection vulnerabilities. Convert any string-concatenated queries to parameterized queries or use the ORM's built-in methods."
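
Side by side, the unsafe and safe shapes look like this. The `db.query(sql, params)` call at the end is a placeholder for your driver's parameterized-query method (pg, mysql2, and better-sqlite3 all provide one):

```javascript
// Demonstration: why concatenation is dangerous, and the parameterized
// shape that fixes it.
const userInput = "' OR 1=1; DROP TABLE users;--";

// UNSAFE: the attacker's text becomes part of the SQL command itself
const unsafeQuery = `SELECT * FROM users WHERE email = '${userInput}'`;

// SAFE: the SQL and the data travel separately; the driver treats the
// input strictly as a value, never as SQL
const safeSql = 'SELECT * FROM users WHERE email = ?';
const safeParams = [userInput];
// e.g. db.query(safeSql, safeParams) with your database driver
```

In the safe version, even the worst input is just a weird email address the query fails to find — not a command the database runs.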

7. Are you protected against XSS?

Cross-site scripting (XSS) is when an attacker injects malicious code that runs in other users' browsers. Example: someone puts JavaScript code in their "About Me" profile field. When other users view that profile, the code runs in their browser — it could steal their login session, redirect them to a fake site, or worse.

This is like someone leaving a booby trap in a shared space. They write it once, and every person who walks through gets hit.

What to check: Anywhere your app displays user-generated content (comments, profiles, messages), try entering <script>alert('xss')</script>. If a popup appears, you're vulnerable. The fix is to "sanitize" or "escape" the content — converting special characters so browsers display them as text instead of executing them.

Tell your AI: "Review all places where user input is rendered in the browser. Ensure all output is properly escaped or sanitized. Use a library like DOMPurify for any HTML content."
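
For plain text output, "escaping" is a small, mechanical transformation. Most template systems (React JSX, EJS's <%= %>, Handlebars) do this for you automatically; this sketch just shows what's happening under the hood:

```javascript
// Minimal HTML-escaping sketch: convert special characters so the browser
// displays them as text instead of executing them as markup.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')   // must come first, or we'd double-escape
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

After escaping, a malicious profile field renders as the literal text <script>... on the page — harmless. For content that legitimately needs HTML (rich-text comments, for example), escaping isn't enough; that's where a sanitizer like DOMPurify comes in.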

8. Are file uploads restricted?

If your app lets users upload files (profile pictures, documents, etc.), you need to control what they can upload. Without restrictions, someone could upload a malicious script disguised as an image, and your server might execute it.

It's like having a mail slot on your front door with no size limit — eventually someone pushes something through that you didn't want inside.

What to check: Try uploading a file with a .exe or .php extension. Try uploading a 500MB file. If the app accepts both without complaint, your upload handling needs work. Good upload security checks file type, file size, and stores uploads outside your main code directory.

Tell your AI: "Add file upload validation: whitelist allowed file types (images only: jpg, png, webp), limit file size to 5MB, rename files on upload, and store them outside the web root."
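
The checks themselves can be a small pure function. This sketch mirrors the rules in the prompt above; note that in production the declared MIME type alone isn't trustworthy (clients can lie about it), so treat this as the first gate, not the only one:

```javascript
// Upload validation sketch: allow-list the type, cap the size.
// ALLOWED_TYPES and MAX_BYTES are illustrative values.
const ALLOWED_TYPES = new Set(['image/jpeg', 'image/png', 'image/webp']);
const MAX_BYTES = 5 * 1024 * 1024; // 5MB

function validateUpload(file) {
  if (!ALLOWED_TYPES.has(file.mimeType)) {
    return { ok: false, reason: 'file type not allowed' };
  }
  if (file.sizeBytes > MAX_BYTES) {
    return { ok: false, reason: 'file too large' };
  }
  return { ok: true };
}
```

The other two rules from the prompt — renaming files on upload and storing them outside the web root — happen in your storage layer, so a user can never request their uploaded file by its original name and have the server execute it.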

🏗️ Infrastructure (Items 9–11)

Infrastructure is the foundation your app sits on. These checks are like making sure the building has proper insulation, wiring, and plumbing — stuff users never see but that keeps everything from falling apart.

9. Is HTTPS enabled?

HTTPS encrypts the data traveling between your users and your server. Without it, everything — passwords, personal info, session tokens — is sent as plain text. Anyone on the same WiFi network (think: coffee shop) can read it.

Check your URL. If it says http:// instead of https://, your traffic isn't encrypted. It's like sending a letter without an envelope — anyone who handles it can read it.

What to check: Visit your live site. Does the browser show a lock icon? Does the URL start with https://? Try accessing http:// — does it automatically redirect to https://? If not, you need to set up SSL/TLS certificates (Let's Encrypt is free) and force HTTPS redirect.

Tell your AI: "Set up HTTPS with Let's Encrypt. Configure the server to redirect all HTTP traffic to HTTPS. Enable HSTS headers."
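
Most hosting platforms and reverse proxies can do the redirect for you with a single setting, which is usually the better option. If you do it in app code, the decision logic looks roughly like this (behind a proxy, the original protocol arrives in the x-forwarded-proto header):

```javascript
// Sketch of the HTTP-to-HTTPS redirect decision. Returns the URL to
// redirect to, or null if the request is already secure.
function httpsRedirectTarget(req) {
  const proto = req.headers['x-forwarded-proto'] || req.protocol;
  if (proto === 'https') return null; // already encrypted, no redirect
  return `https://${req.headers.host}${req.url}`;
}
```

A middleware would call this on every request and issue a 301 redirect whenever it returns a URL.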

10. Is CORS configured properly?

CORS (Cross-Origin Resource Sharing) controls which websites are allowed to make requests to your server. Without proper CORS configuration, any website in the world can send requests to your API and potentially access your users' data.

Think of it like a guest list at a private event. Without CORS, there's no guest list — anyone can walk in. AI often sets CORS to "allow everything" because it eliminates errors during development. That's fine for testing. It's dangerous in production.

What to check: Search your code for cors, Access-Control, or * (wildcard). If you see origin: '*' or Access-Control-Allow-Origin: *, your API accepts requests from any website. Change it to only allow your actual domain.

Tell your AI: "Configure CORS to only allow requests from my production domain. Remove any wildcard (*) origins. Set appropriate methods and headers."
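
Under the hood, "configure CORS" means checking the request's Origin against an allow-list before sending back the Access-Control headers. This sketch shows the logic that a library like the cors npm package implements for you (the domain is a placeholder for yours):

```javascript
// CORS allow-list sketch: only requests from known origins get the
// headers that let a browser read the response.
const ALLOWED_ORIGINS = new Set(['https://myapp.com']); // your real domain(s)

function corsHeadersFor(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) {
    return null; // no CORS headers: the browser blocks the cross-origin read
  }
  return {
    'Access-Control-Allow-Origin': origin, // echo the specific origin, never '*'
    'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE',
    'Access-Control-Allow-Headers': 'Content-Type,Authorization',
  };
}
```

The contrast with origin: '*' is the whole point — the wildcard hands those headers to every site on the internet.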

11. Are environment variables keeping secrets out of your code?

This ties back to item #1, but it's important enough to check from the infrastructure side. Your code — the files that get deployed, committed to GitHub, or shared with collaborators — should contain zero passwords, API keys, database URLs, or secret tokens.

All of those should live in environment variables (a .env file locally, and your hosting platform's "environment" or "secrets" settings in production).

What to check: Open your .gitignore file. Is .env listed? If not, add it immediately. Then search your codebase for anything that looks like a secret. Check your Git history too — if a key was ever committed, even if it's been removed, it's still in the history. You may need to rotate (change) that key.

Tell your AI: "Audit the codebase for any hardcoded secrets, API keys, or credentials. Ensure .env is in .gitignore. List any secrets that may have been committed in Git history."
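
You can do a rough first-pass scan yourself with a few regex patterns — dedicated tools like gitleaks or trufflehog do this far more thoroughly (including Git history), but this sketch shows the kind of patterns they look for:

```javascript
// Rough secret-scanning sketch. The patterns are illustrative examples of
// common key formats, not an exhaustive list.
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9]{20,}/,                              // "sk-" style API keys
  /AKIA[0-9A-Z]{16}/,                                 // AWS access key IDs
  /(password|secret|api_key)\s*[:=]\s*['"][^'"]+['"]/i, // hardcoded assignments
];

function findSecrets(sourceText) {
  return SECRET_PATTERNS.filter(re => re.test(sourceText));
}
```

Run something like this across every file before a release; code that reads from process.env should come back clean.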

📦 Dependencies (Items 12–13)

Dependencies are the pre-built packages your app uses — code written by other people that you pull in to save time. When AI builds your app, it installs dozens of these. Each one is a potential entry point for attackers if it has known vulnerabilities.

12. Have you run a security audit on your packages?

If you're building with JavaScript/Node.js, running npm audit scans all your installed packages for known security vulnerabilities. It checks a database of reported issues and tells you which packages need updating.

Think of it like a product recall check. Your building uses hundreds of components from different manufacturers. A recall check tells you if any of them have known defects.

What to check: Run npm audit (for Node.js projects) or the equivalent for your language (pip-audit for Python, bundle audit for Ruby). Read the output. "Critical" and "High" severity issues should be fixed before shipping. Run npm audit fix to automatically patch what it can.

Tell your AI: "Run npm audit and fix all critical and high severity vulnerabilities. For any that can't be auto-fixed, explain the risk and suggest alternatives."

13. Are your dependencies up to date?

AI models are trained on data from specific time periods. When AI writes your code, it might install package versions from months or even years ago. Older versions often have known vulnerabilities that have been fixed in newer releases.

What to check: Run npm outdated to see which packages have newer versions. Pay special attention to packages that are multiple major versions behind. Not every update is urgent, but security-related packages (authentication libraries, encryption libraries, web frameworks) should stay current.

Tell your AI: "Check for outdated dependencies. Update all security-critical packages to their latest stable versions. Flag any breaking changes I need to know about."

🚀 Before Going Live (Items 14–15)

You're almost there. These last two items are the final walkthrough before you hand over the keys.

14. Do error messages keep secrets?

When something goes wrong in your app, the error message should help the user without helping an attacker. A bad error message looks like this: Error: Connection to PostgreSQL at db.myserver.com:5432 failed, user 'admin', password 'correct_horse_battery'. That just told an attacker your database address, username, and password.

Good error messages say: Something went wrong. Please try again. The detailed error goes to your logs (which only you can see), not to the user's screen.

What to check: Deliberately break something in your app (disconnect the database, send a malformed request, access a page that doesn't exist). Look at the error messages. Do they reveal file paths, database details, stack traces, or internal code structure? If yes, you need custom error handling.

Tell your AI: "Add global error handling that shows generic messages to users and logs detailed errors server-side. Make sure stack traces, file paths, and database details never appear in user-facing responses. Configure separate error handling for development and production."
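
The shape of good error handling is a single choke point that splits every error into two streams: full detail to the logs, a generic message to the user. A sketch, with `logger` standing in for your real logging setup:

```javascript
// Error-handling sketch: log the real error server-side, show the user
// something generic in production, the real message only in development.
function handleError(err, { isProduction = true, logger = console } = {}) {
  // Full detail (message, stack trace) goes to server-side logs only
  logger.error(err.stack || String(err));

  if (isProduction) {
    return { status: 500, body: { error: 'Something went wrong. Please try again.' } };
  }
  // In development, seeing the real message speeds up debugging
  return { status: 500, body: { error: err.message } };
}
```

In an Express app this logic would live in the global error-handling middleware, so no route can accidentally leak a stack trace.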

15. Is logging and monitoring configured?

Logging means keeping a record of what happens in your app — who logged in, what errors occurred, what requests came in. Without logs, if something goes wrong (or someone breaks in), you have no way to figure out what happened. It's like having a building with no security cameras — you only know there was a problem after the damage is done, and you have no footage to review.

What to check: Does your app log failed login attempts? Does it log errors? Can you access those logs easily? Do you have any kind of alerting — something that notifies you if things go wrong? Even a basic setup (log to a file, check it periodically) is better than nothing.

Tell your AI: "Set up structured logging for the application. Log all authentication events (login success, failure, logout), all errors, and all API requests. Include timestamps, user IDs (where applicable), and request IPs. Don't log sensitive data (passwords, tokens, full credit card numbers)."
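
Two ideas from that prompt are worth seeing in code: structured (machine-readable) log lines, and redacting sensitive fields before they're written. Real projects usually use pino or winston; this sketch shows the shape, with an illustrative redaction list:

```javascript
// Structured-logging sketch: every event becomes one JSON line with a
// timestamp, and sensitive fields are redacted before writing.
const REDACTED_FIELDS = new Set(['password', 'token', 'cardNumber']);

function logEvent(event, fields = {}) {
  const safe = {};
  for (const [key, value] of Object.entries(fields)) {
    safe[key] = REDACTED_FIELDS.has(key) ? '[REDACTED]' : value;
  }
  return JSON.stringify({ ts: new Date().toISOString(), event, ...safe });
}

// Usage: write logEvent('login_failure', { userId, ip, password }) to a
// file or log service — the password never reaches disk.
```

JSON lines like these are easy to grep today and easy to feed into a log service later, without changing your code.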

What AI Gets Wrong About Security

AI coding assistants are incredible at building features. They're not great at building defenses. Here's why:

AI optimizes for "it works," not "it's safe." When you ask AI to build a feature, its success metric is: does the code run without errors? Security is a separate concern — like soundproofing. A room can be perfectly functional without soundproofing, but that doesn't mean you'd want to hold a private conversation in it.

AI uses the simplest solution. The simplest way to connect to a database is to put the connection string right in the code. The simplest way to handle CORS is to allow everything. The simplest way to store a password is to save it as-is. Simple and secure are often opposites.

AI doesn't think about attackers. When AI writes a login function, it thinks about the happy path — a real user typing their real credentials. It doesn't think about someone sending 10,000 requests per second to guess passwords. It doesn't think about someone putting SQL commands in the username field. You have to tell it to think about those scenarios.

AI trains on public code — including insecure public code. A huge percentage of code on GitHub and Stack Overflow has security issues. AI learned from that code. It reproduces those patterns confidently, because they're statistically common. Common doesn't mean safe.

AI doesn't know your threat model. A personal blog and a medical records app have very different security needs. AI treats them the same unless you tell it otherwise. You're the one who knows what you're protecting and who might try to get in.

The "Tell Your AI" Prompt: Security Audit Your Code

Copy this prompt and paste it into Claude, ChatGPT, Cursor, or whatever AI tool you're using. It will review your project for the issues on this checklist.

Copy-Paste Security Audit Prompt

Use this prompt with your AI coding assistant. Share your project files or codebase when asked.

I need you to perform a security audit on my project. Check for these 
specific issues and report what you find for each one:

AUTHENTICATION & ACCESS:
1. Are any API keys, passwords, or secrets hardcoded in the source code?
2. Are user passwords hashed (bcrypt/argon2) before being stored?
3. Is there rate limiting on login and signup endpoints?
4. Are authorization checks enforced server-side on all protected routes?

DATA PROTECTION:
5. Is user input validated on both client and server side?
6. Are database queries parameterized (no string concatenation in SQL)?
7. Is user-generated content sanitized before being rendered in the browser?
8. Are file uploads restricted by type, size, and stored safely?

INFRASTRUCTURE:
9. Is HTTPS configured with automatic HTTP redirect?
10. Is CORS configured to allow only the production domain (no wildcards)?
11. Are all secrets in environment variables, and is .env in .gitignore?

DEPENDENCIES:
12. What does npm audit (or equivalent) show? Any critical/high issues?
13. Are there outdated packages with known vulnerabilities?

BEFORE GOING LIVE:
14. Do error messages expose internal details (paths, DB info, stack traces)?
15. Is logging configured for auth events, errors, and requests?

For each item, tell me:
- ✅ PASS — what you found and why it's secure
- ❌ FAIL — what the vulnerability is, and the exact code fix
- ⚠️ WARNING — it's partially addressed but could be improved

Be specific. Show me the file names and line numbers.

This prompt won't catch everything — no single check will. But it'll surface the most common and most dangerous issues in AI-generated code. Run it on every project before you ship.

What to Learn Next

You've got the checklist. Now go deeper on the individual topics that matter most for what you're building:

  • Security Basics for AI Coders — The foundation. Start here if any checklist item felt unfamiliar.
  • What Is SQL Injection? — A deep dive into the most common database attack and how to prevent it in AI-generated code.
  • What Is XSS? — Understanding cross-site scripting attacks and why your AI probably didn't protect against them.
  • What Is Input Validation? — The single most important defensive practice. If you do one thing right, make it this.
  • What Is npm audit? — How to scan your project's dependencies for known vulnerabilities in 30 seconds.
  • API Security Guide — If your app has an API (most do), this covers authentication, rate limiting, and protecting your endpoints.
  • Client-Side vs Server-Side — Half of security mistakes come from not knowing which code runs in the browser. This explains the difference.

Next Step

Pick the one checklist item you're least confident about. Read the linked article. Then go fix that one thing in your project. Security isn't about doing everything at once — it's about closing one door at a time until the building is solid.

FAQ

Does my app really need this if it only has a handful of users?

Yes. Attackers use automated bots that scan the entire internet for common vulnerabilities — exposed API keys, open databases, missing input validation. They don't care if your app has 10 users or 10 million. A small app with a leaked API key can rack up thousands of dollars in charges overnight. The checklist takes 30 minutes. Skipping it can cost you far more.

Is AI-generated code less secure than human-written code?

Studies from Stanford (2023) and others show that AI-generated code contains security vulnerabilities at roughly the same rate as human-written code — but with a key difference. AI tends to produce code that works perfectly on the surface while hiding security gaps underneath. A human developer might write insecure code too, but they're more likely to think about security as a separate concern. AI treats security as optional unless you specifically ask for it.

If I only have time for one check, which should it be?

If you can only do one thing: check for exposed API keys and secrets. Search your entire project for any hardcoded passwords, API keys, or database credentials. This is the number one way AI-built apps get compromised — the AI puts your secret key right in the code, you push it to GitHub, and within minutes a bot has found it and is using your account. Use environment variables instead.

How often should I run this checklist?

Run the full checklist before every deployment to production. For ongoing development, run items 12–13 (npm audit and dependency checks) weekly, since new vulnerabilities are discovered constantly. Any time you add a major new feature — especially anything involving user accounts, payments, or file uploads — run the relevant sections again.

Can't I just ask the AI to review its own code for security?

You can and should ask AI to review your code for security issues — we include a copy-paste prompt for exactly this purpose. But don't treat it as a replacement for the checklist. AI security reviews catch many issues, but they also miss things and sometimes give false reassurance. Think of the AI review as a second pair of eyes, and the checklist as the official inspection. You want both.