TL;DR: File uploads let users send files (images, documents, videos) from their browser to your app's storage. AI can generate file upload code quickly, but it often skips critical parts: file type validation, size limits, proper storage configuration, and security checks. Files should go to dedicated storage services like Supabase Storage or AWS S3 — never directly into your database. The upload path has more attack surface than any other feature, so validation isn't optional.

Why AI Coders Need This

Every real app eventually needs file uploads. It doesn't matter what you're building — a marketplace needs product photos, a social app needs profile pictures, a SaaS needs document attachments, a job board needs resume uploads. You can build a prototype without file uploads, but you can't ship a real product without them.

The tricky part? File uploads touch every layer of your stack. The frontend needs a file picker and upload progress. The API needs to accept binary data (not just JSON). The backend needs to validate what was sent. And you need somewhere to actually store the files — which is a completely different system from your database.

When you ask AI to add file uploads, it generates code that works in development. You pick a file, it appears on the page, everything looks great. But the code is usually missing the parts that matter in production: What if someone uploads a 2GB file? What if they rename a .exe to .jpg? What if your storage bucket is publicly writable? These aren't edge cases — they're the first things attackers try.

Real Scenario

💬 What You Might Prompt

"Add a profile picture upload to my user settings page. Users should be able to pick an image, see a preview, and save it. Store the image in Supabase Storage and save the URL to the user's profile in the database."

This is a completely reasonable prompt. And AI will generate something that mostly works. But "mostly works" with file uploads means "works until someone uploads something unexpected, and then things break in ways that are hard to debug." Let's look at what AI typically generates, understand each piece, and then see what's missing.

What AI Generated

Here's typical AI-generated code for a React profile picture upload with Supabase Storage:

import { useState } from 'react';
import { supabase } from '../lib/supabaseClient';

function ProfilePictureUpload({ userId }) {
  const [uploading, setUploading] = useState(false);
  const [preview, setPreview] = useState(null);

  async function handleFileChange(event) {
    const file = event.target.files[0];
    if (!file) return;

    // Show preview
    setPreview(URL.createObjectURL(file));

    // Upload to Supabase Storage
    setUploading(true);
    const fileName = `${userId}/${Date.now()}-${file.name}`;

    const { data, error } = await supabase.storage
      .from('avatars')        // the storage bucket name
      .upload(fileName, file); // upload the file

    if (error) {
      alert('Upload failed: ' + error.message);
      setUploading(false);
      return;
    }

    // Get public URL for the uploaded file
    const { data: urlData } = supabase.storage
      .from('avatars')
      .getPublicUrl(fileName);

    // Save the URL to the user's profile in the database
    await supabase
      .from('profiles')
      .update({ avatar_url: urlData.publicUrl })
      .eq('id', userId);

    setUploading(false);
  }

  return (
    <div>
      {preview && <img src={preview} alt="Preview" width={100} />}
      <input
        type="file"
        accept="image/*"
        onChange={handleFileChange}
        disabled={uploading}
      />
      {uploading && <p>Uploading...</p>}
    </div>
  );
}

This code does work. It shows a file picker, generates a preview, uploads to Supabase Storage, gets the public URL, and saves it to the user's profile. But look at what's missing — no file size check, no real file type validation (just the browser's accept attribute, which is easily bypassed), no error handling for oversized files, and the original filename is preserved (which can contain malicious characters).

Understanding Each Part

Multipart Form Data

When a browser sends a file, it can't use the same format as a regular form submission. Normal forms send data as key=value text. But files are binary data — a JPEG image is millions of bytes that aren't text at all.

The browser switches to an encoding called multipart/form-data. Think of it like mailing a package with a letter inside: the letter is your regular form fields (username, email), and the package contains the binary file. The browser wraps everything with boundaries — separator markers that say "text field ends here, file starts here."

You don't need to build this yourself. When you use <input type="file"> and submit, the browser handles the encoding automatically. On the server side, libraries like multer (Node.js) or built-in handlers in frameworks like Next.js parse it back apart. But you should know it exists because error messages sometimes reference it: "Request entity too large" usually means your server's multipart body size limit is smaller than the file being uploaded.

Presigned URLs

In the AI-generated code above, the file goes from the browser → your app → Supabase Storage. This works for small files, but it means your server has to handle every byte of every file upload. If 50 users upload 10MB photos simultaneously, that's 500MB flowing through your server.

A better pattern is presigned URLs. Your server generates a temporary, pre-authorized URL that lets the browser upload directly to storage — your server never touches the file:

// Step 1: Browser asks your server for permission to upload
// Your API route (server-side)
async function getUploadUrl(req, res) {
  const { fileName, fileType } = req.body;
  const userId = req.user.id; // from your auth middleware — never trust the client for this

  // Reject disallowed types before issuing the URL
  if (!['image/jpeg', 'image/png', 'image/webp'].includes(fileType)) {
    return res.status(400).json({ error: 'Unsupported file type' });
  }

  // Generate a short-lived presigned upload URL
  const { data, error } = await supabase.storage
    .from('avatars')
    .createSignedUploadUrl(`${userId}/${fileName}`);

  if (error) {
    return res.status(500).json({ error: error.message });
  }

  return res.json({ uploadUrl: data.signedUrl });
}

// Step 2: Browser uploads directly to storage
// Client-side
const response = await fetch('/api/get-upload-url', {
  method: 'POST',
  body: JSON.stringify({ fileName: 'photo.jpg', fileType: 'image/jpeg' }),
});
const { uploadUrl } = await response.json();

// Upload directly to Supabase — bypasses your server entirely
await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'image/jpeg' },
  body: file,
});

The presigned URL is like a VIP pass — it's temporary, has restrictions (file type, size, expiration), and lets the browser go directly to storage. Your server only generates the pass; it never carries the luggage. This is faster, cheaper, and scales much better.

Storage Buckets

A storage bucket is like a top-level folder for your files. In the code above, 'avatars' is the bucket name. Most apps have multiple buckets: one for avatars, one for documents, one for product images. Each bucket can have different access rules.

In Supabase, buckets can be public (anyone with the URL can view files) or private (requires authentication to access). Profile pictures are usually public — you want anyone to see them. Medical documents are private — only the owner should access them. AI almost always creates public buckets because it's simpler, but think about whether your files actually should be public.

File Size Limits

File size limits need to be enforced at three levels:

  1. Frontend: Check file.size before uploading. This gives instant feedback ("File too large — maximum is 5MB") and avoids wasting bandwidth on uploads that will be rejected.
  2. Backend / API route: Reject requests that exceed your limit. In Next.js, this is configured in your API route. In Express, it's a multer option. This catches bypasses — someone could use Postman or curl to skip your frontend check.
  3. Storage service: Set bucket-level policies. Supabase lets you set max file size per bucket. S3 has similar policies. This is your last line of defense.

Never rely on just one layer. Frontend checks are for user experience. Backend and storage checks are for security.
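As an example of the backend layer (step 2), here is what the limit looks like in a Next.js pages-router API route. This is a sketch — the exact config shape depends on your Next.js version and router, and the `5mb` value is an assumption matching the examples in this article:

```javascript
// Backend-level size limit for a Next.js pages-router API route.
// Requests with bodies larger than this are rejected before your
// handler runs — this is what catches Postman/curl bypasses.
export const config = {
  api: {
    bodyParser: {
      sizeLimit: '5mb',
    },
  },
};
```

If you exceed this limit you'll see the "Request entity too large" error mentioned earlier — which is your cue to check this setting, not your frontend code.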

Storage Options for AI-Built Apps

When you ask AI to add file uploads, it needs to put those files somewhere. Here's how the major options compare:

| Service | Best For | Free Tier | Pricing Model | AI Familiarity |
|---|---|---|---|---|
| Supabase Storage | Apps already using Supabase for auth/database | 1 GB storage, 2 GB transfer/month | $0.021/GB stored, $0.07/GB transfer | ⭐⭐⭐⭐⭐ — AI's default pick |
| AWS S3 | Production apps, enterprise, maximum control | 5 GB (12 months), 20K GET, 2K PUT | $0.023/GB stored, $0.09/GB transfer | ⭐⭐⭐⭐⭐ — extensive training data |
| Cloudflare R2 | High-bandwidth apps, avoiding egress fees | 10 GB storage, zero egress fees | $0.015/GB stored, $0 egress (!) | ⭐⭐⭐ — newer, less AI training data |
| Uploadthing | Quick setup, TypeScript-first projects | 2 GB storage, 2 GB transfer | $10/mo for 10 GB, scales from there | ⭐⭐⭐ — growing, popular in Next.js ecosystem |
| Vercel Blob | Vercel-hosted apps, simplest setup | Included in Vercel plan | Based on Vercel plan tier | ⭐⭐⭐⭐ — AI picks this for Vercel apps |

Which should you pick? If you're already using Supabase, use Supabase Storage — it's integrated and AI knows it well. If you're on Vercel, Vercel Blob is the path of least resistance. If you're serving lots of files (like a media app), Cloudflare R2's zero egress fees can save serious money. AWS S3 is the industry standard but has more setup complexity.

The important thing: use a storage service, not your database. We'll cover why that's a critical rule in the next section.

What AI Gets Wrong About File Uploads

⚠️ AI Failure Mode #1: No File Type Validation

AI relies on the HTML accept attribute: <input type="file" accept="image/*">. This only filters the file picker in the browser — it doesn't prevent someone from submitting any file type via a direct API call, browser dev tools, or curl. Fix: Always validate file types on the server side. Check the MIME type and the file's magic bytes (the first few bytes that identify the real format). A .exe renamed to .jpg still has executable magic bytes. Tell AI: "Add server-side file type validation that checks magic bytes, not just the file extension or MIME header."
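Here is a minimal sketch of what magic-byte checking looks like. The signatures are the standard, well-documented ones (JPEG files start with `FF D8 FF`, PNG files with `89 50 4E 47`); the function name and structure are illustrative:

```javascript
// Sketch: identify a file's real format from its magic bytes —
// the first few bytes of the file, which survive any renaming.
const SIGNATURES = {
  'image/jpeg': [0xff, 0xd8, 0xff],       // JPEG: FF D8 FF
  'image/png': [0x89, 0x50, 0x4e, 0x47],  // PNG: 89 50 4E 47
};

function sniffImageType(bytes) {
  for (const [mime, sig] of Object.entries(SIGNATURES)) {
    if (sig.every((b, i) => bytes[i] === b)) return mime;
  }
  return null; // unknown or disallowed format — reject the upload
}

// A renamed .exe starts with 'MZ' (0x4D 0x5A), so it returns null here
// no matter what extension or Content-Type header the client claims.
sniffImageType(new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d])); // 'image/png'
```

On the server you would run this over the first bytes of the uploaded file before accepting it; libraries like `file-type` do the same thing with far more formats covered.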

⚠️ AI Failure Mode #2: No File Size Limits

AI-generated upload code almost never includes size limits. Without them, a user (or attacker) can upload a 5GB file, crashing your server, blowing through your storage quota, and running up your bill. Fix: Add size checks in three places — frontend (if (file.size > 5 * 1024 * 1024)), backend (middleware or API route config), and storage bucket policy. Tell AI: "Add a 5MB file size limit enforced on both client and server."

⚠️ AI Failure Mode #3: Storing Files in the Database

AI sometimes generates code that converts files to base64 and saves them directly in your PostgreSQL or MySQL database. This is one of the worst patterns in web development. Databases are optimized for structured data queries — not for storing and serving binary blobs. Storing files in the DB bloats your database (a 1MB image becomes ~1.37MB in base64), makes backups enormous, slows every query because the DB engine has to work around huge rows, and costs 5–10x more than object storage. Fix: Always store files in a dedicated storage service. Store only the file's URL or path in the database. Tell AI: "Store the file in Supabase Storage and save the public URL to the database — do NOT store the file contents in the database."
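You can verify the base64 size penalty yourself in Node — encoding produces 4 output bytes for every 3 input bytes, a ~33% overhead before MIME line breaks push it toward the ~37% figure above:

```javascript
// Demonstrates the base64 bloat: 4 output characters per 3 input bytes.
const oneMegabyte = Buffer.alloc(1024 * 1024); // 1 MB of binary data
const encoded = oneMegabyte.toString('base64');
const ratio = encoded.length / oneMegabyte.length;
console.log(ratio.toFixed(2)); // ≈ 1.33; MIME line breaks add a bit more
```

And that's just the storage cost — the query slowdown and backup bloat come on top of it.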

⚠️ AI Failure Mode #4: Missing CORS Configuration

If you're using presigned URLs or uploading directly to a storage service from the browser, you'll hit CORS errors. The browser blocks requests to different domains unless the storage service explicitly allows it. AI generates the upload code but forgets to mention or configure CORS on the storage bucket. You'll see: Access-Control-Allow-Origin errors in the browser console and the upload silently fails. Fix: Configure CORS on your storage bucket to allow requests from your app's domain. In Supabase, this is in the bucket settings. In S3, it's a bucket CORS policy. Tell AI: "Also configure CORS on the storage bucket to allow uploads from my frontend domain."

⚠️ AI Failure Mode #5: No Mention of Virus/Malware Scanning

AI never mentions virus scanning. For profile pictures in a small app, this might be acceptable risk. But if your app accepts documents, PDFs, or any files that users download — you need to consider it. Malicious files uploaded to your storage can infect other users who download them. Fix: For production apps that handle document uploads, integrate a scanning service like ClamAV (open source) or a cloud service like AWS's built-in S3 malware scanning. At minimum, be aware this risk exists. Tell AI: "What security measures should I add for user-uploaded documents?"

Security Considerations

File uploads are the single largest attack surface in most web apps. Every file a user sends is untrusted input — just like form fields, but more dangerous because files can contain executable code, exploit image processing libraries, or consume unlimited resources. Here's your security checklist:

The File Upload Security Checklist:

  • Validate file types server-side — check MIME type AND magic bytes, not just extensions. See input validation for the broader principle.
  • Enforce size limits everywhere — frontend, backend, and storage bucket level.
  • Generate new filenames — never use the original filename. Generate a UUID or hash. Original filenames can contain path traversal characters (../../etc/passwd) or special characters that break your system.
  • Use private buckets by default — only make files public if they genuinely need to be. Serve private files through signed URLs with expiration times.
  • Set Content-Disposition headers — when serving user-uploaded files, set Content-Disposition: attachment to force download instead of browser execution. An uploaded HTML file served inline can execute JavaScript in your domain's context.
  • Serve files from a different domain — use a CDN or separate subdomain for user uploads. This prevents uploaded files from accessing your main domain's cookies.
  • Process images server-side — re-encode uploaded images (strip EXIF data, resize, convert to WebP). This neutralizes image-based exploits and reduces storage costs.
  • Rate limit uploads — prevent abuse by limiting how many files a user can upload per minute/hour.
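The last checklist item — rate limiting — can be as simple as a sliding-window counter. This is a minimal sketch assuming a single server and an in-memory map; a multi-instance deployment would need shared state (Redis or similar), and the limits shown are illustrative:

```javascript
// Minimal per-user upload rate limiter: sliding one-minute window.
const WINDOW_MS = 60 * 1000;          // 1 minute
const MAX_UPLOADS_PER_WINDOW = 5;     // illustrative limit
const uploadLog = new Map();          // userId -> recent upload timestamps

function allowUpload(userId, now = Date.now()) {
  // Keep only timestamps inside the current window
  const recent = (uploadLog.get(userId) || []).filter(
    (t) => now - t < WINDOW_MS
  );
  if (recent.length >= MAX_UPLOADS_PER_WINDOW) return false; // over the limit
  recent.push(now);
  uploadLog.set(userId, recent);
  return true;
}
```

You would call `allowUpload(userId)` in your upload API route before issuing a presigned URL or accepting the file, returning a 429 response when it comes back false.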

You don't need to implement every item on day one. But you should know they exist, so when you're asking AI to build upload features, you can prompt for the ones that matter for your app. The minimum viable security: validate file types server-side, enforce size limits, and generate new filenames.

A Safer Upload: What AI Should Have Generated

Here's the profile picture upload with the critical missing pieces added:

import { useState } from 'react';
import { supabase } from '../lib/supabaseClient';
import { v4 as uuidv4 } from 'uuid';

// Configuration — easy to find and change
const MAX_FILE_SIZE = 5 * 1024 * 1024; // 5MB
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp'];

function ProfilePictureUpload({ userId }) {
  const [uploading, setUploading] = useState(false);
  const [preview, setPreview] = useState(null);
  const [error, setError] = useState(null);

  async function handleFileChange(event) {
    const file = event.target.files[0];
    if (!file) return;
    setError(null);

    // ✅ Validate file type
    if (!ALLOWED_TYPES.includes(file.type)) {
      setError('Please upload a JPEG, PNG, or WebP image.');
      return;
    }

    // ✅ Validate file size
    if (file.size > MAX_FILE_SIZE) {
      setError('File too large. Maximum size is 5MB.');
      return;
    }

    // Show preview
    setPreview(URL.createObjectURL(file));
    setUploading(true);

    // ✅ Generate a safe filename (no original filename)
    const fileExt = file.name.split('.').pop();
    const safeFileName = `${userId}/${uuidv4()}.${fileExt}`;

    const { data, error: uploadError } = await supabase.storage
      .from('avatars')
      .upload(safeFileName, file, {
        contentType: file.type,
        upsert: false,  // don't overwrite existing files
      });

    if (uploadError) {
      setError('Upload failed. Please try again.');
      setUploading(false);
      return;
    }

    // Get public URL
    const { data: urlData } = supabase.storage
      .from('avatars')
      .getPublicUrl(safeFileName);

    // Save URL to profile
    const { error: dbError } = await supabase
      .from('profiles')
      .update({ avatar_url: urlData.publicUrl })
      .eq('id', userId);

    if (dbError) {
      setError('Saved image but failed to update profile.');
    }

    setUploading(false);
  }

  return (
    <div>
      {preview && <img src={preview} alt="Preview" width={100} />}
      <input
        type="file"
        accept="image/jpeg,image/png,image/webp"
        onChange={handleFileChange}
        disabled={uploading}
      />
      {uploading && <p>Uploading...</p>}
      {error && <p style={{ color: 'red' }}>{error}</p>}
      <p style={{ fontSize: '0.85rem', color: '#888' }}>
        Max 5MB. JPEG, PNG, or WebP only.
      </p>
    </div>
  );
}

Key improvements: explicit file type checking (not just the HTML attribute), file size validation with user-friendly error messages, UUID-based filenames (no path traversal risk), specific accept types instead of the vague image/*, and upsert: false to prevent accidental overwrites. This is still a frontend-only check — for production, you'd add the same validation on your API route and storage bucket policy.

Remember: this client-side validation is for user experience. The real security enforcement happens server-side and at the storage level. Anyone with browser dev tools can bypass frontend checks.

Frequently Asked Questions

Where should I store user-uploaded files?

Always use a dedicated file storage service like Supabase Storage, AWS S3, Cloudflare R2, or Vercel Blob. Never store files directly in your database. Databases are designed for structured data (text, numbers, relationships), not binary files. Storing files in the database bloats it, slows queries, makes backups enormous, and costs significantly more than object storage.

What is multipart form data?

Multipart form data is the encoding type browsers use to send files in HTTP requests. A normal form sends data as key=value text. Files are binary data that can't be sent as text, so multipart encoding splits the request into "parts" — each with its own content type — allowing text fields and binary files to travel in the same request. You don't build this yourself; the browser handles it when you use <input type="file">.

What is a presigned URL?

A presigned URL is a temporary, pre-authorized link that lets a browser upload a file directly to cloud storage without the file passing through your server. Your server generates the URL with specific permissions (allowed file type, max size, expiration time), and the browser uploads straight to storage. It's faster, cheaper, and scales better than routing every file through your backend.

Where should I enforce file size limits?

Enforce limits in three places: on the frontend (check file.size before uploading for instant user feedback), on the backend or API route (reject oversized requests via middleware), and on the storage service (set bucket policies with max file size). Frontend checks alone are not enough — anyone can bypass them with a direct API call. Always enforce on the server side.

Should I let users upload any file type?

No. Always restrict allowed file types to exactly what your app needs. If you need profile pictures, only allow image/jpeg, image/png, and image/webp. Don't just check the file extension — validate the MIME type and ideally the file's magic bytes (the first few bytes that identify the actual format). Unrestricted uploads are a major security risk. See input validation for the broader principle.