TL;DR: Your app is only as good as your last backup. A database backup is a saved copy of your data that you can restore if something goes wrong — a bad deploy, a hacked server, an accidental DELETE FROM users, or a hosting provider outage. If you don't have backups, you're one mistake away from losing everything your users trusted you with. The 3-2-1 rule: 3 copies of your data, on 2 different types of storage, with 1 copy offsite.
Why AI Coders Need This
Here's the thing nobody tells you when you're building with AI: getting the app running is the easy part. Keeping data safe is where it gets real.
You asked Claude or Cursor to build you a SaaS app. It scaffolded a PostgreSQL database, set up user authentication, and created tables for your app's data. Users started signing up. They're entering real information — email addresses, preferences, content, maybe even payment details. Your database is now holding something valuable.
Now consider all the ways that data can vanish:
- You run a bad migration — AI generated a database migration that drops a column, and you didn't check it before running it
- Your hosting provider has an outage — and their "we handle everything" free tier doesn't include backups
- You get hacked — someone finds your exposed database credentials and wipes the data
- You accidentally delete production data — you thought you were connected to your dev database. You weren't.
- A Docker container restart wipes your volume — because nobody configured persistent storage
Every single one of these scenarios has happened to real developers, including experienced ones. The difference between "annoying weekend" and "business-ending catastrophe" is whether you have a backup.
Real Scenario: You Ask AI for Automatic Backups
💬 The Prompt: "Set up automatic database backups for my PostgreSQL database. Back up daily to S3 and keep the last 30 days."
Reasonable request. You know you need backups, so you asked AI to set them up. Here's what AI typically generates:
What AI Generated
#!/bin/bash
# backup.sh — Daily PostgreSQL backup to S3
# Configuration
DB_NAME="myapp_production"
DB_USER="postgres"
BACKUP_DIR="/tmp/backups"
S3_BUCKET="s3://myapp-backups"
DATE=$(date +%Y-%m-%d_%H-%M-%S)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"
# Create backup directory
mkdir -p $BACKUP_DIR
# Dump the database and compress it
pg_dump -U $DB_USER -d $DB_NAME | gzip > $BACKUP_FILE
# Upload to S3
aws s3 cp $BACKUP_FILE $S3_BUCKET/daily/
# Delete local backups older than 7 days
find $BACKUP_DIR -name "*.sql.gz" -mtime +7 -delete
# Delete S3 backups older than 30 days
aws s3 ls $S3_BUCKET/daily/ | while read -r line; do
  CREATE_DATE=$(echo $line | awk '{print $1}')
  if [[ $(date -d "$CREATE_DATE" +%s) -lt $(date -d "-30 days" +%s) ]]; then
    FILE_NAME=$(echo $line | awk '{print $4}')
    aws s3 rm "$S3_BUCKET/daily/$FILE_NAME"
  fi
done
echo "Backup completed: $BACKUP_FILE"
And then a cron job to run it daily:
# Run backup every day at 2:00 AM
0 2 * * * /home/deploy/backup.sh >> /var/log/backup.log 2>&1
This looks professional. It dumps the database, compresses it, uploads to S3, and cleans up old backups. But there are problems hiding in this script that AI didn't tell you about — we'll get to those in the "What AI Gets Wrong" section.
Understanding Each Part: Backup Types Explained
Not all backups are created equal. Before you set anything up, you need to understand what kind of backup you actually need.
Full Backup vs. Incremental vs. Point-in-Time
| Type | What It Does | Pros | Cons |
|---|---|---|---|
| Full Backup | Copies the entire database every time | Simple to restore — one file has everything | Slow for large databases; uses lots of storage |
| Incremental Backup | Only copies what changed since the last backup | Fast and storage-efficient | Restore requires the full backup + all increments in order |
| Point-in-Time Recovery (PITR) | Records every change continuously; lets you restore to any second | Minimal data loss — restore to the exact moment before disaster | More complex setup; requires WAL archiving in PostgreSQL |
For most AI-built apps starting out: Daily full backups are fine. Once you have paying users or data you absolutely cannot lose, upgrade to PITR.
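When you do make that jump, PITR in PostgreSQL is built on WAL archiving: the server continuously ships its write-ahead log segments to safe storage, and a restore replays them up to the exact moment you pick. Here's a minimal sketch of the postgresql.conf side (the archive path is a placeholder, and managed platforms handle all of this for you):
# postgresql.conf — WAL archiving, the foundation of PITR (sketch)
wal_level = replica            # the default on modern PostgreSQL; required for archiving
archive_mode = on              # changing this requires a server restart
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
A real PITR restore also needs a base backup from pg_basebackup plus the archived WAL. Tools like pgBackRest and wal-g automate the whole pipeline, including shipping WAL to S3.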
Backup vs. Replication
This one trips people up. A replica is a live copy of your database that stays in sync in real time. If the primary server dies, the replica takes over. Sounds like a backup, right?
It's not. Here's why: if you accidentally run DELETE FROM users WHERE active = true, that delete replicates to the replica instantly. Now both copies are missing your users. A replica protects against hardware failure. A backup protects against data loss. You need both.
Hot Backup vs. Cold Backup
A hot backup happens while the database is running and serving requests. This is what pg_dump does — your app stays online during the backup. A cold backup requires shutting down the database first, copying the files, then starting it back up. Cold backups are more reliable but require downtime. For most apps, hot backups (pg_dump or PITR) are the way to go.
Backup Strategy by Platform
Where you host your database determines what you get for free and what you need to set up yourself.
| Platform | What's Included (Free Tier) | What's Included (Paid) | What You Need to Set Up |
|---|---|---|---|
| Supabase | No backups on free tier | Daily backups (Pro); PITR (Team/Enterprise) | Free tier: manual pg_dump or use Supabase CLI for scheduled exports |
| Railway | No automatic backups | No automatic backups | You're on your own — set up pg_dump + cron + S3 or use a backup plugin |
| PlanetScale | Daily backups on all plans | Daily backups with configurable retention | Cross-region backup replication for disaster recovery |
| Neon | 7-day history (branching) | PITR up to 30 days; configurable retention | Offsite copies if you want backups outside Neon's infrastructure |
| Self-hosted PostgreSQL | Nothing — you're the admin | N/A | Everything: pg_dump/pg_basebackup, cron, S3 upload, monitoring, retention, encryption, restore testing |
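If your platform is in the "you're on your own" column, the minimum viable habit is a manual dump using the connection string from your provider's dashboard. A sketch (the host, user, and password are placeholders):
# One-off manual backup of a hosted Postgres database
pg_dump "postgresql://USER:PASSWORD@db.example-host.com:5432/postgres" \
  | gzip > "manual_backup_$(date +%F).sql.gz"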
"It's in the cloud" does not mean "it's backed up." Many platforms — including popular ones AI recommends — do not include automatic backups on their free or starter tiers. Check your plan. Right now. Before you need it.
What AI Gets Wrong About Database Backups
Mistake 1: It Never Tells You to Test the Restore
AI will set up a beautiful backup script that runs every night. But it never tells you to test the restore. A backup you've never restored is a backup you can't trust. Backup files can be corrupted, incomplete, or written in a format that doesn't actually restore cleanly. Fix: "Add a monthly restore drill — spin up a temporary database, restore the latest backup, verify data integrity, then tear it down."
Mistake 2: Backups Stored on the Same Server
That script AI generated saves backups to /tmp/backups on the same machine as the database. If the server's hard drive dies, you lose the database and the backups. Uploading to S3 is the right idea, but AI often keeps the local copy as the primary backup. Fix: "Store backups in a completely separate location — different server, different cloud provider, different geographic region."
Mistake 3: Unencrypted Backups
That .sql.gz file contains every user's email, password hash, and personal data. AI almost never encrypts backup files. If someone gets access to your S3 bucket — through misconfigured permissions, leaked credentials, or a breach — they get all your data in plain text. Fix: "Encrypt the backup file with GPG or age before uploading. Use pg_dump | gzip | gpg --encrypt --recipient backup@myapp.com > backup.sql.gz.gpg."
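Here's a minimal sketch of that pipeline, assuming you've created a GPG key pair and imported the public key on the server (keep the private key somewhere else entirely; a key stored next to the backups protects nothing):
# Dump, compress, and encrypt in one pass: no plaintext copy ever touches the disk
pg_dump -U postgres -d myapp_production \
  | gzip \
  | gpg --encrypt --recipient backup@myapp.com \
  > "myapp_$(date +%F).sql.gz.gpg"

# Restoring reverses the pipeline (this is where you need the private key)
gpg --decrypt myapp_2026-03-25.sql.gz.gpg | gunzip | psql -d myapp_restore_test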
Mistake 4: No Plan for Growth
Your database is 50MB today. In six months, it's 5GB. In a year, 20GB. AI sets up backups assuming today's database size. It doesn't plan for growth — backup duration increases, storage costs balloon, and that 30-day retention policy is suddenly costing real money. Fix: "Add monitoring for backup file size and duration. Alert me if a backup takes more than 10 minutes or exceeds 1GB. Switch from full daily to incremental + weekly full when the database gets large."
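A basic sanity check doesn't need a monitoring platform. Here's a sketch that could replace the dump line in the script above, reusing its $BACKUP_FILE variable; the thresholds and the mail command are placeholders for whatever alerting you actually use:
# Sketch: sanity-check each backup as it's written (thresholds are examples; tune them)
MIN_BYTES=1000000                   # alert below ~1 MB: a near-empty dump usually means pg_dump failed
START=$(date +%s)
pg_dump -U postgres -d myapp_production | gzip > "$BACKUP_FILE"
DURATION=$(( $(date +%s) - START ))
SIZE=$(stat -c%s "$BACKUP_FILE")    # GNU stat; use stat -f%z on macOS

if [ "$SIZE" -lt "$MIN_BYTES" ] || [ "$DURATION" -gt 600 ]; then
  # Swap in your real alert channel: email, Slack webhook, PagerDuty...
  echo "Backup anomaly: ${SIZE} bytes in ${DURATION}s" | mail -s "Backup alert: myapp" you@example.com
fi
The same pattern with a maximum-size threshold is how you catch runaway growth before the storage bill does.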
Mistake 5: Data Without the Schema
AI generates a backup of your data but forgets about the database schema and migrations. If you restore a backup from two weeks ago, the data might not match your current code's expected schema. Columns were added, tables were renamed, relationships changed. Fix: "Back up the schema separately with pg_dump --schema-only. Keep your migration files in version control. Document which migration version corresponds to which backup."
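The schema dump itself is one flag. A sketch:
# Structure only: tables, indexes, constraints, no rows
pg_dump -U postgres -d myapp_production --schema-only > "schema_$(date +%F).sql"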
The 3-2-1 Backup Rule
This is the industry standard for backup strategy, and it's been around since before cloud computing existed. It's simple, memorable, and it works:
- 3 copies of your data (the original + 2 backups)
- 2 different types of storage (e.g., local disk + cloud object storage)
- 1 copy offsite (different geographic location from your server)
Here's what that looks like for a typical AI-built app:
- Copy 1: Your live PostgreSQL database (the original)
- Copy 2: Daily pg_dump uploaded to an S3 bucket in the same region
- Copy 3: Weekly backup replicated to a different cloud region (or a different cloud provider entirely)
The offsite copy is crucial. If your cloud provider has a region-wide outage — and yes, this happens to AWS, GCP, and Azure — your same-region backups are just as unreachable as your database. An offsite copy in a different region (or on a different provider) is your insurance policy for the worst case.
For a hobby project, copies 1 and 2 might be enough. For anything with real users and real data, follow the full 3-2-1.
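Copy 3 can be as simple as a second cron entry. A sketch, assuming a second bucket (the -dr name is a placeholder) created in a different region or on a different provider:
# Sunday 4:00 AM: replicate the daily backups to the disaster-recovery bucket
0 4 * * 0 aws s3 sync s3://myapp-backups/daily/ s3://myapp-backups-dr/ >> /var/log/backup_sync.log 2>&1
If both buckets are on AWS, S3's built-in Cross-Region Replication can do this at the bucket level with no cron job at all.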
How to Test Your Backup (The Restore Drill)
A backup is worthless if you can't restore from it. The only way to know your backups work is to actually restore one. Here's a step-by-step restore drill you should run at least monthly:
Step 1: Get Your Latest Backup
# Download the most recent backup from S3
aws s3 cp s3://myapp-backups/daily/myapp_production_2026-03-25.sql.gz ./
# If encrypted, decrypt first
gpg --decrypt myapp_production_2026-03-25.sql.gz.gpg > myapp_production_2026-03-25.sql.gz
# Decompress
gunzip myapp_production_2026-03-25.sql.gz
Step 2: Create a Temporary Test Database
# Create a throwaway database for testing
createdb myapp_restore_test
# Restore the backup into it
psql -d myapp_restore_test -f myapp_production_2026-03-25.sql
Step 3: Verify the Data
# Check table counts — do they match what you expect?
psql -d myapp_restore_test -c "SELECT 'users' as tbl, count(*) FROM users
UNION ALL SELECT 'orders', count(*) FROM orders
UNION ALL SELECT 'products', count(*) FROM products;"
# Check for recent data — is the backup actually current?
psql -d myapp_restore_test -c "SELECT max(created_at) FROM users;"
# Spot-check specific records
psql -d myapp_restore_test -c "SELECT id, email, created_at FROM users ORDER BY created_at DESC LIMIT 5;"
Step 4: Clean Up
# Drop the test database when you're satisfied
dropdb myapp_restore_test
# Delete the local backup file
rm myapp_production_2026-03-25.sql
Step 5: Document the Results
Log the date, backup file used, restore duration, and whether it succeeded. If anything was wrong — missing tables, corrupt data, failed restore — fix the backup process now, not during a real emergency.
Pro tip: Ask AI to automate the restore drill: "Write a script that downloads the latest backup, restores it to a temporary database, runs verification queries, logs the results, and cleans up. Run it monthly via cron and email me the results." Now you have verified backups without remembering to do it manually.
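Here's a sketch of what that script might look like, assuming the aws CLI is configured and a local PostgreSQL you can create throwaway databases on (add a gpg --decrypt step if your backups are encrypted):
#!/bin/bash
# restore_drill.sh — sketch of an automated restore test
set -euo pipefail

# Find and fetch the most recent daily backup
LATEST=$(aws s3 ls s3://myapp-backups/daily/ | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://myapp-backups/daily/$LATEST" /tmp/
gunzip -f "/tmp/$LATEST"

# Restore into a throwaway database
createdb myapp_restore_test
psql -q -d myapp_restore_test -f "/tmp/${LATEST%.gz}"

# Verify: fail loudly if the restored data looks empty
USER_COUNT=$(psql -t -A -d myapp_restore_test -c "SELECT count(*) FROM users;")
[ "$USER_COUNT" -gt 0 ] || { echo "Restore drill FAILED: users table is empty"; exit 1; }

echo "$(date): restore drill OK, users=$USER_COUNT, file=$LATEST" >> /var/log/restore_drill.log
dropdb myapp_restore_test
rm "/tmp/${LATEST%.gz}"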
Putting It All Together: Your Backup Checklist
Here's what a solid backup strategy looks like for an AI-built app with real users:
- Automated daily full backups via pg_dump + cron (or your platform's built-in backup)
- Encrypted before upload — never store unencrypted backups in cloud storage
- Stored offsite — at least one copy in a different region or provider
- Retention policy — keep 30 days of dailies, 12 months of weeklies, adjust as you grow
- Monitoring & alerts — get notified if a backup fails, is too small (might be empty), or takes too long
- Monthly restore drills — verify backups actually restore correctly
- Schema backups — back up your database structure separately from your data
- Migration tracking — know which migration version corresponds to which backup
You don't need all of this on day one. Start with automated daily backups to S3. Add encryption next. Then set up restore drills. Build the muscle before you need it in an emergency.
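One shortcut worth knowing for the retention item in the checklist above: if your backups live in S3, you don't have to script deletion at all (remember the fragile date-parsing loop in the AI script?). S3 lifecycle rules expire old objects for you. A sketch, with a placeholder bucket name and prefixes that must match how you organize your backups:
# Let S3 lifecycle rules handle retention instead of hand-rolled cleanup
aws s3api put-bucket-lifecycle-configuration --bucket myapp-backups \
  --lifecycle-configuration '{
    "Rules": [
      {"ID": "expire-dailies",  "Status": "Enabled",
       "Filter": {"Prefix": "daily/"},  "Expiration": {"Days": 30}},
      {"ID": "expire-weeklies", "Status": "Enabled",
       "Filter": {"Prefix": "weekly/"}, "Expiration": {"Days": 365}}
    ]
  }'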
Frequently Asked Questions
How often should I back up my database?
It depends on how much data you can afford to lose. For most AI-built apps, daily backups are a solid starting point. If you have paying users or financial transactions, you want continuous backups or point-in-time recovery (PITR) so you can restore to any second. Ask yourself: "If my database died right now, how many hours of data loss would hurt?" That's your backup frequency.
Doesn't my hosting platform back up my database for me?
Some do, some don't — and the free tier often doesn't include backups. Supabase includes daily backups on paid plans. Railway has no automatic backups. PlanetScale includes daily backups. Neon includes PITR on paid plans. Always check your specific plan. "It's in the cloud" does not mean "it's backed up."
What's the difference between a backup and a replica?
A backup is a saved snapshot from a point in time. A replica is a live copy that stays in sync. If you accidentally delete all users, the replica deletes them too — instantly. Replicas protect against hardware failure (server goes down). Backups protect against data loss (someone runs the wrong query). You need both for a production app.
Can I just commit database dumps to my git repo?
No. Git is for code, not data. Database dumps are large, binary-ish, and change constantly. Committing them to git bloats your repository, slows down clones, and risks exposing user data. Use proper backup tools (pg_dump, mysqldump) and store backups in object storage like S3 — not your code repo.
How do I know my backups actually work?
Test them. Regularly. Spin up a temporary database, restore your latest backup, run queries to verify the data, then tear it down. Do this monthly at minimum. The worst time to discover your backups are corrupt is during an actual emergency. See the restore drill section above for step-by-step instructions.