TL;DR: Docker Compose lets you run multiple containers — your app, a database, a cache, whatever you need — with one command: docker compose up. Instead of starting each service separately, you define everything in a single docker-compose.yml file. One file describes your entire stack. One command starts it all. One command stops it all. That's it.

Why AI Coders Need This

Here's what happens every time you build something real: your app needs a database. Maybe a cache too. Maybe a background worker. Suddenly "run my app" means starting three or four different things in the right order, with the right settings, connected to each other.

When you ask Claude or Cursor to "containerize my app" or "set up a local development environment," there's a very high chance it generates a docker-compose.yml file. It's the standard answer for running multi-service applications. And if you don't understand what that file says, you're copy-pasting infrastructure config you can't debug when something breaks.

You already know what Docker is — it packages a single app into a container. Docker Compose is the next step: it orchestrates multiple containers so they work together. Think of Docker as running one instrument. Docker Compose conducts the whole orchestra.

The good news? The file is readable. The commands are simple. And once you understand the pattern, you'll recognize it everywhere — because almost every modern web application follows it.

Real Scenario

You've been building a task management app with Claude. It works great locally — Node.js API, a few endpoints, everything stored in memory. Then you say:

Prompt You'd Type

This app needs a real database. Add PostgreSQL for data storage
and Redis for session caching. I want to run everything locally
with one command. Containerize the whole thing.

Claude doesn't just give you a Dockerfile. It generates a docker-compose.yml with three services — your app, a PostgreSQL database, and a Redis cache — all wired together. Let's look at exactly what it produces and what every line means.

What AI Generated

Here's the docker-compose.yml that Claude creates. This single file replaces "open three terminal tabs, start Postgres, start Redis, then start your app with the right environment variables":

# docker-compose.yml — your entire stack in one file
# Start everything:  docker compose up
# Stop everything:   docker compose down
# See what's running: docker compose ps

services:
  # --- Your Application ---
  app:
    build: .                              # build from Dockerfile in current directory
    ports:
      - "3000:3000"                       # expose port 3000 to your browser
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://postgres:secretpass@db:5432/taskmanager
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy        # wait for Postgres to be READY, not just started
      cache:
        condition: service_started
    volumes:
      - .:/app                            # sync your code into the container (live reload)
      - /app/node_modules                 # but keep container's node_modules separate

  # --- PostgreSQL Database ---
  db:
    image: postgres:16-alpine             # official Postgres image, no Dockerfile needed
    environment:
      POSTGRES_DB: taskmanager
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secretpass
    ports:
      - "5432:5432"                       # optional: access DB from host tools like pgAdmin
    volumes:
      - postgres_data:/var/lib/postgresql/data   # persist data across restarts
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  # --- Redis Cache ---
  cache:
    image: redis:7-alpine                 # official Redis image
    ports:
      - "6379:6379"                       # optional: access Redis from host

# Named volumes — data survives container restarts
volumes:
  postgres_data:

That's 45 lines. It replaces a page of setup instructions, three install guides, and the inevitable "why can't my app connect to the database?" debugging session.

Understanding Each Part

Services: The Building Blocks

Everything under services: is a container that Docker Compose will create and manage. In our example, there are three: app, db, and cache. These names are arbitrary — you pick them — but they become important because they're how containers find each other on the network.

When your app's code connects to postgresql://postgres:secretpass@db:5432/taskmanager, that db hostname works because Docker Compose creates a private network where service names resolve to the right container. No IP addresses to manage. No localhost confusion. Service name = hostname.

image vs build

There are two ways to tell a service what to run:

  • image: postgres:16-alpine — pull a pre-built image from Docker Hub. You use this for databases, caches, and other standard tools. No Dockerfile needed.
  • build: . — build a custom image from your Dockerfile. You use this for your own application code.

Most Compose files mix both. Your app uses build (because it's your custom code). Your database uses image (because PostgreSQL is already packaged and ready to go).
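When the Dockerfile isn't in the project root, or has a non-default name, build also accepts an expanded form (the Dockerfile.dev name here is a hypothetical example):

```yaml
services:
  app:
    build:
      context: .                    # folder sent to the image build
      dockerfile: Dockerfile.dev    # hypothetical alternate Dockerfile name
```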

Ports Mapping

The ports section maps a port on your computer to a port inside the container. The format is "host:container".

ports:
  - "3000:3000"    # your machine's port 3000 → container's port 3000
  - "8080:3000"    # your machine's port 8080 → container's port 3000

The left number is what you type in your browser (localhost:3000). The right number is what the app inside the container is listening on. They don't have to match. If you already have something running on port 3000, change the left number: "3001:3000". For a deeper dive, see What Are Ports?

Volumes: Keeping Your Data

Containers are disposable. When you stop them, anything stored inside is gone. Volumes solve this by mapping a location inside the container to persistent storage outside it.

There are two types you'll see:

  • Named volumes (postgres_data:/var/lib/postgresql/data) — Docker manages the storage location. Data survives docker compose down. This is what you use for databases.
  • Bind mounts (.:/app) — map a folder on your machine directly into the container. Changes you make to files on your machine appear instantly inside the container. This is what you use for development (live code reloading).

The critical rule: if a service stores data you care about (databases, uploaded files), it must have a volume. Without one, running docker compose down and docker compose up gives you an empty database every time.

Environment Variables

The environment section passes configuration into the container — database credentials, API keys, feature flags, connection strings. This is exactly like setting environment variables in a .env file, but scoped to each individual service.

environment:
  NODE_ENV: development
  DATABASE_URL: postgresql://postgres:secretpass@db:5432/taskmanager

You can also load from a file to keep secrets out of your Compose file:

env_file:
  - .env            # loads all KEY=VALUE pairs from .env file
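The .env file itself is just KEY=VALUE pairs, one per line (the values here are placeholders):

```ini
# .env — add this file to .gitignore so secrets stay out of Git
POSTGRES_PASSWORD=secretpass
NODE_ENV=development
```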

depends_on: Startup Order

depends_on controls which services start first. Your app needs the database to be running before it can connect, so you declare that dependency:

depends_on:
  db:
    condition: service_healthy    # wait until healthcheck passes
  cache:
    condition: service_started    # just wait until container starts

The key difference: service_started means "the container is running" (but the service inside might still be booting). service_healthy means "the healthcheck passed" — the database is actually accepting connections. For databases, always use service_healthy with a healthcheck. Otherwise your app will try to connect before Postgres is ready and crash.
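In the main example the cache only uses service_started. If you want Redis to get the same readiness guarantee, a common pattern (sketched here, not from the generated file) is a healthcheck built on redis-cli:

```yaml
cache:
  image: redis:7-alpine
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]   # Redis replies PONG once it's accepting connections
    interval: 5s
    timeout: 3s
    retries: 5
```

With that in place, the app's depends_on entry for cache can be upgraded to condition: service_healthy.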

Common Patterns

Nearly every docker-compose.yml you'll encounter follows one of these patterns. Once you recognize the pattern, you can read any Compose file.

Pattern 1: App + Database

The most common setup. Your app and a database, nothing else.

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:password@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:

Pattern 2: App + Database + Cache

When you add Redis for sessions, caching, or job queues. This is what our main example uses. It's the go-to for any app that needs to be fast under load.
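Starting from Pattern 1, the delta is small: one new service and one new environment variable (the service name cache is arbitrary, as before):

```yaml
services:
  app:
    # ...everything from Pattern 1, plus:
    environment:
      REDIS_URL: redis://cache:6379   # service name doubles as hostname

  cache:
    image: redis:7-alpine
```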

Pattern 3: App + Database + Nginx Reverse Proxy

When you're deploying to production on a VPS, you'll often add nginx as a reverse proxy in front of your app. Nginx handles SSL, serves static files, and forwards API requests to your app.

services:
  app:
    build: .
    expose:
      - "3000"                    # only visible to other containers, not to host
    environment:
      DATABASE_URL: postgresql://postgres:password@db:5432/myapp

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"                   # the only service exposed to the outside world
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - app

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Notice expose instead of ports on the app service. expose makes the port available to other containers on the network but not to your host machine. Only nginx is exposed to the outside world on ports 80 and 443. This is a security best practice — your app and database are behind the proxy, invisible to the internet.
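The mounted ./nginx.conf is where the actual forwarding happens. A minimal sketch (a real production config would also add SSL and static-file handling):

```nginx
# nginx.conf — mounted at /etc/nginx/conf.d/default.conf,
# so a bare server block is all that's needed
server {
    listen 80;

    location / {
        proxy_pass http://app:3000;   # "app" resolves on the Compose network
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```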

Essential Commands

You only need to know six commands. Seriously — six commands cover 95% of everything you'll do with Docker Compose.

docker compose up

Starts everything defined in your docker-compose.yml. Add -d to run in the background (detached mode) so you get your terminal back:

docker compose up          # starts everything, logs stream to terminal
docker compose up -d       # starts everything in background
docker compose up --build  # rebuild images before starting (use after code changes)

docker compose down

Stops and removes all containers. Your named volumes (database data) survive by default:

docker compose down        # stop containers, keep data
docker compose down -v     # stop containers AND delete volumes (⚠️ destroys data!)

Warning: The -v flag deletes your volumes. That means your database data, uploaded files — everything stored in volumes — is gone. Only use -v when you intentionally want a clean slate.

docker compose logs

See what's happening inside your containers. This is your first stop when something isn't working:

docker compose logs            # all services
docker compose logs app        # just the app service
docker compose logs -f app     # follow (stream) logs in real time
docker compose logs --tail 50  # last 50 lines

docker compose ps

Shows the status of all your services — which are running, which have exited, and which ports are mapped:

docker compose ps

If a service shows "Exited" instead of "Up," check its logs to find out what went wrong.

docker compose exec

Run a command inside a running container. This is how you open a shell for debugging or run database commands:

docker compose exec app sh               # shell into your app container
docker compose exec db psql -U postgres   # open PostgreSQL CLI
docker compose exec app npm test          # run tests inside the container

docker compose build

Rebuild your images without starting containers. Useful when you've changed your Dockerfile or dependencies:

docker compose build           # rebuild all services that use 'build:'
docker compose build app       # rebuild just the app service

What AI Gets Wrong About Docker Compose

AI is excellent at generating working docker-compose.yml files. But it consistently makes a few mistakes that can cause real problems. Here's what to watch for.

1. Wrong Port Mappings

AI sometimes maps ports that conflict with services you already have running, or maps ports backwards. Remember: the format is host:container, not container:host.

# ❌ AI sometimes generates this — backwards
ports:
  - "5432:3000"    # maps Postgres port to app port... wrong

# ✅ Correct
ports:
  - "3000:3000"    # host port 3000 → container port 3000

Also check for conflicts: if you have PostgreSQL installed locally and listening on port 5432, mapping "5432:5432" for the containerized Postgres will fail. Change it to "5433:5432" and connect from host tools via localhost:5433. Note that connection strings inside the Compose network are unaffected — other services still reach the database at db:5432, because the remap only changes how your host machine gets in.
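The remapped version looks like this — only the host-side port changes:

```yaml
db:
  image: postgres:16-alpine
  ports:
    - "5433:5432"   # host tools (pgAdmin, psql) connect to localhost:5433
# Services on the Compose network still use db:5432; the remap only
# affects connections coming from your host machine.
```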

2. Missing Volumes for Database Persistence

This is the biggest one. AI sometimes generates a database service without a volume. Everything works fine until you run docker compose down and docker compose up — and your database is empty.

# ❌ No volume — data disappears on restart
db:
  image: postgres:16-alpine
  environment:
    POSTGRES_DB: myapp
    POSTGRES_PASSWORD: password

# ✅ Named volume — data persists
db:
  image: postgres:16-alpine
  environment:
    POSTGRES_DB: myapp
    POSTGRES_PASSWORD: password
  volumes:
    - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Always check: does every database service have a volume? If not, add one.

3. The "version" Field Confusion

Older Docker Compose files start with version: "3.8" or similar. AI still generates this because it was trained on older examples. The version field is deprecated as of Docker Compose v2 and does nothing. It won't break anything if it's there, but it's unnecessary clutter.

# ❌ Old format — version field is deprecated
version: "3.8"
services:
  app:
    build: .

# ✅ Modern format — no version needed
services:
  app:
    build: .

4. Missing or Inadequate Health Checks

AI often uses depends_on without a health check, which means your app starts as soon as the database container launches — not when the database is actually ready to accept connections. PostgreSQL can take several seconds to initialize, and your app will crash trying to connect during that window.

# ❌ depends_on without healthcheck — app crashes on first connection
depends_on:
  - db

# ✅ depends_on with healthcheck — app waits for DB to be ready
depends_on:
  db:
    condition: service_healthy

And make sure the database service actually has a healthcheck defined. Without one, service_healthy has nothing to check against.

5. Hardcoded Secrets in the Compose File

AI loves putting passwords directly in the docker-compose.yml. For local development, this is fine. For anything deployed, it's a security risk — especially if this file ends up in a Git repository.

# ❌ Secrets in the file (fine for dev, bad for production)
environment:
  POSTGRES_PASSWORD: my_super_secret_password

# ✅ Secrets from .env file (and .env is in .gitignore)
environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
# Then in .env:  POSTGRES_PASSWORD=my_super_secret_password
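Compose's interpolation syntax also supports fallback defaults, which is handy when a teammate clones the repo before creating a .env file:

```yaml
environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-devpassword}   # uses devpassword if the variable is unset
```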

Debugging Docker Compose with AI

When something goes wrong — and it will — Docker Compose gives you the tools to figure out what happened. Here's the debugging workflow:

  1. Check status: docker compose ps — is the service running or did it exit?
  2. Read logs: docker compose logs [service] — what did the container say before it died?
  3. Get inside: docker compose exec [service] sh — explore the filesystem, test connections
  4. Ask AI: paste the error output and your docker-compose.yml into Claude

Claude is particularly good at diagnosing connection refused errors (wrong hostname — remember, use service names not localhost), port conflicts, permission issues, and missing environment variables. Give it the full error and the full Compose file for the best results.

What to Learn Next

Docker Compose builds on several concepts covered in this article: containers and images, port mapping, environment variables, and volumes. If anything here was unclear, revisit those fundamentals first — everything in a Compose file maps back to one of them.

Frequently Asked Questions

What's the difference between Docker and Docker Compose?

Docker runs individual containers. Docker Compose runs multiple containers together from a single YAML file. Think of Docker as running one instrument, and Docker Compose as conducting the whole orchestra. You use Docker for a single service (just your app), and Docker Compose when your app needs a database, cache, or other services running alongside it.

Do I need a Dockerfile to use Docker Compose?

Not always. If you're using pre-built images (like postgres:16-alpine or redis:7-alpine), you don't need a Dockerfile for those services. You only need a Dockerfile for services you're building from your own code. A typical docker-compose.yml might have one service with 'build: .' (needs a Dockerfile) and two services with 'image:' (no Dockerfile needed).

Does docker compose down delete my database data?

It depends on volumes. If your docker-compose.yml defines a named volume for the database (like postgres_data:/var/lib/postgresql/data), your data survives 'docker compose down'. But if you run 'docker compose down -v' (with the -v flag), it deletes volumes too — and your data is gone. Always check for the -v flag before running down.

What does depends_on actually do?

depends_on controls startup order — it makes sure the database container starts before your app container. But by default, it only waits for the container to start, not for the service inside to be ready. A PostgreSQL container can be 'started' but still initializing. Use depends_on with 'condition: service_healthy' and a healthcheck to wait until the database is actually accepting connections.

Can I use Docker Compose in production?

Docker Compose works fine for small production deployments on a single server — a VPS running your app, database, and maybe nginx. For larger deployments with multiple servers, auto-scaling, or zero-downtime deployments, you'd use Kubernetes or a managed container service like AWS ECS or Google Cloud Run. Most vibe coders start with Compose and only move to orchestration tools when they outgrow it.