TL;DR: Docker packages your app and everything it needs (runtime, libraries, config) into a container that runs identically anywhere. A Dockerfile defines how to build the image. Docker Compose runs multiple containers together (app + database + cache). You don't always need Docker — Vercel and Railway handle this for you. You need it for complex multi-service setups or custom runtimes.
Why AI Coders Need to Know This
"Works on my machine" is one of the oldest problems in software development. Your Node.js version differs from the server's. A library behaves differently on macOS versus Ubuntu. A Python dependency isn't installed on the production server. Docker solves all of these by packaging your entire runtime environment — not just your code — into a single portable unit.
AI coding tools reach for Docker when you have complex deployment needs. If you ask Claude to "set up a production-ready deployment," there's a good chance it generates a Dockerfile, a docker-compose.yml, and maybe a GitHub Actions workflow that builds and pushes Docker images. Understanding these files is the difference between knowing what you deployed and having no idea what's running in production.
Real Scenario
Prompt I Would Type
I have a Node.js API and a PostgreSQL database.
I want to run both locally with one command during development,
and deploy them both to my VPS.
Make it reproducible — works the same on every machine.
What AI Generated
# Dockerfile — how to build your app's container image
# This file turns your code into a reproducible, portable image
FROM node:22-alpine
# Start from the official Node.js 22 image (Alpine = minimal Linux distro; roughly 150MB on disk vs ~1GB for the Debian-based node:22)
WORKDIR /app
# All subsequent commands run from /app directory inside the container
# Copy package files first (Docker caches this layer)
# If package.json hasn't changed, npm ci won't re-run on next build
COPY package*.json ./
RUN npm ci --omit=dev
# npm ci is faster and stricter than npm install — installs exact versions from the lockfile
# (--omit=dev skips devDependencies; the older --only=production flag is deprecated)
# Copy the rest of your source code
COPY . .
# Document which port the app uses (doesn't actually publish it — that's in Compose)
EXPOSE 3000
# The command that starts your app when the container runs
CMD ["node", "src/server.js"]
# docker-compose.yml — runs your whole stack with one command
# Run it with: docker compose up
# Stop it with: docker compose down
services:
  app:
    build: .            # build from the Dockerfile in current directory
    ports:
      - "3000:3000"     # host:container port mapping
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://postgres:password@db:5432/myapp
      # "db" is the service name — Docker Compose creates a network where services
      # can find each other by service name
    depends_on:
      db:
        condition: service_healthy  # wait until database is ready before starting app
    volumes:
      - .:/app            # mount current directory — changes reflect immediately
      - /app/node_modules # don't overwrite container's node_modules
  db:
    image: postgres:16-alpine  # use official Postgres image (no Dockerfile needed)
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data  # persist data even when containers are removed
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:  # named volume — survives docker compose down (deleted only by down -v)
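With those two files in place, the whole stack is driven by a handful of commands. A quick reference (these assume you run them from the project directory containing docker-compose.yml):

```shell
docker compose up --build   # rebuild images if needed, start everything in the foreground
docker compose up -d        # start in the background (detached)
docker compose logs -f app  # follow the app service's logs
docker compose down         # stop and remove containers (named volumes survive)
```

`up --build` is worth remembering: plain `up` reuses the last-built image, so Dockerfile changes won't take effect without it.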
Understanding Each Part
Images vs Containers
An image is the blueprint — a read-only snapshot of your app and its environment. A container is a running instance of an image. You build images with docker build and run containers with docker run. One image can spawn many containers — like a class and its instances in OOP.
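The image/container split maps directly onto two commands. A minimal sketch, assuming the Dockerfile above sits in the current directory (`myapp` is just a tag we choose here):

```shell
# Build the image once from the Dockerfile
docker build -t myapp .

# Run two independent containers from the same image —
# like instantiating a class twice
docker run -d -p 3000:3000 --name myapp-1 myapp
docker run -d -p 3001:3000 --name myapp-2 myapp

# List running containers: both appear, built from the one image
docker ps
```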
Dockerfile Layers and Caching
Every instruction in a Dockerfile creates a layer. Docker caches layers and only rebuilds from the first changed layer. This is why the Dockerfile above copies package.json and runs npm ci before copying the rest of the code — dependencies change infrequently, so that layer stays cached. Code changes frequently, but its layer builds quickly because node_modules are already cached.
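You can watch the cache work by building twice. A sketch, assuming the Dockerfile above:

```shell
docker build -t myapp .           # first build: every layer executes
echo "// tweak" >> src/server.js  # change only application code, not package.json
docker build -t myapp .           # rebuild: the FROM, WORKDIR, COPY package*.json
                                  # and RUN npm ci layers report CACHED; only
                                  # COPY . . and later layers re-run
```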
Docker Compose Networking
When you run docker compose up, Docker creates a private network and all services join it. They can reach each other by service name — that's why the database URL uses db:5432 instead of localhost:5432. From inside the app container, db resolves to the database container's IP address automatically.
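You can verify this name resolution from inside a running container (this assumes the Compose stack above is up; `nc` ships with the Alpine base image's BusyBox):

```shell
# From inside the app container, the hostname "db" resolves to the
# Postgres container — no IP addresses or localhost involved
docker compose exec app sh -c 'nc -z db 5432 && echo "db reachable"'
```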
Volumes: Persisting Data
Containers are ephemeral — when a container is removed and recreated, anything written to its internal filesystem is gone. Volumes persist data outside the container lifecycle. The postgres_data volume in the example keeps your database data even after docker compose down removes the containers. Without it, your database would start empty every time you recreated the stack.
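The lifecycle difference is easy to see from the CLI (assuming the Compose stack above):

```shell
docker volume ls       # named volumes appear as <project>_postgres_data
docker compose down    # removes the containers; the volume survives
docker compose up -d   # the database comes back with its data intact
docker compose down -v # -v ALSO deletes the volumes — the data is gone
```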
When You Don't Need Docker
Docker adds complexity. For most vibe coder projects, you don't need it:
- Next.js on Vercel — Vercel handles containerization automatically
- Node.js on Railway — Railway detects Node and deploys without Docker
- Python on Render — Render supports Python natively
Use Docker when: you have multiple services that need to run together, you have a custom runtime or unusual dependencies, you need exact environment parity between dev and production, or you're deploying to a provider that uses Docker images (AWS ECS, Google Cloud Run).
What AI Gets Wrong About Docker
1. Running as Root
AI Dockerfiles often run the app as root (the default). This is a security risk — if the container is compromised, the attacker has root access. Add a non-root user: RUN addgroup -S app && adduser -S app -G app, then USER app before CMD.
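Applied to the Dockerfile above, the fix is two extra lines near the end (this is the Alpine syntax; Debian-based images use groupadd/useradd instead):

```dockerfile
# ...after COPY . . — create an unprivileged user and switch to it
RUN addgroup -S app && adduser -S app -G app
USER app
CMD ["node", "src/server.js"]
```

Everything before `USER app` (installing dependencies, copying files) still runs as root, which is fine; only the running process is de-privileged.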
2. Copying .env Into the Image
AI sometimes generates COPY .env . in the Dockerfile. Never do this — the image gets baked with your secrets and if it's ever pushed to a registry, those secrets are exposed. Always add .env to .dockerignore and pass secrets via environment variables at runtime.
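The safe pattern is: exclude at build time, inject at run time. A sketch (reusing the `myapp` tag as an example):

```shell
# Keep .env out of the build context entirely
echo ".env" >> .dockerignore

# Supply secrets when the container starts, not when the image is built
docker run --env-file .env myapp
# In Compose, the equivalent is an "env_file: .env" entry under the service
```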
3. Large Images
Using FROM node:22 (Debian-based, roughly 1GB on disk) instead of FROM node:22-alpine (around 150MB) creates unnecessarily large images. Multi-stage builds reduce this further by building in one stage and copying only the built output into a minimal final image.
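A multi-stage sketch of the same app — this assumes your project has an "npm run build" script that emits a dist/ directory, so adjust the paths to match yours:

```dockerfile
# Stage 1: build with dev dependencies available
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: final image ships only production deps and the built output
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```

The build stage (with its dev dependencies and intermediate files) is discarded; only what the second stage copies ends up in the image you ship.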
4. Missing .dockerignore
Without a .dockerignore file, COPY . . copies everything — including node_modules (huge), .git, .env, and local build artifacts. Create a .dockerignore with at minimum: node_modules, .env, .git, *.log.
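A minimal .dockerignore covering those cases (extend it with whatever build artifacts your project produces, e.g. dist or .next):

```
node_modules
.env
.git
*.log
```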
How to Debug Docker with AI
When a container fails to start, check logs: docker compose logs app or docker logs [container-id]. Paste the error into Claude: "My Docker container is failing with this error. Here's the Dockerfile and docker-compose.yml." Claude is strong at diagnosing port conflicts, missing environment variables, healthcheck failures, and build layer issues.
For interactive debugging: docker compose exec app sh opens a shell inside the running container, letting you explore the filesystem and test commands directly.
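A short debugging toolkit worth pasting alongside your error when you ask the AI for help:

```shell
docker compose ps                  # which services are up, and their health status
docker compose logs --tail 50 app  # last 50 log lines from the app service
docker compose exec app sh         # open a shell inside the running app container
docker inspect <container-id>      # full config: env vars, mounts, networks, ports
```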
What to Learn Next
Frequently Asked Questions
What is Docker?
Docker is a platform for packaging applications into containers — self-contained units that include your code, runtime, libraries, and configuration. A Docker container runs identically on any machine that has Docker installed: your laptop, a teammate's computer, a VPS, or a cloud provider. It solves the 'works on my machine' problem.
How is Docker different from a virtual machine?
A virtual machine (VM) emulates an entire computer, including the operating system. A container shares the host OS kernel but isolates the application in its own environment. Containers start in seconds (VMs take minutes) and use far less memory. Containers are isolated from each other, though not as strongly as VMs.
What is a Dockerfile?
A Dockerfile is a text file with instructions for building a Docker image — the blueprint for your container. Each instruction adds a layer: FROM specifies the base image, RUN executes commands, COPY copies files, EXPOSE declares a port, and CMD sets the startup command. Running 'docker build' follows these instructions to create a reproducible image.
Do I need Docker for my project?
Not always. For Next.js, React, and Node.js apps deploying to Vercel, Railway, or Render, you probably don't need Docker — these platforms handle containerization for you. Docker becomes valuable when you have complex multi-service setups (app + database + cache), custom runtimes, or need consistent environments across a team.
What is Docker Compose?
Docker Compose is a tool for running multi-container applications defined in a docker-compose.yml file. Instead of starting your app, database, and cache with three separate commands, Compose starts them all together with 'docker compose up'. It's the standard way to run development environments locally with Docker.