Docker Compose Best Practices
Docker Compose does one thing well: it describes a multi-container environment and lets you spin it up consistently everywhere—your laptop, a CI runner, a colleague’s machine.
That is why it works so well for local stacks, integration tests, and one-off tasks. It is also why teams get into trouble when they try to turn it into a production orchestration platform. The moment you start treating a compose.yaml as a long-term deployment manifest, YAML piles up and boundaries blur.
This post is about one thing: keeping Compose in the lane where it actually shines.
Position It Correctly—It Defines Environments, Not Deployments
The official docs are clear about what Compose is built for:
Good fit:
- Local dev stack: app + database + cache
- Integration test environments
- Single-machine preview environments
- One-off tasks: migrations, seeds, admin jobs
- Reproducing issues with real dependencies
Bad fit:
- Multi-node production orchestration
- Auto-scaling or rolling deployments
- Complex release strategies
- Cross-host scheduling
- Serious secret management at scale
If you need rolling releases, instance scheduling, fault tolerance, and auto-scaling—Compose is not “almost there.” The tool is wrong for the job.
Use v2 Plugin: docker compose
The current standard is the Docker Compose v2 plugin:
```shell
docker compose version
```

Use `docker compose ...` as the primary form, not the legacy `docker-compose ...`.
Also stop writing the top-level `version:` field in new Compose files. The official guidance is explicit: it is no longer meaningful for version selection.
Keep the Directory Structure Ordinary
```
.
├── compose.yaml
├── compose.override.yaml
├── compose.ci.yaml
├── .env.example
├── app/
├── infra/
│   ├── postgres/
│   └── nginx/
└── scripts/
```

Practical conventions:

- `compose.yaml` — base configuration
- `compose.override.yaml` — local dev overrides, loaded automatically
- `compose.ci.yaml` — CI-specific overrides
- `.env.example` — variable documentation, never real credentials
Splitting into seven Compose files is a smell. When `docker compose config` produces something nobody can read, the structure has already failed.
Start Small and Stable
```yaml
services:
  api:
    build:
      context: .
      dockerfile: app/Dockerfile
    command: ["./bin/api"]
    environment:
      APP_ENV: development
      DATABASE_URL: postgres://app:app@db:5432/app?sslmode=disable
      REDIS_URL: redis://redis:6379/0
    ports:
      - "8080:8080"
    volumes:
      - .:/workspace
    working_dir: /workspace
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10

  redis:
    image: redis:7
    command: ["redis-server", "--save", "", "--appendonly", "no"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10

volumes:
  postgres-data:
```

No `version:` field; this is the current recommended form.
Override Files Are for Convenience, Not Hiding Differences
compose.override.yaml is the right place for local dev-specific config:
- bind mounts for live reload
- debug ports
- hot-reload commands
- more verbose log levels
```yaml
services:
  api:
    command: ["air", "-c", ".air.toml"]
    environment:
      LOG_LEVEL: debug
    ports:
      - "2345:2345"
```

Developers run:

```shell
docker compose up
```

CI explicitly specifies its override:

```shell
docker compose -f compose.yaml -f compose.ci.yaml up --abort-on-container-exit --exit-code-from test
```

An override should change environment details, not hide the fact that the same service follows two completely different contracts across environments.
Get .env Right or It Will Bite You
Compose’s env var handling trips up almost everyone. Three things to keep separate:
- `.env` feeds `${...}` substitution in the Compose file itself
- `env_file:` injects variables into the container's environment
- `.env.example` is safe to commit; the real `.env` is not
```yaml
services:
  api:
    env_file:
      - .env.local
    environment:
      APP_ENV: development
      HTTP_PORT: ${HTTP_PORT:-8080}
```

Common pitfalls:

- `.env` affects Compose file rendering
- `env_file` goes into the container
- shell env vars can override Compose substitution results
- quotes, spaces, extra newlines all cause surprises
When in doubt:
```shell
docker compose config
```

Look at the final merged output before blaming Docker.
Treat Secrets Seriously Even Locally
Most credential leaks start with “it’s just dev.”
At minimum for local dev:
- Use low-privilege or fake credentials
- Never commit real keys to the repo
- Prefer environment variables over hardcoded values
- Mount config files read-only where possible
```yaml
services:
  api:
    environment:
      STRIPE_API_BASE: https://api.stripe.com
      STRIPE_API_KEY: ${STRIPE_API_KEY}
    volumes:
      - ./infra/api/config.dev.yaml:/workspace/config.yaml:ro
```

For production-grade secret management, Compose is not the answer. Use a proper secret manager.
healthcheck Is Worth the Effort
“Container started” does not mean “service is ready.”
Common cases where this matters:
- Database process is up but not yet accepting connections
- App is running migrations on startup
- HTTP port is listening but downstream dependencies are not connected
Add healthcheck to any non-trivial service:
```yaml
services:
  api:
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:8080/healthz"]
      interval: 10s
      timeout: 3s
      retries: 10
      start_period: 15s
```

A good health check is:
- Low overhead
- Executable from inside the container
- Representative of actual readiness
- Not dependent on unstable external networks
Writing a health check that hits a third-party public endpoint tests the internet, not your service.
depends_on Manages Startup Order, Not Application Readiness
Be direct about this: depends_on is not a substitute for your application’s retry and backoff logic.
Even with:
```yaml
depends_on:
  db:
    condition: service_healthy
```

you only get ordered startup. This does not solve:
- Database becoming unhealthy later
- Migrations not yet complete
- Initialization data not ready
- Permission or tenant data not created
Compose handles “what runs first.” Your application still needs to handle “am I actually ready.”
Use Named Volumes for Persistent Data
For container-managed state like databases, use named volumes:
```yaml
services:
  db:
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```

For source code, config, and scripts that you want to edit on the host, bind mounts are the right tool:
```yaml
services:
  api:
    volumes:
      - .:/workspace
```

Practical split:
- named volume: service data (databases, caches)
- bind mount: development workspace
Avoid bind-mounting database data directories to host paths. Permissions, performance, and filesystem differences will cause unexplained problems.
Default Network Is Usually Enough
Compose creates a default network automatically. Services reach each other by service name:
- `db:5432`
- `redis:6379`
- `api:8080`
Most local environments do not need more than this.
Only create custom networks when you genuinely need topology isolation:
```yaml
services:
  nginx:
    image: nginx:1.27
    networks:
      - edge
  api:
    build: .
    networks:
      - edge
      - backend
  db:
    image: postgres:16
    networks:
      - backend

networks:
  edge:
  backend:
    internal: true
```

`internal: true` is useful for a backend-only network: it blocks all external connectivity.
profiles Beats Commenting Out YAML
Some services are not needed every time:
- MailHog
- Jaeger
- Prometheus
- Local object storage emulator
The `profiles` attribute is the right tool:
```yaml
services:
  api:
    build: .
  mailhog:
    image: mailhog/mailhog
    profiles: ["dev"]
  jaeger:
    image: jaegertracing/all-in-one:1.57
    profiles: ["observability"]
```

Default run (base stack only):

```shell
docker compose up
```

With MailHog:

```shell
docker compose --profile dev up
```

With observability tools:

```shell
docker compose --profile observability up
```

This beats asking everyone to comment out service blocks manually.
One-off Tasks Need Explicit Services
Migrations, seeds, and admin jobs belong as explicit services:
```yaml
services:
  migrate:
    build:
      context: .
      dockerfile: app/Dockerfile
    command: ["./bin/migrate", "up"]
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app?sslmode=disable
    depends_on:
      db:
        condition: service_healthy
    profiles: ["ops"]
```

Run with:

```shell
docker compose --profile ops run --rm migrate
```

This is more maintainable than a wiki full of `docker run ...` commands.
Make Logs Accessible
For local and CI use, container stdout/stderr is usually sufficient.
Commands you actually need:
```shell
docker compose logs -f api
docker compose logs --tail=100 db
docker compose ps
```

To cap verbose logs:
```yaml
services:
  api:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

If you need centralized log aggregation, long-term retention, and cross-host search, you have already left Compose's comfort zone.
restart Policy Is Helpful, Not a Stability Solution
Appropriate uses for local dependencies:
- One-off jobs: no restart needed
- Persistent dependencies: `unless-stopped` is reasonable
- Never use `restart` as a band-aid for crash loops
```yaml
services:
  db:
    image: postgres:16
    restart: unless-stopped
```

Excessive auto-restart hides real problems. "All containers up" does not mean "all services healthy." CI environments should almost never use `restart`; test failures should fail fast and visibly.
Compose Excels in CI and Integration Testing
The real value in CI: bring up your dependency environment, run your tests, throw everything away.
```yaml
services:
  test:
    build:
      context: .
      dockerfile: app/Dockerfile
    command: ["go", "test", "./...", "-count=1"]
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app?sslmode=disable
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10
```

CI pipeline:
```shell
docker compose -f compose.yaml -f compose.ci.yaml up --build --abort-on-container-exit --exit-code-from test
docker compose down -v
```
- Use `-p` for an isolated project name
- Clean up networks and volumes after the job
- Keep test data deterministic
- Never depend on state from a previous job run
Commands You Actually Need
```shell
docker compose up -d --build
docker compose down
docker compose down -v
docker compose ps
docker compose logs -f api
docker compose exec api sh
docker compose run --rm migrate
docker compose config
```

`docker compose config` is underused. It shows the final merged, substituted, profile-expanded output, which is much safer than guessing.
Common Anti-Patterns
Treating Compose as a long-term production orchestrator.
A single machine at very small scale might just barely work. The moment you want reliable deployments and elastic scaling, forcing Compose further is the wrong call.
Putting real secrets in Compose files.
If your compose.yaml contains production passwords, the problem is process, not YAML syntax.
Expecting depends_on to solve readiness.
It handles startup order. Application retry logic and connection health checks are still your responsibility.
Bind-mounting everything.
Source code mounts are fine. Bind-mounting database data directories, runtime directories, and cache directories usually multiplies your problems.
Too many files and profiles.
If nobody can answer “what files and profiles do I need to start locally,” the design has already failed.
Over-relying on container_name.
Compose provides service-name-based discovery out of the box. Fixed container names create naming conflicts and reduce flexibility.
Checklist
A Compose project in good shape has:
- Uses `docker compose`, not the legacy v1 `docker-compose`
- No deprecated `version:` field
- One base file, a few overrides, clear responsibilities
- Key dependencies have healthcheck
- Stateful services use named volumes
- Bind mounts only where they genuinely aid development
- Optional services controlled via profiles
- No real secrets in the repo
- CI creates, destroys, and cleans up reproducibly
- Nobody is treating it as a production orchestrator
Compose does not need to be “advanced.” It needs to be stable, predictable, and low-friction. Keep it in its lane and it will serve you well.
Further Reading
- Docker Compose Documentation — official entry point
- Compose File Reference — all fields explained
- Compose v2 Plugin Install (Linux) — v2 setup
- Multiple Compose Files — override and profile mechanics
- Docker Networking — network isolation and `internal` networks