April 28, 2022 · 5 min read

Docker Compose for Real Development Stacks

#docker #docker-compose #developer-experience #local-dev #infrastructure


Compose Gets Dismissed Too Easily

Docker Compose is usually discussed in extremes.

One camp treats it like a toy.

The other camp tries to force it to behave like a miniature production orchestrator.

Both approaches miss the point.

Compose is at its best when you use it as a local integration platform:

  • good enough to run real workflows
  • simple enough that every developer can trust it
  • explicit enough that dependencies stop living in tribal knowledge

That is a valuable thing.

The Real Local Development Problem

Modern applications rarely need just one service.

Even a modest system can depend on:

  • API server
  • frontend
  • PostgreSQL
  • Redis
  • background worker
  • object storage emulator
  • email catcher

When the setup lives in a wiki page and three shell scripts, onboarding becomes archaeology.

Compose gives you a way to turn that dependency graph into a repeatable contract.

What I Want From a Local Stack

My local platform does not need to be production.

It does need to be dependable.

That means:

  1. one command to start the stack
  2. predictable ports and service names
  3. real health checks
  4. persisted data when useful, throwaway data when not
  5. easy selective startup for heavier dependencies
  6. enough fidelity to test real workflows

If the stack is fragile, people stop using it. Then the whole point is gone.

A Compose File Should Explain the System

I like a Compose file that reads like architecture, not just syntax.

services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 5s
      retries: 10
 
  redis:
    image: redis:7
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10
 
  api:
    build:
      context: .
      dockerfile: apps/api/Dockerfile.dev
    env_file:
      - .env.local
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - "4000:4000"
    volumes:
      - .:/workspace
 
  worker:
    build:
      context: .
      dockerfile: apps/worker/Dockerfile.dev
    env_file:
      - .env.local
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - .:/workspace
 
volumes:
  postgres-data:

This tells a new engineer the shape of the system immediately.

Health Checks Matter More Than depends_on

One of the oldest Compose mistakes is assuming container startup order equals readiness.

It does not.

Postgres can be "started" and still be unavailable for actual connections. Redis can boot before data restoration is done. An API can start accepting requests before migrations have finished.

That is why I prefer health checks plus condition: service_healthy whenever the stack supports it.

Without that, local boot becomes a race condition disguised as convenience.
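For services that do not ship a CLI probe like pg_isready or redis-cli, an HTTP endpoint works too. A sketch, assuming your API image includes curl and your app exposes a hypothetical /healthz route (both are assumptions, not part of the stack above):

```yaml
  api:
    # build, ports, env_file as in the main file above
    healthcheck:
      # curl and the /healthz route are assumptions about your image and app
      test: ["CMD", "curl", "-f", "http://localhost:4000/healthz"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 15s   # grace period for migrations before failures count
```

With health checks in place, recent Compose versions also support docker compose up --wait, which blocks until dependencies report healthy instead of merely started.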

Profiles Keep the Stack Practical

Not every developer needs every dependency on every run.

Profiles are a great way to keep the stack flexible.

services:
  mailhog:
    image: mailhog/mailhog
    profiles: ["messaging"]
    ports:
      - "8025:8025"
 
  minio:
    image: minio/minio
    profiles: ["storage"]
    command: server /data
    ports:
      - "9000:9000"

Now you can run:

docker compose --profile storage up

That keeps the default path lightweight while still supporting deeper workflows.
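Profiles also combine, and they can be set through the environment so nobody has to remember the flag on every invocation:

```shell
# start the default services plus the storage and messaging extras
docker compose --profile storage --profile messaging up -d

# equivalent, via the COMPOSE_PROFILES variable
COMPOSE_PROFILES=storage,messaging docker compose up -d
```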

Local Persistence Should Be Intentional

Not every service should persist data the same way.

I usually split dependencies into two groups.

Persisted

Things where keeping data between restarts saves time.

  • PostgreSQL
  • object storage emulator
  • search index for heavier apps

Disposable

Things where starting clean is usually fine.

  • ephemeral test workers
  • local schedulers
  • sidecar helpers

Compose works best when you make that choice explicitly instead of letting everything accidentally persist forever.
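In Compose terms, that split usually maps to named volumes for the persisted group and tmpfs, or simply no volume at all, for the disposable group. A minimal sketch of the two choices side by side:

```yaml
services:
  postgres:
    volumes:
      - postgres-data:/var/lib/postgresql/data   # survives restarts and `down`

  redis:
    tmpfs:
      - /data   # cache only: wiped on every restart, never persisted

volumes:
  postgres-data:
```

When you do want a clean slate, docker compose down -v removes the named volumes too.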

Seed Data Is Part of the Platform

If a stack is only usable after six manual setup steps, the stack is incomplete.

I like seed or init containers for the predictable basics.

  db-seed:
    build:
      context: .
      dockerfile: scripts/Dockerfile.seed
    depends_on:
      postgres:
        condition: service_healthy
    command: ["node", "scripts/seed-dev-data.js"]
    profiles: ["seed"]

That gives the team a standard path for local demo data or integration fixtures.
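Because the seed service sits behind a profile, running it stays an explicit one-off action rather than part of every boot:

```shell
# run the seeder once and remove its container afterwards
docker compose --profile seed run --rm db-seed
```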

Use Service Names as Stable Hostnames

One of the best Compose ergonomics is internal DNS.

Inside the network, your app can connect to postgres, redis, minio, and so on by service name.

That means local environment configuration can stay simple and obvious:

DATABASE_URL=postgres://app:app@postgres:5432/app
REDIS_URL=redis://redis:6379

No random host discovery. No "use localhost here but not there" confusion inside containers.

Production-Like Does Not Mean Production-Identical

I do not want a local stack that tries to perfectly simulate cloud networking, autoscaling, and every managed service nuance.

That usually creates pain without much value.

I want the local stack to preserve the important truths:

  • same major dependencies
  • same protocols
  • same environment shape
  • same important startup order
  • same migration path

That is enough to catch most integration issues early.

The Best Compose Files Age Well

You can tell whether a Compose setup was designed well by how it feels six months later.

Good signs:

  • new engineers can get running quickly
  • services start predictably
  • common workflows work without shell folklore
  • optional dependencies stay optional
  • local debugging feels close to reality

Bad signs:

  • docker compose up only works after a magic script
  • startup order breaks randomly
  • ports collide constantly
  • nobody knows which volumes can be deleted safely
  • the docs and the Compose file disagree

My Rule of Thumb

If a team runs more than two or three local dependencies, Compose should probably be the system of record for the development platform.

Not because it is perfect.

Because a written dependency graph is better than a social dependency graph.

The Main Takeaway

Docker Compose is still one of the highest-leverage tools for local development when it is used for what it is actually good at:

  • declaring the stack
  • wiring service dependencies clearly
  • making local integration repeatable
  • reducing setup ambiguity for the team

Treat it like an integration contract, not just a convenience file, and it stays useful for a long time.