Docker Compose for Beginners: From docker run to YAML
If you have ever deployed a web application, you know that modern software rarely runs as a single process. A typical stack involves a web server, an application backend, a database, a cache layer, and perhaps a message queue — each running in its own container. Managing all of these with individual docker run commands quickly becomes unwieldy. That is where Docker Compose comes in. It lets you define your entire multi-container application in a single YAML file and manage it with one command. In this guide, we will walk you through everything you need to know — from understanding the basics to converting your existing docker run commands into a clean docker-compose.yml file. You can also use BeautiCode's Docker Run to Compose converter to automate this process instantly.
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. Instead of executing multiple docker run commands with long lists of flags, you write a declarative YAML configuration file that describes all your services, networks, and volumes. Then, with a single docker compose up command, Docker creates and starts everything for you.
When you run a single container — say, a Node.js API server — a plain docker run command works fine. But real-world applications are rarely that simple. Consider a typical web application: you need an Nginx reverse proxy, a Node.js backend, a PostgreSQL database, and a Redis cache. Each service needs specific ports, environment variables, volumes, and network connections. Managing four separate docker run commands — remembering the correct flags, the startup order, and the network configuration — is tedious, error-prone, and impossible to version control effectively.
Docker Compose solves all of these problems. It provides a single source of truth for your application's infrastructure, is easy to read and modify, and can be committed to version control alongside your application code. Whether you are setting up a local development environment, running integration tests in CI, or deploying a staging environment, Docker Compose makes the process reproducible and painless.
Tip: As of Docker Desktop 3.4+, the docker compose command (without the hyphen) is the recommended way to use Compose. The older standalone docker-compose binary (V1) is deprecated. All examples in this guide use the modern docker compose syntax.
Docker Compose File Structure
A docker-compose.yml file is written in YAML, a human-friendly data serialization format. The file is organized into several top-level keys, each serving a distinct purpose. Understanding this structure is the foundation of working with Docker Compose.
```yaml
# docker-compose.yml basic structure
services:
  # service definitions (containers)
  web:
    image: nginx:alpine
    ports:
      - "80:80"
  api:
    build: ./api
    ports:
      - "3000:3000"
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data

# network definitions (optional)
networks:
  frontend:
  backend:

# volume definitions (optional)
volumes:
  db-data:
```

The services section is the heart of the file. Each key under services defines a container — its image, build context, ports, volumes, environment variables, and more. The networks section lets you create isolated networks so that only specific services can communicate with each other. The volumes section defines named volumes for persistent data storage. In modern Compose (V2+), the version key is no longer required — Docker automatically determines the correct schema.
From docker run to Compose
The best way to understand Docker Compose is to see how a complex docker run command translates into a Compose file. Let us take a real-world example: running a PostgreSQL database with custom configuration.
The docker run Command
```shell
# A long docker run command — hard to remember and repeat
docker run -d \
  --name my-postgres \
  -p 5432:5432 \
  -e POSTGRES_USER=admin \
  -e POSTGRES_PASSWORD=secretpass \
  -e POSTGRES_DB=myapp \
  -v pgdata:/var/lib/postgresql/data \
  -v "$(pwd)/init.sql:/docker-entrypoint-initdb.d/init.sql" \
  --restart unless-stopped \
  --network my-network \
  postgres:16-alpine
```

Note that docker run requires an absolute path for bind mounts, hence the "$(pwd)" — in a Compose file, relative paths like ./init.sql are resolved against the file's location automatically.

The Equivalent Compose File
```yaml
# docker-compose.yml — the same configuration, expressed declaratively
services:
  my-postgres:
    image: postgres:16-alpine
    container_name: my-postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secretpass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    restart: unless-stopped
    networks:
      - my-network

networks:
  my-network:

volumes:
  pgdata:
```

Every flag in the docker run command maps directly to a key in the Compose file. The -d flag (detached mode) becomes the default behavior with docker compose up -d. The -e flags become the environment section. The -v flags become volumes. The mapping is intuitive, and the YAML format is far more readable than a long shell command.
| docker run Flag | Compose Key | Description |
|---|---|---|
| --name | container_name | Container name |
| -p | ports | Port mapping |
| -e | environment | Environment variables |
| -v | volumes | Volume mounts |
| --restart | restart | Restart policy |
| --network | networks | Network attachment |
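The flag-to-key mapping above can be sketched as a toy converter. The snippet below is illustrative only — it handles just the flags in the table, while a real converter (such as BeautiCode's tool) must cope with dozens more flags, short/long forms, and quoting edge cases. The function name is an assumption for the example:

```python
# Toy sketch: map a handful of docker run flags onto Compose service keys.
# Real converters must handle many more flags and edge cases.
import shlex

def run_to_service(command: str) -> dict:
    """Convert a simple `docker run` command into a Compose service dict."""
    tokens = shlex.split(command)
    assert tokens[:2] == ["docker", "run"], "expected a `docker run` command"
    service: dict = {}
    i = 2
    while i < len(tokens):
        tok = tokens[i]
        if tok == "-d":                       # detached mode: the default with `compose up -d`
            i += 1
        elif tok == "--name":
            service["container_name"] = tokens[i + 1]; i += 2
        elif tok == "-p":
            service.setdefault("ports", []).append(tokens[i + 1]); i += 2
        elif tok == "-e":
            service.setdefault("environment", []).append(tokens[i + 1]); i += 2
        elif tok == "-v":
            service.setdefault("volumes", []).append(tokens[i + 1]); i += 2
        elif tok == "--restart":
            service["restart"] = tokens[i + 1]; i += 2
        elif tok == "--network":
            service.setdefault("networks", []).append(tokens[i + 1]); i += 2
        else:                                 # first non-flag token is the image
            service["image"] = tok; i += 1
    return service

svc = run_to_service("docker run -d --name web -p 80:80 --restart unless-stopped nginx:alpine")
# svc == {'container_name': 'web', 'ports': ['80:80'],
#         'restart': 'unless-stopped', 'image': 'nginx:alpine'}
```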
Tip: You do not have to convert docker run commands manually. Use BeautiCode's Docker Run to Compose tool to paste any docker run command and get a valid docker-compose.yml instantly.
Essential Compose Configuration
Docker Compose offers dozens of configuration options, but most services only need a handful. Here are the most commonly used keys with practical examples.
image
Specifies the Docker image to use for the service. You can use official images from Docker Hub, private registry images, or images with specific tags.
```yaml
# Ways to specify an image
services:
  web:
    image: nginx:1.25-alpine     # official image + tag
  api:
    image: myregistry.io/api:v2  # private registry
  db:
    image: postgres:16           # major version only
```

build
Instead of pulling a pre-built image, you can build an image from a Dockerfile. The build key accepts a simple path or a detailed configuration with context and Dockerfile location.
```yaml
# Build configuration examples
services:
  api:
    build: ./api                   # simple path (Dockerfile lives in that directory)
  frontend:
    build:
      context: ./frontend          # build context path
      dockerfile: Dockerfile.prod  # custom Dockerfile name
      args:
        NODE_ENV: production       # build argument
```

ports
Maps container ports to host ports. The format is HOST:CONTAINER. Always quote port mappings in YAML to avoid parsing issues with the colon.
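The quoting advice is not superstition: the legacy YAML 1.1 schema, still used by some parsers, reads colon-separated digit groups as base-60 (sexagesimal) integers, so an unquoted 22:22 can load as the number 1342 rather than a port mapping. A quick Python illustration of that arithmetic (plain stdlib, no YAML parser involved):

```python
# YAML 1.1 sexagesimal rule: a bare scalar like "22:22" may be read as
# the integer 22 * 60 + 22 = 1342 instead of the string "22:22".
def yaml11_sexagesimal(scalar: str) -> int:
    """Interpret a colon-separated digit scalar the way YAML 1.1 would."""
    value = 0
    for part in scalar.split(":"):
        value = value * 60 + int(part)
    return value

print(yaml11_sexagesimal("22:22"))  # 1342, not the port mapping you meant
```

Quoting the mapping as "22:22" forces the parser to treat it as a string, which is why the examples below quote every port entry.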
```yaml
# Port mapping examples
services:
  web:
    ports:
      - "80:80"      # host 80 → container 80
      - "443:443"    # HTTPS
  api:
    ports:
      - "3000:3000"  # API port for development
      - "9229:9229"  # Node.js debugger port
```

environment
Sets environment variables inside the container. You can use a map syntax or a list syntax. For sensitive values, use env_file to load variables from an external file.
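Conceptually, env_file does little more than read KEY=VALUE lines from a file. A minimal sketch of that behavior — note that real Compose also handles quoting, variable interpolation, and export prefixes, which this toy version ignores:

```python
# Toy sketch of what `env_file` loading does: parse KEY=VALUE lines,
# skipping blanks and comments. Real Compose handles more syntax.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # ignore blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# database credentials
POSTGRES_USER=admin
POSTGRES_PASSWORD=secretpass
"""
print(parse_env(sample))
# {'POSTGRES_USER': 'admin', 'POSTGRES_PASSWORD': 'secretpass'}
```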
```yaml
# Environment variable configuration
services:
  api:
    environment:
      NODE_ENV: production  # map syntax
      DATABASE_URL: postgres://admin:pass@db:5432/myapp
    env_file:
      - .env                # loaded from an external file (keeps sensitive values separate)
```

depends_on
Controls the startup order of services. When you specify depends_on, Docker Compose starts the dependency first. With the condition option, you can wait until a service is actually healthy before starting the dependent service.
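Under the hood, condition: service_healthy is an automated retry loop: keep probing the dependency until it answers, then proceed. For images without a healthcheck, applications often implement the same idea themselves. A minimal sketch, where probe stands in for any hypothetical readiness check such as opening a TCP connection to the database:

```python
# A generic wait-until-ready loop: the same idea that
# `condition: service_healthy` automates for you.
import time

def wait_for(probe, retries: int = 5, interval: float = 0.0) -> bool:
    """Call `probe()` up to `retries` times; True as soon as it succeeds."""
    for _ in range(retries):
        if probe():
            return True
        time.sleep(interval)  # back off between attempts
    return False

# Simulated dependency that only becomes ready on the third probe:
attempts = iter([False, False, True])
print(wait_for(lambda: next(attempts)))  # True
```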
```yaml
# Dependency and healthcheck configuration
services:
  api:
    depends_on:
      db:
        condition: service_healthy  # start only once the DB reports healthy
      redis:
        condition: service_started  # proceed as soon as Redis has started
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 5s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
```

restart
Defines the container's restart policy. The most common options are no (default), always, on-failure, and unless-stopped. For production-like environments, use unless-stopped so containers restart automatically after crashes or host reboots but stay stopped when you explicitly stop them.
Networking in Compose
One of the most powerful features of Docker Compose is its built-in networking. By default, Compose creates a single network for your entire application, and all services can communicate with each other using their service names as hostnames. This is called service discovery.
Default Network
When you run docker compose up, Compose automatically creates a bridge network named <project>_default. Every service defined in your Compose file is attached to this network. This means your API service can connect to your database simply by using the service name db as the hostname — no IP addresses, no manual DNS configuration.
```yaml
# Service discovery — reach services by name
services:
  api:
    environment:
      # 'db' is a service name; Compose resolves it via DNS automatically
      DATABASE_URL: postgres://admin:pass@db:5432/myapp
      # 'redis' is reachable by its service name too
      REDIS_URL: redis://redis:6379
  db:
    image: postgres:16
  redis:
    image: redis:7-alpine
```

Custom Networks
For more complex architectures, you can define custom networks to isolate groups of services. For example, you might want your frontend proxy to communicate with the API but not directly with the database. Custom networks give you this fine-grained control.
```yaml
# Isolate services with custom networks
services:
  nginx:
    image: nginx:alpine
    networks:
      - frontend  # attached to the frontend network only
  api:
    build: ./api
    networks:
      - frontend  # talks to nginx
      - backend   # talks to db
  db:
    image: postgres:16
    networks:
      - backend   # backend network only (blocks outside access)

networks:
  frontend:
  backend:
```

In this setup, the nginx service can reach api through the frontend network, and api can reach db through the backend network. But nginx cannot directly access db — providing an important layer of security.
Volumes and Data Persistence
Containers are ephemeral by design — when a container is removed, all data inside it is lost. Volumes solve this problem by providing persistent storage that survives container restarts and removals. Docker Compose supports two main types of volume mounts.
Named Volumes
Named volumes are managed by Docker and stored in a Docker-controlled directory on the host. They are the recommended approach for persistent data like databases because Docker handles the lifecycle, permissions, and cleanup. Named volumes must be declared in the top-level volumes section.
```yaml
# Named volumes — persistent storage managed by Docker
services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data  # named volume

volumes:
  db-data:  # must be declared at the top level
```

Bind Mounts
Bind mounts map a specific directory on your host machine directly into the container. They are ideal for development workflows where you want live code reloading — changes to files on your host are immediately reflected inside the container.
```yaml
# Bind mounts — map host directories directly into the container
services:
  api:
    build: ./api
    volumes:
      - ./api/src:/app/src     # live reload for source code
      - ./api/package.json:/app/package.json
      - /app/node_modules      # anonymous volume shields node_modules
```

Volume Drivers
For advanced use cases — such as sharing volumes across multiple Docker hosts, storing data on NFS, or using cloud storage — you can specify a volume driver. The default driver is local, but plugins like vieux/sshfs or cloud-specific drivers enable remote storage.
```yaml
# Volume driver configuration example
volumes:
  db-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw
      device: ":/path/to/shared/data"
```

Common Compose Patterns
While every project is different, certain Docker Compose patterns appear again and again. Here are the most common multi-service architectures you will encounter.
Web + Database (Nginx + PostgreSQL)
```yaml
# Pattern 1: web server + database
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # read-only mount
    depends_on:
      - api
  api:
    build: ./api
    environment:
      DATABASE_URL: postgres://admin:pass@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 5s
      retries: 5

volumes:
  pgdata:
```

App + Cache + Database (Node + Redis + MongoDB)
```yaml
# Pattern 2: application + cache + database
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGO_URI: mongodb://mongo:27017/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - mongo
      - redis
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes  # enable AOF persistence
  mongo:
    image: mongo:7
    volumes:
      - mongo-data:/data/db
    environment:
      MONGO_INITDB_DATABASE: myapp

volumes:
  redis-data:
  mongo-data:
```

Full Stack Example
```yaml
# Pattern 3: frontend + backend + DB + cache + reverse proxy
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - frontend
      - api
    networks:
      - frontend-net
  frontend:
    build: ./frontend
    networks:
      - frontend-net
  api:
    build: ./backend
    environment:
      DATABASE_URL: postgres://admin:pass@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    networks:
      - frontend-net
      - backend-net
  db:
    image: postgres:16-alpine
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: app
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 5s
      retries: 5
    networks:
      - backend-net
  cache:
    image: redis:7-alpine
    networks:
      - backend-net

networks:
  frontend-net:
  backend-net:

volumes:
  db-data:
```

Essential Compose Commands
Docker Compose provides a rich set of commands for managing your multi-container application. Here are the ones you will use every day.
| Command | Description |
|---|---|
| docker compose up -d | Create and start all services in detached mode |
| docker compose down | Stop and remove all containers, networks |
| docker compose down -v | Stop, remove containers, and delete volumes |
| docker compose logs -f | Follow log output from all services |
| docker compose logs api | Show logs for a specific service |
| docker compose ps | List running containers and their status |
| docker compose exec api sh | Open a shell inside a running container |
| docker compose build | Build or rebuild all services with a build key |
| docker compose pull | Pull the latest images for all services |
| docker compose restart api | Restart a specific service without rebuilding |
A typical development workflow looks like this: run docker compose up -d to start your stack, use docker compose logs -f to monitor output, use docker compose exec to debug inside containers, and run docker compose down when you are done. If you change a Dockerfile, run docker compose up -d --build to rebuild and restart the affected service.
Convert with BeautiCode
Writing a docker-compose.yml from scratch can be intimidating, especially when you are migrating from a collection of docker run commands. BeautiCode's Docker Run to Compose tool eliminates the guesswork. Paste your docker run command, and the tool instantly generates a valid, well-structured Compose service definition.
All processing happens entirely in your browser — your commands and configurations never leave your machine. This is critical when working with production infrastructure commands that may contain sensitive environment variables, registry URLs, or internal hostnames.
- Docker Run to Compose — Convert any docker run command to a Compose file instantly
- YAML Validator — Validate your docker-compose.yml syntax before deploying
- JSON to YAML Converter — Convert JSON configuration files to YAML format
Tip: After generating your Compose file, always validate it by running docker compose config. This command parses the file, resolves variables, and outputs the final configuration — catching syntax errors before they cause runtime failures. You can also use our YAML Validator for instant syntax checking.
Frequently Asked Questions
What is the difference between docker-compose (V1) and docker compose (V2)?
Docker Compose V1 was a standalone Python binary invoked as docker-compose (with a hyphen). Docker Compose V2 is a Go-based plugin integrated directly into the Docker CLI, invoked as docker compose (with a space). V2 is faster, supports the latest Compose specification, and is the only actively maintained version. V1 reached end-of-life in June 2023. If you are still using V1, migrating to V2 is straightforward — the YAML syntax is nearly identical.
Do I still need the version key in docker-compose.yml?
No. Starting with Docker Compose V2, the version key is obsolete and ignored. Docker automatically uses the latest Compose specification. If you include it, Compose will print a warning but still process the file. New projects should omit it entirely.
Can I use Docker Compose in production?
Docker Compose is excellent for development, testing, and staging environments. For single-server production deployments, it works well with restart: unless-stopped policies and health checks. However, for large-scale, multi-host production deployments, orchestrators like Kubernetes or Docker Swarm provide features that Compose does not — such as automatic scaling, rolling updates across nodes, and self-healing across a cluster. Compose is increasingly used as a stepping stone: define your services in Compose during development, then adapt the configuration for Kubernetes when you scale.
How do I pass secrets securely in Docker Compose?
Avoid hardcoding secrets directly in your docker-compose.yml. Instead, use env_file to load environment variables from a .env file (which should be added to .gitignore). For more robust security, use Docker's built-in secrets feature, which mounts secret files into containers at /run/secrets/ without exposing them as environment variables or in docker inspect output.
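From the application's point of view, a mounted secret is just a file to read at startup. A minimal sketch, assuming a secret named db_password and a configurable base directory (a hypothetical parameter added here so the function can be exercised outside a container):

```python
# Docker mounts each secret as a file under /run/secrets/<name>.
# Reading it at startup keeps the value out of environment variables
# and out of `docker inspect` output.
from pathlib import Path

def read_secret(name: str, base: str = "/run/secrets") -> str:
    """Return the secret's value, stripped of any trailing newline."""
    return Path(base, name).read_text().strip()

# Inside a container you would call: read_secret("db_password")
```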
How do I convert a docker run command to a Compose file?
You can manually map each docker run flag to its corresponding Compose key — for example, -p becomes ports, -e becomes environment, and -v becomes volumes. Or, you can save time by using BeautiCode's Docker Run to Compose tool, which parses your command and generates a valid Compose file automatically — no manual translation needed.
Related Articles
How to Generate Secure Passwords in 2026: A Complete Guide
Learn why strong passwords matter and how to generate secure passwords using entropy, length, and complexity. Includes practical tips and free tools.
2026-03-23 · 8 min read

JSON vs YAML: When to Use What — A Developer's Guide
Compare JSON and YAML formats with syntax examples, pros and cons, and use case recommendations for APIs, configs, and CI/CD pipelines.
2026-03-23 · 10 min read