Docker Compose for container management
Master the lifecycle of your containers, networks and volumes before taking the leap to production.
If you’ve followed the series this far, you’ve already covered considerable ground. We started with Chapter 1, where we discovered what an image is, created our first containers with docker run
, and learned to clean them up. In Chapter 2 we dove deep into basic administration: commands like ps, logs, stop and inspect stopped sounding like gibberish and we started feeling comfortable navigating the Docker universe. The journey continued with Chapter 3, where we dockerized a Node/Express API: we wrote our first Dockerfile, built the image and deployed the application without fear of “it works on my machine”. And in Chapter 4 we took a qualitative leap by introducing Docker Compose: we added PostgreSQL and Redis, spun everything up with a single docker compose up
and, along the way, promised that we would later open Docker’s famous “fine print”.
That moment has arrived. Before venturing into slightly more advanced topics—image optimization, cloud deployments or orchestration—we need to understand how Docker actually manages the fundamental resources that support your infrastructure:
Resource | What it is | Why it matters |
---|---|---|
Images | An immutable snapshot of your application: includes your code, its dependencies and a minimalist mini-operating system. Built in layers; if multiple images share layers, Docker reuses them. | They act as the “exact recipe” that guarantees your app behaves the same in any environment. The smaller they are, the less time your CI/CD will take to send them to the registry and your servers to start them. |
Containers | A process launched from an image that runs in lightweight isolation (namespaces + cgroups). To it, it seems like it has its own filesystem, but it’s actually a temporary read-write (RW) layer on top of the image. | You can create, stop and destroy them in seconds — ideal for quick tests. However, everything you write inside gets deleted when you destroy it, unless you use volumes. |
Volumes | Persistent folders that Docker mounts inside containers. There are two main types: bind mounts (you choose the exact host path) and named volumes (Docker manages them in /var/lib/docker/volumes). | This is where data that shouldn’t be lost lives: databases, user uploads, backups… Separating data and container lets you deploy new versions without touching the information. |
Networks | Virtual connections (bridges) that Docker creates so containers can discover each other by name and communicate without clashing with other host services. Each network offers internal DNS and its own subnet. | They prevent port conflicts and allow you to isolate environments (dev, test, prod) on the same machine. They’re also your first line of security: if a container isn’t on the network, it simply “doesn’t exist” to the others. |
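If you want to see these four resources on your own machine before we build anything, the Docker CLI has a listing subcommand for each of them. A quick reference (the output will depend on what you already have installed):
# Images available locally
docker image ls
# Containers, including stopped ones
docker container ls -a
# Named volumes managed by Docker
docker volume ls
# Networks (bridge, host, none, plus any you have created)
docker network ls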
Throughout the chapter we’ll follow a small practical example that will function as a testing laboratory. With it we’ll see, step by step, what happens when we stop, delete or recreate containers and resources: Nginx logs will stay safe in a bind mount, the database will live in a persistent volume and the cache will demonstrate what it means to store data only in memory.
Each block will present you with a concept and, immediately after, ask you for one or two commands so you can see the effect in real time. No theory suspended in the air: you’ll learn by pressing keys.
Specifically, you'll discover how to stop, delete and recreate containers without losing what matters, decide where each piece of data should live (RAM, named volume or bind mount), inspect and isolate the networks that connect your services, and wipe the whole environment clean when you're done.
Ready to get your hands dirty? Let’s create and explain the file structure, then start experimenting.
Before diving deeper, let’s take a look at the files that make up our environment. This is the initial snapshot:
.
├── .env                 # sample environment variables
├── docker-compose.yml   # orchestrates all services
├── api/                 # Node/Express API code
│   ├── Dockerfile
│   ├── package.json
│   └── server.js
├── db/                  # scripts to initialize PostgreSQL
│   └── init.sql
├── nginx/               # reverse proxy
│   └── nginx.conf
└── logs/                # created at runtime (bind mount)
    ├── api/
    └── nginx/
For this example we'll use seven files, which are the following:
.env — sample environment variables
# .env
POSTGRES_USER=labuser
POSTGRES_PASSWORD=labsupersecret
POSTGRES_DB=labdb
# other variables used by the API
PGHOST=db
PGPORT=5432
PGUSER=$POSTGRES_USER
PGPASSWORD=$POSTGRES_PASSWORD
PGDATABASE=$POSTGRES_DB
REDIS_HOST=cache
REDIS_PORT=6379
NODE_ENV=development
⚠ Warning
This file is intended only for development: it includes passwords in plain text and values you shouldn’t expose in production.
In a real environment you would use secret providers (Docker Secrets, HashiCorp Vault, orchestrator environment variables…) and generate unique credentials per environment.
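As a small illustration of that last point (not something this lab needs), generating a unique credential per environment can be as simple as asking for random bytes; openssl is used here only as an example, any equivalent tool on your platform works:
# Example only: generate a random 24-byte password for one environment
openssl rand -base64 24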
docker-compose.yml — the centerpiece
version: "3.9"

services:
  api:
    build: ./api
    container_name: lab_api
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./logs/api:/usr/src/app/logs
    networks:
      - backend
      - frontend
    ports:
      - "3000:3000"
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000/health',r=>process.exit(r.statusCode===200?0:1)).on('error',()=>process.exit(1))"]
      interval: 30s
      retries: 3

  db:
    image: postgres:15
    container_name: lab_db
    env_file: .env
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 10s
      retries: 5
    networks:
      - backend

  cache:
    image: redis:7
    container_name: lab_cache
    tmpfs:
      - /data
    networks:
      - backend

  nginx:
    image: nginx:1.27-alpine
    container_name: lab_proxy
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./logs/nginx:/var/log/nginx
    depends_on:
      - api
    networks:
      - frontend

volumes:
  db-data:

networks:
  backend:
    driver: bridge
    internal: true
  frontend:
    driver: bridge
There are several key points to note in this compose file:
- volumes: declares db-data, a named volume that persists the database even if you delete the db container.
- ./logs/api saves the API logs directly on your machine.
- The backend network is internal: true; neither Nginx nor your host can accidentally access PostgreSQL or Redis.
- The healthcheck prevents the API from accepting traffic before PostgreSQL is operational.
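If you want to double-check the file before launching anything, Compose can validate it and print the fully resolved configuration, with the values from .env already substituted:
# Validate docker-compose.yml and show the resolved configuration
docker compose config
# Or just list the services it defines
docker compose config --services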
api/ folder — the API that communicates with the internal services
Dockerfile
FROM node:lts-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Two details stand out here: it uses the node:lts-slim base to keep the image lightweight, and it copies package*.json first to take advantage of layer caching.
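Compose will build this image automatically the first time you bring the stack up, but if you later change the API code you can rebuild just this one service. A couple of commands you might use:
# Rebuild only the API image
docker compose build api
# Rebuild and restart the service in one step
docker compose up -d --build api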
package.json
{
  "name": "docker-resources-api",
  "version": "1.0.0",
  "description": "Mini API for Docker resource management laboratory",
  "main": "server.js",
  "type": "module",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^5.1.0",
    "pg": "^8.16.3",
    "ioredis": "^5.6.1"
  }
}
server.js
The api/server.js
file is the “brain” of our practical example. It brings together the HTTP routes we’ll use to demonstrate how data lives in Docker and, incidentally, illustrates the connection to two internal services: Redis (memory) and PostgreSQL (disk).
import express from 'express';
import pkg from 'pg';
import Redis from 'ioredis';

const { Pool } = pkg;
const app = express();
const pool = new Pool();                    // connects to db:5432 via internal DNS (PG* variables from .env)
const redis = new Redis({ host: 'cache' });

app.get('/health', (_, res) => res.send('OK'));

app.get('/visits', async (_, res) => {
  const visits = await redis.incr('counter');
  res.json({ visits });
});

app.get('/pgvisits', async (_, res) => {
  const { rows } = await pool.query(
    'UPDATE pg_visits SET counter = counter + 1 WHERE id = 1 RETURNING counter;'
  );
  res.json({ pgVisits: rows[0].counter });
});

// landing page with links to the demo routes
app.get('/', (_, res) =>
  res.send(`
    <h1>Docker Resources API</h1>
    <p>Available routes:</p>
    <ul>
      <li><a href="/visits">/visits</a> – Redis counter</li>
      <li><a href="/pgvisits">/pgvisits</a> – PostgreSQL counter</li>
      <li><a href="/health">/health</a> – health-check</li>
    </ul>
  `)
);

app.listen(3000, () =>
  console.log('API listening on http://localhost:3000')
);
The client doesn't need to know the internal ports: Docker resolves service names (db, cache) and the frontend network exposes only port 80 of the Nginx proxy.
Route | Purpose | Where data is stored | Why it’s useful in the chapter |
---|---|---|---|
GET /health | Returns OK if the API responds. Used by Docker Compose's health check. | — | You'll see how Docker decides if the container is "healthy". |
GET / | Minimal HTML page with links to the other endpoints. | — | Quick entry point from the browser. |
GET /visits | Increments the counter key in Redis and returns the value. | Redis inside lab_cache (tmpfs mount) → doesn't persist. | Demonstrates what happens when restarting the cache service: the counter goes back to 1. |
GET /pgvisits | Increments the counter column in the pg_visits table. | PostgreSQL on the db-data volume. | Shows that data survives even if you delete and recreate the lab_db container. |
db/ folder — database initialization script
CREATE TABLE IF NOT EXISTS pg_visits (
    id SERIAL PRIMARY KEY,
    counter INT NOT NULL
);

INSERT INTO pg_visits (counter) VALUES (0);
This script runs only the first time the stack starts, when the db-data volume is empty.
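If you want to confirm the script ran, you can open a psql session inside the db container once the stack is running (we'll start it in a moment); the credentials are the ones from our sample .env:
# Show the table created by init.sql
docker compose exec db psql -U labuser -d labdb -c "SELECT * FROM pg_visits;"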
nginx/ folder — reverse proxy
events {}   # required section, even if empty, because this file replaces the whole nginx.conf

http {
    upstream api_upstream { server api:3000; }

    server {
        listen 80;

        location / {
            proxy_pass http://api_upstream;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
With this infrastructure clear, you’re ready to explore images, containers, volumes and networks in the following sections.
We’ll start with the most basic—and at the same time most frequent—concepts: what is an image, what is a container and how do they behave when we use them.
docker compose up -d
# What containers are alive?
docker compose ps
# And what images are available?
docker image ls
Remember
An image is read-only; a container adds an RW layer on top that disappears when you destroy it with docker rm.
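Before stopping anything, it's also worth checking the result of the healthcheck we declared for the API. docker inspect exposes it under State.Health (lab_api is the container name set in our compose file):
# Current health status of the API container: starting, healthy or unhealthy
docker inspect --format '{{.State.Health.Status}}' lab_api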
# Stop only the API (DB and Redis stay alive)
docker compose stop api
# Start it again
docker compose start api
The API comes back without problems because it wasn't storing anything inside the container: the logs live in the bind mount ./logs/api, the persistent data in the db-data volume, and the volatile counter in Redis's RAM.
To illustrate the different types of data storage in Docker, let’s see several mini‑experiments that show how they change (or don’t) depending on the type used.
Why we’re doing this
Check that data stored only in RAM is lost when restarting the service.
Steps
# 1 – check the counter
curl -s http://localhost/visits
# 2 – restart only Redis
docker compose restart cache
# 3 – check again
curl -s http://localhost/visits
Expected result
The counter goes back to 1 → Redis was using tmpfs
; when restarted everything was erased.
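You can also peek inside Redis directly; redis-cli ships with the redis:7 image. Run it between steps 2 and 3 and the key is simply gone:
# (nil) right after the restart; 1, 2, 3... once /visits is called again
docker exec lab_cache redis-cli get counter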
Why we’re doing this
See that a named volume persists even if the container is destroyed or restarted.
Steps
# 1 - increment the counter several times
curl -s http://localhost/pgvisits
curl -s http://localhost/pgvisits
# 2 – restart Postgres
docker compose restart db
# 3 – check again
curl -s http://localhost/pgvisits
Expected result
The number keeps increasing → the table lives in the db-data
volume. Data persists between sessions.
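If you want to go one step further than a restart, you can destroy the db container completely and recreate it; because the table lives in the named volume, the counter carries on. Give the healthcheck a few seconds before the last call, and if the first request errors because the API's old connection was cut, just retry it:
# Remove the Postgres container entirely (the db-data volume stays)
docker compose rm -sf db
# Recreate it and wait for it to become healthy
docker compose up -d db
# The counter continues where it left off
curl -s http://localhost/pgvisits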
Why we’re doing this
See that Docker recreates the mounted folder if it doesn’t exist when starting the container.
Steps
# 1 – remove the proxy container
docker compose rm -sf nginx
# 2 – delete the logs folder on the host
rm -rf logs/nginx
# 3 – launch Nginx again
docker compose up -d nginx
Expected result
The logs/nginx
folder and the access.log
/ error.log
files are created again on your host. And since we only recreated the Nginx container, the visit counters in Redis and PostgreSQL remain intact.
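A quick way to confirm it from the host (the log file names are the defaults of the official nginx image):
# The bind-mounted folder reappears with fresh log files
ls -l logs/nginx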
Before a byte of information travels from one container to another, there must be a network that connects them. In Docker, networks act as small virtual LANs: they determine which services “see” each other, which ports are exposed to the outside and which traffic remains completely isolated. If two containers don’t share a network, it’s as if they lived on different machines.
Objective
Find out how many networks Docker Compose has created, which containers hang from each one and whether the backend network is really isolated from the host.
# Show all system networks
docker network ls
# Inspecting the network we can see its configuration
docker network inspect dockerresources_backend
# See if it's internal (not accessible from the host)
docker network inspect dockerresources_backend | grep "Internal"
backend is private (internal: true); the host cannot reach it directly.
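That isolation is from the host's point of view; inside the network, name resolution keeps working. You can verify it from the API container, which sits on both networks (getent should be available in the node:lts-slim base image):
# Resolve the internal service names through Docker's embedded DNS
docker exec lab_api getent hosts db cache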
Objective
See what happens when an application loses connection with another service because they are disconnected or on different networks.
# 1 – Find out the exact name of the backend network
docker network ls | grep _backend
# 2 – Disconnect the Redis container
docker network disconnect dockerresources_backend lab_cache
# 3 – Call the volatile counter
curl -s --max-time 2 http://localhost/visits # ⟶ timeout
# 4 – Re‑connect Redis
docker network connect dockerresources_backend lab_cache
# 5 – Try again
curl -s http://localhost/visits # ⟶ starts counting again
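To see at any moment which networks a container is actually attached to, docker inspect again has the answer:
# Dump the networks lab_cache is connected to
docker inspect --format '{{json .NetworkSettings.Networks}}' lab_cache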
In this block we’re going to delete absolutely all the resources we’ve created in the laboratory —containers, images, volumes and networks— so you can see what survives and what doesn’t.
Requirement: if you want to preserve the current state of the environment up to this point, make sure to back up the volumes and anything else you want to keep first.
# Stop and remove all project containers
docker compose down
# Delete the built API image
docker rmi docker-resources-api 2>/dev/null || true
# (Optional) Delete base images if you don't use them in other projects
docker rmi node:lts-slim postgres:15 redis:7 nginx:1.27-alpine 2>/dev/null || true
# Remove the volume with Postgres data
docker volume rm dockerresources_db-data
# Remove networks created by Compose (if they're still there)
docker network rm dockerresources_backend dockerresources_frontend 2>/dev/null || true
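To confirm the cleanup actually worked, a quick inventory of what Docker still holds is enough:
# Nothing from the lab should appear in any of these lists
docker ps -a
docker volume ls
docker network ls
# Overall disk usage by images, containers and volumes
docker system df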
At this point your project is “empty”: only the code and Docker service configuration files remain. If we spin up the service again, it will create everything anew along with a fresh API image.
To finish, let’s take stock. In this chapter we’ve opened the hood of Docker’s “engine” and played with all its main gears: we created lightweight images with a Dockerfile
, saw how containers add a thin read-write layer that can be ruthlessly deleted, discovered that important data must live in volumes and verified that networks are internal highways you can disconnect and reconnect on the fly. We also saw how to quickly identify what's running on your host and how to clean up orphaned resources without tearing down the whole environment, all with real commands against a working setup. In upcoming chapters we'll keep digging into more advanced topics. Happy Coding!