Having explored the fundamentals of Docker and its power to deploy encapsulated applications, we now delve into the domain of Docker Compose, an essential tool for managing applications composed of multiple containers. In this article, we will show how Docker Compose simplifies the lives of developers by allowing the definition, execution, and management of multi-container services with ease.
Docker Compose is installed along with Docker Desktop on Windows and Mac, while on Linux it must be installed manually, for example with a simple apt install docker-compose on Debian-derived distributions like Ubuntu.
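On a Debian-derived system, this typically amounts to two commands (sudo may or may not be needed depending on your setup):

sudo apt update
sudo apt install docker-compose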
Once installed, the first step is to create a docker-compose.yml file at the root of your project, defining the services, networks, and volumes that your application needs. Sometimes you may see these files with the extension .yaml instead of .yml; both are valid.
In this article, we will delve deeper into the use of Docker Compose, an indispensable tool for defining and efficiently running multi-container Docker applications. We will start from the example of the previous chapter, where we deployed an Nginx server using the docker run command.
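As a refresher, that deployment looked something like the following (the exact flags may have differed, but the port mapping and restart policy are the ones we will reproduce with Compose below):

docker run -d -p 8080:80 --restart always --name mynginx nginx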
Docker Compose allows us to organize the configuration of our services in a YAML file, simplifying the process of deploying and managing containers that are part of the same application. Let’s see how this file is structured for our Nginx case:
version: '3.3'
services:
  mynginx:
    image: nginx
    ports:
      - "8080:80"
    restart: always
In this file, the version key indicates the version of the Docker Compose syntax we are using, while the services key defines the services that make up our application. In this case, we have defined a service named mynginx that uses the official Nginx image, exposing the container’s port 80 on port 8080 of our host. Additionally, we have specified that the service should restart automatically in case of failure.
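Incidentally, before launching anything, we can ask Compose to validate the file: the docker-compose config command parses the YAML and prints the resolved configuration, or an error message if something is wrong:

docker-compose config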
The practical end result is the same, but as we can already infer, application management is greatly simplified. It is enough to execute the command docker-compose up in the same folder where the docker-compose.yml file is located to launch our container. We can then access the Nginx server on port 8080, just as we did in the previous example, through the address http://localhost:8080.
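In practice, the whole flow fits in two commands (curl is just one convenient way to verify; a browser works equally well):

docker-compose up
# from another terminal, or from a browser:
curl http://localhost:8080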
For cases like this, where we manage a single “application”, Docker Compose may seem like overkill. However, when dealing with more complex applications, with multiple services and dependencies, it becomes an indispensable tool. In future articles, we will explore how it facilitates the management of more complex applications, allowing the definition of networks, volumes, and dependencies between services.
In our last chapter of the series, we created an API with ExpressJS virtualized using Docker. Let’s convert it to our new methodology with docker-compose. Previously, we stored data in memory, so there was no data persistence after a restart. We will take advantage of the flexibility of docker-compose to later add a database to which we will connect to preserve our application’s data. Below is the docker-compose.yml file that would launch a container similar to our API example:
version: '3.8'
services:
  app:
    image: node:14-alpine
    command: sh -c "npm install && node app.js"
    volumes:
      - ./:/app
    working_dir: /app
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
In this docker-compose.yml, we have defined a service named app that uses the official Node.js image in its version 14 with Alpine Linux. We have specified a command that installs our application’s dependencies and starts the server, as well as a volume that mounts the current directory into the container at /app, which is also set as the working directory. Additionally, we have exposed port 3000 to the host and defined an environment variable NODE_ENV with the value development.
For this to work, it is necessary to initialize the NPM project with npm init -y, add Express as a dependency (for example with npm install express, so that the npm install run inside the container can resolve it from package.json), and create a file app.js with the following content, extracted from our chapter on creating a sample API:
const express = require('express');
const app = express();
const PORT = 3000;

// Parse incoming JSON request bodies
app.use(express.json());

app.get('/', (req, res) => {
  res.send('Hello World with Express and Docker!');
});

app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});
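For reference, a minimal package.json after those two commands would look roughly like this; the exact version numbers are illustrative, the important part is that express appears under dependencies so the container’s npm install can resolve it:

{
  "name": "app",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}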
Now, we will use a PostgreSQL database to make our application’s data persistent so that it is not lost. For this, we add the following service to our docker-compose.yml, together with a depends_on parameter on the app service. This makes the application wait until the database container has started before starting; note that depends_on only waits for the container to start, not for PostgreSQL itself to be ready to accept connections:
version: '3.8'
services:
  app:
    image: node:14-alpine
    command: sh -c "npm install && node app.js"
    depends_on:
      - db
    volumes:
      - ./:/app
    working_dir: /app
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
  db:
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app
The database part, db, is quite straightforward, based on the official PostgreSQL image in its version 13. We associate PostgreSQL’s port 5432 inside the container (on the right) with port 5432 on the host (on the left). In this case they coincide, but it doesn’t have to be this way, as we have seen before. In addition, we have defined three environment variables to configure the database: POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB. It’s worth mentioning that these are sample values and should never be used in production.
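Although wiring the application to the database is the subject of the next article, a minimal sketch using the pg package (an assumption on our part; any Node.js PostgreSQL client would do) shows how these values fit together. Note that the hostname is db, the name of the service, since Compose makes services reachable by name on a shared network:

const { Pool } = require('pg');

// 'db' is the Compose service name; the credentials match the
// POSTGRES_* variables defined in docker-compose.yml.
const pool = new Pool({
  host: 'db',
  port: 5432,
  user: 'user',
  password: 'password',
  database: 'app',
});

pool.query('SELECT NOW()')
  .then((res) => console.log('Connected:', res.rows[0]))
  .catch((err) => console.error('Connection failed:', err));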
With this, we have created the definition of our Express API in our docker-compose along with a PostgreSQL database, and we can bring both up with a single command:
docker-compose up -d
The -d flag is optional and means the services will run in the background (detached mode), so you can close the terminal from which you executed the command and they will continue running. At this point, we could connect to our database to verify that it is indeed there, using a database client like DBeaver, an open-source program that allows us to connect to multiple types of databases, visualize their content, and manage them. For now, it should contain a database named app, which is empty.
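If you prefer the terminal to a graphical client, the same check can be done directly inside the container, since psql ships with the postgres image:

docker-compose exec db psql -U user -d app -c '\l'

This lists the available databases, among which app should appear.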
In the next article, we will create the tables for our database to store the data from our API example and explore how to connect our application to it. We will also look at a practical, in-depth case of how to analyze the logs and how to inspect the different configurations that Docker has defined for our containers. I invite you to keep investigating on your own and to stay tuned for the next chapter in the series.