Docker Compose and Raspberry Pi


Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. — Overview of Docker Compose

In my blog post Raspberry Pi and Docker I describe how I installed Docker on a Raspberry Pi. This allows me, for example, to run a small NodeJs application that is always online and available on the local network. Because of the current situation I’m at home anyway. Therefore, I don’t need to open a port in the firewall or deploy the application on AWS.

Docker Compose

My application depends on a database (e.g. redis) and an MQTT broker, and I want to bundle them. This is where Docker Compose comes in. With Docker Compose, a multi-container Docker application can be defined and executed.

The features of Compose that make it effective are:

  • Multiple isolated environments on a single host
  • Preserve volume data when containers are created
  • Only recreate containers that have changed
  • Variables and moving a composition between environments

Install Docker Compose

Docker Compose can be installed on a computer or Raspberry Pi. The Install Docker Compose page has a detailed description. I installed it with pip3 on the Raspberry Pi:

sudo apt-get install python3-pip
pip3 install docker-compose

Docker Compose Example

The article A Docker/docker-compose setup with Redis and Node/Express describes an example of Docker Compose with redis and express. It includes all the NodeJs, Docker and Docker Compose files needed to build a small REST service. This service stores a key-value pair in redis. The code is available on GitHub: HugoDF/express-redis-docker.

The docker-compose.yml file

The docker-compose.yml file from the example looks like this:

version: '2'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  app:
    build: ./
    volumes:
      - ./:/var/www/app
    links:
      - redis
    ports:
      - 3000:3000
    environment:
      - REDIS_URL=redis://cache
      - NODE_ENV=development
      - PORT=3000
    command:
      sh -c 'npm i && node server.js'

The file, which is written in YAML format, is divided into two parts. One part is the redis configuration and the other part is the app configuration. The redis configuration uses a redis Docker image, names the container ‘cache’ and exposes the standard redis port 6379.

The app configuration uses build to build from the Dockerfile in the current directory, and volumes mounts the current directory to /var/www/app.

By default, each service can reach any other service under the name of that service. links are not necessary, but document which service is used (see also Infrastructure as code).

ports exposes the service port. In this case the host system can reach the service on port 3000.

With environment you can define environment variables. For example, during development you could point REDIS_URL at a redis database that contains test data.
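As a sketch of how the app side might consume these variables (the variable names are taken from the compose file above, the fall-back values are assumptions; this is not necessarily the repo's exact code):

```javascript
// Hypothetical sketch: reading the variables set in the compose file's
// environment section, with development fall-backs for running locally.
const redisUrl = process.env.REDIS_URL || 'redis://cache';
const port = parseInt(process.env.PORT || '3000', 10);
const isDev = (process.env.NODE_ENV || 'development') === 'development';

console.log(`redis: ${redisUrl}, port: ${port}, dev: ${isDev}`);
```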

The command entry contains the command that is executed inside the container when it starts. In the example, npm i installs all NodeJs libraries and then node server.js starts the application.

The app uses the following Dockerfile. It is based on a long-term support (LTS) node image and sets the WORKDIR to /var/www/app, the path to which docker-compose.yml mounts the current directory.

FROM node:lts
# Or whatever Node version/image you want
WORKDIR '/var/www/app'

expose vs ports

In the docker-compose.yml example both expose and ports are listed. What is the difference?

With expose, ports are exposed “without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.”

With ports, the container publishes the port to the host machine, so it can be reached from there. Requests to the configured host port are forwarded to the corresponding port of the application in the Docker container.
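The difference can be seen directly in the two entries from the compose file above:

```yaml
expose:
  - 6379        # internal only: other services reach it as cache:6379
ports:
  - 3000:3000   # host:container mapping, published and reachable from the host
```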

Docker Compose creates its own network, which can be seen with docker network ls. Communication happens only inside this network, so you don’t have to worry about open ports. The service names are used as hostnames; with these hostnames the services can reach each other, but only within the Docker network.

The Application

The application has two REST interfaces. With /store/:key?query the query is stored as the value of the key in the redis database. /:key reads the key from the redis database and returns the value.

const redisClient = require('./redis-client');

app.get('/store/:key', async (req, res) => {
    const { key } = req.params;
    const value = req.query;
    await redisClient.setAsync(key, JSON.stringify(value));
    return res.send('Success');
});

app.get('/:key', async (req, res) => {
    const { key } = req.params;
    const rawData = await redisClient.getAsync(key);
    return res.json(JSON.parse(rawData));
});
You can find the complete code of the server.js at the Github project.


To test it, the containers are created and started as follows:

docker-compose up

… then store a key-value pair using cURL to call the service running on port 3000:

curl http://pi2:3000/store/my-key\?some\=value\&some-other\=other-value

… and read the value:

curl http://pi2:3000/my-key

As you can see from the URL, I run it on a Raspberry Pi. If you have read my blog post Load Average on 7-Segment, you know that I show the load average and the free disk space on a 7-segment display. This example took 0.02 gigabytes of disk space. The load average went up briefly but returned to 0.00, although the two Docker containers are running. Only when I make several calls does the value go up.


A docker-compose.yml file documents the configuration of the necessary containers and is Infrastructure as code.

Furthermore, all necessary components are configured and started, and they are defined for development as well as production. In case there are differences (e.g. an SSL certificate for production), Docker Compose offers a simple way to configure the differences within a second file.
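For example, a second file could override only the development settings. The file name and values below are hypothetical, following the convention of docker-compose -f docker-compose.yml -f docker-compose.prod.yml up:

```yaml
# Hypothetical docker-compose.prod.yml: contains only the differences
# to the base file; everything else is inherited.
services:
  app:
    environment:
      - NODE_ENV=production
    command:
      sh -c 'npm ci --only=production && node server.js'
```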

More tips are available in the article 10 Tips for Docker Compose Hosting in Production.

Any comments? Write me on Twitter @choas (DM is open).