Setting Up a Multi-Container Development Environment with Docker
July 27, 2020

In this post, we will set up a development environment for an application that runs across multiple Docker containers.
The application is a Fibonacci number calculator, which consists of the following services:

  • A React frontend (client).
  • A Node.js backend (server).
  • A Redis worker.
  • An Nginx router.
  • A Postgres database.

How the application works:

A user enters an index in the browser; the backend saves the index in the database and in Redis, then triggers a Redis insert event. This event is handled by the Redis worker, which calculates the Fibonacci value for the index and finally stores the index and value in Redis as a key-value pair.

A user will then reload the application and see the following:

  • The indexes posted so far (retrieved from the database).
  • The indexes posted and their corresponding Fibonacci values (retrieved from Redis).
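
To make the worker's role concrete, here is a rough sketch of its logic. The function and key names are illustrative assumptions, not taken verbatim from the sample project:

```javascript
// Recursive Fibonacci, commonly used in this kind of sample app to
// keep the worker intentionally CPU-bound.
function fib(index) {
  if (index < 2) return 1;
  return fib(index - 1) + fib(index - 2);
}

// Hypothetical handler for the Redis insert event: compute the value
// for the submitted index and store the pair back in Redis.
function handleInsert(redisClient, index) {
  redisClient.hset('values', index, fib(parseInt(index, 10)));
}
```

The deliberately slow recursive implementation is what makes offloading the calculation to a separate worker container worthwhile.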


This post assumes that you already have some basic knowledge of Docker and Git; however, I will explain all the Docker commands used in this post.

Project Setup

  1. Install Docker using this installation guide.
  2. Install docker-compose using this installation guide.
  3. Confirm that Docker and docker-compose are installed on your machine by checking their versions in your terminal.

# confirm docker is installed by checking the version
docker --version
Docker version 19.03.8, build afacb8b

# confirm docker compose is installed by checking the version
docker-compose --version
docker-compose version 1.25.5, build 8a1c60f6
  4. git clone the sample project from here and cd into the fib-calculator directory.

Creating a Dockerfile for the client container

  1. cd into the client directory and create a Dockerfile named Dockerfile.dev. We will name our Dockerfiles with a *.dev extension so that we can differentiate them from the production Dockerfiles in the future.
  2. Add the following lines of code in the file:

   FROM node:alpine
   WORKDIR '/app'
   COPY ./package.json ./
   RUN yarn install
   COPY . .
   CMD ["yarn", "start"]

The FROM node:alpine pulls the Node.js base image from Docker Hub (the Docker registry). We are using the Alpine variant of Node.js, which is lightweight and well suited for development purposes.

The WORKDIR '/app' sets the working directory inside our container to /app, which is where all our project files and folders will live.

The COPY ./package.json ./ copies the package.json file from our machine to the current working directory inside the container.

The RUN yarn install installs all the dependencies specified in the package.json file.

The COPY . . copies all the files and folders inside the client directory into our current working directory.

Finally, CMD ["yarn", "start"] defines the startup command for the container. This command is also defined inside the package.json file.
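
For context, the start command maps to a script in the client's package.json. A Create React App client would typically have something like the following (an illustrative fragment, not the project's exact file):

```json
{
  "scripts": {
    "start": "react-scripts start"
  }
}
```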

Creating a Dockerfile for the server container

This Dockerfile will be very similar to the client container's Dockerfile; the only differences are the Node.js base image version and the container startup command.

  1. cd into the server directory and create a Dockerfile named Dockerfile.dev.
  2. Add the following lines of code in the file:

FROM node:13
WORKDIR '/app'
COPY ./package.json ./
RUN yarn install
COPY . .
CMD ["yarn", "dev"]

We are using this particular version of Node.js (version 13) because an NPM package inside package.json depends on this version.

Creating a Dockerfile for the worker container

This Dockerfile will be very similar to the server container's, except for the Node.js base image, which in this case is the Alpine version.

  1. cd into the worker directory and create a Dockerfile named Dockerfile.dev.
  2. Add the following lines of code in the file:

FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN yarn install
COPY . .
CMD ["yarn", "dev"]

Creating a Dockerfile for the nginx container

  1. Make a new directory named nginx on the same level as the client directory.
  2. cd into the nginx directory and create a Dockerfile named Dockerfile.dev.
  3. Add the following lines of code in the file:

FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf

The FROM nginx pulls the nginx base image from Docker Hub.

The COPY ./default.conf /etc/nginx/conf.d/default.conf copies our nginx config file from our machine over the default nginx config file inside the container.
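
For reference, a default.conf in this kind of setup typically defines upstreams for the client and API containers and routes /api traffic to the server. The sketch below is an assumption about the sample project's config; the service names match the compose file we create next, and the ports (3000 for the client dev server, 5000 for the API) are illustrative:

```nginx
# Assumed upstream names and ports; adjust to the sample project's actual values.
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;

    # Everything else goes to the React dev server.
    location / {
        proxy_pass http://client;
    }

    # Strip the /api prefix and forward to the Node.js backend.
    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
```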

Creating the docker compose file

Here we configure our containers using docker compose. The Compose tool is used to define and run multi-container Docker applications. The application’s services are configured in a YAML file. Then, with a single command, you create and start all the services from your configuration.

  1. cd into the parent directory of the repository and create a file named docker-compose.yml.
  2. Add the following lines of code in the file:

version: "3"
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - "3000:80"
  postgres:
    image: "postgres:10.5"
  redis:
    image: "redis:latest"
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /app/node_modules
      - ./worker:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379

The version: "3" specifies the Docker Compose file format to use; we use version 3, which is the newest version at the time of writing.

The services section contains the configurations for each container. Below is a description of each service's configuration options:

Nginx service

The restart: always configures the nginx container to always restart when it fails.

The build defines the configurations that are applied at build time, which include:

  • dockerfile specifies the container docker file name.
  • context specifies the directory containing the docker file.

The ports field defines port mapping (HOST:CONTAINER). Here we are mapping port 3000 in our machine to port 80 in the container.

Postgres service

For this service, we only specify the Postgres image to use with image: "postgres:10.5", which sets the container base image to Postgres version 10.5.

Redis service

Similar to the postgres service, we use image: "redis:latest" to set the base image for this container to the latest version of redis.

API service

Similar to the nginx service build configuration, the dockerfile and context options specify the Dockerfile to use and its relative path, respectively.

The volumes option is used to map a host path to a container path.

  • The /app/node_modules entry creates an anonymous volume so that the node_modules directory inside the container is not overwritten by the host mapping.
  • The ./server:/app maps the files and folders in the host's server directory to the app directory inside the container. This lets the app restart whenever a file is changed on the host.

The environment option is used to specify environment variables; in our case, we set the Redis and Postgres variables used when running the server.
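
On the server side, these variables are typically read from process.env when creating the Redis and Postgres clients. A minimal sketch, with key names matching the compose file and the fallback defaults being illustrative assumptions:

```javascript
// Build connection settings from the environment variables the compose
// file injects; fall back to local defaults for runs outside Docker.
const pgConfig = {
  user: process.env.PGUSER || 'postgres',
  host: process.env.PGHOST || 'localhost',
  database: process.env.PGDATABASE || 'postgres',
  password: process.env.PGPASSWORD || 'postgres_password',
  port: parseInt(process.env.PGPORT || '5432', 10),
};

const redisConfig = {
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
};
```

Because the hostnames are the compose service names (postgres, redis), Docker's internal DNS resolves them automatically between containers.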

For the client and worker services, refer to the api service.

Starting the containers

Run docker-compose up --build to build and start the containers. The first run takes a while because the base images have to be downloaded from Docker Hub.

The --build flag forces the images to be rebuilt; use it whenever you make a change in the Dockerfiles. Otherwise, just run docker-compose up to start the containers.

Open your browser at localhost:3000 and check out the application.

Stopping the containers

Run docker-compose stop to stop the containers.

Finally, the full code for this multi-container application can be found here. Also, check out this link for the full list of Docker Compose version 3 options.

I hope this helps you simplify and accelerate your development workflow with Docker going forward.

About the Author

Matt’s role is to set the vision of the company. He sees challenges and envisions solutions; he sees data and envisions use. Prior to founding ONA, Matt led a social enterprise initiative at Columbia University’s Earth Institute, where he served as ICT Director for the Millennium Villages Project. He had previously been Technology Director for ChildCount+; and a member of Columbia University’s Department of Mechanical Engineering research group in the Fu Foundation School of Engineering and Applied Science . Matt was born in Cameroon, grew up in Senegal, and has worked in Africa for 15+ years. He is a PopTech! Social Innovation Fellow and was named to the 2010 Time 100 List of Most Influential People of the World. Matt has an MBA from the Thunderbird School of Global Management, and has taught ICT4D at the School of International and Public Affairs (SIPA) at Columbia University where he was adjunct faculty.