Tech Notes: Fundamentals of Docker

June 10, 2023

Introduction

In this post, we are going to talk about the fundamentals of Docker. Maybe you've heard about Docker, or you find an awesome project on GitHub and get scared when you read the Docker Instructions section in the README, or you don't understand the purpose of the Dockerfile. In this post, we clarify the following questions:

  • What is Docker and how does it work?
  • What is an image? What about a container?
  • What does the Dockerfile mean?

Let's get started!

What is Docker?

Docker is an open platform for developing, shipping, and running applications. Docker packages and runs applications in containers, and it:

  • enables you to separate your applications from your infrastructure so you can deliver software quickly
  • lets you manage your infrastructure in the same way you manage your applications
  • reduces the delay between writing code and running it in production

Architecture

Docker works based on an architecture that consists of three main components: the Docker client, the Docker host, and the Docker registry.

Docker Architecture

In a typical workflow, you interact with the Docker client, usually the docker command-line interface (CLI), which sends commands to the Docker daemon. The daemon runs on the Docker host and executes those commands: it pulls the required Docker images from the registry, creates containers based on those images, and manages their execution.
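
As a rough illustration of this flow, here is what it looks like from the CLI (node:18 is just an example image):

# The client contacts the local daemon and both report their versions
docker version

# The daemon pulls the node:18 image from the registry (Docker Hub by default)
docker pull node:18

# The daemon creates a container from the image, runs one command in it, and removes it (--rm)
docker run --rm node:18 node --version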

Docker Images

This is one of the most complex concepts to understand when you are learning Docker. Maybe you got confused the first time you saw the FROM instruction in your Dockerfile.

You can imagine a Docker image as a recipe for creating a specific type of application. It defines the environment and instructions for running the application consistently across different machines and platforms.

Docker images are built from a text file called a Dockerfile, which specifies how to build the image step by step.
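
As a small, hedged example, assuming a Dockerfile sits in the current directory and my-app is an arbitrary tag:

# Build an image from the Dockerfile in the current directory and tag it as my-app
docker build -t my-app .

# List local images; my-app should show up in the output
docker images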

Containers

Now, imagine a Docker container as a running instance of a Docker image. You can think of it as a living, isolated environment where an application is running. Containers are created from Docker images and provide a lightweight and isolated runtime environment for applications.

Importantly, isolated means that each container has its own filesystem, network, and process space, so the application running inside the container is isolated from other containers and from the host system.
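
A minimal sketch of working with a container, assuming the my-app image built above (the container name web is arbitrary):

# Start a container named web from the my-app image, detached in the background
docker run -d --name web my-app

# List running containers
docker ps

# Open a shell inside the container; its filesystem and processes are separate from the host
docker exec -it web sh

# Stop and remove the container when you are done
docker stop web
docker rm web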

The Dockerfile

As we discussed before, a Dockerfile is a text file that contains a set of instructions for building a Docker image. It provides a standardized and repeatable way to package and deploy applications within containers.

The following example Dockerfile packages a Node.js application into a Docker image:

FROM node:18

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --omit=dev

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]

Based on the previous Node.js App Dockerfile, let's go through the main instructions and concepts:

  1. Base Image: The FROM instruction specifies the base image that you want to use as the starting point for your Docker image. It defines the environment and dependencies required for your application. You can use an official image from Docker Hub or a custom image you have built.
  2. Working Directory: The WORKDIR instruction sets the working directory inside the container where subsequent instructions will be executed. It is a best practice to set a specific working directory to organize your files and ensure consistent execution of commands. Subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD instructions are executed relative to WORKDIR.
  3. Copying Files: The COPY instruction copies files or directories from the build context (the local directory where the Dockerfile resides) into the image. It is commonly used to include application code, configuration files, and other necessary resources.
  4. Installing Dependencies: The RUN instruction executes a command during the image build process. It is commonly used to install packages, run setup scripts, or perform any other necessary setup steps. Each RUN instruction creates a new layer in the image.
  5. Setting Environment Variables: The ENV instruction sets environment variables inside the image. These variables can be accessed during the container runtime. It is useful for configuring application-specific settings, such as database connection details or API keys. The example Dockerfile above does not use ENV, but the sketch after this list shows what it looks like.
  6. Exposing Ports: The EXPOSE instruction documents the ports that the container listens on at runtime. It does not publish the ports to the host machine or create any network bindings. It is purely informative and serves as documentation for users of the image.
  7. Starting our Application: The CMD instruction specifies the command to run when the container starts. It provides the default command for the container but can be overridden when running the container. Only the last CMD instruction in the Dockerfile is effective.
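
Since the example Dockerfile above does not use ENV, here is a hedged variation that adds it; NODE_ENV and PORT are just illustrative names and values:

FROM node:18

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

# Example environment variables, available to the application at runtime
ENV NODE_ENV=production
ENV PORT=8080

EXPOSE 8080
CMD [ "node", "server.js" ]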

So, what does "dockerizing an app" mean?

"Dockerizing an app" refers to the process of packaging and deploying an application within a Docker container by using a Dockerfile.

Let's suppose you are working on a Node.js, PHP, React.js, or C++ project and you want to "dockerize" it. To do that, you need to create a Docker image that encapsulates all the necessary dependencies, configuration, and code of your cool application.

The purpose here is to make our application portable and isolated from the underlying infrastructure. It also gives us consistency and reproducibility in the application's deployment across different environments, such as development, testing, and production.

When we run the application described in the Dockerfile, all dependencies, environment setup, and configuration are handled automatically; we don't have to worry about installing dependencies or making extra changes for Windows or Linux compatibility.
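
Assuming the example Dockerfile above and my-node-app as an arbitrary image tag, the whole flow could look like this:

# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .

# Run it, publishing container port 8080 on host port 8080 (EXPOSE alone does not publish)
docker run -d -p 8080:8080 --name my-node-app my-node-app

# Check the application's output
docker logs my-node-app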

Conclusion

We've covered the fundamentals of Docker and the Dockerfile. Maybe you still find some of these concepts complex and confusing, but we encourage you to learn them by getting your application (Node.js, PHP, React.js) running in a Docker container.

Docker is a great tool and gives us the superpower to build, test, and maintain our cool software.



Written by Marco Ciau, who is passionate about providing solutions by using software. I thoroughly enjoy learning new things and am always eager to embrace new challenges. You can follow me on Twitter.