Mastering Docker: Unleashing the Container Revolution

In this comprehensive article, we embark on an enlightening exploration of the fundamental aspects of Docker, an innovative technology that revolutionizes the automation of application deployment, management, and scalability. Through an in-depth analysis of Docker commands, exemplified with practical demonstrations, and an introduction to Docker Compose, we aim to equip readers with the essential knowledge to grasp and harness the full potential of this groundbreaking technology. Let us delve into the world of Docker!

Defining Docker

Docker can be succinctly described as an application designed to streamline the automation of application deployment, management, and scaling. It accomplishes this by executing applications within an isolated environment, resembling virtual machines but with superior performance. Docker operates through encapsulated "containers" that house everything necessary to run an application, including its code and dependencies. Each container operates independently, so multiple containers can coexist on the same machine without interference. All the while, they share the same kernel, which acts as an intermediary between hardware and software, providing vital services to manage system resources, facilitate communication, and ensure overall system stability and security.

Advantages of Docker

Docker offers several compelling advantages over conventional methods of application deployment:

  1. Efficient Memory Utilization: Docker optimizes memory usage by storing applications within containers rather than locally on the host machine.

  2. Swift Boot Time: With a single command, Docker containers can be quickly launched, significantly reducing the time required for application startup.

  3. Seamless Scalability: Docker simplifies scalability, enabling applications to be effortlessly expanded when demand rises, surpassing the limitations of locally running instances.

  4. Easy Collaboration: Docker facilitates seamless sharing, deployment, and testing of applications, promoting effective collaboration among teams.

  5. Cross-Platform Compatibility: Docker and Docker Compose, the latter of which we will discuss later in this article, can be installed on various operating systems. For guidance, refer to the official Docker installation documentation for your platform.

Verifying the Docker Installation

To ensure correct installation, the following command can be used:

docker --version

Example output (varies with the installed version):
Docker version 24.0.4

Docker Image: A Foundation for Containers

A Docker image serves as a versatile blueprint from which one or more containers can be created, as the following examples show.

Exploring Docker Hub

In day-to-day programming, common tools such as programming languages like Python, web servers like Nginx, and databases like PostgreSQL play a pivotal role in developing various applications.

Docker addresses this need by maintaining a registry known as Docker Hub, which houses a vast repository of pre-configured images containing these tools. Public images can be pulled without an account; creating a Docker Hub account is required to push images of your own.

Pulling images from Docker Hub is a straightforward process. Additionally, users have the flexibility to create their own customized images and upload them to Docker Hub, allowing the images to be shared either publicly or privately.

A Practical Example

Let us illustrate the process with a command to pull a Python 3.10 image from Docker Hub:

docker pull python:3.10-alpine3.18

The alpine3.18 tag pulls a variant built on Alpine Linux, the smallest Python 3.10 image available, saving storage space. When it meets your needs, it is advisable to prefer the "alpine" variants when pulling or defining images.

To view all local images, run:

docker images
Output:
REPOSITORY       TAG               IMAGE ID       CREATED       SIZE
python           3.10-alpine3.18   7d34b8225fda   7 weeks ago   49.7MB

Pulling the standard installation with docker pull python:3.10 and then viewing the installed images displays the difference in size between the default image and the alpine variant, as shown:

REPOSITORY       TAG               IMAGE ID       CREATED       SIZE
python           3.10              d9122363988f   6 weeks ago   1GB
python           3.10-alpine3.18   7d34b8225fda   7 weeks ago   49.7MB

To remove an image, run:

docker image rm <image_name/image_id>

Example:
docker image rm python:3.10

Output:
d9122363988f

Docker Container Creation

After defining the Docker image, it becomes the basis for creating a Docker container. Containers are instances that run from the specified image, providing an isolated environment for the application to operate within.

As an illustration, let's consider the usage of the Python image mentioned earlier:

docker run --name pytry -it python:3.10-alpine3.18

The --name flag defines the name used to reference the container; otherwise Docker assigns a random name to it.
The -it flags run the container in interactive mode (-i keeps STDIN open, -t allocates a pseudo-terminal).
The above command produces a Python shell which we can use to run various commands, as shown:

Python 3.10.12 (main, Jun 15 2023, 03:17:49) [GCC 12.2.1 20220924] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print("I am a python container")
I am a python container

To view all the flags that can be used with the docker run command use:

docker run --help

Now that the container is running, run the command below in another terminal window to view running containers:

Command:
docker ps

Output:
CONTAINER ID   IMAGE                    COMMAND     CREATED          STATUS          PORTS     NAMES
5bcde232650b   python:3.10-alpine3.18   "python3"   32 seconds ago   Up 31 seconds             pytry

To view all containers present in Docker, including stopped ones, use:
docker ps -a

Move to the terminal window running the Python container and run the command exit() to return to the command line.
Notice that running docker ps to view running containers now produces no results, since our container has exited.

To make the example above more interesting, we can create a file called hello.py in the current directory and write some code in it. Then run the command again, but with a different container name to avoid conflicts.

# hello.py
from datetime import datetime

time = datetime.now()
print(f'I am a Python file in a Docker container running at {time}')

Execute the following Docker command to run the Python script inside the container:


docker run --name pytryFile -w /app -v "$(pwd):/app" python:3.10-alpine3.18 python hello.py

The output will display a timestamp representing the current time, as follows:


I am a Python file in a Docker container running at <a_timestamp_of_the_time_right_now>

By executing this streamlined Docker command, the Python script hello.py will run within the container, displaying the appropriate message with the current timestamp.

Let's break down this command:

  • -v "$(pwd):/app" mounts the current directory (pwd) into the container at the /app directory, allowing access to your Python script.

  • -w /app sets the working directory inside the container to /app.

  • python:3.10-alpine3.18 is the image name.

  • python hello.py is the command that runs your Python script inside the container.
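Note that $(pwd) is expanded by your shell before Docker ever sees the command, which is why the quotes matter when the path contains spaces. A quick illustration, using /tmp as a stand-in for your project directory:

```shell
# The shell performs the substitution first; docker receives a plain string.
cd /tmp
printf '%s\n' "$(pwd):/app"
# → /tmp:/app
```

Docker then splits that string on the colon to get the host path and the container path.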

Managing Containers

Container management is a critical aspect of working with Docker. Docker provides a set of commands and tools to help you create, start, stop, inspect, and remove containers efficiently. Here are some essential commands for managing containers:

  1. To start a stopped container:

     docker start <container_name/container_id>
     Example:
     docker start pytry
     Output:
     pytry

  2. To stop a running container:

     docker stop <container_name/container_id>
     Example:
     docker stop pytry
     Output:
     pytry

  3. To remove a container (add the -f flag to force-remove a running one):

     docker rm <container_name/container_id>
     Example:
     docker rm pytry

  4. To list the IDs of all containers:

     docker ps -aq

  5. To remove all containers at once:

     docker rm $(docker ps -aq)

To view other interesting flags you can use to manipulate containers:

docker ps --help

Docker Volumes

Docker volumes facilitate seamless sharing of data, such as files and directories, between the host and containers, as well as among different containers. The earlier illustration of running the hello.py script showcased this capability.

For another demonstration, let's consider a scenario where we create a new folder containing a file named hello.html:

<h2> Hello World </h2>

To display this content on a web browser using the nginx web server, run the nginx image as shown below:

docker run --name helloWorld -v $(pwd)/hello.html:/usr/share/nginx/html/hello.html:ro -p 8080:80 -d nginx

# View the content on the browser at:
http://localhost:8080/hello.html

/usr/share/nginx/html: the directory from which the nginx web server serves its content.
ro: mounts the shared content as read-only inside the container, so the container cannot modify it.

-p: this flag defines the port mapping through which the content will be reached. In our case, everything served on the container's port (80) is accessed on the host through port 8080.

Note: we did not pull the nginx image before using it, yet running "docker images" after the above command succeeds shows nginx among the images present.

This is because when executing docker run ..., Docker checks for the presence of the image locally and, if it cannot find it, pulls it from Docker Hub and runs it.

docker images
Output:
REPOSITORY       TAG               IMAGE ID       CREATED       SIZE
python           3.10-alpine3.18   7d34b8225fda   7 weeks ago   49.7MB
nginx            latest            021283c8eb95   3 weeks ago   187MB

# running containers
docker ps

Output:
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS                                   NAMES
eefaa7cd973c   nginx     "/docker-entrypoint.…"   11 minutes ago   Up 11 minutes   0.0.0.0:8080->80/tcp, :::8080->80/tcp   helloWorld

Dockerfile - Building Custom Images

A Dockerfile is a crucial component when deploying applications in Docker. It bears the name "Dockerfile" and serves as a blueprint for creating custom Docker images that encapsulate multiple application components, enabling easy sharing and distribution.

In essence, a Dockerfile empowers users to craft personalized images tailored to their specific needs. It comprises a series of instructions, each representing a step in bringing the application to life within a container.

The Dockerfile adheres to a specific structure, commencing with a mandatory "FROM" instruction, which defines the base image upon which the custom image will be built. Subsequently, other instructions can be employed, depending on the application's requirements. Notably, the "FROM" instruction may appear multiple times, as in multi-stage builds, to generate several distinct build stages.

Following the "FROM" instruction, a multitude of instructions can be included to configure the image. These include:

FROM - defines the base image to build upon
RUN - executes a command while the image is being built
WORKDIR - sets the working directory inside the image
COPY - copies files or folders from the host into the image
ADD - like COPY, but can also fetch remote URLs and unpack local archives
CMD - sets the default command the container runs when it starts

More instructions like these can be found in the Dockerfile reference.

An example of a Dockerfile follows.
Assume we have a folder called Example, with a hello.js file
containing:

console.log('hello, world');

Create another file and name it "Dockerfile" (the default name docker build looks for) with the content below:

FROM node:alpine
WORKDIR /app
COPY . .
CMD node hello.js

Docker Build

Once a Dockerfile is defined, the "build" command is used to create the image. Run docker build --help to view the flags that can be used with docker build.
The syntax is as shown:

docker build --tag <name_of_image>:<tag> <directory_with_dockerfile>

The tag represents the image version and aids in image naming; if none is given, latest is assumed. It provides more control over the image you create.

Building the Node.js image above (run from inside the Example folder):

docker build --tag greetings:latest .

Viewing the images as shown above, we notice a new image tagged greetings:latest:

REPOSITORY       TAG               IMAGE ID       CREATED          SIZE
python           3.10-alpine3.18   7d34b8225fda   7 weeks ago      49.7MB
nginx            latest            021283c8eb95   3 weeks ago      187MB
greetings        latest            531620bb45c5   17 seconds ago   181MB

Running the image:

docker run greetings:latest

Output:
hello, world

.dockerignore

It is used to exclude files that the application does not require to run, like node_modules, requirements.txt and .git, from the build context.
It is created as a file named .dockerignore in the build context directory, and the files to be ignored are listed inside it, one per line:

.dockerignore
requirements.txt
node_modules
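For instance, the file above could be created straight from the shell (a minimal sketch; the ignored names are just the common examples mentioned):

```shell
# Write a .dockerignore in the current directory, one pattern per line
cat > .dockerignore <<'EOF'
node_modules
requirements.txt
.git
EOF

# Confirm the contents
cat .dockerignore
```

On the next docker build, anything matching these patterns is left out of the context sent to the Docker daemon.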

Docker Registry

As discussed earlier, images are pulled from and pushed to Docker Hub. Docker Hub is a registry: a service that stores and distributes images, organized into repositories.
That said, it is possible to push personal images to Docker Hub:

  1. First, create an account on Docker Hub.

  2. Run the following command and provide your Docker Hub credentials (username and password) when prompted: docker login

  3. Tag the previously built Docker image with your Docker Hub username and the desired repository name. docker tag greetings your_username/greetings:latest

  4. Finally, push the tagged image to Docker Hub using the following command: docker push your_username/greetings:latest This will push the Docker image to your Docker Hub repository named "greetings" with the "latest" tag.

Docker Inspect

This command allows you to retrieve detailed information about Docker objects, such as containers, images, volumes, and networks. It provides a JSON representation of the specified Docker object, which includes various metadata and configuration details.
The syntax is as shown:
docker inspect <container_name/container_id>
Pick a container and try it out.
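Because the output is JSON, it is easy to post-process. The sketch below parses a trimmed, hypothetical sample of docker inspect output with Python's standard json module (the field names match the real output, but the values are made up):

```python
import json

# Trimmed, hypothetical sample of `docker inspect <container>` output
sample = '''
[
  {
    "Id": "5bcde232650b",
    "Name": "/pytry",
    "State": {"Status": "running", "Running": true}
  }
]
'''

containers = json.loads(sample)
for c in containers:
    # Docker prefixes container names with a slash; strip it for display
    print(c["Name"].lstrip("/"), "->", c["State"]["Status"])
# → pytry -> running
```

For one-off lookups, docker inspect also accepts a --format flag with a Go template, e.g. docker inspect --format '{{.State.Status}}' pytry.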

Docker Logs

This command is used to view the output (stdout and stderr) produced by a running container.
The syntax is as shown:
docker logs <container_name/container_id>
To follow the logs of the container as they come in, add the -f flag before the container name/id.
To add a timestamp to each log line when following, add the flags -ft before the container name/id.
For more important flags view
docker logs --help

Docker Network

This command is used to place two or more containers on the same network, making it easy for them to communicate with each other. For example, a postgres container and a pgadmin container that provides a graphical user interface for the data in a postgres database can easily communicate if they are placed on the same network.
The syntax is as shown:
docker network create <network_name>

The containers will now have to be run with the --network flag in order to communicate, e.g.:
docker run -e POSTGRES_PASSWORD=pass -p 5432:5432 --network <network_name> -d postgres:latest

Docker Compose: Simplified Multi-Container Management

Docker Compose is a powerful tool that enables the seamless definition and management of multiple Docker containers as a unified entity, removing the need to create networks and link containers by hand. With just one straightforward command, Docker Compose brings multiple components of an application to life, including the frontend, backend, and storage.

This capability proves especially advantageous in development and testing environments, as well as during the deployment of applications to production. By utilizing a user-friendly docker-compose.yaml file containing services, networks, and volumes, developers can efficiently orchestrate the entire application ecosystem. Each service in the YAML file represents a container, and through this file, one can specify various attributes such as images, ports, volumes, environment variables, and more.

To leverage Docker Compose, simply place the docker-compose.yaml file in the root directory of your project. Before using Docker Compose, ensure that you have installed it in accordance with your operating system type. Additionally, verify that your Docker Compose version is compatible with your Docker version for seamless integration.

Here is an illustrative example of a Docker Compose file:

services:
  db:
    image: postgres:15.3-alpine3.18
    environment:
      - POSTGRES_PASSWORD=<password>
      - POSTGRES_DB=<db_name>
    volumes:
      - "./green:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  admin:
     image: dpage/pgadmin4
     environment:
        - PGADMIN_DEFAULT_EMAIL=<any@email.com>
        - PGADMIN_DEFAULT_PASSWORD=<pgadmin_password>
     ports:
        - "8080:80"
     volumes:
        - "./pgadmindata:/var/lib/pgadmin:rw"

You can include the version of the Compose file format you are using (e.g. version: "3.8") just above everything else, but it is not a requirement; by default, the installed version will be used.

Let's unpack the file above:

  • A service: describes one part of your application. It also directs Docker on how to run or build the image required by that service. Each service has to have a unique name that differentiates it from the others. We use db as the service name for our postgres database image and admin to refer to the pgadmin image and the settings it needs to run successfully.

To run the file:
docker-compose up

Use docker ps to verify that the containers were created successfully and are up and running.
So, in general, our docker-compose file runs both postgres and pgadmin (a GUI for postgres) on the same network.
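Compose creates that shared network automatically. If you want to name it explicitly, the same file could declare one, as in this sketch (the name backend is arbitrary; the remaining service settings from the file above are unchanged and omitted here for brevity):

```yaml
services:
  db:
    image: postgres:15.3-alpine3.18
    networks:
      - backend
  admin:
    image: dpage/pgadmin4
    networks:
      - backend

# Top-level network declaration shared by both services
networks:
  backend:
```

An explicit name is mainly useful when other containers outside this compose file need to join the same network.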

Opening http://localhost:8080 in a browser provides a pgadmin interface where you can log in using the pgadmin credentials and create a server to view the postgres database in the browser, as shown.

https://res.cloudinary.com/practicaldev/image/fetch/s--d3FZ_RlA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzryd2j2ifjpmw0xeb8w.png

Create a server name

https://res.cloudinary.com/practicaldev/image/fetch/s--AHBqfPbs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7jkxt56icsg1ip4hgg14.png

Move to Connection and use the name of the postgres service (db) as the hostname, and fill in the postgres user and password in the fields indicated.

https://res.cloudinary.com/practicaldev/image/fetch/s--Kz2yrUuM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erz6qjk2j9pxggaptlgt.png

Now you can open a psql shell inside the database container (use docker ps to find its name; Compose derives it from the project and service names):

docker exec -it <db_container_name> psql -U <postgres_user>
\c <db_name>
CREATE TABLE hello_pgadmin (id int);

Refresh pgadmin to see the changes.

To build the images and not run them, use
docker-compose build

To stop the running containers use
docker-compose down

Conclusion

In summary, Docker goes beyond being a mere technology; it represents a transformative mindset that fosters continuous integration and continuous deployment.

Regardless of whether you are an individual developer, a member of a team, or responsible for managing extensive infrastructures, Docker offers valuable benefits to all.

Embracing Docker's capabilities is certain to elevate your development workflow and lift your projects to unprecedented levels of efficiency and dependability. By adopting Docker, you empower yourself and your team to deliver innovative solutions with confidence, adaptability, and agility.

Thanks For Reading