Understanding Docker and Containers: An Introduction to Deployment
Docker and Containers: An Overview
DevOps Series:
- Foundations of DevOps
- Docker and Containers: Part 2 - Practical Applications
- Docker and Containers: Part 3 - Microservices and Docker Compose
- Container Orchestration: Part 1
- Container Orchestration: Part 2 - Scaling a Containerized 3-Tier Web Application
- Container Orchestration: Part 3 - Enhancing Kubernetes Clusters
- CI/CD Pipelines: Part 1
- CI/CD Pipelines: Part 2 - GitHub Actions in Practice
- Infrastructure Automation with GitOps
- DevSecOps: Part 1
- DevSecOps: Part 2
- Observability and Monitoring
The Challenge of Traditional Deployment
Before containerization, application deployment often relied on virtualization, where we created virtual machines to house applications along with their dependencies. Although this method provided some level of isolation between applications, it was highly resource-intensive and expensive.
Running virtual machines is costly, especially when scaling an application under heavy load requires spinning up multiple VM instances.
The Rise of Containerization
In response to these challenges, the tech industry innovated by encapsulating applications and their dependencies within containers.
A container serves as a self-contained environment for an application, making it highly portable. Unlike virtual machines, containers do not bundle a full operating system; they run directly on the host and are managed by a container engine.
Key distinctions include:
1. Complete isolation of applications within containers.
2. Reduced resource requirements.
Containerization is a transformative approach to packaging and deploying applications.
Understanding Containers
To grasp what a container truly is, we must first define a namespace. A namespace restricts what a process can access, effectively isolating it from other processes.
Control groups (cgroups) limit a process's access to system resources.
In essence, a container is a process whose filesystem is unpacked from tar archives (the image layers), anchored to namespaces and regulated by cgroups.
So, a container is fundamentally a process. Interested in building a container from scratch? Check out the talk by Liz Rice for guidance.
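To make the namespace idea concrete, here is a minimal sketch using the `unshare` tool from util-linux (an illustration of process isolation only, assuming a Linux host with root privileges; it is not a full container):

```shell
# Start a shell in new PID and mount namespaces; --mount-proc remounts /proc
# so that process listings reflect only the new namespace.
sudo unshare --pid --fork --mount-proc bash

# Inside the new shell, listing processes shows only this namespace's
# processes, not the rest of the host:
ps aux
```

Adding a cgroup limit and a root filesystem unpacked from a tar archive on top of this is essentially what a container runtime does for you.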
Docker: The Leading Tool for Containerization
Docker is the foremost platform for developing and running applications within containers, offering an open-source toolkit for deploying and managing these applications.
Understanding Docker's Architecture
Docker's architecture comprises three main components:
- Client: The command-line interface for interacting with the Docker daemon (also known as the Docker engine) on the host.
- Docker Host: Contains the Docker daemon, containers, and images.
- Registry: A repository for storing images, allowing users to retrieve, collaborate, and distribute them.
To clarify, an image is akin to a class, while a container is an instance of it. After developing an application, you create a single image from which you can generate multiple containers.
Basic Docker Commands
Here are some essential commands for developers:
- docker ps: Lists running containers (add -a to include stopped ones).
- docker inspect <container_name_or_id>: Shows detailed information about a container.
- docker logs <container_name_or_id>: Displays the logs for a container.
- docker build: Builds an image from a Dockerfile.
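Typical invocations look like this (the container name `my-app` and tag `my-app:1.0` are hypothetical placeholders):

```shell
# List running containers; -a would also show stopped ones
docker ps

# Inspect configuration and view logs for a container named "my-app"
docker inspect my-app
docker logs my-app

# Build an image from the Dockerfile in the current directory, tagged "my-app:1.0"
docker build -t my-app:1.0 .
```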
Installation of Docker
Installing Docker is straightforward: visit the official website, download the installer for your operating system, and follow the prompts to complete the installation.
Getting Started with Docker
After installing Docker Desktop, open the application and authenticate yourself. To verify that Docker is functioning correctly, open the command line and input the command:
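The original snippet is not shown here; a common way to verify the installation (my assumption, not the article's exact command) is to check the client version and optionally run the hello-world test image:

```shell
# Confirm the Docker CLI is installed
docker --version

# Optionally, run a throwaway test container to confirm the daemon works
docker run hello-world
```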
Now that your environment is ready, we will adopt a hands-on approach. I will guide you through creating a simple full-stack application using Angular for the frontend and Spring Boot for the backend, along with MySQL as the database.
Containerizing a 3-Tier Web Application
Creating a MySQL Container
Let’s start by creating our initial container, which will be a MySQL container. Enter the following command in your command line:
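The command itself is omitted from the text; based on the description that follows, it is presumably the standard pull:

```shell
# Download the official MySQL image (defaults to the "latest" tag)
docker pull mysql
```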
This command pulls the official MySQL image from Docker Hub to your machine.
What is Docker Hub? It's a platform where developers and companies upload their images for others to use, similar to GitHub but for containers.
After the image is downloaded, we can run:
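The run command is not shown in the text; a sketch consistent with the breakdown that follows (the container name and the password 12345 are illustrative, matching the values used later in this article):

```shell
# Start a MySQL container in the background, named "mysql-container",
# with the root password set and port 3306 published on the host
docker run -d --name mysql-container -e MYSQL_ROOT_PASSWORD=12345 -p 3306:3306 mysql:latest
```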
Breaking down the command:
- docker run: Creates and starts a container from the specified image (here, mysql with the default latest tag).
- -d: Runs the container in the background (detached mode).
- --name: Specifies the name of the container.
- -e MYSQL_ROOT_PASSWORD: Sets an environment variable required by the MySQL image; it defines the password for the root user.
- -p 3306:3306: Maps port 3306 on the host to port 3306 inside the container.
The returned ID indicates the unique identifier for the created container.
You can verify the running containers by using the command docker ps:
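For reference (the exact output was not preserved here):

```shell
# Shows running containers with columns such as
# CONTAINER ID, IMAGE, COMMAND, CREATED, STATUS, PORTS, NAMES
docker ps
```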
Congratulations, you've successfully created your first container!
Next, let's access the container, a common practice especially for database containers.
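A typical way to open a MySQL shell inside the container (assuming it was named mysql-container, as above) is:

```shell
# Run the mysql client interactively inside the running container
docker exec -it mysql-container mysql -u root -p
```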
In this command, we specify the container's name and the mysql client command, and use -u to set the user (in this case, root). After executing it, we are prompted for the password, which we set to 12345 (feel free to choose a password that suits you).
Once inside the container, you can execute basic MySQL commands:
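For example, at the mysql> prompt (the original snippet is not preserved; these are standard sanity checks):

```sql
-- List the databases on this server
SHOW DATABASES;

-- Show the current user and the server version
SELECT USER(), VERSION();
```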
Creating a New Database
Let’s proceed to create a new database:
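The database name below is illustrative (the original snippet is not preserved):

```sql
-- Create a database and switch to it
CREATE DATABASE demo_db;
SHOW DATABASES;
USE demo_db;
```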
Understanding Docker Concepts: Persistence and Networking
Before delving deeper into creating images from code, it's crucial to grasp several Docker concepts.
a. Persistence
Persistence in Docker refers to how we store data. As you may know, data within a container is lost once the container is deleted. To mitigate this, Docker offers various solutions to retain data, even post-container deletion.
The most effective solution is using volumes, which store data outside the container on the host machine, managed by Docker itself.
Types of Volumes:
1. Host Volumes: Developers choose where on the host the data is stored.
- Command to run this type of volume:
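The original command is missing here; a sketch of a host volume with -v (the host path is illustrative, and /var/lib/mysql is MySQL's data directory inside the container):

```shell
# Map a host directory (left of the colon) to the container path (right of the colon)
docker run -d --name mysql-host-vol -v /my/host/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=12345 mysql:latest
```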
2. Anonymous Volumes: Automatically created by Docker for each container on the host machine, linked to the container's virtual filesystem.
3. Named Volumes: Volumes referenced by name, often considered the best solution.
Other methods to store data, though less popular, include:
- Bind Mount: A host directory chosen by the user is mounted directly into the container; the data is not managed by Docker and can be modified by non-Docker processes.
- Tmpfs Mount: Data is stored in the host's RAM and disappears when the container stops.
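A quick sketch of both mount types (the paths and the nginx image here are illustrative choices, not from the original article):

```shell
# Bind mount: an explicit host path, visible to and modifiable by non-Docker processes
docker run -d --mount type=bind,source=/my/host/config,target=/etc/app nginx

# tmpfs mount: data lives only in memory and is gone when the container stops
docker run -d --tmpfs /app/cache nginx
```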
b. Networking
Docker supports various networking modes, with the most significant being:
- Bridge (default): Containers on the same bridge network can communicate with each other, each with its own IP address.
- Host: The container shares the host's network stack directly, using the host's IP address and ports, with no separate port mapping.
- MACVLAN: Assigns a unique MAC address to each container, allowing it to appear as a physical device on the network.
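As a quick sketch of bridge networking (the network and container names here are illustrative): on a user-defined bridge network, containers can reach each other by name.

```shell
# Create a user-defined bridge network
docker network create app-net

# Attach two containers to it; "web" can reach the database at hostname "db"
docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=12345 mysql:latest
docker run -d --name web --network app-net -p 8080:80 nginx
```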