Understanding Docker Volumes and Networking in Containerization
Hello everyone! As we continue our journey through DevOps, today we will delve into some more sophisticated containerization topics.
When deploying applications, one significant challenge is storage. Containers write data to their own writable layer while they run, but that data disappears once the container is removed. Docker addresses this problem with volumes.
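As a quick sketch of that ephemerality (using the public ubuntu image), the following commands write a file inside one container and show that a fresh container from the same image has no trace of it:

```bash
# Write a file into the container's writable layer; --rm removes the container on exit
docker run --rm ubuntu bash -c 'echo hello > /demo.txt && ls /demo.txt'

# A fresh container starts from the clean image filesystem,
# so the file written above is gone (ls reports "No such file or directory")
docker run --rm ubuntu ls /demo.txt
```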
What are Docker Volumes?
Volumes are Docker-managed storage units that provide persistent storage for containers. They live outside the container's writable layer, on the host, so they outlast the containers that use them. Here are some advantages of using volumes:
- They can be shared across multiple containers.
- They simplify data migration.
- Drivers allow volumes to be stored on remote hosts or cloud services, enhancing functionality.
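To make this concrete, here are the basic volume commands from the Docker CLI (the name my_data is illustrative):

```bash
# Create a named volume managed by Docker
docker volume create my_data

# List all volumes on the host
docker volume ls

# Show details, such as the volume's mountpoint on the host
docker volume inspect my_data

# Mount the volume into a container at /app/data
docker run -it -v my_data:/app/data ubuntu
```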
Volumes solve the storage half of running multiple containers. The other half is communication: containers also need to talk to each other, and that is handled by Docker networking.
What is Docker Networking?
A Docker network is a virtual network that connects Docker containers, facilitating data exchange. These networks provide isolation, security, and connectivity. Docker attaches every container to a default bridge network out of the box, but you can also create custom networks for more control over how containers interact.
Docker offers several network drivers, such as bridge, host, overlay, macvlan, and none, each serving specific needs. You can read more about them in the Docker networking documentation.
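As a small illustration (the names my_bridge, app1, and app2 are made up, and this assumes getent is available in the Debian-based nginx image), containers on the same user-defined bridge network can reach each other by name:

```bash
# Create a user-defined bridge network
docker network create my_bridge

# Start two containers attached to it
docker run -d --name app1 --network my_bridge nginx
docker run -d --name app2 --network my_bridge nginx

# Containers on a user-defined network resolve each other by
# container name via Docker's embedded DNS
docker exec app1 getent hosts app2
```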
Today, we will accomplish two tasks: deploying a multi-container application and gaining a deeper understanding of how volumes operate.
Task 1
For our first task, we need to familiarize ourselves with Docker Compose.
Prerequisites:

- A running EC2 instance or an Ubuntu VM
- Docker installed and running
- The project files
In this example, I will use an Ubuntu VM to deploy a Django project with a PostgreSQL database.
Project repository: https://github.com/testdrivenio/django-on-docker
We will begin by cloning the project onto our machine:

```bash
git clone https://github.com/testdrivenio/django-on-docker.git
```
After cloning the repository, we'll create a development environment file (the Compose file below expects it to be named .env.dev):

```bash
nano .env.dev
```

Content:

```
DEBUG=1
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=hello_django_dev
SQL_USER=hello_django
SQL_PASSWORD=hello_django
SQL_HOST=db
SQL_PORT=5432
DATABASE=postgres
```
This file configures the environment variables necessary for the Django project, especially for the database.
Next, we will create a Docker Compose file:

```bash
nano docker-compose.yml
```
Content:

```yaml
version: '3.8'

services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
    networks:
      - my_django_network
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev
    networks:
      - my_django_network

volumes:
  postgres_data:

networks:
  my_django_network:
    driver: bridge
```
The above configuration runs two services:

- web: built from the ./app directory, this service runs on port 8000 and depends on the db service.
- db: uses the postgres:15 image, with its data stored in the postgres_data volume.
Compose also creates a named volume, postgres_data, which Docker manages on the host and mounts into the db container, and a private bridge network that connects the two services.
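You can verify that Compose created these resources. Note that Compose prefixes resource names with the project directory, so the exact name below assumes the default django-on-docker folder name:

```bash
# List the volumes and networks on the host
docker volume ls
docker network ls

# Inspect the named volume; the name is prefixed with the project directory
docker volume inspect django-on-docker_postgres_data
```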
Now, let’s build and run the project:

```bash
docker compose up
```
This command first builds the images and then starts both services; because of the depends_on directive, the db container is started before web. After a successful start, the application will be available at http://localhost:8000.
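In a second terminal, you can confirm that both containers are up and the app responds (assuming the defaults above):

```bash
# Show the containers managed by this Compose project
docker compose ps

# Hit the Django app from the host
curl http://localhost:8000
```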
You can upload a file and view its contents.
So far, the application has been deployed in a development environment.
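Before moving on, it's a good idea to tear down the development stack. The -v flag also removes the named volume, giving the production run a clean database; skip it if you want to keep the dev data:

```bash
# Stop and remove the dev containers, network, and (with -v) volumes
docker compose down -v
```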
To deploy the application in production, follow this process.
First, create a production Docker Compose file:
```bash
nano docker-compose.prod.yml
```
Contents:

```yaml
version: '3.8'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
    networks:
      - my_django_network
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
    networks:
      - my_django_network
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
    networks:
      - my_django_network

volumes:
  postgres_data:
  static_volume:
  media_volume:

networks:
  my_django_network:
    driver: bridge
```
This configuration runs the web service with Gunicorn instead of Django's development server, mounts two volumes for the static and media files Django generates, and uses expose rather than ports, so port 8000 is reachable only by other containers on the network, not from the host. The db service pulls the postgres:15 image and persists its data in the postgres_data volume, while the nginx service is built from its own Dockerfile, serves the static and media volumes, and publishes container port 80 on host port 1337.
Next, set up the environment by creating a production environment file:

```bash
nano .env.prod
```

Contents:

```
DEBUG=0
SECRET_KEY=change_me
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=hello_django_prod
SQL_USER=hello_django
SQL_PASSWORD=hello_django
SQL_HOST=db
SQL_PORT=5432
DATABASE=postgres
```
Then create another file for the database:

```bash
nano .env.prod.db
```

Content:

```
POSTGRES_USER=hello_django
POSTGRES_PASSWORD=hello_django
POSTGRES_DB=hello_django_prod
```
Once everything is set up, run the application with the following command:
```bash
docker compose -f docker-compose.prod.yml up --build
```
Your application will now be available on port 1337, served through Nginx.
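Depending on how the project's entrypoint script is set up, you may also need to apply migrations and collect static files by hand; these are standard Django management commands run inside the web container:

```bash
# Apply database migrations inside the running web container
docker compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput

# Collect static files into the volume served by Nginx
docker compose -f docker-compose.prod.yml exec web python manage.py collectstatic --no-input --clear
```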
In summary, Task 1 demonstrates how to deploy a multi-container application using Docker across different environments.
Task 2
Next, we will explore how volumes function in detail.
For this, we will use the ubuntu:latest image.
I will begin by running the image in a container, attaching it to a newly created named volume:

```bash
# Create a new named volume
docker volume create volume_name

# Run an Ubuntu container with the volume mounted at /data
docker run -it -v volume_name:/data ubuntu
```

Here, /data is the directory inside the container where the volume is mounted.
Now, inside the container, I will navigate to /data, which will initially be empty, and create a new folder there.
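Inside the container's shell, that looks like this (the folder name is arbitrary):

```bash
# Inside the first container
cd /data
ls                          # initially empty
mkdir demo_from_container_1
ls                          # now shows demo_from_container_1
```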
Now, open a new terminal window and start a second container from the same image, mounting the same volume.
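Using the same mount command as before:

```bash
# Start a second container attached to the same volume
docker run -it -v volume_name:/data ubuntu
```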
Navigate to the /data directory to check its contents.
You will see that the folder created in the previous container is also present here.
I can create another folder here, and it will also appear in the other running container.
This illustrates how volumes provide persistent storage for containers.
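You can push this one step further: even after every container using the volume is removed, the data survives (this sketch assumes the volume_name volume from above and that the containers have exited):

```bash
# Remove all stopped containers (or docker rm each one by ID)
docker container prune -f

# A brand-new container still sees the data stored in the volume
docker run --rm -v volume_name:/data ubuntu ls /data
```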
For a challenge, create a container from the same image, mount the same volume, add a new file or directory, and check if it appears in the other container.
That's all for today. Thank you for reading!