
How To Install and Use Docker Compose on Ubuntu 20.04: A Step-by-Step Guide for Beginners

Docker Compose simplifies managing multi-container Docker applications on Ubuntu 20.04. This tool allows developers to define and run complex setups with a single command.

You can install Docker Compose on Ubuntu 20.04 by downloading the binary and setting the correct permissions.

Install and Use Docker Compose

Docker Compose uses a YAML file to configure application services. This file specifies containers, networks, and volumes for a Docker application.

With Docker Compose, users can start all the services with one command, making it easier to develop and deploy applications.

Install and Use Docker Compose – Key Takeaways

  • Docker Compose streamlines multi-container application management on Ubuntu 20.04
  • Installation involves downloading the Docker Compose binary and setting proper permissions
  • A YAML file defines the entire application stack, including services, networks, and volumes

Prerequisites


Before installing Docker Compose on Ubuntu 20.04, you need to set up your system. This involves creating a user account, checking system requirements, and updating your package database.

Setting Up a Non-Root User with Sudo Privileges

Creating a non-root user with sudo privileges is important for security. To do this, log in as root and use the adduser command:

adduser username

Replace “username” with your desired username. Follow the prompts to set a password and user information.

Next, add the user to the sudo group:

usermod -aG sudo username

This allows the user to run commands with sudo.

Test it by switching to the new user and running a sudo command:

su - username
sudo apt update

Enter the user’s password when prompted.

Ensuring System Requirements Are Met

Ubuntu 20.04 meets the basic requirements for Docker Compose. Check your system’s RAM and CPU:

free -h
lscpu

Docker needs a 64-bit processor, and at least 2GB of RAM is recommended. Make sure your system meets both requirements.

Also, check your Ubuntu version:

lsb_release -a

Confirm it shows Ubuntu 20.04.

Updating Package Database

Keep your system up-to-date. Run these commands:

sudo apt update
sudo apt upgrade

This updates the package list and installs available upgrades.

After updating, reboot your system:

sudo reboot

This ensures all updates are applied correctly.

Installing Docker on Ubuntu 20.04

Docker installation on Ubuntu 20.04 involves setting up the repository, adding the GPG key, and installing the Docker Engine. This process ensures you get the latest stable version of Docker directly from the official source.

Configuring Docker Repository

To install Docker on Ubuntu 20.04, start by updating the package index:

sudo apt-get update

Next, install packages to allow apt to use HTTPS:

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

Add Docker’s official repository to your system:

echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

This command adds the stable repository for your Ubuntu version.

Adding Docker’s GPG Key

Docker uses GPG keys to ensure package authenticity. Add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Verify the key’s fingerprint:

sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --list-keys

The output should show a key with the last 8 characters: 0EBFCD88.

Installing Docker Engine

With the repository set up, install Docker:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

This installs the latest stable version of Docker Engine and containerd.

Verify the installation:

sudo docker run hello-world

If successful, you’ll see a welcome message.

To use Docker without sudo, add your user to the docker group:

sudo usermod -aG docker $USER

Log out and back in for this change to take effect.
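
As a quick check after logging back in, confirm the group membership and try running a container without sudo:

groups                    # the output should now include "docker"
docker run hello-world    # should work without sudo once the group change is active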

Installing Docker Compose

Docker Compose allows users to manage multiple containers easily. Installing it on Ubuntu 20.04 involves downloading the latest version, setting permissions, and verifying the installation.

Downloading the Latest Stable Version of Docker Compose

To install Docker Compose, users first need to download it. The process involves using the curl command to fetch the latest stable version from Docker’s GitHub repository.

Here’s how to do it:

  1. Open a terminal window
  2. Run the following command:
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

This command downloads the Docker Compose binary and saves it in the /usr/local/bin directory. The version number may change, so it’s a good idea to check the latest release on GitHub.
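
One small convenience is to keep the version number in a shell variable so the command is easier to update when a newer release comes out. A minimal sketch of the same download:

COMPOSE_VERSION="1.29.2"   # update this to the release you want to install
sudo curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose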

Applying Executable Permissions to the Binary

After downloading, the next step is to make the Docker Compose binary executable. This allows the system to run it as a command.

To set the correct permissions, use the chmod command:

sudo chmod +x /usr/local/bin/docker-compose

This command gives execute permissions to the docker-compose file. Without this step, users won’t be able to run Docker Compose commands.

Verifying Successful Installation

The final step is to confirm that Docker Compose installed correctly. Users can do this by checking the version number.

To verify the installation:

  1. Open a terminal
  2. Run the command:
docker-compose --version

If installed correctly, this command will display the version of Docker Compose. For example:

docker-compose version 1.29.2, build 5becea4c

If users see this output, Docker Compose is ready to use. If an error appears, they may need to check the previous steps or consult Docker’s documentation for troubleshooting.

Using Docker Compose

Docker Compose simplifies the management of multi-container applications. It uses YAML files to define services and allows easy deployment with a single command.

Understanding Compose YAML Files

Compose YAML files are the core of Docker Compose. These files, typically named docker-compose.yml, define the structure of your application. They specify services, networks, and volumes.

A basic docker-compose.yml file starts with a version declaration. It then lists services, each with its own configuration. Here’s a simple example:

version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example

This file defines two services: a web server using Nginx and a MySQL database. Each service has its own settings, like image and port mappings.

Defining a Multi-Container Application

Multi-container applications in Docker Compose link different services together. Each service runs in its own container but can communicate with others.

To define a service, you specify its image, environment variables, ports, and other settings. You can also build custom images using a Dockerfile. Here’s an example:

services:
  app:
    build: ./app
    ports:
      - "3000:3000"
  database:
    image: postgres
    environment:
      POSTGRES_DB: myapp

This setup creates an app service from a local Dockerfile and connects it to a PostgreSQL database. Docker Compose handles the networking, allowing the app to connect to the database easily.
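
For the build: ./app entry to work, the ./app directory must contain a Dockerfile. Its contents depend entirely on your application; as a hypothetical sketch for a Node.js app whose entry point is server.js and which listens on port 3000, it might look like this:

# Hypothetical Dockerfile for the ./app service
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./      # copy the dependency manifest first so this layer can be cached
RUN npm install
COPY . .                   # then copy the rest of the application code
EXPOSE 3000
CMD ["node", "server.js"]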

Running Docker Compose Commands

Docker Compose offers several commands to manage your application. Here are some key commands:

  • docker-compose up: Starts all services defined in the compose file.
  • docker-compose down: Stops and removes the containers and networks it created (add -v to also remove volumes).
  • docker-compose build: Builds or rebuilds services.
  • docker-compose ps: Lists the project's containers.

(This guide installs the standalone docker-compose binary, so the commands are written with a hyphen. The newer Compose plugin uses a space instead, for example docker compose up.)

To start your application, navigate to the directory with your docker-compose.yml file and run:

docker-compose up -d

The -d flag runs containers in the background. To view logs, use docker-compose logs. To stop services without removing them, run docker-compose stop.

Docker Compose simplifies complex setups. It’s great for development environments and can also be used in production with proper configuration.

Common Management Commands

Docker Compose offers several key commands for managing multi-container applications. These commands help control services, view logs, and monitor running containers.

Starting and Stopping Services

To start services defined in a docker-compose.yml file, use the docker-compose up command. This launches all containers specified in the configuration.

For background execution, add the -d flag:

docker-compose up -d

To stop running services, use:

docker-compose down

This command stops and removes the containers and networks created by ‘up’. Named volumes are kept unless you explicitly ask for them to be removed.
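
To remove the named volumes declared in the Compose file as well, add the -v flag:

docker-compose down -v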

To restart a specific service:

docker-compose restart service_name

Viewing Logs of Services

Docker Compose makes it easy to view logs from your services. The logs command displays log output from all services.

To view logs for all services:

docker-compose logs

For real-time log updates, add the -f (follow) flag:

docker-compose logs -f

To view logs for a specific service:

docker-compose logs service_name

Listing Running Containers

To see which containers are currently running, use the ps command:

docker-compose ps

This displays a list of containers, their status, and ports.

To also include stopped containers (such as ones created by docker-compose run):

docker-compose ps -a

To list just the names of the services defined in the Compose file:

docker-compose ps --services

These commands help monitor the state of your Docker Compose environment and manage services effectively.

Working with Docker Images

Docker images are the foundation of containers. They contain the application code, runtime, libraries, and dependencies needed to run applications. Let’s explore how to pull, build, and manage Docker images on Ubuntu 20.04.

Pulling Images from Docker Hub

Docker Hub is a central repository for Docker images. To pull an image, use the docker pull command:

docker pull ubuntu:20.04

This downloads the Ubuntu 20.04 image. To see all downloaded images, run:

docker images

Users can also search for images on Docker Hub using:

docker search nginx

This lists available NGINX images. To use a pulled image, create a container:

docker run -it ubuntu:20.04 /bin/bash

This starts an interactive Ubuntu 20.04 container.

Building Images from a Dockerfile

A Dockerfile defines how to build a custom image. Create a file named Dockerfile:

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

This Dockerfile creates an image with NGINX installed. To build it:

docker build -t my-nginx-image .

The -t flag tags the image. The . specifies the build context (current directory).

To view the new image:

docker images

Run a container from the custom image:

docker run -d -p 8080:80 my-nginx-image

This starts an NGINX container, mapping port 8080 on the host to port 80 in the container.
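
To confirm the container is serving traffic, request the default page from the host:

curl http://localhost:8080

If everything worked, this prints the HTML of the NGINX welcome page.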

Managing Image Storage

Docker images can consume significant disk space. To view image sizes, use the following command:

docker images --format "{{.Repository}}: {{.Size}}"

Remove unused images to free up space with the command:

docker image prune

This command removes dangling images (untagged images that are not referenced by any container).

To remove a specific image, use the command:

docker rmi image_name

Be careful when removing images: Docker will refuse to delete an image that is still used by a container unless you remove that container first or force the removal with -f.

To keep images small, use multi-stage builds in Dockerfiles. This technique produces leaner images by discarding build tools and intermediate artifacts that are not needed at runtime.
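
As an illustration, here is a hedged two-stage Dockerfile sketch for a hypothetical Go program: the first stage compiles the binary with the full toolchain, and the final image keeps only the compiled artifact:

# Stage 1: build the binary using the full Go toolchain (hypothetical app in the current directory)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: copy only the compiled binary into a small runtime image
FROM debian:bookworm-slim
COPY --from=build /bin/app /usr/local/bin/app
CMD ["app"]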

Networking and Data Persistence

Docker Compose simplifies network setup and data storage for containerized apps. It handles port mapping and volume creation, making services accessible and data persistent.

Handling Port Redirection and Accessibility

Docker Compose manages port redirection to make services available outside containers. In the docker-compose.yml file, use the “ports” directive to map container ports to host ports.

For a web server like Nginx, you might add:

services:
  web:
    image: nginx
    ports:
      - "8080:80"

This maps port 80 in the container to port 8080 on the host. Users can now access the Nginx server at localhost:8080.

Docker Compose also creates a default network for your services. This allows containers to communicate using service names as hostnames.
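
For example, with the Compose file shown earlier, code running in the web container could reach MySQL through the hostname database; a hypothetical connection string (using the placeholder credentials from that file) might look like:

mysql://root:example@database:3306/myapp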

Defining Shared Volumes for Services

Volumes in Docker Compose let you persist data and share it between containers. Use the “volumes” directive to create and mount volumes.

To set up a shared volume for a database service, use the following configuration:

services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

This creates a named volume “pgdata” and mounts it to the database’s data directory. The volume exists outside the container, ensuring data survives container restarts or removals.

You can also share volumes between services. This is useful for scenarios like sharing config files or static assets in a web application.
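
As a hedged sketch, two services can mount the same named volume. Here a hypothetical app container writes static files into a volume that an Nginx container serves read-only:

services:
  app:
    build: ./app                  # hypothetical service that generates files into /static
    volumes:
      - static-files:/static
  web:
    image: nginx
    volumes:
      - static-files:/usr/share/nginx/html:ro   # serve the same files read-only

volumes:
  static-files: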

Advanced Topics

Docker Compose offers powerful features for managing complex multi-container applications. Environment variables and orchestration tools can enhance your Docker Compose workflow.

Incorporating Environment Variables

Environment variables allow for flexible configuration of Docker containers. Docker Compose can use these variables to customize container behavior without changing the Compose file.

To use environment variables, add them to a .env file in the same directory as your docker-compose.yml. Docker Compose will automatically read this file.

Example .env file:

DB_PASSWORD=secretpassword
APP_PORT=8080

In your docker-compose.yml, reference these variables using ${VARIABLE_NAME}:

services:
  webapp:
    image: myapp
    ports:
      - "${APP_PORT}:8080"
  database:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}

This setup allows for easy changes to container configs without editing the Compose file.
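
Compose also supports default values in variable substitution, so the file still works when a variable is missing from the .env file. For example, this hypothetical snippet falls back to port 8080 if APP_PORT is not set:

services:
  webapp:
    image: myapp
    ports:
      - "${APP_PORT:-8080}:8080"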

Integrating with Orchestration Tools

Docker Compose works well with orchestration tools like Kubernetes for managing large-scale deployments.

Kubernetes can use Docker Compose files as a starting point for creating deployments. The kompose tool converts Compose files to Kubernetes resources.

To use kompose:

  1. Install kompose on your system
  2. Run kompose convert -f docker-compose.yml
  3. Apply the generated Kubernetes files with kubectl apply -f

This process creates Kubernetes deployments, services, and other resources based on your Compose file. It simplifies the transition from local development to production Kubernetes clusters.
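
Concretely, the conversion might look like the following sketch, assuming kompose and kubectl are installed and kubectl is pointed at a cluster:

kompose convert -f docker-compose.yml -o k8s/   # write Kubernetes manifests into ./k8s
kubectl apply -f k8s/                           # create the deployments and services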

Orchestration tools add features like automatic scaling, rolling updates, and self-healing to your containerized applications. They build on Docker Compose’s simplicity while providing robust production-ready solutions.

Best Practices for Docker Compose

Docker Compose makes it easy to manage multi-container applications. To get the most out of it, follow these best practices:

  • Use version control for your docker-compose.yml file. This helps track changes and collaborate with others.
  • Keep your Compose file simple. Only include necessary services and configurations.
  • Use environment variables for sensitive information. This improves security and makes your setup more flexible.
  • Name your services clearly. This makes your Compose file easier to read and understand.
  • Set resource limits for containers. This prevents one container from using too much memory or CPU.
  • Use health checks to ensure your services are running correctly. (A short sketch covering both of these points appears after this list.)
  • For local development, use volumes to mount your code. This allows you to make changes without rebuilding containers.
  • Optimize your Docker images to improve performance. Use multi-stage builds and minimize the number of layers.
  • Keep your services isolated. Each service should have a single responsibility.
  • Use networks to control communication between containers. This improves security by limiting unnecessary connections.
  • Regularly update your Docker Compose version to get the latest features and security patches.

Uninstalling Docker Compose

Removing Docker Compose from your Ubuntu 20.04 system involves uninstalling the binary and cleaning up related resources. This process ensures a complete removal of the tool.

Removing Docker Compose Binary

How you uninstall Docker Compose depends on how it was installed. If it was installed as the Docker CLI plugin, it can be removed with the package manager. Open a terminal and run the following command:

sudo apt-get remove docker-compose-plugin

This command removes the Docker Compose plugin from the system. If Docker Compose was installed manually as a standalone binary, as in this guide, delete the file instead:

sudo rm /usr/local/bin/docker-compose

Users should verify the removal by checking the Docker Compose version:

docker-compose --version

If the uninstallation was successful, this command will fail with a “command not found” error.

Cleaning Up Related Resources

After removing the Docker Compose binary, it’s important to clean up related resources.

This includes removing Docker Compose configuration files and any unused containers, networks, or volumes.

Docker Compose itself keeps very little global configuration. The ~/.docker/config.json file belongs to the Docker CLI and stores settings such as registry credentials, so only remove it if you no longer need those:

rm ~/.docker/config.json

Users can remove unused Docker resources with these commands:

docker system prune -a
docker volume prune

These commands delete unused containers, networks, images, and volumes. Be cautious, as this action is irreversible.

Lastly, check for any remaining Docker Compose files in projects and remove them if no longer needed.

These files are typically named docker-compose.yml.
