# Docker Basics
###### tags: `talleres_mln`
In this workshop you will learn the basics of Docker, how to build custom images, work with networking and communication between containers, and use Docker Compose to orchestrate your containers. With practical examples and useful commands, you'll gain the skills you need to get the most out of upcoming Mastering in Lightning workshops.
**Table of Contents**
[TOC]
## Author
Twitter for corrections, comments or suggestions: [@ibiko1](https://twitter.com/ibiko1)
This tutorial was prepared for the [Mastering Lightning Socratic Seminar](https://libreriadesatoshi.com/) through [@libreriadesatoshi](https://twitter.com/libdesatoshi).
## Requirements :information_source:
:::info
1. Have a Debian-based Linux distribution installed, preferably Ubuntu.
2. For Windows, be on Windows 10 version 2004 and later (build 19041 and later) or Windows 11
3. For Mac, have a Mac with an Intel chip (this guide's walkthrough uses Intel; Apple Silicon is also supported).
:::
## Introduction
Docker is a virtualization tool that allows you to create and manage containers, isolated and autonomous environments that contain everything necessary for an application or service to work smoothly on different operating systems.
In this tutorial, we will lay the necessary foundation so that you can use Docker machines specifically configured for the Lightning Network in subsequent workshops. You'll learn the basics of Docker, from installing to running containers, and explore how Docker can help you create and maintain consistent, portable environments.
With Docker, you can quickly deploy a Lightning Network virtual machine on your system, without having to deal with complex configurations and specific network requirements. This will allow you to focus on experimenting with the Lightning Network, without worrying about the technical details of the underlying environment.
:::success
1. **Installation**:
- On **Windows**: Download and install Docker Desktop, using the WSL 2 backend (Windows Subsystem for Linux)
- On **Ubuntu Linux**: Install Docker following the command guide.
- On **macOS**: Download and install Docker Desktop.
2. **Docker Basics**:
- Images: You will learn how to find, download and use Docker images.
- Containers: You will discover how to create, run and stop containers in Docker, and how to manage their state.
- Volumes: You will learn how to use volumes in Docker to persist data beyond the lifecycle of a container.
3. **Building custom images**:
- *Dockerfile*: You will learn how to create a Dockerfile file, which defines the steps necessary to build a custom image.
- Image building: You will use the Dockerfile to build a custom image and then run it in a container.
4. **Networks and communication between containers**:
- Networking in Docker: You will explore how Docker handles networking and how you can connect containers together.
- Communication between containers: You will learn to establish communication and links between containers to create complex applications.
5. **Container Orchestration with Docker Compose**:
- Docker Compose: Introduction to Docker Compose, a tool for defining and managing multi-container applications.
- Configuring services: You will learn how to define services in a YAML file and run them using Docker Compose.
:::
## Installation
### On Windows
In this section, I will guide you through the process of installing Docker on Windows using WSL 2 (Windows Subsystem for Linux 2). WSL 2 is a compatibility layer that allows you to run a Linux environment on your Windows system, which is especially useful for using Docker efficiently.
Combining Docker with WSL 2 on Windows gives you the ability to use Docker in a native Linux environment within your Windows operating system. This will allow you to take advantage of all the benefits of Docker and use Docker images and commands transparently as if you were in a pure Linux environment.
In this tutorial, you'll learn how to configure WSL 2 on your Windows system, install a supported Linux distribution such as Ubuntu, and then install Docker within WSL 2. Once you've completed these steps, you'll be ready to start using Docker on your Windows environment with all the functionalities and benefits it offers.
Whether you are new to Docker or have prior experience, this tutorial will take you through the steps necessary to configure Docker on Windows using WSL 2. Let's begin the installation process and get ready to get the most out of Docker in your Windows environment!
#### Requirements
The first thing we must enable on our computer is virtualization at the BIOS level. Here are a few example guides; the exact steps vary by manufacturer:
* [Generic tutorial](https://bce.berkeley.edu/enabling-virtualization-in-your-pc-bios.html).
* [HP Tutorial](https://support.hp.com/co-es/document/ish_5637144-5698274-16#:~:text=Turn%20on%20the%20computer%20and%20press,Select%20Enable)
* [ASUS Tutorial](https://www.asus.com/latin/support/FAQ/1043786/)
The next step is to enable "Virtual Machine Platform" and the "[Windows Subsystem for Linux](https://learn.microsoft.com/es-es/windows/wsl/install)" in the Windows configuration.
To do this we do the following:
* We go to Start and open the Control Panel.
* Then click "Programs and Features" and then "Turn Windows features on or off."
* And towards the end of the list we look for the options indicated in the screenshot and mark them:

After applying the changes, Windows will ask us to restart. After restarting, we check the Performance tab of the Task Manager to confirm everything is activated correctly, looking for "Virtualization: Enabled".

#### Installing WSL
Now you can install everything you need to run WSL with a single command. Open PowerShell or the Windows Command Prompt in administrator mode by right-clicking and selecting "Run as administrator," and use the following command to update WSL.
```
wsl --update
```

We shut down WSL so it restarts cleanly on next use:
```
wsl --shutdown
```
We install a machine with the Ubuntu distribution:
```
wsl --install -d Ubuntu
```

This command will enable the features necessary to run WSL and install the Ubuntu Linux distribution. This Linux installation is not a Docker machine yet, but it is a necessary requirement for Docker to work as you progress through this tutorial.
The first time you start a freshly installed Linux distribution, a console window will open and you will be asked to wait while the files are unpacked and stored on your machine. Make sure you have enough disk space for your machines.

##### WSL Commands (Optional)
* See the List of available Linux distributions:
```bash=
wsl --list --online
```
You will see a list of Linux distributions available through the online store.
* List of installed Linux distributions
```bash=
wsl --list --verbose
```
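If you install more distributions later, you can also set WSL 2 as the default version (on up-to-date systems this is usually already the default):
```bash=
wsl --set-default-version 2
```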
#### Docker installation
We download the installer (Docker Desktop Installer.exe) from [Docker Hub](https://docs.docker.com/desktop/install/windows-install/) by clicking the "Docker Desktop for Windows" button.

Double-click Docker Desktop Installer.exe to run the installer and follow the steps.
:::info
If your administrator account is different from your user account, you must add the user to the **"docker-users"** group. Run "Computer Management" as administrator and go to "Local Users and Groups" > "Groups" > "docker-users". Right click to add the user to the group. Sign out and sign back in for the changes to take effect.
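Alternatively, a sketch of the same step from an elevated PowerShell (replace the placeholder with your actual account name):
```bash=
net localgroup docker-users "your-username" /add
```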
:::
#### Start Docker
Docker Desktop does not start automatically after installation. To start Docker Desktop:

1. Search for Docker and select "Docker Desktop" in the search results.
2. The Docker menu displays the Docker Subscription Service Agreement window.
3. Select "OK" to continue. Docker Desktop will start after you accept the terms.

After this, we open the Windows terminal ("cmd" or PowerShell) and launch the hello-world test container.
```bash=
docker run hello-world
```
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
You have successfully installed and started Docker Engine and launched a hello-world container.
### On Ubuntu Linux
Docker works on x86_64/amd64 machines and on most distributions. Here are links to the Docker pages for [Fedora](https://docs.docker.com/desktop/install/fedora/) and [Debian](https://docs.docker.com/desktop/install/debian/).
In this workshop we will focus on the installation in [Ubuntu](https://docs.docker.com/desktop/install/ubuntu/), since it is usually the most widespread.
#### Requirements
Here you can check that [KVM virtualization support](https://docs.docker.com/desktop/install/linux-install/#kvm-virtualization-support) is active. Modern distributions usually ship with it enabled, but it doesn't hurt to check, since Docker Desktop runs a virtual machine that requires it.
#### Docker installation
The easiest approach is to add the Docker repository to Ubuntu and install using apt-get. You can see how to do it in the Docker [guide](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository). In any case, I'll summarize the steps for you.
##### Add repository
1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:
```bash=
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
```
2. Add the official Docker GPG key:
```bash=
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
```
3. Use the following command to configure the repository:
```bash=
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
##### Install Docker Engine
1. Update the apt package index:
```bash=
sudo apt-get update
```
2. Install Docker Engine, containerd and Docker Compose:
```bash=
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
3. Verify that the Docker Engine installation was successful by running the hello-world image.
```bash=
sudo docker run hello-world
```
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
You have successfully installed and started **Docker Engine**.
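Optionally, if you want to run docker without prefixing sudo, Docker's post-installation steps suggest adding your user to the docker group; log out and back in for the change to take effect:
```bash=
sudo usermod -aG docker $USER
```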
### On macOS
Docker works on both machines with an Intel chip and machines with an Apple Silicon chip (M1, M2, ...).
In this workshop we will focus on the installation on [Intel](https://docs.docker.com/desktop/install/mac-install/#mac-with-intel-chip), because it is the model that I have, but [here](https://docs.docker.com/desktop/install/mac-install/#mac-with-apple-silicon) you can see how to do it for Apple Silicon.
#### Requirements
macOS must be version 11 or later. That's Big Sur (11), Monterey (12) or Ventura (13). It is recommended to update to the latest version of macOS.
Do not install VirtualBox prior to version 4.3.30, as it is not compatible with Docker Desktop.
#### Installing Docker Desktop
1. Download the Docker Desktop version for Intel chip.

2. Double-click **Docker.dmg** to open the installer, then drag the Docker icon to the Applications folder.

3. Double-click Docker.app in the Applications folder to start Docker.
4. The Docker menu displays the Docker Subscription Service Agreement window. Accept the terms to continue.
5. In the installation window, select the recommended settings (password required). This allows Docker Desktop to automatically apply the necessary configuration.
6. Select Finish. Docker will start; it may take a while depending on your machine.

#### Check installation
We open the Mac terminal and run the following command.
```bash=
docker run hello-world
```

If we see "Hello from Docker!", our installation has worked correctly and we now have Docker on our machine.
## Docker basics
Here is a brief explanation of the basic components that make up Docker. These concepts of images, containers, and volumes are fundamental to understanding how Docker works.
### Images
- Docker images are templates that contain everything necessary to run an application or service. An image includes the operating system, libraries, dependencies, and application code, as well as any additional configuration required.
- You can search and download images from [Docker Hub](https://hub.docker.com/), the public image repository, or even create your own custom images using a Dockerfile.
### Containers
- Containers in Docker are running instances of an image. A container is an isolated, self-contained environment that runs independently on the host operating system. Each container has its own file system, processes, and resources, but shares the same host operating system kernel.
- Containers are lightweight, portable and scalable. You can create, start, stop, restart, and delete containers quickly and easily. Containers allow you to encapsulate and distribute applications along with all their dependencies.
### Volumes
- Volumes in Docker are storage mechanisms that allow data to be persisted beyond the lifecycle of a container. Volumes are independent of containers and can be used to share and store data in a durable manner.
- You can create volumes in Docker and associate them with running containers, they are useful when you want to keep data intact even if the container is stopped or deleted.
Each component has associated commands that we can use, [here](https://docs.docker.com/get-started/docker_cheatsheet.pdf) I leave you the "cheatsheet" with the main ones.
## Basic Docker commands
### Images
- Let's search Docker Hub for the [nginx](https://hub.docker.com/_/nginx) web server image:
```bash=
docker search nginx
```

- We download the official image:
```bash=
docker pull nginx
```

- We list the images downloaded to our computer:
```bash=
docker images
```

- We create a container from an nginx image and publish it to port 80:
```bash=
docker run -d -p 80:80 nginx
```
If everything has gone well, open localhost:80 in your browser and the following will appear:

Our nginx web server is running on Docker!
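If you prefer to check from the terminal, a quick curl against the published port should return the nginx welcome page:
```bash=
curl -s http://localhost:80 | head
```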
### Containers
- We list the running containers:
```bash=
docker ps
```
Notice that our nginx container now has a "CONTAINER ID" and a name under "NAMES" to identify it; with either of these we can run the following commands.

If we do not specify a name for the container, Docker assigns it a randomly generated, whimsical name. To specify a name ourselves, we create the container like this:
```bash=
docker run -d --name mynginx1 -p 80:80 nginx
```
- We list all containers (including stopped ones):
```bash=
docker ps -a
```
- We stop the running nginx container (we must use what appears in NAMES or the CONTAINER_ID):
```bash=
docker stop mynginx1
```
- Start a stopped container:
```bash=
docker start mynginx1
```
- Delete a stopped container:
```bash=
docker rm mynginx1
```
- We execute the command 'df -lh' inside a running container:
```bash=
docker exec -it mynginx1 df -lh
```

- We can also use the 'sh' command to access the terminal of our container with:
```bash=
docker exec -it mynginx1 sh
```

To exit we use 'exit'.
### Volumes in Docker
- We create a volume called volume_config:
```bash=
docker volume create volume_config
```
- List the volumes created:
```bash=
docker volume ls
```

- We associate the volume volume_config with a new container called 'mynginx2', based on the nginx image and published on host port 90 (nginx listens on port 80 inside the container):
```bash=
docker run -d --name mynginx2 -v volume_config:/mnt -p 90:80 nginx
```
We enter our container and create a file called test_file in the /mnt directory with the text "test data".
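For example, a minimal way to do this with docker exec (the file name and content are just illustrative):
```bash=
docker exec mynginx2 sh -c 'echo "test data" > /mnt/test_file'
docker exec mynginx2 cat /mnt/test_file
```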

If we delete that container, the data on the volume will continue to exist, so nothing is lost. Remember that when you delete a container, all changes made inside it disappear unless they were saved on a volume.
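A quick sketch of how you could verify this, reusing the mynginx2 container and test_file from above (mynginx3 is just an example name):
```bash=
docker rm -f mynginx2
docker run -d --name mynginx3 -v volume_config:/mnt -p 90:80 nginx
# the file written by the old container is still on the volume:
docker exec mynginx3 cat /mnt/test_file
```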
## Custom Image Building
1. Stop and remove any nginx containers you have running.
2. Create a file called "Dockerfile" in an empty directory.
3. Open the "Dockerfile" file with a text editor and write the following instructions:
```dockerfile=
# Use the Nginx base image
FROM nginx
# Copy the custom configuration file to the correct location
COPY nginx.conf /etc/nginx/nginx.conf
# Copy any static content or website files to the Nginx document folder
COPY website/ /usr/share/nginx/html
```
This example uses the official Nginx base image available on Docker Hub. Next, a custom configuration file (called "nginx.conf") and any static content or website files are copied from the "website" folder to the appropriate location in the container.
4. Create the configuration file "nginx.conf": Create a file called "nginx.conf" in the same directory where the Dockerfile is located.
5. Open the "nginx.conf" file with a text editor and you can use the following basic configuration:
```nginx=
events {
    use epoll;
    worker_connections 128;
}
http {
    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }
}
```
This basic configuration indicates that the Nginx server should listen on port 80, use the "index.html" file as the home page, and serve the files from the "/usr/share/nginx/html" folder in the container.
6. Create the "website" folder and add static files:
* Create a folder called "website" in the same directory where the Dockerfile is located.
* Add any static content or website files you want to serve via Nginx inside this folder.
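For example, a minimal placeholder page:
```bash=
mkdir -p website
echo "<h1>Hello from mi-nginx</h1>" > website/index.html
```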
7. Build the custom image:
* Open a terminal and navigate to the directory containing the Dockerfile.
* Run the following command to build the custom Nginx image:
```bash=
docker build -t mi-nginx .
```

8. Run a container based on the custom image:
Once the build completes successfully, you can run a container based on the custom image using the following command:
```bash=
docker run -p 80:80 mi-nginx
```
This will map port 80 of the container to port 80 of your host machine, allowing you to access the Nginx server from your browser at "http://localhost".

With these steps, you've created a custom Dockerfile for a simple Nginx server. You can further customize the Dockerfile and Nginx configuration to your needs, [here](https://docs.docker.com/engine/reference/builder/) you will find much more information about the capabilities of Dockerfile.
## Networks and communications between containers
One of the powerful features of Docker is its ability to connect and communicate containers with each other using networks. This allows you to create distributed applications and deploy microservices that communicate over internal networks.
Docker provides different types of networking to facilitate communication between containers. Some of the most common types of networks are:
* **Default networks**: Docker creates a default network called "bridge" that allows communication between containers on the same host. On this default network, containers reach each other via the IP addresses Docker assigns (automatic name resolution is only available on user-defined networks).
* **Custom networks**: You can create your own custom networks to isolate and manage communication between containers. This allows you to group containers into specific networks and control their connectivity.
* **External networks**: Docker also allows you to connect containers to external networks, such as the host network or user-defined networks on your system. This makes it easier to communicate with other network resources outside of the Docker environment.
Below I present some commands and brief explanations for working with networks in Docker:
* Create a new custom network in Docker with the specified name:
```bash=
docker network create <network_name>
```
* List all networks available in Docker:
```bash=
docker network ls
```
* Display detailed information about a specific network, including the containers connected to it:
```bash=
docker network inspect <network_name>
```
* Connect an existing container to a specific network:
```bash=
docker network connect <network_name> <container_name>
```
* Disconnect a container from a specific network:
```bash=
docker network disconnect <network_name> <container_name>
```
With these tools, you can create and manage networks in Docker to enable communication between your containers. This facilitates the development of distributed applications and the deployment of interconnected services. In other words, by creating a network and adding containers to it, they can share the resources they need while remaining isolated from everything else.
Remember that Docker provides many more options and functionalities related to networking. You can explore the official Docker documentation to learn more and delve deeper into the topic.
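As a minimal end-to-end sketch (the network and container names here are just examples): create a network, attach a container, and check that another container can reach it by name, since user-defined networks provide DNS resolution.
```bash=
docker network create mynetwork
docker run -d --name web1 --network mynetwork nginx
# 'web1' resolves by name thanks to Docker's embedded DNS:
docker run --rm --network mynetwork alpine ping -c 2 web1
```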
## Container orchestration with Docker Compose
**Docker Compose** is a tool that allows you to define and manage multi-container applications. With Docker Compose, you can define your services configuration in a YAML file and then use a single command to start and stop all containers in a coordinated manner.
Below I'll walk you through the basic steps of using Docker Compose by creating an nginx machine with a basic configuration.
### Installation
Make sure you have Docker Compose installed on your system. You can refer to the official Docker documentation for detailed instructions on how to install it.
#### Installing Docker Compose on Ubuntu
1. Open a terminal on your Ubuntu system.
2. Update system packages by running the following command:
```bash=
sudo apt update
```
3. Install Docker Compose by running the following command:
```bash=
sudo apt install docker-compose
```
4. Verify the installation by running the following command to display the Docker Compose version:
```bash=
docker-compose --version
```
#### Installing docker-compose on Mac and Windows
On both Windows and Mac, docker-compose is installed along with Docker Desktop. In any case, verify that it works correctly with this command:
```bash=
docker-compose --version
```
### Configuration file
Create a YAML file called "docker-compose.yml" in your project directory. We will reuse the files we created in the "Custom Image Building" section.

In this file, you will define the configuration of your services, including the service name, the Docker image to use, the exposed ports, and any other relevant settings.
### Definition of services
Inside the "docker-compose.yml" file, use YAML syntax to define your services. Each service is represented as a block of code with its name, image, ports, volumes, and any other necessary configuration. You can define as many services as you need for your application.
```yaml=
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - mynetwork
networks:
  mynetwork:
```
In this example, we have defined a service called "nginx" that uses the Nginx image from Docker Hub. The service exposes port 80 of the container and maps it to port 80 of the host. Additionally, we have defined a volume to mount a custom configuration file called "nginx.conf" in the path "/etc/nginx/nginx.conf" inside the container. Finally, we have created a network called "mynetwork" to allow communication between services.
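Before starting anything, you can ask Compose to validate the file and print the resolved configuration; syntax errors in the YAML show up here:
```bash=
docker-compose config
```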
### Basic commands
Once you have defined your "docker-compose.yml" file, you can use the following commands to manage your services:
Start all services defined in your configuration file.
```bash=
docker-compose up -d
```

Show the status of your service containers.
```bash=
docker-compose ps
```
Stop and remove all containers for your services.
```bash=
docker-compose down
```

Show the log records of your services.
```bash=
docker-compose logs
```
These are just some of the most commonly used basic commands with Docker Compose. There are also other more advanced options and commands that you can explore according to your needs.
### Add a new service
Now that we have an nginx service working, let's add a new database service, for example mysql. We would have to do the following:
1. Delete the docker-compose.yml file and create a new one with this content:
```yaml=
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - mynetwork
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
    networks:
      - mynetwork
networks:
  mynetwork:
```
We have added a "mysql" service that uses the "mysql" image from Docker Hub and placed it on the same network as the "nginx" service.
2. Launch "docker-compose up -d"; the first thing it will do is download the MySQL Docker image:

3. Check that both the nginx and mysql containers are working:

As you can see, **Docker Compose** greatly simplifies the management of multi-container applications, allowing you to easily orchestrate and coordinate their operation. It is especially useful for development and testing environments, as well as local deployments.
Remember that you can find more detailed information and practical examples in the official Docker Compose documentation.
## Links
Here I leave you some links to complement the information of this workshop.
- [Docker Curriculum](https://docker-curriculum.com/)
- [Docker CLI Reference](https://docs.docker.com/engine/reference/run/)
- [PeladoNerd's Docker Course](https://www.youtube.com/watch?v=CV_Uf3Dq-EU&ab_channel=PeladoNerd)
- [FaztCode Docker Guide](https://www.youtube.com/watch?v=NVvZNmfqg6M&ab_channel=FaztCode)
- [WSL 2](https://learn.microsoft.com/es-es/windows/wsl/install)
## :zap: Donations
:::success
If this material was useful to you, you can thank @ibiko1 by sending him a contribution via Lightning Address: ifuensan@ln.tips
:::