RSE2019
Labs
Academic year 2019/2020
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight because they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines. You can even run Docker containers within host machines that are actually virtual machines!
Docker provides tooling and a platform to manage the lifecycle of your containers:
Docker Engine is a client-server application with these major components:
The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.
The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
Note: Docker is licensed under the open source Apache 2.0 license.
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling up or tearing down applications and services as business needs dictate, in near real time.
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals. Docker is perfect for high density environments and for small and medium deployments where you need to do more with fewer resources.
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.
IMAGES
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
CONTAINERS
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
Example: the docker run command

The following command runs an ubuntu container, attaches it interactively to your local command-line session, and runs /bin/bash. The -i flag keeps the container's standard input open, and -t allocates a pseudo-terminal.
$ sudo docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
labiot@labiotVM:~$ sudo docker run -i -t ubuntu /bin/bash
[sudo] contraseña para labiot:
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
7ddbc47eeb70: Pull complete
c1bbdc448b72: Pull complete
8c3b70e39044: Pull complete
45d437916d57: Pull complete
Digest: sha256:6e9f67fa63b0323e9a1e587fd71c561ba48a034504fb804fd26fd8800039835d
Status: Downloaded newer image for ubuntu:latest
root@9306f9cd711c:/#
Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while the output is logged to your terminal. When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.

1.- Run some basic bash commands (e.g., ls, pwd, …). What do you get?
SERVICES
Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and the daemons all communicate using the Docker API. A service allows you to define the desired state, such as the number of replicas of the service that must be available at any given time. By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application.
One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.
This lab session introduces some basic Docker networking concepts and prepares you to design and deploy your applications to take full advantage of these capabilities.
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.

host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly.

overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons.

macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.

none: For this container, disable all networking. Usually used in conjunction with a custom network driver.

Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors.

This section includes two different tutorials:
Use the default bridge network demonstrates how to use the default bridge network that Docker sets up for you automatically. This network is not the best choice for production systems.
Use user-defined bridge networks shows how to create and use your own custom bridge networks, to connect containers running on the same Docker host. This is recommended for standalone containers running in production.
In this example, you start two different alpine containers on the same Docker host and do some tests to understand how they communicate with each other.
FYI: alpine Linux is an independent, non-commercial, general-purpose Linux distribution designed for users who appreciate security, simplicity and resource efficiency. This makes it smaller and more resource-efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB, and a minimal installation to disk requires around 130 MB of storage.
Open a terminal window. List current networks before you do anything else. Here’s what you should see if you’ve never added a network or initialized a swarm on this Docker daemon. You may see different networks, but you should at least see these (the network IDs will be different):
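The listing referred to above can be produced with docker network ls; on a fresh installation the output looks roughly like this (the IDs shown are examples and will differ on your machine):

```shell
$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local
```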
The default bridge network is listed, along with host and none. The latter two are not fully-fledged networks, but are used to start a container connected directly to the Docker daemon host’s networking stack, or to start a container with no network devices.
In this lab you will connect two containers to the bridge network.
Start two alpine containers running ash, which is Alpine’s default shell rather than bash. The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Since you are starting it detached, you won’t be connected to the container right away. Instead, the container’s ID will be printed. Because you have not specified any --network flags, the containers connect to the default bridge network.
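The two containers described above can be started like this (the names alpine1 and alpine2 match those used in the questions below):

```shell
# Detached, interactive, with a TTY; no --network flag, so both
# containers join the default bridge network.
sudo docker run -dit --name alpine1 alpine ash
sudo docker run -dit --name alpine2 alpine ash
```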
2.- What differences did you observe between one command execution and the other? What causes them?
Check that both containers are actually started:
Inspect the bridge network to see what containers are connected to it:
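A sketch of the two verification commands:

```shell
# List running containers; both alpine1 and alpine2 should appear.
sudo docker container ls

# Show the bridge network's subnet, its gateway, and the containers
# currently attached to it.
sudo docker network inspect bridge
```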
3.- What IP address is assigned to the gateway between the Docker host and the network? What IP addresses do the containers 'alpine1' and 'alpine2' have?
The containers are running in the background. Use the docker attach command to connect to 'alpine1'.
The prompt changes to # to indicate that you are the root user within the container. Use the ip addr show command to show the network interfaces for 'alpine1' as they look from within the container:
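The two steps above look like this in practice:

```shell
# From the host: attach to the running alpine1 container.
sudo docker attach alpine1

# Inside the container, at the # prompt: list the network interfaces.
ip addr show
```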
The first interface is the loopback device. Ignore it for now. Notice that the second interface has the IP address 172.17.0.2, which is the same address shown for 'alpine1' in the previous step.
From within alpine1, make sure you can connect to the internet by pinging google.com. The -c 2 flag limits the command to two ping attempts.
Now try to ping the second container. First, ping it by its IP address, 172.17.0.3:
This succeeds. Next, try pinging the 'alpine2' container by container name… this will fail.
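From inside alpine1, the three ping tests look like this (the IP address of alpine2 may differ on your host; check it in the output of docker network inspect):

```shell
# Internet connectivity, limited to two attempts by -c 2.
ping -c 2 google.com

# Ping alpine2 by IP address: this should succeed.
ping -c 2 172.17.0.3

# Ping alpine2 by container name: on the default bridge network this
# fails with an error such as "bad address 'alpine2'".
ping -c 2 alpine2
```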
Detach from alpine1 without stopping it by using the detach sequence, CTRL + p CTRL + q (hold down CTRL and type p followed by q).
Stop and remove both containers.
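For example:

```shell
# Stop both containers, then remove them.
sudo docker container stop alpine1 alpine2
sudo docker container rm alpine1 alpine2
```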
In this section, we again start two alpine containers, but attach them to a user-defined network called 'alpine-net' which we will create first. These containers are not connected to the default bridge network at all. We then start a third alpine container which is connected to the bridge network but not connected to 'alpine-net', and a fourth alpine container which is connected to both networks.
4.- Draw a diagram of the network you are going to use.
Create the 'alpine-net' network. The --driver bridge flag is not necessary, since it is the default option; it is used in this example just to show how to specify it.
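For example:

```shell
# Create a user-defined bridge network; --driver bridge is the
# default and is spelled out here only for clarity.
sudo docker network create --driver bridge alpine-net
```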
List Docker’s networks:
5.- Determine the IP address of the 'alpine-net' network, and whether it has any containers connected, using the 'inspect' command you used earlier. What address does the default bridge have?
Create your four containers. Notice the --network flags. You can only connect to one network during the docker run command, so you need to use docker network connect afterward to connect 'alpine4' to the bridge network as well.
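The four containers described above can be created like this:

```shell
# alpine1 and alpine2: connected only to alpine-net.
sudo docker run -dit --name alpine1 --network alpine-net alpine ash
sudo docker run -dit --name alpine2 --network alpine-net alpine ash

# alpine3: no --network flag, so it joins the default bridge network.
sudo docker run -dit --name alpine3 alpine ash

# alpine4: started on alpine-net, then additionally connected to the
# bridge network (docker run accepts only one --network flag).
sudo docker run -dit --name alpine4 --network alpine-net alpine ash
sudo docker network connect bridge alpine4
```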
Verify that all containers are running:
Inspect the bridge network and the alpine-net network again:
You will see that containers 'alpine3' and 'alpine4' are connected to the bridge network, and that containers 'alpine1', 'alpine2', and 'alpine4' are connected to the alpine-net network.
On user-defined networks like alpine-net, containers can both communicate by IP address, and can also resolve a container name to an IP address. This capability is called automatic service discovery.
6.- Connect to 'alpine1' and, using 'ping', check that the names 'alpine2' and 'alpine4' are automatically resolved to IP addresses. Describe all the steps you followed.
From 'alpine1', you should not be able to connect to 'alpine3' at all, since it is not on the 'alpine-net' network.
Not only that, but you can’t connect to 'alpine3' from 'alpine1' by its IP address either.
7.- Determine the IP address of 'alpine3' and try to ping it. What result do you get?
Detach from 'alpine1' using the detach sequence (CTRL + p CTRL + q).
Remember that 'alpine4' is connected to both the default bridge network and 'alpine-net'. It should be able to reach all of the other containers. However, you will need to address 'alpine3' by its IP address. Attach to 'alpine4' and run the tests.
8.- Connect to 'alpine4' and try to 'ping' the other machines. Can you reach all of them? By IP address, or by name?
9.- As a final test, check that all the containers can connect to the Internet by pinging google.com. You are already attached to 'alpine4', so start by trying from there. Next, detach from 'alpine4', attach to 'alpine3' (which is only connected to the bridge network) and try again. Finally, attach to 'alpine1' (which is only connected to the 'alpine-net' network) and try again.
Stop and remove all containers and the alpine-net network.
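The final clean-up, as a sketch:

```shell
# Stop and remove all four containers, then delete the
# user-defined network.
sudo docker container stop alpine1 alpine2 alpine3 alpine4
sudo docker container rm alpine1 alpine2 alpine3 alpine4
sudo docker network rm alpine-net
```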
With this lab session you learnt the basics of using containers and how to connect them using user-defined networks.