# Docker Lab 1: Networking with standalone containers

###### tags: `RSE2019` `Labs`

> 2019/2020 course

## Docker overview

> https://docs.docker.com/engine/docker-overview/

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

### The Docker platform

Docker provides the ability to package and run an application in a loosely isolated environment called a **container**. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight because they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines. You can even run Docker containers within host machines that are actually virtual machines!

Docker provides tooling and a platform to manage the lifecycle of your containers:

* Develop your application and its supporting components using containers.
* The container becomes the unit for distributing and testing your application.
* When you’re ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.

### Docker Engine

Docker Engine is a client-server application with these major components:

* A server, which is a type of long-running program called a daemon process (the `dockerd` command).
* A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
* A command line interface (CLI) client (the `docker` command).
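This client/daemon split can be observed directly: the `docker` CLI is just one client of the daemon's REST API. A minimal sketch (the guard is ours, so the snippet also runs harmlessly on a machine without a reachable daemon):

```shell
# Ask the daemon (dockerd) for its version through the docker CLI client.
# The guard keeps the sketch harmless where no daemon is reachable.
if version=$(docker version --format '{{.Server.Version}}' 2>/dev/null); then
    echo "daemon version: $version"
else
    version="unavailable"
    echo "Docker daemon not reachable"
fi
# The same query can be issued straight against the REST API over the
# daemon's Unix socket (requires permission to read the socket):
#   curl --unix-socket /var/run/docker.sock http://localhost/version
```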
![](https://i.imgur.com/m4jTCCX.png)

The CLI uses the *Docker REST API* to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI. The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.

> Note: Docker is licensed under the open source Apache 2.0 license.

### What can I use Docker for?

#### Fast, consistent delivery of your applications

Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows. Consider the following example scenario:

* Your developers write code locally and share their work with their colleagues using Docker containers.
* They use Docker to push their applications into a test environment and execute automated and manual tests.
* When developers find bugs, they can fix them in the development environment and redeploy them to the test environment for testing and validation.
* When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.

#### Responsive deployment and scaling

Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.

Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling up or tearing down applications and services as business needs dictate, in near real time.

#### Running more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals.
Docker is perfect for high-density environments and for small and medium deployments where you need to do more with fewer resources.

### Docker architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.

![](https://i.imgur.com/0y1nfh2.png)

#### The Docker daemon

The Docker daemon (`dockerd`) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

#### The Docker client

The Docker client (`docker`) is the primary way that many Docker users interact with Docker. When you use commands such as `docker run`, the client sends these commands to `dockerd`, which carries them out. The `docker` command uses the Docker API. The Docker client can communicate with more than one daemon.

#### Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.

When you use the `docker pull` or `docker run` commands, the required images are pulled from your configured registry. When you use the `docker push` command, your image is pushed to your configured registry.

### Docker objects

When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.

**IMAGES**

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
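As a concrete sketch, such a derived image is described by a Dockerfile; every instruction below produces one layer (the package names and paths are illustrative, not taken from this lab):

```dockerfile
# Illustrative Dockerfile (names and paths are assumptions for the sketch).
# Each instruction creates one image layer on top of the previous one.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apache2
COPY ./my-app /var/www/html/
CMD ["apachectl", "-D", "FOREGROUND"]
```

Rebuilding after editing only the `COPY` line reuses the cached `FROM` and `RUN` layers, which is why rebuilds are fast.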
For example, you may build an image which is based on the `ubuntu` image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run. You might create your own images or you might only use those created by others and published in a registry.

To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.

**CONTAINERS**

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

#### Example `docker run` command

The following command runs an `ubuntu` container, keeps STDIN open so you can type into it (`-i`), allocates a pseudo-terminal attached to your local command-line session (`-t`), and runs `/bin/bash`.
```bash
$ sudo docker run -i -t ubuntu /bin/bash
```

When you run this command, the following happens (assuming you are using the default registry configuration):

```
labiot@labiotVM:~$ sudo docker run -i -t ubuntu /bin/bash
[sudo] contraseña para labiot:
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
7ddbc47eeb70: Pull complete
c1bbdc448b72: Pull complete
8c3b70e39044: Pull complete
45d437916d57: Pull complete
Digest: sha256:6e9f67fa63b0323e9a1e587fd71c561ba48a034504fb804fd26fd8800039835d
Status: Downloaded newer image for ubuntu:latest
root@9306f9cd711c:/#
```

1. If you do not have the `ubuntu` image locally, Docker pulls it from your configured registry, as though you had run `docker pull ubuntu` manually.
2. Docker creates a new container, as though you had run a `docker container create` command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
4. **Docker creates a network interface** to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine’s network connection.
5. Docker starts the container and executes `/bin/bash`. Because the container is running interactively and attached to your terminal (due to the `-i` and `-t` flags), you can provide input using your keyboard while the output is logged to your terminal.
6. When you type `exit` to terminate the `/bin/bash` command, the container stops but is not removed. You can start it again or remove it.

:::danger
1.- Run some basic `bash` command (e.g., `ls`, `pwd`, ...). What do you get?
:::

**SERVICES**

Services allow you to scale containers across multiple Docker daemons, which all work together as a **swarm** with multiple managers and workers.
Each member of a swarm is a Docker daemon, and the daemons all communicate using the Docker API. A service allows you to define the desired state, such as the number of replicas of the service that must be available at any given time. By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application.

# Networking with standalone containers

## Networking overview

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.

This lab session introduces some basic Docker networking concepts and prepares you to design and deploy your applications to take full advantage of these capabilities.

### Network drivers

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

* `bridge`: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
* `host`: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly.
* `overlay`: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons.
* `macvlan`: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the `macvlan` driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
* `none`: For this container, disable all networking. Usually used in conjunction with a custom network driver.
* Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors.

## Networking with standalone containers

This section includes two different tutorials:

1. **Use the default bridge network** demonstrates how to use the default bridge network that Docker sets up for you automatically. This network is not the best choice for production systems.
2. **Use user-defined bridge networks** shows how to create and use your own custom bridge networks, to connect containers running on the same Docker host. This is recommended for standalone containers running in production.

### Use the default bridge network

In this example, you start two different `alpine` containers on the same Docker host and do some tests to understand how they communicate with each other.

:::info
FYI: [`alpine`](https://www.alpinelinux.org/about/) Linux is an independent, non-commercial, general-purpose Linux distribution designed for users who appreciate security, simplicity and resource efficiency. This makes it smaller and more resource-efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB, and a minimal installation to disk requires around 130 MB of storage.
:::

**Open a terminal window**. List current networks before you do anything else.
Here’s what you should see if you’ve never added a network or initialized a swarm on this Docker daemon. You may see different networks, but you should at least see these (the network IDs will be different):

```
$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local
```

The default `bridge` network is listed, along with `host` and `none`. The latter two are not fully-fledged networks, but are used to start a container connected directly to the Docker daemon host’s networking stack, or to start a container with no network devices.

**In this lab you will connect two containers to the bridge network.**

Start two `alpine` containers running `ash`, which is Alpine’s default shell rather than `bash`. The `-dit` flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Since you are starting it detached, you won’t be connected to the container right away. Instead, the container’s ID will be printed. Because you have not specified any `--network` flags, the containers connect to the default bridge network.

```
$ sudo docker run -dit --name alpine1 alpine ash
$ sudo docker run -dit --name alpine2 alpine ash
```

:::danger
2.- What differences did you observe between running the first command and the second one? What causes them?
:::

Check that both containers are actually started:

```bash
$ sudo docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
602dbf1edc81        alpine              "ash"               4 seconds ago       Up 3 seconds                            alpine2
da33b7aa74b0        alpine              "ash"               17 seconds ago      Up 16 seconds                           alpine1
```

Inspect the `bridge` network to see what containers are connected to it:

```bash
$ sudo docker network inspect bridge
```

:::danger
3.- What IP address is assigned to the gateway between the Docker host and the network? What IP addresses do the containers 'alpine1' and 'alpine2' have?
:::

The containers are running in the background. Use the `docker attach` command to connect to 'alpine1'.

```bash
$ sudo docker attach alpine1
/ #
```

The prompt changes to `#` to indicate that you are the root user within the container. Use the `ip addr show` command to show the network interfaces for 'alpine1' as they look from within the container:

```bash
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
```

The first interface is the loopback device. Ignore it for now. Notice that the second interface, `eth0`, has the IP address ``, which is the same address shown for 'alpine1' in the previous step.

From within 'alpine1', make sure you can connect to the internet by pinging google.com. The `-c 2` flag limits the command to two ping attempts (the resolved address, shown below as `<resolved-IP>`, depends on your location):

```bash
# ping -c 2 google.com
PING google.com (<resolved-IP>): 56 data bytes
64 bytes from <resolved-IP>: seq=0 ttl=41 time=9.841 ms
64 bytes from <resolved-IP>: seq=1 ttl=41 time=9.897 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.841/9.869/9.897 ms
```

Now try to ping the second container. First, ping it by its IP address ( in this transcript; use the address you found when inspecting the network):

```
# ping -c 2
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.086 ms
64 bytes from seq=1 ttl=64 time=0.094 ms

--- ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.090/0.094 ms
```

This succeeds. Next, try pinging the 'alpine2' container by container name... **this will fail**.
```
# ping -c 2 alpine2
ping: bad address 'alpine2'
```

Detach from 'alpine1' without stopping it by using the detach sequence, `CTRL + p CTRL + q` (*hold down CTRL and type p followed by q*). Stop and remove both containers.

```
$ sudo docker container stop alpine1 alpine2
$ sudo docker container rm alpine1 alpine2
```

### Use user-defined bridge networks

In this section, we again start two `alpine` containers, but attach them to a user-defined network called 'alpine-net', which we create first. These containers are not connected to the default bridge network at all. We then start a third `alpine` container which is connected to the bridge network but not connected to 'alpine-net', and a fourth `alpine` container which is connected to both networks.

:::danger
4.- Draw a diagram of the network you are going to use.
:::

Create the 'alpine-net' network. The `--driver bridge` flag is not necessary since it is the default option; it is used in this example just to show how to specify it.

```
$ sudo docker network create --driver bridge alpine-net
```

List Docker’s networks:

```
$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e9261a8c9a19        alpine-net          bridge              local
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local
```

:::danger
5.- Determine the IP of the 'alpine-net' network and whether it has any containers connected, using the 'inspect' command you used earlier. What address does the default bridge have?
:::

Create your four containers. **Notice the `--network` flags.** You can only connect to one network during the `docker run` command, so you need to use `docker network connect` afterward to connect 'alpine4' to the bridge network as well.
```
$ sudo docker run -dit --name alpine1 --network alpine-net alpine ash
$ sudo docker run -dit --name alpine2 --network alpine-net alpine ash
$ sudo docker run -dit --name alpine3 alpine ash
$ sudo docker run -dit --name alpine4 --network alpine-net alpine ash
$ sudo docker network connect bridge alpine4
```

Verify that all containers are running:

```
$ sudo docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS               PORTS               NAMES
156849ccd902        alpine              "ash"               41 seconds ago       Up 41 seconds                            alpine4
fa1340b8d83e        alpine              "ash"               51 seconds ago       Up 51 seconds                            alpine3
a535d969081e        alpine              "ash"               About a minute ago   Up About a minute                        alpine2
0a02c449a6e9        alpine              "ash"               About a minute ago   Up About a minute                        alpine1
```

Inspect the `bridge` network and the 'alpine-net' network again:

```
$ sudo docker network inspect bridge
```

You will see that containers 'alpine3' and 'alpine4' are connected to the `bridge` network.

```
$ sudo docker network inspect alpine-net
```

This shows that containers 'alpine1', 'alpine2', and 'alpine4' are connected to the 'alpine-net' network.

On user-defined networks like 'alpine-net', containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called **automatic service discovery**.

:::danger
6.- Attach to 'alpine1' and, using 'ping', verify that the names 'alpine2' and 'alpine4' are automatically resolved to IP addresses. Describe all the steps you carried out.
:::

From 'alpine1', you should not be able to connect to 'alpine3' at all, since it is not on the 'alpine-net' network.

```
# ping -c 2 alpine3
ping: bad address 'alpine3'
```

Not only that, but you can’t connect to 'alpine3' from 'alpine1' by its IP address either.

:::danger
7.- Determine the IP of 'alpine3' and try to ping it. What result do you get?
:::

Detach from 'alpine1' using the detach sequence, `CTRL + p CTRL + q`.
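For questions like the ones above, the addresses buried in `docker network inspect` output can be pulled out mechanically instead of by eye. A minimal sketch over a trimmed, illustrative sample of the JSON (the addresses shown are Docker's usual defaults, not necessarily the ones on your host):

```shell
# Trimmed, illustrative fragment of `sudo docker network inspect bridge`
# output; the addresses below are typical defaults, not necessarily yours.
cat > /tmp/bridge-sample.json <<'EOF'
[
    {
        "Name": "bridge",
        "IPAM": {
            "Config": [
                { "Subnet": "", "Gateway": "" }
            ]
        }
    }
]
EOF

# Extract the gateway address from the JSON with grep/sed.
gateway=$(grep -o '"Gateway": "[^"]*"' /tmp/bridge-sample.json \
          | sed 's/.*"Gateway": "\([^"]*\)".*/\1/')
echo "$gateway"   # →
```

On a live system the same value can be obtained without any text scraping by using Docker's template support, e.g. `sudo docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'`.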
Remember that 'alpine4' is connected to both the default bridge network and 'alpine-net'. It should be able to reach all of the other containers. However, you will need to address 'alpine3' by its IP address. Attach to it and run the tests.

:::danger
8.- Attach to 'alpine4' and try to 'ping' the other machines. Can you reach all of them? By IP address or by name?
:::

:::danger
9.- As a final test, verify that all the containers can connect to the internet by pinging google.com. You are already attached to 'alpine4', so start by trying from there. Next, detach from 'alpine4', attach to 'alpine3' (which is only connected to the bridge network), and try again. Finally, attach to 'alpine1' (which is only connected to the 'alpine-net' network) and try again.
:::

Stop and remove all containers and the 'alpine-net' network.

```
$ sudo docker container stop alpine1 alpine2 alpine3 alpine4
$ sudo docker container rm alpine1 alpine2 alpine3 alpine4
$ sudo docker network rm alpine-net
```

In this lab session you learned the basics of using containers and how to connect them using user-defined networks.
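The whole topology from this section can also be recreated in one go. A sketch, assuming a reachable Docker daemon and a user in the `docker` group (otherwise prefix the commands with `sudo` as in the lab); the guard turns it into a no-op elsewhere:

```shell
# Recreate this section's topology: alpine1/alpine2 on alpine-net,
# alpine3 on the default bridge, alpine4 on both networks.
if docker info >/dev/null 2>&1; then
    docker network create --driver bridge alpine-net
    for name in alpine1 alpine2 alpine4; do
        docker run -dit --name "$name" --network alpine-net alpine ash
    done
    docker run -dit --name alpine3 alpine ash
    docker network connect bridge alpine4
    status="created"
else
    status="skipped (Docker daemon not reachable)"
fi
echo "$status"
```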