# Hands-on session about Docker containers

This document serves as a guideline for the hands-on session to get acquainted with Docker containers. It will be followed on the projector by the teacher. You can follow along on your own system at the same time.

[TOC]

# Installation instructions for Docker under CentOS 7.6 minimal

This document describes the steps required to install Docker in a virtual machine running CentOS 7.6 minimal. We will be using a virtual machine for running Docker containers. We assume that fault tolerance, high availability and backup are handled at the VM level.

VM assumptions:

* CentOS 7.6 minimal
* 64-bit
* 1GB+ memory
* 64GB disk (dynamically expanding / thin provisioned)
* internet access
* root privileges

Update your operating system so you have the most recent packages.

```
$ sudo yum update
```

:::danger
Please keep in mind that your host firewall might interfere with proper functioning of your virtual machine.
:::

Install some extra packages which are handy for possible troubleshooting.

```
$ sudo yum install -y tcpdump bind-utils net-tools yum-utils device-mapper-persistent-data lvm2 wget unzip epel-release git
```

Use the following command to set up the stable repository.

```
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
```

## Create a Docker login

In order to be able to download images from Docker Hub, you need a Docker login. Navigate your web browser to [https://hub.docker.com/](https://hub.docker.com/). Select 'Sign Up' to create a new free account.

![](https://i.imgur.com/Ui3mZ6y.png)

* Pick a Docker ID (example: *FirstnameLastname*)
* Pick a password (pick a *unique* password; it will be stored in plain text in your CentOS VM)
* Enter your email address

We will need this login later on.

## Install Docker Community Edition

Install the latest version of Docker CE and containerd.

```
$ sudo yum install docker-ce docker-ce-cli containerd.io
```

If prompted to accept the GPG key, verify that the fingerprint matches ```060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35```, and if so, accept it.

Docker is installed but not started. The ```docker``` group is created, but no users are added to it.

# First steps with Docker

## Start Docker

```
$ sudo systemctl start docker
```

## Verify that Docker CE is installed correctly

We will now verify that Docker CE is installed correctly by running the ```hello-world``` image.

```
$ sudo docker run hello-world
```

You will see that the hello-world image cannot be found locally and is downloaded from the library. You should receive output saying that your installation appears to be working correctly.

![](https://i.imgur.com/MHRG4Aa.png)

To generate this message, Docker took the following steps:

1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from Docker Hub. (amd64)
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

## Create a docker group

To be able to run Docker containers not solely as 'root', but as another user, we will create a group in CentOS called ```docker``` and add users to that group.

1. Create the ```docker``` group.
```
$ sudo groupadd docker
```
2. Add your user to the ```docker``` group.
```
$ sudo usermod -aG docker $USER
```
3. Log out and log back in so that your group membership is re-evaluated.
4. Verify that you can run ```docker``` commands without ```sudo```.
```
$ docker run hello-world
```
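If you prefer not to log out and back in right away, you can open a shell in which the new group is already active and run a few quick sanity checks. This is only a minimal sketch using standard Docker CLI commands and coreutils; the exact output will differ on your system.

```
# Start a subshell in which the docker group membership is already applied
$ newgrp docker

# Confirm your user is now a member of the docker group
$ id -nG

# Check the client and daemon versions and verify the daemon answers without sudo
$ docker version
$ docker info
```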
## Configure Docker to start on boot

CentOS 7.6 uses ```systemd``` to manage which services start when the system boots.

```
$ sudo systemctl enable docker
```

## Specify DNS servers for Docker

The default location of the configuration file is ```/etc/docker/daemon.json```. The documentation below assumes the configuration file is located at ```/etc/docker/daemon.json```.

1. Create or edit the Docker daemon configuration file, which controls the Docker daemon configuration.
```
$ sudo nano /etc/docker/daemon.json
```
2. Add a ```dns``` key with one or more IP addresses as values. If the file has existing contents, you only need to add or edit the ```dns``` line.
```
{
    "dns": ["8.8.8.8", "8.8.4.4"]
}
```
If your internal DNS server cannot resolve public IP addresses, include at least one DNS server which can, so that you can connect to Docker Hub and so that your containers can resolve internet domain names. Save and close the file.
3. Restart the Docker daemon.
```
$ sudo service docker restart
```
4. Verify that Docker can resolve external IP addresses by trying to pull an image:
```
$ docker pull hello-world
```
5. Verify that Docker containers can resolve hostnames. We will download a very lightweight Linux image called ```alpine```, run a container from it and issue the ```ping``` command inside that container. We will use the ```docker run``` command for that, together with some options:
    * the ```--rm``` option specifies that the container must be removed when it exits **or** when the daemon exits, whichever happens first.
    * the ```-it``` option will start an ```i```nteractive ```t```erminal with your container.

We specify that, in that interactive terminal, the command ```ping -c4 www.savaco.com``` must be executed (send 4 ICMP echo-requests and wait for the echo-replies). The output will be shown on our CLI (stdout).

```
$ docker run --rm -it alpine ping -c4 www.savaco.com
PING www.savaco.com (5.255.128.32): 56 data bytes
64 bytes from 5.255.128.32: seq=0 ttl=124 time=3.828 ms
64 bytes from 5.255.128.32: seq=1 ttl=124 time=1.321 ms
64 bytes from 5.255.128.32: seq=2 ttl=124 time=1.122 ms
64 bytes from 5.255.128.32: seq=3 ttl=124 time=2.579 ms
```

We can see that DNS resolving is working inside the container.

## Docker run and common behaviour

After 4 'pings', the container exits and is stopped. This is common behaviour for containers. Because we also specified ```--rm```, we expect the container to be removed upon exit. Verify that with the command ```$ docker ps -a```. You should see some of the hello-world containers (each having a weird name), but your alpine container should not be part of the list.

Let's run the previous container, but without the ```--rm``` option:

```
$ docker run -it alpine ping -c4 www.savaco.com
PING www.savaco.com (5.255.128.32): 56 data bytes
64 bytes from 5.255.128.32: seq=0 ttl=124 time=3.828 ms
64 bytes from 5.255.128.32: seq=1 ttl=124 time=1.321 ms
64 bytes from 5.255.128.32: seq=2 ttl=124 time=1.122 ms
64 bytes from 5.255.128.32: seq=3 ttl=124 time=2.579 ms
```

Verify that the container is not removed this time: ```$ docker ps -a```

The STATUS of that container is Exited (0), but the container itself remains on the system. The ```0``` between the brackets means 'exit code 0', which means 'success'. This is the standard Linux convention.
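The exit code of the container's main process is also passed on by ```docker run``` to your shell, so you can check it directly. Below is a minimal sketch; the ```sh -c 'exit 3'``` command and the container name ```exitdemo``` are only hypothetical examples used to force a non-zero exit code.

```
# The container's main process exits with code 3...
$ docker run --name exitdemo alpine sh -c 'exit 3'

# ...and docker run passes that exit code on to your shell
$ echo $?
3

# docker ps -a now lists the container as Exited (3)
$ docker ps -a --filter name=exitdemo

# The exit code can also be read from the container metadata
$ docker inspect --format '{{.State.ExitCode}}' exitdemo

# Clean up the demo container
$ docker rm exitdemo
```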
Now let's run an alpine container with an interactive terminal (```-it```) but without specifying any command:

```
$ docker run -it alpine
```

You should get a ```/bin/sh``` shell into the running alpine container. In that shell, execute the ```ip a``` command to look at the network settings:

```
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```

Here you can see that your container has an IP address, which is handed out to the container by your Docker daemon.

Take a look at the ```/etc/resolv.conf``` file. What DNS-resolving settings does your container use? Compare that to the contents of ```/etc/resolv.conf``` of your CentOS VM.

Now take a look at the output of ```df -h```.

```
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  40.7G      1.5G     39.2G   4% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                   490.5M         0    490.5M   0% /sys/fs/cgroup
/dev/mapper/centos-root  40.7G      1.5G     39.2G   4% /etc/resolv.conf
/dev/mapper/centos-root  40.7G      1.5G     39.2G   4% /etc/hostname
/dev/mapper/centos-root  40.7G      1.5G     39.2G   4% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                   490.5M         0    490.5M   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/timer_stats
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                   490.5M         0    490.5M   0% /proc/scsi
tmpfs                   490.5M         0    490.5M   0% /sys/firmware
```

The reason the contents of ```/etc/resolv.conf``` in your container are identical to those of your CentOS VM is that it is effectively the same file: your container points to the file on your CentOS filesystem. To verify that, let's take the md5sum of the ```/etc/resolv.conf``` file in the container and compare it with the md5sum of ```/etc/resolv.conf``` of your CentOS VM.

```
/ # md5sum /etc/resolv.conf
eb7905effbe321a5224d6f23cdae9ced  /etc/resolv.conf
/ # exit
[user@dockermachine ~]$ md5sum /etc/resolv.conf
eb7905effbe321a5224d6f23cdae9ced  /etc/resolv.conf
```

Change something in the ```/etc/resolv.conf``` of your CentOS VM (add a # on a new line).

```
$ sudo vi /etc/resolv.conf
# Generated by NetworkManager
search mshome.net dmsvckor
nameserver 172.30.240.129
#
```

Recalculate the md5sums.

```
[user@dockermachine ~]$ docker run -it alpine
/ # md5sum /etc/resolv.conf
e26f616976272ab4fd9958ba5f57e70c  /etc/resolv.conf
/ # exit
[user@dockermachine ~]$ md5sum /etc/resolv.conf
e26f616976272ab4fd9958ba5f57e70c  /etc/resolv.conf
```

You can also see in the ```df -h``` output that the Size, Used space and Available space of the container's root filesystem are identical to those of your CentOS VM. That is because the container's root filesystem is an overlay on top of the VM's filesystem, so it reports the values of the underlying host filesystem.
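The container shares more than a few files with the host: it also runs on the host's kernel. A quick way to see this is to compare the kernel release inside and outside a container. This is only a small sketch; the exact version string depends on your CentOS installation.

```
# Kernel release as seen from inside an alpine container
$ docker run --rm alpine uname -r

# Kernel release of the CentOS VM itself - both commands should print the same value,
# because containers share the host kernel instead of booting their own
$ uname -r
```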
## Docker run

The command ```docker run``` can be used to start containers and to interact with running containers. There are a lot of possible options. The most commonly used ones are described below.

### Foreground vs detached

When starting a Docker container, you must first decide if you want to run the container in the background in "detached" mode or in the default foreground mode.

#### Foreground

In foreground mode (the default), docker run starts the process in the container and attaches your console (of your CentOS VM) to the process's standard input, output, and standard error. In other words, you will receive the output of the command running in the container in the CentOS bash session from which you started the container. The same applies for errors.

Let's try this out with the following example:

```
$ docker run -a stdin -a stdout -i -t alpine /bin/sh
```

This will run a container from the alpine image. It attaches the stdin stream of your CentOS bash session to the stdin of the running alpine container. The same applies for stdout. When you end the ```/bin/sh``` process, the container will stop and you will return to your CentOS VM bash session.

```
/ # exit
[user@dockermachine ~]$
```

#### Detached

To start a container in detached mode, you use the ```-d``` option. By design, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the ```--rm``` option. If you use ```-d``` with ```--rm```, the container is removed when it exits or when the daemon exits, whichever happens first.

Let's run a detached container. This is the first time we need our Docker account. In order to be able to download the nginx image for the example below, you need to tell your Docker daemon to log in to Docker Hub.

```
docker login
```

Enter your Docker ID (example: *FirstnameLastname*) and the corresponding password.

:::warning
Please note that your password will be stored in **plain text** in your CentOS VM. Make sure this is a unique password, only used for your Docker account.
:::

When the login is successful, start the detached container as follows.

```
docker run -d -p 80:80 nginx nginx -g 'daemon off;'
```

Because we create a detached container, we can only interact with it through the network. Therefore we specify which port of our CentOS Docker daemon host needs to be mapped to which port inside our container. In this example we map port 80/TCP of our CentOS VM to port 80/TCP of our container running nginx.

Verify that your container is running with the command ```docker ps -a```.

We passed the command ```nginx -g 'daemon off;'``` to this container. It is configured by default to listen on port 80/TCP inside the container. This container will keep running until the nginx process stops (correctly stopped or crashed), until the Docker daemon stops, or until we explicitly stop the container.

Verify that nginx is running inside your container. To do that, point your browser to the IP address of your CentOS VM. You should get something like the page below:

![](https://i.imgur.com/rUWKN7V.png)

To reattach to a detached container, use the docker attach command: ```$ docker attach <container name>``` or ```$ docker attach <container id>```. You find the container name or id with the command ```docker ps -a```.

```
[user@dockermachine ~]$ docker attach 8ef7f7cb15b4
```

You will notice that you get a new line, but nothing else. This is because you are now attached to the container, but the only thing happening in the container is a running nginx process. Refresh your web browser showing the *Welcome to nginx!* page. You should see this request being handled by nginx in your attached container.

```
[user@dockermachine ~]$ docker attach 8ef7f7cb15b4
172.30.240.129 - - [19/Apr/2019:18:05:56 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" "-"
```

Now exit the nginx process by pressing CTRL+c. The nginx process will terminate, which causes the container to exit as well. You arrive back in the bash session of your CentOS VM.
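Attaching has the drawback that your CTRL+c is forwarded to the container's main process and stops it, as you just saw. If you only want to look at a detached container's output, ```docker logs``` is a safer alternative to ```docker attach```. A minimal sketch (the container id is the one from the example above; yours will differ):

```
# Show everything the container has written to stdout/stderr so far
$ docker logs 8ef7f7cb15b4

# Follow the output live (like tail -f); CTRL+c only stops the log view,
# not the container itself
$ docker logs -f 8ef7f7cb15b4
```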
Check the container list:

```
[user@dockermachine ~]$ docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
8ef7f7cb15b4        nginx               "nginx -g 'daemon of…"   15 minutes ago      Exited (0) 41 seconds ago                       infallible_grothendieck
```

Note that the STATUS of the container is *Exited (0)*. (meaning: clean exit)

### Give your container a name

In the above examples you saw randomly picked names for your containers. Of course this 'NAME' field is not there to show off the creativity of the Docker programmers, but for your own convenience. You can specify a name for your container, and this has many advantages. It is easier to manage an environment with containers when you can use names instead of not-human-friendly IDs. Also, and this is perhaps even more interesting, the names can be used by other containers in the same Docker daemon to connect to that container (over the internal network). In other words, there is DNS resolving for the names you give to your containers. We will experiment with that later on. First, let's take a look at networking in Docker.

# Networking and Docker containers

We already know that our alpine container has an IP address and can do DNS resolving. But where do these network settings come from? And can we influence them?

## List existing Docker networks

We currently have a freshly-installed, almost empty Docker setup. Apparently the networking part is pre-configured and working. Run the following command to get a list of currently configured networks:

```
[user@dockermachine ~]$ docker network ls
```

This will give a list resembling the following:

```
NETWORK ID          NAME                DRIVER              SCOPE
6572fd1bc8df        bridge              bridge              local
bf13bdb20888        host                host                local
69f91dbea607        none                null                local
```

## Find details about existing networks

The ```docker network``` command is the starting point to find out everything there is to know about networking in your Docker installation. ```docker network --help``` gives some information on the possible arguments. The online command-line reference guide also gives insight into the usage of the commands and their meaning: [https://docs.docker.com/engine/reference/commandline/network/](https://docs.docker.com/engine/reference/commandline/network/)

```
[user@dockermachine ~]$ docker network --help

Usage:	docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
```
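The ```docker network inspect``` command used in the next step dumps a full JSON document. If you are only interested in one field, you can filter the output with a Go template via ```--format```. This is only a small sketch; the field paths correspond to the inspect output shown below.

```
# Print only the subnet(s) configured on the default bridge network
$ docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge

# Print only the driver of that network
$ docker network inspect --format '{{.Driver}}' bridge
```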
Let's take a look at the details of the *bridge* network which came pre-configured in our Docker setup.

```
[user@dockermachine ~]$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "6572fd1bc8df561003609444d488e81c654f9020994bafd07eaab55c3f8712cb",
        "Created": "2019-04-19T09:52:17.798991492-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "44ca2303a22f3b0f8b9593efaa573d33d72b96e90abbf913d1d2d3f5e460671a": {
                "Name": "youthful_shannon",
                "EndpointID": "1df2522c8519ff24e7de6b0b513d1a93969d522b8fdf2dc735130a25f3e98a71",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
```

The most important parts are discussed below.

### Name

```"Name": "bridge"``` is the name of this network.

### Driver

```"Driver": "bridge"``` is the default network driver. Bridge networks are usually used when your applications run in standalone containers that need to communicate with one another.

In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules on the host machine so that containers on different bridge networks cannot communicate directly with each other. Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level or use an overlay network.

In other words, there is a resemblance with a vSwitch in VMware terminology: devices connected to the same vSwitch can communicate with each other, devices on a different vSwitch cannot, unless you arrange for routing.

### IPAM

```
"IPAM": {
    ...
    "Config": [
        {
            "Subnet": "172.17.0.0/16"
        }
    ]
}
```

IPAM stands for IP Address Management. This is where the IP network ranges are specified. In our example, we can see that the subnet for our ```bridge``` network is ```172.17.0.0/16```.

### Containers

This part shows a list of currently active containers which are part of this network. In our example one container is part of this network. The container with name ```youthful_shannon``` has IP address ```172.17.0.2/16``` and MAC address ```02:42:ac:11:00:02```.

### Options

In the Options section we can see, amongst others, ```"com.docker.network.bridge.enable_ip_masquerade": "true"```. That means that network traffic originating from our containers will be Network Address Translated (NAT) to the IP address of our CentOS VM running the Docker daemon. The MTU size is configured to the default of 1500 bytes.
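If you want to see this masquerading with your own eyes, you can look at the NAT table of the CentOS VM: the Docker daemon adds a MASQUERADE rule for the bridge subnet. This is only a sketch; rule order and exact formatting depend on your system.

```
# Show the POSTROUTING chain of the NAT table on the CentOS VM
$ sudo iptables -t nat -L POSTROUTING -n -v

# Somewhere in the output you should find a rule resembling:
#   MASQUERADE  all  --  *  !docker0  172.17.0.0/16  0.0.0.0/0
```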
## Container-to-container communication

As we saw earlier, you can specify a name for a container when you start it. For other containers in the same Docker daemon, that name resolves to the internal IP address of the container. (This automatic name resolution is provided on user-defined networks, which is why we create our own network below.)

### Create docker network

Let's create our own isolated network with the following parameters:

* Driver: bridge
* Name: my_isolated_bridge_network

```
$ docker network create --driver bridge my_isolated_bridge_network
```

Verify that it is visible in the list of networks:

```
$ docker network ls
```

Take a look at the settings Docker gave to our network:

```
$ docker network inspect my_isolated_bridge_network
```

### Test container-to-container communication

Open a new SSH session to your CentOS VM (so you have 2 sessions simultaneously).

In **session 1**, start an alpine container with an interactive terminal, connected to our my_isolated_bridge_network:

```
$ docker run --name alpine1 --net=my_isolated_bridge_network -ti alpine /bin/sh
```

In **session 2**, start an alpine container with an interactive terminal, connected to our my_isolated_bridge_network:

```
$ docker run --name alpine2 --net=my_isolated_bridge_network -ti alpine /bin/sh
```

From the ```/bin/sh``` of container alpine1, issue the following command:

```
/ # ping -c 4 alpine2
PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.147 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.221 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.101 ms
64 bytes from 172.18.0.3: seq=3 ttl=64 time=0.218 ms
```

As you can see, both containers can reach each other AND the container names are resolved through DNS.

### Specify the IP address a container will use

In the above examples, we never explicitly specified which IP address the container had to use. The Docker daemon assigned the container an IP address from the subnet of the network to which the container was attached. However, there might be cases in which you want to have control over the IP address inside the container.

First, let's recreate our my_isolated_bridge_network, this time using a specific subnet:

```
$ docker network rm my_isolated_bridge_network
$ docker network create --driver bridge --subnet 192.168.19.0/24 my_isolated_bridge_network
$ docker network inspect my_isolated_bridge_network
```

Next, create a container attached to this network:

```
$ docker rm alpine1
$ docker run --name alpine1 --net=my_isolated_bridge_network -ti alpine /bin/sh
/ # ip a
```

You can see that your newly created container named *alpine1* has an IP address in the 192.168.19.0/24 range.

To assign a specific IP address to a container, you can do so as follows:

```
docker run --name alpine1 --net=my_isolated_bridge_network --ip 192.168.19.253 -ti alpine /bin/sh
```

:::info
More information about Docker networking can be found in the docs: https://docs.docker.com/network/
:::
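Before moving on to storage, you may want to clean up the test containers and the test network. A minimal sketch, assuming the container and network names used above:

```
# Force-remove the test containers (in case one of them is still running)
$ docker rm -f alpine1 alpine2

# Remove the user-defined network again
$ docker network rm my_isolated_bridge_network
```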
# Storage

A container stops when the process in the container stops (for whatever reason, be it gracefully or because of a crash). This shows that, as a concept, a container is volatile. We only talk about a container when there is a running instance of an image; if it is not running, it is just an image. So what about data which needs to be used in a container? We saw an nginx container, which is supposed to serve a website. Where does this container get the website data from? That's where *volumes* come into play.

## Docker Volumes

Docker volumes are very useful when we need to persist data in Docker containers or share data between containers. Docker volumes are important because when a Docker container is destroyed, its entire file system is destroyed too. So if we want to keep this data, we need to use Docker volumes.

Docker volumes are attached to containers during a ```docker run``` command by using the ```-v``` flag. Let's look at the nginx container we used previously: [https://hub.docker.com/_/nginx](https://hub.docker.com/_/nginx)

In the section 'How to use this image' you can see that the creators suggest mapping the local directory /some/content (of your CentOS Docker 'host') to the directory /usr/share/nginx/html of the container.

![](https://i.imgur.com/bOog4av.png)

```
$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
```

Together with the pathnames, the option ```ro``` is specified. This stands for *read only*. It means that the container can only read the contents of the /some/content directory of our CentOS VM, not write to it.

Let's download a website to our CentOS VM to be served by our nginx container.

```
$ wget https://github.com/BlackrockDigital/startbootstrap-creative/archive/gh-pages.zip && unzip gh-pages.zip
```

You will get a new subdirectory ```startbootstrap-creative-gh-pages``` containing a Twitter Bootstrap template which we will use in nginx.

Now let's start an nginx container, specifying this directory as the place where nginx needs to look for the files of the website.

```
$ docker run --name bootstrap-page -v /home/user/startbootstrap-creative-gh-pages:/usr/share/nginx/html:ro -p 80:80 -d nginx
```

Point your browser to the IP address of *eth0* of your **CentOS VM**.

![](https://i.imgur.com/MRdlWyG.png)

You should be seeing the startbootstrap Twitter Bootstrap theme we just downloaded.

![](https://i.imgur.com/SJDoxoU.png)

The webpage itself is located on the filesystem of the CentOS VM under /home/user/startbootstrap-creative-gh-pages/. Let's change something in this webpage and see what that does in the container and in our browser.

```
$ cd ~
$ cd startbootstrap-creative-gh-pages
$ ls -lah
$ vi index.html
```

Search for the title 'Your Favorite Source of Free Bootstrap Themes' and change it to: 'The wonderful world of Docker containers'.

Refresh the page in your web browser. The title is changed immediately. That is because the container gets its data straight from the filesystem of the CentOS VM.

## Docker data copy

Another way is to copy data from the file system of the CentOS VM into the container when the container starts. This is less relevant for today, so let's put that aside.

# Docker compose

One of the main reasons for working with containers, rather than with (virtual or physical) servers, is the transition from a *monolithic architecture* to a *microservices architecture*. This means that an application no longer consists of one large whole, but is split up into individual, smaller, separate parts. Each microservice can be offered in its own container. As a result, an application consists of a multitude of containers. Some containers depend on others, some may exist on their own. To start all containers that an application consists of in the correct order, and to specify the correct IP addresses and network ports, we can use a docker-compose file. A docker-compose file is a plain-text YAML file and is by convention named ```docker-compose.yml```.
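To give you an idea of what such a file looks like before we dive into the Alfresco example, here is a deliberately small, hypothetical compose file that describes the nginx container from the storage exercise. The service name ```web``` and the bind-mount path are assumptions for illustration only; this is a sketch, not a file you need for the rest of the session.

```
$ cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"        # map port 80/TCP of the VM to port 80/TCP in the container
    volumes:
      - /home/user/startbootstrap-creative-gh-pages:/usr/share/nginx/html:ro
EOF
```

Once docker-compose is installed (next section), running ```docker-compose up -d``` in the directory containing this file would start the same container we started earlier by hand with ```docker run```.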
## Install docker-compose

```
$ sudo yum install -y python-pip
$ sudo yum upgrade python*
$ sudo pip install docker-compose
```

To verify a successful Docker Compose installation, run:

```
$ docker-compose version
```

```
docker-compose version 1.24.0, build 0aa5906
docker-py version: 3.7.2
CPython version: 2.7.5
OpenSSL version: OpenSSL 1.0.2k-fips  26 Jan 2017
```

# Alfresco Content Services

Let's try to set up a demo environment with Alfresco Content Services using Docker containers.

```
$ git clone https://github.com/Alfresco/acs-community-deployment.git
$ cd acs-community-deployment/
$ cd docker-compose
$ ls -lah
```

Here you will find the ```docker-compose.yml``` file. Take a look at the contents of this YAML file. You will immediately see that this is good for a demo, but definitely needs tweaking for production.

The following ports will be used:

* 8983
* 8080
* 5432
* 8161
* 5672
* 61616
* 61613

We have not yet downloaded the Alfresco Content Services (ACS) images. When we try to start the containers (plural!), the images must be downloaded first.

```
[user@dockermachine docker-compose]$ docker-compose up
Creating network "docker-compose_default" with the default driver
Pulling activemq (alfresco/alfresco-activemq:5.15.8)...
5.15.8: Pulling from alfresco/alfresco-activemq
0ffa5ac9f3c5: Downloading [========>                   ]  12.88MB/74.69MB
c33a419d9bc7: Downloading [============================]  12.92MB/14.45MB
fac65327829f: Downloading [===========>                ]  12.93MB/55.03MB
d4424e2d4fa0: Waiting
093b74461cce: Waiting
03b577e110b5: Waiting
e62aef22738a: Waiting
```

This can take a while...

After the download and extraction, the containers will be started. You will see colors on your console: every container gets its own color so you can easily distinguish which console output you are looking at.

We can now access the Web UI at http://<ip-of-your-centos-vm>:8080/share. The login credentials are: admin/admin. (Depending on the resources of your CentOS VM, this might take a while.)

![](https://i.imgur.com/vmeygL6.png)

Open a new SSH session to your CentOS VM and take a look at the running containers:

```
$ docker container ls -a
```

You should see 5 running containers.

In the top menu bar, go to My Files and upload a random file.

![](https://i.imgur.com/hAWSADF.png)

Stop the containers:

```
$ docker-compose down
```

Start the containers again:

```
$ docker-compose up
```

Do you find your previously uploaded file? *Why is that?*

Close all SSH sessions to your CentOS VM. *Is your ACS still available in your browser?*

Open an SSH session to your CentOS VM. Verify whether the containers are still running.

```
$ docker container ls
```

# Use Cases

## Backup

* Use Docker volumes to store data on the machine running the Docker daemon.
* Map a SAN volume so that the data of the container is stored on the SAN.
* Back up the VM as you would any other VM.

Example: with Veeam Backup & Replication

## Monitoring

* Install application-level monitoring for the different services
* layer 4: TCP SYN - SYN/ACK - ACK on port 8080
* layer 7: hash of webpage

Example: PRTG HTTP (Content) Sensor

## Patching

* docker-compose down
* docker-compose pull (fetch the latest version of the images)
* docker-compose up

This will download the latest version of the images and start the containers again using the same docker-compose.yml file.

# Handy commands

:::danger
Handy, but dangerous in production environments!
:::

Hard-stop all containers: ```$ docker kill $(docker ps -q)```

Remove all containers: ```$ docker rm $(docker ps -a -q)```

Remove all images: ```$ docker rmi $(docker images -q)```
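On recent Docker versions there is also a single command that bundles several of these cleanups. A hedged sketch; review the confirmation prompt before accepting, and note that ```-a``` removes *all* images not used by an existing container, not just dangling ones.

```
# Remove stopped containers, unused networks, dangling images and build cache
$ docker system prune

# Same, but also remove every image not referenced by an existing container
$ docker system prune -a
```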