appliot2022
Create your own Docker ID at https://hub.docker.com/signup and…
Option 1:
You can download and install Docker on multiple platforms. Refer to the following link: https://docs.docker.com/get-docker/ and choose the best installation path for you.
Option 2:
You can execute it online: https://labs.play-with-docker.com/
The code of this section is here
If you are running Docker online (https://labs.play-with-docker.com/) you can upload files into the session terminal by dragging them over it.
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
An image is a read-only template with instructions for creating a Docker container. A Docker registry stores Docker images.
Docker Hub (https://hub.docker.com/) is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default; there are millions of images available.
There are three different ways to use containers. These include:
In the following, we will see examples of all three ways to use containers.
https://hub.docker.com/_/hello-world
Images can have multiple versions. For example, you could pull a specific version of the ubuntu image as follows:

If you do not specify the version number of the image, the Docker client will default to a version named latest. So, for example, the docker pull command given below will pull an image named ubuntu:latest:
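For instance (the 22.04 tag is just an illustrative version):

```shell
# pull a specific tagged version of the ubuntu image
docker pull ubuntu:22.04

# no tag given: the Docker client defaults to ubuntu:latest
docker pull ubuntu
```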
To get a new Docker image you can either get it from a registry (such as Docker Hub) or create your own. You can also search for images directly from the command line using docker search. For example:
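For example, to search Docker Hub for Alpine images from the command line:

```shell
docker search alpine
```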
You can check the images you downloaded using:
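For example:

```shell
docker image ls
# or the older, equivalent form:
docker images
```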
By the way…
In the rest of this seminar, we are going to run an Alpine Linux container. Alpine (https://www.alpinelinux.org/) is a lightweight Linux distribution so it is quick to pull down and run, making it a popular starting point for many other images.
Some examples:
The command docker run creates a container from an image and executes the command that is indicated.
Let's try with these examples:
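The two examples are not reproduced here; a plausible pair, consistent with the discussion that follows:

```shell
# create a container, run one command in it, and exit
docker run alpine ls -l

# create a container and open an interactive shell in it
docker run -it alpine /bin/sh
```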
What is the difference between these two examples?
The latter command runs an alpine container, attaches it interactively ('-i') to your local command-line session, allocates a pseudo-TTY ('-t'), and runs /bin/sh.
E.g., try: / # ip a
Summing up:
- run: if you do not have the image locally, Docker pulls it from your configured registry.
- /bin/sh: the command executed inside the container.
- exit: terminates the /bin/sh command; the container stops but is not removed. You can start it again or remove it.

This is an important security concept in the world of Docker containers:
In the previous example, even though each docker container run command used the same alpine image, each execution was a separate, isolated container.
Each container has a separate filesystem and runs in a different namespace; by default a container has no way of interacting with other containers, even those from the same image.
So, if we do this:
we will get to something like this:
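The exact commands are not reproduced here; a plausible reconstruction (the file name hello.txt comes from the discussion below):

```shell
# write a file inside a brand-new container
docker run alpine /bin/sh -c "echo 'hello' > /hello.txt"

# list all containers, running and stopped
docker ps -a
```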
To show all Docker containers (both running and stopped) we can also use the command $ docker ps -a.
In this specific case, the container with the ID 330a96cc4f29 is the one with the file hello.txt.
Now if we do:
While if we do:
We will see that the file "hello.txt" is not present in that container!
To summarize:
To show which Docker containers are running:
To show all Docker containers (both running and stopped):
If you don't see your container in the output of the docker ps -a command, then you have to run an image:
If a container appears in docker ps -a but not in docker ps, the container has stopped; you have to restart it:
If the Docker container is already running (i.e., listed in docker ps), you can reconnect to it in each terminal:
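In short, the commands involved in these three cases (the container ID is a placeholder):

```shell
docker ps                               # running containers only
docker ps -a                            # all containers, running and stopped
docker run -it alpine /bin/sh           # create and start a new container
docker start -ai <CONTAINER ID>         # restart a stopped container
docker exec -it <CONTAINER ID> /bin/sh  # open a shell in a running container
```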
Starts an Alpine container using the -dit flags, running ash. The container will start detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Since you are starting it detached, you won't be connected to the container right away.
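Assuming the container is to be named alpine1, as the next steps suggest, the command would be:

```shell
docker run -dit --name alpine1 alpine ash
```

You can verify that it is running in the background with docker ps.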
Use the docker attach command to connect to this container:
Detach from alpine1 without stopping it by using the detach sequence, CTRL + p CTRL + q
(hold down CTRL and type p followed by q).
Commands to stop and remove containers and images.
The values for <CONTAINER ID> can be found with:
Remember that when you remove a container all the data it stored is erased too…
List all containers (only IDs)
Stop all running containers
Remove all containers
Remove all images
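These batch operations can be written as follows (POSIX shell command substitution is assumed):

```shell
docker ps -aq                    # list all containers (only IDs)
docker stop $(docker ps -aq)     # stop all running containers
docker rm $(docker ps -aq)       # remove all containers
docker rmi $(docker images -q)   # remove all images
```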
The TIG Stack (Telegraf, InfluxDB, Grafana) is an acronym for a platform of open source tools built to make collection, storage, graphing, and alerting on time series data easy.
A time series is simply any set of values with a timestamp where time is a meaningful component of the data. The classic real-world example of a time series is stock or currency exchange price data.
Some widely used tools are:
In this Lab we will use the following images:
InfluxDB is a time-series database with a SQL-like query language (InfluxQL), so we can set up a database and a user easily. In a terminal execute the following:
This will keep InfluxDB executing in the background (i.e., detached: -d). Now we connect to the CLI:
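A sketch of these two steps; the 1.8 tag is an assumption (any InfluxDB 1.x image ships the influx CLI used below):

```shell
# run InfluxDB detached, exposing its HTTP API on port 8086
docker run -d --name influxdb -p 8086:8086 influxdb:1.8

# open the InfluxDB CLI inside the running container
docker exec -it influxdb influx
```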
The first step consists of creating a database called "telegraf":
Next, we create a user (called “telegraf”) and grant it full access to the database:
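These two steps can be expressed in InfluxQL, here issued through the CLI (the password is a placeholder to replace with your own):

```shell
docker exec influxdb influx -execute "CREATE DATABASE telegraf"
docker exec influxdb influx -execute "CREATE USER telegraf WITH PASSWORD 'telegraf'"
docker exec influxdb influx -execute "GRANT ALL ON telegraf TO telegraf"
```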
Finally, we have to define a Retention Policy (RP). A Retention Policy is the part of InfluxDB’s data structure that describes for how long InfluxDB keeps data.
InfluxDB compares your local server's timestamp to the timestamps on your data and deletes data that are older than the RP's DURATION. So:
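For example, a policy that keeps one week of data and makes it the default (the name and duration are illustrative):

```shell
docker exec influxdb influx -execute \
  "CREATE RETENTION POLICY one_week ON telegraf DURATION 7d REPLICATION 1 DEFAULT"
```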
Exit from the InfluxDB CLI:
We have to configure the Telegraf instance to read data from the TTN (The Things Network) MQTT broker.
First we have to create the configuration file telegraf.conf in our working directory, with the content below:
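The original content is not reproduced here; a minimal sketch, assuming the TTN v3 public MQTT broker (the server, topic, and all credentials are placeholders to replace with your TTN application's values):

```toml
# telegraf.conf (sketch)
[[inputs.mqtt_consumer]]
  servers = ["tcp://eu1.cloud.thethings.network:1883"]  # assumption: TTN EU cluster
  topics = ["v3/+/devices/+/up"]       # uplink messages, TTN v3 topic layout
  username = "my-app@ttn"              # placeholder: TTN application ID
  password = "NNSXS.REPLACE_ME"        # placeholder: TTN API key
  data_format = "json"
  name_override = "TTN"                # measurement name queried later

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]     # works because Telegraf shares InfluxDB's network stack
  database = "telegraf"
  username = "telegraf"
  password = "telegraf"                # placeholder: the password chosen above
```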
Then execute:
This last part is interesting: the --net=container:influxdb option tells Docker to put this container's processes inside of the network stack that has already been created inside of another container. The new container's processes will be confined to their own filesystem, process list, and resource limits, but will share the same IP address and port numbers as the first container, and processes on the two containers will be able to connect to each other over the loopback interface.
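A command consistent with that description (the flag values are assumptions):

```shell
docker run -d --name telegraf --net=container:influxdb \
  -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro telegraf
```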
Check if the data is sent from Telegraf to InfluxDB, by re-entering in the InfluxDB container:
and then issuing an InfluxQL query using database 'telegraf':
> use telegraf
> select * from "TTN"
you should start seeing something like:
Exit from the InfluxDB CLI:
Before executing Grafana to visualize the data, we need to discover the IP address assigned to the InfluxDB container by Docker. Execute:
and look for a line that looks something like this:
This means private IP address 172.17.0.2 was assigned to the container "influxdb". We'll use this value in a moment.
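Two ways to obtain that address (the container name influxdb is assumed):

```shell
# inspect the default bridge network and look for the influxdb entry
docker network inspect bridge

# or query the container directly with a Go template
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' influxdb
```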
Execute Grafana:
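For example, publishing Grafana's web interface on its default port 3000:

```shell
docker run -d --name grafana -p 3000:3000 grafana/grafana
```

You can then browse to http://localhost:3000; the official image's initial credentials are admin / admin.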
Log into Grafana using a web browser:
the first time you will be asked to change the password (this step can be skipped).
You have to add a data source:
and then:
then select:
Fill in the fields:
(the IP address depends on the value obtained before)
and click on Save & Test. If everything is fine you should see:
Now you have to create a dashboard and add graphs to it to visualize the data. Click on
then "+ Add new panel",
You now have to specify the data you want to plot, starting from "select_measurement":
you can actually choose among a lot of data "fields", and on the right you have various options for the panel settings and visualization.
You can add as many variables as you want to the same Dashboard.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
The features of Compose that make it effective are:
Using Compose is basically a three-step process:
1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. For more information, see the Compose file reference.
3. Run docker-compose up and the Docker compose command starts and runs your entire app.

docker-compose has commands for managing the whole lifecycle of your application:
So, to bring up the system that we just built automatically, we have to create the corresponding docker-compose.yml file:
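The file is not reproduced here; a sketch of what it might contain, consistent with the three services used above (image tags and volume paths are assumptions):

```yaml
version: "3"
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
  telegraf:
    image: telegraf
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    depends_on:
      - influxdb
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - influxdb
```

Note that Compose puts the services on a shared network where they can reach each other by service name.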
and the slightly new version of the telegraf.conf file is:
What is the difference between this version of telegraf.conf and the previous one?
Run docker compose up in the terminal.
The first time it might take a couple of minutes depending on the internet and computer speed.
Once it is done, as before, log into Grafana using a web browser:
When adding a data source, the address needs to be http://influxdb:8086 and the rest is all as before…