LAB
All the code necessary for this Lab session is available in Poliformat/RSE: Recursos/Laboratorio/código practicas laboratorio, or here: https://bit.ly/codigoRSE2021
To execute the code on your computer:
$ sudo pip3 install paho-mqtt
The TIG Stack (Telegraf, InfluxDB, Grafana) is a platform of open-source tools built to make the collection, storage, graphing, and alerting of time-series data easy.
A time series is simply any set of values with a timestamp, where time is a meaningful component of the data. The classic real-world example of a time series is stock or currency exchange price data.
Some widely used tools for this are Telegraf (data collection), InfluxDB (time-series storage), and Grafana (visualization).
In this Lab we will use the container platform Docker, with the following images: telegraf, influxdb, and grafana.
InfluxDB is a time-series database with an SQL-like query language (InfluxQL), so we can set up a database and a user easily. In a terminal execute the following:
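(a typical invocation — the exact image tag is an assumption; InfluxDB 1.x is assumed here because the lab uses the influx CLI and InfluxQL)
$ docker run -d --name=influxdb -p 8086:8086 influxdb:1.8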
This will keep InfluxDB executing in the background (i.e., detached: -d). Now we connect to the CLI:
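(assuming the container was named "influxdb" as above)
$ docker exec -it influxdb influx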
The first step consists of creating a database called "telegraf":
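> CREATE DATABASE telegraf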
Next, we create a user (called “telegraf”) and grant it full access to the database:
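(the password 'telegraf' below is just an example — choose your own)
> CREATE USER "telegraf" WITH PASSWORD 'telegraf'
> GRANT ALL ON "telegraf" TO "telegraf"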
Finally, we have to define a Retention Policy (RP). A Retention Policy is the part of InfluxDB's data structure that describes for how long InfluxDB keeps data. InfluxDB compares your local server's timestamp to the timestamps on your data and deletes data older than the RP's DURATION. So:
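(the policy name and the 30-day duration are example values)
> CREATE RETENTION POLICY "30_days" ON "telegraf" DURATION 30d REPLICATION 1 DEFAULT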
Exit from the InfluxDB CLI:
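> exit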
We have to configure the Telegraf instance to read data from the TTN (The Things Network) MQTT broker.
We have to first create the configuration file telegraf.conf in our working directory with the content below:
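A minimal sketch of the file, assuming the MQTT broker of the TTN European cluster (the servers and topics values below are assumptions — check your TTN console); name_override makes the data appear in InfluxDB as the "TTN" measurement queried later:

[[inputs.mqtt_consumer]]
  servers = ["tcp://eu1.cloud.thethings.network:1883"]
  topics = ["v3/+/devices/+/up"]
  username = "XXX"
  password = "ttn-account-XXX"
  data_format = "json"
  name_override = "TTN"

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"
  username = "telegraf"
  password = "telegraf"   # the password chosen when creating the user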
where you have to change the "XXX" for the username and the "ttn-account-XXX" for the password, with the values below:
Then execute:
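(a typical invocation; /etc/telegraf/telegraf.conf is the default configuration path inside the telegraf image)
$ docker run -d --name=telegraf \
    --net=container:influxdb \
    -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
    telegraf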
This last part is interesting: the option --net=container:influxdb tells Docker to put this container's processes inside of the network stack that has already been created inside of another container. The new container's processes will be confined to their own filesystem and process list and resource limits, but will share the same IP address and port numbers as the first container, and processes on the two containers will be able to connect to each other over the loopback interface.
Check if the data is sent from Telegraf to InfluxDB, by re-entering in the InfluxDB container:
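$ docker exec -it influxdb influx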
and then issuing an InfluxQL query using database 'telegraf':
> use telegraf
> select * from "TTN"
you should start seeing something like:
Exit from the InfluxDB CLI:
Before executing Grafana to visualize the data, we need to discover the IP address assigned to the InfluxDB container by Docker. Execute:
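(one way to obtain it; the handout may use a different command)
$ docker network inspect bridge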
and look for a line that looks something like this:
This means private IP address 172.17.0.2 was assigned to the container "influxdb". We'll use this value in a moment.
Execute Grafana:
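(a typical invocation, publishing Grafana's default port 3000)
$ docker run -d --name=grafana -p 3000:3000 grafana/grafana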
Log into Grafana using a web browser:
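http://localhost:3000 (default credentials: admin / admin)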
the first time you will be asked to change the password (this step can be skipped).
You have to add a data source: open "Configuration → Data Sources" in the side menu, click on "Add data source", and then select "InfluxDB".
Fill in the fields:
(the IP address depends on the value obtained before)
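A possible set of values, assuming the database and user created earlier:

URL: http://172.17.0.2:8086
Database: telegraf
User: telegraf
Password: the one chosen when creating the "telegraf" user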
and click on Save & Test. If everything is fine you should see:
Now you have to create a dashboard and add graphs to it to visualize the data. Click on the "+" (Create) icon in the side menu, then "+ Add new panel".
You now have to specify the data you want to plot, starting from "select_measurement":
you can actually choose among a lot of data "fields", and on the right you have various options for the panel settings and visualization.
You can add as many variables as you want to the same Dashboard.
1.- Take a screenshot of the dashboard you have created and insert it into the document to be submitted.
You can interact with your Influx database using Python. You need to install a library called influxdb.
Like many Python libraries, the easiest way to get up and running is to install the library using pip.
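$ pip3 install influxdb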
Just in case, the complete instructions are here:
https://www.influxdata.com/blog/getting-started-python-influxdb/
We’ll work through some of the functionality of the Python library using a REPL, so that we can enter commands and immediately see their output. Let’s start the REPL now, and import the InfluxDBClient from the python-influxdb library to make sure it was installed:
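$ python3
>>> from influxdb import InfluxDBClient

If the import returns without an error, the library is installed correctly.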
The next step will be to create a new instance of the InfluxDBClient (API docs), with information about the server that we want to access. Enter the following command in your REPL… we’re running locally on the default port:
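>>> client = InfluxDBClient(host='localhost', port=8086)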
INFO: There are some additional parameters available to the InfluxDBClient constructor, including username and password, which database to connect to, whether or not to use SSL, timeout and UDP parameters.
We will list all databases and set the client to use a specific database:
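For example (the exact output depends on the databases present):

>>> client.get_list_database()
[{'name': '_internal'}, {'name': 'telegraf'}]
>>> client.switch_database('telegraf')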
Let’s try to get some data from the database:
>>> client.query('SELECT * from "TTN"')
The query() function returns a ResultSet object, which contains all the data of the result along with some convenience methods. Our query is requesting all the measurements in our database.
You can use the get_points() method of the ResultSet to get the measurements from the request, filtering by tag or field:
>>> results = client.query('SELECT * from "TTN"')
>>> points=results.get_points()
>>> for item in points:
...     print(item['time'])
...
2019-10-31T11:27:16.113061054Z
2019-10-31T11:27:35.767137586Z
2019-10-31T11:27:57.035219983Z
2019-10-31T11:28:18.761041162Z
2019-10-31T11:28:39.067849788Z
You can get mean values (mean), the number of items (count), or apply other conditions:
>>> client.query('select mean(uplink_message_decoded_payload_temperature) from TTN')
>>> client.query('select count(uplink_message_decoded_payload_temperature) from TTN')
>>> client.query('select * from TTN WHERE time > now() - 7d')
Finally, everything can of course run in a single Python file, like:
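(a minimal sketch — the 6-hour window is an example value, and the field name is taken from the queries above)

from influxdb import InfluxDBClient

# Connect to the local InfluxDB and select the "telegraf" database
client = InfluxDBClient(host='localhost', port=8086)
client.switch_database('telegraf')

# Ask for the temperature values of the last hours (6 h here)
results = client.query(
    'SELECT uplink_message_decoded_payload_temperature '
    'FROM "TTN" WHERE time > now() - 6h')

# Print timestamp and temperature, skipping empty values
for point in results.get_points():
    temperature = point['uplink_message_decoded_payload_temperature']
    if temperature is not None:
        print(point['time'], temperature)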
which prints all the temperature values of the last hours that are not "None".
2.- Write a Python program that prints on screen the temperature and humidity data of the last 15 minutes.
Attach the code in the document to be submitted.