# Gatling Elasticsearch Logs
Logger which parses raw Gatling logs and sends them to Elasticsearch.
## Motivation
By default, Gatling writes logs to the console, which is inconvenient for analysing, collecting, and storing information. Moreover, Gatling metrics don't contain error details, only the request status: OK or KO. When a metric arrives with an error, it's impossible to figure out what happened: did a check fail? Was it a 404? A 502? Also, if you run load tests in distributed mode, the logs end up scattered across separate injectors. This logger lets you collect all of this information so that you can correlate it with your metrics.
To recap, the logger solves two main problems:
- distributed sending and storing of metrics
- a graph with error details for correlation
## Install
### Maven:
Add to your `pom.xml`
```xml
<dependency>
    <groupId>io.github.amerousful</groupId>
    <artifactId>gatling-elasticsearch-logs</artifactId>
    <version>0.8</version>
</dependency>
```
### SBT
Add to your `build.sbt`
```scala
libraryDependencies += "io.github.amerousful" % "gatling-elasticsearch-logs" % "0.8"
```
## How to configure `logback.xml`
A minimal configuration is shown below; you can add anything else you need:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="ELASTIC" class="ElasticGatlingAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>${logLevel}</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <url>http://${elkUrl}/_bulk</url>
        <index>gatling-%date{yyyy.MM.dd}</index>
        <type>gatling</type>
        <errorsToStderr>true</errorsToStderr>
        <headers>
            <header>
                <name>Content-Type</name>
                <value>application/json</value>
            </header>
        </headers>
    </appender>

    <logger name="io.gatling.http.engine.response" level="${logLevel}"/>

    <appender name="ASYNC ELK" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="ELASTIC"/>
    </appender>

    <root level="WARN">
        <appender-ref ref="ASYNC ELK"/>
    </root>
</configuration>
```
Pay attention to two variables used in the config:
- `elkUrl` - URL of your Elasticsearch
- `logLevel` - log level: `DEBUG` to log all failing HTTP requests, `TRACE` to log all HTTP requests
Example of how to pass these variables when running a load test:
```shell
mvn gatling:test -DelkUrl=%URL%:%PORT% -DlogLevel=%LEVEL%
```
### Parse Session
The logger can also parse Session attributes and send them to Elasticsearch.
For example, your test might contain entity ids such as userId or serverId, which are useful for filtering data.
Here is what you need to add to the appender:
```xml
<appender name="ELASTIC" class="ElasticGatlingAppender">
<extractSessionAttributes>userId;serverId</extractSessionAttributes>
</appender>
```
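To make the semantics of that semicolon-separated list concrete, here is a small standalone sketch of how such an extraction could behave. This is not the appender's actual code; the object and method names are hypothetical, and the Session is modelled as a plain `Map`:

```scala
// Hypothetical illustration of <extractSessionAttributes>: given the configured
// "key1;key2" string and the attributes of a parsed Session, keep only the
// configured keys that are actually present in the session.
object SessionAttributesSketch {
  def extract(configured: String, sessionAttributes: Map[String, String]): Map[String, String] =
    configured
      .split(";")          // the config value is semicolon-separated
      .toList
      .map(_.trim)
      .filter(_.nonEmpty)  // ignore empty segments such as "a;;b"
      .flatMap(key => sessionAttributes.get(key).map(value => key -> value))
      .toMap
}
```

With the config above, `extract("userId;serverId", session)` would keep only `userId` and `serverId` out of all session attributes; keys missing from the session are simply skipped.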
This logger is based on https://github.com/internetitem/logback-elasticsearch-appender, which is directly responsible for sending logs to Elasticsearch.
There you can find additional useful options related to sending.
***
## How it works
The principle of operation is to parse a log and split it into the necessary fields. Currently, the logger supports two protocols:
- HTTP
- WebSocket

Pay attention! Gatling's WebSocket logs don't contain Session attributes, they are duplicated, and they can't be separated into passed and failed (trace level will show everything).
***
Example of a raw log and the fields it is parsed into:
#### Raw log:
```
>>>>>>>>>>>>>>>>>>>>>>>>>>
Request:
get request: OK
=========================
Session:
Session(Example scenario,1,Map(gatling.http.ssl.sslContexts -> io.gatling.http.util.SslContexts@434d148, gatling.http.cache.dns -> io.gatling.http.resolver.ShufflingNameResolver@105cb8b8, gatling.http.cache.baseUrl -> https://httpbin.org, identifier -> ),OK,List(),io.gatling.core.protocol.ProtocolComponentsRegistry$$Lambda$529/0x0000000800604840@1e8ad005,io.netty.channel.nio.NioEventLoop@60723d6a)
=========================
HTTP request:
GET https://httpbin.org/get
headers:
accept: */*
host: httpbin.org
=========================
HTTP response:
status:
200 OK
headers:
Date: Wed, 25 Aug 2021 08:31:38 GMT
Content-Type: application/json
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
content-length: 223
body:
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"X-Amzn-Trace-Id": "Root=1-6125ffea-3e25d40360dd3cc425c1a26f"
},
"origin": "2.2.2.2",
"url": "https://httpbin.org/get"
}
<<<<<<<<<<<<<<<<<<<<<<<<<
```
#### Result:
| Field name | Value |
|:-----------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| request_name | get request |
| message | OK |
| session | Session(Example scenario,1,Map(gatling.http.ssl.sslContexts -> io.gatling.http.util.SslContexts@434d148, gatling.http.cache.dns -> io.gatling.http.resolver.ShufflingNameResolver@105cb8b8, gatling.http.cache.baseUrl -> https://httpbin.org, identifier -> ),OK,List(),io.gatling.core.protocol.ProtocolComponentsRegistry$$Lambda$529/0x0000000800604840@1e8ad005,io.netty.channel.nio.NioEventLoop@60723d6a) |
| method | GET |
| request_body | %empty% |
| request_headers | accept: \*/\* <br /> host: httpbin.org |
| url | https://httpbin.org/get |
| status_code | 200 |
| response_headers | Date: Wed, 25 Aug 2021 08:31:38 GMT<br />Content-Type: application/json<br />Connection: keep-alive<br />Server: gunicorn/19.9.0<br />Access-Control-Allow-Origin: *<br />Access-Control-Allow-Credentials: true<br />content-length: 223 |
| response_body | {<br> "args": {},<br> "headers": {<br> "Accept": "\*/\*",<br> "Host": "httpbin.org",<br> "X-Amzn-Trace-Id": "Root=1-6125ffea-3e25d40360dd3cc425c1a26f"<br> },<br> "origin": "2.2.2.2",<br> "url": "https://httpbin.org/get" <br>} |
| protocol | http |
| scenario | Example scenario |
| userId | 1 |
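The mapping from raw log to table fields can be illustrated with a small standalone sketch. This is not the appender's actual implementation; the object name and regexes below are hypothetical, written only to show how a few of the fields above could be pulled out of a raw log entry:

```scala
// Illustrative sketch only: a simplified parser for a raw Gatling log entry.
// Field names follow the Result table above; the real parsing logic may differ.
object RawLogParserSketch {
  // "Request:" block -> request_name and message ("get request: OK")
  private val RequestName = """(?s)Request:\s*\n(.+?): (\w+)""".r
  // "HTTP request:" block -> method and url ("GET https://httpbin.org/get")
  private val UrlLine = """(?m)^(GET|POST|PUT|DELETE|PATCH|HEAD) (\S+)""".r
  // "HTTP response:" block -> status_code ("200 OK")
  private val StatusCode = """(?s)HTTP response:\s*\nstatus:\s*\n(\d{3})""".r

  def parse(raw: String): Map[String, String] = {
    val fields = scala.collection.mutable.Map.empty[String, String]
    RequestName.findFirstMatchIn(raw).foreach { m =>
      fields += ("request_name" -> m.group(1).trim)
      fields += ("message" -> m.group(2))
    }
    UrlLine.findFirstMatchIn(raw).foreach { m =>
      fields += ("method" -> m.group(1))
      fields += ("url" -> m.group(2))
    }
    StatusCode.findFirstMatchIn(raw).foreach { m =>
      fields += ("status_code" -> m.group(1))
    }
    fields.toMap
  }
}
```

Applied to the raw log above, this sketch would yield `request_name -> get request`, `message -> OK`, `method -> GET`, `url -> https://httpbin.org/get`, and `status_code -> 200`.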
## Grafana
Integration of Elasticsearch with Grafana; you can find additional information here:
- https://grafana.com/docs/grafana/latest/datasources/elasticsearch/
## Contributing
Pull requests are welcome!
For major changes, please open an issue first to discuss what you would like to change.
## License
[MIT](https://choosealicense.com/licenses/mit/)