---
tags: humio,logscale,api,blog
---
How to visualize your data using the LogScale API - Part One
===
_By Søren Skovsbøll - Jan 4, 2023_
#### Part One
## Building our own terminal tools to visualize your LogScale data
CrowdStrike Falcon® LogScale dashboards are great for monitoring your data with all kinds of visualizations. You can choose between a range of nice charts and arrange your dashboards for wall-monitor display or for exploring your data.
Sometimes, you need other ways to explore or present your data.
You may want more control of the shape of your data, or you may want to create small tools tailored to your organization's environment and use cases.
For example, a graph of the top 10 code deleters on GitHub:
![](https://i.imgur.com/QoQ6QOa.png)
In this blog series, I'll show you two different ways of accessing and visualizing your data: first, using the terminal to retrieve and display LogScale data in various ways, and second, using Jupyter notebooks for manipulation and visualization. Part one covers the REST API; part two covers the Python LogScale package.
Note: I'm using the [public GitHub data on LogScale Community](https://cloud.community.humio.com/humio-organization-github-demo/) for the examples.
:::info
:gear:
#### Getting set up
If you don't yet have one, head to https://cloud.community.humio.com/signup and claim your free account.
If you want to try out the experiments in this post you are going to need to install two CLI tools:
* wget (`brew install wget`)
* termgraph (`python3 -m pip install termgraph`)
I also assume you have both Python and Ruby installed.
You can use curl instead of wget, [see Appendix A](#Appendix-A-If-you-prefer-curl).
:::
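Before going further, you can quickly check that the required tools are reachable. This is just a small sketch, assuming a POSIX-compatible shell:

```bash
# Check that each tool used in this post is installed and on your PATH.
for tool in wget termgraph python3 ruby; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
  fi
done
```

Any tool reported as missing can be installed as described in the setup box above.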
### The terminal can do more than display text
It is fun playing around with visual data in the terminal. ANSI characters and a bit of color can pack a surprising amount of information. Here's why I personally love the terminal: since space and formatting options are limited, terminal visualizations often end up less cluttered, and the temptation to add a little extra UI here and there is easy to resist.
As with most API demos, the first step is getting authenticated to access the data.
You are going to need two environment variables set. You can inline both values if you want, but keep in mind that your API token is better off not saved in source files.
```bash=
export HUMIO_ENDPOINT=cloud.community.humio.com # or replace with your
# own LogScale instance, or the LogScale Cloud if your account is there.
export API_TOKEN=... # grab this from
# https://cloud.community.humio.com/account-api-token-page
```
Now we're ready to load our data using the public REST API.
Building a LogScale query in this case involves a filter, an aggregation, and control over the order and number of rows:
```clike=
actor.login != *bot*
| groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions))
| sort(deletions, order=desc, limit=10)
```
Without line one, the query result would reveal that most accounts are bots. I care about human users, so line 1 filters away all the bots. Then, since the only thing better than writing code is _deleting code_, it counts the number of deletions per user (line 2). Finally, it sorts the rows by the number of deletions to render a "top 10" of accounts (line 3).
[Click here to run this query](https://cloud.community.humio.com/humio-organization-github-demo/search?live=false&query=actor.login%20!%3D%20*bot*%20%0A%7C%20groupBy(actor.login%2C%20function%3Dsum(payload.pull_request.deletions%2C%20as%3Ddeletions))%20%0A%7C%20sort(deletions%2C%20order%3Ddesc%2C%20limit%3D10)%0A&start=7d)
Want to run this query using the REST API instead of the LogScale UI? Here's how:
```bash=
wget -q -O - \
--post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) | sort(deletions, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
--header="Authorization: Bearer $API_TOKEN" \
--header="Content-Type: application/json" \
--header="Accept: text/csv" \
https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
| ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
| termgraph --color red
```
:::info
#### :knife: Let's dissect these lines
1. `wget -q -O -`:
Fetch the data without extra output (`-q`) and output the content to stdout (`-O -`)
2. `--post-data '{..} '`:
The LogScale query and time window within which to search
3. `--header="Authorization: Bearer $API_TOKEN"`:
Authenticate with the api token
4. `--header="Content-Type: application/json"`:
Request data is in json format
5. `--header="Accept: text/csv"`:
Return CSV data, which is easier to format and pass along to termgraph
6. `https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query`:
Search the GitHub demo data on the LogScale community server
7. `ruby -ne 'puts gsub(/\"/, "") if $. > 1'`:
Strip quotes from numeric values (using Ruby -- often installed on Macs)
8. `termgraph --color red`:
Show a bar chart and color the bars red (they represent deletions)
:::
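To see what the Ruby one-liner in step 7 accomplishes, here is the same header-and-quote stripping done on a hand-written sample CSV, using only the standard POSIX tools `tail` and `tr` in case Ruby isn't handy. The sample data is made up for illustration:

```bash
# A sample CSV in the shape the API returns: a quoted header row plus data rows.
printf '%s\n' '"actor.login","deletions"' '"alice","1200"' '"bob","800"' > sample.csv

# Drop the header row (tail -n +2) and strip the quotes (tr -d '"'),
# leaving plain "label,value" lines that termgraph can read.
tail -n +2 sample.csv | tr -d '"'
# alice,1200
# bob,800
```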
Run the command to render a nice graph of the top 10 (non-bot) users with the most code deletions in the last day:
![](https://i.imgur.com/QoQ6QOa.png)
:::warning
:warning:
If you don't get any output, it's likely because authentication fails.
You can debug this by replacing the `-q` (q = quiet) with `-S` (S = server-response) and omitting the last two lines:
```bash=
wget -S -O - \
--post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) | sort(deletions, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
--header="Authorization: Bearer $API_TOKEN" \
--header="Content-Type: application/json" \
--header="Accept: text/csv" \
https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query
```
If the API reports a 404, it may still be due to missing permissions. In that case, to add yourself as a user of the `humio-organization-github-demo` repository, head to https://cloud.community.humio.com/humio-organization-github-demo/settings/permissions-v3, click __+ Add__, select yourself in the dialog, and choose __Member__ as the role.
:::
:::info
:bulb:
If you are curious how the data looks, just run the same command but omit the last two lines (and the trailing backslash before them):
```bash=
wget -q -O - \
--post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) | sort(deletions, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
--header="Authorization: Bearer $API_TOKEN" \
--header="Content-Type: application/json" \
--header="Accept: text/csv" \
https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query
```
:::
### Charts with two series
Termgraph can handle more than one data series and separate them visually using color. Let's see deletions and additions together in one chart where deletions are colored red and additions green.
```bash=
wget -q -O - \
--post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=[sum(payload.pull_request.deletions, as=deletions), sum(payload.pull_request.additions, as=additions)]) | total := additions + deletions | sort(total, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
--header="Authorization: Bearer $API_TOKEN" \
--header="Content-Type: application/json" \
--header="Accept: text/csv" \
https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
| ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
| termgraph --color {red,green}
```
![](https://i.imgur.com/NBtpb0K.png)
### Tracking merges across time
Since LogScale is a great time series data lake, let's have some fun with dates. Termgraph has a calendar heatmap mode that plots months on the x axis and weekdays on the y axis. It expects the first column to be a date in yyyy-mm-dd format. A LogScale query would look like this:
```clike=
parseTimestamp(field="payload.pull_request.merged_at")
| formatTime(format="%Y-%m-%d",as="merged")
| groupBy(merged)
```
:::info
:bulb:
I quoted the field identifiers here although you don't have to do so unless they contain special characters.
:::
Let's send this query to LogScale through the REST API and instruct it to fetch data from the entire last year, to fill each month in the calendar view:
```bash=
wget -q -O - \
--post-data '{"queryString":"parseTimestamp(field=payload.pull_request.merged_at)|formatTime(format=\"%Y-%m-%d\",as=merged)|groupby(merged)","start": "1y","end":"now","isLive":false}' \
--header="Authorization: Bearer $API_TOKEN" \
--header="Content-Type: application/json" \
--header="Accept: text/csv" \
https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
| ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
| termgraph --calendar
```
This renders a heat map, or maybe rather a "cold map" where a denser color represents a higher count of merges.
![](https://i.imgur.com/iQSRXl7.png)
Looking at your data in a heat map can uncover surprising facts! Notice how the density is much higher to the right? Could it be that everybody slacks off from January through November, only to start working properly in December? Of course not. What really happens is that LogScale has a one-month data retention policy for the GitHub repository. We still see certain GitHub events, such as comments or re-openings, for PRs that were closed more than a month ago.
Hopefully, what you take away here is the principle and how to build your own heat maps visualizing your own data.
### A small dashboard in the terminal
It's time to put our two widgets together into a dashboard. You can extend the dashboard with any number of additional widgets.
Let's create a bash or zsh function called getData. It takes three arguments: the output filename, the LogScale query string, and how far back in time it should look.
```bash=
getData() {
BODY="{\"queryString\":\"$(echo $2 | sed 's#"#\\"#g')\",\"start\":\"$3\",\"end\":\"now\",\"isLive\":false}"
wget -q -O "$1" \
--post-data "$BODY" \
--header="Authorization: Bearer $API_TOKEN" \
--header="Content-Type: application/json" \
--header="Accept: text/csv" \
https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query
}
```
:::warning
The way we escape quotes here is important.
Inside the getData function definition, we use `$(echo $2 | sed 's#"#\\"#g')` to escape the double quotes in the query string before it is embedded in the JSON request object that we hand to `wget --post-data`.
Otherwise, LogScale would respond with a _400 Bad Request_ when it sees unescaped quotes inside quoted strings.
Zsh functions can be called the same way you would invoke a command, but don't expect them to be as powerful or intuitive as functions in other programming languages. One thing you need to mind is double quotes, especially escaping them properly. For example, the `format` argument to the `formatTime` query function needs to be a string, so remember to escape its double quotes when calling getData with your own query strings.
:::
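To see the escaping in action, here is a tiny standalone example; the query string is just an illustration:

```bash
Q='formatTime(format="%Y-%m-%d", as=merged)'
# Replace every " with \" so the query can live inside a JSON string.
echo "$Q" | sed 's#"#\\"#g'
# formatTime(format=\"%Y-%m-%d\", as=merged)
```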
Save this to your `~/.zprofile` file.
Let's also add three more functions: two that render a CSV file (as a bar chart and as a heat map), and one that calls them repeatedly, creating a small dashboard that refreshes every 30 seconds.
```bash=
barchart() {
cat $1 \
| ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
| termgraph --color {red,green}
}
heatmap() {
cat $1 \
| ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
| termgraph --calendar
}
dashboard() {
while true; do
getData deletions.csv "actor.login != *bot* | groupBy(actor.login, function=[sum(payload.pull_request.deletions, as=deletions), sum(payload.pull_request.additions, as=additions)]) | total := additions + deletions | sort(total, order=desc, limit=10)" "7d"
getData merges.csv "parseTimestamp(field=payload.pull_request.merged_at)|formatTime(format=\"%Y-%m-%d\",as=merged)|groupby(merged)" "1y"
clear
echo -e "\e[1mTop code adders and deleters\e[0m"
barchart deletions.csv
echo -e "\e[1mMerge activity\e[0m"
heatmap merges.csv
sleep 30
done
}
```
After you have saved the snippet above to `~/.zprofile` you can load the profile and start the dashboard:
```bash=
. ~/.zprofile
dashboard
```
And you'll have your mini dashboard running in your terminal and updating every 30 seconds!
![](https://i.imgur.com/KtlJGrb.png)
In the next post, I'll show you how to use Python, Pandas and Jupyter to wrangle and display your LogScale data.
---
:::success
#### Appendix A: If you prefer curl
Curl is generally my favorite, but you'll soon run into problems piping data from curl into another command: curl reports a "Failed writing body" error if the downstream command stops reading its output too soon. To get around that, pipe the results through `| tac | tac` -- yes, I know it looks weird. "tac" is "cat" backwards; reversing the text twice brings it back into shape while forcing the whole response to be buffered before further processing. On macOS, `tac` ships with GNU coreutils (`brew install coreutils`; note that Homebrew installs it as `gtac` unless you add the gnubin directory to your PATH).
```bash
curl https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
-X POST \
-H "Authorization: Bearer $API_TOKEN" \
-H 'Content-Type: application/json' \
-H "Accept: text/csv" \
-d '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=[sum(payload.pull_request.deletions, as=deletions), sum(payload.pull_request.additions, as=additions)]) | sort(additions, order=desc, limit=10)","start": "1h","end":"now","isLive":false}' \
| tac | tac \
| ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
| termgraph --color {red,green}
```
:::