
How to visualize your data using the LogScale API – Part One

January 6, 2023


Building our own terminal tools to visualize your LogScale data

CrowdStrike Falcon® LogScale dashboards are great for monitoring your data with all kinds of visualizations. You can choose from a range of nice charts and arrange your dashboards for wall monitor display or for exploring your data.

Sometimes, however, you need other ways to explore or present your data. You may want more control of the shape of your data, or you may want to create small tools tailored to your organization’s environment and use cases.

For example, a graph of the top 10 code deleters on GitHub:

In this blog series, I’ll show you two different ways of accessing your data and visualizing it. First, using the terminal to retrieve and display LogScale data in various ways. And second, using Jupyter notebooks for manipulation and visualization. I’ll also show you how to use the REST API to retrieve data in part one and how to use the Python LogScale package in part two.

Note: I’m using the public GitHub data on LogScale Community for the examples.

Getting set up

If you don’t yet have one, head to https://cloud.community.humio.com/signup and claim your free account.

If you want to try out the experiments in this post, you'll need to install two CLI tools:

  • wget (brew install wget)
  • termgraph (python3 -m pip install termgraph)

I also assume you have both Python and Ruby installed.

You can use curl instead of wget; see Appendix A.

The terminal can do more than display text

It’s fun playing around with visual data in the terminal. Simple ANSI characters and a bit of color can pack a surprising amount of information. Here’s why I personally love the terminal: since space and formatting options are limited, visualizations in the terminal often end up less cluttered. The temptation to add a little extra UI here and there is easy to resist in the terminal.

As with most API demos, the first step is getting authenticated to access the data. You are going to need two environment variables set. Sure, you can inline both if you want, but keep in mind your API token is better off not saved in source files.

export HUMIO_ENDPOINT=cloud.community.humio.com # or replace with your 
    # own LogScale instance, or the LogScale Cloud if your account is there.
export API_TOKEN=... # grab this from 
    # https://cloud.community.humio.com/account-api-token-page
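Before moving on, it can be worth a quick sanity check that both variables actually made it into your shell (this little snippet is my own addition; it prints only whether the token is set, never the token itself):

```shell
# Print the endpoint, and "yes" if API_TOKEN is non-empty (never the token value)
printf 'endpoint: %s\ntoken set: %s\n' "${HUMIO_ENDPOINT:-<missing>}" "${API_TOKEN:+yes}"
```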

Now we’re ready to load our data using the public REST API.

Building a LogScale query in this case involves a filter, an aggregation, and control over the order and number of rows:

actor.login != *bot* 
| groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) 
| sort(deletions, order=desc, limit=10)

Click here to run this query. (You’ll be directed to Falcon LogScale Community Edition; log in with your Google credentials.)

Want to run this query using the REST API instead of the LogScale UI? Here’s how:

wget -q -O - \
  --post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) | sort(deletions, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
  --header="Authorization: Bearer $API_TOKEN" \
  --header="Content-Type: application/json" \
  --header="Accept: text/csv" \
  https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
  | ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
  | termgraph --color red

Let’s dissect these lines:

  1. wget -q -O -:
    Fetch the data without extra output (-q) and output the content to stdout (-O -)

  2. --post-data '{...}':
    The LogScale query and the time window within which to search

  3. --header="Authorization: Bearer $API_TOKEN":
    Authenticate with the API token

  4. --header="Content-Type: application/json":
    Declare that the request body is in JSON format

  5. --header="Accept: text/csv":
    Return CSV data, which is easier to format and pass along to termgraph

  6. https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query:
    Search the GitHub demo data on the LogScale community server

  7. ruby -ne 'puts gsub(/\"/, "") if $. > 1':
    Strip the quotes from the values and drop the CSV header line (using Ruby – often installed on Macs)

  8. termgraph --color red:
    Show a horizontal bar chart and color the bars red (they represent deletions)

Run the command to render a nice graph of the top 10 (non-bot) users with the most code deletions in the last day:

Note
If you don’t get any output, it’s likely because authentication fails.
You can debug this by replacing the -q (quiet) with -S (show the server response) and omitting the last two lines:

wget -S -O - \
  --post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) | sort(deletions, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
  --header="Authorization: Bearer $API_TOKEN" \
  --header="Content-Type: application/json" \
  --header="Accept: text/csv" \
  https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query

Info

If you’re curious how the raw data looks, just run the same command but omit the last two lines and the backslash before them:

wget -q -O - \
  --post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=sum(payload.pull_request.deletions, as=deletions)) | sort(deletions, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
  --header="Authorization: Bearer $API_TOKEN" \
  --header="Content-Type: application/json" \
  --header="Accept: text/csv" \
  https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query

Charts with two series

Termgraph can handle more than one data series and separate them visually using color. Let’s see deletions and additions together in one chart where deletions are colored red and additions green.

wget -q -O - \
  --post-data '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=[sum(payload.pull_request.deletions, as=deletions), sum(payload.pull_request.additions, as=additions)]) | total := additions + deletions | sort(total, order=desc, limit=10)","start": "1d","end":"now","isLive":false}' \
  --header="Authorization: Bearer $API_TOKEN" \
  --header="Content-Type: application/json" \
  --header="Accept: text/csv" \
  https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
  | ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
  | termgraph --color {red,green}

Tracking merges across time

Since LogScale is a great time series data lake, let’s have some fun with dates. Termgraph has a calendar heat map mode that plots months on the X axis and weekdays on the Y axis. It expects the first column to be a date in yyyy-mm-dd format. A LogScale query would look like this:

parseTimestamp(field="payload.pull_request.merged_at")
| formatTime(format="%Y-%m-%d",as="merged")
| groupBy(merged)

Info

I quoted the field identifiers here although you don’t have to do so unless they contain special characters.
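If you want to see the input shape the calendar mode expects before wiring up the query, you can write a tiny sample file by hand (the file name, dates and counts below are invented):

```shell
# One row per day: a yyyy-mm-dd date followed by a count
printf '%s\n' '2022-12-01,4' '2022-12-02,9' '2022-12-05,2' > sample.csv
cat sample.csv
```

Feed it to the chart with termgraph --calendar < sample.csv.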

Let’s send this query to LogScale and instruct it to fetch data from the entire last year to fill each month in the calendar view.

parseTimestamp(field = payload.pull_request.merged_at)
| formatTime(format = "%Y-%m-%d", as = merged)
| groupBy(merged)

Same query through the REST API:

wget -q -O - \
  --post-data '{"queryString":"parseTimestamp(field=payload.pull_request.merged_at)|formatTime(format=\"%Y-%m-%d\",as=merged)|groupBy(merged)","start": "1y","end":"now","isLive":false}' \
  --header="Authorization: Bearer $API_TOKEN" \
  --header="Content-Type: application/json" \
  --header="Accept: text/csv" \
  https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
  | ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
  | termgraph --calendar

This renders a heat map, or maybe rather a “cold map” where a denser color represents a higher count of merges.

Looking at your data in a heat map can uncover surprising facts! Notice how the density is much higher to the right? Could it be everybody is slacking from Jan. through Nov. only to start working properly in Dec.? Of course not. What really happens is that LogScale has a data retention policy of one month for the GitHub repository. We still see certain GitHub events, such as comments or re-openings, for PRs that were closed earlier than one month ago.

Hopefully, what you take away here is the principle, and how to build your own heat maps visualizing your own data.

A small dashboard in the terminal

It’s time to put our two widgets together into a dashboard. You can extend the dashboard with any number of widgets using the same pattern.

Let’s create a bash or zsh function called getData. It takes three arguments: the output filename, the LogScale query string, and how far back in time it should look.

getData() {
  BODY="{\"queryString\":\"$(echo $2 | sed 's#"#\\"#g')\",\"start\":\"$3\",\"end\":\"now\",\"isLive\":false}" 
  wget -q -O $1 \
  --post-data "$BODY" \
  --header="Authorization: Bearer $API_TOKEN" \
  --header="Content-Type: application/json" \
  --header="Accept: text/csv" \
  https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query
}

The way we escape quotes here is important.

Inside the getData function definition, we run the query string through $(echo $2 | sed 's#"#\\"#g') on line 2 to escape its double quotes before we embed it in the JSON request object that we hand to wget --post-data.

Otherwise LogScale would respond with 400 Bad Request when it sees unescaped quotes inside quoted strings.

Zsh functions can be called the same way you would invoke a command. Don’t expect them to be as powerful or intuitive as functions in other programming languages though. One thing that you need to mind is the double quotes, especially escaping them properly. For example, the format argument to the formatTime query function needs to be a string. Remember to escape the double quotes when calling getData with your own query strings.
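To see what the sed step actually does to a query string, you can run it on its own (the query below is just an example):

```shell
# Escape every double quote so the string can live inside a JSON body
Q='formatTime(format="%Y-%m-%d", as=merged)'
echo "$Q" | sed 's#"#\\"#g'
# prints: formatTime(format=\"%Y-%m-%d\", as=merged)
```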

Save this to your ~/.zprofile file.

Let’s also add three more functions: two that render a CSV file as a chart, and a dashboard function that fetches fresh data and re-renders it every 30 seconds.

barchart() {
  cat $1 \
  | ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
  | termgraph --color {red,green}
}

heatmap() {
  cat $1 \
  | ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
  | termgraph --calendar
}

dashboard() {
  while true; do
    getData deletions.csv "actor.login != *bot* | groupBy(actor.login, function=[sum(payload.pull_request.deletions, as=deletions), sum(payload.pull_request.additions, as=additions)]) | total := additions + deletions | sort(total, order=desc, limit=10)" "7d"
    getData merges.csv "parseTimestamp(field=payload.pull_request.merged_at)|formatTime(format=\"%Y-%m-%d\",as=merged)|groupBy(merged)" "1y"
    clear
    echo -e "\e[1mTop code adders and deleters\e[0m"
    barchart deletions.csv
    echo -e "\e[1mMerge activity\e[0m"
    heatmap merges.csv
    sleep 30
  done
}

After you have saved the snippet above to ~/.zprofile you can load the profile and start the dashboard:

. ~/.zprofile
dashboard

And you’ll have your mini dashboard running in your terminal and updating every 30 seconds!

In the next post, I’ll show you how to use Python, Pandas and Jupyter to wrangle and display your LogScale data.


Appendix A: If you prefer curl

Curl is generally my favorite, but you’ll soon run into problems piping data from curl into another command: curl throws an error when the downstream command closes stdin before curl has finished writing. To get around that, install tac with brew install tac and pipe the results through | tac | tac – yes, I know it looks weird. “tac” is “cat” backwards. Reversing the lines twice restores the original order while forcing the whole output to be buffered before it’s processed further.
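You can convince yourself the double reversal is lossless with a few throwaway lines (assuming tac is installed):

```shell
# tac reverses the order of lines; doing it twice restores the original,
# but only after the whole stream has been read into the pipeline
printf 'a\nb\nc\n' | tac | tac
# prints:
#   a
#   b
#   c
```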

curl https://$HUMIO_ENDPOINT/api/v1/repositories/humio-organization-github-demo/query \
  -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H 'Content-Type: application/json' \
  -H "Accept: text/csv" \
  -d '{"queryString":"actor.login != *bot* | groupBy(actor.login, function=[sum(payload.pull_request.deletions, as=deletions), sum(payload.pull_request.additions, as=additions)]) | sort(additions, order=desc, limit=10)","start": "1h","end":"now","isLive":false}' \
  | tac | tac \
  | ruby -ne 'puts gsub(/\"/, "") if $. > 1' \
  | termgraph --color {red,green}