
Introduction


The SAP STAD transaction provides detailed performance metrics for individual SAP transaction steps. The level of detail is impressive and can be very helpful in troubleshooting.

What if you are a global company with many different sites using the same SAP instance and would like more insight into response time in relation to the site? Looking at the raw data in STAD is an option, and exporting to Excel is another. But there is a third possibility that gives you a much more visually appealing result: Kibana.

We will load the metrics into Elasticsearch and visualize the result on a world map using Kibana. I will also show you how to enrich the data with location information using Logstash while loading it into Elasticsearch.

Loading sample data


As a first step, we will create an index with an explicit mapping, so that the location field is stored as a geo_point (the type the Kibana Maps app needs):
PUT /stad
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "tcode": { "type": "keyword" },
      "guinettime": { "type": "integer" },
      "respti": { "type": "integer" },
      "location": { "type": "geo_point" }
    }
  }
}
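
If you want to double-check the result, you can ask Elasticsearch to return the mapping it stored (a quick sanity check, nothing specific to this setup):
GET /stad/_mapping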


Next, we load 3 sample documents into the newly created index:
POST /stad/_doc
{
  "@timestamp": "2019-12-20T16:29:02.000Z",
  "respti": 21408,
  "tcode": "ME51N",
  "guinettime": 220622,
  "location": "48.85661, 2.351499"
}
POST /stad/_doc
{
  "@timestamp": "2019-12-20T16:29:01.000Z",
  "respti": 146864,
  "tcode": "VA05",
  "guinettime": 120056,
  "location": "48.85661, 2.351499"
}
POST /stad/_doc
{
  "@timestamp": "2019-12-20T16:29:00.000Z",
  "respti": 1300,
  "tcode": "FAGLB03",
  "guinettime": 110107,
  "location": "50.827643, 3.265988"
}
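
The location values are plain "lat, lon" strings, one of the formats Elasticsearch accepts for a geo_point field. To verify that the documents are searchable before building the map (a minimal check, not specific to this setup):
GET /stad/_search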


Create index pattern


Go to Kibana and create an index pattern for the stad index, using @timestamp as the time field.
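
If you prefer to script this step, Kibana 7.x also exposes the index pattern as a saved object over its API (a sketch, assuming Kibana runs on localhost; the UI wizard works just as well):
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"stad","timeFieldName":"@timestamp"}}'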

Create sample map


Go to the Kibana Maps app and create a new map.

Add a new layer to the map and click on "Grid aggregation":





Optionally, give the new layer a name.



For this demo, I'm configuring 2 metrics:

  1. Count: the number of documents.

  2. Respti: the average of the field respti




I'm using a circle marker. The size of the circle is determined by the count, while the color is determined by the average response time for that location.



The result:

  • A small green dot in Belgium, 1 fast transaction

  • A bigger red dot in France, 2 slower transactions
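
Under the hood, the grid layer is essentially asking Elasticsearch for a geohash grid aggregation with the configured metrics as sub-aggregations. A rough equivalent you can run in the Dev Tools console (for illustration only, not the exact request Kibana generates):
GET /stad/_search
{
  "size": 0,
  "aggs": {
    "grid": {
      "geohash_grid": { "field": "location", "precision": 4 },
      "aggs": {
        "avg_respti": { "avg": { "field": "respti" } }
      }
    }
  }
}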





This small demo should give you an idea of what the end result can look like when you load data from STAD enriched with location information. Next, we will look at how to add the location information while ingesting the data, using a mapping table.

Using Logstash to add location while ingesting documents


Now that I have shown the concept, it is time to work with some real data and show you how we can add the location info while loading the data.

Although STAD contains a wealth of information, one piece that is sorely missing is the IP address of the client. The only information about the client is the terminal id. This means we need a way to convert the terminal id to a location. We could use a reverse lookup, but there is another option: a lot of global corporations use a naming convention for their desktops that includes location information.

Example: BEBRU0001, where the first 2 letters are the country, the next 3 are the town, followed by a sequence number.

Example input record:
{
  "terminalid": "BEBRU0001",
  "respti": 338105,
  "tcode": "VA03"
}

Logstash translate filter:
translate {
  field           => "[terminalid]"
  destination     => "[location]"
  dictionary_path => '/etc/logstash/terminalid-location.yaml'
  exact           => true
  regex           => true
  override        => true
}

Logstash location dictionary:
'^BE': "u151"
'^FR': "u094"
...
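
Putting it all together, a minimal pipeline could look like this (a sketch, assuming the STAD records arrive as JSON lines on stdin; the input, hosts, and index name are illustrative and will differ in your setup):
input {
  stdin { codec => "json_lines" }
}

filter {
  translate {
    field           => "[terminalid]"
    destination     => "[location]"
    dictionary_path => '/etc/logstash/terminalid-location.yaml'
    exact           => true
    regex           => true
    override        => true
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "stad"
  }
}

Feeding the example record above through this pipeline would index a document whose location field contains the geohash "u151", which Elasticsearch accepts as a geo_point value.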

In this example, I have encoded the location as a geohash. I'm using 4 characters, which should be sufficiently accurate for our purpose: a 4-character geohash identifies a cell of roughly 39 by 20 km. In case some locations are physically close to each other, you can always add an extra character to get a higher precision.

Wrapping up


The hard part is getting the required data into Elasticsearch. Once the data is available, the fun can start: you can use the power of Kibana to visualize your data in any way you want. In this post, we added location information to our data to get an overview of activity and response time by location.