
Background:


This blog explores an open-source tool to monitor HANA logs in real time. Imagine having to monitor and analyse HANA logs to ensure a HANA node is running properly; now imagine doing that for many HANA nodes in a cluster environment across your landscape. SAP offers SAP ITOA, a licensed real-time monitoring and analysis tool that runs on a HANA database and collects syslog data from different infrastructure components.

What if we could have an open-source platform to stream, analyse and monitor the HANA component logs from your HANA cluster nodes?

In this blog we introduce a new analytics solution for real-time log analysis, "Open Stack IT operation analytics" for HANA cluster nodes, developed in-house with open-source tools (the ELK stack) to stream, analyse and monitor SAP HANA components.

Benefits of Open Stack IT operation analytics: stream and analyse the logs and identify critical incidents in real time.

Overview of ELK Stack:


Elastic provides an end-to-end stack, ELK, that makes searching and analysing data easier than ever before. Figure 1 represents the ELK stack.
Please refer to https://www.elastic.co/ for more information.

Figure 1

Kibana: Kibana gives shape to your data and is the extensible user interface for configuring and managing all aspects of the Elastic Stack.

Elasticsearch: Elasticsearch is a distributed, JSON-based search and analytics engine designed for horizontal scalability, maximum reliability, and easy management. With Elasticsearch we can instantly store, search and analyse data.

Logstash: Ingest any data, from any source, in any format. Logstash is a dynamic data collection pipeline with an extensible plugin ecosystem and strong Elasticsearch synergy.

Benefits:

  • Derive structure from unstructured data with grok

  • Decipher geo coordinates from IP addresses

  • Anonymize PII data, exclude sensitive fields completely

  • Ease overall processing independent of the data source, format, or schema.


Architecture of Open Stack IT Operation Analytics for SAP HANA:



Figure 2

SAP ITOA and Open Stack IT Operation Analytics:



Part 1: Stream, analyse and monitor SAP HANA component logs using the ELK Stack


SAP HANA has the core components listed below, whose availability is critical to the business.

  • Name Server

  • Index Server

  • Pre-processor

  • XS Engine

  • Web dispatcher


The logging output of these components can be ingested into Elasticsearch and used to search, analyse, visualize and monitor the resources. Today these logs are unstructured data. As the data travels from source to store, Logstash filters parse each event, identify named fields to build structure, and transform them to converge on a common format for easier, accelerated analysis and business value. Logstash dynamically transforms and prepares your data regardless of format or complexity.

Step 1: Download and install the ELK stack packages
1.1 Install Elasticsearch:
Install Elasticsearch by following the steps in the link below:
https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html

1.2 Install Logstash:
Install Logstash by following the steps in the link below:
https://www.elastic.co/guide/en/logstash/5.x/installing-logstash.html

1.3 Install Kibana:
Install Kibana by following the steps in the link below:
https://www.elastic.co/guide/en/kibana/current/install.html

The ELK stack packages can be configured on any machine. In this example, we installed and configured ELK on Windows and shipped the HANA logs from our HANA box in the cloud to the Windows server running ELK.

Step 2: Configure Elasticsearch
Open the Elasticsearch URL and check that the cluster name and node are set correctly as per the installation steps.



In this example we use the default index that has been created; optionally, you can create your own index in Elasticsearch.
If you’re running Elasticsearch on Windows, you can download cURL from http://curl.haxx.se/download.html. cURL provides a convenient way to submit requests to Elasticsearch, and installing it lets you copy and paste many of the examples in this blog to try them out.
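
As a quick check, the requests below verify the cluster and optionally create a dedicated index (a minimal sketch; the host, port and the hana-logs index name are our assumptions):

# show cluster name and version (assumes Elasticsearch on localhost:9200)
curl -XGET "http://localhost:9200/"

# check cluster health
curl -XGET "http://localhost:9200/_cluster/health?pretty"

# optionally create a dedicated index for the HANA logs (hana-logs is a hypothetical name)
curl -XPUT "http://localhost:9200/hana-logs"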

We used "elasticsearch-head", a front-end tool, to visualize the Elasticsearch cluster and indices.



Step 4: Configure Logstash
The configuration file consists of three sections: input, filter and output. The input part defines the logs to read, the filter part defines the filters to be applied to the input, and the output part specifies where to write the result.



The general format looks like
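
A minimal sketch of the three-section layout (placeholders only):

input {
  # where the events come from (files, syslog, beats, ...)
}

filter {
  # how each event is parsed and enriched
}

output {
  # where the processed events are written, e.g. Elasticsearch
}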



Logstash comes out of the box with everything it takes to read Apache logs and syslogs. For SAP HANA component logs, however, we need to derive a custom pattern to read them. To do this we have used grok.

Grok is currently the best way in Logstash to parse unstructured log data into something structured and queryable. Grok works by combining text patterns into something that matches your logs. Please use the link below to derive the patterns using grok:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-matc...

If you need help building patterns to match your logs, you will find the http://grokdebug.herokuapp.com and http://grokconstructor.appspot.com/ applications quite useful!

Input

All files starting with “indexserver_” in the given directory are read by Logstash. A type is added to every line read, which can be used later for searching and filtering.
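
A sketch of such an input section, assuming a hypothetical trace directory (in our setup the logs were shipped to the Windows server first, so point path at that local directory instead):

input {
  file {
    # hypothetical HANA trace location for SID HDB, instance 00
    path => "/usr/sap/HDB/HDB00/yourhost/trace/indexserver_*"
    # read existing file content from the start, not only new lines
    start_position => "beginning"
    # tag every event so filters and searches can select it later
    type => "hana_indexserver"
  }
}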

Filter



The filter is applied to all lines with the type specified in the input. Grok applies the regular expressions that match the customized patterns of the HANA logs. To facilitate later analysis, we have included the timestamp from the log, the action and the log message.
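
A sketch of such a filter, with a grok pattern derived from the sample HANA trace lines shown later in this blog (the field names, the exact pattern and the timestamp format are our assumptions and may need tuning):

filter {
  if [type] == "hana_indexserver" {
    grok {
      # layout: [pid]{connection}[transaction] timestamp level component source(line) : message
      match => { "message" => "\[%{NUMBER:pid}\]\{%{DATA:connection}\}\[%{DATA:transaction}\] %{TIMESTAMP_ISO8601:log_timestamp} %{WORD:severity} %{NOTSPACE:component}%{SPACE}%{NOTSPACE:source} : %{GREEDYDATA:log_message}" }
    }
    date {
      # use the timestamp from the log line as the event timestamp
      match => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
    }
  }
}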

Output



As output, a local Elasticsearch server is defined. The logs are written to the default index. This stores the log lines in Elasticsearch and makes them accessible for further processing.

If you don’t want to see the output log, you can comment out the “stdout { codec => rubydebug }” line.
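
A sketch of such an output section (host and port are our assumptions; with no index option Logstash writes to its default logstash-* indices):

output {
  # send the parsed events to the local Elasticsearch instance
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  # echo each parsed event to the console; comment this out to suppress it
  stdout { codec => rubydebug }
}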

Execute Logstash with the config file
Save the config file in the Logstash bin directory and execute the below command to start Logstash.
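
For example (hana_logs.conf is a hypothetical config file name):

# on Windows, from the Logstash bin directory
logstash.bat -f hana_logs.conf
# on Linux
./logstash -f hana_logs.conf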



Once Logstash starts, you will notice the working filter output, as shown below.





Similarly, multiple input files can be parsed and their JSON sent to Elasticsearch, as in the sketch below.
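
A sketch of a multi-file input covering several HANA components (paths and type names are our assumptions):

input {
  file {
    path => "/usr/sap/HDB/HDB00/yourhost/trace/indexserver_*"
    type => "hana_indexserver"
  }
  file {
    path => "/usr/sap/HDB/HDB00/yourhost/trace/nameserver_*"
    type => "hana_nameserver"
  }
  file {
    path => "/usr/sap/HDB/HDB00/yourhost/trace/webdispatcher_*"
    type => "hana_webdispatcher"
  }
}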


Sample log files from HANA
Index server input log
[8296]{-1}[-1/-1] 2017-01-04 07:03:58.024596 i TraceContext     TraceContext.cpp(00923) : UserName=
[8296]{-1}[-1/-1] 2017-01-04 07:03:58.024591 i STATS_WORKER     ConfigurableInstaller.cpp(00043) : installing Alert_Server_Time_Discrepancy (id is 76): nothing to do
[8296]{-1}[-1/-1] 2017-01-04 07:03:58.024600 i STATS_WORKER     ConfigurableInstaller.cpp(00030) : installing Alert_Check_Database_Disk_Usage (id is 77)
[8296]{-1}[-1/-1] 2017-01-04 07:03:58.026501 i TraceContext     TraceContext.cpp(00923) : UserName=
[8296]{-1}[-1/-1] 2017-01-04 07:03:58.026497 i STATS_WORKER     ConfigurableInstaller.cpp(00043) : installing Alert_Check_Database_Disk_Usage (id is 77): nothing to do
[8296]{-1}[-1/-1] 2017-01-04 07:03:58.026505 i STATS_WORKER     ConfigurableInstaller.cpp(00030) : installing Alert_Replication_Connection_Closed (id is 78)
Nameserver input log
[5159]{-1}[-1/-1] 2017-01-04 07:03:57.725294 i STATS_CTRL       CallInterfaceProxy.cpp(00044) : sending install request
[5159]{-1}[-1/-1] 2017-01-04 07:03:59.696959 i STATS_CTRL       CallInterfaceProxy.cpp(00048) : response to install request: OK
[5159]{-1}[-1/-1] 2017-01-04 07:03:59.727035 i STATS_CTRL       NameServerControllerThread.cpp(00777) : removing old section from statisticsserver.ini: statisticsserver_general
[5159]{-1}[-1/-1] 2017-01-04 07:03:59.738053 i STATS_CTRL       NameServerControllerThread.cpp(00782) : making sure old StatisticsServer is inactive statisticsserver.ini: statisticsserver_general, active=false
[5159]{-1}[-1/-1] 2017-01-04 07:03:59.747184 i STATS_CTRL       NameServerControllerThread.cpp(00524) : installation done
[5159]{-1}[-1/-1] 2017-01-04 07:03:59.747241 i STATS_CTRL       NameServerControllerThread.cpp(00575) : starting controller
[5159]{-1}[-1/-1] 2017-01-04 07:03:59.747305 i STATS_CTRL       NameServerControllerThread.cpp(00192) : waited 177030ms
Web dispatcher input log
[8341]{-1}[-1/-1] 2017-01-03 15:45:18.980609 i webdispatcher    webdispatcher.cpp(00163) : Waiting for Web Dispatcher Shutdown ...
[8341]{-1}[-1/-1] 2017-01-03 15:45:18.980960 e TNS              TNSClient.cpp(00657) : nameserver vhcalhdbdb:30201 not initialized. retry in 5 sec...
[8341]{-1}[-1/-1] 2017-01-03 15:45:18.980987 i webdispatcher    webdispatcher.cpp(00199) : Notified Nameserver about shutdown
[8341]{-1}[-1/-1] 2017-01-03 15:45:18.980993 i Service_Shutdown TrexService.cpp(00891) : Stopping threads

Step 5: Configure Kibana
Open the Kibana interface and select the index pattern for the index you want to visualize.



In the Discover tab we can see the latest documents. If a time field is configured for the selected index pattern, the distribution of documents over time is displayed in a histogram at the top of the page.



Visualize your Elasticsearch data and navigate the Elastic Stack with different chart types.

Build your own dashboard



We can see that the “Service_Shutdown” event has been captured in the Kibana dashboard.
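
For example, using the fields created by the grok filter above, a hypothetical query in the Kibana search bar to isolate these events would be:

type:hana_webdispatcher AND component:"Service_Shutdown"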

The cluster and indices can also be monitored in Kibana.



Alerting can be set up based on a Watcher search in Kibana, which can send alerts in case of critical errors. This will be discussed in the follow-up blog, monitoring and alerting on critical errors of SAP HANA components in real time with Open Stack IT Op....

 