mark_foerster

What drives me



After a hiatus of more than 7 years, I want to start blogging again. This time I am trying an SAP Community blog post series about my adventures in Machine Learning. First, a short introduction: I have been an SAP Basis administrator (including Oracle DBA duties) for 23 years. During that time I was also doing what could be called Data Engineering. It would be too much of a stretch to call myself a Data Scientist, but I fully get why Data Scientist was called the sexiest job of the 21st century.



In recent years I was mainly occupied with SAP BW systems and with SAP HANA database administration. I had a lot of fun discovering the many intricate details of how SAP BW systems work. However, some things always eluded me; SAP BW performance in particular remained quite a mystery. There is, of course, transaction ST03, which shows some measures of BW workload and performance, and you can spend a lot of time trying to make sense of them or figuring out how to use these metrics. In the end, I didn't use them much, for various reasons: their short retention history makes them quite arbitrary, and it is difficult to reverse engineer how they are calculated.





A new approach to an old problem



Here is a rough and oversimplified sketch of an SAP BW system's workflow. (If you are familiar with SAP BW, simply skip it and don't roll your eyes!) Data is loaded from source systems (e.g. ERP systems) via InfoPackages (IPs) and first written verbatim into an area called the PSA. From there, Data Transfer Processes (DTPs) transform the data and write it to DataStore Objects (DSOs). Typically, the data is then written on to InfoCubes. If the BW system is running on Oracle, you might additionally index the data via a BW Accelerator (BWA). Once the data is available, end users can run predefined BW queries against either the InfoCubes on Oracle or the HANA-optimized advanced DSOs (aDSOs) on HANA.







Figure 1: drawn by myself






So I have always wanted a clearer picture of the performance of SAP BW systems. I could simply define some basic rules like "the average process chain runtime should be faster than X seconds" or "the average BW query navigation step runtime should be faster than Y seconds". As you might have guessed, such simple rules do not help much with real-world issues. So why not try something completely new? Something where I don't even know how to tackle the problem yet? This gave me the motivation to start learning Machine Learning: I had the problem, now let's see whether Machine Learning or even Artificial Intelligence could help me out.
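To make that naive rule-based approach concrete, here is a minimal sketch of such a fixed-threshold check in Python (all names and numbers are made up for illustration):

    import pandas as pd

    # Hypothetical extract of process chain runtimes, one row per chain run
    runs = pd.DataFrame({
        "chain_id": ["PC_SALES", "PC_SALES", "PC_FI", "PC_FI"],
        "runtime_sec": [1450, 2350, 310, 290],
    })

    # The simple rule: flag a chain if its average runtime exceeds a fixed limit
    LIMIT_SEC = 1800
    avg_runtime = runs.groupby("chain_id")["runtime_sec"].mean()
    print(avg_runtime[avg_runtime > LIMIT_SEC])

The problem is obvious: the limit is arbitrary, and a value that is perfectly normal for one system (or one weekday) can be alarming for another.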

I needed software that could analyze the BW workload and tell me whether it was normal or extraordinary. Once that was tackled, the next step would be software that could analyze the BW workload and predict whether the system was becoming unstable (in terms of availability or performance), at least for some scenarios.



Since I already knew that fixed thresholds wouldn't help me, I wanted to experiment with unsupervised learning (i.e. without introducing my personal bias). The algorithms should tell me whether the workload looks unusual. I couldn't reliably say so myself, and depending on whom you ask, you would get different answers about whether the performance is normal anyway.
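As a first idea of what such an unsupervised check could look like, here is a minimal sketch using scikit-learn's IsolationForest (the file name, the index column, and the contamination setting are assumptions for illustration; this is just one of several candidate algorithms, not necessarily the one I will end up with):

    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Hypothetical input: one row per hour, one column per collected BW metric
    metrics = pd.read_csv("bw_metrics.csv", index_col="timestamp")

    # Unsupervised anomaly detection: no fixed thresholds, no labels
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(metrics)

    # predict() returns -1 for observations the model considers unusual
    flags = model.predict(metrics)
    print(metrics.index[flags == -1])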


I know of SAP Focused Run and its Anomaly Prediction functionality, and in the future I might evaluate that as well. Lacking access to SAP FRUN, however, I started from scratch all by myself:

  • extracting 80+ metrics from SAP BW systems

  • collecting them automatically in a database

  • analyzing them in Python (Anaconda) with pandas, NumPy, Jupyter, Keras, ...


Currently I am supporting 13 BW-on-Oracle and 2 BW-on-HANA systems with various releases, sizes, scopes, and histories. I started collecting the data automatically via PL/SQL on Oracle and SQLScript on HANA. This is the list of tables/views that provide the data:








Table 1: written by myself
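For the analysis, the collected time series are pulled from the database into pandas. Here is a minimal sketch for the HANA case, assuming a hypothetical collection table METRICS_HISTORY and placeholder connection details (on Oracle, only the client library would differ):

    import pandas as pd
    from hdbcli import dbapi  # SAP HANA client for Python

    # Placeholder host/credentials; METRICS_HISTORY is my hypothetical
    # collection table filled by the SQLScript jobs
    conn = dbapi.connect(address="hanahost", port=30015,
                         user="MONITOR", password="***")
    df = pd.read_sql(
        "SELECT snapshot_ts, metric_name, metric_value "
        "FROM METRICS_HISTORY ORDER BY snapshot_ts",
        conn,
    )

    # Pivot into one column per metric for the later analysis steps
    wide = df.pivot(index="snapshot_ts", columns="metric_name",
                    values="metric_value")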








I also tried to get data from table MONI, since it seems highly relevant for me. Unfortunately, due to SAP's arcane compression algorithm, this data can only be accessed at the ABAP level, not at the database level. This is a very unfortunate decision by SAP. I believe the success of SAP was in some part due to its "open source" approach: the ABAP code was available, and even most of the data was easily accessible in the database, so you were not always forced to go through the ABAP stack to implement some simple interface. Performance analysis on SAP systems would be MUCH easier if this table could be accessed via simple SQL. And if I wanted to analyze the workload of non-BW systems, then I would definitely need the MONI data.
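Since the data is only readable at the ABAP level, the workaround is to go through an RFC. A minimal sketch with PyRFC, assuming the workload collector function module SWNC_COLLECTOR_GET_AGGREGATES (please verify the function module name and its parameters on your release; all connection details are placeholders):

    from datetime import date
    from pyrfc import Connection  # Python bindings for the SAP NW RFC SDK

    # Placeholder connection details
    conn = Connection(ashost="bwhost", sysnr="00", client="100",
                      user="MONITOR", passwd="***")

    # Fetch one day of aggregated workload statistics via RFC instead of SQL
    result = conn.call("SWNC_COLLECTOR_GET_AGGREGATES",
                       COMPONENT="TOTAL",   # whole system
                       PERIODTYPE="D",      # daily aggregate
                       PERIODSTRT=date(2024, 1, 15))
    task_times = result["TASKTIMES"]  # e.g. response times per task type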






Outlook


So much for my introduction. I plan for the further blog posts in this series to be much more technical, providing many more details, such as:

  • the list of metrics I have actually chosen

  • presenting some interesting correlations between the metrics

  • a new approach for a ranking of the overall SAP BW workload (from busy to idle)

  • applying Machine Learning to identify important/useful metrics, and detecting metrics with low data quality

  • using standard Machine Learning algorithms to get some first insights

  • comparing their performance to an artificial neural network

  • using that artificial neural network to detect unusual BW workload (first big milestone; a rough sketch of the idea follows below)
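To give a rough idea of the direction for that milestone, here is a minimal autoencoder sketch in Keras (the file name, layer sizes, and training settings are all placeholders and may well differ from what I end up using):

    import numpy as np
    from tensorflow import keras

    # Hypothetical input: rows = points in time, columns = scaled metrics
    X = np.load("bw_metrics_scaled.npy")
    n_features = X.shape[1]

    # An autoencoder learns to reproduce "normal" workload; time points it
    # reconstructs badly are candidates for unusual workload
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(4, activation="relu"),   # bottleneck
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(n_features),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, X, epochs=50, batch_size=32, validation_split=0.1)

    # Reconstruction error per time point; large values = unusual workload
    errors = np.mean((X - model.predict(X)) ** 2, axis=1)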


Maybe in the future I will also gain some experience with SAP FRUN and how to use that tool for analyzing the workload. Let's see whether this series finds an interested audience here and how things develop.
