SAP R/3 and the myth of the average dialog response time

Back in the good old days, when SAP software was considerably simpler (no Java on the horizon yet), you had your SAP R/3 system(s) and transaction ST03 for analyzing their performance. The average dialog response time was THE (arguably even the only) SAP performance metric of the system. It may sound comical, but it is largely coincidental that the value for the average dialog response time correlated so well with the performance end users actually experienced. The reason was two opposite effects which mostly canceled each other out:

1. What end users experienced as a single dialog step was reported as many very fast sub-steps, which pulls the average down. You can verify that via single records analysis (transaction STAD).

2. Even though there is a limit on the maximum hold time for dialog work processes, you can still find some rare but very long running DIA transactions in the ST03 data. They have a high impact on the averages and pull them back up. (A small numeric illustration follows below.)
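To see how these two effects can offset each other, here is a minimal back-of-the-envelope sketch in Python, with entirely made-up numbers (not taken from any real ST03 data):

```python
# Made-up numbers: 99 user-perceived dialog steps of 2 s each.
perceived = [2.0] * 99

# Effect 1: each perceived step is recorded as 4 sub-steps of 0.5 s,
# which pulls the recorded average down.
recorded = [0.5] * (99 * 4)

# Effect 2: one rare, long-running DIA transaction of 600 s sneaks
# into the statistics and pulls the average back up.
recorded.append(600.0)

print(f"perceived average: {sum(perceived) / len(perceived):.2f} s")  # 2.00 s
print(f"recorded average:  {sum(recorded) / len(recorded):.2f} s")    # ~2.01 s
```

With these (admittedly hand-picked) numbers the two distortions almost exactly cancel, which is the effect described above.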

In summary, you checked the average dialog response time, and luckily most of the time it was very close to what users saw when working on the system. If you don’t believe me, reduce the value of rdisp/max_wprun_time by 20% and check the effect in ST03.

Standards, which Standards?

Now, when you have an SAP BW system, some transparency would be great. Of course users are complaining about reporting and/or data load performance, but the big questions remain:

– Are the complaints justified?

– Is the trend merely negative, or catastrophic?

– Where are the pain points?

– How do I verify or report overall system performance?

Unfortunately, there seems to be no equivalent to the average dialog response time known from R/3 systems. This may sound like a harsh statement, especially since I am no SAP BW specialist.

There is this really great presentation, “SAP NetWeaver BW Administration Cockpit”. Couldn’t I just read that and be done with it?

BW Administration Cockpit

http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0e5ca3b-95ce-2b10-4d94-864ab29a8b63

Well, actually no.

Land of Confusion

To be fair, this whitepaper answers lots of questions and provides deep insight into the topic of SAP BW performance reporting, so I am really thankful that it exists. However, when I try to use it in real life, the problems start and more and more mysteries pop up:

Query Runtime Statistics

SAP says that the OLAP processor collects Query Runtime Statistics in the RSDDSTAT* tables. The BW technical content reads this data via view RSDDSTAT_OLAP and writes it into DSO 0TCT_DS01 and later into InfoCube 0TCT_C01. So far so good, but have you ever tried to verify that information? I took a deeper look into RSDDSTAT_OLAP on various systems and tried to make sense of what was later written into 0TCT_C01.

The data manager timings (typically from an Oracle database) correlated very well with what ended up in 0TCT_C01. That was of no great help to me, because Oracle databases provide plenty of useful performance KPIs themselves; I don’t need an SAP system for that. But at least that specific data was accurate. Strangely enough, the correlation for the OLAP data was much less obvious. There were many rows with unexplainable differences between the RSDDSTAT_OLAP and 0TCT_C01 data. In total, these differences could accumulate to well over 10%, thereby skewing the averages. I used lots of tracing to find a reason for this, but I simply cannot explain it.

Worse than that is the frontend data. While RSDDSTAT_OLAP typically doesn’t show much frontend response time, the InfoCube 0TCT_C01 does! Even with tracing I couldn’t see how SAP BW “makes up” this data. For performance reporting it is useless. (A sketch of the kind of comparison I ran follows below.)
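For illustration, here is a minimal sketch of such a cross-check, assuming both data sets have been exported to CSV files (e.g. via SE16 and a query on 0TCT_C01). The file and column names are hypothetical, not official SAP field names; they depend entirely on your export:

```python
import csv
from collections import defaultdict

def total_time(path, key_col, time_col):
    """Sum up a time column per query name from a CSV export."""
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row[key_col]] += float(row[time_col])  # assumes '.' decimals
    return totals

# Hypothetical export files and column names.
raw = total_time("rsddstat_olap.csv", "QUERYID", "OLAPTIME")
tct = total_time("0tct_c01.csv", "QUERYID", "OLAPTIME")

# Report queries whose totals differ by more than 10 percent.
for query in sorted(raw.keys() & tct.keys()):
    a, b = raw[query], tct[query]
    if a > 0 and abs(a - b) / a > 0.10:
        print(f"{query}: RSDDSTAT_OLAP={a:.1f}s vs. 0TCT_C01={b:.1f}s")
```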

Data Load Statistics

I was not much surprised to find that the picture for the Data Load Statistics is quite similar. Comparing the entries in table RSDDSTATDTP with InfoCube 0TCT_C22 is difficult, even for small systems with only little activity. Maybe I haven’t been looking in the right places, but can it be that the contents of table RSDDSTATDTP aren’t documented by SAP? The InfoCube 0TCT_C22 is quite self-explanatory, but negative runtimes reduce its trustworthiness. If more than 10% of the rows contain negative runtimes, I don’t know how to handle that.

Similarly, I cannot see a strong correlation between RSDDSTATWHM and InfoCube 0TCT_C23. And here as well, about 10% of the rows have negative durations. Most probably I am just missing the right documentation on the contents of table RSDDSTATWHM and the hidden logic by which this data is loaded into 0TCT_C23, but depending on which data I consult, I get quite different pictures of performance. (A sketch of the negative-runtime check follows below.)
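Here is a minimal sketch of the negative-runtime plausibility check for 0TCT_C22 and 0TCT_C23, again assuming CSV exports; the file names and the duration column name are hypothetical:

```python
import csv

def negative_runtime_share(path, duration_col="DURATION"):
    """Return the fraction of rows with a negative runtime."""
    total = negative = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if float(row[duration_col]) < 0:
                negative += 1
    return negative / total if total else 0.0

# Hypothetical export file names for the two technical content InfoCubes.
for export in ("0tct_c22.csv", "0tct_c23.csv"):
    print(f"{export}: {negative_runtime_share(export):.1%} rows negative")
```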

If there is indeed documentation for the contents of RSDDSTATDTP and RSDDSTATWHM, please provide a link. I will be happy to apologize and restart my research.

Whom should I trust?

Reality, as usual, is at least one order of magnitude more complex than anticipated. To complete the picture, SAP itself adds quite a lot to my confusion. For example, if you access transaction ST03 on a BW system, you can see the “BI Workload” performance. There is a branch called “Load Data” containing InfoPackage and DTP performance data:

ST03 BW Performance Data

Now guess what? The InfoPackage performance data is taken from table RSDDSTATWHM and the DTP performance data from table RSDDSTATDTP, yet neither matches the actual table contents exactly! The data presented in ST03 is admittedly quite close to the actual table contents, but again I see lots of unexplainable discrepancies. These discrepancies affect all categories: packages, number of rows, runtimes, etc. So whom should I trust? The raw data in RSDDSTATDTP? Transaction ST03? Or InfoCube 0TCT_C22? Which one is closest to the truth?

For the Query Runtime Statistics, SAP has already changed the picture: the ST03 Query Runtime Statistics are now taken from InfoCube 0TCT_C01, whereas in the past the data came from the RSDDSTAT* tables. Still, the big question remains which is closer to the truth, the actual data in the RSDDSTAT* tables or the technical content InfoCubes like 0TCT_C01.

Help is on its way

You can imagine how thankful I am for SAP’s announcement of the In-Memory Database. I am not so confident that the mysteries of how SAP presents BW performance data will ever be fully clarified. (But I will gladly admit it if I turn out to have been too pessimistic here.) However, some problems might vanish by themselves:

– As soon as the BW system is running on an In-Memory Database, query runtimes can be expected to decrease by one or two orders of magnitude. The criticality of BW performance reporting will decrease accordingly.

– Once the ERP and the BW system share the same In-Memory Database, the need for data loading mostly disappears. Analyzing data load performance problems would then belong to the past. This alone would justify migrating to In-Memory Databases.

5 Comments

  1. Kristian Appel
    Dear Mark,
    I agree that the standard business content on performance is close to useless, but I have actually been able to use it as a good starting point in the past. Sure, it’s not like bookkeeping, where you can trace every little cent, but does that matter? We are typically just looking for trends or comparing one query with another, and trusting the law of large numbers it will typically turn out OK.
    And as for your HANA dreams, I hope that you are being deeply ironic here. Of course there are many situations where the DB access is the bottleneck, but many reports actually spend a lot of time in the OLAP layer due to e.g. exception aggregation (typically counting). HANA will not solve the OLAP problems! You also write “Once the ERP and the BW system share the same In-Memory Database, the need for data loading mostly disappears.” How naive. We may not need to load data as we do today, and in areas like finance we will see reports directly on the ECC tables (e.g. CO-PA), but in areas like SD or SCM there is so much logic in the loading process that I can’t see it done “online”. Of course SAP could choose to reprogram on the ECC side, but I must admit that I can’t see the business case for SAP to deliver in this area as long as the technology is prohibitively costly for a lot of their customers. I hope that I’m wrong, but it will be at least 5-10 years down the road before we see a really interesting HANA world where everything runs in the same piece of memory.

    With Kind Regards
    Kristian Appel

    1. Mark Förster Post author
      Hello Kristian,
      thanks a lot for your practical feedback! You are of course right, I don’t need accounting-grade precision. One can use e.g. the SAPSMETER if the exact resource consumption of a BW system is needed.

      The larger the system and the more activity there is on it, the closer e.g. the RSDDSTAT_OLAP and 0TCT_C01 results become. Still, I have a deep urge to find out which data is closer to the truth.

      About my hopes for HANA, you might very well be right that I am simply naive. Anyway, I put high hopes in the technology. Time will tell whether SAP’s claims were justified. Even if only part of the data loading pain can be alleviated, it is still a strong business case for HANA!

      I cannot estimate how big the impact of putting OLAP functionality into the HANA database really is. Why not push exception aggregation down into the in-memory database as well? The potential seems large, but as I said, I am no BW specialist.

      Isn’t it fascinating to watch the technology evolve? There is no big-bang approach, maybe because that wouldn’t work. We currently see SAP tackling the HANA introduction in several distinct steps. It may well take another five years for SAP ERP to run on HANA, but the benefits of running BW on HANA are already almost within reach.

      Regards,

      Mark

  2. HS Kok

    I have been using the BI Statistics for quite some time now, and I have to agree that there is no clear-cut way of using them properly. A lot of reverse engineering is needed to figure out how each line in the RSDDSTAT tables (and 0TCT_C01) translates into actual executions/runtimes of each BW query.

    Once you have that figured out, everything works like a charm. I have to admit that I do not use the BI statistics at the level of granularity where I measure report runtimes down to the last millisecond.

    And even though HANA is touted as the “next big thing” for resolving all report performance problems, I have to say that it is really something that we as BW developers need to take a close, hard look at.

    Oftentimes I have seen too many badly designed InfoCubes, queries, or custom extractors with absolutely horrendous runtime performance. Now, instead of finding ways to improve the design/logic/coding behind these bottlenecks, SAP has come out with HANA as the “quick fix” for companies encountering performance bottlenecks in their BW systems.

    It is truly a sad state of affairs (and an irony!) that we have to depend on such “technology upgrades” to deal with performance issues, when in fact bad design and poor coding are the real culprits…

  3. Samik Sarkar

    Hi Mark,

    Nice blog. However, if you were asked to develop such a report, what would be your approach? Can you please suggest some KPIs?

    Thanks in advance

    1. Mark Förster Post author

      Hello Samik,

      typically I go with the BW avg. NavSteps runtime KPI:

      (0TCTTMEOLAP + 0TCTTMEDM) / 0TCTQUCOUNT

      Data is read from 0TCT_MC01.

      Maybe the frontend times would be interesting as well, but normally I leave them out.
      The values show a high variation, but for detecting trends they should be usable. They are NOT suitable for defining SLAs.
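      A tiny worked example of this KPI with entirely made-up numbers (in a real system the totals would come from a query on 0TCT_MC01):

      ```python
      # Made-up totals for one reporting period, in seconds:
      olap_time = 1250.0   # 0TCTTMEOLAP: total OLAP time
      dm_time = 3750.0     # 0TCTTMEDM: total data manager time
      nav_steps = 2000     # 0TCTQUCOUNT: number of navigation steps

      avg_navstep_runtime = (olap_time + dm_time) / nav_steps
      print(f"avg. NavStep runtime: {avg_navstep_runtime:.2f} s")  # 2.50 s
      ```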

      Regards,

      Mark
