Time for SAP to innovate? System Performance
My first exposure to SAP was SAP R2 in Basis (a long time ago).
One of the roles of my team was to monitor SAP system performance.
The main KPI was Response Time per Dialog Step.
A few years later, despite all the technical evolutions of the SAP solution (SAP R3, ECC6 …), the main performance KPI still remains Response Time per Dialog Step … with ST03 / ST03N as the traditional source of information (and of management decisions 😥 )
The classical issue is the impact of a release upgrade on the ST03N statistics.
Due to my non-IT engineering background, I have known for a while that the calculation done in ST03N is misleading (though I lacked the time and relevant data to demonstrate it).
A few months ago I found this old article, which is in fact an outstanding summary of the challenge: http://arxiv.org/pdf/cs.PF/0404035.pdf
It illustrates why the data calculated in ST03N are "useless" according to the "maths".
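To see the core of that argument, consider a toy example (the numbers below are hypothetical, not real ST03N or STAD data): when the response-time distribution is skewed, the arithmetic mean reported as "average response time" is dominated by a few outliers and describes the experience of almost no actual user.

```python
# Hypothetical dialog-step response times (seconds): nine fast steps, one slow one.
from statistics import mean, median

response_times = [0.2, 0.2, 0.3, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5, 9.0]

avg = mean(response_times)    # 1.21 s -- dominated by the single outlier
med = median(response_times)  # 0.35 s -- what a typical user actually sees

# 90th percentile via the simple nearest-rank method
ranked = sorted(response_times)
p90 = ranked[int(0.9 * len(ranked)) - 1]  # 0.5 s

print(f"mean={avg:.2f}s  median={med:.2f}s  p90={p90:.2f}s")
```

No user experienced anything close to the 1.21 s "average", which is exactly why percentile-based views are usually preferred over the mean for response times.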
There has been a lot of SAP buzz over the last few months / years around Big Data, Analytics (including Predictive), In-Memory databases, Data Visualization, and Innovation.
Hence my frustration that SAP has taken the same approach to system performance for more than 20 years.
It seems there is a unique opportunity for SAP to demonstrate how these buzzwords could enable a step change, with a use case that everybody in IT can understand since it is independent of any business process. It could become an outstanding, concrete marketing vehicle.
The data in STAD is a source of big data and an incredible asset (far beyond what we might think: for example, real-time monitoring as done in some production industries).
If somebody from SAP is interested in understanding my thoughts on the potential innovation further, he/she can contact me.
This blog is a personal opinion and is not an official statement from my company.
I think one issue with STAD is that, though every dialog step (user interaction) results in a statistical record, the steps of one transaction cannot be identified by name. Instead, all end up under the same URL (in the case of WD ABAP) or transaction/program (SAP GUI).
But even with that information it would not be very meaningful to do elaborate statistics, since the same UI step may be slow for one user and fast for another, depending on the selections used, the objects created, personalization, etc.
In the end, to identify the reason for bad performance observed by some user, one has to execute exactly that user's scenario and trace it with ST05 or SAT/ST12.
Best Regards, Randolf
Thanks for sharing that PDF 🙂
I totally understand your pain with the KPI "average response time" (and its derivatives), and especially with convincing clients that it may not be the right source of information. One of my "fun demos" is simply to run the input-screen dialog step several times, thereby reducing the average response time of the transaction. After reaching the average response time target with that approach, I usually ask: "Do your SAP users notice a difference now that we have reached the requested average response time per transaction?" Most of the time this is enough to call the previously used KPI into question.
Several business processes using the same transaction are a real-life example of such response-time shifting without there being any issue at all.
The SAP performance instrumentation is great in general, but it lacks the setting and use of a unique (user-experience) ID. Without such a UID it is currently not possible to group several dialog steps into a business-related aggregate and define SLAs on it (e.g. 90% of all runs of the aggregated business task have to finish in less than 1 second). STAD (or any response-time-driven analysis) gets even worse if the end user is using several SAP modes with the same transaction, again without such a UID.
The UID has to be set based on specific characteristics (a unique ID, user, the business process used, etc.) that make it unique for a specific aggregation. However, such implementations get even more complex across several different technologies. Running synthetic test cases (= simulated business-related tasks) via RFC and measuring the aggregated response time is a workaround, but it is very limited and neither dynamic nor end-user specific.
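The UID idea above can be illustrated with a minimal sketch. The record layout and field names (`uid`, `step`, `resp_s`) are pure assumptions for illustration, not a real STAD structure: if each statistical record carried a business-task UID, dialog steps could be aggregated per run and an SLA checked on the aggregate instead of on individual steps.

```python
# Hypothetical STAD-like records, each tagged with an assumed business-task UID.
from collections import defaultdict

records = [
    {"uid": "run-1", "step": "input", "resp_s": 0.1},
    {"uid": "run-1", "step": "save",  "resp_s": 0.6},
    {"uid": "run-2", "step": "input", "resp_s": 0.1},
    {"uid": "run-2", "step": "save",  "resp_s": 1.4},
    {"uid": "run-3", "step": "input", "resp_s": 0.2},
    {"uid": "run-3", "step": "save",  "resp_s": 0.5},
]

# Aggregate all dialog steps belonging to the same business-task run.
totals = defaultdict(float)
for rec in records:
    totals[rec["uid"]] += rec["resp_s"]

# SLA on the aggregate: what share of runs finished in under 1 second?
runs = sorted(totals.values())
within_sla = sum(1 for t in runs if t < 1.0) / len(runs)
print(f"runs within 1s: {within_sla:.0%}")  # 2 of 3 runs -> 67%
```

With per-step data alone, all six steps look harmless; only the per-run aggregate reveals that one business task blew the 1-second budget.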