
[Figure: PerformanceCloud.png (the performance cloud)]

When I wrote the last update of my SAP performance optimization book, I thought about a picture that brings all performance concepts together in a single view, a sort of “grand unified performance picture”. What came out of these thoughts is the performance cloud that I am presenting here.

Here are some ideas on how to read it. Of course, the cloud is about performance, so performance is in the center. On the next level, you can find the parallelization factor and the processing time. If the processing time is the average time needed to process a transaction (step), then performance is

     performance = parallelization factor ÷ processing time,

measured as the number of transactions (steps) processed per time.
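As a quick worked example of this formula (the numbers are my own, purely illustrative):

```python
# Worked example of the formula above: performance measured as throughput.
parallelization_factor = 8    # e.g. 8 CPU threads with ideal scalability
processing_time = 0.05        # average seconds per transaction (step)

performance = parallelization_factor / processing_time
print(performance)  # 160.0 transactions (steps) per second
```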

On the next level, on the side of the processing time, you find the hardware parameters CPU speed, disk latency, and network latency, which influence the processing time, each multiplied by the number of respective (code) operations:

     processing time =  operations on the CPU ÷ CPU speed + operations on the disk × disk latency + …
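A minimal sketch of this sum, with illustrative numbers of my own (not measurements from the article):

```python
# Sketch of the processing-time formula above; all numbers are
# illustrative assumptions, not measurements.
cpu_operations  = 2_000_000   # instructions executed
cpu_speed       = 1e9         # instructions per second
disk_operations = 10          # physical disk accesses
disk_latency    = 0.005       # seconds per access
net_operations  = 4           # network round trips
net_latency     = 0.001       # seconds per round trip

processing_time = (cpu_operations / cpu_speed
                   + disk_operations * disk_latency
                   + net_operations * net_latency)
print(round(processing_time, 3))  # 0.056 seconds per transaction
```

Note how the disk term dominates here: reducing the number of disk operations pays off far more than faster CPU code.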

The number of operations can be reduced by the techniques of performance-efficient programming: buffering, indexes, aggregates, compression, and data locality. On the other hand, these techniques come with a price tag, which can, if used in an inefficient way, turn into performance loss instead of gain. In this context, main memory plays a crucial role, because memory is required for these types of performance improvement. A lack of main memory can also lead to performance loss (paging, which can occur at the operating system level, but also in the form of buffer swaps etc.). But besides these effects, the data volume that is touched, moved, etc. by the code is the biggest enemy of processing time. This insight leads, for example, to the golden rules of SQL programming, as explained in detail in my performance optimization book.

Let us now have a look at the left side of the performance cloud that deals with the parallelization factor. The parallelization factor is given by:

     parallelization factor = no of CPUs (cores, threads) × horizontal scalability

Of course, the parallelization factor is limited by the number of CPU threads. Having more logical threads than CPU threads in parallel may make sense from the viewpoint of load distribution, but they will queue up at the physical CPU.
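A small sketch of the formula and this capping effect (the numbers and the helper function are my own assumptions):

```python
# Sketch of the parallelization factor: logical threads beyond the number
# of CPU threads do not raise it, because they queue at the physical CPU.
cpu_threads = 8
scalability = 0.9   # horizontal scalability factor, <= 1

def parallelization_factor(logical_threads):
    # Effective parallelism is capped by the physical CPU threads.
    return min(logical_threads, cpu_threads) * scalability

print(parallelization_factor(4))    # 3.6
print(parallelization_factor(32))   # 7.2, capped at 8 CPU threads
```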

The full number of CPU threads cannot be used if the software does not scale. Here, horizontal scalability is assumed to be measured by a factor that is, in the ideal case, equal to 1 (the software scales in an optimal way). A lower value means that certain factors reduce the scalability. Which factors reduce scalability? First, there is the utilization effect: if you use the CPU near 100%, processing time increases in a nonlinear way because of queuing effects. To avoid prolonged processing times, most administrators run systems at a maximum utilization of 80%. The second effect is the effort for parallelization itself, which of course comes at a certain cost. Stickiness (the effect that, in the case of stateful processing, a client process has to be routed to the server node where its session is handled) can also have a negative effect on scalability. But one of the main enemies of scalability is locking. Locking is absolutely necessary to ensure data integrity and consistency, but inefficient locking strategies lead to a decrease in scalability.
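The nonlinear utilization effect can be illustrated with the textbook M/M/1 queuing formula, response time = service time ÷ (1 − utilization). The model and the numbers are my own illustration; the article does not prescribe a particular queuing model:

```python
# Illustration of the utilization effect with the classic M/M/1 formula:
# response_time = service_time / (1 - utilization). Purely illustrative.
service_time = 0.010   # 10 ms of pure processing time per request

for utilization in (0.50, 0.80, 0.95):
    response_time = service_time / (1 - utilization)
    print(f"at {utilization:.0%} utilization: {response_time * 1000:.0f} ms")
# At 50% utilization the response time doubles to 20 ms, at 80% it
# reaches 50 ms, and at 95% it explodes to 200 ms, which is why the
# 80% rule of thumb mentioned above is common.
```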

Finally, let us spend some words on vertical scalability. Vertical scalability is the ability to distribute software over different servers in a vertical, layered way. Years ago, when hardware capacity was limited, this was a very valuable option, for example to run the database and the application on different servers. In these days of almost unlimited hardware power, this is no longer a big point. On the contrary, the performance of the complete system is limited by the slowest of the vertical software layers (if the database is slow, this cannot be compensated by a higher performance of the application layer).
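The point about the layered system can be reduced to a one-liner (layer names and throughput numbers are my own illustration):

```python
# Sketch of the vertical-scalability point: end-to-end throughput is
# limited by the slowest layer. Numbers are illustrative.
throughput = {
    "application server": 500,   # transactions per second
    "database":           200,
    "network":            400,
}
system_throughput = min(throughput.values())
print(system_throughput)  # 200: the database is the bottleneck
```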

I hope that it is helpful for you to see the different performance aspects in one picture. If you have comments or additional aspects, I am happy to read them.

Best regards, Thomas

Dr. Thomas Schneider

Development Architect – HANA Platform Applications

Thomas Schneider: SAP Performanceoptimierung – 7. Auflage

Thomas Schneider:  SAP Performance Optimization Guide 7th Edition

Thomas Schneider: SAP Business ByDesign Studio -Application Development
