Somebody recently claimed that a load test was essentially a performance test. While one can argue that load tests are more often than not too costly, there is no denying that they are very helpful, with a scope well beyond performance testing alone. Over the years, a number of myths and half-truths about load testing and performance have emerged that I would like to comment on.

Myth number 1: Load tests equal performance tests
While it is certainly true that performance KPIs collected under load can provide information about the performance behavior of a system, this is far from the only purpose of load tests. They also help to analyze the behavior of the system in the following areas (a minimal simulation sketch follows the list):
  • Verifying that the software and the system infrastructure scale, for example by checking load-balancing mechanisms
  • Testing whether work packages can be processed in parallel
  • Checking that locks are released in time
  • Tuning and parameterizing the system
  • Analyzing infrastructure bottlenecks, by making the interfaces part of the testing procedure
  • Testing whether the strategies for failover, backup, and disaster recovery are sufficient
  • Robustness: what happens if an application server is suddenly cut off?
  • Stability: how long can the system run under high load (particularly in the Java world, where memory leaks may occur)? A related question is how performance degrades once the system is overloaded.
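To make these points concrete, here is a minimal sketch of what simulating concurrent users boils down to, written against the Python standard library. The endpoint URL, the user count, and the request count are hypothetical placeholders; a real load test would drive realistic business scenarios through a dedicated tool, but the mechanics of firing parallel requests and recording end-to-end times are the same.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://testsystem.example.com/ping"  # hypothetical test endpoint
USERS = 50               # number of simulated concurrent users
REQUESTS_PER_USER = 20   # requests each simulated user fires

def simulated_user(user_id: int) -> list:
    """Fire a series of requests and record the end-to-end response times."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(ENDPOINT, timeout=30) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    per_user_timings = list(pool.map(simulated_user, range(USERS)))

all_times = [t for user in per_user_timings for t in user]
print(f"requests: {len(all_times)}, "
      f"average response time: {sum(all_times) / len(all_times):.3f}s")
```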
Myth number 2: Response times collected during a test run show scalability
Probably all load test tools collect response times during the test cycle. That is a useful thing to do, especially when response time was defined as a KPI. However, response times do not show scalability, which would require knowing, for example, whether CPU times increase only linearly with throughput. Response time depends on utilization: it is made up of processing times, network times, and wait times. To analyze performance, though, you need to know the individual processing times, and in this area many load test tools fail to provide easy-to-understand analyses. Response times alone do not show scalability; the small calculation below illustrates why.
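Assume, purely for illustration, that the system behaves like a textbook M/M/1 queue, where the mean response time is the service time divided by (1 - utilization). The processing (service) time, the quantity that actually reflects scalability, stays constant, yet the measured response time balloons as utilization rises:

```python
# Textbook M/M/1 queue: response_time = service_time / (1 - utilization).
# The processing time stays constant, but the measured response time does not.
SERVICE_TIME = 0.2  # seconds of pure processing per request (assumed constant)

for utilization in (0.10, 0.50, 0.80, 0.90, 0.95):
    response_time = SERVICE_TIME / (1 - utilization)
    print(f"utilization {utilization:4.0%}: "
          f"processing {SERVICE_TIME:.2f}s, response {response_time:.2f}s")
```

A tool that only reports the right-hand column tells you how loaded the system was, not whether the software scales.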
Myth number 3: Load tests provide reliable performance information
Whether or not a load test provides reliable performance information depends mostly on the set-up of the test cases. More often than not, the test cases do not reflect meaningful business scenarios but are somewhat hypothetical. Not every test needs to be 100% true to reality, but it should at least be realistic. An example: some years ago, a customer set up a large system and ran a few FI business processes for a couple of hours until system performance broke down. Note that the test was not meant to break the system! The system was maxed out because the main business process was incompletely modeled: the customer created millions of financial documents without ever balancing them, and part of the test was that all simulated users displayed all open, unbalanced items. In the real world, a company with millions of open postings would be facing thousands of unpaid bills and would go bankrupt. When test cases are set up like this, the performance KPIs collected during the run provide no meaningful performance information.
Myth number 4: The right load test tool will do everything for me
Even the cleverest tool on the market cannot do your job for you. You will still have to handle the elementary tasks of
  • Setting up meaningful KPIs (other than response time alone, see myth 2), for example throughput and concurrent user numbers
  • Creating meaningful, representative and repeatable test cases
  • Creating sufficient and realistic master data
  • Finding and setting up a proper test system
  • Analyzing the results
These tasks actually take about 70-80% of the time; only 10-20% is spent on creating the scripts in the tool…
Myth number 5: Load tests are the only means to validate sizing and scalability
I have no idea where this myth originated. It is certainly true that in many situations it is helpful to validate a sizing with a load test, especially when the sizing involved many assumptions; but it is not the only way to validate sizing.

With the SAP GoingLive Check or the GoingLive Functional Upgrade Check, for example, SAP Support can check the sizing of your hardware. With EarlyWatch, system performance data is collected automatically.

A third way to validate a sizing is simply to monitor the utilization of the production system yourself and compare it with the projected utilization; a minimal sketch of such a comparison follows below. A fourth is to perform single-user performance measurements in a test system and compare them with the projected values. So the answer to this myth is: not only ... but also.
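Here is what the monitoring option can look like in its simplest form. All load levels, percentages, and the 10% tolerance are made-up values for the example:

```python
# Compare measured CPU utilization against the values projected by the sizing.
projected = {"1000 users": 0.45, "2000 users": 0.65}  # from the sizing exercise
measured = {"1000 users": 0.52, "2000 users": 0.78}   # from production monitoring

for load, expected in projected.items():
    actual = measured[load]
    deviation = (actual - expected) / expected
    verdict = "OK" if abs(deviation) <= 0.10 else "investigate"
    print(f"{load}: projected {expected:.0%}, measured {actual:.0%} "
          f"({deviation:+.0%}) -> {verdict}")
```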

By way of conclusion
An often overlooked but very important benefit of load testing is that you can detect functional issues that you would not uncover in single-user tests. These errors may not always be in the business logic, but the following have certainly been among the top three functional errors detected in previous years (a sketch of the lost-update problem follows the list):
  • Concurrent updates lead to inconsistent data
  • Missing locks cause inconsistent data as well
  • Queue and buffer overflows: if the programming is too lax regarding the buffering of data, data may get lost when too many requests overflow the buffer
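To illustrate the locking problem, here is a self-contained sketch of a lost update in plain Python threads. Without a lock around the read-modify-write, concurrent workers can overwrite each other's postings; with the lock, every posting is recorded. (Whether losses actually show up in a given unlocked run depends on thread scheduling. Real SAP applications would use enqueue locks rather than thread locks, but the failure mode is the same.)

```python
import threading

def run(with_lock: bool) -> int:
    balance = 0
    lock = threading.Lock()

    def post_documents(n: int) -> None:
        nonlocal balance
        for _ in range(n):
            if with_lock:
                with lock:
                    balance += 1        # protected read-modify-write
            else:
                current = balance       # unprotected read ...
                balance = current + 1   # ... then write: updates can be lost

    workers = [threading.Thread(target=post_documents, args=(100_000,))
               for _ in range(4)]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()
    return balance

print("without lock:", run(False), "of 400000 postings recorded")
print("with lock:   ", run(True), "of 400000 postings recorded")
```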