Andy Greig

Breaking Down Performance

If you’re reading this, you have some interest in system performance.  Or application performance.  Or your hotrod’s performance.  Or your fishing skills.

We talk about performance. We “test” for performance and we strive for optimal performance.

But what is performance?

Those of us who practice performance consulting strive to create an accurate performance load and then analyze the results of our tests by considering two primary factors: response time, and the resource utilization of the systems that process that load.

So let’s break down these two aspects, load and performance, and look at what makes up each side of Performance Testing.

There was a time when we referred to “Performance” testing as Load testing.  Today, after 25 years in the biz, I’ve heard all kinds of terms that attempt to define what we do:

  • load testing
  • stress testing
  • scalability testing
  • and a dozen other terms

I just call it all Performance Testing.

You can say that Performance Testing is a specific type of multi-iteration user or system interaction: it exerts a load on a system while performing some specific set of activities.  OK, so we have that agreement, right?  But the essence of our tests is that we artificially create activities, in a non-live lab system, that we expect could “realistically” occur in the live system.

First Characteristic of Performance Testing

It’s about load, not functionality.

A key aspect of our tests is that they are multiple-iteration activities, representing one or several activities that a “system” performs for specific purposes.  But the load must occur for a known number of repetitions and for a known duration.

Technically, we expect correct functionality. We have to expect that the activities we perform work without functional issues.  Do we always get it? No. But Performance Testing is NOT functional testing.   Our goal is to measure accurate “performance” derived from the load we exert on the system.  It can’t be a random load. It has to be specific. It has to be repeatable.  And it has to represent some real-life situation or “point in time”.  But it can’t be failing because of some functional issue, or it won’t create the accurate load that we require.
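To make that concrete, here is a minimal sketch of a load driver with exactly those properties: a known number of repetitions, a known duration, and an immediate abort on any functional failure so the measured load stays accurate.  The endpoint URL and iteration count are invented for illustration.

```python
# Minimal sketch of a repeatable, fixed-iteration load driver.
# TARGET_URL and ITERATIONS are hypothetical, not a real system.
import time
import urllib.error
import urllib.request

TARGET_URL = "http://localhost:8080/orders"  # hypothetical endpoint
ITERATIONS = 500                             # known number of repetitions

def run_load():
    durations = []
    started = time.perf_counter()
    for i in range(ITERATIONS):
        t0 = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except urllib.error.URLError as err:
            # A functional failure means the run no longer exerts the
            # intended load, so abort rather than report bad numbers.
            raise RuntimeError(f"iteration {i} failed: {err}") from err
        durations.append(time.perf_counter() - t0)
    return durations, time.perf_counter() - started  # known duration

if __name__ == "__main__":
    durations, total = run_load()
    print(f"{ITERATIONS} iterations completed in {total:.1f}s")
```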

Second Characteristic of Performance Testing

The objective is to obtain time-based information that is relevant to humans.

There are two items here: time-based, and relevant to humans.

Time by itself is irrelevant as a measurement.  For what use is a time-based measurement, or more specifically a duration, without something to measure it against?  After all, time has to have something to measure itself against.  Modern atomic clocks measure time against the oscillation frequency of atoms such as caesium, a frequency proven to be constant.

In business, those that pay for our services and their systems to “perform” (there’s that word again) some activity measure us against the need for that activity to complete within some predetermined, time-based requirement.  Generically, we refer to this as “response time”. LOL.  How often do we find that these response time requirements are based on arbitrary expectations?  A good topic for a future blog.

Putting these two aspects together leads us to the two activities that comprise our work:

  1. Creating a business-based activity load that is
  2. Measurable by the response time to complete a specific number of those activities (see the sketch below).
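As a small sketch of that second activity, suppose the business requirement is that a given number of activities must complete within a given time.  With the per-activity durations collected by a driver like the one above, the check is only a few lines; the requirement figures here are invented for illustration.

```python
# Sketch: check a time-based business requirement against a measured run.
REQUIRED_COUNT = 500       # activities that must complete... (hypothetical)
REQUIRED_WITHIN_S = 120.0  # ...within this many seconds     (hypothetical)

def meets_requirement(durations, total_runtime):
    """True if enough activities completed within the required window."""
    return (len(durations) >= REQUIRED_COUNT
            and total_runtime <= REQUIRED_WITHIN_S)

# Usage, given the output of run_load() above:
#   durations, total = run_load()
#   print(meets_requirement(durations, total))
```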

And this brings us to a conclusion…

The purpose of Performance Testing is to obtain two key results:

  1. The response time of a planned set of activities.
  2. The consumption of system resources used to perform those activities (a sketch of capturing both follows).
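One way to capture both results in a single run is to sample resource utilization in the background while timing each activity.  The sketch below assumes the third-party psutil package; the stand-in activity and the one-second sampling interval are illustrative choices.

```python
# Sketch: per-activity response times plus coarse resource samples.
import statistics
import threading
import time

import psutil  # third-party: pip install psutil

def sample_resources(samples, stop_event, interval=1.0):
    """Append (cpu %, memory %) tuples until asked to stop."""
    while not stop_event.is_set():
        samples.append((psutil.cpu_percent(interval=None),
                        psutil.virtual_memory().percent))
        time.sleep(interval)

def measure(activity, iterations=100):
    samples, stop = [], threading.Event()
    sampler = threading.Thread(target=sample_resources, args=(samples, stop))
    sampler.start()
    durations = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        activity()  # the planned, repeatable activity
        durations.append(time.perf_counter() - t0)
    stop.set()
    sampler.join()
    return durations, samples

if __name__ == "__main__":
    # A sleep stands in for a real business activity here.
    durations, samples = measure(lambda: time.sleep(0.05))
    print(f"median response: {statistics.median(durations) * 1000:.1f} ms")
    if samples:
        print(f"peak CPU: {max(cpu for cpu, _ in samples):.0f}%")
```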

And that leaves us with our prime objective, and the true purpose of our discipline:  The analysis of the results of our performance test.

It is this analysis of the test results that must be the focus of our work.  This is why creating an accurate, real-life, purpose-based load is a requirement of our work, not its objective.  The purpose of our work is to analyze the results of accurate performance tests in order to determine the current system performance for specific activities.  From this we can determine the resource requirements, and can then move on to performance optimization through systematic breakdown of the system workload and the discovery of any throughput restrictions.

Check out this PDF for further reading on SAP Performance Testing.
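As a small illustration of that analysis step, here is a sketch that reduces a run’s per-activity durations into the summary figures we typically reason about.  The percentile choice is illustrative, not a standard.

```python
# Sketch: reduce per-activity durations (in seconds) to summary figures.
import statistics

def summarize(durations, total_runtime):
    ordered = sorted(durations)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # simple nearest-rank p95
    return {
        "activities": len(durations),
        "throughput_per_s": len(durations) / total_runtime,
        "median_ms": statistics.median(durations) * 1000,
        "p95_ms": p95 * 1000,
        "max_ms": max(durations) * 1000,
    }

# A p95 that drifts far above the median as load rises is one classic
# sign of a throughput restriction (queueing) somewhere in the system.
```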

In another blog, I’ll examine how to take the results of a performance test and break out the system components that make up the overall response time, so that we can look at optimizing those components to improve response time, and where single-iteration versus multi-iteration load creates issues that are detectable by the software or systems engineering discipline.  This is our value proposition in the IT marketplace: applied engineering on the computer test track.

For more details about SAP Global Testing contact rod.pobre@sap.com

1 Comment

Former Member:
Very interesting blog. I’d really like to read the next one on arbitrary response time expectations.

I recently had a customer flagging up poor response times for a dialog report. But the data set they wanted to select included over 1 million DB entries!

The program in question was designed to return data for human analysis, so it’s hard to understand their expectations.