In order to improve a process, you first need to know what needs to be fixed. And to know what needs to be fixed, you need to measure what is broken. Occasionally this works the other way round: unless something is measured, it is hard to say the process is broken and needs fixing. As a process owner, you need to ask what is critical to measure and improve, and what the risks to the business and its functionality are if these focal points are left unresolved.

 

I am thinking out loud about the effects of poorly defined test data on overall quality. As a data and QA practitioner, I am always thinking about how quality can be measured and enhanced, and how it interacts with data, specifically test data.

 

Part of my assessment process is asking other QA personnel and testers about their experiences with discovering and improving poor quality. When asked to describe what poorly defined test data means to them, the responses I get usually vary between testability (the inability to test) and functional failure. This is true: how do you test if you do not have the underlying data to supply your tests? And yes, given the lack of testing, failure of function in production systems is all but certain. However, bad test data goes beyond testability. I am constantly surprised to learn how little effort goes into baselining the effort and cost of creating and managing test data, given the importance of such metrics. Note that I am not talking about actual testing here; I am referring to the data that is needed to test various solutions. Let me demonstrate with an example how test data impacts quality, with a hard dollar cost.

Check out the following illustration. It shows the relationship between defect origin, defect discovery, and the overall cost to fix a defect within the SDLC. I sum it up with three main observations:

[Figure: defect origin, defect discovery, and relative cost to fix, by SDLC phase. Source: Software Engineering Institute]

 

 

1) The majority of defects (50%) originate in the development phase.

2) A relatively small share of defects (20%) is detected during unit testing.

3) It is 50 times more expensive to fix a defect once it reaches production than it is in the development system.

 

Now, there might be several contributing factors to this trend, such as a flawed release strategy, poorly defined QA processes, or a gap between requirements design and test execution. By an overwhelming majority, however, the culprit is a lack of testing at the unit test case level. Given that the majority of defects originate at the development stage, I can deduce that if I can also catch the majority of defects at the unit test level, I can bring down the cost of the overall testing effort. "Test it early, and test it often" is not merely a catchphrase; there is truth behind it.
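To make that deduction concrete, here is a minimal back-of-the-envelope sketch in Python. The relative phase costs echo the 50x figure cited above, but the defect counts in the two scenarios are assumptions purely for illustration, not data from the SEI chart.

# Relative cost to fix a defect, by the phase in which it is found.
# The 50x production figure comes from the chart above; the QA value
# of 10 is an assumption for illustration.
COST_BY_PHASE = {"development": 1, "qa": 10, "production": 50}

def total_cost(defects_found):
    # Sum the relative cost of fixing defects, by the phase they were found in.
    return sum(COST_BY_PHASE[phase] * count for phase, count in defects_found.items())

# Scenario A: weak unit testing; most of 100 defects slip past development.
late_detection = {"development": 20, "qa": 50, "production": 30}

# Scenario B: the same 100 defects, but most are caught at the unit test level.
early_detection = {"development": 70, "qa": 25, "production": 5}

print(total_cost(late_detection))   # 20*1 + 50*10 + 30*50 = 2020 cost units
print(total_cost(early_detection))  # 70*1 + 25*10 + 5*50  =  570 cost units

Shifting detection left in this toy model cuts the cost of fixing the same 100 defects by roughly a factor of 3.5.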

Developers are notorious for under-testing (or not testing at all) in their sandboxes. So yes, that is part of the problem. Wearing my developer hat, I can think of the following questions when testing my own code at the unit test level:

i. What is the function of this code? What is it supposed to do? What is it not supposed to do?

ii. What are the external dependencies of this code?

iii. How do I test it?

iv. What kind of test data is needed to test it?

 

But how can I achieve objectives i through iii above if I do not know how to achieve iv? I cannot if there is no viable data in DEV to test with (some of this is process related, of course, but that can be fixed). As a result, there is always a request somewhere to move the code up to QAS so it can be tested.
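One way to lessen that dependence is for unit tests to fabricate the data they need, rather than rely on whatever happens to exist in the DEV client. Here is a minimal sketch in Python; the function under test and its record layout are hypothetical, invented only to show the pattern.

import unittest

def apply_discount(order, rate):
    # Hypothetical business function under test: discount an order total.
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(order["total"] * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def setUp(self):
        # The test supplies its own data (answering question iv), which in
        # turn lets it answer questions i through iii.
        self.order = {"id": "4711", "total": 100.00}

    def test_normal_discount(self):
        self.assertEqual(apply_discount(self.order, 0.1), 90.00)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(self.order, 1.5)

if __name__ == "__main__":
    unittest.main()

This does not replace testing against realistic client data, but it catches the cheap-to-fix defects at the point where they originate.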

 

I call this the hotspot of poor quality; this is where quality continually escapes from the application lifecycle.

 

A single three-tier system landscape in the SAP maintenance world, a combination of development (DEV), quality (QAS), and production (PROD), can have as many as five clients (one each for PROD and QAS, and three for DEV), and some projects I have seen multiply this landscape setting many times over! It is quite a hassle to regularly service all these clients with reliable, quality test data. In many cases this leads to what I feel is an often much neglected DEV system. These DEV systems have not been refreshed in a very long time; in some instances, they have not been refreshed since go-live. The quality of test data in such DEV systems is anything but pristine. I do not blame anyone who does not want to test in such an out-of-whack system.

 

So there is an overemphasis on taking a defect or a test case to QAS so it can be tested. There are even instances where QA conducts testing directly within productive systems, because there is no trust in the development system's data consistency and data integrity. However, by the time a fix is ready to be released into PROD, quality has already been diminished. Bugs are lurking that could easily have been found and fixed during unit testing, had a reliable development system been available. If you are finding defects in QAS and PROD out of proportion to what you are finding in development systems, that is a good indicator that something is amiss and you need to revisit your QA and test data strategies. In fact, any defect found in production should trigger an extensive root cause analysis to prevent further occurrences of similar problems.
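As a rough health check along those lines, compare where defects are being discovered against where you would want them discovered. A small sketch follows; the counts and the threshold are assumptions for illustration only.

def defect_shares(counts):
    # Return each environment's share of the total defects found.
    total = sum(counts.values())
    return {env: n / total for env, n in counts.items()}

# Hypothetical quarter of defect discoveries per environment.
found = {"DEV": 10, "QAS": 60, "PROD": 30}

shares = defect_shares(found)
# If most defects surface downstream of DEV, revisit the test data strategy.
if shares["DEV"] < shares["QAS"] + shares["PROD"]:
    print("Warning: defect discovery is skewed toward QAS/PROD "
          f"({shares['QAS']:.0%} + {shares['PROD']:.0%} vs {shares['DEV']:.0%}).")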

 

I look forward to seeing readers' views on how they manage their unit testing and supply the test data required in development systems without compromising the integrity of their golden clients.