andreas_goeb

Note: I originally published the following post on my company's blog on software quality. Since it might be interesting for developers in the SAP BW realm, I am re-publishing it here (slightly adapted).


As my colleague Fabian Streitel explained in another post, a combination of change detection and execution logging can substantially increase transparency regarding which recent changes of a software system have actually been covered by the testing process. I will not repeat all the details of the Test Gap Analysis approach here, but instead just summarize the core idea: Untested new or changed code is much more likely to contain bugs than other parts of a software system. Therefore it makes sense to use information about code changes and code execution during testing in order to identify those changed but untested areas.
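
To make the core idea concrete, here is a minimal sketch in Python. It is purely illustrative and not the actual tooling; the block names and timestamps are made up. Given, for each processing block, the time of its last change since the baseline and the time of its last execution during testing, the test gaps are the blocks that were never executed after their most recent change.

def find_test_gaps(last_change, last_execution):
    """Return all blocks changed since the baseline that have not been
    executed after their most recent change."""
    gaps = set()
    for block, changed_at in last_change.items():
        executed_at = last_execution.get(block)
        if executed_at is None or executed_at < changed_at:
            gaps.add(block)
    return gaps

# Hypothetical example: one method was re-tested after its change, one was not.
last_change = {"ZCL_ORDER=>CALC_TOTAL": "2016-03-01", "ZCL_ORDER=>APPLY_DISCOUNT": "2016-03-10"}
last_execution = {"ZCL_ORDER=>CALC_TOTAL": "2016-03-05"}
print(find_test_gaps(last_change, last_execution))  # {'ZCL_ORDER=>APPLY_DISCOUNT'}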

Several times we have heard from customers that they like the idea but are unsure whether it applies to their specific project. In most of these cases, the argument was that the project mainly deals with generated artifacts rather than hand-written code, ranging from Python snippets generated from UML-like models and stored in a proprietary database schema to SAP BW applications containing a variety of artifact types beyond ABAP code. Even under these circumstances, Test Gap Analysis is a valuable tool and can provide insight into what would otherwise remain hidden from the team. In the following, I explain how we applied Test Gap Analysis in an SAP BW environment.

The Starting Point

As you all know, in SAP BW a large amount of development is performed graphically in the BW Workbench. In cases where custom behavior is required, routines can be attached at well-defined points. As a consequence, it is very hard to track changes in a BW application. Of course, there is metadata attached to every element containing the relevant information, but seeing all changes that have occurred since a given point in time (e.g., the current production release) is not a trivial task. The same holds for execution information. Since we were already using Test Gap Analysis for transactional ABAP systems, we reached out to a team developing in BW and showed them some results for their own custom ABAP code.


Figure 1: Test gaps in manually maintained ABAP code only

The picture shows all ABAP code manually maintained by the development team. Each rectangle with a white border corresponds to a package, and the smaller rectangles within correspond to processing blocks, i.e., methods, form routines, function modules, and the like. As explained in Fabian's post, grey means the block is unchanged, while the colors denote changed blocks: the green ones have been executed after their most recent change, the orange ones have untested modifications with respect to the baseline, and the red ones are new and untested. Hovering over a rectangle shows a tooltip with all the information needed to identify which code block it represents; I just did not include one in the screenshot for confidentiality reasons.
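
The coloring rules can be summarized in a small Python sketch (again illustrative only; the flags and timestamps are assumed to come from change detection and execution logging):

def block_color(is_new, changed_at, executed_at):
    """changed_at is None for blocks untouched since the baseline;
    executed_at is None for blocks never executed during testing."""
    if changed_at is None:
        return "grey"                         # unchanged since the baseline
    if executed_at is not None and executed_at >= changed_at:
        return "green"                        # change covered by a later test execution
    return "red" if is_new else "orange"      # untested: new vs. modified

print(block_color(is_new=False, changed_at="2016-03-10", executed_at=None))  # orange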

Moving Forward

As expected, the feedback was that most of the development effort was not represented in the picture, since development mainly happened in Queries, Transformations, and DTPs rather than plain ABAP code. Another insight, however, was that all of these artifacts are transformed into executable ABAP code. Therefore, we analyzed the code generated from them, keeping track of the names of the objects it was generated from. The resulting picture was (of course) much larger.
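
Concretely, keeping track of the original object simply means tagging every processing block of a generated program with the name of the BW object it was generated from. A hypothetical sketch (the actual mapping comes from BW metadata; here it is assumed as a plain dictionary, and the field names are made up):

def attribute_blocks(blocks, program_to_bw_object):
    """Tag each processing block of a generated program with the BW object
    (Query, Transformation, DTP, ...) it was generated from; blocks from
    manually maintained programs keep their program name."""
    for block in blocks:
        block["bw_object"] = program_to_bw_object.get(block["program"], block["program"])
    return blocks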


Figure 2: Code generated from DTPs (left), Transformations (middle top), Queries (middle bottom), and manually maintained code (far right)

Including all the programs generated from BW objects, the entire content of the first picture shrinks down to the right column, i.e., it makes up only a fraction of the analyzed code. Two observations stand out: first, ABAP programs generated from BW objects tend to be quite large and contain many methods; second, not every generated method is executed when the corresponding BW object is executed. To make the output more comprehensible, we therefore decided to draw only one rectangle per BW object and mark it as changed (or executed) if at least one of its generated methods has been changed (or executed). This way, the granularity of the result is much closer to what developers expect. In addition, we shrink the rectangles representing these generated programs by a configurable factor. Since the absolute size of these programs is not comparable to that of manually maintained code anyway, the scaling factor can be adjusted to achieve an easier-to-navigate visual representation.
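
A sketch of this aggregation and scaling step, building on the attributed blocks from the previous sketch (illustrative only; the size field is assumed to be lines of code, and the factor 0.1 is an arbitrary example value):

from collections import defaultdict

def aggregate_bw_objects(blocks, scale_factor=0.1):
    """Collapse all generated methods of one BW object into a single rectangle
    that counts as changed (or executed) if at least one method is, and shrink
    its area by a configurable factor."""
    objects = defaultdict(lambda: {"size": 0, "changed": False, "executed": False})
    for block in blocks:
        obj = objects[block["bw_object"]]
        obj["size"] += block["size"]
        obj["changed"] = obj["changed"] or block["changed"]
        obj["executed"] = obj["executed"] or block["executed"]
    for obj in objects.values():
        obj["size"] = max(1, round(obj["size"] * scale_factor))
    return dict(objects)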


Figure 3: Aggregated and scaled view for generated code (left to middle) and manually maintained code (right)

The Result

With this visualization at hand, the teams can now directly see which parts of the application have changed since the last release, focus their test efforts accordingly, and monitor test coverage over time. This helps increase transparency and provides timely feedback on the effectiveness of the test suite in terms of change coverage.
