How to filter ATC findings to detect only NEW findings
Did you ever think that it would be nice to have an ATC check variant that shows you only the new findings, i.e. those that come from the changes you just applied to existing “legacy” code?
This is especially useful for SAP customers that have (or plan to have) ATC checks configured to run during transport release. In a typical SAP customer situation, when a developer performs a small bugfix on an existing, old Z-program, he (or she) does not want to correct all the old quality problems that program has. And the business user does not want to test all functions of the program when she (or he) asked only for one little bugfix or change in a specific part of the program.
Wouldn’t it be nice to have a filter that lets us see only those findings that did not already exist in previous versions of the program?
I have heard that SAP is working on such a feature for the ATC. But I happened to stumble over an enhancement spot where I could implement it myself relatively easily (there is one bigger problem for a specific case, though; see the end of this blog).
Update 2018: here is a description of SAP's implementation: https://blogs.sap.com/2016/12/13/remote-code-analysis-in-atc-working-with-baseline-to-suppress-findings-in-old-legacy-code/
The suitable spot is an implicit enhancement in class CL_SATC_CI_ADAPTER, at the end of method if_Satc_Ci_Adapter~analyze_Objects in the local class.
In that position, I inserted a call to a new class:
ENHANCEMENT 1 ZS_ATC_FILTER_FINDINGS.    "active version
  zcl_s_atc_filter_findings=>filter( EXPORTING i_or_inspection = anonymous_inspection
                                     CHANGING  c_it_findings   = findings ).
ENDENHANCEMENT.
This is how the filter class is defined:
CLASS zcl_s_atc_filter_findings DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.
    CLASS-METHODS:
      filter
        IMPORTING i_or_inspection TYPE REF TO cl_ci_inspection
        CHANGING  c_it_findings   TYPE scit_rest.
  PROTECTED SECTION.
  PRIVATE SECTION.
    CLASS-METHODS:
      is_finding_new
        IMPORTING i_wa_f          TYPE scir_rest
        RETURNING VALUE(r_result) TYPE abap_bool
        RAISING   cx_satc_failure,
      init_comparison_data
        IMPORTING i_it_findings TYPE scit_rest
        RAISING   cx_satc_failure,
      get_consolidated_names
        IMPORTING i_it_findings TYPE scit_rest
        RETURNING VALUE(r)      TYPE if_satc_result_access_filter=>ty_object_names,
      filter_previous_findings
        CHANGING c_it_findings TYPE scit_rest
        RAISING  cx_satc_failure.

    CLASS-DATA s_comparison_findings TYPE scit_rest.
ENDCLASS.
In the filter() method, I check the variant name. For Z_DELTA, I forward to method filter_previous_findings() to do the actual work.
METHOD filter.
  CHECK c_it_findings IS NOT INITIAL.
  TRY.
      IF i_or_inspection->chkv->chkvinf-checkvname = 'Z_DELTA'.
        filter_previous_findings( CHANGING c_it_findings = c_it_findings ).
      ENDIF.
    CATCH cx_satc_failure INTO DATA(cx).
      DATA(exc_text) = cx->get_text( ).
      MESSAGE exc_text TYPE 'E'.
  ENDTRY.
ENDMETHOD.
Method filter_previous_findings() does what its name says:
METHOD filter_previous_findings.
  init_comparison_data( i_it_findings = c_it_findings ).
  LOOP AT c_it_findings ASSIGNING FIELD-SYMBOL(<f>).
    IF NOT is_finding_new( <f> ).
      DELETE c_it_findings USING KEY loop_key.
    ENDIF.
  ENDLOOP.
ENDMETHOD.
But how can we compare with the check results of a previous version of the program? This is relatively easy if we regularly run mass checks (on all customer code) on the quality/test system and replicate these check results to the development system. In that case, we can access those findings with the ATC API classes.
Method init_comparison_data() is the key element: it selects the newest complete, central check run from table SATC_AC_RESULTH, using a pattern for the title. You will have to adapt this pattern to your system name, or to whatever you configured as the name of the check run on your quality/test system.
METHOD init_comparison_data.
  DATA(object_names) = get_consolidated_names( i_it_findings ).
  CHECK object_names IS NOT INITIAL.

  DATA(or_factory) = NEW cl_satc_api_factory( ).
  DATA(or_filter)  = or_factory->create_result_access_filter( i_object_names = object_names ).

  SELECT display_id FROM satc_ac_resulth
    WHERE is_central_run = 'X'
      AND is_complete    = 'X'
      AND title LIKE 'D1Q:%'    " adapt this to the pattern of your mass test run name
    ORDER BY scheduled_on_ts DESCENDING
    INTO @DATA(display_id)
    UP TO 1 ROWS.
  ENDSELECT.
  CHECK sy-subrc = 0.

  or_factory->create_result_access( i_result_id = display_id )->get_findings(
    EXPORTING i_filter   = or_filter
    IMPORTING e_findings = s_comparison_findings ).
ENDMETHOD.
In the above method, we do not want to load all findings of the mass run (usually an enormous number, depending on your system), so we prepare a filter from the objects in the current findings, using method get_consolidated_names().
METHOD get_consolidated_names.
  DATA wa LIKE LINE OF r.
  wa = VALUE #( sign = 'I' option = 'EQ' ).
  LOOP AT i_it_findings ASSIGNING FIELD-SYMBOL(<f>).
    wa-low = <f>-objname.
    APPEND wa TO r.
  ENDLOOP.
  SORT r.
  DELETE ADJACENT DUPLICATES FROM r.
ENDMETHOD.
And here is the method for the actual comparison:
METHOD is_finding_new.
  READ TABLE s_comparison_findings
       WITH KEY test     = i_wa_f-test       " test class
                code     = i_wa_f-code
                objtype  = i_wa_f-objtype
                objname  = i_wa_f-objname
                " sub object (where the finding was actually detected)
                sobjtype = i_wa_f-sobjtype
                sobjname = i_wa_f-sobjname
                param1   = i_wa_f-param1
                " param2 seems to contain technical and sometimes
                " language-dependent info, so we ignore it
       TRANSPORTING NO FIELDS.
  r_result = xsdbool( sy-subrc <> 0 ).
ENDMETHOD.
That’s it!
Unfortunately, there is one little loophole: if you use ATC as part of the transport release, and you either grant limited exemptions for ATC findings (which expire at a certain date) or allow “emergency transports” to bypass the checks in some way, then you accumulate “new dirt” in your test/quality system. You will never notice this, because the mechanism proposed here cannot distinguish the “new dirt” from the old, “accepted” dirt.
A simple solution for this is to keep the comparison run from your quality/test system fixed, and not replace it with newer runs.
However, this has a disadvantage if you ever want to switch on additional checks in your check variant. In that case, all findings of the new checks will appear as “new”, even if they already existed in old coding.
To overcome this, we implemented a “dirt list” database table, where we store all unresolved findings that were transported to the quality/test system (for whatever reasons). If there is sufficient interest, I will explain this in another blog.
Hi Edo,
thank you for this very interesting blog.
Could you please explain how you implemented the "dirt list" and how to work with it?
Thank you in advance.
Hi Alexander,
as the interest in this topic is not so big, and SAP still has something like this on its roadmap, I do not intend to describe the solution in detail. But I can give you some hints.
I created a Z-Table with the key fields:
and the further fields:
This table is filled by a method zcl_s_atc_filter_findings=>store_as_dirt(),
which in turn is called by Z-code executed during transport release (see my other blog, http://scn.sap.com/community/abap/blog/2016/04/20/how-to-perform-an-atc-check-automatically-during-transport-task-release),
based on the output of method IF_SATC_RESULT_ACCESS~GET_FINDINGS() that fulfils the criteria:
Before the loop in filter_previous_findings(), I determine the "known dirt" in the findings by comparison with the Z-table. Then after the delta filtering loop, I add them again (and delete duplicates).
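To make this a bit more concrete, here is a rough sketch of that flow. The table name ZATC_DIRT_LIST and its field names are only placeholders for illustration, not our actual table, and the real selection criteria are omitted:
METHOD filter_previous_findings.
  init_comparison_data( i_it_findings = c_it_findings ).

  " Load the dirt list (ZATC_DIRT_LIST is a placeholder name, not the real table).
  SELECT * FROM zatc_dirt_list INTO TABLE @DATA(dirt_list).

  " Remember those current findings that are already recorded as known dirt.
  DATA known_dirt TYPE scit_rest.
  LOOP AT c_it_findings ASSIGNING FIELD-SYMBOL(<f>).
    READ TABLE dirt_list TRANSPORTING NO FIELDS
         WITH KEY test     = <f>-test
                  code     = <f>-code
                  objtype  = <f>-objtype
                  objname  = <f>-objname
                  sobjtype = <f>-sobjtype
                  sobjname = <f>-sobjname.
    IF sy-subrc = 0.
      APPEND <f> TO known_dirt.
    ENDIF.
  ENDLOOP.

  " The delta filtering loop as shown in the blog above.
  LOOP AT c_it_findings ASSIGNING <f>.
    IF NOT is_finding_new( <f> ).
      DELETE c_it_findings USING KEY loop_key.
    ENDIF.
  ENDLOOP.

  " Re-add the known dirt so that it is reported again, then remove duplicates.
  APPEND LINES OF known_dirt TO c_it_findings.
  SORT c_it_findings.
  DELETE ADJACENT DUPLICATES FROM c_it_findings COMPARING ALL FIELDS.
ENDMETHOD.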
I hope that points in the right direction for you.
Best regards,
Edo
Thank you Edo. Nice job. I copy-pasted your code but had many compilation issues because of the special characters (single and double quotes and the minus sign). In case other people copy your code, it would be nice if you could replace them all. Thanks a lot 🙂
Hi Sandro,
thanks. I just updated the blog with a reference to the SAP implementation (named "baseline").
I also tried correcting the quotes, but I did not find an easy way to do it. In the visual editor I changed them, but they stayed the same. In the textual editor, the code is embedded in a huge amount of HTML formatting, which I did not dare to touch (and I could not find the quotes there).
Best regards,
Edo
Thank you for the effort 😉
Hi Edo,
Thanks for the detailed blog on the custom implementation; it helped me set it up easily.
Question: in the READ statement below, field PARAM1 always differs when compared to the quality results collected. How do we then ensure that only violations from new changes are displayed? Do we have any other parameter besides param1?
READ TABLE s_comparison_findings
     WITH KEY test     = i_wa_f-test       " test class
              code     = i_wa_f-code
              objtype  = i_wa_f-objtype
              objname  = i_wa_f-objname
              " sub object (where the finding was actually detected)
              sobjtype = i_wa_f-sobjtype
              sobjname = i_wa_f-sobjname
              param1   = i_wa_f-param1
              " param2 seems to contain technical and sometimes
              " language-dependent info, so we ignore it
     TRANSPORTING NO FIELDS.
Hi Vinay,
I looked into our current code, and there is a small improvement in that place. I assume that solves your problem.
I wrote a new method get_short_and_unique_param1() which maps the original param1 to a “short and unique” version, taking into account various special cases that we encountered. This method has to be called at the end of init_comparison_data() to modify s_comparison_findings-param1, and again at the beginning of is_finding_new() to modify the currently checked param1. A rough sketch of the idea is shown below.
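The following is only a rough sketch, not the productive implementation: the concrete normalization rules (condensing, masking numbers, truncating) are examples and would have to be adapted to the special cases you actually observe in your param1 values.
" Sketch only: these normalization rules are illustrative examples.
CLASS-METHODS get_short_and_unique_param1
  IMPORTING i_param1        TYPE csequence
  RETURNING VALUE(r_result) TYPE string.

METHOD get_short_and_unique_param1.
  r_result = i_param1.
  CONDENSE r_result.                                              " remove surplus blanks
  REPLACE ALL OCCURRENCES OF REGEX '\d+' IN r_result WITH '#'.    " mask numbers such as line counters
  r_result = to_upper( r_result ).                                " ignore case differences
  IF strlen( r_result ) > 40.
    r_result = r_result(40).                                      " keep it short
  ENDIF.
ENDMETHOD.

" At the end of init_comparison_data():
LOOP AT s_comparison_findings ASSIGNING FIELD-SYMBOL(<c>).
  <c>-param1 = get_short_and_unique_param1( <c>-param1 ).
ENDLOOP.

" At the beginning of is_finding_new(), work with a local copy and use it
" instead of i_wa_f-param1 in the READ TABLE ... WITH KEY:
DATA(param1_short) = get_short_and_unique_param1( i_wa_f-param1 ).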
Ah, and I see another improvement: I found out that there are cases where the numbering of the method includes differs between systems. So I created a local class that connects to the reference system and builds a mapping table.
I guess this reinforces the point: it is always best to use the SAP standard mechanism for something like this (i.e. the baseline, once you are on a release where it is available ...).
Best regards,
Edo
Thanks Edo. Yes, for the numbering of the method includes I already use a remote call to find the method name from the include, rather than comparing the include names.