Technical Articles
Optimizing OpenSQL replacement services
Introduction
In this blog post I want to show some optimizations for the OpenSQL replacement service.
OpenSQL replacement service
The OpenSQL replacement service replaces tables in OpenSQL statements with fake tables at runtime. The service is quite simple to consume: in ABAP Unit, we can use one of the setup methods to start the replacement and one of the teardown methods to stop it. Listing 1 shows a replacement of table scarr with a fake table named zairline_fake.
CLASS replacement_service_demo DEFINITION FOR TESTING DURATION SHORT
  RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    CLASS-METHODS class_setup.
    METHODS read_airlines_from_fake FOR TESTING.
    CLASS-METHODS class_teardown.
ENDCLASS.

CLASS replacement_service_demo IMPLEMENTATION.
  METHOD class_setup.
    DATA airline_stub TYPE zairline_fake.

    " Redirect all accesses to SCARR to the fake table
    cl_osql_replace=>activate_replacement( replacement_table = VALUE #(
      ( source = 'SCARR' target = 'ZAIRLINE_FAKE' ) ) ).

    DELETE FROM zairline_fake.
    airline_stub = VALUE #( carrid = 'TG' carrname = 'Thai Airways' currcode = 'THB' ).
    INSERT zairline_fake FROM airline_stub.
    COMMIT WORK AND WAIT.
  ENDMETHOD.

  METHOD read_airlines_from_fake.
    DATA(expected_airlines) = VALUE scarr_tab(
      ( carrid = 'TG' carrname = 'Thai Airways' currcode = 'THB' ) ).

    SELECT * FROM scarr INTO TABLE @DATA(found_airlines).

    cl_abap_unit_assert=>assert_equals( exp = expected_airlines
                                        act = found_airlines ).
  ENDMETHOD.

  METHOD class_teardown.
    " Stop the replacement by calling with empty input parameters
    cl_osql_replace=>activate_replacement( ).
  ENDMETHOD.
ENDCLASS.
Listing 1: usage of the OpenSQL replacement service
Now the OpenSQL replacement service ensures that SELECT * FROM scarr is replaced at runtime by SELECT * FROM zairline_fake.
Why use it?
The OpenSQL replacement service solves a big problem. After copies from production to development systems, I saw many of my unit tests failing and had to repair them. The reason was overwritten test data. With the replacement service, this problem is history.
But filling the fake tables with test data was very time-consuming, especially for huge tables like mseg or resb. The next problem came soon: it was very difficult to figure out the exact database records when the OpenSQL replacement should be integrated into an existing test class. So I started implementing some optimizations.
Optimization
First of all, I wanted to replace the manual process of copying single field values and pasting them into the test class with an automated algorithm. This algorithm should export a set of database records and use the exported data to fill the fake tables. After some research, I found all the necessary elements in the ABAP language: with dynamic OpenSQL queries, the table name and the WHERE restriction can be set at runtime.
CLASS exporter DEFINITION.
  PUBLIC SECTION.
    METHODS export_from_source
      IMPORTING
        source_table      TYPE tabname
        where_restriction TYPE string
      RETURNING
        VALUE(content)    TYPE REF TO data.
ENDCLASS.

CLASS exporter IMPLEMENTATION.
  METHOD export_from_source.
    FIELD-SYMBOLS <content> TYPE STANDARD TABLE.

    " Create an internal table whose row type is resolved at runtime
    CREATE DATA content TYPE STANDARD TABLE OF (source_table).
    ASSIGN content->* TO <content>.

    " Table name and WHERE restriction are supplied dynamically
    SELECT * FROM (source_table) INTO TABLE <content> WHERE (where_restriction).
  ENDMETHOD.
ENDCLASS.
Listing 2: export with dynamic OpenSQL queries
With Listing 2, it is possible to extract a part of any given dictionary table into an internal table. Now I just needed to save the internal table in some storage which is connected to the transport system. I found two solutions for this task:
- usage of eCATT test data containers
- usage of the identity transformation with the command CALL TRANSFORMATION id
As the source of the identity transformation, I used an internal table of type abap_trans_srcbind_tab. That makes it possible to serialize a set of internal tables in one call. The result of the identity transformation was exported as a binary object with the function module WWWDATA_EXPORT. This binary object can be connected to the transport system.
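The serialization step could look roughly like this (a minimal sketch: the binding name AIRLINES and the use of scarr as the source table are illustrative assumptions, and the subsequent call to WWWDATA_EXPORT is omitted):

```abap
" Sketch: serialize internal tables into asXML with the identity
" transformation. The binding name 'AIRLINES' is an assumption.
DATA: airlines TYPE STANDARD TABLE OF scarr,
      bindings TYPE abap_trans_srcbind_tab,
      xml_blob TYPE xstring.

SELECT * FROM scarr INTO TABLE @airlines.

" Each source binding pairs a name with a reference to a data object
bindings = VALUE #( ( name = 'AIRLINES' value = REF #( airlines ) ) ).

" The dynamic SOURCE variant serializes all bound tables at once
CALL TRANSFORMATION id
     SOURCE (bindings)
     RESULT XML xml_blob.

" xml_blob can now be handed to WWWDATA_EXPORT as a binary object
```

Deserialization works the same way in reverse, with a result binding table of type abap_trans_resbind_tab.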
With the API in class cl_apl_ecatt_tdc_api, the eCATT test data containers were much easier to fill.
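Filling a container could then be sketched as follows (the container, variant, and parameter names are made up for illustration, and the method names reflect my understanding of cl_apl_ecatt_tdc_api; check the class in your system):

```abap
" Sketch: write a snapshot into an eCATT test data container.
" 'ZAIRLINE_TDC', 'SNAPSHOT' and 'AIRLINES' are assumed names.
DATA airlines TYPE STANDARD TABLE OF scarr.
SELECT * FROM scarr INTO TABLE @airlines.

DATA(tdc) = cl_apl_ecatt_tdc_api=>get_instance(
              i_testdatacontainer         = 'ZAIRLINE_TDC'
              i_testdatacontainer_version = 1 ).

tdc->set_value( i_param_name   = 'AIRLINES'
                i_variant_name = 'SNAPSHOT'
                i_param_value  = airlines ).
tdc->commit_changes( ).
```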
In the last step, I created a utility function which copies the exported content into the fake table. As in the export step, dynamic OpenSQL statements could be used for this purpose.
CLASS importer DEFINITION.
  PUBLIC SECTION.
    METHODS import
      IMPORTING
        fake_table        TYPE tabname
        where_restriction TYPE string
        content           TYPE STANDARD TABLE.
ENDCLASS.

CLASS importer IMPLEMENTATION.
  METHOD import.
    " Remove old records matched by the dynamic WHERE restriction ...
    DELETE FROM (fake_table) WHERE (where_restriction).
    " ... and copy the exported content into the fake table
    INSERT (fake_table) FROM TABLE content.
  ENDMETHOD.
ENDCLASS.
Listing 3: import with dynamic OpenSQL statements
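Putting the pieces together, the setup of a test class could then be sketched like this (the snapshot_reader helper is hypothetical glue code, and the importer from Listing 3 is assumed to be instantiable):

```abap
METHOD class_setup.
  " Start the replacement as in Listing 1
  cl_osql_replace=>activate_replacement( replacement_table = VALUE #(
    ( source = 'SCARR' target = 'ZAIRLINE_FAKE' ) ) ).

  " Read the exported snapshot back (snapshot_reader is hypothetical)
  DATA airlines TYPE STANDARD TABLE OF scarr.
  snapshot_reader=>read( EXPORTING name    = 'AIRLINES'
                         IMPORTING content = airlines ).

  " Copy the snapshot into the fake table with the importer of Listing 3
  NEW importer( )->import( fake_table        = 'ZAIRLINE_FAKE'
                           where_restriction = `CARRID = 'TG'`
                           content           = airlines ).
ENDMETHOD.
```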
The full solution is published at https://github.com/bunysae/abap_db_preparator. I used it to integrate the OpenSQL replacement service into many of my unit tests.
Hi Bunyanuch,
I really liked your post, and it has taught me a lot about test data containers.
I didn't like the way they had to be filled manually; with your project, that works awesomely.
I'm curious, why did you opt to use the "OpenSQL replacement services" over the "Open SQL Test Double Framework" featured in the OpenSAP course "Writing Testable Code for ABAP"?
Thank you in advance,
Tristan
The system I developed it on was too old for the Open SQL Test Double Framework; it just had the OpenSQL replacement service. In the future, the Open SQL Test Double Framework or the CDS Test Double Framework are options which might be considered, too. The code was designed so that these frameworks can be integrated easily.
In the meantime, we have worked on integrating the Open SQL Test Double Framework. You can try it out at https://github.com/ABAP-prep/abap_db_preparator_osql.
Very interesting work: had I known about this project a few months ago, I could have tried it on a system that also did not have the TDF available, instead of mocking the data myself with an interface and a fake class to be called during tests.
What is less clear to me is why I would want to use this preparator on a system where TDF is present. One reason that comes to mind is having a unified approach. Is that all, or am I missing something?
I assume TDF means Test Double Framework. The approach was to come closer to the real data while mocking the OpenSQL calls. With the preparator, we can capture a snapshot of real database entities instead of creating mocked database entities, which may be missing some relationships or may have values that real database entities don't have. Take a purchase order as an example: when a purchase order is created without an account assignment, table EKKN doesn't contain an entry for that order. In mock data, we could easily create purchase orders without an account assignment that nevertheless have entries in table EKKN.
Another reason was system copies. System copies would overwrite database entities created for older unit tests, which were developed before TDF was installed. So I prefer taking a snapshot of the database entities used in the tests; then the next system copy won't cause hundreds of red tests. Hope this makes the intention clearer 🙂