Quality testing on Development server
Testing environment on development server using shared memory area SHMA
In a 3-tier architecture landscape there are multiple servers dedicated to specific purposes. These servers are commonly known as sandbox, development, quality, and production servers or systems. The purpose of this layered approach is to minimize the risk of failures in the live environment due to erroneous, redundant, or irrelevant code, and also to keep that environment clean.
It is often necessary to create certain programs as 'test programs' in order to verify complex behavior and to decide on the final design of an object. So, to avoid creating versions of the main program, test objects or programs are sometimes developed and transported to the next environment, where they are tested for functionally correct behavior in different scenarios. The Basis team regularly copies data from the live/production environment to the quality environment. Functional consultants then test the behavior of the developed objects against these real-case scenarios. If any discrepancies are found, the developer is asked to make changes and transport them to the quality environment, where they are tested again. This feedback-correction cycle continues until the object behaves as expected, after which it is transported to the production environment for actual use by the end user.
The development time and the effort spent on error correction depend on how well the developer understands the business requirements and implements them. In a typical scenario, the developer performs some basic unit testing in the development environment and then sends the object to the quality environment. The main difference between testing in the development and quality environments is the quality of the data. Data in the quality environment is generally a copy of production and is therefore more correct and accurate, whereas data in the development environment is created manually and thus contains many inconsistencies and flaws. If good-quality data were available to the developer on the development server, the developer could test more precisely and correct errors during the development stage itself. This results in fewer transports and delays. Providing good-quality data on the development server is something the IT and Basis teams can decide on; one solution is to copy the data from the quality system to the development system. However, this increases the database requirements of the development environment.
The purpose of this article is therefore to propose a way to make data from the quality or production environment available in the development environment, so that the developer can catch, during the development phase, most of the issues that would otherwise surface in production. The article proposes using the Shared Memory Area and Shared Memory Objects technology together with Remote Function Calls (RFCs).
SHARED MEMORY AREA AND OBJECTS
Shared Memory Area (henceforth referred to as SHMA) and Shared Memory Objects (henceforth referred to as SHMO) are a new way of accessing the shared memory of an SAP application server from the ABAP development workbench. The shared memory is a memory area on an application server that is accessed by all of that server's ABAP programs. Before shared objects were implemented, ABAP programs could access this memory area only by using the EXPORT and IMPORT statements with the SHARED BUFFER or SHARED MEMORY additions, and instances of classes or anonymous data objects existed exclusively in the internal session of an ABAP program. With the implementation of shared objects in Release 6.40, the shared memory was enhanced with the shared objects memory, which stores shared objects. The size of the shared objects memory is determined by the abap/shared_objects_size_MB profile parameter.
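For readers who want to see the older mechanism for comparison, the classic cluster-based access looks roughly like this (the structure type and the ID 'ZDAT' are illustrative assumptions for this sketch):

```abap
* Classic cross-program shared memory access, available before
* shared objects: write an internal table to the shared buffer.
* INDX is the standard cluster table; 'ZDAT' is an example ID.
DATA lt_data TYPE STANDARD TABLE OF mara.

EXPORT tab = lt_data TO SHARED BUFFER indx(zz) ID 'ZDAT'.

* Any other program on the same application server can read it back:
IMPORT tab = lt_data FROM SHARED BUFFER indx(zz) ID 'ZDAT'.
```

With shared objects, by contrast, the data lives in the shared objects memory as attributes of a class instance and is accessed through typed references rather than cluster IDs.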
The purpose of this document is not to explain what SHMA and SHMO are or how to implement them; it is assumed that the reader is already familiar with the implementation details. The goal is to show how this setup can be used to make data from the quality environment available to the developer in the development environment for unit testing.
Some points that need to be highlighted:
1. SHMA improves data handling drastically, since the memory allocated to these areas is much larger than what is allocated to any individual runtime object or program. For example, an internal table in a program on the application server can hold only a limited number of entries, whereas an internal table in shared memory can hold far more records because of the large memory dedicated to it. This reduces the possibility of memory-overflow runtime errors that end in a short dump.
2. The shared memory is not automatically collected and freed by the garbage collector; it is therefore the developer's responsibility to carefully design the creation and deletion of the memory areas. A redundant memory area and its data could eat up the total available memory on the application server and thus affect the performance of other programs running in parallel.
3. This solution is intended for situations in which a few hundred thousand records must be processed; for that you need 2 GB or more allocated to shared memory on the application server. The default value is normally around 20 MB; you can check the abap/shared_objects_size_MB profile parameter in transaction RZ10.
4. The shared memory object classes and the RFC FM need to be transported to the quality environment before the testing process starts.
The solution consists of: an RFC-enabled FM that fetches the data from different tables, based on the business logic, using SHMO methods; the SHMA and SHMO classes, through which this FM stores the data in the SHMA; and finally a simple program that accesses the data from the SHMA via the SHMO.
There are two possible approaches for where to put the data-fetching logic:
A. In the methods of the SHMO; these methods are then called both in the program and in the RFC FM.
B. In the RFC FM; from the main program we then simply call the RFC FM and not the methods of the SHMO.
With the second approach, there is a concern when the number of data records is large. Since the data-fetching logic sits in the RFC FM, it is processed in the internal ABAP session, which has only a limited capacity to hold the records. Moreover, transferring such a large number of records through the FM's export parameters could be problematic.
So we go ahead with the first approach: data fetching and transfer are done by the methods of the SHMO, called from the main program and from the RFC FM, and the FM is just a vehicle to execute these methods on the quality environment.
1. Create the SHMA and SHMO classes.
2. Create the RFC-enabled FM. Create an RFC destination using SM59 that points to the application server of the quality system. In the destination you will need to provide a user name and password for the quality system with sufficient authorizations to fetch the data from the tables.
3. In the RFC FM, first check whether the SHMA and SHMO instances are already present in memory. If they are, it is better to delete them first.
   Note: Deleting the memory area and its data at the beginning of the FM means the large query is fired on every call, but it also means fresh data is present every time. So, based on the requirement, we decide whether to delete the memory area or keep it on the application server.
   Warning: If we decide to keep the memory area and its data, it stays even when our program is not running, eating into the memory available for other programs running in parallel.
4. In the RFC FM, create the SHMO and SHMA instances and attach the SHMO to the SHMA just created, or to the one already present on the application server. We then have a dedicated shared memory area, with the size and lifetime parameters set in the SHMA class, and the SHMO attached to it.
5. Now we call the self-defined data-access methods of the SHMO, such as Get_data_into_mem() and Read_data(), in the FM.
6. Finally, we write code to delete and free the memory area and shared memory objects at the end, controlled by a flag exposed as an importing parameter of the FM.
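The attach-or-recreate logic described above can be sketched in ABAP as follows. ZCL_QA_DATA_AREA stands for an area class generated via transaction SHMA and ZCL_QA_DATA_ROOT for its shared-memory-enabled root class; both names are assumptions for this example.

```abap
* Sketch of the attach-or-recreate logic inside the RFC FM.
* ZCL_QA_DATA_AREA is assumed to be generated in transaction SHMA;
* ZCL_QA_DATA_ROOT is its shared-memory-enabled root class.
DATA: lo_area TYPE REF TO zcl_qa_data_area,
      lo_root TYPE REF TO zcl_qa_data_root.

IF delete_flag = abap_true.
  " Remove any existing versions of the area and their data.
  zcl_qa_data_area=>free_area( ).
ENDIF.

TRY.
    " Reuse the area if it already holds an active version ...
    lo_area = zcl_qa_data_area=>attach_for_update( ).
  CATCH cx_shm_no_active_version cx_shm_inconsistent.
    " ... otherwise create it and attach a fresh root object.
    lo_area = zcl_qa_data_area=>attach_for_write( ).
    CREATE OBJECT lo_root AREA HANDLE lo_area.
    lo_area->set_root( lo_root ).
ENDTRY.

" ... fill the root's data via the SHMO methods, then publish
" the changes and release the change lock:
lo_area->detach_commit( ).
```

Holding the change lock (ATTACH_FOR_WRITE/ATTACH_FOR_UPDATE) only as long as needed matters here, because an area can have at most one writer at a time.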
Function Module ZRFC_SHMO
  IMPORTING: delete_flag
  TABLES:    iv_itab

  // Check whether the SHMA and SHMO already exist in memory;
  // if yes, DELETE them and create new ones,
  // or use the existing memory area and its data.

  SHMO=>Get_data_into_memory( selection range )   // this is where our SELECT query runs

  SHMO=>Read_data( selection range )              // this is where we select only some data,
    IMPORTING ev_itab = iv_itab[]                 // based on the selection range

  // The data has been populated in shared memory, and the selected
  // subset has been fetched and assigned to IV_ITAB, ready to be
  // imported into the main program.
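The function module outlined above could be fleshed out roughly as follows. The signature, the area class ZCL_QA_DATA_AREA, the root class ZCL_QA_DATA_ROOT, and the method names are all assumptions carried over from the earlier sketches; the methods are shown as static calls that attach to the area internally.

```abap
FUNCTION zrfc_shmo.
*"----------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(delete_flag) TYPE abap_bool DEFAULT abap_false
*"  TABLES
*"     iv_itab STRUCTURE mara
*"----------------------------------------------------------------------
  " Optionally drop stale versions so this call works on a fresh
  " snapshot of the data.
  IF delete_flag = abap_true.
    zcl_qa_data_area=>free_area( ).
  ENDIF.

  " Load the data into shared memory (the big SELECT runs only when
  " no active version exists yet) ...
  zcl_qa_data_root=>get_data_into_mem( ).

  " ... and read the requested subset back into the TABLES parameter.
  zcl_qa_data_root=>read_data( IMPORTING ev_itab = iv_itab[] ).
ENDFUNCTION.
```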
Remember that this RFC FM must be present in the quality system, while we continue working in the development system on our main program, which needs to be tested with data from the quality system.
It is therefore better to keep separate transports for the RFC FM and for our main program.
Now we create some methods and some global attributes in the SHMO class that we created in step 1. These methods are called in the RFC-enabled FM to access the shared memory, as mentioned in steps 5 and 6.
The methods are: 1. Fetch_data() 2. Read_data() 3. Get_data_into_mem(), and the attribute is the internal table ITAB[].
In the first method, Fetch_data(), we call the RFC FM to fetch the data from the tables.
The purpose of this method is that, when run on the development server, it calls the RFC FM on the quality server, fetches the data in the quality system, and brings it back to the development system.
CALL FUNCTION 'ZRFC_SHMO' DESTINATION dest
  TABLES iv_itab = itab[].
Here ITAB is the attribute of the SHMO that lives in the SHMA. The data coming back from the RFC FM is therefore now present in the SHMA, in the internal table ITAB.
So with this call, while sitting on the development server, we fetched the data from the quality server and saved it in the SHMA of the development server.
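A possible implementation of Fetch_data() is sketched below. The destination name 'QAS_DEST' and the exception handling are assumptions; note also that writing into the SHMO attribute presupposes an open change lock on the area, as in the attach sketch earlier.

```abap
METHOD fetch_data.
  " Call the RFC FM in the quality system; 'QAS_DEST' stands for the
  " destination assumed to have been created in SM59.
  CALL FUNCTION 'ZRFC_SHMO'
    DESTINATION 'QAS_DEST'
    TABLES
      iv_itab               = itab   " SHMO attribute living in the SHMA
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      OTHERS                = 3.
  IF sy-subrc <> 0.
    " Handle RFC errors here, e.g. raise a class-based exception,
    " so a broken connection does not leave a half-filled area behind.
  ENDIF.
ENDMETHOD.
```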
In the second method, Read_data(), we make the data in the SHMA available to the ABAP program simply by passing it through the method's parameters. Again, since an internal table in an ABAP program can hold only a certain number of records, it is better to pass the data in chunks, to avoid memory-overflow issues while processing it.
EV_ITAB[] = ITAB[].
// Here we can add logic to pass only a limited chunk of the data.
// For example: APPEND LINES OF ITAB FROM idx1 TO idx2 TO EV_ITAB.
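The chunking idea can be sketched as follows. IV_FROM and IV_PACKAGE_SIZE are illustrative importing parameters, not part of the original design:

```abap
METHOD read_data.
  " Return one chunk of the shared table per call. The caller loops,
  " increasing IV_FROM by IV_PACKAGE_SIZE each time, until EV_ITAB
  " comes back empty.
  DATA lv_to TYPE i.
  lv_to = iv_from + iv_package_size - 1.
  CLEAR ev_itab.
  " APPEND LINES OF ... FROM ... TO stops at the end of ITAB if
  " lv_to points beyond the last row.
  APPEND LINES OF itab FROM iv_from TO lv_to TO ev_itab.
ENDMETHOD.
```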
Finally, when we are done using the data on the quality and development servers, we need to delete and free the shared memory and objects so that they do not cause memory constraints for other programs running in parallel.
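Cleanup can be as simple as freeing the area on both systems, again assuming the generated area class ZCL_QA_DATA_AREA from the earlier sketches:

```abap
" Drop all versions of the area, including the active one, and
" release the shared objects memory they occupied.
zcl_qa_data_area=>free_area( ).

" On the quality side the same cleanup can be triggered remotely by
" calling ZRFC_SHMO with delete_flag = abap_true.
```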
In general, in a multiple-application-server scenario, a single application server with a larger shared objects memory can be dedicated to this purpose.
Combining this approach with parallel processing makes it possible to build a very efficient architecture for processing large amounts of data in a considerably short period of time.
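The parallel-processing idea could be sketched with asynchronous RFC, where each task fetches one slice of the data into shared memory on the target server. The task-naming scheme, the slice count, and the reuse of ZRFC_SHMO are all assumptions for this sketch; in a real program the developer would also register a callback (CALLING ... ON END OF TASK) to know when all slices are done.

```abap
" Fire several asynchronous RFC calls; the heavy result data stays
" in shared memory on the target server, so no large tables travel
" back to this session.
DATA lv_task TYPE char8.

DO 4 TIMES.
  lv_task = |TASK{ sy-index }|.
  CALL FUNCTION 'ZRFC_SHMO'
    STARTING NEW TASK lv_task
    DESTINATION 'QAS_DEST'
    EXPORTING
      delete_flag           = abap_false
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
  IF sy-subrc <> 0.
    " Handle tasks that could not be started.
  ENDIF.
ENDDO.
```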
By making use of RFC we can use the data of the quality system on the development system, but there will always be restrictions on the amount of data that can be transferred between the two systems. It is up to the developer to design the logic so as to maintain data consistency and program performance.
The developer also needs to take care of failure handling and of deleting the shared memory areas on both the development and quality servers, so that this memory does not block the space available for other legitimate programs.