SAP HANA Based Transformations (Processing Transformations in HANA), aka ABAP Managed Database Procedures (AMDP)
SAP HANA Based Transformations (a new way of writing routines)
As most of us have worked on SAP BW and have written ABAP routines in BW transformations to derive business logic, we have often noticed performance issues while loading data into a DSO, InfoCube, or master data InfoObject.
There could be numerous reasons for this:
- Bad Coding
- Memory bottleneck
- Performance of Database system
- Limited resources
- Row-based processing of records, and many more.
An ABAP-based BW transformation loads the data package by package from the source object into the application layer (ABAP). The transformation logic is executed in the application layer, and the transformed data packages are shipped back to the database server, which writes them into the target object. The data is therefore transmitted twice between the application layer and the database layer.
During processing of an ABAP-based BW transformation, each source data package is processed row by row.
In a HANA-based BW transformation, by contrast, the data is transferred directly from the source object to the target object in a single processing step. This eliminates the data transfer between the database layer and the application layer.
The complete processing takes place in SAP HANA**.
**Note: Some user-defined formulas in transformations can prevent the code pushdown to HANA.
How to Create SAP HANA Transformations:
Step 1: Create an ADSO, say ZSOURCE, with active data and change log in the BW Modeling Tools in Eclipse.
Step 2: Create a transformation between the ADSO and the DataSource. Create an expert routine from
Edit Menu -> Routine -> Expert Routine.
A pop-up will ask for confirmation to replace the standard transformation with an expert routine. Click the “Yes” button.
The system will ask for an ABAP routine or an AMDP script. Click on AMDP Script.
An AMDP class will be generated with the default method PROCEDURE and the marker interface IF_AMDP_MARKER_HDB.
Step 3: Open the ABAP Development Tools in Eclipse, connected to the BW on HANA system in the ABAP perspective, to change the HANA SQLScript.
Important Points to note:
- Write your custom code after the statement:
METHOD PROCEDURE BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.
- intab and outTab are the importing and exporting parameters of the SAP HANA transformation, respectively. They correspond to SOURCE_PACKAGE and RESULT_PACKAGE in ABAP routines.
- There is no need to define internal table types and structures in SQLScript; the system defines them internally.
- If you want to access other ABAP Dictionary tables, views, or other AMDP procedures, you must list them in the USING clause of the method.
For example, if we want to look up another DSO or a master data table, or read the target table, we need to mention those tables in the USING clause of the method. See the example below.
METHOD PROCEDURE BY DATABASE PROCEDURE
FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY USING /BIC/AZSOURCE2.
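As a sketch of how a table listed in the USING clause can then be read inside the method: the DSO active table /BIC/AZSOURCE2 comes from the example above, but the field names here are hypothetical and only illustrate the join pattern.

```sql
METHOD procedure BY DATABASE PROCEDURE
  FOR HDB LANGUAGE SQLSCRIPT
  OPTIONS READ-ONLY
  USING /BIC/AZSOURCE2.

  -- Look up an attribute from the active table of ZSOURCE2
  -- (field names are illustrative, not from the original post)
  outTab = SELECT src."/BIC/ZFIELD1",
                  src."/BIC/ZFIELD2",
                  tgt."/BIC/ZFIELD3"
             FROM :intab AS src
             LEFT JOIN "/BIC/AZSOURCE2" AS tgt
               ON src."/BIC/ZFIELD1" = tgt."/BIC/ZFIELD1";

ENDMETHOD.
```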
For better understanding, I have explained three real-world scenarios showing how to create AMDPs and write the code:
Scenario 1: One to One Mapping from Source to target using SAP HANA Expert Routine
- Create an ADSO, say ZSOURCE, and add fields to it.
- Create a SAP HANA transformation with an expert routine (steps are explained above).
- Write the expert routine in the method PROCEDURE to assign source fields to target fields.
In this scenario, we simply select the fields from intab and assign them to the exporting parameter outTab (no additional logic in the transformation).
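A minimal sketch of such a pass-through expert routine (the ZFIELD* names follow the convention used later in this post; adjust them to your own ADSO):

```sql
METHOD procedure BY DATABASE PROCEDURE
  FOR HDB LANGUAGE SQLSCRIPT
  OPTIONS READ-ONLY.

  -- One-to-one mapping: hand the source package through unchanged
  outTab = SELECT "/BIC/ZFIELD1",
                  ' ' AS "RECORDMODE",
                  "/BIC/ZFIELD2",
                  "/BIC/ZFIELD3",
                  '' AS "RECORD",
                  '' AS "SQL__PROCEDURE__SOURCE__RECORD"
             FROM :intab;

  -- errorTab must be assigned; here no records are flagged as erroneous
  errorTab = SELECT ' ' AS "ERROR_TEXT",
                    ' ' AS "SQL__PROCEDURE__SOURCE__RECORD"
               FROM :outTab
              WHERE 1 = 0;

ENDMETHOD.
```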
Scenario 2: Update the deletion indicator in the target DSO when records are deleted from the source system, using SQLScript
If we built this logic in ABAP, we would need to match the target records one by one with the source object data for all data packages, and update the deletion flag with X for every record no longer present in the source object. This can lead to performance issues at high data volumes, and the SQLScript logic is much simpler than the equivalent ABAP.
In SQLScript, we can achieve this as follows:
- The first step is to declare a variable lv_count to store the number of entries in the target table. This tells us whether this is the first-time load into the DSO or a subsequent load, and we build our logic on that.
For the first-time data load, LV_COUNT will be zero. In this case, we transfer all the source data as-is to the target DSO with a blank deletion flag, as there is nothing to update.
For subsequent data loads, we compare the existing records in the target DSO with the latest records coming from the source and update the deletion flag accordingly.
Target DSO contents after the first load:
**Say that after the first load, the record with Field 1 = 500 is deleted from the source system. In that case, we need to update the flag with the value X.
We can declare the variable using a DECLARE statement:
DECLARE <variable> VARCHAR(3);
- For subsequent loads, we compare the existing records in the target with the source, as mentioned above.
Here lv_count is the variable and /BIC/AZSOURCE2 is the target DSO, whose content I have copied into the temporary table it_target.
For this, I count the number of records in the target DSO and store it in the lv_count variable.
**it_target is a temporary table declared to store the content of the target DSO, which is used to compare against the records in the source package.
If lv_count > 0 then
/* copy the latest records from the source into the temporary table it_zsource1 */
it_zsource1 will contain:
*it_target – the target DSO contents
*:intab – the source package contents
it_zsource2 will contain:
it_zsource3 is the union of it_zsource1 and it_zsource2; this is the final output, which we assign to outTab.
The else branch below is for the first-time load, when lv_count = 0; it transfers the records from the source as-is to the target.
SAMPLE CODE BELOW:

```sql
-- INSERT YOUR CODING HERE
/* outTab = select "/BIC/ZFIELD1", ' ' "RECORDMODE", "/BIC/ZFIELD2", "/BIC/ZFIELD3",
            ' ' "/BIC/ZFLAGDEL", ' ' "RECORD", ' ' "SQL__PROCEDURE__SOURCE__RECORD"
       from :intab; */

declare lv_count varchar(3);                      /* declare a variable */

it_target = select * from "/BIC/AZSOURCE2";
select count(*) into lv_count from :it_target;    /* count number of records in target */

if lv_count > 0 then
  /* new records added in source */
  it_zsource1 = select sourcepackage."/BIC/ZFIELD1" as field1,
                       sourcepackage."/BIC/ZFIELD2" as field2,
                       sourcepackage."/BIC/ZFIELD3" as field3,
                       '' as zfield1
                  from :intab as sourcepackage;

  /* existing records deleted or updated */
  it_zsource2 = select ztarget."/BIC/ZFIELD1" as field1,
                       ztarget."/BIC/ZFIELD2" as field2,
                       ztarget."/BIC/ZFIELD3" as field3,
                       sourcepackage."/BIC/ZFIELD1" as zfield1
                  from :it_target as ztarget
                  left join :intab as sourcepackage
                    on ztarget."/BIC/ZFIELD1" = sourcepackage."/BIC/ZFIELD1";

  /* union of it_zsource1 and it_zsource2 */
  it_zsource3 = select *, '' as flag from :it_zsource1
                union all
                select *, 'X' as flag from :it_zsource2 where zfield1 is null;

  outTab = select field1 as "/BIC/ZFIELD1", ' ' "RECORDMODE",
                  field2 as "/BIC/ZFIELD2", field3 as "/BIC/ZFIELD3",
                  flag as "/BIC/ZFLAGDEL", '' "RECORD",
                  '' "SQL__PROCEDURE__SOURCE__RECORD"
             from :it_zsource3;

/* first-time load, or the DSO is empty */
else
  outTab = select "/BIC/ZFIELD1", ' ' "RECORDMODE", "/BIC/ZFIELD2", "/BIC/ZFIELD3",
                  ' ' "/BIC/ZFLAGDEL", ' ' "RECORD",
                  ' ' "SQL__PROCEDURE__SOURCE__RECORD"
             from :intab;
end if;

errorTab = select ' ' "ERROR_TEXT", ' ' "SQL__PROCEDURE__SOURCE__RECORD" from :outTab;
```
Scenario 3: Use of window functions in SAP HANA SQLScript
- Key 1, Key 2, and Field 3 should hold the values from the source table.
- Field 4x should hold the latest value of Field 4 from the lookup table, based on dates.
- Field 5x should hold the sum of all values of Field 5 for the same keys.
- Field 6x should hold the previous value of the latest transaction.
In ABAP routines, reading the previous transactions and updating the field value would be very complex. In SQLScript, however, we can use window functions, with which such complex computations become much simpler.
There are many window functions available in SAP HANA SQLScript:
- RANK() – returns the rank within a partition, starting from 1
- ROW_NUMBER() – returns a unique row number within a partition
- DENSE_RANK() – returns ranking values without gaps
- PERCENT_RANK() – returns the relative rank of a row
- LEAD() – returns the value of the row <offset> rows after the current row
- LAG() – returns the value of the row <offset> rows before the current row
and many more.
In my scenario, I will use the window functions RANK() and LEAD() to achieve the required output.
Define the it_lookup temporary table and copy the records from the lookup DSO into it.
it_lookup will then contain:
Then I rank the records using the RANK() window function, based on date in descending order, and also use the LEAD() window function to get the value following the latest transaction.
it_tab1 will contain:
For the Field 5x requirement, we need to sum the values of Field 5 for the same keys.
Inner join it_tab1 and it_tab2 where rank = 1.
it_tab3 will contain:
This is the required output, so we assign the fields from it_tab3 to outTab.
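The steps above can be sketched in SQLScript as follows. Note that the lookup DSO name /BIC/AZLOOKUP, the key and field names, and the date column ZDATE are assumptions for illustration, not the author's actual objects:

```sql
it_lookup = select * from "/BIC/AZLOOKUP";  -- hypothetical lookup DSO

/* rank by date (latest first) and fetch the value after the latest transaction */
it_tab1 = select "KEY1", "KEY2",
                 "/BIC/ZFIELD4" as field4,
                 rank() over (partition by "KEY1", "KEY2"
                              order by "ZDATE" desc) as rnk,
                 lead("/BIC/ZFIELD4") over (partition by "KEY1", "KEY2"
                                            order by "ZDATE" desc) as field6
            from :it_lookup;

/* sum Field 5 per key combination */
it_tab2 = select "KEY1", "KEY2",
                 sum("/BIC/ZFIELD5") as field5
            from :it_lookup
           group by "KEY1", "KEY2";

/* keep only the latest record per key (rnk = 1) and combine both results */
it_tab3 = select src."KEY1", src."KEY2", src."/BIC/ZFIELD3",
                 t1.field4 as field4x,
                 t2.field5 as field5x,
                 t1.field6 as field6x
            from :intab as src
           inner join :it_tab1 as t1
              on src."KEY1" = t1."KEY1" and src."KEY2" = t1."KEY2"
             and t1.rnk = 1
           inner join :it_tab2 as t2
              on src."KEY1" = t2."KEY1" and src."KEY2" = t2."KEY2";

outTab = select * from :it_tab3;
```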
AMDPs provide powerful abstractions that let ABAP developers combine the high-speed in-memory execution of SAP HANA with the well-established ABAP runtime and life-cycle model. Together with CDS views, CDS table functions, and Open SQL, they allow ABAP developers to renovate existing assets and create new, innovative applications for SAP Business Suite on SAP HANA without losing the platform independence of ABAP.
Happy reading!