
Back from vacation, I am following up on my blog series (The diary of a BW 7.3 Ramp-Up project (Part 1), The diary of a BW 7.3 Ramp-Up project (Part 2) and The diary of a BW 7.3 Ramp-Up project (Part 3)).

In this blog I come to what is probably the most interesting part of the project: the technical details. Of course, you will not see any code here, but I will describe what we did at each step and how we realized it.
I will start with the extraction. The first step was to take the coding out of the includes zxrsau*. To do this, we enhanced the extractors as far as possible with transaction RSA6 and added all possible fields. If there was a need for other data, we activated the appropriate standard extractor or created our own generic extractor for it.
When posting the data to the corporate memory, we added the request date and the request time to the data fields. There is one corporate memory object for each datasource, and each corporate memory object contains all the fields its datasource delivers.

Up to the propagation layer, we added one central method to the start routine of each transformation going to a propagation layer object. This method contains the logic to do upper/lower case conversions, switch the alpha conversion on and off, check for disallowed characters, check the date format, check the format of numeric fields, and do some more project-specific things. Via settings in customizing tables, these actions can be switched on and off. If a transformation has no entry in the customizing tables, the method in the start routine does nothing. All actions are logged in the DTP monitor.
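To make the idea of the customizing-driven start-routine method more concrete, here is a minimal sketch in Python (the productive implementation is an ABAP method; the table content, field names and the simplified ALPHA conversion below are all invented for illustration):

```python
# Illustrative sketch of the central cleansing logic driven by customizing.
# All names and the table content are invented; the real logic is ABAP.

ALLOWED_CHARS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,-_/")

# "Customizing table": per transformation and field, which actions to run.
# No entry for a field means the method does nothing for it.
CUSTOMIZING = {
    ("TRFN_SALES", "CUSTOMER"): {"upper_case": True, "alpha": True},
    ("TRFN_SALES", "ORDER_DATE"): {"check_date": True},
}

def alpha_input(value: str, length: int = 10) -> str:
    """Very simplified stand-in for the SAP ALPHA input conversion:
    left-pad purely numeric values with zeros."""
    return value.zfill(length) if value.isdigit() else value

def cleanse(trfn: str, field: str, value: str, log: list) -> str:
    settings = CUSTOMIZING.get((trfn, field))
    if settings is None:
        return value                      # no customizing entry -> no action
    if settings.get("upper_case"):
        value = value.upper()
    if settings.get("alpha"):
        value = alpha_input(value)
    if settings.get("check_date") and not (len(value) == 8 and value.isdigit()):
        log.append(f"{trfn}/{field}: invalid date {value!r}")
    bad = set(value) - ALLOWED_CHARS
    if bad:
        log.append(f"{trfn}/{field}: disallowed characters {sorted(bad)}")
    return value

log = []
print(cleanse("TRFN_SALES", "CUSTOMER", "4711", log))  # -> 0000004711
```

The point of the design is visible even in the sketch: the transformation code stays generic, and switching a check on or off is a table entry, not a transport.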
From the propagation layer to the reporting layer we have two methods which have to be added to each transformation: one in the start routine and one in the end routine. Within these methods we again implemented the upper/lower case conversion and the switch for the alpha conversion. Additionally, there is one central method to read data from other objects (P tables, DSOs) purely via customizing through some table entries. You specify the table to be read, the where clause, the fields you want to read, and how they should be mapped to the target fields. For very specific derivations we also added 'exit' methods: calls to methods with a predefined interface.
The 'exit' methods can be called row-wise or data-packet-wise in the start or the end routine. The names of the methods are maintained in customizing tables, where they can additionally be switched on or off.
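The customizing-driven lookup can be sketched as follows, again only as an illustration in Python (in reality this is an ABAP method reading P tables or DSOs; the table, the customizing entries and all field names are invented):

```python
# Illustrative sketch of the table-driven lookup described above.
# TABLES stands in for the objects to be read (P tables, DSOs); the
# customizing entry says what to read, how to select and how to map.

TABLES = {
    "CUSTOMER_DSO": [
        {"CUSTOMER": "C1", "REGION": "EMEA", "SEGMENT": "A"},
        {"CUSTOMER": "C2", "REGION": "APAC", "SEGMENT": "B"},
    ],
}

LOOKUP_CUSTOMIZING = [
    {
        "table": "CUSTOMER_DSO",
        "where": {"CUSTOMER": "CUSTOMER"},     # table key field -> source field
        "mapping": {"REGION": "CUST_REGION"},  # table field -> target field
    },
]

def apply_lookups(row: dict) -> dict:
    """Enrich one data row according to all customizing entries."""
    for entry in LOOKUP_CUSTOMIZING:
        for rec in TABLES[entry["table"]]:
            if all(rec[k] == row[src] for k, src in entry["where"].items()):
                for src_field, tgt_field in entry["mapping"].items():
                    row[tgt_field] = rec[src_field]
                break
    return row

print(apply_lookups({"CUSTOMER": "C2", "AMOUNT": 100}))
# adds CUST_REGION = 'APAC' to the row
```

Only derivations too specific for this table-driven pattern need an 'exit' method; everything else is pure configuration.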
For loading files, we implemented a logic to check the file content before even starting to load the file. This logic checks the number of fields in a record, the length of each field, the delimiter, the number format and the date format, plus project-specific checks.
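A rough sketch of such a pre-check, in Python for illustration only (the productive check is ABAP and its rules come from customizing; the delimiter, field count and date rule below are invented example values):

```python
# Illustrative file pre-check: validate the content before loading anything.
import re

def check_file(lines, delimiter=";", n_fields=3, max_len=20,
               date_fields=(2,), date_re=re.compile(r"^\d{8}$")):
    """Return a list of error messages; an empty list means the file may load."""
    errors = []
    for i, line in enumerate(lines, start=1):
        fields = line.rstrip("\n").split(delimiter)
        if len(fields) != n_fields:
            errors.append(f"line {i}: expected {n_fields} fields, got {len(fields)}")
            continue
        for j, f in enumerate(fields):
            if len(f) > max_len:
                errors.append(f"line {i}, field {j}: too long ({len(f)} > {max_len})")
            if j in date_fields and not date_re.match(f):
                errors.append(f"line {i}, field {j}: invalid date {f!r}")
    return errors

print(check_file(["C1;100;20110815", "C2;200;2011-08-15"]))
# the second line fails the date-format check
```

Rejecting the whole file up front is much cheaper than unpicking a half-loaded request afterwards.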
On top we created an ABAP report to check all project conventions and hooked this functionality into the Change and Transport System. In our system it is therefore not possible to release a task or transport request containing objects that do not fulfil our conventions, such as the naming conventions and the correct implementation of our methods in the transformations.
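The principle of such a release gate can be sketched quickly; note that the naming patterns below are pure inventions for the example, not our actual conventions, and the real check is an ABAP report attached to the CTS:

```python
# Hypothetical sketch of a convention check run at transport release time.
import re

NAMING_RULES = {
    "DSO": re.compile(r"^Z[A-Z]\d{2}D\d{2}$"),   # invented naming convention
    "CUBE": re.compile(r"^Z[A-Z]\d{2}C\d{2}$"),  # invented naming convention
}

def release_allowed(objects):
    """objects: list of (object type, object name) pairs in the transport.
    The release is blocked if any object violates its naming rule."""
    violations = [(t, n) for t, n in objects
                  if t in NAMING_RULES and not NAMING_RULES[t].match(n)]
    return (len(violations) == 0, violations)

ok, bad = release_allowed([("DSO", "ZS01D01"), ("CUBE", "SALESCUBE")])
print(ok, bad)  # -> False [('CUBE', 'SALESCUBE')]
```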

As you know, especially in the case of file data you have to carry out a selective deletion of incorrect data. The possibilities for selective deletion are described pretty well in Selective Deletion scenarios in SAP BW and possible solutions. We implemented approach 3 with a small change: instead of using an ABAP program and DSO2, we simply use a transformation from DSO1 to DSO1 that sets the recordmode to 'deletion' in the transformation.
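The effect of that DSO1-to-DSO1 transformation can be illustrated with a small sketch (Python, invented data; in BW the deletion is of course done by the DSO activation reacting to the recordmode, not by hand-written code like this):

```python
# Illustrative sketch: the DSO1 -> DSO1 transformation re-sends the selected
# (incorrect) records with recordmode 'D'; on activation such a record removes
# its key from the active table.

def selective_delete(active_table, predicate):
    # 1. "Transformation": select the bad records and set recordmode 'D'.
    delta = [dict(rec, RECORDMODE="D") for rec in active_table if predicate(rec)]
    # 2. "Activation": a record arriving with recordmode 'D' deletes its key.
    keys_to_drop = {rec["ORDER"] for rec in delta}
    return [rec for rec in active_table if rec["ORDER"] not in keys_to_drop]

dso1 = [{"ORDER": "1", "FILE": "good.csv"}, {"ORDER": "2", "FILE": "bad.csv"}]
print(selective_delete(dso1, lambda r: r["FILE"] == "bad.csv"))
# -> only ORDER '1' remains
```

The selection of what to delete (the predicate here) corresponds to the filter on the DTP feeding the DSO1-to-DSO1 transformation.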

We also use SPOs. To set up the SPOs system-wide with the same settings, we used the approach from SAP NetWeaver BW 7.30: Semantically Partitioned Objects (SPOs) built from BAdI – Consistent, Rule Based Modeling of Logical Partitions. The settings are maintained in customizing tables.

Another story is the implementation of our 'force delta' process, which was recommended by SAP. This process is used when you have two or more datasources whose data needs to be combined in one target. Imagine you have DSO1 and DSO2 and you need to combine their data (the key fields are different) into DSO3. Instead of doing a lookup from DSO1 to DSO3 and from DSO2 to DSO3, you do only one of the lookups. The other one is replaced by an additional transformation from datasource 1 to DSO2 that changes a dummy field in DSO2; this forces a delta going from DSO2 to DSO3, which does the lookup again. This process eliminates a lot of coding.
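A minimal sketch of the idea, with invented data and in Python rather than BW objects (so take it as my reading of the process, not a literal implementation): when new data for datasource 1 arrives, an extra transformation flips a dummy field on the matching DSO2 records, which puts them into DSO2's delta, so the one existing DSO2-to-DSO3 transformation with its DSO1 lookup simply runs again.

```python
# Sketch of 'force delta' with invented data. Only ONE lookup routine exists
# (in the DSO2 -> DSO3 step); changes on the DSO1 side are propagated by
# forcing the affected DSO2 records back into the delta queue.

dso1 = {"MAT1": {"PRICE": 10}}                            # lookup source
dso2 = {"ORD1": {"MATERIAL": "MAT1", "QTY": 5, "DUMMY": 0}}
dso2_delta = []                                           # changed DSO2 keys
dso3 = {}

def load_dso1(material, fields):
    """New data from datasource 1: update DSO1 and force a delta in DSO2."""
    dso1[material] = fields
    for key, rec in dso2.items():         # the additional transformation
        if rec["MATERIAL"] == material:
            rec["DUMMY"] ^= 1             # flip dummy field -> delta record
            dso2_delta.append(key)

def load_dso2_to_dso3():
    """The one existing DSO2 -> DSO3 transformation with the DSO1 lookup."""
    while dso2_delta:
        key = dso2_delta.pop()
        rec = dso2[key]
        dso3[key] = {"QTY": rec["QTY"],
                     "PRICE": dso1[rec["MATERIAL"]]["PRICE"]}

load_dso1("MAT1", {"PRICE": 12})   # a price change arrives for datasource 1
load_dso2_to_dso3()
print(dso3["ORD1"]["PRICE"])       # -> 12, without a second lookup routine
```

The saving is that the derivation logic lives in exactly one transformation; the 'force delta' step itself contains no business logic at all.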

With this I end my blog on the technical details. In case anybody is interested, I would love to talk about it at TechEd 2011 in Madrid, which I am planning to attend.


  1. Christian Baumann
    Hello Siegfried

    Thank you for taking the time to share some of your own 7.3 experience.

    Nevertheless I am not totally happy, as I can’t follow your explanation of the ‘forced delta process, which was recommended by SAP’.

    Is it a new 7.3 feature that you use to realize and implement the ‘forced delta process’?

    Could you please spend some additional lines on this feature, as I would like to understand your ideas there? (Perhaps an illustration would help to understand 🙂)

    Thanks a lot for your efforts

  2. Tianjian Xie
    Hi Siegfried,

    Very nice blog! This is the first time I’ve heard about the ‘Forced Delta Process’, even though the scenario you depicted is quite common.

    Would you please give an example about your approach instead of just elaborating on it?

    With many thanks!

  3. Thomas Goodwin
    This is a good introduction to the standardized transformations, as you presented at SAP TechEd 2012 (Las Vegas) today. Very interesting. We have only a few infoproviders, but even so it is clear that customized trfn and start/end routines can get out of hand. Perhaps you can contact me at my email.



