Technology Blogs by Members
Former Member

We had a requirement for data reconciliation between two systems. The client wanted all the migrated contract-level data available in BI Netezza moved to the SAP BW system, and a copy of the reconciled data from the pre-production cube kept in production for audit purposes. The risk was that once the data was approved by the user, a huge number of flat files had to be downloaded to the application server and loaded into the production cube.

Flat files were required because no RFC connection was allowed between the pre-production and production systems.

The best part was that the reconciliation of 20 billion records was done by the business; our responsibility as BW developers was to move the reconciled data into the BW system.

Issue/Risk: Too many flat files would be created, and each would need to be loaded into the InfoProvider manually.

Approach: The requirement had three stages of development.

Stage 1: The Insurance, Billing and Payment flows, which held all contract-level information, were already present in development. A transparent table containing the migrated data was provided in the ECC development system. A generic datasource was created on this table, with delta based on the anniversary date. The datasource was replicated to the BW system and a standard DSO was created with the same key fields as the datasource. Some of the required InfoObjects already existed and were reused; a few were created as per the requirement. Lookups against the migration DSO were done in the Insurance, Billing and Payment DSOs. A MultiProvider was created containing all the cubes, followed by a query using the MultiProvider as the InfoProvider. As per the requirement, 65 key figures (both restricted and calculated) and 28 characteristics were created. The query was restricted on key date, calendar month, contract number and group ID. This optimized query performance, since 20 billion records were expected in the pre-production environment.

Stage 2: Now the challenge was to move the query data into an InfoProvider. We tried two approaches.

A) The first was RSCRM_BAPI. However, we found that the extract table it generates is always saved in the $TMP package, which cannot be changed, and it supports only 20 key figures. Since we had 65 key figures, the idea was scrapped. There were many other limitations as well; refer to SAP Note 605208: http://www.stechno.net/sap-notes.html?view=sapnote&id=605208

B) The second approach was the standard one, i.e. the use of an APD. We created an APD whose source was the query and whose target was a direct update DSO. There was a limitation here: a DSO can have only 16 key fields. So we analyzed the data, identified 16 fields to serve as keys, and derived the remaining fields in the start routine of the DSO transformation. In this way we transferred all the query data into an InfoProvider.
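As a rough illustration, deriving the non-key fields can be done in the transformation start routine by looping over the source package. The field names below (CONTRACT, ANNIV_DATE, PERIOD) are hypothetical placeholders, not the actual project objects:

```abap
* Sketch of a BW transformation start routine (field names are assumptions).
* SOURCE_PACKAGE holds the records arriving from the APD query source.
LOOP AT SOURCE_PACKAGE ASSIGNING FIELD-SYMBOL(<ls_source>).
  " Derive a reporting period (YYYYMM) from the anniversary date,
  " so it need not be part of the 16 key fields of the DSO
  <ls_source>-period = <ls_source>-anniv_date+0(6).
ENDLOOP.
```

Any field that can be recomputed from the key fields is a good candidate for derivation here, since it keeps the DSO key within the 16-field limit.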

Always remember: when using an APD, create a variant in the query matching the selection screen saved in the APD; otherwise the APD generates a "no variable entry found" error.

Stage 3: This InfoProvider, which held all the migrated BI Netezza data, was then connected to an open hub. The open hub was needed because all migrated data had to be moved to the production system, and this was possible only through flat files. So an open hub destination was created with the application server as its target. Since multiple files would be generated, a function module was created to generate a timestamp and store the multiple CSV files on the application server.
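The filename logic of such a function module can be sketched roughly as follows. The function name and interface here are illustrative assumptions, not the original project's code; the idea is simply to append the system date and time so that each extraction writes a new CSV file instead of overwriting the previous one:

```abap
* Sketch of a custom function module for the open hub file name
* (name and parameter are assumptions for illustration).
FUNCTION z_migration_get_filename.
*"  EXPORTING
*"     VALUE(e_filename) TYPE string

  DATA lv_stamp TYPE string.

  " SY-DATUM (YYYYMMDD) and SY-UZEIT (HHMMSS) together give a
  " unique timestamp per extraction run
  CONCATENATE sy-datum sy-uzeit INTO lv_stamp.
  CONCATENATE 'MIGRATION_' lv_stamp '.CSV' INTO e_filename.

ENDFUNCTION.
```

With a scheme like this, each open hub run lands in its own timestamped file on the application server, which is what allows the subsequent manual loads into production to be tracked file by file.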

These flat files were then loaded into the InfoProvider in the production environment.
