
Summary:

This document provides an overview of pushing SAP BW data to a third-party system (Informatica). In particular, it highlights the challenges organizations face with this kind of integration and how to overcome them. The purpose of this document is to show how to improve performance and how to handle the issues that come up when connecting to a non-SAP system.

Current SAP BW Workflow:

We have a few standard and customized DataSources, with DSOs in the EDW layer and propagation-layer DSOs on top of them. On top of the propagation-layer DSOs we have built Open Hubs with "Third Party" as the target system, and their RFC destinations point to INFA.

The INFA team transforms the data and sends it to OOBI, which is used for business reporting.

First, we should create an RFC destination between BW and Informatica. Once the RFC destination is built, we can start building Open Hubs whose target type is "Third Party", and we should maintain the same RFC destination name in all the Open Hubs that need to push data to the INFA environment.

Example: RFC destination name: ZBIBD1

Once the RFC destination is assigned, click on the Parameters tab and maintain the WORKFLOW & FOLDERNAME details, which the INFA team will provide.

Workflow: the table name that the INFA team will have created.

The workflow is maintained in a specific folder at INFA, so we need to get those details from the INFA team.
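
Before moving on, it is worth checking that the new destination actually works. Below is a small sketch (not part of the original setup) that pings the example destination ZBIBD1 using the standard function module RFC_PING; the Connection Test button in SM59 does the same thing.

* Quick connectivity check for the example destination ZBIBD1.
* RFC_PING is a standard function module with no parameters; the two
* exceptions below are the standard system exceptions of any remote call.
CALL FUNCTION 'RFC_PING'
  DESTINATION 'ZBIBD1'
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2
    OTHERS                = 3.
IF sy-subrc <> 0.
  WRITE / 'RFC destination ZBIBD1 is not reachable - check SM59.'.
ELSE.
  WRITE / 'RFC destination ZBIBD1 is reachable.'.
ENDIF.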

This completes the initial steps to be done from the BW side.

We have created such Open Hubs on top of standard DSOs and InfoCubes:

Standard DSO:

Data Flow: DataSource --> DSO1 --> DSO2 --> Open Hub

On top of DSO2 we have created the Open Hub.

Prerequisites that need to be handled while pushing data from a DSO to INFA via Open Hub:

--> All the key fields of the DSO should also be made primary keys at the INFA end, because the uniqueness of the pushed data is determined by those keys. Whenever there is a change in the structure of the DSO, we should make the INFA team aware of the changes done in BW, and the same changes should be made at the INFA end so that the load will not fail.

--> We should handle the updated data record types (new, before image, and after image) because we are dealing with a standard DSO. In a few projects this can be handled at the INFA end, but in our case it was handled at the BW end by filtering out the before-image "X" values at DTP level (see the sketch after this list).

--> If there is a delete-and-full-load from the BW end with newly updated data, we should keep the INFA team informed so that they can truncate the tables at their end if required.

--> Make sure these changes are moved from DEV to QA or PROD on the BW side first, then the INFA changes are moved from DEV to QA, and only then trigger the load in the QA system.
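
For the before-image filtering mentioned in the second point, the DTP filter is simply an exclusion on the record mode field. The snippet below only illustrates that selection (exclude value "X"); it is not code from the actual DTP.

* Illustration of the DTP filter semantics for before images:
* exclude rows whose record mode is 'X' (before image).
DATA lr_recordmode TYPE RANGE OF char1.
lr_recordmode = VALUE #( ( sign = 'E' option = 'EQ' low = 'X' ) ).
* With this selection, before-image records are never pushed to INFA.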

InfoCube:

Data Flow: DataSource --> DSO1 --> InfoCube --> Open Hub

--> We have created the Open Hub on InfoCubes because the underlying data flow is a delete-and-reload of the data. If we used a standard DSO, the time taken to activate the table would lead to delays.

--> All the characteristics in the cube should be set as primary keys at the INFA end, because we do not have key fields in the same way as with a standard DSO.

--> The data load is a daily delete-and-reload from the BW end, so at the INFA end new records will be inserted or updated, and old records will be rejected.

--> If there is a delete-and-full-load from the BW end with newly updated data, we should keep the INFA team informed so that they can truncate the tables at their end if required (same as for the DSO).

Error Handling:

       

--> Once the data load is triggered from BW, in a few cases it might fail due to connection issues or data issues. At that point we always need to delete the failed request with the program RSBK_DEL_DTP_REQ_FROM_OHD.

Go to transaction SE38, enter the above program name, and execute.

   

Provide the Open Hub name and the failed request ID from the DTP, make sure to uncheck the TEST RUN checkbox, and execute; the failed request will be deleted.
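
If you prefer to jump straight to the program from your own code, the sketch below simply opens its selection screen; the Open Hub name, request ID, and TEST RUN flag are then filled in there as described above.

* Open the selection screen of the deletion program directly.
* Fill in the Open Hub name and failed request ID on the screen,
* untick TEST RUN, and execute.
SUBMIT rsbk_del_dtp_req_from_ohd VIA SELECTION-SCREEN AND RETURN.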

--> A few data loads might take much longer, with the request sitting in "YELLOW" status without any data being updated. At this point we should not start a new load; first we should set the status of the running load to RED, and then we can start the new load.

To change the status to RED we can use the function module RSB_API_OHS_REQUEST_SETSTATUS.

Open SE37, enter the function module name, and execute:

Provide the request ID from the DTP (the yellow one), set the status to "RED", and enter a message of your choice.

Now execute; the request will turn RED. Then you can use the SE38 program mentioned above to delete the request. Once this is done we can go ahead with the next data load.
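
If you want to script this step instead of running it from SE37, the call would look roughly like the sketch below. The parameter names and the request ID value are assumptions based on the SE37 screen described above (request ID, status, message); verify the actual interface in SE37 before using it.

* Hedged sketch - parameter names and the request ID value are assumed;
* check the real interface of RSB_API_OHS_REQUEST_SETSTATUS in SE37 first.
CALL FUNCTION 'RSB_API_OHS_REQUEST_SETSTATUS'
  EXPORTING
    i_requestid = '123456'                      " failed (yellow) request ID from the DTP
    i_status    = 'R'                           " 'R' = RED
    i_message   = 'Set to RED before restart'.  " free-text message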

       

Process Chain:

      

       

When including this Open Hub in a process chain, we should make sure to maintain the customized "SEND STATUS" program, which includes the RFC destination check, the PARAMETERS call, and the SEND STATUS step (a customized function module), as sketched after the list below.

  1. RFC Destination --> RFC_SYSTEM_INFO (function module)
  2. PARAMETERS --> RSB_API_OHS_DEST_SETPARAMS (function module)
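
A minimal skeleton of such a send-status program might look like the sketch below. The RFC_SYSTEM_INFO call follows the standard pattern for remote calls; the RSB_API_OHS_DEST_SETPARAMS step is only indicated as a comment, because its exact interface should be taken from SE37.

* Step 1: RFC destination check via RFC_SYSTEM_INFO (example destination
* ZBIBD1 from above). MESSAGE catches the short text of a failure.
DATA lv_msg TYPE c LENGTH 80.
CALL FUNCTION 'RFC_SYSTEM_INFO'
  DESTINATION 'ZBIBD1'
  EXCEPTIONS
    communication_failure = 1 MESSAGE lv_msg
    system_failure        = 2 MESSAGE lv_msg
    OTHERS                = 3.
IF sy-subrc <> 0.
  MESSAGE lv_msg TYPE 'E'.   " fail the chain step if INFA is unreachable
ENDIF.
* Step 2: PARAMETERS - set WORKFLOW / FOLDER via RSB_API_OHS_DEST_SETPARAMS
*         (check its interface in SE37 before calling it).
* Step 3: SEND STATUS - the project's customized function module.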

POLLING FLAG:

With this indicator you can control the behavior of the main process when you have distributed processes. Distributed processes, such as the loading process, are characterized by having different work processes involved in specific tasks. With the polling flag you determine whether the main process needs to be kept alive until the actual process has ended. Selecting the indicator guarantees a high level of process security and allows external scheduling tools to be provided with the status of the distributed processes.

Hope this document helps, and please provide feedback.

Thanks for reading
