
Hi Folks,

As part of recent project work I picked up a few learnings regarding BW 7.0 functionality that I would like to share with you:

PSA per data source – problems with multiple InfoPackage loads (e.g. per region)

When you have a scenario where you run several loads for one data source, e.g. one load per region, and then separate DTPs per region to process the data in BW, you will notice that the DTPs get slower the larger the PSA is. Although a DTP picks only the data that matches its filter (e.g. region APJ), it scans the whole PSA table for entries matching the filter criteria. In the DTP monitor you will see several data packages with 0 records (PSA data packages skipped because they did not match the filter). To avoid this kind of delay you need to be rigorous about cleaning the PSA after every load. However, here comes the next problem: there is no selective deletion for the PSA in BW 7.0, you can only delete whole requests – which can lead to unwanted situations, such as deleting requests that have not yet been processed.

Therefore it is advisable to check during design whether it is not better to simply create one data source per region, to get true independence and the best performance.

In cases where this is too complicated (a lot of different loads would mean too many data sources to maintain) or you are already past the design phase, you can use the program ZGBR_PSA_DEL_BY_INFOPACKAGE on WW BW, which I developed with GADSC, to do selective deletion of PSA data based on the InfoPackage – keeping the PSA lean and the loading performance good.
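To illustrate the idea, here is a minimal sketch of how such a selective deletion could work. This is not the actual ZGBR_PSA_DEL_BY_INFOPACKAGE source: the direct delete on the PSA table, the use of RSREQDONE to map requests to their InfoPackage, and the parameter names are all assumptions, and a real program would also have to clean up the PSA request administration.

* Hypothetical sketch – not the actual ZGBR_PSA_DEL_BY_INFOPACKAGE
REPORT zgbr_psa_del_sketch.

* InfoPackage whose requests should be deleted, and the PSA table name
PARAMETERS: p_logdp  TYPE rsreqdone-logdpid OBLIGATORY,
            p_psatab TYPE tabname OBLIGATORY.

DATA: lt_rnr TYPE STANDARD TABLE OF rsreqdone-rnr,
      lv_rnr TYPE rsreqdone-rnr.

START-OF-SELECTION.

* Collect all request IDs that were loaded by this InfoPackage
  SELECT rnr FROM rsreqdone
    INTO TABLE lt_rnr
    WHERE logdpid = p_logdp.

* Delete the matching rows directly from the PSA table
  LOOP AT lt_rnr INTO lv_rnr.
    DELETE FROM (p_psatab) WHERE request = lv_rnr.
    WRITE: / lv_rnr, ':', sy-dbcnt, 'records deleted'.
  ENDLOOP.

  COMMIT WORK.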

DTP ID not equal across systems

I used the DTP ID to transform data fields differently depending on which kind of load into the cube was running.

You receive the DTP ID along with other useful information such as the DTP timestamp (which you can use to assign a load time), as described in the SDN post below:

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/6428

However, it turned out that if you check the DTP ID with an IF/CASE statement, the DTP ID in ITG/PRD is different from the one in DEV.

But we coded a “translation function module” that does the trick: it always returns the DEV DTP ID, so your IF/CASE statement keeps working.

Sample code:

DATA: i_tstmp_start  TYPE rstimestmp,
      load_timestamp TYPE c LENGTH 14,
      load_date      TYPE d,
      i_dtp          TYPE rsbkdtpnm,
      i_curr_dtp     TYPE rsbkdtpnm.

* Get the start timestamp from the DTP request information
CALL METHOD p_r_request->get_tstmp_start
  RECEIVING
    r_tstmp_start = i_tstmp_start.

* Derive the load date (YYYYMMDD) from the timestamp
load_timestamp = i_tstmp_start.
load_date      = load_timestamp+0(8).

* Get the ID of the currently running DTP
CALL METHOD p_r_request->get_dtp
  RECEIVING
    r_dtp = i_curr_dtp.

* As the DTP ID changes during transport, we have to track back
* to the original (DEV) name
CALL FUNCTION 'Z_GB_GET_DTP_ORIGINAL'
  EXPORTING
    i_dtp          = i_curr_dtp
  IMPORTING
    o_dtp_original = i_dtp.
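For completeness, here is a hedged sketch of how such a translation function module could look. The mapping table ZGB_DTP_MAP and its fields are hypothetical – the actual module may work differently – but the principle is a simple lookup that falls back to the current ID:

FUNCTION z_gb_get_dtp_original.
*"----------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(I_DTP) TYPE  RSBKDTPNM
*"  EXPORTING
*"     VALUE(O_DTP_ORIGINAL) TYPE  RSBKDTPNM
*"----------------------------------------------------------------------

* ZGB_DTP_MAP is a hypothetical mapping table (maintained once per
* system) holding the local DTP ID and the corresponding DEV DTP ID
  SELECT SINGLE dtp_dev FROM zgb_dtp_map
    INTO o_dtp_original
    WHERE dtp_local = i_dtp.

* No mapping found (e.g. we are running in DEV): keep the current ID
  IF sy-subrc <> 0.
    o_dtp_original = i_dtp.
  ENDIF.

ENDFUNCTION.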

Open Hub – no aggregation / no multiprovider as source

With 7.0, open hub got quite an upgrade. You no longer need BAdI code to do transformations, or custom ABAP to set filters dynamically (e.g. to load only part of the cube data), because with a 7.0 open hub you have a full BW transformation + DTP between the source and your open hub destination (flat file / DB table). Therefore the 7.0 open hub should be used.

However, it comes with some downsides:

1. For whatever reason, a 7.0 open hub can’t use a MultiProvider as source (this changes with 7.3, as far as I know).

2. A 3.5 InfoSpoke aggregated the data: e.g. if you extracted only monthly data from a weekly cube, it summarized all the weekly records into one monthly record. BW 7.0 won’t do that. If you have the technical key selected, you will get 4+ records per month (depending on how many weeks you have); if you have the semantic key, the load will dump with a duplicate record error.

It seems that you need an InfoSource in front of the open hub: the data is aggregated there, and then flows from the InfoSource to the open hub. I haven’t tried it, but SDN discusses this approach and it makes sense. I don’t know whether SAP changes anything here with 7.3.
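To illustrate the aggregation the 3.5 InfoSpoke did implicitly, here is a hedged sketch of how you could collapse weekly records into monthly ones yourself in a transformation end routine. The field names (CALWEEK and the key figures) are made-up examples; the real RESULT_PACKAGE structure depends on your open hub destination.

* Hypothetical end-routine sketch: aggregate weekly records into
* monthly ones before they reach the open hub destination
DATA: lt_monthly LIKE result_package,
      ls_result  LIKE LINE OF result_package.

FIELD-SYMBOLS: <ls_monthly> LIKE LINE OF lt_monthly.

LOOP AT result_package INTO ls_result.
*   Clear the week and the technical record number so that rows
*   differing only in these fields become identical ...
    CLEAR: ls_result-calweek, ls_result-record.
*   ... then COLLECT sums all key figures per remaining key
    COLLECT ls_result INTO lt_monthly.
ENDLOOP.

* Renumber the aggregated records
LOOP AT lt_monthly ASSIGNING <ls_monthly>.
    <ls_monthly>-record = sy-tabix.
ENDLOOP.

result_package[] = lt_monthly[].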
