Hi All,

Today I'm going to share one of the most efficient ways to fix long-running loads.

In one of our applications there is a delta load that usually completes in seconds, but for the past two days it has been running much longer than usual. We started our investigation by checking the background job, which was hanging; we also tried full loads, but they did not complete either.

There was actually no error message. The load that usually runs in a matter of seconds was now taking more than 5 hours without processing any data. The source had 56,000 records to be processed, yet the DTP ran for almost 4.5 hours and did not process a single record.

Because of this, we had to cancel the load and delete the request, since it was holding up the rest of the processes.

We then tried running the load with the data package size reduced from 50,000 to 1,000, set semantic grouping on Sales org, division, customer sales and distribution channel, and also enabled the setting "Get All New Data Request by Request" (since there is more than one delta request from the source).

As per the logic between source A and target B, it picks up all the records from target DSO B based on Sales org, division, customer sales and distribution channel.

Now the DTP load is trying to process 120,405 records with record mode = "N". You can imagine how many records the lookup on DSO B brings back (definitely more than half a million), and the ABAP heap memory limit is hit once it reaches the maximum. So we continued running with a small data package size and semantic grouping, which also keeps the lookup cost and processing optimised.
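To make the lookup concrete, here is a minimal sketch of how such an end-routine lookup between A and B typically looks. The table name /BIC/AZB_DSO00 and the field names below are placeholders rather than the actual objects from this scenario, and RESULT_PACKAGE is assumed to contain the four semantic key fields.

* Sketch of the lookup inside the end routine of the A -> B transformation.
* /BIC/AZB_DSO00 stands for the active table of target DSO B; the field
* names stand for Sales org, division, customer sales and distribution channel.
TYPES: BEGIN OF ty_b_lookup,
         salesorg   TYPE c LENGTH 4,
         division   TYPE c LENGTH 2,
         customer   TYPE c LENGTH 10,
         distr_chan TYPE c LENGTH 2,
       END OF ty_b_lookup.

DATA: lt_b_lookup TYPE STANDARD TABLE OF ty_b_lookup.

* FOR ALL ENTRIES on an empty table would read the whole of DSO B,
* so guard against an empty package first.
IF result_package IS NOT INITIAL.

* With semantic grouping on these four fields, all records of a given key
* combination arrive in this one package, so the lookup result stays
* bounded per package instead of being fetched again in other packages.
  SELECT salesorg division customer distr_chan
    FROM /bic/azb_dso00
    INTO TABLE lt_b_lookup
    FOR ALL ENTRIES IN result_package
    WHERE salesorg   = result_package-salesorg
      AND division   = result_package-division
      AND customer   = result_package-customer
      AND distr_chan = result_package-distr_chan.

  SORT lt_b_lookup BY salesorg division customer distr_chan.

* ... apply the business logic that merges lt_b_lookup into RESULT_PACKAGE ...

ENDIF.

Reading only the fields you need and keeping the lookup table sorted per package keeps the memory footprint small, which is exactly what the reduced package size is meant to achieve.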

The same loads now complete successfully within 15 minutes after making the proposed changes to the DTP settings (summarised in the Solution section below).

In the screenshot below you can see the explosion of the source package (DSO A): compare the LINES READ and LINES TRANSFERRED columns.

The total delta created in DSO A is 0.12 million records, which exploded to 3.4 million, roughly 27 times bigger in terms of volume! It is simply not feasible to accommodate all 3.4 million records in a single data package.

Analysis:

With the initial DTP settings, i.e. a data package size of 50,000 records, no semantic grouping and parallel processing = 3, a total of 3 packages would be created (with the records split as 50,000 + 50,000 + 24,901). Because the records were not semantically grouped, the same key combinations of Sales org, division, customer sales and distribution channel processed in package 1 could also end up being processed in the other data packages. So the number of records explodes, and the amount of ABAP heap memory reserved per background job is simply not sufficient to accomplish this data load.
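As a quick back-of-the-envelope view of that package math, here is a purely illustrative sketch using the figures quoted in this post:

REPORT z_dtp_package_math.

* Purely illustrative arithmetic, using the figures quoted in this post.
DATA(lv_delta_records) = 120405.
DATA(lv_package_size)  = 50000.

* Number of data packages the DTP creates at this package size (= 3 here).
DATA(lv_packages) = ( lv_delta_records + lv_package_size - 1 ) DIV lv_package_size.

WRITE: / |Packages created: { lv_packages }|.

* Without semantic grouping, a key combination whose records are spread over
* all packages has its DSO B rows read once per package, so the lookup volume
* can approach lv_packages times that of a semantically grouped load.
WRITE: / |Worst-case lookup multiplier without grouping: { lv_packages }|.

In other words, with the original settings the same DSO B records could be fetched up to three times, on top of the explosion already visible in the LINES READ column.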

This is exactly where we find the ABAP memory shortage dumps in ST22. Even when the process runs in PRIV memory mode, if it still cannot get enough memory to continue, you can find the job not progressing, because it waits for other jobs to release memory.

Coming to the other point about load type (delta or full): be it delta or full, the logic treats the loads in the same way, because we do not consider record mode = "X" (of the change log). So it makes no difference whether you run delta or full, even if you run the loads with the same selection as in the change log.
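For reference, "not considering record mode X" usually comes down to a one-liner in the start routine along these lines (a sketch only; the field name RECORDMODE is assumed to exist in the source structure):

* Drop the before-images coming from the change log so that delta and full
* requests are handled by the same logic (sketch; RECORDMODE assumed present).
DELETE source_package WHERE recordmode = 'X'.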

Furthermore,

It was the same case in the past, where 6K records exploded to 0.6 million (roughly 100 times bigger!) and the total runtime to complete that job was 38 minutes.

Solution:

• Ensure that semantic grouping is always maintained, for both full loads and delta loads.

• Reduce the data package size from 50,000 to 1,000, and if you still see performance issues, consider an even smaller package size (e.g., 500 records per package); the sketch after this list shows one way to estimate a suitable size from the data.

• To boost load performance, you can consider using up to 6 parallel work processes.
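If you want a data-driven way to decide between 1,000 and an even smaller package size, a distribution check along the following lines can help. The table and field names are again placeholders for the active table of DSO B and its semantic key fields; this is a sketch, not the exact objects from this scenario.

* Hypothetical sizing check: how many DSO B rows each semantic key combination
* pulls into the lookup. Keys with very large counts are a hint to keep the
* data package size small (1,000 or even 500 records).
TYPES: BEGIN OF ty_key_stats,
         salesorg     TYPE c LENGTH 4,
         division     TYPE c LENGTH 2,
         customer     TYPE c LENGTH 10,
         distr_chan   TYPE c LENGTH 2,
         rows_per_key TYPE i,
       END OF ty_key_stats.

DATA: lt_key_stats TYPE STANDARD TABLE OF ty_key_stats.

SELECT salesorg division customer distr_chan
       COUNT( * ) AS rows_per_key
  FROM /bic/azb_dso00
  INTO CORRESPONDING FIELDS OF TABLE lt_key_stats
  GROUP BY salesorg division customer distr_chan.

SORT lt_key_stats BY rows_per_key DESCENDING.
* The top entries show the heaviest key combinations and therefore the largest
* lookup result a single data package may have to hold.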

Thanks,

Siva
