Generating data for DataSource 0UC_SALES_STATS_02 for initialization and delta processing in a short time
BW and batch functionality in SAP IS-U (ECC 6.0).
This article explains how to speed up the simulation job EBW_DQ_SS for the initialization and delta runs. I present tips for loading the data very quickly by using the available variant options and source-system resources.
The utility industry generally deals with very large volumes of data. The SAP industry solution for Utilities (IS-U/CCS) carries out a number of tasks in batch, i.e. as background jobs, as an effective way of processing high data volumes. Certain simulation jobs run to create the delta for BI processing.
One such job is:
1) EBW_DQ_SS – Sales Statistics -> BW Delta Queue
Generally, the following steps need to be carried out before fetching the data from the source system for 0UC_SALES_STATS_02:
1. In BW, run a full load for 0UC_SALES_STATS_02 (it will extract no records and remain with a yellow status, which is OK).
2. In BW, run the initial load for 0UC_SALES_STATS_02 (it should extract 1 record and end with a green status).
3. In BW, check RSA7; there should now be an entry in the delta queue for 0UC_SALES_STATS_02.
4. Close reconciliation keys (you can check table DBESTA_BWPROT afterwards to see which reconciliation keys are available for extraction).
5. Transfer the reconciliation keys to G/L (functional team).
6. Run mass activity EBW_DQ_SS.
7. Check table DBESTA_BWPROT to see which documents are staged for extraction, or view the delta queue.
8. In BW, run the delta load for 0UC_SALES_STATS_02.
The simulation job EBW_DQ_SS generates the data to be transferred to BI. It picks up all print documents whose reconciliation key is closed, so ideally this program should be scheduled after the close-reconciliation-key job in the source system.
The problem arises at initialization or reload time: the job then has to be run for the whole period, and if the volume is large, the simulation job can run for a very long time.
In our case we had to reload the data for a go-live. Since the source system was already live, there was a huge amount of data. We had to complete the simulation and extraction over a Saturday and Sunday so that they would not impact the source system, and for business reasons we could not take any downtime.
So we decided to run EBW_DQ_SS in small intervals with 200 background jobs on the source system.
As we can see in the variant, we can maintain criteria that divide the load into small intervals, and in Number of jobs we can assign the background jobs to be allocated for this run.
For EBW_DQ_SS the variant object is based on the reconciliation key of the document, so the intervals are divided by reconciliation key. A background job can then be assigned to each interval, allowing the data to be processed in parallel and reducing the run time.
We can create the intervals in the following way.
Click the New Variant button as shown in the following screen.
We will then get the following screen.
If we select Interval length, the number of intervals varies according to the daily number of reconciliation keys.
If we select Number of intervals, the interval length varies instead.
For example, suppose we have 60 reconciliation keys:
If we set the interval length to 20, the system creates 3 intervals.
If we set the number of intervals to 4, the system creates 4 intervals, each holding 15 reconciliation keys.
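The arithmetic behind the two options can be sketched as follows. This is an illustrative Python sketch, not SAP code; the function names and the `RKnnnn` key format are invented for the example.

```python
import math

def split_by_interval_length(keys, interval_length):
    """Fixed interval length: the system derives the number of intervals."""
    return [keys[i:i + interval_length] for i in range(0, len(keys), interval_length)]

def split_by_interval_count(keys, interval_count):
    """Fixed number of intervals: the system derives the interval length."""
    length = math.ceil(len(keys) / interval_count)
    return split_by_interval_length(keys, length)

# 60 reconciliation keys, as in the example above (key names are made up).
recon_keys = [f"RK{n:04d}" for n in range(1, 61)]

print(len(split_by_interval_length(recon_keys, 20)))          # 3 intervals
print([len(i) for i in split_by_interval_count(recon_keys, 4)])  # [15, 15, 15, 15]
```

Either way, every reconciliation key lands in exactly one interval; only the shape of the split differs.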
In Number of jobs we specify how many background jobs are allocated, so that each interval is assigned to a background job once the simulation starts. If there are more intervals than jobs, the next interval is picked up as soon as a background job becomes free. In our case we used 200 background jobs and divided the run into small intervals.
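The scheduling described above behaves like a worker pool: a fixed number of jobs work through the queue of intervals, and a freed job immediately takes the next waiting interval. A minimal sketch, again purely illustrative and not SAP code (the per-interval work is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_interval(interval):
    # Placeholder for the per-interval simulation work done by EBW_DQ_SS;
    # here we just report how many reconciliation keys the interval held.
    return len(interval)

# 4 intervals of 15 reconciliation keys each (key names are made up).
intervals = [[f"RK{n:04d}" for n in range(i, i + 15)] for i in range(1, 61, 15)]

# 4 "background jobs" working through the intervals in parallel; with more
# intervals than workers, each finished worker would pick up the next one.
with ThreadPoolExecutor(max_workers=4) as pool:
    processed = sum(pool.map(simulate_interval, intervals))

print(processed)  # 60 keys processed in total
```

With 200 jobs and many small intervals, the slowest interval (rather than the sum of all intervals) dominates the elapsed time, which is why the run shrinks so dramatically.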
We fetched almost 7 million data records. With 200 jobs, our simulation completed in 15 hours on the source system, and the extraction completed in 20 hours.
For each delta run we can also use the Number of jobs parameter with various variant options, so that the job finishes faster.
During the build we also transferred the data into the DSO and the cube using the DTP option to load request by request, thereby distributing the load within the BI system as well.