This is a continuation of my previous post, Dissection of a long running Hierarchy Attribute Change run job. In this blog we are going to discuss the most common reasons for a slow-performing change run job.

First of all, when we find a slow change run, our first thought is to cancel it. It is okay to cancel it, correct the specific problem, and restart; however, prior to doing this, we should review the information shown in the Change Run Monitor. If we are not sure which of the aggregates might be causing the problem, it is a good idea to deactivate only the aggregates that have not yet been adjusted.

Then, once these aggregates are deactivated, we can restart the change run job and allow it to finish successfully. When it has completed, the previously deactivated aggregates should be manually reactivated and filled. This provides an interim solution to ‘unlock’ the system in case of a never-ending change run.

How to cancel the change run properly:

1. Locate the server and the specific work process where the job is running.

2. Select the work process by marking it with a check.

3. Go to the menu path Process -> Cancel Without Core.

Some of the known reasons for a slow-performing change run job are:

Poor modeling decisions: Too many aggregates may have been created, so it is advisable to check whether any unused and unnecessary aggregates can be deleted by looking at their usage (number of calls) and last-used date.
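
As a rough illustration only, this check can be thought of as a simple rule of thumb: an aggregate with very few calls and no recent usage is a candidate for review. The function and thresholds below are hypothetical, not an SAP-delivered check.

from datetime import date, timedelta

def is_deletion_candidate(number_of_calls: int, last_used: date,
                          min_calls: int = 10, max_idle_days: int = 90) -> bool:
    """Flag an aggregate for review when it is rarely called and long unused."""
    idle_time = date.today() - last_used
    return number_of_calls < min_calls and idle_time > timedelta(days=max_idle_days)

# Example: an aggregate called only 3 times and last used months ago is worth reviewing.
print(is_deletion_candidate(3, date.today() - timedelta(days=200)))   # True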

Both a characteristic and its navigational attribute used in the same aggregate definition with ‘*’ aggregation level (for example, 0CUSTOMER and 0CUSTOMER__0CUST_GRP): This is NOT recommended, as it increases the runtime of the change run. It is enough to have only the characteristic 0CUSTOMER in the aggregate definition; a query with a selection on 0CUSTOMER__0CUST_GRP will still access the aggregate, even though only 0CUSTOMER is included in the definition.

Too many uncompressed requests in the F fact table of the InfoCube or aggregate: On all database platforms it is strongly recommended to compress InfoCube and aggregate data regularly; for Oracle it is particularly important. We have to check the number of entries in the package dimension table /bic/DxxxxxxP. A maximum of 20 to 30 partitions (uncompressed requests) per F table is recommended.
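
As a sketch of this check, the following helper counts the entries in the package dimension table of an InfoCube. It is hypothetical, not an SAP tool: the cube name "ZSALES" and the open DB-API cursor are assumptions.

def count_uncompressed_requests(cursor, infocube: str) -> int:
    """Count entries in the package dimension table /BIC/D<infocube>P."""
    table = f'"/BIC/D{infocube}P"'            # package dimension table of the cube
    cursor.execute(f'SELECT COUNT(*) FROM {table}')
    return cursor.fetchone()[0]

def compression_recommended(cursor, infocube: str, limit: int = 30) -> bool:
    """More than roughly 20-30 uncompressed requests per F table is a warning sign."""
    return count_uncompressed_requests(cursor, infocube) > limit

# Usage (assuming `cursor` is an open DB-API cursor on the BW database):
# if compression_recommended(cursor, "ZSALES"):
#     print("Compress the InfoCube before the next change run")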

Too many partitions defined on the E fact table of the InfoCube or aggregates (Oracle databases): We might want to be prepared for the future and, for example, partition an InfoCube on 0FISCPER through the year 2030. However, this can cause performance problems today: it extends the runtime of the change run by increasing the number of database operations that must be performed during the reconstruction of an aggregate.
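
To put a rough number on this, here is a back-of-the-envelope calculation; the start year and the 12 posting periods per year are assumptions for illustration only.

# Partitioning on 0FISCPER from 2011 through 2030 with 12 posting periods per year:
years = range(2011, 2031)            # 20 fiscal years
periods_per_year = 12
partitions = len(years) * periods_per_year
print(partitions)                    # 240 partitions, most of them still empty today
# Every aggregate reconstruction during the change run has to deal with all of
# these partitions, even though only a fraction of them contain data.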

DB statistics are not up to date: DB statistics must be up to date for the database to choose the optimal access paths to the data. A change run may never finish because of a poor execution plan.

Degenerated indexes: Degenerated indexes can also cause longer runtimes, as the data cannot be accessed in the most efficient way on the database.

RSADMINC-DELTALIMIT is set too low or too high: SAP recommends setting this parameter to 20. This means that if less than 20 percent of the master data of a characteristic has changed, the aggregate is adjusted with the ‘delta’ method; if more than 20 percent has changed, the aggregate is ‘reconstructed’ (rebuilt completely). If DELTALIMIT is set too high, adjusting the aggregate with the delta method can take longer than doing a rebuild. If it is set too low or not set at all, all aggregates are ‘reconstructed’ during each change run.
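
The decision rule described above can be summarized in a small sketch. The function and its inputs are illustrative only, not an SAP API; the default of 20 follows the recommendation quoted above.

def aggregate_adjustment_method(changed_master_data_pct: float,
                                deltalimit: float = 20.0) -> str:
    """Decide how the change run adjusts an aggregate for a characteristic."""
    if changed_master_data_pct < deltalimit:
        return "delta"          # adjust the aggregate incrementally
    return "reconstruct"        # rebuild the aggregate completely

print(aggregate_adjustment_method(5.0))    # delta
print(aggregate_adjustment_method(35.0))   # reconstruct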

Missing SAP Notes may also be one of the reasons. We should not forget to look for database-related performance notes that refer to index rebuilding or statistics creation.

 Additional ways to improve performance: 

1. Execute the change run in parallel.

2. Refer to SAP Note 1388570 (BW Change Run). We have to be aware that the parallelized version of the change run only has performance benefits if aggregates from several InfoCubes have to be adjusted.

3. Schedule the change run when the system load is lightest. A heavy system load can cause the change run to take more time, so we should try to schedule this job during periods of low or limited system activity.

4. Customize the BLOCKSIZE parameter. During the initial filling of an aggregate, or its rebuild during the change run, resource-intensive operations are performed on the temporary tablespace. It is recommended to set the BLOCKSIZE parameter so that the data is read in several blocks, which prevents resource problems when rebuilding an aggregate. A conceptual sketch of this block-wise processing follows below.

Go to transaction SPRO -> SAP NetWeaver -> Business Intelligence -> Performance Settings -> Parameters for Aggregates -> Block Size. This is a database-dependent setting and should be tested by a Basis person to determine the best performance when filling an aggregate.
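
The effect of BLOCKSIZE can be pictured with a purely conceptual sketch (this is not the actual BW implementation): the fact table is processed in several smaller row ranges instead of one large statement, which limits the load on the temporary tablespace.

def read_in_blocks(total_rows: int, blocksize: int):
    """Yield (start, end) row ranges that cover the fact table block by block."""
    for start in range(0, total_rows, blocksize):
        yield start, min(start + blocksize, total_rows)

# With 10 million fact rows and a BLOCKSIZE of 1,000,000 (example values),
# the rebuild is split into 10 smaller steps instead of one big one.
print(list(read_in_blocks(10_000_000, 1_000_000))[:3])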

I hope you have enjoyed reading this blog. Please feel free to provide your feedback or post your comments.

Thanks.