
Dissection of a long running Hierarchy Attribute Change run job


Friends, let’s accept it: our work life is so packed and monotonous that most of the time we don’t find time for a proper root cause analysis. Instead, we cancel a long running change run, correct a specific problem, and then restart the change run. Before doing this, however, we can take the following points into account.

The change run is taking too long to finish, and this is delaying data loads and rollups. But it’s not clear why the change run is so slow now: it used to take only 1 to 2 hours, and now it takes 7 or 8. To answer this, please read through this blog, where I describe the tips we usually follow in my day-to-day work.


Several factors can be responsible for this, and here are a few of them:

Amount of master data to be changed                          

Amount of hierarchies to be adapted

Number and size of aggregates and their basis InfoCubes

General load on the system                                        

Status of the database (statistics & indices)


It is possible and reasonable that the change run takes a while (several hours) to finish successfully. However, we should see progress of the job in SM37, the Change Run Monitor, or ST04. It is a good idea to gather as much background information as possible by asking our team mates the following questions.

Is this the first time the change run has taken exceptionally long to finish?  If not, since when?

Is there a previous change run job that finished in normal time, whose application log we can also look at? If so, we have to get the name, date, and time of that job. It can be helpful to compare the amount of changes made to the master data and aggregates in each run, and then compare the runtimes on that basis.

Nevertheless, we have to check with our Basis colleagues whether the database statistics for all BW tables (including the X/Y and P/Q tables) are up to date. How often are we collecting the DB statistics, and when was the last time? This can be checked using RSRT; however, this tool may not always be accurate. We can confirm any yellow or red lights using transaction DB20 and checking the specific table.

Have we changed anything in the system recently?  Have we applied a support package, or a database patch? Have we recently created new aggregates on the system? 


A couple of ways to identify the Hierarchy/Attribute change run job in the system:


RSA1, Tools, Apply Hierarchy/Attribute Change:  We will see an entry with the status ‘Hierarchy/Attribute Change is running.’


We have to look for jobs that are currently ‘active’ and have a name starting with the prefix ‘BI_STRU*’ or ‘BI_PROCESS_ATTRIBCHAN’; otherwise, we can look at all currently active jobs in the job overview.
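Since SM37 is a GUI transaction, there is no API to show here, but the prefix check above can be sketched in a few lines. Everything in this snippet except the two prefixes is a made-up illustration (the job list would really come from the SM37 job overview):

```python
# Hypothetical job list as (name, status) pairs -- invented sample data,
# standing in for what the SM37 job overview would show.
jobs = [
    ("BI_STRU20240101120000", "active"),
    ("BI_PROCESS_ATTRIBCHAN", "active"),
    ("BI_PROCESS_LOADING", "finished"),
]

# The two change-run job-name prefixes mentioned in the text.
PREFIXES = ("BI_STRU", "BI_PROCESS_ATTRIBCHAN")

# Keep only active jobs whose name starts with either prefix.
change_run_jobs = [
    name for name, status in jobs
    if status == "active" and name.startswith(PREFIXES)
]
print(change_run_jobs)
```

Note that `str.startswith` accepts a tuple of prefixes, which keeps the filter compact.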

After identifying the job name, we can check:

What time was the change run started?

What is the current duration of this change run?

Which server is the change run running on?

What work process number and PID are associated with this job? 

We can find most of this information by pressing the ‘Job Details’ button.


Change Run Monitor:

The Change Run Monitor will show all characteristics that are involved in the current change run, along with a list of aggregates that have already been adjusted and, more importantly, those that still have to be adjusted.

The Change Run Monitor can be accessed a few different ways: 

Transaction CHANGERUNMONI

RSA1, Tools, Apply Hierarchy/Attribute Change…, click on the icon for the Change Run Monitor.   Wait a few seconds and a new session will be started.


SE37, single test function module RSDDS_CHANGERUN_MONITOR




Application Log (transaction SLG1)


Here we can see a list of all navigational attributes, hierarchies, and aggregates that are affected during this change run. We can also see how long it takes to adjust each one and the method that was used to adjust it (delta, rebuild, etc.).


General Rules: 

The larger the number of changes to the Master Data, the longer the change run will take.

The larger the number of records contained in the aggregate, the longer the change run.



If the BW statistics OLAP flags are turned on for a basic InfoCube, statistics records are written to the BW statistics table regarding new fills of its aggregates, aggregate rollups, and change runs.

We can select or sort the records by the field STARTTIME. Here we can get an overview of how the aggregate has been accessed over a specific period of time.

The field STARTTIME has the format YYYYMMDDHHMMSS.
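As a small illustration, a STARTTIME value in this format can be parsed with Python’s standard library; the sample value below is invented, not taken from a real system:

```python
from datetime import datetime

# STARTTIME is a 14-character timestamp: YYYYMMDDHHMMSS.
starttime = "20240315073045"  # hypothetical sample value

# Parse it into a datetime so records can be sorted or filtered by period.
parsed = datetime.strptime(starttime, "%Y%m%d%H%M%S")
print(parsed)  # 2024-03-15 07:30:45
```

Once parsed, records can be sorted chronologically or restricted to a date range with ordinary datetime comparisons.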


In this table, we can identify how the data in an aggregate was last modified from the ‘CHANGEMODE’ field.  Valid entries for this field are:

N – New fill (change run or initial fill)

R – Roll up (change run or rollup)

D –  Delta (change run)
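The three CHANGEMODE values above can be decoded mechanically. In this sketch, only the value-to-meaning mapping comes from the post; the sample records are invented:

```python
# CHANGEMODE values and their meanings, as listed above.
CHANGEMODE = {
    "N": "New fill (change run or initial fill)",
    "R": "Roll up (change run or rollup)",
    "D": "Delta (change run)",
}

# Hypothetical (aggregate, changemode) records for illustration only.
records = [("100123", "D"), ("100124", "N")]

for aggregate, mode in records:
    print(aggregate, "->", CHANGEMODE[mode])
```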

We can also see how much time was spent on index drop/rebuild, DB statistics creation, and compression. Has the time for these processes been increasing steadily along with an increase in the data volume or is it sporadic?


SM50/SM66 Process Overview:

Once we get the server information and work process number from SM37, we can quickly identify which process to review.

Is the job doing a SELECT, INSERT, or UPDATE on any of the aggregate tables? For Oracle, we can get the complete SQL statement by noting the PID, opening a new session in transaction ST04, choosing Detailed Analysis Menu, Oracle Session, and looking for the PID at the database level. We can ask a DB specialist for help in determining exactly what is happening on the DB, or have a look at the job log in SM37.


