
BIA BLOG Series, Part IV: BIA Monitoring and Maintenance

Hello Everyone –


In BIA BLOG Series, Part I: BIA Changes Everything!, the SAP NetWeaver RIG announced that we would be publishing several topics on the SAP NetWeaver BI Accelerator.  If you haven’t had the opportunity to review the previous blog in our series, you can find it here:  BIA BLOG Series, Part III: Checking Data Consistency with SAP NetWeaver BI Accelerator


Continuing with the fourth part of the BI Accelerator (BIA) blog series, I will address the topic of BIA monitoring and maintenance.


One key area when monitoring and maintaining your BI Accelerator is to ensure you have up-to-date statistics in your BI system.  This is very important because you should review the statistical data for your queries before determining whether they are candidates for BIA.  The key indicator is whether the queries spend most of their time in the Data Manager, meaning they spend most of their time reading the database, as opposed to the OLAP Processor, which is where calculations and conversions are performed.  If a query performs a significant amount of OLAP processing, it is quite possibly not a candidate for BIA.
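To make the decision rule above concrete, here is a minimal sketch in Python.  The function name, timing values, and the 70% threshold are all hypothetical; in practice the timings would come from your BI statistics data.

```python
# Hypothetical sketch: classify a query as a BIA candidate based on
# where it spends its runtime. Values would come from BI statistics.
def is_bia_candidate(db_time_s: float, olap_time_s: float,
                     db_share_threshold: float = 0.7) -> bool:
    """Return True when the query spends most of its runtime in the
    Data Manager (database reads) rather than the OLAP Processor."""
    total = db_time_s + olap_time_s
    if total == 0:
        return False
    return db_time_s / total >= db_share_threshold

# A query dominated by database reads is a good candidate:
print(is_bia_candidate(db_time_s=45.0, olap_time_s=5.0))   # True
# A query dominated by OLAP processing likely is not:
print(is_bia_candidate(db_time_s=5.0, olap_time_s=45.0))   # False
```

The threshold is a tuning knob, not an SAP-defined value; the point is simply that database-read time, not OLAP time, is what BIA can remove.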


Once you have built your indexes in the BI Accelerator, over time the indexes can become larger than the fact table in your BI system; this can be caused either by compression in your BI system or by the deletion of requests.  We recommend that you execute the BI and BIA table comparison job in RSRV on a periodic basis; when there is a significant deviation, greater than 50%, it is time to rebuild your index.
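The 50% deviation rule can be sketched as a simple comparison of row counts.  The function and the example numbers below are purely illustrative; RSRV performs this comparison for you.

```python
# Hypothetical sketch of the 50% deviation rule: compare the BIA
# index row count with the fact table row count and flag a rebuild
# when the index has grown too far past the table.
def needs_rebuild(index_rows: int, fact_table_rows: int,
                  max_deviation: float = 0.5) -> bool:
    """Flag a rebuild when the index exceeds the fact table row
    count by more than max_deviation (50% by default)."""
    if fact_table_rows == 0:
        return index_rows > 0
    deviation = (index_rows - fact_table_rows) / fact_table_rows
    return deviation > max_deviation

# After heavy compression the fact table shrinks while the index
# does not, so the deviation grows:
print(needs_rebuild(index_rows=15_000_000, fact_table_rows=9_000_000))  # True
```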


There are several additional RSRV checks available in your BI system that are specific to BIA and should also be used to help monitor your system’s health, including the BIA alerts as well as the BIA Indexes with Status and Parts.  To configure the BIA alerts, log on via the TREXAdmin tool on the BI Accelerator itself; from there you can select which alerts you would like to activate.  It is very important that someone is tasked with the role of monitoring these alerts and reviewing your BIA system’s health.  On a bi-weekly basis, have someone run the BIA Performance-Tests and the Consistency-Checks.  Also on a bi-weekly basis, execute Check Sums, Existence of indexes for DB tables, Check consistency with random queries, Verification of hierarchy buffer, Check if there are negative or greater DIMIDs and SIDs, all metadata checks, and the DTP check.


You may also want to consider executing all of the above RSRV checks for BIA whenever you apply a new BIA revision, adopting these checks as part of your testing plan for the new revision.


Run the BIA index adjustment after InfoCube activation, whenever a relevant InfoCube has been altered, so on an as-needed basis.


In addition to the RSRV checks, there is also an overall BIA check in transaction RSDDBIAMON2.  Select the system check button, and this will validate communication and execute a simple functionality test with the BI Accelerator: it creates data for an index, fills the index, executes a test query against the created index, deletes that test index, and returns the results.


There are two different types of indexes that can be built on the BI Accelerator.  The standard and most common index encompasses the entire fact table of an InfoCube.  There is also what is called a delta index; this type of index is used when there are large volumes of data, and thus a large index.  Rather than rebuilding a very large index frequently, you can use a delta index containing only the changed data, acting as a supplemental index in the BI Accelerator to reduce rollup time.  If you choose this option, however, there is one additional maintenance step that must be followed: as the delta index grows in size, it must be merged with the main index in BIA.  If you use this option, we recommend scheduling the merge of the delta index each week to keep it from growing too large; simply schedule it over the weekend.
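Beyond a fixed weekly schedule, the merge decision can also be thought of in terms of relative size.  The sketch below is a hypothetical illustration of that idea; the 10% share threshold is an assumption, not an SAP recommendation.

```python
# Hypothetical sketch: decide whether a delta index has grown large
# enough to warrant merging into the main BIA index. The 10% share
# threshold is illustrative only.
def should_merge_delta(delta_rows: int, main_rows: int,
                       max_delta_share: float = 0.1) -> bool:
    """Merge when the delta index exceeds max_delta_share of the
    combined index size."""
    total = delta_rows + main_rows
    if total == 0:
        return False
    return delta_rows / total > max_delta_share

# A delta index holding ~17% of all rows is overdue for a merge:
print(should_merge_delta(delta_rows=2_000_000, main_rows=10_000_000))  # True
```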


One last administrative recommendation: from time to time you may need to perform an index reorganization or redistribution across your blades.  This is sometimes necessary because indexes change in size, as does their usage.  You can launch this capability from the BI Accelerator monitor, RSDDBIAMON2: select the Reorganize BIA Landscape checkbox and press the execute button.


The next blog in the series will be entitled BIA BLOG Series, Part V: Aggregates vs. BIA – What’s the Trade-off?

  • Hi Sara,

    Thanks for sharing the valuable insights into BIA management.

    I just wanted to comment on the criterion you mentioned for rebuilding BIA indexes as they outgrow cubes with obsolete data. I was wondering whether the 50% oversize limit you’re suggesting is based on some performance analysis of the indexes on the BIA server, or is a general guideline?

    We have automated the BIA index rebuilds in our nightly BW batch processing and, as part of this, rebuild BIA indexes on cubes once they exceed a certain threshold percentage. The approach we have taken is to set a higher rebuild threshold of 50% for cubes that are smaller in size and a lower threshold of 25% for cubes that are larger. This is primarily to control the frequency of the BIA rebuilds, so they happen neither too often nor too late.
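    For readers who want to automate something similar, the tiered approach described here could be sketched as follows. The size cutoff between "small" and "large" cubes is hypothetical; only the 50%/25% tiers come from the comment above.

```python
# Hypothetical sketch of a tiered rebuild threshold: smaller cubes
# tolerate a larger relative deviation (50%) before a rebuild,
# larger cubes a smaller one (25%). The 50M-row size cutoff is an
# illustrative assumption.
def rebuild_threshold(cube_rows: int, size_cutoff: int = 50_000_000) -> float:
    return 0.5 if cube_rows < size_cutoff else 0.25

def should_rebuild(index_rows: int, cube_rows: int) -> bool:
    if cube_rows == 0:
        return index_rows > 0
    deviation = (index_rows - cube_rows) / cube_rows
    return deviation > rebuild_threshold(cube_rows)

# A large cube is held to the stricter 25% threshold:
print(should_rebuild(index_rows=80_000_000, cube_rows=60_000_000))  # True
```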

    Also, we have noticed that the % difference in size as shown in the BIA monitor is not always accurate. For example, immediately after an index rebuild the monitor still showed us a 12% oversized index.

    Upon investigating the code, we found that the row count for the cube that is updated in the DBA stats table DBA_TABLES does not exactly match the count in the cube tables. In many cases the stats recorded a lower count than the cube, and hence the size comparison showed an oversized BIA index when in reality it had exactly the same count as the cube tables.

    Our DBAs told us that the stats counts are for cost-optimizing the SQL and need not be the same as the actual count in the tables.

    I would suggest that SAP look into this so that the % variances reported in the BIA monitor are more accurate.


    • Hi Ravi,

      The detailed description of the % variance warning says that you must have up-to-date DB statistics for this to be correct. I would suggest that you include a statistics update step in your process chain after the InfoCube load. This should help 🙂


      • That is exactly what we have it setup as.

        But the point I was trying to make is that the algorithm SAP’s code uses in BIA to compute the % difference between the cube and the index size is based on the row counts in DBA_TABLES, which is updated by the stats job.

        And as I mentioned, the stats job, by design of the database, computes this count for cost optimization, not to get an exact table row count.

        We even tried using 100% of the data to compute the stats, and it was still underreporting the row counts in the cube tables, causing the BIA index to be reported as oversized when it was not.

        So I think the logic in the % computation needs to be changed so that the counts in the cube tables are compared to the BIA row count, but that could add overhead to the index check jobs.

    • Some of the aspects you should start thinking about as a replacement for nightly indexing include auto-remodeling your InfoCubes. Rather than re-indexing your InfoCubes nightly, another solution might be to leverage a BWA cockpit that allows you to visually see which cubes need re-indexing.
      On the question of re-indexing to maintain size, we have witnessed that most customers have a lot of redundant characteristics in their BW. While performance is not critical in BWA, the size of your BW InfoCubes certainly is. When we move from the InfoCube to the DSO, we find this redundancy factor all the more prevalent, as we always kept a lot of ‘maybe’ objects in our DSOs, mainly for future use.
      As we plan for BWA, we need to reconsider these decisions.
      The reasons are threefold:
      1. When we designed our DSOs and InfoCubes, none of us did it for BWA.
      2. When we designed our ODSs or DSOs, we did it for strategic data retention.
      3. Most BW instances go through multiple development cycles and developers, as each developer can handpick a key figure and attach it to a specific query. Sometimes developers created multiple key figures with the same name or content: redundancy.
      You also need to start thinking about visually reviewing your process chains and comparing the BW-indexed InfoCubes with the BWA InfoCubes. This allows proactive monitoring of all BWA-to-BW mapping.  Customers are using tools that display data in BWA visually, allow optimal selections by time slice, and provide visual cockpits for process chain runs, data reconciliation, and other critical BWA support needs.
      Hope this helps
  • Hello everyone,

    I was wondering why, in this blog on maintenance and monitoring of BIA, the CCMS integration, and consequently the possibility of BI reporting via Solution Manager, is not discussed.

    Kind regards,

    Michael Warwas

    • Hi Michael,

      That’s an interesting point.  We are really trying to focus on BIA with this blog series, and reporting via Solution Manager would be a bit of a departure from the main point, but perhaps it is an area we can revisit in the future.  Thanks for the suggestion.



  • Hi,

    For one of our valued clients, after a long history of poor performance during query execution by the productive users, we advised the client to put BIA in place.

    After BIA was implemented successfully and everything was in place, we still feel that BIA has not added the expected value.

    So, kindly highlight the architecture and the added benefits that BIA brings with NetWeaver BI, and how it improves overall performance and makes query runtimes faster.



    • Hi Bhagirathi,
      Here are some of the BWA issues we faced.
      1. How do we remodel our InfoCubes perfectly for BWA, and what do we do with reporting DSOs? Though current recommendations state that re-modeling is not required for performance, and these are absolutely right, re-modeling will help a lot in decreasing the BW footprint on the BWA. When we talk of the DSO, we are entering ‘Murphy’s Law’ territory: our queries will only be as fast as the slowest access. So if you have a MultiProvider report that sources data from 2 InfoCubes and 1 DSO, then the InfoCubes, if indexed in BWA, will provide blistering response, but the response will then be slowed down by the DSO components of the report.

      2. The TREX in BWA does sequential reads only. So if you compare one re-modeled InfoCube with a normal un-BWA-modeled InfoCube, there is almost no difference; let’s say the difference is 0.04 seconds, which is beyond human perception. However, most customers deploying BWA have many concurrent users. In my last BWA project we undertook performance tests for 6,000 concurrent users. If 1,000 of these users are accessing BWA and each query has a response lapse of 0.04 seconds, we suddenly see the issue.

      3. We also noticed, by running RSRT with the cache disabled and then collecting critical queries into a large group, that in almost all the queries we had an exceptional improvement in Data Manager time, but there was almost no gain in OLAP time and other OLAP processes.

      One needs to conduct a deeper analysis of why your performance gains are not as expected (I call it Business Value Attainment, or BVA), and then level the expectations with the users and optimize the W of the SWOT analysis.

      Hope this is helpful

    • When installing BWA, start thinking along these lines:
      1. What is the impact of re-modeling my InfoCubes? (See the paper ‘Positioning the InfoCube in SAP BW’ on SDN.)
      2. What do I do with my reporting DSOs for BWA deployment?
      3. What is the impact of ‘slimming’ my InfoCubes, as a one-time and annual saving?
      4. How important is modeling the InfoCube in a BWA environment?
      5. In a perfect world, will modeling for BW differ from modeling for BWA, and from modeling for BO Explorer?
      6. How do I optimize for my BWA deployment?
      7. What are some of the things I must do:
         7.1 Prior to BWA realization
         7.2 Soon after BWA becomes a reality
         7.3 Periodically after BWA is routine
  • Monitoring the BWA is critical and time-consuming using the standard manual processes.
    Three steps that require monitoring are:
    1. Process chain runs
    2. Data reconciliation
    3. A lesser-known task: forecasting when your BWA will need additional blades.

    Some other areas for monitoring, as rightly pointed out, are indexing and viewing the differential between BWA- and BW-indexed cubes.
    One method is to manually monitor these one process, or InfoCube, at a time; the other is to leverage modern BWA visual cockpits that allow administrators to, for example, proactively monitor all process chains and all data reconciliations from two cockpits.
    Visualization of data growth in BW and BWA is also now available in visual cockpits.
    Critical automation of BWA administration is possibly the logical move forward as we strive to increase the efficiency of both monitoring and proactive forecasting for the BWA.