
BIA BLOG Series, Part III: Checking Data Consistency with SAP NetWeaver BI Accelerator

Hello everyone,

In our blog “BIA Changes Everything!” the SAP NetWeaver RIG announced that it would publish several topics
around the SAP NetWeaver BI Accelerator.

As this is the third in a series of six BIA blogs being developed by the SAP NetWeaver BI RIG, I hope
you continue to find value and provide comments. If you haven’t done so already, please consider
reading the second blog first:
BIA BLOG Series, Part II: Preparing for a SAP NetWeaver BI Accelerator Implementation.

In this blog I would like to give you an idea of how to check data consistency when using
the SAP NetWeaver BI Accelerator (BIA). Many customers ask which tools are available and how to apply them.

To check data consistency, we recommend using the following tools, which already
exist in your SAP NetWeaver BI system:


  1. Transaction RSTT: CATT traces of your TOP 10 business queries (scheduled via test jobs)
  2. Transaction RSDDBIAMON2: checks in the BIA consistency check center (scheduling of check set jobs, e.g. in a process chain)
  3. Checks in transaction RSRV (create packages and include them in a process chain)
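The core idea behind the RSTT CATT traces is simple: a reference result set is recorded for a query, and later replays of the same query are compared against it cell by cell. The following Python sketch illustrates only that comparison idea; the data is hardcoded sample data, and this is not an SAP API.

```python
# Illustrative sketch of an RSTT-style regression check: a recorded
# reference result set is compared cell by cell against the result of
# a replayed query run. All values are hypothetical sample data.

def compare_result_sets(reference, replay):
    """Return a list of (cell_key, ref_value, replay_value) mismatches."""
    mismatches = []
    for key in sorted(set(reference) | set(replay)):
        ref = reference.get(key)
        new = replay.get(key)
        if ref != new:
            mismatches.append((key, ref, new))
    return mismatches

# Reference result recorded before BIA indexing (hypothetical query cells).
reference = {("2008", "US"): 1200.0, ("2008", "DE"): 800.0}
# Result of the same query replayed against the BIA index.
replay = {("2008", "US"): 1200.0, ("2008", "DE"): 795.0}

for key, ref, new in compare_result_sets(reference, replay):
    print(f"Deviation in cell {key}: reference={ref}, replay={new}")
```

Any deviation reported here would be the trigger for a deeper analysis, e.g. with the detailed RSRV checks mentioned below.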


See also the following matrix to get an impression of which checks, for what purpose, and
at which frequency could be scheduled in your SAP NetWeaver BI system.

The matrix shows the check tools (CATT traces; checks in the consistency check center; RSRV checks; checks published in notes for detailed analysis in case of incidents) and the reasons why data could have become inconsistent. The “x” indicates in which case a certain check is relevant.
In general, some tests are marked as “fast” or “time-consuming”, with an indication of the recommended frequency.
Nevertheless, even a data comparison check can be fast if small tables are chosen for the drill-downs. We can therefore only give a general overview, as the runtime always depends on the size of the checked tables.
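The grouping of checks into “fast” and “time-consuming” classes, each with its own scheduling frequency, can be sketched as follows. The check names and their classifications below are illustrative assumptions, not an SAP-delivered configuration.

```python
# Sketch of how checks from a matrix like the one above could be grouped
# by runtime class into scheduling frequencies. Check names and their
# classifications are illustrative assumptions only.

CHECKS = [
    ("Index existence check", "fast"),
    ("Index/table size comparison", "fast"),
    ("RSTT trace replay (TOP 10 queries)", "time-consuming"),
    ("Data comparison BIA vs. database", "time-consuming"),
]

FREQUENCY_BY_CLASS = {"fast": "daily", "time-consuming": "weekly"}

def build_schedule(checks):
    """Map each scheduling frequency to the checks that should run at it."""
    schedule = {}
    for name, runtime_class in checks:
        schedule.setdefault(FREQUENCY_BY_CLASS[runtime_class], []).append(name)
    return schedule

for frequency, names in build_schedule(CHECKS).items():
    print(f"{frequency}: {', '.join(names)}")
```

In practice, the frequency chosen per check also depends on table sizes, as noted above, so such a grouping is only a starting point.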

Due to the permanent changes in a SAP NetWeaver BI system, you cannot guarantee data consistency on a daily basis.
Also, for the RSTT checks you have to define a timeframe in which no data changes take place in the BI and BIA systems.

The following document gives you an idea of how to configure the available check tools.

The topic of troubleshooting, when errors or deviations occur, is not covered in this blog.


A deeper analysis in case of incidents can be done by:


  • using further detailed RSRV checks (see matrix)
  • following the notes listed below
  • opening a customer message with all the detailed information on how to reproduce the problem and the analysis details already available (e.g. a trace ID from an RSTT check)

Please also refer to the following notes:


1095886  Checking the data consistency in BI Accelerator
1060387  Analysis with incorrect results in BIA queries
1161967  BIA shadow index: Enhancement of analysis option
1052941  Creating shadow indexes in SAP NetWeaver BI Accelerator
1168802  SPARSE compression for shadow index does not work
1070848  Several problems with BIA indices
1162365  Collective note: BIA and incorrect data



The next blog in the series will be entitled BIA BLOG Series, Part IV: BIA Maintenance and will appear in SDN in the coming weeks.

      Former Member
      Your blog series is really helpful for the entire BI community. Thank you very much for this excellent contribution from SAP.

      Well, I have one thought which I want to share with you. It may sound silly, but I want to share it. It is true that with the advent of BIA, aggregates became jobless in the BI world.
      Now, how nice would it be if there were a tool (maybe an extension of BIA) which would be a replacement for MultiProviders and InfoSets as well.

      I mean, the user would have the flexibility at query runtime to combine/join certain InfoCubes/ODS objects/InfoObjects dynamically and view the results without any prior settings. It could be accomplished using the same sort of index procedure that BIA adopted to access the information quickly and join/combine it.

      Modeling-, performance- and database-wise it would be a great advancement.

      But here, unlike with BIA, we have a tough challenge, because we need to access multiple objects (InfoCubes/ODS objects/InfoObjects) at a time, for which the software requirements and processing speed would have to be high.

      It is just an imaginary thought; I don't know whether it is practically feasible or not.

      I would like to hear your view.


      Former Member (Blog Post Author)
      Some customers keep basis aggregates in the beginning. As basis aggregates do not contain navigational attributes, they do not affect the change run, and therefore the administration advantage of BIA still takes effect.

      With the use of the backup & recovery functionality (which is currently in a pilot phase) and maybe also a disaster-tolerant solution, there should not be long system downtimes anymore in case of failures. And even in the case of corrupt indexes without backup and recovery, the initial indexing procedure should be fast when enough resources are used and it is executed in parallel.

      There are already developments going on to accelerate further objects with BIA; see the following SDN document (especially pages 57, 59, 60 and ff.). So you see, your thoughts are very feasible.

      Kind regards