Hello Everyone -

As this is the second in a series of six BIA blogs being developed by the NetWeaver BI RIG, I hope you continue to find value and provide comments. If you haven't already, please read the first blog in the series, BIA BLOG Series, Part I: BIA Changes Everything!, before continuing.

One of the first steps when preparing for a BIA implementation is to determine how many InfoCubes, and which ones, will use BIA. The number of InfoCubes helps determine the sizing of the BIA appliance you order. There are two schools of thought when making this determination:

  1. Put the entire environment in BIA. This can appear costly up front; however, the performance gain, the decrease in DASD usage, and the time removed from the nightly load window can significantly offset the up-front cost of BIA.
  2. Only choose those InfoCubes whose query statistics show a large amount of time spent in the database, or which are the foundation for management cockpits or dashboards. To identify these InfoCubes, rely on the BI statistics queries delivered with BI Content. Using the technical name of each query, identify which ones spend large amounts of time on data selection. In addition, you can execute queries in RSRT to further understand the performance: select the query, click Execute + Debug, and when the list of options appears, select Display Statistics Data and Do Not Use Cache. After the results are returned, click the green "<--" button to reach the Statistics Data for Query Runtime page. Under the Event Text column, notice an event called Data Manager; this is equivalent to time spent in the database. BIA addresses queries with large amounts of time spent in the database. Secondly, by clicking on the Aggregation Layer tab, you can see which InfoCubes and aggregates are being used to execute the query. With a combination of these tools (see the sketch after this list), you can quickly identify which queries and InfoCubes should be targeted for BIA.
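
As a rough illustration of the selection logic in option 2, the sketch below ranks queries by the share of their runtime spent in the database (the Data Manager event seen in RSRT). The record layout and field names (query_name, infocube, db_time, total_time) are hypothetical stand-ins for data you would pull from the BI statistics InfoProviders, not an actual SAP API.

```python
# Illustrative sketch: rank queries by share of runtime spent in the
# database (the "Data Manager" event in RSRT statistics). Field names
# are hypothetical; in practice this data comes from the BI statistics
# queries delivered with BI Content.

from dataclasses import dataclass

@dataclass
class QueryStat:
    query_name: str    # technical name of the query
    infocube: str      # InfoCube the query reads from
    db_time: float     # seconds spent in the Data Manager (database) event
    total_time: float  # total query runtime in seconds

def bia_candidates(stats, min_db_share=0.6, min_total_time=10.0):
    """Return (InfoCube, query, db share) tuples for queries that spend
    most of their runtime in the database and are slow enough to matter."""
    candidates = []
    for s in stats:
        if s.total_time >= min_total_time and s.db_time / s.total_time >= min_db_share:
            candidates.append((s.infocube, s.query_name, s.db_time / s.total_time))
    # Worst offenders first: highest database share at the top
    return sorted(candidates, key=lambda c: c[2], reverse=True)

# Example: one database-bound query and one OLAP-bound query
stats = [
    QueryStat("ZSALES_Q01", "ZSALES_C01", db_time=45.0, total_time=50.0),
    QueryStat("ZFIN_Q02",   "ZFIN_C01",   db_time=2.0,  total_time=40.0),
]
print(bia_candidates(stats))  # only ZSALES_Q01 qualifies for BIA
```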

Go-live strategy – Like all other IT projects, a BIA implementation should actively involve the business users. With BIA especially, the enthusiasm generated when business users begin to see the performance improvements, and how much easier it becomes to meet service level agreements, goes a long way toward a successful implementation. However, setting expectations is required: keep in mind that the majority of the query performance gain will come from queries that spend long periods of time in the database. BIA will not provide much help for queries whose time is spent in the OLAP processor.

Once the InfoCubes have been determined, decide on the strategy for activating BIA in a production environment. Provided testing in the DEV and QA environments is successful, your options are to activate all at once or to use a staggered approach. Activating all at once provides all business users with immediate performance boosts and quickly shows signs of lowering TCO. If problems are identified for certain InfoCubes, BIA can be deactivated with the flip of a switch; there is no long, arduous task of turning BIA off. This simplicity makes activating BIA for all InfoCubes very appealing.

Staggering the rollout of BIA across multiple InfoCubes takes much longer, and performance gains in one area but not in others often lead business users to question whether BIA is working. Regardless of the method chosen, refer to OSS Note 1161525 to turn off BIA per user/query. This allows you to turn on BIA but have only certain users access the index, which can be helpful when you roll out BIA by user group.
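
To make the per-user-group rollout concrete, here is a minimal sketch of the kind of gating that approach implies: only users in groups that have already been cut over read from the BIA index, while everyone else continues against the database. The wave numbers, group names, and the use_bia_index function are all invented for illustration; the actual switch is the setting described in OSS Note 1161525, not custom code.

```python
# Hypothetical model of a staggered BIA rollout by user group.
# The real mechanism is the per-user/query switch from OSS Note 1161525;
# this only illustrates the gating logic.

ROLLOUT_WAVES = {
    1: {"FINANCE_PILOT"},                         # wave 1: pilot group only
    2: {"FINANCE_PILOT", "SALES"},                # wave 2: add sales users
    3: {"FINANCE_PILOT", "SALES", "ALL_USERS"},   # wave 3: everyone
}

def use_bia_index(user_group: str, current_wave: int) -> bool:
    """True if this user's queries should be answered from the BIA index."""
    return user_group in ROLLOUT_WAVES.get(current_wave, set())

print(use_bia_index("SALES", 1))  # False: sales not yet cut over
print(use_bia_index("SALES", 2))  # True: included from wave 2 onward
```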

You will also need a strategy for deactivating aggregates and removing secondary indexes on the E and F fact tables. My recommendation is to go live with both BIA and aggregates until IT and the business users are comfortable. Once this occurs, typically after four to six weeks, begin turning the aggregates off and deleting them. This allows you to regain disk space and reduces the time it takes to execute the change run. In addition, because queries will now be executed against BIA, there is no longer as much need to maintain secondary indexes on the fact tables. The exceptions, where the indexes are still needed, are:

  1. The initial filling of BIA indexes.
  2. InfoSets.
  3. Queries that use the request ID as a characteristic.
  4. Queries containing virtual InfoObjects for which BIA is deactivated.
  5. 3.x queries that still use calculated key figures with the "calculate before aggregation" property turned on.
  6. Using open hub on the fact table.
  7. Transferring data from one InfoCube to another DataProvider.

Often, the indexes are as large as the fact tables themselves, so dropping them allows you to recover additional disk space and speeds up data loads because index updates/inserts no longer take place.
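
The exceptions above amount to a checklist, which the sketch below encodes as a simple decision function. Every flag name is invented for illustration; none corresponds to an actual SAP table, setting, or API.

```python
# Illustrative checklist: does a BIA-indexed InfoCube still need its
# secondary fact-table indexes? Encodes the exceptions listed above;
# all flag names are hypothetical.

def still_needs_fact_table_indexes(cube) -> bool:
    """Return True if any exception applies and the indexes should stay."""
    exceptions = [
        cube.get("initial_bia_fill_pending", False),    # initial filling of BIA indexes
        cube.get("used_in_infosets", False),            # InfoSets read the tables directly
        cube.get("query_uses_request_id", False),       # request ID as a characteristic
        cube.get("bia_off_for_virtual_objects", False), # BIA deactivated for virtual InfoObjects
        cube.get("calc_before_aggregation", False),     # 3.x "calculate before aggregation"
        cube.get("open_hub_on_fact_table", False),      # open hub reads the fact table
        cube.get("feeds_other_dataprovider", False),    # data mart transfer to another DataProvider
    ]
    return any(exceptions)

cube = {"used_in_infosets": True}
print(still_needs_fact_table_indexes(cube))  # True: keep the indexes
```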

If the deletion of aggregates and indexes becomes part of your BIA strategy, it's important to understand the potential impact if BIA for some reason becomes unavailable. If this should happen, queries will execute against fact tables with no aggregates or indexes, causing very slow query performance. That's why these activities should be part of the strategy but executed only once you're comfortable with BIA.

BIA and Process Chains – In order for delta data loads to make their way from the InfoCube to BIA, a process chain must be in place with the process type "roll up" at the end. This is the same process type used in a process chain to roll up aggregates. After identifying the InfoCubes you would like to send data to BIA, determine whether each process chain already has a roll-up step at the end; if it doesn't, one needs to be added. This is a task which can be performed prior to the BIA implementation. However, where requests are frequently deleted from the InfoCube, or selective deletion of data from an InfoCube occurs in the process chain, additional work will be required. These tasks will be further documented in an upcoming blog in the BIA series, BIA Maintenance.
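
As a quick sanity check before go-live, you could audit your delta load chains for a trailing roll-up step. The sketch below does this over a hypothetical in-memory representation of the chains; the chain names and step labels are illustrative, not actual process type codes, and in a real system you would inspect the process chains themselves.

```python
# Illustrative audit: which delta load chains are missing a trailing
# "roll up" step? The chain/step representation is hypothetical.

def chains_missing_rollup(chains: dict) -> list:
    """Return names of chains whose final step is not a roll up."""
    missing = []
    for name, steps in chains.items():
        if not steps or steps[-1] != "ROLLUP":
            missing.append(name)
    return missing

chains = {
    "ZPC_SALES_DELTA": ["LOAD", "DTP", "ROLLUP"],  # ok: rolls up to BIA
    "ZPC_FIN_DELTA":   ["LOAD", "DTP"],            # missing a roll-up step
}
print(chains_missing_rollup(chains))  # ['ZPC_FIN_DELTA']
```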

 

The next blog in the series, BIA BLOG Series, Part III: Checking Data Consistency with SAP NetWeaver BI Accelerator, will appear on SDN in the coming weeks.
