Technology Blogs by Members

Author(s): Anil Kumar Puranam and Peesu Sudhakar

Company: Deloitte

Created on: 24th December, 2014

Authors' Bios:

Sudhakar Reddy Peesu is a PMP-certified SAP Technology Manager in Deloitte Consulting. He has more than 16 years of experience leading SAP BW on HANA, BW 3.x/7.x/BPC/ABAP implementations, custom applications, and developing IT strategies covering all phases of the SDLC using onsite/offshore delivery models. He has strong communication and project management skills, with project experience in the Banking, Consumer Products, and Public Sector industries.

Anil Kumar Puranam is working as a Senior BW/BI Developer in Deloitte Consulting. He has more than 10 years of SAP BW/BI/BO experience. He has worked on various support/implementation projects while working in consulting companies such as Deloitte, IBM India, and TCS.

Introduction:

     The SAP NetWeaver BW Accelerator (BWA) is an appliance that boosts SAP NetWeaver BW performance, based on SAP's search and classification engine and on specially configured hardware. SAP NetWeaver BW customers adopting the BWA can expect radical improvements in query performance: sophisticated in-memory data compression and partitioning enable faster data reads and writes, improving overall load and query runtimes with near-zero administrative overhead.

Intended Audience:

 

     This document shares the capabilities of BWA and is intended for SAP NetWeaver Business Warehouse (SAP BW) professionals working with BWA and/or having a good understanding of BW and BWA functionality and capabilities.

Build the BWA with Split/Partitioned Indexes

     Generally we enable and activate the BWA for InfoProviders (cubes) with an initial fill, and thereafter roll up delta data loads. The first initial activation may be done as part of the deployment activity, when we first start the data load to a cube, when we first configure the BWA, or when a cube's BWA index gets corrupted and needs to be rebuilt with another initial fill. The standard BWA configuration sets the BWA index on a single blade (server), which means that all the data records/volume will sit on a single blade. If a cube is expected to hold a huge volume of data, and many cubes sit on one blade, we may end up filling the allocated size on the blade, in which case the rollup step fails. Also, the write-optimization process within the rollup step may take additional time to complete, during which we may see interruptions to reporting.

In order to balance and distribute the cube data volume across all the blades within the BWA server, we have an option to force the creation/split of the data volume and distribute it across the blades. Here we can pick the cubes estimated to hold large volumes to take advantage of this split, and leave the small-volume cubes with the standard system setting.
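As a back-of-the-envelope illustration of this sizing decision, the Python sketch below (our own illustration, not an SAP tool; the per-blade capacity is an assumed placeholder) estimates how many index parts a cube's expected volume would occupy when spread across blades:

```python
import math

# Illustrative sketch only: estimate how many BWA index parts a cube needs
# when its expected volume is spread across blades. The per-blade capacity
# of 500 million records is a made-up placeholder, not an SAP figure.
def blade_parts(expected_records, per_blade_capacity=500_000_000):
    """Number of index parts needed to hold the expected record volume."""
    return max(1, math.ceil(expected_records / per_blade_capacity))

print(blade_parts(2_000_000_000))  # 4 parts for a 2-billion-record cube
print(blade_parts(50_000_000))     # 1 part: a small cube keeps the default
```

A small cube stays at one part, matching the advice to leave small-volume cubes on the standard system setting.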

When we initially activate the BWA index for a particular cube, before clicking 'Activate and Fill BWA Index', go to the 'Load Admin.' tab, select the 'Own Setting' radio button, and enter the expected volume that will be loaded to BWA. (In a follow-up with SAP, we understood that mentioning any volume is okay, as this creates the split across the blades.)

Navigation and steps are as below:

1. Go to transaction code 'RSDDB'.

2. After entering the cube name, select the 'Goto' menu -> choose 'BWA Monitor' as shown below.

3. In the BWA Monitor screen, when we index into BWA for the first time (i.e., the initial fill), we need to switch to 'Own Setting' in the 'Load Admin.' tab and enter a high value for the number of records. This will force the system to split the BWA index.

Estimate Run time of Fact Table Indexing

Generally, when we want to build BWA for a particular cube, the question arises of how much time is required for building the BWA index, apart from other dependency questions, especially in environments where data loads run frequently throughout the day. This RSRV test gives the system's estimate of the time required to fill the fact index. The program for this test considers the current parameter values for background and dialog parallel processing. The time is calculated from the processes available and the estimated best throughput of data records in the database, the application server, and the BIA server.

We need to keep in mind that the calculated duration is an estimate; the factors that influence the actual duration are the load on the system, the distribution of data across block criteria, and deviations during processing. Still, this gives a useful starting estimate for the time to complete the BWA index build.
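Conceptually, the estimate combines record volume with parallel throughput. The sketch below is our own simplified illustration of that idea, not SAP's actual RSRV algorithm; all figures are assumed:

```python
# Simplified illustration (not SAP's actual RSRV algorithm) of how an
# indexing runtime estimate can be derived: total fact-table records divided
# by the combined throughput of the available parallel processes.
def estimate_index_runtime(records, recs_per_sec_per_process, parallel_procs):
    """Estimated seconds to fill the BWA fact index."""
    return records / (recs_per_sec_per_process * parallel_procs)

# 500 million records, 250,000 records/sec per process, 8 parallel processes:
print(estimate_index_runtime(500_000_000, 250_000, 8))  # 250.0 seconds
```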

Navigation and steps are as below:

Transaction RSRV --> Tests in Transaction RSRV --> All Elementary Tests --> BW Accelerator --> BW Accelerator Performance Checks --> Estimate runtime of fact table indexing --> double-click it, enter the cube name, and execute.

Here is the sample output that we receive after executing the RSRV test. We can see the estimated time, in seconds, for building the BWA index for the cube concerned.

Estimate Memory Consumption of Fact Table Index

     The other important estimation we need to perform before starting the initial fill of BWA for any large cube is to estimate the storage requirements of the fact index. This RSRV test estimates the size of the fact index of the BWA index. In doing so, the system analyzes the data in the fact table and provides a projection.

Note that if data distribution is poor, the actual memory consumption can deviate from the projected value. A more exact analysis would demand more time than rebuilding the index, since the number of different values in the fact table would need to be determined for each column (COUNT DISTINCT).
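To see why COUNT DISTINCT per column matters, the sketch below illustrates a dictionary-encoded column projection. This is our own rough model with assumed constants, not SAP's projection formula:

```python
import math

# Rough model (assumed, not SAP's formula) of a dictionary-encoded column:
# each row stores only an index into the column's dictionary of distinct
# values, so memory depends directly on the COUNT DISTINCT per column.
def projected_column_kb(rows, distinct_values, dict_entry_bytes=8):
    """Projected memory in KB for one dictionary-encoded column."""
    bits_per_row = max(1, math.ceil(math.log2(distinct_values)))
    data_bytes = rows * bits_per_row / 8             # packed row indexes
    dict_bytes = distinct_values * dict_entry_bytes  # the dictionary itself
    return (data_bytes + dict_bytes) / 1024

# A low-cardinality column compresses far better than a high-cardinality one:
print(projected_column_kb(1_000_000, 10))       # few distinct values: small
print(projected_column_kb(1_000_000, 500_000))  # many distinct values: large
```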

Navigation and steps are as below:

Transaction RSRV --> Tests in Transaction RSRV --> All Elementary Tests --> BW Accelerator --> BW Accelerator Performance Checks --> Estimate memory consumption of fact table index --> double-click it, enter the cube name, and execute.

Here is the sample output that we receive after executing the RSRV test. We can see the estimated memory, in KB, for building the BWA index for the cube concerned.

BWA alert when the index size reaches the threshold limit, and comparing BWA index sizes to identify cubes that benefit from split/partition across the blades

A cube with 2 billion records cannot be indexed into BWA. A rollup or other process that brings the total entries of the cube to 2 billion will result in a CONVT_OVERFLOW dump. The number of records that are read from the database and indexed into the BWA is assigned to a variable of type INT4, a signed 4-byte integer. The decision for INT4 was made due to the general availability of that type across all hardware platforms supported by SAP at the time of that decision.

Now the type INT4, a signed 4-byte integer, has a possible range of values of [-2^31 ... +2^31-1], or [-2147483648 ... +2147483647].

These are the theoretical minimum and maximum. BW only uses the non-negative integers, and only up to a maximum value of 2,000,000,000 (2 billion), so the variable can hold a value only up to 2 billion. Independent of BW, the problem also exists if we just count the number of entries from SE16/SE11 when the table has more than 2^31 records. As said, this is a restriction of the ABAP runtime environment.
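The behavior of a signed 4-byte integer at this boundary can be demonstrated outside ABAP as well. The Python sketch below is our own illustration; note that ABAP raises a CONVT_OVERFLOW dump rather than silently wrapping:

```python
import struct

INT4_MAX = 2**31 - 1  # 2147483647, the maximum of a signed 4-byte integer

def as_int4(value):
    """Reinterpret an integer as a signed 4-byte (INT4) value."""
    return struct.unpack('<i', struct.pack('<I', value & 0xFFFFFFFF))[0]

print(as_int4(INT4_MAX))      # 2147483647: still fits
print(as_int4(INT4_MAX + 1))  # -2147483648: past the boundary, the value wraps
# ABAP does not wrap silently; exceeding INT4 raises a CONVT_OVERFLOW dump.
```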

To avoid rollup failures because a cube has reached the above-mentioned limit, we can use the custom ABAP utility program below to get alerts indicating that a cube is reaching the threshold limit we set when running the program.

The custom ABAP program (YBWA_ALERT_SEND):

    • The program has two sections:
      • It sends an alert email for cubes whose index size is reaching the threshold limit.
      • It checks the BWA load admin settings (System Setting vs. Own Setting) and recommends the cubes that would benefit from changing to Own Setting, to distribute the volume across the blades and avoid BWA corruptions and repeated index rebuilds.
    • The program is exception-based: it sends an email with the exceptions to the distribution list entered in the first parameter. This means it can be executed as frequently as needed, as an email is sent only when the exception criteria are met. We can add the program to a process chain to run these checks on a daily basis.
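The actual ABAP source is attached at the end of the document; the Python sketch below merely illustrates the exception-based logic with made-up cube names and sizes (the 80% threshold is an example parameter, not a fixed value of the program):

```python
# Illustration only (the real program is the attached ABAP YBWA_ALERT_SEND):
# report only the cubes whose BWA index record count exceeds a chosen
# percentage of the 2-billion INT4 limit. Names and sizes are made up.
INT4_LIMIT = 2_000_000_000

def cubes_over_threshold(index_sizes, threshold_pct=80):
    """Return only the cubes that exceed threshold_pct of the record limit."""
    cutoff = INT4_LIMIT * threshold_pct / 100
    return {cube: n for cube, n in index_sizes.items() if n > cutoff}

sizes = {"ZSALES_C01": 1_750_000_000, "ZFIN_C02": 400_000_000}
print(cubes_over_threshold(sizes))  # only ZSALES_C01 exceeds 80% of 2 billion
```

Because the result is empty when no cube breaches the threshold, a wrapper can run daily (e.g., in a process chain) and send mail only on a non-empty result, matching the exception-based behavior described above.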

Here is the Screen shot of the Custom program:

The sample output of the report:

Program code is attached at the end of the document.

This is a technical limitation, and indexing cubes with more than 2 billion records is not supported. If you have such a cube to be indexed, you can follow one of the possible solutions:

Cube Compression & rebuild the BWA:

If the cube holds huge data and remains uncompressed or partially compressed, with the F fact table holding most of the data, then you can perform a cube compression, by which the data gets aggregated; depending on your scenario, this may reduce the total number of records in the fact table. This is not a long-term solution, as the data will keep growing and you will eventually hit the 2 billion limit again.
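As a toy illustration of why compression can shrink the fact table (made-up rows, simplified to one characteristic and one key figure): records that differ only in their request ID collapse into a single aggregated record.

```python
from collections import defaultdict

# Toy model of cube compression: rows from the uncompressed F fact table that
# differ only in request ID are aggregated into the compressed E fact table.
f_table = [
    # (request_id, material, quantity) -- made-up sample data
    (1, "MAT_A", 10),
    (2, "MAT_A", 5),
    (2, "MAT_B", 7),
]

e_table = defaultdict(int)
for _request_id, material, quantity in f_table:
    e_table[material] += quantity  # the request ID is dropped by compression

print(dict(e_table))  # {'MAT_A': 15, 'MAT_B': 7}: 3 rows became 2
```

How much the row count shrinks depends entirely on how many rows share the same characteristic combinations across requests, which is why the reduction is scenario-dependent.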

Even if we compress the cube regularly (with a few days' delay), the data is accumulated in the BWA in uncompressed form, so it is better to rebuild the BWA index once we approach the threshold, after making sure that all the data in the cube is compressed.

Cube Remodeling:

This is the long-term and ideal solution. You will have to split the cube into smaller cubes depending on your business requirements and create a MultiProvider on top of the smaller cubes. These smaller cubes can then be indexed into BWA.

BWA overloaded alert

The Business Warehouse Accelerator (BWA) runs properly only if we limit the amount of data loaded into it relative to a threshold. The threshold limit is defined as follows: we should load no more than 70% of the available blade memory into the BWA.
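The 70% rule itself is simple arithmetic. The sketch below is our own illustration of the check (memory figures are made up), not SAP's standard check report:

```python
# Illustration of the 70% rule: BWA is considered overloaded when the data
# loaded exceeds 70% of the available blade memory. Figures are made up.
def bwa_overloaded(used_gb, blade_memory_gb, threshold=0.70):
    """True when BWA usage exceeds the recommended share of blade memory."""
    return used_gb > threshold * blade_memory_gb

print(bwa_overloaded(50, 64))  # True: 50 GB exceeds 70% of 64 GB (44.8 GB)
print(bwa_overloaded(40, 64))  # False: 40 GB is under the 44.8 GB threshold
```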

We can use the standard report RSDDTZA_BIA_OVERLOAD_EMAIL, which carries out this check, to get alerts in case of any overloads. You can schedule this report as a job.

If the report finds a problem, it sends an e-mail. You can use report RSDDTREX_ADMIN_MAINTAIN to specify the e-mail addresses that the report sends e-mails to in table RSDDTREXADMIN, according to the following schema (see also Note 1070085):

Object: BIA_NOTIFY_EMAIL_ADDRESS1 (the numbers at the end must be consecutive; the next entry will be BIA_NOTIFY_EMAIL_ADDRESS2, then BIA_NOTIFY_EMAIL_ADDRESS3)
Value: the e-mail address (a.b@c.com)
Action: INSERT

The sample output of the report:

BWA query performance metrics process

The statistics cube 0TCT_C03 can be leveraged to get more information about how the BWA is used and the health of query performance while reading data from BWA.

A BEx query can be created on 0TCT_C03, as shown, to get the trend of BWA read time in seconds per million records. We can provide a date range and MultiProviders as selections. In addition to the time to read 1 million records, other measures such as time to read hierarchy, time to prepare read, and number of records read are available for display in this query.
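Based on the description above, the "Time Read Per Million" measure normalizes BWA read time by the record volume read. The sketch below shows the assumed arithmetic (our reading of the measure; the values are made up):

```python
# Assumed arithmetic of the "Time Read Per Million" measure: BWA read time
# in seconds, normalized per million records read. Values are made up.
def time_read_per_million(read_time_sec, records_read):
    """Seconds of BWA read time per million records read."""
    return read_time_sec / (records_read / 1_000_000)

print(time_read_per_million(12.0, 4_000_000))  # 3.0 seconds per million records
```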

Step1: Create the filters as shown below:

Step 2: The key figures and other columns need to be set up as below. We can also see the free characteristics and the rows part of the query.


Step 3: The formula for “Time Read Per Million” is defined as below:

The sample output is:

