
Introduction

    

SAP NetWeaver Business Intelligence (SAP NetWeaver BI) unites a powerful business intelligence platform, a comprehensive set of tools, planning and simulation capabilities, and data-warehousing functionality, delivered through sophisticated and user-centric enterprise portal technology to provide a coherent, comprehensive solution. With SAP NetWeaver Business Warehouse (SAP NetWeaver BW), we can tightly integrate data warehousing capabilities on a comprehensive and scalable platform, while leveraging best practices to drive business intelligence predicated on a single version of the truth. By combining a scalable and layered architecture, a rich set of predefined business content based on established best practices, and key enterprise information management topologies, SAP NetWeaver BW can help you achieve:

 

  

Reliable data acquisition – Tightly integrate data across all applications and business processes in SAP Business Suite, and enrich access to heterogeneous data sources while improving data quality.

    

Business-oriented modeling – Enable quick and sustainable implementations through modeling patterns based on established best practices and rich predefined business content that matches your business processes. Deploy all models across different subject domains and enable a single version of the truth across your complete data warehouse.

    

Robust analytical capabilities – Support online analytical processing and provide a robust foundation for computing multidimensional business data across dimensions and hierarchies. Benefit from a framework for building planning applications tightly integrated with your enterprise data warehouse.

    

Enterprise-ready life-cycle management – Benefit from sophisticated life-cycle management functionality at three different levels: system life-cycle management, metadata life-cycle management, and information life-cycle management.

    

Streamlined operations – Manage and monitor operations with functionality that actively pushes critical events, along with actionable recommendations for recovery and self-healing, to administrators. Ensure compliance with corporate policies while maintaining high data manageability and consistency.

       

Performance Optimization

 

The SAP Business Intelligence tool has evolved over time, from a tool used solely for analyzing data captured in the ERP system into an enterprise-wide data warehouse that serves as a one-stop shop for raw as well as processed information. Business expectations have also changed over the years, from a nice-to-have information tool to a mandatory one: an organization can gain a competitive edge by responding to market situations in a quicker, better-informed manner.

Service providers are under constant pressure to meet the expectations of the business. These performance expectations can only be met by following industry best practices and using the performance optimization features available in SAP NetWeaver BI. These options are discussed in the sections below.

1. BW Query Performance Optimization

In order to evaluate the data contained in an InfoCube, queries are defined and inserted into workbooks. The set of all InfoObjects of an InfoCube forms the basis of the query definition. In the query definition in the BEx Analyzer, you select certain combinations of characteristics and key figures, or reusable structures.

 

BEx queries are the main logic behind BW reports, and the resulting information is used by end users for analysis and decision making. Query performance therefore plays an important role in delivering reports on time. The following options can be used to improve it.

  

1.1 Compression:

 

Compression of InfoCubes is a quick solution that can be implemented to improve performance. It reduces the number of records by combining records with the same key that have been loaded into the InfoCube in different requests.

Query performance is usually improved significantly. Compress aggregates as soon as possible to avoid query performance issues.

Disadvantage: once a request has been compressed, request-based deletion is no longer possible.
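The effect of compression can be illustrated with a small sketch. This is a hypothetical Python illustration, not SAP code: the record layout and names are invented, but the mechanism mirrors what compression does when it drops the request ID and merges records sharing the same key.

```python
from collections import defaultdict

# Hypothetical fact rows: (request_id, material, plant, quantity).
# Before compression, the same key can appear once per load request.
fact_rows = [
    ("REQ1", "MAT-A", "PL01", 10),
    ("REQ2", "MAT-A", "PL01", 5),
    ("REQ2", "MAT-B", "PL01", 7),
]

def compress(rows):
    """Drop the request ID and sum the key figure for identical keys,
    mimicking how InfoCube compression merges requests."""
    totals = defaultdict(int)
    for _request_id, material, plant, qty in rows:
        totals[(material, plant)] += qty
    return [(mat, plant, qty) for (mat, plant), qty in sorted(totals.items())]

print(compress(fact_rows))
# Three rows collapse into two; the request ID is gone, which is
# exactly why request-based deletion is no longer possible afterwards.
```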

 

1.2 Create Indexes:

 

If you report on DSO objects with a restrictive selection key, check whether indexes are defined for this key. Use indexes on characteristics that you access regularly (for reporting or the data mart interface), and check that the InfoCube indexes exist. On Oracle, you can choose between a bitmap index and a B-tree index.

 

Use a B-tree index if the dimension size exceeds 10% of the fact table size. If you select on navigational attributes, make sure that an appropriate index is available.
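The 10% rule of thumb above can be written as a tiny decision helper. This is a hypothetical sketch for illustration only; the function name and thresholds are not part of any SAP or Oracle API.

```python
def suggested_index_type(dimension_rows: int, fact_rows: int) -> str:
    """Rule of thumb from the text: prefer a B-tree index when the
    dimension table exceeds 10% of the fact table size; otherwise a
    bitmap index, which suits low-cardinality dimensions on Oracle."""
    if fact_rows == 0:
        raise ValueError("fact table is empty")
    ratio = dimension_rows / fact_rows
    return "B-tree" if ratio > 0.10 else "bitmap"

print(suggested_index_type(2_000_000, 10_000_000))  # ratio 0.20 -> B-tree
print(suggested_index_type(50_000, 10_000_000))     # ratio 0.005 -> bitmap
```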

 

1.3 Aggregates:

 

An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in consolidated form in the database. Aggregates make it possible to access InfoCube data quickly in reporting; they serve, in a similar way to database indexes, to improve performance. An aggregate is made up of characteristics and navigation attributes belonging to an InfoCube. If aggregates have been created for an InfoCube and filled with data, the OLAP processor accesses them automatically. When navigating, the results are consistent, and the aggregate is transparent to the end user.

New data is loaded into an aggregate at a defined time using logical data packets (requests). After this roll-up, the new data is available for reporting.
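Conceptually, an aggregate is a pre-computed group-by stored alongside the detailed data. The following hypothetical Python sketch (invented data and names, not SAP code) shows the idea of materializing a coarser view so queries at that granularity never scan the full dataset.

```python
from collections import defaultdict

# Hypothetical InfoCube data at full detail: (customer, month, revenue).
cube = [
    ("C1", "2009-01", 100.0),
    ("C2", "2009-01", 250.0),
    ("C1", "2009-02", 80.0),
    ("C2", "2009-02", 120.0),
]

def build_aggregate(rows, group_index):
    """Materialize an aggregated view: keep one characteristic (here:
    month) and pre-sum the key figure. A query by month can then read
    this small table instead of the detailed data."""
    agg = defaultdict(float)
    for row in rows:
        agg[row[group_index]] += row[-1]
    return dict(agg)

monthly = build_aggregate(cube, group_index=1)
print(monthly)  # one row per month instead of one per customer-month
```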

1.3.1 Aggregates analysis:

 

An aggregate must be considerably smaller than its source, meaning the InfoCube or the aggregate from which it was built; as a rule of thumb, an aggregate should be about 10% of the size of the cube. The number of records contained in a filled aggregate is shown in the “Records” column in aggregate maintenance. The “Summarized Records (Mean Value)” column tells you how many records, on average, have to be read from the source to create one record in the aggregate. Since the aggregate should be about ten times smaller than its source, this number should be greater than ten. The “Valuation” column evaluates each aggregate, ranging from “+++++” for very useful down to “—–” for delete. This valuation is only meant as a rough guide.
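The summarization check described above amounts to a simple ratio test. Here is a hypothetical helper sketching it; the function and its wording are illustrative assumptions, not the actual “Valuation” logic in aggregate maintenance.

```python
def aggregate_valuation(source_records: int, aggregate_records: int) -> str:
    """Mean number of source records summarized per aggregate record.
    Per the rule of thumb, an aggregate should be roughly ten times
    smaller than its source, so a ratio below 10 suggests the
    aggregate may not be worth its maintenance cost."""
    if aggregate_records == 0:
        raise ValueError("aggregate is empty")
    ratio = source_records / aggregate_records
    return f"ratio {ratio:.1f}: " + ("useful" if ratio >= 10 else "reconsider")

print(aggregate_valuation(1_000_000, 40_000))   # 25 source rows per row
print(aggregate_valuation(1_000_000, 400_000))  # only 2.5: too detailed
```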

 

1.3.2 Maintaining Aggregates:

 

• Delete aggregates that are no longer used, or that have not been used for a long time. The last time an aggregate was used is shown in the “Last Call” column, and the frequency of calls in the “Number of Calls” column. Do not delete the basic aggregates that you created to speed up the change run, and bear in mind that some aggregates may simply not be used at particular times.

 

• Do not use a characteristic and one of its attributes at the same time in an aggregate. Since many characteristic values share the same attribute value, the aggregate with only the attribute is considerably smaller than the aggregate with the characteristic. The aggregate with both the characteristic and the attribute has the same level of detail, and therefore the same size, as the aggregate with the characteristic alone, yet it is additionally affected by the change run. The attribute information can be derived from the aggregate containing only the characteristic by means of a join with the master data table.

 

• The aggregate with both the characteristic and the attribute therefore saves only the database join. For this reason, do not create this kind of aggregate.

   

1.4 Query settings:

 

There can be several reasons for poor performance: database performance issues, query processing time, heavy use of exceptions and conditions in the query, non-optimized code in update rules or routines, or a missing aggregate.

 

1.4.1 Performance settings:

Several settings can be made in transaction RSRT by opening the query properties. You can decide whether the query reads data upon navigation or expansion, and choose the cache mode, for example main memory cache with or without swapping.

 

1.4.2 Filters:

 

Filters should be used as much as possible. Using filters reduces the number of database reads and the size of the result set, thereby significantly improving query runtimes; they are especially valuable when applied to large dimensions. If large reports have to be produced, the BEx Broadcaster should be used to generate batch jobs, which can deliver the reports via email, PDF, or printer.

 

1.4.3 Determining slow queries:

 

Transaction RSRT can be used to determine the performance of a query; it indicates whether an aggregate is required, and the query can be debugged to find out where it is slow. RSRV highlights database issues such as the need to rebuild indexes or to apply an SAP Note. These two transactions are a gold mine for debugging and analyzing slow-running queries. RSTT can additionally trace OLAP performance and front-end data transfer.

 

1.4.4 Restricted Key Figure and Line Item dimensions:

 

When Restricted Key Figures (RKFs) are included in a query, conditioning is done for each of them during query execution. This is very time consuming, and a high number of RKFs can seriously hurt query performance, so RKFs should be kept to a minimum in the query. Also, calculations can be done in the InfoProvider instead of the query, since formulas within an InfoProvider are resolved at runtime and held in the cache.
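The idea of moving calculations from query time to load time can be sketched as follows. This is a hypothetical Python illustration of an update-rule-style transformation; the record fields and function name are invented for the example.

```python
# Hypothetical loaded records with two base key figures.
records = [
    {"revenue": 120.0, "cost": 90.0},
    {"revenue": 200.0, "cost": 150.0},
]

def transform_on_load(rows):
    """Update-rule style transformation: compute the derived key figure
    once per record at load time and store it with the record, so
    queries read it directly instead of re-evaluating a formula on
    every execution."""
    for row in rows:
        row["margin"] = row["revenue"] - row["cost"]
    return rows

loaded = transform_on_load(records)
# Queries now select "margin" directly; no per-query formula evaluation.
```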

 

Line item dimensions are basically fields that are transaction-oriented and, once flagged as a line item dimension, are effectively stored in the fact table. This results in faster query access without any table joins.
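The join that a line item dimension eliminates can be sketched like this. The schema below is a hypothetical simplification of the BW star schema (table contents and function names are invented for illustration): normally the fact table holds a dimension ID that must be resolved through a dimension table, whereas a line item dimension puts the surrogate ID straight into the fact table.

```python
# Hypothetical star-schema sketch.
dim_table = {1: 9001, 2: 9002}          # normal dimension: DIMID -> SID
fact_normal = [(1, 100.0), (2, 50.0)]   # fact rows: (DIMID, amount)
fact_line_item = [(9001, 100.0), (9002, 50.0)]  # (SID, amount) directly

def read_normal(fact, dim):
    # Two steps: read the fact row, then join to the dimension table.
    return [(dim[dimid], amount) for dimid, amount in fact]

def read_line_item(fact):
    # One step: the SID is already in the fact table, so no join.
    return list(fact)

# Both paths yield the same result; the line item variant skips a join.
print(read_normal(fact_normal, dim_table) == read_line_item(fact_line_item))
```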
