Hi Guys,

I want to share some techniques that are useful for BO performance tuning.

Tuning Level: Database

> Components to Tune: Execution path of SQL

> Description: Determine the execution plan of the BO-generated SQL in the target database. Explain plan is a handy tool for estimating resource requirements in advance. It displays the execution plan chosen by the Oracle optimiser without executing the query, and gives insight into how to make improvements at the database level.
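As a sketch, the BO-generated SQL can be analysed in a tool such as SQL*Plus like this (the query and table names are illustrative, not from a real universe):

```sql
-- Ask Oracle to record the plan it would choose, without running the query.
EXPLAIN PLAN FOR
SELECT c.region, SUM(s.revenue)
FROM   sales s
JOIN   customer c ON c.cust_id = s.cust_id
GROUP  BY c.region;

-- Display the recorded plan, including estimated cost and row counts.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Full table scans on large tables or expensive sort steps in the plan are good candidates for new indexes or rewritten joins.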

Tuning Level: Universe

> Components to Tune: Modify fetch array parameter

> Description: The array fetch parameter sets the maximum number of rows retrieved per fetch. For example, if the array fetch size is 20 and the total number of rows is 100, then five fetches will be executed to retrieve the data, which consumes more time than a single fetch.

> Resolution: If the network allows sending large arrays, set the array fetch parameter to a larger value. This speeds up the fetch procedure and reduces query processing time. To do this you must change a parameter in the relevant PRM file for the database. Browse the directory

> Business Objects\Business Objects Enterprise xx\dataAccess\connectionServer\Oracle
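The exact parameter name and file location vary by BO version; in XI releases the array fetch size is commonly set in the Oracle .sbo file in the same directory tree, with an entry along these lines (the value 250 is purely illustrative):

```xml
<Parameter Name="Array Fetch Size">250</Parameter>
```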

Tuning Level: Universe

> Components to Tune: Allocate weight to each table

> Description: Table weight is a measure of how many rows a table contains: lighter tables have fewer rows than heavier tables. By default, BO sorts tables from lighter to heavier, but the optimal order depends on the database. For example, Sybase uses the same order as BO, whereas Oracle expects the opposite order. The default sort, with the smallest table first, is therefore optimised for most databases but not for Oracle. So, if you are using an Oracle database, you can optimise the SQL by reversing the order in which BO sorts the tables.

> Resolution: To do this, the Oracle PRM file in the Business Objects settings must be modified as follows. Browse the directory

> Business Objects\Business Objects Enterprise xx\dataAccess\RDBMS\connectionServer\Oracle

> Open the ORACLE.PRM file and change the REVERSE_TABLE_WEIGHT value from Y to N.
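Depending on the version, the PRM entry is either a plain key/value line or an XML parameter; after the change, the key/value form should read along these lines:

```text
REVERSE_TABLE_WEIGHT = N
```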

Tuning Level: Universe

> Components to Tune: Use shortcut joins

> Description: More tables appear in the join than the selected objects require: a table appears in the join condition even when no object from it is selected. For example, if objects from table A and table C are selected, with table B joining them, the BO-generated SQL shows the intermediate table B in the "From" clause.

> Resolution: Shortcut joins allow users to skip intermediate tables by providing an alternative path between tables. In one case this reduced query time from 1.5 minutes to 30 seconds.
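Illustrative SQL for the A-B-C example above (table and column names are hypothetical): without a shortcut join, table B appears only to bridge A and C; with a shortcut join defined between A and C in the universe, BO can generate the shorter form.

```sql
-- Without a shortcut join: B is joined only to bridge A and C.
SELECT a.a_id, c.c_name
FROM   a, b, c
WHERE  a.a_id = b.a_id
AND    b.c_id = c.c_id;

-- With a shortcut join between A and C defined in the universe:
SELECT a.a_id, c.c_name
FROM   a, c
WHERE  a.c_id = c.c_id;
```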

Tuning Level: Universe

> Components to Tune: Use aggregate functions

> Description: Data is aggregated on the subject of analysis (user-selected criteria) at report level. This takes more processing time, as the data from the database is loaded into temporary memory and then aggregated or processed for display.

> Resolution: Use aggregate functions (sum, count, min, max) in measure objects at universe level. Aggregate functions aggregate the data at database level rather than at report level, which saves processing time at report level and also reduces the number of rows returned to the report.
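For example, defining a measure's SELECT with an aggregate pushes the summarisation into the database, so the generated SQL returns one row per group instead of every detail row (names illustrative):

```sql
-- Measure object defined as SUM(s.revenue) in the universe: the database
-- aggregates and returns one row per region rather than all detail rows.
SELECT   c.region, SUM(s.revenue)
FROM     sales s
JOIN     customer c ON c.cust_id = s.cust_id
GROUP BY c.region;
```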

Tuning Level: Universe

> Components to Tune: Use aggregate tables

> Description: Aggregate data is obtained by scanning and summarizing all of the records in the fact table in real time, which consumes more time.

> Resolution: Aggregate tables contain pre-calculated aggregated data. Using aggregate tables instead of detail tables enhances the performance of SQL transactions and speeds up query execution. The Aggregate_Awareness function can dynamically rewrite the SQL to the level of granularity needed to answer a business question. Aggregate tables allow for faster querying and increase query performance many times over.
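In the universe, the measure's SELECT can be defined with the standard @Aggregate_Aware function, listing tables from most to least aggregated (table names illustrative):

```sql
@Aggregate_Aware(agg_sales_by_region.revenue, sales.revenue)
```

BO then generates SQL against the leftmost table that can answer the query, falling back to the detail table only when finer granularity is required.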

Tuning Level: Universe

> Components to Tune: Minimise usage of derived tables

> Description: Since derived tables are evaluated and executed at runtime, SQL tuning is not possible.

> Resolution: Minimise the usage of derived tables and replace them with tables or materialized views. SQL tuning techniques such as creating indexes can be applied to tables or materialized views, which improves the performance of BO reports.
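A minimal sketch of the replacement, reusing the t_curr_tran_daily table mentioned later in this post (the column names are assumptions for illustration):

```sql
-- Replace a derived table with a materialized view that can be indexed.
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT   TRUNC(tran_date) AS tran_day,
         store_id,
         SUM(amount)      AS total_amount
FROM     t_curr_tran_daily
GROUP BY TRUNC(tran_date), store_id;

-- Standard SQL tuning now applies, e.g. indexing the common filter column.
CREATE INDEX idx_mv_daily_sales_day ON mv_daily_sales (tran_day);
```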

Tuning Level: Report

> Components to Tune: Opt for Refresh-at-Will over Refresh-on-open

> Description: A refresh-on-open report fetches new data each time it is opened. A connection to the database is established every time the report is refreshed, which slows report performance.

> Resolution: If a report is static and based on snapshot data, it is better to publish it without the refresh-on-open property. Users will then view the same instance of the report without establishing a database connection, which reduces the response time of the BO report.

Tuning Level: Report

> Components to Tune: List of values (LOVs)

> Description: When we create an LOV object, distinct values are selected into it. DISTINCT forces an internal sort/compare on the table, so selecting distinct values from a large table is not optimal. Example: selecting a distinct list of custom_store against the t_curr_tran_daily table is not optimal.

> Resolution:

> a. Re-map the object's list of values to smaller lookup tables.

> b. If there are no smaller lookup tables, create an external file as the source of the LOV. This file needs to be exported along with the universe and be available to all users, which is additional overhead. However, the external file replaces the need for a lookup table and delivers a performance gain that outweighs the overhead cost.

> c. Avoid creating LOVs on dates and measures. Disassociate the LOV from all objects that are not displayed as prompts.
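The custom_store example from the description can be sketched as follows (the lookup table name and column are hypothetical):

```sql
-- Not optimal: DISTINCT forces a sort/compare over a large transaction table.
SELECT DISTINCT custom_store
FROM   t_curr_tran_daily;

-- Better: source the LOV from a small lookup table instead.
SELECT store_name
FROM   store_lookup;
```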

Tuning Level: Report

> Components to Tune: Conditional Objects

> Description: The entire dataset is fetched from the database (up to the maximum-rows setting) and the filters are applied at report level. As data is not restricted at the database or universe level, the report takes more time to execute.

> Resolution: When handling huge data, one of the following steps can be taken to limit data:

> 1. Use prompts to restrict data selection at universe level. Preferably use time-period prompts in reports.

> 2. Replace report filters with universe condition objects, if possible. Condition objects limit the rows returned at database level.
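The effect of a universe condition object is simply an extra predicate in the generated SQL, so filtering happens before any rows leave the database (names illustrative; the three-month window is an example time-period restriction):

```sql
-- A universe condition object contributes its WHERE clause to the
-- generated SQL, so only the needed rows are returned to the report.
SELECT   s.store_id, SUM(s.amount)
FROM     t_curr_tran_daily s
WHERE    s.tran_date >= ADD_MONTHS(TRUNC(SYSDATE), -3)
GROUP BY s.store_id;
```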

Tuning Level: Report

> Components to Tune: Complex calculation in ETL

> Description: The data is fetched from the database and then calculations are applied to it. As calculations are performed at universe or report level on huge volumes of data, the report takes more time to execute.

> Resolution: When dealing with huge data warehouses, perform complex calculations at the ETL level. Business Objects then saves time on calculations and delivers high performance.
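For instance, a derived measure such as margin can be computed once during the ETL load instead of on every report refresh (the staging and warehouse table names are assumptions for illustration):

```sql
-- Pre-compute the derived column during the ETL load so reports
-- read it directly instead of calculating revenue - cost each time.
INSERT INTO dw_sales (sale_id, revenue, cost, margin)
SELECT sale_id, revenue, cost, revenue - cost
FROM   stg_sales;
```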

Tuning Level: Report

> Components to Tune: Minimise usage of report variables/formulas

> Description: If a report pulls tons of data, performs many joins, and makes lots of calculations using many report variables and formulas, it may run very slowly. Report variables and formulas are loaded and calculated in memory at runtime. As variables are created at runtime and calculations are performed at report level, the report takes more time to execute.

> Resolution: When dealing with big reports, minimise the usage of report variables/formulas and try to move them to the universe to deliver high-performance reports.

Hope this might be useful.

Cheers

Ram