
I just want to give you an overview, not deep details or exact instructions; just some points from which you can start your analysis and tune your system. You should try out all the tables, views and transactions yourself.

  1. Performance issues in summary
  2. Query performance analysis
  3. Cache monitor
  4. ST03n
  5. ST13
  6. ST14
  7. Statistics
  8. ST02
  9. BW Administration Cockpit
  10. Optimizing performance of InfoProviders
  11. ILM  (Information Lifecycle Management)
  12. BWA
  13. Query analyzing example
  14. How Oracle 11g affects BW performance
  15. General Hints

1. The common reasons for performance issues in summary

Causes for high DB-runtimes of queries

  • no aggregates/BWA
  • DB-statistics are missing
  • Indexes not updated
  • read mode of the query is not optimal
  • undersized tablespace PSAPTEMP
  • DB-parameters not optimal (memory and buffer)
  • HW: buffer, I/O, CPU, memory are not sufficient
  • OLAP cache not used?

Causes for high OLAP runtimes

  • high amount of transmitted cells, because read mode is not optimal
  • user exits in query execution
  • usage of big hierarchies

Causes for high frontend runtimes

  • high amount of transmitted cells and formatting sent to the front end
  • high latency in the WAN/LAN
  • insufficient client hardware

2. Query performance analysis

I think this is a really important point (including the OLAP cache) and should be explained in a little more depth.

To get exact runtimes for a before/after analysis, run the query in transaction RSRT with or without cache/BWA etc.:
choose the query, Execute + Debug -> 'Do Not Use Cache' -> 'Display Statistics Data'

Button Properties
activate the cache mode (it can also be activated for the whole InfoProvider)
You should use grouping if you use a MultiProvider where the data of only one cube is changed independently of the others. This way you can avoid invalidating the whole cache.
The following grouping procedures are available:
1) no grouping
2) grouping depending on InfoProvider type
3) grouping depending on InfoProvider type, InfoCubes separately
4) every provider separate

1) All results of an InfoProvider are stored together. If the data of one of the InfoProviders changes, the whole cache must be recreated. This setting should be used when all InfoProviders used by the MultiProvider have the same load cycle.
2) All results are stored grouped by the type of the InfoProvider. This option should be used when basic InfoCubes are combined with a realtime InfoCube.
3) The same as 2), with the additional feature that the results of every InfoCube are stored separately. It should be used when you change/fill the cubes independently of each other.
4) The results of every provider are stored separately (independent of the type). This option should be used when not only InfoCubes but also other provider types are updated separately.
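The four options above boil down to a small decision rule. Here is a hypothetical sketch of that rule; the function and its parameters are purely illustrative assumptions and not any SAP API:

```python
# Illustrative decision helper for the MultiProvider cache grouping options
# described above. The option numbers follow the text; the function itself
# is an assumption for illustration only, not an SAP interface.

def choose_cache_grouping(same_load_cycle: bool,
                          mixes_basic_and_realtime: bool,
                          independent_loads: bool) -> int:
    """Return the grouping option (1-4) for a MultiProvider's OLAP cache."""
    if same_load_cycle:
        return 1  # no grouping: all part-providers load together anyway
    if mixes_basic_and_realtime and not independent_loads:
        return 2  # group by InfoProvider type (basic vs. realtime cubes)
    if independent_loads:
        return 3  # per type, with each InfoCube's results stored separately
    return 4      # every provider separate (also non-InfoCube providers)

print(choose_cache_grouping(True, False, False))
```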

2.1 RSRT Query Properties

You can turn off parallel processing for a single query. In the case of queries with very fast response times, the effort required for parallel processing can be greater than the potential time gain. In this case, it may also make sense to turn off parallel processing.

Just play a little bit with RSRT and the different options to get the optimal settings for your queries!

There are also some special read modes for a query. In most cases the best choice is 'H' (query to be read when you navigate or expand hierarchies).


2.1 RSRT Query properties with grouping

Technical Info

– Performance Info

-> Useage of aggregates, Cache (+delta), compression, status of requests

2.2 RSRT Performance Info

3. Cache monitor

Jump from RSRT into the cache monitor (transaction RSRCACHE).

Cache parameters
General information about the cache parameters; check whether they (runtime object and shared memory) are all well sized. Also have a look at the SAP Help for this.

There are two types of OLAP cache: the cross-transactional cache and the local cache.

One thing you must know: the local cache is used in the following cases:

  • When the cross-transactional cache has been deactivated (see the parameter Cache Inactive).
  • When the cache was deactivated for the InfoProvider (for all future queries) or the query
  • If you determine during runtime that caching cannot take place

Main memory -> objects in list or hierarchy display -> technical info (usage of the selected cache)

Check also buffer consumption under buffer monitor (Exp/ImpMem) and buffer overview (Exp./ Imp. SHM).

Check for which queries it makes sense to store results in the OLAP cache; recommendations from SAP:

How often the query is requested

We recommend that you save queries that are requested very frequently in the cache. Main memory cache is very fast, but limited in size. By displacing cached data, you can cancel out main memory limitations, but this also affects system performance. There are practically no limitations on the memory space available in the database or in the file system for the persistent cache. Accessing compressed data directly in the persistent cache also improves performance.

The complexity of the query

Caching improves performance for queries whose evaluation is more complex. We recommend that you keep complex data processed by the OLAP processor in the cache. (Therefore the cache mode Main Memory Without Swapping is less suitable for such queries.)

How often data is loaded

The cache does not provide an advantage if query-relevant data is frequently changed and therefore has to be loaded frequently, since the cache has to be regenerated every time. If cached data is kept in main memory, data from queries that are called frequently can be displaced, so that calling the data takes more time.
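The three SAP criteria (request frequency, query complexity, load frequency) can be summarized as a rough rule of thumb. The thresholds in this sketch are invented for illustration; pick your own from your statistics:

```python
# Rough caching rule of thumb built from the three criteria above.
# All thresholds are illustrative assumptions, not SAP recommendations.

def worth_caching(runs_per_day: int, olap_time_s: float, loads_per_day: int) -> bool:
    """True if a query looks like a good OLAP cache candidate."""
    frequent = runs_per_day >= 10        # requested often enough
    complex_enough = olap_time_s >= 1.0  # evaluation is non-trivial
    stable_data = loads_per_day <= 1     # cache is not invalidated constantly
    return frequent and complex_enough and stable_data

print(worth_caching(50, 5.0, 1))
```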

For detailed information on which of the following modes should be used, check the SAP Help:

  • Cache is Inactive (0)
  • Main Memory Cache Without Swapping (1) 
  • Main Memory Cache with Swapping (2)
  • Persistent Cache per Application Server (3)
  • Cross-Application Server Persistent Cache (4)
  • BLOB/Cluster Enhanced (5)

You can configure these settings in RSRT (see screenshot 3.1).


3.1 RSRT grouping (BLOB/Cluster Enhanced is deactivated)


3.2 RSRCACHE – Queries in Main Memory

Use delta caching if possible. With this option you avoid invalidating the cached data when the underlying data changes (data loads / process chains), so only the new data is read from the DB.

Hint: Prefilling the OLAP cache via broadcasting (rsa1->administration->broadcasting; documentation)

4. System load Monitor ST03n

ST03N (expert mode) -> click on 'BI system load' to get data like:

  • query runtimes (separated by BEx, BEx Web (ABAP/Java))
  • Process chain runtimes
  • DTP runtimes
  • Aggregate usage

5. ST13 Analyze & Service Toolset (depends on your ST-A/PI level)

There you can find some well-known reports like RSECNOTE, but also new BI tools:

BPSTOOLS BW-BPS Performance Toolset
BIIPTOOLS BI-IP Performance Toolset
BW_QUERY_ACCESSES BW: aggregate/InfoCube accesses of queries
BW_QUERY_USAGE BW: query usage statistics
BW-TOOLS   BW Tools (PC analysis, request analysis, aggregate toolset, IP analysis, DTP request analysis and IO usage)
TABLE_ANALYSIS Table Analysis Tools

These tools all use the RSDD* tables/views and display their content in a colorful and sorted way.


6. ST14

ST14 -> Business Warehouse -> plan analysis -> client 010: choose a date, 'Basis Data (Top Objects)' and 'Basis: Determine Top DB Objects', and schedule it.
You will get a great analysis of your whole BI system, including:

  • top 30 PSA, E-fact, F-fact, Dimension, master data tables, change logs, Cubes ODS/DSO, Aggregates and some special infos for BWA
  • for those who use Oracle, also tables with more than 100 partitions
  • the upload performance for the last weeks
  • Compression rate
  • result of SAP_INFOCUBE_DESIGNS (D- and E-tables in relation to the F-tables)


6.1 ST14 Overview

If you have trouble with the growth of your system, this is a great entry point to start your analysis and find out where the space has gone 😉
So you now know which requests should be compressed and how to get rid of partitions (maybe via repartitioning; RSA1 -> administration -> repartitioning), but keep in mind that repartitioning creates shadow tables in the namespace /BIC/4E<InfoCubename> and /BIC/4F<InfoCubename>.

These tables exist until the next repartitioning, so you can delete them after the repartitioning is completed. Locate and delete empty F-partitions via report SAP_DROP_EMPTY_FPARTITION (note 430486).

7. Statistics

Transaction RSDDSTAT: statistics recording (tracing) settings for InfoProviders/queries etc.

Views: RSDDSTAT_OLAP (OLAP + frontend statistics) and RSDDSTAT_DM (MultiProvider, aggregate split, DB access time, RFC time).
Use transaction SE11 to view their content.
Use the column AGGREGATE to identify whether a query uses aggregates or the BWA: aggregates appear as 1xxxxxx and BWA indexes as <InfoCube>$X.

How to delete statistics

Transaction RSDDSTAT (manual deletion):
set the trace level of queries and set up the deletion of statistics.

Automatic deletion:
the date relates to the field STARTTIME in table RSDDSTATINFO.

8. ST02

Check every instance for swaps -> double-click on the red-marked lines, then click on 'Current parameters' and you will see which parameter you should increase.
Please read the SAP Help for each parameter; there may be dependencies!
(Memory and Buffer).

There are two possible reasons for swapping:

  • There is no space left in the buffer data area -> buffer is too small
  • There are no directory entries left -> although there is enough space left in the buffer, no further objects can be loaded because the number of directory entries is limited -> increase the corresponding directory entry parameter!
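The two causes can be told apart mechanically from the free space and free directory entries of the affected buffer. A hypothetical sketch (the parameter names are made up; read the actual values from the ST02 detail screen):

```python
# Distinguish the two swap causes named above. The parameter names are
# illustrative; in ST02 they correspond to the free space and the free
# directory entries of the affected buffer.

def swap_reason(free_space_kb: int, free_dir_entries: int) -> str:
    if free_dir_entries == 0:
        return "directory entries exhausted: raise the directory entries parameter"
    if free_space_kb == 0:
        return "buffer data area full: raise the buffer size parameter"
    return "no swap cause visible"

print(swap_reason(5000, 0))
```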

Note: before you change the settings, also keep an eye on the pools via the tool sappfpar! (on OS level as <sid>adm: sappfpar check pf=<path-to-profile>)

9. Using the BW Administration Cockpit

Setup via SPRO (BI -> Settings for BI Content -> Business Intelligence -> BI Administration Cockpit)


  • min. NW 7.0 Portal Stack 5 + BI Administration package 1.0
  • implement technical content (TA: RSTCC_INST_BIAC)
  • Report RSPOR_SETUP


The cockpit shows, among other things:

  • average and max. runtimes of queries
  • PC runtimes
  • trends for queries and BW applications
  • suggestions for obsolete PSA data


9.1 compressed and not compressed requests


9.2 process chain status

10. Optimizing performance of InfoProviders in summary

  • Compress InfoCubes
  • Partitioning (and repartitioning) of InfoCubes

            – DB level: range partitioning (only for database systems that can handle partitions, e.g. Oracle, DB2, MSSQL), clustering
            – application level

11. ILM (Information Lifecycle Management)

  • nearline storage (vendors are e.g. SAND Technology, EMC², FileTek, PBS …)
  • archiving (via file server or tape drives)
  • deletion of data

Currently we don't use any kind of ILM, but research is ongoing 😉

12. BWA Business Warehouse Accelerator (just a small summary):

  • RSDDTREX_MEMORY_ESTIMATE (see screenshot) -> estimates the memory consumption of the BWA for a specific InfoCube. That's only the memory consumption, not the storage needed on the hard disk!
  • RSDDV: display all your indexes that are indexed by the BWA
  • RSRV: analyze BW objects
  • TREX_ADMIN_TOOL (standalone tool)
  • tables RSDDSTATTREX and RSDDSTATTREXSERV for analyzing the runtimes of the BWA
  • table RSDDTREXDIR (administration of the TREX aggregates); check this blog for more information

1) Report RSDDTREX_INDEX_LOAD_UNLOAD loads or deletes BWA indexes from the memory of the BWA servers. This can also be done via RSRV: Tests in Transaction RSRV -> BI Accelerator -> BI Accelerator Performance Checks -> Load BIA index data into main memory / Delete BIA index data from main memory.

2) Optimize the rollup process with the BWA delta index via RSRV (Tests in Transaction RSRV -> All Elementary Tests -> BI Accelerator -> BI Accelerator Performance Checks -> Propose Delta-Index for Indexes).
Note that the delta index grows with every load. The delta index should not be bigger than 10% of the main index; if it is, merge both indexes via report RSDDTREX_DELTAINDEX_MERGE.
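The 10% rule above can be written down as a one-line check; the sizes in the example are illustrative:

```python
# The delta index merge rule described above: merge once the delta index
# exceeds 10% of the main index (then run report RSDDTREX_DELTAINDEX_MERGE).

def needs_merge(main_index_mb: float, delta_index_mb: float) -> bool:
    return delta_index_mb > 0.10 * main_index_mb

print(needs_merge(1000, 150))  # delta is 15% of the main index -> merge
```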

3) Use the BWA/BIA Index Maintenance Wizard for DFI support or the option 'Always keep all BIA index data in main store'. With this option the indexes are never read from disk; they always stay in memory! You can also activate and monitor DFI support via the trexadmin standalone tool. Keep an eye on the memory consumption of the BWA when you use this option!


12.1 result of report RSDDTREX_MEMORY_ESTIMATE


12.2 option index keep in memory via BWA/BIA Index Maintenance Wizard


12.3 BWA suggestion for delta indexes (RSRV, see 12. 2) )

13. Query analyzing example

Find out which queries have a long runtime via ST03N:

13.1 ST03n – very high DB usage for this query

Check list

  • How often is the data in this InfoProvider changed?
  • RSRT -> Performance Info -> any aggregates, cache (+delta) mode, compression?
  • Which InfoProviders are hit by the query? RSRT -> Technical Information (in our case GRBCS_V11, a virtual cube, and GRBCS_R11, a reporting cube)
  • Are the DB statistics for these tables/indexes up to date?
  • Is it possible to index the cube via BWA? (GRBCS_V11 can't be indexed because it is a virtual cube; GRBCS_R11 is already indexed; GRBCS_V11 includes GRBCS_M11, a realtime InfoCube, which also can't be indexed, and GRBCS_R11)
  • Check where most of the runtime is spent (execute the query in RSRT with the options 'Display Statistics Data' and 'Do Not Use Cache')
  • Check in table RSDDIME whether line item dimensions or high cardinality are used (if you are not sure when to use these features, have a look at the useful links below)

In this case I would activate the OLAP cache (which mode depends on how often the basis data changes and whether the providers are filled at the same time -> grouping for MultiProviders, see point 2) and talk to the colleagues responsible for modeling about whether we can change something in the compression time frames. For more details you can also check table RSDDSTAT_DM.

The high runtime is also caused by a bug in the DB statistics (resulting in a bad execution plan), which will be fixed in a merge fix (9657085 for PSU 1 and 10007936 for PSU 2) for Oracle 11g (bug 9495669, see note 1477787).


13.2 You can see high usage of the data manager (part of the analytic engine) = read access to the InfoProviders; in this case, DB read time.

14. How Oracle 11g affects BW performance

So now the time has come for the Oracle 11g part! First of all, advanced compression keeps what it promises 😉
If you read SAP note 1436352, you will see there is no way around advanced compression once you are on 11g, because 11g is the prerequisite for it. Index compression has already been available since Oracle 10g.
The easiest way is to stop the BW and do an offline reorg into a new tablespace. Beyond the SAP recommendations there is nothing to worry about. But this is not possible for everyone, including me, so I had to compress the tables online.
Here you have to watch out for loading processes: during compression they create a lot of redo logs on top of the normal ones. Another thing we learned is that process chains that don't delete and recreate the indexes run into an ORA error when one partition is compressed and another one isn't (and the indexes weren't recreated afterwards).
So either you change all process chains, or you compress all partitioned tables and then activate the default compression for your tablespace. Enough theory for now; let's have a look at what the compression is made of 😉

At first I took some small tables to get a feeling for the duration and the compression rate of the OLTP compression. The first results for tables with a size of 5-10 GB, including index compression, were really promising: a compression rate of 70-80%! Really impressive, isn't it? Yes, but not for every table. Some tables had a rate of just 40-50%, but this is still better than before 😉
The next step is to determine the biggest parts; this can be done with a normal ST14 analysis. Our four biggest tables had a combined size of 2 TB; the DB size was about 3.4 TB. We decided to compress these big tables on a weekend and to delete the schedules for the loading processes to avoid unnecessary redo logs.
We compressed all of our F-, E-, B- and A-tables and afterwards checked whether all table partitions were compressed. It was possible that during the long duration of the compression some new requests were created; one new request in e.g. a PSA table leads to a new partition on Oracle.
This was the last step we had to take before activating the default compression for the tablespace!

One strange error occurred during our compression. I compressed some tables in parallel mode with 10 threads, and suddenly one of the ten threads canceled on every table. I checked what triggered this issue: just a simple bug in the brtools:
SQL statement ‘CREATE TABLE’ longer than 1024 KB
Creation of DDL statements for table SAPBWP./BIC/Fxxx failed

The table had a lot of partitions, and this was too much for the standard length of the statement (brtools). There is a note for this issue (646681): just set some higher environment values and everything will work. But why were all of the following tables canceled? This is simply a bug that has to be fixed!

At the end of the day we reduced the DB size from 3.4 TB to 1.3 TB. Our big four could be reduced from 2 TB to 500 GB, which was the biggest success of this action.
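As a quick sanity check, the figures above correspond to these compression rates:

```python
# Compression-rate arithmetic for the figures quoted above:
# whole DB 3.4 TB -> 1.3 TB, big four tables 2 TB -> 0.5 TB.

def compression_rate(size_before_tb: float, size_after_tb: float) -> float:
    """Fraction of space saved."""
    return 1 - size_after_tb / size_before_tb

print(round(compression_rate(3.4, 1.3), 2))  # whole DB: ~62% saved
print(round(compression_rate(2.0, 0.5), 2))  # big four: 75% saved
```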

Some useful hints:

  1. delete all unneeded PSA and changelog tables to avoid long runtimes and a high amount of redo logs
  2. use brspace in parallel mode; test what your hardware is made of 😉
  3. watch out for an archiver stuck in online mode
  4. if you take the offline method, you also save a lot of space for your backups. The online method has a big disadvantage: blocks that were used before the compression are still backed up after the compression, although they are no longer filled/used. We will do this with the next upgrade to reduce the downtime.
  5. example commands for compressing all F-tables:
  • brspace -u / -f tbreorg -a reorg -c ctab -t "/BIC/F*"
  • brspace -u / -f tbreorg -a reorg -c ctab -t "/BI0/F*"


What we got out of it:

  • saved disk space
  • better use of the DB cache
  • reduced physical reads and writes
  • no errors, CPU lags or other issues after activating the default compression

15. General Hints

  1. Use high cardinality only where it makes sense! It can result in bad query performance. Use table RSDDIME to get an overview of all properties of your dimensions.
  2. Check in table RSRREPDIR (field CACHEMODE) whether cache and read mode 'H' are activated for all queries (take care of the delta cache as well). If you have special cases for some queries, don't change your config. To change the read mode for all queries, call transaction RSRT, type 'RALL' as OK code and press Enter; in the dialog box, choose the new read mode and press Enter. To change the read mode for a specific query, enter the name of the query and select 'Read Mode'.
  3. Tablespace PSAPTEMP should have a minimum size of 2 times your biggest F-fact table (e.g. we had performance issues while executing queries that took a lot of temp space due to aggregation and sorting, so our temp space is now 4 times bigger than our biggest F-table)
  4. Table RSTODSPART shows the amount of records per request
  5. BEx Information Broadcaster -> Fill OLAP-Cache via BEx Query Designer, BEx Analyzer, BEx Web Analyzer, WAD, Portal and BEx Report Designer (Scheduling on daily, weekly or monthly bases)
  6. All tables of an InfoCube can be listed with TA LISTSCHEMA.
  7. Report SAP_INFOCUBE_DESIGNS (Print a list of the cubes in the system and their layout)
  8. Delete PSA-tables in your process chains
  9. Delete Changelogs in your process chains
  10. check if your aggregates are wise or not (TA: RSMON -> Aggregates)
  11. Check SAP Note 1139396 and run reports SAP_DROP_TMPTABLES and SAP_UPDATE_DBDIFF to clean obsolete temporary entries.
  12. Regularly book an EarlyWatch report analysis from SAP. This report includes many hints and improvements in summary. In addition you can create/request the EWA report from your Solution Manager, which includes some details from the EarlyWatch report but is not as exact; the report from SAP is more detailed.

I hope I could give you some useful hints for your analyses. I appreciate any kind of feedback, improvements and own experiences. Be careful with compression and partitioning; only use them if you know what you are doing and what is happening with your data!

Maybe I could even show an old stager some new tables/transactions or some useful hints 😉

Some useful links and documents:

Book recommendation




  1. Witalij Rudnicki
    … that although you refer to the accelerator as “BWA”, you still call “BW 7.0” the “BI 7.0” 🙂  Yesterday SAP announced release of new version – “BI 4.0”, obviously it is about BObj suite; let’s see how this will clash with broader pre-BO community.
    The book by Thomas Schroeder is the best on the topic, although I haven’t seen the one for BW 7.0 yet, and am referring to the previous edition of it.
    Btw, I too have a short session on BW Performance during TechEd in Las Vegas: (although displayed under different name) and plan to turn this into the blog after the conference.
    1. Jens Gleichmann Post author
      hehe Vitaliy you are right, normally it is called BW, but my colleagues and even my last SAP training sometimes called it BI and not BW. Just wait 1 year and they will rename it back to BI 😉

      You will also find a lot of documentation which still calls it BI.

      You are right, the book is the best on this topic I have read in the last years.

      New performance hints are always welcome! I´m looking forward to your session information. Thanks in advance for this information 😉

      Best Regards,

  2. Rajeshkumar Salecha
    Hi Jens,

    I really appreciate your effort to accumulate and put all the available performance improvements in one place. Please be open to add few more links to this article as you get comments related to further more BI 7.0 improvements.

    This area of performance improvement has now begun to take a larger scope in the industry given the fact that more and more output performance are being analyzed. Even efforts to decrease the total effort is now part of the development. This article may provide some points which can be taken into consideration while implementation itself so as to have a better final product.

    Jen – please keep it updated with whatever new things that you come across on the same topic.

    Best Wishes and Regards
    Raj Salecha

    1. Jens Gleichmann Post author
      Hi Raj,

      thanks, I will do my best to keep it up-to-date 😉

      You are right BI becomes more and more important in the last years -> so also the performance requirements to the BI landscape increased dramatically. Especially archiving and DB compression (e.g. with 11g) come to the fore.

      In the next month we will implement 11g in our production system. I hope I can add here some useful details if and how 11g affects the performance.

      Best Wishes and Regards,

  3. Jacob Isaksen
    Hi Jens,

    Really great article. I already forwarded to some of my colleagues.

    I have a suggestion for addition: How would you analyze the query performance impact of non-optimal dimensional modeling in cubes and where to remodel?

    rgds Jacob

  4. Jay Roble
    Nice blog, Looking forward to the update of section 14. How Oracle 11g affects BW performance.


    11G Deferred Segment Creation
    – Beginning in Oracle Database 11g Release 2, when creating a table in a locally managed tablespace, table segment creation is deferred until the first row is inserted.
    – So when you partition an infocube E table, it doesn’t create the future or empty E table partitions.

    1. Jens Gleichmann Post author
      Hi Jay,

      Section 14 will be updated in the next few weeks.
      There is really a big potential e.g. the topic compression. I’m looking forward to implement such features in our prod system.


    2. Jens Gleichmann Post author
      Hey guys,

      I have updated section 14. Enjoy reading and tell me your experience with the advanced compression!

      I will run some more tests on the tables and check your statements, Jay. I will get back to you when I have finished them 😉


  5. Sundeep Chalasani

    Hey Jens,

    Fantastic article. Thanks for putting all the information in one place.

    Whats your take on “Use Selection of Structure Elements” in the Query Properties? I notice performance improvement especially in the queries with large selections and restricted key figures when I check this option.

    Is there a scenario where this option when checked, could lead to a performance hit on the query?


    Sundeep Chalasani

    1. Jens Gleichmann Post author

      Hi Sundeep,

      this option depends on the query and your users (filter usage). In most cases this option won't help you much to improve query performance; I've never had a query that was improved dramatically by it. You have to test it for each query.

      Just have a look at the official SAP docu.

      “The Use Selection of Structure Elements option can be selected for any query; however, it is only useful in some cases.

      In queries, selections are frequently based on one or more characteristics in the columns, or more precisely, in the structure elements. One or more of these structure elements are often filtered in the BEx Analyzer or in BEx Web applications. If you do not select the Use Selection of Structure Elements option, these dynamic (structure element) selections are not transferred to the database. Normally, the data for the entire structure or for both structures is then read from the database.”

      So at the end of the day you have to try it out. The option just ensures that structure elements are sent to the database for processing. I know that the performance indicator could be (but didn’t double check it till now):

      – very high FEMS number

      – high EVENTID (3200 times)

      – many Restricted Key Figures

      – calculations in the query

      But like I said at the beginning it depends on the query itself and its usage.



