As the fourth quarter of the year starts, everybody seems to be checking the goals that were set for this year against what has already been achieved. Looking back at the support messages I worked on, I found a very clear pattern whenever a message was about BW on Oracle and performance.
Most messages were solved by the very same simple five steps:
- ensuring the database software is up to date
- ensuring the database parameters are set as recommended
- ensuring that the CBO statistics were in fact current and correct
- ensuring that all indexes are present and usable
- ensuring that the f-fact table doesn’t have too many partitions
Yes, these steps are boring and not very popular.
Recommending these steps doesn't require much special analysis.
In fact, it's rather quick to check on them and tell whether they need to be applied. It reminds me of the flossing discussions with the dentist… Anyhow, let's review these points briefly, one by one:
Up to date Oracle database software
BW systems regularly present the Oracle query optimizer with highly complex issues to solve.
And the Oracle cost based optimizer (CBO) is by far the most complex piece of coding in the whole DBMS software.
So, even without the long history of epic optimizer errors (check note #176754) it should be obvious that code like this cannot be free of bugs.
For example, one issue that struck often with the early 11.2 patches was that the optimizer started to use the infamous MERGE JOIN CARTESIAN (MJC) operation.
Due to bugs and/or outdated statistics, the CBO believed it would get only one or zero rows back from one row source, and with that estimate a MJC can seem like a good choice.
Once you get more rows than that, you easily end up with PSAPTEMP overflows (ORA-1652) and horrid runtimes.
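A toy calculation (plain Python, not Oracle internals) shows why the misestimate is so dangerous: a cartesian join materializes every combination of its two row sources, so an estimate of one row per side hides an explosion.

```python
def cartesian_rows(rows_a: int, rows_b: int) -> int:
    """A MERGE JOIN CARTESIAN produces every combination of both row sources."""
    return rows_a * rows_b

# What the CBO believed: one row per row source -> one combined row, MJC looks cheap.
estimated = cartesian_rows(1, 1)

# What actually happened: the row sources were not that small, and the
# intermediate result has to be sorted/buffered - hello, PSAPTEMP overflow.
actual = cartesian_rows(50_000, 20_000)   # a billion intermediate rows
```

The numbers here are made up for illustration; the point is only that the intermediate result grows multiplicatively, so a cardinality estimate that is off by a few orders of magnitude turns a "cheap" plan into a disaster.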
And sorry, but Oracle 8i, 9i and 10g are all outdated and no longer supported.
See note #1110995 Extended maintenance for Oracle Version 10.2 on this.
Of course nobody can forbid you from using Oracle 10g; it's your system landscape after all.
But be prepared to have to install Oracle 10.2.0.5 PLUS the latest SAP Bundle Patch before any in-depth analysis (e.g. via CBO trace) is performed.
We're not chasing bugs in these old versions anymore.
Besides: very, very likely the problems just won't be there with the latest patch anyhow. It's pretty mature nowadays.
Database parameters set as recommended
This pretty much follows from the previous point: without the correct parameter settings that fit the current version and patch of the database software, bugs and issues cannot be avoided, even if they are already known.
Current/fitting CBO statistics
Still, one of the links that I give to my customers most often is the one to my own old blog post
Even today it seems that there are yet many systems where the DBSTATC needs to be cleaned up.
Today I'd just add a small detail: if you have been using report SAP_ANALYZE_ALL_INFOCUBES to collect statistics, you really should stop doing that.
It basically brings back the DBSTATC entries…
Another aspect of this topic is that the standard heuristic to gather new statistics is based on a threshold of changed data in a table. Usually this threshold is 50% of changed (that is: added, updated, deleted) rows. For BW systems this can be suboptimal. For example, time dimension tables might receive new data, so that new values, say for 0CALMONTH, are suddenly present in the table. Relying on the standard heuristic, the CBO won't notice them, as no new statistics (and thus no new histograms) have been gathered. One way to overcome this is to force new statistics on such single tables.
Another step to take would be to force the re-collection of statistics for tables with very old (say, older than a year) statistics.
Unfortunately there is no such automatism built into BRTOOLS or BW yet, but internal discussions on this have started!
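The gist of these rules of thumb can be sketched as a small decision function. This is plain Python just to illustrate the heuristic; the function name, parameters and thresholds are my own, not a BRTOOLS or DBMS_STATS API:

```python
from datetime import date, timedelta

def should_gather_stats(changed_rows: int, total_rows: int,
                        last_analyzed: date, today: date,
                        force: bool = False) -> bool:
    """Mimics the standard 50%-changed-rows heuristic plus the two
    extra rules discussed above (hypothetical sketch, not SAP code)."""
    if force:
        # e.g. time dimension tables where new 0CALMONTH values appeared
        return True
    if today - last_analyzed > timedelta(days=365):
        # statistics older than a year: re-gather regardless of change volume
        return True
    if total_rows == 0:
        return True
    # standard heuristic: more than 50% of the rows changed since last analysis
    return changed_rows / total_rows > 0.5
```

With only 10% changed rows and recent statistics the function says "no", while an old analysis date or the force flag triggers re-collection even without data changes.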
Indexes present and usable
Indexes are critical for the optimizer. They speed up queries, they provide alternative data access paths, and sometimes they even allow skipping the table access altogether.
But only if they are actually present on the database and usable.
For BW systems, where table partitioning is used per default, it’s important to have the indexes also partitioned.
Otherwise, partition-level operations like DROP PARTITION or TRUNCATE PARTITION will render the indexes on the table UNUSABLE or INVALID.
And in that state they cannot be used for queries.
The most common cases are described or at least referenced in note
#1513510 – 900 index in unusable state with SAP Basis 7.00/7.01/7.02
Although I initially wrote the note to cover a specific bug in an SAP Basis SP, it contains extensive explanations, a check SQL statement and links to other related SAP notes.
Make sure to check it out and to get your indexes straight.
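In spirit, the check boils down to something like the following Python sketch over index status rows, such as one might fetch from Oracle's DBA_INDEXES or DBA_IND_PARTITIONS dictionary views (the actual check SQL statement is the one in the note; the index names below are made up):

```python
def broken_indexes(index_rows):
    """Return the names of indexes that queries cannot use.

    index_rows: iterable of (index_name, status) tuples, where status
    is assumed to be one of 'VALID', 'USABLE', 'UNUSABLE', 'INVALID'.
    """
    return [name for name, status in index_rows
            if status in ('UNUSABLE', 'INVALID')]

# Hypothetical example rows: the ~900 secondary index is broken,
# the primary index is fine.
rows = [('MYCUBE~900', 'UNUSABLE'), ('MYCUBE~0', 'VALID')]
```

Any index that shows up in such a result either needs to be rebuilt or recreated as a (locally) partitioned index so that partition-level operations no longer invalidate it.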
Too many f-table partitions
F-fact tables can be seen as the data inbox of an InfoCube.
The request-wise partitioning allows quick and easy removal of data in case the loaded data is not OK.
Usually this shouldn’t happen once your data mart has been properly setup and is stabilized.
Anyhow, keeping a good 30 partitions (e.g. a month's worth of data) won't do any harm.
But more than that makes the optimizer's work much more difficult.
Also, nearly nobody using business reporting checks on specific data load requests.
Instead, queries are usually built upon a time reference.
The E-fact table can be partitioned by time and that way the query performance can be improved by factors.
So, make sure to check blog post
Partition count contest!
and note #590370 – Too many uncompressed requests (f table partitions)
and start compressing your cubes today.
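As a sketch of the rule of thumb discussed above (again plain Python with made-up cube names and a hypothetical function; BW's own checks are the ones in the note and blog post):

```python
def cubes_to_compress(f_partition_counts, threshold=30):
    """Given a mapping {infocube: number of uncompressed request
    partitions in the F fact table}, return the cubes exceeding the
    threshold, i.e. the candidates for compression into the E fact
    table. Threshold 30 mirrors the ~'one month of daily loads' rule."""
    return sorted(cube for cube, parts in f_partition_counts.items()
                  if parts > threshold)

# Hypothetical counts: one cube was never compressed, the other is fine.
counts = {'SALES_CUBE': 250, 'FINANCE_CUBE': 12}
```

Running the check regularly (and compressing the flagged cubes) keeps the F-table partition counts low and lets the time-partitioned E-fact table do the heavy lifting for reporting queries.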
Is this it? Yep, kinda…
Are there other possible performance problems in the BW on Oracle world?
Sure there are. But they occur far less often than the ones that are solved for good by implementing the five steps discussed here.
As a little extra, now that you know about these five steps, you can plan and decide when to implement them – you don't have to wait until you hit a serious performance issue and open a support message for it 🙂
That’s it for now. Have a great weekend!