
Hi folks!

Sorry for going missing over the last few days. The training had been tough, going out afterwards even tougher 😉 and as soon as the training was over, real-life support work set in again.

Therefore, all I can provide this time is a "flashback of course topics" - here we go (in no particular order):

Partitioning

Partitioning is a great feature to chop the data down into smaller chunks so that it becomes easier to deal with.
Partitioning means faster statistics collection, quicker segment scans and easier data removal.
Cleverly used, partitions can benefit your buffer cache efficiency and the way data is clustered.
Although the ABAP DDIC has supported partitioning since 4.6C, there was still a problem:
how to properly set up and maintain the partitions. Basically the DBA had to check what data came in and went out and adapt the partition layout accordingly - manually.

The good news is: this has changed!
With Oracle 11g there's interval partitioning available and supported by SAP!

This feature will automatically create required partitions as data is inserted.
A nice addition to that: JUST the required partition is created, so Oracle does not stupidly add many partitions when "outlying" data is entered.
If, for example, you have partitions set up for ranges of numbers, with a new partition for every 5000 values, the layout might look like this:

PART 1:    0 -  4999
PART 2: 5000 -  9999
PART 3:10000 - 14999

Now entering a value of 34123 would lead to this:

PART 1:    0 -  4999
PART 2: 5000 -  9999
PART 3:10000 - 14999
PART 4:30000 - 34999 *

Now entering a value of 18512 would lead to what?
This:

PART 1:    0 -  4999
PART 2: 5000 -  9999
PART 3:10000 - 14999
PART 4:15000 - 19999 *  
PART 5:30000 - 34999 *

Nice, isn't it?
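
In DDL terms, such an interval-partitioned table is just a range-partitioned table with an INTERVAL clause - a minimal sketch (table and column names are made up, the interval matches the example above):

CREATE TABLE part_demo
( doc_no   NUMBER        NOT NULL
, payload  VARCHAR2(100)
)
PARTITION BY RANGE (doc_no)
  INTERVAL (5000)
( PARTITION part_1 VALUES LESS THAN (5000) );

-- Inserting a value of 34123 now makes Oracle create just the one
-- additional partition that covers the range 30000 - 34999.
INSERT INTO part_demo (doc_no, payload) VALUES (34123, 'test');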
If you've already started clapping, better calm down a bit.
Think about how many number fields SAP stores as actual numbers in Oracle.
Only a few!
The same is true for date columns.

So there goes the easy way to automatically partition your data.

One option to address this may be the usage of another new feature:
Virtual Column-Based Partitioning.

This allows adding a virtual column, which is basically a column derived from one of the normal columns. By creating a virtual column that effectively does a data type conversion, so that "SAP numbers" become "Oracle numbers", interval partitioning can suddenly be used.
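
A minimal sketch of how that could look - again with made-up names, BELNR standing in for a typical number column stored as VARCHAR2:

CREATE TABLE vcol_demo
( belnr      VARCHAR2(10)  NOT NULL
, payload    VARCHAR2(100)
, belnr_num  NUMBER GENERATED ALWAYS AS (TO_NUMBER(belnr)) VIRTUAL
)
PARTITION BY RANGE (belnr_num)
  INTERVAL (5000)
( PARTITION part_1 VALUES LESS THAN (5000) );

-- A column-list INSERT works fine:
INSERT INTO vcol_demo (belnr, payload) VALUES ('0000034123', 'test');
-- ... but an INSERT without a column list does not, because no value
-- may be supplied for the virtual column - exactly the hurdle described below.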

Sadly enough there is still a hurdle to cross before this can easily be used with SAP: INSERT statements created by the NetWeaver DB Interface layer typically don't contain column names.
And if you omit the column names in an INSERT statement you HAVE to provide a value for ALL columns. But for virtual columns you cannot provide values...
See the point?
Oracle and SAP are working on a solution on that, but it's not here, yet.

What is here instead is the SAP partitioning engine.
This piece of ABAP coding takes over partition maintenance based on NetWeaver number ranges (NRIV).
For details check notes
     SAP Note 1333328 - Partitioning Engine for Oracle
     SAP Note 1415029 - Corrections for SAP partitioning engine (1)
     SAP Note 1460079 - Corrections for SAP partitioning engine (2)

Online patching

Those of you who have kept an eye on Oracle patches lately will surely have realized that patching has changed a lot.
There are now the SAP Bundle Patches, there's mopatch (SAP note 1027012) and scheduled patch release dates.
This is already the case for Oracle 10g and this will stay the same with Oracle 11g.
However, there is a new thing called online patching.
Although this feature sounds so sweet - be very clear about the fact that nothing in life comes for free.
The way Oracle implemented online patching is by adding code branches around the buggy code and pointing into the revised code.
This adds overhead in the code execution and in memory consumption FOR EVERY ORACLE PROCESS/THREAD there is!
Plus: this is done for each and every bug fix installed.

Besides that, it's necessary to actively request online patches from Oracle - they won't be delivered via service marketplace.

If you ask me, although it may harm your SLA, I'd bet shutting down the instance to properly install a patch will lead to far fewer troubles.

Next topic!

EXPDP compression/encryption, RMAN compression, Advanced Compression Option

Part of the Oracle Advanced Compression Option are (additional) compression functions for EXPDP and for RMAN (SAP Note 1436352 - Oracle 11g Advanced Compression for SAP Systems).

EXPDP properly compresses the data it exports - nothing to be sad about. Except that EXPDP runs on the database server and basically eats up your CPU there!!
Even worse, you can parallelize EXPDP to make sure that really NO other process gets any CPU...

Amazingly, Oracle implemented a similar "advancement" in RMAN.
While RMAN already provided compression, there are now additional levels of software compression.
Sadly enough, the new algorithms are not significantly more efficient, nor do they lead to shorter backup times or less CPU consumption.
No idea what they were thinking when they did that...
Besides, if you've bought your Oracle license directly from Oracle, then Advanced Compression is a to-be-paid-for option.

Stuff around CBO, Cursors and statistics

As performance is one of the major areas of interest for DB users - and in most cases query performance is what's meant by that - much has been done on this wide and complex ground. It's not possible to briefly summarize all the details of two days of training on these topics, so I will just throw some keywords at you.
If you're interested in any of them, there's quite some more information available, plus this leaves room for future blog posts...

Ok, so let's start with the new CBO statistics features.
Up to 10g, statistics were used from the moment they were collected.
You could prevent the re-parsing of cursors that used the old statistics (for a while), but you couldn't control whether the new stats were used at all.

With 11g Oracle offers the option to first collect new stats in a 'private' mode and to publish them after evaluating them.
Those statistics are called 'pending statistics' and you can instruct the CBO to use them via the parameter "optimizer_use_pending_statistics".
Having this set to FALSE on instance level and to TRUE on session level (e.g. in ST04) gives a neat option to review plan changes.
There's even an extension of the DBMS_STATS.DIFF_TABLE_STATS functionality from 10.2.0.4 (remember Oracle Patchset 10.2.0.4 - neat DBMS_STATS enhancement? ;-)) for pending stats.
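
A quick sketch of the workflow (the table name is made up):

EXEC DBMS_STATS.SET_TABLE_PREFS(user, 'MYTAB', 'PUBLISH', 'FALSE');
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'MYTAB');
-- the fresh stats are now pending; the CBO still uses the published ones

ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
-- review the execution plans in this session ...

EXEC DBMS_STATS.PUBLISH_PENDING_STATS(user, 'MYTAB');
-- ... and publish the pending stats once you are happy with them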

Another nice thing is statistics preferences. Basically this is DBSTATC taken to the database level.
In plain English: you can now define general settings for collecting statistics right within the database catalog instead of having some "external" tool like BRCONNECT do it for you.
That way you can make sure that a simple DBMS_STATS.GATHER_TABLE_STATS() call will collect the statistics for the table with all the special settings defined for that table.
To me this is a nice feature that surely did not cost the world to build in.
Maybe we can get rid of DBSTATC in the future then!?
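
For illustration, a made-up example of such preferences and the plain gather call that picks them up (table name and values are invented):

EXEC DBMS_STATS.SET_TABLE_PREFS(user, 'MYTAB', 'ESTIMATE_PERCENT', '10');
EXEC DBMS_STATS.SET_TABLE_PREFS(user, 'MYTAB', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');

-- no special parameters needed anymore - the stored preferences are applied automatically
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'MYTAB');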

A big chunk of the training was dedicated to the new SQL performance features.

There's adaptive cursor sharing that can be viewed as bind_peeking-V2.0.
The test cases presented really looked nice and it seems that it's not yet decided whether or not to use this feature with SAP.
However, Oracle now uses cardinality feedback from cursor executions to recognize if the currently used bind value is still appropriate or if a new plan is required and silently generates a new child cursor in case it is.
To me this seemed to work in a rather smart way - so maybe this is something we will see in the future.
Downside is: one statement and n cursors with n execution plans.
This is going to be big fun during performance analysis.
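
At least V$SQL flags the affected cursors, so you can spot them - a quick sketch (the SQL_ID is just a placeholder):

SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware,
       plan_hash_value, executions
  FROM v$sql
 WHERE sql_id = '&sql_id';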

I'm not going to talk about stored outlines or plan evolution too much here - because it likely won't be used with SAP.
Bottom line for that was something like: "Ok, the execution plan for a statement changed. Hey, DBA, go and check it. If you're happy with the new plan, release it..."
Honestly - was this meant to be a job creation feature?
Who should monitor and process all the plan changes that might come up in a typical SAP application?

What will be used with SAP however is a really nice feature:
Automatic SQL monitoring (V$SQL_MONITOR)
This feature picks up statements that run longer than 5 seconds (could be 5 seconds on CPU or something - it doesn't seem to be the elapsed wall-clock time...) and starts to collect execution statistics very similar to what you would get with the hint /*+ GATHER_PLAN_STATISTICS */.

To say the least: it looks fabulous!
While the statement is running you can check WHERE the execution currently is and HOW MUCH data has been processed so far.
The GUI support in Enterprise Manager DB Console also looks very polished - nice, nice, nice!
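
Even without the GUI, a quick look is just a query away - a sketch (the SQL_ID is a placeholder):

-- what's being monitored right now
SELECT sql_id, status, elapsed_time, cpu_time, buffer_gets
  FROM v$sql_monitor;

-- formatted report for a single statement
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&sql_id', type => 'TEXT')
  FROM dual;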

What else?
Hmmm... there are - as usual - a bunch of new CBO hints available. Make sure to check the new (undocumented?) V$SQL_HINT view to get a complete list with all the version information.
Neat!
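
A query like this should do - hedged, of course, since the view is undocumented and column names may differ between versions:

SELECT name, version, inverse
  FROM v$sql_hint
 ORDER BY name;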

Time to capture the remaining topics...

There's a new Oracle option available with 11g and they called it RAT...

This, maybe not so wisely chosen, acronym stands for Real Application Testing.
With this Oracle provides a facility for database workload capturing and replay so that you can check out changes to the database setup (new hardware, new Oracle software, new disk layout, ...) with your production workload WITHOUT touching your production system.
You just capture the workload (ALL statements that run in your productive database, together with timestamps of when they were issued!) and have them redone on the target machine.

This approach addresses two difficult issues with load testing:

  1. having a typical workload and
  2. having realistic think times (that is, pauses where no statement is run)

It just uses your REAL workload.

Good idea!
They even thought of providing the option to capture the workload from 10.2.0.4 onwards and replay it in 11g.
That way you can really check out whether 11g plus all those compression, encryption, auto-tuning stuff really brings better performance.
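
The whole thing is driven through two PL/SQL packages - a rough sketch (capture/replay names and the directory object are made up, and a real replay additionally needs wrc replay clients on OS level):

-- on the source system
EXEC DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'PRD_CAPTURE', dir => 'CAPTURE_DIR');
-- ... let the production workload run for the desired period ...
EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;

-- on the target system: preprocess, prepare and replay
EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'CAPTURE_DIR');
EXEC DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'PRD_REPLAY', replay_dir => 'CAPTURE_DIR');
EXEC DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
EXEC DBMS_WORKLOAD_REPLAY.START_REPLAY;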

This feature looks so nice, Oracle thought they could ask for money for it.
And boy, they do!
Even customers that bought their license through SAP will have to pay for this...

Health Checks/Diagnostics

Oracle decided to copy from MaxDB and replaced the ASCII alert.log file with a pseudo-XML file!!

🙂

If your gut now starts to turn, be relieved!
They were smarter with this one and the old alert.log is still available.
BUT: they moved it.
They moved it away from the long-familiar saptrace/background folder to a folder-sub-sub-sub-sub-...-folder dungeon that is very structured and a total nerve killer when you're supposed to check files on the command line!

It may help with complex Oracle setups (think of RAC, think of multiple instances, think of ASM...) on one machine, but it sure makes looking for log files less easy.
Also, they don't differentiate between background and usertrace anymore AND they put more files into these folders as well...
Simple stuff like 'ls -ltra' might not work that well anymore.
Well, let's see how this turns out in real life.

Anyhow, Oracle also delivered some tools to help with this:
There's ADRCI, a command line/menu-driven tool to access all those trace files, handle "incidents", do the housekeeping (a.k.a. deletion of those files) and some more.
And there's a new set of V$-views that allow access to the diagnostic data from SQL.
Just look for views called '%DIAG%' or check the documentation, and see SAP Note 1431751 - Quick Reference for ADRCI and ADR.
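
For a first orientation, one of those views already answers the "where did my files go?" question:

-- locations of alert.log, trace files, incident dumps etc.
SELECT name, value
  FROM v$diag_info;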

I mentioned "incidents" above, so what is that? The error handling now automatically creates incidents that can be worked on by the DBA for each new error.
The coding is clever enough to realize when the same error is reported over and over again (think of block corruptions on a central table that every sessions wants to read from). This cleverness is called "flood control" and again seems to be rather smart.
Once a message flooding is detected, the new error messages won't be spilled to the trace files anymore (THANKS!).
Obviously Oracle tries to make their diagnostic/monitoring/management features easier to adapt to ITIL standard processes - not the worst idea to me.

However, there are areas where they have only just started: database health checks are one of those.
Having played around with them a bit, there's barely any real benefit to them as far as I can tell. But it's a start, isn't it?
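
If you want to play with them yourself, a minimal sketch (the run name is made up; V$HM_CHECK lists the available checks):

EXEC DBMS_HM.RUN_CHECK('Dictionary Integrity Check', 'dict_check_1');
SELECT DBMS_HM.GET_RUN_REPORT('dict_check_1') FROM dual;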

Fine - final topics: ASM and Exadata.

Automatic Storage Management will also come for SAP, that's quite clear.
When? Don't ask me, but it is for sure something we cannot hide from much longer.
One reason for it will be the certification of Oracle's Exadata Database Machine.

This surely will come some time (or, to be precise, I totally fail to see how SAP would be able to NOT certify it - I've no inside information on this, it's just my private opinion!) and when it comes, ASM is a must.

That's it.

Ok, I've left you with one of my longest posts ever and a pile of scattered information, but I hope there have been one or two interesting points in it for some of you.
Of course most of the topics mentioned require deeper discussion, so just take this as a list of reading hints and private opinions.

Sorry again for being late with this final post, but hey - it's summer, it's the soccer World Cup, and Germany just sent Argentina home yesterday.
I wouldn't have been sane to write this instead of sitting outside and watching the game, would I? 😉

Have fun with 11g and see you soon!

P.S.
While writing I found this nice search facility for Oracle docs:

Oracle documentation search for DB 8i, 9i, 10g and 11g

There's also much information available in the form of SAP notes.
SAP Note 1431800 - Oracle 11.2.0: Central Technical Note should be your starting point!

P.P.S.
Read the first and second parts of this blog as well:

Oracle 11g training, Day #2 - Savepoint!
