Previously: 24 Hours After Oracle Compression
Quite previously: Our Oracle compression project progress
Run times should go down or, at worst, stay about the same. So how are we going to notice something drifting in the wrong direction? That's not something that can be definitively answered in 3 days, given variations driven by business volume rather than by the platform. Projecting requires more data.
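One rough way to spot drift once more data accumulates is to compare recent job run times against a pre-compression baseline and flag anything that runs well outside the old variation. The sketch below is a minimal illustration, not our monitoring setup; the job run times are made up, and the two-sigma threshold is an assumption.

```python
from statistics import mean, stdev

def flag_drift(baseline, recent, sigmas=2.0):
    """Flag drift when the recent average run time exceeds the
    baseline average by more than `sigmas` standard deviations."""
    return mean(recent) > mean(baseline) + sigmas * stdev(baseline)

# Hypothetical run times in seconds for one background job:
before = [610, 595, 640, 620, 605, 630, 615]   # pre-compression weeks
after_ok = [600, 590, 625]                     # about the same: no drift
after_bad = [710, 730, 725]                    # clearly slower: drift

print(flag_drift(before, after_ok))   # False
print(flag_drift(before, after_bad))  # True
```

A simple cut like this beats eyeballing charts for thousands of jobs, though volume swings mean a flagged job still needs a human look.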
With compression, segment spaces have shrunk. If the statistics from the old system moved to the new platform, all plans should stay the same (unless some optimizer trick saves new block counts and adjusts accordingly). Compressed objects should probably get new statistics, but the fewer plan changes that pop up at one time, the better. Find the degraded plans and fix those.
The job above must sweep through the formerly uncompressed tables, and it now sweeps cleaner.
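Since fewer simultaneous plan changes are better, one approach is to regather statistics on the compressed objects in small batches rather than all at once. A hedged sketch, assuming a made-up table list, owner, and batch size; the generated calls use Oracle's standard `DBMS_STATS.GATHER_TABLE_STATS` procedure:

```python
def gather_stats_batches(tables, owner="SAPR3", batch_size=3):
    """Emit DBMS_STATS calls for freshly compressed tables in small
    batches, so any plan changes surface a few at a time instead of
    all in one night. Owner and batch size are illustrative."""
    calls = [
        f"EXEC DBMS_STATS.GATHER_TABLE_STATS('{owner}', '{t}', cascade => TRUE);"
        for t in tables
    ]
    return [calls[i:i + batch_size] for i in range(0, len(calls), batch_size)]

# Hypothetical compressed tables:
batches = gather_stats_batches(["VBAK", "VBAP", "MSEG", "BSEG", "COEP", "GLPCA"])
for n, batch in enumerate(batches, start=1):
    print(f"-- batch {n}")
    for stmt in batch:
        print(stmt)
```

Running one batch per maintenance window keeps any plan fallout small enough to diagnose.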
I’ve saved off ST03 transaction histories for the past few weeks, and months, in order to show longer-term improvements. With 40–50 thousand distinct jobs running into the millions of steps per week, I have typically found the top 80% of run time is accounted for by under 200 background jobs.
OLAP/OLTP Hybrid
Our Supply Chain stack runs Live Cache, Event Management, and a well-seasoned Available To Promise bag for R/3/ECC. It runs both as a transaction system for order management and in a business warehouse mode for planning books and other reports, so you get hybrid SQL statements. Tuning for OLAP versus OLTP is still required in this landscape, and we have primarily leveraged the optimizer index cost parameter to slide the dial one way or another.
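For readers unfamiliar with that dial: Oracle's `optimizer_index_cost_adj` parameter scales the estimated cost of index access paths as a percentage, with 100 as the default. The toy numbers below are made up, but the scaling rule is how the parameter behaves, roughly:

```python
def adjusted_index_cost(raw_index_cost, optimizer_index_cost_adj=100):
    """Scale a raw index access cost by optimizer_index_cost_adj,
    interpreted as a percentage. Below 100 makes index paths look
    cheaper (OLTP lean); above 100 favors full scans (OLAP lean)."""
    return raw_index_cost * optimizer_index_cost_adj / 100

# Hypothetical costs in abstract optimizer units:
full_scan_cost = 500
index_cost = 800  # raw cost: full scan looks cheaper at the default

print(adjusted_index_cost(index_cost, 100))  # 800.0 -> full scan wins
print(adjusted_index_cost(index_cost, 50))   # 400.0 -> index path now wins
```

The same adjustment applies to every index path at once, which is exactly why nudging it can flip plans far from the statement you were tuning.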
Oracle DBAs will understand that altering that value is not done lightly, as plans can change in surprising ways.
An example Oracle star transformation (intended) SQL execution plan:
This is easier on the brain (meaning the optimizer's, not just mine):
One odd thing I noted, which I don't recall seeing before, is the lack of table or condition details. I have the SQL address, so I may check whether it happens again. My SAP GUI version was bumped up a week ago (shiny new knobs in the last screenshot).
Here is what we might have done differently:
- Checking automated jobs for file paths; one path did not exist on the target (new) system, so the job faulted.
- A trigger ended up with a permission issue. I don't think we could have prevented this, though. Lessons learned are all internal.
- I think I did okay keeping statistics records for the best overview; some live with the DB server and some with the instance, stored in the database. We only had a gap of a day or so.
Cache view. The top row doesn't make sense; that's a puzzle for another day.