Our Oracle compression project progress
This will be a rather abbreviated blog, as unlike our Unicode migrations several years ago (when I started blogging on SCN), I am not involved in a hands-on fashion. That allowed me to go to the Sapphire conference this week while the database, server, and Basis teams prepare for several production moves. As with our Unicode platform moves, we leveraged the tool called “O2O” (or O to O) to accelerate the transfer between two Oracle SAP systems. It’s a certified process; a plain Oracle export/import would not necessarily be.
We copied back our production systems to the quality platform recently so we would get the closest approximation of actual data sizes. There were a couple of days of “what’s all this then?” because we combined the approved outage with a change to the underlying storage (mostly spinning disk to all flash), which leaves room for debate about which component gained us more speed. We’re all on the same team, so no issues there.
As I type this, one of the production moves is underway and two more will start in a few hours. I’ll post a followup as soon as I have some representative data.
Metric 1 – DB sequential read time
I’m kicking myself for not having captured these numbers before the copyback wiped them. I do have the numbers for production, as can be seen in the blue line. So far, we’ve been under 1 millisecond for index access. Not too bad, eh?
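For readers who want to reproduce this kind of figure, the average sequential read time is just total wait time divided by wait count, as reported by ST04 or an AWR report. A minimal sketch, with made-up numbers (the figures below are illustrative assumptions, not our measured values):

```python
# Hypothetical figures: average "db file sequential read" time is
# total wait time divided by number of waits (as seen in ST04 / AWR).
total_wait_us = 45_000_000   # microseconds waited on sequential reads (assumed)
wait_count = 60_000          # number of sequential read waits (assumed)

avg_ms = total_wait_us / wait_count / 1000  # convert microseconds to milliseconds
print(f"avg sequential read: {avg_ms:.2f} ms")
```

Anything consistently under 1 ms on this calculation points at flash-class latency rather than spinning disk.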
Metric 2 – VA01 Order Entry
This skips over the other VA01 components that ST03 records, as I wanted to show the DB portion in isolation. CPU and other SAP level times have not changed much in these tests, so the overall impact on specific transactions will depend on the relative portion of each.
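The point about relative portions can be made concrete with a simple Amdahl-style calculation: if only the DB portion of a transaction speeds up, the overall gain is capped by how much of the response time was DB time to begin with. A sketch with illustrative numbers (the 40% DB share and 4x speedup below are assumptions for the example, not our measurements):

```python
# Rough model: only the DB portion of a transaction's response time
# gets faster; CPU and other SAP-level times stay the same.
def overall_speedup(db_share: float, db_speedup: float) -> float:
    """Amdahl-style: new_time = (1 - db_share) + db_share / db_speedup."""
    new_time = (1 - db_share) + db_share / db_speedup
    return 1 / new_time

# A transaction spending 40% of its time in the DB, with DB access 4x faster:
print(round(overall_speedup(0.40, 4.0), 2))  # about a 1.43x overall gain
```

So a dramatic DB-time improvement translates into a much more modest end-to-end number for transactions that are mostly CPU-bound, which is exactly why the ST03 breakdown matters.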
Metric 3 – Batch job
To end users, metrics like the above don’t matter as much as how fast their business process executes. I looked at a few examples, and cherry-picked one that has a telling story.
I wish all infrastructure projects could show these kinds of results this easily!