24 Hours After Oracle Compression
After completing the work described in Our Oracle compression project progress, I took a look at the systems. As it’s the weekend, normal (much less peak) volumes have yet to occur, but there are early indications of database behavior specifically, and of application health in general.
First, the usual CCMS dialog response time et al:
A, B, and C are past time periods (not daily), while D is yesterday, which covers under 24 hours since the application started partway into the day.
The database reduction was expected, but the CPU decrease is a pleasant surprise. I am not depending on this to continue as more time passes, though if the drop persists I will not argue.
A couple of batch job run times, picked somewhat randomly out of the thousands that are scheduled; “somewhat” meaning I saw these running at the beginning, so they did not just jump in and out of the system queues.
And another view from ST03 of VA01 database time, and one of sequential read time.
The blue line is production, which is now running at the same rate as the test system (in red). Nice that the test was representative.
Observations and next steps
Overall, the post-migration period has been quiet, at least judging from the bulletins I am subscribed to. Pulling apart the central instance and the database is an action we avoided for a long time, as the risks seemed to outweigh the benefits. The issues I have noticed have been related to that split, such as a maintenance job that did not move along with the database server.
One other area I spotted was a database statement issued from an ABAP program that fires off around the clock. I have not looked at the ABAP, but my guess would be an Open SQL statement, or something along those lines.
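Spotting that kind of statement is mostly a matter of seeing which SQL shows up in every hour of the day. A minimal sketch of the idea, assuming a hypothetical monitoring extract of (statement ID, hour) pairs rather than any real ST04 or V$SQL output:

```python
from collections import defaultdict

def hours_active(events):
    """Map each statement ID to the set of hours (0-23) in which it ran.

    `events` is an iterable of (statement_id, hour) pairs; the format and
    the statement names below are made up for illustration.
    """
    active = defaultdict(set)
    for stmt, hour in events:
        active[stmt].add(hour)
    return active

def round_the_clock(events, min_hours=24):
    """Return statement IDs seen in at least `min_hours` distinct hours."""
    return sorted(s for s, hrs in hours_active(events).items()
                  if len(hrs) >= min_hours)

# Hypothetical sample: one statement fires every hour, another only twice.
sample = [("SEL_VBAK", h) for h in range(24)] + [("UPD_MARA", 2), ("UPD_MARA", 14)]
print(round_the_clock(sample))  # -> ['SEL_VBAK']
```

In practice the pairs would come from whatever SQL cache snapshots the monitoring tools provide; the point is only that an always-active statement separates cleanly from the normal workload.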
Of course, one day (or less) does not make a trend, and caches have not built up yet. I’ll be watching daily for new trends or anomalies.
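The daily watch can be as simple as comparing each new day against a trailing baseline. A sketch of that check, with an arbitrary seven-day window and a 25% threshold (both are assumptions, not anything the monitoring tools dictate) over a made-up series of daily average response times:

```python
from statistics import mean

def flag_anomalies(daily_ms, window=7, threshold=1.25):
    """Flag days whose response time exceeds `threshold` x the trailing mean.

    `daily_ms` is a list of daily average response times in ms; the first
    `window` days only seed the baseline and are never flagged.
    """
    flagged = []
    for i in range(window, len(daily_ms)):
        baseline = mean(daily_ms[i - window:i])
        if daily_ms[i] > threshold * baseline:
            flagged.append(i)
    return flagged

# Hypothetical series: steady ~200 ms, with one spike at index 9.
series = [200, 205, 198, 202, 199, 201, 203, 200, 204, 320, 202]
print(flag_anomalies(series))  # -> [9]
```

As caches warm and normal weekday volume returns, the baseline itself will drift, which is exactly why a single day is not worth concluding much from.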
Sorry for the mixture of Excel charts; visualization is not my strong suit.