ST04 — Monitor Database Performance
ST04 is used to monitor database performance. It provides all kinds of historical data and statistics so that database performance can be evaluated and improvement opportunities can be identified from both the application and the system point of view.
• Data buffer cache quality must be above 95% (meaning fewer physical reads from disk).
• User Calls/Recursive Calls > 2 (recursive calls grow over time).
• Reads/User Call < 30 (a value above 30 indicates expensive SQL statements).
• Time/User Call < 15 ms.
• Busy time to CPU time should be around a 60:40 ratio. A higher ratio means tuning is required.
• Sorts section: disk sorts should be less than 0.1% of total sorts.
• Shared pool statistics: DD (data dictionary) cache quality should be more than 99%, and similarly the SQL area get ratio.
• Instance performance: the maximum value of the soft parse ratio is 1, which is not reachable in practice because each SQL statement is hard parsed at least once and only soft parsed on subsequent executions. Still, this value should be as close as possible to 1 for a healthy system. Similarly, the in-memory sort ratio should be as high as possible for better system performance.
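As a rough illustration of the soft parse ratio just described, the following sketch (with hypothetical numbers, not taken from any real system) computes it as the share of parse calls that did not require a hard parse:

```python
def soft_parse_ratio(total_parses: int, hard_parses: int) -> float:
    """Share of parse calls served from the shared SQL area (soft parses)."""
    if total_parses == 0:
        return 1.0
    return (total_parses - hard_parses) / total_parses

# A statement executed 1,000 times is hard parsed once and soft parsed
# 999 times, so the ratio approaches but never reaches 1.
print(soft_parse_ratio(1000, 1))  # 0.999
```

This is why the ratio can only approach 1: the first execution of every distinct statement always contributes one hard parse.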
ST04 database performance overview screen:
Data Buffer
The data buffer is a memory area that stores database blocks so that future accesses to the same data can be served from memory. Accessing data in memory is much faster than accessing data on disk. The goal of the data buffer is to reduce physical I/O as much as possible.
The Size field displays the configured memory size. Quality indicates how often requested data is found in memory without disk access: Quality = memory reads / total reads × 100%. Quality is also known as the hit ratio.
Reads displays the sum of disk accesses and memory accesses since the start of the database server. Physical Reads: the total number of disk accesses.
Buffer Busy Waits: the total number of buffer busy waits. This wait occurs when a session wants to access a database block in memory but cannot because the buffer is busy. It is related to concurrent access to the same database table.
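The quality formula above can be checked with a short sketch. The figures are assumed for illustration only; in practice they come from the Reads and Physical Reads fields on the ST04 overview screen:

```python
def buffer_quality(total_reads: int, physical_reads: int) -> float:
    """Hit ratio in percent: memory reads / total reads * 100."""
    memory_reads = total_reads - physical_reads
    return memory_reads * 100 / total_reads

# Assumed example: 2,000,000 total reads, 60,000 physical reads.
quality = buffer_quality(2_000_000, 60_000)
print(f"{quality:.1f}%")  # 97.0% -> above the 95% target
```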
WHAT BASIS CONSULTANTS NEED TO KNOW – PERFORMANCE ANALYSIS
It is generally recommended that the buffer quality/hit ratio be greater than 95%. To improve the hit ratio, analyze the SQL cache to tune expensive SQLs and/or increase the data buffer.
However, a high ratio can be inflated by a very frequently executed SQL statement that gets its data from the buffer. For example, suppose a database has three tables of the same size, but the buffer can hold only one table at a time. If one table is accessed 100 times and the remaining two tables are accessed once each, the hit ratio will be much higher than if each table were accessed 34 times. If the number of wait situations exceeds 5% of reads, you must perform an extensive analysis of the database. Wait situations may also occur because the data buffer is too small, so if the cache hit ratio is low at the same time, you might need to increase the data buffer first.
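The three-table example can be made concrete with a toy simulation. The single-slot "buffer" below is a deliberate simplification (a crude LRU of capacity one, not Oracle's actual buffer management), but it shows how a skewed access pattern inflates the overall hit ratio:

```python
def hit_ratio(accesses):
    """Toy buffer that holds exactly one table at a time."""
    cached = None
    hits = 0
    for table in accesses:
        if table == cached:
            hits += 1
        else:
            cached = table  # miss: load this table into the buffer
    return hits * 100 / len(accesses)

skewed  = ["A"] * 100 + ["B", "C"]  # one hot table, two rarely used
uniform = ["A", "B", "C"] * 34      # each table accessed 34 times

print(f"skewed:  {hit_ratio(skewed):.1f}%")   # ~97% despite two cold tables
print(f"uniform: {hit_ratio(uniform):.1f}%")  # 0% -- every access evicts
```

The same 95%+ ratio can therefore describe a well-sized buffer or simply a workload dominated by a few hot tables, which is why the ratio alone is not conclusive.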
Shared Pool
The shared pool is a memory area that stores the parsed, executable versions of SQL statements and database objects. The goal of the shared pool is to promote reuse of parsed SQL and avoid disk access for database objects.
DD-Cache Quality indicates how often a requested data object is found in the cache. The data dictionary cache stores information on objects in the Oracle database, such as their names, definitions, and access rights.
SQL Area GetRatio and PinRatio indicate how often parsed SQL statements are found in the SQL area of the shared pool. The shared SQL area (also called the shared cursor cache) stores parsed SQL statements (parsing means processing an SQL statement to derive an access path to the data within the database).
The dictionary cache quality should be above 98% in a production system, and the SQL area get/pin ratio should be above 99%. If not, consider increasing the shared pool size.
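The shared pool thresholds from the text can be expressed as a simple check. The function and its input figures are illustrative, not an SAP or Oracle API:

```python
def shared_pool_warnings(dd_cache_quality: float,
                         sql_area_get_ratio: float,
                         sql_area_pin_ratio: float) -> list:
    """Flag shared pool ratios below the production thresholds in the text."""
    warnings = []
    if dd_cache_quality < 98.0:
        warnings.append("DD cache quality below 98%")
    if sql_area_get_ratio < 99.0 or sql_area_pin_ratio < 99.0:
        warnings.append("SQL area get/pin ratio below 99%")
    return warnings

print(shared_pool_warnings(99.2, 99.5, 99.8))  # [] -> healthy
print(shared_pool_warnings(97.0, 98.5, 99.9))  # both warnings fire
```

Either warning points to the same mitigation named above: consider a larger shared pool.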
Redo Log Buffer
The redo log buffer is a circular memory area that stores every database change before it is written to the log files. Every database change creates one "redo" entry in the log buffer.
Changes in the log buffer can be written to a redo log file before the corresponding transaction is committed. If the redo log buffer fills up, or another transaction commits, all entries in the log buffer are written to a redo log file by the Oracle LGWR process, even though some redo records may not yet be committed. If necessary, the database can roll back these changes.
The Size field shows the memory size of the log buffer. Entries shows the total number of redo entries since the database was started.
Allocation Retries and Allocation Fault Rate show the number of failed attempts to allocate free space in the redo log buffer and the corresponding ratio.
Redo Log Wait and Log Files display wait situations and the number of log files used for recovery. Log file sync waits are waits on the LGWR process to complete a write to disk.
****If the number of failed attempts is high, you need to review the LOG_BUFFER_WAIT event further. Possible mitigation actions are to increase the redo log buffer and/or tune the I/O system.****
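The allocation fault rate mentioned above is simply the retries expressed as a percentage of redo entries. A minimal sketch, with assumed counter values:

```python
def allocation_fault_rate(allocation_retries: int, redo_entries: int) -> float:
    """Failed redo-buffer space allocations as a percentage of redo entries."""
    if redo_entries == 0:
        return 0.0
    return allocation_retries * 100 / redo_entries

# Assumed figures: 150 retries against 3,000,000 redo entries.
rate = allocation_fault_rate(150, 3_000_000)
print(f"{rate:.3f}%")  # 0.005% -> negligible
```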
Calls
This section displays important statistics on database operation. Calls displays the total number of user calls in the database since it was started.
Commits: displays the total number of committed transactions since the database was started.
Rollbacks: displays the number of rolled-back transactions since the database was started.
Recursive Calls: displays the number of recursive calls. Recursive Calls/User Calls displays the total numbers of recursive calls and user calls. SAP mentions that the total number of recursive calls should be smaller than the number of user calls by a factor of 5 in a productive environment.
Parses displays the total number of parsed SQL statements. SAP mentions that the rate of parses to user calls should be under 25%.
Reads/User Calls displays the number of Oracle blocks read on average from the data buffer to answer user queries. If this number is greater than 30, it points to "expensive" SQL statements.
****SAP mentions that the number of recursive calls should be smaller than the number of user calls by a factor of 5 in a productive environment, and that the rate of parses to user calls should be under 25%. If Reads/User Calls is greater than 30, this points to "expensive" SQL statements, and you need to analyze the SQL cache to tune them.****
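These three rules of thumb can be applied together to the cumulative counters from this screen. The function and the sample figures are illustrative only:

```python
def call_ratio_checks(user_calls: int, recursive_calls: int,
                      parses: int, reads: int) -> list:
    """Apply the rule-of-thumb thresholds from the text to cumulative counters."""
    findings = []
    if recursive_calls * 5 > user_calls:  # recursive calls < user calls / 5
        findings.append("recursive calls too high relative to user calls")
    if parses * 100 > user_calls * 25:    # parses / user calls under 25%
        findings.append("parse rate above 25% of user calls")
    if reads > user_calls * 30:           # reads / user call under 30
        findings.append("expensive SQL suspected (reads/user call > 30)")
    return findings

# Assumed counters: only the reads/user-call threshold is violated here.
print(call_ratio_checks(1_000_000, 150_000, 200_000, 40_000_000))
```

Integer cross-multiplication is used instead of division so the checks also behave sensibly when a counter is zero shortly after database start.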
Wait Situations
This section displays a summary of database wait situations. A wait itself does not consume resources but is the result of resource contention. Wait situations should be reviewed and mitigated whenever possible.
Busy Wait Time shows the cumulative time used up because the database system had to wait for a resource that was not available at the time of the request. If this value is high, you must perform a detailed analysis via Oracle wait event analysis. CPU Time: the total CPU time used since the database was started.
CPU Usage: the average CPU usage since the database was started.
****If busy wait time is high, perform a detailed analysis using wait event analysis and CPU load analysis on the database server. A CPU usage of 20% or 30% is not high in itself. However, a change in usage from 20% to 30% represents a 50% increase, which could be significant; detailed investigation is needed to see what caused this "dramatic" change and its possible impact.****
Redo Logging: the information here relates to redo log output performance.
Table scans & Fetches
This section displays a summary of sequential reads as well as totals related to chained/continued rows and reads via index. From a performance point of view, we need to avoid sequential reads on long tables.
The Short Tables field displays the total number of sequential read operations on small tables (tables with fewer than 5 Oracle data blocks) since the database was started.
The Long Tables field displays the number of sequential read operations on large tables (tables with 5 or more Oracle data blocks) since the database was started.
The Fetch By Row ID field displays the total number of table rows selected via an index or a unique ID (row ID, ROWID) in the SQL statement since the database was started.
Continued Row displays the number of chained rows fetched since the database was started.
****If the number of fetches by continued row is high, you need to analyze the index/table fragmentation situation and consider index/table reorganization. For short tables, a sequential read might be better than an index read. For long tables, reading via an index is generally better than a sequential read; however, if a large share of the records (say 20%) is needed, then a sequential read might be the better bet from a performance point of view. Further analysis of sequential reads needs to be done via SQL cache analysis.****
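The access-path rule of thumb above (table size plus selectivity) can be sketched as a small decision helper. The 5-block and 20% thresholds come from the text; everything else here is a simplification, since a real optimizer weighs many more factors:

```python
def suggest_access_path(table_blocks: int, rows_needed: int,
                        total_rows: int) -> str:
    """Rule-of-thumb access path based on table size and selectivity."""
    if table_blocks < 5:                  # "short" table: full scan is cheap
        return "sequential read"
    if total_rows and rows_needed / total_rows >= 0.20:
        return "sequential read"          # fetching ~20%+ of rows: scan wins
    return "index read"

print(suggest_access_path(3, 10, 50))             # short table -> sequential
print(suggest_access_path(500, 100, 100_000))     # 0.1% of rows -> index
print(suggest_access_path(500, 30_000, 100_000))  # 30% of rows -> sequential
```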
Sorts
The Sorts section displays the total number of sort operations performed in main memory and on disk.
Memory displays the number of sorts performed in main memory. These sorts are normally faster than sorts on disk.
Disk displays the number of sorts temporarily written to the tablespace PSAPTEMP.
****Ideally, sorts should happen in memory from a performance point of view. SAP mentions that the ratio of disk sorts to in-memory sorts should be under 5%. If this is not the case, increase the Oracle parameter sort_area_size.****
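The disk-to-memory sort ratio from the note above is straightforward to compute. The figures are assumed for illustration:

```python
def disk_sort_ratio(memory_sorts: int, disk_sorts: int) -> float:
    """Disk sorts as a percentage of in-memory sorts (the ratio in the text)."""
    if memory_sorts == 0:
        return float("inf") if disk_sorts else 0.0
    return disk_sorts * 100 / memory_sorts

# Assumed figures: 500,000 in-memory sorts, 5,000 disk sorts.
print(disk_sort_ratio(500_000, 5_000))  # 1.0 -> under the 5% guideline
```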
There are many other areas you need to look into and explore. Also, keep an eye on both the overview and the detailed analysis before coming to any conclusion.