Hands-On with the SQL Monitor Part 3: Analysis of Individual Database Tables
Hey folks. I’m back again with another hands-on scenario for the SQL Monitor. Last time we focused on SQL statements accessing huge volumes of data and empty FOR ALL ENTRIES tables. Today I will show you how to use the SQL Monitor to analyze individual database tables.
Imagine the following situation: In our system we have a custom database table ZSQLM_TEST_ORDER, which is central to a number of our business processes. Since we are curious about the actual usage of this table in our productive system, we ask ourselves the following questions:
- Which request entry points (that is, business processes) have accessed the table, and which of them is most expensive in terms of SQL executions and database time?
- What is the table’s SQL operation profile (SELECT, INSERT, …), and does the table qualify for table buffering?
“Which request entry points (that is, business processes) have accessed the table, and which of them is most expensive in terms of SQL executions and database time?”
As in the previous hands-on scenarios, the starting point of our analysis is the data display transaction SQLMD. The first question asks us to find the list of request entry points that have accessed our database table ZSQLM_TEST_ORDER. For this purpose we enter the table name in the selection screen field “Table Name” and choose to aggregate the results by request in the “Aggregation” section. Depending on the number of requests you expect, it might also be advisable to clear the field “Maximal Number of Records” to avoid truncation. For our particular case, however, the default value of 200 will be more than sufficient. The configured selection screen looks like this:
After hitting F8, we are presented with the list of request entry points we were looking for. As you can see in the screenshot below, three ABAP reports and an RFC function module have accessed our database table.
For the second part of the first question we need to find the most dominant request out of the list. This can easily be achieved by evaluating the columns “Executions” and “Total Time”. A quick check reveals that the record at the very top of the list (ZSQLM_TEST) causes about half of all SQL executions on the table and accounts for about 50% of the total database time spent on the table. So if, for instance, we wanted to reduce load on this specific database table, we would first analyze SQL statements triggered by this top request entry point.
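The share computation behind this quick check is straightforward. Here is a minimal sketch in Python; note that the entry-point names (other than ZSQLM_TEST) and all figures are invented for illustration and are not taken from the actual screenshot:

```python
# Hypothetical aggregated SQL Monitor records per request entry point:
# (entry point, executions, total DB time in ms). All figures are invented.
records = [
    ("ZSQLM_TEST",       250_000, 4_800),
    ("ZSQLM_REPORT_A",   120_000, 2_500),  # hypothetical report name
    ("ZSQLM_REPORT_B",    90_000, 1_600),  # hypothetical report name
    ("Z_RFC_ORDER_READ",  40_000,   700),  # hypothetical RFC module name
]

total_exec = sum(execs for _, execs, _ in records)
total_time = sum(time_ms for _, _, time_ms in records)

# Rank entry points by total DB time and print their relative shares
for name, execs, time_ms in sorted(records, key=lambda r: r[2], reverse=True):
    print(f"{name}: {execs / total_exec:.0%} of executions, "
          f"{time_ms / total_time:.0%} of DB time")
```

With these invented figures, the top entry point ZSQLM_TEST accounts for 50% of both executions and database time, mirroring the situation described in the scenario.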
“What is the table’s SQL operation (SELECT, INSERT, …) profile and does the table qualify for table buffering?”
Question number two asks for the SQL operation profile of our database table, that is, the distribution of the number of SQL executions across SELECT, INSERT, UPDATE, … operations. In addition, the second part of the question is concerned with the possibility of table buffering. Therefore, we will also need to know how the number of accessed database records spreads across the different operation types. In order to obtain this information, we need to return to the selection screen of SQLMD and make some adaptations. First, we disable aggregation by selecting “None” in the “Aggregation” section. Moreover, we clear the field “Maximal Number of Records” in order not to truncate any records. Here is the resulting selection screen:
Hitting F8 now brings up a list of all unaggregated SQL Monitor records that involve the table ZSQLM_TEST_ORDER. Note that unaggregated here means that the monitoring records were not grouped by source code position or request entry point. The reason for choosing this representation is that we can now see the SQL operation type directly in the top-level list (column “SQL Operation Type”) without having to drill down.
To obtain the distribution of SQL executions and accessed database records over the different SQL operation types, we can now use the standard ALV functionality for calculating subtotals. All we need to do is select the column “SQL Operation Type” and click the “Subtotals…” button in the ALV toolbar. Afterwards we can select the columns “Executions” and “Total DB Records” and click the “Total” toolbar button to calculate the relative amounts. The final result is shown in the following screenshot.
As you can see from the screenshot above, more than 99% of all SQL executions on our table are SELECT statements while DELETE and INSERT operations account for significantly less than 1% (see column “Executions”). This almost looks like a read-only scenario. However, despite being negligible in terms of SQL executions, DELETE and INSERT statements are still responsible for more than 40% of all accessed database records (see column “Total DB Records”).
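If you export the unaggregated list, the subtotal step can also be reproduced outside the ALV. Here is a minimal Python sketch; the record counts are invented but chosen to roughly match the proportions described above:

```python
from collections import Counter

# Hypothetical unaggregated records: (SQL operation, executions, total DB records).
# The numbers are invented but mirror the proportions described in the text.
records = [
    ("SELECT", 400_000,   800_000),
    ("SELECT", 300_000,   600_000),
    ("INSERT",      10, 1_000_000),
    ("DELETE",      10, 1_000_000),
]

# Subtotal executions and accessed DB records per SQL operation type
execs, db_records = Counter(), Counter()
for op, n_exec, n_rec in records:
    execs[op] += n_exec
    db_records[op] += n_rec

total_exec = sum(execs.values())
total_rec = sum(db_records.values())
for op in execs:
    print(f"{op}: {execs[op] / total_exec:.3%} of executions, "
          f"{db_records[op] / total_rec:.1%} of DB records")
```

With these figures, SELECT statements make up more than 99.99% of all executions, while INSERT and DELETE still account for well over 40% of the accessed records, which is the same pattern as in the screenshot.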
So what does this mean with regard to buffering the table? In general, write accesses are disadvantageous for the table buffer since they trigger potentially costly synchronization between application servers. In our particular case, a relatively large number of database records is accessed by a number of DELETE and INSERT statements that is vanishingly small compared to the vast number of SELECT statements. On average, each individual write access modifies 100,000 records (divide “Total DB Records” by “Executions” in screenshot 4) and thus leads to a complete invalidation of the buffered table. The subsequent buffer synchronization can therefore be performed as a block operation, which is relatively fast compared to updating the records individually. Since this happens only once for every 35,000 SELECT statements (divide the number of SELECTs by the number of INSERTs and DELETEs in screenshot 4), it is an acceptable overhead. In total, we can conclude that buffering our database table will very likely result in a significant speed-up of read accesses.
Before activating the table buffer for your own database tables you should, however, be aware that data read from the buffer might not be up to date. This is because synchronization across multiple application servers after write accesses can take up to several minutes. Consequently, you should not use buffering when reading stale data is not acceptable for your application. Depending on your specific case, this might even have legal implications.
In today’s hands-on scenario I showed you how to analyze your recorded SQL Monitor data on the level of database tables. With great ease, we have obtained a list of request entry points – that is business processes – that have accessed a specific database table. In addition, we employed standard ALV features to construct our table’s SQL operation profiles for the number of executions and the number of accessed database records. These profiles led us to the conclusion that buffering should be considered an option for our particular table.
It is important to note that, as an application developer, you would have had a hard time gaining this insight from your source code alone. While you might know how many read and write database accesses are in your code, you usually have no way to determine how often these operations occur and how many database records they access during productive use of your application. It gets even more difficult if your application doesn’t have exclusive access to the database table. The SQL Monitor greatly simplifies this task.
Finally, let me remark that the table filtering feature of transaction SQLMD has a wide range of additional use cases. For instance, it can help you find every development object that has accessed a particular database table. This might be important when you want to analyze the encapsulation and security of your data. In addition, the table filtering can also assist you in locating lock situations caused by concurrent accesses to a table from different processes.
That’s enough for today. If you still can’t get enough of SQL monitoring make sure not to miss my next post in ABAP for SAP HANA.
The follow-up post can be found here: Hands-On with the SQL Monitor Part 4: Administration