In the first post of this series I provided you with the big picture of what the SQL Monitor is and what it can do for you. In a nutshell, it’s like an aggregated SQL Trace that runs system-wide and supplies various kinds of runtime measures as well as a connection between the SQL statement and the driving business process. Today you will finally see the SQL Monitor in action as I will guide you through the first usage scenario.
From an end user’s perspective the SQL Monitor consists of two components. The administration transaction SQLM, on the one hand, provides an overview of the monitor’s current status and allows you to activate and deactivate the monitoring. The data display transaction SQLMD, on the other hand, enables you to analyze the recorded data. A separate blog post will be devoted to transaction SQLM and the associated admin tasks at a later time. In this post, however, we will focus on analyzing the collected data using transaction SQLMD. In particular, we will address the following three questions:
- What are our system’s most expensive request entry points (transactions, reports, …) in terms of the total database time?
- Which SQL statements dominate the SQL profile of the one most time-consuming entry point?
- How can we improve the (database) performance of this entry point?
To answer these questions, we log on to our system and fire up transaction SQLMD. Since the amount of collected monitoring data can get quite large, SQLMD first brings up a selection screen. This screen allows you to limit the amount of displayed data through several different filter options. For instance, you may restrict the objects containing the SQL statements (section “Objects”) – for example, to your custom code in packages Y* and Z*. In addition, you can filter the request entry points (section “Requests”) as well as the tables involved in the SQL statements (section “Tables”) and you can also choose how to aggregate the data (section “Aggregation”). Finally, you can decide by which criterion to sort the data and how many records to display (section “Order by”). Keep in mind that here the term “records” refers to the number of records displayed on the following screen and not to the number of records accessed by the SQL statements.
“What are our system’s most expensive request entry points (transactions, reports, …) in terms of the total database time?”
In order to answer question number one, we choose to aggregate the data by request and to order the results by the total (database) time. In addition, we consider only the top 200 records – that is, the 200 most time-consuming request entry points – which will suffice for a rough overview. Note that we don’t use any of the filter settings since we want to get the top entries with respect to our entire system. Hitting F8 now takes us directly into the data by bringing up the following screen.
This list displays our system’s most expensive entry points – that is, business processes – in terms of database time, together with a wide range of aggregated runtime data. As you can see, the data involves entry points in both custom and SAP code. Hence, the SQL Monitor provides us with a comprehensive profile of our system.
In order to analyze the data, it is important to realize that all of the runtime measures refer to the SQL statements triggered by an entry point rather than to the entry point itself. For instance, the column “Executions” denotes the total number of executions of all SQL statements caused by an entry point, not the number of times the entry point itself was executed. Moreover, all time-related columns are given in milliseconds.
Turning back to question number one, we now inspect the column “Total Time”, which provides the total database time of all SQL statements that were executed by an entry point. Comparing the values, we easily recognize that in terms of database time our system is clearly dominated by the three entry points at the top of the list, all of which lie in custom code. Hence, the first question is answered, and if we wanted to reduce our system’s database load, we would definitely start with those three entry points.
“Which SQL statements dominate the SQL profile of the one most time-consuming entry point?”
Moving on to the second question, we now focus on the single most time-consuming request entry point, which is the record at the very top. As you can see from the screenshot above, this entry point has caused more than 250 million database accesses (column “Executions”), which have resulted in nearly 150 million milliseconds of database time (column “Total Time”). To get the SQL profile of this entry point, we have to use the column “Records”, which – contrary to what you might think – has nothing to do with the number of database records processed by an entry point. “Records” is a generic column which denotes the number of detail records and offers a hotspot link to drill down. Since we chose to aggregate by the request entry point, the detail records are the source code locations of the individual SQL statements that were triggered by an entry point. Consequently, clicking the “Records” hotspot link for our top entry point takes us to its SQL profile, which looks like this:
Each row in this list corresponds to an individual SQL statement triggered by our top entry point. For all of these statements the list includes a variety of aggregated runtime data as well as meta information like the involved database tables and the SQL operation type (SELECT, INSERT, UPDATE, …).
To analyze the SQL profile, we first note that the list of statements is again sorted by the total database time. Observing the column “Total Time”, we see that the uppermost statement clearly dominates all other statements. In fact, with an aggregated runtime of more than 133 million milliseconds, it accounts for about 90% of the database time of the entire entry point. This observation answers the second question: if we wanted to optimize the entry point’s database performance, we would doubtlessly inspect this top statement first.
“How can we improve the (database) performance of this entry point?”
Considering the third and last question, we take a closer look at the data for the topmost statement. For one thing, the column “Mean Time” reveals that the statement itself is rather small, since it has a mean runtime of a mere 0.5 milliseconds. Furthermore, the column “Exe./Sess.” (executions per internal session) tells us that on average the statement is executed almost 300,000 times in each internal session – at 0.5 milliseconds apiece, that amounts to roughly 150 seconds of database time per session. Hence, we can conclude that the statement must be nested in a loop – maybe even in multiple loops across different stack levels.
Background Knowledge: Internal Session
In this context, the number of internal sessions denotes the number of different roll areas in which a SQL statement was executed. A roll area is an area of memory that belongs to a work process. You can read up on the terms “roll area” and “work process” at http://help.sap.com/.
In layman’s terms, this means that for the most important request entry points, such as transactions or reports, the number of internal sessions simply equals the number of times the transaction or report was executed. For RFC function modules things are a little more involved: as long as an RFC connection remains open, all calls to RFC modules in one and the same function group will use the same roll area in the remote system. This can, for instance, happen when you make successive calls to an RFC module from one and the same report. Thus, for RFC modules the number of internal sessions can be smaller than the number of times the module was called.
In any case, if a SQL statement is executed very often within one internal session – that is, if you observe high values in the column “Exe./Sess.” – this is a clear indication of a nested loop.
To validate our assumption, we navigate to the source code by simply double-clicking the row in the list.
As you can easily see from the screenshot, the statement is a SELECT SINGLE – that means it is small – and it is wrapped in a local loop. In light of the rather large number of average executions per internal session, it is even likely that the statement is also contained in a non-local loop. In other words, the method get_details is probably called in an external loop, which could be verified by generating the corresponding where-used list. What’s remarkable at this point is that the SQL Monitor data had already provided us with correct and valuable insights before we inspected the source code itself.
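Although the exact code will differ from system to system, the pattern visible in the screenshot can be sketched roughly as follows. Note that the table ZDETAILS, the variable names, and the method signature are invented purely for illustration:

```abap
METHOD get_details.
  " Hypothetical sketch of the anti-pattern - all names are invented.
  " A small SELECT SINGLE is fired once per loop pass, so the number of
  " database round trips grows linearly with the size of it_items.
  LOOP AT it_items INTO ls_item.
    SELECT SINGLE * FROM zdetails INTO ls_details
      WHERE id = ls_item-id.
    " ... process ls_details ...
  ENDLOOP.
  " If get_details itself is called from an outer loop, the executions
  " multiply - which would explain the high value in "Exe./Sess.".
ENDMETHOD.
```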
So what can you do as an ABAP developer in such a situation to improve the performance? Well, if the combination of the words “small statement” and “loop” has not rung your alarm bells yet, you should first of all study SAP’s golden OpenSQL rules. Having done so, you will come to the conclusion that it is best to resolve the loop(s) and to rewrite the sequence of small statements into a single large statement. This could, for instance, be achieved by using a join, a subquery, or a FOR ALL ENTRIES clause, all of which will, in general, exhibit much better performance. This is especially true on SAP HANA, where the database engine is optimized for large-scale block operations. Thus, question number three is answered as well.
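As a hedged sketch of such a rewrite – again using invented table and field names, since the real ones are not shown here – the loop of SELECT SINGLEs could be condensed into a single blocked statement:

```abap
" Hypothetical sketch - table ZDETAILS and all names are invented.
" One large statement replaces n small ones. Guard against an empty
" driver table: FOR ALL ENTRIES with an empty table would select
" ALL rows of ZDETAILS.
IF it_items IS NOT INITIAL.
  SELECT * FROM zdetails INTO TABLE lt_details
    FOR ALL ENTRIES IN it_items
    WHERE id = it_items-id.
ENDIF.
" Afterwards, read lt_details in memory inside the loop (e.g. from a
" sorted table) instead of querying the database once per item.
```

The empty-table pitfall mentioned in the comment is, incidentally, exactly the topic of the follow-up post linked at the end.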
Performing the code optimization itself is beyond the scope of this blog post, which brings us to the end of our discussion. To recap, we have used the SQL Monitor to get an overview of our system’s most expensive request entry points in terms of total database time. Moreover, for the single most time-consuming request we have obtained the SQL profile through a simple UI drill-down. Finally, we have inspected the top statement within this profile and leveraged the monitoring data to identify performance issues. Astonishingly, all of this could be done with the greatest of ease and required nothing more than what is shown in the screenshots.
That’s all for today folks. If you like what you saw, make sure to watch out for my next blog post in ABAP for SAP HANA.
Hungry for more? Check out the follow-up post: Hands-On with the SQL Monitor Part 2: Top Statements and Empty FOR ALL ENTRIES Tables