Hands-On with the SQL Monitor Part 2: Top Statements and Empty FOR ALL ENTRIES Tables

Welcome to the third post in my blog series on the SQL Monitor. Last time I showed you how to use the SQL Monitor to find and analyze your system’s most time-consuming requests – that means business processes. In addition to this request-based approach, it can also be useful to perform an analysis on the SQL statement level, because a single SQL statement might be triggered by several different requests. Hence, if such a statement exhibits particularly poor performance, multiple business processes can be affected.

One of the common reasons for poor performance is reading unnecessary database records. Therefore, today we will scan our demo system for SQL statements that are expensive due to the huge amount of data they access. In particular, we want to answer the following questions:

  1. Which SQL statements have read or modified the most database records?
  2. Which requests trigger the top SQL statement in terms of accessed database records?
  3. How can we improve this statement’s performance?

Question 1

“Which SQL statements have read or modified the most database records?”

As before, the starting point of our analysis is the SQL Monitor’s data display transaction SQLMD. To answer the first question, we need an overview of the SQL statements executed in our system, ordered by the total number of accessed database records. The first requirement – grouping the data by SQL statement – can easily be met by choosing to aggregate by source position on the selection screen (section “Aggregation”). Establishing the appropriate ordering, however, requires some manual steps on our Netweaver 7.40 SP3 demo system, because in this release there is no option to sort the data by the total number of accessed database records in section “Order By” (a corresponding option is planned for Netweaver 7.40 SP5). The workaround is to clear the field “Maximal Number of Records”, which causes all available monitoring records to be displayed. Afterwards we can manually sort the result list in the ALV.

Note that if you have a large number of monitoring records in your system, SQLMD might require a substantial amount of time to build the result list. In such a case you may be able to speed up the process by setting a high number (for instance 10,000) in the field “Maximal Number of Records” instead of clearing it completely.

Altogether, the configured selection screen looks like this:


Screenshot 1: Configuration of the selection screen in transaction SQLMD.

After hitting F8 we are then presented with the list of all SQL statements that were executed in our system. To obtain the desired ordering, all we need to do is locate the column “Total Records” and sort it in descending order. A word of advice: be careful not to confuse the columns “Records” and “Total Records”. As explained in my last post, the former denotes the number of available detail records while the latter indicates the number of accessed database records. For our demo system the final result is depicted in the following screenshot.


Screenshot 2: List of SQL statements ordered by the total number of accessed database records.

This list provides an overview of SQL statements that have caused a large amount of data to be transferred between the database and the application server. As you can see, all of these statements are located in custom code. The two dominating statements are at the top of the list, and each has accessed more than a billion database records in total. Hence, question number one is answered.

Question 2

“Which requests trigger the top SQL statement in terms of accessed database records?”

Turning to the second question, we now focus on the statement at the very top of the list, which is located in include ZSQLM_TEST11 at line 34. We want to know which request entry points have caused the statement to be executed. If you have worked through my previous post, you might already guess that this information is just a single click away. Remember the column “Records”? As explained in the aforementioned post, this is a generic column which denotes the number of available detail records. Since we have chosen to aggregate the data by source position, in our case the detail records are the requests that have triggered the SQL statement. To drill down into these detail records, all you have to do is click the hotspot link in the “Records” column. It couldn’t be any easier, could it?

For the SQL statement at the top of the list, the “Records” column indicates that there are three different driving requests – that means business processes. In particular, clicking the hotspot link takes us to the following list:


Screenshot 3: List of request entry points that have caused the top SQL statement to be executed.

As you can see, all of the driving entry points caused our statement to access an average of about 25,000 database records on each execution (column “Mean Recs.”). The majority of executions – and thereby the majority of accessed database records – were caused by the report ZSQLM_TEST11. In addition, the statement was also triggered by an RFC module and a transaction both of which are, however, almost negligible due to their low number of executions. Thus, if we optimized our top statement’s performance, the request that would benefit most is the report ZSQLM_TEST11. These observations answer the second of our questions.

Question 3

“How can we improve this statement’s performance?”

Focusing on the last question, let us now turn back to the SQL statement itself and think about how we could speed it up a little. Your first impulse might be to investigate the code but hold on a second since before digging through the ABAP sources the SQL Monitor may already provide you with valuable insights!

When dealing with SQL statements that process an immense number of database records, it is advisable to check the maximum and minimum number of accessed records in the columns “Max. Records” and “Min. Records”, respectively. As you can see from the second screenshot, for our top statement the maximum amounts to 75,000 records while the minimum is 0. Moreover, the third screenshot indicates that the statement accessed the database table ZSQLM_TEST_USERS (column “Table Names”). Checking the table contents with transaction SE16, we realize that it contains exactly 75,000 records.

Screenshot 4: Number of records contained in the table accessed by the top SQL statement.

Hence, our top statement sometimes accessed the entire contents of the database table and other times accessed nothing at all. This is very suspicious, and especially the fact that at times the whole content of the database table was accessed indicates that our top statement may involve an empty FOR ALL ENTRIES table. To check this assumption we can navigate to the source code by double-clicking either the top statement itself in the overview (second screenshot) or one of its driving requests in the detail view (third screenshot).


Screenshot 5: Source code for the top statement.

Just as suspected, the statement is a SELECT ... FOR ALL ENTRIES without any prior content check on the FOR ALL ENTRIES table. When the FOR ALL ENTRIES table is empty, the SELECT yields all the records contained in the database table. What’s particularly important is that an empty FOR ALL ENTRIES table invalidates the WHERE clause completely, even if it contains additional conditions. In the majority of cases this is not what the developer intended, especially when the database table contains even more records than in our case. To put it plainly, this is not just a statement with room for performance optimization – it’s a bug. To fix it, all you need to do is wrap the SELECT in an IF statement to make sure it is never executed with an empty FOR ALL ENTRIES table. If you really want to access all database records for an empty FOR ALL ENTRIES table, add an ELSE branch and use a plain SELECT without any WHERE clause. This makes your code much more robust and readable and, finally, answers the third and last question.
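To illustrate the fix, here is a minimal ABAP sketch. The table ZSQLM_TEST_USERS is taken from the scenario above, but the internal table `lt_users` and the field `user_id` are hypothetical, since the original include is not shown in this post:

```abap
" Guard the SELECT so it never runs with an empty FOR ALL ENTRIES table.
" lt_users and user_id are invented names for illustration only.
IF lt_users IS NOT INITIAL.
  SELECT * FROM zsqlm_test_users
    INTO TABLE lt_result
    FOR ALL ENTRIES IN lt_users
    WHERE user_id = lt_users-user_id.
ELSE.
  " Only add this branch if reading the whole table really is the
  " intended behavior for an empty FOR ALL ENTRIES table.
  SELECT * FROM zsqlm_test_users
    INTO TABLE lt_result.
ENDIF.
```

The explicit ELSE branch also documents the intent: a reader immediately sees whether the full-table read is deliberate or whether the statement should simply be skipped.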


Stepping through today’s scenario, I showed you how to analyze the SQL Monitor data starting from the SQL statement level. For this purpose we generated a list of all SQL statements executed in our system and sorted the results by the total number of accessed database records. Focusing on the top statement we used a simple one-click drill-down operation to obtain the list of requests (business processes) that caused the statement to be executed. Furthermore, we leveraged the monitoring data to reveal that the top statement involves an empty FOR ALL ENTRIES table.

One final remark: In view of SAP HANA, sorting the SQL Monitor data by the total number of accessed database records can also be useful when using aggregation by request. This allows you to locate data-intense requests which might significantly benefit from a code push-down onto the database.

That’s it for today. If you still feel like walking through another hands-on scenario (I hope you do), don’t miss my next post in ABAP for SAP HANA.

Couldn’t find the next blog post? Here it is: Hands-On with the SQL Monitor Part 3: Analysis of Individual Database Tables

  • Thanks a lot for such a wonderful blog. I got one query while playing with SQL Monitor.

Do we see a date and time stamp in the result screen of the SQL Monitor? If we run the same program (Z program) multiple times, how would I differentiate between multiple runs on the same database table?

    • Hi Vipin.

      Thanks for your feedback. The SQL Monitor provides aggregated runtime data. So if you execute a program multiple times without changing it, then the runtime data for each execution simply adds up in the SQL Monitor data and things like the mean runtime are updated accordingly. That means you cannot differentiate between individual executions of an SQL statement. For Netweaver 7.40 SP5 we are, however, planning to add the date and time of the last execution of each SQL statement in the result list.

      For the use case you're referring to you should rather use trace tools like the ST05. As explained in the first post of my series, the SQL Monitor is like an aggregated SQL trace that runs system-wide with a minimal performance overhead.

      Feel free to reach out to me if you have any further questions.



      • I have one more Query, What are different SQL operation type available ?

If the SQL operation type is SELECT, FOR ALL ENTRIES, can I then assume that the code has come across a SELECT statement where the driver table was empty and all the records were fetched from the database table?

        • The column "SQL Operation Type" indicates the general type of a SQL statement. Possible values include SELECT, INSERT, UPDATE, MODIFY, DELETE, OPEN CURSOR and FETCH. For Netweaver 7.40 SP5 we plan to add a few additional types such as ABAP Managed Database Procedure and SELECT FOR ALL ENTRIES. The latter does, however, only express that the statement has the FOR ALL ENTRIES addition but not that the FOR ALL ENTRIES table was empty. As explained in my post, you can evaluate the columns "Min. Records" and "Max. Records" to locate statements with a potentially empty FOR ALL ENTRIES table.

          For Netweaver 7.40 SP5 we are working on a new transaction, SRTCM, which provides access to a runtime check that monitors the system for executions of SELECT statements with an empty FOR ALL ENTRIES table.

    • Hi,
      I start using SQL Monitor a few days ago.
With the transaction SQLMD you can select the results by time interval. (It doesn't work with SQLM and F7!)
After hitting F8 you can select the line where you want to see the time stamp. Then click on the button 'Display time series'. You only see the timestamp from the time of the update.
      You can change the update interval in the SQL monitor settings in SQLM.

Great job finding the error. I'd suggest a better fix would be to eliminate the FOR ALL ENTRIES entirely if possible, since, as has been so often demonstrated, in the vast majority of circumstances an INNER JOIN will be more efficient.

    It also requires less code and doesn't require the programmer to remember to check for an empty table...

    • Hi Matthew,

      You're perfectly right. An INNER JOIN would be another viable solution to the empty FOR ALL ENTRIES issue. As you said, it's potentially more robust and efficient, although the latter might depend on the specific case.
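For comparison, an INNER JOIN version of such an access could look like the following sketch. The second table `zsqlm_test_orders` and all field names are invented for illustration, as the real data model is not part of this post:

```abap
" Hypothetical sketch: replacing FOR ALL ENTRIES with an INNER JOIN.
" Table zsqlm_test_orders and all field names are invented examples.
SELECT u~user_id u~name
  FROM zsqlm_test_users AS u
  INNER JOIN zsqlm_test_orders AS o
    ON o~user_id = u~user_id
  INTO TABLE lt_result
  WHERE o~status = 'O'.
```

Unlike FOR ALL ENTRIES, the join needs no guard: if the joined table contributes no matching rows, the SELECT simply returns an empty result instead of the whole database table.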



  • Hi Johannes,

    I'm reading your blogs with a lot of interest. I'm using the /SDF/SQLM transaction because I'm working on an older SAP version. It offers me a lot of very useful information. However, there is one thing that I haven't been able to find any documentation about: what is entry point 'INITIAL ROOTID'. (entry point type = Unknown). Or is this just a bug in the previous version?



    Ben Meijs