
Welcome back to another round of SQL Monitor hands-on action. While previous posts in this series were concerned with analyzing the recorded data, today we will focus on administrative tasks. In particular, I will guide you through a full administrative cycle of the SQL Monitor.

In general, such a cycle begins with the selection of a suitable measurement period. For example, you might decide to monitor a specific time slice – for instance 7 days – to inspect day-to-day business in your system. Alternatively, the measurement period can also be determined by certain events such as a year-end closing. In either case, you should make sure to limit code changes to a minimum during the monitoring interval since these changes might distort the recorded data and complicate the analysis.

Having decided for a monitoring interval, the next steps are to activate the SQL Monitor and to ensure that it remains in a consistent state throughout its operation. At the end of the measurement period you can then either deactivate the SQL Monitor manually or take advantage of the automatic deactivation feature which will be described later in this article. Afterwards you may analyze the recorded data and adapt your source code to improve its performance. Finally, to analyze the effect of your adaptions, you should reset the previously recorded data and start the monitoring cycle over again.

Summarizing this brief description, the monitoring cycle consists of the following phases:

  1. Activation – activate the monitoring
  2. Monitoring – check & ensure the monitoring status
  3. Deactivation – deactivate the monitoring
  4. Reset – archive the collected data & reset the monitoring

Each of these steps will be described in detail in the remainder of this post. Before diving into the specifics of the different phases, however, there is one important point worth covering: authorizations. The SQL Monitor’s authorization concept involves two different roles – the data analyzer and the administrator. The former evaluates the recorded data with the data display transaction SQLMD while the latter administers the monitor using transaction SQLM. Both of these roles require the authorization profile for object S_DEVELOP (activity 03) as the bare minimum. In addition, you’ll need the authorization profile for object S_ADMI_FCD with value SQMD for the data analyzer role and value SQMA for the administrator role, respectively.
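
To make the role requirements concrete, here is a minimal, hypothetical Python sketch of how the two roles map to the authorization objects described above. The dictionary shapes and the simplified string matching are illustrative assumptions – this is not an actual SAP authority check:

```python
# Hypothetical, simplified model of the two SQL Monitor roles and the
# authorization object values they require (not an actual SAP authority check).
ROLE_REQUIREMENTS = {
    "data_analyzer": {"S_DEVELOP": "03", "S_ADMI_FCD": "SQMD"},
    "administrator": {"S_DEVELOP": "03", "S_ADMI_FCD": "SQMA"},
}

def has_role(user_auths, role):
    """Check whether a user's authorizations cover every requirement of a role."""
    required = ROLE_REQUIREMENTS[role]
    return all(user_auths.get(obj) == value for obj, value in required.items())

# A user with S_DEVELOP activity 03 and S_ADMI_FCD value SQMA qualifies as administrator
print(has_role({"S_DEVELOP": "03", "S_ADMI_FCD": "SQMA"}, "administrator"))  # True
```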

That being said, let’s turn to the individual administrative phases 1 to 4.

Phase 1 – Activation

As mentioned earlier, the starting point of all administrative tasks is the transaction SQLM. Its main screen (see Screenshot 1) contains information about the current status of the monitor (section “State”) as well as a number of control buttons (sections “Activation / Deactivation” and “Data Management”).


Screenshot 1: Main screen of administration transaction SQLM.

As you can see, the monitoring is currently deactivated (field “Activation State” in section “State”). To activate it, you can use the buttons “All Servers” and “Select Servers” in section “Activation / Deactivation”. The former activates the monitoring globally – that means for every application server in the system – whereas the latter allows you to control the activation for each server individually.

In general, you will want to track all processes that are executed in your system and so you should usually activate the monitoring globally. For demonstrative purposes, however, let’s assume that we wanted to activate the monitoring on the local application server only. After clicking the button “Select Servers” a popup window appears in which we can select the individual servers for activation. The local application server is marked with an arrow which makes the task of locating and selecting it a breeze.


Screenshot 2: Application server selection popup.

After selecting the local server and confirming the popup, another popup appears and asks us to specify an expiration date for the activation. The monitoring will automatically be deactivated when the specified date and time are reached. Setting the expiration date is mandatory and has two benefits. Firstly, it lets you plan the length of your monitoring phase ahead of time so you don’t need to remember to come back and deactivate the monitoring. Secondly, should you forget to deactivate the monitoring, the automatic deactivation prevents the recording of unneeded monitoring data.
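
Conceptually, the automatic deactivation boils down to a periodic comparison of the scheduled expiration timestamp with the current time. The following Python sketch illustrates the idea only – it is not the SQL Monitor’s actual implementation:

```python
from datetime import datetime

def should_deactivate(expiration, now=None):
    """Return True once the scheduled expiration date and time have been reached."""
    now = now or datetime.now()
    return now >= expiration

# Example: an activation set to expire at the end of a seven-day monitoring window
expires = datetime(2014, 5, 8, 23, 59)
print(should_deactivate(expires, now=datetime(2014, 5, 7, 12, 0)))  # False: still monitoring
print(should_deactivate(expires, now=datetime(2014, 5, 9, 0, 0)))   # True: deactivate now
```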

Note that you can always change the scheduled deactivation date and time after the monitor has been activated by using the application menu (from the menu bar select “SQL Monitor” > “Maintain Deactivation Schedule”).


Screenshot 3: Activation expiration popup.

After confirming the popup, the monitoring is finally activated for the local server and the state information on the main screen is updated accordingly. In addition, we see a tabular representation of the activation state for the individual servers.


Screenshot 4: Updated main screen of transaction SQLM after activation of the monitoring on the local server.

Every time you change the activation state of the SQL Monitor a corresponding log entry will be created in the SQL Monitor log. The log can be accessed via the “History” (in later versions “Log”) toolbar button on the main screen of transaction SQLM (see Screenshot 4). It collects all past administrative actions that were performed on the SQL Monitor and helps you answer questions such as “How long did we run the monitoring last time?” or “When was the data reset the last time?”.


Screenshot 5: SQL Monitor History.

Now that the SQL Monitor is active we can switch to the second phase – the actual monitoring.

Phase 2 – Monitoring

Having activated the SQL Monitor there is not much to do for you from an administrator’s point of view. It might, however, be good practice to cast an eye on the monitoring status from time to time to catch the rare case of failure. That’s monitoring of the monitor, if you like.

In general, all required status information and any failures will be presented to you on the start screen of transaction SQLM. One of the things you should watch out for here is an explosion in the number of collected monitoring records. Such a situation could occur if large amounts of generated code are executed in your system. The SQL Monitor internally uses a number of aggregation strategies to limit the number of monitoring records, but depending on your particular case there is a rare chance that these counter-measures prove ineffective. Therefore, it is advisable to keep a close eye on the number of monitoring records (field “Number of Records” in section “State”) during the first few hours of operation. In general, the number will quickly saturate. If, however, you find the number of records rising indefinitely without showing signs of saturation, you should deactivate the monitoring to prevent unnecessarily high data volumes. At this point it might be worth mentioning that if you have SAP Note 1972328 applied in your system, the SQL Monitor will automatically shut off when the number of records exceeds 2,000,000. Besides, this note also contains tons of other enhancements and fixes so you should definitely take a look at it.
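
The two checks just described – watching for saturation and enforcing a hard cap – can be sketched in Python as follows. The saturation heuristic and the function names are my own illustrative assumptions; only the 2,000,000-record cap comes from SAP Note 1972328:

```python
RECORD_CAP = 2_000_000  # automatic shut-off threshold from SAP Note 1972328

def exceeds_cap(num_records, cap=RECORD_CAP):
    """Return True if monitoring should shut off because the record count exceeded the cap."""
    return num_records > cap

def is_saturating(samples):
    """Heuristic: a series of record counts saturates if its growth keeps shrinking."""
    growth = [b - a for a, b in zip(samples, samples[1:])]
    return all(later <= earlier for earlier, later in zip(growth, growth[1:]))

print(exceeds_cap(2_000_001))                               # True
print(is_saturating([100_000, 180_000, 220_000, 240_000]))  # True: growth 80k, 40k, 20k
print(is_saturating([100_000, 200_000, 350_000]))           # False: growth is accelerating
```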

Other possible but rare failures include unscheduled batch jobs. During operation the SQL Monitor writes recorded data to the application servers’ shared memory space from where it is later collected and persisted into the system’s database. This requires periodic execution of the reports RTM_COLLECT_ALL and RSQLM_UPDATE_DATA. The scheduling of both of these jobs is automatically ensured when you activate the monitoring in transaction SQLM. If for any reason one or both of these jobs are unscheduled afterwards, you will see a warning on the start screen of transaction SQLM. If you come across such a warning message, you need to either reschedule the batch job manually or renew the SQL Monitor activation by using the buttons “All Servers” or “Select Servers” in section “Activation / Deactivation”.
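
A simplified model of this check: given the list of currently scheduled jobs, report which of the two required collector reports are missing. The function itself is an illustrative sketch; only the report names come from the article above:

```python
REQUIRED_JOBS = {"RTM_COLLECT_ALL", "RSQLM_UPDATE_DATA"}

def missing_collector_jobs(scheduled_jobs):
    """Return the required collector reports that are not currently scheduled."""
    return REQUIRED_JOBS - set(scheduled_jobs)

# Both jobs scheduled: nothing to warn about
print(missing_collector_jobs(["RTM_COLLECT_ALL", "RSQLM_UPDATE_DATA"]))  # set()
# One job unscheduled: this is the situation that triggers the SQLM warning
print(missing_collector_jobs(["RTM_COLLECT_ALL"]))  # {'RSQLM_UPDATE_DATA'}
```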


Screenshot 6: Main screen of transaction SQLM displaying a warning about an unscheduled batch job.

Finally, there is one particular type of error that is not immediately apparent after firing up transaction SQLM – inconsistent activation states. An inconsistent activation state exists when the expected activation state differs from the actual activation state. In other words, the SQL Monitor is activated on a particular server but it is not actually running on that server (or the other way around). This could, for instance, happen when the server becomes unreachable during the activation process. To check the activation consistency you can click the “Servers” button in the toolbar on the main screen of transaction SQLM. This brings up the tabular display of the per-server activation state shown in Screenshot 4. In the rare case of inconsistencies you can try to repair the activation by using the application menu (from the menu bar select “SQL Monitor” > “Server States” > “Ensure”).
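
The consistency check itself is conceptually just a per-server comparison of the expected and the actual activation state. A minimal Python sketch – server names and data shapes are assumptions for illustration only:

```python
def find_inconsistent_servers(expected, actual):
    """Return servers whose actual activation state differs from the expected one."""
    return sorted(
        server for server, state in expected.items()
        if actual.get(server) != state
    )

# Server app2 should be active but the monitor is not actually running on it
expected = {"app1": True, "app2": True, "app3": False}
actual = {"app1": True, "app2": False, "app3": False}
print(find_inconsistent_servers(expected, actual))  # ['app2']
```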


Screenshot 7: In the rare case of inconsistent activation states you can try to repair the activation using the application menu.

Phase 3 – Deactivation

As mentioned in the description of phase number one, the activation expiration will lead to an automatic deactivation of the SQL Monitor once the specified date and time are reached. If, for any reason, you want to halt the monitoring earlier, you can do so by using the button “Deactivate” in section “Activation / Deactivation”. Both options for deactivating the monitor are globally effective and stop the monitoring on all available application servers.


Screenshot 8: Main screen of transaction SQLM after deactivating the SQL Monitor manually.

Phase 4 – Reset

As mentioned earlier, the monitoring data is most valuable when it is recorded during a period without code changes. However, as you analyze the data you will likely want to change certain sections of code to improve performance. These code changes in turn will lead to a gradual distortion of the monitoring results, which makes it harder to draw conclusions from the recorded data. Therefore, it is advisable to reset the SQL Monitor data whenever the source code in the system has changed to a significant extent (for instance due to imported transports).

Prior to erasing all SQL Monitor data from the system you have the opportunity to download the data to a file by using the button “Download Data” in section “Data Management”. The button will take you to a new screen that allows you to apply basic filtering (for instance on the package name) in order to limit the amount of downloaded data. Downloading the recorded data can be advantageous for archiving purposes or if you plan to import the data into a different application such as the SQL Performance Tuning Worklist (transaction SWLT).

Note that in later versions of the SQL Monitor you can also archive the recorded data into a so-called snapshot. You can then display and analyze the snapshot’s data at any later point in time, independently of the currently recorded monitoring data.


Screenshot 9: Selection screen for downloading the recorded data.

Whether or not you have downloaded or archived the recorded data, you can easily wipe all SQL Monitor data off the system by clicking the button “Delete Data” in section “Data Management”. Note, however, that this is only possible when the SQL Monitor is deactivated.

Wrap-Up

In today’s hands-on example I gave you an introduction to the administrative side of the SQL Monitor. As you’ve seen, the administrative cycle consists of four phases: activation, monitoring, deactivation and reset. The central access point for all of these phases is transaction SQLM. The main screen of this transaction offers valuable status information and a set of control buttons. As a result, tasks like activating, deactivating or resetting the monitor are a breeze.

I hope you had fun working through this as well as any previous hands-on scenarios. I sure did! This post marks the end of my series – at least for the time being. As we add new features to the SQL Monitor, I’m sure to return with another pack of hands-on scenarios soon!

In case you missed previous posts, you should start working through my series from the initial article: SQL Monitor Unleashed


2 Comments


  1. Johan Keldermans

    Hi Johannes, very nice blog!

    One question though:

    Although I have no problem in exporting the trace data to .zip and then importing this file into SWLT; a snapshot taken from SQLM itself will not appear in the list of sql monitor snapshots in SWLT… the reverse is true as well, if I decide to take a snapshot from SWLT, it will not appear in the selection list from SQLM… Note I am doing everything on one sandbox system, so no RFC’s or so…

    “Downloading the recorded data can be advantageous for archiving purposes or if you plan to import the data into a different application such as the SQL Performance Tuning Worklist (transaction SWLT).

    Note that in later versions of the SQL Monitor you can also archive the recorded data into a so called snapshot. You can then display and analyze the snapshot’s data at any later point in time without independently of the currently recorded monitoring data.”

    Any ideas?

    Cheers

