In this blog post of my series Use ST05 to Analyze the Communication of the ABAP Work Process with External Resources, I explain how you can store traces of your applications’ communication events as long as required. Besides giving you more time for in-depth analyses, stored traces pave the way to completely new types of investigations. Examples are:
- Check the temporal evolution of the applications’ use of external resources (before-after comparisons).
- Investigate the impact of user input or data volume on the applications’ performance or scalability.
- Compare the applications’ behavior between systems.
- Share traces with co-workers who do not have access to the system where the application was executed.
This blog post assumes previously recorded traces. See the descriptions in my posts ST05: Basic Use and ST05: Activate Trace Recording with Filter to refresh your trace recording skills.
Some of your traces may be more important to you than others. But for ST05, they are all the same and subject to the file-system storage mechanism that cycles through the configured number of trace files. Due to this round-robin retention approach, even your most important trace is eventually lost, possibly before you have completed your analysis. For more explanation of the process, please refer to my blog post ST05: Technical Background of Trace Recording and Analysis. It covers trace recording in great detail and gives a comprehensive description of the underlying mechanism. The trace directory integrated into ST05 provides a solution by persisting traces into the database.
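As a mental model, the cyclic overwriting behaves like a fixed-size ring of trace files in which the oldest recording drops out first. The following Python sketch illustrates this retention behavior; the file names and the limit of three files are invented for illustration and do not reflect ST05’s actual configuration.

```python
from collections import deque

class TraceFileRing:
    """Minimal model of round-robin trace file retention: once the
    configured number of files is reached, the oldest trace is
    overwritten by the next recording."""

    def __init__(self, max_files):
        # deque with maxlen silently discards the oldest entry when full
        self.files = deque(maxlen=max_files)

    def record(self, trace_name):
        self.files.append(trace_name)

ring = TraceFileRing(max_files=3)
for name in ["trace_A", "trace_B", "trace_C", "trace_D"]:
    ring.record(name)

# trace_A has been overwritten; only the three newest traces remain
print(list(ring.files))  # ['trace_B', 'trace_C', 'trace_D']
```

No matter how important trace_A was, in this model it is gone as soon as the fourth recording starts, which is exactly the problem the trace directory solves.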
Save a Trace into the Trace Directory
Let’s begin with the assumption that you have just recorded a trace. On the ST05 start screen, click button Display Trace to go to the Filter Conditions for Trace Records (Fig. 1, left panel). Specify criteria that support your intended analysis. Then click button Save Trace in Database. This immediately saves the trace records without displaying them.
Alternatively and preferably, take a first quick look at the trace to convince yourself that it is complete and representative of your application’s typical execution. Only then can it be a reliable foundation for detailed and conclusive investigations. Next, trigger Save Trace in Database via the appropriate button above the ALV grid control displaying the trace records (Fig. 1, right panel).
Figure 1: Saving trace records into the trace directory on the database can be done from the Filter Conditions for Trace Records (left panel) or from the list of Trace Main Records displayed in an ALV grid (right panel). In both cases, a popup (inset) asks for a Short Description of Trace to identify the resulting trace directory entry.
Either way, a popup prompts you to give a Short Description of Trace (Fig. 1, inset) so that later you can reliably find your trace in the trace directory.
When you continue from this popup, the display formats of the trace records (as created on the fly by ST05 from the raw data in the trace files on the application server instance’s file system) are stored in the database. Any restrictions you may have defined for the trace records in the Filter Conditions for Trace Records or in the ALV’s filter are respected: Only those trace records that match your selection criteria are stored in the database.
Traces that you save into the trace directory are given an Expiry Date, which is 120 days in the future. After this date, they will be deleted automatically from the database. This mechanism prevents excessive growth of the underlying database table due to old and most likely outdated, and therefore useless, traces.
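The expiry mechanism amounts to stamping each saved trace with a deletion date 120 days after it was saved. A small sketch of that logic (the function names are mine, not part of ST05):

```python
from datetime import date, timedelta

RETENTION_DAYS = 120  # default retention period of the trace directory

def expiry_date(save_date: date) -> date:
    """Date after which a saved trace is deleted automatically."""
    return save_date + timedelta(days=RETENTION_DAYS)

def is_expired(save_date: date, today: date) -> bool:
    """A trace is removed only after its expiry date has passed."""
    return today > expiry_date(save_date)

saved = date(2024, 1, 15)
print(expiry_date(saved))                     # 2024-05-14
print(is_expired(saved, date(2024, 5, 14)))   # False: still within retention
print(is_expired(saved, date(2024, 5, 15)))   # True: past the expiry date
```

If 120 days are too short for your analysis, you can extend the Expiry Date in the trace directory, as described below.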
Display the Trace Directory
To navigate to the trace directory, click button Display Trace Directory on ST05’s start screen. This takes you to the Directory of Performance Traces (Fig. 2). In its two areas at the top, you can select from predefined filter conditions, or provide your own terms for the stored traces that you want to work with. The bottom panel shows the list of matching traces in an ALV grid control. By default, you see the traces you have created on the current day.
Figure 2: The Directory of Performance Traces is subdivided into three areas. In the upper two panels, Trace Selection and Time Period, you define conditions on the stored traces you want to list. The table in the lower panel then consists of the relevant trace directory entries. The icon at the start of each row distinguishes between entries that contain just metadata pointing to traces only available in the application server instance’s file system (indicated by the icon depicting a binary file), and entries where the trace records are saved in the database (represented by the DB-like icon).
Here the table shows two sets of five traces each for consecutive calls to the Audit Journal Fiori application. For the older set, the metadata-only entries are already deleted because the trace records are no longer in the file system. The newer set still includes the metadata-only records next to their counterparts with the complete traces persisted in the database.
Table 1: Fields in the list of traces in the trace directory.

| Field | Meaning |
| --- | --- |
| Client | Client where the directory entry was created. |
| User | User who saved the trace. |
| Instance Name | Application server where the trace was created. (Always empty for traces stored in the database.) |
| Start Date | Start date of the trace period. |
| Start Time | Start time of the trace period. |
| Expiry Date | Date after which the trace will be deleted from the directory. (Only used for traces stored in the database.) |
| Description | Description of the trace. |
The characteristic fields of directory entries are listed in Table 1. An additional leading column in front of Client contains one of two icons to distinguish the type of trace directory entry. The kind described so far is identified by the DB-like icon, indicating that the entire trace with all its records is stored in the database. The other icon represents a binary file. It is assigned to trace directory entries that keep only metadata (trace types, trace period) in the database. This type of trace directory entry is created automatically whenever trace recording is activated (buttons Activate Trace or Activate Trace with Filter on ST05’s start screen), and completed when you switch off tracing. The actual trace records exist only in the application server instance’s file system, where they are subject to the cyclic overwriting described in my blog post ST05: Technical Background of Trace Recording and Analysis. When the trace records are overwritten, ST05 automatically deletes the corresponding metadata trace directory entries. These entries are a convenience tool: Imagine that you record a trace and then have to log out of the system before you can analyze it. When you log on again, the load balancer may put you on another application server instance. The metadata entries save you the effort of manually searching for your trace on all of the system’s application server instances. Instead, it is very easy to find, provided it has not been overwritten in the meantime: Open the trace directory, then find and double-click the entry. ST05 navigates to the appropriate application server instance and displays the trace there. To enable this cross-instance navigation, these trace directory entries carry the appropriate value in their Instance Name field. For traces completely stored in the database, this value is not needed and is always empty.
The precise semantics of Start Date and Start Time depend on the type of the trace directory entry. For those representing fully persisted traces, they are the values specified on the Filter Conditions for Trace Records screen. For the metadata-only entries, they correspond to the instant of trace activation.
In the Trace Selection area, the default option My Traces restricts the list of traces to those trace directory entries that you have created. Optionally, you may decide to see traces of any user, or you can select traces based on their owner (User), their Description, or their Expiry Date (Deleted Before). The selection criteria for User and Description support patterns with the wildcard characters + (exactly one character) and * (any character string). The Description is case-sensitive. When you mark the checkbox All Clients, traces saved from every client in the system are shown. Note that the User is the one who created the trace directory entry. This may differ from the user who recorded the trace, or from the user whose activities were recorded. A similar remark applies to the Client field.
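These wildcard rules correspond to a simple pattern match in which + consumes exactly one character and * any (possibly empty) string. A Python sketch of such case-sensitive matching, mimicking the rules described above (this is not ST05’s internal code):

```python
import re

def matches(pattern: str, text: str) -> bool:
    """Match a selection pattern where '+' stands for any single
    character and '*' for any (possibly empty) character string.
    The comparison is case-sensitive, like the Description filter."""
    regex = "".join(
        "." if ch == "+" else ".*" if ch == "*" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, text) is not None

print(matches("Audit*", "Audit Journal step 3"))  # True: '*' matches the rest
print(matches("Audit+", "Audits"))                # True: '+' matches one character
print(matches("audit*", "Audit Journal"))         # False: case-sensitive
```

The translation to a regular expression ('.' for +, '.*' for *) is only one way to implement such matching; the observable behavior is what the selection screen guarantees.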
In the Time Period panel, you filter traces by their Start Date. The default is the current day (Today). Other restrictions are for traces from the current week (This Week) or by a Time Period. The option No Limit covers all traces, independent of their Start Date.
The conditions from the two panels are combined by logical AND.
Work with Saved Traces
For completely saved traces, the trace directory allows you to change their Expiry Date (if the default retention period of 120 days is too short) or their Description. You can do this either via button Change Trace Property, or through the appropriate entries in the corresponding fields’ context menus. The Description can also be set or changed for metadata-only directory entries, but because of their short lifetime this hardly ever makes sense.
When you no longer need some previously stored traces, mark them in the trace directory. Then delete them manually with button Delete Trace(s). This saves space in the underlying database table and gives you a better overview of your still relevant traces.
If you have recorded separate but related traces, e.g., one for each individual user interaction of an application with multiple steps, and you also want to analyze the entire application’s communication with external resources, you can mark all corresponding traces and merge them into one comprehensive trace with button Merge Traces. Then analyze it as if it had been recorded in one go.
To access the actual content of a trace directory entry (as opposed to its characteristics shown in the list and explained in Table 1), double-click it. For persisted traces, this takes you immediately to the list of Trace Main Records as covered in my blog post ST05: Basic Use. Alternatively, right-click an entry’s row header and select Display Trace with Filter from the context menu or from button Display Trace to get the Filter Conditions for Trace Records screen, where you can restrict the display to a subset of the stored trace records. This screen is always the entry point for metadata-only trace directory entries.
You may also export traces from the trace directory into the local file system of your front-end computer. Mark the desired trace and select the download format from the context menu of button Export Trace. Available formats are binary or CSV for the various aggregation levels supported by ST05. Give a file name for each trace you want to export. You can also initiate trace downloads from the Filter Conditions for Trace Records screen or from the list of Trace Main Records.
CSV downloads are typically opened with a spreadsheet application. A binary export can be imported into the trace directory of another system (button Import Trace). There, its description will be the file name with (Imported from file) appended. Based on such an imported trace, you can continue your analysis even when you no longer have access to the original system. Or you can forward the trace to a colleague who does not have a user in the original system. Finally, with imported traces you can compare the behavior of your application between different systems with distinct configurations, software releases, or amounts of data. The next section explains the trace comparison in detail.
Trace records contain the name of the user who triggered the associated statement. This may conflict with local data privacy legislation. To remedy such collisions, you can anonymize persisted traces: Mark the relevant stored traces and select Anonymize Trace Records from the context menu (right-click the row header). In the ensuing popup, specify the user name to be anonymized. It will be replaced by a string of 12 alphanumeric characters. If you specify *, all user names will be deleted. No other wildcards are supported.
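Conceptually, the anonymization replaces a chosen user name consistently with a random 12-character alphanumeric string, while * removes all names. A sketch under these assumptions; the record layout and function name are invented for illustration and do not reflect ST05’s internal implementation:

```python
import random
import string

def anonymize(records, user):
    """Replace the given user's name with a random 12-character
    alphanumeric string; '*' deletes every user name instead."""
    replacement = "".join(
        random.choices(string.ascii_uppercase + string.digits, k=12)
    )
    out = []
    for rec in records:
        rec = dict(rec)  # copy, so the original records stay untouched
        if user == "*":
            rec["user"] = ""           # '*' deletes all user names
        elif rec["user"] == user:
            rec["user"] = replacement  # consistent pseudonym for this user
        out.append(rec)
    return out

trace = [{"user": "MILLER", "stmt": "SELECT ..."},
         {"user": "SMITH", "stmt": "UPDATE ..."}]
anon = anonymize(trace, "MILLER")
print(len(anon[0]["user"]))  # 12 (random replacement)
print(anon[1]["user"])       # SMITH (unchanged)
```

Using the same pseudonym for every occurrence of a user preserves the analytical value of the trace (you can still see which statements belong to one session) while hiding the identity.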
For fully persisted traces, the trace directory supports their comparison: Mark exactly two traces, then click button Compare Two Traces (Fig. 3).
Figure 3: From the Directory of Performance Traces you can compare two traces that you have previously saved on the database. Fig. 4 shows the comparison.
This leads to the Trace Comparison of Structure-Identical Statements (Fig. 4). Its upper panel shows the two traces’ metadata and overall totals for their KPIs Total Number of Executions, Number of Records, Execution Duration, HANA CPU Time, and Redundant Identical Statements. Additionally, it includes the prevalences of buffer types among the database tables or views recorded in the traces. The lower panel contains the actual comparison. The first trace is considered the baseline against which the second trace is contrasted. Press button Swap Traces to interchange the traces if you want to regard the other trace as the baseline.
Figure 4: With a trace comparison you can investigate variations in your application’s runtime behavior related to its communication with external resources. These may be due to unequal amounts of data either in principle available to the application, or actually processed by it. Other reasons for runtime differences may be changes to the application’s source code, or its configuration, or the way it is executed by its users.
The upper panel shows the two traces’ metadata and their overall KPI totals for Total Number of Executions, Number of Records, Execution Duration, HANA CPU Time, and Redundant Identical Statements.
In the lower panel, the actual comparison examines the KPI’s values aggregated according to the selected granularity. The default comparison method is the difference, and the rows are sorted descending by the runtime difference.
In the screenshot, the comparison contrasts two calls to the Audit Journal Fiori application F0997. The first one does not find any data, the second has 71 rows in its result set.
To prepare for the comparison, ST05 summarizes, separately within each trace, the values for Total Number of Executions, Number of Records, Execution Duration, HANA CPU Time, and Redundant Identical Statements of all matching records. The default summarization criterion is the object affected by the statements, independent of the type or the structure of the statements. For finer granularity, distinct operations (SELECT, INSERT, UPDATE, DELETE, ENQUEUE, DEQUEUE, …) can be resolved, or (for greatest detail) the operations and the statements. After this preparation, the summary records of the two traces are matched (by the combination of object, operation, or statement, corresponding to your chosen level of resolution), and their values are compared. The default comparison (Fig. 4) uses the values’ differences (absolute numbers). Optionally, ST05 can express this as percentages relative to the values of the first trace. Other alternatives show the ratios of the second trace’s values divided by those of the first trace, or the relative contributions to the corresponding total values of each trace.
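Each comparison method is simple arithmetic on a matched pair of KPI values. A sketch with invented numbers, assuming the four methods described above (the function and its signature are mine, not ST05’s):

```python
def compare(v1, v2, total1=None, total2=None, method="difference"):
    """Compare one KPI value pair from two traces.
    Methods: 'difference'   -> v2 - v1 (the default),
             'percentage'   -> change relative to trace 1,
             'ratio'        -> v2 divided by v1,
             'contribution' -> each value's share of its trace's total."""
    if method == "difference":
        return v2 - v1
    if method == "percentage":
        return 100.0 * (v2 - v1) / v1
    if method == "ratio":
        return v2 / v1
    if method == "contribution":
        return (100.0 * v1 / total1, 100.0 * v2 / total2)
    raise ValueError(f"unknown method: {method}")

# Execution Duration of one statement: 2 ms in trace 1, 5 ms in trace 2
print(compare(2.0, 5.0))                       # 3.0  (difference)
print(compare(2.0, 5.0, method="percentage"))  # 150.0 (% increase vs. trace 1)
print(compare(2.0, 5.0, method="ratio"))       # 2.5  (trace 2 / trace 1)
print(compare(2.0, 5.0, total1=10.0, total2=50.0,
              method="contribution"))          # (20.0, 10.0) (% of each total)
```

Note that the contribution view can shrink even when the absolute value grows, as in the example: the statement takes longer in the second trace but accounts for a smaller share of that trace’s total.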
The first column of each summary record in the comparison table encodes the record’s occurrence:
- T1_ : only in the first trace
- T_2 : only in the second trace
- T12 : in both traces
For a meaningful trace comparison, T12 should be the predominant code. Otherwise, the two traces are not really comparable.
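The occurrence code is essentially a set comparison of the summary keys from the two traces. A sketch with hypothetical object/operation keys (the CDS view names are taken from the example below; the helper itself is mine):

```python
def occurrence_codes(keys1, keys2):
    """Assign each summary key the code describing where it occurs:
    'T1_' only in trace 1, 'T_2' only in trace 2, 'T12' in both."""
    codes = {}
    for key in set(keys1) | set(keys2):
        if key in keys1 and key in keys2:
            codes[key] = "T12"
        elif key in keys1:
            codes[key] = "T1_"
        else:
            codes[key] = "T_2"
    return codes

# Summary keys at Objects-and-Operations granularity
trace1 = {("IFIACCDOCJRNL", "SELECT"), ("SNAP", "INSERT")}
trace2 = {("IFIACCDOCJRNL", "SELECT"), ("IFICMPTJRNL", "SELECT")}

codes = occurrence_codes(trace1, trace2)
print(codes[("IFIACCDOCJRNL", "SELECT")])  # T12: comparable in both traces
print(codes[("SNAP", "INSERT")])           # T1_: only in the first trace
print(codes[("IFICMPTJRNL", "SELECT")])    # T_2: only in the second trace
```

If most keys end up as T1_ or T_2, the two recordings captured largely different workloads, and the comparison tells you little.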
Columns Object Name, Operation, and Statement characterize the level of detail on which the comparison is done. For rows dealing with database tables or views, column Buffer Type indicates the buffering type.
The four color-coded groups with three columns each contain the compared KPIs with the measured values for the two traces and their differences.
To streamline the comparison, click button Display Differences Only. This restricts the list to those records that are not in both traces (code T1_ or T_2) or that have different values for Total Number of Executions or Number of Records. Press the button again to toggle back to Display Full List.
From the dropdown menu of button Set Comparison Method you can pick your preferred perspective: absolute difference, ratio, percentage increase, or percentage of each trace’s totals.
Similarly, use the dropdown menu of button Set Comparison Granularity to determine the level of detail for your analysis: just Objects, or Objects and Operations, or Objects, Operations, and Statements.
The example shown in Fig. 4 indicates that the same number of HTTP requests to OData service /sap/opu/odata/sap/FAC_AUDIT_JOURNAL_SRV took significantly longer in the second trace than in the first one. The reasons are different accesses to CDS views IFIACCDOCJRNL (Journal) and IFICMPTJRNL (Compact Journal), leading to longer Execution Duration and higher HANA CPU Time consumption. For an in-depth investigation, open the second trace (by double-clicking it in the upper panel of the Trace Comparison, or by pressing button Display Trace). There you can use all the tools offered by ST05 to analyze why the second execution is slower. In this example, the evident reason is that it handles more data, but you may still find an approach to accelerate it. Always question the obvious explanations and try to go beyond the apparent cause.
The trace directory integrated into ST05 serves two main purposes:
- It prevents important traces from being overwritten by subsequently recorded traces. This gives you more time for comprehensive analyses, e.g., to find optimization approaches.
- Two previously stored traces can be compared with each other. With such a comparison you can investigate your application’s scaling behavior (related to its communication with external resources) to verify that its execution time or resource consumption grows at worst proportionally to the amount of processed data. Alternatively, with a trace comparison you may be able to demonstrate that your optimization project has been successful.
Traces exported to your front-end computer’s file system and then imported into another SAP system enable the comparison of your application’s behavior across systems. Such a comparison may reveal the impact of differences in system configuration, process customizing, or available data volume on your application’s performance.