Former Member

SAP ASE Workload Analyzer Whitepaper

The SAP ASE Workload Analyzer feature was introduced in ASE Cockpit with the SAP Adaptive Server Enterprise 16.0 SP02 release. This powerful feature enables users to analyze the performance of production applications and to replay the production workload in a variety of test environments to determine the effects of changes to the SAP ASE server configuration, database design, software version, or operating system environment.

The SAP ASE Workload Analyzer Whitepaper provides an overview of the benefits and features of the SAP ASE Workload Analyzer feature.

Please use this link to download the SAP ASE Workload Analyzer whitepaper.

Also, see the SAP ASE Workload Analyzer Users Guide for more detailed information about installing and using SAP ASE Workload Analyzer.

Contact your SAP Sales representative with any questions about using or licensing this feature.

      Mark A Parsons

      While it looks like the production dataserver needs to be ASE 16.0 SP02+, is there a minimum ASE version for the test and repository ASE's?

      For example, would it be possible to playback against ASE 15.7 w/ capture disabled?


      Is any method provided (or documented) for making sure the test db is in sync (timewise) with the captured workload?  Or would the following be sufficient:

      - dump database <production>

      - start capture

      - dump transaction <production>

      ... snip ...

      - complete capture

      - analyze capture (note start/datetime)

      - load database dump into test db

      - load transaction dump into test db w/ until_time = 'start/datetime' from capture analysis

      - play captured workload against test db
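      For reference, the dump/load portion of the sequence above might look like the following ASE commands (a minimal sketch: the database name, dump paths, and until_time value are illustrative, with the datetime taken from the capture analysis):

      ```sql
      -- On the production server (names and paths are illustrative):
      dump database proddb to '/dumps/proddb.dmp'
      go
      -- ... start the workload capture in ASE Cockpit and note the start datetime ...
      dump transaction proddb to '/dumps/proddb_tran.dmp'
      go

      -- On the test server, roll the database forward to the capture start time:
      load database proddb from '/dumps/proddb.dmp'
      go
      load transaction proddb from '/dumps/proddb_tran.dmp'
          with until_time = 'Mar 10 2016 10:15:31:000AM'
      go
      online database proddb
      go
      ```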


      What happens if the playback runs into unexpected errors, eg, dup key errors if the test db is not in sync with the playback?  Does the playback continue or come to a screeching halt?


      During playback, will this product simulate concurrent db connections (complete with blocking/deadlocking), or does it attempt to thread all activity into a single db connection (like a Repserver DSI connection)?

      If playback does simulate concurrent db connections, is there a limit to how many it can realistically support?


      The manual mentions that an ASE license is provided with the Workload Analyzer package so that a new ASE dataserver can be built to hold the repository database.

      Do you know the limitations of this ASE/repository license?  max number of user connections, max number of engines (or threads), max memory, etc? [The manual lists some basics - eg, engines=4, #user connections = 21/25 - so wondering if the license would cover a larger ASE/repository server in the case where multiple workloads are being captured/analyzed.]

      The manual also states that the repository dataserver needs to have semantic partitioning enabled.

      Does a semantic partitioning license also come with the Workload Analyzer package?


      Do you know if during playback it is possible to have a replication agent running in the playback/test database?

      I'm guessing there should be no issues (assuming the repagent and repserver are configured appropriately) ... unless the playback connections are disabling replication (eg, 'set replication off').



      Former Member
      Blog Post Author


      The Replay Engine is part of ASE Cockpit for ASE 16.0 SP02 or later, and that version of ASE Cockpit does not support ASE 15.7, so it is not possible to replay a captured workload against earlier versions of ASE.

      Your procedure for recreating the production environment in the replay/test setup is correct, except that I do not see a need to generate or restore the transaction log (for the purpose of using the Workload Analyzer).  When you capture the workload, all DML that generated changes in the production database will be captured (unless you have filtered it out on purpose).  In order to replay this workload, you need to restore the application database to the state it was in at the time the capture was started.  The replay will apply all changes that occurred during the capture period.

      When replay encounters errors, these will be reported in the Replay Status dialog but the replay will continue.  The replay can be terminated manually if desired.  The errors will also be included in the captured workload for the replay session (if capture is enabled during replay) and the errors can be analyzed as part of the analysis of the replay session.

      The replay engine is fully multi-threaded and establishes the same number of connections, performing the same operations, as were used during the original workload.  You may see deadlocks and other interactions between these connections.  We have tested with over 2000 connections.  There is no fixed limit to the number of connections; the practical maximum is determined by the amount of memory available to the Java virtual machine used by the Replay Engine.

      The additional ASE license provided with the Workload Analyzer license includes the semantic partitioning entitlement.  I don't know all the details of the ASE license, but it is sufficient to operate the Workload Analyzer Repository.  Note that during operations such as analysis and replay, the load on the Workload Analyzer Repository ASE server will be substantial, and this will affect other activity on that server.  For this reason, we do not recommend using this server to support any other applications.

      It is possible to have an active replication agent on the replay server.

      Thank you for your questions.  I hope that I have answered them.

      - Peter

      Mark A Parsons

      The only reason I threw in the log dump/load was to catch any activity that occurred between the end of the 'dump database' and the 'start capture' (eg, the time difference could be seconds, minutes, or hours, so I'm thinking the log dump should allow us to get as close to 'in sync' as possible).

      Otherwise, thanks for the quick response!