h4. Motivation

The new workload statistics collector shipped with NW2004s no longer supports the user exit Z_USEREXIT_WORKLOAD (note 143550 (http://service.sap.com/sap/support/notes/143550)); instead, it requires the implementation of a so-called “BAdI” (Business Add-In, note 931446 (http://service.sap.com/sap/support/notes/931446)). As an additional challenge, the NW2004s workload collector calls this BAdI not only once per collector run but – depending on the number of statistical records to be processed – multiple times within a short time interval. So we have to face the fact that the BAdI implementation might run in parallel on the same application server, processing different sets of statistical records.

In this blog, I want to outline how to cope with this challenge by providing sample code for a quite simple task: when was a transaction or report last used?

h4. What is a BAdI implementation?

A BAdI implementation is simply an ABAP OO class which implements an interface defined by the BAdI definition. Nothing more, nothing less. Use transaction SE18 to create that ABAP OO class; the BAdI name is ‘WORKLOAD_STATISTIC’. Note 931446 (http://service.sap.com/sap/support/notes/931446) provides more details.

h4. Parallel execution

Imagine the following situation: The NW2004s workload collector starts and has to process 7500 statistical records. After processing 7490 statistical records, the workload collector reaches the implemented limit for memory consumption; it therefore interrupts its work, passes the collected statistical records to the BAdI implementation, and restarts itself. The BAdI implementation now asynchronously starts to process the 7490 statistical records. In the meantime, the workload collector finishes processing the 10 statistical records left over from the previous run and again asynchronously starts the BAdI implementation. The numbers given are an approximation and not 100% correct, but quite close.
[Image: drawing illustrating the workload collector running in parallel with the BAdI implementation]
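The BAdI implementation class described above might look roughly like this. This is a minimal sketch: the interface name if_ex_workload_statistic follows the standard naming convention for classic BAdI definitions (IF_EX_&lt;BAdI name&gt;), but the method name process_statistic and the parameter i_statistic are assumptions; check the interface actually generated for BAdI ‘WORKLOAD_STATISTIC’ in SE18 (see note 931446 for the authoritative signature).

```abap
CLASS zcl_workload_statistic DEFINITION PUBLIC FINAL.
  PUBLIC SECTION.
    " Interface generated from the BAdI definition WORKLOAD_STATISTIC
    INTERFACES if_ex_workload_statistic.
ENDCLASS.

CLASS zcl_workload_statistic IMPLEMENTATION.
  METHOD if_ex_workload_statistic~process_statistic.
    " Process the statistical records passed in I_STATISTIC here.
    " Keep in mind that, as described above, several instances of
    " this method may run in parallel on the same application server.
  ENDMETHOD.
ENDCLASS.
```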

14 Comments


  1. Tim Wise
    I have an implementation that writes out select records to a file. The file name is hard-coded in the code.

    It seems I may have race conditions on the file. Two executions may try to create it, or write to it, at the same time.

    How can I ensure that only one execution creates the file and that the writes are serialized? Is there a ‘critical region’ in ABAP?

    Thanks.

    Tim

    1. Frank Klausner Post author
      Hi Tim,

      if you need to write to a single file, serialization is definitely an issue.

      Either you could do it via the database: Export the selected statistical records to a cluster table using the key fields provided by the Workload Collector. In a second step, import the statistical records and write them to a file (similar to the example in the blog).

      Or you have to serialize the BAdI executions themselves using an SAP enqueue object: try to get the enqueue; if that fails, wait a second (function module RZL_SLEEP) and try again until you get it. However, I wouldn’t recommend doing it this way. You would need to optimize your code for performance and especially memory consumption in this case, e.g. I’d suggest storing the selected statistical records in an internal table and then FREEing table I_STATISTIC before you start waiting for the enqueue.
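      That retry loop could be sketched as follows. The lock object EZ_WORKLOAD_FILE is a hypothetical name: you would create it in SE11, which generates the corresponding ENQUEUE_/DEQUEUE_ function modules automatically.

```abap
* Sketch: serialize BAdI executions via an SAP enqueue.
* EZ_WORKLOAD_FILE is a hypothetical lock object created in SE11.
DATA lv_locked TYPE abap_bool.

DO 60 TIMES.                          " give up after roughly a minute
  CALL FUNCTION 'ENQUEUE_EZ_WORKLOAD_FILE'
    EXCEPTIONS
      foreign_lock   = 1
      system_failure = 2
      OTHERS         = 3.
  IF sy-subrc = 0.
    lv_locked = abap_true.            " lock obtained, safe to write
    EXIT.
  ENDIF.
  CALL FUNCTION 'RZL_SLEEP'.          " wait a moment before retrying
ENDDO.

IF lv_locked = abap_true.
  " ... write the statistical records to the file ...
  CALL FUNCTION 'DEQUEUE_EZ_WORKLOAD_FILE'.
ENDIF.
```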

      I think I’d go for the first option; with a periodic background job it’s easy to ensure that only one run is active at a time.

      Best regards,
        Frank

        1. Frank Klausner Post author
          Tim,

          writing to several files named as you mentioned above is probably the best option. The collector ensures disjoint portions of statistical records for each BAdI call, so sorting the files by name and concatenating them should do the job.

          Best regards,
            Frank

  2. Tim Wise
    Frank,

    On a related topic, is there an API or RFC interface I can use to extract workload data (ST03N) from an SAP system?

    I seem to recall seeing that recently but can’t find the reference.

    Thanks again!

    Tim

    1. Frank Klausner Post author
      Hi Tim,

      please have a look at function group “SCSM_NW_WORKLOAD”; it offers function modules to read statistical records as well as the aggregates produced by the workload collector.

      Best regards,
        Frank

    1. Frank Klausner Post author
      Tim,
      to debug your BAdI implementation, put a breakpoint into your code and then run report “SWNCCOLL”. This is the workload collector itself, so there is no need to run all the other stuff started by RSCOLL00.
      Additionally, SWNCCOLL has a parameter “Only execute locally”; if you set it, the collector runs only on the application server you are currently working on, not on all of them.
      A new window with the debugger will appear when your breakpoint is reached.

      Best regards, Frank

      1. Tim Wise
        Frank, when I do this the collector runs but I don’t get a break. I’ll figure that out.

        What I’m more concerned about is whether running RSCOLL00 or SWNCCOLL manually interferes with the hourly batch run of the job. Does the manual run clear out the stat records so that they won’t be in the next batch run? Is the MONI database OK, etc.?

        On our system, after running the collector manually, the batch job is disrupted. It fires every hour, but records are not being passed to the SAP collector even though I see them in STAD.

        We were seeing something similar last week. It was due to a time zone issue as described in SAP Note 926290 (Workload collector (NW) collects no data). The problem was cleared up until I started running the collector manually.

        Any insight is appreciated. Thanks.
        Tim

        1. Frank Klausner Post author
          Tim,

          the collector (SWNCCOLL) processes all statistical records up to the last second of the previous hour, e.g. if the batch job runs at 09:05:00, it processes the statistical records up to 08:59:59.
          If you start testing at 09:10:00, you will not reach your breakpoint; there are no records left to process (your manual run would also process records up to 08:59:59, but those statistical records have already been processed).
          Please stop the batch job during your testing and debugging phase and use SWNCCOLL with the “Only execute locally” flag checked. Then you can test once per hour on each instance, e.g. five instances => five chances to test per hour. A second run within the same hour on the same instance will not produce any result.

          Hope this helps,
            Frank

  3. Tim Wise
    Frank,

    Can there be more than one BAdI collector installed in a Basis 700 system?

    Can there be more than one z_userexit_workload function installed on a Basis 640 (or earlier) system?

    Can the BAdI or user exit function be within a namespace?

    Thanks.

    Tim

    1. Frank Klausner Post author
      Tim,

      1) Yes, you can create multiple BAdI implementations for BAdI ‘WORKLOAD_STATISTIC’.
      2) As the name of the function module is fixed and must be unique, there can only be one ‘Z_USEREXIT_WORKLOAD’ in a system. Nevertheless, you can of course implement a kind of ‘dispatcher’ there which calls multiple other function modules.
      3) The name of the ABAP OO class which provides the BAdI implementation can be chosen freely; the name of the function module ‘Z_USEREXIT_WORKLOAD’ is fixed.
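      The ‘dispatcher’ idea from 2) could be sketched like this. The handler function module names are hypothetical, and the table parameter i_statistic is an assumption about the user exit’s interface (the same table name is used for the statistical records elsewhere in this blog); check note 143550 for the exact signature.

```abap
FUNCTION z_userexit_workload.
* The function module name is fixed, but it can simply dispatch
* the statistical records to several independent handlers.
* Z_WORKLOAD_HANDLER_A/B are hypothetical names.
  CALL FUNCTION 'Z_WORKLOAD_HANDLER_A'
    TABLES
      i_statistic = i_statistic.
  CALL FUNCTION 'Z_WORKLOAD_HANDLER_B'
    TABLES
      i_statistic = i_statistic.
ENDFUNCTION.
```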

      Best regards,
        Frank

  4. Marv Zahardnik
    Hello,

    Nice blog. Many thanks. It saved me a great deal of time.

    Question: should you refresh your itab lt_workloadusage at the beginning of your DO loop in program Z_WORKLOAD_USAGE?

    Otherwise, if there are more than 1000 records in table zworklbadiusage to be processed, could you count the same records again?

    Thanks,
        Marv

    1. Frank Klausner Post author
      Hi Marv,

      good point, you’re right; table lt_workloadusage should indeed be refreshed at the beginning of each pass of the DO loop.
      Obviously, this problem did not show up while testing the program 😉
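      In other words, the fix amounts to something like the following sketch (the SELECT is schematic, since the actual processing in Z_WORKLOAD_USAGE is not shown here; only the REFRESH placement is the point):

```abap
DO.
  " Clear the results of the previous pass first - otherwise, when
  " zworklbadiusage holds more than 1000 records, records from the
  " previous pass would be counted again.
  REFRESH lt_workloadusage.
  SELECT * FROM zworklbadiusage
    INTO TABLE lt_workloadusage
    UP TO 1000 ROWS.              " schematic; real selection continues
                                  " after the last processed key
  IF sy-subrc <> 0.
    EXIT.                         " nothing left to process
  ENDIF.
  " ... process lt_workloadusage ...
ENDDO.
```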

      Best regards, Frank

