Thomas Stoffels

Creating large test data sets with LoadRunner

Many customers face the challenge of creating large sets of test data for volume and interface testing, but also to fuel manual testing. The latter applies mostly when certain features of a solution can only be properly tested with large data sets.

Manual creation of such test data sets is not an option: it would be extremely effort-intensive, and a repetitive, error-prone task nobody will volunteer for.

In a current project we use LoadRunner to create large test data sets for manual integration testing. In doing so we found a simple way (probably worth sharing, hence this post) to filter customized LoadRunner execution logs, so we can easily check the created test data for completeness and errors, and share it with others.

So this post will essentially describe how to get from several thousand lines of this…

(standard LoadRunner log: very detailed, relevant info difficult to find)

…to this:

(only one line per document created, with document number (“STO”) and/or system message; can be shared as is)

Note: This post does not describe how to record LoadRunner scripts, but what adjustments to make to existing scripts and how to filter the output. We will assume basic LoadRunner knowledge (although the technique should be applicable to other automation and load testing tools as well).

So, how is it done?

To convey the general idea we will use a single-user execution example of a SAPGUI script running in VUGen. Considerations for parallel executions can be found further below.

  1. Parameter Settings:
    Introduce an additional parameter “ID” in your test data file to make the entries conveniently identifiable so we know where to restart / which line to rerun in case of an error:

    For this example we also set the test data handling to Sequential / Each iteration.
  2. Custom Log Entries:
    Insert into action vuser_init:

    Insert at the end of the last action (thus to be written at the end of each Iteration, ideally right after the system message):

    //capture status message
    sapgui_status_bar_get_param("2", "new_sto", LAST);
    sapgui_status_bar_get_type("paramStatusBarType", LAST);
    sapgui_status_bar_get_text("paramStatusBarText", LAST);
    // write log entry (tab-separated, so a spreadsheet can split it into columns)
    lr_output_message("###End\t%s\t%s\t%s\t%s", lr_eval_string("{ID}"),
        lr_eval_string("{new_sto}"), lr_eval_string("{paramStatusBarType}"),
        lr_eval_string("{paramStatusBarText}"));

    This block captures the info about the newly created document from the system message and writes it to the log.
    The most important pieces of information in this case are ID (the newly introduced parameter), new_sto (the test data item created, in this case a “Stock Transfer Order”) and paramStatusBarType/Text (status message type and text).

  3. Runtime settings:
    Set number of iterations to number of lines in test data file.
  4. Run the script and display the log.
  5. Copy/paste the whole execution log to a spreadsheet

    Our custom log entries should automatically be split into several columns at the embedded tab stops.
  6. Filter the first column for “###Init” and “###End”

    (Apologies, I only have access to a German Excel at the moment.)



Voilà: we have a numbered list of all test data items created, along with the corresponding status messages. The filtered list can conveniently be checked for completeness (all IDs should be there) and correctness (based on the status message; in this example ID 9 should be checked), and copy/pasted into an email.


Considerations for parallel execution (for example via Performance Center):

If we execute with n parallel users, we can (generally speaking) create test data n times more quickly. However, there are a few differences to keep in mind:

  1. Test Data Handling:
    As we have several Virtual Users in play, we will have to make sure the lines in our data file are distributed across the VUsers without overlap and processed only once.

    These settings cause the lines in our test data file to be split evenly among the VUsers. Each VUser will then process its “package” and stop.
  2. Custom log entries: No changes
  3. Runtime Settings: No changes (number of iterations is ignored)
  4. Run Script
    Obviously we will start the execution from the LR Controller / Performance Center, and thus need to create a Controller scenario / Performance Center test for parallel execution and choose the number of parallel users.
  5. Copy/paste the whole execution log to a spreadsheet
    In the parallel execution scenario we cannot copy the log from VuGen; instead we will be left with several log files (one per VUser). In Performance Center these can be conveniently downloaded as a zip file. We then simply merge the different log files into one file.
  6. Filter:
    Two more things to consider:
    - You will have several ###Init lines (one per VUser): remove all but the first one.
    - Sorting may be necessary, as the results will likely not be ordered by ID.


      Former Member

      Dear Thomas,

      I've been wondering: if we are talking single-user execution with VuGen, isn't it easier to just use SE16 to look up the created documents after the run? You can tell which created document belongs to which template (your "ID") because they should be created in sequence.

      For parallel execution your approach is great though!





      Thomas Stoffels
      Blog Post Author

      Hi Amar,

      Thanks for your comment.

      It depends on whether the system under test has one or more application servers. If it has several application servers, the document numbers may not be in sequence, because number ranges are buffered per application server.
      (Of course there won't be any overlap, but the next X numbers of the range get split up between the app servers, so depending on where your document gets created you may be 10 or 20 document numbers ahead or behind. At least this is what we are seeing at my current customer.)