Many customers face the challenge of creating large sets of test data, not only for volume and interface testing but also to fuel manual testing. The latter applies mostly when certain features of a solution can only be properly tested with large data sets.

Creating such test data sets manually is not an option: it would be extremely effort-intensive, and it is a repetitive, error-prone task nobody will volunteer for.

In a current project we use LoadRunner to create large test data sets for manual integration testing. Along the way we found a simple way (probably worth sharing – hence this post) to filter customized LoadRunner execution logs so we can easily check the created test data for completeness and errors, and share it with others.

So this post will essentially describe how to get from several thousand lines of this…

(standard LoadRunner log, very detailed, relevant info difficult to find)

…to this:

(only one line per document created, with document number (“STO”) and/or system message – can be shared as is)

Note: This post does not describe how to record LoadRunner scripts, but what adjustments to make to existing scripts and how to filter the output. We will assume basic LoadRunner knowledge (although the technique should be applicable to other automation/load testing tools as well).

So, how is it done?

To convey the general idea, we will use a single-user execution example of a SAPGUI script running in VuGen. Considerations for parallel executions can be found further below.

  1. Parameter Settings:
    Introduce an additional parameter “ID” in your test data file to make the entries conveniently identifiable, so we know where to restart or which line to rerun in case of an error (a hypothetical sample file is shown after this list):

    Also, for this example we set the test data handling to sequential/each iteration.

  2. Custom Log Entries:
    Insert into action vuser_init:

    // write a tab-separated header line once at the start of the run
    lr_message(lr_eval_string("###INIT\tID\tsto_template\tdelivery_date\tstatus\tSite\tSTO\tparamStatusBarType\tparamStatusBarText"));

    Insert at the end of the last action (so it is written at the end of each iteration, ideally right after the system message):
    //capture status message
    sapgui_status_bar_get_param("2", "new_sto", LAST);
    sapgui_status_bar_get_type("paramStatusBarType", LAST);
    sapgui_status_bar_get_text("paramStatusBarText", LAST);

    // write log entry
    lr_message(lr_eval_string("###END\t{ID}\t{sto_template}\t{delivery_date}\t{status}\t{Site}\t{new_sto}\t{paramStatusBarType}\t{paramStatusBarText}"));

    This block captures the information about the newly created document from the system message and writes it to the log.
    The most important pieces of information in this case are ID (the newly introduced parameter), new_sto (the test data item created – in this case a “Stock Transfer Order”) and paramStatusBarType/paramStatusBarText (status message type and text).

  3. Runtime Settings:
    Set the number of iterations to the number of lines in the test data file.

  4. Run the script and display the log.

  5. Copy/paste the whole execution log into a spreadsheet

    Our custom log entries should be automatically split into several columns by the embedded tab stops.

  6. Filter the first column for "###INIT" and "###END"

    (Apologies, I only have access to a German Excel at the moment.)
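
To make this concrete, here is a purely hypothetical example of both ends of the pipeline: two lines of a test data file with the new ID column, and the kind of ###END lines the custom lr_message calls from step 2 would then produce. All values are invented for illustration, and the real log lines are tab-separated (shown here with spaces):

    ID,sto_template,delivery_date,status,Site
    1,4500000001,20230901,A,0001
    2,4500000001,20230902,A,0002

    ###END  1  4500000001  20230901  A  0001  4500012345  S  Stock transport order 4500012345 created
    ###END  2  4500000001  20230902  A  0002  4500012346  S  Stock transport order 4500012346 created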


 

Result:



Voilà – we have a numbered list of all test data items created along with the corresponding status messages. The filtered list can conveniently be checked for completeness (all IDs should be there) and correctness (based on the status message – in this example ID 9 should be checked), and it can be copy/pasted into an email.
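
If you would rather not do the filtering in a spreadsheet, it can also be done with a small helper program outside LoadRunner. Below is a minimal C sketch (not part of the LoadRunner script; the program name and file names are hypothetical, only the ### markers from step 2 are assumed). It reads one or more saved log files, searches each line for our markers (depending on the log settings, LoadRunner may prefix each line with the action name and line number, so we do not match at the start of the line), prints the first ###INIT header it finds and all ###END lines:

    #include <stdio.h>
    #include <string.h>

    /* filterlog: keep only our custom "###" log lines.
       Usage: filterlog output1.log [output2.log ...] > filtered.txt */
    int main(int argc, char *argv[])
    {
        char line[4096];
        int header_written = 0;
        int i;

        for (i = 1; i < argc; i++) {
            FILE *f = fopen(argv[i], "r");
            if (f == NULL) {
                fprintf(stderr, "cannot open %s\n", argv[i]);
                continue;
            }
            while (fgets(line, sizeof line, f) != NULL) {
                /* the header is written once per VUser - keep only the first */
                char *p = strstr(line, "###INIT");
                if (p != NULL) {
                    if (!header_written) {
                        fputs(p, stdout);
                        header_written = 1;
                    }
                    continue;
                }
                /* keep every result line */
                p = strstr(line, "###END");
                if (p != NULL)
                    fputs(p, stdout);
            }
            fclose(f);
        }
        return 0;
    }

The output is still tab-separated, so it can be pasted into a spreadsheet exactly as in step 5.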

 

Considerations for parallel execution (for example via Performance Center):

If we execute with n parallel users, we can (generally speaking) create test data n times faster. However, there are a few differences to keep in mind:

  1. Test Data Handling:
    As we have several Virtual Users in play, we have to make sure the lines in our data file are distributed across the VUsers without overlap and each line is processed only once.

    These settings cause the lines in our test data file to be split evenly among the VUsers. Each VUser then processes its "package" and stops.

  2. Custom log entries: No changes

  3. Runtime Settings: No changes (number of iterations is ignored)

  4. Run Script:
    Since we start the execution from the LoadRunner Controller / Performance Center, we need to create a Controller scenario / Performance Center test for parallel execution and choose the number of parallel users.

  5. Copy/paste the whole execution log into a spreadsheet:
    In the parallel execution scenario we cannot copy the log from VuGen – instead we are left with several log files (one per VUser). In Performance Center these can be conveniently downloaded as a zip file. We then simply merge the different log files into one file.

  6. Filter:
    Two more things to consider (see also the note after this list):
    - You will have several ###INIT lines (one per VUser) → remove all but the first one.
    - Sorting may be necessary, as the results will likely not be ordered by ID.
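
Both points are also covered by the little C sketch above: passing all per-VUser log files on the command line merges them in one pass, and only the first ###INIT line is kept. For example (hypothetical file names):

    filterlog vuser_1.log vuser_2.log vuser_3.log > merged.txt

Only the sorting by ID then still has to be done in the spreadsheet.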


 