This is the first posting of the WPS (Wednesday Performance Snippet) series. The series shares hints and best practices on how to develop efficient code, specifically in the context of SAP TM.

This time it's about how to read data efficiently.

Often you need context information when processing data. Let’s say you need first stop information when working with items.

We unfortunately see coding like this quite often in customer enhancements:

[Screenshot ril1.PNG: code snippet that reads first-stop data inside the item loop]
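The original screenshot is not reproduced here, but based on the description a sketch of the problematic pattern could look roughly like this (the service-manager variable, association key, and data table names are illustrative assumptions, not the original code):

```
" Illustrative sketch only: one BOBF call PER item
LOOP AT lt_items INTO ls_item.
  " retrieve_by_association is executed in every iteration,
  " hitting the buffer or database each time
  lo_srv_mgr->retrieve_by_association(
    EXPORTING
      iv_node_key    = /scmtms/if_tor_c=>sc_node-item_tr
      it_key         = VALUE #( ( key = ls_item-key ) )
      iv_association = lv_assoc_first_stop   " illustrative association key
      iv_fill_data   = abap_true
    IMPORTING
      et_data        = lt_stop_data ).
  " ... process the first-stop information for this one item ...
ENDLOOP.
```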

This code snippet has several issues.

The most important one: the data is read within the loop, so BOBF is called in every single iteration and data is read from the database or buffer. This is not a good idea at all. It will increase the runtime considerably, and in many cases you will only find out after go-live, when realistic data volumes are processed.

You have to collect the keys and read the data in advance in one mass call, e.g. like this:

[Screenshot ril2.PNG: code snippet that collects the keys first and then reads the data in one mass call]
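Again the screenshot is not reproduced; a hedged sketch of the mass-read pattern described in the post and in the comments below (key collection via check_insert_key, then a single retrieve call) could look like this. The exact signature of check_insert_key and the node/association names are assumptions:

```
" Illustrative sketch only: collect keys first, read once
DATA lt_key TYPE /bobf/t_frw_key.

CLEAR lt_key.
LOOP AT lt_items REFERENCE INTO lr_item.
  " check_insert_key avoids inserting the same root_key twice
  /scmtms/cl_common_helper=>check_insert_key(
    EXPORTING iv_key = lr_item->root_key
    CHANGING  ct_key = lt_key ).
ENDLOOP.

" One mass call for all collected keys
lo_srv_mgr->retrieve_by_association(
  EXPORTING
    iv_node_key    = /scmtms/if_tor_c=>sc_node-root
    it_key         = lt_key
    iv_association = lv_assoc_first_stop   " illustrative association key
    iv_fill_data   = abap_true
  IMPORTING
    et_data        = lt_stop_data ).
```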

Of course, you also have to distinguish whether you really need this information for all items or only for some, e.g. only for the main cargo items, only for the vehicle items, etc.

This is particularly relevant for determinations. Determinations are always called when the prerequisite (e.g. a modify on the item node) is fulfilled, and that is then true for ALL item categories. But this will be the focus of another WPS 😎

Now your challenge: find the other performance issues in the first code snippet; there are at least two … 😕


2 Comments


  1. Erik von der Osten

    One performance issue is the INTO addition in the LOOP/READ statements, which copies each table line into the structure variable. In the second code snippet, REFERENCE INTO is used instead, which only assigns a reference to the reference variable.

    A second performance issue is the first INSERT statement into the lt_key table. In the second snippet, the method check_insert_key of class /SCMTMS/CL_COMMON_HELPER is used instead, to avoid duplicate keys being inserted for items with the same root_key.

  2. Bernd Dittrich Post author

    @ Erik: Winner!

    Another issue with the key collection in the first example is a missing CLEAR. Even with the check_insert_key approach, the table keeps growing and would retrieve more and more data, which would then also lead to wrong results …
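
    To illustrate the missing CLEAR (a sketch with assumed variable names): if the key table is filled repeatedly, e.g. in a determination that runs more than once, it must be reset before each collection:

    ```
    " Without this CLEAR, lt_key still holds the keys of the
    " previous run, so each execution reads more and more data
    CLEAR lt_key.
    LOOP AT lt_items REFERENCE INTO lr_item.
      /scmtms/cl_common_helper=>check_insert_key(
        EXPORTING iv_key = lr_item->root_key
        CHANGING  ct_key = lt_key ).
    ENDLOOP.
    ```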

