
Something really critical for performance is the number of determinations you define in the BO model.

As a wise man said:

For one node at one (BOPF-) time there shall be only one determination!

E.g. everything that should happen after an item has been modified (be it in the cargo or resource or whatever section) should be handled within one “item after modify” determination.

Within this determination:

  • read all required data at the beginning
  • determine what needs to be done at all and return if there is nothing to do! There is nothing bad in doing nothing if there is nothing to be done!
  • all submethods work only on local tables
  • at the end, one modify is performed for the complete table.
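Taken together, such an “item after modify” determination could look roughly like this. This is a minimal sketch, not TM standard code: the data types, the submethod ADJUST_ITEMS, and the omitted message/error handling are assumptions for illustration.

```abap
METHOD /bobf/if_frw_determination~execute.
  " Hypothetical "item after modify" determination following the rules above.
  DATA: lt_d_item_all TYPE /scmtms/t_tor_item_tr_k,  " assumed combined table type
        lt_mod        TYPE /bobf/t_frw_modification.

  " 1) Read all required data once, at the beginning
  io_read->retrieve(
    EXPORTING iv_node = is_ctx-node_key
              it_key  = it_key
    IMPORTING et_data = lt_d_item_all ).

  " 2) Return immediately if there is nothing to do
  IF lt_d_item_all IS INITIAL.
    RETURN.
  ENDIF.

  " 3) All submethods work only on the local table
  "    (ADJUST_ITEMS is a hypothetical submethod)
  adjust_items( CHANGING ct_item = lt_d_item_all ).

  " 4) One modify at the end for the complete table
  /scmtms/cl_mod_helper=>mod_update_multi(
    EXPORTING iv_node = /scmtms/if_tor_c=>sc_node-item_tr
              it_data = lt_d_item_all
    CHANGING  ct_mod  = lt_mod ).
  io_modify->do_modify( lt_mod ).
ENDMETHOD.
```

The single DO_MODIFY at the end means change notifications and buffer updates are triggered only once for the complete table.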

The helper method /SCMTMS/CL_MOD_HELPER=>MOD_UPDATE_MULTI is very helpful here to build the lt_mod table from the new data.

The recommended way of calling this method is:

    CALL METHOD /scmtms/cl_mod_helper=>mod_update_multi
      EXPORTING
        iv_node            = /scmtms/if_tor_c=>sc_node-item_tr
        it_data            = lt_d_item_all
        it_changed_fields  = lt_changed_fields
        iv_autofill_fields = abap_false
      CHANGING
        ct_mod             = lt_mod.



Of course, it often makes sense to check for the changes of the node at the very beginning to determine which data is really needed here.

So, what does BOPF-time mean here?

Basically this refers to when a certain determination is executed, which is defined here:

[Screenshot: Capture.PNG – determination timepoint configuration]

and here:

[Screenshot: Capture2.PNG – determination trigger configuration]

So in this example, all changes to a certain node are processed within the determination when the End Modify timepoint is triggered.

Why this rule:

The performance impact of one vs. multiple determinations is mainly that every determination would read the same data and do its own modify, which then triggers its own change notifications, buffer updates, etc. All this sums up significantly. Believe me, we saw it…

Also, BOPF needs more time to determine the relevance of the determinations if there are too many of them to be checked.


6 Comments


  1. Oliver Jaegle

    Dear Bernd,

    Thanks for your insights into the motivation for the TM BOPF architecture.

    However, I don’t fully agree with your general recommendation to use only one determination (per timepoint) per node. Of course you’re right that the calculation of the change object takes some time. All the modified instances need to be compared in all attributes with all the instances of the previous image and if you deal with mass-changes, it can be painful if this is done multiple times during a single roundtrip (after each determination).

    However, there are other benefits of using multiple determinations:

    • Maintainability: The model simply becomes more transparent if multiple determinations are used. This allows parts of the logic to be adapted more easily. Of course, if you modularize in a strict way in your single determination class, you can achieve the same. But I have experienced very often in my projects that all the logic ends up implemented inside the single EXECUTE method – like the ABAPers always used to 😉
    • Performance: If multiple determinations operate on distinct data and have different preconditions, the check- and check_delta-methods can be used to determine whether the determining logic actually needs to be performed. Assuming that reading already read (and thus buffered) data has no significant impact on performance, the amount of data overall processed can be reduced.
    • Transient data: If a determination modifies transient parts of the node, the configuration includes information about which attributes are being manipulated by the class. At runtime, BOPF will only respect those determinations if the requested attributes contain at least one of the modeled attributes.
    • Dedicated triggers: Apart from the timepoint, configuration also contains the trigger conditions. Not only a direct manipulation of the node can trigger a determination, but also a change of associated nodes might invoke a recalculation (e. g. the creation of a sub-node triggers the calculation of a total at the parent node). If multiple concerns are merged into one determination class, you cannot comfortably distinguish between which logic shall actually be performed.
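    The second point can be sketched roughly like this. This is a hypothetical illustration, assuming the CHECK_DELTA variant of the determination interface with a CHANGING ct_key table of instance keys; the data types and the compared field GRO_WEI_VAL are placeholders for whatever attribute the determination actually depends on.

    ```abap
    METHOD /bobf/if_frw_determination~check_delta.
      " Delta relevance check: drop all keys whose relevant attribute
      " did not change, so EXECUTE is skipped for those instances.
      DATA: lt_cur TYPE /scmtms/t_tor_item_tr_k,   " assumed table type
            lt_old TYPE /scmtms/t_tor_item_tr_k.
      FIELD-SYMBOLS: <ls_cur> LIKE LINE OF lt_cur,
                     <ls_old> LIKE LINE OF lt_old.

      " Current image and before image of the instances to be checked
      io_read->retrieve( EXPORTING iv_node = is_ctx-node_key
                                   it_key  = it_key
                         IMPORTING et_data = lt_cur ).
      io_read->retrieve( EXPORTING iv_node         = is_ctx-node_key
                                   it_key          = it_key
                                   iv_before_image = abap_true
                         IMPORTING et_data         = lt_old ).

      LOOP AT lt_cur ASSIGNING <ls_cur>.
        READ TABLE lt_old ASSIGNING <ls_old> WITH KEY key = <ls_cur>-key.
        IF sy-subrc = 0 AND <ls_old>-gro_wei_val = <ls_cur>-gro_wei_val.
          DELETE ct_key WHERE key = <ls_cur>-key.  " nothing to do here
        ENDIF.
      ENDLOOP.
    ENDMETHOD.
    ```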

    So from my point of view, there are many good reasons to have one determination per concern! Of course, if the timepoints and triggers of multiple determinations are equal, it is likely that they cover the same concern. But this is not necessarily the case.

    In my personal and project experience, the other rules you gave (particularly mass-reading of all relevant data) are more crucial for the overall performance of a transactional application. In our application, particularly the use of requested attributes on data retrieval got rid of most of the BOPF-related performance issues.

    Everybody has to weigh the values of performance, maintainability and readability of models for himself. For our use case, the “one determination per concern” approach more than compensates for the performance benefits of merging the determinations.

    Cheers,

    Oliver

    1. Bernd Dittrich Post author

      Hi Oliver,

      thanks for your reply and your thoughts!

      However, I still think the rule holds true. For the implementation of the check/prepare methods you also have to read and process the data, which again comes down to runtime. And reading from the buffer also takes time; I just did some work in the BOPF buffer class (BUF_SIMPLE) and it is currently really just a simple buffer. And all the retrieves sum up.

      I fully agree with your dedicated-triggers point – but then the “one point in time” rule does not hold true anymore.

      I found your point about requested attributes in retrieves interesting. The only thing I have seen so far is that if only fields from the persistent part of the node are requested, the before-retrieve (BR) determination for the transient part is not triggered, which is of course a huge difference. But for the DB and buffer accesses themselves, nothing really changes.
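      For reference, such a retrieve restricted to persistent attributes might be sketched like this; the requested field QUA_PCS_VAL and the data types are illustrative, not a statement about which TM fields are persistent.

      ```abap
      DATA: lt_req_attr TYPE /bobf/t_frw_name,
            lt_key      TYPE /bobf/t_frw_key,
            lt_item     TYPE /scmtms/t_tor_item_tr_k.  " assumed table type

      " Request only a persistent attribute, so the before-retrieve (BR)
      " determination for the transient part is not triggered.
      APPEND 'QUA_PCS_VAL' TO lt_req_attr.   " illustrative persistent field

      io_read->retrieve(
        EXPORTING iv_node                 = /scmtms/if_tor_c=>sc_node-item_tr
                  it_key                  = lt_key
                  it_requested_attributes = lt_req_attr
        IMPORTING et_data                 = lt_item ).
      ```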

      Thanks again for your thoughts!

      Which project are you working in? Maybe we can meet for a coffee or two and share experiences…

      Bernd

      1. Oliver Jaegle

        Dear Bernd,

        If the “one point in time” refers to the combination of trigger and phase (which is modeled in the “node category assignment”), I can agree more with this rule. I interpreted your post in such a way that you only wanted one determination class to be registered for all the triggers.

        The “simple buffer” uses all the features ABAP has in store for quickly accessing data, such as secondary keys, and is optimized for mass access. So reading the same data in the check method and later on in another determination’s execute is an access to an internal table with a sorted secondary key. Of course, not reading the data twice is faster, but as long as it’s the same instances and no additional DB access has to be performed, this has no significant impact on the overall performance. At least in our application, we’ve done much worse things 😉

        Cheers,

        Oliver

        P.s.: I’m employed at a SAP customer in Frankfurt. If you pass there, I’d be happy to meet!

        1. Bernd Dittrich Post author

          Hi,

          I just worked a lot with the simple buffer, and currently the buffer access does not use the secondary keys, unfortunately 😯 .

          This specifically holds true for retrieve by association, e.g. in specializations or reverse foreign keys. Basically it’s an ASSIGN COMPONENT kind of thing at the moment. It’s worth doing some debugging in the class /SCMTMS/CL_BUF_SIMPLE, e.g. the retrieve by association. In the case of reverse foreign key associations, BOPF always goes to the DB, even in case of buffer hits.

          The BOPF colleagues issued some performance notes in the last couple of days which might be worth looking at for you, specifically for improving the RBA (retrieve by association).

          Regards, and I’ll let you know when I’m in Frankfurt (or you come to Walldorf)… especially as you are also interested in BRFplus, as I saw (our BRFplus episode of the TM podcast might be interesting for you as well… here is the download link: http://scn.sap.com/docs/DOC-40517 )

          Bernd

  2. Tilmann David Kopp

    Hello Bernd,

    you suggested buffering instances that are used by several determinations during
    a roundtrip on entity level. This allows bundling the read and write to the BOPF buffer.
    This will result in a performance benefit, and as your blog is about performance and not about other qualities (like, for instance, maintainability), I’ll give you the 5 stars 😉

    However, the achieved performance benefit depends on the number of instances stored in the buffer. When using the default BOPF simple buffer, the buffer corresponds to an internal table. Thus, if your application buffers a lot of instances (e.g. >500,000), each access to the BOPF buffer costs more performance compared to an application scenario with only a few (e.g. <1,000) buffered instances.

    I think these different scenarios are the reason why it is so difficult to define a general rule which can be applied to all BOPF built applications.

    • As Transportation Management deals with a lot of instances, I agree that buffering on entity level might be a good idea there. It would also be helpful to use a special buffer that restricts the number of buffered instances – instead of just using the default BOPF simple buffer.
    • But I also think that for many of the other BOPF user groups, the performance benefit of grouping determination logic into a single determination is in general not high enough to compensate for the disadvantages of this buffering on entity level (e.g. maintainability, design of the BO; Oliver mentioned a few more).

    But of course – and this is something which is always true: if you have two determinations that read and write the same data, have the same triggering/request condition, and are somehow related from a semantic viewpoint, you should think of combining them.

    Thanks for your blog post!

    Best regards
    Tilmann

    1. Bernd Dittrich Post author

      Thanks for giving your feedback! As this is the TM blog, it mainly addresses BOPF in the TM context, but generally speaking it is always a good idea to think more deeply about your modelling.

