“A transaction manager? Why do we need a framework doing that?”

I had to explain to my management why we wanted to go for BOPF when we started to design our custom application.

Although I knew that it would be quite tricky to convey the benefits of an application framework to non-developers (sometimes, it’s even hard to discuss this with developers 😉 ), I could not just tell them “because we need it”, as I also had to convince them to upgrade the landscape to at least EhP5.

So I listed some of the benefits. I focused on the features supporting the development process:

  • “Living models” in the system
  • Enhanced modularization
  • Reuse
  • Common consumer interface
  • Adapter for web-based user interfaces (FBI)

But I also mentioned some more technical ones (as I thought it might be a good idea to sound knowledgeable 😉 ):

  • Database abstraction
  • Buffering
  • Transaction management

This was not the best idea I ever had. The audience knew close to nothing about what it takes to implement a full-fledged transactional application, but some of them had attended the BC400 some decades ago. And they had a good memory:

“SAP has an excellent database abstraction, buffering and an integrated transaction management”. And they could not resist adding a “Don’t you know that?” after their speech.

What they were referring to were the features of Open SQL (as an abstraction of the vendor-specific database dialects), the database buffer and the logical unit of work. And of course all of this is true.

So I had to argue why these well-known concepts are not state-of-the-art and where their limitations are.

While this was possible for database abstraction (as not every piece of information is persisted), promoting the benefits of a buffer which does not directly flush into the database was more difficult. In the end, I resorted to an aspect which I had considered a minor one, but which finally convinced them.

Implicit commits, buffering and a transaction manager

“How do you handle implicit commits when calling a remote function module via RFC?”, I asked. They did not understand.

Of course, as with most systems, our application should connect to other SAP systems via RFC. But to my surprise, no one knew that we might face an architectural challenge when not using proper buffer and transaction management.
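To make the challenge concrete: in ABAP, every synchronous RFC call implicitly triggers a database commit in the calling system. A minimal sketch (the table ZTEST_DUMMY and the destination name are illustrative; RFC_SYSTEM_INFO is a standard remote-enabled function module):

```abap
DATA ls_row TYPE ztest_dummy.
ls_row-attribute = 'A'.

INSERT ztest_dummy FROM ls_row.      " part of the current LUW, not yet committed

CALL FUNCTION 'RFC_SYSTEM_INFO'
  DESTINATION 'SOME_REMOTE_SYSTEM'.  " synchronous RFC: implicit DB COMMIT in the caller!

ROLLBACK WORK.                       " too late - the INSERT has already been persisted
```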

I implemented a short report illustrating the need for a transaction management and application-server-memory based buffering with BOPF.

The model

For the sake of having a persistence to address, I modeled a very simple dummy business object with only one node and one attribute and generated the DB table. The attribute is considered to be unique.


BO model.png

The sample transaction

For a short showcase, I implemented a sample consumer accessing the object. It creates a new instance, rolls back the transaction and tries to create the same instance again. As a variation, it additionally calls a remote function module within the sequence.

As the technical key of the instance is unique on the DB, subsequent creates / INSERTs are expected to fail unless there is a rollback in between.
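In pseudo-ABAP, the showcased sequence looks roughly like this (the controller and its methods are hypothetical placeholders for the two implementations compared below):

```abap
lo_controller->create_instance( 'A' ). " 1st create: succeeds
lo_controller->rollback( ).            " undo everything
lo_controller->create_instance( 'A' ). " 2nd create: succeeds only if rollback worked

lo_controller->call_remote_fm( ).      " variation: RFC call within the sequence
lo_controller->rollback( ).            " undo everything - or does it?
lo_controller->create_instance( 'A' ). " fails if the RFC has implicitly committed
```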

sample transaction.png

Alternative implementations:

Classical OpenSQL-based transaction

In the “classical” approach, I did not introduce a transaction layer but did it the way ABAPers (used to) do: simply read and update the DB table directly. I call this the DB-interface approach, as it uses the database table as a first-class citizen of the model and interacts with it like a programming interface. Buffering is taken care of by the application server’s database buffer. This also means that no other buffers (internal tables) should be used within the application, so that an Open SQL SELECT can always get the current state. For managing the transaction, the well-known COMMIT WORK and ROLLBACK WORK statements were as good as ever.

We all love Open SQL; let every ABAPer who has never done it like that in his career throw the first stone at me 😉

DB transaction.png
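In case the screenshot is hard to read, this is roughly what the classical variant boils down to (table and field names are illustrative):

```abap
DATA ls_row TYPE ztest_dummy.
ls_row-db_key    = iv_key.
ls_row-attribute = iv_attribute.

INSERT ztest_dummy FROM ls_row.  " visible to everyone at the next DB commit
IF sy-subrc <> 0.
  WRITE / 'Create failed: duplicate key'.
ENDIF.

" transaction control via the classical statements
ROLLBACK WORK.
```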

BOPF-based transaction

With BOPF, I used the public interface of the service manager to access the dummy business object. Via the framework, a generic buffer based on internal (member) tables is instantiated, which takes care of returning the requested state (in this sample, the current one) to the consumer reading the data. All transaction-related operations were done – well – using the BOPF transaction manager.

bopf transaction 1.png
bopf transaction 2.png
bopf transaction 3.png
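In rough strokes, the BOPF variant looks like this (the constant interface ZIF_DUMMY_C and the modification table LT_MOD are illustrative; the factory and interface names are the standard BOPF ones):

```abap
DATA lo_svc_mgr  TYPE REF TO /bobf/if_tra_service_manager.
DATA lo_txn_mgr  TYPE REF TO /bobf/if_tra_transaction_mgr.
DATA lv_rejected TYPE abap_bool.

lo_svc_mgr = /bobf/cl_tra_serv_mgr_factory=>get_service_manager( zif_dummy_c=>sc_bo_key ).
lo_txn_mgr = /bobf/cl_tra_trans_mgr_factory=>get_transaction_manager( ).

" create the instance via MODIFY (lt_mod filled with a create-modification before)
lo_svc_mgr->modify( it_modification = lt_mod ).

" nothing is on the DB yet: either persist the buffer ...
lo_txn_mgr->save( IMPORTING ev_rejected = lv_rejected ).
" ... or throw it away - no DB rollback needed
lo_txn_mgr->cleanup( ).
```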

The result

As the code screenshots show, direct DB interaction in ABAP is very efficient with respect to lines of code. Updating data using the BOPF interface methods in particular is much less convenient and takes much more space on the screen. But that was not what counted.

When executing the transactions, the same controller printed the intermediate results to the screen as an execution log.

Execution log of the “classical” implementation


Execution log of the BOPF based implementation


We can clearly see that the transactions behave very similarly. As expected, after each INSERT of the classical transaction, the result is persisted on the database and thus globally visible. As BOPF only inserts into internal tables, a SELECT can’t find those instances.

But after the RFC-based function module reading some remote data is called, the logs diverge:

Although the transaction is rolled back afterwards, the changed data remains persisted in the “classical” implementation. The reason is that a remote function call implicitly commits the transaction – in the calling system. With the application-server-memory based buffering of BOPF, this DB commit is not an issue: everything has been properly undone.

I’ve been showing this to some of my fellow developers and was surprised to hear that almost nobody knew about this benefit.

But this just shows the real value-add: using a well-engineered framework which takes care of all the technology handling, developers simply don’t have to know about many of those details and can concentrate on what actually matters: implementing business logic.

What do you think? How do you handle the transaction and buffer in your ABAP-application?

I’d be most interested in reading from you in the comments!



  1. Glen Simpson

    Hi Oliver

    I’ve been listening in the BOPF space for a little while now and have really appreciated the effort you put into your blog posts. Thanks for sharing.

    I am very interested in some of the benefits that BOPF can provide (FPM/POWL/Gateway/etc Integration, not to mention the inbuilt transaction management and other things you’ve mentioned) but I have to admit that I am still a little conflicted. The main reason for this is that the client code (the code that uses the BOPF API) is so verbose and “technical” that it becomes difficult to read – and I worry that if it is difficult to read then it will be difficult for less-experienced developers to use and to maintain correctly.

    My ideal “business object” is one that exposes a simple API and allows the client code to interact with it in business-speak. As an example, if the client code has a sales order item object and wants to deliver it then all it needs to do is:

      sales_order_item->deliver( ).  “no error mgmt but you get the idea…

    It seems the equivalent code to trigger an action on a BOPF object is a lot more complicated and, in my opinion, completely hides the fact that a sales order is even involved here:

      " Set the BO instance key:
      <ls_key>-key = iv_key.     "<== Sales Order BO Key

      " Populate the parameter structure that contains parameters passed
      " to the action:
      ls_parameters-item_no = '001'.
      GET REFERENCE OF ls_parameters INTO lr_s_parameters.

      " Call the DELIVER action defined on the ROOT node:
      CALL METHOD mo_svc_mngr->do_action
        EXPORTING
          iv_act_key    = /bobf/if_demo_sales_order_c=>sc_action-root-deliver
          it_key        = lt_key
          is_parameters = lr_s_parameters.

    (This code was “borrowed” from a blog post in James Wood’s excellent series and slightly simplified.)


    So, all of that was just to say that I like to create my own business objects based on a consistent pattern rather than a framework but this means I need to build my own transaction management and other functionality that BOPF probably provides out-of-the-box. It also means I don’t get the automatic integration features.

    There is more to my business object pattern than this, but basically I have a factory class that can create new instances or retrieve existing, buffered instances of a business object class. I usually have one class per “level” in a business object (e.g. ZCL_SALES_ORDER, ZCL_SALES_ORDER_ITEM, etc.) and these classes use consistent naming for similar methods and attributes at each level. The data for each level is stored in one or more attributes of the object and is not inserted into the database immediately, as suggested in your blog post. There is a SAVE method in the top-level class that manages the process of posting its own data as well as interacting with its children objects to save their data too. If the business object class is based on a standard SAP object, then a BAPI is usually called in the SAVE method; otherwise, if the object is fully custom, the data will be inserted into the database at this time.
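    If I read the pattern correctly, consumption would look roughly like this (class and method names are hypothetical, following the naming convention described):

    ```abap
    DATA lo_order TYPE REF TO zcl_sales_order.
    DATA lo_item  TYPE REF TO zcl_sales_order_item.

    lo_order = zcl_sales_order=>get_instance( iv_vbeln = '0000004711' ). " buffered retrieval
    lo_item  = lo_order->get_item( iv_posnr = '000010' ).
    lo_item->deliver( ).

    lo_order->save( ). " posts its own data, then cascades to the item objects
    ```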

    The benefit of this pattern is that I can make the API very domain-specific, which makes it quite easy to read (even for functional consultants!); however, it does mean that there is a significant amount of “boilerplate code” that needs to be built for each business object class.

    Anyway, I’m going to keep listening here and studying the BOPF examples and who knows… maybe you can convert me to a BOPF believer. 😉

    Kind Regards


    1. Glen Simpson

      As I read over my own comment, I’m now wondering if maybe I’m not comparing apples to apples. Maybe I shouldn’t be comparing my bespoke domain-specific business object to SAP’s technical framework? Maybe it’s not a case of domain-specific business object *or* BOPF but that my business object could use BOPF internally and hide the technical details? It does seem like it could be a lot of extra work and another point of failure though…

      I’d be interested in your thoughts on that, Oliver.



      1. Oliver Jaegle Post author

        Dear Glen,

        thanks for your feedback and for sharing your thoughts!

        Though I risk hijacking my own topic (about transaction management) with something a bit off, I’m too eager to respond to your comment 😉

        Your remarks address the consumer API. This has also been an issue for dozens of other BOPF developers and there have been different solutions to that in the past (afaik).

        I know the architectural paradigm “your” business object API follows as a “domain model”.

        It’s a pattern widely used in purely object-oriented languages; some of them even support this pattern (some in combination with “active record”) as a first-class citizen of the platform (I have only read about that being the case on the MS .NET platforms, not experienced it – I’m an ABAPer with respect to serious programming 😉 ). The persistent classes in ABAP also follow this architecture.

        BOPF uses a combination of patterns which – as far as I understood it – amounts to a “service layer” (in combination with a “table data gateway” for the persistence access).

        Each of those approaches has different (dis-)advantages.

        Particularly in ABAP, object instantiation is quite expensive (compared to other languages or to the insert of a new line into an internal table). Thus, a service layer seems to me optimized for performance. However, consumption code does not read as nicely (I have to admit that). You can tune readability though by using good data declarations.

        Imho, thanks to the generated constant interface, you can almost use it like an internal DSL:

        " Deliver the sales order selected with the split quantity determined earlier 
        INITIAL LINE TO lt_sales_order_key ASSIGNING <ls_sales_order_key>.
        <ls_sales_order_key>-key = iv_key. * This is really cumbersome: A structure with one attribute!
        * This will get better with
        ABAP constructor expressions though 😉 CREATE DATA lr_delivery_parameters. lr_delivery_parameters->delivered_quantity = lv_split_quantity. * changed semantic here:
        * In your sample, the instance seemed to be passed as parameter.

        * This would be a bad design of the action as the instance the action executed for

        * is (precisely are) referred to by its' keys.
        lo_sales_order_manager->do_action(      iv_act_key           = /bobf/if_demo_sales_order_c=>sc_action-root-deliver      it_key               = lt_sales_order_key      is_parameters        = lr_delivery_parameters ).

        There has been a feature in BOPF itself which generated the style of API you are looking for, but I believe this feature has not been released.

        Maybe somebody of the BOPF group can give us an insight into that.

        Cheers, Oliver

  2. Tobias Trapp

    Hi Oliver,

    I like your blogs and I want to encourage you: keep on with your excellent work!

    You asked the following question:

    What do you think? How do you handle the transaction and buffer in your ABAP-application?

    Buffering and transaction management are not new to most ABAP developers; many already have experience with ABAP Object Services, and I think you can compare both frameworks: both have an internal buffer management and a transaction management. But there are also differences: Object Services are compatible with the “classic” transaction concept for good reasons, and I’ll explain why. When you are working in custom or add-on development, you usually don’t have complete control over your transaction: the framework does the COMMIT WORK resp. ROLLBACK. In this environment you need to bridge the gap between the transaction concept of the “SAP standard” and BOPF. So we urgently need the know-how to do this, for example by programming our own custom transaction manager.

    So I would appreciate it if you continued your blog series about BOPF transaction management and showed how to use BOPF in an “SAP standard” environment. By the way, Object Services support both concepts: besides their own transaction manager, you can use the “legacy” (= SAP Business Suite) transaction concept.

    So this would be a great topic for your next blog.



    1. Oliver Jaegle Post author

      Dear Tobias,

      Thanks for your appreciation! You surely know it takes some effort to blog on SCN, particularly if you do it in a foreign language.

      Feedback like yours encourages me to spend my commuting time writing 😉

      Regarding your comment: I have to admit I don’t really know how the Object Services actually work. I’ve just been “playing” with them some years ago, so I can’t really compare the two architectures (particularly of the runtime). But I can guess 🙂 I’d be happy if you could confirm or refute the following assumptions:

      1. As you wrote that it integrates into the “standard” (one of my favourite words with respect to SAP products 😉 ) transactions, I expect the Object Services to trigger the SAVE on COMMIT.
      2. For flushing internal tables on commit, an update module is the usual implementation.
      3. Thus, I expect the Object Services to register an update module which flushes all buffers of the instantiated objects.
      4. If all this is true, this would also be the way I’d integrate custom objects into another transaction.
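      Assumption 3 can be sketched like this (the flush function module and the buffer table are hypothetical; PERFORM ... ON COMMIT and IN UPDATE TASK are the standard mechanisms):

      ```abap
      " registered once per LUW by the buffer class
      PERFORM flush_buffers ON COMMIT.

      FORM flush_buffers.
        " hand the buffered internal table over to an update module;
        " it is executed as part of the database LUW on COMMIT WORK
        CALL FUNCTION 'Z_DUMMY_FLUSH_BUFFER' IN UPDATE TASK
          TABLES
            it_buffer = gt_buffer.
      ENDFORM.
      ```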

      However, my personal experience with this is still very limited. There are mechanisms (such as a slave transaction manager) which I have not used yet, so I’d need to do some research before I start writing a blog about it. Maybe someone of the core BOPF team can give an introduction into how to do this properly.

      Nevertheless, I’d enjoy discussing this topic here in the comments.

      Looking forward to reading your comments on my assumptions!



      1. Tobias Trapp

        The Object Services collect the data in buffers and start a generic update task (“Verbucher”) by providing the data as XML, if I remember correctly. You can hook in and start your own V1 and V2 modules. And in fact it uses PERFORM ON COMMIT, but all this stuff is hidden behind the framework.

        But this is not the point I would like to discuss. There are two transaction modes for Object Services: a transaction object similar to, but not as powerful as, BOPF’s – but you can also use the classical transaction concept out of the box. It would be cool to have this in BOPF, too.

        But at first we have to learn how to use the slave transaction manager techniques, otherwise the use cases for BOPF in AddOn development for SAP Business Suite are limited. This is exactly what Bruno Haller said.



        P.S.: By the way, SAP “standard” is one of my favorite words, too.

  3. Bruno Haller

    Hi Oliver,

    the BOPF transaction manager is working fine for us, as long as we are solely inside the BOPF context.

    Like Tobias already mentioned, unfortunately we sometimes also need to integrate legacy objects (non-BOPF) within the same transaction, and then it becomes some kind of challenge, or let’s say trial & error, to get it working.

    I would also appreciate some guidance on how to deal with this properly.

    Best Regards,


    1. Carsten Schminke

      Hi all,

      let me try to bring some light into the mysterious topics of the BOPF transaction and the Slave Transaction Manager.

      But before explaining the slave transaction manager, I have to give some insights into the BOPF transaction model. First of all, a BOPF transaction consists of two phases. First there is an interaction phase, in which consumers consume BO services like reading data, changing data, executing actions and so on. During the interaction phase, all changes are collected in the transaction buffer. After the interaction phase, the consumer decides whether to commit or roll back his changes. This results in the second phase: the save or the cleanup phase. Looking behind the scenes of the save phase, there is not just a COMMIT WORK; it’s a defined choreography of single steps before and after the COMMIT WORK is performed:

      1. Finalize step: BOs are able to do late changes (finalize determinations)
      2. Check Before Save step: BOs do last checks (consistency validations in a consistency group). In this step, the save can be aborted because of failing checks
      3. Late Numbering step: BOs are able to draw numbers from a number range
      4. Save step: During this step, the data is saved from the transaction buffer to the database
      5. The COMMIT WORK is performed
      6. After Successful Save step


      This works very smoothly until you have the requirement to mix non-BOPF parts of an application – which have their own transaction model – with parts that are implemented with BOPF. For example, the non-BOPF application has its own transaction handler with its own choreography of steps, or just a function module that does some changes and finally performs a COMMIT WORK. How would you bring that together with the BOPF transaction model?

      First of all, there is no problem with just executing both transaction handlers in a sequence: first call the existing transaction handler, then the SAVE on BOPF’s transaction handler. Since BOPF stores everything in its buffers, the first COMMIT WORK would not interfere with the BOPF transaction. But what would you do if the first COMMIT works fine but the BOPF save is aborted, for example because a consistency validation throws an error? You could try to revert the changes done in your non-BOPF application – there is a dedicated determination time point ‘After Failed Save’ where you could do this – but this gets very cumbersome. And sometimes you may not even know which data exactly needs to be reverted.

      Now the so-called Slave Transaction Manager (STM) comes into the picture. With the STM, you are able to remove the commit control from BOPF and perform the choreography of the save steps (1–6 mentioned above) on your own. You can mix the steps of the BOPF save with the steps that are necessary to save the data of your non-BOPF application. And finally, you are able to execute a single COMMIT WORK.

      Coming back to the example with the function module, it would look like this: you first call steps 1–4 on the STM. Then you execute the FM that internally does the COMMIT WORK, and all the data from the non-BOPF and the BOPF parts of your application are in one LUW. Finally, you call step 6 to finish the save phase from the BOPF perspective as well. To do so, you need to get the STM from the Transaction Manager Factory /BOBF/CL_TRA_TRANS_MGR using the method GET_SLAVE_TRANSACTION_MANAGER. The STM instance has the above-mentioned save steps as public methods, and you are able to define your own transaction choreography.
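      Based on this description, the choreography might be sketched as follows. Caveat: the STM interface type and its method names below are assumptions mapping the six save steps listed above, not verified signatures, and Z_LEGACY_POSTING is a hypothetical function module:

      ```abap
      " STM type and method names are illustrative - check the actual interface
      DATA lo_stm TYPE REF TO /bobf/if_tra_trans_mgr_slave.
      lo_stm ?= /bobf/cl_tra_trans_mgr=>get_slave_transaction_manager( ).

      lo_stm->finalize( ).              " step 1
      lo_stm->check_before_save( ).     " step 2
      lo_stm->adjust_numbers( ).        " step 3 (late numbering)
      lo_stm->save( ).                  " step 4 - writes the buffer, no COMMIT yet

      CALL FUNCTION 'Z_LEGACY_POSTING'. " non-BOPF part, performs the one COMMIT WORK (step 5)

      lo_stm->after_successful_save( ). " step 6
      ```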

      Best regards,


  4. Vigneshwaran Odayappan


    Awesome post 🙂 Thanks for the effort you have made to write this blog and for bringing some light into the lives of developers who actually want to do something with the BOPF framework.

