8.2.9 Changing instances and saving to the database

In “ABAP to the Future” (A2tF), this chapter mainly contains advice on how to implement the monster-model-class and introduces another helper-class which is supposed to facilitate handling of BOPF. In my eyes, it doesn’t. I believe it makes things worse and has some major architectural flaws. Therefore, I would like to split this chapter into two sections. The first describes how modifications are actually processed in BOPF and how the interaction with the database works. Afterwards, I will explain why I don’t think you should re-use the provided helper-class (I hope I could convince you that you should not have a model-class at all if you read the previous chapters anyway).

Manipulating instances

In my version of chapter 8.2.2, I already briefly covered that the core service MODIFY is the weapon of choice in /BOBF/IF_TRA_SERVICE_MANAGER for consumers who want to actually change something.

Re-reading what I wrote back then, I hope that by now it’s easier to understand the paragraph.

“The modify command gets passed a set of modification instructions which might not only affect multiple instances, but also multiple nodes in one call. This is essential, as there might be business logic which validates whether an instance can be created based on subnode-data. E. g. we could validate that each monster needs to have a least one head. Creating a monster without a head would reject the modifications for the failed monster instance.”

Let’s have a more detailed look at what this “set of modification instructions” looks like.

/bobf/frw_s_modification defines the structure of one modification instruction.

The mandatory information which always has to be populated is actually quite compact:

  • NODE: The technical key (from the constants-interface) of the node to be manipulated.
  • CHANGE_MODE: /BOBF/IF_FRW_C=>SC_MODIFY_<CREATE/UPDATE/DELETE>
  • DATA: A data reference to a work-area of the node’s combined structure
  • KEY
    • Mandatory for updates and deletes: the technical key of the instance to be manipulated (redundant to the KEY-component of DATA)
    • Optional for creates: allows the consumer to pre-define the technical identifier of the instance to be created. Necessary only if a subnode shall be created in the same modify-call. Use /bobf/cl_frw_factory=>get_new_key( ) to obtain one.

A subnode is always created via an association from a source node. Therefore, for creates of non-root nodes, the following information is mandatory (and should not be filled in any other case, as the KEY of the instance to be manipulated is already known then).

  • SOURCE_NODE: The technical key (from the constants-interface) of the node from which the instance to be created is related. This is usually, but not necessarily, the parent node.
  • ASSOCIATION: The technical key (from the constants-interface) of the association which shall be used for the creation. This is usually, but not necessarily, a composition.
  • SOURCE_KEY: The technical key of the instance of the source node (redundant to the PARENT_KEY attribute of DATA).

For updates, one more component is relevant:

  • CHANGED_ATTRIBUTES: Only for updates; specifies the scope of the change. If supplied, all other attributes will be ignored.
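Putting these fields together, a modifications table which creates a monster together with one head in a single MODIFY call could look roughly like the following sketch. Note that all Z-names (the constants-interface ZIF_MONSTER_C and the combined structures ZSC_MONSTER and ZSC_MONSTER_HEAD) are assumptions for illustration; your generated names will differ.

```abap
DATA: lt_mod     TYPE /bobf/t_frw_modification,
      ls_mod     TYPE /bobf/frw_s_modification,
      lr_monster TYPE REF TO zsc_monster,       " combined structure of the ROOT node (name assumed)
      lr_head    TYPE REF TO zsc_monster_head.  " combined structure of the HEAD node (name assumed)

" Create the root instance. We pre-define its key so that the
" subnode can reference it within the same modify-call.
CREATE DATA lr_monster.
lr_monster->key  = /bobf/cl_frw_factory=>get_new_key( ).
lr_monster->name = 'Fred'.

CLEAR ls_mod.
ls_mod-node        = zif_monster_c=>sc_node-root.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-key         = lr_monster->key.
ls_mod-data        = lr_monster.
INSERT ls_mod INTO TABLE lt_mod.

" Create a head subnode via the composition from the root.
CREATE DATA lr_head.
lr_head->key = /bobf/cl_frw_factory=>get_new_key( ).

CLEAR ls_mod.
ls_mod-node        = zif_monster_c=>sc_node-head.
ls_mod-change_mode = /bobf/if_frw_c=>sc_modify_create.
ls_mod-key         = lr_head->key.
ls_mod-data        = lr_head.
ls_mod-source_node = zif_monster_c=>sc_node-root.
ls_mod-association = zif_monster_c=>sc_association-root-head.
ls_mod-source_key  = lr_monster->key.
INSERT ls_mod INTO TABLE lt_mod.
```

Both instructions travel in one table, so the framework can validate the monster together with its head.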

“Redundant” code

Building the modifications table obviously involves a CLEAR as well as an INSERT INTO TABLE <of modifications> for each instruction. I always felt tempted to implement a method reducing this redundancy in the code, but nowadays I usually don’t. The reason is that in reality, doing so rarely reduces the number of actually executed lines of code, and as the ABAP compiler does not inline the method call, it will be a tad slower. But we could argue about that, and I would not burn a developer for factoring this out into a method of its own. However, if you do that, I recommend having a dedicated, typed method for each node, as this will at least produce a more expressive signature. I strongly discourage implementing a generic helper such as the one Paul provides along with A2tF, for various reasons and for the lack of a real benefit.
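If you do factor it out, such a dedicated, typed method per node could look like this sketch (again, the Z-names are assumptions; 7.40+ syntax):

```abap
METHODS add_head_creation
  IMPORTING is_head         TYPE zsc_monster_head          " combined structure of HEAD (name assumed)
            iv_monster_key  TYPE /bobf/conf_key
  CHANGING  ct_modification TYPE /bobf/t_frw_modification.

METHOD add_head_creation.
  " Copy the caller's data into a reference the framework can keep
  DATA lr_head TYPE REF TO zsc_monster_head.
  CREATE DATA lr_head.
  lr_head->*   = is_head.
  lr_head->key = /bobf/cl_frw_factory=>get_new_key( ).

  INSERT VALUE #( node        = zif_monster_c=>sc_node-head
                  change_mode = /bobf/if_frw_c=>sc_modify_create
                  key         = lr_head->key
                  data        = lr_head
                  source_node = zif_monster_c=>sc_node-root
                  association = zif_monster_c=>sc_association-root-head
                  source_key  = iv_monster_key )
         INTO TABLE ct_modification.
ENDMETHOD.
```

The typed signature (IS_HEAD, IV_MONSTER_KEY) is what makes this variant preferable to a generic helper: the compiler can check the caller.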

A modification roundtrip – putting it all together

So what actually happens inside the system after the consumer calls “modify”? Let’s have a closer look.

The service manager does what managers usually do: not much. It basically delegates to the framework (/BOBF/CL_FRW), after having launched any plugins which may be registered (we’ll not go into detail on plugins here).

The framework analyzes which nodes are affected and pushes the changes to the buffers of the corresponding nodes (by default, the BOPF simple buffer /BOBF/CL_BUF_SIMPLE manages the transactional images). Technically this means that a member table of the buffer instance is changed. Afterwards, the internal determination-validation-cycle starts: the framework analyzes which classes need to be instantiated, determines the affected keys and calls the interface-methods described before:

  • Action validations are launched first: as they might prevent the modification, the further business logic might in fact not be necessary after all. Note that a failed action validation for a modification will reject the complete set of modifications for the instance’s node tree: if e.g. a create of a subnode fails, the update on the corresponding parent node will also not be executed.
  • Determinations after modify are triggered next. Whatever changes they request (via io_modify) are not flushed to the buffer automatically: they remain in a member table of the modify-instance until the cycle completes.
  • Finally, consistency validations check the current state of the instances. As a consequence, a consistency status might be set and messages can be created.

After the modify command has completed, a new state is available in the memory of the application server. As such, it is isolated from concurrent sessions. This roundtrip can be executed multiple times (with the transaction always remaining in the interaction phase) until the save is requested.
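From the consumer’s perspective, one such roundtrip is a single call to the service manager. A minimal sketch (7.40+ syntax; the BO key constant ZIF_MONSTER_C=>SC_BO_KEY is an assumption):

```abap
DATA(lo_srv_mgr) = /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
                     zif_monster_c=>sc_bo_key ).

lo_srv_mgr->modify(
  EXPORTING it_modification = lt_mod        " the modifications table built above
  IMPORTING eo_change       = DATA(lo_change)
            eo_message      = DATA(lo_message) ).

IF lo_message IS BOUND.
  " Inspect the messages, e.g. those produced by failed validations
  lo_message->get( IMPORTING et_message = DATA(lt_messages) ).
ENDIF.
```

Nothing has been persisted at this point; the new state lives only in the buffers of the session.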

If you want to really understand what happens, I highly encourage debugging /BOBF/CL_FRW. It’s always good to know what really happens if you are going to build an application on top of it.

The save command on the transaction manager is propagated to all instantiated business objects (precisely, to their framework instances). This makes the session change from the interaction phase to the save phase. The same cycle described above happens again – with the difference that action validations triggered on the SAVE_<node> framework action are executed first, then the determinations on finalize, and afterwards again consistency validations, which might prevent the save as well (if they don’t set a consistency-status).
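The save itself is then requested on the transaction manager, not on any single business object. A sketch:

```abap
DATA(lo_txn_mgr) = /bobf/cl_tra_trans_mgr_factory=>get_transaction_manager( ).

lo_txn_mgr->save(
  IMPORTING ev_rejected = DATA(lv_rejected)
            eo_message  = DATA(lo_save_message) ).

IF lv_rejected = abap_true.
  " A validation of any of the participating business objects prevented
  " the save: nothing has been persisted for any of them.
ENDIF.
```

Note that the call carries no BO key at all: the transaction manager saves everything that has been touched in the session, which is exactly the cross-BO nature discussed below.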

The actual persistence management is an interaction between the buffer and the data-access-class, which is quite unspectacular and needs no further insight (if you want to debug it, set your breakpoints in /BOBF/CL_DAC_TABLE).

Please note that a save is an interaction which affects multiple business objects. A single validation preventing the save leads to no data of the other business objects being persisted either. This is one of the reasons why I believe that there must not be a class which is responsible for one business object and at the same time features a save-method (as this is a cross-BO command). Paul’s helper-class, which has a member for the BO’s service manager as well as a member for the transaction manager, implies that only the BO which the model class represents is going to be persisted. And this is not the case.

> Find more alternative versions of chapters on my blog.
