
8.2 Using BOPF in order to write a DYNPRO-style program

In the following chapters, we’ll look at how to actually program business logic in BOPF.

Even in the ancient DYNPRO times, it was (theoretically) possible to implement an MVC pattern. The DYNPRO events PBO and PAI could serve as entry points for a controller which then delegates the actual logic to a model. However, I know of only a few samples where this has been done consistently (you might debug the PBO and PAI of the ABAP workbench (SE80) or the BOPF builder (BOBX) in order to get an impression of how this could be done).

In my opinion, there is no “BOPF equivalent of a PAI” (as Paul writes in A2tF): the PAI is part of the UI or controller layer (I can argue with myself about which of the two it exactly belongs to), but it is surely not meant to be part of the model, which is where BOPF resides. The UI and its controller are BOPF consumers.

In this chapter, we will consume BOPF as well as provide business logic (such as checks on the input’s sanity). The interfaces and patterns for service consumption and provisioning look very similar, which is one of the strengths of the patterns used.


8.2.2 Accessing instances of a Business Object

This chapter deals with how to access a business object instance.


Excursus: The BOPF Test-UI (transaction BOBT)

After you have modeled the static structure of your business object, you can immediately interact with it in all aspects described in this chapter. For this purpose, you can utilize a Dynpro-based Test-UI. Simply load your business object by entering its name and start either by creating a new instance or by identifying existing ones.

To improve the usability of the Test-UI, select a node attribute of a unique alternative key to be displayed instead of the GUID.


As we imagine entering our application, there are typically two different UI patterns: After a selection screen, a list of instances matching the criteria is displayed, one row representing one instance of the entity. On the button bar, we’d be offered the option to edit an existing instance or to create a new one. As an alternative to a selection list, we could also simply have to enter an identifier, i.e. a human-readable semantic key. In both patterns, the next screen would either read the current data and allow editing it, or create an instance with the corresponding ID (optionally with default values).


Identifying instances

As written before, a business object node is the model part which corresponds to a UML class and thus carries the data of the actual instances. In BOPF, each of these instances is identified by a – tada – GUID. This technical key does not need to be modeled: While generating the combined structure which includes the persistent as well as the (optional) transient information, BOPF also includes a technical structure, the so-called key-include. It contains not only the instance’s GUID (KEY), but also the PARENT_KEY, which is the KEY of the parent node (initial for root nodes), as well as the ROOT_KEY (for a root-node instance, ROOT_KEY and KEY carry the same value). This key-include is used by the framework in order to resolve compositions as well as their reverse (TO_PARENT) and TO_ROOT, but it can of course also be evaluated in business logic.
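To give you a feeling for what this looks like, here is an illustrative sketch of such a generated combined structure. The structure names zsmo_root_persistent and zsmo_root_transient are made up for this sample; the key-include shipped with BOPF is, to my knowledge, /BOBF/S_FRW_KEY_INCL:

* Illustrative sketch only - BOPF generates these structures for you.
TYPES BEGIN OF ts_monster_root.
  INCLUDE TYPE /bobf/s_frw_key_incl.     " KEY, PARENT_KEY, ROOT_KEY (all GUIDs)
  INCLUDE TYPE zsmo_root_persistent.     " the modeled persistent attributes
  INCLUDE TYPE zsmo_root_transient.      " the modeled (optional) transient attributes
TYPES END OF ts_monster_root.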

All semantic data, including identifiers, is modeled as attributes. Based on these attributes, two core services exist in order to get the KEYs of an instance: QUERY and CONVERT_ALTERNATIVE_KEY. Once you know the key, you can feed it to the core service RETRIEVE in order to get the actual data.


Query

Let’s have a look at QUERY first, as it’s the simpler one. A query is a modeled artifact which resides at a node (the “assigned node”). Based upon (multiple) query parameters, a set of instances of the node at which the query resides is returned (more precisely, the corresponding KEYs). The query contract allows applying the well-known select-option semantics (including BT and CP) to each attribute. There are two types of queries which do not need to be implemented, but which are answered by the framework itself: The node-attribute query SELECT_BY_ELEMENTS is a query whose parameters match the node structure; the SELECT_ALL query is a query without any parameters.

Figure 18 - The SELECT_ALL-query is more or less only technical and needs to have this name

Figure 19 - The node attribute query with the node structure as query parameters


Note that query names are not unique within the model, but only within the context of the node: A SELECT_BY_ELEMENTS at the ROOT node will return keys of ROOT instances; the SELECT_BY_ELEMENTS at the HEAD node will return HEAD keys matching the criteria (potentially of multiple monsters). All queries adhere to the implied contract and have to support paging as well as the restriction of the query to a given set of instances (see parameter “is_query_options”).

I believe that “QUERY” feels very familiar to most ABAP developers, as it more or less wraps an SQL query (like a prepared statement). But there is one pitfall when using it in transactional applications: Just like any select statement, it can only return persisted data. The transactional buffer (an internal member table which holds the created and changed instances) is ignored. Therefore, I highly recommend using “QUERY” only from the consumer at the very beginning of a transaction (e.g. on a selection screen or at the beginning of some batch report). Within service provisioning in particular, queries must not be used! The side effects of reading dirty data while applying business logic are tricky to identify and mostly horrible to correct.
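To make this concrete, here is a minimal consumption sketch of the node-attribute query. The constants interface zif_monster_c and the attribute COLOR are assumptions following the naming used in this chapter; please double-check the parameter names of /BOBF/IF_TRA_SERVICE_MANAGER->QUERY on your release:

DATA(monster_manager) = /bobf/cl_tra_serv_mgr_factory=>get_service_manager( zif_monster_c=>sc_bo_key ).

" Select-option-like restriction on a node attribute (assumed attribute COLOR)
DATA(selection_parameters) = VALUE /bobf/t_frw_query_selparam(
    ( attribute_name = 'COLOR' sign = 'I' option = 'EQ' low = 'GREEN' ) ).

monster_manager->query(
  EXPORTING
    iv_query_key            = zif_monster_c=>sc_query-root-select_by_elements
    it_selection_parameters = selection_parameters
  IMPORTING
    et_key                  = DATA(green_monster_keys) ).  " KEYs of persisted instances only!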


Converting an alternative key

The core service CONVERT_ALTERNATIVE_KEY is much less comfortable with respect to identifying instances of a node and needs more modeling, but it respects the transactional buffer! An alternative key in the sense of BOPF is an attribute of a node (or a combination of multiple attributes) which serves to identify either exactly one instance (usually an ID) or a set of instances (usually a foreign key). A node may have one or more alternative keys, which are explicitly modeled.

The definition in the business object comprises its structure as well as its multiplicity (uniqueness). In our sample, the monster name could be a unique alternative key, while the creator could be a non-unique alternative key if there were a need for business logic based on selection by creator.

Figure 20 - Multiple alternative keys for a monster, the name being a unique one


If, for example, monsters have a rental price and the price shall be adjusted for all monsters of a creator, we’d need an alternative key on the creator: A query would not find a monster which has been created within the same transaction.

Figure 21 - A non-unique alternative key configuration

Figure 22 - Using an alternative key conversion in the Test-UI
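The same conversion in code looks roughly as follows. Treat this as a hedged sketch: the alternative-key constant follows the generated naming convention (sc_alternative_key), but the exact parameter names of CONVERT_ALTERNATIVE_KEY may vary slightly between releases, so check the interface in your system:

" The creators whose monsters we want to identify (table type is made up for this sample)
DATA monster_creators TYPE STANDARD TABLE OF zmonster_creator.
APPEND 'BARON_FRANKENSTEIN' TO monster_creators.

monster_manager->convert_alternative_key(
  EXPORTING
    iv_node_key   = zif_monster_c=>sc_node-root
    iv_altkey_key = zif_monster_c=>sc_alternative_key-root-creator   " assumed alternative key name
    it_key        = monster_creators
  IMPORTING
    et_key        = DATA(monster_keys) ).  " in contrast to QUERY, this also finds unsaved instances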


The alternative key’s uniqueness can also be used for validating that no second instance with the same unique alternative key gets created. In contrast to what Paul wrote, BOPF offers a reuse feature which ensures the adequate uniqueness: Once you model an alternative key, you are requested to add an action validation (which we’ll cover in a later chapter) with the implementation class /BOBF/CL_LIB_V_ALTERNATIVE_KEY.

Figure 23 - The BO check will inform about un-validated alternative keys


The SAP-provided implementation also ensures uniqueness across multiple sessions on non-persisted data!

Remark: Alternative keys are also necessary in order to be able to model associations between nodes of different business objects (Cross-BO-associations). In this case, the multiplicity of the association has to match the uniqueness of the alternative key.

Reading data with RETRIEVE

Alright, now we have a set of technical keys of instances which we’d like to process. There are two core services for reading BO nodes: RETRIEVE gets the data of instances of which we know the KEYs. RETRIEVE_BY_ASSOCIATION – surprise, surprise – retrieves instances (KEYs) of associated nodes. Optionally (not by default!), RETRIEVE_BY_ASSOCIATION also returns the data of the target instances. Both services allow the consumer to specify which information of the retrieved node he is interested in via it_requested_attributes. If one of the requested attributes is a calculated one (from the transient part of the node structure), BOPF will execute the corresponding calculation. If no requested attribute is specified, all node attributes are considered requested.

As your models grow (and they will, be sure of that) and transient information is added and calculated, the use of requested attributes becomes more and more important. So even if you’re currently requesting all attributes of the modeled nodes, I recommend specifying the attributes which are relevant. This not only saves you nasty performance analysis in the future, but also helps to make your code more readable. Let me give you a short sample:

monster_manager->retrieve(
  EXPORTING
    iv_node                 = zif_monster_c=>sc_node-root
    it_key                  = relevant_monster_keys
    it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-root-number_of_heads ) )
  IMPORTING
    et_data                 = relevant_monsters
).

The above code implies that the number of heads is relevant for the business logic which is about to follow. Also note that a table of monster keys is being fed into the method. In BOPF, all commands issued by the consumer are mass-enabled. This is particularly important for the retrieval methods, as each read might result in a DB access (if the buffer is not hit for all instances). It can cripple your system’s performance if you feed in single keys, read with index 1, and do this in a loop. I highly recommend mass-reading all the relevant data (including the necessary associated data) right at the beginning of the method. If you additionally fill the requested attributes properly, 80% of your performance tuning has already been taken care of.

The command for following an association looks very similar:

monster_manager->retrieve_by_association(
  EXPORTING
    iv_node                 = zif_monster_c=>sc_node-root
    it_key                  = relevant_monster_keys
    iv_association          = zif_monster_c=>sc_association-root-head  " the composition leading to the HEAD node
*   iv_fill_data            = abap_true
*   it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-number_of_eyes ) )
  IMPORTING
    et_key_link             = link_root_head
*   et_data                 = relevant_monsters_heads
).

A careful observer will notice that the data of the target node is not always returned when following an association. The runtime representation of an association is a link between the source and the target node. The data is actually a property of the target node (and not of the association). Also, the target node’s data is not always necessary in order to implement the requested behavior. As retrieving the target node’s data is comparatively expensive (particularly if transient information is requested), the default of a retrieve-by-association is not to request the data (iv_fill_data). If you have managed to implement a real-world use case without ever running into a short dump because you forgot to set iv_fill_data = abap_true, you are certainly a more careful programmer than I am.

Modifying instances

After we have read the current data of an instance, we might want to manipulate it. /BOBF/IF_TRA_SERVICE_MANAGER offers the core service MODIFY, which is a command to execute all kinds of manipulations (create, update, delete). The modify command gets passed a set of modification instructions which may not only affect multiple instances, but also multiple nodes in one call. This is essential, as there might be business logic which validates whether an instance can be created based on subnode data. E.g. we could validate that each monster needs to have at least one head. Creating a monster without a head would reject the modifications for the failed monster instance.

I will not go into the details of the command (but I recommend reading the method documentation on the modification structure, which will really help you – the BOPF documentation team did a great job there):


Documentation of structure /BOBF/S_FRW_MODIFICATION

In order to create a new root node instance, the following components of the structure must be filled:

NODE: Model node, from which the new node instance shall be created
CHANGE_MODE: Always the constant /BOBF/IF_FRW_C=>sc_modify_create.
NODE_CAT: If no node category is set, the default node category is automatically chosen.
KEY: Unique key of the new node instance. This key can be retrieved by the help of the get_new_key() method.
DATA: If the new node instance shall already be filled with data, this parameter can be used. It must refer to a data structure of the node's type.
CHANGED_FIELDS: If it is initial, the whole structure of the DATA parameter will be moved into the newly created node instance. Otherwise, only the attributes, which are specified in this parameter are moved.
ROOT_KEY: This field is optional and specifies to which root node instance the newly created instances shall belong (Beware, wrong contents can cause undefined behavior of the business object!).

In order to create a new non-root node instance, additionally the following components of the structure must be filled:

ASSOCIATION: Model key of the association, which leads to the node, of which the new node instance shall be created.
SOURCE_NODE: Model key of the association's source node.
SOURCE_KEY: Node instance of the model node, which is specified by the SOURCE_NODE parameter and is starting point of the association.

In order to update a node instance, the following component of the structure must be filled:

NODE: Model node, from which an instance shall be updated.
CHANGE_MODE: Always the constant /BOBF/IF_FRW_C=>SC_MODIFY_UPDATE.
KEY: Key of the node instance, which shall be updated.
DATA: Reference to a data structure, which contains the updated node instance data. This parameter must refer to a data structure of the node's type.
CHANGED_FIELDS: If it is initial, the whole structure of the DATA parameter will overwrite the current data of the node instance. Otherwise, only the values of the attributes, which are specified in this parameter, are moved.
ROOT_KEY: This field is optional and specifies to which root node instance the newly created instances shall belong (Beware, wrong contents can cause undefined behavior of the business object!).

In order to delete an already existing node instance, the following component of the structure must be filled:

NODE: Model node, from which an instance shall be deleted.
CHANGE_MODE: Always the constant /BOBF/IF_FRW_C=>SC_MODIFY_DELETE
KEY: Key of the node instance, which shall be deleted.
ROOT_KEY: This field is optional and specifies to which root node instance the newly created instances shall belong (Beware, wrong contents can cause undefined behavior of the business object!).


Let me highlight some aspects which might not be obvious from the documentation. When creating instances of multiple nodes of a composition (in one modification call), you need to make sure that the instances of the subnode are created for the proper parent node instance. In order to do this, you need to know the KEY of the parent node instance. In this case, you can use /bobf/cl_frw_factory=>get_new_key( ) in order to define with which technical identifier the parent node instance shall be created. Otherwise, as a consumer, you don’t need to define the key – the framework will do that for you.
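Here is a hedged sketch of creating a monster together with one head in a single MODIFY call. The combined-structure types zsmo_root and zsmo_head as well as the attribute and association names are assumptions; the components of the modification structure are exactly the ones from the documentation quoted above:

DATA root_data TYPE REF TO zsmo_root.  " hypothetical combined structure of the ROOT node
DATA head_data TYPE REF TO zsmo_head.  " hypothetical combined structure of the HEAD node

CREATE DATA root_data.
root_data->name = 'GRUMPY'.
DATA(new_monster_key) = /bobf/cl_frw_factory=>get_new_key( ).

CREATE DATA head_data.
head_data->number_of_eyes = 3.

DATA(modifications) = VALUE /bobf/t_frw_modification(
    " 1) Create the ROOT instance with a key we defined ourselves
    ( node        = zif_monster_c=>sc_node-root
      change_mode = /bobf/if_frw_c=>sc_modify_create
      key         = new_monster_key
      data        = root_data )
    " 2) Create a HEAD instance below the new ROOT instance
    ( node        = zif_monster_c=>sc_node-head
      change_mode = /bobf/if_frw_c=>sc_modify_create
      source_node = zif_monster_c=>sc_node-root
      source_key  = new_monster_key
      association = zif_monster_c=>sc_association-root-head  " assumed association name
      data        = head_data ) ).

monster_manager->modify(
  EXPORTING
    it_modification = modifications
  IMPORTING
    eo_change       = DATA(change_object)
    eo_message      = DATA(message_container) ).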

When you update an instance, you can use the changed fields in order to tell the framework which parts of the instance have changed. This not only improves performance (as BOPF doesn’t have to compare the before- and target-data), but also allows you to have multiple modification instructions per instance affecting different attributes.
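A minimal update sketch under the same assumptions as above: only NUMBER_OF_HEADS is listed in CHANGED_FIELDS, so all other attributes of the passed data reference are ignored.

DATA monster_update TYPE REF TO zsmo_root.  " hypothetical combined structure of the ROOT node
CREATE DATA monster_update.
monster_update->number_of_heads = 2.

monster_manager->modify(
  EXPORTING
    it_modification = VALUE /bobf/t_frw_modification(
        ( node           = zif_monster_c=>sc_node-root
          change_mode    = /bobf/if_frw_c=>sc_modify_update
          key            = monster_key      " KEY of the instance to be updated (known from QUERY, RETRIEVE, ...)
          data           = monster_update
          changed_fields = VALUE #( ( zif_monster_c=>sc_node_attribute-root-number_of_heads ) ) ) )
  IMPORTING
    eo_change       = DATA(change_object)
    eo_message      = DATA(message_container) ).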

When deleting an instance, BOPF will implicitly delete the subnodes (via the compositions) as well. There is no need for an explicit deletion of the subnode-instances.


Change- and message-object

Each core-service returns a message container and a change object.

It is crucial to understand that in a BOPF application (as it should be in any other well-designed application), messages are exclusively intended to be interpreted by a human. Business logic must never be based upon the existence of a particular message attribute. BOPF calculates a change object after each roundtrip. This not only reliably informs about the differences in the transaction before and after the roundtrip, but also tells you about failed changes. It may also be the case that during one roundtrip, multiple modifications are made, of which some succeed and some fail (because they violated some constraint). Thus, if the has_failed_changes( ) method returns abap_true, you definitely have to analyze which change failed!
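A minimal consumer-side sketch of this pattern (reusing the modification table from the creation sample above):

monster_manager->modify(
  EXPORTING
    it_modification = modifications
  IMPORTING
    eo_change       = DATA(change_result)
    eo_message      = DATA(messages) ).

IF change_result->has_failed_changes( ) = abap_true.
  " At least one modification was rejected. Analyze the change object to find
  " out which instances failed and display the collected messages to the user -
  " but never branch your business logic on a particular message.
ENDIF.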

> Find more alternative versions of chapters in my blogs.
