
Understanding flow control methodologies in NetWeaver BPM 7.20

This blog explains how to optimize your process models so that runtime processing delays caused by locking conflicts and unnecessarily high resource consumption are avoided. Specifically, we discuss use cases for flow control that make use of Intermediate Message Events. The content of this blog is based on information shared by Dr. Soren during one of the optimization exercises. Thanks to Dr. Soren for sharing the relevant examples and details.

Token Swallowing

While modeling your business process, you may sometimes need to introduce behavior off the happy path, most prominently exception handling. For this purpose, NetWeaver BPM supports two flavors of BPMN’s boundary events, namely Escalation Events and Error Events. The two event triggers differ in their implications for the activity that raised them. With Error Events attached to the boundary of an activity, the activity that raised them is implicitly aborted. Conversely, Escalation Events can be raised (and caught) while the activity (including subflows and human activities) that raised them continues its internal processing. In fact, a single activity instance may raise multiple Escalation Events during the course of its execution. Correspondingly, those Escalation Events may be caught multiple times by the associated boundary event, which each time passes a token to its outbound control flow edge.
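To make the difference more tangible, here is a minimal sketch of an activity with both boundary event flavors attached. NetWeaver BPM 7.20 maintains its models in its own project format in the NetWeaver Developer Studio, so this fragment (like all sketches in this blog) uses standard BPMN 2.0 XML purely for illustration; all IDs, names, and codes are invented.

<!-- BPMN 2.0 fragment; namespace declarations and diagram interchange omitted -->
<process id="boundaryEventFlavors" isExecutable="true">
  <userTask id="reviewOrder" name="Review order"/>

  <!-- Error Boundary Event: interrupting, i.e. the activity is aborted when it fires -->
  <boundaryEvent id="onError" attachedToRef="reviewOrder" cancelActivity="true">
    <errorEventDefinition errorRef="reviewFailed"/>
  </boundaryEvent>

  <!-- Escalation Boundary Event: non-interrupting, i.e. the activity keeps running
       and may raise (and have caught) several escalations during its lifetime -->
  <boundaryEvent id="onEscalation" attachedToRef="reviewOrder" cancelActivity="false">
    <escalationEventDefinition escalationRef="needsAttention"/>
  </boundaryEvent>
</process>

<!-- referenced definitions (siblings of the process inside the BPMN definitions element) -->
<error id="reviewFailed" errorCode="REVIEW_FAILED"/>
<escalation id="needsAttention" escalationCode="NEEDS_ATTENTION"/>

The cancelActivity flag captures the difference described above: the interrupting Error Event aborts the Human Activity, while the non-interrupting Escalation Event leaves it running.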

A Counter Example

When it comes to the problem of how to later synchronize these tokens, multiple variants exist. In most cases, though, we do not even want to synchronize these tokens back to the “happy path” process execution. In fact, we often just want to get rid of these tokens once they have accomplished their task of triggering the activities on their path. In this article, we will specifically discuss variants of how this can be done in a cost-effective manner that does not introduce extra runtime costs.

Let us first start with an example that is feasible and (to some extent) also semantically correct but comes with some costly runtime penalties. For an illustration, please have a look at the process model below. It contains a Human Activity with an Escalation Boundary Event attached to it. On the control flow branch originating from that boundary event, some exception handling activity (here: a Notification Activity) is placed. Further downstream, an Intermediate Message Event merely serves the purpose of putting the token “on hold” once it has left the Notification Activity. In this specific scenario, the Intermediate Message Event is not even supposed to ever get triggered, which implies that no token will ever be passed to its outbound edge and onwards to the downstream Uncontrolled Merge (aka XOR merge gateway), which merges the exception branch into the main (“happy path”) branch, thus making sure we have no “loose ends” in any control flow branch.

 

Counter Example

 

While this approach is generally viable and semantically correct as such, it comes with some implications to be aware of. For one, activating the Intermediate Message Event (by successfully sending a message to the corresponding Web Service endpoint URL and having it matched and delivered to the Intermediate Message Event) must be avoided. This is due to the fact that merging tokens from two distinct branches equals a “Lack of Synchronization” scenario where a single control flow branch carries multiple tokens. Each of these tokens will independently trigger any entity on that branch, which is most likely undesirable. Inhibiting matching can easily be accomplished by setting the Intermediate Message Event’s correlation condition to “false”.

However, besides offering a Web Service endpoint to outside components for no specific reason, merely placing Intermediate Message Events into a process model has some performance implications to be aware of. Without going into details here, Intermediate Message Events rely on a database-backed matching algorithm which will generate some I/O load for the affected process instance even if no message is ever received. What is more, process “binaries” (i.e., compiled process models) are artificially blown up by the logic that implements Intermediate Message Events. Finally, tokens that are passed to the exception handling branch queue up in front of the Intermediate Message Event, waiting to receive a message that will never be delivered. As a result, a process implementing this pattern must be ended with a Terminating End Event as shown, the reason being that a regular (non-terminating) end event (e.g., a Message End Event) will only terminate a process if there are no pending tokens lurking at hidden places of your process instance.
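Expressed in the same illustrative BPMN 2.0 notation (invented IDs; NetWeaver-specific settings such as the correlation condition are only hinted at in comments), the counter example wires up roughly as follows:

<!-- BPMN 2.0 fragment; namespace declarations omitted -->
<process id="counterExample" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="approve"/>
  <userTask id="approve" name="Approve request"/>

  <!-- non-interrupting escalation raised from inside the Human Activity -->
  <boundaryEvent id="escalated" attachedToRef="approve" cancelActivity="false">
    <escalationEventDefinition/>
  </boundaryEvent>

  <!-- exception handling branch (the Notification Activity is approximated as a send task) -->
  <sequenceFlow id="f2" sourceRef="escalated" targetRef="notify"/>
  <sendTask id="notify" name="Notification Activity"/>

  <!-- the token is parked here forever; in NetWeaver BPM the correlation condition
       would be set to "false" so that no incoming message is ever matched -->
  <sequenceFlow id="f3" sourceRef="notify" targetRef="waitForever"/>
  <intermediateCatchEvent id="waitForever" name="Never triggered">
    <messageEventDefinition/>
  </intermediateCatchEvent>

  <!-- no loose ends: both branches are merged by an Uncontrolled Merge (XOR merge) -->
  <sequenceFlow id="f4" sourceRef="waitForever" targetRef="merge"/>
  <sequenceFlow id="f5" sourceRef="approve" targetRef="merge"/>
  <exclusiveGateway id="merge" name="Uncontrolled Merge"/>

  <!-- must be a Terminating End Event so that the token stuck at "waitForever" is cleaned up -->
  <sequenceFlow id="f6" sourceRef="merge" targetRef="end"/>
  <endEvent id="end" name="Terminate">
    <terminateEventDefinition/>
  </endEvent>
</process>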

 

In light of these disadvantages, it is well worth the effort to watch out for alternatives that accomplish the same purpose, which is to “swallow” tokens that are no longer needed.

Variant 1: Using Blocking Synchronization with Terminating End Events

Inspired by the approach introduced before, there is a low-intrusive pattern to rework a process model so that it avoids Intermediate Message Events but relies on the very same principle of queuing up “spare” tokens at dedicated branches from which there is no escape. The idea is to let those tokens deliberately run into a deadlock situation, which may easily be enforced at a Parallel Join gateway. Parallel Joins (aka “AND join gateways”) require a token to be present on each inbound branch to successfully synchronize those tokens (one from each branch) into a single token that is passed to the outbound edge. In order to inhibit this gateway from ever successfully synchronizing, we need to make sure that it does not receive any tokens on at least one of its inbound edges. Doing so is easily accomplished by connecting a “dead branch” from some upstream Decision Gateway (aka “XOR split gateway”) where we set a branch condition to “false”. In effect, no token will ever be passed to the AND join that is connected downstream.

 

Illustration

 

In order to keep our process model syntactically correct, we still need to connect that AND join gateway to some other entity. In the example, we just connect it to the end event, which is perfectly viable as this branch will never be activated and thus never triggers the end event.
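A sketch of Variant 1 under the same assumptions (illustrative BPMN 2.0 XML, invented IDs; the branch condition is shown as a plain expression where NetWeaver BPM would use its own condition editor):

<!-- BPMN 2.0 fragment; namespace declarations omitted -->
<process id="variant1" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="split"/>

  <!-- XOR split with a branch condition that is always false: the "dead branch" -->
  <exclusiveGateway id="split" name="Decision Gateway" default="toWork"/>
  <sequenceFlow id="toWork" sourceRef="split" targetRef="doWork"/>
  <sequenceFlow id="deadBranch" sourceRef="split" targetRef="blackHole">
    <conditionExpression xsi:type="tFormalExpression">false()</conditionExpression>
  </sequenceFlow>

  <userTask id="doWork" name="Happy-path work"/>
  <boundaryEvent id="escalated" attachedToRef="doWork" cancelActivity="false">
    <escalationEventDefinition/>
  </boundaryEvent>

  <!-- spare tokens from the exception branch pile up here and deadlock,
       because the dead branch never delivers its counterpart token -->
  <sequenceFlow id="f2" sourceRef="escalated" targetRef="blackHole"/>
  <parallelGateway id="blackHole" name="AND join (never fires)"/>

  <!-- syntactic completeness only: this edge is never activated -->
  <sequenceFlow id="f3" sourceRef="blackHole" targetRef="end"/>
  <sequenceFlow id="f4" sourceRef="doWork" targetRef="end"/>

  <!-- must be a Terminating End Event to clean up the deadlocked tokens -->
  <endEvent id="end" name="Terminate">
    <terminateEventDefinition/>
  </endEvent>
</process>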

Please be aware that we still need to make use of Terminating End Events to clean up any tokens that may have queued up on one of the AND join’s inbound edges. Doing so will not constitute a problem for most of your processes, but there may still be situations where you explicitly want to rely on the fact that a regular end event (e.g., a Message End Event) does not terminate a process instance as long as there are other tokens somewhere within that process (e.g., tokens intentionally processing activities on the process’ “happy path”). For that reason, it is good to read on and check out two more approaches to handling the problem at hand.

Variant 2: Using Regular End Events in Top-Level Processes

If you are within a top-level process (i.e., one that is not invoked as a subflow from another process), the whole setup can be further simplified. In effect, you will not only make your process diagrams more readable but also further save on the runtime cost of your process binary. The idea is to make use of regular end events (e.g., Message End Events) and simply connect your exception handling branch (or whatever branch carries a token to get rid of) to that end event. As long as your process has other tokens existing somewhere in the control flow, it will not be terminated. In fact, a process instance is only completed once no token exists anywhere in the process.

Illustration
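Sketched the same way (illustrative BPMN 2.0 XML, invented IDs), Variant 2 boils down to the following wiring in a top-level process:

<!-- top-level process only: a plain (non-terminating) end event swallows the spare token -->
<process id="variant2TopLevel" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="doWork"/>

  <userTask id="doWork" name="Happy-path work"/>
  <boundaryEvent id="escalated" attachedToRef="doWork" cancelActivity="false">
    <escalationEventDefinition/>
  </boundaryEvent>

  <!-- the exception branch simply runs into a regular end event; the token is
       consumed there, while the process instance lives on as long as other
       tokens (e.g., on the happy path) still exist -->
  <sequenceFlow id="f2" sourceRef="escalated" targetRef="handle"/>
  <sendTask id="handle" name="Notification Activity"/>
  <sequenceFlow id="f3" sourceRef="handle" targetRef="endException"/>
  <endEvent id="endException" name="Exception handled"/>

  <sequenceFlow id="f4" sourceRef="doWork" targetRef="endHappy"/>
  <endEvent id="endHappy" name="Done"/>
</process>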

 

Again, this approach is suitable for top-level processes only. Processes that are invoked as subflows have completion semantics where continuing the outer (invoking) process’ run is de-coupled from completing the inner process (i.e., the subflow instance). In fact, the subflow will only be completed if either there are no further tokens or it is forcefully cancelled (for example, by ending the outer process with a Terminating End Event). Conversely, the first token to trigger the subflow’s end event will already continue the outer process. If you trigger the end event from an exception handling branch, the outer process is continued even though the subflow’s happy path has not been completely processed. Altogether, if you are in a subflow, you will need another approach.

Variant 3: Using Escalation End Events for Exception Propagation from Subflows

For the case of “swallowing” spare tokens in subflows there is (apart from using Variant 1, if possible) a slightly more natural approach. Here, the idea is to “re-throw” (or initially throw) a (non-terminating) Escalation Event from a dedicated Escalation End Event. Connecting your exception handling branch (or whatever other branch) to such an Escalation End Event will cause the triggering token to be “swallowed” by the end event and an Escalation Event to be raised that can be caught by a Boundary Event on the subflow activity in the outer (invoking) process model. Alternatively, the Escalation Event may be caught on any other subflow activity in the call stack or not at all, thus “bubbling up” to the top-level process where it is merely written to the process log.

Illustration
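A sketch of Variant 3 under the same assumptions (illustrative BPMN 2.0 XML, invented IDs; start and end events of the outer process are omitted for brevity):

<!-- inner process (the subflow): the exception branch ends in an Escalation End Event -->
<process id="innerSubflow" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="doWork"/>
  <userTask id="doWork" name="Subflow work"/>
  <boundaryEvent id="raised" attachedToRef="doWork" cancelActivity="false">
    <escalationEventDefinition escalationRef="bizIssue"/>
  </boundaryEvent>

  <!-- the spare token is swallowed here and the escalation is (re-)thrown upwards -->
  <sequenceFlow id="f2" sourceRef="raised" targetRef="escalate"/>
  <endEvent id="escalate" name="Escalation End Event">
    <escalationEventDefinition escalationRef="bizIssue"/>
  </endEvent>

  <sequenceFlow id="f3" sourceRef="doWork" targetRef="done"/>
  <endEvent id="done"/>
</process>

<!-- outer (invoking) process: catches the escalation on the subflow activity's boundary -->
<process id="outerProcess" isExecutable="true">
  <callActivity id="callSub" name="Subflow activity" calledElement="innerSubflow"/>
  <boundaryEvent id="caught" attachedToRef="callSub" cancelActivity="false">
    <escalationEventDefinition escalationRef="bizIssue"/>
  </boundaryEvent>
  <sequenceFlow id="f4" sourceRef="caught" targetRef="react"/>
  <userTask id="react" name="React to escalation"/>
</process>

<escalation id="bizIssue" escalationCode="BIZ_ISSUE"/>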

 

Using exception propagation (actually, Escalation Event propagation) comes with its own merits. For one, it is a natural way of handling exceptional situations, and second, it allows for passing some data (and the event as such) to an invoking process without ending the subflow instance. The latter can be used (and abused) for other use cases not discussed here.

Process Control

Another popular scenario for using Intermediate Message Events is to expose an interface to control a process instance from outside. Actually, there are many reasons to do so, but only some of them are semantically viable, including passing extra (business) data to a running process instance or synchronizing the process to the state of some external application or device.

A Counter Example

Most prominently, the process’ lifecycle may be made accessible through a Web Service API mapping to Intermediate Message Events in the process model, from where the process may be terminated, escalated, or cancelled. Have a look at the diagram below for a typical example of how an Intermediate Message Event waits on a parallel branch to cause a premature process termination when a matching message is received.

Illustration
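For reference, the pattern just described might be wired up roughly as follows (illustrative BPMN 2.0 XML, invented IDs; the Web Service endpoint and correlation settings live in the NetWeaver tooling and are only hinted at in comments):

<process id="remoteControlViaMessage" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="fork"/>

  <!-- fork a control branch in parallel to the business logic -->
  <parallelGateway id="fork"/>
  <sequenceFlow id="f2" sourceRef="fork" targetRef="businessWork"/>
  <sequenceFlow id="f3" sourceRef="fork" targetRef="waitForCancel"/>

  <userTask id="businessWork" name="Business work"/>
  <sequenceFlow id="f4" sourceRef="businessWork" targetRef="endHappy"/>
  <endEvent id="endHappy" name="Terminate">
    <terminateEventDefinition/>
  </endEvent>

  <!-- the Intermediate Message Event exposes a Web Service endpoint; when a
       matching "cancel" message arrives, the branch runs into a Terminating
       End Event and pulls down the whole instance -->
  <intermediateCatchEvent id="waitForCancel" name="Cancel requested">
    <messageEventDefinition/>
  </intermediateCatchEvent>
  <sequenceFlow id="f5" sourceRef="waitForCancel" targetRef="endCancelled"/>
  <endEvent id="endCancelled" name="Terminate">
    <terminateEventDefinition/>
  </endEvent>
</process>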

While doing so is technically viable, it is conceptually misleading and, most of all, costly in terms of runtime performance. Whenever feasible, you should resort to the monitoring and administration tools in NetWeaver Administrator to act upon a process instance. If that is not an option for the scenario you are implementing, another, less costly alternative exists to accomplish the very same task.

Using Idling User Tasks for Process Control

In comparison to Intermediate Message Events, Human Activities wrapping User Tasks are the less costly concept, especially when re-using one and the same task definition from within many processes (as opposed to re-defining new user tasks for each process model). The process model from above can easily be altered to instead trigger a user task that shows up in the Universal Worklist (UWL) inbox of those people who are configured as “Potential Owners” of that task.

Illustration
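The reworked model, sketched under the same assumptions (illustrative BPMN 2.0 XML, invented IDs; the task definition and its Potential Owners are configured in the NetWeaver tooling):

<process id="remoteControlViaTask" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="fork"/>

  <parallelGateway id="fork"/>
  <sequenceFlow id="f2" sourceRef="fork" targetRef="businessWork"/>
  <sequenceFlow id="f3" sourceRef="fork" targetRef="cancelTask"/>

  <userTask id="businessWork" name="Business work"/>

  <!-- idling User Task: sits in the UWL inbox of its Potential Owners; completing
       it drives the branch into a Terminating End Event and cancels the instance -->
  <userTask id="cancelTask" name="Cancel this process?"/>
  <sequenceFlow id="f4" sourceRef="cancelTask" targetRef="endCancelled"/>
  <endEvent id="endCancelled" name="Terminate">
    <terminateEventDefinition/>
  </endEvent>

  <!-- the happy path must also end in a Terminating End Event so that the
       idling task is cleaned up (and removed from the UWL) on regular completion -->
  <sequenceFlow id="f5" sourceRef="businessWork" targetRef="endHappy"/>
  <endEvent id="endHappy" name="Terminate">
    <terminateEventDefinition/>
  </endEvent>
</process>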

Using this pattern requires making use of Terminating End Events to make sure these task instances get cleaned up (and removed from the UWL inboxes) when the process terminates in a regular way (by completely processing the “happy path”). Using tasks instead of Intermediate Message Events not only comes with the benefit of better runtime performance but also relieves you from programming a user interface in which you would have to manually write the code to perform a Web Service call that triggers an Intermediate Message Event inside a process.
