
How to avoid modeling errors in NetWeaver BPM? Part 3: Data flow in style

The process context is the place where you temporarily store data during the lifetime of a process instance. It comprises the set of all data objects (here: “Data_Object_0” and “Data_Object_1”) and their currently assigned values. It is the process context that facilitates data flow between activities, events, and (decision) gateways.

Because of how data flow actually happens at runtime, there are some recommendations I would like to share with you. Unlike my previous postings, this one is not really about modeling errors but rather about performance issues that may arise from poorly designed process contexts.

Using and Reusing Types

In NetWeaver BPM, the process context is both static (i.e., the actual number of data objects cannot be changed at runtime) and strictly typed. That is, any data object is bound to a specific, non-anonymous XSD type. Along those lines, anything within a process model that handles (i.e., consumes and/or produces) data (e.g., mappings within activities and events; expressions acting as routing conditions of decision gateways) adheres to a particular service interface, i.e., a WSDL portType and operation. There, request, response, and fault messages are, again, strictly typed.

Types and service interfaces are re-usable within a process model and also across different processes. At runtime, the Galaxy server takes advantage of this fact behind the scenes and shares type definitions among all consumers. For this reason, it is a good idea to re-use types wherever possible. In particular, you should take advantage of the pre-shipped types that are available by default in every process model (essentially, XSD/WSDL primitive types plus the signature types of the built-in functions) by using them to type your process context. Runtime memory consumption will benefit from type re-use, in particular if you have many different processes deployed.

Sometimes, you may be tempted to create custom types that are simply restrictions or extensions of built-in types without actually adding any fields. While this is conceptually fine, it comes with a memory consumption penalty and won’t really help you validate your data in any meaningful way. Here is why:

  • Violations of type constraints (such as out-of-bounds values) result in a runtime error which causes the respective step to abort and roll back. In other words, it is still up to you to explicitly validate data (such as incoming messages) by appropriate means (that is, XOR splits to error branches). For an illustration, have a look at the process flow below.

    In the start event (“Start”), the process receives an xsd:string-typed request document which is later mapped onto the process context variable “Data_Object”, typed “MyCustomString” (a plain length restriction to 0…5 characters). If you actually wanted to perform this mapping safely, you would have to introduce another xsd:string-typed data object (here: “Buffer”) in which you temporarily store the request payload (within the start event’s output mapping). The downstream XOR gateway could then perform the validation by assigning the predicate “string-length(Buffer)>5” to the “request invalid” branch.

    You may or may not agree, but in my view, data validation is not what a business process should be about. If you have data validation requirements, you are better off tackling them in your messaging middleware of choice.

  • More importantly, any distinct type which is structurally different from, or named differently than, all other types used elsewhere in any deployed process requires its own runtime representation. Needless to say, each distinct type statically occupies additional chunks of main memory.
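To make the first point concrete, here is what such a restricted type might look like in XSD — a sketch only: the 0…5 length restriction and the names “MyCustomString” and “Buffer” come from the example above, while the namespace is an assumption:

```xml
<!-- Hypothetical schema: a plain length restriction of xsd:string
     that adds no fields of its own. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/demo">

  <!-- "MyCustomString": restricts xsd:string to at most 5 characters.
       A constraint violation at runtime aborts the step and rolls it
       back; it does NOT give you graceful, explicit validation. -->
  <xsd:simpleType name="MyCustomString">
    <xsd:restriction base="xsd:string">
      <xsd:maxLength value="5"/>
    </xsd:restriction>
  </xsd:simpleType>
</xsd:schema>
```

The safer pattern described above instead keeps the payload in a plain xsd:string-typed “Buffer” data object and guards the mapping with an explicit routing condition such as string-length(Buffer)>5 on the “request invalid” branch.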

Recommendation: Re-use types within and across processes whenever possible. Avoid self-defined types that are mere extensions or restrictions of built-in types.

Context Granularity

Did you ever wonder how many data objects a process context should be made of? Let me tell you upfront: it’s a trade-off. But there are some clear indicators of when to model a single complex-typed data object with many fields (XSD elements and attributes) and when to better split it up into multiple data objects (for instance, one per field). Take the custom-defined complex type (“myComplexType”) below (designed in the built-in WTP XSD editor):

“myComplexType” has three distinct fields (actually, XSD elements of an XSD sequence) and could be used to model a process context accommodating three values using a single data object.
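Based on this description, “myComplexType” might be declared roughly as follows — the field names and types are placeholders, since the original screenshot is not reproduced here:

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- Three fields wrapped in a single complex type; one data object
       typed with it holds all three values in the process context. -->
  <xsd:complexType name="myComplexType">
    <xsd:sequence>
      <xsd:element name="field1" type="xsd:string"/>
      <xsd:element name="field2" type="xsd:string"/>
      <xsd:element name="field3" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>
```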

The type reuse problem discussed above may already give you a hint: atomic and built-in (i.e., pre-shipped) types are more likely to be reused (within and across processes). In other words, it is often the right choice to designate additional data objects, each using an atomic type. Then again, be aware that each data object introduces a minor memory consumption overhead at runtime, so make sure not to exceed a reasonable number of data objects per process, say 10 for a simply structured and 30 for a more complex process.

Recommendation: As a rule of thumb, multiple simple-typed data objects are the preferred way to go. But consider restricting the number of data objects to a reasonable upper threshold. If in doubt, also consider the recommendations below.

But there are also other, qualitative criteria for choosing between the single data object/complex type and the multiple data objects/simple type approach. The first one is related to Galaxy’s concurrency control mechanisms. At runtime, concurrent worker threads execute activities residing on parallel control flow branches. (Admittedly, that’s a fairly generic description of what goes on.) Concurrency control mechanisms make sure we have full ACID guarantees for concurrently executed process steps. In terms of data objects, no two activities may update a data object at the same time (concurrent lookups are permissible, though). In other words, locking granularity is at the data object level, restricting concurrent accesses to a data object. The Galaxy runtime will serialize concurrently running activities that access the same data object. As a result, performance penalties may arise, increasing process latencies (aka “turnaround” times). Have a look at the example flow below, which has a single data object whose type is “myComplexType”:

On both parallel branches, two automated activities perform synchronous Web Service calls, both passing on the content of the string-typed “field 1”. When the call returns, the lower (upper) branch activity updates “field 2” (“field 3”). At data object granularity, these process context accesses are in conflict. That is, even though there are no actual conflicts (i.e., incompatible concurrent data accesses) at the field level, both activities are serialized (i.e., executed one after the other). In other words, the whole advantage of placing the two activities onto parallel branches is lost, which is particularly bad if the activities themselves come with long latencies.

A simple, but effective way to avoid this issue is to split up the process context into three data objects, representing fields 1, 2, and 3:

Please do note that there are no longer any parallel updates (aka write accesses) of a single data object. Concurrent lookups (aka read accesses) of “Field_1” still occur but do not require serializing the respective activities.
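Schematically, the conflict analysis before and after the split looks like this (a sketch; the arrow notation is illustrative, not actual NetWeaver BPM mapping syntax):

```
Single data object "Data_Object" (typed "myComplexType"):
  Branch A output mapping:  Data_Object/field2 <- WS call A response
  Branch B output mapping:  Data_Object/field3 <- WS call B response
  => both branches write the same data object
  => conflict at data-object lock granularity => activities serialized

Split context (Field_1, Field_2, Field_3; each simple-typed):
  Branch A output mapping:  Field_2 <- WS call A response
  Branch B output mapping:  Field_3 <- WS call B response
  => writes touch disjoint data objects (only concurrent reads of Field_1)
  => no lock conflict => activities genuinely run in parallel
```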

Recommendation: Watch out for write accesses (output mappings) to identical data objects from parallel branches. If update operations “touch” disjoint fields, consider splitting up the respective data object.

Finally, I would like to discuss an issue that arises from the fact that many processes (1) include asynchronous steps (such as human activities) and (2) operate on process context data that isn’t equally accessed throughout the process instance’s lifetime. Have a look at the flow below:

All three activities are sequentially lined up, so there is no need to optimize process latencies in the light of parallel branches. Nevertheless, there is some optimization potential in this process’s context.

From a chronological perspective, different fields of “Data_Object_1” are accessed at different points in time. In addition, this process involves a human activity which is, by nature, asynchronous. That is, a task is dispatched to an end user’s UWL (Universal Worklist), waiting there to be taken up and completed. While the task is sitting in that user’s inbox, the process instance itself is evicted from main memory, freeing up precious resources. Consequently, the instance needs to be recovered (loaded from the database) once the user completes the task, so that the process may resume executing the downstream process flow.

For performance reasons, the process context is “lazily” recovered, only fetching a data object from disk once it is really needed. In the given example, the human activity updates field 3 (as part of its output mapping), and the downstream activity (“Read field 3”) reads the content of field 3 (within its input mapping). In other words, after the process recovers, neither field 1 nor field 2 is ever accessed again. And here is where the optimization potential lies: by splitting the process context into chunks of fields that are accessed at around the same time, certain fragments of the process context will not be needed once a process recovers from the database. In essence, that saves you database lookups for the affected data objects and also reduces the memory consumption footprint of your process instance.

Recommendation: Localize human activities within your process and analyze process context accesses (from mappings and routing conditions) upstream and downstream of these activities. If possible, design your process context in such a way that as many data objects as possible are accessed exclusively upstream or exclusively downstream of the human activity (its output mapping already counts as “downstream”).



      Former Member
      Hi Wesley,

from what I understand, you have implemented a custom mapping function which returns some list-valued POJO ("Employee"). Please note that custom mapping functions are required to use SDO DataObjects for their parameter and return types. In detail, mapping functions need to implement a specific interface which comes with the aforementioned SDO-style signature.

There is an extensive description on SDN which illustrates how this is done.

      Thanks and kind regards,

      Former Member
Good post! But I have one question about sharing a common context between two parallel human activities.
The context has a "Text" variable that is shared between the two activities. The first activity changes the variable, but the second one doesn't see the change. Is it possible to re-read the changed data or not?
      Former Member
At first sight (and if I get your example correctly), I'd say no: as soon as an activity is started, it exclusively operates on its local data, which was initially filled when the activity was started. Subsequent changes to global data objects won't be reflected in a running activity instance.
      Former Member
You got it right :). Sadly, it is not possible to reflect the changes :(.