The latest SP of SAP Process Orchestration 7.31 / 7.4 has been available since the end of last year. So it’s about time to show you which features and enhancements we have shipped. I’m sure you will like them.
New CTC template for SLD self registration
There might be situations where you need to re-run the SLD self registration of your SAP Process Orchestration system, e.g., because your current registration has become inconsistent or the configuration requires changes. With the new CTC template in the NetWeaver Administrator, you can re-register all components of an Advanced Adapter Engine Extended (AEX) or SAP Process Orchestration (PO) system in the SLD. You can also use this wizard to connect the AEX/PO to another (central) SLD as part of the post-installation process. See also SAP note 2034226.
Receiver Rules in Integration Flows
Having shipped support for Receiver Rules in Integrated Configuration Objects with 7.31 SP13 / 7.4 SP08, we now support Receiver Rules within Integration Flows as well, with one SP delay. Receiver Rules are reusable XPath conditions that determine the recipients of incoming messages. In the Process Integration Designer perspective of the SAP NetWeaver Developer Studio, you can create, read, change, and delete Receiver Rules. You can also access all rules that have previously been created within the Integration Directory. In order to use the rules in Integration Flows, a new routing technique has been introduced: on the Routing Behaviour tab, choose Use Receiver Rules, and add one or more rules.
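Conceptually, each Receiver Rule pairs an XPath condition with a receiver, and a message is routed to every receiver whose condition matches. The following is a minimal Python sketch of that idea; the rule set and receiver names are invented for illustration and not part of the actual product configuration:

```python
import xml.etree.ElementTree as ET

# Hypothetical receiver rules: each pairs an XPath condition with a
# receiver system. A rule matches when the XPath yields at least one node.
RECEIVER_RULES = [
    (".//Country[.='DE']", "ERP_DE"),
    (".//Country[.='US']", "ERP_US"),
    (".//OrderValue", "ARCHIVE"),  # archive every order carrying a value
]

def determine_receivers(message_xml: str) -> list[str]:
    """Return all receivers whose XPath condition matches the message."""
    root = ET.fromstring(message_xml)
    return [receiver for xpath, receiver in RECEIVER_RULES
            if root.findall(xpath)]

order = "<Order><Country>DE</Country><OrderValue>100</OrderValue></Order>"
print(determine_receivers(order))  # ['ERP_DE', 'ARCHIVE']
```

Because the rules are plain data rather than conditions hard-wired into a single Integration Flow, the same rule can be reused across flows, which is exactly what makes Receiver Rules attractive.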
Supporting multiple mail attachments for sender mail adapter
So far, when reading mails containing multiple attachments from a mail server, the sender mail adapter only created one single PI message, with all mail attachments attached to that PI message. New parameters of the standard mail adapter module have been introduced that allow you to configure the sender mail adapter so that each attachment is processed as a separate PI message. Depending on the chosen settings, you can either create PI messages for all attachments and the actual payload of the mail, or for the attachments only, ignoring the payload. See also SAP note 2040884.
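To make the two settings concrete, here is a rough Python sketch using the standard library's email module; the `split_mail` function and its `include_body` flag are illustrative stand-ins for the adapter module parameters, not the actual module API:

```python
from email.message import EmailMessage

def split_mail(msg: EmailMessage, include_body: bool = True) -> list[bytes]:
    """Turn one mail into separate message payloads: optionally the mail
    body plus one payload per attachment (mirrors the two settings of
    creating messages for payload plus attachments, or attachments only)."""
    payloads = []
    if include_body:
        body = msg.get_body(preferencelist=("plain",))
        if body is not None:
            payloads.append(body.get_content().encode())
    for part in msg.iter_attachments():
        payloads.append(part.get_payload(decode=True))
    return payloads

# Build a sample mail with a text body and two XML attachments
mail = EmailMessage()
mail.set_content("order confirmation")
mail.add_attachment(b"<Order1/>", maintype="application", subtype="xml",
                    filename="order1.xml")
mail.add_attachment(b"<Order2/>", maintype="application", subtype="xml",
                    filename="order2.xml")

print(len(split_mail(mail, include_body=True)))   # 3 messages
print(len(split_mail(mail, include_body=False)))  # 2 messages
```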
Extended EOIO handling
For messages that need to be processed in a specific order, e.g., a purchase order that needs to be processed before a change order, the Quality of Service (QoS) Exactly Once in Order (EOIO) is guaranteed by putting all messages on hold as long as the respective predecessor hasn’t reached a final status. EOIO messages that ran into an error need to be handled manually to resolve the error and to resume the queue processing. There might be situations where you would rather accept a violation of the EOIO delivery in order to avoid blocking queues, e.g., for messages that repeatedly fail during message delivery. For this case, we have introduced a new feature that automatically removes erroneous messages from the EOIO queues so that the sequences can be continued. The exception handling can be configured individually per serialization context. You have two options: either move the failed message to an error queue, or remove it from the queue entirely, hence changing the QoS from EOIO to EO.
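The behaviour can be pictured with a small Python sketch of a queue processor; the message names and the `on_error` switch are made up for illustration and simplify away retries and persistence:

```python
from collections import deque

def process_eoio_queue(queue: deque, deliver, on_error: str = "error_queue"):
    """Drain an EOIO queue in order. A failed message would normally block
    the queue; with automatic exception handling it is either moved to an
    error queue or dropped (demoting that message's QoS from EOIO to EO)
    so that its successors can still be delivered."""
    delivered, error_queue = [], []
    while queue:
        msg = queue.popleft()
        try:
            deliver(msg)
            delivered.append(msg)
        except Exception:
            if on_error == "error_queue":
                error_queue.append(msg)  # keep the message for manual handling
            # on_error == "remove": drop the message entirely
    return delivered, error_queue

def deliver(msg):
    if msg == "change_order_1":
        raise RuntimeError("backend rejected message")

q = deque(["purchase_order_1", "change_order_1", "change_order_2"])
done, errors = process_eoio_queue(q, deliver)
print(done)    # ['purchase_order_1', 'change_order_2']
print(errors)  # ['change_order_1']
```

Note how the successor message is still delivered even though its predecessor failed: that is the accepted violation of strict ordering which keeps the queue from blocking.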
New adapter for REST based services
A new adapter has been delivered for the provisioning and consumption of REST (Representational State Transfer) based services. It runs on the Adapter Engine and is hence supported on both PI dual stack and Process Orchestration. We have recently published a series of blogs explaining the concepts of the adapter as well as selected features along sample scenarios, so I will keep it short here and refer you to the blog series overview page. If you would like to learn what is planned beyond the first shipment, see also my announcement from last year.
End-to-end monitoring of synchronous scenarios
The Message Flow Monitor in SAP Solution Manager allows you to track messages end-to-end. So far, only asynchronous scenarios were supported. We have enhanced the data collector in SAP Process Orchestration to support the monitoring of synchronous scenarios as well. The prerequisite is that you have switched on logging for the respective synchronous messages. Besides this, we have introduced a new status Application Error for messages that were processed successfully on the Adapter Engine but ran into an application error in the backend. You can search for messages that resulted in an application error, and the new status is displayed in the local message monitoring as well as in the Message Flow Monitor. Furthermore, you can trigger alerts in case of application errors.
Enhancements of the copy and resend of successful messages feature
In the previous What’s new blog for 7.31 SP13 / 7.4 SP08, we introduced a new feature that allows you to copy and re-send already successfully processed messages. This is required for recovering lost messages within your receiving backend system in case a recovery is not otherwise possible. With the previous shipment, only copy and immediate send was supported. With this enhancement, a copy-only option is added. It allows you to create a copy of the successfully processed message so that you have the chance to edit the copied message before sending it. Note that this feature needs to be handled cautiously since it potentially violates the Exactly Once delivery. So, I would like to stress that performing this action requires a specific user role.
Further BPM OData services
The BPM OData services were initially introduced with 7.31 SP09 / 7.4 SP04, and have been continuously enhanced since then. The OData services provide you simplified access to BPM processes and process instances to build your own custom interfaces. The current enhancements target the administrator role, helping you build custom UIs for process administration and test automation by adding support for querying a collection of process instances and for cancelling, suspending, and resuming process instances. You can explore the supported OData services on your Process Orchestration system by calling http://<host>:<port>/bpmodata. For more details about the BPM OData services, refer to the Custom UIs with the BPM OData service blog on SCN.
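A custom administration UI would essentially issue OData requests against these services. The following Python sketch shows the shape of such a client; the `processinstances` entity set, the `Cancel` action, and the status value are invented placeholders (check the service metadata under /bpmodata for the real names), and the transport is injected so the sketch runs without a live system:

```python
from urllib.parse import urlencode

class BpmODataClient:
    """Minimal sketch of a client for the BPM OData administration
    services. Service and entity names below are illustrative, not the
    documented ones; `transport` is a callable(method, url) -> response,
    injected so the sketch stays offline-testable."""

    def __init__(self, base_url: str, transport):
        self.base_url = base_url.rstrip("/")
        self.transport = transport

    def query_instances(self, status: str = "InError"):
        # OData query options select a collection of process instances
        url = f"{self.base_url}/processinstances?" + urlencode(
            {"$filter": f"Status eq '{status}'", "$format": "json"})
        return self.transport("GET", url)

    def cancel(self, instance_id: str):
        # State changes are modelled as POST requests in this sketch
        url = f"{self.base_url}/processinstances('{instance_id}')/Cancel"
        return self.transport("POST", url)

fake = lambda method, url: {"method": method, "url": url}
client = BpmODataClient("http://host:port/bpmodata", fake)
print(client.query_instances()["url"])
```

In a real UI the fake transport would be replaced by an authenticated HTTP call, and suspend/resume would follow the same pattern as cancel.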
Improvements of the Claim Check pattern
The Claim Check pattern allows you to handle large message sizes within your BPM processes. You do not actually load the complete large payload into the BPM process context; instead, you store the large payload outside of BPM and keep a reference to the data in the context. This ensures that the size of the process context is kept to a minimum. The Claim Check pattern is often used together with the Aggregation pattern, where a number of incoming messages are collected and aggregated into a bulk message. In the Claim Check implementation so far, only the references were collected and kept within the process context; however, the very last step within your BPM process required the complete large payload to be retrieved from the staging area and put into the process context. This has been changed with the new and improved implementation. Now, the data retrieval and actual aggregation happen outside of the BPM process context, leading to improved runtime performance. Besides this, the enrichment of the data in the BPM process context is supported.
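The mechanics of the pattern can be sketched in a few lines of Python. This is a simplified model, not the BPM implementation: a dictionary stands in for the external staging area, and the function names are invented:

```python
import uuid

payload_store = {}  # stands in for the external staging area

def check_in(payload: bytes) -> str:
    """Store the large payload outside the process and return a claim
    ticket (reference); only the ticket enters the process context."""
    ticket = str(uuid.uuid4())
    payload_store[ticket] = payload
    return ticket

def aggregate(tickets: list[str]) -> bytes:
    """Retrieve the payloads by reference and build the bulk message.
    In the improved implementation this step runs outside the BPM process
    context, so the context only ever holds the small tickets."""
    return (b"<Bulk>"
            + b"".join(payload_store.pop(t) for t in tickets)
            + b"</Bulk>")

# The process context collects references only, never the payloads
context = [check_in(b"<Order1/>"), check_in(b"<Order2/>")]
print(aggregate(context))  # b'<Bulk><Order1/><Order2/></Bulk>'
```

The performance gain described above comes from exactly this separation: the bulky concatenation never passes through the (persisted) process context.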
Hope this made you curious. For more details of all new 7.31 SP14 / 7.4 SP09 features, check out the release notes.