Pseudo-thread prioritization in mass event processing systems
In workflow-heavy systems that handle many events and workflow triggers, all events are processed through the same local destination by default. In this article, we will look at fine-tuning this approach for better resource utilization and parallelism.
By default, all workflows and events are processed in a single queue by the outbound scheduler (SMQ2). The default setup with outbound destinations might look as shown below.
In a workflow/event-heavy system, there is little control over which event is processed first. Low-priority and high-priority workflows get the same treatment.
Consider the following scenario in your production environment involving a car paint job. These are the different asynchronous events, listed in order of priority:
- Update painter handbook with schedule
- Send paint job details to spray room booking system
- Update inventory
- Update data aggregates for reporting
Without any prioritization in place, all the above events are treated equally and can be processed in any order. Worse, long-running low-priority events like data aggregation can hold up the high-priority events queued behind them.
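To see why this matters, here is a minimal Python sketch of the default single-queue behavior (illustrative only; the task names and durations are invented and this is not SAP code). A single worker drains one FIFO queue, so a long low-priority job delays every high-priority event behind it:

```python
from collections import deque

# Each task: (name, priority, duration). One shared FIFO queue,
# mirroring the default single-destination setup.
tasks = deque([
    ("update_data_aggregates", "low", 30),   # long-running, low priority
    ("update_painter_handbook", "high", 2),
    ("send_spray_room_booking", "high", 2),
])

clock = 0
finish_times = {}
while tasks:
    name, priority, duration = tasks.popleft()
    clock += duration  # a single worker processes strictly in arrival order
    finish_times[name] = clock

# The high-priority events finish only after the 30-unit aggregate job.
print(finish_times)
# {'update_data_aggregates': 30, 'update_painter_handbook': 32, 'send_spray_room_booking': 34}
```

The high-priority booking event waits 30 time units for a job nobody is in a hurry for; this is the head-of-line blocking that the setup below works around.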
Fortunately for us, it is possible to simulate control and prioritization using the receiver destination setting in SWE2.
We create a new RFC destination for every priority level required. Let us call our highest-priority RFC destination WF_LOCAL_PRIO1. This new destination is then entered in the ‘Destination of receiver’ field. The result will look as below:
Once this is done for all events that need prioritization, corresponding changes have to be made in SMQS: we register the new destinations and perform resource allocation for each.
Note how the resources have been allocated according to event priority; we can also throttle the lowest-priority events. Though it is not possible to define exact queue priorities in the outbound scheduler, we can approximate them through such resource allocation.
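The effect of this resource allocation can be sketched as follows (a simplified Python simulation, not SAP code; the destination names, worker counts, and task figures are illustrative assumptions). Each priority level gets its own queue, and the worker count per queue stands in for the connection limit assigned to that destination in SMQS:

```python
import math

# One queue per RFC destination; "workers" stands in for the
# connection limit allocated to that destination in SMQS.
queues = {
    "WF_LOCAL_PRIO1": {"workers": 4, "tasks": 8, "task_duration": 2},
    "WF_LOCAL_PRIO2": {"workers": 2, "tasks": 8, "task_duration": 2},
    "WF_LOCAL_AGGR":  {"workers": 1, "tasks": 8, "task_duration": 5},  # throttled
}

# Queues now drain in parallel; each one's finish time depends only on
# its own backlog and worker share, so aggregates no longer block PRIO1.
drain_time = {}
for name, q in queues.items():
    waves = math.ceil(q["tasks"] / q["workers"])  # batches of parallel tasks
    drain_time[name] = waves * q["task_duration"]
    print(name, "drains in", drain_time[name], "time units")
```

With these numbers, WF_LOCAL_PRIO1 drains in 4 time units while the throttled aggregates queue takes 40, yet neither delays the other; that is the "pseudo-thread" prioritization the SMQS settings simulate.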
Though this setup does not guarantee that priority events are always executed first, we can tweak the maximum connection parameters at runtime to achieve the best possible results. It is also possible to allocate more resources to the data aggregate queue during off-peak times.
Alongside other well-known methods, this approach can be used to fine-tune systems with heavy asynchronous processing.