A grown-up Singleton concept for Workflow instances (3-step approach)
Introduction
In the blog post Another 10 Common Mistakes made by Workflow Beginners by Paul Bakker, section #16 advises avoiding duplicate workflow instances:
You could set a ‘check function’ on your event linkage, so that no new workflow is started if an active workflow already exists for the same object. Or, when a change occurs, raise an event to terminate the existing workflow and start a new one.
In this blog I will show you how to achieve a very safe singleton concept, in three steps:
- The event coupling check function to prevent duplicate workflows
- A trigger/wait event concept for the singleton
- An enhanced event trigger for a super-safe approach
1. Event check function for “is another workflow already running”
If you would like to check whether the same workflow is already running (to be exact: "a workflow instance of this object that is based on the same workflow pattern as this one"), you need two pieces of information:
- The object instance identification: type and instance key
- The workflow pattern ID
Both are already available in the standard event coupling, provided you're not using any specialties here:
- The object comes in as the importing parameter SENDER
- The workflow pattern ID comes in as the importing parameter RECTYPE
of the check function module.
This way, it's possible to create a generic check function module Z_CHECK_OTHER_WORKFLOW_RUNS that works with all workflow implementations out of the box:
FUNCTION z_check_other_workflow_runs.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(SENDER) TYPE  SIBFLPORB
*"     VALUE(EVENT) TYPE  SIBFEVENT
*"     VALUE(RECTYPE) TYPE  SWFERECTYP
*"     VALUE(EVENT_CONTAINER) TYPE REF TO  IF_SWF_IFS_PARAMETER_CONTAINER
*"  EXCEPTIONS
*"      OTHER_WORKFLOW_RUNS
*"----------------------------------------------------------------------

* The function generically finds any running workflow based on
* the sender object and the receiving workflow pattern.
  DATA: lt_task_filter TYPE STANDARD TABLE OF swr_task INITIAL SIZE 2,
        lt_worklist    TYPE STANDARD TABLE OF swr_wihdr,
        ls_task        TYPE swr_task.

* Restrict the search to the workflow pattern that is about to start
  ls_task = rectype.
  APPEND ls_task TO lt_task_filter.

  CALL FUNCTION 'SAP_WAPI_WORKITEMS_TO_OBJECT'
    EXPORTING
      object_por            = sender
      top_level_items       = 'X'
      text                  = ' '
      output_only_top_level = 'X'
    TABLES
      task_filter           = lt_task_filter
      worklist              = lt_worklist.

* If an instance is still in process, prevent the start of a second one.
  READ TABLE lt_worklist TRANSPORTING NO FIELDS
       WITH KEY wi_stat = 'STARTED'.
  IF sy-subrc = 0.
    RAISE other_workflow_runs.  "Better: MESSAGE E... RAISING ...
  ENDIF.
* You may also check for wi_stat = 'ERROR' if you like.
ENDFUNCTION.
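To see the check FM in action outside of the event coupling, you can call it directly from a small test report. This is only an illustrative sketch: the PO number, BOR type, event name, and task ID below are placeholder values, and the initial event container reference is passed merely to satisfy the interface, since the check doesn't evaluate it.

```abap
*&---------------------------------------------------------------------*
*& Hypothetical test report for Z_CHECK_OTHER_WORKFLOW_RUNS.
*& All keys below (PO number, BOR type, task ID) are example values.
*&---------------------------------------------------------------------*
REPORT ztest_check_other_wf.

DATA: ls_sender    TYPE sibflporb,
      lo_container TYPE REF TO if_swf_ifs_parameter_container.

* Identify the object instance: a purchase order in BOR representation
ls_sender-instid = '4500000123'.   "Example PO number
ls_sender-typeid = 'BUS2012'.      "Example BOR object type
ls_sender-catid  = 'BO'.           "'BO' = BOR object, 'CL' = ABAP OO class

CALL FUNCTION 'Z_CHECK_OTHER_WORKFLOW_RUNS'
  EXPORTING
    sender              = ls_sender
    event               = 'RELEASESTEPCREATED'  "Example event
    rectype             = 'WS90000001'          "Example workflow task
    event_container     = lo_container          "Not evaluated by the check
  EXCEPTIONS
    other_workflow_runs = 1
    OTHERS              = 2.

IF sy-subrc <> 0.
  WRITE: / 'A workflow is already running - the event would be discarded.'.
ELSE.
  WRITE: / 'No active workflow found - the event would start a new one.'.
ENDIF.
```

In the real event coupling, SENDER, EVENT, and RECTYPE are supplied by the workflow runtime; the report only simulates that call.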
This already looks like we're done, and for normal cases it works fine. However, the story continues as soon as multiple starting events occur at the same time (right after posting a document or later on through a change document). In such cases, if the workflow instances are created simultaneously, each call to SAP_WAPI_WORKITEMS_TO_OBJECT returns that no instance is currently running, and in the end you still have two (or more) workflows running. Just think of the BUS2012.ReleaseStepCreated event, which can occur more than once at the same time.
This leads us to the next section, which closes that door via the second workflow design pattern mentioned by Paul above.
2. Trigger-Wait-Event concept for singleton
As outlined in chapter 1, it's still possible that another workflow instance is running, even though you check for that in the check FM during the creation event, or you simply can't check at the time the event is created and have to rely on the workflow instance itself.
The trigger/wait event concept foresees that each new workflow instance closes all workflows already running for the object. So yes, a new workflow will start, but it ends every other existing one, leaving only one left for the object instance.
As you need the instance identification anyway, I suggest creating an event .CloseProcess at the instance level of the object type, regardless of whether you're using the BOR design or the new ABAP OO workflow design.
Within the workflow pattern, you design a sender fork/wait combination like this:
The sending step raises the event <instance>.CloseProcess, and any other workflow that is already listening to that event (because it's running) ends immediately. After that is done, the workflow itself waits for such an event, in case a newer workflow comes up.
The fork covers the complete workflow implementation. So after receiving the event .CloseProcess for the <instance>, you need to complete your workflow gracefully right away.
So, that's no real magic, and you get another advantage: if you'd like to start over with a couple of workflow instances, you can now easily raise the event .CloseProcess for the instances you no longer want and create a new start event (if you're also including the solution from chapter 1, a second start event wouldn't start another workflow, because that is prevented).
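If you want to close a specific instance manually, for example from a cleanup report, the .CloseProcess event can be raised via the standard API SAP_WAPI_CREATE_EVENT. A minimal sketch, where the object type, key, and event name are example values following the naming used in this blog:

```abap
* Hypothetical sketch: raise <instance>.CloseProcess for one object
* to end its running workflow(s). All keys are example values.
DATA: lv_rc TYPE sy-subrc.

CALL FUNCTION 'SAP_WAPI_CREATE_EVENT'
  EXPORTING
    object_type = 'BUS2012'        "Example BOR object type
    object_key  = '4500000123'     "Example object key
    event       = 'CLOSEPROCESS'   "The event suggested in this blog
    commit_work = 'X'
  IMPORTING
    return_code = lv_rc.

IF lv_rc = 0.
  WRITE: / 'CloseProcess event raised - listening workflows will end.'.
ENDIF.
```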
However, there's a small piece missing: I have experienced that event processing through the workflow runtime system can take some time, and under certain circumstances the workflow may receive its own .CloseProcess event. This happens when the tRFC queue is congested and event delivery is delayed. In such cases, the wait step is already created before the event is delivered, and you end up with no workflow running at all.
To solve this, please read the next chapter.
3. Enhanced event trigger for super-safe approach
The event trigger for .CloseProcess must provide its own source top-level workflow ID, so that the receiving wait step can identify itself.
Add an event parameter EV_WORKFLOW_ID to the definition for .CloseProcess like this:
Now create a data flow when raising the event, like this:
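Inside the workflow, the binding of EV_WORKFLOW_ID is done in the data flow of the event creator step. If you raise the event programmatically instead, the parameter can be filled through the input container of SAP_WAPI_CREATE_EVENT. A sketch under that assumption, with example IDs:

```abap
* Hypothetical sketch: raise .CloseProcess and pass the sending
* top-level workflow ID so the check FM can filter out self-events.
DATA: lt_container TYPE STANDARD TABLE OF swr_cont,
      ls_container TYPE swr_cont,
      lv_rc        TYPE sy-subrc.

ls_container-element = 'EV_WORKFLOW_ID'.
ls_container-value   = '000001234567'.   "Example top workflow ID
APPEND ls_container TO lt_container.

CALL FUNCTION 'SAP_WAPI_CREATE_EVENT'
  EXPORTING
    object_type     = 'BUS2012'      "Example BOR object type
    object_key      = '4500000123'   "Example object key
    event           = 'CLOSEPROCESS'
    commit_work     = 'X'
  IMPORTING
    return_code     = lv_rc
  TABLES
    input_container = lt_container.
```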
Now the trick: Edit the instance linkage for the .CloseProcess event and add a check-function module there (transaction SWEINST):
And when you're done with that, you still need the code of the check FM, so here it comes:
FUNCTION z_check_not_own_topflow.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(SENDER) TYPE  SIBFLPORB
*"     VALUE(EVENT) TYPE  SIBFEVENT
*"     VALUE(RECTYPE) TYPE  SWFERECTYP
*"     VALUE(EVENT_CONTAINER) TYPE REF TO  IF_SWF_IFS_PARAMETER_CONTAINER
*"  EXCEPTIONS
*"      THIS_IS_THE_SAME_WORKFLOW
*"----------------------------------------------------------------------

  DATA: lv_sending_topflow_id TYPE swwwihead-wi_id.

  TRY.
*     Read the sending top-level workflow ID from the event container
      CALL METHOD event_container->get
        EXPORTING
          name  = 'EV_WORKFLOW_ID'
        IMPORTING
          value = lv_sending_topflow_id.

      IF NOT lv_sending_topflow_id IS INITIAL.
*       The top workflow IS present in the event container, so check it!
*       Read the receiving work item (the event wait step) from the event
        DATA: lv_target_ei_id TYPE swwwihead-wi_id.
        CALL METHOD event_container->get
          EXPORTING
            name  = evt_receiver_id
          IMPORTING
            value = lv_target_ei_id.

        IF NOT lv_target_ei_id IS INITIAL.
*         Now we have both the source top flow and the target event item ID
          DATA: lo_wi_handle    TYPE REF TO if_swf_run_wim_internal,
                ls_context      TYPE sww_wimctx,
                lo_wi_container TYPE REF TO if_swf_cnt_container.

*         Set the context
          ls_context-do_commit  = ' '.
          ls_context-called_btc = 'X'.
          ls_context-exec_user  = sy-uname.
          ls_context-fbname     = 'Z_CHECK_NOT_OWN_TOPFLOW'.

          CLEAR lo_wi_handle.
          CALL METHOD cl_swf_run_wim_factory=>initialize( ).
          CALL METHOD cl_swf_run_wim_factory=>find_by_wiid
            EXPORTING
              im_wiid            = lv_target_ei_id
              im_read_for_update = ' '
              im_context         = ls_context
            RECEIVING
              re_instance        = lo_wi_handle.

          IF NOT lo_wi_handle IS INITIAL.
            DATA: ls_swwwihead_of_target_ei TYPE swwwihead.
            ls_swwwihead_of_target_ei = lo_wi_handle->get_wi_header( ).

*           ***********************************************************
*           COMPARE the sending top workflow against the receiving one
*           ***********************************************************
            IF ls_swwwihead_of_target_ei-top_wi_id = lv_sending_topflow_id.
              MESSAGE e005 RAISING this_is_the_same_workflow.
              "Received the event from the own workflow instance
            ENDIF.
          ENDIF.
        ENDIF.
      ENDIF.
    CATCH cx_root.
      "Exception handling is cut here for better overview only.
      "The attached coding file provides full exception handling.
  ENDTRY.
ENDFUNCTION.
Enjoy Workflow.
Take care.
Any helpful comments are highly appreciated.
Florin Wach
Systems-Integration
SAP Business Workflow Senior-Expert
Hi Florin,
Once again, a stellar blog. I hope other #sap_wf enthusiasts will read and absorb!
Sue
Hi Florin,
Very nice one
Hi Florin,
great blog!
Regards,
Dirk
Hi Florin,
thanks for sharing.
Regards,
Gianluca
Great job Florin,
Interesting concept,very useful.
Regards,
Toni.
First, thanks for sharing this post. I am searching for a solution for the same problem. You have an interesting concept; however, I think it is better to keep the existing/already in-process workflow.
Typically, workflow is used for a multi-step and/or multi-agent scenario. If the overall processing of the old workflow instance is about 80% done (which can take days or weeks with multiple users involved), cancelling it in favor of the new workflow instance isn't very nice for those who have already spent time on it. 😉
Henry L.
Hello Henry,
thank you for your thought here.
Parts 2 and 3 of the blog take care of a situation where the check function module "Check if another workflow of the same kind is already running" didn't work out correctly. This may have set your focus on this exceptional situation. However, as I mentioned in part 1, the given check FM will keep the existing workflow and will NOT start another one.
So I still think that if you use the first check FM from part 1, you'd be able to implement your ideas.
Is this what you'd like to achieve, and would you like to test whether part 1 fulfills this?
With the very best wishes
Florin
Dear Florin,
at first, thanks for this great blog post!
I want to ask you for advice on a workflow problem I'm currently thinking about.
Following scenario:
When a material is created, a workflow driven review process should start.
The workflow starts on event bus1001006.viewcreated because it must also start when later on another view is added to an existing material.
This event gets fired for every material view created, so n (n = number of created views) workflows are started simultaneously.
The first thing to do for the flow is to persist the data passed by the event into a z-table.
The first of the concurrent flows to do this is designated the "leading flow". All other flows recognize themselves as "non leading flows" and terminate after persisting their event data into the z-table.
So far so good. My problem is how to catch when all other workflows have persisted their event data so that the leading flow can continue.
Until now I'm simply using a Dummy Step with "requested start" = WI creation + 1 minute to give the other flows "enough" time to persist their data and terminate.
Nevertheless, I'm not very happy with this approach since it may fail if the tRfc queue is congested.
Using your singleton pattern the "non leading" flows would have to fire an event telling the "leading" flow that they have terminated.
The problem here is that the "leading" flow needs to know how many parallel flows to wait for.
I would have to find out how many material views were created, i.e. how many viewcreated events were fired for that material instance.
Unfortunately, table MSTA only has a creation date but no time, so this wouldn't work.
Do you understand my problem and have any advice?
Best regards...
Hello Frank,
this is a very interesting situation, and I'd like to contribute some ideas on that. As other members may like to share their experiences who're watching the Forum posts, too, could you do me a favor and post a question with the given text in the Forum, so me and others could respond there?
With the very best wishes
Florin
Interesting Scenario Frank, we might need more input on what you are trying to achieve via the Workflow in this case and so on. Please post in the forums so that more people can reach out and respond.
Good luck,
Vimal
Dear Vimal,
I posted this in the forums a while ago and received a good answer from Florin Wach.
http://scn.sap.com/thread/3399381
Best regards...
Nice Content......
Good post Florin.
This could help in very complex scenarios.
I would, however, prefer to reduce the load on the WF subsystem and try to always have an approval step as the first step, wherever possible. We can use the same event both to start and to terminate this step, ensuring only one WF will be active for a given event.
IMHO, in all other cases, your approach is more effective and preferred.
Thanks for this write up.
Vimal
Hi Florin,
Your approach is really good, but I'm just wondering if it is better to use the events under the workflow header: in "Goto -> Basic Data", choose the tab "Version-Dependent" and then "Events".
I think it achieves the same thing you want to achieve here, but with configuration only.
Please see the suggested solution from Josef Graf in this post:
http://scn.sap.com/thread/3329141
Regards,
Pablo
Pablo, that solution is slightly different in the sense that it terminates an existing WF and starts a new WF for a linked BO event with the same key.
However, in a scenario where we do not want to terminate the existing WF but also want to prevent any new WFs from starting for the same key, Florin's approach holds good.
Hi Florin Wach,
Thanks for the great blog. I was facing a similar problem, and by implementing all three options I was able to resolve many issues.
However, I have a question here. Can you kindly provide some ideas on it?
By implementing options 2 and 3, we are deleting the workflow instances which might have been created because of simultaneous event triggers. But all those workflow instances would still appear in the workflow overview of the standard transactions (PO, PR, MM02, etc.).
Is there any way to stop the workflow trigger, so that only one instance is visible in the workflow overview?
Any guidance would be helpful.
Thanks
Suman Kalyan
Yes, by implementing option 1) in addition to 2) or 3).