The biggest challenge was that when the process did not finish successfully but aborted in the middle (because a pre-check failed, an on-the-fly calculation ran into an error, or for any other reason), the runtime information was not traceable in the database. Since our business process is distributed over different systems, this information is distributed over multiple systems as well (see fig. 1). So the biggest technical challenge for us was how to get hold of that runtime information.
1. The only way to get this runtime information is to capture it via logs.
We used the application log (see steps 1 and 2). The next big challenge for us was how to correlate the logs of a process distributed over different systems. Currently there are not many good tools available to view and analyze the application log; we overcame this challenge by using the tool ‘EMMA’ (step 3). Later I will show you what has to be done if EMMA isn’t available locally. With future releases it will be possible to use EMMA in web service calls.
If you need exact information about customizing EMMA and examples, please follow the blogs of Hasnain Jaffery. There you can see a great example of future EMMA possibilities, even in non-IS-U systems.
This transaction is a mass activity that creates the clarification cases (e.g., once an hour). There you have to enter a label for the date and an identification label like “RUN01”. Like other mass activities, it can be started in the background to create the cases; it can also be started in dialog.
Overview (the six steps to ensure restarting of aborted functions)
3. Saving context data (see step 1):
We implemented the BAPIs to log the context data in the application log. This was a bit of a challenge because, when the process terminated or ran into an error, the information was lost. We solved this by issuing multiple COMMITs within our business process. In addition, in order to utilize EMMA, we implemented business process codes in the system; we predefined the codes in customizing (please check the delivered cookbook) in the IMG.
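As an illustration, here is a minimal ABAP sketch of that pattern using the standard application log API (the log object ‘ZPROC’/‘ZRESTART’ and the message details are assumptions, not our original implementation): the log is persisted immediately with its own COMMIT, so the information survives a later termination of the process.

```abap
DATA: ls_log        TYPE bal_s_log,
      lv_log_handle TYPE balloghndl,
      ls_msg        TYPE bal_s_msg.

* create an application log (object/subobject names are assumptions)
ls_log-object    = 'ZPROC'.
ls_log-subobject = 'ZRESTART'.
CALL FUNCTION 'BAL_LOG_CREATE'
  EXPORTING
    i_s_log      = ls_log
  IMPORTING
    e_log_handle = lv_log_handle.

* add a message carrying the context information
ls_msg-msgty = 'I'.
ls_msg-msgid = 'ZZZ'.
ls_msg-msgno = '100'.
CALL FUNCTION 'BAL_LOG_MSG_ADD'
  EXPORTING
    i_log_handle = lv_log_handle
    i_s_msg      = ls_msg.

* persist the log right away and COMMIT, so nothing is lost
* if the process terminates afterwards
CALL FUNCTION 'BAL_DB_SAVE'
  EXPORTING
    i_save_all = 'X'.
COMMIT WORK.
```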
The next question was: now that we have this information, what should we do with it, and how? We used the case management functionality of EMMA to distribute the information among the users (it uses the standard workflow role resolution); the cases can then be viewed by each individual in the customized EMMA inbox (transaction [EMMACL]).
OK, now we have created the cases/alerts and distributed them to the end users. If each user had to work through each case/alert by hand, it would take a good amount of time to resolve every case manually. For this, EMMA provides a nice functionality to automate the solution process for problems (or parts of problems) that can be resolved by executing code: this can be a BOR method, a workflow task, or a front office process. To create cases in high volume, EMMA also provides a mass activity (a parallel job tool) which can be scheduled as required. Regarding the coding, we made logical blocks in the function modules, starting with:
* get the actual step:
*   on the FIRST RUN, H_STEP = SPACE;
*   if called by EMMA, then EMMA knows the aborted STEP
IF h_step <= 01.  " one such block per step, 1 TO 13
*   process coding:
*   first of all save and commit the context data
*   then INSERT the coding for the application AND COMMIT
  IF error = 'X'.
    MESSAGE e123(zzz). " goes into the application log
  ENDIF.
ENDIF.
* after the last step has finished successfully: DELETE the context data.
To be able to save the context data we created two dictionary tables: one saves the header fields (unique key, step, number of the function module) together with the clustered data; it is a cluster table. The second table stores, for each numbered function module and step, the obligatory fields, because when we start to tidy up we need the mandatory fields and we need to know which fields are internal tables.
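A sketch of how these two tables could be filled (all table and variable names here, such as ZCTXHEAD and ZCTXDATA, are assumptions, not our real ones): the header row is written with MODIFY, while the context itself goes into the cluster table via EXPORT TO DATABASE, which serializes arbitrary fields and internal tables.

```abap
* assumption: ZCTXHEAD has fields UNIQUEID, STEP, FUNCNUM;
* ZCTXDATA is a cluster table (RELID, UNIQUEID, SRTF2, CLUSTR, CLUSTD)
DATA: ls_head     TYPE zctxhead,
      lv_uniqueid TYPE char20,
      h_step      TYPE numc2,
      lt_items    TYPE STANDARD TABLE OF string.

ls_head-uniqueid = lv_uniqueid.
ls_head-step     = h_step.
ls_head-funcnum  = '01'.
MODIFY zctxhead FROM ls_head.

* serialize the context (single fields and internal tables)
* into the cluster table under the unique ID
EXPORT h_step = h_step
       items  = lt_items
  TO DATABASE zctxdata(zc) ID lv_uniqueid.
COMMIT WORK.
```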
4. Application log data (see step 2)
We always put two messages into the application log (call [SLG1] to see what is meant). The first message is the ABORT message, with the aborted step as the first variable and the total number of steps as the second. The second message puts all other information into the application log, such as the business partner, a unique ID, the aborted step, and the number of the function module called.
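Assuming a log handle already created via BAL_LOG_CREATE, the two messages could be added like this (message E123(ZZZ) is taken from the coding skeleton above; the detail message number and variable names are assumptions):

```abap
DATA ls_msg TYPE bal_s_msg.

* 1st message: the ABORT message
ls_msg-msgty = 'E'.
ls_msg-msgid = 'ZZZ'.
ls_msg-msgno = '123'.
ls_msg-msgv1 = h_step.          " aborted step
ls_msg-msgv2 = lv_steps_total.  " total number of steps
CALL FUNCTION 'BAL_LOG_MSG_ADD'
  EXPORTING
    i_log_handle = lv_log_handle
    i_s_msg      = ls_msg.

* 2nd message: all other information (message number assumed)
ls_msg-msgno = '124'.
ls_msg-msgv1 = lv_partner.      " business partner
ls_msg-msgv2 = lv_uniqueid.     " unique ID
ls_msg-msgv3 = h_step.          " aborted step
ls_msg-msgv4 = lv_funcnum.      " number of the called function module
CALL FUNCTION 'BAL_LOG_MSG_ADD'
  EXPORTING
    i_log_handle = lv_log_handle
    i_s_msg      = ls_msg.
```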
5. Extending EMMA and the clarification cases for retrieval via customer-own fields
Every time we create clarification cases, we also insert our own fields. This is realized by using the customer include CI_EMMA_CASE. We put the function number, unique ID, and step there to be able to find cases by customer fields; the report REMMACL_SELSCRN_GENERATE makes this possible. The part of the picture above named step 3 gives an idea of what a clarification case looks like when you open it in [EMMACL] (case list).
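For illustration, the customer include and a retrieval by those fields could look like this (the ZZ_* field names and the case table name EMMA_CASE are assumptions extrapolated from the text, not our real definitions):

```abap
* fields added via the customer include CI_EMMA_CASE (names assumed):
*   ZZ_FUNCNUM  - number of the aborted function module
*   ZZ_UNIQUEID - unique ID of the process run
*   ZZ_STEP     - aborted step

* after running REMMACL_SELSCRN_GENERATE these fields appear on the
* EMMACL selection screen; programmatically the cases could be read e.g. by:
DATA lt_cases TYPE STANDARD TABLE OF emma_case.
SELECT * FROM emma_case INTO TABLE lt_cases
  WHERE zz_uniqueid = lv_uniqueid.
```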
6. Step 4 shows the tab “process” in the EMMA clarification case.
If you push the function code “VKCR” (shown in step 4), the business object method assigned to that action is started. Background: it is a [SWO1] API method (see step 6) which calls the previously aborted function module, but this time starts right at the aborted step. Before that, the previously saved context data has to be read.
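A sketch of what such an API method could do internally (all names here are assumptions): read back the saved context with IMPORT FROM DATABASE, then call the aborted function module again, handing over the step at which it should resume.

```abap
DATA: h_step      TYPE numc2,
      lv_uniqueid TYPE char20,
      lv_funcname TYPE rs38l_fnam.

* read the previously saved context from the cluster table
* (table name ZCTXDATA and area 'ZC' are assumptions)
IMPORT h_step = h_step FROM DATABASE zctxdata(zc) ID lv_uniqueid.

* determine the function module from the saved header data, then
* restart it at the aborted step instead of at step 1
lv_funcname = 'Z_PROCESS_PART_01'.  " assumed name
CALL FUNCTION lv_funcname
  EXPORTING
    iv_step = h_step.
```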
7. If everything is OK, the function module is processed in the right way and everything is done.
In one of the next posts I will talk about using EMMA and the entries in the application log to monitor the whole process in SAP Solution Manager, and about the idea of SALSA (Synthesis and Automatic extraction of business processes out of Logs and System runtime Artifacts). This is an idea of the SAP research group for approaching a business-object-based “bird’s eye view” of business processes.