In October 2006 we went live with a huge FI-CA system with the following architecture: a Smalltalk-based Customer Relationship Management (CRM) system acts as the client, and over 100,000 service calls per day arrive via IBM WebSphere (SOAP) in the FI-CAX system (release 472, based on SAP BASIS release 620). The IBM middleware makes its calls using BAPIs. The biggest challenge for us was to work out what to do when a process did not finish or ended in error, and how to trace and resolve such a situation in a short time.
The core of the problem: when a process does not finish successfully but aborts in the middle, for example because a pre-check or an on-the-fly calculation fails, the runtime information is not traceable in the database. Since our business process is distributed over different systems, this information is distributed over multiple systems as well (see fig. 1). So the biggest technical challenge for us was how to get hold of it.

1. The only way to get this runtime information is to capture it via logs.

We used the application log (see steps 1 and 2). The next big challenge was how to correlate the logs of a process distributed over different systems. There are currently not many good tools available to view and analyze the application log; we overcame this by using the tool EMMA (step 3). Later I will show you what has to be done if EMMA isn't available locally. With future releases it will be possible to use EMMA in web service calls.
If you need exact information about customizing EMMA, along with examples, please follow the blogs of Hasnain Jaffery, where you can see a great example of future EMMA possibilities, even in non-IS-U systems.

 

2. Usage of [FPEMMACGEN].

This transaction is a mass activity that creates the clarification cases (we run it once an hour). There you have to enter a date ID and an identification label like “RUN01”. Like other mass activities, it can be started in the background to create the cases; it can also be started in dialog.

fig. 1:

[image]

 

Overview (the six steps to ensure restarting aborted functions)

[picture1-788120.JPG]

 

3. Saving context data (seen in step 1).

We implemented the BAPIs so that they log the context data in the application log. This was a bit of a challenge, because when the process terminated or ended in error the information was lost; we solved this by issuing multiple COMMITs within our business process. In order to utilize EMMA we also implemented business process codes in the system, predefining them in IMG customizing (please check the delivered cookbook).
The next task was: now that we have this information, what should we do with it, and how? We used the case management functionality of EMMA to distribute the information among the users (it uses the standard workflow role resolution); each user views his cases in the customized inbox of EMMA (transaction [EMMACL]).
OK, now we have created the cases/alerts and distributed them to the end users. If each user had to spend time on every case/alert, resolving each and every case manually would take a good amount of time. For this EMMA provides a nice piece of functionality that automates the solution process for problems, or parts of problems, that can be resolved by executing code (a BOR method, a workflow task, or a front-office process). To create cases in high volume EMMA also provides a mass activity (a parallel job tool) which can be scheduled as required. Regarding the coding, we structured the function modules into logical blocks, schematically as follows:

 

* Get the actual step: on the first run H_STEP is initial (SPACE);
* if the function module was called by EMMA, EMMA knows the aborted step.
IF H_STEP IS INITIAL.
  H_STEP = 1.
ENDIF.

* One logical block per step (1 to 13); on a restart the blocks of
* steps that already finished are skipped.
IF H_STEP <= 1.
* first of all save and COMMIT the context data
...
* INSERT the coding for the application AND COMMIT
  IF ERROR = 'X'.
    MESSAGE E123(ZZZ).           " goes into the application log
  ELSE.
    PERFORM DELETE_CONTEXT_DATA. " tidy up (subroutine name illustrative)
  ENDIF.
ENDIF.
* ... analogous blocks follow for steps 2 to 13

To be able to save the context data we created two dictionary tables. The first one is a cluster table that holds the header fields (unique key, step, function module number) together with the clustered data itself. The second table stores, for each numbered function module and step, the obligatory interface fields, because when we start to tidy up we need the mandatory fields and we need to know which fields are internal tables.
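To make this concrete, here is a minimal sketch of saving the context data with EXPORT TO DATABASE. The cluster table ZCONTEXT, the area ID 'zc', the structure ZITEM and all variable names are assumptions for illustration, not our actual objects:

* Sketch only: ZCONTEXT is a hypothetical cluster table built like
* INDX (RELID, SRTFD, SRTF2) plus the header fields described above.
DATA: lv_fm_no(4)      TYPE n VALUE '0042', " number of the called FM
      lv_unique_id(18) TYPE c,              " unique ID of the call
      lv_key(22)       TYPE c,
      h_step(2)        TYPE n VALUE '05',   " step whose context is saved
      lt_items         TYPE TABLE OF zitem. " one interface table (example)

CONCATENATE lv_fm_no lv_unique_id INTO lv_key.

* Write the data cluster and COMMIT; IMPORT ... FROM DATABASE reads it
* back later when EMMA restarts the aborted step (see step 6 below).
EXPORT items = lt_items step = h_step
  TO DATABASE zcontext(zc) ID lv_key.
COMMIT WORK.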

4. Application log data (seen in step 2).

We always put two messages into the application log (call [SLG1] to see what is meant). The first message is the ABORT message; its first variable is the aborted step and its second variable is the total number of steps. The second message puts all other information into the application log, such as the business partner, a unique ID, the aborted step and the number of the function module called.
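For illustration, here is a minimal sketch of writing these messages with the standard BAL API; the log object ZFICA/ZRESTART and message class ZZZ with numbers 123 and 124 are assumed names, not necessarily our production values:

* Sketch: write the ABORT message to the application log via BAL.
DATA: ls_log        TYPE bal_s_log,
      lv_log_handle TYPE balloghndl,
      ls_msg        TYPE bal_s_msg,
      h_step(2)     TYPE n VALUE '05'.      " aborted step (example)

ls_log-object    = 'ZFICA'.                 " hypothetical log object
ls_log-subobject = 'ZRESTART'.              " hypothetical subobject
CALL FUNCTION 'BAL_LOG_CREATE'
  EXPORTING
    i_s_log      = ls_log
  IMPORTING
    e_log_handle = lv_log_handle.

* 1st message: ABORT, variable 1 = aborted step, variable 2 = total steps.
ls_msg-msgty = 'E'.
ls_msg-msgid = 'ZZZ'.
ls_msg-msgno = '123'.
ls_msg-msgv1 = h_step.
ls_msg-msgv2 = '13'.
CALL FUNCTION 'BAL_LOG_MSG_ADD'
  EXPORTING
    i_log_handle = lv_log_handle
    i_s_msg      = ls_msg.

* 2nd message (e.g. number 124): business partner, unique ID, aborted
* step and function module number go into MSGV1..MSGV4 the same way.

CALL FUNCTION 'BAL_DB_SAVE'
  EXPORTING
    i_save_all = 'X'.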

5. Extending EMMA and the clarification cases for retrieval and customer-own fields.

Every time we create clarification cases we also fill our own fields. This is realized by using the customer include CI_EMMA_CASE. We put the function module number, the unique ID and the step there, to be able to find cases by customer fields; the report REMMACL_SELSCRN_GENERATE makes this possible. The part of the picture above named step 3 gives an idea of what a clarification case looks like when you open it in [EMMACL] (case list).
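For orientation, the customer include could contain fields like these (field names and types are illustrative, not our production definitions):

* Fields appended to the clarification case via include CI_EMMA_CASE:
*   ZZ_FM_NO     NUMC 4    number of the called function module
*   ZZ_UNIQUEID  CHAR 22   unique ID of the call
*   ZZ_STEP      NUMC 2    aborted step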

6. Step 4 shows the tab “process” in the EMMA clarification case.

If you push the function code “VKCR” (shown in step 4), the business object method assigned to that action is started. Background: it is a [SWO1] API method (see step 6) which calls the previously aborted function module, but this time starts directly at the aborted step. Before that, the previously saved context data has to be read.
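Sketched in ABAP, the core of such a restart method could look like this; the function module name Z_PROCESS_RESTARTABLE and the parameter names are assumptions, matching the hypothetical ZCONTEXT cluster sketched above:

* Read the previously saved context data back from the cluster table.
DATA: lv_key(22) TYPE c,              " FM number + unique ID of the call
      lv_step(2) TYPE n,
      lt_items   TYPE TABLE OF zitem.

IMPORT items = lt_items step = lv_step
  FROM DATABASE zcontext(zc) ID lv_key.

* Call the previously aborted function module again, but start it
* directly at the aborted step instead of at step 1.
CALL FUNCTION 'Z_PROCESS_RESTARTABLE'
  EXPORTING
    i_step  = lv_step
  TABLES
    t_items = lt_items.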

7. If everything is OK, the function module is processed in the right way and everything is done.

Coming soon:
In one of the next posts I will talk about using EMMA and the entries in the application log to monitor the whole process in SAP Solution Manager, and about the idea of SALSA (Synthesis and Automatic extraction of business processes out of Logs and System runtime Artifacts). This is an idea of the SAP research group for approaching a business-object-based “bird’s eye view” of business processes.


4 Comments


  1. Jonathan Gilman
    Achim,

    After reading this, I have one question – how did you link your cluster entries to the clarification case generated?  It appears that more would need to be done than activating the CI_EMMA_CASE extension.

    1. Achim Toeper Post author
      Thank you Jonathan for your reply.
      You are right. We store the customer data belonging to the clarification case in two cluster tables. The key of the first one is RELID, the number of the called function module, and the unique ID of the call. In a customizing table we remember, per function module, the interface fields that are needed.
      At the moment we save the context data for the potential clarification case, we also store the function module number, step and unique ID in the append of the clarification case. If there is no error and the function module ends correctly, we delete the stored context data.
      1. Jonathan Gilman
        Actually, I was hoping you could shed light on the Additional Data => Customer Data tab on the clarification cases.  Did you display your cluster data using ALV grids?  Thanks in advance.
        1. Achim Toeper Post author

          Jonathan,

          To shed light on it I attached a picture of our customer-defined transaction:

          [image: https://weblogs.sdn.sap.com/weblogs/images/64034/ZKK_DISPCLEAR.jpg]

          As you can see, in the customer tab of the clarification case only the key fields are stored. With these keys we read the cluster data and display it in a dynamic module pool. Flat fields are shown in the upper frame; contents of internal tables are shown in a table grid.

          If I should provide some source snippets, please contact me via email, or I can explain it in detail in one of the following blogs.

          regards Achim.

