
Recently I was working on a scenario where production orders from SAP ERP need to be transformed into the B2MML format (based on the ISA-95 standard) in SAP MII
and sent to an MES. Though this is a very common scenario for SAP MII based solutions, the unique point here is handling very large data volumes when transforming the production order IDoc structure into B2MML XML. This being a heavy-engineering manufacturing scenario, each production order typically has 2,000-3,000 operations with 5,000-6,000 components. To build the B2MML ProductionSchedule XML in a BLS transaction, it is necessary to loop over the E1AFVOL element of the
LOIPRO IDoc XML to add the SegmentRequirement (operation) elements in B2MML.

As a standard practice, I initially used the Reference Schema Loader action block in the BLS transaction to generate the XML from the B2MML XSD, and used it throughout the transaction to append the operation and component nodes (SegmentRequirement) to it under a Repeater loop. An initial load test with a production order of 2,000 operations produced a scary execution time: it took a good 8 hours to process a single production order IDoc with 2,000 operations and generate a ProductionSchedule B2MML from it. This was in no way acceptable for a production scenario. So I started looking into the performance statistics to find where the bottleneck could be addressed. My first thought was that using the Reference Schema Loader action throughout the logic to append the operation structure to its XML property might itself be the bottleneck, as each action block executed in a BLS transaction instantiates a Java class and calls its methods, which costs a good amount of memory. All I actually needed here was the XML structure of the B2MML document generated from the XSD. So, to avoid calling the Reference Schema Loader action repeatedly, I mimicked the B2MML XML structure in the BLS transaction's output XML property. That way I had the B2MML structure available in the transaction output property, which I could use to generate the B2MML and append the operation (SegmentRequirement) structures to it inside the loop, avoiding the Reference Schema Loader action under the Repeater loop and accessing the BLS transaction property instead. When I executed the transaction after these changes, to my surprise the execution time came down to a little over 2.5 hours. It was definitely a huge performance boost, but I was not going to be satisfied so easily.
Having got this clue, I then created a local XML property in the BLS transaction as the parent element for the operation structure and modified the logic to append the operation structures (SegmentRequirement) under this element inside the Repeater loop. Finally, outside the Repeater loop, I assigned this local XML property (with all the SegmentRequirement operation elements added to it) to the corresponding element in the BLS transaction property holding the complete B2MML structure. When I now executed the BLS transaction, to my utter surprise the execution time came down to 19 minutes. That is a huge improvement considering the first execution took 8 hours for the same data load. The statistics below summarize the performance tests.
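The same pattern can be sketched outside MII with Python's standard ElementTree library. This is illustrative only: the element names (ProductionSchedule, ProductionRequest, SegmentRequirement) follow the scenario above, but the helper function and document shape are my own simplification, not the actual B2MML schema. The key idea is the one described above: append inside the loop to a small local parent element, then graft it into the full document once, outside the loop.

```python
import xml.etree.ElementTree as ET

def build_schedule(operations):
    # Full B2MML-like skeleton, held once outside the loop
    # (analogous to the BLS transaction output property).
    schedule = ET.Element("ProductionSchedule")
    request = ET.SubElement(schedule, "ProductionRequest")

    # Small local parent (analogous to the local XML property):
    # all SegmentRequirement nodes are appended here inside the loop,
    # so each append touches a small tree, not the whole document.
    local_parent = ET.Element("SegmentRequirements")
    for op in operations:
        seg = ET.SubElement(local_parent, "SegmentRequirement")
        ET.SubElement(seg, "ID").text = op["id"]

    # Single assignment outside the loop: graft the local parent's
    # children into the full document in one step.
    request.extend(local_parent)
    return schedule

ops = [{"id": f"OP{i:04d}"} for i in range(2000)]
doc = build_schedule(ops)
print(len(doc.find("ProductionRequest").findall("SegmentRequirement")))  # 2000
```

The BLS-level win comes from avoiding repeated work on the large document each iteration; this sketch mirrors the shape of that fix rather than reproducing MII's internals.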

Type of Logic (under Repeater action)        Execution Time

Using Reference Document (Complete XML)      8.2 hours
Using Transaction Property (Complete XML)    2.7 hours
Using Local Property (Segment XML)           0.32 hours

I would also like to mention that the tool which helped me a lot in identifying the performance bottleneck is the Illuminator service BLSManager in Stats mode. To enable this mode and log the execution statistics, execute the BLS transaction via the Runner service as:

http://<server>:<port>/XMII/Runner?Transaction=<Project>/<Path>/<Name>&LogStatisticsToDB=true

While the transaction is executing, note the transaction ID generated for it in the Transaction Manager menu under System Management. After the execution is over, you can check the statistics with the following Illuminator service:

http://<server>:<port>/XMII/Illuminator?service=BLSManager&Mode=Stats&ID=<TRXID>
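The two calls above can be sketched as a small shell script. Everything here is a placeholder: the server, port, project path, and transaction ID are illustrative values you would substitute with your own, and authentication details depend on your MII setup.

```shell
#!/bin/sh
# Illustrative only: SERVER, PORT, TRX and TRXID are placeholders.
SERVER="miiserver.example.com"
PORT="50000"
TRX="MyProject/Orders/BuildB2MML"

# 1. Run the transaction with statistics logging enabled.
RUN_URL="http://${SERVER}:${PORT}/XMII/Runner?Transaction=${TRX}&LogStatisticsToDB=true"
echo "$RUN_URL"

# 2. After execution, fetch per-action statistics for the transaction ID
#    noted in the Transaction Manager (System Management).
TRXID="12345"
STATS_URL="http://${SERVER}:${PORT}/XMII/Illuminator?service=BLSManager&Mode=Stats&ID=${TRXID}"
echo "$STATS_URL"

# e.g. curl -u user:password "$RUN_URL"
#      curl -u user:password "$STATS_URL"
```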

This shows the average execution time taken by each action block and helps you quickly identify the performance bottleneck. In my case it was the assignment action where the SegmentRequirement element was being appended to the B2MML XML.

To summarize, the following tips may help in optimizing the performance of a BLS transaction when large XML manipulations need to be done.

    • Use XPath in the BLS transaction as much as possible. E.g. when looping over a repeating element of an XML document conditionally (such as looping over component elements where the component quantity is positive), use an XPath condition in the Repeater action configuration.
    • While creating a new XML document in a BLS transaction, minimize the number of columns as much as possible; this decreases the size of the XML and improves performance.
    • For huge XML documents, parse the data in the background with a scheduled transaction and store the output in database tables or an MDO (in MII 12.2), instead of parsing it every time the query is executed by a user interface or an external system.
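The first tip can be illustrated with a small Python sketch. The element names (Component, Quantity, ID) are hypothetical. Note that ElementTree's limited XPath subset does not support numeric predicates, so the quantity check here is done in Python; in a BLS Repeater configuration the equivalent condition would be expressed directly in the XPath, so the loop body only ever sees qualifying elements.

```python
import xml.etree.ElementTree as ET

# Hypothetical component list; element names are illustrative only.
xml_doc = """
<Components>
  <Component><ID>C1</ID><Quantity>5</Quantity></Component>
  <Component><ID>C2</ID><Quantity>0</Quantity></Component>
  <Component><ID>C3</ID><Quantity>2</Quantity></Component>
</Components>
"""
root = ET.fromstring(xml_doc)

# Filter once, up front, instead of testing the condition inside the
# loop body -- the same effect an XPath condition on the Repeater has.
positive = [c for c in root.findall("Component")
            if float(c.findtext("Quantity")) > 0]

for comp in positive:
    print(comp.findtext("ID"))  # C1, then C3
```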

Comments


  1. Sascha Wenninger
    Hi Dipankar,

    great work getting to such a big performance improvement! Sounds to me like the first method (append to Reference Document) causes the XML to be parsed into a DOM for each loop iteration, so that the elements can be appended? If so, this is probably the cause of the slow performance.

    Out of interest, did you try using an XSLT for this at all? Depending on the processor, these can either be terribly slow (when they use a DOM tree internally), or quite fast and memory-efficient (when they use a stream parser internally, like Saxon or newer versions of Xalan)…

    Great work and thanks for sharing!

    Sascha

    1. Dipankar Saha Post author
      Hi Sascha,
      Excellent point! Though I haven't used XSLT for this scenario yet, it will be my next PoC option. I have used XSLT a lot to dynamically generate web pages from the data in BLS and it seems very efficient. The only issue is that it is a bit more difficult to debug and support compared to BLS logic.
      Thanks,
      Dipankar
