Malte Schluenz

Reduce MPL message attachments in SAP CPI logs by using the log level of the iFlow

MPL attachments make it easy to analyze and debug past interface executions within the message monitoring of SAP CPI. They are handy not only while developing an integration but also when analyzing the root cause of an incident in the production environment.
However, there are limitations to this approach: SAP has implemented a circuit breaker that blocks the addition of further MPL attachments once 1 GB of attachments has been stored within 24 hours (SAP Note 2593825). SAP recommends not using MPL attachments for this purpose, or at least limiting their usage, and using the log level TRACE instead. Sadly, TRACE is limited to a 10-minute timeframe. It therefore requires strict alignment with functional colleagues when testing or reproducing incidents. Furthermore, it makes it difficult to test higher loads (which have a longer execution time, e.g. sending a couple of thousand materials from a backend system) and analyze the issues that occur.
As we have faced this issue a couple of times while developing over the last weeks, we have found the following solution, which has worked well for us so far:

We have changed all of our MPL attachment logging scripts to the one below. This enables us to control whether we want to create the logs or not. As a consequence, we can activate MPL attachment logging for a specific time frame to enable effective reproduction of incidents with functional colleagues over a longer period, as well as deactivate logging when we test heavier loads.

import com.sap.gateway.ip.core.customdev.util.Message;
import groovy.xml.XmlUtil;

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String;

    // Get the log level of the artifact
    def map = message.getProperties();
    def logConfig = map.get("SAP_MessageProcessingLogConfiguration");
    def logLevel = (String) logConfig.logLevel;

    def messageLog = messageLogFactory.getMessageLog(message);
    if (messageLog != null) {
        // Only log when the log level of the iFlow is DEBUG or TRACE
        if (logLevel.equals("DEBUG") || logLevel.equals("TRACE")) {
            def bodyNice = XmlUtil.serialize(body); // Pretty-print the XML
            messageLog.setStringProperty("Logging#3", "Printing Payload As Attachment");
            messageLog.addAttachmentAsString("3. Outgoing", bodyNice, "text/plain");
        } // Here it would be possible to add logging alternatives for other log levels
    }

    return message;
}

Now the question came up of how to check which record failed during execution. Luckily, there is a feature to set a custom ID that is displayed within message monitoring, called the Application ID. For this, you only need to set SAP_ApplicationID as a message header (e.g. with a content modifier) and it then gets displayed (below is an example; for more information, see Specifying Application ID, SAP_Sender and SAP_Receiver Fields). In this way, a functional ID that identifies the record(s) can be stored and provided to colleagues to recreate the issue while MPL attachment logging is activated.
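If you prefer setting the Application ID from a script rather than a content modifier, a minimal sketch could look like the one below. The <MaterialNumber> payload element used as the record identifier is purely an illustrative assumption; use whatever field functionally identifies your records:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message;
import groovy.xml.XmlSlurper;

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String;

    // Extract a functional identifier from the payload.
    // <MaterialNumber> is only an assumed example element.
    def root = new XmlSlurper().parseText(body);
    def recordId = root.MaterialNumber.text();

    // SAP_ApplicationID is picked up by message monitoring as the Application ID
    message.setHeader("SAP_ApplicationID", recordId);
    return message;
}
```

Searching for this ID in the monitor then points you directly to the MPL (and its attachments) of the affected record.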

I hope this helps you and that you enjoyed reading; I appreciate your feedback, thoughts, and comments.

14 Comments
      Raffael Herrmann

      Hi Malte,

      Pretty clever usage of the given SAP functionalities. I'll take this script (or a variant of it) into my best practices portfolio.

One idea for optimizing the script: I like to make things as reusable as possible. Therefore (at least for my taste) I would replace the hardcoded strings like "3. Outgoing" with properties.get() calls. Sure, you then have to add a Content Modifier in front of each of your script calls to set the property, but on the other hand you can then reuse the logging script in all of your flows without the need to change or adapt the script. In addition, you can see what a logging step does directly from the Content Modifier instead of having to read the script to understand what it logs. (But this is just my humble opinion.)

      So far, thanks for sharing your thoughts with us.

      Best regards

      Stefan Tanck

      Hi Raffael,

      btw... are you aware of a (still working) option for having a central groovy script repository (like UDF in PI/PO) so that for example the log script can be reused in different iFlows?

      Best regards

      Stefan

      Raffael Herrmann

      Hi Stefan,

As far as I know, there's no standard solution to tackle this issue. Maybe Finny Babu knows more (or at least whether it's on the backlog/roadmap). For now we are using a Git repository for shared/common Groovy scripts, so we have a central, leading source for reusable scripts. Nevertheless, when someone commits to this repo, we have to check in which iFlows the scripts are used and update them manually.

(Theoretically this step could be automated by setting up a webhook on commit in Git, which downloads the corresponding iFlows from the WebIDE via the OData API, unzips them, replaces the corresponding script, zips and re-uploads them, but in practice we haven't set up anything like this.)

      Malte Schluenz
      Blog Post Author

      Hi Raffael,

      thank you for your feedback!

How can I trigger a script directly from a content modifier? That would make the approach even better!

      Best regards,
      Malte

      Raffael Herrmann

      Hi Malte,

that's not possible (to call a script from a Content Modifier). What I meant was: replace the lines with static strings (log texts) like this one:

      messageLog.setStringProperty("Logging#3", "Printing Payload As Attachment");

      By something like this one:

      messageLog.setStringProperty("Logging#3", map.get("MPL_LOG_TEXT"))

      Then you could set "MPL_LOG_TEXT" by use of a Content Modifier in front of (and in addition to) the script element.

      Downside: You have to place two elements to the IFlow (Content Modifier and Script element) when you want to use the logging script functionality.

      Upside: You don't have to change the script (or better said the texts/strings in it) anytime you use it.

I think the upside outweighs the downside. Yes, you have to place one more element per logging step, but in return one can simply identify the purpose of the logging step (or re-check which texts it will write to the log) by looking at the Content Modifier's settings, instead of digging through the script itself to find the strings that are written to the log.
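Raffael's suggestion could be sketched roughly as below. The property names MPL_LOG_TEXT and MPL_ATTACHMENT_NAME are assumptions for illustration; they would be set by a Content Modifier placed directly before the script step:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String;
    def map = message.getProperties();

    // MPL_LOG_TEXT and MPL_ATTACHMENT_NAME are assumed to be set by a
    // Content Modifier in front of this script; fall back to defaults.
    def logText = map.get("MPL_LOG_TEXT") ?: "Payload attachment";
    def attachmentName = map.get("MPL_ATTACHMENT_NAME") ?: "Payload";

    def messageLog = messageLogFactory.getMessageLog(message);
    if (messageLog != null) {
        messageLog.setStringProperty("Logging", logText);
        messageLog.addAttachmentAsString(attachmentName, body, "text/plain");
    }
    return message;
}
```

With this shape, the same script artifact can be reused unchanged across iFlows, and each usage is documented by its preceding Content Modifier.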


      Malte Schluenz
      Blog Post Author

      Hi Raffael,

      I also see the advantages of setting the Text via a content modifier.

      In my opinion, these advantages really take effect as soon as it would be possible to automatically sync scripts with a Git repository. I am currently testing some tools. However, I have not found a good solution for this.

      Eng Swee Yeoh

      Hi Malte


      This is a very good idea to make logging dependent on the trace level. I've been using an external parameter to control logging, but that requires updating the configuration value and redeployment. Your approach will not require any down time and that's great!


A few points to consider for your script:

1. As far as possible, use a Reader when accessing the message body
2. Move the definition and check of messageLog inside the if block for the trace level. There is no need to create the messageLog if the trace-level condition is not met
3. Consider not altering the input content when logging it - it is better to log the content as-is to preserve what was actually sent from the sender system
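A sketch of the original script with suggestions 1 and 2 applied (and, per point 3, the payload attached unchanged rather than pretty-printed) might look like this - untested, so treat it as a starting point rather than a drop-in replacement:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    def logConfig = message.getProperties().get("SAP_MessageProcessingLogConfiguration");
    def logLevel = (String) logConfig?.logLevel;

    // Point 2: only obtain the messageLog once the level condition is met
    if (logLevel == "DEBUG" || logLevel == "TRACE") {
        def messageLog = messageLogFactory.getMessageLog(message);
        if (messageLog != null) {
            // Point 1: access the body via a Reader instead of asking
            // for a String up front; point 3: attach the content as-is
            def body = message.getBody(java.io.Reader).text;
            messageLog.setStringProperty("Logging#3", "Printing Payload As Attachment");
            messageLog.addAttachmentAsString("3. Outgoing", body, "text/plain");
        }
    }
    return message;
}
```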


      Thanks for sharing this with the community.


      Best regards

      Eng Swee

      Malte Schluenz
      Blog Post Author

      Hi Eng Swee,

      thank you for your feedback!

We are currently thinking about combining the approach of a parameter with the log level. This would give us two options to steer logging.

      Some remarks to your suggestions:

1. Great idea to improve resource consumption. Wouldn't I need to parse the XmlSlurper result back to a string to beautify it? Then I would have additional coding.
2. I have not moved it inside the block where the log level is met, as we sometimes have scripts that log some very high-level information from the payload when the log level is Info/None.
3. Yeah, that's a downside of this script. Nevertheless, I see the benefit of directly having something human-readable.
      Axel Albrecht

      Hi Eng Swee,

you could also inject the attachment switch via the Partner Directory and activate it via an OData call from outside, without the need to redeploy the flow.

      Best regards,
      Axel

      J Evertse

Hi Axel. This looks promising. Could you maybe direct us to more info on this subject? Best regards, John

      Axel Albrecht

      Hi John,

      kindly check this blog 

      best regards, Axel

      Beverely Parks

If the MPL attachment is occurring within a JMS queue, is there a way to check if the attachment exists before adding it again? I'm currently checking SAPJMSRetries == null but I'm still seeing the attachment twice.

      Shaun Oosthuizen

      This works great. Thanks for sharing.

      Philippe Addor

      Hi Malte

Good idea! What I usually do is log the payload only in the error case, using an exception sub-process. That way you're sure that you have the payload available when you get notified about an error in the production environment (where you usually don't have a chance to activate the trace before it happens).
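Philippe's pattern could be sketched as follows, assuming the script is placed inside an exception sub-process. CamelExceptionCaught is the standard exchange property holding the caught exception; the attachment name is just an example:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    // CamelExceptionCaught holds the exception that routed the message
    // into the exception sub-process
    def ex = message.getProperty("CamelExceptionCaught");
    def body = message.getBody(java.lang.String) as String;

    def messageLog = messageLogFactory.getMessageLog(message);
    if (messageLog != null) {
        messageLog.setStringProperty("Error", ex != null ? ex.getMessage() : "unknown");
        messageLog.addAttachmentAsString("Payload at error", body, "text/plain");
    }
    return message;
}
```

Since this only runs on failure, it sidesteps the attachment-volume problem for successful executions entirely, at the cost of having no payload for messages that succeed but produce wrong results.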

      Philippe