Uwe Voigt

Advanced MPL using Elastic Stack

Introduction

There have already been a couple of posts about advanced logging. In one way or another, they all point out that SAP Cloud Platform Integration has pitfalls when it comes to logging. Some of them can really ruin your business day:

  • the 1GB per day MPL attachment limit – when enforced by the so-called “circuit breaker”, it makes log analysis impossible
  • automatic MPL deletion after 30 days
  • the message filtering and sorting of the monitoring dashboard could be better

Since there are reliable cloud logging solutions available, there is finally no reason to endure that situation.

One of them is the Elastic Stack, also known as ELK.

The scope of this article is to give an overview of what can be done with it; I do not go into every technical detail.

Install an Elastic Stack

The Elastic Stack has a Basic licence which makes the product available at no cost. It can also be used as a managed Elastic Cloud service.

I decided to try out a self-managed scenario in an Azure subscription by deploying a prepared Ubuntu virtual machine image with the complete Elastic Stack already installed. We could also move to containers in a Kubernetes service in the future – that depends on the experience we gain with this setup and on cost considerations.

The virtual machine only opens HTTP/HTTPS ports 80/443. A DNS name is assigned to its public IP.
Currently, it uses a 200GB disk.

There are two endpoints that have to be opened to the internet:

  1. Logstash – the API to send log messages from CPI flows
  2. Kibana – the front end to visualise log data

Their transport must be encrypted and clients have to authenticate.

I installed Nginx as a reverse proxy that uses Let’s Encrypt certificates with automatic renewal via a cron job. Authentication is done with basic username/password authentication, with the credentials maintained via htpasswd.
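A minimal sketch of such a reverse proxy configuration could look as follows – the host name, backend ports and file paths are placeholder assumptions, not the values of my actual setup:

server {
    listen 443 ssl;
    server_name elk.example.com;

    # Let's Encrypt certificates, renewed by the cron job
    ssl_certificate     /etc/letsencrypt/live/elk.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/elk.example.com/privkey.pem;

    # basic authentication with credentials maintained via htpasswd
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Logstash endpoint (assuming the http input plugin on its default port 8080)
    location /logstash/ {
        proxy_pass http://localhost:8080/;
    }

    # Kibana front end
    location / {
        proxy_pass http://localhost:5601/;
    }
}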

Create a Search Index Template

The Kibana UI has a Stack Management / Index Management perspective that allows you to create index templates. With a template you can define settings that are inherited by the indexes which are automatically created on a daily basis. It can also have a lifecycle policy that removes indexes after a defined period or moves them to less performant and therefore cheaper hardware.
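As an illustration, such an index template could also be created via the Kibana Dev Tools console roughly like this – the template name, index pattern and lifecycle policy name are assumptions, not the ones used in my setup:

PUT _index_template/scpi-logs
{
  "index_patterns": ["scpi-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "index.lifecycle.name": "scpi-logs-policy"
    }
  }
}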

To use the index for searching, there must be an index pattern, which can be created in the same management UI. It is useful to create the pattern after at least one document is in the index; otherwise the pattern has to be refreshed later so that it knows all the fields sent by the CPI.

Send log messages to the Elastic Stack

As with any other MPL attachment, for which you would use a Groovy script like this one

Message logMessage(Message message) {
	def messageLog = messageLogFactory.getMessageLog(message)
	if (messageLog) {
		def body = message.getBody(String)

		// createAttachment is a helper defined elsewhere in the full script
		def attachment = createAttachment(message, body)

		// optional custom name part set as an exchange property by the integration flow
		def name = ["Log", message.getProperty("customLogName")]

		messageLog.addAttachmentAsString(name.findAll().collect { it.trim() }.join(" - "), attachment as String, "text/xml")
	}
	return message
}

you basically do the same here, just with some additional Camel knowledge.
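(The createAttachment helper referenced above is not part of this post; a purely hypothetical minimal version that wraps the payload and headers into an XML document could look like this:)

def createAttachment(Message message, String body) {
	// hypothetical: wrap headers and payload into a simple XML envelope
	def writer = new StringWriter()
	def xml = new groovy.xml.MarkupBuilder(writer)
	xml.log {
		headers {
			message.headers.each { k, v -> header(name: k, v as String) }
		}
		payload(body)
	}
	writer.toString()
}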

 

First, there are two tasks to prepare the platform for sending to the Elastic Stack:

  1. Add the Let’s Encrypt Root Certificate DST Root CA X3 to the platform keystore.
  2. Add the username and password that were used to protect the Logstash endpoint as a user credential.

 

Then, in the script there are the following steps:

  1. Prepare the request to send to the Logstash API.
    // "text" and "level" come from the surrounding method; mapToString, getCorrelationIdFromMpl
    // and getEnvironment are helpers defined elsewhere in the full script
    def metadata = ["beat": "scpi", "version": "1.0.0", "@timestamp": new Date().format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", TimeZone.getTimeZone("UTC"))]
    def name = ["Log", text, message.getProperty("customLogName")]
    def logs = ["name": name.findAll().collect { it.trim() }.join(" - "),
    	"level": level,
    	"body": message.getBody(String),
    	"headers": mapToString(message.headers),
    	"properties": mapToString(message.properties),
    	"mplId": message.getProperty("SAP_MessageProcessingLogID"),
    	"messageCorrelationId": getCorrelationIdFromMpl(message.exchange)
    ]
    def logstashBody = [ "@metadata": metadata,
    	"component": message.exchange.context.name,
    	"environment": getEnvironment(),
    	"logs": logs
    ]
  2. Send the request (credentials are fetched using the SecureStoreService API – a sketch of a possible getCredential helper follows after this list).
    def logstashUrl = message.getProperty("logstashUrl")
    def credential = getCredential("Logstash")
    
    def template = message.exchange.context.createProducerTemplate()
    // MplLogLevel.NONE prevents this internal call from creating its own MPL entries
    MplConfiguration mplConfig = new MplConfiguration()
    mplConfig.setLogLevel(MplLogLevel.NONE)
    def exchange = ExchangeBuilder.anExchange(message.exchange.context)
    	.withHeader("CamelHttpMethod", "POST")
    	.withHeader("Content-Type", "application/json")
    	.withHeader("Authorization", "Basic " + Base64.encoder.encodeToString("${credential.username}:${credential.password as String}".getBytes(StandardCharsets.UTF_8)))
    	.withBody(new JsonBuilder(logstashBody).toString())
    	.withProperty("SAP_MessageProcessingLogConfiguration", mplConfig)
    	.build()
    // send asynchronously via the Camel Async HTTP Client ("ahc") component so the integration flow is not blocked
    template.asyncCallback("ahc:${logstashUrl}", exchange, new Synchronization() {
    	void onComplete(Exchange ex) {
    		template.stop()
    	}
    	void onFailure(Exchange ex) {
    		if (ex.exception)
    			log.logErrors("Error sending to Logstash: " + ex.exception)
    		else
    			log.logErrors("Error response from Logstash:  ${ex.out.headers['CamelHttpResponseCode']} - ${ex.out.headers['CamelHttpResponseText']}")
    		template.stop()
    	}
    })

     

  3. That is it!
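For completeness, the getCredential helper used in step 2 is not shown here; a minimal sketch based on the SecureStoreService API could look like this:

def getCredential(String alias) {
	// look up the user credential artifact deployed under the given alias
	def secureStoreService = ITApiFactory.getApi(SecureStoreService.class, null)
	secureStoreService.getUserCredential(alias)
}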

Inspect log messages in Kibana

This does not only look pretty, it also comes with far more filtering features than the CPI monitoring.
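For example, with the document structure sent above, a KQL filter in Kibana could look roughly like this – the exact field names depend on how Logstash maps the payload, so treat them as assumptions:

environment : "production" and logs.level : "ERROR" and logs.body : *orderNumber*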

It would also be possible to link Kibana and the CPI monitoring simply by submitting a URL containing the mplId or correlationId.

Conclusion

With these relatively simple changes we can provide

  • more robust monitoring for the operations team
  • a message history whose size only depends on what the customer is willing to pay for disk space
  • search in log attachments at a level of granularity that the CPI sorely misses
  • continuous logging, with no need to decrease the log level by setting a logEnabled property to false in test or production environments for fear of the circuit breaker


      8 Comments
      Malte Schluenz

      Hi Uwe Voigt,

      thank you for sharing this great idea!

      We are also currently working on an integrated ELK-stack logging and alerting solution.
      Therefore, I have three questions:

      1. How does your script behave if the log cannot be sent to Logstash (e.g. Logstash is down)? Does the complete iFlow fail, or is just no log stored?
      2. Why have you decided to load the logs into Elasticsearch via Logstash rather than inserting them directly into Elasticsearch?
      3. Have you thought about bundling logs into bigger bulk messages to reduce the number of outgoing calls from CPI to Logstash?

      Thank you for answering my questions in advance!
      Malte

      Uwe Voigt
      Blog Post Author

      Hi Malte,

      So far, the setup of that Elastic Stack has been done within one day. I am sure there are a couple of things that could be done better. If we are forced to invest further, I will write an update here.

      For example, I would like to have the node’s log stream indexed, but I think that would most probably require an OSGi bundle or fragment to be deployed. I am not sure whether that can be done using the custom adapter deployment infrastructure.

      Regarding your questions:

      How does your script behave if the log cannot be sent to Logstash (e.g. Logstash is down)? Does the complete iFlow fail, or is just no log stored?

      The script reports any connectivity errors or Logstash error responses to the node log only. The integration flow is not affected. As a fallback, the log message including body, headers and exchange properties could be attached to the MPL. Note that the initial reason to use the Elastic Stack was the MPL circuit breaker.
      Since we use JMS messaging between receiver and sender integration flows, we should not lose messages even if Logstash/Elastic is down for a while.

      Why have you decided to load the logs into Elasticsearch via Logstash rather than inserting them directly into Elasticsearch?

      To my knowledge, Logstash comes with a bunch of plugins that allow you to modify your message while it is being processed by the pipeline. I have not yet experimented a lot with ingest pipelines, but I guess they do not offer much functionality that Logstash does not.
      At the moment, we are creating an index document for each message sent by an integration flow. An example of a useful modification of a message might be to use the correlation id of the message as the document id and to add fields to the document on each new message.
      But since the filtering capabilities of Kibana are that strong, the current setup is already very effective.
      On the other hand, I do not think that the additional component Logstash imposes a great risk of failure. There is not much memory pressure on it because it streams batches of data.

      Have you thought about bundling logs into bigger bulk messages to reduce the number of outgoing calls from CPI to Logstash?

      The experience of the last weeks shows us that this is not necessary. Quite the contrary – the integration flows perform faster than with MPL attachments.

       

      Best Regards!
      Uwe

      Mikel Maeso Zelaya

      Hi Uwe,

      Great post. I’ve seen that you are using lots of Camel classes to do the scripting.

      How do you make use of these? Could you share some of the imports or a more detailed script?

      Thanks in advance.

       

      Uwe Voigt
      Blog Post Author

      Hi Mikel,

      When developing a Groovy script, you had better use an IDE. (Frankly, I do not like the way SAP designed the development process for the CPI – they should not have switched from the local development approach to the online design editor. It would have been even smarter to provide a downloadable CPI feature set to run within a local OSGi container; this, for instance, would enable us to debug scripts.)

      If you attach the Apache Camel core library to your IDE project, you will see all Camel classes and can even read the Javadoc!

      Here are the imports for your reference:

      import com.sap.gateway.ip.core.customdev.util.Message
      import com.sap.it.api.ITApiFactory
      import com.sap.it.api.securestore.SecureStoreService
      import com.sap.it.api.securestore.UserCredential
      import com.sap.it.op.agent.collector.camel.MplAccessor
      import com.sap.it.op.agent.mpl.MplConfiguration
      import com.sap.it.op.agent.mpl.MplLogLevel
      
      import groovy.json.JsonBuilder
      import java.nio.charset.StandardCharsets
      
      import org.apache.camel.*
      import org.apache.camel.builder.ExchangeBuilder
      import org.apache.camel.spi.Synchronization

      Best Regards,
      Uwe

      Abraham Raies

      Hi Uwe!

      I am trying to create logs from my iFlow and send them directly to Kibana/Elastic, and this post seemed to be all I needed. Unfortunately, when trying to replicate the steps you detail in the blog, I had a problem adding the certificate (DST Root CA X3), as it is outdated. What alternative certificate can I use?

      Thanks in advance.

      Best Regards,

      Uwe Voigt
      Blog Post Author

      Hi Abraham,

      You can use the CN=ISRG Root X1,O=Internet Security Research Group,C=US certificate, which is a valid root certificate used by Let’s Encrypt (https://letsencrypt.org/certificates/).

      Regards,
      Uwe

      OmPrakash Heerani

      Hello Uwe,

       

      Good day.

      I have a query; please help me. You used the Logstash API to send logs to Elasticsearch. However, can we connect directly to the Elasticsearch database from SAP CPI? If yes, which protocol/adapter? Any lead is highly appreciated.

      Thank you.

      Best Regards

      Om

      Uwe Voigt
      Blog Post Author

      Hi Om,

      Instead of the Logstash API you can use the Elastic Index API (https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html)

      I have not tried it yet because I initially set up the stack with Logstash in front and we are still going with that approach, but it should work as well with only minor differences. Please try it out and keep us posted.
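      As an untested sketch – the index name and host are placeholders – the request from the script could then target the Index API directly instead of the Logstash endpoint:

      POST https://<elasticsearch-host>/scpi-logs/_doc
      Content-Type: application/json
      Authorization: Basic <credentials>

      { ... the same JSON document that is currently sent to Logstash ... }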

      Regards,
      Uwe