Update History:

03 May 21: Enhanced the Azure Workbook to work with multiple CPI instances out of the box, introduced a region identifier mapping (eu20 -> West Europe on Azure), and enhanced the Groovy scripting to send region and CPI domain to distinguish multiple CPI instances.

03 Mar 21: Added an iFlow for a direct connection to Azure Log Analytics, without SAP Alert Notification Service in the middle.

Dear community,

Have you been using the E-Mail Adapter to get notified about problems with your iFlows and flooded your inbox in doing so? I certainly have 😄 Let's do better than that!

Some thoughts on the context and alternatives


Out of the box, CPI gives you detailed logs, tracing capabilities and a nice cloud-based Admin UI to troubleshoot integration errors of all sorts. To avoid checking it regularly, you need a means of notifying your CloudOps team. E-Mail seems straightforward and is used very often. However, e-mails quickly become impractical: in case of a target system outage during a batch load, you might get hundreds or thousands of e-mails within minutes, depending on your setup and message frequency. That may even cause your e-mail server to flag them as an e-mail bomb and eventually block the sender.

Great monitoring systems aggregate problems, give you an auditable history and the capability to slice and dice your issues on a high level, and let you spin up automation based on your metrics. At the end of this blog you will have exactly that, with tight integration between SAP CPI and Azure Monitor (you can filter messages in CPI straight from your Azure Monitor Workbook). See below for the overall architecture and an example screenshot.


Fig.1 Architecture overview to monitor alerts from SAP CPI in Azure Monitor



Fig.2 Screenshot from Azure Monitor workbook


The solution presented here relies mostly on Azure Monitor Workbooks together with the SCP Alert Notification Service (ANS), but it is not limited to that. As always, there are multiple ways to achieve monitoring/alerting for your iFlows; especially with public APIs you can connect many systems nowadays. Find below some other alternatives for you to explore:

  • CPI Admin UI with E-Mail Adapter: the plain and simple approach already mentioned above.

  • SAP Analytics Cloud CPI Story (as of SAC innovation release 17). I borrowed the corresponding picture from holger.himmelmann2 from CBS, because the story is not available on my SAC trial account.

  • SAP Solution Manager: https://support.sap.com/en/alm/solution-manager/expert-portal/public-cloud-operations/sap-cloud-plat...

  • SAP Cloud Application Lifecycle Management (ALM): https://support.sap.com/en/alm/sap-cloud-alm/operations/expert-portal/integration-monitoring/calm-cp...

  • SAP Application Interface Framework (AIF) to feed back into your SAP backend. I borrowed this picture from holger.himmelmann2 from CBS as well. Find a starting point for AIF in the community here. This is great for end-to-end monitoring of SAP-specific protocols but lacks scope outside the SAP ecosystem.

  • Custom integration with any REST endpoint via the HTTP or SAP OpenConnector adapter: use the adapter on your iFlow to call your desired endpoint. You need to handle authorization and the specific header setup yourself, though. Sometimes you even need multiple HTTP calls to obtain a CSRF token or Bearer token before you can POST your actual payload. The OpenConnector tries to lift that burden for a given set of targets. You can have a look here.

  • SAP Cloud Platform Integration API: find the relevant API reference on the API Hub here. With that API you can pull the CPI logs and create your own monitoring. This happens on tenant level, in contrast to the adapters on iFlow level, which can push information to their target directly.



Table 1: Overview of possible CPI monitoring solutions


You can find the SAP webinar on CPI messaging, from which the two screenshots borrowed from holger.himmelmann2 originate, here.

Poll vs. Push


For all monitoring integration scenarios with SAP CPI, you can either send directly from your iFlow (push) or use the last-mentioned API to check for new message logs on a scheduled, poll-based basis. Pushing metrics directly lifts the burden of checking whether you already notified your target about a specific data item, but it also adds complexity to your iFlows. It is best practice to consolidate the actual sending in a separate iFlow and connect it via a ProcessDirect adapter, or at least to use an exception sub-process on source-iFlow level.

For the SAP Cloud Platform Alert Notification Service there is even a standard integration package provided by SAP on the API Business Hub making use of the poll-based CPI API.


Fig.3 Screenshot of standard integration package for SCP ANS


This is great because you can start using it immediately. The config guide is straightforward, and SAP makes sure the interfaces are maintained with each update of the involved systems. However, there is a trade-off: you cannot make any changes to that iFlow, otherwise you lose the capability to receive updates from SAP. I usually advise making a copy of that iFlow and applying your changes there. The original stays undeployed and serves only as an indicator for updates (SAP's changelog also tells you what they changed) and as a master copy, so you can research the changes and merge them manually. I have done local text/object compares in the past; you can get more sophisticated with Git repos and DevOps methodologies. Find my latest post on that matter here.


Fig.4 Screenshot of standard iFlow for SCP ANS


The SAP standard iFlow calculates time windows and stores them on the tenant so that it "knows" which logs were already sent. This addresses the "poll burden" I mentioned before. With that time frame it asks the API for all failed iFlow messages and maps the response to a JSON structure, which is then sent to your configured Alert Notification Service instance. See below a snippet from the Groovy script that constructs the payload. Have a closer look at the "severity" attribute of the Event object.
Event toNotification(String flowId, def messages, Message message) {
    String flowName = messages[0].IntegrationArtifact.Name.toString()
    String flowType = messages[0].IntegrationArtifact.Type.toString()
    String currentTimeFrameStart = getStringHeader(message, CURRENT_TIME_FRAME_START_HEADER_NAME)
    String currentTimeFrameEnd = getStringHeader(message, CURRENT_TIME_FRAME_END_HEADER_NAME)

    return new Event(
        eventType: "CPIIntegrationFlowExecutionFailure",
        resource: new Resource(
            resourceName: flowName,
            resourceType: flowType
        ),
        severity: "INFO",        // fixed severity in the standard content
        category: "NOTIFICATION",
        subject: "CPI Integration Flow '${flowName}': Execution Failure",
        body: "There were '${messages.size()}' failures for the '${flowName}' integration flow within the time frame starting from '${currentTimeFrameStart}' and ending at '${currentTimeFrameEnd}'. ",
        tags: [
            'cpi:IntegrationArtifact.Id': flowId
        ]
    )
}

The Groovy script and iFlow shipped by SAP focus on filtering for failed messages and aggregating their numbers, which is great if you want to send e-mails or create Microsoft Teams notifications. Unfortunately, this limits our ability to get full-blown monitoring on Azure Monitor: for instance, you won't be able to create metrics on failed vs. completed messages or the like. To overcome this limitation, I copied the standard iFlow and dropped the OData filter on the status:


Fig.5 Screenshot from OData settings for MPL of standard SAP iFlow for ANS

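To illustrate the effect on the underlying OData query, here is a sketch against the public MessageProcessingLogs API (an assumption for illustration; the actual query in the iFlow carries more parameters):

// Standard content: only failed messages within the polled time frame
GET /api/v1/MessageProcessingLogs?$filter=Status eq 'FAILED' and LogEnd ge datetime'2021-05-03T10:00:00' and LogEnd lt datetime'2021-05-03T10:05:00'

// Modified copy: status filter dropped, so completed messages are fetched too
GET /api/v1/MessageProcessingLogs?$filter=LogEnd ge datetime'2021-05-03T10:00:00' and LogEnd lt datetime'2021-05-03T10:05:00'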

In addition, I altered the Groovy script to create an event for every single message, rather than grouping them and counting upfront how many occurred during the time frame, as the standard implementation does.
flowIdToMessageProcessingLog.each { String key, def value ->
    logMessage.append("Mapping '${value.size()}' failed messages for integration flow with id '${key}' to service notification\n")
    value.each { myMessage ->
        // skip the notification iFlow itself, except when it failed
        def status = myMessage.Status.toString()
        if (key != "Send_notifications_for_failed_Message_Processing_Logs" || status == "FAILED") {
            events.add(toNotification(key, myMessage, message))
        }
    }
}

Furthermore, I modify the notification severity (INFO or ERROR) on the event depending on the iFlow message status (COMPLETED or anything else).
def myStatus = "INFO"
if (flowStatus != "COMPLETED") {
    myStatus = "ERROR"
}

return new Event(
    eventType: "CPIIntegrationFlowExecutionFailure",
    resource: new Resource(
        resourceName: flowName,
        resourceType: flowType
    ),
    //eventTimestamp: logEnd,
    severity: myStatus,      // now reflects the actual message status
    category: "NOTIFICATION",
    subject: "CPI Integration Flow '${flowName}': Execution Failure",
    body: "Message '${messageID}' for the '${flowName}' integration flow failed within the time frame starting from '${currentTimeFrameStart}' and ending at '${currentTimeFrameEnd}'. ",
    tags: [
        'cpi:IntegrationArtifact.Id': flowId
    ]
)

Now, let’s take a closer look at the ANS.

SCP Alert Notification Service (ANS)


An actual alert on ANS is set up using three modelling objects: a condition to act upon when met, the actual action to take, and a subscription that combines the condition with a set of actions. You can model all of this manually or import it from my GitHub repo.
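If you model manually, here is a minimal sketch of what such a condition could look like in the import JSON (field names as I understand the ANS export format; double-check against the JSON in my repository):

{
  "conditions": [
    {
      "name": "CPIFailureEvents",
      "description": "Match all events sent by the CPI notification iFlow",
      "propertyKey": "eventType",
      "predicate": "EQUALS",
      "propertyValue": "CPIIntegrationFlowExecutionFailure"
    }
  ]
}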

SAP’s blogs on ANS setup for E-Mail or Microsoft Teams might be interesting too.

Be aware that those blogs refer to the NEO environment. You need to change the ANS endpoint in the iFlow configuration to "/cf/producer/v1/resource-events". You can find the reference here.

For the standard integration package on CPI you need to configure only one condition. The same is true for my modified version. You can skip this step when you import from my provided JSON.


Fig.6 Screenshot from SCP ANS config


On the actions piece you can choose from the pre-configured options provided by SAP:

  • E-Mail or E-Mail with custom SMTP server

  • Microsoft Teams

  • SCP Automation Pilot

  • ServiceNow Case / Incident

  • Slack / Slack threaded

  • Alert Notification Service Store

  • VictorOps or

  • Plain vanilla webhooks with authentication flows like OAuth/Basic Auth etc.


For Azure Monitor I chose a simple webhook, because I used the shared access signature instead of any credentials. If you consider that not secure enough, you can also go for OAuth and register the app with Azure AD.

Be aware that you currently need to access the Alert Notification Service in trial via the same target URL as your subaccount and CPI region. I had problems at first because my subaccount was in the US, but my web UI defaulted to EU due to my location. I simply changed the URL manually from cockpit.eu10.hana.ondemand.com/<target> to cockpit.us10.hana.ondemand.com/<target> while accessing the web UI.

The moving parts to make it happen



Fig.7 Architecture overview to monitor alerts from SAP CPI in Azure Monitor


Azure Log Analytics has a REST API that can be used directly; you can find the reference here. Since the webhook action configuration capabilities on ANS are limited, I put a LogicApp in between, which exposes the needed webhook.

The LogicApp has a built-in connector to call Log Analytics workspaces in an integrated, low-code manner. You just need to specify your Log Analytics workspace, the log name (in my case CPIFailure) and optional additional attributes like the time-generated field. Once you save it, it generates your HTTP webhook URL. You will need that to configure the ANS action. I left a placeholder (YOURPATH) for you to replace in the JSON template for ANS.


Fig.8 Screenshot from LogicApp in Azure


I can simply forward the payload from ANS, because Log Analytics expects key-value pairs and parses them automatically without any transformation needed.
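For reference, a forwarded notification carries the attributes you saw in the Groovy script; a single event might look roughly like this (top-level attributes only, values illustrative):

{
  "eventType": "CPIIntegrationFlowExecutionFailure",
  "severity": "ERROR",
  "category": "NOTIFICATION",
  "subject": "CPI Integration Flow 'MyFlow': Execution Failure",
  "body": "Message '<message-id>' for the 'MyFlow' integration flow failed within the time frame ..."
}

Log Analytics appends type suffixes to the parsed columns, so severity becomes the string column severity_s in the custom log; we will rely on that in the alert query later.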

After firing an iFlow that I set up to fail, you get the following output.


Fig.9 Screenshot from Log Analytics custom log for CPI notifications


Remember, the poll is done by another iFlow on a schedule; mine is set to 5 minutes. Once you run a query, the custom log shows the first data entries and the parsed key-value pairs.


Fig.10 Screenshot from expanded notification on Log Analytics Workspace


Now we can unleash the power of the Kusto Query Language (KQL) on those data sets and orchestrate everything in an Azure Monitor workbook. I have prepared a template for you to plug and play here. Go to Azure Monitor in the portal and navigate to Workbooks in the navigation pane. There are other examples on that screen to get inspired by, if you like.

Simply upload the JSON by clicking Edit on a new (empty) workbook, click the Advanced Editor button (</>), copy my JSON into the view and finish by clicking Apply.


Fig.11 Screenshot from Workbook import


Next, click edit on the workbook to adjust to your own CPI environment.

  • Change the generic Admin UI link

  • You need to fill the parameters for the subsequent visuals:


The workspace parameter is used as input for the integrated Alert wizard to pre-fill your resources (click the percentages under "Failure Rate").

The base URL for CPI is used for the inline links on the iFlows to jump to the CPI Admin UI with the iFlow pre-filtered. This is how I implemented the tight integration between the workbook and the CPI Admin UI.
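As an assumption of what such an inline link resolves to (the exact pattern depends on your tenant and CPI version, so verify against your own Admin UI):

https://<your-cpi-tenant>/itspaces/shell/monitoring/Messages/{"artifact":"<flowId>","status":"FAILED"}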


Fig.12 Short overview of workbook for CPI monitoring


OK, now we are talking! Gradual colour coding for suspiciously often-failing iFlows, notification trends over time, and configurable charts to look at the data from different angles.
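To give you a feeling for what drives such visuals, a notification trend including a failure rate takes only a few lines of KQL against the custom log (a sketch, assuming the log name CPIFailure from the LogicApp setup):

CPIFailure_CL
| where TimeGenerated > ago(24h)
| summarize failed = countif(severity_s == "ERROR"), total = count() by bin(TimeGenerated, 1h)
| extend failureRate = round(100.0 * failed / total, 1)
| render timechart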

The table rows act as interactive filters for the Message Log.

As I said before: the iFlow ID is also a hyperlink that targets the SAP CPI monitor with that ID pre-filtered and the status pre-set to "FAILED".


What about notifying the CloudOps team? The Azure portal allows you to create alerts based on many metrics. In our case, a certain number of failed messages per iFlow would make sense. To get there, you can click the percentage number in the "Failure Rate" column or create the rule plain vanilla from the Azure portal view for alerts.


Fig.13 Create Alert Rule from failure rate on workbook chart


I configured an hourly check on the last hour of monitored iFlow messages.
CPIFailure_CL
| where TimeGenerated > ago(1h) and severity_s == "ERROR"
| summarize count()

In the action group you can do similar things to what SCP ANS offers for its targets. There is native integration for Azure Automation runbooks, Functions, LogicApps and ITSM tools (ServiceNow etc.). The difference here is the ability to slice and dice the data the way you need it.


Fig.14 Screenshot from Action Group creation


You guessed correctly: I configured my e-mail as the alert action again 😄 but this time it only contacts me based on a customizable threshold of failed messages, from a portal where I monitor all my cloud resources, including on-premises or even AWS workloads. Have a look at Azure Arc if you want to dig deeper.

Alternative: calling Azure Log Analytics directly


The above architecture has its merits when you want to manage your SCP notifications for all services in one place and react differently based on the event attributes. If your focus is on Azure services, you might prefer sending directly from your iFlows. Here is how you do it:


Fig.15 Architecture overview with direct send to Azure Log Analytics


To make that work, we again adapt the SAP standard iFlow "Send notifications for failed Message Processing Logs". The Azure Log Analytics REST API requires us to authenticate using an HTTP header containing specific properties signed with the HMAC-SHA256 algorithm. We got past that with the ANS approach, because the LogicApp masks the complexity of the authentication.

You can find more details on the Log Analytics REST API here. Luckily, CPI already ships all the required libraries with its Groovy runtime to create the call from our iFlow. Here is an excerpt of my Groovy scripting for the authentication piece:
import java.nio.charset.Charset
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import javax.xml.bind.DatatypeConverter

static String createAuthorization(String workspaceId, String key, int contentLength, String rfc1123Date) {
    try {
        // Documentation: https://docs.microsoft.com/en-us/rest/api/loganalytics/create-request
        String signature = String.format("POST\n%d\napplication/json\nx-ms-date:%s\n/api/logs", contentLength, rfc1123Date);
        Mac mac = Mac.getInstance("HmacSHA256");
        // the workspace key is base64-encoded and must be decoded before signing
        mac.init(new SecretKeySpec(DatatypeConverter.parseBase64Binary(key), "HmacSHA256"));
        String hmac = DatatypeConverter.printBase64Binary(mac.doFinal(signature.getBytes(Charset.forName("UTF-8"))));
        return String.format("SharedKey %s:%s", workspaceId, hmac);
    } catch (NoSuchAlgorithmException | InvalidKeyException e) {
        throw new RuntimeException(e);
    }
}
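For context, here is a minimal sketch of how the required headers could be set on the CPI message before the HTTP adapter posts to https://<workspaceId>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01 (workspaceId and sharedKey are placeholders for externalized parameters):

def body = message.getBody(String)                            // JSON payload built earlier in the iFlow
def df = new java.text.SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US)
df.setTimeZone(TimeZone.getTimeZone("GMT"))
def rfc1123Date = df.format(new Date())
message.setHeader("x-ms-date", rfc1123Date)
message.setHeader("Log-Type", "CPIFailure")                   // becomes the custom log CPIFailure_CL
message.setHeader("Content-Type", "application/json")
message.setHeader("Authorization",
        createAuthorization(workspaceId, sharedKey, body.getBytes("UTF-8").length, rfc1123Date))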

The complete iFlow is part of the CPI package shared on my GitHub repo. So it is up to you to decide whether you want to use ANS in between or call Azure directly.

By the way, the mechanism to construct the authentication header (often called a shared access signature) is adaptable to other Azure service REST APIs, for instance the one for Azure Service Bus.
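To sketch that adaptability (a hypothetical helper, reusing the imports from the excerpt above; not part of the shipped iFlow): a Service Bus SAS token signs the URL-encoded resource URI plus an expiry timestamp instead of the request date.

static String createServiceBusSas(String resourceUri, String keyName, String key, long validSeconds) {
    String encodedUri = URLEncoder.encode(resourceUri, "UTF-8")
    long expiry = System.currentTimeMillis().intdiv(1000L) + validSeconds
    String stringToSign = encodedUri + "\n" + expiry
    Mac mac = Mac.getInstance("HmacSHA256")
    // note: the Service Bus key is used as plain UTF-8 bytes, unlike the base64-decoded Log Analytics key
    mac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA256"))
    String signature = URLEncoder.encode(
            DatatypeConverter.printBase64Binary(mac.doFinal(stringToSign.getBytes("UTF-8"))), "UTF-8")
    return "SharedAccessSignature sr=${encodedUri}&sig=${signature}&se=${expiry}&skn=${keyName}"
}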

Thoughts on production readiness



  • More sophisticated dashboarding on the Azure Monitor Workbook: I think its purpose is not to replace the CPI Admin UI. You will still go there for actual tracing, replicating issues and chasing down logs. The proposed approach serves more to give your CloudOps team configurable reports, access to Azure data tooling, and a way to consolidate your cloud monitoring in the single place where you probably monitor all your other Azure workloads too. With Azure Arc that even spans hyperscalers (AWS, GCP, etc.). That is a topic for another day, though.

  • Authentication: It would be worth elevating the ANS configuration of the service key from Basic Auth to OAuth. The access signature of the LogicApp is secure as long as only admins have access to the ANS setup. If not, I already mentioned that you could swap to a webhook with OAuth and register ANS as an application with Azure AD. The same is true for the direct call to Azure Log Analytics. In order to use Azure AD you would need to register your CPI tenant with AAD. However, that is material for another post on high-security authentication from BTP to Azure services 🙂

  • Sending only aggregated error messages to Azure Monitor: This reduces cost but limits the analysis capabilities tremendously, because you cannot create metrics for completed vs. failed messages, and even the aggregated number of messages is "hidden" in plain text in the notification body. Parsing it from there would be possible in the LogicApp, for instance. But even then, you have only one event registered although you might have ten mentioned in the body. That is why I proposed a modified version of the standard iFlow.

  • Enhancing the standard iFlow for ANS: The standard integration content always sends alerts with severity INFO. It might be worth extending that and flagging certain source iFlows differently. You might consider a synchronization flow failure (e.g. employee sync from SuccessFactors to S/4) more critical than a GET request for some master data from C4C. For such scenarios you could come up with logic that changes the severity level based on the source iFlow. The mapping could be maintained in a CPI Value Mapping artifact, keeping it configurable outside of the Groovy code (see the sketch after this list). Another approach could be based on the error code: maybe an HTTP 404 is different from a 500? For my implementation I evaluate only the message status for now (COMPLETED vs. ERROR, RETRY, etc.).

  • Direct Azure Log Analytics call vs. ANS: The proposed solution uses the SCP Alert Notification Service to showcase the interoperability and the possibility to build upon your existing SCP strategy. Of course, technology-wise you could bypass it altogether and send the event directly to Azure Log Analytics via its API. ANS has its merits when you want to manage your SCP notifications for all services in one place and react differently based on the event attributes. One use case could be sending all messages to Azure Log Analytics but adding the ServiceNow action for very critical errors. For that, you could add another condition to the ANS configuration and modify the code in the iFlow to fill the eventType based on your custom logic for identifying critical issues.


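Picking up the Value Mapping idea from the list above, here is a minimal sketch of how the severity lookup could replace the plain status check in the Groovy script (the agency and identifier names are placeholders you would define in your own Value Mapping artifact):

import com.sap.it.api.ITApiFactory
import com.sap.it.api.mapping.ValueMappingApi

// Look up a per-iFlow severity from a Value Mapping artifact and
// fall back to the status-based logic when no mapping is maintained.
String resolveSeverity(String flowId, String flowStatus) {
    def api = ITApiFactory.getApi(ValueMappingApi.class, null)
    def mapped = api.getMappedValue("CloudOps", "FlowId", flowId, "CloudOps", "Severity")
    return mapped ?: (flowStatus != "COMPLETED" ? "ERROR" : "INFO")
}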
Given that many CPI customers I have seen use the E-Mail adapter for productive scenarios, I would consider my prototype in its simplest form, without any changes to the SAP CPI integration content, production-ready even today. What do you think?

Final Words


Today I showed you how to enhance your CPI monitoring and alerting with Azure Monitor. To do that, we reused the existing standard integration content for the SAP Cloud Platform Alert Notification Service (ANS), adapted it slightly to send all messages (not only failed ones) without aggregation, configured it for CloudFoundry endpoints, configured ANS to send notifications via a webhook to Azure, created visualisations on the log data with Azure Monitor workbooks, and finally created an alert rule to notify us based on failure thresholds. Not too bad, right?

Find the adapted iFlow on my GitHub repo and the workbook in the official Azure Monitor Community.

Find the setup guide for SCP Alert Notification Service here.

As always, feel free to leave feedback and ask lots of follow-up questions.

 

Best Regards

Martin