Principal propagation in a multi-cloud solution between Microsoft Azure and SAP, Part V: Production readiness with unified API- and infrastructure management
Note: This blog post is the fifth part of a tutorial series. If you arrived here without reading the first, second, third and fourth part, please do so before you continue, and then come back here again.
Part I starts with principal propagation from Microsoft Azure by calling a simple Web Service deployed on SAP Business Technology Platform (BTP). Part II of this blog series extends the scenario by propagating the Azure-authenticated user via BTP and SAP Cloud Connector to an SAP Gateway system. Part III adds a business application to the scenario by implementing a chatbot in Microsoft Teams with the Microsoft Bot Framework V4 SDK, SAP BTP Integration Suite, and Core Data Services in the SAP Gateway system. A live demo of this scenario is available in episode #31 of the great SAP on Azure Video podcast series (starting at min 23:30) from Holger Bruchelt, Goran Condric, and Robert Boban. Part IV uses a “low-code” approach for implementing the chatbot of part III with Microsoft Power Platform, demonstrated in episode #40 of the SAP on Azure Video Podcast series. Part V (this blog post) looks at different aspects of production readiness, such as API management, monitoring, and alerting. Part VI turns the scenario in the opposite direction by propagating the SAP-authenticated user of a BTP business application to call the Microsoft Graph API and retrieve the user’s Outlook events. Finally, part VII looks at principal propagation from Microsoft Power Platform to SAP in the context of making Remote Function Calls (RFCs) and using Business Application Programming Interfaces (BAPIs) with the Kerberos protocol. See episode #142 of the SAP on Azure video podcast series for a live demo of this scenario.
API Management and Monitoring
In part II of this blog series, the scenario has been extended with an OData service exposed by the SAP backend system in the corporate network. The service endpoint was never directly accessed from the cloud. Parts II and III demonstrated a setup using SAP Cloud Connector (SCC) to propagate the authenticated user (“principal”) from SAP BTP, part IV proposed an alternative solution with the Microsoft On-Premises Data Gateway (OPDG) connected to Microsoft Power Platform. SCC and OPDG are both security-critical components deployed in the corporate network and share a set of common features to secure the access to the on-premise product catalogue data in the scenario:
- Data between the cloud and on-premise is sent over an encrypted tunnel.
- The corresponding cloud services for SCC (Connectivity service in BTP) and OPDG (Gateway Cloud Service in Azure) only accept requests from trusted instances in the corporate network.
- The tunnel is established from the corporate network to the cloud. To allow SCC or OPDG to connect to their cloud service, the corporate firewall must allow outbound connections to the IP addresses of the region that SCC or OPDG connect to. Communication is over port 443 (HTTPS), and there is no need to open any ports in the corporate firewall for incoming requests.
- SCC and OPDG are typically deployed in a Demilitarized Zone (DMZ) of the corporate network and provide a shared service across many scenarios for on-premise data access.
- Both can be configured for high-availability (see details here for SCC and here for OPDG) and provide audit logging.
Although these features already provide a solid foundation for a production environment, some capabilities are still missing. While talking to many customers about integration scenarios similar to the one in this blog series, I’ve heard a common concern about overload protection when exposing OData services in the backend to a larger – or sometimes even unknown – number of internal or external API consumers, such as business partners or customers. So far in this setup, managing access to the data occurred only in the backend system by using the Data Control Language (DCL) to define the authorization model for the Core Data Services (CDS) view of the product catalogue. However, the backend is still not protected from a malicious denial-of-service attack or an unintentional overload by too many concurrent service requests. Neither SCC nor OPDG enforce any API rate limits or quotas over the encrypted tunnel, nor can they mitigate the effects of spikes by throttling the number of requests from API consumers. Such traffic management capabilities are usually provided by a different service, also known as API Management.
API Management (APIM) can enforce flexible and fine-grained controls for consuming the APIs exposed by the backend system. Both BTP and Azure provide corresponding platform services. There are already excellent blogs like this from Niklas Miroll explaining how to configure SAP’s APIM service on BTP to implement the token exchange between Azure AD and XSUAA for the principal propagation in parts I, II and III in this scenario. Therefore, this blog takes a look at the Azure APIM service and how to integrate it to control the access to the backend services in this scenario.
Another important requirement for production readiness is monitoring. By introducing APIM, a new component is added to the end-to-end call chain from the bot in the cloud to the backend system on-premise. Any performance issues or unplanned outages will cause a service downtime for the end users. Monitoring of all critical components in such a complex, distributed system landscape is required to identify issues proactively, and combined with reliable alerting it can help to resolve them quickly. However, monitoring on-premises resources alongside the cloud infrastructure and services feels disjointed and cumbersome to manage. Azure Arc is a solution to simplify governance and management in hybrid environments by making virtual machines, Kubernetes clusters, and databases deployed on-premise or in other clouds appear as if they were running in Azure. This allows the use of familiar Azure management services, such as Azure Monitor for monitoring and alerting, Microsoft Defender for Cloud for threat protection, or Azure Policy for enforcement of organizational standards and compliance rules. The following diagram provides an overview of the new components introduced by APIM and centralized monitoring based on the scenario in part IV:
- Azure APIM is composed of a management plane for configuring the service, a data plane (also referred to as the APIM gateway) that proxies the requests between the API consumer and provider, and a portal for developers to discover and use the APIs. All three components are deployed in Azure by default, causing all requests to flow through the cloud regardless of where the APIs are hosted. For APIs implemented by backends in the cloud, this deployment option offers the best operational simplicity. However, for clients on the corporate network consuming APIs of backends hosted on-premises, the detour to Azure comes at the cost of an increased latency and additional data transfer fees.
- To optimize the traffic flow in these cases, Azure APIM supports a hybrid deployment model with a self-hosted gateway (SHGW). The SHGW is a containerized version of the APIM gateway component which can be deployed close to the APIs on-premise, either on a single Docker host, or a cluster running Kubernetes. The central management plane remains in Azure with an APIM instance that the decentralized SHGW(s) connect to. Any changes made to the configuration of an API are immediately synchronized with the connected SHGW that manages the API.
- Kubernetes provides a highly flexible, scalable and reliable orchestration environment for containerized workloads such as the SHGW. Nevertheless, the cluster itself is also a critical resource and should be closely monitored. To do so in a unified and central way, the Kubernetes cluster will be Azure Arc-enabled so that it can be managed centrally with Azure Monitor and other services.
- The end-to-end message flow (steps 1 to 6 in the diagram above) does not change with Azure APIM intercepting the requests to the backend system. For a detailed explanation of each step please refer to part IV of the blog series. Only the backend service URLs of the connections managed by OPDG must be adapted to the new architecture by changing them to the endpoints exposed by the SHGW on Kubernetes.
- By introducing a new “man-in-the-middle” with the SHGW, the trust relationships between all components in the request chain from OPDG via SHGW to the SAP backend must be configured accordingly. As in the previous parts of this blog series, the Corporate Root CA remains the common trust anchor for all systems on-premise. It is responsible for issuing the certificates for TLS communication and other purposes.
Please ensure to meet the following prerequisites before continuing with the setup instructions for this part of the blog series:
- Create a new API Management instance or use an existing one in your Azure subscription. The following steps assume that the instance is deployed to the existing resource group ProductSearchBotRG created in step 77 of part III, and that it has the name SAPAPIMGMT.
- Install or upgrade your Azure Command-Line Interface (CLI) to a version >= 2.16.0 and <= 2.29.0. You can check your current version with
- Make sure you have access to a Kubernetes cluster that supports Azure Arc as documented here.
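The CLI version check from the prerequisites can be sketched with the following commands (note that az upgrade is available from Azure CLI 2.11.0 onwards; with older versions, reinstall manually):

```shell
# Print the installed Azure CLI version (look for the "azure-cli" entry).
az --version

# Upgrade the CLI in place if it is outside the supported range.
az upgrade
```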
Local development environment setup
The following diagram shows the setup of the local development environment for this part of the blog series:
The SHGW runs on a single-node Kubernetes cluster. I recommend using the Kubernetes environment from Docker Desktop which can be simply installed by enabling the corresponding feature in the preferences. If you run on Windows, it is highly recommended to configure Docker Desktop with the Windows Subsystem for Linux (WSL) as documented here.
I also switched for this part of the blog series from the Hyper-V managed AS ABAP 7.52 SP04 VM image introduced in part II to Docker (Desktop) for running the ABAP Platform 1909 Developer Image available from Docker Hub. The configuration steps for the backend do not change with this newer ABAP version. However, the following technical parameters change:
| Parameter | Old value | New value |
| --- | --- | --- |
| SAP IP address | 192.168.99.10 (Hyper-V VM) | 172.17.0.x (Docker container) |
| Default HTTPS port | 44300 | 50001 |
Note: The IP address of your local SAP system may be different. By default, the container will use the Docker bridge network. To find out the assigned IP address, run the command docker network inspect bridge. In the output, look for the element Containers to find your SAP system’s IP address (here 172.17.0.2):
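Alternatively, Docker's --format flag can extract the address directly. The container name a4h below is an assumption; substitute your SAP container's actual name or id:

```shell
# List all containers attached to the bridge network with their IPv4 addresses.
docker network inspect bridge \
  --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{println}}{{end}}'

# Or query a single container directly (container name "a4h" is an assumption).
docker inspect a4h --format '{{.NetworkSettings.Networks.bridge.IPAddress}}'
```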
The setup instructions in this part are based on the following local development environment setup:
- Windows 11 on an Intel i7 with 16 GB RAM
- Docker Desktop 4.3.2 (72729) and configured to use the WSL 2 based engine
- Kubernetes v1.22.5 enabled in Docker Desktop
- SAP system based on the ABAP Platform 1909 Developer Image running in Docker Desktop
- On-Premises Data Gateway v3000.101.16
- SAP Logon 760
- kubectl v1.22.5
- JMeter v5.4.3
To successfully resolve the fully-qualified host names (FQDNs) used in this scenario, the following entries were added to the local hosts file (C:\Windows\System32\drivers\etc\hosts on Windows or /etc/hosts on Linux) on the development machine:
```
...
# APIM Self-Hosted Gateway
127.0.0.1    sapshgw.bestrun.corp
# SAP backend (running on Docker)
172.17.0.2   vhcala4hci.bestrun.corp
...
```
Make sure to remove any existing entries in this file from the previous parts of this blog series (for example from step 11 of part IV). Source code and configuration files for this part of the blog series are available from this GitHub repository and branch.
Now let’s get started with connecting the local Kubernetes cluster to Azure.
Connect the on-premise Kubernetes cluster to Azure Arc
|01||Open a terminal and execute the command
|02||Login to your Azure subscription with the Azure CLI. A browser window opens to process the login.|
|03||Register the required providers for Azure Arc-enabled Kubernetes and wait for them to complete the registration in your subscription. You can monitor the process with the corresponding Azure CLI commands; the RegistrationState must be Registered.|
|04||Run the command to connect your on-premise cluster to Azure Arc and wait for the operation to complete.|
|05||Verify the successful connection to Azure Arc by logging into the Azure Portal. Open the resource group ProductSearchBotRG. You should find a new resource of type Kubernetes – AzureArc and name DockerDesktopK8S.|
|06||Also verify the successful provisioning of the Azure Arc agents in your local Kubernetes cluster. You will see the pods deployed by the previous command in the newly created Kubernetes namespace azure-arc.|
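As a summary, the CLI commands used in this section can be sketched roughly as follows. Cluster and resource group names are the ones used in this blog; az connectedk8s requires the connectedk8s CLI extension, which the CLI offers to install on first use:

```shell
# Sign in to the Azure subscription (opens a browser window).
az login

# Register the resource providers required for Azure Arc-enabled Kubernetes,
# then monitor until each one reports "Registered".
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation
az provider show --namespace Microsoft.Kubernetes --query registrationState
az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState
az provider show --namespace Microsoft.ExtendedLocation --query registrationState

# Connect the cluster of the current kubectl context to Azure Arc.
az connectedk8s connect --name DockerDesktopK8S --resource-group ProductSearchBotRG

# Verify that the Arc agents are up and running.
kubectl get pods -n azure-arc
```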
|07||Optionally install (or update) the Azure CLI extension for cluster extensions. The k8s-extension provides a platform for different extensions to be installed and managed on Azure Kubernetes Service (AKS) and Azure Arc-managed clusters. This is needed if you want to deploy the SHGW with the Azure CLI; we’ll use the Azure Portal in the next step.|
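Installing or updating the extension is a one-liner; the --upgrade flag installs the extension if it is missing and updates it otherwise:

```shell
az extension add --upgrade --name k8s-extension
```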
You have now connected your on-premise Kubernetes cluster to Azure Arc, which allows you to make use of common Azure services such as Azure Monitor or Azure Policy to manage the cluster. With an Arc-enabled cluster, you can install extensions such as the API Management extension which simplifies the provisioning of the SHGW to the on-premise cluster. You will also create a Log Analytics Workspace to capture the monitoring logs of the new SHGW with Azure Monitor.
Deploy the Self-Hosted API Management Gateway
|08||Go to Azure Portal, select the ProductSearchBotRG resource group and click Create.|
Enter “Log Analytics Workspace” into the search bar and click Create.
Provide a name for the new workspace (e.g. saplaws).
Click Review + Create and after successful validation on Create.
Go back to the ProductSearchBotRG resource group and open the DockerDesktopK8S Kubernetes cluster resource.
Select Extensions from the left-side navigation menu and click on Add.
Select API Management Extension (preview) from the list of available extensions.
Click on Create.
Select your Azure API Management instance from the list. The following steps use an instance with the name SAPAPIMGMT.
Click on Create new to provide the Gateway name and register the new self-hosted gateway with your APIM instance. The following steps assume a gateway with the name sapshgw.
Enter the value ‘default‘ for the (Kubernetes) Namespace the self-hosted gateway will be deployed to.
Keep the default selection for Service Type (Load Balancer) and HTTP/HTTPS port numbers.
Click on Next: Monitoring >
Activate the Enable monitoring checkbox and select the previously created Log Analytics workspace from the list.
Click Review + create.
|13||Review the settings and click Create.|
Wait for the deployment to complete.
Then click Manage gateway.
|15||On the overview page for the newly provisioned APIM self-hosted gateway take a note of the upper right corner of the dashboard. It should report a heartbeat status sent to Azure by the new SHGW in your on-premise cluster.|
Verify the status of the SHGW deployment in your Kubernetes cluster by running the command
It lists the new deployment for the SHGW instance in the default namespace. Make sure that it reports 1 AVAILABLE instance.
Take a note of the NAME listed in the output. Use this name in the command of the next step.
Verify the deployment status of the SHGW service which exposes the HTTPS port of the gateway instance on the localhost. Run the command
If there is NO service in the command output that exposes PORTS 80 (HTTP) and 443 (HTTPS), go to the next step. Otherwise continue with step 19.
Note: Only execute this step if there is NO service exposing PORTS 80 and 443. In this case, run the following command from the Git repository’s subfolder /Kubernetes:
Note: If you chose a different deployment name you have to modify the selector’s instance name in line 16 of this file accordingly.
Verify that the ports 80 and 443 are now mapped to the SHGW instance by running the command
again. It should now list the PORTS in the command output.
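The kubectl commands behind these verification steps can be sketched as follows. The YAML file name is an assumption; use the file from the repository's /Kubernetes subfolder and the names reported by your own cluster:

```shell
# The SHGW deployment should report 1 AVAILABLE replica.
kubectl get deployments -n default

# Check whether a service already exposes ports 80 and 443.
kubectl get services -n default

# Only if no such service exists: create it from the repository's
# /Kubernetes subfolder (the file name below is an assumption).
kubectl apply -f service.yaml

# The service should now list ports 80 and 443.
kubectl get services -n default
```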
The infrastructure is now in place to manage the backend system’s API which will be defined next using the central APIM management plane in Azure Portal.
Define the APIs for SAP in Azure API Management
Navigate back to your SAPAPIMGMT APIM instance, select APIs -> APIs from the left-hand navigation menu.
Click Add API and click the HTTP tile.
|20||Switch from Basic to Full and enter the following values:
Select the newly created API from the list.
Click Add operation.
The first operation exposes the SAP Gateway’s OAuth server token endpoint.
Enter the following values:
|23||Repeat the previous step and add a second operation for the OData service to access the product catalogue:
Configure the backend in APIM representing the SAP system in the scenario.
Select APIs -> Backends from the left-hand navigation menu.
|25||Enter the following values:
Note: For security reasons activate both checkboxes for the certificate validation. Validation requires a trusted and valid certificate presented by the SAP backend to the SHGW which will be configured in the next steps.
Select APIs -> APIs from the left-hand navigation.
Click SAP A4H (or another name chosen in step 20) from the list of APIs, select All operations and click the </> symbol to open the Policy code editor.
|27||Add the following line to the
The APIs and backend system are now defined in Azure API Management. The configuration is synchronized with the SHGW by choosing it as the Gateway for the API. Because certificate validation is activated for the backend, the SSL configuration in the SAP backend must be updated to contain the correct Subject Alternative Names (SANs). Otherwise SHGW will not establish a secure connection to the backend system.
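The policy line added in step 27 most likely points all operations of the API to the backend entity created in step 25. A hedged sketch of the resulting inbound policy section follows; the backend-id value sap-a4h is an assumption, so use whatever name you gave the backend:

```xml
<inbound>
    <base />
    <!-- Route all operations of this API to the SAP backend entity
         defined in step 25 (the id "sap-a4h" is an assumption). -->
    <set-backend-service backend-id="sap-a4h" />
</inbound>
```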
Setup trust between APIM Self-Hosted Gateway and SAP system
The SHGW establishes a TLS connection to the SAP system using the base URL of the backend configured in APIM. SHGW resolves the hostname of the URL (“vhcala4hci.bestrun.corp”) into the Docker-internal IP address. To successfully pass the certificate validation, this IP address must be included in the Subject Alternative Names (SANs) of the certificate in the SAP system’s SSL Standard PSE.
The SHGW, running on the local Docker Desktop-hosted Kubernetes cluster, and the SAP backend (also running on Docker Desktop) communicate over the default Docker bridge network. By default, this network has the address range 172.17.0.0/16. If you haven’t already done so, find the IP address assigned to your SAP system’s Docker container by running the command docker network inspect bridge as explained above, or docker inspect <container id or name of your SAP system>. In the output, look for the value of the IPAddress field.
You will start by repeating the steps from part IV to configure the TLS certificate in the SAP system with the IP address added to the SAN attribute:
Login with SAP GUI to the backend system and start transaction STRUST. Switch to the change mode (Ctrl+F1) and double-click on the SSL server Standard node.
Right-click and select Replacement Wizard from the context menu.
Enter the distinguished name (DN) of the new SSL certificate with the full-qualified domain name (FQDN) of your SAP system:
OU=<Org Unit>, O=<Organization>, C=US, CN=<FQDN>
For the chosen FQDN using the ABAP developer trial Docker image CN should be set to “vhcala4hci.bestrun.corp“, which results in the following DN as shown in the screenshot:
OU=SAP Team, O=BestRun, C=US, CN=vhcala4hci.bestrun.corp
In the second entry field, enter the Subject Alternative Names (SANs). SHGW expects the IP address of the SAP backend, which must be added to the FQDN by using a colon (“:”) as a separator as follows:
“vhcala4hci.bestrun.corp:<Docker-internal IP address of the SAP system>”
Click Choose Distinguished Name.
|30||Keep the proposed algorithm and key length and click on Select Algorithm.|
|31||Click Create Key Pair.|
Click Save as local file and save the new Certificate Signing Request (CSR) as a new file, e.g. vhcala4hci.csr, to your OpenSSL (see part II) installation’s subdirectory trustedca/csr.
Open a command line and change to the subdirectory trustedca of the OpenSSL installation. Run the command
to sign the CSR.
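The signing can be sketched with OpenSSL as follows. So that the sketch runs end to end, it also creates a throwaway CA and CSR; in the real scenario, reuse the trustedca key pair from part II and the CSR exported from STRUST above. The file names, the IP address 172.17.0.2, and the subjectAltName layout are assumptions to adjust to your environment:

```shell
# Throwaway CA -- replace with the trustedca key pair created in part II.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/C=US/O=BestRun/CN=BestRun Trusted CA" \
  -keyout trustedca.key -out trustedca.crt

# Throwaway CSR -- replace with the CSR saved from STRUST (vhcala4hci.csr).
mkdir -p csr certs
openssl req -newkey rsa:2048 -nodes -keyout csr/dummy.key \
  -subj "/C=US/O=BestRun/OU=SAP Team/CN=vhcala4hci.bestrun.corp" \
  -out csr/vhcala4hci.csr

# SAN extension carrying the FQDN and the SAP container's IP address.
printf 'subjectAltName = DNS:vhcala4hci.bestrun.corp, IP:172.17.0.2\n' > san.ext

# Sign the CSR with the corporate root CA, including the SAN extension.
openssl x509 -req -in csr/vhcala4hci.csr -CA trustedca.crt -CAkey trustedca.key \
  -CAcreateserial -days 365 -extfile san.ext -out certs/vhcala4hci.crt
```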
Go back to the replacement wizard.
Click Load local file and select the signed certificate response file
Open your OpenSSL Trusted CA’s signing certificate (trustedca/trustedca.crt) in a text editor.
Copy the content into the clipboard and paste it at the end of imported certificate response file content (after the
Click Import Certificate Response.
|36||Click Activate New Key Pair and Certificate.|
|37||Click Back (F3). The updated SSL server Standard PSE now has a certificate with a SAN containing the FQDN and IP Address, signed by the common Root CA for the scenario.|
The backend in APIM has been registered with all certificate validation options, including the certificate chain. This means SHGW also needs to trust the corporate CA’s root certificate that was used in step 33 to sign the SAP system’s certificate.
Run the following command
from the trustedca root directory to create a PKCS12 (.PFX) file of the Root CA certificate.
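The PKCS12 export can be sketched like this (generating a throwaway CA so the sketch is self-contained; use your real trustedca files instead, and choose your own password in place of changeit):

```shell
# Throwaway CA -- use trustedca.crt/trustedca.key from part II instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/C=US/O=BestRun/CN=BestRun Trusted CA" \
  -keyout trustedca.key -out trustedca.crt

# Wrap only the CA certificate (no private key) into a password-protected
# PKCS12 (.PFX) file for upload to API Management.
openssl pkcs12 -export -nokeys -in trustedca.crt \
  -out trustedca.pfx -passout pass:changeit
```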
To establish trust in SHGW to the corporate root CA, go to Azure Portal and select the APIM instance from the ProductSearchBotRG resource group.
Go to Security -> Certificates from the left-hand navigation menu.
Note: Adding a trusted CA by uploading it to API Management service via the CA certificates tab is not supported. To establish trust you have to configure a specific client certificate so that it’s trusted by the gateway as a custom certificate authority.
Enter an Id (e.g. “TrustedCA”) for the new certificate. Select Custom and upload the previously generated trustedca.pfx file. Provide the password that you entered in the previous step.
Use the APIM Gateway Certificate Authority REST APIs to create and manage a (custom) CA for the SHGW. This requires a service principal in Azure AD with sufficient permissions.
In Azure Portal, go to Azure Active Directory, and select App registrations from the left-hand navigation menu.
Click New registration.
Enter the name of the new application (e.g. “APIManagementRESTClient”).
From the Overview page of the new application, copy the Application (client) ID to a notepad.
Select Certificates & secrets from the navigation menu of the new application. Switch to the tab Client secrets.
Click New client secret.
Enter a description (e.g. “RESTAPISecret”) and click Add.
|44||Copy the value of the new secret to the notepad as well.|
|45||Managing certificates for a SHGW requires permission for the action
To assign this permission to the application, select the APIM instance (e.g. SAPAPIMGMT) from the ProductSearchBotRG resource group.
Select Access control (IAM) from the left-hand navigation menu, switch to the Role assignments tab, and click Add.
Choose Add role assignment from the drop-down menu.
From the Role list, select the API Management Service Contributor role.
Then switch to the Members tab.
Click Select members.
Search for the name of the new application (e.g. APIManagementRESTClient).
Then click Review + assign twice.
After successful completion of the role assignment, open a REST client of your choice (e.g. Postman).
If you are using Postman, you can download and import the Postman collection for this part of the blog series from here. From the collection’s menu (“…”), select Edit.
Switch to the Variables tab and maintain the following values in the current value row:
Click Save (or press Ctrl+S).
Select the first request from the collection (“Request Access Token for REST API Call“) and click Send.
The response includes the Access Token returned from Azure AD for the service principal representing the API Management REST Client.
A script in Postman sets the response in a collection variable to make the token available for the request in the next step.
Select the second request from the collection (“Assign CA cert to SHGW“) which sends a request according to the APIM Gateway CA management REST API documentation to configure the CA certificate for the SHGW.
A script takes the token from the previous step and adds it to the Authorization header of this request.
The response should return code 201 with a JSON-formatted output.
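If you prefer a plain HTTP client over Postman, the two requests can be sketched with curl as shown below. Treat this as an unverified sketch: the placeholders in angle brackets come from the app registration steps above, the gateway and certificate ids from the earlier steps, and the api-version is the one current at the time of writing:

```shell
# Request a management-plane access token for the service principal.
TOKEN=$(curl -s -X POST \
  "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<application-client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=https://management.azure.com/.default" | jq -r .access_token)

# Assign the uploaded CA certificate (Id "TrustedCA") as a custom
# certificate authority of the self-hosted gateway "sapshgw".
curl -s -X PUT \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/ProductSearchBotRG/providers/Microsoft.ApiManagement/service/SAPAPIMGMT/gateways/sapshgw/certificateAuthorities/TrustedCA?api-version=2021-08-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"isTrusted": true}}'
```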
Trust between SHGW and the SAP backend system is now configured to successfully establish a TLS connection between these systems.
Setup trust between On-Premises Data Gateway and APIM Self-Hosted Gateway
Trust is also required between the On-Premises Data Gateway and the SHGW. By default, SHGW uses a self-signed certificate which is not trusted by the On-Premises Data Gateway. You will replace it with a new certificate signed by the corporate root CA.
Change to your corporate root CA’s subdirectory (e.g. /trustedca) that you created in step 1 of part II.
Generate a new TLS certificate for the SHGW with the OpenSSL command
Below are suggested values for the new certificate’s Distinguished Name (DN). Make sure to set the Common Name (CN) attribute to your SHGW FQDN (e.g. sapshgw.bestrun.corp):
Sign the certificate signing request with the following command:
Create a PKCS12 file from the private key and certificate of the new SHGW key pair with the command
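The OpenSSL commands for this section can be sketched end to end as follows. A throwaway CA stands in for the trustedca key pair from part II; the file names and the password changeit are assumptions:

```shell
# Throwaway CA -- reuse trustedca.key/trustedca.crt from part II instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/C=US/O=BestRun/CN=BestRun Trusted CA" \
  -keyout trustedca.key -out trustedca.crt

mkdir -p csr certs

# Key and CSR for the SHGW; the CN must match the gateway FQDN.
openssl req -newkey rsa:2048 -nodes -keyout certs/sapshgw.key \
  -subj "/C=US/O=BestRun/CN=sapshgw.bestrun.corp" -out csr/sapshgw.csr

# Sign the CSR with the corporate root CA.
openssl x509 -req -in csr/sapshgw.csr -CA trustedca.crt -CAkey trustedca.key \
  -CAcreateserial -days 365 -out certs/sapshgw.crt

# Bundle key and certificate into a PKCS12 file for upload to APIM.
openssl pkcs12 -export -inkey certs/sapshgw.key -in certs/sapshgw.crt \
  -out certs/sapshgw.pfx -passout pass:changeit
```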
Select the APIM instance in Azure Portal, go to Security -> Certificates, and select the tab Certificates.
|55||Enter an Id for the new certificate (e.g. SHGW), switch to Custom, and upload the PFX file created in the /certs subfolder in step 53.|
|56||Go to Deployment + infrastructure -> Gateways from the left-hand navigation and select your SHGW from the list.|
Select Hostnames from the left-hand navigation menu.
|58||Enter the following values:
The communication path from On-Premises Data Gateway via SHGW to the SAP backend is now configured with trusted TLS certificates issued by the corporate root CA. Now the Power Automate Flows must be updated from the backend URLs to the new APIM endpoints of the SHGW.
Updating the Power Platform solution
The Git repository contains an updated version of the solution containing the PVA chatbot and Power Automate flows. The URLs of the connections changed from the endpoints of the SAP backend to the APIs exposed by the SHGW (https://sapshgw.bestrun.corp/sap-a4h). You will update the solution in your Power Platform environment from part IV by importing the new file and applying a few manual changes.
Sign in to Power Apps. Select Solutions from the menu.
Click Browse and open the file Product_Search_Bot_Solution.zip from the ProductSearchBot subfolder of the Git repository’s part5 branch.
|62||Wait until the updated solution has been imported successfully. Then select the Product Search Bot from the list.|
Select Environment Variables from the left-hand navigation menu.
Select BotAppClientID from the list and overwrite the Default Value with the client ID from step 65 of part IV.
|64||Repeat the previous step for the following remaining environment variables:
Select Cloud Flows from the left-side navigation menu.
Select the Exchange Token flow from the list.
|67||Select the 4th step (“Connection”) from the flow.|
|68||Select Token Endpoint Connection from the list of existing connections.|
|69||Click Save, and then go back to the previous page by clicking the back arrow.|
|70||Select the Call SAP OData Service flow from the list.|
|71||Select the 2nd step (“Connection”) from the flow.|
|72||Select OData Service Connection from the list of existing connections.|
|73||Click Save, and then go back to the previous page by clicking the back arrow.|
|74||Select the action menu (…) of both flows in the list and select Turn on.|
|75||Select Chatbots from the left-hand navigation menu and click the Product Search Bot from the list.|
Select Publish from the left-side navigation.
Test the Chatbot
To successfully search for items in the product catalogue, the user signed in to Teams and propagated to the SAP backend must have permission to access at least one product category. The product catalogue data changed with the newer release of the ABAP developer system used in this part of the blog series. Therefore you start by checking the user’s permissions in the backend and then continue with launching the chatbot in Teams.
Logon to your SAP system with SAP GUI and start transaction PFCG (Role Maintenance).
Enter PRODUCT_SEARCH for the role name and click the Pencil symbol.
|78||Click the Pencil symbol next to Change Authorization Data.|
|79||Browse to the PDCATEGORY Authorization Field and click the Pencil symbol.|
|80||Select a product category of your choice (e.g. Desk Lamps) and click Save.|
|81||Click Save and then Generate to update the role profiles.|
Go back to the browser and select Manage -> Channels from the left-side navigation.
Click the Microsoft Teams tile.
|83||Click Availability options.|
|84||Click Copy link.|
Paste the link into a new browser session or incognito session.
Use the Teams web application for testing.
|86||Log on to Teams with the user who is mapped via the email address to a user in the SAP system.|
To see the API requests from Power Platform to SAP for the access token and catalog data processed by APIM, execute the following commands to follow the logs in SHGW:
This returns the name of the Kubernetes pod running SHGW. The pod runs two containers: one for the APIM gateway itself, and another one for the monitoring agent.
To see the log output of the APIM gateway container, run the command
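A sketch of the log-following commands is shown below. The pod and container names depend on the APIM extension's deployment and are placeholders here; verify the actual names with kubectl describe pod:

```shell
# Find the SHGW pod in the default namespace.
kubectl get pods -n default

# List the two containers of the pod (gateway and monitoring agent);
# replace the placeholder with the pod name from the previous command.
kubectl get pod <shgw-pod-name> -n default -o jsonpath='{.spec.containers[*].name}'

# Follow the logs of the gateway container (container name is an assumption).
kubectl logs -f <shgw-pod-name> -c <gateway-container-name> -n default
```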
|88||Add the Product Search Bot app to Teams.|
Start the conversation with the bot by entering one of the trigger phrases, e.g. “Purchase new office equipment“.
You are signed on to the PVA chatbot via single sign-on.
Enter the search term of products you are looking for. If you chose “Desk Lamps” in step 80 as a product category the user is allowed to search for, enter the search term “OF-DL” and hit Enter.
The bot will return with a list of products from the catalogue and show some details (price etc.).
Take a note of the additional output with the SAP access token and copy the value to the clipboard. You will need it in the next section to do a final load test with APIM protecting the backend from an overload situation.
|90||Check the command line window with the log output from the SHGW. You will see both requests to the backend for the token and the product data logged by SHGW.|
After verifying functional correctness with SHGW added to the scenario, let’s do a final test to see what APIM can do to protect the backend from an overload situation. At the center of the solution, you will add a policy that enforces the acceptable request rate (10 API calls per minute for testing purposes) for the backend. In addition, we also want to inform the administrator(s) via email that an overload situation occurred.
Test APIM-controlled rate limitation, monitoring and alerting in an overload scenario
For simulating the overload scenario, an API client is needed to generate a larger number of requests to exceed the rate limit. JMeter, a free and open-source tool for API load testing, is used in this last test. If you haven’t done so already, download JMeter and extract it into a directory in your local development environment.
In Azure Portal, select the APIM instance SAPAPIMGMT.
Select APIs from the left-hand navigation.
Select the SAP A4H API from the list and click All Operations.
Click + Add policy on the Inbound processing tile.
|92||Click the Limit call rate policy tile.|
|93||Configure the new policy as follows:
By defining the new policy for all operations, every call to the token and OData endpoint counts. To define different rate limits per operation, the policy can be defined on an operation level as well.
Because all requests to SHGW are sent by On-Premises Data Gateway in this scenario, setting the Counter key to IP address makes most sense. However, if other internal systems with different IP addresses call the API, another counter key must be found.
Note: The rate limit defined here is shared across all the replicas in the self-hosted gateway Kubernetes deployment. By default there are 2 replicas, so the number of calls is set to 20 to actually enforce half of the permitted calls (10) per minute.
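The resulting policy in the Inbound processing section would look roughly like the fragment below: a sketch using APIM's rate-limit-by-key policy with the client IP address as counter key, mirroring the settings described above:

```xml
<inbound>
    <base />
    <!-- 20 calls per 60-second window, counted per client IP address.
         With 2 SHGW replicas this effectively enforces about
         10 calls per minute, as described in the note above. -->
    <rate-limit-by-key calls="20"
                       renewal-period="60"
                       counter-key="@(context.Request.IpAddress)" />
</inbound>
```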
|94||From the Home screen in Azure Portal select Monitor from the main menu.|
In Azure Monitor, select Metrics from the left-hand navigation menu.
Select SAPAPIMGMT for the resource to define the scope.
|96||Configure the following settings:
Click on Finish editing metric
|97||Click Save to dashboard and select Pin to dashboard from the drop-down menu.|
Switch to the Create new tab, choose Private and provide a name for the new dashboard (e.g. SAP APIM).
Click Create and pin.
|99||Click New alert rule.|
|100||Click the auto-generated condition name’s link.|
|101||Scroll down to the Alert logic section and configure it as follows:
|102||Switch to the Details tab and enter the following:
Switch to the Actions tab.
Click Create action group.
Enter a Name (e.g. SAP APIM Alerts) and Display Name (e.g. SAP APIM).
Click Next: Notifications.
Select the Notification type Email/SMS message/Push/Voice.
Activate the checkbox Email and enter a valid email address of an inbox you have access to.
Enter a name for the new Notification (e.g. Admin Email alert).
Click Review + create.
Back in the Alert rule, click Review + create.
Review the settings and click Create.
Launch JMeter from the bin subfolder of your local installation directory.
Select File -> Open from the menu and open the file SAP API Test Plan.jmx from the Git repository’s subfolder JMeter.
|109||Select the HTTP Header Manager from the test plan.|
|110||Click the Value of the Authorization header row. Replace the placeholder <SAP Access Token> with the value of the SAP access token copied from the Teams chatbot conversation in step 89.|
|111||Click Start to run the load test.|
|112||Select View Results Tree from the test plan. You can see that the first 10 requests were executed successfully. The last 10 returned with an error message “Rate limit is exceeded. Try again in 59 seconds.” and status code 429.|
|113||In Azure Portal, check the Alerts page in Azure Monitor to see that the condition was met to fire the alert.|
|114||Check the inbox of the email address provided in step 105. Azure Monitor sent an email according to the notification settings of the alert rule.|
This part of the blog series started to look into relevant production topics for the scenario, but there are certainly many more aspects to take into consideration, for example, deployment of OPDG and SHGW/Kubernetes in a highly available manner, backup management of the configurations, etc.
If you have now become a big fan of APIM, you may also have a look at this repository on GitHub with many more examples of advanced policies, such as this one, which essentially moves the logic from the Exchange Token Power Automate flow of the scenario into an APIM policy. A big thank you to Martin Pankraz, my go-to expert for APIM and many other things, who helped a lot with reviewing this blog.