Distributed Resiliency of SAP CAP applications using SAP HANA Cloud (Multi-Zone Replication) with Azure Traffic Manager
The goal of resilient or robust software design is to handle failures that inevitably occur in complex system landscapes, ideally without users noticing them. In contrast to conventional stability techniques, the objective is not to decrease the likelihood of failure but to maximize the availability of systems and system landscapes. Resilient design embraces the unavoidability and unpredictability of failures and concentrates on recovering from them as quickly as feasible.
There are different principles and patterns that may be used to make your applications more resilient. However, it is not always easy to find the combination that best fits your applications. The Developing Resilient Apps on SAP BTP Guide provides an overview of the various options you have, as well as detailed information about the particular patterns you may employ.
The majority of SAP BTP services support Availability Zones (AZ) for high availability. However, availability zones are confined to a single region, which is insufficient for mission-critical applications. Implementing a multi-region architecture further reduces downtime, lowers regional latency, and can also act as a load-balancing mechanism to distribute growing traffic across regions.
This blog post primarily focuses on the application’s active-active (Distributed Resiliency) setup employing multi-region architectural concepts. Here, you will discover how to deploy the SAP CAP application to multiple regions, divert traffic to the appropriate region, and monitor its availability.
The conceptual solution diagram below shows an active-active (Distributed Resiliency) setup employing multi-region architecture.
In this setup, SAP CAP applications are spread across different regions and connected to an SAP HANA Cloud database. Incoming requests are routed to the appropriate region using Azure Traffic Manager.
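Conceptually, Traffic Manager's priority-based routing works as follows: traffic goes to the highest-priority endpoint that is reporting healthy, and fails over to the next one when health probes mark it as degraded. The sketch below is a minimal, illustrative Python model of that behavior, not actual Azure code; the region and endpoint names are hypothetical examples.

```python
# Conceptual model of Azure Traffic Manager "priority" routing:
# requests are sent to the highest-priority healthy endpoint and
# fail over to the next one when health probes report degradation.
# Endpoint names and health states are hypothetical illustrations.

def route_request(endpoints):
    """Return the highest-priority healthy endpoint name, or None if all are down."""
    for ep in sorted(endpoints, key=lambda e: e["priority"]):
        if ep["healthy"]:
            return ep["name"]
    return None

endpoints = [
    {"name": "cap-app-eu10", "priority": 1, "healthy": True},
    {"name": "cap-app-us20", "priority": 2, "healthy": True},
]

print(route_request(endpoints))  # → cap-app-eu10 (primary serves traffic)

endpoints[0]["healthy"] = False  # primary region fails its health probes
print(route_request(endpoints))  # → cap-app-us20 (failover to secondary)
```

In the real service, the same effect is achieved by configuring the profile's routing method as "Priority" and letting the endpoint monitor flip the health state.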
To optimize your application’s response time, we recommend deploying it across data centers where your target audience and back-end system are situated. For instance, consumers in Europe may select the Frankfurt and Amsterdam data centers.
Note: Legal data-processing constraints may apply if you deploy your application in data centers outside your users' region. For more information, see Data Protection and Privacy.
High-level Implementation Steps
- Provision SAP BTP, Cloud Foundry runtime in two subaccounts from different regions or hyperscalers.
- A custom domain URL serves as the single point of entry to the SAP CAP Application.
- Using the SAP BTP Custom Domain service, configure and map the custom domain to SAP CAP Application routes.
- Azure Traffic Manager is an intelligent component that monitors the application’s health and sends user requests to another region in the event of a failover.
- SAP CAP applications are kept in sync across the regions using the SAP Cloud Transport Management service.
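For the health-monitoring step above, Traffic Manager probes each endpoint over HTTP(S) and only marks it degraded after a number of consecutive probe failures (Azure exposes this as the tolerated number of failures on the profile's monitor configuration). The sketch below models that decision in plain Python; the threshold of 3 is an illustrative assumption, not a recommendation.

```python
# Conceptual model of Traffic Manager endpoint monitoring: an endpoint
# stays "Online" until its consecutive failed health probes exceed a
# tolerated threshold, at which point it is marked "Degraded".
# The default threshold used here (3) is an illustrative assumption.

def endpoint_status(probe_results, tolerated_failures=3):
    """probe_results: chronological list of booleans, True = probe got HTTP 200."""
    consecutive_failures = 0
    for ok in probe_results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
    # Degraded only once consecutive failures exceed the tolerated number
    return "Online" if consecutive_failures <= tolerated_failures else "Degraded"

print(endpoint_status([True, False, False, True]))          # → Online
print(endpoint_status([True, False, False, False, False]))  # → Degraded
```

In practice this means your CAP application should expose a cheap, reliable health endpoint for the probe, so that transient hiccups do not trigger an unnecessary failover.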
While implementing this architecture, consider the subscription costs for duplicate services in different subaccounts and ensure that the applications are always in sync. You must also configure SSO for a seamless switch between the regions.
If you want to try this scenario yourself, we have a detailed step-by-step Discovery Center mission for it.
Further reading:
- SAP BTP Multi-Region reference architectures for High Availability and Resiliency by Mahesh
- Architecting solutions on SAP BTP for High Availability by Murali Shanmugham
- How to crash your iflows and watch them failover beautifully by Martin Pankraz
I hope this blog post gives you an idea of how you can leverage SAP BTP across different regions and providers to architect highly available solutions.
Please leave any thoughts or feedback in the comments section below.
Have you done any measurements of the latencies and the resulting performance impact when a CAP app deployed in cf-us20 uses SAP HANA Cloud in cf-ap20?
Hi Gregor, thanks for your query. We are in the process of conducting the performance tests. It would be beneficial if we collaborated on this. Here is our email id: email@example.com; looking forward to hearing from you.
The key point here is that, IMHO, the different AZs should not have runtime dependencies. If the domain allows it, I would go for eventually consistent replication of the data in each AZ.
Exactly. Looking forward to seeing the next iteration of this using a globally distributed database "tunable" to the application's needs. Azure Cosmos DB, for instance, allows you to choose from a spectrum of consistency levels; eventual consistency and strong consistency are subsets of them.
Also, SAP Private Link for Azure now supports Cosmos DB, completing the picture.