Dear community,

Today is the day you finally get an answer to where all those toilet paper searches went. At least if you are based in a country where this was relevant in 2020. Travel experience taught me that other regions have had paper-free alternatives all along, but first things first.

On the one hand, protecting your ERP from unpredictable load is always key, but especially during sensitive periods like financial closing or heavy batch loads. On the other hand, you want connected satellite systems to have the most accurate, ideally near real-time, state of certain SAP data sets to support the business.

Imagine an online store that offers insight into your shop’s product availability based on the customer’s postal code. So far so good. You integrated that web shop with your primary SAP backend through a nice OData interface – maybe even nicely governed through an API management solution. Now consider an event that sparks millions of people globally to search through your website for remaining stock of that precious toilet paper. You can only hope that your API management is set up to start throttling or blocking requests once traffic reaches a threshold that would bring down your ERP.

Phew, so you survived that wave of threatening requests, but likely lost business due to the declined searches. Can we do better than that? We most certainly can! Let’s walk through one approach with Azure components to solve this challenge.

And by the way, it doesn’t take a black swan event to suffer from this. A good marketing campaign, or a container ship getting stuck in a canal such as the Suez Canal and disrupting the supply chain, might also spark a flood of requests towards your ERP.


Fig.1 architecture overview


Spoiler alert: We won’t be using toilet paper product data on our SAP, but you can picture the airplanes from the Sflight data set transporting it 😉

What’s what?


I will quickly introduce the components for a joint understanding. In general, we need a highly scalable and globally available data backplane to serve our requests, a routing component that delivers requests in a geo-optimized fashion, networking components (secure tunnels and virtual networks) to establish a secure boundary for our SAP data to flow in, and at least one instance of the BTP client app close to the region where your user base lives. Furthermore, you need to be able to distribute the app to your BTP and Azure target regions.

To top it all off, we want to be able to consume it via native OData, so that any app running on Azure or SAP BTP can integrate it directly, without needing to know any specifics about the underlying data backplane. You will see that the wizard in SAP Business Application Studio immediately picks up the entities served from CosmosDB.

Geodes Pattern (Geographical Nodes)


The pattern involves deploying a collection of backend services into a set of geographical nodes, each of which can service any request for any client in any region. This pattern allows serving requests in an active-active style, improving latency, and increasing availability by distributing request processing around the globe.


Source: Microsoft Docs


My colleagues Will and Chris ran a hands-on session covering the pattern from a different angle, using the same example, here:


Azure CosmosDB (Cosmos)


Cosmos is a fully managed, globally distributed NoSQL database built for low latency, automatic and instant scalability, and enterprise-grade availability and security. The consistency level can be tuned between strong and eventual depending on your requirements. Its SQL API comes with an SDK for popular programming languages. In addition, it is well equipped to hold heterogeneous data and cope with high-frequency scenarios, such as IoT telemetry.

This separates Cosmos from typical “side-car” pattern approaches.
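To give you a flavour of the SQL API SDK mentioned above, here is a minimal sketch of querying Sflight documents with the .NET SDK. The account endpoint, key, database and container names are placeholders, and the query text is purely illustrative – the actual implementation lives in the GitHub repos.

using System;
using Microsoft.Azure.Cosmos;

// Hypothetical account settings – replace with your own Cosmos account values.
var client = new CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>");
var container = client.GetContainer("SflightDb", "Sflight");

// Query flights for a given connection id (illustrative SQL text).
var query = new QueryDefinition("SELECT * FROM c WHERE c.connid = @connid")
    .WithParameter("@connid", "0064");

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    // Runs inside an async method in the real code.
    foreach (var flight in await iterator.ReadNextAsync())
    {
        Console.WriteLine(flight);
    }
}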

So, we are all set on the data serving layer for our global-read scenario at high scale to find those toilet paper rolls round the clock.

Azure App Service (App service)


App Service is a PaaS environment for programming languages of your choice with built-in security, autoscaling, load balancing and automated management. .NET 5 supports OData and the Cosmos SDK, so that was a straightforward choice for us. If needed, you can port our web API from .NET to Node.js, for example, to run it on BTP.

Azure FrontDoor (AFD)


AFD is a globally available and scalable entry point to Microsoft’s global edge network. It automatically picks the best route and caters for network delays or outages as well as app unavailability. In case your European app instance suffers from delays or worse, you will be served from the next best instance.

Ok, now we can reach our SAP data in Cosmos geo-optimized via OData from wherever we like.

Azure Virtual Network (VNet)


A VNet is the standard component to lock down your workloads in Azure into a private space. Any SAP deployment on Azure is deployed as part of a private VNet. Access to that private space is usually granted through a VPN, ExpressRoute or a gateway component including web application firewalls.

PaaS or DBaaS offerings can be connected to your private VNet with so-called private endpoints or private links.

This way the data being sent from SAP to Cosmos stays on our private network, as well as the traffic between our app service and Cosmos. Only the web API needs to be exposed publicly for BTP to reach it.

Azure Active Directory (AAD)


AAD is Microsoft’s cloud-based identity and access management service. We leverage it to secure our web API and its public exposure as a best practice. Going forward, you might also think about adding it to your consuming app on BTP for end-to-end identity protection.
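As a rough sketch of what that protection can look like on the web API side (the tenant id and audience are placeholders and your actual startup code may differ), the standard ASP.NET Core JWT bearer middleware validates the tokens issued by AAD:

using Microsoft.AspNetCore.Authentication.JwtBearer;

// In Startup.ConfigureServices – hypothetical tenant and audience values.
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Tokens must be issued by your AAD tenant ...
        options.Authority = "https://login.microsoftonline.com/<tenant-id>/v2.0";
        // ... for the app registration that represents the OData web API.
        options.Audience = "api://<web-api-client-id>";
    });

// In Startup.Configure – order matters: authenticate before authorizing.
app.UseAuthentication();
app.UseAuthorization();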

For simplicity we configured the OAuth2 Client Credentials Grant flow. There are multiple more options available.
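Under the hood, the client credentials flow boils down to a single POST against the AAD token endpoint. The following sketch shows the equivalent call (client id, secret, scope and tenant are placeholders) in case you want to test it outside of the BTP destination, e.g. from Postman or code:

using System.Collections.Generic;
using System.Net.Http;

// Hypothetical values taken from your AAD app registration.
var tokenRequest = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["grant_type"] = "client_credentials",
    ["client_id"] = "<client-id>",
    ["client_secret"] = "<client-secret>",
    ["scope"] = "api://<web-api-client-id>/.default"
});

using var http = new HttpClient();
// Runs inside an async method; the token endpoint belongs to your AAD tenant (v2.0).
var response = await http.PostAsync(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token", tokenRequest);
var json = await response.Content.ReadAsStringAsync();
// The access_token property of the JSON response is the Bearer token for the web API.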

Azure DevOps (ADO)


ADO provides developer services to collaborate, build and deploy in an agile fashion, with native support for Cloud Foundry and Azure. We specifically leverage its CI/CD capabilities to release the client app to BTP (via Business Application Studio) and the app service hosting the OData web API (via Visual Studio Code), to profit from purpose-built extensions in each ecosystem.

SAP BTP Destination (subaccount level)


Destinations allow the platform to abstract connectivity and authentication away from the actual app. We use the service to configure the authentication mechanism towards AAD and the OData web API on Azure.
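For orientation, a destination for this scenario could look roughly like the following. All values are placeholders and only meant as a sketch; the property names follow the standard BTP destination editor.

Name:              azure_odata_api
Type:              HTTP
URL:               https://<your-frontdoor-endpoint>/api/odata
Proxy Type:        Internet
Authentication:    OAuth2ClientCredentials
Client ID:         <client-id from the AAD app registration>
Client Secret:     <client-secret>
Token Service URL: https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
Additional Properties:
  HTML5.DynamicDestination: true
  WebIDEEnabled:            true
  WebIDEUsage:              odata_gen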

We assume that remaining components like the SAP Business Application Studio (BAS) and the other involved CloudFoundry services in general are known to the community.

While setting up the app config for the destination on BAS I found these docs useful. Especially the destination /reload command proved helpful during troubleshooting.

The moving parts in action


Now that we have clarified the actors, let’s see how the process flows. Initially, SAP needs to send its data to the downstream services. That can happen as part of the new SAP Business Events or a scheduled program, for example. I can recommend having a look at the badelang repos to drill deeper into the event-driven approach on ABAP. A standard example by SAP can be found here.

In the end, we need to be able to send a JSON payload via HTTP to our OData endpoint from ABAP. We chose to do that with the native ABAP REST client. There is also the ABAP SDK for Azure if you prefer an even more structured approach.
*Sample data population for sending it to Azure Cosmos
DATA: it_data    TYPE STANDARD TABLE OF sflight,
      lv1_string TYPE string.

SELECT carrid connid fldate planetype seatsmax seatsocc
  FROM sflight
  INTO CORRESPONDING FIELDS OF TABLE it_data
  WHERE connid = 64 AND fldate = '20210813'.

*create JSON from table structure
lv1_string = /ui2/cl_json=>serialize( data        = it_data
                                      compress    = abap_false
                                      pretty_name = /ui2/cl_json=>pretty_mode-camel_case ).

There are standard classes to create JSON from ABAP tables. Once we have extracted the data from the popular SFlight demo data set, we can retrieve the Bearer token from Azure AD for our subsequent call to the OData web API. Find the complete ABAP code on my GitHub repos.

JSON and the ABAP REST client are nice, but you need XML and IDoc instead? Let us know via a comment or GitHub Issues.


Fig.2 Focus on SAP data downstream flow


The JSON payload from SAP is automatically parsed by the .NET implementation. To enable that, we modeled the Sflight object and registered it as an EDM model. That enables all the OData operations based on that entity that you would expect from SAP BTP for a full-cycle implementation. Have a look at the following snippets for reference.
private IEdmModel GetEdmModel()
{
    var odataBuilder = new ODataConventionModelBuilder();
    odataBuilder.EntitySet<Sflight>("Sflight");
    return odataBuilder.GetEdmModel();
}

...
app.UseEndpoints(endpoints =>
{
    var odataBatchHandler = new DefaultODataBatchHandler();
    endpoints.EnableDependencyInjection();
    endpoints.Select().Filter().OrderBy().Count().MaxTop(100);
    endpoints.MapODataRoute(
        routeName: "odata",
        routePrefix: "api/odata",
        model: GetEdmModel(),
        batchHandler: odataBatchHandler);
});

The OData controller feeds the parsed object to the Cosmos SQL API, which ultimately persists your object. See below a snippet from the implementation of the PUT method.
[EnableQuery]
[Authorize(Roles = "Writer")]
public async Task<string> Put(string key, [FromBody]Sflight flight)
{
    return await Repository.UpdateItemAsync(key, flight);
}
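The Repository class is part of the implementation on GitHub. Conceptually, its UpdateItemAsync method maps more or less to an upsert against the Cosmos container, along these lines (a sketch only, assuming an initialized Microsoft.Azure.Cosmos.Container field and the key doubling as partition key):

// Conceptual sketch – not the exact implementation from the repos.
public async Task<string> UpdateItemAsync(string key, Sflight flight)
{
    // Upsert creates the document if it does not exist yet, otherwise replaces it.
    ItemResponse<Sflight> response =
        await _container.UpsertItemAsync(flight, new PartitionKey(key));
    return response.StatusCode.ToString();
}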

The bottom line is that you only need to model your object (in our case Sflight) and the rest is already covered by the existing implementation. It gets you $metadata and all the other OData verbs, including navigation, out of the box.
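For reference, modelling the entity boils down to a plain C# class matching the serialized ABAP structure. The property names below are assumptions derived from the SFlight fields used above; the actual model lives in the GitHub repos.

using Newtonsoft.Json;

public class Sflight
{
    // Cosmos expects the document key to be called "id".
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    public string Carrid { get; set; }
    public string Connid { get; set; }
    public string Fldate { get; set; }
    public string Planetype { get; set; }
    public int Seatsmax { get; set; }
    public int Seatsocc { get; set; }
}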

Furthermore, we left the OData web API open to be extended towards other services exposed by the Azure SDKs – Azure Blob storage, for instance.
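As an illustration of such an extension (not part of the current implementation), archiving the incoming JSON payload to Blob storage with the Azure SDK would look roughly like this; the connection string, container name and the jsonPayload variable are placeholders:

using System;
using Azure.Storage.Blobs;

// Hypothetical storage account and container names.
var blobService = new BlobServiceClient("<storage-connection-string>");
var container = blobService.GetBlobContainerClient("sflight-archive");

// Archive the incoming JSON payload next to the Cosmos write (inside an async method).
await container.UploadBlobAsync($"sflight-{DateTime.UtcNow:yyyyMMddHHmmss}.json",
    BinaryData.FromString(jsonPayload));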

Once the request reaches Cosmos it is distributed to the replicas and ready to be consumed globally.


Fig.3 Screenshot from Cosmos Data Explorer


The SAPUI5, Fiori template or CAP wizard in SAP BAS picks up the entity right away. We feed in the endpoint exposed through FrontDoor, as described by the architecture in fig.1.


Fig.4 Screenshot from SAPUI5 project creation in SAP BAS with OData shim in Azure


Down below you can see the Object collection being populated automatically, as well as the individual properties of the entity living in Cosmos.


Fig.5 Screenshot from OData entity selection based on data in CosmosDB


The whole process is enabled by the OData web API running on Azure App Service. To SAP BAS and the developer, it is completely transparent that the data is being served from Cosmos. The app and developer can treat it like any other OData source, thanks to the OData abstraction layer.

To showcase the global read mechanism, we added a service (/api/geode) to check on the geography from where you retrieved the data. The response is surfaced as part of the title.
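Conceptually, that endpoint just echoes back the region it is running in. A minimal sketch (assuming the region is read from the REGION_NAME environment variable that App Service exposes; the actual implementation is in the repos):

using System;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
[ApiController]
public class GeodeController : ControllerBase
{
    [HttpGet]
    public string Get()
    {
        // App Service exposes its Azure region via the REGION_NAME environment variable.
        return Environment.GetEnvironmentVariable("REGION_NAME") ?? "unknown region";
    }
}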


Fig.6 Screenshot from UI5 app on BTP CF environment


From my development machine in Germany, I always get routed to my instance in West Europe. To test the geo-optimized routing, I spun up VMs in South America and West US and called the SAPUI5 app from my remote sessions. To check on high availability, I simply stopped my European app service instance and watched FrontDoor detect the unhealthy instance and re-route to West US instead.


Fig.7 Screenshot from FrontDoor metrics overview split by app service instance (eu instance stopped at 4:36pm)


To operationalize the approach, we need to be able to distribute the apps and push updates to the SAP BTP CF runtime and Azure App Service. For that purpose, we set up pipelines on Azure DevOps that enable CI/CD and “rolling-region” upgrades. We start with our primary instances in Europe and move on to West US once approved.


Fig.8 SAPUI5 app deploy to CF regions



Fig.9 .NET OData web API deploy to Azure App Service


Have a look here for further reading on cloud-native release strategies.

Thoughts on production readiness


All components involved in the setup are managed by Microsoft or SAP for enterprise-grade apps. The OData web API resides in a critical position within the architecture.

The PaaS environment it runs on is managed, but the code would be your responsibility. The first version relies on standard coding from the well-known SDK docs and has been tested from the BTP client app and Postman. All API calls are part of the Postman collection so that automated testing can be added. I am looking to publish mass-testing capabilities over the coming weeks. To be clear, most large customers that need scale for “Black Friday” events with multi-million requests don’t rely on auto-scaling alone. You would schedule resources shortly before the event to cover a certain base load and only auto-scale for the remaining parts.

Auto-scaling is a good approach but can struggle with unpredictable steep spikes. Scheduling mitigates that. Another approach could be to provide your own scaling logic that acts more aggressively on resource demand. That comes with its own set of challenges though, where you might overreact most of the time 😉.

In addition, we showed a possible CI/CD setup to operationalize the whole approach.

Find more details on our exemplary load tests in the readme of the GitHub repos.


Fig.10 Screenshot from last JMeter load test with 10k threads


Anything else you would need to see to get started? Reach out via GitHub Issues or the comment section under this blog.

Final Words


Ok, we didn’t unravel the mystery of the lost toilet paper, but we saw an architecture equipped to survive such demands.

We discussed how you can implement a highly scalable data store for specific SAP backend data sets, with the goal that it can be read globally to meet steep demand, while remaining agnostic to the consuming SAP Business Technology Platform applications. The geodes pattern uses best-practice architecture components to deliver on these requirements.

The only adjustments required for your individual scenario are the data modelling on the web API (replace the Sflight entity), the ABAP part that sends the JSON payload, and some initial thoughts on your scaling approach. To support that, we added Apache JMeter scripts and shared results for GET /api/odata/sflight requests with 10k simulated users.

Find the OData web API sources and config guide on my GitHub repos here.

Find the SAPUI5 client app on my GitHub repos here.

Find the SAP CAP app on my GitHub repos here.

Find the Azure DevOps project for guidance on the CI/CD part of the equation here.

 

As always, feel free to leave feedback or ask lots of follow-up questions.

 

Best Regards

Martin