Proof of Concept: SAP on Kubernetes – Deployment and Application Scaling Scenarios
Containerizing and running bigger monolithic applications like SAP in Kubernetes is always challenging. This blog post discusses how to deploy and run SAP on Kubernetes by decoupling the data/application from the OS, and covers techniques for scaling the application. I have been working on various SAP on Kubernetes PoCs to test different scenarios, and in this post I am going to share my findings on deploying and scaling SAP on Kubernetes.
SAP S/4HANA and Google Cloud Platform/Google Kubernetes Engine were used in all my proofs of concept to evaluate SAP deployment on Kubernetes, as well as automatic scaling and scale-back of SAP application instances.
Please note that this document is not an official solution or an ongoing development update. Always refer to official SAP documents for any related information. Official support information for SAP on Kubernetes (virtualized environments) is documented in SAP Note 1122387.
Deployment of SAP on Kubernetes
Containerizing bigger applications like SAP is challenging. Because of its size, it is difficult to keep the entire application/database in a Docker container and create an image out of it. Even if we break the application down into smaller pieces, the database still comes with a big footprint. Even if it were containerized, initial download times would be high, and so would deployment times. As the database grows, it creates more problems.
Rather than going the traditional way, the OS and the application/database can be decoupled, moving the complete application/database to persistent storage so that applications run from a single OS base image. This improves deployment, and the complete data also resides on persistent storage, ensuring it persists across pod restarts.
The following diagram shows a graphical representation of the process to deploy SAP on Kubernetes.
As shown in the diagram, the same base OS image can be used to run different flavors of SAP systems on Kubernetes. The only difference is that the corresponding data disk needs to be created from the respective disk image and maintained in the deployment YAML before deploying.
Once the application is deployed, a startup script copies SAP-specific OS settings into the OS container and starts the system.
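The decoupling approach described above can be sketched as a Deployment that runs the lightweight OS base image and mounts the pre-created data disk through a PersistentVolumeClaim. This is a minimal sketch, not the PoC's actual manifest; the image name, claim name, script path, and mount path are illustrative placeholders.

```yaml
# Sketch only: image, claim, and path names are placeholders, not from the PoC.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sap-s4hana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sap-s4hana
  template:
    metadata:
      labels:
        app: sap-s4hana
    spec:
      containers:
      - name: sap
        image: gcr.io/my-project/sap-base-os:latest   # single, lightweight OS base image
        command: ["/scripts/startup.sh"]              # copies SAP-specific OS settings, then starts the system
        volumeMounts:
        - name: sap-data
          mountPath: /usr/sap                         # application/database live on the persistent disk
      volumes:
      - name: sap-data
        persistentVolumeClaim:
          claimName: sap-data-pvc                     # PVC bound to a disk created from the data-disk image
```

To run a different SAP flavor, only the `claimName` (pointing to a disk created from the respective disk image) would change; the base OS image stays the same.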
This container image will be lightweight, and the complete data also remains persistent.
SAP application auto scaling and application load balancing
SAP application servers can easily be auto-scaled out and back based on workload demand. In this PoC, the database, the primary application server, and an additional application server were set up in separate pods. A dedicated Docker image with local data was built for the SAP additional application server to allow faster spin-up of multiple application server replicas. The database and primary application server can run with the same base Docker image as described in the "Deployment" section of this document.
To scale the application server automatically, a Kubernetes horizontal pod autoscaler is needed to continuously monitor resource usage and trigger auto-scaling of the application server when resource usage exceeds a defined threshold.
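Such a horizontal pod autoscaler can be sketched roughly as follows. This assumes the additional application server runs as a Deployment named `sap-aas`; the name, replica bounds, and 70% CPU target are illustrative, not values from the PoC.

```yaml
# Sketch: assumes the additional application server runs as Deployment "sap-aas";
# replica counts and the CPU threshold are placeholder values.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sap-aas-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sap-aas
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU utilization exceeds 70%
```

The autoscaler compares observed CPU utilization against the target and adjusts the replica count between the min and max bounds.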
Application-level load balancing needs to be managed carefully to identify newly created application servers and add them to the corresponding logon or job groups to handle the workload. When the load goes down, the pod autoscaler tries to delete the newly created pods, so those application servers need to be removed from the logon or job groups to prevent new user requests from being sent to them. In addition, existing sessions on these servers need to be cleared before the servers are deleted.
Below is a process diagram showing what happens when CPU load goes high, what happens when CPU load goes down, and how load balancing is handled in the application layer without affecting user sessions.
In my PoC I used CPU load as the resource parameter and took a job group as the example of how user sessions can be managed with load balancing during scale-out and scale-back situations. I wrote a few ABAP programs and bash scripts to make it work.
When CPU load goes beyond the threshold, the horizontal pod autoscaler detects it and creates more application server replicas to handle the workload. On the application level, an ABAP program then adds the newly created application servers to the job group to distribute the load. Once the load is reduced, the autoscaler marks the newly created pods for termination. But Kubernetes won't delete them immediately; it waits for a signal from the ABAP program. Once servers are marked for deletion, the ABAP program removes them from the job group and checks whether any existing sessions are still running on them. Once all sessions are completed, it sends a signal to Kubernetes to delete that particular server, and Kubernetes deletes the pod. This way, application load balancing is handled during scale-out and scale-back situations.
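One standard Kubernetes mechanism for holding back pod deletion until sessions are drained, as described above, is a `preStop` lifecycle hook combined with a generous termination grace period. This is only a sketch of the pattern; `drain-sessions.sh` is a hypothetical script standing in for the logic (remove the server from its job group, wait for sessions to finish) that the PoC drives from ABAP and bash.

```yaml
# Sketch: drain-sessions.sh is a hypothetical script that removes the server
# from its job/logon group and blocks until existing sessions have finished.
spec:
  terminationGracePeriodSeconds: 3600   # give long-running sessions time to drain
  containers:
  - name: sap-aas
    image: gcr.io/my-project/sap-aas:latest   # placeholder image name
    lifecycle:
      preStop:
        exec:
          command: ["/scripts/drain-sessions.sh"]  # kubelet sends SIGTERM only after this returns
```

The kubelet runs the `preStop` command before signalling the container to stop, so a pod marked for scale-down keeps serving its existing sessions until the drain script completes or the grace period expires.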
Below is a demo video showing the complete scenario.
Given the enormous number of ports that SAP and HANA use for internal and external communication across various operations, it is very complex to manage communication with Kubernetes services (with or without a combination of pod IPs).
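To give a feel for what this port management looks like, here is a minimal Service sketch exposing a handful of the well-known SAP port patterns for instance number 00 (32NN dispatcher, 33NN gateway, 36NN message server, 80NN ICM HTTP). A real system needs many more entries per instance, which is exactly the complexity noted above; the Service and selector names are placeholders.

```yaml
# Sketch: only a few of the many SAP ports, for instance number 00;
# names and selectors are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: sap-pas
spec:
  selector:
    app: sap-pas
  ports:
  - name: dispatcher
    port: 3200      # 32NN: dispatcher
  - name: gateway
    port: 3300      # 33NN: gateway
  - name: message-server
    port: 3600      # 36NN: message server
  - name: http
    port: 8000      # 80NN: ICM HTTP
```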
Since we are starting systems from a fresh OS image, the startup script should contain all the necessary tasks to be executed before starting the system. As the requirements grow, this script becomes heavy. To keep it lightweight, it can call another script that performs all the tasks on its behalf.
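One way to keep the entrypoint thin, as suggested above, is to mount it from a ConfigMap and have it only delegate to a task script. This is an illustrative sketch; the script names and task steps are assumptions, not the PoC's actual scripts.

```yaml
# Sketch: a thin entrypoint mounted from a ConfigMap; the heavy lifting lives
# in a separate tasks script that can evolve without rebuilding the OS image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sap-startup
data:
  startup.sh: |
    #!/bin/bash
    set -e
    /scripts/tasks.sh prepare   # hypothetical: copy SAP-specific OS settings, run checks
    /scripts/tasks.sh start     # hypothetical: start the database / SAP instance
```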
Technically, it is possible to run SAP on Kubernetes. However, extensive tests need to be performed to establish the complete stability of the system. Kubernetes also provides very good scaling features with which applications can be scaled without depending on the underlying infrastructure, so to fully utilize these features, SAP and Kubernetes need to be integrated closely.
Feel free to provide your valuable feedback in the comments section.
Do license keys survive when pods move around nodes?
No, they don't.
Nice post Sarath!
What is the motivation behind this post? Are any efforts currently being put towards changing SAP monolithic apps into microservices, i.e. splitting them into smaller services such as the message server, enqueue server, dispatcher+WP, gateway, ICM, etc.?
Looks very much viable Sarath! Looking forward to some future SAP announcements!
Very interesting and cool to see containerization for the SAP platform stack.
With focus on SAP NW ABAP:
From my point of view, a potentially valid technical approach to simplify / speed up instance provisioning. I'm curious about integrating the steps of starting and stopping ABAP instances into e.g.:
- workload balancing mechanisms for external clients (logon groups) as well as internal resource groups (like RFC groups) in a smart way
- handling of buffered number range objects, especially those in the "parallel buffering" scenario (nrivshadow, ...)
Provisioning, scaling, and migration of microservices in a seamless manner means reaching stateless designs - e.g. by decoupling data persistence from service operations - which, I think, is not in scope for the SAP NW ABAP stack ... 😉
This is very exciting; technically it should definitely be possible to run SAP ABAP AS on K8s. The real challenge, however, is that SAP infrastructure components are born to be integrated (coupled), and thus the application components built upon that infrastructure are coupled as well; this is the nightmare, even if the infrastructure itself can be scaled.
Excellent overview of how an SAP landscape can be auto-scaled based on CPU load.
The feature looks most relevant for smaller scenarios, like load exceeding defined threshold values.
However, how do you consider scaling of pods for very large scenarios, where IDocs flow in and out of the system and almost all the app servers are occupied, or jobs run for several days?
Also, how would the application benefit from the load-balancing feature of K8s when SAP has its own load-balancing features?
Maybe these points are already discussed in another group, but I would still like to know if it's okay. 🙂