By Mangesh Pise

How to relocate an existing app running on your Kubernetes Cluster to Kyma on SAP Business Technology Platform

At its core, Kyma is an open-source project initially launched by SAP in July 2018. The basic premise of the project was to provide an enterprise-grade developer experience (DX) on the Kubernetes platform, so that developers can build extensions to other enterprise applications. Kyma is a runtime that comes standard with building blocks such as Istio, which enables secure traffic management; Kubeless, the serverless framework; Prometheus and Grafana, which provide observability via monitoring and alerting; Luigi, which provides a full-fledged Kyma UI for developers and administrators to manage deployments and the platform itself; and more.

Kyma became truly powerful for SAP customers when SAP decided to provide it as a supported platform on the SAP Business Technology Platform (SAP BTP). Kyma sits next to the Cloud Foundry runtime and the recently announced ABAP Cloud runtime. Together, these provide a wide array of choices for developers to safely, reliably, and effortlessly extend business capabilities of large enterprise applications, such as SAP S/4HANA solutions, SAP Customer Experience solutions, and SaaS solutions such as SAP SuccessFactors and SAP Ariba.

The Challenge

In this blog post, we are going to put Kyma through a test. We are going to find out how easy it is to relocate an existing containerized application that runs on an existing Kubernetes cluster to the Kyma runtime on SAP BTP. But why this challenge?

Here’s why – many organizations have extended their business processes outside the core ERP platforms as side-by-side applications. That’s good, or at least better than modifying the core itself! However, over time, such extensions grow into things of their own. These side-by-side applications eventually need dedicated development teams, testing cycles, environment dependencies, cloud infrastructure, etc. That increases the overall Total Cost of Ownership (TCO). So, how do we protect the core ERP by using a side-by-side architecture pattern, and yet keep the TCO to a minimum?

The answer is – by keeping the focus on the extension itself without having to spend additional time, money, and effort in managing the platform on which you build the extension. In other words, low-TCO extensions can be built by leveraging a platform like Kyma that –

  1. enables developers to deploy their containerized applications on Kubernetes, and
  2. which comes in-built with platform management services (read as, managed services provided by SAP as a part of SAP BTP).

Thus, within the scope of this blog post, developers should be able to relocate their existing side-by-side applications running on Kubernetes without any refactoring, re-architecting, or re-platforming. That is the test we will put Kyma through in this blog post.

The App in Question

The app in question, currently running on a Kubernetes cluster, is a 3-tier application with MongoDB as the database, microservices written in NodeJS, and a frontend UI written in AngularJS. The application is a demo eCommerce store that lists products on its landing page, where a user can add items to a cart and eventually place an order. There is also a page to view all the orders placed by the users.


Fig. 1. Demo app: Product Catalog and Orders

Let me describe the architecture layer by layer, in detail, below:


Fig. 2. Microservices Architecture

Microservices / Middleware

The NodeJS applications providing the product and order APIs are containerized as Docker images and stored in Docker Hub’s container registry (a place where images are stored, versioned/tagged, and later retrieved to be put to use).

For deploying these microservices on the Kubernetes platform as Pods (smallest deployable units that can be scaled into multiple replicas), the Docker images referred to above are pulled at deploy-time and instantiated to create a runtime application. Kubernetes Services are then built atop these Pods so they can load balance user requests between one or more Pod replicas.
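As a concrete sketch, a Deployment and Service pair for the product microservice might look like the following. The image name, labels, and replica count are illustrative assumptions; product-svc and port 8080 match the objects described in this post.

```yaml
# Hypothetical deployment.yml for the product microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product
spec:
  replicas: 2                     # scale out by increasing replicas
  selector:
    matchLabels:
      app: product
  template:
    metadata:
      labels:
        app: product
    spec:
      containers:
        - name: product
          image: <your-docker-hub-id>/product:latest   # pulled at deploy time
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: product-svc
spec:
  selector:
    app: product                  # load balances across all product Pod replicas
  ports:
    - port: 8080
      targetPort: 8080
```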

However, do note that all these replicas and services run in Kubernetes’ internal cluster network. That means none of the Pods or Services can be accessed from your browser (or the internet) for now. This is where we add a Kubernetes object called Ingress. Ingress provides an externally available name and also serves as a proxy to rewrite URLs to point them to appropriate Services. For example, in our application, the API endpoints containing https://api-endpoint/orders/ would route to Kubernetes Services connecting to the Order Pods, while those containing https://api-endpoint/products/ would route to Product Pods.
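For illustration, the nginx-style Ingress described above could be sketched as below. The host name and the rewrite annotation are assumptions; the service names and routing paths follow the description in this post.

```yaml
# Hypothetical ingress.yml routing /orders and /products to their Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: online-store-apis
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strip the path prefix
spec:
  rules:
    - host: api-endpoint.example.com                  # placeholder host
      http:
        paths:
          - path: /orders(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: order-svc
                port:
                  number: 8080
          - path: /products(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: product-svc
                port:
                  number: 8080
```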

Database Layer

In this application, we decided to use the official docker image for MongoDB, called mongo. Hence, all we need is a Kubernetes deployment object to pull the mongo Docker image and configure it using environment variables that hold the initializing DB username and password (as provided in the Docker image documentation).

However, to support the secure storage and retrieval of the username and password, we leverage Kubernetes Secrets.
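A minimal sketch of such a Secret follows; the key names and values are illustrative. Note that values under `data:` must be base64-encoded (e.g. `echo -n 'admin' | base64`).

```yaml
# Hypothetical secret.yml holding the MongoDB credentials.
apiVersion: v1
kind: Secret
metadata:
  name: secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=       # base64 of "admin"
  mongo-root-password: cGFzc3dvcmQ=   # base64 of "password"
```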

What about persistence? Since MongoDB needs physical storage to persist user data, we make use of two Kubernetes objects, viz. PersistentVolume and PersistentVolumeClaim, which effectively allocate space on a shared storage disk and assign it to the MongoDB Pod.
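A sketch of the volume objects is below; the storage size, access mode, and host path are assumptions, while mongo-pv and mongo-pvc match the object list later in this post.

```yaml
# Hypothetical mongo-volumes.yml: a volume and a claim that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongo    # illustrative; a real cluster would use a storage class
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```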

Finally, we create a Kubernetes Service for the single MongoDB Pod (keeping it simple for now). We do not create an Ingress for the database service, because we strictly want to prevent any external access to the database. However, we have used another official image, mongo-express, as an administrator’s tool to manage the MongoDB database.

Sidebar: We can also leverage SAP HANA Cloud service directly in Kyma. Follow this blog post to learn how.

Frontend Layer

The user interface (UI) is written as an AngularJS app. Once packaged, a Docker image is created using the official image for httpd, also retrieved from Docker Hub.

The frontend is then deployed much like the microservices, i.e. Pods plus a Service for load balancing, except that we expose the Service externally as type LoadBalancer.
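A sketch of that Service follows; the selector label is an assumption, while web-app-svc and port 80 (the httpd default) match the objects described in this post.

```yaml
# Hypothetical web-app Service exposed externally as a LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: LoadBalancer   # assigns an external address on clusters that support it
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
```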

Summary – list of Kubernetes objects

To summarize, we have the following list of Kubernetes objects for our application. Our goal is to deploy this entire stack with the same Kubernetes objects on Kyma provisioned on SAP BTP.

    • Secrets
      • secret
    • Volumes
      • mongo-pv
      • mongo-pvc
    • Deployments
      • mongodb
      • mongo-express
      • order
      • product
      • web-app
    • Services
      • mongodb-svc
      • mongo-express-svc
      • order-svc
      • product-svc
      • web-app-svc
    • Ingress
      • online-store-apis
      • online-store-web

Executing the Challenge

With an understanding of the app, it is now time to put Kyma to the test. The assumption is that you have access to SAP BTP. If you do not, SAP provides a free trial, and there is also a free tier of various services; at the time of writing, a free tier plan is available for the Kyma runtime (check this page for the latest). Once you have logged in to your SAP BTP account, the Kyma runtime needs to be activated. Follow these steps to activate your Kyma runtime.

Please note that if you are new to SAP BTP, you may want to spend a few minutes understanding a few SAP BTP basics. You can find learning material in the SAP BTP onboarding resource center or read the SAP BTP onboarding blog series.

After activating Kyma runtime within your trial subaccount, your trial account home page will look something like this.


Fig. 3. Kyma environment in SAP BTP

Under the Kyma Environment section, you will find a link to your Kyma Dashboard and, most importantly, you will be able to download the KUBECONFIG file to configure your kubectl so that it points to your cluster in the Kyma runtime. Please note that Kyma requires the kubelogin utility, so install it as per the instructions here.

It might feel like quite an undertaking thus far (especially if you have not been exposed to SAP Business Technology Platform yet), but at this point, the hard part is over! Just as anyone eventually gets used to hyperscaler consoles, I assure you, the SAP BTP console will feel familiar in a short amount of time. So don’t beat yourself up – at least not now, since the fun is just about to begin!


Fig. 4. Kyma Dashboard

As you deploy your Kubernetes objects, make sure you are logged in to the Kyma dashboard. The first time, you might have to use the KUBECONFIG yaml file to connect the dashboard to your Kyma cluster. Once logged in, select the default namespace from the dashboard toolbar.


Fig. 5. Default Namespace in Kyma Cluster

We will use the command line utility kubectl to deploy our application on the Kyma cluster (default namespace). Again, the idea is to see how easily the existing Kubernetes objects get deployed on Kyma.

The code structure of my app is as follows.

full-stack-microservices
> 1.middleware - contains NodeJS application code for product and order
> 2.ui - contains AngularJS code for the frontend UI
> 3.k8s - contains kubernetes object definitions in YAML
> 4.kyma - contains the kubeconfig.yaml file downloaded from SAP BTP

We will manually deploy our app by applying Kubernetes objects in the following sequence.

# Setting up KUBECONFIG for Kyma on SAP BTP
export KUBECONFIG=~/<path to extracted code>/4.kyma/kubeconfig.yaml
# Create secret with DB username and password
kubectl apply -f 3.k8s/common/secret.yml
secret/secret created
# Create Persistent Volumes
kubectl apply -f 3.k8s/db/mongo-volumes.yml
persistentvolume/mongo-pv created
persistentvolumeclaim/mongo-pvc created
# Deploy MongoDB Pod and Service pointing to it
kubectl apply -f 3.k8s/db/mongo.yml
deployment.apps/mongodb created
service/mongodb-svc created
# Deploy Mongo Express Pod and Service
kubectl apply -f 3.k8s/db/mongo-express.yml
deployment.apps/mongo-express created
service/mongo-express-svc created
# Deploy product Pods and Service
kubectl apply -f 3.k8s/product/deployment.yml
deployment.apps/product created
service/product-svc created
# Deploy order Pods and Service
kubectl apply -f 3.k8s/order/deployment.yml
deployment.apps/order created
service/order-svc created
# Deploy Web UI Pods and Service
kubectl apply -f 3.k8s/web/deployment.yml
deployment.apps/web-app created
service/web-app-svc created
# Finally, deploy Ingress
kubectl apply -f 3.k8s/common/ingress.yml
ingress.networking.k8s.io/online-store-apis created
ingress.networking.k8s.io/online-store-web created

As we navigate through the Kyma dashboard, we can see that the deployments complete successfully and the resources are created. Below are some screenshots from the dashboard that you might want to observe as you go through this exercise.

Deployments

Fig. 6(a). Deployments

Pods

Fig. 6(b). Pods

Services

Fig. 6(c). Services

Ingress

Fig. 6(d). Ingress

Everything looks good, except that the Ingress resources have no load balancer or external name assigned. That means our services are still not accessible externally via the browser (or the internet). For the purposes of this blog post, we will set aside the discussion of whether nginx-based Ingress is supported in Kyma. So, what is the means to expose services externally on Kyma? As a reminder, Kyma comes pre-configured with common building blocks, such as the Istio service mesh. On top of it, Kyma provides a custom resource called APIRule. The full documentation on the Custom Resource Definition (CRD) is available here.

Adjustments

Based on the above, we need a new definition to create an APIRule for each of our Services, i.e. product, order, and the web-app. Below are the APIRule definitions for all three Services in our app.

apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: online-store-product-apis
  labels:
    app.kubernetes.io/name: online-store-product-apis
spec:
  service:
    name: product-svc
    port: 8080
  host: productsapi.f976b46.kyma.ondemand.com
  rules:
    - path: (/|$)(.*)
      methods:
        - GET
        - POST
        - DELETE
        - PUT
        - OPTIONS
      accessStrategies:
        - handler: allow
  gateway: kyma-gateway.kyma-system.svc.cluster.local
---
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: online-store-order-apis
  labels:
    app.kubernetes.io/name: online-store-order-apis
spec:
  service:
    name: order-svc
    port: 8080
  host: ordersapi.f976b46.kyma.ondemand.com
  rules:
    - path: (/|$)(.*)
      methods:
        - GET
        - POST
        - DELETE
        - PUT
        - OPTIONS
      accessStrategies:
        - handler: allow
  gateway: kyma-gateway.kyma-system.svc.cluster.local
---
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: online-store
  labels:
    app.kubernetes.io/name: online-store
spec:
  service:
    name: web-app-svc
    port: 80
  host: online-store.f976b46.kyma.ondemand.com
  rules:
    - path: /.*
      methods:
        - GET
      accessStrategies:
        - handler: allow
  gateway: kyma-gateway.kyma-system.svc.cluster.local

Thus, we can expose any Kubernetes Service as an external endpoint. Notice that we can also specify an access strategy based on a specific authentication method (jwt, oauth2, etc.), as well as restrict the HTTP methods that can be used to access the Service.
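For reference, here is a hypothetical variant of the product APIRule that replaces the allow handler with a jwt access strategy; the resource name suffix and the JWKS URL are placeholders, not part of the deployed app.

```yaml
# Hypothetical secured APIRule: only requests with a valid JWT are admitted.
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: online-store-product-apis-secured
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  host: productsapi.f976b46.kyma.ondemand.com
  service:
    name: product-svc
    port: 8080
  rules:
    - path: (/|$)(.*)
      methods: ["GET", "POST", "PUT", "DELETE"]
      accessStrategies:
        - handler: jwt
          config:
            jwks_urls:
              - https://<your-issuer>/.well-known/jwks.json   # placeholder
```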

In our app, these definitions have been appended to the individual service deployment files. At this point, we need to re-deploy the objects for order, product, and the frontend UI.

We redeploy them using kubectl as below.

# Just make sure we have the KUBECONFIG set to point to Kyma on SAP BTP
export KUBECONFIG=~/<path to extracted code>/4.kyma/kubeconfig.yaml
# Deploy product Pods and Service
kubectl apply -f 3.k8s/product/deployment.yml
deployment.apps/product unchanged
service/product-svc unchanged
apirule.gateway.kyma-project.io/online-store-product-apis created
# Deploy order Pods and Service
kubectl apply -f 3.k8s/order/deployment.yml
deployment.apps/order unchanged
service/order-svc unchanged
apirule.gateway.kyma-project.io/online-store-order-apis created
# Deploy Web UI Pods and Service
kubectl apply -f 3.k8s/web/deployment.yml
deployment.apps/web-app unchanged
service/web-app-svc unchanged
apirule.gateway.kyma-project.io/online-store created
# Finally, we can delete the original Ingress
kubectl delete -f 3.k8s/common/ingress.yml
ingress.networking.k8s.io/online-store-apis deleted
ingress.networking.k8s.io/online-store-web deleted

We can observe the API Rules deployed in the Kyma Dashboard as well as the associated externally accessible URLs as specified in the APIRule CRD.


Fig. 7. API Rules in Kyma

When we now access the web-app URL from our browser, we can see that the app is fully working. We can also validate that by creating a new order.


Fig. 8. Fully deployed app in Kyma on SAP BTP

Conclusion

First, with SAP’s long history of supporting cloud-native runtimes and platforms like Cloud Foundry, it is great to see that support extend to yet another powerful and widely adopted platform, Kubernetes. What is even more exciting is SAP’s direction to not only adopt Kubernetes as a runtime on its strategic SAP Business Technology Platform, but to adopt it along with the most common tooling and building blocks developers need to create enterprise-grade applications and extensions to their core ERP platforms. This was only possible by creating Kyma as an open-source project and gaining support from a large community of developers, both within and outside of SAP, to continue making it stronger. Overall, it builds a case for enabling organizations to build seamless extensions of their end-to-end business processes while keeping the TCO to a minimum.

As far as our challenge to relocate a containerized app from a standard Kubernetes platform to Kyma is concerned, it was a straightforward process without any major refactoring. Where external exposure did not work with an nginx-based Ingress, Kyma’s APIRule custom resource aptly resolved the requirement. If anything, it made the app simpler: APIRule created an Istio VirtualService, reused the Kyma gateway, and secured the endpoint, all in one go, making this deployment only better than the previous one.

Kyma passed the test and there is still some more to explore (e.g. Serverless Functions). I’m curious to learn if you are running side-by-side apps that integrate with your core ERP, and if they will be your candidates to run on Kyma?
