mariusobert
Developer Advocate
In this Cloud-Native Lab post, I'll compare the manifest files of two runtimes within SAP Cloud Platform - the Cloud Foundry and the Kyma runtime. In other words, I compare the deployment.yaml of Kyma with the mta.yaml file of SAP's Cloud Foundry deploy service.

Update 6th Nov 2020: I added a more elegant way to deploy to the Kyma Runtime

This comparison serves two purposes. First, it will help you understand the fundamental differences between the Cloud Foundry and the Kyma runtime. And second, you'll learn what kinds of directives exist for each manifest file and how to "translate" them to each other.


A dockerized SAPUI5 sample app running on the Cloud Foundry and the Kyma runtime


In the screenshot above, you can see that I deployed a SAPUI5 sample app to both runtimes - Kyma and Cloud Foundry. I created a simple SAPUI5 web app that is embedded in an approuter, which is a Node.js application. This approuter consumes two SAP Cloud Platform services (a destination service and an xsuaa service). I dockerized the entire application and uploaded it to DockerHub, from where it can be consumed by both cloud-native runtimes. To run the image, each runtime needs to define the compute resources and the bound service instances.

Manifests


Manifest files are quite common in software development. They are used outside of the SAP world (e.g., Android App Manifests, Node.js package.json) and inside the SAP world (SAPUI5 manifest.json). They usually include metadata about projects such as the project ID, the project name, and the project's packages or dependencies. The manifests of the SAP Cloud Platform runtimes contain metadata as well, but they use different properties than the examples above.

While there is a clear difference in the number of available parameters and their effects, there are also many similarities between the deployment.yaml of Kubernetes and Kyma and the mta.yaml file of the deploy service of SAP Cloud Platform, Cloud Foundry. As the name suggests, both manifests use YAML as the file format. While this format is not always easy to write, it is significantly easier to read than other formats such as JSON.

Both manifest file types are used to specify the parameters that the respective platform offers. Typical parameters are the compute resources (memory size, disk size, CPU shares, etc.), the bound service instances, the attached volumes, the environment variables, and so on. Platforms that make more assumptions about the hosting setup (such as Cloud Foundry) typically expose fewer configuration parameters and offer a simpler manifest. More powerful platforms such as Kyma, on the other hand, offer many tuning parameters to configure the project setup and apply best practices manually. As a consequence, the manifest becomes more verbose.

Services in SAP Cloud Platform


At last year's TechEd, SAP's CTO Jürgen Müller announced that the Business Technology Platform's goal is “to provide the fastest way to turn data into business value.” This goal also applies to the SAP Cloud Platform as it is part of the Business Technology Platform. The value of a platform heavily depends on the value of the services offered on this platform. To provide high business value, the SAP Cloud Platform offers many business services that make life easy for SAP developers.

Such services are, for example, the destination and connectivity services that help you to connect your cloud apps with cloud solutions (SAP S/4HANA Cloud, SAP SuccessFactors, non-SAP systems...) and on-premise solutions (SAP S/4HANA, SAP NetWeaver...). The Launchpad service provides access to all your business apps via the Fiori Launchpad. The Workflow Management service lets you create flexible workflows for your processes and define business rules. The Document Information Extraction service uses machine learning to extract information from documents such as bills and receipts. With this technical service, you can add these capabilities to your application with a simple REST request. All SAP Cloud Platform services can be found here.

To reiterate: The value of the SAP Cloud Platform comes from its services; the runtimes are the connective tissue that binds the services with each other while creating business value. In that message's spirit, we started reorganizing the SAP Cloud Platform cockpit to bring the services more into the developers' focus. As the screenshot below shows, we now display the provisioned service instances of all runtimes next to each other.


SAP Cloud Platform cockpit view that shows the service instances of both runtimes



The Cloud Foundry Manifest


To be more precise: The manifest of the Cloud Foundry Deploy Service


The mta.yaml file is the manifest of SAP's Cloud Foundry deploy service, and the manifest.yaml is the general Cloud Foundry manifest - you can use both on SAP Cloud Platform. In practice, I see more mta.yaml files, which is why I'll focus on them here.

This manifest defines two types of entries: modules (applications) and resources (services) consumed by the modules. Modules contain a type, source code files, and parameters to specify the runtime environment's compute resources. As an alternative to the source code, you can also refer to a prebuilt Docker image. Resources are defined with a service name, a service plan, and possibly individual parameters that specify the service instance's configuration. This configuration can be externalized in a JSON file to keep the manifest short and concise. Overall, the manifest is quite easy to read as it provides a clearly laid out set of parameters. The creators behind it strictly followed the KISS principle to make cloud deployments as easy as possible.
_schema-version: 3.2.0
ID: project
version: 1.0.0

modules:
  - name: module1
    type: javascript.nodejs
    path: folder1
    requires:
      - name: service_name
    parameters:
      disk-quota: 512M
      memory: 512M

resources:
  - name: service_name
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./configuration.json
      service: service-name
      service-plan: service-plan

The simple structure of the mta.yaml manifest


The snippet above shows a service binding. This means that the service credentials are injected into the environment variables of the module. All major programming languages provide directives to read these variables. To make life easier, you can also use packages to abstract these calls and directly access the service credentials.
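For instance, once a module is deployed with such a binding, the injected credentials show up in the VCAP_SERVICES environment variable. A quick way to check this from the command line (just a sketch, assuming the module name module1 from the snippet above) is:

# Print the app's environment; the bound service credentials appear under VCAP_SERVICES
cf env module1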

The manifest can also be used to describe the build-parameters of the project. They can be leveraged to trigger the build process with the mbt tool. This makes it easier to include the project in an optimized CI/CD pipeline later on. These build steps are executed locally, and only the build results will be included in the .mtar archive later. As the build steps are not needed during deployment, they are removed from the manifest. Only the resulting "deployment manifest", which is then called mtad.yaml, will be included in the .mtar archive.
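If you're curious, you can verify this locally by building the archive and listing its content. This is only a sketch, assuming the project ID and version from the first snippet; the generated deployment descriptor ends up in the META-INF folder of the archive:

# Build the archive with the Cloud MTA Build Tool
mbt build

# List the archive content - the generated mtad.yaml sits in META-INF/
unzip -l mta_archives/project_1.0.0.mtar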

The command to deploy the .mtar archive is, among other commands, provided by the MultiApps Cloud Foundry CLI Plugin:
cf deploy archive.mtar
cf undeploy mta-id
cf mta mta-id
cf mtas
# and more

 

The Kyma Manifest


To be more precise: The Kubernetes manifest


As Kyma builds on top of Kubernetes, it uses the deployment.yaml (the file name can vary) manifest to organize its resources.

Kubernetes provides many more resource types than Cloud Foundry. On top of that, Kyma adds additional resource types. Possible types are deployments, services to route traffic, secrets, API gateways, service instances, and service bindings. All these resources can be freely configured, connected, and annotated with so-called labels. These labels make it easier to organize, access, and patch resources later on. The resources can be described in one or multiple .yaml files, which are then sent to the Kubernetes API server. It is no surprise that this additional complexity offers a lot of freedom that the Cloud Foundry environment cannot offer. But we all know there is no free lunch: Apps built on Kyma are potentially more powerful than apps built on Cloud Foundry, but it is harder to design and set up applications that use the Kyma runtime.
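To illustrate how these labels pay off, here is a small kubectl sketch (using the placeholder label app: name and the deployment name value from the snippet below) that filters and patches resources by label:

# List all deployments and services that carry the label app=name
kubectl get deployments,services -l app=name

# Inspect all pods with that label
kubectl describe pods -l app=name

# Add or change a label on a running deployment
kubectl label deployment value environment=staging --overwrite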

At the time of this writing, not all SAP Cloud Platform services are available in the Kyma runtime. But I can assure you, we're working on extending the list of available services.

The following snippets show a similar application to the one we've seen above. It includes multiple resources that are separated by ---. The first deployment resource describes a pod that includes one container with fixed compute resources, a Docker image, a port that needs to be exposed (internally), and an attached service binding. The second service resource exposes the internal port as an internal service. The API rule resource exposes this service to the public internet and defines how communication can happen. The service instance resource describes the service name, the service plan, and the provisioning parameters. And the last service binding resource describes the service credentials of the provisioned service. This resource is also referenced in the first deployment resource above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: value
  labels:
    app: name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: name
  template:
    metadata:
      labels:
        app: name
    spec:
      volumes:
        - name: service-name
          secret:
            secretName: service-name-binding
      containers:
        - image: user/image
          imagePullPolicy: Always
          name: name
          ports:
            - name: http
              containerPort: 5000
          resources:
            limits:
              memory: 250Mi
            requests:
              memory: 32Mi
          volumeMounts:
            - name: service-name
              mountPath: "/etc/secrets/sapcp/servicename/name_service"
              readOnly: true

---
apiVersion: v1
kind: Service
metadata:
  name: value
  labels:
    app: name
spec:
  ports:
    - name: http
      port: 5000
  selector:
    app: name

---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: value
  labels:
    app: name
spec:
  service:
    host: approuter
    name: value
    port: 5000
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: noop
      mutators: []

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: service-instance
spec:
  clusterServiceClassExternalName: destination
  clusterServicePlanExternalName: lite
  parameters:
    param1: value1
    param2: value2

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: service-name-binding
spec:
  instanceRef:
    name: service-instance

The structure of the deployment.yaml manifest


At first sight, both manifests look very different, but when we take a closer look, we see that very similar things are going on. We also see that Kyma provides more options to define how services are exposed to the public and, therefore, also provides options to use services only for inter-pod communication. Another similarity is that you can use the xsenv package to retrieve the credentials from the service bindings. This package is the reason why we mounted the service bindings as volumes.
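If you want to double-check what such a binding looks like at runtime, you can inspect the generated secret and the files mounted into the container. This is just a sketch, assuming the placeholder resource names from the snippet above:

# The ServiceBinding materializes as a Kubernetes secret that holds the credentials
kubectl get secret service-name-binding -o yaml

# The same credentials are mounted as files into the running container
kubectl exec deploy/value -- ls /etc/secrets/sapcp/servicename/name_service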

The command to trigger the process that is described in the deployment.yaml manifest is, among other commands, provided by kubectl:
kubectl apply -f file

Hands-on: Deploy a Docker image with two bound services


I'll deploy the same Docker image that consumes two SAP Cloud Platform services (the destination and xsuaa services) to the Kyma and the Cloud Foundry runtime in the rest of this post. I think this example serves well to illustrate the similarities and differences between both approaches. In the end, we'll see two deployed SAPUI5 apps that display data from the Northwind service and are accessible via single sign-on. To save some time, I already created the Docker image and uploaded it to DockerHub. I added the Dockerfile I used here for the sake of completeness, but you don't have to worry about it as the image has already been created.
FROM node:12-alpine

WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./

RUN npm install --only=production
COPY . .

EXPOSE 5000
CMD [ "npm", "start" ]

0. Preparation


Before we get to the fun part, we need to install some tools that are mandatory for cloud development on SAP Cloud Platform (if you haven't done so already): the cf CLI with the MultiApps plugin, the Cloud MTA Build Tool (mbt), and kubectl.
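As a quick sanity check, you can confirm that the tools are available (the exact version output will differ on your machine):

# Cloud Foundry CLI and the MultiApps plugin (provides "cf deploy")
cf --version
cf plugins

# Cloud MTA Build Tool
mbt --help

# Kubernetes CLI, used to talk to the Kyma cluster
kubectl version --client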

1. Create the manifest for Cloud Foundry


First, you need to create the manifest file, the mta.yaml. Paste the following content in the file and then save it.
_schema-version: 3.2.0
ID: cloudnativelab2
version: 1.0.0

modules:
  - name: approuter
    type: javascript.nodejs
    build-parameters:
      no-source: true
    requires:
      - name: cloudnativelab2_destination
      - name: cloudnativelab2_uaa
    parameters:
      disk-quota: 512M
      docker:
        image: iobert/dockerized-sapui5-app
      memory: 512M

resources:
  - name: cloudnativelab2_destination
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./destination.json
      service: destination
      service-plan: lite
  - name: cloudnativelab2_uaa
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./xs-security.json
      service: xsuaa
      service-plan: application

You'll notice that this manifest outsources the service instance configuration. Therefore, you need to create the following file, destination.json:
{
  "init_data": {
    "subaccount": {
      "existing_destinations_policy": "update",
      "destinations": [
        {
          "Name": "Northwind",
          "Description": "Automatically generated Northwind destination",
          "Authentication": "NoAuthentication",
          "ProxyType": "Internet",
          "Type": "HTTP",
          "URL": "https://services.odata.org"
        }
      ]
    }
  }
}

And xs-security.json:
{
  "xsappname": "cloudnativelab2-cf",
  "tenant-mode": "dedicated",
  "oauth2-configuration": {
    "redirect-uris": [
      "https://*/**"
    ]
  }
}

Both files contain service-specific configuration that is applied when the service instances are created.

2. Deploy to the Cloud Foundry environment


The deployment here is straightforward. First, you need to build the .mtar archive (which includes the manifest), and then you need to deploy it.
mbt build
cf deploy mta_archives/cloudnativelab2_1.0.0.mtar

You'll find the URL of the app in the console output once the deployment is finished.

Tip: You don't need to wait until the deployment has finished before starting the next step.
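Once the deployment has run through, a few cf commands help you verify the result (a quick sketch; the MTA ID follows the manifest above):

# List the deployed MTAs and show the modules of this one
cf mtas
cf mta cloudnativelab2

# Check the app status and route, and the created service instances
cf apps
cf services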

3. Create the manifest for Kyma


As mentioned above, the Kubernetes manifest is more verbose due to the higher complexity. For the sake of simplicity, I wrote all definitions in a single file. Create a deployment.yaml file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: approuter
  template:
    metadata:
      labels:
        app: approuter
    spec:
      volumes:
        - name: destination
          secret:
            secretName: destination-service-binding
        - name: xsuaa
          secret:
            secretName: uaa-service-binding
      containers:
        # replace the repository URL with your own repository (e.g. {DockerID}/approuter:0.0.x for Docker Hub).
        - image: iobert/dockerized-sapui5-app
          imagePullPolicy: Always
          name: approuter
          ports:
            - name: http
              containerPort: 5000
          volumeMounts:
            - name: destination
              mountPath: "/etc/secrets/sapcp/destination/cloudnativelab2_destination"
              readOnly: true
            - name: xsuaa
              mountPath: "/etc/secrets/sapcp/xsuaa/cloudnativelab2_uaa"
              readOnly: true
          resources:
            limits:
              memory: 250Mi
            requests:
              memory: 32Mi

---
apiVersion: v1
kind: Service
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
spec:
  ports:
    - name: http
      port: 5000
  selector:
    app: approuter

---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
    apirule.gateway.kyma-project.io/v1alpha1: approuter
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    host: approuter.c-8a96de0.kyma.shoot.live.k8s-hana.ondemand.com # TODO: Update URL here
    name: cloudnativelab2
    port: 5000
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: noop
      mutators:
        - handler: header
          config:
            headers:
              x-forwarded-host: approuter.c-8a96de0.kyma.shoot.live.k8s-hana.ondemand.com # TODO: Update URL here
              x-forwarded-proto: https

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: uaa-service-instance
spec:
  clusterServiceClassExternalName: xsuaa
  clusterServicePlanExternalName: application
  parameters:
    xsappname: cloudnativelab2-kyma
    tenant-mode: dedicated
    oauth2-configuration:
      redirect-uris:
        - https://*/**

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: uaa-service-binding
spec:
  instanceRef:
    name: uaa-service-instance

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: destination-service-instance
spec:
  clusterServiceClassExternalName: destination
  clusterServicePlanExternalName: lite

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: destination-service-binding
spec:
  instanceRef:
    name: destination-service-instance

We need to add the special x-forwarded-host and x-forwarded-proto headers to run the approuter in Kyma (see here why). Find the "TODO" comments in this file and replace the values with the ID of your Kyma cluster.

You'll notice that this manifest contains the same information as the Cloud Foundry manifest. The only big difference is that the destination service's parameters are missing, but they could be added as well. Another minor difference is that there is, as far as I know, no way to externalize the service parameters in a separate file (please leave a comment if you know how to do this). Therefore, we included the configuration of the xsuaa service in the manifest. On top of that, you see metadata labels and networking configurations needed to expose the port of the Docker image to the outside world.

4. Deploy to the Kyma environment


Deploying apps to Kyma requires the following command:
kubectl apply -f deployment.yaml

You'll find the URL of the app in the Kyma console, or you can directly access the URL you inserted in the deployment file in the previous step.
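You can also check the rollout and read the exposed host straight from the cluster (a sketch based on the resource names from the manifest above):

# Watch the pod come up
kubectl get pods -l app=approuter

# Read the public host configured in the APIRule
kubectl get apirule cloudnativelab2 -o jsonpath='{.spec.service.host}'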

5. Inspect both apps


Once the deployment is finished, you can access both web apps. You'll notice that, apart from the URL in the browser, there is no difference. Both apps use the same Docker image and the same services, and therefore, they are identical.


Summary


I hope this post helped you to grasp the differences between both runtimes. We've seen that the Kyma runtime catalog currently only includes a subset of the available services in SAP Cloud Platform - but this will definitely change in the future.
Another difference is that Kyma requires a Docker image, so you need a development process that builds this image from a Dockerfile. The image needs to be uploaded to a registry before the actual deployment can be triggered. Cloud Foundry does not require Docker images but can leverage buildpacks to run the code directly from the .mtar archive.
The third big difference that I want to highlight is the speed of Kyma, which highly impressed me. The console (web interface) and the CLI are very fast and have almost no noticeable loading time. The same goes for the deploy time: The Cloud Foundry app took about 1:12 min to deploy, while the Kyma app was about 3x faster and only required 0:27 min.

You now also understand that runtimes should be used to combine the platform's services to create business value. A runtime by itself neither solves a business problem nor creates value. This is why it makes sense to pick the simplest runtime that fulfills your needs. If the Cloud Foundry runtime offers everything you need, I recommend using it. Cloud Foundry makes many assumptions, takes work off your developers, and therefore saves time. If you want to build more complex apps that require features such as internal routing or different scaling behavior, or if you deliberately want to diverge from the assumptions that the Cloud Foundry environment makes, I recommend the Kyma runtime.

Next Steps



 

Disclaimer: It might also make sense to use Istio features to redirect traffic between your application's services. Depending on your individual project setup, it might not be necessary to include an approuter in the project.






This was the second blog post of my bi-monthly series #CloudNativeLab. The name already says it all: This series won't be about building pure business apps with cloud-native technology. I think there are already plenty of great posts about those aspects out there. Instead, this series thinks outside the box and demonstrates unconventional use cases of cloud-native technology such as Cloud Foundry, Kyma, Gardener, etc.

Previous episode: Cloud-Native Lab #1 - 7 Ways to Define Environment Variables

Next episode: Cloud-Native Lab #3 – Comparing Cloud Foundry and Kyma Clients

 