Gunter Albrecht

Kyma, Kubernetes, Kanban? Deploying a multi-service application on SAP BTP

Dear readers, it is spring in Tokyo! 🌸Cherry blossom is in full bloom 🌸 Yet another rainy Sunday! ☔

I therefore decided to go out, pay a visit to my favorite bakery, get a coffee and think of an alternative plan for the day. ☕ And there was my idea! ⚡😁

On the way back I passed by this beautiful cherry tree – you see the picture above. Let’s see if I can host this Kanban software on SAP BTP!

Abstract

This article explains how to deploy a multi-service application on SAP BTP using Kyma and Kubernetes.


Flow of activities in this blog post

At the end of the tutorial you should be able to log in to Wekan and set up beautiful boards like the one below.


Wekan example setup for our blog post hosted on SAP BTP with Kyma

Motivation

Wekan is a wonderful implementation of software Kanban. It is based on MongoDB, offers great flexibility to customize boards and cards, and provides sufficient APIs to download the data. Moreover, the maintainer of this application kindly provides a Docker repository to deploy it as a container!

I tried to build an image and host it on SAP BTP Cloud Foundry (CF) the other day and failed, as it is a composition of services: the Wekan application and the MongoDB! As far as I know about CF, it won't support pushing such compositions. Also, it has limitations on port numbers: first, only one port can be exposed per container; second, the port number must be 1000 or above. There is also a disk quota restriction of 4 GB. If this is wrong information, please comment!

So I recalled that Kyma should make it possible to leverage Kubernetes to deploy it, and that is what I did.

Preparation

⚠ I’m using Windows so please adjust for Linux or Mac accordingly.

First, we need an SAP BTP account with Kyma activated. There are a few blog posts out there that explain how to set this up, so I won't repeat it here.

Second, we need a local installation of Docker Desktop. Also make sure you have a Docker Hub account registered so that you can push images to Docker Hub.

Third, there is a tool called Kompose that we will need later. Please have this installed as well. That should be all we need; a quick check is shown below.
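If you want to double-check that everything is in place before we start, a quick sanity check from the command line could look like this (the exact version numbers will of course differ on your machine):

docker --version
kubectl version --client
kompose version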

Building the Docker image

First we build our Docker image for Wekan. This gives us the possibility to change settings if we like. Later we will push it to our Docker Hub account to make it available to SAP BTP.

Let's clone the repository from GitHub into an empty folder on our computer:

git clone https://github.com/wekan/wekan.git

Note that there are two files of interest in the main folder: Dockerfile, which defines the "recipe" for how the image is built, and docker-compose.yml, which defines how the services work with each other (dependencies).

Let’s drill into the docker-compose.yml for a moment as it helps to understand a later step in this tutorial:

services:

  wekandb:
    image: mongo:4.4
    container_name: wekan-db
    restart: always
    command: mongod --logpath /dev/null --oplogSize 128 --quiet
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - wekan-db:/data/db
      - wekan-db-dump:/dump

  wekan:
    image: gunter04/wekan
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3001:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
    depends_on:
      - wekandb

volumes:
  wekan-db:
    driver: local
  wekan-db-dump:
    driver: local

networks:
  wekan-tier:
    driver: bridge

I’ve removed all comments so it becomes clearer. We see that two services are defined, one for the application (wekan) and one for the database (wekandb). We can also see that wekan depends on wekandb, which defines the sequence of service instantiation.

We also see that the network definition is shared between the two services, establishing a bridge network (the wekan-tier definition). I also mapped the exposed port to 3001, but (maybe) this is ultimately not required. We will see later.

We can now build the image (if you are not in the Wekan directory, either replace the dot with the path to it or cd into it first):

docker build . --tag gunter04/wekan

and then push it to Docker Hub (in my case gunter04/wekan). You can use this image if you like, or adjust accordingly.

docker push gunter04/wekan

Once it’s pushed to the hub (and therefore accessible to SAP BTP) we’re good to go to the next step. You can add a tag when building or pushing; otherwise the image will be tagged latest (see the example further below). If you wonder why you don’t need to authenticate to Docker Hub: you very likely already did so in Docker Desktop. You should now see a remote image:


Docker Desktop: List of remote images on Docker Hub
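As a side note, if you prefer an explicit version tag over latest, you can tag the image while building and push that tag instead. The tag 1.0 below is just an example:

docker build . --tag gunter04/wekan:1.0
docker push gunter04/wekan:1.0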

Preparing Docker Desktop for Kyma on SAP BTP

Now we prepare Docker Desktop to work with SAP BTP Kyma, well actually with the Kubernetes cluster underneath Kyma. But Kyma helps us with this. Let’s enter the SAP BTP cockpit and navigate to the overview of the subaccount in which you enabled Kyma:


SAP BTP: Subaccount overview with link to Kyma dashboard

We click the link to open the Kyma dashboard. From there we download the connection details (a kubeconfig file) which we will use locally with kubectl.


Connection details for Docker Desktop: Download from Kyma dashboard (upper right corner)

A YAML file will be downloaded with details specific to your instance of Kyma. Next, let’s check whether Docker Desktop has Kubernetes enabled. You find this under Settings ➝ Kubernetes. Check the box to enable it if not already done and wait for it to complete. It can take a few minutes.

In my case it never completed, so I reset Kubernetes, shut Docker Desktop down, restarted it, and it worked from there.

If everything looks good, we replace the content of the file called config in %userprofile%\.kube (for Linux or Mac it would be ~/.kube) with the content of the file downloaded from BTP.
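If you prefer not to overwrite your existing config, an alternative is to point kubectl at the downloaded file via the KUBECONFIG environment variable. The paths below are placeholders; on Windows:

set KUBECONFIG=C:\path\to\kubeconfig.yaml

On Linux or Mac:

export KUBECONFIG=~/path/to/kubeconfig.yaml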

Let’s test if that works by reading the context:

kubectl config get-contexts

Output should be something like this:

CURRENT   NAME                                              CLUSTER                                           AUTHINFO   NAMESPACE
*         xxxxxxx.kyma.shoot.live.k8s-hana.ondemand.com   xxxxxxx.kyma.shoot.live.k8s-hana.ondemand.com   OIDCUser

Configuring Kubernetes with Kompose

Do you remember when we looked into the docker-compose.yml? That file tells Docker how to create the containers, in which sequence to start the services, and how they interact.

Good or bad, this is not how it works with Kubernetes. Kubernetes orchestrates many containers (possibly of the same image) at the same time. Therefore it needs to know which services can be created and terminated just like that, and which ones (like our database) need their state preserved before they are terminated. For that, a multitude of files is needed.

Lucky for us, some friendly, clever people created Kompose to generate the required files from the docker-compose.yml. And sometimes, like in our case, this works out of the box without the need to edit them.

Let’s create them! We cd into the wekan folder that we cloned from GitHub earlier and run:

kompose convert

This generates a number of YAML files:

wekan-deployment.yaml
wekan-dockerfile-manifest.yaml
wekan-service.yaml
wekan-wekan-db-dump-persistentvolumeclaim.yaml
wekan-wekan-db-persistentvolumeclaim.yaml
wekandb-deployment.yaml
wekandb-service.yaml
wekan_wekan-tier-networkpolicy.yaml

Wow! We will not need all of them in the deployment step, though.
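Just to get a feeling for what Kompose produces, wekan-service.yaml should look roughly like the sketch below. The exact labels and annotations depend on your Kompose version, so treat this as an illustration rather than the literal output:

apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: wekan
  name: wekan
spec:
  ports:
    - name: "3001"
      port: 3001
      targetPort: 8080
  selector:
    io.kompose.service: wekan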

Deploying the application with Kubernetes and Kyma

Let’s stay at the command line and create a namespace for this deployment:

kubectl create namespace sapblog
➝namespace/sapblog created

Give it any name you prefer of course 😅!

You should now see a new namespace in the Kyma dashboard.


Kyma dashboard: Created namespace is shown

Let’s click on it, which navigates us to its resources. Now let’s gradually bring up Wekan.

kubectl -n sapblog apply -f wekandb-service.yaml
➝service/wekandb created

And let’s check what happened in Kyma:


Kyma dashboard: Wekan database service created

Let’s create the persistent volume claims next:

kubectl -n sapblog apply -f wekan-wekan-db-dump-persistentvolumeclaim.yaml
➝persistentvolumeclaim/wekan-wekan-db-dump created
kubectl -n sapblog apply -f wekan-wekan-db-persistentvolumeclaim.yaml
➝persistentvolumeclaim/wekan-wekan-db created
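If you like, you can verify the claims from the command line. Depending on the storage class they may stay in Pending until a pod actually uses them:

kubectl -n sapblog get pvc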

Next, let’s create the MongoDB instance:

kubectl -n sapblog apply -f wekandb-deployment.yaml
➝deployment.apps/wekandb created

We check what happened on Kyma:


Kyma dashboard: MongoDB instance created in a pod

We just created a MongoDB instance on SAP BTP! ✊ It should take about a minute until the status changes from WAITING to RUNNING.
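If you prefer the command line over the dashboard, you can watch the pod status and read the logs there as well. The deployment name wekandb comes from the generated files above:

kubectl -n sapblog get pods
kubectl -n sapblog logs deployment/wekandb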

If you want to see what’s going on in the dashboard, click the three dots and then Logs. Now that the database is running, we complete the final steps:

kubectl -n sapblog apply -f wekan-service.yaml
➝service/wekan created
kubectl -n sapblog apply -f wekan-deployment.yaml
➝deployment.apps/wekan created

Let’s check Kyma again.


Kyma dashboard: Wekan running for application and database

Kyma presents this to us in an overview as well:


Kyma dashboard: Deployments and Pods overview

Now let’s try out the application! Wait! How? 🤔

We add an API rule for the main service wekan in order to expose the Kanban board to the outside world, as sketched below.
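As a rough sketch, such an APIRule could look like the YAML below. The apiVersion, host and access strategy fields vary between Kyma versions, and depending on your version the host may need to include your full cluster domain, so treat this as an illustration rather than a copy-paste resource:

apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: wekan
  namespace: sapblog
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  host: wekan
  service:
    name: wekan
    port: 3001
  rules:
    - path: /.*
      methods: ["GET", "POST", "PUT", "DELETE"]
      accessStrategies:
        - handler: allow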

Yay! It works. You can register yourself by clicking “Register” and filling in the fields shown below. Once you push the “Register” button it shows an error, which is OK.

Deployed Wekan through Kyma on SAP BTP: Registration process

Then click on “Sign in” and log in with the credentials you just set. This makes you the admin. Enjoy! Wekan is a magnificent, speedy tool with a roadmap of future features – kudos to the team!

Summary

We have seen how to push a multi-service application to SAP BTP making use of Kyma, Kubernetes and Kompose. Now you know why I had to choose a Kanban application, to complement the many Ks. 😁

I’m very confident there are better ways to achieve the same result with Kyma – if you know one, let me know in the comments!

Appendix / Q&A / Troubleshooting

Q: kubectl will not let me connect to Kyma!

error: You must be logged in to the server (the server has asked for the client to provide credentials)

A: Download a fresh kubeconfig file through “Get KubeConfig” in the Kyma dashboard (see above) and replace your local config.

 

Q: How can I open a command line (shell) in a running container?

A: Get the pod names (add -n <your namespace> if you haven’t set a default) through

kubectl get pod

Then open a shell in the running container with

kubectl exec --stdin --tty <the name of your pod> -- /bin/bash

Actually this just opens the first container in the pod; kubectl prints a hint on how to list all containers in case you have multiple in the same pod. If /bin/bash is not available, try /bin/sh (it depends on the image).
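If you want a specific container, you can name it explicitly with -c. Pod and container names below are placeholders:

kubectl exec --stdin --tty <pod-name> -c <container-name> -- /bin/sh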

Q: Can I set a default namespace to work with?

A: Yes, to get an overview of all namespaces in Kyma/Kubernetes:

kubectl get namespace

to set a default namespace:

kubectl config set-context --current --namespace=<your namespace>


      10 Comments
      Sunil Varma Chintalapati

      Dear Gunter Albrecht,

      Thank You for the helpful blog.

      I've tried it and it is working fine.

      I do have an issue while trying to extend the above by adding mongo-express to the docker-compose file and deploying it to Kyma.

      It fails saying "no healthy upstream".

      To make sure I'm doing it right I've tried to deploy just mongo-express as available on Docker Hub (https://hub.docker.com/_/mongo). Interestingly, the docker-compose file works fine when deployed on Docker Desktop.

      However when I deploy to Kyma it fails with the same error.

      I'm using Kompose to convert the docker-compose file to K8s artifacts and adding an API rule to access the mongo-express service.

      Would appreciate Your inputs to fix my issue.

      Best Regards,

      Sunil

       

      Gunter Albrecht
      Blog Post Author

      Hi Sunil,

      happy you found the blog useful! Kompose is something I would suggest if you already have a working docker-compose file and want to convert it for K8s. If you want to add something new, don't add it to the docker-compose but to the existing .yaml files for K8s. In that case I suggest adding a deployment for mongo-express and a service for it, plus an API rule.

      "no healthy upstream" usually means a mismatch of ports between service and API rule and container.

      If you can share your current K8S yaml including all parts I can look into it.

      Gunter

      Sunil Varma Chintalapati

      Thank You Gunter,

      Figured out that your wekandb in the example above does not have an ID/password set, and after removing the defaults as per Docker Hub, I was able to get into wekandb and view the collections.

       

      Sunil Varma Chintalapati

      Hi Gunter,

      I'm having an issue migrating Wekan DB entries from my local Docker setup to Kyma.

      The existing dashboards and users are not being uploaded when I convert the same docker-compose to K8s artifacts.

      Just to make the query clear: I'm able to deploy Wekan successfully onto Kyma, but my existing data like users and dashboards is not visible in the Kyma Wekan.

      Any leads would be helpful.

      Thank You,

      Sunil

      Gunter Albrecht
      Blog Post Author

      Sunil Varma Chintalapati So you would like to migrate your MongoDB from a local Docker Compose setup to Kyma, is that correct?

      If so, you need to move the MongoDB data into your PVC on Kyma. There is no special migration approach but I would do the following:

      1. Run mongodump --out /some/local/folder/on/dockercompose
      2. Now you have the files (.bson etc.) from Docker in folders. I think you will see two folders, "wekan" and "admin".
      3. Transfer the folders into the PVC of your WekanDB on Kyma:
        kubectl cp -n wekan-prod /some/local/folder/on/dockercompose wekandb-75bc7fb9d9-vjdrk:/home/backup
        Replace the namespace with your namespace, the pod name with your DB pod name, and the destination /home/backup with a PVC folder where you want the data to go.
        You can add "--retries=10"; this helps if the connection breaks in between.
      4. Restore the MongoDB: mongorestore /home/backup --drop

       

      Now you have it migrated.

      Sunil Varma Chintalapati

      Thank You Gunter,

       

      Does that mean we can't use volume mounts in the docker-compose file to do the copy during the initial deployment?

      Best Regards,

      Sunil Chintalapati

      Gunter Albrecht
      Blog Post Author

      Hi Sunil,

      I'm not sure how you intend this to happen. Docker Compose and Kubernetes have some similarities but are different animals, and a deployment is not a data migration. If I misunderstood you, let me know how you thought the DB data could flow from Docker Compose to Kyma in the cloud.

      Kind regards,
      Gunter

      Sunil Varma Chintalapati

      Hi Gunter,

      I used the docker-compose below to run on my local Docker Desktop:

      version: '3.1'
      services:
        mongodb:
          image: mongo
          restart: always
          ports:
            - 27017:27017
          environment:
            MONGO_INITDB_ROOT_USERNAME: root
            MONGO_INITDB_ROOT_PASSWORD: example
          volumes:
            - ./mydb:/data/db
        mongo-express:
          image: mongo-express
          restart: always
          ports:
            - 8081:8081
          environment:
            ME_CONFIG_MONGODB_ADMINUSERNAME: root
            ME_CONFIG_MONGODB_ADMINPASSWORD: example
            ME_CONFIG_MONGODB_SERVER: mongodb
      The folder mydb contains all the collections that are being created/updated. Even if I drop my container and spin up a new one at a different point in time, the new container reads the collections available in this folder.

      With that understanding, I komposed this docker-compose to K8s artifacts and deployed it to Kyma K8s. Upon loading it on Kyma, I could not find the existing collections; it spins up like a brand new setup. That is where I understood that Kyma is not loading the existing files.

      I also tried to copy the collections using the "kubectl cp" command, but it ends up with the pod deployment crashing with exit code 137.

      I might be missing some basics of loading things to Kyma from my local desktop. My requirement is to create a template that can be deployed to multiple namespaces, each using its own dataset that I can load on demand, and I would like to avoid creating images with different datasets.

      Best Regards,

      Sunil Chintalapati
      Gunter Albrecht
      Blog Post Author

      Hi Sunil,

      I think I get it now. You create an image that already contains a dataset? I'm no Kubernetes/Docker guru but I'd say this is most certainly not a good idea. Why is that?

      1. The image gets bigger. It's best to keep an image as small as possible for deployment.
      2. You publish the data to everyone unless you have a secure image host (e.g. on Docker Hub you get one private repository with a free account versus unlimited public ones).
      3. I'm not clear how data baked into an image would mix with the concept of PVCs: PVCs are mounted into your container at start, so the path of the baked-in data most certainly deviates from the mount point of the PVC. And as you know, non-PVC storage is volatile, meaning it will disappear at pod restart.

      So my suggestion is:

      1. Use the standard images from MongoDB and WeKan. No need to create your own docker image in this case.
      2. Put the environment variables into the deployment definition (container section)
      3. Put passwords etc. into Kubernetes secrets
      4. Consider using a helm chart if you want to make it more flexible. Maybe also read about the service account concept.

      In general your questions are about Docker vs. Kubernetes, and I recommend also looking into those communities.

      Sunil Varma Chintalapati

      Sure, Thank You Gunter. Appreciate Your inputs.