Nuno Pereira

Project piper installation on Kyma journey

As a follow-up to my previous blog post, I received very valuable feedback suggesting that I give Project Piper a try.

I wanted to try Piper on Kyma, since we already have our own pipelines and custom shared libraries. I followed this option hoping to have both Piper and our pipelines working in the same container, which I believe is the most valuable scenario for our team. Also, ending up with a Docker image containing everything needed is a much more portable solution than installing each piece of software one by one on a new server.

The Piper documentation catalogs deployment on Kubernetes as experimental, so make sure to test it properly if you're using this in your productive environment. Also, a disclaimer: I'm no expert on Kyma or Piper (this was my first exposure to both), so maybe I had a bad journey due to lack of knowledge, which is fair. Nevertheless, I think the whole process could be documented better, so I hope this blog brings attention to it and someone contributes a step-by-step procedure to the documentation.

1st Attempt – Using deprecated helm chart

I checked the official documentation, which points to the usage of Helm charts to install it on Kyma. From my understanding, Helm charts are just highly configurable recipes to import all the Kyma/Kubernetes resources via YAML files. You can then override the default values while installing from charts, either on the command line or via an extra override file.
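As a sketch of that override-file mechanism (the file name and structure below are illustrative, assuming a chart that exposes `controller.image` and `controller.tag` values, as the commands later in this post do):

```yaml
# values-override.yaml -- illustrative override file for a Jenkins Helm chart
namespaceOverride: piper
controller:
  image: ppiper/jenkins-master   # custom image instead of the chart default
  tag: latest
```

You would then pass it with `helm install devops jenkins/jenkins -f values-override.yaml`, which is equivalent to repeating `--set key=value` flags on the command line.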


Official piper installation on kubernetes documentation

If you follow the link for the Helm chart, you'll get to a page with a deprecated GitHub repo. The deprecated Helm chart does have a Master.Image value that you can supply, but I wasn't able to determine the URL of this deprecated repo to add it via "helm repo add piperofficial <repourl>". If you know the answer, just let me know in the comments. I didn't focus too much on this, since the Helm chart was marked as deprecated anyway, so I moved on to the new one.

2nd Attempt – Using the new official helm chart

If you follow the link to the new Helm chart, you'll get to the official Jenkins chart repository. I was able to add the repo with the command

helm repo add jenkins https://charts.jenkins.io

then to install it via

 .\helm.exe install devops jenkins/jenkins --set namespaceOverride=piper

The execution was successful: I was able to open Jenkins, authenticate, and go to the system settings to confirm the Piper shared library was there. Problem: the image is the standard jenkins/jenkins image without the Piper library installed. If, on top of this, I had to install all the Jenkins plugins required by Piper plus the Piper library manually, it would be too much effort. So I removed everything and tried again, this time overriding the image and tag of the Helm chart to the Piper one.
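For anyone wondering where the login credentials come from (this also comes up in the comments below): the standard Jenkins image generates an initial admin password at first start. Assuming the chart's default StatefulSet pod name for a release called "devops" (adjust to your own), it can usually be read like this:

```shell
# Option 1: the initial password is printed in the pod log during setup.
kubectl -n piper logs devops-jenkins-0

# Option 2: read it from the Jenkins home directory inside the container.
kubectl -n piper exec devops-jenkins-0 -- \
  cat /var/jenkins_home/secrets/initialAdminPassword
```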

 .\helm.exe install devops jenkins/jenkins --set namespaceOverride=piper --set controller.image=ppiper/jenkins-master --set controller.tag=latest

Everything was deployed but the pod didn’t start… Why?


Jenkins pod log

The logs of the pod mention problems with Jenkins plugin dependencies... The last thing I want to deal with is dependency version mismatches. From what I understand, the Docker images ppiper/jenkins-master and jenkins/jenkins (which the Helm chart was built for) use different Jenkins runtime versions and ship different plugins. That is normal, but it led me to conclude that this would not work without a deeper look into solving these dependency/version/compatibility issues. If you've found a simple way to do it via this approach, please let me know in the comments which steps you followed.
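When a pod stays down like this, a few standard kubectl commands usually reveal the cause (the placeholder pod name is illustrative):

```shell
# Show pod status and recent events (image pull errors, crash loops, ...)
kubectl -n piper get pods
kubectl -n piper describe pod <pod-name>

# Full container log, including the log of the previous (crashed) run
kubectl -n piper logs <pod-name>
kubectl -n piper logs <pod-name> --previous
```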

3rd Attempt – Create it from scratch

I gave up on using the official documentation, and the only solution that worked for me was to create everything from scratch on Kyma. Below you can find the artifacts I've created (I used a Deployment with a ReplicaSet mounting a volume on the Jenkins home that points to a PersistentVolumeClaim, instead of the StatefulSet approach that the Helm chart uses).

apiVersion: v1
kind: Namespace
metadata:
  name: piper
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: piperpvc
  namespace: piper
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: piperapp
  namespace: piper
  labels:
    app: piperapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: piperapp
  template:
    metadata:
      labels:
        app: piperapp
    spec:
      containers:
        - name: pipercontainer
          image: ppiper/jenkins-master
          imagePullPolicy: Always
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - name: jenkins-volume
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-volume
          persistentVolumeClaim:
            claimName: piperpvc
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
apiVersion: v1
kind: Service
metadata:
  name: piperappserv
  namespace: piper
  labels:
    app: piperapp
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: piperapp
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
---
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  labels:
    app: piperapp
  name: piperapirule
  namespace: piper
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  host: piperapihost
  rules:
    - accessStrategies:
        - config: {}
          handler: noop
      methods:
        - GET
        - POST
        - PUT
        - DELETE
        - OPTIONS
        - PATCH
      path: /.*
  service:
    name: piperappserv
    port: 80

To apply it, assuming you have a file named fullyaml.yaml with the contents above, just run:

kubectl apply -f fullyaml.yaml

I've tested this image: I was able to execute Piper commands, and I also force-terminated the pods to verify that the state was preserved after their automatic recreation. Although this was done on my trial account, I believe it will work the same way on a non-trial one.
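The persistence check described above can be sketched with the labels from the manifest above:

```shell
# Delete the running pod; the Deployment recreates it automatically.
kubectl -n piper delete pod -l app=piperapp

# Wait for the replacement pod to become ready again.
kubectl -n piper wait --for=condition=Ready pod -l app=piperapp --timeout=300s

# Jenkins state (jobs, credentials, plugins) should still be there,
# because /var/jenkins_home lives on the piperpvc PersistentVolumeClaim.
```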


Jenkins with piper working on Kyma

I was able to run piper integrationArtifactDeploy and it worked without problems. Now, if I wanted to use our current shared libraries with the GitHub Actions approach, I would need to either:

  1. Break our shared libraries into small individual pieces that can be executed separately (this would be a lot of work, and I would end up with at least 10x more files than I have now). Problem: our shared libraries perform many operations (reading, writing and zipping files, parsing JSONs or delegation tables, creating backups on the OS). I'm not sure how to put all of this together in a GitHub pipeline without it becoming a monster file. It would need to be split in a very smart way to encapsulate many of these low-level commands.
  2. Leave them as they are (doing all the logic inside). This would be easier, since the GitHub pipeline would only reference each of our custom shared libraries (one for backup, one for documentation, one for testing, etc.). The GitHub pipeline file would be cleaner; on the other hand, the potential for reuse could drop (not everyone wants to sync with Crucible, for instance).

Next steps

  • Give the CI/CD BTP service a chance and see how well we can integrate it with our own pipelines in an advanced usage of this service. UPDATE: with this service you can either configure your steps from the UI or read them from a GitHub pipeline; nevertheless, the execution will always run on a managed internal Piper installation that, as far as I know, we don't have access to. So this option would not allow us to use our own shared libraries.
  • Continue to evaluate potential benefit of using the official piper libraries instead of the regular jenkins pipelines we already have (right now I have serious doubts it would be worth the effort of migration)
  • I was thinking it would be pretty cool to provision everything in an automated way: enable the Kyma runtime, fetch the kubeconfig file, and apply a Helm chart or a YAML file (such as the one above) in a completely automated process without user intervention. I've checked the BTP setup automator. In its FAQs you can find a reference to Kyma, where it is mentioned that, because of kubelogin, you would need a browser open; so most likely you would need to break it into two processes and do this step manually in between, which is not that great. If you had some success trying to do this, let me know.
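Given that constraint, the best I can see is a two-phase script with the browser-based kubelogin step done manually in between; a rough sketch (file names are illustrative, and the provisioning phase depends entirely on your BTP setup):

```shell
# Phase 1: provision the Kyma runtime for the subaccount
# (e.g. via the BTP setup automator; details depend on your account setup).

# Manual step: download the kubeconfig for the Kyma runtime and complete
# the browser-based kubelogin flow once, saving the result as kubeconfig.yaml.

# Phase 2: apply the resources from this post against the cluster.
export KUBECONFIG=./kubeconfig.yaml
kubectl apply -f fullyaml.yaml
```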


As you can see, my installation journey was not smooth; I'm not sure whether that was due to my lack of knowledge of Kyma, the not-so-great documentation, or both. To be fair, I was able to use Piper on premise with the Cx Server, which is also the recommended approach, but I was thinking that if we invest in migrating to Piper, we should also get rid of the maintenance costs for our on-premise server. I'm not sure if there's a way to use the Cx Server on Kyma; if you have experience with that, please comment.

Comments
Park Woongki

      Really Nice

Nuno Pereira (Blog Post Author)

      Thanks Park Woongki 🙂

Hugo Figueiredo

      Hello Nuno,

How did you get access to the credentials? I can't find them to log in.

      Other than that, great job, very helpful!

Nuno Pereira (Blog Post Author)

      Hi Hugo,


      Thanks, hope this tutorial can help you.


      Nuno Pereira

Gregor Wolf

      Hi Nuno,

      I think Hugo is asking how to get the credentials for the Jenkins Admin login. Can you provide any information? I'm also stuck there.

      Best Regards

CP-A SAP Education Course

      Hi Gregor,

      The initial password will be created while the container is set up. You will find it in the logs of the created Pod.

      Best regards, Peter