Gunter Albrecht

SAP Cloud Connector deployed on SAP Kyma

One of the great advantages of SAP Kyma is that, as long as your planned workload runs on top of the internet protocol, there are virtually no limits on what you can deploy.

Content of this blog

Connecting your SAP on-premise systems to SAP BTP requires the SAP Cloud Connector (CC) for a secure connection. For testing purposes, it can be handy to have a CC running 24/7. Instead of searching for a box to install it on, why not run it on Kyma?

This blog explains the necessary steps, from CC download through container image creation to the final deployment on the Kyma runtime.

1. Download of SAP Cloud Connector

It all starts with downloading the Cloud Connector. As we aim only for a test installation, we pick the portable package. You can download it from the SAP development tools site for Linux on x86_64 architecture.

Could we automate the download in the container through curl? Technically yes, but I'm not sure whether the license allows obtaining the package this way. If you know, you can enhance the image accordingly.

2. Container image creation

Next, we start up VSCode and Docker Desktop. VSCode is not required; any text editor will do. We move the downloaded CC package into an empty folder.

Now, create an empty Dockerfile in that folder and open it in the editor. I suggest building it on a Java 11 image, which is still very lean.

#Build SAP Cloud Connector image
FROM bitnami/java:11-debian-10

WORKDIR /usr/sapcc

COPY . /home

RUN apt-get update && \
    apt-get install -y lsof nano

EXPOSE 8443/tcp

If you are not familiar with Docker: FROM names the base image, a lean Debian Linux with Java 11 (OpenJDK) pre-installed. WORKDIR creates the directory that will be opened if you tty into the container at runtime. With COPY the cloud connector package goes into the /home folder. Then we install lsof (needed by CC and not part of the image) and nano as an editor (not needed, just in case you want to open a text file later at runtime).

Finally, port 8443, the standard CC administration port, is exposed so that the container can be reached from outside.

Do you know how Kyma/Kubernetes retrieves your image? It is pulled from a container registry! I use Docker Hub in this example, but many alternatives exist. If you have not registered yet, please do so and get a user name.
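Once you have an account, log in from your local Docker CLI so the push later on is authorized (the user name below is a placeholder):

$ docker login -u <your-username>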

Now we are good to build the image with

$ docker build -t <your-registry-and-username>/java11-sapcc:1.0 .

This can take a few minutes to download the base image and build ours. Next, we push the local image to the registry, for example with

$ docker push <your-registry-and-username>/java11-sapcc:1.0         

Done, ready to deploy on Kyma!
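Before moving on, you can optionally smoke-test the image locally with Docker. This is a sketch that runs the same unpack-and-start command line the Kyma deployment below uses, just without a persistent volume:

$ docker run --rm -it -p 8443:8443 <your-registry-and-username>/java11-sapcc:1.0 \
    /bin/sh -c "tar -xzof /home/sapcc*.tar.gz && ./go.sh"

The administration UI should then answer on https://localhost:8443 with the CC's self-signed certificate.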

3. Kyma Deployment

I assume you know how to get Kyma running on BTP (free trial is available!) or as your own cluster. If you want to go through first steps there are great tutorials out there.

Create a new file in your folder and name it e.g. sap-cc-deployment.yaml. We will build up the deployment manifest step by step.

3a. Namespace

We need a namespace without Istio sidecar injection. That is because SAP CC already serves HTTPS on its port, and we want to expose it to the internet directly.

apiVersion: v1
kind: Namespace
metadata:
  name: dl-sapcc
  labels:
    istio-injection: disabled
---

3b. Persistent Volume Claim

Cloud Connector needs some persistence (PVC) to store configuration data. So here it is:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:    
  name: sapcc-pvc
  namespace: dl-sapcc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---

3c. Deployment of our image

The important part – deploy the image as a container in a pod and mount the PVC's volume into it.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sap-cloud-connector
  namespace: dl-sapcc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sapcc-app
  template:
    metadata:
      labels:
        app: sapcc-app
    spec:
      containers:
        - name: sap-cc-image
          image: <your-registry-and-username>/java11-sapcc:1.0
          imagePullPolicy: Always  
          ports:
          - containerPort: 8443
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"  
          volumeMounts:
          - mountPath: /usr/sapcc
            name: sapcc-volume
          command: ["/bin/sh"]
          args: ["-c", "if [ ! -f /usr/sapcc/go.sh ]; then tar -xzof /home/sapcc*.tar.gz && rm /home/*.tar.gz; else echo Already installed, starting; fi; ./go.sh"]   
      volumes:
      - name: sapcc-volume
        persistentVolumeClaim:
          claimName: sapcc-pvc
---

After the container starts up, this unpacks the CC package if that has not been done before (e.g. if the pod has restarted once already) and then calls the start script.

3d. Service as load balancer

The next step is the service, which connects to the container and exposes port 8443 to the internet through a load balancer IP.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: sapcc-app
  name: sappcc-service
  namespace: dl-sapcc
spec:
  ports:
  - name: "adminport"
    port: 8443
    targetPort: 8443
  type: LoadBalancer
  selector:
    app: sapcc-app

 

3e. Add DNS entry

To be able to reach the service's load balancer, a DNS entry is handy – that way no IP address needs to be typed. You can create the entry either in the Kyma 2.0 UI's DNS Entry menu or by extending the service from 3d with the annotations below:

apiVersion: v1
kind: Service
metadata:
  annotations:
    dns.gardener.cloud/dnsnames: dlcc.<your-kyma-cluster-name>.k8s-hana.ondemand.com
    dns.gardener.cloud/ttl: "600"
    dns.gardener.cloud/class: garden
  labels:
    app: sapcc-app
  name: sappcc-service
  namespace: dl-sapcc
spec:
  ports:
  - name: "adminport"
    port: 8443
    targetPort: 8443
  type: LoadBalancer
  selector:
    app: sapcc-app

Save the file with all elements in it and run this command on your Kyma cluster:

$ kubectl apply -f sap-cc-deployment.yaml
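To verify the rollout, check the pod, its logs, and the service (resource names as defined in the manifests above):

$ kubectl -n dl-sapcc get pods
$ kubectl -n dl-sapcc logs deployment/sap-cloud-connector
$ kubectl -n dl-sapcc get svc sappcc-service

Once the load balancer is provisioned, the service's EXTERNAL-IP column shows the public address.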

You should now be able to call the SAP Cloud Connector on Kyma and attach it to your on-premise systems and SAP BTP.
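A quick reachability test from your machine (the host name is the DNS entry from step 3e; -k accepts the Cloud Connector's self-signed certificate):

$ curl -k -I https://dlcc.<your-kyma-cluster-name>.k8s-hana.ondemand.com:8443

You should get an HTTP response from the CC administration UI; open the same URL in a browser to log on.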

Deployed Cloud Connector on SAP BTP Kyma

Summary

This was a simple example of deploying a workload to Kyma where the application is exposed by load balancer. A production instance of SAP Cloud Connector will likely run on-premise since your on-premise production systems are not exposed to the internet.


      2 Comments
      Piotr Tesny

      Hi Gunter Albrecht,

      Thanks for your very interesting blog.

      I have been using an SAP Cloud Connector deployed on a Kyma cluster (or CF) for quite some time already.

      However, I did take a different approach. First I made an SCC in a local SUSE Linux Docker image work with HTTP on port 8080. This way I could deploy it in a CF Diego cell or as a Kyma workload.

      When it comes to a Kyma cluster, I did not have to disable Istio or use a LoadBalancer resource.

      Actually, because I could retain istio I was able to expose the CC to the public internet with a single API rule. Moreover the API rule can be protected with a JWT strategy for increased security.

      Another difference is that my SCC docker image is stateless (no need to rely on a PVC) and the backups/ restores of the configuration are controlled via the SCC REST API.

      cheers

      Piotr

      Murali Shanmugham

      This is interesting. Thanks for sharing.
      I have seen scenarios where customers deal with pure cloud solutions and need to use agents like DPAgent/SDI Adapter/Cloud Connector etc. This approach of using Kyma would be good for those.