Gunter Albrecht

SAP Cloud Connector deployed on SAP Kyma

One of the great advantages of SAP Kyma is that as long as your planned workload runs on top of the internet protocol, there are virtually no limits on what you can deploy.

Content of this blog

Connecting your SAP on-premise systems to SAP BTP requires the SAP Cloud Connector (CC) for a secure connection. For testing purposes it can be handy to have a CC running 24/7. Instead of searching for a box to install it on, why not run it on Kyma?

This blog explains the necessary steps, from CC download through container image creation to the final deployment on the Kyma runtime.

1. Download of SAP Cloud Connector

It all starts with downloading the Cloud Connector. Since we only aim for a test installation, we pick the portable package. You can download it from the SAP Development Tools site for Linux on x86_64.

Could we automate the download inside the container with curl? Technically yes, but I'm not sure whether the license allows obtaining the package this way. If you know, you can enhance the image accordingly.

2. Container image creation

Next, we start up Docker Desktop and VSCode (VSCode is not strictly needed; any text editor will do). Move the downloaded CC package into an empty folder.

Now, create an empty Dockerfile in that folder and open it in the editor. I suggest building it on a Java 11 image, which is still quite lean.

#Build SAP Cloud Connector image
FROM bitnami/java:11-debian-10

WORKDIR /usr/sapcc

COPY . /home

RUN apt-get update && \
    apt-get install -y lsof nano

EXPOSE 8443/tcp

If you are not familiar with Docker: FROM sets the base image, a lean Debian Linux with Java 11 (OpenJDK) pre-installed. WORKDIR creates the directory you land in when you tty into the container at runtime. COPY puts the Cloud Connector package into the /home folder. Then we install lsof (needed by CC and not part of the image) and nano as an editor (not needed, just in case you want to open a text file at runtime later).

Finally, port 8443, the standard CC port, is exposed so that the container can be reached from outside.

Do you know how Kyma/Kubernetes retrieves your image? It is pulled from a container registry! I use Docker Hub in this example, but many alternatives exist. If you have not registered yet, please do and get a username.

Now we are good to build the image with

$ docker build -t <your-registry-and-username>/java11-sapcc:1.0 .

This can take a few minutes to download the base layers and build the image. Next, we push the local image to the registry, for example with

$ docker push <your-registry-and-username>/java11-sapcc:1.0         
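Before deploying, you can optionally smoke-test the image locally. The image defines no CMD, so we pass the same unpack-and-start command the deployment will use later (the image tag is a placeholder):

```shell
# Run the container locally and map the admin port.
# The CC archive is unpacked first because the image has no CMD;
# WORKDIR is /usr/sapcc, so the extraction lands there.
docker run --rm -p 8443:8443 <your-registry-and-username>/java11-sapcc:1.0 \
  /bin/sh -c "tar -xzof /home/sapcc*.tar.gz && ./go.sh"
```

Afterwards, https://localhost:8443 should show the CC login page (expect a self-signed certificate warning).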

Done, ready to deploy on Kyma!

3. Kyma Deployment

I assume you know how to get Kyma running on BTP (a free trial is available!) or as your own cluster. If you want to go through the first steps, there are great tutorials out there.

Create a new file in your folder and name it, e.g., sap-cc-deployment.yaml. We will build up the deployment file step by step.

3a. Namespace

We need a namespace without Istio sidecar injection, because SAP CC already serves HTTPS on its own port and we want to expose it to the internet directly.

apiVersion: v1
kind: Namespace
metadata:
  name: dl-sapcc
  labels:
    istio-injection: disabled
---
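Once applied, you can verify that sidecar injection is really off for the namespace:

```shell
# The LABELS column should show istio-injection=disabled
kubectl get namespace dl-sapcc --show-labels
```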

3b. Persistent Volume Claim

Cloud Connector needs some persistence to store its configuration data, so we add a persistent volume claim (PVC):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:    
  name: sapcc-pvc
  namespace: dl-sapcc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
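A quick check that the claim was provisioned (depending on the storage class, it may stay Pending until the first pod consumes it):

```shell
# STATUS should be (or become) Bound
kubectl get pvc sapcc-pvc -n dl-sapcc
```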

3c. Deployment of our image

The important part: deploy the image in a pod as a container and mount the PVC's volume into it.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sap-cloud-connector
  namespace: dl-sapcc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sapcc-app
  template:
    metadata:
      labels:
        app: sapcc-app
    spec:
      containers:
        - name: sap-cc-image
          image: <your-registry-and-username>/java11-sapcc:1.0
          imagePullPolicy: Always  
          ports:
          - containerPort: 8443
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"  
          volumeMounts:
          - mountPath: /usr/sapcc
            name: sapcc-volume
          command: ["/bin/sh"]
          args: ["-c", "if [ ! -f /usr/sapcc/go.sh ]; then tar -xzof /home/sapcc*.tar.gz && rm /home/*.tar.gz; else echo Already installed, starting; fi; ./go.sh"]   
      volumes:
      - name: sapcc-volume
        persistentVolumeClaim:
          claimName: sapcc-pvc
---

After the container starts, this unpacks the CC package if that has not been done before (e.g., because the pod restarted at some point) and then calls the start script.
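To confirm that the pod came up and the start script ran, you can watch the rollout and tail the logs:

```shell
# Wait until the single replica is available
kubectl rollout status deployment/sap-cloud-connector -n dl-sapcc
# Tail the Cloud Connector startup output
kubectl logs deployment/sap-cloud-connector -n dl-sapcc --tail=20
```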

3d. Service as load balancer

The next step is the service, which connects to the pod and exposes port 8443 to the internet through a load balancer IP.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: sapcc-app
  name: sappcc-service
  namespace: dl-sapcc
spec:
  ports:
  - name: "adminport"
    port: 8443
    targetPort: 8443
  type: LoadBalancer
  selector:
    app: sapcc-app
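After applying, the assigned load balancer IP shows up on the service (EXTERNAL-IP stays <pending> until the cloud provider has provisioned it):

```shell
kubectl get service sappcc-service -n dl-sapcc
```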

 

3e. Add DNS entry

To reach the service's load balancer, a DNS entry is handy; that way, no IP address must be typed. You can create the entry either in the Kyma 2.0 UI's DNS Entry menu or by adding the annotations below to the service from 3d:

apiVersion: v1
kind: Service
metadata:
  annotations:
    dns.gardener.cloud/dnsnames: dlcc.<your-kyma-cluster-name>.k8s-hana.ondemand.com
    dns.gardener.cloud/ttl: "600"
    dns.gardener.cloud/class: garden
  labels:
    app: sapcc-app
  name: sappcc-service
  namespace: dl-sapcc
spec:
  ports:
  - name: "adminport"
    port: 8443
    targetPort: 8443
  type: LoadBalancer
  selector:
    app: sapcc-app

Save the file with all elements in it and run this command on your Kyma cluster:

$ kubectl apply -f sap-cc-deployment.yaml

You should now be able to call the SAP Cloud Connector on Kyma and attach it to your on-premise systems and SAP BTP.
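A quick reachability check from the command line (the -k flag skips certificate verification, since CC ships with a self-signed certificate; the hostname is the placeholder from above):

```shell
curl -kI https://dlcc.<your-kyma-cluster-name>.k8s-hana.ondemand.com:8443
```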

(Screenshot: Deployed Cloud Connector on SAP BTP Kyma)

Summary

This was a simple example of deploying a workload to Kyma where the application is exposed by load balancer. A production instance of SAP Cloud Connector will likely run on-premise since your on-premise production systems are not exposed to the internet.

9 Comments
Piotr Tesny

      Hi Gunter Albrecht,

      Thanks for your very interesting blog.

      I have been using a SAP Cloud Connector deployed on a kyma cluster (or CF) for quite some time already.

However, I took a different approach. First I made an SCC work with HTTP on port 8080 in a local SUSE Linux Docker image. This way I could deploy it in a CF Diego cell or as a Kyma workload.

When it comes to the Kyma cluster, I did not have to disable Istio or use a LoadBalancer resource.

      Actually, because I could retain istio I was able to expose the CC to the public internet with a single API rule. Moreover the API rule can be protected with a JWT strategy for increased security.

      Another difference is that my SCC docker image is stateless (no need to rely on a PVC) and the backups/ restores of the configuration are controlled via the SCC REST API.

      cheers

      Piotr

Murali Shanmugham

      This is interesting. Thanks for sharing.
      I have seen scenarios where customers deal with pure cloud solutions and had the need to use agents like DPAgent/SDI Adapter/Cloud Connector etc. This approach of using Kyma would be good.

Martin Donadio

      Hi Gunter Albrecht

      Great blog post, thanks for sharing !

      I am trying to expose an SFTP server with a custom dns.

      The server is already deployed in my trial Kyma cluster and I can reach the server with a simple port-forward to the Service created.

However, I am not able to reach the container at the desired DNS hostname like you shared in the post.

      The Service definition I have:

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          dns.gardener.cloud/dnsnames: sftp.d0deb3d.kyma.ondemand.com
          dns.gardener.cloud/ttl: "600"
          dns.gardener.cloud/class: garden
        labels:
          app: lively-decision
        name: lively-decision-svc
        namespace: prueba
      spec:
        ports:
        - name: "sftpport"
          port: 22
          targetPort: 22    
        type: LoadBalancer
        selector:
          app: lively-decision

Any hint is appreciated.

       

      Thanks

      Martin


Martin Donadio

Thanks to Piotr Tesny, I found the answer in his comment:

      "...  make sure the port value is not the same as the targetPort value"

In the service definition I was using, the port and targetPort values were the same.

After changing the port value to a different one, I was able to reach the SFTP server at the DNS name defined in the annotation.


Gunter Albrecht (Blog Post Author)

Martin Donadio, great application idea, and happy you made it work, Martin!

Kivanc Aktas

      Hi Gunter,

      Thanks for sharing, very interesting topic.

Do you have any VPC options in the Kyma runtime to access an on-premise SAP ECC that can only be reached via VPN?

      Kind regards,

      Kivanc

Gunter Albrecht (Blog Post Author)

      Hi Kivanc,

Maybe one of the Kyma experts can answer that. Piotr Tesny, your idea is to have the CC on Kyma inside the VPC where the ECC system also resides?

      Kind regards,
      Gunter

Piotr Tesny

      Hello Gunter Albrecht , thanks for the heads-up;

Indeed, "my" approach rather accommodates pure cloud scenarios where the CC is a Kyma cluster workload bridging access from chosen SAP BTP subaccounts into so-called "on-premise" systems. These so-called OP systems must be accessible from the CC as well.

In other words, for this to work one would need to expose the required ECC endpoint(s) to the public internet first.

Alternatively, a SAP CC could be installed in the same subnet as the ECC system, but the VPN tunnel would have to be configured to allow incoming traffic from the chosen BTP subaccounts.

The latter approach can work nicely with SAP S/4HANA or SAP ECC with a SAP CC in the same subnet as these SAP ERP systems, for instance inside an AWS VPC. In this scenario, the built-in connectivity proxy of a Kyma cluster can be leveraged to implement the inbound communication.

      I hope that helps; kind regards; Piotr

Piotr Tesny

      Hello Kivanc Aktas ,

SAP BTP, Kyma runtime does not offer a VPC peering option as we speak.

However, it offers a built-in connectivity proxy to help implement cloud-to-on-premise communication via SAP CC; cf. my answer above.

      I hope that helps; kind regards; Piotr