
In my previous blog, I leveraged the SAP Data Hub, developer edition as my SAP Data Hub Distributed Runtime:

This worked well to some extent, but is of course not a supported architecture. Therefore, I will explain how to install the SAP Data Hub Distributed Runtime on the SUSE CaaS Platform which is supported as per SAP Note 2464722 – Prerequisites for installing SAP Data Hub:

To start with, I install three SUSE CaaSP nodes as per the respective Deployment Guide.

Administration Node

  • 2 cores
  • 12 GB RAM
  • 40 GB disk

Please ensure that you select Install Tiller during the Initial CaaS Platform Configuration, as this will be needed for the SAP Data Hub Distributed Runtime:

Master Node

  • 2 cores
  • 8 GB RAM
  • 40 GB disk

Worker Node

  • 4 cores
  • 20 GB RAM
  • 80 + 16 GB disks

This is the node onto which I will install the SAP Data Hub Distributed Runtime later. To make this work, I need to mount an additional volume as /var/local:
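The steps I used for this can be sketched as follows. The device name /dev/sdb is an assumption, check the actual name of the additional disk with lsblk on your worker node; the destructive commands are shown as comments:

```shell
# Minimal sketch, assuming the additional 16 GB disk appears as /dev/sdb
# (verify with lsblk); run on the worker node as root.
DEV=/dev/sdb
MNT=/var/local

# Destructive, run once:
#   mkfs.xfs "$DEV"
#   mkdir -p "$MNT"

# Persistent mount entry for /etc/fstab:
FSTAB_LINE="$DEV $MNT xfs defaults 0 0"
echo "$FSTAB_LINE"

# On the node, append the entry and mount it:
#   echo "$FSTAB_LINE" >> /etc/fstab && mount "$MNT"
```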

Cluster

As a result, my cluster is ready and I can monitor its Cluster Status from Velum:

As well as my respective Cluster Nodes from my Kubernetes dashboard:

With this, I configure my Docker Registry to use self-signed certificates and verify its availability:

424e441f1fef4c629d512f5fa33889f0:~ # curl -iv https://registry.dynalias.com:5000/v2
* Hostname was NOT found in DNS cache
*   Trying 86.164.155.84...
* Connected to registry.dynalias.com (86.164.155.84) port 5000 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs/
* SSLv3, TLS Unknown, Unknown (22):
* SSLv3, TLS handshake, Client hello (1):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, Server hello (2):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, CERT (11):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, Server finished (14):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv2, Unknown (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, Finished (20):
* SSLv2, Unknown (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv2, Unknown (22):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
*        subject: C=UK; ST=Warwickshire; O=BOA; CN=registry.dynalias.com
*        start date: 2018-01-14 12:17:16 GMT
*        expire date: 2019-01-14 12:17:16 GMT
*        common name: registry.dynalias.com (matched)
*        issuer: C=UK; ST=Warwickshire; O=BOA; CN=registry.dynalias.com
*        SSL certificate verify ok.
* SSLv2, Unknown (23):
> GET /v2 HTTP/1.1
> User-Agent: curl/7.37.0
> Host: registry.dynalias.com:5000
> Accept: */*
>
* SSLv2, Unknown (23):
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Docker-Distribution-Api-Version: registry/2.0
Docker-Distribution-Api-Version: registry/2.0
< Location: /v2/
Location: /v2/
< Date: Wed, 17 Jan 2018 16:58:52 GMT
Date: Wed, 17 Jan 2018 16:58:52 GMT
< Content-Length: 39
Content-Length: 39
< Content-Type: text/html; charset=utf-8
Content-Type: text/html; charset=utf-8

<
<a href="/v2/">Moved Permanently</a>.

* Connection #0 to host registry.dynalias.com left intact

Next, I provide the Kubernetes Persistent Volumes required by the SAP Data Hub Distributed Runtime. Please ensure that their Capacity and Access modes are exactly as shown below, otherwise the Kubernetes Persistent Volume Claims cannot be satisfied:
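As a sketch, such NFS-backed volumes can be generated with a small shell helper and piped into kubectl. The volume name, size and access mode are parameters, and the NFS path and server address below are examples from my environment, adjust them to yours:

```shell
# Hypothetical helper that prints one NFS-backed PersistentVolume manifest;
# path and server are examples from my setup, adjust to your environment.
pv_manifest() {
  local name=$1 size=$2 mode=$3
  cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${name}
spec:
  capacity:
    storage: ${size}
  accessModes:
    - ${mode}
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /tmp/data
    server: 192.168.2.80
    readOnly: false
EOF
}

# Print one manifest per required volume, or create it directly with e.g.:
#   pv_manifest pv-volume-0 1Gi ReadWriteOnce | kubectl create -f -
pv_manifest pv-volume-0 1Gi ReadWriteOnce
```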

Finally, I install Helm to match Tiller:

424e441f1fef4c629d512f5fa33889f0:~ # curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.6.2-linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.5M  100 15.5M    0     0  6296k      0  0:00:02  0:00:02 --:--:-- 6299k
424e441f1fef4c629d512f5fa33889f0:~ # gunzip helm-v2.6.2-linux-amd64.tar.gz
gzip: helm-v2.6.2-linux-amd64.tar already exists; do you wish to overwrite (y or n)? ^C
424e441f1fef4c629d512f5fa33889f0:~ # rm helm-v2.6.2-linux-amd64.tar
424e441f1fef4c629d512f5fa33889f0:~ # curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.6.2-linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.5M  100 15.5M    0     0  6478k      0  0:00:02  0:00:02 --:--:-- 6476k
424e441f1fef4c629d512f5fa33889f0:~ # gunzip helm-v2.6.2-linux-amd64.tar.gz
424e441f1fef4c629d512f5fa33889f0:~ # tar -xvf helm-v2.6.2-linux-amd64.tar
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
424e441f1fef4c629d512f5fa33889f0:~ # cp linux-amd64/helm bin/

With this, the installation runs smoothly and I am rewarded with a fully operational and supported SAP Data Hub Distributed Runtime:

If you need more compute capacity, you can of course add more nodes.


11 Comments


    1. Frank Schuler Post author

      Hello Robert,

      I put my Kubernetes Persistent Volumes onto a NFS server like this:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-volume-0
      spec:
        capacity:
          storage: 1Gi
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        nfs:
          path: /tmp/data
          server: 192.168.2.80
          readOnly: false

      For each Kubernetes Persistent Volume, you would have to update the name, capacity and accessModes accordingly.

      Best regards

      Frank

        1. Frank Schuler Post author

          Hello Robert,

          When you choose the on-premise installation option for the SAP Data Hub Distributed Runtime, only the vora/vflow-stdlib-pvc persistent volume is put on NFS. The other volumes are put on a local host path, which I do not want, but that depends on your requirements.

          Best regards

          Frank

          1. Former Member

            Thanks for the clarification. I thought my issue might have to do with NFS.

            Have you experienced anything like the following? I get "lost connection to pod" every time!

            “Using vora consul image…

            Deploying vora-consul with: helm install --namespace vora -f values.yaml -f /root/SAPVora-2.1.60-DistributedRuntime/stateful-replica-conf.yaml --set docker.registry=localhost:5000 --set rbac.enabled=true --set imagePullSecret= --set docker.imagePullSecret= --set version.package=2.1.60 --set docker.image=localhost:5000/vora/consul --set docker.imageTag=0.9.0-sap10 --set version.component=0.9.0-sap10 --set dontUseExternalStorage=true --set useHostPath=true --set components.disk.useHostPath=true --set components.dlog.useHostPath=true . --wait --timeout 900

            E0213 05:09:03.055798   63085 portforward.go:178] lost connection to pod

            Error: transport is closing

            Deployment failed, please check logs above and Kubernetes dashboard for more information!”

             

          2. karthik paladugu

            Hi Frank,

            Do we have a similar setup for SUSE 12 SP3 or SP1? There we need to manually install the Kubernetes master, kubeadm and kubectl, and I don't see any SAP guide around that.

            I tried to install on one node, but the kubelet service is not getting started, and because of that we are unable to initialize kubeadm.

            As we are getting the above two errors, we are unable to proceed with the Vora installation.

          3. karthik paladugu

            Hi Frank,

            We have installed the SUSE CaaS Platform with 2 nodes. How do we cluster both nodes?

            When I log in to the Kubernetes dashboard, I can only see the local server and am unable to see the worker node.

            Can we run the AutoYaST command on the 2nd node directly, or do we need to follow some other process? We have performed a normal installation on the 2nd node.

            Regards

            karthik

  1. Former Member

    Hey everyone!

    Just a quick note from Rob, your friendly CaaS Platform Strategist (and creator of this):

    Please never ever ever install a cluster that only has 1 master and 1 worker.

    You will not have etcd quorum if you do this and if you reboot the worker or master it’s highly likely your cluster will be trashed beyond recovery.

    So when using SUSE CaaS Platform please always have at least 2 worker nodes and 1 master node. This will allow 3 instances of etcd to run and allow you to survive a single node failure without breaking quorum.

    As another side note – it’s worth mentioning if you use local-only storage your pods won’t be able to start up on another node. Always use shared storage for application data. In this case I can see an external NFS server is being used which is great! (although I’m unsure what /var/local is being used for. If it’s temp/scratch data then that’s great but for anything requiring persistence we recommend using NFS or Ceph based backends).

    But great blog post Frank! – It’s great to see the platform being used for actual workloads and interesting for me to see what our partners and customers are getting up to in the real world 🙂

    (if anyone is considering purchasing SUSE CaaS Platform or wants a demo/trial and needs assistance feel free to email me at rob.decanha-knight@suse.com and I’ll be happy to point in the right direction 🙂 )

     

    Rob

    1. karthik paladugu

      Hi Rob,

      We use only SUSE 12 versions for our SAP installations.

      Do we have a similar setup for SUSE 12 SP3 or SP1? There we need to manually install the Kubernetes master, kubeadm and kubectl, and I don't see any SAP guide around that.

      I tried to install on one node, but the kubelet service is not getting started, and because of that we are unable to initialize kubeadm.

      As we are getting the above two errors, we are unable to proceed with the Vora installation.

      Do we have any proper documentation to create a cluster with 2 or 3 nodes (Kubernetes)?

