In my previous blog, Make your SAP Data Hub Distributed Runtime work on the SUSE CaaS Platform, I explained how to install the SAP Data Hub Distributed Runtime on the SUSE CaaS Platform, which is a validated environment as per SAP Note 2464722 – Prerequisites for installing SAP Data Hub.

However, this is also quite a heavyweight installation, with an overall footprint of 40 GB of main memory and 8 processor cores. Therefore, in this blog I will explain how to deploy the SAP Data Hub Distributed Runtime in a more lightweight fashion to the Microsoft Azure Container Service (AKS).

My starting point is the excellent blog Your SAP on Azure – Part 6 – SAP HANA Express on Azure Kubernetes Cluster (AKS) by my colleague Bartosz Jarkowski, which I follow up to the point where he starts to install SAP HANA, express edition.

For installing SAP Data Hub 1.2.1, I need to add a third Kubernetes node to the cluster in order to fulfill the dlog requirements. In fact, for this blog environment, I chose the less expensive Standard D2 v2 (2 vcpus, 7 GB memory) virtual machine size over the potentially more suitable Standard DS11 v2 (2 vcpus, 14 GB memory):
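If you created the cluster with the Azure CLI as Bartosz does, node count and virtual machine size can be set directly at creation time. A minimal sketch, assuming placeholder resource group and cluster names:

az aks create --resource-group sdh-rg --name sdh-aks --node-count 3 --node-vm-size Standard_D2_v2 --generate-ssh-keys

An existing cluster can be scaled to three nodes instead:

az aks scale --resource-group sdh-rg --name sdh-aks --node-count 3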

Then I add a Container registry and assign my app to it as an Owner so that the SAP Data Hub Distributed Runtime pods will be able to access it:
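The same can be scripted with the Azure CLI. A minimal sketch, assuming the registry name sdhdocker that I use below and a placeholder application ID for my app:

az acr create --resource-group sdh-rg --name sdhdocker --sku Basic
az role assignment create --role Owner --assignee <app-id> --scope $(az acr show --name sdhdocker --query id --output tsv)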

Next, I need a Linux jump server from which I can install the SAP Data Hub Distributed Runtime. For this I follow these excellent instructions by the SAP Academy up to the point where kubectl gets installed, which I prefer to install differently, as follows:

sudo snap install kubectl --classic

Subsequently I copy the config file from my Windows .kube directory onto my Linux jump server and verify that I can connect to my Azure Container Service:

kubectl cluster-info
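If you prefer to do the copy from the command line as well, here is a minimal sketch, assuming pscp from the PuTTY suite on the Windows side; the jump server host name and user are placeholders. Run the first command on the jump server and the second from a Windows command prompt:

mkdir -p ~/.kube
pscp %USERPROFILE%\.kube\config frank@my-jump-server:~/.kube/config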

Then I install Helm and initialise it to deploy its server component Tiller into my Kubernetes cluster:

curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
tar -xzf helm-v2.8.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm init

With that, I connect to my Container registry:

docker login sdhdocker.azurecr.io

I get the user name and password for this by temporarily enabling the Admin user for my Container registry:
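With the Azure CLI, the Admin user can be enabled and the credentials retrieved as follows (a sketch against the same registry):

az acr update --name sdhdocker --admin-enabled true
az acr credential show --name sdhdocker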

Finally, I export my Container registry Login server:

export DOCKER_REGISTRY=sdhdocker.azurecr.io

I then follow the installation instructions with the (b) cloud option to use the provided Azure Kubernetes default Storage Class. Please ensure that you have made the adjustments as per SAP Note 608651 – Data Hub Distributed Runtime (Vora) installation fails with "Docker build failed":

./install.sh --enable-rbac=no --vsystem-load-nfs-modules

The installation takes a while but finishes and validates successfully:

Successfully validated the Installation!
######################################
############ Ports for external connectivity ############
vora-kibana-logging/http port:                      30530
vora-grafana/http port:                             32669
vora-tx-coordinator/tc port:                        31802
vora-tx-coordinator/hana-wire port:
vsystem/vsystem port:                               32426
#########################################################
You can find the generated X.509 keys/certificates under /home/frank/SAPVora-2.1.60-DistributedRuntime/deployment/certs for later use!
#########################################################

As a result, all SAP Data Hub Distributed Runtime pods are up and running happily on Azure:
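This can be verified from the jump server:

kubectl -n vora get pods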

For external access, I create a Load balancer for my vsystem pod:

kubectl -n vora get pods | grep vsystem-[1-9]
kubectl -n vora expose pod vsystem-2522550416-dqsz3 --name=kubernetes --type=LoadBalancer

This also creates an external IP address:
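Azure takes a moment to provision the load balancer; the assigned external IP address can be watched with the following command (the service carries the name given above):

kubectl -n vora get service kubernetes --watch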

With this IP address, I can access my System Management and create a Pipeline Modeler and a Vora Tools instance:
