
Previously, I installed the SAP Data Hub Distributed Runtime on the SUSE Container as a Service Platform and the Microsoft Azure Container Service.

In this blog I am attempting the same on the Google Cloud Platform.

To start with, I create a Google Cloud Platform Kubernetes cluster. Since I am installing SAP Data Hub 1.3.0, I can accept the default Cluster Version 1.8.8-gke.0:
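From the gcloud command line, creating the cluster could look roughly like this (cluster name, zone and node count are placeholders for my actual choices):

gcloud container clusters create datahub \
  --zone europe-west1-b \
  --cluster-version 1.8.8-gke.0 \
  --num-nodes 3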

Then I make the Kubernetes dashboard accessible. First, I find the respective pod:

kubectl get pods -n kube-system | grep kubernetes-dashboard

Second, I create a service for it, substituting the generated pod name returned by the previous command:

kubectl -n kube-system expose pod kubernetes-dashboard-768854d6dc-678x9 --name=dashboard --type=LoadBalancer

And get its external IP address and port:

kubectl get services -n kube-system | grep ^dashboard
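The dashboard is then reachable in a browser via the external IP address and port reported for the service.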

There are different ways to control access to the Kubernetes dashboard, but for simplicity I grant admin privileges to the dashboard's Service Account:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
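Assuming the binding is saved in a file, say dashboard-crb.yaml (a placeholder name), I apply it with:

kubectl apply -f dashboard-crb.yaml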

Next, I install the Google Cloud SDK for Linux and connect to my Container Registry:

gcloud components install docker-credential-gcr
docker-credential-gcr configure-docker
export DOCKER_REGISTRY=gcr.io

Since the Google Cloud Container Registry requires the Google Cloud Platform Project ID in the path name, I adjust the DOCKER_REPOSITORY_DOMAIN in my install.sh script:
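In my case the adjustment looks roughly like this, with my-gcp-project-id standing in for the actual Project ID:

DOCKER_REPOSITORY_DOMAIN=gcr.io/my-gcp-project-id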

I also need to create Cluster Role Bindings for the default Service Accounts of the kube-system and vora namespaces:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-system-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: vora-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: default
  namespace: vora
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
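Assuming both bindings are saved in one file, say clusterrolebindings.yaml (again a placeholder name), they can be applied together:

kubectl apply -f clusterrolebindings.yaml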

From here, the installation works like a charm and finishes successfully.

As before, after exposing my vsystem pod:

kubectl -n vora get pods | grep "vsystem-[1-9]"
kubectl -n vora expose pod vsystem-2326503697-9864k --name=kubernetes --type=LoadBalancer

And retrieving its external IP address and ports:

kubectl get services -n vora | grep kubernetes
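The vsystem UI is then reachable in a browser via this external IP address and port.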

I can create a Pipeline Modeler or Vora Tools instance:

And, new with SAP Data Hub 1.3.0, I can also manage my tenants:
