
Prepare the Installation Host for the SLC Bridge for the SAP Data Hub Installation

Last changed: 18 October 2019


 

In the blog DataHub Implementation with the SLC Bridge, I explained the software stack that is used to activate the SLC Bridge on the installation host (see the online help).

Now it is also important to enable access to the Kubernetes cluster, which is needed for the SLC Bridge execution. In this context it is also essential to understand how a containerized application like SAP Data Hub works.

Azure Cloud Shell

Azure Cloud Shell is an interactive, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work: Linux users can opt for a Bash experience, while Windows users can opt for PowerShell.
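
As a first sanity check from the Cloud Shell (or any other az CLI session), you can verify which subscription you are working against before touching any resources; a minimal check, assuming you are already logged in:

# show the subscription the current session is bound to
az account show --output table
# list the AKS clusters visible in this subscription
az aks list --output table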

Install Azure CLI with zypper

zypper dist-upgrade
zypper install -y curl
rpm --import https://packages.microsoft.com/keys/microsoft.asc
zypper addrepo --name 'Azure CLI' --check https://packages.microsoft.com/yumrepos/azure-cli azure-cli
zypper install --from azure-cli -y azure-cli
az login --use-device-code
az aks get-credentials --name <AKS cluster> --resource-group <Azure Resource Group>
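
To confirm that the kubeconfig written by az aks get-credentials points to the intended cluster, a quick check (the context name normally matches the AKS cluster name):

# the current context should now be the AKS cluster
kubectl config current-context
kubectl config get-contexts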

Furthermore, it is useful to add several environment variables to the profile of the root/sapadm user:

vi .bashrc

export ACR_NAME=<container registry>
export DOCKER_REGISTRY=<container registry>.azurecr.io
export DNS_DOMAIN=<datahub-aks>.westeurope.cloudapp.azure.com
export HELM_VERSION=v2.12.3
export HELM_HOME=/root/.helm
export NAMESPACE=<namespace>
export SERVICE_PRINCIPAL_ID=<service principal from Azure AD>
export TILLER_NAMESPACE=<AKS namespace>

The following placeholders are used:

  • <Azure Resource Group> – Azure resource group which is used for the SAP Data Hub
  • <AKS cluster> – Name of the Azure Kubernetes Service
  • <namespace> – Name Space within the Azure Kubernetes Service for the SAP Data Hub
  • <container registry> – Azure Container Registry Service
  • <password> – Password of the Azure Container Registry Service
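
As an illustration of how these variables are consumed later (a sketch only; the exact commands depend on your setup):

# reload the profile so the exports take effect
source ~/.bashrc

# log in to the container registry via the exported name
az acr login --name $ACR_NAME

# work in the Data Hub namespace without typing it each time
kubectl get pods --namespace $NAMESPACE

# helm v2 picks up HELM_HOME and TILLER_NAMESPACE from the environment
helm ls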

 


Install helm, tiller and kubectl

Configure Helm, the Kubernetes Package Manager, with Internet Access
(for troubleshooting it is also worth checking the "without Internet Access" section)

Get Helm – https://helm.sh/, version 2.12.3 (see Helm Versions):

cd /tmp
wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
chmod +x ./kubectl
tar -xvf helm-v2.12.3-linux-amd64.tar.gz
chmod +x linux-amd64/helm linux-amd64/tiller
cp linux-amd64/helm linux-amd64/tiller /usr/local/bin/
cp kubectl /usr/local/bin/kubectl

helm init

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

helm init --service-account tiller --upgrade
helm repo update
helm version
helm ls

kubectl version
kubectl auth can-i '*' '*'
kubectl get nodes -o wide 
kubectl cluster-info

In case of problems with Helm/Tiller versions, helm init, etc., you can reset Helm as follows:

helm reset --force --remove-helm-home

kubectl delete serviceaccount --namespace $NAMESPACE --all --cascade=true
kubectl delete serviceaccount --namespace kube-system --all --cascade=true

kubectl delete clusterrolebinding tiller-cluster-rule --namespace kube-system

 


Install the Docker service

The easiest way is to run the setup via zypper/rpm:

yast2 sw_single &
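
Alternatively, the package can likely be installed directly on the command line, assuming the Containers module/repository is enabled on the SLES host:

zypper install -y docker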

After the Docker Community Edition installation, several directories and files are created, either for root or for a dedicated docker user (in our case, root is used throughout):

server:~ #
-rw-r--r--  1 root root      435 Aug 12 16:37 .bashrc
-rw-------  1 root root    12385 Aug 12 16:37 .viminfo
drwxr-xr-x  6 root root     4096 Aug 12 16:47 .helm
drwxr-xr-x  4 root root     4096 Aug 12 16:48 .kube
drwx------  3 root root     4096 Aug 12 16:58 .cache
drwx------  2 root root     4096 Aug 12 17:02 .docker
drwx------ 18 root root     4096 Aug 12 17:02 .
server:~ #
server:~ # service docker start
server:~ # systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
server:~ # service docker status
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-08-12 17:01:21 CEST; 19min ago
     Docs: http://docs.docker.com
 Main PID: 4131 (dockerd)
    Tasks: 58
   Memory: 69.9M
      CPU: 4.101s
   CGroup: /system.slice/docker.service
           ├─4131 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
           └─4146 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
server:~ # docker version
server:~ # docker login <container registry>.azurecr.io --username=<container registry> --password=<password>
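
If the registry credentials are not at hand, they can be read from Azure, assuming the admin account is enabled on the Azure Container Registry:

# enable the admin user on the registry (once)
az acr update --name <container registry> --admin-enabled true
# show the generated user name and passwords
az acr credential show --name <container registry>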

Pretty soon you will realize that the Docker images grow fast and quickly fill the file system at /var/lib/docker/. It is therefore advisable to relocate the path where the Docker images reside.

In one of the files

/etc/systemd/system/docker.service.d/docker.conf
/lib/systemd/system/docker.service

add the following lines:

[Service]
ExecStart=/usr/bin/dockerd -g /sapmnt/docker -H fd://
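
Note that if you choose the drop-in file under /etc/systemd/system/docker.service.d/, systemd requires the inherited ExecStart to be cleared first, otherwise the unit fails with a duplicate ExecStart error; a minimal drop-in would look like this:

# /etc/systemd/system/docker.service.d/docker.conf
[Service]
# clear the ExecStart inherited from docker.service
ExecStart=
ExecStart=/usr/bin/dockerd -g /sapmnt/docker -H fd://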

Restart Docker and sync the existing images to the new path:
server:/lib # vi /lib/systemd/system/docker.service
server:/lib # systemctl stop docker
server:/lib # ps aux | grep -i docker | grep -v grep
server:/lib # systemctl daemon-reload
server:/lib # rsync -aqxP /var/lib/docker/ /sapmnt/docker
server:/lib # systemctl start docker
server:/lib # ps aux | grep -i docker | grep -v grep
root      26009  2.2  0.0 1678532 76732 ?       Ssl  10:40   0:00 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
root      26028  0.6  0.0 1241752 40796 ?       Ssl  10:40   0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
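
To verify that Docker now stores its data under the new location, a quick check:

# should print /sapmnt/docker
docker info --format '{{.DockerRootDir}}'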

 

If you want to work with YAML files instead of the command line, kubectl offers several options to update the configuration:

cd /tmp
kubectl create -f helm-sdh.yaml
kubectl edit -f helm-sdh.yaml
kubectl replace -f helm-sdh.yaml
kubectl delete -f helm-sdh.yaml --all --cascade=true
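
The content of helm-sdh.yaml is not shown in this article; as a purely hypothetical illustration, a file along these lines would reproduce the Tiller service account and cluster role binding that were created above with individual kubectl commands:

# hypothetical helm-sdh.yaml - recreates the Tiller RBAC objects from above
cat <<'EOF' > /tmp/helm-sdh.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
kubectl create -f /tmp/helm-sdh.yaml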

 

Note 2765857 – SAP Data Hub 2.x installation on SUSE CaaS Platform 3.0 fails when deploying hana-0 or vsystem-vrep pods
Note 2776522 – SAP Data Hub 2: Specific Configurations for Installation on SUSE CaaS Platform

 


Test the logon to your Azure AKS

Finally, you can test the different ways to log on to the Azure Kubernetes Service (AKS), including the Docker environment and the Container Registry:

az login
az login --use-device-code   # old method

az aks get-credentials --resource-group <Azure Resource Group> --name <AKS cluster>

az acr login --name <container registry> --username=<container registry> --password=<password>
az acr show -n <container registry>

docker login <container registry>.azurecr.io --username=<container registry> --password=<password>

Now that all necessary tools are enabled on the jump server and access to the Azure Kubernetes Service is in place, the installation of SAP Data Hub via the SLC Bridge can continue.

 


The first hurdle to take is the prerequisites check of the SAP Data Hub installation routine.

Findings from the prerequisites check, like the ones shown here, are mainly resolved by logging in to the AKS cluster as described above. With the different logins, several files/directories are written which are then accessed by the prerequisites check, e.g. (see the quick check after this list):

  • /root/.kube/config
  • /root/.docker/config.json
  • /root/.helm
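
A quick listing confirms that these login artifacts are in place:

ls -l /root/.kube/config /root/.docker/config.json
ls -ld /root/.helm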


Blog: SAP DataHub 2.7 Installation with SLC Bridge

Blog: Maintenance Planer and the SLC Bridge for Data Hub

 

Roland Kramer, SAP Platform Architect for Intelligent Data & Analytics
@RolandKramer

 
