Prepare the Installation Host for the SLC Bridge

Last changed 24th of March 2020

Blog: Prepare the Installation Host for the SLC Bridge
Blog: Maintenance Planner and the SLC Bridge for Data Hub
Blog: SAP DataHub 2.7 Installation with SLC Bridge
Blog: Data Intelligence Hub – connecting the Dots …
New blog: SAP Data Intelligence 3.0 – implement with slcb tool

Prepare the Installation Host for the SLC Bridge or slcb

 

In the blog DataHub Implementation with the SLC Bridge I explained the software stack that is used to activate the SLC Bridge on the installation host (see the online help).

Now it is also important to enable access to the Kubernetes cluster, which is required for the SLC Bridge execution. In this context it is also essential to understand how a containerized application like SAP Data Hub works.


Azure Cloud Shell

is an interactive, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work. Linux users can opt for a Bash experience, while Windows users can opt for PowerShell.

Install Azure CLI with zypper

server:~ # zypper ar -f https://download.opensuse.org/distribution/openSUSE-stable/repo/oss/ openSUSE_OSS
server:~ # zypper ar --name 'Azure CLI' --check https://packages.microsoft.com/yumrepos/azure-cli azure-cli
server:~ # rpm --import https://packages.microsoft.com/keys/microsoft.asc
server:~ # zypper dist-upgrade        # short form: zypper dup
server:~ # zypper install -y curl
server:~ # zypper clean -a
server:~ # zypper install --from azure-cli -y azure-cli
server:~ # zypper up azure-cli
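
A quick check that the Azure CLI is installed and usable (the version output depends on the installed release):

server:~ # az --version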

server:~ # az login --use-device-code
server:~ # az aks get-credentials --name <AKS Cluster> --resource-group <Azure Resource Group>
server:~ # az aks install-cli --client-version 1.14.7
server:~ # az extension add --name aks-preview

# optional: Install the ACI connector
az aks install-connector \
  --name <AKS Cluster> \
  --resource-group <Azure Resource Group>
# optional: Enable the cluster autoscaler
az aks nodepool show -n agentpool \
  --cluster-name <AKS Cluster> \
  --resource-group <Azure Resource Group>
az aks update \
  --resource-group <Azure Resource Group> \
  --name <AKS Cluster> \
  --update-cluster-autoscaler \
  --min-count X \
  --max-count Y
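
To verify that the autoscaler is really active on the node pool, the nodepool show command from above can be restricted to the relevant properties (enableAutoScaling, minCount and maxCount are the property names returned by the Azure CLI):

az aks nodepool show -n agentpool \
  --cluster-name <AKS Cluster> \
  --resource-group <Azure Resource Group> \
  --query "{autoscaler:enableAutoScaling, min:minCount, max:maxCount}"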

 


To understand how the pods are consumed by SAP Data Hub, you may find the blog from Pascal De Poorter very useful – Consider your pods (Azure)

Furthermore, it is useful to add several environment variables to the .bashrc of the root/sapadm user:

vi .bashrc

export ACR_NAME=<registry service>
export DOCKER_REGISTRY=<registry service>.azurecr.io
export DNS_DOMAIN=<AKS cluster>.westeurope.cloudapp.azure.com
export HELM_VERSION=v2.15.2
export HELM_HOME=/root/.helm
export NAMESPACE=<namespace>
export SERVICE_PRINCIPAL_ID=<service principal from Azure AD>
export TILLER_NAMESPACE=<AKS namespace>

The following abbreviations are used (a sketch for looking up the actual values follows the list):

  • <Azure Resource Group> – Azure resource group which is used for the SAP Data Hub
  • <AKS cluster> – Name of the Azure Kubernetes Service
  • <namespace> – Name Space within the AKS for the SAP Data Hub
  • <registry service> – Azure Container Registry Service
  • <SPN> – Service Principal ID
  • <password> – Password of the Azure Container Registry Service
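
The actual values behind these placeholders can be looked up with the Azure CLI; a minimal sketch, assuming the service principal was created with a recognizable display name (the query fields are standard properties of the respective commands):

az acr show --name <registry service> --query loginServer -o tsv                          # DOCKER_REGISTRY
az aks show --name <AKS cluster> --resource-group <Azure Resource Group> --query fqdn -o tsv
az ad sp list --display-name <service principal display name> --query "[].appId" -o tsv   # SERVICE_PRINCIPAL_ID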

 


Install helm, tiller, kubectl and slcb (new)

Configure Helm, the Kubernetes Package Manager, with Internet Access
(for troubleshooting, it is also useful to check the section without Internet Access)

Additionally, with the latest SDH/DI releases, make the slcb binary available as well:
Making the SLC Bridge Base available in your Kubernetes Cluster


Installation Guide for SAP Data Intelligence 3.0 (including the slcb usage)



Get helm – https://helm.sh/ (Helm versions: 2.12.3, 2.14.3, 2.15.2)

server:~ # cd /tmp
server:~ # wget https://storage.googleapis.com/kubernetes-release/release/v1.14.7/bin/linux/amd64/kubectl
server:~ # az aks install-cli --client-version 1.14.7
server:~ # wget https://storage.googleapis.com/kubernetes-helm/helm-v2.14.1-linux-amd64.tar.gz
server:~ # tar -xvf helm-v2.14.1-linux-amd64.tar.gz
server:~ # chmod +x kubectl slcb       # the slcb binary is downloaded from the SAP Software Center (see the SLC Bridge Base link above)
server:~ # chmod +x linux-amd64/helm linux-amd64/tiller
server:~ # cp linux-amd64/helm linux-amd64/tiller kubectl slcb /usr/local/bin/

server:~ # kubectl create serviceaccount -n kube-system tiller
server:~ # kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller -n kube-system
server:~ # kubectl create clusterrolebinding default-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:default -n kube-system
server:~ # kubectl -n kube-system  rollout status deploy/tiller-deploy
server:~ # helm init --service-account tiller
server:~ # kubectl patch deploy -n kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

helm repo update
helm version
helm ls

server:~ # helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
server:~ # kubectl auth can-i '*' '*'
yes
server:~ # kubectl get nodes # -o wide 
NAME                                             STATUS   ROLES   AGE   VERSION
aks-agentpool-39478146-0                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-1                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-2                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-3                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-4                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-5                         Ready    agent   46d   v1.14.8
virtual-kubelet-aci-connector-linux-westeurope   Ready    agent   28d   v1.13.1-vk-v0.9.0-1-g7b92d1ee-dev
server:~ # kubectl cluster-info

In case of problems with helm/tiller versions, helm init, etc. you can reset helm as follows:

helm reset --force --remove-helm-home
kubectl delete serviceaccount --namespace $NAMESPACE --all --cascade=true
kubectl delete serviceaccount --namespace kube-system --all --cascade=true
kubectl delete clusterrolebinding tiller-cluster-rule      # clusterrolebindings are cluster-wide, not namespaced
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl get all --all-namespaces | grep tiller
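
After such a reset, helm/tiller must be initialized again before the installation can continue; with helm 2 this is essentially a repeat of the steps above:

kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade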

Install the docker service

The easiest way is to run the setup via zypper/rpm:

yast2 sw_single &

After the Docker Community Engine installation, several directories and files are created, either for root or for a dedicated docker user (in our case root is the overall user):

server:~ #
-rw-r--r--  1 root root      435 Aug 12 16:37 .bashrc
-rw-------  1 root root    12385 Aug 12 16:37 .viminfo
drwxr-xr-x  6 root root     4096 Aug 12 16:47 .helm
drwxr-xr-x  4 root root     4096 Aug 12 16:48 .kube
drwx------  3 root root     4096 Aug 12 16:58 .cache
drwx------  2 root root     4096 Aug 12 17:02 .docker
drwx------ 18 root root     4096 Aug 12 17:02 .
server:~ #
server:~ # systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
server:~ # service docker status
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-08-12 17:01:21 CEST; 19min ago
     Docs: http://docs.docker.com
 Main PID: 4131 (dockerd)
    Tasks: 58
   Memory: 69.9M
      CPU: 4.101s
   CGroup: /system.slice/docker.service
           ├─4131 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
           └─4146 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
server:~ # service docker start
server:~ # helm version --short
Client: v2.14.3+g0e7f3b6
Server: v2.14.3+g0e7f3b6
server:~ #


Pretty soon you will realize that the docker images grow fast and quickly fill the file system under /var/lib/docker/. It is therefore advisable to relocate the path where the docker images reside.

either in one of the files
/etc/systemd/system/docker.service.d/docker.conf
/lib/systemd/system/docker.service

add the following lines ([Service] section; in the drop-in file docker.conf an additional empty ExecStart= line is needed before the new entry to clear the original one)
[Service]
ExecStart=/usr/bin/dockerd -g /sapmnt/docker -H fd://
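
Alternatively, on newer Docker releases the data directory can be set in /etc/docker/daemon.json instead of patching the unit file; a minimal sketch, assuming the same target path /sapmnt/docker (the "data-root" key is available as of Docker 17.05, older releases use "graph"):

cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/sapmnt/docker"
}
EOF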

Restart docker and sync the existing content to the new path:
server:/lib # vi /lib/systemd/system/docker.service
server:/lib # systemctl stop docker
server:/lib # systemctl daemon-reload
server:/lib # rsync -aqxP /var/lib/docker/ /sapmnt/docker
server:/lib # systemctl start docker

server:/lib # ps aux | grep -i docker | grep -v grep
root      26009  2.2  0.0 1678532 76732 ?       Ssl  10:40   0:00 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
root      26028  0.6  0.0 1241752 40796 ?       Ssl  10:40   0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
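
A quick check that the daemon really uses the new location ("Docker Root Dir" is the field name shown by docker info):

server:/lib # docker info 2>/dev/null | grep 'Docker Root Dir'      # should now report /sapmnt/docker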

If you want to work with yaml files instead of the command line, kubectl offers several options to apply and update the configuration (a hypothetical manifest example follows the commands below).

cd /tmp
kubectl create -f helm-sdh.yaml
kubectl edit -f helm-sdh.yaml
kubectl replace -f helm-sdh.yaml
kubectl delete -f helm-sdh.yaml --all --cascade=true
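
The content of helm-sdh.yaml is site-specific and not shown in this blog; purely as a hypothetical illustration of the manifest format kubectl expects, a simple namespace definition could be created and applied like this (file name and content are placeholders):

cat > /tmp/helm-sdh.yaml <<'EOF'
# hypothetical example – replace with the manifest you actually maintain
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
EOF
kubectl create -f /tmp/helm-sdh.yaml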

Note 2765857 – SAP Data Hub 2.x installation on SUSE CaaS Platform 3.0 fails when deploying hana-0 or vsystem-vrep pods
Note 2776522 – SAP Data Hub 2: Specific Configurations for Installation on SUSE CaaS Platform


Test and log on to your Azure AKS

Finally, you can also test several ways to log on to the Azure Kubernetes Service (AKS), including the Docker environment and the Container Registry:

az login
az login --use-device-code ##old method
az aks get-credentials --resource-group <Azure Resource Group> --name <AKS cluster>

az acr login --name <container registry> --username=<container registry> --password=<password>
az acr show -n <container registry>
az acr check-health -n <container registry> --ignore-errors --yes

docker login <container registry>.azurecr.io --username=<SPN-ID> --password=<SPN-Secret>
kubectl create secret docker-registry docker-secret \
  --docker-server=<container registry>.azurecr.io \
  --docker-username=<SPN-ID> \
  --docker-password=<SPN-Secret> \
  --docker-email=your@email.com -n $NAMESPACE
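
If required, the secret can additionally be attached to the default service account of the namespace, so that pods pull images from the registry without referencing the secret explicitly; a small sketch:

kubectl patch serviceaccount default -n $NAMESPACE \
  -p '{"imagePullSecrets": [{"name": "docker-secret"}]}'
kubectl get secret docker-secret -n $NAMESPACE -o yaml      # verify the created secret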

Now that all necessary tools are enabled on the jump server and access to the Azure Kubernetes Service is provided, the installation of SAP Data Hub via the SLC Bridge can continue.


Install vctl from the Launchpad Help Section

In case it is not possible to access the SAP Data Hub UI via the web browser, you can use the command line tool “vctl” to execute some important settings in a kind of “offline mode”.


See also the blog from Gianluca De Lorenzo – Episode 3: vctl, the hidden pearl you must know

server:~ # chmod +x vctl
server:~ # cp vctl /usr/local/bin
server:~ # vctl
SAP Data Hub System Management CLI
More information at https://help.sap.com/viewer/p/SAP_DATA_HUB
server:~ # 

Here are some important commands for the SAP Data Hub maintenance with “vctl”.
Please note that the “vctl admin commands” only work in the system tenant with the user system.

server:~ # vctl login https://<cluster>.westeurope.cloudapp.azure.com system system --insecure
server:~ # vctl version
Client Version: {Version:2.7.148-1103 BuildTime:2019-11-03T06:1742 GitCommit: Platform:linux}
Server Version: {Version:2.7 DistributedRuntimeVersion:2.7.151 K8sVersion:v1.14.8 DeploymentType:On-prem}
server:~ # vctl whoami
tenant:system user:system role:clusterAdmin
server:~ # vctl scheduler list-instances -o text
TemplateId                      Tenant   User            StartTime
datahub-app-launchpad           system   system          Wed, 04 Mar 2020
datahub-app-system-management   system   system          Wed, 04 Mar 2020
datahub-app-database            system   _vora_tenant    Tue, 03 Mar 2020
shared                          system   _vora_tenant    Wed, 04 Mar 2020
voraadapter                     system   _vora_tenant    Tue, 03 Mar 2020
license-manager                 system   _vora_cluster   Tue, 03 Mar 2020
server:~ # vctl apps scheduler list-templates
ID                              Name
diagnostics-grafana             Diagnostics Grafana
license-manager                 License Management
voraadapter                     voraadapter
datahub-app-database            DataHub App DB
datahub-app-launchpad           Launchpad
datahub-app-system-management   System Management
diagnostics-kibana              Diagnostics Kibana
shared                          Shared
vora-tools                      Vora Tools
server:~ # vctl tenant list --insecure
Name      Strategy
system    strat-system-2.7.151
default   strat-default-2.7.151
server:~ # 
vctl strategy list --insecure
vctl parameter list
vctl tenants list
vctl scheduler list-instances -o text
vctl objects get route -s user
vctl apps scheduler list-templates
vctl apps scheduler list-tenant
vctl apps scheduler stop-all-instances
vctl apps scheduler stop-tenant default

online help – Using the SAP Data Hub System Management Command-Line Client

However, you will see that the tool kubectl is more convenient for the daily work.
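
For the daily work, a handful of generic kubectl commands already cover most of what is needed on the application level; a small sketch, assuming the NAMESPACE variable from the .bashrc above:

kubectl get pods -n $NAMESPACE -o wide                  # status of all Data Hub pods
kubectl describe pod <pod> -n $NAMESPACE                # events and scheduling details
kubectl logs <pod> -n $NAMESPACE -c <container>         # logs of a single container
kubectl exec -it <pod> -n $NAMESPACE -- /bin/sh         # open a shell inside a pod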



The first hurdle to take will be the prerequisites check from the SAP Data Hub Installation Routine

Note 2839319 – Elasticsearch validation failed during upgrade, not healthy – SAP Data Hub
Note 2813853 – Elasticsearch runs out of persistent volume disk space

  • Error: forwarding ports: error upgrading connection: the server could not find the requested resource
  • Error: release wrapping-narwhal failed: clusterroles.rbac.authorization.k8s.io “$NAMESPACE-elasticsearch” already exists
  • Error: release lopsided-anteater failed: clusterroles.rbac.authorization.k8s.io “$NAMESPACE-vora-deployment-operator” already exist
  • Error: configmaps is forbidden: User “system:serviceaccount:$NAMESPACE:default” cannot list resource “configmaps” in API group “” in the namespace “$NAMESPACE”
  • Error: Checking if there is no failed helm chart in the namespace…failed!


Results from the prerequisites check like those shown here are mainly solved by logging in to the AKS cluster as mentioned above (a cleanup sketch for the remaining “already exists” errors follows after the list below). With the different logins, several files/directories are written which are accessed by the prerequisites check, e.g.

  • /root/.kube/config
  • /root/.docker/config.json
  • /root/.helm
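
The “already exists” and “failed helm chart” errors from the list above usually point to leftovers of a previous installation attempt; a possible cleanup sketch, assuming helm 2 and that the listed objects really belong to the failed release:

helm ls --all --namespace $NAMESPACE                 # identify failed releases
helm delete --purge <release name>                   # e.g. the failed release reported in the error
kubectl delete clusterrole $NAMESPACE-elasticsearch $NAMESPACE-vora-deployment-operator
kubectl get clusterrolebindings | grep $NAMESPACE    # check for further leftovers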


 

 

 

Roland Kramer, SAP Platform Architect for Intelligent Data & Analytics
@RolandKramer

 
