
Prepare the Installation Host for the SLC Bridge

Last changed 10th of June 2020

Blog: prepare the Installation Host for the SLC Bridge
Blog: Maintenance Planner and the SLC Bridge for Data Hub
Blog: SAP DataHub 2.7 Installation with SLC Bridge
Blog: Secure the Ingress for DH 2.7 or DI 3.0
Blog: Data Intelligence Hub – connecting the Dots …
new Blog: SAP Data Intelligence 3.0 – implement with slcb tool

Prepare the Installation Host for the SLC Bridge or slcb

 

In the blog – DataHub Implementation with the SLC Bridge – I have explained the software stack (online help) which is used to activate the SLC Bridge on the installation host.

Now it is also important to enable the Kubernetes (K8s) cluster access, which is required for the SLC Bridge execution. In this context it is also essential to understand how a containerized application like SAP Data Hub works.


Azure Cloud Shell

Azure Cloud Shell is an interactive, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work: Linux users can opt for Bash, while Windows users can opt for PowerShell.

Install Azure CLI with zypper

server:~ # zypper ar -f https://download.opensuse.org/distribution/openSUSE-stable/repo/oss/ openSUSE_OSS
server:~ # zypper ar --name 'Azure CLI' --check https://packages.microsoft.com/yumrepos/azure-cli azure-cli
server:~ # rpm --import https://packages.microsoft.com/keys/microsoft.asc
server:~ # zypper dist-upgrade   # short form: zypper dup
server:~ # zypper install -y curl
server:~ # zypper clean -a
server:~ # zypper install --from azure-cli -y azure-cli

Update Azure CLI to version 2.5.1 or higher (e.g. 2.7.0), which includes Python 3.6.x

server:~ # az login --use-device-code
server:~ # az aks get-credentials --name <AKS cluster> --resource-group <Azure Resource Group>
server:~ # zypper refresh
server:~ # zypper update azure-cli
server:~ # az extension add --name aks-preview
server:~ # az --version
azure-cli                          2.7.0
command-modules-nspkg              2.0.3
core                               2.7.0
nspkg                              3.0.4
telemetry                          1.0.4
Extensions:
aks-preview                       0.4.44
Python location '/usr/bin/python3'
Extensions directory '/root/.azure/cliextensions'
Python (Linux) 3.6.5 (default, Mar 31 2018, 19:45:04) [GCC]
server:~ # 
server:~ # az acr check-health --ignore-errors --yes -n <registry>.azurecr.io
Docker daemon status: available
Docker version: 'Docker version 18.09.1, build 4c52b901c6cb, platform linux/amd64'
Docker pull of 'mcr.microsoft.com/mcr/hello-world:latest' : OK
Azure CLI version: 2.7.0
DNS lookup to <registry>.azurecr.io at IP xxx.xxx.xxx.xxx : OK
Challenge endpoint https://<registry>.azurecr.io/v2/ : OK
Fetch refresh token for registry '<registry>.azurecr.io' : OK
Fetch access token for registry '<registry>.azurecr.io' : OK
Helm version: 3.2.1
Notary version: 0.6.0
Please refer to https://aka.ms/acr/health-check for more information.
server:~ #
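The 2.5.1 minimum can also be enforced with a small shell check before continuing. This is only a sketch, not part of the official procedure: the installed version is hard-coded here and would normally come from az --version.

```shell
# Minimal version gate, assuming GNU sort (-V). "installed" is a placeholder;
# in practice you would parse it out of "az --version".
required="2.5.1"
installed="2.7.0"
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "Azure CLI $installed meets the $required minimum"
else
  echo "Azure CLI $installed is too old - run zypper update azure-cli" >&2
fi
```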

Optional: Install the ACI connector

az aks install-connector \
  --name <AKS cluster> \
  --resource-group <Azure Resource Group>
# optional: Enable the cluster autoscaler
az aks nodepool show -n agentpool \
  --cluster-name <AKS Cluster> \
  --resource-group <Azure Resource Group>
az aks update \
  --resource-group <Azure Resource Group> \
  --name <AKS Cluster> \
  --update-cluster-autoscaler \
  --min-count X \
  --max-count Y

To understand how the pods are consumed by SAP Data Hub, you may find the blog from Pascal De Poorter very useful – Consider your pods (Azure)

Furthermore, it is useful to add several environment variables to the profile of the root/sapadm user:

vi .bashrc

export ACR_NAME=<registry service>
export DOCKER_REGISTRY=<registry service>.azurecr.io
export DNS_DOMAIN=<AKS cluster>.westeurope.cloudapp.azure.com
export HELM_VERSION=v2.15.2
export HELM_HOME=/root/.helm
export NAMESPACE=<namespace>
export SERVICE_PRINCIPAL_ID=<service principal from ADS>
export TILLER_NAMESPACE=<AKS namespace>
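As a sketch of how such a profile can protect against half-configured sessions, the following uses hypothetical values (myregistry, datahub – not from the original setup) and fails early if a required variable is empty:

```shell
# Hypothetical values standing in for your real registry and namespace.
export ACR_NAME=myregistry
export DOCKER_REGISTRY=${ACR_NAME}.azurecr.io
export NAMESPACE=datahub
# fail early if anything needed later by helm/slcb is still unset
for v in ACR_NAME DOCKER_REGISTRY NAMESPACE; do
  [ -n "$(eval echo "\$$v")" ] || { echo "missing $v" >&2; exit 1; }
done
echo "environment OK: registry=$DOCKER_REGISTRY namespace=$NAMESPACE"
```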

The following abbreviations are used:

  • <Azure Resource Group> – Azure resource group which is used for the SAP Data Hub
  • <AKS cluster> – Name of the Azure Kubernetes Service
  • <namespace> – Name Space within the AKS for the SAP Data Hub
  • <registry service> – Azure Container Registry Service
  • <SPN> – Service Principal ID
  • <password> – Password of the Azure Container Registry Service

 


Install helm, tiller, kubectl and slcb (new)

Configure Helm, the Kubernetes Package Manager, with Internet Access
(for troubleshooting it is also worthwhile to check the without Internet Access section)

Additionally, with the latest SDH/DI releases, make the slcb binary available as well:
Making the SLC Bridge Base available in your Kubernetes Cluster

server:~ # kubectl -n sap-slcbridge get all
NAME                                READY   STATUS    RESTARTS     AGE
pod/slcbridgebase-988f57f68-dpmhp   2/2     Running   0            16h
NAME                                TYPE              CLUSTER-IP   EXTERNAL-IP    PORT(S)          AGE
service/slcbridgebase-service       LoadBalancer      10.0.xx.xx   20.xx.xx.xx    1128:32729/TCP   16h
NAME                                READY   UP-TO-DATE   AVAILABLE     AGE
deployment.apps/slcbridgebase       1/1     1            1             16h
NAME                                        DESIRED      CURRENT       READY   AGE
replicaset.apps/slcbridgebase-988f57f68     1            1             1       16h
server:~ #


Installation Guide for SAP Data Intelligence 3.0 (including the slcb usage)



Get helm – https://helm.sh/ (Version 2.12.3, Version 2.14.3, Version 2.15.2, Helm Versions)


How to migrate from Helm v2 to Helm v3

server:~ # cd /tmp
server:~ # wget https://storage.googleapis.com/kubernetes-release/release/v1.14.7/bin/linux/amd64/kubectl
server:~ # wget https://storage.googleapis.com/kubernetes-helm/helm-v2.14.1-linux-amd64.tar.gz
server:~ # tar -xvf helm-v2.14.1-linux-amd64.tar.gz
server:~ # chmod +x kubectl slcb
server:~ # chmod +x linux-amd64/helm linux-amd64/tiller
server:~ # cp linux-amd64/helm linux-amd64/tiller kubectl slcb /usr/local/bin/
server:~ # curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
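As a quick sanity check (an assumption of this blog's flow, not an official step), you can confirm that the copied binaries actually ended up on the PATH before continuing with the tiller setup:

```shell
# Count the tools checked and collect any that are not on the PATH yet.
checked=0; missing=""
for tool in kubectl helm tiller slcb; do
  checked=$((checked+1))
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
echo "checked $checked tools, missing:${missing:- none}"
```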

server:~ # kubectl create serviceaccount -n kube-system tiller
server:~ # kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller -n kube-system
server:~ # kubectl create clusterrolebinding default-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:default -n kube-system
server:~ # helm init --service-account tiller
server:~ # kubectl -n kube-system rollout status deploy/tiller-deploy
server:~ # kubectl patch deploy -n kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
server:~ # helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
server:~ # kubectl auth can-i '*' '*'
yes
server:~ # kubectl get nodes # -o wide 
NAME                                             STATUS   ROLES   AGE   VERSION
aks-agentpool-39478146-0                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-1                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-2                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-3                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-4                         Ready    agent   46d   v1.14.8
aks-agentpool-39478146-5                         Ready    agent   46d   v1.14.8
virtual-kubelet-aci-connector-linux-westeurope   Ready    agent   28d   v1.13.1-vk-v0.9.0-1-g7b92d1ee-dev
server:~ # kubectl cluster-info

In case of problems with helm/tiller versions, helm init, etc. you can reset helm as follows:

helm reset --force --remove-helm-home
kubectl delete serviceaccount --namespace $NAMESPACE --all --cascade=true
kubectl delete serviceaccount --namespace kube-system --all --cascade=true
kubectl delete clusterrolebinding tiller-cluster-rule
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl get all --all-namespaces | grep tiller

Install the docker service

The easiest way is to run the setup via zypper/rpm:

yast2 sw_single &

After the Docker Community Engine installation, several directories and files are created, either for root or for a dedicated docker user (in our case root is the overall user):

server:~ #
-rw-r--r--  1 root root      435 Aug 12 16:37 .bashrc
-rw-------  1 root root    12385 Aug 12 16:37 .viminfo
drwxr-xr-x  6 root root     4096 Aug 12 16:47 .helm
drwxr-xr-x  4 root root     4096 Aug 12 16:48 .kube
drwx------  3 root root     4096 Aug 12 16:58 .cache
drwx------  2 root root     4096 Aug 12 17:02 .docker
drwx------ 18 root root     4096 Aug 12 17:02 .
server:~ #
server:~ # systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
server:~ # service docker status
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-08-12 17:01:21 CEST; 19min ago
     Docs: http://docs.docker.com
 Main PID: 4131 (dockerd)
    Tasks: 58
   Memory: 69.9M
      CPU: 4.101s
   CGroup: /system.slice/docker.service
           ├─4131 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
           └─4146 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
server:~ # service docker start
server:~ # helm version --short
Client: v2.14.3+g0e7f3b6
Server: v2.14.3+g0e7f3b6
server:~ #


Pretty soon you will realize that the docker images grow fast and fill the file system at /var/lib/docker/ quickly. So it is advisable to relocate the path where the docker images reside.
With the new slcb the images are stored directly in the Azure container registry, so this task becomes obsolete.

Please also note that Helm 3 only contains helm, as the server component tiller has been removed.
Furthermore, the helm commands are stricter now, so you have to revise your existing commands for Helm 3.
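One concrete example of the stricter Helm 3 syntax: helm install now takes the release name as the first positional argument, whereas Helm 2 used the --name flag. The helper below is purely hypothetical and just rewrites the arguments to illustrate the change:

```shell
# Helm 2:  helm install <chart> --name <release>
# Helm 3:  helm install <release> <chart>
helm2_to_helm3_install() {
  chart="$1"; release="$2"
  echo "helm install $release $chart"
}
helm2_to_helm3_install stable/nginx my-nginx
```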

Either in one of the files
/etc/systemd/system/docker.service.d/docker.conf
/lib/systemd/system/docker.service

add the following lines:
[Service]
ExecStart=/usr/bin/docker daemon -g /sapmnt/docker -H fd://

Restart docker and sync the existing data to the new path:
server:/lib # vi /lib/systemd/system/docker.service
server:/lib # systemctl stop docker
server:/lib # systemctl daemon-reload
server:/lib # rsync -aqxP /var/lib/docker/ /sapmnt/docker
server:/lib # systemctl start docker

server:/lib # ps aux | grep -i docker | grep -v grep
root      26009  2.2  0.0 1678532 76732 ?       Ssl  10:40   0:00 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
root      26028  0.6  0.0 1241752 40796 ?       Ssl  10:40   0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info

If you want to work with YAML files instead of the command line, kubectl offers several options to update the configuration.

cd /tmp
kubectl create -f helm-sdh.yaml
kubectl edit -f helm-sdh.yaml
kubectl replace -f helm-sdh.yaml
kubectl delete -f helm-sdh.yaml --all --cascade=true

Note 2765857 – SAP Data Hub 2.x installation on SUSE CaaS Platform 3.0 fails when deploying hana-0 or vsystem-vrep pods
Note 2776522 – SAP Data Hub 2: Specific Configurations for Installation on SUSE CaaS Platform


Test and logon against your Azure AKS

Finally, you can also test several ways to log on to the Azure Kubernetes Service (AKS), including the Docker environment and the Container Registry:

server:~ # az login
server:~ # az login --use-device-code ##old method
server:~ # az aks get-credentials --resource-group <Azure Resource Group> --name <AKS cluster>

server:~ # az acr login --name <container registry> --username=<container registry> --password=<password>
server:~ # az acr show -n <container registry>
server:~ # az acr check-health -n <container registry> --ignore-errors --yes
server:~ # az acr check-health --name <container registry> --ignore-errors --yes --verbose --debug > acr_check.txt  2>&1

server:~ # docker login <container registry>.azurecr.io --username=<SPN-ID> --password=<SPN-Secret>
server:~ # kubectl create secret docker-registry docker-secret \
  --docker-server=<container registry>.azurecr.io \
  --docker-username=<SPN-ID> \
  --docker-password=<SPN-Secret> \
  --docker-email=your@email.com -n $NAMESPACE

Now that all necessary tools are enabled on the jump server and access to the Azure Kubernetes Service is provided, the installation of SAP Data Hub via the SLC Bridge can continue.


Install vctl from the Launchpad Help Section

In case it is not possible to access the SAP Data Hub UI via the web browser, you can use the command line tool “vctl” to execute some important settings in a kind of “offline mode”.


See also the blog from Gianluca De Lorenzo – Episode 3: vctl, the hidden pearl you must know

server:~ # chmod +x vctl
server:~ # cp vctl /usr/local/bin
server:~ # vctl
SAP Data Hub System Management CLI
More information at https://help.sap.com/viewer/p/SAP_DATA_HUB
server:~ # 
server:/ # vctl login https://<aks-cluster>.<region>.cloudapp.azure.com system system --insecure
Enter password:
Successfully logged in as "system/system"
server:/ # vctl version
Client Version: {Version:2002.1.13-0428 BuildTime:2020-04-28T18:4725 GitCommit: Platform:linux}
Server Version: {Version:2002.1 DistributedRuntimeVersion:3.0.24 K8sVersion:v1.15.10 DeploymentType:On-prem ProductType:data-intelligence}
server:/ # 

Here are some important commands for SAP Data Hub/Intelligence maintenance with “vctl”. Please note that the “vctl admin” commands only work in the system tenant with the user system.

server:~ # vctl whoami
tenant:system user:system role:clusterAdmin
server:~ # vctl scheduler list-instances -o text
TemplateId                      Tenant   User            StartTime
datahub-app-launchpad           system   system          Wed, 04 Mar 2020
datahub-app-system-management   system   system          Wed, 04 Mar 2020
datahub-app-database            system   _vora_tenant    Tue, 03 Mar 2020
shared                          system   _vora_tenant    Wed, 04 Mar 2020
voraadapter                     system   _vora_tenant    Tue, 03 Mar 2020
license-manager                 system   _vora_cluster   Tue, 03 Mar 2020
server:~ # vctl apps scheduler list-templates
ID                              Name
diagnostics-grafana             Diagnostics Grafana
license-manager                 License Management
voraadapter                     voraadapter
datahub-app-database            DataHub App DB
datahub-app-launchpad           Launchpad
datahub-app-system-management   System Management
diagnostics-kibana              Diagnostics Kibana
shared                          Shared
vora-tools                      Vora Tools
server:~ # vctl tenant list --insecure
Name      Strategy
system    strat-system-2.7.151
default   strat-default-2.7.151
server:~ # 
vctl strategy list --insecure
vctl parameter list
vctl tenants list
vctl scheduler list-instances -o text
vctl objects get route -s user
vctl apps scheduler list-templates
vctl apps scheduler list-tenant
vctl apps scheduler stop-all-instances
vctl apps scheduler stop-tenant default

online help – Using the SAP Data Hub System Management Command-Line Client

However, you will see that kubectl is more convenient for daily work.



The first hurdle to take will be the prerequisites check of the SAP Data Hub installation routine:

Note 2839319 – Elasticsearch validation failed during upgrade, not healthy – SAP Data Hub
Note 2813853 – Elasticsearch runs out of persistent volume disk space
Note 2905007 – Installing Container-based Software using Maintenance Planner and SL Container Bridge – Troubleshooting Note

  • Error: forwarding ports: error upgrading connection: the server could not find the requested resource
  • Error: release wrapping-narwhal failed: clusterroles.rbac.authorization.k8s.io “$NAMESPACE-elasticsearch” already exists
  • Error: release lopsided-anteater failed: clusterroles.rbac.authorization.k8s.io “$NAMESPACE-vora-deployment-operator” already exists
  • Error: configmaps is forbidden: User “system:serviceaccount:$NAMESPACE:default” cannot list resource “configmaps” in API group “” in the namespace “$NAMESPACE”
  • Error: Checking if there is no failed helm chart in the namespace…failed!


Results from the prerequisites check like the ones shown here are mainly solved by logging on to the AKS cluster as mentioned above. With the different logins, several files/directories are written which will be accessed by the prerequisites check, e.g.:

  • /root/.kube/config
  • /root/.docker/config.json
  • /root/.helm
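A small sketch to verify these files exist before starting the prerequisites check ($HOME stands in for /root when the installation runs as root):

```shell
# Count which of the files/directories read by the prerequisites check exist.
present=0; absent=0
for f in "$HOME/.kube/config" "$HOME/.docker/config.json" "$HOME/.helm"; do
  if [ -e "$f" ]; then present=$((present+1)); else absent=$((absent+1)); fi
done
echo "$present of 3 present; if any are absent, repeat the logins above"
```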

With the installation of SAP Data Intelligence 3.0 based on the slcb tool, most of the checks now run directly in AKS and thus become obsolete.

new Blog: SAP Data Intelligence 3.0 – implement with slcb tool


 

Roland Kramer, SAP Platform Architect for Intelligent Data & Analytics
@RolandKramer

 
