SAP DataHub 2.7 Installation with SLC Bridge

Last Changed: 18th of October 2019


One of the biggest steps towards the Intelligent Enterprise is the implementation of SAP Data Hub, using the latest version 2.6 or above, e.g. 2.7.
SAP Data Hub 2.7 is also the foundation for the cloud service “SAP Data Intelligence” (see What is SAP Data Intelligence), which is currently offered on AWS. Azure will follow later this year.

SAP Data Hub enables automated and reliable data processing across the entire data landscape of a company.

In the previous blogs, I wrote about the preparations for implementing SAP DataHub 2.6.x and higher, e.g. 2.7.0:

Blog: prepare the Installation Host for the SLC Bridge
Blog: Maintenance Planer and the SLC Bridge for Data Hub 

In the meantime (12th of August 2019), some updated resources have become available:

  • SLPLUGIN00_25-70003322.SAR
  • MP_Stack_1000756382_20190812_SDH_12082019.xml
  • DHFOUNDATION06_1-80004015.ZIP

Upgrade to Version 2.7.0 (18th of October 2019)

  • SLPLUGIN00_27-70003322.SAR
  • MP_Stack_2000763611_20190925_CNT_upgrade27.xml
  • DHFOUNDATION07_0-80004015.ZIP (2.7.147)

Side note: the screens do not differ between versions 2.6.1 and 2.7.0.

 


To see what is going on during the SAP Data Hub installation,
see the blog by @Thorsten Schneider – Installing SAP Data Hub

Your SAP on Azure – Part 13 – Install SAP Data Hub on Azure Kubernetes Service
@Bartosz Jarkowski

 


Starting the Software Lifecycle Container Bridge (SLC Bridge)

 

Here, as a customer/partner/external you log on with your S-User/password, and as an SAP employee with your user ID/password.

Depending on the result of your Maintenance Planner execution plan, the current SLPLUGIN version will be uploaded. The stack.xml was already uploaded before …

If this is not the first upload of the DHFOUNDATION zip file, you can choose to use the existing one.

See the Information about the Target Software Level

As already described in the blog – prepare the Installation Host for the SLC Bridge, various follow-up problems will occur if the Prerequisites Check does not pass successfully …

 

az aks get-upgrades --name kubcluidna03 --resource-group EDW --output table

Name     ResourceGroup    MasterVersion    NodePoolVersion    Upgrades
-------  ---------------  ---------------  -----------------  ---------------
default  EDW              1.12.8           1.12.8             1.13.9, 1.13.10
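If the listing shows that the cluster still runs a Kubernetes version below what the prerequisites check expects, it can be upgraded first. A minimal sketch, reusing the cluster name and resource group from the example above and one of the versions from the Upgrades column (adjust both to your landscape):

```shell
# Sketch: upgrade the AKS cluster to one of the versions listed under "Upgrades"
# before running the SLC Bridge prerequisites check again.
az aks upgrade --name kubcluidna03 --resource-group EDW --kubernetes-version 1.13.10
```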

 


These are some errors (independent of the IaaS provider) in the phase SLP_HELM_CHECK:

Note 2839319 – Elasticsearch validation failed during upgrade, not healthy – SAP Data Hub
Note 2813853 – Elasticsearch runs out of persistent volume disk space

  • Error: forwarding ports: error upgrading connection: the server could not find the requested resource
  • Error: release wrapping-narwhal failed: clusterroles.rbac.authorization.k8s.io “$NAMESPACE-elasticsearch” already exists
  • Error: release lopsided-anteater failed: clusterroles.rbac.authorization.k8s.io “$NAMESPACE-vora-deployment-operator” already exists
  • Error: configmaps is forbidden: User “system:serviceaccount:$NAMESPACE:default” cannot list resource “configmaps” in API group “” in the namespace “$NAMESPACE”
  • Error: Checking if there is no failed helm chart in the namespace…failed!

These errors mostly result from misconfiguration or missing settings during the setup of the installation host.
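For the two “already exists” errors, leftover cluster-scoped objects from a previous attempt are the usual cause. A minimal cleanup sketch, assuming $NAMESPACE holds your Data Hub namespace (only delete objects you know belong to the failed attempt):

```shell
# Sketch: remove leftover cluster roles from a previous failed installation
# so the "already exists" errors do not recur on the next run.
kubectl delete clusterrole "${NAMESPACE}-elasticsearch" --ignore-not-found
kubectl delete clusterrole "${NAMESPACE}-vora-deployment-operator" --ignore-not-found
```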

Once the first validation of the environment has passed, the installation continues with details of the Kubernetes cluster …

Choosing the type “Advanced Installation” allows you to examine additional details.

If you have no internet connection, either in the IaaS or from an on-premise location, you can download the container images in advance. With an internet connection this is not necessary …

The “Technical User” mentioned here is a dedicated user, which is created based on your S-User or user ID and has the format: <InstNr.>.<S-User>
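A quick sketch of that naming scheme; both values below are hypothetical placeholders, not real credentials:

```shell
# The technical user is derived from the installation number and the S-User.
installation_number="0001234567"   # hypothetical
s_user="S0001234567"               # hypothetical
technical_user="${installation_number}.${s_user}"
echo "${technical_user}"   # → 0001234567.S0001234567
```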

The Container Registry is created in the Kubernetes Cluster (e.g. AKS)

 

A Certificate Domain is needed, e.g. *.westeurope.cloudapp.azure.com
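On Azure, that domain is simply the DNS label of the public IP plus the regional cloudapp suffix. A small sketch (the label and region values are hypothetical):

```shell
# Build the certificate domain from a DNS label and an Azure region.
# Both values below are hypothetical examples.
dns_label="datahub-demo"
azure_region="westeurope"
certificate_domain="${dns_label}.${azure_region}.cloudapp.azure.com"
echo "${certificate_domain}"   # → datahub-demo.westeurope.cloudapp.azure.com
```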

SAP Data Hub System Tenant Administrator Password

To ease the initial Installation, the users “user” and “system” can have the same password

To reduce complexity, it is advisable not to enable the checkpoint store at this time, as errors would occur later in the installation and you might have to re-run the complete setup.

These are the different options, if you choose “Enable checkpoint store” (Example Azure)

 

Hence at the end of the configuration, skip the validation of the checkpoint store

Configuring default storage classes for certain types is advisable for SAP Data Hub installations:

kubectl -n $NAMESPACE get storageclass ${STORAGE_CLASS} -o yaml
  • Default Storage Class
  • System Management Storage Class
  • Dlog Storage Class
  • Disk Storage Class
  • Consul Storage Class
  • SAP HANA Storage Class
  • SAP Data Hub Diagnostics Storage Class
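To inspect the available storage classes and change the cluster default before the installation, something like the following can be used (the storage class name managed-premium is an AKS example and may differ in your cluster):

```shell
# Sketch: list the storage classes, then mark one as the cluster default
# via the standard Kubernetes default-class annotation.
kubectl get storageclass
kubectl patch storageclass managed-premium \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```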

The Docker container log path is provided by the IaaS provider.

 

It is advisable to “Enable loading NFS modules” here to avoid follow-up errors, e.g.

  • -e vora-vsystem.vRep.nfsv4MinorVersion=1
  • -e vora-dqp.components.disk.replicas=3
  • -e vora-dqp.components.dlog.storageSize=200Gi

Parameters used in this section start with vora-<service>. A wrong notation will lead to an installation error, and you will have to start from scratch again, as the SLC Bridge is not an SWPM.

 


Executing the Installation Service

Now the Installation starts with the execution of the Installation Service


Cleaning out old Docker images by deleting the directories below /var/lib/docker/overlay2 will lead to inconsistencies.

Instead run the following procedure:

server:/var/lib/docker/overlay2 # docker image ls
REPOSITORY                                                                                                                           TAG                 IMAGE ID            CREATED             SIZE
73554900100900002861.val.dockersrv.repositories.vlab-sapcloudplatformdev.cn/com.sap.datahub.linuxx86_64/vsolution-ml-python          2.7.74              2bb018c63e82        3 days ago          2.36GB
conregidna03.azurecr.io/com.sap.datahub.linuxx86_64/vsolution-ml-python                                                              2.7.74              2bb018c63e82        3 days ago          2.36GB
73554900100900002861.val.dockersrv.repositories.vlab-sapcloudplatformdev.cn/com.sap.datahub.linuxx86_64/vsolution-hana_replication   2.7.74              7119eba7f498        3 days ago          976MB
conregidna03.azurecr.io/com.sap.datahub.linuxx86_64/vsolution-hana_replication                                                       2.7.74              7119eba7f498        3 days ago          976MB
73554900100900002861.val.dockersrv.repositories.vlab-sapcloudplatformdev.cn/com.sap.datahub.linuxx86_64/vsystem-voraadapter          2.7.118             a732aeefe28f        11 days ago         676MB
conregidna03.azurecr.io/com.sap.datahub.linuxx86_64/vsystem-voraadapter                                                              2.7.118             a732aeefe28f        11 days ago         676MB
73554900100900002861.val.dockersrv.repositories.vlab-sapcloudplatformdev.cn/com.sap.datahub.linuxx86_64/hello-sap                    1.1                 c4d1d0758d85        11 months ago       2.01MB
sapb4hsrv:/var/lib/docker/overlay2 #

 

server:~ # docker image ls
server:~ # docker images -aq
server:~ # docker rmi $(docker images -aq) --force
server:~ # docker images -aq
server:~ # service docker restart
server:~ # docker images
REPOSITORY      TAG    IMAGE ID   CREATED   SIZE
server:~ #

 

Milestone “Voracluster CRD has been deployed” reached

Activating the Vora cluster might take some time, depending on the number of pods …

 

server:/ # kubectl get pods -n $NAMESPACE | grep vora
vora-catalog-745f55844f-dbhzw                                 2/2     Running     0          176m
vora-config-init-75vg9                                        0/2     Completed   0          176m
vora-consul-0                                                 1/1     Running     0          3h3m
vora-consul-1                                                 1/1     Running     0          3h3m
vora-consul-2                                                 1/1     Running     0          3h3m
vora-deployment-operator-66fbc9f9-bc9m9                       1/1     Running     0          178m
vora-disk-0                                                   2/2     Running     0          176m
vora-dlog-0                                                   2/2     Running     0          178m
vora-dlog-admin-gpkr6                                         0/2     Completed   0          177m
vora-landscape-79c6fd9c55-h5s4d                               2/2     Running     0          176m
vora-nats-streaming-6f6cb5b7d-t52hx                           1/1     Running     0          174m
vora-relational-cf7758794-kzwtf                               2/2     Running     0          176m
vora-security-operator-85f6cd966f-lxnzn                       1/1     Running     0          179m
vora-textanalysis-84fbbd5d88-2wql4                            1/1     Running     0          174m
vora-tx-broker-59b66f96df-mhpxg                               2/2     Running     0          176m
vora-tx-coordinator-5bcb55449d-7hlll                          2/2     Running     0          176m
voraadapter-t6kt2-5d7c5f46bb-tzw5g                            3/3     Running     0          171m
voraadapter-vql9x-7b86f94dcd-5rfhl                            3/3     Running     0          169m
server:/ #

 

If you have already started the installation again and some artifacts were possibly left over, it can happen that a chart cannot be deployed and the SLC Bridge shows an error:

2019-08-16T12:38:43.061+0200    INFO    cmd/cmd.go:243  2> Error: release kissed-catfish failed: storageclasses.storage.k8s.io "vrep-nsidnawdf03" already exists
2019-08-16T12:38:43.067+0200    INFO    cmd/cmd.go:243  1> 2019-08-16T12:38:43+0200 [ERROR] Deployment failed, please check logs above and Kubernetes dashboard for more information!
2019-08-16T12:38:43.230+0200    INFO    control/steps.go:288
----------------------------
Execution of step Install (Deploying SAP Data Hub.) finished with error: execution failed: status 1, error: Error: release kissed-catfish failed: 
storageclasses.storage.k8s.io "vrep-nsidnawdf03" already exists
----------------------------
2019-08-16T12:38:43.230+0200    WARN    control/controlfile.go:1430
----------------------------
Step Install failed: execution failed: status 1, error: Error: release kissed-catfish failed: storageclasses.storage.k8s.io "vrep-nsidnawdf03" already exists
----------------------------
2019-08-16T12:38:43.230+0200    ERROR   slp/slp_monitor.go:102
----------------------------
Executing Step Install Failed:
Execution of step Install failed
execution failed: status 1, error: Error: release kissed-catfish failed: storageclasses.storage.k8s.io "vrep-nsidnawdf03" already exists
.
Choose Retry to retry the step.
Choose Abort to abort the SLC Bridge and return to the Welcome dialog.
Choose Cancel to cancel the SLC Bridge immediately.
----------------------------

The easiest way is to roll back the failed chart with helm:

sapb4hsrv:~ # helm ls
NAME                    REVISION        UPDATED                         STATUS          CHART                           APP VERSION     NAMESPACE
crazy-hedgehog          1               Fri Aug 16 12:28:25 2019        DEPLOYED        vora-diagnostic-rbac-2.0.2                      nsidnawdf03
giggly-quoll            1               Fri Aug 16 12:28:39 2019        DEPLOYED        vora-consul-0.9.0-sap13         0.9.0           nsidnawdf03
honorary-lizard         1               Fri Aug 16 12:32:09 2019        DEPLOYED        vora-security-operator-0.0.24                   nsidnawdf03
hoping-eel              1               Fri Aug 16 12:28:29 2019        DEPLOYED        vora-deployment-rbac-0.0.21                     nsidnawdf03
invincible-possum       1               Fri Aug 16 12:28:37 2019        DEPLOYED        hana-0.0.1                                      nsidnawdf03
kissed-catfish          1               Fri Aug 16 12:38:42 2019        FAILED          vora-vsystem-2.6.60                             nsidnawdf03
lame-maltese            1               Fri Aug 16 12:38:27 2019        DEPLOYED        vora-textanalysis-0.0.33                        nsidnawdf03
peddling-porcupine      1               Fri Aug 16 12:33:31 2019        DEPLOYED        vora-deployment-operator-0.0.21                 nsidnawdf03
piquant-bronco          1               Fri Aug 16 12:32:33 2019        DEPLOYED        storagegateway-2.6.32                           nsidnawdf03
righteous-hare          1               Fri Aug 16 12:32:11 2019        DEPLOYED        uaa-0.0.24                                      nsidnawdf03
trendsetting-porcupine  1               Fri Aug 16 12:33:29 2019        DEPLOYED        vora-cluster-0.0.21                             nsidnawdf03
unhinged-olm            1               Fri Aug 16 12:32:39 2019        DEPLOYED        vora-sparkonk8s-2.6.22          2.6.22          nsidnawdf03
vehement-ant            1               Fri Aug 16 12:31:58 2019        DEPLOYED        vora-security-context-0.0.24                    nsidnawdf03
winning-emu             1               Fri Aug 16 12:32:07 2019        DEPLOYED        auditlog-0.0.24                                 nsidnawdf03
sapb4hsrv:~ # helm rollback kissed-catfish 1
Rollback was a success! Happy Helming!
sapb4hsrv:~ # helm ls
NAME                    REVISION        UPDATED                         STATUS          CHART                           APP VERSION     NAMESPACE
crazy-hedgehog          1               Fri Aug 16 12:28:25 2019        DEPLOYED        vora-diagnostic-rbac-2.0.2                      nsidnawdf03
giggly-quoll            1               Fri Aug 16 12:28:39 2019        DEPLOYED        vora-consul-0.9.0-sap13         0.9.0           nsidnawdf03
honorary-lizard         1               Fri Aug 16 12:32:09 2019        DEPLOYED        vora-security-operator-0.0.24                   nsidnawdf03
hoping-eel              1               Fri Aug 16 12:28:29 2019        DEPLOYED        vora-deployment-rbac-0.0.21                     nsidnawdf03
invincible-possum       1               Fri Aug 16 12:28:37 2019        DEPLOYED        hana-0.0.1                                      nsidnawdf03
kissed-catfish          2               Fri Aug 16 13:53:35 2019        DEPLOYED        vora-vsystem-2.6.60                             nsidnawdf03
lame-maltese            1               Fri Aug 16 12:38:27 2019        DEPLOYED        vora-textanalysis-0.0.33                        nsidnawdf03
peddling-porcupine      1               Fri Aug 16 12:33:31 2019        DEPLOYED        vora-deployment-operator-0.0.21                 nsidnawdf03
piquant-bronco          1               Fri Aug 16 12:32:33 2019        DEPLOYED        storagegateway-2.6.32                           nsidnawdf03
righteous-hare          1               Fri Aug 16 12:32:11 2019        DEPLOYED        uaa-0.0.24                                      nsidnawdf03
trendsetting-porcupine  1               Fri Aug 16 12:33:29 2019        DEPLOYED        vora-cluster-0.0.21                             nsidnawdf03
unhinged-olm            1               Fri Aug 16 12:32:39 2019        DEPLOYED        vora-sparkonk8s-2.6.22          2.6.22          nsidnawdf03
vehement-ant            1               Fri Aug 16 12:31:58 2019        DEPLOYED        vora-security-context-0.0.24                    nsidnawdf03
winning-emu             1               Fri Aug 16 12:32:07 2019        DEPLOYED        auditlog-0.0.24                                 nsidnawdf03
sapb4hsrv:~ #
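If more than one chart ends up in status FAILED, the names can be filtered out of the helm ls listing and rolled back one by one. A sketch that works on the Helm 2 text output shown above (it assumes no chart name itself contains the word FAILED):

```shell
# Extract release names whose status is FAILED from `helm ls` output on stdin
# (Helm 2 text format: NAME  REVISION  UPDATED  STATUS  CHART ...).
failed_releases() {
  awk 'NR > 1 && /FAILED/ { print $1 }'
}

# Usage against a live cluster (not executed here):
#   helm ls | failed_releases | while read -r rel; do helm rollback "$rel" 1; done
```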

 

Milestone “Initializing system tenant” reached …

Milestone “Running validation for vora-cluster” reached


If this phase takes longer than expected, you should check the log files for timeouts, issues, etc.

 

server:/sapmnt/hostctrl/slplugin/work # dir
total 544
drwxr-x--- 3 root   root     4096 Aug 16 16:37 .
drwxr-x--- 9 sapadm sapsys   4096 Aug 14 11:19 ..
-rwxr-x--- 1 root   root    31860 Aug 16 16:34 EvalForm.html
-rw-r----- 1 root   root    11366 Aug 16 16:34 analytics.xml
-rw-r----- 1 root   root     1226 Aug 16 16:32 auditlog_validation_log.txt
-rw-r----- 1 root   root      628 Aug 16 16:20 cert_generation_log.txt
-rwxr-x--- 1 root   root    59077 Aug 16 16:11 control.yml
-rw-r----- 1 root   root       91 Aug 16 16:32 datahub-app-base-db_validation_log.txt
drwxr-x--- 2 root   root     4096 Aug 16 16:32 displaytab
-rw-r----- 1 root   root   154092 Aug 16 16:16 helm.tar.gz
-rw-r----- 1 root   root     1566 Aug 16 16:12 inputs.log
-rw------- 1 root   root      437 Aug 16 16:37 loginfo.yml
-rw-r----- 1 root   root   219760 Aug 16 16:37 slplugin.log
-rw-r----- 1 root   root       15 Aug 16 16:10 slplugin.port
-rw-r----- 1 root   root    14349 Aug 16 16:15 variables.yml
-rw-r----- 1 root   root     3625 Aug 16 16:31 vora-cluster_validation_log.txt
-rw-r----- 1 root   root     1098 Aug 16 16:32 vora-diagnostic_validation_log.txt
-rw-r----- 1 root   root       59 Aug 16 16:32 vora-textanalysis_validation_log.txt
-rw-r----- 1 root   root     2077 Aug 16 16:31 vora-vsystem_validation_log.txt
server:/sapmnt/hostctrl/slplugin/work #

 

After the validation of the Vora cluster, the installation is finished.

The summary screen shows additional information about the installation …

Done …


Upgrade SAP DataHub to Version 2.7.0

Update from 26th of September 2019: Upgrade to Version 2.7.0x successfully done …

Update from 11th of October 2019: Fresh Installation of Version 2.7.146 successfully done …

Update from 16th of October 2019: Fresh Installation of Version 2.7.147 successfully done …

 

 

server:~ # helm list
NAME                    REVISION        UPDATED                         STATUS          CHART                           APP VERSION     NAMESPACE
aged-gopher             1               Tue Oct 15 14:07:17 2019        DEPLOYED        storagegateway-2.7.44                           nsidnawdf03
alert-quokka            1               Fri Oct 11 21:25:37 2019        DEPLOYED        vora-cluster-0.0.21                             nsidnawdf03
bald-worm               1               Fri Oct 11 21:35:04 2019        DEPLOYED        vora-diagnostic-rbac-2.0.2                      nsidnawdf03
brown-frog              2               Tue Oct 15 14:05:25 2019        DEPLOYED        vora-security-operator-0.0.24                   nsidnawdf03
cert-manager            2               Tue Oct  1 12:57:30 2019        DEPLOYED        cert-manager-v0.7.2             v0.7.2          cert-manager
cloying-arachnid        2               Tue Oct 15 14:05:40 2019        DEPLOYED        hana-2.7.6                                      nsidnawdf03
dandy-swan              1               Fri Aug 16 17:21:17 2019        DEPLOYED        nginx-ingress-1.15.1            0.25.1          ingress-basic
dangling-ladybug        1               Tue Oct 15 14:07:28 2019        DEPLOYED        vora-sparkonk8s-2.7.18          2.7.18          nsidnawdf03
dull-catfish            1               Mon Aug 19 12:58:58 2019        DEPLOYED        nginx-ingress-1.15.1            0.25.1          kube-system
eyewitness-stingray     1               Fri Oct 11 21:35:08 2019        DEPLOYED        vora-diagnostic-2.0.2                           nsidnawdf03
goodly-peahen           2               Tue Oct 15 14:12:48 2019        DEPLOYED        vora-vsystem-2.7.123                            nsidnawdf03
intent-jackal           2               Tue Oct 15 14:05:21 2019        DEPLOYED        vora-security-context-0.0.24                    nsidnawdf03
misty-owl               1               Tue Oct 15 14:12:31 2019        DEPLOYED        vora-textanalysis-0.0.33                        nsidnawdf03
newbie-hamster          2               Tue Oct 15 14:07:01 2019        DEPLOYED        uaa-0.0.24                                      nsidnawdf03
nonexistent-camel       1               Fri Aug 16 16:40:02 2019        DEPLOYED        nginx-ingress-1.15.1            0.25.1          kube-system
nordic-marsupial        2               Tue Oct 15 14:08:21 2019        DEPLOYED        vora-deployment-operator-0.0.21                 nsidnawdf03
nosy-alpaca             2               Tue Oct 15 14:06:58 2019        DEPLOYED        auditlog-0.0.24                                 nsidnawdf03
vigilant-fly            1               Tue Oct 15 14:08:16 2019        DEPLOYED        vora-cluster-0.0.21                             nsidnawdf03
virtuous-seal           1               Tue Aug 20 10:24:59 2019        DEPLOYED        nginx-ingress-1.15.1            0.25.1          kube-system
voting-mandrill         2               Tue Oct 15 14:08:19 2019        DEPLOYED        vora-deployment-rbac-0.0.21                     nsidnawdf03
waxen-lightningbug      2               Tue Oct 15 14:05:47 2019        DEPLOYED        vora-consul-0.9.0-sap13         0.9.0           nsidnawdf03
youthful-kitten         2               Tue Oct 15 14:05:15 2019        DEPLOYED        network-policies-0.0.1                          nsidnawdf03
server:~ #

 


Post-Installation Steps for the SAP Data Hub

 


Finally you have to expose the SAP Data Hub Launchpad. This can be found in the
online help – Configuring SAP Data Hub Foundation on Cloud Platforms

 


After upgrading SAP Data Hub, the base system runs on the new version, but applications on top still run on the previous version.
online help – Activate New SAP Data Hub System Management Applications

 

 


 


It looks like the way @Bartosz Jarkowski exposed the SAP Data Hub Launchpad is more efficient than the SAP online help: Your SAP on Azure – Part 20 – Expose SAP Data Hub Launchpad

Your SAP on Azure – Part 16 – Easy TLS/SSL with SAP Data Hub and Let’s Encrypt works as well! Actually, with the current SAP Data Hub installation only a few steps are necessary.

Simply check the existing configuration:

kubectl get pods -n cert-manager
kubectl describe certificate vsystem-tls-certs -n $NAMESPACE
kubectl describe ing vsystem -n $NAMESPACE

 

server:/SDH # vi vsystem-ingress.yaml
#***
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vsystem
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "500m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
spec:
  tls:
  - hosts:
    - <dns_label>.<azure_region>.cloudapp.azure.com
    secretName: vsystem-tls-certs
  rules:
  - host: <dns_label>.<azure_region>.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: vsystem
          servicePort: 8797
#***
server:/SDH # kubectl apply -f vsystem-ingress.yaml -n $NAMESPACE
server:/SDH # kubectl edit -f vsystem-ingress.yaml -n $NAMESPACE

 

 


If you have already installed SAP Data Hub again, you have to check whether an ingress controller is already running. There should be only one ingress controller, attached to one LoadBalancer and one external IP address.
online help – Expose SAP Vora Transaction Coordinator and SAP HANA Wire Externally

 

server:/SDH # helm install stable/nginx-ingress --namespace kube-system

server:/SDH # kubectl -n kube-system get services -o wide  | grep ingress-controller
dining-mule-nginx-ingress-controller              LoadBalancer   10.0.110.14    51.144.74.205   80:30202/TCP,443:31424/TCP   38d     app=nginx-ingress,component=controller,release=dining-mule
nonexistent-camel-nginx-ingress-controller        LoadBalancer   10.0.112.222   13.80.131.18    80:30494/TCP,443:31168/TCP   3d18h   app=nginx-ingress,component=controller,release=nonexistent-camel
virtuous-seal-nginx-ingress-controller            LoadBalancer   10.0.161.96    13.80.71.39     80:31059/TCP,443:31342/TCP   70m     app=nginx-ingress,component=controller,release=virtuous-seal
server:/SDH #

server:/SDH # kubectl -n kube-system delete services dining-mule-nginx-ingress-controller
server:/SDH # kubectl -n kube-system delete services nonexistent-camel-nginx-ingress-controller 

Furthermore, you must create a TLS certificate for your cluster to enable secure access:

dns_domain=<datahub>.<location>.cloudapp.azure.com
echo "dns_domain=${dns_domain}"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=${dns_domain}"
kubectl -n $NAMESPACE create secret tls vsystem-tls-certs --key /tmp/tls.key --cert /tmp/tls.crt

By default, the SAP Vora Transaction Coordinator and SAP HANA Wire are exposed as a Kubernetes service of type ClusterIP. If you want to connect to the SAP Vora Transaction Coordinator or to SAP HANA Wire, you must be able to reach one of the Kubernetes nodes over the network.

kubectl -n $NAMESPACE get service vora-tx-coordinator-ext -o="custom-columns=IP:.spec.clusterIP,PORT_TXC_AND_HANAWIRE:.spec.ports[*].targetPort"

kubectl -n $NAMESPACE expose service vora-tx-coordinator-ext --type LoadBalancer --name=vora-tx-coordinator-ext-lb-internal

kubectl -n $NAMESPACE patch service vora-tx-coordinator-ext-lb-internal -p '{"metadata":{"annotations": {"service.beta.kubernetes.io/azure-load-balancer-internal":"true"}}}'

kubectl -n $NAMESPACE get service vora-tx-coordinator-ext-lb-internal
kubectl -n $NAMESPACE get service vora-tx-coordinator-ext-lb

Log on to the SAP Data Hub

online help – Launchpad for SAP Data Hub

A new logon procedure is available with SAP Data Hub version 2.7.x.

 

Note 2751127 – Support information for SAP Data Hub

 


Install vctl from the Launchpad Help Section

In case it is not possible to access the SAP Data Hub UI via the web browser, you can use the command-line tool “vctl” to execute some important settings in a kind of “offline mode”.

 

server:/SDH # chmod +x vctl
server:/SDH # cp vctl /usr/bin
server:/SDH # vctl
SAP Data Hub System Management CLI
More information at https://help.sap.com/viewer/p/SAP_DATA_HUB
server:/SDH #

Here are some important commands for SAP Data Hub maintenance with “vctl”.
Please note that the vctl admin commands only work in the system tenant with the user system.

vctl login https://<cluster>.westeurope.cloudapp.azure.com system system --insecure
vctl whoami
vctl parameter list
vctl scheduler list-instances -o text
vctl apps scheduler stop-all-instances
vctl apps scheduler stop-cluster
vctl tenant list --insecure
vctl strategy list --insecure

 

online help – Using the SAP Data Hub System Management Command-Line Client

However, you will see that the tool kubectl is more convenient for daily work.

 


 


If you installed SAP Data Hub on Azure or if you use a password-protected container registry, then you must configure the access credentials for the container registry. To provide the access credential for the container registry to SAP Data Hub Modeler, you must define a secret within the corresponding tenant and associate it with the Modeler.
online help – Provide Access Credentials for a Password Protected Container Registry

 

 

 


To verify the SAP Data Hub installation, test the SAP Data Hub Modeler
online help – define a Pipeline

 

 

 


Configure the SAP Data Hub Connection Management and add the needed connections and certificates.
online help – Using SAP Data Hub Connection Management

 

Note 2784068 – Modeler instances show 500 error – SAP Data Hub
Note 2796073 – HANA authentication failed error in Conn Management – SAP Data Hub
Note 2807716 – UnknownHostException when testing connection – SAP Data Hub
Note 2823040 – Clean Up Completed Graphs – SAP Data Hub
Note 2813853 – Elasticsearch runs out of persistent volume disk space

 

 

Note 2775549 – Release-independent ODP interface for SAP Data Hub
Note 2711139 – SAP Data Hub 2.x: Limit. and prereq. BW Dataset & BW Data Transfer
Note 2727180 – Connecting SAP Data Hub to SAP BW or SAP BW/4HANA
Note 2731192 – SAP Data Hub – ABAP connection type for SAP Data Hub
Note 2807438 – Release Restrictions for SAP Data Hub 2.6
Note 2838751 – Release Restrictions for SAP Data Hub 2.7

 

 

 


 

Roland Kramer, SAP Platform Architect for Intelligent Data & Analytics
@RolandKramer

 

4 Comments
  • Hello @Bartosz Jarkowski
    Thanks for your “offline support” as well!

    I just went to your Blog – Your SAP on Azure – Part 16 – Easy TLS/SSL with SAP Data Hub and Let’s Encrypt and it worked well …

    Actually, with the current version of SDH 2.6.x there is less to do, as the namespace cert-manager and the pods are already available. Furthermore, the service account “cert-manager-cainjector” already exists.

    The important step is to adapt vsystem-ingress.yaml with the Let’s Encrypt extension, that does the trick …

    thanks again and best regards
    Roland

     

     

     

    • That’s interesting. I didn’t know that as I haven’t yet installed the new DataHub (I’ll probably wait for the 2.7 release). Maybe the TLS configuration will be included in documentation as well.

  • Hi,

    I used my own Azure subscription and spent a lot of time using the SL Container Bridge from the beginning. No one had really used this internally before, but it helped to get more transparency into the installation process, which is not really easy … 😉

    Furthermore, the analysis of several tryouts of the installation, e.g.

    • corrupted deployment
    • helm init
    • error messages which have no troubleshooting solution so far
    • cleaning service accounts and role bindings
    • complete reset of the SDH Installation (still a myth …)

    helped a lot. I like the graphics from @Thorsten Schneider, which also show what happens during the installation and deployment of the SAP Data Hub. I also mentioned his resources at the beginning of the blog …

    Now I’m looking forward to do a proper Upgrade to a higher Version.

    Upgrade to version 2.7.0 was done on 25th of September 2019.

    Best Regards Roland