How to install SAP Data Intelligence 3.0 on-premise edition

By Dimitri Vorobiev

SAP Data Intelligence 3.0 features a completely redesigned installation and deployment method. Those who previously worked with SAP Data Hub 2.x will notice that we have removed the traditional bash and Python scripts that were used to perform installations. In fact, you will not even download SAP Data Intelligence from the Service Marketplace; instead, you will deploy a containerized installer into the target Kubernetes cluster. The new installer is responsible for mirroring Docker images and deploying Kubernetes resources from within the Kubernetes cluster.

Below is a high-level summary of the changes with regard to installation:

  • Further integration with SAP Maintenance Planner
    • All upgrades and installations will require a stack.xml file generated by the Maintenance Planner to begin deployment
  • No dependency on an installer jumpbox
    • Installer can be deployed onto Kubernetes directly from a workstation
    • Docker image mirroring and installation is performed from within Kubernetes cluster which reduces deployment time
  • Reduced cluster resource requirements
    • Data Science Platform (DSP), Vora database stack and Diagnostic Framework are all optional and can be skipped to reduce resource strain

Prerequisites:

  • Kubernetes cluster v1.14 or v1.15
  • A private Docker container registry for mirroring images
  • A workstation (Windows, macOS or Linux) with the following components installed:
    • kubectl v1.14 or higher
    • helm, to deploy an NGINX ingress controller after completing the installation (optional)

For a complete list of prerequisites please refer to the official SAP Data Intelligence 3.0 Installation Guide. A quick sanity check of the workstation tooling is shown below.
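Before you begin, you can verify that the workstation tooling is in place. This is a minimal sanity check; the registry host below is a placeholder for your own private registry:

# Verify kubectl is installed and meets the version requirement
kubectl version --client

# Optional: verify helm is available for the post-installation ingress setup
helm version

# Verify the private Docker registry accepts your credentials (replace the host with your own)
docker login my-registry.example.com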

 

Step 1: Download the SLC Bridge tool

This guide assumes you are using a Linux machine, but the commands on Windows or macOS should be identical.

To deploy the installer into Kubernetes you will need the SLC Bridge (slcb) binary tool, which can be downloaded from the SAP Service Marketplace here. Upload this file to your workstation.

If you download the tool for Linux, the filename will misleadingly have a Windows .EXE extension. Regardless of the operating system you are using, I recommend renaming the SLCB01_XX_.EXE file to slcb and placing it in the /usr/bin directory to make it easier to run commands. You may also need to make it executable using chmod.

mv SLCB01_43-70003322.EXE /usr/bin/slcb

chmod +x /usr/bin/slcb
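To confirm the binary runs, you can print its build information. The version subcommand is an assumption here; its output should resemble the "'slcb' executable information" block shown later in this guide:

# Print the build date, version and supported schemata of the slcb binary (subcommand assumed)
slcb version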

 

Step 2: Initialize the SLC Bridge pod in Kubernetes

If you haven’t done so already, ensure that your kubectl tool is configured to communicate with your Kubernetes cluster. A simple way to check is to list the nodes of your cluster.

kubectl get nodes

From your workstation execute the following command:

slcb init

You will be prompted with a series of questions. Their descriptions can be found in the official installation guide here. Note: at this time there is no difference between “Typical” and “Expert” deployment mode.

When choosing how to expose the installer web UI, a LoadBalancer service is the most common and preferred method over a NodePort service.

To monitor or troubleshoot the status of the initialization you can query the Kubernetes namespace where the SLC Bridge was deployed. By default this namespace is set to sap-slcbridge.

kubectl -n sap-slcbridge get all
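If you only need the external address of the installer UI, you can query the service directly. This is a minimal sketch assuming the default namespace and service name; for a NodePort service the load balancer ingress field stays empty (see the NodePort note below):

# Print the external IP assigned to the SLC Bridge LoadBalancer service
kubectl -n sap-slcbridge get service slcbridgebase-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'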

 

Finally, in the terminal output of the slcb tool you should see the complete URL of the SLC Bridge Web UI, labelled slcbridgebase-service. Make a note of this IP address and port number. If you chose to deploy SLC Bridge using a NodePort service, use the external/public IP address of one of your Kubernetes worker nodes instead (not the workstation IP address).
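For the NodePort case, a quick way to find a worker node's public address and the assigned port, assuming your cloud provider populates the EXTERNAL-IP column:

# The EXTERNAL-IP column lists each node's public address
kubectl get nodes -o wide

# The node port is the second number in the service's PORT(S) column, e.g. 9000:30337/TCP
kubectl -n sap-slcbridge get service slcbridgebase-service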

************************
*       Message        *
************************
  Deployment "slcbridgebase" has 1 available replicas in namespace "sap-slcbridge"
  Service "slcbridgebase-service" is listening on "https://34.77.12.163:9000/docs/index.html"

  Choose action Next [n/<F1>]: n

 

Step 3: Generate stack.xml file in SAP Maintenance Planner

Go to the SAP Maintenance Planner at https://apps.support.sap.com/sap/support/mp and click on “Plan a new system” -> “Plan”

On the left menu, select “Container Based” -> “SAP Data Intelligence 3” -> 3.0 (03/2020)

Note on patches: SAP Maintenance Planner will automatically select the latest patch for a given service pack. The latest patch version of a service pack is not displayed in the SAP Maintenance Planner.

You must also specify which stack you want to install. Starting with SAP Data Intelligence 3.0 you can choose from three different installation stacks with an option to skip the deployment of the Vora database and the Diagnostics Framework (Kibana/Grafana).

Each stack option is described in the official SAP Data Intelligence Installation guide.

 

The Select Files step gives you the opportunity to download the slcb tool directly from the SAP Maintenance Planner, but since it was already installed in step 1 we can skip this step.

Confirm the empty selection and click Next to continue to the Download Files step.

 

If your Kubernetes cluster is accessible from the public internet (e.g. deployed in the public cloud such as AWS) then proceed to option A in this guide.

If your cluster is not accessible from the public internet (e.g. an on-premise Kubernetes cluster hidden behind a corporate proxy, or on a private cloud network) or you prefer to use the command line to install SAP Data Intelligence, then proceed to option B in this guide.

 

Option A: Deploying MP_stack.xml file to SLC Bridge via Maintenance Planner

In SAP Maintenance Planner, click on Execute Plan, enter the IP address and port number of your SLC Bridge LoadBalancer or NodePort Kubernetes service (obtained in Step 2 of this guide), and then click Next

 

Before you proceed, you must log on to the SLC Bridge tool in order to obtain an authentication token. Without it, the SAP Maintenance Planner will not be able to upload the stack.xml file. Go to https://<SLC IP address>:<Port>/docs/index.html

Once logged in, return to the SAP Maintenance Planner and click the Deploy button to upload the MP_stack.xml file to your SLC Bridge in Kubernetes. There is no progress bar, so you may not see a success message right away. Once the success message appears, click Next.

Note: If you do not see a success message, the SAP Maintenance Planner may be unable to reach your SLC Bridge pod; check your firewall/security group rules to ensure the incoming connection is not blocked. A quick connectivity test is sketched below.
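This is a minimal reachability test from any machine with internet access, using the LoadBalancer address from step 2 as an example; any HTTP response confirms the endpoint is reachable, while a timeout points to a blocked connection:

# -k skips certificate validation, as the SLC Bridge typically presents a self-signed certificate
curl -k -I https://34.77.12.163:9000/docs/index.html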

 

The deployment of the MP_stack.xml file is complete and you can proceed to the SAP Data Intelligence installer web UI. In the next step of the Maintenance Planner a link to the Web UI is provided, but you can also go to https://<IP-address>:<port>/docs/index.html

Proceed to step 4.

 

Option B: Deploying the MP_stack.xml file to SLC Bridge via slcb command line

Since the SAP Maintenance Planner cannot communicate with Kubernetes clusters hidden behind firewalls or proxy servers you will have the option of uploading the Stack XML file via command line.

Download the MP_stack.xml file from the SAP Maintenance Planner to your workstation.

On your workstation, execute the following command:

slcb execute --useStackXML /path/to/MP_stack.xml

You will be prompted for the complete SLC Bridge URL and the login credentials that were defined in step 2 of this guide. After this step the XML file will have been uploaded to the SLC Bridge in Kubernetes.

You now have the option to continue the installation via the command line, or to cancel the current execution using CTRL+C and proceed with the installation via the Web UI at the URL that you received at the end of step 2: https://<IP_ADDRESS>:<PORT_NUMBER>/docs/index.html

Both the Web UI and the command line are identical in terms of the installation process; the only difference is visual.

If you choose to use the command line, select Option 1 (SAP DATA INTELLIGENCE 3 SP Stack 3.0) to continue. You can follow the instructions in Step 4 to complete the command-line installation.

slcb execute --useStackXML MP_Stack_2000891052_2020047_.xml 

'slcb' executable information
Executable:   slcb
Build date:   2020-04-03 07:30:22 UTC
Git branch:   fa/rel-1.1
Git revision: 55125df84328acf8c97303aae5c978b0466b97b2
Platform:     linux
Architecture: amd64
Version:      1.1.44
SLUI version: 2.6.57
Arguments:    execute --useStackXML MP_Stack_2000891052_2020047_.xml
Working dir:  /Downloads
Schemata:     0.0.44, 1.4.44

Url [<F1>]: https://34.77.12.163:9000/docs/index.html
User: admin
Password: *******

Execute step Download Bridge Images

***************************
* Stack XML File Uploaded *
***************************

  
  Successful Upload of Stack XML File
  You uploaded the Stack XML File "stack_2000891052.xml" (ID 2000891052). It contains:
  
  Product Version:       SAP DATA INTELLIGENCE 3
  Support Package Stack: 3.0 (03/2020)
  S-User:                S123456789
  Product Bridge Image:  com.sap.datahub.linuxx86_64/di-forwarding-bridge:3.0.12
  SLC Bridge will now proceed to download the Product Bridge Images.


Available Options

1: + SAP DATA INTELLIGENCE 3 SP Stack 3.0 (03/2020) (ID 2000891052)
2: + Planned Software Changes
3: + Maintenance

Select option 1 .. 3 [<F1>]: 1

 

Step 4: Selecting a stack and defining installation parameters

The first prompt will ask which stack to deploy. Each stack option is described in the official SAP Data Intelligence Installation guide.

Before you are prompted for installation parameters, the installer first verifies (but does not copy) that all necessary SAP Data Intelligence images are available in the SAP Docker Artifactory. This can take several minutes.

Notes on upgrades:

  • Kubernetes must be upgraded to v1.14 or v1.15 prior to performing the upgrade
  • Only upgrades from existing SAP Data Intelligence clusters or SAP Data Hub 2.7 Patch 4 are supported. Older installations of SAP Data Hub must first upgrade to 2.7.4
  • The namespace of an existing DI/DH installation must be used when upgrading
  • There are known issues with upgrading from SAP Data Hub 2.7. Before upgrading please read SAP Note 2908055 – SAP Data Intelligence 3.0 Upgrade Note

 

All parameters, including the differences between Basic and Advanced installation modes, are described in the official SAP Data Intelligence Installation guide.

It is highly recommended to deploy SAP Data Intelligence into a separate namespace from the SLC Bridge.

 

After answering all prompts you will be presented with a summary screen before proceeding with the installation or upgrade.

 

Step 5: Monitoring and troubleshooting the installation

The installer does not output logs or detailed progress information during deployment. You can monitor progress by querying the pod logs using the kubectl command line tool on your workstation.

By default the SLC Bridge namespace is set to sap-slcbridge. The name of the pod depends on which stack you have chosen to install; for example, if you are deploying the smallest stack, DI Platform, the pod name will be di-platform-product-bridge. The installer logs are stored in the productbridge container. See the example below.

# kubectl -n sap-slcbridge get pods 
NAME                             READY   STATUS    RESTARTS   AGE
di-platform-product-bridge       2/2     Running   0          10m
slcbridgebase-67fc885767-rvp45   2/2     Running   0          2d4h

# kubectl -n sap-slcbridge logs di-platform-product-bridge -c productbridge
..
..
2020-04-09T20:04:01.561Z	INFO	cmd/cmd.go:243	1> DataHub/di3/default [Pending] 
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> └── Spark/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   ├── SecurityOperator/di3/default [Pending]  [Start Time:  2020-04-09 20:02:30 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │       └── SecurityOperatorDeployment/di3/default [Pending]  [Start Time:  2020-04-09 20:03:02 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │           └── CertificateResource/di3/tls [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │           └── CertificateResource/di3/jwt [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> └── VSystem/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   ├── Uaa/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   │   ├── Hana/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   │       └── SecurityOperator/di3/default [Pending]  [Start Time:  2020-04-09 20:02:30 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   │           └── SecurityOperatorDeployment/di3/default [Pending]  [Start Time:  2020-04-09 20:03:02 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   │               └── CertificateResource/di3/tls [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   │               └── CertificateResource/di3/jwt [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.562Z	INFO	cmd/cmd.go:243	1> │   ├── Hana/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1> │       └── SecurityOperator/di3/default [Pending]  [Start Time:  2020-04-09 20:02:30 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1> │           └── SecurityOperatorDeployment/di3/default [Pending]  [Start Time:  2020-04-09 20:03:02 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1> │               └── CertificateResource/di3/tls [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1> │               └── CertificateResource/di3/jwt [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1> └── StorageGateway/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     └── Auditlog/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     │   ├── Hana/di3/default [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     │       └── SecurityOperator/di3/default [Pending]  [Start Time:  2020-04-09 20:02:30 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     │           └── SecurityOperatorDeployment/di3/default [Pending]  [Start Time:  2020-04-09 20:03:02 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     │               └── CertificateResource/di3/tls [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     │               └── CertificateResource/di3/jwt [Unknown]  [Start Time:  0001-01-01 00:00:00 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>     └── SecurityOperator/di3/default [Pending]  [Start Time:  2020-04-09 20:02:30 +0000 UTC]
2020-04-09T20:04:01.563Z	INFO	cmd/cmd.go:243	1>         └── SecurityOperatorDeployment/di3/default [Pending]  [Start Time:  2020-04-09 20:03:02 +00

 

If an installation is taking longer than expected, you can also check the status of the pods in your target namespace for crashes, other errors, or pods stuck in status Pending.

kubectl get pods -n di3
NAME                                      READY   STATUS    RESTARTS   AGE
hana-0                                    1/2     Running   0          112s
spark-master-7f7bf7885c-d2j4r             1/1     Running   0          114s
spark-worker-0                            2/2     Running   0          114s
spark-worker-1                            2/2     Running   0          106s
vora-security-operator-74d6b755b6-c9thh   1/1     Running   0          5m21s
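To find out why a pod is stuck in Pending (for example, insufficient memory or an unbound persistent volume claim), inspect its events. A short sketch using the di3 namespace and the hana-0 pod from the listing above:

# The Events section at the end of the output usually names the scheduling problem
kubectl -n di3 describe pod hana-0

# Alternatively, list recent events for the whole namespace
kubectl -n di3 get events --sort-by=.metadata.creationTimestamp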

 

Step 6: Post-installation Configuration

Exposing the front-end 

After the installation is complete, the SAP Data Intelligence front-end is not exposed to external traffic by default. Exposing it is typically done by manually deploying an ingress controller and an ingress Kubernetes resource. If you are not familiar with the concept of an ingress, it is highly recommended to read up on it before continuing.

The specific steps for deploying an ingress controller for each cloud vendor are covered in the official installation guide here, and for on-premise platforms here. For orientation, a rough sketch of the NGINX-based approach follows.
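As an illustration only, and not a substitute for the official guides linked above, the NGINX route looks roughly like this. The chart location, the hostname, and the vsystem backend service name and port are assumptions; verify them against the installation guide for your platform:

# Deploy an NGINX ingress controller with helm (chart location assumed)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Route external traffic for a placeholder hostname to the SAP Data Intelligence
# front-end; the "vsystem" service on port 8797 is an assumption from comparable setups
kubectl -n di3 apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: vsystem
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: di.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: vsystem
          servicePort: 8797
EOF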

Configuring container registry for Pipeline Modeler

If you deployed SAP Data Intelligence on Microsoft Azure, or if you have a password-protected registry, you will have to provide the Docker credentials; without them the Pipeline Modeler will be unable to push Docker images to the registry. The exact steps to do this are covered here. A quick way to pre-verify the credentials is sketched below.
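To rule out a credential problem before configuring SAP Data Intelligence, you can verify push access from any Docker-enabled machine. A minimal sketch with placeholder registry and image names:

# Log in with the same credentials you intend to give the Pipeline Modeler
docker login my-registry.example.com

# Push a small throwaway image to confirm write access
docker pull alpine:3.11
docker tag alpine:3.11 my-registry.example.com/connectivity-test:latest
docker push my-registry.example.com/connectivity-test:latest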

 

Installing permanent license key

All newly installed clusters come with a 90-day temporary license. After this period the temporary license expires and you will no longer be able to log in, so be sure to import your permanent license right away to avoid unnecessary downtime. The exact steps to import the license are covered here.

 

How to uninstall SAP Data Intelligence and re-/uninstall the SLC Bridge

To uninstall SAP Data Intelligence, use the SLC Bridge Web UI, specify the namespace you wish to work on, and choose the “Uninstall” option. Do NOT simply delete the Kubernetes namespace where SAP Data Intelligence is installed, as this leaves behind many other Kubernetes resources that would then have to be deleted manually.

 

To uninstall the SLC Bridge tool, run the slcb init command on your workstation. After specifying the namespace where the SLC Bridge pod is running, you will be prompted to reinstall or uninstall it.

slcb init

< ... some text truncated for brevity ... >

************************
* SLC Bridge Namespace *
************************

  Enter the Kubernetes namespace for the SLC Bridge.
  Namespace [<F1>]: slcb


************************
*  Deployment Option   *
************************

  Choose the required deployment option.
  
        1. Reinstall same version
     >  2. Uninstall

Comments
Roland Kramer

      Hello Dimitri

This was exactly what was needed a few weeks ago ... 😉

      See also my findings here – SAP Data Intelligence 3.0 – implement with slcb tool

      Best Regards Roland

Dimitri Vorobiev (Blog Post Author)

      Looks great! I would say this takes a deeper dive into the technicalities of the new slcb tool.

Regys Mene

      Hallo Dimitri,

great blog with detailed steps! Regarding the temporary 90-day license: do you mean that during these 90 days Data Intelligence can also be used as a “trial”, or are other required components missing to run Data Intelligence with the temporary 90-day license?

       

      Thanks in advance,

      Regys

Dimitri Vorobiev (Blog Post Author)

      If your S-user is not associated with a business account that has a valid Data Intelligence license then you won’t be able to generate a stack.xml file or mirror images from SAP.

Vasi Venkatesan

Very nice blog Dimitri, it provides good technical detail.

      Some questions.

      You mention: To monitor or troubleshoot the status of the initialization you can query the Kubernetes namespace sap-slcbridge

      kubectl -n sap-slcbridge get all

• This is a specific step, but the namespace could be different if the user chooses not to use the default namespace provided by the slcb tool. Do you agree?

Secondly, when you talk about configuring the container registry on Azure, do you need the password if the ACR is configured without any Admins? I believe I have seen it working without a password configuration for an Azure ACR without Admins. Can you confirm?

One other point: the Maintenance Planner provides the stack.xml file for version 2.7 as well (if you want to install 2.7), but I do not see that working with slcb.

       

      Thank you

Roland Kramer

      Hello Vasi Venkatesan

for Data Hub 2.7 you still have to use the "classical" SL Container Bridge based on the SAP Host Agent - see Maintenance Planner and the SLC Bridge for Data Hub

      Best Regards Roland

       

Vasi Venkatesan

      Thank you Roland Kramer

Dimitri Vorobiev (Blog Post Author)

Thanks for the feedback, I've incorporated it.

       

      Regarding the container registry in Azure, I expect that by default customers will want to secure their docker registry with a username and password. The post-installation configuration step covers the process of uploading these credentials to SAP Data Intelligence. Of course, if the registry does not require credentials then this step can be skipped.

      Regarding the stack.xml file for Data Hub 2.7, yes we already supported using the SAP Maintenance Planner back then, but the SLC Bridge installer tool is not backwards compatible with SAP Data Hub.

Rajendra Chandrasekhar

I was able to install on AWS using our I-number without any issue. Installation of DI 3.0 was much smoother than the earlier version. Great work Dimitri and team..

Still need to verify the instance.

An initial observation: I came across multiple connection failures during slcb init before it finally succeeded in connecting to repositories.sap.ondemand.com

      Cheers,

      Raj

Hauke Schaper

While installing DI 3.0, the following error kept occurring:

Error writing blob: Error determining upload URL: http: no Location header in response

This was because the pods had no write permission on the registry. Once that was set up in the cluster, the rest ran through without problems.

Dimitri Vorobiev (Blog Post Author)

      Hello Hauke, most likely your installer is failing to communicate with your private docker registry. I would check if the registry is reachable from your target Kubernetes cluster.

Roland Kramer

      Hello Hauke
to avoid such errors, you should create a "docker secret" to use while you are installing SAP Data Intelligence 3.0. This was also the case with SAP Data Hub 2.x.

      See the Blog - SAP DataHub 2.7 Installation with SLC Bridge for details

Furthermore, make sure that the Service Principal used doesn't lose its secret while you are installing, because this is the secret you are specifying in the docker secret ...

      Best Regards Roland

Kyoungmun Chang

      Dear Dimitri:

      Thank you for your detailed explanation.

      I am trying to install DI 3 on GKE (Google Cloud)

      and followed the same steps in the post (about installing Data Hub on GKE)

      https://blogs.sap.com/2019/04/24/sap-data-hub-2.5-fresh-installation-on-google-cloud/

       

      but when I executed

      slcb init

      it failed and stopped

      saying

      **************************************************

      Executing Step WaitForK8s SLCBridgePod Failed

      **************************************************

      Execution of step WaitForK8s SLCBridgePod failed

      Synchronizing Deployment slcbridgebase failed [9.262593017s].

       

      I checked that

      kubectl -n sap-slcbridge get all

NAME                              READY   STATUS             RESTARTS   AGE
pod/slcbridgebase-d6b49d8d8-7ffb7 0/2     CrashLoopBackOff   11         5m20s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/slcbridgebase   0/1     1            0           5m20s

       

May I ask: what causes this error?

       

      Have a nice day and thank you, Dimitri

Dimitri Vorobiev (Blog Post Author)

      Did you check for a detailed error message in the pod logs? You can use this command:

      kubectl logs slcbridgebase-d6b49d8d8-7ffb7  -n sap-slcbridge

Roland Kramer

      Hello Dimitri

      you missed the log option in the command ... 😉

      kubectl logs slcbridgebase-988f57f68-qvvmk slcbridge -n sap-slcbridge

      Best regards Roland

Kyoungmun Chang

      Dear Dimitri

Thank you for your valuable comment.

The log states that the error may be related to read/write permissions on the Google Container Registry:

       

2020-05-14T01:03:28.037Z        ERROR   images/authProvider.go:476      Getting GCR Access Token failed: Google Container Registry does not have read-write permisions. Permissions to the storage pool of the cluster can only be set at cluster creation time
ERROR   Getting GCR Access Token failed: Google Container Registry does not have read-write permisions. Permissions to the storage pool of the cluster can only be set at cluster creation time
2020-05-14T01:03:28.037Z        ERROR   images/authProvider.go:372      Cluster configuration is not correct: Google Container Registry does not have read-write permisions. Permissions to the storage pool of the cluster can only be set at cluster creation time

       

May I ask one more question?

For the Google Container Registry (GCR) to be readable/writable by the VMs in GKE, the 3 VMs inside GKE must be stopped, but that looks impossible. Yet the DI3 installation manual PDF says it is possible to install DI on GKE.

What would be a good (best) way to set up GCR (the registry) and GKE's VMs to install DI3?

      Thank you.

Antonio Maradiaga

      Kyoungmun Chang you need to add the https://www.googleapis.com/auth/devstorage.read_write scope when creating the GKE cluster. Note that https://www.googleapis.com/auth/devstorage.full_control does not work, it needs to be read_write.

      Example:

gcloud beta container --project [PROJECT-ID] clusters create "data-intelligence-3" --zone "europe-west4-a" --no-enable-basic-auth --cluster-version "1.15.11-gke.13" --machine-type "n2-standard-4" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/userinfo.email","https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/taskqueue","https://www.googleapis.com/auth/bigquery","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/bigtable.data","https://www.googleapis.com/auth/pubsub","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/trace.append","https://www.googleapis.com/auth/source.full_control","https://www.googleapis.com/auth/devstorage.read_write" --num-nodes "3" --enable-stackdriver-kubernetes --enable-ip-alias --network "projects/msc-qmul-170903112/global/networks/default" --subnetwork "projects/msc-qmul-170903112/regions/europe-west4/subnetworks/default" --default-max-pods-per-node "110" --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0

I’ve tested it and it works!

      Additional documentation regarding GCR scopes - https://cloud.google.com/container-registry/docs/troubleshooting#permission_issues_when_communicating_with

Kyoungmun Chang

      Dear Antonio:

      Thank you for your detailed help

       

As you mentioned, I had (several days ago) finally figured out that I need

https://www.googleapis.com/auth/devstorage.read_write

So I executed a command like yours (a command generated by GCP) to create the GKE cluster in the GCP console, and the problem with the Google Container Registry was finally solved.

       

But there is still a problem: I am using a free account in GCP, so there are limitations on resources (CPUs, RAM, HDDs etc). I can only create worker machines with 2 vCPUs and 8 or 16 GB RAM, so the "hana-0 pod" can't be created.

Thus, it looks impossible to install DI3 under the 8-16 GB RAM per node limitation, so unfortunately I gave up installing DI3 on GKE.

Maybe I will ask my supervisor for a paid GCP account.

       

      Thank you for your help again, and have a nice weekend, Antonio

Raja Mahesh Musunoori

      Hi Kyoungmun Chang

       

I was stuck in a similar situation and then tried the below config changes to make it work:

1. I've used n1-standard-4 machines with the default 3 nodes and 45 GB memory.

2. Enabled autoscaling for my default-pool cluster and set the maximum number of nodes to 6. Let's say the default nodes are 3 and extendable up to 6 nodes per zone depending on the need.

3. I've upgraded to the full account and changed my CPU quota from 8 to 12 cores -- this did not cost me anything as I am still in the free tier period.

With the above changes, it worked and I didn't face the storage issues that we had with the hana-0 pod earlier.

      Hope this helps.

      Best Regards,

       

Kyoungmun Chang

      Dear Raja.

      Thank you for your comment !!

      I will try as you suggest (maybe tomorrow or this Friday)

      Have a nice day and see you again, Raja

       

      Best regards,

      Kyoungmun

Roland Kramer

      Hello Chang

you might find the suggestion from this answer useful as well; whether it is AWS, Azure, Google or Alibaba, the requirements for the SAP Data Intelligence Platform remain the same - https://answers.sap.com/questions/13028783/technical-requirements-to-install-sap-data-intelli.html?childToView=13030678#answer-13030678

      Best Regards Roland

Kyoungmun Chang

      Dear Roland.

      Thank you so much !

Raja Mahesh Musunoori

      Hi @Dimitri,

      Very nicely put together.. thank you very much!!

Do you recommend the “SAP Data Intelligence Full” installation on GCP?

My K8s cluster configuration is just as in the command line of Fabio’s post https://jam4.sapjam.com/discussions/uEJPUyfRlF8PCGSampt5n8 except for the machine type, which is “n1-standard-1”

Also, I’ve opted for “StorageClass Configuration” as per this official documentation https://help.sap.com/viewer/a8d90a56d61a49718ebcb5f65014bbe7/3.0.latest/en-US/abfa9c73f7704de2907ea7ff65e7a20a.html with all possible storage classes: Default StorageClass, VSystem StorageClass, DLog StorageClass, Disk StorageClass, Consul StorageClass, HANA StorageClass, Diagnostics StorageClass

Any recommendations? My “SAP Data Intelligence Full” installation is stuck at 97% and unable to move further..

      I’ve checked kubectl -n <diinstallation-1-slcbridge> get pods

NAME                              READY   STATUS    RESTARTS   AGE
diagnostics-prometheus-server-0   0/1     Pending   0          6h13m
hana-0                            0/2     Pending   0          6h13m

       

diagnostics and hana have been in Pending status for more than 6 hours.. I believe this is because of a storage space issue, or the GCP cluster configuration is not sufficient for the “SAP Data Intelligence Full” setup.

Do you think I will be able to complete the installation with the “SAP Data Intelligence Platform” option?

May I ask if I can skip the “storage class configuration” to mitigate these pod challenges?

      BR, Mahesh

       

Raja Mahesh Musunoori

      Resolved it.. thanks.

Eldho George

      Thank you Dimitri for the blog and presentation

I had to use "Cloud Shell" in GKE for the installation. The Maintenance Planner had some issues getting the various options and proceeding with the next steps.

I was trying to get the log using the command you mentioned in the blog, but I am getting the below error:

demoxxxx@cloudshell:~ (xxxx-12345)$ kubectl -n sap-slcbridge get pods
NAME                             READY   STATUS    RESTARTS   AGE
di-platform-product-bridge       2/2     Running   0          14h
slcbridgebase-79c5b4cf4d-xch6s   2/2     Running   0          14h

demoxxxx@cloudshell:~ (xxxx-12345)$ kubectl -n sap-slcbridge logs di-platform-product-bridge -c produdctbridge
error: container produdctbridge is not valid for pod di-platform-product-bridge

Marco Wittenzellner

      Hello Dimitri,

      Thank you for your great post!

I'm trying to install DI on Amazon EKS. When I execute slcb init, after setting my S-user credentials I'm getting the following error when it tries to copy from the SAP repository to my ECR:

      "Error trying to reuse blob sha........ at destination: failed to read from destination repository com.sap.sl.cbpod/slcbridgebase: 403 (Forbidden)"

      Any help would be appreciated !

      Best Regards,
      Marco

Marco Wittenzellner

Solved it!
My docker user wasn't able to access the repo.

Ren Sui

hi Marco:

I ran into the same issue as you when installing SAP DI 3.1 on my AWS K8s cluster.

Could you share your solution for this problem?

      thanks a lot.

Marco Wittenzellner

      Hi Ren,

as far as I remember, the docker user couldn't read from the SAP repository. I don't know the exact solution, but you could start by making sure the Docker user is logged in properly. I did something similar to this:

      aws ecr get-login-password \
      --region xx-xxxx-xx \
      | docker login \
      --username AWS \
      --password-stdin

      as mentioned here:

      https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/get-login-password.html

       

Also, I suggest using SAP CAL to install Data Intelligence if your problem still exists. It will take care of small problems like this automatically.

      BR
      Marco

Ren Sui

      hi Marco:

      Thanks for your quick feedback.

after double-checking with an SAP consultant and AWS support, we found that the root cause was that the related repository did not exist in my ECR, even though I had created one repository for the DI installation; the error was gone after I created the repository manually.

But from my understanding, the repositories for the components should be created automatically during installation inside my previously created repository. My problem may be related to the AWS China region; SAP support will double-check the details.

      thanks anyway.

Srinath Balu

      Hello Dimitri,

       

I have installed the SLC Bridge in GCP Kubernetes:

kubectl -n sap-slcbridge get all
NAME                                 READY   STATUS    RESTARTS   AGE
pod/slcbridgebase-58df6d94c6-j4npw   2/2     Running   0          5h15m

NAME                            TYPE           CLUSTER-IP   EXTERNAL-IP       PORT(S)          AGE
service/slcbridgebase-service   LoadBalancer   10.0.1.160   XXX.XXX.XXX.XXX   9000:30337/TCP   7h37m

       

I went with Option B ... the images were uploaded to gcr.io and I am able to see them. However, when I tried to install Data Intelligence it asked me to provide the S-user ID again.

Below is the error message. Could you please provide an update?

       

Dimitri Vorobiev (Blog Post Author)

      Hi Srinath,

During slcb init you only copy the SLC Bridge images to the registry.

Afterwards, when the SLC Bridge pod runs the installation, the installer prompts you for the S-user credentials again so that it can copy the SAP Data Intelligence images to your registry.

Srinath Balu

      Hi Dimitri,

       

I created a public-IP-based Kubernetes cluster but am now getting a client timeout when pulling this image (at this particular point), although it downloaded the earlier images without issues:

      Error initializing source slcb://73554900100900004388.dockersrv.repositories.sapcdn.io/com.sap.datahub.linuxx86_64/datahub-operator-installer-base:2002.1.72: cannot resolve digest for 73554900100900004388.dockersrv.repositories.sapcdn.io/com.sap.datahub.linuxx86_64/datahub-operator-installer-base:2002.1.72: Get "https://notary.repositories.sap.ondemand.com/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

       

      Regards,

      B.Srinath

Dimitri Vorobiev (Blog Post Author)

      Hi Srinath, are you still experiencing this issue?

pavankumar ongolu

      Hello Dimitri,

       

I am facing the below issue while installing DI 3.0 SP03 on an AWS EKS cluster.

      Execution of step Install finished with error: execution failed: status 1, error: Error from server (NotFound): datahubs.installers.datahub.sap.com "default" not found.

       

Dimitri Vorobiev (Blog Post Author)

Did you run the slcb init command before starting the installation?

Andy Trigg

      Hi Dimitri,

       

Excellent blog and just what I've been looking for, although I'm hitting an issue at step 4. I don't even get to the parameter selections. After hitting OK I get the following:

      Any ideas would be greatly appreciated.

       

      Thank you

       

      Andy

Dimitri Vorobiev (Blog Post Author)

      It looks like you did not provide correct credentials to your private registry when trying to push images.

manish madhav

      Hello Dimitri,

Life-saver blog.. however I am stuck at the stage of the pending pod deployment hana-0. Not sure what the reason is; the logs do not say much. The installer stays stuck and times out overnight. How do I solve the pending pods issue?

Dimitri Vorobiev (Blog Post Author)

      Hello, a pending hana-0 pod typically happens when your cluster is low on memory. We recommend three worker nodes with 32GB of memory each.

       

By the way, in general whenever a pod is stuck in status Pending you can find out why by running the command kubectl describe pod <pod-name>

Marco Wittenzellner

      Hello Dimitri,

unfortunately we forgot to activate the permanent license key, so our DI deployment on AWS has gone into lock mode and we cannot log in anymore.

      Is there a way to unlock the installation at this point or do we need to install it again ?

      Many thanks in advance
      Marco

Dimitri Vorobiev (Blog Post Author)

Hello Marco, did you pay SAP CAL to unlock the instance yet? If yes, can you please e-mail me directly at dimitri.vorobiev@sap.com

Marco Wittenzellner

We didn't use SAP CAL actually. We deployed and installed it from scratch in AWS.

Dimitri Vorobiev (Blog Post Author)

      Hi Marco, please create a support ticket under the component EIM-DH. Someone from the support team will help you restore your cluster.

Claudio Palladino

      Hi Dimitri,

I am trying to install SAP Data Intelligence on GCP, but it is failing at 60%.
I did some troubleshooting with the kubectl command and found the following issue:

pod/datahub.post-actions.validations.validate-vflow-mmb6z

Do you have any idea how I can figure this out?

      Thanks in advance.

Dimitri Vorobiev (Blog Post Author)

Hi Claudio, from the logs it looks like the installation completed but was unable to start the Pipeline Modeler pod. You should be able to find more detailed logs inside the pod.

      Try launching the Pipeline Modeler and you should see a new pipeline modeler pod starting.

Claudio Palladino

Hi Dimitri,

      Thanks for your feedback.
I am trying to start the sample pipeline (Data Generator) and I am getting the same issue:


      failed to deploy graph: failed to prepare graph images: failed to prepare image: build failed for image: eu.gcr.io/acn-hana-coe-sap-landscape/vora/vflow-node-d2d4352a0cfc540f9be2ae9685643998bb11e126:3.1.54-com.sap.sles.base-20210322-130939
      Thanks
Dimitri Vorobiev (Blog Post Author)

      Please see this document: https://help.sap.com/viewer/a8d90a56d61a49718ebcb5f65014bbe7/3.1.latest/en-US/a98feae43198450a875896ec73208207.html

Your service account used by GKE needs to have the Storage Admin role assigned, which allows the Pipeline Modeler to write/push images to your GCR.

       

Claudio Palladino

Hi Dimitri,

I have all grants for my service account.

I also attach the issue that I found on GCP GKE.

      Thanks

Dimitri Vorobiev (Blog Post Author)

      Is Kaniko enabled? See this doc: https://help.sap.com/viewer/a8d90a56d61a49718ebcb5f65014bbe7/3.1.latest/en-US/0b621cf6ed7b42f08c9a10108cb16298.html

Claudio Palladino

Hi Dimitri Vorobiev,

yes, I enabled it during the installation wizard; the installation still failed for the same reason, remaining stuck at 60%.

To double-check, I followed the doc that you referred to and it was already set up.

      Thanks

Claudio Palladino

Hi Dimitri Vorobiev,

sorry to bother you again.
Do you have any advice about the issue that I am facing?

      Thanks