This is Part I in a two-part series detailing how to install and configure SAP Data Intelligence (SDI) on a Red Hat OpenShift cluster. In this part, we look at the background and prerequisites for setting up your environment, preparing the OCP cluster for SDI, and deploying the SDI Observer. In Part II, we look at how to perform the actual SDI installation, as well as the tests required to verify your installation and setup. By the end of this two-part post, you will have an SAP Data Intelligence workspace running on an OpenShift cluster. Special thanks to mkoch-redhat and Michal Minar for testing, validating, and providing the technical content for this article.

Background


In this post, you'll learn how to install and use SAP Data Intelligence (SDI) on an OpenShift cluster. To learn more about SAP Data Intelligence itself, this blog is a great place to get started.

Prerequisites


To install SAP Data Intelligence (SDI), you'll need a running OpenShift cluster with at least three worker nodes and access to block and object storage.

The worker nodes must meet the following minimum requirements:

  • 8 CPUs

  • 32 GB memory

  • 100 GB local ephemeral storage


In addition, 250 GB of persistent volume storage and 90 GB for the container registry are needed. See also Minimum Sizing for SAP Data Intelligence for more information.

For the checkpoint storage feature and the data lake feature, you will also need S3-compatible object storage.

SDI with Red Hat OpenShift Container Storage (OCS) has been tested and verified by Red Hat, and this article explains how to use OCS to provide the S3 storage. More information on OpenShift Container Storage can be found here. If you do not yet have an OpenShift cluster, the following table describes the parameters of a feasible setup on AWS.

Figure 1. OpenShift Requirements for SAP Data Intelligence Test Systems

Type       Count   Operating System     vCPU   RAM (GB)   Storage (GB)   AWS Instance Type
Bootstrap  1       RHCOS                2      16         120            i3.large
Master     3+      RHCOS                4      16         120            m4.xlarge
Compute    3+      RHEL 7.6 or RHCOS    4      32         120            m4.2xlarge
Storage    3+      RHCOS                10     24         120 + 2048     m5.4xlarge

For a proof of concept (POC), three worker nodes are sufficient; you can aggregate the SDI and OCS requirements onto the same nodes. In production environments, it is recommended to run SDI and OCS on separate worker nodes, which makes it easier to scale one or the other.

High Level Installation Flow


To deploy SAP Data Intelligence on the cluster, the following steps need to be performed:

  1. Label the worker nodes that should run SDI workloads

  2. Change the configuration of the SDI worker nodes to meet SAP's requirements

  3. Deploy the sdi-observer monitoring and installation helper tool from Red Hat

  4. Prepare the required S3 storage buckets

  5. Apply the required elevated permissions to the SDI project

  6. Deploy the SAP Software Lifecycle Container Bridge (SLC Bridge, SLCB) used to install SDI

  7. Launch the installation of SDI via the SLC Bridge


During this process you will need the following data:

  • Your SAP S-User name and password for downloading the software

  • Login credentials for the Red Hat Customer Portal

  • Login credentials for the OpenShift cluster with admin permissions


Verifying / preparing the management workstation


You will need to prepare a management workstation from which you can access your OpenShift cluster, run Ansible playbooks, and open a web browser session. For the purpose of this guide, a Linux workstation based on RHEL or Fedora is assumed. If you are on Windows, macOS, or another Linux distribution, you will need to adapt the steps accordingly.

  1. Log in to your management workstation

  2. Ensure the following software is installed:

    • ansible for automating the setup, together with the Python modules for managing OpenShift

    • python3-pyyaml

    • python3-urllib3.noarch

    • python3-requests

    • python3-requests-oauthlib

    • python3-openshift (from EPEL)

    • yum-utils for managing repositories

    • git for loading data from github


    On RHEL 8, run the following commands as root:

    # dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    # dnf -y install ansible python3-pyyaml python3-urllib3 python3-requests python3-requests-oauthlib python3-openshift yum-utils


  3. Make sure jq version 1.6 is installed for parsing JSON:

     # curl -L -o /usr/local/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
    # chmod a+x /usr/local/bin/jq


  4. Install an OpenShift client according to your OCP version:

     # OCP_VERSION=4.7.2
    # wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-client-linux-${OCP_VERSION}.tar.gz
    # sudo tar zxvf openshift-client-linux-${OCP_VERSION}.tar.gz -C /usr/bin
    # sudo rm -f openshift-client-linux-${OCP_VERSION}.tar.gz /usr/bin/README.md
    # sudo chmod +x /usr/bin/oc /usr/bin/kubectl


  5. Setup bash completion (optional)

    oc completion bash | sudo tee /etc/bash_completion.d/openshift > /dev/null


  6. Clone the GitHub repository with the Ansible playbooks and scripts for configuring the cluster. You will find the playbooks and scripts used in this blog in the demo subdirectory:
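
    The blog does not spell out the repository URL at this point; assuming it is the redhat-sap/sap-data-intelligence repository referenced later in this post, a minimal sketch looks like this:

    # clone the repository and change into the directory with the demo playbooks
    git clone https://github.com/redhat-sap/sap-data-intelligence.git
    cd sap-data-intelligence/demo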


Verifying the OpenShift Cluster



  1. Make sure you have OpenShift Cluster admin rights:

    # oc whoami
    system:admin


    If you do not have cluster-admin permissions, log in with a user that does. In this article we assume this user is named admin:

    # oc login -u admin


  2. Check that the requirements are met. The following is an example with 3 worker nodes:

     # oc get nodes
    NAME STATUS ROLES AGE VERSION
    ip-10-0-133-218.ec2.internal Ready master 47m v1.20.0+5fbfd19
    ip-10-0-141-94.ec2.internal Ready worker 37m v1.20.0+5fbfd19
    ip-10-0-154-232.ec2.internal Ready master 47m v1.20.0+5fbfd19
    ip-10-0-159-127.ec2.internal Ready worker 40m v1.20.0+5fbfd19
    ip-10-0-167-89.ec2.internal Ready master 48m v1.20.0+5fbfd19
    ip-10-0-175-27.ec2.internal Ready worker 43m v1.20.0+5fbfd19


    You should see 3 worker nodes and 3 master nodes.
    Note

    If you see something like this after your systems have been shut down:

     #  oc get nodes
    NAME STATUS ROLES AGE VERSION
    ip-10-0-137-27.ec2.internal NotReady worker 2d23h v1.20.0+5fbfd19
    ip-10-0-141-89.ec2.internal NotReady master 3d v1.20.0+5fbfd19
    ip-10-0-154-182.ec2.internal NotReady master 3d v1.20.0+5fbfd19
    ip-10-0-159-71.ec2.internal NotReady worker 2d23h v1.20.0+5fbfd19
    ip-10-0-165-90.ec2.internal NotReady worker 2d23h v1.20.0+5fbfd19
    ip-10-0-168-27.ec2.internal NotReady master 3d v1.20.0+5fbfd19


    This can happen if the systems have been shut down for more than 24 hours and the node certificates have expired. The following command manually approves the pending certificate signing requests (CSRs):

    # oc adm certificate approve $(oc get csr | grep Pending | awk '{print $1}')
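
    Node kubelets may request certificates in several rounds while rejoining the cluster. A small helper loop (a hedged sketch; the timing varies) keeps approving pending CSRs until none remain:

     while oc get csr | grep -q Pending; do
         oc adm certificate approve $(oc get csr | grep Pending | awk '{print $1}')
         sleep 10
     done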



  3. Store the worker names in a variable for later use:

     # WORKER=$(oc get nodes | awk ' ( $3 ~ "worker" ) {print $1 }')


  4. Check the hardware resources of the cluster nodes:

    # oc describe node $WORKER  | grep -A 6 Capacity
    Capacity:
    attachable-volumes-aws-ebs: 25
    cpu: 16
    ephemeral-storage: 125293548Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 64792280Ki
    --
    Capacity:
    attachable-volumes-aws-ebs: 25
    cpu: 16
    ephemeral-storage: 125293548Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 64792280Ki
    --
    Capacity:
    attachable-volumes-aws-ebs: 25
    cpu: 16
    ephemeral-storage: 125293548Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 64792280Ki


    This confirms that the minimum requirements are met: each worker node reports 16 CPUs, roughly 64 GiB of memory, and about 120 GiB of local ephemeral storage.
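
    As a shortcut, the same capacity figures can be printed per worker with a single command using standard oc custom columns (a hedged convenience, not required by the installation):

     # compact view of CPU and memory capacity of all worker nodes
     oc get nodes -l node-role.kubernetes.io/worker -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory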


Preparing the OCP cluster for SDI


Switch to the directory with the Ansible playbooks. The playbook ocp_prep_nodes.yml will label all worker nodes in your cluster for use with SDI. Change the variable sdi_configure_ocp_worker_nodelist if you want a different set of nodes. You can also change the "when" statement so that additional node properties are taken into account when selecting nodes for SDI.

Verify and run the playbook using the following command:
# ansible-playbook -i myhosts -vv ocp_prep_nodes.yml

The playbook performs the following steps on all of the nodes in the sdi_configure_ocp_worker_nodelist:

  1. Label the SDI compute nodes with node-role.kubernetes.io/sdi="" (a manual example follows after this list)

  2. Enable the net-raw capability for containers on schedulable nodes

  3. Pre-load additional kernel modules needed by SDI (e.g. NFS and iptables)

  4. Increase the PID limits to 16384

  5. Attach the MachineConfigs defined in steps 2-3 to the nodes carrying the sdi label.

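If you prefer to label a node manually instead of running the playbook, or want to double-check the result, a minimal sketch using standard oc commands (substitute one of your own worker node names):

   # label a single worker node for SDI and list the labelled nodes
   oc label node/ip-10-0-141-94.ec2.internal node-role.kubernetes.io/sdi=""
   oc get nodes -l node-role.kubernetes.io/sdi
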

It may take a while until all the nodes have completed updating. The following
command can be used to wait for the change to be applied to all the
worker nodes:
   oc wait mcp/sdi --all --for=condition=updated

The following command lists the status of the nodes:
   oc get mcp

Note

If the update is not working, check the machineconfig operator.
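
The following commands (standard OpenShift objects, shown here as a hedged starting point) help with inspecting the machine config operator and the SDI pool:

   # overall state of the machine-config cluster operator
   oc get clusteroperator machine-config
   # look for degraded conditions on the sdi machine config pool
   oc describe mcp sdi
   # logs of the machine config operator itself
   oc -n openshift-machine-config-operator logs deployment/machine-config-operator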

Verify that the settings have been completed. You can use the following script to confirm that all required changes have been made on the OpenShift Worker nodes:
#!/usr/bin/bash

# CHECK OCP worker node settings (note: file names may change after a cluster update)

for worker in $(oc get nodes | awk '/worker/{print $1}'); do
    echo "Checking node $worker ------------------------------------------------------------------------------"
    # check for the additional default container capabilities (NET_RAW)
    oc debug node/$worker -- chroot /host cat /etc/crio/crio.conf.d/90-default-capabilities 2> /dev/null
    # check for the additional kernel modules
    oc debug node/$worker -- chroot /host cat /etc/modules-load.d/sdi-dependencies.conf 2> /dev/null
    # check for the module-load service
    oc debug node/$worker -- chroot /host systemctl status sdi-modules-load.service 2> /dev/null
    # check for the pidsLimit setting
    oc debug node/$worker -- chroot /host cat /etc/crio/crio.conf.d/01-ctrcfg-pidsLimit
    echo "--------------------------------------------------------------------------------------------------------"
done

Configuring Storage


Run the playbook ocs-create-S3buckets.yml to create the new project sdi-infra which will store the S3 buckets for SDI:

# ansible-playbook -i myhosts -vv ocs-create-S3buckets.yml

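
For reference, a minimal sketch of the kind of ObjectBucketClaim such a playbook creates (the claim name, namespace, and the OCS NooBaa storage class name below are assumptions for illustration; the playbook sets its own values):

# create an S3 bucket claim against OpenShift Container Storage (NooBaa)
cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: sdi-checkpoint-store
  namespace: sdi-infra
spec:
  generateBucketName: sdi-checkpoint-store
  storageClassName: openshift-storage.noobaa.io
EOF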

By default, two buckets will be created. You can list them like this:

# bash <(curl -s https://raw.githubusercontent.com/redhat-sap/sap-data-intelligence/master/utils/mksdibuckets) list

Bucket claim namespace/name: sdi/sdi-checkpoint-store (Status: Bound, Age: 7m33s)
Cluster internal URL: http://s3.openshift-storage.svc.cluster.local
Bucket name: sdi-checkpoint-store-ef4999e0-2d89-4900-9352-b1e1e7b361d9
AWS_ACCESS_KEY_ID: LQ7YciYTw8UlDLPi83MO
AWS_SECRET_ACCESS_KEY: 8QY8j1U4Ts3RO4rERXCHGWGIhjzr0SxtlXc2xbtE
Bucket claim namespace/name: sdi/sdi-data-lake (Status: Bound, Age: 7m33s)
Cluster internal URL: http://s3.openshift-storage.svc.cluster.local
Bucket name: sdi-data-lake-f86a7e6e-27fb-4656-98cf-298a572f74f3
AWS_ACCESS_KEY_ID: cOxfi4hQhGFW54WFqP3R
AWS_SECRET_ACCESS_KEY: rIlvpcZXnonJvjn6aAhBOT/Yr+F7wdJNeLDBh231


Deploying the SDI Observer


SDI Observer is monitoring software from Red Hat that watches over the behaviour and proper installation of SDI. Before running the SDI Observer, you will need to create a service account for registry.redhat.io at https://access.redhat.com/terms-based-registry/, then download its secret and save it as rht-registry-secret.yaml.

a. Run the following playbook to deploy the SDI observer:

# ansible-playbook -i myhosts -vv deploy-sdi-observer.yml


The playbook does the following:

  1. It creates namespaces for the SDI Observer, SDI itself, and the install bridge:

    1. sdi-observer

    2. sdi

    3. sap-slcbridge



  2. It creates a pull secret for the Red Hat registry within the sdi-observer namespace

  3. It defines several variables for the SDI Observer and deploys it.


You can influence the behaviour of the SDI Observer by changing or adding certain variables in the playbook. See section 4.1 of https://access.redhat.com/articles/5100521 for a list of the variables. Keep in mind that we want sdi-observer to deploy an SDI-compliant registry and to make sure that an OpenShift route is created after the installation of SDI.
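
For example, assuming the parameters are named DEPLOY_SDI_REGISTRY and MANAGE_VSYSTEM_ROUTE (check section 4.1 of the article above for the authoritative names), those two goals map to settings like the following on a deployed observer:

# ask sdi-observer to deploy an SDI-compliant registry and to manage the vsystem route
oc set env -n sdi-observer dc/sdi-observer DEPLOY_SDI_REGISTRY=true MANAGE_VSYSTEM_ROUTE=true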
Note

You can change the variables at a later time with the following command: oc set env -n sdi-observer dc/sdi-observer <variable name>=<value>

Note

You can list the variables with oc set env -n sdi-observer --list dc/sdi-observer

Note

If you change a variable afterwards, a rebuild of sdi-observer might be required. You can trigger the rebuild with the following command:

oc start-build -n sdi-observer -F bc/sdi-observer

b. Wait until the sdi-observer and registry pods are running:

 $ oc get pods
NAME READY STATUS RESTARTS AGE
container-image-registry-1-build 0/1 Completed 0 3m20s
container-image-registry-1-deploy 0/1 Completed 0 82s
container-image-registry-1-jkrx8 1/1 Running 0 79s
deploy-registry-4gccn 0/1 Completed 0 3m26s
sdi-observer-1-build 0/1 Completed 0 5m48s
sdi-observer-1-deploy 0/1 Completed 0 3m53s
sdi-observer-1-xphzw 1/1 Running 0 3m49s


You can follow the deployment processes:

 oc logs sdi-observer-1-build -f
oc logs container-image-registry-1-build -f


c. Check the registry, obtain the credentials, and configure OCP to trust the new registry. The following script tests the registry, prints the access credentials (which are needed for the installation), and applies the additional settings required for the cluster to trust the registry deployed by the sdi-observer:

#!/bin/bash

## Change namespace to sdi-observer
NAMESPACE="${NAMESPACE:-sdi-observer}"
oc project sdi-observer

## Obtain registry credentials
reg_credentials=$(oc get -n "${NAMESPACE:-sdi-observer}" secret/container-image-registry-htpasswd -o jsonpath='{.data.\.htpasswd\.raw}' | base64 -d)
reg_user=$(echo $reg_credentials | cut -d: -f1)
reg_pw=$(echo $reg_credentials | cut -d: -f2)

## Obtain registry hostname
reg_hostname="$(oc get route -n "${NAMESPACE:-sdi-observer}" container-image-registry -o jsonpath='{.spec.host}')"
echo "================================================="
echo "Using registry: $reg_hostname"
echo "USER: $reg_user"
echo "PW : $reg_pw"
echo "================================================="

if [ -z "$reg_user" -o -z "$reg_pw" ]; then
    echo "Something went wrong. Check if the pods are running"
    exit 1
fi

### Obtain the Ingress Router's default self-signed CA certificate
mkdir -p "/etc/containers/certs.d/${reg_hostname}"
router_ca_crt="/etc/containers/certs.d/${reg_hostname}/router-ca.crt"
oc get secret -n openshift-ingress-operator -o json router-ca | \
    jq -r '.data as $d | $d | keys[] | select(test("\\.crt$")) | $d[.]' | base64 -d > ${router_ca_crt}

### Test via curl
curl -k -I --user ${reg_credentials} --cacert ${router_ca_crt} "https://${reg_hostname}/v2/"

### Test via podman
echo $reg_pw | podman login -u $reg_user --password-stdin ${reg_hostname}

reg_login_ok=$?

if [ $reg_login_ok -eq 0 ]; then
    # Configure OpenShift to trust the container registry (section 8.2)
    echo "Configure OpenShift to trust container registry"
    echo "CTRL-C to stop, ENTER to continue"
    read zz
    caBundle="$(oc get -n openshift-ingress-operator -o json secret/router-ca | \
        jq -r '.data as $d | $d | keys[] | select(test("\\.(?:crt|pem)$")) | $d[.]' | base64 -d)"
    # determine the name of the CA configmap if it exists already
    cmName="$(oc get images.config.openshift.io/cluster -o json | \
        jq -r '.spec.additionalTrustedCA.name // "trusted-registry-cabundles"')"
    if oc get -n openshift-config "cm/$cmName" 2>/dev/null; then
        # configmap already exists -> just update it
        oc get -o json -n openshift-config "cm/$cmName" | \
            jq '.data["'"${reg_hostname//:/..}"'"] |= "'"$caBundle"'"' | \
            oc replace -f - --force
    else
        # create the configmap for the first time
        oc create configmap -n openshift-config "$cmName" \
            --from-literal="${reg_hostname//:/..}=$caBundle"
        oc patch images.config.openshift.io cluster --type=merge \
            -p '{"spec":{"additionalTrustedCA":{"name":"'"$cmName"'"}}}'
    fi
    # Check that the certificate is deployed
    sleep 10 # give the configuration some time to propagate
    oc rsh -n openshift-image-registry "$(oc get pods -n openshift-image-registry -l docker-registry=default | \
        awk '/Running/ {print $1; exit}')" ls -1 /etc/pki/ca-trust/source/anchors
else
    echo "Registry setup failed, please repair before you continue"
fi


Ensuring that the project service accounts have correct privileges


SDI uses several service accounts that need additional privileges:

oc login -u admin
oc project sdi
oc adm policy add-scc-to-group anyuid "system:serviceaccounts:$(oc project -q)"
oc adm policy add-scc-to-user privileged -z "$(oc project -q)-elasticsearch"
oc adm policy add-scc-to-user privileged -z "$(oc project -q)-fluentd"
oc adm policy add-scc-to-user privileged -z default
oc adm policy add-scc-to-user privileged -z mlf-deployment-api
oc adm policy add-scc-to-user privileged -z vora-vflow-server
oc adm policy add-scc-to-user privileged -z "vora-vsystem-$(oc project -q)"
oc adm policy add-scc-to-user privileged -z "vora-vsystem-$(oc project -q)-vrep"


Installing the SDI Install Bridge


Now that the SDI Observer is up and running, we can install the SLC Bridge (SDI install bridge) container that is used to install SDI on the cluster. The following steps should be run as the user admin.

# oc login -u admin

# oc whoami
admin


Note

This document assumes your cluster has direct internet access. If you require proxy settings, follow the steps in https://access.redhat.com/articles/5100521, section 5.1.






      1. Download the SL Container Bridge (SLCB) from SAP, which requires an S-User. Go to the Maintenance Planner (MP) at https://apps.support.sap.com/sap/support/mp and click "Plan a New System", then:

        1. Select "CONTAINER based".

        2. Select "SAP DATA INTELLIGENCE", DI - Platform Full, latest version (currently 3.1), and click Next.

        3. Click Next.

        4. Select Linux on x86_64 and confirm the selection.

        5. Select "SL CONTAINER BRIDGE" and click Next.

        6. Select SLCB01_*.EXE, click "Push to Download Basket", then Next.

        7. Click "Execute Plan".

        8. Keep the browser tab open; you will need to return here after installing the SLC Bridge.

        Use the SAP Software Downloader to download the previously selected SLCB01_*.EXE from the download basket. Alternatively, you can download SLCB01_<Version>.EXE for Linux directly from https://support.sap.com/ (click Software Downloads and enter Software Lifecycle Container Bridge in the search field). Rename it to slcb and make it executable:

        # mv SLCB01_*.EXE /usr/bin/slcb
        # chmod +x /usr/bin/slcb


      2. Install the SDI Install Bridge:
        Note

        This tutorial was tested with version 1.1.63. You can install that specific version with slcb init --bridgebaseVersion 1.1.63, but any later version should work.

        When responding to the installer questions, the following selections are important:

        • Installation Type: Expert Mode

        • Service Type

          1. On AWS, choose LoadBalancer. You do not need to provide annotations.

          2. On all other environments choose NodePort.



        • See https://access.redhat.com/articles/5100521 if you need to configure proxies. This article assumes direct connection to the internet.

        • You will need to provide the following information:



            • Address of the Container Image Repository

            • Image registry user name





          • Image registry password

          • Your S-User + password

          • admin user password




        Now you need the credentials and information you noted earlier. Execute slcb init. Here is an example log:

            $ slcb init

        'slcb' executable information
        Executable: slcb
        Build date: 2021-03-26 03:45:45 UTC
        Git branch: fa/rel-1.1
        Git revision: 4f99471a2f764f65da2d72ef74c5259e8639697e
        Platform: linux
        Architecture: amd64
        Version: 1.1.62
        SL Core version: 1.0.0
        SLUI version: 2.6.67
        Arguments: init
        Working dir: /home/generic_emea_mkoch
        Schemata: 0.0.62, 1.13.62

        Explanation of supported shortcuts:
        <F1>: Display help for input value.
        <ENTER> or <Ctrl-N>: Confirm and continue to next input value.
        <F12> or <Ctrl-B>: Go back to previous input value.
        <r>: Retry current step.
        <e>: Edit a multi-line input value.
        <Ctrl-C>: Abort current processing and return to the Welcome dialog of the SLC Bridge Base.
        Ctrl-C is not explicitly shown as an option in the command line prompt but you can always use it.
        <Tab>: Completion of input values.
        In dialogs that accept only a restricted set of values (like files, directories etc)
        use the <Tab> key to cycle through the values or for completion of incomplete input.

        Execute step Download Bridge Images

        ***********************************
        * Product Bridge Image Repository *
        ***********************************

        Enter the address of your private container image repository used to store the bridge images.
        You require read and write permissions for this repository.
        Choose action <F12> for Back/<F1> for help
        Address of the Container Image Repository: container-image-registry-sdi-observer.apps.cluster-bf86.bf86.example.opentlc.com

        ************************
        * Image Registry User *
        ************************

        The user name used to logon to "container-image-registry-sdi-observer.apps.cluster-bf86.bf86.example.opentlc.com".
        Choose action <F12> for Back/<F1> for help
        Image registry user name: user-q5j0lq
        Choose action <F12> for Back/<F1> for help
        Image registry password:

        ***************************
        * Enter Logon Information *
        ***************************

        You require S-User credentials to log on to the SAP Registry ("rhapi.repositories.cloud.sap") for product version "SL TOOLSET 1.0" (01200615320900005323)
        Choose action <F12> for Back/<F1> for help
        S-User Name: S0001234567
        Choose action <F12> for Back/<F1> for help
        Password:

        Copying image slcb://01200615320900005323.dockersrv.repositories.sapcdn.io/com.sap.sl.cbpod/slcbridgebase:1.1.62 to "container-image-registry-sdi-observer.apps.cluster-bf86.bf86.example.opentlc.com"
        Copying image slcb://01200615320900005323.dockersrv.repositories.sapcdn.io/com.sap.sl.cbpod/nginx-sidecar:1.1.62 to "container-image-registry-sdi-observer.apps.cluster-bf86.bf86.example.opentlc.com"
        Checking prerequisite

        Execute step Check Prerequisites
        I0331 13:01:04.372152 6354 request.go:621] Throttling request took 1.153431509s, request: GET:https://api.cluster-bf86.bf86.example.opentlc.com:6443/apis/flows.knative.dev/v1beta1?timeout=32s
        Checking prerequisite Kubernetes Server Version

        ************************
        * Prerequiste Check *
        ************************

        Checking the prerequisites for "SL Container Bridge" succeeded.

        Kubernetes Cluster Context:

        Cluster name: api-cluster-bf86-bf86-example-opentlc-com:6443
        API server URL: https://api.cluster-bf86.bf86.example.opentlc.com:6443

        Editable Prerequisites

        Enter the path to the "kubectl" configuration file. The configuration information contained in this file will specify the cluster on which you are about to
        perform the deployment.
        Choose action <Tab> for completion/<F1> for help
        Path to the "kubeconfig" file: ESC[1G Path to the "kubeconfig" file: /home/generic_emea_mkoch/.kube/configESC[0KESC[71G

        Prerequisite Check Result

        Name Current Value Result Error Message
        KUBECONFIG /home/generic_emea_mkoch/.kube/config + (passed)
        Kubernetes Server Version 1.20.0 + (passed)

        Choose "Retry (r)" to retry the Prerequisite Check.
        Choose "Next (n)" to continue.

        Choose action Retry(r)/Next(n)/<F1> for help: n

        Execute step Collect Input

        ***************************************************************************
        * Choose whether you want to run the deployment in typical or expert mode *
        ***************************************************************************

        You can run the deployment either in typical or expert mode:

        - Typical Mode
        If you choose "Typical Mode", the option is performed with default settings. As a result, you only have to respond to a small selection of prompts.
        - Expert Mode
        If you choose "Expert Mode", you are prompted for all parameters.

        > 1. Typical Mode
        2. Expert Mode
        Choose action <F12> for Back/<F1> for help
        possible values [1,2]: 2

        ************************
        * SLC Bridge Namespace *
        ************************

        Enter the Kubernetes namespace for the SLC Bridge.
        Choose action <F12> for Back/<Tab> for completion/<F1> for help
        Namespace: sap-slcbridge

        ************************
        * Administrator User *
        ************************

        Specify the name of the administrator user for the SLC Bridge Base.
        Choose action <F12> for Back/<F1> for help
        User Name: admin

        *******************************
        * Administrator User Password *
        *******************************

        Define the password of the administrator user admin
        Choose action <F12> for Back/<F1> for help
        Password of User admin:
        Confirm:

        ***********************************************
        * Service Type of the SLC Bridge Base Service *
        ***********************************************

        In order to access the SLC Bridge Base, the UI Port needs to be exposed. This is accomplished by defining a Kubernetes service.
        Kubernetes offers multiple service types. SAP currently supports the following service types. You have to select one of them.

        - Service Type "LoadBalancer" is suitable if your Kubernetes cluster comes with a controller for this service type. For example, this is the case for all
        hyperscaler platforms.
        - Service Type "NodePort" is suitable if your Kubernetes cluster runs on premise and the cluster nodes can be reached from your network

        > 1. Service Type LoadBalancer
        2. Service Type NodePort
        Choose action <F12> for Back/<F1> for help
        possible values [1,2]: 2

        ************************
        * Proxy Settings *
        ************************

        Do you want to configure Proxy Settings for the Pods running in the cluster?

        This is necessary if the Pods in the cluster are running behind a proxy.

        Configure Proxy Settings: n
        Choose action <F12> for Back/<F1> for help
        possible values [yes(y)/no(n)]: n

        Execute step Show Summary

        ************************
        * Parameter Summary *
        ************************

        Choose "Next" to start the deployment with the displayed parameter values or choose "Back" to revise the parameters.

        SLC Bridge Namespace
        Namespace: sap-slcbridge

        Image Registry User
        Image registry user name: user-q5j0lq

        SLP_BRIDGE_REPOSITORY_PASSWORD

        Enter Logon Information
        S-User Name: S0000000000

        IMAGES_SAP_SUSER_PASSWORD

        KUBECONFIG
        Path to the "kubeconfig" file: /home/generic_emea_mkoch/.kube/config

        Choose whether you want to run the deployment in typical or expert mode
        1. Typical Mode
        > 2. Expert Mode

        Administrator User
        User Name: admin

        Administrator User Password

        Service Type of the SLC Bridge Base Service
        1. Service Type LoadBalancer
        > 2. Service Type NodePort

        Proxy Settings
        Configure Proxy Settings: n

        Choose "Next" to start the deployment with the displayed parameter values or choose "Back" to revise the parameters.

        Choose action <F12> for Back/Next(n)/<F1> for help: n
        Apply Secret Template (secret-slcbridge.yml)...

        Execute step Master secret
        Apply Secret Template (secret-nginx.yml)...

        Execute step Nginx secret

        Execute step Wait for Kubernetes Object SLCBridgeNamespace

        Execute step Wait for Kubernetes Object SLCBridgeServiceAccount

        Execute step Wait for Kubernetes Object DefaultsMap

        Execute step Execute Service

        Execute step Wait for Kubernetes Object ProductHistory

        Execute step Wait for Kubernetes Object MasterSecret

        Execute step Wait for Kubernetes Object NginxSecret

        Execute step Wait for Kubernetes Object SLCBridgePod

        Execute step SL Container Bridge

        ************************
        * Message *
        ************************

        Deployment "slcbridgebase" has 1 available replicas in namespace "sap-slcbridge"
        Service slcbridgebase-service is listening on any of the kubernetes nodes on "https://node:30713/docs/index.html"

        Choose action Next(n)/<F1> for help: n

        Execute step Get User Feedback

        ******************************
        * Provide feedback to SAP SE *
        ******************************

        Dear user, please help us improve our software by providing your feedback (press <F1> for more information).

        > 1. Fill out questionnaire
        2. Send analytics data only
        3. No feedback
        Choose action <F12> for Back/<F1> for help
        possible values [1,2,3]: 3
        Execute step Service Completed


      3. Check that the bridge is running. If the setup was successful, you will see the following resources:

        $ oc -n sap-slcbridge get all
        NAME READY STATUS RESTARTS AGE
        pod/slcbridgebase-6cd8b94579-4l72q 2/2 Running 0 24m

        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
        service/slcbridgebase-service NodePort 172.30.122.31 <none> 9000:30578/TCP 24m

        NAME READY UP-TO-DATE AVAILABLE AGE
        deployment.apps/slcbridgebase 1/1 1 1 24m

        NAME DESIRED CURRENT READY AGE
        replicaset.apps/slcbridgebase-6cd8b94579 1 1 1 24m


      4. Connect to the bridge:

        1. If you are on AWS and have chosen LoadBalancer, the installer prints the URL for accessing the SLC Bridge, which completes this step.

        2. If you chose NodePort, the service is exposed on the given port on every node of the cluster. Get the exposed node port, pick the IP address of one of the nodes, and point your browser to: https://<IP>:<NodePort>/docs/index.html

          • Get the IP:

               $ oc get node -o wide sdi-worker-1
            NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
            sdi-worker-1 Ready sdi,worker 14d v1.19.0+9c69bdc 10.19.20.160 <none> Red Hat Enterprise Linux CoreOS 46.82.202101131942-0 (Ootpa) 4.18.0-193.40.1.el8_2.x86_64 cri-o://1.19.1-2.rhaos4.6.git2af9ecf.el8


          • Get the Port:

               $ oc get svc -n "${SLCB_NAMESPACE:-sap-slcbridge}" slcbridgebase-service -o jsonpath=$'{.spec.ports[0].nodePort}\n'
            30578



          In this example, point your browser to https://10.19.20.160:30578/docs/index.html to access the Installer

        3. If you have chosen NodePort, but the nodes of your cluster are behind a firewall and not reachable with the above URL, you can use OpenShift port forwarding to access the installer. Run the following command:

              $ oc port-forward svc/slcbridgebase-service 8443:9000


          Then point your browser to https://localhost:8443/docs/index.html to access the installer.


        Note

        Username/password: use the credentials that you provided during the SLC Bridge deployment.

        If everything worked, you should see the SLC Bridge logon page in your browser.

        Keep this page open and continue in the Maintenance Planner window.






 

Conclusion


If you made it this far, congratulations - you've finished the groundwork for setting up SAP Data Intelligence (SDI) on a Red Hat OpenShift cluster. In this blog post, you gained insight into the high-level installation workflow for SDI on OpenShift and learned how to set up your environment, prepare the OCP cluster for SDI, and deploy the SDI Observer. In Part II, we'll cover the actual SDI installation and walk you through the steps needed to complete it successfully. Stay tuned!

If you have feedback or thoughts, feel free to share them below in the comment section. For the latest content on SAP Data Intelligence, Red Hat and OpenShift, do subscribe to the tags and my profile (vivien.wang01) for more exciting news in this space.

 

Vivien Wang is currently an Ecosystem Partner Manager for the Red Hat Partner Engineering Ecosystem.