I finally took the opportunity to install SAP Data Intelligence 3.0 on a GCP trial account (using the free usage credit from GCP).

I want to share some prerequisites that help ensure a smooth installation.

This is the list of preliminary steps I performed before starting the SAP Data Intelligence 3.0 installation.

  • Create an account on GCP - you can use your personal account or any Google account, of course. Set up a billing method, add a credit card, and accept the free credit - since the free usage credit covers all the GCP usage needed here, you won't be charged unless you exceed it.

  • From the GCP web console I created a new project (I named it "DI3Installation", you can choose any name you prefer) and enabled both the Compute Engine and Kubernetes Engine APIs in order to use those services.

  • Then, in the Compute Engine section, I created a Linux machine (the standard machine type is more than sufficient) to act as a jumpbox (the equivalent gcloud commands for this step and for enabling the APIs are sketched right after this list)

    • this step is not strictly required, because you can also use your laptop to act as a jumpbox (though not a Mac, which is what I have), but using a jumpbox hosted inside GCP gives you some advantages, such as:

      • the machine sits in the "internal" GCP network, so you do not have to deal with firewall or network issues

      • the machine already comes with all the GCP client software needed to work with GCP services (e.g. gcloud and the Cloud SDK are preinstalled)
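
The same preparation can also be done entirely from the command line. This is only a minimal sketch: the VM name, image family, and zone below are placeholders I chose for illustration, not values mandated by the installation.

# Enable the APIs needed later (Compute Engine and Kubernetes Engine)
gcloud services enable compute.googleapis.com container.googleapis.com

# Create a small Linux VM to act as the jumpbox
gcloud compute instances create di3-jumpbox --zone europe-west4-a --machine-type n1-standard-1 --image-family debian-10 --image-project debian-cloud

# Open an SSH session on the jumpbox
gcloud compute ssh di3-jumpbox --zone europe-west4-a
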
When the Linux machine is online, the installation procedure can start.

Before issuing any command, you have to initialize the gcloud environment by typing:
gcloud init

Answer all the questions the tool asks, and then you're ready to create a correctly configured K8s cluster.

This cannot be done using the GCP web UI, because the console does not let you correctly configure the write grants on the storage. You have to perform a "manual" cluster creation using the gcloud command-line tool. This is the command I issued:
gcloud beta container --project "<insert here your GCP project ID>" clusters create "<insert here the desired cluster name>" \
  --zone "<insert here the zone where you want the cluster to be deployed>" \
  --no-enable-basic-auth \
  --cluster-version "1.15.11-gke.13" \
  --machine-type "n1-standard-4" \
  --image-type "COS" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --metadata disable-legacy-endpoints=true \
  --scopes "https://www.googleapis.com/auth/userinfo.email","https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/taskqueue","https://www.googleapis.com/auth/bigquery","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/bigtable.data","https://www.googleapis.com/auth/pubsub","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/trace.append","https://www.googleapis.com/auth/source.full_control","https://www.googleapis.com/auth/devstorage.read_write" \
  --num-nodes "3" \
  --enable-stackdriver-kubernetes \
  --no-enable-master-authorized-networks \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing \
  --enable-autoupgrade \
  --enable-autorepair \
  --max-surge-upgrade 1 \
  --max-unavailable-upgrade 0

You should adapt this command by filling in these parameters:

  • --project

  • the cluster name

  • --zone (I used europe-west4-a); you can find the list of available zones in the GCP regions and zones documentation

  • --machine-type (the available machine types are listed in the GCP documentation; I used n1-standard-4, which is more than enough for an SAP Data Intelligence 3.0 installation test)

  • --num-nodes (the above command creates a 3-node K8s cluster; you can add more nodes if you wish)


The tricky part that sorted out a correct K8s installation was the --scopes parameter. You have to be sure that it includes this entry:
"https://www.googleapis.com/auth/devstorage.read_write"

Without it you will not be able to write to the internal Google Container Registry, and therefore you won't be able to complete the installation phase.
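
If you want to double-check that the scope really ended up on the cluster nodes, you can inspect the cluster after creation; a quick sanity check (cluster name and zone are placeholders):

gcloud container clusters describe "<cluster-name>" --zone "<zone>" | grep devstorage

The output should contain the devstorage.read_write scope listed above.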

Once you have successfully created the K8s cluster, the next step is to configure kubectl (the K8s command-line client) to point to it. This can be done by issuing this command:
gcloud container clusters get-credentials <cluster-name>

where cluster-name is the same name you defined when the cluster was created.
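
To confirm that kubectl is really talking to the new cluster, a quick check (assuming kubectl is already installed, e.g. via the Cloud SDK or the package manager):

kubectl get nodes

This should list the three worker nodes created above in a Ready state.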

Now you can follow the installation as stated in this blog post.

During the installation procedure I found two tricky points:

  • How to interact with the GCP internal Docker registry. You need a container registry where the installer can push the images required to perform the installation. GCP gives you the ability to use its internal secure registry, but you have to enable it! Follow this useful guide: https://cloud.google.com/container-registry/docs/quickstart (a short command sketch follows after this list).

  • How to create a correct ingress to expose the SAP Data Intelligence 3.0 UI outside the GCP network: in this case I followed the SAP DI 3.0 installation guide (https://help.sap.com/viewer/a8d90a56d61a49718ebcb5f65014bbe7/3.0.latest/en-US/8db22b4111fa466b9b7dea...). In addition to the steps in the installation guide, you also have to define an FQDN (fully qualified domain name) to associate with the Data Intelligence installation in order to:

    • create the SSL certificate (a minimal certificate sketch follows after this list)

    • associate this FQDN with the IP address where the ingress was deployed.
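
For the registry point, the quickstart essentially boils down to enabling the Container Registry API and letting gcloud configure Docker authentication for gcr.io. A minimal sketch of those two steps; the image name in the push example is purely illustrative, as the SAP installer pushes its own images:

# Enable the Container Registry API and set up Docker credentials for gcr.io
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker

# After this, images can be pushed to gcr.io/<project-id>/..., for example:
docker tag my-image gcr.io/<project-id>/my-image:latest
docker push gcr.io/<project-id>/my-image:latest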
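
For the ingress point, the SSL certificate part can be handled with a self-signed certificate issued for the FQDN you picked, stored as a Kubernetes TLS secret. A rough sketch only; the namespace and secret name below are placeholders to be adapted to whatever the installation guide prescribes:

# Self-signed certificate for the chosen FQDN (valid for one year)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=<your-fqdn>"

# Store it as a TLS secret in the namespace used by the SAP Data Intelligence installation
kubectl -n <sdi-namespace> create secret tls <tls-secret-name> --key tls.key --cert tls.crt

The FQDN itself is then mapped with a DNS record to the external IP address exposed by the ingress (kubectl -n <sdi-namespace> get ingress shows it once the ingress has been deployed).
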
The whole installation procedure took me no more than 4 hours, and I used only a small part of the free usage credit. In the end I used 4 VMs (3 VMs for the K8s cluster plus the jumpbox, with the standard configuration) for 8 hours.