When you deploy SAP Data Hub to Azure, you need to consider how to access it. Of course, you can always enter a node URL and port in the browser, but that's not the recommended approach. After a system restart (or even during normal operation) the node assignment can change, which would mean all your links stop working and users are no longer able to access the system.
A good practice is to use an Ingress Controller to expose the service. It's a reverse proxy that acts as a bridge between you and SAP Data Hub. Using a load balancer, it routes the traffic to the correct node. There are two types of Load Balancer available:
Public – for services that are exposed over the Internet
Private – for services that are exposed to the internal network
The Load Balancer is deployed automatically during the installation of the Ingress, so you don't need to create one separately.
By default, the Ingress Controller is provisioned together with a Public Load Balancer. If you'd like to access SAP Data Hub over the internet, install the Ingress using the following command:
helm install stable/nginx-ingress --namespace <namespace>
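Before going further, it's worth verifying that the controller pods actually came up. A quick check, assuming the default app=nginx-ingress label that the stable/nginx-ingress chart applies to its pods:

```shell
# List the Ingress Controller pods created by the chart;
# both the controller and the default backend should be Running
kubectl -n <namespace> get pods -l app=nginx-ingress
```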
To expose SAP Data Hub only to the internal network, first create a configuration file ingress_private.yaml that contains:
controller:
  service:
    loadBalancerIP:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
It tells the Kubernetes cluster to use an internal load balancer instead of a public one. Now you can use this config file to install the Ingress:
helm install stable/nginx-ingress --namespace <namespace> -f ingress_private.yaml
You can read the Load Balancer IP using:
kubectl -n <namespace> get service -l app=nginx-ingress
If you see <pending> instead of the IP, it means the Load Balancer is still being provisioned and you need to check again after a couple of minutes. If you still can't see the IP after a long time, check what's going on in the Azure portal. You may find that the Load Balancer deployment has failed.
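You can also inspect the service events without leaving the terminal. A sketch, using the same app=nginx-ingress label that the chart assigns:

```shell
# The Events section at the bottom of the output lists load-balancer
# provisioning errors (e.g. missing permissions or an exhausted IP quota)
kubectl -n <namespace> describe service -l app=nginx-ingress
```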
If you deployed the Ingress using internal Load Balancer you’ll see that the assigned IP belongs to the virtual network:
By default, the IP is not associated with any hostname. The private IP should be configured on a local DNS server. For internet-facing scenarios, you can configure the hostname directly on the Public IP resource in the Azure portal. The URL will then have the following format:

<DNS name label>.<region>.cloudapp.azure.com
If you have a custom domain, you need to add a DNS entry that points to the Public IP.
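If you prefer the CLI over the portal, the DNS name label can also be assigned with the Azure CLI. A sketch – the resource group and public IP names are placeholders you need to look up in the cluster's node resource group (usually named MC_...):

```shell
# Find the Public IP resource created for the Load Balancer
az network public-ip list --resource-group <node_resource_group> -o table

# Assign a DNS name label; the resulting FQDN is
# <label>.<region>.cloudapp.azure.com
az network public-ip update --resource-group <node_resource_group> \
  --name <public_ip_name> --dns-name <label>
```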
But the website doesn't work yet – we still need to configure the Ingress controller. After the installation it points to the default backend, so we have to update the routing information. You can use the configuration included in the Installation Guide. Replace the <DNS_NAME> fields with the previously configured domain name:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vsystem
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "500m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
spec:
  tls:
  - hosts:
    - <DNS_NAME>
    secretName: vsystem-tls-certs
  rules:
  - host: <DNS_NAME>
    http:
      paths:
      - path: /
        backend:
          serviceName: vsystem
          servicePort: 8797
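The manifest references a TLS secret named vsystem-tls-certs, which must exist in the namespace before the Ingress can serve HTTPS. If you don't have a certificate yet, you can generate a self-signed one for testing – a sketch, with datahub.example.com standing in for your real <DNS_NAME>:

```shell
# Generate a self-signed certificate valid for one year (testing only);
# browsers will still show a warning until a trusted certificate is used
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt -subj "/CN=datahub.example.com"
```

Then create the secret from the generated files: kubectl -n <namespace> create secret tls vsystem-tls-certs --cert=tls.crt --key=tls.key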
Use the following command to deploy the routing. The command is exactly the same for internal and public deployments:
kubectl apply -f vsystem-ingress.yaml -n <namespace>
Now we can access the website! If you'd like to enable TLS (and hide that ugly certificate error message), check out my post where I describe how to integrate the Kubernetes cluster with Let's Encrypt!