Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
In this post, we will create a simple Node.js application using Express.js that exposes an HTTP endpoint. We will secure that endpoint with the XSUAA service of SAP Cloud Platform, and finally deploy the application to a Kubernetes cluster.

This sample application shows how applications running on Kubernetes can consume platform services from SAP Cloud Platform via the Service Manager broker.

I have set up the Kubernetes cluster using Minikube for this post, but I have also installed the same application on a Gardener cluster without any change.

I have integrated my Kubernetes cluster with the SAP Cloud Platform Service Manager using the steps given in Register Kubernetes Clusters at the Service Manager. You can consider that post a prerequisite for this one.


Bind XSUAA Service


Now we need to create a service binding to the XSUAA service. As a prerequisite, we require a security descriptor that declares the authorisation scopes we intend to use in our application. In our case, we simply declare a Display scope that we will use later on to authorise our users. In addition, we declare a so-called role template called Viewer that references our Display scope.

Let's first create the xs-security.json considering the above.
{
  "xsappname": "xsuaa-k8-scp-example",
  "scopes": [
    {
      "name": "$XSAPPNAME.Display",
      "description": "display"
    }
  ],
  "role-templates": [
    {
      "name": "Viewer",
      "description": "Read Access",
      "scope-references": [
        "$XSAPPNAME.Display"
      ]
    }
  ]
}

Now, with the above content, let's create a service instance of XSUAA using the Service Catalog CLI (svcat) in Kubernetes. You can also create and bind the service instance using a Kubernetes configuration.
$ svcat provision xsuaa-example --class xsuaa --plan broker --params-json '{"xsappname":"xsuaa-k8-scp-example","scopes":[{"name":"$XSAPPNAME.Display","description":"display"}],"role-templates":[{"name":"Viewer","description":"Read Access","scope-references":["$XSAPPNAME.Display"]}]}'

Name: xsuaa-example
Namespace: default
Status:
Class: xsuaa
Plan: z48zz57zz45zgt9z2fzjz4azz47zz4-fd5fd60de69db525c44c9608067cb61a

Parameters:
role-templates:
- description: Read Access
name: Viewer
scope-references:
- $XSAPPNAME.Display
scopes:
- description: display
name: $XSAPPNAME.Display
xsappname: xsuaa-k8-scp-example
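As an alternative to the svcat command, the same instance and binding can be declared as Service Catalog resources and applied with kubectl. The following is a sketch only: the class and plan names are taken from the svcat command above, and the field names follow the Service Catalog v1beta1 API.

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: xsuaa-example
  namespace: default
spec:
  clusterServiceClassExternalName: xsuaa
  clusterServicePlanExternalName: broker
  parameters:
    xsappname: xsuaa-k8-scp-example
    scopes:
      - name: $XSAPPNAME.Display
        description: display
    role-templates:
      - name: Viewer
        description: Read Access
        scope-references:
          - $XSAPPNAME.Display
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: xsuaa-example
  namespace: default
spec:
  instanceRef:
    name: xsuaa-example
  secretName: xsuaa-example
```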

Let's check whether the service instance was created correctly: the status must be READY. Binding the service instance is possible only when it is in the READY state.
$ svcat get instance

NAME NAMESPACE CLASS PLAN STATUS
+----------------+-----------+-------+-----------------------------------------------------------------+--------+
xsuaa-example default xsuaa z48zz57zz45zgt9z2fzjz4azz47zz4-fd5fd60de69db525c44c9608067cb61a Ready

Once the service instance is in the READY state, we can go ahead and bind it. Binding generates two things, a binding and a secret, after which the service instance is ready for consumption by any application.
$ svcat bind xsuaa-example

Name: xsuaa-example
Namespace: default
Status:
Secret: xsuaa-example
Instance: xsuaa-example

Parameters:
No parameters defined

$ svcat get bindings

NAME NAMESPACE INSTANCE STATUS
+----------------+-----------+----------------+--------+
xsuaa-example default xsuaa-example Ready

$ kubectl get secrets
NAME TYPE DATA AGE
default-token-nvqvg kubernetes.io/service-account-token 3 4d20h
xsuaa-example Opaque 13 106s

With this, the setup in Kubernetes is complete, and we are ready to develop an application that uses the XSUAA instance we created. But before that, we need to do the security configuration in the SAP Cloud Platform cockpit.

Assign Users to Scope in SAP Cloud Cockpit


To gain access to the secured endpoint of our application, the Display OAuth scope must be assigned to your user. This is done in the SCP cockpit.

Go to the subaccount that we created for enabling Service Manager API access through a SaaS subscription (using the steps provided in the Register Kubernetes Clusters at the Service Manager post) and navigate to Security → Role Collections.

Create a new role collection, to which you can give an arbitrary name. In our case, we call the role collection Viewer.

Afterwards, select the role collection Viewer and select Add Role. From the menu, select your application and the corresponding role template and role.



Now the user has to be assigned to the newly created Viewer role collection in order to receive the Display scope. To do this, open the trust configuration from the Security menu and select SAP ID Service from the list.



In the dialog that opens, enter your user ID (e-mail address) into the user field, click Show Assignments, and then Add Assignments.

Select the Viewer role collection from the menu to assign it to your user.



With this, the setup in SAP Cloud Platform is done, and the user has the Display authorisation scope.

Sample Application


Application Coding


Create a folder k8 and, under it, two folders: xsuaa-k8-scp-example and xsuaa-k8-scp-example-config.

Let's first finish the required coding in xsuaa-k8-scp-example; afterwards we can move on to the configuration in xsuaa-k8-scp-example-config.
├── Dockerfile
├── package.json
└── server.js

server.js
var express = require('express');
var xsenv = require('@sap/xsenv');
var passport = require('passport');
var JWTStrategy = require('@sap/xssec').JWTStrategy;

var app = express();

passport.use(new JWTStrategy(xsenv.getServices({ uaa: { tag: 'xsuaa' } }).uaa));

app.use(passport.initialize());
app.use(passport.authenticate('JWT', { session: false }));

app.get('/', function (req, res, next) {
    console.log("Authenticated Request Reached...");
    var isAuthorized = req.authInfo.checkScope('xsuaa-k8-scp-example!b10809.Display');
    if (isAuthorized) {
        console.log("Authorization success. User: " + req.user.id + ", Path: '/'.");
        res.send('Application user: ' + req.user.id);
    } else {
        console.log("Authorization failed. User: " + req.user.id + ", Path: '/'.");
        res.status(403).send('Forbidden');
    }
});

var port = process.env.PORT || 8085;
app.listen(port, function () {
    console.log('myapp listening on port ' + port);
});
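Note that the scope check in server.js hardcodes the `!b10809` suffix that XSUAA appends to the application name. Since `xsappname` is delivered with the binding credentials, a more robust variant (a sketch, not part of the original code) would derive the scope name at runtime:

```javascript
// Sketch: derive the fully qualified scope name from the bound
// xsappname instead of hardcoding the '!b10809' suffix.
function displayScope(xsappname) {
    // xsappname from the XSUAA binding already carries the '!b<id>' suffix,
    // e.g. 'xsuaa-k8-scp-example!b10809'
    return xsappname + '.Display';
}

// In server.js this would be used as:
//   var uaa = xsenv.getServices({ uaa: { tag: 'xsuaa' } }).uaa;
//   req.authInfo.checkScope(displayScope(uaa.xsappname));
module.exports = { displayScope };
```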

package.json
{
  "name": "xsuaa-k8-scp-example",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "@sap/xsenv": "1.2.7",
    "@sap/xssec": "^2.1.16",
    "express": "^4.17.1",
    "passport": "^0.4.0"
  },
  "author": "",
  "license": "ISC"
}

Dockerfile
FROM node:10.15.3-jessie-slim

EXPOSE 8085

COPY package.json .
COPY server.js .
COPY node_modules ./node_modules

CMD DEBUG=* node server.js

Create Docker Image


Connect the local Docker environment to Minikube's Docker daemon
eval $(minikube docker-env)

Install node modules defined in package.json
npm install

Create a docker image for the application.
$ docker build -t xsuaa-k8-scp-example:0.9 .
Sending build context to Docker daemon 25.87MB
Step 1/6 : FROM node:10.15.3-jessie-slim
---> b2566e062f4a
Step 2/6 : EXPOSE 8085
---> Using cache
---> e40927591070
Step 3/6 : COPY package.json .
---> Using cache
---> 2157829d59cc
Step 4/6 : COPY server.js .
---> Using cache
---> 73f35f9aae4c
Step 5/6 : COPY node_modules ./node_modules
---> 43a6d7f86cac
Step 6/6 : CMD DEBUG=* node server.js
---> Running in a5ebb6456a3f
Removing intermediate container a5ebb6456a3f
---> 103580985707
Successfully built 103580985707
Successfully tagged xsuaa-k8-scp-example:0.9

List the Docker images to check whether the image was added.
$ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
xsuaa-k8-scp-example 0.9 72b63f9eb3f0 30 seconds ago 211MB
<none> <none> 4e85af197088 25 hours ago 211MB
k8s.gcr.io/kube-proxy v1.17.3 ae853e93800d 3 weeks ago 116MB
k8s.gcr.io/kube-apiserver v1.17.3 90d27391b780 3 weeks ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.3 b0f1517c1f4b 3 weeks ago 161MB
k8s.gcr.io/kube-scheduler v1.17.3 d109c0821a2b 3 weeks ago 94.4MB
kubernetesui/dashboard v2.0.0-beta8 eb51a3597525 2 months ago 90.8MB
quay.io/kubernetes-service-catalog/service-catalog v0.3.0-beta.2 8da829e9f261 3 months ago 42.7MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 3 months ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 4 months ago 288MB
kubernetesui/metrics-scraper v1.0.2 3b08661dc379 4 months ago 40.1MB
quay.io/service-manager/sb-proxy-k8s v0.3.2 1dffd12df62a 4 months ago 48.7MB
node 10.15.3-jessie-slim b2566e062f4a 10 months ago 187MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 2 years ago 80.8MB

Now our application's Docker image is present in the Minikube registry. Next, let's create the Kubernetes deployment configuration for it.

Create Kubernetes Deployment Configuration


Go to xsuaa-k8-scp-example-config and create a file named xsuaa-k8-scp-example.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: xsuaa-k8-scp-example
  name: xsuaa-k8-scp-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xsuaa-k8-scp-example
  template:
    metadata:
      labels:
        app: xsuaa-k8-scp-example
    spec:
      containers:
      - image: xsuaa-k8-scp-example:0.9
        name: xsuaa-k8-scp-example
        env:
        - name: clientid
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: clientid } }
        - name: identityzone
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: identityzone } }
        - name: sburl
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: sburl } }
        - name: trustedclientidsuffix
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: trustedclientidsuffix } }
        - name: apiurl
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: apiurl } }
        - name: clientsecret
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: clientsecret } }
        - name: identityzoneid
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: identityzoneid } }
        - name: tenantid
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: tenantid } }
        - name: tenantmode
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: tenantmode } }
        - name: uaadomain
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: uaadomain } }
        - name: url
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: url } }
        - name: verificationkey
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: verificationkey } }
        - name: xsappname
          valueFrom: { secretKeyRef: { name: xsuaa-example, key: xsappname } }

As you can see, I am creating multiple environment variables for the Kubernetes pod, so that our application running in that pod can consume them. Now, how do we know which parameters the service instance provides, so that we can configure them here? For that, we need to describe the secret that was created when we bound the service instance.
$ kubectl describe secrets/xsuaa-example
Name: xsuaa-example
Namespace: default
Labels: <none>
Annotations: <none>

Type: Opaque

Data
====
identityzoneid: 36 bytes
tenantid: 36 bytes
trustedclientidsuffix: 34 bytes
url: 50 bytes
clientid: 36 bytes
clientsecret: 28 bytes
identityzone: 5 bytes
sburl: 59 bytes
tenantmode: 9 bytes
uaadomain: 36 bytes
verificationkey: 442 bytes
xsappname: 33 bytes
apiurl: 48 bytes

If you compare the Kubernetes configuration we created with the secret retrieved above, you can see that I have mapped exactly the parameters provided by the service instance. For security reasons, we cannot see the parameter values here. To see them, we need to deploy our application to a Kubernetes pod using the above configuration.

Retrieve Service Instance parameter value from Kubernetes pod


Since we are using a Kubernetes configuration, it is easy to create a pod running our application. At this stage, however, the application will not start and the pod will end up in an error state, because the SAP libraries cannot find their service configuration yet. To eliminate this error, we can comment out some of the code in server.js so it looks as follows.

var express = require('express');

var app = express();

app.get('/', function (req, res, next) {
    console.log("Authenticated Request Reached...");
    res.send('OK'); // respond so the request does not hang
});

var port = process.env.PORT || 8085;
app.listen(port, function () {
    console.log('myapp listening on port ' + port);
});

With the above code, your pod will be in the Running state when you deploy the application. Use the following command from the k8 folder to deploy the pod to the Kubernetes cluster.
kubectl apply -f xsuaa-k8-scp-example-config/

Once the pod is deployed, we can list the pods using the following command.
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xsuaa-k8-scp-example-6b7d458bd6-cdgv9 1/1 Running 0 25h 172.17.0.7 minikube <none> <none>

Now it's time to get inside the pod.
$ kubectl exec -it xsuaa-k8-scp-example-6b7d458bd6-cdgv9 bash
root@xsuaa-k8-scp-example-6b7d458bd6-cdgv9:/#

Once we are inside the pod, we can use the following command to retrieve all the parameters available to applications running in the pod.
# env
NODE_VERSION=10.15.3
HOSTNAME=xsuaa-k8-scp-example-6b7d458bd6-cdgv9
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
TERM=xterm
KUBERNETES_SERVICE_PORT=443
url=https://xxxx.authentication.sap.hana.ondemand.com
KUBERNETES_SERVICE_HOST=10.96.0.1
xsappname=<app name>
identityzone=xxx
verificationkey=-----BEGIN PUBLIC KEY-----<cert key>-----END PUBLIC KEY-----
clientid=sb-xsuaa-k8-scp-examplexxxxx
clientsecret=Ct/+xxxxxxxxxxxxxxxxxxxxxx/w=
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
trustedclientidsuffix=<trustedclientidsuffix>
PWD=/
identityzoneid=55d8129b-1e6b-4231-9c80-000ae080f9dd
SHLVL=1
HOME=/root
YARN_VERSION=1.13.0
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
sburl=https://internal-xsuaa.authentication.sap.hana.ondemand.com
tenantid=55d8129b-1e6b-4231-9c80-000ae080f9dd
tenantmode=dedicated
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
apiurl=https://api.authentication.sap.hana.ondemand.com
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
uaadomain=authentication.sap.hana.ondemand.com
_=/usr/bin/env

Once we have retrieved all the parameters, we can revert server.js to the original version.
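Since the application depends on all of these secret-backed variables, a small startup check can fail fast if any of them is missing. This is a sketch, not part of the original code; the key list is taken from the secret described earlier:

```javascript
// Sketch: verify that every XSUAA parameter from the binding secret
// is present in the environment before the app starts.
var REQUIRED = [
    'clientid', 'clientsecret', 'xsappname', 'url', 'uaadomain',
    'identityzone', 'identityzoneid', 'tenantid', 'tenantmode',
    'sburl', 'apiurl', 'trustedclientidsuffix', 'verificationkey'
];

function missingXsuaaParams(env) {
    return REQUIRED.filter(function (key) {
        return env[key] === undefined || env[key] === '';
    });
}

// Usage at startup:
//   var missing = missingXsuaaParams(process.env);
//   if (missing.length) throw new Error('Missing XSUAA params: ' + missing.join(', '));
module.exports = { missingXsuaaParams };
```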

Setting up VCAP_SERVICES parameter for SAP node APIs


SAP's Node.js libraries are designed to read the VCAP_SERVICES variable of Cloud Foundry to consume service instance configurations from the environment. So, to use those libraries in our application running in Kubernetes, we need to wrap all the parameters inside VCAP_SERVICES. While we could find a way to derive the parameter values from the secret automatically, for this POC we have hardcoded VCAP_SERVICES in our Kubernetes deployment configuration. You could also create a Kubernetes secret to store VCAP_SERVICES. After hardcoding it, the deployment configuration looks like below.

Note: I had to add a few more properties to VCAP_SERVICES, as they are mandatory for the @sap/xssec library: name, label, tags, and plan. cfservices.js expects them to be present so it can filter on them while searching.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: xsuaa-k8-scp-example
  name: xsuaa-k8-scp-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xsuaa-k8-scp-example
  template:
    metadata:
      labels:
        app: xsuaa-k8-scp-example
    spec:
      containers:
      - image: xsuaa-k8-scp-example:0.9
        name: xsuaa-k8-scp-example
        env:
        - name: VCAP_SERVICES
          value: "{\"xsuaa\":[{\"credentials\":{\"apiurl\":\"https://api.authentication.sap.hana.ondemand.com\",\"clientid\":\"sb-xsuaa-k8-scp-examplexxxx\",\"clientsecret\":\"xxxxxxxxxxxxxxx\",\"identityzone\":\"xxx\",\"identityzoneid\":\"55d8129b-1e6b-4231-9c80-000ae080f9dd\",\"sburl\":\"https://internal-xsuaa.authentication.sap.hana.ondemand.com\",\"tenantid\":\"55d8129b-1e6b-4231-9c80-000ae080f9dd\",\"tenantmode\":\"dedicated\",\"uaadomain\":\"authentication.sap.hana.ondemand.com\",\"url\":\"https://xxxx.authentication.sap.hana.ondemand.com\",\"verificationkey\":\"-----BEGIN PUBLIC KEY-----<cert key>-----END PUBLIC KEY-----\",\"xsappname\":\"xsuaa-k8-scp-example!b10809\"},\"tags\":[\"xsuaa\"],\"label\":\"xsuaa\",\"plan\":\"application\",\"name\":\"xsuaa-example1\"}]}"
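Instead of pasting the escaped JSON string by hand, the same VCAP_SERVICES value could be assembled at process start from the individual secret-backed variables, before any SAP library is loaded. This is a sketch of that idea, not part of the original deployment; the extra name, label, tags, and plan fields are the ones noted above as mandatory for @sap/xssec:

```javascript
// Sketch: build VCAP_SERVICES from the individual secret-backed env
// vars so that SAP's Node.js libraries can find the XSUAA credentials.
function buildVcapServices(env) {
    var credentialKeys = [
        'apiurl', 'clientid', 'clientsecret', 'identityzone', 'identityzoneid',
        'sburl', 'tenantid', 'tenantmode', 'uaadomain', 'url',
        'verificationkey', 'xsappname'
    ];
    var credentials = {};
    credentialKeys.forEach(function (key) {
        credentials[key] = env[key];
    });
    return JSON.stringify({
        xsuaa: [{
            credentials: credentials,
            // These four fields are required by @sap/xssec's service lookup.
            name: 'xsuaa-example',
            label: 'xsuaa',
            tags: ['xsuaa'],
            plan: 'application'
        }]
    });
}

// Would run before any SAP library is required:
//   process.env.VCAP_SERVICES = buildVcapServices(process.env);
module.exports = { buildVcapServices };
```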

Application Deployment


Once we are ready with VCAP_SERVICES, we can deploy our application to Kubernetes and test whether the endpoint is secured by XSUAA of SAP Cloud Platform. For deployment, we can use the same command as before.
kubectl apply -f xsuaa-k8-scp-example-config/

Once you have deployed the application to a Kubernetes pod, you need to port-forward the pod's port to a local port in order to access the endpoint.
kubectl port-forward deployment.apps/xsuaa-k8-scp-example 8085:8085

With this we are now ready to access the endpoint from outside of the Kubernetes pod.

Application Testing


Getting the JWT Token


In order to access the application, we first need to retrieve a JWT token on behalf of the user to whom we assigned the scope in the earlier steps. Use the following curl command to get the JWT token.
curl --location --request POST 'https://xxx.authentication.sap.hana.ondemand.com/oauth/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Accept: application/json' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'password=<password>' \
--data-urlencode 'username=<user e-mail address>' \
--data-urlencode 'client_id=<xsuaa client id>' \
--data-urlencode 'client_secret=<xsuaa client secret>' \
--data-urlencode 'response_type=token'

Response


{
"access_token": "<token>",
"token_type": "bearer",
"id_token": "<token>",
"refresh_token": "<token>",
"expires_in": 43199,
"scope": "openid xsuaa-k8-scp-example!b10809.Display",
"jti": "65501d3df7de4e7387267f44e106a8ea"
}
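Before calling the endpoint, you can check that the Display scope really made it into the token. A JWT's payload is base64url-encoded JSON, so a quick inspection is possible without any library. This is a debugging sketch only; it does not verify the token's signature:

```javascript
// Sketch: decode a JWT payload to inspect its scopes.
// NOTE: this does NOT verify the signature - for debugging only.
function decodeJwtPayload(token) {
    var payload = token.split('.')[1];
    // Node's base64 decoder also accepts the base64url alphabet.
    return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}

function hasScope(token, scope) {
    var claims = decodeJwtPayload(token);
    return Array.isArray(claims.scope) && claims.scope.indexOf(scope) !== -1;
}

// Example:
//   hasScope(accessToken, 'xsuaa-k8-scp-example!b10809.Display')
module.exports = { decodeJwtPayload, hasScope };
```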

End Point Access (With Valid JWT)


curl -i --location --request GET 'http://localhost:8085/' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer <token>'

Response


HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 43
ETag: W/"2b-M8JdCOGSPyy3PSSpF4lP7MEeqAI"
Date: Wed, 04 Mar 2020 08:46:45 GMT
Connection: keep-alive

Application user: xxx@sap.com

End Point Access (With Invalid JWT)


curl -i --location --request GET 'http://localhost:8085/' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer <token>'

Response


HTTP/1.1 403 Forbidden
X-Powered-By: Express
Date: Wed, 04 Mar 2020 08:47:43 GMT
Connection: keep-alive
Content-Length: 9

Forbidden

If we hardcode VCAP_SERVICES, there is no advantage in using a Service Manager broker in Kubernetes; instead, we could simply create Kubernetes secrets from the service instance credentials of SAP CP (in this case, VCAP_SERVICES). But if you want to write your own implementation, you can use the secret attributes directly.
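One way to keep the plain-text VCAP_SERVICES value out of the Deployment is to store it in its own Kubernetes secret. The following is a sketch; the secret name is arbitrary and the shortened JSON is a placeholder for the full value shown earlier:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: xsuaa-vcap-services
type: Opaque
stringData:
  VCAP_SERVICES: '{"xsuaa":[{"credentials":{...},"tags":["xsuaa"],"label":"xsuaa","plan":"application","name":"xsuaa-example1"}]}'
```

The Deployment would then reference it with a `secretKeyRef` (name `xsuaa-vcap-services`, key `VCAP_SERVICES`) instead of an inline `value`, just like the individual parameters were referenced earlier.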

This architecture can be applied with other cloud providers like AWS and GCP as well, letting you build an application from the best platform services available across providers. It is mandatory for the cloud providers to implement/comply with the Open Service Broker API specification. While this architecture gives us the advantage of using platform services across cloud providers, it also adds latency to the application when accessing services from different providers.
