CarlosRoggan
Product and Topic Expert
In this very simple tutorial we’re going to learn how to create a REST service based on Node.js and run it in Kyma.
We’re also going to secure an endpoint with OAuth protection provided by Kyma.
Warning:
There won’t be a happy ending!
We'll invoke our URL and the result will be an error.
Super!
Sad... but successful

Overview
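In a nutshell: we create a small Node.js app with two endpoints, containerize it with Docker, push the image to Docker Hub, deploy it to Kyma with a Deployment and a Service, expose it with an API Rule and finally protect one of the endpoints with OAuth.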



Prerequisites



  • Almost no knowledge about Kyma is required, only some basic knowledge as provided by this tutorial:
    Kyma for Dummies 1

  • Basic knowledge about Node.js
    Note: if you don't want to install Node.js locally, you can skip that.
    Really, because we'll create a Docker container which contains a Node.js installation anyway.


Application


As usual, in my tutorials the applications are simple and silly.
This allows us to focus on the other topics.
In this case, we create a basic server app which exposes 2 REST endpoints.
One is meant to be freely accessible.
The other one should be secured.
Apart from that, these endpoints are totally useless.

Create node app

We create a working directory called
- C:\tmp_kyma
Inside, we create 2 files:
- package.json
- server.js

So far, our working directory contains just these two files.


The content can be copied from the appendix section.
Afterwards, we open a command prompt, jump into our working directory and execute the following command:

npm install

This will download and install all required dependencies
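If everything went fine, a node_modules folder (and a package-lock.json file) should now appear next to our two files.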


Let’s have a quick look at the app:
We create a server, based on express

We define 2 endpoints, listening to GET requests
The first one is called /free and does nothing but return a silly response.
The second one is called /prot and also does nothing but return a silly response; however, it should be protected, because it contains some secret info.
But as we can see, no effort at all is made to protect it.

The server is started and listens on port 1111
This is a silly port number, but we’ve chosen it to make it easier to distinguish the port numbers defined and mapped at the different levels (see below).
I strongly believe that silly names and silly numbers do help in understanding the code and the concept and the reason of our being.
In our example, 1111 stands for the initial port number, defined in our application code.
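
For reference, this is the complete server.js (also listed in the appendix):

const express = require('express')
const app = express()

// freely accessible endpoint
app.get('/free', (req, res) => {
  res.send('Free silly')
})

// endpoint that is MEANT to be protected - but nothing is done for it in the code
app.get('/prot', (req, res) => {
  res.send(`Successfully passed security control. Running on '${req.headers.host}${req.path}'. Auth: '${req.headers.authorization}'`)
})

// the initial port number: 1111
app.listen(1111, () => {
  console.log('Node app running on port 1111')
})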

Add Security

This chapter is a mistake.
We don’t add security in the application code

Run app locally

To run our app, we go to our command prompt and from the working directory we execute the command node server.js


Now that the server is running, we open a browser and call our 2 endpoints:

http://localhost:1111/free
and
http://localhost:1111/prot

Both endpoints behave as expected: both respond with silly text, the only difference being that one is even more useless than the other.
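The responses look roughly like this (given the code from the appendix):

http://localhost:1111/free  ->  Free silly
http://localhost:1111/prot  ->  Successfully passed security control. Running on 'localhost:1111/prot'. Auth: 'undefined'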
We’ve seen that the /prot endpoint is not really protected

Docker


In the previous tutorial we’ve already learned the most basic steps for working with docker.
That's good because there won’t be advanced steps here

Create image

We create a file called Dockerfile in the working directory.
Yes, it is silly to mix all files in one folder, but... it makes life easier.
Our project folder now contains package.json, server.js and the Dockerfile.


The content of the Dockerfile can be found in the appendix – and here:
FROM node:12

COPY . .
RUN npm install
EXPOSE 1111
CMD ["node", "server.js"]

You’ll easily find better Dockerfiles to be used with Node.js applications - I just want to keep it short.
What it says:
We copy everything (the first dot) into the container (the second dot).
We also execute the 2 commands which we just executed locally: npm install runs when the image is built, and node server.js (the CMD) runs when the container is started.

Interesting: the EXPOSE statement.
Here we have to make sure that we state exactly the same port number which we’re using in the application code.
It must be the real, existing port in use.
If we write a wrong port number, nothing happens at build time, no error message, nothing.
But once we start the container, the node server will still run on port 1111 internally in the container...
...while everything that relies on the exposed number (the port mapping below, the deployment descriptor in Kyma) will point at a port where nothing is listening, so we cannot reach the server from outside.
OK?
So I recommend to check again:
Exposed 1111 correctly?

Once we’ve double-checked with colleagues that the number is correct, we can create the docker image
We go to command prompt, jump into our working dir and execute this command:

docker build -t yourDockerID/safeapp .

Note: Make sure to replace the dockerID with yours (remember, you need a Docker ID in order to upload to the public Docker repository).
And make sure not to forget the dot at the end, which tells docker to use the current directory as build context.

Run app in container (optional)

Before uploading our image, we want to test our container
To create and start our container, we execute the following command

docker run -p 2222:1111 yourDockerID/safeapp

(make sure to replace…)
Let’s look at this command more closely:


With the -p param we specify a port mapping (without it the container port wouldn’t be published to the host; with -P, Docker would pick a random host port for each exposed port).
The syntax:
<port>:<port>

Very helpful...
Now which port is what port?
Our command makes it clear – and this is why we chose silly port numbers
<desired_external_for_user_port>:<real_existing_inside_container_port>

In our example: docker run -p 2222:1111 ...

We could of course always use the same number – but then we wouldn’t learn what we’re doing.
OK: we need to map an outside port to the inner port.
The second number (the port inside the container) has to match exactly the number in the code and in the Dockerfile.
The first number is our wish and can be any arbitrary number.
Remember: we mention FIRST the port that we want to type in the browser URL.
To summarize the relationship between the numbers:
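Browser URL: http://localhost:2222
-> docker run -p 2222:1111 (maps host port 2222 to container port 1111)
-> container, which exposes 1111
-> node app listening on port 1111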


 

 

Note:
The above command doesn’t use the -d param, which is commonly used: it runs the container in "detached" mode, meaning the console doesn’t wait for the process to finish.
However, without detaching, we can see the log output directly in our console.
So we don’t need to fetch the logs with the docker logs command.

Now we can test our service in the browser
The URL is

http://localhost:2222/prot

We can see:
Docker has exposed port 1111, but mapped it to 2222.
As such, we invoke 2222 with our browser.
The response of our service shows it as well: it prints the host of the request, which contains port 2222.
However, if we have a look at the console, we can see the output of our node app.
The node app prints the internal port number 1111.

And yes, we can start our server.js locally and the container at the same time; then we can invoke localhost on port 1111 and on port 2222 at the same time.

Housekeeping

OK, after this little experience, we can press Ctrl + C (twice) to quit the process on the console.

Then we want to delete the running container.
First we need to fetch the container id:  docker ps


Note:
Use docker ps -a to see all containers, even those which are already stopped.
Now we can stop it:
docker stop <containerID>

BTW, it is enough to type the first few characters of the container id.


Then we can delete the container: docker rm <containerID>


Upload image

We want to upload our image to the public Docker Hub, such that it can later be easily used by Kyma.
The command: docker push <yourDockerID>/safeapp
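Note: if we aren’t logged in to Docker Hub yet, we have to run docker login (with our Docker ID) before pushing.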


Until now, we’ve created a little Node.js application, containerized it and uploaded it to the public Docker repository.
Next, we want to deploy to Kyma

Kyma


To deploy our application to Kyma, we need to define a deployment descriptor

Define deployment

In the previous tutorial we’ve already learned how to define a deployment descriptor
Basically, we need to declare what container we want to deploy
In the case of a server app, we additionally need to provide information about the port.
In our case, we say 1111, which is the port exposed by Docker (and also used by our server app).

OK, we create a file called deploy_app.yaml in our working directory and copy the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-app-nodesafe
spec:
  template:
    metadata:
      labels:
        apptemplate: template-nodesafeapp
    spec:
      containers:
      - image: <yourDockerID>/safeapp
        name: container-safeapp
        ports:
        - name: http
          containerPort: 1111
  selector:
    matchLabels:
      apptemplate: template-nodesafeapp

Note:
Make sure to replace the dockerID with yours

Note:
Typically, deployment files are named Deployment.yaml

We go to our SAP Cloud Platform account and open the Kyma Console UI
In Kyma dashboard, we navigate into our namespace (default), then press the button “deploy new resource” and deploy our deploy_app.yaml file

Note:
In case you get an error message, try renaming the file extension to .json
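
Alternatively, if kubectl happens to be configured against the Kyma cluster (not covered in this tutorial), the same descriptor could be deployed from the command line, e.g.:

kubectl apply -f deploy_app.yaml -n default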

After deployment, we can check the new resources (deployment and pod) in our Kyma dashboard
To check the log, we can use the context menu of the pod
We can see the log output of our server.js

OK, we’ve seen that our Node.js app has been deployed and started.
But what we want is to invoke the endpoint.
To address this wish, we have the next chapter.

Define service

Up to now, we have:
-> an app
--> in a container
---> in a pod (can be even multiple pods)

In order to allow communication between pods, we need to create a “Service” in Kubernetes
Otherwise our server app endpoint remains unreachable (see here for more info).
A “Service” in Kyma will not only provide a port for communication with our app, it will also take care of a lot of useful things, e.g. load balancing and security.

To create a Service, we have to deploy a resource file
In our working directory, we create another file, called deploy_service.yaml
The content:
apiVersion: v1
kind: Service
metadata:
  name: service-nodesafeapp
  labels:
    servicelabel: nodesafeappservice
spec:
  ports:
  - port: 3333
    targetPort: 1111
  selector:
    apptemplate: template-nodesafeapp

Note:
At this point in time, there’s no button to create a service in the Kyma dashboard

As we can see, the main purpose of this resource is to specify a port with an arbitrary number (e.g. 3333), and map it to the real port (1111), which we defined in the deployment and which is exposed by our docker container
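
To make this tangible: other pods inside the cluster could now reach our app via the service name and the service port, e.g. (assuming the default namespace) http://service-nodesafeapp.default.svc.cluster.local:3333/free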

We also must not forget to mention to which pods the specified port should be applied. This is done with the selector statement: it matches the label which we gave to our pod template in the deployment. To be more concrete: the service will forward the request to a matching pod. As we know, there can be multiple identical pods, each carrying a container with our app.

As usual, in the kind attribute we specify which kind of resource we’re deploying. In this case it is a “Service”.

The resource has an arbitrary name, in our example it is a silly name, such that we always know what we see, once we see a name in the Kyma dashboard

It is optional to add a label. We’ve done it here, because otherwise it wouldn’t be possible to navigate into the details page of the service in the Kyma dashboard

OK
We deploy this resource file as usual: Overview -> Deploy new resource

To view the newly created service, we go to Operation -> Services, then click on the service entry.


We can click on “Edit”.
This will display a dialog where we can see some default values which have been added in addition to our deployed resource
For instance, the default protocol for communication is TCP
Also, a “Cluster IP” has been assigned to the service
We close the dialog without applying any changes

Now that our app-container-pod is proxied by a service, we still can’t call our server endpoint from the browser.
We need to expose it to the outside world

This is supported by the API Gateway component of Kyma
To enable it, we create an API Rule resource

Create API Rule

Creating an API Rule can be done in the dashboard
In the “Service” details screen, there’s an “Expose Service” button in the “API Rules” section
This opens the creation dialog

Note:
Alternatively, we can navigate to Configuration -> API Rules, then select the desired service


In the Create API Rule screen, we enter an arbitrary name for the API rule, e.g. apirule-nodesafe
Then enter a short name for the host, which will end up in the URL of our API, e.g. safeapp
We make sure that the service, which we created above, is selected

There must always be at least one access strategy, otherwise an API rule doesn’t make sense
Here we enter the relative path to our free endpoint
We leave the defaults, which means that no restricting rule will be applied to our “free” endpoint

Finally, we press create

As a result, we get the info about the new API Rule, along with the host url and the port
We can see the specified host is concatenated with the Kyma domain
And we can see that the port is 3333, according to our definition of the service


Run app in Kyma

After saving the new API Rule, we can click on the “Host” URL
After pressing the Host-URL, we only need to append our free endpoint
In my example:

https://safeapp.c-12345b2.kyma.shoot.live.k8s-hana.ondemand.com/free

That’s it, we’re happy to see the silly response of our app

Add Security

Coming finally to the interesting part of this blog post:
How to protect our unprotected endpoint
I mean, in our app we’ve defined an endpoint named prot, meant to be protected.
But we never made any effort to protect it.
Up to now.
And still, it requires almost no effort.

Enhance API Rule

The API rule created above didn’t define any restriction
So now let’s enhance it

We navigate to Configuration -> API Rules, then click on our apirule-nodesafe.
In the details screen, we press Edit.
Then we press Add access strategy.
The path is /prot. Make sure that this string is identical to the path specified in our server.js file.

As authentication handler, we select OAuth2

Our endpoint only supports GET anyway, so we select only the GET method in our new strategy.

Afterwards, we also specify a scope, e.g. nodesafeaccessscope.
This means that anybody (or anything) who wants to access this protected endpoint must have this scope.


Note:
See here for the spec of the config

Note:
Make sure to take a note of the scope name
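
Note:
Behind the scenes, the dashboard maintains this configuration in an APIRule resource. Just as an illustration, a rough sketch of how the resulting resource might look, assuming the gateway.kyma-project.io/v1alpha1 APIRule CRD used by Kyma at the time (the exact defaults may differ):

# sketch only - the actual resource is created and maintained by the dashboard
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: apirule-nodesafe
spec:
  service:
    name: service-nodesafeapp
    port: 3333
    host: safeapp
  rules:
  # ... here the rule for the /free path with its default access strategy ...
  - path: /prot
    methods: ["GET"]
    accessStrategies:
    - handler: oauth2_introspection
      config:
        required_scope: ["nodesafeaccessscope"]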

Talking about having scope…
Who exactly has a scope?
It is the OAuth access token which contains the information about authorization
Talking about tokens…
Who exactly handles them?
Correct, we need to create an OAuth client

Create OAuth Client

In the Kyma dashboard, we go to Configuration -> OAuth Clients
We press Create OAuth Client and enter the following values:

Name: any arbitrary name, e.g. oauthclient-for-nodesafeapp

Response types: Token

Grant types: Client credentials

Scope: Copied from above: nodesafeaccessscope

Define custom secret name:

Secret name: secret-for-oauthclientfornodesafeapp


Press create

As a result, we get a clientid and a clientsecret, generated in the new secret (the one with our silly custom name).
The clientid looks silly as well.
We can press Decode...
...but it still looks silly.
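
By the way, here as well there’s a Kubernetes resource behind the dialog: an OAuth2Client, handled by ORY Hydra. A rough sketch of an equivalent resource, under the assumption of the hydra.ory.sh/v1alpha1 CRD used by Kyma:

# sketch only - the dashboard creates this resource for us
apiVersion: hydra.ory.sh/v1alpha1
kind: OAuth2Client
metadata:
  name: oauthclient-for-nodesafeapp
spec:
  grantTypes:
  - client_credentials
  responseTypes:
  - token
  scope: "nodesafeaccessscope"
  secretName: secret-for-oauthclientfornodesafeapp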

Run app in Kyma with security

After saving the modified API Rule, we can test our endpoints
To get the URL, we just need to go to our API Rule
The free endpoint is still free

https://safeapp.c-1234.kyma.shoot.live.k8s-hana.ondemand.com/free

The protected endpoint, however, gives an error when called from the browser:

https://safeapp.c-1234.kyma.shoot.live.k8s-hana.ondemand.com/prot

We get status code 401 Unauthorized

This means that the request points at an existing resource, but it lacks authorization.
The reason is obvious: we haven’t sent any authorization data along with the request.
And we don’t only need an access token; the token must additionally contain the required scope.

What now?
How to call that endpoint with REST client?

New Hope

This is the topic of a separate blog post: Run protected kyma endpoint with Postman

Summary


We've created a Node.js app which exposes an endpoint
We've created a Docker image and uploaded it to Docker Hub
We've created a Deployment in Kyma
We've created a Service
We've created an API Rule
We've created an OAuth Client

We've learned how to expose the port of our endpoint with Docker and with Kyma
We've learned how to make our endpoint accessible for public usage in Kyma
We've learned how to protect our endpoint with OAuth in Kyma

Links



Appendix: Sample Code



package.json
{
  "dependencies": {
    "express": "^4.16.3"
  }
}

server.js
// minimal express server with two GET endpoints
const express = require('express')
const app = express()

// freely accessible endpoint
app.get('/free', (req, res) => {
  res.send('Free silly')
})

// endpoint that should be protected (but not in the application code)
app.get('/prot', (req, res) => {
  res.send(`Successfully passed security control. Running on '${req.headers.host}${req.path}'. Auth: '${req.headers.authorization}'`)
})

// start the server on the initial port 1111
app.listen(1111, () => {
  console.log('Node app running on port 1111')
})

Dockerfile
FROM node:12
COPY . .
RUN npm install
EXPOSE 1111
CMD ["node", "server.js"]

 

deploy_app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-app-nodesafe
spec:
  template:
    metadata:
      labels:
        apptemplate: template-nodesafeapp
    spec:
      containers:
      - image: <yourDockerID>/safeapp
        name: container-safeapp
        ports:
        - name: http
          containerPort: 1111
  selector:
    matchLabels:
      apptemplate: template-nodesafeapp

deploy_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodesafeapp
  labels:
    servicelabel: nodesafeappservice
spec:
  ports:
  - port: 3333
    targetPort: 1111
  selector:
    apptemplate: template-nodesafeapp