
Exposing On Premise Data to SAP Cloud Foundry

Introduction:

In this blog, I will cover how to expose on-premise data to SAP Cloud Foundry using a Python application based on the Flask library, initially without authentication.

My overall objective is to expose the data to Data Hub, refine it there, and store or transfer it to the target system, which I will cover in a series of blogs.

Brief Background: Why is this needed, and why Python?

While working on a topic in Data Intelligence running on an SAP Cloud Foundry instance, I came across the need to expose table content of the ABAP stack to pipelines in Data Intelligence.

Since the SAP systems run in an on-premise environment, exposing data directly outside that environment (primarily as an OData service) is not easy, but of course not impossible, so I decided to shed some light on this topic.

As to why Python: my overall objective is to create an application with a bare-minimum basic authorization on Cloud Foundry, which can then be called within Data Hub using the Python3 operator; further, using the Pandas library, the data can be written to a CSV file in storage.

Solution :

One possible way to expose tables of an on-premise system is the following sequence of technical steps:

  • On-premise system – Create a CDS view for exposing the table data, with the OData service enabled
  • Enable SAP Gateway services in the back-end system to expose the CDS view as an OData service via a service implementation
  • In the Cloud Connector, configure the following:
    • Create a connection to the Cloud Foundry instance
    • Map the actual server of the back end to a virtual host
  • Deploy the app in the SAP Cloud Foundry environment
    • Create an app using the Destination, Connectivity, and XSUAA services of Cloud Foundry to pull data from the on-premise system and expose it on an HTTPS host with basic authentication

Available solutions but why this?

The link here provides a wonderful solution to our problem statement with basically minimal effort and coding.

Mostly, the end-to-end connectivity is ensured via a Node.js app deployed in the app router with an authentication service. You must be thinking: if something similar can be achieved with a simple app router and basically no code, why look at a separate approach?

As I mentioned earlier, my target is to consume this data within Data Hub as an HTTP service, which is not possible via the app router due to technical limitations.

With the app router in Node.js, the work is done so brilliantly that we don't actually see the intricacies involved internally. The built-in XSUAA service is the central point of truth for authentication. The internal routing done by the app router cannot easily be replicated in a Data Hub pipeline. The app router only shows us the final link of the app; internally, however, it first authenticates with an OAuth scheme, keeps the JWT for future authentication, provides JWTs for authenticating services, makes the call to retrieve data from the on-premise system, and finally shows the data in the deployed app. The main trick here is the OAuth scheme, which I wasn't able to replicate in the Data Hub pipeline.

So, as a workaround, I used this approach to feed the on-premise data to Data Hub via a Python-based app.

How are the Cloud Connector, Cloud Foundry, and the on-premise ABAP system linked?

Well, to be honest, this is the main part, and it took me a while to decipher. I am no expert in any of these technologies, and linking them together was a bit hard. So let's try to sum it up briefly.

The three main services in Cloud Foundry that do the trick are:

  • Authorization and Trust management
  • Connectivity
  • Destination

The SAP Cloud Connector, as the name states, is a connector. It's a gateway with two doors: one opens into the world of on-premise data, and the other opens to the SAP Cloud Platform. It's a secure tunnel allowing only one-way connections, ensuring a safe and secure transfer of data. The Cloud Connector maps the internal system to a virtual address, which hides the actual server details when a response is provided back to applications running on the cloud platform.

Above is an image of the mapping between the actual internal host of the on-premise system and a virtual host in the SAP Cloud Connector. The Cloud Connector now uses the virtual host as a reference and as the host name for calls to the on-premise OData service.

So now, instead of the call for the OData service being:

http://<Internal Host>:<Internal Port>/sap/opu/odata/sap/<OData service>

the call to the OData service via the Cloud Connector is:

http://<Virtual Host>:<Virtual Port>/sap/opu/odata/sap/<OData service>

So, taking the above example, the link would look something like:

http://vhost.atosorigin-ica.com:8080/sap/opu/odata/sap/<My OData service>

Below is an example of a Node.js application deployed on Cloud Foundry which uses the Cloud Connector to connect to the ABAP system (link here for the app router).

If you notice, the virtual mapping works: the actual details of the on-premise host and port are masked, and in the response the metadata of the service points to the virtual address instead of the actual server information.

Now, just for the sake of fun, I will remove the connection between my Cloud Connector and my Cloud Foundry trial account and run the same app again:

And voilà, the app doesn't show any data whatsoever.

The Cloud Connector is the link between this virtual OData URL and the app running in the cloud. The Connectivity service of CF queries the Cloud Connector through a proxy host and proxy port, making an actual GET request for the virtual OData URL, which is retrieved from the Destination service after proper authentication by the XSUAA service.

I know that was a bit too much, so let's roll back to the three services in Cloud Foundry that do the trick and check them in detail.

Connectivity Service:

This is, in my opinion, one of the most important services when a connection between an on-premise and a cloud system is considered.

If you have used the Connectivity service or gone through the link here, you know how to create a service instance for it. Once the instance is created, go inside it and create a service key by clicking on the Create Service Key button, as seen in the image below.

The service key contains some crucial information about the Connectivity service, and the image below highlights the important keys to consider:

The Connectivity service first has to get a JWT from the XSUAA service to authenticate itself. Once validated, it uses the token to make a request, via the Cloud Connector, to the virtual OData address maintained in the Destination service.

The request made by the Connectivity service is not a direct one; instead, it goes through a proxy host and proxy port, which are used when making the GET request to fetch the data from the source system. The proxy host and port details are maintained in the service key (see the image above).

This part will become clearer when we get to the cURL section later on.

Destination Service:

The Destination service is probably the easiest of the three. It stores the information about the destination that the Connectivity service needs to query. In short, it holds the virtual host and port (as given in the Cloud Connector) along with the authentication parameters. A simple destination would look somewhat like below:

The service key for the Destination service also holds some interesting information: the client ID, client secret, and the URL for authentication.

Authorization and Trust management

The last piece of our puzzle, but another very important service: XSUAA.

The main use of the XSUAA service is to establish trust with identity providers for authentication. It mainly handles this by issuing JWTs (JSON Web Tokens). Any service can authenticate itself against the XSUAA service using a client ID and client secret by making a call to the XSUAA service URL. This information can be found in the service key of the particular instance. Using the JWTs, further calls are made either to fetch the destination details or to call the SAP Cloud Connector to fetch the data using the proxy server details.

The image below shows a typical service key of an XSUAA service instance. The most interesting key to notice here is the URL: this is the authentication endpoint to which a request is sent to get the JWT.

How this is done will become clearer in the section on cURL and finally in the Python code, so in case this still does not make much sense, just relax 😉.

In simple words:

  • The Destination service holds the server information of the on-premise system against which the query will be made to pull the data.
  • The Connectivity service uses this information from the Destination service to determine where the query has to be made and, using a proxy host and port, makes the call to retrieve the data from the on-premise system.
  • For authentication, the XSUAA service is used; it provides JWTs, each with its own validity lifespan.

cURL:

Well, to be honest, until now it has been more of the 'why' and the 'what'; let's do the 'how' now!

Let's start with the Destination service.

Retrieve destination details:
Step 1 – Understanding the API for destination services:

The API documentation for the destination service can be found on this link here.

Now, if you want to retrieve a particular destination from your Cloud Foundry account, you first need the URI mentioned in the service key for the destination (discussed before), which would look somewhat like below:

https://destination-configuration.cfapps.eu10.hana.ondemand.com

As per the API documentation, the 'get destination' request path is /destinations/{name}.

An actual request URL would look like: <URI of the destination service key>/destination-configuration/v1/destinations/{name of the destination}.

An example of the GET request URL, where my destination is named abapBackend1, would look somewhat like below:

https://destination-configuration.cfapps.eu10.hana.ondemand.com/destination-configuration/v1/destinations/abapBackend1

A query to this URL results in an error, as shown below for a query from Postman.

The error clearly states a missing authorization bearer. The reason is that the call requires a JWT for authentication. As mentioned already, this JWT is provided by the XSUAA service. So let's move on to getting this JWT.

Step 2 – Retrieve JWT for destination services: 

To get a JWT from the XSUAA service, a request has to be sent to the URL mentioned in the service key of the XSUAA service instance.

The request URL to get the token would be:

<URL>/oauth/token

So in my example it would look somewhat like below (please note the request is a POST request):

https://s0018786787trial.authentication.eu10.hana.ondemand.com/oauth/token

Along with this request URL, some other crucial information has to be supplied, primarily the basic authentication details and some other parameters.

In our case the JWT is requested for the Destination service, so the username and password are the client ID and client secret from the service key of the Destination service. Navigate to your Destination service and open the service key of the instance, which contains the client ID and client secret. Your username is the value of the client ID and your password is the client secret.

Below is the complete cURL statement for this request:

curl -X POST \
  '<URL from the service key of the xsuaa service>/oauth/token' \
  -H 'Authorization: Basic <client ID and client secret from the destination service, base64-encoded>' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'client_id=<client id from the key of your destination service>&grant_type=client_credentials'

Replace the parameters with your instance-specific details and import the cURL into Postman to trigger a POST request, which returns an access_token in the response:
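The same token request can be made from Python with the requests library. Below is a minimal sketch; the xsuaa URL, client ID, and client secret are placeholders for the values in your own service keys:

```python
import base64

import requests


def basic_auth_header(client_id, client_secret):
    """Build the Basic 'Authorization' header value that the cURL
    statement above expects to be base64-encoded by hand."""
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")


def fetch_jwt(xsuaa_url, client_id, client_secret):
    """POST to <xsuaa url>/oauth/token with the client-credentials
    grant and return the access_token from the JSON response."""
    resp = requests.post(
        xsuaa_url + "/oauth/token",
        headers={"Authorization": basic_auth_header(client_id, client_secret)},
        data={"client_id": client_id, "grant_type": "client_credentials"},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Note that requests encodes the form body and sets Content-Type: application/x-www-form-urlencoded on its own, so only the Authorization header has to be supplied explicitly.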

Step 3 – Call the destination service API to retrieve details:

As discussed in Step 1, we make a call to the destination API with an authorization header that uses the access_token from Step 2 as the JWT, as in the cURL below:

curl -X GET \
  'https://<URL from the destination service key>/destination-configuration/v1/destinations/abapBackend1' \
  -H 'Authorization: Bearer <token received in Step 2>'

Executing this cURL statement returns the details of our destination:
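The same call can be scripted in Python; a small sketch, assuming the JWT from Step 2 is already at hand (the service URI and destination name below are just the example values from above):

```python
import requests


def destination_url(service_uri, destination_name):
    """Build the 'find destination' endpoint from the URI found in the
    destination service key."""
    return (service_uri
            + "/destination-configuration/v1/destinations/"
            + destination_name)


def fetch_destination(service_uri, destination_name, jwt):
    """Call the destination API with the JWT as bearer token and
    return the JSON payload describing the destination."""
    resp = requests.get(
        destination_url(service_uri, destination_name),
        headers={"Authorization": "Bearer " + jwt},
    )
    resp.raise_for_status()
    return resp.json()
```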

Using the Connectivity service:

The next important topic is using the Connectivity service. With the details of the username, password, and destination URL, the Connectivity service can request the data from the on-premise system via the SAP Cloud Connector. But first it needs a JWT for authentication.

Step 1: Retrieve the JWT for the Connectivity service

As discussed in the previous section, the process is similar, except that the details used here come from the service key of the Connectivity service.

curl -X POST \
  '<URL from the service key of the xsuaa service>/oauth/token' \
  -H 'Authorization: Basic <client ID and client secret from the connectivity service, base64-encoded>' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'client_id=<client id from the key of your connectivity service>&grant_type=client_credentials'

A POST request of the above cURL retrieves a JWT authorizing the Connectivity service; the response in Postman would look somewhat like below:

Step 2: Call the backend OData service

This step involves a bit of coding, so let's cover it directly via a Python code snippet. The important technical aspects to consider are:

  • The call to the SAP back end uses basic authentication, so the OData service is called with the SAP username and password of a user who has the roles required to view the data of the OData service
  • The Connectivity service does not make a direct GET request to the OData URL; the request is made via a proxy host and proxy port, which can be found in the service key of the Connectivity service
  • To authenticate the call to the proxy server, a Proxy-Authorization header is passed, using the JWT obtained in Step 1 as the bearer token
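The points above can be sketched in Python as follows; the virtual URL, SAP user, and password are placeholders, and the proxy host and port come from the onpremise_proxy_host / onpremise_proxy_port entries of the connectivity service key:

```python
import requests


def proxy_settings(proxy_host, proxy_port):
    """requests-style proxies dict pointing at the connectivity proxy."""
    return {"http": f"http://{proxy_host}:{proxy_port}"}


def read_odata_via_proxy(virtual_url, proxy_host, proxy_port,
                         connectivity_jwt, sap_user, sap_password):
    """GET the virtual OData URL through the connectivity proxy.

    Proxy-Authorization carries the JWT from Step 1 to authorize the
    proxy hop, while basic auth authenticates against the SAP back end.
    """
    resp = requests.get(
        virtual_url,
        proxies=proxy_settings(proxy_host, proxy_port),
        headers={"Proxy-Authorization": "Bearer " + connectivity_jwt,
                 "Accept": "application/json"},
        auth=(sap_user, sap_password),
    )
    resp.raise_for_status()
    return resp.json()
```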

App to retrieve data from the on-premise system:

Now comes the most important aspect of this whole blog: our code!

You can follow the steps below or directly download the whole implementation from here.

Step 1 – Create a folder where you want to add all the files of your code, e.g. C:\CF App

Step 2 – Create a file runtime.txt with below content.

python-3.6.8

This tells the CF instance which runtime environment to use. You can also use any other Python version supported by your CF instance, 3.6.9 for example.

Step 3 – Create a file requirements.txt with below content.

Flask
requests
cfenv

Since our app will be deployed on a Cloud Foundry instance, we need to explicitly list the libraries that are needed beyond the standard ones available in the runtime. These libraries will be installed when the app is deployed to the CF instance.

Flask – we use this to serve our final result as a dict

requests – used to make GET/POST requests

cfenv – used to read the environment variables associated with the app
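cfenv is a thin wrapper around the VCAP_SERVICES environment variable that Cloud Foundry injects into the app. As an illustration of what it does under the hood, the same lookup can be sketched with the standard library alone (the label names here are examples):

```python
import json
import os


def service_credentials(label):
    """Return the credentials block of the first bound service with the
    given label (e.g. 'connectivity' or 'xsuaa') from VCAP_SERVICES,
    which is the same lookup that cfenv performs internally."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instance in vcap.get(label, []):
        return instance["credentials"]
    raise KeyError("no bound service with label " + repr(label))
```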

Step 4 – Create file Procfile with below content :

web: python read_data.py

This tells the platform to execute the read_data.py file after deployment. Please note this file is not yet created; that happens in Step 6.

Step 5 – Create the file manifest.yml with the below content:

---
applications:
- memory: 128MB
  disk_quota: 256MB
  random-route: true
  services:
    - <Name of your XSUAA service instance>
    - <Name of your connectivity instance>

Please adjust the memory allocations as you wish, or use the values mentioned above.

In the services section we explicitly bind the service instances to the app so that they are available as environment variables and can later be used within the code for further processing. Please do not forget to change these to the names of your own service instances in the above code.

Step 6 : Create the file read_data.py

Since this is a bigger code snippet, please retrieve it from here.

I have added comments for each step, so it should be self-explanatory.
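For orientation, here is a rough, hypothetical sketch of the shape read_data.py takes when the pieces above are put together. It is not the exact file linked above; the virtual URL, the service path, and the SAP credentials are placeholders that belong in configuration rather than in code:

```python
# read_data.py (sketch): serve on-premise OData content via Flask.
import json
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)


def credentials(label):
    """Credentials of the first bound service with the given label."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    return vcap[label][0]["credentials"]


def jwt_for(creds):
    """Client-credentials token from the xsuaa endpoint in the key."""
    resp = requests.post(
        creds["url"] + "/oauth/token",
        auth=(creds["clientid"], creds["clientsecret"]),
        data={"grant_type": "client_credentials"},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


@app.route("/")
def read_data():
    """Fetch the OData payload through the connectivity proxy."""
    conn = credentials("connectivity")
    token = jwt_for(conn)
    proxies = {"http": "http://{}:{}".format(
        conn["onpremise_proxy_host"], conn["onpremise_proxy_port"])}
    # Placeholder virtual URL and SAP user, hard coded for illustration.
    resp = requests.get(
        "http://vhost.atosorigin-ica.com:8080/sap/opu/odata/sap/"
        "MY_SERVICE/EntitySet?$format=json",
        proxies=proxies,
        headers={"Proxy-Authorization": "Bearer " + token},
        auth=("SAP_USER", "SAP_PASSWORD"),
    )
    return jsonify(resp.json())


if __name__ == "__main__" and "VCAP_APPLICATION" in os.environ:
    # On Cloud Foundry the platform injects the listen port via $PORT.
    app.run(host="0.0.0.0", port=int(os.environ["PORT"]))
```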

Step 7: Deploy the app

Open CMD and navigate to the location where your files are stored, e.g. cd C:\CF App.

Important info:

  • In case you don't have the Cloud Foundry CLI installed, please install it and then proceed with the next steps. We will be using CF commands from now on
  • Please ensure your SAP Cloud Connector is connected to your CF instance, or you will receive an error: the app will get no response and will fail during deployment

Log in to the SAP CF instance using the command below:

cf login

Then enter your username and password. Please note the password is not visible while typing, so just go with the flow 🙂.

In case you have several subaccounts, select the subaccount you want to use for deployment.

Once you are in the CF instance, use cf push <your application name> to deploy the app.

Once deployment is complete, you will get a success message which should look like below:

Once the application is deployed, you can navigate to the applications inside the space, and the app with the name you deployed should be visible. When you click on the URL, the response is shown as an HTML page, somewhat like below:

If you look at the metadata, it points to the virtual host and pulls exactly the same data as the on-premise OData service, now available outside the on-premise environment.

And using this approach we can happily expose the data outside the SAP world.

However, this approach still has a few blocking points:

  • The data is exposed to the open internet. If you are dealing with crucial client data, no one will appreciate this, and a more secure app is needed.
  • There is still some hard coding of the destination and username/password details.
  • The data is exposed just as a basic response; we need to make it more meaningful and readable.

In my next article I will cover these topics and provide a better and improved solution. The final target of exposing this data to Data Hub is also not yet covered, so I will try to sum everything up in another blog.

Hopefully this blog is of some use to someone, somewhere, someday 🙂!
