
To help you avoid wasted time and missteps, this document walks through the steps needed to create a cloud application for the first time. Along the way, you will also discover some principles of SAP Cloud Platform and of the use of the REST predictive services. This document does not, however, replace the online SAP Cloud Platform programming documentation.

Before starting, we make these assumptions:

  • SAP Cloud Platform Predictive service is deployed and configured on your account. Refer to the online help of the predictive services.
  • APL, which comes with the predictive services, is installed on the HANA database of your cloud account.
  • The user who will develop the application has at least a developer role on the cloud account and the C4PA-USER role for the Java application of the predictive services.

Prepare HANA database

Create accounts

Connect to the database with Eclipse

On SAP Cloud Platform, open the Persistence menu, go to Databases & Schemas, click your database and, on its page, click Database User.

If you have never done this before, you will see this dialog. Click Create User.

Then click the Show button to see your initial password.

Now launch Eclipse (Mars version or later). In the toolbar, click the Add System icon and choose to add a cloud system, because the HANA database runs on SAP Cloud Platform.

Fill in the fields. Use your SAP Cloud Platform user ID and password and click Next.

Then you can choose either a schema or a database. This is where you use the user ID and password provided just before in the HANA cockpit.

The first time you connect, you will be asked to choose your own password.

You can also check the version of this HANA database. Select it and, in the toolbar, click the Administration icon.

In the window, you can see:

  • the HANA version and
  • the installed plug-ins; clicking a plug-in shows its version.

Create User

It is not a good idea to use your personal database user ID to connect the SAP Cloud Platform Predictive service to the HANA database. It is better to create a technical user: it is not linked to you, and a generic technical user can be shared among several developers.

There are two ways to do this, depending on the development environment used:

  1. Eclipse, as shown in the previous section, to add a cloud system, or
  2. SAP HANA Web-based Development Workbench, which runs from SAP Cloud Platform

To start with this environment, your database user must have specific roles (see the table below). You can assign these roles directly from Eclipse if you have admin privileges; otherwise, ask an admin user. Refer to the help of the SAP HANA Web-based Development Workbench.

Tool     | Description                                                                     | Required Role
---------|---------------------------------------------------------------------------------|-------------------------------------
Editor   | Inspect, create, change, delete and activate SAP HANA repository objects        | sap.hana.ide.roles::EditorDeveloper
Editor   | Debug server-side JavaScript code                                               | sap.hana.xs.debugger::Debugger
Catalog  | Create, edit, execute and manage SQL catalog artifacts in the SAP HANA database | sap.hana.ide.roles::CatalogDeveloper
Security | Create users and user roles                                                     | sap.hana.ide.roles::SecurityAdmin
Traces   | View and download SAP HANA trace files and set trace levels                     | sap.hana.ide.roles::TraceViewer

From Eclipse

From the cloud system created earlier, go to Security/Users, right-click Users and choose New User. Then fill in the form.

Another easy way is to open a SQL console in Eclipse and run this SQL:

-- CREATE HANA TECHNICAL USER
CREATE USER PS_USER PASSWORD Password1;


It is then necessary to grant roles to this user. The roles to grant depend on the HANA version.

-- ROLES (SPS09)
GRANT AFL__SYS_AFL_APL_AREA_EXECUTE TO PS_USER;
GRANT AFLPM_CREATOR_ERASER_EXECUTE TO PS_USER;

-- ROLE (SPS10)
CALL GRANT_ACTIVATED_ROLE('sap.pa.apl.base.roles::APL_EXECUTE','PS_USER');


Now the initial password for PS_USER needs to be changed. To do this, add the same cloud system again, but this time with PS_USER as the database user.

In the first login dialog, enter your SAP Cloud Platform user ID; in the second, enter PS_USER and its password, then click the Finish button.

Since this is the first time you connect to the HANA database with this user, the system asks you to change the initial password, so enter a new one.

When done, double-clicking this new user should show this:

From SAP HANA Web-based Development Workbench

The root page looks like this:

Click the Security tile. Right-click Users and select New User. Fill in the form with the user name, the password and the role(s).

Create a schema for data

For security, it is better to load the data into a protected schema, so create the schema PS_DATA.

So that the predictive services can access these data, PS_USER must be granted the SELECT privilege on the tables of schema PS_DATA. Here again there are two ways:

From Eclipse

Right-click the Catalog and select Open SQL Console. In this console, enter the SQL code to create the schema and grant the SELECT privilege.
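As an illustration, assuming the schema name PS_DATA and the technical user PS_USER used in this document, the SQL could look like the following sketch (adapt the names to your setup; granting on the schema covers all its tables, but you can also grant table by table):

```sql
-- CREATE THE SCHEMA FOR THE DATA
CREATE SCHEMA PS_DATA;

-- ALLOW THE TECHNICAL USER TO READ THE DATA
GRANT SELECT ON SCHEMA PS_DATA TO PS_USER;
```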

Note that, for PS_USER, the grant can also be done from the Eclipse UI.

From SAP HANA Web-based Development Workbench

From the root page, click the Catalog tile. Right-click Catalog and select New Schema. Give it a name and click OK.

The schema is created. Go to the Security tile and grant PS_USER the SELECT privilege on the tables of schema PS_DATA.

Load data into HANA

For the example in this document, the data are in a CSV file. Here is how to import this CSV file into a table of schema PS_DATA. The table will take the name of the CSV file.

From Eclipse

In the File menu, choose Import, select SAP HANA Content and then Data from Local File, and click Next.

Select the HANA database into which you want to load the data and click Next.

In the wizard, provide information about the data file and where to store the data; here, in a new table of schema PS_DATA named SMALL_SALES. Click Next.

The next wizard page manages the table definition and the data mappings. Here, keep what is proposed and click Next.

The last screen is a summary of the data to be imported. Click Finish to run the import.

The new table SMALL_SALES has now been created in the schema PS_DATA.

Bind Predictive Services to HANA database

Now that the database technical user is created and has access to the schema PS_DATA, which contains tables that can be used as datasets, a last step to finish the configuration is to establish the link between the predictive services and the schema PS_DATA.

To do this, SAP Cloud Platform uses the mechanism of destination. Here is how to set a destination between predictive services and schema PS_DATA through the technical user PS_USER.

From SAP Cloud Platform, go to Java Applications and select the Java application of the predictive services.

Click on aac4paservices, then on Data Source Bindings, then on New Binding.

Leave the Data Source field empty. Once saved, the data source binding will have the name <DEFAULT>.

Choose the database ID and use the database technical user PS_USER with the new password set in the previous section.

Then click the Save button. The SAP Cloud Platform Predictive service is now ready to be consumed inside a cloud application, which is what we will see in the next sections.

Create HTML/Javascript application

Define destinations

A destination is an SAP Cloud Platform mechanism that links one application to another so that the second can be called by the first. The principle is the following:

Account destination to SAP Cloud Platform Predictive service

Once the predictive services are deployed, configured and started, it is necessary to create a destination.

The SAP Cloud Platform Predictive service is a Java application generally named “aac4paservices”, although this is not mandatory. The name is given during deployment and is part of the URL that designates the service. This URL is shown in the overview of the predictive services, accessible from the Java Applications menu.

To create a destination, go to the Destinations menu, click the New Destination button, set the fields as shown below and save.

The URL is the URL of the predictive services shown just before.
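For reference, a destination is just a small set of properties. A sketch of what the account destination could contain; the name and authentication type here are placeholder examples of my own, not values prescribed by the predictive services (use the URL of your own deployment and the settings required by your account):

```
Name=HCPpsDest
Type=HTTP
URL=<URL of the predictive services shown just before>
ProxyType=Internet
Authentication=AppToAppSSO
```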

Application destination

The application destination is set when the application is created in SAP Cloud Platform. It is visible in the overview of your application; the Edit button lets you choose the account destination to bind to your application.

In the end, you have this schema:

Initialize HTML Application

From HTML5 Applications, click New Application. A small dialog appears; enter the name of your application, for example hcpps4ki. The application now appears in the list. Click it, go to Versioning and click the Edit Online button.

Web IDE starts and, since this is the first time for this application, it asks you to provide information for the Git repository.

On the next dialog, keep the defaults and click Commit and Push. The application is created in your workspace. It contains only one file, which you will not have to touch. Select your application, right-click and choose New / Project from Template.

Choose SAPUI5 Application, then Next.

Keep the name of your application for the namespace, then click Finish.

Edit the file neo-app.json and add the code highlighted in yellow.

In the Git pane, select all the files, give a version number and click the Commit and Push button with origin/master.

Go back to SAP Cloud Platform. In the Versioning window, you now see version V0.1. At the bottom you also see a section about destinations. Click the Edit button to set the correct destination so that it looks like this, and save.

During this step we did two things:

  1. We created the skeleton of a new HTML application that uses SAPUI5 controls.
  2. We completed the second step of the destination workflow by defining the application destination and linking it to the account destination.

The consequences are:

  1. The application can be run (click the link V0.1).
  2. The application can now receive code that calls the SAP Cloud Platform Predictive service.

Add code

The objective here is not to describe how to develop a cloud HTML application; for that, refer to the SAP Cloud Platform online documentation. In this section I will show you:

  • The high level steps to include predictive features into your application
  • How to call predictive services and
  • How to get the results of these services

High-level steps

Before doing any prediction, it is necessary to prepare your data in the HANA database of your cloud account. The data are put into a specific table or view of a schema; this table or view is then registered as your dataset, using the dataset service.

The input parameter of this service is the table or view containing your data. The service returns a description of the data and, most importantly, a dataset ID that you will use later to reference your data.

The next step is to run the prediction(s), by calling the predictive service(s) that correspond to your needs.

Finally, good programming practice is to delete the job IDs and the dataset ID when you have finished with them.

These are the three high-level steps. For more information, refer to the online functional documentation or the API RAML documentation.
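The three steps can be sketched as plain request descriptors that could later be passed to an ajax call. This is only a sketch of my own: the helper names are invented, the first two paths follow the calls shown later in this document, and the delete endpoint in particular should be checked against the API RAML documentation.

```javascript
// Sketch of the three-step lifecycle (helper names are my own; verify the
// exact paths against the API RAML documentation).
function registerDataset(root, schema, table) {
  // Step 1: register a table or view as a dataset (the response carries a dataset ID).
  return { type: "POST", url: root + "/api/analytics/dataset/sync",
           data: JSON.stringify({ hanaURL: schema + "/" + table }) };
}

function runKeyInfluencer(root, datasetID, target) {
  // Step 2: run a predictive service on the registered dataset.
  return { type: "POST", url: root + "/api/analytics/keyinfluencer/",
           data: JSON.stringify({ datasetID: datasetID, targetColumn: target }) };
}

function deleteDataset(root, datasetID) {
  // Step 3: clean up when done (assumed here to be a DELETE on the dataset resource).
  return { type: "DELETE", url: root + "/api/analytics/dataset/" + datasetID };
}
```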

How to call predictive services?

Predictive services are REST web services. This means that:

  1. They are invoked through three verbs: GET, POST and DELETE.
  2. A URL specifies the service to call.
  3. A service can take one or several input arguments.
  4. The output is placed in a variable that is parsed after execution to retrieve the results.

Furthermore, there are two modes to invoke a service:

  1. Synchronous mode: your application waits for the service execution to finish before running the next instruction. The keyword “/sync” is appended to the URI to call a service in this mode.
  2. Asynchronous mode: control returns to the application immediately after the call; you do not wait for the service execution to finish. In this case the service returns a job ID and a job type that you can use later to check whether the execution has finished. This mode is very useful for big datasets, because your application can do something else in the meantime.
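The sync/async distinction only affects the URL. As a small illustrative sketch (the helper name serviceUrl is my own, not part of the API), the URL for a service call could be built like this:

```javascript
// Build the URL for a predictive service call.
// `root` is the application destination path, `service` the service name
// (e.g. "dataset" or "keyinfluencer"), and `sync` selects synchronous mode.
function serviceUrl(root, service, sync) {
  var url = root + "/api/analytics/" + service;
  return sync ? url + "/sync" : url;
}

// Matches the URLs used in the examples of this document:
console.log(serviceUrl("/HCPpsAppDest", "dataset", true));
// -> /HCPpsAppDest/api/analytics/dataset/sync
```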

The next sections contain JavaScript examples of how to call the predictive services.

Register a dataset

An easy way to call a service in JavaScript is to use the jQuery ajax function.

$.ajax({
    type: "POST",
    contentType: "application/json",
    url: root + "/api/analytics/dataset/sync",
    dataType: "json",
    data: JSON.stringify({"hanaURL": "PS_DATA/SMALL_SALES"}),
    success: function(data, status, request) {
        $('#datasetID').val(data.id);
        displayVariables(data.id);
    },
    error: function(request, status, error) {
        var msg = status + ": " + request.state();
        $('#out_param').html(msg).css({"background-color": "red"});
    }
});

The first argument specifies the verb.

The url argument defines which service to call. The root variable is set like this:

var root = "/HCPpsAppDest";

You will recognize the name of the application destination mentioned earlier in this document.

Finally, /sync indicates that the service is called in synchronous mode.

The input argument is given in JSON format:

data: JSON.stringify({"hanaURL": "PS_DATA/SMALL_SALES"}),

In case of success, the function displayVariables is called with the parameter data.id, which represents the dataset ID.

In case of failure, a message is displayed.

Get dataset description

function displayVariables(id) {
    $.ajax({
        type: "GET",
        contentType: "application/json",
        url: root + "/api/analytics/dataset/" + id,
        dataType: "json",
        success: function(data, status, request) {
            var i = 0;
            var oItem;
            ddlb_variables.destroyItems();
            for (i = 0; i < data.variables.length; i++) {
                addRowInTable("variablesTable", data.variables[i].name, data.variables[i].value);
                oItem = new sap.ui.core.ListItem();
                oItem.setText(data.variables[i].name);
                ddlb_variables.addItem(oItem);
            }
            ddlb_variables.setValue(data.variables[data.variables.length - 1].name);
            gTarget = data.variables[data.variables.length - 1].name;
            ddlb_variables.attachChange(function() {
                $('#target').html(ddlb_variables.getValue());
                gTarget = ddlb_variables.getValue();
            });
            ddlb_variables.placeAt("ddlb_ChooseTarget");
            $('#target').html(ddlb_variables.getValue());
        },
        error: function(request, status, error) {
            var msg = status + ": " + request.state();
            $('#out_param').html(msg).css({"background-color": "red"});
        }
    });
}

We use the ajax function again, but this time the verb is GET.

We also see that the URL is built dynamically with the variable root, the name of the service and the dataset ID.

If the call succeeds, we get the description of the dataset by parsing the variable data. The structure of this variable is given in the API documentation.

Call a Key Influencer

$.ajax({
    type: "POST",
    contentType: "application/json",
    url: root + "/api/analytics/keyinfluencer/",
    dataType: "json",
    data: JSON.stringify({"datasetID": $('#datasetID').val(), "targetColumn": gTarget}),
    success: function(data, status, request) {
        gJobID = data.ID;
        var res = 'Job ID: ' + data.ID + ' Status: ' + data.status + ' type: ' + data.type;
        $('#status').html(res);
    },
    error: function(request, status, error) {
        var msg = status + ": " + request.state();
        $('#out_param').html(msg).css({"background-color": "red"});
    }
});

The same mechanism is used again with the ajax function. The verb is POST and the URL is that of the key influencer service in asynchronous mode. Only the two mandatory input parameters are provided: the dataset ID and the target variable.

In case of success, the job ID is stored in the variable gJobID.

Get Status of a job

$.ajax({
    type: "GET",
    url: root + "/api/analytics/keyinfluencer/" + gJobID + "/status",
    dataType: "json",
    success: function(data, status, request) {
        var res = 'id: ' + data.ID + ' status: ' + data.status + ' type: ' + data.type;
        $('#status').html(res).css("background-color", "");
    },
    error: function(request, status, error) {
        var msg = status + ": " + request.state();
        $('#status').html(msg).css({"background-color": "red"});
    }
});

The application can continue, but periodically it is necessary to check the status of the key influencer job. This is done with a GET on the status of the job. It takes no input parameters; all necessary information is in the URL.

When the status call succeeds, we get the status of the key influencer job, which can be:

  1. PROCESSING: the job is not finished, so another status call will be necessary.
  2. SUCCESS: we can now get the result of the key influencer.
  3. FAILURE: there was a problem and an error code is returned.
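These three outcomes can be folded into a small dispatch helper (a sketch; nextAction is a name of my own invention) that the polling code calls after each status response:

```javascript
// Decide what the application should do after reading a job status.
function nextAction(status) {
  if (status === "PROCESSING") return "poll";      // schedule another status call
  if (status === "SUCCESS")    return "getResult"; // GET the key influencer result
  if (status === "FAILURE")    return "showError"; // display the returned error code
  return "showError";                              // unknown status: treat as an error
}
```

The success handler of the status call above could then, for example, use setTimeout to schedule the next poll whenever nextAction(data.status) returns "poll".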

Get result of a Key Influencer

$.ajax({
    type: "GET",
    url: root + "/api/analytics/keyinfluencer/" + gJobID,
    dataType: "json",
    success: function(data, status, request) {
        // Get KI and KR
        $('#tKI').html(data.modelPerformance.predictivePower);
        $('#tKR').html(data.modelPerformance.predictionConfidence);
        var contrib = 0;
        for (var i = 0; i < data.influencers.length; i++) {
            contrib = data.influencers[i].contribution * 100;
            addRowInTable("keyInfluencers", data.influencers[i].variable, contrib.toFixed(4));
        }
    },
    error: function(request, status, error) {
        var msg = status + ": " + request.state();
        $('#status').html(msg).css({"background-color": "red"});
    }
});

To obtain the result, use the GET verb. The URL mentions the key influencer service and the job ID. If the GET call succeeds, it is necessary to parse the variable data to display the key influencers and their contributions. The data variable is a structured list with several levels; its description is given in the API documentation.

Conclusion

There are some steps to perform to prepare your environment before starting the development of a cloud application. This is not specific to the predictive services; it is how SAP Cloud Platform handles services in general.

The way data are managed in the HANA database depends on each customer. The only constraint is to determine which data, from which tables, you need for your predictions. You then create a view containing these data, and that view becomes your dataset.

The mechanism to call REST services is standard in web applications and always the same: select the right verb, build the URL corresponding to the service you want to call, set the input parameters and parse the structured list that contains the results of the service.

 
