Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!
WouterLemaire
In my blog post about CAP and Machine Learning I showed how you can use the ML API inside CAP: https://blogs.sap.com/2019/08/19/combine-cap-m-with-machine-learning-sdk-api-part/

On top of that, I created a blog post showing how this CAP-created API can be used in a UI5 app in the same MTA: https://blogs.sap.com/2019/08/27/combine-cap-m-with-machine-learning-sdk-ui-part/

This works well and puts the complexity in the CAP Java service instead of the UI. Still, it is possible to use the ML API directly in the UI5 app. I did this in the Face Recognition app to check that the image contains a face before the end user fills in their details: https://blogs.sap.com/2019/05/28/face-recognition-app/

In this blog I want to show how you can use the ML API directly in your UI5 app. For demo purposes, I'm extending the app that I created earlier in these blog posts:

https://blogs.sap.com/2019/08/19/combine-cap-m-with-machine-learning-sdk-api-part/

https://blogs.sap.com/2019/08/27/combine-cap-m-with-machine-learning-sdk-ui-part/

 

The full project is also available on my github account: https://github.com/lemaiwo/MyCAPMAppWithML

 

Let’s start.


If you also followed the other blogs, you can just run your Java service. This will create an instance of the Machine Learning service, because it is defined in the mta.yaml. If this is not yet the case, you need to define the Machine Learning service in the mta.yaml; it can simply be added.

Machine Learning resource definition in mta.yaml
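In text form, such a resource definition looks roughly like the sketch below. The resource name is an example, and the service and plan names are assumptions for a trial landscape; check `cf marketplace` for the exact values in your account.

```yaml
resources:
  # Hypothetical resource name; service/plan values are assumptions for a
  # trial landscape - run "cf marketplace" to see the names in your account.
  - name: my-ml-service
    type: org.cloudfoundry.managed-service
    parameters:
      service: ml-foundation-trial-beta
      service-plan: beta
```

The modules that use the instance then list the resource name in their requires section.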



Deploying this app or running the service from the SAP Web IDE will create the instance:



Check the details of the instance:



You can also create this instance manually, but I want to use the same instance in the UI module as in the Java service. Defining it in the mta.yaml ensures that it is created when you deploy the MTA on any other account, so you don't need to remember to create the ML instance yourself.

In this example, I want to use this service directly in the UI module. This can be useful when you want to use the Machine Learning service for UI-related features but not at the database level; it depends on your use case.

Destinations


For this, destinations need to be created in the CF subaccount with the details of the ML instance. The ML API requires two destinations: one for authentication and one for the API itself:



The authentication destination requires the URL of the authentication service, which can be found in the sensitive data of the instance:



The first destination requires the following:

  • url: authentication url from ML instance sensitive data

  • authentication: basic authentication

  • user: clientid

  • password: clientsecret



URL=<ml keys url example: https://trialwl1l.authentication.eu10.hana.ondemand.com>
Name=ml_auth_api
ProxyType=Internet
Type=HTTP
Authentication=BasicAuthentication
User=<clientid>
Password=<clientsecret>

The second destination needs to reference the Face Feature Extraction API.



The destination to the API requires the following:

  • url: URL of the Face Feature Extraction API (can be found in the sensitive data of the ML instance); it should only contain the host. The path will be added in the UI5 app.

  • Authentication: no authentication, it will use the bearer token that the UI5 app receives from the authentication service



URL=https://mlftrial-face-feature-extractor.cfapps.eu10.hana.ondemand.com
Name=ml_api
ProxyType=Internet
Type=HTTP
Authentication=NoAuthentication

 

In the end, the CF subaccount should have two destinations:



In the latest version of the SAP Web IDE, it is possible to run the app directly on CF. Still, this can take a few seconds, and you might prefer to test on NEO. For testing on NEO, the destinations should also be created there (with exactly the same configuration):




UI5 app config


The configuration is done, so let's start coding. The UI5 app requires configuration to forward the API requests to the right destination, both for CF and for NEO.



To test the app on NEO, both destinations require routes in the "neo-app.json":
,{
    "path": "/webapp/mlauth",
    "target": {
        "type": "destination",
        "name": "ml_auth_trial"
    },
    "description": "classifier"
}, {
    "path": "/webapp/mlapi",
    "target": {
        "type": "destination",
        "name": "ml_face_trial"
    },
    "description": "classifier"
}

 

I added “/webapp” to the path to be able to test the app on NEO and CF without changing the path in the code.



CF requires configuration in the "xs-app.json":
,{
    "source": "^/mlapi/(.*)$",
    "target": "$1",
    "authenticationType": "none",
    "destination": "ml_api",
    "csrfProtection": false
},
{
    "source": "^/mlauth/(.*)$",
    "target": "$1",
    "destination": "ml_auth_api",
    "csrfProtection": false
}



I’m using a prefix for both destinations:

  • “mlapi” forwards all requests to the API; the prefix itself is not part of the URL.

  • “mlauth” forwards all requests to the authentication service; the prefix itself is not part of the URL.


The prefix is just to map the request to the correct destination.
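The prefix-to-destination mapping done by the approuter routes above can be sketched in plain JavaScript. This is only an illustration of how the regex and the `$1` capture group behave, not approuter code:

```javascript
// Sketch of how the xs-app.json routes resolve a request path:
// the prefix selects the destination, and "$1" (the capture group)
// becomes the path that is forwarded, so the prefix is stripped.
var routes = [
    { source: /^\/mlapi\/(.*)$/, destination: "ml_api" },
    { source: /^\/mlauth\/(.*)$/, destination: "ml_auth_api" }
];

function resolveRoute(path) {
    for (var i = 0; i < routes.length; i++) {
        var match = path.match(routes[i].source);
        if (match) {
            // match[1] corresponds to "$1" in the route's "target"
            return { destination: routes[i].destination, target: match[1] };
        }
    }
    return null; // no route matched
}

console.log(resolveRoute("/mlauth/oauth/token?grant_type=client_credentials"));
// → { destination: 'ml_auth_api', target: 'oauth/token?grant_type=client_credentials' }
```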

ML implementation


We’re ready to use the destinations in the UI5 app. The ML API will be consumed in the FaceService, which was already created in this blog post: https://blogs.sap.com/2019/08/27/combine-cap-m-with-machine-learning-sdk-ui-part/ . Here it will be extended with two functions:

  • getBearerToken: get the bearer token from the authentication service

  • getFaceFeatures: use the getBearerToken function, wrap the blob into FormData and send it to the ML API


FaceService.js


getBearerToken: function () {
    return this.http("mlauth/oauth/token?grant_type=client_credentials").get({
        "accept": "application/json"
    });
},
getFaceFeatures: function (body, originalImage) {
    return this.getBearerToken().then(function (token) {
        var tokenInfo = JSON.parse(token);
        var form = new FormData();
        form.append("files", body, originalImage.name);
        if (form.fd) {
            form = form.fd;
        }
        return this.http("mlapi/api/v2alpha1/image/face-feature-extraction/").post({
            "authorization": "Bearer " + tokenInfo.access_token,
            "accept": "application/json"
        }, form);
    }.bind(this));
}
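The token response returned by the authentication service is a JSON string, which getFaceFeatures parses to build the Authorization header. A minimal sketch of that parsing step, with a made-up sample response body:

```javascript
// Hypothetical example of an OAuth client-credentials token response body;
// the real values come from the mlauth destination.
var sampleResponse = JSON.stringify({
    access_token: "abc123",
    token_type: "bearer",
    expires_in: 43199
});

// Build the Authorization header the same way getFaceFeatures does
function buildAuthHeader(tokenResponse) {
    var tokenInfo = JSON.parse(tokenResponse);
    return "Bearer " + tokenInfo.access_token;
}

console.log(buildAuthHeader(sampleResponse)); // → "Bearer abc123"
```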

The view needs an additional button to upload an image and call the “getFaceFeatures” function:


<u:FileUploader buttonOnly="true" icon="sap-icon://action-settings" iconOnly="true" sameFilenameAllowed="true" change=".onConvertFace"/>

 

The controller has the event handler for this button, which will do the following:

  • Resize the image in case it’s too big (same utils object as in the other blog)

  • Call the getFaceFeatures function of the FaceService

  • Open the result in a new dialog



onConvertFace: function (oEvent) {
    var image = oEvent.getParameter("files")[0];
    return ImageHandler.resize(image).then(function (resizedImage) {
        return FaceService.getFaceFeatures(resizedImage.blob, image);
    }).then(function (oVector) {
        this.showVector(oVector);
    }.bind(this)).catch(function () {
        MessageToast.show("Error occurred...");
    });
},

 

The result will be shown in a dialog with the following UI5 code:



 
showVector: function (oVector) {
    if (!this._oVectorDialog) {
        this._oVectorDialog = sap.ui.xmlfragment("be.wl.FaceConvertor.view.dialog.Vector", this);
    }
    this._oVectorDialog.setModel(new JSONModel({
        extraction: oVector
    }), "vector");
    this._oVectorDialog.open();
},
onCloseVector: function (oEvent) {
    this._oVectorDialog && this._oVectorDialog.close();
}

 

The dialog contains just a simple text area that will hold the vector:



 
<core:FragmentDefinition xmlns="sap.m" xmlns:l="sap.ui.layout" xmlns:f="sap.ui.layout.form" xmlns:core="sap.ui.core">
    <Dialog title="New Face" showHeader="true" contentWidth="800px">
        <content>
            <TextArea cols="160" rows="30" value="{vector>/extraction}"/>
        </content>
        <buttons>
            <Button icon="sap-icon://decline" press=".onCloseVector"/>
        </buttons>
    </Dialog>
</core:FragmentDefinition>

 

Result


Test the app, click on the new icon, select an image and see the result:



It will work when running on both CF and NEO! This can be changed in the run configuration:
