Technology Blogs by Members
Sharadha1
Active Contributor
It started as a fun project. I was comparing the facial detection/facial feature APIs provided by Amazon Rekognition, Microsoft Azure Face and Google Vision with those provided by SAP Leonardo Machine Learning services. The number of facial features that can be extracted by the Face API provided by Microsoft Azure Cognitive Services is impressive. You can find more details on this here - Microsoft Azure Face API

According to the documentation,  the Face Detection API detects human faces in an image and returns the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes. Examples are head pose, gender, age, emotion, facial hair, and glasses.

This gave me ideas for several use cases which can be implemented using SAP Cloud Platform. I will be writing more on those use cases soon, but this blog focuses mainly on building a connector for this API using SAP Cloud Platform Open Connectors.

I have read a number of excellent, detailed blogs from divya.mary and laforted on building new connectors, but I never thought it would be so easy to create one. Let us look at the pre-requisites to start with.

Pre-requisites

  1. A trial account in SAP Cloud Platform (Neo) (http://account.hanatrial.ondemand.com)

  2. The 'Open Connectors' service enabled in your trial account. Refer to https://blogs.sap.com/2018/09/19/part-1-enable-sap-cloud-platform-open-connectors-in-trial/ for the steps.

  3. An Azure free account with a resource created for 'Cognitive Services' (refer URL). Make sure you note down the subscription key and the API endpoint.

  4. Swagger JSON - download it from the 'API definition' link and save it as a .json file.




 

Build the connector

  1. Go to the 'Open Connectors' service from the SCP cockpit and click 'Go to Service'.






2. Go to Connectors and click on 'Build New Connector' in the top right corner.



3. Click on 'Import'



4. Choose 'Swagger' and select the .json file we saved from the API definition (Pre-requisites, Step 4).



5. Click 'Continue Import'. For simplicity, we will choose only the '/detect' resource and click 'Import'.



6. The connector gets created and the setup screen is shown below.



 

Now change the base URL to your API endpoint (based on your region).



7. We will go ahead and set up the authentication. The Face API requires a subscription key, which provides access to the API.

Add a new configuration -> Blank



Add 'api key' and make it mandatory.

 



8. Now we have to pass the value of this configuration to the parameter 'Ocp-Apim-Subscription-Key' (refer to the API documentation for more details on this).

Add a new parameter.



Enter 'Name' as 'api.key' (the configuration ID) and choose 'configuration' as the 'Type'. Enter 'Ocp-Apim-Subscription-Key' in the 'Vendor Name' field and choose 'header' as the 'Type'.

 



 

9. Click on 'Save & Next'. This takes you to the 'Resources' screen, where we have to create and authenticate an instance for testing. Click on 'Authenticate Instance'.



Enter a name for the instance. Input the subscription key which we got from the Azure Cognitive Services resource (pre-requisite 3). Click 'Create Instance'.



 

That's it. We are ready to test now.

10. Once the instance is created successfully, we will see a screen like the one below. Click on 'Test in the API docs'.

 



11. Choose the instance and test the POST request. Click 'Try it out'.



 

12. Enter the request body with the URL of the face image to be detected. Again, refer to the Microsoft Azure documentation on the various ways of sending the image for detection.
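For example, a minimal request body pointing at a publicly hosted sample image looks like this:

```json
{
  "url": "https://homepages.cae.wisc.edu/~ece533/images/girl.png"
}
```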

 



13. Click 'Execute'. You will see the response from the Face Detect API. The API detects the face in the picture and returns the rectangle coordinates (width, top, left and height).
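The response is a JSON array with one entry per detected face; its shape looks roughly like this (the faceId and coordinate values are illustrative):

```json
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {
      "top": 131,
      "left": 177,
      "width": 162,
      "height": 162
    }
  }
]
```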



 

14. We can get attributes such as age and gender by sending the list of required attributes in the 'returnFaceAttributes' query parameter.
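With returnFaceAttributes=age,gender in the query string, each entry in the response additionally carries a faceAttributes block; a sketch with illustrative values:

```json
[
  {
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
      "gender": "female",
      "age": 22.0
    }
  }
]
```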





 

And of course, this can easily be called from an SAPUI5 application. All you need is a destination set up to connect to the Open Connectors service ( https://api.openconnectors.ext.hanatrial.ondemand.com/elements/api-v2/ )



Add this destination to the neo-app.json and use the code snippet below.
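A route entry for this inside the 'routes' array of neo-app.json might look like the following sketch (the path and destination name 'openconnectors' are assumptions here; they must match the destination you configured in the cockpit):

```json
{
  "path": "/openconnectors",
  "target": {
    "type": "destination",
    "name": "openconnectors"
  },
  "description": "SAP Cloud Platform Open Connectors"
}
```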
var json = "{\"url\":\"https://homepages.cae.wisc.edu/~ece533/images/girl.png\"}";
$.ajax({
    // The destination path maps to the Open Connectors /detect resource;
    // the query parameter requests the gender and age attributes
    url: "/openconnectors/detect?returnFaceAttributes=gender,age",
    data: json,
    type: "POST",
    contentType: "application/json",
    headers: {
        // Token generated by Open Connectors for the authenticated instance
        "Authorization": "User XXXX, Organization XXXX, Element XXX"
    }
})
.done(function (data) {
    // The response is an array with one entry per detected face
    var age = data[0].faceAttributes.age;
    MessageBox.success("Age:" + age, {
        styleClass: "sapUiSizeCompact"
    });
})
.fail(function () {
    MessageBox.error("Face detection request failed");
});
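The response handling in the .done callback can also be factored into a small, plain-JavaScript helper. This is just a sketch: the function name and the returned object are my own additions, while the field names follow the Face API response shape shown above.

```javascript
// Hypothetical helper: summarize the first detected face from a /detect
// response (an array of face objects). Returns null when no face was found.
function summarizeFirstFace(response) {
    if (!Array.isArray(response) || response.length === 0) {
        return null;
    }
    var face = response[0];
    var rect = face.faceRectangle || {};
    var attrs = face.faceAttributes || {};
    return {
        left: rect.left,
        top: rect.top,
        width: rect.width,
        height: rect.height,
        age: attrs.age,
        gender: attrs.gender
    };
}
```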

Authorization header values are generated by Open Connectors when the instance is authenticated. You can get them from the 'Test API' screen (refer to the screenshot under step 11 above). This is of course not the recommended way of passing the authorization header. Refer to Part 4 of Divya's blog, which talks about how to use SAP's API Management to manage token creation so that you do not need to define this header.

This is the output from the test UI5 application.



In the example above, I have passed the image as a URL in JSON format. The Face API also accepts the image as binary, but Open Connectors does not seem to support a content type of 'application/octet-stream' in request headers.

There are several other useful APIs from Azure Cognitive Services which you can try out in a similar fashion.

Feel free to comment in case of questions/feedback.