WouterLemaire
A few months ago, I posted a blog about an app I created to recognize faces. It was not for a customer or business use case, but simply to help myself remember the names of people I’ve met. You can find all the details here: https://blogs.sap.com/2019/05/28/face-recognition-app/

Now, in this post, I want to share the technical details behind this app. The app is built and running on top of SAP Cloud Platform Cloud Foundry and uses several services of the platform.

I started in the SAP Web IDE with the template “SAP Cloud Platform Business Application”. This template generated an MTA project with preconfigured modules for the database and the service. In both modules I used CDS to define the data I needed to store.

Next to storing data, I also wanted to store images of every person. I could have done this in the database as a base64 string, but I preferred a real document store for that. So, I ended up with the SCP Document Service on Neo.

For detecting and recognizing faces, I also needed machine learning in the UI module as well as in the service module. I needed it in the UI to check whether the picture contains a face before creating a new person with it. This check could also be done in the service module, but I wanted to do it in a separate request for a better user experience. The face recognition, on the other hand, needs to look at the full database. If you do this in the UI, it could happen that not all persons are available in the app because of lazy loading. Therefore, I implemented it in the service module.

Putting this all together resulted in the following architecture:



My first idea was to explain all the technical details and challenges that I faced during the development of this app. But that would be too much, and nobody would read it. Instead, I decided to create several technical how-to blog posts, one for each dedicated part. For these posts I created a smaller version of the Face Recognition app. It doesn’t have all the same features, but following all the how-to blog posts will take you to the Face Recognition app.

Understanding SAP Leonardo Machine Learning Service


Before I started, I needed to understand how I could do face recognition with SAP Leonardo Machine Learning. After investigating all the SAP ML APIs, I came up with the following solution:

https://blogs.sap.com/2019/05/14/how-to-do-face-recognition-with-sap-leonardo-machine-learning-servi...
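
In short, the solution boils down to two of those APIs: one that turns a detected face into a feature vector, and one that scores the similarity between vectors. The snippet below is only a minimal sketch of that flow against the API Business Hub sandbox; the endpoint paths, the APIKey header and the payload shapes are assumptions based on the trial APIs and may differ in your landscape, so use the linked post as the reference.

```javascript
// Minimal sketch of the two-step face recognition flow, assuming the Leonardo
// ML Foundation sandbox on the SAP API Business Hub. Endpoint paths, header
// names and payload shapes are assumptions and may differ per landscape.
const ML_HOST = "https://sandbox.api.sap.com/ml"; // assumption
const API_KEY = "<your API Business Hub key>";    // assumption

// Step 1: extract a feature vector for the face on a picture
async function extractFaceVector(imageBlob) {
  const form = new FormData();
  form.append("files", imageBlob, "face.jpg");
  const response = await fetch(`${ML_HOST}/facefeatureextraction/face-feature-extraction`, {
    method: "POST",
    headers: { APIKey: API_KEY },
    body: form
  });
  const result = await response.json();
  // One entry per detected face; check the API documentation for the exact
  // response structure.
  return result.predictions[0].faces[0].face_feature_vector;
}

// Step 2: compare a new vector against the stored vectors of all persons
async function findBestMatch(newVector, storedVectors /* e.g. { "Wouter": [...] } */) {
  const form = new FormData();
  // The similarity scoring service accepts the vectors and options as form
  // fields; the field names and JSON shapes below are assumptions.
  form.append("texts", JSON.stringify({ candidate: newVector, ...storedVectors }));
  form.append("options", JSON.stringify({ numSimilarVectors: 1 }));
  const response = await fetch(`${ML_HOST}/similarityscoring/similarity-scoring`, {
    method: "POST",
    headers: { APIKey: API_KEY },
    body: form
  });
  return response.json(); // similarity scores per compared vector
}
```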

Saving the face images




Second, I needed a place to store the images of the faces. Not for the face recognition itself, just to have a picture of the person in case I don’t remember the name anymore... I could store this as a base64 string in the database, but that’s not the best way, and string fields in CAP have a limited length. Searching for another solution brought me to the SCP Document Service. In the following blog, I explain all the steps to configure the SCP Document Service and how to use it in your UI5 app:

https://blogs.sap.com/2019/08/05/how-to-use-the-document-repository-service-in-your-ui5-app/
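
As a rough illustration of what such an upload looks like from the browser, here is a minimal sketch based on the standard CMIS browser binding that the Document Service speaks. The route name, the repository id and the name of the content part are assumptions; the linked post contains the actual configuration and code.

```javascript
// Minimal sketch of uploading a face image through a route/destination that
// points to the Document Service (CMIS browser binding). The route name,
// repository id and content part name are assumptions.
async function uploadFaceImage(imageBlob, fileName) {
  const repositoryId = "<your repository id>"; // assumption
  const form = new FormData();
  // Standard CMIS browser-binding fields for creating a document
  form.append("cmisaction", "createDocument");
  form.append("propertyId[0]", "cmis:objectTypeId");
  form.append("propertyValue[0]", "cmis:document");
  form.append("propertyId[1]", "cmis:name");
  form.append("propertyValue[1]", fileName);
  form.append("media", imageBlob, fileName); // content part name may differ
  const response = await fetch(`/docservice/browser/${repositoryId}/root`, {
    method: "POST",
    body: form
  });
  return response.json(); // contains the object id of the new document
}
```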

I went one step further and created my own app to view documents in the SCP Document Service:

https://blogs.sap.com/2019/07/29/document-repository-viewer/

(This is a missing feature of the SCP Document Service, if you ask me…)

Face Recognition API


Next, I created the database layer for storing the faces and exposed it as an OData API. In the Java service behind this API, I used the ML API for searching through all the faces in the database.


Database module


In the database module, I have defined exactly one entity in the CDS file:



This entity will store all the faces with a:

  • Unique ID

  • Firstname

  • Lastname

  • Vectors

    • This will keep a vector of the face that the Machine Learning service found in the image of the person. In the UI, I’ll use the ML API to find the face and return its vector. The UI will then send the result together with all the other information to the service.

    • This means that I’ll keep a vector for each person. When I want to look for a person, I’ll compare all these vectors with the vector of the image which I use in the search.



  • Image

    • This will keep the location of the image in the Document Service




After creating or changing the db module, it is important to run the CDS build again.



This will update the csn.json file in the service layer:



This file maps the entities and fields of the service to the db entities and fields, so the two need to stay in sync.

Service module


In the service module, I started by exposing the “Faces” entity from the database module as the entity Face.

I’ve added two actions next to the Face entity. These actions are two operations that don’t fit in the CRUDQ model:

  • findFaceByVector

    • This action will compare the incoming vector with all the vectors of faces in the database by using Machine Learning and return the best match.



  • findFaceByImage

    • This will do exactly the same as the other action, but it will first convert the incoming base64 image string to a vector by using machine learning.




In the end, I only used the action findFaceByImage. The other action was just there for testing purposes, so I could check whether the conversion failed or something else did.

Next to that, you might have noticed that I use actions instead of functions. Both are almost the same, but actions use the HTTP POST method instead of GET. I’m sending an image as a base64 string to my service, which is way too long for a parameter of a GET request.

You can find more information about actions here:

https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/7347f9d7aff44a2eac950b7be8f...
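
To make the difference concrete, this is roughly what the call to the unbound action looks like from the browser. The service path and the parameter name are assumptions that depend on the service definition and OData protocol version; the essential point is that the base64 image travels in the POST body instead of the URL.

```javascript
// Minimal sketch of invoking the findFaceByImage action. The service path
// "/odata/v2/face" and the parameter name "image" are assumptions.
async function findFaceByImage(imageBase64) {
  const response = await fetch("/odata/v2/face/findFaceByImage", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: imageBase64 }) // payload in the body, not the URL
  });
  if (!response.ok) {
    throw new Error(`findFaceByImage failed with status ${response.status}`);
  }
  return response.json(); // best matching face (id, first name, last name, ...)
}
```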

I created a how-to blog post with more technical steps, from the beginning up to the API, on how to create a service that integrates the ML service:

https://blogs.sap.com/2019/08/19/combine-cap-m-with-machine-learning-sdk-api-part/

Face Recognition UI




On top of this service, I’ve created a UI5 app to create new persons and to search based on pictures. I made an example of how to consume this service in a UI module of the same MTA in the following post:

https://blogs.sap.com/2019/08/27/combine-cap-m-with-machine-learning-sdk-ui-part/

The concept is the same in the Face Recognition app, but the service is more complex. Next to the ML part, the Face Recognition app will also create faces. This is not documented in this post because it’s a simple POST request, which has been documented many times on the web.
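
For completeness, here is a minimal sketch of such a create request, reusing the field names described earlier; the service path and the exact payload shape are assumptions that depend on how the service is exposed.

```javascript
// Minimal sketch of creating a Face entry. The path "/odata/v2/face/Face" and
// the lower-case field names are assumptions based on the entity described above.
async function createFace(firstname, lastname, vector, imageUrl) {
  const response = await fetch("/odata/v2/face/Face", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      firstname: firstname,
      lastname: lastname,
      vectors: JSON.stringify(vector), // the face feature vector, stored as a string
      image: imageUrl                  // location of the image in the Document Service
    })
  });
  return response.json();
}
```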

ML Service in the Face Recognition UI




The Face Recognition app also uses the ML service directly from the UI module. This is described in a how-to blog post as well, covering all the additional configuration and code:

https://blogs.sap.com/2019/10/01/machine-learning-in-ui5-on-scp-cloudfoundry/
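
Conceptually, the UI-side check from the architecture above boils down to something like this: send the picture to the face detection service through a route of the app and only continue when at least one face is found. The route path and the response structure are assumptions; the linked post describes the actual configuration.

```javascript
// Minimal sketch of the UI-side check: is there a face on the picture before
// a new person is created? The route "/ml-dest/face-detection" and the
// response structure are assumptions.
async function pictureContainsFace(imageBlob) {
  const form = new FormData();
  form.append("files", imageBlob, "picture.jpg");
  const response = await fetch("/ml-dest/face-detection", {
    method: "POST",
    body: form
  });
  const result = await response.json();
  const prediction = result.predictions && result.predictions[0];
  const faces = prediction ? prediction.faces : [];
  return faces.length > 0; // true when at least one face was detected
}
```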

Deployment


Finally, deployment. I know, there are already several posts about this topic… but I still struggled with this. This took way too much time and deserved an additional blog post:

https://blogs.sap.com/2019/08/29/combine-cap-m-with-machine-learning-sdk-deployment-part/

 

 

The code of the full project is available on GitHub: https://github.com/lemaiwo/MyCAPMAppWithML

The full project code of the Face Recognition app: https://github.com/lemaiwo/FaceRecognition

Following all these posts and putting everything together into one MTA app will enable you to create your own face recognition app!