Over the years you get to know more and more people, and of course you always like to greet them by name. From time to time, though, you can’t remember the name of someone you know… which can be awkward.
The same thing happens to me too, more and more lately… To avoid awkward situations, I’ve been thinking about a solution. I started with an app that stores all the people I know together with a picture. This should help me find the name of someone I already know. I would still need to search the list, though, which isn’t easy once the list grows and you don’t remember the name…
I needed to add something to the app that could give me the name of a person just by taking a picture. This way, I don’t need to search through a big list and don’t have to remember the name.
Before I could start on this app, I still had to do some research to answer a few questions like:
- Where can I store the people?
- Where can I store the images?
- How can I recognize a face and find it in my list?
After some investigation, I came up with the following:
- Storing data? → A good opportunity to use the new Cloud Application Programming Model (CAPM). CAPM enables you to create a complete Java or Node.js OData service on SAP Cloud Platform Cloud Foundry by only creating a Core Data Services (CDS) file.
- Images? → For this, I came across the Document Service in the SAP Cloud Platform Neo environment.
- Face recognition → SAP Leonardo Machine Learning Service offers several APIs related to human faces, for example an API to detect the face of a person in an image, an API to extract face features, and so on. Although there is no specific API for comparing faces, it can be done by combining the Face Feature Extraction API with the Similarity Scoring API. Face feature extraction returns a vector for the face that the API found in the image. The vectors of all images can then be passed to the Similarity Scoring API to compare images and find a match between them.
- The technical steps are described in this blog: https://blogs.sap.com/2019/05/14/how-to-do-face-recognition-with-sap-leonardo-machine-learning-service/
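Conceptually, the Similarity Scoring API boils down to comparing two feature vectors, for example with cosine similarity. A minimal sketch in plain JavaScript, just to illustrate the idea (this is not the actual SAP implementation):

```javascript
// Sketch: compare two face feature vectors with cosine similarity.
// The Leonardo Similarity Scoring API does this server-side; this
// plain-JavaScript version only illustrates the concept.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1, orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

The closer the score is to 1, the more likely the two images show the same face.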
That’s all I needed to know and was ready to start 😊
I developed a solution on SAP Cloud Platform Cloud Foundry that offers two possibilities. On the one hand, a nice UI5 app that allows me to store all my contact persons together with a picture. On the other hand, the ability to search through all my contacts by just uploading a new picture of the person.
When I come across an old contact whose name I don’t remember, I can just take a new picture, upload it to the app, and it will tell me who it is. Of course, I’ll need to take the picture without that person noticing…
Here is a small demo of how I can easily add a new person and how to search the list with another image:
- First I add Daniel Craig to the list with a picture
- You’ll see how to search the list by using another picture of one of the contact persons
This solution is built as an MTA project on SAP Cloud Platform Cloud Foundry. In this MTA project I added the following layers:
- HANA module as the Database layer to store my contact persons with their related Face Feature Extraction Vector
- Java Service module to expose the data in HANA as an OData service
- UI5 for the UI Layer
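The layers above map onto the project’s mta.yaml roughly as follows. This is a simplified sketch: module names, paths, and the ML service and plan are placeholder assumptions, not copied from the actual project.

```yaml
_schema-version: "2.1"
ID: FaceRecognition
version: 0.0.1

modules:
  - name: FaceRecognition-db      # HANA module: HDI artifacts for the contact table
    type: hdb
    path: db
    requires:
      - name: hdi-container
  - name: FaceRecognition-srv     # Java module exposing the data as OData
    type: java
    path: srv
    requires:
      - name: hdi-container
      - name: ml-service
  - name: FaceRecognition-ui      # UI5 front end
    type: html5
    path: ui

resources:
  - name: hdi-container
    type: com.sap.xs.hdi-container
  - name: ml-service              # Leonardo ML instance; offering/plan are placeholders
    type: org.cloudfoundry.managed-service
    parameters:
      service: <ml-service-offering>
      service-plan: <plan>
```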
Besides the MTA project I also needed other services:
- SAP Leonardo Machine Learning for the face feature extraction and similarity scoring, to compare vectors and find the right person.
- SCP Document Service to store the images. (I could have stored them as a base64 string in the database, but I prefer a document store for images)
Machine Learning is used in both the UI and the service layer. In the UI layer it is needed to validate the image before showing the create dialog. For searching the list of contact persons, however, it’s better to do the comparison on the server side: imagine a list of 100 persons where the UI only shows 20 of them due to lazy loading…
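The server-side matching described above can be sketched as follows: given the query image’s feature vector, compare it against every stored contact’s vector and return the best score. This is a hypothetical helper that uses cosine similarity as a stand-in for the Similarity Scoring API the real service calls:

```javascript
// Sketch: server-side matching over the FULL contact list, assuming each
// contact's face feature vector is already stored in the database.
// Cosine similarity stands in for the Leonardo Similarity Scoring API.
function findBestMatch(queryVector, contacts, threshold = 0.8) {
  const cosine = (a, b) => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      na += a[i] * a[i];
      nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  };
  let best = null;
  for (const contact of contacts) {
    const score = cosine(queryVector, contact.vector);
    if (!best || score > best.score) best = { name: contact.name, score };
  }
  // Only accept the match if it clears the threshold.
  return best && best.score >= threshold ? best : null;
}
```

Doing this in the service layer guarantees the whole table is scanned, not just the 20 rows the lazily loading UI happens to have fetched.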
I will publish a blog with more technical details soon.
Install the solution
Because this is an MTA project, it can easily be installed on any SCP account. Follow these steps to install it on your account.
You’re always welcome to improve this app via pull requests on the Git project: https://github.com/lemaiwo/FaceRecognition
Download the MTAR file here: https://drive.google.com/open?id=11XzPfH0IBDBruI4xkkt7yIDRmtz6lS0C
(It should be public)
Log in to Cloud Foundry with the command cf login
Run the command
cf deploy FaceRecognition_0.0.1.mtar
Or clone my GitHub repository in SAP Web IDE
Right-click the project and select Build > Build.
This will generate an MTAR file which you can deploy:
If the deployment is successful, you’ll see the app in your space:
Activate Document Service
Activate the Document Service on your Neo account and create a repository (store the generated key):
Create a Java proxy to your Neo Document Service as described in the documentation. You’ll need the generated key in your proxy.
Go back to Cloud Foundry and create a destination to your Document Service proxy with the following properties:
URL=https://<your cmis proxy>.hanatrial.ondemand.com
Name=documentservicewl
ProxyType=Internet
Type=HTTP
Authentication=BasicAuthentication
Description=Connection to Proxy Bridge App
User=<your user>
Password=<your password>
A Machine Learning instance is automatically instantiated by the MTA project because it’s used by the Java service. Now we also want to use it in our UI layer, so we still need to add two destinations that give the UI layer access to the same Machine Learning instance.
(This could be improved by defining the destinations in the mta.yaml, and maybe using one destination instead of two.)
If you go to your space and open Service Instances, you’ll see the ml-service. This is the Machine Learning instance used by the Java service. (You might need to open the Service Instances menu twice before the instances show up.)
Click on it to see all the required information, like clientid, secret, … You need the clientid, the clientsecret, and the url at the bottom for the destinations.
We need to create two destinations: one for fetching the bearer token and one to access the Face Feature Extraction API. This is needed because we use this API directly in the UI layer to detect faces in images. First, create the destination for the token:
URL=<ml keys url, for example https://p935700trial.authentication.eu10.hana.ondemand.com>
Name=ml_auth_api
ProxyType=Internet
Type=HTTP
Authentication=BasicAuthentication
User=<clientid>
Password=<clientsecret>
Next, add a destination to the Face Feature Extraction API:
URL=https://mlftrial-face-feature-extractor.cfapps.eu10.hana.ondemand.com
Name=ml_api
ProxyType=Internet
Type=HTTP
Authentication=NoAuthentication
You should now have three destinations; make sure their names are exactly as shown:
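From the UI layer, the two destinations are chained: the token is fetched through ml_auth_api and then passed as a bearer token to the call through ml_api. A hedged sketch of how that might look in the UI code. The route prefixes (/ml_auth_api, /ml_api) depend on the app router configuration, and the API path follows the standard Leonardo trial endpoints; treat both as assumptions:

```javascript
// Sketch: fetch an OAuth token via the ml_auth_api destination, then call
// the Face Feature Extraction API via the ml_api destination.
// The route prefixes and API paths are assumptions about the app router setup.
async function extractFaceFeatures(imageBlob) {
  // XSUAA client-credentials flow; the destination adds the basic auth header.
  const tokenResponse = await fetch(
    "/ml_auth_api/oauth/token?grant_type=client_credentials"
  );
  const { access_token } = await tokenResponse.json();

  const form = new FormData();
  form.append("files", imageBlob, "face.jpg");

  const apiResponse = await fetch(
    "/ml_api/api/v2/image/face-feature-extraction",
    {
      method: "POST",
      headers: { Authorization: "Bearer " + access_token },
      body: form,
    }
  );
  return apiResponse.json();
}
```

The response contains the feature vector for each detected face, which is what gets stored next to the contact person.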
Run the app
Go to your space and click on “FaceRecognition_appRouter”
Click on the link in the section Application Routes
We need to append the path of the UI layer to the URL. Add the following to the URL:
This is generated based on the namespace and name of the UI layer.
You get a login screen now.
You’ll see the app after you log in:
You can start uploading persons 😊
Be aware that the UI is not yet completely production-ready and might still have some bugs. For example, it won’t do anything when it can’t find a face. The error handling isn’t finished yet.
You’re free to help improve this solution by contributing to this GitHub repo: https://github.com/lemaiwo/FaceRecognition