
Add Machine Learning capabilities in SAP Fiori App with Google ML Kit Custom Model

We are going to build an Android Fiori app with SAP Web IDE that identifies objects in images using a Google ML Kit custom model. With a custom model, you can train your own ML model to recognize objects according to your needs.

Prerequisites

Follow my earlier blog to set up the local HAT with SAP Web IDE Full Stack and install the required Cordova plugins:

  • cordova-plugin-ml
    https://bitbucket.org/bhivedevs/cordova-plugin-ml-text.git
  • cordova-plugin-firebase
    https://www.npmjs.com/package/cordova-plugin-firebase
  • cordova-plugin-camera-preview
    https://www.npmjs.com/package/cordova-plugin-camera-preview

We will modify cordova-plugin-ml and add some Java code later on.

Don’t forget to set up the Firebase console project, download the google-services.json file, and add it to C:\Users\<user>\SAPHybrid\Scanner\hybrid\platforms\android\

In this tutorial, we will simply use an existing ML model from Google codelabs.

Modify build.gradle 

Open build.gradle and add aaptOptions and the required dependencies:

android {
    // Keep the TensorFlow Lite model uncompressed so it can be loaded from assets
    aaptOptions {
        noCompress "tflite"
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    // SUB-PROJECT DEPENDENCIES START
    implementation(project(path: "CordovaLib"))
    implementation "com.google.android.gms:play-services-tagmanager:+"
    implementation "com.google.firebase:firebase-core:+"
    implementation "com.google.firebase:firebase-messaging:+"
    implementation "com.google.firebase:firebase-config:+"
    implementation "com.google.firebase:firebase-perf:+"
    implementation "com.google.firebase:firebase-ml-vision:18.0.2"
    implementation "com.android.support:exifinterface:$ANDROID_SUPPORT_LIBRARY_VERSION"
    implementation 'com.google.firebase:firebase-ml-model-interpreter:16.2.4'
    // SUB-PROJECT DEPENDENCIES END
}

Then sync the project.

Adding TensorFlow Lite Model to Project Asset Folder

We are going to use an existing model from TensorFlow. You can also build your own custom model; refer to this tutorial to learn how.

We will use the local version of the model for the offline scenario.

Download the TF Lite model from here and extract it.

Get the mobilenet_v1_1.0_224_quant.tflite.

For the labels, we need to get the label.txt from here.

Extract the code and get the label.txt from \mlkit-android-master\final\app\src\main\assets\

Now we have two files: mobilenet_v1_1.0_224_quant.tflite and label.txt.

Add these files to the assets folder of the Android project.
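The MobileNet model outputs a vector of scores indexed by class, and label.txt supplies the human-readable name for each index, one label per line. As a rough illustration (the helpers below are hypothetical, not part of the plugin), the lookup amounts to:

```javascript
// Parse the contents of label.txt (one label per line) into an array,
// then map a class index from the model output to its label.
// parseLabels and labelForIndex are illustrative helpers, not plugin APIs.
function parseLabels(fileContents) {
    return fileContents
        .split("\n")
        .map(function (line) { return line.trim(); })
        .filter(function (line) { return line.length > 0; });
}

function labelForIndex(labels, index) {
    return labels[index] || "unknown";
}

// Example: a tiny stand-in for the label.txt contents
var labels = parseLabels("background\ntiger cat\ngolden retriever\n");
labelForIndex(labels, 1); // "tiger cat"
```

The native side of the plugin does the equivalent lookup in Java after running the interpreter.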

Modify cordova-plugin-ml

Open MLtext.java from this location:

Update the code with MLtext.java.
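On the JavaScript side, the plugin is a thin wrapper around the standard Cordova bridge. A simplified sketch is shown below; the service name "MLtext" and the action name are assumptions inferred from the plugin's call shape, not verified against the plugin source:

```javascript
// Simplified sketch of the plugin's JS-to-native bridge.
// cordova.exec hands the action name and options to the native class
// (MLtext.java), which runs the TensorFlow Lite interpreter on the image.
var mltext = {
    getText: function (onSuccess, onFail, options) {
        cordova.exec(onSuccess, onFail, "MLtext", "getText", [options]);
    }
};
```

Whatever the Java code passes to the success callback arrives as the recognizedText argument in the app.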

Call the Plugin

How do we call this plugin? It is very easy: just call the takePicture() JavaScript method from the SAPUI5 XML view and read the result from recognizedText.blocktext.

takePicture: function () {
    CameraPreview.takePicture({
        quality: 100
    }, function (imgData) {

        // Pass the captured image to the ML plugin for recognition
        mltext.getText(onSuccessText, onFailText, {
            imgType: 4,
            imgSrc: imgData
        });

        function onSuccessText(recognizedText) {
            console.log(recognizedText);
            alert("Text: " + recognizedText.blocktext);
        }

        function onFailText(message) {
            alert('Failed because: ' + message);
        }
    });
},
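Because both CameraPreview.takePicture and mltext.getText are callback-based, the nesting grows quickly if you add more steps. One option is to wrap the chain in a Promise; this is only a sketch (recognizeFromCamera is a hypothetical helper, and the CameraPreview and mltext globals are assumed to be provided by the plugins at runtime):

```javascript
// Hypothetical Promise wrapper around the callback-based plugin APIs.
// CameraPreview and mltext come from the Cordova plugins at runtime.
function recognizeFromCamera() {
    return new Promise(function (resolve, reject) {
        CameraPreview.takePicture({ quality: 100 }, function (imgData) {
            // Forward the captured image to the ML plugin
            mltext.getText(resolve, reject, { imgType: 4, imgSrc: imgData });
        });
    });
}

// Usage inside a controller method:
// recognizeFromCamera()
//     .then(function (recognizedText) { alert("Text: " + recognizedText.blocktext); })
//     .catch(function (message) { alert("Failed because: " + message); });
```

This keeps the controller method flat and lets you reuse the recognition step elsewhere in the app.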

Now that everything is set up, you can build the project and install the .apk onto your Android phone.

The complete source code can be found on my Git repository.

Thanks for reading, and let me know if you have any questions.
