Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!
former_member186543
Active Contributor
Hello everyone,

Welcome to the world of Deep Learning! This application was also demoed at Sapphire 2017.

I am excited to describe a very interesting and complex implementation we built to demonstrate the integration of SAP with the Google ML Engine and TensorFlow, taking the user experience to an entirely new level.

Scenario:

An end user takes a picture with their computer's webcam through a UI5 screen, and in real time the system recognizes the person/object in the image - harnessing the power of deep learning and TensorFlow in SAP.

This blog post is fairly long given the complexity of the scenario, so please bear with me 🙂

Architecture design:



We integrated the following technologies and skills to implement it:

  1. SAP UI5 on Web IDE to build the webcam screen

  2. Knowledge of implementing convolutional neural networks (deep learning)

  3. TensorFlow knowledge to train a convolutional neural network in Python - https://www.tensorflow.org/tutorials/

  4. An account on Google Cloud Platform to host the final neural network model


Please note: some terminology such as Python and TensorFlow might be new to you, so you may want to refer to external links/papers to learn more about Python, TensorFlow, convolutional nets, etc.

So let's start. We will go through it step by step, with as many screenshots as possible:


  • SAP UI5 HTML IFRAME embedded inside an XML view to invoke the camera and take a picture




// Init canvas
function init() {
    // Get the canvas and obtain a 2D context for drawing in it
    canvas = document.getElementById("myCanvas");
    ctx = canvas.getContext('2d');
}

// Take an image from the webcam -> data is stored in window.filebinary
function snapshot() {
    var image = new Image();
    // 'video' is the <video> element streaming the webcam feed
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    var data = canvas.toDataURL('image/jpeg');
    image.src = data;
    window.filebinary = data;
}
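Note that canvas.toDataURL('image/jpeg') returns a data-URL string (data:image/jpeg;base64,...), not raw bytes, so the receiving side has to split off the base64 payload and decode it before using the image. A minimal sketch in Python (the data-URL prefix format is standard; the sample payload below is fabricated, not a real image):

```python
import base64

def decode_data_url(data_url):
    """Split a data URL of the form data:<mime>;base64,<payload>
    and return (mime_type, raw_bytes)."""
    header, payload = data_url.split(",", 1)
    mime = header[len("data:"):].split(";", 1)[0]
    return mime, base64.b64decode(payload)

# Fabricated stand-in: "JPEG" plays the role of real image bytes
sample = "data:image/jpeg;base64," + base64.b64encode(b"JPEG").decode()
print(decode_data_url(sample))  # ('image/jpeg', b'JPEG')
```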

Output:



Product image search results:



SAP UI5 code to trigger the attachment GW service, carrying the file binary data to SAP GW:
sap.ui.define(["sap/ui/core/mvc/Controller"], function(BaseController) {
    "use strict";
    return BaseController.extend("ml001.app.controller.webcam", {

        _onButtonPress1: function() {
            this.getView().setBusy(true);
            var token1 = "";
            var out;
            var xhr = new window.XMLHttpRequest();
            var ifr = this.getView().byId("attachmentframe");
            var file_bin = ifr.getDomRef().contentWindow.filebinary;

            // GET X-CSRF-TOKEN - Hasan Rafiq
            $.ajax({
                url: '/sap/opu/odata/sap/<ZSERVICE>/',
                type: 'GET',
                async: false,
                contentType: "application/atom+xml;type=entry",
                dataType: "",
                headers: {
                    "x-csrf-token": "fetch"
                },
                success: function(data, textStatus, request) {
                    token1 = request.getResponseHeader("x-csrf-token");
                }
            });

            // POST file binary to GW attachment service - Custom Hasan Rafiq
            $.ajax({
                type: 'POST',
                url: '/sap/opu/odata/sap/<ZGW_SERVICE>/getimageSet',
                async: false,
                xhr: function() {
                    // Report upload progress
                    xhr.upload.addEventListener('progress', function(evt) {
                        if (evt.lengthComputable) {
                            var percentComplete = Math.round(evt.loaded * 100 / evt.total);
                            $('.progress').val(percentComplete);
                            console.log(percentComplete);
                        }
                    }, false);

                    xhr.addEventListener('error', function(evt) {
                        alert('There was an error attempting to upload the file.');
                    }, false);

                    return xhr;
                },
                contentType: 'image/jpeg',
                processData: false,
                data: file_bin,
                headers: {
                    'x-csrf-token': token1
                },
                success: function(data) {
                    out = data.documentElement.innerHTML;
                }
            });

            if (out) {
                sap.ui.controller("ml001.app.controller.ee6f9677cf93c15a0d5cde1b5_S5").manualUploadComplete(out);
                sap.ui.controller("ml001.app.controller.ProductResult").fillValues();
                var dialogName = "ProductResult";
                var dialog = window.dialogs[dialogName];
                dialog.open();
            } else {
                alert("Error in calling ML service");
            }

            var oDialog = this.getView().getContent()[0];
            oDialog.close();
        }
    });
}, /* bExport= */ true);


We retrained an existing Inception neural network model available in the TensorFlow library ( http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz ) by retraining its last layer on new images and categories. The configuration of the neural net is below:

  1. Convolutional layer #1

  2. Max pooling layer #1  => ReLU activations

  3. Convolutional layer #2

  4. Max pooling layer #2  => ReLU activations

  5. Fully connected layer => 2500 neurons

  6. Softmax layer => softmax activation
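The ReLU and softmax activations mentioned above are simple functions: ReLU clamps negative values to zero, and softmax turns the final layer's raw scores into a probability distribution. A minimal pure-Python sketch (the sample scores are made up for illustration):

```python
import math

def relu(xs):
    # ReLU activation: negative inputs become 0, positives pass through
    return [max(0.0, x) for x in xs]

def softmax(xs):
    # Subtract the max for numerical stability, then normalize exponentials
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

print(relu([-1.0, 2.0]))               # [0.0, 2.0]
probs = softmax([2.0, -1.0, 0.5])      # made-up logits from the fully connected layer
print(probs, sum(probs))               # probabilities sum to 1
```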


Retraining the existing TensorFlow model to categorize new objects - steps followed:

  1. Download the Python retrainer package https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py

  2. Create an image folder

  3. Create subfolders named with the category labels ( e.g., not_hasan ), each with around 100 images of that category taken from different angles, ambience, brightness, etc.

  4. Run retrain.py, passing the location of the image folder, to retrain the model

    1. This will return a .PB file ( the TensorFlow model )

    2. Run the code from the command line; it will write the file C:/TMP/output_graph.pb, which is the final neural network model



  5. It took around an hour for the whole neural network to be retrained on an Intel Core i7 machine with SSE4.1 enabled and TensorFlow 1.2
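The steps above boil down to a simple on-disk convention: one subfolder per category label, each holding that label's training images. A small sketch of preparing that layout (the helper name and the temporary root are my own, not part of retrain.py):

```python
import os
import tempfile

def make_training_layout(root, categories):
    """Create the image-folder layout retrain.py expects:
    one subfolder per category label, each to be filled with
    ~100 JPEGs of that category."""
    for label in categories:
        os.makedirs(os.path.join(root, label), exist_ok=True)
    return sorted(os.listdir(root))

root = tempfile.mkdtemp()
print(make_training_layout(root, ["hasan", "not_hasan"]))  # ['hasan', 'not_hasan']
```

retrain.py is then pointed at `root`, and each subfolder name becomes one output class of the retrained softmax layer.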


Prediction on new images with Python code - steps followed ( needs Python knowledge ):

  1. I wrote custom Python code that reads the newly created .PB model file and a new image file placed in a folder, in order to test the performance of the newly trained model

  2. Python code


# Hasan -> Run existing transfer-learning TF PB model to classify new images
import numpy as np
import tensorflow as tf

imagePath = 'image_to_classify/test.jpg'
modelFullPath = 'output_graph.pb'
labelsFullPath = 'output_labels.txt'

# Load the existing TensorFlow graph from the .PB file
def load_existing_tf_graph():
    with tf.gfile.FastGFile(modelFullPath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')

# Run the existing graph on a new image
def classify_image():
    # Read the image file as binary
    image_data = tf.gfile.FastGFile(imagePath, 'rb').read()

    # Run a TF session -> classifier
    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        predictions = np.squeeze(predictions)

    # Map the prediction probability vector to the label texts the model was trained on
    with open(labelsFullPath, 'r') as f:
        labels = [line.strip() for line in f.readlines()]
    for x in range(predictions.shape[0]):
        human_string = labels[x]
        score = predictions[x]
        print('%s (score = %.5f)' % (human_string, score))

# Main thread
if __name__ == '__main__':
    load_existing_tf_graph()
    classify_image()


  • The output of this code is a probability vector giving the probability that the image is "Hasan" or "Not_hasan"


The output below shows that the model is working well, with around 99% confidence, and is ready to be used:
{
"Hasan": 0.99126,
"Not_hasan": 0.00874
}
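To turn such a probability vector into a single answer, the consumer just picks the label with the highest score. A tiny sketch (the helper name is my own):

```python
def top_prediction(scores):
    """Return the (label, probability) pair with the highest score."""
    label = max(scores, key=scores.get)
    return label, scores[label]

# The probability vector returned by the model, as shown above
print(top_prediction({"Hasan": 0.99126, "Not_hasan": 0.00874}))  # ('Hasan', 0.99126)
```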

Next: in order to have our UI5 webcam image recognized by our Python script, we need to upload the created TensorFlow model .PB file to the GCP ML Engine.


Since there are many steps involved in deploying the newly created model to the GCP Machine Learning Engine, they are not shown here.

You may refer to: https://cloud.google.com/ml-engine/docs/concepts/prediction-overview

Once this is done, your model will be ready to take an input image as binary and return the results.
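For reference, ML Engine online prediction accepts a JSON body of the form {"instances": [...]}, with binary tensors passed base64-encoded under a {"b64": ...} wrapper. A sketch of building that body (the input name "image_bytes" depends on the exported model's serving signature and is an assumption here; the image bytes are a fabricated stand-in):

```python
import base64
import json

def build_predict_body(image_bytes, input_name="image_bytes"):
    """Build the JSON body for an ML Engine online prediction request.
    Binary inputs are base64-encoded under a {"b64": ...} wrapper;
    input_name must match the deployed model's serving signature (assumed)."""
    return json.dumps({
        "instances": [
            {input_name: {"b64": base64.b64encode(image_bytes).decode()}}
        ]
    })

body = build_predict_body(b"JPEG")  # fabricated stand-in for real image bytes
print(body)
```

This is the string an SAP GW service would POST to the model's prediction endpoint.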

The final part of this was the backend SAP GW service:


The GW service reads the image's binary data in an attachment GW service and triggers a POST REST call to the Google ML Engine to get back the results from our neural network hosted there.

Steps followed:

  1. Create an RFC destination to the GCP ML Engine URL where the model is hosted and listening for HTTP requests




In the GW service method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CREATE_STREAM, we wrote the code below to call the service and parse the output:
DATA: lo_http_client TYPE REF TO if_http_client,
      lo_rest_client TYPE REF TO cl_rest_http_client,
      lo_request     TYPE REF TO if_rest_entity,
      lo_response    TYPE REF TO if_rest_entity,
      lv_url         TYPE string,
      http_status    TYPE string,
      lv_body        TYPE string,
      response       TYPE string.

DATA lr_json_deserializer TYPE REF TO cl_trex_json_deserializer.
TYPES: BEGIN OF ty_json_res,
         hasan     TYPE string,
         not_hasan TYPE string,
       END OF ty_json_res.
DATA: json_res TYPE ty_json_res.

cl_http_client=>create_by_destination(
  EXPORTING
    destination              = 'MLRFC'         " Logical destination (specified in function call)
  IMPORTING
    client                   = lo_http_client  " HTTP client abstraction
  EXCEPTIONS
    argument_not_found       = 1
    destination_not_found    = 2
    destination_no_authority = 3
    plugin_not_active        = 4
    internal_error           = 5
    OTHERS                   = 6 ).

CREATE OBJECT lo_rest_client
  EXPORTING
    io_http_client = lo_http_client.
lo_http_client->request->set_version( if_http_request=>co_protocol_version_1_0 ).

IF lo_http_client IS BOUND AND lo_rest_client IS BOUND.
  lv_url = 'infer'.
  cl_http_utility=>set_request_uri(
    EXPORTING
      request = lo_http_client->request  " HTTP framework (iHTTP) HTTP request
      uri     = lv_url ).                " URI string (in the form of /path?query-string)

  lo_request = lo_rest_client->if_rest_client~create_request_entity( ).
  lo_request->set_string_data( lv_body ).
  lo_rest_client->if_rest_resource~post( lo_request ).

*-- Response
  lo_response = lo_rest_client->if_rest_client~get_response_entity( ).
  response = lo_response->get_string_data( ).

*-- Parse JSON
  CREATE OBJECT lr_json_deserializer.
  lr_json_deserializer->deserialize( EXPORTING json = response IMPORTING abap = json_res ).

*-- Your structure JSON_RES should match the output type
  copy_data_to_ref( EXPORTING is_data = json_res
                    CHANGING  cr_data = er_entity ).
ENDIF.

 

Thanks a lot for reading the blog!

In case you have any queries or suggestions, do reach out.

Thanks,

Hasan

 