Hello everyone,

Welcome to the world of Deep Learning! This application was also demoed at SAPPHIRE 2017.

I am excited to describe an interesting and fairly complex implementation we built to demonstrate the integration of SAP with Google ML Engine and TensorFlow, bringing the user experience to an entirely new level.


An end user takes a picture with their computer's webcam through a UI5 screen, and in real time the system recognizes the person or object in the image, harnessing the power of deep learning and TensorFlow in SAP.

The blog post is fairly long, given the complexity of the scenario, so please bear with me 🙂

Architecture design:

We combined the following technologies and skills to implement this:

  1. SAP UI5 (built in SAP Web IDE) for the webcam screen
  2. Knowledge of convolutional neural networks (deep learning)
  3. TensorFlow, to train a convolutional neural network in Python – https://www.tensorflow.org/tutorials/
  4. An account on Google Cloud Platform to host the final neural network model

Please note: some of the terminology here (Python, TensorFlow, convolutional nets, etc.) might be new to you, so you may want to refer to external links and papers to learn more about these topics.

So let's start. We will go through it step by step, with as many screenshots as possible:

  • SAP UI5: an HTML iframe embedded inside an XML view to invoke the camera and take a picture

 // Init canvas and start the webcam stream
 // ( `canvas`, `ctx` and `video` are globals defined in the iframe page )
 function init() {
        // Get the canvas and obtain a 2D context for drawing in it
        canvas = document.getElementById("myCanvas");
        ctx = canvas.getContext('2d');

        // Ask the browser for webcam access and pipe the stream into the <video> element
        navigator.mediaDevices.getUserMedia({ video: true })
            .then(function(stream) {
                video.srcObject = stream;
                video.play();
            })
            .catch(function(err) {
                alert('Could not access the webcam: ' + err.message);
            });
 }

 // Take image from webcam -> data is stored in the variable window.filebinary
 function snapshot() {
        var image = new Image();
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        var data = canvas.toDataURL('image/jpeg');
        image.src = data;
        window.filebinary = data;
 }

Product image search results:

SAP UI5 code to trigger the attachment Gateway (GW) service, carrying the file binary data to SAP GW:

sap.ui.define(["sap/ui/core/mvc/Controller"], function(BaseController) {
		"use strict";
		return BaseController.extend("ml001.app.controller.webcam", {
			_onButtonPress1: function() {
				var token1 = "";
				var out;
				var xhr = new window.XMLHttpRequest();
				var ifr = this.getView().byId("attachmentframe");
				var file_bin = ifr.getDomRef().contentWindow.filebinary;

				// GET X-CSRF-TOKEN - Hasan Rafiq
				$.ajax({
					url: '/sap/opu/odata/sap/<ZSERVICE>/',
					type: 'GET',
					async: false,
					contentType: "application/atom+xml;type=entry",
					dataType: "",
					headers: {
						"x-csrf-token": "fetch"
					},
					success: function(data, textStatus, request) {
						token1 = request.getResponseHeader("x-csrf-token");
					}
				});

				// Post file binary to GW attachment service - Custom Hasan Rafiq
				$.ajax({
					type: 'POST',
					url: '/sap/opu/odata/sap/<ZGW_SERVICE>/getimageSet',
					async: false,
					xhr: function() {
						//Upload progress
						xhr.upload.addEventListener('progress', function(evt) {
							if (evt.lengthComputable) {
								var percentComplete = Math.round(evt.loaded * 100 / evt.total);
								//Do something with upload progress
							}
						}, false);

						xhr.addEventListener('error', function(evt) {
							alert('There was an error attempting to upload the file.');
						}, false);

						return xhr;
					},
					contentType: 'image/jpeg',
					processData: false,
					data: file_bin,
					headers: {
						'x-csrf-token': token1
					},
					success: function(data) {
						//alert('File uploaded successfully');
						out = data.documentElement.innerHTML;
					}
				});

				if (out) {
					// Open the result dialog ( dialog map assumed to be populated elsewhere in the app )
					var dialogName = "ProductResult";
					var dialog = window.dialogs[dialogName];
					dialog.open();
				} else {
					alert("Error in calling ML service");
				}
			}
		});
	}, /* bExport= */ true);

We took an existing Inception neural network model available from the TensorFlow library ( http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz ) and retrained its last layer on new images and categories. The configuration of the neural net is below:

  1. Convolutional layer #1 => ReLU activations
  2. Max pooling layer #1
  3. Convolutional layer #2 => ReLU activations
  4. Max pooling layer #2
  5. Dense layer
  6. Softmax layer => softmax activation
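As a rough illustration of how the layer stack above transforms an input, here is a minimal shape-arithmetic sketch. The 28x28 input size, 'SAME' convolution padding, 2x2 pooling, and 64 feature maps are assumptions chosen for illustration, not the actual Inception dimensions:

```python
# Illustrative only: trace the spatial dimensions through the layer stack.
# Assumes 'SAME'-padded convolutions and 2x2 max pooling with stride 2.

def conv_same(h, w):
    # 'SAME' padding keeps the spatial size unchanged
    return h, w

def max_pool(h, w, k=2):
    # k x k pooling with stride k halves each dimension
    return h // k, w // k

h, w = 28, 28                 # assumed input size
h, w = conv_same(h, w)        # conv #1 + ReLU
h, w = max_pool(h, w)         # max pool #1 -> 14 x 14
h, w = conv_same(h, w)        # conv #2 + ReLU
h, w = max_pool(h, w)         # max pool #2 -> 7 x 7

dense_inputs = h * w * 64     # assuming 64 feature maps after conv #2
print(h, w, dense_inputs)     # 7 7 3136
```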

Retraining the existing TensorFlow model to categorize new objects. Steps followed:

  1. Download the Python retrainer script https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
  2. Create an image folder
  3. Create subfolders named after the category labels ( e.g., not_hasan ), each with around 100 images of that category, taken from different angles and with different ambience, brightness, etc.
  4. Run retrain.py, passing the location of the image folder, to retrain the model
    1. This produces a .pb file ( the TensorFlow model )
    2. Run the script from the command line; it writes the file C:/TMP/output_graph.pb, which is the final neural network model
  5. It took around 1 hour for the whole neural network to be retrained on an Intel Core i7 machine with SSE4.1 enabled and TensorFlow 1.2
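The steps above can be scripted; here is a minimal sketch that builds the retraining command line. The folder layout and the flag names (`--image_dir`, `--output_graph`, `--output_labels`) are based on the retrain.py example script linked above; verify them against the version you download:

```python
# Assumed layout: one subfolder per category label, each holding ~100 JPEGs
#   images/
#     hasan/      img001.jpg ...
#     not_hasan/  img001.jpg ...

def retrain_command(image_dir, output_graph, output_labels):
    """Build the command line for the TensorFlow retrain.py example script."""
    return ["python", "retrain.py",
            "--image_dir", image_dir,
            "--output_graph", output_graph,
            "--output_labels", output_labels]

cmd = retrain_command("images", "/tmp/output_graph.pb", "/tmp/output_labels.txt")
print(" ".join(cmd))
```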

Prediction on new images – Python code. Steps followed ( needs Python knowledge ):

  1. I wrote custom Python code to read the newly created .pb model file and a new image file placed in a folder, in order to test the performance of the newly trained model
  2. Python code
#Hasan -> Run existing transfer-learning TensorFlow .pb model to classify new images
import numpy as np
import tensorflow as tf

imagePath = 'image_to_classify/test.jpg'
modelFullPath = 'output_graph.pb'
labelsFullPath = 'output_labels.txt'

# Function to load the existing TensorFlow graph
def load_existing_tf_graph():
    with tf.gfile.FastGFile(modelFullPath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')

# Run the existing graph on a new image
def classify_image():
    #Read image file into binary
    image_data = tf.gfile.FastGFile(imagePath, 'rb').read()

    #Load the retrained graph into the default TF graph
    load_existing_tf_graph()

    #Run TF Session -> classifier
    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        predictions = np.squeeze(predictions)

    #Convert prediction probability vector to label texts ( Trained on )
    with open(labelsFullPath, 'r') as f:
        labels = [line.rstrip('\n') for line in f.readlines()]
    for x in range(predictions.shape[0]):
        human_string = labels[x]
        score = predictions[x]
        print('%s (score = %.5f)' % (human_string, score))

# Main thread
if __name__ == '__main__':
    classify_image()
  • The output of this code is a probability vector telling us the probability that the image is “Hasan” or “Not_hasan”

The output below shows that the model is working fine, with around 99% confidence, and is ready to be used:

  "Hasan": 0.99126,
  "Not_hasan": 0.00874

Next: in order to have our UI5 webcam image recognized by our Python script, we need to upload the created TensorFlow model .pb file to the GCP ML Engine.

Since deploying the newly created model to the GCP Machine Learning Engine involves many steps, they are not shown here.

You may refer to: https://cloud.google.com/ml-engine/docs/concepts/prediction-overview

Once this is done, your model will be ready to take an input image as binary and return results.
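For illustration, calling such a hosted model could be sketched as below. This is only a hedged sketch: the `{"instances": [{"b64": ...}]}` payload shape depends on how the model's serving signature was exported, and the `url` and auth token come from your own GCP project setup:

```python
import base64
import json

def build_predict_payload(jpeg_bytes):
    """Wrap raw JPEG bytes in a JSON body for an online-prediction endpoint.
    The {"instances": [{"b64": ...}]} shape is an assumption and must match
    the model's exported serving signature."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return {"instances": [{"b64": encoded}]}

def predict(url, jpeg_bytes, token):
    """Hypothetical POST to the hosted model endpoint."""
    import requests  # third-party; only needed for the actual call
    resp = requests.post(url,
                         data=json.dumps(build_predict_payload(jpeg_bytes)),
                         headers={"Content-Type": "application/json",
                                  "Authorization": "Bearer " + token})
    return resp.json()

payload = build_predict_payload(b"\xff\xd8\xff")  # fake JPEG header bytes
print(sorted(payload.keys()))  # ['instances']
```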

The final part of this was the backend SAP GW service:

The GW service reads the image’s binary data via an attachment service and triggers a POST REST call to the Google ML Engine, to get back the results from our neural network hosted there.

Steps followed:

  1. Create an RFC destination to the GCP ML Engine URL where the model is hosted and listening for HTTP requests

In the GW service method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CREATE_STREAM, we wrote the code below to call the service and parse the output:

DATA: lo_http_client     TYPE REF TO if_http_client,
      lo_rest_client     TYPE REF TO cl_rest_http_client,
      lo_request         TYPE REF TO if_rest_entity,
      lo_response        TYPE REF TO if_rest_entity,
      lv_url             TYPE        string,
      http_status        TYPE        string,
      lv_body            TYPE        string,
      response           TYPE        string.

DATA lr_json_deserializer TYPE REF TO cl_trex_json_deserializer.
TYPES: BEGIN OF ty_json_res,
         hasan     TYPE string,
         not_hasan TYPE string,
       END OF ty_json_res.
DATA: json_res TYPE ty_json_res.

* Create the HTTP client from the RFC destination pointing to GCP ML Engine
cl_http_client=>create_by_destination(
  EXPORTING
    destination              = 'MLRFC'          " Logical destination (specified in function call)
  IMPORTING
    client                   = lo_http_client   " HTTP Client Abstraction
  EXCEPTIONS
    argument_not_found       = 1
    destination_not_found    = 2
    destination_no_authority = 3
    plugin_not_active        = 4
    internal_error           = 5
    OTHERS                   = 6 ).

CREATE OBJECT lo_rest_client
  EXPORTING
    io_http_client = lo_http_client.

lo_http_client->request->set_version( if_http_request=>co_protocol_version_1_0 ).

IF lo_http_client IS BOUND AND lo_rest_client IS BOUND.
  lv_url = 'infer'.
  cl_http_utility=>set_request_uri(
    request = lo_http_client->request    " HTTP Framework (iHTTP) HTTP Request
    uri     = lv_url ).                  " URI String (in the form of /path?query-string)

* lv_body must be filled with the request payload ( the image data ) for the model
  lo_request = lo_rest_client->if_rest_client~create_request_entity( ).
  lo_request->set_string_data( lv_body ).
  lo_rest_client->if_rest_resource~post( lo_request ).

**-- Response
  lo_response = lo_rest_client->if_rest_client~get_response_entity( ).
  response = lo_response->get_string_data( ).

**-- Parse JSON
  CREATE OBJECT lr_json_deserializer.
  lr_json_deserializer->deserialize( EXPORTING json = response IMPORTING abap = json_res ).
**-- Here your structure JSON_RES should match the output type
  copy_data_to_ref( EXPORTING is_data = json_res
                    CHANGING  cr_data = er_entity ).
ENDIF.


Thanks a lot for reading the blog post!

In case you have any queries or suggestions, do reach out.







  1. Karthik A

    I need your help. I am trying to execute the URL below to use the ML scripts for the translator, but I am facing the error shown in the attached image. Can you please provide your inputs to proceed further?

    url: https://sandbox.api.sap.com/ml/translation/translate


    var data = JSON.stringify({
    				"sourceLanguage": "de",
    				"targetLanguages": ["en"],
    				"units": [{
    					"value": "Der Bestellvorgang bricht beim Speichern der Lieferadresse ab."
    				}]
    			});
    			var xhr = new XMLHttpRequest();
    			//xhr.withCredentials = true;
    			xhr.addEventListener("readystatechange", function() {
    				if (this.readyState === this.DONE) {
    					console.log(this.responseText);
    				}
    			});
    			//setting request method
    			xhr.open("POST", "https://sandbox.api.sap.com/ml/translation/translate");
    			//adding request headers
    			xhr.setRequestHeader("Content-Type", "application/json");
    			xhr.setRequestHeader("Accept", "application/json;charset=UTF-8");
    			xhr.setRequestHeader("APIKey", "<GIVEN CODE>");
    			xhr.setRequestHeader("Access-Control-Allow-Origin", "*");
    			xhr.setRequestHeader("Access-Control-Allow-Methods", "GET,  PUT,  POST,  DELETE");
    			xhr.setRequestHeader("Access-Control-Allow-Headers", "origin, content-type, accept, x-requested-with");
    			xhr.setRequestHeader("Access-Control-Max-Age", "3600");
    			//sending request
    			xhr.send(data);


    Karthik A




    1. Hasan Rafiq Post author

      Hi Karthik,


      Thanks a lot for the appreciation !

      The error seems to be related to your API call not being authorized.

      I believe you are trying the SAP API Hub; you need to generate your API key for the translation service from the console and put it in place of <API_KEY>. If you have already done this step, first replicate the HTTP POST using a REST client such as Postman.

      If you can call the service successfully from Postman, you can then use the same headers and URL via AJAX.


      Otherwise, you can create a question about this on the SAP SCN space!




      1. Karthik A

        Thanks a lot Hasan, I found a solution to resolve my issues.


        Your work deserves to be shared; we are sharing your blogs… please keep posting. All the best



        Karthik A


    Hi Hasan,

    Is there a way to get the source code of the SAPUI5 application you used for taking pictures? Maybe a GitHub repository somewhere.




