Technical Articles
Sudip Ghosh

Developing Custom Machine learning Model and Running on KYMA – Enterprise AI Extension Part 2

Hello All,

Welcome to Part 2 of Developing Enterprise AI Extension. In the first part I discussed an overview of the different types of AI you can infuse into your enterprise. In this blog I will show you how to create a custom machine-learning-based image classification model with Teachable Machine (a no-code machine learning model development platform), how to dockerize it, and finally how to deploy that machine learning model to the SAP Cloud Platform Kyma runtime.


What Is Teachable Machine?

Teachable Machine is a web-based tool where you can train and create custom machine learning models for image classification, voice and pose. Cool, isn't it? The best part is that non-machine-learning developers can create a machine learning model, train it with their own data, export it as a saved model, Keras model or TensorFlow.js model, and then create a REST API out of it or embed it into an application.

You can find more information here.

Let's get into action

Create three materials in SAP S/4HANA Cloud for iPhone X, iPhone 11 Pro and iPhone 6. We will create a custom TensorFlow-based image classification model for those materials in Teachable Machine, then embed that model in a Flask-based application and dockerize it to deploy into Kyma.

Now, in order to train and build the machine learning model, create a training data set and upload it as below. I have created three classes accordingly.

After that, click on Train Model to train the model, then export it in TensorFlow Keras format by clicking on the Download my model button.

Now extract the downloaded zip file; you will find the Keras model file and labels.txt.

These two files are very important: we need both of them to build the Python Flask-based application.
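To see how the two exported files fit together: the model produces a vector of scores, and labels.txt supplies one class name per score. Here is a minimal sketch of that pairing, using made-up label lines and dummy scores (the real labels.txt comes from your own Teachable Machine export):

```python
# Hypothetical labels.txt content; your export contains the class
# names you defined in Teachable Machine.
labels_text = "0 Iphone X\n1 Iphone 11 pro\n2 Iphone 6\n"

# Read the labels the same way the Flask application will.
labels = [line.strip() for line in labels_text.splitlines() if line.strip()]

# Dummy scores standing in for the model's prediction vector.
scores = [0.25, 0.25, 0.5]

# Pair each score with its label, as the REST API response will do.
result = [{'tagName': label, 'probability': s * 100}
          for s, label in zip(scores, labels)]

for entry in result:
    print(entry['tagName'], entry['probability'])
```

The zip order is what ties a score to a class, so the label file must keep the order it was exported with.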

Creating the Python Flask Application with the Embedded Machine Learning Model

Everyone needs a REST API, isn't it? Because ultimately it can be integrated anywhere, for example into any UI application or any Conversational AI based application.

As this model is based on Keras, in order to expose it as a REST API we need to create a Flask-based application and plug the generated model into it.

Let's open Visual Studio Code (my favourite IDE) and create the application structure as below.


Below are the code snippets for each file.

import json
import os
import io

# Imports for the REST API
from flask import Flask, request, jsonify

# Imports for image processing
from PIL import Image

# Imports for prediction
from predict import predict_url

app = Flask(__name__)

# 4MB Max image size limit
app.config['MAX_CONTENT_LENGTH'] = 4 * 1024 * 1024

@app.route('/', methods=['GET'])
def index():
    return 'GET Methods are Not Allowed'

@app.route('/image', methods=['POST'])
def predict_url_handler():
    try:
        image_url = json.loads(request.get_data().decode('utf-8'))['url']
        results = predict_url(image_url)
        return jsonify(results)
    except Exception as e:
        print('EXCEPTION:', str(e))
        return 'Error processing image'

if __name__ == '__main__':
    # # Load and initialize the model
    # initialize()

    # Run the server
    app.run(host='0.0.0.0', port=80)
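To illustrate the payload this endpoint expects, here is a small sketch of the JSON body a client would POST and how the handler decodes it; the image URL is just a placeholder:

```python
import json

# The /image endpoint expects a JSON body of the form {"url": "<image url>"}.
# The URL below is a hypothetical placeholder.
body = json.dumps({'url': 'https://example.com/iphone6.jpg'})

# Flask's request.get_data() hands the handler the raw bytes of the body,
# which the handler decodes and parses exactly like this:
raw = body.encode('utf-8')
image_url = json.loads(raw.decode('utf-8'))['url']
print(image_url)
```

Any HTTP client that sends this body to POST /image will get back the JSON predictions from the model.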




import tensorflow.keras
from urllib.request import Request, urlopen
from PIL import Image, ImageOps
import numpy as np
import ssl

def predict_url(imageUrl):
    """Predicts the image at the given URL."""
    ssl._create_default_https_context = ssl._create_unverified_context
    # log_msg("Predicting from url: " + imageUrl)
    imgrequest = Request(imageUrl, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(imgrequest) as testImage:
        image = Image.open(testImage)
        return predict_image(image)

def predict_image(image):
    # code snippet from teachable machine start-----------------------------
    # Disable scientific notation for clarity
    np.set_printoptions(suppress=True)

    # Load the model
    model = tensorflow.keras.models.load_model('model.h5')

    # Create the array of the right shape to feed into the keras model.
    # The 'length' or number of images you can put into the array is
    # determined by the first position in the shape tuple, in this case 1.
    data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)

    # # Replace this with the path to your image when testing locally
    # # image = Image.open('test.jpg')

    # Resize the image to 224x224 with the same strategy as in TM2:
    # resize the image to be at least 224x224, then crop from the center
    size = (224, 224)
    image = ImageOps.fit(image, size, Image.ANTIALIAS)

    # Turn the image into a numpy array
    image_array = np.asarray(image)

    # # Display the resized image (not needed on the server)
    # image.show()

    # Normalize the image
    normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1

    # Load the image into the array
    data[0] = normalized_image_array

    # Run the inference
    predictions = model.predict(data)
    # code snippet from teachable machine end-------------------------------

    # Pair each prediction score with its label from labels.txt
    labels_filename = 'labels.txt'
    with open(labels_filename, 'rt') as lf:
        labels = [l.strip() for l in lf.readlines()]

    result = []
    for p, label in zip(predictions[0], labels):
        result.append({
            'tagName': label,
            'probability': float(p) * 100
        })

    response = {
        'predictions': result
    }
    # log_msg("Results: " + str(response))
    return response
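The preprocessing above is easy to check in isolation. The sketch below reproduces the Teachable Machine normalization, (x / 127.0) - 1, which maps pixel value 0 to -1.0, 127 to 0.0 and 254 to 1.0, and shows the (1, 224, 224, 3) input shape the model expects:

```python
import numpy as np

# A dummy 224x224 RGB image standing in for the resized PIL image.
image_array = np.zeros((224, 224, 3), dtype=np.uint8)
image_array[0, 0] = [0, 127, 254]  # sample pixel values at one corner

# The same normalization used in predict_image: roughly [0, 255] -> [-1, 1]
normalized = (image_array.astype(np.float32) / 127.0) - 1

# The batch array the model consumes: one image of shape 224x224x3
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
data[0] = normalized

print(data.shape, normalized[0, 0])
```

If the input isn't normalized this way, the model still runs but the scores will be meaningless, so it is worth verifying this step matches the training-time preprocessing.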


FROM python:3.7-slim

RUN pip install -U pip
RUN pip install --no-cache-dir numpy~=1.17.5 tensorflow~=2.4.0 flask~=1.1.2 pillow~=7.2.0

COPY app /app

# Expose the port
EXPOSE 80

# Set the working directory
WORKDIR /app

# Run the flask server for the endpoints
CMD python -u app.py

Building Docker Image

docker build -t codersudip/tmachineofficesupply:aarini .

Docker Run Locally

docker run -p 80:80 -d codersudip/tmachineofficesupply:aarini


Testing in Postman

In order to test we need to find an image from Google. I tested with an iPhone 6s picture and I must say the result is pretty good in terms of accuracy. (If you remember, MZ-PROC-IT-IP-0032 was the material we created for iPhone 6 in SAP S/4HANA Cloud.)
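If you prefer a script over Postman, the same test can be sketched with the Python standard library. The endpoint and image URL below are placeholders for your local container and a public iPhone picture:

```python
import json
import urllib.request

def build_request(endpoint, image_url):
    """Build the POST request the /image endpoint expects.

    Both arguments are placeholders here; substitute your own
    endpoint and a real image URL."""
    body = json.dumps({'url': image_url}).encode('utf-8')
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={'Content-Type': 'application/json'},
        method='POST')

req = build_request('http://localhost:80/image',
                    'https://example.com/iphone6s.jpg')

# Actually sending it only works while the container is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

The response body is the `predictions` list built in predict.py, one tagName/probability pair per class.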


Now Deploying and Running on Kyma

In order to run and deploy this machine learning model in the SAP Cloud Platform Kyma runtime, first push the Docker image to Docker Hub.

docker push codersudip/tmachineofficesupply:aarini


Once it is pushed to Docker Hub, you are ready to deploy into Kyma. For that you need to enable the Kyma runtime.

In order to enable the Kyma runtime, please follow this SAP developer tutorial by Kevin Muessig.

Pre-requisite: install kubectl and get your kubeconfig.


Set the kubeconfig path:

export KUBECONFIG=kubeconfig.yml


Create Deployment

kubectl create deployment --image=codersudip/tmachineofficesupply:aarini officesupplytm

kubectl set env deployment/officesupplytm DOMAIN=cluster

Expose the service

kubectl expose deployment officesupplytm --port=80 --name=officesupplytm

Now go to Kyma and look at the deployments; you can see the recent deployment we just did.

Now go to Services and expose it as an API.


Click on Expose Service

For this scenario I am not showing how to secure this API because someone has already written a blog on it. I am going to share the blog here instead 🙂

Please follow Kyma for Dummies [2]: First Simple Microservice with Security by Carlos Roggan.


Testing this Kyma-based ML microservice in Postman.


I have also done a YouTube podcast session on this topic; you can watch it if you would like.

That's it for today. I hope you really enjoyed this blog. In the next one I will show how we can integrate with Conversational AI and finally with SAP S/4HANA Cloud, to create an image-based buying feature for SAP S/4HANA Cloud from a WhatsApp-based Conversational AI.

In the meanwhile, have a good read, share with your friends and enjoy the weekend.




Assigned Tags

      AMIT Lal

      Nice work!! Sudip.

      I wrote on a similar topic - AutoML on Azure using the PowerBI platform.


      Sudip Ghosh
      Blog Post Author

      Thanks for sharing, Amit. I was wondering how you export that model and containerize it for use anywhere.

      AMIT Lal

      Thanks to you as well. Good point! I'm exploring how to get the model exported; I see an import option on the frontend though.

      Kallol Chakraborty

      Nice post! You can use the pickle module to save the model & use it later.

      import pickle

      # Dump the model: open a file and dump the model object into it.
      with open('model_pkl', 'wb') as files:
          pickle.dump(model, files)

      # Load the saved model back again with the same logic,
      # here using the lr variable to reference the model:
      with open('model_pkl', 'rb') as f:
          lr = pickle.load(f)
      Somnath Paul

      Thanks, Sudip. A lot to know from you!

      Sudip Ghosh
      Blog Post Author

      Thanks Somnath Paul for your words 🙂

      k samarth ashvinbhai

      Hey Sudip, great and very informative blog. I have one query!

      Let's say I changed something in my model and Flask API and pushed it again to the same Docker Hub repo. Will the changes appear directly in the Kyma environment, or do I have to do the whole "create deployment" part again for these changes?

      And if I do have to redo all these steps, will it change the "Host" link or will it remain the same?

      Hope to hear from you soon!




      Sudip Ghosh
      Blog Post Author

      That's where a CI/CD pipeline comes in; just set up a CI/CD pipeline.