
Building an Intelligent SAP Fiori iOS App with Core ML

Introduction

A few weeks ago I had the chance to participate in an SAP CodeJam on the topic “SAP Cloud Platform Fiori for iOS SDK and Mobile Services”. SAP CodeJams are always inspiring events for developers who want to learn new SAP technologies and meet fellow developers. This was my starting point for diving deeper into the subject and developing my first iOS app.

In his blog post Top five reasons to develop enterprise-ready iOS apps using SAP, Gerhard Henig gives an overview of the benefits of using the tools and services provided by SAP. SAP Leonardo Machine Learning Services enable developers to build intelligent apps for the enterprise. Let me give you two examples of intelligent applications.

  1. Image Similarity Scoring. At the Swarovski global repair center, a product identification app is used to speed up the service of repairing products sent in by customers. You can watch a demo version of image similarity scoring here.
  2. The Daimler Car Detection App. Suppose you spot a Mercedes-Benz that thrills you. If you have the car detection app installed, you can take a photo of that car, and the app will identify the model, find the nearest dealer, and finally lead you to the car configurator. Read the article in the SAP News Center for more information.

What is an intelligent app?

An intelligent app is an application that makes use of a machine learning model. A machine learning model can be thought of as a function that is capable of performing a specific task, such as image classification, speech recognition, or product recommendation. Machine learning models are generated by feeding data into a machine learning algorithm. The output is a trained machine learning model that has learned from the data it was given.
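The idea of a model as a function can be sketched in a few lines of Swift. Everything below is invented for illustration: a toy "classifier" with hand-written rules, whereas a real model learns such rules from data.

```swift
// A trained image classifier can be viewed as a function that maps
// an input (here reduced to a feature vector) to a probability per class.
typealias Classifier = ([Double]) -> [String: Double]

// Toy "model" with hand-written rules; a real model would have
// learned these from training data instead.
let fruitClassifier: Classifier = { features in
    // features = [redness, yellowness]
    features[0] > features[1]
        ? ["Apple": 0.9, "Banana": 0.1]
        : ["Apple": 0.1, "Banana": 0.9]
}

let prediction = fruitClassifier([0.8, 0.2])
print(prediction)
```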

If you are a developer, you might be interested in learning how to build such intelligent applications. In the SAP Tutorial Navigator, you can find the tutorial SAP Leonardo Machine Learning and the iOS SDK, which explains how to build a prototype of an image classification app.

However, there is a drawback if you are using machine learning services offered as a cloud service in your app: a stable network connection is required for the app to function properly. Wouldn’t it be nice if we could include the machine learning model – which represents the “brain” of the application – in our app so that it works offline, even in the remote wilderness?

Core ML – integrate machine learning models into your app

Here comes Core ML to the rescue. Introduced at WWDC 2017, it allows developers to easily integrate machine learning models into their apps. Apple itself uses it for many tasks, e.g. people and scene recognition as well as handwriting recognition.

The demo app

The goal was to build an easily testable SAP Fiori iOS application with an integrated Core ML model. I finally came up with a fruit detection app.

The app works as follows.

  1. Take a photo of the fruit you want to identify.
  2. The app tries to classify the fruit and displays its name in the title.
  3. Tap on the detail button to get more information about the prediction:

A list of fruits and vegetables appears with the corresponding confidence levels (4), with the predicted class of highest probability listed at the top. In this example, the app was able to identify the red apple. The list was implemented as an SAP Fiori Data Table using the template provided by the SAP Fiori Mentor App.

Training the machine learning model

There are some ready-to-use Core ML models available for download here. 

These models are quite large and generic and will likely not perform well for highly specialized tasks. In such cases, you might want to build the machine learning model on your own.

The “Fruits 360” dataset was used to train the model. You can download the dataset from Kaggle.

At the time of download, the dataset contained 62,116 images of 90 kinds of fruit. It is updated from time to time.

I considered two options for training a model:

  1. Use the deep learning framework Keras to build an artificial neural network, then convert the Keras model into a Core ML model using Core ML Tools.
  2. Use Create ML, introduced at WWDC 2018, to train a Core ML model with a few lines of code.

After watching the WWDC 2018 session videos, I opted for Create ML. It has never been easier to train a model. Open a playground and enter three lines of code:
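For reference, the playground snippet looked roughly like this (a sketch, assuming the CreateMLUI framework as presented at WWDC 2018; it runs only inside a macOS Xcode playground):

```swift
import CreateMLUI

// Opens the interactive image classifier builder in the live view,
// where training data can be dropped in.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```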

Run the code and an assistant editor opens on the right side. Simply drop the folder with the training data into the “Drop Images” area and the training process starts.

After some time (in my case it took more than an hour) the model is created and the model accuracy is shown on the right side:

Now drop the testing data folder into the playground and the evaluation process starts.

The accuracy on the testing data is lower because the model has not seen these images before.
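The gap between training and testing accuracy is easier to interpret once you recall what accuracy measures: the share of correctly predicted examples. A minimal sketch with made-up labels:

```swift
// Accuracy = correct predictions / total examples.
let predictions = ["Apple", "Banana", "Apple", "Peach"]
let groundTruth = ["Apple", "Banana", "Peach", "Peach"]

// Pair each prediction with its true label and count the matches.
let correct = zip(predictions, groundTruth).filter { $0.0 == $0.1 }.count
let accuracy = Double(correct) / Double(groundTruth.count)
print(accuracy)  // 3 of 4 correct
```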

Finally, we save the Core ML model and simply drag the file into the Xcode project. A model class for the Core ML model is generated automatically:

The model expects an image as input and returns a dictionary containing the probability of each category. The class label with the highest probability corresponds to the type of fruit the model predicts.
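Picking the predicted class out of such a dictionary is a one-liner. A small sketch with invented probabilities (the class names follow the “Fruits 360” naming scheme, but the values are made up):

```swift
// The model returns a label → probability dictionary;
// the prediction is the entry with the highest probability.
let classProbabilities: [String: Double] = [
    "Apple Red 1": 0.92,
    "Tomato": 0.05,
    "Peach": 0.03
]

let bestLabel = classProbabilities.max { $0.value < $1.value }!.key
print(bestLabel)
```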

Within a view controller, we retrieve the model and make a classification request, passing the image as input:

// Requires: import UIKit, import Vision
func classifyImage(image: CIImage) {
    // Wrap the generated Core ML model class in a Vision container model
    guard let model = try? VNCoreMLModel(for: FruitImageClassifier().model) else {
        fatalError("Cannot import Core ML model")
    }

    // Create a classification request with a completion handler
    let request = VNCoreMLRequest(model: model) { (request, error) in
        guard let classificationResults = request.results as? [VNClassificationObservation] else {
            fatalError("Did not get classification request results")
        }
        guard let bestResult = classificationResults.first else {
            fatalError("Cannot get classification result")
        }

        // Keep only observations with a confidence above 1%
        self.currentClassificationResult = CurrentClassificationResult(
            observations: classificationResults
                .filter { $0.confidence > 0.01 }
                .map(self.observationResult))

        print(self.currentClassificationResult)

        // Vision may call the completion handler on a background queue,
        // so update the UI on the main queue
        DispatchQueue.main.async {
            self.navigationItem.title = bestResult.identifier
        }
    }

    // Run the request on the given image
    let handler = VNImageRequestHandler(ciImage: image)
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

The source code for this demo app is available on Bitbucket.

Conclusion

I hope I have given you an impression of how simple it is to build machine learning models and integrate them into your iOS app, thanks to the highly sophisticated machine learning tools and frameworks from Apple. Combined with the SAP Cloud Platform SDK for iOS, developers are well equipped to develop innovative apps for the Intelligent Enterprise.

2 Comments
  • Great stuff, thanks for writing about it. I have a small question: now that we have added the trained Core ML model to the app itself, does it mean you will have to update the app each time you retrain your model on new images? If yes, isn’t that a constraint?

    Nabheet

    • Hi Nabheet,

      Thank you for your kind remarks. Yes, the Core ML model is static, and that is a major drawback of ML models that reside on the mobile device. As far as I know, the user has to update the app when a new ML model is required.

      When you have to update the ML model frequently, a cloud-based ML model is the better choice.

      So you have to decide which approach – ML model on the device or in the cloud – you take.

      Best regards

      Pius