Introduction

In this blog we'll see how the customizable object detection service offered by SAP Machine Learning Foundation can be retrained to detect objects in images that the standard offering does not recognize. (You don't need to be an ML expert either.)

The steps we follow are:

  1. Provision machine learning foundation
  2. Store the training data
  3. Retrain the model
  4. Deploy the model
  5. Run inference



Provision machine learning foundation

In this step we create an instance of machine learning foundation and create a service key. The service key provides the authentication endpoint and the endpoints of the various functional and technical services offered by machine learning foundation. You can follow the steps described here to create a service instance and service key. (Please note that, at the time of writing, the object detection retraining service is only available in ml-foundation-beta in the SAP marketplace.)

Once provisioning is done we get the service key as shown below; among other things it contains the RETRAIN_OBJECT_DETECTION_API_URL and OBJECT_DETECTION_API_URL endpoints that we will use throughout this blog.



 

Opening RETRAIN_OBJECT_DETECTION_API_URL from the service key brings up a Swagger UI listing the different services we will call for our retraining scenario.



 

To start using the Swagger client we need an access token. Follow the help documentation here to generate the token.
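If you prefer the command line, the token can usually be fetched with a client-credentials call like the sketch below; it assumes the service key contains an OAuth client id, client secret and authentication URL, as is typical for Cloud Foundry service keys.

# Fetch an OAuth access token via the client-credentials grant
# <clientid>, <clientsecret> and <authentication-url> come from the service key
curl -s -u "<clientid>:<clientsecret>" \
  "<authentication-url>/oauth/token?grant_type=client_credentials"

The JSON response contains an access_token field; pass its value as Authorization: Bearer <access_token> in the API calls below.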

Storage of training data

Next we need to prepare the data to train the algorithm. This is the part that takes the most time and effort. It is important that the images chosen for training are not biased in any way.

In this case we will train the algorithm to detect raccoons in an image. The retrainable service takes the following as input:

  1. Set of images

  2. An annotations file (.csv) containing the annotations of the objects inside the images, i.e. (a sample file is shown after this list):

    1. Image name

    2. Height

    3. Width

    4. Label (i.e. the object we are trying to detect)

    5. xmin, ymin, xmax, ymax (top-left and bottom-right corners of the bounding box)
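For illustration, an annotations file with the columns above could look roughly like this; the header names, file names and values are made up, so check the service documentation for the exact format it expects.

image_name,height,width,label,xmin,ymin,xmax,ymax
raccoon-1.jpg,416,512,raccoon,81,88,380,408
raccoon-2.jpg,300,400,raccoon,60,51,240,290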

The final folder structure for training data looks like this:



Once the training data is ready, we upload it.

First we need to generate the credentials for the Minio storage by invoking the GET /storage endpoint from the Swagger UI.
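The same call can be made from the command line; this is a minimal sketch that assumes /storage is served relative to RETRAIN_OBJECT_DETECTION_API_URL (adjust the base path to whatever the Swagger UI shows).

# Request the object-store credentials
curl -s -H "Authorization: Bearer <access_token>" \
  "<RETRAIN_OBJECT_DETECTION_API_URL>/storage"

The response contains the S3 endpoint, access key and secret key used in the next steps.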



  • To upload the data we need to install the Minio client from here. Next, configure the client to use the endpoint retrieved in the previous step.


mc config host add <ALIAS> <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY> <API-SIGNATURE>

(Prepend https:// to the endpoint returned by the GET /storage API.)

  • Once the client is configured we can upload the data.


mc cp -r <SOURCE-ROOT> <ALIAS>/data/<TARGET-ROOT>

  • Check the uploaded data using


mc ls -r <ALIAS>



 

Alternatively, you can log in to <YOUR-S3-ENDPOINT> in a browser using the credentials returned by the GET /storage API.

Retrain model

Next we submit the retraining job via the /jobs endpoint in the Swagger UI. You need to pass the path to the images directory and the path to the annotations file.
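As a command-line sketch, the submission looks roughly as follows; the request body is defined by the Swagger UI, so the field names modelName, imagesFolderPath and annotationsFolderPath used here are illustrative placeholders.

# Submit a retraining job (adjust the payload to the schema shown in the Swagger UI)
curl -s -X POST \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"modelName": "raccoon-detector", "imagesFolderPath": "data/raccoon/images", "annotationsFolderPath": "data/raccoon/annotations"}' \
  "<RETRAIN_OBJECT_DETECTION_API_URL>/jobs"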



Note the job ID returned in the response. You can query it to check the status of the training job.
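For example, assuming the status endpoint is GET /jobs/{id} as exposed in the Swagger UI:

# Check the status of a submitted retraining job
curl -s -H "Authorization: Bearer <access_token>" \
  "<RETRAIN_OBJECT_DETECTION_API_URL>/jobs/<job-id>"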





It might take some time for the training container to get allocated and for the training job to run.

Logs of the training job are visible in Minio storage under data/jobs/<name>-<id>.

Once the training is successful, the trained model is persisted and needs to be deployed for inference.

Model Deployment

We invoke the deployment API with the model name we used for the training job.
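A command-line sketch of the deployment call; it assumes deployments are created with POST /deployments and that the body carries the model name, so verify the exact schema in the Swagger UI.

# Deploy the retrained model so it can serve inference requests
curl -s -X POST \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"modelName": "raccoon-detector"}' \
  "<RETRAIN_OBJECT_DETECTION_API_URL>/deployments"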



 

You can query the deployment status using GET /deployments/{id}.
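For example:

# Check the status of the deployment
curl -s -H "Authorization: Bearer <access_token>" \
  "<RETRAIN_OBJECT_DETECTION_API_URL>/deployments/<deployment-id>"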





Inference

Once the model is successfully deployed, it can be used for inference by invoking OBJECT_DETECTION_API_URL from the service key and passing the image and the model name.
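A sketch of such a call; the multipart field name and the way the model name is passed are assumptions, so check the Swagger UI of OBJECT_DETECTION_API_URL for the exact parameters.

# Run object detection with the retrained model on a test image
curl -s -X POST \
  -H "Authorization: Bearer <access_token>" \
  -F "files=@raccoon-test.jpg" \
  "<OBJECT_DETECTION_API_URL>?modelName=raccoon-detector"

The response lists the detected objects together with their labels and bounding boxes.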





 

As you will have seen by now, SAP Machine Learning Foundation makes it easy for developers to start customizing pre-existing models for their own scenarios. #happylearning