
SAP Leonardo Machine Learning Foundation APIs have recently been made available in the trial landscape. Anyone can register for a trial account and test drive these ML APIs. In this blog, I want to quickly show you how to get started using the ML APIs to work with the pre-trained models. At SAPPHIRE, the Machine Learning team also announced a set of new pre-trained and customizable services for face detection, scene text recognition, etc. In this blog, I would like to focus on Scene Text Recognition, which enables you to read text from natural images/scenes. I have been working with several customers who were looking for a similar service to extract text/numbers from images.

I would also like to point to a blog series by Fabian Lehmann on “Getting started with SAP Leonardo ML Foundation on SAP Cloud Platform Cloud Foundry”. Fabian uses a productive account and shows how to get started, how to retrain the models, and how to deploy your own ML models. I would highly recommend going through those blogs.

When registering for a Cloud Foundry trial account, ensure you pick an AWS region, as I couldn’t find the ML services in the Azure/GCP regions yet.


Once you have a Global Account in the Cloud Foundry trial landscape, verify your entitlements.

You will notice that the ML Foundation Trial “Standard” service plan is already allocated to this account. It is interesting to note that the Blockchain Hyperledger Fabric service is also provided for trial.

Navigate to the Spaces and click on the “Dev” space.

Under the Service Marketplace, locate “ml-foundation-trial-beta”.

Create a new Instance from the “Instances” menu.

Provide a name of your choice.


Once the instance has been created, select it to view the details.

Create a Service Key from the “Service Keys” menu.

Provide a name of your choice for the service key.

This will give you a list of parameters which you will need in order to interact with the ML APIs and build an application. There will be secret keys along with the API endpoints for each of the services. You can also test these APIs directly from the SAP API Business Hub.
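The service key provides an OAuth client id/secret and a token endpoint alongside the API URLs. As a rough sketch (the field names `url`, `clientid` and `clientsecret` follow the usual Cloud Foundry service-key layout, and the `/oauth/token` path is the standard XSUAA token endpoint; verify both against your own key), you could fetch a bearer token like this:

```python
# Sketch: obtain an OAuth bearer token from the credentials in the
# service key. Field names are assumptions based on the typical
# Cloud Foundry service-key layout -- adjust to match your own key.
import base64
import json
import urllib.parse
import urllib.request


def basic_auth_header(client_id, client_secret):
    """Build the HTTP Basic Authorization header for the token request."""
    creds = f"{client_id}:{client_secret}".encode("utf-8")
    return {"Authorization": "Basic " + base64.b64encode(creds).decode("ascii")}


def fetch_token(auth_url, client_id, client_secret):
    """POST a client_credentials grant to the token endpoint, return the token."""
    data = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    req = urllib.request.Request(
        auth_url + "/oauth/token",
        data=data,
        headers=basic_auth_header(client_id, client_secret),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The returned token then goes into the `Authorization: Bearer …` header of every call to the ML endpoints.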


I can test the OCR service by providing an image of a receipt. Below is a receipt I had from a recent shopping trip.

I can use the Postman REST client to pass this image, and the service is able to detect the store name, address and the total amount.
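Outside Postman, the same call can be scripted. The sketch below assumes the OCR endpoint accepts a single file upload in a multipart/form-data field named `files`; double-check the endpoint path and field name against your own service key before using it.

```python
# Sketch: upload a receipt image to the OCR endpoint as
# multipart/form-data, the same way Postman does.
import io
import json
import urllib.request
import uuid


def build_multipart(field, filename, payload, content_type="image/jpeg"):
    """Assemble a minimal multipart/form-data body for one file upload."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(f'Content-Disposition: form-data; name="{field}"; '
               f'filename="{filename}"\r\n'.encode())
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"


def ocr_image(endpoint, token, image_path):
    """POST the image to the OCR endpoint and return the parsed JSON."""
    with open(image_path, "rb") as f:
        payload = f.read()
    body, content_type = build_multipart("files", image_path, payload)
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `ocr_image(endpoint, token, "receipt.jpg")` with the endpoint and a bearer token from your service key returns the JSON payload containing the recognized text.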


If you would like to test this using a REST client, please follow the setup instructions in “Getting started with SAP Leonardo ML Foundation on SAP Cloud Platform Cloud Foundry”.

Here is a video which shows how one can build a native mobile app that leverages the above functionality. This Android app was built by two interns from the SAP Singapore office, Rahul Rajesh & Anusha Anandan, in just a few days.

I was also curious to find out the accuracy of the Scene Text ML service. Below is a simple image which shows a house number.

I was able to use the Postman REST client to test this service with the above image too.
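Once the response comes back, the detected strings can be pulled out with a few lines of Python. The payload shape assumed below (a `predictions` list holding `results` entries with a `text` field) is an illustration modelled on the general shape of the Leonardo services, not the documented schema; check it against the JSON you actually receive.

```python
# Sketch: extract the detected text strings from a scene-text response.
# The response layout here is an assumed example -- verify it against
# the real payload returned by the service.
def detected_texts(response):
    """Return the list of text strings found in a scene-text response."""
    texts = []
    for prediction in response.get("predictions", []):
        for detection in prediction.get("results", []):
            if "text" in detection:
                texts.append(detection["text"])
    return texts


# A tiny hand-made payload in the assumed shape:
sample = {"predictions": [{"results": [{"text": "27", "score": 0.98}]}]}
print(detected_texts(sample))  # -> ['27']
```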

I hope this blog was informative. Go ahead and try these ML APIs on your own and build apps leveraging them.




  1. Kashif Bashir

    Hi, any idea where I can get the “bearer” token?

    I am getting an authorization error while trying to POST using the REST client.

  2. Ravindra Tanguturi

    Hi Murali,

    Is the OCR service changed, or the same as what was in the API Hub? We tried this last year and are looking for some capabilities like support for .doc/.docx files, file sizes of >5 MB, etc.


    Best Regards

    Ravindra Tanguturi

    1. Murali Shanmugham Post author

      Hi Ravindra,

      The OCR service (like every other service) has been going through a lot of updates. The API Business Hub sandbox service instance only allows documents of up to 2 MB, whereas productive instances allow up to 10 MB. The service accepts PDF, JPG and PNG file types as input and returns the detected text within the file in either plain-text or hOCR format. Hope this helps.


  3. Rodrigo Francou

    Thank you very much for the post, it has helped me a lot. I am interested in knowing how you get the image into good shape so that the service can extract the information well. How do you do this process? I see that when you take the photo, the app recognizes the receipt.

    1. Murali Shanmugham Post author

      It’s very much possible. One of the key scenarios of the ML platform is to allow the deployment of custom models and expose them as APIs. Once you deploy your own models, you can use the training infrastructure to train them.


