SAP Leonardo ML Foundation – Retraining part 2
Introduction
In the previous blog we started with the “Retraining” functionality, which is available with SAP Leonardo ML Foundation on top of the SAP Cloud Platform Cloud Foundry environment.
In this blog we will continue by executing the deployment.
Finally, we want to execute the standard “Image Classifier API” by uploading some test data, and of course call the same API using our own “model”!
The previous content which is part of this little hands-on ML series can be found here:
1 | Getting started with SAP Leonardo ML Foundation on SAP Cloud Platform Cloud Foundry |
2 | SAP Leonardo ML Foundation – Retraining part1 |
3 | SAP Leonardo ML Foundation – Retraining part2 (this blog) |
4 | SAP Leonardo ML Foundation – Bring your own model (BYOM) |
If we go back to the required steps, we have finished 50% by now:
- Uploading the data for the training ✓
- Executing the retraining job ✓
So let's continue with the last two steps:
- Deploy the model
- Execute the image classifier API
Recap: Get the model info
To get an overview of the model versions we've already trained (in the previous blog), we can execute the following API call to get the details:
Details:
HTTP Method | GET |
URL | <RETRAIN_API_URL> |
PATH | /v1/models/{modelName} |
HEADER | Authorization (OAuth2 Access Token) |
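As a reference, here is a minimal Python sketch of this call using the requests library. The retrain API URL and the OAuth2 access token are placeholders you have to replace with your own values, and the Bearer scheme is my assumption for passing the token:

import requests

# Placeholders – replace with your own values
RETRAIN_API_URL = "https://<your-retrain-api-url>"
ACCESS_TOKEN = "<your-oauth2-access-token>"

# Assumption: the OAuth2 access token is passed as a Bearer token
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

# List all trained versions of the "flowers-demo" model
response = requests.get(RETRAIN_API_URL + "/v1/models/flowers-demo", headers=headers)
response.raise_for_status()

for model in response.json():
    print(model["modelVersion"], model["metaData"]["accuracy"])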
Response:
[
{
"uploadDate": "15 Feb 2018 09:02:21 UTC",
"modelName": "flowers-demo",
"metaData": {
"batchSize": "64",
"maxEpochs": 100,
"jobId": "flowers-2018-02-15t0851z",
"maxUnimprovedEpochs": 10,
"accuracy": "98.49999547%",
"learningRate": "0.001"
},
"modelVersion": "3"
},
{
"uploadDate": "15 Feb 2018 07:54:33 UTC",
"modelName": "flowers-demo",
"metaData": {
"batchSize": "64",
"maxEpochs": 100,
"jobId": "flowers-2018-02-15t0744z",
"maxUnimprovedEpochs": 10,
"accuracy": "100.0%",
"learningRate": "0.001"
},
"modelVersion": "2"
},
{
"uploadDate": "14 Feb 2018 13:11:49 UTC",
"modelName": "flowers-demo",
"metaData": {
"batchSize": "64",
"maxEpochs": 100,
"jobId": "flowers-2018-02-14t1308z",
"maxUnimprovedEpochs": 10,
"accuracy": "100.0%",
"learningRate": "0.001"
},
"modelVersion": "1"
}
]
Deploy the model
Based on the information above, it's now simple to deploy the model by executing the following API call.
Details:
HTTP Method | POST |
URL | <RETRAIN_API_URL> |
PATH | /v1/deployments |
HEADER | Authorization (OAuth2 Access Token) |
Body:
{
"modelName": "flowers-demo",
"modelVersion": "3"
}
…but what's happening here:
{
"message": "Error during deployments: \"1 version of model:flowers-demo already exist. To Proceed undeploy a version first.\""
}
What happened here?
The error above tells us that a version of the model “flowers-demo” is already deployed!
To resolve this issue we need to undeploy the active version. For the undeployment it's necessary to know the id, which we can get by calling the following API.
1. List the deployments
Get the deployments:
Details:
HTTP Method | GET |
URL | <RETRAIN_API_URL> |
PATH | /v1/deployments |
HEADER | Authorization (OAuth2 Access Token) |
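Again a small Python sketch of this call (same placeholders and Bearer-token assumption as in the first sketch):

import requests

RETRAIN_API_URL = "https://<your-retrain-api-url>"                # placeholder
headers = {"Authorization": "Bearer <your-oauth2-access-token>"}  # placeholder

# List all active deployments – we need the "id" of the existing one
response = requests.get(RETRAIN_API_URL + "/v1/deployments", headers=headers)
response.raise_for_status()

for deployment in response.json():
    print(deployment["id"], deployment["modelName"], deployment["modelVersion"],
          deployment["deploymentStatus"]["state"])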
Response:
[
{
"id": "1906aaed-b150-4b92-919b-192aea45b5ab",
"modelName": "flowers-demo",
"deploymentStatus": {
"description": "Service [tfs-6b5fe984-a8fa-45ec-9b08-fe77d599d34d] is ready.",
"state": "SUCCEEDED"
},
"modelVersion": "1"
}
]
Please note the “id” here; we need it in the next step for the undeployment.
2. Delete the existing deployment
Now that we have all the information, we can execute the call, passing the id:
Details:
HTTP Method | DELETE |
URL | <RETRAIN_API_URL> |
PATH | /v1/deployments/{id} |
HEADER | Authorization (OAuth2 Access Token) |
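A Python sketch of the undeployment call could look like this (the deployment id is the one returned by the call above; placeholders as before):

import requests

RETRAIN_API_URL = "https://<your-retrain-api-url>"                # placeholder
headers = {"Authorization": "Bearer <your-oauth2-access-token>"}  # placeholder

deployment_id = "1906aaed-b150-4b92-919b-192aea45b5ab"  # the "id" from the deployment list

# Undeploy the currently active version of the model
response = requests.delete(RETRAIN_API_URL + "/v1/deployments/" + deployment_id,
                           headers=headers)
print(response.status_code)  # 204 means the undeployment was accepted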
Response:
HTTP 204 (No Content) > That's OK!
Now deploy…dude
API Call:
Details:
HTTP Method | POST |
URL | <RETRAIN_API_URL> |
PATH | /v1/deployments |
HEADER | Authorization (OAuth2 Access Token) |
Body:
{
"modelName": "flowers-demo",
"modelVersion": "3"
}
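And the deployment itself as a Python sketch (placeholders as before):

import requests

RETRAIN_API_URL = "https://<your-retrain-api-url>"                # placeholder
headers = {"Authorization": "Bearer <your-oauth2-access-token>"}  # placeholder

# Deploy version 3 of the retrained "flowers-demo" model
payload = {"modelName": "flowers-demo", "modelVersion": "3"}
response = requests.post(RETRAIN_API_URL + "/v1/deployments", headers=headers, json=payload)
response.raise_for_status()
print(response.json())  # contains the id of the new deployment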
Response:
{
"id": "2ab82680-79c0-450b-8401-03b28b1fa663"
}
If we re-run the GET call for the deployment overview, we can see that “modelVersion” 3 is now active.
All fine ;o)
[
{
"id": "2ab82680-79c0-450b-8401-03b28b1fa663",
"modelName": "flowers-demo",
"deploymentStatus": {
"description": "Service [tfs-cf02efcd-d3ed-49ad-8787-07bd3c317e92] is ready.",
"state": "SUCCEEDED"
},
"modelVersion": "3"
}
]
Executing the API
Finally, we are ready to execute the Image Classifier API (with the standard model) to validate what's happening.
Source image: a photo of a rose (1485142251_ca89254442.jpg)
API:
Details:
HTTP Method | POST |
URL | <IMAGE_CLASSIFICATION_URL> |
PATH | /inference_sync |
HEADER | Authorization (OAuth2 Access Token) |
Body (form-data):
files | Archive: zip, tar, gz, tgz, or image file: jpg, jpe, jpeg, png, gif, bmp, tif, tiff |
options | {empty} = no custom model |
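A minimal Python sketch of this multipart request. The image classification URL, the token and the local file name are placeholders; leaving out the “options” field means the standard model is used:

import requests

IMAGE_CLASSIFICATION_URL = "https://<your-image-classification-url>"  # placeholder
headers = {"Authorization": "Bearer <your-oauth2-access-token>"}      # placeholder

# Send the test image to the standard image classifier (no "options" field)
with open("1485142251_ca89254442.jpg", "rb") as image_file:
    files = {"files": ("1485142251_ca89254442.jpg", image_file, "image/jpeg")}
    response = requests.post(IMAGE_CLASSIFICATION_URL + "/inference_sync",
                             headers=headers, files=files)

response.raise_for_status()
print(response.json())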
Standard API Response:
{
"_id": "276e0cbf-89cd-468d-6e4b-fcf5039dc957",
"predictions": [
{
"name": "1485142251_ca89254442.jpg",
"results": [
{
"label": "artichoke, globe artichoke",
"score": 0.8144451379776001
},
{
"label": "coil, spiral, volute, whorl, helix",
"score": 0.07173910737037659
},
{
"label": "velvet",
"score": 0.02427002228796482
},
{
"label": "snail",
"score": 0.014293165877461433
},
{
"label": "mask",
"score": 0.008539138361811638
}
]
}
],
"processedTime": "2018-02-15T12:57:58.689744",
"status": "DONE"
}
As we can see, our “rose” is detected as an “artichoke”.
So hopefully our own model works better…
Let's now execute our “new” model by adapting the API call:
Details:
HTTP Method | POST |
URL | <IMAGE_CLASSIFICATION_URL> |
PATH | /inference_sync |
HEADER | Authorization (OAuth2 Access Token) |
Body (form-data):
files | Archive: zip, tar, gz, tgz, or image file: jpg, jpe, jpeg, png, gif, bmp, tif, tiff |
options | {"modelName": "{your model}", "modelVersion": "{deployed version}"}, e.g. {"modelName": "flowers-demo", "modelVersion": "3"} |
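The same call as a Python sketch, this time with the “options” form field pointing to our retrained model (placeholders as before):

import requests

IMAGE_CLASSIFICATION_URL = "https://<your-image-classification-url>"  # placeholder
headers = {"Authorization": "Bearer <your-oauth2-access-token>"}      # placeholder

# The "options" form field selects our own retrained model instead of the standard one
data = {"options": '{"modelName": "flowers-demo", "modelVersion": "3"}'}

with open("1485142251_ca89254442.jpg", "rb") as image_file:
    files = {"files": ("1485142251_ca89254442.jpg", image_file, "image/jpeg")}
    response = requests.post(IMAGE_CLASSIFICATION_URL + "/inference_sync",
                             headers=headers, files=files, data=data)

response.raise_for_status()
print(response.json())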
“flowers-demo” model API response:
{
"_id": "94f6858b-d9fa-4370-707e-e5ce08cb89e8",
"predictions": [
{
"name": "1485142251_ca89254442.jpg",
"results": [
{
"label": "roses",
"score": 0.999797523021698
},
{
"label": "sunflowers",
"score": 0.00012994145799893886
},
{
"label": "tulips",
"score": 0.00007257211109390482
}
]
}
],
"processedTime": "2018-02-15T13:00:20.584463",
"status": "DONE"
}
This looks better: the “rose” is now detected with a score of 0.9997.
For a first try, that's not a bad result.
Conclusion
I hope this hands-on style blog has shown you how you can use the SAP Leonardo ML Foundation retraining.
For me it was the first contact with it, and yes, sometimes a little bit tricky. In particular, the official documentation could be improved with more “real” examples to make it easier to get started.
One of my next challenges will be to interact with one of the ML APIs in the context of IoT-related or S/4HANA business data. Let's see what happens…
Please feel free to ask me if something is not clear.
And because I love to show how easy (mostly) it is to work with this new stuff, I will give a little talk about this (if the session is accepted) at the upcoming “SAP Inside Track Frankfurt 2018”. See you there ;o)
cheers,
fabian
Helpful Links
SAP Leonardo ML Foundation: https://help.sap.com/viewer/product/SAP_LEONARDO_MACHINE_LEARNING_FOUNDATION/1.0/en-US
Comments
Hi,
to use the retrained model you should use this link:
"https://<myProductiveEnvironment>.cfapps.eu10.hana.ondemand.com/api/v2/image/classification/models/{modelName}/versions/{version}"
e. g. "/models/flowers-demo/versions/1"
Best regards,
Kevin
Thank you, Fabian, for the excellent documentation on how to retrain the SAP Leonardo image classification model. I was able to do that successfully with the help of the SAP Help Portal, your blog, and the comments on your blog. I was able to test the retrained model using the URL https://mlftrial-image-classifier.cfapps.eu10.hana.ondemand.com/api/v2/image/classification/models/flowers-demo/versions/1
Thanks,
Amey