Deploying HANA ML with SAP Kyma Serverless Functions
In this blog post I will walk through quick, low-code steps to deploy SAP HANA Machine Learning workloads via the Python API client using SAP Kyma Serverless Functions.
This deployment method is most useful when you want to avoid the overhead of building and maintaining dedicated Docker images and managing a Docker repository.
For details on use of Kyma functions refer to SAP Help Documentation for Kyma Functions.
This approach works well when most of the workload, training or inferencing a machine learning model, runs on the SAP HANA Cloud layer and you simply want to provide an API to trigger the training or inference call. This is typically needed after model development is complete and an endpoint is required to call the inference.
If your code has significant dependencies on other libraries for data preparation and feature selection, then a full deployment on Kyma is recommended instead. The SAP-samples repository Kyma runtime-extension-samples provides sample applications you can use as starters. In that case the Docker image needs to include the hana-ml Python package.
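For the full-deployment route, a minimal Dockerfile sketch could look like the following (the base image tag and the `app.py` entrypoint are illustrative assumptions, not part of any SAP sample):

```
# Illustrative sketch: Python base image with the hana-ml client installed
FROM python:3.9-slim
RUN pip install hana-ml shapely
# app.py is a placeholder for your own application code
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```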
The same result can also be achieved via kubectl; here we use the Kyma Cockpit to show how easy it is to create a serverless function. For a developer-focused workflow there are jumpstart generators for VS Code which enable the same. A complete tutorial is available from the SAP HANA Academy: SAP HANA Academy for BTP Serverless Python.
To go through the steps, ensure you already have the following:
- A SAP HANA Cloud database instance and the credentials required to connect. This can be either a standalone HANA Cloud instance or the HANA Cloud database underlying an SAP Datasphere tenant.
- A BTP account with the SAP Kyma runtime set up as described here: Create Kyma Environment on BTP
Create Kyma Function to access HANA ML via Python API Client
Choose the Kyma namespace where you would like to deploy this functionality
- Add the HANA credentials required to connect as a Secret in Kyma (in our example, hanaauth)
- Go to Workloads and Create Function. This basic version works with the Function profile XS, but in case you have higher memory or CPU requirements for building (for example, when additional libraries are needed) or for compute, increase the resource requirements.
- Add Environment variables using the Secret
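If you prefer kubectl over the Cockpit, the Secret holding the credentials can be sketched as below. Only the Secret name hanaauth and the key names (which match the environment variables used in the function code) come from this walkthrough; the placeholder values are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hanaauth
type: Opaque
stringData:
  HANA_ADDRESS: <your-instance>.hanacloud.ondemand.com
  HANA_PORT: "443"
  HANA_USER: <db-user>
  HANA_PASSWORD: <db-password>
```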
- To enable the use of the hana-ml libraries we need to:
- Add hana-ml and shapely to the dependencies
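In the Dependencies tab this amounts to a requirements.txt-style list; version pins can be added if you need reproducible builds:

```
hana-ml
shapely
```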
- Go to the YAML view of the function and add runtimeImageOverride: hanaacademy/kyma-faas:python39 after runtime: python39. This is the most critical step: without the runtime override the Kyma function cannot build with the hana-ml dependency, because the default base image does not support it.

```yaml
runtime: python39
runtimeImageOverride: hanaacademy/kyma-faas:python39
```
```python
import os

import hana_ml
import hana_ml.dataframe as dataframe


def connectToHANA():
    # Connect using the credentials injected from the Kyma Secret
    conn = dataframe.ConnectionContext(
        address=os.environ.get('HANA_ADDRESS'),
        port=os.environ.get('HANA_PORT'),
        user=os.environ.get('HANA_USER'),
        password=os.environ.get('HANA_PASSWORD'),
        encrypt='true')
    print("HANA DB version:", conn.hana_version())


def main(event, context):
    message = ('Hello World from the Kyma Function ' + context['function-name']
               + ' running on ' + context['runtime']
               + ' hana_ml version: ' + hana_ml.__version__ + '!')
    print(message)
    connectToHANA()
    print("hana_ml version: " + hana_ml.__version__)
    return message
```
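Before deploying, the message-building logic of the handler can be exercised locally without a HANA connection. This small sketch mimics the context dictionary the Kyma Python runtime passes in; the function name and version string below are made up for the test:

```python
def build_message(context, hana_ml_version):
    # Same greeting assembled inside main() above
    return ('Hello World from the Kyma Function ' + context['function-name']
            + ' running on ' + context['runtime']
            + ' hana_ml version: ' + hana_ml_version + '!')

# Simulated Kyma context; real values are injected by the runtime
ctx = {'function-name': 'hana-ml-demo', 'runtime': 'python39'}
print(build_message(ctx, '2.16'))
```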
Create API rule to call the Kyma function for HANA ML
Once the function is created, we need to add an API rule to expose it outside the Kyma cluster
For this go to the Discovery and Network section -> API Rules
Creating the API rule is straightforward: you can choose any Name you like and provide a Subdomain, which adds a prefix to the host and helps distinguish this service from others you may create on the Kyma cluster
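For reference, an equivalent APIRule created via kubectl might be sketched roughly as follows; the resource names, host, and access strategy are assumptions based on the v1beta1 APIRule CRD, so check the documentation for your cluster's Kyma version:

```yaml
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: hana-ml-rule
spec:
  gateway: kyma-system/kyma-gateway
  host: hana-ml-demo           # the Subdomain chosen above (illustrative)
  service:
    name: hana-ml-function     # the Kyma Function's service (illustrative)
    port: 80
  rules:
    - path: /.*
      methods: ["GET"]
      accessStrategies:
        - handler: allow       # no auth; tighten this for production
```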
The above endpoint can now be integrated into the end-user application. Typically you would then add authorization, which can be done in Kyma. For example, this endpoint can also be called from SAP Analytics Cloud via an API step in SAC Multi Actions to trigger workloads on SAP HANA.
Testing and Debugging the endpoint
You can test the endpoint by calling it directly from the browser or via Postman, and check the logs of the corresponding pod in the Kyma console.
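A quick scripted check could look like this; the subdomain and cluster domain below are made-up placeholders for the host your API rule exposes:

```python
def endpoint_url(subdomain, cluster_domain, path='/'):
    # Assemble the public URL exposed by the API rule
    return 'https://' + subdomain + '.' + cluster_domain + path

url = endpoint_url('hana-ml-demo', 'c-1234abc.kyma.ondemand.com')
print(url)
# In practice you would then GET this URL,
# e.g. with urllib.request.urlopen(url), and inspect the response body
```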
Thanks for sharing, Nidhi Sawhney! In March 2023 the SAP Community Developer Challenge: EDA with SAP HANA and Python took place.
I used a trial account to build an environment with Cloud Foundry and access databases with the local Jupyter Notebook.
Thanks Iatco Sergiu for sharing your experience. Yes indeed, Cloud Foundry is another good way to deploy ML models if the compute can be offloaded to SAP HANA, for example.
As always, the deployment option is use-case dependent and based on scalability requirements. For cases where there is little code to manage, the inline capability of Kyma functions together with dynamic scalability can come in handy, especially for highly variable peak vs. regular workloads.