Consuming SAP HANA Cloud from the Kyma environment
(January 2023: this material can now be found in the help at Map an SAP HANA Database to another Environment Context | SAP Help Portal, including command-line instructions)
SAP Business Technology Platform (BTP) applications looking to store their data in SAP HANA Cloud typically do so using a HANA Cloud schema or HDI container, which is hosted in an SAP HANA database instance. For a long time, most applications on BTP were developed and deployed on the Cloud Foundry runtime environment, but recently the Kubernetes-based Kyma runtime has become increasingly important for developing and hosting applications.
Until October 2022, access to SAP HANA Cloud relied on Cloud Foundry, but with the release of the multi-environment HANA Cloud tools, that changed. You can now access the HANA Cloud tools to provision and manage database instances at the level of a subaccount (rather than a Cloud Foundry space), or you can use the btp command line interface to create, start, and stop instances. However, even with this new multi-environment approach, you could not develop applications in Kyma that used HANA HDI containers or HANA schemas to store data.
(Disclaimer: This is probably a pretty specialized post. It takes the terms in the paragraphs above as understood. If they don’t mean anything to you, this may not be the blog post for you.)
If you prefer a video introduction to this topic, you can get that from the always-excellent SAP HANA Academy channel on YouTube here.
With the December 2022 release of the SAP HANA Cloud Tools, you can now develop Kyma applications that work with HANA HDI containers and schemas. This blog post explains one essential part of what you need to do.
The overall process is as follows:
- From the multi-environment edition of SAP HANA Cloud Central or using the btp CLI, provision a HANA database instance in a subaccount.
- From the multi-environment edition of SAP HANA Cloud Central, use the “instance mapping” feature to map the instance into a Kyma namespace (or Cloud Foundry space), either in the same subaccount or in a different subaccount.
- From the Kyma dashboard or using the kubectl CLI, create an HDI container or HANA schema service instance that creates a container or instance in the “mapped” HANA Cloud database instance.
- From the Kyma dashboard or using the kubectl CLI, create a service binding to the HDI container.
- Your application can now consume the HANA Cloud service through the service binding.
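For step 1, the btp CLI route can be sketched roughly as follows. This is a sketch only: the offering and plan names and the parameter keys shown here are assumptions, so verify them with `btp list services/offering` in your own subaccount before relying on them.

```shell
# Sketch: provision a HANA Cloud database instance via the btp CLI.
# Offering/plan names and the parameters JSON below are assumptions;
# check "btp list services/offering" for the values in your subaccount.
btp create services/instance \
  --subaccount <subaccount-id> \
  --offering-name hana-cloud \
  --plan-name hana \
  --name my-hana-db \
  --parameters '{"data": {"memory": 32}}'
```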
This blog post covers steps 2, 3, and 4. The process is not simple, but once the instance mapping is done, you may be able to use that HANA Cloud database instance for many HDI containers or schemas.
In your subaccount you must subscribe to both the SAP HANA Cloud service and the SAP HANA Schemas & HDI Containers service.
Make sure you have subscribed to the tools application, that is, the SAP HANA Cloud multi-environment tools.
Also make sure your SAP HANA Schemas & HDI Containers service is available in the Kyma environment. Here I will just use the schema service plan.
Map a HANA Cloud database instance into a Kyma namespace
I assume you have provisioned an SAP HANA Cloud instance in your subaccount using the multi-environment HANA Cloud tools, and that you have a Kyma namespace from which you want to access the instance. For me, that namespace is in the same subaccount, but that need not be so. Here is my namespace tom-namespace in the Kyma dashboard.
To map a HANA Cloud database instance you must also know the Environment Instance ID of the Kyma runtime environment. In the Kyma dashboard you can find this under Namespaces > kyma-system.
Click the kyma-system link, then go to Configuration > Config Maps, and search for sap-btp-operator-config.
Open the sap-btp-operator-config Config Map and copy the CLUSTER_ID value to the clipboard. You will need to paste this into the instance mapping dialog in SAP HANA Cloud Central.
Now, from SAP HANA Cloud Central, you can set up the instance mapping into your Kyma namespace.
From SAP HANA Cloud Central, click the three dots and open Manage Configuration.
Click Edit in the top right, then Instance Mapping, then Add Mapping. Choose a Kyma environment type, enter the Cluster ID and the namespace. It should then look as follows. Notice that this instance has also been mapped into a Cloud Foundry space.
This completes the instance mapping step. The instance will not show up in lists of services in the mapped namespace, but can still be consumed by the SAP HANA Schemas & HDI Containers service. Instance mapping is described in the SAP HANA Cloud product documentation, here.
A word on instance sharing
I used the description “instance mapping” here. SAP BTP also has a related concept of “instance sharing”, which is not implemented in HANA Cloud. For most developers, instance sharing is not what they want – instead, they want to create and bind to a HANA schema or HDI container from their application: that goal is better achieved using instance mapping.
Create a HANA Schema service instance
The rest of this procedure is carried out from the Kyma dashboard. To create a service instance of a HANA schema, open your namespace (mine is tom-namespace), click BTP Service Instances, then click Create Service Instance.
In the dialog, enter a name (mine is tom-schema-instance), the Offering name “hana”, and the Plan name “schema”. You can find these names in BTP Cockpit, in the SAP HANA Schemas & HDI Containers tile in the Service Marketplace.
Click Create to finish. Your Schema service instance is now available.
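The same service instance can be created with kubectl instead of the dashboard, by applying an SAP BTP service operator custom resource. This is a minimal manifest, assuming the names used in this post (tom-schema-instance, tom-namespace):

```yaml
# ServiceInstance custom resource for the SAP BTP service operator.
# The names here are the ones used in this post; adjust for your namespace.
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: tom-schema-instance
  namespace: tom-namespace
spec:
  serviceOfferingName: hana    # offering name from the Service Marketplace
  servicePlanName: schema      # the schema plan used in this post
```

Apply it with `kubectl apply -f schema-instance.yaml` and check progress with `kubectl get serviceinstances -n tom-namespace`.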
Aside: You were not asked which HANA instance this schema should be created in, because only one instance was available (the one you mapped into this namespace). If more than one had been mapped, you would have been prompted to choose. If you click the service instance name to see its properties, you will see an Instance ID: it matches the instance ID of the HANA database instance you mapped from SAP HANA Cloud Central.
Create a service binding
The final step is to create a Service Binding, so that Kyma applications can bind to this Schema. In Kyma Dashboard, open the namespace again and go to Service Bindings > Create Service Binding.
The only entry in the Service Instance Name drop down is the tom-schema-instance that I created above. Give your service binding a name (tom-schema-binding for me) and click Create.
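As with the instance, the binding can also be created with kubectl. A minimal manifest, again assuming the names used in this post:

```yaml
# ServiceBinding custom resource for the SAP BTP service operator.
# It references the schema service instance created above.
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: tom-schema-binding
  namespace: tom-namespace
spec:
  serviceInstanceName: tom-schema-instance
```

Apply it with `kubectl apply -f schema-binding.yaml`; the operator then creates the Kubernetes Secret holding the credentials.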
Take a look at the binding
The service binding is stored as a Kubernetes Secret in your Kyma namespace. Still in the Kyma dashboard, click the binding you just created in the Service Bindings list; a new window opens showing its encoded contents. Click Decode at the top right to show the details. A snippet is shown below.
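You can read the same secret with kubectl instead of the dashboard. Its values are base64-encoded, so each key needs decoding; the key name `user` below is an assumption, as the exact keys depend on the service:

```shell
# Reading one key of the binding secret with kubectl (binding and key
# names are from this post and may differ in your cluster):
#   kubectl get secret tom-schema-binding -n tom-namespace \
#     -o jsonpath='{.data.user}' | base64 -d
# The decoding step itself is plain base64, for example:
printf '%s' 'dG9tLXNjaGVtYQ==' | base64 -d   # prints: tom-schema
```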
You can now use this binding in Kyma applications to store your data: but Kyma application development is beyond the scope of this blog post (and of my expertise). I hope this has smoothed out at least one step in your Kyma development journey.
With Cloud Foundry, we have the limitation that the mapping can only be done with subaccounts belonging to the same region.
Is it the same limitation with Kyma?
Hi Michael Cocquerel I need to check this question with some colleagues and will follow up once I have an answer.
From my experience, a Kyma cluster could be anywhere; however, you will need the btp service operator pointing to the BTP subaccount of your HANA Cloud database instance if you want to be able to create HDI containers and schemas on Kyma.
As things stand, when you provision an SAP BTP, Kyma runtime cluster from an SAP BTP subaccount, you can choose to provision your SKR cluster in a region other than that of the BTP subaccount, but the btp service operator will still point to the BTP subaccount...
I hope that helps; Piotr
For those who prefer to watch, here is a video tutorial on the topic from Philip Mugglestone.
An excellent tutorial from Philip Mugglestone as always, and it covers more ground than this blog post.
Hi Tom, many thanks to the team for delivering this long-awaited Kyma cluster mapping functionality.
I just came across a small bug. When you are in HANA Cloud Central going through the database instance creation wizard, one of the steps offers an option to define the mappings.
However, the wizard only allows for Cloud Foundry environment mapping. Eventually, after the database instance has been created, the Kyma mapping can be done from Manage Configuration > Instance Mapping.
kind regards; Piotr
Thanks Piotr, I've reported that bug and it should get fixed before too long.
Thank you Tom,
What could also be a cool feature is to maintain the Kyma clusters' egress IP addresses automatically.
Every time a mapping is done, and if the allow-all-IPs option is not selected, HC could add or remove the egress IP addresses of the Kyma clusters being mapped or unmapped;
otherwise one still needs to allow all IP addresses, or retrieve the egress IPs of the Kyma clusters and maintain them manually.
Given that each Kyma cluster is multi-zone and has multiple egress IP addresses, manual IP maintenance is hardly the way....
kind regards; Piotr
A hint on how to retrieve the SAP BTP, Kyma runtime cluster ID:
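Based on the config map described earlier in the post, a one-liner like this should work (a sketch; the key name CLUSTER_ID is the one shown in the Kyma dashboard):

```shell
# Read the cluster ID straight from the btp-operator config map,
# the same config map shown in the dashboard steps in the post:
kubectl get configmap sap-btp-operator-config -n kyma-system \
  -o jsonpath='{.data.CLUSTER_ID}'
```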