Develop Spring Boot App with HANA Cloud in Cloud Foundry

By Chao Xu

In this post, you will learn:

  • How to build a Spring Boot App consuming HANA Cloud

  • How to deploy Spring Boot App in SAP BTP Cloud Foundry Environment

  • Comparison between consuming HANA Cloud instance directly and through hdi-shared/schema

  • Comparison between consuming HANA Cloud with JDBC and JPA

  • Comparison between HANA Cloud users and their authorizations

  • How to get Trial Account on SAP BTP@AliCloud

Trial Account (SAP BTP@AliCloud)

If you are an SAP employee, access SAP BTP Control Center – Canary – Alibaba Cloud to grant the service entitlements to your Global Account.

If you are a partner or start-up, read this article to apply for a BTP TDD (Test, Demonstration and Development) license account.

If you are a potential customer, please contact sales to apply for an Enterprise Trial according to this SAP internal page.

If you are a personal developer, start by reading Get a Free Account on SAP BTP Trial to get a global free trial account.


Make sure the following service entitlements are assigned to your subaccount:

Service                                      Plan
SAP HANA Cloud                               hana
SAP HANA Cloud                               hana-cloud-connection
SAP HANA Cloud                               relational-data-lake
SAP HANA Schemas & HDI Containers (HANA)     schema / hdi-shared
Cloud Foundry Runtime                        MEMORY

Step 1: Create a Spring Boot Application Consuming HANA Cloud

1. Create a raw Spring Boot project

Create a brand new project with the official tool Spring initializr, and add the following dependencies:

  • Spring Web

  • Spring Data JPA

Click the Generate button to download the zipped project, then import it into your IDE.

2. Add dependency

Add the HANA Cloud JDBC dependency in pom.xml

MVN Repository – SAP HANA JDBC Driver

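The dependency snippet did not survive extraction. A minimal sketch, assuming the ngdbc artifact from the linked Maven Repository page, with the driver version matching the one used later in this post (2.9.16):

```xml
<!-- SAP HANA JDBC driver (ngdbc); version 2.9.16 is the one used in Step 3 -->
<dependency>
    <groupId>com.sap.cloud.db.jdbc</groupId>
    <artifactId>ngdbc</artifactId>
    <version>2.9.16</version>
</dependency>
```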

3. Add model, service, and controller

Create a new package model under the main package, and add a new class JobExecutionStatus:


import javax.persistence.*;

// @Entity and @Id are required by JPA; JOB_ID serves as the primary key.
@Entity
@Table(name = "\"ECM_JOB_EXECUTION_STATUS\"")
public class JobExecutionStatus {

    @Id
    @Column(name = "\"JOB_ID\"", length = 200)
    private String jobId;

    @Column(name = "\"JOB_NAME\"", length = 190)
    private String jobName;

    @Column(name = "\"START_TIME\"")
    private long startTime;

    @Column(name = "\"STATUS\"", length = 16)
    private String status;

    @Column(name = "\"RESULT\"", length = 250)
    private String result;

    public String getJobId() {
        return jobId;
    }

    public void setJobId(String jobId) {
        this.jobId = jobId;
    }

    public String getJobName() {
        return jobName;
    }

    public void setJobName(String jobName) {
        this.jobName = jobName;
    }

    public long getStartTime() {
        return startTime;
    }

    public void setStartTime(long startTime) {
        this.startTime = startTime;
    }

    public String getStatus() {
        return status;
    }

    public void setStatus(String status) {
        this.status = status;
    }

    public String getResult() {
        return result;
    }

    public void setResult(String result) {
        this.result = result;
    }
}


Create a new package dao under the main package, and add a new class Database:


import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConfigurationProperties(prefix = "database")
public class Database {
    String url;
    String username;
    String password;

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }
}

Under the main package, add a new class NativeSQL:


import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;


@Component
public class NativeSQL {

    Logger logger = LoggerFactory.getLogger(NativeSQL.class);
    private Connection connection = null;

    @Autowired
    Database db;

    public void createConnection() {
        long startTime = System.currentTimeMillis();"Java version: " + System.getProperty("java.version"));

        try {
            connection = DriverManager.getConnection(db.getUrl(), db.getUsername(), db.getPassword());

            if (connection != null) {
      "Connection to HANA successful!");
            }

            long executeTime = System.currentTimeMillis() - startTime;
  "HANA Connection Time:" + executeTime);
        } catch (SQLException e) {
            logger.error("Connection Failed:", e);
        }
    }

    public void insertJob(JobExecutionStatus job) {
        if (connection != null) {
            try {
                long startTime = System.currentTimeMillis();

                PreparedStatement pstmt = connection.prepareStatement(
                        "INSERT INTO \"ECM_JOB_EXECUTION_STATUS\" (\"JOB_ID\", \"JOB_NAME\", \"RESULT\", \"START_TIME\", \"STATUS\") VALUES (?, ?, ?, ?, ?)");
                pstmt.setNString(1, job.getJobId());
                pstmt.setNString(2, job.getJobName());
                pstmt.setNString(3, job.getResult());
                pstmt.setLong(4, job.getStartTime());
                pstmt.setNString(5, job.getStatus());
                pstmt.executeUpdate();

                long executeTime = System.currentTimeMillis() - startTime;
      "Insert Job:" + job.getJobId() + ":" + executeTime);
            } catch (SQLException e) {
                logger.error("Insert failed!", e);
            }
        }
    }

    public void updateJob(JobExecutionStatus job) {
        if (connection != null) {
            try {
                long startTime = System.currentTimeMillis();

                PreparedStatement pstmt = connection.prepareStatement(
                        "UPDATE \"ECM_JOB_EXECUTION_STATUS\" SET \"RESULT\" = ?, \"STATUS\" = ? WHERE (\"JOB_ID\" = ?)");
                pstmt.setNString(1, job.getResult());
                pstmt.setNString(2, job.getStatus());
                pstmt.setNString(3, job.getJobId());
                pstmt.executeUpdate();

                long executeTime = System.currentTimeMillis() - startTime;
      "Update Job:" + job.getJobId() + ":" + executeTime);
            } catch (SQLException e) {
                logger.error("Update failed!", e);
            }
        }
    }

    public void getJob(String jobId) {
        if (connection != null) {
            try {
                long startTime = System.currentTimeMillis();

                PreparedStatement pstmt = connection.prepareStatement(
                        "SELECT \"JOB_ID\", \"JOB_NAME\", \"RESULT\", \"START_TIME\", \"STATUS\" FROM \"ECM_JOB_EXECUTION_STATUS\" WHERE (\"JOB_ID\" = ?)");
                pstmt.setNString(1, jobId);

                ResultSet rs = pstmt.executeQuery();

                if (rs != null && {
                    String job_id = rs.getString("JOB_ID");
                    String job_name = rs.getString("JOB_NAME");
                    long start_time = rs.getLong("START_TIME");
                    String status = rs.getString("STATUS");
                    String result = rs.getString("RESULT");
           + "-" + job_name + "-" + start_time + "-" + status + "-" + result);
                }

                long executeTime = System.currentTimeMillis() - startTime;
      "Query Job:" + jobId + ":" + executeTime);
            } catch (SQLException e) {
                logger.error("Query failed!", e);
            }
        }
    }

    public void closeConnection() {
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException e) {
                logger.error("Close failed!", e);
            }
            connection = null;
        }
    }
}



Create a new package service under the main package, and add a new class NativeSQLRunner:


import java.util.UUID;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;


@Service
public class NativeSQLRunner {

    private static final int MAX_RUN = 100;

    @Autowired
    NativeSQL nativeSQL;

    // Runs asynchronously so the HTTP request returns immediately
    // (@EnableAsync must be set on the application class).
    @Async
    public void startTest() {
        nativeSQL.createConnection();
        for (int i = 0; i < MAX_RUN; i++) {
            // Reconstructed loop body: populate a job, then exercise insert/update/query.
            JobExecutionStatus job = new JobExecutionStatus();
            job.setJobId(UUID.randomUUID().toString());
            job.setJobName("DemoJob-" + i);
            job.setStartTime(System.currentTimeMillis());
            job.setStatus("In Progress");
            nativeSQL.insertJob(job);

            job.setStatus("Completed");
            job.setResult("Instance creation successfully completed");
            nativeSQL.updateJob(job);
            nativeSQL.getJob(job.getJobId());
        }
        nativeSQL.closeConnection();
    }
}



Create a new package controller under the main package, and add a new class TestController:


import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;


@RestController
public class TestController {

    @Autowired
    NativeSQLRunner runner_native_sql;

    @RequestMapping("/")
    public String hello() {
        return "Hello!";
    }

    @RequestMapping("/test_native_sql")
    public String test_native_sql() {
        runner_native_sql.startTest();
        return "Test Native SQL Started!";
    }
}


4. Configure application properties


#Spring Boot JPA
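The property values themselves did not survive extraction. A minimal sketch, assuming the `database` prefix from the Database configuration class above; the host, credentials, and the JPA dialect line are placeholders and assumptions:

```properties
# Consumed by the Database @ConfigurationProperties class (prefix "database");
# replace host and credentials with your own instance values.
database.url=jdbc:sap://<hana-instance-host>:443/?encrypt=true
database.username=<username>
database.password=<password>

# Spring Boot JPA (only needed when using the JPA path; dialect is an assumption)
spring.jpa.database-platform=org.hibernate.dialect.HANAColumnStoreDialect
```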

Step 2: Create HANA Cloud Instance on SAP BTP@AliCloud

Please follow this guide to provision a HANA Cloud instance.

Step 3: Create Table

Log in to the HANA Database Explorer and create a table in your HANA Cloud instance by executing an SQL statement:
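The statement itself did not survive extraction; based on the JDBC command shown later in this step, it would be:

```sql
CREATE TABLE ECM_JOB_EXECUTION_STATUS(
    JOB_ID     VARCHAR(500),
    JOB_NAME   VARCHAR(500),
    START_TIME VARCHAR(500),
    STATUS     VARCHAR(500),
    RESULT     VARCHAR(500)
);
```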


Alternatively, you can create the table from the command line with the JDBC driver; for more details, read Connect to SAP HANA Cloud via JDBC:

java -jar ngdbc-2.9.16.jar -u <username>,<password> -n <hana-instance-host>:443 -o encrypt=true -c "CREATE TABLE ECM_JOB_EXECUTION_STATUS(JOB_ID VARCHAR(500),JOB_NAME VARCHAR(500),START_TIME VARCHAR(500),STATUS VARCHAR(500),RESULT VARCHAR(500))"

Check the new table in the HANA Database Explorer

Step 4: Run and Test locally

Run the project locally:

mvn clean package
java -jar target/SpringBootHANADemo-0.0.1-SNAPSHOT.jar

Access http://localhost:8080/ in the browser; it returns a hello message:

Access http://localhost:8080/test_native_sql in the browser; it writes data into the table in the background and returns a success message.

Check the written data in the HANA Database Explorer:

Step 5: Deploy the Spring Boot App to SAP BTP Cloud Foundry Environment

Under the root directory of the project, create a manifest.yml file with the following content (more detailed configuration options can be found here):

applications:
  - name: springboothanademo
    host: <host>
    path: target/SpringBootHANADemo-0.0.1-SNAPSHOT.jar
    domain: <custom-domain>
    memory: 1G

Replace host with a unique name, so it does not clash with other deployed applications.

Replace custom-domain with a domain available in your org; you can list the available domains by executing the CF command cf domains. For BTP@AliCloud, a custom domain is necessary due to compliance requirements in China. More details on how to get your own custom domain can be found here: Use custom domain in SAP Cloud Platform on Alibaba Cloud

The URL of your application will be: host.custom-domain.

For example:

applications:
  - name: springboothanademo
    host: springboothanademo
    path: target/SpringBootHANADemo-0.0.1-SNAPSHOT.jar
    memory: 1G

Under the root directory of the project, execute:

cf push

This command will deploy the springboothanademo application into your Cloud Foundry environment. When you push an app, Cloud Foundry automatically detects an appropriate buildpack for it. This buildpack is used to compile or prepare your app for launch. In our case, it is the Java buildpack.

cf push must be executed in the directory where manifest.yml is located.

If your app deployment fails, you can troubleshoot by checking its logs:

cf logs springboothanademo --recent

When the staging and deployment steps are complete, you can check the state and URL of your application through Cockpit or CF command:

cf apps

Open a browser window and enter the URL of the springboothanademo application, for example:

Step 6: Get HANA Cloud Credential with Java CFEnv


Until now, you may have noticed that the HANA Cloud credentials are maintained in the project code, which is neither secure nor convenient. Instead, we can obtain the credentials at runtime from the environment by binding a service instance to the app. Note that the service instance we bind here is not the HANA Cloud instance itself; rather, it is an hdi-shared (or schema) instance created on top of the HANA Cloud instance.

One of the main reasons is that the HANA Cloud instance doesn’t expose the DBADMIN credentials in its service keys. Let’s look at an example.

Let’s create a service key for the HANA Cloud instance, and have a look at its content:

    "url": "jdbc:sap://",
    "certificate": "-----BEGIN CERTIFICATE-----\nMIIDrzCCApegAwIBAgIQCDvgVpBCRrGhd...bQk\nCAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=\n-----END CERTIFICATE-----",
    "driver": "",
    "host": "",
    "port": "443"

However, let’s create a new hdi-shared instance and a new key for it, and have a look at the content in the key:

    "url": "jdbc:sap://",
    "certificate": "-----BEGIN CERTIFICATE-----\nMIIDrzCCApegAwI...7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=\n-----END CERTIFICATE-----",
    "driver": "",
    "hdi_password": "Gg3NquSSlSMN9H1c...5Z20jrGhBQAin.67IAeMfFGq_1",
    "hdi_user": "738720F9AFD34B7CAE1338E937A9F550_2HMTZ3RWEJUW0QG35AE4IMD89_DT",
    "host": "",
    "password": "Ff53CjMoNmj_hOaeFm...FuV5S-ThZCO9h7.aux",
    "port": "443",
    "schema": "738720F9AFD34B7CAE1338E937A9F550",
    "user": "738720F9AFD34B7CAE1338E937A9F550_2HMTZ3RWEJUW0QG35AE4IMD89_RT"

Explore hdi-shared and its users

The deployment of database objects with SAP HANA Deployment Infrastructure (HDI) is based on a container model where each container corresponds roughly to a database schema. Each schema, and the database objects deployed into the schema, are owned by a dedicated technical database user.

For every container deployed, a new technical database user and schema with the same name as the container are created. Additional schemas and technical users required for metadata and deployment APIs are also created.

These technical database users are used internally by HDI only. As a general rule, these users are created as restricted database users who do not have any privileges by default (not even the role PUBLIC) and cannot be used to log on to the database. The only exception to this rule concerns user schema#OO who is granted the role PUBLIC by default.

Every time the service instance is bound to an application, the service broker creates two new users specific to that binding. In other words, for each hdi-shared key you can find two pairs of username and password, as shown above:

  • Design-time (DT) user: "hdi_user"/"hdi_password"

  • Run-time (RT) user: "user"/"password"

Behind the scenes, a separate schema is created for each user; by default, their names are:

  • Schema for DT user: 738720F9AFD34B7CAE1338E937A9F550_2HMTZ3RWEJUW0QG35AE4IMD89_DT

  • Schema for RT user: 738720F9AFD34B7CAE1338E937A9F550_2HMTZ3RWEJUW0QG35AE4IMD89_RT

If you would like to use a custom name, you can pass a configuration parameter when creating the service instance and key.
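As a sketch, such a parameter can be passed to cf create-service via the -c flag; the JSON parameter name (schema) and instance name below are assumptions, so check the service plan documentation for your landscape:

```
cf create-service hana schema my-schema-instance -c '{"schema": "MY_CUSTOM_SCHEMA"}'
```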

The DT user and RT user are assigned different roles, which you can check in the HANA Cockpit under Security and User Management -> User Management:

  • Roles for DT user

    This user is equipped with privileges for the container’s APIs in the “schema#DI” schema.

  • Roles for RT user

    This user is assigned the service instance’s global access role “schema::access_role”.

Explore DBADMIN administration user

All SAP HANA Cloud, SAP HANA database instances have an administration user named DBADMIN. This user is reserved for use by customers. You can use this user to log on to the SAP HANA cockpit and perform all initial user administration tasks. For more detailed information, please read User Management and User Management with the SAP HANA Database Administrator DBADMIN.

The DBADMIN user:

  • Is a member and operator of the user group DEFAULT

  • Has the system privilege USERGROUP OPERATOR for the user group DEFAULT.

  • Has all the grantable permissions of the SYSTEM user, except the system privilege USER ADMIN. This prevents DBADMIN from changing predefined SAP HANA users in the SAP HANA database instance.

    Permissions for new objects created by SAP HANA Cloud components in the database must be granted to DBADMIN.

It is recommended that you do not use the DBADMIN user for day-to-day activities. Instead:

  1. Use DBADMIN to create database users for specific administrative tasks and assign them the privileges required.

  2. Then, deactivate the DBADMIN user.

You can check the Roles for the DBADMIN in the HANA Cockpit:

Comparison between HANA Cloud users

It is often necessary to specify different security policies for different types of database user. In the SAP HANA database, a distinction is made between database users that correspond to real people and technical database users.

Database Users that Correspond to Real People:

  • Every person who needs to work with SAP HANA must have a database user.

  • When database users that correspond to real people leave the organization, the DBA must intervene to delete these users from the database. When a database user is deleted, this means that any database objects that were owned by that user are also automatically dropped, and any privileges granted by that user are automatically revoked.

  • Database users are created with either the CREATE USER or CREATE RESTRICTED USER statement, or using the SAP HANA cockpit.

Technical Database Users:

  • Technical database users do not correspond to real people. They are therefore not dropped if a person leaves the organization. This means that they should be used for administrative tasks such as creating objects and granting privileges for a particular application.

  • Some technical users are available as standard, for example, the user SYS.

  • Other technical database users are created for application-specific purposes. For example, an application server may log on to the SAP HANA database using a dedicated technical database user, such as an HDI user.

  • Technical users are standard users created with the CREATE USER statement.

Comparison between consuming HANA Cloud instance directly and through hdi-shared/schema

We can consume a HANA Cloud instance without an hdi-shared/schema instance. However, hdi-shared/schema has strong advantages over consuming the HANA Cloud instance directly.

hdi-shared and schema are service plans of the SAP HANA Schemas & HDI Containers (HANA) service. This service generates a schema (if you use the schema plan) that allows your Spring Boot application to create its database artifacts. There are other plans, of course, which cater to different use cases. For example, if you follow the Cloud Application Programming model, it is recommended to model the database artifacts in CDS; in that case, the hdi-shared plan is used, which deploys the database artifacts from your CDS files.

The hdi-shared Service Plan:

  • Credential

    When you create and bind a service instance with the service plan hdi-shared, an application receives the credentials required for access to an HDI container, which is basically a database schema that is equipped with additional metadata.

  • Isolation

    HDI containers ensure isolation, and within an SAP HANA database you can define an arbitrary number of HDI containers. The same objects can be deployed multiple times into different HDI containers in the same SAP HANA database, for example, to install several instances of the same software product in the same SAP HANA database. HDI containers are isolated from each other by means of schema-level access privileges. Cross-container access at the database level is prevented by default but can be enabled by explicitly granting the necessary privileges, for example, using synonyms.

  • Persistence

    Database objects (tables, views, procedures, and so on) have an owner: the user who created the object. When the owner of a database object is deleted, all objects owned by the deleted user are removed from the database, too. In addition, if application objects are created by end users, the objects are deleted when the end user is deleted, for example when the employee leaves the organization. HDI ensures that during deployment all database objects are created by a container-specific technical user, which is never deleted as long as the container exists.

The schema Service Plan:

The schema service plan creates a plain schema, which you need to manage by hand. Consider using this service plan if your application uses an OR Mapper concept and a framework is available that creates the necessary database resources on demand.

Enhanced code with Java CFEnv

Java CFEnv is a library for easily accessing the environment variables set when deploying an application to Cloud Foundry. It is modeled after the design of the node library node-cfenv and other -cfenv libraries in the Cloud Foundry ecosystem.

You can access the stable release from Maven Central; add it to your pom.xml:
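The coordinates were lost in extraction. A minimal sketch, assuming the java-cfenv-boot Spring Boot integration artifact (the version shown is illustrative):

```xml
<!-- Java CFEnv Spring Boot support; check Maven Central for the latest version -->
<dependency>
    <groupId>io.pivotal.cfenv</groupId>
    <artifactId>java-cfenv-boot</artifactId>
    <version>2.4.0</version>
</dependency>
```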


And add a code snippet:

	@Bean
	public DataSourceProperties dataSourceProperties() {
		CfJdbcEnv cfJdbcEnv = new CfJdbcEnv();
		DataSourceProperties properties = new DataSourceProperties();
		CfCredentials hanaCredentials = cfJdbcEnv.findCredentialsByTag("hana");

		if (hanaCredentials != null) {
			String uri = hanaCredentials.getUri("hana");
			properties.setUrl(uri);
			properties.setUsername(hanaCredentials.getUsername());
			properties.setPassword(hanaCredentials.getPassword());
		}
		return properties;
	}

Modify manifest.yml file

Add the service instance to the manifest.yml file; Cloud Foundry automatically binds it to your app springboothanademo so that the app can read the credentials from its environment variables.

applications:
  - name: springboothanademo
    host: springboothanademo
    path: target/SpringBootHANADemo-0.0.1-SNAPSHOT.jar
    memory: 1G
    services:
      - hana-hdi-shared

Deploy app to Cloud Foundry again

cf push

Check app environment variables

You can check the app environment variables via executing the command:

cf env springboothanademo

Comparison between JDBC and JPA

To communicate with HANA Cloud, both JDBC and JPA (Hibernate, EclipseLink, etc.) are supported. In this post, we focus on consuming HANA Cloud via JDBC; JPA could be covered in a follow-up post if there is interest, so let me know in the comments below.

When it comes to deciding how to communicate with back-end database systems, software architects face a significant technological challenge. The debate between JPA and JDBC is often the deciding factor, as the two database technologies take very different approaches to work with persistent data. For more details on the key differences between them, you can read A Comparison Between JPA and JDBC.
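To give a taste of the JPA side of that comparison, here is a minimal sketch (not part of the original project) of how the hand-written SQL in the NativeSQL class could be replaced; it assumes the JobExecutionStatus entity carries @Entity and @Id annotations, and the repository name and query method are illustrative. This is a framework fragment, not a standalone program:

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data JPA derives the INSERT/UPDATE/SELECT statements at runtime:
//, repo.findById(...), repo.findByStatus(...) replace the
// insertJob/updateJob/getJob JDBC methods shown earlier.
public interface JobExecutionStatusRepository
        extends JpaRepository<JobExecutionStatus, String> {

    // Derived query: SELECT ... FROM ECM_JOB_EXECUTION_STATUS WHERE STATUS = ?
    List<JobExecutionStatus> findByStatus(String status);
}
```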

More References

Predefined Users

Security in SAP HANA Cloud

JDBC Application Programming

Data Types

Assigned Tags

      You must be Logged on to comment or reply to a post.
      Author's profile photo Princis RAKOTOMANGA
      Princis RAKOTOMANGA


Maria Woydich

      Thanks a lot for the detailed explanations. You are using Java 8 in your example. Have you already tried this with Java 11 (or 17)? I always get an error when pushing the app. For me it looks like the sap_java_buildpack is not supporting Java versions higher than 8. But maybe I am missing a setting.

      Thanks in advance

Biswaranjan Ray


By default the buildpack always downloads JDK 8 artifacts when the app is deployed to CF. Please consider adding the following env entries in your manifest.yml file:

      env:
        JBP_CONFIG_COMPONENTS: "jres: ['']"
        JBP_CONFIG_SAP_MACHINE_JRE: '{ jre: { version: "11.+" } }'

Yanbing Yu

      Hi Maria,

If you have to use Java 11 and the java-buildpack, you have to make sure you have configured the following in your manifest.yml:

          ... ...
          env:
              JBP_DEFAULT_OPEN_JDK_JRE: '{ jre: { version: "11.+" } }'

      When you push your code to BTP, run:

          cf push yourAppName -b java_buildpack