Erhard Weidenauer

General principles for SAP HANA Cloud native application development

SAP HANA Cloud is a true cloud product and an integral part of SAP BTP. In an on-premise set-up using SAP HANA XS classic, application development could be discussed and handled to a large extent within XS classic itself. For SAP HANA Cloud, XS advanced is the new and only way of building applications. This programming and server model relies more on shared services and Cloud Foundry. Consequently, some of your previous design principles need to be adapted. For instance, the database as well as the services and applications developed on top of it are fully integrated into Cloud Foundry and adhere to its security policies.

But let us start easy. When it comes to native database development, you basically deal with the following:

  • Data, e.g., tables and their content
  • Application logic, e.g., procedures, views, calculation views
  • Users and authorizations, e.g., analytic privileges, roles, users
  • Persona, e.g., developers, administrators, and operators
  • Processes for development, testing, and production

In this blog post, we focus on the application logic and persona, and not so much on data, application security, or processes. We will discuss and motivate the following guiding principles, which are not exhaustive but should be considered during the design process.

  • Use Cloud Foundry spaces and/or BTP sub-accounts to separate database administration tasks from development tasks
  • You should have only one database module per HDI container
  • Use separate database modules for components or bigger sub-components
  • Use Cloud Foundry spaces for layered software design and to increase the protection of sensitive data

Before going into the details of these principles, I will comment on data. As a rule of thumb, we recommend separating data from application logic, for the following reasons. Whereas the application logic could follow a microservice design, this is not always a good idea for the data, i.e., the persistence. For instance, you might build a data lake that serves as a single source of data for various applications. If the persistence of the applications were strictly separated, this would lead to replication of data used by several applications. This replication and the resulting redundancy often contradict the idea of your data lake. In addition, the application logic is subject to change, and changes can be disruptive, e.g., an algorithm is removed or changed in an incompatible manner. For data, on the other hand, the underlying table definitions are less likely to be changed, and if they are, it is generally in a compatible way. Consequently, separate deployment is often helpful.

Cloud Foundry and BTP influence segregation of duty in the database

You want to do application development on SAP HANA Cloud, so you might wonder why you need to deal with Cloud Foundry or even BTP sub-accounts. This section illustrates how these two influence the design and security of your application.


Remember that application development in the cloud is based on services. The services that contain data and application logic for SAP HANA Cloud database development are the HDI containers. One of the Cloud Foundry paradigms is that services within the same Cloud Foundry space can fully access each other: all service keys are visible between the services of a Cloud Foundry space as well as to any space developer. The service key of an HDI container consists of a runtime user and a design-time user of the HDI container, which gives you essentially full access. This is intentional, to enable easy binding and re-use. If you want to strictly limit the access to an HDI container, you need to separate the accessing developer or service and the consumed HDI container into different Cloud Foundry spaces. This shows that database and application security includes the design of HDI containers and their location in Cloud Foundry spaces and BTP accounts, as we will see soon.
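To make this concrete, here is a minimal cf CLI sketch of how any space developer can inspect the service keys of an HDI container in the same space; the instance and key names are illustrative:

```shell
# List all service instances (including HDI containers) in the current space.
cf services

# List and inspect the service keys of a hypothetical HDI container.
# The returned credentials include host, port, schema, and the users
# (runtime and design-time) of the container.
cf service-keys my-hdi-container
cf service-key my-hdi-container SharedDevKey
```

Since every space developer can read these credentials, moving an HDI container into its own Cloud Foundry space is the mechanism for restricting this access.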

We strongly recommend starting with your BTP account design early; you might iterate over the design. With database-specific questions in mind, the planning and design of the BTP global account is touched on briefly here to make you familiar with some concepts.

The general structure of your BTP global account can have aspects that focus on

  • Region: in which locations do I need an account? Are certain services or hyperscalers available there?
  • Cost: you want to easily identify and assign costs to business units, organizations, cost centers in your company
  • Compliance and security: who has access to which data?
  • Technical features
  • etc.

A good starting point is the general best-practice guidance for setting up a BTP account.

How big is your organization?

First, you should determine the number of people who will work with your solution. Do not limit your thoughts to the present; it is good practice to plan for the next years. There might be other projects using the same data foundation, more people using your application, etc.

One team: In a set-up where everyone does everything, you should keep things as simple as possible without compromising on compliance or security aspects. For instance, if everyone can act as a super-user, then auditing and traceability are key to staying compliant.

Some teams: In such a set-up, you find specialization and responsibilities with different teams. But the list of teams is flat, i.e., there is no need for a hierarchy of teams.

  • Database & infrastructure team
  • Developer team(s)
  • End-users
  • Fire fighters
  • Supporters
  • etc.

The team members are supposed to perform certain tasks. Even more importantly, certain tasks shall only be done by a specific team. Thereby, you enter the areas of

  • users and members
  • entitlements, service plan assignments
  • quotas and quota plans
  • Cloud Foundry space and BTP sub-accounts
  • role collections, role templates, and scopes
  • etc.

Since the list of teams is flat, your global account model has no explicit need for sub-structures or grouping; therefore, you have more freedom in your modeling.

Many teams: With a higher number of teams, you have hierarchies or groups of teams. In our global account modeling, this translates into:

  • working with naming conventions
  • working with different sub-accounts for branches of the hierarchy
  • using Cloud Foundry spaces for branches where self-organization is possible
  • etc.

Enough of the pure BTP account design. Let us have a look at the personas who work and develop on an SAP HANA Cloud instance, especially development operations and administrators on the one hand and software developers on the other hand. Both personas have BTP accounts and are members of Cloud Foundry spaces. One thing most customers need to achieve is segregation of duties. Let us have a look at the following objective.

Keep database administration separate from development tasks

It is important to know that the creation, change, and deletion of an SAP HANA Cloud instance is in general NOT different from the creation, change, and deletion of any other service instance in a Cloud Foundry space. The Cloud Foundry user role that enables this is Space Developer.

We have the following:

  • The creation, change, and deletion of SAP HANA Cloud instances is controlled by the entitlement “SAP HANA Cloud” on the BTP sub-account. In all Cloud Foundry spaces whose quota plan allows the usage of paid services, all organization members who are space developers can create, change, and delete SAP HANA Cloud instances.
  • The creation, change, and deletion of HDI containers is controlled by the entitlements “SAP HANA schemas & HDI Containers” and “Cloud Foundry Runtime” on the BTP sub-account. In all Cloud Foundry spaces whose quota plans have a non-zero memory quota, all organization members who are space developers can create, change, and delete HDI containers.
  • For administration of the database on database level, you need a dedicated SAP HANA Cloud database user.
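As a sketch, both operations are ordinary service-instance management with the cf CLI; the instance names and the sizing parameters in the `-c` payload are illustrative:

```shell
# Create an SAP HANA Cloud instance; requires the "SAP HANA Cloud"
# entitlement and a quota plan that permits paid services.
cf create-service hana-cloud hana my-hana-db \
  -c '{"data": {"memory": 30, "storage": 120}}'

# Create an HDI container; requires the "SAP HANA schemas & HDI Containers"
# entitlement. Any space developer can run this.
cf create-service hana hdi-shared my-hdi-container
```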


Although software developers cannot apply changes at the database level without access to a database administrator user, they can still do the following:

  • They could create their own SAP HANA Cloud instances, which could result in higher costs due to increased SAP HANA Cloud quota usage.
  • They could delete existing SAP HANA Cloud instances, bringing the risk of losing intellectual property and work due to deleted databases.


You should have different Cloud Foundry spaces for database administration and project or application development.

It might even be useful to have these Cloud Foundry spaces in separate BTP sub-accounts. Technically, you can exclude the usage of paid services in a Cloud Foundry space using an appropriate quota plan. But keep in mind that development might grow over time and deal with the integration of various services, too. Over time, the quota configuration might change and hence open up the creation and/or deletion of SAP HANA Cloud instances. In case you need a safe set-up, you can create a sub-account that contains your SAP HANA Cloud instances only, whereas application development, testing, and running productive applications are done in other sub-accounts:

  1. One sub-account for database administrators
  2. One or more sub-accounts for developers

Note: Since the SAP HANA Cloud database is no longer in the same Cloud Foundry space, it is by default not visible or accessible for space developers in another Cloud Foundry space. In order to make the SAP HANA Cloud instance available for development, e.g., HDI container deployment, you need to share the instance with the Cloud Foundry space used for development.
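Assuming the mapping-based sharing mechanism, this can be sketched with the cf CLI as an update of the database instance carrying the GUIDs of the target organization and space (the GUIDs are placeholders; the same mapping can also be maintained in SAP HANA Cloud Central):

```shell
# Look up the GUIDs of the development org and space.
cf org dev-org --guid
cf space dev-space --guid

# In the space that owns the database: map the instance to the
# development org/space so HDI containers can be deployed there.
cf update-service my-hana-db \
  -c '{"data": {"mappings": [{"organization_guid": "<org-guid>", "space_guid": "<space-guid>"}]}}'
```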

Layered software design and Cloud Foundry spaces

Dividing your application or project into independently deployable parts is often key when it comes to applying minimal changes and corrections to your productive application. The smallest granularity of Cloud Foundry deployment is a service, e.g., an HDI container. We will now discuss the impact of this on your application design.

Often, we find many applications and services running on the same SAP HANA Cloud database. In such a situation, it is natural to find tasks and functions needed by different applications. Or you have applications that profit from a sub-structure grouping similar tasks and functions into a software component, or component for short. In bigger projects, components are grouped by purpose, e.g., data foundation layer, composition layer, consumption layer. We call such a group a layer and require that the layers form a directed, acyclic graph.

Since data and application logic are contained in HDI containers, components are built of HDI containers. In the following, we touch on some aspects of HDI container design with respect to layers.

Layers and components are logical concepts that are not directly reflected in Cloud Foundry or SAP HANA objects; they are part of the software architecture. Nevertheless, we have the following recommendations on how these logical concepts can be used to group HDI containers in Cloud Foundry spaces.


  • For a layer where all components are completely visible and accessible to each other, it is recommended that this layer is fully contained in a single Cloud Foundry space. This ensures easy cross-container access between the various components.
  • If you require that not all components are completely visible to each other, e.g.,
    • because upper layers and their developers shall only be able to consume data and use certain functions from the lower layer via an agreed API,
    • or because the programmatic access to sensitive data, e.g., Human Resources data like employee information, must be controlled carefully,

then keep the layers in different Cloud Foundry spaces and expose the API using a user-provided service with minimal privileges.
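A minimal sketch of such an exposure, assuming a dedicated, restricted database user API_READER that holds only the agreed roles (all names and the host are illustrative):

```shell
# In the consuming space: wrap the restricted credentials in a
# user-provided service instead of sharing the HDI container's
# own service keys. The "hana" tag lets the HDI deployer treat it
# like a HANA service during binding.
cf create-user-provided-service lower-layer-api -p '{
  "host": "<hana-host>.hanacloud.ondemand.com",
  "port": "443",
  "user": "API_READER",
  "password": "<secret>",
  "tags": ["hana"]
}'
```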


  • All services of a Cloud Foundry space are mutually fully visible. The runtime and design-time users of other HDI containers are accessible.
  • An HDI container in another Cloud Foundry space is accessible using, e.g., a user-provided service, which basically consists of user credentials. Access to the HDI container is controlled by the privileges that the HANA user of the user-provided service can grant to others. These credentials generally differ from the credentials of the service keys of the HDI container.

Up to now, we had a pure design discussion. Next, we dive into the interplay between design time and runtime. Remember that services in the cloud are intended to be instantiated several times for scalability and robustness reasons. Therefore, one design-time object can correspond to several runtime instances. Nothing special. Nevertheless, the next section deals with the correspondence between design-time and runtime objects.

Remember, a database module of an MTA is a design-time object that contains the definition of database artefacts, see the developer guide. An HDI container is a runtime object containing instances of database artefacts. During development, you create database artefacts in a database module. These objects, or the module itself, are deployed into an HDI container to create runtime representations of the design-time objects. It is technically possible that various design-time environments bind to the same HDI container, because the deploy application is in the end just a Cloud Foundry application binding to a Cloud Foundry service. Since there is no limit on the number of applications binding to a service, several design-time environments, e.g., BAS development spaces of different developers, could bind to the same HDI container. Beside the human aspect of different developers working on the same HDI container, it is also technically possible to bind several database modules to the same HDI container, meaning different database modules deploy into the very same HDI container. This is strongly discouraged.

You should have only one database module deploying into an HDI container.


  • There can be name clashes between artefacts defined in different modules.
  • The deployment of a database module into an HDI container manages the delta, too: what is missing will be created, what is changed will be altered, and objects without a counterpart in the source code will be deleted if deployment is called with the un-deploy option, which is the default setting in Business Application Studio.
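In the MTA descriptor, the one-to-one relationship is expressed by binding the single database module to a single hdi-container resource; the IDs and names below are illustrative:

```yaml
ID: my.component
_schema-version: "3.1"
version: 1.0.0
modules:
  - name: db              # the one and only database module (folder ./db)
    type: hdb
    path: db
    requires:
      - name: hdi_db      # deploys into exactly one HDI container
resources:
  - name: hdi_db
    type: com.sap.xs.hdi-container
```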

Set-up sub-components within an HDI container

Software projects naturally evolve over time, e.g., the code base grows, or a component might need sub-components. Sometimes you might face the situation that sub-components deployed into one HDI container are handled by different people. In this section, we illustrate how two developers working in their own BAS development spaces can contribute to the source code that is deployed into one productive HDI container.

Let’s take the following example.

  • We have two developers, Anton and Rachel, who both work on different sub-components of the same component, i.e., sub-component 1 and sub-component 2
  • The component is realized with only one HDI container, i.e., we assume there is a design decision not to split it into several HDI containers

The following questions will be discussed:

  • What technical artefact should represent such a sub-component?
    • A subfolder in the database module?
    • or a database module?
  • How will Anton and Rachel work without ignoring the principles we discussed?

By assumption, we have only one HDI container, and we follow the guiding principle of only one database module per HDI container. We recommend using sub-folders of the database module path db/src in such a situation. This yields an easy rule for the developers to follow:

  • Anton is responsible for sub-component 1 and works in folder db/src/subcomponent1
  • Rachel is responsible for sub-component 2 and works in folder db/src/subcomponent2

Note: To reflect the differences between the two sub-components, the sub-folders could also use different namespaces for their database artefacts. Besides acting as a safety belt against potential name clashes, the sub-component of an artefact can then be easily identified and the responsible developer directly approached in case of questions, new features, or errors.
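For illustration, HDI lets you set the namespace per folder with an .hdinamespace file, e.g., at db/src/subcomponent1/.hdinamespace; the vendor prefix here is an assumption:

```json
{
  "name": "com.acme.subcomponent1",
  "subfolder": "append"
}
```

With "subfolder": "append", deeper folders extend the namespace further, so artefacts of the two sub-components cannot clash by name.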

The guiding principle mostly influences the runtime and the number of HDI containers: we have at least three HDI containers in this situation.

  • Anton has his HDI container for testing and developing
  • Rachel has her HDI container for testing and developing
  • There is one HDI container that contains the published/released code of Anton and Rachel. We sometimes call it the golden code line; you can also think of it as the common quality or productive HDI container

This is shown in the next diagram. Both developers have pulled the component resources from a common code repository, e.g., a Git repository. Their software project, which is described by the mta.yaml file, has only one database module. This module contains the sub-components as sub-folders. During development, Anton and Rachel work in different containers, and a deployment by one developer does not impact the work of the other. The published/released code is in a separate container that does not change either.


2 developers working on different sub-components in the same database module

Discouraged approach: Anton and Rachel use the same development HDI container

If Anton and Rachel used the same HDI container, they would have two database modules deploying into only one HDI container.


In the picture, you see that if the source code differs between the two developers, their deployments into the same container influence the artefacts of the other sub-component, which might already have been changed. If the complete module, or parts of sub-component 2, are deployed by Anton, the already deployed changes from Rachel get reverted, because Anton still has the old source code of Rachel's sub-component. The same holds true if Rachel deploys the complete database module or parts of sub-component 1.


Issue if 2 developers deploy into the very same HDI container

Application Example

Let us take a simplified CRM example. In the CRM system, you run two applications:

  • Customer segmentation for market analysis
  • Claims & returns

We assume that there are two teams and that you follow the principle of least privilege, or need-to-know. This means, for example, that the artefacts of Claims & Returns are only accessible to the Claims & Returns team and not to the Customer Segmentation team.

Moreover, we assume that

  • the set of artefacts for Claims & Returns is stable, e.g., changes or new objects occur rarely.
  • the set of artefacts for Customer Segmentation changes frequently


Simple example of two CRM applications

Design Possibilities and Implications

  • Simple CRM example in one HDI container
  • Simple CRM example in two HDI containers

The choice between a one- and a two-HDI-container design also has implications for reused or common objects. A typical example is master data.

In a 1 HDI container design – Monolith

  •  The applications “Customer Segmentation” and “Claims & Returns” are sub-components in one HDI container. The database artefacts, e.g., calculation views, for both applications reside in the same schema, i.e., HDI container schema. Therefore, you must have different namespaces for the two applications.
  • Each application needs a separate role that provides access to the artefacts of its application.
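As a sketch, such a role can be defined as an .hdbrole design-time artefact per application; the role name, view name, and privileges are illustrative:

```json
{
  "role": {
    "name": "com.acme.claims::ClaimsAndReturnsViewer",
    "object_privileges": [
      {
        "name": "com.acme.claims::CV_CLAIMS",
        "type": "VIEW",
        "privileges": ["SELECT"]
      }
    ]
  }
}
```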

Consequently, we have the following:

  • Artefacts from another namespace can easily be accessed
  • Changes in one component require the redeployment of everything


Monolithic design of two CRM applications

In a 2 HDI container design:

  • The applications “Customer Segmentation” and “Claims & Returns” are separate HDI containers.
    • Since the artefacts exist in different schemas, there is no technical need for separate namespaces, because name clashes can only happen within one schema.
    • Nevertheless, if you work with public synonyms or sometimes omit the schema name in discussions, different namespaces for the two applications could be beneficial.
  • Again, each application needs a separate role that provides access to the artefacts of its application.

Consequently, we have the following:

  • Artefacts from the other container can only be accessed via cross-container access, which requires some additional effort
  • Changes of one component do not imply a re-deployment of the other component. This results in shorter deploy times and less overall service downtime due to software changes or fixes.
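The additional effort typically consists of synonyms in the consuming container plus a grants file that assigns the needed roles. A hedged .hdbgrants sketch, assuming the granting container is bound under the service name ServiceName_1 and exposes the roles shown:

```json
{
  "ServiceName_1": {
    "object_owner": {
      "roles": ["com.acme.claims::ClaimsAndReturnsViewer#"]
    },
    "application_user": {
      "roles": ["com.acme.claims::ClaimsAndReturnsViewer"]
    }
  }
}
```

The "#"-suffixed role is a common convention for the grant-option variant needed by the object owner; synonyms then map the remote objects into the consuming container.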

We see that the 2 HDI container design is preferred according to our requirements for isolation and the need-to-know principle.

Nevertheless, working with several HDI containers comes with the necessity of cross-container access. Cyclic HDI container usage should be avoided because deployment dependencies could block each other like a deadlock. Let us continue our example. Both applications will naturally need master data. In the beginning, the sets might theoretically be disjoint, but over time there will be a set of common master data. This data must not be cross-referenced between the individual application containers; hence we must introduce a third HDI container for the re-use artefacts. Practically speaking, there is NO 2 HDI container design, only a 3 HDI container design 😉.



Native application development in SAP HANA Cloud has many aspects. Beside data structures, algorithms, views, and database artefacts, an application has to fulfil requirements like security and compliance, e.g., segregation of duties. In this blog post, I illustrated how these requirements cannot be addressed in the SAP HANA Cloud database alone; you need to make yourself familiar with the concepts of the Business Technology Platform and Cloud Foundry right from the start.

If you have feedback or questions, please let me know in the comments what would be helpful to you.

Thanks and Regards,

Some References

For a complete overview of development, please refer to the SAP HANA Cloud, SAP HANA Database Developer Guide for Cloud Foundry Multitarget Applications. For our discussion, we just recall the major terms:

The design-time artifacts of a database module are deployed into an HDI container, which contains the runtime objects.


      Michał Majer

      Really good blog post, congrats!

      Erhard Weidenauer
      Blog Post Author

      Thanks for the kind words

      Michael Cocquerel

      Erhard Weidenauer thanks for your very interesting blog.
      It is amazing because I posted the following question 2 days ago : and your blog provided the answer the day after. It was like a teaser for your blog.

      I would have one additional question. In case we initially made the wrong design by putting database administration and application development in the same space, is it possible to split them afterwards without having to recreate either the database or the HDI services? I have the following procedure in mind, but I'm not sure whether it is valid and whether I have thought of all potential side effects:
      1 - Create new sub-account/space for database administration with entitlement “SAP HANA Cloud”
      2 - Create new mapping for the existing HANA Cloud instance for the new database administration sub-account/space
      3 - Remove the entitlement “SAP HANA Cloud” from the application development sub-account/space.


      Erhard Weidenauer
      Blog Post Author

      Hi Michael,

      good to hear that the blog was published right in time.

      With respect to your question: currently, there are no means to move services between spaces or to manage a Cloud Foundry service from one Cloud Foundry space in another Cloud Foundry space. The database mapping is basically only relevant for HDI container deployment; entitlements and Cloud Foundry service administration are not related to it.

      Hence you have to drop and recreate something. In order to split things without starting completely from scratch, you need to drop and recreate the HDI containers. I recommend the following high-level approach.

      1. Create a new space for application development
      2. Create a mapping of the existing database to the new Cloud Foundry space for application development
      3. Add the application developers as members to the new space and grant them the corresponding roles, e.g., space developer
      4. Now, there is something tricky. You basically have to drop and recreate the HDI containers. But depending on their content and chosen schema names, the sequence is crucial. Remember: schema names are unique in the database, i.e. two HDI containers cannot have the same schema.
        1. In case you have a fixed schema name that is referenced elsewhere many times, e.g., in a HANA live connection of SAP Analytics Cloud:
          1. Check whether you have table content that needs to be saved. If so, you could export the content into a cloud storage near or in the same data center. You could use the Object Store from BTP for this purpose.
          2. Having saved the data, you can drop the old HDI container.
          3. Deploy the HDI container in the new Cloud Foundry space for application development.
          4. Import the content into the tables of your new HDI container.
        2. In cases where your schema names, or the references to them, can be easily adapted:
          1. you could deploy the HDI container in the new Cloud Foundry space for application development
          2. Copy the data from the old tables into the new tables. Note: consider packet sizes of 10 million rows per commit.
          3. Adapt the references to new schema name of your HDI container
          4. Drop the old HDI container.
      5. Now, you make the existing space the one for the database administrators.
          1. Space renaming is supported in Cloud Foundry.
          2. Revoke the space developer role from the application developers.
          3. Remove the membership of the application developers, if needed.
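      The batched copy in case 2, step 2 could look roughly like this; schema, table, and key names are illustrative:

```sql
-- Copy data from the old container schema to the new one in chunks,
-- committing roughly every 10 million rows (key ranges are examples).
INSERT INTO "NEW_SCHEMA"."ORDERS"
  SELECT * FROM "OLD_SCHEMA"."ORDERS" WHERE "ID" BETWEEN 1 AND 10000000;
COMMIT;

INSERT INTO "NEW_SCHEMA"."ORDERS"
  SELECT * FROM "OLD_SCHEMA"."ORDERS" WHERE "ID" BETWEEN 10000001 AND 20000000;
COMMIT;
```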

      With that you will have a clean separation. Hope this answers your question.

      Kind regards,


      Michael Cocquerel

      Thank you very much Erhard Weidenauer for this detailed answer. I think I will also have to recover all role assignments to the roles belonging to those HDI containers in step 4.
      When you say that the database mapping is not relevant for administration, does that mean that if I had applied the procedure I had in mind, removing the entitlement “SAP HANA Cloud” from the initial space in step 3 would have deleted the full database instance?

      Erhard Weidenauer
      Blog Post Author

      In general, you should not be able to remove or decrease the quota of an entitlement that is still in use. I set up your example and then tried to remove the entitlement for the databases. This was not possible because I still had an instance of the entitlement. This ensures that instances are not deleted accidentally.

      Michael Cocquerel

      Thanks for having tested entitlement removal behavior.

      All is clear now.


      Michael Cocquerel

      It seems a HANA instance can only be shared with orgs/spaces belonging to the same region. Can you confirm?

      Erhard Weidenauer
      Blog Post Author

      This is at least the information from the official help page.


      Michael Cocquerel

      Yes, and the documentation does not mention that sharing is only possible within the same region, but when creating a new mapping, the list of organisations I get contains only the ones belonging to the same region. This means that if we want to benefit from the free-tier HANA plan in Europe, we have to put all sub-accounts on Microsoft Azure in the Netherlands.
      I wanted to check whether this limitation is on purpose or a bug in the mapping maintenance UI.

      Michael Cocquerel

      Erhard Weidenauer

      Considering the new enhancement "Runtimes independence for provisioning and managing database instances in SAP Business Technology Platform" coming with SAP HANA Cloud in Q3 2022 (see ), does your blog remain fully valid, or would it need to be updated or completed?

      Erhard Weidenauer
      Blog Post Author

      Hi Michael,
      this blog is valid for Cloud Foundry as a runtime. For the new feature in 2022 QRC3, we have to check how to adapt the principles.



      Michael Cocquerel

      Thanks, waiting for update