
Create Calculation views and OData services in a single project with HANA Cloud

As a presales engineer, a question I often get from customers developing with HANA Cloud is: “How can I create HANA Calculation Views that can be accessed both from SAP Analytics Cloud and via OData?”

As I described in Use CAP to expose HANA Cloud tables as OData services, the recommended method to expose OData services with HANA Cloud is the SAP Cloud Application Programming Model. You can also use xsodata to expose HANA tables as OData services, though this method is not recommended anymore.

In previous blogs, I only talked about tables. I did not cover HANA native artifacts (Calculation Views, procedures, sequences, and so on). However, with a live connection to SAP HANA, SAP Analytics Cloud only accesses Calculation Views. Therefore this blog will focus on deploying HANA Calculation Views and database procedures on SAP HANA Cloud, then exposing them via OData.

Developers use Business Application Studio to write a Full-Stack Application and deploy it onto Cloud Foundry. The HDI Container can then be exposed to SAP Analytics Cloud.
Users can access the application via the app router which will first authenticate them, then redirect them to the correct route on the app to fetch the requested data via OData.
Users can also access stories on SAP Analytics Cloud which show live data from SAP HANA Cloud.


All required steps to implement this architecture are skillfully explained by Thomas Jung in Combine CAP with SAP HANA Cloud to Create Full-Stack Applications.

You can find the code I wrote following Thomas’s tutorial on GitHub.

I highly encourage you to follow Thomas’s tutorial. The only way to really learn a technology is to use it. But before you go, here are a few bits of information that will help you understand how HANA Cloud works.

Multi-Target Applications with Business Application Studio

When developing or administering SAP HANA Cloud, you will use several tools.

In this blog, I focus on using SAP Business Application Studio to develop custom business applications. They are used in conjunction with other standard SAP applications to extend capabilities and increase the productivity of users. They are typically composed of several parts that are built with different technologies for different run-time environments.
For example, an application could contain static Web content that runs in the browser, server-side Java code that runs in a Java Enterprise server, OData service definitions for an OData provisioning run time, and also database content such as tables, views, and procedures.
Since all these parts belong to the same business application, they are developed, delivered, configured, and deployed together. Often the various different parts have dependencies and, as a result, need to be deployed to the specified target in a given order.

Therefore, within the Cloud Application Programming model, SAP recommends building custom applications following the Multi-Target Application (MTA) architecture. An MTA is logically a single application composed of multiple modules created with different technologies, which share the same lifecycle.

The developers describe the desired result using the MTA model in a multi-target application descriptor (mta.yaml), which contains MTA modules, MTA resources, and the interdependencies between them. Afterward, the SAP Cloud Deployment service validates, orchestrates, and automates the deployment of the MTA, which results in Cloud Foundry applications, services, and SAP-specific content.

(Figure: Development to Deployment lifecycle in the MTA Model)

When using a wizard to create an application within SAP Business Application Studio, the mta.yaml file is automatically generated in the root project folder. It is updated when the project properties change or when a module is added or removed. However, not all the necessary information can be generated automatically. You need to maintain the descriptor manually to define resources, properties, and dependencies, as well as fill in missing information.
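For orientation, a stripped-down mta.yaml has this overall shape (the names here are illustrative, not taken from the tutorial project):

```yaml
_schema-version: "3.1"
ID: my-app
version: 1.0.0
modules:
  - name: my-app-srv            # a Node.js service module
    type: nodejs
    path: srv
resources:
  - name: my-app-db             # an HDI container a module can require
    type: com.sap.xs.hdi-container
```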

In Thomas Jung’s tutorial app, the application contains four modules; you can see them on my GitHub:

  - name: Interactions-srv
    type: nodejs
    path: srv
  - name: Interactions-db-deployer
    type: hdb
    path: db
  - name: app
    type: approuter.nodejs
    path: app
  - name: Interactions_ui_deployer
    path: .

Each module has a name, a type, and a path.
Each module can also have dependencies (written as requires) and provide services (written as provides).
The requirements need to be explicitly declared in the resources section. In Thomas Jung’s tutorial app, the application contains four resources:

  - name: Interactions-db
  - name: Interactions_html_repo_runtime
    type: org.cloudfoundry.managed-service
  - name: uaa_Interactions
    type: org.cloudfoundry.managed-service
  - name: Interactions_html_repo_host
    type: org.cloudfoundry.managed-service
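The excerpts above omit the requires links. As a sketch of how they tie modules to resources (assuming the module and resource names above, and the standard com.sap.xs.hdi-container service type):

```yaml
modules:
  - name: Interactions-srv
    type: nodejs
    path: srv
    requires:
      - name: Interactions-db          # the module depends on this resource
resources:
  - name: Interactions-db
    type: com.sap.xs.hdi-container     # provisions and binds an HDI container service
```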

HDI Containers

Pay close attention to the Interactions-db resource.
It is used by the application to communicate with the SAP HANA Cloud database.
Every time my Node.js application Interactions-srv needs to read from or write to the database, it calls the hdi-container service Interactions-db. Interactions-db is bound to a service called Interactions-hdi in my Cloud Foundry space, where all the information necessary to access HANA Cloud is stored (host, port, schema, user, password).

How is Interactions-db bound to Interactions-hdi?
The information about the bound service is in the .env file in the db folder. Sneaky!
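For illustration, the .env file essentially replays the Cloud Foundry binding locally through a VCAP_SERVICES variable; the values below are placeholders, and the real ones come from your service key:

```text
VCAP_SERVICES={"hana":[{"name":"Interactions-db","label":"hana","tags":["hana","database"],"credentials":{"host":"<instance>.hana.<region>.hanacloud.ondemand.com","port":"443","schema":"<container-schema>","user":"<runtime-user>","password":"<password>"}}]}
```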

Learn more about the structure of the db folder and the .env file in this blog by Witalij Rudnicki.
You can find Interactions-hdi in the SAP BTP Cockpit.

Within the service keys, you find all the information necessary to connect to HANA Cloud.

Finally, you can go back to my architecture at the top of the blog. You will notice that the App and Srv modules are there, as well as the DB and Security(XSUAA) resources.
I do not understand exactly how the html repo host and runtime work, as I do not focus on front-end development (yet).

What is an HDI Container?
The SAP HANA Deployment Infrastructure (HDI) enables you to deploy database development artifacts (tables, views, procedures, etc.) to containers in SAP HANA.
As you have seen above with Interactions-db, the application does not directly reference a database schema. The application only knows the HDI container. We call this schema-less development. This allows for multiple deployments, sandboxing, and enhanced security options.
HDI Containers also offer a clear separation of design-time and run-time artifacts.
Version control and lifecycle management are managed through Git.

The HDI Reference describes the tasks required to set up, maintain, grant access to, and use the SAP HANA Deployment Infrastructure for SAP HANA Cloud. It describes the roles required to provide access to the HDI at the various levels and how the roles fit together to provide a secure deployment infrastructure.

When developing Calculation Views, it is important to remember that within HDI Containers, only local object access is allowed. This way when the code is branched and points to a different version of the container, the actual physical references can be redirected to a different underlying schema. The code becomes easily portable from development to production.

This means that if you want your Calculation Views to access tables/views outside the HDI Container, you need to clearly declare the dependency via synonyms.
Look at the official documentation on how to enable access to objects in a remote classic schema in HDI Containers.
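As a minimal sketch (object and schema names are placeholders), a synonym is declared in a .hdbsynonym file under db/src:

```json
{
  "EXTERNAL_SALES": {
    "target": {
      "object": "SALES",
      "schema": "CLASSIC_SCHEMA"
    }
  }
}
```

The container's object owner also needs the corresponding privileges on the external schema, which is what the grants mechanism described in the official documentation covers.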

The security concepts and steps to access objects in other schemas are explained in this video by the SAP HANA Academy: HANA Cloud – Access Schema from HDI Container.

Create database artifacts

Thomas’s tutorial teaches you two ways of creating artifacts in SAP HANA Cloud.

The first method is to create artifacts with Core Data Services


Core Data Services (CDS) is the backbone of the SAP Cloud Application Programming Model. It provides the means to declaratively capture service definitions and data models, queries, and expressions in plain JavaScript object notations. CDS can parse from a variety of source languages and compile them into various target languages.

With this method, you first define all the objects in a .cds file within your db folder.
The next step is to generate database-native design-time artifacts (.hdbtable and .hdbview files, which HANA can read) by building the .cds file with cds build.
Finally, you deploy all the design-time artifacts (.hdbtable and .hdbview files) to HANA to create the tables and views in the database.

Here is an example: interactions.cds.

context app.interactions {
    entity Interactions_Header {
        key ID        : Integer;
            ITEMS     : Composition of many Interactions_Items on ITEMS.INTHeader = $self;
            PARTNER   : String(10);
            LOG_DATE  : DateTime;
    }

    entity Interactions_Items {
        key INTHeader : Association to Interactions_Header;
        key TEXT_ID   : String(10);
            LANGU     : String(2);
            LOGTEXT   : String(1024);
    }
}

I define two entities (tables): Interactions_Header and Interactions_Items.
They have a 1:n relationship, written as “ITEMS : Composition of many Interactions_Items on ITEMS.INTHeader = $self;” in the Header entity and “INTHeader : Association to Interactions_Header;” in the Items entity.

You just need to run cds build when you are ready, and all the objects you defined will be translated to HANA native files in a gen folder. These files can then be deployed to HANA.
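To give an idea of the output (the exact folder layout, type mapping, and naming depend on your CAP version), the generated .hdbtable for Interactions_Header looks roughly like this:

```sql
-- Illustrative content of the generated APP_INTERACTIONS_INTERACTIONS_HEADER.hdbtable
COLUMN TABLE APP_INTERACTIONS_INTERACTIONS_HEADER (
  ID INTEGER NOT NULL,
  PARTNER NVARCHAR(10),
  LOG_DATE SECONDDATE,
  PRIMARY KEY (ID)
)
```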

This method fully leverages CAP for the simplest development experience; however, it does not allow you to build HANA Calculation Views or procedures.

The second method is to create SAP HANA Native artifacts directly


To use SAP HANA native features in CAP, create design-time artifacts (.hdbtable, .hdbsynonym, .hdbcalculationview, .hdbprocedure, …) directly in the folder db/src.

The SAP Business Application Studio offers a UI to generate HANA Native artifacts.

Once you have created native artifacts, if you want to use them in your CAP application and expose them via OData, you need to make the objects known to CDS:

  • Define an entity that matches the signature of the newly designed or already existing database object in the .cds file.
  • Add the annotation @cds.persistence.exists to tell CDS that this object already exists on the database and must not be generated.

This entity then serves as a facade for the database object and can be used in the model like a regular entity. Here is an example.

@cds.persistence.exists
entity V_INTERACTION {
    key ![ID]             : Integer      @title : 'ID: ID';
    key ![PARTNER]        : String(10)   @title : 'PARTNER: PARTNER';
    key ![LOG_DATE]       : String       @title : 'LOG_DATE: LOG_DATE';
    key ![BPCOUNTRY_CODE] : String(3)    @title : 'BPCOUNTRY_CODE: BPCOUNTRY_CODE';
    key ![TEXT_ID]        : String(10)   @title : 'TEXT_ID: TEXT_ID';
    key ![LANGU]          : String(2)    @title : 'LANGU: LANGU';
    key ![LOGTEXT]        : String(1024) @title : 'LOGTEXT: LOGTEXT';
    key ![INTHEADER_ID]   : Integer      @title : 'INTHEADER_ID: INTHEADER_ID';
}

With this description added to your .cds file, when you use the cds build command, the object will be recognized as already existing, and you will be able to use it in the CAP application.

Define OData services in SAP Cloud Application Programming Model

In the srv folder, you can define the service interface in a .cds file (the same extension as in the db folder). This file defines the services exposing data. It will not be deployed to the database.

Here is the interactions-srv.cds file you will create in Thomas Jung’s tutorial:

using app.interactions from '../db/interactions';
using V_INTERACTION from '../db/interactions';

@requires: 'authenticated-user'
service CatalogService {

    entity Interactions_Header
        as projection on interactions.Interactions_Header;

    entity Interactions_Items
        as projection on interactions.Interactions_Items;

    function sleep() returns Boolean;

    @readonly
    entity V_Interaction as projection on V_INTERACTION;
}

The using keyword defines the dependency on the entities in the db folder’s .cds file.
The @requires annotation controls which roles a user needs to access a resource. In this case, an authenticated user is required.
The service keyword defines which entities will be exposed as an OData service.
The @readonly annotation defines read-only entities, in this case a HANA Calculation View.

Once your .cds file is ready, you just need to build it with cds build, and it is ready to be exposed as an OData service. That is the advantage of CAP: it saves developers a huge amount of time!

You can test your OData service from within Business Application Studio by running npm start. Once it is ready, you can deploy it onto SAP BTP for production.
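Once the service runs locally (CAP’s development server listens on port 4004 by default, and CatalogService is served under /catalog by CDS naming conventions), each entity is a plain OData endpoint. Here is a small sketch of composing a query URL with Node’s standard library; the host, port, and partner ID are assumptions for illustration:

```javascript
// Compose an OData query against the CatalogService sketched above.
// Host, port, and the partner ID are illustrative assumptions.
const base = "http://localhost:4004/catalog/Interactions_Header";

const query = [
  "$select=ID,PARTNER,LOG_DATE",                           // project a few columns
  "$top=5",                                                // limit the result set
  `$filter=${encodeURIComponent("PARTNER eq '1000017'")}`, // encode spaces in the filter
].join("&");

const url = `${base}?${query}`;
console.log(url);
```

Opening that URL in a browser (after authenticating through the app router in a deployed setup) returns the entity data as OData JSON.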

SAP Analytics Cloud live connection to HANA Cloud

Once you have created your HANA Calculation Views, you can expose them to analytics clients such as SAP Analytics Cloud. Here are the steps:

  1. Grant SELECT rights to the database user that SAP Analytics Cloud will use to connect. Here is a blog that explains the steps in detail.
  2. Connect SAP Analytics Cloud to HANA Cloud. Optionally set up Single Sign-on.
  3. Create models based on the HANA Calculation Views that you want to access.
  4. Create a story using the models you need to visualize data.
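For step 1, note that HDI container schemas cannot simply be granted to by any administrator; privileges go through the container’s API procedures. Here is a sketch of the documented pattern, where the container schema name INTERACTIONS_HDI and the user SAC_USER are assumptions; check the blog linked above for the full steps:

```sql
-- Run as a user with access to the container's #DI API (illustrative names).
CREATE LOCAL TEMPORARY COLUMN TABLE #PRIVILEGES
  LIKE _SYS_DI.TT_SCHEMA_PRIVILEGES;

INSERT INTO #PRIVILEGES (PRIVILEGE_NAME, PRINCIPAL_SCHEMA_NAME, PRINCIPAL_NAME)
  VALUES ('SELECT', '', 'SAC_USER');

CALL "INTERACTIONS_HDI#DI".GRANT_CONTAINER_SCHEMA_PRIVILEGES(
  #PRIVILEGES, _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

DROP TABLE #PRIVILEGES;
```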

Troubleshoot – Useful resources

Are you stuck while building a CAP application?
Personally, I recently had trouble with the package.json file in my db module. The community is always there to help! Try posting blogs and answering questions online; that’s the best way to grow.
Here are a few resources to learn more on the topic.

Thank you for reading !
Maxime SIMON
