Modeling in SAP HANA Cloud (part 2): starting with the project files and users
In the previous part, we set up a project in SAP Business Application Studio that allows us to do the modeling (creation of calculation views) in SAP HANA Cloud. These models will be used in SAP Analytics Cloud later on.
Reopen the project
First, because it is a new day, I need to open my project and start my SAP HANA database in the SAP HANA Cloud trial:
- Go to my SAP Business Application Studio trial (either bookmarked or from https://account.hanatrial.ondemand.com/trial/#/home/trial),
- Start my Dev Space TPCDS, and open it once started,
- Open my last tpcds project (aka workspace), e.g. via File -> Open Recent Workspace in the studio.
But by now the Cloud Foundry (CF) login token has expired since I last worked with the environment, so:
- I want to log in to my CF space in the SAP Business Technology Platform (SAP BTP) trial in the eu10 region again; I like using the SSO option for that with
cf login --sso -a https://api.cf.eu10.hana.ondemand.com
- I want to start my SAP HANA Cloud, SAP HANA database vital-hc-hana; and instead of going to the UI, I like using the command line for that with
cf update-service vital-hc-hana -c '{"data":{"serviceStopped": false}}'
So far, so good, and we can continue from where we left off.
Understand the structure of the project
In order to build calculation views, we are working on an SAP HANA Multitarget Application (aka MTA) based on the Cloud Foundry programming model. In the case of analytics modeling, our MTA contains only one module, called db, used to build and deploy the SAP HANA database artifacts.
mta.yaml
The elements and dependencies of the multitarget application are defined in the mta.yaml file (1).
The wizard generated an MTA with one module db (2) that currently requires a single resource hdi_db. This is the target container service.
This resource hdi_db (3) is an HDI container that was created by the wizard as well. The name of the container is not hard-coded but comes from the variable ${service-name}. In turn, it is used as a property in the db module’s requirements via the reference ~{hdi-container-name}.
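Put together, the relevant part of such a generated mta.yaml looks roughly like this (a sketch based on this example; the exact ID, version and schema version in your wizard-generated file may differ):

```yaml
_schema-version: "3.2"
ID: tpcds
version: 0.0.1
modules:
  - name: db                       # (2) the single database module
    type: hdb
    path: db
    requires:
      - name: hdi_db               # requires the target container resource
        properties:
          TARGET_CONTAINER: ~{hdi-container-name}
resources:
  - name: hdi_db                   # (3) the HDI container resource
    type: com.sap.xs.hdi-container
    properties:
      hdi-container-name: ${service-name}
```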
The SAP HANA Projects pane in the Explorer of the studio makes it easy to see the structure of the project, including resolved object names instead of variables. We can also use the pre-installed mta command-line tool (4).
mta resolve -m db -p mta.yaml
But where does the name of the container come from to substitute the variable in mta.yaml?
db/.env
For that, we need to look into the .env file in the db folder.
The file db/.env (1) contains the bound services in the CF environment variable VCAP_SERVICES, but it might be difficult to read as-is (2).
That’s why I installed the jq tool in the previous post; it will help us read the formatted content of this file (3) coming after VCAP_SERVICES=.
cat db/.env | cut -d "=" -f 2- -z | ~/bin/jq
Now it is easy to spot that there is currently one service tpcds-hdidb-ws-x4bb9 (5), known to the MTA as hdi_db thanks to the tag mta-resource-name (4).
The service definition contains authentication for two SAP HANA database users: one ending with _DT for the design-time user (6), and one ending with _RT for the run-time user (7).
The target HDI container users
In fact, these are users generated for the service key SharedDevKey, which is used when opening the SAP HANA Database Explorer.
We can get the same content of the service key by executing the following command (make sure to provide your own service names):
cf service-key tpcds-hdidb-ws-x4bb9 SharedDevKey
While in the Database Explorer, let’s check these two users. In the screenshot above we opened the SQL Console using the SQL icon; the service key’s database run-time user (ending with _RT) is authenticated and is used to run queries.
SELECT 'Current user' as "Property", Current_User as "Value" FROM DUMMY
UNION ALL
SELECT 'Current schema', Current_Schema FROM DUMMY;
Now let’s right-click on the same database entry and open an SQL Console for the Admin.
Execute the same SQL statement and you get the name of the design-time user (with the _DT suffix), but the working schema is the same.
This is the schema (TPCDS_HDI_DB_1 in this example) that the HDI target container in our project (tpcds-hdidb-ws-x4bb9) abstracts. Its name can change, so it should not be hard-coded anywhere in your code!
We will get back to these users soon to add the run-time database user to the SAP HANA Clients’ user store in the Dev Space.
HDI deployment files
And the last look is at db/node_modules, which has only one npm package, @sap/hdi-deploy, installed for now. As per the documentation, it requires a mandatory folder src for HDI deployment definition files and an optional folder cfg.
These are the folders we will work with later to model our calculation views.
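Assuming a freshly generated project, the db module’s layout is therefore roughly:

```
db/
├── package.json      # declares the dependency on @sap/hdi-deploy
├── node_modules/     # @sap/hdi-deploy is installed here
├── src/              # mandatory: HDI design-time artifacts (our future calculation views)
└── cfg/              # optional: deployment configuration artifacts
```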
Add HDI Container’s run-time user to the local HANA user store
As mentioned before, knowing the structure of the db/.env file, let’s use the ~/bin/jq tool to parse the run-time database user credentials and add them to the HANA user store ~/sap/hdbclient/hdbuserstore under the key HDI_DB_RT (1).
Thanks to that, we will reduce the number of times we have to jump between SAP Business Application Studio (while working with the design-time artifacts) and SAP HANA Database Explorer (while checking run-time objects) 🤓
cat db/.env | cut -d "=" -f 2- -z \
| ~/bin/jq -r -c '.hana[] | select(.tags[]=="mta-resource-name:hdi_db") .credentials | .host +":"+ .port +" "+ .user +" "+ .password' \
| xargs ~/sap/hdbclient/hdbuserstore SET HDI_DB_RT
~/sap/hdbclient/hdbuserstore LIST
~/sap/hdbclient/hdbsql -U HDI_DB_RT -A "SELECT Current_User FROM DUMMY"
The first command uses jq to find the service entry with the tag "mta-resource-name:hdi_db" in the db/.env file and to construct the hostname, port, user and password values, which are then passed to the hdbuserstore command as arguments thanks to xargs.
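The plumbing around jq can be illustrated with a small, self-contained mock (no HANA and no real db/.env needed; all values here are made up):

```shell
# A mock of the single line in db/.env: everything after the first "="
# is the JSON value of VCAP_SERVICES
line='VCAP_SERVICES={"hana":[{"credentials":{"host":"somehost","port":"443"}}]}'

# cut -d "=" -f 2- keeps everything after the first "=", i.e. the raw JSON
json=$(printf '%s' "$line" | cut -d "=" -f 2-)
echo "$json"
# → {"hana":[{"credentials":{"host":"somehost","port":"443"}}]}

# xargs turns the whitespace-separated fields produced by jq into
# positional arguments appended to the given command
printf '%s' "somehost:443 SOME_USER SOME_PASSWORD" \
  | xargs echo hdbuserstore SET HDI_DB_RT
# → hdbuserstore SET HDI_DB_RT somehost:443 SOME_USER SOME_PASSWORD
```

In the real pipeline, jq sits between these two steps and builds the "host:port user password" string from the selected service entry’s credentials.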
You can see that the key HDI_DB_RT has been added in addition to the already existing HANACLOUDTRIAL_DBADMIN and can be used to run queries with hdbsql.
Let’s check what views the run-time user can see now.
~/sap/hdbclient/hdbsql -A -U HDI_DB_RT \
"SELECT SCHEMA_NAME, VIEW_NAME, VIEW_TYPE FROM VIEWS WHERE SCHEMA_NAME NOT LIKE '%SYS%';"
The #DI schema contains objects and data required for the container’s management. One specific view that will become more interesting for us is M_OBJECTS, which shows the database objects in the run-time schema of an HDI container. But it is empty for now, as we haven’t deployed anything yet…
Make data from external schema visible to the project
At this moment our MTA project is bound only to its target HDI container and can read data and metadata only from it. But we want data from the previously loaded TPCDS classic schema to be used in our calculation views. In the Cloud Foundry programming model this is possible using instances of User-Provided Services (aka UPS).
In our case, the user-provided service should give access to the TPCDS schema, and the JSON-formatted parameters required by such a UPS are described in the “Enable Access to Objects in a Remote Classic Schema” documentation.
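The JSON we need for the UPS therefore has roughly this shape (all values here are illustrative placeholders; the real host, port, driver and certificate will be copied from our database service, and the user is created in the next step):

```json
{
  "host": "<your-hana-cloud-host>.hanacloud.ondemand.com",
  "port": "443",
  "driver": "com.sap.db.jdbc.Driver",
  "certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
  "user": "TPC_USER",
  "password": "H3LL0C!oud",
  "schema": "TPCDS",
  "tags": "hana"
}
```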
Create a technical user for the UPS
Let’s create a new file ups_user_with_roles.sql in a new folder misc in our project…
…with the following content…
-- Technical user to be used by the user-provided service
--DROP USER TPC_USER;
CREATE USER TPC_USER PASSWORD "H3LL0C!oud" NO FORCE_FIRST_PASSWORD_CHANGE SET USERGROUP DEFAULT;

-- Role with plain SELECT on the TPCDS schema
--DROP ROLE SELECT_TPCDS;
CREATE ROLE SELECT_TPCDS;
GRANT SELECT ON SCHEMA "TPCDS" TO SELECT_TPCDS;
GRANT SELECT_TPCDS TO "TPC_USER" WITH ADMIN OPTION;

-- Role whose SELECT privilege can be granted further
--DROP ROLE SELECT_TPCDS_WITH_GRANT;
CREATE ROLE SELECT_TPCDS_WITH_GRANT;
GRANT SELECT ON SCHEMA "TPCDS" TO SELECT_TPCDS_WITH_GRANT WITH GRANT OPTION;
GRANT SELECT_TPCDS_WITH_GRANT TO "TPC_USER" WITH ADMIN OPTION;

-- Verify: connect as the new user and check the tables it can read
CONNECT TPC_USER PASSWORD "H3LL0C!oud";
SELECT "TABLE_NAME", "RECORD_COUNT" FROM "M_TABLES" WHERE "SCHEMA_NAME"='TPCDS';
…and execute it.
~/sap/hdbclient/hdbsql -U HANACLOUDTRIAL_DBADMIN \
-A -f -I misc/ups_user_with_roles.sql
The user TPC_USER with the password H3LL0C!oud and two roles have been created, and the user can select data from tables in the TPCDS schema. We can check the same in the Database Cockpit in SAP BTP.
Create the user-provided service
There are several ways to create a UPS, e.g. from the cockpit of the SAP Business Technology Platform…
…but we will take advantage of the CLI to create everything we need by copying some parameters, like host, port, certificate and driver, from the existing HDI target container service in .env.
cat db/.env | cut -d "=" -f 2- -z \
| ~/bin/jq -r -c '.hana[] | select(.tags[]=="mta-resource-name:hdi_db") .credentials | {host: .host, port: .port, driver: .driver, certificate: .certificate, user: "TPC_USER", password: "H3LL0C!oud", schema: "TPCDS", tags: "hana"}' \
| ~/bin/jq > /tmp/ups4tpcds_credentials.json
cat /tmp/ups4tpcds_credentials.json
Now we can supply this JSON file as a parameter to the service to be created.
cf create-user-provided-service UPS4TPCDS \
-p /tmp/ups4tpcds_credentials.json
cf service UPS4TPCDS
Bind the user-provided service to the MTA
Luckily, we do not need to manually modify the mta.yaml and db/.env files to bind this new UPS to our multitarget application in SAP Business Application Studio.
Once it is bound, thanks to the git version management you can see the differences introduced into the mta.yaml file.
The db/.env file is not tracked in the git repository, so we cannot see differences similar to the above. But we can output the user-provided part of its JSON configuration.
cat db/.env | cut -d "=" -f 2- -z | ~/bin/jq '."user-provided"'
Commit changes to the local Git repository
Before we close this post let’s commit our changes to the local Git repository.
git add --all
git commit -am "UPS for TPCDS added"
git hist
And with that, we are all set to build our calculation views in the next step.
-Vitaliy, aka @Sygyzmundovych