
SAP HANA SPS 11: New Developer Features; HDI

This blog is part of the larger series on all new developer features in SAP HANA SPS 11: SAP HANA SPS 11: New Developer Features

HDI – HANA Deployment Infrastructure

HDI (HANA Deployment Infrastructure) is a service layer of the SAP HANA database that simplifies the deployment of HANA database artifacts. It provides a declarative approach for defining database objects and ensures consistent deployment into the database, based on a transactional all-or-nothing deployment model and implicit dependency management. It is intended to improve the overall management of database artifacts, particularly in scenarios where it's necessary to deploy multiple versions of the same data model into the same database instance.

There are a few high level points about HDI that are important to understand:

  • Containers: All development objects within the scope of HDI must now reside within an HDI Container. The HDI Container allows multiple deployments, sandboxing, and enhanced security options for all database artifacts.
  • HDI focuses on deployment only: Unlike the classic HANA Repository, there are no version control or lifecycle management aspects. These topics are now provided by Git/GitHub.
  • Database objects only: Unlike the classic HANA Repository, HDI only covers pure database development objects. It has nothing to do with JavaScript, XSODATA, or other application-layer artifacts.

Containers

Container is such an overloaded term in the IT industry. We have OS containers, runtime containers, application containers, etc. Within HANA we even already have the concept of MDC – Multi-Database Containers. HDI introduces yet another thing called a container, but this is lower level than all those other examples. An HDI container is essentially a database schema. It abstracts the actual physical schema and provides the schema-less development and security isolation that customers have been requesting. Some rules of the HDI Container world:

  • All database objects are still deployed into a schema
  • This schema is abstracted by the container and is really only designed to be accessed via the container
  • All database object definitions and access logic have to be written in a schema-free way
  • Only local object access is allowed. This way when the code is branched and points to a different version of the container, the actual physical references can be redirected to a different underlying schema
  • Database objects are now owned by a container-specific technical object owner.  There is no longer a single all-powerful technical user (_SYS_REPO). Each technical user only has access to its local container objects.  Any foreign objects must be accessed via Synonym and granted access by the foreign technical user.
  • The same container specific technical user is automatically used by XS Advanced when executing database logic. For more details on XSA technical user connectivity see this blog: SAP HANA SPS 11: New Developer Features; XS Advanced
  • Modeled views are no longer placed in a single central schema (_SYS_BIC/_SYS_BI). They are now placed in the container-specific schema like all other development objects. This means that some central meta-data concepts must also be duplicated in each container schema.


Figure 1: HDI Containers in Detail

Small Example

Probably the best way to explain the new concepts of HDI is to walk through the steps of creating a simple example. The following example is based upon the initial delivery of SPS 11 and uses the command line tools and external editors. Early next year, SAP will also ship a web-based tool that will provide an enhanced development experience.

Create the Container

HDI offers both a SQL API and integration into XS Advanced. The more common approach is to use HDI in conjunction with XS Advanced, and that's the scenario we will show here. In order to create the container we use the XS command line and the create-service command.


Figure 2: Create Container
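A minimal sketch of the command, assuming the hana service with the hdi-shared plan and an illustrative instance name:

xs create-service hana hdi-shared my-hdi-container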

This one action created the container both in HANA and exposed it to XS Advanced. Behind the scenes, several database schemas and a technical user/owner were also created in the HANA database. We will see more of what was created later in these steps.

At the root of our XS Advanced application we will need some form of deployment descriptor. The deployment description is contained in the application deployment manifest, which specifies what you want to build (and how) as well as where (and how) to deploy it.  For simple applications you might use the manifest.yml and the corresponding xs push command to send the content to the server.  For more complicated multi-target applications (MTA) with dynamic service port assignment and dependencies you would use the mtad.yaml file.  For more details on deployment descriptors, please see: http://help.sap.com/hana/SAP_HANA_Developer_Guide_for_SAP_HANA_XS_Advanced_Model_en.pdf

Regardless of which approach you use, this deployment descriptor is where we would reference the HDI container.  You only need to supply the container name to both the database services and the Node.js or Java services.  This is how any part of our application knows which HANA system to connect to and which schema/technical user to use to connect. We never supply such technical information in our code or design time objects any longer.

The following is an example database service definition section from a manifest.yml.  In the services section we reference the container name we created in the previous step.


Figure 3: manifest.yml database service example
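A minimal sketch of such a manifest.yml entry; the application and service names are illustrative, and no-route is explained in the database service section below:

applications:
- name: dev602-db
  path: db
  no-route: true
  services:
  - my-hdi-container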

This is the same definition but as done in the mtad.yaml format:


Figure 4: mtad.yaml database service example
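A comparable sketch in mtad.yaml form, assuming the hdb module type and the com.sap.xs.hdi-container resource type (module, ID, and resource names are illustrative):

_schema-version: '2.0'
ID: dev602
version: 0.0.1

modules:
- name: db
  type: hdb
  path: db
  requires:
  - name: my-hdi-container

resources:
- name: my-hdi-container
  type: com.sap.xs.hdi-container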

Creating the DB service

For creating/deploying database artifacts we need a database service in our project. In the above deployment descriptor files we designated that the /db folder in our project would hold this database service. The database service is really just an SAP-supplied Node.js module that runs briefly after deployment to call into HANA and ask HDI to deploy the corresponding database artifacts. It then shuts down. This is why you see the no-route property for this service in the manifest.yml: no HTTP port will be assigned to this service since it isn't designed to be interactive.

Inside the db folder we will need a package.json file (since this is a Node.js service) and a src folder to hold the actual database artifact definitions. The package.json should declare the dependency on the sap-hdi-deploy module and also call this module as the start-up script. The rest of the content in this example is optional.


Figure 5: Database Service package.json
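A minimal sketch of such a package.json; the exact path of the deploy script inside the sap-hdi-deploy module is an assumption here:

{
  "name": "db",
  "dependencies": {
    "sap-hdi-deploy": "*"
  },
  "scripts": {
    "start": "node node_modules/sap-hdi-deploy/deploy.js"
  }
}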

Inside the source folder we have two HDI-specific configuration files. In this new world of HDI, there is no SAP HANA Repository and therefore no packages. In the old Repository we used the folders/packages as the namespace for all database objects. The corresponding functionality in HDI is to place a .hdinamespace file in the root of the source folder and specify the starting namespace for all development objects. You can then also use the subfolder: append option to attach the folders in your project structure as parts of the namespace as well.


Figure 6: .hdinamespace Example
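A minimal sketch of a .hdinamespace file, using an illustrative namespace:

{
  "name": "dev602.data",
  "subfolder": "append"
}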

The other configuration file is the .hdiconfig. Here we list the HDI plug-ins and versions for each file extension. This allows you to control your file extension usage at the project level. More importantly, it allows you to target a specific version of the deployment plug-in. In HDI, we use the numbers in the hdiconfig file to do a version check; e.g., if an application wants plug-in x in version 12.1.0 and we only have 11.1.0, then we reject this. It is therefore also clear that you cannot import an application designed only for SPS 12 into SPS 11. Since the plug-ins are backwards compatible, you can use the version 11.1.0 even on your 12.1.0 or later system. This way, if your application is designed to be used on multiple versions, you can use the lowest version in the hdiconfig file and explicitly control which versions it is compatible with.


Figure 7: .hdiconfig Example
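A minimal sketch of a .hdiconfig covering two file extensions; the plug-in names follow the com.sap.hana.di.* pattern that also appears in the deployment logs later in this blog:

{
  "file_suffixes": {
    "hdbprocedure": {
      "plugin_name": "com.sap.hana.di.procedure",
      "plugin_version": "11.1.0"
    },
    "hdbcds": {
      "plugin_name": "com.sap.hana.di.cds",
      "plugin_version": "11.1.0"
    }
  }
}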

Database Development Objects

The actual database artifact development isn’t all that different from what you do today in the current HANA Repository. Each database object type has its own file and the file extension controls what type of object you want to have. Often you can simply cut and paste the existing objects from the current HANA Repository into your new HDI/XSA project.  For many development artifacts, like stored procedures, you only need to remove the Schema references.


Figure 8: .hdbprocedure Example in HDI
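A minimal sketch of a schema-free .hdbprocedure; the namespace and the DUMMY synonym (created later in this blog) are illustrative:

PROCEDURE "dev602.procedures::get_current_user" (
  OUT user_name NVARCHAR(128) )
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER
  READS SQL DATA AS
BEGIN
  -- No schema reference; only container-local objects are addressed
  SELECT CURRENT_USER INTO user_name FROM "dev602.data::DUMMY";
END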

Other database artifacts, such as CDS, have new file extensions and updated syntax.  HDBDD is now HDBCDS for example. For the full list of additions and changes to the CDS syntax in HDI, please see this blog: SAP HANA SPS 11: New Developer Features; HANA Core Data Services


Figure 9: .hdbcds Example in HDI
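A minimal sketch of a .hdbcds file with an illustrative namespace and entity:

namespace dev602.data;

context MD {
  entity Customer {
    key ID : Integer;
    name   : String(80);
  };
};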

Other artifacts have been completely redesigned within HDI. The hdbti artifact, for example, has an all-new format, options, and file extension (hdbtabledata). We have also added a whole new set of DDL-based HDI development artifacts. This means we finally have a way to manage the lifecycle of, and consistently deploy, catalog objects that could previously only be created via pure SQL/DDL. These artifacts include Tables, Indexes, Constraints, Triggers, Views, Sequences, etc.
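A minimal sketch of the new .hdbtabledata format, loading an illustrative CSV file into the CDS entity from the sketch above; the file and column names are assumptions:

{
  "format_version": 1,
  "imports": [{
    "target_table": "dev602.data::MD.Customer",
    "source_data": {
      "data_type": "CSV",
      "file_name": "dev602.data.customer.csv",
      "has_header": true
    }
  }]
}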

Finally, it's important to reiterate the point that in the HDI world only access to local objects is allowed. There is no such thing as global public synonyms. Therefore, common logic such as SELECT FROM DUMMY won't work any longer. None of the system tables or views is immediately available. Even for such objects, local synonyms must be created, and logic within the container can only reference these synonyms.

For example we might create a synonym for DUMMY:


Figure 10: .hdbsynonym Example in HDI
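A minimal sketch of such a .hdbsynonym definition, mapping a container-local name to SYS.DUMMY (the synonym name matches the one queried in the JavaScript below):

{
  "dev602.data::DUMMY": {
    "target": {
      "object": "DUMMY",
      "schema": "SYS"
    }
  }
}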

Later, even JavaScript or Java code running in XS Advanced can only use this synonym when querying the database.


// The connection is pre-bound to the container's technical user by XS Advanced
var connection = $.hdb.getConnection();
// Query via the container-local synonym; SYS.DUMMY is not directly visible
var query = 'SELECT CURRENT_USER FROM "dev602.data::DUMMY"';
var rs = connection.executeQuery(query);
var currentUser = rs[0].CURRENT_USER;
var greeting = 'Hello Application User: ' + $.session.getUsername() +
               ' Database User: ' + currentUser +
               '! Welcome to HANA ';
$.response.contentType = 'text/plain; charset=utf-8';
$.response.setBody(greeting);


Deployment to the Database

Once we have coded all of our HDI-based development artifacts we are ready to deploy the database service to XS Advanced and thereby also deploy the database artifacts into HANA and the underlying schema for the container. For this we will use the xs push command. Append the name of the specific service defined in the manifest.yml file to deploy only the database service for now.


Figure 11: xs push
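Assuming the database service from the manifest.yml sketch above is named dev602-db, the invocation would look roughly like:

xs push dev602-db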

The service should deploy, run, and then very quickly reach completion. However, no actual errors are reported back from the deployment service via the push command. If you had a syntax error in any of the development artifacts, you could only see this by looking at the deployment logs. The upcoming web-based development tools will streamline this process by displaying the logs immediately in a deployment window. For now, though, we will use the xs logs command to check the status of the HDI deployment.
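A sketch of the log check; the --recent flag is an assumption based on the Cloud Foundry CLI that the xs client mirrors, and would dump the buffered logs instead of tailing them:

xs logs dev602-db --recent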


Figure 12: xs logs With Errors

Notice in the above log output we did have an error in the .hdiconfig: I specified an HDI plug-in version that doesn't exist on my system. Any sort of syntax or configuration error would show up here in the logs in a similar fashion.

After correcting my error, I can perform the push again. This time everything works correctly and I can see the name of the target schema in the logs. Currently this is the best way to see the correlation between HDI container and the actual physical schema in HANA.


Figure 13: xs logs With Successful Deployment

We could now go to the HANA Studio or the Web-based Development Workbench and look at this physical schema. In most SPS 11 and higher systems using HDI there will be many such schemas with long GUID-based names. You will likely have to search for this schema name. You should see that several schemas were actually created for your container. If your deployment was successful, you should also be able to see the database artifacts you created.


Figure 14: HDI Container Schema as Viewed from the HANA Studio

Admittedly, the experience of working with the generated schema in the HANA Studio and the Web-based Development Workbench isn't ideal. This is why, early next year, SAP plans to also deliver a new XS Advanced/HDI-based catalog tool with the new SAP Web IDE for SAP HANA. This tool will allow you to list and view the details of the HDI containers and avoid the cumbersome lookup of the underlying schema names.


Figure 15: HDI/XSA Based Catalog Tool for Viewing HDI Container Details

This new catalog tool will also allow you to view the data in tables/views and execute procedures within your container. All access is done via the technical user of the container. This way, developers in development systems can have full access to the development objects they are working on without the need to set up special developer roles for an application.


Figure 16: HDI/XSA Based Catalog Tool; Data Preview

Planning for HDI

With the introduction of HDI there are several logical changes to supported development artifacts as well. This particularly impacts the area of modeling. In HDI there is no support for Analytic, Attribute, or Scripted Calculation Views. Therefore you would have the following transitions:

  • Analytic Views -> Graphical Calculation Views
  • Attribute Views -> Graphical Calculation Views
  • Scripted Calculation Views -> SQLScript Table Functions
  • Column Based Filters -> Filter Expressions
  • Derived Parameters by Table -> Only Derived By Procedure (your procedure logic can read a table)

In order to prepare for these changes when moving to HDI, the HANA Studio in SPS 11 contains a migration tool. This tool migrates these various view types in place, meaning it won't convert them to HDI but will leave them in the existing HANA Repository. It will convert them to Graphical Calculation Views and/or SQLScript Table Functions in order to prepare for a later move to HDI. This way customers can make the transition in phases for less disruption to their business users.


Figure 17: Studio Migration Tool For Modeled Views

Comments
      Sergio Guerrero

      great article Thomas

      Martin Chambers

      HANA is changing so fast it's a struggle to keep up. Makes you want to sit back and wait until the dust settles.

      Naresh Gadamsetti

Thank you Thomas for the detailed blog.

If all the tables are only visible within the container, and the model is schema-less, how do we handle ETL scenarios, for example loading some external data into these container tables? I suspect that would be through a synonym; any example in this regard would be greatly appreciated.

      Thomas Jung
      Blog Post Author

Generally, ETL-related tables would have been created by the ETL tool and would therefore just be normal schema tables - not HDI container ones. However, if you wanted to use ETL to fill your HDI tables, your technical user would have to grant permission to the ETL user to access these tables (via a stored procedure).

-- Temporary input tables expected by the _SYS_DI grant API
CREATE LOCAL TEMPORARY COLUMN TABLE #PARAMETERS LIKE _SYS_DI.TT_PARAMETERS;
CREATE LOCAL TEMPORARY COLUMN TABLE #PRIVILEGES LIKE _SYS_DI.TT_SCHEMA_ROLES;

-- Request the container-specific role for the target (non-container) user
INSERT INTO #PRIVILEGES ( ROLE_NAME, PRINCIPAL_NAME, PRINCIPAL_SCHEMA_NAME ) VALUES ( 'dev602.roles::dev602', '<user>', NULL );

-- Perform the grant via the deployment infrastructure API
CALL _SYS_DI.GRANT_CONTAINER_SCHEMA_ROLES('<runtime schema>', #PRIVILEGES, #PARAMETERS, ?, ?, ? );

The above code grants an HDI container-specific role to a non-container user (any HANA user). Your ETL user would now have access to read/write into the HDI container tables.

      Naresh Gadamsetti

Thank you for the additional info. A couple more questions:

It's my understanding that all the schemas are replaced with the container approach; does it mean that we can continue to create stand-alone schemas for ETL purposes?

In this scenario, if I create a container with custom XS code and application tables using CDS, can the models that are created in this HDB container access both container tables and stand-alone tables?

      Thomas Jung
      Blog Post Author

>It's my understanding that all the schemas are replaced with the container approach

No, this is not true. With SQL directly you can still create non-HDI container schemas. You just give up the benefits of HDI if you do. The Suite, for instance, will continue to use non-HDI container schemas since they already manage the lifecycle of the objects externally anyway.

Even in HDI, the schema doesn't technically go away. A container generates a schema. It's just the programming model within the container which doesn't allow references to the schema. This way the generated schema name can change (providing branching and isolation) but all the objects within the container remain working. It is this approach which allows two different versions of the same container to be active and testable within a single HANA instance.

>Can the models that are created in this HDB container access both container tables and stand-alone tables?

Yes, the models within an HDI container can always access any objects within that container. They can also access tables from other containers via synonyms and the proper technical user access grants. They can also access objects from non-HDI container schemas as long as the technical user of the HDI schema has expressly been granted access to this foreign schema.

      Naresh Gadamsetti

Thank you. This helps.

      Martin Chambers

      Hi Thomas,

really interesting. This is the only reference to the procedure GRANT_CONTAINER_SCHEMA_ROLES that I could find. I would have expected an entry in the SPS 11 Security Guide.

      http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

Perhaps you could suggest adding something there, next to the other GRANT_* procedures?

      Regards,

      Martin

      Thomas Jung
      Blog Post Author

I have pointed out to development that the entire scenario of external access to HDI container content isn't documented currently. It's possible, but not documented and rather complex. I posted a complete example here which includes structured privileges and container-specific roles:

      GitHub - I809764/DEV602: SAP TechEd 2015 DEV602 First XS Advanced Project

I was going to publish a how-to blog on this subject but after discussing with development decided to wait. We are making some enhancements to the process and the grant procedures to make the overall flow better. We also figure that most people are probably only doing small-scale POCs with HDI in SPS 11 and will wait until SPS 12 to do anything larger. Therefore we aren't widely promoting how to do this in SPS 11, knowing that it will change for the better in the near future. Still, studying such an example hopefully clears up the concepts of HDI and external access to HDI container objects.

      Dirk Raschke

      "However if you wanted to use ETL to fill your HDI tables your technical user would have to grant permission to the ETL user to access these tables (via a stored procedure)."


At this moment, I'm not sure if I've understood the way to do it. We have a lot of hdbdd tables filled via flowgraphs and want to exchange them for hdbcds tables.


My understanding is that the Technical User (HDI_USER) now has to grant the permissions to the ETL user via procedures, right?

      Does It mean I have to call this procedure (this described template) inside of my container?

      And after the ETL-User got the permission he should be able to access the container and he will see the tables in the non-container world?


      And for this scenario I don't need .hdbsynonyms, or do I?

      Thomas Jung
      Blog Post Author

      >Does It mean I have to call this procedure (this described template) inside of my container?

The procedure is in the #DI schema of your container.


      The other option is to create an hdbrole which will create a container-specific role which can be granted to the ETL user.
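A minimal sketch of such an .hdbrole file, reusing the illustrative role name from the grant example above; the table name is an assumption:

{
  "role": {
    "name": "dev602.roles::dev602",
    "object_privileges": [
      {
        "name": "dev602.data::MD.Customer",
        "type": "TABLE",
        "privileges": ["SELECT", "INSERT", "UPDATE", "DELETE"]
      }
    ]
  }
}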

      >And after the ETL-User got the permission he should be able to access the container and he will see the tables in the non-container world?

      Yes the ETL user will have access to the underlying schema just like any non-HDI schema.

      >And for this scenario I don't need .hdbsynonyms, or do I?

      No. Nothing you described here would require synonyms.

      Dirk Raschke

      Thomas, thanks a lot.

Because I prefer to use design-time objects, I would like to use the option with the hdbroles.

      Now I found your sample on github. DEV602/dev602.hdbrole at master · I809764/DEV602 · GitHub

Would it work if I entered all my relevant tables inside this role and granted it to the ETL user?

      Thomas Jung
      Blog Post Author

>Would it work if I entered all my relevant tables inside this role and granted it to the ETL user?

      Yes that should work.  Have a look at the last 5 minutes or so of this video:

      https://www.youtube.com/watch?v=rb4jOoNwSX4&list=PLoc6uc3ML1JT94z5DLhYXnyczii-gnkel&index=2

I use a container-specific HDBROLE and assign the corresponding container-specific role to a database user. This database user is then able to use Lumira to access the container objects without any real knowledge of what an HDI container even is. Once you have this role granted, your user can access the underlying schema directly just like any other. The only thing not quite so nice is seeing all the long, unreadable schema names.

      Dirk Raschke

      This is exactly what I was looking for. 🙂

      (And I remembered that I saw this video for a while ago, but unfortunately had forgotten it.)

      I created a hdbrole in my container and everything worked fine. I could build my role successfully.

After that I was looking for my role with the Studio and Web IDE, but couldn't find the created role, even with the system user. It's not there. 🙁

Could there be any further restrictions that prevent the availability of this role?

      (And yes I can see the access_roles from the different containers)

      Dirk Raschke

What I'm wondering: in your video the build log shows the deployment of the role, while in my log the role is not shown. Maybe it's not deployed...

I built it several times and I don't get any errors; all seems fine.

      Thomas Jung
      Blog Post Author

You should see the role in the log getting created/changed. The best hint I can give you is to double-check the file extension. It's easy to make a typo, and an unknown file extension won't produce an error. The file will simply be ignored.

      Dirk Raschke

Only one last question... I don't have to register it in one of the yaml or other json files, do I?

      (extension seems to be fine - I copied yours and changed it.)

      Thomas Jung
      Blog Post Author

No. Nothing special about the role. It's just like any other database artifact really.

      Dirk Raschke

Sorry, my mistake. I put it in the db folder, but not in the src folder.

      Now I can see it behind the roles.... 🙂

      But helped me to learn more about the log files ... Thanks a lot!

      You saved me a lot of time!!!

      Dirk Raschke

If I try to give my ETL user the role, I get this error:

"Could not modify user 'ETL-User'. Could not grant role TestTBASE.db.roles::cdsTables Role TestTBASE.db.roles::cdsTables does not exist"

If I deploy the hdbrole, does the system check the entries inside the role at that time?

Or are the roles only checked when I try to give someone the permission?

      Thomas Jung
      Blog Post Author

      I would have expected an incorrect object reference within the role to be caught at build time.

      Dirk Raschke

I also tried more roles, but the result is always the same -> "Role ... does not exist"

I can assign the role to the user, but if I save it, I get the error that the role doesn't exist.

If you have any idea what the reason could be, please let me know...

      Thanks!


      Thomas Jung
      Blog Post Author

Are you trying this in the Studio or the Web-based Development Workbench? The assignment of schema-local roles only works in the Studio and only from a recent patch level.

      Dirk Raschke

      Tried both, Studio & WebIDE. Rev. 112 (1.00.112.00.1457615240)

      Thomas Jung
      Blog Post Author

      The Studio version I used in my video is not an internal SAP version. It was installed from tools.hana.ondemand.com. So I wasn't using anything newer than what you have. I'm afraid I'll have to suggest that you need to enter a support ticket to have them troubleshoot further.

      Thomas Jung
      Blog Post Author

One other thing you can try - instead of using the tooling, call the stored procedure directly. Call GRANT_CONTAINER_SCHEMA_ROLES from your #DI schema (see my earlier screenshot). This is what the tooling should be calling behind the scenes for you anyway. If there is a problem in your tooling, perhaps you can bypass it temporarily by calling the procedure manually.

      Dirk Raschke

Hi Thomas, I found out that I had too old a HANA Studio version running. After updating my Studio I could assign the role to the user successfully. 🙂

      SAP HANA SPS 12: hdbroles doesn't work

      Former Member

      Hi Thomas

      First of all thank you so much for the awesome documentation.

I'm stuck right now as I'm not getting a response from the XS controller.

I'm typing:

https://<host name>:3<instance number>15/v2/info

I am getting a connection reset error.

I am unable to proceed further with setting an XS API endpoint.

      Thomas Jung
      Blog Post Author

3<Instance Number>15 doesn't seem right. It's normally 3<instance number>30.

      Former Member

It was a firewall issue. I used a remote system in the same domain as the HANA Linux box and it worked. And yes, the port number was 3<instance number>30.

      Sachin C

      Hello Thomas,

      I am creating a .hdbview file and trying to activate it using HDI. I am always getting an error "com.sap.hana.di.view: [8250009] Syntax error: "incorrect syntax near "="".

Or slightly different errors as to where the syntax error is. I was wondering if there is any different syntax to be followed in SPS 12 systems for hdbviews. I know CDS is the way to create views now, but my understanding was that even hdbviews are supported. Am I right?

      Thanks,

      Sachin

      Thomas Jung
      Blog Post Author

Yes, HDBVIEW has a completely new syntax structure in HDI. You can't just copy over the HDBVIEW from the Repository. The old HDBVIEW was JSON based and the new one is DDL based. Please see the developer guide for a description and example of the new syntax: http://help.sap.com/hana/SAP_HANA_Developer_Guide_for_SAP_HANA_XS_Advanced_Model_en.pdf

      Sachin C

      Thanks Thomas. Was struggling with this for some time.

      Sachin C

      Hi Thomas,

I have one more question. It's mentioned that the .hdinamespace is where we set the namespace name and the subfolder appending rules. But I see in the DB objects like .hdbprocedure and .hdbview that we have to give the namespace-prefixed value as the name. What is the use of the .hdinamespace file then? Couldn't it be that all artifacts in or under the folder of the .hdinamespace file make use of the namespace in this file?

      Thanks,

      Sachin

      Thomas Jung
      Blog Post Author

      This is really no different than in the old repository.  The .hdinamespace file replaces the folder structure of the repository.  Although you placed your .hdbprocedure in a certain folder structure, you still had to place the matching namespace in the signature definition within the file itself. Now what helps is that the tooling would generate this signature for you. Once we ship the SAP Web IDE for SAP HANA, you will see that the new Editors for HDI/XSA development do the same and read this information from the .hdinamespace file.

      Sachin C

      Aah OK. So the presence of tooling makes a difference.

      Former Member

      HI Thomas

While I was deploying myapp1 and myapp2, which were mentioned in the sample programs as per the XSA video tutorials, I faced an Internal Server Error.

When I checked the logs for myapp1-web and myapp2-web I saw an error:

Error: Hostname/IP doesn't match certificate's altnames: "Host: dcidshsapp01.dci.local. is not in the cert's altnames: DNS:dcidshsapp01, DNS:*.dcidshsapp01"

Is this issue because of the certificate? How do I resolve it?

      Thomas Jung
      Blog Post Author

Yes, the hostname you are using to connect doesn't match the HTTPS certificate hostname. Perhaps you are trying to connect with localhost or some other alias. You must use the same hostname to connect with the XS Client as you use to start the controller. It is very specific about this in order to validate the HTTPS certificates.

      Former Member

Yes, in the .yml file the destination URL that I had given was 'dcidshsapp01.dci.local', which was what I used to check if the XS controller was working. But when I changed the destination URL in the .yml file to dcidshsapp01 (which I thought was the host name in the certificate) everything started working fine. '.dci.local' was not required for the routing purposes, I guess.

      Dirk Raschke

      We have the same problem, but I don't get it solved.

      I opened up a thread.

      SAP Hana WebIDE: 500 Hostname/IP doesn't match certificate's altnames

      Fabian Krüger

      Hi Thomas,

      synonyms work great for SYS.DUMMY, but when I try any other synonyms like SYS.USERS or SYS.USER_PARAMETERS the deployment fails because of insufficient privilege.

      Did you try other synonyms yet? Audit Log says the CREATE SYNONYM action is unsuccessful for user _SYS_DI_TO. This user has object privileges on SYS.DUMMY (select) but nothing else. If I add SYS.USER_PARAMETER (select) the action still fails. Do you have any suggestions where to grant additional authorizations to make it work?

      Thomas Jung
      Blog Post Author

      The container technical user needs access to the object in the source schema. 

      Fabian Krüger

      Thanks for your fast reply.

      As far as I understand - the container technical user is the one actually executing the statements on the database. This user is called SBSS_<long_number> and has Roles like <RuntimeHDIContainerGUID>::access_role as well as hdi::cds::access_role and some others.

      I added the authorization for SYS.USER_PARAMETERS (select).

      The deployment still fails:

      01.04.16 17:05:44.337 [APP/0] ERR       Deploying "src/general.hdbsynonym"

      01.04.16 17:05:44.337 [APP/0] ERR       ERROR: com.sap.hana.di.synonym [8250505] Not authorized to access the synonym target "SYS.USER_PARAMETERS"

      01.04.16 17:05:44.337 [APP/0] ERR         at "src/general.hdbsynonym" [0:0]

      01.04.16 17:05:44.337 [APP/0] ERR      Processing work list... Failed

      01.04.16 17:05:44.337 [APP/0] ERR      ERROR: [8211557] Make failed (1 errors, 0 warnings): tried to deploy 1 (effective 3) files, delete 0 (effective 0) files, redeploy 0 dependent files

      01.04.16 17:05:44.337 [APP/0] ERR     Making... Failed

      The only difference is: I can now access the table without using the synonym (since the executing user has the authorization)...although the result set is currently empty (but shouldn't be empty).

      Thomas Jung
      Blog Post Author

      If you look in the HRTT tool you will see that there are two technical users - HDI_USER and USER - for each container.


      I suspect you've assigned to the USER but not the HDI_USER. 

      We do plan to make this whole process of access grants easier in the future by having a series of procedures you can call. We realize that working directly with the technical users for such grants is rather cumbersome.

      Fabian Krüger

      Thanks again.

I couldn't find the HRTT tool... which revision is needed for that? I'm currently on 111. I added authorization to all SBSS users having the schema access role (about 6-8 users, not sure any more, but there were two different types and all others were kind of duplicates) but still without success. I will probably wait until the process gets easier in the future...

      Thomas Jung
      Blog Post Author

HRTT can be downloaded from the Service Marketplace and manually installed onto SPS 11 (Rev 111 is fine). It's part of the SAP Web IDE for SAP HANA delivery that shipped earlier this week. SAP HANA SPS 11: New Developer Features; SAP Web IDE for SAP HANA

      Fabian Krüger

      Thanks Thomas,

having HRTT installed now, I can only see HDI containers in the SAP space. How can I see the ones in my own space?

The user I'm using has the SpaceDeveloper role for both the SAP space and my own space...

The "known limitations" state that all created objects will be in the SAP space. I guess this means that SAP is the only space which is visible then... 🙁

      Thomas Jung
      Blog Post Author

Yes, in SPS 11 the HRTT and Web IDE for SAP HANA can only work with the SAP space. In SPS 12 it is planned that this limitation is removed and any space can be utilized.

      Naresh Gadamsetti

      Hi Fabian,

How did you resolve this issue, as I am getting the same:

      ERROR: com.sap.hana.di.synonym [8250505] Not authorized to access the "SYS.USERS" synonym target

      at "src/data/general.hdbsynonym$abhranjan01.db.data::Users.hdbsynonym" [0:0]

SYS.USERS is itself a synonym; how can we give the HDI user these access rights?



      Naresh Gadamsetti

      Hi THomas,

Can you please advise if there is improved documentation on synonyms? I am trying to access SYS.USERS by creating a synonym and am facing a not-authorized error. I tried assigning the PUBLIC role to the technical user, which is not possible. I also tried assigning the SYS.USERS object to the technical user but am still seeing the same.

We are stuck at this point; can you please provide insights into it.

      Thanks,

      Naresh G

      Thomas Jung
      Blog Post Author

      Admittedly the synonym process is complicated, but we are working to make it better in SPS 12 and beyond.  For your situation check that you have the right technical user.  There are two created.  The HRTT will show you the user and the HDI user for the container.

      With SPS 12 we introduce a configuration artifact called the hdbsynonymgrantor.  This allows you to describe the security which should be granted to the technical user during HDI build. Therefore you don't have to manually do this step.

      As far as documentation, I will pass along that feedback to the documentation colleagues that we definitely need more content in the area of cross-container/schema access. This will become increasingly important as more companies migrate existing content to HDI.

      Naresh Gadamsetti

Thank you Thomas. I am trying this on SPS 12 and used the right tech users. I did notice that grantor file but could not be successful.

I will try again; thanks for your inputs.

      Sanampreet Singh

      Hi Thomas,

We are using an SPS 12 system (1.00.120.00.1462275491) for our development.

      We want to use objects from a "non-container schema" in my container, specifically to build a calculation view.

I granted access to this "non-container schema" to both technical users, i.e. HDI_USER and USER.

Now I am able to select data from the tables of the "non-container schema" in my HRTT tool SQL console.

      But when I try to create the synonym it gives me the error - "Not authorized to access the synonym target"

      I also tried to follow the instructions from development guide for XSA. There in the prerequisites of creating a synonym they have mentioned that I have to create one service using this syntax - xs create-user-provided-service  -p "{\"host\": \"\",\"port\":\"\",\"user\":\"\", \"password\":\"\",\"tags\":[\"hana\"] }".

I am not sure where I should create this service. I tried the XSA client tools but the command "xs create-user-provided-service" is not available there.

      Please help me in resolving this issue.

      Regards,

      Sanampreet Singh

      Thomas Jung
      Blog Post Author

      >tried on XSA client tools but there command " xs create-user-provided-service" is not available.

      That is the correct command and you should use the XSA Client tools. If you just issue xs, do you not see this command listed? Perhaps you need to update your XSA client.


I also have an example project that I'm creating for TechEd here: GitHub - I809764/dev703: SAP TechEd 2016: Code Review; Migrating Applications to HDI/XSA

      You do have to issue the CUPS command from the XSA client here too (I put the CUPS command in text files in the root of the project).  You then need the hdbsynonymgrantor files in the db/cfg folder. This is what causes your HDI owner and application technical user to receive grants to the foreign schema so that the container synonyms will work.  The grants are done by whatever user you specify in the CUPS.

      Sanampreet Singh

      Thank you very much Thomas for the prompt response. I really appreciate that.

      I will try this process again after looking into your example project.

I have a few more doubts.

      > After creating this service and .hdbsynonymgrantor file, do we need to provide access of "non-container schema" to technical users of our container?

      > Is db/cfg folder specific to your project? Or do we need to create .hdbsynonymgrantor files in the same folder structure?

      Regards,

      Sanampreet Singh

      Thomas Jung
      Blog Post Author

      >do we need to provide access of "non-container schema" to technical users of our container?

No, that's what the hdbsynonymgrantor does. Upon deploy/build it will automatically grant the access to whichever technical user type (or both) you configure in this file.

      >Is db/cfg folder specific to your project?

db is what I named my hdb module. It can be anything you want. The folder cfg must be named cfg; just like the src folder, its name is special. This tells the HDI deployer that this folder contains such configuration files and treats them appropriately.

      Sanampreet Singh

      I followed the process but still I am struggling somewhere. Build is failing.

      I performed the following steps:

      > issued cups and created a service named "CROSS_SCHEMA_SDI_TARGET".

> Then I changed the mta.yaml file and appended the following lines to the resources section:

- name: CrossSchemaService
  type: org.cloudfoundry.existing-service
  parameters:
    service-name: CROSS_SCHEMA_SDI_TARGET

      > Then I created a "cfg" folder in my HDB module and created a .hdbsynonymgrantor file, named as "sdi_target.hdbsynonymgrantor". The contents of the file are:


{
  "CROSS_SCHEMA_SDI_TARGET": {
    "object_owner": {
      "schema_privileges": [
        {
          "reference": "SDI_TARGET",
          "privileges_with_grant_option": ["SELECT", "SELECT METADATA"]
        }
      ]
    },
    "application_user": {
      "schema_privileges": [
        {
          "reference": "SDI_TARGET",
          "privileges_with_grant_option": ["SELECT", "SELECT METADATA"]
        }
      ]
    }
  }
}

> Now when I am doing a 'Build' operation on my HDB module, it fails and gives an error.

But I have already created this service in step 1, and it is also present when I check using the xs services command.

      Is there anything that I am doing wrong? Please help.

      Thank you.

      Regards,

      Sanampreet Singh

      Thomas Jung
      Blog Post Author

      Did you add the service as a requires entry under your hdb module in the mta.yaml as well?  This is necessary to bind the CUPS to the hdb service:

modules:
- name: db
  type: hdb
  path: db
  properties:
    SERVICE_REPLACEMENTS:
    - key: foreign-schema
      service: CrossSchemaService
  requires:
  - name: hdi-container
    properties:
      TARGET_CONTAINER: ~{hdi-service-name}
  - name: CrossSchemaService
  - name: CrossSchemaSys
      Sanampreet Singh

      I didn't do it before. Thank you for pointing out the mistake.

I have added that service under my hdb module now. Now when I do a build, it gives me an error.

> "90CACGDUJWBHLENF_TINYWORLD_HDI_CONTAINER" is the auto-generated schema bound to my container.

      Thomas Jung
      Blog Post Author

      That sounds like the user you placed in your CUPS service doesn't have the WITH GRANT authorizations to the target schema.  That user is the one who will perform the grant to your container technical users and therefore needs the WITH GRANT authorization themselves.

      Sanampreet Singh

This error doesn't come when I remove the 'SELECT METADATA' privilege from the .hdbsynonymgrantor file. Everything works fine with only the 'SELECT' privilege. My database user I311166 also has all the privileges with grant option.

Also, is there any documentation on the mta.yaml file where I can read about all the components/clauses that can be used in this file? For example, we used the "SERVICE_REPLACEMENTS" clause in this scenario. Likewise there will be many others, I suppose, that will be useful in other scenarios.

      Thank you.

      Naresh Gadamsetti

      Hi Thomas,

How can I migrate an existing calculation view from XS classic to XSA? I tried importing the .calculationview file into the HANA Web IDE but this does not seem to work, as new calculation views in XSA have the extension .hdbcalculationview and the XML format is totally different.

Thanks,

      Thomas Jung
      Blog Post Author

We plan to deliver a migration tool with the HANA release scheduled for the end of this year. Until then you really have to recreate the calculation view by hand.

      Former Member

Hi Thomas. For now, could you please provide the steps for creating the synonyms for SYS.USERS and other tables/views in the SYS schema? We have been waiting a while for the latest documentation to come out, but this would really help us proceed with our migration.

Moreover, could you please let me know if I can use the synonym inside my calculation view? I have a working SYS.DUMMY synonym deployed but I am not able to use this inside my calc view; it doesn't show in the node search.

      Thomas Jung
      Blog Post Author

>For now, could you please provide the steps for creating the synonyms for SYS.USERS and other tables/views in the SYS schema

      You just need to create the HDBSYNONYM development object.  Your Container Technical user will need access rights to these tables.  So you might have to manually grant those.  We will make that easier in the future by adding a configuration file that will auto grant those rights on build/deploy. That feature is working internally, but not shipped with SPS 12 yet.

As far as the SYS.DUMMY synonym not showing up in the Calc View value help: is your system SPS 12 based? This feature to search for foreign-schema-based synonyms didn't yet exist in SPS 11, but does in SPS 12.

      Former Member

Hi Thomas. I had the same query as Makesh Balasubramanian. The DUMMY synonym does not show up when I do an artifact search from a calculation view node. I have SPS 12 installed.

I am able to access the synonym that I created for SYS.DUMMY from inside my procedures. So, the synonym is very much there and it's active. However, when I search from a calculation view's node, I am only able to see tables/views/table functions. I can't see the synonyms. Is there something I am missing?

One more thing, Thomas. I could not find any documentation for SAML configuration in SPS 12. Right now, we are using "CA SiteMinder" for single-sign-on facilitation. How do I go about integrating this in XSA?

Any documents/references would help.

      Thanks,

      Abhishek

      Thomas Jung
      Blog Post Author

      If your synonyms are correct and accessible elsewhere, but you don't see them in the value help of the calculation view; then I'd consider this a bug and suggest you open a support ticket. 

As far as your SAML question: here is the link to the help document for SAML configuration in XSA: Managing SAML Identity Providers in XS Advanced - SAP HANA Administration Guide - SAP Library

      Former Member

      Thanks a lot Thomas 🙂

      Former Member

      Hi Thomas,

I am trying to get SAML to work for my XSA application.

Prior to XSA, we had to open the HANA user using the Studio and check the "SAML" checkbox, then click on "Configure" and choose my IDP. I could also set the SAML assertion validation to be done via "EMAIL ADDRESS" in the "User Parameters" list.

However, in the XSA admin page I don't see an option for enabling SAML for a user. So what's happening is that if a user exists in my IDP's Active Directory, he is able to access the XSA application. Is that the expected behaviour? Is there no way to validate the SAML assertion based on the email ID as was the case earlier?

      Former Member

      Hi Thomas,

One question regarding access via Lumira. Can you please advise on accessing a calculation view from Lumira? When we develop an XSA DB module with a calculation view, the view gets created in the HDI container as a column view, but Lumira accesses these views from content packages only. I am unable to see my calculation view within Lumira if I make a HANA live connection as compared to a SQL connection.

      Thomas Jung
      Blog Post Author

Yes, for Lumira you must use the SQL connection. Lumira doesn't yet understand containers and it assumes the Calc Views are in _SYS_BIC (no longer true).

      Former Member

Thanks for the reply Thomas. You say "yet", so I believe the "HANA live connection" approach is still in progress and will be available in the future. If this is correct, do we have any rough timelines for this "HANA live connection" feature's availability?

      Thomas Jung
      Blog Post Author

I say yet simply because to me this seems like an obvious feature improvement. But there is no confirmation I can make that this feature will be added. It's largely up to the reporting tools' development teams to decide if this is an investment they want to make. I have no say in that.

      Former Member

Sure. Thanks a lot as usual for the prompt reply 🙂

      Sanampreet Singh

      Hi Thomas,

I have created two projects in the XSA Web IDE with the same user. Both have different containers. Now I want to use objects of one container in the other. How should I go about it?

We create a CUPS for non-HDI schemas for the same purpose; I'm not sure how to create a CUPS for an HDI container.

      If both the projects belong to different users, will the process be same?

      Thomas Jung
      Blog Post Author

All containers are isolated, even those created in two projects within the same workspace and user. You don't use a CUPS for an existing HDI schema, but instead reference it as a resource with the type org.cloudfoundry.existing-service.

      Sanampreet Singh

      I am not able to understand this flow.

When we have to use a non-HDI schema, we follow some steps, i.e. create a CUPS for the schema, use that CUPS as a resource and change the mta.yaml, then create the synonymgrantor file and finally the synonym.

I am not able to grasp what the steps will be for the other HDI container/schema.

      Can you please help me there?

      Should I give the HDI schema name as a resource in mta file and create grantor file?

      Do you have any example for that which I can refer?

      Thomas Jung
      Blog Post Author

      The process is essentially the same as the non-HDI schema cross access. The only difference is you don't have to create the User Provided Service.  A service already exists for the other HDI container.  You just reference that HDI container service as you would the CUPS in the mta.yaml and the hdbsynonymgrantor files of your project.
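A minimal sketch of how the foreign container's service might be referenced in the consuming project's mta.yaml; the resource and service names are illustrative:

resources:
- name: OtherContainerService
  type: org.cloudfoundry.existing-service
  parameters:
    service-name: MYUSER-other-project-hdi-container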

      Thomas Jung
      Blog Post Author

      I don't know what details you are looking for?  The service name is specified in the foreign project.  If running from the Web IDE the service name will have the user name and workspace appended to the front of it. You can issue xs services to view all services from the command line. You can also see which ones are bound to other applications.

      Sanampreet Singh

Thank you Thomas. I have found the service for my HDI container. But I am facing issues with the privileges. For a non-HDI schema, we give privileges for that schema to our database user. Now how do I assign the privileges here, as my database user is not authorized to assign privileges of the HDI schema to another user?

      Thomas Jung
      Blog Post Author

      There are no manual privilege grants needed. You should do everything via the hdbsynonymgrantors.

      Sanampreet Singh

I am facing the privilege issues while building my module. Can you please tell me what the issue could be?

      [Error: Error executing: GRANT SELECT ON SCHEMA "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER" TO "90CACGDUJWBHLENF_TINYWORLD_HDI_CONTAINER#OO";

      (nested message: insufficient privilege: Not authorized)]


Below is the content of my grantor file. Here "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER" is the HDI schema name of the other container.

{
  "MYUSER-90cacgdujwbhlenf-testProject-hdi-container": {
    "object_owner": {
      "schema_privileges": [
        {
          "reference": "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER",
          "privileges": ["SELECT"]
        }
      ]
    },
    "application_user": {
      "schema_privileges": [
        {
          "reference": "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER",
          "privileges": ["SELECT"]
        }
      ]
    }
  }
}

      Thomas Jung
      Blog Post Author

As you are an I-user, you should really conduct this questioning on the internal xs2 listserv and not in the public forum. But in general you might very well have to create an HDBROLE in your foreign container.

From the documentation of the hdideploy node.js module:

      HDI container object privileges can only be granted to other containers via container local roles. Please follow these steps to grant object privileges of a 'grantor container' to application users of a 'grantee container':

      • deploy one or more .hdbrole files defining object privileges to the 'grantor container'
      • reference these roles in the 'container_roles' sections of a .hdbsynonymgrantor file for 'grantee container' deployment

      I would suggest that you read through the documentation contained in the hdideploy module. It has some nice extended explanation and diagrams of these cross-container/schema scenarios.

      Author's profile photo Sanampreet Singh
      Sanampreet Singh

      Thank you Thomas. I wasn't aware of the internal XS2 listserv. Will surely check the details and do so. 🙂

      Author's profile photo Bill Liu
      Bill Liu

Could I deploy two projects (with their respective db modules) into the same HDI container? I.e., I have two projects with yaml files like those below; the first deployment was successful, but the second deployment failed:

ERR    Could not validate services: Service "play-hdi-container" already exists, but is associated with MTA(s): play1

xs services

play-hdi-container            hana            hdi-shared    db1

ID: play1
_schema-version: '2.0'
version: 0.0.1

modules:
- name: db1
  type: hdb
  path: db1
  requires:
  - name: play-hdi-container

resources:
- name: play-hdi-container
  type: com.sap.xs.hdi-container

ID: play2
_schema-version: '2.0'
version: 0.0.1

modules:
- name: db2
  type: hdb
  path: db2
  requires:
  - name: play-hdi-container

resources:
- name: play-hdi-container
  type: com.sap.xs.hdi-container
      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

You cannot have two projects that both try to create the same container. One project owns the container and is responsible for its creation and all editing of objects within that container. From a second project you can reference the container as an already existing service and connect to it. You can use it as your container in Node.js/Java modules, and you can consume its artifacts via synonyms in your own project's HDB module.
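As a hedged sketch of that second pattern, using the names from the question above and again assuming the org.cloudfoundry.existing-service resource type, the play2 descriptor would reference the container rather than declare it:

ID: play2
_schema-version: '2.0'
version: 0.0.1

modules:
- name: db2
  type: hdb
  path: db2
  requires:
  - name: play-hdi-container

resources:
# play1 owns and creates the container; play2 only consumes the existing service
- name: play-hdi-container
  type: org.cloudfoundry.existing-service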

      Author's profile photo Harish Bhatt
      Harish Bhatt

Nice and long article!

Thanks!

      Author's profile photo Tapishnu Sengupta
      Tapishnu Sengupta

Hi Thomas,

I have a HANA XSA system with HDI in CF. My goal is to create Calculation Views based on the tables that are deployed in this system. However, I am not able to create Calculation Views. I have connected through HANA Studio with Chisel as the proxy provider.

My question is: how can we create a Calculation View in this scenario? Can we do this through the Web IDE? I am not sure whether we have a dedicated Web IDE in CF.

Any help is appreciated.

Best Regards,

Tapishnu Sengupta

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

HANA Studio cannot be used with HDI. For HDI development in SCP you have two choices. You can use an on-premise HANA system (such as HANA Express), do your development there with the SAP Web IDE for SAP HANA, and then deploy to Cloud Foundry on SCP. The second option is to use the SAP Web IDE Full-Stack on SCP. It has some, but not all, XSA and HDI development capabilities currently, with plans to close those gaps in the near future.

      Author's profile photo Tapishnu Sengupta
      Tapishnu Sengupta

       

Hi Thomas,

As per your advice, will I be able to connect through the HANA Express Web IDE to the HANA instance which I have in the CF environment? As I said, my goal is to create Calculation Views based on the existing tables in the HDI instance.

I have installed the HANA Express Edition on my system and I am able to launch the Web IDE. I see 4 different options for adding different types of DB. Can you please direct me how I can connect to the HANA DB instance which I have in the CF environment?

Best Regards,

Tapishnu Sengupta

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

No, the HANA Express Web IDE doesn't connect directly to a remote HANA instance. You can do your development locally in HXE and then remotely deploy your MTAR to Cloud Platform.

      Author's profile photo Tapishnu Sengupta
      Tapishnu Sengupta

I am using Cloud Foundry on the official cloud platform (Europe – Canary), and I have access to the official HANA service. When I tried to build the db module, I got the following error in the log:

      (DIBuild) Cannot provision services due to Cloud Foundry error: 'CF-ServiceBrokerBadResponse(10001): The service broker returned an invalid response for the request to https://hana-broker.cfapps.sap.hana.ondemand.com/v2/service_instances/e87b09dc-39b2-4cde-b48c-40915a7bffae?accepts_incomplete=true. Status Code: 500 Internal Server Error, Body: {"message":"Failed to lookup database, because of: Multiple databases are available, exact id needs to be specified. ([aca1808e-7f50-41df-896b-3a2f53f6f0da:sharingtest, aca1808e-7f50-41df-896b-3a2f53f6f0da:statisticsdb3, aca1808e-7f50-41df-896b-3a2f53f6f0da:mobilehanadb])"}'

In my CF space there are 3 tenant databases, which are listed in the log above. This looks like the root of the problem. Here is what I did:

1. Created an instance of the HANA service and provided the following parameters:
  {"database_id":"aca1808e-7f50-41df-896b-3a2f53f6f0da:mobilehanadb"}

  I suppose this specifies the database I intend to use.

2. Bound this instance to the builder app.
3. Restarted the builder app.
4. Built the db module from the Web IDE.

Then I got the error message shown above.

What else should I do to fix the problem? Or should I specify the database id somewhere else?

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

      I suggest you post this question as its own entry in the Q&A forums specifically in the SAP Cloud Platform section.

      Author's profile photo Former Member
      Former Member

      Hi Thomas,

      I have a Hana XSA system with HDI in SAP Cloud Foundry:

The structure of the MTA is:

modules:
- name: db
  type: nodejs
  path: db
  requires:
  - name: hdi-container

- name: odata
  type: nodejs
  path: odata
  provides:
  - name: odata-destination
    public: true
    properties:
      name: ${app-name}
      url: ${default-url}
  requires:
  - name: hdi-container
  - name: uaa

- name: ui
  type: html5
  path: ui
  provides:
  - name: ui-destination
    public: true
    properties:
      name: ${app-name}
      url: ${default-url}
  requires:
  - name: odata-destination
    group: destinations
    properties:
      name: odata-destination
      url: ~{url}
      forwardAuthToken: true
  - name: uaa

My question is:

The apps above support SaaS multi-tenancy. I have a use case where the odata application needs to provide an API that can read agent information from the DB for all SaaS tenants.

I don't know how to get data from the DB for every SaaS tenant, given that each tenant has its own HDI container. Could you give me some ideas?

Thanks.

Best Regards,

Peter, Wang

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

You should look at the managed-hana service broker type. It has an Instance Manager API that does what you describe.
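As a hedged sketch of what that might look like from Node.js (the 'managed-hana' service label, tenant id, and credential shape are assumptions; the options come from the binding credentials of the managed service instance):

// sketch only: look up the tenant-specific HDI container at runtime
const instanceManagerLib = require('@sap/instance-manager');

// credentials of the managed service binding; the 'managed-hana' label is an assumption
const credentials = JSON.parse(process.env.VCAP_SERVICES)['managed-hana'][0].credentials;

instanceManagerLib.create(credentials, function (err, instanceManager) {
  if (err) { return console.error(err); }
  // fetch the managed container instance for one SaaS tenant
  instanceManager.get('my-tenant-id', function (err, instance) {
    if (err) { return console.error(err); }
    // instance.credentials holds the connection data for that tenant's container
    console.log(instance.credentials.schema);
  });
});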

       

      Author's profile photo Former Member
      Former Member

      Hi Thomas,

Let me add some details:

I added a job scheduler and use this job to call an exposed API in the OData service. When the job triggers, the API only knows the SaaS tenant ID.

Could you share some more detailed information, such as example code?

Thanks very much.

Best Regards,

Peter, Wang

      Author's profile photo Jee Lee Sim
      Jee Lee Sim

      Hi Thomas,

We can now use HDI development artifacts at design time to create catalog objects; for example, .hdbtrigger will create a trigger object in the catalog.

But how do we delete/remove these catalog objects later on, when they are no longer used, for example a trigger created from a design-time artifact?

      thanks.

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

Just delete the design-time object, and when you re-build or re-deploy the container it will also remove those runtime objects.

      Author's profile photo Jee Lee Sim
      Jee Lee Sim

      Hi Thomas,

I tried with SPS 02: delete the .hdbtrigger file for that particular trigger, build and deploy the mtar, browse the HDI container using the database explorer, and the trigger is still there. Hmmm.

Creating a new trigger does get reflected in the runtime, so I am sure I am looking at the correct container.

Do I need to specify any special parameter when I do the "xs deploy"? Or does this only work with SPS 03? Or do I need any special authorization or roles?

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

To be clear, the automatic undeploy happens via the Web IDE build, but not when you deploy via MTAR. For that you must either set the auto-undeploy flag in the package.json or set up an undeploy whitelist. For details refer to the HDI deployer online help. Here is an excerpt:

      In order to undeploy deleted files, an application needs to include an undeploy whitelist via an undeploy.json file in the root directory of the db module (right beside the src/ and cfg/ folders). The undeploy whitelist undeploy.json file is a JSON document with a top-level array of file names:

      undeploy.json:

      [
          "src/Table.hdbcds",
          "src/Procedure.hdbprocedure"
      ]
      

The file must list all artifacts which should be undeployed. The file path of the artifacts must be relative to the root directory of the db module, must use the HDI file path delimiter '/', and must be based on the HDI server-side folder structure. In case of reusable database modules, the server-side top-level folder lib/ needs to be used instead of the local folder node_modules/.

      For interactive scenarios, it's possible to pass the auto-undeploy option to the HDI Deployer, e.g.

      node deploy --auto-undeploy
      

      In this case, the HDI Deployer will ignore the undeploy whitelist undeploy.json file and will schedule all deleted files in the src/ and cfg/ folders for undeployment.

      Author's profile photo Pranjal Chugh
      Pranjal Chugh

      Hi Thomas,

      I have a question regarding security.

My understanding is: any HDI container which knows the role and service name of another HDI container in the same or a different space can get access using hdbgrants.

Isn't that a problem from a security perspective? How can an HDI container control access to its objects once it has exposed a role?

Regards,

Pranjal

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

This is why the role itself becomes the interface to other containers. Don't put anything in the role that you don't want exposed to other containers. If you want to isolate containers from each other more fully, then you need separate spaces. This is why I often recommend that different business areas use separate spaces; for instance, your HR development and FI development might be done in separate spaces.

      Author's profile photo Jee Lee Sim
      Jee Lee Sim

      Hi Thomas,

       

I have a question regarding the use of the .hdbtabledata file.

We currently use a .hdbtabledata file and a corresponding .csv file to load initial data into a table during deployment. But during subsequent deployments (due to code changes etc.) we noticed that if there is a change in the CSV file, all the existing data in the table is wiped and the data from the CSV is reloaded. This poses a concerning risk when we deploy to a production environment: we need to make sure no production data is accidentally wiped if someone changes the CSV file.

So what is the correct or recommended approach to prevent this?

Option 1: do we set these 2 flags: "no_data_import: true, delete_existing_foreign_data: false"? Are these the correct attributes to prevent data from being wiped?

Option 2: or should we delete the .hdbtabledata file and its corresponding CSV file, to make sure they are not included in the mtar file for subsequent deployments?

thanks.

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

      I suggest reviewing the many options in the online help:

      https://help.sap.com/viewer/4505d0bdaf4948449b7f7379d24d0f0d/2.0.03/en-US/35c4dd829d2046f29fc741505302f74d.html

Especially review the content on the Key-Reservation scenario.
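As a rough illustration of the key-reservation idea (table, file, and column names invented; verify the exact options against the help page above), an include_filter in the .hdbtabledata file restricts which rows the deployment considers owned by the CSV, so rows outside the filter, such as data created in production, are left untouched:

{
  "format_version": 1,
  "imports": [
    {
      "target_table": "MY_CONFIG_TABLE",
      "source_data": {
        "data_type": "CSV",
        "file_name": "config.csv",
        "has_header": true
      },
      "import_settings": {
        "include_filter": [
          { "TYPE": "default" }
        ]
      }
    }
  ]
}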

      Author's profile photo Manasa M
      Manasa M

      Hi Thomas,

I am trying to create an .hdbsynonym, say "dummy.hdbsynonym", and I was trying the code below, where "DEV.Jobs.JobScheduler" is my package name:

{
  "DEV.Jobs.JobScheduler::DUMMY": {
    "target": {
      "object": "DUMMY",
      "schema": "SYS"
    }
  }
}

       

When I activate the object, I get the error below:

       

      Error while activating /DEV/Jobs/JobScheduler/DUMMY.hdbsynonym:
      [DEV.Jobs.JobScheduler:dummy.hdbsynonym] No valid synonym content: JSON tag "schema" is missing, check content "{
      "DEV.Jobs.JobScheduler::DUMMY": {
      "target": {
      "object": "DUMMY",
      "schema": "SYS"
      }
      }
      }"

       

Can you please advise? I am able to query the table with my user. I am creating the hdbsynonym from the Workbench Editor, but it's XS Classic though.

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

You are trying to use the HDI syntax in the old repository (XSC)? You can't do that. The old repository version of hdbsynonym uses a different syntax:

      https://help.sap.com/viewer/b3d0daf2a98e49ada00bf31b7ca7a42e/2.0.03/en-US/e9ce76f32da14e039fd428a14c483ed5.html

       

      There are also some strict limitations in this old hdbsynonym compared to the HDI version:

The target object specified in a design-time synonym must only exist in the catalog; it is not possible to use .hdbsynonym to define a synonym for a catalog object that originates from a design-time artifact.

      Author's profile photo Manasa M
      Manasa M

      Thanks Thomas. Followed the instructions in the link and got the result I expected.

      Author's profile photo Ramya Ramesh
      Ramya Ramesh

      Hi Thomas,

I created the HDI container using the cockpit, and when trying to bind it to an application I am getting the below error.

      Thanks

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

      I would suggest you do as the error message recommends and enter a support ticket.

      Author's profile photo Priya Jha
      Priya Jha

      Hi All,

Is it possible to have DROP statements in HDI files like hdbtable, hdbdropcreate, or hdbmigration?

If yes, what would be the syntax for this?

If no, what is the db artifact used for a DROP statement?

I tried passing a DROP statement in one of my hdbtable files, but I get:

"Error: com.sap.hana.di.table: Syntax error: "incorrect syntax near "DROP"" [8250009]"

Please guide me on where I am going wrong and how to programmatically delete tables.

       

      Regards,

      Priya

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

      If you want to remove objects, delete the design time artifact.

You can then do one of two things. First, you could add the --auto-undeploy switch to the @sap/hdi-deploy command in the package.json of the db module. This will automatically remove any runtime objects whose design-time object has been deleted. The other option, if you want more explicit control, is to use the undeploy whitelist as described in the README.md of the @sap/hdi-deploy module. From that help document:

      In order to undeploy deleted files, an application needs to include an undeploy whitelist via an undeploy.json file in the root directory of the db module (right beside the src/ and cfg/ folders). The undeploy whitelist undeploy.json file is a JSON document with a top-level array of file names:

      undeploy.json:

      [
          "src/Table.hdbcds",
          "src/Procedure.hdbprocedure"
      ]
      

      The file must list all artifacts which should be undeployed. The file path of the artifacts must be relative to the root directory of the db module, must use the HDI file path delimiter '/', and must be based on the HDI server-side folder structure. In case of reusable database modules, the server-side top-level folder lib/ needs to be used instead of the local folder node_modules/.

      For interactive scenarios, it's possible to pass the auto-undeploy option to the HDI Deployer, e.g.

      node deploy --auto-undeploy
      

      In this case, the HDI Deployer will ignore the undeploy whitelist undeploy.json file and will schedule all deleted files in the src/ and cfg/ folders for undeployment.

      Author's profile photo Priya Jha
      Priya Jha

      Hi Thomas,

      Thank you for your reply.

      I have another doubt.

While deploying db artifacts with .hdbcds files, we can specify multiple keys, for example:

entity <Table_NAME> {
    key "ID" : hana.VARCHAR(60) not null;
    key "VERSION_ID" : hana.VARCHAR(60) not null;
    "VALID_FROM" : UTCTimestamp;
}

How can this be achieved with an .hdbtable file? Example:

COLUMN TABLE "Table_name" (
    "ID" VARCHAR(60) not null PRIMARY KEY,
    "VERSION_ID" VARCHAR(60) not null PRIMARY KEY,
    "VALID_FROM" TIMESTAMP
);

Currently, this definition throws an error saying we cannot specify multiple PRIMARY KEYs. Please guide me on how to achieve this.

Regards,
Priya
      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

Here is an example of HDBTABLE with two primary key columns:
      https://github.com/SAP-samples/hana-xsa-opensap-hana7/blob/master/db/src/data/PurchaseOrder/Item.hdbtable#L12
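In short, the composite key is declared once at table level instead of marking each column. A minimal sketch using the table from the question above:

COLUMN TABLE "Table_name" (
    "ID"         VARCHAR(60) NOT NULL,
    "VERSION_ID" VARCHAR(60) NOT NULL,
    "VALID_FROM" TIMESTAMP,
    PRIMARY KEY ("ID", "VERSION_ID")
)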

      Author's profile photo Sathiyaraj Jagadesh
      Sathiyaraj Jagadesh

Hello Thomas - I have a requirement to create HDI roles in my XSC systems (S/4, Suite on HANA). Can you point me to some documentation on creating HDI roles in an HDI container?

Also, can I assign these HDI roles created in my own container to a classic database user for administrative access (like backup, certificate management)?

Please provide your input.

Mainly I am looking to automate database role changes across multiple systems (importing the same container roles into other HANA systems) using HTA or Git.

      Thanks

      Sathya

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

I'm not sure what you mean by HDI roles in XSC; HDI basically requires the usage of XSA. Sure, you can create roles in HDI; this is covered in the HDI documentation, and there is an hdbrole artifact in HDI. And yes, you can assign these to a classic database user via a DB user who has HDI group admin access.
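For the assignment itself, the HDI container API exposes a procedure for granting container roles; a hedged SQL sketch (container, role, and user names are invented, and the exact signature should be verified in the HDI reference for your release):

-- run as a user with administrative access to the container's #DI schema
CREATE LOCAL TEMPORARY COLUMN TABLE #ROLES LIKE _SYS_DI.TT_SCHEMA_ROLES;
INSERT INTO #ROLES (ROLE_NAME, PRINCIPAL_SCHEMA_NAME, PRINCIPAL_NAME)
    VALUES ('external_access', '', 'CLASSIC_DB_USER');
CALL "MYCONTAINER#DI".GRANT_CONTAINER_SCHEMA_ROLES(#ROLES, _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);
DROP TABLE #ROLES;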

      Author's profile photo Priya Jha
      Priya Jha

      Hi Thomas,

In the SAP HANA Cloud HDI config file, I see "minimum_feature_version": "1000" instead of the plugin versions that are specified for HANA.

Could you please elaborate on minimum_feature_version, and where can I read more about this property?

       

      Thanks ,

      Priya

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

Honestly, no idea what that means. I too saw that the Web IDE Full-Stack generates that entry when you choose the HANA Cloud version in the wizard. Beyond that fact, I have no other information about this property.

      Author's profile photo Priya Jha
      Priya Jha

      Hi Thomas,

In our product, we have a scenario where we need to add columns or change a column's data type in one of the database tables during runtime.

After going through the documentation, I learned that we have 2 options to achieve this:

Option 1: deploy hdbtable. This I cannot use, as at runtime I will not be aware of what was deployed at design time, so this option does not hold good.

Option 2: deploy hdbmigration. We finally decided to go with this approach.

I wanted to get your input on this: do you see any performance issues with option 2, or is there a better approach?

      Request you to kindly guide.

       

      Regards,

      Priya

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

If it's truly dynamic deployment you are after, there is a separate Node.js module for that: https://www.npmjs.com/package/@sap/hdi-dynamic-deploy. I don't see how the choice of hdbtable or hdbmigration impacts the situation, however. Are you just worried about the performance of adjustments on large tables? If so, then yes, that is why hdbmigration was introduced.
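For reference, a minimal sketch of the .hdbmigration format (table and column names are invented; check the exact syntax in the HDI reference). The artifact carries the full current definition plus explicit migration steps, so HDI can apply an ALTER on a large table instead of recreating it:

== version=2
COLUMN TABLE MY_LARGE_TABLE (
    ID          NVARCHAR(60) NOT NULL,
    DESCRIPTION NVARCHAR(100),
    PRIMARY KEY (ID)
)

== migration=2
ALTER TABLE MY_LARGE_TABLE ADD (DESCRIPTION NVARCHAR(100));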

      Author's profile photo Priya Jha
      Priya Jha

Hi Thomas, is there a way I can run ALTER TABLE statements on already deployed tables?

I tried SQL queries in the console, logging in with the HDI_USER credentials, but it says insufficient privileges.

Any thoughts?

      Author's profile photo Thomas Jung
      Thomas Jung
      Blog Post Author

You should not alter HDI-managed objects with SQL; this will cause the HDI metadata to get out of sync. Instead, always return to the design-time artifacts, alter them, and then redeploy.