This blog is part of the larger series on all new developer features in SAP HANA SPS 11: SAP HANA SPS 11: New Developer Features

HDI – HANA Deployment Infrastructure

HDI (HANA Deployment Infrastructure) is a service layer of the SAP HANA database that simplifies the deployment of HANA database artifacts by providing a declarative approach for defining database objects and ensuring a consistent deployment into the database based on a transactional all-or-nothing deployment model and implicit dependency management. It is intended to improve the overall management of database artifacts, particularly in scenarios where it's necessary to deploy multiple versions of the same data model into the same database instance.

There are a few high level points about HDI that are important to understand:

  • Containers:  All development objects within the scope of HDI now must be within an HDI Container.  The HDI Container allows multiple deployments, sandboxing and enhanced security options for all database artifacts.
  • HDI focuses on deployment only:  Unlike the classic HANA Repository, there are no version control or lifecycle management aspects.  These topics are now covered by Git/GitHub.
  • Database Objects only: Unlike the classic HANA Repository, HDI only covers pure database development objects. It has nothing to do with JavaScript, XSODATA, or other application-layer artifacts.

Containers

Container is such an overloaded term in the IT industry.  We have OS containers, runtime containers, application containers, etc. Within HANA we even already have the concept of MDC – Multi-Database Containers. HDI introduces yet another kind of container, but it is lower level than all of those examples.  An HDI container is essentially a database schema. It abstracts the actual physical schema and provides the schema-less development and security isolation that customers have been requesting. Some rules of the HDI container world:

  • All database objects are still deployed into a schema
  • This schema is abstracted by the container and is really only designed to be accessed via the container
  • All database object definitions and access logic have to be written in a schema-free way
  • Only local object access is allowed. This way when the code is branched and points to a different version of the container, the actual physical references can be redirected to a different underlying schema
  • Database objects are now owned by a container-specific technical object owner.  There is no longer a single all-powerful technical user (_SYS_REPO). Each technical user only has access to its local container objects.  Any foreign objects must be accessed via Synonym and granted access by the foreign technical user.
  • The same container specific technical user is automatically used by XS Advanced when executing database logic. For more details on XSA technical user connectivity see this blog: SAP HANA SPS 11: New Developer Features; XS Advanced
  • Modeled views are no longer placed in a single central schema (_SYS_BIC/_SYS_BI). They are now placed in the container-specific schema like all other development objects.  This means that some central metadata concepts must also be duplicated in each container schema.


Figure 1: HDI Containers in Detail

Small Example

Probably the best way to explain the new concepts of HDI is to walk through the steps to create a simple example. The following example is based upon the initial delivery of SPS 11 and uses the command line tools and external editors. Early next year, SAP will also ship a web-based tool that will provide an enhanced development experience.

Create the Container

HDI offers both a SQL API and integration into XS Advanced. The more common approach is to use HDI in conjunction with XS Advanced, and that's the scenario we will show here. In order to create the container we use the XS command line client and its create-service command.


Figure 2: Create Container

This one action created the container both in HANA and exposed it to XS Advanced. Behind the scenes, several database schemas and a technical user/owner were also created in the HANA database.  We will see more of what was created later in these steps.

At the root of our XS Advanced application we will need some form of deployment descriptor. The deployment description is contained in the application deployment manifest, which specifies what you want to build (and how) as well as where (and how) to deploy it.  For simple applications you might use the manifest.yml and the corresponding xs push command to send the content to the server.  For more complicated multi-target applications (MTA) with dynamic service port assignment and dependencies you would use the mtad.yaml file.  For more details on deployment descriptors, please see: http://help.sap.com/hana/SAP_HANA_Developer_Guide_for_SAP_HANA_XS_Advanced_Model_en.pdf

Regardless of which approach you use, this deployment descriptor is where we would reference the HDI container.  You only need to supply the container name to both the database services and the Node.js or Java services.  This is how any part of our application knows which HANA system to connect to and which schema/technical user to use to connect. We never supply such technical information in our code or design time objects any longer.

The following is an example database service definition section from a manifest.yml.  In the services section we reference the container name we created in the previous step.


Figure 3: manifest.yml database service example
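As a sketch, the relevant section looks something like the following. The application name is hypothetical; the no-route property and the services reference to the container name follow what is described in the text below:

```yaml
applications:
- name: dev602-db        # hypothetical name for the database deployer service
  path: db               # folder containing the DB deployer module
  no-route: true         # no HTTP port; this service only performs the HDI deployment
  services:
    - dev602-container   # the HDI container created with xs create-service
```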

This is the same definition but as done in the mtad.yaml format:


Figure 4: mtad.yaml database service example
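A rough equivalent in mtad.yaml terms declares the DB module and the container as a resource it requires. The module/resource type names below are an assumption based on later XSA releases (hdb and com.sap.xs.hdi-container); the exact SPS 11 type strings may differ, so check the developer guide for your release:

```yaml
modules:
- name: dev602-db
  type: hdb                         # assumed module type; verify for your release
  path: db
  requires:
  - name: hdi-container
resources:
- name: hdi-container
  type: com.sap.xs.hdi-container    # assumed resource type
```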

Creating the DB service

For creating/deploying database artifacts we need a database service in our project.  In the above deployment descriptor files we designated that the /db folder in our project would hold this database service.  The database service is really just an SAP-supplied Node.js module that runs briefly after deployment to call into HANA and ask HDI to deploy the corresponding database artifacts.  It then shuts down. This is why you see the no-route property for this service in the manifest.yml.  This means that no HTTP port will be assigned to this service since it isn't designed to be interactive.

Inside the db folder we will need a package.json file (since this is a Node.js service) and a src folder to hold the actual database artifact definitions. The package.json should declare the dependency to the sap-hdi-deploy module and also call this module as the startup script.  The rest of the content in this example is optional.


Figure 5: Database Service package.json
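A minimal package.json for the db module might look like the following. The sap-hdi-deploy module name matches the initial SPS 11 delivery as described above; the exact start script path is an assumption and should be checked against the installed module:

```json
{
  "name": "deploy",
  "dependencies": {
    "sap-hdi-deploy": "1.0.0"
  },
  "scripts": {
    "start": "node node_modules/sap-hdi-deploy/deploy.js"
  }
}
```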

Inside the source folder we have two HDI-specific configuration files.  In this new world of HDI, there is no SAP HANA Repository and therefore no packages. In the old Repository we used the folders/packages as the namespace for all database objects.  The corresponding functionality in HDI is to place a .hdinamespace file in the root of the source and specify the starting namespace for all development objects.  You can then also use the subfolder: append option to attach the folders in your project structure as parts of the namespace as well.


Figure 6: .hdinamespace Example
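For example, a .hdinamespace in the root of the src folder that sets a starting namespace (the dev602 value here is from this example) and appends subfolder names could look like this:

```json
{
  "name": "dev602",
  "subfolder": "append"
}
```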

The other configuration file is the .hdiconfig.  Here we list the HDI plug-ins and versions for each file extension.  This allows you to control your file extension usage at the project level. More importantly, it allows you to target a specific version of the deployment plug-in. In HDI, we use the numbers in the hdiconfig file to do a version check; e.g., if an application wants plug-in x in version 12.1.0 and we only have 11.1.0, then we reject this. So it's also clearer that you cannot import an application designed only for SPS 12 into SPS 11. Since the plug-ins are backwards compatible you can use the version 11.1.0 even on your 12.1.0 or later system. This way if your application is designed to be used on multiple versions you can use the lowest version in the hdiconfig file and explicitly control which versions it is then compatible with.


Figure 7: .hdiconfig Example
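A trimmed .hdiconfig sketch mapping a few file suffixes to their HDI plug-ins at version 11.1.0 follows; a real file lists every suffix your project uses, and the plug-in names shown (com.sap.hana.di.*) follow the naming that appears in HDI log output:

```json
{
  "file_suffixes": {
    "hdbprocedure": {
      "plugin_name": "com.sap.hana.di.procedure",
      "plugin_version": "11.1.0"
    },
    "hdbcds": {
      "plugin_name": "com.sap.hana.di.cds",
      "plugin_version": "11.1.0"
    },
    "hdbsynonym": {
      "plugin_name": "com.sap.hana.di.synonym",
      "plugin_version": "11.1.0"
    }
  }
}
```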

Database Development Objects

The actual database artifact development isn’t all that different from what you do today in the current HANA Repository. Each database object type has its own file and the file extension controls what type of object you want to have. Often you can simply cut and paste the existing objects from the current HANA Repository into your new HDI/XSA project.  For many development artifacts, like stored procedures, you only need to remove the Schema references.


Figure 8: .hdbprocedure Example in HDI
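To illustrate the schema-free style, a minimal .hdbprocedure might read as follows. The object names are hypothetical; the point is the absence of any schema reference, since objects are resolved inside the container:

```sql
PROCEDURE "dev602.procedures::get_user"( OUT ev_user NVARCHAR(256) )
   LANGUAGE SQLSCRIPT
   SQL SECURITY INVOKER
   READS SQL DATA AS
BEGIN
  -- no schema is named; "dev602.data::DUMMY" is a container-local synonym
  SELECT CURRENT_USER INTO ev_user FROM "dev602.data::DUMMY";
END
```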

Other database artifacts, such as CDS, have new file extensions and updated syntax.  HDBDD is now HDBCDS for example. For the full list of additions and changes to the CDS syntax in HDI, please see this blog: SAP HANA SPS 11: New Developer Features; HANA Core Data Services


Figure 9: .hdbcds Example in HDI
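A small .hdbcds sketch with a hypothetical entity follows; the namespace declaration must match what the .hdinamespace file derives for the folder containing the file:

```
namespace dev602.data;

context Model {
  entity Users {
    key UserId   : Integer;
        UserName : String(256);
  };
};
```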

Other artifacts have been completely redesigned within HDI.  The hdbti artifact for example has an all new format, options and file extension (hdbtabledata). We have also added a whole new set of DDL-based HDI development artifacts. This means we finally have a way to manage the lifecycle and consistently deploy catalog objects which are also created via pure SQL/DDL.  These artifacts include Tables, Indexes, Constraints, Triggers, Views, Sequences, etc.

Finally, it's important to reiterate the point that in the HDI world only access to local objects is allowed. There is no such thing as global public synonyms. Therefore common logic such as SELECT FROM DUMMY won't work any longer.  None of the system tables or views are immediately available. Even for such objects, local synonyms must be created, and logic within the container can only reference these synonyms.

For example we might create a synonym for DUMMY:


Figure 10: .hdbsynonym Example in HDI
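The .hdbsynonym design-time artifact is a JSON file mapping a container-local name to the target object. A sketch for the DUMMY case, with the local name matching the one queried in the JavaScript example below:

```json
{
  "dev602.data::DUMMY": {
    "target": {
      "schema": "SYS",
      "object": "DUMMY"
    }
  }
}
```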

Later, even for JavaScript or Java code running in XS Advanced, we can only use this synonym when querying the database.


var connection = $.hdb.getConnection();
// query SYS.DUMMY via the container-local synonym; direct access to SYS is not allowed
var query = 'SELECT CURRENT_USER FROM "dev602.data::DUMMY"';
var rs = connection.executeQuery(query);
var currentUser = rs[0].CURRENT_USER;
var greeting = 'Hello Application User: ' + $.session.getUsername() +
               ' Database User: ' + currentUser +
               '! Welcome to HANA ';
$.response.contentType = 'text/plain; charset=utf-8';
$.response.setBody(greeting);


Deployment to the Database

Once we have coded all of our HDI based development artifacts we are ready to deploy the database service to XS Advanced and thereby also deploy the database artifacts into HANA and the underlying schema for the container. For this we will use the xs push command. Add on the name of the specific service defined in the manifest.yml file to only deploy the database service for now.


Figure 11: xs push

The service should deploy, run, and then very quickly reach completion.  However, no actual errors are reported back from the deployment service via the push command. If you had a syntax error in any of the development artifacts, you could only see it by looking at the deployment logs.  The upcoming web-based development tools will streamline this process by displaying the logs immediately in a deployment window. For now, though, we will use the xs logs command to check the status of the HDI deployment.


Figure 12: xs logs With Errors

Notice in the above log output we did have an error in the .hdiconfig. I specified an HDI plug-in version that doesn't exist on my system.  Any sort of syntax or configuration error would show up here in the logs in a similar fashion.

After correcting my error, I can perform the push again. This time everything works correctly and I can see the name of the target schema in the logs. Currently this is the best way to see the correlation between HDI container and the actual physical schema in HANA.


Figure 12: xs logs With Successful Deployment

We could now go to the HANA Studio or the Web-based Development Workbench and look at this physical schema. In most SPS 11 and higher systems using HDI there will be many such schemas with long GUID-based names, so you will likely have to search for this schema name. You should see that several schemas were actually created for your container. If your deployment was successful, you should also be able to see the database artifacts you created.


Figure 13: HDI Container Schema as Viewed from the HANA Studio

Admittedly the experience of working with the generated schema in the HANA Studio and the Web-based Development Workbench isn’t ideal. This is why early next year with the new SAP Web IDE for SAP HANA, SAP plans to also deliver a new XS Advanced/HDI based catalog tool. This tool will allow you to list and view the details of the HDI containers and avoid the cumbersome lookup of the underlying schema names.


Figure 14: HDI/XSA Based Catalog Tool for Viewing HDI Container Details

This new catalog tool will also allow you to view the data in tables/views and execute procedures within your container. All access is done via the technical user of the container. This way developers in development systems can have full access to the development objects they are working on, without the need to set up special developer roles for an application.


Figure 15: HDI/XSA Based Catalog Tool; Data Preview

Planning for HDI

With the introduction of HDI there are several logical changes to supported development artifacts as well.  This particularly impacts the area of modeling.  In HDI there is no support for Analytic, Attribute, or Scripted Calculation Views. Therefore you would have the following transition:

  • Analytic Views -> Graphical Calculation Views
  • Attribute Views -> Graphical Calculation Views
  • Scripted Calculation Views -> SQLScript Table Functions
  • Column Based Filters -> Filter Expressions
  • Derived Parameters by Table -> Only Derived By Procedure (your procedure logic can read a table)
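As an illustration of the scripted-view replacement, the body of a Scripted Calculation View typically becomes an .hdbfunction table function along these lines. The names, columns, and source table here are hypothetical:

```sql
FUNCTION "dev602.functions::ActiveUsers"( )
  RETURNS TABLE ( USER_ID INTEGER, USER_NAME NVARCHAR(256) )
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER AS
BEGIN
  -- the former scripted view logic moves here; consumers SELECT FROM the function
  RETURN SELECT "UserId" AS USER_ID, "UserName" AS USER_NAME
           FROM "dev602.data::Users";
END
```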

In order to prepare for these changes when moving to HDI, the HANA Studio in SPS 11 contains a migration tool. This tool migrates these view types in place: it won't convert them to HDI, but will leave them in the existing HANA Repository, converting them to Graphical Calculation Views and/or SQLScript Table Functions to prepare for a later move to HDI. This way customers can make the transition in phases with less disruption to their business users.


Figure 16: Studio Migration Tool For Modeled Views


84 Comments


  1. Naresh Setty

    Thank you Thomas for the detailed blog.

    If all the tables are visible within the container code, and it is schema-less, how do we handle ETL scenarios? For example, to load some external data into these container tables. I suspect that would be through a synonym, and any example in this regard would be greatly appreciated.

    1. Thomas Jung Post author

      Generally ETL-related tables would have been created by the ETL tool and are therefore just normal schema tables – not HDI container ones.  However, if you wanted to use ETL to fill your HDI tables, your technical user would have to grant permission to the ETL user to access these tables (via a stored procedure).

      CREATE LOCAL TEMPORARY COLUMN TABLE #PARAMETERS LIKE _SYS_DI.TT_PARAMETERS;

      CREATE LOCAL TEMPORARY COLUMN TABLE #PRIVILEGES LIKE _SYS_DI.TT_SCHEMA_ROLES;

      INSERT INTO #PRIVILEGES ( ROLE_NAME, PRINCIPAL_NAME, PRINCIPAL_SCHEMA_NAME ) VALUES ( 'dev602.roles::dev602', '<user>', NULL );

      CALL _SYS_DI.GRANT_CONTAINER_SCHEMA_ROLES('<runtime schema>', #PRIVILEGES, #PARAMETERS, ?, ?, ? );

      The above code grants an HDI-container-specific role to a non-container user (any HANA user).  Your ETL user would then have access to read/write the HDI container tables.

      1. Naresh Setty

        Thank you for more info. Couple more questions

        It's my understanding that all the schemas are replaced with the container approach; does it mean that we can continue to create stand-alone schemas for ETL purposes?

        In this scenario, if I create a container with custom XS code and application tables using CDS, can the models that are created in this HDI container access both container tables and stand-alone tables?

        1. Thomas Jung Post author

          >It's my understanding that all the schemas are replaced with the container approach,

          No, this is not true.  With SQL directly you can still create non-HDI container schemas; you just give up the benefits of HDI if you do.  The Suite, for instance, will continue to use non-HDI schemas since it already manages the lifecycle of its objects externally anyway.

          Even in HDI, the schema doesn't technically go away.  A container generates a schema; it's just the programming model within the container which doesn't allow references to the schema.  This way the generated schema name can change (providing branching and isolation) while all the objects within the container keep working.  It is this approach which allows two different versions of the same container to be active and testable within a single HANA instance.

          >Can the models that are created in this HDI container access both container tables and stand-alone tables?

          Yes, the models within an HDI container can always access any objects within that container.  They can also access tables from other containers via synonyms and the proper technical-user access grants.  They can also access objects from non-HDI schemas as long as the technical user of the HDI schema has expressly been granted access to this foreign schema.

        1. Thomas Jung Post author

          I have pointed out to development that the entire scenario of external access to HDI container content isn't documented currently.  It's possible, but not documented and rather complex.  I posted a complete example here which includes structured privileges and container-specific roles:

          GitHub – I809764/DEV602: SAP TechEd 2015 DEV602 First XS Advanced Project

          I was going to publish a how-to blog on this subject but after discussing with development decided to wait.  We are making some enhancements to the process and the grant procedures to make the overall flow better.  We also figure that most people are probably only doing small-scale POCs with HDI in SPS 11 and will wait until SPS 12 to do anything larger. Therefore we aren't widely promoting how to do this in SPS 11, knowing that it will change for the better in the near future. Still, studying such an example hopefully clears up the concepts of HDI and external access to HDI container objects.

      2. Dirk Raschke

        “However if you wanted to use ETL to fill your HDI tables your technical user would have to grant permission to the ETL user to access these tables (via a stored procedure).”


        At this moment, I'm not sure if I've understood the way to do it. We have a lot of hdbdd tables filled via flowgraphs and want to exchange them with hdbcds tables.


        My understanding is that the technical user (HDI_USER) now has to grant the permissions to the ETL user via procedures, right?

        Does it mean I have to call this procedure (the described template) inside of my container?

        And after the ETL user got the permission, should he be able to access the container and see the tables in the non-container world?


        And for this scenario I don’t need .hdbsynonyms, or do I?

        1. Thomas Jung Post author

          >Does It mean I have to call this procedure (this described template) inside of my container?

          The procedure is in the #DI schema of your container.

          The other option is to create an hdbrole which will create a container-specific role which can be granted to the ETL user.

          >And after the ETL-User got the permission he should be able to access the container and he will see the tables in the non-container world?

          Yes the ETL user will have access to the underlying schema just like any non-HDI schema.

          >And for this scenario I don’t need .hdbsynonyms, or do I?

          No. Nothing you described here would require synonyms.

            1. Thomas Jung Post author

              >Would it work if I entered all my relevant tables inside of this role and granted it to the ETL user?

              Yes that should work.  Have a look at the last 5 minutes or so of this video:

              https://www.youtube.com/watch?v=rb4jOoNwSX4&list=PLoc6uc3ML1JT94z5DLhYXnyczii-gnkel&index=2

              I use a container specific HDBROLE and assign the corresponding container specific role to a database user. This database user is then able to use Lumira to access the container objects without any real knowledge of what an HDI container even is.  Once you have this role granted your user can access the underlying schema directly just like any other.  The only thing not quite so nice is seeing all the long, unreadable schema names.

              1. Dirk Raschke

                This is exactly what I was looking for. 🙂

                (And I remembered that I saw this video for a while ago, but unfortunately had forgotten it.)

                I created a hdbrole in my container and everything worked fine. I could build my role successfully.

                After that I was looking for my role with the Studio and the Web IDE, but couldn't find the created role, even with the system user. It's not there. 🙁

                Could there be any further restrictions that prevent the availability of this role?

                (And yes I can see the access_roles from the different containers)

                1. Dirk Raschke

                  What I'm wondering: in your video the build log shows the deployment of the role, while in my log the role is not shown. Maybe it's not deployed…

                  I've built it several times and I don't get any errors; all seems fine.

                  1. Thomas Jung Post author

                    You should see the role in the log getting created/changed.  The best hint I can give you is to double-check the file extension. It's easy to make a typo, and an unknown file extension won't produce an error; the file will simply be ignored.

                    1. Dirk Raschke

                      Only one last question… I don't have to register it in one of the yaml or other json files, or do I?

                      (extension seems to be fine – I copied yours and changed it.)

                        1. Dirk Raschke

                          Sorry, I made a mistake. I put it in the db folder, but not in the src folder.

                          Now I can see it behind the roles…. 🙂

                          But helped me to learn more about the log files … Thanks a lot!

                          You saved me a lot of time!!!

                          1. Dirk Raschke

                            If I try to give my ETL-User the role, I get  this error:

                            "Could not modify user 'ETL-User'. Could not grant role TestTBASE.db.roles::cdsTables. Role TestTBASE.db.roles::cdsTables does not exist"

                            If I deploy the hdbrole, does the system check the entries inside of the role at that time?

                            Or are the roles only checked when I try to give someone the permission?

                              1. Dirk Raschke

                                I also tried more roles, but the result is always the same -> "Role … does not exist"

                                I can assign the role to the user, but if I save it, I get the error that the role doesn’t exist.

                                If you have any idea what the reason could be, please let me know…

                                Thanks!


                                1. Thomas Jung Post author

                                  Are you trying this in the Studio or the Web-based Development Workbench?  The assignment of schema-local roles only works in the Studio, and only from a recent patch level.

                                    1. Thomas Jung Post author

                                      The Studio version I used in my video is not an internal SAP version; it was installed from tools.hana.ondemand.com. So I wasn't using anything newer than what you have. I'm afraid I'll have to suggest that you enter a support ticket to have them troubleshoot further.

                                      1. Thomas Jung Post author

                                        One other thing you can try – instead of using the tooling, call the stored procedure directly.  Call GRANT_CONTAINER_SCHEMA_ROLES from your #DI schema (see my earlier screenshot). This is what the tooling should be calling behind the scenes for you anyway. If there is a problem in your tooling, perhaps you can bypass it temporarily by calling the procedure manually.
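                                        Sketched as SQL, that could look roughly like the following. This is an assumption modeled on the _SYS_DI example earlier in this thread (the role name is the one from this thread, the user a placeholder); the container-local #DI procedure may have a slightly different parameter list in your revision, so check its definition first:

                                        ```sql
                                        -- assumption: calling the grant procedure from the container's #DI schema
                                        CREATE LOCAL TEMPORARY COLUMN TABLE #PARAMETERS LIKE _SYS_DI.TT_PARAMETERS;
                                        CREATE LOCAL TEMPORARY COLUMN TABLE #ROLES LIKE _SYS_DI.TT_SCHEMA_ROLES;
                                        INSERT INTO #ROLES ( ROLE_NAME, PRINCIPAL_NAME, PRINCIPAL_SCHEMA_NAME )
                                          VALUES ( 'TestTBASE.db.roles::cdsTables', '<ETL user>', NULL );
                                        CALL "<container>#DI".GRANT_CONTAINER_SCHEMA_ROLES( #ROLES, #PARAMETERS, ?, ?, ? );
                                        ```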

  2. Anoop Vinod Kumar

    Hi Thomas

    First of all thank you so much for the awesome documentation.

    I'm stuck right now as I'm not getting a response from the XS controller.

    I'm typing:

    https://<host name>:3<instance number>15/v2/info

    I am getting a connection reset error.

    I am unable to proceed further with setting up an XS API endpoint.

  3. Sachin C

    Hello Thomas,

    I am creating a .hdbview file and trying to activate it using HDI. I am always getting an error “com.sap.hana.di.view: [8250009] Syntax error: “incorrect syntax near “=””.

    Or slightly different errors as to where the syntax error is. I was wondering if there is any different syntax to be followed in SPS 12 systems for hdbviews. I know CDS is the way to create views now, but my understanding was that even hdbviews are supported. Am I right?

    Thanks,

    Sachin

      1. Sachin C

        Hi Thomas,

        I have one more question. It's mentioned that the .hdinamespace is where we set the namespace name and the subfolder-appending rules. But I see in DB objects like .hdbprocedure and .hdbview that we have to give the namespace-prefixed value as the name. What is the use of the .hdinamespace file then? Couldn't it be that all artifacts in or under the folder of the .hdinamespace file make use of the namespace in this file?

        Thanks,

        Sachin

        1. Thomas Jung Post author

          This is really no different than in the old repository.  The .hdinamespace file replaces the folder structure of the repository.  Although you placed your .hdbprocedure in a certain folder structure, you still had to place the matching namespace in the signature definition within the file itself. Now what helps is that the tooling would generate this signature for you. Once we ship the SAP Web IDE for SAP HANA, you will see that the new Editors for HDI/XSA development do the same and read this information from the .hdinamespace file.

  4. Anoop Vinod Kumar

    HI Thomas

    While I was deploying myapp1 and myapp2, which were mentioned in the sample programs in the XSA video tutorials, I faced an Internal Server Error.

    When I checked the logs for myapp1-web and myapp2-web I saw an error:

    Error: Hostname/IP doesn’t match certificate’s altnames: “Host: dcidshsapp01.dci.local. is not in the cert’s altnames: DNS:dcidshsapp01, DNS:*.dcidshsapp01”

    Is this issue because of the certificate? How do I resolve it?

    1. Thomas Jung Post author

      Yes, the hostname you are using to connect doesn't match the HTTPS certificate hostname. Perhaps you are trying to connect with localhost or some other alias. You must use the same hostname to connect with the XS client as you use to start the controller. It is very strict about this in order to validate the HTTPS certificates.

      1. Anoop Vinod Kumar

        Yes, in the .yml file the destination URL I had given was 'dcidshsapp01.dci.local', which was what I used to check if the XS controller was working. But when I changed the destination URL in the .yml file to dcidshsapp01 (which I thought was the hostname in the certificate), everything started working fine.  '.dci.local' was not required for the routing purposes, I guess.

  5. Fabian Krüger

    Hi Thomas,

    Synonyms work great for SYS.DUMMY, but when I try any other synonyms, like SYS.USERS or SYS.USER_PARAMETERS, the deployment fails because of insufficient privilege.

    Did you try other synonyms yet? The audit log says the CREATE SYNONYM action is unsuccessful for user _SYS_DI_TO. This user has object privileges on SYS.DUMMY (select) but nothing else. If I add SYS.USER_PARAMETERS (select) the action still fails. Do you have any suggestions where to grant additional authorizations to make it work?

      1. Fabian Krüger

        Thanks for your fast reply.

        As far as I understand – the container technical user is the one actually executing the statements on the database. This user is called SBSS_<long_number> and has Roles like <RuntimeHDIContainerGUID>::access_role as well as hdi::cds::access_role and some others.

        I added the authorization for SYS.USER_PARAMETERS (select).

        The deployment still fails:

        01.04.16 17:05:44.337 [APP/0] ERR       Deploying “src/general.hdbsynonym”

        01.04.16 17:05:44.337 [APP/0] ERR       ERROR: com.sap.hana.di.synonym [8250505] Not authorized to access the synonym target “SYS.USER_PARAMETERS”

        01.04.16 17:05:44.337 [APP/0] ERR         at “src/general.hdbsynonym” [0:0]

        01.04.16 17:05:44.337 [APP/0] ERR      Processing work list… Failed

        01.04.16 17:05:44.337 [APP/0] ERR      ERROR: [8211557] Make failed (1 errors, 0 warnings): tried to deploy 1 (effective 3) files, delete 0 (effective 0) files, redeploy 0 dependent files

        01.04.16 17:05:44.337 [APP/0] ERR     Making… Failed

        The only difference is: I can now access the table without using the synonym (since the executing user has the authorization)…although the result set is currently empty (but shouldn’t be empty).

        1. Thomas Jung Post author

          If you look in the HRTT tool you will see that there are two technical users – HDI_USER and USER – for each container.


          I suspect you've assigned it to the USER but not the HDI_USER.

          We do plan to make this whole process of access grants easier in the future by having a series of procedures you can call. We realize that working directly with the technical users for such grants is rather cumbersome.

          1. Fabian Krüger

            Thanks again.

            I couldn’t find the HRTT tool … which revision is needed for that? I’m currently on 111. I added the authorization to all SBSS users having the schema access role (about 6-8 users, not sure any more, but there were two different types and all the others were kind of duplicates), but still without success. I will probably wait until the process gets easier in the future…

              1. Fabian Krüger

                Thanks Thomas,

                having HRTT installed now, I can only see HDI containers in the SAP space. How can I see the ones in my own space?

                The user I’m using has SpaceDeveloper role for both SAP space and my own space…

                The “known limitations” state that all created objects will be in the SAP space. I guess this means that SAP is the only space which is visible then… 🙁

                1. Thomas Jung Post author

                  Yes, in SPS 11 the HRTT and Web IDE for SAP HANA can only work with the SAP space. In SPS 12 it is planned that this limitation will be removed and any space can be utilized.

            1. Naresh Setty

              Hi Fabian,

              How did you resolve this issue, as I am getting the same error:

              ERROR: com.sap.hana.di.synonym [8250505] Not authorized to access the “SYS.USERS” synonym target

              at “src/data/general.hdbsynonym$abhranjan01.db.data::Users.hdbsynonym” [0:0]

              SYS.USERS is a synonym; how can we grant the HDI user access rights for it?



          2. Naresh Setty

            Hi Thomas,

            Can you please advise whether there is improved documentation on synonyms? I am trying to access SYS.USERS by creating a synonym and am facing a not-authorized error. I tried assigning the PUBLIC role to the technical user, which is not possible. I also tried granting the SYS.USERS object to the technical user but still see the same error.

            We are stuck at this point; can you please provide some insights?

            Thanks,

            Naresh G

            1. Thomas Jung Post author

              Admittedly the synonym process is complicated, but we are working to make it better in SPS 12 and beyond.  For your situation check that you have the right technical user.  There are two created.  The HRTT will show you the user and the HDI user for the container.

              With SPS 12 we introduce a configuration artifact called the hdbsynonymgrantor.  This allows you to describe the security which should be granted to the technical user during HDI build. Therefore you don’t have to manually do this step.
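
              As a rough sketch (the service name and target schema below are placeholders, and the exact syntax should be checked against the SPS 12 documentation), such a grantor file pairs a bound service with the privileges to grant:

              ```json
              {
                "MY_GRANTOR_SERVICE": {
                  "object_owner": {
                    "schema_privileges": [
                      { "reference": "TARGET_SCHEMA", "privileges": ["SELECT"] }
                    ]
                  }
                }
              }
              ```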

              As far as documentation, I will pass along that feedback to the documentation colleagues that we definitely need more content in the area of cross-container/schema access. This will become increasingly important as more companies migrate existing content to HDI.

              1. Naresh Setty

                Thank you, Thomas. I am trying this on SPS 12 and used the right technical users. I did notice that grantor file but could not get it to work.

                I will try again, thanks for your inputs.

              2. Sanampreet Singh

                Hi Thomas,

                We are using SP12 system (1.00.120.00.1462275491 ) for our development.

                We want to use objects from a “non-container schema” in my container, specifically to build a calculation view.

                I granted the access of this “non-container schema” to both the technical user i.e. HDI_USER and USER.

                Now I am able to select data from the tables of the “non-container schema” in the HRTT SQL console.

                But when I try to create the synonym, it gives me the error “Not authorized to access the synonym target”.

                I also tried to follow the instructions from the development guide for XSA. There, in the prerequisites for creating a synonym, they have mentioned that I have to create one service using this syntax: xs create-user-provided-service -p "{\"host\": \"\",\"port\":\"\",\"user\":\"\", \"password\":\"\",\"tags\":[\"hana\"] }".

                I am not sure where I should create this service. I tried the XSA client tools, but the command “xs create-user-provided-service” is not available there.

                Please help me in resolving this issue.

                Regards,

                Sanampreet Singh

                1. Thomas Jung Post author

                  >I tried the XSA client tools, but the command “xs create-user-provided-service” is not available there.

                  That is the correct command and you should use the XSA Client tools. If you just issue xs, do you not see this command listed? Perhaps you need to update your XSA client.


                  I also have an example project that I’m creating for TechEd here: GitHub – I809764/dev703: SAP TechEd 2016: Code Review; Migrating Applications to HDI/XSA

                  You do have to issue the CUPS command from the XSA client here too (I put the CUPS command in text files in the root of the project).  You then need the hdbsynonymgrantor files in the db/cfg folder. This is what causes your HDI owner and application technical user to receive grants to the foreign schema so that the container synonyms will work.  The grants are done by whatever user you specify in the CUPS.

                  1. Sanampreet Singh

                    Thank you very much Thomas for the prompt response. I really appreciate that.

                    I will try this process again after looking into your example project.

                    I have a few more doubts.

                    > After creating this service and .hdbsynonymgrantor file, do we need to provide access of “non-container schema” to technical users of our container?

                    > Is db/cfg folder specific to your project? Or do we need to create .hdbsynonymgrantor files in the same folder structure?

                    Regards,

                    Sanampreet Singh

                    1. Thomas Jung Post author

                      >do we need to provide access of “non-container schema” to technical users of our container?

                      No, that’s what the hdbsynonymgrantor does. Upon deploy/build it will automatically grant the access to whichever technical user type (or both) you configure in this file.

                      >Is db/cfg folder specific to your project?

                      db is what I named my hdb module; it can be anything you want. The folder cfg must be named cfg; just like the src folder, its name is special. This tells the HDI deployer that this folder contains such configuration files and treats them appropriately.

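
                      For orientation, a minimal project layout under these conventions might look like this (file names are illustrative):

                      ```
                      my-project/
                        mta.yaml
                        db/                              <- hdb module (name is your choice)
                          src/                           <- design-time database artifacts
                            general.hdbsynonym
                          cfg/                           <- deployment configuration
                            sdi_target.hdbsynonymgrantor
                      ```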
                      1. Sanampreet Singh

                        I followed the process, but I am still struggling somewhere. The build is failing.

                        I performed the following steps:

                        > Issued the CUPS command and created a service named “CROSS_SCHEMA_SDI_TARGET”.

                        > Then I changed the mta.yaml file and appended the following lines to the resources section:

                          - name: CrossSchemaService
                            type: org.cloudfoundry.existing-service
                            parameters:
                              service-name: CROSS_SCHEMA_SDI_TARGET

                        > Then I created a “cfg” folder in my HDB module and created a .hdbsynonymgrantor file, named “sdi_target.hdbsynonymgrantor”. The contents of the file are:


                        {
                          "CROSS_SCHEMA_SDI_TARGET": {
                            "object_owner": {
                              "schema_privileges": [
                                {
                                  "reference": "SDI_TARGET",
                                  "privileges_with_grant_option": ["SELECT", "SELECT METADATA"]
                                }
                              ]
                            },
                            "application_user": {
                              "schema_privileges": [
                                {
                                  "reference": "SDI_TARGET",
                                  "privileges_with_grant_option": ["SELECT", "SELECT METADATA"]
                                }
                              ]
                            }
                          }
                        }

                        > Now, when I do a ‘Build’ operation on my HDB module, it fails and gives this error:

                        [screenshot: build error message]

                        But I have already created this service in step 1, and it is also present when I check using the xs services command.

                        Is there anything that I am doing wrong? Please help.

                        Thank you.

                        Regards,

                        Sanampreet Singh

                        1. Thomas Jung Post author

                          Did you add the service as a requires entry under your hdb module in the mta.yaml as well?  This is necessary to bind the CUPS to the hdb service:

                          modules:
                            - name: db
                              type: hdb
                              path: db
                              properties:
                                SERVICE_REPLACEMENTS:
                                  - key: foreign-schema
                                    service: CrossSchemaService
                              requires:
                                - name: hdi-container
                                  properties:
                                    TARGET_CONTAINER: ~{hdi-service-name}
                                - name: CrossSchemaService
                                - name: CrossSchemaSys
                          1. Sanampreet Singh

                            I didn’t do it before. Thank you for pointing out the mistake.

                            I have added that service under my hdb module now. But when I do a build, it gives me this error:

                            [screenshot: build error] “90CACGDUJWBHLENF_TINYWORLD_HDI_CONTAINER” is the auto-generated schema bound to my container.

                            1. Thomas Jung Post author

                              That sounds like the user you placed in your CUPS service doesn’t have the WITH GRANT authorizations to the target schema.  That user is the one who will perform the grant to your container technical users and therefore needs the WITH GRANT authorization themselves.

                              1. Sanampreet Singh

                                This error doesn’t come when I remove the ‘SELECT METADATA’ privilege from the .hdbsynonymgrantor file. Everything works fine with only the ‘SELECT’ privilege. My database user I311166 also has all the privileges with grant option.

                                Also, is there any documentation on the mta.yaml file where I can read about all the components/clauses that can be used in it? For example, we used the “SERVICE_REPLACEMENTS” clause in this scenario. Likewise, I suppose there are many others that will be useful in other scenarios.

                                Thank you.

  6. Naresh Setty

    Hi Thomas,

    How can I migrate an existing calculation view from XS classic to XSA? I tried importing the .calculationview file into the HANA Web IDE, but this does not seem to work, as new calculation views in XSA have the extension .hdbcalculationview and the XML format is totally different.

    Thanks,

    1. Thomas Jung Post author

      We plan to deliver a migration tool with the HANA release scheduled for the end of this year. Until then you really have to recreate the calculation view by hand.

      1. Makesh Balasubramanian

        Hi Thomas. For now, could you please provide the steps for creating the synonyms for SYS.USERS and other tables/views in the SYS schema? We have been waiting a while for the latest documentation to come out, but this would really help us proceed with our migration.

        Moreover, could you please let me know if I can use the synonym inside my calculation view. I have a working SYS.DUMMY synonym deployed, but I am not able to use it inside my calc view; it doesn’t show in the node search.

        1. Thomas Jung Post author

          >For now, could you please provide the steps for creating the synonyms for SYS.USERS and other tables/views in the SYS schema

          You just need to create the HDBSYNONYM development object.  Your Container Technical user will need access rights to these tables.  So you might have to manually grant those.  We will make that easier in the future by adding a configuration file that will auto grant those rights on build/deploy. That feature is working internally, but not shipped with SPS 12 yet.
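
          As a sketch (the synonym name here is a placeholder; check the exact syntax against your release’s documentation), an .hdbsynonym file maps a container-local name to the foreign object:

          ```json
          {
            "myapp::users": {
              "target": {
                "schema": "SYS",
                "object": "USERS"
              }
            }
          }
          ```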

          As far as the SYS.DUMMY synonym not showing up in the Calc View value help: is your system SPS 12 based? This feature to search for foreign-schema-based synonyms didn’t exist yet in SPS 11, but does in SPS 12.

          1. Abhishek Ranjan

            Hi Thomas. Even I had the same query as Makesh Balasubramanian. The DUMMY synonym does not show up when I do an artifact search from a calculation view node. I have SPS 12 installed.

            I am able to access the synonym that I created for SYS.DUMMY from inside my procedures, so the synonym is very much there and it is active. However, when I search from a calculation view’s node, I am only able to see tables/views/table functions. I can’t see the synonyms. Is there something I am missing?

            One more thing, Thomas: I could not find any documentation for SAML configuration in SPS 12. Right now we are using “CA SiteMinder” for single sign-on. How do I go about integrating this in XSA?

            Any documents/references would help.

            Thanks,

            Abhishek

              1. Abhishek Ranjan

                Hi Thomas,

                I am trying to get SAML to work for my XSA application .

                Prior to XSA, we had to open the HANA user in Studio and check the “SAML” checkbox, then click “Configure” and choose my IDP. I could also set the SAML assertion validation to be done via “EMAIL ADDRESS” in the “User Parameters” list.

                However, in the XSA admin page I don’t see an option for enabling SAML for a user. So what is happening is that any user who exists in my IDP’s Active Directory is able to access the XSA application. Is that the expected behaviour? Is there no way to validate the SAML assertion based on the email ID, as was the case earlier?

  7. Abhishek Ranjan

    Hi Thomas,

    One question regarding access via Lumira. Can you please advise on accessing a calculation view from Lumira? When we develop an XSA DB module with a calculation view, the view gets created in the HDI container as a column view, but Lumira accesses these views from content packages only. I am unable to see my calculation view within Lumira when I make a HANA live connection, as opposed to a SQL connection.

    1. Thomas Jung Post author

      Yes, for Lumira you must use the SQL connection. Lumira doesn’t yet understand containers, and it assumes the Calc Views are in _SYS_BIC (no longer true).

      1. Abhishek Ranjan

        Thanks for the reply, Thomas. You say “yet”, so I believe the “HANA live connection” approach is still in progress and will be available in the future. If this is correct, do we have any rough timelines for the availability of this feature?

        1. Thomas Jung Post author

          I say yet simply because to me this seems like an obvious feature improvement. But there is no confirmation I can make that this feature will be added. It’s largely up to the reporting tools development teams to decide if this is an investment they want to make. I have no say in that.

  8. Sanampreet Singh

    Hi Thomas,

    I have created two projects in the XSA Web IDE with the same user. Both have different containers. Now I want to use objects of one container in the other. How should I go about it?

    We create a CUPS for non-HDI schemas for the same purpose; I am not sure how to create a CUPS for an HDI container.

    If both projects belong to different users, will the process be the same?

    1. Thomas Jung Post author

      All containers are isolated, even those created in two projects within the same workspace and user. You don’t use a CUPS for an existing HDI schema, but instead reference it as a resource with the type org.cloudfoundry.existing-service.

      1. Sanampreet Singh

        I am not able to understand this flow.

        When we have to use a non-HDI schema, we follow these steps: create a CUPS for the schema, use that CUPS as a resource and change the mta.yaml, then create the synonymgrantor file and finally the synonym.

        I am not able to grasp what the corresponding steps are for another HDI container/schema.

        Can you please help me there?

        Should I give the HDI schema name as a resource in the mta file and create a grantor file?

        Do you have any example of that which I can refer to?

        1. Thomas Jung Post author

          The process is essentially the same as the non-HDI-schema cross access. The only difference is that you don’t have to create the user-provided service; a service already exists for the other HDI container. You just reference that HDI container’s service, as you would the CUPS, in the mta.yaml and the hdbsynonymgrantor files of your project.
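
          As a sketch, assuming the other container’s service is named my-other-hdi-container (a placeholder), the mta.yaml would reference it as an existing service rather than a CUPS:

          ```yaml
          resources:
            - name: OtherContainerService
              type: org.cloudfoundry.existing-service
              parameters:
                service-name: my-other-hdi-container
          ```

          The hdbsynonymgrantor file would then name OtherContainerService as its grantor service, just as it would a CUPS.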

  9. Thomas Jung Post author

    I don’t know what details you are looking for. The service name is specified in the foreign project. If running from the Web IDE, the service name will have the user name and workspace appended to the front. You can issue xs services to view all services from the command line. You can also see which ones are bound to other applications.

    1. Sanampreet Singh

      Thank you Thomas. I have found the service for my HDI container, but I am facing issues with the privileges. For a non-HDI schema, we give privileges on that schema to our database user. Now how do I assign the privileges here, as my database user is not authorized to assign privileges on the HDI schema to another user?

        1. Sanampreet Singh

          I am facing privilege issues while building my module. Can you please tell me what the issue could be?

          [Error: Error executing: GRANT SELECT ON SCHEMA "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER" TO "90CACGDUJWBHLENF_TINYWORLD_HDI_CONTAINER#OO";

          (nested message: insufficient privilege: Not authorized)]


          Below is the content of my grantor file. Here “90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER” is the HDI schema name of the other container.

          {
            "MYUSER-90cacgdujwbhlenf-testProject-hdi-container": {
              "object_owner": {
                "schema_privileges": [
                  {
                    "reference": "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER",
                    "privileges": ["SELECT"]
                  }
                ]
              },
              "application_user": {
                "schema_privileges": [
                  {
                    "reference": "90CACGDUJWBHLENF_TESTPROJECT_HDI_CONTAINER",
                    "privileges": ["SELECT"]
                  }
                ]
              }
            }
          }

          1. Thomas Jung Post author

            As you are an I-user, you should really conduct this questioning on the internal xs2 listserv and not in the public forum. But in general you might very well have to create an HDBROLE in your foreign container.

            From the documentation of the hdideploy node.js module:

            HDI container object privileges can only be granted to other containers via container local roles. Please follow these steps to grant object privileges of a ‘grantor container’ to application users of a ‘grantee container’:

            • deploy one or more .hdbrole files defining object privileges to the ‘grantor container’
            • reference these roles in the ‘container_roles’ sections of a .hdbsynonymgrantor file for ‘grantee container’ deployment

            I would suggest that you read through the documentation contained in the hdideploy module. It has some nice extended explanation and diagrams of these cross-container/schema scenarios.
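
            To illustrate the two steps (all names here are placeholders; verify the artifact syntax against the hdi-deploy documentation): the grantor container deploys a role such as the following .hdbrole, and the grantee container’s .hdbsynonymgrantor then lists that role under a container_roles section instead of schema_privileges:

            ```json
            {
              "role": {
                "name": "myapp::external_access",
                "object_privileges": [
                  { "name": "myapp::MyTable", "type": "TABLE", "privileges": ["SELECT"] }
                ]
              }
            }
            ```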

