SAP HANA SPS 11: New Developer Features; HDI
This blog is part of the larger series on all new developer features in SAP HANA SPS 11: SAP HANA SPS 11: New Developer Features
HDI – HANA Deployment Infrastructure
HDI (HANA Deployment Infrastructure) is a service layer of the SAP HANA database that simplifies the deployment of HANA database artifacts. It provides a declarative approach for defining database objects and ensures consistent deployment into the database based on a transactional all-or-nothing deployment model with implicit dependency management. It is intended to improve the overall management of database artifacts, particularly in scenarios where it's necessary to deploy multiple versions of the same data model into the same database instance.
There are a few high level points about HDI that are important to understand:
- Containers: All development objects within the scope of HDI now must be within an HDI Container. The HDI Container allows multiple deployments, sandboxing and enhanced security options for all database artifacts.
- HDI focuses on deployment only: Unlike the classic HANA Repository, there are no version control or lifecycle management aspects. These topics are now handled by Git/GitHub.
Container is such an overloaded term in the IT industry. We have OS containers, runtime containers, application containers, etc. Within HANA itself we already have the concept of MDC – Multi-Database Containers. HDI introduces yet another thing called a container, but this one is lower level than all those other examples. An HDI container is essentially a database schema. It abstracts the actual physical schema and provides the schema-less development and security isolation that customers have been requesting. Some rules of the HDI Container world:
- All database objects are still deployed into a schema
- This schema is abstracted by the container and is really only designed to be accessed via the container
- All database object definitions and access logic have to be written in a schema-free way
- Only local object access is allowed. This way when the code is branched and points to a different version of the container, the actual physical references can be redirected to a different underlying schema
- Database objects are now owned by a container-specific technical object owner. There is no longer a single all-powerful technical user (_SYS_REPO). Each technical user only has access to its local container objects. Any foreign objects must be accessed via Synonym and granted access by the foreign technical user.
- The same container specific technical user is automatically used by XS Advanced when executing database logic. For more details on XSA technical user connectivity see this blog: SAP HANA SPS 11: New Developer Features; XS Advanced
- Modeled views are no longer placed in a single central schema (_SYS_BIC/_SYS_BI). They are now placed in the container-specific schema like all other development objects. This means that some central metadata concepts must also be duplicated in each container schema.
Figure 1: HDI Containers in Detail
Probably the best way to explain the new concepts of HDI is to walk through the steps to create a simple example. The following example is based upon the initial delivery of SPS 11 and uses the command line tools and external editors. Early next year, SAP will also ship a web-based tool that will provide an enhanced development experience.
Create the Container
HDI offers both a SQL API and integration into XS Advanced. The more common approach is to use HDI in conjunction with XS Advanced, and that's the scenario we will show here. In order to create the container we use the XS command line and the create-service command.
Figure 2: Create Container
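As a rough sketch, the command looks like the following. The instance name dev602-container is hypothetical, and the exact service and plan names may vary by release:

```
xs create-service hana hdi-shared dev602-container
```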
This one action created the container both in HANA and exposed it to XS Advanced. Behind the scenes, several database schemas and a technical user/owner were also created in the HANA database. We will see more of what was created later in these steps.
At the root of our XS Advanced application we will need some form of deployment descriptor. The deployment description is contained in the application deployment manifest, which specifies what you want to build (and how) as well as where (and how) to deploy it. For simple applications you might use the manifest.yml and the corresponding xs push command to send the content to the server. For more complicated multi-target applications (MTA) with dynamic service port assignment and dependencies you would use the mtad.yaml file. For more details on deployment descriptors, please see: http://help.sap.com/hana/SAP_HANA_Developer_Guide_for_SAP_HANA_XS_Advanced_Model_en.pdf
Regardless of which approach you use, this deployment descriptor is where we would reference the HDI container. You only need to supply the container name to both the database services and the Node.js or Java services. This is how any part of our application knows which HANA system to connect to and which schema/technical user to use to connect. We never supply such technical information in our code or design time objects any longer.
The following is an example database service definition section from a manifest.yml. In the services section we reference the container name we created in the previous step.
Figure 3: manifest.yml database service example
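A minimal sketch of such a manifest.yml section might look like this (the application and service instance names are hypothetical; the services entry is what binds the container):

```yaml
applications:
- name: dev602-db
  path: ./db
  no-route: true
  services:
    - dev602-container
```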
This is the same definition but as done in the mtad.yaml format:
Figure 4: mtad.yaml database service example
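A comparable mtad.yaml sketch is shown below; the exact module and resource type identifiers are assumptions and may differ between XS Advanced releases:

```yaml
modules:
  - name: db
    type: com.sap.xs.hdi
    path: db
    requires:
      - name: hdi-container
resources:
  - name: hdi-container
    type: com.sap.xs.hdi-container
```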
Creating the DB service
For creating/deploying database artifacts we need a database service in our project. In the above deployment descriptor files we designated that the /db folder in our project would hold this database service. The database service is really just an SAP supplied node.js module that runs briefly after deployment to call into HANA and ask HDI to deploy the corresponding database artifacts. It then shuts down. This is why you see the no-route property for this service in the manifest.yml. This means that no HTTP port will be assigned to this service since it isn’t designed to be interactive.
Inside the db folder we will need a package.json file (since this is a node.js service) and a src folder to hold the actual database artifact definitions. The package.json should declare a dependency on the sap-hdi-deploy module and also invoke this module as the startup script. The rest of the content in this example is optional.
Figure 5: Database Service package.json
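As a sketch, such a package.json might look like the following (the module version shown is only an example):

```json
{
  "name": "deploy",
  "dependencies": {
    "sap-hdi-deploy": "1.1.0"
  },
  "scripts": {
    "start": "node node_modules/sap-hdi-deploy/deploy.js"
  }
}
```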
Inside the source folder we have two HDI-specific configuration files. In this new world of HDI, there is no SAP HANA Repository and therefore no packages. In the old Repository we used the folders/packages as the namespace for all database objects. The corresponding functionality in HDI is to place a .hdinamespace file in the root of the source folder and specify the starting namespace for all development objects. You can then also use the subfolder: append option to attach the folders in your project structure as parts of the namespace as well.
Figure 6: .hdinamespace Example
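For instance, a minimal .hdinamespace sketch that starts the namespace at dev602 and appends subfolder names (so a data subfolder yields the dev602.data namespace used later in this example) might look like this:

```json
{
  "name": "dev602",
  "subfolder": "append"
}
```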
The other configuration file is the .hdiconfig. Here we list the HDI plug-ins and versions for each file extension. This allows you to control your file extension usage at the project level. More importantly, it allows you to target a specific version of each deployment plug-in. HDI uses the numbers in the .hdiconfig file to do a version check; e.g., if an application wants plug-in x in version 12.1.0 and only 11.1.0 is available, the deployment is rejected. This also makes it clear that you cannot import an application designed only for SPS 12 into SPS 11. Since the plug-ins are backwards compatible, you can use version 11.1.0 even on your 12.1.0 or later system. This way, if your application is designed to be used on multiple versions, you can use the lowest version in the .hdiconfig file and explicitly control which versions it is compatible with.
Figure 7: .hdiconfig Example
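A trimmed-down .hdiconfig sketch is shown below; the plug-in names follow the com.sap.hana.di.* convention, but the exact names and version numbers depend on your HANA release:

```json
{
  "file_suffixes": {
    "hdbprocedure": {
      "plugin_name": "com.sap.hana.di.procedure",
      "plugin_version": "11.1.0"
    },
    "hdbcds": {
      "plugin_name": "com.sap.hana.di.cds",
      "plugin_version": "11.1.0"
    },
    "hdbsynonym": {
      "plugin_name": "com.sap.hana.di.synonym",
      "plugin_version": "11.1.0"
    }
  }
}
```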
Database Development Objects
The actual database artifact development isn’t all that different from what you do today in the current HANA Repository. Each database object type has its own file and the file extension controls what type of object you want to have. Often you can simply cut and paste the existing objects from the current HANA Repository into your new HDI/XSA project. For many development artifacts, like stored procedures, you only need to remove the Schema references.
Figure 8: .hdbprocedure Example in HDI
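A hypothetical .hdbprocedure sketch illustrating the schema-free style (all object names here are invented; note that there is no schema reference anywhere in the file):

```sql
-- src/procedures/getUserInfo.hdbprocedure (hypothetical)
PROCEDURE "dev602.procedures::getUserInfo" (
    OUT ret TABLE ( user_name NVARCHAR(256) )
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA AS
BEGIN
    -- "dev602.data::DUMMY" is a container-local synonym, not a schema reference
    ret = SELECT CURRENT_USER AS user_name FROM "dev602.data::DUMMY";
END
```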
Other database artifacts, such as CDS, have new file extensions and updated syntax. HDBDD is now HDBCDS for example. For the full list of additions and changes to the CDS syntax in HDI, please see this blog: SAP HANA SPS 11: New Developer Features; HANA Core Data Services
Figure 9: .hdbcds Example in HDI
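A minimal .hdbcds sketch with hypothetical names; note the absence of the repository-era @Schema annotation, since all objects land in the container's schema:

```
context orders {
    entity Header {
        key id        : Integer;
            partner   : String(40);
            createdAt : UTCTimestamp;
    };
};
```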
Other artifacts have been completely redesigned within HDI. The hdbti artifact, for example, has an all-new format, new options, and a new file extension (hdbtabledata). We have also added a whole new set of DDL-based HDI development artifacts. This means we finally have a way to manage the lifecycle of, and consistently deploy, catalog objects which are also created via pure SQL/DDL. These artifacts include Tables, Indexes, Constraints, Triggers, Views, Sequences, etc.
Finally, it's important to reiterate the point that in the HDI world only access to local objects is allowed. There is no such thing as global public synonyms. Therefore, common logic such as SELECT FROM DUMMY won't work any longer. None of the system tables or views are immediately available. Even for such objects, local synonyms must be created, and logic within the container can only reference these synonyms.
For example we might create a synonym for DUMMY:
Figure 10: .hdbsynonym Example in HDI
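A sketch of what such a .hdbsynonym definition might look like (the synonym name matches the dev602.data namespace used elsewhere in this example; the DUMMY object itself lives in the SYS schema):

```json
{
  "dev602.data::DUMMY": {
    "target": {
      "schema": "SYS",
      "object": "DUMMY"
    }
  }
}
```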
// XSJS example: query via the container-local DUMMY synonym (no schema reference)
var connection = $.hdb.getConnection();
var query = 'SELECT CURRENT_USER FROM "dev602.data::DUMMY"';
var rs = connection.executeQuery(query);
// $.hdb result sets support indexed access to the result rows
var currentUser = rs[0].CURRENT_USER;
var greeting = 'Hello Application User: ' + $.session.getUsername() +
               ' Database User: ' + currentUser + '! Welcome to HANA ';
$.response.contentType = 'text/plain; charset=utf-8';
$.response.setBody(greeting);
Deployment to the Database
Once we have coded all of our HDI based development artifacts we are ready to deploy the database service to XS Advanced and thereby also deploy the database artifacts into HANA and the underlying schema for the container. For this we will use the xs push command. Add on the name of the specific service defined in the manifest.yml file to only deploy the database service for now.
Figure 11: xs push
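Assuming the database service defined in the manifest.yml is named dev602-db (a hypothetical name), the command would look roughly like this:

```
xs push dev602-db
```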
The service should deploy, run, and then very quickly reach completion. However, no actual errors are reported back from the deployment service via the push command. If you had a syntax error in any of the development artifacts, you could only see it by looking at the deployment logs. The upcoming web-based development tools will streamline this process by displaying the logs immediately in a deployment window. For now, though, we will use the xs logs command to check the status of the HDI deployment.
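For example, again assuming a database service named dev602-db:

```
xs logs dev602-db
```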
Figure 12: xs logs With Errors
Notice in the above log output that we did have an error in the .hdiconfig: I specified an HDI plug-in version that doesn't exist on my system. Any sort of syntax or configuration error would show up here in the logs in a similar fashion.
After correcting my error, I can perform the push again. This time everything works correctly and I can see the name of the target schema in the logs. Currently this is the best way to see the correlation between HDI container and the actual physical schema in HANA.
Figure 12: xs logs With Successful Deployment
We could now go to the HANA Studio or the Web-based Development Workbench and look at this physical schema. In most SPS 11 and higher systems using HDI there will be many such schemas with long GUID-based names, so you will likely have to search for this schema name. You should see that several schemas were actually created for your container. If your deployment was successful, you should also be able to see the database artifacts you created.
Figure 13: HDI Container Schema as Viewed from the HANA Studio
Admittedly the experience of working with the generated schema in the HANA Studio and the Web-based Development Workbench isn’t ideal. This is why early next year with the new SAP Web IDE for SAP HANA, SAP plans to also deliver a new XS Advanced/HDI based catalog tool. This tool will allow you to list and view the details of the HDI containers and avoid the cumbersome lookup of the underlying schema names.
Figure 14: HDI/XSA Based Catalog Tool for Viewing HDI Container Details
This new catalog tool will also allow you to view the data in tables/views and execute procedures within your container. All access will be done via the technical user of the container. This way, developers in development systems can have full access to the development objects they are working on without the need to set up special developer roles for an application.
Figure 15: HDI/XSA Based Catalog Tool; Data Preview
Planning for HDI
With the introduction of HDI there are several logical changes to the supported development artifacts as well. This particularly impacts the area of modeling. In HDI there is no support for Analytic, Attribute, or Scripted Calculation Views. Therefore you would have the following transition:
- Analytic Views -> Graphical Calculation Views
- Attribute Views -> Graphical Calculation Views
- Scripted Calculation Views -> SQLScript Table Functions
- Column Based Filters -> Filter Expressions
- Derived Parameters by Table -> Only Derived By Procedure (your procedure logic can read a table)
In order to prepare for these changes when moving to HDI, the HANA Studio in SPS 11 contains a migration tool. This tool migrates these various view types in place; that is, it won't convert them to HDI, but will leave them in the existing HANA Repository. It will convert them to Graphical Calculation Views and/or SQLScript Table Functions in order to prepare for a later move to HDI. This way customers can make the transition in phases for less disruption to their business users.
Figure 16: Studio Migration Tool For Modeled Views