HDI Schema Evolution
A table within a schema defines how data is stored and validated. Data structures are described with DDL (Data Definition Language). Over an application's life cycle, these structures need to change: definitions are extended, modified, or new ones are added. This life cycle is often called schema evolution.
If you want to change your structures, you have to apply commands that alter the structure, and sometimes you also need DML (Data Manipulation Language), because you do not only want to change the structure but also keep the existing data, and the process should be fail-safe as well.
HDI (HANA Deployment Infrastructure) provides two methods to create and change tables:
- HDBTABLE is the automated, very robust, but not always the most efficient way to change tables.
- HDBMIGRATIONTABLE gives you a user-controlled way; renaming columns is only possible here.
As an example we take a table PERSON. An HDBTABLE definition is identical to the CREATE TABLE statement, but without the "CREATE" keyword and with an explicit "COLUMN" keyword, which is typically the default but must be stated here. The first version of Person.hdbtable looks like this:
Person.hdbtable Version 1
column table PERSON (
  PERSONID BIGINT not null,
  FIRSTNAME NVARCHAR(100),
  LNAME NVARCHAR(100)
);
Later you may want to add the address to the person. So the second version of the table also has the attributes STREET and CITY in its definition.
Person.hdbtable Version 2
column table PERSON (
  PERSONID BIGINT not null,
  FIRSTNAME NVARCHAR(100),
  LNAME NVARCHAR(100),
  STREET NVARCHAR(100),
  CITY NVARCHAR(100)
);
Now with HDBTABLE you do not have to think about how to get from Version 1 to Version 2. HDI performs the needed table changes and preserves the existing content.
In HDI there is a second way to achieve the same result: HDBMIGRATIONTABLE.
Let us take our example from the beginning. Be aware that the keyword "version" is now part of the syntax we will use.
Person.hdbmigrationtable (Version 1)
== version = 1
column table PERSON (
  PERSONID BIGINT not null,
  FIRSTNAME NVARCHAR(100),
  LNAME NVARCHAR(100)
);
Now we change the table:
Person.hdbmigrationtable (Version 2)
== version = 2
column table PERSON (
  PERSONID BIGINT not null,
  FIRSTNAME NVARCHAR(100),
  LNAME NVARCHAR(100),
  STREET NVARCHAR(100),
  CITY NVARCHAR(100)
);

== migration = 2
alter table PERSON add (STREET NVARCHAR(100));
alter table PERSON add (CITY NVARCHAR(100));
Migration 2 contains the steps needed to alter table PERSON from Version 1 to Version 2.
If the table PERSON does not exist yet, Version 2 is created directly and the migration steps are skipped.
|               | HDBTABLE     | HDBMIGRATIONTABLE       |
|---------------|--------------|-------------------------|
| rename column | not possible | manual                  |
| partitioning  | initial      | initial and at run time |
| rows          | < 1M #       | > 1M #                  |

\# various factors and may need individual adjustment
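The manual column rename mentioned above can be sketched as a further migration version. This is a sketch continuing the PERSON example: the new version number 3 is illustrative, and the RENAME COLUMN statement follows SAP HANA SQL syntax.

```sql
== version = 3
column table PERSON (
  PERSONID BIGINT not null,
  FIRSTNAME NVARCHAR(100),
  LASTNAME NVARCHAR(100),
  STREET NVARCHAR(100),
  CITY NVARCHAR(100)
);

== migration = 3
-- rename LNAME to LASTNAME while keeping the column's data
rename column PERSON.LNAME to LASTNAME;
```

With plain HDBTABLE this change would instead be interpreted as "drop LNAME, add LASTNAME", losing the data in that column.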
Factors are memory footprint and deploy time. HDBMIGRATIONTABLE often uses fewer resources.
Using CAP you can generate HDBMIGRATIONTABLE artifacts with the entity annotation @cds.persistence.journal.
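A minimal sketch of how the annotation could look in a CDS model. The entity and element names are hypothetical; the annotation tells the CAP build to emit an .hdbmigrationtable artifact for this entity instead of an .hdbtable.

```cds
// Hypothetical CDS entity; names are illustrative.
@cds.persistence.journal
entity Person {
  key personId  : Integer64;
      firstName : String(100);
      lastName  : String(100);
}
```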
CAP Schema Evolution
CAP automates the generation of the migration content, which is very helpful.
An hdbtable construct can be converted into an hdbmigrationtable: rename the file extension and add the version line at the beginning.
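For example, the Version 2 Person.hdbtable from above could become a Person.hdbmigrationtable like this. This is a sketch: restarting the version counter at 1 is an assumption, so verify the conversion procedure against the HDI documentation for your HANA version.

```sql
== version = 1
column table PERSON (
  PERSONID BIGINT not null,
  FIRSTNAME NVARCHAR(100),
  LNAME NVARCHAR(100),
  STREET NVARCHAR(100),
  CITY NVARCHAR(100)
);
```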
Strong advice: test this intensively before you do it with production data (see the link below).
HDBMIGRATIONTABLE empowers the developer to influence and control the table/schema migration.
HDBTABLE is a very easy and frequently used way. There are pros and cons to both methods.
If you want to use the NSE (Native Storage Extension) feature, you have to use HDBMIGRATIONTABLE.
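A minimal, hypothetical sketch of declaring an NSE load unit inside an .hdbmigrationtable. The table ORDERS and its columns are made up for illustration; the PAGE LOADABLE clause follows HANA's load-unit syntax, so please verify it against your HANA revision.

```sql
== version = 1
column table ORDERS (
  ORDERID BIGINT not null,
  -- warm data: a page-loadable column is managed by NSE on disk
  -- and loaded into memory page-wise instead of column-wise
  PAYLOAD NVARCHAR(5000) PAGE LOADABLE
);
```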
Related blog post:
How to partition a migrationtable is described in this blog:
Partition HDI migrationtables