
We all start out doing things the wrong way, and then wonder how to get ourselves back out of it.

Over the last couple of weeks, I’ve been trying to move us to a better development method by building objects in HANA using the SAP HANA Development perspective. Unfortunately, we had already built a whole series of tables directly in SQL. I know, bad boys and so on, but we’re trying to get back on the right track.

So we’ve finally started building our new tables using hdbtable files (and hdbdd files), but how can we reverse engineer the tables we’ve already built?

The answer lay in the catalog tables.

What started as a seemingly painful problem ended up as a fairly simple query that I wrote and fine-tuned over a couple of hours in between other things.

This is what I ended up with:

SELECT
  '{name = "' || A.COLUMN_NAME || '";'
  || ' sqlType = ' || A.DATA_TYPE_NAME
  || '; nullable = ' || LOWER(A.IS_NULLABLE)
  || ';' ||
  CASE
    WHEN A.DATA_TYPE_NAME IN ('CHAR', 'VARCHAR')
      THEN ' length = ' || TO_CHAR(A.LENGTH) || ';'
    WHEN A.DATA_TYPE_NAME = 'FLOAT'
      THEN ' precision = ' || TO_CHAR(A.LENGTH) || '; scale = ' || TO_CHAR(A.SCALE) || ';'
    WHEN A.DATA_TYPE_NAME = 'DECIMAL'
      THEN ' precision = ' || TO_CHAR(A.LENGTH) || '; scale = ' || TO_CHAR(A.SCALE) || ';'
    ELSE ''
  END
  ||
  CASE
    WHEN A.COMMENTS IS NULL THEN ''
    ELSE ' comment = "' || A.COMMENTS || '";'
  END
  || '},' AS COLUMN_COMMAND
FROM TABLE_COLUMNS A
WHERE A.SCHEMA_NAME = <schema_name>
  AND A.TABLE_NAME = <table_name>
ORDER BY A.COLUMN_ID

The script output looks like this excerpt from one of our tables:

{name = "CCYY_IND"; sqlType = VARCHAR; length = 4;},
{name = "ACCT_PERIOD"; sqlType = INTEGER;},
{name = "ACTUAL_AMOUNT"; sqlType = DECIMAL; precision = 15; scale = 2;},

Once I have this, it’s just a matter of creating a new hdbtable file, completing the schema name using the demo lines, and pasting in the results from the above script. I’ve attached a copy of the script file and an actual file from our landscape.
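For illustration, here’s roughly what a finished hdbtable file ends up looking like — the schema name MYSCHEMA and the primary key columns here are made up for the example, not taken from our landscape:

table.schemaName = "MYSCHEMA";
table.tableType = COLUMNSTORE;
table.columns = [
  {name = "CCYY_IND"; sqlType = VARCHAR; length = 4;},
  {name = "ACCT_PERIOD"; sqlType = INTEGER;},
  {name = "ACTUAL_AMOUNT"; sqlType = DECIMAL; precision = 15; scale = 2;}
];
table.primaryKey.pkcolumns = ["CCYY_IND", "ACCT_PERIOD"];

One thing to watch: the script puts a trailing comma after every column entry, so you may need to drop the comma on the last one before activating.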

Hope this helps someone else out there.


14 Comments


  1. Paul Aschmann

    Hi Jamie,

Thanks for sharing – this is useful and would be great for all the object types. I am on the fence about switching from SQL DDL to HDB due to some of the minor limitations like default values on columns. If I am going to switch, I would prefer to have everything one way or the other 🙂

    Cheers, Paul

    1. Jamie Allen Post author

      Hi Paul,

Thanks. I’m actually working on something similar for roles at the moment so I’ll be sure to post it as well. It’s not quite so straightforward 🙂

Default values for HDB tables are available, so I’m not sure what you mean by “..minor limitations like default values..”. I hadn’t included it in the script as we didn’t have a requirement for it yet.
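A quick made-up example of what a default looks like in an hdbtable column definition (STATUS is a hypothetical column, not one of ours):

  {name = "STATUS"; sqlType = VARCHAR; length = 1; defaultValue = "A";}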

      Jamie

        1. Jamie Allen Post author

          Hi Paul.

Interesting. Given that we haven’t used defaults yet (but I’m sure it will come up), I hadn’t gone down that path and had simply noted that it was possible, but it appears that it can only use simple constant values and not functions etc.

We also have a number of partitioned tables, so I’ll have to figure out a way to manage those a little better 🙂

          Thanks for the updates.

          Jamie

    1. Jamie Allen Post author

      Thanks John.

I had a quick read of the SPS09 notes the other day and they go into a lot of the big features, but I’m hoping there will be some more add-ins for Studio that wouldn’t make it into the release notes at this stage.

Will be interesting to see the web-based development workbench in action.

      Jamie

    1. Jamie Allen Post author

      Hi Krishna,

.hdbdd seems to be the preferred syntax going forward, but there are some things it just can’t do as yet. I’m assuming that hdbtable will be there for some time yet 🙂
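For comparison, my rough understanding of the equivalent .hdbdd syntax for the columns above (the namespace, context, entity and key choices are just placeholders):

namespace acme.data;

@Schema: 'MYSCHEMA'
context Reporting {
  @Catalog.tableType: #COLUMN
  entity GlBalance {
    key CCYY_IND      : String(4);
    key ACCT_PERIOD   : Integer;
        ACTUAL_AMOUNT : Decimal(15, 2);
  };
};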

      Jamie

      1. John Appleby

What I quite often do is save a .sql file in my data folder with all the manual stuff that is required to complete the data dictionary build. That way it is at least remembered and transportable.

        From Tom’s comments, it sounds like they have merged much of this functionality in SPS09.

    2. Thomas Jung

hdbdd should be preferred whenever it can meet your requirements. hdbtable will be maintained for backwards compatibility but will not receive any new features. All investment goes towards hdbdd. We are working to fill the feature gaps with hdbdd and I think you will see many (but not all, unfortunately) of those gaps go away in SPS09.

      1. Thomas Jung

I would add that one of the features planned for SPS09 is a migration capability from hdbtable to hdbdd. When you create a compatible hdbdd object and activate it in the same step as the deletion of the corresponding hdbtable, all the data will be moved and preserved during the activation process.

  2. Jonathan Haun

    Thanks for the script.

Does anyone know when .hdbdd will support column comments? The Help for SPS12 states that it is not supported. It is really nice to have them when designing data mart tables…

      1. Jonathan Haun

Thanks. This is the primary reason we have not started using .hdbdd for our custom DDL scripts. The comments metadata is vital for our metadata management processes. I hope it is added soon.

