New features in HANA SP07 – what’s useful in the real world?

I’ve been using HANA SP07 for two weeks now, and delivered a training course on it in the interim. It’s fair to say that we kicked it around, load tested it, and generally beat it to death. I thought I’d share the things in HANA SP07 that matter most in the real world.

1) Data Engine Improvements

The data engine improvements are quite dramatic. There was a time when the choice between SQL, Analytic Views, Calculation Views, which engine you ran things in, and so on really mattered, and much trial and error was required to build good models. Now, in SP07, we find there is a simpler set of rules that you can apply for best-practice model building. Plus, if you get it wrong, you don’t get punished with a 100x performance impact.

Plus… some things which ran really badly before, like joining row and column store objects, now perform surprisingly well. There has been a lot of quiet work done in the background, which you can see using the Plan Visualizer. This makes a huge difference in the real world!

Also, it may just be me, but text search seems much faster.

2) Developer Experience

HANA SP06 was the first revision built for developers, by developers, and SP07 consolidates that effort. The main things that matter:

– The tooling is much faster and more consistent in naming conventions

– There are lots of useful things like code completion and syntax correction, which make development faster and less error-prone

– Improvements to UI Integration Services which mean building PoCs and Mock-ups can be done in hours

It’s fair to say that there is plenty of work still to come – including even more consistency between development artifacts – but this is a step in the right direction!

3) Multi-User Development and Transport Management

This was very basic in HANA SP06: you used to have to transport a whole delivery unit in one go. Now, you can easily have multiple users developing against the same code. Highlights include:

– Inactive code testing

– Change management

– Repository management including version management

– Job Scheduling (which looks like a cron-script generator)
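An .xsjob file in the repository drives the scheduler. As a rough, untested sketch – the package, file and function names here are my own illustrations, not anything from a real system – a nightly job definition looks something like this:

```json
{
    "description": "Illustrative nightly job (hypothetical names throughout)",
    "action": "acme.jobs:cleanup.xsjs::run",
    "schedules": [
        {
            "description": "Every night at 01:00",
            "xscron": "* * * * 1 0 0"
        }
    ]
}
```

As I understand it, the xscron string runs year, month, day, weekday, hour, minute, second – hence the cron-script-generator feel.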

4) Smart Data Access

Smart Data Access is much improved in SP07 with support for Oracle, MSSQL, Sybase ASE and IQ, Teradata, Hadoop and generic ODBC connections. I’ve tested it a few times and it looks pretty handy. Plus, it supports Insert/Delete/Update so you could write jobs in HANA XS which move data from your hot store (HANA) to your cold store (IQ) overnight. This is the beginning of automatic data temperature management.
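To give a flavour of what such an overnight job could run, here is a hedged sketch – the remote source, schemas and table names are my own illustrations, not from any tested system:

```sql
-- Assumes a remote source IQ_COLD was already registered with
-- CREATE REMOTE SOURCE; all object names here are illustrative.
CREATE VIRTUAL TABLE "SALES_COLD" AT "IQ_COLD"."ARCHIVE"."dbo"."SALES";

-- Move rows older than one year from the hot store (HANA) to the cold store (IQ)
INSERT INTO "SALES_COLD"
  SELECT * FROM "SALES_HOT"
  WHERE "ORDER_DATE" < ADD_YEARS(CURRENT_DATE, -1);

DELETE FROM "SALES_HOT"
  WHERE "ORDER_DATE" < ADD_YEARS(CURRENT_DATE, -1);
```

Scheduled overnight from HANA XS, something like this is the crude beginning of data temperature management.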

5) Monitoring

I was surprised here, but there are a bunch of things which make monitoring better in the real world. Such small pieces of usability are much appreciated!

– Improvements to the Data Preview button. Much faster and better SQL is generated.

– Expensive Statements trace is much faster for some reason

– New monitoring views in HANA Studio (right click the system to see them)

– Ability to see failed SQL Plan Visualizations

6) Modeling

There’s not so much in the modeler, and they haven’t done COUNT DISTINCT, AVG or WAVG yet (hint hint!) but there are a few neat new things.

– Star Join capability in the output node of the Calculation View modeler. This makes certain types of view much easier to build, as you could previously only join two tables at a time.

– Much improved usability with propagation of objects through models and code completion for expressions

– Much improved performance – huge improvement here

– Copy/Paste!!! Unfortunately not inside the Calculation View modeler 🙁

7) Maintenance Revisions

These allow you to maintain the latest revision of HANA in your project track, and the latest revision of the last SP of HANA in your production track, whilst retaining security fixes. They are very useful in practical deployments of HANA.

There are also a few things which still don’t seem to be complete enough to be highly usable:

Core Data Services

Core Data Services is a mechanism that allows building a whole data dictionary for an application in one development artifact. You can define types, tables, associations and it will build the model for you. But, it is still too limited to use substantially in the real world and doesn’t support HANA Information Views.
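For flavour, a minimal .hdbdd artifact might look like the sketch below – the namespace, schema and entity names are my own illustrations:

```
namespace acme.data;

@Schema: 'ACME'
context model {
    type BusinessKey : String(10);

    @Catalog.tableType: #COLUMN
    entity Customer {
        key id : BusinessKey;
        name   : String(80);
    };

    @Catalog.tableType: #COLUMN
    entity SalesOrder {
        key id   : BusinessKey;
        customer : Association to Customer;
        amount   : Decimal(15, 2);
    };
};
```

Activating the artifact generates the corresponding column tables.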

Web IDE

There is a new Web IDE, introduced in SP06 and enhanced in SP07. It feels like a collection of disjointed tools, which is what it is. It is very useful for Transport Management and a few other things, like reliably deleting repository objects, but there’s no way it could be used as a Cloud development environment in its current state.

Spatial Engine

The Spatial Engine feels like it needs some work: there are no concrete examples to test and work with, and in my testing I couldn’t get models working. Probably this is my lack of knowledge.

AFL Modeler

This still seems to generate script that generates tables, which means the AFL Modeler isn’t really usable in the real world. It would be cool if SAP used its KXEN people to help make the AFL Modeler a success.


I find HANA SP07 a very pleasing incremental release. I think it will go down as the release that made HANA ready for the developer community to create large-scale projects in anger, and also as the release where SAP stopped cramming in so many new features and instead made what was there better, more mature and focussed on developer productivity and usability.

The development team should be proud of what they have created. There are a few parts of SP07 which feel a bit rough around the edges, but I’m sure they will be smoothed out in the next few revisions.

It also very clearly lays down the foundation for what needs to come in SP08. But more of that in another blog.

    • I don't have time for a proper teardown today, but I can give some preliminary analysis based on the queries in my last blog. Results below.


      - SP07 is faster across the board for aggregation. Maybe 10% faster.

      - SP07 is substantially faster for COUNT DISTINCT - 40-50% in my tests.

      - SP07 is substantially faster for STDDEV (but nothing like as fast as IQ)

      - Information views do not improve performance as much as they did in HANA SP06

      The latter point is MASSIVE for apps that access SQL directly via ODBC, like Cognos. Otherwise, no surprises here.


      Comparing SAP HANA and Sybase IQ - real world performance tests


      SP06 (with Information Views):

      Query 1: 1.2s

      Query 2: 1.7s

      Query 3: 3.4s

      Query 4: 14.0s

      Query 5: 3.2s

      Query 6: 409s

      SP07 with Standard SQL:

      Query 1: 1.1s

      Query 2: 1.9s

      Query 3: 2.3s

      Query 4: 11.0s

      Query 5: 2.9s

      Query 6: 266s

      SP07 with Information Views:

      Query 1: 1.1s

      Query 2: 1.7s

      Query 3: 3.6s

      Query 4: 10.6s

      Query 5: 2.9s

      Query 6: Not Possible

        • Yes correct, the SP06 benchmark had to be done over Information Views. It was roughly 100x slower otherwise.

          In SP07 we find that SQL is nearly as fast as Information Views in simple examples. However, when things get more complex, the greater control you can give to the OLAP engine in Information Views means they are still quite a bit faster.

          By the way, I have a suspicion that the way we modeled the data in SP06 could be improved in SP07 by changing the partition design, and that this would improve performance. That, however, is an analysis for another day.


  • Storage Snapshots were introduced in SP07, but the setup instructions in the admin guide are unclear. I tried to set one up and play around with it, and ended up figuring out that it only works with hdbbackint, in conjunction with third-party tools, but this is never mentioned in the guide or the presentation slides at all. Correct me if I'm wrong.

    Moreover, I don't see any benefit of storage snapshots vs. normal backups.


    Nicholas Chang

    • Hey Nicholas,

      So my understanding is that there are some downsides to Storage Snapshots, specifically that integrity is not ensured.

      But there are also some benefits: they have less impact on the system and are faster. Plus you could use one on your DR system to quickly step back to a point in time for regression testing.

      I tried to set up a storage snapshot on my system but I can't get it to work either. Maybe Lars Breddemann can help.


      • Hi John,

        Thanks for the swift reply!

        However, the benefits mentioned are subjective. We don't face any performance issues with the current HANA backup/restore technology. For a quick DR system setup, we still require an existing HANA instance, copying the snapshot data area over and starting recovery. The steps are identical to a normal backup/restore and recovery. Moreover, cross-system restore/recovery or copying a database using a storage snapshot is not covered in the SP07 admin guide either.

        Yeah, hope we can hear from saphana real soon 😉



  • Hello John and Nicholas,

    Thanks for your comments about storage snapshots with SAP HANA SPS 07.

    A few points:

    * Setup

    To work with storage snapshots, no special setup steps are needed. Storage snapshots work "out of the box" in your system.

    What particular setup instructions were you looking for? It would be important for us to learn what exactly is going wrong in your systems.

    * hdbbackint
    Storage snapshots do work with third-party backup tools, but do not require Backint for SAP HANA to work. Have I understood your point here?

    * Benefits of storage snapshots
    As John said, storage snapshots have the advantage that they are faster compared with conventional data backups, because they do not use extra database resources.

    * Downsides to storage snapshots
    John is again right here: we point out in the SAP HANA administration guide that integrity checks are not done on storage snapshots.
    Unlike with data backups, no data checks are made while a storage snapshot is being created. The storage snapshot relies on the internal *database snapshot* created at the start to ensure consistency.
    So if, while the storage snapshot is being created, something should happen to corrupt the database snapshot, there is no mechanism to ensure the consistency of the resulting storage snapshot.

    This is why we recommend using a combination of data backups and storage snapshots in your backup strategy.

    * Database copy
    This does now work with storage snapshots as well as file-based backups.

    We are already documenting this for the SPS07 maintenance release.

    Best wishes


    • Thanks Paul, appreciate your time. Here are the screenshots from my system. Interestingly... yesterday, the BACKUP DATA CREATE SNAPSHOT command was failing... and now it works, for some reason.

      Sorry I didn't keep the error message. But via the GUI it definitely does not work, unless I am doing something wrong!
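      For reference, the SQL-level snapshot workflow, as I understand it from the documentation (the backup ID and external ID below are placeholders, not values from my system):

      ```sql
      -- Prepare: creates the internal database snapshot
      BACKUP DATA CREATE SNAPSHOT;

      -- (take the storage-level snapshot of the data area at this point)

      -- Confirm; the backup ID can be found in M_BACKUP_CATALOG
      BACKUP DATA CLOSE SNAPSHOT BACKUP_ID <backup_id> SUCCESSFUL 'external_snapshot_id';
      ```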



      • Hello John,

        Thanks for this!

        Ok, you seem to be confirming that storage snapshot works as designed.

        In the future, if you do encounter any unexpected situations, do let us know.

        It would be helpful for us here.

        Best wishes


    • Hi Paul,

      Thanks for your reply.

      FYI, there's no problem for me creating the storage snapshot, either via Studio or via SQL command. During the prepare phase, I can see a storage snapshot file consisting of all the DB instance info/repository created in /sapmnt/data/SID/mnt00001/hdb00001, named snapshot_databackup_0_1, as below:

      All files in the data area, /sapmnt/data, were copied to another file system for testing. The problem occurred during the restore/recovery, where it looks for hdbbackint in the path below, after the copied storage snapshot was put back into /sapmnt/data/

      FYI, by default, hdbbackint is not installed in our appliance.

      Hope to hear from you soon.


      Nicholas Chang

      • Hello Nicholas,

        Thanks for your reply.

        This is an interesting situation, and it would be good if we could analyze it a bit more.

        It would appear that, although you are not using hdbbackint, hdbbackint is still being requested for the recovery.

        Was hdbbackint used at any stage?

        The backup catalog may contain an entry with hdbbackint.

        Can you provide some information from backup.log for the recovery attempt?

        Perhaps we could take this up again in January after the holidays?

        Best wishes


  • Hi

    I want to know: if I am writing a procedure, and I have compiled it successfully, and it is yet to be deployed to production,

    can I have a lock, as we have in BW with transport requests, so that no one can edit it (even though they have the required SQL privileges to access it)?


    Krishna Tangudu

      • Hi John,

        This Change Manager is for XS-related objects, right? Or is it also for the views and procedures we develop using the "Modeler" perspective?

        I am unable to find the link to log in to Change Manager. What is the URL used in the video to log in to Change Manager?


        Krishna Tangudu

        • You shouldn't be using the Modeler perspective at all any more. It will be deprecated.

          Instead use the Developer perspective and create your Information Views within a project. Then they can be added to the Change Manager.

          I believe it needs a delivery unit and a role to work. It's all in the Developer Guide.