
Thoughts on #SAPonHANA Announcement

The Jan 10, 2013 announcement of SAPonHANA was very important to me for several reasons, and not simply because SAP now runs on HANA. As several of us know, SAP still needs some more time to make SAPonHANA production-ready. Why, then, is it so important?

1 The Past

  • Hasso’s admission of “no R/3 without DB2 and Oracle” moved me.

        (Timeline: approximately 14m:30s into his speech on Jan 10, 2013)

This was tough for SAP – to go into competition with IBM and Oracle, basically the two founders of the R/3 development. There is no R/3 without DB2 and the work we did on the mainframe, and without Oracle and the work we did with Unix computers in the early 90s.

2 Open Innovation

What is Open Innovation? (from Craig Cmehil)

“Open innovation is a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology”. The boundaries between a firm and its environment have become more permeable; innovations can easily transfer inward and outward. The central idea behind open innovation is that in a world of widely distributed knowledge, companies cannot afford to rely entirely on their own research, but should instead buy or license processes or inventions (e.g. patents) from other companies. In addition, internal inventions not being used in a firm’s business should be taken outside the company (e.g., through licensing, joint ventures, spin-offs).

Vishal Sikka in his blog: The statement below is highly intriguing to me for several reasons. First and foremost, would what SAP promises even be practical, given that R/3’s design philosophy was based on the premise that traditional databases are slow? (Hint: number range buffering, table buffering, VBMOD, VBDATA, VBHDR, etc.) I’ll provide more details in the next blog. I know Hasso mentions in his speech that business logic could now be pushed down to the DB layer of traditional databases due to advances in technology. But reading from and writing to disk is still expensive, isn’t it?

Innovations in the SAP Business Suite, such as the push-down of data-centric processing logic from the application server to the database tier via stored procedures, would be made available to other databases too, making them perform better as well.

Would SAP’s approach be considered a good example of open innovation? Craig discusses open innovation in the context of one’s own ecosystem, whereas SAP is discussing taking open innovation to the next level: the competitors. I’m sure many would question SAP’s real intention behind this approach.

3 Clarity on Pricing

From blog by Dennis Howlett:

Andreas Oczko, member of the board of DSAG: “We pushed for a pricing model which is based on the customer’s added value. This means that SAP customers must now upgrade only those licenses which actually access the HANA database and they do not have to upgrade the entire license agreement. As far as licensing costs are concerned, the database for the Business Suite on HANA will now cost existing SAP customers exactly the same as the conventional databases. This will give each individual customer the chance to use in-memory technology at a reasonable price.”

Marco Lenck, chairman of the board of DSAG: “SAP has adopted our proposals to set up the pricing model based on contract value and not on main memory usage. Thanks to the conventionally focused pricing, existing customers now have easy access to innovations in the HANA environment. SAP has responded in a customer-oriented way to the core requirement of the DSAG. We believe that this will lead to a strong push in the adoption of the new technology.”

Dennis further states: “These statements should be taken as a solid endorsement from a group that does not fight shy of voicing its adverse opinions.”

4 Delivery

Several years ago, I was amazed when I first saw the standardization of R/3 screen (SAPGUI) layouts. I saw beauty in that consistent “look & feel” of SAPGUI screens. And then I forgot about it as I became familiar with SAP. Recently, I was once again amazed at how well SAP delivered the “revised and upgraded” upgrade methodology using MOPZ, LVSM and other tools. Software is not science, but the upgrade methodology and SAP tools seemed like science. Based on that experience, I once again started feeling that SAP would deliver what they promise, elegantly.

5 Speed

I’m not talking about HANA’s speed but the speed with which SAP delivered a “somewhat working” SAP-HANA to the market. I’m sure we have a long way to go before “SAP runs on HANA” becomes production-ready; however, SAP demonstrated commitment, dedication, perseverance, determination, arguably clarity and, more importantly, brilliance in both the announcement and the delivery of HANA.

  • Hi Bala,

    thank you for your blog. You try to get to the bottom of the hype: what will HANA change in real life, and when? Though these blogs are not read as frequently as some of the re-hyping blogs, they are perhaps more important: SAP has its own promotion department…

    I agree with your hint at the current R/3 design philosophy. Today, databases are not slow anymore, and they support read isolation (no need for read locks), built-in locking and sequencing, etc.

    And traditional databases are optimized like hell; cache hit ratios (memory!) > 90% are normal for databases in OLTP applications. Comparing HANA’s in-memory reads to disk reads is very misleading.

    Looking forward to your next blog.



    • Hi Rolf,

      Thanks for your comment. I would like to understand SAP-HANA a little better so I can provide better recommendations to customers who are genuinely interested in making sound long-term decisions.

      On locks, Oracle’s philosophy has been “readers don’t wait for writers, writers don’t wait for readers” for many years (20+). I remember “Snapshot too old” errors due to small rollback segments in older versions; the undo tablespace improved the situation a little. And other databases provide dirty-read/uncommitted-read isolation to read records that are changed but not yet committed.
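The “readers don’t wait for writers” behavior can be sketched with a toy multi-version store (a deliberately simplified illustration, not Oracle’s actual undo/rollback-segment mechanism):

```python
# Toy multi-version store: readers see only the last committed value
# and never block on a writer whose transaction is still in flight.

class MVCCStore:
    def __init__(self):
        self.committed = {}     # key -> last committed value
        self.uncommitted = {}   # txn_id -> {key: pending value}

    def write(self, txn_id, key, value):
        # The writer's change stays private to its transaction.
        self.uncommitted.setdefault(txn_id, {})[key] = value

    def read(self, key):
        # Readers never wait: they simply see the committed version.
        return self.committed.get(key)

    def commit(self, txn_id):
        # Publish the transaction's pending changes atomically.
        self.committed.update(self.uncommitted.pop(txn_id, {}))

store = MVCCStore()
store.committed["stock"] = 10
store.write("txn1", "stock", 7)   # writer in flight, not yet committed
assert store.read("stock") == 10  # reader neither waits nor sees 7
store.commit("txn1")
assert store.read("stock") == 7
```

A real MVCC implementation keeps per-transaction snapshots and garbage-collects old versions; the point here is only that a read returns the last committed value instead of blocking on the in-flight writer.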

      Regardless of which database we use, changed records remain locked until the transaction is committed. Sure, the time to perform a transaction has come down, along with the need to support more concurrent users, due to improvements in technology; however, that has not eliminated the need for a second user to wait until the first user’s transaction is committed or rolled back before updating a record changed by the first user. This is the reason why SAP-HANA uses INSERT-ONLY operations and then updates asynchronously via the “delta merge” operation.
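As a rough illustration of the insert-only pattern described above (a hypothetical sketch, not HANA’s actual column-store implementation), every “update” is just a new entry in a small delta log, and a later delta merge folds it into the read-optimized main store:

```python
# Hypothetical insert-only store with an asynchronous "delta merge":
# writes land in a small delta log; a merge step later folds them
# into the read-optimized main store.

class InsertOnlyStore:
    def __init__(self):
        self.main = {}    # read-optimized "main" storage: key -> value
        self.delta = []   # write-optimized delta log of (key, value) inserts

    def write(self, key, value):
        # An "update" is just another insert into the delta log.
        self.delta.append((key, value))

    def read(self, key):
        # The newest delta entry wins; fall back to the main store.
        for k, v in reversed(self.delta):
            if k == key:
                return v
        return self.main.get(key)

    def delta_merge(self):
        # Fold the delta into main storage (run asynchronously in HANA).
        for k, v in self.delta:
            self.main[k] = v
        self.delta.clear()

store = InsertOnlyStore()
store.write("MATNR-100", 50)   # initial stock level
store.write("MATNR-100", 42)   # "update" = just another insert
assert store.read("MATNR-100") == 42
store.delta_merge()
assert store.read("MATNR-100") == 42 and store.delta == []
```

The design choice is that writers only ever append, so an updater never has to modify the compressed, read-optimized structure in place.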

      I’ve written 3 more blogs on this topic.

      1) Questions on Traditional Database Support

      2) SAPonHANA

      3) Thoughts on One Version of SAP Business Suite on HANA

      Looking forward to discussing more with you.

      Best regards,


      • Hi Bala,

        thanks for the reply. The infamous “snapshot too old” I have only experienced in the last 10 years in long-running batches that keep a cursor open for hours while clients change individual records simultaneously. All such cases could be refactored with shorter cursors and a commit every – say – 1,000 records.

        I am no DB2 expert; afaik, in earlier versions there was no “read committed” without the possibility that a read was blocked by a write. So “read uncommitted” was a feature to ensure unblocked reads. If you wanted data consistency AND no read locks, you had to do it programmatically, like SAP did with buffer tables and the enqueue server.

        But that was a long time ago. DB2 now supports read uncommitted, as well as “cursor stability”, which is similar to read committed.

        In real-life applications, “read uncommitted” is often not desirable or acceptable because you may read inconsistent data. In AS ABAP (correct me if I am wrong), if you want to read only committed data, the application server locks data via lock objects (enqueue server) etc., so the database features are not used.

        But Oracle and DB2 support this feature out of the box. And: why lock via the “enqueue server” if the database is tuned for data consistency, transaction handling and – locks?

        Do you know if you can use read uncommitted together with an Oracle database in AS ABAP? If yes, then SAP implemented all of its transaction handling beside the database, because Oracle does not support read uncommitted at all.

        Anyway, there seems to be a lot of room for ABAP transactional tuning. Let the database take care of “undirty” reads, locks and transactions, not a layer between the application program and the database that expects a database from 1975.



        • Hi Rolf,

          Thanks for sharing your thoughts.

          If we look at transaction handling in standalone mode, yes, you’re correct: SAP has implemented what an RDBMS provides out of the box. That looks redundant; however, SAP does one more thing: asynchronous updates. Asynchronous updates are what make SAP scalable, and in order to support this feature they – imo – implemented their own transaction handling. Even in SAP-HANA, they – for the right reasons – rely on an asynchronous update feature (the delta merge). And locking, imo, in both traditional RDBMSs and SAP-HANA in synchronous mode is expensive.
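The decoupling described here can be sketched with a plain queue and a worker thread (a hypothetical illustration, not SAP’s actual update-task/VB* implementation): the dialog step only records the intended change and returns immediately; a separate update process applies it to the database asynchronously.

```python
import queue
import threading

db = {}                       # stands in for the database
update_queue = queue.Queue()  # stands in for the queued update records

def dialog_step(key, value):
    # The user-facing step only records the intended change and returns
    # immediately -- it never waits on the database write.
    update_queue.put((key, value))

def update_worker():
    # A separate "update task" applies queued changes to the database.
    while True:
        item = update_queue.get()
        if item is None:      # shutdown signal
            break
        key, value = item
        db[key] = value
        update_queue.task_done()

worker = threading.Thread(target=update_worker)
worker.start()
dialog_step("order-1", "created")    # returns instantly
dialog_step("order-1", "released")
update_queue.join()                  # wait for the async updates to land
update_queue.put(None)
worker.join()
assert db["order-1"] == "released"
```

The concurrency win is that many dialog steps can enqueue work without ever contending for database locks; only the update worker touches the database.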

          Best regards,


          • Hi Bala,

            thanks for your interesting input.

            Oracle has provided asynchronous commits since 10gR2 (2005), e.g. COMMIT WRITE BATCH NOWAIT.

            But asynchronous updates are rarely needed to improve performance. Modern RDBMS systems are tuned by experts to optimize reads. The SAP RDBMS layer, with its own buffer, probably contradicts this optimization.

            Imo it is a myth today that the RDBMS features implemented by SAP are necessary at all to improve scalability. High concurrency – parallel updates on the same records – is rather rare. SAP’s implementation of RDBMS features, on the other hand, can cause bottlenecks, e.g. in its implementation of sequences, buffer tables with generic structure, and enqueue locks (which are much more expensive than database locks).

            All this probably was a good idea 20 years ago, but today it is merely an alibi for not adapting the design to the state of the art.

            There are high-performance mass-data applications, e.g. running on Java EE with JPA; no framework exists (because none is needed) to implement RDBMS features outside the database.

            You probably know the first golden rule of framework development: “Don’t.” This applies to implementing RDBMS features.

            About HANA: since updates in column-based storage are extremely expensive compared to traditional storage, there is no chance of getting acceptable performance without asynchronous updates. Of course there is great engineering in HANA with the delta merge, even if you maybe lose some of the “D” in “ACID”?

            Best regards


          • Hi Rolf,

            Thanks for the thoughtful comments. I’ll give a few examples of why I’m struggling to understand:

            • COMMIT WRITE BATCH NOWAIT is for writing redo logs to the disk. This could be implemented for the SAP application as well; this feature is not for application data.
            • Parallel updates to the same record are rather rare
              • This is something new. Have you or anyone else published a paper on this? One example comes to my mind: the stock level of an item.
            • Deadlocks are still a problem. Even today, we must drop indexes on F tables while loading data to avoid deadlocks in SAP BW.

            I’m sure there may be applications which would benefit from RDBMS features. For example, SAP didn’t use its own architecture with SAP BW; they made use of RDBMS features, which is why they recommend we drop/rebuild indexes to improve concurrency and the level of parallelism. And BW is mostly a read-only application, so there is no need for SAP’s architecture.

            Note: I may write a blog on this to get input from wider audience.

            Thanks again.

            Best regards,


          • Hi Bala,

            I only mentioned the asynchronous commit phase (writing redo logs) because it is very expensive, and the writing of application data is mostly asynchronous on modern operating systems anyway.

            That parallel modifications of application data are quite rare and not a performance killer is my experience with many OLTP applications.

            • in batch, you can easily avoid it.
            • in user interaction, work items can be routed to different persons. Optimistic locking has become popular (at least outside the SAP Business Suite) even in business applications because it is cheap and avoids “currently locked by another user” (who might be at lunch) while preserving data consistency.
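Optimistic locking, as mentioned in the second bullet, can be sketched with a simple version counter (a generic scheme, not any specific product’s implementation): no lock is held while the user is away; the write simply fails if someone else changed the row in the meantime.

```python
# Generic optimistic-locking sketch: each row carries a version counter;
# a save succeeds only if the row is unchanged since it was read.

class StaleUpdateError(Exception):
    pass

class Row:
    def __init__(self, value):
        self.value = value
        self.version = 0

def optimistic_update(row, expected_version, new_value):
    # No lock is held between read and write; a concurrent change is
    # detected at save time via the version counter.
    if row.version != expected_version:
        raise StaleUpdateError("row changed since it was read")
    row.value = new_value
    row.version += 1

row = Row("draft")
v = row.version                       # user A reads version 0
optimistic_update(row, v, "posted")   # user A saves: ok, version -> 1
try:
    optimistic_update(row, v, "void") # user B saves with stale version 0
    conflict = False
except StaleUpdateError:
    conflict = True                   # user B must re-read and retry
assert row.value == "posted" and row.version == 1 and conflict
```

In a real database this is usually one conditional statement, e.g. an UPDATE with a `WHERE version = :expected` clause that reports zero affected rows on conflict.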

            The situation when SAP implemented its self-made RDBMS features was completely different: some databases could not lock single rows (and might switch to table locking if too many locks consumed “too much memory”), no highly optimized read buffering, no high-performance sequences, no highly optimized I/O, no “read committed”.

            The situation is completely different today: there is no need for SAP’s buffering, asynchronous writing, or home-made locking/sequencing to get scalability. As you mention with deadlocks, SAP’s implementation of RDBMS features is not only resource-consuming but also causes the very bottlenecks it wants to avoid.

            Push all RDBMS and I/O operations down to the expert system (database, OS).

            I am curious whether SAP Business Suite on HANA will carry the burden of a 20-year-old architecture that does not accept the database as the expert for RDBMS functions. I doubt they will change a lot. (HANA currently does not support SELECT FOR UPDATE in some circumstances, so will SAP Business Suite on HANA require an “enqueue server” for locks?) Can you get a really safe, ACID delta merge even if you pull the plug at an awkward moment, when everything is asynchronous?

            And I am curious about your blog. It is great that the database is getting closer to the developer; maybe decision makers will come to understand the importance of database skills.



          • Hi Rolf,

            Thanks for your patience and for taking the time to explain. I do see what you’re trying to say; however, we’re not on the same page. For example, SAP’s implementation of asynchronous writing is not meant to optimize write operations but to improve concurrency – I’ll explain in a blog. SAP still relies on the OS & DB for optimizing write operations.

            Secondly, SAP-HANA doesn’t support SELECT FOR UPDATE because it performs INSERT-ONLY operations – no updates involved. Correct me if I’m wrong, please.

            Best regards,


          • Hi Bala,

            HANA supports SELECT FOR UPDATE; otherwise HANA would be too far away from database standards.

            What you call “insert only” is the delta merge, a technical detail of how updates are implemented inside HANA.

            They probably chose to implement UPDATE as a mixture of insert and merge because it makes updates less expensive. (UPDATE is not the command that column-based storage is optimized for.)

            Best regards


          • Hi Rolf,

            I’ve not forgotten this topic. I’ll soon publish a blog, “My Thoughts on Concurrency”, and will let you know when it is published.

            Best regards,