
I’ve been keeping a quiet eye on SAP HANA for a while now… and I thought I’d showcase a few predictions for SAP HANA in 2014. 2014 will, presumably, contain a further two releases of SAP HANA, SP08 and SP09, which will as usual contain a raft of new functionality and apps. Here are some of my favorites. None of this is official product strategy!

[Update June 2014] I’ve updated this blog with how close my predictions are so far!

SAP River

SAP River is a new, descriptive, development language based on SAP HANA. I’ve been using it and it is a fabulous way to build next-generation business applications: it is fast, simple and easy to use and I can build apps in minutes rather than days.

It was released with HANA SP07 into an early adopter program and from what I can see, it is likely to be released formally with HANA SP08. We will see some new features, performance improvements and there will be a big focus on usability. If you are considering building apps in 2014 then I highly recommend looking at the River platform.

[Update June 2014] River made it into HANA SPS08 but it isn’t officially GA. You can go live with it, but you have to go into an Early Adopter program.

Lumira on HANA

SAP Lumira was born as Visual Intelligence in January 2012, was renamed Lumira in 2013, and was completely rewritten with a responsive design – meaning it can produce dashboards and visualizations that work on any device that supports HTML5.

The kicker is this: SAP Lumira Server is on its way, and this will be a version of Lumira that runs inside SAP HANA, as an app. It looks like it will allow for information exploration, like SAP Explorer, plus the ability to publish dashboards and visualizations into the HANA appliance, which will then run in-memory, on any device.

Lumira isn’t formally tied to the HANA release schedule so I hope to see an early release in Q1, with a proper release later in the year.

[Update June 2014] Lumira Server is now out there, and Lumira Desktop will come with the SP17 release.

HANA Graph Engine

Research papers have been written about the HANA Graph Engine and my guess is that the Graph Engine already exists in HANA, but it is disabled by default. The Graph Engine is the main missing piece in the HANA story – with this, you will be able to build almost any app.

My guess is that it will be used in the next generation APO Engine, because a Graph Engine is well suited to solving approximations of the Traveling salesman problem. I’m looking forward to this but I don’t expect to see it until SP09.

[Update June 2014] I’m continuing to hear whispers about the HANA Graph Engine!

HANA Complex Event Processing

SAP have a market-leading CEP engine called the Event Stream Processor (ESP), which integrates into SAP HANA. However, this leaves one serious business problem: how do I react to events based on information that happened in the past? Currently this requires a call-out to SAP HANA via a network socket, which is (relatively) inefficient.

I believe that SAP will write a CEP Engine based on ESP integrated directly into the HANA Appliance – this will completely differentiate SAP from everyone else in the Complex Event Processing market. I don’t think we will see this until at least SP09.

[Update June 2014] Nothing from this yet, though I hear that the teams are trying to figure out how this will work.

Web Development Platform

There is a basic Web-based IDE in SAP HANA SP06, which has been enhanced in SAP HANA SP07. However, for serious HANA development it is currently necessary to use the Eclipse workbench, for which SAP built a plugin called HANA Studio. For HANA to become a serious application platform, it needs a web-based development platform on which you can build apps, including SAP River apps.

I believe that a first revision of this will come in HANA SP08, and it will be further refined in SP09.

[Update June 2014] This has been released into beta as the SAP River RDE (previously WATT).

SAP Magnet

Back to apps again for a moment. There was a blog written quietly last year about a project called SAP Magnet. This is an awesome project that consumes contextual information from your email, calendar and news sites, and surfaces it within the Fiori Launchpad. So you open up your iPad and it shows you your next meeting, your recent interactions with that person, their stock results, and information from the Business Suite like outstanding invoices.

I doubt we will see a release of Magnet until late 2014 or early 2015.

[Update June 2014] I spoke on this topic at the SAPPHIRE and ASUG annual conference. We are currently looking at custom development opportunities for contextual awareness.

Other Business Applications

What I hope to see in 2014 is a small collection of business applications based on the SAP HANA Platform. We have yet to see HANA’s capability to transform industries the way SAP R/3 transformed industry in the 1980s and 1990s.

Some of this will be based on SAP Business Suite on HANA and the existing apps being ported to HANA, but the platform is mature enough for very complex industry specific applications to be built, solving some of the world’s hardest problems. I hope SAP put sufficient focus on this.

[Update June 2014] This has started to happen with Simplified Financials, and SAP is rewriting the “S” Business Suite.

HANA as a Cloud Application Platform

All of this leads to the big-ticket item – the Cloud Application Platform. SAP will sell variants of this: Infrastructure as a Service, Platform as a Service, on-premise, SAP Cloud, Partner Cloud. It will subsume the existing HANA Cloud Platform and HANA Enterprise Cloud, it will be purchasable on demand, on a subscription basis, or via a perpetual license (Bring Your Own License), and it will allow web-based application development and delivery. It will be awesome.

If SAP manage to do this in 2014 then I will be incredibly impressed. They have all of the components to do it and make it a success.

[Update June 2014] This has been mostly completed with the HANA Cloud.

Predictive Analysis

I suspect that SAP will take the KXEN Infinite Insight app and build it directly into SAP HANA, as they have done with Lumira. They will hopefully then integrate this into HANA Live for line-of-business and industry application scenarios within the SAP Business Suite.

I’d expect this to be a high priority and we may see a first version in the SP08 codebase, with major refinements in the SP09 release.

[Update June 2014] This is coming in the Lumira SP17 release.

Competition

It’s worth a major note: 2014 is the year when the competitors arrive at the party. This will come from the traditional RDBMS vendors like Oracle, IBM and Microsoft. Anyone who reads my content will know that I believe they fundamentally don’t understand the point of SAP HANA and have missed the mark with their offerings, but the power of their install base shouldn’t be underestimated, nor should the power of their sales and marketing teams.

It will come from the analytic vendors like Teradata, Qlikview and Tableau as they build out their in-memory offerings.

And it will come from the newer database and NoSQL players like VoltDB, Hadoop and MongoDB. These guys have realized that NoSQL doesn’t work for high-performance apps, and some are heading in a similar direction to HANA.

All of the other vendors are at least 2 years behind SAP HANA, but some of them have very deep R&D budgets. SAP is lucky that the very large vendors have big database revenue streams to protect, which means they face the innovator’s dilemma.

[Update June 2014] Amazingly the big vendors, whilst they are starting to write in-memory software, really don’t get it.

Final Words

In my opinion, 2014 will be the defining year for SAP HANA. The platform is by now mature enough for use in any business scenario, it will be available in any conceivable way you want to consume it, and the new features and functionality differentiate it from what else is on the market.

However, there are the challenges of competition coming to the market, plus the mounting pressure of moving to a subscription-based license model. This will no doubt ensure that 2014 is a memorable year.

[Update June 2014] We will certainly remember 2014 as being the year when HANA became mainstream. Most of my predictions have already come true and the rest are coming. It’s very pleasing to watch, indeed.


45 Comments


    1. John Appleby Post author

      Well this is one part of the journey that is really unclear for everyone. Certainly there will be more support for Smart Data Access, but that’s to be expected – no need for any predictions there. It’s likely we will see data temperature control, but again, no surprises.

      There is a question of whether SAP will change the data persistence for HANA away from the current snapshot approach into either IQ, or Hadoop storage. It’s possible.

      More interesting is how Hadoop infringes on HANA’s turf – getting into real-time, and high performance aggregation. We see some of that with the new aggregation frameworks in both HANA and MongoDB.

      Both products are moving at 100mph, so it is hard to see where this will go.

    1. John Appleby Post author

      I’m not so sure. Here’s why:

      1) Availability of large single node systems.

      As of Q4 2013, 4TB single node systems were available. This covers most customers. Through 2014, 12TB+ single node systems will become available. That’s a lot of business suite.

      2) Improvements in working set

      Each revision of HANA requires less RAM. In SP8 and SP9 I expect to see improvements in how much RAM you need for a specific data set.

      3) Non-active data, smart data access, archiving and HANA storage

      This will be improved through 2014 and we may even see a completely different storage system for HANA based on Sybase tech. This will lead to smaller Business Suite systems.

      4) Solving of the underlying problems for scale-out on SoH

      There are some underlying engine issues for SoH that need solving including join collocation, partition strategies, SQL engine optimisations. These will come in coming releases.

      But overall I don’t think large SoH systems will be a problem in 2014. What do you think?

      1. Denny Liao

        Thanks for the insight!

        I think 4) is still a big warning sign for lots of SAP customers. Large customers won’t be confident if the scale-out problem cannot be addressed; nobody expects OOM issues once they put their production system on HANA, or to have to replace hardware frequently.

        Small customers will wait and watch how the big players proceed. This concern has a huge impact on SoH adoption in 2014, from my personal point of view.

        1. John Appleby Post author

          We’ll see how this plays out, but you can already fit a fairly large ERP system into 2TB of main memory. Certainly, SAP’s 65k employees all run on a single IBM x5 server with 4TB of RAM.

          Once we have 12TB systems (this quarter, I believe), I think that will cover all but 5-10 of SAP’s largest customers.

          In all honesty I think it’s not the major reason impeding Suite on HANA adoption in large customers. Mostly, the difficult part is doing any kind of change on a large, heavily customized, ERP system.

          1. Henrique Pinto

            It’s usually an item in the IT checklist for adopting a new solution.

            “Can I grow 2, 5, 10x without replacing the whole HW infrastructure?”

            Today the answer is no for some larger customers, though I agree with you it’s “YES” for 90+% of the installed base.

            I’m not arguing whether that’s a valid question or not, it’s just the way it is…

            @Denny,

            you said: “Small customers will wait and watch how the big players proceed”.

            Oddly (or not) enough, the expected behavior is kind of the contrary. At least in my geography, 90% of SAP’s new customers (usually, small or medium companies) already go for Suite on HANA. And the adoption rate I’ve personally perceived has been faster in either the VERY large customers that are innovation oriented (of course not for their whole ERP instance, but smaller systems such as CRM) or medium customers that are not so much risk-phobic and are willing to try out a new project with considerable cost saving potential.

  1. Michael zhen

    Thanks for your great insight! Even though I’m just starting with SAP HANA, I’ll keep an eye on it in 2014! 2014 is absolutely a key year for both SAP and its competitors. Let’s see.

  2. Chih Lai

    I would really like to try out HANA’s graph engine, since I have a huge graph dataset.

    Is it possible to get a test version of the HANA graph engine?

    Thanks.

    Scott

  3. Mikhail Budilov

    Stability.

    OOM (out of memory) handling is one of the main show-stoppers for HANA in production systems.

    After an OOM there is currently only one way forward – restart the whole HANA box. Restarting the DB on every OOM is no way to operate.

    The HANA dev team must eliminate this problem; we should never get OOM errors in HANA in the future.

    1. John Appleby Post author

      Mikhail,

      You keep posting about this over and over 🙂

      It’s not a problem I see on modern HANA revisions. To my mind there are three things that matter:

      1) OOM on HANA can and should occur. This means that your workload exceeds the amount of available memory. Either with one huge query or several smaller queries.

      2) When OOM does occur, HANA should handle it gracefully. This has been the case in my testing – and I’ve stress-tested HANA on huge datasets – for at least 6 months now.

      3) SAP should look to reduce query memory bubbles, and this was a big focus in SPS07. In many cases, the SPS07 optimizer reduces memory requirements for queries by 100x, especially when you are using SQL against raw tables rather than views.

      By the way, are you running on HANA Rev.72? Rev.70 has a lot of OOM improvements and Rev.72 has a lot of general stability improvements. If you are not on Rev.72 yet, I’d be interested in how it improves your scenario.

      In any case, clearly your scenario causes you to have a problem, and there must be something specific to your scenario. Raise an OSS request, and let’s get it fixed.

      John

      1. Mikhail Budilov

        Key word: should 🙂

        We’re in production, and this is news from real life. OOMs really make end users and consultants nervous; with OOMs we can’t commit to any serious SLA.

        We’re on Rev.72 now. OOM still happens, and HANA still doesn’t recover after OOMs.

        We’ve already opened a couple of OSS messages with SAP and are waiting for news.

        I haven’t seen any good solution from the HANA developers for OOM handling.

        I’ve asked them to implement a smart session manager in HANA that rules out any possibility of OOM.

        OOMs in HANA must be in the past, not the future!

        Otherwise, implementing ERP on HANA will be a CIO’s last decision.
        1. John Appleby Post author

          Mikhail, understood, but also understand that I’ve got many live customers, and don’t have any OOM problems at any of those. There aren’t 100 people screaming about it here. My guess is there is something specific in your design that is triggering a bad condition.

          That doesn’t take away from the fact that you have a problem which is probably a software bug, but it does mean that a focussed effort on your specific scenario will be required.

          Have you had a proper audit of your overall environment done by a HANA expert?

          You bring an excellent point that I keep meaning to blog about: the feeling towards an organization about HANA is all about how the first project goes. Expectations are high, and the project must live up to those.

          1. Mikhail Budilov

            A software bug is not the problem; the problem is HANA’s current approach to near-OOM and OOM handling: one indexserver per node, so on OOM all sessions on that indexserver are terminated and the indexserver is restarted. More interesting still, with an OOM on a scale-out (multi-node) system, recovery must be even more complicated.

            OK, let’s wait until the first 100 customers are screaming about OOM problems in their BW on HANA, ERP on HANA, or native HANA systems.
            1. John Appleby Post author

              Hey Mikhail,

              I’m not saying you don’t have a problem – clearly you do – but I have stress-tested a bunch of HANA appliances lately with massive and unreasonable workloads. In every case, when the appliance got over-stretched, it would cancel expensive queries and move on, without crashing.

              So my suspicion is that there is something specific to your scenario which is triggering a product error. Let’s get the support people to get to the bottom of it and fix that bug.

              In parallel, you should consider getting a HANA expert to look at your environment and design, because in my experience, crashing systems are usually caused by environmental and design mistakes.

              John

              1. Mikhail Budilov

                John, please try this on scale-out instead of scale-up.

                If a user runs SELECT * FROM big_table – no matter which client they use (HANA Studio, BEx, or BO), and no matter which HANA expert has examined your system – within 1–2 minutes you get 99% CPU on all of your nodes, then OOM, and after the OOM, system recovery. You never get anything similar in a disk-based DBMS (one session never kills the whole server).

                If the current HANA session manager allows OOM situations, that’s a bad approach to stability (IMHO).

                Again – IMHO SAP must focus on stability, and OOM situations are the first candidate.

                That’s what I want to see in SP08 and SP09.

                1. John Appleby Post author

                  We are going around in circles 🙂

                  My team pretty much only does HANA scale-out. We usually work on 4–10TB clusters, which is the acceptable price point with the current generation of Intel CPUs.

                  Hasso said: “SELECT * is for losers”. If you are doing SELECT * then you are using HANA wrong. I’d be happy to get someone to look over your project and understand the design problems.

                  Totally agree that one process should not kill HANA, but like I said… it doesn’t, in my testing, even with thousands of complex queries running simultaneously.

                  You bring up an interesting point and this appears to be the case. SAP is focussing on more mission critical scenarios for SPS08 and SPS09. I understand there should also be a workaround which will limit session memory usage.

        2. Henrique Pinto

          Hi Mikhail,

          It truly sounds like you have a HANA memory sizing issue.

          Saying that OOM shouldn’t happen on HANA is the same as saying that tablespace crashes shouldn’t happen on <insert any RDBMS here>.

          For sure, the platform should provide all the means for the DBA to be alerted and to prevent tablespaces from crashing, but if they are getting much more data than initially planned, they are gonna crash eventually. In the same sense, if your HANA box is getting more data than initially planned, it is gonna OOM.


          One thing that has to be very clear for the “HANA DBAs” is that, now, RAM memory has two different roles:

          – RAM is still used as the work area for query executions – this is what John mentioned has been addressed in the latest SPs, especially SPS07. The direction is that HANA query execution is gonna require less and less RAM as work area;

          – RAM, on HANA, is also used for data store. This means that maintaining RAM occupation % on a safety level is as critical for HANA as it is to keep disks with a safety occupation % on <insert any RDBMS here>.


          On disk-based RDBMSs, while it’d be safe (and even desired) to keep high RAM utilization rates (meaning you were effectively utilizing your HW, without a lot of idle processing capacity), on HANA, it means you’re dangerously dwelling on your maximum data storing capacity.


          It is the kind of mindset that has to be changed when you’re going for an in-memory DB (not particular to HANA).

          1. Mikhail Budilov

            No, Henrique.

            The memory is more than enough. The problem is the current HANA approach – on OOM, the indexserver restarts and all current sessions are killed.

            As a simple example: you can’t limit users who execute reports on medium and huge data volumes (say, 100GB of RAM per report or transaction).

            If 10 users run a report at one time, you need a minimum of 1TB of RAM; if 100 users, a minimum of 10TB; and so on. Oops – no matter what your sizing, you will get OOM.

            One session must never affect another session or the server – right?

            If one session is eating more and more memory and OOM is near, the session must be killed by the session manager, all of its transactions rolled back, and its resources freed.

            Memory management must exclude any OOM. Why is it so difficult?
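            The back-of-the-envelope arithmetic above reduces to a one-line calculation (the 100GB-per-report figure is the commenter’s illustrative assumption, not a HANA number):

```python
def peak_ram_needed_gb(concurrent_users: int, ram_per_report_gb: int = 100) -> int:
    """Worst-case RAM if every concurrent user runs one heavy report at once."""
    return concurrent_users * ram_per_report_gb

print(peak_ram_needed_gb(10))   # 1000 GB, i.e. ~1 TB
print(peak_ram_needed_gb(100))  # 10000 GB, i.e. ~10 TB
```

            At 100 concurrent heavy reports this already exceeds any single-node appliance of the time, which is the commenter’s point.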

            1. Henrique Pinto

              Sizing a HANA landscape is not all about RAM for data storage.

              Concurrent sessions MUST also be considered in the RAM sizing.

              For very user-intensive scenarios, the 50/50 rule might not be applicable and you could need more nodes than you’d assume with the rule of thumb.

              That’s why you always need to validate the pre-sizings SAP delivers with the HW experts and with the most detailed usage information possible, because all SAP does is, as mentioned, a pre-sizing. SAP won’t ever provide a full-fledged final HW sizing, and HW partners that just replicate what SAP proposes are bound to get it wrong eventually.

              1. Mikhail Budilov

                If OOM can potentially happen, be sure it will happen – no matter how: user counts, memory leaks, or queries working with huge amounts of data.

                If it’s possible, it will happen.

                HANA must be a stable RDBMS. Until the developers close off these possibilities (OOMs), IMHO HANA can’t 100% fulfill the requirements of an RDBMS for transactional systems with strong SLAs.

                1. Henrique Pinto

                  You’re still missing the point.

                  Saying that OOM shouldn’t happen even if you get your memory full is the same as saying that HANA should swap memory to disk, which means it stops being a 100% in-memory database. So if SAP implements that, HANA will stop being a 100% IMDB, by definition.

                  So, you’re basically saying, in practical terms, a 100% in-memory DB is not achievable?

                  1. Mikhail Budilov

                    No, I never said anything about swap.

                    What I want: if the situation approaches OOM, the session manager must kill the top session by RAM usage, then wait 30 seconds and check again. With this approach, OOMs will be in the past.

                    The algorithm is simple:

                      LOOP.
                        totalSessionsMemory = GetTotalSessionsMemoryPercent().
                        IF totalSessionsMemory >= 90.
                          GetMostRAMHungrySession().KillSession().
                        ENDIF.
                        WAIT 30 SECONDS.
                      ENDLOOP.
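                    A minimal runnable sketch of the watchdog policy in the pseudocode above – the session map, the 90% threshold, and the get_sessions/kill_session hooks are illustrative assumptions, not HANA internals:

```python
import time

def pick_victim(sessions, capacity_bytes, threshold=0.90):
    """Given a dict of session_id -> RAM bytes in use, return the most
    RAM-hungry session id once total usage crosses the threshold,
    or None while usage is still safe."""
    used = sum(sessions.values())
    if used < threshold * capacity_bytes:
        return None
    # Kill the single largest consumer first.
    return max(sessions, key=sessions.get)

def watchdog(get_sessions, kill_session, capacity_bytes, interval_s=30):
    """Poll forever: kill the top consumer whenever usage nears OOM."""
    while True:
        victim = pick_victim(get_sessions(), capacity_bytes)
        if victim is not None:
            kill_session(victim)
        time.sleep(interval_s)
```

                    As the thread goes on to discuss, killing the largest consumer is only a heuristic – picking the right victim is the genuinely hard part.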

                    1. Henrique Pinto

                      Easier said than done.

                      What if the other session is still active?

                      You can’t just kill and drop the user connection.

                      Of course, HANA memory management is not optimal, but just killing sessions or swapping is no final solution if you still consider RAM something you don’t need to grow as fast as your data grows.

                      Again, RAM needs to be treated as disk now – and you’d never let your disks get full, would you?

                      Best,

                      Henrique.

                      1. Samuli Kaski

                        It is really hard to come up with an algorithm that always kills the right process (the one doing harm to every other process). I remember, many years ago, the Linux kernel developers going through the same exercise. The only 100% working solution is to have per-process/thread quotas.

            2. Henrique Pinto

              And an additional thing. On what rev are you?

              I’ve seen some memory leak issues up to Rev.69.

              I’d strongly recommend the latest patches of Rev.69 or Rev.72.

                  1. Mikhail Budilov

                    Not everything, but the DSO activation process needs a huge amount of DRAM (especially with DSOs of 250–400+ million rows).

                    For example, we got our last 5 OOMs because HANA still doesn’t have any overload limits; users sometimes build queries in BO which eat all the available memory in our scale-out HANA within a couple of minutes.

                    After an OOM we also sometimes get backup process failures, zombie sessions and unreleased memory.

                    That’s our HANA experience. I believe the HANA dev teams can handle these challenges in the future.

                    1. John Appleby Post author

                      As you know, Rev.80 now has a statement memory limit which will help you. Are you still on Rev.72?

                      These are not big DSOs and DSO activation shouldn’t take this much memory. Are you using HANA-optimized DSOs or regular DSOs?

                      If you use HANA-optimized DSOs then please check SAP Note 1646723 which gives instructions on how to reduce DSO activation memory.

                      As of BW 7.3 SPS10 you can now go back to the regular DSO concept, which will probably solve your problem. Look here for more detail.

                      I’ve said this before, but I really recommend you get a good BWoH consultant to look at your problems.

                      1. Mikhail Budilov

                        John,

                        Thanks for the advice!

                        My BI team already has certified consultants for BW and HANA, and we’re in very active collaboration with SAP support and the HANA dev teams.

                        Of course we’re on standard DSOs; with in-memory DSOs we got a lot of OOMs in 2013. In-memory DSOs were a real pain – especially the DSO log. The in-memory DSO was a developer mistake.

                        We’re planning to go to Rev.81 for the statement memory limit, but I don’t expect it to be a silver bullet.

                        But OK, let’s stop talking about OOMs!

  4. Suseelan Hari

    Hi John,

    Another masterpiece.

    Thank you so much for sharing these 10 predictions for SAP HANA. This kind of consolidated information is very useful for understanding the recent updates in one package. Once again, thank you for the update.

    Regards,

    Hari Suseelan

  5. Basar Ozgur Kahraman

    Hi John,

    While looking for some info about the HANA graph engine, I found myself inside this blog 🙂

    I guess there isn’t enough info about the graph engine on the web, so I’m following your blog for updates 😉 Thank you!

