SAP HANA: What it means for business and for your career
With the announcement of the Ramp-Up of SAP High-Performance Analytic Appliance 1.0 (SAP HANA, #SAPHANA) we are witnessing the third wave of popularity of the in-memory (#inmemory) and HANA (#HANA) topic. The first wave was caused by SAPPHIRE’10, the second by October’s TechEds. How long will the current wave last? Two or three weeks? Maybe. But even if it lasts only a week, the topic should come back even stronger, and this time for good, around spring 2011, when SAP HANA 1.0 goes into GA (General Availability). We had better be ready for that on the Business side, the IT side, and whatever shades of grey lie in between the two.
First things first: Impact of SAP HANA on business
The announcement of HANA 1.0 and its components – the SAP In-Memory Computing Engine (ICE, pronounced by SAP folks ‘eye-see-ee’ and not ‘ice’; in some recent documents you will also find it as IMCE) and Sybase Replication Server – raised lots of technical questions. I am a technical guy too, as hopefully came through in my two previous HANA-related blogs. But this time I am going to focus on something slightly different.
Obviously SAP is not in the business of pleasing techies, but in the for-profit business of delivering business solutions; therefore, how SAP HANA and in-memory computing in general are going to impact the business of SAP customers is the first question to ask and answer. I hope to get the attention of SCN’s Business Process Experts and Business Analysts here.
Here are some of the statements I have heard so far about the experiences of alpha customers of SAP in-memory computing: “Moving from rigid shadow reporting on 7-day-old data to flexible exploration of data with 7 seconds latency” (SAP GFO); “Some batch reports that had taken 2-3 hours before took 2-3 seconds with in-memory processing” (Hilti); “‘Conversation with data’ allowed us to find patterns in car sales not visible before” (car manufacturer); “Business to IT: ‘Why did you hide this functionality from us before?’” (food & beverages company).
At the SAP Influencers Summit (#SAPSummit) yesterday, during his opening keynote, SAP Co-CEO Jim Hagemann Snabe said: “SAP HANA is not about making analytics faster, but about resolving business issues that cannot be resolved using conventional [methods]…” What does that mean?
Let me take the liberty of re-phrasing SAP’s roadmap of the business value planned along the evolution of its in-memory technology. Here is what was recently presented, as I understood and interpreted it:
Wave 1 (SAP HANA 1.0)
- The currently released SAP HANA 1.0 acts as a high-performing database for operational data mart analysis done with SAP BusinessObjects BI 4.0 tools on massive amounts of right-time, non-aggregated data. Some of that data is replicated near real-time from SAP Business Suite applications into ICE; other data is extracted from SAP Business Suite applications, SAP NetWeaver BW data warehouses and 3rd-party applications, then transformed using SAP BusinessObjects Data Services 4.0 and loaded into ICE (sometimes referred to as the ‘side-by-side’ scenario)
- An additional scenario is agile data mart analysis, an extension of the previous one. Here business users are given HANA-specific tools to load their PC files directly into HANA and mash that data up with corporate data (you might recall this being discussed some 2 years ago at TechEds in the context of the “Newton” project).
- The third scenario is vertical and LoB applications built specifically to run on the In-Memory Computing Engine, such as the announced “SAP BusinessObjects Strategic Workforce Planning, version for in-memory computing”. SWP, although not data-volume intensive, is based on complex modeling and simulation algorithms which – optimized for in-memory computing – allow real-time interaction with the application, without the need to run simulations as background jobs while taking a coffee break between run cycles.
Wave 2 (code name “HANA 1.5”)
- This is where ICE takes the place of the traditional RDBMS underneath SAP Business Warehouse (referred to as the ‘on-top’ scenario). It accelerates not only standard and ad-hoc queries featuring intensive data reads from mission-critical InfoCubes, as today’s BW Accelerator 7.0 scenario does, but any BEx query built on any BW InfoProvider, because now all BW data and metadata is stored, selected and aggregated in memory, and more and more OLAP functions are executed by the computation layer of ICE. Acceleration of all back-end BW operations should come as a side-effect bonus for us in IT.
- Development of ICE-optimized applications should gain momentum as SAP starts converting its applications (Bank Analyzer, BPC, APO etc. should be in the pipeline) and more and more ISVs join the party, delivering their vertical applications.
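To make the “stored, selected and aggregated in memory” idea a bit more concrete, here is a deliberately tiny Python sketch (purely illustrative, not ICE code; data and names are mine) of why a column-oriented in-memory layout lets any query aggregate on the fly, instead of depending on pre-built aggregates:

```python
# Toy column store: a table is a dict of equal-length columns.
# Hypothetical sales data, for illustration only.
sales = {
    "region":  ["EMEA", "APJ", "EMEA", "AMER", "APJ"],
    "revenue": [100.0, 250.0, 75.0, 300.0, 125.0],
}

def group_sum(table, key_col, value_col):
    """Aggregate value_col by key_col in one pass over two columns.

    A row store would have to touch every attribute of every row;
    a column store scans only the two columns the query needs."""
    totals = {}
    for key, value in zip(table[key_col], table[value_col]):
        totals[key] = totals.get(key, 0.0) + value
    return totals

print(group_sum(sales, "region", "revenue"))
# {'EMEA': 175.0, 'APJ': 375.0, 'AMER': 300.0}
```

Because the aggregation is cheap enough to do at query time, there is no InfoCube-style aggregate to maintain, which is the essence of the acceleration claim above.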
Wave 3+ (code names “HANA 2.0” and “HANA 2+”)
- Initially in this wave, data exchange between SAP Business Suite and HANA becomes bi-directional. Not only is data from the Business Suite replicated into HANA, but any data created or modified in ICE is replicated back into the Business Suite application as well, enabling a real-time link between operational planning and execution.
- And at the end we get to the stage where ICE is supposed to become a single data store for both transactional and analytical data processing, breaking the barrier between the OLTP and OLAP application worlds and instead treating data as a single multi-purpose asset.
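What “single data store” promises can be shown with a naive Python sketch (again illustrative only; all names are hypothetical, and this glosses over everything that makes the problem hard): one in-memory table serves both the transactional write path and the analytical read path, with no ETL or replication step in between.

```python
# Naive single-store sketch: the same in-memory records serve both
# OLTP (writes) and OLAP (aggregates). Hypothetical names throughout.
orders = []  # the one and only copy of the data

def book_order(customer, amount):
    """OLTP path: record a transaction."""
    orders.append({"customer": customer, "amount": amount})

def total_booked():
    """OLAP path: aggregate over the very same records, immediately."""
    return sum(o["amount"] for o in orders)

book_order("ACME", 500.0)
book_order("Globex", 1200.0)
print(total_booked())  # 1700.0 - the report sees the booking instantly
```

In today’s landscapes the booking and the report live in two systems with a load cycle between them; collapsing them into one store is exactly the barrier-breaking this wave describes.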
A few quick comments on the above waves
- Carrying my own ballast of experience, I find it difficult to accept some of the scenarios unreservedly, including the last one of a converged OLTP and OLAP data store. But obviously we cannot build tomorrow’s vision using yesterday’s knowledge.
- The roadmap I discussed above is for the Enterprise on-premise model; there are separate stories developing for the cloud, Business ByDesign and even BusinessOne. A prototype of BusinessOne running on ICE on an 8 GB RAM machine was available for demo during the SAP Influencers Summit 2010.
- I deliberately did not put dates next to the waves, as the exercise was not to discuss the timeline, but the business value enabled by the coming waves of technology transformation.
- As mentioned in a previous blog and its comments, I would not bet on the long-term existence of the ‘HANA’ acronym, and even less on the presented version numbers; therefore I described the roadmap for “SAP in-memory technology” in “waves”, but included “HANA x.x” references for consistency between my interpretation and SAP’s communication.
- I will bet, though, that the data modeling tools from SAP’s wide armory – BO Information Designer, In-Memory Computing Studio, BW Data Warehousing Workbench, Sybase PowerDesigner – will be evolving, influencing each other and consolidating along the same timeline.
And now important stuff: Impact of SAP HANA on your career
If SAP executes its in-memory vision and HANA/ICE marketing right, then HANA will be Larger-than-Life for our SAP world. It will not be just a hobby tool for a few niche specialists, as BW Accelerator is today, but will reshape the skills market and force every one of us to re-sharpen:
- Business will need to bring back the meaning of “Business Process Re-engineering” and learn how to provide break-through requirements going beyond “Product groups in rows, Y/Y KPIs in columns”
- Functional Consultants will need to learn how to implement and configure a whole new set of applications
- BI Designers will need to build all the calculation views in ICE and integrate them with the new BO Information Designer
- BW Consultants will get a whole new set of ICE-optimized BW objects to use in their modeling after the migration of BW to run ‘on-top’ of HANA
- Developers will need to start familiarizing themselves with writing in-memory-optimized code using the new ICE SQL Script language, and a few of them might even need to learn “L” – the new SAP procedural language for ICE programming
- Basis admins and DBAs will get not only a new toy to play with, but a whole new bunch of capabilities and tools to learn as well
- OS/DB migration consultants will get their hands dirty in migrations of BW DBMSs to SAP ICE
- Security consultants will need to design authentication and authorization around ICE, depending on the scenario in question
- Project Managers will need to keep an eye on the evolution of ASAP and any in-memory-project-related add-ons to it
And in the end, SAP Press will need to publish a separate catalog of books discussing all the implications of in-memory for the SAP world, because most of the existing books will require a re-write anyhow.
Last, but not least: What is under the hood of HANA and its components?
I feel a huge relief now. I am done with the implications that the in-memory shift will have on businesses and on people. That means I can move on to what I like most – discussing technology. But not now. It is almost 4am here and I still need to be ready for the job tomorrow, for which I am paid 🙂
No worries – if you still feel like you need to know more about how HANA works (I do!), that feeling will very soon be buried under an overwhelming amount of detail, like what keys are used by the hash function on contiguous chunks of a table’s columns stored in the CPU L3 cache to enable intra-parallelism of query joins on multi-core architectures running cache-aware algorithms. Multiply this by the noise factor surrounding new technologies, and the fun has just begun…
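That mouthful about hash functions and parallel joins boils down to a classic technique. As a purely illustrative sketch (plain Python, not HANA internals, all names mine): a hash-partitioned join splits both inputs by a hash of the join key, so each partition pair can be joined independently – one per CPU core, for example.

```python
# Illustrative hash-partitioned join (not HANA internals): both inputs
# are split by a hash of the join key, so each partition pair can be
# joined independently, which is what enables intra-query parallelism.
NUM_PARTITIONS = 4

def partition(rows, key):
    parts = [[] for _ in range(NUM_PARTITIONS)]
    for row in rows:
        parts[hash(row[key]) % NUM_PARTITIONS].append(row)
    return parts

def join(left, right, key):
    out = []
    for lp, rp in zip(partition(left, key), partition(right, key)):
        index = {}
        for r in rp:  # build a hash table for this partition...
            index.setdefault(r[key], []).append(r)
        for l in lp:  # ...and probe it with the matching partition
            for match in index.get(l[key], []):
                out.append({**l, **match})
    return out

customers = [{"id": 1, "name": "ACME"}, {"id": 2, "name": "Globex"}]
orders = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250},
          {"id": 1, "amount": 50}]
print(join(customers, orders, "id"))  # three joined rows
```

The real engine adds the parts this sketch ignores – running the partitions on separate cores and sizing the chunks to fit CPU caches – but the divide-by-hash idea is the same.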
If I haven’t discouraged you enough yet, then please feel free to contribute to the SDN Forum “SAP HANA and In-Memory Computing” by asking your questions and sharing your opinions.
-Vitaliy, aka @Sygyzmundovych, this time reporting from sleepy and cold Washington, D.C., USA