In-Memory – is it really non-disruptive?
First off – I am thrilled beyond words. Hasso and Vishal totally got rock star status in my books with this announcement. As I tweeted earlier, it is also a bit of a case of bragging rights for me – since it closely matched my predictions in "So, what is next in Business Intelligence?", which I wrote a year ago 🙂
While this is excellent news – I don’t think this totally resonates with the “no disruption” theme that SAP leadership was touting all three days at SAPPHIRE NOW. This could totally be attributed to my lack of understanding of the whole idea. So as always – jump in and comment on your thoughts.
Here is the general idea I got from the keynotes. The new DB gets a snapshot of the old DB, and then gets delta images as they happen. Apparently this takes only a few hours for the initial load and a few seconds for deltas. The new DB is columnar – which to my mind is something like the account model in BI, where every column can serve as an index. I assume SAP has some cool software that will do a metadata mapping from the ABAP DDIC to the new model, and then somehow keep it in sync. I am not sure what happens to persistence after it moves to the new DB. I guess in a disaster recovery scenario we could quickly reload everything from the old DB into a new appliance.
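To make that "every column can serve as an index" point concrete, here is a toy sketch in plain Python – nothing SAP-specific, and all the names (`ColumnStore`, `where`, etc.) are my own invention. It just shows why storing values column-wise, with an inverted index per column, lets you filter on any column without a full table scan:

```python
from collections import defaultdict

class ColumnStore:
    """Toy columnar table: one value list per column, plus an
    inverted index per column mapping value -> row positions."""

    def __init__(self, columns):
        self.columns = {c: [] for c in columns}
        self.indexes = {c: defaultdict(list) for c in columns}
        self.n_rows = 0

    def insert(self, row):
        # row is a dict of column -> value; append each value to
        # its column and record the row position in that column's index
        for col, values in self.columns.items():
            value = row[col]
            values.append(value)
            self.indexes[col][value].append(self.n_rows)
        self.n_rows += 1

    def where(self, col, value):
        # Index lookup instead of a scan – this is roughly why a
        # column store behaves as if every column were indexed
        return [{c: self.columns[c][i] for c in self.columns}
                for i in self.indexes[col][value]]

# usage
t = ColumnStore(["account", "period", "amount"])
t.insert({"account": "4000", "period": "2010-05", "amount": 120})
t.insert({"account": "4100", "period": "2010-05", "amount": 80})
print(t.where("account", "4100"))
# -> [{'account': '4100', 'period': '2010-05', 'amount': 80}]
```

Obviously the real thing adds compression, partitioning and a lot more – this is only the shape of the idea as I understand it.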
Here are my questions.
1. For the delta to happen from the old DB to the new DB, some program has to read the data in the old DB. Wouldn’t this mean that all the constraints of the old DB – its data model, lower-quality hardware, and so on – apply while sending the deltas to the new DB? So although the receiving system is super fast, wouldn’t this delta process take something more than a few seconds?
2. Would the application continue to use the ABAP DDIC? I am especially keen to understand if there are any changes to record locking in this new paradigm.
3. Are blade servers mandatory for in-memory analytics? Blades are not cheap, and this is one reason many customers hold back on BWA and BOBJ Explorer. With the new Sybase acquisition – can SAP also run a columnar DB on non-blade boxes?
4. I am also curious how SAP plans to harness the power of front-end machines. Phones don’t have terribly powerful processors now, but that is not to say that won’t change. So when your front end can do a lot – and most devices already have plenty of memory – what is SAP doing to harness that power? Wouldn’t it be a shame to waste it by doing only server-side optimization for analytics?
5. When, at some point in the future, the new DB becomes the only DB for SAP – will SAP revise the ABAP DDIC to optimize it for the new world? My point is that the existing modelling theories were all built at a time when storage was costly and processors were slow. So when processors and memory become very cheap – could applications, for example, use flat structures for all transaction data, as we already do for reporting, and keep normalized data only for master data?
6. For analytics we typically need to combine data from multiple sources, including the virtual way that Data Federator provides. Now if one of the sources is columnar and the others are row-based – is it easy to combine this data? I remember the trouble of combining key-figure-based cubes with account-model-based cubes in Plan vs Actual reporting in our existing world. How does SAP solve this?
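Just to spell out what I mean by the row/column mismatch in question 6 – this is my own toy illustration, not how SAP or Data Federator actually does it. Row-based plan data and columnar actuals have different physical shapes, so one side has to be pivoted into the other's layout before the merge:

```python
# Row-based "plan" data: a list of row dicts.
plan_rows = [
    {"account": "4000", "plan_amount": 100},
    {"account": "4100", "plan_amount": 90},
]

# Columnar "actuals": a dict of parallel column lists.
actuals_cols = {
    "account": ["4000", "4100"],
    "amount":  [120, 80],
}

# Step 1: pivot the columnar side into a row-oriented lookup by account.
actuals_by_account = dict(zip(actuals_cols["account"],
                              actuals_cols["amount"]))

# Step 2: a plain merge then gives the Plan vs Actual report,
# defaulting the actual to 0 where an account has no postings yet.
report = [
    {**row, "actual_amount": actuals_by_account.get(row["account"], 0)}
    for row in plan_rows
]
print(report)
# -> [{'account': '4000', 'plan_amount': 100, 'actual_amount': 120},
#     {'account': '4100', 'plan_amount': 90, 'actual_amount': 80}]
```

Trivial at this scale, of course – my question is how painful that pivot-and-merge step gets at enterprise volumes, across a federation layer.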
I have some more questions, but let me first get an idea of the ones above. Besides, my flight lands in 20 minutes and I am almost out of battery on my PC!