SAP and others have been talking a lot about in-memory computing. I honestly think that HANA will be a big game changer for SAP – especially if it also has a mobile play. As I processed the information I gleaned from TechEd over the last 24 hours since I left Vegas, I have a few questions.
Server vs Client – Is it all about Shrek? What about Donkey?
The point I want to make here is – with all the talk about big blade servers holding data in RAM, and having 64 (well, 128 soon) cores to process it, what is the role of clients? Does it matter at all whether client-side machines have any power? Here is what I think.
A lot of innovation is happening in the semiconductor world for chips that run in mobile devices. Every generation of these processors makes them more powerful while generating less heat. One day soon, we will have mobile processors that can do extremely powerful calculations. I actually think more innovation is happening here than on processors that are specific to servers.
Despite this, applications like HANA seem to depend on server-side processing and memory alone, without considering the possibility of using the power of client machines. I think this is a missed opportunity. While it is obvious that efficient server-side processing is a mandatory primary requirement for in-memory computing, why would you waste resources that are available?
I would think that an approach of “go where the resources are” is more suitable. It is not very hard to find out the processing power of client machines at run time. So if you can figure that out, the excess server-side power can be used for something else.
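To make the idea concrete, here is a tiny Python sketch of the “go where the resources are” rule. All of the names, the `task_cost` labels and the core-count threshold are my own assumptions for illustration, not anything SAP has described:

```python
import os

def client_capability():
    """Probe the local machine at run time; here, just the core count."""
    return os.cpu_count() or 1

def place_work(task_cost, client_cores, threshold=4):
    """Hypothetical placement rule: keep light tasks on a capable
    client, ship everything else to the server."""
    if client_cores >= threshold and task_cost == "light":
        return "client"
    return "server"
```

A real implementation would have to weigh bandwidth, battery and memory too, but even a crude rule like this frees server capacity for the work only the server can do.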
Similarly, for mobile devices the big restriction is bandwidth. So if client-side resources can be used intelligently, maybe the number of round trips to the server can be minimized. On a powerful mobile device, a lot of things like visualization, scrolling and so on can probably be handled on the client side without the server needing to worry about it. That should be a big improvement in user experience.
If you have a big jackhammer, will you look for a nail everywhere?
I get the idea of doing analysis fast. What I do not completely comprehend is the quantity of data that needs to be analyzed to take decisions. If you have 20 billion records in your system, would you put all 20 billion into your in-memory system? What is the value in that?
Since the world around us changes frequently, I have always felt that the importance of analyzing vast amounts of historical data to make decisions is somewhat overrated. There are exceptions – I readily agree – but I would think that in most cases you don’t have a real need to analyze data that is really old. Also, I am not sure if HANA has built-in predictive analytical abilities.
The same is true of which parts of your data you will load into HANA. Not all fields are relevant for reporting. So do you make a distinction in what you load into HANA? Or would you load everything into it just because you can? I would like to think that you do some data modelling in HANA and choose the information that is relevant for reporting. But that means we somehow have to define upfront which reports we can and cannot run.
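As a toy illustration of that modelling step, here is a Python sketch where the set of columns worth holding in memory is derived from the reports we already know we need. The report names and field names are invented for the example:

```python
# Hypothetical report definitions: each report declares the fields it uses.
REPORT_FIELDS = {
    "revenue_by_region": {"region", "amount", "fiscal_year"},
    "top_customers":     {"customer_id", "amount"},
}

def columns_to_load(reports):
    """Union of the fields used by any planned report -- everything
    else stays on disk instead of taking up RAM."""
    needed = set()
    for name in reports:
        needed |= REPORT_FIELDS[name]
    return needed
```

The catch, of course, is exactly the one above: any report that needs a column outside this set cannot be run until the model is changed and the data is reloaded.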
Read and write with same efficiency?
I readily admit that I am not a big expert in the theory – but from my limited understanding, I feel that columnar databases are better at reading than at writing. So to make in-memory computing work, I guess there are two separately optimized parts at play under the hood – one that reads efficiently and one that writes efficiently. Which would also mean that these two things frequently need to be combined. I would GREATLY appreciate some insight into how this works, if someone can explain it. I stopped by the Hasso Plattner Institute pod yesterday, but could not find anyone to ask. The guys and gals in white lab coats next door seemed pretty busy, with a big crowd around them. Too bad I could not organize my thoughts early in the day and chase someone down for a clear answer.
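For what it is worth, here is my mental model of the two parts as a toy Python class: a read-optimized main structure plus a small append-only write buffer that gets merged in periodically, with every read consulting both. This is only my guess at the concept, not HANA’s actual implementation:

```python
class ColumnStore:
    """Toy sketch: read-optimized main store plus a write-optimized
    delta buffer, merged periodically. Purely illustrative."""

    def __init__(self):
        self.main = []   # read-optimized: kept sorted for fast scans
        self.delta = []  # write-optimized: cheap append, unsorted

    def insert(self, value):
        self.delta.append(value)  # no re-sorting on the write path

    def merge(self):
        """Fold the delta into the main store; the expensive step,
        done occasionally rather than on every write."""
        self.main = sorted(self.main + self.delta)
        self.delta = []

    def scan(self, predicate):
        # A correct read has to look at both structures.
        return [v for v in self.main + self.delta if predicate(v)]
```

If someone who knows the real mechanics can confirm or correct this picture, I am all ears.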
Is price-performance enough to make customers switch?
I understand now that price-performance is in HANA’s favor for sure. But that is not how most companies decide on CAPEX budgets. Blades are not cheap – just because SAP could get their sandbox for half a million dollars cannot be extrapolated to mean that customers will get similar prices. And customers who already have heavy investments in hardware and database licenses will need an extra value proposition beyond price-performance to make the switch. I am sure the top 100 customers or so will make the move and get the benefits. But if this is to become mainstream, I think a lot more business value needs to be shown. Maybe this will happen over time, with SAP and partners building a lot of industry-specific analytical applications that make use of HANA.
What about BW?
No one has told me officially so far that BW will die if HANA becomes mainstream. Thousands of customers use BW, so I can see why SAP cannot scare them. But over time – and not a long time, say maybe five years or so – I would think that HANA can replace BW, BWA, Explorer and so on and serve as a super data warehouse. If it does not do that, and you still need parallel BW and BWA systems, I think that is a big letdown. I would like to think that SAP can somehow come up with an automated migration of BW onto HANA, without customers having to rip out BW and rebuild it in HANA. Again, if someone knows of any plans for this, please share.
What about data federation?
I am told that HANA comes preloaded with BO tools. I assume this includes Data Federator with the common semantic layer. So if 70% of the enterprise data sits in HANA and 30% in other systems, will the Data Federator within HANA be able to combine virtual data from elsewhere with the data it is holding in RAM? Or will the approach be that we are better off physically transferring the remaining 30% of the data into HANA as well?
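Conceptually, the first option looks something like this toy Python sketch, where a query combines rows held in memory with rows fetched on demand from a remote system. The data and `fetch_remote` are made-up stand-ins for whatever a federator would actually call:

```python
# Rows already sitting in the in-memory store (illustrative data).
IN_MEMORY = [
    {"id": 1, "src": "hana"},
    {"id": 2, "src": "hana"},
]

def fetch_remote():
    """Hypothetical stand-in for a federated call to an external system."""
    return [{"id": 3, "src": "legacy"}]

def federated_query(predicate):
    """Answer a query over both sources without physically moving
    the remote data into the in-memory store."""
    return [row for row in IN_MEMORY + fetch_remote() if predicate(row)]
```

The trade-off is plain even in the sketch: the virtual approach avoids duplicating the 30%, but every query pays the remote fetch, which is exactly why the physical-transfer option stays tempting.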
That is it – I think I get the rest of the story pretty well.