I recently attended a seminar on Big Data and how Hadoop is presented as a solution for dealing with oceanic amounts of data, whether static or a perennial stream, that must be analyzed to understand business trends, gain an edge over competitors, or crack the codes in scientific research.
Hadoop uses the strategy of bringing the computation to the data instead of transferring the data to the computation, thus reducing network delay, and then runs MapReduce jobs over that data. Hadoop, however, is not well suited for interactive data processing.
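To make the MapReduce model concrete, here is a minimal single-machine sketch of the pattern Hadoop distributes across a cluster: a map phase that emits key-value pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group. The word-count job below is the classic illustrative example, not anything specific to Hadoop's own API.

```python
from collections import defaultdict

def map_phase(record):
    # Emit (word, 1) pairs for each word in an input line.
    for word in record.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as Hadoop does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Sum the counts emitted for each word.
    return key, sum(values)

def run_job(lines):
    pairs = (pair for line in lines for pair in map_phase(line))
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

counts = run_job(["big data big hadoop", "hadoop data"])
# e.g. counts["big"] is 2
```

In a real Hadoop cluster the map and reduce phases run in parallel on the nodes that already hold the data blocks, which is precisely the "bring computation to the data" strategy described above.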
At the previous TechEd, I heard about SAP HANA and its strategy for dealing with huge volumes of data: an in-memory analytical appliance that enables real-time analytics. HANA can process structured, unstructured, machine-generated, and social networking data.
Typical real-time examples are analyzing the logs generated by enterprise web servers, or Facebook and LinkedIn data, which amounts to hundreds of terabytes across the world every day. But such data can also be analyzed with other data warehousing tools.
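As a toy illustration of the log-analysis workload mentioned above, the sketch below tallies HTTP status codes from access-log lines. The log format here is a hypothetical simplification (`<ip> <status> <bytes>`); at enterprise scale this same aggregation would run inside a warehouse or a distributed job rather than in one process.

```python
from collections import Counter

# Hypothetical access-log lines in the form "<ip> <status> <bytes>".
logs = [
    "10.0.0.1 200 512",
    "10.0.0.2 404 0",
    "10.0.0.1 200 1024",
]

def status_counts(lines):
    # Count occurrences of each HTTP status code; the second
    # whitespace-separated field in our assumed log format.
    return Counter(line.split()[1] for line in lines)

summary = status_counts(logs)
```

The point of the comparison in the text stands: an aggregation this simple does not require Hadoop or HANA specifically, and any warehousing tool can do it; the differentiators are scale and latency.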
My take is that SAP customers are mostly traditional enterprises with long relationships with their existing ERP platform, who appreciate stable upgrade paths, maintenance, and support to keep operations running smoothly.
These customers will be interested in running analytics against increasing amounts of data stored in both SAP and non-SAP systems, but most of them do not employ researchers or data scientists to experiment with still-developing technologies like Hadoop for distributed computing and Big Data analytics.
Finally, in my view, HANA is not about Big Data so much as fast data: helping all its customers make quicker, better business decisions to stay ahead of the competition. That puts SAP in a good position to expand its analytics business.