Vishal Sikka at SAP recently posted his thoughts about the HANA project on his personal blog. The post, titled “SAP, Software, and Amplifying Human Potential: Some Thoughts on the eve of TechEd,” offers great insight into the past, present, and future of SAP’s enormous Big Data effort.
HANA, SAP’s in-memory Big Data computing platform, began as an experimental in-memory project in 2002. In 2009, SAP officially put the project — enormous in both scope and cost — into motion. Since then, HANA has been a major success, becoming an industry go-to for Big Data solutions. Sikka attributes HANA’s meteoric rise to two factors:
(a) the new hardware reality of super-affordable x86 based machines that combine very powerful multi-core processors with the super-fast access to data in large memories that are now available in DRAM, and
(b) the new ideas in in-memory structures, especially the column store, the newly designed highly parallel structures and operators, and tons of new ideas in database technology.
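To make the column-store idea concrete, here is a minimal, hypothetical Python sketch (not SAP code, and far simpler than HANA’s engine) contrasting row-oriented and column-oriented layouts for an analytic aggregate such as `SUM(amount)`; all names and data are illustrative assumptions:

```python
# Row store: each record's fields are stored together, so scanning one
# column still walks every field of every row.
row_store = [
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "APJ",  "amount": 250},
    {"id": 3, "region": "EMEA", "amount": 175},
]

# Column store: each column is a contiguous array, so an aggregate reads
# only the values it needs -- cache-friendly and easy to parallelize,
# which is the advantage Sikka describes for in-memory analytics.
column_store = {
    "id":     [1, 2, 3],
    "region": ["EMEA", "APJ", "EMEA"],
    "amount": [100, 250, 175],
}

def total_amount_rows(rows):
    """Sum one field by scanning whole records (row-oriented)."""
    return sum(r["amount"] for r in rows)

def total_amount_columns(cols):
    """Sum one field by scanning a single contiguous column."""
    return sum(cols["amount"])

print(total_amount_rows(row_store))        # 525
print(total_amount_columns(column_store))  # 525
```

Both layouts return the same answer; the difference is how much data must be touched to get it, which is why column stores dominate for scan-heavy analytical workloads.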
However it came about, HANA has evolved into an impressive piece of technology and one of the most profitable solutions SAP has ever sold. More than just software and hardware, HANA is a method of processing and understanding data beyond the scope of just about everything else on the market.
Be sure to read the whole post; it’s an insightful and interesting personal note from an SAP executive.
Image courtesy of diginomica.