
I am sure most of you have experienced, read about, or at least heard of the new buzz in the Information Technology world – SAP HANA (SAP High-Performance Analytic Appliance). It is an amazing appliance to work with. Though it builds on existing in-memory technology, the whole idea has been an immediate success with end users and developers.

 

So, what is so great about this whole thing? I will start with some basics.

The Online Transactional Processing (OLTP) system was designed to handle day-to-day business transactions, while the Online Analytical Processing (OLAP) system was designed to handle analytical and financial planning applications. Both OLTP and OLAP are based on relational theory. Data in an OLTP system is arranged in rows, while data in an OLAP system is often organized in star schemas, where compressing the data is standard practice to improve query performance.
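To make the row-versus-column distinction concrete, here is a minimal Python sketch (illustrative only, not SAP code; the table and values are made up) of the same records laid out both ways:

```python
# The same sales records laid out row-wise (as an OLTP system stores
# them) and column-wise (as an OLAP column store does).

records = [
    ("2011-01-05", "DE", 100.0),
    ("2011-01-06", "US", 250.0),
    ("2011-01-07", "DE", 175.0),
]

# Row store: each record is contiguous -- fast for inserting or
# reading a whole transaction at once.
row_store = list(records)

# Column store: each attribute is contiguous -- fast for scanning or
# aggregating a single column across millions of rows.
dates, countries, amounts = map(list, zip(*records))

# An analytical query ("total revenue") touches only one column:
total = sum(amounts)
print(total)  # 525.0
```

The point is that an analytical query reads only the columns it needs, instead of dragging every full row through memory.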

 

Traditionally, the data has always been split between the OLTP and OLAP systems. An OLTP system is a prerequisite for an OLAP system, but it is only with OLAP that organizations can make better decisions: they can compare actual and planned data, and use historical data to understand the trend the business has followed over the past few years.

 

A decade of technological improvements:

There have been major developments in the technology space during the last decade, but two things stand out – the advent of modern multi-core CPUs (capable of providing an enormous amount of computing power) and the growth of main memory.

 

More recently, the use of column-store databases for analytics has become quite popular and has shown significant improvements in query processing, making reporting on the OLAP system even faster. And now the idea of a column-store database for OLTP data has become a reality. This has also resulted in massive data compression ratios (a minimum of 2.5 times) – I have seen compression ratios of up to 10 times! All this without running an explicit compression step: the columnar layout itself does the work.

 

The idea has given us BWA & Explorer and now the appliance known as SAP HANA.

 

What are the functions of SAP HANA?

  1. Real-time reporting – report directly on the source system in real time
  2. Acting as a pure database – SAP BW 7.30 on HANA

 

What does SAP HANA use?

  1. In-memory computing & an enormous main memory
  2. Multi-core CPU architecture

 

Advantage(s) of using SAP HANA?

  1. Information made available in sub-seconds or seconds
  2. Supports both Row store & Column store data storage
  3. Very high data compression ratios

 

Coming from an SAP BW background, I was most interested in and thrilled by the idea of SAP BW on HANA. I have penned my thoughts down here.

 

Insight on SAP BW 7.30 on HANA

SAP has brought out SAP HANA 1.0 SPS03, which is capable of supporting BW 7.30 (which must be on SPS05 at a minimum), i.e. SAP HANA acts as a pure database below the BW application layer. Another prerequisite is a standard database migration from the existing database to a HANA database. The BW system also has to be Unicode-enabled.

 

After the database migration and the upgrade to SAP BW 7.30, there is no real change to the current data flow, MultiProviders or queries – they stay as they are. So we keep the existing flow as-is and still take advantage of the in-memory capability of SAP HANA.

 

Advantage of having SAP BW on HANA:

  1. Faster loading & activation of standard DataStore Objects (DSOs)
  2. Eliminating the need to store data at multiple levels
  3. Elimination of the dimension tables & the E fact table
  4. BWA-like query performance
  5. In-memory-computing-based BW Integrated Planning
  6. Co-existence of SAP BW & HANA data models – use of transient InfoProviders

 

On the flip side:

  1. Systems with BWA already in place will not see any improvement in query performance

 

So, how does HANA fare against a BWA?

Both HANA and BWA are based on in-memory technology. The difference: BWA gives us performance gains only at the query level, but with HANA in place the same happens at the database level. With HANA, BWA indexes become obsolete, yet we continue to get BWA-like query performance.

 

Questions???

During various discussions with colleagues, and after going through articles on the internet, a lot of doubts arise. Many things remain unclear to me, and I believe to many others as well. Although I have written about my understanding of SAP HANA and SAP BW on HANA, the real motive here is to highlight some of the general doubts and my thoughts on them.


  • Q: Is it safe to assume that only an in-memory DSO or in-memory InfoCube is equivalent to an Analytic View?
  • A: As I understood from my discussion with the presenters at the SAP Boot-camp, only an in-memory DSO or InfoCube can be considered similar to an Analytic View (in HANA modeling). In fact, this is one of the reasons why reporting directly on an in-memory DSO should be considered instead of an InfoCube, provided they contain similar data. Consider using a MultiProvider on multiple in-memory DSOs!
  • Q: Though no changes to the data flow are expected after migrating to a HANA database, do the objects have to be migrated to in-memory objects to make full use of its capability?
  • A: This is a tricky question. From what I understand, after a standard OS/DB migration from your existing database to a HANA database, the objects and data flows remain the same. There are standard procedures as well as programs (RSDRI_CONVERT_CUBE_TO_INMEMORY, for InfoCubes only) available to convert standard DSOs and InfoCubes to in-memory objects, and they come with some prerequisites. Objects do not get converted automatically!
  • Q: When an InfoCube is migrated to an in-memory InfoCube, we go back from an extended star schema to a star schema – what changes have to be made to the way master data is used in the existing BW data flow?
  • A: The InfoCube structure is flattened, and the DIM tables and the E fact table are eliminated. Post-migration, the cube would contain only the F fact table and a single dimension table. I guess the attribute view comes into the picture here (attribute views are similar to master data).
  • Q: Is it feasible to have a HANA database beneath each of the BW instances – Dev/QA/Prod?
  • A: We know that the cost of procuring a HANA appliance is very high, so I believe clients may be interested in buying only a single appliance for their production system. But then how will we ensure version management of the objects existing across the various environments? Or they may go for a HANA database each for the Dev and Prod environments... I have my own doubts on this.
  • Q: How do we ensure data integrity checks at the SAP HANA level, as compared to SAP BW?
  • A: I think, as of now, there is no way to ensure an integrity check for data within SAP HANA. If we are using ETL-based replication into HANA, checks can be made at the BusinessObjects Data Services 4.0 level and the data can be cleansed. If it is the SLT method of replication, some sort of check will have to be done either at the ECC level or at the SLT server level (provided the two are on different servers).
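One simple way to approach the replication-integrity question above, regardless of tooling, is to fingerprint the source table and the replicated copy and compare. The following Python sketch is purely hypothetical – the table data is invented, and neither SLT nor Data Services exposes this exact API:

```python
# Compare a source table with its replicated copy using a row count
# plus an order-independent hash of the rows.

import hashlib

def table_fingerprint(rows):
    """Return (row count, XOR of per-row SHA-256 digests)."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(row).encode()).hexdigest()
        digest ^= int(h, 16)          # XOR makes the result order-independent
    return len(rows), digest

source  = [(1, "A", 10.0), (2, "B", 20.0)]
replica = [(2, "B", 20.0), (1, "A", 10.0)]  # same rows, different order

assert table_fingerprint(source) == table_fingerprint(replica)
print("tables match")
```

A check like this could run on extracts from both sides after each replication cycle; any dropped, duplicated, or altered row changes the fingerprint.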

 

There are many questions to which I am unable to get answers at this point, but I will open them up to the community and hope that someone can throw some light on these and enlighten us:

 

  1. Does data modeling have to be performed at multiple levels – HANA for real-time reporting, SAP BW, and the SAP BusinessObjects Universe layer?
  2. Is it safe to assume that the role of SAP BW is starting to diminish with the possibility of real-time reporting?
  3. Why do we need a BW BEx Query when BusinessObjects BI 4.0 is so well integrated with SAP HANA and SAP BW?
  4. Keeping the SAP roadmap for HANA in mind, version 2.0 aims at placing OLTP and OLAP on the same HANA database. Does this mean we will have a single system for ECC and BW?

 

References that have helped me put up this blog:

 

  • SAP HANA Boot-camp

 

  • A common database approach for OLTP and OLAP using an in-memory database: http://www.sigmod09.org/images/sigmod1ktp-plattner.pdf

  • First impressions on SAP NetWeaver BW 7.3, powered by SAP HANA: http://www.bluefinsolutions.com/insights/blog/first_impressions_on_sap_netweaver_bw7.3_powered_by_sap_hana_amazing/

  • Updated: The SAP HANA FAQ – answering key SAP in-memory questions: http://www.bluefinsolutions.com/insights/blog/the_sap_hana_faq_answering_key_sap_in_memory_questions/

  • SAP in-memory Business Data Management blogs: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/t/135

 
