
This article explains how running SAP BW on SAP HANA can reduce the time IT and your system spend on data loading, maintenance, and modeling.


SAP HANA introduces key changes and has the potential to alter how the business operates. The key changes for the business are improved reporting performance, real-time data access, and the ability to simulate and plan faster.

Today, the majority of the cost associated with managing and maintaining a data warehouse lies in the time and staffing it takes for IT departments to manage the business warehouse environment.

By opting for SAP HANA, IT departments can save time on the management and maintenance of SAP NetWeaver BW. This results in potential cost savings through a reduction in expensive operational tasks (e.g., indexing and tuning), increased modeling flexibility, simplified maintenance, and increased loading performance. These points are discussed in detail below:

Increased Modeling Flexibility

SAP HANA provides the ability to change data models to evolve with business requirements.

Currently, within SAP BW on a traditional RDBMS, modeling requires specialized skills: dimensions and their cardinality must be designed carefully, and any change to line-item dimensions or cardinality has to be handled separately (although the remodeling toolbox introduced with BI 7 does cover some of this).

However, with SAP NetWeaver BW on SAP HANA, the dimension tables are no longer part of an InfoCube definition, and the data is primarily stored in columnar format. This format allows you to quickly remodel by going to the Administrator Workbench (via transaction code RSA1), dragging and dropping dimensions in and out of InfoCubes, and activating the object. Performing this remodeling operation issues the change directly within the SAP HANA database and reorganizes the data as required.

As no aggregates are required with SAP NetWeaver BW on SAP HANA, there is no need to rebuild aggregates once the data model is changed — meaning IT departments can respond to business requests that involve changing the data model more quickly.

Decreased Load Windows

The business gets fast reporting, planning, and real-time data from SAP HANA.

However, what about the existing load processes? One common ailment in SAP NetWeaver BW environments has been the load window required to make data available as part of nightly batch loading cycles. With ever-increasing data volumes and pressure from the business to make data available within tighter service level agreements (SLAs), IT departments struggle to meet the load windows those SLAs require; most SAP NetWeaver BW engagements take a beating on their availability SLAs because of delays in existing load processes.

Thanks to SAP HANA, its two new "in-memory" objects offer a variety of improvements to loading performance:


  • The in-memory optimized DataStore Object (DSO) has its delta calculation and activation logic implemented in SAP HANA instead of in the ABAP application layer (as shown in the figure below). Moreover, all of the DSO data resides directly in in-memory column tables within SAP HANA. This leverages the in-memory and massively parallel processing (MPP) capabilities of SAP HANA to speed up the delta calculation and activation logic of a DSO.

       [Figure: BW 7.3 DSO new feature]

  • The in-memory optimized InfoCube comes with healthy changes to its schema. It has a simplified schema optimized for data loads, in which dimension tables are no longer generated as part of the InfoCube schema (as depicted in the figure below). Additionally, SAP NetWeaver BW InfoCubes traditionally stored compressed data in an E fact table and uncompressed data in an F fact table. With in-memory optimized InfoCubes, the E/F fact tables are consolidated and partitioned as part of the InfoCube schema. This storage mechanism is internal to SAP HANA and doesn't require any configuration or management by IT departments. The new schema provides faster load times into these InfoCubes, as dimension IDs no longer need to be generated by the system as part of the load process.

      [Figure: SAP HANA schema changes]
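To make the delta-and-activation idea concrete, here is a minimal, hypothetical Python sketch (not SAP code; all names are invented for illustration): activation compares each queued record with the active table and derives before/after images for the changelog, which is essentially the work that now runs in-memory inside SAP HANA instead of in the ABAP layer.

```python
# Toy sketch of DSO activation (illustrative only, not SAP code).
# New records land in an activation queue; activation compares them
# with the active table and derives changelog images.

def activate(active, queue):
    """active: {key: record}, queue: list of (key, record).
    Returns the updated active table and a changelog of image rows."""
    changelog = []
    for key, record in queue:
        if key in active:
            # Changed record: emit a before image (old values; in a real
            # DSO the key figures are sign-reversed) and an after image.
            changelog.append(("before", key, active[key]))
            changelog.append(("after", key, record))
        else:
            # New record: a single "new" image is enough.
            changelog.append(("new", key, record))
        active[key] = record
    return active, changelog
```

Because each key can be processed independently, this comparison step parallelizes naturally, which is why pushing it into SAP HANA's MPP engine speeds activation up.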

The significance of in-memory optimized InfoCubes and DSOs is that there is improved performance within every step of the load process as follows:

  • When data loads into a DSO within SAP NetWeaver BW, the data is loaded directly into memory, as memory is the primary persistence for SAP NetWeaver BW on SAP HANA. The loading of data into a DSO provides performance improvement for the loading portion of the extraction, transformation, and loading function.
  • When activating the DSO to consolidate the changed data, the activation is processed within SAP HANA instead of the ABAP application tier, improving performance due to the activation taking place in memory and the activation being parallelized as part of the MPP computing architecture of SAP HANA.
  • When loading data from the in-memory optimized DSO to the in-memory optimized InfoCube, there are performance improvements when extracting from the DSO (as the data is being read from memory), as well as loading into the InfoCube. This is because dimension IDs are no longer required to be generated and the data is being loaded into an in-memory persistent column table.
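The dimension-ID point can be illustrated with a toy contrast (hypothetical Python, not SAP code; the function and field names are invented): a classic star-schema load must look up or generate a surrogate dimension ID (DIMID) for every row, while the flattened in-memory layout simply stores the values directly.

```python
# Toy contrast (illustrative only, not SAP code).

def load_classic(rows, dim_table):
    """Classic star schema: every row needs a DIMID from a dimension table."""
    fact = []
    for row in rows:
        key = row["customer"]
        if key not in dim_table:                 # extra lookup/insert per row
            dim_table[key] = len(dim_table) + 1  # generate a new surrogate DIMID
        fact.append((dim_table[key], row["amount"]))
    return fact

def load_flat(rows):
    """In-memory optimized cube: no dimension table, values stored directly."""
    return [(row["customer"], row["amount"]) for row in rows]
```

Dropping the per-row lookup-or-generate step is one of the reasons load times into in-memory optimized InfoCubes improve.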

No changes to the existing schedules are needed. Rather, a few migration steps turn existing objects into in-memory objects. Thus, we can get significant reductions in our loading times, which helps meet the SLA criteria for loading.

Additionally, because SAP HANA’s in-memory architecture does not require indexing and aggregate tables to speed query response, this portion of the load time is reduced. Also, in the past, once the loading was complete, users had to roll up the data into aggregates or SAP Business Warehouse Accelerator (BWA) to achieve good reporting performance. With SAP NetWeaver BW on SAP HANA, this portion of rolling up data into SAP BWA is eliminated, further reducing the data load times.
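As a rough illustration of why precomputed aggregates become unnecessary, consider this toy Python sketch (invented names, not SAP code): instead of rolling data up into an aggregate table at load time, the query simply scans the in-memory columns and aggregates on the fly.

```python
# Toy sketch (illustrative only): GROUP BY over in-memory columns,
# computed at query time with no aggregate table to build or roll up.

def aggregate_on_the_fly(group_col, measure_col):
    """Sum measure_col grouped by the parallel group_col."""
    totals = {}
    for group, value in zip(group_col, measure_col):
        totals[group] = totals.get(group, 0) + value
    return totals
```

In a column store this scan touches only the two columns involved, which is why it can be fast enough to replace roll-ups into aggregates or SAP BWA.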

Simplified Maintenance

Maintenance is simplified with SAP HANA because there is no special effort for indexing or database statistics maintenance to guarantee fast reporting. All the time spent building aggregates (for companies that didn’t have SAP BWA) is also not required, so there is simplification of maintenance activities as well.

Columnar-based storage with high compression rates reduces the database size of SAP NetWeaver BW.

With SAP HANA, SAP BWA is no longer required, eliminating the need for IT departments to maintain a separate BWA landscape. The SAP NetWeaver BW application server remains separate from SAP HANA, but the role of the application server is diminished because data-intensive logic is pushed from the server to SAP HANA. Therefore, users likely need fewer application servers as part of their overall sizing.

SAP has also simplified administration via one set of admin tools (e.g., for data recovery and high availability). Finally, companies need to consider their overall landscape topology. Within SAP NetWeaver BW environments, the landscape setup usually involves a central database server and numerous application servers to distribute workload. This workload is specifically for user queries and data loading. With the reduction in data load times and the acceleration of reporting queries, the overall workload on the system is reduced (i.e., the time that each operation takes), which leads to less concurrency and the ability to scale down use of some of the application servers.

Migrating to SAP HANA

SAP provides standard migration tools that enable migration from existing environments to SAP HANA.

As part of the OS/DB migration process, SAP HANA generates specific tables as column tables instead of row tables for objects that are read intensive, and for which there is a large data compression benefit from storing the data in a columnar format. Once the OS/DB migration is complete, SAP NetWeaver BW InfoCubes and DSOs remain unchanged. It is up to the user to convert these objects to in-memory optimized versions.

The conversion can be performed object by object or en masse by running ABAP program RSMIGRHANADB.



1 Comment


  1. Neelesh Jain

    Thanks for a nice informative blog.

    What happens to the logic written in transformation routines using ABAP? I think it will still be processed on the application server. Am I correct?
