
Ever since BW on HANA was introduced, I have read many articles discussing the improvements it can deliver. These articles described conceptual experiences that resulted in specific benefits, features and overall improvements. I recently had the opportunity to execute a true POC that compared a BW on Oracle system with a BW on HANA environment. This was the first time I was able to turn the conceptual into the practical, and I wanted to share the results with you. Though the system I was working with was not terabytes in size, I think I was able to clearly quantify the HANA benefit. Let me know what you think and I hope that you enjoy.


Key Benefits

Company A is currently reviewing the case for implementing BW on HANA.  The main reasons for implementing it are the following:


  • Improve end user data accessibility – A few teams are limited by the size of the data sets they are trying to work with.  The current data model has been implemented effectively, but one of the main reasons for the deficiencies is the way the tool is currently used.  End users are executing queries that return over a million rows and contain complex restricted and calculated key figures.
  • Reduce development life cycles – Because the system changes constantly as it matures, we need to be able to shorten development life cycles.
  • Reduce the data model – The Company A data model has become very complex over the last few months, and it continues to grow in complexity.
  • Future development – As the data model continues to grow, the data volume will also continue to expand. Leveraging BW on HANA will allow the database to scale moving forward.  Not only does it provide in-memory capabilities, but it also provides the following:
    • Scalability – As Company A introduces additional functional areas, we will not have an immediate concern about performance implications.
    • Predictive Analytics – In the future, Company A may implement both a fully functional planning solution and predictive capabilities.  BW on HANA provides extended and improved capabilities through integration with the Predictive Analysis Library (PAL).
    • Real Time Operational Reporting – With the enterprise solution, HANA Live can be introduced to support real time operational reporting.
    • BW real time extraction – With BW 7.4, SLT will support real time (trigger-based) data extraction.  Please note that not all DataSources provide this capability yet, but coverage will continue to grow until the main DataSources are supported.

The following metrics were measured to determine the immediate improvements between BW on an Oracle database and BW on a HANA database:

  • DSO Activation times
  • InfoCube Load Times
  • InfoCube Size
  • Query run times

In order to successfully achieve the above measurements, the following systems, transactions and tools were used:

  • Data Model Benchmarks
    • BWP vs. BWX
      • Please note that these systems were snapshots taken at a specific point in time, so the data sets are not exactly the same
    • Transaction RSA1 – This transaction is used for all data modeling executions.  These benchmarks follow SAP Best Practices in order to create a pulse check comparing the two systems
  • Query Runtime
    • Business Explorer Analyzer (BEx) – The reporting tool that is embedded in Microsoft Excel
    • RSRT – The embedded BW tool that runs on the application server and mimics execution within the Web

Data Model Benchmarks

The following prerequisites were executed prior to all benchmarks:

  • Optimize all BW on HANA InfoCubes
  • Optimize all BW on HANA DSOs (new DSOs that were copied from the original source DSOs)
  • Creation of improved Data Model

BW on Oracle (BWP)

The following measurements cover 2 of the larger DataStore Objects (DSOs) within BWP.  We are documenting how long it takes to activate each DSO.


| DSO | Activation Time (seconds) | Activation Time (minutes) | Activation Time (hours) |
| --- | --- | --- | --- |
| DSO1 | 1232 | 20.53 | 0.34 |
| DSO2 | 8206 | 136.77 | 2.28 |

The following measurements cover the 4 main InfoCubes that are leveraged within the Sales and Distribution functional area.  These are the main reporting sources for both the Finance and Demand Planning teams. None of these InfoCubes have aggregates, but two of them are compressed every 7 days.


| InfoCube | Load Duration (minutes) | Load Duration (hours) | # of Records |
| --- | --- | --- | --- |
| InfoCube1 | 22 | 0.37 | 8,456,590 |
| InfoCube2 | 18 | 0.30 | 3,195,159 |
| InfoCube2 | 97 | 1.62 | 3,592,243 |
| InfoCube2 | 75 | 1.25 | 3,745,513 |

BW on HANA (BWX)

Since the BW on HANA system is not connected to the PRD system, we are unable to extract from PRD directly into BWX.  In addition, we delete the Persistent Staging Area (PSA) on a weekly basis.  In order to compare the above data loads, we have replicated the DSOs and InfoCubes, which are sourced from the above data targets. This will not provide an exact comparison, but a similar one, since the structures are the same.  As shown below, the number of records is lower in our BW on HANA system.


BWX DSOs

| DSO | Activation Time (seconds) | Activation Time (minutes) | Activation Time (hours) | # of Records |
| --- | --- | --- | --- | --- |
| DSO1 | 51 | 0.85 | 0 | 8,458,642 |
| DSO2 | 38 | 0.63 | 0 | 3,218,010 |

BWX InfoCubes

| InfoCube | Source Target | Load Duration (minutes) | Load Duration (hours) | # of Records |
| --- | --- | --- | --- | --- |
| ZHANA01 | DSO1 | 4 | 0.06 | 8,458,642 |
| ZHANA02 | DSO2 | 11 | 0.18 | 3,723,589 |
| ZHANA02 | DSO3 | 9 | 0.15 | 3,791,004 |
| ZHANA02 | DSO4 | 9 | 0.15 | 3,176,558 |

Comparisons

In the DSO activation comparisons below, you can see that the improvements are all 95% or greater (a short calculation sketch follows the table and note).

| DSO | Activation Time (seconds) – BWP | Activation Time (seconds) – BWX | Variance | % Improvement |
| --- | --- | --- | --- | --- |
| DSO1 vs. DSO3 | 1232 | 51 | -1181 | 95% |
| DSO2 vs. DSO4 | 8206 | 38 | -8168 | 99.5% |

Please note: we are loading DSO1 into DSO2, while DSO1 is loaded from 2 different sources, so there are more records that need to be aggregated together.
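For reference, the variance and improvement figures in these comparison tables are simple derivations from the two activation times. Below is a minimal Python sketch, for illustration only (the helper name is mine; the measurements come from the tables above):

```python
def improvement(bwp_seconds: float, bwx_seconds: float) -> tuple[float, float]:
    """Return (variance, % improvement) for a BWP/BWX measurement pair."""
    variance = bwx_seconds - bwp_seconds                 # negative = faster on HANA
    pct = (bwp_seconds - bwx_seconds) / bwp_seconds * 100
    return variance, pct

# DSO activation times (seconds) from the BWP and BWX tables above
for label, bwp, bwx in [("DSO1 vs. DSO3", 1232, 51), ("DSO2 vs. DSO4", 8206, 38)]:
    variance, pct = improvement(bwp, bwx)
    print(f"{label}: variance {variance:.0f} s, improvement {pct:.1f}%")
# DSO1 vs. DSO3: variance -1181 s, improvement 95.9%
# DSO2 vs. DSO4: variance -8168 s, improvement 99.5%
```

The published 95% figure is the same calculation rounded down.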

In the InfoCube comparisons below, you can see how BW on HANA reduces the time needed to reload data when the data model changes.  Load times decreased in every case, and by roughly 80–90% in three of the four loads (a records-per-minute view follows the comparison table).

| InfoCube | Load Duration (minutes) – BWP | Load Duration (minutes) – BWX | Load Duration Variance | % Improvement |
| --- | --- | --- | --- | --- |
| InfoCube1 vs. ZHANA01 | 22 | 4 | -18 | 81% |
| InfoCube2 vs. ZHANA02 | 18 | 11 | -7 | 38% |
| InfoCube2 vs. ZHANA02 | 97 | 9 | -88 | 90% |
| InfoCube2 vs. ZHANA02 | 75 | 9 | -66 | 88% |
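Because the BWX copies hold slightly different record counts than the BWP originals, a records-per-minute view can make the load comparison a little fairer than raw duration alone. Below is a minimal Python sketch, for illustration only; pairing the BWP and BWX rows by the order in which they appear in the tables above is my assumption:

```python
# (duration in minutes, record count) pairs taken from the BWP and BWX load tables above
pairs = [
    ("InfoCube1 vs. ZHANA01", (22, 8_456_590), (4, 8_458_642)),
    ("InfoCube2 vs. ZHANA02", (18, 3_195_159), (11, 3_723_589)),
    ("InfoCube2 vs. ZHANA02", (97, 3_592_243), (9, 3_791_004)),
    ("InfoCube2 vs. ZHANA02", (75, 3_745_513), (9, 3_176_558)),
]

for label, (bwp_min, bwp_rec), (bwx_min, bwx_rec) in pairs:
    bwp_rate = bwp_rec / bwp_min   # records per minute on Oracle
    bwx_rate = bwx_rec / bwx_min   # records per minute on HANA
    print(f"{label}: {bwp_rate:,.0f} -> {bwx_rate:,.0f} records/minute "
          f"({bwx_rate / bwp_rate:.1f}x)")
```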

Query Runtime

There were 2 approaches taken in order to execute queries:

  • Executing the reports against the current architecture
  • Creating a more efficient architecture and recreating a similar query to see performance improvements

With BW on HANA, there are areas where improvement will be realized and others where it cannot be. All of the improvement is realized within the database; once the data has been passed from the database to the frontend, the remaining processing time is unchanged.

Within the query runtime, we measured the following metrics:

OLAP-related events

| Event | Description |
| --- | --- |
| OLAP: Query Generation | This event measures the time needed to check the query definition and, if necessary, to generate the query. |
| Data Manager | Structure/format conversion for frontend presentation. It can be optimized with the help of compression and aggregates. If the duration of event 9000 (Data Manager) is higher than all other event times, BW on HANA will provide a significant improvement (see the sketch after this table). |
| Total DBTRANS | Total number of transported records. |
| Total DBSEL | Total number of read records. |
| OLAP: Data Selection | This event measures, for example, the time taken to sort the data read from the Data Manager according to structure element selections or restricted key figures. The calculation of virtual key figures is done here as well. |
| OLAP: Data Transfer | In this part of the coding many OLAP features are processed: exception aggregations are carried out, formulas are calculated, and the correct number of decimal places for the data cells is determined. The time can be optimized by using the cache. The number indicates the number of cells. |
| OLAP: Read Texts | Texts are read to sort the data. This event measures the time taken to call the text read class. |
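The Data Manager rule above lends itself to a quick check against any set of RSRT statistics. Below is a minimal Python sketch, for illustration only, assuming the event durations have already been copied out of the statistics display into a dictionary; the sample values are the BWP durations from the event comparison shown later in the Scenario 1 section:

```python
def data_manager_dominates(event_durations: dict[str, float]) -> bool:
    """True when the Data Manager time exceeds every other event time --
    the case in which SAP indicates BW on HANA brings the largest gain."""
    dm = event_durations.get("Data Manager", 0.0)
    others = [v for k, v in event_durations.items() if k != "Data Manager"]
    return bool(others) and dm > max(others)

# BWP durations (seconds) copied from the Scenario 1 event comparison further below
bwp_events = {
    "Data Manager": 50.377441,
    "Quantity Conversion": 2.132771,
    "OLAP: Data Selection": 19.870039,
    "Cache Commit": 0.000111,
    "OLAP: Read Data": 0.015813,
    "OLAP: Data Transfer": 0.003708,
}
print(data_manager_dominates(bwp_events))  # True -- Data Manager dominates, so HANA should help most
```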

Front-end related events in detail

| Event | Description |
| --- | --- |
| Bytes Transferred | The number of bytes transferred for one step. |
| Load Workbook | The time to transfer the workbook from the backend to the temporary folder of the client. |
| Generate Workbook | The time needed to generate the workbook. |
| Not Assigned | A collective event for all parts that are not explicitly assigned to an event. These parts belong to neither the backend nor the frontend; the time is spent somewhere in between, during the frontend/backend communication. In most cases this is network load (WAN), especially if your Citrix servers are located far from the application servers and the network load is high. This is the time spent on data transfer in the landscape after leaving the BW server and before reaching the frontend tool; it includes the complete time for network traffic and occurs for every data transfer between the BW server and the frontend. To improve performance in the frontend/backend communication you should apply the following notes: 1352375 – Performance improvements in a WAN (roundtrip reductions) > ANA_USE_SIDGRIDWBUF/ANA_USE_SIDGRIDMASS; 1179647 – Performance: Network load in BEx Analyzer > ANA_USE_TABLE |
| Total DBSEL | The number of cells read by the database. |
| Number of Excel Cells | The amount of data that is sent across the network. |
| Render Item | The Excel formatting time for rendering the BEx items. The Statistic Object Name column identifies which BEx item consumes the time. You can reduce it by using the following analysis grid properties: Apply Formatting: deactivate; Display Hierarchy Icons: “display +/-”. See the online help for the Analysis Grid. |
| Wait Time, User | The time during which the application was waiting for user action. |

The first query that was executed was Query XXXX, which is a report that supports Team X.

The following were used:

  • Transaction RSRT
  • HTML
  • No Cache
  • Track Statistics

The navigation that occurred within this report is the following:

  • Drill Across on Calendar Year/Month
  • Drill Down on Material

| Event | Duration (seconds) – BWX | Duration (seconds) – BWP | # of Records | % Improvement |
| --- | --- | --- | --- | --- |
| Data Manager | 1.993154 | 34.062514 | 0 | 94.1% |
| Total DBTRANS | 0.000000 | 0.000000 | 114,476 | |
| Total DBSEL | 0.000000 | 0.000000 | 5,008,276 | |
| OLAP: Data Selection | 3.944361 | 21.840990 | 0 | 81.9% |
| Cache Commit | 0.000444 | 0.000548 | 0 | |
| OLAP: Read Data | 0.003076 | 0.004482 | 23,263 | |
| OLAP: Read Texts | 0.153941 | 0.436043 | 0 | |
| OLAP: Data Transfer | 18.624689 | 39.310077 | 570,102 | 53.8% |

Query XXXXX

The following were used:

  • Transaction RSRT
  • HTML
  • No Cache
  • Track Statistics
  • Selection Criteria – 0CALMONTH Range – 01/2014 – 12/2014
  • Created a Y query in both systems using the original Query XXXXX as the baseline
    • Characteristics included in the initial layout are the following:
      • Sold-To – Sales District
      • Sold-To – Customer Group
      • Sold-To – Industry
      • Sold-To Party
      • Sold-To – Broker
      • Material – Brand
      • Material
      • Calendar Month (Columns)
    • Key Figures included are the following:
        • Restricted Key Figure 1
        • Restricted Key Figure 2
        • Restricted Key Figure 3
        • Calculated Key Figure 1
        • Restricted Key Figure 4
        • Restricted Key Figure 5
        • Restricted Key Figure 6
        • Restricted Key Figure 7
        • Calculated Key Figure 2
        • Calculated Key Figure 3

| System | Transaction | Report (Technical Name) | Executed? | Time | # of Rows | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| BWP | RSRT | Query XXXXX – Non HANA | Yes | 11 minutes | 1,409,796 | |
| BWX | RSRT | Query XXXXX – HANA based | Yes | 4 minutes | 1,383,192 | |

| Event | BWP Duration (seconds) | # of Rows (BWP) | BWX Duration (seconds) | # of Rows (BWX) | Comments | % Improvement |
| --- | --- | --- | --- | --- | --- | --- |
| Data Manager | 79.439548 | 0 | 11.515888 | 0 | | 85% |
| Quantity Conversion | 7.105138 | 0 | 3.508917 | 0 | | |
| Total DBTRANS | 0.000000 | 1,186,632 | 0.000000 | 972,297 | | |
| Total DBSEL | 0.000000 | 2,435,735 | 0.000000 | 2,142,308 | | |
| OLAP: Data Selection | 82.334437 | 0 | 23.708521 | 0 | | 71% |
| Cache Commit | 0.000854 | 0 | 0.000578 | 0 | | |
| OLAP: Read Data | 20.268199 | 1,376,757 | 2.968213 | 903,120 | | |
| OLAP: Read Texts | 0.487236 | 0 | 0.280800 | 0 | | |
| OLAP: Data Transfer | 425.246616 | 9,688,374 | 193.350133 | 4,253,660 | Restricted Key Figures | 54% |
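As a rough consistency check, summing the event durations on each side approximately reproduces the end-to-end runtimes reported above (11 minutes on BWP vs. 4 minutes on BWX). A minimal Python sketch, for illustration only:

```python
# Event durations (seconds) copied from the comparison table above: (BWP, BWX)
events = {
    "Data Manager":         (79.439548, 11.515888),
    "Quantity Conversion":   (7.105138,  3.508917),
    "OLAP: Data Selection": (82.334437, 23.708521),
    "Cache Commit":          (0.000854,  0.000578),
    "OLAP: Read Data":      (20.268199,  2.968213),
    "OLAP: Read Texts":      (0.487236,  0.280800),
    "OLAP: Data Transfer": (425.246616, 193.350133),
}
bwp_total = sum(bwp for bwp, _ in events.values())
bwx_total = sum(bwx for _, bwx in events.values())
print(f"BWP ~ {bwp_total / 60:.1f} min, BWX ~ {bwx_total / 60:.1f} min")
# BWP ~ 10.2 min, BWX ~ 3.9 min -- in line with the 11-minute vs. 4-minute runtimes above
```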

In the two tables below, you can see the aggregated access times for the main tables in BWP and BWX.  Within BWP, you will see both F and E tables, since both compressed and uncompressed tables need to be read. Within BWX, since there are no dimension or fact tables, this is skipped and the read time is drastically lower. This provides a huge improvement in database access and in the amount of time it takes to start rendering the data, and it further explains why the query took 11 minutes in BWP vs. 4 minutes in BWX.

Basic InfoProvider – BWX

| Basic InfoProvider | Aggregate | Table Type | Read Time (seconds) |
| --- | --- | --- | --- |
| InfoCube1 | InfoCube1$X | | 0.045058 |
| InfoCube2 | InfoCube2$X | | 0.242016 |
| InfoCube3 | InfoCube3$X | | 0.074789 |
| InfoCube4 | InfoCube4$X | | 4.390364 |
| InfoCube5 | InfoCube5$X | | 7.740808 |


Scenario 1:

Query: XXXXXX

Variable Values: 01/2014 – 03/2014

Key Figures to be included are the following:

  • Restricted Key Figure 1
  • Restricted Key Figure 2
  • Restricted Key Figure 3
  • Restricted Key Figure 4
  • Calculated Key Figure 1

Execution Steps:

  • Execute Report in RSRT
  • Drill Down on Free Characteristic 1
  • Drill Across with Key Figures
  • Filter Key Figures not required
  • Sort Restricted Key Figure 4
  • Filter Hierarchy on NODEX
  • Drill down on Item
  • Swap Item with Characteristic 2
  • Delete Filter on NODEX and switch to NODEY
  • Start back with NODEX and Free Characteristic 1
  • Select Free Characteristic 2
  • Filter down by Free Characteristic 3 on Free Characteristic 2
  • Complete

BWP – 6:14 minutes (368 seconds)

BWX – 2:40 minutes (144 seconds)

This is a 60.8% improvement.

Basic InfoProvider – BWP

| Basic InfoProvider | Aggregate | Table Type | Read Time (seconds) |
| --- | --- | --- | --- |
| InfoCube6 | InfoCube6 | F | 0.922992 |
| InfoCube5 | InfoCube5 | F | 4.884891 |
| InfoCube3 | InfoCube3 | F | 4.968414 |
| InfoCube4 | InfoCube4 | F | 8.535001 |
| InfoCube7 | InfoCube7 | F | 11.981518 |
| InfoCube1 | InfoCube1 | F | 40.520607 |
| InfoCube4 | InfoCube4 | E | 74.700701 |


| Event | Duration (seconds) – BWX | Duration (seconds) – BWP |
| --- | --- | --- |
| Data Manager | 1.175035 | 50.377441 |
| Quantity Conversion | 0.240998 | 2.132771 |
| Total DBTRANS | 0.000000 | 0.000000 |
| Total DBSEL | 0.000000 | 0.000000 |
| OLAP: Data Selection | 0.181115 | 19.870039 |
| Cache Commit | 0.000088 | 0.000111 |
| OLAP: Read Data | 0.006416 | 0.015813 |
| OLAP: Data Transfer | 0.002840 | 0.003708 |

In the table above, you can see the substantial improvement within the Data Manager.  As SAP has stated, if the Data Manager duration is longer than all of the other events, SAP HANA can greatly improve the overall performance.

Execution Steps:

The time listed for each step is the cumulative duration up to and including that step, not the time the individual step took to execute. The final step therefore shows the overall end-to-end duration. (A short sketch for deriving per-step times from these cumulative values follows.)
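Because the step timings are cumulative, the time spent in an individual step is the difference between consecutive rows. Below is a minimal Python sketch, for illustration only, for turning the cumulative "m:ss" values into per-step durations; the sample values are the BWP column from the table below:

```python
def to_seconds(mmss: str) -> int:
    """Convert a 'm:ss' or ':ss' timestamp into seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes or 0) * 60 + int(seconds)

# Cumulative BWP timings from the execution-step table below
cumulative = ["1:39", "2:56", "3:26", "3:48", "3:58", "4:44", "4:49", "5:03", "6:01"]
elapsed = [to_seconds(t) for t in cumulative]
per_step = [later - earlier for earlier, later in zip([0] + elapsed, elapsed)]
print(per_step)  # [99, 77, 30, 22, 10, 46, 5, 14, 58] -- seconds spent in each step
```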

| Step | BWP | BWX |
| --- | --- | --- |
| Execute Report in RRMX | 1:39 minutes | :17 seconds |
| Drill Down on Free Characteristic 1 | 2:56 minutes | :42 seconds |
| Drill Across with Key Figures | 3:26 minutes | :54 seconds |
| Filter Key Figures not required | 3:48 minutes | 1:16 minutes |
| Sort Key Figure 1 | 3:58 minutes (1733 records) | 1:26 minutes (1414 records) |
| Filter Hierarchy on NODEX | 4:44 minutes | 1:42 minutes |
| Filter on Top 10 Characteristic 2 | 4:49 minutes | 1:48 minutes |
| Drill down on Free Characteristic 3 | 5:03 minutes | 1:56 minutes |
| Swap Characteristic 3 with Free Characteristic 4 | 6:01 minutes (361 seconds) | 2:56 minutes (176 seconds) |
| Delete filter on NODEX | | |
| Switch to NODEY | | |
| Return to Free Characteristic 1/Free Characteristic 2 | | |
| Select Free Characteristic 2 (Top 10) | | |
| Filter down by Free Characteristic 5 and 6 | | |

Improved Architecture:

One of the benefits of BW on HANA is the ability to reduce the overall data model, remove duplication and simplify the loading process.  In order to do this moving forward, Company A will be required to review the overall data model that has been created for the implemented modules.  To determine whether there is a significant performance improvement, we created a quick remodel of a few of the objects (for testing purposes only; it does not include the complete data set).

Our main objective is to remove the InfoCube layer and leverage the DSO layer for all reporting. A MultiProvider will still serve as the reporting layer in order to create the necessary unions.

| Prior Architecture Object Type | Prior Technical Name | New Object Type | New Technical Name |
| --- | --- | --- | --- |
| InfoCube | InfoCube4 | DSO | DSOHANA – HANA POC |
| InfoCube | InfoCube2 | DSO | DSOHANA2 |
| MultiProvider | MultiProvider1 | MultiProvider | ZHANAM – HANA MultiProvider |

A few notes on the stats below:

  • The number of objects in the MultiProvider was smaller
  • Only the data that is used in the report was included in the data targets
  • A Semantically Partitioned Object (SPO) was used to partition the forecasting data by version, which may not be feasible in production.  SPO capabilities may still be used in production, but partitioned by year

| Step | BWX (new architecture) | BWX | % Improvement |
| --- | --- | --- | --- |
| Execute Report in RRMX | :04 seconds | :17 seconds | 76% |
| Drill Down on Free Characteristic 1 | :07 seconds | :42 seconds | 83% |
| Drill Across with Key Figures | :12 seconds | :54 seconds | 77% |
| Filter Key Figures not required | :21 seconds | 1:16 minutes | 72% |
| Sort Key Figure 1 | 35 seconds (1317 records) | 1:26 minutes (1414 records) | 59% |
| Filter Hierarchy on NODEX | 48 seconds | 1:42 minutes | 53% |
| Filter on Top 10 Characteristic 2 | 51 seconds | 1:48 minutes | 53% |
| Drill down on Free Characteristic 3 | 54 seconds | 1:56 minutes | 54% |
| Swap Characteristic 3 with Free Characteristic 4 | 1:35 minutes (95 seconds) | 2:56 minutes (176 seconds) | 47% |
| Delete filter on NODEX | | | |
| Switch to NODEY | | | |
| Return to Free Characteristic 1/Free Characteristic 2 | | | |
| Select Free Characteristic 2 (Top 10) | | | |
| Filter down by Free Characteristic 5 and 6 | | | |

As you can see, the largest improvements occur at the beginning, when the database is accessed. After that, the improvements level off because the application server is working through most of the tasks and the database is accessed less.


Comments


  1. Tim Korba Post author

    Thanks everyone, I am happy with how it came together…more to come on the decision at the client and the next steps that we are taking…it will be an interesting follow up

    1. Tim Korba Post author

      Dinesh,

      I am glad that you liked the document.  Are you transitioning to the HANA area?  Let me know if you have additional questions and I would be happy to answer them for you where I can!

      Tim

  2. Dinesh Baddyapalli

    Yes Tim, I am moving into the HANA space now, starting with BW on HANA.

    We just upgraded our system to 7.31 and now our client is planning to go for it.

    So I am looking into this area for more details. I am currently checking the SAP BW on HANA Cookbook.

    Regards,

    Dinesh

    1. Tim Korba Post author

      This is a perfect example for you.  It is relatively easy from the BW side.  Is the client running a BW on HANA runtime license only, or are you going with the enterprise version?

  3. Venkateswara Rao Pendyala

    Hello Tim,

    Thanks for the valuable information 🙂

    I have a small doubt on the coding part.

    When we move to BW on HANA, do we need to rewrite our existing ABAP BW code (routines) to make it compatible with the HANA database for better performance? Any guidelines for the same?

    What type of rework do we have to do while moving to BW on HANA?

    Regards,

    Venkat

    1. Tim Korba Post author

      Venkat,

      There are a few things that you should consider when migrating:

      • Remodeling – This is extremely important to flatten out your architecture, but it doesn’t have to be done right away.  Similar to a technical upgrade vs. a migration effort, you can do it in pieces. I would initially focus on your worst-performing functional areas.
      • InfoCube HANA-Optimized – Even if you don’t do a remodeling effort and decide to keep your InfoCubes, it is imperative to optimize these InfoCubes.  This can be done using the wizard, and you will notice that you will no longer have dimension and fact tables.  This provides immediate improvement; if it is not done, you will not be getting the full benefit of your HANA DB.
      • HANA migration effort – Though I believe that you should eventually transition 100% of your ABAP code to HANA-optimized code, I would expect this to be the last portion you address, unless you have extensive code that really impedes your overall loading process.  Most of the code is for the back end and not the front-end reporting, so it is more transparent to the end users.
      • Reporting – Remember, depending on the tool you are using, the rendering of the data will not be improved, only the database extraction, so be aware that if your query has a long runtime, that may be the reason as well.

      Let me know if this helps with the questions you stated above.

      Tim

  4. vivek singh

    Hi Tim,

    That’s a solid piece of work. BTW, we are in the planning phase to migrate to HANA.

    I was wondering what migration tools are available for migrating to HANA, and do they come with additional costs? We are already on 7.3 SP08, so no upgrade is required.

    I have a few additional questions about the solution I am proposing to the client. I would be glad if you could have a quick look at it. Can we exchange some information over email?

    Regards,

    Vivek

  5. Peggy Hsieh

    Hello Tim,

    This article gives me very useful information. Thank you for sharing.

    I have a BW on HANA project that is going live now, and I have a performance issue.

    When I create an InfoCube on HANA, it seems it is already HANA-optimized (because the Conversion to SAP HANA-Optimized button is greyed out).


    I use one MultiProvider to create many queries, and one query’s performance is bad.

    The query result is 4109 records.

    1) RSRT-Display Statistics Data

    I executed the query using RSRT with “Display Statistics Data” checked and found that “OLAP: Data transfer” takes a lot of time.

    It takes 30 seconds, while the other queries take around 2 or 3 seconds.

    I have read your article and Klaus Nagel’s article (Query Performance in BW-HANA FINAL).

    Based on those articles, it seems “OLAP: Data transfer” is not executed on HANA but on the application server (I mean BW?), right?

    If a query has ABAP and complex formula exception aggregation, it is not executed in the HANA calculation engine.

    Can you explain what “complex” and “simple” mean? Can you explain how a “long formula” or “many variables” impacts performance?

    How can I improve the query’s performance by using the HANA calculation engine?

    2) RSRT-Do Not Use Cache

    When I mark “Do Not Use Cache”, “OLAP: Data transfer” takes 34 seconds.

    There is no difference when using the cache on HANA. Can I improve the performance by using the cache?

    3) Is there any way to improve the “OLAP: Data transfer” performance?

    1. Tim Korba Post author

      Peggy,

      I first want to ask: what is the need for an InfoCube?  What are you trying to achieve with it?  Is there a possibility to create a leaner architecture?

      You are correct, there are specific types of aggregation that will not be improved by HANA, but what specifically are you trying to do?  Maybe you can lay out the query in the following sections:

      1) Are there aggregations?

      2) Is there any ABAP that is executed and where is this done?

      3) Are there many restricted key figures and calculations?

      4) Are you running this within BEx?

      5) How many variables do you have?

      4,109 records is your result, but how many records is it reading through?

      Tim

      1. Peggy Hsieh

        Hi, Tim

        Thank you for your reply.

        According to your questions, these are my answers:

        I first want to ask: what is the need for an InfoCube?  What are you trying to achieve with it?  Is there a possibility to create a leaner architecture?

        You are correct, there are specific types of aggregation that will not be improved by HANA, but what specifically are you trying to do? Maybe you can lay out the query in the following sections:

        I tried to do some things on the InfoCube to improve performance, expecting it would help the query execution time.

        (1) SAP HANA-Optimized InfoCube: Some articles offer advice about changing an InfoCube to SAP HANA-Optimized. But when I create an InfoCube, its type is already “SAP HANA-Optimized InfoCube”. I think nothing needs to be changed for the InfoCube.

        (2) Create an aggregate on the InfoCube: BW on HANA does not offer this function. When I use HANA, it should

        I think there is nothing more I can do to improve InfoCube performance, and basically when I use BW on HANA, InfoCube performance should already be improved, right?

        What is a leaner architecture? Another InfoCube?

        1) Are there aggregations?

        Every Key figure in the query is “Standard aggregation” in the exception aggregation setting.

        2) Is there any ABAP that is executed and where is this done?

        Yes, I use many variables in the query, and the variables are set to “processing by customer exit”.

        3) Are there many restricted key figures and calculations?

        There are 40 restricted key figures and 37 calculations.

        4) Are you running this within BEx?

        Yes

        5) How many variables do you have?

        There are 5 variables in the query.

        4,109 records is your result, but how many records is it reading through?

        There are total 154,715 records in the multiprovider.

        I find that “OLAP: Data transfer” (event ID 3200) and the other OLAP features are not pushed down to the HANA engine.

        Because the SAP documentation says:

        OLAP Features pushed down to HANA in BW 7.3x

        • Hierarchy Handling Part I
        • Restricted key figures
        • Exception Aggregation CNT for quantity key figures without unit conversion
        • Exception Aggregation of currency key figures with optional currency conversion

        This means that only the OLAP features above are executed on HANA and the others are still executed on the BW application server.

        If “OLAP: Data transfer” accounts for 90% of the query’s execution time and is never pushed down to HANA, does it mean HANA is not helping the query performance?

  6. Tim Korba Post author

    Everyone please come to the ASUG Chicago Chapter (November 6 @Harper College), where I will be speaking about this article and the overall SAP HANA roadmap that we determined with the client.  It will be a lot of fun and the talking points will be very helpful for everyone.

