
BW on HANA can easily enable SQL data access for third-party tools, using the “External SAP HANA View” flag to automatically generate Calculation Views out of BW objects. However, consuming multidimensional BW scenarios through the relational SQL paradigm can lead to significant performance issues. This blog covers a few basic technical tips that help to improve runtime performance. Tested with BW 7.5 on HANA SPS 10, 11 and 12.

 

1. System Checks

SAP provides the “SQL Statement Collection for SAP HANA”, which you can download from SAP Note 1969700. You will get a ZIP file that can be imported and used within HANA Studio; for instructions, see this blog.

The collection includes some quite useful database checks:

  • As a starting point, you can execute the “MiniChecks” from the folder “Configuration”. The result indicates whether there is an issue you should look into more closely, and it also points to the appropriate documentation. Further information on how to interpret the results can be found in SAP Note 1999993.
  • The folder “SQL” provides performance-related information such as historic expensive statements and plan cache usage.
  • The folders “BW” and “Tables” include some basic checks to make sure there are no inconsistencies or other issues related to your tables.
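
To get a feel for the kind of information the “SQL” checks surface, you can also query the plan cache directly. The following is only a sketch against the SYS.M_SQL_PLAN_CACHE monitoring view; verify the column names on your revision:

```sql
-- Top 10 statements by total execution time from the plan cache
SELECT TOP 10
       STATEMENT_STRING,
       EXECUTION_COUNT,
       TOTAL_EXECUTION_TIME
  FROM SYS.M_SQL_PLAN_CACHE
 ORDER BY TOTAL_EXECUTION_TIME DESC;
```

Statements that appear here with high totals are good first candidates for closer analysis via Explain Plan or PlanViz.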

 

2. Table Maintenance

Check and correct table issues

  • In addition to the checks via HANA Studio, there is also the very useful report RSDU_TABLE_CONSISTENCY, which identifies and even fixes several HANA-specific issues such as inefficient indexes, partitioning or compression.
  • First run the report in store mode; this will display a list of all the problems found. Then go through the entries and mark them for repair via the context menu. Finally, re-run the report in repair mode. Most issues can be fixed automatically, while a few may need additional manual care.
  • SAP recommends running the report on a regular basis, e.g. once a month.

Prefer column store instead of row store

  • Column tables are optimized for query read access, feature advanced compression, and can be partitioned and distributed across all nodes in scale-out environments.
  • In contrast, row store tables are better suited for write operations on full data records. However, they always reside on the master node, which can become a performance bottleneck: (a) there is no load balancing for row store traffic in scale-out environments, and (b) the more space your row tables consume, the less free memory remains for calculations, which can potentially lead to out-of-memory errors.
  • There is also an implication for the query execution plan, which you can display via the HANA Studio “Explain Plan” feature. Most operations take place in the column engine, but if you use row store tables, switches between execution engines become necessary. [Screenshot: explain plan showing engine switches] Whenever such engine switches occur, all intermediate results have to be temporarily materialized, which is a considerable performance disadvantage. Therefore, it is best practice to use only column tables.
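
To locate remaining row store tables, a monitoring query along these lines can help. This is a sketch against the SYS.M_TABLES view; column names may vary slightly between revisions:

```sql
-- List remaining row store tables and their size,
-- as candidates for conversion to column store
SELECT SCHEMA_NAME,
       TABLE_NAME,
       RECORD_COUNT,
       TABLE_SIZE
  FROM SYS.M_TABLES
 WHERE IS_COLUMN_TABLE = 'FALSE'
 ORDER BY TABLE_SIZE DESC;
```

Note that some SAP-internal tables are intentionally kept in the row store; check the relevant SAP Notes before converting anything.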

Avoid Partitioning Issues

  • Watch out for the 2 billion rows limit when it comes to SID tables. If your tables exceed that size, they will have to be partitioned, which in the case of SID tables will lead to performance drops.
  • Be sure to run regular table distribution optimizations to balance the load on a scale-out system. This also prevents certain issues, such as new / change / active DSO tables not being kept together, which would lead to performance drops when loading or activating data.
  • Consider using advanced tools like the Data Distribution Optimizer (see SAP Note 2092669).
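
To keep an eye on the 2 billion rows limit, a simple early-warning query can be run periodically. This is only an illustration against SYS.M_TABLES; the threshold value is an assumption you should adjust to your own comfort level:

```sql
-- SID tables (/BI0/S*, /BIC/S*) approaching the 2 billion row limit
SELECT SCHEMA_NAME,
       TABLE_NAME,
       RECORD_COUNT
  FROM SYS.M_TABLES
 WHERE (TABLE_NAME LIKE '/BI0/S%' OR TABLE_NAME LIKE '/BIC/S%')
   AND RECORD_COUNT > 1500000000   -- early-warning threshold, adjust as needed
 ORDER BY RECORD_COUNT DESC;
```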

Watch your indexes

  • In general, there is always a trade-off between speed and memory consumption. Since HANA memory is expensive and indexes can grow quite large, you should avoid any that are not really necessary.
  • If you cannot remove multi-column indexes, you could try to reduce them to single-column indexes.
  • There is the option to convert indexes to inverted hash, which potentially consumes about 30% less space. We have not tried this yet, so I cannot verify how well it works in the field.
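
A quick way to review which indexes exist at all is the SYS.INDEXES catalog view. The schema name below is a hypothetical placeholder; replace it with your BW schema:

```sql
-- Overview of indexes per table; review whether each is really needed
-- (INDEX_TYPE distinguishes e.g. inverted value from inverted hash indexes)
SELECT SCHEMA_NAME,
       TABLE_NAME,
       INDEX_NAME,
       INDEX_TYPE
  FROM SYS.INDEXES
 WHERE SCHEMA_NAME = 'SAPBWP'   -- hypothetical schema name, replace with yours
 ORDER BY TABLE_NAME, INDEX_NAME;
```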

 

3. Parameter Settings

There are some configuration settings that have a huge impact on performance. It is therefore important to make sure that the parameters under Administration Console -> Configuration are set reasonably.

  • Several parameter adjustments improved both SQL performance and stability on our system. [Screenshot: parameter settings]
  • This is just to give you an idea; please do not simply reuse those exact settings. You will have to adjust all values according to your system size and setup. Additional recommendations are given in this blog.
  • You can always use the following statement to display an overview of custom settings that deviate from the standard configuration:

    select * from sys.m_inifile_contents where layer_name = 'SYSTEM'
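
For reference, parameters are changed with the ALTER SYSTEM ALTER CONFIGURATION statement. The parameter below (plan_cache_size in the sql section of indexserver.ini) is real, but the value is purely an example and must be sized for your own system:

```sql
-- Illustration only: set an indexserver.ini parameter system-wide
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sql', 'plan_cache_size') = '4294967296'   -- bytes, here 4 GB
  WITH RECONFIGURE;
```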

 

4. Software Revision

We have tested a lot of HANA revisions and implemented a few hundred SAP Notes in the BW backend lately. We can say with certainty that the revision and patch level you are on makes a huge difference for SQL runtime performance.

  • Obviously, always apply the latest BW Support Package and HANA revision if possible.
  • When it comes to the execution of external SAP HANA Views, there have been major improvements regarding unfolding in recent HANA revisions.
  • Unfolding basically means that the basic elements, and hence the execution plan, of certain objects can be determined in more detail. So instead of executing just one single search operation on a view, the processing is distributed across multiple steps and engines. The explain plans below show the very same scenario, executed on different HANA revisions.

    [Screenshot: SPS 11 – single search operation]

    [Screenshot: SPS 12 – multiple operations and different engines]

    Some of our scenarios suddenly ran 2-3 times faster due to this.


4 Comments


  1. Former Member

    Hi,
    Thanks a trillion mate!

    We have a problem migrating our client’s data to an SAP S/4HANA system.

    We are trying to upload around 50,000 BP records from a legacy system to SAP with external batch input software (similar to LSMW). So we create flat files and intermediate files and upload them to the batch input software. We have two problems:

    ① The process is very slow (it took us 3 days to upload the 50,000 records).

    ② Each BP record contains many bank details, but the system only migrates a maximum of 4 banks (the bank details section has 4 spots, and to register more than 4 one has to scroll down, which we have not been able to simulate in our software).

    As a solution, we are thinking of using a BAPI. Has anyone had similar problems during data migration, and if so, how did you resolve them? Also, if anyone has experience with BAPIs for BP, please let me know how you did it.

     

    It works like a charm and saved a lot of energy and time.

    By the way do you have any YouTube videos, would love to watch it. I would like to connect you on LinkedIn, great to have experts like you in my connection (In case, if you don’t have any issues).
    Please keep providing such valuable information.
    Thank you,

    Rahul

  2. Former Member

    Hi Ben,

     

    THANKS SO MUCH for sharing this! I would love to buy you a coffee, since I now won’t be up all night over the issue that has been driving me crazy (until now!!). I just wish I knew what was going wrong, but I’m so glad it’s in the right place now! Thanks again :)
    I believe I saw a post on this topic on this forum, but can’t find it again…
    I believe I saw a post on this topic on this forum, but can’t find it again…

    When writing a HANA view with an SQL script, will HANA materialize all the “intermediate” views in the script (all the select statements in the script), or will it be smart enough to optimize the statements into a merged select that reduces the need for materialization as much as possible? I think I read a post earlier that stated HANA was smart enough – is that correct?

    Is it the same approach taken by the optimizer for graphical views as well?

     


     

    Thanks,

    Irene Hynes

  3. Bernhard Sauerteig Post author

    Hello Irene,

    thanks for your nice feedback!

    About your question: SQL Views should be avoided whenever possible in HANA. That is, because the execution engine is optimized for Graphical Calculation Views, so this is where you will get the best performance. If you have very complex logic that still requires a lot of SQL, then the next best option would be to build a Calculation View with Table Function (that’s the successor of the Script-based Calculation View).

    • Please have a look at my other blog, SAP HANA – SQL Execution Performance. Chapter 2 explains one of the downsides of SQL Views, which concerns the reduction of redundant attributes.
    • Also the filter pushdown might not work as expected or not at all in SQL Views, which can be a huge performance disadvantage.
    • In addition, SQL Views can easily become invalidated, if a dependent object is changed. In contrast, if something around a Calculation View is changed, then the redeployment of all dependent objects is done automatically.
    • Last but not least, Calculation Views are designtime objects and as such can be easily transported, while SQL Views – at least those that were generated through a CREATE statement – exist only in the runtime and can’t be transported.

    So it is always better to stick to Calculation Views for that matter.

    Best regards,
    Bernhard

