SAP BW/4HANA Performance Optimization Part I
Performance optimization is a central task for every developer and modeler, since speed and quality are both key requirements of modern reporting. With this in mind, we have created a short blog series that addresses the central questions of performance optimization in an SAP BW/4HANA system:
– Performance Optimization – SAP BW/4HANA Part 1
– Performance Optimization – SAP BW/4HANA Part 2
– Performance Optimization – SAP BW/4HANA Part 3
This first blog explains some general techniques and approaches that can be used to improve report performance.
How can you improve the runtime performance of your analysis?
The first and best answer, of course, is to migrate to the SAP HANA database. Thanks to its in-memory technology, HANA accelerates data access and enables data processing directly where the data resides. This avoids time-consuming transfers of mass data between the database server and the application server.
But as data volumes continue to grow, at some point SAP S/4HANA and SAP BW/4HANA users need to consider further performance optimization options. In general, there are three areas to consider:
1) Optimal use of SAP HANA functionality:
Make sure that tables use the correct storage type for the respective requirement, especially for self-defined models. For small tables, especially if you change entire records, row-based storage is optimal. If you use tables for reporting and aggregate individual columns, however, the column store is the better choice. If access times for column-store tables increase, ensure that the jobs which merge the write-optimized delta store into the compressed main store (delta merge) run often enough and complete successfully. For programs, make sure that as much logic as possible is pushed down to the database, for example with native calculation views, and read only the columns you actually need. Choose performance-optimized join types in calculation views: if the referential integrity of the data is guaranteed, use referential joins instead of inner joins, since these can be omitted at runtime when only one side of the join is queried.
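To make the row store vs. column store trade-off concrete, here is a minimal sketch in plain Python (not HANA internals); the table and field names are purely illustrative. It shows why aggregating a single column fits a column-wise layout, while record-wise changes fit a row-wise layout.

```python
# Conceptual sketch: the same records stored row-wise and column-wise.
# Row store: one entry per record -- efficient when whole records change.
row_store = [
    {"order_id": 1, "region": "EMEA", "amount": 100.0},
    {"order_id": 2, "region": "APJ",  "amount": 250.0},
    {"order_id": 3, "region": "EMEA", "amount": 175.0},
]

# Column store: one array per field -- aggregating "amount" touches only
# that array and never reads the other columns.
col_store = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "APJ", "EMEA"],
    "amount":   [100.0, 250.0, 175.0],
}

# Aggregation over the row store must visit every record in full.
total_rows = sum(rec["amount"] for rec in row_store)

# Aggregation over the column store reads one contiguous array.
total_cols = sum(col_store["amount"])

assert total_rows == total_cols == 525.0
```

The same intuition carries over to "read only the columns needed": in a column layout, each additional requested column is additional data to scan.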
2) Additional saving of values:
If highly aggregated data is required (for example, thousands of rows aggregated into one average value) and large amounts of data are read frequently, it makes sense to store the aggregated results, especially if key-figure filters are applied to the aggregated values. Since joins on calculated values are much slower than joins on stored column contents, physically storing the calculated values can considerably speed up such joins.
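The idea can be sketched in plain Python (the data and names are illustrative, not a BW/4HANA API): compute the aggregate once, persist the result, and then filter on the stored values instead of re-aggregating on every request.

```python
# Sketch of "additional saving of values": persist an aggregate once,
# then filter on the stored result instead of recomputing it per query.
sales = [("EMEA", 100.0), ("EMEA", 175.0), ("APJ", 250.0), ("APJ", 50.0)]

# Expensive path: aggregate the detail rows on every request.
def avg_per_region(rows):
    totals, counts = {}, {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0.0) + amount
        counts[region] = counts.get(region, 0) + 1
    return {r: totals[r] / counts[r] for r in totals}

# Cheap path: compute once, store, then apply the key-figure filter
# directly to the stored values.
stored_averages = avg_per_region(sales)          # persisted result
high_volume = {r: v for r, v in stored_averages.items() if v > 140.0}

assert stored_averages == {"EMEA": 137.5, "APJ": 150.0}
assert high_volume == {"APJ": 150.0}
```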
With SAP HANA, it is also useful to buffer data that is read repeatedly in an application cache.
3) Additional options for performance optimization:
Large SAP HANA database tables are partitioned by default using a hash function, i.e. divided into parts that can be read in parallel. But manually defined partitioning can also make sense for medium-sized tables. Choose as partitioning criteria fields that are used in queries as drilldown or filter characteristics, and ensure that the partitions are of similar size. In a few cases, even an index over several columns can be useful; analyze the performance with and without this option.
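To illustrate the mechanism, here is a plain-Python sketch (not HANA's actual partitioning engine; all names are illustrative): records are assigned to partitions by a stable hash of a field that queries filter on, which keeps the partitions similarly sized, and each partition can then be scanned independently in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

N_PARTITIONS = 4
orders = [{"customer": f"C{i:04d}", "amount": float(i)} for i in range(1000)]

# Assign each record to a partition via a stable hash of the filter field.
partitions = [[] for _ in range(N_PARTITIONS)]
for rec in orders:
    idx = sum(ord(c) for c in rec["customer"]) % N_PARTITIONS
    partitions[idx].append(rec)

# Each partition can be scanned independently (here: in parallel threads).
def scan(part):
    return sum(r["amount"] for r in part)

with ThreadPoolExecutor(max_workers=N_PARTITIONS) as pool:
    total = sum(pool.map(scan, partitions))

assert total == sum(range(1000))
assert all(len(p) > 0 for p in partitions)  # no empty, wildly skewed parts
```

A query that filters on the partitioning field only needs to touch the matching partitions, which is why filter and drilldown characteristics are the right partitioning criteria.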