Over the years, experience has shown that one of the most crucial topics for an SAP Profitability and Performance Management 3.0 (SAP PaPM) application that is expected to process millions or billions of entries is the performance of its processes. The data retrieval part in particular should perform as well as it possibly can to avoid serious issues in later project phases. In reality, however, customer applications in SAP PaPM are often configured quite differently: modelers frequently do not pay enough attention to performance best practices and principles, and even experienced modelers get caught in that trap for a variety of reasons.
That is why I want to present three examples of a Model View function consuming a Core Data Service (CDS) view, each with a different configuration and each consumed by other SAP PaPM functions, to show how the data read process affects the performance of an SAP PaPM application. The upcoming sections cover the following examples:
- Model View with field mapping but without filter applied
- Model View with field mapping and with filter applied
- Model View without field mapping
1. Model View as input of a View function but without filter option:
The first example shows what happens when a mapping is applied in a Model View function that is consumed as input by a View function and subsequently by a Join function. The reasons for a mapping can vary: perhaps the field names the view provides contain symbols that should not be used, or you want names that are more meaningful in terms of the underlying functional or business requirements. Hence you add a mapping between the source fields and the target fields you want to use in your environment (e.g. /EX/KEYDAT => KEYDATE). The consequence of such a mapping is the creation of a subquery, shown in the first SQL statement highlighted in the red frame. If no filter is applied to reduce the data read by this subquery, all data stored in the database is consumed, which can lead to memory issues. The SQL statement below shows the subquery that is created because of the mapping between source and target fields; since the View function does not apply any filter option in this example, it inevitably leads to consumption of all data stored in the database. The function therefore materializes all input data before processing.
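As a rough sketch of what such a generated statement can look like (the view name, the second field, and the alias structure are assumptions made for illustration; only /EX/KEYDAT => KEYDATE comes from the example above):

```sql
-- Illustrative sketch, not the actual generated statement.
SELECT "KEYDATE", "AMOUNT"
FROM (
  -- Inner query created by the field mapping; without a filter it
  -- reads and materializes every row of the underlying CDS view.
  SELECT "/EX/KEYDAT" AS "KEYDATE",
         "/EX/AMNT"   AS "AMOUNT"
  FROM "ZCDS_SOURCE_VIEW"
);
```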
Another very common pattern is that such a Model View is set as the input function of a Join function. As in the first example, the function creates a subquery, enclosed in brackets and highlighted in red in the SQL statement below. The brackets indicate that the inner query is executed first, so all data coming from the database is consumed before the join predicates can restrict it.
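A sketch of the join case, under the same assumptions (the partner table, the ACCOUNT field, and the join condition are invented for illustration):

```sql
-- Illustrative sketch: the mapping subquery inside the join.
SELECT T1."KEYDATE", T1."AMOUNT", T2."COSTCENTER"
FROM (
  -- This inner query runs first and materializes all rows of the
  -- source view before the join predicate can restrict the result.
  SELECT "/EX/KEYDAT" AS "KEYDATE",
         "/EX/AMNT"   AS "AMOUNT",
         "/EX/ACCNT"  AS "ACCOUNT"
  FROM "ZCDS_SOURCE_VIEW"
) T1
INNER JOIN "ZJOIN_PARTNER" T2
  ON T1."ACCOUNT" = T2."ACCOUNT";
```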
If you do need the field mapping, it is recommended to apply a filter option using the Advanced tab in the function attributes. The following example shows how this works and what impact it has on the process.
2. Model View with field mapping and with filter
If a mapping is necessary for certain reasons, the number of entries read should be limited with a filter, e.g. in a View function built upon the Model View function, which is a standard functionality of SAP PaPM. To do that, you add the fields that contain the filter values to the “Advanced” section of the function attributes. In the screenshot below you can see that the field KEYDAT has been added as a filter. This generates an APPLY_FILTER() statement during activation of the View function. Since the filter is applied right before the mapping is done, it limits the data according to the values of the field KEYDAT and thus avoids materialization of unnecessary data. To filter on a certain criterion, one or more values need to be added in the “Package Selection” section when you trigger the run. In general, it is possible to add input filters to any function type where an input tab is available and an input function is assigned, e.g. a Join or Flow Modeling function.
Field added to Package Selection section of View function that is built on Model View function:
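In SQLScript terms, the generated logic can be sketched roughly as follows (the variable names, the view name, and the example filter value are assumptions; only APPLY_FILTER() and the /EX/KEYDAT mapping come from the text above):

```sql
-- Illustrative SQLScript sketch, not the actual generated code.
-- The filter string is built from the values entered in the
-- Package Selection section at runtime.
lt_filtered = APPLY_FILTER("ZCDS_SOURCE_VIEW", :iv_filter);
-- e.g. :iv_filter contains ' "/EX/KEYDAT" = ''20231231'' '

-- The mapping is applied only to the already-filtered rows,
-- so unnecessary data is never materialized.
lt_result = SELECT "/EX/KEYDAT" AS "KEYDATE",
                   "/EX/AMNT"   AS "AMOUNT"
            FROM :lt_filtered;
```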
3. Model View without field mapping

This example shows what the SQL statement looks like when no mapping is applied in the View function, i.e. there is no difference between the field names in the columns “Field” and “Source Field”. In general, it is recommended to work without mapping in data-reading functions, to avoid unnecessary effort for applying filters and to avoid the risk of accidentally materializing huge amounts of data, which can lead to memory issues.
The first SQL statement from the View function below contains no mapping, but also no condition, e.g. a WHERE clause, to restrict the data. The second SQL statement shows how the WHERE clause, defined via the View function's input tab, restricts the data according to the value entered in the Selection section before any materialization of data happens.
3.a. Model View as input of a View function without WHERE clause:
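A sketch of this case (view and field names are assumptions carried over from the earlier examples):

```sql
-- Illustrative sketch: no mapping, hence no subquery, but also
-- no WHERE clause -- every row of the view is still read.
SELECT "KEYDATE", "AMOUNT"
FROM "ZCDS_SOURCE_VIEW";
```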
3.b. Model View as input of a View function with WHERE clause:
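And the same statement with the restriction applied (the filter value is a made-up example standing in for whatever is entered in the Selection section):

```sql
-- Illustrative sketch: the WHERE clause restricts the data
-- before any materialization happens.
SELECT "KEYDATE", "AMOUNT"
FROM "ZCDS_SOURCE_VIEW"
WHERE "KEYDATE" = '20231231';  -- value from the Selection section
```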
3.c. Model View used in Join function:
In contrast to the first example, where the Model View function is used with a mapping and without any filter, which leads to a subquery, the following example shows the SQL statement created for a Join function that includes a Model View without a mapping. In this case SAP PaPM does not create a subquery that would materialize the data read; instead it searches for the matching entries according to the join predicates and joins only those, without materializing all input data. This usually prevents memory issues and poor performance during data retrieval.
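Sketched under the same assumed names as the earlier join example, the difference is simply the absence of the inner query:

```sql
-- Illustrative sketch: no subquery, so the database can push the
-- predicates down and join directly, without first materializing
-- all input rows.
SELECT T1."KEYDATE", T1."AMOUNT", T2."COSTCENTER"
FROM "ZCDS_SOURCE_VIEW" T1
INNER JOIN "ZJOIN_PARTNER" T2
  ON T1."ACCOUNT" = T2."ACCOUNT"
WHERE T1."KEYDATE" = '20231231';
```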
As a modeling user, you should check thoroughly which option is best for your scenario, but in general example 3 shows the preferable configuration. When configuring functions that deal with data retrieval, always consider the number of entries to be processed, and make sure those functions consume and materialize only the data that is necessary, to reduce the risk of memory issues and degraded performance.
I hope this blog post helps you prepare for upcoming SAP PaPM projects and also improve the performance and memory consumption of implementation projects that are already ongoing. Furthermore, please check the following SAP Notes for more information on how to improve the performance of your SAP PaPM application:
- 2967291 – FS-PER 3.0: Performance related Modeling Best Practices
- 2901976 – FS-PER 3.0: Best Practices on Performance related Considerations
Thanks for taking the time to read my first blog post. In case you think that this blog post is helpful, please feel free to share it with all your colleagues and partners working on SAP PaPM projects.