
Overview

Performance tuning is always a consideration when building data warehouses. A DW database is expected to grow, since data is added on a daily basis, and as the volume increases, performance decreases because reports end up querying millions of records across multiple years.

The purpose of this document is to focus on a few backend performance best practices and tricks I’ve learned over the years that will save time during ETL and, as a result, make for an overall more efficient DW to serve the clients’ reporting interests.

Many of these topics have been covered here on SCN, some not so much. This is by no means a comprehensive list and is a consolidation of what I’ve found most useful in my projects.

For full disclosure, I’m not an ABAPer and my ABAP knowledge is limited, so my recommendations are based purely on performance comparisons before and after the ABAP changes referenced in this post. I’m sure you can find many blogs and forums around each of the topics I’ll be mentioning.

Logical Partitioning

As your data warehouse matures and your data is now many years old, you might find yourself stuck with a cube that contains multiple years of data. If best practices were followed, you should have a multiprovider on top of your cube for the reporting layer:


There are a few disadvantages to having a single cube storing multiple years of data:

  • Increased loading time
  • Increased reload time if changes are required, such as a new field being added (all history needs to be reloaded)
  • Decreased query performance, as the select from BEx will sift through all the records in the cube

Logically partitioned cubes are exactly what the name suggests: the data in the cube is partitioned (or split) according to a logical criterion. The example I will give below is an easy one to understand, but the partitioning can really be done on any field that doesn’t have too many distinct values.

What I’ve done in the past was to logically partition cubes based on the fiscal year/calendar year. In this scenario, you’d have multiple infocubes logically partitioned by year (2010, 2011, ..., 2017, 2018, etc.). All the cubes would be linked together through a single multiprovider:


This option does require a little setup, such as creating transformations and DTPs for each cube and ensuring the correct data is loaded into each cube, either through a simple start routine that deletes records from SOURCE_PACKAGE where CALYEAR is not the target year, or through filters in the DTPs that only load the appropriate year into each cube.
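As an illustration, here is a minimal start routine sketch. It assumes the standard SOURCE_PACKAGE parameter contains a CALYEAR field and that the cube holds only calendar year 2018; the year literal is just a placeholder for the year assigned to the target cube.

  " Keep only the records belonging to the year this cube covers.
  " '2018' is a placeholder for the year assigned to the target cube.
  DELETE SOURCE_PACKAGE WHERE calyear <> '2018'.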


But once the setup is done, if data reloads are required for specific years, you only need to reload that particular cube, without disrupting reporting on all the other data.


When running a report for a single year, the multiprovider will be smart enough to direct the select statement only to the appropriate cube, thus skipping the data from the other years that is not being queried and improving report run times.


This practice is highly recommended, so much so that SAP provides standard functionality to accomplish it, which is the topic of the next section:



Semantic Partitioned Objects

SPOs follow the exact same concept as logical partitioning; they are SAP-provided functionality that enables partitioning in BW in a more streamlined and automated way.

The link below provides a great how-to guide on using SPOs by Rakesh Dilip Kalyankar:

  http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/50e09876-d902-2f10-1db6-ce161ab7f...



Batch Jobs for DTP Executions

As part of ETL, BW developers will schedule DTPs to be executed. Depending on the size and complexity of the process chains, as well as the number of background jobs available at any given time, we can take advantage of running multiple parallel background jobs per DTP.


There isn’t a straightforward number of parallel jobs that should be set, since each BW system is unique in terms of:

  • Number of infoproviders that need to be loaded
  • Availability and number of background jobs
  • Timing in which loads need to be completed
  • Number of DTPs scheduled to run in parallel via process chains


One main rule, though, is that the number of jobs has to be a multiple of 3 for transactional data, while master data loads can only have 1 job.

To change the background job settings, in the menu, click on Goto -> Settings for Batch Manager …

Change the Number of Processes value to increase or decrease the number of background jobs.


Here are a few tips on how to determine the number of background jobs to set for a particular DTP:

1. Assess the volume and criticality of the load:

     a. Higher-volume and critical loads (loads that need to be completed before 8AM, or whenever the business needs the data) should have more parallel background jobs. A ballpark number I’ve used is between 9 and 12 background jobs.

     b. If, for example, you have a DTP load that takes 4 hours on a daily basis and is holding the entire process chain down, it might be a candidate for increasing background jobs.

2. Determine the number of available background jobs by going to SM51, double-clicking on the available servers, and viewing the total background jobs.

With that number, you have an estimate of how many background jobs you can parallelize.


*Important: if you have 20 total background jobs, you should NOT try to max out and consume all 20. Try to stay 2-3 jobs below that threshold so you don’t run into job queuing issues, which could lead to a loss of performance.


3. Assess the process chains in your system to identify where loads can be parallelized and where it makes sense to increase the number of background jobs in the DTPs.



DTP Data Package Size

Often overlooked, the package size can provide moderate to significant performance improvements if tweaked correctly.


Typically a larger package size is preferable: because more records are bundled into a single package, the overhead of initializing each data package is incurred fewer times.


For example, if we’re loading 50k records from a PSA to a DSO:

1. Package size of 1,000: this results in 50x the per-package overhead, since the DTP will process 50 packages of 1,000 records to load the 50k;

2. Package size of 50,000: the overhead is incurred only once, since all the records are grouped into a single 50k package.

Option 2 will provide better performance.

However, there are some considerations when modifying the package size:

1. If there’s custom logic in the transformation that does large-volume lookups from other DSOs or master data objects, it might be necessary to reduce the package size, since there’s a limit to how much memory can be used for internal tables. If the complexity or data volume in the custom code is great enough, reducing the package size might be the only way to ensure the loads complete successfully, and you will only find out through trial and error;


2. If there’s no custom code or logic in the transformation, it is possible to expand the package size to 5-10x the default setting of 50k. Again, trial and error will help determine the sweet spot.



Secondary Indices on DSOs

Another common issue I’ve observed is poor performance in transformations when doing non-key selects from DSOs. Fortunately, SAP provides a very simple fix for that: the ability to create secondary indices on DSOs.


It’s a very simple process that will improve the performance of those selects tremendously.


In the DSO modelling screen (reached by double-clicking on the DSO), at the bottom under Indexes, right-click and select Create New Indexes.

Leave the Unique checkbox unchecked. If it is checked, the secondary index being created would have to have unique values in the DSO, which might not be the case if, for example, the index is being created on the GL Account field: there could be multiple records in the DSO with the same GL Account.

Once created, simply drag a field from the Data Fields into the new index, and activate the DSO.
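To show where this pays off, here is a hedged sketch of the kind of transformation lookup that benefits. The DSO name ZFI_D01, its active table /BIC/AZFI_D0100, and the field names are hypothetical; the point is that the WHERE clause filters on a non-key field (GL Account), which is exactly what a secondary index can serve.

  TYPES: BEGIN OF t_fi_lookup,
           gl_account TYPE /bi0/oigl_account,   " assumed data elements
           comp_code  TYPE /bi0/oicomp_code,
         END OF t_fi_lookup.
  DATA: lt_fi_lookup TYPE STANDARD TABLE OF t_fi_lookup.

  " /BIC/AZFI_D0100 is the (hypothetical) active table of DSO ZFI_D01.
  " Without a secondary index on GL_ACCOUNT, this non-key WHERE clause
  " forces a full scan of the active table.
  IF SOURCE_PACKAGE IS NOT INITIAL.
    SELECT gl_account comp_code
      FROM /bic/azfi_d0100
      INTO TABLE lt_fi_lookup
      FOR ALL ENTRIES IN SOURCE_PACKAGE
      WHERE gl_account = SOURCE_PACKAGE-gl_account.
  ENDIF.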



Master Data Lookup in Transformation

SAP introduced a nice piece of functionality that allows us, without writing any code, to select master data attributes in the field routine of transformations:

This simplifies the build process significantly, as no code is required.


However, we’ve noticed that this actually decreases loading performance compared to doing a select statement in the start routine and then reading the internal table in the field routine, as in the example below:


START ROUTINE

  SELECT FIELD1 FIELD2
    FROM /BI0/PFIELD1
    INTO TABLE itab1
    FOR ALL ENTRIES IN SOURCE_PACKAGE
    WHERE FIELD1  = SOURCE_PACKAGE-FIELD1
      AND OBJVERS = 'A'.

FIELD ROUTINE

  IF RESULT_FIELDS IS NOT INITIAL.
    READ TABLE ITAB1 WITH TABLE KEY
         FIELD1 = RESULT_FIELDS-FIELD1 ASSIGNING <f_s_itab1>.
    IF <f_s_itab1> IS ASSIGNED.
      RESULT = <f_s_itab1>-FIELD2.
      UNASSIGN <f_s_itab1>.
    ENDIF.
  ENDIF.

The reason for that is actually quite simple. If, for example, we’re loading 50k records per data package, the single select statement in the start routine will do one select for all 50k records and store the result in our internal table.


If we use the standard field routine logic instead, since the field routine is executed once per record, it will end up doing 50k selects, which is significantly more costly.


One consideration to keep in mind is time-dependent master data. Given the complexity of figuring out the correct time period to select, I have used the standard SAP functionality of reading the master data in the field routines of the transformations to select time-dependent attributes.
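For reference, here is a hedged sketch of what the manual alternative would involve, assuming the time-dependent attributes sit in the Q table /BI0/QFIELD1 and that a key date has already been determined; the extra validity handling is what makes the standard read the more convenient option:

  DATA: lv_keydate TYPE sy-datum.
  lv_keydate = sy-datum.             " e.g. use today as the key date (assumption)

  SELECT FIELD1 FIELD2
    FROM /BI0/QFIELD1                " time-dependent (Q) attribute table - assumed name
    INTO TABLE itab1
    FOR ALL ENTRIES IN SOURCE_PACKAGE
    WHERE FIELD1   = SOURCE_PACKAGE-FIELD1
      AND OBJVERS  = 'A'
      AND DATEFROM <= lv_keydate
      AND DATETO   >= lv_keydate.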



Hashed Tables

Hashed tables are simply internal tables with a defined unique key, as opposed to a standard table, where you need to perform a SORT in order to do an efficient READ with BINARY SEARCH.


So when declaring the internal table, the key is specified explicitly in the DATA statement:


TYPES: BEGIN OF t_itab1,
         field1 TYPE /BI0/OIFIELD1,
         field2 TYPE /BI0/OIFIELD2,
       END OF t_itab1.

DATA: itab1 TYPE HASHED TABLE OF t_itab1 WITH UNIQUE KEY field1.

FIELD-SYMBOLS: <f_s_itab1> TYPE t_itab1.
 

For this particular internal table itab1, we know that field1 is unique and therefore we can declare it that way.


When the data is selected, the system will index it according to the specified key. SORT and BINARY SEARCH are not required in this case.


Below is an example of a read statement on the hashed table. One thing to keep in mind is to use WITH TABLE KEY when reading hashed tables, as opposed to WITH KEY for standard tables.

READ TABLE itab1 WITH TABLE KEY field1 = RESULT_FIELDS-field1
     ASSIGNING <f_s_itab1>.
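For contrast, here is a minimal sketch of what the equivalent read would look like if itab1 were declared as a standard table instead (same hypothetical field names as above): the table has to be sorted explicitly before binary-search reads become efficient.

  " Standard-table alternative (assumes itab1 is declared as a STANDARD TABLE,
  " not HASHED): sort once, then binary search on every read.
  SORT itab1 BY field1.

  READ TABLE itab1 WITH KEY field1 = RESULT_FIELDS-field1
       BINARY SEARCH ASSIGNING <f_s_itab1>.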


Field Symbols Instead of Work Areas

A field symbol acts as a pointer to a record in an internal table, whereas a work area holds a copy of the record from the internal table.


So if we loop over an internal table with 100 records, the field symbol will point at each record in turn through each pass of the loop and allow us to modify the internal table directly, whereas a work area will copy each record that is looped over.
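As a small illustration, here is a sketch that assumes a BW start routine and a hypothetical COMP_CODE field in the source structure: a record is changed in place through the field symbol, with no MODIFY statement needed.

  FIELD-SYMBOLS: <source_fields> LIKE LINE OF SOURCE_PACKAGE.

  LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
    " <source_fields> points at the table row itself, so this assignment
    " changes SOURCE_PACKAGE directly - no MODIFY statement is needed.
    <source_fields>-comp_code = '1000'.   " hypothetical field and value
  ENDLOOP.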


What we’ve noticed is that field symbols provide better performance when having to loop or read through internal tables.


To declare a field symbol, you first need to have a type or structure declared or available:

TYPES: BEGIN OF t_itab1,
         field1 TYPE /BI0/OIFIELD1,
         field2 TYPE /BI0/OIFIELD2,
       END OF t_itab1.

FIELD-SYMBOLS: <f_s_itab1> TYPE t_itab1.

When performing a read, you use the ASSIGNING addition:

  READ TABLE itab1 WITH TABLE KEY field1 = RESULT_FIELDS-field1
       ASSIGNING <f_s_itab1>.

For validating if the field symbol is assigned:

IF <f_s_itab1> IS ASSIGNED.

Write your logic

Don’t forget to unassign the field symbol after your logic is complete:

             UNASSIGN <f_s_itab1>.

And close your IF statement:

     ENDIF.

Looping is similar to a read, as you also have to use the ASSIGNING command:

      LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.

The main difference is that you do NOT need to unassign the field symbol within the loop; at each iteration, it is automatically reassigned to the next record in the internal table.


However, if you do wish to reuse the field symbol after your ENDLOOP, you should unassign it immediately so it doesn’t keep pointing at the last record:

LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
  " Write your logic
ENDLOOP.

UNASSIGN <source_fields>.


Parallel Cursor

Inevitably, when writing ABAP code, we will stumble across a scenario where we need to write a loop within a loop. A nested loop is a big no-no in terms of ABAP programming best practices, but there’s a nifty little trick to avoid its cost, called a parallel cursor.


Here’s how it works:

1. If you’re not using hashed tables, make sure to sort itab1 and itab2;

2. Start the first loop over itab1 and assign field symbol <fs1>;

3. Within the loop over itab1, first do a READ on itab2, assigning field symbol <fs2>, to determine the exact position of the required record;

4. If a record is found and <fs2> is assigned, save the position of the record (sy-tabix) in the variable lv_index;

5. Unassign <fs2> so the loop with the parallel cursor can start;

6. Start the loop over itab2 from that start position lv_index;

7. After assigning <fs2>, check whether the field1 read in the second loop still matches the record in <fs1>:

     a. If it does, carry on with the logic;

     b. If it doesn’t, exit, and move on to the next record in itab1.


Below is an example of the parallel cursor:

LOOP AT itab1 ASSIGNING <fs1>.

  READ TABLE itab2 WITH KEY field1 = <fs1>-field1
       BINARY SEARCH ASSIGNING <fs2>.

  IF <fs2> IS ASSIGNED.
    lv_index = sy-tabix.
    UNASSIGN <fs2>.

    LOOP AT itab2 ASSIGNING <fs2> FROM lv_index.
      IF <fs1>-field1 <> <fs2>-field1.
        EXIT.
      ENDIF.
      " write your code
    ENDLOOP.
  ENDIF.

ENDLOOP.

And with comments:

LOOP AT itab1 ASSIGNING <fs1>.                      "Our first loop

  READ TABLE itab2 WITH KEY field1 = <fs1>-field1   "Read the 2nd internal table we wish to loop over
       BINARY SEARCH ASSIGNING <fs2>.               "to determine the start position of the second loop

  IF <fs2> IS ASSIGNED.                             "If a value is found, <fs2> is assigned and passes this check
    lv_index = sy-tabix.                            "Store the position of the found record in itab2
    UNASSIGN <fs2>.                                 "Clear the field symbol

    LOOP AT itab2 ASSIGNING <fs2> FROM lv_index.    "Start the loop at the position found above on itab2
      IF <fs1>-field1 <> <fs2>-field1.              "Once itab2 no longer matches the record in <fs1>,
        EXIT.                                       "exit the 2nd loop and move to the next record in itab1
      ENDIF.
      " Write your logic for when <fs1>-field1 = <fs2>-field1
    ENDLOOP.                                        "Endloop for itab2
  ENDIF.

ENDLOOP.                                            "Endloop for itab1

Summary

Hopefully these tips can help you build a more efficient and better performing backend SAP BW data warehouse. Your feedback and suggestions are always welcome, and if you have better or different ways of doing the same thing, I'd definitely be interested in learning them.


Best of luck on your developments!


