Former Member

Expert Routine – why not? (2/2)

Is it worth using Expert Routines? In the SAP documentation you can find the following: "You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine. You can use this to program the transformation yourself without using the available rule types." When I read this for the first time I was scared of using it. Now I use it every time the transformation is complex or I need to improve loading performance.

Below you can find some examples of when I consider using an expert routine:

  • When I need access to the source and target structure (for example, when changing from a characteristics view to a Key Figures view)
  • When the current data model has complex ABAP in start/end routines and loading performance is poor
  • When the transformation rules contain complex ABAP for 10 or more characteristics and loading performance is poor
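For context, this is roughly what an expert routine looks like. This is a minimal sketch: the field names (doc_number, material, amount) are illustrative, and _ty_s_SC_1/_ty_s_TG_1 stand for the source and target structure types that BW generates in the routine class.

```abap
METHOD expert_routine.
* Sketch only: doc_number, material and amount are example field names.
  DATA: result_fields TYPE _ty_s_tg_1.

  FIELD-SYMBOLS: <source_fields> TYPE _ty_s_sc_1.

* The whole source package is available at once ...
  LOOP AT source_package ASSIGNING <source_fields>.
    CLEAR result_fields.
*   ... and you fill the target structure yourself, field by field,
*   instead of relying on the generated rule types.
    result_fields-doc_number = <source_fields>-doc_number.
    result_fields-material   = <source_fields>-material.
    result_fields-amount     = <source_fields>-amount.
    APPEND result_fields TO result_package.
  ENDLOOP.
ENDMETHOD.
```

Because you see the full SOURCE_PACKAGE and build the full RESULT_PACKAGE yourself, this is also the place where set-oriented lookups and buffering pay off.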

Every time I change an existing data model, I first build a prototype next to the existing flow so that I can compare the transformation results and throughput times. This costs me much more time, but in the end I am sure that I am delivering the proper solution.

Besides that, every time I change a standard routine into an expert routine, I try to improve performance by slightly adjusting the existing logic. There are some commonly known "mistakes" which slow down data loading and increase memory consumption, for example:

1. Data declaration and "SELECT *"

When you read data from another DSO during a data load and you need just a few fields from it, do not use "SELECT *"; select the particular fields only.

Instead of:

DATA: internal_table TYPE STANDARD TABLE OF /BIC/ADSO00100.
SELECT * FROM /bic/adso00100 INTO TABLE internal_table.

use:

TYPES: BEGIN OF internal_table_s,
         A TYPE type_for_A,
         B TYPE type_for_B,
         C TYPE type_for_C,
       END OF internal_table_s.
DATA: internal_table TYPE STANDARD TABLE OF internal_table_s.
SELECT A B C FROM /bic/adso00100 INTO TABLE internal_table.

2. Do not use FOR ALL ENTRIES IN if your source table is small.

If in your flow you need to derive some values from another table/DSO, and this source table has no more than 1,000 records (and you do not expect more in the future), a FOR ALL ENTRIES IN selection is much slower than simply selecting all records into an internal table at once.
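As a sketch of that approach (the table /bic/azsmall00 and its fields are made-up names for illustration): read the whole small table once, then do all lookups in memory.

```abap
TYPES: BEGIN OF ty_map,
         matl_group TYPE c LENGTH 9,
         matl_type  TYPE c LENGTH 4,
       END OF ty_map.
DATA: lt_map TYPE STANDARD TABLE OF ty_map.

* One cheap full read of the small table replaces many selective
* database round trips via FOR ALL ENTRIES.
SELECT matl_group matl_type
  FROM /bic/azsmall00
  INTO TABLE lt_map.
SORT lt_map BY matl_group.
* ... then derive values per record with READ TABLE ... BINARY SEARCH.
```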

3. Reduce the number of records in your "FOR ALL ENTRIES IN" table

It is quite common that you need to derive, for example, attributes of some characteristic into the transactional data in your flow. Let's take an attribute of 0MATERIAL which we need to derive while loading sales orders. Within one data package it is highly likely that one material is repeated many times. By using FOR ALL ENTRIES IN SOURCE_PACKAGE you hit the source table many times with the same selection.

To avoid that, I use a simple trick:

LOOP AT source_package ASSIGNING <source_fields>.
  material = <source_fields>-material.
  APPEND material TO material_tab.
ENDLOOP.
SORT material_tab BY material.
DELETE ADJACENT DUPLICATES FROM material_tab COMPARING material.

Then I use my internal table material_tab as the selection table for FOR ALL ENTRIES IN (and remember to check that the internal table is not empty before the SELECT).
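Putting it together, a sketch of the subsequent SELECT. I am assuming the 0MATERIAL attribute table /bi0/pmaterial here; the initial check matters because FOR ALL ENTRIES with an empty driver table selects every record.

```abap
TYPES: BEGIN OF ty_attr,
         material   TYPE c LENGTH 18,
         matl_group TYPE c LENGTH 9,
       END OF ty_attr.
DATA: material_attr_tab TYPE STANDARD TABLE OF ty_attr.

* FOR ALL ENTRIES with an empty driver table would select ALL records,
* so check the deduplicated material_tab first.
IF material_tab IS NOT INITIAL.
  SELECT material matl_group
    FROM /bi0/pmaterial
    INTO TABLE material_attr_tab
    FOR ALL ENTRIES IN material_tab
    WHERE material = material_tab-material
      AND objvers  = 'A'.
  SORT material_attr_tab BY material.
ENDIF.
```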

4. Always use BINARY SEARCH in your READ statement. And remember to sort your table first, otherwise your result will not be correct.
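A minimal sketch of such a lookup (material_attr_tab and ls_attr are illustrative names); the SORT must use the same fields, in the same order, as the WITH KEY clause:

```abap
TYPES: BEGIN OF ty_attr,
         material   TYPE c LENGTH 18,
         matl_group TYPE c LENGTH 9,
       END OF ty_attr.
DATA: material_attr_tab TYPE STANDARD TABLE OF ty_attr,
      ls_attr           TYPE ty_attr.

* The table must be sorted by the search key, otherwise
* BINARY SEARCH can miss rows that actually exist.
SORT material_attr_tab BY material.
READ TABLE material_attr_tab INTO ls_attr
     WITH KEY material = 'MAT-001'
     BINARY SEARCH.
IF sy-subrc = 0.
* ... use ls_attr-matl_group ...
ENDIF.
```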

5. Try to use SORTED tables in your logic when you are going to use a LOOP inside a LOOP.

If that is not possible, I use the following trick to speed up the looping:

Given two internal tables, ITAB_1 and ITAB_2: for every entry in ITAB_1 you have to loop at ITAB_2 with the condition ITAB_2-A = ITAB_1-A.

Code example:

      DATA: index TYPE sy-tabix.

      SORT itab_2 BY a.
      LOOP AT itab_1 INTO itab_1l.
        READ TABLE itab_2 INTO itab_2l
             WITH KEY a = itab_1l-a BINARY SEARCH.
        IF sy-subrc = 0.
          index = sy-tabix.
          LOOP AT itab_2 INTO itab_2l FROM index.
            IF itab_2l-a NE itab_1l-a.
              EXIT.
            ENDIF.
* <----- do something here --------->
          ENDLOOP.
        ENDIF.
      ENDLOOP.

Turning to expert routines, there are some disadvantages of using them: you cannot keep initial values for some characteristics, and you cannot use any aggregation other than overwrite. To solve this issue I use an InfoSource in between. For example:

Initial value for characteristics:


If you would like to replace the transformation between Source DSO1 and the Target DSO with an expert routine, then the "Status field" would always be overwritten with an empty value, which means that the proper value from Source DSO2 would disappear. To avoid a situation like that, we can simply put an InfoSource in between:


Now, between DSO1 and the InfoSource, we can create our expert routine, and then use a standard transformation to map the InfoSource to the Target DSO.

Aggregated values for Key Figures:

A similar solution as for the initial value: by putting an InfoSource in between, we can use the aggregation features provided by standard transformations.


One more thing which can be very useful when you decide to use expert routines in your flow is displaying messages in the loading monitor in case of errors or issues. To do that you can use this simple code:

          DATA: monitor     TYPE rspc_t_msg,
                monitor_rec TYPE rspc_s_msg.

          CLEAR monitor_rec.
          monitor_rec-msgno = '000'.
          monitor_rec-msgid = 'BW01'.
          monitor_rec-msgty = 'I'.
          monitor_rec-msgv1 = 'Missing Item category group ERLA: '.
          monitor_rec-msgv2 = <result_fields>-material.
          monitor_rec-msgv3 = <result_fields>-salesorg.
          monitor_rec-msgv4 = <result_fields>-distr_chan.
          APPEND monitor_rec TO monitor.
          CALL METHOD log->add_t_msg
            EXPORTING
              i_t_msg = monitor.
          FREE monitor.


If you have some other ideas on how to improve loading performance, please share them with me.

Tobias Haas: Some good advice, thank you.

Former Member: Thanks, helpful advice.