
4 HANA based BW Transformation – New features delivered by 7.50 SP04


This blog is part of the blog series HANA based BW Transformation. 

The following new features are shipped with BW 7.50 Support Package 04 (feature pack):

  • SQL Script (AMDP) based Start-, End-, and Field Routines
  • Error Handling

4.1 Mixed implementation (HANA / ABAP)

In a stacked data flow it is possible to mix HANA executed BW transformations (standard push down or SQL Script) and ABAP routines. In a mixed scenario it is important that the lower level BW transformations are HANA push down capable. Lower level means the transformation is executed closer to the source object.

Figure 4.1 shows a stacked data flow with one InfoSource in between, see (1). The upper BW transformation (2) contains an ABAP start routine, therefore only the ABAP runtime is supported. In the lower BW transformation (3) only standard rules are used, therefore both the HANA and ABAP runtime are supported.

Although an ABAP routine is embedded in the data flow, the DTP still supports SAP HANA execution: the SAP HANA Execution flag is set, see (4), and the processing mode is set to (H) Parallel Processing with partial SAP HANA Execution, see (5).


Figure 4.1: HANA and ABAP mixed data flows

4.1.1       Restriction in mixed data flows (Added on 03/07/2017)

In a mixed data flow (SAP HANA and ABAP processing) it is not possible to enable the error handling. For a mixed data flow, the DTP flag SAP HANA Execution is set and grayed out, which means the first part of the data flow must be executed in the HANA processing mode.

The attempt to activate the error handling results in the message in Figure 4.1b.

Figure 4.1b: Mixed data flow and error handling

If the error handling is absolutely necessary, an intermediate persistence may have to be integrated into the data flow.

4.2      Routines

SAP Help: What’s new – Routines in Transformations?

With BW 7.50 SP04 all BW Transformation routines can be implemented in ABAP or in SQL Script. Figure 4.2 shows the available routine types in a BW transformation context.


Figure 4.2: Available Routines in BW transformations

With BW 7.50 SP04 the concept, and therefore the menu structure, for creating and deleting a routine has changed. All routines (Start-, Field-, End- and Expert-Routines) can now be implemented in ABAP or in SQL Script. It is not possible to mix ABAP and SQL Script routines within one transformation.


Figure 4.3: Routines in BW transformation

The transformation framework always tries to offer both execution modes, ABAP and HANA. For more information see the main blog of this series.

When implementing the first routine of a BW transformation the system asks for the implementation type (ABAP or SQL Script (AMDP Script)). Figure 4.4 shows the different routine implementation types and the impact of the selected implementation type on the execution mode.


Figure 4.4: Routine implementation type

Initially both execution modes (1), ABAP and HANA, are possible (unless you are using a feature which prevents a push down). The implementation type decision for the first routine within a BW transformation sets the implementation type for all further routines within this BW transformation.  The dialog (2) will only come up for the first routine within a BW transformation. If you choose ABAP routine for the first routine the Runtime Status will change from ABAP and HANA runtime are supported to Only ABAP runtime is supported (3). If you choose AMDP script for the first routine the Runtime Status changes to Only HANA runtime is supported (4).

4.2.1       General routine information

For each SQL Script (Start, End, Field and Expert) routine a specific AMDP – ABAP class is created. For more information about the AMDP – ABAP class see paragraph »The AMDP Class« in the initial blog »HANA based BW Transformation« of this blog series.

Only the method body (including the method declaration) is stored in the BW transformation metadata. You can find the source code of all BW transformation related routines (methods / procedures) in the table RSTRANSTEPSCRIPT.

Table replacement for expert routines
Up to BW 7.50 SP04 the procedure source code for SAP HANA Expert Scripts was stored in the table RSTRANSCRIPT. With BW 7.50 SP04 the storage location for AMDP based routines has changed: the source code for all AMDP based routines (in the context of a BW transformation) is now stored in the table RSTRANSTEPSCRIPT.

The column CODEID provides the ID for the ABAP class name. To get the full ABAP class name, add the prefix "/BIC/" to the ID.
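As a quick check on a system, the class name can be derived directly from the table; the sketch below shows the only logic involved (the column CODEID is confirmed above, everything else in the selection is kept deliberately generic):

  SELECT "CODEID",
         '/BIC/' || "CODEID" AS "CLASS_NAME"
    FROM "RSTRANSTEPSCRIPT";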

The generated AMDP – ABAP classes are not transported. Only the metadata, including the method source code, is transported. The AMDP – ABAP classes are generated in the target system during the BW transformation activation, in the transport post processing step.

4.2.2       Routine Parameter

Parallel to the routines (Start-, End- and Field-Routines), error handling was also delivered with BW 7.50 SP04. As a result the method declaration has changed, including for the SAP HANA Expert Script. To keep existing SAP HANA Expert Scripts valid, the method declaration is not changed during the upgrade to SP04; for more information see paragraph 4.2.3 »Flag – Enable Error Handling for Expert Script«.

Field SQL__PROCEDURE__SOURCE__RECORD

The field SQL__PROCEDURE__SOURCE__RECORD is part of all structured parameters. The field can be used to store the original source record value of a row.

Figure 4.5 shows an example of how to handle record information during the transformation and the error handling. In the example data flow the source object is a classic DataStore-Object (ODSO).

The sample data flow uses two transformations, both of which implement a SQL Script routine (Start-, End- or Expert-Routine).

The inTab of the first SQL Script (AMDP Script (1)) contains information about the source data; in this example the source object provides technical information to create a unique identifier for each row. If you are reading data from the active table (see DTP adjustments) of an ODSO, it is not possible to get the necessary information from the source; in this case both columns RECORD and SQL__PROCEDURE__SOURCE__RECORD are initial. The SQL Script does not contain logic to handle erroneous records.

If the source provides technical information to create a unique identifier, both columns RECORD and SQL__PROCEDURE__SOURCE__RECORD are populated, and in the inTab the content of both columns is the same. The columns are created by concatenating the technical fields REQUID, DATAPAK and RECORD for an ODSO, and REQUEST TSN, DATAPAK and RECORD for an ADSO.

The field REQUEST (see Source in Figure 4.5) cannot be used as an ordering criterion, because of its generated values. Therefore the related SID (see /BI0/REQUID in Figure 4.5) is used.

The business logic requires some source rows to be multiplied. To get a unique sortable column, the column RECORD is recalculated with new unique sorted values. For the multiplied records the source record information in the column SQL__PROCEDURE__SOURCE__RECORD remains untouched.
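Such a renumbering can be sketched in SQL Script as follows (the payload fields and the row multiplication logic are invented for illustration; only RECORD and SQL__PROCEDURE__SOURCE__RECORD come from the text above):

  -- duplicate each source row (hypothetical business logic) and renumber RECORD;
  -- SQL__PROCEDURE__SOURCE__RECORD keeps the original source record value
  outTab = SELECT ROW_NUMBER() OVER
                    ( ORDER BY sql__procedure__source__record, copy_no ) AS record,
                  sql__procedure__source__record,
                  doc_number, amount            -- hypothetical payload fields
             FROM ( SELECT src.*, c.copy_no
                      FROM :inTab AS src
                      CROSS JOIN ( SELECT 1 AS copy_no FROM dummy
                                   UNION ALL
                                   SELECT 2 AS copy_no FROM dummy ) AS c );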


Figure 4.5: Source record information

The second BW transformation also contains a SQL Script (AMDP Script (2)) to verify the transferred data and identify erroneous records. The row with the record ID 4 is identified as an erroneous record. Therefore a new entry with the original record ID from the source object is written to the errorTab. The original record ID is still stored in the column SQL__PROCEDURE__SOURCE__RECORD.

This example explains the purpose of the column SQL__PROCEDURE__SOURCE__RECORD. I'll provide more details about the error handling in paragraph 4.3 »Error Handling«.

Common Parameter

The following parameters are available in all new SQL Script routines created after the upgrade to BW 7.50 SP04:

  • errorTab (table) and
  • I_ERROR_HANDLING (field)

In addition the field SQL__PROCEDURE__SOURCE__RECORD is a member of the importing parameter inTab and the exporting parameter outTab, with the exception of the field routine's exporting parameter outTab.

The error handling related parameters (I_ERROR_HANDLING, errorTab and the additional field SQL__PROCEDURE__SOURCE__RECORD) are only available if the flag Enable Error Handling for Expert Script is set, see paragraph 4.2.3 »Flag – Enable Error Handling for Expert Script«.

There is special handling for existing SAP HANA Expert Scripts which were created before the upgrade to SP04. To preserve the customer code, the flag is not set by default for existing SAP HANA Expert Scripts; therefore the error related parameters are not added to them.

The input parameter I_ERROR_HANDLING is an indicator that marks the current processing step as the error handling step; for further information see paragraph 4.3 »Error Handling«.

The export parameter errorTab is used as part of the error handling to return the erroneous records; for further information see paragraph 4.3 »Error Handling«.

All output table parameters of an AMDP method must be assigned, otherwise the AMDP class is not valid. If you are not using the error handling, the output table parameter errorTab must be assigned by a dummy statement. The following statement can be used to return an empty errorTab:

  errorTab = SELECT '' AS error_text,
                    '' AS sql__procedure__source__record
               FROM dummy
              WHERE dummy <> 'X';

Start- and End-Routine Parameter

The Start-, End- and Expert-Routines all have the same method declaration:

class-methods PROCEDURE
  importing
    value(i_error_handling) type STRING
    value(inTab) type <<Class-Name>>=>TN_T_IN
  exporting
    value(outTab) type <<Class-Name>>=>TN_T_OUT
    value(errorTab) type <<Class-Name>>=>TN_T_ERROR.

Only the type definitions of the structures TN_T_IN and TN_T_OUT differ between the routine types.

In case of the start routine the inTab (TN_T_IN) and the outTab (TN_T_OUT) structures are identical and can be compared with the SOURCE_PACKAGE in the ABAP case. It is possible to adjust the structure of the inTab for a start routine.

In case of the end routine the inTab (TN_T_IN) and the outTab (TN_T_OUT) structures are identical and can be compared with the RESULT_PACKAGE in the ABAP case. It is possible to adjust the structure of the inTab for an end routine.

In case of the SAP HANA Expert Script routine the inTab (TN_T_IN) can be compared with the SOURCE_PACKAGE and the outTab (TN_T_OUT) with the RESULT_PACKAGE in the ABAP case. The inTab always contains all fields from the source object and cannot be adjusted.

Field-Routine Parameter

The procedure declaration is exactly the same as the declaration for the Start-, End- and Expert-Routines:

class-methods PROCEDURE
  importing
    value(i_error_handling) type STRING
    value(inTab) type <<Class-Name>>=>TN_T_IN
  exporting
    value(outTab) type <<Class-Name>>=>TN_T_OUT
    value(errorTab) type <<Class-Name>>=>TN_T_ERROR.

The inTab contains the source field(s) and in addition the columns RECORD and SQL__PROCEDURE__SOURCE__RECORD, see paragraph »Field SQL__PROCEDURE__SOURCE__RECORD«.

Important difference to ABAP based field routines
A field routine in the ABAP context is called row by row; the routine only gets the values of the defined source fields for the current row. In the HANA context a field routine is called once per data package, and the importing parameter (inTab) contains all values of the source field columns, see Figure 4.2.

I’ll provide more information about the difference in processing between SQL Script and ABAP in paragraph 4.2.5 »Field-Routine«.

4.2.3       Flag – Enable Error Handling for Expert Script

For all SAP HANA Expert Scripts created with a release before BW 7.50 SP04 it is necessary to enable the error handling explicitly after the upgrade. To set the flag go to Edit => Enable Error Handling for Expert Script.

To ensure that the existing SAP HANA Expert Script implementations are still valid after the upgrade to BW 7.50 SP04 the method declaration will be left untouched during the upgrade.

After the upgrade to BW 7.50 SP04 all newly created SQL Script routines are prepared for the error handling, and the flag, see Figure 4.6, is set by default.


Figure 4.6: Enable Error Handling for Expert Script 

4.2.4       Start-Routine

Typical ABAP use cases for a start routine are:

  • Delete rows from the source package that cannot be filtered by the DTP
  • Prepare internal ABAP tables for field routines
General filter recommendation
If possible, try to use the DTP filter instead of a start routine. Only use a start routine to filter data if the filter condition cannot be applied in the DTP. Note that ABAP routines and BEx variables can be used in the DTP filter without preventing a push down, see blog »HANA based BW Transformation«.

Using a filter in a start routine is still a valid approach in the push down scenario to reduce unnecessary source data. Figure 4.7 shows the steps to create an AMDP based start routine that filters the inTab (source package) with a filter condition that cannot be applied in the DTP filter settings.


Figure 4.7: AMDP Start Routine Sample
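A start routine filter of this kind can be sketched as follows (the filter field and value are invented for illustration; in a start routine outTab has the same structure as inTab, and the unused errorTab must still be assigned):

METHOD procedure BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

  -- keep only the rows that pass a condition which cannot be applied in the DTP filter
  outTab = SELECT *
             FROM :inTab
            WHERE doc_type <> 'Z9';      -- hypothetical filter condition

  -- error handling is not used here, so return an empty errorTab
  errorTab = SELECT '' AS error_text,
                    '' AS sql__procedure__source__record
               FROM dummy
              WHERE dummy <> 'X';

ENDMETHOD.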

The second ABAP start routine use case (preparing internal tables) is not a recommended practice in the context of a push down approach. Remember that in a push down scenario the data is not processed row by row but in blocks. Therefore we do not recommend using a field routine to read data from a local table as we would in ABAP. Furthermore, the logic of an AMDP field routine differs from an ABAP based field routine, see paragraph 4.2.5 »Field-Routine«. The better way in a push down scenario is to read data from an external source using a JOIN in a routine.
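Instead of preparing an internal table, such a lookup can be expressed as a join directly in a start or end routine. The following sketch assumes a master data table "/BIC/PTK_CUST" with an attribute COUNTRY (both invented for illustration):

  -- enrich the package with a master data attribute via a join
  outTab = SELECT src.doc_number,
                  src.tk_cust,
                  md.country AS cust_country,   -- looked-up attribute
                  src.record,
                  src.sql__procedure__source__record
             FROM :inTab AS src
             LEFT OUTER JOIN "/BIC/PTK_CUST" AS md
               ON md."/BIC/TK_CUST" = src.tk_cust;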

4.2.5       Field-Routine

A field routine is typically used if the business requirement cannot be implemented with standard functionalities: for example, if you want to read data from an additional source such as an external (non-BW) table, or if you want to read data from a DataStore-Object but the source doesn't provide the full key information, which prevents the usage of the standard DSO read rule.

Figure 4.8 shows the necessary steps to create an AMDP Script based field routine. Keep in mind that AMDP based routines can only be created in the Eclipse based BW and ABAP tools. The first step to create an AMDP Script based field routine is the same as for an ABAP based field routine, see (1). If you select the rule type ROUTINE a popup dialog asks for the processing type, see (2). Choose AMDP script to create an AMDP script based field routine. The BW Transformation framework opens the Eclipse based ABAP class editor. For this an ABAP project is needed, see (3). Please note that the popup dialog sometimes opens in the background. The popup lists the assigned ABAP projects. If there is no project you can use the New… button to create one.  After selecting a project the AMDP class can be adjusted. Enter your code for the field routine in the body of the method PROCEDURE, see (4).


Figure 4.8: Steps to create an AMDP Script based field routine

The SQL script based field routine processing is different from the ABAP based routine. The ABAP based routine is processed row-by-row. The SQL script based routine on the other hand is only called once per data package, like all the other SQL script based routines.

Because of this, all values of the adjusted source fields are available in the inTab. For all source field values the corresponding RECORD and SQL__PROCEDURE__SOURCE__RECORD information is available in the inTab.

For SQL Script based field routines the following points need to be considered:

  • The target structure requires exactly one result value for each source value.
  • The outTab and the inTab sort order may not be changed.
  • The result value must be on the same row number as the source value.

In case of using a join operator, pay attention that an inner join can return fewer rows than the inTab contains; use an outer join to preserve all rows.
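A field routine lookup that respects these three points can be sketched like this (the lookup table and attribute are invented; the LEFT OUTER JOIN guarantees exactly one result row per source row, and the field routine outTab carries only the target value and RECORD):

METHOD procedure BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

  -- one result row per inTab row, RECORD and sort order untouched
  outTab = SELECT COALESCE( md.country, '' ) AS country,   -- target field value
                  src.record
             FROM :inTab AS src
             LEFT OUTER JOIN "/BIC/PTK_CUST" AS md         -- assumed lookup table
               ON md."/BIC/TK_CUST" = src.tk_cust
            ORDER BY src.record;

  -- error handling is not used here, so return an empty errorTab
  errorTab = SELECT '' AS error_text,
                    '' AS sql__procedure__source__record
               FROM dummy
              WHERE dummy <> 'X';

ENDMETHOD.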

4.2.6       End-Routine

Typical ABAP use cases for an end routine are post transformation activities, such as:

  • Delete rows which are obsolete after the data transformation, for example redundant rows
  • Cross-line data checks
  • Derive values for a specific column based on the transformation result

From the development perspective the end routine is very similar to the start routine; only the time of execution differs. The start routine runs before the transformation, the end routine afterwards.

4.3      Error Handling

In previous BW 7.50 SPs, switching on error handling in a DTP prevented a SAP HANA push down. As of BW 7.50 SP04 this is no longer the case: enabling error handling in a DTP does not prevent a SAP HANA push down.

A DTP with enabled error handling processes the following steps:

  1. Determine erroneous records
  2. Determine the records semantically assigned to:
    1. the new erroneous records
    2. the erroneous records in the error stack
  3. Transfer the non-erroneous records

Therefore it is necessary to call the transformation twice. The first call determines the erroneous records (step 1) and the second call transfers the non-erroneous records (step 3).

The error handling is only available for data flows with DataSource (RSDS), DataStore-Object classic (ODSO) or DataStore-Object advanced (ADSO) as source object and DataStore-Object classic (ODSO), DataStore-Object advanced (ADSO) or Open Hub-Destination (DEST) as target object. 

4.3.1       Error handling background information

Here is some general background information about the technical runtime objects and the runtime behavior.

DTP with error handling and the associated CalcScenario
In the blog HANA based Transformation (deep dive), in paragraph 2.1 »Simple data flow (Non-Stacked Data Flow)«, I explained that the DTP in a non-stacked data flow reuses the CalculationScenario of the BW transformation. If the error handling is switched on in the DTP, the BW Transformation Framework must enhance the CalculationScenario for the error handling and therefore creates a dedicated CalculationScenario for the DTP. The naming convention is the same as for a DTP in a stacked data flow: the unique identifier of the DTP is used and the prefix DTP_ is replaced by TR_.
Difference regarding record collection between HANA and ABAP

The HANA processing mode collects all records of the current data package that belong to the same semantic group as the erroneous record and writes them to the error stack, see Figure 4.9. All records with the same semantic key in subsequent data packages are also written to the error stack.

The ABAP processing mode writes only the erroneous record of the current data package to the error stack. Subsequent packages are handled in the same way as in the HANA processing mode: all records with the same semantic key are also written to the error stack.

Find the DTP related error stack

The related error stack (table / PSA) for a DTP can be found with the following SQL statement:

  SELECT "TABNAME", "DDTEXT"
    FROM "DD02V"
   WHERE "DDTEXT" LIKE '%<<DTP>>%';

4.3.2       Error handling in a standard BW transformation

Fundamentally, the error handling in the execution mode SAP HANA behaves very similarly to the execution mode ABAP. There are some minor topics to explain regarding the SAP HANA processing artefacts, such as runtime and modelling objects. In the next section we will also discuss differences in how the error handling is executed.

Figure 4.9 provides an example of how error handling works in a BW transformation with standard transformation rules (meaning: no customer SQL script coding).

The business requirement for the BW transformation is to ensure that only data with a valid customer (TK_CUST) is written to the target. A valid customer is one for which customer master data is available; therefore the flag Referential Integrity is set for the TK_CUST transformation rule.

Semantic Groups

In the context of SAP HANA processing, semantic groups are not supported!

The error handling is an exception to this limitation: in combination with the error handling, the semantic group is used to identify records that belong together. You cannot use the error handling functionality to work around the limitation and artificially build semantic groups for SAP HANA processing. The processing data packages are not grouped by the defined semantic groups; the data load process ignores them.

The logic implemented in the sample data flow in Figure 4.9 writes data for a Sales Document (DOC_NUMER) to the target only if all Sales Document Items (S_ORD_ITEM) are valid. In case one item is not valid, all items of the related Sales Document should be written to the error stack. Therefore I chose Sales Document (DOC_NUMER) as semantic group.

The source contains one record with an unknown customer (C4712) for DOC_NUMER = 4712 and S_ORD_ITEM = 10. The request monitor provides information about how many records were written to the error stack; the detail messages provide more information about the reason.


Figure 4.9: Error handling in a standard BW transformation

The initial erroneous record of a group is marked in the error stack. The erroneous data can be adjusted and repaired within the error stack, if possible. If the transaction data or master data are corrected the data can be loaded from the error stack into the data target by executing the Error-DTP. 

4.3.3       Error handling and SQL Script routines

In case SQL script routines are used within a BW transformation, the BW transformation framework sets a flag to identify which processing step (1st or 3rd) is currently being processed. Therefore all SQL script procedure (AMDP method) declarations have been enhanced, see paragraph 4.2.2 »Routine Parameter«. The following parameters are related to the error handling:


  • I_ERROR_HANDLING

The indicator is set to 'TRUE' when the BW transformation framework executes step 1, otherwise it is set to 'FALSE'. When it is 'TRUE', the BW transformation framework expects only the erroneous records in the output parameter errorTab.

  • errorTab

The output table parameter can be used to handover erroneous records during the 1st call.

The error table structure provides two fields:

  • ERROR_TEXT and
  • SQL__PROCEDURE__SOURCE__RECORD
The data input structure (inTab) is enhanced by the field SQL__PROCEDURE__SOURCE__RECORD for all routines (Start-, Field-, End- and Expert-Routine).

The data output structure (outTab) for the Start-, End- and Expert routine is also enhanced by the field SQL__PROCEDURE__SOURCE__RECORD.

The field SQL__PROCEDURE__SOURCE__RECORD is used to store the original record value from the persistent source object. For more information see paragraph »Field SQL__PROCEDURE__SOURCE__RECORD«.

Next I'll explain, using an example, the individual steps of how a BW transformation is processed. As mentioned before, the following steps are processed when the error handling is used:

  1. Determine erroneous records
  2. Determine the records semantically assigned to:
    1. the new erroneous records
    2. the erroneous records in the error stack
  3. Transfer the non-erroneous records

Only steps 1 and 3 must be considered in the SQL script implementation. Step 2 and its sub steps are processed internally by the BW transformation framework.

Figure 4.10 provides an overview of how the error handling is processed. To illustrate the runtime behavior I keep the logic to identify the erroneous records quite simple. The source object contains document item data for two documents (TEST100 and TEST200); for the first one five and for the second one four items are available. The record TEST100, Document Item 50 contains an invalid customer C0815, see (1). To ensure that document information is only written into the target if all items of the document are valid, I set the semantic key to document number (/BIC/TK_DOCNR), see (2).

The procedure coding contains two parts: the first part supplies the data for the errorTab and the second part the data for the outTab, see (3). The procedure is called twice during the processing. The first call collects the erroneous records, see Figure 4.11. Based on the errorTab result and the semantic key, the BW transformation framework determines the corresponding records and writes them to the error stack, see (4). The second procedure call determines the non-erroneous records; the collected erroneous records are removed from the inTab before the second call is executed, see Figure 4.12. The outTab result of the second call is written into the target object, see (5).
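The two-part procedure described above can be sketched as follows (the validity check is simplified to a hard-coded customer; the field names follow the example, everything else is an assumption; both parts are always executed, the framework simply ignores the parameter it does not need in the respective call):

METHOD procedure BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

  -- part 1: identify erroneous records (evaluated by the framework during the
  -- 1st call, when I_ERROR_HANDLING = 'TRUE')
  errorTab = SELECT 'invalid customer' AS error_text,
                    sql__procedure__source__record
               FROM :inTab
              WHERE "/BIC/TK_CUST" = 'C0815';   -- simplified validity check

  -- part 2: transfer the data (evaluated during the 2nd call, after the framework
  -- has removed the erroneous semantic groups from the inTab)
  outTab = SELECT "/BIC/TK_DOCNR",
                  "/BIC/TK_CUST",
                  record,
                  sql__procedure__source__record
             FROM :inTab;

ENDMETHOD.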


Figure 4.10: Error handling with SQL script

Figure 4.11 shows the procedure from the sample above in debug mode during the first call to determine the erroneous records. The parameter I_ERROR_HANDLING is set to TRUE, see (1). Only the first statement, which fetches the result for the output parameter errorTab, is relevant for this step, see (2). In my coding sample the second SELECT statement is also executed, but the result parameter outTab is not used by the caller. For simplicity I have kept the logic as simple as possible; note that from a performance perspective there are better options. The result of the SELECT statement to detect the erroneous records is shown in (3). Based on the SQL__PROCEDURE__SOURCE__RECORD ID the BW transformation framework determines the corresponding semantic records from the source and writes them to the error stack.


Figure 4.11: Error handling with SQL script – Determine erroneous records

The next step is to transfer the non-erroneous records. The BW transformation framework calls the SQL script procedure a second time, see Figure 4.12. Now the parameter I_ERROR_HANDLING is set to FALSE, see (1). From the coding perspective, see (2), only the second part, which produces the outTab result, is relevant. The inTab, see (3), now contains only source data which can be transferred without producing erroneous records.


Figure 4.12: Error handling with SQL script – Transfer the non-erroneous records




  1. Margarita Ocampo

Hi, Torsten, thank you for this blog, it's very helpful.

    I hope you can help me, I’m implementing the error handling DTP, but it shows this error at execution:
    Conversion of SAPscript Text HELP_DOKU to HTML

    numeric overflow: cannot convert to Integer type: 20161010143941000003000 at function __typecast__() (at pos 221) (field:REQUEST)

    Message no. RS_EXCEPTION000

    My Expert Routine is: 

    Do you have any idea why?


    1. Margarita Ocampo

      Hi Torsten,

      Basis installed SP05 yesterday. I executed the process but it still ends with error, this time because of an invalid column name.  it inserts all rows but it ends with that error and it inserts nothing to the error stack.  I’ll ask if Basis can install the notes for SP06.

      Thank you.

      1. Torsten Kessler Post author

up to now I didn't get the point when the error occurs.
I could not see any field in your procedure named REQUEST, so I assume the error occurs when you execute or activate the DTP, right?

        1. Margarita Ocampo

Yes, it is when I execute the DTP. Now, with SP05 it shows another error. I had to make it work with ABAP because of the project plan. So, when the next SP is out I will try it out and let you know if it works.

          Thank you!

  2. Rui Keiti Kikumatsu

    Hello Mr Kessler,

Great blog with wonderful information, but when I try to create a transformation with AMDP in my environment and activate it, ABAP shows an error like this.

SQL error with code '2.048'; see the following SQL message:
    => column store error: fail to create scenario: [34011] Inconsistent calculation mo
    => del;calculationNode (OPERATION.FUNCTION_CALL.OUTPUT):Attribute REQUID is missing
    => in node OUTTAB,Details (Errors): – calculationNode (OPERATION.FUNCTION_CALL.OUT
    => PUT): Attribute REQUID is missing in node OUTTAB.

The AMDP class activates without errors but the transformation is still inactive.

I opened an incident so that SAP can help me too.

    Thanks and best regards,
    Rui Keiti Kikumatsu

      1. Rui Keiti Kikumatsu

        Hi Torsten,

        My incident is 396164 / 2016, but after last interaction with SAP Support, I installed the last version of SNOTE 2385163 and now my transformation was activated without errors.

        Thanks for your time and concern to fix this error.

        Best regards,
        Rui Keiti Kikumatsu

  3. Martin Chambers

    Hi Torsten,
I have been looking for some time before I found your blog. It was sorely needed.

I have two questions regarding the field routines.

    For SQL Script based field routines the following points need to be considered:

    • The target structure requires for each source value exactly one value. The outTabinTab
    • Sort order may not be changed. The result value must be on the same row number as the source value

    This may simply be a matter of formatting. Are these rather three than two bullet points? I.e.

    • The target structure requires for each source value exactly one value.
    • The outTab and the inTab sort order may not be changed.
    • The result value must be on the same row number as the source value


    In case of using a join operator pay attention that inner join operations could lead in a subset of rows.

Unfortunately, I do not understand this sentence at all. Do you mean that an inner join could create additional data records? (I thought only an outer join would do that.) Or something else? Could you reformulate it or perhaps write it in German?


    1. Torsten Kessler Post author

      thx for your feedback.

      1. Yes it’s only a formatting issue. It’s fixed now.

2. No, an inner join can return fewer rows.
A good visualized sample can be found here:

      To compare that sample with our case the field routine source table is the Students table and the Advisors table is used as a join partner, for example a master data table.


  4. Roman Bukarev

Torsten, can you please shed some light on how the system determines between the processing modes "Parallel SAP HANA Extraction" and "Parallel Processing with partial SAP HANA Execution"?

    I’ve got the former in Cube->ADSO transformation, and the latter in the ADSO->Cube. Does the target type determine the processing mode to use?

    1. Torsten Kessler Post author

      the processing mode “Parallel Processing with partial SAP HANA Execution” means that one part  of the data flow (transformation) is pushed down and a further part is processed on the ABAP side.

But keep in mind the push down must happen first. Once the data is on the ABAP server, a push down within the data flow above is no longer possible.

      Depending on the ADSO type it could be necessary to load the data into the ABAP stack to process some internal steps (like SID generation). In that case the system offers the processing mode “Parallel Processing with partial SAP HANA Execution”.


  5. Rui Keiti Kikumatsu

    Hi Torsten,

I have a new problem using an INNER JOIN with the master data table of an InfoObject (I need to fill some attributes in the transformation). If I create the class without the JOIN the transformation activates, but if I insert the JOIN there are errors and I can't activate it.

    “column store error: fail to create scenario: [34011] Inconsistent calculation model;scenario:Referenced template scenario SAPHBD: in node OPERATION.FUNCTION_CALL not valid,Details (Errors): – scenario: Referenced template scenario SAPHBD: in node OPERATION.FUNCTION_CALL not valid”

    My script looks like the snippet below.

    Thanks for any help.
    Rui Keiti Kikumatsu

    outTab = SELECT
      ...
      FROM :inTab as intab


    1. Torsten Kessler Post author

      Can you activate the AMDP class?

      I’m not sure which ABAP release you are on, but I cannot see the errorTab parameter in your script snippet.

      Sometimes the impact handling does not raise an event to recreate the dependent runtime objects.
      That means, if you change the method content of an AMDP class (method), it is necessary to reactivate the transformation as well.

      From your posting I understand you get an error when activating the transformation, right?
      In case the AMDP class can be activated and the transformation throws an error, please first check whether all related notes (see my blog) are implemented in the latest version.

      We are continuously enhancing these notes.

      Maybe it is time for an incident!?! ;-[
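
      For orientation, this is roughly what a generated AMDP routine body looks like on a release that ships the errorTab parameter. The parameter names (inTab, outTab, errorTab) come from the generated class; the field name "/BIC/ZAMOUNT", the check, and the assumption that the technical record fields are part of inTab are purely illustrative:

      ```sqlscript
      -- Sketch of a generated AMDP routine body (simplified).
      -- inTab / outTab / errorTab are the generated parameters;
      -- "/BIC/ZAMOUNT" and the error condition are illustrative.
      METHOD procedure BY DATABASE PROCEDURE FOR HDB
                       LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

        -- regular result: pass all records through
        outTab = SELECT * FROM :inTab;

        -- erroneous source records; the framework moves them to the error stack
        errorTab = SELECT 'negative amount'              AS error_text,
                          record,
                          sql__procedure__source__record
                     FROM :inTab
                    WHERE "/BIC/ZAMOUNT" < 0;

      ENDMETHOD.
      ```

      A script without the errorTab export in its signature will not match the generated method definition on such a release, which is one way an activation error like the one above can arise.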


  6. Erik Fröberg

    Hi Torsten,

    Since the generated AMDP classes for transformation routines are created in $TMP, I get errors when trying to edit the PROCEDURE methods, since we have the local package as “Not modifiable” in “System Change Option” in SE03 in our dev. system.

    Do we have to make the local package “modifiable” in our dev. system? In general, this is not allowed in our project. Or is there a way to edit the method without switching the system change option?

    Best regards,



    1. Torsten Kessler Post author

      Hi Erik,

      first, why did the admin set the package $tmp to “not modifiable” in the dev system?

      Does he or she not trust their own developers? ;-}


      Yes, it is necessary to set $tmp to modifiable. The AMDP class is a temporary object, and we need it to enable the customer to place the SQL Script code.

      The class itself will not be transported! That’s the reason why the class is not assigned to a dev package.


      1. Julian Phillips

        Hi Torsten,

        Erik mentioned your question to me – and so I thought I would answer (please note the intention is not to hijack this discussion in any way, but to answer your question).

        As the development architect for a large multinational, I am very concerned to preserve the consistency of development objects across all of our development environments. Also, the D systems represent the source system for all of our development work, so once one becomes inconsistent, there is no clear system we can refresh it from. We are continually rolling out development, so there is no window when development is not in progress in the D systems, and they cannot be refreshed even if there were a suitable source to refresh from.

        So my question back to you is: why do you think it is OK for developers to create temporary objects (that then risk being integrated with non-temporary objects)? This risks errors on transport import, and it means unit testing in the D systems is not necessarily representative of unit test results in the T systems, so more defects and lower code quality result.

        These are my concerns. It’s not that I don’t trust our developers, but as we have over 80 of them, and some are offshore, I think you can see the scope of my concerns.

        1. Torsten Kessler Post author


          you’re welcome to join the discussion.

          But please explain: what can a developer do in the $tmp package that he cannot do in any other development package?

          As I explained in my reply before, we use the $tmp package as a temporary location for the SQL code.

          In case you want to keep the $tmp package not modifiable, you can try the following workaround.

          But keep in mind this is only a possible workaround (I have never tested it), and we strongly recommend using the $tmp package.

          Work around:

          • Create the transformation
          • Create the procedure (Expert / Start / End)
          • Assign the class manually to your development package (but do not transport the class)
            • Here I’m not sure whether it is possible to assign the BW-generated “/BIC/…” object to your dev package
          • Modify the method in the AMDP class (open the class from the transformation editor, do not open the class manually)
          • Save and activate the class
          • Save the transformation (at this point the transformation framework reads the method definition from the AMDP class and adds it to the transformation metadata)
          • Activate the transformation

          Keep in mind:

          • In the follow-up systems (Q and Prod.) the generated classes are assigned to the $tmp package.






  7. Joerg Boeke


    Hi Torsten,

    I currently have a problem deleting an InfoObject, and I assume it is related to the new HANA transformations; maybe you can help me with it.

    We’re running BW 7.50 SPa and HANA 122, and for that particular InfoObject I did not create a HANA transformation.

    In the table you referenced, RSTRANSTEPSCRIP, I do not find any entry for that InfoObject, but there is one in table RSDHAXREF (cross references).

    That entry will not allow me to delete the InfoObject.

    Is this related to HANA transformations, or do you have an idea how to delete that IObj?





    1. Torsten Kessler Post author


      from a far-away perspective everything looks like “works as designed”.

      What is the content of the table RSDHAXREF?

      Did you check the related (referenced) HAP?


  8. Pavol Feranec

    Hi Thorsten,


    very good blog. It helped me a lot to design transformations using AMDP Field routines.

    There is, however, a bug which causes the result values of AMDP field routines not to be properly mapped to the data package, so the outcome has mixed-up values. Developers should install OSS Note 2467323 to fix AMDP field routines.

    A “side effect” of the OSS Note is that SAP also adds the technical fields RECORD and SQL__PROCEDURE__SOURCE__RECORD to field routines. All AMDP field routines then have to be adjusted manually, as they cause syntax errors without the technical fields.
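
    In other words, after the note a field routine has to select the two technical fields into outTab as well. A minimal sketch of what such an adjusted routine body could look like (the source and result field names "/BIC/ZSOURCE" and "/BIC/ZRESULT" are made up for illustration):

    ```sqlscript
    -- Field routine after the note: RECORD and SQL__PROCEDURE__SOURCE__RECORD
    -- must be passed through to outTab, otherwise the class has syntax errors.
    -- "/BIC/ZSOURCE" / "/BIC/ZRESULT" are illustrative field names.
    outTab = SELECT UPPER( "/BIC/ZSOURCE" )         AS "/BIC/ZRESULT",
                    record,
                    sql__procedure__source__record
               FROM :inTab;
    ```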


    If you can update your blog concerning Field routines that would be great. I use it constantly throughout my latest developments. 🙂


    I was also only able to use 2 transformations (when they contain AMDPs) with an InfoSource between the DataSource and the aDSO. With a third one, it showed lots of strange errors when activating the DTP. Maybe this is also solved by the OSS Note…


    Thanks and keep up the good work.



    1. Torsten Kessler Post author


      I’ve updated the blog regarding the parameter for field routines.

      But it’s difficult to keep the blog in sync with the development ;-{

      Or I would need one blog for each release and SP ;-}


      Regarding your stacked data flow issue: technically it should be possible to use more than one InfoSource in a data flow, also if the source is a DataSource.

      In case the error is still present, please create an incident.

      But please keep in mind, we recommend using no more than 2 InfoSources in one data flow.

      The fewer InfoSources, the better.


  9. Sunitha Mylapuri


    Hi Torsten,

    Good Morning. Thanks for nice blog.

    We are on BW 7.5 SP06. I have implemented an AMDP script for a BW transformation end routine. I have three mapped fields and 5 derived fields in the ADSO mapping. Through the end routine I have to update these 5 fields.

    I was planning to implement this process through AMDP scripting to see how it works for this scenario. I have succeeded in getting all the mapped fields into the DTP, but the AMDP is not bringing in the derived fields. No idea what the issue is.

    So, to see what is getting into the outTab, I thought of debugging the AMDP script, but I have not succeeded in debugging the process. I am able to set the AMDP breakpoints, and I did the debug configuration for the procedure.

    I went to RSA1 and started the DTP, but I neither got the Debugger perspective popup nor did debugging start.


    Your help is really appreciated. I am already past the deadline on this object.





  10. Sunitha Mylapuri

    Hi Torsten,

    Happy to see your response.

    I have followed the blog’s debug process, but somehow the Debugger perspective popup is not triggering at the DTP.


    I am activating the breakpoints in the method. Please check my screen.




    1. Torsten Kessler Post author



      what I see from the screenshot is that the request is running into an error (the overall request status is red).

      Can you post the error message? Maybe the error occurs before the SQL Script is called!?!


  11. Bo Zhang

    Hi Torsten,

    I created an Open ODS View with a transformation and tried to create an AMDP routine, but when I display the ODS view data, the AMDP routine does not work, no matter whether it is a start/end/field/expert routine. Apart from the AMDP routines, the other transformation options do work. Can I conclude that AMDP routines are not supported in transformations for Open ODS Views?

    Many Thanks,


  12. Debraj Ray

    Hi Torsten,

    Thanks for such a nice detailed blog which is very helpful.

    I am creating an AMDP script in the end routine of a BW transformation, but I am getting a few errors while activating the DTP. Kindly note the system is BW 7.5 SP9, and there are no issues while using AMDP expert routines.

    1.  Error 1: While activating the DTP, the error message is “Info object REQUEST not available in version A. An exception with the type CX_RSD_IOBJ_NOT_EXIST was raised”. Manual temporary workaround: after debugging through the ST22 dump, I manually maintained new ‘REQUEST’ and ‘DATAPAKID’ entries in the RSDIOBJ & RSDDPA tables and was able to get past the error. Please note the standard technical characteristics 0REQUEST and 0DATAPAKID were already maintained in the tables and in active version. Any ideas how we can fix this error in a recommended way?
    2.  Error 2: After getting past error 1 with the manual workaround, the next message while activating the DTP is

    => Node (OPERATION.OUTPUT.PROJ) -> attribut”

    => s -> attribute: Invalid datatype, length or/and scale are missing: ty

    => igits=0,Details (Errors): – calculationNode (OPERATION.OUTPUT.PROJ) -> attribute

    => column store error: fail to create scenario: [34011] Inconsistent calculation mo

    SQL error with code ‘2,048’. See the following SQL message:


    I have selected a few fields to be updated using the end routine script. When I choose all the fields as ‘target fields for end routine’, the DTP activates successfully.

    Any idea how I can work around this? I only want to update a few (not all) fields using the end routine.


    Let me know if you would like any further details on the error message.

    Many thanks.



    1. Torsten Kessler Post author


      please never change the content of SAP internal tables without a request from SAP!!!

      Adding some data in the tables RSDIOBJ and/or RSDDPA might “fix” the current issue, but it could also generate unwanted side effects.


    1. Debraj Ray

      Hi Torsten,

      Thanks for the reply.

      I have installed the notes recommended for SP9.

      This seems to be an isolated issue that only occurs when the target is an SPO DSO. Manually removing the unwanted rules in the RSTRANRULE table for the tunneling transformation leading to each of the partitions of the SPO resolves the issue, but obviously this is not the recommended way.

      We have logged an incident 604969/2017 for this.



      1. Debraj Ray

        Hi Torsten,

        Hope you are well.

        Would like to keep you informed that the incident mentioned above (604969/2017) is taking quite some time to resolve. Any chance you could have a look, please?



  13. Mohit Agrawal

    Hi Torsten,

    I wanted to understand why we don’t have the option to write AMDP code in DTP filters. If my transformation is HANA-executable and I now write ABAP filter code in a DTP filter routine, would it not circle back to the ABAP server when executing?




    1. Torsten Kessler Post author



      we do not push down the filter by coding.

      The DTP filter values are calculated in a pre-step. The DTP filter values are then added to the SQL statement which selects the data from the source object.

      A DTP filter therefore does not prevent a push-down, even if the filter is implemented in ABAP or based on a BEx variable.

      You can find the DTP filter values in the generated SQL statement (INSERT AS SELECT). Search for the placeholder:

      PLACEHOLDER’=(‘<HAP-Name>.$$filter$$’ …
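
      Schematically, the generated statement then looks like the sketch below. Only the $$filter$$ placeholder itself comes from the generated statement; the target, scenario, and filter condition are invented placeholders:

      ```sqlscript
      -- Schematic sketch only; the real INSERT AS SELECT is generated by the DTP.
      INSERT INTO "/BIC/A<TARGET>2"
        ( SELECT ...
            FROM "<HAP calculation scenario>"
                 ( 'PLACEHOLDER' = ( '<HAP-Name>.$$filter$$',
                                     '"CALYEAR" = ''2017''' ) ) );
      ```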


  14. Klaus Steinbach

    Hi Torsten,

    thanks for the nice blog and information provided.

    What isn’t clear to me is how I should model the following situation:

    I have a calculation view or procedure A developed in HANA with complicated and intensive logic. I would like to call it inside a transformation expert routine (AMDP) B and also fill the errorTab.

    Currently we use a data source as source which is built on the calculation view A with data extraction “Directly from source system”.

    The transformation is a 1:1 and stores the whole result in an ADSO C.

    The main thing we are looking for is to add lines to an errorTab, which seems like not supported for data source – directly from source system.

    Secondly, it would be beneficial if everything were pushed down.

    Do you have any advice on how to model this situation?

    Thanks & Kind Regards,


    1. Torsten Kessler Post author


      I didn’t quite get your problem.

      In case you want to consume the calculation view or the procedure within the expert script, you can consume both:

      • the CalculationView in a normal SELECT statement
        • outTab = SELECT … FROM <calcView> …
      • a procedure via CALL
        • CALL procedure( :inTab, … , outTab );

      Instead of outTab as the target you can also use the errorTab.


      If you want to use the calcView as a source object for a transformation, use a DataSource to extract the view. That’s the correct way.

      What do you mean by:

      The main thing we are looking for is to add lines to an errorTab, which seems like not supported for data source – directly from source system.

      I struggle a bit with the word ADD. The purpose of the errorTab is to collect erroneous source records. That means it only makes sense to write source records into the errorTab.
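
      Combining the two options, an expert routine sketch could look roughly like the following. The view name "MYPKG::CV_ENRICH", the procedure name "MYPKG::PR_ENRICH", the field "/BIC/ZSTATUS", and the assumption that the technical record fields are part of inTab are all invented for illustration:

      ```sqlscript
      -- Option 1: consume a calculation view in a plain SELECT
      -- ("MYPKG::CV_ENRICH" is an invented name)
      outTab = SELECT *
                 FROM "_SYS_BIC"."MYPKG::CV_ENRICH";

      -- Option 2: call a procedure instead
      -- CALL "MYPKG::PR_ENRICH"( :inTab, outTab );

      -- Erroneous source records go to the errorTab, assuming the
      -- technical record fields are available in inTab on this release
      errorTab = SELECT 'check failed' AS error_text,
                        record,
                        sql__procedure__source__record
                   FROM :inTab
                  WHERE "/BIC/ZSTATUS" = 'E';
      ```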


        1. Torsten Kessler Post author


          you can create new lines within your SQLScript, but we need the fields RECORD and SQL__PROCEDURE__SOURCE__RECORD to identify the source record.



      1. Klaus Steinbach

        Hi Torsten,

        I have a source with 2 entries and just one field: 1) DATA, 2) ERROR.

        Now I call a calculation view inside my AMDP if the source entry is DATA, and fill the errorTab with a dummy entry if the source entry is ERROR.

        I see the DATA from the calculation view processed perfectly into the target. However, I do not find the dummy error entry I produce.

        This is my code for the errorTab:

        errorTab = SELECT
            'dummy error' AS ERROR_TEXT
          FROM :inTab as i
          WHERE i."/BIC/ZKS_ID" = 'ERROR' and
                :i_error_handling = 'TRUE';

        I cannot find an error table with this statement; I guess it should be created under my user.

          SELECT *
            FROM "SAPEBH"."DD02V"
           WHERE "DDTEXT" like '%DTP%' and tabname like '/BIC/B%'

        What am I missing?


        Thanks for your support.

  15. Klaus Steinbach

    Update: Additionally, pushing the error stack button of the DTP gives the message that it does not exist. (This page does not let me edit the comment above…)


    1. Torsten Kessler Post author


      first, do not use this part

      :i_error_handling = 'TRUE'

      in your WHERE condition.

      We call the routine several times to identify the erroneous records and the semantically assigned records.

      While collecting the erroneous records, we only pick up the errorTab.


      After collecting the erroneous records, we call the routine again to get the outTab result for the correct records. During this call the inTab is filtered by the erroneous records.

      Which release/SP are you running?

      Did you check that all relevant notes are implemented in the latest version?
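
      Applied to the snippet above, the errorTab assignment would then drop the i_error_handling condition and keep only the semantic check. The field name is taken from the snippet; the technical record fields are assumed to be available in inTab:

      ```sqlscript
      -- Fill errorTab unconditionally; the framework decides per call
      -- whether it evaluates the errorTab or the outTab.
      errorTab = SELECT 'dummy error' AS error_text,
                        i.record,
                        i.sql__procedure__source__record
                   FROM :inTab AS i
                  WHERE i."/BIC/ZKS_ID" = 'ERROR';
      ```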





Leave a Reply