
HANA based BW Transformation – New features delivered by 7.50 SP04



This blog is part of the blog series HANA based BW Transformation. 

The following new features are shipped with BW 7.50 Support Package 04 (feature pack):

  • SQL Script (AMDP) based Start-, End-, and Field Routines
  • Error Handling

4.1 Mixed implementation (HANA / ABAP)

In a stacked data flow it is possible to mix HANA-executed BW transformations (standard push down or SQL Script) and ABAP routines. In such a mixed scenario it is important that the lower-level BW transformations are capable of a HANA push down. Lower level means the transformation is executed closer to the source object.

Figure 4.1 shows a stacked data flow with one InfoSource in between, see (1). The upper BW transformation (2) contains an ABAP start routine, therefore only the ABAP runtime is supported. In the lower BW transformation (3) only standard rules are used, therefore both the HANA and ABAP runtime are supported.

Despite the fact that an ABAP routine is embedded in the data flow, the DTP still supports the SAP HANA execution: the SAP HANA execution flag is set, see (4), and the processing mode switch is set to (H) Parallel Processing with partial SAP HANA Execution, see (5).


Figure 4.1: HANA and ABAP mixed data flows

4.1.1       Restriction in mixed data flows (Added on 03/07/2017)

In a mixed data flow (SAP HANA and ABAP processing) it is not possible to enable the error handling. For a mixed data flow, the DTP flag SAP HANA Execution is set and the flag is grayed out. That means the first part of the data flow must be executed in the HANA processing mode.

The attempt to activate the error handling results in the message in Figure 4.1b.

Figure 4.1b: Mixed data flow and error handling

If the error handling is absolutely necessary, an intermediate persistence may have to be integrated into the data flow.

4.2      Routines

SAP Help: What’s new – Routines in Transformations?

With BW 7.50 SP04 all BW Transformation routines can be implemented in ABAP or in SQL Script. Figure 4.2 shows the available routine types in a BW transformation context.


Figure 4.2: Available Routines in BW transformations

With BW 7.50 SP04 the concept, and therefore the menu structure, to create or delete a routine has changed. With BW 7.50 SP04 all routines (Start-, Field-, End- and Expert routine) can be implemented in ABAP or based on SQL Script. It is not possible to mix ABAP and SQL Script routines within one transformation.


Figure 4.3: Routines in BW transformation

The transformation framework always tries to offer both execution modes, ABAP and HANA. For more information see the main blog of this series.

When you implement the first routine of a BW transformation, the system asks for the implementation type (ABAP or SQL Script (AMDP Script)). Figure 4.4 shows the different routine implementation types and the impact of the selected implementation type on the execution mode.


Figure 4.4: Routine implementation type

Initially both execution modes (1), ABAP and HANA, are possible (unless you are using a feature which prevents a push down). The implementation type decision for the first routine within a BW transformation sets the implementation type for all further routines within this BW transformation. The dialog (2) only comes up for the first routine within a BW transformation. If you choose ABAP routine for the first routine, the Runtime Status changes from ABAP and HANA runtime are supported to Only ABAP runtime is supported (3). If you choose AMDP script for the first routine, the Runtime Status changes to Only HANA runtime is supported (4).

4.2.1       General routine information

For each SQL Script (Start, End, Field and Expert) routine a specific AMDP – ABAP class is created. For more information about the AMDP – ABAP class see paragraph »The AMDP Class« in the initial blog »HANA based BW Transformation« of this blog series.

Only the method body (including the method declaration) is stored in the BW transformation metadata. You can find the source code of all BW transformation related routines (methods / procedures) in the table RSTRANSTEPSCRIPT.

Table replacement for expert routines
Up to BW 7.50 SP04 the procedure source code for SAP HANA Expert Scripts is stored in the table RSTRANSCRIPT. With BW 7.50 SP04 the storage location for AMDP based routines has changed: the source code for all AMDP based routines (in the context of a BW transformation) is now stored in the table RSTRANSTEPSCRIPT.

The column CODEID provides the ID for the ABAP class name. To get the full ABAP class name it is necessary to add the prefix “/BIC/” to the ID.
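As a sketch, the lookup could look like the following SQL statement. The key column name (TRANID) is an assumption and may differ in your release; check the table definition if it does.

   -- Sketch: look up the generated AMDP class name for a transformation.
   -- The key column name TRANID is an assumption, not confirmed above.
   SELECT "CODEID",
          '/BIC/' || "CODEID" AS "ABAP_CLASS_NAME"
     FROM "RSTRANSTEPSCRIPT"
    WHERE "TRANID" = '<transformation ID>';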

The generated AMDP ABAP classes are not transported. Only the metadata, including the method source code, is transported. The AMDP ABAP classes are generated in the post-processing transport step in the target system during the BW transformation activation.

4.2.2       Routine Parameter

Together with the routines (Start-, End- and Field routines), error handling was also delivered with BW 7.50 SP04. As a result the method declaration has changed, including for the SAP HANA Expert Script. To keep existing SAP HANA Expert Scripts valid, the method declaration is not changed during the upgrade to SP04; for more information see paragraph 4.2.3 »Flag – Enable Error Handling for Expert Script«.

Field SQL__PROCEDURE__SOURCE__RECORD

The field SQL__PROCEDURE__SOURCE__RECORD is part of all structured parameters. The field can be used to store the original source record value of a row.

Figure 4.5 shows an example of how to handle record information during the transformation and the error handling. In the example data flow the source object is a classic DataStore-Object (ODSO).

The sample data flow uses two transformations; both implement a SQL Script routine (Start-, End- or Expert routine).

The inTab of the first SQL Script (AMDP Script (1)) contains information about the source data. In this example the source object provides technical information to create a unique identifier for each row. If you read data from the active table (see DTP adjustments) of an ODSO, it is not possible to get the necessary information from the source; in this case both columns RECORD and SQL__PROCEDURE__SOURCE__RECORD are initial. The SQL Script does not contain logic to handle erroneous records.

If the source provides technical information to create a unique identifier, both columns RECORD and SQL__PROCEDURE__SOURCE__RECORD are populated. In the inTab the content of both columns is the same. The columns are created by concatenating the technical fields REQUID, DATAPAK and RECORD for an ODSO, and REQUEST TSN, DATAPAK and RECORD for an ADSO.
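Conceptually, the derivation of such an identifier can be sketched as follows. The column names are illustrative; the actual technical names of the generated fields may differ.

   -- Sketch (ODSO source): build a unique row identifier by concatenating
   -- the technical fields described above. Column names are illustrative.
   SELECT "REQUID" || "DATAPAKID" || "RECORD"
            AS "SQL__PROCEDURE__SOURCE__RECORD",
          src.*
     FROM :inTab AS src;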

The field REQUEST (see Source in Figure 4.5) cannot be used as an ordering criterion because of its generated values. Therefore the related SID (see /BI0/REQUID in Figure 4.5) is used.

Figure 4.5: Source record information

The second BW transformation also contains a SQL Script (AMDP Script (2)) to verify the transferred data and identify erroneous records. The row with the record ID 4 is identified as an erroneous record. Therefore a new entry with the original record ID from the source object is written to the errorTab. The original record ID is still stored in the column SQL__PROCEDURE__SOURCE__RECORD.

For this example, the business logic requires multiplying some source rows. To get a uniquely sortable column, the column RECORD must be recalculated with new unique sorted values. For the multiplied records, the source record information in the column SQL__PROCEDURE__SOURCE__RECORD is left untouched.

The field RECORD has a character-based data type (type C, length 56) and not a numeric-based data type! That means the values you create to determine the sort order must respect that.

Figure 4.6 shows in the upper part a SQL call to generate a new numeric value. The two tables below the SQL statement highlight the problem with that kind of values: within the transformation logic (the CalculationScenario) the routine result is sorted by the column RECORD, which leads to an unexpected sort order (Sorted result), because the values are interpreted as character values and not as numbers.

The second SQL statement generates the unique values as character-like values. Now the sorted result is in the expected order.
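The zero-padding idea can be sketched as follows. This is a sketch, not the exact statement from the figure; the columns DOC_NUMER and S_ORD_ITEM are taken from the sample data flow for illustration.

   -- Sketch: generate unique, character-sortable RECORD values.
   -- RECORD is type C length 56, so the generated number is left-padded
   -- with zeros; otherwise '10' would sort before '2'.
   outTab = SELECT LPAD( ROW_NUMBER() OVER
                         ( ORDER BY "SQL__PROCEDURE__SOURCE__RECORD" ),
                         56, '0' ) AS "RECORD",
                   "SQL__PROCEDURE__SOURCE__RECORD",
                   "DOC_NUMER",
                   "S_ORD_ITEM"
              FROM :inTab;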

Figure 4.6: Unique RECORD value

This example explains the purpose of the column SQL__PROCEDURE__SOURCE__RECORD. I’ll provide more details about the error handling in paragraph 4.3 »Error Handling«.

Common Parameter

The following parameters are available in all new SQL Script routines created after the upgrade to BW 7.50 SP04:

  • errorTab (table) and
  • I_ERROR_HANDLING (field)

In addition, the field SQL__PROCEDURE__SOURCE__RECORD is a member of the importing parameter inTab and the exporting parameter outTab, with the exception of the field routine’s exporting parameter outTab.

The error handling related parameters (I_ERROR_HANDLING, errorTab and the additional field SQL__PROCEDURE__SOURCE__RECORD) are only available if the flag Enable Error Handling for Expert Script is set, see paragraph 4.2.3 »Flag – Enable Error Handling for Expert Script«.

There is a special handling for existing SAP HANA Expert Scripts created before the upgrade to SP04. To preserve the customer code, the flag is not set by default for existing SAP HANA Expert Scripts. Therefore the error-related parameters are not added for existing SAP HANA Expert Scripts.

The input parameter I_ERROR_HANDLING is an indicator that marks the current processing step as the error-handling step; for further information see paragraph 4.3 »Error Handling«.

The export parameter errorTab is used as part of the error handling to identify erroneous records; for further information see paragraph 4.3 »Error Handling«.

All output table parameters of an AMDP method must be assigned, otherwise the AMDP class is not valid. In case you are not using the error handling, the output table parameter errorTab must be assigned by using a dummy statement. The following statement can be used to return an empty errorTab:

  errorTab = SELECT '' AS "ERROR_TEXT",
                    '' AS "SQL__PROCEDURE__SOURCE__RECORD"
               FROM DUMMY
              WHERE DUMMY <> 'X';

Start- and End-Routine Parameter

The Start-, End- and Expert routine all have the same method declaration:

  class-methods PROCEDURE
    importing
      value(i_error_handling) type STRING
      value(inTab) type <<Class-Name>>=>TN_T_IN
    exporting
      value(outTab) type <<Class-Name>>=>TN_T_OUT
      value(errorTab) type <<Class-Name>>=>TN_T_ERROR.

Only the type definitions of the structures TN_T_IN and TN_T_OUT differ between the routines.

In case of the start routine, the inTab (TN_T_IN) and the outTab (TN_T_OUT) structures are identical and can be compared with the SOURCE_PACKAGE in the ABAP case. It is possible to adjust the structure of the inTab for a start routine.

In case of the end routine, the inTab (TN_T_IN) and the outTab (TN_T_OUT) structures are identical and can be compared with the RESULT_PACKAGE in the ABAP case. It is possible to adjust the structure of the inTab for an end routine.

In case of the SAP HANA Expert Script routine, the inTab (TN_T_IN) can be compared with the SOURCE_PACKAGE and the outTab (TN_T_OUT) with the RESULT_PACKAGE in the ABAP case. The inTab always contains all fields from the source object and cannot be adjusted.

Field-Routine Parameter

The procedure declaration is exactly the same as the declaration for the Start-, End- and Expert routine:

  class-methods PROCEDURE
    importing
      value(i_error_handling) type STRING
      value(inTab) type <<Class-Name>>=>TN_T_IN
    exporting
      value(outTab) type <<Class-Name>>=>TN_T_OUT
      value(errorTab) type <<Class-Name>>=>TN_T_ERROR.

The inTab contains the source field(s) and in addition the columns RECORD and SQL__PROCEDURE__SOURCE__RECORD, see paragraph »Field SQL__PROCEDURE__SOURCE__RECORD«.

Important difference to ABAP based field routines
A field routine in the ABAP context is called row by row. The routine only gets the values of the defined source fields for the current processing line. In the HANA context a field routine is called once per data package, and the importing parameter (inTab) contains all values of the source field columns, see Figure 4.2.

I’ll provide more information about the difference in processing between SQL Script and ABAP in paragraph 4.2.5 »Field-Routine«.

4.2.3       Flag – Enable Error Handling for Expert Script

For all SAP HANA Expert Scripts created with a release before BW 7.50 SP04 it is necessary to enable the error handling explicitly after the upgrade. To set the flag go to Edit => Enable Error Handling for Expert Script.

To ensure that the existing SAP HANA Expert Script implementations are still valid after the upgrade to BW 7.50 SP04 the method declaration will be left untouched during the upgrade.

After the upgrade to BW 7.50 SP04, all newly created SQL Script routines are prepared for the error handling and the flag (see Figure 4.6) is set by default.


Figure 4.6: Enable Error Handling for Expert Script 

4.2.4       Start-Routine

Typical ABAP use cases for a start routine are:

  • Delete rows from the source package that cannot be filtered by the DTP
  • Prepare internal ABAP tables for field routines

General filter recommendation
If possible, use the DTP filter instead of a start routine. Only use a start routine to filter data if the filter condition cannot be applied in the DTP. ABAP routines and BEx variables can also be used in the DTP filter without preventing a push down, see blog »HANA based BW Transformation«.

Using a filter in a start routine is still a valid approach in the push-down scenario to reduce unnecessary source data. Figure 4.7 shows the steps to create an AMDP based start routine that filters the inTab (source package) with a filter condition that cannot be applied in the DTP filter settings.
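A minimal sketch of such a start routine body follows. The filter condition, the column names and the errorTab field list are assumptions for illustration, not taken from the figure.

   -- Sketch of an AMDP start routine body: keep only rows matching a
   -- condition across two columns, which a plain DTP filter cannot express.
   -- All names here are illustrative.
   outTab = SELECT *
              FROM :inTab
             WHERE "/BIC/AMOUNT" <> 0
                OR "/BIC/DOCTYPE" = 'A';

   -- errorTab must always be assigned; return it empty when unused.
   errorTab = SELECT '' AS "ERROR_TEXT",
                     '' AS "SQL__PROCEDURE__SOURCE__RECORD"
                FROM DUMMY
               WHERE DUMMY <> 'X';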


Figure 4.7: AMDP Start Routine Sample

The second use case for an ABAP start routine is not a recommended practice in the context of a push-down approach. Remember that the data in a push-down scenario is not processed row by row; it is processed in blocks. Therefore we do not recommend using a field routine to read data from a local table as we do in ABAP. Furthermore, the logic of an AMDP field routine differs from an ABAP based field routine, see paragraph 4.2.5 »Field-Routine«. The better way in a push-down scenario is to read data from an external source using a JOIN in a routine.
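The set-based alternative can be sketched like this: enrich the package with a JOIN instead of a per-row lookup. The lookup table and all column names are assumptions for illustration.

   -- Sketch: set-based lookup via LEFT OUTER JOIN instead of a
   -- row-by-row read. "/BIC/PTK_CUST" stands for a master data table;
   -- all names here are illustrative.
   outTab = SELECT src."RECORD",
                   src."SQL__PROCEDURE__SOURCE__RECORD",
                   src."/BIC/TK_CUST",
                   lkp."/BIC/REGION"
              FROM :inTab AS src
              LEFT OUTER JOIN "/BIC/PTK_CUST" AS lkp
                ON  lkp."/BIC/TK_CUST" = src."/BIC/TK_CUST"
                AND lkp."OBJVERS" = 'A';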

4.2.5       Field-Routine

A field routine is typically used if the business requirement cannot be implemented with standard functionality: for example, if you want to read data from an additional source such as an external (non-BW) table, or if you want to read data from a DataStore-Object but the source doesn’t provide the full key information, which prevents the usage of the standard DSO read rule.

Figure 4.8 shows the necessary steps to create an AMDP Script based field routine. Keep in mind that AMDP based routines can only be created in the Eclipse based BW and ABAP tools. The first step is the same as for an ABAP based field routine, see (1). If you select the rule type ROUTINE, a popup dialog asks for the processing type, see (2). Choose AMDP script to create an AMDP script based field routine. The BW Transformation framework opens the Eclipse based ABAP class editor. For this an ABAP project is needed, see (3). Please note that this popup dialog sometimes opens in the background. The popup lists the assigned ABAP projects; if there is no project you can use the New… button to create one. After selecting a project the AMDP class can be adjusted. Enter your code for the field routine in the body of the method PROCEDURE, see (4).


Figure 4.8: Steps to create an AMDP Script based field routine

The SQL Script based field routine processing is different from the ABAP based routine: the ABAP based routine is processed row by row, whereas the SQL Script based routine is called only once per data package, like all other SQL Script based routines.

Because of this, all values of the assigned source fields are available in the inTab. For all source field values the corresponding RECORD and SQL__PROCEDURE__SOURCE__RECORD information is available in the inTab.

For SQL Script based field routines the following points need to be considered:

  • The target structure requires exactly one value for each source value.
  • The outTab and inTab sort order may not be changed.
  • The result value must be on the same row number as the source value.

When using a join operator, pay attention that inner join operations could lead to a subset of rows, because rows without a join partner are dropped.
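The three rules above can be sketched as follows. The outTab field list, the lookup table and the field names are assumptions for illustration.

   -- Sketch of a field routine body: exactly one output row per input
   -- row, in unchanged order. The LEFT OUTER JOIN keeps rows without a
   -- match; an INNER JOIN would silently drop them, violating rule 1.
   outTab = SELECT src."RECORD",
                   IFNULL( lkp."TARGET_VALUE", '' ) AS "RESULT"
              FROM :inTab AS src
              LEFT OUTER JOIN "MAPPING_TABLE" AS lkp
                ON lkp."SOURCE_VALUE" = src."SOURCEFIELD";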

4.2.6       End-Routine

Typical ABAP use cases for an end routine are post transformation activities, such as:

  • Delete rows which are obsolete after the data transformation, for example redundant rows
  • Cross-line data checks
  • Derive values for a specific column based on the transformation result

From the development perspective the end routine is quite similar to the start routine. Only the time of execution differs: the start routine runs before the transformation and the end routine afterwards.

4.3      Error Handling

In previous BW 7.50 SPs, switching on error handling in a DTP prevented a SAP HANA push down. As of BW 7.50 SP04 this is no longer the case: enabling error handling in a DTP does not prevent a SAP HANA push down.

A DTP with enabled error handling processes the following steps:

  1. Determine erroneous records
  2. Determine semantic assigned records to:
    1. The new erroneous records
    2. The erroneous records in the error stack
  3. Transfer the non-erroneous records

Therefore it is necessary to call the transformation twice. The first call determines the erroneous records (step 1) and the second call transfers the non-erroneous records (step 3).

The error handling is only available for data flows with DataSource (RSDS), DataStore-Object classic (ODSO) or DataStore-Object advanced (ADSO) as source object and DataStore-Object classic (ODSO), DataStore-Object advanced (ADSO) or Open Hub-Destination (DEST) as target object. 

4.3.1       Error handling background information

Here is some general background information about the technical runtime objects and the runtime behavior.

DTP with error handling and the associated CalcScenario
In the blog HANA based Transformation (deep dive), in paragraph 2.1 »Simple data flow (Non-Stacked Data Flow)«, I explained that in a non-stacked data flow the DTP reuses the CalculationScenario of the BW transformation. If the error handling is switched on in the DTP, the BW Transformation Framework must enhance the CalculationScenario for the error handling. Therefore it is necessary to create a separate CalculationScenario for the DTP. The naming convention is the same as for a DTP in a stacked data flow: the unique identifier of the DTP is used and the DTP_ prefix is replaced by TR_.
Difference regarding record collection between HANA and ABAP

The HANA processing mode collects all records of the corresponding semantic group in the currently processed data package to which the erroneous record belongs, and writes them to the error stack, see Figure 4.9. All records with the same semantic key in subsequently processed data packages are also written to the error stack.

The ABAP processing mode writes only the erroneous record of the currently processed package to the error stack. Subsequent packages are handled in the same way as in the HANA processing mode: all records with the same semantic key are also written to the error stack.

Find the DTP related error stack

The related error stack (table / PSA) for a DTP can be found with the following SQL statement:

  SELECT "TABNAME", "DDTEXT"
    FROM "DD02V"
   WHERE "DDTEXT" LIKE '%<<DTP>>%';

4.3.2       Error handling in a standard BW transformation

Fundamentally, the error handling in the execution mode SAP HANA behaves very similarly to the execution mode ABAP. There are some minor topics to explain regarding the SAP HANA processing artefacts such as runtime and modelling objects. In the next section we also discuss differences in how the error handling is executed.

Figure 4.9 provides an example of how error handling works in a BW transformation with standard transformation rules (meaning: no customer SQL Script coding).

The business requirement for the BW transformation is to ensure that only data with a valid customer (TK_CUST) is written to the target. A valid customer means that master data is available for the customer. Therefore the flag Referential Integrity is set for the TK_CUST transformation rule.

Semantic Groups

In the context of SAP HANA processing, semantic groups are not supported!

The error handling defines an exception to this limitation: in combination with the error handling, the semantic group is used to identify records that belong together. You cannot use the error handling functionality to work around the limitation and artificially build semantic groups for SAP HANA processing. The processing data packages are not grouped by the defined semantic groups; the data load process ignores them.

The logic implemented in the sample data flow in Figure 4.9 writes data for a Sales Document (DOC_NUMER) to the target only if all Sales Document Items (S_ORD_ITEM) are valid. This means that if one item is not valid, all items of the related Sales Document should be written to the error stack. Therefore I chose Sales Document (DOC_NUMER) as the semantic group.

The source contains one record with an unknown customer (C4712) for DOC_NUMER = 4712 and S_ORD_ITEM = 10. The request monitor provides information about how many records were written to the error stack. The detail messages provide more information about the reason.


Figure 4.9: Error handling in a standard BW transformation

The initial erroneous record of a group is marked in the error stack. The erroneous data can be adjusted and repaired within the error stack, if possible. Once the transaction data or master data has been corrected, the data can be loaded from the error stack into the data target by executing the Error-DTP.

4.3.3       Error handling and SQL Script routines

When SQL Script routines are used within a BW transformation, the BW transformation framework sets a flag to identify which processing step (1 or 3) is currently being processed. Therefore all SQL Script procedure (AMDP method) declarations are enhanced, see paragraph 4.2.2 »Routine Parameter«. The following parameters are related to the error handling:


  • I_ERROR_HANDLING

The indicator is set to 'TRUE' when the BW transformation framework executes step 1; otherwise the indicator is set to 'FALSE'. In the first case the BW transformation framework expects only the erroneous records in the output parameter errorTab.

  • errorTab

The output table parameter can be used to hand over erroneous records during the 1st call.

The error table structure provides two fields:


The data input structure (inTab) is enhanced by the field SQL__PROCEDURE__SOURCE__RECORD for all routines (Start-, Field-, End- and Expert routine).

The data output structure (outTab) for the Start-, End- and Expert routine is also enhanced by the field SQL__PROCEDURE__SOURCE__RECORD.

The field SQL__PROCEDURE__SOURCE__RECORD is used to store the original record value from the persistent source object. For more information see paragraph »Field SQL__PROCEDURE__SOURCE__RECORD«.

Next I’ll explain, using an example, the individual steps in which a BW transformation is processed. As mentioned before, the following steps are processed when the error handling is used:

  1. Determine erroneous records
  2. Determine semantic assigned records to:
    1. The new erroneous records
    2. The erroneous records in the error stack
  3. Transfer the non-erroneous records

Only steps 1 and 3 must be considered in the SQL Script implementation. Step 2 and its sub-steps are processed internally by the BW transformation framework.

Figure 4.10 provides an overview of how the error handling is processed. To illustrate the runtime behavior I keep the logic to identify the erroneous records quite simple. The source object contains document item data for two documents (TEST100 and TEST200); five items are available for the first and four for the second. The record TEST100, document item 50 contains an invalid customer C0815, see (1). To ensure that document information is only written to the target if all items of the document are valid, I set the semantic key to the document number (/BIC/TK_DOCNR), see (2).

The procedure coding contains two parts: the first part supplies the data for the errorTab and the second part the data for the outTab, see (3). The procedure is called twice during processing. The first call collects the erroneous records, see Figure 4.11. Based on the errorTab result, the BW transformation framework determines the corresponding records with respect to the errorTab and the semantic key. The collected records are written to the error stack, see (4). The second procedure call determines the non-erroneous records; the collected erroneous records are removed from the inTab before the second call is executed, see Figure 4.12. The outTab result of the second call is written to the target object, see (5).
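The two-part procedure described above could be sketched as follows. The errorTab field list, the error text and the customer field name are assumptions; the logic simply mirrors the description.

   -- Part 1: detect erroneous records (the invalid customer C0815).
   -- This result is used by the framework when i_error_handling = 'TRUE'.
   errorTab = SELECT 'Invalid customer' AS "ERROR_TEXT",
                     "SQL__PROCEDURE__SOURCE__RECORD"
                FROM :inTab
               WHERE "/BIC/TK_CUST" = 'C0815';

   -- Part 2: pass the remaining records through unchanged. On the second
   -- call the framework has already removed the erroneous records (and
   -- their semantic group) from the inTab.
   outTab = SELECT *
              FROM :inTab;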


Figure 4.10: Error handling with SQL script

Figure 4.11 shows the procedure from the sample above in debug mode during the first call to determine the erroneous records. The parameter I_ERROR_HANDLING is set to TRUE, see (1). Only the first statement, which fetches the result for the output parameter errorTab, is relevant for this step, see (2). In my coding sample the second SELECT statement is also executed, but the result parameter outTab is not used by the caller. For simplicity I have kept the logic here as simple as possible, but note that from a performance perspective there are better options. The result of the SELECT statement that detects the erroneous records is shown in (3). Based on the SQL__PROCEDURE__SOURCE__RECORD ID the BW transformation framework determines the corresponding semantic records from the source and writes them to the error stack.


Figure 4.11: Error handling with SQL script – Determine erroneous records

The next step is to transfer the non-erroneous records. The BW transformation framework calls the SQL Script procedure a second time, see Figure 4.12. Now the parameter I_ERROR_HANDLING is set to FALSE, see (1). From the coding perspective, see (2), only the second part, which produces the outTab result, is relevant. The inTab, see (3), now contains only source data which can be transferred without producing erroneous result information.


Figure 4.12: Error handling with SQL script – Transfer the non-erroneous records

  • Hi, Torsten, thank you for this blog, it’s very helpful.

    I hope you can help me, I’m implementing the error handling DTP, but it shows this error at execution:
    Conversion of SAPscript Text HELP_DOKU to HTML

    numeric overflow: cannot convert to Integer type: 20161010143941000003000 at function __typecast__() (at pos 221) (field:REQUEST)

    Message no. RS_EXCEPTION000

    My Expert Routine is: 

    Do you have any idea why?


    • Hi Torsten,

      Basis installed SP05 yesterday. I executed the process but it still ends with error, this time because of an invalid column name.  it inserts all rows but it ends with that error and it inserts nothing to the error stack.  I’ll ask if Basis can install the notes for SP06.

      Thank you.

      • Hi,
        up to now I didn’t get the point when the error occurs.
        I could not see any field in your procedure named REQUEST, so I assume the error occurs if you execute or activate the DTP, right?

        • Yes, it is when I execute the DTP. Now, with the SP05 it shows another error. I had to make it work with ABAP because of the plan of the project. So, when the next SP is out I will try it out and let you know if it works.

          Thank you!

  • Hello Mr Kessler,

    Great blog with wonderful information, but when I try to create a transformation with AMDP in my environment and activate it, ABAP shows an error like this.

    SQL error with code ‘2.048’; see the following SQL message:
    => column store error: fail to create scenario: [34011] Inconsistent calculation mo
    => del;calculationNode (OPERATION.FUNCTION_CALL.OUTPUT):Attribute REQUID is missing
    => in node OUTTAB,Details (Errors): – calculationNode (OPERATION.FUNCTION_CALL.OUT
    => PUT): Attribute REQUID is missing in node OUTTAB.

    The AMDP Class activate without errors but the transformation still inactive.

    I opened an Incident to SAP helps me too.

    Thanks and best regards,
    Rui Keiti Kikumatsu

  • Hi Torsten,
    I have been looking for some time before I found your blog. It was sorely needed.

    I have a two questions regarding the field routines.

    For SQL Script based field routines the following points need to be considered:

    • The target structure requires for each source value exactly one value. The outTabinTab
    • Sort order may not be changed. The result value must be on the same row number as the source value

    This may simply be a matter of formatting. Are these rather three than two bullet points? I.e.

    • The target structure requires for each source value exactly one value.
    • The outTab and the inTab sort order may not be changed.
    • The result value must be on the same row number as the source value


    In case of using a join operator pay attention that inner join operations could lead in a subset of rows.

    Unfortunately, I do not understand this sentence at all. Do you mean that an inner join could create additional data records? (I thought only an outer join would do that) Or something else? Could you reformulate it or perhaps write it in German?


  • Torsten, can you please shed some light on how the system determines between processing modes “Parallel SAP HANA Extraction” and “Parallel Processing with partial SAP HANA Execution”?

    I’ve got the former in Cube->ADSO transformation, and the latter in the ADSO->Cube. Does the target type determine the processing mode to use?

    • Hi,
      the processing mode “Parallel Processing with partial SAP HANA Execution” means that one part  of the data flow (transformation) is pushed down and a further part is processed on the ABAP side.

      But keep in mind the push down must happen first. Once the data is on the ABAP server, a push down within the data flow above is no longer possible.

      Depending on the ADSO type it could be necessary to load the data into the ABAP stack to process some internal steps (like SID generation). In that case the system offers the processing mode “Parallel Processing with partial SAP HANA Execution”.


  • Hi Torsten,

    I have a new problem using an INNER JOIN with the master data table of an InfoObject (I need to fill some attributes in the transformation). If I create the class without the JOIN the transformation activates, but if I insert the JOIN there are errors and I can’t activate.

    “column store error: fail to create scenario: [34011] Inconsistent calculation model;scenario:Referenced template scenario SAPHBD: in node OPERATION.FUNCTION_CALL not valid,Details (Errors): – scenario: Referenced template scenario SAPHBD: in node OPERATION.FUNCTION_CALL not valid”

    My script look like below.

    Thanks for any help.
    Rui Keiti Kikumatsu

    outTab = SELECT


    FROM :inTab as intab


    • Hi,
      can you activate the AMDP class?

      I’m not sure which ABAP release you are on, but I cannot see the errorTab parameter in your script snippet.

      Sometimes the impact handling does not raise an event to recreate the dependent runtime objects.
      That means, if you change the method content of an AMDP class (method), it is necessary to reactivate the transformation as well.

      From your posting I understand you get an error when activating the transformation, right?
      In case the AMDP class can be activated and the transformation throws an error, please first check whether all related notes (see my blog) are implemented in the latest version.

      We are continuously enhancing these notes.

      Maybe it is time for an incident!?! ;-[


  • Hi Torsten,

    Since the generated AMDP classes for transformation routines are created in $TMP, I get errors when trying to edit the PROCEDURE methods, because the local package is set to “Not modifiable” in the “System Change Option” (SE03) in our dev system.

    Do we have to make the local package “modifiable” in our dev system? In general, this is not allowed in our project. Or is there a way to edit the method without switching the system change option?

    Best regards,



    • Hi Erik,

      first, why did the admin set the package $TMP to “not modifiable” in the dev system?

      Does he or she not trust their own developers? ;-}


      Yes, it is necessary to set $TMP to modifiable. The AMDP class is a temporary object, and we need it to enable the customer to place the SQL Script code.

      The class itself will not be transported! That is the reason why the class is not assigned to a development package.


      • Hi Torsten,

        Erik mentioned your question to me – and so I thought I would answer (please note the intention is not to hijack this discussion in any way, but to answer your question).

        As the development architect for a large multinational, I am very concerned to preserve the consistency of development objects across all of our development environments. The D systems also represent the source system for all of our development work, so once one becomes inconsistent, there is no clear system we can refresh it from. We are continually rolling out development, so there is no window when development is not in progress in the D systems, and they cannot be refreshed even if there were a suitable source to refresh from.

        So my question back to you is: why do you think it is OK for developers to create temporary objects (that then risk being integrated with non-temporary objects)? This risks errors on transport import; it means your unit testing in D systems is not necessarily representative of unit test results in T systems, and so more defects and lower code quality result.

        These are my concerns. It’s not that I don’t trust our developers, but as we have over 80 of them, and some are offshore, I think you can see the scope of my concerns.

        • Hi,

          you’re welcome to join the discussion.

          But please explain: what can a developer do in the $TMP package that he cannot do in any other development package?

          As I explained in my previous reply, we use the $TMP package as a temporary location for the SQL code.

          In case you want to keep the $TMP package not modifiable, you can try the following workaround.

          But keep in mind this is only a possibly working workaround (I never tested it), and we strongly recommend using the $TMP package. Maybe the workaround can be used; I’m not sure.

          Work around:

          • Create the transformation
          • Create the procedure (Expert / Start / End)
          • Assign the class manually to your development package (but do not transport the class)
            • Here I’m not sure if it is possible to assign the BW-generated “/BIC/…” object to your dev package
          • Modify the method in the AMDP class (open the class from the transformation editor, do not open the class manually)
          • Save and activate the class
          • Save the transformation (at this point the transformation framework reads the method definition from the AMDP class and adds the method definition to the transformation metadata)
          • Activate the transformation

          Keep in mind:

          • In the follow-up systems (Q and Prod.) the generated classes are assigned to the $TMP package.






  • Hi Torsten,

    I currently have a problem deleting an InfoObject, and I assume it is related to the new HANA transformations; maybe you can help me with this problem.

    We’re running BW 7.50 SPa and HANA 122, and for that particular InfoObject I did not create a HANA transformation.

    In the table you referenced, RSTRANSTEPSCRIPT, I do not find any entry for that InfoObject, but I do in table RSDHAXREF (cross references).

    That entry will not allow me to delete the InfoObject.

    Is this related to HANA transformations, or do you have an idea how to delete that IObj?





    • Hi,

      from a faraway perspective, everything looks like “works as designed”.

      What is the content of the table RSDHAXREF?

      Did you check the related (referenced) HAP?


  • Hi Thorsten,


    very good blog. It helped me a lot to design transformations using AMDP Field routines.

    There is, however, a bug which causes the result values of AMDP field routines not to be properly mapped to the data package, so the outcome has mixed values. Developers should install OSS Note 2467323 to fix AMDP field routines.

    A “side effect” of the OSS Note is that SAP also adds the technical fields RECORD and SQL__PROCEDURE__SOURCE__RECORD to the field routines. All AMDP field routines then have to be adjusted manually, as they cause syntax errors without the technical fields.


    If you can update your blog concerning Field routines that would be great. I use it constantly throughout my latest developments. 🙂


    I was also only able to use 2 transformations (when they contain AMDPs) with an InfoSource between the DataSource and the ADSO. With a third one it showed lots of strange errors when activating the DTP. Maybe this is also solved by the OSS Note…


    Thanks and keep up the good work.



    • Hi,

      I’ve updated the blog regarding the parameters for field routines.

      But it’s difficult to keep the blog parallel to the development ;-{

      Or I’d need one blog for each release and SP ;-}


      Regarding your stacked data flow issue: technically it should be possible to use more than one InfoSource in a data flow, also if the source is a DataSource.

      In case the error is still present, please create an incident.

      But please keep in mind, we recommend using not more than 2 InfoSources in one data flow.

      The fewer InfoSources, the better.


      • Hi Torsten,

        thank you for your great blog. I have a question about your recommendation to use not more than 2 InfoSources in one data flow.

        Is this still valid for BW/4 2.0? And is there an OSS note where this is documented?


        • Hi,

          this is only a recommendation. You can use more than 2 InfoSources.

          We recommend using not more than 2 InfoSources because of the complexity of the generated runtime objects (CalculationScenario).

          The more complex a database object is, the more difficult it is for the optimizer to optimize the runtime object so that the execution finishes in a good time frame.

          And if the scenario is very complex, it’s also more complex for me to analyse in case of an incident 😉




    Hi Torsten,

    Good Morning. Thanks for nice blog.

    We are on BW 7.5 with SP06. I have implemented an AMDP script for a BW transformation end routine. I have three mapped fields and 5 derived fields in the ADSO mapping. Through the end routine I have to update these 5 fields.

    I was planning to implement this process through AMDP scripting to see how it works for this scenario. I have succeeded in getting all the mapped fields into the DTP, but the AMDP is not bringing in the derived fields. No idea what the issue is.

    So, to see what is getting into the outTab, I thought of debugging the AMDP script, but I have not succeeded in debugging the process. I am able to set the AMDP breakpoints, and I did the debug configuration for the procedure.

    I went to RSA1 and started the DTP, but I neither got the debug perspective pop-up nor did the debugger start.


    Your help is really appreciated. I am already past the deadline on this object.





  • Hi Torsten,

    Happy to see your response.

    I have followed the blog’s debug process, but somehow the perspective pop-up does not trigger at the DTP.


    I am activating the breakpoints in the method. Please check my screen.






      what I see from the screenshot is that the request is running into an error (the overall request status is red).

      Can you post the error message? Maybe the error occurs before the SQL Script is called!?!


  • Hi Torsten,

    I created an Open ODS view on a transformation and tried to create an AMDP routine, but when I display the ODS view data, the AMDP routine does not work, no matter whether it is a start/end/field/expert routine. Apart from the AMDP routines, the other transformation options do work. Am I correct that AMDP routines are not supported in transformations for Open ODS views?

    Many Thanks,


  • Hi Torsten,

    Thanks for such a nice detailed blog which is very helpful.

    I am creating an AMDP script in the end routine of a BW transformation, but getting a few errors while activating the DTP. Kindly note the system is BW 7.5 SP9, and there are no issues while using AMDP expert routines.

    1.  Error 1: While activating the DTP, the error message is “Info object REQUEST not available in version A. An exception with the type CX_RSD_IOBJ_NOT_EXIST was raised”. Manual temporary workaround: after debugging through the ST22 dump, I manually maintained new ‘REQUEST’ and ‘DATAPAKID’ entries in the RSDIOBJ & RSDDPA tables and was able to get past the error. Please note the standard technical characteristics 0REQUEST and 0DATAPAKID were already maintained in the tables and in the active version. Any ideas how we can fix this error in a recommended way?
    2.  Error 2: After getting past error 1 via the manual workaround, the further message while activating the DTP is

    => Node (OPERATION.OUTPUT.PROJ) -> attribut”

    => s -> attribute: Invalid datatype, length or/and scale are missing: ty

    => igits=0,Details (Errors): – calculationNode (OPERATION.OUTPUT.PROJ) -> attribute

    => column store error: fail to create scenario: [34011] Inconsistent calculation mo

    SQL error with code ‘2,048’. See the following SQL message:


    I have selected a few fields to be updated using the end routine script. When I choose all the fields as ‘target fields for end routine’, the DTP activates successfully.

    Any ideas how I can work around this? I only want to update a few (not all) of the fields using the end routine.


    Let me know if you would like to get any further details with the error message.

    Many thanks.



    • Hi,

      please never change the content of SAP internal tables unless requested to by SAP!!!

      Adding some data to the tables RSDIOBJ and/or RSDDPA might “fix” the current issue, but it could also generate unwanted side effects.


    • Hi Torsten,

      Thanks for the reply.

      I have installed the notes recommended for SP9.

      This seems to be an isolated issue that only occurs when the target is an SPO DSO. Manually removing the unwanted rules in the RSTRANRULE table for the tunneling transformation leading to each of the partitions of the SPO resolves the issue, but obviously this is not the recommended way.

      We have logged an incident 604969/2017 for this.



      • Hi Torsten,

        Hope you are well.

        Would like to keep you informed that the logged incident mentioned above (604969/2017) is taking quite some time to resolve. Any chance you could have a look, please?



        • Hi Torsten,

          Very happy to confirm that this issue, incident (604969/2017), has finally been resolved by SAP releasing a new note as a fix: 2625458.

          Thank you and your team for all the help.



  • Hi Torsten,

    I wanted to understand why we don’t have the option to write AMDP code in DTP filters. If my transformation is HANA-executable and I now write an ABAP filter routine in the DTP, would it not circle back to the ABAP server during execution?






      we do not push the filter down via the coding.

      The DTP filter values are calculated in a pre-step. The DTP filter values are then added to the SQL statement which selects the data from the source object.

      A DTP filter does not prevent a push-down, even if the filter is implemented in ABAP or based on a BEx variable.

      You can find the DTP filter values in the generated SQL statement (INSERT AS SELECT). Search for the placeholder:

      PLACEHOLDER’=(‘<HAP-Name>.$$filter$$’ …


  • Hi Torsten,

    thanks for the nice blog and information provided.

    What isn’t clear to me is how I should model the following situation:

    I have a calculation view or procedure A developed in HANA with complicated and intensive logic. I would like to call it inside a transformation expert routine (AMDP) B and also fill the errorTab.

    Currently we use a DataSource as the source, which is built on the calculation view A with data extraction “Directly from source system”.

    The transformation is a 1:1 mapping and stores the whole result in an ADSO C.

    The main thing we are looking for is to add lines to an errorTab, which seems like not supported for data source – directly from source system.

    Secondly, it would be beneficial if everything were pushed down.

    Do you have any advice how to model this situation.

    Thanks & Kind Regards,


    • Hi,

      I didn’t quite get your problem.

      In case you want to consume the calcView or the procedure within the expert script, you can consume both:

      • a CalculationView in a normal SELECT statement:
        • outTab = SELECT … FROM <calcView>;
      • a procedure via CALL:
        • CALL procedure( :inTab, … , outTab);

      Instead of outTab as target you can also use the errorTab.
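
      Put together, an expert routine body could look roughly like this (a minimal sketch; the view and procedure names are placeholders, not objects from this thread):

        -- consume a calculation view like a table (view name is a placeholder)
        outTab = SELECT "FIELD_A", "FIELD_B", "RECORD"
                   FROM "_SYS_BIC"."my.package/MY_CALC_VIEW";

        -- or call a procedure and pass its result on (procedure name is a placeholder)
        CALL "MY_PROCEDURE"(:inTab, lt_result);
        outTab = SELECT * FROM :lt_result;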


      If you want to use the calcView as a source object for a transformation, use a DataSource to extract the view. That’s the correct way.

      What do you mean by:

      The main thing we are looking for is to add lines to an errorTab, which seems like not supported for data source – directly from source system.

      I struggle a bit with the word ADD. The purpose of the errorTab is to collect erroneous source records. That means it only makes sense to write source records into the errorTab.


        • Yes,

          you can create new lines within your SQLScript, but we need the fields RECORD and SQL__PROCEDURE__SOURCE__RECORD to identify the source record.



      • Hi Torsten,

        I have a source with 2 entries and just one field, id: 1) DATA 2) ERROR.

        Now I call a calculation view inside my AMDP if the source entry is DATA, and I fill the errorTab with a dummy entry if the source entry is ERROR.

        I see the DATA from the calculation view processed perfectly into the target. However, I cannot find the dummy error entry I produce.

        That is my code for the errorTab

        errorTab = SELECT
            'dummy error' AS ERROR_TEXT
          FROM :inTab as i
          WHERE i."/BIC/ZKS_ID" = 'ERROR' and
            :i_error_handling = 'TRUE';

        I cannot find an error table created by this statement; I guess it should be created under my user.

          SELECT *
            FROM "SAPEBH"."DD02V"
           WHERE "DDTEXT" like '%DTP%' and tabname like '/BIC/B%'

        What do I miss?


        Thanks for your support.

    • Okay,

      first, do not use this part

      :i_error_handling = 'TRUE'

      in your WHERE condition.

      We call the routine several times to identify the erroneous records and the semantically assigned records.

      During the collection of erroneous records we only pick up the errorTab.


      After collecting the erroneous records, we call the routine again to get the outTab result for the correct records. During this call the inTab is filtered by the erroneous records.

      Which release/SP are you running?

      Did you check that all relevant notes are implemented in the latest version?
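
      A correct errorTab fill therefore only filters on source fields and carries the record reference along, for example (a sketch; the field /BIC/ZKS_ID is taken from your snippet, the error text is a placeholder):

        errorTab = SELECT 'dummy error' AS ERROR_TEXT,
                          "RECORD" AS SQL__PROCEDURE__SOURCE__RECORD
                     FROM :inTab
                    WHERE "/BIC/ZKS_ID" = 'ERROR';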




  • Hi Torsten,

    Hope you are doing great.

    We have an issue after transporting a HANA-based BW transformation (AMDP code in the end routine) from the dev to the quality system.

    The issue is: the INTAB and OUTTAB structures of the AMDP have the fields REQTSN and DATAPAKID in dev (the target is an ADSO), so we had to include those in our SELECT statement. But after transporting the objects to quality, these two fields are missing from the INTAB and OUTTAB structures in the AMDP class. Hence, the AMDP class has a syntax error in the QA system.

    Any suggestions how we can fix this issue?

    Thank you.



  • HI Torsten,

    Yes, the release/SP for both systems is the same (7.5 SP9).

    Thanks for the suggestion. Yes, I will create the incident.




  • Hi Torsten,

    I don’t find any documentation or example about calling a HANA procedure from a BW transformation. Could you please share an example?





    • Hi,

      what do you mean by “calling a HANA procedure from a BW transformation”?

      Do you want to call a HANA procedure from an AMDP routine within a BW transformation?

      The coding you can write in a BW transformation-related AMDP routine is a pure SQLScript routine. That means, based on the AMDP method, the AMDP framework creates a database procedure.

      Therefore you can call a database procedure in an AMDP routine just as you would in any database procedure.

      SAP HANA Help: Procedure Calls



      • Hi Torsten,

        Thanks for your reply. I want to call a HANA procedure from an AMDP routine within a BW transformation. For example:

        I am using the BFL procedure “AFLBFL_CUMULATE_DECUMULATE_PROC” (Link SAP BFL Cumulate); it has one input parameter, one output parameter, and one flag.

        My BW expert transformation looks like:

         CALL "_SYS_AFL"."AFLBFL_CUMULATE_DECUMULATE_PROC"(:lt_input, :lt_output, 1); 
             outTab = SELECT
           r.INVCD_VAL as CUMVALUE,  ---Output value
           ' ' as RECORD,
        FROM :inTab as I inner join :lt_output as r

        When I call the procedure in the transformation, I get a declaration error telling me “DDIC must be declared”.

        How can I call a BFL procedure in a BW transformation?

        Thanks for your comment.


        • Hi,

          that’s an AMDP restriction.

          A called procedure must be known in the DDIC dictionary or be in the AMDP namespace.

          That means the name of a procedure used within an AMDP method must start with “/1BCAMDP/”.



          create function "/1BCAMDP/CHECK_DATE"(in_date varchar(10), in_default date)
          returns out_date date as
          begin
            declare exit handler for sqlexception out_date = :in_default;
            out_date = cast(:in_date as date);
          end;

          outTab = SELECT
          --        "/1BCAMDP/CHECK_DATE"(to_varchar(ORDER_DATE, 'YYYY-MM-DD'), '1900-01-01') "ORDER_DATE",
                    to_dats("/1BCAMDP/CHECK_DATE"(ORDER_DATE, '1900-01-01')) "ORDER_DATE"
              FROM :inTab;

          You can try to wrap the BFL procedure in your own procedure whose name starts with “/1BCAMDP/”.




          • Thank you very much, Torsten, for your sample function.

            I created a new procedure with the namespace “/1BCAMPDP/ZTESTPROD”. When I call it with “CALL "/1BCAMPDP/ZTESTPROD"(in, out);”, I still get an error. 🙁

            Could you please share a sample procedure call via a transformation?

            Thanks a lot!!


          • Hi,

            my sample is related to a function. A table function can be used in the field list, as shown in my last reply.

            You are using a procedure.

            If I understand the BFL guide correctly, the procedure you want to use writes data to one or more tables. That’s not allowed within a transformation.

            An AMDP procedure always has the property READ ONLY, and this property is delegated to all functions and procedures used within your AMDP procedure.

            Can you post the exact error you get?



          • Hi,

            thanks for your help. I am just a beginner.

            Here are my example procedure and AMDP transformation. I thought I could calculate something (input parameter) with a procedure and write the output to an InfoObject. If that’s not possible, why would I need procedures from a BW point of view?

            Example Procedure

            Transformation :

            Error :


  • Hello Torsten

    Hope you are doing fine

    In our project to date we load data into an ADSO with a calculation view as the DataSource. It works fine in most cases, but in some scenarios where the data volume is huge, the data load fails with a memory dump; in fact the same error is also observed if we run the CV manually as an individual entity.

    The main reason for the issue is that the calculation view processes the entire data at once, and there is no way for us to handle it package by package.

    However, the logic in the calculation view is really complex, and we don’t want to either throw it away or convert the entire graphical view into line-by-line AMDP coding.

    It seems the best intermediate approach is to call the same calculation view within an AMDP between 2 ADSOs and pass the data in a packaged manner based on the DTP package size.

    I have written the following piece of code:

    outTab = select
    p.bill_type as "BILL_TYPE"
    ,i.record as "RECORD"
    from :inTab as i
    inner join (select BILL_NUM,
    BILL_TYPE from "_SYS_BIC"."XX.local.Test/YY_POC_XVIEW_6" where bill_num in (select distinct BILL_NUM from :intab order by BILL_NUM)) as p
    on i.BILL_NUM = p.BILL_NUM and
    i.BILL_ITEM = p.BILL_ITEM and
    i.KNART = p.KNART and



    But strangely, the DTP is still failing with the same memory dump even if I pass only 1 billing doc in the inTab.

    However, if I execute the following statement in the SQL monitor, it completes and no memory dump is encountered:

    select BILL_NUM,
    BILL_TYPE from "_SYS_BIC"."XX.local.Test/YY_POC_XVIEW_6" where bill_num in ('9000013031','9000013032')

    It seems that although I am trying to pass bill_num from the inTab to the calculation view, it still executes the entire calculation view without the filter and then tries to filter on bill_num after executing over the entire data set.

    Please let me know if you are aware of any such restriction in AMDP logic when calling a calculation view within it.

    Deb Deep Ray

    • Hi,

      I’m not exactly the right expert on that topic. In some cases it can happen that a filter condition is not pushed down as far as necessary. In that case the calculation engine first loads all necessary data into an internal table and then applies the filter.

      You can analyze this behavior using PlanViz. There you can see how much data is transferred at which level.



  • Hi Torsten,

    I am trying to filter a key figure with a characteristic in a field AMDP routine, but I am getting an error. I don’t know where my mistake is.

    my Code :

    outtab = “BIL_I_CNT” Key-Figure and “IMODOCCAT” Characteristic;


    When I try to activate, I get an error:

    Thanks alot.


  • Hello Torsten,

    We tried to replace our classic ABAP start routine source code with AMDP source code.


    We have an issue performing the WHERE clause on the master data attributes with AMDP in the start routine.

    begin of TN_S_OUT2, 
    end of TN_S_OUT2 . 

    The following source code works correctly – WHERE NOTIFICATN <> ''


    But if we want to apply the filter on a master data attribute, this does not work – NOTIFICATN/BIC/ZSTATTSUP <> ''


    As we use CTRL+Space to insert the fields in the WHERE clause, the master data attribute field is inserted between double quotation marks, but it does not work.


    Question :

    Do you know how to fix this issue?

    Which source code do you use to manage master data attributes?

    Kind Regards


    • Hi,

      in your post I see only the structure for the outTab (TN_S_OUT2) and not the structure for the inTab.

      In the WHERE condition you must use fields from the source object (:inTab) of the SELECT statement.


      In SQL and in SQLScript (that’s the language we are using within the AMDP method), object names without double quotation marks are converted to uppercase, and for names that contain special characters, double quotation marks are always required.

      I’m not sure what kind of object this NOTIFICATN/BIC/ZSTATTSUP is. I would expect the name /BIC/ZSTATTSUP, based on the data type name.

      What kind is the source object?

      Which release / SP are you running?
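
      As a sketch (assuming the attribute field is part of the source structure of your transformation), the quoted name in the WHERE condition would look like this:

        outTab = SELECT *
                   FROM :inTab
                  WHERE "/BIC/ZSTATTSUP" <> '';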




      • Hi Torsten,
        >> Thanks for your reply; please find below, in blue, my answers to your comments.
        >> We are simply trying to replace an existing source code with a new AMDP code.
        >> The existing source code is 
        We would like to delete the source_package line where the attribute /BIC/ZSTATTSUP of the master data NOTIFICATN is empty
        in your post I see only the structure for the outtab (TN_S_OUT2) and not the structure for the intab. 
        In the WHERE condition you must use fields from the source object (:intab) of the select statement.
        In SQL and in SQLScript (Thats the language we are using within the AMDP method object names without double quotation marks are converted into uppercase values and in case of special characters within the name double quotation marks are always required.
        I’m not sure what kind of object this NOTIFICATN/BIC/ZSTATTSUP is.
        >> It corresponds to the attribute /BIC/ZSTATTSUP of the master data NOTIFICATN, previously coded as 0NOTIFICATN__ZSTATTSUP
        I would expect the name /BIC/ZSTATTSUP based of the data type name.
        What kind is the source object?
        >> The dataflow is between a DSO an a Cube
        Which release / SP you are running?
        >> We are on SAP_ABA Release 7.5 SP 11
        >> Many thanks for your reply
        —- complete source code —-
        class /BIC/2Z4JN8Q3I91NUYWDKZ55EEB7P definition
        create public .
        public section.
        interfaces IF_AMDP_MARKER_HDB .
        begin of TN_S_IN1,
        end of TN_S_IN1 .
        begin of TN_S_IN2,
        end of TN_S_IN2 .
        begin of TN_S_IN.
        include type TN_S_IN1.
        include type TN_S_IN2.
        types end of TN_S_IN .
        begin of TN_S_OUT1,
        end of TN_S_OUT1 .
        begin of TN_S_OUT2,
        end of TN_S_OUT2 .
        begin of TN_S_OUT.
        include type TN_S_OUT1.
        include type TN_S_OUT2.
        types end of TN_S_OUT .
        begin of TN_S_ERROR1,
        ERROR_TEXT type string,
        SQL__PROCEDURE__SOURCE__RECORD type C length 56,
        end of TN_S_ERROR1 .
        begin of TN_S_ERROR.
        include type TN_S_ERROR1.
        types end of TN_S_ERROR .
        class-methods PROCEDURE
          value(i_error_handling) type STRING
          value(inTab) type /BIC/2Z4JN8Q3I91NUYWDKZ55EEB7P=>TN_T_IN
          value(outTab) type /BIC/2Z4JN8Q3I91NUYWDKZ55EEB7P=>TN_T_OUT
          value(errorTab) type /BIC/2Z4JN8Q3I91NUYWDKZ55EEB7P=>TN_T_ERROR .
          protected section.
        private section.
        *OUTTAB as default
        * OUTTAB with where clause on infoobject
        *OUTTAB with where clause on master data attributes – doesnot work
  • Hi Thorsten,

    you wrote that “In the context of SAP HANA processing semantic groups are not supported!” Is this still true, or was this feature added at some Support Package level?

    I have noticed that in BW/4HANA there is something very similar in the DTP, called “Extraction grouped by”. And that works with HANA processing!




  • Hi Torsten,

    for error handling in combination with HANA execution in BW/4HANA up to 2.0 SP1, the documentation says “Error handling is not possible if execution takes place in SAP HANA”, though

    Note 2580109 – Error handling in BW transformations and data transfer processes in SAP HANA runtime

    says that this will be coming in BW/4HANA 2.0.

    Can you please confirm again that the error handling you are describing here does not work with BW/4HANA 1.0 if HANA execution is used, and when it will come in BW/4HANA 2.0?



  • Hi Torsten,


    We have a BW/4HANA project and we’re working with DTPs with HANA expert scripts. Currently we have 2 questions:

    1. Why are AMDPs created in $tmp package?
    2. I understand that only the AMDP method metadata is transported together with the transformation, but we have developed some HANA scripts inside AMDP methods and still the table RSTRANSTEPSCRIPT is empty. Can you tell why?

    P.S: We haven’t transported anything yet.

    The current system is 7.50 SP12 and HANA is 2.00.037


    Best regards,

    David Reza

    • Hi,

      regarding 1.:

      the class is only a temporary object to maintain the source code and a runtime object; therefore we use the $TMP package. And the class will not be part of the transport request.

      regarding 2.:

      We changed the storage for the routine (because of some internal reasons) a couple of times ;-(

      Depending on your SP (I didn’t check, but I believe in your case it’s RSTRANSCRIPT), one of the following tables is used:

      RSTRANSTEPROUT (this is the current table in BW/4)







      • Hi Torsten,

        It’s great to see the good details from your blogs and posts regarding AMDP transformations.

        I have a query, can you please help me out.

        I have created an AMDP transformation and need to extract one ADSO’s data based on the DTP filters applied for that respective transformation.

        So how do I get the DTP filter values inside the AMDP?

        I have declared a global variable in the AMDP class and thought of writing logic in a DTP routine to populate the DTP values into that global variable declared inside the class. But the class name is not the same in all environments. I even took the class name from table RSTRANSTEPSCRIPT and concatenated it with ‘/BIC/’. But I am not sure how to create an object for that class name.

        For example: concatenate ‘/BIC/’ and ‘B0ZI8KYVSWSK0NZEXJBYHKIYQ’ into LV_VAR.

        Then how do I create an object for the class name in that variable LV_VAR?

          DATA lo_ref TYPE REF TO object.
          CREATE OBJECT lo_ref TYPE (lv_var).

        lo_ref=>lv_fromweek   = ‘20.2018’.

        Is there a better way to get the DTP filter values?

        Do we have a standard object name for all the AMDP class names?

        Can you please guide me on this issue.



        Kalidas T


        • Hi,

          which release / SP are you running?

          Normally, when we use the term AMDP, we are talking about SQLScript routines within a transformation. But from BW/4 SP08 on, we also use the AMDP class to create the ABAP routines.

          Your provided coding is ABAP coding, therefore I assume we are talking about ABAP routines.

          The DTP filters are stored as an XSTRING within the DTP metadata. There is no official API available to get the filter values from the DTP.


          You can try the following code sample to get the DTP filter.

          The DTP name is provided by the request proxy P_R_REQUEST (GET_DTP), which is available in the AMDP class.


          But please keep in mind that this code snippet uses an API that is not officially provided, and the behavior of these methods can change with one of the next releases or SPs.

            DATA(l_r_dtp) =
              cl_rsbk_dtp=>factory( i_dtp          = 'DTP_...
                                    i_no_authority = rs_c_true ).              
            DATA(l_r_dtp_n) = l_r_dtp->get_obj_ref_maintain( i_with_cto_check = rs_c_false ).
            DATA(l_r_filter) = l_r_dtp_n->get_obj_ref_filter( ).
            l_r_filter->get_selfields( IMPORTING e_t_selfields = DATA(l_t_selfields) ).
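
          To illustrate how the returned filter table could then be evaluated, here is a hedged ABAP sketch; the component names (`fieldnm`, `low`) follow the usual BW selection-table layout and are assumptions, as is the variable `lv_fromweek`:

```abap
" Hedged sketch only: read the lower bound of a CALWEEK filter.
" The component names (fieldnm, low) and the filter field CALWEEK
" are illustrative assumptions, not verified API.
DATA lv_fromweek TYPE string.

LOOP AT l_t_selfields ASSIGNING FIELD-SYMBOL(<ls_sel>)
     WHERE fieldnm = 'CALWEEK'.
  lv_fromweek = <ls_sel>-low.
  EXIT. " take the first interval only
ENDLOOP.
```

          Because this rests on an unreleased API, it is safer to wrap the whole access in a TRY/CATCH and fall back gracefully if the method signatures change.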


  • Dear  Torsten Kessler !
    It’s a great blog !

    I have developed an AMDP method and have come across an issue.

    • When I set a break point in the AMDP method, it takes only 12 seconds to finish the program
    • When I run it directly (with all break points removed) it takes more than 7 minutes to finish, and sometimes at peak times (background jobs running, a lot of users accessing the system) it throws the error SQL code 2048: no memory allocation

    I used the AMDP trace and found out which statement the time comes from (first screenshot).

    But when I run in debug mode, here is the result (second screenshot).

    Could you please help me out this issue ?


    Thanks so much,


    • Hi Dan,

      it’s quite difficult to support you here without system access and with only these two screenshots.

      From here it looks like a performance optimization issue within your own coding, or maybe the system setup. Difficult to say.

      The join you are using looks a bit complex, and I would need to see the amount of data in each connected table and in the source as well.

      I’m not sure, but the difference between “normal” execution and debug mode is that the SQLScript logic is executed sequentially when the AMDP debug mode is switched on.


      Maybe it helps if you always run your code in SEQUENTIAL EXECUTION mode.

      You can try this, but I’m not sure.

      And please do not use sequential execution as the default for all your routines! This statement can prevent the code from being passed to the correct (or best) engine chosen by the optimizer.
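
      As a hedged sketch of what that could look like inside the SQLScript body (table variable and column names are made up for illustration):

```sql
-- Hedged sketch: force sequential processing of the enclosed
-- statements. Variable and column names are illustrative only.
BEGIN SEQUENTIAL EXECUTION
  lt_step1 = SELECT doc_no, amount FROM :inTab;
  outTab   = SELECT doc_no, amount FROM :lt_step1 WHERE amount > 0;
END;
```

      Use this only to narrow down the problem, not as a permanent setting, for the optimizer reason mentioned above.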




      • Hi Torsten Kessler !

        Thanks so much for your answer !

        I realized that this is the behavior of AMDP. In debugging mode, the system generates each SQL statement separately. But when you call the AMDP on the application server (with the break points removed), the system tries to combine as many related SQL statements as possible.

        I have solved this issue by using WITH HINT(NO_INLINE) to block the combining of related SQL statements.
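
        A minimal sketch of that hint (table and column names are made up): NO_INLINE tells the optimizer not to merge the table variable’s query into the statements that consume it, which keeps the intermediate result as a separate step, much like in debug mode.

```sql
-- Hedged sketch: NO_INLINE keeps lt_joined as a separate step
-- instead of being merged into the consuming query.
-- Table and column names are illustrative only.
lt_joined = SELECT a.doc_no, a.amount, b.status
              FROM :inTab AS a
              INNER JOIN other_table AS b ON b.doc_no = a.doc_no
              WITH HINT (NO_INLINE);

outTab = SELECT doc_no, SUM(amount) AS amount
           FROM :lt_joined
          WHERE status = 'A'
          GROUP BY doc_no;
```

        As with sequential execution, apply the hint selectively to the statement that misbehaves rather than as a blanket default.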

  • Hi Torsten,

    in one of the chapters you describe that for ADSOs the field SQL__PROCEDURE__SOURCE__RECORD should contain the REQUEST TSN, DATAPAK and RECORD.

    Since we are on BW on HANA 7.50 SP14, we expected to find the request SID in SQL__PROCEDURE__SOURCE__RECORD, but it is not there. The DATAPAKID and record number are there, but instead of the request SID we have only zeros.




    At first we thought we had lost the information in a part of our routines, but even the very first inTab does not contain a single correct SQL__PROCEDURE__SOURCE__RECORD entry.

    We really need this information to be able to derive source and target of the transformation during runtime.

    Would be really happy to hear your opinion regarding this.



    • Hi,

      in case you are extracting from an ADSO and running the DTP in FULL mode, the data is read from the active table; therefore the source does not deliver any request information.

      I didn’t understand your request:

      “We really need this information to be able to derive source and target of the transformation during runtime.”

      What exactly do you mean by “derive source and target”?




      • Hi.

        Thanks for your answer.

        Our scenario is loading data from ADSO 1…m -> InfoSource -> InfoSource -> ADSO 1…m. The AMDP script logic is located between the InfoSources.

        All ADSOs have an active table with changelog.

        We just tested loading data in delta mode, but even then the field SQL__PROCEDURE__SOURCE__RECORD is only populated with the DATAPAKID and record number, no request SID.

        We need to know from which InfoProvider the data is coming and into which data target it is being loaded. In the past (in ABAP) we got this information from p_r_request. Our idea was to get this information via the request we are currently loading.

        We found an article saying that with BW/4HANA this information is provided by new variables, which we do not have in our BW on HANA.



  • Hi Torsten

    I have created a function similar to your example:

    create function "/1BCAMDP/IS_DATE" (in_date varchar(10))

    -- Checks if the date is valid

    -- Returns 0 if not, 1 if it is a valid date

    returns result int as
    begin

    declare exit handler for sqlexception result = 0;

    result =
      CASE WHEN YEAR(coalesce(in_date, '00000000')) = '1'
           THEN 0
           ELSE 1
      END;
    end;

    The TRFN coding looks like this:

    outTab = SELECT
               CASE WHEN "/1BCAMDP/IS_DATE"(ch_on) = 0
                    THEN current_date
                    ELSE ch_on
               END AS validdate,
             FROM :inTab;

    An SQL test on HANA works:

    SELECT CASE WHEN "/1BCAMDP/IS_DATE"('2201001') = 1
                THEN current_date
                ELSE '19651231'
           END
      FROM dummy;

    But when I activate the TRFN it ends in an error:

    Details (Errors):
    – SQL error with code ‘2.048’. See the following SQL message:
      column store error: fail to create scenario: [34011] Inconsistent calculation model; scenario: Failed to resolve referenced template scenario DH2::SAPHANADB:/BIC/4SKMJEQ39QGC91A34PFM_M=>GLOBAL_END (t -1) in node 0BW_TGT_INFOPROV_1.INPUT.CONV.PROJ.END

    Details (Warnings):
    – scenario -> cubeSchema -> calculationScenario (/1BCAMDP/0BW:DAP:TR_00O2TPAECCM98AF95Q8S7IVS1) -> calculationViews -> projection (0BW_TGT_INFOPROV_1.INPUT.CONV.PROJ) -> attributes -> attribute (0QUANTITY$TMP): The property dontPropagateFlag is deprecated.
    – scenario -> cubeSchema -> calculationScenario (/1BCAMDP/0BW:DAP:TR_00O2TPAECCM98AF95Q8S7IVS1) -> calculationViews -> union (0BW_SRC_INFOPROV_1.AGGR.PROJ_AGGR_UNION) -> attributes -> attribute (#DB_AGGREGATION): The property dontPropagateFlag is deprecated.

    (All messages refer to resource 04IXWRIB7SVX4SKMJEQ39QGC91A34PFM, path /sap/bw/modeling/trfn/04ixwrib7svx4skmjeq39qgc91a34pfm/m, type BW Modeling Problem.)


    Any idea what could be wrong?




  • Hi Torsten,

    Recently we upgraded HANA 2.0 from SP03 to SP04. After the upgrade, we are unable to activate the transformation.

    Below is the error message:

    SQL error with code ‘2.048’. See the following SQL message:
    => column store error: fail to create scenario: [34011] Inconsistent calculation model; scenario: Failed to resolve referenced template scenario SID::SCHEMA:/BIC/00O2XXXXXXXXX=>PROCEDURE (t -1) in node END.
    Details (Errors):
    – scenario: Failed to resolve referenced template scenario SID::SCHEMA:/BIC/00O2XXXX=>PROCEDURE (t -1) in node END.
    Details (Warnings):
    – scenario -> cubeSchema -> calculationScenario (/1BCAMDP/0BW:DAP:TR_00O2TICNCD2KU2FJDU5L3E4O8) -> scenarioHints -> parameter: The doRegisterDependencies parameter has been removed and will be ignored.

    Version M and A is not equal, processing skipped

    We have checked the note, but versions A and M have the same entries.

    Could you please help us with this request?

    • Hi,

      the message says that the database procedure (/BIC/00O2XXXXXXXXX=>PROCEDURE) is not available.

      To ensure that the database procedure is generated, please run the report RSDBGEN_AMDP, use the class /BIC/00O2XXXXXXXXX and select the option “Create database objects”.

      If the report does not raise an error, check whether the procedure is really available.

      You can check that in transaction ST04 (Diagnostics => Procedures): verify that the procedure is available in the SAP BW schema.

      Or use the SQL statement:

      SELECT is_valid
        FROM procedures
       WHERE procedure_name = '/BIC/00O2XXXXXXXXX=>PROCEDURE';

      If the procedure is available and valid, try to activate the transformation again.

      If the error is still present, please create an incident.


  • Hi Torsten,


    We tried to create the procedure but it raises an error.

    SQL message: invalid object name: “


    OSS has been raised for the same – 154020.

  • Hi All,

    does anyone have good coding for how to send out an email with specific errorTab content? This should be triggered in the end routine, where the errorTab is generated.
    I guess it can be done using function/procedure calls, passing the errorTab in parameters?



    • Hi,

      the errorTab is only available in the SAP HANA runtime, i.e. in AMDP routines.

      I’m not sure whether there is an API, function or something similar available for this in SQLScript.

      Or do you mean an ABAP end routine?

      Could you explain where you want to collect your errorTab content and from where you want to send the mail?



      • Hi Torsten,

        I  was hoping getting something like this in amdp.

        pseudo code

        ERR_TABLE = SELECT "TABNAME" FROM "DD02V" WHERE "DDTEXT" LIKE '%<<my_DTP>>%';

        send_email = SELECT "/1BCAMDP/EMAIL_ERRORS"(:ERR_TABLE) FROM dummy;

        function "/1BCAMDP/EMAIL_ERRORS"(err_tab table) returns result int as
          declare exit handler for sqlexception result = 0;

          result = CALL api xyz(I am a bridge to ABAP to send emails from your error tab) return int


        If this is not working, then I guess I need one more flow and have to go the old way (ABAP runtime) and read the table from the errorTab there:

        SELECT "TABNAME" FROM "DD02V" WHERE "DDTEXT" LIKE '%<<my_DTP>>%';

        Loop at tab name

        … send emails

        end loop
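
        For the ABAP branch, a hedged sketch of the “send emails” step using the standard BCS classes could look as follows; the error table `lt_errors`, its `message` component, and the recipient address are made up for illustration:

```abap
" Hedged sketch: mail the contents of an error table via the
" standard BCS API. lt_errors, its 'message' component and the
" recipient address are illustrative assumptions.
DATA lt_text TYPE bcsy_text.

LOOP AT lt_errors ASSIGNING FIELD-SYMBOL(<ls_err>).
  APPEND VALUE soli( line = <ls_err>-message ) TO lt_text.
ENDLOOP.

DATA(lo_send) = cl_bcs=>create_persistent( ).
lo_send->set_document( cl_document_bcs=>create_document(
                         i_type    = 'RAW'
                         i_text    = lt_text
                         i_subject = 'DTP error records' ) ).
lo_send->add_recipient(
  cl_cam_address_bcs=>create_internet_address( 'bw.admin@example.com' ) ).
lo_send->send( ).
COMMIT WORK.
```

        The BCS calls (cl_bcs, cl_document_bcs, cl_cam_address_bcs) are the usual ABAP mail API; the surrounding read logic would come from the DD02V lookup sketched above.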



  • Hi Torsten,


    is error handling supported when having 2 InfoSources and 3 TRFNs, all HANA-enabled? We are on BW/4 and getting the following message while activating the error DTP:

    Analysis process ‘TR_00O2TPAECCMAYXXXT89T5O1WN1’ can not be embedded in a transformation

    Message no. RSDHA056 AND

    Source   does not exist

    Message no. RSTRAN803


    A direct TRFN (not stacked) works properly.


  • Hi Torsten Kessler ,


    Hope you are doing good!!

    Your blog has been very informative for our first development on AMDP based transformation.

    Regarding Error Handling feature i have a couple of questions as we are facing issues with this.

    1. Is error handling with a semantic key supported only for expert-routine-based AMDP transformations, or is it also supported in start/end and field routine transformations? When we enable the semantic key in our transformation with a start routine, it fails with the error “SAP HANA analysis process ‘TR_*’ does not exist”. Is there any step we are missing?
    2. Secondly, in the example of Fig 4.10 you describe error handling that moves a specific record to the error stack. But what happens if my inTab has error records with more than one issue? For example, my source records have text fields containing # characters, and some master data InfoObjects coming from a transactional source are not present in the master data table, ending with a “No SID found” issue.

    In these cases, do we have to write the error handling logic for each scenario (which can vary every time)? In ABAP-based transformations we do not write any error handling logic; errors are identified automatically and moved to the error stack table based on the semantic key definition.

    Can we achieve the same in AMDP-based transformations? And is there any updated blog or SAP KBA for error handling scenarios in AMDP?

    With Regards,

    Kalai Nila

    • Hi,

      it’s always helpful (and often needed) to know which release / SP you are running!

      We are currently working on BW/4HANA 2.0 SP06, and especially the HANA-push-down-based error handling has changed significantly.