
1      HANA based BW Transformation


This blog provides information on the push-down feature for transformations in SAP BW powered by SAP HANA. The content here is based on experiences with real customer issues. The material used is partly taken from the upcoming version of the SAP education course PDEBWP – BW Backend und Programming.

 This blog is planned as part of a blog series which shares experiences collected while working on customer issues. The listed explanations are primarily based on releases between BW 7.40 SP09 and BW 7.5 SP00.

The following additional blogs are planned / available:

A HANA based BW transformation is a “normal” BW transformation. The new feature is that the transformation logic is executed inside the SAP HANA database. From a design time perspective, in the Administrator Workbench, there is no difference between a HANA based BW transformation and a BW transformation that is executed in the ABAP stack. By default, the BW runtime tries to push down all transformations to SAP HANA. Be aware that there are some restrictions which prevent a push down. For example, a push down to the database (SAP HANA) is not possible if a BW transformation contains one or more ABAP routines (Start-, End-, Expert-, or Field-Routine). For more information see Transformations in SAP HANA Database.

Restrictions for HANA Push-Down

Further restrictions are listed in the Help Portal. However, the documentation is not all-inclusive. Some restrictions related to complex and “hidden” features in a BW transformation are not listed in the documentation. In this context “hidden” means that the real reason is not directly visible inside the BW transformation.

The BAdI RSAR_CONNECTOR is a good example for such a “hidden” feature. A transformation using a customer specific formula implementation based on this BAdI cannot be pushed down. In this case the processing mode is switched to ABAP automatically.

The BW workbench offers a check button in the BW transformation UI to check if the BW transformation is “SAP HANA executable” or not. The check will provide a list of the features used in the BW transformation which prevent a push down.

SAP is constantly improving the push-down capability by eliminating more and more restrictions. In order to implement complex customer-specific logic inside a BW transformation, it is possible to create SAP HANA Expert Script based BW transformations. This feature is similar to the ABAP based Expert Routine and allows customers to implement their own transformation logic in SQLScript. A detailed description of this feature follows later on.

SAP Note 2057542 – Recommendation: Usage of HANA-based Transformations provides some basic information and recommendations regarding the usage of SQL Script inside BW transformations.

1.1      HANA Push-Down

What is a SAP HANA push down in the context of BW transformations? When does a push down occur? What are the prerequisites for forcing a SAP HANA push down?

Before I start to explain how a SAP HANA based BW transformation could be created and what prerequisites are necessary to force a push down I will provide some background information on the differences between an ABAP and SAP HANA executed BW transformation.

A HANA based BW transformation executes the data transformation logic inside the SAP HANA database. Figure 1.1 shows on the left-hand side the processing steps for an ABAP based transformation and on the right-hand side for a SAP HANA based transformation.

Figure_1_1.png

Figure 1.1: Execution of SAP BW Transformations

 

An ABAP based BW transformation loads the data package by package from the source database objects into the memory of the Application Server (ABAP) for further processing. The BW transformation logic is executed inside the Application Server (ABAP) and the transformed data packages are shipped back to the Database Server. The Database Server writes the resulting data packages into the target database object. Therefore, the data is transmitted twice between database and application server.

During processing of an ABAP based BW transformation, the source data package is processed row by row (row-based). The ABAP based processing allows you to define field-based rules, which are processed as sequential processing steps.

For the HANA based BW transformation the entire transformation logic is transformed into a CalculationScenario (CalcScenario). From a technical perspective, the metadata for the CalcScenario is stored as a SAP HANA Transformation in BW (see transaction RSDHATR).

This CalcScenario is embedded into a ColumnView. To select data from the source object, the DTP creates a SQL SELECT statement based on this ColumnView (see blog »HANA based BW Transformation – Analyzing and debugging«) and the processing logic of the CalcScenario applies all transformation rules (defined in the BW transformation) to the selected source data. By shifting the transformation logic into the CalcScenario, the data can be transferred directly from the source object to the target object within a single processing step. Technically this is implemented as an INSERT AS SELECT statement that reads from the ColumnView and inserts into the target database object of the BW transformation. This eliminates the data transfer between Database Server and Application Server (ABAP). The complete processing takes place in SAP HANA. 
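Conceptually, the generated statement resembles the following sketch. All object names here are hypothetical; the actual ColumnView and target table names are generated by the BW framework:

```sql
-- Single-step processing: read from the generated ColumnView (which
-- embeds the CalcScenario with all transformation rules) and insert
-- directly into the target table -- no round trip to the ABAP server.
INSERT INTO "/BIC/ATARGET_ODSO"            -- hypothetical target table
  SELECT "MATERIAL", "PLANT", "AMOUNT"
    FROM "_SYS_BIC"."generated/columnview" -- hypothetical ColumnView name
   WHERE "REQUESTSID" = 4711;              -- request filter added by the DTP
```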

1.2      Create a HANA based BW Transformation

The following steps are necessary to push down a BW transformation:

  • Create a SAP HANA executable BW transformation
  • Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA

1.2.1       Create a standard SAP HANA executable BW transformation

A standard SAP HANA executable BW transformation is a BW transformation without SAP HANA specific implementation, which forces a SAP HANA execution.

The BW Workbench tries to push down new BW transformations by default.

The activation process checks a BW transformation for unsupported push down features such as ABAP routines. For a detailed list of restrictions see SAP Help – Transformations in SAP HANA Database. If none of these features are used in a BW transformation, the activation process marks the BW transformation as SAP HANA Execution Possible, see (1) in Figure 1.2.

Figure_1_2.png

Figure 1.2: First simple SAP HANA based Transformation

When a BW transformation can be pushed down, the activation process generates all necessary SAP HANA runtime objects. The required metadata is also assembled in a SAP HANA Transformation (see Transaction RSDHATR). The related SAP HANA Transformation for a BW transformation can be found in menu Extras => Display Generated HANA Transformation, see (2) in Figure 1.2.

From a technical perspective, a SAP HANA Transformation is a SAP HANA Analysis Process (see transaction RSDHAAP) with a strict naming convention. The naming convention for a SAP HANA Transformation is TR_<<Program ID for Transformation (Generated)>>, see (3) in Figure 1.2. A SAP HANA Transformation is only a runtime object and cannot be explicitly created or modified.

The tab CalculationScenario is only visible if the Export Mode (Extras => Export Mode On/Off) is switched on. The tab shows the technical definition of the corresponding CalculationScenario which includes the transformation logic and the SQLScript procedure (if the BW transformation is based on a SAP HANA Expert Script).

If the transformation is marked as SAP HANA Execution Possible, see (1) in Figure 1.2, the first precondition is met to push down and execute the BW transformation inside the database (SAP HANA). In other words, if the flag SAP HANA Execution Possible is set, the BW transformation can be executed in both modes (ABAP and HANA), and the actually used processing mode is set inside the DTP. To be prepared for both processing modes, the BW transformation framework generates the runtime objects for both modes. Therefore the generated program (see Extras => Display Generated Program) for ABAP processing is also visible.

The next step is to create the corresponding DTP, see paragraph 1.2.4 »Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA«.

1.2.2       Create a SAP HANA transformation with SAP HANA Expert Script

If the business requirement is more complex and cannot be implemented with the standard BW transformation features, it is possible to create a SQLScript procedure (SAP HANA Expert Script). When a SAP HANA Expert Script is used to implement the business requirements, the BW framework pushes the transformation logic down to the database. Be aware that there is no option to execute a BW transformation with a SAP HANA Expert Script in processing mode ABAP; only processing mode HANA applies.

From the BW modelling perspective a SAP HANA Expert Script is very similar to an ABAP Expert Routine. The SAP HANA Expert Script replaces the entire BW transformation logic. The SAP HANA Expert Script has two parameters, one importing (inTab) and one exporting (outTab) parameter. The importing parameter provides the source data package and the exporting parameter is used to return the result data package.

However, there are implementation differences between ABAP and SQLScript. An ABAP processed transformation loops over the source data and processes it row by row. A SAP HANA Expert Script based transformation tries to process the data in one block (INSERT AS SELECT). To get the best performance benefit from the push down, it is recommended to use declarative SQLScript logic to implement your business logic within the SAP HANA Expert Script, see blog »General recommendations«.
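As a sketch (all field names are hypothetical), declarative SQLScript chains table variable assignments instead of looping row by row, which lets the optimizer merge the steps into a single statement:

```sql
-- Declarative style: each step is a table variable assignment.
-- No cursor, no loop -- the optimizer can combine both selects.
filtered = SELECT "MATERIAL", "PLANT", "AMOUNT", "RECORD"
             FROM :inTab
            WHERE "AMOUNT" <> 0;              -- drop rows we do not process

outTab   = SELECT "MATERIAL", "PLANT",
                  "AMOUNT" * 100 AS "AMOUNT", -- example calculation
                  "RECORD"
             FROM :filtered;
```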

The following points should be considered before the business requirements are implemented with SAP HANA Expert Script:

  • ABAP is, from today’s perspective, a more powerful language than SQLScript.
  • Development support features such as syntax highlighting, forward navigation based on error messages, debugging support, etc. are better in the ABAP development environment.
  • SQLScript development experience is currently not as widespread as ABAP development experience.
  • A HANA executed transformation is not always faster.

From the technical perspective the SAP HANA Expert Script is a SAP HANA database procedure. From the BW developer perspective the SAP HANA Expert Script is a SAP HANA database procedure implemented as a method in an AMDP (ABAP Managed Database Procedure) class.

The AMDP class is generated by the BW framework and can only be modified with the ABAP Development Tools for SAP NetWeaver (ADT), see https://tools.hana.ondemand.com/#abap. The generated AMDP class cannot be modified in SAP GUI tools like the Class Builder (SE24) or the ABAP Workbench (SE80). Therefore it is recommended to implement the entire dataflow in the Modeling Tools for SAP BW powered by SAP HANA, see https://tools.hana.ondemand.com/#bw. The BW transformation itself must still be implemented in the Data Warehousing Workbench (RSA1).

Next I’ll give a step-by-step introduction to creating a BW transformation with a SAP HANA Expert Script.

Step 1: Start SAP HANA Studio with both tools installed:

  • ABAP Development Tools for SAP NetWeaver (ADT) and
  • Modeling Tools for SAP BW powered by SAP HANA

Now we must switch to the BW Modeling perspective. To open it, go to Window => Other… and select the BW Modeling perspective in the upcoming dialog, see Figure 1.3.

Figure_1_3.png

Figure 1.3: Open the BW Modeling Perspective

To open the embedded SAP GUI a BW Project is needed; it must be created before calling the SAP GUI. To create a new BW Project open File => New => BW Project. A SAP Logon connection is required: choose the SAP Logon connection and use the Next button to enter your user logon data.

Recommendation: After entering your logon data it is possible to finish the wizard and create the BW Project. I recommend using the Next wizard page to change the project name. The default project name is:

     <System ID>_<Client>_<User name>_<Language>

I normally add a postfix for the project type at the end, such as _BW for the BW Project. For an ABAP project I will later use the postfix _ABAP. The reason is that both project types use the same symbol in the project viewer, and the postfix makes it easier to identify the right project.

Once the BW Project is created we can open the embedded SAP GUI. The BW Modeling perspective toolbar provides a button to open the embedded SAP GUI, see Figure 1.4.

Figure_1_4.png

Figure 1.4: Open the embedded SAP GUI in Eclipse

Choose the created BW Project in the upcoming dialog. Next start the BW Workbench (RSA1) within the embedded SAP GUI and create the BW transformation, or switch into edit mode for an existing one.

To create a SAP HANA Expert Script, open Edit => Routines => SAP HANA Expert Script Create in the menu of the BW transformation. Confirm the request to delete the existing transformation logic. Keep in mind that all implemented logic such as Start-, End-, or Field-Routines and formulas will be deleted if you confirm the creation of a SAP HANA Expert Script.

In the next step the BW framework opens the AMDP class by calling the ABAP Development Tools for SAP NetWeaver (ADT). For this an ABAP project is needed. Select an existing ABAP Project or create a new one in the dialog.

A new window with the AMDP class will appear. Sometimes it is necessary to reload the AMDP class by pressing F5. Enter your credentials if prompted.

The newly generated AMDP class, see Figure 1.5, cannot be activated directly.

Figure_1_5.png

Figure 1.5: Newly generated AMDP Class

Before I explain the elements of the AMDP class and the method, I will finalize the transformation with a simple valid SQL statement. The SQL statement shown in Figure 1.6 is a simple 1:1 transformation and is only used as an example to explain the technical behavior.

Figure_1_6.png

Figure 1.6: Simple valid AMDP Method
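In the spirit of Figure 1.6, a minimal valid method body could look like the following sketch. The field list is hypothetical and must match the generated inTab/outTab structures of your transformation:

```sql
METHOD PROCEDURE BY DATABASE PROCEDURE FOR HDB
       LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.
  -- Simple 1:1 transformation: pass every source row through unchanged.
  outTab = SELECT "MATERIAL", "PLANT", "AMOUNT", "RECORD"
             FROM :inTab;
ENDMETHOD.
```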

 

Now we can activate the AMDP class and go back to the BW transformation by closing the AMDP class window. It is now necessary to activate the BW transformation as well. For a BW transformation with a SAP HANA Expert Script the flag SAP HANA Execution Possible is set, see Figure 1.7.

Figure_1_7.png

Figure 1.7: BW Transformation with SAP HANA Script Processing 

As explained before, if you use a SAP HANA Expert Script the BW transformation can only be processed in SAP HANA. It is not possible to execute the transformation on the ABAP stack. Therefore the generated ABAP program (Extras => Display Generated Program) is not available for a BW transformation with the processing type SAP HANA Expert Script.

1.2.2.1       Sorting after call of expert script 

Within the BW transformation the flag Sorting after call of expert script, see Figure 1.8, (Edit => Sorting after call of expert script) can be used to ensure that the data is written in the correct order to the target. 

Figure_1_8.png

Figure 1.8: Sorting after call of expert script

If the data is extracted by delta processing, the sort order of the data can be important (depending on the type of the delta process used).

By default, the flag is always set for all new transformations and it’s recommended to leave it unchanged.

For older transformations, created with a release before 7.40 SP12, the flag is not set by default. Customers can set the flag if they need the data in a specific sort order.

Keep in mind that the flag has an impact at two points:

  • The input/output structure of the SAP HANA Expert Script is enhanced/reduced by the field RECORD.
  • If the flag is set, the result data from the SAP HANA Expert Script is sorted by the new field RECORD after the script is called.

The inTab and outTab structures of a SAP HANA Expert Script are enhanced by the field RECORD if the flag is set. The added field RECORD is a concatenation of the fields REQUESTSID, DATAPAKID and RECORD from the source object of the transformation, see Figure 1.9.

Figure_1_9.png

Figure 1.9: Concatenated field RECORD 
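If the flag is set, the expert script simply has to carry the RECORD field through to the result; a sketch (the other field names are hypothetical):

```sql
-- inTab and outTab both contain the generated RECORD field;
-- select it through so the framework can sort the result afterwards.
outTab = SELECT "MATERIAL", "PLANT", "AMOUNT",
                "RECORD"  -- REQUESTSID + DATAPAKID + RECORD, concatenated
           FROM :inTab;
```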

The RECORD field from the outTab structure is mapped to the internal field #SOURCE#.1.RECORD. Later on in a rownum node of the CalculationScenario the result data will be sorted by the new internal field #SOURCE#.1.RECORD, see Figure 1.10. 

Figure_1_10.png

Figure 1.10: CalculationScenario node rownum

1.2.2.2       The AMDP Class 

The BW transformation framework generates an ABAP class with a method called PROCEDURE. The class implements the ABAP Managed Database Procedure (AMDP) marker interface IF_AMDP_MARKER_HDB. The interface marks the ABAP class as an AMDP class. A method of an AMDP class can be written as a database procedure. Therefore the BW transformation framework creates a HANA specific database procedure declaration for the method PROCEDURE, see Figure 1.11: 

Figure_1_11.png

Figure 1.11: Method PROCEDURE declaration 

This declaration assigns the method to the HANA database (HDB), sets the language to SQLSCRIPT, and defines the database procedure as READ ONLY. The read-only option means that the method/procedure must be free of side effects: only SQL (DML) elements that read data can be used. Statements like DELETE, UPDATE, or INSERT on persistent database objects are not allowed, and such data modification statements cannot be encapsulated in a further procedure either.

Inside an AMDP procedure you cannot directly read data from a database object managed by ABAP, like a table, view or procedure, see (1) in Figure 1.12. A database object managed by ABAP has to be declared before it can be used inside an AMDP procedure, see (2). For more information about the USING option see AMDP – Methods in the ABAP documentation.

Modification of the method declaration

If further tables are read inside the SQLScript, it can be necessary to enhance the method declaration by adding the USING option. It is important that the first part of the method declaration stays stable and is not changed. Do not change the following part of the declaration:

 

METHOD PROCEDURE BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY

 

The USING option must be added at the end of the declaration, see Figure 1.12.

 

Figure_1_12.png

Figure 1.12: Declaration of DDIC objects 

The AMDP framework generates wrapper objects for the declared database objects managed by ABAP. The view /BIC/5MDEH7I6TAI98T0GHIE3P69D1=>/BIC/ATK_RAWMAT2#covw in (3) was generated for the declared table /BIC/ATK_RAWMAT2 in (2). The blog Under the HANA hood of an ABAP Managed Database Procedure provides further background information about AMDP processing and the generated objects.
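Put together, a declaration with a USING clause might look like this sketch. The lookup field VENDOR and the join condition are hypothetical; the table name is taken from the example above:

```sql
METHOD PROCEDURE BY DATABASE PROCEDURE FOR HDB
       LANGUAGE SQLSCRIPT OPTIONS READ-ONLY
       USING /BIC/ATK_RAWMAT2.   -- declare the ABAP-managed table here;
                                 -- never change the part before USING
  -- Join the source package with the declared lookup table:
  outTab = SELECT src."MATERIAL", src."PLANT", lkp."VENDOR", src."RECORD"
             FROM :inTab AS src
             LEFT JOIN "/BIC/ATK_RAWMAT2" AS lkp
               ON src."MATERIAL" = lkp."MATERIAL";
ENDMETHOD.
```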

AMDP Class modification

Only the method implementation belongs to the BW transformation metadata, and only this part of the AMDP class is stored, see table RSTRANSCRIPT.

Currently the ABAP Development Tools for SAP NetWeaver (ADT) do not protect the source code that should not be modified, as is done for ABAP routines. This means that all modifications in the AMDP class outside the method implementation will not be transported to the next system and will be overwritten by the next activation process. The BW transformation framework regenerates the AMDP class during the activation process.

 

Reading from a local table without DDIC information
All tables and views from the local SAP schema must be declared in the USING clause and must be defined in the ABAP DDIC. It is possible to read data from other schemas without a declaration in the USING clause.
If a table or view in the local SAP schema starts with the prefix /1BCAMDP/ it is not necessary to declare it in the USING clause.

Later on I’ll provide some general recommendations in a separate blog, based on experiences collected in customer implementations and customer incidents. The general recommendations will cover the following topics:

  • Avoid preventing filter push down
  • Keep internal tables small
  • Initial values
  • Column type definition
  • Avoid implicit casting
  • Use of DISTINCT
  • Potential pitfalls with UNION / UNION ALL
  • Input parameters inside underlying HANA objects
  • Internal vs. external format

AMDP Class name

Until BW 7.50 SP04 the AMDP class name is generated based on the field RSTRAN-TRANPROG. As of BW 7.50 SP04 the field RSTRAN-EXPERT is used to generate the AMDP class name. The metadata storage for the AMDP routines was adjusted to align it with the ABAP metadata storage.

 

AMDP database procedure

Sometimes, in BW 7.40, it can happen that the corresponding database procedure is not generated. The report RSDBGEN_AMDP can be used to generate the database procedures for a given AMDP class.

1.2.3       Dataflow with more than one BW transformation 

The push-down option is not restricted to data flows with one BW transformation. It is also possible to push down a complete data flow with several BW transformations (called a stacked data flow). To get the best performance benefits from the push down, it is recommended to stack a data flow with a maximum of three BW transformations. More are possible but not recommended.

The InfoSources used in a stacked data flow (see SAP Help: InfoSource and Recommendations for Using InfoSources) can be used to aggregate data within the data flow if the processing mode is set to ABAP. If the processing mode is set to SAP HANA, the data is not aggregated as defined in the InfoSource settings. The transformation itself does not know the processing mode, therefore you will not get a message about the InfoSource aggregation behavior. The processing mode is set in the DTP.

That means the BW transformation framework prepares the BW transformation for both processing modes (ABAP and HANA). During the preparation the framework does not throw a warning regarding the lack of aggregation in processing mode HANA.

By using the check button for the HANA processing mode within the BW transformation, you will get the corresponding warning regarding the InfoSource aggregation, see Figure 1.13.

Figure_1_13.png

Figure 1.13: HANA processing and InfoSources 

CalculationScenario in a stacked data flow

The corresponding CalculationScenario for a BW transformation is not available if the source object is an InfoSource. That means the tab CalculationScenario is not available in the export mode of the SAP HANA Transformation, see Extras => Display Generated HANA Transformation. An InfoSource cannot be used as a data source object in a CalculationScenario. The related CalculationScenario can only be obtained through the SAP HANA Transformation of the corresponding DTP. I’ll explain this behavior later in the blog »HANA based Transformation (deep dive)«.

1.2.4       Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA 

The Data Transfer Process (DTP) that executes a BW transformation provides a flag to control the HANA push down of the transformation. The DTP flag SAP HANA Execution, see (1) in Figure 1.14, can be checked or unchecked by the user. However, the flag can only be checked if the transformation is marked as SAP HANA Execution Possible, see (1) in Figure 1.2. By default the flag SAP HANA Execution is set for each new DTP if

  • the BW transformation is marked as SAP HANA execution possible and
  • the DTP does not use any options which prevent a push down.

Up to BW 7.50 SP04 the following DTP options prevent a push down:

  • Semantic Groups
  • Error Handling – Track Records after Failed Request

The DTP UI provides a check button, like the BW transformation UI, to validate a DTP for HANA push down. If a DTP cannot push down the logic of the data flow (all involved BW transformations), the check button will provide the reason.

Figure_1_14.png

Figure 1.14: DTP for the first simple SAP HANA based Transformation

In the simple transformation sample above I’m using one BW transformation to connect a persistent source object (DataSource (RSDS)) with a persistent target object (Standard DataStore Object (ODSO)). This type is also called a non-stacked data flow; I’ll provide more information about non-stacked and stacked data flows later. The related SAP HANA Transformation for a DTP can be found in menu Extras => Display Generated HANA Transformation, see (2) in Figure 1.14. In case of a non-stacked data flow the DTP uses the SAP HANA Transformation of the BW transformation, see (3) in Figure 1.14.

The usage of a filter in the DTP does not prevent the HANA push down. ABAP routines or BEx variables can be used as well. The filter values are calculated in a pre-step and added to the SQL SELECT statement which reads the data from the source object. We will look into this later in more detail.

1.2.5       Execute a SAP HANA based transformation

From an execution and handling perspective, a HANA based transformation behaves like an ABAP based transformation: simply press the Execute button or execute the DTP from a process chain.

Later on I will provide more information about packaging and parallel processing.

1.2.6       Limitations

There is no option to execute a transformation with a SAP HANA Script on the ABAP application server. With BW 7.50 SP04 (the next feature pack) it is planned to deliver further options to use SAP HANA Scripts (Start-, End- and Field-Routines are planned) within a BW transformation.

1.2.7       Feature list that prevents a push down (Added on 07/20/2016)

The official SAP Help provides a small list of features which prevent a HANA execution. Here is a more detailed list:

  • Queries as InfoProvider are not supported as the source
  • ABAP Start-, End- and Field-Routines 
    • With BW 7.50 SP04 corresponding SQL Script routines are supported
  • Formula elements that prevent a push down
    • DATECONV,
    • WEEK_TO_1ST_DAY,
    • FISCPER_CALMONTH,
    • CALMONTH_FISCPER,
    • CONDENSE,
    • ABORT_PACKAGE, 
    • SKIP_RECORD,
    • SKIP_RECORD_AS_ERROR
    • LOG_MESSAGE
    • Customer created formulas (BAdI RSAR_CONNECTOR)
  • Transfer routines (Characteristic-/InfoObject routines) are not supported
    • 0SOURSYSTEM and 0LOGSYS are supported
  • Rule type TIME (Time Characteristic) with time distribution is not supported
  • Rule groups are not supported
  • InfoObject with time-dependent Attributes are not supported as source and as target
    • Supported with BW 7.50 SP04
  • Cube like DataStore Objects (advanced) are not supported as target
    • Supported with BW 7.50 SP04
  • DataStore Objects (advanced) with non-cumulative key figure(s) are not supported as target
    • Supported with BW 7.50 SP04
  • To read data from DataStore objects, the entire key must be provided
  • Near-line connections
    • Supported with BW 7.50 SP04 (Added on 09/21/2016)
  • DTP Options
    • Error Handling not supported
      • Supported with BW 7.50 SP04
    • Semantic Groups 

Further information about supported SAP HANA execution features is provided in SAP Note 2329819 – SAP HANA execution in DTPs (Data Transfer Processes) – Optimizations.


44 Comments


  1. Nilesh Pathak

    Hi Torsten, thanks for blog – we were working on BW 7.5 SP2  HANA project & for one of requirement we need to delete some data coming in from source ERP to BW. like in BW we would normally use start routine to delete unwanted records but here as we know processing at HANA DB will not be supported when we use Start routine.

    Hence we were looking to use HANA SQL script in order to leverage HANA hence please can you let us know if this is possible to delete certain records based on some condition lets say

    I want to delete all records from source where field X is blank.

    If this is possible then can you advise syntax pls as when I was trying to use DELETE in procedure it was giving error.

    Hence appreciate your quick response.

    regards

    Nilesh

    (0) 
    1. Torsten Kessler Post author

      Hi,

      SQLScript Start-, End- and Field routines are planned for SP04.

      But to delete (do not further process) specific rows you can filter the inTab.

      For your sample above you can write the whole content from the inTab into a temporary table and filter out the rows which you do not want to process further on. Example:

        tmpInTab = SELECT *
                     FROM :inTab
                    WHERE "field_X" <> '';

      If there no further logic required you can also  write the filtered result into the outTab:

       

        outTab = SELECT <field list>
                   FROM :inTab
                  WHERE "field_X" <> '';

      Here you must explicitly map the fields from the inTab to the fields of the outTab!

      Keep in mind that the logic should be implemented by using declarative SQLScript logic

      to enable the optimizer to get the best runtime performance.

      I’ll provide more information about recommendation in one of the next upcoming blogs (General recommendation).

      Regards

      Torsten

      (0) 
  2. Erdem Pekcan

    Great information, thank you!

    What could be the reason that I can’t see the generated procedure in Figure 1.12 ?

    Also, I try to map a sample field with a lookup. How should the whole syntax be?


    I’m working on a BW 7.4 SP14 and HANA 1.00.112.01 system.

    Thanks!

    (0) 
    1. Torsten Kessler Post author

      Hi,

      1.  == What could be the reason that I can’t see the generated procedure in Figure 1.12 ? ==

      You mean, you had created a transformation with a SAP HANA Expert Script and activated the AMDP class and the database procedure <<CLASS-NAME>>=>PROCEDURE is not in the folder PROCEDURE in the schema SAP<SID>?

      In that case you can first try to activate the transformation. If the DB procedure is still not there, open the report RSDBGEN_AMDP, enter your class name and choose the option “Create database objects”. But the second option should not be necessary.


      2. == Also, I try to map a sample field with a lookup. How should the whole syntax be? ==

      First, our recommendation is to use standard transformation functions if possible. Check first whether a standard rule (master data read, read from classic DSO, read from advanced DSO) is usable. We enhanced the lookup rule so that you can also map fields in case the original InfoObjects are not available in the source structure. See SAP Help:

      Read from DataStore Object (advanced):

      To read from a DataStore object (advanced), the system first provides you with a proposal for how the fields can be assigned. The key fields must be assigned to the source fields of the transformation. When assigning the fields, the names do not need to be identical.


      In case the standard rules don’t fit, you can read (join) the data from a table with the source data (inTab).

      The following sample explains how I read the field TK_VENDOR from the table /BIC/ATK_RAWMAT1 and add it to the outTab. To do that, I JOIN the inTab with the table /BIC/ATK_RAWMAT1.

      outTab = SELECT intab."MATERIAL",
                      intab."PLANT",
                      ...
                      mat."TK_VENDOR",
                      ...
                      "RECORD"
                 FROM :inTab AS intab
                 JOIN "/BIC/ATK_RAWMAT1" AS mat
                   ON intab."MATERIAL" = mat."TK_MAT";

      More information about the JOIN operation can be found here: All about Joins using SQL in HANA

      Hope this is helpful

      Regards
      Torsten

      (0) 
      1. Erdem Pekcan

        Hi Torsten,

        Thanks for the detailed explanation.

        What I meant was, in Figure 1.12 the image marked with “3” includes the following line:

        /BIC/5MDEH7I6TAI98T0GHIE3P69D1=>/BIC/ATK_RAWMAT2#covw

        The part “#covw” seems to be a generated suffix, right?

        Is it generated once “USING ZTABLE” is added and activated within the script?

        If so, that’s something I couldn’t find.

        I will try the standard option as long as possible.

        Thanks again.

        (0) 
        1. Torsten Kessler Post author

          Hi,

          that’s right, the generated object

               /BIC/5MDEH7I6TAI98T0GHIE3P69D1=>/BIC/ATK_RAWMAT2#covw

          is a view (not a procedure) and can be found in the folder Views (not Column Views).

          Torsten

          (0) 
  3. Vladimir Kolodyazhny

    Hello Torsten!

    We are currently facing the issue with filtering incoming data by dtp filters.

    I noticed, you’re planning some articles about

    • Input Parameter inside underlying HANA objects

    Maybe you can provide some additional sources to look up on these topics?

    (0) 
    1. Torsten Kessler Post author

      Hi Vladimir,

      In one of my next blogs (general recommendations) I will provide a sample where I use a CalcView with an input variable. The CalcView is part of a CompositeProvider, and the CompositeProvider is the source of a transformation.

      I’ll show that if the input parameter is mandatory, the DTP filter is also mandatory.


      Sorry but I have no further source material to provide.


      Maybe you can explain your issue in more detail or, in case of an error, create an incident.


      Torsten

      (0) 
  4. abilash n

    Excellent blog Torsten 🙂 with detailed explanations and screenshots. Bookmarked it, as I currently moved from BW to HANA. Definitely a big like from my end… (expecting to be back in BW once the HANA project is completed)

    (0) 
  5. Jan Eeckhaut

    Hello,

    Very good blog, thank you!

    I see in your example that you refer to the active table if you need to read data from another ADSO in your SQL script.

    To be NLS safe, we read the ‘External SAP HANA view’

    (example: SELECT * FROM “system-local.bw.bw2hana::DADSO1”).

    What is your idea about using SQL script in BW and NLS?

    Regards

    (0) 
    1. Torsten Kessler Post author

      Hi,
      regarding your NLS question, do you mean
      – a NLS located object as source object of the transformation or
      – do you want to read data from a NLS located object inside the SQL script?

      Torsten

      (0) 
          1. Jan Eeckhaut

            Hello Torsten,

            In the meantime we use the ‘External SAP HANA view’ of an ADSO in a lot of our HANA Expert scripts to SELECT or JOIN. This works perfectly. We recently tried to upgrade from 7.5 SP3 to 7.5 SP7 and encountered performance issues in some SQL statements that use these views. In most cases we have solved the issue by reactivating the ADSO (it regenerates the HANA view), but not all cases are solved. Do you advise us to use these ‘External SAP HANA views’ in HANA Expert scripts?

            Regards,

             

            Jan

            (0) 
            1. Torsten Kessler Post author

              Hi,

              first please ensure that all notes are implemented after the upgrade.

              I added some new notes today.

               

              Regarding the external views, it is not so easy to provide a general recommendation.

              I’ll check whether it is possible to provide a recommendation here and will come back.

               

              Torsten

              (0) 
              1. Torsten Kessler Post author

                Hi Jan,

                I discussed the topic internally, and up to now we can say there is no one way that we can recommend. That means we can neither recommend using external views nor recommend reading data directly from the active BW object table.

                But that does not mean that we forbid their use.

                What we recommend is to try to implement the transformation logic by using standard rule types (read from ADSO, read master data, formula, …).

                Sometimes it is necessary to split the logic of a routine into several standard rules (used in a stacked data flow) to obtain the same logic as in the routine.

                For BW/4HANA (SP07) we’re planning to provide a new stable view which can be used in the staging context.

                Torsten

                (0) 
  6. Vasanth Gutta

    Nice blog Torsten;

    I am wondering if you can provide us some sample code for reading the inTab into a temporary table and another DSO table from BW into a second temporary table, doing a lookup with an ABAP-style temporary work area structure and updating in a loop to get the final result.

    Sorry for asking for help with this kind of coding. Any sample patterns to follow in AMDP SQLScript?

    such as how we used to write expert ABAP routines as below.

    Loop at SOURCE_PACKAGE into workarea_source_package.

      read table /BIC/Ablablabla
        into workarea_looptable
        with key key = workarea_source_package-key.

      if sy-subrc = 0.
        workarea_result_package-field1 = workarea_source_package-field1.
        workarea_result_package-field2 = workarea_looptable-field2.
        workarea_result_package-field3 = workarea_looptable-field4.
      endif.

      append workarea_result_package to RESULT_PACKAGE.

    endloop.

    (0) 
    1. Torsten Kessler Post author

      Hi,
      as I wrote above:

      However, there are differences from the perspective of implementation between ABAP and SQLScript. An ABAP processed transformation loops over the source data and processes them row by row. A SAP HANA Expert Script based transformation tries to processes the data in one block (INSERT AS SELECT). To get the best performance benefit of the push down it is recommended to use declarative SQLScript Logic to implement your business logic within the SAP HANA Expert Script, see blog »General recommendations«.

      That means you have to consider that a push down transformation is processed in one SQL (INSERT AS SELECT) statement. Therefore a general recommendation is to avoid statements like:

      – LOOP,
      – IF THEN,
      – WHILE,
      – …

      like we typically use in ABAP.
      In a SQLScript routine (or AMDP routine), the parameter inTab is the same as the SOURCE_PACKAGE in the ABAP routine.

      If you want to select some data from the inTab, you can create a SELECT on the inTab:
      tmpTable = SELECT COLUMN_A, COLUMN_B FROM :inTab;

      To read data from an active DSO table, you can also create a SELECT statement:
      tmpDSO = SELECT COLUMN_X, COLUMN_Y FROM "/BIC/Ablablabla";

      Now you can create a UNION or a JOIN based on both temp tables. It is also possible to join or union the inTab directly with a database table.
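
      Putting those snippets together, a minimal sketch of such a lookup could look like this (all column names and the join condition are only examples, not from a real system):

      ```sql
      -- Sketch only: enrich the source package with a lookup column.
      -- :inTab is the AMDP input parameter; all names are illustrative.
      tmpTable = SELECT "COLUMN_A", "COLUMN_B"
                   FROM :inTab;

      tmpDSO = SELECT "COLUMN_A", "COLUMN_Y"
                 FROM "/BIC/Ablablabla";

      -- A LEFT OUTER JOIN keeps source rows without a lookup match
      -- (COLUMN_Y is NULL there), similar to an ABAP READ TABLE that
      -- leaves the target field initial when sy-subrc <> 0.
      joined = SELECT src."COLUMN_A",
                      src."COLUMN_B",
                      dso."COLUMN_Y"
                 FROM :tmpTable AS src
                 LEFT OUTER JOIN :tmpDSO AS dso
                   ON src."COLUMN_A" = dso."COLUMN_A";
      ```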

      But keep in mind: if you create too complex statements, it is sometimes hard to read and understand the coding when you search for an error or for the reason for erroneous data.

      On the other hand, compiling the whole logic into one complex statement can increase the runtime.
      So the goal is to find a balance between good runtime and coding that can be maintained.

      Torsten

      (0) 
  7. Vasanth Gutta

    Thanks Torsten,

    The reason I am asking about the traditional ABAP way is that we have a slightly different situation. I have tried straight SELECT statements, but they don’t help in transforming the data. Let me explain our situation with our developments:

    We have several hierarchies coming from several sources. I have planned to load them using BODS into BW PSA and then into an aDSO without any transformations. From there we are again loading into another aDSO, flattening it; however, the data is not as straightforward as in other transformations with lookups.

    We have 5 fields and up to 8 levels of hierarchy:
    1. Child hierarchy code
    2. Level no
    3. Parent hierarchy code
    4. Child hierarchy code attribute 1
    5. Child hierarchy code attribute 2

    My target aDSO is like below:

    Lvl8 Code – Key
    Lvl8 code attr 1
    Lvl8 code attr 2
    Lvl7 Code
    Lvl7 code attr 1
    Lvl7 code attr 2
    Lvl6 Code
    Lvl6 code attr 1
    Lvl6 code attr 2
    Lvl5 Code
    Lvl5 code attr 1
    Lvl5 code attr 2
    Lvl4 Code
    Lvl4 code attr 1
    Lvl4 code attr 2
    Lvl3 Code
    Lvl3 code attr 1
    Lvl3 code attr 2
    Lvl2 Code
    Lvl2 code attr 1
    Lvl2 code attr 2
    Lvl1 Code
    Lvl1 code attr 1
    Lvl1 code attr 2

    Now I am trying to transform and flatten the data, making the tail-end level 8 code the key in the aDSO.

    My logic is as below:

    source1 = select * from :inTab where level = 8;
    source2 = select * from :inTab where level <> 8;

    From here I am stuck on how to prepare the flattened result data without loops. Sorry for asking for such a deep dive; I am stuck at this point, where I have written most of the logic in ABAP expert routines to move the project forward. Any help?

    (0) 
    1. Torsten Kessler Post author

      Hi,
      you can use all SQLScript features. It is also possible to use loops and conditions.

      I didn’t completely get your requirement, but I think it would be possible to get the requested result with a statement like:

      SELECT CASE WHEN level = 8 THEN …
                  WHEN level = 7 THEN …
             END …

      Otherwise you can implement it in the same way as in ABAP.
      As explained in the blog, if you are using imperative programming logic it will be translated into L (L-Pop).

      And this CAN (not must) block the optimizer!

      The general statement from development is: “It is implausible that an L-Pop is faster than a pure SQL statement!”
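
      As a rough sketch of that set-based idea (the column names "HLVL", "CHILD" and the target columns are invented for illustration), a CASE expression can replace an explicit per-level loop:

      ```sql
      -- Sketch only: derive level-dependent target columns in one
      -- declarative SELECT instead of looping over the levels.
      outTab = SELECT CASE WHEN "HLVL" = 8 THEN "CHILD" END AS "LVL8_CODE",
                      CASE WHEN "HLVL" = 7 THEN "CHILD" END AS "LVL7_CODE",
                      -- ... one CASE per remaining level ...
                      "RECORD"
                 FROM :inTab;
      ```

      Because the statement stays declarative, the optimizer can process it as one block instead of generating an L-Pop.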

      Torsten

      (0) 
  8. Vasanth Gutta

    Hi Torsten,

    I have figured it out and written the AMDP script in SQL using unions and joins, but it is very clumsy. I have been doing a lot of ABAP, and converting that into HANA AMDP SQLScript is not that straightforward, but it made me think a lot.

    I have finally written and tested the transformation and data successfully, but I couldn’t find a difference in performance between the BW routine world and the HANA AMDP world. Or maybe that is due to the low volume of data.

    However, it is more interesting and challenging to learn new coding techniques. I went through the HANA SQL guide and wrote it line by line for 3 days to finish it.

    This is how i have written:

    declare lvl int := 7;

    temp_source = select * from :inTab order by HLVL desc, child asc;

    l_out = select
    “target structure with Nulls most and relevant fields for level 8 of parent child hierarchy with attributes” from temp_source

    while :lvl > 0 DO

    CASE (:lvl = 7)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    CASE (:lvl = 6)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 6, 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    CASE (:lvl = 5)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 5, 6, 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    CASE (:lvl = 4)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 4, 5, 6, 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    CASE (:lvl = 3)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 3, 4, 5, 6, 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    CASE (:lvl = 2)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 2, 3, 4, 5, 6, 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    CASE (:lvl = 1)
    then
    l_out =  “target structure with Nulls most and relevant fields for level 1, 2, 3, 4, 5, 6, 7 and 8 of parent child hierarchy with attributes” from l_out as main inner join temp_source as part (main.parent = part.child) where “temp_source level = :lvl”;

    end case;

    lvl := :lvl - 1;

    end while;

    outTab = select from l_out structure;

    errorTab = select error stuff;

    (1) 
  9. Philipp Schornberg

    Hi Thorsten,

    is there any news regarding the new version of the PDEBWP course you mentioned? Or is it already available and I couldn’t find it?

    Thanks in advance,
    Philipp

     

    (0) 
    1. Torsten Kessler Post author

      Hi Philipp,

      we (the official trainer/course owner and me) are currently quite busy with other topics ;-(

      And the idea is to cover BW/4 topics as well in the new version.

      Maybe in the middle of the year, but no promises on that.

      Torsten

       

      (0) 
  10. KD Jain

    Hi Torsten,

    It is a very informative blog. I got a lot of information from here.

    But for sure I have to read this one more time to become acquainted. Going for a deep dive again, thanks a lot for this.

    Thanks, KD Jain

    (0) 
    1. Torsten Kessler Post author

      Hi,

      if you say “include”, I assume you want to call a procedure which is implemented in R, right?

      Or do you mean that you can change the method declaration  from:

      METHOD GLOBAL_EXPERT BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READONLY.

      to:

      METHOD GLOBAL_EXPERT BY DATABASE PROCEDURE FOR HDB LANGUAGE R-SCRIPT OPTIONS READONLY.

      ?

      The second one is currently not possible.

      But we can talk about it.

      Can you provide a use case?

      Torsten

       

      (0) 
  11. Rajesh Majeti

    Hi Torsten,

     

    We moved to ECC on HANA and are leveraging embedded BW for our BPC. As part of our requirement, we are loading from one hierarchy InfoObject to another hierarchy InfoObject. We are getting an issue like

    Target IOBJ HIER not supported for HANA execution

    Transformation must not be executed on HANA.

    Is it an issue with BW 7.5 SP5 or with the way we are handling the hierarchies? I thought to check with you before I raise it to SAP, as it looks like an issue with hierarchies.

     

    Regards,

    Rajesh

    (0) 
    1. Torsten Kessler Post author

      Hi,

      the message only means that you cannot execute the transformation in the HANA processing mode. But you can use the ABAP processing mode instead.

       

      Torsten

      (0) 
  12. Rajesh Majeti

    Hi Torsten,

     

    How do I enable the ABAP processing mode? I don’t see that option in the DTP settings in embedded BW. Please let me know if it works differently.

    Regards,

    Rajesh

    (0) 
    1. Torsten Kessler Post author

      Hi,

      in the right upper corner there should be a checkbox with the label “SAP HANA Execute”.

      If the checkbox is marked, the DTP runs in the HANA processing mode.

      If you remove the marker from the checkbox, the DTP is executed in ABAP mode.

      If the DTP runs in delta mode and the init has already been processed, it could be that you cannot change the processing mode. In that case it is necessary to empty the target first.

       

      You can see the checkbox at point (1) in Figure 1.14 (DTP for the first simple SAP HANA based Transformation) in this blog.

      Torsten

       

       

      (0) 
  13. Rajesh Majeti

    Thanks Torsten. That is the usual case in BW modeling, but embedded BW is working strangely.

    I have the checkmark disabled (not selected). It looks like we cannot load hierarchies manually in BW 7.5 SP5 (only the Remote Hier Class with Hier Class Parameters: CL_FCOM_IP_HRY_READER_GLACCT).

     

    Regards,

    Rajesh

    (0) 
  14. Shanthi Bhaskar

    Hi Torsten,

     

    Great article.

    Did you have a chance to use SAP function modules in your SQL scripts? I am trying to convert a lot of my transformations into AMDP, but I don’t know how to handle the FMs.

     

    Thanks,

    Shanthi.

     

    (0) 
    1. Torsten Kessler Post author

      Hi,

      no, that’s not possible.

      It also makes no sense. The main purpose of pushing down the transformation processing is to prevent data transfer. If an FM were used in SQLScript, the data would have to be transferred from the database to the application server for processing in the FM.

       

      Keep in mind, it is not necessary to migrate existing transformations soon.

      The ABAP runtime is still available.

       

      Torsten

      (0) 
  15. Blues Clues

    Hi,

    Thanks for the blog. One question though.

    If I have a global declarative section in my original ABAP with an internal table that is shared between start and end routines, how can I have similar data sharing between AMDP based start and end routines?

    OUTTAB provides the dataset being processed, but I have another global table that is used in the end routine for additional processing.

    I have also used that global table for lookups and decision-making steps between data packets, such as whether I have already processed certain customers in previous packets, or whether a newer record came in an earlier data packet, etc.

     

    (0) 
    1. Torsten Kessler Post author

      Hi,

      that is not possible. The AMDP methods are declared READ-ONLY to ensure that the coding is side-effect free. That means you cannot use temporary tables or session variables to share data across database procedures.

      Your sample with certain customers in earlier packages sounds like a semantic group issue. I’m not sure which release you are using, but with BW/4HANA (I believe starting with SP04) semantic groups no longer prevent a push down.

      Further on, please do not try to translate every ABAP based transformation one-to-one into a SQLScript (AMDP) based transformation. The processing logic between ABAP and SQLScript is different. Please see paragraph 1.1 in this blog.

      We provide a toolbox with a lot of standard transformation rules and the option to use ABAP and/or SQL Script in case the standard rules are not enough.

      It’s up to you to pick the right option to implement the business logic.

      Torsten

      (0) 
  16. Naveen Rondla

    Hi Torsten,

    Thanks for the nice blog. Your blog encouraged me to try an AMDP script. We recently upgraded to 7.5 SP5. When I tried to test a simple logic following your blog, I ran into an issue. The code was activated without any errors, but when I tried to activate the transformation, it gave an error that a text field cannot be mapped (first snapshot attached). I deleted all my logic and added a simple select from the inTab (second snapshot). Even then I see the error. If I delete the start routine and map the fields directly, the transformation gets activated without any issues. I could not find any note on the marketplace. I am not sure if I am doing anything wrong or if it is a bug. Your feedback is appreciated.

     

    (0) 
    1. Torsten Kessler Post author

      Hi,

      for the topic BW transformation with HANA push down we do not create separate notes for each issue. We are putting the fixes into collection notes for each SP.

      I wrote a separate blog where you can find all relevant notes.

      Please check which notes are relevant for your release / SP and implement them in the latest version!

      If all notes are on the current version, please try again.

      Torsten

      (0) 
