
Last change on 01/28/2018

This blog is part of the blog series HANA based BW Transformation.

The following list is based on experiences from real customer messages and is intended to help customers learn from the mistakes of others.

9.1 SQL Script Procedures

Some SQLScript best practice tips can be found in the chapter »Best Practices for Using SQLScript« of the SAP HANA SQLScript Reference.

Here are some general SQLScript recommendations:

  • Do not mix SQLScript and CE functions within a script.
    • Example: Do not write SQLScript that reads from a procedure or CalcView implemented with CE functions.
  • Pass on all parameters as required to the lowest level
  • Reduce the size of the data set as early as possible in the data flow. This can be done by applying static filters or applying dynamic filters using parameters at the lowest level
  • Minimize the size of any intermediate result sets in complex processing. Do not include any extra attribute columns that may increase the rows of aggregated results being passed between nodes in the data flow.
  • Avoid extremely large intermediate result sets, even if they will be reduced or aggregated before the final result is returned to the client. These can have high memory requirements.
  • Perform calculations at the last node possible in the data flow (ideally in the final aggregation or projection). This reduces the number of data rows on which you perform the calculations and also reduces the number of columns transferred from one node to another.
  • In some situations, you can use a union and then aggregate the results instead of using a join for better performance.
  • Reduce Complexity of SQL Statements: Break up a complex SQL statement into many simpler ones. This makes a SQLScript procedure easier to comprehend.
  • Identify Common Sub-Expressions: If you split a complex query into logical sub queries it can help the optimizer to identify common sub expressions and to derive more efficient execution plans.
  • Understand the Costs of Statements: Employ the explain plan facility to investigate the performance impact of different SQL queries used in scripts.
  • Reduce Dependencies: As SQLScript is translated into a dataflow graph, and independent paths in this graph can be executed in parallel, reducing dependencies enables better parallelism, and thus better performance.
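The union-then-aggregate tip can be sketched as follows; the table names SALES_A and SALES_B and their columns are purely hypothetical:

```sql
-- Instead of joining two part-providers and aggregating the join result,
-- union the sources first and aggregate once (a sketch, not a general rule):
lt_union = SELECT "MATERIAL", "AMOUNT" FROM "SALES_A"
           UNION ALL
           SELECT "MATERIAL", "AMOUNT" FROM "SALES_B";

lt_result = SELECT "MATERIAL", SUM( "AMOUNT" ) AS "AMOUNT"
              FROM :lt_union
             GROUP BY "MATERIAL";
```

Whether this outperforms a join depends on the data and should be verified with the explain plan, as recommended above.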

The following SQLScript issues relate to SQLScript in the context of BW.

9.1.1 Initial values

Within BW, all fields are defined as NOT NULL columns. Therefore, the developer is responsible for ensuring that all data in the result table of a SQLScript routine is initialized in a type-conforming way. The SQL function COALESCE can be used to avoid NULL values and to ensure that all values are initialized correctly in case of NULL values. Characteristic sample:

COALESCE( inTab."MATERIAL", '' ) AS "MATERIAL"

In the case of a numeric field, it is important to initialize the field with a numeric value. The default initial value for a numeric field is 0 and not ''. Key figure sample:

COALESCE( inTab."KEYFIGURE", 0 ) AS "KEYFIGURE"
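Put together, the projection list of a routine's result table could look like the following sketch (MATERIAL and KEYFIGURE are example fields; the technical columns are passed through unchanged):

```sql
outTab = SELECT COALESCE( inTab."MATERIAL",  '' ) AS "MATERIAL",
                COALESCE( inTab."KEYFIGURE", 0  ) AS "KEYFIGURE",
                inTab."RECORD",
                inTab."SQL__PROCEDURE__SOURCE__RECORD"
           FROM :inTab AS inTab;
```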

9.1.2 Avoid preventing filter push down

The optimizer tries to push the filter down as far as possible towards the source to keep the data volume as small as possible. Preventing this filter push down can have a negative effect on performance.

The calculation engine tries to optimize the SQLScript code by compiling all local SELECT statements into one SELECT statement. To do so, the single statements are inlined. Sample:

Original statements:

tab = select A, B, C from T where A = 1;

tab2 = select C from :tab where C = 0;

In-lined statement:

tab2 = select C from (select A,B,C from T where A = 1) where C=0;

For further information about inlining, see the hints NO_INLINE and INLINE. These hints can also be used to force or prevent inlining for a single SELECT.
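Applied to the statements above, inlining of the first assignment can be prevented with the NO_INLINE hint (a sketch based on the documented hint syntax):

```sql
tab  = SELECT A, B, C FROM T WHERE A = 1 WITH HINT( NO_INLINE );
tab2 = SELECT C FROM :tab WHERE C = 0;
```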

A further option to prevent inlining is to put the code into a sequential execution block. Statements inside a sequential execution block (BEGIN SEQUENTIAL EXECUTION … END;) are not inlined and are executed sequentially. Forcing sequential execution prevents parallelization!
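Applied to the same statements, a sequential execution block looks like the following sketch; both assignments are executed one after the other and are not inlined:

```sql
BEGIN SEQUENTIAL EXECUTION
  tab  = SELECT A, B, C FROM T WHERE A = 1;
  tab2 = SELECT C FROM :tab WHERE C = 0;
END;
```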

The use of imperative SQLScript logic also prevents the optimizer from inlining the single SQL statements. The recommendation is therefore to use declarative SQLScript logic to implement SQLScript procedures in the BW transformation context, whenever the logic allows it.

Use of imperative SQLScript logic

The procedure's SQL code is translated into L code if control structures such as LOOPs or IF conditions, or declarations of local variables, are used; see Figure 9.1.

 

Figure 9.1: Resulting CeLjitPop for converted SQL Script code to L

Use

  • hints
  • the sequential execution block and
  • imperative SQLScript logic

with care!

Sometimes it makes sense to test whether one of these features improves performance.

 

9.1.3 Concatenate in SQL Script within an AMDP Method (Added on 08/03/2017)

The following statement

SELECT 'A' || ' ' || 'B' || ' ' || 'C' FROM <any table>;

within an AMDP method leads to the result 'ABC'.

The expected blanks are missing because, in ABAP, trailing blanks in character literals are ignored, so the literal ' ' becomes an empty string. To get the expected result it is necessary to insert the space using the CHAR function (string).

The following statement

SELECT 'A' || CHAR(32) || 'B' || CHAR(32) || 'C' FROM <any table>;

delivers the expected result 'A B C'.

 

9.1.4 Keep internal table small

Keeping the internal tables small is a general recommendation and means: always check your WHERE conditions twice. While processing the calculation scenario, the calculation engine has to materialize some calculation views. Materializing views means allocating memory. From a performance and memory allocation perspective, we recommend reducing the data volume as early as possible, see section 9.1.5 »Use of DISTINCT«. Internally, all SAP HANA engines try to push filter conditions down as close as possible to the sources.
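As a sketch of reducing the data volume as early as possible (the table names and the filter condition are hypothetical):

```sql
-- Late filter: the join is executed on the full data volume first.
lt_big    = SELECT src."MATERIAL", src."AMOUNT", txt."TEXT"
              FROM "SOURCE_TABLE" AS src
             INNER JOIN "TEXT_TABLE" AS txt
                ON src."MATERIAL" = txt."MATERIAL";
lt_late   = SELECT * FROM :lt_big WHERE "AMOUNT" > 1000;

-- Better: filter before the join, so less data is materialized.
lt_small  = SELECT "MATERIAL", "AMOUNT"
              FROM "SOURCE_TABLE"
             WHERE "AMOUNT" > 1000;
lt_result = SELECT src."MATERIAL", src."AMOUNT", txt."TEXT"
              FROM :lt_small AS src
             INNER JOIN "TEXT_TABLE" AS txt
                ON src."MATERIAL" = txt."MATERIAL";
```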

 

 

9.1.5 Use of DISTINCT

A join can multiply the number of resulting rows, and when only a subset of the whole width is selected, a DISTINCT can reduce the resulting number of rows significantly.

The customer code below results in an internal table with more than 700,000,000 rows.

(inTab ~ 16,000,000 rows)

L_ITAB2 = 
  SELECT "BPARTNER", 
         "BP_EXTERN" 
    FROM :L_ITAB1   
   INNER JOIN :inTab      
      ON :L_ITAB1."BP_EXTERN" = :inTab."BP_EXTERN"   
   WHERE ...;

By adding DISTINCT to the statement, the number of rows can be reduced to 1,300.

L_ITAB2 = 
  SELECT DISTINCT "BPARTNER", 
                  "BP_EXTERN"
    FROM :L_ITAB1    
   INNER JOIN :inTab
      ON :L_ITAB1."BP_EXTERN" = :inTab."BP_EXTERN"
   WHERE ...;

9.1.6 Potential pitfall at UNION / UNION ALL

The following aspects should be considered when using the UNION operator.

  • The structure of the first SELECT statement defines the column names of the resulting structure (table)
  • The UNION operator matches the columns from the involved SELECT statements by position and NOT by name
  • If the data types of a column differ between the SELECT statements, the HANA engine tries to determine an appropriate common data type

The individual SELECTs in a UNION can be based on a persistent database table or an internal SQLScript table (iTab). When combining several SELECT statements into a new result table, one must pay attention to the column order of every single SELECT statement.

Figure 9.2 shows in (1) and (2) two table definitions with the same columns but the column order is different. (3) shows a structure definition for the output parameter of the procedure in (4).

The first UNION in (5) uses the * operator instead of an explicit column list (see comments). The column names of the resulting table are based on the columns ("CHA_01", "CHA_02", "KYF_01", "KYF_02", "RECORD") from the first SELECT statement. The data types of the third (KYF_01 / RECORD) and the fifth (RECORD / KYF_02) columns of the two SELECT statements differ.
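Because the UNION operator matches by position, an explicit and identically ordered column list in every SELECT avoids the pitfall. A sketch using the column names from Figure 9.2 (the table names are hypothetical):

```sql
outTab = SELECT "CHA_01", "CHA_02", "KYF_01", "KYF_02", "RECORD"
           FROM "TABLE_1"
         UNION ALL
         SELECT "CHA_01", "CHA_02", "KYF_01", "KYF_02", "RECORD"
           FROM "TABLE_2";
```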

Figure 9.2: UNION without and with explicit field list

9.1.7 Pitfalls in field routines

The runtime behavior of AMDP field routines differs from the runtime behavior of ABAP field routines.

The processing logic for an ABAP field routine is based on an external loop over the source package; for each row, the field routine is called with single values. The input fields of an ABAP field routine are single-value (scalar) variables. An ABAP field routine can consume several single values, depending on the field routine definition, and it always returns (EXPORTS) exactly one single-value field.

The processing logic for an AMDP field routine is based on all values of the selected input columns of the source package that is currently being processed. That means, when the AMDP field routine is called, all values from the selected input fields are handed over in one call. In Figure 9.4 only one source field is selected; therefore, the table-based input parameter of the AMDP routine contains one data column. In addition, the input table contains further technical columns.

Figure 9.4: Field Routines Definition

The transformation process therefore expects an output table from the AMDP field routine with the same number of rows as the input table!

To get a better understanding of why this is so important, I'll provide some background information about the runtime processing.

A field based AMDP routine is embedded in a Calculation Scenario. For more details read through HANA based Transformation (deep dive).

Figure 9.5 shows the internal process of an AMDP field routine inside the calculation scenario. After extracting the data from the source object, the start routine is called (if a start routine exists). In the next step, the transformation process splits the dataset into several parts. To illustrate this, Figure 9.5 is based on a transformation with two AMDP field routines. The data package is therefore split into three parts: one part for each field routine, and a third part for the other transformation rules such as formulas or direct assignments.

This enables the underlying engine to parallelize the transformation process.

After processing the field routines and the other rules, it is necessary to bring the results back together. To do so, all three data packages must be brought into the same order. Each data package contains two technical columns for this purpose. The calculation scenario contains an order projection for each processing thread to align the data of each thread in the same order.

The vertical union brings the separated data packets back together for further processing.

Figure 9.5: Field Routines and Vertical Union Part 1

Regarding the split packages, there are two prominent difficulties:

  1. Changing the number of rows within an AMDP field routine
  2. Changing the sorting order of the data column within an AMDP field routine

9.1.7.1 Changing the number of rows within an AMDP field routine

The following code example is supposed to set the attribute vendor (TK_VENDOR) if there is a vendor for the material (TK_MAT). The last part of the WHERE condition (and mat."/BIC/TK_VENDOR" <> '') filters out records without a vendor. This can reduce the number of rows.

 

METHOD S0001_G01_R11 BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY USING /BIC/PTK_MAT.

-- target field: TK_VENDOR
  outTab =
    select mat."/BIC/TK_VENDOR",
           intab.record,
           inTab.sql__procedure__source__record
     from :inTab as inTab
     inner join "/BIC/PTK_MAT" as mat
       on inTab."/BIC/TK_MAT" = mat."/BIC/TK_MAT"
     where mat.objvers = 'A'
       and mat."/BIC/TK_VENDOR" <> '';
  errorTab =
    select '' as ERROR_TEXT,
           '' as SQL__PROCEDURE__SOURCE__RECORD
     from dummy
     where dummy <> 'X';

ENDMETHOD.

Listing 9.1: Example of a field routine that removes records

Figure 9.6 shows a further example routine that may remove rows. (1) shows the whole data package we get from the source object or from the start routine. (2) shows the package split. Here we can see that each package gets two technical columns, SQL__PROCEDURE__SOURCE__RECORD and RECORD, in addition to the existing data column(s). For further information on these two fields, see my blog HANA based BW Transformation – New features delivered by 7.50 SP04.

(3) shows the field routine source code and the other transformation rules. The field routine logic does not write all rows from the input table into the output table. The row that was not transferred is crossed out in (4). The resulting data package is shown in (5). We get a NULL value, as seen in (6), due to this pitfall in the code.

In addition to the NULL value, part of the data package is shifted: the subsequent values are moved up by the missing row.

NULL values are not allowed in BW, and therefore the DTP run ends with an error.

Figure 9.6: Field Routines and Vertical Union Part 2

In both cases, Listing 9.1 and Figure 9.6, the misbehavior may occur, but it does not have to. Often the test data in the development system are well prepared, which means the error does not surface there.

Figure 9.7 provides an example of how to avoid the pitfall: the unwanted values are replaced by an initial value instead of being filtered out.
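Following the idea of Figure 9.7, the routine from Listing 9.1 can keep the row count stable by using a LEFT OUTER JOIN and replacing missing vendors with the initial value instead of filtering rows out. This is a sketch of the approach, not the original listing:

```sql
outTab = SELECT COALESCE( mat."/BIC/TK_VENDOR", '' ) AS "/BIC/TK_VENDOR",
                inTab."RECORD",
                inTab."SQL__PROCEDURE__SOURCE__RECORD"
           FROM :inTab AS inTab
           LEFT OUTER JOIN "/BIC/PTK_MAT" AS mat
             ON inTab."/BIC/TK_MAT" = mat."/BIC/TK_MAT"
            AND mat."OBJVERS" = 'A'
            AND mat."/BIC/TK_VENDOR" <> '';
```

Every input row appears exactly once in the output; rows without a vendor simply receive the initial value ''.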

Figure 9.7: Field Routines and Vertical Union Part 3

9.1.7.2 Changing the sorting order of the data column within an AMDP field routine

Within a field routine it is important to keep the sort order of the data columns. Therefore, do not change or delete the values of the two technical columns SQL__PROCEDURE__SOURCE__RECORD and RECORD. It is also not allowed to add or delete rows.

Figure 9.8 shows how data can be sorted and how it must not be sorted. (1) shows the input data with one data column (Col 5). If it is necessary to sort the data by Col 5, it is important to keep the order stable with regard to the two technical fields SQL__PROCEDURE__SOURCE__RECORD and RECORD.

(2) is a negative example showing a potential source of error when changing the sort order. The step after the field routine sorts the field routine data (outTab) by the two technical columns. In case (2), the resulting order is not deterministic.

(3) shows how the data can be sorted by the data column while keeping the sort order stable with regard to the technical fields.
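A sketch of both variants, using hypothetical column names based on Figure 9.8 ("COL5" stands for the data column):

```sql
-- Not allowed (sketch): renumbering the technical columns after sorting
-- decouples the data values from their original rows, as in case (2).
-- outTab = SELECT "COL5",
--                 ROW_NUMBER() OVER ( ORDER BY "COL5" ) AS "RECORD",
--                 ...

-- Allowed (sketch): sorting the rows as a whole keeps each data value
-- paired with its original technical columns, as in case (3).
outTab = SELECT "COL5",
                "RECORD",
                "SQL__PROCEDURE__SOURCE__RECORD"
           FROM :inTab
          ORDER BY "COL5";
```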

Figure 9.8: Changing the sorting order of the data column within an AMDP field routine

Changing the sorting order of the data column causes errors when merging the individual packages.

This kind of error is very difficult to identify. It does not lead to a technical error; only the values of one or more columns end up in the wrong rows.

9.1.7.3 Recommendations regarding AMDP field routines

We do not recommend using field routines to implement complex logic such as reading additional data or implementing complex business logic.

The general purpose of, and our recommendation for, field routines is to implement logic which cannot be implemented as a formula. Some formulas can prevent a push down, see note 2423215 – SAP HANA Processing: Non Supported Formulas in HANA Transformations. Such formulas can be implemented as field routines instead.

 

… coming soon…


6 Comments


  1. Sebastian Gesiarz

    Hello Torsten,

     

    Looking forward to the continuation. Keep up the good work!

    It would be great if you could share some loop & lookup code samples (like in ABAP with field symbols).

     

    Best regards,

    Sebastian

  2. Florian Birnkraut

    Hello Torsten,

    you mentioned in your blog that it is possible to join the :inTab with an DSO, like the following:

    outTab = SELECT  intab."DOC_NUMBER",
     intab."RECORDMODE",
     FROM :inTab as intab
     inner join "_SYS_BIC"."system-local.bw.bw2hana/FSDAHD01" as sd
     and  sd."DOC_NUMBER" =  intab."DOC_NUMBER";

    We have here the problem that (can be seen in Plan PlanViz) the system will extract all data from

    “_SYS_BIC”.”pr.sd/DPRSDL016″, that leads to high usage of memory.

    If we don’t use the generated view (in DSO Externe SAP-HANA-Sicht für Reporting) and instead use the active table from the dso “/BIC/AFSDAHD01”, the system extracts only the data records which are needed and results in less (appr. only 20% of scenario before) usage of memory.

    We use a classical DSO, the generated view is an analytical view.

     

    Any suggestions?

    Best Regards,

    Florian

    1. Torsten Kessler Post author

      HI,

      it’s okay to use the active table here.

      as long as we do not offer a SQL Script API to determinate the active DSO table it is okay to use the hard coded name.

      but keep in mind if you later migrate to BW/4 the classic DSO will be migrate to an advanced DSO, that means also the active table name here will change.

      Our up coming migration tool will point such coding blocks, but from current point of view the coding must be handle by the customer.

      Torsten

      1. Florian Birnkraut

        Hi Torsten,

        Firstly thanks for your answer.

        We plan later also to use an nls (license and system exist but DAPs aren’t created), the recommendation is here to use the generated view, because otherwise the archived date would not be read.

        For me the behavior of the hana database seems to be like a bug.

        Normal behavior should if I join table A with table B and table A has 1000 entries (which are also the different keys) and B has 100 mio entries with 2000 entries which are equal to table A (according to the join statement key) that the database only extracts the 2000 entries from table B and not the complete table.

        This seems to be work correctly if we use the active table of the dso but if we use the generated view it doesn’t fit.

        Any other suggestions…we opened also a ticket…

        Regards,

        Florian

  3. Anand Kumar Kotha

    Hi Torsten,

    Nice document on HANA based BW Transformation – guidelines and recommendations. Keep up the good work!!. Looking forward your continuation in other topics too 🙂

     

    BR/

    Anand Kotha

     

     

