One of the most important tools for investigating slow-running SQL statements is the well-known ST05 SQL trace.

The idea is that the ABAP database interface notes down which SQL statement it sends to the database, how long it took to get the result, which selection criteria were used, and so on.

Obviously, a key point here is that the trace contains exactly the SQL that was actually sent to the database.

Recently I came across a peculiarity that must be considered when working with ST05 trace results on MaxDB databases. Otherwise one ends up with totally wrong execution paths and thus with wrong conclusions about how to improve performance.

Let’s look at the following SQL Explain plan:

[Screenshot: Explain plan from ST05 trace]

We see that the estimated costs are quite high, but that is not what I want to point out today. Instead, keep an eye on the execution path: it's a RANGE CONDITION FOR KEY that uses four key columns (MANDT, BZOBJ, KALNR, KALKA).

Next, as another common step in performance analysis, we take the statement, have ST05 fill in the ?'s with the actual variable values, and use ST05 or SQL Studio to explain the statement again (and possibly modify it):
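The full statement is not reproduced here, but as a sketch of what this step looks like (table and column names are taken from the trace above; the concrete values and the exact shape of the WHERE clause are hypothetical), ST05 turns the parameterized statement into one with literals:

```sql
-- As captured in the trace: bind variables shown as ?
SELECT *
  FROM KEKO
 WHERE MANDT = ? AND BZOBJ = ? AND KALNR IN (?, ?)
   AND KALKA = ? AND KADKY = ?

-- After letting ST05 fill in the bind values: note that KALNR and KADKY
-- come out as unquoted numbers
SELECT *
  FROM KEKO
 WHERE MANDT = '100' AND BZOBJ = '0'
   AND KALNR IN (000102040506, 000102030405)
   AND KALKA = 'Z1' AND KADKY = 20071130
```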

[Screenshot: Explain plan with filled-in literals]

Now we still see a RANGE CONDITION FOR KEY, but only two columns are used, making the plan even less efficient.

The values were taken directly from the ST05 trace. Let's double-check this:

[Screenshot: ST05 bind values]

Sure enough, the bind values are there.
Notable, however, is the fact that the values for KALNR and KADKY (the columns now missing from the used KEY COLUMN list) are NUMBER-type values.

The leading zeroes in A2/A3 and the date-like values in A5/A6 are a hint that this might not be quite right.
Let’s check the table column definition:


Surprisingly, we find that both columns are CHARACTER types.
Well, not exactly surprisingly: many numeric data types from the ABAP world are mapped to character columns in the database.
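One way to verify this yourself is to query the MaxDB catalog. As a sketch (the DOMAIN.COLUMNS view is the usual place to look up column definitions in MaxDB; the exact set of output columns may vary by version):

```sql
-- Look up the database data types of the suspicious columns
SELECT columnname, datatype, len
  FROM DOMAIN.COLUMNS
 WHERE tablename  = 'KEKO'
   AND columnname IN ('KALNR', 'KADKY')
```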

For the application and the database interface this is transparent, but for the database query optimizer it is an issue, since it cannot use non-matching data types for KEY or INDEX accesses. If we want to run or explain the statement manually, we have to take care of this ourselves.

Therefore, we have to enclose the numbers in apostrophes (single quotes) to mark them as character strings:
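For illustration (only the table and column names come from the trace; the values themselves are hypothetical), the corrected literals would look like this:

```sql
-- All literals quoted to match the CHARACTER column definitions,
-- leading zeroes and date format preserved exactly as traced
SELECT *
  FROM KEKO
 WHERE MANDT = '100' AND BZOBJ = '0'
   AND KALNR IN ('000102040506', '000102030405')
   AND KALKA = 'Z1' AND KADKY = '20071130'
```

With matching data types, the optimizer can again use all key columns for the RANGE CONDITION FOR KEY.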

[Screenshot: Explain plan with corrected literals]

And here we are, back at the original execution plan.

Now the real performance analysis can begin!

P.S. Although the KADKY column is not used for the execution plan, it is still important to get its data type right. Otherwise the optimizer has no chance to estimate the selectivity of the conditions correctly.




  1. Stefan Koehler
    Hello Lars,
    let’s move together to the dark side again for a few moments :-))

    I don’t know exactly how to read the execution plan on MaxDB, but I can imagine that an index on KEKO is used for the “RANGE CONDITION FOR KEY” (something like an index range scan, I guess).

    Now if you use a mismatched data type on Oracle, you will also not be able to make use of such an index. Under specific circumstances Oracle performs an implicit data conversion, but normally it is recommended to use the correct data type or perform an explicit conversion.

    I guess that MaxDB is doing something like that too in some cases, right? But normally you should always use the correct data type.


    P.S.: Congratulations on passing the Oracle 11g Admin OCA

    1. Lars Breddemann Post author
      Hi Stefan,

      thanks for the congratulations!

      Indeed, the issue with the changed execution plans is a result of type conversion.
      And you’re right with your assumption that MaxDB cannot use any index technique on non-matching data types.

      And you’re also pretty close with your thinking concerning the main issue.
      The RANGE CONDITION FOR KEY is in fact a kind of index access, insofar as it is a B*tree access method.

      Since MaxDB stores ALL table data in B*trees (like index-organized tables in Oracle), using the primary KEY is the most efficient access method available.

      Still, the statement is very slow, and THAT is the real issue here.
      Maybe another hint will point somebody in the right direction: when using the same query but specifying only ONE value for KALNR, the query is lightning fast. This holds true for whichever of the two values you choose.
      Only when both values are put into the IN-list does the whole query become slow as hell…
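      In other words (the values and the simplified WHERE clause are hypothetical, just to illustrate the symptom):

```sql
-- Lightning fast: a single KALNR value, whichever one is chosen
SELECT * FROM KEKO
 WHERE MANDT = '100' AND KALNR = '000102040506'

-- Slow as hell: both values combined in one IN-list
SELECT * FROM KEKO
 WHERE MANDT = '100' AND KALNR IN ('000102040506', '000102030405')
```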

      WHY? 😉

