As you may already be aware, I'm currently into Business Intelligence. The fun part for me is attempting to make extractors in as compact and performant a way as possible. The other day I had an extractor where I needed to select all the points for all the examination periods of a student (or all students) in an academic year.
Once you get to know which tables you need to use, it is – in my opinion – rather simple:

[image: the tables involved]

Simple doesn't mean easy though, nor is it a reason to act hastily. There is a major snag somewhere when it comes to performance. When I first created the extractor, it was still running after 14 hours. And that was for a mere 298,194 records. What was the problem? Not the initial query joining HRP1001 with HRPADP506. It was this other query:

     SELECT gen~gradescale gen~gradesym zittd~perid
       INTO (gradescale, gradesym, perid)
       FROM PIQDBAGR_ASSIGNM AS assignm
       INNER JOIN ZPIQDBAGR_ZITTD AS zittd
         ON assignm~agrid = zittd~upagrid
       INNER JOIN PIQDBAGR_GEN AS gen
         ON zittd~agrid = gen~agrid
       WHERE assignm~modreg_id = wa-MODREG_ID
         AND zittd~agrtype = '0105'.
       ...
     ENDSELECT.

                

As such it's no rocket science, if you're not afraid of creating a join on more than two tables. But for some reason this query was so sluggish that the engine didn't run well anymore. All the rules were followed: keys and properly defined indexes were used where needed. If we look at the query with ST05 and an explain plan:

[image: ST05 explain plan of the join]

You'll see that the cost is very high, because it appears to perform a full table scan on PIQDBAGR_GEN, despite the defined keys and indexes.

Even an Oracle index didn't help. It wasn't such a burden if only a couple of students were involved, but we need to execute this query for all the courses and all the students. The fun thing is that one can force it to use the indexes in a strange way:

         

   SELECT zittd~perid
     INTO perid
     FROM PIQDBAGR_ASSIGNM AS assignm
     INNER JOIN ZPIQDBAGR_ZITTD AS zittd
       ON assignm~agrid = zittd~upagrid
     INNER JOIN PIQDBAGR_GEN AS gen
       ON zittd~agrid = gen~agrid
     WHERE assignm~modreg_id = wa-MODREG_ID
       AND zittd~agrtype = '0105'.
     ...
   ENDSELECT.

Which gives this as a result (the only change is the field list: the columns from PIQDBAGR_GEN are no longer selected):

[image: explain plan after the change]

         

It's still a high cost, but at least it doesn't do a full table scan anymore.
             

         

Don't use this at work.
                     
So what is the solution? Well, after reading a lot of forum posts, weblogs and external sources, I didn't find any solution. I did everything that should be done. A colleague of mine suggested doing it the 'simple' way: nested queries. The thought alone makes me shudder. First of all, I find writing nested queries the same as writing spaghetti code – like in the old days – with a lot of GOTOs. No, I'm not a snob being sniffy about everything which is not OO – I don't write much OO anyway. Furthermore, I read everywhere that nested queries are only good for small amounts of data and are performance killer number one.
Having said that, and having no alternatives, I gave it a try and wrote nested selects (with some trickery to make them more performant):
           

         

SELECT SINGLE agrid INTO agrid
  FROM PIQDBAGR_ASSIGNM WHERE modreg_id = wa-MODREG_ID.
IF sy-subrc EQ 0.
  " the right-hand agrid in the WHERE clauses below is the ABAP host variable
  SELECT perid agrid INTO (perid, agrid) FROM ZPIQDBAGR_ZITTD
    WHERE upagrid = agrid AND agrtype = '0105'.
    SELECT SINGLE gradescale gradesym
      INTO (gradescale, gradesym)
      FROM PIQDBAGR_GEN WHERE agrid = agrid.
    IF sy-subrc EQ 0.
      ...
    ENDIF.
  ENDSELECT.
ENDIF.

And guess what? It ran very smoothly. The extractor only needed 26 minutes this time. Compared to the unfinished job after 14 hours, that's a huge improvement.

Conclusion
Is there one? I don't know. Theory doesn't match practice? Bad interaction between SAP and Oracle? Bad Oracle drivers? I really don't know. The only moral of this story is to try everything when the recommended solution doesn't work out.


4 Comments


  1. Arun Sambargi
    Hello Eddy, good explanation.

    Sometimes weird and not-so-recommended practices often help us resolve issues.

    Consider my issue:

    I had to fetch all the Billing documents and their document flow for a particular Sales Organization and Plant.

    I got all the Orders from the table VAPMA (keys used: MATNR, VKORG, TRVOG, WERKS).

    My next task was to get the document flow for all the above orders (it can be a Sales Order, Invoice, Invoice cancellation, Debit Memo, Credit Memo, etc.).

    Now, the easiest way to get the data was to query the table VBFA (the table which will give performance issues to any SAP implementation in the long run).

    This is something I did, and believe me, it worked like a charm. Code excerpt provided and explained:

    *************************************************

    *–Get all the Orders from Index table VAPMA
      SELECT  matnr
              vkorg
              trvog
              audat
              vtweg
              spart
              auart
              kunnr
              vkbur
              vkgrp
              bstnk
              ernam
              vbeln
              posnr
              werks
              FROM vapma
              INTO TABLE it_vapma
          WHERE matnr   IN s_matnr
          AND   vkorg   IN s_bukrs       "Sales Organization
          AND   trvog   EQ '0'           "Sales documents
          AND   werks   IN s_werks.      "Plant
      IF sy-subrc EQ 0 AND it_vapma[] IS NOT INITIAL.

        SELECT  vbeln
                vbtyp
                kunnr
                FROM vbak
                INTO TABLE it_vbak
                FOR ALL ENTRIES IN it_vapma
                WHERE vbeln EQ it_vapma-vbeln.

      ENDIF.

    *–Get list of billing document types ( This is used to build the relation between a Order type and the corresponding billing document type )

      SELECT a~auart
             a~vbtyp
             b~fkart
             b~vbtyp
             b~fkarts
             FROM tvak AS a INNER JOIN tvfk AS b
             ON a~fkara = b~fkart
             INTO TABLE it_billing
              BYPASSING BUFFER.
      IF sy-subrc NE 0.
        CLEAR it_billing[].
      ENDIF.

    *–Had to add additional types due to our customization
      it_billing-vbtyp_v = 'C'.
      it_billing-vbtyp_n = 'N'.
      APPEND it_billing.
      CLEAR  it_billing.

      it_billing-vbtyp_v = 'K'.
      it_billing-vbtyp_n = 'S'.
      APPEND it_billing.
      CLEAR  it_billing.

      SORT it_billing BY vbtyp_v vbtyp_n.
      DELETE ADJACENT DUPLICATES FROM it_billing COMPARING vbtyp_v vbtyp_n.

      LOOP AT it_billing.
        CLEAR : it_vbak_temp[].

    *– Get all the order for a particular type
        LOOP AT it_vbak INTO wa_vbak WHERE vbtyp EQ  it_billing-vbtyp_v.
          wa_vbak_temp-vbeln = wa_vbak-vbeln.
          APPEND wa_vbak_temp TO it_vbak_temp.

        ENDLOOP.

    *– Query VBFA with Order filtered, Document category of preceding SD document and Document category of subsequent document, Select inside a loop is not recommended and that too on a VBFA

        CHECK sy-subrc IS INITIAL.
        SELECT  vbelv
                posnv
                vbeln
                posnn
                vbtyp_n
                rfmng
                vbtyp_v
                erdat
                erzet
                matnr
                bwart
                sobkz
        APPENDING TABLE it_vbfa
        FROM vbfa
        FOR ALL ENTRIES IN it_vbak_temp
        WHERE vbelv       = it_vbak_temp-vbeln
        AND vbtyp_v EQ it_billing-vbtyp_v
        AND vbtyp_n EQ it_billing-vbtyp_n.

      ENDLOOP.

      CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
        EXPORTING
          text = 'Fetching Billing Documents'(024).

      CLEAR : it_vbrp,
              it_vbrp[].

    *–Delete the duplicate entries in vbfa table.
      CLEAR: it_vbfa_temp,
             it_vbfa_temp[].

      it_vbfa_temp[] = it_vbfa[].
      SORT it_vbfa_temp BY vbeln.
      DELETE ADJACENT DUPLICATES FROM it_vbfa_temp COMPARING vbeln.

    *—Now get the Billing Doc details

      IF it_vbfa_temp[] IS NOT INITIAL.

        SELECT vbeln
               posnr
               fkimg
               netwr
               vgbel
               vgpos
               matnr
                 arktx
                 erdat
               erzet
               wavwr
               werks
               FROM vbrp
               INTO TABLE it_vbrp
               FOR ALL ENTRIES IN it_vbfa_temp
               WHERE vbeln EQ it_vbfa_temp-vbeln
               AND   matnr IN s_matnr.
        IF sy-subrc NE 0.
          CLEAR it_vbrp[].
        ENDIF.

      ENDIF.

    ************************************************

    Comments welcomed.

    Thanks,
    Arun

    1. Robert Buchta
      Hello Arun,

      good coding – it matches my experience. Sometimes the SQL optimizer thinks it's fine to do a full table scan even when all index key fields are available. I tried several ways to get good performance – the best (and fastest!) solution in most cases was to exclude that table from the join and read it separately with "for all entries" – just as you did.
      But take caution with this method if you want the resulting data in one table (e.g. in Arun's example, if you want the data from VBFA and VBRP combined in one itab). Data loss could occur if you do a simple "loop over itab1" and add data from itab2 with "read itab2" when the tables don't have only 1:1 relations. If unsure, better do nested loops over both tables, appending to a third table (caution: this can kill performance completely if done without proper keys).
      Oh, and I always compare the results of the "full join" variant to the "optimized" variant... which is not a bad thing, since "for all entries" can sometimes be a little tricky.
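      A minimal sketch of the nested-loop combine described above, using the itabs from Arun's excerpt (the combined structure wa_combined/it_combined and the vbeln match field are hypothetical, assumed for illustration):

```abap
* Sketch: combining rows from two itabs that were filled separately
* with FOR ALL ENTRIES. Nested loops handle 1:n relations safely;
* a single READ TABLE per outer row would only pick up the first
* matching row of it_vbrp and silently drop the rest.
LOOP AT it_vbfa INTO wa_vbfa.
  LOOP AT it_vbrp INTO wa_vbrp WHERE vbeln = wa_vbfa-vbeln.
    MOVE-CORRESPONDING wa_vbfa TO wa_combined.
    MOVE-CORRESPONDING wa_vbrp TO wa_combined.
    APPEND wa_combined TO it_combined.
  ENDLOOP.
ENDLOOP.
```

      The "proper keys" caveat applies here: on large itabs, a LOOP ... WHERE over a standard table scans every row, so sort it_vbrp by vbeln (or declare it as a sorted table with vbeln as key) to keep the inner loop cheap.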

  2. Bertrand LAFOUGERE
    I got a similar issue on BI (in an update rule, as far as I remember).

    The query was using a primary key, but an SQL trace showed that a full scan was executed.

    After investigation, it turned out that the database statistics (at the Oracle level) were not being run! After that, everything worked like a charm.

    Eddy, are you sure that your database statistics are being run?

