
First of all I would like to say that this entry exists partly because Sharad Agrawal replied to my previous blog entry on “internal table – fill and read”. I had thought about this topic before, but after Sharad Agrawal asked about it I decided it would be a good idea to check “loop at itab where” more thoroughly.

Simple things first – no replacement

Replacement in sorted tables

Some additional words on READ ... BINARY SEARCH in case you have a sorted table: Did you notice that I didn’t jump to a line with a fully specified key? I jumped to a line which does not exist. If you call READ ... BINARY SEARCH with a key which does not exist, SY-TABIX points to the line where the entry would have to be inserted. So if you want to jump to the first entry in a table which has certain attributes, you can use READ TABLE ... WITH KEY sortfield1 = v1 sortfield2 = SPACE BINARY SEARCH (of course SPACE only works if it is a character field...).
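A minimal sketch of this positioning trick (table and field names are illustrative, not from the measurements above):

```abap
* Sketch: position on the first entry for a given sortfield1 value,
* then loop from there. lt_itab must already be sorted by
* sortfield1, sortfield2.
TYPES: BEGIN OF ty_line,
         sortfield1 TYPE i,
         sortfield2 TYPE c LENGTH 10,
       END OF ty_line.

DATA: lt_itab TYPE STANDARD TABLE OF ty_line,
      ls_line TYPE ty_line.

READ TABLE lt_itab INTO ls_line
     WITH KEY sortfield1 = 2
              sortfield2 = space
     BINARY SEARCH.
* Even if sy-subrc <> 0, sy-tabix points to the position where such
* an entry would be inserted, i.e. to the first entry with
* sortfield1 = 2 (if one exists).
LOOP AT lt_itab INTO ls_line FROM sy-tabix.
  IF ls_line-sortfield1 <> 2.
    EXIT.  "we have left the matching block
  ENDIF.
  "... process matching line ...
ENDLOOP.
```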


Corrections! (after publishing this blog and learning from Sharad Agrawal)


  1. Former Member
    Hi Harry,
    Thank you for writing an excellent weblog again. I have two comments to make.

    (1) As per my understanding, you can always use a partial key (starting from the first field) for a binary search on a standard internal table. You don’t have to supply the remaining part of the key as space.

    (2) The last method might be even better if you check whether sy-subrc EQ 0 after reading the internal table but before entering the LOOP ... ENDLOOP. If we have already identified that the internal table does not contain the records we want, there is no benefit in looping over it.

    What are your thoughts ?

    1. Harry Dietz
      Post author
      (1) I did not try to omit fields from the read, because I was told that in this case the read jumps somewhere inside the matching area. If you have a table that contains, say, the entries 2-1, 2-2 and 2-3 and you would use “read ... with key1 = 2 binary search”, sy-tabix would point to one of 2-1, 2-2, 2-3, but it would not be defined to which one. I never tried this myself before, so I believed what I was told. UNTIL NOW. I tried it and you are fully correct: if the last key fields are omitted, sy-tabix points to the _first_ matching entry (I verified this with a simple program).

      (2) And because I didn’t know that (1) is correct, I didn’t know that (2) is possible. Now I know better and made some additional measurements...
      I will add this info to the blog and change the coding.
      The difference in the times is really small for match rates above 10%: at 0 matches I got an improvement from around 600 ms to 200 ms, and at 10% matches from 4400 ms to 4300 ms, which is within normal deviation. So the statistics don’t change much.
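      Put together, the improved variant suggested above might look like this (a sketch only; names are illustrative, and lt_itab is assumed to be sorted by sortfield1):

```abap
* Partial-key READ ... BINARY SEARCH positions on the FIRST matching
* entry; if sy-subrc <> 0, no entry matches and the loop is skipped.
READ TABLE lt_itab INTO ls_line
     WITH KEY sortfield1 = 2        "remaining key fields omitted
     BINARY SEARCH.
IF sy-subrc = 0.
  LOOP AT lt_itab INTO ls_line FROM sy-tabix.
    IF ls_line-sortfield1 <> 2.
      EXIT.  "end of the matching block reached
    ENDIF.
    "... process matching line ...
  ENDLOOP.
ENDIF.
```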

      1. Another great blog, I hope that sometime in the (near) future you do a little summary of all of your present and upcoming articles, it would be just great to have your weblogs linked from a central article as a nice reference!
  2. Former Member
    Excellent piece of work. I tend to use LOOP AT ... WHERE on SORTED tables, but typically use STANDARD tables and read them with BINARY SEARCH, followed by LOOP AT ... FROM, as I always thought this gives the best performance. It also means that, in a multi-key table, if you are using a subset of the key fields in the read, the table only needs to be sorted on that key-field subset. When building tables I generally use READ ... BINARY SEARCH (as it always positions on the next highest key) and INSERT ... INDEX sy-tabix, which removes the sorting requirement; very handy on very large tables.

    Using the techniques in this blog you should be able to achieve very impressive processing savings on large volumes of data. A nested LOOP AT ... FROM within another LOOP will save much more time than a LOOP AT ... WHERE (unless the table is of TYPE SORTED).
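    The “insert at the right place” technique mentioned above can be sketched as follows (names are illustrative): because READ ... BINARY SEARCH leaves the insertion position in sy-tabix even when the key is not found, the table stays sorted without an explicit SORT.

```abap
* Keep a standard table sorted while filling it:
* READ ... BINARY SEARCH finds the position, INSERT ... INDEX
* inserts there. No SORT statement needed.
DATA: lt_itab TYPE STANDARD TABLE OF ty_line,
      ls_new  TYPE ty_line.

ls_new-sortfield1 = 2.
ls_new-sortfield2 = 'B'.

READ TABLE lt_itab TRANSPORTING NO FIELDS
     WITH KEY sortfield1 = ls_new-sortfield1
              sortfield2 = ls_new-sortfield2
     BINARY SEARCH.
IF sy-subrc <> 0.
  "sy-tabix holds the position where the entry belongs;
  "inserting at lines( lt_itab ) + 1 simply appends.
  INSERT ls_new INTO lt_itab INDEX sy-tabix.
ENDIF.
```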



  3. Dragos Mihai Florescu
    I would say we should also consider the overhead of transporting data from the table into the work area (or of initializing the field symbol).
    LOOP AT ... WHERE saves the system this overhead. This is why, when the matching records represent only a small percentage, your measurements tend to favour LOOP AT.
