Performance improvement hints 3: internal table – fill and read
Sometimes I wonder which type of internal table I should use: standard tables, sorted tables, or hashed tables. And how should I fill them: append to a standard table and then sort, or via read table … binary search followed by insert … index sy-tabix? So I created a test program to investigate this.
The different ways to fill an internal table
append & sort
This is the simplest one. I do appends on a standard table and then a sort.
data: lt_tab type standard table of ...
do n times.
  ls_line = ...
  append ls_line to lt_tab.
enddo.
sort lt_tab.
The appends here are fast, but the sort at the end is slow – so it will be interesting to see how this compares with the next variant.
read binary search & insert index sy-tabix
Here I also use a standard table, but I do a binary read to find the correct insert index, so the table is kept sorted as well.
data: lt_tab type standard table of ...
do n times.
  ls_line = ...
  " binary search: if the key is not found, sy-tabix holds the insert position
  read table lt_tab transporting no fields with key ... binary search.
  if sy-subrc <> 0.
    insert ls_line into lt_tab index sy-tabix.
  endif.
enddo.
sorted table with non-unique key
Here I used a sorted table with a non-unique key and did inserts…
data: lt_tab type sorted table of ... with non-unique key ...
do n times.
  ls_line = ...
  insert ls_line into table lt_tab.
enddo.
sorted table with unique key
The coding is the same, except that the sorted table now has a unique key.
data: lt_tab type sorted table of ... with unique key ...
do n times.
  ls_line = ...
  insert ls_line into table lt_tab.
enddo.
hashed table
The last one is the hashed table (always with unique key).
data: lt_tab type hashed table of ... with unique key ...
do n times.
  ls_line = ...
  insert ls_line into table lt_tab.
enddo.
measure
So I wrote a little program to check how long the filling takes, with line counts from 10 up to 10000. And ah uh oh. All variants take nearly the same time!?
(measurement is always done five times)
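Just to illustrate what the test does, here is a minimal timing sketch using get run time field. The row type ty_line, the key field id and the fixed line count are placeholders of my own, not the exact coding of the test program.
* minimal timing sketch – placeholder row type and key field,
* not the exact test program
types: begin of ty_line,
         id   type i,
         text type string,
       end of ty_line.
data: lt_tab  type sorted table of ty_line with non-unique key id,
      ls_line type ty_line,
      lv_t0   type i,
      lv_t1   type i,
      lv_diff type i.
get run time field lv_t0.               " runtime in microseconds
do 10000 times.
  ls_line-id = sy-index.
  insert ls_line into table lt_tab.
enddo.
get run time field lv_t1.
lv_diff = lv_t1 - lv_t0.
write: / 'fill time in microseconds:', lv_diff.
The same frame can be put around each of the five fill variants; only the declaration of lt_tab and the append/insert statement change.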
This is funny. The result seems to indicate that no table type is to be preferred over the others. But so far we have only looked at filling a table – not at reading it. So let's have a look at that.
reading the tables
I will now read the tables in the “best” way, which means that I read the standard table with a binary search, the sorted tables with the table key and the hashed table with the table key. Now let us see what 1000 reads on the tables from above cost.
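For clarity, this is roughly what the three read variants look like; the row type and the key field id are again placeholders of mine, not the exact test coding.
* read sketch – placeholder declarations, not the exact test coding
types: begin of ty_line,
         id   type i,
         text type string,
       end of ty_line.
data: lt_std    type standard table of ty_line,
      lt_sorted type sorted table of ty_line with non-unique key id,
      lt_hashed type hashed table of ty_line with unique key id,
      ls_line   type ty_line.
* standard table (sorted beforehand): explicit binary search
read table lt_std into ls_line with key id = 42 binary search.
* sorted table: key access, the binary search is done implicitly
read table lt_sorted into ls_line with table key id = 42.
* hashed table: key access via the hash function, (nearly) constant cost
read table lt_hashed into ls_line with table key id = 42.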
(measurement is always done five times)
The differences are not that big, as you can see, but a trend is visible. The confusing thing is that there is a difference between the two standard tables – if you can explain this to me, please do.

Maybe you can do further research along the same lines on the effective use of 'LOOP... ENDLOOP' in the case when you have multiple records for one particular key in the internal table.
I normally use three different methods and am sometimes unsure which one will work best in a particular situation. They are as follows, assuming the internal table I_COEP is already sorted:
(1) loop at i_coep where kokrs = i_cobk-kokrs
                     and belnr = i_cobk-belnr.
      ......
    endloop.
(2) read table i_coep with key kokrs = i_cobk-kokrs
                               belnr = i_cobk-belnr
                               binary search.
    if sy-subrc = 0.
      loop at i_coep from sy-tabix
           where kokrs = i_cobk-kokrs
             and belnr = i_cobk-belnr.
        .......
      endloop.
    endif.
(3) read table i_coep with key kokrs = i_cobk-kokrs
                               belnr = i_cobk-belnr
                               binary search.
    if sy-subrc = 0.
      loop at i_coep from sy-tabix.
        if ( i_coep-kokrs ne i_cobk-kokrs or
             i_coep-belnr ne i_cobk-belnr ).
          exit.
        endif.
      endloop.
    endif.
About the three different loop...endloop types you wrote: I found that there is no difference between
loop at i_coep where kokrs = i_cobk-kokrs and belnr = i_cobk-belnr. ... endloop.
and this one
loop at i_coep. if i_coep-kokrs = i_cobk-kokrs and i_coep-belnr = i_cobk-belnr. ... endif. endloop.
But you are right that 2 and 3 are a good way, because you jump directly to the correct point. And because the "loop at ... where" and the "loop ... if ..." perform the same, you can choose between 2 and 3 by the number of characters you have to type 🙂
Kind regards
Harry
There is a difference between the Loop...Endloop methods 2 and 3. In method 2, it first finds the correct starting point, but from there it goes on to the end of the table.
In method 3, it not only finds the correct starting point but also exits as soon as it encounters an I_COEP record with a document number different from COBK-BELNR. I guess this should give a significant time improvement over method 2 for large internal tables.
Just something additional for you: I did some more tests and you can find the results in the next blog entry: Performance improvement hints 4: loop at itab where...
Kind regards
Harry
Would it be faster?
Basically, they are both index access.
When dealing with large tables, I always use method (3), because it throws me out as soon as the condition is not true anymore (as opposed to 2, where useless checks are performed until EOT).
For smaller tables, method (1) could be ok even without sorting the table. 10 index accesses plus 10 IFs may be faster than the sort itself (not to mention the READ WITH KEY, even with BINARY SEARCH).
Cheers,
Dragos
I think the varying times for reading from the hashed table are just normal deviations. I made some further checks on hashed tables, this time with sizes from 1000 to 100000 lines – see what the result is:
I think this shows that there is no real problem, except that measurements are always problematic 🙂
Thanks for your post.
I just have one question about reading the internal tables.
The trend for all the table types seems to be the same. But then the graph shows that a standard table (append and then sort) is better than a sorted table for reads. I believed it would be the other way around.
Can we have this analysis extended to a few more records and see? Don't you think the behaviour could be different with maybe a lakh (100,000) records?
Regards,
Prashanth.
Kind regards
Harry
I would like to ask you the same question that Prashanth has asked.
Is it possible that a standard table could be better than a sorted table for reads?
Regards,
Cristiano Wagner.