My first ABAP program was written in SAP R/3 4.7 back in 2005. It wasn’t long before I discovered the BINARY SEARCH addition to the READ command and thought it was the bee’s knees. Suddenly my programs ran so much faster and I was getting pats on the back from senior management.

Fast forward to 2020 and I hate you, BINARY SEARCH, I hate you very, very much. Why such a drastic change?

Side Effects Include Death

If you’ve worked with SAP for some time, you must know about backwards compatibility. It means that when SAP comes up with new ABAP syntax or features, the old commands do not change and continue to run as they always have. This is great for the customers because they don’t end up with syntax errors in their custom code after an ABAP upgrade. But backwards compatibility has a very dangerous side effect: it does not motivate developers to learn anything new. Indeed, in the on-premise SAP world, one can still party like it’s 1999. Or more like 1972.

The case of BINARY SEARCH is especially egregious. Table types other than STANDARD have already existed for more than a decade. HASHED table type works perfectly with a unique key. But even if that is not an option, SORTED table type uses the binary search algorithm and can have a non-unique key. It’s practically the same thing but better!
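For illustration, this is roughly how the alternatives might be declared (a sketch with a made-up booking structure; the field names and lengths are my assumptions, not taken from any actual program):

TYPES: BEGIN OF ty_booking,
         carrid   TYPE c LENGTH 3,
         connid   TYPE n LENGTH 4,
         bookid   TYPE n LENGTH 8,
         loccuram TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_booking.

TYPES:
  " Unique key known up front: hashed table, constant-time reads
  t_hashed_table TYPE HASHED TABLE OF ty_booking
                 WITH UNIQUE KEY carrid connid bookid,
  " Key not unique: sorted table, binary search built in, no SORT needed
  t_sorted_table TYPE SORTED TABLE OF ty_booking
                 WITH NON-UNIQUE KEY carrid connid.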

But to this day, BINARY SEARCH continues to piggyback on its old fame. I still see it mentioned in some internal guidelines (or rather “misguidelines”?) and random general performance improvement suggestions, as if it’s the best thing since sliced bread. It even still rears its head in the SCN blogs.

It’s Not About Performance

The subject of performance with different internal table types has already been covered in the SCN blogs, books, and ABAP documentation. In my own simple test (fill different tables with lots of data, then read it in a LOOP), I got a very predictable result. HASHED table was the fastest by a mile, and SORTED table, on average, performed just as fast as BINARY SEARCH. But while working on that simple program, I realized that it’s not even about the performance.
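A minimal version of such a measurement could look like the sketch below (lt_keys and lt_sorted are hypothetical tables filled elsewhere; GET RUN TIME FIELD returns microseconds):

DATA: lv_start TYPE i,
      lv_end   TYPE i.

GET RUN TIME FIELD lv_start.
LOOP AT lt_keys ASSIGNING FIELD-SYMBOL(<ls_key>).
  READ TABLE lt_sorted ASSIGNING FIELD-SYMBOL(<ls_booking>)
       WITH TABLE KEY carrid = <ls_key>-carrid
                      connid = <ls_key>-connid.
ENDLOOP.
GET RUN TIME FIELD lv_end.
WRITE: / |Sorted table: { lv_end - lv_start } microseconds|.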

In this program, I was planning to create a routine for each table type and call them thusly:

PERFORM sorted_table.

Inside these routines, I’d call other routines to select data from database and then read it:

FORM sorted_table.
  DATA: bookings_sorted TYPE t_sorted_table.
  PERFORM get_bookings CHANGING bookings_sorted.
  PERFORM read_bookings USING bookings_sorted.
ENDFORM.

(I went deliberately with a procedural program here to make the point clear to every developer. But the same issue would’ve been obvious if I used the class methods instead.)

For standard, hashed, and sorted tables this worked fine. But when the time came to type in the code for BINARY SEARCH, my brilliant copy-paste design failed for obvious reasons. BINARY SEARCH requires special READ statement syntax, so the read_bookings routine with a plain READ could not be used for it. Either I had to put additional code in the routine and add a flag telling it when to use which syntax, or I had to write special code outside of the routine just for BINARY SEARCH.
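The clash looks roughly like this (table and field names are illustrative, not from the actual test program):

" Works the same for standard, sorted, and hashed tables:
READ TABLE bookings INTO ls_booking
     WITH KEY carrid = 'LH' connid = '0400'.

" The BINARY SEARCH variant is only valid for a standard table
" that was sorted beforehand, so the generic routine can't be reused:
SORT bookings_std BY carrid connid.
READ TABLE bookings_std INTO ls_booking
     WITH KEY carrid = 'LH' connid = '0400' BINARY SEARCH.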

Less flexible and reusable code is always kind of a downer.

What You See Is Not What You Get

In addition to “dirtying” the code, BINARY SEARCH has another problem: sometimes it works, sometimes it doesn’t, and you might not even know it. (Horst Keller correctly referred to it as “error-prone”.) The internal table must first be sorted in a certain way for BINARY SEARCH to produce the expected results. What happens if it’s not sorted properly? As I learned back in 2006, in that case the result is like a lottery. Depending on the specific data, it might sometimes work correctly, but it could also miss a record that clearly exists in the table.

One might say “what’s the big deal, just remember to sort” but that’s exactly the problem. You’re relying on generations of future developers who will maintain the program to remember that it requires “special treatment”. What could possibly go wrong.
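A sketch of how this goes wrong (names are illustrative): when the sort order does not match the search key, the read may or may not succeed depending on the data.

" Table is sorted by CONNID, but the read searches by CARRID:
SORT lt_bookings BY connid.
READ TABLE lt_bookings INTO ls_booking
     WITH KEY carrid = 'AA' BINARY SEARCH.
" sy-subrc may or may not be 0 here -- the record can exist
" in the table and still not be found.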

Let Database Do Its Job

BINARY SEARCH signals not only the program’s age or the previous developers’ lack of table type awareness. Usually, it goes hand in hand with bad program design and underused database capabilities.

Out of curiosity, I ran a code scan for the BINARY SEARCH clause in a random set of legacy programs. In the vast majority of cases, BINARY SEARCH could have been replaced not just by a HASHED or SORTED table but by a SELECT statement. A typical example here would be reading the MARA and MAKT tables separately, then merging the two internal tables. That’s just a straight-up SELECT… JOIN. And in scenarios where, for example, a quantity or amount is accumulated by material or customer, SELECT… SUM would do just fine.
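As a sketch of both replacements (field names are illustrative; modern ABAP SQL syntax is assumed to be available):

" Instead of reading MARA and MAKT separately and merging:
SELECT m~matnr, t~maktx
  FROM mara AS m
  INNER JOIN makt AS t
    ON  t~matnr = m~matnr
    AND t~spras = @sy-langu
  INTO TABLE @DATA(lt_materials).

" Instead of accumulating quantities per material in ABAP:
SELECT matnr, SUM( menge ) AS total_qty
  FROM mseg
  GROUP BY matnr
  INTO TABLE @DATA(lt_totals).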

“Code pushdown” might be a novel HANA-related concept, but allowing the database to do its job has always been good practice. It’s just that we didn’t always practice it. And leaning on the crutch of BINARY SEARCH did not help. It’s time to let go.


Does all this mean that BINARY SEARCH has no use whatsoever and must be purged from ABAP syntax? Probably not. But the valid use cases for it are so few and far between that, as a tool, BINARY SEARCH deserves to be put out of sight on a very high shelf of our ABAP garage. Or, preferably, burned with fire. 🙂

  • Nice summary. But this is a known thing and we are beating around the bush. Can you comment on the performance of columnar tables vs row tables in a SELECT on big tables where we select not some but a group of fields? Please share your opinion if you have any thoughts.

    • Thanks for the comment. “Beating around the bush” means suggesting something but not talking about it directly (hence the “around” part). I feel this blog is anything but. 🙂

      I’m not sure how the follow-up question is related to the subject of this blog, this is not about the DB performance. We can use the development tools available in any SAP system (also for many years already) to evaluate any code or scenario, if necessary.

      • This is an important point that you mention. There are very nice colleagues who like to share their knowledge. However, the knowledge isn’t always up to date and so you don’t learn modern solutions but how it was done in the 80s/90s.

  • I had some bad experiences with BINARY SEARCH, which took a lot of my life time and nerves 🙁 The solution for me was almost always to choose the right table type and to think about the overall design and structure. In the meantime I try to work a lot with table expressions if possible. No READ TABLE and no BINARY SEARCH either.
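    For readers who haven’t used them: a table expression on a SORTED or HASHED table replaces the whole READ TABLE … BINARY SEARCH dance (a sketch with illustrative names):

    TRY.
        DATA(ls_booking) = lt_sorted[ carrid = 'LH' connid = '0400' ].
      CATCH cx_sy_itab_line_not_found.
        " handle the missing record
    ENDTRY.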

    • Thanks for chiming in!

      I haven’t used it for many years and every time I see it in a program I’m working on, I make an effort to remove it. In all this time, I’m yet to see a single case where BINARY SEARCH use is justified.

  • Thank you for the post.  I have always avoided BINARY SEARCH and I have never understood why developers would use it.  I rather assumed I was missing some important concept.

  • The other fun thing is that when a read on a STANDARD table fails, the SY-SUBRC is 4. When it fails due to a BINARY SEARCH, it is 8.

    That is a trick for young players. I think that is the same for SORTED tables.

    The most horrible thing is this. We have had sorted tables and the like for 22 years. I am fairly sure I heard about them in my first ever ABAP training course. I started programming (in ABAP) in 1999.

    And yet…..

    To this very day I still come across code, often freshly written code, where a full scan is done on a STANDARD internal table to get one record. In a loop. Often a nested loop.


  • I agree. A standard table with BINARY SEARCH is rarely (if ever) the best solution. The existence of secondary keys reduces the need for standard tables even further.

    For instance, a use case I sometimes see for BINARY SEARCH is that the internal table needs to be read using different keys, so the table is defined as a standard table, sorted one way, read using BINARY SEARCH, then sorted a different way somewhere else in the code and read again using BINARY SEARCH. Often, a better alternative is to define the table as SORTED or HASHED and then to define secondary SORTED and/or HASHED keys which can be used when reading or looping at the table.

    This can also be helpful if the same internal table needs to be read randomly for single records (in which case a HASHED key is best) and also looped at with a WHERE clause (in which case a SORTED key is best).
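    A sketch of that combined setup (the structure and all names are illustrative):

    TYPES: BEGIN OF ty_booking,
             carrid   TYPE c LENGTH 3,
             connid   TYPE n LENGTH 4,
             bookid   TYPE n LENGTH 8,
             customid TYPE n LENGTH 8,
           END OF ty_booking.

    " Hashed primary key for single-record reads, secondary sorted
    " key for WHERE loops -- no SORT, no BINARY SEARCH:
    DATA lt_bookings TYPE HASHED TABLE OF ty_booking
         WITH UNIQUE KEY carrid connid bookid
         WITH NON-UNIQUE SORTED KEY by_customer COMPONENTS customid.

    READ TABLE lt_bookings INTO DATA(ls_single)
         WITH TABLE KEY carrid = 'LH' connid = '0400' bookid = '00000001'.

    LOOP AT lt_bookings INTO DATA(ls_row)
         USING KEY by_customer WHERE customid = '00001234'.
      " process one customer's bookings
    ENDLOOP.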

  • Hello Jelena,

    an entertaining summary, thank you!

    If only a few use cases for BINARY SEARCH are valid, then IMO we should create a catalog/pattern and point out alternatives for all other cases.

    You noted BINARY SEARCH

    • has no clean syntax,
    • is error prone,
    • is not the fastest,
    • and it can hide a better solution.

    But you also stated its first benefit: it is often the only quick fix available for performance enhancement.

    I will offer a second use case: we know that filling a standard table with APPEND is faster than filling any other table type. If we then decide there are enough reads to justify the cost of a SORT operation, BINARY SEARCH might be a good solution. Even then, I would recommend using a sort-on-write copy of the table to avoid BINARY SEARCH, if you can afford the memory duplication.

    DATA: lt_data   TYPE STANDARD TABLE OF my_struct,
          lt_sorted TYPE SORTED TABLE OF my_struct WITH NON-UNIQUE KEY my_key.
    "1) Fill lt_data with APPEND
    "2) Create a sorted copy without using SORT lt_data BY my_key
    lt_sorted = lt_data.
    "3) Access the sorted table without BINARY SEARCH
    DATA(record) = lt_sorted[ my_key = value ].

    Disclaimer: Statements on performance are often meaningless without profiling data.

    best regards,