To whom it may concern …

For any write access to a line of a database table, the database sets a physical exclusive write lock on that line. This lock prevents any other write access to the line until it is released by a database commit or a database rollback.

How can we see that in ABAP?

Rather simple: write a program:

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

Run it in one internal session. Open another internal session and run another program in parallel:

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

MESSAGE 'Done' TYPE 'I'.

The program in session 2 finishes only when the first program has finished.

This is as expected. The second program tries to write to the same line as the first program and therefore has to wait until the lock is released.

You must be aware that such locks occur not only for Open SQL statements but for all write accesses to database tables. Obviously, all writing Native SQL statements are further candidates. But other ABAP statements also access database tables. Recently, I stumbled over EXPORT TO DATABASE.

Program in internal session 1:

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

Program in internal session 2:

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

MESSAGE 'Done' TYPE 'I'.

As before, the program in session 1 blocks the parallel execution of the program in session 2, because the same lines of the INDX-type database table are accessed. This can lead to deadlock situations where you might not have expected them.

To prevent such long-lasting locking or even deadlock situations, the write locks must be released as fast as possible. This means there must be database commits or database rollbacks as soon as possible. In classical ABAP programming, a lot of implicit database commits occur. E.g., each call of a dynpro screen leads to a rollout of the work process and a database commit. If there is only a short time between a write access and the next database commit, you don’t notice such locks in daily life. But if you have long running programs (as simulated above with the DO loop) without a database commit shortly after a write access, you can easily run into unwanted locking situations. In my recent case, I experienced deadlock situations during parallelized module tests with ABAP Unit: no screens -> no implicit database commits.

Therefore, as a rule: if there is a danger of parallel write accesses to one and the same line of a database table, avoid long running processing after a write access without a database commit in between.

In the examples above, you could prevent the deadlock e.g. as follows:

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

CALL FUNCTION 'DB_COMMIT'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

or

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

CALL FUNCTION 'DB_COMMIT'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

By calling the function module DB_COMMIT in the programs of session 1, an explicit database commit is triggered. The programs in session 2 are no longer blocked during the long running remainders of the programs in session 1.

It is not a general rule to place such calls after each write access. Of course, a good transaction model should prevent deadlocks in application programs anyway. But if you experience deadlocks in special situations, e.g. in helper programs that are not governed by a clean transaction model, such explicit database commits can be helpful.

If deadlocks occur only during automated testing, you can also consider using lock objects during test runs. A test that involves a write access can use the SAP enqueue/dequeue mechanism to lock and release table lines and to react appropriately if a line is already locked.
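For illustration, here is a minimal sketch of such a test-time lock. It assumes a hypothetical lock object EZ_SCARR for SCARR with lock parameter CARRID; the generated modules ENQUEUE_EZ_SCARR and DEQUEUE_EZ_SCARR and their exact interface are assumptions for this sketch, not part of the examples above.

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

"ENQUEUE_EZ_SCARR is the (assumed) enqueue module of the hypothetical lock object EZ_SCARR
CALL FUNCTION 'ENQUEUE_EZ_SCARR'
  EXPORTING
    carrid         = wa-carrid
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  "Line is already locked by a parallel test: react instead of waiting
  MESSAGE 'No write access possible at the moment' TYPE 'I'.
  RETURN.
ENDIF.

DELETE scarr FROM wa.
INSERT scarr FROM wa.

"Release the database lock as soon as possible ...
CALL FUNCTION 'DB_COMMIT'.

"... and the logical lock as well
CALL FUNCTION 'DEQUEUE_EZ_SCARR'
  EXPORTING
    carrid = wa-carrid.

In a test class, the enqueue part would typically go into the setup method and the dequeue part into the teardown method, so that every test method runs under the logical lock.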


16 Comments


  1. Shai Sinai

    Interesting topic.

    I must say I have never given it deep thought, as I always handle the locks at the transaction level (and don’t run into this internal DB locking mechanism).

  2. Amit Sawant

    I am facing a deadlock issue in a standard SAP function module 🙁

    There is a deadlock in the PRICAT tables, and the Oracle support team from SAP is looking into it; they want me to change the commit and add some wait. The problem is that it’s standard SAP code, and it was working fine before the upgrade.

    Amit

  3. Matt Fraser

    Hi Horst,

    I’m not an ABAPer, so perhaps I’m missing something about the EXPORT TO DATABASE statement. I’m not quite understanding how your second scenario is different from the first, and why it’s a deadlock. Isn’t it rather a lengthy exclusive lockwait? After all, does not the process in session 1 eventually complete successfully, and then the process in session 2 is allowed to execute after waiting for session 1 to finish? Or does the use of shared memory objects cause the deadlock situation, vs the direct-to-table writes of the first example?

    Cheers,

    Matt

    1. Horst Keller Post author

      Hi Matt,

      I use deadlock as defined in the ABAP Glossary. OK, let’s say this definition is a bit too sloppy and it’s rather about “exclusive lockwaits”, at least in these examples. But those can and do lead to real deadlocks. And of course, the term deadlock gains more attention …

      Regarding EXPORT TO DATABASE, I simply wanted to show that not only SQL statements can lead to lock situations on the database, but also ABAP statements that are converted to SQL internally. Therefore, in fact, both scenarios are exactly the same, but the second one might be a little bit surprising, since you don’t see any SQL there. The EXPORT TO DATABASE statement is not connected to shared memory (there are other variants), but is converted to SQL that writes an ABAP data cluster directly to the lines of a table with a special structure.

      Thanks for your clarifying comment, I corrected the wording a bit.

      Horst


      1. Matt Fraser

        Ok, that makes sense. That definition of deadlock in the glossary doesn’t actually match the generic understanding of a deadlock from a database administrator’s (or DBMS platform) perspective, but with that definition, yes, your usage is correct. As a DBA, I would have definitely called it an exclusive lockwait, which is still undesirable, but not nearly as bad as a deadlock (aka deadly embrace), in which one session (or possibly both, but detection mechanisms today usually pick one as a loser so the other can succeed) will actually fail and short dump, vs simply being temporarily hung and waiting for a long time (or potentially forever, if the session holding the lock goes into an endless loop, or until some sort of timeout occurs).

        Nevertheless, your examples do show why long wait times are sometimes experienced. It happens all too frequently that I get user complaints that the system is suddenly too slow, when in fact it isn’t — they’re just running multiple parallel queries or reports that use the same selection parameters, and thus causing lockwaits on themselves. “Blame the system” then becomes the default reaction.

        Thanks for this blog.

        Cheers,

        Matt

    1. Horst Keller Post author

      Hi Brian,

      This is not about relying on lock objects or database locks.

      Database locks – as part of the database transaction (or LUW) concept – are simply there. The blog wants to point out that simplistic programs that do not prevent parallel write accesses can easily run into locking situations, resulting in waiting processes or even deadlock situations.

      Lock objects – as part of SAP’s transaction (or SAP LUW) concept – provide a logical lock concept on the application server level. From the ABAP programmer’s point of view, database locks are physical attributes of the database. The logical lock concept provided by lock objects is a collaborative concept that is in fact not directly connected to database locks at all. Instead, it serves to prevent locking situations on the database level by transferring the setting and releasing of locks to the application server level. Technically, an enqueue module of a lock object simply writes the IDs of the datasets to be locked into a central lock table, and a dequeue module deletes them again. An ABAP exception occurs if you want to lock something that is already locked (there is more about this, but I keep it simple).

      You use lock objects in application programs in order to prevent parallel write accesses on the database level and, with that, to prevent locking situations there. One could introduce appropriate lock objects into the simplistic example programs above that try to set logical locks on the accessed lines. If a line is locked, the programs in session 2 should not wait but, e.g., send a message that no write access is possible at the moment.

      The prerequisite for a working locking mechanism based on logical lock objects is that all participating programs in fact use the enqueue/dequeue modules. If a participating program does not check the logical locks and tries to write data that is being written in parallel, the database locking takes place, because it cannot be circumvented.

      Of course, all well-written application programs should involve a transactional concept that prevents locking on the database level. In classical ABAP programs, mainly based on dynpros, the means for this are provided by SAP LUWs and lock objects. Other frameworks might use other logical concepts.

      The blog was intended to show what can happen if you don’t follow a logical transactional concept in ABAP programs. If you own such programs and run into locking situations, you must consider whether to stay simplistic and take care of commits on the database level or whether you can introduce a logical locking concept.

      Hope that clarified a bit.

      Horst

