Locking in S/4HANA via Durable Locks & the CDS View ObjectModel.lifecycle Annotations
This blog is about the new locking concept in SAP: Durable Locks and the CDS view lifecycle annotations, which work with the latest ABAP Programming Model for Fiori (draft).
Note: This is just based on my understanding and the analysis I’ve done in an S/4HANA system with ABAP 7.52, as I couldn’t find any detailed documentation about this.
First we will see why Durable Locks were introduced and how this concept is utilized (along with some interesting stuff 🙂 ) in the latest SAP ABAP Programming Model for Fiori. I assume the same applies to the RESTful Application Programming Model (RAP) as well.
We will not discuss how to create durable locks (they are not released to customers), but only how they work and how to override their behavior in a draft-enabled Fiori application.
“Why the Durable Locks were introduced”
*****If you already have an idea about locking in Fiori apps via ETags & soft state, you can skip this and move to the section “How the Durable Locks work”*****
I think everyone is aware of the current SAP strategy for developing new apps: UI5/Fiori on the front end, CDS and ABAP on the backend, with the two communicating via an OData service in a stateless model.
In this scenario no session is maintained: each OData request from the UI5 app opens a session in the backend, fetches/updates the data, and closes the session again.
But in normal GUI/Web Dynpro apps, the session stays open until you log off.
So in terms of locking, if we open a Web Dynpro/GUI app in edit mode, we usually call a lock object to create the lock, so that other users cannot edit the same data until the first user commits the data or leaves the transaction.
Note: For beginners, there are blogs and SAP Help pages out there that explain the locking concept using lock objects.
Here comes the issue with Fiori apps: locks do not persist, because the session is closed after every request.
For example, if we send an OData request to lock a sales order, the request opens a session in the backend and locks the sales order; when the request completes, the session is closed and all locks are removed 🙁
So how will you handle locks in these apps? (else the data might get corrupted)
For this, SAP earlier provided two features:
- ETag
- OData soft state
ETag
Using the ETag, we can only achieve optimistic locking.
It is based on a timestamp or some other value calculated from the data, such as a hash.
- If user A edits sales order 1, user B can also open the same sales order 1 for editing at the same time.
- When they both click on Edit, they have both already read the data of sales order 1 into the UI5 app, so they both hold the same timestamp (LastChangedOn). When they save, the first user’s request reaches the backend and updates the data along with the timestamp (LastChangedOn), so the timestamp in the DB changes.
- Then user B’s request reaches the backend, and the system checks whether the timestamp in the DB matches the timestamp coming from the UI; if not, user B gets an error.
With a little configuration, the OData framework takes care of this automatically. However, it is not very helpful in all scenarios, since a user might enter all the data only to see an error message at save time. There are many blogs and SAP Help pages to learn more on this topic.
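Conceptually, the check performed at save time can be sketched in ABAP like this (all names here — `zsalesorder`, `iv_etag_from_ui`, `zcx_etag_mismatch` — are hypothetical; in a real app the OData framework performs this ETag comparison for you):

```abap
" Hedged conceptual sketch of the optimistic (ETag) check at save time.
" All object names are hypothetical, not standard SAP objects.
SELECT SINGLE lastchangedon
  FROM zsalesorder
  WHERE salesorderid = @iv_salesorderid
  INTO @DATA(lv_etag_in_db).

IF lv_etag_in_db <> iv_etag_from_ui.
  " Another user saved in between -> reject the update
  " (in OData terms: HTTP 412 Precondition Failed)
  RAISE EXCEPTION TYPE zcx_etag_mismatch.
ENDIF.

" Otherwise: update the record and write a fresh LastChangedOn timestamp.
```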
OData Soft State
By using this, a session is maintained until the configured timeout is reached, after which the session is closed. So as long as the session is open we have locks; once the session is closed, the locks are removed.
But SAP itself says in the Help not to use this for transactional processing; it should be used only to improve performance by buffering data. In any case, keeping a session open just to persist a lock puts a toll on the server and on performance.
Below are the comments regarding soft state from Jocelyn Dart and the SAP team:
The issue with soft state is that it is not recognized on the UI if it is gone just as a new session is opened (and a new lock acquired). Therefore it is not usable or only in combination with eTag.
So for this reason, the new Durable Locks were introduced. As the name “Durable” suggests, they are long-lasting: these locks are not removed even when the session is closed.
Wow, right?! No session required to maintain the locks 🙂. For me it’s an awesome feature, and kudos to the SAP architects and the team who designed it.
“How the Durable Locks work”
These locks are long-lasting because they don’t depend on the session/COMMIT WORK. They depend on a context and persist for a long time (based on an expiry time). So what is this new context I just mentioned?
From the SAP Help
A durable lock is requested using a context. A context is a lock phase in which durable locks are assigned to this context. A context belongs to a draft document and is bound to this draft or to a shorter lifespan.
Durable locks are only available for specific SAP applications or frameworks.
So the context is based on the draft document, and as SAP mentioned, these locks are only available for a few frameworks; one of them is the ABAP Programming Model for Fiori with draft.
For people who are not very familiar with this framework, please check my other blog, as the example below is based on it; if you try that example once, the process below will be very easy to understand.
Heads up: I will discuss two types of locking here.
- Enqueue lock -> the SM12 lock, i.e. the Durable Lock
- Processing lock -> the lock maintained in the draft admin table, i.e. the Draft Lock
I am giving this heads-up because it might be a bit confusing; I will try my best to explain 🙂
Let’s check the video below to see how locking works when two users (DUSER1, DUSER2) open the same app and try to edit the same data.
DUSER1 -> edits the existing data; the framework creates a draft document and locks it by creating an SM12 lock.
DUSER2 -> edits the same data; the framework checks the lock and throws an error.
DUSER1 -> closes the app and the browser without discarding the draft document (didn’t click the Cancel or Save button, which would discard the draft).
DUSER2 -> opens the same data and clicks on “Edit”; it still shows the error.
The error still appears for DUSER2 because DUSER1 closed the browser without clicking the Cancel button, so the framework didn’t remove the lock. So how and when will the lock be removed now? This is discussed in the later part of the blog.
For now, let’s see the SM12 lock that is created when DUSER1 clicks the “Edit” button.
Here you can see the Draft ID, which is nothing but the key of the draft table entry for the sales order header draft table. You can open the auto-generated draft table and compare the data.
More technical information below:
When the user clicks the “Edit” button, a function import call goes to the backend, which calls the action below (a standard auto-generated action for the BO).
The class “*LIB_A_EDIT” call creates the draft entry and also creates the lock using the durable lock manager class “*DURABLE_LOCK_MNGR”.
Internally a kernel method is called, which takes the draft ID and creates the enqueue entry in SM12.
So let’s get back to our issue, where the lock is not removed when DUSER1 closes the Fiori app without cancelling the created draft.
What happens in this scenario is that the lock gets stuck in SM12. To handle this, SAP internally calls an RFC function module in the background (periodically, every 5 minutes) to delete the locks for draft entries that have expired.
So how does it know that a draft entry is expired? The expiry times for the different CDS views are defined in a standard table, which the framework uses to check whether a draft entry is expired; if it is, the lock on that draft entry is deleted.
(In the server I tested, the RFC function module is called in the background every 5 minutes, but it is not a background job; I think it is some kernel call.) This is called “AUTO ABAP”; check Jocelyn Dart’s comments for more details.
Let’s see where the lock expiry timings are stored and what the default expiry time is.
You can see the first entry, <DEFAULT>, which sets the EXPIRY_DURATION for all (transactional-enabled) CDS views to 900 seconds, i.e. 15 minutes. So for 15 minutes the draft is locked via SM12, and no one can edit that entry from the Fiori app except the user who created the lock (DUSER1).
There is also another admin table, which holds the admin data for all draft entries. Whenever a draft entry is created in any application, it is stored here. It contains very important information related to the locks (ENQUEUE_CONTEXT & IN_PROCESS_BY).
The RFC FM that deletes the locks reads the data from the above table, which contains the enqueue context & LAST_CHANGED_AT. The framework then compares the last modification date from the admin table with the expiry time from the config to delete the SM12 lock linked to the ENQUEUE_CONTEXT field.
So in the system, the <DEFAULT> expiry time to clear the locks is 15 minutes, plus an additional 1-5 minutes because the RFC FM that deletes the locks runs only every 5 minutes.
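Conceptually, the periodic cleanup does something like the sketch below. This is hedged pseudocode with hypothetical names, reconstructed from the behavior described above, not actual SAP code:

```abap
" Hedged pseudocode of the periodic lock cleanup (hypothetical names).
LOOP AT lt_draft_admin INTO DATA(ls_admin). " rows of the draft admin table
  " lv_expiry_secs comes from the expiry config table (<DEFAULT> = 900 s)
  DATA(lv_deadline) = cl_abap_tstmp=>add( tstmp = ls_admin-last_changed_at
                                          secs  = lv_expiry_secs ).
  IF lv_now > lv_deadline. " UTC timestamps are directly comparable
    " Expired -> delete the SM12 enqueue linked to ENQUEUE_CONTEXT
    delete_durable_lock( ls_admin-enqueue_context ).
  ENDIF.
ENDLOOP.
```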
There is also a method call that cleans up all the locks. You can call it to clean up immediately after the 15 minutes if you don’t want to wait the additional 5 minutes while testing. I don’t really recommend it, though.
So how do we override this 15-minute locking period (if you want to increase or decrease the time)?
Go to the root transactional CDS view and add the below annotation:
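On my 7.52 system the annotation looks like the fragment below; the value is an ISO 8601 duration, so double-check the exact format against the annotation documentation in your release:

```abap
// On the root transactional CDS view:
@ObjectModel.lifecycle.enqueue.expiryInterval: 'PT01M' // durable (SM12) lock expires after 1 minute
```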
Now I changed the expiryInterval to 1 minute. By the way, you cannot specify less than 1 minute; if you do, it is automatically changed back to 15 minutes in the backend, and Eclipse shows no error when activating the CDS view. But don’t actually use 1 minute either, as it is not a realistic value.
You can check the documentation below:
Activate the view and check the table entry below; for our transactional CDS view it now shows 60 seconds (1 minute).
Now the enqueue locks will expire after 1 minute.
When the lock gets deleted, come back to the Fiori app and refresh it: it will still show that the entry is locked.
This is because of the “lifecycle.processing” expiry time (the second lock); I will discuss this one later.
If DUSER2 opens and edits the entry, he is allowed to edit it, even though the first page shows “Locked by DUSER1”. This is because we missed one annotation in the CDS view.
For SAP: I am not sure whether this is a bug or expected behavior, since the processing expiry time (the 2nd type of lock) has not been reached yet, so it should not allow us to edit. When I debugged the code, the framework first checks whether the ETag has changed and then checks the processing expiry time (I don’t remember exactly). But the ETag check passes because we didn’t specify the ETag annotation, and so it allows us to edit, something along those lines.
OK, let’s add the annotation below and activate the view.
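The annotation in question is the entity change state ID, which tells the framework which element to use for the ETag check (changedat is the element name from my example view; use the corresponding element of your own view):

```abap
// ETag element for the optimistic-lock check on the root view
@ObjectModel.entityChangeStateId: 'changedat'
```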
We are specifying “changedat” for the ETag-related check; this has a special significance, which I will discuss in another blog 🙂.
For now to fix this issue, lets activate and test it.
Now that we added the ETag annotation, when DUSER2 clicks the Edit button, he gets the error that the entry is locked by DUSER1.
But if you look at the above GIF, there are no locks in SM12, as the 15-minute expiry time has passed and the locks were removed by the RFC FM. Still, the error is displayed when you click the Edit button. This is because the framework deleted the “enqueue lock”, but the draft entry still has the ‘In Process’ user assigned, which is the “processing lock” (the 2nd type).
You can also observe that the “Enqueue Context” is removed along with the SM12 lock.
Weird, right? Why are there two types of locks for a single draft application?
Actually, enqueue-context locking is used for locking in legacy GUI apps, like the BP transaction (I will explain this in another blog).
Let’s check the draft config table for the Processing Lock
It says that for “22”, the processing expiry event, the expiry duration is 900 seconds, which is 15 minutes.
This is the same as the enqueue lock default setting, so in normal applications these two locks make no difference, as both are released at the same time. The difference only shows when you maintain two different custom expiry intervals for them.
So let’s override it by adding the annotation below and activating the view.
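Analogous to the enqueue interval, the processing interval is overridden on the root view like this (same ISO 8601 duration format; the value is from my test):

```abap
// Draft (processing) lock expires after 1 minute
@ObjectModel.lifecycle.processing.expiryInterval: 'PT01M'
```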
If you go back and check the same draft config table after activation, you will see the processing expiry event for our CDS view set to 60 seconds.
When the processing lock expires after 1 minute, IN_PROCESS_BY is cleared. Let’s see the table entry.
Let’s go back to the Fiori App
- When DUSER2 opens the Fiori application, it shows that there are unsaved changes by DUSER1 (because DUSER1 created the draft entry).
- If DUSER2 tries to edit that entry, a warning popup appears; if he clicks “Edit”, the existing draft by DUSER1 is deleted and another draft is created for DUSER2.
So for the Fiori app, if user 1 edits an active entry, the framework creates a draft entry; if user 2 wants to edit the same active entry, he needs to wait for the framework to remove the processing and enqueue locks of DUSER1’s draft entry.
By the way, if user 1 saves the data to the DB by clicking the “Save” button, or discards the draft entry using the “Cancel” button in the Fiori app, then no lock remains on the active entry: the locks are removed by the framework and the draft admin entry is removed from the draft admin table.
So what is the difference between the lifecycle.processing (draft lock) and lifecycle.enqueue (durable lock) expiry times, given that they are very similar?
I will show this with an example in another blog 🙂
Still confused? Check the comment below by Harald Evers, the co-innovator of the durable locking framework:
Design goals were to preserve the long-standing pessimistic enqueue locking in ABAP based stateful applications and combine it with further lock capabilities in the Fiori programming model. As a consequence of the durable enqueue lock approach, no existing lock coding in those stateful apps, process integration or batch input logic needs to be modified when introducing Durable enqueue locks in draft-based Fiori applications and OData services.
The strict pessimistic lock of session-based enqueues seamlessly work with or better said as durable enqueue locks if created by a framework with the corresponding context. This interoperability is important for scenarios where editing a draft in a Fiori application may get in conflict with concurrent stateful UI usage, process integration or batch input logic. The according interval is dedicated to this scenario and should be the smallest one.
In a pure Fiori world without existence of such stateful apps or enqueue based logic the processing interval may apply as a second approach to pessimistic locking. Since this is a different lock conflict pattern we decided to separate concerns and introduced the processing interval. However, you are right that for a draft exclusive to a single user both intervals should be the same for the sake of simplicity. So typically, if you change one of these intervals you should keep the other in sync. Nonetheless, designing it this way gives us options to further evolve draft handling, in future.
Finally, the optimistic locking applies once the intervals are exceeded and the draft lifecycle handler has run. Still, the user can be informed about former modifications of another user as a usability plus compared to the pure session-based locking. Without concurrent accesses a user may pick up entries made before and resume the draft processing (then entering a new phase of pessimistic locking).
Please leave a comment if you guys have any questions or suggestions.
Thanks & Best Regards,