Table Locking in SAP HANA Cloud, data lake and SAP IQ
Many years ago, when Sybase IQ (pre-acquisition) implemented multiple writers, we gained a great feature that allows customers to load data into tables on any number of nodes in the multiplex. This feature allowed some of our largest customers with the most demanding workloads to isolate data load activity to different nodes (machines) and achieve data load rates that are still among the best, if not the best, in the industry.
At the same time, we also released a feature that allowed application developers to pre-allocate object locks prior to command execution. The LOCK TABLE command allowed a connection to request a shared, write, or exclusive lock on a table. It also allowed the connection to wait indefinitely for the lock or to time out after a period of time.
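As a sketch of what that looks like in practice (the table name here is hypothetical):

```sql
-- Request a WRITE lock on a hypothetical sales_fact table,
-- waiting up to 30 seconds before giving up.
LOCK TABLE sales_fact IN WRITE MODE WAIT '00:00:30';

-- The load now runs with the lock already held, so it cannot
-- fail partway through because another writer grabbed the table.
-- LOAD TABLE sales_fact ... ;

-- COMMIT ends the transaction and releases the lock.
COMMIT;
```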
This feature was used extensively by Sybase (Replication Server) and SAP (ILM, Near Line Store, etc) as well as by customers to guarantee that a connection could get a lock on a table prior to loading the table. If you waited until the command was run to get the lock, you ran the risk that the lock was not available and that the command would fail and have to be retried.
The issue was that these two features could not easily be used together. In a multiplex, LOCK TABLE could only be used on the coordinator node. Any load that wanted to pre-allocate the lock had to run on the coordinator, which defeated the purpose of having multiple nodes able to write data.
All that changed in 2020!
With the release of SAP IQ 16.1 SP04 PL03, the LOCK TABLE command can now be run on any node in the multiplex! You can now have a user connected to node 1 pre-allocate a WRITE lock on a table while, on node 2, another user tries to acquire the same WRITE lock on the same table.
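A sketch of that cross-node contention, using a hypothetical `orders` table; the second request waits until the first connection commits, or until its timeout expires:

```sql
-- Session A, connected to writer node 1:
LOCK TABLE orders IN WRITE MODE WAIT '00:01:00';  -- lock granted

-- Session B, connected to writer node 2:
LOCK TABLE orders IN WRITE MODE WAIT '00:01:00';  -- blocks, waiting on A

-- Session A finishes its work:
COMMIT;  -- releases the lock; session B's request can now be granted
```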
If you use SAP Replication Server (Sybase Replication Server, or SRS), you previously had to have SRS connect to the coordinator to run the loads. With the most recent versions of SAP IQ, that is no longer necessary, as any WRITER node can accept and use the LOCK TABLE statement.
In 2020, SAP also released SAP HANA Cloud, data lake (HDL). Because HDL is based on SAP IQ, it inherits this feature as well.
As you are developing your applications and systems using SAP IQ or the HANA Cloud data lake, you can use the same logic for both without the need to change how, when, or where data is loaded.
To paraphrase the documentation, there are a few caveats to be aware of when using this feature.
With IQ 16.1 SP04 PL03, LOCK TABLE (WRITE) can now be run on all nodes in an MPX (multiplex) and is coordinated across all nodes! There were some bugs in that initial release, but they have been fixed as of PL06 and PL07.
Only the following semantics are supported for the LOCK TABLE statement. These restrictions apply only when LOCK TABLE is used on a WRITER node.
- LOCK TABLE can only be used on a single table
- Only a WRITE lock can be requested, no SHARED or EXCLUSIVE locks are allowed
- The WITH HOLD clause cannot be used
- Running consecutive LOCK TABLE statements from one connection will automatically commit the previous transaction and release the previous lock
- You can use the WAIT syntax without a time to wait indefinitely or with a time (hh:mm:ss format) to wait for a period of time
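Putting those restrictions together, the forms allowed on a writer node look roughly like this (the table name is hypothetical):

```sql
-- Single table, WRITE lock only; wait indefinitely for the lock:
LOCK TABLE staging_orders IN WRITE MODE WAIT;

-- Or wait at most five minutes (hh:mm:ss format):
LOCK TABLE staging_orders IN WRITE MODE WAIT '00:05:00';

-- Caveat: issuing another LOCK TABLE on this same connection
-- commits the current transaction and releases the lock above.
```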
Enjoy this new feature!