As many of you well know, back in the old, old, old days (when my hair was brown), ASE only offered allpages locking (APL).   In the move to data only locking (DOL) nearly 20 years ago (yes, it nearly has been that long!!), we added the concept of a latch.   One of the hard parts of engineering's job is that they don't just get to 'implement row level locking' – they have to do extensive testing to find the bottlenecks and evaluate alternatives as part of the implementation to come up with the best solution.   In the case of APL locking, it was noted that most lock contention and deadlocks were actually due to index pages – especially any index (and we all have them) on a monotonic sequence – such as TradeDate – in which we were all fighting over that last page.

At the time, the solution was to use a non-transactional latch.  Whereas locks were held for the duration of the transaction – and consequently a slow insert (or a large one) could keep readers waiting ….foreeeeveeerrr…. – a latch would only be held long enough to make the change on the index page.   You all knew this, of course – I mean, we only beat this to death in a slew of sessions in 1998 or soooo……
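To make that lifetime difference concrete, here is a minimal sketch (in Python, purely illustrative – the class and function names are mine, not ASE internals) contrasting a transaction-duration lock with a latch that is released the moment the page change is done:

```python
import threading

class TxnLock:
    """Held from acquisition until the transaction commits."""
    def __init__(self):
        self._lock = threading.Lock()
        self.held = False
    def acquire(self):
        self._lock.acquire()
        self.held = True
    def release_at_commit(self):
        self.held = False
        self._lock.release()

class Latch:
    """Held only for the duration of one physical page change."""
    def __init__(self):
        self._lock = threading.Lock()
    def __enter__(self):
        self._lock.acquire()
        return self
    def __exit__(self, *exc):
        self._lock.release()

def insert_row(txn_lock, page_latch, page, key):
    txn_lock.acquire()        # row lock: held until the txn commits
    with page_latch:          # index-page latch: held only for this block
        page.append(key)      # make the physical change to the index page
    # the latch is already gone here; the transaction lock is still held
    return txn_lock.held

page, txn, latch = [], TxnLock(), Latch()
still_locked = insert_row(txn, latch, page, "2015-03-29")
txn.release_at_commit()
```

The point of the sketch: a slow transaction keeps `TxnLock` pinned until commit, but `Latch` is free again the instant the page edit finishes – so latch waits are measured in microseconds, not transaction lifetimes.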

What you may not have thought about is that we had both "shared" latches and "exclusive" latches – although it makes sense.  When someone is reading an index page, we can't have some writer arbitrarily whacking data in the middle of it – who knows what our poor reader would see.   And since readers aren't modifying data – yes…we allow concurrent readers to share latches.   However, when modifying the index page, the writer grabbed an exclusive latch.
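A shared/exclusive latch behaves like a classic reader–writer lock: any number of concurrent readers, or exactly one writer, never both. A minimal sketch of that rule (again illustrative Python, not ASE's implementation):

```python
import threading

class SharedExclusiveLatch:
    """Many concurrent readers (shared) OR one writer (exclusive)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False
    def acquire_shared(self):
        with self._cond:
            while self._writer:            # readers wait only for a writer
                self._cond.wait()
            self._readers += 1
    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()
    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:   # writer waits for everyone
                self._cond.wait()
            self._writer = True
    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

latch = SharedExclusiveLatch()
latch.acquire_shared()
latch.acquire_shared()            # a second reader shares the latch just fine
readers_while_shared = latch._readers
latch.release_shared()
latch.release_shared()
latch.acquire_exclusive()         # a writer gets in only once readers are gone
writer_holds = latch._writer
latch.release_exclusive()
```

Note the asymmetry: readers only block on an active writer, while a writer must wait for the page to be completely quiet – which is exactly where contention shows up once write rates climb.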

Now, some of you may also have noticed that we are talking about "pages"…..but wait – if we had datarows locking, why didn't we simply lock the index rows being modified?   The answer is actually quite simple.   Remember, most indexes are not unique.   For non-unique indexes, most DBMSs (MSSQL, Oracle, etc.) simply store the key values once and then have a RID array of values for all the rows that match the index keys.   In a shameless plug for my session at ISUG TECH (ISUG-TECH Conference, 2015 – March 29 – April 2), I am giving a session on data structures that discusses such basics before it gets into the real fun of compression and encryption.   Needless to say, the concept of an "index row" isn't quite the same – hence we work at the page level.
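For illustration, here is one way such a non-unique index leaf can be modeled – the key value stored once, plus an array of row IDs (RIDs) for every matching row. The names are hypothetical, not ASE's on-page format:

```python
from collections import defaultdict

class NonUniqueIndexLeaf:
    """Sketch of a leaf where duplicate keys share one entry + a RID array."""
    def __init__(self):
        self._entries = defaultdict(list)      # key -> array of RIDs
    def insert(self, key, rid):
        self._entries[key].append(rid)         # grow the RID array in place
    def lookup(self, key):
        return list(self._entries[key])

leaf = NonUniqueIndexLeaf()
leaf.insert("2015-03-30", (1024, 7))   # RID as a (page, slot) pair
leaf.insert("2015-03-30", (1024, 9))   # same key, second row: no new key entry
leaf.insert("2015-03-31", (1025, 2))
```

You can see why "lock the index row" gets murky: inserting the second `2015-03-30` row doesn't create a new index row at all – it grows an existing entry's RID array, which is really a change to the page.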

Fast forward a few decades.   Transaction volumes have now grown by orders of magnitude – blame decimalization, web-ification, flash mobs on your website, or whatever man-made disaster…errr…opportunity is afflicting your business volume.   Luckily, Intel, IBM and other HW vendors have been boosting CPU speeds by similar orders of magnitude.   ASE is now capable of executing 2.5 million transactions per minute using an internal version of a standard industry benchmark running on commodity hardware.   And it is only going to get better – faster – with sp02.

However, that neat little trick of non-transactional latches……wellllll….is now starting to exhibit the same problem we had with index locks.   We are now seeing a lot of latch contention between writers and readers/writers.

Enter the ASE Engineering Super Hero team <cue sound track>.    No seriously, ASE engineering came up with a really neat solution for ASE 16sp02 called "Latch-Free B-Trees" (my boss is betting that I will get tongue-tied when discussing this and the Latchless Buffer Manager at ISUG TECH….but that topic is a good one for another blog).   In the Latch-Free B-Tree, readers don't use latches.  Neither do writers.  Instead, writers use an in-memory map (you just knew I was going to say "in-memory" something, didn't you??) and a delta/merge process to avoid latches as well.   And since they are writing to a delta area, there is no need for the readers to have a latch.   Keep in mind that we are not just avoiding the latch contention with this…we are avoiding the entire process of getting the latch at all.  Zero.  Nada.  Zilch.   If you remember from locking, we have lock hashtables, spinlocks and other nasty things that add to lock overhead – while latches aren't nearly as bad….we still have to keep a list of them around somewhere, so there is some overhead that now…just….doesn't exist any more.   Gone.  Poof.
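Conceptually (and this is a very rough sketch of the general delta/merge idea, not ASE's actual implementation – all names here are mine), a writer prepends a delta record and swings a mapping-table pointer – a step that would be an atomic compare-and-swap in a real engine – while a reader simply follows whatever pointer it finds and walks the delta chain. No latch is ever taken on either side:

```python
class BasePage:
    def __init__(self, rows):
        self.rows = list(rows)

class Delta:
    """One logical change, chained in front of the page it modifies."""
    def __init__(self, op, key, next_node):
        self.op, self.key, self.next = op, key, next_node

mapping_table = {}            # page-id -> current node (delta chain or base page)

def write(page_id, op, key):
    old = mapping_table[page_id]
    # stand-in for an atomic compare-and-swap of the mapping-table slot
    mapping_table[page_id] = Delta(op, key, old)

def read(page_id):
    node, inserted, deleted = mapping_table[page_id], [], set()
    while isinstance(node, Delta):               # newest delta first
        if node.op == "insert" and node.key not in deleted:
            inserted.append(node.key)
        elif node.op == "delete":
            deleted.add(node.key)
        node = node.next
    base = [k for k in node.rows if k not in deleted]
    return sorted(base + inserted)

def merge(page_id):
    """Consolidate the delta chain back into a single base page."""
    mapping_table[page_id] = BasePage(read(page_id))

mapping_table[42] = BasePage(["a", "c"])
write(42, "insert", "b")      # no latch: just a new delta + pointer swing
write(42, "delete", "c")
```

Once a chain gets long, a background merge consolidates it back into a single base page – that's the "merge" half of delta/merge, and it's why readers never need to coordinate with writers at all.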

To say that this will improve transaction processing speeds would be like saying stepping on the gas pedal makes the car go faster…..     assuming you have it in gear, of course….and not in park.

Come to Atlanta if you want to hear more about ASE 16sp02 or keep your eyes peeled for webcast announcements to come.


6 Comments


  1. Jason Froebe

    Sounds like ASE engineering is gearing up for a wholesale replacement of the locking and transaction log system to a version based system…

    1. Jeff Tallman Post author

      Not really – just getting rid of more spinlocks and contention points – which are impeding performance on larger SMP systems these days.   Of course, the other thing we have to keep in mind with regard to latches is that index leaf pages are sorted – so an insert/update/delete can cause rows to need to be moved around, hence the page-level latch as well…..and with larger page sizes, this is quickly becoming a problem.

      One also needs to remember that a versioning system needs a TON of memory to keep around the versions that other users may be after…..   which impacts data cache, which means more PIO…. sooooo…….there are places where it will help and we are definitely looking into it there – more of a hybrid locking/MVCC implementation vs. strictly MVCC.

  2. Rajesh Neemkar

    Hi Jeff,

    I'm sure Sybase ASE does not have an UNDO segment kind of "functionality" (like in Oracle), with which, if a user is modifying a particular row, another user will still be able to read that particular row – of course, the "old data".

    Is there anything like the UNDO segment function in Sybase? Or does the user simply need to wait until the row update is completed?

    Rajesh

    1. Jeff Tallman Post author

      An UNDO segment is only necessary for MVCC, in which a query with an earlier sequence number has to undo the work of a later txn in order to restore the data page to a consistent state for its transaction sequence.    Since ASE uses locking for most concurrency control, an UNDO is not necessary….and for a delta/merge, the newly modified data is retained in external structures, so there is no "undo" either (in fact, the pointer to the page is modified to point to the new delta instead….so even new readers after the index mod will see the new data…in MVCC, they wouldn't).

      However, you should also check out READPAST locking and consider workflow queues if you are messing around with MVCC – they will expose one of the biggest problems with MVCC, as MVCC has to completely serialize access to a workflow queue table in Oracle due to the block-level ITL.

  3. Hank Du

    This is great news, we finally have MVCC in ASE.

    Is the delta merge same as SAP HANA? There are performance issues with HANA delta merge.

    1. Jeff Tallman Post author

      It's closer to HANA's delta merge….however, remember several things: 1) we are merging an index row into a row store vs. a data row into a column store – as a result, things are much, much simpler; 2) we are merging a lot less than HANA is…..soooooo……

