
As many of you well know, back in the old, old, old days (when my hair was brown), ASE only offered allpages locking (APL).   In the move to data only locking (DOL) nearly 20 years ago (yes, it has nearly been that long!!), we added the concept of a latch.   One of the hard parts of engineering's job is that they don't just get to 'implement row level locking' - they have to do extensive testing about where the bottlenecks are and define alternatives as part of the implementation to come up with the best solution.   In the case of APL locking, it was noted that most lock contention and deadlocks were actually due to index pages - especially any index (and we all have them) on a monotonic sequence - such as TradeDate - in which we were all fighting over that last page.
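
A quick (and entirely hypothetical) sketch of why that happens - nothing ASE-specific here, just the arithmetic of a monotonically increasing key: every new value sorts after everything already in the index, so every insert targets the same rightmost leaf page.

    import java.util.ArrayList;
    import java.util.List;

    // Toy illustration (not ASE code): leaf pages hold a fixed number of keys.
    // A monotonically increasing key like TradeDate always sorts after everything
    // already stored, so every insert lands on the rightmost leaf page - the one
    // page that every concurrent session ends up fighting over.
    public class LastPageHotSpot {
        static final int PAGE_CAPACITY = 200;
        static final List<List<Long>> leaves = new ArrayList<>();
        static { leaves.add(new ArrayList<>()); }

        static int insert(long monotonicKey) {
            List<Long> rightmost = leaves.get(leaves.size() - 1);
            if (rightmost.size() == PAGE_CAPACITY) {   // page split: start a new rightmost leaf
                rightmost = new ArrayList<>();
                leaves.add(rightmost);
            }
            rightmost.add(monotonicKey);
            return leaves.size() - 1;                  // always the last page
        }

        public static void main(String[] args) {
            for (long tradeSeq = 1; tradeSeq <= 1000; tradeSeq++)
                System.out.println("key " + tradeSeq + " -> leaf page " + insert(tradeSeq));
        }
    }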

At the time, the solution was to use a non-transactional latch.  Whereas locks were held for the duration of the transaction - and consequently a slow insert (or a large one) could keep readers waiting ....foreeeeveeerrr.... - a latch would only be held long enough to make the change on the index page.   You all knew this, of course - I mean, we only beat this to death in a slew of sessions in 1998 or soooo......
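
If it helps to see the difference in code, here is a minimal sketch (my own names, nothing lifted from ASE) of the only thing that really separates a transactional lock from a non-transactional latch: how long each one is held.

    import java.util.concurrent.locks.ReentrantLock;

    // Minimal sketch, not ASE internals: the lock lives for the whole transaction,
    // the latch lives only for the few instructions that change the index page.
    public class LockVersusLatch {
        static final ReentrantLock rowLock   = new ReentrantLock();  // transactional lock
        static final ReentrantLock pageLatch = new ReentrantLock();  // non-transactional latch

        // A transactional lock is released only when the transaction ends, so a slow
        // insert keeps readers of that row waiting the whole time.
        static void insertWithinTransaction(Runnable slowTransactionBody) {
            rowLock.lock();
            try {
                slowTransactionBody.run();   // could be seconds... readers wait ....foreeeeveeerrr....
            } finally {
                rowLock.unlock();            // end of transaction (commit/rollback)
            }
        }

        // A latch is held just long enough to make the physical change to the page.
        static void updateIndexPage(Runnable applyPageChange) {
            pageLatch.lock();
            try {
                applyPageChange.run();       // microseconds
            } finally {
                pageLatch.unlock();          // released immediately
            }
        }
    }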

What you may not have thought about is that we had both "shared" latches and "exclusive" latches - although it makes sense.  When someone is reading an index page, we can't have some writer arbitrarily whacking data in the middle of it, or who knows what our poor reader would see.   And since readers aren't modifying data - yes...we allow concurrent readers to share latches.   However, when modifying the index page, the writer grabs an exclusive latch.
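
A shared/exclusive latch behaves just like a reader/writer lock, so a few lines of (hypothetical, non-ASE) Java are enough to show the rule: any number of readers can hold the shared latch together, but a writer needs the exclusive latch all to itself.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Sketch only: many readers may hold the shared latch concurrently; a writer
    // must hold the exclusive latch alone, so no reader ever sees a half-modified page.
    public class IndexPageLatch {
        private final ReentrantReadWriteLock latch = new ReentrantReadWriteLock();
        private final byte[] page = new byte[2048];

        byte[] readPage() {
            latch.readLock().lock();         // shared latch - concurrent readers OK
            try {
                return page.clone();         // consistent snapshot of the index page
            } finally {
                latch.readLock().unlock();
            }
        }

        void writePage(int offset, byte value) {
            latch.writeLock().lock();        // exclusive latch - writer is alone on the page
            try {
                page[offset] = value;
            } finally {
                latch.writeLock().unlock();
            }
        }
    }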

Now, some of you may also have noticed that we are talking about "pages".....but wait - if we had datarows locking, why didn't we simply lock the index rows being modified?   The answer is actually quite simple.   Remember, most indexes are not unique.   For non-unique indexes, most DBMSs (MSSQL, Oracle, etc.) simply store the key values once and then have a RID array of values for all the rows that match the index keys.   In a shameless plug: I am giving a session at ISUG TECH (ISUG-TECH Conference, 2015 - March 29 - April 2) on data structures that covers such basics before it gets into the real fun of compression and encryption.   Needless to say, the concept of an "index row" isn't quite the same as a data row - hence we work at the page level.
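
Here is a back-of-the-napkin version of that layout (my own simplification, not anyone's actual on-page format): the key value appears once, followed by a growing array of row IDs, so there simply isn't one "index row" per data row to lock.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Simplified sketch of a non-unique index entry: one key value, many RIDs.
    public class NonUniqueIndex {
        record RID(int pageNo, int rowNo) {}                       // row ID = (page, row)

        private final NavigableMap<String, List<RID>> entries = new TreeMap<>();

        void insert(String key, RID rid) {
            // the key is stored once; additional rows just extend its RID array
            entries.computeIfAbsent(key, k -> new ArrayList<>()).add(rid);
        }

        List<RID> lookup(String key) {
            return entries.getOrDefault(key, List.of());
        }

        public static void main(String[] args) {
            NonUniqueIndex tradeDateIdx = new NonUniqueIndex();
            tradeDateIdx.insert("2015-03-30", new RID(1001, 3));   // three trades,
            tradeDateIdx.insert("2015-03-30", new RID(1001, 7));   // same TradeDate,
            tradeDateIdx.insert("2015-03-30", new RID(1002, 1));   // one index key
            System.out.println(tradeDateIdx.lookup("2015-03-30"));
        }
    }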

Fast forward a few decades.   Transaction volumes have grown by orders of magnitude - blame decimalization, web-ification, flash mobs on your website, or whatever man-made disaster...errr, opportunity...is afflicting your business volume.   Luckily, Intel, IBM and other hardware vendors have been boosting CPU speeds along a similar curve.   ASE is now capable of executing 2.5 million transactions per minute using an internal version of a standard industry benchmark running on commodity hardware.   And it is only going to get better and faster with sp02.

However, that neat little trick of non-transactional latches......wellllll....is now starting to give us the same problem we had with index locks.   We are now seeing a lot of latch contention between writers and readers, and between writers themselves.

Enter the ASE Engineering Super Hero team <cue sound track>.    No, seriously, ASE engineering came up with a really neat solution for ASE 16sp02 called "Latch-Free B-Trees" (my boss is betting that I will get tongue-tied when discussing this and the Latchless Buffer Manager at ISUG TECH....but that topic is a good one for another blog).   In the Latch-Free B-Tree, readers don't use latches.  Neither do writers.  Instead, writers use an in-memory map (you just knew I was going to say "in-memory" something, didn't you??) and a delta/merge process to avoid latches as well.   And since they are writing to a delta area, there is no need for the readers to hold a latch either.   Keep in mind that we are not just avoiding latch contention with this...we are avoiding the entire process of getting the latch at all.  Zero.  Nada.  Zilch.   If you remember from locking, we have lock hashtables, spinlocks and other nasty things that add to lock overhead - and while latches aren't nearly as bad....we still have to keep a list of them around somewhere, so there is some overhead that now...just....doesn't exist any more.   Gone.  Poof.
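
This blog doesn't publish ASE's internals, so take the following as nothing more than an illustrative sketch of the general idea using a well-known technique (a Bw-tree-style delta chain with compare-and-swap): writers publish immutable delta records into an in-memory structure instead of latching the page, readers walk delta-plus-base with no latch at all, and a merge step periodically folds the deltas back into a new base page.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative only - NOT ASE's actual implementation. Writers prepend immutable
    // deltas with a CAS (no latch); readers take one atomic snapshot (no latch);
    // merge() folds the deltas into a fresh base page.
    public class LatchFreeLeaf {
        record Delta(long key, Delta next) {}                 // immutable delta record
        record Node(long[] baseKeys, Delta deltas) {}         // base page + delta chain

        private final AtomicReference<Node> node =
                new AtomicReference<>(new Node(new long[0], null));

        // Writer path: retry the compare-and-swap on conflict instead of latching.
        void insert(long key) {
            while (true) {
                Node cur  = node.get();
                Node next = new Node(cur.baseKeys(), new Delta(key, cur.deltas()));
                if (node.compareAndSet(cur, next)) return;
            }
        }

        // Reader path: one atomic read yields a consistent view - no latch, ever.
        boolean contains(long key) {
            Node cur = node.get();
            for (Delta d = cur.deltas(); d != null; d = d.next())
                if (d.key() == key) return true;
            return Arrays.binarySearch(cur.baseKeys(), key) >= 0;
        }

        // Merge path: fold the delta chain into a new sorted base page, again via CAS.
        void merge() {
            while (true) {
                Node cur = node.get();
                List<Long> keys = new ArrayList<>();
                for (long k : cur.baseKeys()) keys.add(k);
                for (Delta d = cur.deltas(); d != null; d = d.next()) keys.add(d.key());
                long[] merged = keys.stream().mapToLong(Long::longValue).sorted().toArray();
                if (node.compareAndSet(cur, new Node(merged, null))) return;
            }
        }
    }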

To say that this will improve transaction processing speeds would be like saying stepping on the gas pedal makes the car go faster.....     assuming you have it in gear, of course....and not in park.

Come to Atlanta if you want to hear more about ASE 16sp02 or keep your eyes peeled for webcast announcements to come.
