Authors = Hermann Gahm / Thorsten Schneider / Dr. Eric Westenberger
I would like to start with a disclaimer – as an SAP Press author myself I sometimes get free copies of SAP Press books on ABAP programming and in return I have to write a book review. This is because virtually nobody ever writes reviews on SAP textbooks, whereas action novels or racy romances or even “The Uncanny X-Men meet Captain Picard” (sadly, an actual graphic novel (comic book)) get half a million reviews. I personally have found it very difficult to work love scenes or violent “shoot them up” scenes involving machine guns or swordfights into my ABAP textbooks, and then when I do, the editor takes them out again saying that is not what the book is supposed to be about – and then complains about the lack of reviews! There is just no pleasing some people.
This is an enormous disappointment, as I was hoping to be the first ever author of an SAP-based textbook to win the annual “Literary Reviews Bad Something in Fiction” award. It is not enough to have your textbook classed as “fiction”; you also have to win an award like this.
Water Stains in a Bathtub
In any event, this time I am reviewing a book about how best to program in ABAP if you have a HANA database. At the time of writing, HANA turned seven years old yesterday (1 December 2017). Yesterday was also the first day of winter here in Germany, so right on cue the snow came rolling out of the sky in Heidelberg. In the book they go on about how HANA was five years old.
This is the problem with writing about cutting edge technology. It takes about a year to write the book, and by the time you are finished, the world has moved on, as Stephen King would say. I shudder to think what Stephen King thought about the recent adaptation of his “magnum opus” called “The Dark Tower” which condensed seven huge books into 90 minutes of Hollywood nonsense.
So the second edition of my book covered ABAP 7.50, and came out on the first day of TECHED 2016, which was also the day ABAP 7.51 was released. So it is virtually impossible to stay up to date. With HANA the problem is ten times worse, because ABAP 7.50 to 7.51 is not really that huge a leap, but HANA is young enough that it goes through radical changes every year, worse than a teenager going through puberty.
As a result, books like this have to cover workarounds for problems that are most likely solved by the time the book comes out. One example is the way you once had to manually transport HANA artefacts out of sequence with the ABAP code that called them; nowadays you can use ABAP managed database procedures instead.
This is not a criticism – there is no way around this problem.
One plus this book gets is in regard to the lack of “padding” it contains. You will find at the start of many SAP Press books chapters which start with lines like “Today’s business world is extremely dynamic and subject to constant change, with companies continuously under great pressure to innovate” – possibly there could be a contest for the most bland buzzword filled introduction in a textbook.
If you have ever seen shows such as “Five Go Mad in Dorset” you will be familiar with one of the Famous Five getting lost and thus accidentally overhearing the bad guys talking about their evil plans in the form of “blah blah blah Uncle Quentin blah blah blah Third World War”.
In the same way a lot of SAP textbooks have a padding chapter at the start, sometimes two or three, which all take the form “Today’s business … blah blah blah … paradigm shift … blah blah blah … digital transformation … blah blah blah … one million tons of bananas …” and so on, and so forth. Reading such chapters is like having your teeth pulled out one by one with rusty pliers. It is a waste of everyone’s time, a waste of ink (which is the most expensive liquid in the world, by the way) and a waste of paper, with some poor tree dying to enable such nonsense. In addition, the inventor of the printing press is turning in their grave.
They might as well just repeat the words “paradigm shift” again and again however many times they need to fill up the pages they need to pad out the book. You could be cleverer and have the first letter of every word spell out “paradigm shift” as follows:-
“Perhaps all really agile developers innovate given many holistic ideas for their perusal” where “perusal” is the “P” at the start of the next use of “paradigm shift”. It would not make any less sense than what currently gets written.
In any event, in this particular book, as all the authors work for SAP there is in fact a little bit of such padding but luckily it is kept to a bare minimum.
In actual fact the structure of the book is rock solid. It is highly technical but goes from the really abstract to the really practical starting with the theory and ending with do THIS and do THAT in this precise way in order to get the benefits of HANA.
This is important because – as is demonstrated – just changing your database to HANA is no magic bullet by a long shot. Badly written code starts performing worse, and you often need to have a serious re-write of code if you want to join the so called “mile high ten thousand club” which is where SAP parade customers who have got a ten thousand fold speed up on certain reports or batch jobs. SAP often say “no more batch jobs” but that is just silly. You would still want to schedule such things out of business hours, even if they now only take ten minutes instead of eight hours. Mind you some companies run 24/7 so there is no out of business hours but they would STILL schedule recurring jobs rather than running them manually every day.
Whenever I read the technical details of how the HANA database works I realise what a work of genius it really is, and why the competition is so worried, even if they pretend they are not, and just hurl insults whilst behind the scenes they are frantically trying to replicate the same sort of product.
My memory is terrible so about ten minutes after I read such things I forget, but I recall the general idea of HANA choosing one of many possible strategies to get the data from the database in the fastest possible way. One day – maybe even now – it will learn from past behaviour and get smarter with every query.
In addition, the book explains how column-based data retrieval works; it is somewhat like the star schema we are used to in BW, with actual values represented by integers which can then be sorted and retrieved using a binary search. This is a gross oversimplification, but the book describes the actual situation well.
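If you want a feel for what that dictionary encoding idea means, here is a toy sketch in plain ABAP. To be crystal clear: this is my own illustration of the general concept, nothing whatsoever to do with how HANA is actually implemented internally.

```abap
REPORT zdemo_dictionary_encoding.
" Toy illustration of dictionary encoding: each distinct value is
" stored once in a sorted dictionary, and the column itself stores
" only small integers pointing into that dictionary.
DATA: lt_column  TYPE TABLE OF string,   " the raw column values
      lt_dict    TYPE TABLE OF string,   " distinct values, sorted
      lt_encoded TYPE TABLE OF i.        " the encoded column

lt_column = VALUE #( ( `LH` ) ( `AA` ) ( `LH` ) ( `UA` ) ( `AA` ) ).

" Build the dictionary: sorted list of distinct values
lt_dict = lt_column.
SORT lt_dict.
DELETE ADJACENT DUPLICATES FROM lt_dict.

" Encode: replace each value by its position in the dictionary
LOOP AT lt_column INTO DATA(lv_value).
  READ TABLE lt_dict TRANSPORTING NO FIELDS
       WITH KEY table_line = lv_value BINARY SEARCH.
  APPEND sy-tabix TO lt_encoded.
ENDLOOP.

" A query for 'LH' now becomes one binary search in the small
" dictionary plus cheap integer comparisons over lt_encoded.
```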
It then moves on to the mechanics of programming to maximise the benefits, i.e. the “how to”. Then you get an actual worked example of how to optimise an existing application; the world has been crying out for such examples. I would say the example is not a PR puff piece but instead shoves in your face, like a custard pie, the fact that you need to work and work to change your application code if you want HANA to do its job properly.
The last section is all about fancy functionality you might want to add – the most important of which is how to enable “Google like” searches, though of course they cannot mention Google by name.
Two Watercolours on a Piece of Paper
I now come to my random musings on the actual content of the book. It would be a Bad Thing to repeat much of the content of the book, as then you would not need to buy it, in much the same way that often movie/film reviews tell you the entire plot so there is no point going to see the film. Mind you, often trailers for films do the exact same thing, often containing every single joke from the film.
Native HANA Development (Doctor Livingstone exploring Africa)
To be fair, this book focuses on doing all development in the SAP system in ABAP (hence the name) but for completeness mentions you can also program directly in the HANA framework using whatever development environment SAP favours on any given day.
As it turns out, the picture being painted of native HANA development is not all that rosy. As an example, there is no LUW concept or locking concept in native HANA – so doing transactions in native SQL (SQLSCRIPT) should only be programmed by Captain Jock McRisky, the risk taking Scotsman, winner of the all Scotland “Risk Taking” contest 2017, who likes to walk the tightrope over the Grand Canyon with dynamite strapped to him whilst juggling hand grenades.
The book claims that if you write a WHERE clause like “WHERE carrid = 'LH abend'” then this will actually retrieve the records for 'LH', due to the translation from ABAP to the underlying three-character database field. I tested this, because I had never tried such a thing, in a test program which said WHERE werks = '1234 XX'.
The latter value might seem strange to you; you know that WERKS is only four characters long, but maybe the user was able to enter a much longer string on the selection screen, due to a lack of validation, for example, or maybe the SELECT statement is built dynamically or some such.
In a test program in ABAP I hard-coded the invalid value '1234 XX', when in table T001W there is no such plant, just 1234, and then I did a SELECT statement using the bogus value.
In actual fact the correct data was pulled back for plant 1234, although the primary key field of the table is only four characters long. I had never even considered doing such a thing before! I am not sure this is of any use one way or the other, as I rarely use literals in SELECT statements.
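My test program looked something like the following. I should stress this is a reconstruction from memory, and the behaviour with an overlong literal may well vary between releases, so try it on your own system before believing either me or the book:

```abap
REPORT zdemo_overlong_literal.
" T001W-WERKS is CHAR 4, yet the literal below is seven characters.
" On the system I tried, this still found plant 1234, because of the
" conversion between the ABAP literal and the database field.
DATA lt_plants TYPE TABLE OF t001w.

SELECT * FROM t001w
  INTO TABLE lt_plants
  WHERE werks = '1234 XX'.   " literal longer than the field

IF sy-subrc = 0.
  " Rows for plant 1234 came back, despite the bogus literal
ENDIF.
```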
As you no doubt know, you can do all sorts of native development inside the HANA framework, but this book provides some warnings/limitations of such development – for example there are no data clusters allowed in native HANA (as they are normally controlled by the ABAP kernel) and no authority check in native SQL (usually controlled inside HANA by different database users).
CDS views (which are usually defined using the ABAP in Eclipse option) do have a mechanism for authority checks, using the DCL (Data Control Language) concept.
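For the curious, a DCL access control looks something like this. The role and view names here are invented by me for illustration; only the general shape (a role mapped onto a PFCG authorization object) comes from the standard CDS access control mechanism:

```abap
// CDS DCL source (created in ABAP in Eclipse alongside the view)
// ZDEMO_SALES_VIEW and ZDEMO_SALES_ROLE are hypothetical names.
@EndUserText.label: 'Access control for sales view'
@MappingRole: true
define role zdemo_sales_role {
  grant select on zdemo_sales_view
    // Only rows whose sales organisation the user is authorised
    // for (authorization object V_VBAK_VKO, activity 03 = display)
    where ( vkorg ) = aspect pfcg_auth( v_vbak_vko, vkorg, actvt = '03' );
}
```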
Also, in native SQL there is apparently no buffering, as that is an ABAP-specific thing, just like automatic client handling. I do not recall exactly which buffer we are talking about here (presumably the ABAP table buffer, which native SQL bypasses), as in a normal ABAP system when you do a SELECT statement there are loads of different buffers which could get accessed, including “Buffer the Magic Dragon”.
There was an excellent explanation of the difference between NULL and INITIAL, which in all these years I have never quite been able to get my head around.
Since it was over thirty seconds ago that I read the book, the difference has vanished from my head once again, but I did note down the example given: if, during a query inside the HANA system using an outer join, no record is found, you get a question mark in HANA Studio, because the database cannot tell the difference between no value at all (NULL) and an INITIAL value.
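To make that concrete, here is a little outer join sketch of my own (the field pairing is mine, so treat it as illustrative rather than gospel):

```abap
REPORT zdemo_null_vs_initial.
" Orders, with their delivery items if any exist.
SELECT k~vbeln, p~posnr
  FROM vbak AS k
  LEFT OUTER JOIN lips AS p ON p~vgbel = k~vbeln
  INTO TABLE @DATA(lt_result).

" For orders with no delivery, the database result contains NULL for
" p~posnr; HANA Studio would show '?' there. ABAP has no NULL, so by
" the time the row lands in lt_result the field has silently become
" the INITIAL value '000000' - the distinction is lost.
```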
No More Friends
Early in the book it talks about how data is stored and when you need a row based table and when you need a column based table. At that point the jury seems to be out and no recommendation is given. Later on (page 127) it then says you virtually always need a column based store and then points ahead to much further forward in the book for further clarification.
At that point (near the end of the book) the discussion veers away from row versus column based storage and starts talking about indexes in column-based tables. You get told that when the table has more than half a million rows, and the field you are looking for in the query is not in the primary key but is very selective (e.g. VGBEL), then you need an index; otherwise the performance will be a hundred times worse.
During a database conversion to HANA all the secondary indexes get put on the exclusion list of HDB in the index settings, so they are inactive until you make an effort to switch them on again. This means your Z indexes, and indeed the standard SAP secondary indexes are still there if you do indeed need them back again. I would be interested to hear real people’s experiences in this area. I had presumed that indexes were a thing of the past in a HANA environment.
One thing of which there is no doubt: when updating only certain columns of a table in HANA, doing so as a mass update of records rather than one at a time becomes even more important than on a non-HANA database. That being said, updating lots of records one at a time is a horrible thing to do on any database.
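The contrast looks something like this. The table and fields are just whatever sprang to my mind; note also that the FROM TABLE variant writes whole rows, so if you truly want to touch only certain columns you would fill the work areas accordingly:

```abap
REPORT zdemo_mass_update.
DATA lt_deliveries TYPE TABLE OF likp.
" ... assume lt_deliveries has been filled with changed rows ...

" Horrible on any database, doubly so on HANA:
" one database round trip per row.
LOOP AT lt_deliveries INTO DATA(ls_delivery).
  UPDATE likp SET lifex = ls_delivery-lifex
         WHERE vbeln = ls_delivery-vbeln.
ENDLOOP.

" Far better: one mass UPDATE, a single round trip.
UPDATE likp FROM TABLE lt_deliveries.
```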
No More Sherpas
I am now bouncing around at random, just regurgitating random notes I had made.
In a view in SE11 you can take the option EXTRAS -> CREATE_STATEMENT to see the database CREATE VIEW statement. In all these years of playing with SAP I had never done that; I suppose because up till now the database was a “black box” and ABAP developers did not have to care how it worked.
Mention was made of the strange CURSOR command, where you can declare an Open SQL statement and then trigger the actual database read later with FETCH NEXT CURSOR. I have never used that and find the idea very strange, and not OO in the least.
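For anyone who, like me, had never seen this in the flesh, the pattern goes roughly like this (my own sketch, reading a hypothetical date range in packages):

```abap
REPORT zdemo_cursor.
DATA: lv_cursor TYPE cursor,
      lt_chunk  TYPE TABLE OF vbak,
      lv_date   TYPE vbak-erdat VALUE '20170101'.

" Declare the query now; nothing is read yet
OPEN CURSOR @lv_cursor FOR
  SELECT * FROM vbak WHERE erdat >= @lv_date.

DO.
  " Each FETCH pulls the next package of rows from the database
  FETCH NEXT CURSOR @lv_cursor
        INTO TABLE @lt_chunk PACKAGE SIZE 1000.
  IF sy-subrc <> 0.
    EXIT.   " no more data
  ENDIF.
  " ... process 1,000 rows at a time ...
ENDDO.

CLOSE CURSOR @lv_cursor.
```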
FOR ALL ENTRIES has an implicit DISTINCT and so does not retrieve all the rows, only one row per unique combination of values in the columns asked for. One way to avoid missing rows is to always ask for all the fields of the primary key, whether you want them or not. I am sure this is documented somewhere, but it came as a surprise to me.
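In code, the trap and its cure look something like this (table choice is mine, purely for illustration):

```abap
REPORT zdemo_fae_distinct.
TYPES: BEGIN OF ty_order,
         vbeln TYPE vbak-vbeln,
       END OF ty_order.
DATA lt_orders TYPE TABLE OF ty_order.
" ... assume lt_orders is filled AND not empty
" (an empty FOR ALL ENTRIES table selects everything!) ...

" Risky: implicit DISTINCT merges items with the same material,
" so rows silently go missing from the result.
SELECT matnr
  FROM vbap
  FOR ALL ENTRIES IN @lt_orders
  WHERE vbeln = @lt_orders-vbeln
  INTO TABLE @DATA(lt_bad).

" Safe: the full primary key (VBELN, POSNR) makes every row unique,
" so nothing can be merged away.
SELECT vbeln, posnr, matnr
  FROM vbap
  FOR ALL ENTRIES IN @lt_orders
  WHERE vbeln = @lt_orders-vbeln
  INTO TABLE @DATA(lt_good).
```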
Earlier I mentioned restrictions when programming using the native HANA system. Naturally there is another side to this coin – there are extra things inside HANA you do not find inside ABAP. For example ROLLUP and CUBE. The HANA database provides support for such things – what might they be?
No More Supplemental Oxygen
Earlier I talked about how the book goes into great technical detail about how the HANA database works under the covers, using all sorts of nifty tricks to get the data back faster, above and beyond the very fact that everything is in memory. I forgot 99% of such tricks, but made a note of one: the “Fast Data Access” protocol. When doing a query using FOR ALL ENTRIES (which boils down to a list of selections), that list of values becomes a temporary database table itself, and then a JOIN is done on the real database tables. Obviously this is much faster; it would be on a standard database too, if such a thing were possible.
To say this again in a step-by-step manner: when us yellow shirts (my name for programmers), or indeed standard SAP programmers, want to get information from the database and cannot do a database join directly, we proceed as follows.
We get a “target list” – say a unique list of sales orders.
Then a FOR ALL ENTRIES to get the rest of the data.
This currently works by building a whacking great SQL statement which is passed to the database. Far better than reading data in a loop, but the SQL trace still implies it is bad, just by the sheer number of lines in the trace. Inside HANA, though, there is just an INNER JOIN, with one of the tables being temporary, so naturally the SQL statement is a lot simpler.
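The two-step pattern described above, in code (tables again chosen by me for illustration):

```abap
REPORT zdemo_target_list.
DATA lv_date TYPE vbak-erdat VALUE '20170101'.

" Step 1: the "target list" - a unique list of sales orders
SELECT vbeln
  FROM vbak
  WHERE erdat >= @lv_date
  INTO TABLE @DATA(lt_orders).

" Step 2: FOR ALL ENTRIES to get the rest of the data
IF lt_orders IS NOT INITIAL.   " empty FAE table would read EVERYTHING
  SELECT vbeln, posnr, matnr
    FROM vbap
    FOR ALL ENTRIES IN @lt_orders
    WHERE vbeln = @lt_orders-vbeln
    INTO TABLE @DATA(lt_items).
ENDIF.
" With the Fast Data Access protocol, HANA can turn lt_orders into a
" temporary table and execute step 2 as a single simple join.
```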
Your Feet have Fallen Off
It is a brave new world pushing queries down to the database as you have to unlearn everything you have learned before.
As an example, when doing the database query inside SQLSCRIPT, instead of one great big join on all the tables (which is the best way to do this from ABAP) the idea is to do lots of smaller queries; the HANA database optimizer then transforms these into a single database query if it sees fit, or sends the queries to the database in parallel if they are unrelated. This requires a level of trust in the database far greater than in the past, a reversal of the “classic” way of doing things.
To put this another way, you are advised to drastically cut back on inner joins in SQLSCRIPT, because the database knows better than you and makes decisions on your behalf. There is a steep learning curve here, and this for a group of programmers who have yet to embrace OO programming and often do not know there is such a thing as a HASHED table in ABAP.
If someone could come up with a corker of an example here i.e. break down a complex ABAP INNER JOIN on lots of tables, and replace this with many individual SELECTS inside SQLSCRIPT which then combine to return the same result, but using parallel processing for some of this, that would be fantastic, and blow some people’s minds.
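I cannot offer the corker, but here is at least the skeleton of the idea: an AMDP where the SQLSCRIPT body is written as several small queries into table variables, leaving the optimizer free to merge or parallelise them. The class, method and field choices are entirely my own invention, so treat this as a sketch of the shape rather than a tested solution:

```abap
CLASS zcl_demo_amdp DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    TYPES: BEGIN OF ty_result,
             vbeln TYPE vbak-vbeln,
             erdat TYPE vbak-erdat,
             posnr TYPE vbap-posnr,
             matnr TYPE vbap-matnr,
           END OF ty_result,
           tt_result TYPE STANDARD TABLE OF ty_result WITH EMPTY KEY.
    CLASS-METHODS get_order_data
      IMPORTING VALUE(iv_date)   TYPE vbak-erdat
      EXPORTING VALUE(et_result) TYPE tt_result.
ENDCLASS.

CLASS zcl_demo_amdp IMPLEMENTATION.
  METHOD get_order_data BY DATABASE PROCEDURE FOR HDB
                        LANGUAGE SQLSCRIPT
                        OPTIONS READ-ONLY
                        USING vbak vbap.
    -- Two small, independent queries into table variables; the
    -- optimizer may run them in parallel or fold them together
    lt_headers = SELECT vbeln, erdat FROM vbak
                   WHERE erdat >= :iv_date;
    lt_items   = SELECT vbeln, posnr, matnr FROM vbap;

    -- Combine the intermediate results at the end
    et_result  = SELECT h.vbeln, h.erdat, i.posnr, i.matnr
                   FROM :lt_headers AS h
                   INNER JOIN :lt_items AS i ON i.vbeln = h.vbeln;
  ENDMETHOD.
ENDCLASS.
```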
One Ring to Rule (Framework) Them All, One Ring to Bind Them
No book on HANA would be complete without a mention of the HANA rules framework. SAP brings out a new rules framework about every fifteen minutes on average so the question arises – how many rules frameworks does one need? It is like having five hundred pairs of shoes – you can only wear one pair at a time.
In the BRF+ world a lot of work has been done on enabling a drag-and-drop interface, because without such a thing filling in decision tables or decision trees is just plain agony. At least with a decision table you can upload from Excel. BRF+ has not got there yet, because the drag and drop was developed using Silverlight, which was then dropped like a hot potato for political reasons.
In the HANA rules framework, by the looks of things, it is just the same: just like in BRF+ you fill cells using a context menu. So no improvement there.
The instructions on how to set up rules in HANA in the book were quite detailed – because they needed to be in order to make things clear. I personally find the options are not obvious at first glance as to what they mean e.g. “NOTLIKE_*”. Generally if you have to explain several times what something means before it sinks into the mind of the reader then it cannot really be described as intuitive.
There is a warning that you must have the required development skills and understand the technological and semantic aspects of the data structures and types, otherwise you cannot properly design the decision tables within the HANA rules framework. This seems somewhat at odds with the BRF+ idea of getting business people to design the decision tables.
Being Attacked by a Yeti
Right at the end of the book is a whole bunch of suggestions for improving performance on any database, but naturally with a focus on HANA. As an example, I use INTO CORRESPONDING FIELDS OF all the time in database queries. This book seems to suggest it should be used sparingly, and only for large result sets, as the effort involved in comparing field names can be relatively high compared to an otherwise very quick SELECT statement. So presumably if I want five fields out of LIKP for a specific delivery then I should create a structure for this, and make sure the fields in the SELECT statement are in the same order as in the structure. A bit of extra effort for the developer, and the payoff is better performance.
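My understanding of the recommendation, as a sketch (the five LIKP fields are whichever five I happened to think of):

```abap
REPORT zdemo_explicit_structure.
" Explicit target structure: fields listed in the SELECT in the same
" order as in the structure, so no name comparison is needed at
" runtime (unlike INTO CORRESPONDING FIELDS OF).
TYPES: BEGIN OF ty_delivery,
         vbeln TYPE likp-vbeln,
         erdat TYPE likp-erdat,
         lfart TYPE likp-lfart,
         kunnr TYPE likp-kunnr,
         route TYPE likp-route,
       END OF ty_delivery.
DATA: ls_delivery TYPE ty_delivery,
      lv_delivery TYPE likp-vbeln.

SELECT SINGLE vbeln, erdat, lfart, kunnr, route
  FROM likp
  WHERE vbeln = @lv_delivery
  INTO @ls_delivery.
```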
What about the book? Is it any good?
Yes. It does not matter how fast the technology is changing – this will still be useful for some time to come. This will give you a good grounding into precisely how HANA works. It is not enough just to say “this is in-memory so it is really fast, so it is really good”. If you want to sell HANA you need to dig a bit deeper, and you will be pleasantly surprised by how good it really is on a technical level.
Then the practical example is fantastic – you learn that you actually need to work hard to get the benefits of HANA, a lesson everyone should have slapped around their face like a wet fish. Luckily you are told what to do, in great detail.
This has inspired me to start thinking about writing the next version of my own book. The proposed new ABAP programming model, the so-called RAP model (or sometimes “hip hop” model), is radically different from what most developers are comfortable with. You have things like the Web IDE and abapGit (now endorsed by the CTO of SAP) coming at you like tennis balls out of a tennis ball machine, and yet some people still use “classical” exceptions on method calls because (a) they think a method is like a function module and (b) they do not know there is any other form of exception.
If I can make it different enough this time I might even be able to change the title from “ABAP to the Future” to something racier like “Fifty Shades of ABAP”. I wonder if SAP Press would let me. I hope so, then I might win that award I mentioned at the start of the blog.