During SAP Virtualization and Cloud Week 2011, prompted particularly by the session “TRND20 – SAP Virtual Landscape Management and Virtual Appliance Factory (VLM / VAF)”, Chris Kernaghan and I had a lengthy discussion via Twitter about the pros and cons of automating the building of SAP systems and landscapes. I’ve captured the Twitter debate here, for posterity :-) The correct thing to do after that would have been to continue the discussion in the bar over a beer or three. Sadly, while Chris was in Palo Alto for the conference I was watching from the comfort of my own lounge in the UK, so a discussion in person was not possible.

Chris has since written up his take on the issues: Automation or Abdication – #DevOps Series. If you haven’t already, please go and read that first. It is an excellent summary and I agree with everything Chris has said. He is right to be worried about where these new technologies are leading us. “Here be Dragons”, as they used to say on the unexplored sections of old maps. However, there is at least one area of technology where similar techniques have been used successfully. More than that, they have been used to allow us to do things that would be impractical or impossible otherwise. That is the field of programming (or development, if you prefer).

The CPUs in all computers only understand 1s and 0s. In the good old days computer programs had to be written in those same 1s and 0s. Before long people realised that writing programs that way was hard work and designed programming languages that were more understandable to humans. That way we got Fortran, Algol, Pascal, BASIC, C, C++, Java, ABAP and so many more. Writing large software systems in these languages is still a difficult task, but it is at least possible. Can you imagine writing something like R/3 in binary or assembler? Can you imagine having to maintain it afterwards? Assembler is often called a write-only language, and for good reason!

These programming languages provide a level of abstraction from the underlying hardware (or sometimes two levels – both ABAP and Java run in a virtual machine which is itself a program written in another programming language). This level of abstraction hides many details of the operation of the CPU itself. Not having to worry about those details makes the code simpler. Sometimes, though, programs in high-level languages can have lower performance than you’d expect and at that point you need somebody who understands the low level details to figure out what’s going on. Most programmers don’t need to know those details, but some do.

The layers of abstraction don’t stop there, though. Most programming languages come with libraries of standard functions. Things that many or most programmers would need for most tasks, so they are written once to avoid everyone having to build them themselves. Then there are libraries that provide specialist functions that are hard to get right, or are hard to make efficient. Library writers can put a lot of work into these libraries so that most programmers don’t have to think about them. UI toolkits are a good example of libraries that need specialist knowledge to write. Many programmers, me included, are not UI experts. UI design is hard. Really hard. But using a well designed UI toolkit all programmers can produce nice looking, easy to use applications with next to no UI knowledge.

These libraries are absolutely not a silver bullet. Sometimes they don’t work the way you’d expect. Either the functionality is wrong, or the performance is bad, and you need somebody with a deep understanding of the various libraries in use to figure out what’s going wrong. Even if everything is working as it should, each of these layers of abstraction adds extra overhead. Today we tend to have enough CPU power at our disposal to not worry about the inefficiencies of the abstraction layers, but they are there and can come back to bite us. Java’s garbage collection is a good example of this.

In summary, then, in the world of the developer there are many layers of abstraction protecting the programmer from needing knowledge of the low level hardware on which their code runs. These layers of abstraction need to be designed carefully – compiler writers and library designers are a special breed – or things can go badly wrong, or slowly, or both. When done well, though, they enable programmers to achieve many things that would be impossible without them. Specialists are still needed, but not all programmers need to be specialists in everything.

This is the model that I had in mind as I looked excitedly at the presentation about the Virtual Landscape Manager and the Virtual Appliance Factory. I’ve done SAP system installs, debugged them, tuned them. But I’m actually more excited by higher level issues these days. If somebody more expert than I am is prepared to build template systems I can just deploy, I’m happy to let them. I just don’t find sapinst that exciting any more :-) I can concentrate on mapping business processes to system components, safe in the knowledge that those components have been configured carefully and properly. Sure there are dangers here, just as in the developer context. The templates will be designed with particular usages in mind and I need to make sure I use them appropriately. If not, I may hit problems.

What does this mean for the SAP NetWeaver/Basis role? This I haven’t really thought through yet. We are just at the start of this journey – we’re designing Fortran, not C++ or Java. We’re a long way from having templates that customers can deploy without help and advice, especially for production environments. But we seem to be moving that way and I can see huge benefits from it. The roles of technical people inside SAP, in partners and at customers will change as the technology is developed. We shouldn’t be scared by this, but clearly the most important thing is the robustness and reliability of the systems we deploy to support our businesses and we shouldn’t do anything to jeopardise that. There’s a careful path to tread, but tread it we should. I’m excited by it, even if I think there are many years of work left…


9 Comments


  1. Bala Prabahar
    Nice blog. I enjoyed reading it.
    I’m neither concerned nor excited but confused. Confused because SAP is trying to innovate in several different areas, some of which appear to be related. For example, on the one hand SAP is innovating on the database front with an in-memory DB. On the other hand, they’re porting SAP to Sybase; and on yet another hand, they’re discussing Virtual Appliance Factories. Am I the only one who thinks this is crazy, or is there anyone out there who thinks likewise? Why is this crazy? Each one of these ideas will take a certain amount of time to materialize. If an in-memory DB is going to be the target state (SAP’s claim), why would someone migrate to Sybase from another DB before migrating to HANA? Or is Sybase primarily meant for new customers over the next few years?
    Now they’re discussing VAF. By the time they release the first version of VAF, wouldn’t SAP be very close to releasing a “HANA dependent SAP”? When that happens, wouldn’t VAF Ver 1 (assuming Ver 1 is for disk-based RDBMSs) become obsolete soon after it is released?
    As I said in the beginning, I’m confused. I know one thing: there are going to be plenty of opportunities for everyone who would like to work.
    Thanks,
    Bala
    1. Steve Rumsby Post author
      Thanks for the comment!

      You are right, SAP is working on several apparently competing things at once. That’s as it should be, though. There are, I’m sure, many other things inside their development labs that may end up pulling in a completely different direction. It will be a while before the interactions of all these things are fully worked out. The trick is how to decide when a new technology has matured enough to be usable for enterprise systems. Apart from a few brave early adopters I think all of the above technologies are years away from significant enterprise use. We need to be thinking about them now, though, and how they will affect us and our systems in the years to come.

  2. Tom Cenens
    Hello Steve

    I’m rather excited about what the future has to offer in terms of new technology and possibilities. I have had the opportunity to work with VMware and use VM cloning and so on, and I have to say I like it a lot.

    It does mean that my role as a technical SAP consultant and my work will shift, but that is to be expected; it doesn’t frighten me. If you look at how much technology has evolved over the years, we are bound to hit big changes sooner or later.

    In-memory will initially be available for the Business Suite by placing important data (part of the database) in memory while the other data stays in the regular database, so you will have both in parallel. There are already seven combinations available for in-memory use, if I remember correctly.

    Don’t forget there is also a persistence layer to safeguard data in case your memory fails; you don’t want to lose all your data.

    I thought it would take longer before I saw Cloud Computing in use, but I can see now that it will be here much faster than I first anticipated.

    Kind regards

    Tom

    1. Tom Cenens
      Hello Steve, Bala

      Just wanted to point out:
      @Steve nice blog
      @Bala the in-memory part of my comment is meant for you 😉

      Kind regards

      Tom

      1. Steve Rumsby Post author
        Thanks Tom. I’m pretty excited by the possibilities too. I hope it all works out the way I see it, but we need doomsayers like Chris (joke :-) to keep us honest. There are real concerns here and we can’t just ignore them and hope they’ll go away. I think only experience will tell whether Chris or I am right. I suspect reality will turn out to be somewhere in between. The fun is in the process of finding it!
  3. Michelle Crapo

    It will be a great adventure to see how all of this evolves and changes.  I honestly don’t think we can ever do without a BASIS person with strong skills.  Those skills just may change!  Michelle

  4. Martin English
    Hi Steve,
      I am a bit ‘old-skool’ – I’ve written Assembler code (for production business systems), I have sat at a punch card machine and corrected syntax errors (COBOL, Fortran, Assembler…), and (this will really show my age…) I’ve stood in front of a mainframe toggling the single step switch, one instruction at a time, while a hardware engineer noted down the register values.

    What it all comes down to is that, when I’m feeling optimistic, I’m a firm believer in Murphy’s Law. Of course, the original Murphy ( http://www.abc.net.au/science/k2/moments/gmis9906.htm ) was a hardware guy, not a software guy, so HE was probably being optimistic. If I’m responsible for someone’s data I like to know, at least conceptually, how it is stored, so I can decide whether the appropriate risk v cost assessments have been made, and whether there are more efficient or reliable ways of storing, processing and accessing that data…

    Even something as basic as computer data being digital in an analog world requires allowances. For example, error checking and redundancy mean someone had to imagine getting 1 bit wrong in a trillion. That means 1 error per 125 billion bytes, or 1 per 125 Gigabytes (I’m using US nomenclature, sorry, Steve). Of course, the problem of error checking / correction of data (over networks or on disk or tape) was solved years ago, and the resolution is implemented in a standard manner. But the associated overhead also explains why a Gigabit network link will never deliver a full gigabit of usable data per second. And some people CAN NOT understand this.
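    To make the error-detection point concrete, here is a minimal sketch (mine, not Martin’s) using Python’s standard zlib.crc32. The checksum is the redundancy: flip a single bit anywhere in the payload and the mismatch exposes the corruption, and those extra check bits are part of the overhead that eats into the usable capacity of that gigabit link.

```python
import zlib

# Checksum a payload, then corrupt a single bit, as a disk or network
# glitch might. CRC32 detects all single-bit errors, so the receiver
# knows to re-read or re-transmit. (Illustrative sketch only.)
payload = b"Electronic Data Processing" * 1000

checksum = zlib.crc32(payload)

corrupted = bytearray(payload)
corrupted[12345] ^= 0b00000001  # flip the lowest bit of one byte

assert zlib.crc32(bytes(corrupted)) != checksum
print("single-bit error detected")
```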

    I even remember when I used to work in ‘Electronic Data Processing’. It’s a term worth remembering, because the tools, layers, abstractions, etc., whatever they are, don’t get around the fundamental limitations of someone’s (whether that’s me, the cloud vendor, the appliance architect, or the designer of the bus on my laptop) choice of hardware and software used to store and process the data. In turn, these choices determine the limitations on what you can achieve for the business. A very simple example is that your data cannot travel faster than the speed of light, meaning you will never get lower than a 0.2 second network response time between East Coast Australia and the Amazon Cloud on the East Coast of the US. Another modern example is that you cannot currently access data stored in HANA via ABAP SQL; you need to use native SQL (meaning you need to make decisions about intermediate layers). Don’t forget that HANA resulted from the limitations of that ‘bag on the side’ of BW, the BW Accelerator, which was nothing more than a stand-alone device used to cache a substantial subset of your data warehouse.
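    That 0.2 second figure is easy to sanity-check. Here is a rough back-of-the-envelope sketch (my numbers, not Martin’s): assume the route is roughly the great-circle distance from Sydney to the US East Coast and that light in optical fibre travels at about two-thirds of c.

```python
# Rough physical floor on round-trip time, Sydney to US East Coast.
# Both constants are approximations, for illustration only.
GREAT_CIRCLE_KM = 16_000     # approx. distance, Sydney to US East Coast
FIBRE_SPEED_KM_S = 200_000   # approx. 2/3 of the speed of light, in glass

round_trip_s = 2 * GREAT_CIRCLE_KM / FIBRE_SPEED_KM_S
print(f"theoretical minimum round trip: {round_trip_s:.2f} s")  # ~0.16 s
```

    Real routes are longer than the great circle and add switching and queueing delay, which is why roughly 0.2 seconds is a practical floor that no amount of money can buy away.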

    In short, if you cannot understand the limitations imposed by the real physical constraints of where or how your data is stored, then you cannot have the knowledge to build or manage reliable systems and processes to store or manipulate your data.

    Except it’s NOT YOUR DATA. It’s someone else’s business, payroll, livelihood. You’re the guardian, the gatekeeper and protector, all rolled into one…

    1. Steve Rumsby Post author
      “In short, if you cannot understand the limitations imposed by the real physical constraints of where or how your data is stored, then you cannot have the knowledge to build or manage reliable systems and processes to store or manipulate your data.”

      I agree absolutely. As I said, we are at the beginning of this journey, and such considerations are still very important. As time goes on, fewer of those limitations will be important I think. Even now, though, over 50 years after Fortran was introduced, there are situations where programmers have to resort to assembler to get the behaviour or performance they need. Mostly, programmers do quite nicely without knowing how a CPU works, but just sometimes that knowledge is important.

      We will need to concern ourselves with network, storage and server architectures for years to come. But eventually, I believe, in most circumstances such concerns will be unnecessary. Eventually. And between then and now the role of us systems people will change. We will start to work with higher-level building blocks. Some people will still need to understand the lowest level stuff, most won’t. Even those that do won’t need to use that knowledge all of the time.

      1. Martin English
        “Mostly, programmers do quite nicely without knowing how a CPU works, but just sometimes that knowledge is important.”

        It happens more often than you’d expect… For example, why does a text file from a UNIX / LINUX system sometimes run together on a Windows system? Why are the Line Feed characters being ignored?

        The LF characters are still there.  The problem is that Unix only uses LF as the line termination sequence, while Windows requires the Carriage Return + Line Feed characters to generate a new line of text.  And, of course, just to be different, the original Macintosh operating system recognised just CR.

        To FURTHER confuse anyone who got this far, on Mac OS X, with its Unix roots, files can legitimately have the Unix-style LF or the old-Mac style CR. Or maybe you have a file from Windows with CR/LF 🙂
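        To see how this plays out in practice, here is a minimal sketch (mine, not Martin’s, and the file name is hypothetical) that normalises whichever convention a file arrived with. Python’s universal-newline mode accepts LF, CR+LF and bare CR alike on read, so the terminator only has to be chosen once, on write.

```python
# Rewrite a text file with one consistent line terminator.
# Works whether the input used \n (Unix), \r\n (Windows) or \r (old Mac).
def normalise_line_endings(path: str, terminator: str = "\n") -> None:
    with open(path, "r") as f:              # universal newlines on read
        lines = f.read().splitlines()
    with open(path, "w", newline="") as f:  # newline="" writes verbatim
        f.write(terminator.join(lines) + terminator)

normalise_line_endings("report.txt")          # hypothetical file, to LF
normalise_line_endings("report.txt", "\r\n")  # or to Windows CR+LF
```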

        FWIW, if you go to the various internet protocol documents, such as:
        http://www.ietf.org/rfc/rfc0821.txt RFC 0821 (SMTP),
        http://www.ietf.org/rfc/rfc1939.txt RFC 1939 (POP),
        http://www.ietf.org/rfc/rfc2060.txt RFC 2060 (IMAP), and
        http://www.ietf.org/rfc/rfc2616.txt RFC 2616 (HTTP),
        you’ll see that they all specify CR+LF as the line termination sequence. Which means, for once, it’s not a case of Microsoft attempting to impose its own standards on everyone else 🙂

