
OK – so “switches” are a bunch of on/off’s (1’s and 0’s for +’s and -’s) in some big bit-map inside the SAP kernel somewhere.  (Or at least they should be in some such bit-map somewhere …)

And these various switches determine the behavior of certain SAP technical objects.
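
(If you want to see the idea in miniature, here’s a toy Python sketch – the switch names are invented by me, and the real SAP switch framework of course looks nothing like this:)

    # A hypothetical behavior-switch bitmap: each bit is one on/off switch.
    SWITCH_ARCHIVING   = 1 << 0   # invented name, bit 0
    SWITCH_AUDIT_TRAIL = 1 << 1   # invented name, bit 1
    SWITCH_PARALLEL_IO = 1 << 2   # invented name, bit 2

    # A "profile" is just the OR of whichever switches are on.
    profile = SWITCH_ARCHIVING | SWITCH_PARALLEL_IO   # 0b101

    def is_on(profile, switch):
        # True if the given switch bit is set in the profile.
        return bool(profile & switch)

    print(is_on(profile, SWITCH_AUDIT_TRAIL))   # False
    print(is_on(profile, SWITCH_PARALLEL_IO))   # True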

Well, this idea has been used for quite some time now by IBM spin-offs other than SAP.  For example, there was an IBM spin-off called CyCare whose medical group automation software ran on Honeywell DPS-6’s under GCOS, and they used such on/off switch bitmaps to do their functional as well as technical system configuration.

And even IBM itself used the same concept in SNA profiles, where the behavior of a particular hardware device was determined by whether certain properties were “on” or “off” in its profile (like for a controller, or tube, or whatever …)

But really – the idea goes back to some guys – Jakobson, Fant, and Halle – who were “coming up” in the late 40’s/early 50’s and were trying to apply information-theoretic constructs to the study of phonology (the systems of meaningful sounds in human languages).

Their idea was to decompose all meaningful human speech sounds (known to laymen as “phonemes”) into their “distinctive features”.  And these “distinctive features” were (and are) … guess what? Nothing more than binary on/off switches indicating whether a particular sound has a particular physical property.

For example, three of the “distinctive” features of the meaningful sound “p” in English and German are:

-voiced, +bilabial, -aspirated

while three of the distinctive features of the meaningful sound “b” in English and German are:

+voiced, +bilabial, -aspirated.

And in ancient Sanskrit and modern descendants of the Prakrits such as Hindi, p and b have “aspirated” counterparts (typically transcribed into the English alphabet as “ph” and “bh”) with the distinctive features

-voiced, +bilabial, +aspirated

+voiced, +bilabial, +aspirated.

(If you’re a speaker of English, you can get an idea of the difference between the “unaspirated” and “aspirated” versions of “p” by holding your hand in front of your mouth and first saying “pot” and then saying “spot”.  You’ll note that when you say “pot”, there’s an outrush of breath that you can feel on your hand – that’s “aspiration”.  But when you say “spot”, that outrush of breath isn’t there, and that’s why the “p” in “spot” is termed “unaspirated”.  In English, we don’t think of aspirated and unaspirated “p” as two different “phonemes”, because the two sounds can’t be used to make a difference in meaning.  But in Hindi, one can find many words which mean different things and differ only in the presence or absence of aspiration on a “p”.)
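
To make the “switch” analogy concrete, here’s a toy Python sketch of the four sounds above as feature bitmaps (a drastically cut-down, three-feature inventory, purely for illustration):

    # Distinctive features as switches: +feature = bit set, -feature = bit clear.
    VOICED    = 1 << 0
    BILABIAL  = 1 << 1
    ASPIRATED = 1 << 2

    phonemes = {
        "p":  BILABIAL,                       # -voiced, +bilabial, -aspirated
        "b":  VOICED | BILABIAL,              # +voiced, +bilabial, -aspirated
        "ph": BILABIAL | ASPIRATED,           # -voiced, +bilabial, +aspirated
        "bh": VOICED | BILABIAL | ASPIRATED,  # +voiced, +bilabial, +aspirated
    }

    def minimal_pair(a, b):
        # Two sounds contrast "minimally" when exactly one feature bit differs.
        diff = phonemes[a] ^ phonemes[b]
        return diff != 0 and diff & (diff - 1) == 0

    print(minimal_pair("p", "b"))   # True  - voicing alone distinguishes them
    print(minimal_pair("p", "bh"))  # False - voicing AND aspiration both differ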

Anyway, why am I going on about this matter of “distinctive features” at length?

Because it illustrates how completely devoid of intellectual content IT has become.

As intellectually impoverished as the notions of “relational database” and “SQL” were, at least Codd pretended to provide a set-theoretic justification for them – even if this justification was really nothing more than common sense puffed-up into pseudo-theory.

But now, there’s not even a pretension to intellectual content in IT. 

New stuff gets put out there as the latest and greatest with no reference to its theoretical and practical precedents, even though recognition of these precedents might in fact help fuel and shape even better developments in IT. 

(As another example of this, consider the individual VM’s that SAP now provides in its Java stack so that one user can’t bring down all users.  I’ve noted before at SDN how this idea goes back to the old idea of MUSASS – multi-user single address space – an idea all old-time IBM’ers will recall if they worked with any multi-user 4GL DB running under MVS or any of its successors.)

I knew a young prof at NYU’s Courant Institute back in the 70’s whose attitude toward theory in Comp Sci was very simple – he would always say:

“The right theory is the theory that gets us the grant in June.”

That’s kind of the attitude which software vendors seem to have these days – they seem to make no attempt to discharge their responsibilities as contributors to the theoretical foundations on which good IT must always draw.  Rather, the attitude of SW vendors seems to be:

“The right idea is the idea that gets us the largest increase in market share”.

I think that software vendors would do well to remember a saying used by scientists:

“We stand on the shoulders of giants”.

So do software vendors, even though they introduce every new tweak as if it’s never been thought of before.

And maybe SW innovation would benefit from a little more “historicism”, and a little more understanding of how we got to where we are now.


17 Comments


    1. David Halitsky
      Glad you think the topic even worth commenting on, Jim.  I guess it all goes back to that black, black day when AT&T Labs told its Nobel Laureates that if they didn’t start working on the next “cell phone”, they’d be looking for “tenure” somewhere else (that’s an absolutely true story, BTW).  Maybe what SAP oughta fund is a “Walldorf Institute of Advanced Study”, where folks are encouraged to do nothing but think …
  1. Gregory Misiorek
    We’ve heard about Mr Chomsky, but mostly in a different capacity. OTOH, Mr Codd has been greatly underappreciated (not least by his employer) even though his research started this whole industry.

    @greg_not_so (no tweet)

    PS Hello Stockholm, is anyone listening?

    1. David Halitsky
      Have you ever heard of Gio Wiederhold? (They used to call him “daddy database”.)  I wish I could have gotten him to say in public what he once said to me in private about the relational paradigm.  All Codd did was to give Larry Ellison an idea about how to sell enough “70% solutions” to enough “mom-and-pop” shops that people actually started to think that he was offering an industrial-strength solution for serious consumers of IT.  The field has been recovering ever since (otherwise why such products as BI and BI accelerators?)
      1. Gregory Misiorek
        David,

        Thanks for debunking the myth, but I wish I had been in Mr Ellison’s shoes when he was taking Mr Codd’s advice. Not too shabby for just an idea. I was actually thinking more about Mr Ritchie after my prior posting, but would gladly hear another de-mystification as I’m only relying on public information.

        @greg_not_so

        1. David Halitsky
          Greg –

          Think about the GUIDs that are more and more being used as “primary keys” in SAP’s newer systems.

          If the “relational paradigm” really said anything significant about the “data:reality” nexus, there would be no need for GUIDs as primary keys.

          In this regard, it’s also worth considering that Codd’s work was strongly influenced by its context: when he developed the relational paradigm at IBM’s SRI on 42nd Street in NYC, IBM’s main customers were … banks.  And therefore, the main files of IBM customers were files of accounts in which records were distinguished by account numbers.

          And so, a new mythology was born … the mythology of the “primary key” …
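
          (Toy Python sketch of the point – this is not how SAP mints its GUIDs, it’s just to show how little a surrogate key “says” about the row it identifies:)

              import uuid

              # A surrogate "primary key" carries no information whatsoever
              # about the account, customer, or document it identifies.
              row_guid = uuid.uuid4().hex.upper()
              print(row_guid)   # 32 hex characters, different on every run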

          Best
          djh

  2. Stephen Johannes
    David,

    Unfortunately, many folks have been successfully marketed the idea that we don’t need to understand the “theory” of why something is designed the way it is, but rather only whether it works and whether it’s easy.  I would even argue that the ABAP language acts as a great crutch for “programmers” who have never opened a traditional comp-sci text.

    This even applies to “softer” skills such as project management.  I was amazed at how much was still valid when reading the 20th-anniversary edition of The Mythical Man-Month on software engineering projects.  It just shows that if we don’t study our history or review the fundamental theory, we are bound to repeat the same mistakes.

    Always enjoy your blogs that make us look deeper at why we are doing things in a particular fashion.  Any thoughts on the question of whether in-memory computing can really tackle “hard computation” problems, or whether we are just speeding up problems that were constrained by large linear (but not exponential) scale?

    Take care,

    Stephen

    1. David Halitsky
      Hi SJ – thanks for taking the time to respond – always nice to chat with you … I confess that the hardest problem for me has always been to even understand the P=NP question as described here:

      http://en.wikipedia.org/wiki/P_versus_NP_problem

      http://en.wikipedia.org/wiki/NP-hard

      But I will dare to make a fool of myself by saying that I think pre-computation and associated pattern analysis are under-utilized approaches to a lot of time-consuming practical computing problems …

      Best
      djh
      615-613-2123

      1. Stephen Johannes
        David,

        Yeah, along those lines, but honestly I get confused trying to remember the whole NP-hard distinction.  I think in general my question is whether most of the technology behind in-memory computing is simply increasing the size of “N”, instead of necessarily being a breakthrough that solves problems with “infinite computation time”.

        Based on SAP’s examples, it’s safe to say that we can now run polynomially bounded algorithms over record sets that approach multi-billion/trillion rows, and achieve performance numbers that make the value of N appear to act like it’s under 100,000, thanks to the hardware.
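
        (A back-of-the-envelope Python illustration of what I mean – the numbers are made up, but the asymmetry is the point:)

            import math

            speedup = 1_000_000   # say the hardware gets a million times faster

            # Linear-time algorithm: the same wall-clock time now covers a
            # million times more records.
            print(speedup)              # 1000000

            # Exponential-time algorithm, O(2^N): the same speedup only buys
            # log2(1,000,000) ~ 20 additional items in the problem instance.
            print(math.log2(speedup))   # ~19.93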

        I actually asked a similar question of the Hasso Plattner Research folks at TechEd (at their display booth): had they yet encountered any problems/applications that could not be run via HANA?  They did, however, show me a demo scheduling application (a cross between the bin-packing and traveling-salesman problems) as an example of what you could do with the system horsepower.

        Take care,

        Stephen


        1. Martin English
          I saw / heard somewhere that in the current release of HANA, the memory is backed up by a MaxDB database.  This implies a limit of 32TB (in the current release) on the database size.
        2. David Halitsky
          … was considered by physicists to be good enough to kind of see what’s going on in a rough kind of way …

          That’s what I meant when I suggested that the “breakthroughs” are going to come from seeing what kinds of worthwhile patterns can actually be detected with LESS THAN EXPECTED computation …

          The place where SAP could really make a contribution here is in the area of BOMs and routings … but of course, for SAP to make any money to recoup the necessary R&D, it would have to convince a whole industry that inasmuch as BOMs are ordered trees, there must be “efficient” BOMs and “inefficient” BOMs, just like there are “efficient” b-tree indices (e.g. “balanced”) and “inefficient” b-tree indices (e.g. “unbalanced”).
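
          (Here’s a toy Python illustration of the balanced-vs-unbalanced point, with a degenerate binary tree standing in for an “inefficient” index:)

              import math

              n = 1_000_000   # entries in the index (or nodes in the BOM tree)

              # Balanced binary tree: worst-case lookup walks the height, ~log2(n).
              balanced_lookups = math.ceil(math.log2(n))   # 20 comparisons

              # Degenerate "unbalanced" tree that has decayed into a linked list:
              # worst-case lookup walks every node.
              unbalanced_lookups = n                       # 1,000,000 comparisons

              print(balanced_lookups, unbalanced_lookups)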

          And why should SAP bother, when Gates and Ellison have demonstrated so well that SW companies can satisfy the shareholders by just marketing low-hanging fruit to the barely competent?

      1. David Halitsky
        You’re absolutely correct to dismiss the point of the blog with such a fine analogical ad hominem. It would be great if the ratio of substance to style at SDN could be increased by more such ad homs.

        In any event, since you like “movie” references, here’s one for you that’s actually relevant to the blog post itself.

        Some pointy-headed artistic type once criticized one of the big Hollywood producers (Mayer or Zanuck, I think) for making so few great movies.

        The producer replied by asking his critic to remember that Hollywood doesn’t have to produce any.

        Ever since Larry Ellison and Bill Gates proved that the average CIO will swallow virtually any marketing ploy if it promises less cost, software companies have been free to act like those Hollywood producers of old … it’s only out of the goodness of their hearts (or perhaps a sense of shame) that they deliver intellectually groundbreaking software every once in a great while.

        Best regards
        djh

  3. Martin English
    … it’s amazing how many of the architectural principles I picked up in brief stints as a DOS and later MVS systems programmer in the 70’s and 80’s still apply.  The implication is that whether or not the science of computing has moved forward, the practice may not have.

    Just theorising in a stream-of-consciousness way here, but I think there are at least two problems:

    1) Despite everyone’s best efforts, Time has not changed speed (at least as I perceive it).  So to get more out of a given time period, we have tried to cut down on the inefficient use of that time, including getting rid of the ‘thinking time’ you refer to, forgetting about the difference between efficient and effective 🙂

    2) There is the perception that Moore’s Law has diminished the need for efficient coding.  Apart from encouraging bad/lazy coding (we can always throw hardware at it, anyway, can’t we?), recent evidence suggests that software (and the science behind software) has been advancing just as rapidly.  I recommend reading a recent report by an independent group of science and technology advisers to the White House – http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf – where one of the examples describes how, over 15 years, the speed of completing a benchmark process improved by a factor of 43 million.  Of that total, a factor of roughly 1,000 was attributable to faster processor speeds, and the rest, a factor of 43,000, was due to improvements in the efficiency of the algorithm.

    Rather than argue about whether Moore said “doubling of transistors” or “doubling of computing power” over “18 Months” or “24 Months”, there’s an extensive investigation of Moore’s law at Ars Technica http://arstechnica.com/articles/paedia/cpu/moore.ars that people can refer to.

    BTW, thanks for another thought-provoking blog!!

    1. David Halitsky
      And let’s not forget David Patterson, the under-appreciated father of “RAID” arrays, who once said with great foresight that the decrease in the cost of DASD would play a greater role in the development of applied computing than the decrease in the cost of cycles.
