“To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.” With this quote from the speech Edsger W. Dijkstra gave after receiving the ACM Turing Award, he describes a problem he names the “software crisis”: hardware has scaled by many orders of magnitude, society’s ambition to apply these machines has grown in proportion, “and it is the poor programmer who finds his job in this exploded field of tension between ends and means”. Astonishingly, this speech was given in 1972. By how many orders of magnitude has the “gigantic” power of hardware increased further since then? And software engineering? Has our knowledge of how to create software scaled by orders of magnitude?
No Silver Bullet
Twenty-two years after Dijkstra’s speech, the Standish Group released the results of a study that has become famous as the “chaos report”. Their main findings: 31.1% of projects will be canceled before they ever get completed. 52.7% of projects will cost on average 189% of their original estimates and/or will be significantly delayed. Only 16.2% of software projects are completed on time and on budget. Despite a flurry of new technologies and methodologies in the ’70s and ’80s aimed at overcoming the software crisis, the “silver bullet” had not been found, as Fred Brooks puts it in the anniversary edition of his famous and timeless book about the software crisis, “The Mythical Man-Month”.
The “chaos report” 1994–2009
The chaos report has been repeated every two years since 1996. There seems to be some slight improvement, which is not really surprising: it would be astonishing if people learned nothing at all from their experiences and failures. However, in my opinion, a breakthrough looks different from the slight upward trend that can be spotted in the picture above. To this day, successfully managing the processes and activities for creating software is a huge challenge. Recently, I talked with a project manager for car projects at VW to get an idea whether such failure rates are usual in other engineering disciplines as well. In fact, there are some failing car projects, but only once in a while, and usually due to external forces like changes in the market, not due to an inability to manage the engineering processes. Ironically, the most dreaded components in car projects, which are known to have already caused car project disasters, are the ones related to software, like board computers, integrated GPS units and the like.
Crisis? What Cri$i$?
How big is the problem really? We all got a bit numb to large amounts of money during the financial crisis, but let’s do some arithmetic nevertheless. The global budget for IT software and services projects in 2009 was around $1 trillion. With 24% of projects failing outright, their entire budget is lost, about $240 billion; and with 44% costing roughly double their estimate, the overrun on each equals its original estimate, adding roughly another $220 billion. That makes around $460 billion (!) in losses or unexpected costs – missed opportunities and follow-up costs not included. You might object that the chaos report is perhaps too pessimistic and my calculation too naïve. But even very conservative estimates of the losses and missed opportunities caused by failing software projects are not far from this figure (e.g. National Institute of Standards and Technology: $69.5 billion per year in the US alone), and the “hall of shame” of individual software project disasters makes for an amazing read (for a few examples, see e.g. Charette, R., “Why Software Fails”, IEEE Spectrum, Sept. 2005).
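The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The budget and percentage figures come from the text; assuming the budget is spread evenly across projects is my simplification:

```python
# Back-of-the-envelope estimate of losses from failed and challenged projects.
# Figures are the rounded numbers used in the text; treating project shares
# as budget shares is a simplifying assumption.
global_budget = 1.0e12     # ~$1 trillion global IT software/services spend (2009)

failed_share = 0.24        # projects canceled outright: their whole budget is lost
challenged_share = 0.44    # projects costing roughly double their original estimate

lost_on_failed = failed_share * global_budget                   # $240 billion
# A project that costs double has an overrun equal to its original estimate,
# i.e. half of what was actually spent on it.
overrun_on_challenged = challenged_share * global_budget * 0.5  # $220 billion

total = lost_on_failed + overrun_on_challenged
print(f"${total / 1e9:.0f} billion")  # → $460 billion
```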
What to do about this problem? Well, I would be rich if I knew. I am not rich, but there is something odd that I find remarkable: if all the methodologies and technologies for creating software of the last 15 years didn’t bring significant improvements in the success rate of projects, then we could just as well have stayed with the technologies and methodologies of the mid-nineties. We would all have saved incredible amounts of time by not dealing with all the so-called “breakthrough technologies” and trends. Probably no other engineering discipline has such a pace of “innovation” as software engineering, and I am sure that almost everybody working in this discipline has a hard time keeping up with all the emerging technology. Well, I am being a bit provocative here, and I actually do believe that certain developments of the last decade were very important. But still, the total fixation on innovation in software engineering and computer science might be part of the reason why there is not much progress on the decades-long software crisis. Maybe it would be good to shift the perspective somewhat from the new stuff to the old stuff, to the “demerging” technologies, to what was already there in the history of computer science. Thoroughly evaluating and analyzing existing technology is maybe more fruitful than adopting one silver bullet after the other without much questioning, discarding not only “old” technology but also disrupting the accumulation of experience.
The Power of Demerging Know-how
An example of disrespect for what was already there: even technology vendors like Sun release strategic frameworks with huge design flaws that could have been avoided by taking decades-old wisdom seriously. In 1972 (speaking of timeless software again), D.L. Parnas, one of the fathers of modular design, wrote: “Successful designs are those in which a change can be accommodated by modifying a single module.” The EJB framework violated this advice, most importantly by having components fetch their resources from the container (the opposite of dependency injection). EJB has been criticized a lot for the resulting lack of testability and flexibility – so much so that an open-source alternative (Spring/Hibernate), which takes well-known design principles very seriously, became more popular than the “industry standard” EJB (see picture below).
Demand for engineering skills in EJB vs Spring/Hibernate
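The design difference can be made concrete with a minimal sketch (in Python for brevity; all names here are hypothetical and do not reflect the actual EJB or Spring APIs): a component that looks its resources up in a global container versus one that gets them injected.

```python
# Hypothetical names throughout -- this sketches the design difference only,
# not the real EJB or Spring APIs.

# Container-lookup style (what early EJB encouraged): the component reaches
# into a global registry for its resources, creating a hidden dependency.
class Registry:
    _services = {}

    @classmethod
    def register(cls, name, service):
        cls._services[name] = service

    @classmethod
    def lookup(cls, name):
        return cls._services[name]

class OrderServiceLookup:
    def total_orders(self):
        db = Registry.lookup("database")  # fails unless a container is set up
        return db.count("orders")

# Dependency-injection style (the Spring idea): the dependency is handed in,
# so replacing the database -- or passing a test double -- touches one place.
class OrderServiceInjected:
    def __init__(self, db):
        self.db = db  # explicit dependency

    def total_orders(self):
        return self.db.count("orders")

class FakeDb:
    def count(self, table):
        return 42

# The injected version is testable without any global setup:
service = OrderServiceInjected(FakeDb())
print(service.total_orders())  # → 42
```

The lookup version can only be exercised after someone has populated the registry, while the injected version honors Parnas’ advice: swapping the database is a change accommodated at a single construction site.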
Not long ago, I started working in research at SAP Research France. Research in computer science is naturally very much focused on new solutions and new problems; it is focused on the question “how to?”, and not much on “what is?”. I wonder what research on existing technology would look like. Well, software engineering has, since its beginnings, developed much like a craft. Crafts develop slowly, by trial and error, supported by common sense. There is nothing wrong with that, but it could be sped up by an empirical, scientific approach. Try to find research work that empirically establishes relationships between project success (or associated measures like bug frequencies, changeability, reliability of effort estimates, …) on the one side and the technologies or standards deployed (or abstractions of them) on the other – a bit like medical research works to prove relations between diseases and cures. You won’t find much! The lack of hard facts leaves the job of advocating particular technologies to the marketing departments of software tool vendors. And discussions about technologies and their advantages or disadvantages sometimes sound to me the way I imagine discussions between medieval doctors must have sounded, about the merits of leeches versus cupping glasses for extracting the evil spirits from a patient.