Software robustness is a problem that everybody cares about but few people address in their products. The average project devotes several weeks to testing, mostly in the weeks just before deployment. Of course, most software ends up behind schedule and over budget, and testing is the first thing to be reduced or cut, so much commercial software gets only a couple of days of testing before it ships. In the high-pressure world of the Internet, this model can seem reasonable, as everyone rushes to get products to market faster than their competitors. Since the next version will be released in just a couple of months, it makes sense to let the bugs pile up after release and fix them all in the next version, right?

Of course, there are serious problems with this model. If you let your software ship with significant bugs that affect the experience of many users, you will quickly erode the quality associated with your company's brand. People will always remember you for the low quality of your first release.

Another reason this model doesn't make sense is that the cost of finding and fixing a single bug grows enormously as the software development cycle goes on. If a problem is caught in the requirements phase, it costs about $139 to fix. By the time coding begins, the cost rises to nearly $1,000 per bug. If the bug is not caught until after the project is completed, the costs rise significantly. For example, many companies have testing teams whose job it is to bang on a product extensively after the coding phase is complete. For these people to find bugs, and for those bugs to then be fixed, the average cost is over $7,000 per bug. If bugs are not caught and fixed until the software is deployed, the cost rises to over $14,000 per bug, more than 100 times the cost of fixing a bug caught in the initial phase of development.

Clearly, software doesn't have to be 100% bug free.
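To make the escalation concrete, the dollar figures above can be compared directly. This is a minimal sketch; the amounts are the ones quoted in the text, and the phase labels are informal:

```python
# Approximate cost to fix one bug, by the phase in which it is found.
# Figures are the ones quoted in the text; exact numbers vary by study.
costs = {
    "requirements": 139,
    "coding": 1_000,
    "post-coding testing": 7_000,
    "after deployment": 14_000,
}

baseline = costs["requirements"]
for phase, cost in costs.items():
    multiplier = cost / baseline
    print(f"{phase}: ${cost:,} (~{multiplier:.0f}x the requirements-phase cost)")
```

A bug caught after deployment costs roughly 100 times as much as one caught during requirements, which is why catching problems early dominates any savings from shortening the test schedule.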
In fact, one of the hardest problems in testing is knowing when to stop. If your company puts a team of testers on a project and they spend four weeks on the finished product, they may find a lot of bugs the first week, some the second week, a few the third week, and none the fourth week. But just because they found no bugs in the fourth week doesn't mean there are none left. There is no practical way to prove that any piece of real-world software is devoid of bugs, even a well-tested piece of software. In addition, functionality for expert users often doesn't get tested as well as the basic functionality, because testers are rarely expert users. No one wants a reputation for software that is not robust in the eyes of expert users, because expert users influence the usage habits of novice users. If these users get upset, your entire user base could slowly migrate to another product, even if you tested your product fairly thoroughly!

Testing is generally considered costly and a nuisance, but as we have just seen, it is a necessary nuisance. The goal for most companies should be to do the best testing job possible while minimizing the costs. The approach that seems to work best is to test early and test often. Robustness isn't a module that can be bolted onto the side of a preexisting system; it is far more cost-effective to develop robust software if you strive for this quality from day one. Similarly, the more software is tested, the more bugs will be found (although a poorly designed test strategy can consume a great deal of effort while finding few of them).
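Testing early and often is easiest when the tests live alongside the code from day one. As a minimal sketch, here is a hypothetical function paired with unit tests written using Python's standard unittest module; the function and its name are illustrative, not taken from the text:

```python
import unittest

def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting non-numeric or out-of-range input."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTest(unittest.TestCase):
    # Written alongside the code, not bolted on in the week before release.
    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_port("not a number")
```

Running such tests on every build (for example, with `python -m unittest`) surfaces regressions while a bug still costs hundreds of dollars to fix, rather than thousands after deployment.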