Jonathan Becher

What the Heck is Antifragile?

OK, an admission. I’ve been avoiding reading Antifragile, the latest book by Nassim Nicholas Taleb, now three years old. Nothing against Taleb; I follow him on Twitter and really enjoyed his earlier books, The Black Swan and Fooled by Randomness.

The buzz surrounding Antifragile was mixed. A friend of mine summed up his opinion in one word: ‘Eh’. I still remember a withering review in the NY Times that included this sentence:

“Unfortunately he delivers such lessons with bullying grandiosity and off-putting, self-dramatizing asides.”

Ouch. But this weekend, I finally read Antifragile. Cover to cover. And I liked it. Not as much as the other two books but there’s plenty worth reading.

It starts with the concept of antifragility itself. Antifragile doesn’t just mean not fragile or unbreakable; something robust merely resists shocks, while something antifragile gets better with adversity and thrives in the face of chaos. That’s why the subtitle of the book is “Things That Gain from Disorder.”

The prototypical example is the Hydra, the Greek mythological creature with many heads. When one head is cut off, two grow back in its place. A more commonplace example happens in the gym. When I lift weights, my muscles tear slightly. When they heal, I am stronger and can lift more weight. In fact, our entire bodies are antifragile. It’s estimated that about 300 million cells in our bodies die every minute, yet our bodies adapt.

Smaller units tend to be more fragile than the larger, more complex systems of which they are part. Individuals are more fragile than families, families more fragile than communities. The same dynamic exists between neighborhoods, cities and countries.

Of course, not all communities and not all countries are equally antifragile. The Roman Empire may not have been built in a day, but it didn’t last forever either. And yet the city of Rome is still around.

So how does an organization become more antifragile?

Counterintuitively, it’s by no longer trying to protect it. Stability is not necessarily good. The longer we go without variations, without setbacks, without randomness, the worse the consequences will be when the unpredictable finally happens. In Taleb’s words,

“Preventing noise makes the problem worse in the long run. […]

“I have always been very skeptical of any form of optimization. In the black swan world, optimization isn’t possible. The best you can achieve is a reduction in fragility and greater robustness.”

My interpretation? Taleb agrees with my assessment that “failure is the new black.” If you want to be antifragile, you have to be willing to make and embrace mistakes.

This blog was originally posted on Manage by Walking Around on April 13, 2015.

Please follow me on Twitter, LinkedIn, and Google+.

      Paul Hardy

      There have been several blogs on this subject on the SCN.

      My interpretation is that this concept can be directly applied to programming.

      Traditionally, every time you changed a program, the logic became more complicated as you added extra features and fixed bugs with convoluted workarounds. Every time you made such a change, you risked breaking an unrelated area of the application.

      The more such changes were made, the more complex the program became and the higher the chance of future changes breaking it, i.e. it became more fragile as time went on. Or, as Robert Martin put it, "the code rotted" until you were too scared to change anything for fear the whole thing would fall apart.

      He suggested turning this on its head by applying what he called the "boy scout rule": "always leave the code cleaner than you found it". This could involve renaming an obscure variable so you could tell what it did, or breaking up a complicated subroutine into smaller parts, or spotting some duplicate code and encapsulating it in its own routine.
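      As a generic illustration of that last step, here is a minimal sketch in Python rather than ABAP, with invented function names: the same discount logic is pasted into a routine, then pulled out into one well-named helper so a future rule change happens in exactly one place.

```python
# Before: the discount calculation is embedded (and, in a real report,
# would be copy-pasted wherever a total is computed).
def invoice_total(items):
    total = sum(item["price"] * item["qty"] for item in items)
    if total > 1000:              # duplicated discount logic
        total = total * 0.95
    return round(total, 2)

# After: the duplicate logic is encapsulated in its own routine.
def apply_bulk_discount(total, threshold=1000, rate=0.05):
    """Give a discount on orders above the threshold."""
    return total * (1 - rate) if total > threshold else total

def invoice_total_clean(items):
    total = sum(item["price"] * item["qty"] for item in items)
    return round(apply_bulk_discount(total), 2)
```

      The behaviour is unchanged, which is the whole point: the clean-up is safe precisely because each small step preserves what the program does.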

      In ABAP terms this could involve breaking parts of huge function groups up into small classes which each do "one thing", or changing the program so you can add a unit test. Or perhaps running an extended program check or a Code Inspector check.
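      The "change the program so you can add a unit test" step might look like the following sketch (again in Python with hypothetical names, standing in for an ABAP Unit test class): once a calculation has been extracted from a monolithic report into its own routine, its behaviour can be pinned down before any further refactoring.

```python
import unittest

def net_weight(gross_kg, packaging_kg):
    """Small routine extracted from a larger report so it can be tested."""
    if packaging_kg < 0 or packaging_kg > gross_kg:
        raise ValueError("packaging weight out of range")
    return gross_kg - packaging_kg

class NetWeightTest(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(net_weight(10.0, 1.5), 8.5)

    def test_rejects_bad_packaging(self):
        with self.assertRaises(ValueError):
            net_weight(10.0, 12.0)

# run with: python -m unittest <module_name>
```

      With tests like these in place, the next clean-up is no longer a leap of faith: if a refactoring breaks the routine, the test fails immediately.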

      You don't have to make one million changes - just one or two, and all in the area you are changing anyway.

      Then the next person who comes along to change the program will find it easier than before. Thus the analogy of muscles you use above is 100% relevant: in this case we have made a slight tear in the program, and then put it back together stronger than before.

      Thus, instead of becoming more fragile each time it encounters stress (being changed), the program becomes less fragile.

      The irony is that a lot of developers are too scared to do this (make a seemingly unrelated change, such as putting three chunks of identical code into one routine that is called in three places) because it is "dangerous". I say not doing this is dangerous, and in the foreword to "Clean Code" Robert Martin argues that not doing this is unprofessional.

      I changed quite a heavily used report the other day, one that had not been touched since about 2001.

      When the functional analyst went through my changes with me, I said "I added this feature, and that feature, and fixed that bug, and this bug, just like the specification said". In fact the specification was two lines long.

      In actual fact I had also done an SQL trace and removed all the identical SELECTs, run SLIN and the Code Inspector and fixed all the errors and warnings, renamed almost every variable (TBL_VBAK became GT_DELIVERIES, since that table was always filled with values from table LIKP, so the original programmer's name of TBL_VBAK was somewhat misleading), changed the subroutine names so they said what they were doing, and got rid of 95% of the global variables. But I sort of glossed over that part.

      As I said, you might need a lot of guts to make that sort of wholesale change, but one or two minor changes along those lines are highly unlikely to be risky.

      All you are doing is making things easier for the next programmer who comes along - and that might well be you.

      Cheersy Cheers


      Matthew Billingham

      I've done similar - and will continue to do so, as I think not repairing broken windows is unprofessional.

      On the other hand, it has happened that while all my testing showed no issues, once the users got their hands on it, my refactoring turned out not to be 100% perfect.

      On the gripping hand* though, any such issues have always been easy to fix!