2018-11-19 concepts

Antifragility (part 2)

Hey all -

+ what I learned or rediscovered recently #

* Antifragility

In the last newsletter, I mentioned that an egg is a classic example of fragility: when there is change to the status quo, it can only hurt the egg. There is no upside to change, only downside: the egg cracks.

Two factors primarily work against the egg here: (1) change and (2) time. The more change - the more the egg jostles and bumps around - the more likely it is to crack. And the more time that elapses, the more likely it is that something damaging will eventually happen.
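
If you like seeing this in code, here's a tiny back-of-the-envelope sketch in Python (the per-jostle probability is a made-up number) showing how even a small chance of damage compounds as jostling and time accumulate:

    # Hypothetical numbers: each jostle has a small, independent chance of
    # cracking the egg; over enough jostles, a crack becomes near-certain.
    p_crack_per_jostle = 0.01
    for jostles in (10, 100, 1000):
        p_cracked = 1 - (1 - p_crack_per_jostle) ** jostles
        print(f"{jostles:>5} jostles -> {p_cracked:.0%} chance the egg has cracked")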

The opposite is true for antifragility. Here, there is a fixed downside, but the possibility of an unknowable, high-magnitude upside. Venture capital is a good example: you invest a known amount of money in extremely risky ventures for the chance that some of them make it big. Meeting new people is another example: you spend time meeting a bunch of people for the chance that some of them profoundly change your life (e.g. a new friend, romantic partner, business partner).
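
To see the shape of that payoff, here's a toy simulation (all the odds and multiples are invented for illustration) where each bet can lose at most its stake, but a rare winner pays back many times over:

    import random

    # Made-up odds: a 5% chance any venture returns 20-100x its stake;
    # the other 95% go to zero. The loss per bet is capped at the stake.
    random.seed(0)
    STAKE = 1.0

    def venture_payoff():
        if random.random() < 0.05:
            return STAKE * random.uniform(20, 100)
        return 0.0

    n_bets = 100
    net = sum(venture_payoff() for _ in range(n_bets)) - n_bets * STAKE
    print(f"Net outcome over {n_bets} bets: {net:+.1f} stakes")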

In contrast to the fragile egg, antifragile systems prefer more disorder and more time. The crazier, zanier the ventures - and the longer you wait - the better the outcomes. Antifragility says: shoot for the moon, but be patient.

I want to now focus on three really interesting features of antifragility, and we can use natural selection as our case study. Systems exhibiting natural selection improve over time by trying a bunch of different things (random mutation), dropping what doesn’t work (selection), and keeping what does (heritability).

Biological life, a capitalist economy, an individual business - these all exhibit natural selection. They learn by doing and improve by continual experimentation.
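
As a sketch of that loop - mutate, select, inherit - here's a toy Python simulation (the single-number "trait", the fitness function, and all the parameters are assumptions made purely for illustration):

    import random

    # Toy model: 20 "organisms", each reduced to a single trait value.
    # Assume higher trait = better fit to the environment.
    random.seed(1)
    population = [random.uniform(0, 1) for _ in range(20)]

    def fitness(trait):
        return trait

    for generation in range(50):
        # selection: prune the weaker half of the population
        survivors = sorted(population, key=fitness, reverse=True)[:10]
        # heritability + random mutation: offspring copy a survivor, plus noise
        offspring = [t + random.gauss(0, 0.05) for t in survivors]
        population = survivors + offspring

    avg = sum(map(fitness, population)) / len(population)
    print(f"Average fitness after 50 generations: {avg:.2f}")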

This leads to the first feature of antifragile systems. There is an antagonism between the parts and the whole: the system improves at the expense of the parts. The weak parts of the system - whether they’re organisms in a species or businesses in an economy or employees in a business - are pruned away. The system as a whole grows stronger.

We also see the second feature of antifragile systems: the system is highly adaptable to external change - whether it’s environmental change or changing economic and business conditions. How? By throwing the parts into the fire!

Improvement in antifragile systems is born out of cold, raw, merciless experience. There is no top-down, what-if theorizing here. Instead: experiment, fail, and do more of what worked. Antifragile systems favor heuristic knowledge (rules of thumb earned through testing and experimentation) over theoretical knowledge (abstract thinking that can easily become detached from the real world). Antifragile systems are practitioners and have the battle scars to prove it. All this doing yields adaptability.

Now for the last feature of antifragile systems: none of this would work if these systems weren’t redundant. If a basketball team depends too heavily on a single star player, then the team is highly exposed to the downside of that player leaving. There is a fragility, a concentration risk, a dependency risk. An antifragile system spreads its bets across multiple players: should any single player not succeed, the system can prune, learn and do more of what works.

Redundancy - by its very nature - is costly, often in terms of time, money or effort. Additional resources are allocated for the same function[1]. Redundancy is inefficient and suboptimal.

As a result, a basketball team built around a single star player may outperform a more balanced team. But here’s where antifragility introduces an important concept: performance should be balanced with risk. Often, better performance comes with more risk. If we are too efficient, too optimal, too perfect - that creates fragility!

If we plan our schedule so we reach the station exactly when our train arrives, we allow no slack in the system. Should something go slightly awry, we miss our train. Putting all our eggs in one train car makes for an efficient, but risky, delivery of eggs.

What are the takeaways of all of this?

If we want to be antifragile, we should look for situations with uncapped upside and a fixed downside. Be patient, be okay with inefficiency, learn by doing, and quickly prune away things that don’t work. And finally, read Taleb’s book, Antifragile.

Thanks for reading,

Alex


[1] Netflix runs a software program called Chaos Monkey, which takes down services operating on Netflix’s client-facing network. Why? Because it forces the network to be redundant: you need multiple instances of those services running in case any one of them fails. It may not be the most efficient, but it decreases risk.