Farnam Street helps you make better decisions, innovate, and avoid stupidity.


A Method of Detection of Fragility: How to Detect Who Will Go Bust — Nassim Taleb

Nassim Taleb is at it again with this footnote from Antifragile on detecting fragility.

* * * * * *

Next, let us examine a method of detection of fragility. We can illustrate it with the story of the giant government-sponsored lending firm Fannie Mae, a corporation that collapsed, leaving the United States taxpayer with hundreds of billions of dollars of losses (and, alas, still counting).

One day in 2003, Alex Berenson, a New York Times journalist, came into my office with the secret risk reports of Fannie Mae, given to him by a defector. It was the kind of report that gets into the guts of the methodology for risk calculation, the kind only an insider can see — Fannie Mae made its own risk calculations and disclosed what it wanted to whomever it wanted, the public or someone else. Only a defector could show us the guts, so we could see how the risk was calculated.

We looked at the report: simply, a move upward in an economic variable led to massive losses, while a move downward (in the opposite direction) led to small profits. Further moves upward led to even larger additional losses, and further moves downward to even smaller profits. It looked exactly like the story of the stone in Figure x. The acceleration of harm was obvious — in fact it was monstrous. So we immediately saw that their blow-up was inevitable: their exposures were severely “concave,” similar to the graph of traffic in figure x: losses that accelerate as the economic variable deviates (I did not even need to understand which variable, as fragility of this magnitude to one variable implies fragility to all other parameters). I worked with my emotions, not my brain, and I felt a pang before even understanding what numbers I had been looking at. It was the mother of all fragilities and, thanks to Berenson, the New York Times presented my concern. A smear campaign ensued, but nothing too notable. For I had in the meanwhile called a few key people there charlatans, and they were not too excited about it.

I kept telling anyone who would listen to me, including random taxi drivers (well, almost), that the company Fannie Mae was “sitting on a barrel of dynamite.” Of course, blowups don’t happen every day (just as poorly built bridges don’t collapse immediately), and people kept saying that such an opinion was wrong and unfounded (using the argument that the stock was going up, or something even more stupid). I also inferred that other institutions, almost all banks, were in the same situation. After checking similar institutions, and seeing that the problem was general, I realized that a total collapse of the banking system was a certainty. I was so certain I could not see straight and went back to the markets for revenge against the turkeys. As in the scene from The Godfather (III): “I was trying to get out and they pulled me back in.”

Things happened as if they were planned by destiny. Fannie Mae went bust, along with other banks. It just took a bit longer than expected, no big deal. The stupid part of the story is that I still did not have a word for fragility. But thanks to the episode of the attic I had a measure for it.

So it all boils down to the following: figuring out whether our miscalculations or misforecasts are more harmful in one direction than they are beneficial in the other, and how rapidly the damage accelerates. Exactly as in the story of the king, where the damage from a ten-kilogram stone is more than twice the damage from a five-kilogram one. Such accelerating damage means that a large enough stone would eventually kill the person. Likewise, a large enough market deviation would eventually kill the company.

Once I figured out that fragility came directly from nonlinearity and convexity effects, and that convexity was measurable, I got all excited. The technique — detecting acceleration of harm — applies to anything that entails decision-making under uncertainty, and to risk management. While it was most interesting in medicine and technology, the immediate demand was in economics. So I suggested to the International Monetary Fund a measure of fragility to substitute for their measures of risk, which they knew didn’t work. Most people in the risk business had been frustrated by the poor (rather, random) performance of their models, but they didn’t like my earlier statement: “don’t use any model.” They wanted something. And a risk measure was there.
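The passage says convexity is measurable. One minimal way to sketch such a measure (this is an illustration, not the exact formula Taleb proposed to the IMF) is the second central difference of the profit-and-loss function under an up-shock and a down-shock of equal size: a negative value means the exposure is concave, i.e. losses accelerate, i.e. fragile. The `pnl` function and its numbers below are invented for illustration.

```python
def convexity_bias(pnl, x, shock):
    """Second central difference of the P&L around x.
    Negative => concave exposure (losses accelerate) => fragile.
    A sketch of a convexity measure, not the exact IMF heuristic."""
    return pnl(x + shock) + pnl(x - shock) - 2 * pnl(x)

# A Fannie-Mae-style exposure (hypothetical numbers): large, accelerating
# losses when the economic variable moves up, small profits when it moves down.
def pnl(rate):
    return -max(rate, 0.0) ** 2 + 0.1 * max(-rate, 0.0)

print(convexity_bias(pnl, 0.0, 1.0))  # negative, so the exposure is fragile
```

The appeal of such a measure is that it needs only three evaluations of the exposure, not a full model of the underlying variable.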

So here is something to use. The technique, a simple heuristic called the fragility (and antifragility) detection heuristic, works as follows. Let’s say you want to check whether a town is overoptimized. You measure that when traffic increases by 10,000 cars, travel time grows by ten minutes. But if traffic increases by 10,000 more cars, travel time now extends by an extra thirty minutes. Such acceleration shows that traffic is fragile and that you have too many cars; you need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or a negative convexity effect).
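The heuristic above can be sketched in a few lines: apply two equal-sized stressor increases and check whether the second increment of harm exceeds the first. The travel-time table below just encodes the numbers from the text; names and values are illustrative.

```python
def is_fragile(response, x, dx):
    """Sketch of the acceleration-of-harm heuristic: an exposure is
    fragile if equal-sized increases in the stressor produce
    accelerating harm, i.e. the second increment exceeds the first."""
    first = response(x + dx) - response(x)
    second = response(x + 2 * dx) - response(x + dx)
    return second > first

# Toy travel-time table from the text (minutes): the first extra
# 10,000 cars add ten minutes, the next 10,000 add thirty more.
travel_time = {0: 30, 10_000: 40, 20_000: 70}.get

print(is_fragile(travel_time, 0, 10_000))  # True: harm accelerates
```

For a robust (linear) system the two increments would be equal and the check would return `False`; for an antifragile one the second increment would be smaller.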

Likewise, government deficits are particularly concave to changes in economic conditions. Every additional deviation in, say, the unemployment rate — particularly when the government has debt — makes the deficit incrementally worse.

That was, in a way, the technique I used intuitively to declare that The Highly Respected Firm Fannie Mae was on its way to the cemetery. Now, with the IMF, we had a simple measure with a stamp. It looks simple, too simple, so the initial reaction of “experts” was that it was “trivial” (these were people who had visibly never detected these risks before; academics scorn what they can understand too easily and get ticked off by what they did not think of themselves).

According to the wonderful principle that one should use people’s stupidity to have fun, I enlisted my friend Raphael Douady to help me rewrite the simple idea using the most opaque mathematical derivations, with incomprehensible theorems that would take half a day (for a professional) to understand. Remarkably — as has been shown — if you say something in a complicated manner, with complex mathematics, people take it seriously. We got positive reactions, and we were now told that this simple detection heuristic was “intelligent” (by the same people).

FOOTNOTES
The method does not require a good model. Take a ruler. You know it is wrong: it will not be able to measure the height of the child accurately. But it can certainly tell you if he is growing. In fact, the error you get about the rate of growth of the child is much, much smaller than the error you would get measuring his height. The same with a scale: no matter how defective, it will almost always be able to tell you whether you are gaining weight, so stop blaming it.

Convexity is about acceleration. The remarkable thing about measuring convexity effects to detect model errors is that, even if the model is wrong, the measurement can tell you whether an entity is fragile and how fragile it is. As with the defective scale, we are only looking for second-order effects.
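The defective-scale point can be made concrete: a constant measurement bias corrupts every level reading but cancels exactly when you take differences, so trends (and, by extension, second-order effects) survive the defect. The weights and the bias below are invented for illustration.

```python
# A person actually gaining 0.5 kg per week (hypothetical data).
true_weights = [70.0, 70.5, 71.0, 71.5]

BIAS = 5.0  # the scale is off by a constant 5 kg: every level is wrong

def defective_scale(w):
    return w + BIAS

readings = [defective_scale(w) for w in true_weights]
# Levels are all wrong (75.0, 75.5, 76.0, 76.5)...
# ...but the constant bias cancels in differences, so the trend is exact.
changes = [b - a for a, b in zip(readings, readings[1:])]
print(changes)  # [0.5, 0.5, 0.5] despite the defective instrument
```

The same cancellation is why the convexity heuristic, built entirely from differences of model outputs, can detect fragility even when the model's absolute numbers are unreliable.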

Still curious? Read Antifragile: Things That Gain from Disorder
