Nassim Taleb on Decisions, Probability, and Extremistan

Nassim Taleb

Edited by John Brockman from Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction.

In 2006, Nassim Taleb singled out FNMA and bank risk managers as his prime perpetrators. He claims that those putting society at risk are “no true statisticians,” merely people using statistics in a self-serving manner. The subprime crisis helped him drive home his point about the limits of statistically driven claims.

“The banking system, betting against black swans, has lost over one trillion dollars (so far), more than was ever made in the history of banking.” He draws a tableau showing the boundaries where statistics works well and where it becomes questionable or unreliable.

Financial institutions earn money on transactions and lose everything taking risks they don’t understand. I want this to stop, and stop now; the current patching by the banking establishment worldwide is akin to using the same doctor to cure the patient when the doctor has a track record of systematically killing patients.

And we are beyond suckers: we are riding in a bus driven by a blindfolded driver, but refuse to acknowledge it. After 1998, when a “Nobel-crowned” collection of people blew up a hedge fund, the Fed helped arrange their bailout. Risk managers and regulators are among the “me-too” users who pick up statistical tools from textbooks without understanding them, he says. In their case the problem is one of statistical education and half-baked expertise.

Certain relationships that “look good” in research papers almost never replicate in real life. In some domains you can be extremely wrong and be fine; in others you can be slightly wrong and explode. Knowledge matters little, very little, in many situations, he writes. And no one has managed to show that using models that don’t work is harmless: it increases blind risk taking, hence the accumulation of hidden risks in the system.

Ben Bernanke found plenty of economic explanations with graphs, jargon and curves, the kind of façade-of-knowledge that you find in economics textbooks. I have nothing against economists: you should let them entertain each other with their theories and elegant mathematics and help keep college students inside buildings. But beware: they can be plain wrong, yet frame things in a way that makes you feel stupid arguing with them. So make sure you do not give any of them risk-management responsibilities.

Decisions

There are two types of decisions. The first is simple, “binary”: you care only about the probability of an event, not its magnitude, whether something is true or false. The second is more complex: the payoff depends on the size of the event, so you must weigh both its probability and its magnitude.

Probability structures

There are two classes of probability domains, very distinct qualitatively and quantitatively. In Mediocristan, exceptions occur but don’t carry large consequences. In Extremistan, exceptions can be everything (they will eventually, in time, represent everything). Note here an epistemological question: there is a category of “I don’t know” that I also bundle in Extremistan for the sake of decision making.
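A small simulation can make the distinction concrete. The numbers below are my own illustration, not Taleb’s: compare how much of the total the single largest observation represents under a thin-tailed (Mediocristan-like) variable and a fat-tailed (Extremistan-like) one.

```python
# Illustrative sketch of Mediocristan vs Extremistan (assumed parameters, not
# from the essay): what share of the total does the single largest observation carry?
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Mediocristan-like: human heights, roughly Gaussian (mean 170 cm, sd 10 cm)
heights = rng.normal(170, 10, n)
print("largest height / total :", heights.max() / heights.sum())   # negligible, ~1e-6

# Extremistan-like: wealth-style Pareto tail with exponent alpha = 1.2
wealth = rng.pareto(1.2, n) + 1.0
print("largest wealth / total :", wealth.max() / wealth.sum())     # often several percent or more
```

In the first case no single observation moves the total; in the second, one observation can dominate it, which is what it means for exceptions to “eventually represent everything.”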

Two difficulties

In an earlier version of this article, I wrote that the passage from theory to the real world presents two distinct difficulties: “inverse problems” and “preasymptotics”. The inverse problem becomes more acute the more theories and distributions can fit a given set of data; it is compounded by the small-sample properties of rare events. It is also acute in the presence of nonlinearities, as the families of possible models explode in number.
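A minimal sketch of the inverse problem, using hypothetical numbers of my own rather than anything from the essay: two models that fit the same sample about equally well can disagree by orders of magnitude about the tails.

```python
# Sketch of the inverse problem (illustrative only): two models fitted to the
# same data can imply wildly different tail risks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.t.rvs(df=3, size=2000, random_state=rng)   # the "true" world is fat-tailed

# Fit two candidate models to the same sample
mu, sigma = stats.norm.fit(data)          # thin-tailed Gaussian
df, loc, scale = stats.t.fit(data)        # fat-tailed Student-t

# Both look acceptable in-sample, yet they disagree enormously about a 10-unit move
print("P(X > 10) under the Gaussian fit :", stats.norm.sf(10, mu, sigma))
print("P(X > 10) under the Student-t fit:", stats.t.sf(10, df, loc, scale))
```

Nothing in the sample itself tells you which tail to trust; that is the inverse problem, and it gets worse the rarer the event.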

Preasymptotics

Most statistical education is based on these asymptotic, Platonic properties—yet we live in the real world, which rarely resembles the asymptote. This compounds the ludic fallacy: most of what students of statistics do is assume a structure, typically with a known probability. Yet the problem we have is not so much making computations once you know the probabilities, but finding the true distribution.
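To see preasymptotics at work, here is a rough simulation of my own (assumed parameters, not from the essay): the sample mean of a fat-tailed variable remains erratic at sample sizes that would be more than sufficient in Mediocristan.

```python
# Illustrative sketch of preasymptotics: how quickly does the sample mean settle down?
import numpy as np

rng = np.random.default_rng(2)

def spread_of_sample_means(draw, n, trials=2000):
    """Standard deviation of the sample mean across many repeated samples of size n."""
    return np.std([draw(n).mean() for _ in range(trials)])

gaussian = lambda n: rng.normal(0, 1, n)
pareto   = lambda n: rng.pareto(1.5, n) + 1.0   # tail exponent 1.5: finite mean, infinite variance

for n in (100, 10_000):
    print(f"n={n:6d}  gaussian: {spread_of_sample_means(gaussian, n):.4f}"
          f"  pareto: {spread_of_sample_means(pareto, n):.4f}")
```

The Gaussian spread shrinks like one over the square root of n, as the textbook asymptote promises; the Pareto spread shrinks far more slowly, so the asymptotic machinery is not yet usable at these sample sizes.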

In a recent article, an economist expressed his surprise that financial markets experienced a string of events that “would happen once in 10,000 years”. A portrait of the gentleman accompanying the article revealed he was considerably younger than 10,000 years. It is fair to assume that he was not drawing his inference from his own empirical experience (nor from history at large), but from some theoretical model that produces the risk of rare events. The rarer the event, the worse its inverse problem, since we don’t observe it. And theories are fragile (just think of Professor Bernanke).
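A back-of-the-envelope sketch (my numbers, not the economist’s) shows how much a “once in 10,000 years” label depends on the assumed model: take a daily move of five standard deviations.

```python
# Illustration: the same 5-sigma daily move, priced under two different models.
from scipy import stats

p_gauss = stats.norm.sf(5)        # Gaussian tail probability of a 5-sigma move
p_t3    = stats.t.sf(5, df=3)     # Student-t with 3 degrees of freedom (fat tails)

days_per_year = 252
print("Gaussian :  once every", round(1 / (p_gauss * days_per_year)), "years")      # ~ 14,000 years
print("Student-t:  once every", round(1 / (p_t3 * days_per_year), 2), "years")      # about twice a year
```

Under the Gaussian the event is genuinely once-in-millennia; under a fat-tailed fit it is a roughly twice-a-year nuisance. Which figure you quote is a property of the model, not of the data.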

Small-probability events carry large impacts, and at the same time these events are the most difficult to compute from past data. This is why we should worry about the fourth quadrant of the map: the zone where rare events matter most and are hardest to estimate.

For rare events, the confirmation bias (the tendency, Bernanke-style, to seek samples that confirm your opinion rather than those that disconfirm it) is very costly and very distorting. Most samples will not reveal the black swans; and once you are hit with them, you will not be in a position to discuss them.
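A quick bit of arithmetic (my numbers, purely for illustration) shows why most samples are silent about black swans.

```python
# If a damaging event has probability p per trading day, what fraction of
# 10-year daily histories (about 2,520 observations) contain no occurrence at all?
p, n = 1e-4, 2520                      # assumed: a one-in-10,000-days event
print((1 - p) ** n)                    # ~ 0.78: most samples never see it
```

Roughly four histories out of five look reassuringly calm, which is exactly the sample a confirmation-seeker will hold up as evidence.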

Fallacy of the single event probability

In a developed country a newborn female is expected to die at around 79, according to insurance tables. When she reaches her 79th birthday, her life expectancy is another 10 years. At the age of 90, she should have another 4.7 years to go. The conditional expectation of additional life drops as a person gets older.

In Extremistan the expectation of an increase in a random variable does not drop as the variable gets larger. In the real world, say with stock returns, conditional on a loss being worse than 5 units (in some conventional unit of measure), the expected loss will be around 8 units.
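The contrast between the life-table example and the stock-return example can be sketched with standard formulas; the tail exponent below is a hypothetical value chosen only to roughly reproduce the “around 8 given worse than 5” figure, not a number from the essay.

```python
# Conditional expectation beyond a threshold k: thin tail vs Pareto tail.
from scipy import stats

# Thin tail (standard normal): E[X | X > k] = pdf(k) / sf(k); the overshoot shrinks.
for k in (1, 3, 5):
    cond = stats.norm.pdf(k) / stats.norm.sf(k)
    print(f"Gaussian  E[X | X > {k}] = {cond:5.2f}   overshoot = {cond - k:.2f}")

# Pareto tail with exponent alpha: E[X | X > k] = alpha * k / (alpha - 1); the overshoot grows with k.
alpha = 2.7   # hypothetical exponent, chosen so that E[X | X > 5] comes out near 8
for k in (1, 3, 5):
    cond = alpha * k / (alpha - 1)
    print(f"Pareto    E[X | X > {k}] = {cond:5.2f}   overshoot = {cond - k:.2f}")
```

In the thin-tailed case the expected overshoot beyond the threshold contracts, which is the life-table pattern; in the Pareto case it expands in proportion to the threshold, which is the Extremistan pattern.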

You may be able to predict the occurrence of a war, but you can’t gauge its effect. You may correctly predict a skilled person getting “rich,” but he can make a million, ten million, a billion, ten billion: there is no typical number. Sales estimates are totally uncorrelated with actual sales.

The absence of “typical” events is what makes prediction markets ludicrous, as they make events look binary. “A war” is meaningless: you need to estimate its damage, and no damage is typical. Many predicted that the First World War would occur, but nobody predicted its magnitude.

Parametrizing a power law lends itself to monstrous estimation errors: small changes in the main “alpha” parameter of a power law lead to monstrously large effects in the tails. For instance, move alpha from 2.3 to 2 in the publishing business and the sales of books in excess of 1 million copies triple. Figure 5 shows more than 40,000 computations of the tail exponent “alpha” from different samples of different economic variables.
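The book-sales figure is easy to reproduce from the Pareto tail formula; the cutoff below, the level above which sales are assumed to follow a power law, is my own assumption, not a number from the essay.

```python
# Sensitivity of the far tail to the exponent alpha, with P(X > x) = (x / x_min) ** (-alpha).
x_min = 25_000          # assumed level above which book sales behave like a power law
x = 1_000_000           # the "over 1 million copies" threshold from the text

def tail_prob(x, alpha, x_min=x_min):
    return (x / x_min) ** (-alpha)

ratio = tail_prob(x, 2.0) / tail_prob(x, 2.3)
print("P(sales > 1M) is multiplied by:", round(ratio, 2))   # ~ 3: roughly tripled
```

A shift in alpha that no estimation procedure could reliably detect is enough to triple the frequency of million-copy bestsellers; the tails amplify parameter error rather than dampening it.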

Living in the fourth quadrant

A charlatan is someone likely (statistically) to give you positive advice, of the “how to” variety. The second type of advice, negative advice telling you what to avoid, is vastly more informational, and typically less charlatanic. Even in academia, there is little room for promotion by publishing negative results.

I used to give the same mathematical finance lectures to both graduate students and practitioners. I never had a disagreement with statisticians (who build the field), only with users of statistical methods. This convinced me to engage in my new project: “how to live in a world we don’t understand”.

Only fools optimize, not realizing that a simple model error can blow through their capital (as it just did). The one weak point I know of in financial markets is their ability to drive people and companies toward “efficiency” at the expense of protection against extreme events. Systems that optimize tend to become more fragile as they become more efficient.

Source: Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction, John Brockman

"A gilded No is more satisfactory than a dry yes" - Gracian