There is a duality between the notions of risk and uncertainty. Yet the public perceives the two very differently, and often misinterprets them, which makes it important to probe deeper into their essence.
Risk, as commonly understood, is associated with a situation that involves exposure to danger. However, this understanding is at best an oversimplification. According to Nassim Taleb, author of “The Black Swan” and “Antifragile”, risk is something that can be measured and, to an extent, predicted. It denotes a scenario with known possible outcomes and known probabilities. For instance, the flip of a fair coin entails risk: there is a known 50% probability of it landing heads, and the same for tails. Under Taleb’s definition, then, risk implies a degree of predictability and a potential for mitigation or control.
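What “known outcomes and known probabilities” buys us can be shown in a few lines: a fair coin can be simulated, and the empirical frequency converges to the known 50%. A minimal sketch using only Python’s standard library (the seed is fixed purely for reproducibility):

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# A fair coin is pure "risk" in Taleb's sense: the outcome space
# (heads/tails) and the probabilities (0.5 each) are fully known.
flips = [random.random() < 0.5 for _ in range(100_000)]
heads_rate = sum(flips) / len(flips)

# By the law of large numbers, the empirical rate approaches the
# known probability of 0.5 as the number of flips grows.
print(round(heads_rate, 3))
```

This is exactly what uncertainty denies us: for a genuinely uncertain event there is no known distribution to converge to.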
On the other hand, uncertainty signifies an inability to predict or calculate the outcome accurately. It suggests the presence of unknown outcomes or probabilities. In an uncertain scenario, we cannot confidently predict what will happen, or how likely each potential outcome is. Taleb underscores this distinction through the concept of a “black swan” event, characterized by its unpredictability, rarity, and extreme impact. These events lurk in the realm of uncertainty, out of sight until they occur, and then they redefine what we consider possible.
The public’s misunderstanding of the difference between risk and uncertainty is often tied to our innate desire for control and predictability. As Gerd Gigerenzer, author of “Risk Savvy: How to Make Good Decisions,” posits, our brains are wired to prefer known risks over the unsettling vagueness of uncertainty. We often attempt to quantify the unquantifiable, to apply probabilities to unpredictable events, a phenomenon Gigerenzer terms the “illusion of certainty”.
This widespread confusion has profound implications. In the financial world, for example, the belief that one can predict and control risks can lead to disastrous consequences when unexpected events occur. The 2008 financial crisis is a prime example of this misjudgment, where the presumed control over financial risks turned out to be an illusion in the face of a global economic “black swan”.
As I mention in my last book, The End of Wisdom: Why Most Advice is Useless, whatever advice is given in finance or trading, it rests on an underlying presumption: that there are ways of knowing the future. A business executive who claims to know which strategy ought to be used in each scenario is likewise implicitly assuming knowledge of the future.
But the future is intrinsically uncertain. Unlike a game of blackjack or poker, or a coin flip, the real world is marked by uncertainty, not risk.
Misconstruing uncertainty as risk also stifles innovation and discourages entrepreneurship. Someone who treats the unpredictable outcomes of starting a new venture as measurable risks may be inclined to stick to the status quo and avoid venturing into the unknown.
This misperception also pervades the realm of health and safety. For instance, consider the public response to the COVID-19 pandemic. The initial reaction was to treat the novel virus as a known risk, attempting to model its behavior based on previous pandemics and viruses. However, COVID-19 was truly an uncertain event, a “black swan”, whose course defied traditional prediction models, causing widespread chaos and a scramble to react.
How then do we combat this misperception? The key lies in fostering a proper understanding of these two concepts and cultivating an ability to respond to both effectively.
To cope with uncertainty, Taleb suggests embracing “antifragility”: designing systems and strategies that not only withstand shocks but also grow stronger under stress. This approach includes diversified investments in the financial world and decentralized decision-making structures in organizations. It implies making peace with the uncontrollable nature of certain events and preparing for them rather than trying to predict or prevent them.
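The diversification point can be made concrete with a small Monte Carlo sketch. The return figures below (5% mean, 20% standard deviation, ten independent assets) are illustrative assumptions, not a model of real markets:

```python
import random
import statistics

random.seed(1)

# Simulate yearly returns for independent, identically distributed
# assets: mean 5%, standard deviation 20% (illustrative numbers only).
def asset_return():
    return random.gauss(0.05, 0.20)

trials = 20_000
single = [asset_return() for _ in range(trials)]

# An equal-weight portfolio across 10 independent assets keeps the
# same expected return but averages out the noise.
portfolio = [sum(asset_return() for _ in range(10)) / 10
             for _ in range(trials)]

print(round(statistics.pstdev(single), 3))     # close to 0.20
print(round(statistics.pstdev(portfolio), 3))  # close to 0.20 / sqrt(10)
```

The portfolio’s volatility shrinks by roughly the square root of the number of independent assets, while the expected return stays the same: a structural preparation for shocks rather than a prediction of them.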
Gigerenzer, on the other hand, promotes “risk literacy” to combat the misperception of uncertainty. It involves educating the public on the nature of risk and uncertainty and honing their skills to make decisions under both. He also emphasizes the significance of heuristics, or rules of thumb, in decision-making under uncertainty. These intuitive strategies often prove more effective in uncertain scenarios than complex algorithms or models, which are better suited to predictable, risky situations.
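One concrete form such heuristics take is Gigerenzer’s “fast-and-frugal tree”: a short sequence of yes/no cues, each of which either triggers an immediate decision or passes to the next cue. The sketch below is a hypothetical triage rule in that style; the cues and the age threshold are invented for illustration, not taken from Gigerenzer’s actual studies:

```python
# A hypothetical fast-and-frugal tree: each cue either decides
# immediately or defers to the next one. No probabilities are
# estimated; the rule is transparent and fast to apply.
def triage(chest_pain: bool, st_elevation: bool, age: int) -> str:
    if st_elevation:          # first cue decides on "yes"
        return "coronary care"
    if not chest_pain:        # second cue decides on "no"
        return "regular ward"
    # final cue: a simple threshold settles the remaining cases
    return "coronary care" if age >= 60 else "regular ward"
```

The appeal under uncertainty is that such a rule ignores most of the available information by design, which makes it robust when the full probability model is unknown or unknowable.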
Recognizing that not all unpredictability can be neatly categorized as risk and learning to navigate uncertainty can pave the way for better decision-making, greater resilience, and ultimately, a more profound comprehension of our complex world.
When we delve into the realms of artificial intelligence (AI), particularly artificial general intelligence (AGI), and the concept of the Singularity, the differentiation between risk and uncertainty becomes increasingly pertinent. AI development teems with both risk and uncertainty, and understanding the difference between the two is crucial for shaping our approach to this revolutionary technology.
For instance, when we train machine learning models, we engage with risk. We know from the dataset’s distribution and the model’s capacity what probable errors might occur, enabling us to implement measures to control or mitigate them. Overfitting and underfitting, for example, are well-defined risks in machine learning, and we have developed strategies such as cross-validation, regularization, and dropout to manage them.
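The train/test gap that defines overfitting can be demonstrated without any ML library. In this sketch, a 1-nearest-neighbour model memorizes noisy training data perfectly, yet on a held-out split it loses to a trivial mean predictor; the data-generating process (y = x plus Gaussian noise) is an illustrative assumption:

```python
import random

random.seed(0)

# Toy regression data: y = x + noise. The noise is deliberately large
# so that memorizing it hurts generalization.
def make_data(n):
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [x + random.gauss(0, 1.0) for x in xs]
    return xs, ys

train_x, train_y = make_data(200)
test_x, test_y = make_data(200)

def nn_predict(x):
    # 1-nearest neighbour: return the label of the closest training point.
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

mean_y = sum(train_y) / len(train_y)  # trivial baseline: predict the mean

def mse(xs, ys, predict):
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The 1-NN model has zero training error: it memorized the data ...
print(mse(train_x, train_y, nn_predict))           # 0.0
# ... but on unseen data it does worse than the mean baseline,
# because it memorized the noise along with the signal.
print(mse(test_x, test_y, nn_predict))
print(mse(test_x, test_y, lambda x: mean_y))
```

Holding out a test set is the simplest form of the cross-validation strategy mentioned above: it turns a hidden generalization failure into a measurable, and therefore manageable, quantity, which is what makes overfitting a risk rather than an uncertainty.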
In contrast, the advent of AGI introduces an element of uncertainty. AGI, by its very definition, is an AI that could perform any intellectual task that a human being can do. But the pathway to achieving AGI, its potential capabilities, and the implications of such a creation are shrouded in uncertainty. No one can predict exactly when or how AGI will be developed or what will happen when it is. This unpredictability is further magnified by the concept of the Singularity – a hypothetical future point when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization.
When contemplating the future of AGI and the Singularity, we encounter what Nassim Taleb would classify as a “black swan” event – an event with an unknown probability due to its unprecedented nature. The consequences of such an event are potentially profound, underscoring the importance of uncertainty in our contemplations about the future of AI.
In such a context, Taleb’s concept of “antifragility” becomes exceedingly relevant. As we venture into the uncertain realm of AGI, we must strive to create systems that not only resist shocks but also adapt and improve in response to them. This might involve designing AI systems that can learn from their mistakes or self-correct when faced with unanticipated situations. It might also involve implementing stringent safeguards and emergency measures to prevent or control potential negative outcomes.
Furthermore, Gigerenzer’s recommendation for heuristic decision-making under uncertainty offers a significant insight for AI development. In the uncertain field of AGI, relying on intuition and simple rules of thumb could, paradoxically, prove more effective than complex algorithms or models. This approach might include heuristic rules for AI ethics or development practices to guide us through the uncertainty surrounding AGI and the Singularity.