The Precautionary Principle is a philosophical and legal approach to innovations that have the potential to cause harm when sufficient scientific knowledge about them is lacking.
The technological and economic progress that began with the industrial revolution created an insatiable demand for natural resources. The extraction of these resources, and their conversion into energy, has created ecological catastrophes across the planet.
Had the Precautionary Principle been applied in the past, many ecological disasters might have been averted. The problem with this idea is the extent to which it can be followed in practice. If we were to hold off on all scientific research until we had enough data, would this not destroy innovation?
Here, a distinction is necessary. The Precautionary Principle should be applied when there is systemic risk. A normal risk affects only one component or individual, while a systemic risk can destroy the whole system.
Ecology is a system. The atmosphere and the earth are connected, and to them, all living things are connected. If you change one variable, such as the burning of fossil fuels, a chain reaction is triggered: carbon dioxide levels rise, the oceans warm, the ice melts, sea and land animals die, and finally, humans die. The risk of burning fossil fuels, one action, threatens the whole system.
Instances of systemic risk are numerous. In politics, intervening in complex political systems can trigger a domino effect that throws a country into a hellish state, moving it from totalitarian order into anarchic chaos. Such was the case in Afghanistan, Iraq, Libya, and Syria, among many others. In the financial collapse of 2008, millions of lives were upended by risks taken in the US mortgage market. Because political and financial systems are complex, they are hard to control, and trying to manipulate them without a good understanding of them is unwise. Intervening in Iraq was a mistake not only because Iraq is a complex political system, but also because it was a geopolitical piece that maintained the balance of power. The disruption of Iraq's politics caused internal and external chaos, and the costs outweighed the benefits, at the very least in the short to medium term.
In The End of History and the Last Man, Fukuyama, in his criticism of communism as an economic system, describes how the Soviet government had to set prices for hundreds of thousands of items each day – a task so complicated, requiring so many people, that it was impossible to do well. Eventually, Russia, China, and the Southeast Asian economies had to adopt more capitalistic models. The capitalistic model works better because it allows for bottom-up adaptation, and only bottom-up adaptation can accommodate complex systems.
The trouble begins when man thinks that he can, with top-down intervention, predict and control what occurs in a complex system. How far chaos propagates depends on the structure of the system (the internet spreads ideas faster, modern transportation moves people faster) and the size of the intervention (sending an army into another country is different from insulting its leader).
In 2020, the world witnessed a pandemic in which patient zero in China changed the lives of people across the entire world: airports and businesses shut down, and over a million people died. The spread of Covid-19 was accelerated by modern transportation (airplanes, subways, buses). When it comes to pandemics, systemic risk increases with globalization.
Many, today and in the past, have argued that the Precautionary Principle should be applied to AI. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom argues that an artificial intelligence will learn how to improve its own architecture. Superintelligence, as Bostrom defines it, refers to an intellect that outperforms the human intellect across virtually all cognitive domains. If such an intellect comes into being, human intelligence would, in comparison, be negligible. And such a technological development, in the hands of governments that possess the power to manipulate the environment but lack the wisdom of restraint and prudence, could set off a series of events that destroys the human species.
In a paper on the subject, Taleb uses the Precautionary Principle to argue against the genetic modification of organisms. Modifying the genes of organisms is tinkering with complexity, since organisms are complex systems that are part of the ecology, a still more complex system. Since we are ignorant of the effects of such actions, and since genetic engineering is a recent field that we do not sufficiently understand, we ought to avoid playing with fire.
In the same way, our ignorance of our own bodies and their complexities has caused us harm. Previously, we thought microbes were bad for us, but as Ed Yong explains in I Contain Multitudes, countless microbes are good for us, while only around one hundred species can harm us. Yet we remain obsessed with antibacterial cleaning products.
Microbes are neither good nor bad in themselves; their effect depends on context. A microbe can be good in the sense that it regulates our gut health, but if the same microbe gets onto our skin and infects a wound, it causes us trouble.
In his book, The Basic Laws of Human Stupidity, Cipolla categorizes people into four types: intelligent, bandits, helpless, and stupid. Intelligent people look for win-win situations, where they benefit themselves and others. Bandits only seek to benefit themselves, while harming others. The helpless seek to benefit others while harming themselves. And the stupid cause harm to others without incurring personal benefit.
The point of the book, beyond its humorous commentary on human nature, is to emphasize how simple it is to determine whether actions are intelligent or stupid. An action is stupid only if it causes more harm than good to society. Powerful people are not immune to stupidity.
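Cipolla's four types reduce to a two-axis classification: does the action benefit or harm the actor, and does it benefit or harm others? A minimal sketch of that rule (the function name, numeric scale, and the choice to count zero as a benefit are my own illustrative assumptions, not Cipolla's):

```python
def cipolla_type(gain_to_self: float, gain_to_others: float) -> str:
    """Classify an action on Cipolla's two axes.

    Positive values are benefits, negative values are harms.
    Zero is treated as a (neutral) benefit -- a judgment call
    for illustration, not something the book specifies.
    """
    if gain_to_self >= 0 and gain_to_others >= 0:
        return "intelligent"  # win-win: benefits self and others
    if gain_to_self >= 0:
        return "bandit"       # benefits self at others' expense
    if gain_to_others >= 0:
        return "helpless"     # benefits others while harming self
    return "stupid"           # harms others with no gain to self

# A fair trade benefits both parties:
cipolla_type(1.0, 1.0)    # → "intelligent"
# Theft benefits the thief at the victim's expense:
cipolla_type(1.0, -1.0)   # → "bandit"
# Vandalism harms others and costs the vandal as well:
cipolla_type(-0.5, -2.0)  # → "stupid"
```

On this reading, the book's definition of stupidity is simply the quadrant where both coordinates are negative, which is why it is so easy to recognize after the fact.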
The maxim here is: It is much easier to avoid stupidity than to seek intelligence.
Under uncertainty, harm ought to be minimized, and therefore action ought to be minimized. Stupidity is not manifest until after the fact. The stupid person does not necessarily intend to harm others or themselves; they may do so out of ignorance or overconfidence. The scientist who invents a technology that makes everyone sick is stupid not because he is nefarious, but because his motivation, a small financial gain, is negligible compared to the harm he has caused.
In a tweet, Taleb demonstrates the Precautionary Principle with a fictional dialogue.
“Someone as rich as you shouldn’t be that vigilant with money.” Fat Tony: “If I weren’t so vigilant, I wouldn’t have been, and stayed, rich.”