The Precipice Summary (7/10)

“Six thousand years.” That’s how long human civilization has existed, according to Toby Ord, a Senior Research Fellow at Oxford University’s Future of Humanity Institute. This is but an instant compared to the 200,000 years of Homo sapiens, and a mere heartbeat against the 3.5 billion years of life on Earth. Yet, in the nuclear age, we’ve come perilously close to snuffing out this brief candle of civilization — not just once, but many times over. Such is the opening gambit of Toby Ord’s thought-provoking work, The Precipice: Existential Risk and the Future of Humanity. This magnum opus is a clarion call to acknowledge and address the existential risks that threaten to extinguish human civilization.

In the book’s first part, “The Precipice,” Ord sets the stage with the chilling estimate that the existential risk we face this century is one in six – equivalent to playing Russian roulette with humanity’s future. He frames this in terms of a “precipice,” a point in time where our actions could lead to the irreversible end of humanity. “We stand at a critical juncture in our history,” Ord cautions, “where our actions could lead us to an extraordinary future, or to disaster.”

Part II, “Existential Risk,” dives deep into the concept of existential risks, which Ord categorizes into natural and anthropogenic. Ord points out that we’ve survived natural existential risks like asteroid impacts and supervolcanic eruptions for hundreds of thousands of years, a track record that implies their chance of destroying us in any given century must be low. The anthropogenic risks — nuclear war, engineered pandemics, unaligned artificial intelligence (AI), and uncontrolled climate change — are far more pressing, as these are risks we’ve created ourselves and thus have control over. As Ord explains, “Our survival as a species is not something we can take for granted. It is something we must earn.”

Delving deeper into anthropogenic risks in Part III, “The Major Risks,” Ord reveals the statistics behind the threats: nuclear war, a 1 in 1,000 chance this century; engineered pandemics, a 1 in 30 chance; unaligned AI, a 1 in 10 chance. The numbers are staggering, and Ord elucidates the complex mechanisms that underlie these probabilities, leading readers on a journey through the fields of biology, computer science, and geopolitics.
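As a back-of-the-envelope illustration (not a calculation from the book), the listed estimates can be combined under a simplifying assumption of independence to see how they relate to Ord’s overall one-in-six figure:

```python
# Rough combination of Ord's per-risk estimates for the century.
# Assumes the risks are independent, which the book does not claim;
# Ord's overall 1-in-6 figure also covers risks not listed here.

risks = {
    "nuclear war": 1 / 1000,
    "engineered pandemics": 1 / 30,
    "unaligned AI": 1 / 10,
}

# Probability that none of these catastrophes occurs is the
# product of each risk's complement.
p_none = 1.0
for p in risks.values():
    p_none *= 1 - p

p_any = 1 - p_none
print(f"Combined risk from these three alone: {p_any:.3f}")
```

Even under this crude independence assumption, the three risks combine to roughly 13% — already most of the way to Ord’s one-in-six (about 17%) total, which also folds in climate change and other, less foreseeable threats.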

Nuclear War

For the threat of nuclear war, Ord investigates historical data, political dynamics, and the physical realities of nuclear weaponry. This includes analyzing patterns of conflict, existing geopolitical tensions, nuclear stockpiles, and the policies of nuclear-armed states. He combines this with expert testimony on the likelihood of nuclear war, taking into account factors such as arms control agreements, nuclear deterrence theory, and nuclear proliferation.

Engineered Pandemics

In estimating the probability of engineered pandemics, Ord explores advancements in biotechnology, particularly in gene editing and synthesis capabilities. He considers how these technologies might be misused either intentionally or accidentally to create a highly virulent and deadly pathogen. The analysis involves a range of disciplines, from molecular biology to public health, and takes into account the dual-use nature of many biotechnologies, their spread and democratization, and the difficulty of controlling access to them.

Unaligned Artificial Intelligence

Assessing the risk of unaligned AI, Ord delves into computer science, machine learning, and AI alignment research. He evaluates AI’s current capabilities and growth trajectory and considers the technical challenges in ensuring that advanced AI systems behave safely and as intended. Here, he weighs expert predictions on AI development timelines and the odds of an AI-related catastrophe. He also reflects on factors such as AI races, where competitive pressures might lead to the deployment of unsafe AI.

In all of these calculations, Ord acknowledges the inherent uncertainty and ambiguity involved in making such far-reaching predictions. His estimates are best-guess figures based on the available data, informed assumptions, and expert opinion, and should be understood as such. Despite their inherent uncertainty, these risk estimates serve a crucial purpose in highlighting the potential dangers and prompting efforts to mitigate them.

In the final section, “Our Response,” Ord delineates a roadmap for addressing these existential risks. He advocates for long-term thinking, international cooperation, and effective altruism — the practice of using evidence and reasoning to determine the most effective ways to benefit others. Ord urges readers to recognize the unprecedented power we now wield: “The choices we make today will resonate for millions of years,” he warns.

Ord concludes with a heartfelt appeal: “The power is in our hands. We must use it wisely, with diligence and great care, for the sake of all those who are to come.” His plea is not one of despair but of hope, urging us to turn our attention and resources to the existential risks that hang over humanity.

Throughout The Precipice, Ord paints a daunting picture, yet he remains optimistic. The existential risks we face are grave, but they are not insurmountable. By recognizing these threats, marshalling global resources, and taking informed, deliberate action, we can step back from the precipice and secure a promising future for humanity.

In essence, Toby Ord’s The Precipice is a philosophical treatise, a scientific exploration, and an urgent manifesto all in one. It’s a wake-up call to humanity, imploring us to recognize and act on the profound risks we face. It compels readers to think deeply about our place in time, our responsibilities to future generations, and the vast potential of our collective future — if we can navigate the precipice we currently stand on.



"A gilded No is more satisfactory than a dry yes" - Gracian