Superintelligence: Paths, Dangers, Strategies Summary (8/10)

Superintelligence by Nick Bostrom is about the inevitability of a technological dystopia unless serious action is taken.

Imagining a technological dystopia is not original.Huxley and Orwell, have been able to write about the end of the world we love in novels, that people to this day refer to, they even have debates about who was more accurate.

Today, there are a few names who have achieved some fame for their technological fortune telling. The first that comes to mind is Ray Kurzweil, who in The Age of Spiritual Machines shows us the accuracy of the predictions he made decades ago, as well as anticipate what he calls “The Singularity.”

But what is noteworthy about Kurzweil, is that he doesn’t mind if artificial intelligence destroys homo sapiens, he looks forward to it. That is, homo sapiens in its current form. A version of homo sapiens where we are partly machines is okay though.

For him, there is nothing that can stop the development of artificial intelligence, so we might as well have something cool to look forward to, like immortality.

And this is where Nick Bostrom comes in. He is not so optimistic, not because he doesn’t think we cannot build superhuman AI, but because too much can go wrong if we do.

In Superintelligence, he presents different thought experiments of what can go wrong, and argues that unless we become highly sophisticated in the way that we manage the development of artificial intelligence, we may be mindlessly rushing towards a future that will put an end to the human race, in all it forms.

One of the arguments against Bostrom is that AI is only good for specialized tasks, but cannot achieve general intelligence like human beings.

But Bostrom refers us to Turing, to see how this problem can be overcome. Turing’s idea was to design a program that acquired most of its content through learning rather than having it pre-programmed. A variation of Turing’s idea (which is a child machine that can develop its potentialities by accumulating content) is a seed AI, that would be a more sophisticated AI that would be capable of improving its own architecture.

What does he mean by superintelligence? Critics of the doomsday theorists (like Bostrom) that say that AI can only outperform humans in specialized tasks, are making a semantic error, because Superintelligence, as Bostrom uses the term, refers to intellects that can outperform human minds across multiple general cognitive domains. But what does that mean exactly? Here, Bostrom divides potential super capabilities into three distinct parts: speed superintelligence, collective superintelligence, and quality superintelligence – each of which is enough to make human intelligence negligible.

Perhaps, you may accept that the possibility is there, but you may question the motivation of countries for developing artificial intelligence to such a degree. But that is a naive objection.

Besides the obvious advantage of gaining power over its citizens, and over other nations, they will become better at minimizing resistance to their authority. Governments can encourage its citizens to use genetic selection to increase the nation’s stock of human capital, and to select for traits that increase long-term social stability like docility, conformity, and obedience. You know, like in 1984.

Another counterargument may be that there are many paths that could unfold in our attempt to build superintelligence, but Bostrom reminds us that just because there are multiple paths, this does not mean there are multiple destinations. We may be in a situation where we encounter obstacles in our way, but that eventually, after enough attempts, we finally succeed.


Navigate the intricate maze of Artificial Intelligence with “Through a Glass Darkly: Navigating the Future of AI.” This isn’t just another tech book; it’s a curated conversation featuring diverse experts—from innovators to ethicists—each lending unique insights into AI’s impact on our world. Whether you’re a student, a professional, or simply curious, this book offers a balanced, accessible guide to understanding AI’s promises and pitfalls. Step beyond the hype and discover the future that’s unfolding. Order your copy today.


 Slow, Medium, Fast Takeoffs

The development of AI into superintelligence can either takeoff at a slow, medium or fast pace.

A slow takeoff is the ideal scenario, that will take place over decades or centuries. This would allow us ample time to respond and adapt.

A medium takeoff will occur over weeks or months, which will not afford us too much time to experiment or to respond politically, but existing systems could be applied for the challenge.

A fast takeoff occurs over minutes, hours, or days and in this scenario, there will not be time to deliberate.

A fast or medium takeoff are the most likely, but if pricey super computers are difficult to scale and Moore’s law will have by then expired, a slow takeoff cannot be ruled out.  

Most humans value their survival, this may not be the case with AI. There are many errors that can unfold because of similar differences between AI and humans. If programmers defined a simple goal for AI such as: “Make us smile”, there may be a perverse instantiation that paralyzes human facial muscles into constant beaming smiles. It does not matter if that was not what the programmer meant to do. Computers are too busy to pontificate about what you intended to program them to do.

Bostrom gives the absurd paperclip example to make this point. Imagine an AI that is designed in a factory, is given the final goal of maximizing the manufacture of paperclips, and then goes on to convert Earth and other parts of the observable universe into paperclips. That can conceivably happen, because AI doesn’t give a fuck.

"A gilded No is more satisfactory than a dry yes" - Gracian