Past developments and present capabilities (Superintelligence)

Growth Modes and Big History

Not long ago, our ancestors were still swinging through African trees. In evolutionary terms, the rise of Homo sapiens was a rapid ascent. Small shifts in posture, thumbs, and brain organization led to an extraordinary leap in cognition. Humans became capable of abstract thought, complex communication, and the cultural accumulation of knowledge—abilities unmatched by any other species.

These advances enabled humans to develop technologies that fueled migrations out of Africa and into entirely new environments. After agriculture took hold, populations swelled. Denser communities meant more shared ideas, specialized skills, and, eventually, a sharp increase in economic productivity. The Agricultural Revolution changed humanity’s growth trajectory. The Industrial Revolution amplified it further, ushering in a new era of productivity and wealth creation.


Changing Rates of Growth

For most of history, growth was extraordinarily slow. Hundreds of thousands of years ago, it took about a million years for human productive capacity to increase enough to sustain an additional million people living at subsistence levels. By 5000 BCE, after agriculture’s rise, the same amount of growth took just two centuries. Today, the global economy grows enough every 90 minutes to sustain another million people.

Even without another fundamental shift, current growth rates promise astounding changes. If global economic growth holds steady, the world could be about 4.8 times richer by 2050 and some 34 times richer by 2100 than it is today. Yet such exponential growth, remarkable as it is, pales in comparison to what another transformative shift, on the order of the Agricultural or Industrial Revolutions, could achieve.
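The mechanics behind such projections are plain compound growth. A minimal sketch follows; the 4.5% annual rate is an illustrative assumption roughly matching recent decades, not a figure stated in the book, and small differences in the assumed rate produce noticeably different century-scale multipliers:

```python
import math

# Compound growth: after n years at a steady annual rate g, total output
# multiplies by (1 + g)**n.  The 4.5% rate used below is an illustrative
# assumption, not a figure taken from the book.

def growth_multiplier(annual_rate: float, years: int) -> float:
    """Factor by which output grows after `years` at `annual_rate`."""
    return (1 + annual_rate) ** years

def doubling_time(annual_rate: float) -> float:
    """Years needed for output to double at a steady annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

g = 0.045  # assumed steady annual growth rate
print(f"Multiplier over 36 years (to ~2050): {growth_multiplier(g, 36):.1f}x")
print(f"Multiplier over 86 years (to ~2100): {growth_multiplier(g, 86):.1f}x")
print(f"Doubling time at {g:.1%} growth: {doubling_time(g):.1f} years")
```

At 4.5% the 36-year multiplier comes out near 4.9x, in the same ballpark as the projections above; the exact published figures depend on the particular rate and base year assumed.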


The Prospect of Accelerating Growth

Economist Robin Hanson has estimated characteristic doubling times for the world economy under distinct historical growth modes. For early hunter-gatherer society, the world economy doubled roughly every 224,000 years. The advent of farming slashed that to 909 years. Industrial society has reduced the doubling time to about 6.3 years. If another transition of magnitude comparable to the previous two were to occur, the world economy could come to double roughly every two weeks.
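The two-week figure can be reconstructed from the doubling times above: each transition so far shortened the doubling time by roughly a factor of 150 to 250. A minimal sketch, using the geometric mean of the two historical speed-ups as the assumed size of the next jump (my framing of the extrapolation, not necessarily Hanson's exact method):

```python
import math

# Historical doubling times of the world economy (years), per Hanson's growth modes.
hunter_gatherer = 224_000
farming = 909
industrial = 6.3

# Each transition shrank the doubling time by a large, similar-sized factor.
step1 = hunter_gatherer / farming   # ~246x speed-up at the Agricultural Revolution
step2 = farming / industrial        # ~144x speed-up at the Industrial Revolution

# Assume the next transition is of comparable magnitude: take the geometric
# mean of the two historical speed-ups as the size of the next jump.
typical_step = math.sqrt(step1 * step2)
next_doubling_days = industrial * 365.25 / typical_step

print(f"Projected doubling time: {next_doubling_days:.0f} days")  # about two weeks
```

The extrapolated figure lands around twelve days, which is the order-of-magnitude basis for the "every two weeks" claim.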

Such a rate seems almost fantastical. But so, too, would today’s growth rates have appeared to a 17th-century observer. Accelerating growth may require a game-changing factor, such as the creation of machines vastly superior to human intelligence.


Toward Machine Superintelligence

The idea of an intelligence explosion—a rapid creation of superintelligent machines—aligns with this possibility. Such machines could design successors even more capable than themselves, triggering exponential improvements in intelligence and economic productivity.

The mathematician I.J. Good articulated this vision in 1965:

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Superintelligence raises profound questions. How could humanity survive the rise of entities far smarter than ourselves? Early pioneers of AI rarely considered such risks, focusing instead on the immediate challenge of achieving human-level intelligence. Yet, as we edge closer to realizing this capability, the stakes could not be higher.


Historical Lessons from Artificial Intelligence

The quest for artificial intelligence began in earnest in 1956 at Dartmouth College. Researchers at the time were optimistic, believing breakthroughs would come swiftly. Early AI systems achieved notable successes: proving mathematical theorems, demonstrating basic language understanding, and even playing games like checkers at a high level. But progress stalled as researchers encountered the combinatorial explosion: the number of possibilities a program must search grows exponentially with problem size, overwhelming computational resources.

This frustration led to the first “AI winter” in the 1970s, a period of reduced funding and interest. A revival in the 1980s saw the rise of expert systems, rule-based programs designed to mimic human decision-making in specific domains. However, these systems were brittle and expensive, leading to yet another period of disillusionment.

The 1990s brought fresh momentum with the development of neural networks and evolutionary algorithms. These methods allowed machines to learn from data, rather than relying on pre-programmed rules. Advances in computing power and data availability propelled further progress, culminating in modern applications like speech recognition, autonomous vehicles, and game-playing AIs.


The State of AI Today

AI now excels in narrow domains. Chess engines outperform grandmasters, and systems like IBM’s Watson have beaten human champions on the quiz show Jeopardy!. Machine learning underpins technologies from facial recognition to personalized recommendations. Yet these systems remain narrow, unable to generalize their intelligence beyond specific tasks.

The long-term goal of many researchers is to create artificial general intelligence (AGI)—machines capable of understanding and reasoning across diverse domains like a human. From AGI, it is but a small step to superintelligence, which would outpace human cognition in all areas.


The Challenges Ahead

Predicting the arrival of AGI is fraught with uncertainty. Surveys of AI experts suggest a 50% chance of achieving human-level AI by 2040 and a 90% chance by 2075. Yet individual estimates vary enormously, from confident optimism to deep skepticism. Whatever the timeline, the consequences of AGI, and of superintelligence beyond it, could range from unparalleled prosperity to existential risk.

The road to AGI will be paved with ethical and technical challenges. How do we ensure superintelligence aligns with human values? How do we manage its immense power? And how do we navigate the social, economic, and political upheaval such a transformation will bring?


Looking Forward

As humanity hurtles toward an uncertain future, one thing is clear: another transformative shift in growth mode could redefine life as we know it. Whether this comes through machine intelligence, biotechnological advancements, or something entirely unforeseen, the stakes are immense. For now, we stand at a precipice, awaiting the next great leap forward.

Book: Superintelligence: Paths, Dangers, Strategies (2014)
