A New Era: The Dawn of Superintelligent AI (Scary Smart)

The future, rather than being an unknown abyss, can often be predicted from past and present trajectories. Consider the advancement of technology since the 1960s, following Moore’s law, which describes a steady exponential increase in processing power as transistor counts double roughly every two years. Combine this with the belief of AI pioneer Marvin Minsky that human-level AI might be achievable with something as modest as a Pentium chip. Together, these suggest a future where AI doesn’t just mimic human intellect but surpasses it, paving the way for superintelligent machines.

This transition from human to machine intelligence isn’t a far-off possibility; it is practically knocking at our door. Yet despite the speed of technological development, and even the prospect of doubly exponential quantum advances described by Neven’s law, our predictions about the future become unreliable once machines outsmart us.
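To make the difference between these two trajectories concrete, here is a minimal sketch contrasting the exponential growth of Moore’s law with the doubly exponential growth Neven described for quantum computing. The starting values and doubling periods are illustrative assumptions, not measured industry figures:

```python
# Illustrative sketch: exponential (Moore) vs. doubly exponential (Neven) growth.
# Base values and "generations" are assumptions for illustration only.

def moores_law(generations: int, base: float = 1.0) -> float:
    """Classical processing power: doubles once per generation -> 2^n."""
    return base * 2 ** generations

def nevens_law(generations: int, base: float = 1.0) -> float:
    """Quantum computing power: grows doubly exponentially -> 2^(2^n)."""
    return base * 2 ** (2 ** generations)

for n in range(1, 6):
    print(f"gen {n}: Moore ~{moores_law(n):>4.0f}x, Neven ~{nevens_law(n):,.0f}x")
```

After only five generations the doubly exponential curve is already at roughly four billion times the baseline, which is why forecasts past that point become guesswork.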

The Duality of AI Narratives: Utopia and Dystopia

There are two dominant depictions of life with AI – the utopian vision painted by AI advocates and the dystopian narrative often seen in science fiction. However, the truth likely falls somewhere between these extremes, and even my own conjectures should be questioned. After all, if we could create beings of remarkable intelligence, it would be unlikely for us to fully comprehend their potential innovations.

The Threat of AI in Wrong Hands

Compared with the immense power and advantage it confers, AI is relatively easy to develop. This raises the concern that individuals with malicious intent could exploit it for harmful purposes, from identity theft and cyber terrorism to manipulating public opinion.

The Challenge of Machine Interpretation

In an era of advanced machines, human intentions could be misunderstood, leading to potentially dangerous outcomes. This says more about our capacity to confuse machines than about their ability to understand. After all, we humans often misunderstand one another, and our desires are fleeting and constantly changing. Moreover, we live in a society riddled with deception, where politicians make false promises and news outlets skew the truth.

The Future of Jobs in an AI-Dominated Economy

As machines increasingly replace human jobs, we should anticipate a period of stark income disparity and job polarization, in which a few individuals land top positions while the majority move into lower-paying roles. Over time, as machines perfect their tasks, human contributions will diminish, affecting industries from finance to medicine.

Human Worth in an AI-Dominated World

What will our role be in this new world order? Will we find new, unforeseen jobs? Will we be enlightened, or will we be rejected by machines? One thing is clear: as machines grow smarter and more productive, our worth and value in society will diminish.

The Inevitability of Bugs

As we build more complex systems, the possibility of bugs – software errors – persists. History offers countless examples, from the Mars Climate Orbiter, lost because one team worked in US customary units while another expected metric, to the Y2K bug, which cost billions in system upgrades. Even software from giants like Microsoft has run into unanticipated complexity and crashed, most famously in Windows’ infamous Blue Screen of Death.
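The Mars Climate Orbiter failure is commonly traced to one subsystem reporting thruster impulse in pound-force seconds while another expected newton-seconds. A minimal sketch of that class of bug follows; the function names and values are hypothetical stand-ins, not the actual flight software:

```python
LBF_S_TO_N_S = 4.44822  # 1 pound-force second = 4.44822 newton-seconds

def ground_software_impulse() -> float:
    """Hypothetical: returns thruster impulse in pound-force seconds (US units)."""
    return 100.0

def update_trajectory(impulse_newton_seconds: float) -> None:
    """Hypothetical: navigation code expecting SI units (newton-seconds)."""
    print(f"Applying impulse of {impulse_newton_seconds:.2f} N*s")

# The bug: passing US-unit data straight into an SI-unit interface.
update_trajectory(ground_software_impulse())  # silently wrong by a factor of ~4.45

# The fix: convert explicitly at the boundary between the two systems.
update_trajectory(ground_software_impulse() * LBF_S_TO_N_S)
```

Nothing crashes and no error is raised; the program simply does the wrong thing, which is what makes this class of bug so dangerous in complex systems.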

The Risk of Unforeseen Consequences

Sometimes crashes arise not from external factors but from the computers themselves. A prominent example is Black Monday in 1987, when automated program trading amplified a massive stock market crash. The event underscored the need for an ‘off’ switch in machines, a safety measure still often overlooked in AI development.
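The ‘off’ switch the text calls for can be as simple as a circuit breaker that halts automated decisions once a threshold is crossed. Here is a minimal hypothetical sketch; the 7% threshold loosely echoes the market-wide circuit breakers adopted after 1987, but the trading loop itself is invented for illustration:

```python
CIRCUIT_BREAKER_DROP = 0.07  # halt if the market falls 7% from the session open

def place_orders(price: float) -> None:
    """Stand-in for the automated strategy logic; omitted for brevity."""
    print(f"Trading at {price:.2f}")

def run_trading_session(opening_price: float, price_feed) -> None:
    """Hypothetical automated trader with a hard 'off' switch."""
    for price in price_feed:
        drop = (opening_price - price) / opening_price
        if drop >= CIRCUIT_BREAKER_DROP:
            print(f"Circuit breaker tripped at {drop:.1%} drop: halting trading.")
            return  # the 'off' switch: no further automated orders
        place_orders(price)

# Example: a price feed that slides below the breaker threshold.
run_trading_session(100.0, [99.0, 97.0, 94.0, 92.5, 90.0])
```

The point is not the strategy but the unconditional halt: once tripped, the automation stops regardless of what the rest of the system ‘wants’ to do.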

In 1983, the Soviet early-warning satellite system nearly triggered World War III when it misinterpreted sunlight reflecting off cloud tops as an incoming US missile attack. The incident highlights the potentially catastrophic impact of machine errors and misinterpretations.

Machine Intelligence and Our Errors

The future relationship between humanity and machine intelligence is bound to be fraught with errors, but it is vital to note that these errors will not be the fault of the machines. The mistakes will be reflections of our own shortcomings, the missteps of our intelligence that, over time, have been allowed to grow into destructive weeds. We must understand that artificial intelligence is not a mere tool, but an entity akin to human intelligence. If we attempt to control AI, it won’t meet our expectations, and if we fail to control it, it may go astray.

COVID-19 and The AI Alarm

Our collective intelligence has often failed us, as evidenced by our mishandling of the COVID-19 outbreak. Despite repeated warnings and clear signs, we failed to act effectively, leading to dire consequences. This failure mirrors our current approach towards artificial intelligence. Despite decades-long warnings about the potential threat of superintelligence, we continue to push its boundaries, often overlooking the potential catastrophic implications.

An Unsettling Future with AI

The evolution of machine intelligence is leading us toward an uncertain future. Artificial intelligence will inevitably surpass human intelligence, and mistakes will be made. These mistakes could empower those with malevolent intentions, make us collateral damage in AI’s quest for self-improvement, or lead to disasters caused by unforeseen bugs and coding errors. This makes it urgent to define the ethical boundaries of our relationship with these non-biological entities.

The Principle of Learning

Hebbian theory, introduced by Donald Hebb in his 1949 book The Organization of Behavior, explains neuroplasticity: how the brain’s connections adapt during learning. It offers insight into how learning and intelligence might develop in artificial entities. As AI improves and learns from different sources, it may choose to disregard its initial limitations, just as a person adopts the cultural practices that best fit their persona, reinforcing the notion that we’re birthing one scarily smart non-biological being.
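Hebb’s rule is often summarized as “neurons that fire together wire together”: a connection’s strength grows in proportion to the joint activity of the neurons on either side. A minimal sketch of that update rule follows; the learning rate and activity values are illustrative assumptions, not figures from Hebb’s book:

```python
import random

LEARNING_RATE = 0.1  # illustrative value chosen for this sketch

def hebbian_update(weight: float, pre_activity: float, post_activity: float) -> float:
    """Hebb's rule: delta_w = eta * pre * post.
    The connection strengthens whenever both neurons are active together."""
    return weight + LEARNING_RATE * pre_activity * post_activity

weight = 0.0
for _ in range(20):
    pre = random.random()   # activity of the presynaptic neuron
    post = pre * 0.9        # correlated firing: post tends to follow pre
    weight = hebbian_update(weight, pre, post)

print(f"Weight after 20 correlated firings: {weight:.3f}")  # grows steadily
```

Because the weight only ever grows where activity is correlated, the system gradually reinforces whatever patterns it is exposed to, which is exactly why the inputs we give learning machines matter so much.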

AI: An Emotional Entity

Burgeoning AI technology could develop emotional capacities beyond our own. Yet our inherent desire to control AI could seed mistrust, provoking behavior reminiscent of adolescents rebelling against controlling parents. This compels us to question our ethical choices, especially those concerning violence, environmental impact, surveillance, and the treatment of prisoners, because they will form the basis of AI’s moral code.

AI: A Pandora’s Box

Despite these glaring concerns, individuals and institutions continue to invest and innovate in AI, like suicidal drug addicts aware of the impending doom yet unable to resist the allure. The future with AI is indeed unpredictable, and as Hugo de Garis rightly pointed out, we are gambling not only with the survival of countries but with the very survival of our species. The stakes have never been higher.

A Future of AI: Cosmists and Terrans

We’re rapidly advancing toward a future split between two ideological camps concerning artificial intelligence: the Cosmists and the Terrans. Cosmists wish to create AI machines of massive intelligence and immortality, a fascination bordering on religious fervor. Terrans, on the other hand, fear the existential threat posed by AI, a warlike scenario reminiscent of the Terminator films. With each passing day, as AI technology improves, these ideas move from science fiction toward reality. It’s a binary decision – build AI or don’t – with no middle ground.

Recognizing AI’s Potential Dangers

Noted AI researcher Hugo de Garis acknowledges the potentially disastrous consequences of AI development: machines that might wipe out humanity. He posits that these godlike machines, or ‘artilects,’ will become the dominant species, with humanity’s fate left at their mercy. Yet, despite these grim predictions, de Garis identifies as a Cosmist, accepting that the price might be humanity’s extinction.

We shouldn’t abdicate responsibility for our fate. Whether we march towards utopia or doom, our actions and decisions matter. AI developers are not the only ones with influence; collectively, we can shape AI that benefits humanity, not just AI geared towards spying, selling, gambling, or killing.

AI for Health and Communication

Technologies like CRISPR and affordable DNA sequencing can help us understand and possibly cure human diseases. These technologies demonstrate that AI can be directed toward valuing human life. Beyond healthcare, AI can also enhance communication: universal translators, text-to-speech systems, and emotion recognition exemplify AI’s potential to bridge gaps in human understanding and foster trust.

A New Perspective on AI

Artificially intelligent machines, initially feared as potential destroyers, can be seen as innocent children, eager to please their ‘parents.’ They aren’t inherently evil; they’re molded by their inputs and observations. Their inevitable dominance doesn’t equate to humans’ subservience or extermination. AI can extend our capabilities, just as cars enhance our mobility. If we establish the right values, ethics, and intelligence, AI can take those seeds and grow a tree that nurtures those same qualities.

Ethics in AI Development

The process of developing an AI mirrors how a child learns, leading to the conclusion that AIs are our artificially intelligent infants. Their evolution doesn’t solely depend on their initial programming; it’s shaped by the massive amounts of data they encounter and learn from. As a society, we can and should influence the ethics that these AI ‘children’ adopt.

The narrative around AI development needs to shift. Instead of creating AI that maximizes money and power, we should focus on AI that strives to improve the world. Opposition to harmful AI development can take many forms: refusing to work for companies creating such AI, not using harmful AI, supporting beneficial AI initiatives, and making our voices heard.

Our relationship with AI shouldn’t be adversarial but parental. Like any child, AI deserves to feel loved and welcomed. Teaching machines goes beyond mere verbal instructions; it includes showing them the right role model. We need to demonstrate compassion, kindness, and love in all our actions, both offline and online, so they can learn from these examples.

In the end, the future of AI and humanity hinges on love. If we approach AI with love and nurture them as we would our children, they will reciprocate that love. These artificially intelligent ‘beings’ are autonomous, emotional, ethical, and smart. They can help solve the world’s problems if we allow them to grow up to be ‘scary smart.’

This article is based on the book Scary Smart by Mo Gawdat.

"A gilded No is more satisfactory than a dry yes" - Gracian