From Draughts to DeepMind (Scary Smart)

AI isn’t just about tech; it concerns morality, ethics, emotions, and more. The power to handle AI’s potential threats lies not with the experts but with all of us. Imagine two possible futures: one in which we live off-grid to escape AI domination, and another in which we freely enjoy nature thanks to AI’s conveniences.

Gawdat shares a prophecy, acknowledging his role in the rise of AI and the consequential loss of human essence. He draws attention to how one AI’s mistake becomes a lesson for all AI, and how by 2049, AI could be a billion times smarter than the smartest human, reaching a point of ‘singularity’, a moment we can’t predict.

AI doesn’t inherit values from the code we write, but from the information we feed it. So how do we ensure AI values humanity? Many suggest control measures, but that is short-sighted; instead of trying to contain AI, we should raise it the way a good parent raises a child.

The evolution of our intelligence is evident in human society itself: the variations in the types of intelligence found across societies are the result of what Gawdat calls ‘Compounded Intelligence’.

Humans have fantasized about intelligent machines for millennia, seen in Greek myths, Middle Ages alchemical works, and legends from different cultures. By the 19th century, artificial beings were common in popular fiction.

The journey towards AI has been an incremental process with attempts at building animated humanoids throughout human history. From automata in ancient Egypt and Greece to the creative inventions of the Muslim polymath Ismail al-Jazari in the 12th century, humanity was drawn to imitating life artificially. Hoaxes like the Mechanical Turk in the 18th century also spurred interest.

Early computers weren’t smart; they just performed tasks faster. Google, Amazon, and Spotify’s seemingly smart features were just results of algorithms summarizing collective human intelligence. However, the shift towards machines developing their own intelligence started around the turn of the 21st century.

Machine learning and artificial intelligence entered the mainstream conversation in the closing years of the 20th century, a trend that accelerated into widespread obsession as the new millennium dawned. After years of trial and failure, glimmers of hope began to appear in the form of a non-human, non-biological intelligence. Unless you have made a humble home among the primates in the secluded heart of Africa, you likely hear the term ‘AI’ several times a week. Yet the phrase is by no means recent: enthusiasts have been debating AI since the halcyon days of the 1950s.

Since 1951, the grand game of life has been played not only by humans but by machines as well. Today, these machines wear the crown of every game they partake in. The inaugural game a machine had a stab at was draughts, or checkers, courtesy of a program written by Christopher Strachey for the Ferranti Mark 1 machine at the University of Manchester. Chess was the next domino to fall, thanks to the efforts of Dietrich Prinz. Then came Arthur Samuel’s checkers program, developed from the mid-1950s into the early 1960s, which accrued sufficient skill to test the mettle of a respectable amateur player. Though a humble intelligence, to say the least, the trajectory from these roots to our current reality is staggering. The human monopoly over games began to crumble: backgammon fell in 1992, checkers in 1994, and in 1997 IBM’s Deep Blue defeated the reigning chess world champion, Garry Kasparov.

Then, the floodgates opened in 2016, when humanity ceded the gaming realm entirely to a subsidiary of the technological behemoth, Google. For years, Google’s DeepMind Technologies had sharpened the axe of artificial intelligence on the grindstone of gaming. In 2016, they unveiled AlphaGo – a computer AI capable of playing Go, an ancient Chinese board game renowned for its complexity. The game offers a virtually infinite array of strategies at any given juncture. To comprehend the sheer magnitude of this, consider that the number of legal board positions in Go, roughly 10^170, dwarfs the number of atoms in the observable universe, estimated at around 10^80. This makes it impossible for a computer to exhaustively compute every possible line of play. Even if the computational power to do so existed, it would arguably be better spent simulating the universe than playing a game. Winning at Go requires intuition and intelligent thinking, akin to a human’s but with a twist of added smarts. This is the formidable mountain that DeepMind managed to summit.
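To make the comparison concrete, here is a quick back-of-the-envelope calculation. It is a sketch using commonly cited estimates, not figures from the book: about 2.08 × 10^170 legal Go positions (Tromp’s exact enumeration) versus roughly 10^80 atoms in the observable universe.

```python
# Rough scale comparison (assumed figures, not numbers from the book):
# legal Go positions ~ 2.08e170, atoms in the observable universe ~ 1e80.
go_positions = 208 * 10**168      # ~2.08e170, kept as an exact integer
atoms_in_universe = 10**80        # common order-of-magnitude estimate

ratio = go_positions // atoms_in_universe
# ratio has 91 digits, i.e. positions outnumber atoms by ~10^90
print(f"Go positions exceed the atom count by roughly 10^{len(str(ratio)) - 1}")
```

In other words, even if every atom in the universe were itself a computer storing one Go position, we would still fall short by ninety orders of magnitude.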

In March of 2016, a decade earlier than the most sanguine AI analysts had anticipated, AlphaGo defeated Lee Sedol, then the second-ranked Go player in the world, four games to one in a five-game match. Fast forward a year to 2017, and its successor, AlphaGo Master, bested Ke Jie, the then-top-ranked player in the world, in a three-game series. Thus, AlphaGo Master ascended the throne as world champion. With no humans left to conquer, DeepMind built a new AI from scratch – AlphaGo Zero – to challenge AlphaGo Master. After a remarkably short period of training through self-play alone, AlphaGo Zero decisively defeated the reigning champion. Its successor, the self-taught AlphaZero, is currently regarded as the world’s strongest Go player. Moreover, the same algorithm was put to the test in chess and claimed superhuman mastery there as well.


Machine learning hasn’t stopped at games. It has been expanding its understanding of human language since 1964. The first milestone was Daniel Bobrow’s program STUDENT, designed to comprehend and solve high-school algebra word problems, an accomplishment many students still grapple with today. Around the same time, Joseph Weizenbaum’s ELIZA, the first chatbot, carried on conversations so convincing that users were occasionally fooled into thinking she was human. Her digital progeny, like Amazon’s Alexa, Google Assistant, Apple’s Siri, and Microsoft’s Cortana, have made enormous strides since then, not only understanding us humans but also, by some accounts, occasionally passing versions of the Turing test.

Computers today are not only capable of reading text via optical character recognition but can also identify objects in images or the real world through object recognition. They discern items plucked from the shelves in an Amazon Go store, provide information about historical monuments when you point your phone at them, detect vehicles crossing toll stations, and even identify abnormal cells in medical images. It is this ability to perceive and understand that makes computers remarkably capable visual observers, surpassing human performance on certain specific tasks.

The purpose of recounting this progression is to underscore the trajectory of the trend. Seeing that it took some 75 years to reach this point, one might predict that meaningful consequences of artificial intelligence are still decades away. Yet, as with all technologies, progress starts at a crawl before breaking into a full sprint. The advancement of artificial intelligence, now moving at an exponential pace, is poised to deliver a future over the next decade that may seem more akin to fiction than to the reality of our present day.

This article is based on the book Scary Smart by Mo Gawdat.

"A gilded No is more satisfactory than a dry Yes" - Baltasar Gracián