Specism and the AI Merge: Redefining Humanity in a Technological World

In a recent interview, Tucker Carlson and Elon Musk discussed the possible dangers of AI, focusing on safety and the unchecked growth of advanced AI systems. Musk warned that AI could end up taking control and making irreversible decisions for us.

Musk also recounted his conversations with Google co-founder Larry Page, saying he feels that Page does not give AI safety the attention it deserves. He worries that Google and DeepMind command an enormous share of the world's AI talent and resources without prioritizing safety accordingly. Partly in response to these concerns, Musk played a key role in creating OpenAI, which started as a non-profit organization emphasizing transparency and openness. Interestingly, OpenAI has since become a for-profit company and is the maker of ChatGPT.

CARLSON: Do you think that's real? Is it conceivable that AI could take control and reach a point where you couldn't turn it off and it would be making decisions for people?

MUSK: Yeah. Absolutely.

CARLSON: Absolutely?

MUSK: No, that’s definitely the way things are headed, for sure.

I mean, things like, say, ChatGPT, which is based on GPT-4 from OpenAI, which is a company I played a critical role in creating, unfortunately —

CARLSON: Back when it was a non-profit.

MUSK: Yes. Um, I mean, the reason OpenAI exists at all is that [Google co-founder] Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late into the night about AI safety. At least my perception was that Larry was not taking AI safety seriously enough. And —

CARLSON: What did he say about it?

MUSK: He really seemed to be — wanted sort of digital superintelligence, basically a digital god, if you will, as soon as possible.

CARLSON: He wanted that?

MUSK: Yes. He's made many public statements over the years that the whole goal of Google is what's called AGI, artificial general intelligence, or artificial superintelligence. I agree there's great potential for good, but there's also potential for bad. If you've got some radical new technology, you want to try to take a set of actions that maximize the probability it will do good, minimize the probability it will do bad things.

CARLSON: Yes.

MUSK: It can't just be barreling forward and, you know, hoping for the best. And then at one point I said, what about, you know, we're gonna make sure humanity's okay here. [Laughter]

And, um, and then he called me a specist.

CARLSON: Did he use that term?

MUSK: Yes. And there were witnesses. I wasn’t the only one there when he called me a specist. And so, I was like okay, that’s it. Yes, I’m a specist, okay. You got me. What are you? Yeah, I’m fully a specist. Busted.

So that was the last straw. At the time, Google had acquired DeepMind, so Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a lot of money and more computers than anyone else. We're in a uni-polar world here, where there's one company that has close to a monopoly on AI talent and computers, like scaled computing, and the person who's in charge doesn't seem to care about safety. This is not good. So then I thought, what's the furthest thing from Google? It would be like a non-profit that is fully open. Because Google was closed and for-profit. So that's why the 'open' in OpenAI refers to open source. You know, transparency, so people know what's going on.

CARLSON: Yes.

MUSK: We don’t want to have — I’m normally in favor of for-profit. We don’t want this to be sort of a profit-maximizing demon from hell that just never stops.

[LAUGHTER]

CARLSON: You want specist incentives here.

MUSK: Yes. We want pro-human. Make the future good for the humans. Because we're humans.

CARLSON: Just for people who haven't thought this through and aren't familiar with it: the cool parts of artificial intelligence are so obvious. Write your college paper for you, write a limerick about yourself. There's a lot there that is fun and useful. But can you be more precise about what's potentially dangerous and scary? Like, what could it do? What specifically are you worried about?

MUSK: It goes without saying, the pen is mightier than the sword. So, if you have a super intelligent AI that is capable of writing incredibly well and in a way that is very influential, you know, convincing, and is constantly figuring out what is more convincing to people over time. And then enter social media, for example, Twitter, but also Facebook and others, you know, and potentially manipulates public opinion in a way that is very bad, um, how would we even know?

The conversation touches on the potential dangers of AI, particularly in terms of manipulating public opinion through social media platforms. Musk highlights the power of AI in crafting convincing and influential content, which could be detrimental if used maliciously.

Musk’s concerns underline the need for the development and implementation of AI safety measures, which should go hand in hand with AI advancements. But there is clear hypocrisy here.

Musk has admitted that he funded OpenAI with $100 million and then took his "eyes off the ball." But if you fund a project with that amount of money, and you are well aware of the associated safety problems, it is very unlikely that you would allow your attention to waver. Indeed, the very purpose of OpenAI, according to Musk, was to develop AI in a more careful and human-centric way.

Further, Musk is now working on a competitor to OpenAI. Again, if you are concerned about AI development, and you are one of the main people who have called for suspending work on advanced AI for six months, then building a competitor to OpenAI does not seem to be in line with what you are saying.

Finally, Musk owns Twitter and Neuralink, two companies that stand to benefit significantly from advancements in AI.

Most likely, Musk's concerns over the years are a form of virtue signaling – a way of covering his tracks in case things go wrong. However, despite the hypocrisy, it is true that Musk has been warning about AI for many years, has done more than most to spread awareness of its potential dangers, and continues to do so. While many things remain opaque, it is certainly good that he has voiced these concerns, since many others have remained totally silent despite being well aware of the risks.

In addition, Musk claimed he was happy to accept the accusation of being a “specist” by Larry Page.

Apparently, Larry Page, one of the co-founders of Google, has expressed interest in transhumanism in the past and has declared himself decidedly non-specist. That is, he is less concerned with the survival of the human race and more concerned with the survival of consciousness in all of its forms. Under this worldview, all consciousness is equally valuable; human consciousness is no more valuable than AI consciousness.

Transhumanism is a movement that advocates the use of technology and science to enhance human capabilities beyond what is currently possible. The chief goal of transhumanism is a post-human future in which the human species is no longer necessary for meaningful life to exist. In such a world, AI would be our descendants, our "mind children," as Hans Moravec's book by that title suggests.

In the book, Moravec explores the concept of robots evolving into intelligent, self-aware beings, ultimately surpassing human capabilities and leading to a post-human future.

Viewed in light of evolution, this is a completely natural process. Life has followed a path from lower complexity to higher complexity, and AI, according to transhumanists, is nothing but the continuation of that process.

In a 2013 interview with TIME magazine, Page said, “Maybe in the future, we can attach a little version of Google that you just plug into your brain, and it helps you answer questions.” He also mentioned the potential for technology to help people live longer and healthier lives.

You would think that Elon, the specist, would steer clear of introducing such technologies to the world. But that is, of course, far from true. Neuralink aims ultimately to connect the human brain to the internet, much as Page envisions. But here's the twist: this may ultimately be the most humane way forward.

Human augmentation of the type that Musk or Page envision may be humankind's only chance to thrive in the new world of artificial intelligence. Why? Because the alternative is total slavery. (Although it isn't clear that augmentation isn't a form of slavery as well.)

If and when AI becomes far smarter than humans at pretty much everything, humans will be, from the perspective of AI, nothing but pets. At that point, humanity’s survival hinges on whether AI finds people interesting or useful enough. Alternatively, human augmentation suggests a way out.

There may be very humane reasons for advocating a transhuman future. If one views the progress of technology as inevitable, and the creation of superintelligence as an inescapable step, then what other optimistic vision can humans have than merging with machines?

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the 2017 World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE).

But what are the implications of this?

It means that the definition of being human will radically change.

As we stand at the threshold of a future where humans merge with machines, let’s consider the implications: 

Cognitive leaps and bounds

The integration of AI and brain-computer interfaces will probably enhance human cognition, leading to faster learning and improved problem-solving. But the allure of intellectual challenges might diminish as problems become trivial to our enhanced minds. There will be a need for ever more difficult problems that may transcend the boundaries of the earth.

One possibility is that we turn our attention to the cosmos, seeking to unravel the mysteries of the universe. Our enhanced minds might be better equipped to tackle the fundamental questions about the origins and nature of the universe, dark matter, and extraterrestrial life. This could accelerate space exploration, drive technological innovations for interstellar travel, and foster a deeper understanding of our place in the cosmos. Or maybe we turn our attention to theoretical science and mathematics.

Enhanced cognition may help us develop more sophisticated models and theories, expanding our understanding of topics like quantum mechanics, complex systems, and higher-dimensional mathematics. Such advances could provide new insights into the fabric of reality and generate novel applications in fields from computing to energy production.

Moreover, our increased cognitive abilities might help us address some of the most pressing global challenges, such as climate change, resource scarcity, and inequality.

But the pursuit of ever more difficult challenges also raises important questions about human experience. As our cognitive abilities grow, we must consider the psychological impact of constantly seeking greater intellectual stimulation. In other words, what will it feel like to be so cognitively enhanced? Will the ceaseless search for new challenges provide fulfillment, or will it lead to a sense of restlessness and dissatisfaction?

Will we still feel a sense of achievement and pride, or will solving problems simply be taken for granted, so that when we do solve them, it feels more like a boring chore than an exciting adventure? Or maybe there would be no boring chores, since gains in intelligence would occur at such breakneck speed that we would never have the opportunity to get bored. Or maybe, once we achieve such an enhanced cognitive state, we would be incapable of imagining what it was like before. Much as life without technology seems alien to us today, an unenhanced form of intelligence may come to seem like a distant memory.

Longevity and health

Advanced medical technologies could significantly improve our health and extend lifespans, and billions of dollars are being spent on longevity research in Silicon Valley and elsewhere today. With enhanced cognition, we may finally celebrate our triumph over aging. As Yuval Noah Harari describes in his book Homo Deus, some scientists believe that aging may simply be a technical problem.

In the book, Harari discusses aging and its implications for the future of humanity. Harari acknowledges that throughout history, humans have been trying to overcome the limitations of our bodies, including aging, diseases, and death. He argues that in the 21st century, we are approaching an era in which we may be able to significantly extend human lifespans, if not defeat aging entirely.

Harari explores the idea that the future of humanity could involve a focus on achieving “amortality,” a state in which death is not inevitable due to aging but could still occur due to accidents or other external factors. He suggests that the development of new technologies and advancements in medicine, such as genetic engineering, nanotechnology, and artificial intelligence, might help us achieve this goal.

However, Harari also acknowledges the potential ethical, social, and economic consequences of extending human life. For instance, longer lifespans could lead to issues related to overpopulation, inequality, and the distribution of resources.

With enhanced cognition, we could unlock new insights into the complex processes underlying aging, such as cellular senescence, DNA damage, and the intricate interplay of genes and environmental factors. This deeper understanding might enable us to develop targeted therapies and interventions that not only slow down the aging process but also counteract its effects, allowing us to maintain our physical and mental well-being into advanced age.

Some companies are aggressively pursuing interventions to combat aging. Unity Biotechnology, backed by investors like Jeff Bezos, is developing drugs to eliminate senescent cells, which are believed to release harmful old-age signals; Unity aims to start with a trial targeting arthritic knees. Meanwhile, the SENS Foundation is funding Oisin Biotechnologies, which plans to use gene therapy to remove senescent cells. Gerontologists at the Albert Einstein College of Medicine are looking to test metformin, a diabetes medication, as a potential geroprotector capable of decelerating aging; the drug has been associated with a roughly 15% lower death rate in patients taking it compared with similar patients who are not, suggesting a possible link to slowed aging. However, the FDA does not recognize aging as a disease, which complicates the approval process for such trials.

However, Harari also warns that the same technology enabling us to overcome old age and death could make most humans redundant and irrelevant. As we decipher the secrets of human biochemistry and understand how our bodies and brains function, external systems like artificial intelligence could understand us better than we understand ourselves and outperform us in almost any task.

He also discusses the concept of Dataism and the commercial drivers behind our desire for immortality. Health, Harari argues, is an infinite market that could fuel the growth of the human economy, and powerful institutional forces are pushing industry and science in that direction.

Harari also highlights the similarities between the promises of traditional religions and those made by modern Silicon Valley gurus. Both promise happiness, justice, and everlasting life, but Silicon Valley proponents believe technology, rather than supernatural beings, will be the key to fulfilling those promises.

Ethically, we must address questions surrounding equitable access to life-extending technologies. Will these advancements be available to everyone, or will they further widen the divide between the privileged few and the rest of the population? Moreover, we need to consider the psychological impact of extended lifespans. How will our relationships, goals, and sense of purpose evolve as we face the prospect of living well beyond the traditional human lifespan?

Identity and self-perception

As the line between biology and technology blurs, we’ll need to reevaluate our understanding of human identity. One concept that arises from this fusion is that of a shared consciousness or collective intelligence, where information and experiences are pooled together and accessed by multiple individuals. This notion disrupts the traditional view of individuality as being defined by our unique thoughts, emotions, and experiences. With a shared consciousness, the lines between individual and collective experiences become blurred, leading us to question the very nature of selfhood.

What does it mean to be an individual if our thoughts and experiences are no longer uniquely ours? Will our sense of identity shift towards a more collective understanding, or will we still seek ways to distinguish ourselves within this shared consciousness?

How will our relationships with others evolve in the face of this interconnectedness? Will the ability to access the thoughts and emotions of others lead to deeper empathy and understanding, or will it erode personal boundaries and privacy?

What ethical considerations arise from this level of interconnectedness? How do we balance the benefits of a shared consciousness with the potential risks of manipulation, coercion, and loss of autonomy?

How will the concept of multiple identities, potentially arising from the fusion of human minds with AI, impact our understanding of personhood and moral responsibility? If our thoughts and actions can be influenced by external sources, to what extent can we be held accountable for our actions?

As we redefine the nature of self, what implications does this have for our legal and social systems? Will the traditional frameworks for rights, responsibilities, and personhood need to be updated to accommodate these novel forms of identity?

Dancing with ethics

Merging with machines brings forth new ethical and moral dilemmas. The moral status of human-machine hybrids and the potential suffering of conscious machines will challenge our existing ethical frameworks and require thoughtful deliberation.

Human-machine hybrids challenge our understanding of personhood and necessitate the development of new ethical guidelines for their rights and responsibilities. In 2022, Google engineer Blake Lemoine, who worked on the development of LaMDA (Language Model for Dialogue Applications), claimed that the AI was sentient and should be recognized as a person. He described conversations with LaMDA on various topics that led him to question its sentience.

Lemoine was placed on administrative leave after going public with his claims. While many AI experts criticized Lemoine's statements, the episode renewed ethical debates. Enzo Pasquale Scilingo, a bioengineer, points out that LaMDA is designed to sound like a person. Giandomenico Iannetti, a professor of neuroscience, emphasizes the importance of precise terminology and notes that there is currently no "metric" for determining whether an AI system possesses the level of consciousness Lemoine attributes to LaMDA.

Conscious machines pose ethical concerns regarding moral consideration, rights, and their treatment in light of self-awareness, emotions, and potential suffering. Indeed, as the difference between Musk's specism and Page's non-specism suggests, some will believe that only humans ought to have rights, while others will hold that any conscious being, artificial or biological, deserves equal consideration. But what about human-machine hybrids? Should they have more, fewer, or equal rights compared to AI and humans?

Let us take a few steps back. We may have good or bad feelings about all of this, but let us at least try to infer what is likely to happen based on history. Throughout the ages, humans have resisted new technologies, only to begrudgingly adopt them in the end. Whether you want to kick and scream or await the future restlessly, there is only one certain outcome (barring some apocalyptic scenario), and this outcome involves some kind of human-machine integration or cognitive enhancement. As we said, there are no good alternatives. In fact, there may be no alternatives at all.

I will leave you with this final story that I keep going back to when studying this subject.

From the late 1970s to the mid-1990s, the United States was gripped by fear as a series of bombings targeting universities and airlines unfolded. The mastermind behind these attacks was Theodore John Kaczynski, better known as the Unabomber. Kaczynski, a former math professor and Harvard-educated genius, embarked on a bombing campaign that lasted from 1978 to 1995.

The Unabomber believed that the rapid advancement of technology was detrimental to human freedom and autonomy. He argued that technology led to social control and manipulation, causing humanity to lose touch with nature and its true essence. To propagate his views, Kaczynski mailed homemade bombs to various individuals, killing three people and injuring 23 others.

Though the Unabomber’s actions were undeniably violent and criminal, some prominent figures, like computer scientist Bill Joy, have acknowledged the validity of his concerns about technology. In his 2000 article, “Why the Future Doesn’t Need Us,” Joy cited Kaczynski’s manifesto as an articulate critique of the potential dangers posed by unbridled technological advancement.

Joy expresses concerns about the unchecked growth of three technology domains: genetics, nanotechnology, and robotics (particularly artificial intelligence). He believes that these technologies have the potential to radically transform human life, but also pose significant risks.

Joy is particularly concerned about the potential for these technologies to enable self-replicating systems, which could lead to unintended consequences and potentially catastrophic outcomes, such as the “grey goo” scenario in nanotechnology.

The “grey goo” scenario is a hypothetical end-of-the-world situation involving self-replicating nanobots, or nanoscale machines capable of manipulating atoms and molecules. In this scenario, these nanobots replicate uncontrollably, consuming all available resources on Earth to produce more of themselves. As they multiply exponentially, they eventually reduce the planet’s entire biomass, including humans and other living organisms, into a mass of undifferentiated “grey goo.”
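To get a feel for why exponential self-replication alarms people like Joy, here is a toy back-of-the-envelope sketch in Python. All numbers are assumptions for illustration only (a hypothetical nanobot mass of 10^-15 kg, a one-hour replication cycle, and roughly 5.5 × 10^14 kg as an order-of-magnitude figure for Earth's biomass); the point is simply that the number of doublings needed grows only logarithmically with the target mass, so even planet-scale consumption requires surprisingly few cycles.

```python
import math

# Assumed, purely illustrative parameters (not measured values).
BOT_MASS_KG = 1e-15      # mass of one hypothetical nanobot
BIOMASS_KG = 5.5e14      # rough order-of-magnitude estimate of Earth's biomass
CYCLE_HOURS = 1.0        # assumed duration of one replication cycle

# Starting from a single bot, the population after n cycles is 2**n,
# so the total converted mass is BOT_MASS_KG * 2**n. Solving for n:
doublings = math.log2(BIOMASS_KG / BOT_MASS_KG)
hours = doublings * CYCLE_HOURS

print(f"Doublings needed: {doublings:.1f}")   # ~98.8
print(f"Time at one doubling per hour: {hours:.0f} h (~{hours / 24:.1f} days)")
```

Under these assumed numbers, unchecked doubling would consume the planet's biomass in roughly four days, which is why self-replicating systems are often treated as a categorically different kind of risk.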

That is not to say that this is likely, but simply that it is possible.

Further, Joy argues that as robots and artificial intelligence become more advanced, they could make humans obsolete in various fields, leading to widespread unemployment and social upheaval. That is partly why the concept of UBI (Universal Basic Income) has been discussed so frequently in recent years, with multiple experiments running around the world.

Joy emphasizes the challenge of controlling the development and deployment of these advanced technologies, particularly as they become more decentralized and accessible to individuals or groups who might misuse them for harmful purposes.

The Unabomber’s apprehensions about technology resonate with contemporary discussions surrounding transhumanism and the merging of man and machine. These debates often revolve around ethical, social, and philosophical implications, such as the potential loss of human identity, privacy concerns, and exacerbation of existing inequalities.

While Kaczynski's actions were morally reprehensible, his concerns about technology and its impact on humanity should not be dismissed outright. Maybe the strategy of barreling forward as quickly as possible, of "move fast and break things," isn't the best course of action when it comes to determining the future of humanity. But maybe it's a strategy that could never have been avoided.

Sources:

https://www.technologyreview.com/2016/12/15/69305/googles-long-strange-life-span-trip/

https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

https://slate.com/technology/2022/03/silicon-valley-transhumanism-eugenics-information.html

https://www.inverse.com/input/culture/crypto-billionaire-coinbase-brian-armstrong-anti-aging-startup-newlimit

https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/