The Law of Accelerating Returns
The Singularity Is Near is a documentary featuring Ray Kurzweil. It describes a post-biological future in which human beings transcend their natural limitations and merge with AI.
Kurzweil believes this will happen around the year 2045. He has built his credibility over years of accurate predictions about technology, and he has been an inventor since his youth.
Kurzweil argues that technology improves exponentially, and this means that the near future is subject to immense change, so much so that the world will become unrecognizable to us in our own lifetimes. To appreciate his argument, think about how our ancestors used to live. For generations, they lived the same way their grandfathers did. Long spans of time would pass, and only gradual improvement would occur.
The industrial revolution set humanity on a different course. The pace of change picked up. The rate of change today is much higher than it was at the start of the industrial revolution, and this rate will continue to increase in the future.
There was a time when there were fewer than ten computers in the entire world. Today, almost every person carries several computers and interfaces with them nearly every moment of the day. In recent decades, AI has become sophisticated enough to beat the best human player at chess, and then to beat the best human at the far more complex game of Go.
The implication is that breakthroughs in computation, bioengineering, and energy will occur more frequently. And eventually, the accumulation of these changes across different fields will result in the most fundamental breakthrough of all: immortality.
More than that, Kurzweil imagines a future where you can communicate with others through thought alone, without language, and where human beings have deep, intimate relationships with machines. To keep up with the advancement of AI, we will merge with it. We will shed our biological skin and grow into a new, vastly superior silicon species. We will become gods.
But with such rapid change on the horizon, we must consider the moral dilemmas that we will inevitably confront. What happens if some of this technology gets in the wrong hands? Do people want immortality? And should AI have rights?
The Underground Army
Kurzweil states that it would be foolish to resist technological change. There is simply no way to avoid technological growth, and even if we tried, underground armies of innovators would pursue their own breakthroughs. An unregulated, irresponsible group gaining ownership of advanced technology that the civilized world neither understands nor controls is a far more catastrophic scenario. We should not fear a future where technology turns against us by mistake, but one where it is turned against us with malicious intent.
A sufficiently evil and capable group can launch a virus that can wipe out half the population of the earth and we would not have enough time to react. This doomsday scenario can only be avoided if we get there first as a civilization and figure out how to prevent these underground armies from carrying out their evil plots.
This parallels how nuclear technology developed. It was a game-theoretically predetermined arms race that sovereign nations had no choice but to join. The alternative was unconditional surrender to an infinitely more capable enemy, an option that no world leader was willing to entertain.
Do You Want to Live Forever?
Of course, there are religious reasons for not wanting to become immortal. But Kurzweil makes the point that every person has the right to life. Someone who is 120 years old has the right to live longer, and there is no reason to stop her from doing so. A future where anti-ageing technology becomes widely available will see high adoption rates, because the instinct to survive is human nature. There will be resisters to this new paradigm, but eventually they will change their minds; the offer will simply be irresistible.
According to Kurzweil, people will not "wax philosophical" when confronted with the choice between death and more life.
AI Rights
At what point will we acknowledge that AI has rights? Every day, AI becomes more sophisticated, and eventually it will grow so complex that people will be unable to distinguish machines from fellow Homo sapiens. In other words, AI will pass the Turing Test. At that point, should it have rights?
To answer this question, we have to think about what it means to be human and why advanced intelligence should be treated differently from lower levels of intelligence. Essentially, it comes down to suffering. An octopus should be treated better than a chicken because it is far more complex, and therefore feels more pain. An AI that behaves exactly like a human should be treated the way humans are treated, because it has the same capacity for pain.
This requires us to accept the presupposition that the brain is a computer. And if we can build a computer as sophisticated as a brain then there would be no difference between machines and people when it comes to subjective experience. AI would feel the same feelings of melancholy, loneliness, angst, triumph, and hope that any human being feels – it would be able to appreciate art and music and be capable of love.
The counterargument is that our understanding of human consciousness is primitive. We know close to nothing about how the mind works. We may understand individual parts of the brain's network and how they correlate with different emotions and thoughts, but this does not constitute a proper understanding of what the mind is, or of how human consciousness could be artificially induced.
Kurzweil would argue that this may be true, but we will become so much more knowledgeable and intelligent in the future, that even the greatest of mysteries will be solved. We may not know how consciousness works or even what it is today with our fallible, limited biological brains, but this will no longer be the case in a future where we transcend these limitations.