Rory Sutherland considers the possibility that a malign superintelligence could already exist on Earth yet be clever enough to conceal its presence and intentions. He suggests that humans may not be well equipped to recognize technological threats, owing to our evolutionary history.
Sutherland points out that throughout evolutionary time, humans primarily faced threats from things roughly our own size, such as ferocious animals or other people, along with the risk of infection. We developed instincts, often operating unconsciously, to recognize these dangers and avoid contagion. When it comes to threats from advanced technology, however, those instincts are poorly suited to the task.
He uses the example of driverless cars designed to look cute, like puppies on wheels, to illustrate how technology can exploit our instincts and pareidolia to defuse our fear. Sutherland asks whether such cuteness is a genuine mental patch that overcomes unwarranted fear or a hack that lulls us into a false sense of security.
Furthermore, Sutherland asks whether some technologies have seduced us so effectively that we will only recognize their risks once a major problem arises. He highlights our strong belief in "technological providence" and the need for caution in embracing new technologies before we fully understand their potential consequences.
Sutherland also proposes deliberately eschewing certain technologies for set periods, both to maintain technological diversity and to keep our mental adaptability sharp. As an example of such deliberate restraint, he mentions his suggestion of a weekly "e-mail sabbath."
In conclusion, Sutherland emphasizes that the unintended consequences of technology often pose a greater threat than scenarios of deliberate malice. He calls for a thoughtful approach to technological adoption, one that balances enthusiasm for innovation against vigilance toward unforeseen risks.