Edward Slingerland offers a perspective on thinking machines, emphasizing that they are fundamentally different from human beings and should not be feared as existential threats. He presents several key points:
1. AI as Tools: Slingerland views AI systems as tools rather than sentient beings, comparing them to advanced screwdrivers. AI lacks desires, emotions, and intentions of its own; it performs tasks only in pursuit of goals that humans program into it.
2. Lack of Intentions: Slingerland asserts that AI systems do not have intentions or motivations. Any goals or directions they appear to have are externally imposed by their creators. Unlike humans, AI does not possess innate desires or instincts.
3. Human Motivational System: What Slingerland finds truly concerning is the combination of advanced AI capabilities with the motivations and intentions of human beings, a scenario he likens to smart primates armed with nuclear weapons. The real source of concern is not AI itself but the people who control and use these tools.
In essence, Slingerland argues that AI should not be anthropomorphized or seen as having its own desires, intentions, or motivations. The real ethical and existential questions lie in how humans use and control AI technologies.