Andy Clark challenges the notion that emerging learning machines, which use deep learning algorithms to mine big data for complex problem-solving, will develop alien forms of intelligence. He proposes that as these machines learn more, they may end up thinking in ways that resemble human thought processes, even applying emotional and ethical labels much as humans do.
Clark argues that these machines will rely on consuming the vast electronic trails of human experience and interests, which provide a significant repository of general knowledge about the world. Their diet will consist of the massive amounts of human-generated data found on platforms like Facebook, Google, Amazon, and Twitter. Consequently, their learning experience will be all too familiar, shaped by the diverse array of human interactions and content available online.
He suggests that, because of this dependence on human-generated data, these machines are more likely to develop a world model that resembles human understanding. Instead of becoming alien intelligences, they might come to share human perspectives, perhaps even enjoying activities like sports just as humans do.
In conclusion, Clark emphasizes that because these learning systems will "eat" human-generated data to develop their intelligence, they will be less alien and more closely aligned with human thought processes than is commonly feared.