Rolf Dobelli dismisses the widespread fear that artificial intelligence (AI) poses a significant danger to humanity. He distinguishes between two types of AI development: Humanoid Thinking (AI that extends and automates human thinking) and Alien Thinking (AI whose mode of thought would be radically unlike our own).
Most AI today falls under Humanoid Thinking: systems designed to solve specific problems set by humans. They are tools built for narrow tasks, with no self-concept. Dobelli envisions such systems assisting humans in the future as virtual insurance sellers, doctors, and more.
Alien Thinking, by contrast, would be unpredictable and beyond human understanding, raising questions about consciousness, emotions, creativity, and social behavior in AI. Dobelli argues that humans cannot deliberately create truly Alien Thinking: producing self-aware AI with its own goals and values would require genuine evolution, not merely clever algorithms.
Dobelli considers the timescale evolution needed to generate behavior as complex as human-level thinking. On that basis, he argues the real danger of AI lies in human overreliance rather than in any inherent risk, since it is highly unlikely that AI will evolve self-awareness anytime soon, possibly not even within a thousand years.
He concludes by comparing the hypothetical emergence of Alien AI to the evolution of reason itself, emphasizing that AI's development so far remains shaped by human thinking and human expectations.