Peter Norvig (What to Think About Machines That Think)

Peter Norvig discusses the capabilities of artificial intelligence (AI) and the concerns surrounding it. He emphasizes that the question “Can machines think?” is less helpful than asking which tasks machines can perform effectively.

Norvig acknowledges the pessimists' valid concerns about developing complex AI systems safely, but points out that similar challenges arise in any large, complex engineered system. In both cases we need to predict, control, and mitigate unintended consequences.

He highlights three issues that are distinctive to AI: adaptability, autonomy, and universality. AI systems built on machine learning adapt to the data they see, but adapting too aggressively to that data makes them inaccurate on new cases; the challenge is striking the right balance. Autonomous AI systems can make errors, just as automated traffic lights do, and autonomy always involves trade-offs. Increased automation may also change the nature of work quickly enough to affect employment and income inequality.
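Norvig's adaptability point is essentially the overfitting trade-off familiar from machine learning. The sketch below is my illustration, not anything from the essay: it fits NumPy polynomials of two degrees to the same noisy sample. The more flexible model adapts more closely to the sample, yet typically predicts worse on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a simple underlying trend: y = 2x + noise.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)

# Held-out points from the same noiseless trend, to measure generalization.
x_test = np.linspace(0.0, 1.0, 50)
y_test = 2.0 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)    # adapt the model to the sample
    pred = np.polyval(coeffs, x_test)    # predict on unseen inputs
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"degree {degree}: held-out RMSE = {rmse:.3f}")

# The degree-9 polynomial adapts more aggressively to the 20 samples,
# yet usually generalizes worse: too much adaptation hurts accuracy.
```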

Norvig then turns to the universality of intelligent machines: although we prize intelligence, it is only one of many attributes that determine success at solving problems. Computational complexity theory identifies problems that remain intractable no matter how intelligent the solver is.
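To make the complexity point concrete (this back-of-the-envelope example is mine, not Norvig's): a brute-force search over tours of n cities must examine roughly (n-1)!/2 routes, a number that outgrows any plausible increase in computing power or cleverness.

```python
import math

# Brute-force traveling-salesman search: count the distinct tours of n cities
# and the time needed at an assumed (and generous) one billion tours per second.
# The blow-up lives in the problem itself, not in the solver's intelligence.
TOURS_PER_SECOND = 1e9  # purely illustrative machine speed

for n in (10, 15, 20, 25):
    tours = math.factorial(n - 1) // 2   # fix the start city, ignore direction
    years = tours / TOURS_PER_SECOND / (3600 * 24 * 365)
    print(f"{n:2d} cities: {tours:.3e} tours, about {years:.3e} years of search")
```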

In conclusion, Norvig encourages viewing AI as a tool that can address specific challenges in society, similar to inventions like the internal combustion engine or air-conditioning. He advises using the best tools, whether labeled as AI or not, while being aware of their potential failure modes.

"A gilded No is more satisfactory than a dry yes" - Gracian