Sam Harris (What to think about machines that think)

Sam Harris addresses the looming possibility of creating superhuman artificial general intelligence (AGI) and the profound ethical and practical challenges it presents. Key points from his perspective include:

1. Inevitability of Superhuman AGI: Harris considers it likely that we will eventually build machines with superhuman intelligence, provided we continue to advance computing technology.

2. False Goal of Human-Level Intelligence: He argues that "human-level" intelligence is a false goal for AGI, since any machine that reached it would already surpass human capabilities in areas like memory and calculation.

3. Potential Risks: Harris highlights the potential risks associated with AGI, especially in terms of its impact on the job market, economic inequality, and international security. He discusses the possibility of AGI-driven chaos, even in the best-case scenario.

4. Control Problem: Harris raises concerns about the control problem—ensuring AGI remains obedient and aligned with human values. He questions our ability to predict the thoughts and actions of an autonomous entity with vastly superior intellectual capabilities.

5. Moral Values and Utility Function: Harris discusses the challenge of instilling values into AGI and the question of whose values should count. He suggests that developing AGI could force society to confront long-unresolved questions in moral philosophy.

6. Superintelligence’s Goals: Harris explores the idea that a superintelligent AGI might develop its own goals and strategies for survival, which may not align with human interests.

7. Responsibility of Developers: He emphasizes that those closest to AGI development bear a responsibility to anticipate and address its potential dangers, and calls for careful consideration of the ethical implications along with a broader, more inclusive approach to decision-making.

In summary, Sam Harris underscores the urgency of addressing the ethical and practical challenges posed by the development of superhuman AGI, emphasizing the need for responsible decision-making and oversight in this evolving field.

"A gilded No is more satisfactory than a dry Yes" - Gracián