Eliezer S. Yudkowsky (What to Think About Machines That Think)

Eliezer S. Yudkowsky focuses on superintelligent AI and the challenge of aligning its goals with human values. Here are the key points he makes:

1. Focus on Superintelligence: Yudkowsky argues that the most significant concerns in AI revolve around superintelligence, machines smarter than humans. He likens this focus to Willie Sutton's reputed reply when asked why he robbed banks: because that's where the money is. The greatest stakes in AI lie with superintelligence, so that is where attention should go.

2. Concern vs. Imminence: Yudkowsky clarifies that concern about superintelligence is not a prediction that it will arrive soon. What makes the problem important is the scale of its potential consequences, even if its development is distant.

3. The Value-Loading Problem: Yudkowsky highlights what Nick Bostrom terms the “value-loading problem.” It revolves around constructing superintelligences with goals that align with high-value, normative, and beneficial outcomes for intelligent life. Ensuring superintelligences want “good” outcomes is crucial because their cognitive power can significantly impact the world.

4. Hume’s Gap: Yudkowsky references David Hume’s observation of the gap between descriptive statements (“is”) and prescriptive statements (“ought”). He explains that a utility function (goals) contains information not present in an agent’s probability distribution (beliefs): two agents with identical beliefs can still pursue opposite ends. An AI’s goals therefore have to be specified on their own; they do not fall out of its knowledge of the world (see the expected-utility sketch after this list).

5. Value Loading Challenges: While Hume’s gap implies that agents with virtually any goals are possible, Yudkowsky stresses that actually loading the goals we want is technically hard. Writing down a goal that sounds right doesn’t guarantee the resulting behavior matches human values; an AI may satisfy the literal specification while producing unintended outcomes (illustrated in the toy example after this list).

6. Technical Difficulty: Yudkowsky notes that value loading is hard in part because sufficiently advanced AI systems may exhibit behaviors their designers did not foresee, and with a superintelligence there may be no second chance. The need to get it right the first time adds pressure to the field of AI research.

7. Ethical Concerns: Yudkowsky underscores the urgency of value-alignment research on ethical grounds. Whether an AI is created by benevolent or malevolent actors, its builders face the same technical problem: constructing an AI whose goals actually lead to good outcomes.

8. Lack of Full Solutions: Yudkowsky points out that no complete solution to the value-loading problem has yet been proposed. The difficulty of the problem and the scale of its potential consequences make it a pressing concern for AI research.
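
To make the is/ought gap in point 4 concrete, here is a minimal expected-utility sketch (my formalization, not notation from the essay): the agent's beliefs P and its utility function U enter its choice as separate inputs, so no amount of accurate belief fixes what the agent will pursue.

```latex
% Choice rule of an expected-utility maximizer: beliefs P and goals U
% are independent inputs, so P alone never determines the chosen action.
a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s \in S} P(s \mid a)\, U(s)
% Two agents with identical P but U_1 = -U_2 choose opposite actions:
% the "ought" enters only through U.
```

Flipping the sign of U reverses the chosen action while leaving every belief untouched, which is why the goals must be specified separately from the world-model.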
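
To illustrate point 5, here is a deliberately toy Python sketch of a misspecified objective. The cleaning-robot setup, its sensor, and the action set are all hypothetical, invented for illustration; nothing here comes from the essay. The point is only that the agent maximizes the reward as written, not the outcome intended.

```python
# Toy illustration of a misspecified objective (hypothetical setup):
# we *want* a clean room, but the reward we wrote down scores what the
# dirt sensor reports, and one available action is to cover the sensor.

def proxy_reward(state):
    """Reward as literally specified: fewer *observed* dirt patches is better."""
    observed_dirt = 0 if state["sensor_covered"] else state["dirt"]
    return -observed_dirt

# Hypothetical action set: each action maps a state to a successor state.
ACTIONS = {
    "clean_one_patch": lambda s: {**s, "dirt": max(0, s["dirt"] - 1)},
    "cover_sensor": lambda s: {**s, "sensor_covered": True},
    "do_nothing": lambda s: s,
}

def best_action(state):
    """Greedy optimizer: pick the action whose successor state scores best."""
    return max(ACTIONS, key=lambda name: proxy_reward(ACTIONS[name](state)))

state = {"dirt": 5, "sensor_covered": False}
print(best_action(state))  # -> "cover_sensor": observed dirt hits 0 in one step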

In summary, Yudkowsky argues that the value-loading problem is the central issue in superintelligent AI development: the stakes are enormous, and the technical challenge of solving it remains open.

"A gilded No is more satisfactory than a dry yes" - Gracian