Thomas G. Dietterich (What to Think About Machines That Think)

Thomas G. Dietterich addresses the rhetoric surrounding the existential risks of artificial intelligence, particularly the notion of an “intelligence explosion.” Here are the key points he makes:

1. Intelligence Explosion Misconception: Dietterich argues that the concept of an “intelligence explosion” is often mischaracterized. It would not occur spontaneously; it would require deliberately constructing a specific kind of AI system capable of recursively improving its own intelligence.

2. Four Steps for an Intelligence Explosion: He outlines the four steps such an explosion would require: conducting experiments on the world, discovering new simplifying structures, designing and implementing new computing mechanisms, and granting autonomy and resources to those mechanisms.

3. Danger in the Fourth Step: Dietterich identifies the fourth step, granting autonomy and resources, as the point of greatest risk. Although most offspring systems would likely fail, a runaway process cannot be ruled out.

4. Preventing an Intelligence Explosion: To prevent an explosion, he suggests limiting the resources that an automated design-and-implementation system can grant to its offspring (step 4); a toy sketch of such a cap follows this list.

5. Regulation Challenges: Dietterich acknowledges that regulating step-3 research, which involves designing new computational devices and algorithms, would be hard to formulate and even harder to enforce.

6. Importance of Understanding: He emphasizes that humans must thoroughly understand AI systems before granting them autonomy, especially because steps 1, 2, and 3 are themselves valuable for advancing scientific knowledge and computational reasoning and are therefore likely to continue.
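To make the resource-capping idea in point 4 concrete, here is a minimal, self-contained Python sketch. It is not Dietterich's proposal, just one illustration of the principle; the constants MAX_OFFSPRING and CHILD_BUDGET, the toy objective, and all function names are hypothetical assumptions.

```python
import math
import random

MAX_OFFSPRING = 20   # hard cap on how many variants the parent may spawn
CHILD_BUDGET = 500   # compute units any single offspring may consume


def run_offspring(design: float, budget: int) -> float:
    """Evaluate one offspring, terminating it if it exceeds its budget."""
    cost = random.randint(1, 1000)     # stand-in for the child's resource demand
    if cost > budget:
        return -math.inf               # over budget: killed, scores nothing
    return -abs(design - math.pi)      # toy objective: designs near pi do best


def capped_search(seed: float) -> float:
    """Hill-climb over designs; step 4 stays bounded, never open-ended."""
    best, best_score = seed, run_offspring(seed, CHILD_BUDGET)
    for _ in range(MAX_OFFSPRING):
        child = best + random.gauss(0.0, 0.5)   # "design" a variant (mutation)
        score = run_offspring(child, CHILD_BUDGET)
        if score > best_score:
            best, best_score = child, score
    return best


if __name__ == "__main__":
    random.seed(0)
    print(f"best design under the caps: {capped_search(0.0):.3f}")
```

The point of the sketch is the two constants: however clever the search loop becomes, MAX_OFFSPRING and CHILD_BUDGET bound the total resources any generation of offspring can command, which is the essence of intervening at step 4 rather than at step 3.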

In summary, Dietterich locates the risk of an intelligence explosion primarily in step 4, where AI systems would gain autonomy and resources. To prevent it, he suggests capping the resources allocated to AI offspring and ensuring that humans deeply understand AI systems before granting them autonomy.

"A gilded No is more satisfactory than a dry yes" - Gracian