Anthony Aguirre expresses a cautious view regarding the near-future prospect of general-purpose artificial intelligence (AI). He highlights the difficulty of replicating the effectiveness of evolved human intelligence in an artificial agent, a task that may demand computational resources beyond our capabilities for many decades.
Aguirre assigns a low probability to artificial general intelligence (AGI) arising in the next ten years (around 1%) and a markedly higher probability over the next thirty years (around 10%), noting that these estimates reflect both his own analysis and the opinions of AI experts.
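These two headline numbers can be made more concrete with a small calculation. The sketch below is a back-of-envelope illustration, not anything Aguirre presents: it assumes a constant per-year probability of AGI arrival within each horizon and asks what annual rate would reproduce each estimate.

```python
# Back-of-envelope illustration (not from Aguirre): find the constant
# per-year probability p that reproduces a cumulative estimate, i.e.
# solve 1 - (1 - p)**years == p_total for p.

def implied_annual_rate(p_total: float, years: int) -> float:
    """Per-year probability implied by a cumulative probability over `years`."""
    return 1 - (1 - p_total) ** (1 / years)

p10 = implied_annual_rate(0.01, 10)  # ~0.10% per year over the first decade
p30 = implied_annual_rate(0.10, 30)  # ~0.35% per year over three decades

print(f"Implied annual probability, 10-year horizon: {p10:.4%}")
print(f"Implied annual probability, 30-year horizon: {p30:.4%}")
```

Under this simple model, the longer horizon implies a higher annual rate (roughly 0.35% versus 0.10% per year), suggesting the estimates embed a per-year likelihood of AGI that rises over time rather than staying flat.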
He expresses concern that if AGI is created, it may not function as desired, potentially being “insane,” since an engineered mind would lack the stability that evolution conferred on human cognition. Early AGIs, he notes, may be cobbled together from components, some of them opaque deep learning systems, making the resulting agents difficult to predict and control.
Aguirre also considers whether AGIs would quickly give rise to superintelligent AIs (SIs), and highlights the challenges of developing SIs that are stable and safe.
Overall, Aguirre emphasizes the importance of thoughtful research and safety measures in AI development to lower the probability of undesirable outcomes, particularly when dealing with technologies that could have profound implications for humanity’s future.