August 9th, 2024

My views on AI changed every year 2017-2024

Alexey Guzey's views on AI evolved from believing AGI was imminent to questioning whether AGI is a coherent concept at all. He points to LLMs' limitations, describes his declining interest in AI, and urges skepticism about AI alignment and risk arguments.

Alexey Guzey reflects on his evolving views on artificial intelligence (AI) from 2017 to 2024, highlighting a significant shift in his understanding and beliefs about artificial general intelligence (AGI). Initially, Guzey believed AGI was decades away, but by 2022-2023 he felt it was imminent. By 2024, however, he had come to question the concept of AGI itself, suggesting it may not be a coherent idea worth discussing. He acknowledges the utility of large language models (LLMs) for specific tasks but argues they lack true understanding and reasoning. Guzey critiques the notion that advances in neural networks will lead to AGI, asserting that intelligence is the ability to solve specific problems in context and so cannot be "solved" in isolation. He also notes a decline in his interest in AI, viewing it as just another technology rather than a transformative force. His reflections include skepticism about efforts to align AI with human values and about claims of existential risk from AGI. Guzey concludes that many of his previous beliefs about AI should be dismissed, emphasizing the complexity and unpredictability of the field.

- Guzey's views on AGI shifted from believing it was imminent to questioning its coherence.

- He finds LLMs useful for specific tasks but lacking in true understanding and reasoning.

- He critiques the idea that neural network advancements will lead to AGI.

- Guzey expresses a decline in interest in AI, viewing it as a standard technology.

- He emphasizes the importance of skepticism regarding AI alignment and potential risks.

2 comments
By @unraveller - 8 months
>Seems that DeepMind’s original thesis about “solving intelligence and then using it to solve everything else” fundamentally misunderstands what intelligence is. Intelligence is the ability to solve specific problems, therefore it necessarily exists in the context of all in which it lives and what came before it and the goal of “solving intelligence” is meaningless.

The definition seems off. Many problems only seem specific in hindsight, and humans solving problems on command with full awareness of the problem space is more a measure of obedience than of intelligence. Napoleon famously didn't want intelligent generals; he wanted lucky ones.

DeepMind et al. have probably refined the motto internally to "solving reasoning" from a great number of perspectives, "then solving everything else".