My views on AI changed every year 2017-2024
Alexey Guzey's views on AI evolved from believing AGI was imminent to questioning whether the concept is even coherent. He points to LLMs' limitations, notes his declining interest in AI, and emphasizes skepticism about alignment and AI risk.
Alexey Guzey reflects on his evolving views on artificial intelligence (AI) from 2017 to 2024, highlighting a significant shift in his understanding and beliefs about artificial general intelligence (AGI). Initially, Guzey believed AGI was decades away, but by 2022-2023 he felt it was imminent. By 2024, however, he expressed confusion over the concept of AGI itself, suggesting it may not be a coherent idea worth discussing. He acknowledges the utility of large language models (LLMs) in specific tasks but argues they lack true understanding and reasoning capabilities. Guzey critiques the notion that advancements in neural networks will lead to AGI, asserting that intelligence is context-dependent and cannot be solved in isolation. He also notes a decline in his interest in AI, viewing it as just another technology rather than a transformative force. His reflections include skepticism about the alignment of AI with human values and the potential risks associated with AGI. Guzey concludes that many of his previous beliefs about AI should be dismissed, emphasizing the complexity and unpredictability of the field.
- Guzey's views on AGI shifted from believing it was decades away, to feeling it was imminent, to questioning whether the concept is coherent.
- He finds LLMs useful for specific tasks but lacking in true understanding and reasoning.
- He critiques the idea that neural network advancements will lead to AGI.
- Guzey expresses a decline in interest in AI, viewing it as a standard technology.
- He emphasizes the importance of skepticism regarding AI alignment and potential risks.
The definition seems off. Many problems only seem specific in hindsight, and a human solving problems on command, with full awareness of the problem space, is more a measure of obedience than of intelligence. Napoleon famously didn't want intelligent generals; he wanted lucky ones.
DeepMind et al. have probably refined the motto internally to "solving reasoning" from a great number of perspectives, "then solving everything else".
Related
I'm Terrified of Old People
Alexey Guzey reflects on his past arrogance and ignorance at 26, questioning the value of his advice and achievements. He acknowledges the wisdom of older individuals and emphasizes the importance of patience and continuous growth.
'Superintelligence,' Ten Years On
Nick Bostrom's 2014 book "Superintelligence" shaped the AI alignment debate, highlighting the risks of artificial superintelligence surpassing human intellect. Concerns center on misalignment with human values, though some remain skeptical that AI will achieve sentience. Discussions emphasize safety in AI advancement.
Superintelligence–10 Years Later
Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.
From GPT-4 to AGI: Counting the OOMs
The article traces AI advancements from GPT-2 to GPT-4 and extrapolates progress toward Artificial General Intelligence by 2027, emphasizing model improvements, automation potential, and the need for awareness of how quickly AI capabilities are scaling.
Someone is wrong on the internet (AGI Doom edition)
The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.