Even the 'godmother of AI' has no idea what AGI is
Fei-Fei Li expressed uncertainty about artificial general intelligence (AGI) at a summit, advocating for balanced AI regulations and emphasizing the importance of diversity and spatial intelligence in AI development.
Fei-Fei Li, a prominent figure in AI research, expressed uncertainty about the concept of artificial general intelligence (AGI) during a recent summit. Despite her significant contributions to the field, including the creation of ImageNet, she admitted she does not fully understand AGI, which is often described as a highly autonomous system that can outperform humans at a wide range of tasks. Li highlighted how difficult AGI is to define, noting that even OpenAI has developed multiple levels to gauge progress toward it, and urged the field to focus on more pressing issues in AI rather than getting caught up in ambiguous terminology.

Li also discussed her role in advising California on AI regulation, advocating for a balanced approach that encourages innovation while ensuring safety. She is currently leading her startup, World Labs, which aims to advance "spatial intelligence," enabling computers to understand and navigate the 3D world. Li believes a diverse AI ecosystem is crucial for developing better technology and is excited about the future of AI, particularly in bridging the gap between perception and action.
- Fei-Fei Li, a key AI researcher, is uncertain about the definition of AGI.
- OpenAI has created multiple levels to measure progress toward AGI.
- Li advocates for a balanced approach to AI regulation that promotes innovation.
- Her startup, World Labs, focuses on developing spatial intelligence in AI.
- Li emphasizes the need for diversity in AI development for better technology outcomes.
Related
Superintelligence–10 Years Later
Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.
From GPT-4 to AGI: Counting the OOMs
The article discusses AI advancements from GPT-2 to GPT-4, highlighting progress towards Artificial General Intelligence by 2027. It emphasizes model improvements, automation potential, and the need for awareness in AI development.
OpenAI reports near breakthrough with "reasoning" AI, reveals progress framework
OpenAI introduces a five-tier system to track progress towards artificial general intelligence (AGI), aiming for human-like AI capabilities. Current focus is on reaching Level 2, "Reasoners," with CEO confident in AGI by the decade's end.
Someone is wrong on the internet (AGI Doom edition)
The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.
My views on AI changed every year 2017-2024
Alexey Guzey's views on AI evolved from believing AGI was imminent to questioning its coherence. He critiques LLMs' limitations, expresses declining interest in AI, and emphasizes skepticism about alignment and risks.
Imo video models are the closest thing we have to “spatial intelligence.” They generate in three dimensions (2D images + time), scale just like image and probably language models, and given the right controls can model 3D worlds interactively (https://gamengen.github.io/). Not sure there’s a need to directly model polygons or point clouds (assuming that’s what they’re trying to do?) when there’s so much video data available to scale video models massively. I expect we’ll soon see video models used as planners for robotics as well.
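As a rough illustration of the “video model as planner” idea in the comment above, here is a minimal random-shooting MPC sketch in which an action-conditioned video predictor plays the role of the world model. Everything here (the VideoModel stub, the pixel-distance goal score, the action dimensions) is a hypothetical stand-in for illustration, not any real robotics or video-model API.

```python
import numpy as np

rng = np.random.default_rng(0)

class VideoModel:
    """Hypothetical stand-in for a learned, action-conditioned video predictor."""
    def predict(self, frames: np.ndarray, action: np.ndarray) -> np.ndarray:
        # A real model would roll the scene forward given the action;
        # this dummy just perturbs the current frame.
        return frames + 0.01 * rng.standard_normal(frames.shape)

def score_goal_progress(predicted: np.ndarray, goal: np.ndarray) -> float:
    # Crude proxy objective: negative pixel distance to a goal image.
    return -float(np.mean((predicted - goal) ** 2))

def plan(model, frames, goal, horizon=8, n_candidates=64, action_dim=4):
    """Random-shooting MPC: sample candidate action sequences, roll each one
    out through the video model, and return the first action of the sequence
    whose predicted final frame lands closest to the goal image."""
    best_score, best_actions = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1, 1, size=(horizon, action_dim))
        rollout = frames
        for action in actions:
            rollout = model.predict(rollout, action)
        score = score_goal_progress(rollout, goal)
        if score > best_score:
            best_score, best_actions = score, actions
    return best_actions[0]

frames = np.zeros((64, 64, 3))  # current camera observation
goal = np.ones((64, 64, 3))     # desired end state, specified as an image
print(plan(VideoModel(), frames, goal))
```

In a receding-horizon loop the robot would execute that first action, observe a new frame, and replan; a real system would also score rollouts with something richer than raw pixel distance.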
Why would anyone think they can own, boss around, and rent out an intelligent entity? It can't be goaded with threats of starvation and exposure. I wonder why anyone would think that manufacturing an intelligent entity would result in a benevolent and super-productive slave mind.