How close is AI to human-level intelligence?
Recent advancements in AI, particularly with OpenAI's o1, have sparked serious discussions about artificial general intelligence (AGI). Experts caution that current large language models lack the necessary components for true AGI.
Recent advancements in artificial intelligence (AI), particularly with large language models (LLMs) like OpenAI's o1, have reignited discussions about the potential for achieving artificial general intelligence (AGI). While OpenAI claims o1 operates more like human thought processes than previous models did, experts caution that LLMs alone are insufficient for reaching AGI. Researchers emphasize that despite the impressive capabilities of LLMs, such as problem-solving and language understanding, they still lack essential components for true general intelligence. The debate surrounding AGI has evolved significantly, with many now considering it a serious topic rather than fringe speculation. The term AGI, which gained traction around 2007, refers to AI systems that can perform a wide range of cognitive tasks akin to human reasoning. Current LLMs, while capable of impressive feats, remain limited to specific tasks and lack the ability to generalize across different domains. The architecture of LLMs, particularly the transformer model, has enabled them to learn complex language patterns, but the understanding of their inner workings is still incomplete. As AI continues to develop, the implications of achieving AGI raise both opportunities and risks, prompting ongoing research and ethical considerations.
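For readers unfamiliar with the transformer architecture mentioned above, the sketch below shows its central operation, scaled dot-product self-attention, in plain NumPy. This is a toy illustration, not OpenAI's implementation: the shapes and random inputs are made up, and real models add learned projections, multiple attention heads, and many stacked layers.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each position attends to every position, weighting value vectors
    by query/key similarity -- the core operation of the transformer."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (n, n) similarity matrix, scaled
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # weighted mix of value vectors

# Toy example: 4 token positions with 8-dimensional embeddings,
# using the same matrix for queries, keys, and values (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```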
- OpenAI's o1 model claims to mimic human thought processes more closely than previous LLMs.
- Experts believe LLMs alone are not enough to achieve artificial general intelligence (AGI).
- The AGI debate has shifted from fringe discussions to a mainstream topic among researchers.
- Current LLMs excel in specific tasks but lack the ability to generalize across different cognitive domains.
- The development of AGI poses both significant opportunities and potential risks for humanity.
Related
Sequoia: New ideas are required to achieve AGI
The article delves into the challenges of Artificial General Intelligence (AGI) highlighted by the ARC-AGI benchmark. It emphasizes the limitations of current methods and advocates for innovative approaches to advance AGI research.
OpenAI reports near breakthrough with "reasoning" AI, reveals progress framework
OpenAI introduces a five-tier system to track progress towards artificial general intelligence (AGI), aiming for human-like AI capabilities. Current focus is on reaching Level 2, "Reasoners," with CEO confident in AGI by the decade's end.
LLMs are a dead end to AGI, says François Chollet
François Chollet argues that large language models hinder progress toward artificial general intelligence and has launched the ARC Prize competition to promote AI systems that demonstrate true reasoning abilities.
Transcript for Yann LeCun: AGI and the Future of AI – Lex Fridman Podcast
Yann LeCun discusses the limitations of large language models, emphasizing their lack of real-world understanding and sensory data processing, while advocating for open-source AI development and expressing optimism about beneficial AGI.
AI can learn to think before it speaks
Recent advancements in AI, particularly OpenAI's o1 model, enhance reasoning capabilities but raise concerns about deception and safety. Further development and regulatory measures are essential for responsible AI evolution.
General intelligence is a strategy to defer the acquisition of abilities from the process of construction/blueprinting (i.e., genes, evolution...) to the living environment of the animal. The most generally intelligent animals are those that have nearly all of their sensory-motor skills acquired during their life -- we learn to walk and so can learn to play the piano, and to build a rocket.
There is a serious discontinuity in the strategies that achieve this deferral: the kinds of processes which "blueprint" the intelligence of a bacterium are discontinuous with the processes by which a living animal dynamically conceptualises its environment under shifts to its structure.
For the latter, animals need: living adaptation of their sensory-motor systems, hierarchical coordination of their bodies, robust causal modelling, and so on.
General intelligence is primitively a kind of movement, which becomes abstract only with a few hundred thousand years of culture. The earliest humans, able to linguistically express almost nothing, were nevertheless generally intelligent.
Present computer-science-led investigations into "intelligence" assume you can operate syntactically across the most peripheral consequences of general intelligence given by linguistic representations. This is profoundly misguided: each toddler necessarily must learn to walk. You cannot just project a slideshow of walking and get anywhere. And if you remove this capability and install a "walking module", you've removed the very capabilities which allow that child then to do anything new at all.
There is nothing in the linguistic syntactical shadow of human intelligence to be found in creating generally capable systems. It's just overfitting to our 2024 reflections.
[1] Suppose you have an accurate web-spinning simulator and you train a transformer ANN on 40 million years of natural spiderweb construction: between trees, rocks, etc. This AI is excellent at spinning natural webs. Would the transformer be able to spin a functional web in your pantry or basement? If not, then the AI isn't as smart as a spider. I don't think this thought experiment is actually possible: any computer simulation would excessively simplify the physical complexity. But based on transformers' pattern of failures in other domains, I don't think they are good enough to pull it off.
Sure can. The whole world has done this every time there's a major advance in algorithms. We do the same with other major advances, too, like how the industrial revolution was going to usher in utopia and GMOs were going to end world hunger. Whenever we can't see the end of something, half the world figures it's a vision problem, while the other half figures the end must not exist.
I don't think there is a single milestone. Intelligence has many aspects - IQ-test-type ability, chess-playing ability, emotional intelligence, the ability to go down to the shops and buy something, and so on.
AI is making gradual progress and is very good at some things like chess and very bad at others. There will probably be a gradual passing of different milestones on different dates. When they can replace a plumber, including figuring out the problem and getting and fitting the parts, that might be a sign they can do most stuff.
There's probably still some way to go, but a lot of resources are being thrown at the problem just now.
Still, I find it excellent when exploring new knowledge domains or cross-comparing across knowledge domains, since LLMs by design (and training corpus) will spill out highly probable terms and concepts matching my questions and phrase them nicely. Search on steroids, if you will, where real-time results also don't matter for me at all.
This is not intelligence, yet it is hugely valuable if used right. And I am sure that because of this, a lot of scientific discoveries will be made with today's LLMs used in creative ways, since most scientific discovery is ultimately looking at X within a setting Y, and there are a lot of potential X and Y combinations.
I am exaggerating a bit, but at some point someone (Niels Bohr?) had the thought of thinking about atoms the way we do about planets, with stuff circling each other. It's an "X but in setting Y" situation. First come up with such a scenario (or an automated way to combine lots of X and Y cleverly), then filter the results for something that would actually make sense, and then dig deeper in a semi-automatic way, with an actual human in the loop at least.
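As a rough illustration of that "combine X and Y, then filter with a human in the loop" idea, here is a minimal sketch. The `ask_llm` function is a hypothetical placeholder rather than a real API, and the concept and setting lists are invented for the example; in practice you would plug in an actual LLM client and a stricter filtering or ranking step before the human review.

```python
from itertools import product

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder so the sketch runs end to end;
    # swap in a call to whatever LLM API you actually use.
    return f"[model's take on: {prompt[:60]}...]"

# Invented example lists of concepts (X) and settings (Y).
concepts = ["planetary orbits", "ant colonies", "market auctions"]
settings = ["atomic structure", "protein folding", "network routing"]

# Step 1: generate every "X in setting Y" combination and ask the model to flesh it out.
candidates = [
    (x, y, ask_llm(f"What would it mean to model {y} using the framework of {x}? "
                   f"Name one testable prediction."))
    for x, y in product(concepts, settings)
]

# Step 2: filter. Here a human skims the printouts and keeps the plausible ones;
# an automated pre-filter could rank candidates before the human sees them.
for x, y, idea in candidates:
    print(f"--- {x} applied to {y} ---\n{idea}\n")
```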
> “Bad things could happen because of either the misuse of AI or because we lose control of it,” says Yoshua Bengio, a deep-learning researcher at the University of Montreal, Canada.
God, I hate phrases like this. We've already lost control of it. We don't have any control. AI will evolve in the rich medium of capitalism and be used by anyone thanks to its ease of use, and even laws will be unable to restrict that. At this point, since we've set up a system that promotes technologies regardless of their long-term cost or dangers, we simply cannot control them. Bad things are already happening, and human beings are being integrated into a matrix of technology whose ultimate purpose is just the furthering of technology.
Even people like Dr. Bengio are just pawns in a system, whose purpose is just to present an artificially balanced viewpoint as if there were a reasonable set of pros and cons, designed to make people think that we could "lose control" but with the right thinking, we don't have to let that happen. I mean come on, just suppose for a second the hypothesis of "AI is already out of control". If Dr. Bengio and their colleagues acknowledged that, then they'd be out of a job. So just by evolutionary pressure on "organizations that monitor AI", they have to be artificially balanced.
I remember Sam Altman pointing out in some interview that he considers GPT to be a reasoning machine. I suppose that if you consider what GPT does to be reasoning, then calling it AI is not so far-fetched.
I feel it's more like pattern recognition than reasoning, though, since there's no black-box "reasoning" component in an LLM.