December 5th, 2024

How close is AI to human-level intelligence?

Recent advancements in AI, particularly with OpenAI's o1, have sparked serious discussions about artificial general intelligence (AGI). Experts caution that current large language models lack the necessary components for true AGI.


Recent advancements in artificial intelligence (AI), particularly with large language models (LLMs) like OpenAI's o1, have reignited discussions about the potential for achieving artificial general intelligence (AGI). While o1 is said to operate more like human thought processes than previous models, experts caution that LLMs alone are insufficient for reaching AGI. Researchers emphasize that despite the impressive capabilities of LLMs, such as problem-solving and language understanding, they still lack essential components for true general intelligence.

The debate surrounding AGI has evolved significantly, with many now considering it a serious topic rather than fringe speculation. The term AGI, which gained traction around 2007, refers to AI systems that can perform a wide range of cognitive tasks akin to human reasoning. Current LLMs, while capable of impressive feats, remain limited to specific tasks and lack the ability to generalize across different domains.

The architecture of LLMs, particularly the transformer model, has enabled them to learn complex language patterns, but our understanding of their inner workings is still incomplete. As AI continues to develop, the prospect of achieving AGI raises both opportunities and risks, prompting ongoing research and ethical considerations.

- OpenAI's o1 model claims to mimic human thought processes more closely than previous LLMs.

- Experts believe LLMs alone are not enough to achieve artificial general intelligence (AGI).

- The AGI debate has shifted from fringe discussions to a mainstream topic among researchers.

- Current LLMs excel in specific tasks but lack the ability to generalize across different cognitive domains.

- The development of AGI poses both significant opportunities and potential risks for humanity.

14 comments
By @mjburgess - 5 months
General intelligence is an ability to cope, adapt and thrive in an ecology: to start from a limited set of capabilities, and via exploration, acquire a rich competence. To develop conceptualisations, techniques of coordination and control, to form novel goals and strategies to realise them, and so on.

General intelligence is a strategy to defer the acquisition of abilities from the process of construction/blueprinting (i.e., genes, evolution...) to the living environment of the animal. The most generally intelligent animals are those that have nearly all of their sensory-motor skills acquired during their life -- we learn to walk and so can learn to play the piano, and to build a rocket.

There is a serious discontinuity in strategy to achieve this deferral: the kinds of processes which "blueprint" the intelligence of a bacterium are discontinuous with the processes which a living animal needs in order to dynamically conceptualise its environment under shifts to its structure.

Of the latter, animals need: living adaptation of their sensory-motor systems, hierarchical coordination of their bodies, robust causal modelling, and so on.

General intelligence is primitively a kind of movement, which becomes abstract only with a few hundred thousand years of culture. The earliest humans, able to linguistically express almost nothing, were nevertheless generally intelligent.

Present computer-science-led investigations into "intelligence" assume you can operate syntactically across the most peripheral consequences of general intelligence given by linguistic representations. This is profoundly misguided: each toddler must necessarily learn to walk. You cannot just project a slideshow of walking and get anywhere. And if you remove this capability and install a "walking module", you've removed the very capabilities which allow that child to then do anything new at all.

There is nothing to be found in the linguistic, syntactical shadow of human intelligence that gets you to generally capable systems. It's just overfitting to our 2024 reflections.

By @aithrowawaycomm - 5 months
I continue to be dismayed by AI's quixotic focus on "human" intelligence. AI is very far from pigeon-level intelligence or dog-level intelligence. I strongly suspect transformers are dumber than spiders.[1] This focus on human intelligence via formal human knowledge is putting the cart before the horse. If your "human-level" AI architecture cannot conceivably be modified for chimp intelligence, and requires bootstrapping with a bunch of pre-processed human knowledge, then it is not actually emulating human intelligence. LLMs are fancy encyclopedias, not primitive brains.

[1] Suppose you have an accurate web-spinning simulator and you train a transformer ANN on 40 million years of natural spiderweb construction: between trees, rocks, etc. This AI is excellent at spinning natural webs. Would the transformer be able to spin a functional web in your pantry or basement? If not, then the AI isn't as smart as a spider. I don't think this thought experiment is actually possible: any computer simulation would excessively simplify the physical complexity. But based on transformers' pattern of failures in other domains, I don't think they are good enough to pull it off.
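
A minimal sketch of the train-and-transfer test described in the footnote. Everything here is a made-up placeholder (the `WebPolicy` class, the site samplers, the training loop), not a real simulator, a real transformer, or the commenter's code; it only frames "train on natural webs, test in the pantry" as an out-of-distribution check.

```python
import random

def sample_natural_site() -> str:
    # Settings from the training distribution: 40 million years of natural webs.
    return random.choice(["between trees", "between rocks", "across a bush"])

def sample_indoor_site() -> str:
    # Settings outside the training distribution: the pantry or basement.
    return random.choice(["pantry corner", "basement ceiling"])

class WebPolicy:
    """Stand-in for a model trained on simulated web construction."""
    def fit(self, sites: list) -> None:
        self.seen = set(sites)

    def spins_functional_web(self, site: str) -> bool:
        # A purely pattern-matching policy only handles settings it has seen;
        # the commenter's bet is that a transformer would behave roughly like this.
        return site in self.seen

policy = WebPolicy()
policy.fit([sample_natural_site() for _ in range(10_000)])
# The spider-level question: does the policy generalize to a novel setting?
print("Works indoors:", policy.spins_functional_web(sample_indoor_site()))  # False
```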

By @handsclean - 5 months
> You can’t say everybody’s a crackpot.

Sure can. Whole world’s done this every time there’s a major advance in algorithms. We do this with other major advances, too, like how the industrial revolution was going to usher in utopia and GMO was going to end world hunger. Whenever we can’t see the end of something, only half the world figures it’s a vision problem, while the other half figures the end must not exist.

By @tim333 - 5 months
>models... AGI. ... unlikely to reach this milestone on their own.

I don't think there is a single milestone. Intelligence has many aspects - IQ-test-type ability, chess-playing ability, emotional intelligence, the ability to go down to the shops and buy something, and so on.

AI is making gradual progress and is very good at some things, like chess, and very bad at others. There will probably be a gradual passing of different milestones on different dates. When they can replace a plumber, including figuring out the problem and getting and fitting the parts, that might be a sign they can do most stuff.

There's probably still a way to go, but there are a lot of resources being thrown at the problem just now.

By @anonyfox - 5 months
There is zero reasoning in it so far; everything up to today is perfectly explainable with advanced statistics and NLP. They're large _language_ models after all, no matter the hype.

Still, I find them excellent when exploring new knowledge domains or cross-comparing across knowledge domains, since LLMs by design (and training corpus) will spill out highly probable terms/concepts matching my questions and phrase them nicely. Search on steroids, if you will, where real-time results don't matter to me at all.

This is not intelligence, yet it is hugely valuable if used right. And I am sure that because of this, a lot of scientific discoveries will be made with today's LLMs used in creative ways, since most scientific discovery is ultimately looking at X within a setting Y, and there are a lot of potential X and Y combinations.

I am exaggerating a bit, but at some point someone (Niels Bohr?) had the thought of thinking about atoms like we do about planets, with stuff circling each other. It's an X-but-in-Y situation. First come up with such a scenario (or: an automated way to combine lots of Xs and Ys cleverly), then filter the results for something that would actually make sense, and then dig deeper in a semi-automatic way, with an actual human in the loop at least.
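
A rough sketch of that generate-filter-dig loop, assuming a placeholder `ask_llm` call and illustrative concept/setting lists (none of these names or prompts come from the comment; swap in a real LLM client and your own domains):

```python
from itertools import product

def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a call to whatever LLM/API you actually use.
    return "plausible: the analogy suggests a concrete, testable mapping."

concepts = ["atomic structure", "ant-colony routing", "protein folding"]
settings = ["orbital mechanics", "market auctions", "simulated annealing"]

shortlist = []
for x, y in product(concepts, settings):
    answer = ask_llm(
        f"Could thinking about {x} the way we think about {y} suggest a concrete, "
        f"testable idea? Start your answer with 'plausible' or 'implausible'."
    )
    if answer.lower().startswith("plausible"):
        shortlist.append((x, y, answer))

# The filtered pairs then go to an actual human for the deeper digging.
for x, y, note in shortlist:
    print(f"{x} viewed through {y}: {note}")
```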

By @mkoubaa - 5 months
It's definitely capable of human-level stupidity.
By @Dah00n - 5 months
I have yet to speak to any actual expert who believes we will see any of the "I" in "AI" anytime soon, if ever.
By @jeswin - 5 months
While AI may not be at human-level intelligence, we are already beginning to see what superhuman-level intelligence can look like.
By @frklem - 5 months
I can't believe this article is being published in Nature. The article is flawed, plagued with assumptions that I guess the author doesn't even notice (like what we really mean by AGI, the epistemological problems/assumptions behind intelligence, the real nature of thinking, the real functioning of the human brain). It is really curious that the philosophical community is addressing the debate on what AI really is and its implications, but the computer science community reads almost nothing about philosophy.

Regarding the fear of 'losing control of it', I would suggest reading the works of (or at least about) Gunther Anders and Bernard Stiegler. Technology (in this case AI) is inseparable from the human being, to the point that we already lost control of technology, its use and its meaning (like, 100 years ago). Another thing that surprises me is how the computer science community is blind to the work of Hubert Dreyfus and other contemporary philosophers who analyze AI from an epistemological and philosophical perspective. But, actually, I should not be surprised: we barely study philosophy in any scientific discipline at university.

This rhetoric about how AI is similar to the human brain is starting to get a bit boring. It assumes a very simplistic view of the brain and turns a deaf ear to other types of research (like language acquisition and embodiment, mind/brain duality, the epistemological basis for knowledge acquisition, the ontological basis of causal reasoning...). And above all, what is really upsetting is the techno-optimism behind this way of thinking.
By @vouaobrasil - 5 months
From the article:

> “Bad things could happen because of either the misuse of AI or because we lose control of it,” says Yoshua Bengio, a deep-learning researcher at the University of Montreal, Canada.

God, I hate phrases like this. We've already lost control of it. We don't have any control. AI will evolve in the rich medium of capitalism and be used by anyone due to its ease of use, and even laws will be unable to restrict that. At this point, since we've set up a system that promotes technologies regardless of their long-term cost or dangers, we simply cannot control them. Bad things are already happening, and human beings are being integrated into a matrix of technology whose ultimate purpose is just the furthering of technology.

Even people like Dr. Bengio are just pawns in a system whose purpose is to present an artificially balanced viewpoint, as if there were a reasonable set of pros and cons, designed to make people think that we could "lose control" but that, with the right thinking, we don't have to let that happen. I mean, come on, just suppose for a second the hypothesis that "AI is already out of control". If Dr. Bengio and their colleagues acknowledged that, then they'd be out of a job. So just by evolutionary pressure on "organizations that monitor AI", they have to be artificially balanced.

By @shinycode - 5 months
/sarcasm/ The real question is: will it eat us? (Like in The Matrix), if we believe there is only one dominant intelligence.
By @italodev - 5 months
Is the question even that relevant? I would say the best models already have better-than-human culture. Maybe not skills, but culture for sure.
By @genericspammer - 5 months
Is there a distinction between LLMs and AI, or do we consider LLMs to exhibit intellect?

I remember Sam Altman pointing out in some interview that he considers GPT to be a reasoning machine. I suppose that if you consider what GPT does to be reasoning, then calling it AI is not so far-fetched.

I feel it's more like pattern recognition than reasoning, though, since there's no black-box "reasoning" component in an LLM.