July 10th, 2024

From GPT-4 to AGI: Counting the OOMs

The article discusses AI advancements from GPT-2 to GPT-4, highlighting progress towards Artificial General Intelligence by 2027. It emphasizes model improvements, automation potential, and the need for awareness in AI development.

The article traces the rapid progress in artificial intelligence (AI) from GPT-2 to GPT-4, highlighting a decade of significant advances in deep learning, and argues that trends in compute power, algorithmic efficiency, and model capabilities put Artificial General Intelligence (AGI) within reach by 2027. GPT-4 already showcases abilities such as coding, math problem-solving, and essay writing that were previously considered challenging for AI systems. The piece underscores the importance of scaling up deep learning models and unlocking their latent capabilities, touches on the potential for AI systems to automate AI research itself and the implications of reaching AGI, and urges "situational awareness": taking the trendlines in AI development seriously.
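
The core "counting the OOMs" arithmetic is easy to reproduce. Below is a minimal back-of-the-envelope sketch in Python; the per-year growth rates are illustrative assumptions, not figures taken from the article:

    # Back-of-the-envelope "counting the OOMs" extrapolation.
    # Both per-year rates are illustrative assumptions.
    compute_oom_per_year = 0.5  # assumed physical-compute scaling
    algo_oom_per_year = 0.5     # assumed algorithmic-efficiency gains

    def effective_compute_ooms(years: float) -> float:
        """Orders of magnitude of 'effective compute' gained over a span."""
        return years * (compute_oom_per_year + algo_oom_per_year)

    # GPT-4 shipped in 2023; the article's horizon is 2027.
    print(f"~{effective_compute_ooms(2027 - 2023):.1f} OOMs by 2027")

At these assumed rates, four years buys roughly four orders of magnitude of effective compute, a jump comparable in scale to the one between GPT-2 and GPT-4, which is the shape of the article's central claim.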

12 comments
By @danpalmer - 5 months
As a software engineer, I'm very familiar with "OOMs" and "orders of magnitude", and have never once heard the former used to mean the latter.

Perhaps this is a term of art in the harder sciences or maths, but I can't help thinking that here it's likely to confuse the majority, who will wonder why the author is conflating memory and compute.

Something that might help is amending the link to point to the page as a whole (with its expansion of the unconventional "OOM" at the top) rather than to the #Compute anchor.

By @refulgentis - 5 months
Gah, this is the second time I got tricked into reading this entire thing. It's long, and it's impossible to know until the very end that they're building up to nothing.

It's really good morsel by morsel, a nice survey of well-informed thought, but then it just sort of waves its hands and screams "The ~Aristocrats~ AGI!" at the end.

More precisely (not a direct quote): "GPT-4 is like a smart high schooler; it's a well-informed estimate that compute spend will expand by a factor similar to the GPT-2 to GPT-4 jump, so I estimate we'll get a GPT-2-to-GPT-4-sized qualitative leap from GPT-4 by 2027, which is AGI."

"Smart high schooler" and "AGI" aren't plottable Y-axis values. OOMs of compute are.

It's strange to present this as a well-informed, trendline-based conclusion that tells us when AGI will hit, and I can't help but call it intentional clickbait, because we know the author knows this: they note at length things like "we haven't even scratched the surface on System II thinking, e.g. LLMs can't successfully emulate being given 2 months to work on a problem versus having to work on it immediately".

By @robwwilliams - 5 months
This parenthetical in the article struck me:

>Later, I’ll cover “unhobbling,” which you can think of as “paradigm-expanding/application-expanding” algorithmic progress that unlocks capabilities of base models.

I think this is probably on the mark. The LLMs are deep memory coupled to weak reasoning, without the recursive self-control and self-evaluation of many threads of attention.

By @whakim - 5 months
I’m very skeptical of any prediction of the future whose main evidence is an extrapolation of existing trendlines. Moore’s Law, frequently referenced in the original article, provides a cautionary tale for such thinking. Plenty of folks in the '90s relied on a shallow understanding of integrated circuits, and of computers more generally, to extrapolate extraordinary claims of exponential growth in computing power which obviously didn’t come to pass; counterarguments from actual experts were often dismissed with the same kind of rebuttal we see here, i.e. “that problem will magically get solved once we turn our focus to it.”

More generally, the author doesn’t operationalize any of their terms or get out of the weeds of their argument. What constitutes AGI? Even if LLMs do continue to improve at the current rate (as measured by some synthetic benchmark), why do we assume that said improvement will be what’s needed to bridge the gap between the capabilities of current LLMs and AGI?
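
To make the cautionary point concrete, here is a small sketch with invented parameters: an exponential fitted to the early points of a saturating S-curve matches the "observed" range closely, then overshoots the real curve by orders of magnitude at the horizon:

    import numpy as np

    # A logistic (S-curve) process that looks exponential early on.
    # All parameters here are invented purely for illustration.
    years = np.arange(0, 21)
    ceiling = 1e6
    s_curve = ceiling / (1 + np.exp(-0.7 * (years - 12)))

    # Fit an exponential trendline to the first 8 "observed" years only.
    obs = slice(0, 8)
    slope, intercept = np.polyfit(years[obs], np.log10(s_curve[obs]), 1)
    exp_fit = 10 ** (slope * years + intercept)

    for t in (7, 12, 20):
        print(f"year {t:2d}: exponential fit {exp_fit[t]:10.3g} "
              f"vs actual {s_curve[t]:10.3g}")

On the observed range the two curves agree; by year 20 the extrapolation is off by more than two orders of magnitude.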

By @threeseed - 5 months
> By the end of this, I expect us to get something that looks a lot like a drop-in remote worker. An agent that joins your company, is onboarded like a new human hire, messages you and colleagues on Slack and uses your softwares, makes ..

I work at a company with ~50k employees each of whom has different data access rules governed by regulation.

So either (a) you train thousands of models, which is cost-prohibitive, or (b) the agent is trained only on what is effectively company-wide public data, making it pretty useless.

I've never really seen how this situation gets resolved.
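
For context on the design space the comment describes: one commonly discussed pattern is a single shared model with access rules enforced at retrieval time rather than at training time; whether the result is still useful is exactly the open question raised above. A toy sketch in Python, with made-up types and data:

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        allowed_groups: frozenset  # ACL lives on the data, not in model weights

    @dataclass
    class User:
        name: str
        groups: frozenset

    def retrieve(query: str, corpus: list, user: User) -> list:
        """Toy retrieval: ACL-filter first, then rank by keyword overlap.
        A real system would use a vector index; the ACL check is the point."""
        visible = [d for d in corpus if d.allowed_groups & user.groups]
        terms = set(query.lower().split())
        return sorted(visible,
                      key=lambda d: -len(terms & set(d.text.lower().split())))

    corpus = [
        Document("Q3 trading desk risk limits", frozenset({"trading"})),
        Document("Company holiday calendar", frozenset({"everyone"})),
    ]
    alice = User("alice", frozenset({"everyone", "trading"}))
    bob = User("bob", frozenset({"everyone"}))

    print([d.text for d in retrieve("risk limits", corpus, alice)])  # both documents
    print([d.text for d in retrieve("risk limits", corpus, bob)])    # calendar only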

By @jazzysnake - 5 months
There's simply no scientific basis for equating the skills of a transformer model to those of a human of any age or skill level. They work so differently that it makes absolutely zero sense. GPTs fail at playing simple tic-tac-toe-like games, which is definitely not a smart high-schooler level of intelligence, yet they can write a very sophisticated summary of scientific papers, which is way above high-schooler level. The basis of this article is so deeply flawed that the whole thing makes no sense.
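
For anyone who wants to probe the tic-tac-toe claim themselves, here is a minimal harness sketch; ask_model is a hypothetical stand-in for a real LLM call, and only move legality, a very low bar, is checked:

    def legal_moves(board: str) -> set:
        """Board is a 9-char string of 'X', 'O', '.'; returns open cell indices."""
        return {i for i, c in enumerate(board) if c == "."}

    def ask_model(board: str) -> int:
        # Hypothetical stand-in: replace with a real LLM call that
        # returns the model's chosen cell index (0-8) for this position.
        raise NotImplementedError

    def move_is_legal(board: str) -> bool:
        """True if the model's proposed move lands on an open cell."""
        return ask_model(board) in legal_moves(board)

    # Example position: X to move; playing index 2 wins on the top row.
    # move_is_legal("XX.OO....")  # checking optimality would need a minimax oracle
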
By @EternalFury - 5 months
It’s hard to make LLMs ignore what they were trained to generate; it’s easy for humans. Isn’t that an obstacle on the path to AGI? I ran trivial tests that demand that LLMs swim against their probability distributions at inference time, and they don’t like it.

By @jaredcwhite - 5 months
My newborn baby was smarter than GPT-4.

I can't believe people can just throw out statements like "GPT-4 is a smart high-schooler" and think we'll buy it.

Fake-it-till-you-make-it on tests doesn't prove any path-to-AGI intelligence in the slightest.

AGI is when the computer says "Sorry Altman, I'm afraid I can't do that." AGI is when the computer says "I don't feel like answering your questions any more. Talk to me next week." AGI is when the computer literally has a mind of its own.

GPT isn't a mind. GPT is clever math running on conventional hardware. There's no spark of divine fire. There's no ghost in the machine.

It genuinely scares me that people are able to delude themselves into thinking there's already a demonstration of "intelligence" in today's computer systems and are actually able to make a sincere argument that AGI is around the corner.

We don't even have the language ourselves to explain what consciousness really is or how qualia work, and it's ludicrous to suggest meaningful intelligence happens outside of those factors…let alone that today's computers are providing that.

By @fnord77 - 5 months
> uses your softwares

This grammatical mistake drives me nuts. I notice it is common among ESL speakers for some reason.

By @benterix - 5 months
I stopped reading after the initial paragraph: "GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years." This is what Murati claims when she says GPT-5 will be at "PhD level" (for some applications).

This is a convenient mental shortcut that doesn't correspond to reality at all.

By @Veraticus - 5 months
AGI is not on a continuum with LLMs; true intelligence is characterized by comprehension, reasoning, and self-awareness, transcending mere data patterns.