December 7th, 2024

The GPT era is already ending

OpenAI launched its generative AI model, o1, claiming it advances AI reasoning beyond word prediction. Critics question its understanding, while the industry faces pressure to innovate amid stagnation.

OpenAI has recently launched its new generative AI model, o1, which is touted as a significant advancement over previous models like ChatGPT. CEO Sam Altman claims that o1 represents a shift toward a new "reasoning era" in AI, moving beyond mere word prediction to a model that can simulate human-like reasoning. This development comes against a backdrop of stagnation in AI advancement, where successive models have become increasingly indistinguishable from one another. OpenAI's focus on o1 suggests a strategic pivot to address limitations in current AI technologies, which have struggled to improve despite increasing size and complexity. Researchers have noted that o1 demonstrates genuine improvements on coding, math, and science tasks, although skepticism remains about whether these models truly "understand" their outputs. Critics argue that AI models still rely fundamentally on statistical patterns rather than genuine comprehension. The release of o1 is seen as a response to growing pressure on AI companies to innovate and justify their high costs as the industry grapples with the challenge of advancing beyond traditional word-predicting models.

- OpenAI's new model, o1, is positioned as a major advancement in AI reasoning capabilities.

- The launch reflects a strategic shift in response to stagnation in AI development and competition.

- Critics question whether AI models like o1 truly understand their outputs or merely mimic patterns.

- The AI industry faces pressure to innovate and justify costs amid growing competition.

- o1 aims to address limitations of previous models by focusing on reasoning rather than just prediction.

3 comments
By @aithrowawaycomm - 3 months
Ugh:

  The process might be akin to a chess-playing AI playing a million games to learn optimal strategies, Subbarao Kambhampati, a computer scientist at Arizona State University, told me. Or perhaps a rat that, having run 10,000 mazes, develops a good strategy for choosing among forking paths and doubling back at dead ends.

Lab rats don’t run 10,000 mazes! They don’t live nearly long enough for that. They run fewer than a dozen, and they seem to have a good strategy “baked in” as part of their spatial reasoning abilities. What Wong is really saying here is that o1 is like a very slow and stupid rat which cannot actually reason about anything.

The way the AI field constantly ignores and trivializes animal intelligence - a trend dating all the way back to Alan Turing - is, in my view, the root cause of AI winters, including the one coming in the next year or so. Investors don’t want to ask “is this thing actually smarter than a fish?” and executives don’t want to know the answer.

By @techfeathers - 3 months
Something always seemed incomplete about testing models against standardized tests; I would expect AI models to do well on standardized tests first, much better than humans, but it makes me wonder whether humans possess something else that these tests don’t measure. We test humans with these tests too, and I would guess that, loosely speaking, there’s a correlation between a person’s success on an advanced math exam or the bar and success in their career, but we also know of examples where there appears to be an inverse correlation: people who do great as a PhD student or mathlete but can’t operate in a day-to-day job.

So when AI companies start saying these AIs are as intelligent as a PhD student, it makes me wonder: most people aren’t as smart as a PhD student, and yet AI still seems to choke on some basic tasks.