December 8th, 2024

The GPT era is already ending

OpenAI has launched its generative AI model, o1, claiming it advances reasoning capabilities amid a broader stagnation in AI progress. Critics question whether such models truly understand what they generate, and the industry faces mounting challenges in improving the technology as data and scaling limits loom.


OpenAI has recently launched its new generative AI model, o1, which is touted as a significant advancement over previous models like ChatGPT. CEO Sam Altman claims that o1 represents a shift toward a new "reasoning era" in AI, moving beyond mere word prediction to a model that can simulate human-like reasoning. This development comes against a backdrop of stagnation in AI, where many existing models have become nearly indistinguishable from one another. OpenAI's o1 is said to be fundamentally different, with researchers noting that it demonstrates genuine improvements in reasoning capability. Skepticism remains, however, about whether these models truly understand the content they generate or merely mimic patterns.

Critics argue that despite such advancements, AI still lacks the depth of human cognition, often producing outputs that reveal a lack of true understanding. The industry faces challenges as the growth of AI models appears to be plateauing, with companies struggling to find new data and methods to enhance their technologies. OpenAI's focus on o1 suggests a strategic pivot to address these limitations, emphasizing the need for AI to evolve beyond traditional word-predicting frameworks to achieve greater intelligence.

- OpenAI's new model, o1, is positioned as a major advancement in AI reasoning capabilities.

- The AI industry is experiencing stagnation, with many models becoming similar and less innovative.

- Critics question whether AI truly understands content or simply mimics patterns.

- The shift towards reasoning models indicates a strategic pivot in AI development.

- Companies face challenges in enhancing AI technologies due to data limitations and plateauing growth.

12 comments
By @Skunkleton - 3 months
Please read the article before posting comments, or at least read a summary. The article is saying that GPT-4o style models are reaching their peak, and are being replaced by o1 style models. The article does not make value judgements on the usefulness of existing AI or business viability of AI companies.
By @Dilettante_ - 3 months
I started skimming about 1/3 through this article. Looks to be just a fluff piece about how cool the old AI models were and how they pale in comparison with what's in the works, with about 2 to 5 lines of shallow 'criticism' thrown in as an alibi?

Ten minutes and a teeny bit of mental real estate I will never get back.

By @aegypti - 3 months
By @lxgr - 3 months
> Although you can prompt such large language models to construct a different answer, those programs do not (and cannot) on their own look backward and evaluate what they’ve written for errors.

Given that the next token is always predicted based on everything that both the user and the model have typed so far, this seems like a false statement.

Practically, I've more than once seen an LLM go "actually, it seems like there's a contradiction in what I just said, let me try again". And has the author even heard about chain of thought reasoning?

It doesn't seem so hard to believe to me that quite interesting results can come out of a simple loop: write down various statements, evaluate their logical soundness in some way (formal derivation rules, statistical approaches, etc.), and repeat that several times.
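The loop described above can be sketched in a few lines. This is a purely illustrative skeleton, not any real model API: `generate` and `evaluate` are hypothetical stand-ins (here a deterministic candidate list and a toy arithmetic check) for a model's sampling step and a soundness check.

```python
# Minimal generate-evaluate-retry sketch. Both functions are stand-ins:
# a real system would sample an LLM conditioned on the history so far,
# and evaluate with formal rules, a verifier, or another model pass.

CANDIDATES = ["2 + 2 = 5", "2 + 2 = 4"]  # toy "model outputs"

def generate(history):
    # Stand-in for sampling a next attempt given everything written so far.
    return CANDIDATES[min(len(history), len(CANDIDATES) - 1)]

def evaluate(statement):
    # Stand-in soundness check: verify a simple arithmetic equality.
    lhs, rhs = statement.split("=")
    return eval(lhs) == int(rhs)

def reason_loop(max_rounds=10):
    history = []
    for _ in range(max_rounds):
        attempt = generate(history)
        history.append(attempt)
        if evaluate(attempt):
            return attempt, history  # keep the first attempt that checks out
    return None, history

answer, trace = reason_loop()
print(answer)  # the first statement that passed the check
```

The first candidate fails the check, so the loop retries and accepts the second, which is the "look backward and try again" behavior the comment describes.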

By @rtrgrd - 3 months
Not sure if I got the gist of the article right, but are they trying to say that chain-of-thought prompting will lead us to AGI / be a substantial breakthrough? Are CoT techniques different from what o1 is doing? Not sure if I'm missing the technical details or if the technical details just aren't there.
By @juped - 3 months
The "GPT Era" ended with OpenAI resting on its junky models while Anthropic runs rings around it, but sure, place a puff piece in the Atlantic; at least it's disclosed sponsored content?
By @Zardoz89 - 3 months
And presented in audio narration at the head of the written article: “Produced by ElevenLabs and News Over Audio (Noa) using AI narration. Listen to more stories on the Noa app.”
By @OutOfHere - 3 months
I like AIs with a personality; I like them to shoot from the hip. 4o does this better than o1.

o1, however, is often better for coding and for puzzle-solving, which account for only a small share of LLM usage.

o1 is so much more expensive than 4o that it makes zero sense for it to be a general replacement. This will never change because o1 will always use more tokens than 4o.

By @talldayo - 3 months
With a whimper too, not the anticipated bang.
By @comeonbro - 3 months
Insane cope. Emily Bender and Gary Marcus still trying to push "stochastic parrot", the day after o1 causes what was one of the last remaining credible LLM reasoning skeptics (Chollet) to admit defeat.
By @jazz9k - 3 months
It ended because it's a glorified search engine now. All of the more powerful functionality has been limited or removed.

My guess is that this was done to sell it to governments and anyone else willing to pay for it.