AI Predictions for 2025, from Gary Marcus
Gary Marcus predicts that AGI won't be achieved by 2025, AI profits will be modest, regulatory frameworks will lag, job displacement will be minimal, and AI company valuations may decline.
Gary Marcus outlines his predictions for AI developments by the end of 2025, emphasizing that artificial general intelligence (AGI) will not be achieved, contrary to claims by figures like Elon Musk. He anticipates that no AI system will reliably solve more than four of the Marcus-Brundage tasks (a set of benchmarks proposed for 2027), and that profits from AI models will remain modest. Regulatory frameworks in the U.S. will lag behind Europe's, and issues such as hallucinations and reasoning errors in generative AI will persist. The hype surrounding AI agents and humanoid robotics will not translate into reliable products. He predicts that few radiologists will be replaced by AI and that truly driverless cars will remain limited in use. The impact of AI on the workforce will be minimal, with less than 10% of jobs affected. Marcus expresses medium confidence that technical moats will remain elusive and that AI company valuations may begin to decline. He also notes the potential for a significant cyberattack involving generative AI. Overall, he suggests that while AI will continue to evolve, many of the anticipated breakthroughs may not materialize as quickly as expected.
- AGI is unlikely to be achieved by the end of 2025.
- Profits from AI models will remain modest, with limited corporate adoption.
- Regulatory frameworks in the U.S. will lag behind Europe.
- The impact of AI on job displacement will be minimal, affecting less than 10% of the workforce.
- Technical moats for AI companies will remain elusive, with potential declines in valuations.
Related
Pop Culture
Goldman Sachs report questions generative AI's productivity benefits, power demands, and industry hype. Economist Daron Acemoglu doubts AI's transformative potential, highlighting limitations in real-world applications and escalating training costs.
Why the collapse of the Generative AI bubble may be imminent
The Generative AI bubble is predicted to collapse soon, with declining investor enthusiasm and funding, potentially leading to failures of high-valued companies by the end of 2024.
My views on AI changed every year 2017-2024
Alexey Guzey's views on AI evolved from believing AGI was imminent to questioning its coherence. He critiques LLMs' limitations, expresses declining interest in AI, and emphasizes skepticism about alignment and risks.
AI companies are pivoting from creating gods to building products
AI companies are shifting from model development to practical product creation, addressing market misunderstandings and facing challenges in cost, reliability, privacy, safety, and user interface design, with meaningful integration expected to take a decade.
The AI Boom Has an Expiration Date
Leading AI executives predict superintelligent software could emerge within years, promising societal benefits. However, concerns about energy demands, capital requirements, and investor skepticism suggest a potential bubble in the AI sector.
• 7-10 GPT-4 level models
He only gets to claim this is true if you count a bunch of different versions of the same base model, or if you’re willing to say that some models that outperform GPT-4 on some benchmarks count as being GPT-4 level. I don’t think Marcus was right in spirit, here.
• No massive advance (no GPT-5, or disappointing GPT-5)
Seems too unquantifiable to judge. I would call o1 a massive advance over 4o, but I'm sure Marcus would not.
• Price wars
I guess so? From what I've read, the frontier model companies are still profitable, and OpenAI now has a $200/mo commercial tier, hardly the action of a company setting its prices purely to undercut the competition.
• Very little moat for anyone
It still seems like the only companies who have pulled off frontier model capabilities have spent many millions of dollars doing it. I think this might become true next year but I don’t think this can be judged as correct based on what we saw in 2024 alone.
• No robust solution to hallucinations
You only use words like “robust” in a prediction like this so you have room to weasel out of it later when the hallucinations diminish greatly but don’t quite go extinct.
• Modest lasting corporate adoption
My industry is oil and gas. A pretty hidebound and conservative industry. Adoption of LLMs has been massive.
• Modest profits, split 7-10 ways
Define modest.
I score Marcus at 0/7, at best 2/7.
Overall, the field has seen tremendous progress, whether or not most users realize just how far we've come. Marcus's predictions don't sound specific enough. No GPT-5? Correct, but what does that even mean?
The robustness of LLMs—like the original ChatGPT or GPT-3.5—is still far from a level where domain novices can rely on them with confidence. This might change as models incorporate more first-order data (e.g., direct observations in physics, embodiment) and improve in causal reasoning and deductive logic.
I find it crucial to have critical voices like Gary Marcus to counterbalance the overwhelming hype perpetuated by social media enthusiasts and corporate PR—much of which the media tends to echo uncritically.
One of Marcus's recurring demands is a more neuro-symbolic approach to advancing AI: progress can't come solely from "scaling up" models. And it seems like he's right: all major ML companies appear to be shifting toward search-based algorithms (e.g., Q*) combined with reinforcement learning at inference time to explore the problem space, moving beyond mere "next-token prediction" training.
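For illustration only, here is a minimal sketch of inference-time search in its simplest best-of-N form: sample several candidate answers and keep the one a verifier scores highest, rather than committing to a single greedy decode. The details of systems like Q* are not public, and `generate_candidate` / `score_with_verifier` are hypothetical placeholders, not real APIs.

```python
# Illustrative sketch only: samples N candidate answers and keeps the one
# a verifier scores highest. The callables are hypothetical stand-ins for
# a language model and a reward/verifier model.

import random
from typing import Callable, List


def best_of_n_search(
    prompt: str,
    generate_candidate: Callable[[str], str],
    score_with_verifier: Callable[[str, str], float],
    n: int = 8,
) -> str:
    """Sample n candidates and return the one the verifier scores highest."""
    candidates: List[str] = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score_with_verifier(prompt, c))


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs: a "model" that guesses numbers and a
    # "verifier" that rewards guesses close to a hidden target.
    target = 42
    fake_model = lambda _prompt: str(random.randint(0, 100))
    fake_verifier = lambda _prompt, answer: -abs(int(answer) - target)
    print(best_of_n_search("guess the number", fake_model, fake_verifier))
```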
Of course there are plenty of problems with the current state of AI and LLMs, but holding such a preconceived pessimistic outlook that can't even acknowledge their massive, rapid adoption and usefulness across multiple domains does not seem intellectually honest.
https://news.ycombinator.com/item?id=42560545
I'd argue 1, 2, maybe 6 are effectively already doable. 3 & 5 are technically possible with some RAG hackery, but in general they form a good benchmark for out-of-context fact retrieval. 4 & 10 might happen soon-ish with work on "agents" and proof synthesis respectively. 7 & 8 are too subjective. 9, maybe weakened to formulating or proving novel theorems, might be a good baseline for "peak human" intelligence (I think at best o1-o3 can spot and prove some lemmas, but nothing that anyone would bother publishing).
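As an aside, a minimal, dependency-free sketch of the kind of RAG "hackery" mentioned above could look like the following: retrieve the snippets most relevant to a question and prepend them to the prompt so the model can answer from facts outside its training data. Real systems use embedding similarity and a vector store; plain word overlap stands in here, and all names are illustrative.

```python
# Minimal RAG-style sketch: rank documents by word overlap with the question,
# then build a prompt that asks the model to answer from that context only.

from typing import List


def retrieve(question: str, documents: List[str], k: int = 2) -> List[str]:
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(question: str, documents: List[str]) -> str:
    """Assemble a context-grounded prompt from the retrieved snippets."""
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    docs = [
        "The project deadline was moved to March 3rd.",
        "The cafeteria menu changes every Friday.",
    ]
    print(build_prompt("When is the project deadline?", docs))
```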
How about he indicate: 1. how he came to these conclusions (coin flip? pessimism?), and 2. how many predictions he missed. He's implying a very high rate of success, which is a big red flag of shenanigans for me.
This is little more than vague generalities and coin flipping with retroactive cherry picked "See?! I was right!" analysis.
A gypsy at a traveling circus serves up about the same.
I had a look at the "Marcus-Brundage tasks" that he has modestly named after himself and am struck that, for an AI skeptic, he's listed things for 2027 well beyond 99.9% of humans: writing 10,000 lines of bug-free code, Oscar-level screenplays, Nobel-prize discoveries, Pulitzer-level books, etc.