January 16th, 2025

My subjective notes on the state of AI at the end of 2024

The AI landscape is evolving, with major developers influencing advancements. Generative knowledge bases are foundational, but progress is plateauing, leading to incremental changes in productivity and education rather than revolutionary shifts.

The AI landscape is rapidly evolving, prompting reflections on its current state and predictions about its future. A series of four posts explores key themes: industry transparency, generative knowledge bases, the current state of AI, and future forecasts. Analyzing the decisions of major AI developers like OpenAI and Google allows for informed assumptions about the industry. Current advancements rest on generative knowledge bases, which are large probabilistic models. The development of these neural networks appears to be reaching a plateau, however, suggesting that future progress will be incremental and evolutionary rather than revolutionary. Expectations for the near future should not include singularity, strong AI, or significant job losses due to automation. Instead, the focus should be on increased labor productivity, job redistribution, educational turbulence, and shifts in the educational levels of future generations. The concept of generative knowledge bases is central to understanding the current AI situation and its implications.

- The AI industry is influenced by the decisions of major developers like OpenAI and Google.

- Current advancements are primarily based on generative knowledge bases, which are large probabilistic models.

- Development in neural networks is plateauing, indicating future progress will be incremental.

- There are no immediate expectations for singularity or strong AI, nor significant job losses due to automation.

- Future changes will likely involve increased productivity and shifts in education and job distribution.

3 comments
By @343rwerfd - 3 months
You're mentioning only publicly known information. The rumors mentioning radical advances behind closed doors are wild, and then you've suddenly got some stuff like deepseek or phi-4.

Rumors mention recursive "self"-improvement (training) already ongoing at large scale: better AIs training lesser (still powerful) AIs to become better AIs, and the cycle restarts. Maybe o1 and o3 are just the beginning of what was chosen to be made available publicly (also the newer Sonnet).

https://www.thealgorithmicbridge.com/p/this-rumor-about-gpt-...

The pace of change is actually uncertain; you could have revolutionary advances maybe 4-7 times this year, because the tide has changed and massive hardware (available only to a few players) isn't a blocker anymore, given that algorithms and software are taking the lead as the main force advancing AI development (anyone on the planet with a brain could make a radical leap in AI tech, anytime going forward).

https://sakana.ai/transformer-squared/

Besides the rumors and the relatively (still) low-impact recent innovations, we have history: remember that the technology behind gpt-2 existed basically two years before they made it public, and the theory behind that technology existed maybe four years before anything close to practical came of it.

All the public information is just old news. If you want to know where everything is going, you should look at where the money is going and/or where the best teams are working (deepseek, others like novasky > sky-t1).

https://novasky-ai.github.io/posts/sky-t1/

By @kingkongjaffa - 3 months
> OpenAI et al reaching a plateau.

Yes. The latest product releases from them all have been chain-of-thought tweaks to existing models rather than entirely new models. Several models are perceivably the same as or worse than previous models (Sonnet 3.5 is sometimes worse than Opus 3.0, and Opus 3.5 is nowhere in sight).

GPT-4o is sometimes worse than base GPT-4 when it was available.

The newest and largest models so far are too expensive to run and/or not much better than the previous best models, which is why they have not been released yet despite rumours that they were being trained.

I would love announcements/data to the contrary.

By @moomoo11 - 3 months
Meanwhile, actual software we rely on to do our real jobs continues to suck and reach new levels of enshittification as new, mostly half-baked gen ai features are added instead of better UX or customization.