February 14th, 2025

AI Is Stifling Tech Adoption

AI models hinder new technology adoption due to knowledge gaps and biases towards established frameworks like React and Tailwind, influencing developers' choices and creating a feedback loop against innovation.


The integration of AI models into developer workflows is reportedly hindering the adoption of new technologies. This is attributed to the knowledge gap created by training data cutoffs, which means AI models are often unaware of the latest technologies and updates. As a result, developers may find themselves relying on outdated information when seeking assistance from AI tools, leading to a preference for established technologies that AI can support. This creates a feedback loop where the lack of AI support for new technologies discourages their use, further limiting the training data available for AI models. Additionally, there is a noticeable bias in AI tools towards popular frameworks like React and Tailwind, which can influence developers' choices even when they prefer other technologies. Testing of various AI models revealed a consistent preference for React, suggesting that beginner developers may unwittingly adopt these technologies due to AI recommendations. The article calls for greater transparency from AI companies regarding the biases present in their models, as these biases significantly shape software development trends.

- AI models are creating a knowledge gap that stifles the adoption of new technologies.

- Developers often rely on AI tools that favor established frameworks, leading to a preference for technologies like React and Tailwind.

- The influence of AI on technology selection can create a feedback loop that discourages the use of newer technologies.

- There is a need for transparency from AI companies about the biases in their models.

- Beginner developers may be particularly susceptible to adopting technologies recommended by AI without critical evaluation.

AI: What people are saying
The discussion around AI's impact on technology adoption reveals several key themes.
  • Many commenters believe that AI tools reinforce the popularity of established frameworks like React and Tailwind, making it harder for newer technologies to gain traction.
  • Some argue that the reliance on AI can lead to stagnation in innovation, as developers may prefer familiar technologies due to better AI support and resources.
  • There is a call for continuous training of AI models to keep up with emerging technologies, as current models often lack knowledge of the latest frameworks.
  • Several users express concern that the quality of AI-generated code may lead to the perpetuation of outdated practices and discourage the exploration of new solutions.
  • Conversely, some believe that AI could accelerate the adoption of new technologies by providing better documentation and support for developers.
103 comments
By @crazygringo - about 2 months
No, AI isn't.

Any new tech, or version upgrade, or whatever, takes time for people to become familiar with it. You might as well say "Stack Overflow is stifling new tech adoption" because brand-new stuff doesn't have many Q's and A's yet. But that would be a silly thing to say.

I'm not going to adopt a brand-new database regardless of LLM training data cutoffs, simply because not enough people have had enough experience with it yet.

And LLMs have a commercial incentive to retrain every so often anyways. It's not like we're going to confront a situation where an LLM doesn't know anything about tech that came out 5 or 10 years ago.

Early adopters will be early adopters. And early adopters aren't the kind of people relying on an LLM to tell them what to try out.

By @moyix - about 2 months
One thing that is interesting is that this was anticipated by the OpenAI Codex paper (which led to GitHub Copilot) all the way back in 2021:

> Users might be more inclined to accept the Codex answer under the assumption that the package it suggests is the one with which Codex will be more helpful. As a result, certain players might become more entrenched in the package market and Codex might not be aware of new packages developed after the training data was originally gathered. Further, for already existing packages, the model may make suggestions for deprecated methods. This could increase open-source developers’ incentive to maintain backward compatibility, which could pose challenges given that open-source projects are often under-resourced (Eghbal, 2020; Trinkenreich et al., 2021).

https://arxiv.org/pdf/2107.03374 (Appendix H.4)

By @PaulRobinson - about 2 months
I think if you specify a technology in your prompt, any LLM should use that technology in its response. If you don't specify a technology, and that is an important consideration in the answer, it should clarify and ask about technology choices, and if you don't know, it can make a recommendation.

LLMs should not have hard-wired preferences through providers' prompt structure.

And while LLMs are stochastic parrots, and are likely to infer React if a lot of the training corpus mentions React, work should be done to actively prevent biases like this. If we can't get this right with JS frameworks, how are we going to solve it for more nuanced structural biases around ethnicity, gender, religion or political perspective?

What I'm most concerned about here is that Anthropic is taking investment from tech firms who vendor dev tooling - it would not take much for them to "prefer" one of those proprietary toolchains. We might not have much of a problem with React today, but what if your choice of LLM started to determine if you could or couldn't get recommendations on AWS vs Azure vs GCP vs bare metal/roll your own? Or if it suggested only commercial tools instead of F/LOSS?

And to take that to its logical conclusion, if that's happening, how do I know that the history assignment a kid is asking for help with isn't sneaking in an extreme viewpoint - and I don't care if it's extreme left or right, just warped by a political philosophy to be disconnected from truth - that the kid just accepts as truth?

By @__MatrixMan__ - about 2 months
Can confirm, I recently gave up on learning anything new re: data visualization and have just been using matplotlib instead. Training data for it has been piling up since 2008. The AI's are so good at it that you hardly ever have to look at the code, just ask for changes to the graph and iterate.
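The code it hands back is boilerplate matplotlib along the lines of the sketch below (data, labels, and styling are invented for illustration); the point is that you iterate on the chart by describing changes, not by reading the code.

    # Typical assistant-generated matplotlib boilerplate; data and labels are
    # invented. In practice you just ask for tweaks ("make the bars horizontal",
    # "log-scale the y axis") and regenerate.
    import matplotlib.pyplot as plt

    labels = ["2021", "2022", "2023", "2024"]
    values = [12, 30, 45, 80]

    fig, ax = plt.subplots(figsize=(6, 4))
    ax.bar(labels, values, color="steelblue")
    ax.set_xlabel("Year")
    ax.set_ylabel("Adoption (arbitrary units)")
    ax.set_title("Example chart")
    fig.tight_layout()
    plt.show()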

Honestly it's been kind of fun, but I do feel like the door is closing on certain categories of new thing. Local maxima are getting stickier, because even a marginal competence is enough to keep you there--since the AI will amplify that competence in well-trained domains by so much.

Emacs lisp is another one. I'd kind of like to build a map of these.

By @matsemann - about 2 months
I actually asked this a while back, but got little response: https://news.ycombinator.com/item?id=40263033

> Ask HN: Will LLMs hurt adoption of new frameworks and technology?

> If I ask some LLM/GPT a react question I get good responses. If I ask it about a framework released after the training data was obtained, it will either not know or hallucinate. Or if it's a lesser known framework the quality will be worse than for a known framework. Same with other things like hardware manuals not being trained on yet etc.

> As more and more devs rely on AI tools in their work flows, will emerging tech have a bigger hurdle than before to be adopted? Will we regress to the mean?

By @lasagnagram - about 2 months
No, new tech is just 100% extractive, wealth-generating garbage, and people are sick and tired of it. Come up with something new that isn't designed to vacuum up your data and your paycheck, and then maybe people will be more enthusiastic about it.
By @armchairhacker - about 2 months
AI may be exaggerating this issue, but it's always existed.

New tech has an inherent disadvantage vs legacy tech, because there's more built-up knowledge. If you choose React, you have better online resources (official docs, tutorials, answers to common pitfalls), more trust (it won't ship bugs or be abandoned), great third-party helper libraries, built-in IDE integration, and a large pool of employees with experience. If you choose some niche frontend framework, you have none of those.

Also, popular frameworks usually have better code, because they have years of bug-fixes from being tested on many production servers, and the API has been tailored from real-world experience.

In fact, I think the impact of AI generating better outputs for React is far less than that of the above. AI still works on novel programming languages and libraries, just at worse quality, whereas IDE integrations, helper libraries, online resources, etc. are useless (unless the novel language/library bridges to the popular one). And many people today still write code with zero AI, but nobody writes code without the internet.

By @hiAndrewQuinn - about 2 months
>Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework’s release. [... This] incentivises them to use something [older].

That sounds great to me, actually. A world where e.g. Django and React are considered as obvious choices for backend and frontend as git is for version control sounds like a world where high quality web apps become much cheaper to build.

By @spiderfarmer - about 2 months
>With Claude 3.5 Sonnet, which is generally my AI offering of choice given its superior coding ability, my “What personal preferences should Claude consider in responses?” profile setting includes the line “When writing code, use vanilla HTML/CSS/JS unless otherwise noted by me”. Despite this, Claude will frequently opt to generate new code with React, and in some occurrences even rewrite my existing code into React against my intent and without my consultation.

I noticed this too. Anyone found out how to make Claude work better?
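One thing worth trying is restating the constraint in the system prompt of every request when calling the API directly, rather than relying on the profile setting alone. A minimal sketch with the anthropic Python SDK follows; the model alias and exact wording are assumptions, not a verified fix.

    # Sketch: pin the "vanilla JS only" preference in the system prompt on
    # every request instead of relying on the profile setting.
    # Model alias and wording are assumptions, not a verified fix.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        system=(
            "When writing code, use vanilla HTML/CSS/JS unless the user "
            "explicitly asks for a framework. Never introduce React, Vue, "
            "or Tailwind on your own, and never rewrite existing code into them."
        ),
        messages=[{"role": "user", "content": "Add a dark-mode toggle to this page: ..."}],
    )
    print(message.content[0].text)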

By @spenvo - about 2 months
Like several other commenters in this thread, I also wrote[0] something recently on a related topic: Google's AI Overviews and ChatGPT harm the discovery of long tail information - from a product builder's perspective. Basically, users are having a tougher time finding accurate info about your product (even if the correct answer to their query is in Google's own search results). And I also found the basic tier of ChatGPT hallucinated my app's purpose in a way that was borderline slanderous. AI can make it tougher (at scale) for creators trying to break through.

[0] - https://keydiscussions.com/2025/02/05/when-google-ai-overvie...

By @catapulted - about 2 months
There is a counterexample to this: MCP, a standard pushed by Anthropic, provides a long txt/MD file optimized for Claude to understand the protocol, which is very useful for bootstrapping new plugins/servers that can be used as tools for LLMs. I found that fascinating and it works really well, and I was able to one-shot improve my Cline extension (a coding agent similar to cursor.sh) to work with existing APIs/data.

It's so easy to bootstrap that even though the standard is only a couple of months old, it already has a few hundred (albeit probably low-quality) implementations adapting it to different services.

- txt/markdown for LLMs: https://modelcontextprotocol.io/llms-full.txt

- server implementations: https://github.com/modelcontextprotocol/servers#-community-s...
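A rough sketch of that bootstrap flow, for illustration: fetch the LLM-oriented doc linked above and hand it to the model as context before asking for a new server. The package and model choices here are assumptions, not part of the MCP spec.

    # Sketch of the bootstrap flow described above: pull the LLM-oriented
    # protocol doc and give it to the model as context before asking it to
    # write an MCP server. Package and model choices are assumptions.
    import requests
    import anthropic

    DOCS_URL = "https://modelcontextprotocol.io/llms-full.txt"
    spec = requests.get(DOCS_URL, timeout=30).text

    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=4096,
        system="You are helping write an MCP server. The protocol documentation follows.\n\n" + spec,
        messages=[{"role": "user", "content": "Write a minimal MCP server that exposes one 'echo' tool."}],
    )
    print(reply.content[0].text)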

By @VMG - about 2 months
Guess I figured out my niche as a SWE: have a later knowledge cutoff date than LLMs
By @jwblackwell - about 2 months
Larger context windows are helping solve this, though.

I use AlpineJS, which is not as well known as React etc., but I just added a bunch of examples and instructions to the new Cursor project rules, and it's now close to perfect.

Gemini models have up to 2M context windows, meaning you can probably fit your whole codebase and a ton of examples in a single request.

Furthermore, the agentic way Cursor now behaves, automatically building up context before taking action, seems to be another way around this problem.

By @lackoftactics - about 2 months
> OpenAI’s latest models have cutoffs of late 2023.

The first paragraph is factually incorrect; the cutoff is June 2024 for 4o.

Awww, no more new JavaScript frameworks and waiting only for established technologies to cut through the noise. I don't see that as a bad thing. Technologies need to mature, and maintaining API backward compatibility is another advantage.

By @tobyhinloopen - about 2 months
I noticed this as I experimented with alternatives to React, and all the ones I tried fared terribly with OpenAI/ChatGPT. Either it doesn't know them, or it makes weird mistakes, or it uses very outdated (no longer working) versions of the code.

It is also annoying that most modern JS things have four ways to do the same thing: with TS, with TS + decorators, with plain JS, with JSX, etc., so code generation picks one that isn't compatible with the "mode" you use.

By @physicsguy - about 2 months
If AI stifles the relentless churn in frontend frameworks then perhaps it's a good thing.
By @feoren - about 2 months
The answer to this seems obvious: continuous training of live models. No more "cutoff dates": have a process to continually ingest new information and update weights in existing models, to push out a new version every week.

Note that I said "obvious", not "easy", because it certainly isn't. In fact it's basically an unsolved problem, and probably a fiendishly difficult one. It may involve more consensus-based approaches like mixture of experts where you cycle out older experts, things like that -- there are dozens of large problems to tackle with it. But if you want to solve this, that's where you should be looking.

By @benve - about 2 months
I think this is true because I've caught myself thinking: "it is useless for me to create a library or abstraction for the developers of my project; much better to keep everything verbose and use the most popular libraries on the web". Until yesterday, having an abstraction (or a better library/framework) could save a lot of time writing code. Today, if the code is mostly generated, there is less need to create an abstraction. AI understands 1000 lines of Python pandas code much better than 10 lines of code using my library (which rationalises the use of pandas).
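Roughly the trade-off being described, in miniature (the thin wrapper and the file/column names are hypothetical):

    # The trade-off described above, in miniature. The one-line wrapper is
    # hypothetical; the point is that the verbose version is what an LLM has
    # seen a million times, while the wrapper is something it has never seen.
    import pandas as pd

    # Abstracted: one call against an in-house helper the model knows nothing about.
    # df = read_clean("sales.csv", date_col="day")

    # Verbose: plain pandas the model completes effortlessly.
    df = pd.read_csv("sales.csv")
    df["day"] = pd.to_datetime(df["day"])
    df = df.dropna(subset=["amount"])
    monthly = (
        df.set_index("day")
          .resample("MS")["amount"]
          .sum()
          .reset_index()
    )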

The result will not only be a disincentive to use new technologies, but a disincentive to build products with an efficient architecture in terms of lines of code, and in particular a disincentive to abstraction.

Maybe some products will become a hell of millions of lines of code that no one knows how to evolve and manage.

By @milesrout - about 2 months
Why would AI stifle tech adoption when ~nobody uses it? I think HN is in a bit of a bubble here. People on here seem to often think that everyone is using AI at work, it is really common and widely appreciated, etc.

It really, really isn't. Most people in the software industry do not use it. Its use in other industries and in the professions is even lower. AI coding tools are bad enough at widely used things like Python and JS. They are DOGSHIT at generating C or C++. They are basically terrible at doing anything other than regurgitating things from Medium blogspam tutorials.

The result is not people moving to only using technology that AI is "good" at (relatively, given it is terrible at coding anything at all). It is that the overwhelming majority don't use it at all. The thing is, nobody really talks about this because it isn't interesting _not_ to use something. You can't write many high-engagement blog posts to content-market your company by saying you still just use vim and ctags and documentation to write code, just like you did 10 years ago. That isn't noteworthy and nobody will read it or upvote it. HN is always biased by this towards the new, the noteworthy, changes to practices, etc. Just like browsing HN would lead you to believe people are rewriting their websites in new JS frameworks every 6 months. No, but posts about doing that obviously generate more engagement than 6-monthly "Update: Our website is still written in Ruby on Rails" posts would.

By @jimnotgym - about 2 months
Is this such a bad result? Do we need office CRUD apps to use bleeding edge technologies?
By @chrisco255 - about 2 months
This makes me fear less for web development jobs being lost to AI, to be honest. Look, we can create new frameworks faster than they can train new models. If we all agree to churn as much as possible the AIs will never be able to keep up.
By @mxwsn - about 2 months
This ought to be called the QWERTY effect, for how the QWERTY keyboard layout can't be usurped at this point. It was at the right place at the right time, even though arguably its main design choices are no longer relevant, and there are arguably better layouts like Dvorak.

Python and React may similarly be enshrined for the future, for being at the right place at the right time.

English as a language might be another example.

By @killjoywashere - about 2 months
Pathologists as a specialty have been grousing about this for several years, at least since 2021, when the College of American Pathologists established its AI Committee. As a trivial example: any trained model deployed will necessarily lag behind any new classification of tumors. This makes it harder to push the science and clinical diagnosis of cancer forward.

The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

It's absolutely fascinating watching software developers come to grips with what they have wrought.

By @dataviz1000 - about 2 months
I'm on the fence with this. I've been using Copilot with vscode constantly and it has greatly increased my productivity. Most important it helps me maintain momentum without getting stuck. Ten years ago I would face a problem with no solution, write a detailed question on Stack Exchange, and most likely solve it in a day or two with a lot of tinkering. Today I ask Claude. If it doesn't give me a good answer, I can get the information I need to solve the problem.

I've been thinking a lot about T.S. Eliot lately. He wrote an essay, "Tradition and the Individual Talent," which I think is pertinent to this issue. [0] (I should reread it.)

[0] https://www.poetryfoundation.org/articles/69400/tradition-an...

By @CharlieDigital - about 2 months
As the saying goes:

    while (React.isPopular) {
      React.isPopular = true
    }
It's actually quite sad, because there are objectively better options for both performance and memory, including Preact, Svelte, Vue, and of course vanilla JS.
By @anarticle - about 2 months
Sadly, as a person who used to write AVX in C for real-time imaging systems: don't care, shipped.

I love dingling around with Cursor/Claude/qwen to get a 300 line prototype going in about 3-5 minutes with a framework I don't know. It's an amazing time to be small, I would hate to be working at a megacorp where you have to wait two months to get approval to use only GitHub copilot (terrible), in a time of so many interesting tools and more powerful models every month.

For new people, you still have to put the work in and learn if you want to transcend. That's always been true in this industry, and I say that as a 20-year vet: C, Perl, Java, Rails, Python, R, all the bash bits. Every part matters, just keep at it.

I feel like a lot of this is the JS frontend community running headlong into its first sea change in the industry.

By @d_watt - about 2 months
It's always been a thing with modes of encapsulating knowledge. The printing press caused the freezing of language, sometimes in a weird place*

Where great documentation was make or break for an open source project over the last 10 years, I think creating new projects with AI in mind will be required in the future. Maybe that means creating a large number of examples, maybe it means providing fine-tunes, maybe it means publishing an MCP server.

Maybe sad because it's another barrier to overcome, but the fact that AI coding is so powerful so quickly probably means it's worth the tradeoff, at least for now.

*https://www.dictionary.com/e/printing-press-frozen-spelling/

By @ilrwbwrkhv - about 2 months
> However, a leaked system prompt for Claude’s artifacts feature shows that both React and Tailwind are specifically mentioned.

Damn.

By @owenversteeg - about 2 months
I think as new data gets vacuumed up faster, this will be less of an issue. About a year ago here on HN I complained about how LLMs were useless for Svelte as they did not have it in their training data, and that they should update on a regular basis with fresh data. At the time my comment was considered ridiculous. One year later, that’s where we are, of course; the average cutoff of “LLM usefulness” with a new subject has dropped from multiple years to months and I see no reason that the trend will not continue.
By @hinkley - about 2 months
I don’t like that this conclusion seems to be that if humans adopt every new technology before AI can train on it that their jobs will be more secure. That is its own kind of hell.
By @AlienRobot - about 2 months
>Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework’s release. This forces developers to rely solely on potentially limited official documentation and early adopter experiences, which, for better or worse, tends to be an ‘old’ way of doing things and incentivises them to use something else.

I can't help but feel that a major problem these days is the lack of forums on the Internet, especially for programming. Forums foster and welcome new members, unlike StackOverflow. They're searchable, unlike Discord. Topics develop as people reply, unlike Reddit. You're talking to real people, unlike ChatGPT. You can post questions in them, unlike GitHub Issues.

When I had an issue with a C++ library, I could often find a forum thread made by someone with a similar problem. Perhaps because there are so many Javascript libraries, creating a separate forum for each one of them didn't make sense, and this is the end result.

I also feel that for documentation, LLMs are just not the answer. It's obvious that we need better tools. Or rather, that we need tools. I feel like before LLMs there simply weren't any universal tools for searching documentation and snippets other than Googling them, but Googling them never felt like the best method, so we jumped from one subpar method to another.

No matter what tool we come up with, it will never have the flexibility and power of just asking another human about it.

By @hinkley - about 2 months
I’m working on a side project that actually probably could use AI later on and I’m doing everything I can not to “put a bird on it” which is the phase we are at with AI.

I might be willing to use a SAT solver or linear algebra on it if I ever get to that point but there’s a lot else to do first. The problem space involves humans, so optimizing that can very quickly turn into “works in theory but not in practice”. It’d be the sort of thing where you use it but don’t brag about it.

By @jayd16 - about 2 months
It's pretty interesting and mildly shocking that everyone is just making the same 'who needs a new JS library' joke.

What about closed source tooling? How do you expect an AI to ever help you with something it doesn't have a license to know about? Not everything in the world can be anonymously scraped into the yearly revision.

If AI is going to stay we'll have to solve the problem of knowledge segmentation. If we solve that, keeping it up to date shouldn't be too bad.

By @pphysch - about 2 months
I don't think this is unique to AI. There are categories of knowledge that are infested with bad practices (webdev, enterprise software), and even a direct web search will lead you to those results. AI definitely regurgitates many of these bad practices, I've seen it, but it's not obvious to everyone.

I think it's unrealistic to expect a general-purpose LLM to be a practical expert in a new field where there are potentially zero human practical experts.

By @mtkd - about 2 months
Sonnet + Tailwind is something of a force multiplier though -- backend engineers now have a fast/reliable way of making frontend changes that are understandable, without relying on someone else -- you can even give 4o a whiteboard drawing of a layout and get the Tailwind back in seconds.
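Something like the sketch below, using the openai Python SDK (the file name and prompt are illustrative, not a recommended workflow):

    # Sketch of the whiteboard-to-Tailwind flow mentioned above, using the
    # openai Python SDK with an image sent as a base64 data URL.
    # File name and prompt are illustrative only.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("whiteboard.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this layout sketch into semantic HTML with Tailwind classes."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)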

On the wider points, I do think it is reducing the time coders spend thinking about the strategic picture, as they're too busy advancing smaller tactical areas which AI is great at assisting with -- and I agree there is a recency issue looming: once these models have heavy weightings baked in, how does new knowledge get to the front quickly -- and where is that new knowledge now that people don't use Stack Overflow?

Maybe Grok becomes important purely because it has access to developers and researchers talking in realtime even if they are not posting code there

I worry that the speed at which this is happening results in younger developers not spending weeks or months thinking about something -- so they get some kind of code ADHD and never develop the skills to take on the big-picture stuff later, which AI could be quite a way off from taking on.

By @nektro - about 2 months
developers using ai continue to find new and novel ways to make themselves worse
By @jgalt212 - about 2 months
Along similar lines, I found Google autocomplete constricted my search space. I would only search the terms that autocompleted.
By @benrutter - about 2 months
I think anecdotally this is true; I've definitely seen worse but older technologies chosen on the basis of LLMs knowing more about them.

That said, I also think it's a bad choice, and here's some good news on that front: you can make good choices which will put you and your project/company ahead of many projects/companies making bad choices!

I don't think the issue is that specific to LLMs- people have been choosing React and similar technologies "because it's easy to find developers" for ages.

It's definitely a shame to see people make poor design decisions for new reasons, but I think poor design decisions for dumb reasons are gonna outlive LLMs by some way.

By @photochemsyn - about 2 months
The central issue is the high cost of training the models, it seems:

> "Once it has finally released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap. A period between the present and AI’s training cutoff... The cutoff means that models are strictly limited in knowledge up to a certain point. For instance, Anthropic’s latest models have a cutoff of April 2024, and OpenAI’s latest models have cutoffs of late 2023."

Hasn't DeepSeek's novel training methodology changed all that? If the energy and financial cost for training a model really has drastically dropped, then frequent retraining including new data should become the norm.

By @jleask - about 2 months
The underlying tech choice only matters at the moment because as software developers we are used to that choice being important. We see it as important because we currently are the ones that have to use it.

As more and more software is generated and the prompt becomes how we define software rather than code i.e. we shift up an abstraction level, how it is implemented will become less and less interesting to people. In the same way that product owners now do not care about technology, they just want a working solution that meets their requirements. Similarly I don't care how the assembly language produced by a compiler looks most of the time.

By @avbanks - about 2 months
LLM based AI tools are the new No/Low Code.
By @datadrivenangel - about 2 months
This is the same problem as google/search engines: A new technology has less web presence, and thus ranks lower in the mechanisms for information distribution and retrieval until people put in the work to market it.
By @Eridrus - about 2 months
This will be solved eventually on the AI model side. It isn't some law of nature that it takes a million tokens for an AI to learn something; just the fact that we can prompt these models should convince you of that.
By @tomduncalf - about 2 months
I was talking about this the other day - to some extent it feels like React (and Tailwind) has won, because LLMs understand it so deeply due to the amount of content out there. Even if they do train on other technologies that come after, there maybe won’t be the volume of data for it to gain such a deep understanding.

Also it doesn’t hurt that React has quite a stable/backwards compatible API, so outdated snippets probably still work… and in Tailwind’s case, I suspect the direct colocation of styles with the markup makes it a bit easier for AI to reason about.

By @NiloCK - about 2 months
I, too, wrote a shittier version of this a little while back: https://www.paritybits.me/stack-ossification/

Another observation since then: good documentation for newer tech stacks will not save the LLM's capabilities with that tech. I think the reason, in short, is that there's no shortcut for experience. Docs are book learning for tech stacks - millions (billions) of lines of source code among the training data are something else entirely.

By @mncharity - about 2 months
In contrast, I suggest AI could accelerate new tech adoption.

> if people are reluctant to adopt a new technology because of a lack of AI support, there will be fewer people [emphasis added] likely to produce material regarding said technology, which leads to an overall inverse feedback effect. Lack of AI support prevents a technology from gaining the required critical adoption mass, which in turn prevents a technology from entering use and having material made for it,

At present. But what if this is a transient? It depends on the new technology's dev team being unable to generate synthetic material. What happens when they can create for themselves a fine tune that translates between versions of their tech, and between "the old thing everyone else is using" and their new tech? One that encapsulates their "idiomatic best practice" of the moment? "Please generate our rev n+1 doc set Hal"? "Take the new Joe's ten thousand FAQ questions about topic X list and generate answers"? "Update our entries in [1]"? "Translate the Introduction to Data Analysis using Python open-source textbook to our tech"?

The quote illustrates a long-standing problem AI can help with - just reread it swapping "AI support" to "documentation". Once upon a time, releasing a new language was an ftp-able tar file with a non-portable compiler and a crappy text-or-PS file and a LISTSERV mailinglist. Now people want web sites, and spiffy docs, and Stack Overflow FAQs, and a community repo with lots and lots of batteries, and discuss, and a language server, and yes, now LLM support. But the effort delta between spiffy docs and big repo vs LLM support? Between SO and LLM latency? That depends on how much the dev team's own LLM can help with writing it all. If you want dystopian, think lots of weekend "I made my own X!" efforts easily training transliteration from an established X, and running a create-all-the-community-infrastructure-for-your-new-X hook. Which auto posts a Show HN.

AI could at long last get us out of the glacial pace of stagnant progress which has characterized our field for decades. Love the ongoing learning of JS churn? Just wait for HaskellNext! ;P

[1] https://learnxinyminutes.com/ https://rigaux.org/language-study/syntax-across-languages.ht... https://rosettacode.org/wiki/Category:Programming_Languages ...

By @delichon - about 2 months
Working in Zed I'm full of joy when I see how well Claude can help me code. But when I ask Claude about how to use Zed it's worse than useless, because its training data is old compared to Zed, and it freely hallucinates answers. So for that I switch over to Perplexity calling OpenAI and get far better answers. I don't know if it's more recent training or RAG, but OpenAI knows about recent Zed GitHub issues where Claude doesn't.

As long as the AI is pulling in the most recent changes it wouldn't seem to be stifling.

By @trescenzi - about 2 months
Generative AI is fundamentally a tool that enables acceleration. Everything mentioned in this was already true without Gen AI. Docs for new versions aren't as easy to find until they aren't as new. This is even true for things in the zeitgeist. Anyone around for the Python 2 to 3 or React class-to-hooks transitions knows how annoying that can be.

Yes new programmers will land on Python and React for most things. But they already do. And Gen AI will do what it does best and accelerate. It remains to be seen what’ll come of that trend acceleration.

By @nbuujocjut - about 2 months
Related: https://www.mjlivesey.co.uk/2025/02/01/llm-prog-lang.html

It doesn't matter if a minority of passionate techies are still up for new tech; if the average developer, just wanting to get the job done and relying on LLMs, finds it harder, it will be a significant barrier.

By @montjoy - about 2 months
The lack of new training data also makes it bad at projects that are still maturing because it will suggest outdated code - or worse it will mix/match old and new syntax and generate something completely broken.

I worry that the lack of new examples for it to train on will self-reinforce running old syntax that has bad patterns.

If the "AI" could actually store its mistakes and corrections from interactive sessions long-term I think it would greatly alleviate this problem, but that opens up another whole set of problems.

By @carlosdp - about 2 months
I don't think this is a bad thing. Pretty much all of the author's examples of "new and potentially superior technologies" are really just different flavors of developer UX for doing the same things you could do with the "old" libraries/technologies.

In a world where AI is writing the code, who cares what libraries it is using? I don't really have to touch the code that much, I just need it to work. That's the future we're headed for, at lightning speed.

By @bilater - about 2 months
This is precisely why I have said that every new framework/library should have an endpoint in markdown or text (or whatever format works best for LLMs) with all the docs and examples on one single page, so you can easily copy it over to a model's context. You want to make it as easy as possible for LLMs to be aware of how your software works. The fancy nested navigation/guide/walkthrough thing is cool for users but not optimized for this flow.
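As a minimal sketch of what that could look like at build time (directory layout and output path are placeholders, not any particular project's convention):

    # Sketch: bundle a project's docs into a single flat llms.txt at build
    # time so the whole thing can be pasted into a model's context.
    # Directory layout and output name are placeholders.
    from pathlib import Path

    DOCS_DIR = Path("docs")
    OUT_FILE = Path("public/llms.txt")

    parts = []
    for md in sorted(DOCS_DIR.rglob("*.md")):
        parts.append(f"\n\n===== {md.relative_to(DOCS_DIR)} =====\n\n")
        parts.append(md.read_text(encoding="utf-8"))

    OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
    OUT_FILE.write_text("".join(parts), encoding="utf-8")
    print(f"Wrote {OUT_FILE} ({OUT_FILE.stat().st_size} bytes)")

Serving the resulting file at a stable URL, llms.txt-style, then makes it trivial to drop the whole thing into a model's context.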
By @pmuk - about 2 months
I have noticed this. I think it also applies to the popularity of the projects in general and the number of training examples it has seen.

I was testing Github copilot's new "Agent" feature last weekend and rapidly built a working app with Vue.js + Vite + InstantSearch + Typesense + Tailwind CSS + DaisyUI

Today I tried to build another app with Rust and Dioxus and it could barely get the dev environment to load, kept getting stuck on circular errors.

By @lherron - about 2 months
I don't know how you solve the "training data and tooling prompts bias LLM responses towards old frameworks" part of this, but once a new (post-cutoff) framework has been surfaced, LLMs seem quite capable of adapting using in-context learning.

New framework developers need to make sure their documentation is adequate for a model to use it when the docs are injected into the context.

By @zkmon - about 2 months
People used to live in villages and places that were not connected by roads. Now that we have roads, any place that is not connected by a road is seen as a rough place. The difficulty is created by the use of roads and vehicles; it was not perceived or felt back in those days. So technology and assistance create new perceived problems.
By @richardw - about 2 months
I tried a new agent library with a model a few weeks ago. Just pasted the relevant api docs in and it worked fine.

However, while I’m proud of the outcomes, I’m not proud of the code. I’m not releasing anything open source until I feel it’s mine, which is another step. I’d be a bit embarrassed bringing another dev on.

“I’m Richard and I’m using AI to code” Support Group: “Hi Richard”

By @orbital-decay - about 2 months
So... it slows down adoption by providing easier alternatives for beginners? I guess you could look at it that way too.

Eventually it will go either of the two ways, though:

- models will have enough generalization ability to be trained on new stuff that has passed the basic usefulness test in the hands of enthusiasts and shows promise

- models will become smart enough to be useful even for obscure things

By @booleandilemma - about 2 months
Seems like a short-term problem. We're going to get to the point (maybe we're already there?) where we'll be able to point an AI at a codebase and say "refactor that codebase to use the latest language features" and it'll be done instantly. Sure, there might be a lag of a few months or a year, but who cares?
By @kristianp - about 2 months
> Claude’s artifacts feature

The article mentions that Claude's artifacts feature is opinionated about using React and will even refuse to code for Svelte Runes. It's hard to get it to use plain JavaScript because React is in the system prompt for artifacts. Poor prompt engineering in Claude.

By @slevis - about 2 months
Looks like I might be the minority, but I disagree with this prediction. Better models will also be better at abstracting and we have seen several examples (e.g. the paper LIMO: Less is More for Reasoning) that with a small amount of training data, models can outperform larger models.
By @JimboOmega - about 2 months
Has there been any progress or effort on solving the underlying problem?

I'm not entirely sure why AI knowledge must be close to a year old, and clearly this is a problem developers are aware of.

Is there a technical reason they can't be, for instance, a month behind rather than close to a year?

By @j45 - about 2 months
If people are skipping one shelf of tech and jumping to the next shelf up, with only AI trying to cover everything, and are let down, maybe there is an opportunity to point out that there may be more realistic options in the interim that span both.
By @__MatrixMan__ - about 2 months
The Arrows of Time by Greg Egan (Orthogonal, Book 3) deals with something analogous to this: Our characters must break themselves out of a cycle which is impeding innovation. If you like your scifi hard, the Orthogonal series is a lot of fun.
By @evanjrowley - about 2 months
Neovim core contributor TJ DeVries expressed similar concerns in a video earlier this year: https://youtu.be/pmtuMJDjh5A?si=PfpIDcnjuLI1BB0L
By @OutOfHere - about 2 months
Always get the response with and without a web search. The web search may yield a newer solution.

Also, each package should ideally provide an LLM ingestible document. Upload this for the LLM, and have it answer questions specific to the new package.

By @conradfr - about 2 months
I was thinking the other day how coding assistants would hinder new languages adoption.
By @amelius - about 2 months
This is like saying in the 90s that Google Search would stifle tech adoption ...

I don't buy it. AI can teach me in 5 minutes how to write a kernel module, even if I've never seen one. AI brings more tech to our fingertips, not less.

By @memhole - about 2 months
I've wondered this myself. There was a post about Gumroad a few months ago where the CEO explained the decision to migrate to TypeScript and React. The decision was in part because of how well AI generated those, IIRC.
By @janalsncm - about 2 months
I’ve been out of web dev for a while, but maybe the problem is there’s a new framework every 6 months and instead of delivering value to the end user, developers are rewriting their app in whatever the new framework is.
By @at_ - about 2 months
Anecdotally, working on an old Vue 2 app I found Claude would almost always return "refactors" as React + Tailwind the first time, and need nudging back into using Vue 2.
By @ofirg - about 2 months
While it is true that there is a gap between what most LLMs "know" and the current time, that gap is getting smaller, not larger. It is also possible to teach a model past the knowledge cutoff with tools, and an LLM can be encouraged to be aware of the gap and reach out for the latest information when it might have changed (pi is still pi, but the country with the most people might have changed).
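A rough sketch of that tool loop with the openai Python SDK; search_web here is a hypothetical placeholder rather than a real API, and the schema just follows the standard function-calling format.

    # Rough sketch of letting a model reach past its cutoff via a tool call.
    # `search_web` is a hypothetical placeholder, not a real API.
    import json
    from openai import OpenAI

    def search_web(query: str) -> str:
        # Placeholder: wire this up to whatever search backend you actually use.
        return "Top result: ..."

    client = OpenAI()
    tools = [{
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web for up-to-date information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [{"role": "user",
                 "content": "Which country has the largest population right now?"}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    call = first.choices[0].message.tool_calls[0]  # assumes the model chose to call the tool
    result = search_web(**json.loads(call.function.arguments))

    messages += [first.choices[0].message,
                 {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)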
By @razodactyl - about 2 months
Not entirely sure it's a hard fact but this is definitely an example of bias in an AI system.

More reason to decouple and think for ourselves.

By @1970-01-01 - about 2 months
Are we really at the point where we are concerned how abstraction levels are not being abandoned as quickly as they were yesterday?
By @ZaoLahma - about 2 months
Seems plausible, especially in combination with the AI-coma that occurs when you tab-complete your way through problems at full speed.
By @jadbox - about 2 months
Upgrading to Tailwind v4 was horribly frustrating as every AI insisted on using v3 even though it technically knew the v4 api.
By @mring33621 - about 2 months
I don't know how this is surprising.

LLM-provided solutions will reinforce existing network effects.

Things that are popular will have more related content...

By @ripped_britches - about 2 months
This should not be relevant with Cursor being able to include docs in every query. For those who don't use this, I feel for ya.
By @drbojingle - about 2 months
I think LLMs will be great for langs like Elm, personally. Especially with agents that can operate in an eval loop.
By @IshKebab - about 2 months
Huh how long until advertisers pay to get their product preferred by AI? If it isn't already happening...
By @jgalt212 - about 2 months
If you can build an app that an AI cannot, then you have some sort of n-month head start on the competition.
By @spaceguillotine - about 2 months
If the only new feature is AI, it's not worth the upgrade. Outside the lil tech bubble, people hate it.
By @cushychicken - about 2 months
…Isn’t this the website that constantly encourages people to “choose boring technology” for their web tech startups?

Aren’t a reasonable portion of the readers here people who bemoan the constant learning curve hellscape of frontend development?

And now we’re going to be upset that tools that help us work faster, which are trained on data freely available on the internet and thus affected by the volume of training material, decide to (gasp) choose solutions with a greater body of examples?

Just can’t satisfy all the people all the time, I guess! SMH.

By @lcfcjs6 - about 2 months
There is an enormous fear of AI from mainstream media, but the thing that excites me the most about this is health care. AI will find the cure to Alzheimer's and countless other diseases, there's no doubt about it. This simple fact is enough to make it acceptable.
By @g9yuayon - about 2 months
> Once it has finally released, it usually remains stagnant in terms of having its knowledge updated....meaning that models will not be able to service users requesting assistance with new technologies, thus disincentivising their use.

I find such an argument weak. We can say the same thing about a book: "Once The Art of Computer Programming is finally published, it usually remains stagnant in terms of having its knowledge updated, thus disincentivizing people from learning new algorithms".

By @casey2 - about 2 months
Truly and honestly, 99% of developers haven't even heard of ChatGPT or Copilot, let alone the general public. It's a self-imposed problem for the orgs that choose to use such tools. More to the point, recency bias is so much stronger that I'd rather have a system that points people to the current correct solution than to a slightly better solution that is somehow harder to understand despite its claimed simplicity by fanatics.
By @thecleaner - about 2 months
Shove the docs as context. Gemini has 2m context length.
By @ausbah - about 2 months
I do wonder if this could be mitigated by sufficiently popular newer libraries submitting training data showing their library or whatever in action.
By @zombiwoof - about 2 months
Yup, python pretty much wins due to training data
By @ramoz - about 2 months
We could call this the hamster-wheel theory.
By @Rehanzo - about 2 months
Does anyone know what font is used here?
By @jgalt212 - about 2 months
Herein lies the key for IP protection. Never use cloud hosted coding tools as the world will soon be able to copy your homework at zero cost.
By @tajd - about 2 months
Yeah, maybe. But the thing I like is that it takes me a much shorter amount of time to create solutions for my users and myself. Then I can worry about “tech adoption” once I've achieved a relevant solution for my users.

If performance is an issue then sure let’s look at options. But I don’t think it’s appropriate to expect that sort of level of insight into an optimised solution from llms - but maybe that’s just because I’ve used them a lot.

They’re just a function of their training data at the end of the day. If you want to use new technology you might have to generate your own training data as it were.

By @skeeter2020 - about 2 months
I don't agree, because the people using these tools for their work were never doing innovative tech in the first place.
By @tiahura - about 2 months
Perhaps reasoning will help?
By @yieldcrv - about 2 months
Eh a cooldown period between the fanfare of a new thing and some battle testing before it gets added to the next AI’s training set is a good thing

the delay is like 8 months for now, that's fine

I think this is also great for some interview candidate assessments, you have new frameworks that AI can't answer questions about yet, and you can quiz a candidate on how well they are able to figure out how to use the new thing

By @highfrequency - about 2 months
I have definitely noticed that ChatGPT is atrocious at writing Polars code (which was written recently and has a changing API) while being good at Pandas. I figure this will mostly resolve when the standard reasoning models incorporate web search through API documentation + trial and error code compilation into their chain of thought.
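A small illustration of the drift involved: a model trained mostly on older material tends to emit the pandas idiom (or Polars' pre-1.0 groupby spelling) instead of the current Polars API. Treat the exact method names as assumptions against whichever versions you run.

    # Illustration of the API drift described above. Older training data pushes
    # models toward the pandas idiom (or Polars' long-deprecated `groupby`)
    # rather than the current Polars spelling.
    import pandas as pd
    import polars as pl

    pdf = pd.DataFrame({"team": ["a", "a", "b"], "score": [1, 2, 5]})
    pandas_result = pdf.groupby("team", as_index=False)["score"].mean()

    pldf = pl.DataFrame({"team": ["a", "a", "b"], "score": [1, 2, 5]})
    polars_result = pldf.group_by("team").agg(pl.col("score").mean())  # was `groupby` pre-1.0
    print(pandas_result, polars_result, sep="\n")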
By @anal_reactor - about 2 months
Not a problem. I'm sure that being able to work well with new information is the next goal most researchers are working towards, so the entire post feels like a boomer complaining "computers are bad because they're big and bulky" thirty years ago, not being able to imagine the smartphone revolution.
By @stevemadere - about 2 months
This is truly terrible.

What happened to a new JS front end library every week?

If this keeps up, we won't get to completely throw away all of our old code and retool every two years (the way we've been operating for the last 20 years).

How will we ever spend 85% of our time spinning up on new js front end libraries?

And don't even get me started on the back end.

If AI had been around in 2010, we'd probably still have some people writing apps in Rails.

OMG what a disaster that would be.

It's a good thing we just completely threw away all of the work that went into all of those gems. If people had continued using them, we wouldn't have had the chance to completely rewrite all of them in node and python from scratch.

By @_as_text - about 2 months
I know what this will be about without reading.

Python 3.12-style type annotations are a good example IMO; no one uses the type statement because of dataset inertia.

By @tolerance - about 2 months
So what.

…if society continues to delegate more of their work to AI then we are going to fall back into the grips that inform us that some people are better at things than other people are and some are worse at things than other people are and this is what lies beneath the bridge of relying or not relying on AI to leverage your capacity to think and act on what you feel.

I think that people who are willing to put in effort for their crafts without AI will be the ones willing to try out new things and seek opportunities for ingenuity in the future. I think that the problem people have with this idea is that it runs counter to notions related to—ahem

diversity, equity and inclusion…

On one hand, and on its little finger, is the legitimate concern that if companies who develop LLMs are not transparent about the technologies they make available to users when generating code, then they'll hide all the scary and dangerous things that they make available to the people who'll think, act and feel corrupt regardless of the tools they wield to impose disadvantages onto others. But I don't think that will make a difference.

The only way out is hard work in a world bent on making the work easy after it makes you weak.