AI Is Stifling Tech Adoption
AI models hinder new technology adoption due to knowledge gaps and biases towards established frameworks like React and Tailwind, influencing developers' choices and creating a feedback loop against innovation.
The integration of AI models into developer workflows is reportedly hindering the adoption of new technologies. This is attributed to the knowledge gap created by training-data cutoffs, which leave AI models unaware of the latest technologies and updates. As a result, developers may rely on outdated information when seeking assistance from AI tools, leading to a preference for established technologies that AI can support. This creates a feedback loop: the lack of AI support for new technologies discourages their use, which in turn limits the training data available for future AI models. There is also a noticeable bias in AI tools towards popular frameworks like React and Tailwind, which can influence developers' choices even when they prefer other technologies. Testing of various AI models revealed a consistent preference for React, suggesting that beginner developers may unwittingly adopt these technologies on AI's recommendation. The article calls for greater transparency from AI companies about the biases present in their models, as these biases significantly shape software development trends.
- AI models are creating a knowledge gap that stifles the adoption of new technologies.
- Developers often rely on AI tools that favor established frameworks, leading to a preference for technologies like React and Tailwind.
- The influence of AI on technology selection can create a feedback loop that discourages the use of newer technologies.
- There is a need for transparency from AI companies about the biases in their models.
- Beginner developers may be particularly susceptible to adopting technologies recommended by AI without critical evaluation.
- Many commenters believe that AI tools reinforce the popularity of established frameworks like React and Tailwind, making it harder for newer technologies to gain traction.
- Some argue that the reliance on AI can lead to stagnation in innovation, as developers may prefer familiar technologies due to better AI support and resources.
- There is a call for continuous training of AI models to keep up with emerging technologies, as current models often lack knowledge of the latest frameworks.
- Several users express concern that the quality of AI-generated code may lead to the perpetuation of outdated practices and discourage the exploration of new solutions.
- Conversely, some believe that AI could accelerate the adoption of new technologies by providing better documentation and support for developers.
Any new tech, or version upgrade, or whatever, takes time for people to become familiar with it. You might as well say "Stack Overflow is stifling new tech adoption" because brand-new stuff doesn't have many Q's and A's yet. But that would be a silly thing to say.
I'm not going to adopt a brand-new database regardless of LLM training-data cutoff, simply because not enough people have had enough experience with it yet.
And LLMs have a commercial incentive to retrain every so often anyway. It's not like we're going to confront a situation where an LLM doesn't know anything about tech that came out 5 or 10 years ago.
Early adopters will be early adopters. And early adopters aren't the kind of people relying on an LLM to tell them what to try out.
> Users might be more inclined to accept the Codex answer under the assumption that the package it suggests is the one with which Codex will be more helpful. As a result, certain players might become more entrenched in the package market and Codex might not be aware of new packages developed after the training data was originally gathered. Further, for already existing packages, the model may make suggestions for deprecated methods. This could increase open-source developers’ incentive to maintain backward compatibility, which could pose challenges given that open-source projects are often under-resourced (Eghbal, 2020; Trinkenreich et al., 2021).
https://arxiv.org/pdf/2107.03374 (Appendix H.4)
LLMs should not have hard-wired preferences through providers' prompt structure.
And while LLMs are stochastic parrots, and are likely to infer React if a lot of the training corpus mentions React, work should be done to actively prevent biases like this. If we can't get this right with JS frameworks, how are we going to solve it for more nuanced structural biases around ethnicity, gender, religion or political perspective?
What I'm most concerned about here is that Anthropic is taking investment from tech firms who vendor dev tooling - it would not take much for them to "prefer" one of those proprietary toolchains. We might not have much of a problem with React today, but what if your choice of LLM started to determine if you could or couldn't get recommendations on AWS vs Azure vs GCP vs bare metal/roll your own? Or if it suggested only commercial tools instead of F/LOSS?
And to take that to its logical conclusion, if that's happening, how do I know that the history assignment a kid is asking for help with isn't sneaking in an extreme viewpoint - and I don't care if it's extreme left or right, just warped by a political philosophy to be disconnected from truth - that the kid just accepts as truth?
Honestly it's been kind of fun, but I do feel like the door is closing on certain categories of new thing. Local maxima are getting stickier, because even a marginal competence is enough to keep you there--since the AI will amplify that competence in well-trained domains by so much.
Emacs lisp is another one. I'd kind of like to build a map of these.
> Ask HN: Will LLMs hurt adoption of new frameworks and technology?
> If I ask some LLM/GPT a react question I get good responses. If I ask it about a framework released after the training data was obtained, it will either not know or hallucinate. Or if it's a lesser known framework the quality will be worse than for a known framework. Same with other things like hardware manuals not being trained on yet etc.
> As more and more devs rely on AI tools in their workflows, will emerging tech have a bigger hurdle than before to be adopted? Will we regress to the mean?
New tech has an inherent disadvantage vs legacy tech, because there's more built-up knowledge. If you choose React, you have better online resources (official docs, tutorials, answers to common pitfalls), more trust (it won't ship bugs or be abandoned), great third-party helper libraries, built-in IDE integration, and a large pool of employees with experience. If you choose some niche frontend framework, you have none of those.
Also, popular frameworks usually have better code, because they have years of bug-fixes from being tested on many production servers, and the API has been tailored from real-world experience.
In fact, I think the impact of AI generating better outputs for React is far less than that of the above. AI still works on novel programming languages and libraries, just at worse quality, whereas IDE integrations, helper libraries, online resources, etc. are useless (unless the novel language/library bridges to the popular one). And many people today still write code with zero AI, but nobody writes code without the internet.
That sounds great to me, actually. A world where e.g. Django and React are considered as obvious choices for backend and frontend as git is for version control sounds like a world where high quality web apps become much cheaper to build.
I noticed this too. Anyone found out how to make Claude work better?
[0] - https://keydiscussions.com/2025/02/05/when-google-ai-overvie...
It's so easy to bootstrap that even though the standard is a couple of months old, it already has a few hundred (albeit probably low-quality) implementations adapting it to different services.
- txt/markdown for LLMs: https://modelcontextprotocol.io/llms-full.txt
- server implementations: https://github.com/modelcontextprotocol/servers#-community-s...
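To give a sense of how small the bootstrap is, here is a minimal sketch of an MCP server using the official Python SDK's FastMCP helper (the tool itself is a hypothetical stub):

    from mcp.server.fastmcp import FastMCP

    # Name the server; an MCP client (e.g. Claude Desktop) lists its tools.
    mcp = FastMCP("docs-server")

    @mcp.tool()
    def search_docs(query: str) -> str:
        """Search a framework's documentation for a query string."""
        # Hypothetical stub: a real server would hit a docs index here.
        return f"No results for {query!r} (stub)"

    if __name__ == "__main__":
        # Serves over stdio, the transport most MCP clients expect by default.
        mcp.run()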
I use AlpineJS, which is not as well known as React etc., but I just added a bunch of examples and instructions to the new Cursor project rules, and it's now close to perfect.
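As a sketch of the idea (the frontmatter fields and the snippet are illustrative, assuming Cursor's .mdc project-rules format):

    ---
    description: AlpineJS conventions for this project
    globs: ["**/*.html"]
    ---
    Use AlpineJS, not React. No JSX, no build step.
    Keep state in x-data and behavior in directives, for example:

        <div x-data="{ open: false }">
          <button @click="open = !open">Toggle</button>
          <p x-show="open">Shown only while open is true</p>
        </div>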
Gemini models have context windows of up to 2M tokens, meaning you can probably fit your whole codebase and a ton of examples in a single request.
Furthermore, the agentic way Cursor now behaves, automatically building up context before taking action, seems to be another way around this problem.
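As a rough sketch of what "fit your whole codebase" means in practice (naive on purpose: a real setup would respect .gitignore and count tokens with the model's own tokenizer):

    import pathlib

    # Concatenate a repo's source files into one prompt-sized blob.
    parts = []
    for path in sorted(pathlib.Path("src").rglob("*.py")):
        parts.append(f"### {path}\n{path.read_text()}")
    context = "\n\n".join(parts)

    # Rough heuristic: ~4 characters per token.
    print(f"approx. {len(context) // 4} tokens of context")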
The first paragraph is factually incorrect; the cutoff is June 2024 for 4o.
Awww, no more new JavaScript frameworks and waiting only for established technologies to cut through the noise. I don't see that as a bad thing. Technologies need to mature, and maintaining API backward compatibility is another advantage.
It is also annoying that most modern JS things have four versions of the same thing: with TS, with TS + decorators, with plain JS, with JSX, etc., so code generation picks one that isn't compatible with the "mode" you use.
Note that I said "obvious", not "easy", because it certainly isn't. In fact it's basically an unsolved problem, and probably a fiendishly difficult one. It may involve more consensus-based approaches like mixture of experts where you cycle out older experts, things like that -- there are dozens of large problems to tackle with it. But if you want to solve this, that's where you should be looking.
The result will be not only a disincentive to use new technologies, but a disincentive to build products with an efficient architecture in terms of lines of code, and in particular a disincentive to abstraction.
Some products may become a hell of millions of lines of code that no one knows how to evolve and manage.
It really, really isn't. Most people in the software industry do not use it. Its use in other industries and in the professions is even lower. AI coding tools are bad enough at widely used things like Python and JS. They are DOGSHIT at generating C or C++. They are basically terrible at doing anything other than regurgitating things from Medium blogspam tutorials.
The result is not people moving to only using technology that AI is "good" at (relatively, given it is terrible at coding anything at all). It is that the overwhelming majority don't use it at all. The thing is, nobody really talks about this because it isn't interesting _not_ to use something. You can't write many high-engagement blog posts to content-market your company by saying you still just use vim and ctags and documentation to write code, just like you did 10 years ago. That isn't noteworthy and nobody will read it or upvote it. HN is always biased by this towards the new, the noteworthy, changes to practices, etc. Just like browsing HN would lead you to believe people are rewriting their websites in new JS frameworks every 6 months. No, but posts about doing that obviously generate more engagement than 6-monthly "Update: Our website is still written in Ruby on Rails" posts would.
Python and React may similarly be enshrined for the future, for being at the right place at the right time.
English as a language might be another example.
The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.
It's absolutely fascinating watching software developers come to grips with what they have wrought.
I've been thinking a lot about T.S. Eliot lately. He wrote an essay, "Tradition and the Individual Talent," which I think is pertinent to this issue. [0] (I should reread it.)
[0] https://www.poetryfoundation.org/articles/69400/tradition-an...
while (React.isPopular) {
  React.isPopular = true
}
It's actually quite sad, because there are objectively better options for both performance and memory, including Preact, Svelte, Vue, and of course vanilla JS. I love dingling around with Cursor/Claude/Qwen to get a 300-line prototype going in about 3-5 minutes with a framework I don't know. It's an amazing time to be small; I would hate to be working at a megacorp where you have to wait two months to get approval to use only GitHub Copilot (terrible), in a time of so many interesting tools and more powerful models every month.
For new people, you still have to put the work in and learn if you want to transcend. That's always been there in this industry and I say that as a 20y vet, C, perl, java, rails, python, R, all the bash bits, every part matters just keep at it.
I feel like a lot of this is the JS frontend community running headlong into its first sea change in the industry.
Where great documentation was make-or-break for an open-source project over the last 10 years, I think creating new projects with AI in mind will be required in the future. Maybe that means creating a large number of examples, maybe it means providing fine-tunes, maybe it means publishing an MCP server.
Maybe sad because it's another barrier to overcome, but the fact that AI coding is so powerful so quickly probably means it's worth the tradeoff, at least for now.
*https://www.dictionary.com/e/printing-press-frozen-spelling/
Damn.
I can't help but feel that a major problem these days is the lack of forums on the Internet, especially for programming. Forums foster and welcome new members, unlike Stack Overflow. They're searchable, unlike Discord. Topics develop as people reply, unlike Reddit. You're talking to real people, unlike ChatGPT. You can post questions in them, unlike GitHub Issues.
When I had an issue with a C++ library, I could often find a forum thread made by someone with a similar problem. Perhaps because there are so many Javascript libraries, creating a separate forum for each one of them didn't make sense, and this is the end result.
I also feel that for documentation, LLMs are just not the answer. It's obvious that we need better tools. Or rather, that we need tools. I feel like before LLMs there simply weren't any universal tools for searching documentation and snippets other than Googling them, but Googling them never felt like the best method, so we jumped from one subpar method to another.
No matter what tool we come up with, it will never have the flexibility and power of just asking another human about it.
I might be willing to use a SAT solver or linear algebra on it if I ever get to that point but there’s a lot else to do first. The problem space involves humans, so optimizing that can very quickly turn into “works in theory but not in practice”. It’d be the sort of thing where you use it but don’t brag about it.
What about closed source tooling? How do you expect an AI to ever help you with something it doesn't have a license to know about? Not everything in the world can be anonymously scraped into the yearly revision.
If AI is going to stay we'll have to solve the problem of knowledge segmentation. If we solve that, keeping it up to date shouldn't be too bad.
I think it's unrealistic to expect a general-purpose LLM to be a practical expert in a new field where there are potentially zero human practical experts.
On the wider points, I do think it is reducing the time coders spend thinking about the strategic situation, as they're too busy advancing smaller tactical areas where AI is great at assisting. And I agree there is a recency issue looming: once these models have heavy weightings baked in, how does new knowledge get to the front quickly, and where is that new knowledge now that people don't use Stack Overflow?
Maybe Grok becomes important purely because it has access to developers and researchers talking in real time, even if they are not posting code there.
I worry that the speed at which this is happening will result in younger developers not spending weeks or months thinking about something, so they get some kind of code ADHD and never develop the skills to take on the big-picture stuff later, which could be quite a way off from anything AI can take on.
That said, I also think it's a bad choice, and here's some good news on that front: you can make good choices which will put you and your project/company ahead of the many projects/companies making bad choices!
I don't think the issue is that specific to LLMs- people have been choosing React and similar technologies "because it's easy to find developers" for ages.
It's definitely a shame to see people make poor design decisions for new reasons, but I think poor design decisions for dumb reasons are gonna outlive LLMs by some way.
> "Once it has finally released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap. A period between the present and AI’s training cutoff... The cutoff means that models are strictly limited in knowledge up to a certain point. For instance, Anthropic’s latest models have a cutoff of April 2024, and OpenAI’s latest models have cutoffs of late 2023."
Hasn't DeepSeek's novel training methodology changed all that? If the energy and financial cost for training a model really has drastically dropped, then frequent retraining including new data should become the norm.
As more and more software is generated and the prompt becomes how we define software rather than code (i.e., we shift up an abstraction level), how it is implemented will become less and less interesting to people. In the same way that product owners now do not care about technology, they just want a working solution that meets their requirements. Similarly, I don't care how the assembly language produced by a compiler looks most of the time.
Also it doesn’t hurt that React has quite a stable/backwards compatible API, so outdated snippets probably still work… and in Tailwind’s case, I suspect the direct colocation of styles with the markup makes it a bit easier for AI to reason about.
Another observation since then: good documentation for newer tech stacks will not save the LLM's capabilities with that tech. I think the reason, in short, is that there's no shortcut for experience. Docs are book learning for tech stacks - millions (billions) of lines of source code among the training data are something else entirely.
> if people are reluctant to adopt a new technology because of a lack of AI support, there will be fewer people [emphasis added] likely to produce material regarding said technology, which leads to an overall inverse feedback effect. Lack of AI support prevents a technology from gaining the required critical adoption mass, which in turn prevents a technology from entering use and having material made for it,
At present. But what if this is a transient? It depends on the new technology's dev team being unable to generate synthetic material. What happens when they can create for themselves a fine tune that translates between versions of their tech, and between "the old thing everyone else is using" and their new tech? One that encapsulates their "idiomatic best practice" of the moment? "Please generate our rev n+1 doc set Hal"? "Take the new Joe's ten thousand FAQ questions about topic X list and generate answers"? "Update our entries in [1]"? "Translate the Introduction to Data Analysis using Python open-source textbook to our tech"?
The quote illustrates a long-standing problem AI can help with - just reread it swapping "AI support" to "documentation". Once upon a time, releasing a new language was an ftp-able tar file with a non-portable compiler and a crappy text-or-PS file and a LISTSERV mailinglist. Now people want web sites, and spiffy docs, and Stack Overflow FAQs, and a community repo with lots and lots of batteries, and discuss, and a language server, and yes, now LLM support. But the effort delta between spiffy docs and big repo vs LLM support? Between SO and LLM latency? That depends on how much the dev team's own LLM can help with writing it all. If you want dystopian, think lots of weekend "I made my own X!" efforts easily training transliteration from an established X, and running a create-all-the-community-infrastructure-for-your-new-X hook. Which auto posts a Show HN.
AI could at long last get us out of the glacial pace of stagnant progress which has characterized our field for decades. Love the ongoing learning of JS churn? Just wait for HaskellNext! ;P
[1] https://learnxinyminutes.com/ https://rigaux.org/language-study/syntax-across-languages.ht... https://rosettacode.org/wiki/Category:Programming_Languages ...
As long as the AI is pulling in the most recent changes, it wouldn't seem to be stifling.
Yes new programmers will land on Python and React for most things. But they already do. And Gen AI will do what it does best and accelerate. It remains to be seen what’ll come of that trend acceleration.
It doesn't matter if a minority of passionate techies will still be up for new tech; if the average developer, just wanting to get the job done and relying on LLMs, finds it harder, it will be a significant barrier.
I worry that the lack of new examples for it to train on will self-reinforce running old syntax that has bad patterns.
If the "AI" could actually store its mistakes and corrections from interactive sessions long-term I think it would greatly alleviate this problem, but that opens up another whole set of problems.
In a world where AI is writing the code, who cares what libraries it is using? I don't really have to touch the code that much, I just need it to work. That's the future we're headed for, at lightning speed.
I was testing GitHub Copilot's new "Agent" feature last weekend and rapidly built a working app with Vue.js + Vite + InstantSearch + Typesense + Tailwind CSS + DaisyUI.
Today I tried to build another app with Rust and Dioxus and it could barely get the dev environment to load, kept getting stuck on circular errors.
New framework developers need to make sure their documentation is adequate for a model to use it when the docs are injected into the context.
However, while I’m proud of the outcomes, I’m not proud of the code. I’m not releasing anything open source until I feel it’s mine, which is another step. I’d be a bit embarrassed bringing another dev on.
“I’m Richard and I’m using AI to code” Support Group: “Hi Richard”
Eventually it will go one of two ways, though:
- models will have enough generalization ability to be trained on new stuff that has passed the basic usefulness test in the hands of enthusiasts and shows promise
- models will become smart enough to be useful even for obscure things
The article mentions that Claude's Artifacts feature is opinionated about using React and will even refuse to code Svelte runes. It's hard to get it to use plain JavaScript, because React is in the system prompt for Artifacts. Poor prompt engineering in Claude.
I'm not entirely sure why AI knowledge must be close to a year old, and clearly this is a problem developers are aware of.
Is there a technical reason they can't be, for instance, a month behind rather than close to a year?
Also, each package should ideally provide an LLM-ingestible document. Upload this to the LLM, and have it answer questions specific to the new package.
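The llms.txt convention linked elsewhere in this thread is one emerging shape for that document; a hypothetical entry for a new package might read:

    # acme-orm (hypothetical package)
    > Lightweight ORM for SQLite. Sync-first, no metaclasses.

    ## Quick start
        from acme_orm import Model, field

        class User(Model):
            name = field(str)

    ## Gotchas
    - Model subclasses must be defined at module top level.
    - field() types are enforced at save time, not at assignment.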
I don't buy it. AI can teach me in 5 minutes how to write a kernel module, even if I've never seen one. AI brings more tech to our fingertips, not less.
More reason to decouple and think for ourselves.
LLM-provided solutions will reinforce existing network effects.
Things that are popular will have more related content...
Aren’t a reasonable portion of the readers here people who bemoan the constant learning curve hellscape of frontend development?
And now we’re going to be upset that tools that help us work faster, which are trained on data freely available on the internet and thus affected by the volume of training material, decide to (gasp) choose solutions with a greater body of examples?
Just can’t satisfy all the people all the time, I guess! SMH.
I find this argument weak. We could say the same thing about a book, like "Once The Art of Computer Programming is finally published, it usually remains stagnant in terms of having its knowledge updated, thus disincentivizing people from learning new algorithms."
If performance is an issue then sure let’s look at options. But I don’t think it’s appropriate to expect that sort of level of insight into an optimised solution from llms - but maybe that’s just because I’ve used them a lot.
They’re just a function of their training data at the end of the day. If you want to use new technology you might have to generate your own training data as it were.
The delay is like 8 months for now; that's fine.
I think this is also great for some interview candidate assessments: you have new frameworks that AI can't answer questions about yet, and you can quiz a candidate on how well they're able to figure out how to use the new thing.
What happened to a new JS front end library every week?
If this keeps up, we won't get to completely throw away all of our old code and retool every two years (the way we've been operating for the last 20 years)
How will we ever spend 85% of our time spinning up on new js front end libraries?
And don't even get me started on the back end.
If AI had been around in 2010, we'd probably still have some people writing apps in Rails.
OMG what a disaster that would be.
It's a good thing we just completely threw away all of the work that went into all of those gems. If people had continued using them, we wouldn't have had the chance to completely rewrite all of them in node and python from scratch.
Python 3.12-style type annotations are a good example, imo; no one uses the type statement because of dataset inertia.
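To make that concrete, here is a short sketch contrasting the spelling models keep emitting with the PEP 695 syntax that barely appears in training data:

    from typing import TypeAlias

    # Pre-3.12 spelling: dominant in the training corpus, so LLMs default to it.
    Vector: TypeAlias = list[float]

    # Python 3.12 `type` statement (PEP 695): equivalent, but rarely suggested.
    type Matrix = list[list[float]]

    # PEP 695 also added inline generic syntax, another form models seldom produce:
    def first[T](items: list[T]) -> T:
        return items[0]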
…if society continues to delegate more of its work to AI, then we are going to fall back into the grip of the reality that some people are better at things than others and some are worse, and this is what lies beneath the bridge of relying or not relying on AI to leverage your capacity to think and act on what you feel.
I think that people who are willing to put in effort for their craft without AI will be the ones willing to try out new things and seek opportunities for ingenuity in the future. I think the problem people have with this idea is that it runs counter to notions related to—ahem—
diversity, equity and inclusion…
On one hand, and on its little finger, is the legitimate concern that if companies who develop LLMs are not transparent about the technologies they make available to users when generating code, then they'll hide all the scary and dangerous things they make available to the people who'll think, act and feel corrupt regardless of the tools they wield to impose disadvantages onto others. But I don't think that will make a difference.
The only way out is hard work in a world bent on making the work easy after it makes you weak.
Related
The 70% problem: Hard truths about AI-assisted coding
AI-assisted coding increases developer productivity but does not improve software quality significantly. Experienced developers benefit more, while novices risk creating fragile systems without proper oversight and expertise.
The AI backlash couldn't have come at a better time
Developers are frustrated with AI hype, seeking practical applications. Tools like RamaLama simplify deployment, while trends favor smaller, relevant models. Organizations aim to integrate AI into routine operations effectively.
AI-assisted coding will change software engineering: hard truths
AI-assisted coding is widely adopted among developers, enhancing productivity but requiring human expertise. Experienced engineers benefit more than beginners, facing challenges in completing projects and understanding AI-generated code.
AI Coding Assistant Is Gaslighting You – The Hidden Cost of Uncertainty
AI coding assistants are unpredictable, complicating developers' decision-making. Simple prompting may be more effective than autonomous agents. Improvements should focus on clarity and complementing human expertise while acknowledging limitations.
What do I mean by some software devs are "ngmi"?
The article emphasizes the necessity for software developers to adapt to AI tools, as those resisting change may face negative career impacts, while embracing AI can enhance productivity and job security.