March 16th, 2025

AI Is Making Developers Dumb

Large language models can boost productivity for software developers but may reduce critical thinking and foundational skills. A balanced approach is essential to maintain knowledge and problem-solving abilities.

The article discusses the impact of large language models (LLMs) on software developers, arguing that while these tools can enhance productivity, they may also diminish critical thinking and problem-solving skills. The author reflects on their own experience with LLMs, noting a growing dependency that led to a decline in foundational knowledge and coding abilities. This phenomenon, termed "Copilot Lag," describes a state where developers wait for AI prompts instead of independently solving problems. The author emphasizes the importance of understanding programming concepts deeply, rather than relying on AI-generated solutions. They acknowledge that LLMs can serve as valuable research tools if used with a critical mindset, encouraging developers to interrogate AI outputs and take notes to reinforce learning. Ultimately, the author advocates for a balanced approach to using LLMs, highlighting the need for developers to maintain their skills and knowledge.

- LLMs can enhance productivity but may reduce critical thinking in developers.

- "Copilot Lag" describes a reliance on AI that hinders independent problem-solving.

- Developers risk losing foundational knowledge by depending too much on LLMs.

- LLMs can be effective research tools if approached with skepticism and curiosity.

- Taking notes and actively engaging with learning materials is crucial for skill retention.

62 comments
By @popularrecluse - about 2 months
"Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."

I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.

I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then them being what they are, I will take over and finish the job my way. I seem to be producing more product than I ever have.

I've worked with people and am friends with a few of these types; they think their code and methodologies are sacrosanct, and that if the AI moves in there is no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.

By @dfabulich - about 2 months
We've seen this happen over and over again, when a new leaky layer of abstraction is developed that makes it easier to develop working code without understanding the lower layer.

It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.

Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.

Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?

Wouldn't we all be smarter if we managed memory manually?

Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?

Wouldn't we all be smarter if we were wiring our own transistors?

It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.

(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)

By @MarcelOlsz - about 2 months
I've had a similar experience. I built out a feature using an LLM and then found the library it must have been "taking" the code from, so what I ended up with was a much worse, mangled version of what already existed, which I would have found had I taken the time to properly research. I've now fully gone back to just getting it to prototype functions for me in-editor based off comments, and I do the rest. Setting up AI pipelines with rule files and stuff takes all the fun away and feels like extremely daunting work I can't bring myself to do. I would much rather just code than act as a PM for a junior that will mess up constantly.

When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.

By @jll29 - about 2 months
LLMs also take away the motivation from students to properly concentrate and deeply understand a technical problem (including but not limited to coding problems); instead, they copy, paste and move on without understanding. The electronic calculator analogy might be appropriate: it's a tool to reach for only once you have learned how to do the calculations by hand.

In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is that they did not acquire any knowledge along the way, as exit interviews clearly demonstrated.
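(For scale, the kind of baseline such a task reduces to is tiny. The sketch below is purely illustrative, with made-up example reviews and labels; it is not the students' actual solution.)

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Illustrative data only: German practice reviews, 1 = positive, 0 = negative.
    texts = ["Sehr freundliches Team, kurze Wartezeit.",
             "Unfreundlicher Empfang und lange Wartezeit."]
    labels = [1, 0]

    # TF-IDF features feeding a logistic regression classifier.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["Kompetente Ärztin, gerne wieder."]))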

As a friend commented, "these language models should never have been made available to the general public", only to researchers.

By @maratc - about 2 months
A personal anecdote from my previous place:

A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was actually longer than any awk that I have ever written.

When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"

Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
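For comparison, the underlying task fits in a few lines. A rough sketch of one way to do it (Python around git for-each-ref rather than awk; the 90-day cutoff is an assumed threshold, not what they used):

    import subprocess
    from datetime import datetime, timedelta, timezone

    # Print local branches whose last commit is older than the cutoff (90 days, assumed).
    CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)

    refs = subprocess.run(
        ["git", "for-each-ref", "refs/heads",
         "--format=%(refname:short) %(committerdate:unix)"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in refs.splitlines():
        branch, ts = line.rsplit(" ", 1)
        if datetime.fromtimestamp(int(ts), tz=timezone.utc) < CUTOFF:
            print(branch)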

By @jtwaleson - about 2 months
Plato, in the Phaedrus, 370BC: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."
By @zusammen - about 2 months
I may be old-fashioned but I remember a time when silent failure was considered to be one of the worst things a system can do.

LLMs are silent failure machines. They are useful in their place, but when I hear about bosses replacing human labor with “AI” I am fairly confident they are going to get what they deserve: catastrophe.

By @karaterobot - about 2 months
> I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.

I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.

In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.

By @mahoro - about 2 months
> There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next.

I've been experiencing this for 10-15 years. I type something and then wait for the IDE to complete function names, class methods, etc. From this perspective, LLMs won't hurt too much because I'm already dumb enough.

By @wilburTheDog - about 2 months
An LLM is a tool. It's your choice how you use it. I think there are at least two ways to use it that are helpful but don't replace your thinking. I sometimes have a problem I don't know how to solve that's too complex to ask google. I can write a paragraph in ChatGPT and it will "understand" what I'm asking and usually give me useful suggestions. Also I sometimes use it to do tedious and repetitive work I just don't want to do.

I don't generally ask it to write my code for me because that's the fun part of the job.

By @moribvndvs - about 2 months
I am at the point of abandoning coding copilots because I spend most of my time fighting the god damned things. Surely, some of this is on me, not tweaking settings or finding the right workflow to get the most of it. Some of it is problematic UX/implementation in VSCode or Cursor. But the remaining portion is an assortment of quirks that require me to hover over it like an overattentive parent trying to keep a toddler from constantly sticking its fingers in electrical sockets. All that plus the comparatively sluggish and inconsistent responsivity is fucking exhausting and I feel like I get _less_ done in copilot-heavy sessions. Up to a point they will improve over time, but right now it makes programming less enjoyable for me.

On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It's a mostly decent rubber duck.

That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.

I’m fine with having it as another tool in the box, but I rather do the work myself and collaborate with actual people.

By @agumonkey - about 2 months
It's also making the sleazy and lazy ones thrive a bit more, which is quite painful when passionate devs who are also great colleagues don't gain any real leverage from ChatGPT.
By @Cyclone_ - about 2 months
I use LLMs for generating small chunks of code (less than 150 lines), but I am of the opinion that you should always understand what the generated code is doing. I take time to read through it and make sure it makes sense before I actually run it. I've found that for smaller chunks of code it's usually pretty accurate on the first try. Occasionally it can't figure it out at all, even with trying to massage the prompt to be more descriptive.
By @BooneJS - about 2 months
If you use LLMs in lieu of searching Stack Overflow, you're going to go faster and be neither smarter nor dumber. If you're prompting for entire functions, I suspect it'll be a crutch you learn to rely on forever.
By @Guthur - about 2 months
I'm in full agreement with this, and it's part of the reason I'm considering leaving the software engineering field for good.

I've been programming for over 25 years, and the joy I get from it is the artistry of it; I see beauty in systems constructed in the abstract realm. But LLM-based development removes much of that. I haven't used, nor desire to use, LLMs for this, but I don't want to compete with people who do, because I won't win in the short-term nature of corporate performance-based culture. And so I'm now searching for careers that will be more resistant to LLM-based workflows. Unfortunately, in my opinion, this pretty much rules out any knowledge-based economy.

By @palmotea - about 2 months
That's probably the mechanism by which AI will take over many jobs:

1. Skilled people do a good job, AI does a not-so-good job.

2. AI users get dumbed down so they can't do any better. Mediocrity normalized.

3. Replace the AI users with AI.

By @sys64739 - about 2 months
It ruined my friend's startup. Junior dev "wrote" WAY too much code with no ability to support it after the fact. Glitches in production would result in the kid disappearing for weeks at a time because he had no idea how anything actually worked under the hood. Friend was _so_ confident of his codebase before shit hit the fan - the junior dev misrepresented the state of the world, b/c he simply didn't know what he didn't know.
By @jazzcomputer - about 2 months
I'm learning javascript as my first programming language and I'm somewhere around beginner/intermediate. I used Chatgpt for a while, but stopped after a time and just mostly use documentation now. I don't want code solutions, I want code learning and I want certainty behind that learning.

I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.

I've heard people say I'll get left behind if I don't use AI, and that's fine as I'll just use niche applications of code alongside my regular work as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.

By @feverzsj - about 2 months
Tried several times for C++, almost always got nonsense results. Maybe they only work for weakly typed languages.
By @Kiro - about 2 months
I also love building things. LLM-assisted workflows have definitely not taken this away. If anything, it has only amplified my love for coding. I can finally focus on the creative parts only.

That said, the author is probably right that it has made me dumber or at least less prolific at writing boilerplate.

By @moffkalast - about 2 months
> Over time, I started to forget basic foundational elements of the languages I worked with. I started to forget parts of the syntax, how basic statements are used

It's a good thing tbh. Language syntax is ultimately entirely arbitrary and is the most pointless thing to have to keep in mind. Why bother focusing on that when you can use the mental effort on the actual logic instead?

This has been a problem for me for years before LLMs, constantly switching languages and forgetting what exact specifics I need to use because everyone thinks their super special way of writing the same exact thing is best and standards are avoided like the plague. Why do we need two hundred ways of writing a fuckin for loop?

By @gtsop - about 2 months
My guess is that AI will make programming even more miserable for those who entered the field for the wrong reasons. Now is the time to double down on learning the basics, the low level, the under-the-hood stuff.
By @righthand - about 2 months
I told my colleagues that if they're just going to send me LLM code, I cannot review it and will assume they already double-checked the work themselves. This gives them instant approval, and if they want to spend time submitting follow-up PRs because they're not double-checking their code and not understanding it, then they can do that. I honestly did this for two reasons:

1. The problem domain is a marketing site (low risk)

2. I got tired of fixing bad LLM code

I have noticed the people who do this are caught up in the politics at work and not really interested in writing code.

I have no desire to be a code janitor.

By @casey2 - about 2 months
This entire line of reasoning is worker propaganda. Like the boss is some buffoon and the employees constantly have to skirt his nonsensical requirements to create a reasonable product.

It's a cartoon mentality. Real products have more requirements than any human can fathom; correctness is just one of the uncountable tradeoffs you can make. Understanding, or some kind of scientific value, is another.

If anything but a single minded focus on your pet requirement is dumb, then call me dumb idc. Why YOU got into software development is not why anyone else did.

By @TrackerFF - about 2 months
Wonder if we'll have this discussion in 20 years. Or will traditional programmers be some niche "artisanal" group of workers, akin to what bootmakers and bespoke tailors are today.
By @kennysoona - about 2 months
Gen Z kind of already have a reputation for being 'dumb', being supposedly unable to do basic tasks expected of an entry-level office worker, or questioning basic things like why tasks get delegated down the chain. Maybe being bad at coding, especially if they are using AI, is just part of that?

I heard about the term 'vibe coding' recently, which really just means copying and pasting code from an AI without checking it. It's interesting that that's a thing, I wonder how widespread it is.

By @deeviant - about 2 months
> Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them

Conversely: Some people want to insist that writing code 10x slower is the right way to do things, that horses were always better and more dependable than cars, and that nobody would want to step into one of those flying monstrosities. And they may also find that they are no longer in the right field.

By @tracerbulletx - about 2 months
If you want AI to make you less dumb, instead of using it like Stack Overflow, you can go on a road trip and have a deep conversation about a topic or field you want to learn more about. You can have it quiz you, do mock interviews, ask questions, have a chat; it's incredible at that. As long as it's not something where the documentation is less than a year or two old.
By @chasing - about 2 months
AI tools are great. They don’t absolve you from understanding what you’re doing and why.

One of the jobs of a software engineer is to be the point person for some pieces of technology. The responsible person in the chain. If you let AI do all of your job, it’s the same as letting a junior employee do all of your job: Eventually the higher-ups will notice and wonder why they need you.

By @pabs3 - about 2 months
Reminds me of this article about outsourcing:

https://berthub.eu/articles/posts/how-tech-loses-out/

By @falcor84 - about 2 months
> As they’re notorious for making crap up because, well, that’s how LLMs work by design, it means that they’re probably making up nonsense half the time.

I found this to be such a silly statement. I find arguments generated by AI to be significantly more solid than this.

By @Centigonal - about 2 months
I think "AI makes developers dumb" makes as much sense as "becoming a manager makes developers dumb."

I was an engineer before moving to more product and strategy oriented roles, and I work on side projects with assistance from Copilot and Roo Code. I find that the skills that I developed as a manager (like writing clear reqs, reviewing code, helping balance tool selection tradeoffs, researching prior art, intuiting when to dive deep into a component and when to keep it abstract, designing system architectures, identifying long-term-bad ideas that initially seem like good ideas, and pushing toward a unified vision of the future) are sometimes more useful for interacting with AI devtools than my engineering skillset.

I think giving someone an AI coding assistant is pretty bad for having them develop coding skills, but pretty good for having them develop "working with an AI assistant" skills. Ultimately, if the result is that AI-assisted programmers can ship products faster without sacrificing sustainability (i.e. you can't have your codebase collapse under the weight of AI-generated code that nobody understands), then I think there will be space in the future for both AI-power users who can go fast as well as conventional engineers who can go deep.

By @minimaxir - about 2 months
What modern LLMs are good at is reducing boilerplate for workflows that are annoying and tedious, but a) genuinely save time, b) are less likely for an LLM to screw up, and c) are easy to spot check and identify issues in the event the LLM does mess up.

For example, in one of my recent blog posts I wanted to use Python's Pillow to composite five images: one consisting of the left half of the image, the other four in quadrants (https://github.com/minimaxir/mtg-embeddings/blob/main/mtg_re...). I know how to do that in PIL (have to manually specify the coordinates and resize images) but it is annoying and prone to human error and I can never remember what corner is the origin in PIL-land.

Meanwhile I asked Claude 3.5 Sonnet this:

   Write Python code using the Pillow library to compose 5 images into a single image:

   1. The left half consists of one image.
   2. The right half consists of the remaining 4 images, equally sized with one quadrant each
And it got the PIL code mostly correct, except it tried to load the images from a file path which wasn't desired, but it is both an easy fix and my fault since I didn't specify that.
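For reference, a minimal Pillow sketch of that layout could look roughly like the following; it takes already-opened Image objects rather than file paths (matching the fix above), and the square canvas size and names are illustrative, not the actual generated code:

    from PIL import Image

    def compose_five(left_img, quadrant_imgs, size=1024):
        """Left half: one image. Right half: four images, one per quadrant."""
        half = size // 2                      # (0, 0) is the top-left corner in Pillow
        canvas = Image.new("RGB", (size, size))
        canvas.paste(left_img.resize((half, size)), (0, 0))
        quad_w, quad_h = half // 2, size // 2
        offsets = [(half, 0), (half + quad_w, 0), (half, quad_h), (half + quad_w, quad_h)]
        for img, corner in zip(quadrant_imgs, offsets):
            canvas.paste(img.resize((quad_w, quad_h)), corner)
        return canvas

Calling it with five opened images and saving the returned canvas is then a one-liner.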

Point (c) above is also why I despise the "vibe coding" meme because I believe it's intentionally misleading, since identifying code and functional requirement issues is an implicit requisite skill that is intentionally ignored in hype as it goes against the novelty of "an AI actually did all of this without much human intervention."

By @knallfrosch - about 2 months
Just a generic rant. How many people can sew; fell a tree; or skin an animal? Yeah, I thought so.

And no data, or a link to data, either. Just a hand-wavy "I think it happened to me."

By @xpl - about 2 months
People said the same thing about IntelliSense a long time ago.
By @nialv7 - about 2 months
Or maybe AI is enabling dumb people to program?
By @bflesch - about 2 months
Or it makes dumb people become developers ;)
By @atomic128 - about 2 months
Here is a disturbing look at what the absolute knobs at Y Combinator (and elsewhere) are preaching/pushing, with commentary from Primeagen: https://www.youtube.com/watch?v=riyh_CIshTs

Watch the whole thing, it's hilarious. Eventually these venture capitalists are forced to acknowledge that LLM-dependent developers do not develop an understanding and hit a ceiling. They call it "good enough".

The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence. Try turning it off for a day or two, you're hobbled, incapacitated. Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse. More discussion of this effect, empirical observations: https://www.youtube.com/watch?v=cQNyYx2fZXw

To understand the reality of LLM code generators in practice, Primeagen and Casey Muratori carefully review the output of a state-of-the-art LLM code generator. They provide a task well-represented in the LLM's training data, so development should be easy. The task is presented as a cumulative series of modifications to a codebase: https://www.youtube.com/watch?v=NW6PhVdq9R8

This is the reality of what's happening: iterative development converging on subtly or grossly incorrect, overcomplicated, unmaintainable code, with the LLM increasingly unable to make progress. And the human, where does he end up?

By @IshKebab - about 2 months
"Calculators are making people dumb"

"Spell checkers are making people dumb"

"Wikipedia is making people dumb"

Nothing to see here.

By @betimsl - about 2 months
> [...] This is to the point where is starts to become hard for you to work without one.

Why would one work without one?

By @hbogert - about 2 months
Everything is making us dumb. I remember when ATMs would give out your money before giving back your card. You would often find someone's card and maybe you could still shout to them if you saw them walking away.

Back then you'd giggle about how silly that person was; you wouldn't forget your card, would you? Somewhere since then the mindset shifted, and if a machine allowed this to happen, everybody would agree the designers of the machine did not do a good job on the user experience.

This is just a silly example, but throughout everyday life everything has become streamlined, and you can just cruise through a day on auto-pilot; machines will autocorrect you, or the process for using them makes it near impossible to get into an anomalous state. Sometimes I do have the feeling all this made us 'dumber', and I don't actively think anymore when interfacing with things because I assume it's foolproof.

However, not having to actively think about every little thing when interfacing with systems does give a lot of free mental capacity to be used for other things.

When reading these things I always get the feeling it's simply a "kids these days" piece. Go back 40 years when hardly anybody would use punch cards anymore. I'd imagine there were a lot of "real" developers who advocated that "kids" are wasting CPU cycles and memory because they've lost touch with the hardware and if they simply kept using punchcards they'd get a sense of "real" programming again.

My takeaway is: if we expect our ATMs to behave sanely and keep us from doing dumb things, why wouldn't we expect at least a subset of developers to want that same experience during development?

By @tarkin2 - about 2 months
Do human servants make you lazier or more productive? (A sincere thought experiment)
By @captainclam - about 2 months
This is one of the many many experiences in the tapestry of people figuring out how to use this new tool.

There will be many such cases of engineers losing their edge.

There will be many cases of engineers skillfully wielding LLMs and growing as a result.

There will be many cases of hobbyists becoming empowered to build new things.

There will be many cases of SWEs getting lazy and building up huge, messy, intractable code bases.

I enjoy reading from all these perspectives. I am tired of sweeping statements like "AI is Making Developers Dumb."

By @Frederation - about 2 months
*Inexperienced devs using tools to think for them instead of problem solving.
By @lowbloodsugar - about 2 months
Honestly I just don’t remember the names of methods and without my IDE I’d be a lot less productive than I am now. Are IDEs a problem?

The bit about "people don't really know how things work anymore": my friend, I grew up programming in assembly; I've modified the kernel on games consoles. Nobody around me knocking out their C# and their TypeScript has any idea how these things work. Like, I can name the people on the campus that do.

LLMs are a useful tool. Learn to use them to increase your productivity or be left behind.

By @Sparkyte - about 2 months
I don't think it is making developers dumb; you still need to audit and review the code. As long as you augment your writing by relying on it for base templating, finding material to read, or having it explain code, it is really good.
By @MrMcCall - about 2 months
"Pay a lot, cry once." --Chinese Proverb
By @mulmen - about 2 months
AI lowers the bar. You can say Python makes developers dumb too. Or that canned food makes cooks dumb. That’s not really the point though. When something is easier more people can do it. That expansion is biased downward.
By @jas39 - about 2 months
Frankly, I don't think this is true at all. If anything I notice, for me, that I make better and more informed decisions, in many aspects of life. I think this criticism comes from a position of someone having invested a lot of time in something AI can do quite well.
By @tehjoker - about 2 months
It is crazy to me how people talk like it was aeons ago when these tools came out, like, two years ago.
By @gdubs - about 2 months
I live on a farm and there are a lot of things that machines can do faster and cheaper. And for a lot of tasks, it makes more sense from a time / money tradeoff.

But I still like to do certain things by hand. Both because it's more enjoyable that way, and because it's good to stay in shape.

Coding is similar to me. 80% of coding is pretty brain dead — boilerplate, repetitive. Then there's that 20% that really matters. Either because it requires real creativity, or intentionality.

Look for the 80/20 rule and find those spots where you can keep yourself sharp.

By @cadamsdotcom - about 2 months
AI makes developers smarter when used in smart ways. How amazing to have code generated for you, freeing you to consider the next task (i.e. "sit there waiting for the next task to come to mind")... oh, by the way, if you don't understand the code, highlight it and ask for an explanation. Repeat ad infinitum until you understand what you're reading.

The dumb developers are those resisting this amazing tool and trend.

By @kelseyfrog - about 2 months
Books made orators dumb. I'm not sure this argument has ever had any credence, not now and not when Socrates came up with his version for his time.

Any technology that renders a mental skill obsolete will undergo this treatment. We should be smart enough to recognize the rhetoric it is rather than pretend it's a valid argument for Luddism.

By @rvogler - about 2 months
"There’s a reason behind why I say this. Over time, you develop a reliance on [search engines]. This is to the point where is [sic!] starts to become hard for you to work without one."
By @annjose - about 2 months
I experimented with vibe coding [0] yesterday to build a Pomodoro timer app [1] and had a mixed experience.

The process: instead of typing code, I mostly just talked (voice commands) to an AI coding assistant - in this case, Claude Sonnet 3.7 with GitHub Copilot in Visual Studio Code and the macOS built-in Dictation app. After each change, I'd check if it was implemented correctly and if it looked good in the app. I'd review the code to see if there were any mistakes. If I wanted any changes, I'd ask the AI to fix it and review the code again. The code is open source and available on GitHub [2].

On one hand, it was amazing to see how quickly the ideas in my head were turning into real code. Yes, reviewing the code takes time, but it is far less than if I were to write all that code myself. On the other hand, it was eye-opening to realize that I need to be diligent about reviewing the code written by the AI and ensuring that my code is secure, performant and architecturally stable. There were a few occasions when the AI wouldn't realize there was a mistake (at one point, a compile error) and I had to tell it to fix it.

No doubt that AI-assisted programming is changing how we build software. It gives you a pretty good starting point; it will take you 70-80% of the way there. But a production-grade application at scale requires a lot more work on architecture, system design, database, observability and end-to-end integration.

So I believe we developers need to adapt and understand these concepts deeply. We’ll need to be good at:

  - Reading code - understanding, verifying, and correcting the code written by AI
  - Systems thinking - understanding the big picture and how different components interact with each other
  - Guiding the AI system - giving clear instructions about what you want it to do
  - Architecture and optimization - ensuring the underlying structure is solid and performance is good
  - Understanding the programming language - without this, we wouldn't know when the AI makes a mistake
  - Designing good experiences - as coding gets easier, it becomes more important and easier to build user-friendly experiences
Without this knowledge, apps built purely through AI prompting will likely be sub-optimal, slow, and hard to maintain. This is an opportunity for us to sharpen the skills and a call to action to adapt to the new reality.

[0] https://en.wikipedia.org/wiki/Vibe_coding

[1] https://my-pomodoro-flow.netlify.app/

[2] https://github.com/annjose/pomodoro-flow

By @EVa5I7bHFq9mnYK - about 2 months
I'd say PHP and JS made developers dumb. And this is the kind of "developers" that AI is currently replacing.
By @Gualdrapo - about 2 months
I don't need AI to be dumb.