AI Is Making Developers Dumb
Large language models can boost productivity for software developers but may reduce critical thinking and foundational skills. A balanced approach is essential to maintain knowledge and problem-solving abilities.
The article discusses the impact of large language models (LLMs) on software developers, arguing that while these tools can enhance productivity, they may also diminish critical thinking and problem-solving skills. The author reflects on their own experience with LLMs, noting a growing dependency that led to a decline in foundational knowledge and coding abilities. This phenomenon, termed "Copilot Lag," describes a state where developers wait for AI prompts instead of independently solving problems. The author emphasizes the importance of understanding programming concepts deeply, rather than relying on AI-generated solutions. They acknowledge that LLMs can serve as valuable research tools if used with a critical mindset, encouraging developers to interrogate AI outputs and take notes to reinforce learning. Ultimately, the author advocates for a balanced approach to using LLMs, highlighting the need for developers to maintain their skills and knowledge.
- LLMs can enhance productivity but may reduce critical thinking in developers.
- "Copilot Lag" describes a reliance on AI that hinders independent problem-solving.
- Developers risk losing foundational knowledge by depending too much on LLMs.
- LLMs can be effective research tools if approached with skepticism and curiosity.
- Taking notes and actively engaging with learning materials is crucial for skill retention.
I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.
I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then, LLMs being what they are, I'll take over and finish the job my way. I seem to be producing more than I ever have.
I've worked with people of this type, and am friends with a few: they think their code and methodologies are sacrosanct, and that if the AI moves in there is no place for them. I got into the game for creativity; it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.
It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.
Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.
Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?
Wouldn't we all be smarter if we managed memory manually?
Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?
Wouldn't we all be smarter if we were wiring our own transistors?
It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.
In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is that they did not acquire any knowledge along the way, as exit interviews clearly demonstrated.
As a friend commented, "these language models should never have been made available to the general public", only to researchers.
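For reference, the task in the study is the kind of thing that can be sketched in a few lines; the library choice (scikit-learn), the toy review texts, and the TF-IDF + logistic regression pipeline below are illustrative assumptions of mine, not details from the study:

```python
# A minimal sketch of the study's task: sentiment classification of
# German-language reviews of medical practices. The toy data and the
# pipeline are assumptions for illustration, not the students' solution.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled examples; a real dataset would have thousands of reviews.
texts = [
    "Sehr freundliches Team, ich komme gerne wieder.",
    "Lange Wartezeit und unfreundlicher Empfang.",
    "Kompetente Aerztin, gute Beratung.",
    "Termin wurde zweimal kurzfristig abgesagt.",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Bag-of-words features plus a linear classifier: the simplest baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predicted label for a new, unseen review.
print(model.predict(["Kurze Wartezeit, sehr gute Behandlung."]))
```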
A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I got the review request. The bulk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kind of impressive, making some clever use of associative arrays, auto-vivification, and other pretty advanced awk features. In fact, it was longer than any awk I have ever written.
When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"
Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
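For context, the underlying task is small. Here is a rough Python sketch of one way to do it; this is not the junior developer's awk, and the 90-day cutoff and output format are my assumptions:

```python
# List local branches whose last commit is older than some cutoff.
import subprocess
from datetime import datetime, timedelta, timezone

CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)  # "a while" assumed to be 90 days

# One line per branch: "<name> <ISO 8601 committer date>"
out = subprocess.run(
    ["git", "for-each-ref",
     "--format=%(refname:short) %(committerdate:iso-strict)",
     "refs/heads/"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    branch, date = line.rsplit(" ", 1)
    if datetime.fromisoformat(date) < CUTOFF:
        print(branch)
```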
LLMs are silent failure machines. They are useful in their place, but when I hear about bosses replacing human labor with “AI” I am fairly confident they are going to get what they deserve: catastrophe.
I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.
In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.
I've been experiencing this for 10-15 years. I type something and then wait for the IDE to complete function names, class methods, etc. From this perspective, LLMs won't hurt too much because I'm already dumb enough.
I don't generally ask it to write my code for me because that's the fun part of the job.
On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects, available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It's a mostly decent rubber duck.
That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.
I'm fine with having it as another tool in the box, but I'd rather do the work myself and collaborate with actual people.
I've been programming for over 25 years, and the joy I get from it is the artistry of it; I see beauty in systems constructed in the abstract realm. But LLM-based development removes much of that. I haven't used, nor do I desire to use, LLMs for this, but I don't want to compete with people who do, because I won't win in the short-term nature of corporate performance-based culture. So I'm now searching for careers that will be more resistant to LLM-based workflows. Unfortunately, in my opinion, this pretty much rules out any knowledge-based economy.
1. Skilled people do a good job, AI does a not-so-good job.
2. AI users get dumbed down so they can't do any better. Mediocrity normalized.
3. Replace the AI users with AI.
I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.
I've heard people say I'll get left behind if I don't use AI, and that's fine as I'll just use niche applications of code alongside my regular work as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.
That said, the author is probably right that it has made me dumber or at least less prolific at writing boilerplate.
It's a good thing tbh. Language syntax is ultimately entirely arbitrary and is the most pointless thing to have to keep in mind. Why bother focusing on that when you can use the mental effort on the actual logic instead?
This has been a problem for me for years before LLMs, constantly switching languages and forgetting what exact specifics I need to use because everyone thinks their super special way of writing the same exact thing is best and standards are avoided like the plague. Why do we need two hundred ways of writing a fuckin for loop?
1. The problem domain is a marketing site (low risk)
2. I got tired of fixing bad LLM code
I have noticed the people who do this are caught up in the politics at work and not really interested in writing code.
I have no desire to be a code janitor.
It's a cartoon mentality. Real products have more requirements than any human can fathom; correctness is just one of the uncountable tradeoffs you can make. Understanding, or some kind of scientific value, is another.
If anything but a single minded focus on your pet requirement is dumb, then call me dumb idc. Why YOU got into software development is not why anyone else did.
I heard about the term 'vibe coding' recently, which really just means copying and pasting code from an AI without checking it. It's interesting that that's a thing, I wonder how widespread it is.
Conversely: some people want to insist that writing code 10x slower is the right way to do things, that horses were always better and more dependable than cars, and that nobody would want to step into one of those flying monstrosities. And they may also find that they are no longer in the right field.
One of the jobs of a software engineer is to be the point person for some pieces of technology. The responsible person in the chain. If you let AI do all of your job, it’s the same as letting a junior employee do all of your job: Eventually the higher-ups will notice and wonder why they need you.
I found this to be such a silly statement. I find arguments generated by AI to be significantly more solid than this.
I was an engineer before moving to more product and strategy oriented roles, and I work on side projects with assistance from Copilot and Roo Code. I find that the skills that I developed as a manager (like writing clear reqs, reviewing code, helping balance tool selection tradeoffs, researching prior art, intuiting when to dive deep into a component and when to keep it abstract, designing system architectures, identifying long-term-bad ideas that initially seem like good ideas, and pushing toward a unified vision of the future) are sometimes more useful for interacting with AI devtools than my engineering skillset.
I think giving someone an AI coding assistant is pretty bad for having them develop coding skills, but pretty good for having them develop "working with an AI assistant" skills. Ultimately, if the result is that AI-assisted programmers can ship products faster without sacrificing sustainability (i.e. you can't have your codebase collapse under the weight of AI-generated code that nobody understands), then I think there will be space in the future for both AI-power users who can go fast as well as conventional engineers who can go deep.
For example, in one of my recent blog posts I wanted to use Python's Pillow to composite five images: one occupying the left half of the image, the other four in quadrants (https://github.com/minimaxir/mtg-embeddings/blob/main/mtg_re...). I know how to do that in PIL (you have to manually specify the coordinates and resize the images), but it is annoying and prone to human error, and I can never remember which corner is the origin in PIL-land.
Meanwhile I asked Claude 3.5 Sonnet this:
Write Python code using the Pillow library to compose 5 images into a single image:
1. The left half consists of one image.
2. The right half consists of the remaining 4 images, equally sized with one quadrant each
And it got the PIL code mostly correct, except it tried to load the images from a file path, which wasn't desired, but that is both an easy fix and my fault since I didn't specify it.

Point (c) above is also why I despise the "vibe coding" meme, because I believe it's intentionally misleading: identifying code and functional requirement issues is an implicit requisite skill that is intentionally ignored in the hype, as it goes against the novelty of "an AI actually did all of this without much human intervention."
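The actual code is in the linked repo; purely as an illustration, a composition like the one prompted for might look roughly like this in Pillow (the function name, the square canvas, and the 1024-pixel size are my assumptions):

```python
# A rough sketch, not the code from the linked repo: one image fills the
# left half, four equally sized images tile the right half.
# In Pillow, (0, 0) is the top-left corner of the canvas.
from PIL import Image

def compose(left_img, quads, size=1024):
    """left_img fills the left half; quads is a list of 4 images for the
    right half, in row-major order."""
    canvas = Image.new("RGB", (size, size), "white")
    half = size // 2
    qw, qh = half // 2, half  # each right-half quadrant: quarter wide, half tall
    canvas.paste(left_img.resize((half, size)), (0, 0))
    positions = [(half, 0), (half + qw, 0), (half, qh), (half + qw, qh)]
    for img, pos in zip(quads, positions):
        canvas.paste(img.resize((qw, qh)), pos)
    return canvas
```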
And no data, or a link to data, either. Just a hand-wavy "I think it happened to me."
Watch the whole thing, it's hilarious. Eventually these venture capitalists are forced to acknowledge that LLM-dependent developers do not develop an understanding and hit a ceiling. They call it "good enough".
The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence. Try turning it off for a day or two: you're hobbled, incapacitated. Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse. More discussion of this effect, with empirical observations: https://www.youtube.com/watch?v=cQNyYx2fZXw
To understand the reality of LLM code generators in practice, Primeagen and Casey Muratori carefully review the output of a state-of-the-art LLM code generator. They provide a task well-represented in the LLM's training data, so development should be easy. The task is presented as a cumulative series of modifications to a codebase: https://www.youtube.com/watch?v=NW6PhVdq9R8
This is the reality of what's happening: iterative development converging on subtly or grossly incorrect, overcomplicated, unmaintainable code, with the LLM increasingly unable to make progress. And the human, where does he end up?
"Spell checkers are making people dumb"
"Wikipedia is making people dumb"
Nothing to see here.
Why would one work without one?
Back then you'd giggle about how silly that person was; you wouldn't forget your card, would you? Somewhere since then the mindset shifted, and if a machine allowed this to happen today, everybody would agree the designers of the machine did not do a good job on the user experience.
This is just a silly example, but throughout everyday life everything has become streamlined, and you can just cruise through a day on auto-pilot: machines will autocorrect you, or the way you use them makes it near impossible to get into an anomalous state. Sometimes I do have the feeling all this has made us 'dumber', and I don't actively think anymore when interfacing with things because I assume they're foolproof.
However, not having to actively think about every little thing when interfacing with systems does give a lot of free mental capacity to be used for other things.
When reading these things I always get the feeling it's simply a "kids these days" piece. Go back 40 years, when hardly anybody used punch cards anymore. I'd imagine there were a lot of "real" developers who argued that the "kids" were wasting CPU cycles and memory because they'd lost touch with the hardware, and that if they simply kept using punch cards they'd get a sense of "real" programming again.
My takeaway is, if we expect our ATMs to behave sane and keep us from doing dumb things, why wouldn't we expect at least a subset of developers wanting to get that same experience during development?
There will be many such cases of engineers losing their edge.
There will be many cases of engineers skillfully wielding LLMs and growing as a result.
There will be many cases of hobbyists becoming empowered to build new things.
There will be many cases of SWEs getting lazy and building up huge, messy, intractable code bases.
I enjoy reading from all these perspectives. I am tired of sweeping statements like "AI is Making Developers Dumb."
The bit about "people don't really know how things work anymore": my friend, I grew up programming in assembly; I've modified the kernel on games consoles. Nobody around me knocking out their C# and their TypeScript has any idea how these things work. Like, I can name the people on the campus who do.
LLMs are a useful tool. Learn to use them to increase your productivity or be left behind.
But I still like to do certain things by hand. Both because it's more enjoyable that way, and because it's good to stay in shape.
Coding is similar to me. 80% of coding is pretty brain dead — boilerplate, repetitive. Then there's that 20% that really matters. Either because it requires real creativity, or intentionality.
Look for the 80/20 rule and find those spots where you can keep yourself sharp.
The dumb developers are those resisting this amazing tool and trend.
Any technology that renders a mental skill obsolete will undergo this treatment. We should be smart enough to recognize this rhetoric for what it is rather than pretend it's a valid argument for Luddism.
The process: instead of typing code, I mostly just talked (via voice commands) to an AI coding assistant - in this case, Claude Sonnet 3.7 with GitHub Copilot in Visual Studio Code, plus the macOS built-in Dictation app. After each change, I'd check whether it was implemented correctly and whether it looked good in the app. I'd review the code for mistakes. If I wanted any changes, I'd ask the AI to fix them and review the code again. The code is open source and available on GitHub [2].
On one hand, it was amazing to see how quickly the ideas in my head were turning into real code. Yes, reviewing the code takes time, but it is far less than if I were to write all that code myself. On the other hand, it was eye-opening to realize that I need to be diligent about reviewing the code written by AI and ensuring that my code is secure, performant, and architecturally stable. There were a few occasions when the AI wouldn't realize there was a mistake (once, a compile error) and I had to tell it to fix it.
No doubt AI-assisted programming is changing how we build software. It gives you a pretty good starting point and will take you 70-80% of the way there. But a production-grade application at scale requires a lot more work on architecture, system design, database, observability, and end-to-end integration.
So I believe we developers need to adapt and understand these concepts deeply. We’ll need to be good at:
- Reading code - understanding, verifying, and correcting the code written by AI
- Systems thinking - understanding the big picture and how different components interact with each other
- Guiding the AI system - giving clear instructions about what you want it to do
- Architecture and optimization - ensuring the underlying structure is solid and performance is good
- Understanding the programming language - without this, we wouldn't know when the AI makes a mistake
- Designing good experiences - as coding gets easier, it becomes more important (and easier) to build user-friendly experiences
Without this knowledge, apps built purely through AI prompting will likely be sub-optimal, slow, and hard to maintain. This is an opportunity for us to sharpen our skills, and a call to action to adapt to the new reality.

[0] https://en.wikipedia.org/wiki/Vibe_coding