September 11th, 2024

Why Copilot Is Making Programmers Worse at Programming

AI-driven coding tools like Copilot may enhance productivity but risk eroding fundamental programming skills, fostering dependency, reducing learning opportunities, isolating developers, and creating a false sense of expertise.

The article discusses the potential negative impacts of AI-driven coding tools like GitHub's Copilot on programmers' skills and practices. While these tools enhance productivity by generating code and suggesting solutions, they may also erode fundamental programming skills. Developers risk becoming overly reliant on auto-generated code, and that dependency can leave them without an understanding of the underlying mechanics. It can also diminish a programmer's sense of ownership and responsibility for their work, as errors get attributed to the AI rather than to their own coding practices. Furthermore, the convenience of these tools can reduce learning opportunities, since developers may not engage deeply with problem-solving processes. The article also argues that reliance on AI tools can narrow creative thinking, as they tend to suggest conventional solutions rather than encourage innovative approaches. Additionally, dependency on proprietary tools can isolate developers from broader programming communities and create a false sense of expertise, where developers feel proficient without a solid understanding of the code they produce. Ultimately, the article warns that while AI tools can be beneficial, they may hinder long-term skill development and critical thinking in programming.

- AI tools like Copilot may erode fundamental programming skills.

- Over-reliance on auto-generated code can lead to code dependency and reduced ownership.

- The convenience of AI tools can shortcut learning opportunities for developers.

- Dependency on proprietary tools may isolate developers from broader communities.

- AI-generated solutions can create a false sense of expertise among programmers.

33 comments
By @plasticeagle - 4 months
This article 100% read like it was written by AI.

I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.

Even when I'm unfamiliar with a new language, I still won't use it. Instead, I'll consult the documentation and follow examples. I like coding, and I neither need nor want a machine to do it for me.

It's exactly the same as writing English. There is great pleasure to be found in writing; it's worth your time. Just be careful when doing so not to end up sounding exactly like ChatGPT.

By @amflare - 4 months
Pass. This article could have been published as "Why Internet Message Boards Are Making Programmers Worse at Programming" 30 years ago, "Why Google Is Making Programmers Worse at Programming" 20 years ago, or "Why StackOverflow Is Making Programmers Worse at Programming" 10 years ago. It's the same old same-old. Has it been true in the past? Maybe, if your criterion for "good programmer" is "someone who went through the same struggle I did". But the industry grows and adapts. The next generation of programmers is going to be good at different things than we are. As long as they can get the same job done, who are we to say that they are wrong?
By @Carrok - 4 months
Some of these points may be true, but as someone who just started using Copilot at a new job, working in an unfamiliar code base and programming language, I've found it to be a lifesaver.

Obviously you have to read the code to make sure it makes sense, and much of the work is deleting the main bit of functionality it attempted to implement, and re-implementing it correctly.

However, having it autocomplete entire function definitions, including all the {} () => | : `${x.y}` fiddly bits, sure saves a lot of time.
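
For a concrete (and entirely made-up) example of what that looks like, here is a small TypeScript sketch; the interface and function names are invented for illustration, but it's roughly the kind of punctuation-dense definition an assistant will fill in from little more than the signature:

```typescript
// Hypothetical example: the sort of syntax-heavy code an autocomplete
// tool will usually finish once you start typing the signature.
interface User {
  id: number;
  name: string;
  tags: string[];
}

// Group users by tag, producing labels like "42: Ada" under each tag.
const groupByTag = (users: User[]): Record<string, string[]> =>
  users.reduce<Record<string, string[]>>((acc, u) => {
    for (const tag of u.tags) {
      (acc[tag] ??= []).push(`${u.id}: ${u.name}`);
    }
    return acc;
  }, {});
```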

The one point I don't agree with at all is `Dependency on Proprietary Tools`: there are already plenty of open-source alternatives, and these will only improve with time.

By @s_dev - 4 months
I feel like we'll be having the same arguments forever. Sure, LLMs bring some issues, but programmers will always find ways to write bad code no matter what tooling or techniques we have available. His critique is basically the same as saying calculators will make us poorer at mathematics -- and at least in Europe, PISA maths skills have been steadily increasing since the introduction of calculators to school syllabi.
By @Night_Thastus - 4 months
I'm in the "it doesn't matter" camp.

Over time, more people will realize that tools like Copilot aren't worth the headache. The solutions are often wrong, the explanations of those solutions are wrong, the corrections when you point out a mistake are wrong, etc.

Once the "AI" hype dies down and people see these tools for what they are, glorified Markov chains, it won't really matter. Maybe it will get some use in making boilerplate code for the most basic of applications, but that's about it. That, and the occasional junior dev stumbling into it without realizing just how bad their output can be.

By @siliconc0w - 4 months
Right now it's a bit like Tesla's self-driving. It mostly works, but "mostly works" isn't a great standard, and maintaining supervision to correct errors involves continually re-building state and trying to debug AI code, which can be more taxing than just doing the thing yourself.

This is case by case, of course. I used it the other day to generate fairly idiomatic table-driven tests. It took a few swings plus some manual tweaking, but as I don't particularly enjoy writing tests, I was pretty satisfied with the outcome, and it had more coverage than I probably would've written. Well worth the 25 cents in API credits. On the other hand, there have been more than a few times I've given up trying to nudge the AI and just done it myself. In those cases it was a net negative and just wasted time. So the trick is feeling out where that line is for each model, so that wasted time < saved time.
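
(For reference, "table-driven" here means the style below: one assertion driven by a table of cases. The commenter doesn't say which language or framework they used, so this is a purely illustrative TypeScript/Jest sketch with a made-up function under test.)

```typescript
// Illustrative table-driven test using Jest's test.each.
// The function under test is defined inline so the example is
// self-contained; it is not from the comment above.
const slugify = (s: string): string =>
  s.trim().toLowerCase().replace(/\s+/g, "-");

describe("slugify", () => {
  // A table of [input, expected] cases driving the same assertion.
  test.each([
    ["Hello World", "hello-world"],
    ["  Trim me  ", "trim-me"],
    ["Already-Slugged", "already-slugged"],
  ])("slugify(%p) -> %p", (input, expected) => {
    expect(slugify(input)).toBe(expected);
  });
});
```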

By @marcellus23 - 4 months
There's no evidence for any of that; it's pure speculation. I bet people were saying the same thing about Intellisense, and Google, and documentation-on-hover, and live compiler errors, etc. etc.
By @vydra - 4 months
I found the error rate of Copilot unacceptable for most of my daily work, so 2 months ago we kicked off a project to write a tool more appropriate for someone who practices TDD -- specifically, I don't want to see generated code unless it passes my tests. The early results are very promising for my stack, which is backend Java/Spring. See https://testdriven.com/testdriven-2-0-8354e8ad73d7
By @kayodelycaon - 4 months
> When a developer writes every line of code manually, they take full responsibility for its behaviour, whether it’s functional, secure, or efficient. In contrast, when AI generates significant portions of code, it’s easy to shift that sense of responsibility onto the AI assistant.

I'm not sure how true this is. Any place I've worked, whoever checked in the code is responsible for it.

By @pton_xd - 4 months
From TFA: "Erosion of Core Programming Skills, Over-Reliance on Auto-Generated Code, Reduced Learning Opportunities, Narrowed Creative Thinking, Dependency on Proprietary Tools, False Sense of Expertise"

Personally, I've been making the same arguments against using even vanilla auto-complete. It's a distraction, it erodes the mind, it encourages bad habits, etc.

By @mewpmewp2 - 4 months
Seems like stretched reasons. Similar things could be said about using libraries or languages that abstract away low-level complexities. Also, I'm pretty sure it's an AI-generated article, which I guess could be a stronger point about writing skills being eroded than the arguments themselves.
By @ergonaught - 4 months
Even if the assertion is correct (which I believe to be the case), the most probable reality-based outcome is that decision makers will continue to push toward automation.

If some significant portion of the humans doing the job can be replaced by an LLM, or by much cheaper humans augmented by an LLM, they will be so replaced. By the time it creates a real problem, those folks will have cashed out and moved on.

That isn't new behavior, but folks who fall back on it as a way of dismissing these concerns lack a good grasp of the scale that technology enables here.

"Interesting times".

By @kjellsbells - 4 months
It's not that Copilot is a threat because it could produce tight, elegant code that handles all the edge cases and so on; it's a threat because employers still see coders as expensive keyboard jockeys whose code is not significantly better than what a Copilot user could cargo-cult into existence.

Blaming Copilot feels a bit like the wrong target. As with the Luddites, the real question is the relationship between employer and employee, and how the presence of the machine empowers or endangers the worker. To put it another way: suppose a perfect Copilot existed, one that required a human to drive it but made that human a 10x developer. Do you think they would get paid 10x as much? Or would the worker stay where they were, perhaps under threat of replacement, and the employer take the spoils?

https://www.flyingpenguin.com/?p=28925

Edit: to be clear, I am actually a big fan of Copilot for increasing one's personal productivity, rather like a super-Google or a non-snarky Stack Overflow. But I remain rather cynical about how those benefits might work in the new corporate environment.

By @ebiester - 4 months
Note: This is absolute speculation - it offers no evidence or even anecdotes to support it.
By @phendrenad2 - 4 months
Some reasons you shouldn't drive a car:

- Erosion of core horse-riding skills

Getting from point A to point B used to be a highly-skilled task, involving a fusion of man and beast working in tandem to accomplish the job. Now, by using a so-called "automobile" (more like "auto-mo-blah", am I right?), we're losing these core skills. Rather than deeply understanding the inner workings of the horse's digestive tract, we're left with only one choice: basic petrol or premium?

- Over-reliance on roads

When driving a car, drivers can quickly reach their destination without understanding the underlying terrain. This leads to what experts (me) call "road dependence", where drivers are too reliant on roads, without checking if the route is the most efficient. There could be a badger path cutting 20 minutes off of your commute!

- Lack of ownership and responsibility

When going from point A to point B, car drivers shift responsibility for the drive to the roads they drive on. But the roads could expose them to rockslides, ice, highway robbers, bank robbers, and dangerous wildlife. They may think "if the road goes through here, it must be safe", rather than do due diligence and thoroughly research the route beforehand.

- Reduced learning opportunities

Getting from point A to point B used to be a highly trial-and-error process that forced you to LEARN THE HARD WAY that certain cliffs are too steep for the average horse. Rather than falling off a cliff repeatedly, road drivers don't learn these lessons at all.

- Narrowed creative riding

When riding a horse, you are beset by constant questions. "Is that cliff safe for my horse to scale", "are those berries safe for my horse to eat", "is that a bee nest in my path or just a lumpy tree branch". These force you to think creatively about your travels. As a road driver, the way is predetermined for you, and you won't be as adaptable if you run into unusual situations.

- Dependency on proprietary engines

All horses are exactly the same, right down to the color and number of hooves! This makes it easy to transfer your expertise from one horse to another. Unfortunately, once you become a car driver, you'll find that the manufacturers put the damn volume knob in a different place on every single model. And there's nothing you can do to change it, because it's proprietary.

By @plutoh28 - 4 months
Copilot comes free for students through GitHub's Student Developer Pack[0]. I've gotten to try it out, and I've found it to just be a great cheating tool in my classes.

Most assignments done by students are basic problems that have been solved tens of thousands of times and can be found all over GitHub.

Assignments where you have to write algorithms like bubble sort or binary search are as easy as typing the function signature and then having Copilot fill in the rest.
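
As a rough illustration of that signature-plus-completion pattern, here is the kind of textbook binary search (in TypeScript, chosen arbitrarily for the example) that an assistant will typically produce from the first line alone:

```typescript
// Classic iterative binary search over a sorted array of numbers.
// Returns the index of target, or -1 if it is not present.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

// e.g. binarySearch([1, 3, 5, 7, 9], 7) === 3
```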

Therefore, using copilot as a student will make you worse at programming, since you are robbed of the fundamental thinking skills that come from solving these problems.

[0] https://education.github.com/pack

By @arbol - 4 months
Copilot is useful for auto-completing a glob pattern or writing a simple regex. Anything more complicated and it will often make mistakes. Finding mistakes in 20 lines of AI-generated code is slower than writing it yourself.
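
(For a sense of scale, "a simple regex" here means something like the illustrative TypeScript snippet below; the pattern and test strings are made up, and anything much hairier is where the mistakes tend to creep in.)

```typescript
// Illustrative "simple regex": match an ISO 8601 date such as "2024-09-11".
const isoDate = /^\d{4}-\d{2}-\d{2}$/;

console.log(isoDate.test("2024-09-11")); // true
console.log(isoDate.test("11/09/2024")); // false
```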
By @skeptrune - 4 months
I think it's important to actually think through problems and read error messages yourself before hammering out some request and throwing it to a coding bot. Copilots may be pushing out how long it takes for newer devs to actually do that, which feels like the most damaging aspect for skill progression.

It's been frustrating to request a simple feature or tool and see interviewees spend hours fighting with a LLM to make it do what they want instead of just trying to do it themselves first and correctly picking the spots to use the AI.

By @jredwards - 4 months
If your premise is that the world is a better place when programmers all have regex syntax committed to memory, then sure, I guess AI tools are bad.

Personally, I don't think the aspects of writing code that AI tools help with the most are the important parts. I think AI tools are great at taking out the rote aspects and the glue code so that programmers can concentrate on the core issues and broader structure.

By @jumploops - 4 months
"Why C is making assembly programmers worse at programming"

"Its going to make you reliant on the [compiler], and you will never be able to do anything that the [compiler] cant already do."

LLMs are here to stay, even if they don't write perfect code -- they are clearly very useful to an increasing number of existing developers and, more importantly, bringing new developers into the art of software creation.

By @twodave - 4 months
A better thesis might be that Copilot is making programmers _that_ are worse at programming -- which is to say, new developers or those who otherwise aren't very good are able to use it to pass as more competent than they are.

Copilot saves me a lot of time. It frees me up to think more critically about the code I'm writing.

By @JohnMakin - 4 months
Replace "code generation" with "Stack Overflow" and you have essentially the same rant people were making 10+ years ago about that site - even more poignant given that a large amount of Copilot's training data comes from sites like Stack Overflow.
By @rychco - 4 months
There’s not much to talk about here. Of course you won’t improve when a third party is producing your code.

That being said, I’m interested to see what happens to LLM effectiveness over time as the amount of LLM-generated code starts infecting training data.

By @jmclnx - 4 months
Well, you know the old saying: there was only ever one COBOL program written from scratch. All the others were copied from it or from other programs descended from it.
By @simonw - 4 months
I will happily argue that Copilot, used thoughtfully and responsibly, can make programmers better at programming.

The very, very short version is that it lets programmers move faster and try more things, which helps them learn quicker.

The rate at which I learn new libraries, frameworks and languages has accelerated dramatically over the past two years as I learned to effectively use Copilot and other LLM tools.

I have 20+ years of experience already, so a reasonable question to ask is if that effect is limited to experienced developers.

I can't speak for developers with less experience than myself. My hunch is that they can benefit too, if they deliberately use these tools to help and accelerate their learning instead of just outsourcing to them. But you'd have to ask them, not me.

By @chrisml - 4 months
Treat Copilot (or any other AI system) as a TOOL. Each tool has its purpose and use. But remember YOU are the craftsman...
By @semiinfinitely - 4 months
I also dislike using AI for programming, but for what it's worth, I cannot reconfigure my Neovim Lua settings without AI.
By @dyeje - 4 months
Why Hammers Make Builders Worse at Building
By @xyst - 4 months
This just means job security for me. We will be the next "Fortran/COBOL cowboys."
By @aPoCoMiLogin - 4 months
It doesn't make them worse; they just stay at the same level. The tool doesn't change the fact that the person using it doesn't care about the end result.

Making systematic mistakes is a character trait; there is nothing that can fix that except the person themselves.

By @Fergusonb - 4 months
While this raises valid concerns, Copilot and friends can actually enhance learning by exposing devs to new patterns and approaches. They still require problem-solving and critical evaluation skills. By handling routine tasks, they free up time for higher-level thinking.

It's just another abstraction layer, like high-level languages were. Responsible use combined with continuous learning can boost productivity without sacrificing knowledge.

The impact differs between experienced devs and beginners. As these tools evolve, we'll likely develop new meta-skills around AI collaboration. Like any tool, it's about how we use it.

By @deisteve - 4 months
This article is just a rehashing of the same tired contrarian takes about AI-assisted coding that we've been hearing for years. 'Programmers will become lazy and reliant on AI' is not a new problem, and it's not like Copilot is somehow uniquely capable of eroding fundamental programming skills.

In reality, Copilot and other LLMs are just tools, and like any tool, they can be used well or poorly. If a programmer is relying on Copilot to do all their thinking for them, then yeah, they're probably not going to learn much. But if they're using it as a starting point to explore new ideas and learn from the code it generates, then that's a different story.

And let's not forget that AI-assisted coding is not a replacement for human judgment and critical thinking. If a programmer is not reviewing and understanding the code they're writing, then that's a problem with their workflow, not with the tool they're using.

I'd love to see some actual data on how Copilot is being used in the wild, rather than just anecdotal evidence and hand-wringing about the 'dangers' of AI-assisted coding. Until then, I'll remain skeptical of this article's claims.

By @TrackerFF - 4 months
I get his point, but let's be real here: 10, 20 years from now, how many will actually be programming - the way we know programming today?

In 10 years' time, we will have a generation of coders where the majority have never coded anything without the help of some LLM. If that's even still a thing when that time comes.

Some posters here are living in a Luddite delusion if they think the "AI hype" will somehow just blow over and people will go back to sifting through Stack Exchange or simply reading the man pages for something.

Sorry, that's just not going to happen. In the past two years we have already seen junior devs who are dependent on LLMs to work efficiently.

These posts remind me of how older folks (25-30 years ago) warned about search engines, and how they'd make the youngins lazy and uncritical.