Why Copilot Is Making Programmers Worse at Programming
AI-driven coding tools like Copilot may enhance productivity but risk eroding fundamental programming skills, fostering dependency, reducing learning opportunities, isolating developers, and creating a false sense of expertise.
The article discusses the potential negative impacts of AI-driven coding tools like GitHub's Copilot on programmers' skills and practices. While these tools enhance productivity by generating code and suggesting solutions, they may also lead to the erosion of fundamental programming skills. Developers risk becoming overly reliant on auto-generated code, which can result in a lack of understanding of the underlying mechanics, leading to code dependency. This dependency can diminish a programmer's sense of ownership and responsibility for their work, as they may attribute errors to the AI rather than their own coding practices. Furthermore, the convenience of these tools can reduce learning opportunities, as developers may not engage deeply with problem-solving processes. The article also highlights that reliance on AI tools can narrow creative thinking, as they often suggest conventional solutions rather than encouraging innovative approaches. Additionally, dependency on proprietary tools can isolate developers from broader programming communities and create a false sense of expertise, where developers feel proficient without a solid understanding of the code they produce. Ultimately, the article warns that while AI tools can be beneficial, they may hinder long-term skill development and critical thinking in programming.
- AI tools like Copilot may erode fundamental programming skills.
- Over-reliance on auto-generated code can lead to code dependency and reduced ownership.
- The convenience of AI tools can shortcut learning opportunities for developers.
- Dependency on proprietary tools may isolate developers from broader communities.
- AI-generated solutions can create a false sense of expertise among programmers.
Related
Ask HN: Will AI make us unemployed?
The author highlights reliance on AI tools like ChatGPT and GitHub Copilot, noting a 30% efficiency boost and concerns about potential job loss due to AI's increasing coding capabilities.
Up to 90% of my code is now generated by AI
A senior full-stack developer discusses the transformative impact of generative AI on programming, emphasizing the importance of creativity, continuous learning, and responsible integration of AI tools in coding practices.
I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.
Even if I'm unfamiliar with a new language, I still won't use it. Instead, I'll consult the documentation and follow examples. I like coding, and I neither need nor want a machine to do it for me.
It's exactly the same as writing English. There is great pleasure to be found in writing; it's worth your time. Just be careful, when doing so, not to end up sounding exactly like ChatGPT.
Obviously you have to read the code to make sure it makes sense, and much of the work is deleting the main bit of functionality it attempted to implement and re-implementing it correctly.
However, having it autocomplete entire function definitions, including all the {} () => | : `${x.y}` fiddly bits, sure saves a lot of time.
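For a sense of what that looks like in practice, here is an illustrative TypeScript snippet (the names are hypothetical, not from the original comment): a short signature is often enough for a tool like Copilot to plausibly fill in the rest of the punctuation-dense body.

```typescript
// Illustrative only: the punctuation-heavy kind of definition an
// autocompleter can fill in from little more than a name and a type.
type User = { id: number; name: string };

const describeUsers = (users: User[]): string[] =>
  users.map((u: User): string => `${u.name} (#${u.id})`);

console.log(describeUsers([{ id: 1, name: "Ada" }])); // -> [ 'Ada (#1)' ]
```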
The one point I don't agree with at all is `Dependency on Proprietary Tools`: there are already plenty of open-source alternatives, and these will only improve with time.
Over time, more people will realize that tools like Copilot aren't worth the headache. The solutions are often wrong, the explanations of those solutions are wrong, the corrections when you point out a mistake are wrong, etc.
Once "AI" hype dies down and people see these tools for what they are, glorified Markov chains, it won't really matter. Maybe it will get some use in making boilerplate code for the most basic of applications, but that's about it. And the occasional junior dev stumbling into it not realizing just how bad their output can be.
This is case by case, of course. I used it the other day to generate fairly idiomatic table-driven tests. It took a few swings plus some manual tweaking, but since I don't particularly enjoy writing tests, I was pretty satisfied with the outcome, and it had more coverage than I probably would've written. Well worth the 25 cents in API credits. On the other hand, there have been more than a few times I've given up trying to nudge the AI and just done it myself. In those cases it was a net negative and just wasted time. So the trick is feeling out where that line is for each model, so that wasted time < saved time.
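For readers unfamiliar with the pattern, a minimal table-driven test looks like the sketch below (the `clamp` function and its cases are hypothetical, chosen only to illustrate the shape):

```typescript
import assert from "node:assert";

// Hypothetical function under test.
const clamp = (x: number, lo: number, hi: number): number =>
  Math.min(Math.max(x, lo), hi);

// Each row is one test case; broadening coverage just means adding rows,
// which is exactly the kind of repetitive expansion an LLM handles well.
const cases = [
  { name: "within range", x: 5, lo: 0, hi: 10, want: 5 },
  { name: "below range", x: -3, lo: 0, hi: 10, want: 0 },
  { name: "above range", x: 42, lo: 0, hi: 10, want: 10 },
  { name: "at upper boundary", x: 10, lo: 0, hi: 10, want: 10 },
];

for (const c of cases) {
  assert.strictEqual(clamp(c.x, c.lo, c.hi), c.want, c.name);
}
console.log(`${cases.length} cases passed`);
```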
I’m not sure how true this is. Anywhere I’ve worked, whoever checked in the code is responsible for it.
Personally, I've been making the same arguments against using even vanilla auto-complete. It's a distraction, erosion of the mind, encourages bad habits, etc.
If some significant portion of the humans doing the job can be replaced by LLM, or by much cheaper humans augmented by LLM, they will be so replaced. By the time it creates a real problem, those folks will have cashed out and moved on.
That isn't new behavior, but folks who fall back on it as a way of dismissing concerns lack a good grasp of the scale this technology enables.
"Interesting times".
Blaming Copilot feels a bit like the wrong target. As with the Luddites, the real question is the relationship between employer and employee, and how the presence of the machine empowers or endangers the worker. To put it another way: suppose a perfect Copilot existed, one that required a human to drive it but made that human a 10x developer. Do you think they would get paid 10x as much? Or would the worker stay where they were, perhaps under threat of replacement, while the employer takes the spoils?
https://www.flyingpenguin.com/?p=28925
Edit: to be clear, I am actually a big fan of Copilot for increasing one's personal productivity, rather like a super-Google or a non-snarky Stack Overflow. But I remain rather cynical about how those benefits might work out in the new corporate environment.
- Erosion of core horse-riding skills
Getting from point A to point B used to be a highly skilled task, involving a fusion of man and beast working in tandem to accomplish the job. Now, by using a so-called "automobile" (more like "auto-mo-blah", am I right?) we're losing these core skills. Rather than deeply understanding the inner workings of the horse's digestive tract, we're left with only one choice: basic petrol or premium?
- Over-reliance on roads
When driving a car, drivers can quickly reach their destination without understanding the underlying terrain. This leads to what experts (me) call "road dependence", where drivers are too reliant on roads, without checking if the route is the most efficient. There could be a badger path cutting 20 minutes off of your commute!
- Lack of ownership and responsibility
When going from point A to point B, car drivers shift responsibility for the drive to the roads they drive on. But the roads could expose them to rockslides, ice, highway robbers, bank robbers, and dangerous wildlife. They may think "if the road goes through here, it must be safe", rather than do due diligence and thoroughly research the route beforehand.
- Reduced learning opportunities
Getting from point A to point B used to be a highly trial-and-error process that forced you to LEARN THE HARD WAY that certain cliffs are too steep for the average horse. Rather than falling off a cliff repeatedly, road drivers don't learn these lessons at all.
- Narrowed creative riding
When riding a horse, you are beset by constant questions. "Is that cliff safe for my horse to scale", "are those berries safe for my horse to eat", "is that a bee nest in my path or just a lumpy tree branch". These force you to think creatively about your travels. As a road driver, the way is predetermined for you, and you won't be as adaptable if you run into unusual situations.
- Dependency on proprietary engines
All horses are exactly the same, right down to the color and number of hooves! This makes it easy to transfer your expertise from one horse to another. Unfortunately, once you become a car driver, you'll find that the manufacturers put the damn volume knob in a different place on every single model. And there's nothing you can do to change it, because it's proprietary.
Most assignments done by students are basic problems that have been solved tens of thousands of times and can be found all over GitHub.
Assignments where you have to write algorithms like bubble sort or binary search are as easy as typing the function signature and then having Copilot fill in the rest (see the sketch below).
Therefore, using Copilot as a student will make you worse at programming, since you are robbed of the fundamental thinking skills that come from solving these problems.
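To make that concrete, here is roughly what such an assignment amounts to, sketched in TypeScript: in practice, the student types the first line and Copilot completes the rest of this well-worn pattern.

```typescript
// Classic iterative binary search over a sorted array; returns the index
// of `target`, or -1 if it is absent.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = lo + ((hi - lo) >> 1); // floor of the midpoint
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // -> 3
console.log(binarySearch([1, 3, 5, 7, 9], 4)); // -> -1
```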
It's been frustrating to request a simple feature or tool and see interviewees spend hours fighting with an LLM to make it do what they want instead of just trying to do it themselves first and correctly picking the spots to use the AI.
Personally, I don't think the aspects of writing code that AI tools help with the most are the important parts. I think AI tools are great at taking out the rote aspects and the glue code so that programmers can concentrate on the core issues and broader structure.
"Its going to make you reliant on the [compiler], and you will never be able to do anything that the [compiler] cant already do."
LLMs are here to stay, even if they don't write perfect code -- they are clearly very useful to an increasing number of existing developers and, more importantly, bringing new developers into the art of software creation.
Copilot saves me a lot of time. It frees me up to think more critically about the code I’m writing.
That being said, I’m interested to see what happens to LLM effectiveness over time as LLM-generated code starts infecting the training data.
The very, very short version is that it lets programmers move faster and try more things, which helps them learn quicker.
The rate at which I learn new libraries, frameworks and languages has accelerated dramatically over the past two years as I learned to effectively use Copilot and other LLM tools.
I have 20+ years of experience already, so a reasonable question to ask is if that effect is limited to experienced developers.
I can't speak for developers with less experience than myself. My hunch is that they can benefit too, if they deliberately use these tools to help and accelerate their learning instead of just outsourcing to them. But you'd have to ask them, not me.
Making systematic mistakes is a character trait; nothing can fix that except the person themselves.
It's just another abstraction layer, like high-level languages were. Responsible use combined with continuous learning can boost productivity without sacrificing knowledge.
The impact differs between experienced devs and beginners. As these tools evolve, we'll likely develop new meta-skills around AI collaboration. Like any tool, it's about how we use it.
in reality, Copilot and other LLMs are just tools, and like any tool, they can be used well or poorly. If a programmer is relying on Copilot to do all their thinking for them, then yeah, they're probably not going to learn much. But if they're using it as a starting point to explore new ideas and learn from the code it generates, then that's a different story.
And let's not forget that AI-assisted coding is not a replacement for human judgment and critical thinking. If a programmer is not reviewing and understanding the code they're writing, then that's a problem with their workflow, not with the tool they're using.
I'd love to see some actual data on how Copilot is being used in the wild, rather than just anecdotal evidence and hand-wringing about the 'dangers' of AI-assisted coding. Until then, I'll remain skeptical of this article's claims.
In 10 years' time, we will have a generation of coders the majority of whom have never coded anything without the help of some LLM. If coding is even still a thing when that time comes.
Some posters here are living in a Luddite delusion if they think the "AI hype" will somehow just blow over and people will go back to sifting through Stack Exchange, or simply reading the man pages for something.
Sorry, that's just not going to happen. In the past two years we already have junior devs that are dependent on LLMs to work efficiently.
These posts remind me of how older folks (25-30 years ago) warned about search engines, and how they'd make the youngins lazy and uncritical.