Are Devs Becoming Lazy? The Rise of AI and the Decline of Care
The rise of AI tools like GitHub Copilot enhances productivity but raises concerns about developer complacency and skill decline, emphasizing the need for critical evaluation and ongoing skill maintenance.
The rise of AI tools like GitHub Copilot is transforming software development, leading to concerns about developers becoming complacent. While these tools enhance productivity by generating code suggestions, they also pose risks, as studies indicate that a significant portion of AI-generated code contains security vulnerabilities. This reliance on AI can foster a habit of accepting code without critical evaluation, undermining the craftsmanship that once characterized coding. Developers may prioritize convenience over understanding, leading to a decline in essential skills such as security awareness and debugging. To mitigate these risks, it is crucial for developers to treat AI suggestions as drafts that require thorough review, maintain their core skills, invest in security training, and use complementary tools for security analysis. Embracing AI should not equate to laziness; rather, it should be seen as an opportunity to enhance skills while remaining vigilant about potential pitfalls. Ultimately, developers must ensure they remain engaged and proactive in their work, rather than passively relying on AI.
- The use of AI tools like GitHub Copilot raises concerns about developer complacency and skill decline.
- A significant percentage of AI-generated code contains security vulnerabilities, which can lead to serious issues.
- Developers should review AI-generated code critically and not rely on it blindly.
- Maintaining core programming skills and investing in security training are essential in an AI-driven environment.
- AI should be viewed as a tool to assist, not replace, the critical thinking and craftsmanship of developers.
Related
Why Copilot Is Making Programmers Worse at Programming
AI-driven coding tools like Copilot may enhance productivity but risk eroding fundamental programming skills, fostering dependency, reducing learning opportunities, isolating developers, and creating a false sense of expertise.
Devs gaining little (if anything) from AI coding assistants
A study by Uplevel found that AI coding assistants like GitHub Copilot do not significantly boost developer productivity, may increase bugs, and lead to more time spent reviewing code rather than writing it.
Researchers seeing little evidence of benefit from copilots
A study by Uplevel found that AI coding assistants like GitHub Copilot do not significantly improve developer productivity and may increase bugs, with mixed results across different companies.
Using AI Generated Code Will Make You a Bad Programmer
Relying on AI-generated code can hinder personal growth and skill retention in programming, leading to dependency, legal ambiguities, and potential disrespect in the community, while emphasizing coding as an art form.
Using AI Generated Code Will Make You a Bad Programmer
Relying on AI-generated code can hinder personal growth and skill retention in programming, leading to dependency, legal ambiguities, and potential disrespect in the community, while emphasizing coding as an art form.
you've lost me here.
Caring and attention to quality have always been in short supply. Or supply _and_ demand when I think of some of the startups I've worked for.
There's a name for seeing the past through rose-colored glasses, isn't there?
"In the old days" developers had various degrees of skill and care, as they do "in the new days".
Coding, just like woodworking, is about creating products and solutions. Craftsmanship, precision, and knowing your tools are perhaps how you make better software: more elegant, easier to maintain, etc.
Not everyone needs a rocking chair that can hold up for 40 years, sometimes an upside down bucket works just fine.
Not only can I already corroborate this experience in my workplace; even in my personal projects I can feel the effect.
The most striking example: I was learning a new language (Fennel) and wanted to translate some existing code (Lua) as an initial exercise. It got the first few files fine, and I did feel like I double-checked them to make sure I understood, but only when it failed to make a conversion that worked properly did I realize I hadn't really been learning and still had no idea how to fix the code. I just had to write it by hand to really get it into my head, and that one hand-written file gave me more insight than ten AI-translated ones.
Oh, and it looked better, and had better design too, because the AI just took the style of the existing file and transposed it, whereas I was able to rethink the other files using the strengths of the new language instead.
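A tiny illustration of that transposition-vs-rethinking gap, sketched in TypeScript rather than Fennel since the commenter's actual code isn't shown (the function names are invented):

```typescript
// Mechanical transposition: preserves the imperative shape of the source file,
// the way an AI translation tends to.
function doubleEvensTransposed(xs: number[]): number[] {
  const result: number[] = [];
  for (let i = 0; i < xs.length; i++) {
    if (xs[i] % 2 === 0) {
      result.push(xs[i] * 2);
    }
  }
  return result;
}

// Rethought for the target language: leans on its native idioms instead.
function doubleEvensRethought(xs: number[]): number[] {
  return xs.filter((x) => x % 2 === 0).map((x) => x * 2);
}
```

Both produce the same output; the difference is only visible to someone who has internalized the target language's idioms, which is exactly the learning the hand-written translation forced.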
Which is fine - developing the feeling of what to say and when takes time and experience. And sometimes it can help, after all - or spark a useful learning opportunity about why a particular llm recommendation looks useful but isn't. Though it takes a bit of energy to walk the line of teaching and encouraging growth without dampening enthusiasm, and spending that energy is its own opportunity cost.
But it does feel a bit different than before - I don't recall seeing as many less-helpful "here's what a Stack Overflow post said" messages in threads years ago. It did and does happen, but I think it is much more common with LLMs.
Thankfully, that is just a case of someone actively trying to not be lazy; trying to help, using the resources at hand. Decades ago that would have meant looking in some old docs or textbooks or specs, years ago that would have been googling, and now it's something else.
I think the accessibility and availability of answers from LLMs for such "quick research" situations is the culprit, rather than any particular decline in developer behaviors themselves. At least, in this case. I'm sure I would have seen the same rate of "trying to help but getting in the way" posting years ago if Stack Overflow or Google had had a way of conjuring answers to questions that simply didn't exist in their corpus.
I think the "in the old days" sentiment from the author somewhat divides things into a before/after AI situation, but IMO it has been a gradual gradient as new tooling and information sources have become available. AI is another one of those, and a big jump, but it doesn't feel like a novel change in "laziness" yet.
Though, there's been some big pushes towards relying more and more on various AI code reviewers and fuzzers in my area. I feel like that is an area where the author's concerns will come more and more into play - laziness at the boundary layers, at the places of oversight; essentially, laziness by senior devs and leadership.
The primary danger here is a substitution effect in which developers who would previously have thought carefully about a given bit of code no longer do, because the AI takes care of it for them. Both anecdotally and in my own experience, developers can still discern between situations where they are the expert, in which case it is more efficient to rely on their own code than AI suggestions because it lowers the chances of error, and situations where they are operating outside their expertise or simply writing boilerplate, in which case AI is the optimal choice.
I challenge anyone to produce a well-documented study in which the average quality of a large codebase declines as the correlates of AI usage rise. Until I see that, I will continue to read "decline of care" posts as "kids these days."
AI is really good at CSS and nesting React components, but it fails at proper state management most of the time. The issue is that it lacks mental models of the application state and cannot really reconstruct them just by glancing at a bit of context.
But I do love AI generated React components + CSS. Saves me a lot of time.
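A minimal sketch of the failure mode this comment describes, assuming React's useState (the component and handler names are invented): two updates that close over the same stale value only increment once, a bug that requires a mental model of React's state batching to spot.

```typescript
import { useState } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  // Bug pattern an assistant may plausibly emit: both calls close over the
  // same `count`, so after batching this increments by one, not two.
  const incrementTwiceBuggy = () => {
    setCount(count + 1);
    setCount(count + 1);
  };

  // Fix: functional updates always receive the latest state.
  const incrementTwice = () => {
    setCount((c) => c + 1);
    setCount((c) => c + 1);
  };

  return <button onClick={incrementTwice}>{count}</button>;
}
```

The markup is trivial for an assistant to generate; noticing that the first handler is off by one is the part that needs the mental model.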
If you're on the outside looking in, it's easy to get the impression that AI is eating the industry, but the truth is much more nuanced. Yes, devs use AI tools to help them code, but as other commenters have pointed out, it just breaks down when you're deep in the trenches with large and complex codebases.
If you're starting a greenfield application then AI can most certainly help cut down on the repetitive tasks but most code out there is not greenfield code and implements lots of business logic for which AI is next to useless unless it's trained on that specific problem.
Where I personally see AI eating up tech is at the lower end, with many folks getting into tech increasingly relying on AI to help them. I feel like long term this will only exacerbate the issue with the JR -> SR pipeline. Senior folks will only be more and more sought after, and it will be hard for many juniors who grew up on AI assistants to make that jump.
E.g., the average age of a Linux kernel maintainer is now rising into, what, the 50s?
There are too many factors to judge this correctly, but my feeling is that a combination of lack of work ethic, depression, and media BS has led people to believe that they don't need to be competent in what they're doing, for one reason or another.
That couldn't be further from the truth. Actually implementing AI requires you to understand and troubleshoot hard problems anywhere in the stack, from the emotions of user experience to the scale of electricity.
> Coding used to be about craftsmanship, precision, and knowing your tools inside and out.
It still is. Assistance from LLM tools helps me know my tools inside out better.
Also, “lazy” in coding often means you’ve automated something and made it efficient. So I don’t view lazy as bad.
Less careful is a concern. Not everyone is great at reviewing code. However, we'll be using AI for code reviews and security audits soon (obviously some are already). I suspect code quality will improve with AI use in many domains.
things that I wouldn't usually know, or would be too lazy, or wouldn't have the energy, time, or permission to code myself
So really the problem is a lack of code review... but I seem to recall that AI is decent at code review too. It won't spot state-machine bugs, but SQL injections? No problem.
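For instance, a minimal sketch of the sort of thing an AI reviewer reliably flags, using node-postgres's query API (the handler is hypothetical):

```typescript
import { Client } from "pg";

async function findUser(client: Client, name: string) {
  // Vulnerable pattern a reviewer should flag: user input spliced straight
  // into the SQL string, e.g.
  //   client.query(`SELECT * FROM users WHERE name = '${name}'`)

  // Parameterized query: the driver handles escaping, closing the hole.
  return client.query("SELECT * FROM users WHERE name = $1", [name]);
}
```

Injections like this are pattern-matchable from a single function; a state-machine bug needs the surrounding invariants, which is the distinction the comment is drawing.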
To the point that I've seen people here argue against striving for quality workmanship in favour of efficiency.
But like any other trade, some of us are craftsmen who really care about our work.
That this comes at the cost of "understanding" needs supporting evidence. Most devs I know only know 1 or 2 programming languages and their own special silo corner of the tech stack.
You're not paid to be not lazy or to learn in the most fulfilling way possible. You're paid to ship software that works.
They vote up each other's projects, downvote and ban opposition.
These people are now attracted to the new "AI" tools, which are a further asset in their arsenal.
Folks, if you're smart, keep your AI usage secret and use it to reclaim time for yourself by completing your objectives early. The only reward for a job well done is more work.
Look at the election; this is not a country of intelligent people.
If you think the upcoming programmers aren't outsourcing all their work to ChatGPT, I've got a bridge to sell you.
Yes and no. It depends. People are different. Good devs always care because they have totally different values than those who don't care. Humans 101. What else is new?
See I can also write opinionated garbage.