November 8th, 2024

Are Devs Becoming Lazy? The Rise of AI and the Decline of Care

The rise of AI tools like GitHub Copilot enhances productivity but raises concerns about developer complacency and skill decline, emphasizing the need for critical evaluation and ongoing skill maintenance.


The rise of AI tools like GitHub Copilot is transforming software development, leading to concerns about developers becoming complacent. While these tools enhance productivity by generating code suggestions, they also pose risks, as studies indicate that a significant portion of AI-generated code contains security vulnerabilities. This reliance on AI can foster a habit of accepting code without critical evaluation, undermining the craftsmanship that once characterized coding. Developers may prioritize convenience over understanding, leading to a decline in essential skills such as security awareness and debugging. To mitigate these risks, it is crucial for developers to treat AI suggestions as drafts that require thorough review, maintain their core skills, invest in security training, and use complementary tools for security analysis. Embracing AI should not equate to laziness; rather, it should be seen as an opportunity to enhance skills while remaining vigilant about potential pitfalls. Ultimately, developers must ensure they remain engaged and proactive in their work, rather than passively relying on AI.

- The use of AI tools like GitHub Copilot raises concerns about developer complacency and skill decline.

- A significant percentage of AI-generated code contains security vulnerabilities, which can lead to serious issues.

- Developers should review AI-generated code critically and not rely on it blindly (see the sketch after this list).

- Maintaining core programming skills and investing in security training are essential in an AI-driven environment.

- AI should be viewed as a tool to assist, not replace, the critical thinking and craftsmanship of developers.
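
For illustration, here is a minimal sketch of the kind of flaw this advice targets; the Express route and all names are hypothetical, not from the article. An assistant can plausibly suggest the first handler, and only a careful review catches the path traversal:

    // Hypothetical Express file-download handler; all names are illustrative.
    import express from "express";
    import path from "node:path";

    const app = express();
    const DOWNLOAD_DIR = "/srv/downloads";

    // Draft as an assistant might suggest it: raw user input is joined
    // into a filesystem path, so "?file=../../etc/passwd" escapes the
    // download directory (path traversal).
    app.get("/download", (req, res) => {
      res.sendFile(path.join(DOWNLOAD_DIR, String(req.query.file)));
    });

    // Reviewed version: resolve the requested path and reject anything
    // that falls outside the allowed directory before serving it.
    app.get("/download-safe", (req, res) => {
      const requested = path.resolve(DOWNLOAD_DIR, String(req.query.file));
      if (!requested.startsWith(DOWNLOAD_DIR + path.sep)) {
        res.status(400).send("invalid path");
        return;
      }
      res.sendFile(requested);
    });

    app.listen(3000);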

28 comments
By @rsanheim - 4 months
> In the old days, developers had to really know their stuff. Coding wasn’t just a checklist—it was a craft, and every line was written with care.

you've lost me here.

Caring and attention to quality have always been in short supply. Or supply _and_ demand when I think of some of the startups I've worked for.

By @lucianbr - 4 months
> In the old days, developers had to really know their stuff. Coding wasn’t just a checklist—it was a craft, and every line was written with care.

There's a name for seeing the past through rose-colored glasses, isn't there?

"In the old days" developers had various degrees of skill and care, as they do "in the new days".

By @mbesto - 4 months
> Coding used to be about craftsmanship, precision, and knowing your tools inside and out.

Coding, just like woodworking, is about creating products and solutions. Craftsmanship, precision, and knowing your tools are possibly how you make better software: more elegant, easier to maintain, etc.

Not everyone needs a rocking chair that can hold up for 40 years, sometimes an upside down bucket works just fine.

By @MantisShrimp90 - 4 months
This.

Not only can I corroborate this experience already in my workplace, even in my personal projects I can feel the effect.

The most striking example was I was learning a new language (fennel) and wanted to translate some existing code (Lua) as an initial exercise. It got the first few files fine, and I did feel like I double-checked them to make sure I understood, but only when it failed to make a conversion that worked properly did I realize I hadn't really been learning and still had no idea how to fix the code. I just had to write it by hand to really get it in my head, and that one hand-written file gave me more insight than ten AI-translated ones.

Oh, and it looked better, had better design too, because the AI just took the style of the existing file and transposed it in, whereas I was able to rethink the other files using the strengths of the new language instead.

By @shortrounddev2 - 4 months
The quality control practices are not up to the devs; they're up to management. Management wants to cut corners or perform incomplete, automated tests rather than human-driven QA. It's about cutting costs by cutting corners.
By @anonymousab - 4 months
One thing I have noticed with some newer coworkers / fresh grads is that they seem much more willing to copy someone's issue from Slack into GPT and regurgitate a "here's what GPT says", which muddies the waters or is at least a bit unhelpful.

Which is fine - developing the feeling of what to say and when takes time and experience. And sometimes it can help, after all - or spark a useful learning opportunity about why a particular llm recommendation looks useful but isn't. Though it takes a bit of energy to walk the line of teaching and encouraging growth without dampening enthusiasm, and spending that energy is its own opportunity cost.

But it does feel a bit different than before - I don't recall seeing as many less-helpful "here's what a stack overflow post said" messages in threads years ago. It did and does happen, but I think it is much more common with LLMs.

Thankfully, that is just a case of someone actively trying to not be lazy; trying to help, using the resources at hand. Decades ago that would have meant looking in some old docs or textbooks or specs, years ago that would have been googling, and now it's something else.

I think the accessibility and availability of answers from LLMs for such "quick research" situations is the culprit, rather than any particular decline in developer behaviors themselves. At least, in this case. I'm sure I would have seen the same rate of "trying to help but getting in the way" posting years ago had Stack Overflow or Google had a way of conjuring answers that simply didn't exist in their corpus.

I think the "in the old days" sentiment from the author somewhat divides things into a before/after AI situation, but IMO it has been a gradual gradient as new tooling and information sources have become available. AI is another one of those, and a big jump, but it doesn't feel like a novel change in "laziness" yet.

Though, there's been some big pushes towards relying more and more on various AI code reviewers and fuzzers in my area. I feel like that is an area where the author's concerns will come more and more into play - laziness at the boundary layers, at the places of oversight; essentially, laziness by senior devs and leadership.

By @xianshou - 4 months
Thoughtless reliance on AI is a concern, but this post also hearkens back to a halcyon age that never existed. The places where developers use tab-complete now are exactly those where they would have previously copied from Stack Overflow, which suffers from the same issue of convenient but insecure code becoming widely adopted. If anything, LLMs are able to exercise some degree of quality control due to post-training improvements rather than sampling only from the middle of the training distribution, so the average suggestion should be better and less stale than the SO equivalent.

The primary danger here is a substitution effect in which developers who would previously have thought carefully about a given bit of code no longer do, because the AI takes care of it for them. Both anecdotally and in my own experience, developers can still discern between situations where they are the expert, in which case it is more efficient to rely on their own code than AI suggestions because it lowers the chances of error, and situations where they are operating outside their expertise or simply writing boilerplate, in which case AI is the optimal choice.

I challenge anyone to produce a well-documented study in which the average quality of a large codebase declines as the correlates of AI usage rise. Until I see that, I will continue to read "decline of care" posts as "kids these days."

By @bhouston - 4 months
I am super pro-AI writing code, but honestly, I have had so many bugs in AI-generated code once it gets even a little complex. And then you need to understand all of the code anyhow.

AI is really good at CSS and nesting React components, but it fails at proper state management most of the time. The issue is that it lacks mental models of the application state and cannot really reconstruct them just by looking quickly at a bit of context (see the sketch after this comment).

But I do love AI-generated React components + CSS. Saves me a lot of time.
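
For illustration, a minimal sketch of the kind of state-management slip described above; the component is made up, not from the comment. The buggy version compiles and renders, which is exactly why it's easy to accept from a suggestion without noticing:

    // Made-up React component showing a classic stale-closure state bug.
    import { useEffect, useState } from "react";

    function Timer() {
      const [seconds, setSeconds] = useState(0);

      useEffect(() => {
        // Bug: `seconds` is captured once at mount, so this always
        // computes 0 + 1 and the display sticks at 1 forever.
        const id = setInterval(() => setSeconds(seconds + 1), 1000);
        return () => clearInterval(id);
      }, []); // empty deps: the closure never sees fresh state

      return <p>{seconds}s elapsed</p>;
    }

    function TimerFixed() {
      const [seconds, setSeconds] = useState(0);

      useEffect(() => {
        // Fix: the functional updater reads the latest state each tick.
        const id = setInterval(() => setSeconds((s) => s + 1), 1000);
        return () => clearInterval(id);
      }, []);

      return <p>{seconds}s elapsed</p>;
    }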

By @_fat_santa - 4 months
I feel like this article was written by someone who doesn't code every day but instead looks at tech industry trends.

If you're on the outside looking in, it's easy to get the impression that AI is eating the industry, but the truth is much more nuanced. Yes, devs use AI tools to help them code, but as other commenters have pointed out, it just breaks down when you're deep in the trenches with large and complex codebases.

If you're starting a greenfield application then AI can most certainly help cut down on the repetitive tasks but most code out there is not greenfield code and implements lots of business logic for which AI is next to useless unless it's trained on that specific problem.

Where I personally see AI eating up tech is at the lower end, with many folks getting into tech increasingly relying on AI to help them. I feel like long term this will only exacerbate the issue with the JR -> SR pipeline. Senior folks will be more and more sought after, and it will be hard for many juniors who grew up on AI assistants to make that jump.

By @Cheer2171 - 4 months
sed -i 's/Copilot/outdated Stack Overflow answers and random github repos/g'
By @nunez - 4 months
The adoption of AI in software development is absolutely mental to me. We as a collective are literally and willingly forfeiting our core skillset to technology that can only be operated by four huge tech companies and that is known to get things seriously wrong. The I Got Mine mentality that justifies it all is staggering.
By @hazmazlaz - 4 months
Any suggestion that generated code is inferior to human written code because the generated code can contain vulnerabilities is ridiculous on its face. Human written code, despite all of the tools we have to assist with the creation of secure code, is full of vulnerabilities AND the problem is getting worse - not better. I guess I should be grateful because developers writing insecure code is the whole reason I have a career, but let's not fool ourselves into thinking that just because a human wrote the code it's less likely to contain vulnerabilities than generated code.
By @nisten - 4 months
no they're just becoming dumber.

i.e. the age of the average Linux kernel maintainer is rising to, what, the 50s now?

There are too many factors to judge this right, but my feeling is that a combination of lack of work ethic, depression, and media bs has led people to believe that they don't need to be competent in what they're doing because of x reason.

That couldn't be farther from the truth. Actually implementing AI requires you to understand and troubleshoot hard problems anywhere on the stack, from the emotions of user experience to the scale of electricity.

By @simonw - 4 months
I was ready to disagree with this article - most of the first half is a rehashing of a paper about the first release of GitHub Copilot from 2021! - and then I got to the recommendations, and "Always Review AI-Suggested Code" and "Stay Sharp on Core Skills" are both good principles to hold on to when working with this stuff.

> Coding used to be about craftsmanship, precision, and knowing your tools inside and out.

It still is. Assistance from LLM tools helps me know my tools inside out better.

By @dwabyick - 4 months
There’s good points in this article, especially for new engineers who may not understand what the AI is writing.

Also, “lazy” in coding often means you’ve automated something and made it efficient. So I don’t view lazy as bad.

Less careful is a concern. Not everyone is great at reviewing code. However, we'll be using AI for code reviews and security audits soon (obviously some already are). I suspect code quality will improve with AI use in many domains.

By @4b11b4 - 4 months
If anything, I'm using LLMs to teach me all of these security / validation / testing / conventional-commit / semver / squash-merge things that I wouldn't usually know, or would be too lazy to learn, or wouldn't have the energy, time, or permission to code myself.

By @Mathnerd314 - 4 months
> Always Review AI-Suggested Code

So really the problem is a lack of code review... but I seem to recall that AI is decent at code review too. It won't spot state machine bugs, but SQL injections, no problem.
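
For illustration, the kind of shallow-but-serious pattern this refers to; a hypothetical sketch using node-postgres, with a made-up table and functions:

    import { Client } from "pg";

    const client = new Client(); // assume client.connect() runs at startup

    // The pattern a reviewer, human or AI, flags on sight: user input
    // concatenated straight into the SQL string.
    async function findUserUnsafe(name: string) {
      return client.query(`SELECT * FROM users WHERE name = '${name}'`);
    }

    // The mechanical fix: a parameterized query keeps data out of the
    // SQL text entirely.
    async function findUser(name: string) {
      return client.query("SELECT * FROM users WHERE name = $1", [name]);
    }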

By @rkagerer - 4 months
We've always been lazy.

To the point I've seen people here argue against striving for quality workmanship, in favour of efficiency.

But like any other trade, some of us are craftsmen who really care about our work.

By @rongenre - 4 months
Fundamentally a dev owns the code they write - it doesn't matter if it's copy/pasted from search results or filled in via an LLM.
By @8338550bff96 - 4 months
I would rather have lazy devs that actually ship shit to production than slow devs that drag what I could get done in 1 day out for entire sprints because they only communicate with each other for 30 minutes tops per day to unblock each other.

That this comes at the cost of "understanding" needs supporting evidence. Most devs I know only know 1 or 2 programming languages and their own special silo corner of the tech stack.

You're not paid to be not lazy or to learn in the most fulfilling way possible. You're paid to ship software that works.

By @htrp - 4 months
You could've made the same arguments about OOP, linters, and devops tools.
By @LkGah - 4 months
The problem started with GitHub. GitHub is optimal for mediocre people who know how to game the system, flood projects with trivial and useless PRs, give LGTMs to other developers in their friend circles and generally know how to maintain the illusion of progress and useful activity.

They vote up each other's projects, downvote and ban opposition.

These people are now attracted to the new "AI" tools, which are a further asset in their arsenal.

By @warpeggio - 4 months
Capitalism incentivizes the lowest-cost implementation to maximize margin. This isn't surprising through that lens.

Folks, if you're smart, keep your AI usage secret and use it to reclaim time for yourself by completing your objectives early. The only reward for a job well done is more work.

By @StarterPro - 4 months
They are, and the glut of A.I. products will only make it worse.

Look at the election; this is not a country of intelligent people.

If you think the upcoming programmers aren't outsourcing all their work to chatgpt, I got a bridge to sell you.

By @danielovichdk - 4 months
These subjective posts are moronic, and it should be mandatory for any author to state the sources on which they speculate. They are opinionated, unvalidated, and often poorly written thoughts that are never backed up by any evidence at all.

Yes and no. It depends. People are different. Good devs always care because they have totally different values than those who don't care. Humans 101. What else is new?

See I can also write opinionated garbage.