AI-assisted coding will change software engineering: hard truths
AI-assisted coding is widely adopted among developers, enhancing productivity but still requiring human expertise. Experienced engineers benefit more than beginners, who face challenges in completing projects and understanding AI-generated code.
AI-assisted coding is transforming software engineering, but its impact is nuanced. Since the release of ChatGPT in late 2022, large language models (LLMs) have gained traction, with about 75% of developers using AI tools for coding tasks. However, the media often exaggerates the potential for AI to replace software engineers, overlooking the tools' limitations and the ongoing need for human expertise. Addy Osmani, a seasoned software engineer, highlights two primary usage patterns: "bootstrappers," who leverage AI for rapid prototyping, and "iterators," who integrate AI into their daily coding workflows. While AI can accelerate development, it often leads to a "70% problem": initial progress is easy, but completing the final 30% requires significant engineering knowledge. This creates a knowledge paradox, in which experienced developers benefit more from AI tools than beginners, who may struggle to understand and debug AI-generated code. Osmani suggests practical patterns for effective AI use, such as generating initial drafts, maintaining constant communication with the AI, and verifying its outputs. He emphasizes that while AI can enhance productivity, it should be viewed as a tool for learning and prototyping rather than a complete replacement for traditional coding skills. The future of software engineering will likely see continued integration of AI, but the demand for skilled engineers remains crucial.
- AI tools are widely adopted but have limitations that require human expertise.
- Experienced developers benefit more from AI than beginners, who may struggle with AI-generated code.
- The "70% problem" highlights the challenge of completing software projects with AI assistance.
- Practical patterns for using AI include generating drafts, maintaining constant communication, and verifying outputs (see the sketch after this list).
- AI should be seen as a tool for learning and prototyping, not a replacement for coding skills.
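To make those patterns concrete, here is a minimal sketch of a draft-and-verify loop. Everything in it is an assumption for illustration: generate_draft is a hypothetical stand-in for whatever model client you use, and the pytest invocation presumes your tests live in tests/.

```python
import subprocess

def generate_draft(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    raise NotImplementedError("wire up your model client here")

def draft_then_verify(prompt: str, max_rounds: int = 3) -> str | None:
    """Generate a draft, run the tests, and feed any failures back to the model."""
    for _ in range(max_rounds):
        code = generate_draft(prompt)
        with open("candidate.py", "w") as f:
            f.write(code)
        # Verify the output instead of trusting it.
        result = subprocess.run(
            ["python", "-m", "pytest", "tests/", "-q"],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return code  # tests pass; a human still reviews before merging
        # "Constant communication": include the failures in the next prompt.
        prompt += "\n\nThe previous attempt failed these tests:\n" + result.stdout
    return None  # past this point, the remaining 30% is yours to write
```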
Related
Ask HN: Will AI make us unemployed?
The author highlights reliance on AI tools like ChatGPT and GitHub Copilot, noting a 30% efficiency boost and concerns about potential job loss due to AI's increasing coding capabilities.
Up to 90% of my code is now generated by AI
A senior full-stack developer discusses the transformative impact of generative AI on programming, emphasizing the importance of creativity, continuous learning, and responsible integration of AI tools in coding practices.
Are Devs Becoming Lazy? The Rise of AI and the Decline of Care
The rise of AI tools like GitHub Copilot enhances productivity but raises concerns about developer complacency and skill decline, emphasizing the need for critical evaluation and ongoing skill maintenance.
The 70% problem: Hard truths about AI-assisted coding
AI-assisted coding increases developer productivity but does not improve software quality significantly. Experienced developers benefit more, while novices risk creating fragile systems without proper oversight and expertise.
Generative AI is not going to build your engineering team for you
Generative AI cannot replace junior engineers in software development, as it lacks the ability to manage complex systems. The industry must invest in training to ensure sustainable growth.
Oh no. We will have to endure another sort of AI slop infesting the web? It's bad enough as it is. Most smaller web sites are already broken in tiny ways. Who hasn't had to break out the browser debugger just to get past some web site's broken order page?
Sloppy reviews, images, chat bots, and phishing are everywhere now. In this brave new world, where someone with no computer experience tinkering at home can produce a beautiful-looking web site that's broken in thousands of ways, we are going to be overrun with this crap. And they are going to be harvesting login email addresses and passwords.
It's going to be a rough decade.
But I think the opening of the article is important: it asks why products aren't getting better [0]. We all feel this, right? There's so much low-hanging fruit that could make our lives less frustrating but never gets picked because it isn't flashy. Apple, you've got all that AI but you can't use a regex to merge calendars? Google, you can't allow a task to be created in the past (extra helpful when it repeats)? Wikipedia still uses the m. address, and you land on the mobile site from desktop unless you manually remove it? I could go on and on, but I think we're just in the wrong headspace.
[0] imo products are getting worse, but that decline started before GPT
It's a great tool and saves a great deal of time, but I have yet to get beyond generating snippets I have to vet, typically finding a made-up library API call or a misunderstanding of my natural-language prompt.
I find it hard to pare down these LLM-evangelizing articles into takeaways that improve my day-to-day.
But that's not the problem we need to solve. All our programming languages are verbose and stupid. It takes too much effort to solve the problems we do in them.
But lately I have cut down their usage and gone back to writing stuff by hand.
Because it is so easy to apply the changes the AI suggests, there's a subtle shift over time in the code base towards a non-optimal architecture.
And it becomes really hard to get out of.
The place where I still use AI quite a lot is autocomplete, but of the intelligent kind: if I am returning a different string for each enum value, all of that gets autocompleted really fast.
So line-completion models, like what JetBrains provides for free, strike the right balance, I think. Supermaven also works well.
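For what it's worth, the repetitive enum-to-string code that line completion fills in almost for free looks something like this (a minimal Python sketch; Status and its labels are invented for illustration):

```python
from enum import Enum

class Status(Enum):
    PENDING = 1
    ACTIVE = 2
    SUSPENDED = 3
    CLOSED = 4

def status_label(status: Status) -> str:
    # After the first branch is typed, a line-completion model
    # reliably suggests the remaining ones.
    match status:
        case Status.PENDING:
            return "Pending approval"
        case Status.ACTIVE:
            return "Active"
        case Status.SUSPENDED:
            return "Temporarily suspended"
        case Status.CLOSED:
            return "Closed"
```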
It's incredible that people still haven't figured out, or won't accept and plan for, technology continuing to improve, especially given how obvious the rapid improvement in this area has been.
Having said that, the article seems to accurately reflect what it's like to use the current tools.
But how could anyone reasonably expect the situation to be similar 3-5 years down the line?
If they just didn't frame it as a prediction then it would make sense.
How very naive. All productivity and efficiency gains will be utilized to push out an ever-increasing stream of new features because that is what drives sales and that is what the business needs.
It is the exact same reason why widening a highway does not actually reduce traffic congestion.
Also, there have been multiple times where it just forgets a closing parenthesis in a function, or tells me a function definition doesn't exist in the code even though it's literally right there.
Namely, a lot of predictions were made around NFTs that just didn’t make sense or were kind of dumb. My pet favorite was this notion that in the future you could bring your NFTs with you to different games and the like. You could buy a Batman NFT costume and have your guy wear it while playing metaverse World of Warcraft. They basically took Ready Player One and ran with it. Besides the fact that this is much harder to do than they could imagine, it’s also kind of a goofy idea.
I feel the same way with predictions made around AI agents. My pet favorite is the notion that we stop using the internet and delegate everything to our agents. Planning a trip? Let an AI agent handle things for you. Shopping? Likewise, let an agent handle your purchases. In the future ads won’t even be targeted at people, they’ll target agents instead, and pretty soon agents won’t even browse the internet but talk to other agents instead.
Is it feasible? I can’t say. I’m more interested in how goofy it all sounds. The notion that you no longer have buyer preferences while your agent gets served ads, or the notion of planning that trip to Rome or whatever and just entrusting the agent with the itinerary as if it won’t come up with unoriginal suggestions.
Work agents make more sense in general, but the sentiment remains.
From personal experience, writing real-life production code is like running a marathon; it requires endurance and rigor. AI-generated code is more like a treadmill run: fine for practice only. Unpredictable issues and hallucinations pop up all the time with AI code, and I have to rely on my own skills to navigate and solve problems.
https://www.sciencedirect.com/science/article/abs/pii/000510...
- The more tasks you automate, the less practice people get doing those tasks themselves and developing the experience to execute them.
- Yet, experience becomes more important as issues/exceptions occur (which they will)
- Ironically, when people are needed the most they are least prepared to step in because automation has taken over their day-to-day.
Net result might be a reduced supply of 'experience' while demand remains strong, thus increasing its price.
The article mostly talks about how AI tools can help with new things, but a large amount of software development is brownfield, not greenfield.
What I am interested in, as a person teaching a computing course, is the best way to force people to understand and interact with the code coming from the LLM. When I give computing problems to students, it is often easy to put the problem into ChatGPT and get an answer. In a very significant fraction of cases the code will be somewhat sensible and will pass the tests. In some cases the output will use the wrong approach or fail the tests, but not often enough to completely discourage cheating.
In the end this comes down to the question of what skills we want from people writing code with the help of an LLM, and how to test for those skills. (Here I'm not talking about professional programmers, but rather scientists.)
Either,
AI will enhance the work of software engineering on a fundamental level, helping SWE projects to be delivered (more) on time and with high(er) quality (I can't overstate how amazing this would be)
OR
things won't get significantly better, projects still can't reliably be delivered, software quality doesn't get better, etc. (the robots won't be taking our jobs)
It will be interesting to see which future we'll end up in.
> this kind of crawling and training is happening, regardless of whether it is ethical or not
Glad we've established that it's going to change our profession regardless of ethics.
> Software engineers are getting closer to finding out if AI really can make them jobless
The capital class is definitely interested in this. They would love to pay fewer of us or pay us less and still get the same results. The question in 2025 might be: why would I pay you if you're not using GenAI assistants? Bob over there accepts a lower salary and puts out more code than anyone else on this team! They may not care what the answer is: profit is all that matters.
After all, they clearly don't care about the ethics of training these models, exploiting labor in countries with weak worker protections, soaking up fresh water during local droughts, etc. Why would they care about you and your work?
Personally I don't find that generating code is where I do most of my programming work. I spend more of my time thinking and making sure I'm working on the right thing and that I'm building it correctly for the intended purpose. For that I want tools that aid me in my thinking: model checkers, automated theorem provers, and better type systems, etc. I need to talk to people. I don't find reviewing generated code to be especially productive even though it feels like work.
I think code synthesis will be more useful. Being able to generate a working implementation from a precise specification in a higher-level language will be highly practical. There won't be a need to review the code generated once we trust the kernel since the code would be correct by construction and it can be proven how the generated code ties to the specification.
We can't use GenAI to do the synthesis and replace a kernel as we still haven't solved the "black box" problem of neural nets.
The problem I find with GenAI and programming is that human language is precise enough for communicating with folks but too imprecise for programming.
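A lightweight version of checking an implementation against an executable specification already exists today in property-based testing. Below is a minimal sketch using Python's hypothesis library, with sorted() standing in for a machine-generated implementation. It is far weaker than the proven-correct synthesis described above, but the shape is the same: you review the specification, not the generated code.

```python
from collections import Counter
from hypothesis import given, strategies as st

def generated_sort(xs: list[int]) -> list[int]:
    """Stand-in for a machine-generated implementation under review."""
    return sorted(xs)

@given(st.lists(st.integers()))
def test_meets_spec(xs):
    out = generated_sort(xs)
    # Executable specification: the output is ordered...
    assert all(a <= b for a, b in zip(out, out[1:]))
    # ...and is a permutation of the input.
    assert Counter(out) == Counter(xs)
```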
I suspect that in a few years there could be a gold mine for consulting: fixing AI-generated "house of cards" code.
Hope we're all good with the coming wave of security errors, breaches, and general malfeasance that's coming with the wave of GenAI code. You think software today could be better? The current models have been trained on all the patterns that make it the way it is now. And they will generate more of it. We have to hope that "software engineers" can read enough code, fast enough, and catch those errors before they ship. Should be good times.
Every company has its own little special framework crafted by an AI with its own nuances you need to learn, and your skills will no longer transfer from company to company. Gone will be the days when you can swap out a software engineer for a similar one with the same experience in a framework you use. Every engineer coming in has to start from zero and learn exactly how to work with your special paradigms, DSLs, etc.
Maybe try using cheaper models first, then call the more expensive models to iterate and get through tests or errors.
I haven't really seen anything like this so I imagine it's a lot harder than I'm imagining.
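One way that escalation loop could be sketched: every helper below is injected as a parameter because they are all hypothetical stand-ins for real model clients and a real test harness.

```python
from typing import Callable

def cascade(
    prompt: str,
    cheap: Callable[[str], str],        # e.g. a small, inexpensive model
    expensive: Callable[[str], str],    # e.g. a large frontier model
    run_tests: Callable[[str], tuple[bool, str]],  # returns (passed, errors)
    max_cheap_attempts: int = 2,
) -> str | None:
    """Try the cheap model first; escalate only when it stays stuck."""
    for _ in range(max_cheap_attempts):
        code = cheap(prompt)
        passed, errors = run_tests(code)
        if passed:
            return code
        prompt += "\n\nFix these test failures:\n" + errors
    code = expensive(prompt)  # cheap model is stuck; spend the money
    passed, _ = run_tests(code)
    return code if passed else None
```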
What I would consider a game changer would be generating USEFUL unit and integration tests, ideally ones that use the existing fixtures and utilities already in place. I've yet to see that happen, even with code the LLM had just generated.
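For concreteness, here is the shape of test meant above, written against a small self-contained sqlite fixture; in a real project, a useful generated test would reuse the project's existing fixtures instead of defining its own.

```python
import sqlite3
import pytest

# The kind of fixture most projects already have in place.
@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()

def test_insert_and_fetch_user(db):
    # Exercises real behavior through the shared fixture,
    # rather than re-mocking the world inside the test.
    db.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
    row = db.execute("SELECT name FROM users").fetchone()
    assert row == ("ada",)
```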
People run their code through it, and it finds a LOT of problems, some of them serious.
And then you fix them and you feel good.
But after that, you're left with a bunch of problems that aren't real. You either ignore them, or the tool starts creating exponentially more work.
I suspect AI will be like that. It will help you a bit, but don't get caught up in it because you'll spend time distracted doing AI things.
1) Analogy: Using ChatGPT to write code is like deciding to cross the Amazon. You start moving, and halfway through you realize the map is wrong. Now you are in the middle of the Amazon, without a map.
2) Reliable matte painting / rough work: I sketch, so matte painting is what GenAI reminds me of, quite a bit. It's going to get you halfway… somewhere, faster. You have to get to the end yourself.
It's easier for me to assume that GenAI is going to be mostly correct 70% of the time, and never 100% of the time. Build and use accordingly.
I'm tired of the chatter about the chatter about GenAI at this point.
Makes me think everyone using OpenAI, Anthropic, the Gemini API, Mistral, Copilot, Perplexity, etc. is dumb, or addicted to short-term benefits while totally ignoring the long-term consequences of paying to train your own replacement while agreeing not to compete.
Some of the buckets:
* The builders, who don't care how they get the result.
* The crafters, who care how they get to the result (vim vs. emacs) and enjoy looking at tiny, tiny details deep in the stack.
* The get-it-done people, who use standard IDE tools, stick with them, and keep it a strict 9-5.
...
And there are many types and subtypes of each ^^.
In my opinion, many people have a passion for making computers do cool things. Along the way, some of us have fallen into the mentality of using one particular tool or way of doing things and have gotten stuck in our ways.
I think it's another tool that you must learn to utilize, and utilize in a way that does not atrophy your skills in the long run. I personally use it for learning: it gives me an in on a knowledge topic, which I can then pull on, verifying that the information is correct.