December 6th, 2024

The 70% problem: Hard truths about AI-assisted coding

AI-assisted coding increases developer productivity but does not improve software quality significantly. Experienced developers benefit more, while novices risk creating fragile systems without proper oversight and expertise.

AI-assisted coding has shown a significant productivity boost for developers, yet the quality of software produced has not improved proportionately. This phenomenon, termed the "70% problem," highlights that while AI tools can quickly generate prototypes, they often leave a critical 30% of work that requires human expertise to ensure maintainability and robustness. Developers can be categorized into two groups: "bootstrappers," who use AI to create initial prototypes rapidly, and "iterators," who integrate AI into their daily coding tasks. However, the reliance on AI can lead to issues, especially for less experienced developers, who may accept AI-generated code without the necessary scrutiny, resulting in fragile systems. The article emphasizes that AI tools are more beneficial for seasoned developers who can guide and refine AI outputs, while novices may struggle without foundational knowledge. The future of AI in software development is seen as a collaborative relationship where AI acts as a supportive tool rather than a replacement for human judgment. As AI tools evolve, they may become more autonomous, but the need for human oversight and expertise will remain crucial to ensure high-quality software development.

- AI tools boost productivity but do not significantly enhance software quality.

- Experienced developers benefit more from AI, while juniors may produce fragile code.

- The "70% problem" indicates that AI can generate prototypes but struggles with the final refinements.

- Future AI tools may evolve into more autonomous collaborators, requiring human guidance.

- Maintaining engineering standards is essential for producing robust software despite AI assistance.

76 comments
By @neilwilson - 4 months
Again, one of the few advantages of having been round the sun a few more times than most is that this isn't the first time this has happened.

Packages were supposed to replace programming. They got you 70% of the way there as well.

Same with 4GLs, Visual Coding, CASE tools, even Rails and the rest of the opinionated web tools.

Every generation has to learn “There is no silver bullet”.

Even though Fred Brooks explained why in 1986. There are essential tasks and there are accidental tasks. The tools really only help with the accidental tasks.

AI is a fabulous tool that is way more flexible than previous attempts because I can just talk to it in English and it covers every accidental issue you can imagine. But it can’t do the essential work of complexity management for the same reason it can’t prove an unproven maths problem.

As it stands we still need human brains to do those things.

By @fxtentacle - 4 months
"AI is like having a very eager junior developer on your team"

That's a perfect summary, in my opinion. Both junior devs and AI tools tend to write buggy and overly verbose code. In both cases, you have to carefully review their code before merging, which takes time away from all the senior members of the team. But for a dedicated and loyal coworker, I'm willing to sacrifice some of my productivity to help them grow, because I know they'll help me back in the future. But current AI tools cannot learn from feedback. That means with AI, I'll be reviewing the exact same beginner's mistakes every time.

And that means time spent on proofreading AI output is mostly wasted.

By @foo42 - 4 months
I worry about 2 main pitfalls for junior devs, one more tractable than the other.

Firstly there is the double edged sword of AI when learning. The easy path is to use it as a way to shortcut learning, to get the juice without the pressing, skipping the discomfort of not knowing how to do something. But that's obviously skipping the learning too. The discomfort is necessary. On the flip side, if one uses an llm as a mentor who has all the time in the world for you, you can converse with it to get a deeper understanding, to get feedback, to unearth unknown unknowns etc. So there is an opportunity for the wise and motivated to get accelerated learning if they can avoid the temptation of a crutch.

The less tractable problem is hiring. Why does a company hire junior devs? Because there is a certain proportion of work which doesn't take as much experience and would waste more senior developers' time. If AI takes away the lower-skill tasks previously assigned to juniors, companies will be less inclined to pay for them.

Of course if nobody invests in juniors, where will the mid and senior developers of tomorrow come from? But that's a tragedy of the commons situation, few companies will wish to invest in developers who are likely to move on before they reap the rewards.

By @pcwelder - 4 months
> 3. The "Trust but verify" pattern

To add on to this point, there's a huge role of validation tools in the workflow.

If AI-written Rust code compiles and the test cases pass, it's a huge positive signal for me, because of how strict the Rust compiler is.

One example I can share is

https://github.com/rusiaaman/color-parser-py

which is a Python binding of Rust's csscolorparser, created by Claude without me touching an editor or terminal. I haven't reviewed the code yet; I just ensured that the test cases really passed (on GitHub Actions), installed the package and started using it directly.

By @javaunsafe2019 - 4 months
I already wrote this on another thread but I'll do it again: Copilot failed me on any serious task, be it refactoring a somewhat complex Java method or IaC code. Every time there are hidden quirks and failures that make it easier to just do it myself instead of searching for the needle for minutes… This, combined with the fact that AI is already hitting a wall in terms of scaling, gives a good outlook on what its future seems to be: successful in the far future, when we have quantum computing or the like…
By @blixt - 4 months
I see the same things as Addy, though I'm not 100% sure it's something new happening because of AI assistants. I started learning programming in the late nineties as a 9-year-old sitting at a library paying 10 NOK for an hour of internet access (the librarians were sweet and "forgot" how long I was sitting at the computer because they saw how much I was enjoying it). And I did the exact same thing described in this article: I grabbed whatever code I could that did something, didn't know how to debug it, and at best I could slightly tweak it to do something slightly different. After a few years I got better at it anyway. I started pattern matching, and through curiosity I found out what more senior developers were doing.

Maybe the fact that I was just a kid made this different, but I guess my point is that just because AI can now write you a code file in 10 seconds, doesn't mean your learning process also got faster. It may still take years to become the developer that writes well-structured code and thinks of edge cases and understands everything that is going on.

When I imagine the young people that will sit down to build their own first thing with the help of AI, I'm really excited knowing that they might actually get a lot further a lot faster than I ever could.

By @ChicagoDave - 4 months
This mirrors my own experiences with Claude with one caveat.

GenAI can get deeper into a solution that consists of well known requirements. Like basic web application construction, api development, data storage, and oauth integration. GenAI can get close to 100%.

If you’re trying to build something that’s never been done before or is very complex, GenAI will only get to 50% and any attempt to continue will put you in a frustrating cycle of failure.

I’m having some further success by asking Claude to build a detailed Linear task list and tackling each task separately. To get this to work, I’ve built a file combining script and attaching these files to a Claude project. So one file might be project-client-src-components.txt and it contains all the files in my react nextjs app under that folder in a single file with full file path headers for each file.
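
For what it's worth, such a combining script is only a few lines. A minimal sketch of the kind of script described (Python, with guessed paths and extensions; not the commenter's actual code):

    import os

    def combine(root, out_path, exts=(".ts", ".tsx", ".js", ".jsx", ".css")):
        # Walk `root` and write every matching file into one text file,
        # preceded by a full-file-path header, as the comment describes.
        with open(out_path, "w", encoding="utf-8") as out:
            for dirpath, _dirs, filenames in os.walk(root):
                for name in sorted(filenames):
                    if not name.endswith(exts):
                        continue
                    path = os.path.join(dirpath, name)
                    out.write(f"\n===== {path} =====\n")
                    with open(path, encoding="utf-8") as src:
                        out.write(src.read())

    combine("client/src/components", "project-client-src-components.txt")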

We’ll see how deep I get before it can’t handle the codebase.

By @thiht - 4 months
> While engineers report being dramatically more productive with AI

Where are these people in real life? A few influencers or wannabes say that on Twitter or LinkedIn, but do you know actual people in real life who say they’re "dramatically more productive with AI"?

Everyone I know or talked to about AI has been very critical and rational, and has roughly the same opinion: AIs for coding (Copilot, Cursor, etc.) are useful, but not that much. They’re mostly convenient for some parts of what constitutes coding.

By @raincole - 4 months
I'd like to share a particular case showing the necessity of verifying AI's work.

Yesterday I asked o1-preview (the "best" reasoning AI on the market) how I could safely execute untrusted JavaScript code submitted by the user.

The AI suggested a library called vm2, and gave me a fully working code example. It's so good at programming that the code ran without any modifications from me.

However, then I looked up vm2's repository. It turns out to be an outdated project, abandoned due to security issues. The successor is isolated-vm.

The code the AI gave me is 100% runnable. Had I not googled it, no amount of unit tests could have told me that vm2 is not the correct solution.

By @figassis - 4 months
My problem with AI-assisted coding is that if I use it to scaffold hundreds of lines of code, I then need to review every single line, because bugs can be so subtle. Imagine, for example, Go's for-loop reference footgun. I really don't know if AI can handle these cases, or similar cases that I don't know about. So this is potentially more work than just writing from scratch.
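
For illustration, here is Python's analogue of that footgun (a sketch in Python rather than Go, to match the other examples in this thread): closures capture the loop variable itself, not its value, and the bug is exactly the kind a reviewer can wave through at a glance.

    # Each lambda closes over the variable `i`, not its value per iteration,
    # so every callback sees the final value.
    callbacks = [lambda: i for i in range(3)]
    print([f() for f in callbacks])   # [2, 2, 2] -- not [0, 1, 2]

    # Conventional fix: bind the current value via a default argument.
    callbacks = [lambda i=i: i for i in range(3)]
    print([f() for f in callbacks])   # [0, 1, 2]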

Using it as a smarter autocomplete is where I see a lot of productivity boosts. It replaces snippets, it completes full lines or blocks, and because verifying a block likely takes less time than writing it, you can easily get a 100%+ speed-up.

By @hazrmard - 4 months
I agree with the author. My work involves designing simulations for control. Yesterday, I asked GPT-4o to write a Python-only simulation of an HVAC system (cooling tower and chiller on the water side, and multiple zones with air handling units on the air side).

It wrote functions to separately generate differential equations for water/air side, and finally combined them into a single state vector derivative for integration. Easy peasy, right?

No. On closer inspection, the heat transfer equations had flipped signs, or were using the wrong temperatures. I'd also have preferred to have used structured arrays for vectors, instead of plain lists/arrays.
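
To make that failure mode concrete, here is a toy two-node version of such a model (all parameters hypothetical, not the actual simulation): flip the sign of the temperature difference and the code still runs, silently moving heat from cold to hot.

    from scipy.integrate import solve_ivp

    UA = 500.0           # W/K   zone-to-coil heat transfer coefficient
    C_ZONE = 2.0e6       # J/K   zone thermal capacitance
    C_WATER = 5.0e5      # J/K   water-loop thermal capacitance
    Q_INTERNAL = 3000.0  # W     internal gains heating the zone

    def derivs(t, x):
        T_zone, T_water = x
        q = UA * (T_zone - T_water)  # positive when the zone is warmer
        return [(Q_INTERNAL - q) / C_ZONE,  # zone gains heat, minus removal
                q / C_WATER]                # water loop absorbs that heat

    # Integrate one hour from 300 K (zone) and 285 K (chilled water).
    sol = solve_ivp(derivs, (0.0, 3600.0), [300.0, 285.0])
    print(sol.y[:, -1])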

However, the framework was there. I had to tweak some equations, prompt the LLM to re-write state vector representations, and there it was!

AI-assisted coding is great for getting a skeleton for a project up. You have to add the meat to the bones yourself.

By @jillesvangurp - 4 months
I'm replacing things that I used to delegate to juniors with generated code. Because it's quicker and better. And there's a category of stuff I used to not bother with at all that I'm also taking on. Because I can get it done in a reasonable time frame. It's more fun for me for sure and I definitely am more productive because of it.

My feeling is that this stuff is not bottlenecked on model quality but on UX. Chat is not that great of an interface. Copy-pasting blobs of text back to an editor seems like a bit of monkey work. And monkey work should be automated.

With AI interactions now being able to call functions, what we need is deeper integration with the tools we use. Refactor this, rename that. Move that function here. Etc. There's no need for it to imagine these things perfectly; it just needs to use the tools that make that happen. IDEs have a large API surface, but a machine-readable description of that easily fits in a context window.
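
As a sketch of what that deeper integration could look like: IDE operations exposed as tools in the JSON-schema function-calling convention most chat APIs accept. The tool names and fields here are hypothetical, not any real IDE's API.

    # Hypothetical IDE tools a function-calling model could invoke.
    ide_tools = [
        {
            "type": "function",
            "function": {
                "name": "rename_symbol",
                "description": "Rename a symbol everywhere it is referenced.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "file": {"type": "string"},
                        "old_name": {"type": "string"},
                        "new_name": {"type": "string"},
                    },
                    "required": ["file", "old_name", "new_name"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "move_function",
                "description": "Move a function definition to another file.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "function_name": {"type": "string"},
                        "from_file": {"type": "string"},
                        "to_file": {"type": "string"},
                    },
                    "required": ["function_name", "from_file", "to_file"],
                },
            },
        },
    ]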

Recently ChatGPT added the ability to connect applications. So, I can jump into a chat, connect IntelliJ to the chat and ask it a question about code in my open editor. Works great, and is better than me just copy-pasting that into a chat window. But why can't it make a modification for me? It still requires me to copy text back to the editor and then hope it will work.

Addressing that would be the next logical step. Do it such that I can review what it did and undo any damage. But it could be a huge time saver. And it would also save some tokens, because a lot of the code it generates is just echoing what I already had with only a few lines modified. I want it to modify those lines and not risk hallucinating mistakes into the rest, which is a thing you have to worry about.

The other issue is that iterating on code gets progressively harder as there's more of it and it needs to regenerate more of it at every step. That's a UX problem as well. It stems from the context being an imperfect abstraction of my actual code. Applying a lot of small, simple changes to code would be much easier than re-imagining the entire thing from scratch every time. In most of my conversations, the code under discussion diverges from what I have in my editor. At some point continuing the conversation becomes pointless and I just start a new one with the actual code. Which is tedious, because now I'm dealing with the Groundhog Day of having to explain the same context again. More monkey work. And if you do it wrong, you have to do it over and over again. It's amazing that it works, but also quite tedious.

By @jccalhoun - 4 months
I've done a little bit of JavaScript, but I was doing a hobby project with a Raspberry Pi. That meant learning Python and Linux. ChatGPT was invaluable in completing the project because although I know the very basics, I don't know the libraries. The script ChatGPT provided based on my description included several libraries that are super common and useful but that I had never heard of. So instead of trying to reinvent the wheel, or endlessly googling until I found something that sort of did what I wanted and then looking at that code, I was able to get something that worked. Then I could adjust it and add features.
By @roenxi - 4 months
This article looks like a case of skating to where the puck is. Over the next 2-4 years this will change - the rate of improvement in AI is staggering and these tools are in their infancy.

I would not be confident betting a career on any of those patterns holding. It is like people hand-optimising their assembly back in the day. At some point the compilers get good enough that the skill is a curio rather than an economic edge.

By @demirbey05 - 4 months
> the actual software we use daily doesn’t seem like it’s getting noticeably better.

100% agree. I am testing o1 on some math problems. I asked it to prove that the convolution of two Gaussians is Gaussian. It gave me a 3-page algebraic solution; it is correct, but neither elegant nor good. I have seen more ingenious solutions. These tools are really good at doing something, but not good at doing it like an expert human, as claimed.
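
(For reference, the elegant route is presumably via characteristic functions, where the whole proof takes a line or two:

    \varphi_{\mathcal{N}(\mu,\sigma^2)}(t) = e^{i\mu t - \sigma^2 t^2/2},
    \qquad
    \varphi_{f*g} = \varphi_f \, \varphi_g

    \varphi_1(t)\,\varphi_2(t)
    = e^{i(\mu_1+\mu_2)t - (\sigma_1^2+\sigma_2^2)t^2/2}
    \;\Rightarrow\;
    \mathcal{N}(\mu_1,\sigma_1^2) * \mathcal{N}(\mu_2,\sigma_2^2)
    = \mathcal{N}(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2)

The product of two Gaussian characteristic functions is again of Gaussian form, so the convolution is Gaussian.)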

By @prmph - 4 months
Just tried ScreenshotToCode, Bolt.New, and v0.

ScreenshotToCode wants me to pay for a subscription, even before I have any idea of its capabilities. V0 keeps throwing an error in the generated code, which the AI tries to remedy, without success after 3 tries. Bolt.New redirects to StackBlitz, and after more than an hour, there are still spinners stuck on trying to import and deploy the generated code.

Sounds like snake oil all around. The days of AI-enabled low-/no-code are still quite a while away, I think, if at all feasible.

By @HenriTEL - 4 months
That's far from my experience. Last time I used ChatGPT, it added many comments to explain the generated code and also attached rough explanatory text. You can also just ask for more details about something, or for a list of other approaches, explanations of tradeoffs, etc. You can also ask general questions to help you get started and find a good design.

To me it's about asking the right questions, no matter the level of experience. If you're a junior it will help you ramp up at a speed I could not imagine before.

By @zerop - 4 months
I fear that in trying to go from "manual coding" to "fully automated coding", we might end up in the middle, at "semi-manual coding" assisted by AI, which needs a different software engineering skill set.
By @mediumsmart - 4 months
In my experience the AI provides 170% of the solution and the hard part is getting rid of the shitty 70% it has been trained on.
By @estebarb - 4 months
Something that concerns me the most is how we are going to train new generations. I taught a course at the university and many students just ChatGPT'ed everything, without any critical thinking.

It doesn't matter how many times you show them that it invented assembly instructions or wxWidgets functions; they insist on cheating. I even told them the analogy of going to the gym: you lift with your own strength, you don't use a crane.

And of course, it is evident when you receive students who don't know what a function is, or who cannot complete simple exercises during a written test.

We learned by reading, lots of trial and failure, curiosity, asking around, building a minimal reproducible bug for Stack Overflow... they (the ones that rely only on ChatGPT and not their brain) cannot even formulate a question by themselves.

By @riazrizvi - 4 months
In coding but also generally with deep expertise in other fields too, I find LLMs help only if you deal with them adversarially. You can quickly get its stabs on the current state of topics or opinions about your ideas, but you’ve got to fight it to get to better quality. It tends to give you first the generic crap, then you battle it to get to really interesting insights. Knowing what to fight it on is the key to getting to the good stuff.
By @empiricus - 4 months
I think we underestimate how indefatigable the AI is. It is very hard for a person to keep producing code on demand, again and again and again.
By @ilrwbwrkhv - 4 months
> In other words, they're applying years of hard-won engineering wisdom to shape and constrain the AI's output.

This. Junior devs are f*cked. I don't know how else to say it.

By @zkry - 4 months
As an experiment, at my work I've stopped using all AI tools and gone back to my pre-AI workflows. It was kind of weird and difficult at first, like maybe having to drive without GPS navigation, but I feel like I'm essentially at pre-AI usage speed.

This experiment made me think, maybe most of the benefit from AI comes from this mental workload shift that our minds subconsciously crave. It's not that we achieve astronomical levels of productivity but rather our minds are free from large programming tasks (which may have downstream effects of course).

By @leeoniya - 4 months
i usually say this about all assistive ai, not just coding. you still need a close-to-expert human at the keyboard who can detect hallucinations. a great answer can only be deemed so by someone already very knowledgeable in the technical / deep subject matter.
By @gronky_ - 4 months
I think the same can be said about AI-assisted writing…

I like the ideas presented in the post but it’s too long and highly repetitive.

AI will happily expand a few information dense bullet points into a lengthy essay. But the real work of a strong writer is distilling complex ideas into few words.

By @agentultra - 4 months
I think the real problem is that people are misunderstanding what programming is: understanding problems.

The hard truth is that you will learn nothing if you avoid doing the work yourself.

I'm often re-reading ewd-273 [0] from Dijkstra, The programming task considered as an intellectual challenge. How little distance have we made since that paper was published! His burning question:

> Can we get a better understanding of the nature of the programming task, so that by virtue of this better understanding, programming becomes an order of magnitude easier, so that our ability to compose reliable programs is increased by a similar order of magnitude?

I think the answer AI assistants provide is... no. Instead we're using the "same old methods" Dijkstra disliked so much. We're expected to rely on the Lindy effect and debug the code until we feel more confident that it does what we want it to. And we still struggle to convince ourselves that these programs are correct. We have to content ourselves with testing and hoping that we don't cause too much damage in the AI-assisted programming world.

Not my preferred way to work and practice programming.

As for "democratizing access to programming..." I can't think of a field that is more open to sharing its knowledge and wisdom. I can't think of a field that is more eager to teach its skills to as many people as possible. I can't think of any industry that is more open to accepting people, without accreditation, to take up the work and become critical contributors.

There's no royal road. You have to do the work if you want to build the skill.

I'm not an educator but I suspect that AI isn't helping people learn the practice of programming. Certainly not in the sense that Dijkstra meant it. It may be helping people who aren't interested in learning the skills to develop software on their own... up to a point, 70% perhaps. But that's always been the case with low-code/no-code systems.

[0] https://www.cs.utexas.edu/~EWD/ewd02xx/EWD273.PDF

Update: Added missing link, fixed consistent mis-spelling of one of my favourite researchers' name!

By @cess11 - 4 months
"Get a working prototype in hours or days instead of weeks"

This is nothing new. Algorithmic code generation has been around since forever, and it's robust in a way that "AI" is not. This is what many Java developers do: they have tools that integrate deeply with XML and libraries that consume XML output and create systems from that.

Sure, such tooling is dry and boring rather than absurdly polite and submissive, but if that's your kink, are you sure you want to bring it to work? What does it say about you as a professional?

As for IDE-integrated "assistants" and free-floating LLMs, when I don't get wrong code, they consistently give suggestions that are much, much more complicated than the code I intend to write. If I were to let the ones I've tried write my code, I'd be a huge liability for my team.

I expect the main result of the "AI" boom in software development to be a lot of work for people that are actually fluent, competent developers maintaining, replacing and decommissioning the stuff synthesised by people who aren't.

By @noisy_boy - 4 months
Using AI-assisted coding is like using an exoskeleton to lift things. Makes your life easy, you gradually lose strength because your muscles work less and when it breaks down, you break down too, because you no longer have the strength you used to have.
By @nopurpose - 4 months
Which tool can actually help with coding and refactoring, not just autocomplete? The Copilot plugin for JetBrains IDEs can only suggest source to copy-paste, or at most replace a single snippet I selected.

What I'd like to do is ask "write me a libuv-based event loop processing messages described by the protobuf files in the ./protos directory. Use a 4-byte length prefix as a frame header" and then have it go and update files in the IDE itself, adding them to CMakeLists.txt if needed.
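
The framing part of that request, at least, is easy to pin down precisely. A minimal sketch of 4-byte length-prefix framing, in Python for illustration rather than the C/libuv the request calls for:

    import struct

    def encode_frame(payload: bytes) -> bytes:
        # 4-byte big-endian length prefix, then the payload
        # (e.g. a serialized protobuf message).
        return struct.pack(">I", len(payload)) + payload

    def decode_frames(buf: bytes):
        # Yield complete payloads; a real event loop would keep the
        # trailing partial frame around and resume when more bytes arrive.
        offset = 0
        while offset + 4 <= len(buf):
            (length,) = struct.unpack_from(">I", buf, offset)
            if offset + 4 + length > len(buf):
                break
            yield buf[offset + 4 : offset + 4 + length]
            offset += 4 + length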

That would be an AI assisted coding and we can then discuss its quality, but does it exist? I'd be happy to give it a go.

By @senko - 4 months
Great article, Addy gets to the core of it and explains in a non-biased (pro or con), non-hype way. The examples, patterns and recommendations match what I've seen pretty well.

I've been working on an agentic full-app codegen AI startup for about a year, and used Copilot and other coding assistance tools since it was generally available.

Last year, nobody even thought full-app coding tools possible. Today they're all the rage: I track ~15 full-codegen AI startups (what now seems to be called "agentic coding") and ~10 coding assistants. Of these, around half focus on a specific niche (e.g. resolving GitHub issues, full-stack coding of a few app types, or building the frontend prototype), and half attempt to do full projects.

The paradox that Addy hints at is that senior, knowledgeable developers are much more likely to get value out of both of these categories. For assistants, you need to inspect the output and fix/adapt it. For agentic coders, you need to be able to micromanage or bypass them on issues that block them.

However, more experienced developers are (rightly) wary of new hyped-up tools promising the moon. It's the junior devs, and even non-developers, who drink the kool-aid and embrace this, and then get stuck at 70%, or 90%... and they don't have the knowledge or experience to go past that. It's worse than useless: they've spent their time, money, and possibly reputation (within their teams/orgs) on it, and got nothing out of it.

At the startup I mentioned, virtually all our dev time was spent on trying to move that breaking point from 50%, to 70%, to 90%, to larger projects... but in most cases it was still there. Literally an exponential amount of effort to move the needle. Based on this, I don't think we'll see fully autonomous coding agents capable of doing non-toy projects any time soon. At the same time, the capabilities are rising and the costs are dropping.

IMHO the biggest current limit for agentic coding is the speed (or lack of it) of state-of-the-art models. If you can get 10x speed, you can throw in 10x more reasoning (inference-time compute, to use the modern buzzword) and get 1.5x-2x better results, in terms of quality or the capability to reason about more complex projects.

By @01100011 - 4 months
Most of my time isn't spent coding. It's spent designing, discussing, documenting, and debugging. If AI wrote 90% of my code for me I'd still be busy all day.
By @johann8384 - 4 months
Today an AI tool let me build a new tool from scratch.

I published it to a git repo with unit tests, great coverage, security scanning, and pretty decent documentation of how the tool works.

I estimate just coding the main tool would have been 2 or 3 days and all the other overhead would have been at least another day or two. So I did a week of work in a few hours today. Maybe it did 70%, maybe it did 42.5%, either way it was a massive improvement to the way I used to work.

By @AlienRobot - 4 months
I realized today that the average person types at half my typing speed. I wonder if this factors into people's tendency to use AI? Typing takes too long, so just let the AI do it?

In some ways, I'm not impressed by AI, because I feel much of what AI has achieved could have been done without AI; it's just that putting all of it in a simple textbox is more "sleek" than putting all that functionality in a complex GUI.

By @crnkofe - 4 months
Surely not 70%; something like 5-10% might be a better ballpark figure. Coding or generating with LLMs is just part of the problem, and always the fastest part of software building. All the other things eat a disproportionately larger amount of time: QA, testing, integration testing, writing specs, dealing with outages, dealing with customers, docs, production monitoring, etc. It would be cool if we got AI involved there, though, especially for integration testing.

I really dislike the entire narrative that's been built around the LLMs. Feels like startups are just creating hype to milk as much money out of VCs for as long as they can. They also like to use the classic and proven blockchain hype vocabulary (we're still early etc.).

Also, the constant anthropomorphizing of AI is getting ridiculous. We're not even close to replacing juniors with shitty generated code that might work. Reminds me of how we got "sold" automated shopping terminals: more convenient and faster than standing in line with a person, but now you've got to do all the work yourself. Also, the promise of doing stuff faster is nothing new. Productivity is skyrocketing, but burnout is the hot topic at your average software conference.

By @csbartus - 4 months
This fully resonates with me.

When the AI boom started in 2022, I had already been focused on how to create provably, or likely, correct software on a budget.

Since then, I've figured out how to create correct software fast, on rapid iteration. (https://www.osequi.com/)

Now I can combine productivity and quality into one single framework / method / toolchain ... at least for a niche (React apps)

Do I use AI? Only for pair programming: suggestions for algorithms, suggestions for very small technical details like Typescript polymorphism.

Do I need more AI? Not really ...

My framework automates most parts of the software development process: design (specification and documentation), development, verification. What's left is understanding, aka designing the software architecture, and for that I'm using math, not AI, which gives me provably correct, translatable-to-code models in a deterministic way. None of this will be offered by AI in the foreseeable future.

By @codedokode - 4 months
Out of curiosity, I tried to use a freely available LLM to generate simple Python tests. I provided the code and specified exactly what requirements I wanted to be tested. What I found out is that initially it generates repetitive, non-DRY code, so I have to write prompts for improvement like "use parametrization for these two tests" or "move this copy-pasted code into a function". And it turns out that it is faster to take the initial version of the code and fix it yourself rather than type those prompts and explain what you want done. Worse, the model I was using doesn't even learn anything and will make the same mistakes the next time.
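
The parametrized shape being asked for looks something like this (a sketch; `parse_int` is a stand-in for whatever code was actually under test):

    import pytest

    def parse_int(raw: str) -> int:
        # Stand-in for the real function under test.
        return int(raw.strip())

    # One parametrized test replaces several copy-pasted near-duplicates.
    @pytest.mark.parametrize("raw, expected", [
        ("42", 42),
        ("  7 ", 7),
        ("-3", -3),
    ])
    def test_parse_int(raw, expected):
        assert parse_int(raw) == expected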

But, this was a generic LLM, not a coding assistant. I wonder if they are different and if they remember what you were unhappy with the last time.

Also LLMs seem to be good with languages like Python, and really bad with C and Rust, especially when asked to do something with pointers, ownership, optimization etc.

By @iamflimflam1 - 4 months
I have friends who are building products from scratch using tools like Cursor. It's impressive what someone who is already an expert developer can do. What I don't see (yet) are these tools delivering for non-developers. But this is just a matter of time.

I see a lot of devs who appear to be in a complete state of denial about what is happening. Understandable, but worrying.

By @artificialLimbs - 4 months
I built a full-featured Laravel CRUD app recently with probably 10 tables, auth, users, "beautiful" Tailwind styling, a dark/light mode button, 5 different tabs, a history function, email functions. 99% AI-generated code. I almost didn't even look at the code, just ran some tests and made sure I had the functionality and no unexpected "normal" bugs like min/max/0. Took me 15-20 hours with Windsurf. Windsail. Waveshark. Whatever that nice VSCode AI editor skin is called (completely forgettable name btw). It's completely blowing my mind that this is even possible. There were of course some frustrating moments and back-and-forth (why did you change the layout of the list? I just wanted a filter…), but overall phenomenal. Deploying it shortly, because if it dies, so what, this was a free job for a farm stand anyway. =)
By @thenoblesunfish - 4 months
Thinking that AI assistants are going to make programmers better, as opposed to just faster, is like thinking hiring a paralegal is going to make you a better lawyer, or hiring a nanny is going to make you a better parent, etc. It's helpful, but in terms of offloading some things you could do yourself.
By @crakhamster01 - 4 months
This tracks with my experience as a more "senior" dev using Copilot/Cursor. I can definitely see these tools being less useful, or more misleading, for someone just starting out in the field.

One worry I have is what will happen to my own skills over time with these tools integrated into my workflow. I do think there's a lot of value in going through the loop of struggling with -> developing a better understanding of technologies. While it's possible to maintain this loop with coding assistants, they're undoubtedly optimized towards providing quick answers/results.

I'm able to accomplish a lot more with these coding assistants now, but it makes me wonder what growth I'm missing out on by not always having to do it the "hard" way.

By @osigurdson - 4 months
I can't imagine anything more awful than using AI as a non-coder to build an application.
By @stuaxo - 4 months
That's not a 70% problem, that's a case of "after the first 90% is done, you only have the other 90% to do".

These people were never at 70% in the first place.

The article also misses experts using this to accelerate themselves at things they are not expert in.

By @leebriskcyrano - 4 months
The 70% framing suggests that these systems are asymptotically approaching some human "100%," but the theoretical ceiling for AI capabilities is much higher.

> the future isn't about AI replacing developers - it's about AI becoming an increasingly capable collaborator that can take initiative while still respecting human guidance and expertise.

I believe we will see humans transition to a purely ceremonial role for regulatory/liability reasons. Airplanes fly themselves with autopilot, but we still insist on putting humans at the yoke because everyone feels more comfortable with the arrangement.

By @miftassirri - 4 months
This feels like skipping the tutorial in a game
By @4dregress - 4 months
As a Jetbrains AI user I think it’s great.

I don't ever use the code completion functionality; in fact, it can be a bit annoying. However, asking it questions is the new Google search.

Over the last couple of years I’ve noticed that the quality of answers you get from googling has steeply declined, with most results now being terrible ad filled blog spam.

Asking the AI assistant the same query yields so much better answers and gives you the opportunity to delve deeper into said answer if you want to.

No more asking on stack overflow and having to wait for the inevitable snarky response.

It’s the best money I’ve spent on software in years. I feel like Picard asking the computer questions

By @lexandstuff - 4 months
Good article. A very relevant paper worth reading is Programming as Theory Building: https://pages.cs.wisc.edu/~remzi/Naur.pdf.

Programming is not just about producing a program; it's about developing a mental model of the problem domain and how all the components interact. You don't get that when Claude is writing all your code, so unless the LLM is flawless (which it likely never will be on novel problems), you won't understand the problem well enough to know how to fix things when they go wrong.

By @factsaresacred - 4 months
The clue is the name of the tools: "co-pilot".

Assistants that work best in the hands of someone who already knows what they're doing, removing tedium and providing an additional layer of quality assurance.

Pilot's still needed to get the plane in the air.

But even if the output from these tools is perfect, coding isn't only (or even mainly) about writing code, it's about building complex systems and finding workable solutions through problems that sometimes look like cul de sacs.

Once your codebase reaches a few thousand lines, LLMs struggle to see the big picture and begin introducing one new problem for every one they solve.

By @DesiLurker - 4 months
I mostly use it to get past the drudge-work. Often I have mental blocks in doing super mundane things that "just need to be done". AI is good at those super-defined and self-contained problems ATM, and that's okay for me. Anything that requires deep expertise or a knowledge base, it falls flat on, IMO. It could change if the AI gets its own sandbox to try things and learn by experimentation. IDK, that has its own implications, but it's one way to improve its understanding of a system.
By @0xDEAFBEAD - 4 months
>I've seen this firsthand:

>Error messages that make no sense to normal users

>Edge cases that crash the application

>Confusing UI states that never got cleaned up

>Accessibility completely overlooked

>Performance issues on slower devices

>These aren't just P2 bugs - they're the difference between software people tolerate and software people love.

I wonder if we'll see something like the video game crash of 1983. Market saturation with shoddy games/software, followed by stigmatization: no one is willing to try out new apps anymore, because so many suck.

By @enum - 4 months
Academic studies are finding the same thing. Although there are a handful of beginners who are great at prompting, when you study beginning programmers at scale, you find that they mostly struggle to write prompts and understand why things go wrong. Here is one of several example studies:

https://dl.acm.org/doi/full/10.1145/3613904.3642706

By @AIorNot - 4 months
I absolutely think this is a spot-on analysis, btw, and it ties well into my own experience with LLM-based coding.

However, one difference between these tools and previous human-developed technologies is that these tools offer direct intelligence, delivered via the cloud to your environment.

That is unprecedented. It's rather like the first time we started piping energy through wires. Sure, it was clunky then, but give it time. LLMs are just the first phase of this new era.

By @mg - 4 months
Currently, this is my favorite test prompt for AI coding tools:

    Make a simple HTML page which
    uses the VideoEncoder API to
    create a video that the user
    can download.
So far, not a single AI has managed to create a working solution.

I don't know why. The AIs seem to have an understanding of the VideoEncoder API, so it seems it's not a problem of not having the info they need. But none comes up with something that works.

By @ongytenes - 4 months
I'm afraid this is going to be like what happened with calculators. Before electronic calculators, kids at least learned to do basic math mentally. Now we have whole generations incapable of making change without a calculator. I met a graduate who claimed 32 ÷ 2 was too difficult because 32 is too big to work with mentally. I believe code-development AI is going to lead to a whole generation of mediocre coders.
By @m3kw9 - 4 months
There is a problem with AI coding where you want to let it write as much as possible, but when it hits a wall and just keeps looping back to the same error, you have to roll up your sleeves and get dirty.

As AI is able to write more complex code, the skill of the engineer must increase in order to go in when necessary and diagnose the code it wrote. If you can't, your app is stuck at the level of the AI.

By @h1fra - 4 months
The scary part is that in a few years, senior programmers who started their career with AI won't have the capacity to keep the AI in check.
By @mediumsmart - 4 months
I agree with the article. I am in the "learning with the AI" camp, and the main task is getting young foolish Einsteiborg to use simple things and go step by step, without jabbering about the next steps or their alternatives, etc. I also have to go in blocks to get a usable whole, and git branch saves the day every time. But it's also really nice, and you learn so much.
By @shireboy - 4 months
I think this is a pretty clear-eyed view. I've been a developer for 25 years. I use Copilot every day now exactly the way he describes. I get it to do XYZ, then use it to refactor what it just did or clean it up myself. Every now and then it goes sideways, but on the whole it saves me time and helps me focus on business problems more than ceremony.
By @Frederation - 4 months
100% agreed. Much of my time involved with AI during the course of work is spent delegating for better output and reevaluating everything it does. 70% sounds right. And I have even told the AI: don't act like an over-eager 14-year-old with ADHD. Which I was/am still, myself. XD
By @mnky9800n - 4 months
I find that the irritating and iterative process of watching the AI fail over and over helps me understand the problem I'm trying to solve. But it also led me to believe that none of the promises of Altman et al. are connected to reality.
By @globalise83 - 4 months
"the actual software we use daily doesn’t seem like it’s getting noticeably better"

Honestly, this seems like a straw man. The kind of distributed productivity tools like Miro, Figma, Stackblitz, etc. that we all use day-to-day are both impressive in terms of what they do, but even more impressive in terms of how they work. Having been a remote worker 15 years ago, the difference in what is available today is light-years ahead of what was available back then.

By @tippytippytango - 4 months
You break through the 70% barrier by writing detailed spec and architecture documents. Also tests that define behavior. Those go in every request as you build. Don’t ask an LLM to read your mind.
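
A minimal sketch of what "tests that define behavior" can mean in practice (the function and spec here are hypothetical, purely to illustrate the pattern of pinning down the spec instead of hoping the model guesses it):

    import pytest

    def apply_discount(price: float, percent: float) -> float:
        # Hypothetical function named by the spec documents.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be in [0, 100]")
        return round(price * (1 - percent / 100), 2)

    # Behavior-defining tests that travel with every request to the model.
    def test_discount_behavior():
        assert apply_discount(100.0, 25) == 75.0
        assert apply_discount(19.99, 0) == 19.99   # zero discount is a no-op
        with pytest.raises(ValueError):            # out-of-range fails loudly
            apply_discount(10.0, 150)
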
By @deadbabe - 4 months
If engineers have this problem, then people with no engineering skill at all who just want to build some app will be hopeless. The days of no longer needing to hire engineers will never come.
By @mdavid626 - 4 months
Well, it turns out there is no free lunch. One will not get far without understanding the code and creating a mental model of it.
By @2snakes - 4 months
One thing LLMs can do besides generate code is explain complex code. So that is inherently an upskilling feature.
By @est - 4 months
I always treat various AI tools like an intern. You have to oversee it.
By @nbittich - 4 months
The article itself sounds like it was written by ChatGPT :D
By @mrjin - 4 months
70%, REALLY? My personal experience was that at least 50% of the time, Copilot backfires; sometimes the proposed code was beyond ridiculous. Thus I had to disable it.
By @remoquete - 4 months
Pretty much everything the author describes applies to technical writing, too. Thinking AI can replace senior writers is delusional.
By @commandlinefan - 4 months
An axiom that was true even before pervasive AI: If you're using the computer to do something you don't have time to do yourself, that's good. If you're using the computer to do something you don't understand, that's bad.
By @goap98 - 4 months
not sure
By @joshdavham - 4 months
> While engineers report being dramatically more productive with AI, the actual software we use daily doesn’t seem like it’s getting noticeably better.

I would disagree with this. There are many web apps and desktop apps that I’ve been using for years (some open source) and they’ve mostly all gotten noticeably better. I believe this is because the developers can iterate faster with AI.