July 28th, 2024

Up to 90% of my code is now generated by AI

A senior full-stack developer discusses the transformative impact of generative AI on programming, emphasizing the importance of creativity, continuous learning, and responsible integration of AI tools in coding practices.

The author, a senior full-stack developer, discusses the significant impact of generative artificial intelligence (GenAI) on programming, noting that up to 90% of their code is now AI-generated. The author has used tools like GitHub Copilot and Tabnine since 2021 to enhance coding efficiency, and their interest deepened after the release of ChatGPT. They acknowledge the limitations of current large language models (LLMs), including restricted reasoning and outdated knowledge, but emphasize the importance of leveraging these tools creatively. The author believes that creativity in programming stems from understanding and experience rather than solely from the capabilities of AI. They highlight the necessity of integrating LLMs into their workflow, customizing prompts, and adopting advanced tools like Cursor and Aider to optimize coding tasks. The author encourages continuous learning and adaptation, asserting that the value derived from LLMs is closely tied to how developers interact with them. They advocate a proactive approach to exploring new technologies and retaining responsibility for one's own work rather than relying entirely on AI. The article concludes with the author's commitment to using AI as a primary resource for coding, which has shifted their focus from merely writing code to shaping software solutions.

16 comments
By @notfried - 4 months
I don't understand the influx of these kinds of posts. I use ChatGPT and Claude daily, but I wouldn't say 90, 80, or even 50%. Not because I don't want it to be, but because it just can't.

LLMs are perfect for those at the beginner level with a language, for rather simple code that isn't very business-specific, for solving or implementing tidbits that are isolated from the larger surface area of a product, for utility functions that do something well and simply defined, or for the boilerplate of almost anything.

However, most of the time spent in programming is never spent on this stuff. It might constitute the largest percentage of the lines of code written, but 90% of the time is spent on the other 10% of lines.

Give it a CSS problem like centering an object within a somewhat complex hierarchy, and it will go the rounds suggesting almost every solution that can be tried, only to loop back with the same exact confidence. I'd say, in certain cases, LLMs can be a time drain if you don't hit the brakes.

By @yumraj - 4 months
I have a lot of software engineering experience, and I'm working on something for which I decided to use Rails, despite zero prior experience in either Ruby or Rails. I've been using Claude for help.

Here’s my personal experience: it’s been great at helping me understand things and convert stuff, which has both helped me learn Rails and let me make progress that would have been hard otherwise. It did a much better job of explaining than the Rails documentation, which I found lacking.

For example, I gave it large Go structs and it generated rails generate commands that produced the schema and XML serialization code. There was a little back and forth regarding foreign key relationships, but "we" were able to figure it out.
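
To give a rough idea, here's a stand-in for the kind of exchange this was; the struct below is invented for illustration, not my actual code:

    // Illustrative only: an invented Go struct of the kind I pasted in.
    package example

    import "time"

    type Invoice struct {
        ID         int64     `xml:"id"`
        CustomerID int64     `xml:"customer_id"` // the foreign key "we" went back and forth on
        Total      float64   `xml:"total"`
        IssuedAt   time.Time `xml:"issued_at"`
    }

    // Claude turned structs like this into generator invocations along the lines of:
    //
    //     rails generate model Invoice customer:references total:decimal issued_at:datetime
    //
    // (customer:references is what expresses the foreign key; Rails adds the
    // id column and the timestamps on its own.)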

I was even able to ask it for an opinion on some table design, asking it to play the role of an experienced DBA, and it did great.

In short, it’s great if you know what you want to do at a granular level, especially for new stuff. But if I didn’t know what I know, I don’t think it would have worked.

Think of it like a calculator: it can calculate what I tell it to calculate faster than I can, but that’s it. But that in itself is huge.

By @poikroequ - 4 months
This entire article lacks substance. It just feels like I'm reading a lot of vague nonsense.

> I have a habit of reaching for AI as the primary source of information, and I'm using Perplexity, Google, or StackOverflow less and less frequently.

In my experience, LLMs simplify and overgeneralize too much, lacking much of the context and insight found on websites like Stack Overflow. I've been doing a lot of database work recently, something I'm not an expert in, and I've learned a lot by reading the actual source, not just blindly trusting the output of the AI. If I trusted AI as much as the author seems to, my database code would be much worse.

I look forward to the day when AI actually is good enough to generate 90% of my code. But as of today, it's just not.

By @_tom_ - 4 months
I'm interested but skeptical. I have not dived as deeply as you. Mostly I'm using ChatGPT. There is just no way it could generate 90% of my code. It is great at generating boilerplate for simple cases. I find it most useful for getting started on things I know nothing about, like SVG, which I was working with recently. I have to say ChatGPT was helpful, but not, in the long run, useful. Too many errors, and its ability to refine answers is terrible. Too many attempted corrections are met with cheerful fixes that have the same bugs.

Is anyone else actually getting good results for code generation using LLMs?

By @xg15 - 4 months
I'd like to see some of that code.
By @synicalx - 4 months
I don't think AI could generate 90% or even 40% of my code, but by god does it generate me a lot of boilerplate and comments. My work just gave us all Copilot, and it's really good at creating useful comments (even for Pydocs) and writing out simple but mildly tedious things like the outline of a loop, simple functions, etc. No real risk from either, since they're all short enough for me to sanity-check as the AI is writing them, but a very, very useful little time saver IMO.
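
For instance (an invented snippet, in Go just for illustration, not code from my actual job), the doc comment and the loop body here are exactly the mildly tedious parts it fills in:

    package example

    // SumPositive returns the sum of the strictly positive values in nums.
    // (Copilot-style boilerplate: the comment and the loop are what it writes.)
    func SumPositive(nums []int) int {
        total := 0
        for _, n := range nums {
            if n > 0 {
                total += n
            }
        }
        return total
    }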
By @SkyPuncher - 4 months
I think about 90% of my code is “generated”.

Thing is, that’s only the actual, written code. There’s still a bunch of hard work that goes into figuring out what to generate and verifying that it’s correct.

By @deterministic - 4 months
More than 90% of my C++ code is generated by a code generator from declarative specs. No need for AI. And I trust the output.
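
To make the idea concrete, here's a toy sketch in Go (the real generator and spec format are in-house, so everything below is invented just to show the shape of it): a declarative spec goes in, C++ struct definitions come out.

    package main

    import (
        "fmt"
        "strings"
    )

    // A "spec" entry is just "fieldName cppType"; real specs are much richer.
    var spec = map[string][]string{
        "Order": {"id int64_t", "total double", "status std::string"},
    }

    func main() {
        for name, fields := range spec {
            var b strings.Builder
            fmt.Fprintf(&b, "struct %s {\n", name)
            for _, f := range fields {
                parts := strings.SplitN(f, " ", 2)
                fmt.Fprintf(&b, "    %s %s;\n", parts[1], parts[0])
            }
            b.WriteString("};\n")
            fmt.Print(b.String())
        }
    }

The output is exactly determined by the spec, which is why I trust it.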
By @JSDevOps - 4 months
Up to... "Up to." If you are writing a bash wrapper to move some files or something, yeah... great.
By @t-writescode - 4 months
Do we have any court cases or anything else out there that have decided whether or not it's safe for developers to trust general/common output coming from LLMs? I would probably be more efficient using various AI systems to write my code, but I'm afraid of a lawsuit around licensing.

Microsoft and Jetbrains are both introducing this tooling into their IDEs with Copilot and AI Assistant, but I still worry (I'm a naturally over-cautious person).

edit: to be clear, I'll ask it questions, just like everyone and their dog; but any sort of direct line/code completion, or "write me a method in Java that will do X, Y, and Z" followed by copy-pasting that 10+ line thing directly, is not something I do.

By @falcolas - 4 months
I've personally found it to be akin to an exceptionally technical junior developer fresh out of college. It can generate some really good niche code, but it can also generate the exact same code that's right above it.

And so you need to check every single line it creates, even when doing the most mundane tasks. Useful, and probably the source of 90% of my work in the "rough draft" stage, but I also have to read and grok all of that 90%, and fix the 80% that's just barely (or very blatantly) not right for the final draft.

By @throwaway290 - 4 months
The point is basically

> just do your own thing, but explore paths you've never walked before.

Yeah, that's how people grow, no joke. It's kind of independent of the rest of the article shilling LLMs.

By @recursivedoubts - 4 months
Up to 90% of my code is generated by Jetbrains software.