Up to 90% of my code is now generated by AI
A senior full-stack developer discusses the transformative impact of generative AI on programming, emphasizing the importance of creativity, continuous learning, and responsible integration of AI tools in coding practices.
The author, a senior full-stack developer, discusses the significant impact of generative artificial intelligence (GenAI) on programming, noting that up to 90% of their code is now AI-generated. The author has used tools like GitHub Copilot and Tabnine since 2021 to improve coding efficiency, and their interest in AI deepened after the release of ChatGPT. They acknowledge the limitations of current large language models (LLMs), including restricted reasoning and outdated knowledge, but emphasize the importance of leveraging these tools creatively. The author believes that creativity in programming stems from understanding and experience rather than from the capabilities of AI alone. They describe integrating LLMs into their workflow, customizing prompts, and using tools like Cursor and Aider to streamline coding tasks. The author encourages continuous learning and adaptation, asserting that the value derived from LLMs is closely tied to how developers interact with them. They advocate a proactive approach to exploring new technologies and maintaining responsibility for one's own work rather than relying entirely on AI. The article concludes with the author's commitment to using AI as a primary resource for coding, which has shifted their focus from merely writing code to shaping software solutions.
Related
The Death of the Junior Developer – Steve Yegge
The blog discusses AI models like ChatGPT impacting junior developers in law, writing, editing, and programming. Senior professionals benefit from AI assistants like GPT-4o, Gemini, and Claude 3 Opus, enhancing efficiency and productivity in Chat Oriented Programming (CHOP).
How I Use AI
The author shares experiences using AI as a solopreneur, focusing on coding, search, documentation, and writing. They mention tools like GPT-4, Opus 3, Devv.ai, Aider, Exa, and Claude for different tasks. Excited about AI's potential but wary of hype.
Self hosting a Copilot replacement: my personal experience
The author shares their experience self-hosting a GitHub Copilot replacement using local Large Language Models (LLMs). Results varied, with none matching Copilot's speed and accuracy. Despite challenges, the author plans to continue using Copilot.
Ask HN: Am I using AI wrong for code?
The author is concerned about underutilizing AI tools for coding, primarily using Claude for brainstorming and small code snippets, while seeking recommendations for tools that enhance coding productivity and collaboration.
Ask HN: Will AI make us unemployed?
The author highlights reliance on AI tools like ChatGPT and GitHub Copilot, noting a 30% efficiency boost and concerns about potential job loss due to AI's increasing coding capabilities.
LLMs are perfect for beginners in a given language, for rather simple code that is not very business-specific, for solving or implementing tidbits that are isolated from the larger surface area of a product, for writing utility functions that do something well and simply defined (the kind sketched below), or for the boilerplate of almost anything.
However, most of the time spent programming is never spent on this stuff. It may constitute the largest share of the lines of code written, but 90% of the time goes into the other 10% of lines.
Give it a CSS problem like centering an element within a somewhat complex hierarchy, and it will cycle through almost every solution that can be tried, only to loop back to the first with the same exact confidence. I'd say that in certain cases LLMs can be a time drain if you don't hit the brakes.
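To make the "well and simply defined" case concrete, here's a minimal sketch in Go (the function and names are invented for illustration, not taken from the thread) of the kind of small, isolated utility an LLM usually gets right on the first try:

```go
package main

import (
	"fmt"
	"strings"
)

// truncate shortens s to at most max runes, trimming trailing spaces
// and appending an ellipsis when anything was cut off. Small, isolated,
// and fully specified -- exactly the kind of task LLMs handle well.
func truncate(s string, max int) string {
	runes := []rune(s)
	if len(runes) <= max {
		return s
	}
	return strings.TrimSpace(string(runes[:max])) + "…"
}

func main() {
	fmt.Println(truncate("LLMs are great at well-defined utility functions", 24))
}
```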
Here’s my personal experience: it’s been great at helping me understand things and convert stuff, which has helped me both learn Rails and make progress that would otherwise have been hard. It did a much better job of explaining than the Rails documentation, which I found lacking.
For example, I gave it large Go structs and it produced Rails generate commands for the schema, plus XML serialization code (roughly the shape of the sketch below). There was a little back and forth over foreign key relationships, but “we” were able to figure it out.
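As a hedged illustration of that workflow (the struct and its field names are hypothetical, not the commenter's actual code), the input looked something like this:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Invoice stands in for the kind of Go struct the commenter fed the
// model; the fields here are invented for illustration.
type Invoice struct {
	ID         int64   `xml:"id"`
	CustomerID int64   `xml:"customer_id"`
	Total      float64 `xml:"total"`
	Currency   string  `xml:"currency"`
}

func main() {
	// From a struct like this, an LLM can propose a generator call such as
	//   rails generate model Invoice customer:references total:decimal currency:string
	// (customer:references being the foreign-key detail that took some
	// back and forth), along with matching XML serialization code.
	out, err := xml.MarshalIndent(Invoice{ID: 1, CustomerID: 7, Total: 99.50, Currency: "EUR"}, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```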
I was even able to ask its opinion on some table designs: I asked it to play the role of an experienced DBA, and it did great.
In short, it’s great if you know what you want to do at a granular level, especially for new stuff. But if I didn’t know what I know, I don’t think it would have worked.
Think of it like a calculator: it can calculate whatever I tell it to faster than I can, but that’s it. That in itself, though, is huge.
> I have a habit of reaching for AI as the primary source of information, and I'm using Perplexity, Google, or StackOverflow less and less frequently.
In my experience, LLMs simplify and overgeneralize too much, lacking much of the context and insight found on sites like Stack Overflow. I've been doing a lot of database work recently, something I'm not an expert in, and I've learned a lot by actually reading the source rather than blindly trusting the AI's output. If I trusted AI as much as the author seems to, my database code would be much worse.
I look forward to the day when AI actually is good enough to generate 90% of my code. But as of today, it's just not.
Is anyone else actually getting good results for code generation using LLMs?
Thing is, that’s only the actual, written code. There’s still a bunch of hard work that goes into figuring out what to generate and verifying that it’s correct.
Microsoft and JetBrains are both introducing this tooling into their IDEs with Copilot and AI Assistant, but I still worry (I'm a naturally over-cautious person).
edit: to be clear, I'll ask it questions, just like everyone and their dog; but any sort of direct line or code completion, or "write me a method in Java that will do X, Y, and Z" followed by copy-pasting that 10+ line thing directly, is not something I do.
And so you need to check every single line it creates, even when doing the most mundane tasks. Useful, and probably the source of 90% of my work in the "rough draft" stage, but I also have to read and grok all of that 90%, and fix the 80% that's just barely (or very blatantly) not right for the final draft.
> just do your own thing, but explore paths you've never walked before.
Yeah, that's how people grow, no joke. It's kind of independent of the rest of the article shilling LLMs.