Ask HN: Best practices using AI as an experienced web dev
The author, an experienced web developer, reflects on their journey with web technologies and expresses skepticism about AI coding assistants, finding prompt crafting time-consuming and questioning their value for proficient developers.
The author, an experienced web developer, reflects on their journey with foundational web technologies like HTML, CSS, JavaScript, and PHP, while also embracing modern tools such as jQuery, Bootstrap, and Vue 3. They express skepticism towards many contemporary development abstractions, which they believe often serve business interests rather than developer needs. Recently, they have started exploring AI-driven coding assistants like Codeium and Claude within VSCode. Initially, these tools appeared promising, offering the ability to generate code from brief prompts. However, the author finds that crafting precise prompts consumes time that could be better spent writing code directly. They question how much value AI tools really offer proficient developers, and wonder how to use these copilots well without being distracted by unnecessary complexity.
- The author has a strong background in traditional web development languages.
- They are exploring AI coding assistants but find prompt crafting time-consuming.
- There is skepticism about the value of AI tools for experienced developers.
- The author seeks advice on leveraging AI without falling into abstraction traps.
Related
Ask HN: Am I using AI wrong for code?
The author is concerned about underutilizing AI tools for coding, primarily using Claude for brainstorming and small code snippets, while seeking recommendations for tools that enhance coding productivity and collaboration.
Up to 90% of my code is now generated by AI
A senior full-stack developer discusses the transformative impact of generative AI on programming, emphasizing the importance of creativity, continuous learning, and responsible integration of AI tools in coding practices.
Ask HN: How to deal with AI generated sloppy code
The author raises concerns about AI-generated code being overly complex and bloated, complicating debugging and maintenance, and invites the tech community to share their strategies for managing these issues.
Are Devs Becoming Lazy? The Rise of AI and the Decline of Care
The rise of AI tools like GitHub Copilot enhances productivity but raises concerns about developer complacency and skill decline, emphasizing the need for critical evaluation and ongoing skill maintenance.
The Copilot Pause
The "Copilot Pause" reveals developers' mental blocks when relying on AI tools, leading to poor coding practices. Periodic disconnection from AI is encouraged to enhance problem-solving skills and technical mastery.
a) those tend to be boilerplate, and LLMs are great for boilerplate, and
b) code quality doesn't really matter too much, and
c) those tend to be written in languages that you may not be well-versed in, since they usually aren't in the "primary" language of the project
1. I'm developing a utility package that's easily testable and I'm certain of the interface. I'll write the interface for the package in my editor, then ask an LLM to generate unit tests. Then, I'll sketch out the function calls/structure of the package and get an LLM to fill out the rest.
2. I'm bug bashing and want to quickly check if the bug is obvious. I'll feed a description of the behavior into GPT/Claude along with the relevant code (generally as little code as possible to prevent listicle-type responses that are irrelevant).
3. I'm adding code that follows an established pattern within the codebase -- for example, adding a new API handler that involves generating an OpenAPI path + component snippet, an http handler, and a few database methods. This is when copilots are particularly useful.
4. I'd like a sanity check for issues in a complex bit of code I've written.
I find these mirror the tasks you'd typically hand off to a less experienced dev on the team -- things you can validate using knowledge you already have, where validating the output is faster than doing the work yourself.
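The first workflow above can be sketched concretely. Everything here is illustrative: `slugify` is a made-up utility, the implementation is the kind of "sketch the structure, let the LLM fill in the rest" code the commenter describes, and the asserts stand in for LLM-generated unit tests that the developer validates by hand because they already know the interface.

```typescript
// Hand-written interface for a small, easily testable utility package.
// An LLM is then asked to generate unit tests against this contract,
// and afterwards to fill in the implementation sketched below.
export function slugify(input: string): string {
  return input
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into a dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// LLM-generated unit tests, reviewed by hand against the known interface.
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  spaced  out  ") === "spaced-out");
console.assert(slugify("already-slugged") === "already-slugged");
```

The point of writing the interface and tests first is that review becomes cheap: the generated body either satisfies a contract you already understand, or it doesn't.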
For experienced devs who already have a solid understanding of the codebase they're working on, the potential upside of using AI is rather small. But when jumping into a new codebase, AI (with codebase context) can serve as a semantic search tool, or simply speed up the process of understanding the code. And when it suggests code, it can surface patterns and conventions that were never documented. You need to be careful, though, because it can repeat bad code too.
Disclosure: I'm building EasyCode, a context aware coding assistant.
1. As a Google/Stack replacement, asking complex queries in natural language with many follow-ups. It's really good at helping me understand complex topics step by step, at an appropriate level of detail.
2. To help me with the syntax I can never remember (like different combinations of TypeScript types and generics and explaining what it all means). I feed it three or four types and tell it "I need a fifth that inherits X from here, Y from there, and adds Z, which can be blah blah blah..." and it's really good at doing that and then also teaching me the syntax as it goes.
3. To write in-line JSDoc/TSDoc to make my functions clearer (to other devs). At work we have a largely uncommented codebase, and I try to add a bunch of context on anything I end up working on or refactoring.
4. To farm out some specific function, usually some sort of nested reducer that I hate writing manually or cascading entries of Object.entries() with many layers.
5. Ask it higher-level architectural questions about different frameworks or patterns, and treat it as a semi-informed second opinion that I always double-check.
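Point 2 above -- "I need a fifth type that inherits X from here, Y from there, and adds Z" -- can be sketched with TypeScript's built-in utility types. The type names here are invented for illustration; the composition pattern (`Pick`, `Omit`, and intersection) is the standard syntax the commenter is asking the LLM to recall:

```typescript
// Hypothetical existing types in a codebase.
interface User { id: string; name: string; email: string }
interface Audit { createdAt: Date; updatedAt: Date }

// The "fifth type": id/name from User, createdAt from Audit, plus a new
// generic payload field.
type WithPayload<T> =
  Pick<User, "id" | "name"> &   // X: selected fields from User
  Omit<Audit, "updatedAt"> &    // Y: Audit minus updatedAt
  { payload: T };               // Z: the new generic addition

// The compiler enforces exactly the combined shape.
const event: WithPayload<number[]> = {
  id: "u1",
  name: "Ada",
  createdAt: new Date(),
  payload: [1, 2, 3],
};
console.assert(event.payload.length === 3);
```

This is exactly the kind of syntax that is easy to validate once written but annoying to recall cold, which is why handing it to an LLM and reading the result works well.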
Generally speaking, it's really pretty good at most of this. I manually read through and verify everything it produces line-by-line and ask it for corrections when I spot mistakes. It's still a lot faster than, say, trying to code review a true junior dev's work. It's not quite as efficient as talking shop with another experienced dev, but it's rare for me (in my jobs) to have a lot of experienced devs working on the same feature/PR at once anyway, so compared to someone jumping into a branch fresh, ChatGPT is a lot better at picking up the context.
---------
I do NOT:
A) Use an in-IDE AI assistant. Copilot was hit or miss when it came out. It was great at simple things, but introduced subtle flaws in bigger things that I wouldn't always catch until later. It ended up wasting more time than it saved. The Jetbrains AI assistant was even worse. Maybe Claude or Cursor etc are better, I dunno, but I don't really need them. I love Webstorm as it is, without AI, and I can easily alt-tab to ChatGPT to get the answers I need, only when I need them.
B) Use it to write public-facing documentation. While it can be good at this, public-facing stuff demands a level of accuracy that it can't quite deliver yet. Besides, I really enjoy crafting English and don't want a robot to replace that yet :)
Overall, it's a huge time saver for sure. I expect it to fully replace me someday soon, but for now, we're friends and coworkers :)
0 - Try to write the code myself, using LSP hints as needed
1 - Read the primary source (man page, documentation, textbook) to find an answer. Upside is that I learn something about related topics along the way by skimming the table of contents
2 - Consult stack overflow/google. This has become less and less useful, as both of those resources have become flooded with garbage info and misleading blog posts in the last several years.
3 - Pull out the AI copilot and ask it for help, while sharing what I already know and what I think the shape of the solution will be.
4 - Actively seek help - talk to colleagues, post a question on a relevant forum, etc...
Is this perfect? No. In the worst case, I've wasted hours on an answer the copilot was confident in but that turned out to be wrong. But on balance, I'd say it has saved me many days' worth of time in the last year, usually in the form of research and knowledge discovery. It's much faster to test out a bunch of potential solutions when the copilot is writing most of the code and I just tweak a few relevant parameters.
I've been using all the time I've saved on mundane programming to study computer science from first principles. As someone without a CS degree, I am acutely aware of my gaps in theoretical computer science. I consider myself a halfway-decent programmer at this point, so I don't find filling my head with more and more syntax and esoteric framework rules helpful. I'd rather learn the basis for all those rules, and reconstruct them myself as needed.
I also have a lot more confidence to branch out from my moneytree (web dev) and try my hand at other areas of programming, like embedded development, messaging, and language theory. This field is endlessly fascinating, and I selfishly want to learn and try it all. I've spent most of my career in web dev, but I've also been able to test the waters of embedded development for a year, and of interesting "back-end" services for another year or so. I even had the confidence to start my own company with a friend, realizing that I could shoulder most of the development burden early on if I strategically rely on AI to walk me through implementation details I'm not quite an expert in.
This is my strategy for ensuring longevity in this career. I'll admit I'm only on year 8 of programming professionally, but I hope this is the correct attitude to have.