Here's how I use LLMs to help me write code
Simon Willison shares insights on using Large Language Models in coding, emphasizing realistic expectations, clear instructions, and iterative testing. He promotes "vibe-coding" for creativity while advocating human oversight.
Simon Willison discusses his experiences using Large Language Models (LLMs) to assist in coding, emphasizing that while some developers find success, others struggle due to unrealistic expectations and a lack of guidance. He stresses that coding with LLMs is not inherently easy and requires understanding their limitations and strengths. Key strategies include setting reasonable expectations, recognizing the importance of context, and treating LLMs as collaborative partners rather than autonomous coders. Willison advises users to provide clear instructions, test the generated code, and engage in iterative conversations to refine outputs. He highlights the significance of the training cut-off date for LLMs, which affects their familiarity with libraries and technologies. Willison also mentions the utility of tools that can execute code, enhancing the coding process. He introduces the concept of "vibe-coding," a relaxed approach to coding that encourages exploration and creativity. Overall, he advocates for using LLMs to augment coding skills rather than replace them, emphasizing the need for human oversight and testing.
- LLMs can enhance coding but require clear instructions and context management.
- Users should set realistic expectations and understand LLM limitations.
- Testing generated code is essential for ensuring functionality.
- Tools that execute code can streamline the coding process.
- "Vibe-coding" encourages a creative and exploratory approach to programming.
Related
Can LLMs write better code if you keep asking them to "write better code"?
The exploration of large language models in coding showed that iterative prompting can improve code quality, but diminishing returns and complexity issues emerged in later iterations, highlighting both potential and limitations.
How I Program with LLMs
The author discusses the positive impact of large language models on programming productivity, highlighting their uses in autocomplete, search, and chat-driven programming, while emphasizing the importance of clear objectives.
Cheating Is All You Need
Steve Yegge discusses the transformative potential of Large Language Models in software engineering, emphasizing their productivity benefits, addressing skepticism, and advocating for their adoption to avoid missed opportunities.
Hallucinations in code are the least dangerous form of LLM mistakes
Hallucinations in code from large language models are less harmful than in prose. Manual testing is essential, and developers should engage with and review LLM-generated code to enhance their skills.
I think the most valuable suggestions from the article that I've found work well for me are:
Context - Provide sufficient context, and a way to keep providing it as the conversation continues. Some tools, such as Cursor or Claude Code, handle this for you.
Testing - You need to be able to quickly test the code it gives you. It may be wrong the first time but right the second; the faster you can validate an attempt, the faster you get to correct code. Even with a retry or two, this is often faster than writing it yourself.
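As a concrete illustration of that fast validation loop, a few inline assertions can check a generated function in seconds. The `slugify` function below is a hypothetical stand-in for something an LLM might produce, not code from the article:

```python
import re

# Hypothetical stand-in for LLM-generated code you want to validate.
def slugify(text: str) -> str:
    """Lowercase the text and collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Quick assertion-based checks: they run in seconds and pinpoint
# exactly where a generated implementation goes wrong, so you can
# paste the failure back into the conversation and iterate.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Leading & trailing  ") == "leading-trailing"
assert slugify("already-slugged") == "already-slugged"
print("all checks passed")
```

If an assertion fails, the traceback itself makes useful context for the next round of the conversation.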
If you're still having trouble, find someone who isn't, and ask if they'll let you watch them code with LLMs!