Programming with ChatGPT
Henrik Warne finds ChatGPT enhances his programming productivity with tailored code snippets, emphasizing the need for testing. He prefers it over GitHub Copilot but is skeptical about LLMs replacing programmers.
Using ChatGPT for programming has significantly enhanced Henrik Warne's productivity. He finds it beneficial to receive tailored code snippets rather than searching through examples on platforms like Stack Overflow. Warne emphasizes the importance of testing the generated code to ensure it meets his needs and to understand its functionality for troubleshooting purposes. He shares a specific instance where he successfully used ChatGPT to write a Python program for downloading files from a Google bucket, overcoming authentication issues through iterative queries. While he acknowledges that the generated code may not always be perfect, he appreciates the efficiency it brings to his workflow.
Warne expresses skepticism about the notion that LLMs will completely replace programmers, citing challenges in specifying system behavior and understanding code. He has been a paying user of ChatGPT, finding it a worthwhile investment for the productivity gains it offers. Additionally, he prefers ChatGPT over other tools like GitHub Copilot for coding tasks and often uses it for shell command queries. However, he has been disappointed with its performance in generating and summarizing text. Overall, Warne views ChatGPT as a valuable tool that enhances programming efficiency while still requiring developer oversight.
- ChatGPT enhances coding productivity by providing tailored code snippets.
- Testing generated code is crucial for understanding and troubleshooting.
- Warne is skeptical about LLMs fully replacing programmers.
- He finds ChatGPT more effective than other coding tools like GitHub Copilot.
- The tool is less effective for generating and summarizing text.
Related
The Death of the Junior Developer – Steve Yegge
The blog discusses AI models like ChatGPT impacting junior developers in law, writing, editing, and programming. Senior professionals benefit from AI assistants like GPT-4o, Gemini, and Claude 3 Opus, enhancing efficiency and productivity in Chat Oriented Programming (CHOP).
Can ChatGPT do data science?
A study led by Bhavya Chopra at Microsoft, with contributions from Ananya Singha and Sumit Gulwani, explored ChatGPT's challenges in data science tasks. Strategies included prompting techniques and leveraging domain expertise for better interactions.
Self hosting a Copilot replacement: my personal experience
The author shares their experience self-hosting a GitHub Copilot replacement using local Large Language Models (LLMs). Results varied, with none matching Copilot's speed and accuracy. Despite challenges, the author plans to continue using Copilot.
Where Are Large Language Models for Code Generation on GitHub?
The study examines Large Language Models like ChatGPT and Copilot in GitHub projects, noting their limited use in smaller projects, short code snippets, and minimal modifications compared to human-written code.
GitHub Copilot – Lessons
Siddharth discusses GitHub Copilot's strengths in pair programming and learning new languages, but notes its limitations with complex tasks, verbosity, and potential impact on problem-solving skills among new programmers.
1) Boring stuff like JSON schema/JSON example modification and validation
2) Rubber ducky
3) Using this system prompt to walk me through areas in which I have no experience [0]
You are a very helpful code-writing assistant. When the user asks you for a solution to a long, complex problem, first, you will provide a plan with a numbered list of steps, each with the sub-items to complete. Then, you will ask the user if they understand and if the steps are satisfactory. If the user responds positively, you will then provide the specific code for step one. Next, you will ask the user if they are satisfied and understand. If the user responds positively, you will then proceed to step two. Continue the process until the entire plan is completed.
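The "boring" JSON work in item 1 can often be done with nothing but the standard library. A minimal sketch of that kind of validation, with an invented SCHEMA and keys purely for illustration (real projects would more likely reach for the jsonschema package):

```python
# Hypothetical example: check a JSON document against a hand-rolled schema.
# SCHEMA and its keys are invented for illustration.
import json

SCHEMA = {"name": str, "port": int, "tags": list}

def validate(doc: str) -> list[str]:
    """Return a list of problems; an empty list means the document passes."""
    data = json.loads(doc)
    problems = []
    for key, expected in SCHEMA.items():
        if key not in data:
            problems.append(f"missing key: {key}")
        elif not isinstance(data[key], expected):
            problems.append(f"{key}: expected {expected.__name__}")
    return problems

print(validate('{"name": "api", "port": 8080, "tags": []}'))  # []
print(validate('{"name": "api", "port": "8080"}'))
```

The payoff of handing this kind of task to ChatGPT is less the logic, which is trivial, than not having to type out the boilerplate by hand.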
I recently finally used the OpenAI API in a project, using gpt-4o to analyze news story sentiment. The ease of use and quality of output is impressive.[0] I should add that I have been using "presets" in the LibreChat GUI so that I have many system prompts easily available. It's kind of like Custom GPTs. Also, using LibreChat for work feels better, as I believe OpenAI states that they do not train on data provided via the API.
Then equally interesting would be to see how the powers that be maneuver to block this residual income. 2035?
Then after that, perhaps a contest to have an AI acquire resources equivalent to residual income so that it can't be stopped. For example by borrowing for cheap land, installing photovoltaics and a condenser to supply water, then building out a robotic hydroponic garden, carbon collector, mine, smelter, etc, enough to sustain one person off-grid continuously in a scalable and repeatable fashion. 2040?
https://github.com/franzenzenhofer/bulkredirectchecker
no humans touched the code directly
and it's not my most complex one; https://gpt.franzai.com is, but that's closed source
how?
whenever chatgpt runs into a repetitive wall -> start a new chat
use https://github.com/franzenzenhofer/thisismy (also about 90% chatgpt written) command line tool
to fetch all the code (and online docs if necessary) -> deliver a new clean context and formulate the next step you want to achieve
sometimes coding needs 100+ different chats, always with a fresh start, to achieve a goal
remember: chatgpt is not intelligent in the old-fashioned sense, it is a probability machine that's pretty good at mimicking intelligence
once the probability goes astray you need to start anew
but limiting chatgpt to simple coding tasks just means that you are using it wrong
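The fresh-context step described above can be sketched in a few lines. This is not the actual `thisismy` tool, whose interface I'm not assuming; it's just a minimal illustration of gathering the relevant source files plus the next-step instruction into one paste-able context:

```python
# Hedged sketch: build a clean context for a brand-new chat by
# concatenating source files and the next step you want to achieve.
from pathlib import Path

def build_context(src_dir: str, next_step: str, pattern: str = "*.py") -> str:
    """Concatenate matching files under src_dir, then append the next step."""
    parts = ["## Current code"]
    for path in sorted(Path(src_dir).glob(pattern)):
        parts.append(f"### {path.name}\n{path.read_text()}")
    parts.append(f"## Next step\n{next_step}")
    return "\n".join(parts)
```

Pasting the result into a fresh chat gives the model the full current state without any of the accumulated confusion from the previous conversation.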
Prototypes are fun! Obviously production code or serious projects are different. But I've found a new joy in building software since GPT-4 came out - it's more fun than ever to build small ideas.
Instead, I offer another point of view: I don't want to use LLMs for coding because I like coding. Finding a good and elegant solution to a complex problem and then translating it into an executable by way of a precise specification is, to me, much more satisfying than prompt engineering my way around some LLM until it spits out a decent answer. I find doing code reviews to be an extremely draining activity and using an LLM would mean basically doing code reviews all the time.
Maybe that will mean that, at some point, I'll have to quit my profession because programming has been replaced by prompt engineering. I guess I'll find something else to do then.
(That doesn't mean that there aren't individual use cases where I have used ChatGPT - for example for writing simple bash scripts, given that nobody in their right mind really understands bash fully. But that's different from having my entire coding workflow based on an LLM.)
I'm sure if I were a dev who had learnt and worked in the pre-GPT era I'd have no problem using these tools as much as possible, but having started learning in the GPT era I feel conflicted. I make sure I understand each line of generated code whenever I use AI. Despite that, I have a feeling I'm handicapping myself by using these tools. Will it just make me a code reviewer/copy-paster rather than someone who can write something from scratch?
If it is reasonable to use these tools, at what point does it become so? At what point can I consider myself good enough at programming to use them the way the post describes?
Right now I'm purposely restraining myself from using these tools too much, since what I can make with them is much better than what I can make myself. I want to get up to a certain level on my own before I start leaning on these capabilities.
Am I thinking about this the right way? At what point does it make sense to start using these tools more freely without worrying about handicapping my learning?
The thing I've heard the most from other developers, particular those new to the profession, is that you "have to know most of what you're asking already to know if what you get from the LLM is right." You can use the LLM to learn, but for the actual programming they struggle because they don't have the background to understand the responses well enough to continue the implementation.
Also, for the record, C# and .NET, huge enterprise/ecommerce software, so not quite as malleable as bash scripts and what not.
It's also nice to have as something to "bounce ideas" off of, to see if it can think of any other solutions or ways to accomplish a goal.
I've enjoyed finding answers to things, and getting suggestions on how to do them differently from how I was planning to.
I've enjoyed getting answers to questions that Google returned no real matches for.
Can't bring myself to use it to write code for me though, but all of the above leads me to believe it shouldn't be long now until I'm on board too.
Most of the time it generates perfect formatting, which I readily export to markdown with org-mode-html-export.
Showcase of the generated formatting as a screenshot [1].
Almost all my ChatGPT use comes down to writing queries for loading or transforming data. Getting rid of the boilerplate has helped immensely on my productivity.
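As a minimal illustration of the kind of load/transform boilerplate meant here (the table and data are invented for the example, using SQLite purely because it is in the standard library):

```python
# Hypothetical example of boilerplate one might ask ChatGPT to draft:
# load rows into a table, then run an aggregating transform over them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 40.0)],
)
totals = dict(
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
)
print(totals)  # {'north': 160.0, 'south': 80.0}
```

None of this is hard to write, but having the model produce the CREATE/INSERT/SELECT scaffolding from a one-line description is exactly where the productivity gain comes from.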
EDIT: I should note that the vast majority of errors I get from LLM solutions come from code that relies on legacy or dead libraries. Sometimes it mixes old and new libraries in the same code snippet, which will either outright fail or produce some weird results.
Which makes sense, as some answers are the product of being trained on 14-year-old StackOverflow posts, while others are trained on newer material.
https://tools.simonwillison.net/image-resize-quality is a tool for dropping in an image and instantly seeing resized versions of that image at different JPEG qualities, each of which can be downloaded. I used to use the (much better) https://squoosh.app/ for this, but my cut-down version is optimized for my workflow (pick the smallest JPEG version that remains legible). Notes and prompts on how I built that here: https://simonwillison.net/2024/Jul/26/image-resize-and-quali...
django-http-debug - https://github.com/simonw/django-http-debug - is an actual open source Python package I released that was mostly written for me by Claude. It's a webhooks debugger - you can set up a URL and it will log all incoming requests to a database table for you. Notes on how I built that here: https://simonwillison.net/2024/Aug/8/django-http-debug/
datasette-checkbox is a Datasette plugin adding toggle checkboxes to any table with is_ or has_ columns. Animated demo and prompts showing how I built the initial prototype here: https://simonwillison.net/2024/Aug/16/datasette-checkbox/
https://tools.simonwillison.net/gemini-bbox is a tool for trying out Gemini 1.5 Pro's ability to return bounding boxes for items it identifies. You'll need a Gemini API key for this one, or take a look at the demo and notes here: https://simonwillison.net/2024/Aug/26/gemini-bounding-box-vi...
https://tools.simonwillison.net/gemini-chat is a similar tool for trying out different Gemini models (Google released three more yesterday) with a streaming chat interface. Notes on how I built that here: https://tools.simonwillison.net/gemini-chat
I still see some people arguing that LLM-assisted development like this is a waste of time, and they spend more effort correcting mistakes in the code than if they had written it from scratch themselves.
I couldn't disagree more. My development process has always started with prototypes, and the speed at which I can get a proof-of-concept prototype up and running with these tools is quite frankly absurd.