August 25th, 2024

Programming with ChatGPT

Henrik Warne finds ChatGPT enhances his programming productivity with tailored code snippets, emphasizing the need for testing. He prefers it over GitHub Copilot but is skeptical about LLMs replacing programmers.

Using ChatGPT for programming has significantly enhanced Henrik Warne's productivity. He finds it more useful to receive tailored code snippets than to search through examples on platforms like Stack Overflow. Warne emphasizes the importance of testing the generated code, both to ensure it meets his needs and to understand its functionality for troubleshooting. He shares a specific instance where he used ChatGPT to write a Python program for downloading files from a Google Cloud Storage bucket, overcoming authentication issues through iterative queries. While he acknowledges that the generated code may not always be perfect, he appreciates the efficiency it brings to his workflow.

Warne is skeptical of the notion that LLMs will completely replace programmers, citing the challenges of specifying system behavior and understanding code. He has been a paying user of ChatGPT, finding it a worthwhile investment for the productivity gains it offers. He prefers ChatGPT over other tools like GitHub Copilot for coding tasks and often uses it for shell command queries, though he has been disappointed with its performance in generating and summarizing text. Overall, Warne views ChatGPT as a valuable tool that enhances programming efficiency while still requiring developer oversight.

- ChatGPT enhances coding productivity by providing tailored code snippets.

- Testing generated code is crucial for understanding and troubleshooting.

- Warne is skeptical about LLMs fully replacing programmers.

- He finds ChatGPT more effective than other coding tools like GitHub Copilot.

- The tool is less effective for generating and summarizing text.
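Warne doesn't publish the generated program, but a sketch of the kind of bucket-download script he describes might look like this. It assumes the google-cloud-storage package and application-default credentials are configured; the helper names are mine, not his.

```python
import os

def local_path(blob_name, dest_dir):
    # Map an object name like "logs/2024/app.log" to a local file path.
    return os.path.join(dest_dir, *blob_name.split("/"))

def download_bucket(bucket_name, prefix="", dest_dir="."):
    # Requires google-cloud-storage and configured credentials,
    # so the import is deferred until the function is actually called.
    from google.cloud import storage
    client = storage.Client()
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        target = local_path(blob.name, dest_dir)
        os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
        blob.download_to_filename(target)
```

A call like `download_bucket("my-bucket", prefix="exports/")` would mirror the matching objects into the current directory, preserving the bucket's folder structure.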

18 comments
By @consumer451 - 8 months
I am a muggle, but the three biggest use cases for me have been:

1) Boring stuff like JSON schema/JSON example modification and validation

2) Rubber ducky

3) Using this system prompt to walk me through areas in which I have no experience [0]

    You are a very helpful code-writing assistant. When the user asks you for a solution to a long, complex problem, first, you will provide a plan with a numbered list of steps, each with the sub-items to complete. Then, you will ask the user if they understand and if the steps are satisfactory. If the user responds positively, you will then provide the specific code for step one. Next, you will ask the user if they are satisfied and understand. If the user responds positively, you will then proceed to step two. Continue the process until the entire plan is completed.
I recently used the OpenAI API in a project for the first time: gpt-4o to analyze news story sentiment. The ease of use and quality of output is impressive.
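As a sketch of what that kind of sentiment call looks like (the prompt wording and parameters here are my assumptions, not the commenter's actual code):

```python
def sentiment_messages(headline):
    # Build the chat payload: a fixed system instruction plus the headline.
    return [
        {"role": "system",
         "content": "Classify the sentiment of this news headline as "
                    "positive, negative, or neutral. Answer with one word."},
        {"role": "user", "content": headline},
    ]

# The call itself, with the openai>=1.0 client and OPENAI_API_KEY set:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=sentiment_messages("Markets rally after rate cut"))
# label = resp.choices[0].message.content.strip().lower()
```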

[0] I should add that I have been using "presets" in the LibreChat GUI to allow me to have many system prompts easily available. It's kind of like Custom GPTs. Also, using LibreChat for work feels better as I believe that OpenAI states that they do not train on data provided via API.

By @zackmorris - 8 months
I wonder how long it will be before AI can program financial independence. Loosely it would look like a mix between One Red Paperclip, drop shipping, Angi, trading bots, etc. The prompt might be "create a $1000 per month income stream in the next 30 days". Maybe the owner of the first AI to pass this could get a $1 million prize. 2030?

Then equally interesting would be to see how the powers that be maneuver to block this residual income. 2035?

Then after that, perhaps a contest to have an AI acquire resources equivalent to residual income so that it can't be stopped. For example by borrowing for cheap land, installing photovoltaics and a condenser to supply water, then building out a robotic hydroponic garden, carbon collector, mine, smelter, etc, enough to sustain one person off-grid continuously in a scalable and repeatable fashion. 2040?

By @franze - 8 months
here is some code that was 100% chatgpt created

https://github.com/franzenzenhofer/bulkredirectchecker

no humans touched the code directly

and it's not my most complex one; https://gpt.franzai.com is, but that's closed source

how?

whenever chatgpt runs into a repetitive wall -> start a new chat

use https://github.com/franzenzenhofer/thisismy (also about 90% chatgpt written) command line tool

to fetch all the code (and online docs if necessary) -> deliver a new clean context and formulate the next step what you want to achieve

sometimes coding needs 100+ different chats always with a fresh start to achieve a goal

remember: chatgpt is not intelligent in an old-fashioned way, it is a probability machine that's pretty good at mimicking intelligence

once probability goes astray you need to start anew

but limiting chatgpt to simple coding tasks just means that you are using it wrong

By @cryptoz - 8 months
My favorite thing to do with ChatGPT and coding is making fast prototypes. Since you know hallucinations might be a problem, and since ChatGPT struggles with larger contexts and files, just play to its strengths. I have lots of ideas of "small" to medium webapp ideas, and you can often get ChatGPT to write most of the code of a prototype and have it work quite quickly.

Prototypes are fun! Obviously production code or serious projects are different. But I've found a new joy in building software since GPT-4 came out - it's more fun than ever to build small ideas.

By @mehulashah - 8 months
I do believe that programmer productivity is ChatGPT's most valuable proposition to date. In addition to generating code, it makes it much easier for me to chase down obscure error messages, potential root causes, and workarounds. It's definitely not perfect. But as part of the Interact->Generate->Verify cycle through which modern AI has transformed other disciplines (e.g. materials science, protein folding, mathematics, and more), it serves as a valuable component of each of these.

By @Tainnor - 8 months
I have my reservations about the quality of LLM generated code, but since I have neither studied ML in depth, nor compared different LLMs enough, I'll refrain from addressing that side of the debate - except maybe for noting that "I test the code" is not good enough for any serious project because we know that tests (manual or automated) can never prove the absence of bugs.

Instead, I offer another point of view: I don't want to use LLMs for coding because I like coding. Finding a good and elegant solution to a complex problem and then translating it into an executable by way of a precise specification is, to me, much more satisfying than prompt engineering my way around some LLM until it spits out a decent answer. I find doing code reviews to be an extremely draining activity and using an LLM would mean basically doing code reviews all the time.

Maybe that will mean that, at some point, I'll have to quit my profession because programming has been replaced by prompt engineering. I guess I'll find something else to do then.

(That doesn't mean that there aren't individual use cases where I have used ChatGPT - for example for writing simple bash scripts, given that nobody in their right mind really understands bash fully. But that's different from having my entire coding workflow based on an LLM.)

By @Mememaker197 - 8 months
I'm left a bit confused still about using ChatGPT and Claude as someone who's still learning (2nd year CS student) and nowhere near being a professional dev.

I'm sure if I were a dev who had learned and worked in the pre-GPT era I'd have no problem using these tools as much as possible, but having started learning in the GPT era I feel conflicted. I make sure I understand each line of generated code whenever I use AI. Despite that, I have a feeling I'm handicapping myself with these tools. Will they just make me a code reviewer/copy-paster rather than someone who can write something from scratch?

If it is reasonable to use these tools, at what point does it become so? At what point can I consider myself good enough at programming to use them like in the post?

Right now I'm purposely restraining myself from using these tools too much, since what I can make with them is much better than what I can make myself, so as to get up to a certain level before I start relying on these capabilities.

Am I thinking about this the right way? At what point does it make sense to start using these tools more freely without worrying about handicapping my learning?

By @redleggedfrog - 8 months
Honestly, the first article I've seen where the actual usage is explained clearly and matches my own experience. Maybe because I tend to write software the same way.

The thing I've heard the most from other developers, particularly those new to the profession, is that you "have to know most of what you're asking already to know if what you get from the LLM is right." You can use the LLM to learn, but for the actual programming they struggle because they don't have the background to understand the responses well enough to continue the implementation.

Also, for the record, C# and .NET, huge enterprise/ecommerce software, so not quite as malleable as bash scripts and what not.

By @bilsbie - 8 months
My favorite thing is now being able to code fluently in new languages. I'd never used Rust before, but I was able to immediately get something up and running and got a detailed explanation of how it works. With a programming background I can instantly understand most code once I get past the new syntax and idioms.

By @Wheaties466 - 8 months
This is how I have used it so far too: small, boring tasks that I can iterate on top of.

It's also nice to have something to "bounce ideas" off of, to see if it can think of any other solutions or ways to accomplish a goal.

By @onehair - 8 months
I was impressed with ChatGPT in many a situation where it could understand me better than another person on a forum would. It will present me with a solution without my having to be careful about how I'm going to come across.

I've enjoyed finding answers to things, and suggestions on how to do them differently from how I was thinking of doing them.

I've enjoyed receiving answers to questions I had asked Google with no match for what I was asking.

Can't bring myself to use it to code for me yet, but all the above leads me to believe it shouldn't be long now until I'm on board too.

By @emporas - 8 months
Regarding formatting, I mainly use Llama 3 to generate org-mode documentation about functions; I tell it to enclose lines with *Line x-y* and begin_src blocks for the corresponding lines.

Most of the time it generates perfect formatting, which I readily export to markdown with org-mode-html-export.

Showcase of the generated formatting as a screenshot [1].

[1] https://imgur.com/a/97fSWsg

By @SuperHeavy256 - 8 months
A simple and really good review of ChatGPT! Too many people are too negative about it; it's real developers like OP who use it well.

By @amelius - 8 months
Let's try an easier target first: System administration with ChatGPT.
By @surfingdino - 8 months
I recently asked for a Dockerfile that builds an image which passes Docker's vulnerability scan. It gave me a file that failed over a dozen vulnerability tests.
By @TrackerFF - 8 months
Yeah, same use here. I work with data analysis, so 95% of the code I write is to wrangle data, or do something in the ETL-pipeline.

Almost all my ChatGPT use comes down to writing queries for loading or transforming data. Getting rid of the boilerplate has helped immensely on my productivity.

EDIT: I should note that the vast majority of errors I get from LLM solutions tend to come from code that relies on legacy or dead libraries. Sometimes it mixes old and new libraries in the same snippet, which will either fail outright or output some weird results.

Which makes sense, as some answers are the product of training on 14-year-old Stack Overflow posts, while others are trained on newer material.
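For illustration, the load-and-transform boilerplate being described might be as simple as the following, using sqlite3 from the standard library (the schema and the in-memory database are invented for the example; the commenter's actual stack isn't specified):

```python
import sqlite3

# Load a few raw rows, then aggregate per user -- the kind of
# repetitive transform step the commenter hands off to an LLM.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 10.0), ("a", 5.0), ("b", 7.5)])
totals = conn.execute(
    "SELECT user, SUM(amount) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(totals)  # [('a', 15.0), ('b', 7.5)]
```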

By @ahaseeb - 8 months
Is there a good course/guide on how to train engineers who are resisting the change to use AI?
By @simonw - 8 months
I'm increasingly building entire functional prototypes from start to finish using Claude 3.5 Sonnet. It's an amazing productivity boost. Here are a few recent examples:

https://tools.simonwillison.net/image-resize-quality is a tool for dropping in an image and instantly seeing resized versions of that image at different JPEG qualities, each of which can be downloaded. I used to use the (much better) https://squoosh.app/ for this, but my cut-down version is optimized for my workflow (pick the smallest JPEG version that remains legible). Notes and prompts on how I built that here: https://simonwillison.net/2024/Jul/26/image-resize-and-quali...

django-http-debug - https://github.com/simonw/django-http-debug - is an actual open source Python package I released that was mostly written for me by Claude. It's a webhooks debugger - you can set up a URL and it will log all incoming requests to a database table for you. Notes on how I built that here: https://simonwillison.net/2024/Aug/8/django-http-debug/

datasette-checkbox is a Datasette plugin adding toggle checkboxes to any table with is_ or has_ columns. Animated demo and prompts showing how I built the initial prototype here: https://simonwillison.net/2024/Aug/16/datasette-checkbox/

https://tools.simonwillison.net/gemini-bbox is a tool for trying out Gemini 1.5 Pro's ability to return bounding boxes for items it identifies. You'll need a Gemini API key for this one, or take a look at the demo and notes here: https://simonwillison.net/2024/Aug/26/gemini-bounding-box-vi...

https://tools.simonwillison.net/gemini-chat is a similar tool for trying out different Gemini models (Google released three more yesterday) with a streaming chat interface. Notes on how I built that here: https://tools.simonwillison.net/gemini-chat

I still see some people arguing that LLM-assisted development like this is a waste of time, and they spend more effort correcting mistakes in the code than if they had written it from scratch themselves.

I couldn't disagree more. My development process has always started with prototypes, and the speed at which I can get a proof-of-concept prototype up and running with these tools is quite frankly absurd.