February 25th, 2025

Vibe Coding and the Future of Software Engineering

Vibe coding, popularized by Andrej Karpathy, raises concerns about code quality and junior developers' skills. While some embrace it, established organizations prefer traditional practices amid increasing AI integration in software development.

Vibe coding, also known as vibeware, has gained traction in the programming community, largely popularized by Andrej Karpathy. This trend involves creating software without traditional coding practices, raising concerns about code quality and comprehension among programmers. While some fear that AI could lead to job losses for senior developers, others, particularly indie hackers and solopreneurs, embrace the potential of vibe coding for rapid development.

The article discusses the mixed reception of vibe coding, noting that established organizations are unlikely to adopt it without rigorous testing and code reviews. The author argues that while AI tools are increasingly used in software development, human oversight remains crucial. The emergence of vibe coding has also sparked discussions about the capabilities of junior developers, with some claiming they lack coding skills. However, this sentiment is not new and has been echoed across generations of programmers.

The future of software engineering may involve more AI-driven processes, such as self-healing software and conversation-driven development, which will require skilled engineers to manage and integrate these technologies. Ultimately, vibe coding represents a shift in how software is developed, necessitating a balance between innovation and quality assurance.

- Vibe coding is a controversial trend in software development, with mixed opinions on its impact.

- Concerns about code quality and the future of junior developers are prevalent among programmers.

- Established organizations are expected to maintain traditional coding practices despite the rise of AI tools.

- The future of software engineering may involve more AI-driven processes and automation.

- Software engineers will need to adapt to new roles in managing AI integration and ensuring code quality.

AI: What people are saying
The discussion around vibe coding reveals a divide in the programming community regarding its effectiveness and implications for software development.
  • Some developers find vibe coding beneficial for quick tasks, allowing them to leverage AI tools efficiently.
  • Others argue that it undermines code quality and the importance of understanding programming fundamentals, especially for larger projects.
  • Concerns are raised about the potential commoditization of software development roles and the risks of relying on AI-generated code.
  • Many emphasize the need for experienced developers to guide AI tools to ensure quality and reliability in production systems.
  • There is a general skepticism about the long-term viability of vibe coding without a solid programming foundation.
21 comments
By @Karrot_Kream - 2 months
If you're writing one-off scripts though, I find vibe coding fantastic. I found myself in a work meeting where I was mostly there to let a junior present some joint work we did and answer any questions the junior couldn't. Since I wasn't really needed (the junior eng was awesome), I was fidgeting and wanted to analyze the results from an API I had access to. A few prompts from Claude and I was hitting the API, fetching results, using numpy to crunch what I needed, and getting matplotlib to get me nice pretty graphs. I know Python and the ecosystem well so it wasn't hard to guide Claude correctly.

I probably got the whole thing done in 5 prompts and still had enough brain space to vaguely follow along the presentation. Before this kind of thing would have taken 20-30 min of heads down coding. This would have been a strictly "after work" project which means I probably wouldn't have done it (my real side projects and family need that time more than this analysis did.) That's the kind of thing that an experienced programmer can get out of vibes coding.
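The workflow described above, fetch from an API, crunch with numpy, graph with matplotlib, might look something like this sketch. The endpoint, the `latency_ms` field, and the chosen stats are hypothetical stand-ins, not the commenter's actual analysis:

```python
import json
import urllib.request

import numpy as np


def summarize(latencies):
    """Crunch raw samples into the few numbers worth graphing."""
    arr = np.asarray(latencies, dtype=float)
    return {
        "mean": float(arr.mean()),
        "p95": float(np.percentile(arr, 95)),
        "max": float(arr.max()),
    }


def fetch_samples(url):
    """Hit the (hypothetical) API and pull out a latency field."""
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return [row["latency_ms"] for row in payload["results"]]


def plot_histogram(latencies, path="latencies.png"):
    """Render a histogram to a file. matplotlib is imported lazily so the
    fetching/crunching half still works without it installed."""
    import matplotlib
    matplotlib.use("Agg")  # off-screen backend, no display needed

    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    ax.hist(latencies, bins=20)
    ax.set_xlabel("latency (ms)")
    ax.set_ylabel("count")
    fig.savefig(path)
```

The point of the anecdote holds either way: the time saved is in the glue, and knowing the ecosystem is what lets you steer the prompts.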

By @antirez - 2 months
As I said in one of my latest YT videos: if you can't code, sure, go all in; with x -> 0 in the denominator, the difference tends to infinity. But if you can code, there are much better ways to use generative AI to aid you in your coding tasks. Ways that will make you faster while you learn more, understand every single line of the code that is in your application, and never let badly written code into your code base. Maybe in the future AI will be able to write much better code than humans, and vibe coding will be the right way, but right now it's like when assembly was a hell of a lot better written by hand than what a compiler could produce.
By @gngoo - 2 months
For me, vibe coding is the only logical way forward. I see that the term gets a lot of flak. But in almost 10 years of writing web applications for a living, this feels even more exciting than when I finally "got it". And it's measurable: I am sitting on 5 completed web apps with traffic just this year, working with 3 clients, and due to my ability to be this productive, I feel like I have a very stable future ahead. I even had my last client seek me out, for writing about coding like this, because he wants to replace his back-end and front-end coders with people who can "AI code" the full stack but aren't new to this. Well, it's sad, but I am likely replacing 3 people on their team. Only time will tell if that was the right decision, but I am not saying no to that.

But then again, I have been doing this for 10 years; that is my edge. Same exact stack (Django + boring frontend). I know the ins and outs of my stack, so quite obviously, every single day I see AI go in a direction that I know is going to produce a huge footgun along the way. I can just see that up ahead, suggest a different approach, and continue. If I were entirely new to this, I would end up building stuff that breaks down after weeks or months of investment, not knowing when things went wrong or how to go forward. Regardless, I feel like my time has come, and I am definitely spending 95% of my time just prompting the AI versus writing actual code. Even for the most minor changes, like changing a CharField to a TextField, I don't even want to open the models.py myself. In Cursor, I am averaging 5000-7000 fast requests per month, because in terms of ROI it pays off. I am looking forward to this getting better.

By @hnthrow90348765 - 2 months
My guess is CRUD app development will become commoditized like ordinary pentesting. The money in security is definitely not regular pentesting these days.

You'll see smaller team sizes at first, then continuing to shrink as individual positions get a higher workload and spread of knowledge.

I think "Vibe coding" is probably a canary for all of this so it's worth paying attention to what a non-programmer can actually accomplish. This creates narratives that get picked up by managers and decision-makers.

Anything vibe-coded that takes user input will surely be hacked eventually, so I think those are a non-starter, not least because of bitter, laid-off developers wanting to see you fail.

By @cadamsdotcom - 2 months
If a “non coder” makes a project by vibe coding and learns to code as a side effect, great! One more coder. And it IS a great way to learn to code. If you want to know how to do something, just ask for it, watch what was generated, then ask why about anything you're curious about. You can't get much better as an education tool than that. Perfect for solo founders.

Alternatively if a “non coder” creates a project by vibe coding and it fails, maybe that failure happened faster with lower costs (especially their time, if it’s their own project) than if they’d had to go get financing, hire an offshore dev or two, and go back and forth for a few weeks or months.

Vibes are high on vibe coding.

By @mncharity - 2 months
Tweaking that list of capabilities yields integrated pervasive user observation, user interviewing, UX refinement discussions and prototyping and validation.

Friday AI report: A user was observed seeming to struggle a bit with X; we had a pain point discussion; they suggested some documentation and UI changes; they were confused, but further discussion turned up plausible improvements; we iterated on drafts and prototypes; I did an expanding alpha with interviews, and beta with sampled surveys, and integrated some feedback; evaluation was above threshold with no blockers; I've pushed the change to prod, and fed the nice users cookies.

By @risyachka - 2 months
>> Seems like programmers are terrified

Idk why terrified; vibe coding is nice, but everyone who has developed something bigger than a toy knows that code is 5% of the task and never was the bottleneck. It's not like FAANG employees write code all day long, or even half the day.

Ah and you need to make sure it doesn’t nuke your db or send weird email to your users because someone prompt-engineered it badly.

By @Havoc - 2 months
Does this really work for anyone?

I still find myself building more in a building-block style: "Make me a python function that does X", then stringing those together by hand.
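For what it's worth, the building-block style is itself a pattern: ask for each small function in isolation, then write the glue yourself. A hypothetical pair of such blocks, strung together by hand (the functions here are invented examples, not from the comment):

```python
def normalize_whitespace(text: str) -> str:
    """Block 1: collapse runs of whitespace (the kind of one-liner an LLM gets right)."""
    return " ".join(text.split())


def word_counts(text: str) -> dict[str, int]:
    """Block 2: count occurrences of each word."""
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts


def analyze(raw: str) -> dict[str, int]:
    """The hand-written glue stringing the generated blocks together."""
    return word_counts(normalize_whitespace(raw))
```

Each block is small enough to read and verify at a glance, which is arguably why this style works where whole-app generation does not.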

By @mohsen1 - 2 months
It's interesting to see a lot of senior folks are against this arguing that if you're working on a larger software project this falls apart. Two things can be argued against this:

1. Context sizes are going to grow. Gemini with 2M tokens is already doing amazing feats

2. We all agree that we should break bigger problems into smaller ones. So if you can isolate the problem into something that fits in a LLM context, no matter how large the larger software system is, you can make a lot of quick progress by leveraging LLMs for that isolated piece of software.
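The second point can be made concrete: if you can estimate token counts, you can greedily pack files or functions into batches that each fit a model's context window. A rough sketch, assuming the common ~4-characters-per-token heuristic (the budget numbers are illustrative):

```python
def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English and code."""
    return max(1, len(text) // 4)


def pack_into_context(chunks: list[str], context_tokens: int) -> list[list[str]]:
    """Greedily group chunks so each batch fits within the context budget.

    A chunk larger than the budget gets a batch of its own; a real tool
    would split it further (by function, class, or section).
    """
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if current and used + cost > context_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(chunk)
        used += cost
    if current:
        batches.append(current)
    return batches
```

Whether the isolated piece plus its interface description fits in context is then a measurable question rather than a matter of opinion.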

By @williamcotton - 2 months
I vibe coded this entire DSL, changing syntax and grammar while following my whimsy:

https://github.com/williamcotton/webdsl

By @cadamsdotcom - 2 months
Vibe coding will eat software.

Engineering - working to constraints, including user needs (ie Product Management) is forever.

By @voidhorse - 2 months
Honestly, the fact that this is even a concept assessed with any seriousness shows me how much bullshit the label "engineering" is when applied to software.

You want to have an LLM help you crap out a script, sure, but you mean to tell me you'd seriously consider using an LLM for a production system that affects real people and deals with real people's real data, and still call yourself a software "engineer"?

Engineering is about designing systems that serve society and provably meet well specified constraints. You don't want the god damn bridge to collapse under load. If you feel comfortable using an LLM to "engineer" a software system, you ought to feel comfortable letting civil engineers "vibe out" their bridge designs. God this hype cycle has just made a complete mockery of this whole industry and I have no respect for the clowns pushing this shit.

By @hooverd - 2 months
I wonder how many new developers will have legit panic attacks if their LLM of choice has an outage.
By @mncharity - 2 months
> There are additional difficulties brought about by the economics of large scale production. Programmers have become "proletarianized". The elite expert programmer who crafted a system and stayed with it for many years, finely tuning it and adding new bells and whistles with ease, has by and large been superseded by an entire generation of college graduates who were introduced to computing in their courses, and who are hired and fired by programming shops in accord with the winds of the market place.

> A radical approach to the complexity problem has been to suggest that the easiest way out is simply to make the machine do everything; i.e. automatic programming. [...] this approach does seem seductive, it is our estimate that it will not in the short run produce results of much value to the designer of [...] large scale programs

> the belief that man-machine interaction can be a symbiotic relationship in which the overall productivity is greater than the sum of the parts.

> how a knowledgeable computer could help an already competent programmer. It has been our experience that we can produce better and cleaner code faster when working with a partner who shares our understanding of the intentions and goal structure of our program. We, therefore, believe that the appropriate metaphor for our work is that of creating a program with the capabilities of a junior colleague working on a joint project. The program should know the problem domain, implementation techniques, and the programming language being used fairly well. It need not know everything in advance; it can always ask its senior partner for advice or further information. Furthermore, this program might well be capable of paying more attention to details, of writing trivial parts of the code, of checking that certain constraints are satisfied, and even (in some cases) of cleaning up a large system after it has been put together.

> First Scenario: Initial Design
>
> I'd like to build a hash table `O.K. youll need an insert, a lookup, an array, a hasher, and optionally a delete routine.` The P.A. knows the main parts of a hashing system.

> parallels [...] between understanding a program [...and...] natural language. In both cases, a key component in the understanding system is the background knowledge base, which establishes a context for understanding the semantics of the particular utterance in question. The huge problem in natural language understanding research is that if you try to advance beyond conversations in toy domains like the blocks world, this background knowledge quickly amounts to having a common-sense model of the whole world of human existence. Unfortunately, building such a representation of the world is exactly the central unsolved research project of the entire A.I. community.

> The transition from tab equipment systems to the modern day computer utility, exemplified by MULTICS, has taken little more than two decades.

Understanding LISP Programs: Towards a Programmer's Apprentice (1974) https://dspace.mit.edu/handle/1721.1/41117
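The 1974 "initial design" scenario quoted above, an insert, a lookup, an array, a hasher, and optionally a delete routine, fits in a few lines of modern Python. A minimal sketch using separate chaining:

```python
class HashTable:
    """Minimal hash table: an array of buckets plus insert/lookup/delete."""

    def __init__(self, size: int = 16):
        self.buckets = [[] for _ in range(size)]  # the array

    def _hash(self, key) -> int:  # the hasher
        return hash(key) % len(self.buckets)

    def insert(self, key, value) -> None:
        bucket = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def lookup(self, key):
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None

    def delete(self, key) -> None:  # the optional delete routine
        idx = self._hash(key)
        self.buckets[idx] = [(k, v) for k, v in self.buckets[idx] if k != key]
```

That this scenario reads like a modern LLM transcript is, of course, the comment's point.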

By @65 - 2 months
Yeah, and I can be a novel writer with AI. I'll have my Amazon best seller in no time thanks to AI!

It completely misses nuance. Are any of these apps actually useful?

I'm not sure how this is any better than jamming a bunch of Wordpress plugins together to kinda get the software to do what you want.

By @zombiwoof - 2 months
I wish software engineers focused less on being cool and more on getting their work done and living a life away from a screen.
By @nighthawk454 - 2 months
Some things don’t require scratch build, some do, some are in between. But there’s no prompt for taste.
By @booleandilemma - 2 months
I'm so glad I learned how to program a computer before the era of vibe coding.
By @_bin_ - 2 months
there are "a few" problems with this.

if you know how you want something done, tough luck. LLMs, even the "really smart" ones, still often do it "their way". they use "their style" (whatever the most common way to write something might be) and "their preferred packages" (whatever the most common ones for the language are). i remember someone told me "hey dude try vercel's v0 it's so good" and i asked it for some basic svelte code. it spat out react.

if you are modifying an existing, non-AI codebase, it's really annoying for the same reason. if you have a preference for specific design patterns or code style, it's unlikely to work well without substantial prompting and re-trying.

they still can't really fix bugs. syntax errors sure, but actual time-costing logic bugs? figuring out lifetimes with rust? forget about it. all they do is add freaking print statements and say "try these things to fix it." no. you're the robot, you work for me, you do it.

they suck at functional languages/haskell. like they're really just bad.

lastly, they're interns, not employees. interns require hand-holding, supervision, and verbal abuse to get anything done right. bots are, for now, the same. they impose a cognitive load when you want something of any importance done: you can't actually trust anything it outputs, at all. you have to go re-check everything it does.

i remember a few days ago i wanted to parse a bunch of UDP packets from 10-20GB daily pcap dumps. I gave it the spec for the message format as a PDF and said "write this in rust", along with the existing (functional but slow) python implementation. this should be a simple case to apply an LLM: simple, routine, boilerplate code that can be next-token-predicted fairly simply, but still takes an annoying amount of time to type out. unfortunately it screwed up multiple times. it failed to use the pcap parsing crate (even when i supplied docs) because it probably wasn't frequent in its training corpus. more importantly, it just miswrote constants. like it would get the constant for length-checking a certain message type wrong despite it being plainly specified in the spec and the python version.
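The miswritten-constant failure mode is easy to illustrate: in a fixed-format binary message, one wrong length constant silently rejects every valid packet. A toy Python parser, with a message layout invented for illustration (not the commenter's actual spec):

```python
import struct

# Hypothetical layout: 2-byte big-endian type, 2-byte length, then the body.
HEADER = struct.Struct(">HH")
MSG_TYPE_QUOTE = 0x0101
QUOTE_BODY_LEN = 12  # the sort of spec constant the LLM transposed in the anecdote


def parse_message(data: bytes):
    """Parse one message, enforcing the spec's length constant."""
    if len(data) < HEADER.size:
        raise ValueError("truncated header")
    msg_type, length = HEADER.unpack_from(data)
    body = data[HEADER.size:HEADER.size + length]
    if len(body) != length:
        raise ValueError("truncated body")
    if msg_type == MSG_TYPE_QUOTE and length != QUOTE_BODY_LEN:
        # If QUOTE_BODY_LEN is transcribed wrong, every valid packet dies here,
        # and nothing about the code looks broken on review.
        raise ValueError(f"bad quote length {length}")
    return msg_type, body
```

The constant is exactly the kind of detail that is trivially checkable against the spec and nevertheless the thing that got miswritten.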

LLMs are cool research tech and I have friends who have used them to learn to write Python scripts and react webshit. in my opinion, they are of little value for "serious" programming. i realize that's an annoying and vaguely-conceited term but it's the best one I can think of at the moment. i look forward to when they actually work well.

in my opinion, a good improvement would be focusing on writing in at least somewhat-verifiable languages, or writing such that pieces are verifiable. robot translates your request into rules, robot 2 writes the code from rules, a SAT solver checks the check-able chunks for validity while robot 3 is specialized in checking unverifiable "connection points", use of side effects, etc. the "intern" problem is by far the biggest of what I've listed and this is probably the best way to solve it. once that's done, we can hopefully let these chug for a while until they get it right rather than giving users crappy output.
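The "verifiable pieces" idea can be sketched without a SAT solver: express the rules as executable predicates and check any candidate implementation against them before accepting it. A toy harness, with a rule format invented for illustration:

```python
def check_candidate(func, rules):
    """Run a candidate function against declarative (description, args, predicate) rules.

    Returns the descriptions of the rules that failed; an empty list means
    the candidate passes every check-able constraint.
    """
    failures = []
    for description, args, predicate in rules:
        try:
            result = func(*args)
        except Exception as exc:
            failures.append(f"{description}: raised {exc!r}")
            continue
        if not predicate(result):
            failures.append(description)
    return failures


# Example: rules for a clamp(x, lo, hi) the robot was asked to write.
RULES = [
    ("stays inside range", (5, 0, 3), lambda r: 0 <= r <= 3),
    ("identity inside range", (2, 0, 3), lambda r: r == 2),
    ("clamps from below", (-1, 0, 3), lambda r: r == 0),
]


def good_clamp(x, lo, hi):
    return max(lo, min(x, hi))


def bad_clamp(x, lo, hi):  # a plausible LLM bug: forgot the lower bound
    return min(x, hi)
```

Under this scheme the bot's output is rejected mechanically instead of by a human re-reading everything, which is the "intern" cognitive load the comment describes.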

oh, and they MUST be tuned to be capable of saying, "I don't know."