Vibe Coding and the Future of Software Engineering
Vibe coding, popularized by Andrej Karpathy, raises concerns about code quality and junior developers' skills. While some embrace it, established organizations prefer traditional practices amid increasing AI integration in software development.
Vibe coding, also known as vibeware, has gained traction in the programming community, largely popularized by Andrej Karpathy. The trend involves creating software without traditional coding practices, raising concerns about code quality and comprehension among programmers. While some fear that AI could lead to job losses for senior developers, others, particularly indie hackers and solopreneurs, embrace vibe coding's potential for rapid development. The article discusses the mixed reception of vibe coding, noting that established organizations are unlikely to adopt it without rigorous testing and code reviews. The author argues that while AI tools are increasingly used in software development, human oversight remains crucial. The emergence of vibe coding has also sparked discussions about the capabilities of junior developers, with some claiming they lack coding skills; that sentiment is not new, however, and has been echoed across generations of programmers. The future of software engineering may involve more AI-driven processes, such as self-healing software and conversation-driven development, which will require skilled engineers to manage and integrate these technologies. Ultimately, vibe coding represents a shift in how software is developed, necessitating a balance between innovation and quality assurance.
- Vibe coding is a controversial trend in software development, with mixed opinions on its impact.
- Concerns about code quality and the future of junior developers are prevalent among programmers.
- Established organizations are expected to maintain traditional coding practices despite the rise of AI tools.
- The future of software engineering may involve more AI-driven processes and automation.
- Software engineers will need to adapt to new roles in managing AI integration and ensuring code quality.
Related
The 70% problem: Hard truths about AI-assisted coding
AI-assisted coding increases developer productivity but does not improve software quality significantly. Experienced developers benefit more, while novices risk creating fragile systems without proper oversight and expertise.
AI-assisted coding will change software engineering: hard truths
AI-assisted coding is widely adopted among developers, enhancing productivity but requiring human expertise. Experienced engineers benefit more than beginners, facing challenges in completing projects and understanding AI-generated code.
The second wave of AI coding is here
A second wave of AI coding tools is transforming software development, enhancing code generation and debugging, while debates continue over the effectiveness of large language models versus logic-based systems.
Ask HN: Is AI assisted programming going to change productivity expectations?
AI-driven code generation is significantly reducing coding time for software engineers, potentially increasing productivity expectations and shifting the balance between workload and free time in the industry.
A.I. Is Prompting an Evolution, Not Extinction, for Coders
Artificial intelligence is enhancing software developers' productivity, requiring new skills. Demand for skilled developers is expected to grow, while entry-level opportunities may decline due to A.I. integration.
- Some developers find vibe coding beneficial for quick tasks, allowing them to leverage AI tools efficiently.
- Others argue that it undermines code quality and the importance of understanding programming fundamentals, especially for larger projects.
- Concerns are raised about the potential commoditization of software development roles and the risks of relying on AI-generated code.
- Many emphasize the need for experienced developers to guide AI tools to ensure quality and reliability in production systems.
- There is a general skepticism about the long-term viability of vibe coding without a solid programming foundation.
I probably got the whole thing done in 5 prompts and still had enough brain space to vaguely follow along with the presentation. Before, this kind of thing would have taken 20-30 min of heads-down coding. It would have been a strictly "after work" project, which means I probably wouldn't have done it (my real side projects and family need that time more than this analysis did). That's the kind of thing an experienced programmer can get out of vibe coding.
But then again, I have been doing this for 10 years; that is my edge. Same exact stack (Django + boring frontend). I know the ins and outs of my stack, and quite obviously, every single day I see the AI go in a direction that I know will produce a huge footgun down the road. I can see that coming, suggest a different approach, and continue. If I were entirely new to this, I would end up building stuff that breaks down after weeks or months of investment, not knowing when things went wrong or how to move forward. Regardless, I feel like my time has come, and I am definitely spending 95% of my time just prompting the AI versus writing actual code. Even for the most minor changes, like changing a CharField to a TextField, I don't even want to open models.py myself. In Cursor, I am averaging 5000-7000 fast requests per month, because in terms of ROI it pays off. I am looking forward to this getting better.
You'll see smaller team sizes at first, which will then continue to shrink as individual positions take on a higher workload and a broader spread of knowledge.
I think "Vibe coding" is probably a canary for all of this so it's worth paying attention to what a non-programmer can actually accomplish. This creates narratives that get picked up by managers and decision-makers.
Anything with capabilities that take user input will surely be hacked eventually, so I think those are a non-starter, not least because of bitter, laid-off developers wanting to see you fail.
Alternatively if a “non coder” creates a project by vibe coding and it fails, maybe that failure happened faster with lower costs (especially their time, if it’s their own project) than if they’d had to go get financing, hire an offshore dev or two, and go back and forth for a few weeks or months.
Vibes are high on vibe coding.
Friday AI report: A user was observed seeming to struggle a bit with X; we had a pain point discussion; they suggested some documentation and UI changes; they were confused, but further discussion turned up plausible improvements; we iterated on drafts and prototypes; I did an expanding alpha with interviews, and beta with sampled surveys, and integrated some feedback; evaluation was above threshold with no blockers; I've pushed the change to prod, and fed the nice users cookies.
Idk why terrified; vibe coding is nice, but everyone who has developed something bigger than a toy knows that code is 5% of the task and never was the bottleneck. It's not like FAANG employees write code all day long, or even half the day.
Ah, and you need to make sure it doesn't nuke your db or send weird emails to your users because someone prompt-engineered it badly.
I still find myself building in a more building-block style: "Make me a Python function that does X" and then stringing those together by hand.
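A minimal sketch of that building-block workflow. The two functions below are hypothetical stand-ins for the kind of small, single-prompt pieces an LLM might produce; the point is that each one is small enough to review in isolation, and the human does the stringing-together by hand:

```python
# Two small "building block" functions, each about the size of one
# "make me a python function that does X" prompt.

def parse_csv_line(line: str) -> list[float]:
    """Parse one comma-separated line into floats."""
    return [float(field) for field in line.strip().split(",")]

def moving_average(values: list[float], window: int) -> list[float]:
    """Simple moving average over a sliding window."""
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# The composition is done by hand: review each block, then wire them up.
lines = ["1,2,3", "4,5,6"]
flat = [x for line in lines for x in parse_csv_line(line)]
smoothed = moving_average(flat, window=3)
print(smoothed)  # [2.0, 3.0, 4.0, 5.0]
```

Each block is independently testable, which keeps the review burden per prompt small.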
1. Context sizes are going to grow. Gemini with 2M tokens is already doing amazing feats
2. We all agree that we should break bigger problems into smaller ones. So if you can isolate the problem into something that fits in a LLM context, no matter how large the larger software system is, you can make a lot of quick progress by leveraging LLMs for that isolated piece of software.
Engineering - working to constraints, including user needs (ie Product Management) is forever.
You want to have an LLM help you crap out a script, sure, but you mean to tell me you'd seriously consider using an LLM for a production system that affects real people and deals with their real data, and still call yourself a software "engineer"?
Engineering is about designing systems that serve society and provably meet well specified constraints. You don't want the god damn bridge to collapse under load. If you feel comfortable using an LLM to "engineer" a software system, you ought to feel comfortable letting civil engineers "vibe out" their bridge designs. God this hype cycle has just made a complete mockery of this whole industry and I have no respect for the clowns pushing this shit.
> A radical approach to the complexity problem has been to suggest that the easiest way out is simply to make the machine do everything; i.e. automatic programming. [...] this approach does seem seductive, it is our estimate that it will not in the short run produce results of much value to the designer of [...] large scale programs
> the belief that man-machine interaction can be a symbiotic relationship in which the overall productivity is greater than the sum of the parts.
> how a knowledgeable computer could help an already competent programmer. It has been our experience that we can produce better and cleaner code faster when working with a partner who shares our understanding of the intentions and goal structure of our program. We, therefore, believe that the appropriate metaphor for our work is that of creating a program with the capabilities of a junior colleague working on a joint project. The program should know the problem domain, implementation techniques, and the programming language being used fairly well. It need not know everything in advance; it can always ask its senior partner for advice or further information. Furthermore, this program might well be capable of paying more attention to details, of writing trivial parts of the code, of checking that certain constraints are satisfied, and even (in some cases) of cleaning up a large system after it has been put together.
> First Scenario: Initial Design
> I'd like to build a hash table. `O.K. you'll need an insert, a lookup, an array, a hasher, and optionally a delete routine.` The P.A. knows the main parts of a hashing system.
> parallels [...] between understanding a program [...and...] natural language. In both cases, a key component in the understanding system is the background knowledge base, which establishes a context for understanding the semantics of the particular utterance in question. The huge problem in natural language understanding research is that if you try to advance beyond conversations in toy domains like the blocks world, this background knowledge quickly amounts to having a common-sense model of the whole world of human existence. Unfortunately, building such a representation of the world is exactly the central unsolved research project of the entire A.I. community.
> The transition from tab equipment systems to the modern day computer utility, exemplified by MULTICS, has taken little more than two decades.
Understanding LISP Programs: Towards a Programmer's Apprentice (1974) https://dspace.mit.edu/handle/1721.1/41117
It completely misses nuance. Are any of these apps actually useful?
I'm not sure how this is any better than jamming a bunch of Wordpress plugins together to kinda get the software to do what you want.
if you know how you want something done, tough luck. LLMs, even the "really smart" ones, still often do it "their way": they use "their style" (whatever the most common way to write something might be) and "their preferred packages" (whatever the most common ones for the language are). i remember someone told me "hey dude try vercel's v0 it's so good" and i asked it for some basic svelte code. it spat out react.
if you are modifying an existing, non-AI codebase, it's really annoying for the same reason. if you have a preference for specific design patterns or code style, it's unlikely to work well without substantial prompting and re-trying.
they still can't really fix bugs. syntax errors sure, but actual time-costing logic bugs? figuring out lifetimes with rust? forget about it. all they do is add freaking print statements and say "try these things to fix it." no. you're the robot, you work for me, you do it.
they suck at functional languages/haskell. like they're really just bad.
lastly, they're interns, not employees. interns require hand-holding, supervision, and verbal abuse to get anything done right. bots are, for now, the same. they impose a cognitive load when you want something of any importance done: you can't actually trust anything it outputs, at all. you have to go re-check everything it does.
i remember a few days ago i wanted to parse a bunch of UDP packets from 10-20GB daily pcap dumps. I gave it the spec for the message format as a PDF and said "write this in rust", along with the existing (functional but slow) python implementation. this should have been a simple case for applying an LLM: simple, routine, boilerplate code that can be next-token-predicted fairly easily, but still takes an annoying amount of time to type out. unfortunately it screwed up multiple times. it failed to use the pcap parsing crate (even when i supplied docs), probably because the crate wasn't frequent in its training corpus. more importantly, it just miswrote constants. it would get the constant for length-checking a certain message type wrong despite it being plainly specified in the spec and the python version.
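The miswritten-constants failure mode is easy to illustrate. Here is a minimal Python sketch (not the commenter's actual Rust code) that parses just the 24-byte pcap global header; the magic number and header length come straight from the pcap file format, and they are exactly the kind of spec-mandated values an LLM can silently get wrong:

```python
import struct

# Constants from the pcap file-format spec; one wrong digit here and
# every capture file is rejected or misparsed.
PCAP_MAGIC_LE = 0xA1B2C3D4     # microsecond-resolution pcap magic number
GLOBAL_HEADER_LEN = 24         # global header size in bytes

def parse_global_header(buf: bytes) -> dict:
    """Parse the 24-byte pcap global header; raise on a bad magic number."""
    if len(buf) < GLOBAL_HEADER_LEN:
        raise ValueError("truncated header")
    magic, major, minor, _thiszone, _sigfigs, snaplen, linktype = struct.unpack(
        "<IHHiIII", buf[:GLOBAL_HEADER_LEN]
    )
    if magic != PCAP_MAGIC_LE:
        raise ValueError(f"bad magic: {magic:#x}")
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}

# A hand-built valid header: v2.4, tz offset 0, sigfigs 0, snaplen 65535, linktype 1
header = struct.pack("<IHHiIII", PCAP_MAGIC_LE, 2, 4, 0, 0, 65535, 1)
print(parse_global_header(header))
```

Because the constants are checked at parse time, a miswritten value fails loudly on the first file rather than corrupting results downstream.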
LLMs are cool research tech and I have friends who have used them to learn to write Python scripts and react webshit. in my opinion, they are of little value for "serious" programming. i realize that's an annoying and vaguely-conceited term but it's the best one I can think of at the moment. i look forward to when they actually work well.
in my opinion, a good improvement would be focusing on writing in at least somewhat-verifiable languages, or writing such that individual pieces are verifiable: robot 1 translates your request into rules, robot 2 writes the code from the rules, a SAT solver checks the checkable chunks for validity, and robot 3 specializes in checking unverifiable "connection points", use of side effects, etc. the "intern" problem is by far the biggest of what i've listed, and this is probably the best way to solve it. once that's done, we can hopefully let these chug for a while until they get it right, rather than giving users crappy output.
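A toy version of the verification step in that pipeline. Exhaustive checking over small inputs stands in for the SAT solver, and the hypothetical `candidate_sort` stands in for untrusted generated code; the rule being checked is "output equals the sorted input":

```python
from itertools import permutations

# Hypothetical "robot 2" output: a candidate implementation we do not trust.
def candidate_sort(xs):
    # (pretend an LLM wrote this bubble sort)
    out = list(xs)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

# Stand-in for the solver step: exhaustively check the rule on all
# permutations up to a small length, rather than trusting the output.
def verify_sort(fn, max_len=5):
    for n in range(max_len + 1):
        for perm in permutations(range(n)):
            if fn(list(perm)) != sorted(perm):
                return False
    return True

print(verify_sort(candidate_sort))  # True: the candidate meets the rule
```

Only candidates that pass the check would be surfaced to the user; failing ones would loop back for another attempt, which is the "let these chug for a while" part.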
oh, and they MUST be tuned to be capable of saying, "I don't know."