The LLM Curve of Impact on Software Engineers
Large Language Models (LLMs) affect software engineers differently: junior engineers benefit greatly, mid-level engineers hit limitations, senior engineers are skeptical, and Staff+ engineers use LLMs for innovation and prototyping.
The article discusses the varying impact of Large Language Models (LLMs) on software engineers at different career levels, proposing a "curve of impact." Junior engineers benefit significantly from LLMs, as they help them understand code and solve errors quickly. However, there is a risk of over-reliance, which could hinder skill development. Mid-level engineers also find LLMs useful for speeding up coding tasks but encounter limitations when it comes to understanding customer needs or debugging complex issues. Senior engineers, who possess deep knowledge of their codebases, often feel skeptical about LLMs, as these tools cannot grasp the nuances of their work or assist with high-level planning and problem-solving. In contrast, Staff+ engineers can leverage LLMs for rapid prototyping and experimentation, enhancing their ability to innovate. The author emphasizes that the differing experiences with LLMs stem from the distinct tasks and challenges faced by engineers at various levels, advocating for empathy towards differing opinions on LLM utility. The article concludes by acknowledging the evolving nature of LLMs and the excitement surrounding their potential future applications.
- LLMs significantly aid junior engineers in understanding code and solving problems.
- Mid-level engineers benefit from faster coding but face limitations in complex scenarios.
- Senior engineers often express skepticism about LLMs due to their inability to handle nuanced tasks.
- Staff+ engineers can effectively use LLMs for rapid prototyping and experimentation.
- The varying experiences with LLMs highlight the need for empathy towards differing perspectives in the engineering community.
Related
Ask HN: SWEs how do you future-proof your career in light of LLMs?
The integration of large language models in software engineering is rising, potentially diminishing junior roles and shifting senior engineers to guiding AI, necessitating adaptation for career longevity.
How I Program with LLMs
The author discusses the positive impact of large language models on programming productivity, highlighting their uses in autocomplete, search, and chat-driven programming, while emphasizing the importance of clear objectives.
Cheating Is All You Need
Steve Yegge discusses the transformative potential of Large Language Models in software engineering, emphasizing their productivity benefits, addressing skepticism, and advocating for their adoption to avoid missed opportunities.
How I use LLMs as a staff engineer
Sean Goedecke discusses the benefits and limitations of large language models in software engineering, highlighting their value in code writing and learning, while remaining cautious about their reliability for complex tasks.
By virtue of being a junior, as you're just picking things up, you have to be flexible, malleable and open to new things. Anything that seems to work is amazing. Then you start to form the "it works" vs. "it's built well" intuition. Then your skills and opinions start to calcify. Your beliefs become firmer, prescriptive, pedantic, maybe petty. Good code becomes a fixed definition in your head. And yes, you can be very productive within your niche, but there's a quick fall-off outside of it. But then, as you grow even more, you realize the beliefs you held dear are actually fluid. You start to foster an experimental side, trying things, dabbling, hacking, building little proofs-of-concept and many half-finished seedlings of ideas. This, which the OP identifies as 'Staff level', is for me just a kind of blossoming of the programming and hacking mindset.
With LLMs, you have to be open to learning their language and their biases. You have to figure out how to get the best out of them, as they will almost never slot easily into how you currently work. You have to grow strong intuitions about context, linguistics, even a kind of theory-of-mind. The recent "Wait" paper from Stanford shows how tiny inflections or insertions can lead to drastically better results. These little learnings are born of experimentation. Trying new tools and workflows is just as vital, but as every emacs/vim or tabs/spaces debate shows, people are reluctant to branch outside their established workflows. They have their chosen way, and don't want to give others the time of day.
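The "tiny insertion" trick the comment alludes to can be sketched in a few lines. This is a minimal, hedged sketch of the budget-forcing idea from the Stanford s1 test-time scaling work: when the model tries to stop reasoning, append "Wait" and let it continue, which often surfaces self-corrections. The `generate` callable here is a hypothetical stand-in for any LLM completion API, not a real library call.

```python
def reason_with_budget(generate, question, min_continuations=2):
    """Force an LLM to extend its reasoning by injecting "Wait"
    each time it would otherwise stop.

    `generate` is a hypothetical callable: it takes the trace so far
    and returns the model's next chunk of reasoning text.
    """
    trace = f"Question: {question}\nReasoning:\n"
    for _ in range(min_continuations):
        trace += generate(trace)
        trace += "\nWait, "  # the tiny insertion that forces a re-check
    trace += generate(trace)  # final continuation, allowed to conclude
    return trace
```

With a real model behind `generate`, each forced continuation tends to re-examine the previous steps; with a stub it simply demonstrates the trace construction.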
But when I experiment with new libraries and toolchains as a beginner, LLMs are like a private tutor. They can bring one up to lower mid level of experience pretty quickly.
The knowledge gradient between the user and the LLM is important.
I’m not sure if I’d say LLMs become useful again at a higher level of expertise/mastery though.
Expertise here really breaks down into a few separate things:
1. Title - which comes with different activities and quality standards, to some degree. (I don't fully agree with the definitions, but to be fair, they differ quite a bit from company to company.)
2. Familiarity with the code base you're working on.
3. Familiarity with the tech stack currently used.
I think the strongest correlation between these and LLM usefulness is (1) and (3).
For (1), LLMs can be useful for churning out something quickly when quality isn't a big concern (e.g. experimentation). I personally rarely do this, because good tooling makes it easy enough to build an MVP in a stack you already know, and if the stack is one I have to learn anyway, I often treat the project as an opportunity to do exactly that. In many cases I also look for ways to experiment without writing any code at all. I can see how this is controversial.
I personally find them most useful for learning a new stack (3) - armed with a search engine (and ideally someone with experience I can talk to) in addition. This seems comparatively uncontroversial.
For (2), understanding a large code base quickly, I'm pretty bearish. There won't be many Stack Overflow posts and the like about a company's internal code base, so in my experience the results are very mixed. I'd call this the most controversial part: those who don't know the code base find the output magical, while those who do know it find the results terrible.
The reasoning process is nice, because the CoT can fact-check itself. But it can't catch compile errors that span multiple files/modules, because an LLM is not a compiler.
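One practical consequence of that point: verification has to come from the real toolchain, not from the model's own chain of thought. A minimal sketch, assuming Python as the target language, using the stdlib `py_compile` module as the checker (a real compiler for C/Rust/etc. would catch far more, including cross-module type errors):

```python
import os
import py_compile
import tempfile


def compile_check(source: str) -> tuple[bool, str]:
    """Feed LLM-generated Python to the actual bytecode compiler
    instead of trusting the model's self-check.

    Returns (ok, error_message)."""
    with tempfile.NamedTemporaryFile(
        "w", suffix=".py", delete=False
    ) as f:
        f.write(source)
        path = f.name
    try:
        py_compile.compile(path, doraise=True)
        return True, ""
    except py_compile.PyCompileError as e:
        return False, str(e)
    finally:
        os.unlink(path)
```

In an agent loop, a failing check result would be fed back to the model as context for a retry, rather than accepted on faith.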
Though I still sometimes have a hard time writing my thoughts and logic out completely, as I don't have a programming background and was never very good at maths, so my code would probably be crap by most programmers' standards.
- are you working in a domain and language adequately represented in the training data? For example, it’s a lot easier to prompt an LLM to do what you want in a React CRUD website than a Swift app with obscure APIs from after the knowledge cutoff. No doubt LLMs will eventually generalize so this stops being troublesome, but today with o3-mini, R1 et al you need a LOT of docs in the context to get good results outside the mainstream.
- greenfield codebase or big hairy codebase full of non-industry-standard architecture & conventions?
- how closely does your code formatting align with industry standards? (needing bespoke formatting, line breaks at column N, extra lint rules etc. is a distraction for LLMs)
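The first point above, working past the knowledge cutoff, usually comes down to packing relevant documentation into the prompt. A minimal sketch of that assembly step; the function name and the character-based budget are illustrative assumptions, not any particular library's API (real implementations would budget by tokens):

```python
def build_prompt(task: str, doc_snippets: list[str],
                 budget_chars: int = 20_000) -> str:
    """Pack reference docs into the context until the budget runs out,
    then append the task -- a common workaround when the target API
    postdates the model's knowledge cutoff."""
    parts: list[str] = []
    used = 0
    for snippet in doc_snippets:
        if used + len(snippet) > budget_chars:
            break  # stop before overflowing the context budget
        parts.append(snippet)
        used += len(snippet)
    return "\n\n".join(
        ["Reference documentation:", *parts, "Task:", task]
    )
```

Snippets should be ordered by relevance, since whatever falls outside the budget is silently dropped.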
Codebases should be moved toward industry standards so LLMs can be leveraged more effectively. Could be a good (if indirect) way for mid or senior engs to help their junior counterparts.
There is another aspect though - senior people tend to be the most set in their ways, and also the most at risk of their hard-won experience losing value. That dramatically influences how open people are to accepting change.
If a story from two years ago had to be dug up as an illustration of usefulness... I don't know where the author positions himself on that graph, but that's the direct opposite of high day-to-day impact.