February 7th, 2025

The LLM Curve of Impact on Software Engineers

Large Language Models (LLMs) impact software engineers differently: junior engineers benefit greatly, mid-level engineers face limitations, senior engineers are skeptical, while Staff+ engineers use LLMs for innovation and prototyping.

The article discusses the varying impact of Large Language Models (LLMs) on software engineers at different career levels, proposing a "curve of impact." Junior engineers benefit significantly from LLMs, which help them understand code and resolve errors quickly, though over-reliance risks hindering skill development. Mid-level engineers also find LLMs useful for speeding up coding tasks but run into limits when it comes to understanding customer needs or debugging complex issues. Senior engineers, who possess deep knowledge of their codebases, are often skeptical of LLMs, since these tools cannot grasp the nuances of their work or assist with high-level planning and problem-solving. In contrast, Staff+ engineers can leverage LLMs for rapid prototyping and experimentation, enhancing their ability to innovate. The author attributes these differing experiences to the distinct tasks and challenges engineers face at each level, and advocates empathy towards differing opinions on LLM utility. The article concludes by acknowledging the evolving nature of LLMs and the excitement surrounding their potential future applications.

- LLMs significantly aid junior engineers in understanding code and solving problems.

- Mid-level engineers benefit from faster coding but face limitations in complex scenarios.

- Senior engineers often express skepticism about LLMs due to their inability to handle nuanced tasks.

- Staff+ engineers can effectively use LLMs for rapid prototyping and experimentation.

- The varying experiences with LLMs highlight the need for empathy towards differing perspectives in the engineering community.

12 comments
By @padolsey - 2 months
Perhaps the meaningful axis is not around seniority, per se, but instead around experimentation as OP alludes to.

By virtue of being a junior, as you're just picking things up, you have to be flexible, malleable and open to new things. Anything that seems to work is amazing. Then you start to form the "it works" vs. "it's built well" intuition. Then your skills and opinions start to calcify. Your beliefs become firmer, prescriptive, pedantic, maybe petty. Good code becomes a fixed definition in your head. And yes, you can be very productive within your niche, but there's a quick fall-off outside of it. But then, as you grow even more, you realize the beliefs you held dear are actually fluid. You start to foster an experimental side, trying things, dabbling, hacking, building little proofs-of-concept and many half-finished seedlings of ideas. This, which the OP identifies as 'Staff level', is for me just a kind of blossoming of the programming and hacking mindset.

With LLMs, you have to be open to learning their language, their biases. You have to be able to figure out how to get the best out of them, as they will almost never slot easily into how you currently work. You have to grow strong intuitions about context, linguistics, even a kind of theory-of-mind. The latest "Wait" paper from Stanford shows how tiny inflections or insertions can lead to drastically superior results. These little learnings are born of experimentation. Trying new tools and workflows is just as vital, but as every emacs/vim or tabs/spaces debate shows, people don't branch outside of their workflows easily. They have their chosen way, and don't want to give others the time of day.

By @caseyy - 2 months
This has been exactly my experience. In my work as a senior SWE in my day-to-day areas of responsibility, if I’m stumped, then an LLM won’t even get close.

But when I experiment with new libraries and toolchains as a beginner, LLMs are like a private tutor. They can bring one up to a low-to-mid level of experience pretty quickly.

The knowledge gradient between the user and the LLM is important.

I’m not sure if I’d say LLMs become useful again at a higher level of expertise/mastery though.

By @fhd2 - 2 months
TFA is presenting one dimension (junior to staff), but actually appears to be talking about three dimensions:

1. Title - which comes with different activities and quality standards to some degree. (I don't fully agree with the definitions, but to be fair, they're quite different from company to company.)

2. Familiarity with the code base you're working on.

3. Familiarity with the tech stack currently used.

I think the strongest correlations between these and LLM usefulness are with (1) and (3).

For (1), they can be useful for churning out something quickly (if quality is not a big issue, e.g. experimentation), which I personally rarely do, because good tooling makes it easy enough to build an MVP in a stack you're familiar with. If I'm working in a stack I'm not familiar with, I quite often use this kind of thing as an opportunity to learn it. And in many cases, I look for ways to experiment without writing any code. I can see how it's controversial.

I personally find them most useful for learning a new stack (3) - armed with a search engine (and ideally someone with experience I can talk to) in addition. This seems comparatively uncontroversial.

For (2), understanding a large code base quickly, I'm pretty bearish. There won't be many SO posts and the like about a company's internal code base, so the results are very mixed in my experience. I'd call this part highly controversial, and the kind of thing where those who don't know the code base find it to be magic, while those who do know it find the results terrible.

By @manmal - 2 months
This hits the nail on the head for me. Low-complexity prototypes and modules are something o1 or R1 can reason well about. For more complex coding tasks, IMO we'd need a feedback loop with a linter and a compiler, and maybe even a debugger and CI workflows. Basically an agent that works exactly like I do.

The reasoning process is nice, because the CoT can fact-check itself. But it can't catch compile errors across multiple files/modules, because an LLM is not a compiler.
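
A minimal sketch of what such a loop could look like (my own illustration, not from the article or the comment): the model's output is run through a type-checker and a linter, and the diagnostics are fed back into the prompt until they come back clean. llm_complete is a placeholder for whatever model API you use; mypy and ruff stand in for the compiler and linter.

```python
import subprocess
import tempfile
from pathlib import Path


def llm_complete(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, a local R1, etc.)."""
    raise NotImplementedError


def run_checks(path: Path) -> str:
    """Run a type-checker and a linter; return their combined diagnostics."""
    diagnostics = []
    for cmd in (["mypy", str(path)], ["ruff", "check", str(path)]):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            diagnostics.append(result.stdout + result.stderr)
    return "\n".join(diagnostics)


def generate_with_feedback(task: str, max_rounds: int = 5) -> str:
    """Regenerate code until the checkers stop complaining, or give up."""
    code = llm_complete(f"Write a Python module that does the following:\n{task}")
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = Path(f.name)
        errors = run_checks(path)
        if not errors:
            break
        code = llm_complete(
            f"This code has problems:\n{code}\n\nTool output:\n{errors}\n\n"
            "Fix the code and return only the corrected module."
        )
    return code
```

The missing piece the comment points at is the debugger/CI part: nothing above actually runs the code or its tests, which is where the hard multi-module failures tend to show up.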

By @xbmcuser - 2 months
I agree current LLM tech is most useful for people who see the possibility but don't have the time or competence to get there themselves. Things like Excel formulas or regex patterns that would sometimes take me hours to write are usually just a question away.

Though I still sometimes have a hard time writing my thoughts and logic out completely, as I do not have a programming background, nor was I very good at maths, so my code would probably be crap for most programmers.
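
As an illustration (my own example, not from the comment), the kind of pattern an LLM will hand back from a one-line question, here pulling ISO-style dates out of free text:

```python
import re

# Matches ISO-style dates such as 2025-02-07 in free text.
DATE_PATTERN = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

text = "Released 2025-02-07, patched 2025-03-01."
print(DATE_PATTERN.findall(text))  # [('2025', '02', '07'), ('2025', '03', '01')]
```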

By @cadamsdotcom - 2 months
Good article, very thought provoking. Here are some other dimensions that affect the usefulness/value of adding LLMs to a workflow. LLMs can be amazing or worse than useless depending on where you land on these - regardless of seniority.

- are you working in a domain and language adequately represented in the training data? For example, it's a lot easier to prompt an LLM to do what you want for a React CRUD website than for a Swift app with obscure APIs from after the knowledge cutoff. No doubt LLMs will eventually generalize so this stops being troublesome, but today with o3-mini, R1 et al. you need a LOT of docs in the context to get good results outside the mainstream.

- greenfield codebase or big hairy codebase full of non-industry-standard architecture & conventions?

- how closely does your code formatting align with industry standards? (bespoke formatting, line breaks at column N, extra lint rules etc. are a distraction for LLMs)

Codebases should be moved toward industry standards so LLMs can be leveraged more effectively. Could be a good (if indirect) way for mid or senior engs to help their junior counterparts.

By @Havoc - 2 months
I'd very much hope everyone at all skill levels is experimenting. Aside from that, it seems like a plausible take from the individual's PoV.

There is another aspect though - senior people tend to be the most set in their ways & also the most at risk of their hard-won experience losing value. That dramatically influences how open people are to accepting change.

By @twelve40 - 2 months
> Almost two years ago now, I was exploring

If a story from two years ago had to be dug up as an illustration of usefulness... I don't know where the author positions himself on that graph, but that's like the direct opposite of high impact on the day-to-day.

By @1jreuben1 - 2 months
What is the impact on Software Architects? AI architecture guidance can be very ivory tower with regard to transformer model internals and ML pipelines, but when it comes to CodeGenAI refinement, you really need to touch grass. On the plus side, system design skills mean knowing what questions to ask, and good MermaidJS markdown designs are solid agent inputs. I guess the role will shift to researching and enabling the devs to best leverage CodeGenAI infra for full SDLC velocity with quality.

By @latorf - 2 months
In OP's examples, what about privacy/IP concerns for your existing code base? When people mention Copilot/Cursor or any other autocompletion tools, or just ChatGPT, do you happily let the models access your existing "company internal" code? Sure, no problem when you self-host, but I assume all these use cases involve some external API? Are you even allowed to do that in most companies, given that your IP basically leaves your machine at some point?

By @IshKebab - 2 months
Ha of course he only thinks they are useful for him. The ego...