February 20th, 2025

A.I. Is Prompting an Evolution, Not Extinction, for Coders

Artificial intelligence is enhancing software developers' productivity and requiring new skills. Demand for skilled developers is expected to grow, while entry-level opportunities may decline as A.I. is integrated into the workflow.

Artificial intelligence (A.I.) is transforming the role of software developers rather than eliminating it, according to industry experts. A.I. coding assistants, such as those developed by Microsoft, are increasingly being adopted by software engineers to enhance productivity and streamline coding tasks. These tools can suggest code, identify bugs, and automate testing, allowing developers to save significant time. While concerns exist about A.I. potentially automating many coding jobs, the consensus is that the demand for skilled software developers will continue to grow, albeit with a shift in required skills. Developers will need to adapt by embracing A.I. tools and focusing on higher-level skills such as creativity, critical thinking, and problem-solving. The integration of A.I. into coding practices is expected to improve productivity by 10% to 30%, with some estimates suggesting A.I. could handle up to 90% of coding tasks in the future. However, the impact on entry-level positions remains uncertain, as the demand for junior developers has recently weakened. Training programs are evolving to include A.I. fundamentals, preparing new developers for a workforce where A.I. is integral. Overall, A.I. is seen as a tool that can enhance the capabilities of software engineers rather than replace them.

- A.I. tools are enhancing productivity for software developers, not replacing them.

- The demand for skilled software developers is expected to grow, requiring new skill sets.

- A.I. coding assistants can improve productivity by 10% to 30%.

- Training programs are adapting to include A.I. fundamentals for new developers.

- Entry-level job opportunities may be affected as A.I. tools become more prevalent.

20 comments
By @toprerules - 3 months
People are absolutely insane with their takes on AI replacement theory. The complexity of our stacks has grown exponentially since the 70s. Very few people actually comprehend how many layers of indirection, performance, caching, etc. are between their CRUD web app and bare metal these days.

AI is going to increase the rate of complexity 10 fold by spitting out enormous amounts of code. This is where the job market is for developers. Unless you 100% solve the problem of feeding every single third party monitoring tool, logging, compiler output, system stats down to the temperature of RAM, and then make it actually understand how to fix said enormous system (it can't do this even if you did give it the context by the way), then AI will only increase the amount of engineers you need.

By @agentultra - 3 months
I'd really like to know what the parameters are. I hear claims like, "it saves me an hour a day," or, "I'm 30% more productive with AI." What do these figures mean? They seem like proxies for fuzzy feelings.

When I see boring, repetitive code that I don't want to look at my instinct isn't to ignore it and keep adding more boring, repetitive code. It's like seeing that the dog left a mess on your carpet and pretending you didn't see it. It's easier than training the dog and someone else will clean it... right?

My instinct is to fix the problem causing there to be boring, repetitive code. Too much of that stuff and you end up with a great surface area for security errors, performance problems, etc. And the fewer programmers read that code and try to understand it, the more likely it becomes that nobody will understand it or why it's there.

The idea that we should just generate more code on top of the code until the problem goes away is alien to me.
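
As a concrete (and purely hypothetical) sketch of what that instinct looks like in practice: rather than letting an assistant generate yet another copy of the same validation block, collapse the repetition into one helper so there is less surface area to read, audit, and secure.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical illustration: the kind of check that tends to get generated
// (or copy-pasted) once per field...
//
//   if (name.empty())  throw std::invalid_argument("name must not be empty");
//   if (email.empty()) throw std::invalid_argument("email must not be empty");
//   if (phone.empty()) throw std::invalid_argument("phone must not be empty");
//
// ...versus fixing the cause of the repetition with a single helper.
void require_non_empty(const std::string& value, const std::string& field) {
    if (value.empty()) {
        throw std::invalid_argument(field + " must not be empty");
    }
}

int main() {
    try {
        require_non_empty("Ada", "name");
        require_non_empty("", "email");  // exercises the error path
    } catch (const std::invalid_argument& e) {
        std::cout << "validation failed: " << e.what() << "\n";
    }
}
```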

Although it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.

So really, is AI a tool to make us more productive or a tool to remove our bargaining power?

By @gosub100 - 3 months
Sort of off-topic, but is there any genuinely generative AI for code? From my limited understanding, the model is trained on human-written code and adapts it to whatever most closely matches.

What I'm curious about is, can it find innovative ways to solve problems? Like the infamous Quake 3 inverse-sqrt hack? Can it silently convert (read: optimize) a std::string to a raw char* pointer if it doesn't have any harmful side effects? (I don't mean "can you ask it to do that for you?", I mean can it think to do that on its own?) Can it come up with trippy shit we've never even seen before to solve existing problems? That would truly impress me.
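
For anyone who hasn't seen the hack in question, it is roughly this (a minimal C++ adaptation of the original C, with memcpy standing in for the original's type-punning pointer cast):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Classic Quake III fast inverse square root: guess 1/sqrt(x) from the
// float's bit pattern, then refine with one Newton-Raphson step.
float q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y = number;
    std::uint32_t i;
    std::memcpy(&i, &y, sizeof(i));    // reinterpret the float's bits as an integer
    i = 0x5f3759df - (i >> 1);         // the "magic" initial guess
    std::memcpy(&y, &i, sizeof(y));
    y = y * (1.5f - x2 * y * y);       // one Newton-Raphson refinement step
    return y;
}

int main() {
    std::printf("q_rsqrt(4.0f) = %f (exact: 0.5)\n", q_rsqrt(4.0f));
}
```

The interesting part is exactly the question above: a model trained on existing code can reproduce this, but whether it would ever invent something like it unprompted is another matter.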

Take a bloated electron app, analyze the UI, and output the exact same thing but in C++ or Rust. Work with LLVM and find optimizations a human could never see. I remember seeing a similar concept applied to physical structures (like a small plane fuselage or a car) where the AI "learns" to make a lighter, stronger design and it comes out looking so bizarre: no right angles, lots of strange rounded connections that look almost like a growth of mold. Why can't AI "learn" to improve the state of the art in CS?

By @fzeroracer - 3 months
> To do so, Mr. Giorgi has his own timesaving helper: an A.I. coding assistant. He taps a few keys and the software tool suggests the rest of the line of code. It can also recommend changes, fetch data, identify bugs and run basic tests. Even though the A.I. makes some mistakes, it saves him up to an hour many days.

> Still, nearly two-thirds of software developers are already using A.I. coding tools, according to a survey by Evans Data, a research firm.

> So far, the A.I. agents appear to improve the daily productivity of developers in actual business settings between 10 percent and 30 percent, according to studies. At KPMG, an accounting and consulting firm, developers using GitHub Copilot are saving 4.5 hours a week on average and report that the quality of their code has improved, based on a survey by the firm.

We're in for a really dire future where the worst engineers you can imagine are shoveling out even more garbage code, while the ability to assess it for problems or issues becomes much more difficult.

By @TrackerFF - 3 months
I'm a big user of LLM tools.

The problem, so far, is that they're still...quite unreliable, to say the least. Sometimes I can feed the model files, and it will read and parse the data 100 out of 100 times. Other times, the model seems clueless about what to do, and just spits out code on how to do it manually, with some vague "sorry I can't seem to read the file", multiple times, only to start working again.

And then you have the cases where the models seem to dig themselves into some sort of terminal state, or oscillate between 2-3 states, that they can't get out of - until you fire up a new model, and transfer the code to it.

Overall they do save me a ton of time, especially with boilerplate stuff, but very routinely even the most SOTA models will have their stupid moments, or keep trying to do the same thing.

By @jdashg - 3 months
I always thought hacking scenes in sci-fi were unrealistic, but if you're cooking up AI-fortified code lasagna at your endpoints, there is going to be a mishmash of vulnerabilities: expert, rigorous thought will be spread very thin by the velocity that systemic forces push developers toward.
By @bigtimesink - 3 months
> Mark Zuckerberg, Meta’s chief executive, stirred alarm among developers last month when he predicted that A.I. technology sometime this year would effectively match the performance of a midlevel software engineer

Either Meta has tools an order of magnitude more powerful than everyone else, or he's drinking his own koolaid.

By @FredPret - 3 months
At some point in the past, tools like Wordpress et al made it easy for the average person to roll out their own website.

This probably increased the overall demand for professional website makers and messed-up-Wordpress-fixers.

Now the argument goes that the average business will roll out their own apps using ChatGPT (amusing / scary), or that big software co's will replace engineers with LLMs.

For this last point, I just don't see how any of the current or near-future models could possibly load enough context to do actual engineering as opposed to generating code.

By @atlantic - 3 months
I've found that AI has saved me time consulting Stack Overflow. It combines thorough knowledge of the documentation with a lot of practical experience gleaned from online forums.

It has also saved time producing well-defined functions, for very specific tasks. But you have to know how to work with it, going through several increasingly complex iterations, until you get what you want.

Producing full applications still seems a pipedream at this stage.

By @monicaliu - 3 months
New to AI assisted coding, but I'm finding myself spending a lot of time debugging its convincingly wrong output.
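
A purely hypothetical illustration (not from the comment) of what "convincingly wrong" can look like: generated code that reads like a textbook answer but quietly fails on an edge case.

```cpp
#include <iostream>

// Hypothetical "convincingly wrong" suggestion: reads like a textbook
// leap-year check, but it drops the 400-year rule, so years such as 2000
// are misclassified as common years.
bool is_leap_year_suggested(int year) {
    return year % 4 == 0 && year % 100 != 0;
}

// The full Gregorian rule, for comparison.
bool is_leap_year(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main() {
    std::cout << "2000: suggested=" << is_leap_year_suggested(2000)
              << " correct=" << is_leap_year(2000) << "\n";  // prints 0 vs 1
}
```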
By @notnullorvoid - 3 months
I've been choosing not to use most of the AI code assistant stuff for a while, though I try it every now and then. Each time it's the same outcome: it actively reduces my productivity by a fair amount. I suspect this is due to a mix of the majority of my programming being non-trivial (library building, complex-ish algos), and that I'm a bit of a perfectionist coder who enjoys programming.

LLMs are useful tools for programming, as a kind of search engine and squeaking rubber duck. AI as a programmer is worse than a junior; it's the junior that won't actively learn and improve. I think current AI architecture limits it from being much more than that.

By @jenkstom - 3 months
It seems like AI will generate opportunities for fixing code. Both in reducing internal technical debt ("code maintenance", which is a specialized skill already) and external technical debt (architecture, which is being built by AI also). Eventually AI will be good enough for both of these things as well, and then we may just become the priests of the Temples of Syrinx.
By @m2spring - 3 months
Business wants short-term solutions. It doesn't care about the long-term effects, even if they clearly bite it in the ass.
By @kittikitti - 3 months
Microsoft uses the promise of replacing software engineers to sell its own AI.
By @rickspencer3 - 3 months
In my Python programming, I have found that ChatGPT makes me something like 10x more productive. From learning to use a new API to tracking down bugs, and especially finding errors in my code from stack traces. Getting results goes SO MUCH FASTER.

However, it has not alleviated any responsibility from me to be a good coder, because I have to question literally every little dang thing it suggests. If I am learning a new API, it can write code that works, but I need to go read the reference documentation to make sure that it is using the API with current best practices, for example. A lot of the code I have to flat out ask it why it did things in a certain way, because they look buggy or inefficient, and half the time it apologizes and fixes the code.

So, I use the code in my (personal) projects copiously, but I don't use a single line of code that it generates that I don't understand; otherwise it always leads to problems because it did something completely wrong.

Note that, at work, for good reasons, we don't use AI generated code in our products, but I don't write production code in my day job anyway.

By @arisAlexis - 3 months
Nice cope from programmers. But reality hits hard.
By @reverendsteveii - 3 months
People got wildly out of hand thinking that AI would do what we currently do without us. The real truth is AI is gonna do 10x what we currently do with us.