Ask HN: Can we just admit we want to replace jobs with AI?
The discussion on AI models emphasizes concerns about job automation and the implications of Artificial General Intelligence, highlighting the need for honest dialogue to prepare society for its challenges.
The discussion surrounding artificial intelligence (AI) models, both open and closed source, highlights a growing concern about the impact of automation on jobs. With models like DeepSeek-R1, Llama, and Claude, there is a prevailing sentiment that the focus of AI development is primarily on automating tasks rather than advancing humanity. OpenAI's definition of Artificial General Intelligence (AGI) emphasizes the creation of systems that can outperform humans in economically valuable work, which raises questions about the future of employment. Despite some optimism among AI researchers about the potential benefits of AI, there is a call for a more realistic acknowledgment of the challenges posed by AGI. The argument suggests that a candid discussion about the implications of AGI is necessary for society to prepare adequately for its arrival, rather than fostering unrealistic expectations that could lead to failure.
- The focus of AI development is increasingly on job automation.
- OpenAI defines AGI as systems that outperform humans in economically valuable tasks.
- There is a disconnect between optimism in AI research and the potential negative impacts on employment.
- Honest discussions about AGI's implications are essential for societal preparedness.
- Acknowledging the challenges of AGI can help mitigate future failures.
Related
Someone is wrong on the internet (AGI Doom edition)
The blog post critiques the existential risk of Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.
Ask HN: Let's assume AI does take developer jobs. What's the pivot?
The debate on AI's impact on employment reveals mixed opinions, with concerns about job loss contrasted by optimism. A proactive approach emphasizes adaptability and exploring alternative career paths.
How close is AI to human-level intelligence?
Recent advancements in AI, particularly with OpenAI's o1, have sparked serious discussions about artificial general intelligence (AGI). Experts caution that current large language models lack the necessary components for true AGI.
Sam Altman says "we are now confident we know how to build AGI"
OpenAI CEO Sam Altman believes AGI could be achieved by 2025, despite skepticism from critics about current AI limitations. The development raises concerns about job displacement and economic implications.
My subjective notes on the state of AI at the end of 2024
The AI landscape is evolving, with major developers influencing advancements. Generative knowledge bases are foundational, but progress is plateauing, leading to incremental changes in productivity and education rather than revolutionary shifts.
The outcome of further automation will be to move even more capital under the control of an even smaller number of hands. It's that increasing inequality which is the problem, not AGI.
Going back to the industrial revolution the people who control capital and production have openly sought to replace human labor with automation. Nothing new, just that this time it threatens “knowledge workers” and the techie HN crowd. If you hear someone claim that they want to “advance humanity” with automation they profit from you can safely assume they don’t mean that sincerely.
Are you ready for your Brave New World? :^)
It seems to me, that the path we are on will lead us closer to a Terminator-like future, and less likely to lead to a positive future.
Our leadership are greedy fools.
It is said that AI will make 200 million jobs redundant. For scale, China's workforce alone shrank by 80 million during the previous decade, according to their 2020 census.
Overall, I think we need to increase our efforts on AI, and in particular make it more energy efficient, since it is currently simply expensive to run. Otherwise we will bear the consequences of a globally ageing, literate population.
They're going to change. Where humans sit in the loop, and how they program, will change, probably toward a blend of procedural control and object-oriented processes, where the agents are treated as objects and the process flow is defined in procedural methods. But don't freak out; there's still a ton to do.
There are many design principles we can leverage to enhance human participation and fulfillment instead of just replacing them. Lights out manufacturing hasn't taken off, why would we think a lights out society would work any better?
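The blend described above can be sketched in a few lines. This is only an illustrative toy, not any particular framework: all the names (`Agent`, `draft`, `review`, `pipeline`) are made up for the example. The point is that the agents are plain objects, while the flow between them, including where a human reviewer sits in the loop, stays in an explicit procedural function.

```python
# Toy sketch: agents modeled as objects, overall process flow kept
# in a plain procedural function. All names here are illustrative.

class Agent:
    """A worker 'object' that handles one kind of task."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, task):
        return self.handler(task)

def draft(task):
    # An LLM call would go here; stubbed for the sketch.
    return f"draft of {task}"

def review(text):
    # A human (or another agent) stays in the loop at this step.
    return f"approved: {text}"

def pipeline(task):
    """Procedural control: the flow between agents is explicit and auditable."""
    writer = Agent("writer", draft)
    reviewer = Agent("reviewer", review)
    result = writer.run(task)
    return reviewer.run(result)

print(pipeline("release notes"))
# prints: approved: draft of release notes
```

Because the flow lives in one ordinary function rather than being emergent from agent-to-agent chatter, it is easy to see, test, and change where the human participates.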
It's harder to do than it looks. Offshoring to places with lower wages has kept wages for many white- and blue-collar workers in the West from growing. But it is hard to do correctly.
I suspect AI or AGI or AI with agents will take a longer time to actually work reliably than many suspect.
Not sure how you (as a worker) want to prepare for that - maybe by becoming depressed, quiet and atomized in advance. US leaders (political and big tech) seem to be preparing exactly for that now, but my assumption is you are not one of them.
Between those two points is a period of instability. Judging from history, many people will be impacted, and those people may not transition to new jobs. Though in my case, if I were to lose my software engineering job because AI had become good enough to replace me completely, then my ego tells me we'd be much closer to superhuman AI than I thought, and all bets are off. Software developers don't just write code. They think, and think, and think, and second-guess, and look at the larger stack impact, and scaling, and so on. AI may well get to that level, but IMO there is a reason we still don't see robots fixing plumbing, and it is because the reasoning part of these jobs is much deeper than some people realise.
Once we get to the point that huge numbers of white-collar reasoning workers can be replaced, then I optimistically think we'll also be closer to reducing scarcity. What I'd hope for is that we would adjust to other types of work that we find hard to predict at the moment, just as every other leap in technology has produced. I know the strong narrative is that AI can simply do those too, but try to imagine a world where we have superhuman ability to produce but no consumers, because no one has the means to pay for the produce. That just doesn't work. These so-called "elites" will have no one to sell to. So it seems self-evident to me that a balance will be found. If it isn't, people will elect governments who force that balance. Maybe some rough period ahead, but I don't subscribe to the doom theories on the economic side.
What I do unfortunately subscribe to is doom of another type. Once near superhuman AI is possible, then it feels impossible to stop it from being widely available, eventually. And given how many humans there are on the planet, I feel there will be some who are deranged enough to use the technology for full planet wide destruction of some sort. I think this feels almost close to inevitable.
I think the people who will be most easily displaced are those that don't have an additional combined skillset. I see a lot of software engineers with a CS background whose skillset is only writing software. I think people from mixed backgrounds are likely to do better in a big disruption since they've got another thing to jump to, perhaps in a different industry.
Every ~5 yrs I've had to reinvent myself.
So, what about the disruption from AI? Just suck it up, ok? You have to reinvent yourself anyway if you are in the tech sector. Start moving already. Every tech disruption is a challenge and opportunity.
Sure, this won't be true for every role and organization, but for many this will definitely be true.
There was a time when cutting grass was done with scythes, tools with long, specially shaped curved blades. All day back and forth swinging the blade to get grass just the perfect length. Swing, take a step, swing. It was backbreaking labor in the sun. So lawns were probably quite expensive, and having your own was probably pretty flashy, or maybe they were public, provided by the state to the people, or maybe both.
Along comes the string of inventions that led to lawnmowers. Now, anyone can mow grass in an afternoon. Gone are the lawn-scything jobs! Did the sky fall then?
Of course we know how this played out. Some lawn-scythers lamented the loss of their work, but they're forgotten by history. Other clever lawn-scythers went and trained as lawn-mowers, a few even cleverer ones became lawnmower mechanics, a few became engineers so they could design lawnmowers, and some started lawnmower factories and employed plenty of workers making way more lawnmowers than anyone thought possible.
Lawnmowers aren’t free but they’re super cheap and getting cheaper all the time, they’re abundant, and the real cost now is labor, which is going up all the time. So what do you do? You pay a lawnmowing person to take care of your lawn while you work your high paid office job.
Whenever someone says AI is coming for the jobs, ask them which exact AI model is coming for the jobs, which tool built by which startup is coming for the jobs, and if that startup really believes its tool replaces workers, why is it still hiring?
Please just think about it a bit. Everyone seems to be thinking "I'll lose my job!" but what you need to be thinking is "everyone loses their job".
They have, and they will reap the benefits as the "alphas"; what happens to the rest of humanity is someone else's problem.
IT, doctors, lawyers will be mostly gone. Probably most office worker jobs that I’m not thinking of.
Best thing to do is become a nurse, that’ll be bulletproof until AGI makes progress on robotics.
I think this is my main disconnect with the pessimists, I don't see "stop AI progress" as a valid option anyway.
My sense is that there is a lot of compute capacity, data acquisition, etc., required to create any dent in this type of automation.
Likely the next 2 years are focused on more “compute”, “data center capacity”, and “energy”. That allows for a) better training, b) more inference load, … and hence prepare for more automation.
Much like the Y2K era of 30 years back, we are entering the new era of YNA: whY Not Automate.
We are doing both!
You are assuming people like to work, and jobs are something good. Without jobs, people will have less stress and more free time. They will spend more time on leisure activities, with their families, and so on.
Also, jobs contribute to carbon emissions: we have to maintain cities and large offices, and commuting by car produces CO2...
Not all bad.
But from an individual perspective, I don't think that is the case. Since AlphaGo was first released and beat world-class players, have all those players gone? Not really; if anything, it has prompted more people to study Go with AI.
As a software engineer myself, do I enjoy using AI for coding? Yes, for much trivial and repetitive work. But do I want it to take my coding work away entirely, so that I just type natural language? My answer is no. I still need the dopamine hit from coding myself, whether for work or hobby time, even when I am rebuilding wheels other folks have already built, and I believe many of us feel the same.
If AGI devalues most human labour, what still holds value comes down to:
1) Land
2) Natural Resources
3) Energy
4) Weapons
5) Physical/in-person Labour
That means that, unless there are big societal changes, you had better have at least a few of those available to you. A huge percentage of Americans own neither land nor investments in durable assets, which means that at best they are thrown into a suddenly way-oversupplied physical labour market in order to make a living.
At the geopolitical level, a country like the United States, with so much of its current wealth tied to amorphous things like IP, the petro dollar, and general stability, certainly has all 5 of the above categories covered. However, there's a lot of vulnerable perceived value in the stock market and housing market which might vanish if the world is hit with a sudden devaluation of labour. Even if the US is theoretically poised to capitalize off of AGI, a sudden collapse of the middle class and therefore the housing market and therefore the entire financial sector might ruin that.
UBI is the bare minimum I can imagine preventing total societal collapse. Countries with existent strong safety nets or fully socialist systems might have a big advantage in surviving this. I certainly feel a certain sense of comfort living in Quebec, where although many aspects of government are broken, I can at least count on a robust and cheap hydroelectric-supplied grid and reasonable social safety nets. Between AI and climate change, I feel like there are worse places to be this century.
This is not just for businesses but for everything in life.
For example, we all buy robot vacuums, right? People even rearrange furniture to make sure it is compatible with the robot vacuum. Everyone wants it to do more and be more reliable.
Seems to me that AI (today) allows amateurs to generate low-quality professional work, cheaply, disrupting the careers of many creative professionals. Where does that lead?
Some people believe AGI is imminent; others believe AGI is here now. Observe their behavior, calm your anticipation, and satisfy your curiosity rather than seeking to confirm a bias on the matter.
The tech is new in my experience, and accepting a claim this grand, one beyond my capacity to validate, would require me to take on faith the word of someone I don't know and have never seen, who likely generated the claim with ChatGPT in the first place.
In hunter/gatherer days within small bands, and also to a large extent within families (but perhaps a bit less now) the method was Generalized Reciprocity[1], where people basically take care of each other and share. This was supported by the extremely fertile and bountiful forests and other resources that were shared between early people.
Later, the large "water monopoly" cultures like ancient Egypt had largely Redistributive economies, where resources flowed to the center (the Pharaoh, etc.) and were distributed again outwards.
Finally we got Market Exchange, where people bargain in the marketplace for goods, and this has been efficiently able (to a greater or lesser degree) to distribute resources for hundreds of years. Of course we have some redistributive elements like Social Security and Welfare, but these are limited.
Market Exchange now relies on basically everyone having a job or some other means of acquiring money. But with automation this breaks down, because jobs will increasingly be taken by AIs, and then by robotic AIs.
So only a few possibilities seem likely: either we end up with almost everyone a pauper, with dole payments increased just to the point where there's no revolution, or we somehow end up with everyone owning the AI resources and their output, taking the place that forests and other ancient resources played in hunter-gatherer days, and everyone can eventually be wealthy.
It looks as if, at least in the USA we are going down the path of having a tiny oligarch class with everyone else locked out of ownership of the products of the AI, but this may not be true everywhere, and perhaps somehow won't end up being true here.
[1] Stone Age Economics by Marshall Sahlins https://archive.org/details/stoneageeconomic0000sahl/page/n5...
No because that's not real. AI will automate jobs but also advance humanity.
It's like saying: can't we be real and admit that tractors and farm machinery were not advancing humanity, they were just replacing farm jobs? They replaced some farm jobs, but they also provided food abundance and let people go off and work as personal trainers and the like to help people lose the weight, and as scientists, artists, etc.
A simplistic take: a major impact today is that amateurs using AI can generate tons of low-quality professional work for peanuts. Overall quality suffers, but profits soar, or at least rise enough. Where does that lead?
These are the same thing.
Automation does advance humanity. The reason for the current world prosperity has been lots of automation.
(There is the separate and much more concerning risk of humans going the way of the horse, but that does not seem to be what you are concerned about)
Cheap AI labor will depress wages and therefore reduce consumer demand. This is not a recipe for GDP growth or a vibrant economy, any more than outsourcing to reduce labor costs has been.
The people trying to sell AI as good for the economy will no doubt tell you that companies don't want to reduce your salary or lay you off - that they will be happy to keep payroll the same, or increase it (increased consumer spending = GDP growth!) by employing you as a highly paid AI whisperer. Yeah, no, companies are looking to reduce payroll.
I believe the goal is to advance humanity by automating out jobs.
I don't know if that is wishful thinking or if it will actually work (Nash equilibria and politics make me suspect the former), but the idea that it was possible to do both at the same time was already a widespread meme in the spaces that culturally drive the sort of AI research we now see.
The idea that we can have both is also why optimistic people sometimes bring up (amongst other things) UBI in this context.
I'm really not sure what an individual can do to "prepare" for AGI; it's like trying to guess in 1884 if you should bet on Laissez-faire USA, the British Empire, or that manifesto you've heard of from Marx a few decades ago. And then deciding that, no matter what, the world will always need more buggy-whips and humans will never fly in heavier-than-air vehicles.
Nope. People will prepare only as much as engineers care, and engineers don't care. Educating the people is tedious. It's easier to "manipulate"/direct them, which is the job of representatives, who are about status and money and being groomed by the industries, which are exclusively about money.
People are fine without AGI until they are not. That's another 15 years at least.
If you want to worry, worry about local solutions to climate change mitigation where you need old school manpower, big machines, shovels and logistics.
Then robots will become a thing, and the same goes for manual labor. I think the non-manual-labor working class was chosen first because they have demonstrated sentiments the powers that be find problematic.
A lot of people might die, or UBI may become a thing. Migration to less developed areas might happen, although I doubt the powers that be will allow cross-border movement; they love their fiefdoms.
The barons will always be there unless there is a Skynet-type event that wipes out humanity. Same shit, different day. If you're a peasant like me, not a good dev, without any dev education or connections, you'll most probably die. It is what it is.