July 22nd, 2024

All the existential risk, none of the economic impact. That's a shitty trade

Despite high expectations, AI advancements have not significantly improved productivity or profits. At the same time, the prospect of creating highly intelligent entities raises potential existential threats, and the article urges careful monitoring and management of AI's implications.

The article discusses the lack of economic impact from advancements in AI despite the high expectations surrounding the technology. Various reports and analyses, including one by Goldman Sachs, indicate that AI has not significantly boosted productivity or profits for companies. The article also touches on the potential risks associated with AI development, highlighting concerns about the creation of highly intelligent entities that could pose existential threats in the future. While some argue that introducing more intelligence, even if artificial, could have significant benefits, the author emphasizes the need to consider the potential risks and downsides of advancing AI technology. Despite the lack of immediate existential risks, the article suggests that the introduction of AI has implications that need to be carefully monitored and managed.

20 comments
By @bglazer - 3 months
If AI has no economic impact, it almost by definition also has no ability to marshal the kind of resources necessary to destroy all of humanity. All the doom scenarios seem to rely on quasi-magical abilities springing from advanced intelligence, but ignore that these scenarios require the AI to have access to significant resources.

Then, suppose AI does somehow think itself into effecting human annihilation: what happens when all the humans are dead? At the current state of things, all electricity production on the planet goes dark in maybe two weeks, nuke plants irradiate significant parts of the earth, chip fabs are bricked, and metal production is toast. In short, the AI necessarily dies too.

The only way for this not to happen is AI-controlled robots first gaining complete control of massive chunks of the economy. If AI is as useless as the article claims, that will effectively never happen. Thus, AI doom scenarios are always a murder-suicide situation, and I’m not sure I believe anyone who says a superhumanly capable planner would pursue that plan. It even negates the “paperclip optimizer”: the AI’s demise obviously puts a hard upper limit on the number of paperclips produced.

By @abeppu - 3 months
I saw some blurb (I can't find it now) about the optimal time to launch an interstellar probe, and the punchline was that embarking too soon means being overtaken by a vehicle that launched later with better tech. When the rate of technological progress relative to the duration of a mission is above some threshold, it's rational to do more front-loaded research before committing to a particular plan to apply it.

In the AI case, advancement has been so rapid that to a certain extent, it makes sense for large companies to delay some aspects of retooling, because better models that come out next month may have different characteristics or requirements. We haven't seen the big shifts in productivity because reorganizing around new tech is slow, and the tech itself is still changing so fast.
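
A toy version of that launch-timing tradeoff, as I read it (all numbers are invented for illustration, and it assumes travel speed improves exponentially at a fixed rate):

    import math

    def arrival_time(wait_years, distance_ly, v0, growth_rate):
        # total time = years spent waiting + travel time at the speed
        # available when you finally launch
        speed = v0 * math.exp(growth_rate * wait_years)
        return wait_years + distance_ly / speed

    def optimal_wait(distance_ly, v0, growth_rate):
        # closed form from d/dt [t + d / (v0 * e^(r*t))] = 0
        t = math.log(growth_rate * distance_ly / v0) / growth_rate
        return max(t, 0.0)  # if progress is slow enough, just launch now

    # hypothetical numbers: ~4.2 light-years, 0.1% of c today, 2%/yr improvement
    d, v0, r = 4.2, 0.001, 0.02
    t_star = optimal_wait(d, v0, r)
    print(f"launch now:       arrive in {arrival_time(0, d, v0, r):,.0f} years")
    print(f"wait {t_star:.0f} years:   arrive in {arrival_time(t_star, d, v0, r):,.0f} years")

The same shape of argument applies to retooling a company: if the models improve fast relative to how long a reorganization takes, waiting can beat committing early.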

By @hnthrow289570 - 3 months
This makes sense to me at the moment because these AI products are enhancements, not entire replacements.

But even if they were replacements, companies do not suddenly drop all of their current processes and rewrite them from scratch using new shiny tools. And most companies are too slow to do that within the 2 years that ChatGPT/Stable Diffusion have been out anyway.

By @photochemsyn - 3 months
"The latest estimates, using official figures, suggest that real output per employee in the median rich country is not growing at all."

If I can use an AI assistant to complete a job in 3 hours that used to take 9 hours, that's a win for me, because now I have 6 hours to spend on a personal open-source project, some family business, recreational (and healthy) exercise, or some community-beneficial volunteer activities. While this might not show up on the corporate balance sheet, it's going to make the overall society a lot healthier.

The existential risk of AI to the current socioeconomic order might have more to do with AI systems making better economic decisions than the overcompensated executive-shareholder sector does, and at far lower cost to the corporation or institution that adopts it. This makes logical sense, since the skills involved in humans climbing to positions of power in organizations (manipulation of other people, hyping up one's capabilities, clinging desperately to power, etc.) don't translate into efficient well-run operations.

By @daft_pink - 3 months
I think the AI companies will not reap all the benefit of AI, just as the car companies, with the exception of a few winners, didn’t reap the rewards of vehicles. I think average human beings will receive the benefit of AI: I am spending $20 per month, but reaping huge rewards from the emails and documents that I write with these tools.

By @blueyes - 3 months
Another Luddite take from someone who clearly doesn't engage with the technology directly, either as a builder or a seller.

It takes years to decades for new technologies to achieve their full effect. Factories based on steam had to be rebuilt around electricity. That took decades. It will take time to rebuild large operations around AI.

By @ppqqrr - 3 months
If a miraculous new technology that helps individuals grok any conceivable topic has “no economic impact,” you should probably question how you measure economic impact, perhaps even your very definition of the economy and of the nature of human labor and collaboration. Fundamental changes at the base of the hierarchy will always escape the attention of the high-level managerial class, because the one thing they’re never taught to do is consider the possibility that their inherited power structure may become irrelevant.

By @cbsmith - 3 months
The first comment I saw nailed it for me: in the '80s and '90s there was lots of investment in computers as the PC revolution took off, and the productivity gains were minimal. As with all new technologies, it takes a while for there to be macroeconomic-scale changes to industry, and those changes aren't going to come without the up-front investment. This article (and frankly, much of society) is looking at the ROI on the wrong time scale.

By @tim333 - 3 months
You've kind of got two different things:

1) Current AI, mostly LLMs, not much economic impact, no extinction risk.

2) Sci-fi like future ASI, big impact, some existential risk.

Discussing the second is tricky, as we don't quite know how it will go, and it will probably change all sorts of things. But re the 'we are all going to die' type stuff: currently we will die with certainty, due to age; with sci-fi AI, not necessarily. So that kind of p(doom) is much lower with ASI.

By @christkv - 3 months
I don't think we have seen the start of productivity gains. We are using it to write drafts for public competitions. It’s turned something that would be weeks of pain into days. That means the people involved can spend less time on trite work and focus on editorial work. It’s also let our small team take on more parallel bids with less frustration.

By @talldayo - 3 months
> the longer AI takes to show up in any positive economic indicators, the more it becomes the case that AI has brought increasing existential risk in exchange for minimal upside.

No? None of that is predetermined. You are concluding that based on vague threats made by people who are "in the know" but can't even substantiate their own fear when confronted over it.

Existential risk arises when we, humanity, entrust our lives to something that is faulty. The solution is to not use or design around fault-agnostic systems, not to fear scenarios where stupid people make mistakes. AI is not red mercury; it is not a magical force multiplier that terrorists use to make lunchbox nukes.

I'm getting so tired of HN submissions like this that throw an alarmist opinion over the fence like this guy is secretly onto something. This is bunk.

By @hankman86 - 3 months
Are there any independent statistics for how much AI-based coding assistants like GitHub CoPilot improve an average software developer's productivity?

I had hoped for something around 5%, which is not massive, but still a non-trivial number.

Not sure how AI assistants in other “high value” professions fare. Say in legal, healthcare, finance, civil engineering, or business consulting, to name a few. Does anyone have reliable figures?

By @geoffmanning - 3 months
Perhaps we should question how productivity is measured and learn to control for all the various factors. I think there is much more than AI at play here, and in fact I would guess that AI has been a major factor in offsetting (preventing) an overall decrease in "productivity" (as currently defined).

By @JohnMakin - 3 months
The problems with the current generation and rollout of "AI" (which I am using synonymously in this post with large language models, because that's the context it mostly seems to be used in by print media) are so numerous that I could not list them all in one post, and this has been debated to an absurd degree already, on this site and on other platforms.

I really can't think of many, if any, real positives. One example I like to use is video game tutorial articles. I don't know if you've tried to use basically any search engine to find reliable video game information or walkthroughs lately, but the amount of blatant misinformation that has proliferated in this space makes searching the web for even the most basic information an exercise in complete futility.

This is something "AI" should be good at. Video game walkthroughs have been done to death; a game that's been out for 2+ years has likely already had every single secret and path in it discovered 10,000 times over and posted all over Reddit and other sites. Yet, after completing my 2nd Elden Ring run this weekend, my first since 2022, I was struck by how much blatantly wrong info I stumbled across, even on websites I had considered "authoritative." And it's always subtle enough to be impossibly annoying: stuff like "head south from this site" when in fact it should have said "north." Very small errors that seem like the result of hallucinations and that cause an inordinate amount of time spent figuring out what's wrong, much like my experience with coding "assistants."

If it can't get something like this correct - which I'm aware isn't necessarily a purely "AI" problem - how do we expect it to do anything important? I really have not seen a solid case yet and would love to be convinced otherwise, but the more I learn the less I like what I see.

By @ImHereToVote - 3 months
The ultimate marshmallow test for humanity.

By @incomingpain - 3 months
Every new tech breakthrough gets the 'it's going to destroy us, think of all the stable-boy jobs that will be lost' reaction.

Ultimately, everything works out just fine. AI is not going to doom us.

AI's gardening skills are so bad, even the weeds can't grow.

Doomsayers can now downvote me.

By @exe34 - 3 months
If there's no economic impact, there won't be existential risk.

By @julienreszka - 3 months
They want to buy the dip, that's all. Anybody who actually uses the models with a good workflow (specs => unit tests => function that passes) sees the benefits.
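
A minimal sketch of what such a specs => unit tests => function workflow could look like (my own illustration, not the commenter's actual setup; the slugify example and its spec are invented):

    import unittest

    # 1. Spec: slugify(title) lowercases, keeps letters/digits, joins words with "-".
    def slugify(title: str) -> str:
        # 3. Implementation (model-drafted or hand-written); it is only
        #    accepted once the tests below pass.
        words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
        return "-".join(words)

    # 2. Tests written against the spec, before trusting any generated code.
    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_whitespace(self):
            self.assertEqual(slugify("  AI   economic   impact "), "ai-economic-impact")

    if __name__ == "__main__":
        unittest.main()

If the model's draft fails a test, you regenerate or fix the code rather than adjusting the tests to match the output.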