All the existential risk, none of the economic impact. That's a shitty trade
Despite high expectations, AI advancements have not significantly impacted productivity or profits. Concerns remain that creating highly intelligent entities could pose existential threats, urging careful monitoring and management of AI's implications.
The article discusses the lack of economic impact from AI advancements despite the high expectations surrounding the technology. Various reports and analyses, including one by Goldman Sachs, indicate that AI has not significantly boosted productivity or profits for companies. The article also touches on the potential risks of AI development, highlighting concerns that highly intelligent entities created in the future could pose existential threats. While some argue that introducing more intelligence, even if artificial, could have significant benefits, the author emphasizes the need to weigh the potential risks and downsides of advancing AI technology. Despite the lack of immediate existential risks, the article suggests that the introduction of AI has implications that need to be carefully monitored and managed.
Related
Goldman Sachs says the return on investment for AI might be disappointing
Goldman Sachs warns of potential disappointment in over $1 trillion AI investments by tech firms. High costs, performance limitations, and uncertainties around future cost reductions pose challenges for AI adoption.
Gen AI: too much spend, too little benefit?
Tech giants and entities invest $1 trillion in generative AI technology, including data centers and chips. Despite substantial spending, tangible benefits remain uncertain, raising questions about future AI returns and economic implications.
What happened to the artificial-intelligence revolution?
The AI revolution in San Francisco, led by tech giants, has seen massive investments but limited economic impact. Despite high expectations, firms struggle to adopt AI effectively, with revenue lagging behind market value.
Pop Culture
Goldman Sachs report questions generative AI's productivity benefits, power demands, and industry hype. Economist Daron Acemoglu doubts AI's transformative potential, highlighting limitations in real-world applications and escalating training costs.
Goldman Sachs: AI Is overhyped, expensive, and unreliable
Goldman Sachs questions generative AI's economic viability due to high costs and limited benefits. Experts doubt AI's transformative impact, citing unreliable technology and skepticism about scalability and profitability. Venture capital analysis raises concerns about revenue generation.
Then, suppose AI does somehow think itself into effecting human annihilation: what happens when all the humans are dead? At the current state of things, all electricity production on the planet goes dark in maybe two weeks. Nuke plants irradiate significant parts of the earth, chip fabs are bricked, metal production is toast. In short, the AI necessarily dies too.
The only way for this not to happen is AI-controlled robots first gaining complete control of massive chunks of the economy. If AI is as useless as the article claims, then that will effectively never happen. Thus, AI doom scenarios are always a murder-suicide situation, and I’m not sure I believe anyone who says a superhumanly capable planner would pursue that plan. It even negates the “paperclip optimizer,” as the AI’s demise obviously puts a hard upper limit on the number of paperclips produced.
In the AI case, advancement has been so rapid that to a certain extent, it makes sense for large companies to delay some aspects of retooling, because better models that come out next month may have different characteristics or requirements. We haven't seen the big shifts in productivity because reorganizing around new tech is slow, and the tech itself is still changing so fast.
But even if they were replacements, companies do not suddenly drop all of their current processes and rewrite them from scratch using new shiny tools. And most companies are too slow to do that within the 2 years that ChatGPT/Stable Diffusion have been out anyway.
If I can use an AI assistant to complete a job in 3 hours that used to take 9 hours, that's a win for me, because now I have 6 hours to spend on a personal open-source project, some family business, recreational (and healthy) exercise, or some community-beneficial volunteer activities. While this might not show up on the corporate balance sheet, it's going to make the overall society a lot healthier.
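As a quick back-of-the-envelope sketch of that arithmetic (the 9-hour and 3-hour figures are the commenter's hypothetical, not measured data):

```python
# Hypothetical figures from the comment above, not measured data.
baseline_hours = 9   # time the job used to take unassisted
assisted_hours = 3   # time with an AI assistant

hours_freed = baseline_hours - assisted_hours   # 6 hours reclaimed for other pursuits
speedup = baseline_hours / assisted_hours       # 3x faster on that particular task

print(f"Hours freed per task: {hours_freed}")
print(f"Speedup on the task: {speedup:.0f}x")
```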
The existential risk of AI to the current socioeconomic order might have more to do with AI systems making better economic decisions than the overcompensated executive-shareholder sector does, and at far lower cost to the corporation or institution that adopts it. This makes logical sense, since the skills involved in humans climbing to positions of power in organizations (manipulation of other people, hyping up one's capabilities, clinging desperately to power, etc.) don't translate into efficient well-run operations.
It takes years to decades for new technologies to achieve their full effect. Factories based on steam had to be rebuilt around electricity. That took decades. It will take time to rebuild large operations around AI.
1) Current AI, mostly LLMs, not much economic impact, no extinction risk.
2) Sci-fi like future ASI, big impact, some existential risk.
Discussing the second is tricky, since we don't quite know how it will go and it will probably change all sorts of things. But regarding the 'we are all going to die' type stuff: currently we will all die with certainty due to aging; with sci-fi AI, not necessarily. So that kind of p(doom) is much lower with ASI.
No? None of that is predecided. You are concluding that based on vague threats made by people who are "in the know" that can't even substantiate their own fear when confronted over it.
Existential risk arises when we, humanity, trust our own lives to something that is faulty. The solution is to not use, or design around, fault-agnostic systems, not to fear scenarios where stupid people make mistakes. AI is not red mercury; it is not a magical force multiplier that terrorists use to make lunchbox nukes.
I'm getting so tired of HN submissions like this that throw an alarmist opinion over the fence like this guy is secretly onto something. This is bunk.
I had hoped for something around 5%, which is not massive, but still a non-trivial number.
Not sure how AI assistants in other “high value” professions fare. Say in legal, healthcare, finance, civil engineering, or business consulting, to name a few. Does anyone have reliable figures?
I really can't think of many, if any, real positives. One example I like to use is video game tutorial articles. I don't know if you've tried to use basically any search engine to find reliable video game information/walkthroughs lately, but the amount of blatant misinformation that has proliferated in this space makes searching the web for even the most basic information an exercise in complete futility.
This is something "AI" should be good at. Video game walkthroughs have been done to death; a game that's been out for 2+ years has likely already had every single secret and path in it discovered 10,000 times over and posted all over Reddit and other sites. Yet, after completing my 2nd Elden Ring run this weekend, my first since 2022, I was struck by how much blatantly wrong info I stumbled across, even on websites I had considered "authoritative." And it's always subtle enough to be impossibly annoying - stuff like "head south from this site" when in fact it should have said "north." Very small errors that seem like the result of hallucinations and that cause an inordinate amount of time spent figuring out what's wrong - much like my experience with coding "assistants."
If it can't get something like this correct - which I'm aware isn't necessarily a purely "AI" problem - how do we expect it to do anything important? I really have not seen a solid case yet and would love to be convinced otherwise, but the more I learn the less I like what I see.
Ultimately, everything will work out just fine. AI is not going to doom us.
AI's gardening skills are so bad, even the weeds can't grow.
Doomsayers can now downvote me.