July 2nd, 2024

Gen AI: too much spend, too little benefit?

Tech giants and other entities are investing an estimated $1 trillion in generative AI technology, including data centers and chips. Despite the substantial spending, tangible benefits remain uncertain, raising questions about future AI returns and their economic implications.

The article discusses the roughly $1 trillion that tech giants and other entities are investing in generative AI, spanning data centers, chips, AI infrastructure, and the power grid. Despite the scale of the spending, tangible benefits or returns have yet to materialize. The article asks whether this massive expenditure will eventually deliver the expected AI benefits and returns, and examines what the outcome could mean for economies, companies, and markets.

Related

AI's $600B Question

The AI industry's revenue growth and market dynamics are evolving, and the gap between infrastructure spending and revenue has widened, now dubbed AI's $600B question. Nvidia's dominance and GPU data centers play crucial roles. Challenges like pricing power and investment risks persist, emphasizing the importance of long-term innovation and realistic perspectives.

Taking a closer look at AI's supposed energy apocalypse

Artificial intelligence's impact on energy consumption in data centers is debated. Current data shows AI's energy use is a fraction of overall consumption, with potential growth by 2027. Efforts to enhance efficiency are crucial.

Goldman Sachs says the return on investment for AI might be disappointing

Goldman Sachs warns that the more than $1 trillion tech firms are investing in AI may deliver disappointing returns. High costs, performance limitations, and uncertainty about future cost reductions pose challenges for AI adoption.

The A.I. Boom Has an Unlikely Early Winner: Wonky Consultants

Consulting firms like Boston Consulting Group, McKinsey, and KPMG profit from the AI surge, guiding businesses in adopting generative artificial intelligence. Challenges exist, but successful applications highlight the technology's potential benefits.

13 comments
By @jrm4 - 5 months
You know you're in trouble when even GOLDMAN SACHS is like "maybe this thing is a bubble"
By @wiradikusuma - 5 months
It could end up like chatbots (pre-Gen AI) and 3D printers. I started a company a few years back, literally with the name "bot" in it, and bought myself a 3D printer. Both are left in the dust.

I don't think Gen AI will be totally bust, but it won't be as promised (anytime soon). Just like in a software project, the last 10% is another 90%.

By @ijustlovemath - 5 months
By @muglug - 5 months
A whole lot of large tech companies are proudly shipping their “it demoed well” hackathon projects. Investors are far more eager for these things to get released than customers are to use them.
By @djaouen - 5 months
AI does have some legitimate uses (e.g., Grammarly and GitHub Copilot). It’s just when people say they are going to replace a whole operating system with AI that they are full of shit lol
By @piva00 - 5 months
I had no idea it had reached US$1tn in capex; that's an insane amount of money. That's 1/62nd of what a Stanford study said it would take to move the whole global population completely away from fossil fuels for our energy demands...
By @empath75 - 5 months
There's no way that that amount of projected spend over the next 10 years doesn't result in a lot of new efficiencies in chip production costs and energy usage, particularly a move away from graphics cards to more purpose-built chips and algorithms designed for those chips.
By @torpfactory - 5 months
It seems like the killer apps for generative AI right now are:

1) Automating boring reading and writing tasks. Think marketing copy, recommendation letters, summarizing material, writing proposals, etc. LLMs are pretty good at this stuff, but these are not many people's core job responsibilities (though they may take up a lot of their time). Consider it a productivity booster for the most part. Some entry level jobs will be eliminated, and this may create problems down the road as the pipeline of employees to oversee LLMs erodes. (A rough sketch of this kind of single-call automation follows the list.)

2) Code writing tools a la Copilot for certain "boilerplate" code in commonly used languages. I think the impact is similar to (1) where entry level jobs erode and this may impact employee pipelines.
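
As an illustration of how little wiring task (1) takes, here is a minimal Python sketch. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt, and sample text are placeholders, not anything from the article.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def summarize(text: str) -> str:
        """Draft a short summary of arbitrary text; a human still reviews the result."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name; any chat-capable model works
            messages=[
                {"role": "system", "content": "Summarize the user's text in three bullet points."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(summarize("Quarterly report: revenue grew 4%, churn fell, and two new regions launched."))

The integration itself is trivial; the real cost is the human review step described next.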

The core problem (as I see it) is that LLMs don't produce outputs good enough to be used without human oversight except on a small subset of tasks. So you end up needing humans (maybe fewer of them) to check the LLM output is headed in the right direction before you let it out into the world.

Consider voice interface LLMs for customer service. When will they get good enough to do the job with real money on the line? If your airline help desk keeps giving away free flights or on the flip side infuriating passengers by refusing allowed changes, can you really use it in production? My sense is they aren't good enough to replace the usual phone tree just yet.

When accuracy doesn't matter that much, LLMs will really shine because then they can be used without a human in the loop. Think some marketing/advertising and especially, especially propaganda.

I think the existing killer apps don't yet have enough money/savings in them to justify the spend. If generative AI technologies can get good enough on the accuracy front to remove humans from the loop in more contexts, we will be talking about much more dramatic value.

By @iamleppert - 5 months
What is lacking in AI is a concept of pain and suffering. Those are key to the human experience. If we want a truly capable AI, or even an AGI, we need to add in a suffering feature to these models.
By @andy_ppp - 5 months
Is there any way to short OpenAI and this nonsense that AGI is round the corner? They might prove to be useful to some degree (niche areas), but as soon as I give the LLMs information or problems they haven't seen before they completely struggle to do anything sensible. Try getting them to compare two different JSON documents: they are very confident and produce absolute garbage if the documents have not been seen at the training stage.
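
For what it's worth, the JSON-comparison task is fully mechanical, which is what makes confident-but-wrong LLM answers on it so easy to notice. A rough, self-contained sketch of a deterministic diff (the field names in the example are made up):

    import json

    def json_diff(a, b, path=""):
        """Recursively report the paths at which two parsed JSON values differ."""
        diffs = []
        if isinstance(a, dict) and isinstance(b, dict):
            for key in sorted(set(a) | set(b)):
                if key not in a:
                    diffs.append(f"{path}/{key}: only in the second document")
                elif key not in b:
                    diffs.append(f"{path}/{key}: only in the first document")
                else:
                    diffs.extend(json_diff(a[key], b[key], f"{path}/{key}"))
        elif isinstance(a, list) and isinstance(b, list):
            for i, (x, y) in enumerate(zip(a, b)):
                diffs.extend(json_diff(x, y, f"{path}[{i}]"))
            if len(a) != len(b):
                diffs.append(f"{path}: list lengths differ ({len(a)} vs {len(b)})")
        elif a != b:
            diffs.append(f"{path}: {a!r} != {b!r}")
        return diffs

    doc1 = json.loads('{"name": "alpha", "tags": ["x", "y"], "count": 3}')
    doc2 = json.loads('{"name": "alpha", "tags": ["x", "z"], "count": 4}')
    for line in json_diff(doc1, doc2):
        print(line)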

If they can't handle new ideas, humans will always be much more useful; these systems are good for reference and human learning but are not good for creating something new and of value. I've noticed that for text, LLMs are quite weirdly repetitive and have an empty style that requires a lot of editing to get it into a shape humans would craft.

People will say the improvements are coming, but I think most of them have come from more data, which is running out. I think one of the most profound things about real intelligence is being able to define and update concepts within your own mind; how to add new information to LLMs in real time and have that reflected across the board seems intractable given the training and refinement these things sit upon. There is no clear unit of information about a concept that links to all the other ideas. LLMs seem quite limited by this.

The brain is so much more complex than these algorithms, and so much more flexible; I don't see how a very good encyclopaedia with some fuzzy AI concept-extraction capability is in any way the same as the human brain being able to apply and adapt concepts from all around art, science, literature and the human experience.