September 16th, 2024

The Subprime AI Crisis

The generative AI industry, especially OpenAI, faces financial challenges: OpenAI is seeking $6.5 billion while projecting $5 billion in losses for 2024, amid low adoption rates and skepticism about its new model o1's capabilities.

The current state of the generative AI industry, particularly concerning OpenAI, is precarious and may lead to a significant collapse. The author expresses concerns about the sustainability of the AI boom, highlighting signs of distress such as layoffs, leadership changes, and the need for massive funding. OpenAI is reportedly seeking to raise $6.5 billion to $7 billion at a valuation of $150 billion, while also attempting to secure $5 billion in debt. The company faces immense financial pressure, with projected losses of $5 billion in 2024 and increasing operational costs. Although the company generates some revenue, the profitability of generative AI remains questionable, as evidenced by low adoption rates of AI features in Microsoft's products. The author warns that the potential fallout from an AI bubble burst could result in significant job losses and damage to the tech industry. The launch of OpenAI's new model, o1, has been met with skepticism, as it is perceived as underwhelming and flawed. Overall, the article paints a bleak picture of the future of generative AI, suggesting that the industry is in a state of magical thinking that could lead to dire consequences.

- The generative AI industry, particularly OpenAI, is facing significant financial challenges.

- OpenAI is seeking massive funding while projecting substantial losses.

- Adoption rates for AI features in business software are low, raising questions about their value.

- The potential collapse of the AI bubble could lead to widespread job losses in the tech sector.

- The recent launch of OpenAI's model o1 has been met with skepticism regarding its capabilities.

19 comments
By @pocketarc - 7 months
I think the main problem with AI sustainability is that all this VC investment is burning through ungodly amounts of money to produce something that provides no moat.

Even if all AI investment froze tomorrow, I'd still have my 405B Llama 3.1 model, along with countless other smaller models, and I'd run them to do whatever the heck I felt like doing, with no commitment to any provider.

Writing code with AI? I could swap to a local model. Costs nothing. Provides no revenue to any VC-backed company.
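
For what it's worth, the swap is usually just a config change. Here's a rough sketch, assuming a local Ollama server exposing its OpenAI-compatible endpoint and an already-pulled Llama 3.1 model (the URL and model tag are illustrative, not a recommendation):

```python
# Point the standard OpenAI client at a local server instead of a hosted API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="unused",                      # local servers ignore the key, but the client wants one
)

resp = client.chat.completions.create(
    model="llama3.1",  # whichever local model tag you have pulled
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```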

Yes, bigger models will always command a premium for the highest end of reasoning. But you don't always need the best possible reasoning. GPT-3 and early GPT-4 were more than good enough for a ton of use cases last year.

And we've seen the pace of development in the open source world these past two years. The open source community (Meta in particular) has completely obliterated the commercial value of these models.

If there were no "weights available" models, OpenAI would have an incredible, unbeatable moat, and they would be worth truly astounding amounts of money. But as it stands, we all have free, unfettered access to local models far better and cheaper than models that were flagship 18 months ago, and close enough to the performance of current flagship models that it won't make a difference for a ton of use cases.

There is no way to justify the current level of investment, with local models being freely available.

The assumption I'm making, of course, is that this transformer technology won't ever lead to AGI - it will just be another tool in our ever-expanding tool-belt.

By @memothon - 7 months
I always find it really strange when articles like this claim nobody is paying for generative AI. I can't find reliable stats on this but there are at least a million ChatGPT Plus subscribers. Does that not count?

It takes time for new advancements to proliferate through the economy.

By @fzeroracer - 7 months
I agree with a lot of what's said here, but don't agree fully with the doom and gloom.

The job losses have already happened. Companies have laid off quite a few employees because they wanted to get ahead of the AI wave. They thought they could replace most of their engineers and turn 1x into 10x. The only field that has benefited is the parasitic companies that have sprung up around these AI services, trying to rent-seek their way to profitability. So when the bubble bursts, laid-off talent will be able to demand a premium to come back and fix the smoldering remains.

That said, it's still going to cause reasonably bad damage as a whole, because so much of the tech industry is dependent on angel investors, who behave in almost cult-like ways when it comes to finding something to fund.

By @wcoenen - 7 months
> list the number of states with “A” in the name

Is that really a good example of LLM capabilities? LLMs don't even see those letters because of tokenization.

It's a bit like asking a Chinese speaker questions about imaginary Latin alphabet letters in Han characters. Sure, it demonstrates a limitation, but it's a bit of an edge case.
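
As a rough illustration (this assumes the tiktoken library and its cl100k_base encoding; the exact split varies by tokenizer), a word like "Mississippi" reaches the model as a handful of sub-word token IDs rather than eleven letters:

```python
# Show the token IDs and sub-word chunks a model actually "sees" for one word.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Mississippi")
print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # sub-word chunks, not individual letters
```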

By @tsss - 7 months
Tens of thousands of people will lose their jobs? Do that many people even work on AI development? And why does OpenAI burn so much money? Maybe they'll have to stop offering their compute time at a discount, but generative AI is doing useful work and will never go away. There are many companies using it to productive ends, and there will be demand for programmers to integrate generative AI into business processes.
By @george-in-sd - 7 months
Good article, but I'm not sure a "subprime crisis" is the right description - OpenAI is supposed to be risky and extremely high reward. This is exactly what VCs want to invest in.

Maybe OpenAI doesn't make it, and maybe it turns out to be way too early for AGI ... but way too much stuff is working right now, in multiple domains, to say the industry will flop.

By @janalsncm - 7 months
These boom/bust cycles are a symptom of a top-heavy economy. Too much money is chasing too little fundamental innovation. Generative AI is cool, sure, but it’s more like a Starlink-sized idea imo. We need more of those types of ideas, not to pump trillions more dollars into making copycat LLMs.
By @axegon_ - 7 months
I find this to be a bit exaggerated, but there are a few fair points. I think people are starting to see that AI is not as good as the demos and flashy videos make them believe. Since the day ChatGPT came out, my biggest problem with it has been the fact that you can ask it a mildly difficult question on any subject and it will confidently give you a response that's complete BS. And I'm starting to see people coming to the same conclusion as time goes by, despite the advancements. AI has a place in this world even with its current limitations and shortcomings, no doubt about it - there are cases in which it does a nearly perfect job. I'm currently building an AI rig for my home lab because it has a ton of use cases even outside my daily work (where I do use LLMs for processing unstructured data).

The second issue I see is that data is becoming largely unavailable - everything is getting blocked, crawlers are cut off immediately, and 1,000 API calls cost as much as a mid-sized flat in a European capital - the complete opposite of what the internet was supposed to be.

The third issue is that people are gladly using ChatGPT to answer questions on Stack Overflow, for example. We know how badly LLMs start performing when you train them on their own output... Not to mention the considerable spike in critical bugs in open source projects over the last two years - there's a good chance this has a lot to do with it.

The fourth is social media - there are already tons of examples where troll farms are no longer paid workers in some dump in Siberia but are in fact powered by ChatGPT or some other service. And to think that I laughed at the dead internet theory when I first heard about it...

By @vfclists - 7 months
The article seems to be saying that the possible returns on these investments don't match the amounts the AI companies are asking of investors.

My problem with the AI boom is that the insane valuations of these AI companies seem to be based on nothing more than the power of the better-funded players to outbid possible competitors for the limited supply of chips available from foundries, i.e. TSMC et al.

If there is a glut of chips or the models become more refined and efficient then what power do these companies then have?

By @101008 - 7 months
The best use of AI that I have seen is where people need something to show but quality doesn't matter much. AI is great at generating a lot of "something" (data, text), and if you pay attention you'll start to see where it fails. But for jobs/tasks where nobody pays attention (or nobody cares), AI is great.

Need to write a report to present at your company but nobody will care about it? Use AI.

Blog posts just for SEO? Use AI.

Illustration image as a header for a post where you need an image just to share the post on social? Use AI.

I am not saying people are right to do this, but this is where I've seen AI being really useful.

By @kaycey2022 - 7 months
Though I agree with almost everything said in this article, I can't help but realise that this is all feelings-driven and not data-driven. I also _feel_ that the current direction of generative AI is unsustainable and that this will all collapse horribly if companies and investors keep following the current trajectory. But unfortunately we have very little data or precedent, either in this article or historically, to support this bias.

I use AI chat quite a bit, and the better it gets, the more threatened I feel - but that is partly because it leaves me free time to think like that. And every other month, when the AI chat starts spewing garbage answers, I feel pissed at it for making me do my own research, but it warms my heart knowing that this shit cannot replace me.

I have also come to realise that AI needs to be trained to give you correct answers and cannot simply innovate on its own, which is what it would need to be "revolutionary". Also, our entire tech industry is built on products that do deterministic information retrieval, whether that is getting accurate numbers from a bank account, computing medical parameters, or calculating the velocity of incoming missiles from a bunch of formulae. AI, on the other hand, seems like tech that will give out answers like "the sum of 1 and 2 is 3 with 99.9% probability".

In any case, these are all just feelings and though I find myself nodding along with the article, there is no information here that is concrete.

By @sam_goody - 7 months
GPT-4 is genuinely useful for some stuff (summarizing documents, giving hints how to solve things in languages I do not speak, etc.)

Meta and others have released open weights with the claim that they compare to GPT-4; I imagine these are good enough for many similar tasks. There are bound to be at least a few more improvements in open weights before any bust.

Apple is already building laptops with a mind towards local AI. As AI chips from NVIDIA, AMD, et al. drop in price, they will be included in regular desktops and laptops.

As local-AI becomes more practical, the prices on remote-AI will go up, further driving local-AI improvements. Perhaps at some point we will have a subscription-based weights service, where you get updated proprietary weights for your local model for $X/yr.

Local-AI will be fine for Microsoft. And for Google. So, I don't think AI is going to disappear; if anything, it will become more ubiquitous. Weights may start being released less frequently and the SaaS model may go away, but that would likely be a net gain.

By @tannhaeuser - 7 months
Winter's coming.
By @tomaskafka - 7 months
If the article is right, watching the dinosaurs fail will be a sad sight, but I see hope for a next era of small and swift mammals - small, task-specific, often locally run models on my phone, PC, VPS ...

AGI to end humanity will need to be financed by the Chinese Communist Party, though.

By @1attice - 7 months
The obvious thing to do, if gen AI proves bust, is to funnel funding and other resources into miltech.

This will further alter the personality, intent, and product of Silicon Valley; refuseniks will be winnowed, and many people may find themselves working on projects they may find disturbing.

By @itfossil - 7 months
This blog post is so spot on it actually hurts. Big Tech is out of ideas and the MBA asshats running those orgs are looking to sell us on the product of their gullibility. The bubble is going to burst and a lot of people will eat shit when that happens.

I couldn't agree more.

By @djaouen - 7 months
I don't fall on either side. I am AI-neutral lol

But my hunch is that NVIDIA is overpriced, with a P/E ratio of 55. It is not a growth company.

By @amiantos - 7 months
While I love generative AI and support open source AI movements because of my enthusiasm for the technology, I have been extremely skeptical about all the efforts to monetize it in big-time capitalist terms, both from big companies and from several friends who thought AI was their big break into tech millions. There's just no full product there that isn't tacking an LLM or image generation onto an existing product. So, at face value, I agree with this article, but I think it feels a little too gleeful, a little too "I told you so", even if in some ways it validates my own personal "I told you so". I worry that the author does not really like AI technology, which is popular to dislike these days, and that this is coloring the overall tone of the piece - it's hard to tell.
By @alganet - 7 months
> After contemplating for eighteen seconds, it provided the names of 37 states, including Mississippi. The correct number, by the way, is 36.

Maybe it was counting the seconds _mississipily_.
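
For anyone who wants to double-check the 36, a few lines of Python will do it; the state list is the only input, and Mississippi (no "a" anywhere in the name) indeed isn't one of them:

```python
# Sanity-check the "states with an 'a' in the name" count quoted above.
states = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_a = [s for s in states if "a" in s.lower()]
print(len(with_a))              # 36
print("Mississippi" in with_a)  # False
```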