August 27th, 2024

80% of AI Projects Crash and Burn, Billions Wasted Says Rand Report

A RAND Corporation report indicates that 80% of AI projects fail due to leadership misunderstandings, poor data quality, and lack of focus on practical problem-solving, urging better communication and infrastructure investment.

A RAND Corporation report reveals that approximately 80% of artificial intelligence (AI) projects fail, a rate significantly higher than traditional IT projects. The study, based on interviews with 65 data scientists and engineers, identifies key reasons for these failures, including leadership misunderstandings, poor data quality, and a lack of focus on practical problem-solving. Business leaders often miscommunicate project goals and have unrealistic expectations about AI capabilities, while technical teams may chase advanced technologies instead of addressing core issues. Additionally, inadequate infrastructure and insufficient investment in data management hinder project success. The report emphasizes the need for better communication between business and technical teams, a focus on long-term problem-solving, and investment in foundational infrastructure. It also highlights the importance of understanding AI's limitations and the necessity for patience in project execution. The findings serve as a wake-up call for the AI industry, urging organizations to adopt a more realistic approach to AI implementation, balancing innovation with practicality.

- 80% of AI projects fail, significantly higher than traditional IT projects.

- Leadership misunderstandings and poor data quality are major causes of failure.

- Organizations need to focus on long-term problem-solving rather than quick wins.

- Investment in infrastructure and data management is crucial for success.

- Clear communication between business and technical teams is essential for project alignment.

61 comments
By @tech_ken - 8 months
From the RAND report "First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI."

From personal experience this seems like it holds for most data-products, and doubly so for basically any statistical model. As a data scientist, it seems like my domain partners' vision for my contribution very often goes something like:

0. It would be great if we were omniscient

1. Here's some data we have related to a problem we'd like to be omniscient about

2. Please fit a model to it

3. ????

4. Profit

Data scientists and ML engineers need to be aggressive at the early planning stages to determine what impact the requested model or data product will actually have. They need to be ready for the model to be wrong, and they need to deeply internalize the concept of error bars and how errors relate to their use-case. But so often 'the business stuff' gets left to the domain people, due to organizational politics and people not wanting to get fired. I think the most successful AI orgs will be the ones that most effectively close the gap between the people who can build and manage models and the people who understand the problem space. Treating AI/ML tools as simple plug-and-play solutions will, I think, lead to lots of expensive failures.
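
As a minimal sketch of what internalizing error bars can mean in practice (all names and numbers here are invented for illustration, not taken from the report): the same point forecast can justify opposite decisions once the model's uncertainty is weighed against the use-case.

```python
# Hypothetical demand-forecasting example: a point forecast alone vs. the
# same forecast with an error bar, feeding a stock-reorder decision.

def should_reorder(forecast_demand: float, stock_on_hand: float) -> bool:
    """Naive rule: reorder whenever the point forecast exceeds stock."""
    return forecast_demand > stock_on_hand

def should_reorder_with_error(forecast_demand: float, stderr: float,
                              stock_on_hand: float, z: float = 1.96) -> bool:
    """Reorder only when even the lower bound of a ~95% interval exceeds
    stock, i.e. act only when the model is confident enough for the task."""
    return forecast_demand - z * stderr > stock_on_hand

# Same forecast, opposite conclusions once the error bar is considered:
print(should_reorder(105.0, 100.0))                   # True
print(should_reorder_with_error(105.0, 20.0, 100.0))  # False: 105 - 39.2 < 100
```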

By @gmuslera - 8 months
The problem is not whether 80% of them fail, but whether the remaining 20% includes a few black swans that make the profit on the whole investment set skyrocket.

The real problem is if none of them, even the survivors, end up being worth much; in that case those billions really would have been wasted. But if you had put everything on just one player and that player failed, your whole bet would have failed.
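
To make the arithmetic concrete, a sketch with invented numbers: the portfolio's outcome is driven almost entirely by the payoff distribution of the survivors, not by the 80% failure rate itself.

```python
# Illustrative portfolio math (all figures hypothetical, in $M).

def portfolio_return(n_projects: int, cost_each: float,
                     survivor_payoffs: list[float]) -> float:
    """Net return: survivor payoffs minus the cost of every bet placed."""
    return sum(survivor_payoffs) - n_projects * cost_each

# Ten projects at $1M each, two survive (an 80% failure rate either way):
print(portfolio_return(10, 1.0, [2.0, 3.0]))   # -5.0: survivors too small, money wasted
print(portfolio_return(10, 1.0, [2.0, 50.0]))  # 42.0: one black swan pays for everything
```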

By @dwallin - 8 months
It's probably better to just link to the Rand Report: https://www.rand.org/pubs/research_reports/RRA2680-1.html
By @AnotherGoodName - 8 months
Is anyone else finding their company is asking teams to “insert ai everywhere any way you can”?

That’s a sign of a problem, IMHO. The hype is so high that the directive is to use AI everywhere, regardless of fit. I’m a believer in AI, but shoehorning it into everything because that currently boosts stock prices seems insane.

By @treprinum - 8 months
Isn't that better than normal? It used to be that 90% of startups went belly up within 3 years, and of the remaining 10%, 9% became zombies and 1% had a proper exit. This looks more like Pareto 80/20, which is way better.
By @fidotron - 8 months
History will repeat itself: https://en.m.wikipedia.org/wiki/Dynamic_Analysis_and_Replann...

“DART achieved logistical solutions that surprised many military planners. Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined.”

By @sensanaty - 8 months
I work on a team doing some shitty AI feature, and as far as I can tell the only reason it's still alive is that our C-levels have overdosed on the kool-aid and are adamant that they can squeeze blood out of the AI stone. Pretty much everyone in engineering is telling them it's a monumental waste of time, effort & money (especially money; our AI tooling/provider bills are astronomical compared to everything else we pay for), but to them the word "AI" holds so much power that they just can't resist sinking ever more resources into it.

It's really reinforced in me the knowledge that most execs are completely clueless and only chase trends that other execs in their circles chase without ever reflecting on it on their own.

By @ashryan - 8 months
Currently hugged so I can't read the article, but I can only wonder how this compares to the batting average of any given R&D effort. 20% of projects succeeding on a cutting edge technology might be pretty good, no?
By @bugbuddy - 8 months
But the man in black leather said that people don’t need to learn to code because AI will now do all the coding. Who should we believe?

Also, it is funny seeing how all the AI true believers in this thread are coping. I am going to short Nvidia after its earnings, whatever the results. It is such an obvious trade.

By @Oras - 8 months
The website is down,

> Error establishing a database connection

A bit of irony that the salesforcedevops WordPress site can’t handle the traffic from HN.

By @mensetmanusman - 8 months
For anyone who knows anything about research and development, a 20% hit rate is actually quite high.
By @add-sub-mul-div - 8 months
There's so much LLM shovelware getting spammed here daily that I have a hard time believing 20% of all projects are succeeding. Are we even far enough into the LLM era though for bad projects to have run out of their borrowed time?
By @marcinzm - 8 months
This seems like a very strong selling point for B2B AI providers versus in-house enterprise builds of AI.

> First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.

The provider at least partially validates that this is a problem space that AI can improve which lowers the risk for the enterprise client.

> Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.

The provider leverages its own proprietary data and/or pre-trained models, which lowers the risk for the enterprise client. It also has the cross-client knowledge to best leverage and verify client data.

> Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.

Providers, especially startups, will lie about using the latest tech while doing something boring under the hood. This, amusingly, mitigates the risk.

> Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.

The provider manages this unless it's on-prem, although even then it can provide deployment support.

> Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.

Still a risk, but VC or big-tech budgets cover that, so another win.

By @dwallin - 8 months
An interesting note:

> By some estimates, more than 80 percent of AI projects fail — twice the rate of failure for information technology projects that do not involve AI.

So 60% general success rate vs 20% for an emerging technology that doesn't really have established best practices yet? That seems pretty good to me.
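
For concreteness, the arithmetic behind that comparison (a sketch of the claim's implication only):

```python
# "More than 80 percent of AI projects fail - twice the rate of failure
# for information technology projects that do not involve AI."
ai_failure_rate = 0.80
non_ai_failure_rate = ai_failure_rate / 2  # implied by "twice the rate"

print(f"AI project success rate:        {1 - ai_failure_rate:.0%}")      # 20%
print(f"Non-AI IT project success rate: {1 - non_ai_failure_rate:.0%}")  # 60%
```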

By @tayo42 - 8 months
> 1 For this project, we focused on the machine learning (ML) branch of AI because that is the technology underpinning most business applications of AI today. This includes AI models trained using supervised learning, unsupervised learning, or reinforcement learning approaches and large language models (LLMs). Projects that simply used pretrained LLMs (sometimes known as prompt engineering) but did not attempt to train or customize their own were not included in the scope of this work.

Buried in a footnote. I wasn't sure what "AI project" actually meant.

I wonder what the failure rate would be if it also included things that just use an LLM via an API.

By @alexfromapex - 8 months
In my experience, hiring managers worry way too much about finding research engineers with deep math skills when, at the end of the day, they need software folks to operationalize simple, maybe slightly fine-tuned, foundation models.
By @ein0p - 8 months
When it comes to truly novel things, a 20% rate of even modest success is a very high number. I worked in research-heavy places (industry labs) over the last decade, and if 90% of the things you try do not fail, your work is not ambitious enough. That is a very hard thing for a SWE to live with, but such is the price of progress. The remaining 10% tend to make it worthwhile. 20% is twice that. It needs to go lower still - you’re not going to succeed by just fine-tuning yet another Llama variant.
By @tompetry - 8 months
>> By some estimates, more than 80 percent of AI projects fail — twice the rate of failure for information technology projects that do not involve AI.

So 40% of projects using more proven technologies fail? That's super high. Replace "AI" with any other project "type" in the root causes and it sounds about right. So this feels more like a commentary on corporate "waste" in general than on AI.

By @simonsarris - 8 months
80% seems far too optimistic. From what I know of projects and development I would think upwards of 90% of all software projects are never shipped. Maybe 95%. Even higher would not surprise me. Maybe this is considered pre-crash or pre-burn by them.

Maybe "80% of projects that get publicly acknowledged and are expected to be successful" crash and burn. It must be so much higher.

By @api - 8 months
80%+ of all startups fail. Tech is hit driven. The remaining 20% carry the entire industry.

Movies, music, and publishing are also hit driven in a similar way.

By @winternett - 8 months
Most of these projects are too similar to one another to succeed to begin with... Everyone is out to create yet another text chat bot or image maker, slap Google Authentication on it, and use tricks to get people to enroll in a monthly $ubscription... Few are out to be visionary and make products that can be sold to companies that will integrate the tools into their apps.
By @Xx_crazy420_xX - 8 months
80% of AI projects crash, along with this site
By @josefritzishere - 8 months
On average, projects fail about 70% of the time, so AI projects are only 10 percentage points from the mean. https://www.projectmanagementworks.co.uk/project-failure-sta...
By @2OEH8eoCRo0 - 8 months
I'm bullish on AI but it was clear to even me that most projects were simple ChatGPT wrappers.
By @MattGaiser - 8 months
The site is down, so I can’t read the article, but how is “wasted” defined in the context of these articles?

Corporate consulting America has a tendency to call any project, no matter how speculative, "wasted" if it didn't succeed.

By @wrs - 8 months
This movie is familiar… Most of this summary in the RAND report applies, or applied, to any overhyped new technology. E.g., try substituting “NoSQL” for “AI” and see how well most of it reads.
By @hobs - 8 months
It says the problem is that management understanding is failing, but it's more that they don't listen to their technical staff and instead read the equivalent of Cool Stuff Magazine, know their investors read it too, and know that the next board meeting is going to revolve around asking them what their grand strategy for AI is.

It doesn't matter that they don't have one; frankly, most of their data projects fail anyway, and you just need one article published about your new vision to sell it to your investor class for another six months.

By @kkfx - 8 months
I bet the larger part of those were projects decided on by management, with unrealistic goals and some interested external party claiming they were possible...
By @herval - 8 months
how's that distribution different from any new tech wave? (crypto, past AI waves, robotics, self-driving cars, mobile games...)
By @bob1029 - 8 months
Appendix A should be mandatory interview questioning for anyone getting into this business. Question 9 is the ultimate test.
By @Terretta - 8 months
This is an incredible stat.

If you only have to explore five time-bound AI* projects to discover one that eradicates recurring costs of toil indefinitely, arguably you should be doing all of them you can.

* Nota bene: I'm not using AI as a buzzword for ML, which the article might be doing. In my book, a failed ML project is just a failed big data / big stats project. I'm using AI as a placeholder for when a machine can take over a thing they needed a person for.

By @TimPC - 8 months
I guess I’m pretty valuable then, because my hit rate on AI projects I’ve led is over triple the industry average.
By @bhawks - 8 months
What is a project?

A startup?

Integrating chat bot into your support page?

What is AI?

Titles like this are often clickbait - but since the site is down, I can't tell.

By @swalsh - 8 months
The site has a database error, so I can't read the report. But here's what it probably says: "80% of AI projects don't solve a critical customer need, find themselves with low usage/sales, and eventually run out of runway."

It's the same reason most businesses fail. Sell something people want, and people will buy it. Sell something people don't care about, even if it's powered by cool tech, people still won't buy it.

It probably also says something about the high cost of AI... but frankly if you're providing enough value to the customer, you can up your prices to compensate. If your value is too low (ie: not selling something people want) people won't pay it.

By @jonplackett - 8 months
20% success rate is pretty good no?
By @timcobb - 8 months
80/20 rule strikes again! (c'mon folks, this applies to just about everything)...
By @euph0ria - 8 months
The site crashed and burned...
By @rychco - 8 months
I’m shocked; I was certain this would be different than blockchain.
By @PeterStuer - 8 months
Just 80%? Sounds like AI projects are succeeding above average.
By @atoav - 8 months
Yeah, as predicted. As a film guy, I told my totally hyped colleagues a few years ago that 3D films were not going to stick the way they expected. When Bitcoin and cryptocurrencies started to become the next big thing, I was the only person in my circles who had actually tried purchasing something with crypto in a real-world setting, years prior. When LLMs became The Shit, I warned against overblown expectations, as I had some intuition about their limitations from my own machine-learning experience.

And the only reason I was right all these times was because I looked at the technology and the technology did not remotely convince me.

Don't get me wrong: stereoscopic films (or 3D, as they call it) are impressive in terms of technology. But the effect within movies doesn't add much. The little bit of distance that remains when people look at a screen, instead of being inside a world, is something many people need. 3D changes that distance, which is not something everybody enjoys.

By @halyconWays - 8 months
>Billions wasted

Wow gosh. Where does that money go? It just evaporates?

By @_davide_ - 8 months
Is anyone surprised?
By @ms7892 - 8 months
Getting “Error establishing a database connection”.
By @teqsun - 8 months
> Error establishing a database connection

site appears to be down

By @devops000 - 8 months
Error establishing a database connection
By @Taylor_OD - 8 months
Up like 5% from non-AI projects?
By @Eumenes - 8 months
RAND wants that money funneled to weapon/missile development instead of chat bots
By @Havoc - 8 months
If 20% of AI projects work out, that would be massive for humanity.

You don’t innovate with 100% odds

By @kome - 8 months
What Schumpeter called capitalism's creative destruction. Move along.

AI is still incredible, though.

By @normand1 - 8 months
Just wait until Rand looks into the success rate of Corporate IT Projects in general...
By @lyime - 8 months
And that's OK.
By @paulsutter - 8 months
And yet progress keeps moving forward. It's almost like the investors understand the risks
By @seydor - 8 months
wait till you hear how many research projects crash and burn
By @axegon_ - 8 months
To be honest, I am really frustrated with what is happening: the hype train killing something which in principle could be a good thing, as usual. In the second half of the 2010s, it was blockchain: Payments - blockchain is the solution. Logistics - blockchain. World hunger - blockchain. Cure for cancer - blockchain. 75% of all job offers from startups were blockchain-related, and admittedly, I worked at such a startup, which, from what I'm able to gather, is a few months away from total collapse.

With vision models in the late 2010s, I was seeing AI winter 2.0 just around the corner - it felt like this was the best we could come up with. GANs were, to a very large degree, a party trick (and frankly, they still are).

LLMs changed that. And now everyone is shoving AI assistants down our throats, and people are trying to solve the exact same problems they were before, except now it's not blockchain but AI. To be clear: I was never on board with blockchain. AI - I can get behind it in some scenarios, and frankly, I use it every now and then. Startups and founders are very well aware that most startups and founders fail. But most commonly, they fail to acknowledge that the likelihood of them being part of the failing chunk is astronomically high.

Check this: a year and a half after ChatGPT came about and a number of very good open-source LLMs emerged, everyone and their dog has come up with some AI product (90% of the time it's an assistant). An assistant which, at large, is not very good. In addition, most of those are just frontends to ChatGPT. How do I know? Glad you asked - I've also been very critical of the modern-day web since people have been doing everything they can to outsource everything to the client. The number of times I've seen "id": "gpt-3.5-turbo" in the developer tools is astronomical.

Here's the simple truth: writing the code to train an AI model is not wildly difficult with all the documentation and resources you can get for free. The problems are:

Finding a shitload of data (and good data), which is becoming increasingly difficult, borderline impossible - everyone is fencing off their sites, services, and APIs; APIs that were completely free 2 years ago will now set you back tens of thousands for even basic data.

As I said, the code you need to write is not out of reach. Training the model, on the other hand, is borderline impossible, simply because it costs A LOT. Take Phi-3, which is a model you can easily run on a decent consumer-grade GPU. And even if you are aiming a bit higher, you can get something like a V100 on eBay for very little. But if you open up the documentation, you will see that in order to train it, Microsoft used 512 H100s. Even renting those will set you back millions, and you can't be too sure how well you would be able to pull it off.
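
To put a rough number on "costs A LOT", a back-of-envelope sketch: the GPU count comes from the comment above, while the rental rate and run length are purely illustrative assumptions, not quoted prices.

```python
# Back-of-envelope training cost with invented inputs.
gpus = 512                # H100s, per the Phi-3 documentation cited above
rate_per_gpu_hour = 2.50  # assumed rental price in USD, purely illustrative
hours = 7 * 24            # assumed one-week run, purely illustrative

single_run = gpus * rate_per_gpu_hour * hours
print(f"One clean training run:   ${single_run:,.0f}")       # $215,040
print(f"Twenty experimental runs: ${20 * single_run:,.0f}")  # $4,300,800
```

A single clean run is "only" a few hundred thousand dollars under these assumptions, but real projects burn through many experimental runs, which is how the bill climbs into the millions.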

So in the grand scheme of things, what is happening now is the corporate equivalent of pump-and-dump. It's not even fake-it-till-you-make-it. The big question on my mind is what will happen to the thousands of companies that have received substantial investments and delivered a product, only for it to crash the second OpenAI stops working. And not even so much the companies as the people behind them. As a friend once said, "If you owe 1M to the bank, you have a problem. If you owe 1B to the bank, the bank has a problem." In the context of startup investments, you are probably closer to 1M than 1B. Then again, investors commonly spread their eggs across different baskets, but as things stand, all the baskets are pretty risky, and the safe baskets are pretty full.

We are already seeing tons of failed products that have burned through astronomical amounts of cash. I am a believer in AI as an enhancement tool (not for productivity, not for solving problems, but just as an enhancement to your stack of tools). What I do fear is that sooner or later, people will start getting disappointed and frustrated with the lack of results, and before you know it, just the acronym "AI" will make everyone roll their eyes when they hear it. Examples: "www", "SEO", "online ads", "apps", "cloud", "blockchain".