80% of AI Projects Crash and Burn, Billions Wasted, Says RAND Report
A RAND Corporation report indicates that 80% of AI projects fail due to leadership misunderstandings, poor data quality, and lack of focus on practical problem-solving, urging better communication and infrastructure investment.
A RAND Corporation report reveals that approximately 80% of artificial intelligence (AI) projects fail, a rate significantly higher than that of traditional IT projects. The study, based on interviews with 65 data scientists and engineers, identifies key reasons for these failures, including leadership misunderstandings, poor data quality, and a lack of focus on practical problem-solving. Business leaders often miscommunicate project goals and have unrealistic expectations about AI capabilities, while technical teams may chase advanced technologies instead of addressing core issues. Additionally, inadequate infrastructure and insufficient investment in data management hinder project success. The report emphasizes the need for better communication between business and technical teams, a focus on long-term problem-solving, and investment in foundational infrastructure. It also highlights the importance of understanding AI's limitations and the necessity for patience in project execution. The findings serve as a wake-up call for the AI industry, urging organizations to adopt a more realistic approach to AI implementation, balancing innovation with practicality.
- 80% of AI projects fail, significantly higher than traditional IT projects.
- Leadership misunderstandings and poor data quality are major causes of failure.
- Organizations need to focus on long-term problem-solving rather than quick wins.
- Investment in infrastructure and data management is crucial for success.
- Clear communication between business and technical teams is essential for project alignment.
Related
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
AI mimics human intelligence but lacks true understanding, posing systemic risks. Over-reliance may lead to failures, diminish critical thinking, and fail to create enough jobs, challenging economic stability.
There's Just One Problem: AI Isn't Intelligent
AI mimics human intelligence without true understanding, posing systemic risks and undermining critical thinking. Economic benefits may lead to job quality reduction and increased inequality, failing to address global challenges.
Most Fortune 500 companies see AI as 'risk factor', study finds
Over 56% of Fortune 500 companies now view AI as a risk, up from 9% in 2022, citing competition, ethics, and operations as major concerns, despite some reporting benefits.
Venture-Backed Startups Going Bankrupt at Alarming Rate
Bankruptcy rates among U.S. venture-backed startups rose 60% in the past year, with 254 bankruptcies in Q1 2024, driven by funding declines and challenges in delivering AI projects.
AI companies are pivoting from creating gods to building products
AI companies are shifting from model development to practical product creation, addressing market misunderstandings and facing challenges in cost, reliability, privacy, safety, and user interface design, with meaningful integration expected to take a decade.
From personal experience, this seems to hold for most data products, and doubly so for basically any statistical model. As a data scientist, it seems like my domain partners' vision for my contribution very often goes something like:
0. It would be great if we were omniscient
1. Here's some data we have related to a problem we'd like to be omniscient about
2. Please fit a model to it
3. ????
4. Profit
Data scientists and ML engineers need to be aggressive at the early planning stages to actually determine what impact the requested model/data product will have. They need to be ready for the model to be wrong, and they need to deeply internalize the concept of error bars and how errors relate to their use case. But so often 'the business stuff' gets left to the domain people, due to organizational politics and people not wanting to get fired. I think the most successful AI orgs will be the ones that most effectively close the gap between the people who can build/manage models and the people who understand the problem space. Treating AI/ML tools as simple plug-and-play solutions will, I think, lead to lots of expensive failures.
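To make the error-bars point concrete, here is a minimal sketch, assuming made-up holdout results and a hypothetical cost-per-error and prediction volume (none of this is from the report), of bootstrapping an uncertainty interval and translating it into business terms:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical holdout results: 1 = model was right, 0 = model was wrong.
holdout_correct = rng.binomial(1, 0.83, size=500)

# Bootstrap a 95% confidence interval for accuracy instead of
# reporting a single point estimate.
boot_means = [
    rng.choice(holdout_correct, size=holdout_correct.size, replace=True).mean()
    for _ in range(10_000)
]
acc_lo, acc_hi = np.percentile(boot_means, [2.5, 97.5])
print(f"accuracy: {holdout_correct.mean():.3f} "
      f"(95% CI: {acc_lo:.3f}-{acc_hi:.3f})")

# Relate the error rate to the use case: assume (hypothetically) each
# wrong prediction costs $50 to remediate, at 10,000 predictions/month.
cost_per_error, monthly_volume = 50, 10_000
best = (1 - acc_hi) * cost_per_error * monthly_volume
worst = (1 - acc_lo) * cost_per_error * monthly_volume
print(f"expected monthly remediation cost: ${best:,.0f} to ${worst:,.0f}")
```

The specific numbers don't matter; the point is that the planning conversation with domain partners happens in intervals and dollars, not a single accuracy figure.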
The problem is if none of them, even the surviving ones, end up being worth much. In that case those billions would have been wasted. But if you had invested everything in just one player and that player failed, then your whole bet would have failed.
That's a sign of a problem, imho. The hype is so high that the directive is to use AI everywhere, regardless of fit. I'm a believer in AI, but shoehorning it into everything because that currently boosts stock prices seems insane.
“DART achieved logistical solutions that surprised many military planners. Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined.”
It's really reinforced in me the knowledge that most execs are completely clueless and only chase trends that other execs in their circles chase, without ever reflecting on them on their own.
Also, it is funny seeing all the AI true believers in this thread coping. I am going to short Nvidia after its earnings, whatever the results. It is such an obvious trade.
> Error establishing a database connection
A bit of irony that salesforcedevops Wordpress can’t manage the traffic from HN
> First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
The provider at least partially validates that this is a problem space that AI can improve, which lowers the risk for the enterprise client.
> Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
The provider leverages its own proprietary data and/or pre-trained models, which lowers the risk for the enterprise client. It also has the cross-client knowledge to best leverage and verify client data.
> Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
Providers, especially startups, will lie about using the latest tech while doing something boring under the hood. This, amusingly, mitigates the risk.
> Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
The provider manages this unless it's on-prem, and even then it can provide support on deployments.
> Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.
Still a risk, but VC or big-tech budgets cover that, so another win.
Does not appear to be in archive.is
> By some estimates, more than 80 percent of AI projects fail — twice the rate of failure for information technology projects that do not involve AI.
So a 60% success rate in general vs. 20% for an emerging technology that doesn't really have established best practices yet? That seems pretty good to me.
Buried in a footnote. I wasn't sure what "AI project" actually meant.
I wonder what the failure rate would be if it actually included "things that use an LLM as an API" too.
https://web.archive.org/web/20240826091915/salesforcedevops....
Here's a link to the Rand report: https://www.rand.org/pubs/research_reports/RRA2680-1.html
So 40% of projects with more proven, established technologies fail? That's super high. Replace "AI" with any other project "type" in the root causes and it sounds about right. So this feels more like a commentary on corporate "waste" in general than on AI.
Maybe "80% of projects that get publicly acknowledged and are expected to be successful" crash and burn. It must be so much higher.
Movies, music, and publishing are also hit-driven in a similar way.
https://web.archive.org/web/20240819212746/https://salesforc...
Corporate consulting America has a tendency to call any project, no matter how speculative, wasted if it didn't succeed.
It doesn't matter that they don't have one; frankly, most of their data projects fail anyway, and you just need one article published about your new vision to sell it to your investor class for another six months.
If you only have to explore five time-bound AI* projects to discover one that eradicates a recurring cost of toil indefinitely, arguably you should be doing as many of them as you can (see the back-of-envelope sketch after the footnote).
* Nota bene: I'm not using AI as a buzzword for ML, which the article might be doing. In my book, a failed ML project is just a failed big data / big stats project. I'm using AI as a placeholder for when a machine can take over a thing they needed a person for.
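A minimal expected-value sketch of that argument, with entirely hypothetical costs, savings, and discount rate:

```python
# All numbers are assumptions, chosen only to illustrate the argument.
n_projects = 5
cost_per_project = 200_000      # assumed cost of each time-boxed attempt
annual_toil_savings = 500_000   # assumed recurring saving from the one winner
discount_rate = 0.10            # assumed rate for valuing a perpetual saving

total_cost = n_projects * cost_per_project
# Present value of a perpetual annual saving: savings / discount rate.
value_of_winner = annual_toil_savings / discount_rate

print(f"total spend on five attempts: ${total_cost:,}")         # $1,000,000
print(f"value of the one success: ${value_of_winner:,.0f}")     # $5,000,000
print(f"portfolio worth it: {value_of_winner > total_cost}")    # True
```

Under these made-up numbers, an 80% project failure rate is perfectly compatible with a strongly positive portfolio.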
A startup?
Integrating a chatbot into your support page?
What is AI?
Titles like this are often clickbait - but since the site is down, I can't tell.
It's the same reason most businesses fail. Sell something people want, and people will buy it. Sell something people don't care about, even if it's powered by cool tech, people still won't buy it.
It probably also says something about the high cost of AI... but frankly, if you're providing enough value to the customer, you can raise your prices to compensate. If your value is too low (i.e., not selling something people want), people won't pay.
And the only reason I was right all these times was because I looked at the technology and the technology did not remotely convince me.
Don't get me wrong, stereoscopic films (or 3D, as they called it) are impressive in terms of technology. But the effect within movies doesn't bring much. The little distance that remains when people look at a screen, instead of being in a world, is something many people need. 3D changes that distance, which is not something everybody enjoys.
Wow gosh. Where does that money go? It just evaporates?
site appears to be down
You don’t innovate with 100% odds
AI is still incredible, though.
With vision models in the late 2010s, I was seeing AI winter 2.0 just around the corner - it felt like this was the best we could come up with. GANs were, to a very large degree, a party trick (and frankly, they still are).
LLMs changed that. And now everyone is shoving AI assistants down our throats, and people are trying to solve the exact same problems they were before, except now it's not blockchain but AI. To be clear: I was never on board with blockchain. AI - I can get behind it in some scenarios, and frankly, I use it every now and then. Startups and founders are very well aware that most startups and founders fail. But most commonly, they fail to acknowledge that the likelihood of them being part of the failing chunk is astronomically high.
Check this: a year and a half after ChatGPT came about and a number of very good open-source LLMs emerged, everyone and their dog has come up with some AI product (90% of the time it's an assistant). An assistant which, by and large, is not very good. In addition, most of those are just frontends to ChatGPT. How do I know? Glad you asked - I've also been very critical of the modern-day web, since people have been doing everything they can to outsource everything to the client. The number of times I've seen "id": "gpt-3.5-turbo" in the developer tools is astronomical.
Here's the simple truth: writing the code to train an AI model is not wildly difficult with all the documentation and resources you can get for free. The problems are:
- Data: finding a shitload of data (and good data) is becoming increasingly difficult and borderline impossible - everyone is fencing off their sites, services, and APIs; APIs that were completely free two years ago will set you back tens of thousands for even basic data.
- Compute: as I said, the code you need to write is not out of reach. Training, on the other hand, is borderline impossible, simply because it costs a lot. Take Phi-3, a model you can easily run on a decent consumer-grade GPU. Even if you're aiming a bit higher, you can get something like a V100 on eBay for very little. But open up the documentation and you'll see that to train it, Microsoft used 512 H100s. Even renting those will set you back millions, and you can't be too sure how well you'd be able to pull it off.
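For a sense of the scale, here is a back-of-envelope cost sketch; the GPU count comes from the comment above, but the rental rate, run length, and experimentation multiplier are all assumptions:

```python
# Back-of-envelope training cost. Rates vary wildly by provider, and
# real projects pay for many exploratory and failed runs, not just one.
num_gpus = 512          # H100 count cited for Phi-3's training run
hourly_rate = 3.00      # assumed $/GPU-hour rental price
run_days = 7            # assumed length of a single full training run

single_run = num_gpus * hourly_rate * 24 * run_days
print(f"one full run: ${single_run:,.0f}")  # ~$258,000

# Assumed multiplier for ablations, failed runs, and evaluation.
experimentation_factor = 10
print(f"realistic total: ${single_run * experimentation_factor:,.0f}")
```

Under those assumptions, "millions" is plausible once you account for all the runs that never ship.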
So in the grand scheme of things, what is happening now is the corporate equivalent of a pump-and-dump. It's not even fake-it-till-you-make-it. The big question on my mind is what happens to the thousands of companies that have received substantial investments and delivered a product, only for it to crash the second OpenAI stops working. And not even so much the companies as the people behind them. As a friend once said, "If you owe $1M to the bank, you have a problem. If you owe $1B to the bank, the bank has a problem." In the context of startup investments, you are probably closer to $1M than $1B. Then again, investors commonly put their eggs in different baskets, but as it happens with investments and the current situation, all the baskets are pretty risky, and the safe baskets are pretty full.
We are already seeing tons of failed products that have burned through astronomical amounts of cash. I am a believer in AI as an enhancement tool (not for productivity, not for solving problems, but just as an enhancement to your stack of tools). What I do fear is that sooner or later, people will start getting disappointed and frustrated with the lack of results, and before you know it, just the acronym "AI" will make everyone roll their eyes when they hear it. Examples: "www", "SEO", "online ads", "apps", "cloud", "blockchain".