August 26th, 2024

Many FDA-approved AI medical devices are not trained on real patient data

A study found that nearly 43% of FDA-approved AI medical devices lack clinical validation with real patient data, raising concerns about their effectiveness and calling for improved regulatory standards.

Read original article

Research conducted by a multi-institutional team, including members from the University of North Carolina and Duke University, has revealed that nearly 43% of FDA-approved AI medical devices lack clinical validation based on real patient data. The study analyzed over 500 AI medical devices and found that many were either validated retrospectively using past data or validated on computer-generated images instead of actual patient data. The findings, published in Nature Medicine, raise concerns about the credibility of these devices, since FDA authorization does not guarantee clinical effectiveness.

The researchers advocate for improved regulatory standards and clearer distinctions between types of clinical validation study, such as retrospective studies, prospective studies, and randomized controlled trials. They emphasize the need for the FDA and manufacturers to conduct thorough clinical validation studies and to make the results publicly available, in order to build trust in AI technologies in healthcare. The rapid increase in AI device approvals, from two per year in 2016 to 69 per year more recently, underscores the urgency of addressing these validation gaps to ensure patient safety and effective care.

- Nearly 43% of FDA-approved AI medical devices lack clinical validation on real patient data.

- The study analyzed over 500 AI devices, revealing significant gaps in validation methods.

- FDA authorization does not equate to proven clinical effectiveness.

- Researchers call for clearer regulatory standards and public access to validation results.

- The rapid increase in AI device approvals highlights the need for improved validation practices.

Related

It's not just hype. AI could revolutionize diagnosis in medicine

Artificial intelligence (AI) enhances medical diagnosis by detecting subtle patterns in data, improving accuracy in identifying illnesses like strokes and sepsis. Challenges like costs and data privacy hinder widespread adoption, requiring increased financial support and government involvement. AI's potential to analyze healthcare data offers a significant opportunity to improve diagnostic accuracy and save lives, emphasizing the importance of investing in AI technology for enhanced healthcare outcomes.

Everyone Is Judging AI by These Tests. Experts Say They're Close to Meaningless

Benchmarks used to assess AI models may mislead, lacking crucial insights. Google and Meta's AI boasts are criticized for outdated, unreliable tests. Experts urge more rigorous evaluation methods amid concerns about AI's implications.

The Data That Powers A.I. Is Disappearing Fast

A study highlights a decline in available data for training A.I. models due to restrictions from web sources, affecting A.I. developers and companies like OpenAI, Google, and Meta. Challenges prompt exploration of new data access tools and alternative training methods.

Brands should avoid the term 'AI'. It's turning off customers

A study found that labeling products as "AI-powered" decreases purchase intentions due to trust issues and privacy concerns. Companies should focus on transparent messaging to improve consumer acceptance of AI.

A new public database lists all the ways AI could go wrong

The AI Risk Repository launched by MIT's CSAIL documents over 700 potential risks of advanced AI systems, emphasizing the need for ongoing monitoring and further research into under-explored risks.

14 comments
By @funnygiraffe - 8 months
"226 of 521 FDA-approved medical devices, or approximately 43%, lacked published clinical validation data."

The lack of "published" clinical validation studies does not imply that the AI developer performed no clinical validation, nor that the FDA hasn't seen it. So it is not clear whether the problem is a lack of clinical validation or a lack of reporting. For some reason the title exaggerates yet further (half of FDA-approved AI not "trained" on real patient data).

By @amiroo - 8 months
In the spirit of the authors' point about sharing one's supporting evidence, could you please share the data and code for scraping the list of devices from the FDA website? :)
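
For anyone wanting to attempt this independently before the authors share their code: FDA clearance records can be pulled from the public openFDA API. A minimal Python sketch, assuming openFDA's documented 510(k) endpoint and field names (note that openFDA itself does not flag which devices are AI/ML-enabled; the FDA publishes that list separately, so joining the two is left to the reader):

    # Sketch: pull 510(k) clearance records from the public openFDA API.
    import requests

    BASE = "https://api.fda.gov/device/510k.json"

    def fetch_clearances(year: int, limit: int = 100) -> list[dict]:
        """Fetch 510(k) records with a decision date in the given year."""
        params = {
            # openFDA range queries accept YYYYMMDD dates
            "search": f"decision_date:[{year}0101 TO {year}1231]",
            "limit": limit,  # openFDA caps limit at 1000 per request
        }
        resp = requests.get(BASE, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json().get("results", [])

    # Example: print a few 2023 clearances (K-number and device name)
    for rec in fetch_clearances(2023, limit=5):
        print(rec.get("k_number"), "-", rec.get("device_name"))
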
By @Sevii - 8 months
Not a surprise. I worked on an ‘AI’ health insurance product for several years and even getting access to data was a struggle.
By @prashp - 8 months
What's missing here is an assessment of the risk of the medical device not performing as expected on real patients. These risks are usually reasonably mitigated by medical device manufacturers and designers, such that it doesn't matter that "AI medical devices" are not trained on real patient data.

The FDA keeps a database of all adverse events reported by manufacturers or healthcare systems [1], so we can check in a few years whether these AI medical devices are causing an uptick in complaints.

[1] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/s...
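
That database (MAUDE) is also exposed through openFDA's device adverse-event API, so the check can be scripted. A minimal sketch, assuming openFDA's documented endpoint and field names (the brand name queried is just a hypothetical example):

    # Sketch: count MAUDE adverse-event reports for a device brand
    # via openFDA's device/event endpoint.
    import requests

    URL = "https://api.fda.gov/device/event.json"

    def count_reports(brand_name: str) -> int:
        """Total adverse-event reports matching a device brand name."""
        params = {
            "search": f'device.brand_name:"{brand_name}"',
            "limit": 1,  # only the total from the response metadata is needed
        }
        resp = requests.get(URL, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()["meta"]["results"]["total"]

    print(count_reports("GI Genius"))  # hypothetical example query

Tracked year over year, a rising total for a given device would be the uptick to watch for.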

By @w_for_wumbo - 8 months
Maybe we could have an option to be a data donor, as opposed to only having the option to donate our organs for the purpose of science.
By @y-curious - 8 months
A huge problem is data protections in the USA that do not allow for easy sharing of patient data for these purposes. The liability is huge and the upside is very small. Furthermore, if it's a college-affiliated hospital, that data is not going anywhere except to the internal teams within the college.
By @can16358p - 8 months
Don't want to go too off-topic but:

Why does this have to be so hard? I mean, what if the regulators took the opposite approach: make ALL the data publicly available by anonymizing it.

What privacy concerns would anyone have if the whole dataset were completely anonymized? Wouldn't it create far more benefit by accelerating innovation in healthcare? I strongly believe the privacy concerns (which shouldn't exist after stripping out any PII) are outweighed by an order of magnitude by the upsides, which could literally save millions of lives in the longer term.

But again, regulation, as always, cripples innovation, this time slowing down developments that would literally save lives. Great.

By @uslic001 - 8 months
As a gastroenterologist I have used Medtronic GI Genius AI to do colonoscopies for the past 8 months. It is mildly helpful but has way too many false positive alarms for "polyps" that are not polyps. It needs better training in the real world.
By @blackeyeblitzar - 8 months
How can anyone except big incumbents who are immune to competition get access to data? I feel this is an unfortunate situation where the opportunity to be an innovator is locked away. Some of the reasons are good but it’s not a great outcome for us all.
By @zomg - 8 months
another confounding problem is how the FDA classifies and approves devices for clearance. most regulatory strategies have the company strive to find a predicate and prove they are substantially equivalent (SESE), which generally means a class II device, which does NOT require clinical data for clearance.

it's frustrating that the article uses the term "approve" which is specific to PMA devices (class III), whereas class II devices are "cleared" by the FDA.

- a medical device executive

By @AbstractH24 - 8 months
As long as these are diagnostic aids, not treatments like a pacemaker or medication, I don't see this as so concerning.
By @harish1977 - 8 months
god save us.. hope they didn't use omniverse replicator.
By @romesmoke - 8 months
Had a chat with a pharmaceutical company whistleblower recently.

They told me that the FDA has nothing to do with actual product testing, e.g., human trials. What the "approval" guarantees is proper facilities and other second-order criteria.

I'm writing this comment in the hopes of learning that I have misunderstood something crucial here. Because if it is indeed the case that the companies themselves are the only ones vouching for drug safety, that old story about capitalism and incentives implies we're seriously fucked.

By @freetanga - 8 months
Elizabeth Holmes (in a coarse voice): “Hold my beer”