Many FDA-approved AI medical devices are not trained on real patient data
A study found that nearly 43% of FDA-approved AI medical devices lack clinical validation with real patient data, raising concerns about their effectiveness and calling for improved regulatory standards.
Research conducted by a multi-institutional team, including members from the University of North Carolina and Duke University, has revealed that nearly 43% of FDA-approved AI medical devices lack clinical validation based on real patient data. The study analyzed over 500 AI medical devices and found that many were either retrospectively validated using past data or used computer-generated images instead of actual patient data. The findings, published in Nature Medicine, highlight concerns regarding the credibility of these devices, as FDA authorization does not guarantee clinical effectiveness. The researchers advocate for improved regulatory standards and clearer distinctions between types of clinical validation studies, such as retrospective, prospective, and randomized controlled trials. They emphasize the need for the FDA and manufacturers to conduct thorough clinical validation studies and make the results publicly available to enhance trust in AI technologies in healthcare. The rapid increase in AI device approvals, from two per year in 2016 to 69 in recent years, underscores the urgency of addressing these validation issues to ensure patient safety and effective care.
- Nearly 43% of FDA-approved AI medical devices lack clinical validation on real patient data.
- The study analyzed over 500 AI devices, revealing significant gaps in validation methods.
- FDA authorization does not equate to proven clinical effectiveness.
- Researchers call for clearer regulatory standards and public access to validation results.
- The rapid increase in AI device approvals highlights the need for improved validation practices.
Related
It's not just hype. AI could revolutionize diagnosis in medicine
Artificial intelligence (AI) enhances medical diagnosis by detecting subtle patterns in data, improving accuracy in identifying illnesses like strokes and sepsis. Challenges like costs and data privacy hinder widespread adoption, requiring increased financial support and government involvement. AI's potential to analyze healthcare data offers a significant opportunity to improve diagnostic accuracy and save lives, emphasizing the importance of investing in AI technology for enhanced healthcare outcomes.
Everyone Is Judging AI by These Tests. Experts Say They're Close to Meaningless
Benchmarks used to assess AI models may mislead, lacking crucial insights. Google and Meta's AI boasts are criticized for outdated, unreliable tests. Experts urge more rigorous evaluation methods amid concerns about AI's implications.
The Data That Powers A.I. Is Disappearing Fast
A study highlights a decline in available data for training A.I. models due to restrictions from web sources, affecting A.I. developers and companies like OpenAI, Google, and Meta. Challenges prompt exploration of new data access tools and alternative training methods.
Brands should avoid the term 'AI'. It's turning off customers
A study found that labeling products as "AI-powered" decreases purchase intentions due to trust issues and privacy concerns. Companies should focus on transparent messaging to improve consumer acceptance of AI.
A new public database lists all the ways AI could go wrong
The AI Risk Repository launched by MIT's CSAIL documents over 700 potential risks of advanced AI systems, emphasizing the need for ongoing monitoring and further research into under-explored risks.
The lack of "published" clinical validation studies implies neither that the AI developer performed no clinical validation nor that the FDA hasn't seen it. So, it is not clear if the problem is with the lack of clinical validation or the lack of reporting. For some reason the title exaggerates yet further (half of FDA-approved AI not "trained" on real patient data).
The FDA keeps a database of all adverse events reported by manufacturers or healthcare systems [1], so we can check in a few years whether these AI medical devices are causing an uptick in complaints.
[1] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/s...
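For what it's worth, the MAUDE data behind that database is also exposed through the openFDA API, so a trend check like that could be scripted rather than done by hand. Below is a minimal Python sketch, not a tested implementation: it assumes the public device adverse event endpoint at api.fda.gov, the query field (device.brand_name), the count field (date_received), and the "time" key in count results are taken from my reading of the openFDA docs and should be double-checked, and the device name is hypothetical.

    # Hypothetical sketch: count MAUDE adverse event reports per year for a
    # given device brand via the openFDA device adverse event endpoint.
    # Field names ("device.brand_name", "date_received") and the "time" key
    # in count results are assumptions to verify against the openFDA docs.
    import requests

    OPENFDA_DEVICE_EVENTS = "https://api.fda.gov/device/event.json"

    def yearly_event_counts(brand_name: str) -> dict[str, int]:
        """Return adverse event report counts per year for a device brand."""
        params = {
            "search": f'device.brand_name:"{brand_name}"',
            "count": "date_received",  # aggregate report counts by date received
        }
        resp = requests.get(OPENFDA_DEVICE_EVENTS, params=params, timeout=30)
        resp.raise_for_status()
        by_year: dict[str, int] = {}
        for bucket in resp.json().get("results", []):
            year = bucket["time"][:4]  # daily buckets come back as YYYYMMDD strings
            by_year[year] = by_year.get(year, 0) + bucket["count"]
        return by_year

    print(yearly_event_counts("ExampleAITriageDevice"))  # hypothetical device name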
Why does this have to be so hard? I mean, what if the regulators took the opposite approach: make ALL the data publicly available by anonymizing it?
What privacy concerns would anyone have if all the data is completely anonymized? Wouldn't it create far more benefit by accelerating innovation in healthcare? I strongly believe the privacy concerns (which shouldn't exist after stripping out any PII) are outweighed by an order of magnitude by the upsides, which could literally save millions of lives in the longer term.
But again, regulation, as always, cripples innovation, this time slowing down developments that would literally save lives. Great.
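(As a toy illustration of the "strip out any PII" idea, a sketch like the following drops direct identifiers from a record before release. The field names are invented for the example, and real de-identification, such as HIPAA's Safe Harbor rule, covers many more identifier types and still leaves re-identification risk from quasi-identifiers.)

    # Toy sketch of naive PII stripping; field names are invented, and real
    # de-identification involves far more than dropping a few obvious keys.
    DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "address", "phone", "email", "dob"}

    def strip_pii(record: dict) -> dict:
        """Return a copy of the record without direct identifier fields."""
        return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    patient = {"name": "Jane Doe", "mrn": "12345", "age": 57, "diagnosis": "sepsis"}
    print(strip_pii(patient))  # {'age': 57, 'diagnosis': 'sepsis'}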
It's frustrating that the article uses the term "approve", which is specific to PMA devices (class III), whereas class II devices are "cleared" by the FDA.
- a medical device executive
They told me that the FDA has nothing to do with actual product testing, e.g., human trials. What the "approval" guarantees is proper facilities and other second-order criteria.
I'm writing this comment in the hopes of learning that I have misunderstood something crucial here. Because if it is indeed the case that the companies themselves are the only ones vouching for drug safety, that old story about capitalism and incentives implies we're seriously fucked.