The staggering death toll of scientific lies
Scientific fraud, exemplified by Don Poldermans' falsified research, poses health risks and raises questions about accountability, with debates on criminalization and proposals for clearer legal frameworks and independent oversight.
Scientific fraud poses significant risks, including loss of life, as illustrated by the case of cardiologist Don Poldermans, whose falsified research on beta blockers led to increased mortality rates in heart surgery patients. Despite the serious implications of such misconduct, the scientific community often lacks effective punitive measures. Poldermans faced minimal consequences after admitting to using fictitious data, and many of his studies remain unaddressed. The article raises the question of whether research misconduct should be criminalized, noting that while some argue for legal repercussions, others caution against the potential chilling effect on scientific inquiry. Current penalties for fraud are often insufficient, with institutions prioritizing their reputations over thorough investigations. Proposals for a new legal framework to address scientific fraud have emerged, aiming to clarify the distinction between carelessness and intentional misconduct. However, the effectiveness of such measures remains uncertain, as the scientific community struggles with accountability. The need for external oversight is emphasized, suggesting that independent review boards could enhance the integrity of research. Ultimately, while criminalization may provide a means of accountability in severe cases, it is not the sole solution to the broader issue of scientific fraud.
- Scientific fraud can lead to significant health risks and fatalities.
- Current consequences for research misconduct are often minimal and ineffective.
- There is debate over whether to criminalize scientific fraud.
- Proposals for clearer legal definitions of fraud are being considered.
- Independent oversight may improve accountability in scientific research.
Related
The case for criminalizing scientific misconduct · Chris Said
The article argues for criminalizing scientific misconduct, citing cases like Sylvain Lesné's fake research. It proposes Danish-style committees and federal laws to address misconduct effectively, emphasizing accountability and public trust protection.
So Now the Feds Will Monitor Research Integrity?
The Biden administration forms a Scientific Integrity Task Force to monitor research integrity. Critics express concerns over proposed rule changes, citing persistent research misconduct issues amid increased government funding in universities.
Research into homeopathy: data falsification, fabrication and manipulation
Research on homeopathy faces credibility issues due to data manipulation in a study led by Michael Frass. The study, once positive, now raises concerns of scientific misconduct, urging withdrawal of publication. Challenges persist in alternative medicine research, highlighting the conflict between ideology and scientific integrity.
Peer review is essential for science. Unfortunately, it's broken
Peer review in science is flawed, with little incentive to detect fraud. "Rescuing Science: Restoring Trust in an Age of Doubt" explores trust erosion during COVID-19, suggesting enhancing trustworthiness by prioritizing transparency and integrity. Fraud undermines trust, especially with increased reliance on software code in modern science.
The Academic Culture of Fraud
In 2006, Sylvain Lesné's Alzheimer’s research faced retraction due to manipulated images, highlighting academic fraud issues. Similar cases reveal a troubling trend of inadequate accountability in research institutions.
What the guy did was clearly wrong but it’s a slightly tenuous causal chain between that and 800,000 deaths. Questions may be asked, for example, about whether the medical guidelines should have been based on studies that seemingly had a single point of failure (this one corrupt guy).
There’s an extremely toxic (and ironically very anti-scientific) culture of “study says it so it’s true” that permeates medical and scientific fields and the reporting thereof. Caveats and weaknesses in the primary research get ignored in favor of abstracts and headlines, with each layer of indirection discarding more of the nuance and adding more weight of certainty to a result that should in truth remain tentative.
Prosecuting one type of bad actor might not make a lot of difference and might distract from the much larger systemic issues facing our current model of scientific enquiry.
Physicists seem to be really good about this, and many other aspects of implementing the scientific method too.
I wish the other sciences would get on board. It would eliminate almost all the chronic problems that plague biological and social sciences: falsification, p-hacking, failing to notice honest methodological mistakes, outright fraud, etc.
I fear that the problem is, we can't get there from here for social reasons. The people at the top of these fields - the ones who drive culture in academic institutions, set publication standards for journals, influence where grant money is allocated, etc. - all got there by using sloppy methods and getting lucky. I think that, on some level, many of them know it, and know that fixing the rotten core of their field inevitably involves subjecting their own work - and, by extension, reputations - to a level of scrutiny that it is unlikely to survive.
In the US at least, it's nearly impossible to commit this kind of malfeasance without committing federal wire fraud - faked research would nearly always be part of a grant application, at least eventually, for example.
Plus, I'm surprised some enterprising lawyers haven't at least tried some massive class action lawsuits. The actual researcher may not have much to go after, but surely their institutions would. If you can get huge class action payouts for the dubious connection of talc in baby powder to cancer, why can't you get a payout here where (a) the malfeasance was intentional from the get go and (b) the harms are unambiguously clear from follow-up meta-analysis studies.
I guess I would like to understand if there is some fundamental reason that existing statutes aren't enough before adding laws.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3932762/
This reaches the conclusion that beta blockers are harmful. However, if you look at the meta-analysis, specifically Figure 2, you find that the conclusion is driven mainly by a single trial - the 2008 POISE trial.
If you go to the POISE trial: https://www.thelancet.com/journals/lancet/article/PIIS0140-6...
You find that they discovered fraud in at least some of the hospitals:
"Concern was raised during central data consistency checks about 752 participants at six hospitals in Iran coordinated by one centre and 195 participants associated with one research assistant in three of 11 hospitals in Colombia. On-site auditing of these hospitals and cases indicated that fraudulent activity had occurred. Before the trial was concluded, the operations committee—blinded to the trial results at these hospitals and overall—decided to exclude these data (webappendix 1)."
We have an important question - should pre-op patients be given beta blockers - and the largest, most definitive trials to answer that question have at least some taint of fraud.
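To make the "driven by a single trial" point concrete, here is a minimal sketch of a leave-one-out sensitivity check on a fixed-effect inverse-variance pool. The effect sizes and standard errors below are hypothetical, not the actual numbers from the meta-analysis; "POISE" simply stands in for one heavily weighted trial:

```python
import math

# Hypothetical (log risk ratio, standard error) pairs for five trials.
# The large trial's small SE gives it most of the weight in the pool.
trials = {
    "trial_a": (0.10, 0.30),
    "trial_b": (-0.05, 0.25),
    "trial_c": (0.02, 0.40),
    "trial_d": (-0.08, 0.35),
    "POISE":   (0.30, 0.08),  # dominant trial: small SE -> large weight
}

def pooled_log_rr(data):
    """Fixed-effect inverse-variance pooled log risk ratio."""
    weights = {k: 1.0 / se ** 2 for k, (_, se) in data.items()}
    total = sum(weights.values())
    return sum(weights[k] * data[k][0] for k in data) / total

full = pooled_log_rr(trials)
loo = pooled_log_rr({k: v for k, v in trials.items() if k != "POISE"})

print(f"pooled RR (all trials):  {math.exp(full):.3f}")   # clearly above 1
print(f"pooled RR (minus POISE): {math.exp(loo):.3f}")    # close to 1
```

With these made-up inputs, excluding the one dominant trial moves the pooled risk ratio from "harmful" back to roughly null, which is exactly the fragility Figure 2 of the meta-analysis is being criticized for.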
Either they're not that smart or the processes aren't very good -- though no single researcher is responsible for their field's poor processes. Either way, we shouldn't assume that any one PhD or MD recipient is an expert until something changes. Degrees, on their own, don't signify expertise or credibility.
Scientists making errors in good faith should on the other hand be insulated from any kind of liability.
1) Any research institutions that receive government funding are required to spend 10% (or 15% or 20% or whatever) of their total budget on replication. If they don't, they stop receiving any government funding.
2) When citing a paper scientists are required to include any replication studies, both successful and not successful.
This would hopefully lead to more replication studies being done, even if it doesn't answer the question of what to do with a study until it's been replicated sufficiently.
The second part would help us gauge the validity of a paper. Papers that base their central premise on studies with multiple independent replications would probably be a bit more trustworthy than papers based on unverified studies.
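The second proposal can be sketched with a toy data model. Everything here is hypothetical (the `Citation` structure and the summary format are invented for illustration); the point is just that a citation carrying its known replication outcomes makes the replication status mechanically checkable:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """Toy model: a cited study plus its known replication outcomes.
    Each entry in `replications` is True (replicated) or False (failed)."""
    title: str
    replications: list = field(default_factory=list)

def replication_summary(cites):
    """Report successful/total replications for each cited study."""
    return {
        c.title: f"{sum(c.replications)}/{len(c.replications)} replications successful"
        for c in cites
    }

cites = [
    Citation("Beta blockers pre-op", [True, False, False]),
    Citation("Unreplicated pilot study", []),  # never replicated: 0/0
]
print(replication_summary(cites))
```

A "0/0" entry would flag exactly the unverified studies the comment is worried about, without yet deciding how much weight to give them.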
I told her that I could not read any further, because either she wouldn't get her PhD or I would be morally wounded.
She asked me for some examples of errors and to each of them she was saying, with evidence, that this is what "everyone does".
This was nutrition, something that is at least to some extent innate, so there won't be disasters (I hope). The same thing in pharma is a disaster hanging by a thread.
Literally anybody can write bullshit, and anybody with some cash or connections can get it published. Deciding to make it a medical guideline because the text and its metadata look a certain way is basically as competent as just using ChatGPT.
I recall there was a discussion on HN a few years back about the Alzheimer plaque connection being established on fake data.
I've worked with both ends of the spectrum - fraudulent tenured PIs at leading research universities are not that rare, but highly skilled and reliable PIs are more common. The fundamental difference always seems to be record-keeping - frauds aren't interested in keeping detailed records of their activities that can be used by others to replicate their work (since their work is non-replicable). In contrast, the reputable researcher will want such detailed records for various reasons, including defense against false claims of fraud or incompetence, which is quite common if the research results are not aligned with corporate profit motives in areas like pharmaceuticals, fossil fuels and climate, environmental pollutants, etc.
If the powers that be really wanted to reduce research fraud, the easiest way is to make detailed record-keeping a requirement of federally-funded research, with regular audits of lab notebooks and comparisons to published work. This matters, because the problem is set to get worse with the spread of AI tools that make it possible to generate hard-to-detect fake datasets and images. In the past a great many frauds were caught because their fake data generation was so obvious, often just a copy and paste effort.
Before I get pilloried for whataboutism, all I'm trying to illustrate is that the title is hyperbole. Fraud in medical research is definitely a problem leading to serious consequences for patients everywhere. Let's just call it what it is.