The case for criminalizing scientific misconduct · Chris Said
The article argues for criminalizing scientific misconduct, citing cases like Sylvain Lesné's fake research. It proposes Danish-style committees and federal laws to address misconduct effectively, emphasizing accountability and public trust protection.
The article discusses the argument for criminalizing scientific misconduct, citing examples like Sylvain Lesné's faked Alzheimer's research and image manipulation at Harvard's Dana-Farber Cancer Institute. It highlights the significant impact of such misconduct on delaying potential treatments and causing the loss of Quality Adjusted Life Years (QALYs). Despite clear evidence, researchers like Lesné often face no consequences, with universities and journals failing to take decisive action. The proposal suggests implementing Danish-style independent committees and a federal criminal statute to address scientific misconduct more effectively. The article responds to objections regarding deterrence, scope of prosecution, and false accusations, emphasizing the need for accountability to protect public trust and prevent harm caused by fraudulent research practices.
Related
KrebsOnSecurity Threatened with Defamation Lawsuit over Fake Radaris CEO
KrebsOnSecurity faced a defamation lawsuit threat for exposing Radaris' true owners, the Lubarsky brothers, linked to questionable practices. Despite demands, KrebsOnSecurity stood by its reporting, revealing a complex web of interconnected businesses.
Simple ways to find exposed sensitive information
Various methods to find exposed sensitive information are discussed, including search engine dorking, Github searches, and PublicWWW for hardcoded API keys. Risks of misconfigured AWS S3 buckets are highlighted, stressing data confidentiality.
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
Some fundraisers pay >90% of the funds to themselves
A network of political nonprofits, known as 527s, misallocates over 90% of donations to fundraising rather than causes. ProPublica's investigation exposes lack of transparency, regulatory loopholes, and concerns over legitimacy.
The Encyclopedia Project, or How to Know in the Age of AI
Artificial intelligence challenges information reliability online, blurring real and fake content. An anecdote underscores the necessity of trustworthy sources like encyclopedias. The piece advocates for critical thinking amid AI-driven misinformation.
I agree that something stricter should be done, but it should not involve bringing the legal system into play. I see a fundamental issue with bringing science to trial courts, where rhetoric, appeals to emotion, and other priorities are paramount, rather than the technicalities of overenthusiastic interpretations, data fudging, p-hacking, empirical anomalies, and wilful data manipulation.
Science works by different norms of truth (I would call them statistical) than the judicial system does (beyond reasonable doubt / preponderance of the evidence). I believe an international peer scientific committee ostracising a person from publication for X number of years, or forever, might be a better measure than a criminal trial and punishment in open court.
As I once said:
"Perjury must be a crime. There is only one sin in science, and that sin is faking data, and faking evidence is faking data. Perjury is surely a crime."
I'll leave out the unfortunate context in which this needed saying.
We also don’t want a cottage industry of performative, ladder climbing researchers siphoning funding from real ones.
At a minimum, the public could stop funding researchers with faked data or images. It’s unacceptable that the NIH keeps funding known frauds. If you doctor images, you’re done.
Of course these are just the wildly successful instances of such fraud. Most such frauds probably amount to nothing more than one quickly ignored paper. But the crime still matters because it has the potential to derail a field. The magnitude of the crime is not captured by calling it "fraud".
Given it relies on institutions (I assume this includes universities) sending it complaints, I’m not sure how we get over the conflict of interest mentioned.
> Sylvain Lesné, the lead author on the Alzheimer’s paper, remains a professor at the University of Minnesota and still receives NIH funding
Okay, maybe you permit anonymous complaints. If the anonymous complaint results in an adverse finding, the institution is penalised. If not, just the researcher.
Penalties should include fines. But also a term during which they are blacklisted from NIH funding.
[1] https://ufm.dk/en/research-and-innovation/councils-and-commi...
Maybe a civil charge to pay back the public funding, in cases where public funding was involved. Turning researchers into debt serfs may achieve better compliance and preserve public access to their cognitive abilities, as opposed to prison.
Maybe it's overall better to let science sort out science things.
Reproducibility is the test suite of science. Make experiments reliably, repeatably testable.
Wonder if computable research will become a requirement for publication. Will that make a slow process slower?
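A minimal sketch of what "computable research" could mean in practice: an analysis pipeline whose result is fully determined by an explicit seed, with an automated check that re-running it reproduces the same result. The function names here are hypothetical, not from any real publication requirement.

```python
import random

def run_experiment(seed: int) -> float:
    """Toy 'experiment': the mean of pseudo-random draws,
    fully determined by the seed (no hidden global state)."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return sum(samples) / len(samples)

def check_reproducible(seed: int) -> bool:
    # Re-running with the same seed must give a bit-identical result;
    # this is the "test suite" a reviewer or journal could run.
    return run_experiment(seed) == run_experiment(seed)

assert check_reproducible(42)
```

Real pipelines would also pin dependency versions and input data hashes, but the principle is the same: the published claim is something a third party can re-execute and assert against.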