April 25th, 2025

Huge reproducibility project fails to validate biomedical studies

A Brazilian reproducibility project found that only 21% of the biomedical studies it tested could be replicated, revealing significant reliability issues and prompting calls for reforms to strengthen research integrity in the scientific community.

A large-scale reproducibility project in Brazil has revealed significant challenges in validating biomedical studies. Coordinated by the Brazilian Reproducibility Initiative, the effort involved over 50 research teams assessing the replicability of findings from 60 selected biomedical papers published between 1998 and 2017. The project focused on three common research methods: cell metabolism assays, genetic material amplification, and rodent maze tests. Despite the ambitious scope, the results were disappointing, with only 21% of the experiments meeting the criteria for successful replication. The average effect size observed in the original studies was found to be 60% larger than in the follow-up experiments, indicating a tendency for published results to overestimate the effects of interventions. The findings underscore the need for reforms in Brazil's scientific practices, as emphasized by project coordinators who advocate for changes in public policy and university protocols to enhance research integrity. The study, which has not yet undergone peer review, highlights the ongoing reproducibility crisis in science, echoing similar findings from other large-scale replication efforts globally.

- A Brazilian reproducibility project found that most of the tested biomedical studies could not be replicated.

- Only 21% of experiments met the criteria for successful replication, indicating significant issues in research reliability.

- The average effect size in original studies was 60% larger than in follow-up experiments, suggesting overestimation in published results.

- The initiative aims to prompt reforms in Brazil's scientific practices and improve research integrity.

- The study highlights the broader reproducibility crisis affecting the scientific community.

AI: What people are saying
The discussion surrounding the Brazilian reproducibility project reveals significant concerns about the state of scientific research.
  • Many commenters emphasize the detrimental impact of the "publish or perish" culture on research quality and integrity.
  • There is a call for systemic reforms to improve research practices and validation processes in academia.
  • Several participants highlight the importance of assessing the significance of studies that fail to replicate, questioning their impact on the scientific community.
  • Concerns about the reliability of published research are echoed, with some suggesting that the pressure to publish leads to lower standards.
  • Commenters express a desire for greater accountability in academia, advocating for scrutiny similar to that faced by public officials.
20 comments
By @drgo - about 1 hour
The crisis in science can only be fixed by addressing the slew of bad incentives built into the system. We can't predicate the job security, promotion, and prestige of every early-career scientist on publishing as many papers as possible and on obtaining grants (which requires publishing as many papers as possible), and then expect high-quality science. We can't starve universities of public funding and expect them not to selectively hire scientists whose main skill is publishing hundreds of "exciting" papers, and not to overproduce low-quality future "scientists" trained in the dark arts of academic survival. Reform is more urgent than ever; AI has essentially obsoleted the mental model that equates the count of published papers with productivity and quality.
By @jl6 - about 16 hours
It would be interesting for reproducibility efforts to assess “consequentiality” of failed replications, meaning: how much does it matter that a particular study wasn’t reproducible? Was it a niche study that nobody cited anyway, or was it a pivotal result that many other publications depended on, or anything in between those two extremes?

I would like to think that the truly important papers receive some sort of additional validation before people start to build lives and livelihoods on them, but I’ve also seen some pretty awful citation chains where an initial weak result gets overegged by downstream papers which drop mention of its limitations.
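One way to picture such a "consequentiality" measure (this framing and the toy data are mine, not jl6's): walk the citation graph and count how many papers transitively build on a result that failed to replicate. A minimal sketch, with an invented graph:

    from collections import deque

    # cites[p] = papers that directly cite p (hypothetical toy data)
    cites = {
        "failed_study": ["a", "b"],
        "a": ["c", "d"],
        "b": [],
        "c": ["e"],
        "d": [],
        "e": [],
    }

    def downstream_count(paper):
        """Count papers that depend on `paper`, directly or transitively (BFS)."""
        seen, queue = set(), deque([paper])
        while queue:
            for child in cites.get(queue.popleft(), []):
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
        return len(seen)

    print(downstream_count("failed_study"))  # 5: a pivotal failure, not a niche one

A real measure would also need to weight by citation context (does the citing paper actually depend on the result, or merely mention it?), which is much harder to extract.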

By @jpeloquin - about 11 hours
The median sample size of the studies subjected to replication was n = 5 specimens (https://osf.io/atkd7), probably because only protocols with an estimated cost of less than BRL 5,000 (around USD 1,300 at the time) per replication were included. So it's not surprising that only ~60% of the original biochemical assays' point estimates fell within the replicates' 95% prediction intervals. The mouse maze anxiety test (~10%) seems to be dragging down the average. n = 5 just doesn't give reliable estimates, especially in rodent psychology.
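To see how little n = 5 pins down, here is a minimal simulation (mine, not the Initiative's analysis; the true effect of 0.5 and unit variance are arbitrary assumptions) comparing the scatter of study means at n = 5 versus n = 30:

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect = 0.5      # hypothetical standardized effect (assumption)
    n_studies = 10_000     # number of simulated studies

    for n in (5, 30):
        # each row is one simulated study estimating the same true effect
        samples = rng.normal(loc=true_effect, scale=1.0, size=(n_studies, n))
        estimates = samples.mean(axis=1)
        lo, hi = np.percentile(estimates, [2.5, 97.5])
        print(f"n={n:2d}: 95% of study means fall in [{lo:+.2f}, {hi:+.2f}]")

At n = 5 the central 95% of estimates spans roughly -0.4 to +1.4, wide enough to flip the sign of the effect; at n = 30 it tightens to roughly +0.1 to +0.9.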
By @acscott314 - about 9 hours
Glad this is getting some attention

For the central limit theorem to hold, the random variables must be independent and identically distributed (i.i.d.). How do we know our samples are i.i.d.? We can only show when they are not (a small simulation below illustrates what correlation does to the usual error bars).

Add to that https://en.m.wikipedia.org/wiki/Why_Most_Published_Research_...

We've got to do better or science will stagnate
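As a concrete illustration of the i.i.d. point above (my own sketch; AR(1) correlation is just one arbitrary way samples can fail to be independent), positively correlated observations make sample means vary far more than the textbook sigma/sqrt(n) suggests:

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_trials, rho = 30, 20_000, 0.5

    # i.i.d. case: the usual CLT setting
    iid_means = rng.normal(size=(n_trials, n)).mean(axis=1)

    # correlated case: each observation leans on the previous one (AR(1))
    x = np.zeros((n_trials, n))
    x[:, 0] = rng.normal(size=n_trials)
    for t in range(1, n):
        x[:, t] = rho * x[:, t - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_trials)
    corr_means = x.mean(axis=1)

    print(f"textbook SD of the mean (sigma/sqrt(n)): {1 / np.sqrt(n):.3f}")
    print(f"observed SD, i.i.d. samples:             {iid_means.std():.3f}")
    print(f"observed SD, correlated samples:         {corr_means.std():.3f}")

The correlated means come out nearly twice as spread out, so error bars computed under the i.i.d. assumption are quietly overconfident.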

By @addoo - about 14 hours
This doesn’t really surprise me at all. It’s an unrelated field, but part of the reason I got so disillusioned with research that I switched out of a thesis program was that I started noticing reproducibility problems in published work. My field is CS/CE; generally papers reference publicly available datasets and can be easily replicated… except I kept finding papers with results I couldn’t recreate. It’s possible I made mistakes (what does a college student know, after all), but usually there were other systemic problems on top of reproducibility. A secondary trait I would often notice is a complete exclusion of [easily intuited] counter-facts because they cut into the paper’s claim.

To my mind there is a nasty pressure that exists for some professions/careers, where publishing becomes essential. Because it’s essential, standards are relaxed and barriers lowered, leading to lower-quality work being published. Publishing isn’t done in response to genuine discovery or innovation; it’s done because boxes need to be checked. Publishers won’t change because they benefit from this system, and authors won’t change because they’re bound to it.

By @jkh1 - about 15 hours
In my field, trying to reproduce results or conclusions from papers happens on a regular basis especially when the outcome matters for projects in the lab. However, whatever the outcome, it can't be published because either it confirms the previous results and so isn't new or it doesn't and no journal wants to publish negative results. The reproducibility attempts are generally discussed at conferences in the corridors between sessions or at the bar in the evening. This is part of how a scientific consensus is formed in a community.
By @WhitneyLand - about 14 hours
As part of the larger reproducibility crisis including social science, I wonder how much these things contribute to declining public confidence in science and the post-truth era generally.
By @gitroom - about 8 hours
pretty crazy reading all this and realizing how shaky some "facts" really are - you think the root problem comes from pressure to publish or is it just sloppy science piling up over time?
By @N_A_T_E - about 16 hours
Is there any path forward to fixing the current reproducibility crisis in science? Individuals can do better, but that won't solve a problem at this scale. Could we make systemic changes to how papers are validated and approved for publication in major journals?
By @chmorgan_ - about 14 hours
I follow Vinay Prasad (https://substack.com/@vinayprasadmdmph) to keep up on these topics. It feels like getting a portal to the future in some way as he's on the cutting edge of analyzing the quality of the analysis in a ton of papers. You get to see what conclusions are likely to change in the next handful of years as the information becomes more widespread.
By @coastermug - about 17 hours
I’ve not got the context on why Brazil was chosen here (paywall), but I coincidentally read a story on here about Richard Feynman visiting Brazil, in which he assessed their teaching and tried to impart his own teaching and learning techniques.
By @hahaxdxd123 - about 11 hours
A lot of people have pointed out a reproducibility crisis in social sciences, but I think it's interesting to point out this happens in CompSci as well when verifying results is hard.

Reproducing ML robotics papers requires the exact robot/environment/objects/etc -> people fudge their numbers and have strawman implementations of benchmarks.

LLMs are so expensive to train + the datasets are non-public -> Meta trained on the test set for Llama4 (and we wouldn't have known if not for some forum leak).

In some way it's no different than startups or salesmen overpromising - it's just lying for personal gain. The truth usually wins in the end though.

By @ein0p - about 15 hours
And all the drugs and treatments derived from those "studies" are going to continue to be prescribed for another couple of decades, much like surgeons kept cutting people up to "cure ulcers" long after it was proven that an antibiotic is all you really need. It took about a decade for that bulletproof, 100% reproducible study to make much of a difference in the field.
By @mrguyorama - about 14 hours
Yet again, more people on this site equating "failed to reproduce" with "the original study can't possibly be correct and is probably fraudulent".

That's not how it works. Science is hard, experiment design is hard, and a failure to reproduce could mean a bunch of different things. It could mean the original research failed to mention something critical, or you had a fluke, or you didn't understand the process right, or something about YOUR setup is unknowingly different. Or the process itself is somewhat stochastic.

This goes 10X for such difficult sciences as psychology (which is literally still in its infancy) and biology. In these fields, designing a proper experiment (controlling as much as you can) is basically impossible, so we have to tease signal out of noise, and that is failure-prone.

Hell, go watch YouTube chemists with PhDs fail to reproduce old papers. Were those papers fraudulent? No, science is just difficult and failure-prone.

If you treat "Paper published in Nature/Science" as a source of truth, you will regularly be wrong. Scientists do not do that. Nature is a magazine, and a business, and sees itself as pushing the cutting edge of research; it will happily publish an outright fraudulent paper if there is even the slightest chance it might be valid, and especially if it would be really cool if it were right.

When discussing how Jan Hendrik Schön got dozens of outright fraudulent papers into Nature despite nobody being able to even confirm he ran any experiments, they said that "even false papers can push the field forward". One of the scientists who investigated and helped get Schön fired even said that peer review is no indicator of quality or correctness. Peer review wasn't even a formal part of science publishing until the 1960s.

Science is "self-correcting" because if the "effect" you saw isn't real, nobody will be able to build off your work. Alzheimer's amyloid research has been really unproductive, which is how we knew it probably wasn't the magic bullet even before it had fraud scandals.

If you doubt this, look to China. They have ENORMOUS amounts of explicit fraud in their system, as well as a MUCH WORSE "publish or perish" state. Would you suggest it has slowed them down?

Stop trying to outsource your critical thinking to an authority. You cannot do science without publishing wrong or false papers. If you are reading about "science" in a news article, press release, or advertisement, you don't know science. I am continually flabbergasted by how often "Computer Scientists" don't even know the basics of the scientific method.

Scientists understood there was a strong link between cigarettes and cancer at least 20 years before we had comprehensive scientific studies to "prove" it.

That said, there are good things to do to mitigate the harms that "publish or perish" causes, like preregistration and an incentive to publish failed experiments, even though science progressed pretty well for 400 years without them. These reproducibility projects are great, but do not mistake their "these papers failed" as "these papers were written fraudulently, or by bad scientists, or were a waste".

Good programmers WILL ship bugs sometimes. Good scientists WILL publish papers that don't pan out. These are truths of human processes and imperfect systems.

By @kjkjadksj - about 9 hours
At the end of the day, people most trust results that validate in multiple datasets. No one really cherry-picks one thing and builds off of that, or they get slammed in peer review until they come back with sufficient evidence in the literature or through novel experiments.

A lot of things, in fact, do work. Hence modern science produces so much, despite this reproducibility crisis having been even worse in decades past.

By @moralestapia - about 13 hours
Academia is 90% a scam these days, and plenty of the professors involved are criminals. A criminal is someone who commits a crime (or many) [1], before some purist comes to ask "what do you mean?".

The most common crime they commit is fraud; the second most common is sexual harassment; the third would be plagiarism, although that one might not necessarily be punishable depending on the jurisdiction.

(IMO. I can't provide data on that and I'm not willing to prosecute them personally, if that breaks the deal for you, that's ok to me.)

I know academia like the back of my hand and have been everywhere around the world; it's the same thing all over. I can speak loudly about it because I'm Catholic and have money, so those lowlives can't touch me :D.

Every single time this topic comes up, there's a lot of resistance from "the public", who are willing to go to great lengths to defend "the academics" even though they know absolutely nothing about academic life and their only grasp of it was formed through TV and movies.

Anyone who has been involved in academia for more than about two years can tell you the exact same thing. That doesn't mean they're also rotten; I'm just saying they've seen all these things taking place around them.

We should really move the Overton window on this topic so that scientists are held to the same public scrutiny as everybody else, like public officials, because, by the way, nine times out of ten they are funded by public money. They should be held accountable, and there should be jail for the offenders.

1: https://dictionary.cambridge.org/dictionary/english/criminal

By @sshine - about 16 hours
If they had just used NixOS, reproducibility would be less of a problem!
By @baxtr - about 15 hours
I find it bizarre that people find this problematic.

Even Einstein tried to find flaws in his own theories. This is how science should actually work.

We need to actively try to falsify theories and beliefs. Only if we fail to falsify them should the theories be considered valid.