October 20th, 2024

Irreproducible Results

The article highlights declining reproducibility in scientific experiments, particularly in biological sciences, due to biases favoring positive results. Experts recommend open-source databases to document all experimental outcomes for improved reliability.

Read original article

The article discusses the evolving nature of the scientific method, particularly focusing on the issue of irreproducible results in experiments. It highlights that the reproducibility of scientific experiments, especially in biological sciences, has been declining over time. A study by John Crabbe, which involved standardized experiments across three different labs, revealed significant discrepancies in results, suggesting that much of the scientific data may be unreliable. The article points out a bias in science towards positive results, which contributes to this problem. This bias manifests in various ways, including the tendency to publish only positive outcomes and to design experiments that favor positive results. To address these issues, some experts advocate for the establishment of open-source databases where researchers must document their planned experiments and all results, including negative ones. This approach could enhance the robustness of positive findings and improve the overall reliability of scientific research.

- The reproducibility of scientific experiments is declining, particularly in biological sciences.

- A study showed significant discrepancies in results across different labs, indicating potential unreliability in scientific data.

- There is a bias in science towards publishing positive results, which exacerbates the issue of irreproducibility.

- Experts suggest creating open-source databases to document planned experiments and all results, including negative ones (a rough sketch of such a record follows this list).

- Addressing these biases could lead to more robust and reliable scientific findings.
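
The open-database proposal above is easiest to picture as a record format. As a loose sketch (the field names and example values here are invented for illustration, not taken from the article), each entry would pair a preregistered plan with whatever outcome follows, so negative results are captured in the same structure as positive ones:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ExperimentRecord:
    """One entry in a hypothetical open preregistration database."""
    title: str
    hypothesis: str                         # what the researchers expect to find
    protocol: str                           # methods fixed before any data are collected
    preregistered_on: date
    outcome_summary: Optional[str] = None   # filled in after the experiment runs
    supports_hypothesis: Optional[bool] = None  # None until results exist; False marks a negative result
    raw_data_url: Optional[str] = None
    tags: list[str] = field(default_factory=list)

# Illustrative entry: a negative result gets recorded with the same weight as a positive one.
entry = ExperimentRecord(
    title="Drug-induced locomotion in inbred mice across three labs",
    hypothesis="Treated mice show a consistent increase in movement in every lab",
    protocol="Standardized housing, diet, handling, and test equipment",
    preregistered_on=date(2024, 1, 15),
)
entry.outcome_summary = "Effect present in all labs, but effect sizes varied widely"
entry.supports_hypothesis = False
```

Because the plan is filed before any results exist, readers can later check published claims against what was actually preregistered.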

Related

Peer review is essential for science. Unfortunately, it's broken

Peer review in science is flawed and provides little incentive to detect fraud. "Rescuing Science: Restoring Trust in an Age of Doubt" explores the erosion of trust during COVID-19 and suggests rebuilding trustworthiness by prioritizing transparency and integrity. Fraud undermines trust, especially as modern science relies increasingly on software code.

Ask HN: What are your worst pain points when dealing with scientific literature?

The author, with a background in computer science, aims to build tools that make it easier to extract value from scientific literature and is seeking input on effective existing tools and the field's biggest challenges.

You got a null result. Will anyone publish it?

Researchers struggle to publish null or negative results, leading to bias favoring positive findings. Initiatives like registered reports aim to enhance transparency, but challenges persist in academia's culture. Efforts to encourage reporting null results continue, aiming to improve research integrity.

Why Most Published Research Findings Are False

The article discusses the high prevalence of false research findings, influenced by biases, study power, and effect sizes, urging a critical evaluation of claims and caution against sole reliance on p-values.

We need to build the GitHub of scientific data

A centralized platform for scientific data is urgently needed due to a 17% annual loss of datasets, which hampers progress. Features like version control and licensing would enhance collaboration and accessibility.

13 comments
By @rcxdude - 6 months
Biology experiments are notoriously sensitive: even fairly standard protocols can be wildly unreliable or unpredictable. I've heard of at least one instance where a lab worked out that for one protocol, the path they took when carrying the sample from one room to another mattered (one stairwell meant it didn't work, the other meant it did). Even in much simpler systems you get strange effects like disappearing polymorphs (https://en.wikipedia.org/wiki/Disappearing_polymorph)
By @NeuroCoder - 6 months
I had a neuroscience professor in undergrad who did a bunch of experiments where the only variables were things like the material of the cage, bedding, feeder, etc. He systematically tested variations in each separately. Outcomes varied in mice no matter what was changed. I would love to tell you what outcomes he measured, but it convinced me not to go into mice research so it's all just a distant memory.

On the other hand, I've worked with people since then who have their own mice studies going on. We are always learning new ways to improve the situation. It's just not a very impressive front page so it goes unnoticed by those not into mice research methods.

By @ChadNauseam - 6 months
I like a suggestion I read from Eliezer Yudkowsky - journals should accept or reject papers based on the experiment's preregistration, not based on the results
By @nextos - 6 months
You can see this is a problem if you mine the distribution of reported p-values from published articles.
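
As a rough sketch of what that mining could look like, assuming you already have article full texts as plain-text files (the paths and the simplistic regex here are just illustrative):

```python
import re
from pathlib import Path
from collections import Counter

# Very naive pattern for reported p-values like "p = 0.03" or "p < .05"
P_VALUE_RE = re.compile(r"\bp\s*[=<]\s*(0?\.\d+)", re.IGNORECASE)

def extract_p_values(text):
    """Return all p-values reported in a chunk of article text."""
    return [float(m) for m in P_VALUE_RE.findall(text)]

def p_value_histogram(corpus_dir, bin_width=0.01):
    """Bucket p-values from every .txt file in corpus_dir into bins."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        for p in extract_p_values(path.read_text(errors="ignore")):
            if p < 0.1:  # focus on the region around the 0.05 threshold
                counts[round(p // bin_width * bin_width, 2)] += 1
    return dict(sorted(counts.items()))

if __name__ == "__main__":
    for bin_start, n in p_value_histogram("articles/").items():
        print(f"{bin_start:.2f}-{bin_start + 0.01:.2f}: {n}")
```

A pile-up of counts just below 0.05, compared with just above it, is the classic sign that positive results are being selected for.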

Andrew Gelman had a great post on this topic I can't find now.

Pre-registration could be a great solution. Negative results are also important.

By @krisoft - 6 months
I don't understand what is so disturbing about the Crabbe test. They injected mice with cocaine and observed that the mice moved more than normal. The labs differed in how much more. But why would they expect the extra movement to be constant and consistent?

Now if one set of mice moved more, while another started blowing orange soap bubbles from their ears, that would be disturbing. But just that the average differed? Maybe I should read the paper in question.

By @smitty1e - 6 months
By @necovek - 6 months
This is extremely interesting.

On top of keeping and publishing "negative outcomes", could we also move to actually requiring verification and validation by another "lab" (or really, an experiment done in different conditions)?

By @begueradj - 6 months
With that in mind, how could something like medication even exist, then?
By @pazimzadeh - 6 months
>> [John Crabbe] performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.

>> The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.

>> The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise.

This wasn't established when the post was written, but mice are sensitive and can align themselves to magnetic fields, so if the output is movement the result is not thaaaat surprising. There are a lot of things that can affect mouse behavior, including possibly pheromones/smell of the experimenter. I am guessing that behavior patterns such as anxiety behavior can be socially reinforced as well, which could affect results. I could come up with another dozen factors if I had to. Were mice tested one at a time? How many mice were tested? Time of day? Gut microbiota? If the effect isn't reproducible without the sun and moon lining up, then it could just be a 'weak' effect that can be masked or enhanced by other factors. That doesn't mean it's not real, but that the underlying mechanism is unclear. Their experiment reminds me of the rat park experiment, which apparently did not always reproduce, but that doesn't mean the effect isn't real in some conditions: https://en.wikipedia.org/wiki/Rat_Park.

I think the idea of publishing negative results is a great one. There are already "journals of negative results". However, for each negative result you could also make the case that some small but important experimental detail is the reason why the result is negative. So negative results have to be repeatable too. Otherwise, no one would have time to read all of the negative results that are being generated. And it would probably be a bad idea to not try an experiment just because someone else tried it before and got a negative result once.

Either way, researchers aren't incentivized to do that. You don't get more points on your grant submission for publishing negative results, unless you also found some neat positive results in the process.

By @11101010001100 - 6 months
Ironically, some of Jonah Lehrer's work is fabricated.
By @emmelaich - 6 months
(2011)