July 12th, 2024

Peer review is essential for science. Unfortunately, it's broken

Peer review in science is crucial but flawed, lacking incentives to address fraud. "Rescuing Science: Restoring Trust in an Age of Doubt" explores the erosion of public trust during COVID-19 and suggests enhancing trustworthiness by prioritizing transparency and integrity. Fraud undermines that trust, especially as modern science relies increasingly on software code.


Peer review in science is crucial but currently flawed, lacking incentives to address fraud. The book "Rescuing Science: Restoring Trust in an Age of Doubt" explores the erosion of public trust in science during the COVID-19 pandemic and suggests ways to enhance trustworthiness. The pressure to publish leads to various forms of fraud, undermining trust. With modern science heavily reliant on computers and specialized software, peer review struggles to detect fraud effectively. Lack of public access to software codes further complicates the verification process, as scientists prioritize publishing papers over code transparency. This opacity not only hampers fraud detection but also impedes error identification, ultimately compromising the integrity of scientific research. As science evolves and becomes more intricate, the reliance on software code increases, making fraud detection more challenging. Addressing these issues requires a fundamental shift in the scientific community's incentive structures to prioritize transparency and integrity over publication metrics.

Related

The case for criminalizing scientific misconduct · Chris Said

The article argues for criminalizing scientific misconduct, citing cases like Sylvain Lesné's fake research. It proposes Danish-style committees and federal laws to address misconduct effectively, emphasizing accountability and public trust protection.

So Now the Feds Will Monitor Research Integrity?

The Biden administration forms a Scientific Integrity Task Force to monitor research integrity. Critics express concerns over proposed rule changes, citing persistent research misconduct issues amid increased government funding in universities.

Ranking Fields by p-Value Suspiciousness

The article addresses p-hacking in research, focusing on suspicious p-value clustering across fields. Economics shows more credible results. Replication crisis is universal, emphasizing the call for research integrity and transparency.

Researchers discover a new form of scientific fraud: 'sneaked references'

Researchers identify "sneaked references" as a new form of scientific fraud, artificially boosting citation counts. Concerns arise over integrity in research evaluation systems, suggesting measures for verification and transparency. Manipulation distorts research impact assessment.

When scientific citations go rogue: Uncovering 'sneaked references'

Researchers discovered "sneaked references," a new academic fraud involving adding extra references to boost citation counts. This manipulation distorts research visibility. Recommendations include rigorous verification and transparency in managing citations.

18 comments
By @sampo - 9 months
> And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.

It is not that common for scientists to air their dirty laundry in public. With covid, the debates and disagreements were very public.

Was covid-19 a severe disease or "just a flu"? Does it spread primarily by droplets, or is it airborne? Do masks work, or not? Were the lockdowns useful, or not? Did the border closures help, or not? Did the virus likely escape from a laboratory, or jump to humans from a natural animal population? Is long covid a serious condition, or is it just psychological?

There isn't a single policy-relevant question about covid-19, where you don't have one camp of highly-credentialed experts in medical science and epidemiology arguing for A, and another camp of equally highly-credentialed experts arguing for B.

People are not that stupid. They can see that science doesn't agree on pretty much anything on covid. And not just tiny minutiae, but about the big, relevant questions. So the loss of public trust is well-earned. Epidemiology just isn't nearly as well developed a science as we thought it to be.

And if other sciences only take a meta-level approach of "please trust the science", like the article author Paul Sutter does here, it doesn't help. Maybe if other sciences called epidemiology out ("please trust our science at least, we're not as bad as epidemiology"), it could help some.

By @UniverseHacker - 9 months
This criticism is mostly outdated: including the analysis code is now required by most good journals. As a computational biology academic PI, I review a lot of papers, and don't approve anything without code unless the analysis is trivial, like a standard off-the-shelf statistical test. In the past this objection has been overruled by editors, but not in the last few years.

That said, people misunderstand what peer review is and trust it too much. It is just a quick sanity check, and serves mostly to get feedback on how to make a paper more understandable. Good faith is assumed, reviewers do not look for fraud or hidden mistakes, that isn’t necessary or the point of peer review- fraud will eventually come out and consequences are severe. In most cases when I review a paper I have less than an hour to look it over, and mostly try to suggest things to make the paper easier to understand.

By @knighthack - 9 months
This article reads like nonsense - even the header is clickbait material.

Claiming science needs peer review is like saying a painting isn't art until a critic reviews it. The results exist independently of external validation.

Peer review is not essential for science. Science can be done, whether or not it's peer-reviewed.

Especially at scientific frontiers, where no one can really verify things and peers may not even exist, how does peer review work then?

The idea that something can be 'less scientific' just because it's not been peer-reviewed is nonsense. Unreviewed scientific work can stand on its own merit. Moreover this insistence on peer review really opens up more room for academic dogma and politics to creep into the hard sciences - not a good thing.

Given the amount of scientific fraud going on, even with 'peer review', it's just not necessary for true science and it's something that should be done away with.

By @elashri - 9 months
There are many problems with how many people perceive peer-review process that I gathered from my interactions with non-academics.

1. They assume that peer-review means reproducing the paper.

This is not the case. It just means the work followed the scientific method and has no serious and obvious problems (like being complete nonsense). There is a quality curve, of course, depending on the editors, journals, and peer reviewers.

2. Academics have to do peer review as part of their jobs.

It is always surprising for them to learn that peer review is volunteer work in the vast majority of cases. It is even a burden for many people that cuts into their actual funded work. The publishing industry has one of the highest profit margins for a reason.

3. The peer review process is the same across fields.

There is no way that CERN papers will be treated the same as papers from a neuroscience lab at an average research institution. Not because one is better or higher quality, but because of different fields, journals, etc.

4. Peer-review is an essential part to confirm the results.

In reality, peer review is a filter to help scientists navigate the sea of new papers in their fields. And by filter I mean filtering out the nonsense and the low-quality research, not confirming the correct results. It is also about making sure the methodology and results make sense.

Those points are usually what I see people discussing and forming opinions on. Any solution for the current academic problems will need more than regulation and new policies. The real need is a change in the structure of funding and how it is allocated (with an actual increase in budget). This is where many people will start to think there are other priorities for a budget increase.

By actual I mean an increase that does more than barely keep up with the inflation rate.

By @taylodl - 9 months
One receives grants for doing new research and writing papers about that research. There's no financial incentive for reviewing the existing body of work, and since one needs money to eat, guess what doesn't get done?

It's basically a case of lots of people are talking and few of them are listening. It's understandable that a lot of trash talk gets ignored.

By @LMYahooTFY - 9 months
We should seriously consider and question what peer review is.

Peer review is very obviously not essential for science, as the most astounding science, which gave us the modern era we all have come to enjoy, was done without peer review.

In fact, it's well worth reading about Robert Maxwell and his influence in birthing the scientific publishing industry (yes, strangely Ghislaine Maxwell's father) and the inception of peer review. And it's well worth asking how effective peer review is at encouraging scientific progress, how effective it is at producing gatekeepers, and how it incentivizes those gatekeepers. I'd suggest it's far better at the latter, and produces poor incentives.

But in today's discourse, peer review has become unquestionable and that should make us ask more questions.

By @2cynykyl - 9 months
I have come to realize that when we do a peer review for Journal X, all we are really doing is protecting the journal's reputation by helping weed out substandard papers.

There seems to be a belief that the peer review process is like a universal system, so if a paper is 'rejected' during peer review for Journal X then it will be thrown in the garbage and never trouble us again. But in reality, the authors will submit it to as many journals as necessary until it sneaks through, and eventually it will sneak through.

I repeat: every rejected paper will eventually be published somewhere. Perhaps not in Nature, or Top Journal A, but it will appear in Respectable Journal B or Mediocre Journal C. And it will bear the seal of "peer reviewed".

Ironically, I guess my point is that peer review is not broken; it's working great for the journals.

By @BenFranklin100 - 9 months
From an earlier comment:

I have long maintained that the NIH should set aside 20% of its budget to fund spot checking of the research output of the remaining 80% of funded research. Even if this funding is only sufficient to review a handful of papers each year, it stands to have a transformative effect on the level of hype and p-hacking in many fields, and could nearly eliminate the rare cases of actual fraud. It would also improve the overall quality and reliability of the literature.
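The p-hacking mentioned above is easy to demonstrate with a small simulation. In this sketch (the sample sizes, the number of outcomes, and the normal-approximation z-test are my own illustrative assumptions, not from the comment), a researcher testing several outcomes on pure-noise data and reporting only the smallest p-value "finds" significant effects far more often than the nominal 5% false-positive rate:

```python
import math
import random

def two_sided_p(a, b):
    """Two-sample z-test p-value (normal approximation, equal group sizes)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

def run_study(rng, n=100, outcomes=1):
    """Simulate a study with NO real effect; a p-hacker measures several
    outcomes and reports only the smallest p-value."""
    p_values = []
    for _ in range(outcomes):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        p_values.append(two_sided_p(a, b))
    return min(p_values)

rng = random.Random(42)
trials = 2000
honest = sum(run_study(rng) < 0.05 for _ in range(trials)) / trials
hacked = sum(run_study(rng, outcomes=5) < 0.05 for _ in range(trials)) / trials
# the hacked rate lands well above the nominal 5% false-positive rate
print(f"honest: {honest:.3f}, hacked: {hacked:.3f}")
```

With five outcomes per study, the chance of at least one p < 0.05 on null data is roughly 1 - 0.95^5, about 23%, which is also why clusters of p-values just under 0.05 look suspicious in aggregate.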

By @cyberbunga - 9 months
What I'm going to say is tangential to the article's main issue (scientific fraud), but I take issue with the "essential" in the headline. Not only was there science going on without peer review, there are some very prominent examples of it, too. None of Einstein's annus mirabilis papers were peer reviewed. Einstein didn't care much for peer review either. Later on, when Physical Review tried to peer review one of his papers with Rosen, he chose to submit the paper elsewhere.

More about it in: https://hsm.stackexchange.com/questions/5885/how-did-the-pub...

By @mihaaly - 9 months
I presume that the current review process was essential for, and formulated in, the age of printed journals. It is good to know before going to press whether a work is worth the costs and hassle, beyond the simplicity of publication that the article mentions.

Meanwhile journals went online to present their papers, yet procedures remained in the Gutenberg era, with pre-publication reviews.

Why not use modern technology not just to decrease printing costs and push a PDF onto a site (while still charging a lot for the service), but to extend it with post-publication review? Further reviews could be added later at the source of the article itself whenever something relevant happens: advancement, refutation, reproduction (hehe), fraud, and so on.

Think of it as an advanced and improved commenting service from the journals, or better yet, a consortium of journals forming common ground for Web 2.0-style interaction among scientists, other stakeholders, and perhaps the general public too. Everyone could review papers and rate journals, with the weight of a review based on the reviewer's familiarity with the topic, likely determined by the quality and quantity of their own reviewed publications. The attached review of the kind of experienced researcher presently sought out for peer review would then carry more weight than one from Average Joe of the Lowerville Technical Society buying access to the review system. Oh yes, the system is not free to maintain, so those who publish and want reviews would subscribe at the appropriate level (government support is not excluded, obviously; this is for the benefit of mankind, after all, in theory :) ).

It will not solve the fundamental replication and fraud crisis, but it may make problems more traceable and quality easier to discover, if the system is built right. Which is naturally an immense difficulty on its own, given that the current players are not really interested in change, but probably worth trying, no?

By @SiempreViernes - 9 months
I'm always baffled that only putting "PH. D." in big letters is apparently enough to grant an author universal authority, somehow the actual subject one has studied is of no importance to a lot of people.

Anyway, this piece doesn't actually have much to say about the practical aspects of peer review, this is about it:

> Peer reviewers don’t have the time, effort, or inclination to pick over new research with a fine-toothed comb, let alone give it more than a long glance.

Basically, "it's broken, there's massive fraud" is a statement proved by assertion which ironically is the type of shoddy work the text is supposed to oppose.

For a different perspective, see "The replication crisis is not a crisis of false positives" Coleman et al 2024, https://osf.io/preprints/socarxiv/rkyf7

By @nuc1e0n - 9 months
From the article: "How am I supposed to judge the correctness of an article if I can’t see the entire process?" If you can't, then don't. When software developers do code reviews, obscurity in the changes is sufficient reason to reject a pull request.

If a peer reviewer can't follow a scientific paper then they shouldn't be rubber stamping it. I'd say it's the responsibility of the paper's authors to ensure it has enough information in it to determine its validity. Reviewers shouldn't be afraid to send back papers that don't.

By @dartos - 9 months
What isn’t broken nowadays?
By @kelseyfrog - 9 months
I'm not sure I follow the author. He seems to complain that peer review is not sufficient to counter incentives to game the h-index[1].

It sounds like the h-index has ceased to be a measure and is now a target. He doesn't explain the incentives around a higher h-index (which decisions are based on it, which benefits it confers), or what we might do to replace it as a target.

1. https://en.wikipedia.org/wiki/H-index
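For reference, the h-index the comment links to is simple to compute: it is the largest h such that h of an author's papers each have at least h citations. A minimal sketch (the function name and sample citation counts are my own):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: one blockbuster paper can't raise h alone
print(h_index([]))                # 0
```

The second example shows why the metric invites gaming: citations beyond the h threshold are wasted, so spreading citations across many papers (e.g. via the "sneaked references" described above) moves the index more than a single highly cited result.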

By @csr86 - 9 months
Create Nobel prize category for proving existing research false
By @kkfx - 9 months
Off topic but... Why post ARS links in the long form instead of https://arstechnica.com/?p=2030357 ?
By @verelo - 9 months
So many ads, and this feels like an AI wrote it.