Suspicious phrases in peer reviews point to referees gaming the system
Researcher Maria Ángeles Oviedo-García found 263 suspicious peer reviews in MDPI journals, indicating potential conflicts of interest and template use, prompting an investigation and highlighting systemic issues in academic publishing.
A recent analysis by researcher Maria Ángeles Oviedo-García of the University of Seville has revealed concerning patterns in peer reviews, particularly within journals published by MDPI. Oviedo-García identified 263 suspicious reviews across 37 journals, noting the frequent use of vague, generic phrases that suggest reviewers may be relying on templates to speed through their evaluations for personal gain. This practice raises questions about the integrity of the scientific literature, since future research could be misled by papers that were never adequately reviewed. Some reviewers reportedly requested citations to their own work, a further sign of potential conflicts of interest. MDPI has acknowledged the issue and is investigating the flagged papers, some of which will require re-evaluation. Template-based reviews feed a broader concern about peer-review quality, which has seen a rise in retractions tied to compromised evaluations. Experts argue that while detecting fake reviews is crucial, it is equally important to address systemic problems in the peer-review process, such as the pressure on researchers to publish frequently and the shortage of qualified reviewers. Improving the overall quality of peer review will require significant changes in the academic publishing landscape.
- Template-based peer reviews are raising concerns about the integrity of scientific literature.
- MDPI is investigating suspicious reviews and has found some papers needing re-evaluation.
- The use of generic phrases in reviews may indicate a lack of genuine evaluation.
- The rise in retractions highlights broader issues in the peer review process.
- Systemic changes are needed to improve the quality of peer reviews in academic publishing.
Related
Journal retracts all 23 articles in special issue
A journal retracted 23 articles from a special issue due to compromised peer review. Guest editor Abbas Mardani didn't comment. Authors criticized lack of transparency and faced consequences. Publisher Springer mentioned ongoing investigations.
Peer review is essential for science. Unfortunately, it's broken
Peer review in science is flawed, with weak incentives to catch fraud. "Rescuing Science: Restoring Trust in an Age of Doubt" explores the erosion of trust during COVID-19 and suggests enhancing trustworthiness by prioritizing transparency and integrity. Fraud undermines trust, especially given modern science's increased reliance on software code.
Thousands of papers misidentify microscopes, in possible sign of misconduct
A study found 28% of SEM-related research papers misidentify microscopes, suggesting misconduct and oversight issues. Identical errors indicate text reuse, highlighting the need for improved academic integrity and journal practices.
The papers that most heavily cite retracted studies
An analysis by Guillaume Cabanac reveals many academic papers cite retracted studies, raising reliability concerns. His tool flagged over 1,700 papers, prompting publishers to implement screening measures for retracted citations.
GPT-fabricated scientific papers on Google Scholar
The rise of GPT-generated scientific papers on Google Scholar threatens research integrity, with 62% lacking AI disclosure. Recommendations include improved filtering and education to enhance critical evaluation of research.
Of course, there is such a thing as "self-plagiarism". If you took an entire paper and resubmitted it to a different journal after it was published, that would be unethical. There is a related case that's fine where a book collects previously published papers, but that's done with full disclosure.
However, I also think that large portions of what is published should be reused, and the norms surrounding that reuse are suboptimal. The background sections of two papers often should have many sentences or paragraphs that are identical down to the word.[0] I certainly remember some fruitless sessions wondering whether a vocabulary change between two papers was load-bearing or not--encouraging reuse and making it explicit when components were reused would make papers more readable.
In this particular case, I suspect that some of these reviewers were putting out shoddy reviews using copy-paste without thinking about it. But it's also possible some of them had just gotten good at copy-pasting the parts of reviews that are always boilerplate, and the non-repeated parts of their reviews are worthwhile.
[0] The fact that you often skip those sections when reading scientific or mathematical papers is an argument for reuse, not against it. In philosophy, those sections often aren't really skippable, because terminology too often has small idiosyncrasies that are introduced early on.
First, a lot of journals post criteria to evaluate papers by. For various reasons, open journals seem to push these more strictly. Some of them get adopted across journals and even publishers. I could see a reviewer using the exact language of these criteria just to make it clear they're following the criteria.
Second, reviewing is kind of in crisis, ironically for reasons that cut against the argument of the paper. It's often thankless and increasingly hard to find reviewers, and when you can find them, they're short on time. These boilerplate bits of text or even use of AI might just be a way to save time while still acting in good faith.
I don't mean to suggest there's nothing problematic going on, or that the authors didn't address these issues, but I can think of other interpretations that would be important to address.
I don't understand why those are smoking guns. They look like standard problems that can be addressed with standard phrases. Do they appear in absolutely all the reviews of some reviewers?
It's like "We recommend that the paper be accepted after revision." The interesting part is whether the paper makes sense and whether the list of recommended revisions makes sense, not that they copy-pasted a filler sentence.
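The distinction drawn above -- boilerplate phrases versus repeated substantive text -- can be made concrete. A minimal sketch of the kind of check the analysis implies might count word n-grams that recur across a reviewer's reviews; everything here (function name, the sample reviews, the n-gram length) is a hypothetical illustration, not the method the researcher actually used:

```python
from collections import Counter

def shared_phrases(reviews, n=6):
    """Return word n-grams that appear in more than one review
    by the same reviewer -- a rough template-reuse signal."""
    def ngrams(text):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    counts = Counter()
    for gram_set in map(ngrams, reviews):
        counts.update(gram_set)  # each review contributes each gram once
    return {g for g, c in counts.items() if c > 1}

# Hypothetical reviews: the boilerplate sentence repeats, the substance doesn't.
reviews = [
    "The methodology section should be described in more detail. The dataset is too small.",
    "The methodology section should be described in more detail. Figure 2 is unreadable.",
]
print(shared_phrases(reviews))
```

Only the shared boilerplate surfaces; the reviewer-specific criticisms do not, which is exactly why repeated phrases alone (as the comment argues) don't prove a review is worthless.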
https://english.elpais.com/science-tech/2024-05-31/internal-...