February 21st, 2025

Please Commit More Blatant Academic Fraud

The article addresses academic fraud in AI research, emphasizing the normalization of subtle fraud and a recent blatant case, and urging greater scrutiny and a cultural shift toward integrity in research practices.


The article discusses the issue of academic fraud within the artificial intelligence research community, highlighting a recent case of blatant fraud that has sparked a call for greater scrutiny of published work. The author argues that subtle forms of fraud, such as cherry-picking data and manipulating results, have become normalized in academia, leading to a collective blind spot regarding the integrity of research. This normalization is perpetuated by a culture that prioritizes publication output over scientific rigor, creating an environment where researchers feel pressured to engage in dishonest practices. The author suggests that the emergence of explicit fraud may force the community to confront these issues and encourage a more critical evaluation of research. By acknowledging the prevalence of fraud, researchers may be motivated to produce work that withstands scrutiny and contributes genuine scientific value. The article concludes with a provocative call to action, urging researchers to commit more blatant fraud to catalyze a necessary reckoning within the field, ultimately aiming to strengthen academic norms and improve the integrity of AI research.

- The normalization of subtle academic fraud in AI research has created a collective blind spot.

- A recent case of blatant fraud may prompt greater scrutiny of published work.

- The pressure to publish can lead researchers to prioritize career advancement over scientific integrity.

- Acknowledging the prevalence of fraud could motivate researchers to produce more rigorous work.

- The author provocatively calls for more blatant fraud to catalyze change in academic norms.

Related

Peer review is essential for science. Unfortunately, it's broken


Peer review in science is flawed and offers little incentive to catch fraud. "Rescuing Science: Restoring Trust in an Age of Doubt" explores the erosion of trust during COVID-19, suggesting that trustworthiness be strengthened by prioritizing transparency and integrity. Fraud undermines trust, especially as modern science relies increasingly on software code.

The Academic Culture of Fraud


Sylvain Lesné's 2006 Alzheimer's paper was retracted due to manipulated images, highlighting academic fraud issues. Similar cases reveal a troubling trend of inadequate accountability at research institutions.

Six top Alzheimer's researchers accused of fraud


Several prominent Alzheimer's researchers face fraud allegations, including falsified data and images. These accusations raise concerns about research integrity, impacting funding and suggesting 14% of publications may involve fraud.

To what extent is science a strong-link problem?


A recent case of scientific misconduct involving a US researcher raises concerns about integrity in high-impact journals, emphasizing the need for interdisciplinary engagement and proactive promotion of overlooked scientific work.

Fake papers contaminate world scientific literature, fueling a corrupt industry


Fake scholarly papers are undermining medical research integrity, with hundreds of thousands still circulating. The "publish or perish" culture fuels this issue, prompting the need for better detection tools and peer review improvements.

28 comments
By @ploynog - about 2 months
Double-blind review is a mirage that does not hold up. While I was in academia I reviewed a paper that turned out to be a blatant case of plagiarism. It was a clear Level 1 copy according to the IEEE plagiarism levels (Uncredited Verbatim Copy of more than 50% of a single paper). I submitted all of these findings with the original paper and what parts were copied (essentially all of it) as my review.

A few days later I got an email from the author (some professor) who wanted to discuss this with me, claiming that the paper was written by some of his students who were not credited as authors. They were inexperienced, made a mistake, yaddah yaddah yaddah. I forwarded the mail to the editors and never heard about this case again. I don't expect that anything happened; the corrective actions for a level-1 violation are pretty harsh and would have been hard to miss.

The fact that this person was able to obtain my name and contact info shattered any trust I had in the "blind" part of the double-blind review process.

The other two reviewers had recommended to accept the paper without revisions, by the way.

By @proto-n - about 2 months
"For the first time, researchers reading conference proceedings will be forced to wonder: does this work truly merit my attention? Or is its publication simply the result of fraud? [...] But the mere possibility that any given paper was published through fraud forces people to engage more skeptically with all published work."

Well... spending a few weeks reproducing a shiny conference paper that simply doesn't work and is easily beaten by any classical baseline will do that to you in the first few months of your PhD imo. I've become so skeptical over the years that I assume almost all papers are lies until proven otherwise.

"This surfaces the fundamental tension between good science and career progression buried deep at the heart of academia. Most researchers are to some extent “career researchers”, motivated by the power and prestige that rewards those who excel in the academic system, rather than idealistic pursuit of scientific truth."

For the first years of my PhD I simply refused to partake in the subtle kinds of fraud listed in the second paragraph of the post. As a result, I barely had any publications worth mentioning. Mostly papers shared with others, where I couldn't stop the paper from happening by the time I realized there was too little substance for me to be comfortable with it.

As a result, my publication history looks sad and my career looks nothing like I wished it would.

Now, a few years later, I've become much better at research and can get my papers to the point where I'm comfortable submitting them with a straight face. I've also come to terms with overselling something that does have substance, just not as much as I wish it had.

By @GuestFAUniverse - about 2 months
I am a co-author on a paper I never asked for, but my supervisor insisted, because the petty idea that upgraded his desperate attempt to the point of being considerable at all came from me. It was at a chair which normally prided itself on only publishing in the most highly regarded journals of its field (internally graded A, B, C). They had gone a few years without a viable paper and were desperate to publish. From my POV this was a D. The paper is worthless crap, hastily put together within two weeks. It should have been obvious to the reviewers.

I feel ashamed that my name is on it. I wish I could retract it.

So, yes please: make it hard to impossible for paper mills and kill the whole publish or perish approach.

By @nis0s - about 2 months
> Proclaiming that your work is a “promising first step” in your introduction, despite being fully aware that nobody will ever build on it.

Science produces discrete units which can be used in different ways, even if not in the exact form of the preceding research. I am not sure it's reasonable to say that existing ideas, even if not cited, are not inspirational (to the researchers themselves). Peer review isn't perfect, but I think that all accepted papers have something academically or scientifically relevant, even if there's no guarantee that the paper will generate hundreds of subsequent citations. I think improving your subsequent work is more important, which includes mentioning why you think some previous work may not be as relevant anymore. This last step is often missing from many research papers.

I think the author is right that it doesn’t quite make sense to publish anything you know isn’t quite correct. But I can think of several papers in different fields which someone may think are “not quite correct”, but the goal of such papers, I think, is to demonstrate the power of low probability scenarios, or edge cases. Edge cases are important because they break expected behavior, and are often the root cause of system fragility, system evolution, or poor generalization in other systems.

By @ngriffiths - about 2 months
I had two experiences at the polar opposites of the spectrum - one research team I worked on had very high standards and was comfortable being patient for material that had value. The other involved an approach that obviously stood no chance to be useful to anyone.

Some differences:

- The first one was in a space with more low hanging fruit

- The first one was after large effect sizes, not the kind where you can massage the statistical model

- The second one was a topic with far higher public interest

- The second one was primarily an analytic project, whereas the first one was primarily experimental

I feel like bad science lives in the middle of a spectrum - you have young fields/subfields with boring but impressive experimental breakthroughs, and on the other end you have highly political questions that have been argued to death without resolution. Bad science is about borrowing some of the strategies used in politics because all the important experiments have already been done.

By @michaelt - about 2 months
> And we must ensure that explicit suggestions to modify one’s science in the service of one’s career – “you need to do X to be published”, “you need to publish Y to graduate”, “you need to avoid criticizing Z to get hired” – carry social penalties as severe as a suggestion of plagiarism or fraud.

One of the pernicious things in this area is that, even as we teach young researchers how to avoid making mistakes and engage sceptically with the work of others and that scientific fraud is a nontrivial issue, we also tell them how to commit fraud themselves and that their competition is doing it.

"Watch out for P-hacking, that's where the researcher uses a form of analysis that has a small chance of a false positive, and analyses loads of subsets of your dataset until a false positive arises and just publishes that one"

"Watch out for over-fitting to benchmarks, like a car taking the speed crown by sacrificing the ability to corner"

"Watch out for incomplete descriptions of test setups, like testing on a 'continent-scale map' but not mentioning how detailed a map it was"

"Watch out for citations where the cited paper doesn't say what is claimed, some people will copy-and-paste citations without reading the source paper"

"Watch out for papers using complicated notation, fancy equations and jargon to make you feel this looks like a 'proper' paper"

"Watch out for deceptive choice of accurate numbers, like a study with a 25% completion rate including the drop-outs in the number of participants"

"Watch out for simulations with inaccurate noise models, if the noise is gaussian in the simulation but a random walk in reality, great simulated results won't transfer to reality"

I've made no suggestion at all that you should modify your science or commit fraud - but I've also just trained you in how to do it.
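The p-hacking recipe in the first warning above can be sketched in a few lines. This is an illustrative toy (all names and parameters are made up, not from the article): generate pure noise, test many subgroups, and report only the "significant" one.

```python
import math
import random

def two_sided_p(successes, n, p0=0.5):
    """Normal-approximation two-sided test of a proportion against p0."""
    if n == 0:
        return 1.0
    z = (successes / n - p0) / math.sqrt(p0 * (1 - p0) / n)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided p-value from |z|
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def p_hack(n_subjects=2000, n_subgroups=100, seed=0):
    """Pure-noise data: every 'effect' found below is a false positive."""
    rng = random.Random(seed)
    outcome = [rng.random() < 0.5 for _ in range(n_subjects)]    # null: fair coin
    subgroup = [rng.randrange(n_subgroups) for _ in range(n_subjects)]
    pvals = []
    for g in range(n_subgroups):
        hits = sum(o for o, s in zip(outcome, subgroup) if s == g)
        size = sum(1 for s in subgroup if s == g)
        pvals.append(two_sided_p(hits, size))
    return pvals

pvals = p_hack()
print(f"smallest p across {len(pvals)} subgroups: {min(pvals):.4f}")
```

With roughly 100 looks at pure noise, the chance that at least one subgroup comes out "significant" at p < 0.05 is close to 1 - 0.95^100, i.e. nearly certain; publishing only that subgroup is the fraud being warned against.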

By @bjackman - about 2 months
> Submitting a paper to a conference because it’s got a decent shot at acceptance and you don’t want the time you spent on it go to waste, even though you’ve since realized that the core ideas aren’t quite correct.

I don't see a problem with this? If papers are the vehicle for conference entries, why should authors refrain from submitting just because the paper turned out to be wrong? Conferences are for discussion. So go there and discuss it... "My paper says XYZ, but since I wrote it I realised ABC" - sounds like a good talk to me?

(Naivety check: I am not an academic)

By @jimbokun - about 2 months
> Most researchers are to some extent “career researchers”, motivated by the power and prestige that rewards those who excel in the academic system, rather than idealistic pursuit of scientific truth.

This is the funny part. There is little to no power and prestige to be had in the academic system. To a first approximation no one outside academia cares.

I was just working as a staff programmer and taking grad courses with my tuition benefit, and found myself getting caught up in the mentality of needing a PhD to really be successful and valuable. Then got a job in industry making far more money and realized how academia is a small self contained world with status hierarchies irrelevant outside that small world.

By @lqet - about 2 months
> * A group of colluding authors writes and submits papers to the conference.

> * The colluders share, amongst themselves, the titles of each other's papers, violating the tenet of blind reviewing and creating a significant undisclosed conflict of interest.

> * The colluders hide conflicts of interest, then bid to review these papers, sometimes from duplicate accounts, in an attempt to be assigned to these papers as reviewers.

Is it that common that conference reviewers also submit papers to the conference? Wouldn't that alone already be a conflict of interest? (After all, you then have an interest in dismissing as many papers as possible to increase the likelihood of your own paper being accepted). And how do you create "duplicate accounts"? The conferences I have submitted to, and reviewed for, all had an invitation-like process for potential reviewers.

By @nicwilson - about 2 months
Hmmm, I wonder if you could turn this into a sport: one total-BS paper per group per year, shame on the reviewers/conference/journal if they don't catch it, and kudos to the submitters the more blatant it is.

Come to think of it, is there a "Journal of Academic Fraud"?

By @Peteragain - about 2 months
I agree with the analysis completely, but the solution is depressing. I keep thinking that publications on arxiv might be a better source of knowledge given the motivation for publishing there is not a contribution to career progression. Keyword search over arxiv papers? But perhaps we should bring back the idea of anonymous publishing:-0
By @wucke13 - about 2 months
Tangentially relevant: Gernot Heiser's list of benchmarking crimes.

https://gernot-heiser.org/benchmarking-crimes.html

By @jarbus - about 2 months
I don't have top-tier publications, and I haven't gotten any awards. I've seen people get awards for bullshit and farming prestigious publications. I only have one citation for work that is 100% mine. But every time I read something like this, I feel proud that I don't bullshit, and at least try to do real science. I truly believe in all my work and every sentence I write.

That being said, the insane emphasis on venue is what's pushing me out of academia. I can't compete with people like this.

By @vladms - about 2 months
Sometimes it is good to make parallels with other things to check if proposals make sense. How would "go and try to steal every car out there, so that car manufacturers improve their car security" sound?

For me it sounds counter-productive. I have a feeling that in recent decades many people have tried to focus on the negatives rather than the positives. Should we focus on the 3 amazing papers this year, cited by hundreds, that resulted in clear progress, or should we complain that 100 papers are useless? Let's focus on the 100 because "someone is wrong on the internet".

I did a PhD (so might have some experience), but papers are meant for dissemination. Everybody who wants papers to be "perfect"/"useful" is imagining a system that does lots of work for them. The system could be improved, but if anything (just throwing out an idea) maybe researchers should prove themselves doing research in industry, then come back to academia after 10 years (maybe with savings) so that they are more independent of "career progression". A lot of research was done (historically, >100 years ago) by rich people not constrained by a career.

By @userbinator - about 2 months
(2021)

> Together, we can force the community to reckon with its own shortcomings, and develop stronger, better, and more scientific norms. It is a harsh treatment, to be sure – a chemotherapy regimen that risks destroying us entirely. But this is our best shot at destroying the cancer that has infected our community.

And now, nearly 4 years later, research funding gets DOGE'd...

By @ein0p - about 2 months
What's funny is that if you're well-read in your field, all such bullshit is plainly apparent. You know what the good baseline is, for example, and you know why the author didn't choose it. You know the deficiencies of the benchmarks and see how they were exploited to juice the results. You know their approach is infeasible IRL and can clearly articulate why. Etc, etc. You folks aren't fooling anyone but fools. It's sort of like e.g. a lazy employee thinks their manager doesn't know they're slacking off like mad. As a former manager I can tell you - that is most certainly not the case, and there's no way to hide.
By @throw4847285 - about 2 months
Unfortunately, I think this article is not pessimistic enough. If it were true that academic fraud is important for establishing the boundaries of respectable scholarship in a field, then behavioral economics wouldn't exist anymore. And evolutionary psychology would be a fraction of its current size.

Trendiness trumps all notions of academic rigor, and as long as a field "feels like" it's on the cutting edge it can go pretty far before collapsing in on itself.

By @bowsamic - about 2 months
It's not an exaggeration to say I literally don't know a single person who doesn't engage in this kind of academic fraud to some degree
By @anjc - about 2 months
Overly pessimistic, and doesn't acknowledge that heads of steam only build behind promising findings, while deficient (or "fraudulent") work dies on the vine, published or not. In other words, the system tends to work.

Secondly, there are many ingredients required to successfully publish, communicate science, foster collaboration, etc., beyond technical brilliance. I'm sure we all know many technically brilliant people whose career never advanced because they lacked in some necessary area. People shouldn't be discouraged from improving in all areas because OP's delicate genius is offended by their technical ability.

Speaking of discouragement, it's a shame and a disgrace that you publicly called your colleague's work bullshit, including a first author that isn't yourself.

By @gumbojuice - about 2 months
For one of my first papers during PhD, I had a blatant bug in my implementation. Only realized while doing further research building on that previous paper.

I think in any field it's natural to start out naive, with an idea that may be just a few steps away from solving something important, only to realize somewhere in the middle that you're not, and then scramble with the ethical dilemma around your work: is it "good enough" or not? I was there anyway.

By @tonyg - about 2 months
> Undermining the credibility of computer science research is the best possible outcome for the field, since the institution in its current form does not deserve the credibility that it has.

Horseshit. This might be true for AI research (and even there that's an awfully broad brush you're using, mate), but it's certainly not true for other areas of computer science.

By @smolder - about 2 months
It's unfortunate that it's more work and less reward to shut down fraud than to perpetrate it. I think it's more of a problem than ever before, even though grifting has always been a thing.
By @jillesvangurp - about 2 months
The academic world is essentially the world's oldest social network. The way academic publishing works is through a convoluted reputation system of academics endorsing each other's work via various publications, giving each other a "like" by citing work, debating work in public at conferences, hiring each other's students and protégés, etc.

Fraud here basically means faking reputation. There are many ways to do this. And it's common because scientific work goes hand in hand with very generous funding, and money corrupts things. So attempts to fake reputation, plagiarize work, artificially boost relevance through disreputable citations, bribes, etc. are as old as scientific publishing itself.

There are a few interesting dynamics here that counter this: high quality publications will want to defend their reputation. E.g. Nature retracting an article tends to be scandalous. They do it to preserve their reputation. And it tends to be bad for the reputation of affected authors. Their reputation is based on them having very high standards and a long history of important people publishing important things that changed the world. Every time they publish something, that's the reputation that is at stake. So, they are strict. And they should be.

The problem is all the second and third rate researchers that make up most of the scientific community. We don't all get to have Einstein level reputations. And things are fiercely competitive at the bottom. And if you have no reputation, sacrificing it is a small price to pay. Also the prestigious publications are guarded by an elitist, highly political, in-crowd. We're talking big money here. And money corrupts. So, this works both ways.

With AI thrown in the mix, the academic world has to up its game. And the tools it is going to have to use are essentially the same used in other social networks. Bluesky, Twitter, etc. have exactly the same problem as scientific publishers; but at a much larger scale. They want to surface reputable stuff and filter out all the AI generated misinformation and it's an arms race.

One solution is using more AI or trying other clever tricks. A simpler solution is tying reputations to digital signatures. Scientific work is not anonymous. You literally stake your reputation by tying your (good) name to a publication and going on the record by "publishing" something. Digital signatures add strength to that in a way AIs can't fake or forge. Either you said it and signed it, or you didn't. Easy to verify. And either you are reputable, by having your signature associated with a lot of reputable publications, or you are not. Also easy to verify.
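The sign-then-verify mechanism can be sketched with a toy Lamport one-time signature, built from nothing but a hash function. This is purely illustrative (a real deployment would use a scheme like Ed25519, and a Lamport key must never sign more than one message):

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen():
    # Two random 32-byte secrets for each of the 256 bits of a SHA-256 digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]   # public key: hashes of the secrets
    return sk, pk

def bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret per digest bit of the message.
    return [sk[i][bit] for i, bit in enumerate(bits(msg))]

def verify(pk, msg, sig):
    # Each revealed secret must hash to the public-key entry for that bit.
    return all(H(s) == pk[i][bit] for i, (bit, s) in enumerate(zip(bits(msg), sig)))

sk, pk = keygen()
paper = b"manuscript bytes, authors, venue, date"
sig = sign(sk, paper)
assert verify(pk, paper, sig)                      # authentic publication
assert not verify(pk, paper + b" tampered", sig)   # any edit breaks the signature
```

Anyone holding the author's public key can check that a publication was really signed by them and hasn't been altered, which is the "easy to verify" property the comment relies on.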

If disreputable stuff gets flagged, you simply scrutinize all the authors and publications involved and let them sort out their reputations by taking appropriate actions (firing people, withdrawing articles, publicly apologizing, etc.). They'll all be eager to restore their reputations so that should be uncontroversial. Or they don't and lose their reputation.

Digital signatures are a severely underused tool currently. We've had access to those for half a century or so.

The challenge isn't technical but institutional. Lots of disreputable people and institutions are currently making a lot of money by operating in the shadows. The tools are there to fix this. But people don't seem to necessarily want to.