The Median Researcher Problem
The "Median Researcher Problem" highlights how median researchers influence scientific discourse, allowing flawed practices to persist, as seen in the replication crisis, while smaller, intelligent communities may remain unrecognized.
The "Median Researcher Problem" posits that the spread of memetic ideas in scientific fields is driven primarily by median researchers rather than the most competent ones. If the majority of researchers in a field lack a strong grasp of critical methodologies such as statistics, flawed practices can proliferate despite the awareness of more skilled researchers. The replication crisis serves as a key example: warnings about poor statistical practices went largely unheeded, indicating that median researchers' perspectives significantly shape a field's discourse. The implications are notable: a small, intelligent research community can outperform a whole field thanks to better internal selection pressures, yet its contributions may go unrecognized by the broader academic community. LessWrong is highlighted as such a community, characterized by high intelligence and rigorous norms, which may produce innovative ideas that mainstream research does not acknowledge. The discussion also touches on the dynamics of research communities, the role of lab leaders, and the influence of external factors like politics and funding on scientific discourse.
- The median researcher significantly influences the spread of ideas in scientific fields.
- Poor statistical practices can persist due to the lack of understanding among median researchers.
- Smaller, intelligent research communities can outperform larger fields but may not gain recognition.
- The replication crisis exemplifies the challenges posed by the median researcher problem.
- External factors, including politics and funding, also impact the quality and perception of research.
Related
Why Most Published Research Findings Are False
The article discusses the high prevalence of false research findings, influenced by biases, study power, and effect sizes, urging a critical evaluation of claims and caution against sole reliance on p-values.
AI Winter Is Coming
The AI ecosystem shows a disparity between producers and promoters, with academia criticized for superficial research and industry withholding valuable findings, potentially leading to another downturn in AI development.
Irreproducible Results
The article highlights declining reproducibility in scientific experiments, particularly in biological sciences, due to biases favoring positive results. Experts recommend open-source databases to document all experimental outcomes for improved reliability.
To what extent is science a strong-link problem?
A recent case of scientific misconduct involving a US researcher raises concerns about integrity in high-impact journals, emphasizing the need for interdisciplinary engagement and proactive promotion of overlooked scientific work.
Misinformation Does Spread Like a Virus, Epidemiology Shows
Recent research shows misinformation spreads like viruses, with mathematical models predicting its dynamics. Interventions like psychological inoculation can reduce its spread, highlighting the need for effective countermeasures.
Oooooh, now I see what's going on here. The good ol' "the reason no one listens to us is that we're too smart for them".
Unfortunately, the fact that some smart people were ignored by their peers doesn't mean that "being ignored" suddenly becomes evidence that your thoughts are "beyond the median". It could also be that they're just not that groundbreaking.
(Disclaimer: I've only read a few LessWrong articles over the years and I don't have a strong opinion of their community. I'm mostly basing this comment on just this post.)
Citation definitely needed.
Also, the problem of bad research isn't an IQ problem. The corporate university model creates terrible incentives. The hard sciences have the problem too, but metrics gaming does less damage there because it's harder to get away with publishing outright wrong answers.
The reasons there’s more shitty research in “soft” fields are not a problem with the IQs of researchers but:
* more bikeshedding at all levels, from creators, peers, and the public. High-IQ people can be horrific bikeshedders, and tend to be just as oppressive in their mediocrity when they wander into territory they know nothing about.
* a lack of external options for hangers-on. Mediocre CS researchers can easily get jobs at FAANG and earn 5x more than the actually good ones who stay in academia, so the non-serious people get pulled away. That doesn't happen nearly as much in the social sciences.
They are highly intelligent and skilled in the sense that they can progress their career through complex political moves within funding agencies, journal editorial boards, conference organization, and university departments. Proper statistical analysis and experimental design are absent because it's a nuisance in the way of success, not due to lack of understanding or low intelligence. There's still room for rigorous scientists to succeed, but it's becoming untenable for many to stay.
On one hand, the author doesn't seem up to date with how standards have changed at flagship journals post-replication crisis. There are new editorial teams, new standards, and a new culture of avoiding past mistakes. Does this mean things are perfect? No, but that's true of any human endeavor.
The author also doesn't define "memeticity" but implies that it is bad. However, given that science is a slow-moving conversation in which outdated ideas are jettisoned while current ideas are explored and investigated, some amount of "memeticity" is to be expected. It seems like the author's real issue is that some papers aren't ambitious enough?
At the same time, the author is also saying that the median person in any endeavor is not as skilled as the most skilled people. This is true by definition. And a small, dedicated team of high-performing people, with the right leadership and clarity of vision, can indeed have an outsized impact. That's what most of us know, or at least suspect, in startup land.
- The author acknowledges that they don't provide good evidence for their claim, relying instead on intuition. I mean... come on, man. I don't think the claim is actually true!
- The idea that median researchers are not intelligent enough to understand p-hacking is just absurd: it is not a sophisticated topic (see the sketch after this list). I imagine the median researcher in fact has a robust and cynical understanding of p-hacking, because they can do it to their own data. Such a researcher may be cowardly and dishonest, but their intelligence is not the problem. This is the crux of my disagreement with the post: the replication crisis is a social problem, not a cognitive problem.
- They badly misstated the results of that IQ study, ignoring outliers like philosophy and economics, which have poor reproducibility. The correlation between a major's average IQ and its replicability is better understood as indicating which undergrads go on to academia, versus fields like biology and psychology where most students plan to enter the workforce after college. Replicability is incidental. (They also ignored that the study itself is probably not replicable! I believe the root cause of the replication crisis is motivated reasoning and laziness, both of which are certainly on display here.)
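To underline how unsophisticated p-hacking really is, here's a minimal sketch, entirely my own illustration and not from the post or the study: standard-library Python, with a hypothetical helper `welch_p_value`. Test enough subgroups of pure noise and some comparisons clear p < 0.05 by chance alone.

```python
# Minimal p-hacking sketch: run many comparisons on pure noise and
# a few will look "significant" purely by chance.
import math
import random
import statistics

def welch_p_value(a, b):
    """Two-sided p-value for a two-sample Welch test, using a normal
    approximation to the t distribution (fine at these sample sizes)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
hits = 0
for subgroup in range(100):  # 100 "subgroup analyses", no real effect anywhere
    treatment = [random.gauss(0, 1) for _ in range(30)]
    control = [random.gauss(0, 1) for _ in range(30)]
    p = welch_p_value(treatment, control)
    if p < 0.05:
        hits += 1
        print(f"subgroup {subgroup}: p = {p:.3f}  <- 'publishable'")

# At a 5% threshold, expect on the order of 5 spurious hits out of 100.
print(f"{hits} of 100 null comparisons came out 'significant'")
```

At a 5% threshold, roughly five of a hundred null comparisons will look "significant"; report only those and you've p-hacked. Nothing about this requires unusual intelligence to grasp.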
In general this post is the combination of undeserved arrogance and jaw-dropping ignorance that I expect from LessWrong. It is a community for narcissistic blowhards.
> A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.
Sure, that's true.
> In particular, LessWrong sure seems like such a community.
HOLD ON THERE. The first thing you said: easily true. The second thing: show your proof!
Here it is:
> We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, we have fuzzier community norms explicitly intended to avoid various forms of predictable stupidity, and we definitely have our own internal meme population.
Uh, let's rework that into a more plausible, pliable form, a "steelman" if you will.
> We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, and fuzzier community norms explicitly intended to avoid various forms of predictable stupidity.
Line by line:
> We have a user base with probably-unusually-high intelligence
What, for like... all fields? This needs to be compared with specific fields, and it needs to be shown, in a less hand-wavy way, how LessWrong scores meaningfully better than academic researchers with more subject-matter expertise.
> community norms which require basically everyone to be familiar with statistics and economics
The problem is right there in the text you cited: memeticity [on LessWrong] is mostly determined not by the most competent researchers but by roughly-median researchers. Also, "familiar" != "learned well".
> fuzzier community norms explicitly intended to avoid various forms of predictable stupidity
I award zero points for this, especially when compared to an academic community with similar (plausibly better) training and a better understanding of the pitfalls of data collection in its field.