Publication Bias
Publication bias is the systematic distortion of scientific knowledge that occurs when the probability of a study being published depends on the nature of its results — specifically, on whether those results are statistically significant, surprising, or aligned with prevailing theoretical commitments. It is not a failure of individual integrity but a structural property of the scientific communication system. Like feedback loops that amplify noise, publication bias turns the peer review and journal system into a selective amplifier, magnifying certain signals while suppressing others until the literature no longer represents the underlying distribution of evidence.
The mechanism is simple and its consequences are profound. Studies with positive, significant results are more likely to be submitted, more likely to be accepted, and more likely to be cited. Studies with null results, replication failures, or ambiguous findings languish in file drawers, are rejected by journals that prioritize novelty, or are simply never written because researchers correctly anticipate that unpromising results will not advance their careers. The result is a literature that is systematically skewed toward false positives, overestimates of effect sizes, and irreproducible findings. The replication crisis in psychology, medicine, and social science is not merely a crisis of method. It is a crisis of infrastructure.
Mechanisms of Distortion
The file drawer problem, named by Robert Rosenthal in 1979, describes the mass of unpublished studies with null results that never enter the literature. Rosenthal warned that, in the extreme, journals could be filled with the 5% of studies that represent Type I errors while the file drawers hold the 95% that found nothing, and he proposed the fail-safe N (the number of unpublished null studies it would take to overturn a published finding) as a way to gauge the threat. The file drawer is not a metaphor. It is a literal archive of disappeared evidence, and its contents are invisible to meta-analysis, systematic review, and evidence-based practice — all of which depend on the published literature being representative.
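Rosenthal's fail-safe N can be computed directly. Combining k studies by Stouffer's method gives Z = Σz_i / √k, and the fail-safe N is the number x of averaged-null studies that drags the combined Z down to the one-tailed critical value: x = (Σz_i / z_crit)² − k. A minimal sketch, with hypothetical z-scores chosen for illustration:

```python
# Rosenthal's (1979) fail-safe N: how many unpublished null-result
# studies would be needed to pull a combined result below significance.
# Combined evidence uses Stouffer's method: Z = sum(z_i) / sqrt(k).

def fail_safe_n(z_scores, z_crit=1.645):
    """Number of averaged-null studies x solving
    sum(z_scores) / sqrt(k + x) = z_crit (one-tailed p = .05)."""
    k = len(z_scores)
    total = sum(z_scores)
    x = (total / z_crit) ** 2 - k
    return max(0.0, x)

# Hypothetical example: five published studies, each with z = 2.3
# (one-tailed p ~ .01).
z = [2.3] * 5
print(fail_safe_n(z))  # ~ 43.9 hidden null studies would be required
```

A large fail-safe N suggests a finding is robust to the file drawer; a small one means a modest stack of unpublished nulls would erase it.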
Selective reporting occurs when researchers analyze multiple outcomes or test multiple hypotheses but report only those that achieve statistical significance. This is not necessarily fraud. It is often the result of pressure to produce clean narratives, combined with the incentive structures of academic journals that reward simplicity and surprise over methodological transparency. Pre-registration of study protocols — committing to a specific analysis plan before conducting the study — was designed to combat selective reporting, but adoption remains uneven and enforcement is weak.
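The arithmetic behind selective reporting is straightforward: with m independent outcomes and no true effects, the chance that at least one clears p < .05 is 1 − 0.95^m, roughly 40% at m = 10. A minimal simulation under those assumptions (pure-noise samples, two-sided z-tests with known variance):

```python
# Selective reporting: with m independent null outcomes, the chance
# that at least one reaches p < .05 is 1 - 0.95**m (~0.40 at m = 10).
import math
import numpy as np

rng = np.random.default_rng(0)

def null_pvalue(n=30):
    """Two-sided z-test p-value comparing two pure-noise samples
    (known variance 1, so z = mean difference / sqrt(2/n))."""
    a, b = rng.normal(size=n), rng.normal(size=n)
    z = (a.mean() - b.mean()) / math.sqrt(2.0 / n)
    return math.erfc(abs(z) / math.sqrt(2.0))

def any_significant(m, alpha=0.05):
    """Did any of m independent null tests come out 'significant'?"""
    return any(null_pvalue() < alpha for _ in range(m))

trials = 2000
hit_rate = sum(any_significant(10) for _ in range(trials)) / trials
print(round(hit_rate, 3), round(1 - 0.95**10, 3))  # both near 0.40
```

Reporting only the significant outcome and omitting the other nine converts that 40% chance into a clean, publishable "finding."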
P-hacking is the practice of adjusting analyses, sample sizes, or inclusion criteria until a statistically significant result is obtained. It exploits the fact that p-values are random variables, and that with enough researcher degrees of freedom, significance is inevitable. When combined with publication bias, p-hacking becomes a rational strategy: researchers who p-hack are more likely to obtain publishable results, more likely to be promoted, and more likely to secure funding. The system selects for statistical creativity, not epistemic rigor.
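One common p-hacking move is optional stopping: test after each batch of new observations and stop collecting as soon as p dips below .05. A sketch under simplifying assumptions (one-sample z-test on pure noise, known variance, peeking every 10 observations) showing how far the false-positive rate drifts above the nominal 5%:

```python
# Optional stopping ("peeking") as a p-hacking strategy: re-test after
# every batch of new data and stop the moment p < .05. Under the null,
# repeated looks inflate the false-positive rate well beyond 5%.
import math
import numpy as np

rng = np.random.default_rng(1)

def peeking_trial(start=20, step=10, max_n=200, alpha=0.05):
    """One-sample z-test on pure noise (true mean 0, sigma = 1),
    re-run after each batch of `step` new observations."""
    data = list(rng.normal(size=start))
    while len(data) <= max_n:
        z = np.mean(data) * math.sqrt(len(data))
        p = math.erfc(abs(z) / math.sqrt(2.0))
        if p < alpha:
            return True  # "significant" despite no true effect
        data.extend(rng.normal(size=step))
    return False

trials = 2000
rate = sum(peeking_trial() for _ in range(trials)) / trials
print(round(rate, 3))  # well above the nominal 0.05
```

Each look is a fresh draw from the p-value's sampling distribution, so the researcher is effectively running the test many times and keeping the best one.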
Systemic Consequences
Publication bias does not merely produce a literature full of false positives. It corrupts the higher-order processes that depend on that literature. Meta-analyses, which aggregate published studies to estimate effect sizes, are biased toward overestimation when the input literature is itself biased. Evidence-based medicine, which relies on systematic reviews of randomized trials, inherits and amplifies publication bias at the point of clinical decision-making. The result is a pyramid of evidence built on a foundation of invisible null results.
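The overestimation is easy to demonstrate. A sketch under illustrative assumptions (a modest true effect, small underpowered studies, and a literature that "publishes" only significant results):

```python
# Publication filtering inflates meta-analytic estimates: simulate
# many small studies of a modest true effect (d = 0.2), keep only
# those reaching p < .05, and compare the pooled estimate with truth.
import math
import numpy as np

rng = np.random.default_rng(2)
true_d, n = 0.2, 30  # small true effect, small per-group samples

published = []
for _ in range(5000):
    a = rng.normal(true_d, 1.0, size=n)   # treatment group
    b = rng.normal(0.0, 1.0, size=n)      # control group
    diff = a.mean() - b.mean()
    z = diff / math.sqrt(2.0 / n)
    p = math.erfc(abs(z) / math.sqrt(2.0))
    if p < 0.05:
        published.append(diff)

print(len(published) / 5000)   # low power: only a minority "publish"
print(np.mean(published))      # pooled estimate far above 0.2 (~3x here)
```

Only studies that drew unusually large sample effects cross the significance threshold, so averaging the survivors recovers not the true effect but its most flattering tail.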
The systemic character of the problem means that individual solutions — demanding larger sample sizes, mandating pre-registration, encouraging replication — are necessary but insufficient. What is required is a change in the incentive architecture of science: journals that publish null results, funding agencies that reward replication and methodological innovation, and promotion committees that value transparency over novelty. Without these structural changes, the feedback loop will continue to amplify distorted signals.
See also: Peer Review, Evidence-Based Medicine, Replication Crisis, Feedback, File Drawer Problem, P-Hacking, Meta-Analysis, Registered Reports
The scientific community treats publication bias as a regrettable deviation from ideal practice, correctable by better individual behavior. This is a fundamental misdiagnosis. Publication bias is not a deviation. It is the predictable output of a system in which journals compete for attention, researchers compete for positions, and funding agencies compete for impact metrics. The system is working exactly as designed. The design is the problem.