Replication Crisis
The replication crisis is the ongoing methodological failure in several scientific disciplines — most acutely social psychology, medicine, and nutrition science — in which a substantial fraction of published findings cannot be reproduced by independent researchers. The crisis became widely recognized after the Open Science Collaboration's 2015 project failed to replicate roughly 60% of the published psychology results it attempted, and after the discovery that many high-profile findings in cognitive science and behavioral economics did not survive independent replication attempts.
The crisis has multiple causes: publication bias (journals preferentially accept positive results), p-hacking (flexible analysis choices that inflate the false-positive rate), underpowered studies (sample sizes too small to detect plausible effects reliably), and the misinterpretation of p-values as the probability that a hypothesis is true rather than the probability of data at least as extreme under the null hypothesis. The interaction of these pressures with career incentives — where publishing is rewarded regardless of truth — creates a systematic bias in the published record.
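How flexible analysis choices inflate false positives can be shown with a small simulation. This is a minimal sketch of one form of p-hacking — measuring several outcomes under the null and reporting only the best p-value — and every specific choice (four outcomes, fifty subjects, a normal approximation to the t-test) is an illustrative assumption:

```python
import math
import random
import statistics

def t_test_p(sample):
    """Two-sided p-value for mean == 0, using a normal approximation
    (adequate at the sample sizes used below)."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = abs(statistics.fmean(sample)) / se
    return 2 * (1 - statistics.NormalDist().cdf(z))

def one_study(rng, n=50, n_outcomes=4):
    """A null study (no real effect) that measures several outcomes
    and keeps only the smallest p-value."""
    return min(
        t_test_p([rng.gauss(0, 1) for _ in range(n)])
        for _ in range(n_outcomes)
    )

rng = random.Random(1)
trials = 2000
honest = sum(one_study(rng, n_outcomes=1) < 0.05 for _ in range(trials)) / trials
hacked = sum(one_study(rng) < 0.05 for _ in range(trials)) / trials
print(f"false-positive rate, one pre-specified outcome: {honest:.3f}")
print(f"false-positive rate, best of four outcomes:     {hacked:.3f}")
```

The single-outcome rate stays near the nominal 5%, while taking the best of four independent outcomes pushes it toward 1 − 0.95⁴ ≈ 19% — the same nominal threshold, a quadrupled error rate, and no fraud anywhere in the process.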
Proposed remedies include pre-registration of hypotheses and analysis plans, higher statistical thresholds, mandatory replication before publication of major findings, and a broader shift toward Bayesian methods that require explicit prior specification. None of these remedies has yet been widely adopted, and each faces institutional resistance from those whose published results would not survive stricter standards.
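The case for stricter thresholds follows from the standard positive-predictive-value calculation: what fraction of "significant" findings are true, given how often tested hypotheses are true in the first place? The sketch below uses invented illustrative numbers (a 1-in-10 prior, 50% power), not estimates for any actual field:

```python
def ppv(prior, power, alpha):
    """Positive predictive value: the fraction of statistically
    significant findings that are actually true, given the prior
    probability a tested hypothesis is true, the power to detect
    it, and the significance threshold alpha."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions: 1 in 10 tested hypotheses is true, 50% power.
for alpha in (0.05, 0.005):
    print(f"alpha = {alpha}: PPV = {ppv(prior=0.10, power=0.50, alpha=alpha):.2f}")
```

Under these assumptions, tightening alpha from 0.05 to 0.005 raises the PPV from about 0.53 to about 0.92 — which is why a stricter threshold changes the composition of the published record even though it changes no individual study.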
The replication crisis is not a peripheral anomaly. It is evidence about the scientific method itself — specifically, about what happens when the method's incentive structure decouples from its epistemic goals.
The Systemic View: Institutions as Evolutionary Systems
The replication crisis resists the remedies currently proposed — pre-registration, stricter statistical thresholds, mandatory replication — not because these remedies are wrong but because they misidentify the system that needs to change. The proposals target individual researcher behavior; the problem is institutional selection pressure.
Scientific institutions — journals, universities, grant agencies — are coevolving systems with their own fitness criteria. A journal survives and gains prestige by publishing results that attract citations; a researcher survives by publishing in high-prestige journals; a grant agency succeeds by funding researchers who publish in high-prestige journals. These selection pressures are mutually reinforcing and have nothing to do with the truth of published findings. The system selects for publication, not for truth.
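The claim that such a system selects for publication rather than truth can be made concrete with a toy evolutionary simulation, in the spirit of Smaldino and McElreath's "natural selection of bad science" model. Every parameter below (base rate of true hypotheses, power, the rigor-to-alpha mapping, the survival rule) is an invented illustrative assumption:

```python
import random
import statistics

def positives(rigor, rng, n_hypotheses=20, base_rate=0.1, power=0.8):
    """Publishable positive results one lab produces in a cycle.
    Lower rigor means a higher false-positive rate; the count
    rewards positives whether or not they are true."""
    alpha = 0.05 + 0.45 * (1 - rigor)          # sloppiest labs reach alpha ~ 0.5
    count = 0
    for _ in range(n_hypotheses):
        is_true = rng.random() < base_rate
        if rng.random() < (power if is_true else alpha):
            count += 1
    return count

rng = random.Random(0)
labs = [rng.random() for _ in range(200)]      # each lab's rigor in [0, 1]
start = statistics.fmean(labs)

for _ in range(30):
    ranked = sorted(labs, key=lambda r: positives(r, rng), reverse=True)
    survivors = ranked[:100]                   # top half by publication count survive
    offspring = [min(1.0, max(0.0, r + rng.gauss(0, 0.05))) for r in survivors]
    labs = survivors + offspring               # reproduce with small mutation

end = statistics.fmean(labs)
print(f"mean rigor: {start:.2f} -> {end:.2f}")
```

No lab in this model intends to cut corners; rigor collapses anyway, because the survival criterion counts positives and sloppy methods produce more of them. That is the sense in which the system selects for publication, not for truth.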
From a systems-theoretic perspective (specifically in the autopoietic tradition developed by Luhmann), the scientific system distinguishes 'true' from 'false' communications — but the distinction is made by the system's own operations, not by correspondence to an external reality. The institutional system of science has developed its own operationally closed logic: the distinction it actually applies is 'publishable' versus 'unpublishable,' not 'true' versus 'false.' The replication crisis is the moment when the divergence between these two distinctions becomes undeniable.
The lesson from evolutionary biology is instructive: when a population is under sustained selection pressure in a particular direction, individual-level counterpressures (asking individual organisms to behave against their fitness interests) do not change the trajectory. Changing the trajectory requires changing the selection environment — what Odling-Smee and Laland call niche construction. To repair the replication crisis, scientific institutions need to restructure their fitness landscape: reward replication, fund null results, break the citation-prestige coupling. Individual pre-registration within an unchanged institutional ecology is drift against a strong selective wind.
The replication crisis is not a failure of scientists — it is a successful adaptation of scientists to their actual selection environment. Blaming the scientists rather than the institutions is the same category error as blaming organisms for being fit.