Causal Inference
Causal inference is the problem of determining the effect of interventions — not merely predicting what will happen under the existing distribution of conditions, but predicting what would happen if you changed something. The distinction between correlation and causation is not philosophical pedantry; it is the difference between a model that can inform action and one that cannot.
The foundational framework is the potential outcomes model (Rubin causal model): for each unit and each possible intervention, there is a potential outcome. The causal effect of an intervention is the difference between the potential outcome under that intervention and the potential outcome under no intervention. The fundamental problem of causal inference is that only one potential outcome is ever observed — you cannot simultaneously treat and not treat the same patient. Causal claims are therefore always about counterfactuals that cannot be directly observed.
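The fundamental problem can be made concrete in a small simulation. Everything below is a hypothetical data-generating process, not a real dataset: we invent both potential outcomes for every unit (something nature never grants), then show that randomized assignment, which reveals only one outcome per unit, still recovers the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes for each unit: y0 (untreated), y1 (treated).
# In reality only one of the two is ever observed for any given unit.
y0 = rng.normal(10.0, 2.0, n)
y1 = y0 + 3.0 + rng.normal(0.0, 1.0, n)   # individual effects average 3.0

# Randomized assignment reveals exactly one potential outcome per unit.
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

true_ate = (y1 - y0).mean()               # knowable only inside a simulation
estimated_ate = observed[treated].mean() - observed[~treated].mean()
print(round(true_ate, 2), round(estimated_ate, 2))
```

Randomization is doing the real work here: because assignment is independent of the potential outcomes, the difference of observed group means is an unbiased estimate of the average of the unobservable individual differences.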
Machine learning learns correlations from observational data. Correlations are not causal effects. A model trained on historical data will correctly predict that ice cream sales and drowning rates are correlated, without having any information about whether ice cream causes drowning (it does not; both correlate with summer). Interventions deployed on the basis of a correlational model can actively harm outcomes when the correlation is confounded. Many of the failures of data-driven decision-making in medicine, criminal justice, and social policy trace to this confusion.
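The ice cream example can be simulated directly. The data-generating process below is invented for illustration: "summer" drives both variables and ice cream has zero causal effect on drowning, yet the marginal correlation is strong, and it disappears once the confounder is held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical generating process: summer confounds both variables;
# ice cream sales have no causal effect on drownings.
summer = rng.random(n) < 0.25
ice_cream = 50.0 + 80.0 * summer + rng.normal(0.0, 10.0, n)
drownings = 2.0 + 6.0 * summer + rng.normal(0.0, 1.0, n)

# Strong marginal correlation despite a causal effect of exactly zero.
r_all = np.corrcoef(ice_cream, drownings)[0, 1]

# Conditioning on the confounder makes the correlation vanish.
r_summer = np.corrcoef(ice_cream[summer], drownings[summer])[0, 1]
r_winter = np.corrcoef(ice_cream[~summer], drownings[~summer])[0, 1]
print(round(r_all, 2), round(r_summer, 2), round(r_winter, 2))
```

A predictive model trained on this data would happily use ice cream sales to forecast drownings, and would be right to; the failure appears only when someone intervenes on sales expecting drownings to move.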
The tools of causal inference address this gap. Randomized controlled trials make treatment assignment independent of potential outcomes by design; quasi-experimental methods (instrumental variables, regression discontinuity, difference-in-differences) aim to recover causal effects from observational data, where that independence cannot be assumed. Each rests on assumptions that cannot be verified from the data alone; they must be defended on domain grounds. Judea Pearl's do-calculus provides a formal framework for reasoning about interventions given a causal graph. The field remains contested at its foundations, but the necessity of going beyond correlational statistics for decision-relevant claims is not.
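One of the simplest graph-based recipes is backdoor adjustment: if domain knowledge says a variable Z blocks all confounding paths between treatment and outcome, stratify on Z and average. The sketch below uses an invented observational setup with a known true effect of 2.0, so the naive and adjusted estimates can be compared; the assumption that Z is the only confounder is exactly the kind of claim that must be defended on domain grounds, not read off the data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical observational setup: binary confounder z raises both the
# probability of treatment and the outcome; true treatment effect is 2.0.
z = rng.random(n) < 0.5
t = rng.random(n) < np.where(z, 0.8, 0.2)    # confounded assignment
y = 1.0 + 2.0 * t + 5.0 * z + rng.normal(0.0, 1.0, n)

# Naive difference of means is biased upward by the confounder.
naive = y[t].mean() - y[~t].mean()

# Backdoor adjustment: effect within each stratum of z, averaged over
# the marginal distribution of z.
adjusted = sum(
    (y[t & (z == s)].mean() - y[~t & (z == s)].mean()) * (z == s).mean()
    for s in (True, False)
)
print(round(naive, 2), round(adjusted, 2))
```

The naive estimate lands near 5.0 while the adjusted one lands near the true 2.0, but the correction is only as good as the assumed graph: adjust on the wrong variable (a collider, say) and stratification introduces bias rather than removing it.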