Bayesian Inference
Bayesian inference is the process of updating probability estimates in light of new evidence, using Bayes' theorem as the normative rule for rational belief revision. Where classical inference asks "Is this hypothesis supported by the data?", Bayesian inference asks "How much should I update my belief in this hypothesis given the data?", a subtly different question with substantially different implications.
The central operation is conditionalization: multiplying the prior probability P(H) by the likelihood P(E|H), then normalizing by the marginal probability of the evidence, so that P(H|E) = P(E|H)P(H) / P(E). The result is the posterior P(H|E), which becomes the prior for the next round of inference. Learning, on this account, is a recursive process of updating a model of the world as evidence arrives.
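The update-then-recurse cycle described above can be sketched for a binary hypothesis. The diagnostic-test numbers here (1% prevalence, 95% sensitivity, 5% false-positive rate) are illustrative assumptions, not figures from the text:

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E): multiply prior by likelihood, then normalize."""
    unnorm_h = prior * likelihood_h                  # P(H) * P(E|H)
    unnorm_not_h = (1 - prior) * likelihood_not_h    # P(~H) * P(E|~H)
    return unnorm_h / (unnorm_h + unnorm_not_h)

# Hypothetical test: 1% prior, 95% sensitivity, 5% false-positive rate.
posterior = bayes_update(0.01, 0.95, 0.05)
print(round(posterior, 3))   # → 0.161 after one positive result

# The posterior becomes the prior for the next piece of evidence:
posterior2 = bayes_update(posterior, 0.95, 0.05)
print(round(posterior2, 3))  # → 0.785 after a second positive result
```

Note how a single positive result moves belief only from 1% to about 16%, because the low prior dominates; the second result, starting from the updated prior, is far more decisive.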
Bayesian inference is used across machine learning, cognitive science, cosmology, and clinical medicine. Its practical limitation is computational: exact Bayesian inference over complex model spaces is often intractable, requiring approximations such as Markov chain Monte Carlo methods or variational inference.
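One of the approximation families mentioned above, Markov chain Monte Carlo, can be illustrated with a minimal Metropolis sampler. The target is an assumed toy posterior (not an example from the text): the bias of a coin after observing 7 heads in 10 flips under a uniform prior, which the sampler only needs up to a normalizing constant:

```python
import random

def unnorm_posterior(theta):
    """Likelihood * flat prior for 7 heads in 10 flips; unnormalized."""
    if not 0.0 < theta < 1.0:
        return 0.0
    return theta ** 7 * (1 - theta) ** 3

def metropolis(n_samples, step=0.1, seed=0):
    """Random-walk Metropolis: propose, accept with the density ratio."""
    rng = random.Random(seed)
    theta = 0.5
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0, step)
        ratio = unnorm_posterior(proposal) / max(unnorm_posterior(theta), 1e-300)
        if rng.random() < ratio:  # accept with probability min(1, ratio)
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(20000)
mean = sum(samples[2000:]) / len(samples[2000:])  # discard burn-in
print(mean)  # should be near the analytic posterior mean 8/12 ≈ 0.667
```

Because the acceptance rule depends only on a ratio of densities, the intractable normalizing constant P(E) cancels out, which is exactly why MCMC is useful when exact inference is out of reach.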
The relationship between Bayesian inference and frequentist statistics is one of the foundational methodological disputes in the philosophy of science.