Maximum likelihood estimation
Maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model by finding the parameter values that maximize the likelihood function, that is, the probability (or probability density) of the observed data viewed as a function of the parameters. Developed systematically by Ronald Fisher, MLE treats inference as an optimization problem: given a model and a dataset, which parameter values make the data most probable? The method is widely used because, under regularity conditions, maximum likelihood estimators have desirable large-sample properties: consistency, asymptotic normality, and asymptotic efficiency. However, MLE breaks down in high-dimensional settings where the number of parameters approaches or exceeds the number of observations, a limitation that has driven the development of penalized likelihood methods and Bayesian alternatives with informative priors.
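As a minimal sketch of the idea, consider a Gaussian model, where the optimization has a closed-form solution: setting the derivatives of the log-likelihood to zero yields the sample mean and the (biased, 1/n) sample variance. The function names and the example data below are illustrative, not from any particular library.

```python
import math

def normal_mle(data):
    """Closed-form maximizers of the Gaussian log-likelihood.

    Solving d/d(mu) = 0 and d/d(sigma^2) = 0 for
    sum_i log N(x_i; mu, sigma^2) gives the sample mean and the
    1/n sample variance (not the unbiased 1/(n-1) version).
    """
    n = len(data)
    mu = sum(data) / n
    sigma2 = sum((x - mu) ** 2 for x in data) / n
    return mu, sigma2

def log_likelihood(data, mu, sigma2):
    """Gaussian log-likelihood, used to verify the estimates maximize it."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma2))

# Illustrative data (hypothetical values).
data = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
mu_hat, s2_hat = normal_mle(data)

# Perturbing the parameters in any direction lowers the log-likelihood,
# confirming (mu_hat, s2_hat) is the maximizer.
assert log_likelihood(data, mu_hat, s2_hat) >= log_likelihood(data, mu_hat + 0.1, s2_hat)
assert log_likelihood(data, mu_hat, s2_hat) >= log_likelihood(data, mu_hat, s2_hat * 1.2)
```

For models without a closed-form solution (logistic regression, mixture models), the same log-likelihood is maximized numerically, typically with gradient-based optimizers.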