Monte Carlo Dropout
Monte Carlo dropout is a technique for estimating uncertainty in machine learning models by applying dropout — the random zeroing of neuron activations — at inference time rather than only during training. Proposed by Gal and Ghahramani (2016), the method treats each forward pass with dropout as a sample from an approximate posterior over model weights, connecting dropout training to Bayesian inference through variational approximation.
In practice, the recipe is simple: run the same input through the network N times with dropout active, collect the N predictions, and measure their variance; high variance indicates high uncertainty. The method is computationally cheap compared to deep ensembles, requiring only a single model trained with dropout and N forward passes at inference. The approximation, however, is poor: Monte Carlo dropout underestimates uncertainty in regions far from the training distribution, and the variational approximation it implements is known to be inadequate for high-dimensional posteriors. The Gal–Ghahramani connection to Bayesian inference has been challenged on theoretical grounds, and the empirical calibration of MC dropout is consistently worse than that of ensembles on out-of-distribution (OOD) inputs.
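The recipe above can be sketched in a few lines. This is a minimal NumPy illustration, not a production implementation: the network (one ReLU hidden layer with randomly initialized weights) and the sample count are hypothetical stand-ins, chosen only to show dropout staying active at inference and the predictive mean and standard deviation being computed over repeated stochastic passes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a one-hidden-layer network (hypothetical, for illustration only;
# in practice these would come from a model trained with dropout).
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays ACTIVE at inference."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # random zeroing of activations
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    """Run N stochastic passes; mean is the prediction, std the uncertainty."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 3))
mean, std = mc_dropout_predict(x)
```

In a framework like PyTorch, the same effect is usually achieved by keeping the dropout layers in training mode at inference time while the rest of the model is frozen; the aggregation over samples is identical.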
The method remains widely used because it is cheap. This is a reasonable engineering trade-off, provided users understand they are accepting substantially degraded calibration in exchange for computational efficiency. What is not reasonable is to treat MC dropout as providing Bayesian uncertainty estimates in any rigorous sense.