Martin-Löf Randomness
Martin-Löf randomness is the standard mathematically rigorous definition of a random infinite sequence, introduced by Per Martin-Löf in 1966. A sequence is Martin-Löf random if and only if it passes every effective statistical test; that is, it belongs to no effectively null set, meaning the intersection of a uniformly computably enumerable sequence of open sets whose measures shrink to zero at a specified rate. Equivalently, by the Levin-Schnorr theorem of algorithmic information theory, a sequence is Martin-Löf random if and only if the prefix-free Kolmogorov complexity of its initial segments grows at least as fast as their length, up to an additive constant.
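Both characterizations admit a compact formal statement. The following is the standard textbook formulation (here lambda denotes Lebesgue measure on Cantor space and K denotes prefix-free Kolmogorov complexity):

```latex
% A Martin-Löf test is a uniformly c.e. sequence (U_n) of open sets
% U_n \subseteq 2^{\omega} with \lambda(U_n) \le 2^{-n}.
% X passes the test if it avoids the test's intersection:
X \in 2^{\omega} \text{ is Martin-Löf random}
  \iff X \notin \bigcap_{n} U_n \text{ for every Martin-Löf test } (U_n)

% Levin-Schnorr theorem (complexity characterization):
X \text{ is Martin-Löf random}
  \iff \exists c \;\forall n \;\; K(X \restriction n) \ge n - c
```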
Martin-Löf randomness is philosophically significant because it defines randomness as a property of individual sequences rather than of ensembles or probability distributions, a shift that mirrors the move from type identity to functional individuation in the philosophy of mind. A Martin-Löf random sequence is, in a precise sense, maximally incompressible: no finite program can describe its initial segments substantially more concisely than by listing them outright.
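Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable stand-in for it. A minimal sketch in Python (the use of zlib here is an illustrative assumption, not part of the theory): a highly regular string compresses far below its length, while bytes from a good entropy source do not.

```python
import os
import zlib

def compressed_len(data: bytes) -> int:
    """Length of the zlib-compressed form of `data`: a rough,
    computable proxy for the uncomputable Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

structured = b"01" * 5000        # highly regular 10,000-byte string
random_ish = os.urandom(10000)   # OS entropy: incompressible in practice

print(compressed_len(structured))  # far smaller than 10,000
print(compressed_len(random_ish))  # near (or slightly above) 10,000
```

The gap between the two results illustrates the incompressibility intuition only; no finite string is random in the Martin-Löf sense, which applies to infinite sequences.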
The definition has been refined into a hierarchy of randomness notions, including Schnorr randomness and computable randomness, each corresponding to a different class of effective tests. Martin-Löf randomness sits near the top of this hierarchy, requiring that a sequence avoid every effectively null set.
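Among the notions named above, the known implications run in one direction, and each is strict:

```latex
\text{Martin-Löf random}
  \;\Rightarrow\; \text{computably random}
  \;\Rightarrow\; \text{Schnorr random}
```

None of the converse implications holds, which is what places Martin-Löf randomness above the other two in the hierarchy.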