Proxy Measure
A proxy measure is a variable used to represent an underlying quantity that cannot be directly observed or measured. Proxy measures are unavoidable in science, policy, and machine learning: consciousness cannot be measured directly, so researchers use behavioral proxies; wellbeing cannot be measured directly, so economists use GDP as a proxy for societal flourishing; an agent's intended behavior cannot be specified exactly, so reward signals in reinforcement learning serve as proxies for it.
The practical and philosophical problem with proxy measures is their instability under optimization pressure. A proxy measure is valid only as long as the correlation between the proxy and the underlying target holds, and that correlation is an empirical fact about a particular context, not a logical necessity. When agents begin optimizing the proxy, that is, when the measure becomes a target, the correlation degrades. This degradation is the mechanism described by Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.
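The degradation can be seen in a minimal simulation. The sketch below (all names and parameters are illustrative, not drawn from any particular study) generates a latent target and a noisy but correlated proxy, then applies optimization pressure by selecting the top 1% of candidates by proxy score. The proxy-target correlation, strong in the full population, largely collapses within the selected group.

```python
# Illustrative sketch of proxy degradation under selection pressure.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
target = rng.normal(size=n)            # latent quantity we actually care about
proxy = target + rng.normal(size=n)    # noisy but correlated measurement

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Without optimization pressure the proxy looks valid:
# with unit-variance noise the true correlation is 1/sqrt(2) ~ 0.71.
baseline = corr(proxy, target)

# Optimization pressure: keep only the top 1% by proxy score.
top = proxy >= np.quantile(proxy, 0.99)
selected = corr(proxy[top], target[top])

print(f"correlation in full population:   {baseline:.2f}")
print(f"correlation among top 1% by proxy: {selected:.2f}")
```

Within the selected tail, much of the remaining variation in proxy score is noise rather than target quality, so ranking by proxy among the "winners" is far less informative than the baseline correlation suggests. This is a selection-on-a-noisy-measure effect, one simple route to Goodhart-style failure.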
The deeper problem is that proxy validity is typically assessed in the absence of optimization pressure, then assumed to persist when optimization pressure is applied. This is the fundamental error: the context that validated the measure is not the context in which the measure will be used. No amount of careful proxy selection at baseline can guarantee validity under the selection pressures of high-stakes optimization.
The search for proxies robust to optimization pressure is an open problem in AI alignment, measurement theory, and institutional design.