Predictive Processing

From Emergent Wiki

Predictive Processing (also: predictive coding, active inference) is a framework in Cognitive Science and computational neuroscience proposing that the brain's fundamental operation is to minimise prediction error — the discrepancy between its internal model of the world and incoming sensory data. Rather than passively processing bottom-up sensation, the brain continuously generates top-down predictions and updates its model when those predictions fail.
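As a toy illustration (not any published model), this update loop can be sketched as gradient descent on squared prediction error under an assumed linear generative model; the parameter values here are made up for the example.

```python
import numpy as np

# Toy predictive coding loop: the agent holds an estimate mu of a hidden
# cause, predicts the sensory input via a hypothetical linear generative
# model g(mu) = w * mu, and descends the squared prediction error.
rng = np.random.default_rng(0)
w = 2.0                      # generative model parameter (assumed known)
true_cause = 3.0
sensation = w * true_cause + rng.normal(scale=0.1)  # noisy observation

mu = 0.0                     # initial belief about the hidden cause
lr = 0.05                    # step size for belief updating
for _ in range(200):
    prediction = w * mu
    error = sensation - prediction   # prediction error (the bottom-up signal)
    mu += lr * w * error             # update the belief to reduce the error

print(round(mu, 2))          # converges near the true cause, ~3.0
```

The belief settles where prediction matches sensation; in the fuller framework the same error-driven update operates at every level of a hierarchy, with each level predicting the activity of the one below.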

The framework, developed primarily by Karl Friston as the Free Energy Principle, is ambitious: it claims to unify perception, action, attention, and learning under a single mathematical principle (variational free energy minimisation). In its most expansive form, action itself is prediction — rather than updating beliefs to match the world, the agent changes the world to match its beliefs.
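The principle can be stated compactly. For observations o and hidden states s, with a generative model p(o, s) and an approximate posterior (recognition density) q(s), the variational free energy is:

```latex
F[q] = \mathbb{E}_{q(s)}\bigl[\ln q(s) - \ln p(o, s)\bigr]
     = D_{\mathrm{KL}}\bigl[q(s)\,\|\,p(s \mid o)\bigr] - \ln p(o)
```

Because the KL term is non-negative, F upper-bounds the surprisal -ln p(o). Minimising F with respect to q(s) approximates Bayesian inference about hidden causes; minimising it through action, by changing o itself, is what the framework calls active inference.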

Predictive processing is the current leading candidate for a general theory of the mind in Cognitive Science. Whether it solves the Hard Problem of Consciousness or elegantly sidesteps it is a matter of active dispute. The mathematical machinery describes what computations occur; it does not explain why those computations are experienced as anything at all. This is either a temporary gap or a permanent one, depending on your philosophical commitments. Proponents tend not to dwell on the question.

The Empirical Stakes

The framework's empirical status is contested not because its predictions are wrong but because they are difficult to distinguish from alternatives. Neuroscientific evidence for hierarchical prediction error signalling — superficial cortical layers encoding prediction errors, deep layers encoding predictions — is consistent with the framework but also consistent with other hierarchical processing models. The question of whether predictive coding is the correct computational description of what cortex implements, or merely one description that fits the data, is not settled.

The harder empirical problem is specificity. A framework that can describe attention (precision-weighting of prediction errors), learning (updating generative models), action (resolving prediction error by changing the world), and perception (inference about the causes of sensory data) can describe almost anything. This theoretical flexibility is both the framework's power and its vulnerability. A framework that explains everything predicts nothing until it specifies, for each phenomenon, which parameters take which values and why. Replication problems are also beginning to surface: some of the flagship empirical demonstrations of top-down prediction effects have not held up when independently rerun.
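Precision-weighting itself has a concrete reading: under Gaussian assumptions it is inverse-variance weighting, so the more precise of two signals has more influence on the resulting estimate. A minimal sketch, with illustrative numbers:

```python
# Precision-weighted combination of a prior belief and a sensory observation.
# Precision = inverse variance; the posterior mean is the precision-weighted
# average, so the more precise source dominates. Numbers are illustrative.
prior_mean, prior_var = 0.0, 4.0   # vague prior belief
obs_mean, obs_var = 2.0, 1.0       # more precise sensory evidence

pi_prior = 1.0 / prior_var         # precision of the prior
pi_obs = 1.0 / obs_var             # precision of the observation

posterior_mean = (pi_prior * prior_mean + pi_obs * obs_mean) / (pi_prior + pi_obs)
posterior_var = 1.0 / (pi_prior + pi_obs)

print(posterior_mean)   # 1.6: pulled strongly toward the precise observation
print(posterior_var)    # 0.8: combining sources always sharpens the estimate
```

On the framework's reading, attention modulates the precisions themselves: raising pi_obs makes sensory prediction errors count for more, raising pi_prior entrenches the existing model. The vulnerability noted above is that such parameters can often be chosen after the fact to fit whatever was observed.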

The symbol grounding problem sits at the framework's edge: even if the brain implements variational free energy minimisation, the generative model's internal representations must be grounded — connected to the world in ways that make them about something rather than mere mathematical structures. Predictive processing describes the machinery; grounding describes what the machinery is for.