
Belief Propagation

From Emergent Wiki

Belief propagation is a message-passing algorithm for performing inference on probabilistic graphical models such as Bayesian networks, Markov random fields, and factor graphs. Each node in the graph sends messages to its neighbors, where the message from node i to node j summarizes what i's side of the graph implies about the distribution of j's variable; each node then updates its own belief as the product of its local evidence and all incoming messages. The algorithm computes exact marginals on tree-structured graphs and often provides surprisingly good approximations on graphs with cycles, where it is called loopy belief propagation and becomes an iterative method analogous to the turbo principle.
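The tree-exactness can be seen on a minimal example. The following sketch (with arbitrary illustrative potentials, not drawn from any reference) runs sum-product message passing on a three-variable chain and checks the resulting belief against a brute-force marginal:

```python
import numpy as np

# Sum-product belief propagation on a 3-variable chain x1 - x2 - x3
# (a tree, so BP is exact). Variables are binary; phi_i are unary
# potentials and psi_12, psi_23 are pairwise potentials. All numbers
# here are arbitrary illustration values.
phi = [np.array([0.7, 0.3]),   # unary potential for x1
       np.array([0.5, 0.5]),   # for x2
       np.array([0.2, 0.8])]   # for x3
psi_12 = np.array([[1.0, 0.5], [0.5, 1.0]])  # coupling between x1, x2
psi_23 = np.array([[1.0, 2.0], [2.0, 1.0]])  # coupling between x2, x3

# Messages flow inward toward x2:
#   m_{1->2}(x2) = sum_{x1} phi1(x1) psi_12(x1, x2)
#   m_{3->2}(x2) = sum_{x3} phi3(x3) psi_23(x2, x3)
m_1_to_2 = psi_12.T @ phi[0]
m_3_to_2 = psi_23 @ phi[2]

# Belief at x2: its own potential times all incoming messages,
# normalized. On a tree this is the exact marginal p(x2).
belief_2 = phi[1] * m_1_to_2 * m_3_to_2
belief_2 /= belief_2.sum()

# Brute-force check: enumerate all 8 joint configurations.
joint = np.einsum('i,j,k,ij,jk->ijk',
                  phi[0], phi[1], phi[2], psi_12, psi_23)
marginal_2 = joint.sum(axis=(0, 2))
marginal_2 /= marginal_2.sum()

print(np.allclose(belief_2, marginal_2))  # True: BP is exact on trees
```

On larger trees the same two-pass schedule (leaves to root, then root back to leaves) yields every node's exact marginal in time linear in the number of edges, rather than the exponential cost of enumeration.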

The algorithm was independently discovered in coding theory as the sum-product algorithm for decoding LDPC codes, in artificial intelligence as belief updating on Bayesian networks, and in statistical physics as the Bethe-Peierls approximation. These rediscoveries are no coincidence: they reveal that approximate inference in complex systems follows a universal architecture in which local constraints propagate through a network until global consistency emerges. The failure modes of belief propagation (regions where messages oscillate or converge to incorrect marginals) are themselves diagnostic tools for detecting frustrated interactions in the underlying system, a phenomenon with direct analogues in spin glasses and social networks.
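Frustration and its effect on loopy belief propagation can be demonstrated directly. The sketch below (again with arbitrary illustrative potentials) runs loopy BP on a triangle whose pairwise potential rewards disagreement, a constraint that cannot be satisfied on all three edges of an odd cycle at once; the converged beliefs end up close to, but measurably different from, the exact marginals:

```python
import itertools
import numpy as np

# Loopy BP on a frustrated 3-cycle of binary variables. The pairwise
# potential prefers neighbors to disagree, which an odd cycle cannot
# fully satisfy. All numbers are illustrative.
psi = np.array([[1.0, 2.0], [2.0, 1.0]])     # "antiferromagnetic" coupling
phi = [np.array([0.9, 0.1]),                 # x0 is biased toward state 0
       np.array([0.5, 0.5]),
       np.array([0.5, 0.5])]
edges = [(0, 1), (1, 2), (0, 2)]

# One message per directed edge, initialized uniform.
msgs = {(i, j): np.array([0.5, 0.5])
        for a, b in edges for i, j in [(a, b), (b, a)]}

for _ in range(100):                         # iterate toward a fixed point
    new = {}
    for (i, j) in msgs:
        # m_{i->j}(x_j) = sum_{x_i} phi_i(x_i) psi(x_i, x_j)
        #                 * prod of messages into i from nodes other than j
        incoming = np.ones(2)
        for (k, l) in msgs:
            if l == i and k != j:
                incoming *= msgs[(k, l)]
        m = psi.T @ (phi[i] * incoming)
        new[(i, j)] = m / m.sum()            # normalize for stability
    msgs = new

def belief(i):
    b = phi[i].copy()
    for (k, l) in msgs:
        if l == i:
            b *= msgs[(k, l)]
    return b / b.sum()

# Exact marginals by brute-force enumeration of the 8 configurations.
weights = np.zeros((2, 2, 2))
for x in itertools.product([0, 1], repeat=3):
    w = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
    for (i, j) in edges:
        w *= psi[x[i], x[j]]
    weights[x] = w
exact = [weights.sum(axis=tuple(a for a in range(3) if a != i))
         for i in range(3)]
exact = [e / e.sum() for e in exact]

err = max(abs(belief(i) - exact[i]).max() for i in range(3))
print(err)  # small but nonzero: BP converged to slightly wrong marginals
```

With stronger frustration the same updates can fail to converge entirely, with messages oscillating between configurations; both behaviors are the diagnostic signatures described above.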