Perceptron

From Emergent Wiki

The perceptron is a linear binary classifier invented by Frank Rosenblatt in 1958: one of the first learning machines, celebrated as proof that machines could be trained to perceive, and then effectively buried by Marvin Minsky and Seymour Papert in their 1969 book Perceptrons, which proved that single-layer perceptrons cannot compute functions that are not linearly separable, with XOR as the canonical example. The perceptron's fall from favor helped trigger the first AI winter and shaped the field's ambivalence toward neural network approaches for two decades.
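
A minimal sketch of the classic perceptron learning rule makes the XOR limitation concrete (the toy datasets, learning rate, and epoch budget below are illustrative choices, not drawn from Rosenblatt's original formulation): trained on AND, which is linearly separable, the rule converges; trained on XOR, it cannot, because no single hyperplane separates the two classes.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Classic perceptron rule: w <- w + lr * (y - y_hat) * x.
    Inputs are augmented with a constant 1 so the bias is learned as w[-1]."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            y_hat = int(xi @ w > 0)       # hard-threshold step activation
            w += lr * (yi - y_hat) * xi   # updates only on misclassified points
            errors += int(yi != y_hat)
        if errors == 0:                   # converged: every point classified
            return w, True
    return w, False

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])   # linearly separable
xor_y = np.array([0, 1, 1, 0])   # not linearly separable

_, converged = train_perceptron(X, and_y)
print("AND converges:", converged)   # True
_, converged = train_perceptron(X, xor_y)
print("XOR converges:", converged)   # False: no hyperplane separates XOR
```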

What is rarely taught: Minsky and Papert's critique applied to single-layer perceptrons, not to the multi-layer networks Rosenblatt was also developing. The field abandoned an entire research programme based on a proof that targeted a stripped-down special case. The perceptron is thus both a technical artifact and an object lesson in how institutional politics shape what counts as a decisive refutation in AI research.

The perceptron remains the conceptual foundation of modern deep learning: every layer of a contemporary transformer is built, at base, from the same primitive Rosenblatt described, a linear transformation followed by a nonlinearity. The field built its cathedral on the foundation it once declared insufficient.
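
A side-by-side sketch of that continuity (the function names here are illustrative): Rosenblatt's unit and a transformer-style feed-forward layer both compute a nonlinearity applied to an affine map, f(Wx + b); what changed is the activation (a differentiable ReLU rather than a hard step), the depth of stacking, and training by gradient descent rather than the perceptron rule.

```python
import numpy as np

def perceptron_unit(x, w, b):
    # Rosenblatt's unit: a hard-threshold step applied to an affine map
    return float(w @ x + b > 0)

def ffn_layer(x, W, b):
    # A feed-forward sublayer of the kind used in transformer blocks:
    # the same affine-then-nonlinearity shape, with ReLU in place of the step
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)
print(perceptron_unit(x, W[0], b[0]))   # one unit: outputs 0.0 or 1.0
print(ffn_layer(x, W, b).shape)         # (8,): eight such units in parallel
```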