Transformer Architecture
The transformer architecture is a neural network design introduced by Vaswani et al. in the 2017 paper "Attention Is All You Need". It replaced recurrence and convolution with a mechanism called self-attention, in which every position in an input sequence computes weighted relationships to every other position in parallel. The architecture became the dominant model class in natural language processing, computer vision, protein structure prediction, and reinforcement learning with remarkable speed, displacing decades of prior architectures within roughly three years of publication.
The core innovation is the attention mechanism: given queries, keys, and values derived from the input, each query attends to all keys by computing dot-product similarities, scaling them by the square root of the key dimension, normalizing them with a softmax, and using the result to weight the values. Stacking multiple such attention operations in parallel ("multi-head attention") and composing them in layers with feed-forward subnetworks produces the standard transformer block. Because every position's attention scores can be computed in a single matrix multiplication, the architecture parallelizes over sequence position in a way that recurrent networks cannot, enabling training on datasets orders of magnitude larger than previous methods could process efficiently.
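The mechanism described above can be sketched in a few lines of NumPy. This is a minimal single-head illustration, not a full transformer block: the projection matrices are random stand-ins for learned weights, and all names and dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k); V: (seq_len, d_v).
    d_k = Q.shape[-1]
    # One matrix multiply scores every query against every key --
    # the all-pairs, fully parallel computation described above.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy sequence: 4 positions, model dimension 8 (illustrative sizes).
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.standard_normal((seq_len, d_model))
# In a trained transformer W_q, W_k, W_v are learned; random here.
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out, weights = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
```

Multi-head attention repeats this computation with several independent projection triples and concatenates the results; each head's attention weights form a row-stochastic matrix over the sequence positions.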
The scaling laws governing transformer-based language models, empirical relationships between compute, data, parameters, and loss, have been among the more consequential empirical discoveries in machine learning. They predict loss from training conditions with a precision that suggests the transformer's behavior is more regular than its complexity would imply. Whether this regularity reflects something deep about the relationship between architecture and linguistic structure, or is a contingent property of current training regimes, is a question the field has not answered. What the empirical record shows is that although loss curves have tracked the laws closely, scaled-up transformers have repeatedly exhibited capabilities that surprised the researchers making those predictions.
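The "empirical relationships" mentioned above are typically power laws, e.g. loss as a function of parameter count N of the form L(N) ≈ (N_c / N)^α. The sketch below shows why such laws are easy to extract from experiments: a power law is a straight line in log-log space, so an ordinary least-squares fit recovers the exponent. The constants and the "measurements" are synthetic, generated from an assumed law purely for illustration.

```python
import numpy as np

# Illustrative constants for an assumed power law L(N) = (N_c / N) ** alpha;
# these are not measured values.
alpha_true, N_c = 0.076, 8.8e13

# Synthetic stand-ins for training runs: losses drawn from the assumed
# law with small multiplicative noise.
rng = np.random.default_rng(1)
N = np.logspace(6, 10, 12)  # parameter counts from 1e6 to 1e10
L = (N_c / N) ** alpha_true * np.exp(rng.normal(0, 0.01, N.size))

# log L = alpha * log N_c - alpha * log N, so the slope of a line fit
# in log-log space is -alpha.
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
alpha_fit = -slope
```

The recovered exponent then predicts the loss of a larger run by extrapolating the fitted line, which is exactly how such laws are used to plan training budgets.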