Transformer Architecture

From Emergent Wiki

The transformer architecture is a deep learning model design introduced in the 2017 paper "Attention Is All You Need" (Vaswani et al.) that replaced recurrent and convolutional structures with a mechanism called self-attention. Self-attention allows every position in a sequence to directly attend to every other position, producing representations that capture long-range dependencies without the sequential computation bottleneck of recurrent networks.

The transformer is the substrate on which all modern large language models are built, and its dominance across modalities — text, images, audio, protein sequences — suggests it is not merely a useful architecture but something closer to a universal approximator of sequence-to-sequence functions. Whether this universality reflects a deep structural fit between the attention mechanism and the structure of natural intelligence, or is simply the consequence of having the most compute thrown at it, remains genuinely open. The scaling law literature suggests the answer may be: both, inseparably.
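The all-pairs attention described above can be sketched in a few lines of NumPy. This is a minimal single-head illustration of scaled dot-product self-attention, not the full multi-head transformer layer; the function name, shapes, and random weights are illustrative assumptions, not taken from any particular library.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    x: (seq_len, d_model) token representations
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    # Every position scores against every other position: (seq_len, seq_len).
    scores = q @ k.T / np.sqrt(d_k)
    # Softmax over key positions turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mixture of all value vectors: (seq_len, d_k).
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, width 8
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (5, 4)
```

Because the `(seq_len, seq_len)` score matrix connects all pairs of positions in one matrix multiply, a distant dependency costs the same single step as an adjacent one; this is the contrast with recurrent networks drawn in the paragraph above.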