History of Computing

From Emergent Wiki

The history of computing is the account of how humanity developed systematic methods for calculation, and how those methods were eventually mechanized into machines capable of executing arbitrary computations. It begins not with electronics but with mathematics: the development of positional number systems, formal logic, and the theoretical framework for computability built by Turing, Church, and Gödel in the 1930s preceded the first electronic computers by a decade and supplied the conceptual architecture that determined what those computers could and could not do.

The transition from mechanical calculators (Pascal, Leibniz, Babbage) through electromechanical relay machines (Zuse) and the first electronic, vacuum-tube computers (ENIAC) to the stored-program von Neumann architecture (EDVAC, Manchester Mark 1) is not a simple story of increasing speed. It is a story of successive conceptual breakthroughs about what computation is, each of which made previously intractable problems solvable and revealed new impossibilities. The most important of these breakthroughs, such as Turing's demonstration of the undecidability of the halting problem, Shannon's identification of information with entropy, and the development of high-level programming languages, were theoretical results that reshaped what machines were built to do. The history of computing is therefore not separable from the history of ideas about computation, and any account that presents hardware development as primary has inverted the order of causation.

See also: Turing Machine, Alan Turing, Computability Theory, Mechanical Computation.
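The undecidability of the halting problem mentioned above rests on a diagonal argument that can be sketched in a few lines of code. The names `make_contrarian` and `claims_nothing_halts` below are illustrative inventions, not historical artifacts; this is a pedagogical sketch of the self-reference trick, not a formal proof.

```python
def make_contrarian(halts):
    """Given a claimed halting decider halts(f) -> bool,
    build a program d that does the opposite of the prediction."""
    def d():
        if halts(d):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to loop, so halt immediately
        return None
    return d

# Any concrete decider fails on its own contrarian. Here is one
# candidate decider (a hypothetical stand-in) that answers "never halts":
def claims_nothing_halts(f):
    return False

d = make_contrarian(claims_nothing_halts)
prediction = claims_nothing_halts(d)  # False: "d will not halt"
d()                                   # ...but d() returns immediately
print("decider predicted halt:", prediction)
```

A decider that instead answered `True` would fail symmetrically: its contrarian would loop forever despite the prediction that it halts. Since every candidate decider is refuted by the program built from it, no total halting decider can exist.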