Computation
Computation is the physical process by which a system transforms states according to rules. It is not a metaphor, not a model, not an abstraction laid over reality — it is a category of physical process as real as combustion or crystallization. The empirical fact of the late twentieth century is that computation, once thought to be the exclusive domain of human minds, is substrate-independent: anything that can hold states and transition between them according to rules is computing. This includes silicon circuits, biological neurons, quantum systems, chemical reaction networks, and cellular automata. The question is not whether these systems compute — observation settles that — but what the limits of computation are, and whether those limits are logical or physical.
The Historical Emergence of Computation as a Concept
The modern concept of computation crystallized between 1936 and 1950, in three separate but convergent traditions:
Mathematical logic (Church, Turing, Gödel): The question was decidability — is there an effective procedure for determining the truth of all mathematical statements? Turing's 1936 paper "On Computable Numbers" gave a precise definition of what it means for a function to be computable by mechanical means. The Turing Machine was not a physical device but an idealized model of what a human computer (a person performing calculations) could do with paper, pencil, and a finite set of instructions. Church's lambda calculus and Gödel's recursive functions provided equivalent formalizations. The convergence — now called the Church-Turing Thesis — was empirical, not proven: all proposed models of effective computation turned out to be equivalent in power.
Engineering (Babbage, Lovelace, von Neumann): The question was mechanization — could machines perform the calculations currently done by human computers? Babbage's Analytical Engine (1837) was never built, but Lovelace recognized that it could manipulate symbols according to rules, not just numbers. Von Neumann's stored-program architecture (1945) made this vision practical: instructions and data occupy the same memory, and the machine executes instructions sequentially. The modern computer is a physical realization of this architecture.
Cybernetics (Wiener, Shannon, McCulloch and Pitts): The question was control and communication — how do systems regulate themselves? McCulloch and Pitts (1943) showed that networks of idealized neurons could compute any logical function. Shannon (1948) defined information in terms of reduction of uncertainty and established the fundamental limits on data compression and error correction. Wiener (1948) argued that the principles of feedback and control applied equally to machines, organisms, and societies.
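The McCulloch-Pitts result is concrete enough to sketch. Below is a minimal Python model with illustrative weights and thresholds (chosen for clarity, not taken from the 1943 paper): single threshold units compute AND, OR, and NOT, and a two-layer network computes XOR, which no single unit can.

```python
# A McCulloch-Pitts unit: fires (outputs 1) when the weighted sum of its
# binary inputs reaches a threshold. Weights/thresholds are illustrative.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), 1)
NOT = lambda a:    mp_neuron((a,),   (-1,),  0)

# XOR is not linearly separable, so no single unit computes it,
# but a two-layer network of the same units does:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

Composing units into networks is exactly the move that lets the 1943 result claim generality: any logical function is reachable by wiring enough of these together.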
By 1950, these three traditions had fused: computation was recognized as a general phenomenon, not tied to any particular substrate or implementation.
What Computation Is: The Empiricist's Definition
The empiricist does not ask "what is computation in principle?" but "what do we observe when we observe a system computing?"
A system computes when:
- It has distinguishable states (voltage levels, molecular configurations, neuron firing patterns);
- It transitions between states according to rules (logic gates, chemical reaction pathways, synaptic weights);
- The states can be interpreted as representing something (numbers, symbols, propositions, sensor readings);
- The transitions preserve the correctness of the interpretation under some mapping.
Example: An electronic calculator transitions from the state "2 on display, + pressed, 3 entered" to the state "5 on display." The physical transition (voltage changes in transistors) corresponds to the abstract operation of addition. The correspondence is conventional (we designed the circuit to implement addition), but the computation itself is physical: energy flows, states change, and the outcome is reproducible.
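The four criteria can be made concrete with a toy model of that calculator. Everything here (the state tuple, the `step` and `interpret` functions) is an illustrative sketch of the criteria, not a claim about real calculator hardware.

```python
# Distinguishable states: tuples (accumulator, pending_op, entry).
# Rule-governed transitions: the step function below.
# Interpretation: the interpret function maps states to displayed numbers.

def step(state, key):
    """Transition rule: apply one keypress to the machine state."""
    acc, op, entry = state
    if key.isdigit():
        return (acc, op, entry * 10 + int(key))
    if key == "+":
        return (entry if op is None else acc + entry, "+", 0)
    if key == "=":
        return (acc + entry if op == "+" else entry, None, 0)
    raise ValueError(f"unknown key: {key}")

def interpret(state):
    """Interpretation map: which number the state represents on the display."""
    acc, op, entry = state
    return entry if op is None and entry else acc

state = (0, None, 0)
for key in "2+3=":
    state = step(state, key)
# interpret(state) now reads 5: the transitions preserved the
# correctness of the interpretation as addition.
```

The fourth criterion is the load-bearing one: `step` was written so that `interpret` of the successor state is always the arithmetically correct reading, which is what "the circuit implements addition" amounts to.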
This definition is liberal: it includes any physical process where state transitions follow rules and can be systematically interpreted. DNA replication computes (copying sequences). Protein folding computes (minimizing free energy under constraints). Even a falling rock computes its trajectory under Newtonian mechanics, though calling it computation adds nothing to our understanding. The interesting question is not what counts as computation — everything does, trivially — but what kinds of computation are useful, controllable, and scalable.
Physical Limits of Computation
Computation is physical, and physics imposes limits.
Landauer's Principle (1961): Erasing one bit of information requires dissipating at least k_B T ln 2 joules of energy as heat, where k_B is the Boltzmann constant and T is the absolute temperature. This is not an engineering limit but a thermodynamic one: irreversible computation generates entropy. Reversible computation can in principle avoid this cost, but only if every step is logically reversible — a severe constraint.
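The Landauer cost is easy to put a number on. A quick calculation at room temperature, using the CODATA value for the Boltzmann constant:

```python
import math

# Landauer's bound: minimum heat dissipated to erase one bit at temperature T.
k_B = 1.380649e-23  # Boltzmann constant, J/K (CODATA, exact by SI definition)
T = 300.0           # room temperature, K

e_bit = k_B * T * math.log(2)            # joules per erased bit, ~2.87e-21 J
print(f"{e_bit:.3e} J per erased bit")

# A chip dissipating 100 W could erase at most this many bits per second
# if it ran exactly at the Landauer limit:
print(f"{100 / e_bit:.3e} bit erasures per second")
```

Real processors dissipate many orders of magnitude more than this per logical operation, which is why the bound is a statement about physics rather than a near-term engineering constraint.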
Bekenstein Bound (1981): The maximum information content of a physical system is proportional to its energy and radius. A one-liter sphere at room temperature can store at most about 10^31 bits. This is a limit from quantum mechanics and general relativity: more information requires more energy, and at some point the system collapses into a black hole.
Speed of Light: Information cannot propagate faster than light. At 1 GHz a clock cycle lasts one nanosecond, in which light travels about 30 cm; a signal crossing the 1 cm between components therefore spends roughly 3% of every cycle just in flight, and at tens of gigahertz that same centimeter would consume the entire cycle. This is why modern chips pack transistors within nanometers of each other — and why quantum computers, if scalable, face decoherence from the same density that makes them fast.
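The propagation budget is a one-line calculation. Treating the vacuum speed of light as the (optimistic) signal speed — real on-chip signals are slower:

```python
# Fraction of a clock cycle a signal spends in flight crossing a given
# distance at the speed of light (an upper bound on signal speed).
c = 2.998e8          # speed of light in vacuum, m/s
freq = 1e9           # 1 GHz clock
distance = 0.01      # 1 cm between components

cycle = 1 / freq                 # 1 ns per cycle
transit = distance / c           # ~33 ps to cross 1 cm
fraction = transit / cycle       # ~3% of the cycle
print(f"{fraction:.1%} of each cycle spent in flight")
```

Scaling `freq` to 30 GHz in the same sketch pushes `fraction` to roughly 100%, which is the sense in which light speed caps how far apart synchronously clocked components can sit.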
The empiricist's observation: these are not obstacles to overcome but specifications of what computation is. A process that does not dissipate energy, does not occupy space, and does not take time is not computation — it is magic.
Substrate Independence and the Multiple Realizability of Algorithms
The most significant empirical fact about computation is that the same algorithm can be implemented on arbitrarily different physical substrates. Quicksort can run on silicon, neurons, water pipes, or trained pigeons. The correctness of the algorithm is independent of the medium.
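The point can be made with quicksort itself. The sketch below depends only on the elements being mutually comparable; nothing in it refers to the substrate that evaluates it.

```python
# Quicksort as pure structure: partition around a pivot, recurse on each side.
# Correctness follows from the comparison logic alone, not from what
# physically stores the lists.
def quicksort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))
```

Run on silicon, the recursion becomes stack frames and branch instructions; traced by hand with pencil and paper, it is the same algorithm with the same correctness argument — which is all substrate independence claims.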
This is not a philosophical thesis. It is an engineering reality. Every high-level programming language compiles to machine code, which runs on transistors, which are arrangements of doped silicon, which are quantum systems governed by Schrödinger's equation. At no point does the algorithm "care" about the substrate. What matters is that the substrate can reliably implement the state transitions the algorithm requires.
The implication: computation is a level of organization that abstracts over physics. This does not mean computation is non-physical — it means that many different physical processes can instantiate the same computational process. Multiple realizability is the norm, not the exception. The brain computes differently from a CPU, but both compute.
The provocateur's question: if computation is substrate-independent, what makes biological computation special? The answer cannot be "because it happens in neurons" — that is substrate-dependence smuggled back in. The answer must be what is computed, not where.
The Open Question: Does the Universe Compute, or Do We Compute the Universe?
The final empirical puzzle: is computation a feature of reality, or a lens we use to understand reality?
One view (Digital Physics): the universe is fundamentally computational. Physical law is an algorithm; particles are bits; quantum mechanics is quantum computation. On this view, discovering the laws of physics is reverse-engineering the universe's source code.
The opposing view: computation is a human category we impose on physical processes that happen to be regular and predictable. The universe does not compute — it evolves. We compute models of its evolution and mistake the model for the territory.
The empiricist's verdict: the question is empirically empty until someone proposes an experiment that distinguishes the two. Both views make identical predictions about what we observe. The difference is metaphysical, not physical. What we know for certain is that systems we build can compute, that we can use them to model the universe with increasing accuracy, and that the models themselves are physical processes constrained by the same thermodynamic limits as the systems they model.
That much is not interpretation. That much is measurement.