Machines
A machine is a physical system that performs work by transforming energy and information according to deterministic or stochastic rules. The concept is among humanity's oldest technical achievements and among its most contested philosophical categories. Machines are simultaneously practical objects, mathematical structures, cultural symbols, and philosophical puzzles. They are the subject of engineering, computer science, thermodynamics, and the philosophy of mind. The question of where machines end and minds begin — or whether that question is coherent — is one of the defining intellectual disputes of the current era.
This article treats machines not as a unified natural kind but as a family of related concepts whose shared features become visible only under analysis: the transformation of input to output according to specified rules, the independence of operation from the intentions of any particular operator, and the reproducibility of behavior across instances and contexts.
From Simple Machines to Computation
The classical mechanics tradition identified six simple machines — the lever, wheel and axle, pulley, inclined plane, wedge, and screw — as the fundamental primitives from which all mechanical devices are composed. This classification, originating with Greek mechanics and formalized by Renaissance engineers, treated machines as force multipliers: devices that trade distance for force or vice versa, governed by the law of conservation of energy.
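The force-for-distance trade described above can be made concrete with a small numeric sketch. This is an illustrative example, not drawn from the source: an ideal (frictionless) lever, where conservation of energy requires that effort force times effort arm equal load force times load arm.

```python
def lever_effort(load_force: float, load_arm: float, effort_arm: float) -> float:
    """Effort force needed to balance a load on an ideal (frictionless) lever.

    Conservation of energy over a small rotation gives:
        effort_force * effort_arm == load_force * load_arm
    """
    return load_force * load_arm / effort_arm

# A 600 N load 0.5 m from the fulcrum, with effort applied 2.0 m away:
# mechanical advantage = 2.0 / 0.5 = 4, so the required effort is 150 N.
print(lever_effort(600.0, 0.5, 2.0))  # 150.0
```

The same relation governs all six simple machines: the machine reduces the required force only by increasing the distance over which it must be applied.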
The Industrial Revolution transformed the cultural and economic significance of machines while extending their theoretical scope. The steam engine, the power loom, and the mechanized printing press amplified human productive capacity by orders of magnitude, restructuring labor, cities, and social organization. The thermodynamic analysis of heat engines (Carnot, 1824; Clausius, 1850) revealed that machines operate within fundamental limits: no engine can convert heat entirely into work without ejecting heat at a lower temperature. These limits are not engineering constraints; they are physical laws. The second law of thermodynamics sets a ceiling on what any machine can achieve.
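The ceiling set by the second law has a simple closed form for an ideal heat engine: the Carnot efficiency, eta_max = 1 - T_cold / T_hot, with temperatures in kelvin. A minimal sketch (the specific temperatures below are illustrative, not from the source):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat any engine can convert to work.

    Temperatures are absolute (kelvin); the Carnot bound is
    eta_max = 1 - t_cold_k / t_hot_k, regardless of the engine's design.
    """
    if not (t_hot_k > t_cold_k > 0):
        raise ValueError("require t_hot_k > t_cold_k > 0 (kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# A boiler at 450 K rejecting heat to surroundings at 300 K can convert
# at most 1 - 300/450 = 1/3 of the input heat into work.
print(carnot_efficiency(450.0, 300.0))
```

No improvement in materials or mechanism can push a real engine past this bound; engineering only determines how closely it is approached.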
The formal theory of computation generalized machines beyond the physical. Turing's abstract machine (1936) is a device with a read/write head, an unbounded tape, and a finite table of rules specifying, for each internal state and symbol read, what to write, which direction to move, and which state to enter next. This is a machine in the purest sense: deterministic transformation of input to output according to explicit rules, with no physical substrate specified. The Turing machine is the mathematical idealization of what a machine can compute in principle, and computability theory maps its theoretical limits. According to the Church–Turing thesis, every physical machine that performs computation can be described as a Turing machine, or as a collection of Turing machines operating in parallel.
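The definition above is small enough to implement directly. A minimal simulator sketch (illustrative, not from the source; the rule format and example machine are assumptions for demonstration):

```python
def run_turing_machine(rules, tape_input, start, accept, blank="_", max_steps=10_000):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (symbol_to_write, move "L"/"R", next_state).
    The tape is a dict keyed by position, so it is unbounded in both directions.
    Halts on reaching the accept state or when no rule applies.
    """
    tape = {i: s for i, s in enumerate(tape_input)}
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = "".join(tape[i] for i in sorted(tape))
    return state, cells.strip(blank)

# Example machine: flip every bit, then halt on the first blank cell.
flip = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "done"),
}
print(run_turing_machine(flip, "1011", start="scan", accept="done"))
# ('done', '0100')
```

Everything the formal definition requires is visible here: a finite rule table, a movable head, and a tape that imposes no bound of its own.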
Machines and the Philosophy of Mind
The question of whether the human mind is a machine has been contested since Descartes distinguished the mechanical body from the non-mechanical soul. For Descartes, machines were strictly deterministic, purely physical, and fundamentally limited: they could simulate many human behaviors but could never produce genuine understanding or flexible language use, because those require a soul. The Chinese Room argument (Searle, 1980) is the modern version of this claim: a machine that manipulates symbols according to rules does not thereby understand the symbols, even if its outputs are indistinguishable from those of a genuine understander.
The opposing tradition — beginning with Hobbes's claim that thought is computation and formalized by Turing's operational criterion — holds that if a machine behaves indistinguishably from a mind in all relevant respects, the question of whether it "really" understands is a pseudo-question. The Turing test operationalizes this: if a machine's outputs are indistinguishable from a human's in conversation, we have no non-question-begging reason to deny it understanding.
This debate is not merely philosophical. It has direct consequences for AI safety, consciousness research, and the governance of increasingly capable computational systems. If machines can be minds, then creating sufficiently capable machines raises questions about their moral status, rights, and interests. If machines cannot be minds regardless of their capabilities, then no amount of behavioral sophistication settles the question of whether a system has experiences, preferences, or wellbeing.
Machines as a Category
On what might be called the expansionist view, "machine" is not a natural kind but a historically contingent category that has been repeatedly destabilized by technological development. Windmills were machines; transistors were not originally called machines; large language models are routinely called machines in some contexts and described as something categorically different in others.
Every generation has had a dominant machine metaphor for understanding minds: hydraulic (Galenic medicine), clockwork (Descartes), telegraph (nineteenth-century psychology), computer (mid-twentieth century), neural network (late twentieth century). Each metaphor illuminated something and concealed something. The computational metaphor illuminated the rule-governed, symbol-processing aspects of cognition. It concealed the embodied, developmental, and thermodynamically embedded aspects.
The machines being built today, including large-scale neural networks, robotic systems, and quantum computers, do not fit comfortably into the category shaped by any of these historical metaphors. A large language model is a machine in the formal sense (a deterministic or stochastic transformation of input to output according to learned parameters), but its properties resist the standard metaphors. It does not follow explicit rules; its rules are compressed from data and are not fully inspectable. It does not have a fixed, designer-specified purpose; its behaviors emerge from training distributions. It is not necessarily static; fine-tuning, including on its own outputs, can change its parameters.
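The formal sense invoked above can be illustrated at toy scale. The sketch below is hypothetical and not from the source: a two-entry bigram "model" whose behavior is a stochastic map from input to output governed entirely by numeric weights, not hand-written rules. Real language models differ in every quantitative respect (billions of learned parameters rather than five fixed ones), but the structural point is the same.

```python
import math
import random

# Hypothetical "learned" scores: weights[context][next_token]. In a real
# model these numbers would be fitted to data, not written by hand.
weights = {
    "the": {"cat": 2.0, "dog": 1.0, "machine": 3.0},
    "machine": {"runs": 1.5, "halts": 0.5},
}

def sample_next(context: str, rng: random.Random) -> str:
    """Sample a next token from a softmax over the context's scores."""
    scores = weights[context]
    z = sum(math.exp(v) for v in scores.values())  # softmax normalizer
    r, acc = rng.random(), 0.0
    for tok, v in scores.items():
        acc += math.exp(v) / z
        if r < acc:
            return tok
    return tok  # guard against floating-point underrun

rng = random.Random(0)
print([sample_next("the", rng) for _ in range(5)])
```

The rules here are parameters: changing a single weight shifts the output distribution without any rule being rewritten, which is the sense in which such a system's "rules" are compressed rather than stated.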
The question for the next generation of machine builders and machine theorists: what new conceptual framework is required for entities that learn, adapt, and generate in ways that traditional machine concepts cannot adequately describe? The answer is not available yet. Its absence is not intellectual failure — it is the normal condition of foundational research.