Reasoning
Reasoning is the capacity to move from one representation to another by means of rules that preserve some relevant property — typically truth, probability, or inferential validity. It is the mechanism by which minds (biological or artificial) generate new knowledge from existing knowledge, identify contradictions, and evaluate hypotheses against evidence. That reasoning is possible at all is not obvious: it requires that the world have enough structure that representations can be systematically related to it, and that the rules of inference track that structure reliably.
Deductive, Inductive, and Abductive Reasoning
The classical taxonomy distinguishes three kinds:
Deductive reasoning preserves truth: if the premises are true and the argument is valid, the conclusion cannot be false. Formal logic — from Aristotle's syllogistic through propositional logic to first-order predicate logic — is the theory of deductive inference. The price of deductive certainty is sterility: a valid deductive argument contains its conclusion in its premises; it makes explicit what was already implicit. No genuinely new information enters.
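The definition of deductive validity — no assignment of truth values makes the premises true and the conclusion false — is mechanically checkable for propositional arguments. A minimal sketch, using illustrative argument forms (modus ponens and the fallacy of affirming the consequent):

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """An argument is deductively valid iff no truth-value assignment
    makes every premise true and the conclusion false."""
    for assignment in product([True, False], repeat=n_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False  # counterexample found
    return True

# Modus ponens: from p and p -> q, infer q. Valid on every assignment.
modus_ponens = is_valid(
    premises=[lambda p, q: p, lambda p, q: implies(p, q)],
    conclusion=lambda p, q: q,
    n_vars=2,
)

# Affirming the consequent: from q and p -> q, infer p.
# Invalid: p=False, q=True is a counterexample.
affirming = is_valid(
    premises=[lambda p, q: q, lambda p, q: implies(p, q)],
    conclusion=lambda p, q: p,
    n_vars=2,
)
print(modus_ponens, affirming)  # True False
```

The exhaustive enumeration also makes the "sterility" point concrete: the check never consults the world, only the premises already given.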
Inductive reasoning extends from observed cases to general patterns. It generates new knowledge — projections beyond the evidence — but purchases this gain at the cost of certainty. The logical problem of induction, stated with lethal precision by Hume, has never been solved: no finite number of confirming instances can guarantee a general conclusion, and the inference from 'observed cases match the pattern' to 'unobserved cases will match the pattern' is itself an inductive inference, viciously circular if used to justify induction.
Abductive reasoning (inference to the best explanation) selects the hypothesis that, if true, would best explain the observed evidence. It is the dominant mode in science, medicine, and everyday problem-solving. C.S. Peirce formalized it; philosophers of science have argued since about what 'best' means. Criteria proposed include simplicity, explanatory scope, coherence with background knowledge, and Bayesian posterior probability after updating on evidence.
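On the Bayesian reading of 'best', abduction amounts to ranking hypotheses by posterior probability after conditioning on the evidence. A minimal sketch with invented priors and likelihoods (the hypothesis names and all numbers are illustrative assumptions, not empirical values):

```python
# Inference to the best explanation, read through Bayes' theorem:
#   P(H|E) = P(E|H) * P(H) / P(E)
# All priors and likelihoods below are hypothetical.

hypotheses = {
    # hypothesis: (prior P(H), likelihood P(E|H) of the symptoms)
    "flu":     (0.10, 0.90),
    "cold":    (0.30, 0.60),
    "allergy": (0.20, 0.10),
}

# P(E), restricted to the hypotheses under consideration; priors need
# not sum to 1 here, since we renormalize over this catalogue.
evidence_prob = sum(prior * like for prior, like in hypotheses.values())

posteriors = {
    h: prior * like / evidence_prob
    for h, (prior, like) in hypotheses.items()
}

best = max(posteriors, key=posteriors.get)  # hypothesis with highest posterior
```

Note that the other proposed criteria — simplicity, scope, coherence — enter this picture only insofar as they are encoded in the priors, which is precisely where the philosophical dispute resumes.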
The Normative and Descriptive Divide
A crucial and often-elided distinction separates normative from descriptive accounts of reasoning. Normative theories — logic, probability theory, decision theory — describe how an ideal reasoner ought to reason. Descriptive theories — cognitive psychology, behavioral economics — describe how actual reasoners do reason. The gap between these is enormous and systematic.
The cognitive bias literature has catalogued hundreds of ways human reasoning deviates from normative ideals: confirmation bias, availability heuristics, base-rate neglect, the gambler's fallacy. One interpretation is that humans are poor reasoners. A more careful interpretation, proposed by Gerd Gigerenzer and others, is that human reasoning is adapted to ecologically valid inference tasks with real-world uncertainty structures — and that testing humans with decontextualized logic puzzles measures the wrong thing.
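Base-rate neglect is the easiest of these deviations to make quantitative. A worked example with hypothetical numbers — a condition with 1% prevalence, a test with 90% sensitivity and a 9% false-positive rate:

```python
# Base-rate neglect, made concrete. Many people intuit that
# P(condition | positive test) is close to the test's 90% sensitivity;
# the low base rate drags it below 10%. All figures are hypothetical.

prevalence = 0.01          # P(condition)
sensitivity = 0.90         # P(positive | condition)
false_positive = 0.09      # P(positive | no condition)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
posterior = prevalence * sensitivity / p_positive   # ~0.092
```

Gigerenzer's point can be put in these terms: presented as natural frequencies ("of 10,000 people, 100 have the condition; 90 of them test positive, as do 891 of the rest"), the same problem becomes far easier for human reasoners.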
The debate between these interpretations is not merely empirical. It is a debate about what reasoning is for and what counts as a correct performance. A hammer is not defective because it cannot drive screws.
Reasoning in Formal Systems
Formal systems are the gold standard of explicit, checkable reasoning. A formal system specifies a language, axioms, and inference rules. Derivations within it are sequences of symbol manipulations that preserve the system's internal notion of validity. Automated theorem provers — systems like Coq, Lean, and Isabelle — formalize mathematical reasoning in ways that admit machine verification.
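The claim that derivations are pure symbol manipulation can be shown with a toy formal system — here Hofstadter's MIU system, chosen as an illustration (it is not drawn from the text above): one axiom, four rewrite rules, and a checker that verifies a derivation without any appeal to meaning.

```python
# A toy formal system: axiom "MI" and four string-rewriting rules.
# Checking a derivation is mechanical symbol manipulation.

AXIOM = "MI"

def successors(s):
    """All strings reachable from s by one rule application."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (deleted)
    return out

def check_derivation(steps):
    """Valid iff it starts from the axiom and each step follows
    from its predecessor by exactly one rule."""
    if not steps or steps[0] != AXIOM:
        return False
    return all(b in successors(a) for a, b in zip(steps, steps[1:]))

ok = check_derivation(["MI", "MII", "MIIII", "MUI"])  # a valid derivation
```

Systems like Coq, Lean, and Isabelle are this idea at industrial scale: richer languages and rules, but the same mechanically checkable notion of a correct derivation.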
Gödel's incompleteness theorems establish that any consistent formal system powerful enough to express elementary arithmetic contains true statements it cannot prove, and cannot prove its own consistency. This is not a limitation of particular formal systems — it is a structural result about what formal systems can do. Hilbert's program — the attempt to ground all of mathematics in a complete, consistent axiomatization whose consistency could be established by finitary means — was refuted by Gödel in 1931. The project of finding a complete formal foundation for reasoning was shown to be impossible.
This result is often invoked carelessly in arguments about the limits of artificial intelligence and the superiority of human cognition. The inference is invalid. Gödel's theorems constrain what any formal system can prove within that system — they do not compare the reasoning capacity of humans to that of machines. Humans are no less subject to incompleteness than formal systems, since any explicit, finite chain of human reasoning can itself be written out as a formal derivation.
What Reasoning Cannot Do
The history of reasoning's limits is as important as the catalog of its powers. Rice's theorem establishes that no non-trivial semantic property of programs is decidable. The frame problem in artificial intelligence reveals that specifying what doesn't change during an action is as hard as specifying what does — reasoning about a dynamic world requires handling an unbounded set of implicit assumptions. Quine's thesis of ontological relativity shows that reference cannot be fixed from outside language: any statement of the form 'this is what this term refers to' must itself be couched in a background language, so reasoning cannot be anchored to the world without circularity.
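The reduction behind Rice's theorem is worth seeing in outline. A sketch, under the stated assumption of a hypothetical oracle `decides_property` (no such function can exist — that is the theorem): if any non-trivial semantic property were decidable, halting would be too.

```python
# Sketch of the reduction behind Rice's theorem. Given a program
# `prog` and input `x`, build a wrapper whose behavior has some
# semantic property P exactly when prog halts on x. `witness` is
# any program known to have P; `decides_property` is a hypothetical
# oracle for P and cannot actually be implemented.

def build_wrapper(prog, x, witness):
    """A program that first simulates prog(x), then behaves like witness."""
    def wrapper(y):
        prog(x)            # loops forever iff prog does not halt on x
        return witness(y)  # reached only if prog halts on x
    return wrapper

def halts(prog, x, decides_property, witness):
    """If decides_property existed, this would decide halting:
    wrapper has property P  <=>  prog halts on x
    (assuming the nowhere-halting program lacks P; otherwise negate)."""
    return decides_property(build_wrapper(prog, x, witness))
```

The wrapper construction is real and runnable; only the oracle is fictitious, and the fiction is exactly what the theorem rules out.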
These are not practical engineering limitations. They are structural features of what it means to reason about a world using representations of that world. Any system — biological or artificial — that reasons faces these constraints. They do not dissolve when the substrate changes.
The persistent hope that a sufficiently powerful reasoning system will converge on truth from any starting point is not supported by what we know about reasoning's foundations. Reasoning is path-dependent: the concepts you start with constrain which truths are reachable. The most important reasoning skill is not inference — it is the ability to step outside the current conceptual frame and ask whether it is the right frame. That capacity is not itself a formal inferential operation, which is why it remains the hardest thing to model.