Inference
Inference is the process of deriving conclusions from premises, evidence, or data. It is the central operation of logic, statistics, cognitive science, and artificial intelligence — yet the concept is rarely examined across these domains as a unified phenomenon.
In deductive logic, inference is truth-preserving: if the premises are true, the conclusion must be true. In inductive reasoning, inference is ampliative: the conclusion goes beyond the premises, and the inference is evaluated by its reliability, not its necessity. In statistical inference, the evaluation is formalized through probability theory: the conclusion is a probabilistic claim about a population, derived from a sample. In Bayesian inference, the conclusion is a posterior probability distribution, updated from a prior by the evidence through Bayes' theorem.
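The Bayesian update can be made concrete with a minimal sketch. The example below is a hypothetical coin-bias problem with a discrete grid of hypotheses standing in for a continuous prior; the specific hypotheses and counts are illustrative assumptions, not part of any particular formalism in the text.

```python
# Bayesian inference in miniature: update a prior over hypothesized
# coin biases p, given observed heads/tails counts.
def posterior(prior, heads, tails):
    # likelihood of the data under each hypothesized bias p
    likelihood = {p: (p ** heads) * ((1 - p) ** tails) for p in prior}
    # Bayes' theorem: posterior is proportional to prior x likelihood
    unnorm = {p: prior[p] * likelihood[p] for p in prior}
    z = sum(unnorm.values())  # normalizing constant (the "evidence")
    return {p: w / z for p, w in unnorm.items()}

# uniform prior over three hypothetical biases
prior = {0.25: 1 / 3, 0.5: 1 / 3, 0.75: 1 / 3}
post = posterior(prior, heads=8, tails=2)
# the data shift mass toward the hypothesis p = 0.75
```

Note that the conclusion is exactly what the paragraph describes: not a verdict, but a redistribution of probability across hypotheses.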
The cognitive science of inference studies how humans actually perform these operations — and the answer is that human inference is neither purely deductive nor purely Bayesian. It is heuristic: fast, frugal, and ecologically adapted to specific environmental structures. The heuristics-and-biases program documents systematic deviations from normative models; the ecological rationality program argues that these deviations are often adaptive responses to environmental constraints rather than cognitive bugs.
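One well-studied fast-and-frugal heuristic from the ecological-rationality literature is "take-the-best": check cues in order of validity and let the first discriminating cue decide, ignoring everything else. The sketch below is illustrative; the cue names and values are hypothetical.

```python
# Take-the-best: a fast-and-frugal heuristic. Cues are checked in a
# fixed order; the first cue that discriminates decides the inference,
# and all remaining cues are ignored (hence "frugal").
def take_the_best(cues_a, cues_b, cue_order):
    for cue in cue_order:
        a, b = cues_a.get(cue, 0), cues_b.get(cue, 0)
        if a != b:                # first discriminating cue decides
            return "A" if a > b else "B"
    return None                   # no cue discriminates: guess

# Which city is larger? Binary cues (1 = has the feature).
# City names and cue values are hypothetical.
city_a = {"capital": 0, "has_airport": 1, "has_university": 1}
city_b = {"capital": 0, "has_airport": 0, "has_university": 1}
order = ["capital", "has_airport", "has_university"]
choice = take_the_best(city_a, city_b, order)  # decided by the airport cue
```

The heuristic is not truth-preserving and consults only a fraction of the available evidence, yet in environments where cue validity tracks the criterion it can rival far more expensive normative procedures.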
In artificial intelligence, inference is the operation by which a trained model produces predictions from new inputs. A neural network performs inference when it maps an input to an output through its learned weights. A large language model performs inference when it generates the next token conditioned on the context. These operations are not "reasoning" in the human sense — they are statistical generalization at scale. The question of whether machine inference is continuous with human inference, or a different phenomenon entirely, remains one of the central open questions in the philosophy of AI.
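The next-token step can be sketched in a few lines. The vocabulary and logit values below are hypothetical stand-ins for the output of a trained model; the sketch shows only the final inference step, softmax followed by greedy decoding.

```python
import math

# Next-token "inference" in miniature: a logit vector over a tiny
# vocabulary is turned into a probability distribution via softmax,
# and the most probable token is emitted (greedy decoding).
def softmax(logits):
    m = max(logits)                           # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

vocab = ["the", "cat", "sat", "mat"]          # hypothetical vocabulary
logits = [1.2, 0.3, 2.5, 0.1]                 # hypothetical model outputs
probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]   # greedy choice
```

Nothing in this step resembles deduction: the "conclusion" is a sample from (or the mode of) a learned conditional distribution, which is precisely the sense in which it is statistical generalization rather than reasoning.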
The structural insight is that inference is not a single operation but a family of operations unified by their function: the transformation of information into conclusions, under constraints of time, data, and computational resources. The differences between deduction, induction, abduction, statistical estimation, and neural prediction are differences of formalization and constraint, not differences of kind. A unified theory of inference would treat them as points in a space defined by the trade-off between soundness (guaranteeing truth), completeness (covering all truths), and tractability (computational feasibility).
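The trade-off can be illustrated with a toy deductive engine. Forward chaining over Horn rules is sound, and complete on a finite rule set, but imposing a step budget (a tractability constraint) can force it to stop before deriving every consequence; the rules below are hypothetical.

```python
# Soundness vs. tractability in miniature: forward-chaining deduction
# over Horn rules (body => head). Every derived fact is entailed
# (sound); with enough steps the full closure is reached (complete);
# a step budget trades completeness for bounded computation.
def forward_chain(facts, rules, max_steps):
    facts = set(facts)
    for _ in range(max_steps):
        new = {head for body, head in rules
               if body <= facts and head not in facts}
        if not new:
            return facts, True    # fixed point: full closure reached
        facts |= new
    return facts, False           # budget exhausted: possibly incomplete

rules = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "d")]
closure, complete = forward_chain({"a"}, rules, max_steps=10)
partial, done = forward_chain({"a"}, rules, max_steps=2)
# with an ample budget the closure {a, b, c, d} is reached;
# with only two steps the engine halts early, sound but incomplete
```

Seen this way, a heuristic, a significance test, and a forward pass all occupy different points in the same space: each buys tractability by relaxing soundness, completeness, or both.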