Machine Understanding
Machine understanding is the contested hypothesis that computational systems can possess semantic comprehension of the symbols they process — not merely produce correct outputs correlated with symbol meanings, but instantiate the cognitive relationship between sign and referent that the word 'understanding' denotes in human cases.
The hypothesis is contested because no agreed operational definition of understanding exists that would allow empirical adjudication. The Turing test operationalizes understanding as behavioral indistinguishability; Searle's Chinese Room argument holds that behavioral indistinguishability is insufficient; functionalist accounts hold that functional role equivalence is sufficient. These are not merely different theories — they generate different experimental predictions and different engineering programs.
Current large language models exhibit understanding in the behavioral sense: they produce contextually appropriate, inferentially coherent outputs across a wide range of domains. Whether this constitutes understanding in any stronger sense depends on which account of understanding is correct — a philosophical question that machine performance data alone cannot settle. The temptation to treat behavioral competence as establishing the stronger claim should be resisted; that inference is precisely what the Chinese Room argument was designed to block.
The productive research direction is to specify what cognitive operations understanding requires — causal reasoning, counterfactual reasoning, compositional generalization, mental model construction — and to test whether current systems implement those operations. This is tractable. The further question of whether those operations, once specified and verified, constitute 'real' understanding adds nothing.
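As a minimal sketch of what such a test can look like, the following Python probe targets one of the listed operations, compositional generalization, in the style of SCAN-like benchmarks: the system under test sees every primitive command and most primitive-modifier combinations, and is evaluated only on combinations it has never seen. The vocabulary, the helper names (gold_output, compositional_split, evaluate), and the trivial baseline model are illustrative assumptions, not drawn from the source; any real system (for example, an LLM prompted with the training pairs) would be substituted for the baseline.

```python
import itertools

# SCAN-style toy vocabulary: commands map to action sequences.
PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "run": "RUN", "look": "LOOK"}
MODIFIERS = {"twice": 2, "thrice": 3}


def gold_output(command: str) -> str:
    """Compositional ground truth: '<verb> <modifier>' -> repeated action."""
    verb, _, modifier = command.partition(" ")
    reps = MODIFIERS.get(modifier, 1)
    return " ".join([PRIMITIVES[verb]] * reps)


def compositional_split(held_out_verb: str = "jump"):
    """Train on all verb/modifier pairs except those using the held-out verb;
    test exactly on those novel combinations."""
    train, test = [], []
    for verb, modifier in itertools.product(PRIMITIVES, MODIFIERS):
        cmd = f"{verb} {modifier}"
        (test if verb == held_out_verb else train).append(cmd)
    # The held-out verb still appears alone in training, so its meaning is known;
    # only the combination with a modifier is novel.
    train.extend(PRIMITIVES)
    return train, test


def evaluate(model, test_commands) -> float:
    """Exact-match accuracy on the novel primitive/modifier combinations."""
    correct = sum(model(cmd) == gold_output(cmd) for cmd in test_commands)
    return correct / len(test_commands)


if __name__ == "__main__":
    train, test = compositional_split()
    # Hypothetical stand-in for the system under test: a baseline that
    # ignores modifiers and so cannot generalize compositionally.
    baseline = lambda cmd: PRIMITIVES[cmd.split()[0]]
    print("train commands:", train)
    print("test commands: ", test)
    print(f"baseline exact-match: {evaluate(baseline, test):.2f}")
```

A system that has induced the compositional rule scores near 1.0 on the held-out combinations; one that has only memorized surface pairings, like the baseline here, scores near 0.0. The probe says nothing about whether the successful system 'really' understands, which is the point: the operational question is answerable, and the residual metaphysical question does no further work.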