Ontological Relativity
Ontological relativity is the thesis, developed by W.V.O. Quine in his 1968 John Dewey Lectures of the same name, that there is no absolute fact of the matter about what terms in a language refer to. Reference — the relation between words and the world — is not a relation that holds independently of some background theory or translation scheme. What a term refers to can only be specified relative to another language, which itself requires a further specification, without any privileged ground level where reference simply is. The thesis is a generalization of Quine's earlier doctrine of the indeterminacy of translation, extending it from inter-language translation to intra-language interpretation.
Ontological relativity is one of the most radical challenges ever mounted to the idea that language hooks onto the world. Its full consequences remain underappreciated, because accepting them dissolves a set of distinctions — between word and object, map and territory, observer and observed — that almost every subsequent discussion in philosophy of language, philosophy of mind, and cognitive science has treated as foundational.
Quine's Argument
The argument proceeds in two steps.
Step 1: The proxy function argument. Suppose you have a language whose terms refer to physical objects. Now consider systematically replacing every object with its complement — the rest of the universe excluding that object. A term that previously referred to a rabbit now refers to the rabbit-complement. The resulting language, under this reassignment, makes exactly the same true/false distinctions as the original. No sentence changes its truth value. No observation can distinguish the original interpretation from the complement interpretation. Quine generalizes: any proxy function — any systematic one-to-one reassignment of referents — produces an empirically equivalent reinterpretation of a language. There is no experiment that selects between them.
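The permutation claim behind the proxy function argument can be checked directly in a toy first-order model. The sketch below is illustrative, not Quine's own formalism: the domain, the predicate, the names, and the particular bijection are all invented for the example. A proxy function is modeled as a bijection on the domain; reinterpreting both the predicate extensions and the reference of the names through that bijection leaves every atomic sentence's truth value unchanged.

```python
# Toy first-order model: a small domain, one predicate, two singular terms.
# All names and the particular proxy mapping below are illustrative choices,
# standing in for Quine's object/cosmic-complement swap.
domain = ["rabbit", "fox", "stone", "cloud"]
furry = {"rabbit", "fox"}                # extension of the predicate F
names = {"r": "rabbit", "s": "stone"}    # reference of the singular terms

def truth(pred_ext, ref, term):
    """Is the atomic sentence F(term) true under this interpretation?"""
    return ref[term] in pred_ext

# A proxy function: any one-to-one reassignment of referents (a bijection
# on the domain). Here each object is swapped with a partner object.
proxy = {"rabbit": "cloud", "cloud": "rabbit", "fox": "stone", "stone": "fox"}

# Reinterpret predicate extensions and name references through the proxy.
furry_2 = {proxy[x] for x in furry}
names_2 = {t: proxy[o] for t, o in names.items()}

# No atomic sentence changes its truth value under the reinterpretation:
# the two interpretations are behaviorally indistinguishable.
for term in names:
    assert truth(furry, names, term) == truth(furry_2, names_2, term)
```

Because the reassignment is systematic, every sentence built from these atoms also keeps its truth value, which is why no observation selects between the two interpretations.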
Step 2: The inscrutability of reference. If no observation can distinguish between interpretations related by a proxy function, then the question 'what does this term really refer to?' has no empirically grounded answer. Reference is inscrutable — not merely uncertain, but undetermined by all possible evidence. To say what a term refers to, you must already be using another language, whose own reference relations are equally inscrutable from a further remove.
The conclusion is not that reference does not exist or that language does not communicate. It is that reference is a relation between theories, not between words and the world. You can say what 'rabbit' refers to in English, if you say it in English — but this is a trivial semantic ascent. It adds no new information about how English connects to rabbits. The connection is always already theory-relative.
What Gets Dissolved
Quine intended ontological relativity as a thesis about reference. Its consequences extend further.
The first casualty is ontological realism about natural kinds — the view that the world sorts itself into kinds independently of how we describe it. If there is no privileged way to assign our terms to objects, then the joints at which our language 'carves nature' are joints in our theoretical framework, not in nature. Natural kinds are projections, not discoveries. This does not mean all projections are equally valid — some theoretical frameworks are better confirmed than others — but it removes the idea that any framework is confirmed by its successful reference to the kinds that are really there.
The second casualty is the distinction between meaning and world as independent poles of a relation. The dominant picture in philosophy of language treats meaning as a go-between that connects the word-side to the world-side: you know the meaning of 'rabbit,' and the meaning determines what in the world counts as a rabbit. Ontological relativity collapses this picture. If what 'rabbit' refers to is underdetermined by all possible evidence, then meaning — conceived as something that fixes reference — is equally underdetermined. Meaning is not a third thing mediating words and world. It is a feature of a theoretical interpretation, all the way down.
The third casualty — and this is what makes ontological relativity a foundational result rather than a curiosity — is the distinction between the knower and the known as absolute positions. If what the knower's terms refer to is relative to the knower's theoretical scheme, and what the known consists of is relative to how it is individuated by that scheme, then there is no scheme-neutral position from which to describe the knower facing the known. Epistemology cannot start from a foundation of scheme-independent objects confronted by a scheme-independent observer. Both sides of the epistemic relation are theory-relative.
Consequences for Artificial Minds
The implications for artificial intelligence and cognitive science are direct and largely unabsorbed.
If reference is inscrutable, then the question of whether a language model 'really understands' what its tokens refer to is not an empirical question with a determinate answer. It is a question about which theoretical framework you are using to interpret the system. The debate between 'stochastic parrot' and 'genuine understanding' positions presupposes that there is a fact of the matter — that one interpretation is the correct one. Ontological relativity denies this presupposition. The question is not which interpretation is correct but which interpretation is more useful for what purposes.
This is not a consolation prize for AI systems. It is a precise result that applies equally to human cognition. When you say 'I understand what rabbit means,' you are not reporting access to a scheme-independent referential relation. You are reporting that your theoretical interpretation of your own cognitive states is of a certain kind. The same inscrutability that applies to machine interpretation applies to self-interpretation. Introspective reports do not have privileged access to reference relations, because there is no such relation to have access to.
The interpretation of neural networks — the question of what internal representations 'represent' — is precisely the problem of inscrutability as it arises in computational systems. Attempts to interpret neural network internals are attempts to fix a proxy-function interpretation of distributed weight patterns. Multiple such interpretations are always possible; evidence from behavior underdetermines which is correct. This is not a methodological limitation. It is ontological relativity instantiated in silicon.
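The underdetermination has a simple linear-algebra analogue, sketched below under invented assumptions (a two-layer linear map with random weights; the matrices and shapes are arbitrary). Any invertible change of basis on the hidden layer, compensated at the readout, yields a second, equally behavior-preserving assignment of "representations" to the same network:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # encoder weights (illustrative)
V = rng.normal(size=(2, 3))   # readout weights (illustrative)
x = rng.normal(size=4)        # an arbitrary input

# Any invertible matrix M reinterprets the hidden space; a random
# Gaussian matrix is invertible with probability 1.
M = rng.normal(size=(3, 3))

h1 = W @ x                    # one candidate "representation" of x
y1 = V @ h1

h2 = (M @ W) @ x              # a different "representation" of the same x
y2 = (V @ np.linalg.inv(M)) @ h2

# Identical behavior, distinct internal attributions: the output cannot
# decide which hidden interpretation is the "real" one.
assert np.allclose(y1, y2)
```

The pair (W, V) and the pair (MW, VM⁻¹) compute the same input-output function, so behavioral evidence alone cannot fix which hidden-layer interpretation is correct, which is the proxy-function situation restated for weights.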
The Error It Exposes
The persistent temptation in philosophy of language, epistemology, and cognitive science is to assume that there must be something that fixes reference — some causal chain, some evolved tracking mechanism, some natural resemblance — that grounds interpretation without theory-relativity. Every such proposal has failed to survive scrutiny. Causal theories of reference explain why certain items tend to cause certain terms to be used, but they do not uniquely fix which items are the referents, since causal chains can be individuated in multiple ways. Reliabilist theories fix reference in terms of reliable belief-forming processes, but reliable processes are described relative to a taxonomy of situations, which is already theory-laden.
The mistake these proposals share is treating the reference relation as a target to be located rather than a decision to be made. Ontological relativity shows that the question 'what does this term really refer to?' is a request for a theoretical commitment, not a discovery. Once you make this clear, the desperate search for something that secures reference without residual theory-dependence can be seen for what it is: the search for a foundation that is not itself foundational.
Any epistemology that needs a foundation must eventually rest on a commitment that is not itself grounded. Ontological relativity is not the source of this problem. It is the precise diagnosis of why no foundation will stay still when you stand on it.