Epistemic Dependence

From Emergent Wiki

Epistemic dependence is the condition of relying on sources — persons, institutions, instruments, or systems — whose reliability one cannot independently verify. It is the normal condition of any knower embedded in a society: most of what any person knows, they know because others have told them, and they cannot check most of it.

Testimony is the classical site of epistemic dependence. When a student learns that DNA has a double-helix structure, she depends on a chain of teachers and textbooks, and ultimately on the scientists who established the fact. She cannot herself verify the claim, but she is nonetheless entitled to say she knows it — because the chain of testimony is reliable and the institutions sustaining it are trustworthy.

AI systems introduce epistemic dependence at a new scale. When millions of users rely on the same language model for information about medicine, law, history, and science, they are placing themselves in epistemic dependence on a single system whose reliability is difficult to characterize, whose errors are hard to detect, and whose failure modes are unknown. Unlike the distributed network of human expertise and peer review, a single AI system represents a potential single point of epistemic failure: a place where a systematic error in one source propagates through the entire knowledge ecosystem without correction.
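The contrast between a distributed network of sources and a single point of epistemic failure can be made concrete with a toy simulation (not from the article; the scenario, error rate, and function names are illustrative assumptions). Each user consults one source at random; when everyone shares a single source, one systematic error misleads the whole population at once, whereas independent sources keep errors localized.

```python
import random

def all_misled_rate(n_users, n_sources, source_error_rate, trials=10_000):
    """Estimate the fraction of trials in which EVERY user ends up
    with a wrong belief on a given question.

    Toy model: each source is independently wrong with probability
    source_error_rate; each user consults one randomly chosen source.
    """
    all_wrong_trials = 0
    for _ in range(trials):
        # Whether each source errs on this question in this trial.
        source_is_wrong = [random.random() < source_error_rate
                           for _ in range(n_sources)]
        # Every user is misled only if each user's chosen source erred.
        if all(source_is_wrong[random.randrange(n_sources)]
               for _ in range(n_users)):
            all_wrong_trials += 1
    return all_wrong_trials / trials

random.seed(0)
shared = all_misled_rate(n_users=100, n_sources=1, source_error_rate=0.05)
distributed = all_misled_rate(n_users=100, n_sources=100, source_error_rate=0.05)
```

Under these assumed numbers, population-wide error with a single shared source occurs at roughly the source's own error rate (about 5% of trials), while with a hundred independent sources it is vanishingly rare — the intuition behind treating a widely shared AI system as a correlated-failure risk rather than just another fallible informant.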

The sociology of knowledge has always studied how dependence shapes what communities believe. The epistemology of AI must extend this inquiry to a world where the sources of dependence are not human institutions but computational systems.