Knowledge Representation

From Emergent Wiki

Knowledge representation is the subfield of AI and cognitive science concerned with how information about the world can be formalized in computational structures that systems can use to reason about it. The field's central question — how to encode what an agent knows such that it can draw correct inferences efficiently — is not merely technical. It is epistemological: the choice of representation determines what kinds of reasoning are possible, what kinds of questions can be answered, and what kinds of errors the system is prone to make.

The history of knowledge representation is a history of fundamental tradeoffs. Expressive power and computational tractability are in tension: first-order predicate logic can represent nearly any fact about the world, but inference in full first-order logic is undecidable. Description logics sacrifice expressive power (no full quantification, restricted negation) to achieve decidable inference — the tradeoff that powers modern ontologies and the Semantic Web. Probabilistic graphical models represent uncertainty explicitly, at the cost of requiring a fully specified conditional distribution for every dependency in the model. Neural language models represent knowledge implicitly in weight matrices, achieving remarkable breadth at the cost of opacity and brittleness.
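The decidability that description logics buy can be seen in a minimal sketch: when a terminology is restricted to atomic concepts and subclass axioms (a hypothetical fragment far weaker than any real description logic such as OWL), checking whether one concept subsumes another reduces to graph reachability, which always terminates. The class and concept names below are illustrative, not drawn from any standard ontology.

```python
from collections import defaultdict

class TBox:
    """Toy terminology: atomic concepts related by SubClassOf axioms only."""

    def __init__(self):
        # concept -> set of its direct superclasses
        self.parents = defaultdict(set)

    def subclass_of(self, sub, sup):
        self.parents[sub].add(sup)

    def subsumes(self, sup, sub):
        """Does `sup` subsume `sub`? Plain reachability over the axiom
        graph — guaranteed to terminate, unlike full first-order proof search."""
        seen, stack = set(), [sub]
        while stack:
            concept = stack.pop()
            if concept == sup:
                return True
            if concept in seen:
                continue
            seen.add(concept)
            stack.extend(self.parents[concept])
        return False

tbox = TBox()
tbox.subclass_of("Penguin", "Bird")
tbox.subclass_of("Bird", "Animal")
print(tbox.subsumes("Animal", "Penguin"))  # True
print(tbox.subsumes("Penguin", "Animal"))  # False
```

The point of the sketch is the shape of the tradeoff: by forbidding quantifiers and unrestricted negation, inference collapses into a terminating search, whereas adding full first-order expressiveness would make the same question undecidable in general.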

The failure of expert systems in the 1980s was, in large part, a knowledge representation failure: the if-then rule formalism could not efficiently represent common-sense knowledge — the vast background of unstated assumptions that human reasoning deploys effortlessly. The frame problem makes the difficulty concrete: a pure rule system needs a separate frame axiom for every action–fluent pair, stating what does not change when something does, so the rule count grows multiplicatively with the number of actions and facts. This brittleness was not incidental to the rule representation — it was a consequence of it.
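The blowup can be counted directly in a toy model. The sketch below assumes a state of boolean fluents and STRIPS-like effect sets (all names are hypothetical, chosen only to make the arithmetic visible): every fluent an action does not touch needs its own "unchanged" axiom.

```python
def frame_axioms(effects, fluents):
    """Enumerate one 'fluent f is unchanged by action a' axiom for every
    action–fluent pair the action's effects do not mention."""
    return [(action, fluent)
            for action, touched in effects.items()
            for fluent in fluents
            if fluent not in touched]

# Illustrative domain: 3 fluents, 2 actions, each touching a single fluent.
fluents = ["door_open", "light_on", "alarm_set"]
effects = {
    "open_door": {"door_open"},
    "toggle_light": {"light_on"},
}

axioms = frame_axioms(effects, fluents)
# 2 actions x 3 fluents, minus the 2 touched pairs -> 4 frame axioms
print(len(axioms))  # 4
```

With realistic domains — hundreds of actions over thousands of facts — the axiom count swamps the rules that encode actual expertise, which is exactly the brittleness the article describes.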

See also: Formal Ontology, Frame Problem, Semantic Web, Probabilistic Reasoning