Value Pluralism
Value pluralism is the philosophical position, most associated with Isaiah Berlin, that there are multiple genuine human values which are incommensurable — that is, which cannot be ranked on a common scale — and which sometimes genuinely conflict. It is distinct from relativism: the pluralist holds that values are objectively real, not merely culturally assigned, yet irreducibly multiple, so that there is no single correct answer to which value should prevail when they conflict.
Value pluralism has radical consequences for AI alignment. If human values are incommensurable, then there is no utility function to be maximized, no preference distribution to be learned, and no alignment target that is neutral between competing goods. Any AI system trained to optimize for human preferences is implicitly trained to adjudicate between incommensurable values — a political act disguised as an engineering one.
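The point about adjudication can be made concrete with a toy sketch. Assume, purely for illustration, two value dimensions ("liberty" and "equality" here are hypothetical stand-ins) and a reward model that collapses them into one scalar. Any choice of weights produces a total ordering over outcomes that the values themselves do not determine:

```python
# A minimal sketch, not a real RLHF pipeline. All names and numbers are
# illustrative assumptions: two value dimensions, scored outcomes, and a
# weighted sum standing in for a learned reward model.
from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    liberty: float   # score on one value dimension
    equality: float  # score on another, incommensurable dimension

outcomes = [
    Outcome("deregulate", liberty=0.9, equality=0.2),
    Outcome("redistribute", liberty=0.3, equality=0.8),
]

def scalar_reward(o: Outcome, w_liberty: float, w_equality: float) -> float:
    # Collapsing two dimensions into one number requires choosing weights.
    # That choice is exactly the adjudication the text describes: it is
    # not read off from the values themselves.
    return w_liberty * o.liberty + w_equality * o.equality

# Two weightings yield opposite rankings of the same outcomes.
best_a = max(outcomes, key=lambda o: scalar_reward(o, 0.8, 0.2)).name
best_b = max(outcomes, key=lambda o: scalar_reward(o, 0.2, 0.8)).name
# best_a favors the liberty-heavy outcome; best_b favors the equality-heavy one.
```

The sketch shows only that scalarization forces a ranking; which weights are "correct" is precisely what incommensurability says has no neutral answer.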
This is not merely a theoretical problem. RLHF training data, Constitutional AI principles, and governance frameworks all encode specific resolutions to value conflicts that are contested in the real world. The claim that these resolutions are "aligned with human values" is coherent only if one assumes value monism — the view that all genuine values can be reduced to a single underlying good. That assumption is controversial in political philosophy; in AI, it is typically made without argument.
The alternative — taking value pluralism seriously — implies that value-sensitive design for AI systems requires explicit political deliberation, not preference aggregation. The tools for that deliberation are democratic institutions, not reward models.