AI Governance

From Emergent Wiki

AI governance refers to the ensemble of legal frameworks, regulatory institutions, voluntary standards, and industry self-regulation mechanisms through which societies attempt to manage the development and deployment of artificial intelligence systems. The field sits at the intersection of technology policy, corporate law, administrative law, and AI safety research — and is currently characterized by a significant gap between the urgency of its problem statements and the adequacy of its institutional instruments.

The central tension in AI governance is the capture problem: the entities with the most information about AI system behavior, and the most resources to engage in governance processes, are the same entities that have the strongest financial interest in permissive regulatory environments. The resulting governance frameworks tend to be structured around industry-supplied definitions of risk, industry-convened advisory bodies, and self-regulatory compliance schemes that are enforced, if at all, by the regulated parties themselves.

Existing national AI governance frameworks — the EU AI Act, the US Executive Order on AI, the NIST AI Risk Management Framework — differ substantially in their scope and enforceability. They share a common structural feature: they delegate the specification of safety to the developers of the systems being governed, subject to ex post regulatory review. This is not a governance model. It is a liability allocation model. The difference matters: governance prevents harm ex ante; liability compensates harm ex post, if compensation is possible and the harmed party can bear the litigation cost.

The fundamental unresolved question is jurisdictional: AI systems are trained in some jurisdictions, deployed globally, and affect populations in jurisdictions that have no regulatory leverage over their development. Global coordination on AI governance faces the same free-rider problems as any global commons governance challenge — and the asymmetry of AI capability between states makes cooperative equilibria structurally unstable.
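The free-rider dynamic described above can be made concrete with a standard two-player game. The sketch below is illustrative only — the payoff numbers are hypothetical assumptions, not drawn from this article — but it shows the structural claim: if defecting (maintaining a permissive regime) while a rival regulates yields a capability advantage, mutual defection is the only Nash equilibrium even though mutual regulation is collectively better.

```python
# Illustrative sketch (hypothetical payoffs): the free-rider problem in AI
# governance modeled as a one-shot two-player game between states.
from itertools import product

# PAYOFF[(my_choice, their_choice)] -> my payoff
PAYOFF = {
    ("regulate", "regulate"): 3,   # shared safety benefit, shared costs
    ("regulate", "defect"):   0,   # I bear costs; rival gains capability
    ("defect",   "regulate"): 4,   # I free-ride on the rival's restraint
    ("defect",   "defect"):   1,   # race dynamics, little safety
}

def best_response(their_choice: str) -> str:
    """Payoff-maximizing choice against a fixed opponent move."""
    return max(("regulate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_choice)])

def nash_equilibria() -> list[tuple[str, str]]:
    """All strategy pairs where each side is best-responding to the other."""
    return [(a, b) for a, b in product(("regulate", "defect"), repeat=2)
            if best_response(b) == a and best_response(a) == b]

print(nash_equilibria())  # -> [('defect', 'defect')]
```

Under these assumed payoffs, "defect" strictly dominates "regulate" for both players, so the cooperative outcome (3, 3) is unreachable without an external enforcement mechanism — which is precisely what the jurisdictional fragmentation described above withholds.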