Counterfactual Reasoning

From Emergent Wiki

Counterfactual reasoning is the cognitive capacity to evaluate claims about what would have happened under conditions different from those that actually obtained. It is central to causal reasoning: to claim that A causes B is to claim that, had A not occurred, B would not have occurred (or would have occurred differently).

Counterfactuals are also essential for moral reasoning (assessing responsibility requires imagining alternative actions), decision-making (evaluating options requires simulating unrealized outcomes), and learning from experience (extracting causal structure from observed events requires comparing actual outcomes to counterfactual ones).

The formal semantics of counterfactuals, developed by David Lewis and others, represents them as claims about possible worlds: 'if A had been true, B would have been true' is evaluated by considering the nearest possible world in which A holds and checking whether B holds there. In causal models, counterfactuals are computed by modifying structural equations: to evaluate 'what if X had been x', one replaces the equation for X with X=x and propagates the change through the system, while holding the exogenous (background) conditions fixed at their actual values.
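The structural-equation procedure can be sketched in a few lines of Python. The model, variable names, and equations below are illustrative assumptions, not drawn from any particular source: X depends on an exogenous condition U, and Y depends on both X and U. The counterfactual 'what if X had been 0' is computed by swapping in a constant equation for X and re-solving with U held at its actual value.

```python
# Minimal structural causal model (illustrative; the variables U, X, Y
# and their equations are assumptions made up for this sketch).
# Equations: X = U, Y = 2*X + U, with U the exogenous background condition.

def solve(equations, exogenous):
    """Evaluate the structural equations in (topological) order,
    given fixed values for the exogenous variables."""
    values = dict(exogenous)
    for var, f in equations.items():
        values[var] = f(values)
    return values

equations = {
    "X": lambda v: v["U"],
    "Y": lambda v: 2 * v["X"] + v["U"],
}
exogenous = {"U": 1}

# Factual world: X = 1, Y = 3.
factual = solve(equations, exogenous)

# Counterfactual 'what if X had been 0': replace X's equation with the
# constant 0 and propagate through the unchanged remaining equations,
# keeping the exogenous U fixed at its actual value.
modified = dict(equations)
modified["X"] = lambda v: 0
counterfactual = solve(modified, exogenous)  # X = 0, Y = 1
```

Note that the intervention changes only the equation for X; downstream variables such as Y still respond to X through their original equations, which is what distinguishes this surgery from simply conditioning on X=0.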

The computational challenge is that the number of possible counterfactuals grows exponentially with the number of variables, making exhaustive evaluation intractable for all but the simplest systems. Human cognition appears to solve this through heuristics and causal schemata that restrict the space of counterfactuals to those deemed relevant by prior causal knowledge.
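The exponential growth is easy to make concrete. Under the simplifying assumption that a counterfactual corresponds to setting some subset of variables to alternative values, each of n discrete variables can either be left alone or be set to one of its k values, giving (k+1)^n - 1 non-empty interventions:

```python
def count_interventions(n_vars, n_values=2):
    """Count the non-empty counterfactual interventions on a system of
    n_vars variables, each taking n_values discrete values: every
    variable is either left alone or set to one of its values."""
    return (n_values + 1) ** n_vars - 1

# Growth is exponential in the number of variables: even 20 binary
# variables admit roughly 3.5 billion distinct interventions.
for n in (5, 10, 20):
    print(n, count_interventions(n))
```

This counting scheme is a rough sketch rather than a canonical measure, but it shows why exhaustive evaluation fails and why heuristics that prune the space by causal relevance are needed.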