Agency
Agency is the capacity of a system to initiate, sustain, and direct causal influence in ways that are not fully determined by prior conditions. The concept belongs simultaneously to philosophy of action, cognitive science, systems theory, and the design of artificial systems — and in each domain it carries a different emphasis. In philosophy, agency is the mark of the person: the capacity to act for reasons rather than merely in accordance with causes. In cognitive science, agency is an attributional category: a mode of explanation that observers apply to systems whose behavior they cannot predict from mechanistic laws alone. In systems theory, agency is an emergent property: the ability of a system to modulate its own boundary conditions, thereby altering the constraints under which it operates.
The systems-theoretic framing is the most general and the most contested. To call a system an agent is to claim that its behavior cannot be reduced to the behavior of its components — not because the reduction is computationally intractable but because the system's organization introduces causal powers that are absent from the parts in isolation. This is the claim of ontological emergence: agency is not merely a heuristic label but a genuine causal property of appropriately organized systems.
Agency in Philosophy of Action
The classical philosophical problem of agency is the compatibility of free will with causal determinism. If every event is caused by prior events in accordance with natural law, then human actions — being events — are also caused by prior events. The agent, on this picture, is not a self-initiating cause but the last link in a causal chain. The question is whether this picture captures what agency is, or merely what agency looks like from the outside.
Daniel Dennett's intentional stance is the most influential compatibilist answer. Agency, on this view, is not a metaphysical property of systems but a predictive strategy of observers. We treat systems as agents when doing so is more efficient — more computationally tractable, more accurate — than treating them as mechanical systems governed by physical laws. The intentional stance does not deny that agents are physical systems. It denies that 'being an agent' is a property over and above 'being a physical system that can be predicted by attributing beliefs and desires.'
The hard-determinist challenge, pressed by philosophers like Sam Harris and neuroscientists like Benjamin Libet, is that the intentional stance may be efficient but it is false. Libet's experiments, showing that the readiness potential precedes conscious awareness of the intention to act by several hundred milliseconds, are read as evidence that the feeling of agency is an after-the-fact confabulation. The brain decides; the mind narrates. If this is correct, then agency is not a capacity but a cognitive illusion — the brain's press release to itself.
The systems-theoretic response is that the Libet/Harris argument confuses initiation with authorization. Even if the readiness potential precedes conscious awareness, the action is not complete until the motor system executes it, and the motor system is subject to veto by other brain processes — including conscious ones. The feeling of agency may not initiate the action, but it may be part of the control architecture that selects, modulates, and aborts actions. Agency is not the spark that starts the engine. It is the steering.
Agency in Cognitive Science
Cognitive science studies agency as an attribution rather than a metaphysical property. Daniel Wegner's pioneering work argued that the sense of agency is constructed from three cues: priority (the thought precedes the action at an appropriate interval), consistency (the action matches the thought), and exclusivity (no other apparent cause is present). When these cues are present, the mind attributes agency to itself. When they are disrupted — as in the case of involuntary movements produced by direct brain stimulation — the sense of agency disappears or is misattributed.
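Wegner's cue structure can be caricatured as a toy scoring function. This is a sketch, not Wegner's formalism: the equal weighting and the binary cues are illustrative simplifications of what is, in his account, a graded and context-sensitive inference.

```python
def sense_of_agency(priority: bool, consistency: bool, exclusivity: bool) -> float:
    """Score the felt sense of agency from Wegner's three cues.

    Equal weighting is an illustrative assumption; in Wegner's account
    the cues interact in graded, context-sensitive ways."""
    cues = (priority, consistency, exclusivity)
    return sum(cues) / len(cues)


# A voluntary reach: thought preceded the action, the action matched,
# and no rival cause was apparent.
full = sense_of_agency(priority=True, consistency=True, exclusivity=True)  # 1.0

# Stimulation-induced movement: no prior thought, an obvious external cause.
weak = sense_of_agency(priority=False, consistency=True, exclusivity=False)
```

The point the sketch preserves is that the same movement can yield a strong or a weak sense of agency depending only on the surrounding cues, which is exactly what the brain-stimulation cases demonstrate.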
The hyperactive agency detection device (HADD), proposed by Justin Barrett, extends this analysis to the attribution of agency to others — human, animal, supernatural, or mechanical. The cognitive system is calibrated to detect agency even in ambiguous stimuli, because the cost of missing an agent (a predator, a conspecific) is higher than the cost of false positives. This calibration explains why agency is attributed so broadly: to weather, to markets, to algorithms, to gods.
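The asymmetric-cost logic behind HADD is the standard expected-cost threshold from decision theory, not anything specific to Barrett's proposal. Declaring an agent is optimal whenever p · cost_miss exceeds (1 − p) · cost_false_alarm, which yields the threshold below; the 100:1 cost ratio is an illustrative assumption.

```python
def detection_threshold(cost_miss: float, cost_false_alarm: float) -> float:
    """Minimum probability of 'agent present' at which declaring an agent
    minimizes expected cost. Derived from: declare iff
    p * cost_miss > (1 - p) * cost_false_alarm,
    which rearranges to p > cost_false_alarm / (cost_false_alarm + cost_miss)."""
    return cost_false_alarm / (cost_false_alarm + cost_miss)


# If missing a predator is 100x worse than startling at a rustling bush,
# it pays to cry 'agent' at under 1% probability.
threshold = detection_threshold(cost_miss=100.0, cost_false_alarm=1.0)  # 1/101
```

A detector calibrated this way is not malfunctioning when it sees agents in weather, markets, and algorithms; it is doing exactly what the cost structure of its evolutionary environment rewards.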
The systems-theoretic synthesis: agency is not merely perceived; it is actively constructed by cognitive systems whose evolutionary function is to track and predict animate behavior. The construction is not arbitrary — it is constrained by the statistical structure of the environment — but it is not transparent either. The mind sees agents because it is built to see agents, and what it sees reflects the architecture of the seer as much as the structure of the seen.
Agency in Artificial Systems
The question of whether artificial systems can possess agency is no longer purely philosophical. Autonomous vehicles, trading algorithms, robotic caregivers, and large language models all make decisions that affect human welfare, and they do so in ways that are not fully predictable by their designers. The question is not whether these systems have 'real' agency in some metaphysical sense but whether they have functionally sufficient agency: the capacity to initiate actions, to learn from feedback, and to adapt their behavior in ways that require us to treat them as responsible actors.
The regulatory and ethical implications are immediate. If an autonomous vehicle kills a pedestrian, who is responsible? The designer, the operator, the vehicle, or the algorithm? The legal system is currently struggling with this question because it presupposes that agency is binary: either the system is a tool (no agency) or it is a person (full agency). The systems-theoretic view suggests a continuum: degrees of functional agency that correspond to degrees of autonomy, adaptability, and opacity.
The most interesting current research is in multi-agent systems and swarm robotics, where agency is distributed across many simple units, none of which is individually capable of complex action but which collectively produce behavior that appears purposive. The ant colony, the immune system, the market, and the neural network are all examples of distributed agency: causal influence that is not localized in any single component but emerges from the interaction structure. The question is not 'which unit is the agent?' but 'what level of organization is the appropriate unit of agency attribution?'
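Distributed agency of this kind can be sketched in a few lines, in the style of Deneubourg's double-bridge experiments with ants. The model below is a toy, and its parameters (branch count, deposit size, the nonlinearity `alpha`) are illustrative assumptions; what it preserves is the key structural fact that no unit represents the collective choice.

```python
import random


def double_bridge(n_ants: int = 1000, deposit: float = 1.0,
                  alpha: float = 2.0, seed: int = 0) -> list:
    """Toy pheromone-choice model in the style of the double-bridge
    experiments. Each ant follows one local rule: pick a branch with
    probability proportional to (pheromone ** alpha), then deposit on it.
    With alpha > 1, small early fluctuations are amplified and the colony
    locks onto one branch, even though no ant 'decides' anything."""
    rng = random.Random(seed)
    pheromone = [1.0, 1.0]  # initial pheromone on branch A and branch B
    for _ in range(n_ants):
        wa, wb = pheromone[0] ** alpha, pheromone[1] ** alpha
        branch = 0 if rng.random() < wa / (wa + wb) else 1
        pheromone[branch] += deposit
    return pheromone
```

Running the model, one branch ends up carrying the overwhelming majority of the pheromone. Asking 'which ant chose the branch?' has no answer; the choice lives in the interaction structure, which is precisely the point about the level of agency attribution.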
Agency as a Scalar Property
The systems-theoretic conclusion is that agency is scalar, not binary. A thermostat has minimal agency: it initiates heating when the temperature drops, but its response is fully determined by its design and the environmental input. A bacterium has more: it chemotaxes, it learns, it adapts its metabolism. A human has more still: the capacity to reflect on its own intentions, to revise its goals, to act against immediate impulse. An organization — a corporation, a state — has agency that is distributed across roles, procedures, and institutional memory, and that cannot be reduced to the agency of any individual member.
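The difference between the low end and the next rung of the scale can be made concrete in code. The sketch below is an illustration under assumed names (`Thermostat`, `AdaptiveThermostat`, the `rate` parameter): the first system's behavior is a fixed function of design and input, while the second modifies the very parameter that governs its future behavior.

```python
class Thermostat:
    """Low end of the scale: the response is fully determined by the
    design (the setpoint) and the environmental input (the temperature)."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, temperature: float) -> str:
        # Fixed rule: no learning, no revision of the goal.
        return "heat" if temperature < self.setpoint else "idle"


class AdaptiveThermostat(Thermostat):
    """One notch up: the system revises its own criterion from feedback,
    so its behavior is no longer fixed by its initial design alone."""

    def adapt(self, comfort_error: float, rate: float = 0.1) -> None:
        # Modifies the parameter that governs its own future responses.
        self.setpoint += rate * comfort_error
```

The second class is still far from reflexive agency in the full sense, but it marks the structural transition the paragraph describes: from a system whose constraints are given to a system that participates in setting them.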
The scale is not a ranking of value but a classification of causal architecture. At the low end, agency is mechanical: the system's behavior is fully determined by its design and its inputs. At the high end, agency is reflexive: the system can modify its own design criteria, can question its own goals, can act in ways that are not predictable even in principle from its current state. The middle — where most biological and artificial systems live — is the most interesting: systems that are not fully determined but not fully self-determining, that exhibit what the philosopher Peter Godfrey-Smith calls minimal agency: the capacity to maintain and reproduce oneself in the face of environmental variation.
The final systems insight: agency is what remains when you subtract the determined from the causal. It is the residue of causality that cannot be explained by prior conditions alone. Whether this residue is an illusion, an emergent property, or a fundamental feature of the universe depends on your metaphysical commitments. But the phenomenon — the fact that some systems behave in ways that require us to attribute goals, reasons, and choices — is real, and it is the subject matter of any theory of agency worth the name.