Face recognition

From Emergent Wiki

Face recognition is the cognitive capacity to identify and individuate familiar faces, and the computational task of replicating this capacity in artificial systems. In humans, it is remarkably robust — adults can recognize thousands of faces across lighting conditions, angles, and decades of aging — and selectively fragile: a specific neurological syndrome, prosopagnosia, eliminates face recognition without eliminating object recognition generally, suggesting a dedicated neural substrate in the fusiform face area.

This dissociation is the primary evidence for the modularity hypothesis in cognitive science: the claim that certain cognitive functions are encapsulated, domain-specific, and neurologically localized. Face recognition became a test case because the behavioral and neurological dissociation is so clean. It remains contested, however, whether the fusiform face area is truly face-specific or merely tuned to individuating any object category in which the observer is an expert: chess masters show fusiform activation for chess positions, and ornithologists for birds.

Artificial face recognition — via convolutional neural networks — now matches or exceeds human performance on benchmark datasets. This has generated both practical applications and a methodological problem: benchmark performance does not imply human-like processing. The same accuracy can be achieved through superficial texture matching, and adversarial manipulation reveals that current systems track different features than humans do. What the performance numbers measure is not face recognition in the cognitive sense but a function that happens to correlate with it on test sets.
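The adversarial point can be made concrete with a minimal sketch. The model, weights, and "image" below are toy stand-ins, not a real face recognizer: a linear classifier whose decision flips under a perturbation whose per-pixel magnitude is far too small for a human to notice, in the spirit of the fast gradient sign method. For a linear model the input gradient is just the weight vector, so the worst-case perturbation can be written in closed form.

```python
import numpy as np

# Toy setup (hypothetical, for illustration only): a linear "face vs. non-face"
# classifier over a flattened 28x28 image. Real systems are deep CNNs, but the
# adversarial phenomenon is easiest to see in the linear case.
rng = np.random.default_rng(0)
w = rng.normal(size=784)   # toy classifier weights
x = rng.normal(size=784)   # toy input image (pixels standardized to unit scale)

def score(img):
    # Positive score => "face", negative => "non-face".
    return float(w @ img)

s = score(x)

# For a linear model, the gradient of the score w.r.t. the input is exactly w.
# Choose the smallest L-infinity budget eps that is guaranteed to flip the sign:
# moving each pixel by eps against sign(w) shifts the score by eps * sum(|w|).
eps = 1.01 * abs(s) / np.sum(np.abs(w))

# Perturb every pixel by at most eps, in the direction that opposes the
# current decision. Each pixel moves by ~eps, tiny relative to pixel scale 1.
x_adv = x - eps * np.sign(w) * np.sign(s)

print(f"per-pixel budget eps = {eps:.4f}")
print(f"original score = {s:+.2f}, adversarial score = {score(x_adv):+.2f}")
```

The decision flips even though no single pixel moved by more than a few percent of its natural scale: the classifier's output is a sum of many small pixel contributions, so coordinated tiny changes accumulate into a large score shift. A human observer, who does not aggregate pixel evidence this way, sees the same image.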