Abstract
Neurons in the human amygdala and hippocampus that are selective for the identities of specific people are classically thought to encode identity in a manner invariant to visual features (e.g., skin tone, eye shape). However, it remains largely unknown how visual information from higher visual cortical areas is translated into such a semantic representation of an individual person. Here, we show that some amygdala and hippocampal neurons are selective for multiple unrelated face identities that share visual features. The encoded identities form clusters in the representation of a deep neural network trained to recognize faces. Contrary to prevailing views, these neurons thus represent an individual’s face with a visual feature-based code rather than a code based on association with known concepts. These feature neurons encoded faces regardless of identity, race, gender, familiarity, or pixel-level visual features, and the region of feature space to which a neuron was tuned predicted its response to new face stimuli. Our results reveal a new class of neurons that bridge the perception-driven representation of facial features in the higher visual cortex with mnemonic semantic representations in the medial temporal lobe, which may form the basis for declarative memory.
Competing Interest Statement
The authors have declared no competing interest.