Abstract
In the cognitive sciences, an influential idea is that the brain makes predictions about incoming sensory information to reduce its inherent ambiguity. In the visual hierarchy, this implies that information content originating in memory (for example, the identity of a face) propagates down to disambiguate incoming stimulus information. However, understanding this powerful prediction-for-recognition mechanism will remain elusive until we uncover the content of the information that propagates down from memory. Here, we address this foundational limitation with a task that is ubiquitous in humans: familiar face identification. We developed a unique computer graphics platform that combines a generative model of random face identity information with the subjectivity of perception. In 14 individual participants, we reverse-engineered the predicted information contents that propagate down from memory to identify four familiar faces. In a follow-up validation, we used the predicted face information to synthesize the identities of new faces and confirmed the causal role of these predictions in face identification. We show that these predictions comprise both local 3D surface patches, such as a particularly thin and pointy nose combined with a square chin and a prominent brow, and more global surface characteristics, such as a longer or broader face. Further analyses reveal that the predicted contents are efficient because they represent the objective features that maximally distinguish each identity from a model norm. Our results reveal the contents that propagate down the visual hierarchy from memory, showing that this coding scheme is efficient and compatible with norm-based coding, with implications for mechanistic accounts of brain and machine intelligence.