PT - JOURNAL ARTICLE
AU - Le Chang
AU - Bernhard Egger
AU - Thomas Vetter
AU - Doris Y. Tsao
TI - What computational model provides the best explanation of face representations in the primate brain?
AID - 10.1101/2020.06.07.111930
DP - 2020 Jan 01
TA - bioRxiv
PG - 2020.06.07.111930
4099 - http://biorxiv.org/content/early/2020/06/08/2020.06.07.111930.short
4100 - http://biorxiv.org/content/early/2020/06/08/2020.06.07.111930.full
AB - Understanding how the brain represents the identity of complex objects is a central challenge of visual neuroscience. The principles governing object processing have been extensively studied in the macaque face patch system, a sub-network of inferotemporal (IT) cortex specialized for face processing (Tsao et al., 2006). A previous study reported that single face patch neurons encode axes of a generative model called the “active appearance” model (Chang and Tsao, 2017), which transforms 50-d feature vectors separately representing facial shape and facial texture into facial images (Cootes et al., 2001; Edwards et al., 1998). However, it remains unclear whether this model constitutes the best explanation of face cell responses. Here, we recorded responses of cells in the most anterior face patch, AM, to a large set of real face images and compared a large number of models for explaining the neural responses. We found that the active appearance model explained responses better than any other model except CORnet-Z, a feedforward deep neural network trained for general object classification on non-face images, whose performance it tied on some face image sets and exceeded on others. Surprisingly, deep neural networks trained specifically on facial identification did not explain the neural responses well. A major reason is that units in these networks are less modulated than neurons by face-related factors unrelated to facial identification, such as illumination.
Competing Interest Statement: The authors have declared no competing interest.