PT  - JOURNAL ARTICLE
AU  - Soulos, Paul
AU  - Isik, Leyla
TI  - Disentangled deep generative models reveal coding principles of the human face processing network
AID - 10.1101/2023.02.15.528489
DP  - 2023 Jan 01
TA  - bioRxiv
PG  - 2023.02.15.528489
4099 - http://biorxiv.org/content/early/2023/02/15/2023.02.15.528489.short
4100 - http://biorxiv.org/content/early/2023/02/15/2023.02.15.528489.full
AB  - Despite decades of research, much is still unknown about the computations carried out in the human face processing network. Recently, deep networks have been proposed as a computational account of human visual processing, but while they provide a good match to neural data throughout visual cortex, they lack interpretability. We introduce a method for interpreting brain activity using a new class of deep generative models, disentangled representation learning models, which learn a low-dimensional latent space that “disentangles” different semantically meaningful dimensions of faces, such as rotation, lighting, or hairstyle, in an unsupervised manner by enforcing statistical independence between dimensions. We find that the majority of our model’s learned latent dimensions are interpretable by human raters. Further, these latent dimensions serve as a good encoding model for human fMRI data. We next investigate the representation of different latent dimensions across face-selective voxels. We find a gradient from low- to high-level face feature representations along posterior to anterior face-selective regions, corroborating prior models of human face recognition. Interestingly, though, we find no spatial segregation between identity-relevant and identity-irrelevant face features. Finally, we provide new insight into the few “entangled” (uninterpretable) dimensions in our model by showing that they match responses across the ventral stream and carry significant information about facial identity. Disentangled face encoding models provide an exciting alternative to standard “black box” deep learning approaches for modeling and interpreting human brain data.

Competing Interest Statement: The authors have declared no competing interest.