RT Journal Article
SR Electronic
T1 Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2022.09.06.506680
DO 10.1101/2022.09.06.506680
A1 Greta Tuckute
A1 Jenelle Feather
A1 Dana Boebinger
A1 Josh H. McDermott
YR 2022
UL http://biorxiv.org/content/early/2022/09/08/2022.09.06.506680.abstract
AB Deep neural networks are commonly used as models of the visual system, but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models. We evaluated brain-model correspondence for publicly available audio neural network models along with in-house models trained on four different tasks. Most tested models out-predicted previous filter-bank models of auditory cortex, and exhibited systematic model-brain correspondence: middle stages best predicted primary auditory cortex while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results suggest the importance of task optimization in constraining brain representations.
AB Competing Interest Statement: The authors have declared no competing interest.