PT - JOURNAL ARTICLE
AU - Maxwell A. Bertolero
AU - Danielle S. Bassett
TI - Deep Neural Networks Carve the Brain at its Joints
AID - 10.1101/2020.02.20.958082
DP - 2020 Jan 01
TA - bioRxiv
PG - 2020.02.20.958082
4099 - http://biorxiv.org/content/early/2020/02/21/2020.02.20.958082.short
4100 - http://biorxiv.org/content/early/2020/02/21/2020.02.20.958082.full
AB - How an individual’s unique brain connectivity determines that individual’s cognition, behavior, and risk for pathology is a fundamental question in basic and clinical neuroscience. In seeking answers, many have turned to machine learning, with some noting the particular promise of deep neural networks in modelling complex non-linear functions. However, it is not clear that complex functions actually exist between brain connectivity and behavior, and thus whether deep neural networks necessarily outperform simpler linear models, or whether their results would be interpretable. Here we show that, across 52 subject measures of cognition and behavior, deep neural networks fit to each brain region’s connectivity outperform linear regression, particularly for the brain’s connector hubs (regions with diverse brain connectivity), whereas the two approaches perform similarly when fit to brain systems. Critically, averaging deep neural network predictions across brain regions results in the most accurate predictions, demonstrating the ability of deep neural networks to easily model the various functions that exist between regional brain connectivity and behavior, carving the brain at its joints. Finally, we shine light into the black box of deep neural networks using multislice network models. We determined that the relationship between connector hubs and behavior is best captured by modular deep neural networks. Our results demonstrate that both simple and complex relationships exist between brain connectivity and behavior, and that deep neural networks can fit both. Moreover, deep neural networks are particularly powerful when they are first fit to the various functions of a system independently and then combined. Finally, deep neural networks are interpretable when their architectures are structurally characterized using multislice network models.