RT Journal Article
SR Electronic
T1 Neural state space alignment for magnitude generalisation in humans and recurrent networks
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.07.22.215541
DO 10.1101/2020.07.22.215541
A1 Hannah Sheahan
A1 Fabrice Luyckx
A1 Stephanie Nelli
A1 Clemens Teupe
A1 Christopher Summerfield
YR 2020
UL http://biorxiv.org/content/early/2020/07/23/2020.07.22.215541.abstract
AB A prerequisite for intelligent behaviour is to understand how stimuli are related and to generalise this knowledge across contexts. Generalisation can be challenging when relational patterns are shared across contexts but exist on different physical scales. Here, we studied neural representations in humans and recurrent neural networks performing a magnitude comparison task, for which it was advantageous to generalise concepts of “more” or “less” between contexts. Using multivariate analysis of human brain signals and of neural network hidden unit activity, we observed that both systems developed parallel neural “number lines” for each context. In both model systems, these number state spaces were aligned in a way that explicitly facilitated generalisation of relational concepts (more and less). These findings suggest a previously overlooked role for neural normalisation in supporting transfer of a simple form of abstract relational knowledge (magnitude) in humans and machine learning systems.
Competing Interest Statement: The authors have declared no competing interest.