PT - JOURNAL ARTICLE
AU - Ruiyi Zhang
AU - Xaq Pitkow
AU - Dora E Angelaki
TI - Inductive biases of neural networks for generalization in spatial navigation
AID - 10.1101/2022.12.07.519515
DP - 2022 Jan 01
TA - bioRxiv
PG - 2022.12.07.519515
4099 - http://biorxiv.org/content/early/2022/12/07/2022.12.07.519515.short
4100 - http://biorxiv.org/content/early/2022/12/07/2022.12.07.519515.full
AB - Artificial reinforcement learning agents that perform well in training tasks typically perform worse than animals in novel tasks. We propose one reason: generalization requires modular architectures like the brain's. We trained deep reinforcement learning agents with neural architectures of varying degrees of modularity on a partially observable navigation task. We found that agents with highly modular architectures, which largely separate the computation of an internal state belief from the computations of action and value, generalize better than agents with less modular architectures. Furthermore, the modular agent's internal belief is formed by combining prediction and observation, weighted by their relative uncertainties, suggesting that the networks learn a Kalman filter-like belief update rule. Consequently, smaller uncertainty in observation than in prediction leads to better generalization to tasks with novel observable dynamics. These results exemplify the rationale behind the brain's inductive biases and show how insights from neuroscience can inspire the development of artificial systems with better generalization.

Competing Interest Statement: The authors have declared no competing interest.
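For readers unfamiliar with the update rule the abstract alludes to, below is a minimal sketch of a standard scalar Kalman filter step, in which a prediction and an observation are combined with weights set by their relative uncertainties. This is the textbook form, not code from the paper; the variable names and the assumed dynamics (a 1-D random walk with noisy position observations) are illustrative assumptions only.

def kalman_update(mu, var, obs, q=0.1, r=0.05):
    """One predict-update step for an assumed 1-D random walk with noisy observations.

    mu, var : current belief mean and variance
    obs     : new noisy observation of the state
    q, r    : assumed process (prediction) and observation noise variances
    """
    # Predict: under random-walk dynamics the predicted mean is unchanged,
    # but prediction uncertainty grows by the process noise.
    mu_pred = mu
    var_pred = var + q

    # Update: the Kalman gain weights observation vs. prediction by their
    # relative uncertainties (gain -> 1 when observations are more reliable).
    k = var_pred / (var_pred + r)
    mu_new = mu_pred + k * (obs - mu_pred)
    var_new = (1.0 - k) * var_pred
    return mu_new, var_new


# Example: with r << q, a reliable observation dominates the updated belief,
# mirroring the abstract's point that low observation uncertainty supports
# generalization to tasks with novel observable dynamics.
mu, var = 0.0, 1.0
mu, var = kalman_update(mu, var, obs=0.8)
print(mu, var)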