Abstract
How the brain makes correct inferences about its environment based on noisy and ambiguous observations is one of the fundamental questions in neuroscience. Prior knowledge about the probability with which certain events occur in the environment plays an important role in this process. Humans are able to incorporate such prior knowledge in an efficient, Bayes-optimal way in many situations, but it remains an open question how the brain acquires and represents this prior knowledge. The long time spans over which prior knowledge is acquired make this a challenging question to investigate experimentally. To guide future experiments with clear empirical predictions, we trained a neural network model on two tasks commonly used in the experimental literature (orientation classification and orientation estimation) in which the prior probability of observing a certain stimulus is manipulated. We show that a model population of neurons learns to correctly represent and incorporate prior knowledge by receiving only trial-to-trial feedback about the accuracy of its inference, without any probabilistic feedback. We identify different factors that can influence the neural responses to expected or unexpected stimuli, and find a novel mechanism that changes the activation threshold of neurons depending on the prior probability of the encoded stimulus. In a task where estimating the exact stimulus value is important, more likely stimuli also led to denser tuning curve distributions and narrower tuning curves, allocating computational resources such that information processing is enhanced for more likely stimuli. These results can explain several experimental findings, clarify why contradictory observations concerning the neural responses to expected versus unexpected stimuli have been reported, and pose clear, testable predictions about the neural representation of prior knowledge that can guide future experiments.
Author summary
The probability with which certain events occur in our environment plays an important role in how we perceive the world. In many situations, humans use such knowledge about the environment efficiently to draw optimal inferences from the observations they make. However, it remains unclear how the brain learns to incorporate such prior knowledge and what exactly its neural representations are. By simulating two tasks commonly used in experiments, in which we manipulate the probability of certain stimuli occurring, we show that such prior knowledge can be acquired from feedback about the accuracy of individual inferences, without any explicit information about the probability of an event occurring. We identify different properties of the neural populations that can influence the neural responses measured in experiments, and show how prior knowledge can shape these responses through different mechanisms. Interestingly, the networks learn to allocate neural resources efficiently to more likely stimuli in order to maximize task performance. These results yield several interesting and testable predictions about the neural representation of prior knowledge in commonly used experimental paradigms that can guide future experiments.
Footnotes
* s.quax{at}donders.ru.nl