Abstract
Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts, which are informed both by sensory evidence and by prior experience about the causal structure of the environment. It is hypothesized that the brain establishes these structures using an internal model of how the observed patterns can be generated from relevant but unobserved causes. In dynamic environments, such integration often takes the form of postdiction, wherein later sensory evidence affects inferences about earlier percepts. As the brain must operate in real time, without the luxury of acausal propagation of information, how does such postdictive inference come about? Here, we propose a general framework for neural probabilistic inference in dynamic models based on the distributed distributional code (DDC) representation of uncertainty, naturally extending the underlying encoding to incorporate implicit probabilistic beliefs about both present and past. We show that, as in other uses of the DDC, an inferential model can be learned efficiently using samples from an internal model of the world. Applied to stimuli used in psychophysics experiments, the framework provides an online and plausible mechanism for inference, including postdictive effects.
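The core idea of learning an inferential model from samples of an internal model can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes a toy linear-Gaussian world model, Gaussian-bump encoding functions, and a linear recognition map; all names (`sample_model`, `psi`, `gamma`) are hypothetical. A DDC represents a belief over a latent z as the expectations of fixed features psi(z); fitting a readout of observation features to psi(z) by mean squared error on model samples makes it approximate the conditional expectation E[psi(z) | x], i.e. the DDC of the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical internal (generative) model, assumed for illustration:
# latent z ~ N(0, 1), observation x = z + 0.5 * noise.
def sample_model(n):
    z = rng.normal(size=n)
    x = z + 0.5 * rng.normal(size=n)
    return z, x

# Fixed encoding functions psi(z); the DDC for a belief over z is the
# vector of expectations E[psi(z)] under that belief.
z_centers = np.linspace(-4, 4, 25)
def psi(z):
    return np.exp(-0.5 * (z[:, None] - z_centers[None, :]) ** 2)

# Features of the observation used by the recognition model.
x_centers = np.linspace(-5, 5, 30)
def gamma(x):
    feats = np.exp(-0.5 * (x[:, None] - x_centers[None, :]) ** 2)
    return np.hstack([feats, np.ones((len(x), 1))])  # bias column

# Learn recognition weights W on samples from the internal model:
# minimizing the mean squared error ||gamma(x) W - psi(z)||^2 drives
# gamma(x) W toward the conditional expectation E[psi(z) | x],
# i.e. the DDC representation of the posterior over z given x.
z, x = sample_model(50_000)
W, *_ = np.linalg.lstsq(gamma(x), psi(z), rcond=None)

# Sanity check: in this linear-Gaussian model, E[z | x] = 0.8 * x.
ddc = gamma(np.array([1.0])) @ W                  # DDC posterior for x = 1.0
post_mean = (ddc * z_centers).sum() / ddc.sum()   # decode posterior mean
print(post_mean)                                  # close to 0.8
```

The same recipe extends to the dynamic setting by augmenting the encoded latent state with past variables, so that the learned readout carries implicit beliefs about both present and past.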
Footnotes
We hope the new version of the paper provides the reader with more intuitions as well as mathematical details. The following improvements were made:
- added annotations to the auditory continuity illusion figure, indicating regions of interest for each of the six trial types;
- added an algorithm box summarizing the algorithm, with a detailed explanation and references to equations;
- elaborated on the inference scheme for the static setting, explaining the key ideas of learning to infer by relating it to amortized inference, and of KL minimization by mean squared error minimization; the connections are shown in detail in the Appendix;
- added a comprehensive discussion of papers related to the DDC and of alternative schemes for the neural representation of uncertainty, including PPC and sampling, and of why the DDC may be more suitable than the alternatives for postdictive recognition;
- added a discussion of the effect of neuronal noise, with additional experiments using noisy DDCs in the Appendix;
- clarified the descriptions in the experiments section, including figure captions;
- corrected typos.