Normalization for probabilistic inference with neurons

Biol Cybern. 2011 May;104(4-5):251-62. doi: 10.1007/s00422-011-0433-y. Epub 2011 May 14.

Abstract

Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference (Rao, Neural Computation, 16(1):1-38, 2004; Eliasmith and Anderson, Neural engineering: computation, representation and dynamics in neurobiological systems, 2003; Ma et al., Nature Neuroscience, 9(11):1432-1438, 2006; Sahani and Dayan, Neural Computation, 15(10):2255-2279, 2003). To perform such inference repeatedly, it is essential that the represented distributions remain appropriately normalized. Past approaches have treated normalization mechanisms separately from inference, often leaving them unexplored, or have appealed to a notion of divisive normalization that requires pooling across many neurons. Here, we show how normalization and inference can be combined into a single appropriate connection matrix, eliminating the need for pooling or a division-like operation. We demonstrate algebraically that such a solution exists regardless of the inference being performed, and we show its relevance to neural computation by implementing it in a recurrent spiking neural network.
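
The central claim, that normalization can be folded into the connection matrix itself, can be sketched with a toy linear example. The Python snippet below is an illustrative assumption, not the paper's construction (which is developed algebraically and implemented in a recurrent spiking network): for a hypothetical inference matrix M acting on a probability vector p, a rank-one correction absorbed into the weights keeps the output summing to one, with no pooling or division at run time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A hypothetical linear "inference" step: applying M to a probability
# vector generally destroys its normalization.
M = rng.random((n, n))
p = rng.random(n)
p /= p.sum()                       # start on the probability simplex

q = M @ p
print("sum after one raw step:   ", q.sum())   # drifts away from 1

# Baseline fix: explicit divisive normalization after every step,
# which is the division-like operation the paper seeks to avoid.
q_div = q / q.sum()

# Folded alternative (illustrative only): absorb a rank-one correction
# into the weights so that a single matrix-vector product already sums
# to 1 for ANY input on the simplex.
ones = np.ones(n)
s = ones @ M                       # column sums of M
M_norm = M + np.outer(ones, ones - s) / n

q_fold = M_norm @ p
print("sum after one folded step:", q_fold.sum())  # exactly 1
```

Because the rank-one correction acts as a uniform additive shift rather than a rescaling, this sketch only demonstrates that sum-preservation can be built into a connection matrix; the paper's algebraic solution addresses the general case of combining normalization with an arbitrary inference.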

MeSH terms

  • Action Potentials
  • Models, Theoretical
  • Neurons / physiology*
  • Probability*