Network Plasticity as Bayesian Inference

PLoS Comput Biol. 2015 Nov 6;11(11):e1004485. doi: 10.1371/journal.pcbi.1004485. eCollection 2015 Nov.

Abstract

General results from statistical learning theory suggest that not only brain computations, but also brain plasticity, should be understood as probabilistic inference. But a concrete model for this has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models in which network parameters converge to maximum-likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains, from a functional perspective, a number of experimental data on stochastic aspects of synaptic plasticity that had previously appeared quite puzzling.
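The core idea of the abstract, that intrinsic parameter noise lets a network sample from a posterior over configurations rather than converge to a point estimate, can be illustrated with Langevin dynamics. The sketch below is not the paper's model; it is a minimal, assumed toy example in which a single parameter theta has a Gaussian prior and Gaussian observations, so the exact posterior is known and can be compared with the samples produced by noisy gradient updates (all numbers are hypothetical).

```python
import math
import random

# Toy illustration (assumed, not the paper's model): unadjusted Langevin
# dynamics sampling a scalar parameter theta from the posterior defined by
# a Gaussian prior N(0, sigma_p^2) and observations x_i ~ N(theta, sigma_l^2).
# Noise in the update plays the role the abstract assigns to stochastic
# synaptic plasticity: it turns optimization into posterior sampling.

random.seed(0)
sigma_p, sigma_l = 1.0, 0.5
data = [0.9, 1.1, 1.0, 0.8]  # hypothetical observations

def grad_log_posterior(theta):
    # d/dtheta [log prior + log likelihood]
    g = -theta / sigma_p**2
    g += sum((x - theta) / sigma_l**2 for x in data)
    return g

eta = 1e-3  # step size; temperature fixed at 1
theta, samples = 0.0, []
for step in range(200_000):
    theta += 0.5 * eta * grad_log_posterior(theta) \
             + math.sqrt(eta) * random.gauss(0.0, 1.0)
    if step > 20_000:  # discard burn-in
        samples.append(theta)

emp_mean = sum(samples) / len(samples)

# Analytic posterior mean for this conjugate Gaussian model, for comparison:
post_prec = 1 / sigma_p**2 + len(data) / sigma_l**2
post_mean = (sum(data) / sigma_l**2) / post_prec
print(emp_mean, post_mean)
```

Without the noise term the same update is plain gradient ascent and collapses to the maximum a posteriori point; with it, the long-run histogram of theta approximates the full posterior, which is the distinction the abstract draws between maximum-likelihood convergence and posterior sampling.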

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Action Potentials / physiology
  • Bayes Theorem
  • Computational Biology
  • Computer Simulation
  • Models, Neurological*
  • Nerve Net / physiology*
  • Neuronal Plasticity / physiology*
  • Neurons / physiology*

Grants and funding

Written under partial support of the European Union project #604102 The Human Brain Project (HBP) and CHIST-ERA ERA-Net (Project FWF #I753-N23, PNEUMA). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.