Recurrent neural networks as versatile tools of neuroscience research

Curr Opin Neurobiol. 2017 Oct;46:1-6. doi: 10.1016/j.conb.2017.06.003. Epub 2017 Jun 29.

Abstract

Recurrent neural networks (RNNs) are a class of computational models that are often used to explain neurobiological phenomena while taking anatomical, electrophysiological and computational constraints into account. RNNs can either be designed to implement a specific dynamical principle, or they can be trained from input-output examples. Recently, there has been considerable progress in using trained RNNs both for computational tasks and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis-generation tool. Despite this recent progress and its potential benefits, many fundamental gaps remain on the path towards a theory of these networks. I will discuss these challenges and possible approaches to addressing them.
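As a hypothetical illustration of the "trained by input-output examples" approach mentioned in the abstract (not a model taken from the paper itself), the sketch below trains a small discrete-time RNN with backpropagation through time on a simple evidence-integration task and then leaves the hidden trajectories available for reverse engineering. The task, network size, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: training an RNN from input-output examples
# (evidence integration: report the sign of a weak drift buried in noise).
# All task and architecture choices here are assumptions for illustration,
# not the specific models reviewed in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_in, n_hidden, n_out = 1, 64, 1
T, batch = 50, 128  # timesteps per trial, trials per batch

rnn = nn.RNN(n_in, n_hidden, nonlinearity='tanh', batch_first=True)
readout = nn.Linear(n_hidden, n_out)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

def make_batch():
    # Noisy input with a small constant drift; the target is the drift's sign.
    drift = torch.randn(batch, 1, 1) * 0.1
    x = drift + 0.5 * torch.randn(batch, T, 1)
    y = (drift.squeeze() > 0).float()          # shape (batch,)
    return x, y

for step in range(2000):
    x, y = make_batch()
    h, _ = rnn(x)                              # hidden states, shape (batch, T, n_hidden)
    logits = readout(h[:, -1, :]).squeeze(-1)  # decision read out at the final timestep
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the hidden-state trajectories `h` can be inspected
# (e.g., with PCA or fixed-point analysis) to reverse engineer the
# dynamical mechanism the network has learned.
```

In this kind of workflow, the trained weights are not specified by hand; instead, the learned dynamics are analyzed post hoc, which is the reverse-engineering step the review discusses.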

Publication types

  • Review

MeSH terms

  • Animals
  • Humans
  • Models, Neurological*
  • Neural Networks, Computer*
  • Neurosciences / methods
  • Neurosciences / trends