Abstract
Neuronal responses in visual cortex show a diversity of complex temporal properties. These include sub-additive temporal summation, response reduction with repeated or sustained stimuli (adaptation), and slower dynamics at low stimulus contrast. Here, we hypothesize that these seemingly disparate effects can be explained by a single, shared computational mechanism. We propose a model consisting of a linear stage followed by history-dependent gain control. The model accounts for these temporal phenomena when tested against an unusually diverse set of measurements: intracranial electrode recordings in patients, fMRI, and macaque single-unit spiking. The model further enables us to uncover a systematic and rich variety of temporal encoding strategies across visual cortex. First, temporal receptive field shape differs both across and within visual field maps. Second, later visual areas show more rapid and pronounced adaptation. Our study provides a new framework for understanding the transformation between visual input and dynamic cortical responses.
Author Summary
The nervous system extracts meaning from the distribution of light over space and time. Spatial vision has been a highly successful research area, and the spatial receptive field has served as a fundamental and unifying concept spanning perception, computation, and physiology. While there has also been great interest in temporal vision, the temporal domain has lagged behind the spatial domain in terms of quantitative models of how signals are transformed across the visual hierarchy. Here we present a model of the temporal dynamics of neuronal responses in human cerebral cortex. We show that the model accurately predicts responses at the millisecond scale measured with intracortical electrodes in patient volunteers, and that the same model generalizes to multiple other measurement types, including functional MRI and action potentials from monkey cortex. Further, we show that a single model can account for a variety of temporal phenomena, including short-term adaptation and slower dynamics at low stimulus contrast. By developing a computational model and showing that it successfully generalizes across measurement types, cortical areas, and stimuli, we provide new insights into how time-varying images are encoded and transformed into dynamic cortical responses.
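The core computation described above, a linear temporal filter whose output is divided by a gain signal tracking the response's own recent history, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's fitted model: the function name and all parameter values (`tau_irf`, `tau_hist`, `sigma`, `n`) are assumptions chosen only to demonstrate the qualitative behavior (a transient peak followed by adaptation to a sustained stimulus).

```python
import numpy as np

def linear_gain_control_response(stimulus, dt=0.001, tau_irf=0.05,
                                 tau_hist=0.1, sigma=0.1, n=2.0):
    """Sketch of a linear stage followed by history-dependent gain control.

    All parameters are illustrative assumptions, not fitted values.
    """
    t = np.arange(0.0, 1.0, dt)

    # Linear stage: convolve the stimulus with a gamma-shaped impulse response.
    irf = (t / tau_irf) * np.exp(-t / tau_irf)
    irf /= irf.sum()
    linear = np.convolve(stimulus, irf)[:len(stimulus)]

    # Gain-control stage: the denominator tracks a low-pass-filtered copy of
    # the response history, so sustained input suppresses its own response
    # (adaptation), and weak input (low contrast) rises more slowly.
    hist_filter = np.exp(-t / tau_hist)
    hist_filter /= hist_filter.sum()
    history = np.convolve(linear, hist_filter)[:len(stimulus)]

    return linear**n / (sigma**n + history**n)
```

Driving this sketch with a sustained step stimulus produces an initial transient (gain signal still small) that decays toward a lower sustained level as the history term builds up, the adaptation signature described in the summary.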