Abstract
Visual input to the brain during natural behavior is highly dependent on movements of the eyes, head, and body. Neurons in mouse primary visual cortex (V1) respond to eye and head movements, but how information about eye and head position is integrated with visual processing during free movement is unknown, since visual physiology is generally performed under head fixation. To address this, we performed single-unit electrophysiology in V1 of freely moving mice while simultaneously measuring the mouse’s eye position, head orientation, and the visual scene from the mouse’s perspective. Based on these measurements, we mapped spatiotemporal receptive fields during free movement using a generalized linear model (GLM) that predicted the activity of V1 neurons from the gaze-corrected visual input. Furthermore, we found that a significant fraction of visually responsive neurons showed tuning for eye position and head orientation. Incorporating these variables into the GLM revealed that visual and positional signals are integrated through a multiplicative mechanism in the majority of modulated neurons, consistent with computation via gain fields and nonlinear mixed selectivity. These results provide new insight into coding in mouse V1 and, more generally, a paradigm for performing visual physiology under natural conditions, including active sensing and ethological behavior.
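To make the distinction between multiplicative and additive integration concrete, here is a minimal toy simulation of a Poisson GLM in which positional signals act as a gain on the visual drive. All dimensions, filter values, and variable names below are hypothetical illustrations, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): time bins and pixels of gaze-corrected input.
n_time, n_pix = 2000, 100
X = rng.normal(size=(n_time, n_pix))       # gaze-corrected visual stimulus
pos = rng.normal(size=(n_time, 3))         # e.g. eye position / head orientation

# Hypothetical ground-truth filters for the simulation.
w_vis = rng.normal(scale=0.1, size=n_pix)  # visual receptive field (flattened)
w_pos = np.array([0.5, -0.3, 0.2])         # positional tuning weights

# Multiplicative integration: position scales the visual drive (a gain field).
# With an exponential link, this is additive in log-rate, so a standard
# Poisson GLM with both sets of regressors can capture it.
rate_mult = np.exp(X @ w_vis) * np.exp(pos @ w_pos)

# Additive alternative for contrast: the two drives simply sum.
rate_add = np.exp(X @ w_vis) + np.exp(pos @ w_pos)

# Simulated spike counts from the multiplicative model.
spikes = rng.poisson(rate_mult)
```

One way to distinguish the two schemes in practice is to fit both rate models to recorded spike counts and compare their held-out likelihoods; in the multiplicative case, positional modulation rescales visual responses rather than shifting them by a fixed amount.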
HIGHLIGHTS
Neurons in mouse V1 respond to both vision and self-motion, but it is unclear how these are combined.
We record neural activity in V1 concurrent with measurement of the visual input from the mouse’s perspective during free movement.
These data provide the first measurement of visual receptive fields in freely moving animals.
We show that many V1 neurons are tuned to eye position and head orientation, and that these signals act as a multiplicative gain on visual responses in the majority of modulated neurons.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
We have added new experiments (recordings in darkness), new analyses (incorporating additional parameters into the model, exploring correlations with cell type and layer, and further validating our methods), and text revisions.