ABSTRACT
All visual information in mammals is encoded in the aggregate pattern of retinal ganglion cell (RGC) firing. How this information is decoded to yield percepts remains incompletely understood. We trained convolutional neural networks on murine RGC responses to projected images, recorded with a multielectrode array. The trained model accurately reconstructed novel facial images solely from RGC firing data. In this model, subpopulations of cells with faster firing rates are largely sufficient for accurate reconstruction, and ON- and OFF-cells contribute complementary and overlapping information to image reconstruction. Information content for reconstruction correlates with overall firing rate, and the locality of information contributing to reconstruction varies substantially across the image and the retina. This model demonstrates that artificial neural networks can learn multicellular sensory neural encoding, and it provides a viable framework for studying visual information encoding.
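For illustration, the sketch below shows the general shape of such a decoding model, assuming a network that maps a vector of trial-averaged RGC firing rates to a reconstructed grayscale image. The cell count, layer sizes, and image resolution are hypothetical placeholders, not the authors' architecture.

```python
# Minimal, hypothetical sketch of an RGC-firing-to-image decoder.
# All dimensions are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class RGCDecoder(nn.Module):
    def __init__(self, n_cells: int = 512, img_size: int = 64):
        super().__init__()
        self.img_size = img_size
        # Project the firing-rate vector onto a coarse spatial feature map.
        self.fc = nn.Linear(n_cells, 128 * (img_size // 8) ** 2)
        # Upsample the feature map to full image resolution.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, rates: torch.Tensor) -> torch.Tensor:
        # rates: (batch, n_cells) firing-rate vectors
        x = self.fc(rates)
        x = x.view(-1, 128, self.img_size // 8, self.img_size // 8)
        return self.deconv(x)

# Training would minimize pixel-wise reconstruction error against the
# projected images; the tensors here are random placeholders.
model = RGCDecoder()
rates = torch.rand(8, 512)           # placeholder firing-rate batch
images = torch.rand(8, 1, 64, 64)    # placeholder target images
loss = nn.functional.mse_loss(model(rates), images)
```

Cell-type analyses like those described above (e.g., restricting to ON- or OFF-cell subpopulations, or to faster-firing cells) would correspond to masking entries of the input rate vector and comparing reconstruction error.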
Significance Statement
Convolutional neural networks can be trained on high-density neuronal firing data from the optic nerve to reconstruct complicated images within a defined image space.
Competing Interest Statement
The authors have declared no competing interest.