Abstract
Top-down generation of neural representations is essential for both predictive-coding perception and imagination. In particular, predictive-coding perception and imagination require top-down generated neural representations in primary visual cortex (V1) to reconstruct or simulate the V1 representations evoked by seen images. However, top-down generated representations have been found to have low coding precision in both perception and imagination tasks. Why and how does the brain use top-down generated low-precision representations to reconstruct or simulate bottom-up stimulated high-precision representations? How can information at fine spatial scales be perceived or imagined from top-down generated low-precision representations? By modeling the visual system with a variational auto-encoder, we reveal that training to generate low-precision representations drives higher-order cortex to form representations that change smoothly under shape morphing of stimuli, thereby improving perceptual accuracy and robustness as well as imaginative creativity. Fine-scale information can be faithfully inferred from low-precision representations provided that the fine-scale information is sparse. Our results provide fresh insights into visual perception and imagination as well as the sparseness of V1 activity, and suggest that generating low-precision representations containing sparse fine-scale information is a strategy the brain uses to improve the perception and imagination of fine-scale information.
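To make the modeling setup concrete, the sketch below shows a minimal variational auto-encoder in PyTorch, where the encoder stands in for the bottom-up pathway (V1 to higher-order cortex) and the decoder for the top-down generative pathway (higher-order cortex back to V1). The network sizes, the dataset, and the fixed decoder variance sigma2 used here as a stand-in for the "coding precision" of top-down generated representations are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: encoder ~ bottom-up pathway, decoder ~ top-down pathway."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)       # higher-order representation (mean)
        self.enc_logvar = nn.Linear(h_dim, z_dim)   # higher-order representation (log-variance)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample the latent (higher-order) representation via the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        # Top-down generation of a V1-like representation from the latent code.
        return self.dec(z)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(x_recon, x, mu, logvar, sigma2=0.5):
    """Gaussian reconstruction term with fixed variance sigma2, plus KL term.

    A larger sigma2 only requires the top-down reconstruction to match the
    stimulus-driven representation coarsely, i.e. a lower coding precision
    (an assumption of this sketch, not the paper's exact formulation)."""
    recon = F.mse_loss(x_recon, x, reduction="sum") / (2.0 * sigma2)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Under this reading, training with a larger sigma2 corresponds to asking the generative pathway for only low-precision reconstructions, and one can then examine how smoothly the latent code mu changes as the input stimulus is morphed.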
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
A minor revision