TY - JOUR
T1 - Visual Perception of 3D Space and Shape in Time - Part II: 3D Space Perception with Holographic Depth
JF - bioRxiv
DO - 10.1101/2022.02.28.482181
SP - 2022.02.28.482181
AU - Isabella Bustanoby
AU - Andrew Krupien
AU - Umaima Afifa
AU - Benjamin Asdell
AU - Michaela Bacani
AU - James Boudreau
AU - Javier Carmona
AU - Pranav Chandrashekar
AU - Mark Diamond
AU - Diego Espino
AU - Arnav Gangal
AU - Chandan Kittur
AU - Yaochi Li
AU - Tanvir Mann
AU - Christian Matamoros
AU - Trevor McCarthy
AU - Elizabeth Mills
AU - Stephen Nazareth
AU - Justin Nguyen
AU - Kenya Ochoa
AU - Sophie Robbins
AU - Despoina Sparakis
AU - Brian Ta
AU - Kian Trengove
AU - Tyler Xu
AU - Natsuko Yamaguchi
AU - Christine Yang
AU - Eden Zafran
AU - Aaron P. Blaisdell
AU - Katsushi Arisaka
Y1 - 2022/01/01
UR - http://biorxiv.org/content/early/2022/03/02/2022.02.28.482181.abstract
N2 - Visual perception plays a critical role in navigating 3D space and extracting semantic information crucial to survival. Even though visual stimulation on the retina is fundamentally 2D, we seem to perceive the world around us in vivid 3D effortlessly. This reconstructed 3D space is allocentric and faithfully represents the external 3D world. How can we recreate stable 3D visual space so promptly and reliably? To solve this mystery, we have developed new concepts MePMoS (Memory-Prediction-Motion-Sensing) and NHT (Neural Holography Tomography). These models state that visual signal processing must be primarily top-down, starting from memory and prediction. Our brains predict and construct the expected 3D space holographically using traveling alpha brainwaves. Thus, 3D space is represented by the three time signals in three directions. To test this hypothesis, we designed reaction time (RT) experiments to observe predicted space-to-time conversion, especially as a function of distance. We placed LED strips on a horizontal plane to cover distances from close up to 2.5 m or 5 m, either using a 1D or a 2D lattice. Participants were instructed to promptly report observed LED patterns at various distances. As expected, stimulation at the fixation cue location always gave the fastest RT. Additional RT delays were proportional to the distance from the cue. Furthermore, both covert attention (without eye movements) and overt attention (with eye movements) created the same RT delays, and both binocular and monocular views resulted in the same RTs. These findings strongly support our predictions, in which the observed RT-depth dependence is indicative of the spatiotemporal conversion required for constructing allocentric 3D space. After all, we perceive and measure 3D space by time as Einstein postulated a century ago. Competing Interest Statement: The authors have declared no competing interest.
ER -