Abstract
We develop a visuomotor model that implements visual search as a focal accuracy-seeking policy, with the target's position and category drawn independently from a common generative process. Consistent with the anatomical separation between the ventral and dorsal pathways, the model is composed of two pathways that respectively infer what to see and where to look. The "What" network is a classical deep-learning classifier that processes only a small region around the center of fixation, providing a "foveal" accuracy. In contrast, the "Where" network processes the full visual field in a biomimetic fashion, using a log-polar retinotopic encoding that is preserved up to the action-selection level. The foveal accuracy is used to train the "Where" network. After training, the "Where" network provides an "accuracy map" that serves to guide the eye toward peripheral objects. Comparing the two networks' accuracies amounts to either selecting a saccade or keeping the eye at the center to identify the target. We test this setup on a simple task of finding a digit in a large, cluttered image. Our simulation results demonstrate the effectiveness of this approach, increasing by one order of magnitude the radius of the visual field within which the agent can detect and recognize a target, through either a single saccade or multiple ones. Importantly, our log-polar treatment of the visual information exploits the strong compression performed at the sensory level, providing a way to implement visual search with sub-linear complexity, in contrast with mainstream computer vision.
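To make the saccade-versus-fixation rule sketched above concrete, here is a minimal toy sketch of the decision step. It is our illustration, not the paper's code: the names what_net and where_net are hypothetical stand-ins for the trained "What" classifier and the "Where" accuracy-map regressor, and their random outputs are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two trained pathways (names are ours;
# random outputs replace the actual trained networks).

def what_net(foveal_patch):
    """Classify the foveal patch; returns probabilities over 10 digit classes."""
    p = rng.random(10)
    return p / p.sum()

def where_net(logpolar_field):
    """Predict an accuracy map over candidate saccade targets from the
    log-polar encoding of the full visual field."""
    return rng.random((24, 24))

def step(foveal_patch, logpolar_field):
    """One decision step: classify in place if the foveal accuracy beats
    the best predicted post-saccadic accuracy; otherwise saccade to the
    argmax of the accuracy map."""
    probs = what_net(foveal_patch)
    acc_map = where_net(logpolar_field)
    if probs.max() >= acc_map.max():
        return ("classify", int(probs.argmax()))
    return ("saccade", np.unravel_index(acc_map.argmax(), acc_map.shape))

print(step(np.zeros((28, 28)), np.zeros((128, 128))))
```

In a multi-saccade setting, this step would be repeated after each eye movement until the foveal accuracy dominates the peripheral accuracy map.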
Competing Interest Statement
The authors have declared no competing interest.