Abstract
Visual object recognition is a highly dynamic process by which we extract meaningful information about the things we see. However, the functional relevance and informational properties of feedforward and feedback signals remain largely unspecified. Additionally, it remains unclear whether computational models of vision alone can accurately capture object-specific representations and the evolving spatiotemporal neural dynamics. Here, we probe these dynamics using a combination of representational similarity and connectivity analyses of fMRI and MEG data recorded during the recognition of familiar, unambiguous objects from a wide range of categories. Modelling the visual and semantic properties of our stimuli using an artificial neural network as well as a semantic feature model, we find that unique aspects of the neural architecture and connectivity dynamics relate to visual and semantic object properties. Critically, we show that recurrent processing between anterior and posterior ventral temporal cortex relates to higher-level visual properties prior to semantic object properties, in addition to semantic-related feedback from the frontal lobe to the ventral temporal lobe between 250 and 500 ms after stimulus onset. These results demonstrate the distinct contributions semantic object properties make in explaining neural activity and connectivity, highlighting that semantic processing is a core part of object recognition.
Competing Interest Statement
The authors have declared no competing interest.