Special issue: Research report

Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations
Introduction
One of the most influential conceptualizations of the visual system proposes that it is segregated, anatomically and functionally, into two visual pathways (Goodale & Milner, 1992). In this view, the ventral pathway supports visual perception, while the dorsal pathway subserves goal-directed actions. Most research on the cognitive and neural mechanisms underlying goal-directed actions has utilized real 3D objects. Importantly, however, with the proliferation of touchscreens in the last decade, humans now commonly perform visually guided actions upon two-dimensional images of objects. Although such interactions share some features with the visuomotor control of real 3D objects, they afford a limited range of actions and have limited consequences. For example, using a hammer to pound a nail has real consequences (which may contribute to achieving goals, such as successfully hanging a portrait, or to failures, such as a bruised thumbnail). In stark contrast, even though a picture of a hammer may invoke the concept of hammering, one would certainly never try to pound a nail with a picture of a hammer. Thus, despite the similarity between actions directed to real 3D objects and to images, these differences in consequences suggest that the underlying neural representations may differ.
Consistent with this conjecture, recent behavioral evidence shows that while visually guided grasping of real 3D objects can be performed using only visuomotor processing (within the dorsal visual stream; Goodale & Milner, 1992), grasps performed upon images show properties consistent with greater reliance on perceptual representations. First, simulated grasping of object images, like purely visual perceptual tasks, follows a fundamental psychophysical principle (Weber's Law), whereas grasping of real 3D objects does not (Ganel et al., 2008; Holmes & Heath, 2013). Second, grasping of images relies on holistic representation of shape (Freud & Ganel, 2015), while grasping of real 3D objects relies on analytic representation of object features (Ganel & Goodale, 2003). Third, crowding (i.e., the presence of flanking objects in a scene) impairs size perception but not grip scaling for 3D objects, whereas for 2D objects the effect of crowding is similar for perception and action (Chen, Sperandio, & Goodale, 2015).
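The Weber's Law signature mentioned above can be made concrete with a toy computation: under Weber's Law, the just-noticeable difference (JND) grows linearly with object size, so regressing JND on size yields a positive slope, whereas size-invariant precision (as reported for grasping real 3D objects) yields a slope near zero. The following is an illustrative sketch only; the JND values are hypothetical numbers, not data from the study:

```python
# Illustrative only: hypothetical JND values, not data from the study.
object_sizes = [20.0, 30.0, 40.0, 50.0, 60.0]  # object widths in mm

# Weber's Law: JND = k * size (constant Weber fraction k, here 0.04)
jnd_perception = [0.04 * s for s in object_sizes]

# Size-invariant precision, as reported for grasping real 3D objects
jnd_grasping_3d = [1.2 for _ in object_sizes]

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

print(slope(object_sizes, jnd_perception))   # positive: JND scales with size
print(slope(object_sizes, jnd_grasping_3d))  # ~0: precision independent of size
```

A positive slope is the Weber's Law signature associated with perceptual tasks (and with grasping of images); a flat slope matches the size-invariant precision reported for grasping real 3D objects.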
Given this psychophysical evidence that grasping is affected by stimulus realness, we expected differences in the neural processing of actions upon real and simulated objects. Recent human neuroimaging evidence has shown that even during passive viewing, the neural processing of real objects and images may differ (Snow et al., 2011). Object realness may therefore be expected to modulate neural representations during manipulative actions as well. Alternatively, there are reasons to expect no such differences. For example, if precision grasping is akin to reaching to two locations with the index finger and thumb (Smeets & Brenner, 1999), one might expect similar neural representations for the grasping of real objects and images because the digit positions are similar.
Here we tested whether regions within the visual pathways generate distinct visuomotor representations that carry information regarding object realness in addition to the expected movement. We might expect differences between real objects and images in the ventral visual stream, where we have previously found such differences during passive viewing (Snow et al., 2011) and where we have found that the realism of an action task affects activation (Kroliczak, Cavina-Pratesi, Goodman, & Culham, 2007). However, we might expect clearer differences, and differences that are specific to grasping, in the dorsal visual stream, specifically in the anterior intraparietal sulcus (aIPS), which combines visual and motor cues to plan and execute visually guided actions (Culham et al., 2003; Gallivan & Culham, 2015; Singhal et al., 2013).
Grasping a real 3D object evokes real consequences that must be anticipated even during action planning, whereas action consequences are fairly minimal for simulated objects. Hence we predicted that a distinction between object formats would be evident in aIPS. Since this region relies on visual information available before movement execution, this hypothesis predicts that differences between object formats would be manifested in the planning phase and not just during execution. Finally, this hypothesis predicts that object realness will matter more for grasping a real 3D object, which requires greater planning accuracy and anticipation of action consequences than reaching to touch the object. That is, errors in grasping real 3D objects will lead to consequences, corrections, and recalibrations that are not necessary when grasping images, whereas errors in reaching towards real 3D objects and images will lead to similar mislocalizations.
One important consequence of actions upon real 3D objects is tactile feedback, which can be used to optimize manipulation (such as adjusting digit positions or grip force) and to “calibrate” forward models for better performance on subsequent trials (Säfström & Edin, 2008). This haptic feedback is absent for actions upon images (though visual feedback may still be available) and may be a critical factor in the observed behavioral differences. Although even simple terminal feedback can engage the dorsal visual pathway (Whitwell, Ganel, Byrne, & Goodale, 2015), differential haptic feedback is thought to mediate the differences between actions directed to real 3D objects and to images of the same objects (Hosang, Chan, Jazi, & Heath, 2015). Hence, to examine sensitivity to object realness beyond differences induced by haptic feedback, we employed an experimental design that minimized the differences between the haptic feedback provided for real objects and images (see Methods for details).
We used functional magnetic resonance imaging (fMRI) to investigate the human neural representations of real objects versus images during two action types: grasping (for which object attributes like size and shape are highly relevant) and reaching to touch (which relies predominantly on object location) (Fig. 1). Because of the obvious differences in haptic feedback during execution of a grasp towards real objects versus images, we focussed our analyses on the planning period, when stimuli were in view but before the action was initiated. We expected that the neural representations across sensory and sensorimotor brain regions, as inferred from multivoxel pattern analysis (MVPA), would differ during the planning of actions towards real objects versus images. Moreover, we predicted that the difference might be particularly marked during grasping movements, where object properties such as shape and size are relevant for grasp planning and are coded by areas like aIPS, compared to reaching, where only information about location is essential.
Section snippets
Participants
Data were analyzed from 13 right-handed volunteers who participated in the experiment (eleven females; mean age: 24.5 years, range: 22–29) and who were recruited from the University of Western Ontario (London, Ontario, Canada). The data obtained from two additional participants were excluded: one had excessive head and body movements during the scan, and the other had only six runs available, which is not sufficient for the main analysis, in which the data were divided into odd and even runs.
MVPA
First, we analyzed the neural representations in the left aIPS, a key region for the computation of visually guided hand actions. As presented in Fig. 3A, full congruency (i.e., both motor and visual congruency) induced greater correlations (i.e., more similar representations) than motor-only congruency, visual-only congruency, or incongruent trials, suggesting that the left aIPS represents both motor (reach/grasp) information and visual information regarding object realness during action planning. To statistically…
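The correlation-based MVPA logic described here can be sketched in a few lines: voxel patterns from one half of the runs (e.g., odd) are correlated with patterns from the other half (e.g., even) across conditions, and higher correlations for fully congruent condition pairs (same action, same object format) than for partially congruent or incongruent pairs indicate that the region codes both factors. The following is a minimal sketch with synthetic patterns; the pattern generator, voxel count, and signal model are all hypothetical assumptions, not the study's analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 2000  # hypothetical voxel count; large enough for stable correlations

# Fixed "signal" components: one pattern per action type and per object format.
action_code = {"grasp": rng.normal(size=n_voxels),
               "reach": rng.normal(size=n_voxels)}
format_code = {"real": rng.normal(size=n_voxels),
               "image": rng.normal(size=n_voxels)}

def pattern(action, fmt):
    """Synthetic voxel pattern for one split: action + format signal plus run noise."""
    return action_code[action] + format_code[fmt] + rng.normal(size=n_voxels)

def split_half_r(cond_a, cond_b):
    """Correlate condition A's pattern from one half of the runs
    with condition B's pattern from the other half."""
    return np.corrcoef(pattern(*cond_a), pattern(*cond_b))[0, 1]

full_congruent   = split_half_r(("grasp", "real"), ("grasp", "real"))   # action + format match
motor_congruent  = split_half_r(("grasp", "real"), ("grasp", "image"))  # action matches only
visual_congruent = split_half_r(("grasp", "real"), ("reach", "real"))   # format matches only
incongruent      = split_half_r(("grasp", "real"), ("reach", "image"))  # neither matches

# A region coding both action and object format yields:
# full > motor-only and visual-only > incongruent
print(full_congruent, motor_congruent, visual_congruent, incongruent)
```

In this toy model, the correlation ordering mirrors the aIPS result: full congruency produces the most similar patterns because both signal components are shared across splits, while partial congruency shares only one component.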
Discussion
The present study investigated the neural representations that dissociate visually guided actions directed to images from those directed to real objects. Although previous behavioral studies suggested that actions directed to real objects rely on representations different from those supporting actions directed toward images (Freud & Ganel, 2015; Holmes & Heath, 2013; Hosang et al., 2015), the neural underpinnings of this dissociation had not been investigated before. Our findings add to the…
Conclusion
The present study examined the neural mechanisms that dissociate visuomotor control of actions directed to real 3D objects versus images. In line with previous behavioral investigations (Freud & Ganel, 2015; Holmes & Heath, 2013), we found that actions directed to images rely on neural representations distinct from those underlying actions directed to real 3D objects. These dissociable representations may reflect the operation of a forward model generated by the visuomotor system, which integrates visual…
Acknowledgements
This work was funded by a Canadian Institutes of Health Research Grant MOP 130345 to JCC, by a Natural Sciences and Engineering Research Council of Canada Discovery Grant RGPIN-2016-04748 to JCC, by the Yad-Hanadiv Postdoctoral fellowship to EF, and by The Israel Science Foundation (grant No. 65/15) to EF.
References (58)
- et al. Representation of manipulable man-made objects in the dorsal stream. NeuroImage (2000)
- et al. Neural representations of graspable objects: Are tools special? Cognitive Brain Research (2005)
- et al. The role of parietal cortex in visuomotor control: What have we learned from neuroimaging? Neuropsychologia (2006)
- et al. Representation of possible and impossible objects in the human visual cortex: Evidence from fMRI adaptation. NeuroImage (2013)
- et al. Impossible expectations: fMRI adaptation in the lateral occipital complex (LOC) is modulated by the statistical regularities of 3D structural information. NeuroImage (2015)
- et al. “What” is happening in the dorsal visual pathway. Trends in Cognitive Sciences (2016)
- et al. Neural coding within human brain areas involved in actions. Current Opinion in Neurobiology (2015)
- et al. Visual coding for action violates fundamental psychophysical principles. Current Biology (2008)
- et al. Differences in the visual control of pantomimed and natural grasping movements. Neuropsychologia (1994)
- et al. Separate visual pathways for perception and action. Trends in Neurosciences (1992)
- The lateral occipital complex and its role in object recognition. Vision Research
- Goal-directed grasping: The dimensional properties of an object influence the nature of the visual information mediating aperture shaping. Brain and Cognition
- Distinct and distributed functional connectivity patterns across cortex reflect the domain-specific constraints of object, face, scene, body, and tool category-selective modules in the ventral visual pathway. NeuroImage
- The functional neuroanatomy of object agnosia: A case study. Neuron
- Do human brain areas involved in visuomotor actions show a preference for real tools over visually similar non-tools? Neuropsychologia
- Neural response to perception of volume in the lateral occipital complex. Neuron
- Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex. Neuropsychologia
- The TINS Lecture: The parietal association cortex in depth perception and visual control of hand action. Trends in Neurosciences
- Beyond grasping: Representation of action in human anterior intraparietal sulcus. NeuroImage
- Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nature Neuroscience
- Human anterior intraparietal area subserves prehension: A combined lesion and functional MRI activation study. Neurology
- Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of Neurophysiology
- Structural and functional changes across the visual cortex of a patient with visual form agnosia. The Journal of Neuroscience
- Differences in the effects of crowding on size perception and grip scaling in densely cluttered 3-D scenes. Psychological Science
- Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Experimental Brain Research
- Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences
- Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): Use of a cluster-size threshold. Magnetic Resonance in Medicine
- Visual control of action directed toward two-dimensional objects relies on holistic processing of object shape. Psychonomic Bulletin & Review
- Three-dimensional representations of objects in dorsal cortex are dissociable from those in ventral cortex. Cerebral Cortex