Abstract
Recognizing goal-directed actions is a computationally challenging task, requiring not only the visual analysis of body movements, but also analysis of how these movements causally impact, and thereby induce a change in, the objects targeted by an action. We tested the hypothesis that the analysis of body movements and of the effects they induce relies on distinct neural representations in the superior and anterior inferior parietal lobe (SPL and aIPL). In four fMRI sessions, participants observed videos of actions (e.g. breaking a stick, squashing a plastic bottle) along with corresponding point-light-display stick figures, pantomimes, and abstract animations of agent-object interactions (e.g. dividing or compressing a circle). Cross-decoding between actions and animations revealed that aIPL encodes abstract representations of action effect structures independent of motion and object identity. By contrast, cross-decoding between actions and point-light displays revealed that SPL is disproportionately tuned to body movements independent of visible interactions with objects. Lateral occipitotemporal cortex (LOTC) was sensitive to both action effects and body movements. Moreover, cross-decoding between pantomimes and animations suggests that right aIPL and LOTC represent action effects even in response to implied object interactions. These results demonstrate that parietal cortex and LOTC are tuned to physical action features, such as how body parts move in space relative to each other and how body parts interact with objects to induce a change (e.g. in position or shape/configuration). The high level of abstraction revealed by cross-decoding suggests a general neural code supporting mechanical reasoning about how entities interact with, and have effects on, each other.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
We added a representational similarity analysis (RSA) to assess in more detail the representational content isolated by the action-animation and action-PLD cross-decoding. This analysis revealed several interesting findings (action effect structure representations at a categorical rather than specific level; body motion representations in parietal cortex that capture the involved effectors and action kinematics; and more specific differences between the actions in LOTC), which we integrated into our article. Moreover, we ran a stimulus-based decoding analysis using motion energy features and optical flow vectors. This analysis revealed no significant effects for the action-PLD cross-decoding, but did for the action-animation cross-decoding. We therefore included a motion energy model in the RSA, which could indeed explain some of the representational variance in early visual cortex, but not in other brain regions.