Abstract
Body postures provide information about others’ actions, intentions, and emotional states. However, little is known about how postures are represented in the brain’s visual system. Considering our extensive visual and motor experience with body postures, we hypothesized that priors derived from this experience may systematically bias visual body posture representations. We examined two priors: gravity and biomechanical constraints. Gravity pushes lifted body parts downwards, while biomechanical constraints limit the range of possible postures (e.g., an arm raised far behind the head cannot go down further). Across three experiments (N = 246) we probed participants’ memory of briefly presented postures using change discrimination and adjustment tasks. Results showed that lifted arms were misremembered as lower and as more similar to biomechanically plausible postures. Inverting the body stimuli eliminated both biases, implicating holistic body processing. Together, these findings show that knowledge shapes body posture representations, reflecting modulation from a combination of category-general and category-specific priors.
Introduction
Body posture is an important social cue that provides information about others’ emotions, intentions, and mental states. The pressure to quickly and accurately recognize bodies and their movements has resulted in humans’ typically excellent performance in detecting and discriminating body posture and body motion (Neri et al., 1998; Reed et al., 2003; Stein et al., 2012; Thorat & Peelen, 2022), a skill that is supported by dedicated brain regions in visual cortex (Peelen & Downing, 2007). When bodies are inconsistent with our daily experience, such as when they are inverted, this ability is impaired (Gandolfo & Downing, 2020; Reed et al., 2003, 2006; Stein et al., 2012; Yin, 1969). This inversion effect is more pronounced for faces and bodies than for other objects, indicating more configural processing for these visually highly familiar stimuli.
Owning a body ourselves, we also have extensive motor, tactile, and proprioceptive experience of a body and its dynamics (Berlucchi & Aglioti, 2010). Neuropsychological evidence suggests that we have an internal model of the physical relationships between body parts that helps us execute our own actions and understand those of others (de Vignemont, 2010). Together with our extensive visual experience, these sensory modalities provide us with additional knowledge of hierarchical limb structure, the possible range of movements of joints, and the effort required for executing specific body actions. For example, we know that lifting an arm requires more effort than lowering it, and that reaching one’s chest with one’s hand is easier than reaching one’s back. In this study, we asked whether and how our knowledge of the body influences the perceptual representation of body postures.
Previous research has shown that perceptual operations are influenced by knowledge and expectations (Bruner & Postman, 1949; de Lange et al., 2018). Specifically, Bayesian accounts of perception propose that regularities of the environment are used to build an internal model of the world, which then informs perceptual inference (Pouget et al., 2013). These priors include the distribution of visual properties in natural scenes (Girshick et al., 2011; Weiss et al., 2002), basic physical principles of motion (Freyd & Finke, 1984; McBeath et al., 1992), general gravity (Hubbard, 1990, 2020), and physical state (Hafri et al., 2022). Effects of priors typically manifest themselves by biasing the interpretation of new input in the direction of the prior, particularly when the input is ambiguous. For example, the last memory of a moving object is biased toward the motion direction, reflecting the prior that moving objects normally keep moving (Freyd & Finke, 1984). This representational momentum is larger for downward motion than upward motion, consistent with the omnipresent downward force of gravity (Hubbard, 1990).
Effects of knowledge on perception have been also observed for body movements. For example, when observing apparent body movements, the perceived movement tends to follow a biomechanically plausible path, even if that path is longer (Shiffrar & Freyd, 1990). Furthermore, postures leaning backward are judged to be more likely to fall than postures leaning forward (Bonnet et al., 2005). These findings indicate that the perceptual interpretation of apparent or implied body movements is influenced by knowledge of biomechanical constraints. However, it is unclear whether the perceptual representations of static body postures are systematically biased based on such constraints.
To address this question, we considered two priors that are relevant when constructing an internal model of the body. The first is the general force of gravity: We know that an object will fall if not supported, and likewise, a lifted arm will fall if no muscle force is used. If gravity knowledge affects body representations, we predict that a lifted arm will be represented as slightly lower than its actual position, as has been observed for unsupported objects (Bertamini, 1993; Freyd et al., 1988). Besides the general force of gravity, we hypothesized that the body-specific knowledge of biomechanical constraints also biases how static postures are perceived. Because of the physiological structure of the body, postures are confined to a limited range. For example, the elbow can only bend inwards but not outwards. If knowledge of these constraints informs perception, the representation of a (nearly) impossible posture may be biased towards the nearest possible posture. Crucially, biomechanical constraints can counteract the effect of gravity: an arm raised in front of the head will fall but an arm raised behind the head can hardly fall lower (Fig. 1).
a)–d), Illustration of the hypothesis. The orange and blue arrows indicate the direction of gravity and of the biomechanical constraints, respectively; transparent arms indicate the predicted perceived arm positions according to the two hypotheses. Here the arrows and the predicted arms only suggest the direction, not the extent, of the effects. For the Upper-back posture (b), gravity and the biomechanical constraints point in opposite directions, potentially cancelling each other’s influence. For the Lower-back posture (d), gravity and the biomechanical constraints point in the same direction, so their effects potentially add up. e), Stimulus examples. For each quadrant, we show the lowest posture (54°) and the highest posture (36°) used in the experiments. In the experiments, both figures had left-facing and right-facing versions, though only one is shown here.
To test these hypotheses, we used two different perceptual judgment tasks involving four arm postures subject to one or both of the two biases (Fig. 1). We predicted that lifted arms will generally be remembered as lower, towards the ground, reflecting a gravity-related bias. Furthermore, we predicted that lifted arms will be biased towards biomechanically possible postures. Specifically, biomechanical constraints limit further movement of the arm when it is raised behind the head, counteracting the gravity bias (Fig. 1b), while adding to the gravity bias when the arm is behind the hip (Fig. 1d). All code, stimuli, and raw data for the experiments reported here are available at https://osf.io/qmtkw/.
Experiment 1
We designed a change discrimination task to probe the existence of biases in body posture representation due to gravity and biomechanical constraints. Participants compared two sequential postures whose arm positions slightly differed, with the second arm posture being slightly higher or lower (Fig. 2a). In the absence of biases, participants should detect upward and downward changes equally well. Instead, if biases distort the representation of the first posture during the brief interval, we may observe that detecting a change in one direction is easier than a change in the other. We first tested upper postures (Fig. 1a & 1b) in Experiment 1a, then followed with lower postures (Fig. 1c & 1d) in Experiment 1b to generalize the findings to visually different postures.
Trial Procedures. a) Change discrimination task used in Experiments 1a and 1b. In the trial shown here, the target is 45 degrees in the Upper-front quadrant, and the probe moves −9 degrees (i.e., upwards). Participants indicated whether the arm had moved up or down. The up-down text screen is shown for illustration purposes. b) Adjustment task used in Experiments 2 and 3. The target posture was 36, 39, 42, 48, 51, or 54 degrees within each quadrant. The starting posture in the test image was chosen randomly between 30 and 60 degrees within the same quadrant as the target. Participants adjusted the arm (indicated by transparent arms) with the up-arrow and down-arrow keys to match the target angle.
Method
Participants
The desired sample size was set to 60 for all experiments before testing. This was based on a power analysis indicating that a sample size of 52 is needed to detect an effect size of d = 0.4 with 80% power, as recently recommended (Brysbaert, 2019). We rounded this up to 60 to ensure sufficient power.
Participants were recruited through the SONA system in return for course credits. We required the participants to be above 18 years old with normal or corrected-to-normal vision. Recruitment stopped when the sample size reached 60 after the exclusion of low-quality data (see Analysis section). For Experiment 1a, we recruited 75 participants. Of these, one participant did not finish the task and 14 were excluded. For Experiment 1b, we recruited 67 participants. Of these, two did not finish the task and five were excluded. We thus acquired an effective sample size of 60 for both Experiment 1a (49 females, 11 males; age: M = 20.1, range = [18, 36]) and Experiment 1b (48 females, 12 males, 2 other; age: M = 20.6, range = [18, 47]). For these and the following experiments, digital informed consent was obtained from all participants. The procedures were approved by the University’s Ethical committee (Ethics no.: ECSW-2022-079).
Stimuli
Body images were generated by rendering digital human models in DAZ studio 4.15 (Daz Productions, Inc). A female character and a male character were used. The characters were standing in profile, one arm lifted, the other leaning naturally on the hip. The lifted arm positions were categorized into four quadrants (Fig. 1): two directions (front and back) x two arm heights (upper and lower). In each quadrant, we designated the upper bound of that quadrant as the zero point, with larger angles meaning that the arm is lower, i.e., closer to the feet. The arm could be presented at an angle of 36, 39, 42, 45, 48, 51, or 54 degrees in each quadrant. Both left-facing and right-facing figures were generated, so that an arm in front of the body was equally often presented in the left and right visual field to avoid possible confounds of visual field differences between the conditions (Fig. 1e). The lifted arm was always on the viewer’s side (right arm lifted when facing right, left arm lifted when facing left) to avoid the arm being occluded by other body parts.
Mask images were grey-scale checkerboard images. Body images were 300 pixels wide and 480 pixels high; their size in degrees of visual angle depended on each online participant’s screen resolution and viewing distance. Mask images were 350 pixels wide and 525 pixels high, slightly larger than the body images to achieve a better masking effect.
Procedure
Experimental procedures were programmed with the jsPsych library (de Leeuw, 2015) and its psychophysics plugin (Kuroki, 2021). Experiments 1a and 1b tested the front and back arm directions for upper postures (Experiment 1a) and lower postures (Experiment 1b). Data were aggregated for analysis. Experiment 1a also included a machine condition that was not relevant to the purpose of the current study (see Supplementary material 2).
In each trial, a fixation cross was first presented at the center for 800, 900, or 1000 ms, then a target body posture of either 36, 39, 42, 45, 48, 51, or 54 degrees in either quadrant was shown for 200 ms (Fig. 2a). Participants were instructed to remember the posture of the target and hold it in memory. Immediately after the target, a 1000-ms dynamic mask consisting of four consecutive checkerboard images (250 ms each) was shown to minimize aftereffects and/or apparent motion of the arm, after which the probe image appeared for 200 ms. Compared to the target, the arm in the probe would move upwards or downwards by an angle of 3, 6, or 9 degrees (equiprobable). The task was to judge whether the arm had moved up or down relative to the target. Participants indicated their choice by pressing F or J on the keyboard. The key-response mapping was counterbalanced across participants. Participants were asked to respond as accurately and as quickly as possible. The trial ended upon response or 4000 ms after the probe had disappeared.
Each combination of arm direction and angle difference included 24 trials, resulting in 288 trials in total, separated into four blocks. Angle difference, arm direction, and figure gender were completely interleaved, while facing direction was kept identical within blocks to avoid the extra effort of switching viewpoint between trials. The two left-facing and two right-facing blocks were presented in ABBA order, with about half of the participants starting with a left-facing block and the others with a right-facing block. A practice session of 12 trials preceded the formal experiment. Feedback on accuracy and mean response time across conditions was shown to the participant at the end of each block.
Analysis
Responses with RTs < 250 ms (relative to probe onset) were excluded (0.10% of trials across Experiments 1a and 1b), as these most likely reflect anticipatory responses. For each participant, data quality was inspected by plotting the percentage of up responses for each angle-difference level in each quadrant (Fig. S1a). Participants following the task instructions should show an increase in the percentage of up responses as the angle difference decreases: the more obviously the arm moves up, the more likely people are to choose up, yielding a sigmoid curve. Some participants exhibited a flat or reversed curve, suggesting that they misunderstood the task or responded randomly. These participants were detected using a slope index (Fig. S1b), computed as the mean up-response percentage at the two clearest upward levels (−9 and −6) minus that at the two clearest downward levels (+6 and +9). Participants with a slope index below 0.2 in either the front or the back condition were excluded, resulting in 14 exclusions in Experiment 1a and five in Experiment 1b.
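This exclusion rule can be sketched in a few lines. The following Python is illustrative only (function and variable names are ours, not from the original analysis code); up-response proportions are keyed by signed angle difference, with negative values meaning upward probe movement.

```python
def slope_index(up_pct: dict) -> float:
    """Slope index: mean proportion of 'up' responses at the two
    clearest upward steps (-9, -6 deg) minus the mean at the two
    clearest downward steps (+6, +9 deg). Compliant participants
    yield values near 1; flat or reversed curves yield <= 0."""
    return (up_pct[-9] + up_pct[-6]) / 2 - (up_pct[6] + up_pct[9]) / 2

def keep_participant(front: dict, back: dict, cutoff: float = 0.2) -> bool:
    # A participant is excluded if the index falls below the
    # cutoff in either the front or the back condition.
    return slope_index(front) >= cutoff and slope_index(back) >= cutoff

# Hypothetical response curves (proportions of 'up' responses):
good = {-9: 0.95, -6: 0.85, -3: 0.60, 3: 0.35, 6: 0.15, 9: 0.05}
flat = {-9: 0.50, -6: 0.50, -3: 0.50, 3: 0.50, 6: 0.50, 9: 0.50}
print(keep_participant(good, good))  # True  (slope index 0.8)
print(keep_participant(good, flat))  # False (flat curve in one condition)
```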
The perceptual biases of interest were quantified by the criterion (c) from signal detection theory, using the Psycho package in R (Makowski, 2018). Taking upward movement as the signal, a negative criterion means the participant was biased in favour of choosing up over down, suggesting that upward changes were more noticeable. We calculated the individual criterion in each condition from the hit rate (up-response percentage when the arm actually moved up) and the false alarm rate (up-response percentage when the arm actually moved down), using the standard formula c = −[z(hit rate) + z(false alarm rate)] / 2, where z denotes the inverse of the standard normal cumulative distribution function.
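The criterion computation can be sketched as follows (illustrative Python using the standard signal-detection formula; the reported analyses used the Psycho package in R):

```python
from statistics import NormalDist

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection criterion, c = -(z(H) + z(FA)) / 2,
    where z is the inverse standard normal CDF.
    With 'up' responses treated as the signal, a negative c
    indicates a bias toward responding 'up'."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# A symmetric observer (H = 0.8, FA = 0.2) has c = 0 (no bias);
# an observer biased toward 'up' (H = 0.9, FA = 0.4) has c < 0.
print(round(criterion(0.9, 0.4), 3))  # -0.514
```

In practice, hit or false-alarm rates of exactly 0 or 1 require a correction (e.g., a log-linear adjustment) before the z-transform, since the inverse CDF is undefined at those values.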
The criteria for all subjects were tested against zero using two-tailed t-tests. A comparison of the four postures was made using a mixed ANOVA with arm height (upper, Experiment 1a; versus lower, Experiment 1b) as a between-subject factor and arm direction as a within-subject factor. Results of d prime were also analyzed and are presented in Supplementary material 1.
Results
Participants’ responses were in line with a gravity bias (Fig. 3): The arm in the target posture was remembered as lower than its actual position, as indexed by a criterion that was significantly below zero for all postures (Upper-front: M = -0.25, 95% CI = [-0.32, -0.18], t(59) = -7.08, p < .001, d = -0.91, BF10 = 5.75E6; Upper-back: M = -0.14, 95% CI = [-0.21, -0.07], t(59) = -4.38, p < .001, d = -0.57, BF10 = 413; Lower-back: M = -0.11, 95% CI = [-0.19, -0.04], t(59) = -2.96, p = .004, d = -0.38, BF10 = 7.14) except the Lower-front (M = -0.06, 95% CI = [-0.14, 0.02], t(59) = -1.60, p = .114, d = -0.21, BF10 = 0.47).
Criterion results for the four conditions in Experiment 1. A negative criterion reflects a bias to respond “up”, indicating that the first posture was remembered as lower than the second posture. We interpret this overall bias as reflecting knowledge of gravity. The difference between Front and Back indicates that the criterion was influenced by arm position. Results showed an interaction between Front/Back and Upper/Lower, in line with biomechanical constraints (see Fig. 1).
***: p < .001, **: p < .01, *: p < .05, n.s.: not significant. Error bars denote 95% CI.
Next, we combined Experiments 1a and 1b using a mixed ANOVA with arm height and arm direction as factors to test the presence of a biomechanical bias. As illustrated in Fig. 1, compared to the front, the bias in the back should be diminished by biomechanical constraints when in the upper quadrant (Fig. 1a vs. 1b), but strengthened when in the lower quadrant (Fig. 1c vs. 1d). We thus predicted an interaction between arm height and arm direction. We indeed found this interaction (Fig. 3): F(1, 118) = 8.09, p = .005, η2p = 0.06. Specifically, for the upper postures, the gravity bias was stronger in the front than in the back, M = -0.11, 95% CI = [-0.19, -0.03], t(59) = -2.74, p = .008, d = -0.35, BF10 = 4.17, indicating that the upward biomechanical constraint counteracted the gravity bias. As predicted, the gravity bias for the Lower-front was numerically weaker than the gravity bias for the Lower-back, but this difference did not reach significance: M = 0.05, 95% CI = [-0.03, 0.13], t(59) = 1.30, p = .197, d = 0.17, BF10 = 0.32.
Experiment 2
The change discrimination task provided evidence for both gravity and biomechanical biases. We wondered whether these results would be specific to the change detection task, in which the two consecutive body postures may be perceived as part of an action. If so, the results could reflect biases in human action perception rather than biases in the static representation of the target posture. To address this, in Experiment 2 we tested whether the identified biases replicate in an adjustment task, in which participants reproduced the target posture.
Method
Participants
Experiment 2 adopted a within-subject design. The sample size was kept consistent with Experiment 1: sixty participants (30 females, 30 males; age: M = 33.42, range = [21, 45]) were recruited through Prolific in return for monetary reward.
Procedure
Experiment 2 included the same four conditions as Experiment 1. In each quadrant, six target angles (36, 39, 42, 48, 51, and 54 degrees) were used. As in Experiment 1, a fixation cross and then the target posture were shown, followed by the mask. After the mask disappeared, participants pressed the left-arrow or right-arrow key to bring up the test image for adjustment. The initial posture of the test image was randomized between 30 and 60 degrees, always in the same quadrant as the target. Participants then pressed the up-arrow and down-arrow keys to move the arm of the test image upwards or downwards. After adjusting the arm to the remembered target position, participants pressed the space bar to confirm their answer. If no response was made, the trial ended after 10 s. If the test image was not initiated within 3 s after the mask, a warning was given.
All other factors, including figure gender and facing direction, were kept consistent with Experiment 1. Facing direction was blocked, and other factors were interleaved. Each angle in each quadrant was presented eight times, resulting in 48 trials for each quadrant, 192 trials in total. The trials were divided into four blocks, with the order manipulated as in Experiment 1. Two mini practice blocks were completed before the start of the experiment. Feedback on average absolute error was given at the end of each block.
Analysis
Trials in which participants pressed space before adjusting the test image and trials in which the test image was not initiated within 3 s after the mask were discarded. Trials with an absolute error larger than 15 degrees were also discarded. Altogether, this led to the rejection of 5.41% of the total number of trials. No participants were excluded.
The error was defined as the target angle minus the response angle. For example, a target of 48 degrees adjusted as 52 degrees would yield an error of -4 degrees. Negative values indicate that the posture was adjusted to be lower than it was actually shown. Errors of all trials in one quadrant were averaged to get the mean error for each quadrant of each participant. Group mean error data were tested in the same way as the criterion in Experiment 1, except that a repeated-measures ANOVA was used instead of a mixed ANOVA.
For visualization purposes, we computed two indexes reflecting the two hypothesized effects. The gravity bias was given by the overall adjustment error, averaged across the four conditions (Upper-front, Upper-back, Lower-front, Lower-back). The biomechanical bias was indexed by the difference between postures with vs. without biomechanical constraints, averaged across upper and lower postures (the mean of (Upper-front − Upper-back) and (Lower-back − Lower-front); Fig. 4b).
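The two indexes described above can be computed as follows (a minimal Python sketch with hypothetical error values, not real data; negative errors mean the arm was adjusted lower than shown):

```python
def bias_indexes(err: dict) -> tuple:
    """err: mean adjustment error (deg) per condition, keyed by
    ('upper'|'lower', 'front'|'back'); negative = adjusted lower.

    Returns (gravity, biomech):
      gravity -- overall error, averaged over all four conditions
      biomech -- mean of (Upper-front - Upper-back) and
                 (Lower-back - Lower-front)
    """
    gravity = sum(err.values()) / 4
    biomech = ((err[('upper', 'front')] - err[('upper', 'back')])
               + (err[('lower', 'back')] - err[('lower', 'front')])) / 2
    return gravity, biomech

# Hypothetical participant (degrees):
err = {('upper', 'front'): -2.5, ('upper', 'back'): -1.9,
       ('lower', 'front'): -0.1, ('lower', 'back'): -1.1}
g, b = bias_indexes(err)
print(round(g, 3), round(b, 3))  # -1.4 -0.8
```

Both indexes being negative for this hypothetical participant would correspond to a downward (gravity) bias and a bias toward the biomechanically constrained side, as in Fig. 4b.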
Results of Experiment 2. a) Mean error of the four conditions. b) Bias Indexes for individual participants. On the top are the calculation methods for the two indexes. ***: p < .001, **: p < .01, *: p < .05, n.s.: not significant. Error bars denote 95% CI.
Results
In the adjustment task, the direction and magnitude of biases are directly reflected in the direction and magnitude of the adjustment error. A negative error indicates that the target was remembered as lower than its actual position, reflecting a gravity bias. This was the case for all of the postures tested (Fig. 4a, Upper-front: M = -2.54, 95% CI = [-2.97, -2.11], t(59) = -11.8, p < .001, d = -1.52, BF10 = 1.68E14; Upper-back: M = -1.89, 95% CI = [-2.34, -1.44], t(59) = -8.46, p < .001, d = -1.09, BF10 = 9.86E8; Lower-back: M = -1.07, 95% CI = [-1.48, -0.66], t(59) = -5.22, p < .001, d = -0.67, BF10 = 6.52E3) except for the Lower-front (M = 0.09, 95% CI = [-0.35, 0.52], t(59) = 0.39, p = .694, d = 0.05, BF10 = 0.15).
Also consistent with Experiment 1, a two-way repeated-measures ANOVA showed an interaction between arm height and arm direction, revealing the effect of biomechanical constraints, F(1, 59) = 48.1, p < .001, η2p = 0.45. As in Experiment 1, for the upper postures, the gravity bias was stronger in the front than the back: M = -0.65, 95% CI = [-1.05, -0.25], t(59) = -3.28, p = .002, d = -0.42, BF10 = 16.5. By contrast, for the lower postures, the gravity bias was stronger in the back than the front: M = -1.15, 95% CI = [-1.52, -0.79], t(59) = -6.32, p < .001, d = -0.82, BF10 = 3.44E5.
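As a quick consistency check on the reported effect sizes, partial eta squared for these one-degree-of-freedom effects can be recovered from the F statistic and its error degrees of freedom (illustrative Python, not part of the original analysis pipeline):

```python
def partial_eta_squared(f_value: float, df_effect: float, df_error: float) -> float:
    """Partial eta squared recovered from an F test:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return f_value * df_effect / (f_value * df_effect + df_error)

# Interaction reported above, F(1, 59) = 48.1:
print(round(partial_eta_squared(48.1, 1, 59), 2))  # 0.45
```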
Fig. 4b shows the bias indexes for individual participants. Taking the four postures together, the error caused by gravity was significantly different from zero, M = -1.35, 95% CI = [-1.58, -1.12], t(59) = -11.61, p < .001, d = -1.50, BF10 = 8.50E13. The overall biomechanical constraint (M = -0.90, 95% CI = [-1.16, -0.64], t(59) = -6.94, p < .001, d = -0.90, BF10 = 3.37E6, also reflected in the interaction in the ANOVA) was also highly consistent across individuals. These results confirm and extend the results of Experiment 1 using a different task, generalizing the effects to a scenario where no action or motion is implied.
Experiment 3
A prominent feature of body perception is its susceptibility to inversion. Inversion disrupts body and face perception more than the perception of other objects, an effect believed to reflect disrupted configural processing of bodies and faces (Gandolfo & Downing, 2020; Reed et al., 2003; Stein et al., 2012; Yin, 1969). In Experiment 3, we used the inversion effect to test whether the effects of gravity and biomechanical constraints are contingent on holistic processing of bodies. Inverted bodies also serve as an ideal control for upright bodies, since the two are identical in terms of low-level visual features.
Method
Participants
Sixty-six participants (21 females, 45 males; age: M = 30.04, range = [18, 45]) were recruited through Prolific in return for monetary reward. We first recruited the intended sample of 60; however, the number of participants starting with the upright vs. inverted condition was not yet balanced at that point, so six additional participants were recruited. Data from only the first 60 participants yielded highly similar results. Digital informed consent was obtained.
Procedure
The task was identical to Experiment 2 except that both upright and inverted conditions were tested. The inverted body images were generated by vertically flipping the upright images. Half of the participants started with the upright condition and half with the inverted condition. Both conditions contained a left-facing block and a right-facing block, with the order randomized across participants but kept the same for the upright and inverted conditions. Within each block, trials with different combinations of arm direction, arm height, within-quadrant angle, and figure identity were interleaved. Because of the inclusion of the inverted condition, the number of trials for each angle was halved compared to Experiment 2. In total, Upper-front, Upper-back, Lower-front, and Lower-back each included 24 trials for the upright and 24 for the inverted condition.
Analysis
Data cleaning procedures were identical to Experiment 2, with an exclusion rate of 6.90% of the total number of trials. Adjustment errors for each quadrant were averaged for upright and inverted conditions separately. A three-way repeated-measures ANOVA with arm height, arm direction, and body orientation was conducted. Two bias indexes for upright and inverted conditions were calculated and compared with two-tailed t-tests to test whether inversion diminished one or both of the biases.
For visualization purposes, and for a better comparison with the upright condition, the coding of angles for the inverted condition (Fig. 1c) followed a body-centered reference frame rather than a spatial reference frame. For example, negative errors for an upright body indicate that the arm was adjusted as closer to the feet and thus closer to the lower part of the screen. Similarly, negative errors for an inverted body indicate that the arm was adjusted as closer to the feet; however, because of the inversion and the reference to the body, this is now closer to the upper part of the screen.
Results
We found a significant three-way interaction between arm direction, arm height and body orientation, F(1,65) = 7.15, p = .009, η2p = .10, indicating that body orientation modulated the interaction between arm height and arm direction. Inspecting upright and inverted conditions separately (Fig. 5), the interaction of arm height and arm direction was significant for the upright body, replicating results from Experiment 2, F(1,65) = 24.1, p < .001, η2p = .27. In contrast, the inverted body did not show the interaction between arm height and arm direction, F(1,65) = 1.91, p = .171, η2p = .029. A main effect of body orientation (F(1,65) = 46.9, p < .001, η2p = .42) showed that inversion diminished the overall negative adjustment error, indicating a reduced gravity bias (Fig. 5).
Results of Experiment 3. a) Mean error of the four conditions. Left: upright, Right: inverted. b) Bias Indexes for individual participants. On top are the calculation methods for the two indexes. ***: p < .001, **: p < .01, *: p < .05, n.s.: not significant. Error bars denote 95% CI.
The bias indexes provided a more direct description of the inversion effect. As shown in Fig. 5b, inversion significantly reduced both the gravity bias (M = 1.25, 95% CI = [0.88, 1.61], t(65) = 6.80, p < .001, d = 0.84, BF10 = 3.52E6) and the biomechanical bias (M = 0.61, 95% CI = [0.15, 1.06], t(65) = 2.67, p = .009, d = 0.33, BF10 = 3.54). This inversion effect rules out the possibility that the biases emerge from part-based processing of bodies.
Discussion
The current results demonstrate that knowledge of gravity and of biomechanical constraints jointly shape visual memory representations of human body postures. In three experiments, these effects were replicated both directly and conceptually. Importantly, the biases were absent when bodies were inverted, ruling out low-level visual confounds and indicating that the biases emerge from whole-body representations. Together, the two tasks we used excluded potential confounds of motion perception, response biases and local visual feature processing, demonstrating a top-down influence on static body representations.
Our results are well explained by Bayesian theories in which perception is the result of an interplay between sensory input and priors derived from an internal model (de Lange et al., 2018; Ma et al., 2022). This internal model is shaped by environmental statistics (Girshick et al., 2011) and serves to achieve optimal inference (Pouget et al., 2013). When input is more ambiguous, priors will have a stronger influence, such that we perceive what is most likely. In our task, a body posture had to be maintained in visual memory for a brief interval. The fidelity of the sensory information will be reduced during this interval, making the posture representation susceptible to the influence of priors. This account would also naturally explain the current finding that the gravity bias was stronger for the upper arm postures than the lower arm postures (see Fig. 3, 4a, 5a) because an internal model of postures would reflect the probabilities of posture transitions: during most arm-involved actions (e.g., walking, using tools), an arm in the lower-front is equally likely to go up or down, whereas an arm in the upper-front is more likely to go down. Analyzing posture transitions in large video databases would be informative to establish the probability of posture transitions during natural actions, so that these can be more directly linked to the biases observed here.
An alternative interpretation of the current findings is that they reflect visuomotor simulation: participants may have memorized the postures partly through motor simulation, embodying the observed postures. This interpretation has been used to account for the finding of smaller representational momentum for biomechanically awkward arm movements (Wilson et al., 2010). However, although the motor system is activated during action observation (Buccino et al., 2004; Rizzolatti & Craighero, 2004), motor simulation may not be essential for understanding actions or representing body postures, as individuals born without upper limbs exhibit similar performance in action observation, action prediction, and mental imagery of postures (Vannuscorps et al., 2012; Vannuscorps & Caramazza, 2016). Nonetheless, even if the biases in posture perception do not depend on motor experience, determining whether the motor system interacts with the visual system requires further investigation. Knowing whether the effects observed here are also present in individuals born without limbs would give more insight into the respective contributions of visual and motor experience.
The current findings also raise new questions about the neural representation of body postures. Neuroimaging research on body perception has provided evidence for multiple cortical areas that are specifically engaged in body perception, including the extrastriate body area (EBA, Downing et al., 2001) and the fusiform body area (FBA, Peelen & Downing, 2005; Schwarzlose et al., 2005). Compared to the more extensively studied fusiform face area (FFA, Kanwisher et al., 1997), little is known about the representational structure of these areas (Downing & Peelen, 2011). It has been shown that the FFA represents faces in a face space centered around the average face, with distances from the center representing the deviation from the mean face (Leopold et al., 2001; Loffler et al., 2005). Based on the current results that knowledge of body structure informs posture perception, the body-selective areas may store an internal model of the body, including its constraints. Given the many combinations of body part postures, it would be advantageous for neurons to be tuned primarily to biomechanically possible postures. Indeed, previous work has shown that body representations in the EBA more strongly represent postures in commonly experienced visual field locations (Chan et al., 2010). Our results suggest that the representational space of body postures in body-selective regions might be biased, reflecting perceived rather than physical distances between postures.
In sum, we show that body posture representation is biased towards the ground and towards biomechanically plausible postures, indicating an influence of both general knowledge of the world (gravity) and specific knowledge of the body. These findings may reflect the influence of an internal model of the body based on environmental statistics. By employing such an encoding scheme, the visual system can efficiently predict upcoming postures, a critical component for humans’ ability to read others’ actions, intentions, and social interactions (Quadflieg & Koldewyn, 2017).
Author Contributions
Qiu Han: Conceptualization, Methodology, Software, Investigation, Formal Analysis, Validation, Visualization, Writing – Original Draft, Writing – Review & Editing, Funding Acquisition
Marco Gandolfo: Conceptualization, Methodology, Software, Validation, Writing – Review & Editing, Funding Acquisition
Marius Peelen: Conceptualization, Methodology, Validation, Writing – Review & Editing, Funding Acquisition
Declaration of interests
The authors declare no competing interests.
Acknowledgments
We thank Paul Downing and Eelke Spaak for feedback on earlier versions of the manuscript and the Peelen Lab members for the helpful suggestions on experimental design during lab meetings.
This project has received funding from the China Scholarship Council (CSC), European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie (grant agreement No. 101033489), and European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 725970).