bioRxiv
Reaching around obstacles accounts for uncertainty in coordinate transformations

Parisa Abedi Khoozani, Dimitris Voudouris, Gunnar Blohm, Katja Fiehler
doi: https://doi.org/10.1101/706317
Parisa Abedi Khoozani 1,2 (correspondence: 0kpa@queensu.ca)
Dimitris Voudouris 4,5
Gunnar Blohm 1,2,3
Katja Fiehler 4,5

1 Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario, Canada
2 Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
3 Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
4 Center for Mind, Brain, and Behaviour, Marburg University, Marburg, Germany
5 Psychology and Sport Sciences, Justus Liebig University, Giessen, Germany

Abstract

When reaching to a visual target, humans need to transform the spatial target representation into the coordinate system of their moving arm. It has been shown that increased demands in such coordinate transformations, for instance when the head is rolled toward one shoulder, lead to higher movement variability and influence movement decisions. However, it is unknown whether the brain incorporates such added variability to adjust movements when necessary. We designed an obstacle avoidance task in which participants had to reach to a visual target without colliding with an obstacle. We introduced different coordinate transformation demands by varying head roll (straight, 30° clockwise, and 30° counterclockwise). In agreement with previous studies, we observed that reaching variability increased when the head was tilted. In addition, participants systematically changed their obstacle avoidance behavior with head roll. In particular, they changed the preferred direction of passing the obstacle and increased their error margins, as indicated by stronger movement curvature. Notably, the number of collisions did not differ between the head roll and the head straight conditions. These results suggest that the brain takes the added movement variability during head roll into account and compensates for it by adjusting the reaching trajectories.

Introduction

Transforming retinal information to the coordinate system of the moving arm is crucial for performing visually guided movements, e.g. reaching (Buneo & Andersen, 2006; Buneo et al., 2002; Cohen & Andersen, 2002; Engel, Flanders, & Soechting, 2002; Knudsen, 2002; Knudsen, du Lac, & Esterly, 1987; Lacquaniti & Caminiti, 2003; Soechting & Flanders, 1992). It has been suggested that coordinate transformations should be considered as stochastic processes that add uncertainty to the transformed signals (Alikhanian, Carvalho, & Blohm, 2015; McGuire & Sabes, 2009; Sober & Sabes, 2003, 2005). Furthermore, it has been shown that stochasticity in coordinate transformations propagates to the movement resulting in increased movement variability (Abedi Khoozani & Blohm, 2018; Burns, Nashed, & Blohm, 2011; Burns & Blohm, 2010; Schlicht & Schrater, 2007); however, it is unknown if the brain accounts for potential consequences of such added movement variability while planning and executing reaching movements.

Accurate coordinate transformations rely on the estimation of three-dimensional (3D) body pose (Blohm & Crawford, 2007). This requires an internal model of the configuration of different body parts with regard to each other, e.g. eye-relative-to-head translation, and an estimation of joint angles, e.g. head rotation. While internal models are learned and most likely do not change, the estimation of a joint angle can arise from two sources: 1) afferent sensory signals and 2) efference copies of motor commands. Both signals are corrupted by uncertainty in sensory processing and variability of neuronal spiking (Faisal, Selen, & Wolpert, 2008). Several studies have suggested that varying body pose, e.g. rolling the head, increases behavioral variability (Abedi Khoozani & Blohm, 2018; Burns, Nashed, & Blohm, 2011; Burns & Blohm, 2010; Schlicht & Schrater, 2007). For instance, Burns and Blohm (2010) showed that rolling the head to either shoulder results in higher goal-directed reaching variability compared to reaching with the head straight. The authors argued that this increased variability stems from signal-dependent noise in coordinate transformations. However, an alternative interpretation is that, since humans mostly reach in an upright posture, the difference in variability arises from a lack of experience, or less familiarity, with the rolled condition (Sober & Körding, 2012). To differentiate between these two accounts, we previously asked human participants to perform visually guided reaching movements both with the head rolled and with a neck load (Abedi Khoozani & Blohm, 2018). Our rationale was that if familiarity determined variability, then neck load should have no effect; conversely, active inference of head roll angles should result in larger (resp. smaller) variability due to increased (resp. decreased) signal-dependent noise from muscle activity (Cordo et al., 2002; Faisal et al., 2008; Lechner-Steinleitner, 1978; Sadeghi et al., 2007; Scott & Loeb, 1994) in the case of resistive (resp. assistive) load conditions. Since larger joint angle estimates and muscle activations are accompanied by higher uncertainty (Blohm & Crawford, 2007; Van Beuzekom & Van Gisbergen, 2000; Wade & Curthoys, 1997), both head roll and neck load manipulations should result in noisier coordinate transformations. Our results supported the hypothesis that signal-dependent noise in coordinate transformations increases movement variability (Abedi Khoozani & Blohm, 2018). However, it is unknown whether the brain incorporates this added movement variability into reaching movements when it compromises task performance.

One approach to investigate whether the brain is accounting for the added movement variability caused by stochastic coordinate transformations is to perform reaching movements in constrained environments, i.e. in the presence of obstacles. A failure in accounting for the added variability should result in behavioral consequences, i.e. obstacle collisions. In general, humans are successful in avoiding obstacles and they do so by accounting for several factors such as sensory uncertainty (Cohen, Biddle, & Rosenbaum, 2010), e.g. visual uncertainty, motor noise (Cohen et al., 2010; Hamilton & Wolpert, 2002), and biomechanical costs (Cohen et al., 2010; Sabes, 1997; Sabes, Jordan, & Wolpert, 1998; Voudouris, Smeets, & Brenner, 2012). For instance, Cohen et al. (2010) showed that both higher visual uncertainty and increased motor noise resulted in increased distance from the obstacle (increased safety margins). In addition, removing visual feedback results in slower reaction times (Khan, Elliott, Coull, Chua, & Lyons, 2002) and decreased endpoint accuracy (Chua & Elliott, 1993; Heath, 2005; Heath, Westwood, & Binsted, 2004; Khan et al., 2002) in reaching. These results show that obstacle avoidance is sensitive to many factors such as sensory accuracy, systemic noise (i.e. motor noise), associated biomechanical costs, and task requirements, providing an optimal test bed to evaluate if increased noise induced by stochastic coordinate transformations is considered during reaching.

To investigate whether humans can compensate for the higher movement variability caused by stochastic coordinate transformations, we designed a reaching task to a visual target while avoiding an obstacle. To modulate the uncertainty in the coordinate transformation, participants performed the reaching movements with different head orientations (30° toward the right shoulder (clockwise; CW), 0°, and 30° toward the left shoulder (counterclockwise; CCW)). As shown in previous studies, varying the head roll results in higher movement variability (Abedi Khoozani & Blohm, 2018; Alikhanian et al., 2015; Burns & Blohm, 2010). Therefore, we expected either a higher collision rate for the rolled head compared to the straight head condition, if the added movement variability was ignored, or compensatory behavior (e.g., increased error margins), if the added movement variability was considered. Furthermore, previous studies showed that providing visual feedback of the moving hand alleviates the effect of noisy coordinate transformations (Blohm & Crawford, 2007). Therefore, we asked participants to perform reaching movements with or without visual feedback. We expected to observe lower compensation, if any, in the visual feedback condition compared to the no-visual feedback condition. In agreement with previous studies, we observed that movement variability increased when the head was tilted. In both feedback conditions, this was accompanied by a change in the preferred direction of passing the obstacle and an increased error margin, while the collision rate remained unaffected. We conclude that the brain accounts for the added uncertainty due to coordinate transformations and compensates for it whenever it compromises task performance.

Materials and methods

Participants

We collected data from 18 healthy humans (10 female) aged between 19 and 38 years (M = 25 years) with normal or corrected-to-normal vision. All participants were right-handed by self-report and free of any known neurological issues. The experiment was approved by the general board of ethics of Justus Liebig University Giessen, and all participants gave their written informed consent. They received monetary compensation (8 € / hour) or course credits for their participation.

Apparatus and stimuli

Participants were seated in front of a workspace that comprised a robotic setup with a graspable handle (vBot; Howard, Ingram, & Wolpert, 2009), a monitor, and a mirror. A helmet with a long protruding stick and a measuring framework were used to control the head roll in each condition (Figure 1A). Visual stimuli were presented on the monitor and reflected in the mirror, which was placed above the robot (Figure 1B).

Figure 1. Overview of the experimental setup.

A) Participants performed the reaching task in three head orientations (30° CCW, 0°, and 30° CW). Only the framework of the robotic setup and the head orientation setup is displayed. B) The vBot setup consisted of a mirror that displayed the task instructions from the monitor. Below the mirror, a robot handle was placed that could be moved freely by the participants. C) Participants brought their hand to the start point and reached to the target while avoiding the obstacle in the center. Obstacles were randomly shifted to the left or right, along the X-position, across trials. Possible shift positions are depicted by “×”, with the letters indicating the shifts; ML: most leftward, L: leftward, C: central, R: rightward, MR: most rightward. D) Example trial: first, participants were instructed to bring their hand to the starting position (1); as soon as they arrived at the starting position (2), the target and the obstacle (here central) appeared and participants were instructed to move to the target position in less than 1000 ms (3).

A mirror was placed horizontally in front of the participants. This mirror prevented vision of the arm, so that participants were unable to see the movements they performed below the mirror. The visual stimuli were presented on a monitor and reflected on the mirror that was placed below it (Figure 1B). Four disks and the visual instructions were presented in each trial. Two discs (0.5 cm diameter) were blue, serving as the starting and target positions. Both were located in the middle of the screen laterally (X-position, aligned with the middle of the body). The starting and target positions were 9 and 11 cm closer to and further from the centre of the screen, respectively, resulting in a distance of 20 cm between the two. A third disc (1.8 cm diameter), colored red, served as the obstacle. It was presented in the middle of the screen in one of five possible lateral positions: at the center (in line with the starting and target positions) or shifted by 1.8 cm or 3.6 cm to the right or left of that central position (Figure 1C). To simulate a physical obstacle, a repulsive force field (between 0 and 40 N; faster movements toward the center resulted in higher forces) acting from the center of the obstacle was applied (i.e. vBotDisc). Consequently, the handle could not move into the obstacle. Finally, a fourth, white disk (0.5 cm diameter) represented the position of the robotic handle whenever visual feedback was provided. A visual instruction prompting participants to move to the starting position at the beginning of the trial was presented at the centre of the screen. Visual stimuli were implemented using Psychtoolbox (MATLAB 2015), while the force field and the robot control were implemented in C++.
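As an illustration of such a repulsive force field, the sketch below pushes the handle away from the obstacle centre with a force that grows with penetration depth and with the speed of approach, clamped to the 0-40 N range mentioned above. The gains and the exact force law are assumptions for illustration; the vBot implementation is not specified in the text.

```python
import numpy as np

def obstacle_force(hand_pos, hand_vel, obstacle_pos, radius=0.009,
                   stiffness=2000.0, damping=40.0, max_force=40.0):
    """Sketch of a repulsive force field around a disc-shaped obstacle.

    Pushes the handle radially outward once it enters the obstacle disc;
    movement toward the obstacle centre (negative radial velocity) adds a
    damping term, so faster approaches meet a larger force. The force is
    clamped to the 0-40 N range. Gains are illustrative, not vBot values.
    """
    r = np.asarray(hand_pos, float) - np.asarray(obstacle_pos, float)
    dist = np.linalg.norm(r)
    if dist >= radius or dist == 0.0:
        return np.zeros(2)                      # outside the obstacle: no force
    u = r / dist                                # outward unit vector
    penetration = radius - dist                 # how far inside the disc
    approach = max(0.0, -np.dot(hand_vel, u))   # speed toward the centre
    magnitude = min(max_force, stiffness * penetration + damping * approach)
    return magnitude * u
```

A handle resting just inside the boundary feels a small spring-like push, while a handle driven quickly toward the centre is repelled with up to the full 40 N.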

Procedure

In order to reach to the viewed target, participants grasped the robot’s handle with their right hand while resting their forehead on the workspace’s framework in front of the screen. They performed the task in three possible head orientations: 30° CCW, 0°, 30° CW. Each head orientation condition was performed with and without visual feedback of their reaching hand. The visual feedback was provided by a moving white disc (0.5 cm diameter) on the screen representing the robot handle position. This resulted in six combinations, each of which was presented in separate blocks of trials. At the beginning of each block, the experimenter positioned the participant’s head in the respective orientation. Before the start of each trial, participants grasped the robot handle and brought it to a fixed starting position using visual feedback of the robot’s handle. After positioning the hand on the starting position, the target and an obstacle were simultaneously displayed on the mirror. Participants were instructed to immediately start reaching towards the visual target while avoiding the obstacle. If participants were able to reach the target without hitting the obstacle in less than 1000 ms from the moment of target presentation, the visual target would turn green indicating that the trial was successful. Otherwise the target would turn red indicating that the trial was aborted and would be repeated later. At the end of each trial all visual stimuli disappeared and the next trial started with the appearance of the starting position.

Before starting the experiment, participants performed a short practice block of 20 trials. Within each of the six blocks, each obstacle position was presented 48 times, resulting in 240 trials per block and thus 1440 trials for the complete experiment, which lasted roughly 60 minutes. The combination of head angle and visual feedback for each block was chosen based on the Latin squares method to counterbalance across all conditions (Jacobson & Matthews, 1998).
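For illustration, one standard way to build such a counterbalancing scheme for six conditions is a balanced (Williams) Latin square, in which each condition appears once per row and column and follows every other condition equally often. This is a common construction and not necessarily the exact randomization of Jacobson & Matthews (1998) used in the experiment.

```python
def williams_latin_square(n):
    """Balanced (Williams) Latin square for an even number of conditions.

    Row i gives the condition order for participant i (mod n). Each
    condition appears once per row and once per column. Standard
    construction: the first row interleaves ascending and descending
    sequences; the remaining rows are cyclic shifts of it.
    """
    if n % 2:
        raise ValueError("Williams design shown here requires even n")
    first = []
    lo, hi = 0, n - 1
    for k in range(n):
        first.append(lo if k % 2 == 0 else hi)
        if k % 2 == 0:
            lo += 1
        else:
            hi -= 1
    # Remaining rows are cyclic shifts (add i mod n).
    return [[(c + i) % n for c in first] for i in range(n)]
```

With n = 6 condition labels (three head orientations × two feedback conditions), each of the 18 participants would be assigned row i mod 6 as their block order.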

Data analysis

All offline analyses were performed using MATLAB 2018. For each trial, trajectories were normalized using functional data analysis (Ramsay & Silverman, 2005): B-splines were fit to each dimension of the raw data (x and y) and over-sampled to 2000 equally spaced time points. A central difference algorithm was used to calculate hand velocity and acceleration. Before each differentiation, a low-pass filter (autoregressive forward-backward filter, cutoff frequency = 50 Hz) was used to smooth the data. We determined movement onset by finding the moments at which the velocity reached 25% and 75% of the trial’s peak velocity and extrapolating the line through these two moments backward until it crossed the trial’s baseline velocity, measured as the average velocity during the first 200 ms of the trial (for further details see Brenner & Smeets, 2018). The reaction time (RT) was calculated as the time between target appearance and movement onset. The trial ended as soon as the participant’s hand was less than 0.5 cm away from the center of the target when hand visual feedback was provided. When visual feedback of the hand was absent, the end of the trial was defined as the moment when the participant’s hand reached the position in depth of the target and was less than 4 cm away from its lateral position. The movement duration (MD) was calculated as the time difference between the end of the trial and movement onset. Trials with RT < 100 ms (predictive movements) or RT + MD > 1000 ms (too slow) were considered invalid and removed from the analysis (1.7%).
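The onset-detection rule described above can be sketched as follows. This is an illustrative Python reimplementation of the extrapolation method, not the authors' MATLAB code; the sampling layout and baseline window are assumptions.

```python
import numpy as np

def movement_onset(t, speed, baseline_window=0.2):
    """Onset estimate by backward extrapolation (after Brenner & Smeets, 2018).

    Finds the first samples at which speed rises through 25% and 75% of its
    peak, draws a line through those two points, and returns the time at
    which that line crosses the baseline speed (mean speed over the first
    `baseline_window` seconds of the trial).
    """
    t = np.asarray(t, float)
    speed = np.asarray(speed, float)
    baseline = speed[t < t[0] + baseline_window].mean()
    peak = speed.max()
    i_peak = speed.argmax()
    # First crossings of the 25% and 75% thresholds on the rising flank.
    i25 = np.argmax(speed[:i_peak + 1] >= 0.25 * peak)
    i75 = np.argmax(speed[:i_peak + 1] >= 0.75 * peak)
    slope = (speed[i75] - speed[i25]) / (t[i75] - t[i25])
    # Extrapolate the line back until it meets the baseline speed.
    return t[i25] + (baseline - speed[i25]) / slope
```

For a speed profile that is flat at zero and then ramps up linearly, the extrapolated line recovers the exact start of the ramp.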

To assess the effect of varying head orientation on movement behavior, we calculated movement variability across the whole reaching trajectory. To do so, the normalized trajectories of each participant were averaged separately for each obstacle position and movement direction (rightward or leftward of that obstacle). Since movements were predominantly along the Y-position direction, we only calculated the standard deviation of the handle’s lateral position across trials for each of the 2000 normalized steps. Then, we calculated the boundaries of the averaged trajectories by adding/subtracting the standard deviation to/from the mean of the trajectory along the X-position (Figure 2A). Finally, we calculated the movement variability as the area between the trajectory boundaries. We performed these steps separately for each participant, head orientation, visual condition, and movement direction for each obstacle position. We considered the calculated movement variability for a given direction and obstacle valid only if we had sufficient data: if the number of movements in a certain direction for an obstacle was less than 10% of the overall movements for that obstacle, we considered the movement variability invalid.
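The variability measure described above, the area enclosed between the mean ± SD boundaries of the lateral position, can be sketched as follows (an illustrative reimplementation; variable names are ours):

```python
import numpy as np

def trajectory_variability(trajs_x, y):
    """Movement variability as the area between the +/-1 SD boundaries.

    `trajs_x`: (n_trials, n_samples) lateral (X) positions of the
    normalized trajectories; `y`: (n_samples,) positions along the
    movement (Y) direction. The boundaries are mean +/- SD of X at each
    normalized sample; the enclosed area (2 * SD integrated along Y) is
    returned, computed with the trapezoidal rule.
    """
    trajs_x = np.asarray(trajs_x, float)
    sd = trajs_x.std(axis=0, ddof=1)          # lateral SD at each sample
    upper = trajs_x.mean(axis=0) + sd         # right boundary
    lower = trajs_x.mean(axis=0) - sd         # left boundary
    d = upper - lower                         # boundary-to-boundary width
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(np.asarray(y, float))))
```

Two perfectly straight trajectories at x = -1 and x = +1 over a unit movement length, for example, give an area of 2·√2 (twice the sample SD times the path length).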

Figure 2. Variability, rotational, and expansion biases calculations.

A) The overall movement variability was calculated as the area between the movement boundaries. The boundaries were calculated for each participant separately by adding/subtracting the standard deviation of the lateral movements to/from the average for all 2000 samples along the trajectory. B) Rotational biases (α): we hypothesized that rolling the head creates rotational biases resulting in symmetrical shifts of the trajectories for the 30° CW/CCW head orientations (colored solid lines) around the straight head trajectory (black solid line). C) Expansion biases (β): varying head orientation increases movement variability, resulting in hand trajectories moving away from the obstacle. D) The combination of rotational and expansion biases results in an asymmetry of the shifted trajectories for the 30° CW/CCW head orientations (colored solid lines) compared to the straight head (black solid line).

In the next step, we assessed whether the added movement variability due to rolling the head had a tangible effect on movement strategies. To do so, we considered two parameters: the direction of passing the obstacle (i.e. around its right or left side) and the distance while passing the obstacle. For movement direction, we calculated the percentage of rightward movements and expected more rightward movements for obstacles shifted to the left, the reverse for obstacles shifted to the right, and a similar percentage of rightward and leftward movements for the central obstacle. We hypothesized that rolling the head should modulate the preferred movement direction, most noticeably for the central obstacle. With regard to distance from the obstacle, we hypothesized that reaching trajectories should deviate further away from the obstacle in the head roll conditions in order to compensate for the added movement variability. This should be reflected in a larger curvature, which is more noticeable for the central obstacle (β: expansion biases; Figure 2C). Based on our earlier findings on under- and over-compensation for head roll (Abedi Khoozani & Blohm, 2018), we further expected that movement trajectories for the straight head condition should fall symmetrically between the trajectories for the two head roll conditions (α: rotational biases; Figure 2B). In our data, both expansion and rotational biases are combined; however, each affects the trajectory differently. That is, rotational biases are expected to be symmetrical around the straight head condition, while expansion biases should shift the trajectory in the same direction regardless of head roll direction. To separate rotational and expansion biases from each other, we employed the following method.
First, we assumed that both expansion and rotational biases are multiplicative; therefore, the shift caused by each of them is a multiple of the maximum curvature of the normalized trajectory for the head straight condition with the same direction and visual condition as the rest of the considered trajectories in the calculation (mcH0). Using the assumptions mcHR/L = β · |mcH0| and Δxα = sin(α̂) · |mcH0| (where α̂ is the estimated head angle for the head roll conditions), we first calculated the shifts in trajectories (the difference of the averaged trajectories between straight head and 30° CW/CCW head orientations) for each head orientation caused by expansion (Δxβ) and rotation (Δxα) as follows: Δx(R,HL) = Δxβ + Δxα and Δx(R,HR) = Δxβ − Δxα, so that Δxβ = (Δx(R,HL) + Δx(R,HR)) / 2 and Δxα = (Δx(R,HL) − Δx(R,HR)) / 2, where the first subscripts indicate the movement direction (R: rightward and L: leftward) and the second ones indicate the head roll direction (HL: 30° CCW and HR: 30° CW). The visualization of the variables for the rightward movement direction is provided in Figure 2D.

For expansion biases, we calculated the percentage of expansion as: expansion (%) = Δxβ / |mcH0| × 100; therefore, positive values indicate expansion while negative values indicate shrinkage. For rotational biases, we calculated the angular value as: α̂ = arcsin(Δxα / |mcH0|).

Here, for simplification, we assumed that the rotational effect on horizontal values is small and therefore sin(α̂) ≈ α̂, so that α̂ ≈ Δxα / |mcH0|. Similar calculations can be applied for the leftward movements.
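The decomposition described above, in which expansion shifts both head-roll trajectories in the same direction while rotation shifts them symmetrically in opposite directions, can be sketched as follows. The mean/half-difference split and the small-angle conversion are our reading of the method, not the authors' exact code.

```python
import numpy as np

def decompose_biases(dx_HL, dx_HR, mc_H0):
    """Split head-roll trajectory shifts into expansion and rotation parts.

    `dx_HL`, `dx_HR`: lateral shifts of the maximum-curvature point for the
    30 deg CCW and 30 deg CW head rolls relative to the head-straight
    trajectory; `mc_H0`: maximum curvature of the head-straight trajectory.
    Expansion is the common (mean) component of the two shifts; rotation is
    the antisymmetric (half-difference) component, converted to degrees
    under a small-angle approximation.
    """
    dx_beta = 0.5 * (dx_HL + dx_HR)                 # common (expansion) part
    dx_alpha = 0.5 * (dx_HL - dx_HR)                # antisymmetric (rotation) part
    expansion_pct = 100.0 * dx_beta / abs(mc_H0)    # percent expansion/shrinkage
    alpha_deg = np.degrees(dx_alpha / abs(mc_H0))   # small-angle rotation estimate
    return expansion_pct, alpha_deg
```

For instance, shifts of 2 cm (CCW) and 1 cm (CW) around a 5 cm head-straight curvature decompose into a 30% expansion and a small positive rotation.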

Finally, to assess whether participants were able to compensate for the effect of varying head orientation, we calculated the movement speed and the collision rate, the latter by dividing the number of collisions with the obstacle by the total number of valid trials for each head roll and visual feedback condition.

The data and analysis code of this manuscript are available at OSF (https://osf.io/tf8p5/) and GitHub (https://github.com/Parisaabedi/Obstacle-Avoidance).

Statistical analysis

We used JASP (https://jasp-stats.org/) to perform the statistical analyses. To examine the effects of head orientation (0° and 30° CW/CCW), obstacle position (most leftward, leftward, central, rightward, and most rightward), and visual feedback (with and without) on the above-mentioned dependent variables (movement variability, movement direction, rotational and expansion biases, movement speed, and collision rate), we used repeated measures ANOVAs. Significant differences between conditions were further investigated with paired-samples t-tests, and the reported p-values were Bonferroni-Holm corrected. The saved files of the statistical analyses are available at GitHub (https://github.com/Parisaabedi/Obstacle-Avoidance).
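The Bonferroni-Holm step-down correction named above can be reproduced outside JASP. The sketch below is a generic reimplementation of the standard procedure, not JASP's internal code.

```python
def holm_bonferroni(pvals):
    """Holm-Bonferroni step-down adjustment for a family of p-values.

    Returns adjusted p-values in the original order: sort ascending,
    multiply the i-th smallest p-value (0-indexed rank) by (m - i),
    enforce monotonicity across ranks, and cap at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)   # adjusted p-values stay monotone
        adjusted[i] = running_max
    return adjusted
```

For three comparisons with raw p-values 0.01, 0.04, and 0.03, the adjusted values are 0.03, 0.06, and 0.06, so only the first comparison survives at α = 0.05.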

Results

The objective of this study was to investigate whether humans compensate for movement variability caused by increased demands in coordinate transformations. To this aim, we asked participants to reach to a visual target without colliding with an obstacle under different head orientations (30° CW/CCW and 0°). We hypothesized that participants would increase their safety margins, i.e. show stronger trajectory curvature, in the 30° CW/CCW head orientations compared to the straight head in order to compensate for the increased uncertainty. In addition, participants could also decrease their movement velocity to compensate for higher uncertainty.

In the first step, we demonstrate that participants indeed adapted their movement strategies to compensate for the higher variability. Figure 3 illustrates the trajectories of two example participants (#2 and #16). Both participants successfully avoided the obstacle in all head orientations (the increase in collision rate was less than 2%); however, the two participants showed different movement behavior in the rolled head orientations (green and blue) compared to the straight head (black). Specifically, participant #2 moved further away from the obstacle to successfully reach the target (increased error margin). In contrast, participant #16 kept the same distance from the obstacle, but instead decreased the peak velocity. Based on these results, one can speculate that humans might compensate for the added variability due to stochastic coordinate transformations using different strategies.

Figure 3. Obstacle avoidance strategies of two participants.

The data are shown for the central obstacle position (red circle) without visual feedback. The left panel illustrates the trajectory and velocity data for participant #2. This participant moved further away from the obstacle, specifically for rightward movements, in the head roll conditions (green and blue solid lines) compared to the straight head condition (solid black line). The two panels on the sides show a zoomed version of the trajectories. The peak velocity did not change for rightward movements but increased for leftward movements. The right panel illustrates the behavior of participant #16. In contrast to participant #2, this participant decreased the peak velocity and also decreased the distance from the obstacle, especially for leftward movements.

Figure 4. Effect of varying head orientation on movement variability.

A) Visual feedback condition: varying the head orientation did not affect the movement variability. B) Without visual feedback condition: participants showed different effects of varying head orientation on their movement variability. Some participants showed increased movement variability while others showed decreased movement variability. Error bars are standard deviations.

Rolling the head increased movement variability

Previous studies demonstrated that rolling the head increases movement variability (Burns & Blohm, 2010; Abedi Khoozani & Blohm, 2018). Therefore, in the first step we investigated the effect of varying head orientation on movement variability depending on the visual feedback of the hand. Recall that we calculated the movement variability as the area between the lateral deviation boundaries around the averaged trajectory. Since we did not expect any effect of obstacle position on movement variability, we performed a 3 (head orientation) × 2 (visual feedback) repeated measures ANOVA. We observed a main effect of head roll (F(2,34) = 4.39, p = 0.020, η2 = 0.205), a main effect of visual feedback (F(1,17) = 31.36, p < 0.001, η2 = 0.648), and no interaction between head roll and visual feedback (p = 0.403). Post-hoc t-tests for the head orientation effect revealed a significant increase of movement variability for the CW head orientation compared to the straight head (t(17) = 2.729, p = 0.043, Cohen’s d = −0.643), while the increase for the CCW compared to the straight head orientation did not reach significance (t(17) = 2.171, p = 0.089). Unsurprisingly, there was no difference between the CCW and CW orientations (t(17) = 1.040, p = 0.313). Thus, we confirmed previous findings that rolling the head increases movement variability. In the next step we assessed whether participants adapted their movement behavior to compensate for the increased movement variability.

Participants adapted their obstacle avoidance behavior for head roll conditions

To further explore the effect of head roll on movement strategies, we determined the following parameters: Directional changes (preferred direction to pass the obstacle), rotational and expansion biases, and collision rate. We found that all participants adapted their movement direction and expansion biases in the 30° CW/CCW head orientations to successfully reach to the target with similar collision rates as in the straight head orientation.

Directional changes

Figure 5 depicts the percentage of rightward movements for different head orientations, obstacle positions, and visual feedback conditions. Varying the head roll changed the preferred side of passing the obstacle (left vs. right): rolling the head CCW led to a tendency to pass the obstacle on the right side, while rolling the head CW shifted the tendency to pass the obstacle on the left side. Unsurprisingly, shifting the obstacle to the right or left of the central position changed the preferred direction of movement. For example, when the centrally placed obstacle was shifted to the right, participants preferred to pass it on its left side, and vice versa. Lastly, having visual feedback of the movement did not seem to influence the passing side. The 3 (head orientation) × 5 (obstacle position) × 2 (visual feedback) repeated measures ANOVA revealed a main effect of head orientation (F(2,34) = 12.564, p < 0.001, η2 = 0.43), a main effect of obstacle position (F(4,68) = 290.279, p < 0.001, η2 = 0.95), and an interaction between head orientation and obstacle position (F(8,136) = 405.711, p < 0.001, η2 = 0.29). As can be seen in Figure 5 and revealed by the statistical analysis, there was no difference between the two obstacle configurations on the left (most leftward and leftward) or on the right (most rightward and rightward) of the central obstacle. As there was no effect of visual feedback (p = 0.963) and no interaction between visual feedback and any other condition (all p’s > 0.2), we collapsed the percentage of rightward movements across the visual feedback conditions as well as across the two leftward and the two rightward obstacle configurations (Figure 5C). The repeated measures ANOVA for the collapsed data for the central obstacle revealed a main effect of head orientation (F(2,68) = 15.91, p < 0.001, η2 = 0.48).
Post-hoc t-tests showed a significant difference between the straight head and the CW head orientation (t(17) = 3.076, p = 0.021, Cohen’s d = 0.73), between the straight head and the CCW head orientation (t(17) = 2.589, p = 0.019, Cohen’s d = 0.61), and between the CW and the CCW head orientation (t(17) = 5.703, p < 0.001, Cohen’s d = 1.344). These results demonstrate that, compared to the straight head, participants opted for more rightward crossings when rolling the head CCW and more leftward crossings when rolling the head CW.

Figure 5. Percentage of rightward movements.

Head roll caused changes in the preferred direction of movement A) with and B) without visual feedback. In both conditions, rolling the head CCW increased the tendency to pass the obstacle on the right side. Obstacle positions: ML: most leftward, L: leftward, C: central, R: rightward, MR: most rightward. Open circles represent single participants. C) Since there was no difference between the leftward obstacle shifts (ML and L) or between the rightward obstacle shifts (R and MR), we collapsed the leftward and rightward shifts. The star indicates statistical significance with p < 0.05.

Rotational and expansion biases

We showed that increasing the coordinate transformation demands affects the obstacle avoidance strategy as reflected in the preferred direction to cross an obstacle. In the next step, our goal was to examine the effect of increased movement variability on the error margins.

To do so, we first needed to select the trajectories for which we had enough data. As shown in Figure 5, varying the obstacle position modulated the percentage of rightward movements independently of head orientation. For instance, for the most leftward obstacle, more than 95% of the movements passed the obstacle on the right side; thus only very few leftward trajectories occurred, which is not sufficient for further analysis. As a result, from here on, for non-central obstacle positions we only considered the reasonably preferred direction, i.e., the direction chosen in more than 10% of the overall movements (rightward movements for obstacles shifted to the left and vice versa).

In addition to changing the preferred direction, increasing the safety margin by increasing the trajectory curvature could also compensate for the increased variability caused by head roll. However, we expected to observe increased curvature only when necessary, i.e., when the distance between the moving hand and the obstacle was not large enough to avoid a collision. For instance, as mentioned earlier, in the absence of visual feedback, participant #2 increased the curvature in the head-rolled conditions only when passing the obstacle with rightward but not with leftward movements (Figure 3, left panel). One possible explanation is that removing visual feedback already produced a higher safety margin for leftward movements in the straight head condition (difference between the solid and dotted lines in Figure 6F-H), leaving no need to further increase the curvature in the head roll conditions.

Figure 6. Participants showed both rotational and expansion biases caused by varying head orientation.

A-E: Movement trajectories for different head orientations and obstacle positions when visual feedback was provided. Trajectories are averaged across participants; shaded areas indicate the standard error of the mean. The obstacle is depicted as the red circle. A: most rightward obstacle, B: rightward obstacle, C: central obstacle, D: leftward obstacle, E: most leftward obstacle. Providing visual feedback of the hand abolished the rotational biases but not the expansion biases. F-J: Movement trajectories for the same obstacle positions as in A-E when visual feedback was removed. The shift due to removing visual feedback (difference between the black dotted and solid lines) was most prominent for leftward movements ([F] and [G]) compared to rightward movements ([I] and [J]). Rightward movements showed both rotational and expansion biases, while leftward trajectories mainly overlap.

To investigate the effect of head roll on movement trajectories across all participants, we plotted the pooled trajectories for each obstacle, both with and without visual feedback (Figure 6A-E and Figure 6F-J, respectively). Since the effects were more prominent when visual feedback was withdrawn, we focus on this condition. As illustrated in Figure 6F-J, rolling the head created both rotational biases (blue and green trajectories are shifted to opposite sides of the black one) and expansion biases (the trajectory shifts are not symmetrical, e.g., the green trajectory lies close to the black one in Figure 6I-J) when visual feedback of the hand was removed. The effect of head roll was more noticeable for rightward (Figure 6I-J) than for leftward movements (Figure 6F-G). However, one needs to consider that, with the head straight, the shift caused by removing visual feedback was already stronger for leftward than for rightward movements (the difference between the black solid line, without visual feedback, and the black dotted line, with visual feedback). This observation is in line with previous studies (Chapman & Goodale, 2008; De Haan et al., 2014; Menger, Dijkerman, et al., 2013; Menger et al., 2012; Ross et al., 2018; Ross et al., 2015) and, as Menger et al. (2013) illustrated, is mainly due to the degree of obstructiveness of the obstacle. This indicates that people adapt their compensatory behavior only when necessary: if the safety margins for the leftward movements were already sufficiently large, there was no need to increase them further in the presence of higher uncertainty.

To quantify the effect of head orientation on safety margins, we first separated the rotational biases (due to misestimation of the head angle) from the expansion biases (due to uncertainty in the head angle estimate). For details of this calculation, see the “Materials and Methods” section. We performed the calculations for each participant and each obstacle, separately for each visual feedback condition. Next, we combined the three leftward movements (for the MR, R, and C obstacle positions) and, similarly, the three rightward movements (for the ML, L, and C obstacle positions). Figure 7 shows the resulting rotational biases (Figure 7A) and expansion biases (Figure 7B) for the two movement directions.
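The exact decomposition is defined in the Materials and Methods. Purely as an illustrative sketch (not the paper's method; the similarity-transform parameterization and all names here are assumptions), one could model each head-roll trajectory as a rotation (rotational bias) plus a radial expansion (expansion factor β) of the straight-head trajectory about the movement start, and recover both with a least-squares fit:

```python
import numpy as np

def fit_rotation_expansion(traj_straight, traj_rolled, start):
    """Fit traj_rolled ≈ rotation-plus-scaling of traj_straight about `start`.

    traj_*: (N, 2) arrays of x/y samples at matching points along the path.
    Returns (theta, beta): rotation angle in radians and expansion factor.
    """
    # represent points relative to the movement start as complex numbers
    z0 = (traj_straight[:, 0] - start[0]) + 1j * (traj_straight[:, 1] - start[1])
    z1 = (traj_rolled[:, 0] - start[0]) + 1j * (traj_rolled[:, 1] - start[1])
    # least-squares similarity transform z1 ≈ c * z0, with c = beta * exp(i*theta)
    c = np.sum(z1 * np.conj(z0)) / np.sum(np.abs(z0) ** 2)
    theta = np.angle(c)  # rotational bias (rad)
    beta = np.abs(c)     # expansion factor (> 1 widens the trajectory)
    return theta, beta
```

In this toy parameterization, θ would correspond to the rotational bias and β to the expansion factor, with β = 1.2 corresponding to the +20% expansion used in Figure 7B.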

Figure 7. Quantification of rotational and expansion biases caused by rolling the head.

A) Rotational biases caused by misestimation of the head angle. B) Expansion biases caused by increased uncertainty due to head roll. Error bars are standard errors of the mean. Stars represent statistical significance, p < 0.05.

Regarding the rotational biases, similar to previous studies (Burns & Blohm, 2010; Abedi Khoozani & Blohm, 2018), we took a positive rotational bias as an indicator of an overestimated head angle and a negative bias as an indicator of an underestimated head angle. In the presence of visual feedback, the rotational biases did not differ significantly from zero for either rightward (t(17) = 0.926, p = 0.367, Cohen’s d = 0.22) or leftward movements (t(17) = 2.115, p = 0.050, Cohen’s d = 0.50). In contrast, when visual feedback was removed, we observed an overestimation of the head angle for rightward movements (t(17) = 2.521, p = 0.022, Cohen’s d = 0.59) and no statistically significant rotational bias for leftward movements (t(17) = 0.646, p = 0.527, Cohen’s d = 0.15).

Regarding the expansion bias, a positive value indicates that participants increased the movement curvature to create a higher safety margin. Consequently, a zero expansion bias means no change in movement curvature for the CW/CCW head orientations compared to the straight head orientation, and a negative expansion bias indicates a smaller distance between the movement trajectory and the obstacle for the CW/CCW head orientations than for the straight head orientation. Additionally, for better visualization we report expansion/shrinkage percentages; i.e., +20% is equivalent to β = 1.2 and −20% is equivalent to β = 0.8. As can be seen in Figure 7B, participants significantly increased their movement curvature when rolling the head, compared to keeping it straight, for rightward movements both with visual feedback (t(17) = 3.082, p = 0.007, Cohen’s d = 0.73) and without visual feedback (t(17) = 4.142, p < 0.001, Cohen’s d = 0.98). For leftward movements, the movement curvature increased when visual feedback was provided (t(17) = 3.388, p = 0.003, Cohen’s d = 0.80). However, when visual feedback was withdrawn, the curvature of leftward movements did not differ significantly between head orientations (t(17) = 1.537, p = 0.143, Cohen’s d = 0.36), possibly because such movements already had a large safety margin from the obstacle (see Figure 6F-H). In contrast, for rightward movements, removing visual feedback by itself did not lead to higher movement curvature; therefore, increasing the curvature in the head roll conditions was necessary to enlarge the safety margin and successfully avoid the obstacle.
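The β-to-percentage conversion and the one-sample comparisons against "no expansion" can be sketched as follows (an illustrative sketch only, with hypothetical function names; not the authors' code):

```python
import numpy as np

def expansion_percent(beta):
    """Convert an expansion factor to the percentage convention of Figure 7B:
    beta = 1.2 -> +20%, beta = 0.8 -> -20%."""
    return 100.0 * (np.asarray(beta, dtype=float) - 1.0)

def one_sample_t(values, mu=0.0):
    """One-sample t-statistic against mu (df = n - 1), e.g. testing
    per-participant expansion percentages against zero."""
    x = np.asarray(values, dtype=float)
    return (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(x.size))
```

A zero expansion percentage (β = 1) is the null hypothesis of no curvature change relative to the straight head condition.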

Movement speed

As mentioned before, adjusting movement speed is another possible strategy to counteract increased movement variability due to rolling the head. However, we did not find any changes in movement speed for any of the experimental conditions (all p > 0.1).

Adapted behavior resulted in the same collision rate for different head orientations

As demonstrated in the previous sections, participants adapted their behavior (movement directions and safety margins) to the different head orientations. It is plausible that the purpose of this adapted behavior was to compensate for the increased movement variability due to head roll. We did not expect any difference in collision rate between obstacle positions; therefore, we pooled the data across obstacle positions and assessed whether head orientation or visual feedback affected the collision rate. The 3 (head orientation) × 2 (visual feedback) repeated measures ANOVA revealed no main effect of head orientation (F(2,34) = 0.100, p = 0.905, η2 = 0.006), a main effect of visual feedback (F(1,17) = 12.831, p = 0.002, η2 = 0.430), and no interaction between the two (F(2,34) = 1.044, p = 0.363, η2 = 0.058). As illustrated in Figure 8, removing visual feedback increased the collision rate. However, in both visual feedback conditions the collision rate remained the same across head orientations, indicating that participants successfully compensated for the added variability due to varying head orientations.
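The one-way part of such an analysis (head orientation, after pooling) can be sketched with a plain repeated-measures F computation (a hedged illustration with synthetic data, not the authors' pipeline):

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA F-test.

    data: array of shape (n_subjects, n_conditions), e.g. collision rates
    per participant for the three head orientations.
    Returns F, (df_cond, df_err), and partial eta squared.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * np.sum((data.mean(axis=0) - grand) ** 2)    # condition effect
    ss_subj = k * np.sum((data.mean(axis=1) - grand) ** 2)    # subject effect (removed)
    ss_err = np.sum((data - grand) ** 2) - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    eta_p2 = ss_cond / (ss_cond + ss_err)  # partial eta squared
    return F, (df_cond, df_err), eta_p2
```

With 18 participants and three head orientations, the degrees of freedom are (2, 34), matching the head orientation effects reported above.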

Figure 8. Effect of head orientation on collision rate.

A) Visual feedback condition: participants were able to perform the reaching task successfully, showing a low collision rate for all head orientations. B) No visual feedback condition: removing the visual feedback increased the overall collision rate irrespective of head orientation. Error bars are standard deviations.

Discussion

The goal of the current study was to assess whether and how humans account for the added movement uncertainty induced by stochastic coordinate transformations in goal-directed movements. To this end, we asked participants to reach to visual targets while avoiding obstacles under different head orientations (straight and 30° CW/CCW) and visual feedback conditions (with/without visual feedback of the hand). We hypothesized that if humans compensate for the increased uncertainty caused by stochastic coordinate transformations, varying head orientation should not affect their performance (i.e., the collision rate should be the same for all head orientations). If so, we expected to observe compensatory effects in the trajectories, such as increased error margins (increased curvature), in the rolled compared to the straight head conditions. As expected, rolling the head increased movement variability. To accommodate this increased variability, participants adapted their movement behavior by varying their preferred movement direction and increasing their safety margins from the obstacle (based on collision likelihood). Consequently, the collision rate remained the same for all head orientations. Thus, humans take the increased movement variability resulting from stochastic coordinate transformations into account in their goal-directed movements.

The main assumption of the current study is that the stochasticity of coordinate transformations propagates to the final motor output. This assumption is based on numerous studies demonstrating that uncertainty in coordinate transformations causes higher variability in movement execution (Abedi Khoozani & Blohm, 2018; Biguer, Prablanc, & Jeannerod, 1984; Bock, 1986, 1993; Burns & Blohm, 2010; Henriques et al., 1998; Henriques & Crawford, 2000; Lewald & Ehrenstein, 2002; McGuire & Sabes, 2009; Schlicht & Schrater, 2007; Schütz, Henriques, & Fiehler, 2013; Sober & Sabes, 2003, 2005; Vaziri, Diedrichsen, & Shadmehr, 2006). For instance, when reaching to visual targets at different eccentricities with respect to gaze fixation, reaching movements overshoot the target in the absence of visual feedback (Bock, 1986, 1993; Henriques et al., 1998; Henriques & Crawford, 2000; Lewald & Ehrenstein, 2002; Vaziri et al., 2006). These overshoots likely arise from noise in transforming the visual estimate of the target into the proprioceptive estimate of the hand (Dessing et al., 2012). Furthermore, McGuire and Sabes (2009) showed that gaze-dependent errors vary with the target modality (visual, proprioceptive, or both) as well as with the available information about the initial hand position (with or without visual feedback). They showed that gaze-dependent reaching errors are only observable for visual targets and are abolished for proprioceptive targets, suggesting that the transformation of a visual target into the coordinate frame of the arm systematically affects reaching movements.
Based on the evidence that accurate coordinate transformations rely on an estimate of body geometry (Blohm & Crawford, 2007), previous studies further probed the effect of stochastic coordinate transformations on reaching movements by varying the reliability of the head angle estimate, i.e., by rolling the head and/or loading the neck (Abedi Khoozani & Blohm, 2018; Burns & Blohm, 2010). Both manipulations biased reaching movements and increased movement variability compared to a control condition (straight head and no neck load). Altogether, we therefore argue that uncertainty caused by stochastic coordinate transformations clearly propagates to the performed reaching movements.

If stochastic coordinate transformations cause higher movement variability, does the brain account for such noise when planning and executing goal-directed movements? In the following, we argue that it is rather unlikely that the brain ignores such noise.

We observed two main strategies to compensate for the increased movement variability caused by stochastic coordinate transformations: (1) changes in the preferred direction of passing the obstacle, and (2) increased safety margins. With regard to strategy (1), we believe it is driven by signal-dependent noise. Since for the rightward/leftward obstacles one direction was distinctly dominant (e.g., the rightward direction for the obstacle shifted to the most leftward position), we focus on the central obstacle, for which the likelihood of passing the obstacle on the right or left side was (almost) at chance level. To illustrate why changing the preferred direction mitigates the effect of coordinate transformations, consider the 30° CW head orientation. In this configuration, participants preferred to pass the obstacle on the left side (Figure 5). It has been shown that humans move their gaze to specific task-related landmarks (e.g., a possible contact point with the obstacle) during reaching movements to gain spatial information for movement control (Johansson et al., 2001). Consequently, with a 30° CW head orientation, a rightward eye rotation is required to obtain a more accurate view of the right side of the screen (with regard to the body and screen midpoint), while no such rotation is required for the left side of the screen. Meanwhile, accurate coordinate transformations rely on the estimation of body geometry (Blohm & Crawford, 2007), here both head and eye angles. Additionally, it is well established that the noise associated with this estimation is signal-dependent (Abedi Khoozani & Blohm, 2018; Burns & Blohm, 2010; Schlicht & Schrater, 2007); that is, the higher the amplitude of the signal, the noisier the estimate.
Therefore, the extra rotation and translation of the eye required for the right side of the screen may result in noisier eye-in-head orientation estimates and, consequently, noisier coordinate transformations. Accordingly, to decrease the uncertainty associated with the coordinate transformation, it is sensible to pass the obstacle on the left side, which is in accordance with our data (see Figure 5C). Hence, we argue that participants likely adapted their preferred movement direction systematically to decrease the uncertainty accompanying the required coordinate transformations and ultimately the likelihood of hitting the obstacle.

With regard to strategy (2), we observed increased safety margins (i.e., expansion biases) when the head was tilted, except for movements passing on the left side of the obstacles in the absence of visual feedback. The asymmetry in safety margin increases between rightward and leftward movements was unexpected. To explain this pattern, it is worth emphasizing that the increase in trajectory curvature due to removing visual feedback was much smaller for rightward than for leftward passes (compare the dotted and solid lines in Figure 6F-J), which is also in accordance with previous findings (Chapman & Goodale, 2008; De Haan et al., 2014; Menger et al., 2013; Menger et al., 2012; Ross et al., 2018; Ross et al., 2015). Menger et al. (2013) demonstrated that this asymmetry cannot be explained by motor lateralization and argued that the observed behavior is most likely due to the degree of intrusiveness of an obstacle for the whole arm (Menger et al., 2013; Menger et al., 2012). In other words, when humans pass an obstacle, they accurately account for the likelihood of hitting the obstacle with their whole moving arm (Voudouris et al., 2012). Furthermore, the expansion biases persisted even when visual feedback of the hand was available during the movement. While it has been argued that providing visual feedback of the hand removes the biases caused by gaze shifts and, more generally, by coordinate transformations (Brown, Marlin, & Morrow, 2015; Dessing et al., 2012; Saunders, 2004; Saunders & Knill, 2003), we argue that the signal-dependent noise still persists in the system. In other words, while the extra source of information (i.e., visual information about the hand position) decreases the amount of uncertainty, it does not fully abolish it. Similarly, Ross et al. (2015) observed that varying the fixation position while visual feedback was available caused participants to veer away from the fixated obstacle, as opposed to free viewing or central fixation. The authors speculated that this pattern can be explained by misestimation of the target position on the retina; however, we argue that the veering away from the fixated obstacle might be better explained by stochastic coordinate transformations: given that varying the gaze position results in higher uncertainty in the eye-in-head orientation estimate and consequently in noisier movements, it is sensible to increase the safety margin to decrease the likelihood of obstacle collision.

As mentioned above, participants in our study increased their safety margin in the tilted head conditions only if the initial safety margin (the curvature in the straight head condition) could not accommodate the added uncertainty due to stochastic coordinate transformations. More specifically, since the curvature in the straight head condition was already large enough for leftward movements without visual feedback, we did not observe any expansion biases there. This behavior can be explained by including signal-dependent movement noise in the calculation of collision likelihood and in motor planning (Hamilton & Wolpert, 2002; Harris & Wolpert, 1998). Harris and Wolpert (1998) proposed a theoretical framework, task optimization in the presence of signal-dependent noise (TOPS), and showed that including signal-dependent noise provides a general framework that can explain both saccadic and point-to-point arm movements. In a later study, Hamilton and Wolpert (2002) extended this framework and showed that TOPS can also predict the trajectories generated in an obstacle avoidance task. Based on these observations, they proposed that controlling the statistics of the movement (such as minimizing the endpoint error) while accounting for signal-dependent noise might offer a unifying principle for goal-directed movements. Therefore, it is crucial to better understand the nature of the signal-dependent noise corrupting the movements. While previous studies (Hamilton & Wolpert, 2002; Harris & Wolpert, 1998; Van Beers, Baraduc, & Wolpert, 2002) mainly associated signal-dependent noise with the amplitude of the motor command, we argue that the processing required for generating the motor command, e.g., coordinate transformations, can also cause movement variability, and that it is therefore important to account for such noise in the motor circuitry.
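The core property of signal-dependent noise, central to TOPS, can be illustrated with a toy Monte Carlo (a hedged sketch, not the TOPS model itself; the proportionality constant, one-step "endpoint", and trial count are arbitrary choices): motor noise whose standard deviation grows with command amplitude makes larger commands proportionally more variable.

```python
import numpy as np

def endpoint_spread(u, k=0.2, n_trials=2000, rng=None):
    """Monte Carlo spread of a one-step endpoint under signal-dependent noise:
    the executed command is u + noise, with noise SD proportional to |u|."""
    rng = np.random.default_rng(0) if rng is None else rng
    executed = u + rng.normal(scale=k * abs(u), size=n_trials)
    return executed.std(ddof=1)
```

In this toy setting, doubling the command amplitude doubles the endpoint spread, which is why minimizing endpoint variance under such noise constrains the whole command profile rather than just the endpoint.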

While the role of coordinate transformations in motor planning has been demonstrated in many studies (Abedi Khoozani & Blohm, 2018; Burns & Blohm, 2010; McGuire & Sabes, 2009; Sober & Sabes, 2003, 2005), their impact has, surprisingly, been fully ignored in the optimal motor control field. This study provides the first evidence that the brain accounts for the added movement variability due to uncertainty in coordinate transformations. However, it is not clear at which stage of motor planning and execution such compensation occurs. According to optimal control theory, the motor system selects the appropriate control law to calculate the motor command based on the desired task goal (e.g., grab a pen) and the current system states (e.g., limb position). Since both motor commands and sensory signals of motor performance are corrupted with noise, the optimal state estimator uses both sensory signals (feedback circuitry) and an efference copy of the motor commands (feedforward circuitry) to estimate the current state of the limb (Scott & Norman, 2003; Scott, 2004; Todorov, 2004; Todorov & Jordan, 2002). Within this framework, however, it is unknown in which coordinates each of these processes (feedback and feedforward) is carried out. It has been shown that it is beneficial to plan movements in multiple coordinate frames (McGuire & Sabes, 2009). In addition, the feedback of the movement can be represented in different coordinate frames, e.g., the visual feedback of the hand in retinal coordinates and the proprioceptive feedback of the hand in body coordinates. Thus, it is not obvious which coordinate system should be used for implementing optimal feedback control. For instance, all signals could be transformed into and combined in one coordinate frame (e.g., a visual, proprioceptive, or intermediate frame). Alternatively, each signal could be transformed into the other signal's coordinate frame (i.e., visual to proprioceptive and vice versa) and the error signal generated in all coordinate frames in parallel (similar to generating movement plans in multiple coordinate systems). Further modeling and experimental studies are required to investigate the role of coordinate transformations in the optimal motor control framework. Such studies have implications not only for the motor control field, but also for perception and decision making, as well as for applied fields such as brain-machine interfaces and robotics.

All in all, we believe that uncertainty in coordinate transformations results from signal-dependent noise and propagates to motor behavior. Additionally, the brain accounts for such noise during motor planning, and possibly execution, and adapts the behavior whenever such noise can compromise performance.

Acknowledgement

We would like to thank Marie Mosebach for her help in data collection. This project was funded by NSERC (Canada), DFG IRTG 1901 “The brain in action” (Germany), and DAAD (Germany).

Footnotes

  • https://osf.io/tf8p5/

  • https://github.com/Parisaabedi/Obstacle-Avoidance

References

  1. ↵
    Abedi Khoozani, P., & Blohm, G. (2018). Neck muscle spindle noise biases reaches in a multisensory integration task. Journal of Neurophysiology, 120(3), 893–909. https://doi.org/10.1152/jn.00643.2017
    OpenUrl
  2. ↵
    Alikhanian, H., Carvalho, S. R., & Blohm, G. (2015). Quantifying effects of stochasticity in reference frame transformations on posterior distributions. Frontiers in Computational Neuroscience, 9, 82. https://doi.org/10.3389/fncom.2015.00082
    OpenUrl
  3. Battaglia, P. W., & Schrater, P. R. (2007). Humans trade off viewing time and movement duration to improve visuomotor accuracy in a fast reaching task. Journal of Neuroscience, 27(26), 6984–6994. https://doi.org/10.1523/JNEUROSCI.1309-07.2007
    OpenUrlAbstract/FREE Full Text
  4. ↵
    Biguer, B., Prablanc, C., & Jeannerod, M. (1984). The contribution of coordinated eye and head movements in hand pointing accuracy. Experimental Brain Research, 55, 462–469.
    OpenUrlCrossRefPubMedWeb of Science
  5. ↵
    Blohm, G., & Crawford, J. D. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7(5), 4. https://doi.org/10.1167/7.5.4
    OpenUrlAbstract
  6. ↵
    Bock, O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research, 64, 476–482.
    OpenUrlCrossRefPubMedWeb of Science
  7. ↵
    Bock, O. (1993). Localization of objects in the peripheral visual field. Behavioural Brain Research, 56(1), 77–84. https://doi.org/10.1016/0166-4328(93)90023-J
    OpenUrlCrossRefPubMedWeb of Science
  8. ↵
    Brenner, E., & Smeets, J. B. J. (2018). How can you best measure reaction times? Journal of Motor Behavior, 25, 1–10. https://doi.org/10.1080/00222895.2018.1518311
    OpenUrl
  9. ↵
    Brown, L. E., Marlin, M. C., & Morrow, S. (2015). On the contributions of vision and proprioception to the representation of hand-near targets. Journal of Neurophysiology, 113(2), 409–419. https://doi.org/10.1152/jn.00005.2014
    OpenUrlCrossRefPubMed
  10. ↵
    Buneo, C. A., & Andersen, R. A. (2006). The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia, 44(13), 2594–2606. https://doi.org/10.1016/j.neuropsychologia.2005.10.011
    OpenUrlCrossRefPubMedWeb of Science
  11. ↵
    Buneo, C. A., Jarvis, M. R., Batista, A. P., & Andersen, R. A. (2002). Direct visuomotor transfomrations for reaching. Nature, 416(3), 632–636.
    OpenUrlCrossRefPubMedWeb of Science
  12. ↵
    Burns, J. K., Nashed, J. Y., & Blohm, G. (2011). Head roll influences perceived hand position. Journal of Vision, 11(9), 3. https://doi.org/10.1167/11.9.3
    OpenUrlAbstract/FREE Full Text
  13. ↵
    Burns, J. K., & Blohm, G. (2010). Multi-sensory weights depend on contextual noise in reference frame transformations. Frontiers in Human Neuroscience, 4, 221. https://doi.org/10.3389/fnhum.2010.00221
    OpenUrl
  14. ↵
    Chapman, C. S., & Goodale, M. A. (2008). Missing in action: The effect of obstacle position and size on avoidance while reaching. Experimental Brain Research, 191(1), 83–97. https://doi.org/10.1007/s00221-008-1499-1
    OpenUrlCrossRefPubMedWeb of Science
  15. ↵
    Chua, R., & Elliott, D. (1993). Visual regulation of manual aiming. Human Movement Science, 12(4), 365–401. https://doi.org/10.1016/0167-9457(93)90026-L
    OpenUrlCrossRefWeb of Science
  16. ↵
    Cohen, R. G., Biddle, J. C., & Rosenbaum, D. A. (2010). Manual obstacle avoidance takes into account visual uncertainty, motor noise, and biomechanical costs. Experimental Brain Research, 201(3), 587–592. https://doi.org/10.1007/s00221-009-2042-8
    OpenUrlPubMed
  17. ↵
    Cohen, Y. E., & Andersen, R. A. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience, 3(7), 553–562. https://doi.org/10.1038/nrn873
    OpenUrlCrossRefPubMedWeb of Science
  18. ↵
    De Haan, A. M., Van der Stigchel, S., Nijnens, C. M., & Dijkerman, H. C. (2014). The influence of object identity on obstacle avoidance reaching behaviour. Acta Psychologica, 150, 94–99. https://doi.org/10.1016/j.actpsy.2014.04.007
    OpenUrl
  19. ↵
    Dessing, J. C., Byrne, P. A., Abadeh, A., & Crawford, J. D. (2012). Hand-related rather than goal-related source of gaze-dependent errors in memory-guided reaching. Journal of Vision, 12(11), 17. https://doi.org/10.1167/12.11.17
    OpenUrlAbstract/FREE Full Text
  20. ↵
    Engel, K. C., Flanders, M., & Soechting, J. F. (2002). Oculocentric frames of reference for limb movement. Archives Italiennes de Biologie, 140(3), 211–219.
    OpenUrlPubMedWeb of Science
  21. ↵
    Faisal, A. A., Selen, L. P., & Wolpert, D. M. (2008). Noise in the nervous system. Nature reviews. Neuroscience, 9(4), 292–303. doi:10.1038/nrn2258
    OpenUrlCrossRefPubMedWeb of Science
  22. ↵
    Hamilton, A. F. D. C., & Wolpert, D. M. (2002). Controlling the statistics of action: obstacle avoidance. Journal of Neurophysiology, 87(5), 2434–2440. https://doi.org/10.1152/jn.00875.2001.
    OpenUrlPubMedWeb of Science
  23. ↵
    Harris, C. M., & Wolpert, D. M. (1998). Signal-dependent noise determines motor planning. Nature, 394(6695), 780–784. https://doi.org/10.1038/nature03031.1.
    OpenUrlCrossRefPubMedWeb of Science
  24. ↵
    Heath, M. (2005). Role of limb and target vision in the online control of memory-guided reaches. Motor Control, 9, 281–309. https://doi.org/10.1123/mcj.9.3.281
    OpenUrlPubMedWeb of Science
  25. ↵
    Heath, M., Westwood, D. A., & Binsted, G. (2004). The control of memory-guided reaching movements in peripersonal space. Motor Control, 8, 76–106. https://doi.org/10.1123/mcj.8.1.76
    OpenUrlPubMedWeb of Science
  26. ↵
    Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. The Journal of Neuroscience, 18(4), 1583–1594.
    OpenUrlAbstract/FREE Full Text
  27. ↵
    Henriques, D. Y. P., & Crawford, J. D. (2000). Direction-dependent distortions of retinocentric space in the visuomotor transformation for pointing. Experimental Brain Research, 132(2), 179–194. https://doi.org/10.1007/s002210000340
    OpenUrlCrossRefPubMedWeb of Science
  28. ↵
    Howard, I. S., Ingram, J. N., & Wolpert, D. M. (2009). A modular planar robotic manipulandum with end-point torque control. Journal of Neuroscience Methods, 181(2), 199–211. https://doi.org/10.1016/j.jneumeth.2009.05.005
    OpenUrlCrossRefPubMedWeb of Science
  29. ↵
    Jacobson, M., & Matthews, P. (1998). Generating uniformly distributed random Latin squares. Journal of Combinatorial Designs, 4(6), 405–437. https://doi.org/10.1002/(SICI)1520-6610(1996)4:6<405::AID-JCD3>3.0.CO;2-J
    OpenUrl
  30. ↵
    Johansson, R. S., Westling, G., Bäckström, A., & Flanagan, J. R. (2001). Eye-hand coordination in object manipulation. The Journal of Neuroscience, 21(17), 6917–6932. https://doi.org/10.1523/JNEUROSCI.21-17-06917.2001
    OpenUrlAbstract/FREE Full Text
  31. ↵
    Khan, M. A., Elliott, D., Coull, J., Chua, R., & Lyons, J. (2002). Optimal control strategies under different feedback schedules: Kinematic evidence. Journal of Motor Behavior, 34(1), 45–57. https://doi.org/10.1080/00222890209601930
    OpenUrlCrossRefPubMedWeb of Science
  32. ↵
    Knudsen, E. (2002). Computational maps in the brain. Annual Review of Neuroscience, 10(1), 41–65. https://doi.org/10.1146/annurev.neuro.10.1.41
    OpenUrl
  33. ↵
    Knudsen, E. I., du Lac, S., & Esterly, S. D. (1987). Computational maps in the brain. Annual Review of Neuroscience, 10, 41–65. https://doi.org/10.1146/annurev.ne.10.030187.000353
    OpenUrlCrossRefPubMedWeb of Science
  34. Lacquaniti, F., & Caminiti, R. (1998). Visuo-motor transformations for arm reaching. European Journal of Neuroscience, 10(1), 195–203. https://doi.org/10.1046/j.1460-9568.1998.00040.x
  35. Lewald, J., & Ehrenstein, W. H. (1998). Auditory-visual spatial integration: A new psychophysical approach using laser pointing to acoustic targets. The Journal of the Acoustical Society of America, 104(3), 1586–1597. https://doi.org/10.1121/1.424371
  36. McGuire, L. M. M., & Sabes, P. N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience, 12(8), 1056–1061. https://doi.org/10.1038/nn.2357
  37. Menger, R., Dijkerman, H. C., & Van der Stigchel, S. (2013). The effect of similarity: Non-spatial features modulate obstacle avoidance. PLoS ONE, 8(4), e59294. https://doi.org/10.1371/journal.pone.0059294
  38. Menger, R., Van der Stigchel, S., & Dijkerman, H. C. (2012). How obstructing is an obstacle? The influence of starting posture on obstacle avoidance. Acta Psychologica, 141, 1–8. https://doi.org/10.1016/j.actpsy.2012.06.006
  39. Menger, R., Van der Stigchel, S., & Dijkerman, H. C. (2013). Outsider interference: No role for motor lateralization in determining the strength of avoidance responses during reaching. Experimental Brain Research, 229(4), 533–543. https://doi.org/10.1007/s00221-013-3615-0
  40. Ramsay, J., & Silverman, B. W. (2005). Functional data analysis. New York, NY: Springer. https://doi.org/10.1007/b98888
  41. Ross, A. I., Schenk, T., Billino, J., Macleod, M. J., & Hesse, C. (2018). Avoiding unseen obstacles: Subcortical vision is not sufficient to maintain normal obstacle avoidance behaviour during reaching. Cortex, 98, 177–193. https://doi.org/10.1016/j.cortex.2016.09.010
  42. Ross, A. I., Schenk, T., & Hesse, C. (2015). The effect of gaze position on reaching movements in an obstacle avoidance task. PLoS ONE, 10(12), e0144193. https://doi.org/10.1371/journal.pone.0144193
  43. Sabes, P. N., & Jordan, M. I. (1997). Obstacle avoidance and a perturbation sensitivity model for motor planning. The Journal of Neuroscience, 17(18), 7119–7128.
  44. Sabes, P. N., Jordan, M. I., & Wolpert, D. M. (1998). The role of inertial sensitivity in motor planning. The Journal of Neuroscience, 18(15), 5948–5957. https://doi.org/10.1523/jneurosci.18-15-05948.1998
  45. Saunders, J. A., & Knill, D. C. (2004). Visual feedback control of hand movements. The Journal of Neuroscience, 24(13), 3223–3234. https://doi.org/10.1523/jneurosci.4319-03.2004
  46. Saunders, J. A., & Knill, D. C. (2003). Humans use continuous visual feedback from the hand to control fast reaching movements. Experimental Brain Research, 152(3), 341–352. https://doi.org/10.1007/s00221-003-1525-2
  47. Schlicht, E. J., & Schrater, P. R. (2007). Impact of coordinate transformation uncertainty on human sensorimotor control. Journal of Neurophysiology, 97(6), 4203–4214. https://doi.org/10.1152/jn.00160.2007
  48. Schütz, I., Henriques, D. Y. P., & Fiehler, K. (2013). Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Research, 87, 46–52.
  49. Scott, S. H., & Norman, K. E. (2003). Computational approaches to motor control and their potential role for interpreting motor dysfunction. Current Opinion in Neurology, 16, 693–698. https://doi.org/10.1097/01.wco.0000102631.16692.71
  50. Scott, S. H. (2004). Optimal feedback control and the neural basis of volitional motor control. Nature Reviews Neuroscience, 5(7), 532–545. https://doi.org/10.1038/nrn1427
  51. Sober, S. J., & Körding, K. P. (2012). What silly postures tell us about the brain. Frontiers in Neuroscience, 6, 154. https://doi.org/10.3389/fnins.2012.00154
  52. Sober, S. J., & Sabes, P. N. (2003). Multisensory integration during motor planning. The Journal of Neuroscience, 23(18), 6982–6992.
  53. Sober, S. J., & Sabes, P. N. (2005). Flexible strategies for sensory integration during motor planning. Nature Neuroscience, 8(4), 490–497. https://doi.org/10.1038/nn1427
  54. Soechting, J. F., & Flanders, M. (1992). Moving in three dimensional space: Frames of reference, vectors, and coordinate systems. Annual Review of Neuroscience, 15, 167–191.
  55. Todorov, E. (2004). Optimality principles in sensorimotor control. Nature Neuroscience, 7(9), 907–915. https://doi.org/10.1038/nn1309
  56. Todorov, E., & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11), 1226–1235. https://doi.org/10.1038/nn963
  57. Van Beers, R. J., Baraduc, P., & Wolpert, D. M. (2002). Role of uncertainty in sensorimotor control. Philosophical Transactions of the Royal Society B: Biological Sciences, 357(1424), 1137–1145. https://doi.org/10.1098/rstb.2002.1101
  58. Vaziri, S., Diedrichsen, J., & Shadmehr, R. (2006). Why does the brain predict sensory consequences of oculomotor commands? Optimal integration of the predicted and the actual sensory feedback. The Journal of Neuroscience, 26(16), 4188–4197. https://doi.org/10.1523/jneurosci.4747-05.2006
  59. Voudouris, D., Smeets, J. B. J., & Brenner, E. (2012). Do obstacles affect the selection of grasping points? Human Movement Science, 31(5), 1090–1102. https://doi.org/10.1016/j.humov.2012.01.005
Posted July 24, 2019.