Abstract
Collecting data online via crowdsourcing platforms has proven to be an efficient way to recruit large, diverse samples of participants. While many fields in psychology have embraced online studies, the field of motor learning has lagged behind. We suspect this is because of an implicit assumption that the loss of experimental control with online data collection will be problematic for kinematic data. As a first foray to bring motor learning online, we developed a web-based platform to collect kinematic data, serving as a template for researchers to create their own online sensorimotor control and learning experiments. As a proof-of-concept, we present three visuomotor rotation experiments conducted with this platform, asking whether fundamental motor learning phenomena discovered in the lab can be reproduced online. In all experiments, there was a close correspondence between the results obtained online and those previously reported from research conducted in the laboratory. As such, our web-based motor learning platform can serve as a powerful tool to exploit the benefits of crowdsourcing approaches and extend research on motor learning beyond the confines of the traditional laboratory.
Introduction
The ability to produce a wide repertoire of movements, and to adjust those movements in response to changes in the body and environment, is a core feature of human competence. This ability helps a tired ping-pong player compensate for her fatigue, and facilitates a patient’s motor recovery from neurological injury (Krakauer, Hadjiosif, Xu, Wong, & Haith, 2019; Roemmich & Bastian, 2018; Tsay & Winstein, 2020). By improving our understanding of how movements are learned, we can uncover general principles about how the motor system functions and develops, optimize training techniques for sport and rehabilitation, and design better brain-machine interfaces.
A paradigmatic approach for studying motor learning is to introduce a new mapping between the motion of the arm and the corresponding visual feedback (Krakauer, Pine, Ghilardi, & Ghez, 2000). Historically, such visuomotor perturbations were accomplished with prism glasses that distort the visual field (Helmholtz, 1924). Nowadays, virtual reality setups allow more flexible control of the relationship between hand position and a feedback signal. A commonly used perturbation is visuomotor rotation. Here, participants reach to a visual target with vision of the arm occluded. Feedback is provided in the form of a cursor presented on a computer monitor. After a brief training period during which the feedback corresponds to the actual hand position, a perturbation is introduced by rotating the position of the cursor relative to the actual hand position (e.g., by 15°). The mismatch between the expected and actual position of the feedback induces a change in the heading direction of the hand, with the hand moving in the opposite direction of the rotation and thus reducing the mismatch on subsequent trials. If the mismatch is small, this change emerges gradually over trials and occurs outside the participant’s awareness, a phenomenon known as implicit sensorimotor adaptation. If the mismatch is large, this adaptive learning process may also be accompanied by more explicit adjustments in aiming (Kim, Avraham, & Ivry, 2020; McDougle, Ivry, & Taylor, 2016; Shadmehr, Smith, & Krakauer, 2010).
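To make the rotation manipulation concrete, consider a minimal JavaScript sketch (our illustration, not the platform's source code) of how movement-contingent cursor feedback can be rotated about the start position; the function name and coordinate conventions are ours:

    // Rotate the feedback cursor about the start position by a fixed angle.
    // Hand position (hx, hy) is in cm relative to the start location;
    // positive angles are counterclockwise.
    function rotatedCursor(hx, hy, rotationDeg) {
      const r = rotationDeg * Math.PI / 180;
      return {
        x: hx * Math.cos(r) - hy * Math.sin(r), // standard 2D rotation
        y: hx * Math.sin(r) + hy * Math.cos(r),
      };
    }

    // Example: with a 15° rotation, a 6 cm reach straight ahead (0, 6)
    // yields feedback displaced 15° from the hand.
    console.log(rotatedCursor(0, 6, 15)); // { x: -1.55, y: 5.80 }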
Motor learning experiments are typically run in-person, exploiting finely calibrated apparatuses (digitizing tablets, robotic manipulanda, full VR displays, etc.) that provide data with high temporal and spatial resolution. However, these experiments come at a cost: Beyond the expenses associated with purchasing specialized equipment, the labor demands are high for recruiting participants and administering the experiment, especially since testing is usually limited to one participant at a time. In-person studies are also likely WEIRD (Western, educated, industrialized, rich, and democratic), limiting the generalizability of these research findings to the population writ large (Henrich, Heine, & Norenzayan, 2010). Moreover, exceptional circumstances that limit in-person testing, such as a global pandemic, may halt research progress (Fauci, Lane, & Redfield, 2020).
Online experiments have been embraced across the social sciences as a powerful alternative approach for collecting data for behavioral experiments (Anwyl-Irvine, Massonnié, Flitton, Kirkham, & Evershed, 2020). Crowdsourcing platforms, such as Amazon Mechanical Turk (mturk.com) and Prolific (www.prolific.co), allow researchers to recruit a large number of participants, perform rapid pilot testing, and efficiently collect data using a variety of experimental designs. Compared to in-person studies, the online recruitment pool is likely to be more representative of the general population (Paolacci & Chandler, 2014). Online studies can also reach patient populations who have mobility deficits that limit their ability or willingness to come to the lab.
Several studies have shown that the data obtained in online studies replicate those obtained from in-person studies (e.g., Crump, McDonnell, & Gureckis, 2013). However, only a limited number of online studies have been performed in the domain of sensorimotor learning. The field of motor learning may have shied away from these online methods because of concerns related to the inherent loss of experimental control with online data collection, something that may be especially problematic for kinematic data. Not only will the response devices be variable, but it is also difficult to control how movements are produced between participants, or even across the experimental session for a single participant. Previous efforts examining motor learning in the wild (Chen et al., 2018; Crocetta et al., 2018; Fernandes, Albert, & Kording, 2011; Haar, van Assel, & Faisal, 2020; Krakauer et al., 2020; Takiyama & Shinya, 2016) have primarily focused on testing specific hypotheses in their ecological setup, making it hard to directly compare their findings with those obtained in the lab. Here we set out to create a general-purpose online platform that could be adopted by researchers for studying sensorimotor control and learning. We report a series of experiments with designs commonly used to study sensorimotor learning. We ask whether the data from our online studies replicate core phenomena reported in previous in-person studies. The platform, OnPoint, is available on GitHub (Tsay, Lee, et al., 2020), and participants were recruited over Amazon’s Mechanical Turk. The results show a close correspondence between the motor learning behavior observed in-person and online, validating our tool as a platform for motor learning research, and serving as a proof-of-concept to bring motor learning outside the confines of the traditional laboratory.
Results
Experiment 1: Learning visuomotor rotation of different sizes
Motor learning is frequently treated as an implicit phenomenon. Indeed, expert performers frequently comment on letting their “body do the thinking” when they execute an overlearned skill (Schmidt & Young, 1987). However, these experts are also able to make rapid and flexible motor corrections, suggesting that even when behavior seems automatic, there remains considerable cognitive control (Fitts & Posner, 1979). Recent work has highlighted how performance in even simple sensorimotor adaptation tasks reflects the operation of multiple learning processes that may solve different computational problems (Benson, Anguera, & Seidler, 2011; Diedrichsen, White, Newman, & Lally, 2010; Haith, Huberdeau, & Krakauer, 2015; Hegele & Heuer, 2010; Leow, Marinovic, de Rugy, & Carroll, 2018; Mazzoni & Krakauer, 2006; Miyamoto, Wang, & Smith, 2020; Taylor, Krakauer, & Ivry, 2014; Werner et al., 2015). One source of evidence for this comes from a study by Bond and Taylor (2015), who examined how people learn to respond when the visual feedback was rotated and, in particular, when the size of the rotation was varied between 15° and 90° (Figure 1a). Explicit strategies, as measured by verbal aim reports, were dominant when the error size was large, producing deviations in hand angle that scaled with the size of the perturbation. Yet, implicit adaptation, as measured by aftereffects during a no-feedback block that was introduced immediately after learning, remained constant over these perturbations.
Figure 1. (a) Schematic of a visuomotor rotation task. The cursor feedback (red dot) was rotated with respect to the movement direction of the hand, with the size of the rotation varied across groups (15°, 30°, 60°, or 90°). Translucent and solid colors display hand and cursor positions at early and late stages of learning, respectively. (b, d) Mean time courses of hand angle for 15° (green), 30° (yellow), 60° (purple), and 90° (pink) rotation conditions from the in-person experiment of Bond and Taylor (2015) and the online experiment. Hand angle is presented relative to the target (0°) during veridical feedback, no-feedback (grey background), and rotation trials. Shaded region denotes SEM. (c, e) Average hand angles during early and late phases of the rotation block, and during the no-feedback aftereffect block from the in-person (c) and online (e) experiments. Box plots denote the median (thick horizontal lines), quartiles (1st and 3rd, the edges of the boxes), and extrema (min and max, vertical thin lines). The data from each participant are shown as translucent dots.
Experiment 1 was designed to provide an online replication of Bond and Taylor (2015), testing whether the learning of visuomotor rotation – incorporating both explicit and implicit processes – scales with rotation size, and whether the aftereffect – reflecting solely the implicit process – remains constant across rotation sizes. After a series of baseline blocks to familiarize the participants with the apparatus and basic trial procedure, participants experienced one of four rotation sizes (15°, 30°, 60°, 90°), with the perturbation constant for an entire block of 100 rotation trials. Participants were instructed to make the cursor intersect the target; we did not specify whether they should explicitly alter their aim to facilitate performance.
These learning functions are presented in Figures 1b (in-person, from Bond and Taylor, 2015) and 1d (online). We analyzed our data together with those obtained by Bond and Taylor (2015), evaluating mean performance at three phases of the experiment: early adaptation, late adaptation, and aftereffect (Figures 1c and 1e). Learning scaled with the size of the rotation during early learning (main effect of perturbation size: F(1,136) = 64.5, p < 0.01), a signature of strategic aiming at play. While there was no main effect of setting (F(1,136) = 0.5, p = 0.46), the scaling with perturbation size was steeper in the in-person group than in the online group (setting × perturbation size interaction: F(1,136) = 64.5, p < 0.01). The mean hand angle during the late phase of adaptation reached an asymptote close to the size of the perturbation in all conditions. As such, learning scaled with the size of the perturbation during the late phases of learning in both experiments (F(1,136) = 810.1, p < 0.01). There was neither a main effect of setting (F(1,136) = 1.3, p = 0.24) nor a setting × perturbation size interaction (F(1,136) = 0.37, p = 0.54).
Hand angle dropped dramatically in the no-feedback aftereffect block, presumably due to the termination of aiming. However, the direction of the hand movements remained significantly different from zero, shifted away from the direction of the feedback (all groups: p < 0.01), the signature of an implicit aftereffect. Critically, the magnitude of the aftereffect did not scale with the size of the rotation (F(1,136) = 2.5, p = 0.12), indicating that implicit adaptation reaches a common saturation point, at least for the large range of values tested here. The magnitude of the aftereffects was nominally similar to that reported in Bond and Taylor (2015) (main effect of setting: F(1,136) = 1.9, p = 0.17), with the size of the aftereffect ranging from 0° to 30°. There was no setting × perturbation size interaction in the aftereffects (F(1,136) = 0.1, p = 0.75).
While the data from our online study are similar to the results from Bond and Taylor (2015), there were several notable differences. First, within-participant variability was greater in the online group (In-person SD: 12.4 ± 1.3°; Online SD: 19.0 ± 1.5°; p < 0.01). This may be due to the lack of stringent experimental supervision, or to differences in the types of movements used in the in-person study (arm movements in Bond and Taylor) and the online study (likely wrist and/or finger movements, given that most participants used a trackpad). Second, the online participants learned at a slower rate. This may be because participants in the in-person study were able to identify and implement an aiming strategy faster than those tested online. The lower variability of in-person movements is likely more conducive to identifying an appropriate strategy. Alternatively, it may be easier to strategically adjust the aim of natural arm movements than the aim of finer movements involving more distal joints over a trackpad.
Third, and unexpectedly, the aftereffect data for the online participants were non-monotonic: More variance was explained by a quadratic model (p < 0.01) than by a linear model (p = 0.29), an effect that was not present in the Bond and Taylor (2015) data, where neither a linear (p = 0.06) nor a quadratic function (p = 0.14) accounted for a significant percentage of the variance. The reason for the non-monotonicity in the online data is unclear. The dip for the 90° group might reflect some form of discounting by the implicit system of this large, non-ecological error (Berniker & Kording, 2008, 2011; Körding et al., 2007; Wei & Körding, 2009). However, the aftereffect for this group was similar to that observed for the group exposed to a 15° rotation, the condition in which strategic aiming is unlikely to make much, if any, contribution (Morehead, Qasim, Crossley, & Ivry, 2015). An alternative possibility is that the aftereffect data for the 30° and 60° groups were artifactually inflated by some residual effect of the aiming strategy in the no-feedback aftereffect block. For example, there may have been a hysteresis effect when re-establishing the mapping required to move straight to the target when using a trackpad or mouse, an effect that was not present for the 90° group. It is also possible that the extent of implicit adaptation, as measured in the aftereffect data, does vary with error size, albeit in a non-linear manner. We revisit this question with a different approach in Experiment 2.
Experiment 2: Adaptation in response to non-contingent rotated visual feedback
In Experiment 2, we turn to a method designed to measure implicit learning in the absence of strategic aiming. Motivated by the idea that adaptation is obligatory in response to a visual sensory prediction error, Morehead, Taylor, Parvin, and Ivry (2017) replaced the standard movement-contingent visual feedback cursor with a “visual clamp”. Here, the cursor follows an invariant trajectory on all trials, with the radial position dependent on the participant’s hand position (as in standard feedback), but the angular position always shifted from the target by a fixed angle (Figure 2a). In this manner, the angular position of the cursor is no longer contingent on the participant’s movement. This manipulation, in combination with instructions to ignore the cursor feedback and always move directly to the target, induces gradual changes in hand angle away from the target in the direction opposite to the perturbation. Learning here is assumed to be entirely implicit, verified both in subjective interviews provided by participants at the end of the experimental session (Kim, Morehead, Parvin, Moazzezi, & Ivry, 2018; Kim, Parvin, & Ivry, 2019) and in reports of sensed hand location obtained on probe trials throughout the adaptation block (Tsay, Parvin, & Ivry, 2020).
Figure 2. (a) Schematic of the clamped feedback task. The cursor feedback (red dot) follows a trajectory rotated relative to the target, independent of the position of the participant’s hand. The rotation size remained invariant throughout the rotation block but varied across groups. Participants were instructed to move directly to the target (blue circle) and ignore the visual feedback. The translucent and solid colors display hand position early and late in learning, respectively. (b, d) Mean time courses of hand angle for 0° (green), 7.5° (dark green), 15° (brown), and 30° (dark purple) rotation conditions from the in-person experiment (b, adapted from Morehead et al., 2017) and the online experiment (d). Hand angle is presented relative to the target (0°) during no-feedback (dark grey background), veridical feedback, and rotation trials. Shaded region denotes SEM. (c, e) Average hand angles during early and late phases of the rotation block, and during the no-feedback aftereffect block from the in-person (c) and online (e) experiments. Box plots denote the median (thick horizontal lines), quartiles (1st and 3rd, the edges of the boxes), and extrema (min and max, vertical thin lines). The data from each participant are shown as translucent dots.
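The clamped feedback described above can be expressed in a few lines. The following is a hedged sketch under our own naming, not the published task code: the cursor's angular position is fixed relative to the target, while its radial position tracks the hand:

    // Clamped feedback: the cursor's angle is fixed relative to the target,
    // regardless of the hand's direction; only its radial distance follows the hand.
    function clampedCursor(hx, hy, targetAngleDeg, clampDeg) {
      const radial = Math.hypot(hx, hy); // hand's distance from the start position
      const a = (targetAngleDeg + clampDeg) * Math.PI / 180; // invariant angle
      return { x: radial * Math.cos(a), y: radial * Math.sin(a) };
    }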
Given the assumption that learning is implicit, the clamp method provides another way to ask how error size influences implicit adaptation. Morehead et al. (2017) demonstrated that the rate of adaptation is largely invariant over a wide range of error sizes (clamp angles ranging from 7.5° to 95°). Moreover, the asymptote has also been shown to be independent of the error size for this range of perturbations, averaging between 15° and 25° across several studies (Avraham, Keizman, & Shmuelof, 2019; Kim et al., 2018; Tsay, Avraham, et al., 2020; Tsay, Kim, Parvin, Stover, & Ivry, 2020).
Experiment 2 used a design based on a subset of the conditions in Morehead et al (2017). We examined adaptation in response to visual clamps of 7.5°, 15°, and 30°, with each perturbation tested in separate groups of participants as in Experiment 1. We also included a 0° condition, one in which the cursor feedback always moved directly to the target. This condition provides a baseline to ensure that changes in hand angle in the other groups are driven by error-based learning, rather than changes due to fatigue or proprioceptive drift (Brown, Rosenbaum, & Sainburg, 2003a, 2003b; Cameron, de la Malla, & López-Moliner, 2015; Wann & Ibrahim, 1992).
These learning functions are presented in Figures 2b & 2d. We analyzed our data together with those obtained by Morehead et al. (2017), evaluating mean performance at three phases of the experiment: Early adaptation, late adaptation, and aftereffect. As expected, there was no consistent change in performance in response to the 0° clamp in our data (one sample permutation test: early learning, p = 0.62; late adaptation, p = 0.87; aftereffects, p = 0.19), similar to that observed in Morehead et al. (2017) (one sample permutation test: early learning, p = 0.46; late adaptation, p = 0.26; aftereffects, p = 0.82). In contrast, adaptation was evident at all stages of learning for the non-zero clamps (one sample permutation test, all p < 0.05).
For the non-zero clamp sizes, adaptation did not scale with rotation size during early learning (F(1,86) = 0.2, p = 0.66), late learning (F(1,86) = 0.0, p = 0.91), or the no-feedback aftereffect block (F(1,86) = 0.0, p = 0.96) (Figures 2c and 2e). The functions for the 7.5°, 15°, and 30° clamps reached a common asymptote around 15°, with the range of values across individuals similar to that seen in the aftereffect data of Experiment 1. We note that the magnitude of adaptation is approximately twice the size of the perturbation for the 7.5° clamp. While this might seem puzzling, it is important to keep in mind that, unlike standard adaptation studies, in which the position of the feedback cursor is contingent on the hand movement and the size of the visual error is thus reduced over the course of adaptation, the error size remains invariant with clamped feedback and continues to drive adaptation. In terms of a comparison to the in-person results, the online data were similar to those collected by Morehead et al. (no main effect of setting: early, F(1,86) = 0.8, p = 0.13; late, F(1,86) = 0.2, p = 0.63; aftereffects, F(1,86) = 0.0, p = 0.98). Within-participant variability was again greater in the online group (In-person SD: 4.5 ± 1.4°; Online SD: 8.2 ± 0.4°; p < 0.01).
In sum, these online results replicate two core insights derived from in-person studies using clamped feedback. First, implicit adaptation occurs automatically in response to a visual sensory prediction error. Second, the learning function is invariant across a large range of error sizes, both in the shape of the function and in its asymptotic value. This invariance poses a challenge to the standard state-space model of sensorimotor adaptation, in which the rate and magnitude of learning are dependent on error size (Herzfeld, Vaswani, Marko, & Shadmehr, 2014; Marko, Haith, Harran, & Shadmehr, 2012; Shadmehr et al., 2010; Smith, Ghazizadeh, & Shadmehr, 2006). Thus, the current results provide additional evidence pointing to the need for novel perspectives on adaptation, ones that do not assume adaptation to be sensitive to error size, but instead constrained by the limits of sensorimotor plasticity (Kim et al., 2018) or sensory biases (Heald, Lengyel, & Wolpert, 2020; Tsay, Kim, et al., 2020).
Experiment 3: Adaptation in response to variable, non-contingent rotated visual feedback
The use of a fixed perturbation for each participant in Experiments 1 and 2 allowed us to assess the full learning curve and aftereffect. This design often lacks the power to identify subtle differences in sensitivity to error size because the standard methods of analysis involve smoothing the data over multiple trials and making comparisons between individuals (or across sessions if a repeated measures design is employed). An alternative approach to study the effect of error size on implicit adaptation is to use a random perturbation schedule, exposing each individual to a range of error sizes throughout the perturbation block. By including both clockwise and counterclockwise rotations, there is no cumulative measure of learning; rather, the analysis focuses on trial-to-trial changes in heading angle (Figure 3a) (Avraham et al., 2019; Hutter & Taylor, 2018; Körding & Wolpert, 2004; Marko et al., 2012; Tsay, Avraham, et al., 2020; Wei & Körding, 2009, 2010). Even if the feedback is contingent on hand position, learning with this method is assumed to be entirely implicit since these trial-by-trial perturbations, if relatively small, fall within the window of variation that arises from motor noise (Avraham et al., 2019; Gaffin-Cahn, Hudson, & Landy, 2019). Variable perturbations can also be employed with non-contingent clamped feedback, with the instructions providing a way to ensure that the behavioral changes are automatic and implicit.
Figure 3. (a) Schematic of the task. The cursor feedback (red dot) was rotated relative to the target, independent of the position of the participant’s hand. The size of the rotation varied randomly on a trial-by-trial basis. (b, c) The average change in hand angle from trial n to trial n + 1 is plotted as a function of rotation size on trial n. Thin grey lines are individual data collected in-person (b) and online (c), with the best-fitting loess line indicated by the orange curve (shaded region denotes SEM). Orange points denote group means and bars denote SEM.
Following the in-person method used in Tsay, Haith, Ivry, and Kim (2020), we varied the size of the non-contingent clamped feedback across trials. Each participant was exposed to a set of eight rotation sizes between 0° and 60°, with four of these involving clockwise rotations and the other four involving counterclockwise rotations of the same size. To sample a large range while keeping the experiment within 1 hour, participants received different sets of perturbations (four sets in total, see Methods). Given that the eight perturbations within a set have a mean of zero, there should be limited accumulated learning across trials. As in Experiment 2, participants were instructed to ignore the cursor feedback and always move directly to the target.
As a trial-by-trial measure of implicit adaptation, we averaged each participant’s change in hand angle from trial n to trial n + 1 as a function of the rotation size on trial n. As can be seen in Figure 3c, the participants showed a sign-dependent change in hand angle in response to the clamped feedback, similar to that observed in the in-person study of Tsay et al. (Figure 3b, adapted from Tsay, Haith, Ivry, & Kim, 2020). The function is sublinear, composed of a quasi-linear zone for smaller perturbations (up to around 16°) and a saturation range for larger perturbations; indeed, the data suggest that the size of the trial-by-trial change in hand angle may fall off for the largest perturbations. In both the online and in-person studies, the mean changes in hand angle fall within a similar range (± 2.5°).
To statistically evaluate these data, we first extracted the slope from each individual’s learning function, asking whether this value was significantly less than 0. The slopes were significantly less than 0 for the online and in-person experiments (both p < 0.01), confirming robust sign-dependent implicit adaptation. We then asked whether the learning functions were sublinear by comparing, for each individual, the slope when computed using all perturbation sizes to the slope when using only the small perturbations (in-person: 0, ±4°; online: the smallest two rotation sizes in their set, maximum size = ±25°). If the function is sublinear, the absolute slope calculated using all of the rotation sizes should be smaller (less negative). The results indicated that the functions were sublinear in both sets of data (in-person, p = 0.01; online, p = 0.02).
In sum, the results of Experiment 3 show a striking correspondence to those obtained in-person using a near-identical design (Tsay, Haith, Ivry, & Kim, 2020). Moreover, the functions, in both shape and magnitude, are quite similar to those reported in previous studies that have used variable-sized perturbations to study implicit adaptation (Kasuga, Hirashima, & Nozaki, 2013; Kim et al., 2018; Ranjan & Smith, 2020; Tsay, Avraham, et al., 2020; Wei & Körding, 2009, 2010).
Discussion
Bringing motor learning experiments online has considerable potential, providing researchers with an efficient, low-cost tool to collect data from large and diverse samples. As a proof-of-concept, we report here three experiments examining behavioral changes in response to perturbed visual feedback, adopting established tasks for our online platform. Qualitatively, the results from these three online studies show a close correspondence with those obtained from in-person studies. Specifically, early and late learning scaled with the size of the rotation when both implicit and explicit processes were involved (Exp 1), but implicit adaptation was insensitive to error size across a large range of errors (7.5° - 90°, Exps 1 and 2). In a more granular analysis, sensitivity to error size was found for smaller errors (Exp 3). These results, in aggregate, demonstrate that online experiments provide a viable alternative for studying sensorimotor adaptation outside the confines of the traditional laboratory setting.
These similarities between online and in-person experiments are especially striking in light of the many differences between online and in-person settings. Almost all of the participants in our study reported using a trackpad (see Methods).# Although we did not obtain detailed reports, we assume that their “reaching” movements here involved relatively small rotations about the wrist, perhaps coupled with extension of the index finger. These types of movements will entail a very different set of biomechanical and sensory constraints compared to reaches performed by moving along a digitizing tablet or when holding a robotic manipulandum (de Rugy, Hinder, Woolley, & Carson, 2009; Debats & Heuer, 2018; Hollerbach & Flash, 1982; Yin, Wang, Wei, & Körding, 2019). In-person experiments afford additional control, with the experimenter in a position to provide verbal instructions, answer questions, and supervise the participant to ensure the movement is performed as desired. This level of control is not possible with online studies where instructions are only given with on-screen messages and online monitoring is limited to feedback messages (e.g., “too far” or “too slow”).
Another limitation with online experiments is greater uncertainty concerning the temporal delay of the feedback (Anwyl-Irvine, Dalmaijer, Hodges, & Evershed, 2020). This can be a critical factor for studies of adaptation given the evidence showing that the rate of learning can fall off dramatically if the feedback is delayed, at least for endpoint feedback (Brudner, Kethidi, Graeupner, Ivry, & Taylor, 2016; Held, Efstathiou, & Greene, 1966; Kitazawa, Kohno, & Uka, 1995). We have observed this in our studies using clamped visual feedback. In our original study (Morehead, Taylor, Parvin, & Ivry, 2017), the common asymptote across different error sizes was ~15°. Subsequent to that study, we modified the code to reduce the feedback delay (from around 70 ms to 25 ms). Using this refined code, Kim et al. (2018) also observed a common asymptote in response to clamps of different sizes, but now the asymptotic values were ~25°. For this reason, we would urge caution in the use of online studies if the focus of the research is on absolute values such as the point of saturation. Concerns with temporal delays are mitigated for relative comparisons (such as the analyses presented here to compare conditions in the online studies).
In summary, online experiments provide a viable and novel way to test predictions about motor learning with large numbers of participants in a short amount of time. Whereas it would have taken months to collect the data reported here had the studies been run in-person, our online platform allowed us to collect these data in just a few days. Moreover, participants recruited online constitute a more diverse sample, spanning a wider range of ages, ethnicities, handedness, and years of education (see Participants) (Paolacci & Chandler, 2014). We do not envision online experiments replacing in-person testing in the domain of sensorimotor control and learning, since the laboratory affords the means to capture kinematic data with unparalleled precision. Nonetheless, many core phenomena central to our understanding of sensorimotor learning are robust and ripe for online investigation.
Methods
Participants
The protocol was approved by the institutional review board at the University of California, Berkeley. Participants (n = 260; age range = 21 – 61, mean age ± sd = 34.6 ± 9.0) were recruited from Amazon Mechanical Turk (AMT). Participants received financial compensation at a rate of $8 per hour. Recruitment was restricted to the United States. Based on the participants who completed an optional online survey (n = 180 of 260; the remaining 80 declined to participate in the survey), there were 100 male participants, 69 female participants, and 11 who identified as other. 124 of the participants identified as White, 17 as Asian, 25 as African American, 1 as Pacific Islander, 2 as multi-racial, and 11 declined to answer. 144 of the participants were right-handed, 22 left-handed, and 4 self-identified as ambidextrous. In terms of response devices, we encouraged participants to use a trackpad to limit variance from the device used. As a result, there were 154 trackpad users but only 16 mouse users (the others opted not to provide this information). No statistical methods were used to determine the target sample sizes.
Apparatus
Participants used their own computer to access a dynamic webpage (HTML, JavaScript, and CSS) hosted on Google Firebase. The task progression was controlled by JavaScript code running locally in the participant’s web browser. We assumed that monitor sampling rates were typically around 60 Hz, with little variation across computers (Anwyl-Irvine, Dalmaijer, et al., 2020). The size and position of stimuli were scaled based on each participant’s screen size, which was automatically detected.
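As an illustration of this scaling step, here is a minimal sketch; the pixels-per-cm constant is an assumption for a typical 13” laptop display, not a value taken from the OnPoint code:

    // Detect the viewport and convert task units (cm) to pixels.
    const screenW = window.innerWidth;
    const screenH = window.innerHeight;
    const center = { x: screenW / 2, y: screenH / 2 }; // start position
    // Assumption: ~28 cm of usable width on a typical 13-inch display.
    const PX_PER_CM = screenW / 28;
    const targetDistancePx = 6 * PX_PER_CM; // 6 cm reach amplitude (see below)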
A package containing all the code for the experiment can be accessed and downloaded from GitHub (https://github.com/alan-s-lee/OnPoint) and Gorilla (https://gorilla.sc/openmaterials/111001). We also provide a user manual to assist other researchers in setting up motor learning experiments at https://tinyurl.com/y6k8fvkk.
Reaching Task Procedure
Participants made reaching movements by moving the computer cursor with a trackpad or computer mouse. We did not obtain information concerning the monitors used by each participant (something corrected in ongoing studies); as such, we cannot specify the size of the stimuli. However, from our experience in subsequent studies, we assume that most online participants used a laptop computer. To provide a rough sense of the stimulation conditions, we assume that the typical monitor had a 13” screen with a width of 1366 pixels and a height of 768 pixels (Anwyl-Irvine, Dalmaijer, et al., 2020). On each trial, participants made a center-out planar movement from the center of the workspace to a visual target. The center position was indicated by a white circle (0.5 cm in diameter) and the target location was indicated by a blue circle (also 0.5 cm in diameter). The radial distance of the target from the start location was 6 cm. In Experiments 1 and 2, the target could appear at one of two locations on an invisible virtual circle (45°: upper right quadrant; 135°: upper left quadrant). For these experiments, a movement cycle was defined as two consecutive reaches, one to each target. In Experiment 3, the target appeared at a single position (45°) throughout the entire experiment.
To initiate each trial, the participant moved the cursor, represented by a white dot on the screen (0.5 cm in diameter), into the start location. During this initialization phase, feedback was provided when the cursor was within 4 cm of the start circle. Once the participant had maintained the cursor in the start position for 500 ms, the target appeared. The location of the target in Experiments 1 and 2 was selected in a pseudo-randomized manner. The participant was instructed to reach, attempting to rapidly “slice” through the target. The feedback cursor, when presented (see below), remained visible throughout the duration of the movement and remained fixed for 50 ms at the radial distance of the target once the movement amplitude reached 6 cm. If the movement was not completed within 500 ms, the message “too slow” was displayed in red 20 pt. Times New Roman font at the center of the screen for 750 ms.
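The initiation logic amounts to a simple timer that resets whenever the cursor leaves the start circle. A sketch, with a hypothetical tolerance parameter (startRadiusPx):

    // Show the target only after the cursor has stayed inside the start
    // circle for 500 ms; leaving the circle resets the hold timer.
    let holdStart = null;
    function readyToShowTarget(cursor, start, startRadiusPx, now) {
      const inStart = Math.hypot(cursor.x - start.x, cursor.y - start.y) <= startRadiusPx;
      if (!inStart) { holdStart = null; return false; } // reset the timer
      if (holdStart === null) holdStart = now;          // hold just began
      return now - holdStart >= 500;
    }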
The feedback could take one of the following forms: veridical feedback, no-feedback, rotated contingent feedback (Exp. 1), and rotated non-contingent (“clamped”) feedback (Exps. 2 and 3). During veridical feedback trials, the movement direction of the visual feedback was veridical with respect to the movement direction of the hand. During no-feedback trials, the feedback cursor was extinguished as soon as the hand left the start circle and remained off for the entire reach. The cursor only became visible during the return phase of the trial, when the cursor was within 4 cm of the start circle. With rotated contingent feedback, the cursor moved at an angular offset relative to the position of the hand; the radial position of the cursor corresponded to that of the hand up to 6 cm, at which point, the cursor position was frozen for 500 ms before turning off. During rotated clamped-feedback trials, the cursor moved at a specified angular offset relative to the position of the target, regardless of the movement direction of the hand (“clamped feedback”); as with rotated contingent feedback, the radial position of the cursor corresponded to that of the hand.
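These four feedback forms can be collected into a single per-frame dispatch. The mode names below are ours, and the sketch assumes hand coordinates relative to the start position with angles in degrees:

    function feedbackPosition(mode, hand, targetAngleDeg, rotationDeg) {
      const deg2rad = d => d * Math.PI / 180;
      const radial = Math.hypot(hand.x, hand.y); // hand's distance from start
      switch (mode) {
        case 'no_feedback': // cursor hidden once the hand leaves the start circle
          return null;
        case 'contingent': { // veridical (rotationDeg = 0) or rotated feedback
          const a = Math.atan2(hand.y, hand.x) + deg2rad(rotationDeg);
          return { x: radial * Math.cos(a), y: radial * Math.sin(a) };
        }
        case 'clamped': { // angle fixed relative to the target; radius tracks the hand
          const a = deg2rad(targetAngleDeg + rotationDeg);
          return { x: radial * Math.cos(a), y: radial * Math.sin(a) };
        }
        default:
          return null;
      }
    }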
Experiment 1: Learning visuomotor rotation of different sizes
AMT participants (N = 100) completed a visuomotor adaptation task consisting of four blocks of trials (176 trials total: 88 cycles of 2 trials each): Baseline no-feedback block (28 trials), baseline feedback block (28 trials), rotated feedback block (100 trials), and no-feedback aftereffect block (20 trials). During the rotation block, each participant was assigned one of four rotation sizes (15°, 30°, 60°, 90°; 25 participants/group), with the direction of the rotation (clockwise or counterclockwise) counterbalanced across participants. Prior to each baseline block, the instruction “Move directly to the target as fast and accurately as you can” appeared on the screen. Prior to the rotation block, a new instruction message was presented: “Your cursor will now be rotated by a certain amount. In order to continue hitting the target, you will have to aim away from the target.” Prior to the no-feedback aftereffect block, the participants were instructed “Move directly to the target as fast and accurately as you can.”
Experiment 2: Adaptation in response to non-contingent rotated visual feedback
A new sample of AMT participants (N = 80) completed a visuomotor adaptation task with the exact same block structure as Experiment 1 (176 total trials). There was only one critical difference: Rotated, non-contingent feedback was used during the rotation block, with the clamp fixed at one of four angular offsets relative to the target (0°, 7.5°, 15°, 30°; 20 participants/group). The direction of the non-zero clamps (clockwise or counterclockwise) was counterbalanced across participants.
The instructions for baseline and no-feedback aftereffect blocks were identical to those used in Experiment 1. Prior to the rotation block, the instructions were modified to read: “The white cursor will no longer be under your control. Please ignore the white cursor and continue aiming directly towards the target.” To clarify the invariant nature of the clamped feedback, three demonstration trials were provided. On all three trials, the target appeared straight ahead (90° position) and the participant was told to reach to the left (demo 1), to the right (demo 2), and backward (demo 3). On all three of these demonstration trials, the cursor moved in a straight line, 90° offset from the target. In this way, the participant could see that the spatial trajectory of the cursor was unrelated to their own reach direction.
Experiment 3: Adaptation in response to variable, non-contingent rotated visual feedback
A new sample of AMT participants (N = 60) completed a visuomotor adaptation task consisting of four blocks of trials (255 total trials): Baseline no-feedback block (5 trials), baseline feedback block (15 trials), rotated feedback block (230 trials), and no-feedback aftereffect block (5 trials). During the rotation block, the non-contingent feedback varied from trial to trial, both in direction (clockwise or counterclockwise) and angular offset. Participants were assigned one of four sets of rotation sizes (Set 1: ±2°, ±4°, ±6°, ±20°; Set 2: ±10°, ±25°, ±40°, ±60°; Set 3: ±7.5°, ±15°, ±30°, ±45°; Set 4: ±2°, ±4°, ±17°, ±27°), where ± indicates that the clamped feedback could be rotated clockwise (-) or counterclockwise (+). Given that the eight perturbations within a set have a mean of zero, the accumulated learning across trials should be limited. The same demonstration trials (see Experiment 2) were included before the rotated clamped feedback block.
Attention and Instruction Checks
It can be difficult to verify whether participants tested online fully attend to the task. To mitigate this issue, we sporadically instructed participants to make specific keypresses: “Press the letter “b” to proceed.” If participants failed to make the correct keypress, the experiment was terminated. These attention checks were randomly introduced within the first 50 trials of the experiment. We also wanted to verify that the participants understood the task and, in particular, understood in Experiments 2 and 3 that the angular position of the feedback was independent of the direction of their hand movement. To this end, we included one instruction check after the three demonstration trials: “Identify the correct statement. Press ‘a’: I will aim away from the target and ignore the white dot. Press ‘b’: I will aim directly towards the target location and ignore the white dot.” The experiment was terminated if participants failed to make the correct keypress (i.e., ‘b’).
Data Analysis
The primary dependent variable of reach performance was hand angle, defined as the angle of the hand relative to the target when movement amplitude reached 6 cm from the start position (i.e., angle between a line connecting the start position to the target and a line connecting the start position to the hand). To aid visualization, the hand angle values for the groups (or trials in Experiment 3) with counterclockwise rotations were flipped, such that a positive hand angle corresponds to an angle in the opposite direction of the rotated feedback, the direction expected to result from learning.
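Computed from the start-relative positions of the hand and target, this measure reduces to a signed angular difference. A sketch under our own naming conventions:

    // Signed hand angle (deg): angle of the hand relative to the target,
    // both measured from the start position, wrapped to (-180, 180].
    function handAngleDeg(hand, target) {
      const diff = Math.atan2(hand.y, hand.x) - Math.atan2(target.y, target.x);
      let deg = diff * 180 / Math.PI;
      if (deg > 180) deg -= 360;
      if (deg <= -180) deg += 360;
      return deg;
    }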
Outlier responses were defined as trials in which the hand angle deviated by more than 3 standard deviations from the mean of a moving 5-trial window. These outlier trials were excluded from further analysis since behavior on these trials could reflect attentional lapses or anticipatory movements to another target location (average percentage of trials removed per participant: Experiment 1: 2.0 ± 1.0%; Experiment 2: 1.5 ± 1.1%; Experiment 3: 1.0 ± 0.7%).
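The exclusion rule can be sketched as follows; the exact window alignment (here, a window centered on the trial and including it) is our assumption:

    // Flag trial i if its hand angle deviates by more than nSD standard
    // deviations from the mean of a moving window of win trials.
    function flagOutliers(angles, win = 5, nSD = 3) {
      return angles.map((a, i) => {
        const lo = Math.max(0, i - Math.floor(win / 2));
        const w = angles.slice(lo, lo + win);
        const mean = w.reduce((s, v) => s + v, 0) / w.length;
        const sd = Math.sqrt(
          w.reduce((s, v) => s + (v - mean) ** 2, 0) / (w.length - 1));
        return Math.abs(a - mean) > nSD * sd; // true -> exclude trial i
      });
    }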
Experiments 1 and 2: Data analysis mimicked the two studies we sought to replicate (Bond & Taylor, 2015; Morehead et al., 2017). The mean hand angle for each movement cycle was calculated and baseline subtracted to evaluate adaptation relative to idiosyncratic movement biases. Baseline was defined as the last 5 cycles of the veridical feedback baseline block (cycles 24 – 28). We evaluated three hand angle measures: early adaptation, late adaptation, and aftereffect. Early adaptation was operationalized as the mean hand angle over cycles 31 – 35 (cycles 3 – 7 of the rotation block). Late adaptation was defined as the mean hand angle over cycles 64 – 68 (cycles 36 – 40 of the rotation block, mimicking Bond and Taylor, 2015, and Kim et al., 2018). The aftereffect was operationalized as the mean hand angle over the first 5 cycles of the no-feedback aftereffect block (cycles 79 – 83).
All dependent measures were evaluated using an ANCOVA permutation test (R statistical packages: aovperm in the permuco package; 5000 permutations), a test that is robust whether or not the data are normally distributed (Lehmann & Romano, 2008). Post-hoc pairwise permutation t-tests were performed (R statistical package: perm.t.test), and p values were Bonferroni corrected to assess group differences.
Experiment 3: As our measure of trial-by-trial adaptation, we calculated the change in hand angle from trial n to trial n + 1 as a function of the rotation size on trial n. Means were then calculated for each clamp size, averaging over the clockwise and counterclockwise perturbations of a given size. These mean data were submitted to a linear regression to extract each individual’s slope (R statistical package: lm), with Rotation Size as the main effect. To ask whether these learning functions were sublinear, we compared each individual’s slope computed with all four rotation sizes against the slope computed with the two smallest rotation sizes in their set. If adaptation is sublinear, the slope computed using all rotation sizes should be smaller in absolute magnitude (less negative) than the slope computed using only the small rotation sizes.
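While the published analysis used R’s lm, the slope in question is an ordinary least-squares slope, sketched here in JavaScript for completeness:

    // OLS slope of y (mean change in hand angle from trial n to n + 1)
    // against x (rotation size on trial n). A negative slope indicates
    // corrective, sign-dependent adaptation.
    function olsSlope(x, y) {
      const n = x.length;
      const mx = x.reduce((s, v) => s + v, 0) / n;
      const my = y.reduce((s, v) => s + v, 0) / n;
      let num = 0, den = 0;
      for (let i = 0; i < n; i++) {
        num += (x[i] - mx) * (y[i] - my);
        den += (x[i] - mx) ** 2;
      }
      return num / den;
    }

    // Sublinearity check: a shallower (less negative) slope over all rotation
    // sizes than over the two smallest sizes indicates saturation for large errors.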
Author Contributions
All authors contributed to the study design. Testing, data collection, and data analysis were performed by J.S.T. All authors drafted the manuscript and approved the final version of the manuscript for submission.
Acknowledgements
This work was supported by grants R35 NS116883, R01 NS105839, and R01 NS1058389 from the National Institutes of Health (NIH).
Footnotes
# Because there were too few mouse users in each experiment, we opted not to add device (trackpad or mouse) as a factor in our analysis. However, in an unpublished study using a standard visuomotor rotation, we tested 435 individuals, 205 of whom reported using a trackpad and 225 of whom reported using a mouse (5 opted not to provide this information or used some other response device). There were no differences in measures of adaptation between the two groups.