bioRxiv

Action enhances predicted touch

Emily R. Thomas, Daniel Yon, Floris P. de Lange, Clare Press
doi: https://doi.org/10.1101/2020.03.26.007559
Emily R. Thomas (1; correspondence: ethoma09@mail.bbk.ac.uk)
Daniel Yon (1, 2)
Floris P. de Lange (3)
Clare Press (1)

1 Department of Psychological Sciences, Birkbeck, University of London, UK
2 Department of Psychology, Goldsmiths, University of London, UK
3 Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL

Abstract

It is widely believed that predicted action outcomes are perceptually attenuated. The present experiments determined whether predictive mechanisms in fact generate attenuation, or instead enhance perception – via neural ‘sharpening’ mechanisms assumed to operate in sensory cognition domains outside of action. We manipulated probabilistic expectations in a force judgement task. Participants produced actions and rated the intensity of concurrent tactile forces. Experiment 1 confirmed previous findings that action outcomes are perceived less intensely than similar passive stimulation, but demonstrated more intense perception when reducing the contribution of non-predictive gating processes. Experiments 2 and 3 manipulated prediction explicitly and found that expected outcomes are perceived more, not less, intensely than unexpected outcomes. These findings challenge a central tenet of prominent motor control theories and demonstrate that sensorimotor prediction operates via qualitatively similar mechanisms to prediction in domains outside of action, regardless of the sensory modality.

Introduction

Prominent motor theories (Blakemore et al., 1998; Dogge, Custers, & Aarts, 2019; Fiehler et al., 2019; Kilteni & Ehrsson, 2017) propose that we attenuate – or downweight – perceptual processing of expected action outcomes. Such downweighting mechanisms are thought to finesse the limited capacity of our sensory systems, prioritising processing of more informative unexpected sensory events that signal the need to perform new actions or update our models of the world (Press et al., 2020b; Wolpert & Flanagan, 2001). For example, if we lift a cup of coffee that is lighter than expected, attenuated processing of expected signals (e.g., touch on our fingertips) will allow dedicated processing of unexpected events (e.g., accelerating motion of the cup), enabling swift updating of our beliefs about the environment (e.g., the weight of the cup) and supporting corrective action to avoid spillage. These downweighting mechanisms are invoked to explain findings that self-produced tactile sensations generate lower activity in bilateral secondary somatosensory cortex (Blakemore et al., 1998; Kilteni & Ehrsson, 2020; Shergill et al., 2013, 2014), and are perceived to be less intense (Bays et al., 2005, 2006; Kilteni et al., 2019; Kilteni & Ehrsson, 2017; Shergill et al., 2003; Wolpe et al., 2016, 2018), than externally-produced forces. This theory also provides an explanation for why it is difficult to tickle oneself (Blakemore et al., 1998).

However, while these accounts have been influential for the last two decades, evidence that attenuation results from predictive mechanisms is sparse – especially considering the number of non-predictive mechanisms known to influence perception during action (Press et al., 2020a, 2020b; Press & Cook, 2015; Seki & Fetz, 2012). For this reason, in a recent study Kilteni et al. (2019) aimed to determine whether the attenuating influence of action on tactile perception in fact reflected the operation of predictive mechanisms. The defining feature of prediction mechanisms is that they operate according to stimulus probabilities (de Lange et al., 2018). As such, effects of prediction mechanisms are typically measured by presenting events with high and low conditional probabilities, and comparing perception of the ‘expected’ with the ‘unexpected’. In contrast, typical experiments demonstrating attenuation during action compare the perception of events in the presence or absence of action, or when events are coincident versus delayed with respect to action (e.g. Bays et al., 2005, 2006; Blakemore et al., 1998; Kilteni et al., 2019; Kilteni & Ehrsson, 2017; Shergill et al., 2013; Wolpe et al., 2016, 2018). In these experiments it is assumed that the sensory events which coincide with action are the (more) predicted consequences, which explains why perception of them is attenuated. In an important extension of previous work, Kilteni et al. (2019) trained participants that sensory events would be delayed by 100 ms, and inverted the typical attenuation effect such that delayed (but expected) events were now perceived less intensely than simultaneous (but unexpected) events. They therefore concluded that we do indeed downweight perception of tactile effects predicted by action – as has been assumed for the last 20 years but never explicitly demonstrated.

However, while the environmental statistics were altered in Kilteni et al. (2019) to make delayed events more probable, this training protocol confounds prediction with repetition. Participants in the test group were trained in only one repeated mapping – consisting of a 100 ms action-outcome delay – and subsequently presented with events at that delay or at no delay. Given that repeated presentation of stimulus features can attenuate perception of those features (Ofen et al., 2007) and neural sensory processing (Grill-Spector et al., 2006), downweighting effects observed in this study could be determined by repetition of the 100 ms delay rather than prediction of it. The distinction between mechanisms underlying repetition and prediction effects has been scrutinised widely for the last decade and a number of differences emerge. For instance, when repetition and prediction are orthogonalised by sometimes rendering stimulus alternations predicted, repetition and prediction neural effects exhibit distinct time-courses (Todorovic & De Lange, 2012). More recently, evidence has emerged that repetition neural effects are likely mediated by mechanisms functionally distinct from prediction effects – with prediction effects emerging via neural sharpening (Kok et al., 2012; Yon et al., 2018) and repetition effects via local neural scaling (Alink et al., 2018).

There may also be theoretical reason to query whether prediction mechanisms could in principle generate such sensory attenuation. Certainly, it has been argued that the mechanism typically outlined in attenuation theories – where the prediction is subtracted from the input to generate the percept – is inconsistent with prominent predictive processing models of perception and action (Brown et al., 2013). Furthermore, the neural mechanisms that have been characterised to underlie predictive influences on perception in domains outside of action are thought to generate a qualitatively opposite influence on perception. Specifically, it is thought that when we predict a sensory event we increase the gain on sensory units tuned to that event and via competitive local interactions relatively inhibit sensory populations tuned to unpredicted events (de Lange et al., 2018; Press et al., 2020b; Press & Yon, 2019; Summerfield & de Lange, 2014). Evidence for such a mechanism has been provided by a number of neuroimaging techniques. For instance, transcranial magnetic stimulation work demonstrates that disruptive pulses to sensory regions at the time of predictive cues eliminate perceptual cueing effects (Gandolfo & Downing, 2019), magnetoencephalography work demonstrates that visual events can be decoded from visual brain activity before they are presented (Kok et al., 2017), and functional magnetic resonance imaging studies indicate that sensory suppression is observed only in voxels that do not encode the presented stimuli (Kok et al., 2012; Yon et al., 2018).

Such predictive neural ‘sharpening’ processes are thought to upweight, rather than downweight, perception of expected events, enabling rapid generation of largely veridical experiences in the face of sensory noise (de Lange et al., 2018; Kersten et al., 2004; Yuille & Kersten, 2006). Specifically, increasing the gain on sensory channels is thought to increase the detectability and apparent intensity of a sensory event (Brown et al., 2013; Carrasco et al., 2004; Wyart et al., 2012). Such accounts are consistent with a range of findings that expected visual events are recognised and identified more readily (Bar, 2004; Palmer, 1975; Puri & Wojciulik, 2008), perceived with higher intensity (Han & van Rullen, 2016; Yon & Press, 2018), and better decoded from sensory brain activity (Heilbron et al., 2020; Kok et al., 2012) than unexpected events. Evidence from our lab and others’ in the visual domain demonstrates that predictions made on the basis of action can similarly upweight perceptual processing of visual outcomes (Christensen et al., 2011; Dogge, Custers, Gayet, et al., 2019; Yon et al., 2018, 2019; Yon & Press, 2017, 2018), suggesting that action predictions do not shape perception differently per se. It is also perhaps unclear why the adaptive arguments presented for downweighting (informativeness) and upweighting (veridicality) predicted percepts should apply differentially in the domain of action (see Press et al., 2020b).

However, a stark difference between studies purporting to demonstrate upweighting and downweighting is that the former study visual perception whereas the latter study tactile perception. It is widely believed that action predictions shape tactile perception in a qualitatively distinct way from other sensory modalities. The majority of reports on the topic simply report that action prediction attenuates processing of predicted tactile outcomes without proposing why tactile perception should be different. However, there are a number of possibilities. For example, it was recently proposed that the difference relates to tactile events being ‘body-related’ in contrast with many visual events (Dogge, Custers, & Aarts, 2019). Additionally, predictive influences on perception may be mediated by different neural mechanisms that exhibit different functions. Specifically, tactile attenuation during action is thought to be dependent on the degree of connectivity between secondary somatosensory cortex and the cerebellum (Kilteni & Ehrsson, 2020), whereas prediction in the visual domain is thought to be mediated by hippocampal representation (Kok et al., 2020; Kok & Turk-Browne, 2018). Therefore, in principle cerebellar predictions may attenuate sensation, whereas hippocampal predictions may sharpen it. Alternatively, there may not be a qualitative distinction in the tactile domain specifically, and instead attenuation may result from mechanisms that are not predictive.

To distinguish these possibilities, we employed the force judgement paradigm used widely in the action domain and thought to demonstrate predictive attenuation, and in Experiments 2 and 3 employed predictive manipulations unconfounded by repetition. Such force judgement paradigms (e.g. Bays et al., 2005, 2006; Kilteni et al., 2019) require participants to move an active right index finger towards a passive left finger, and the left finger receives stimulation mechanically. In these paradigms, touch on the left finger is reported as less forceful when accompanied by action, relative to similar judgements when passive.

Experiment 1 was designed to replicate this design and remove potentially confounding influences of non-predictive sensory gating found in the typical set-up. Specifically, this paradigm has been employed by action researchers partly to allow for dissociation of identity-specific prediction effects from those of identity-general gating effects (Bays et al., 2005) – where action generates generalised gating of all sensation on a moving effector (Williams et al., 1998; Williams & Chapman, 2000), perhaps mediated by neural mechanisms observed at the earliest relay in the spinal cord (Seki & Fetz, 2012). These mechanisms are not considered predictive, because they influence perception of events regardless of whether they were predicted outcomes of action, e.g. gaps in ongoing wrist vibrations are not a typical outcome predicted by juggling but are perceptually attenuated during this task (Juravle & Spence, 2011). Force judgement paradigms are therefore considered optimal for examining the influence of predictive mechanisms because they probe perception of events presented to passive effectors. However, given that the active finger is concurrently stimulated by pressing the trigger device, generalised gating mechanisms may still influence responses – perception of this low intensity stimulation on the active finger could bias responses about the passive stimulation (Firestone & Scholl, 2016). Experiment 1 therefore contrasted findings in the typical set up with an additional condition in which participants did not receive stimulation to their active finger.

Upon finding that tactile events presented alongside action can in fact be perceived more forcefully when reducing the contribution of non-predictive gating processes in this manner, Experiments 2 and 3 aimed to examine explicitly the contribution of predictive mechanisms to perception. Therefore, in Experiments 2 and 3 participants always judged the force of stimulation while concurrently performing an action, but where the tactile events were either predicted or not on the basis of the concurrent action. We thereby tested whether tactile events predicted by action are indeed perceived less intensely than unexpected events – as downweighting accounts would propose – or instead are perceived as more intense – consistent with upweighting theories of prediction outside of action domains.

Results

Experiment 1

In Experiment 1, participants moved an active right index finger towards a passive left index finger. The ‘Contact’ condition closely resembled previous studies where participants pressed a button with the active finger to generate a mechanical force on the passive finger. They made judgements about the intensity of this force relative to a reference event, and these judgements were compared against those when all fingers were inactive. We were also interested in whether similar effects would be found in a ‘No Contact’ condition, where a similar downward motion of the active finger triggered the same stimulation but without making contact with any device. The tactile event was triggered by the same action, but via an infrared motion tracker (generating no tactile feedback) rather than a button press. We predicted that we would replicate the typical attenuation finding (e.g., Bays et al., 2005; 2006) in the Contact condition, such that participants would perceive the passive tactile event to be less forceful when accompanied by action. If attenuation reflects the operation of predictive downweighting mechanisms, the reduction in intensity ratings during action would likely be equivalent in Contact and No Contact conditions. If instead attenuation is modulated by generalised gating, the pattern should be different in the No Contact condition – to the extent that events may even be perceived with greater intensity when accompanied by action, as predicted by upweighting theories from sensory cognition.

Participants held their left hand palm upwards (Fig. 1A) with their index fingertip positioned against a solenoid, and their right hand resting on a shelf above. At the start of each trial, participants were cued onscreen to move their right index finger (‘move’; Active trials – 50 %) or remain stationary (‘do not move’; Passive trials – 50 %). On Active trials, they moved their finger downwards to a button (Contact blocks) or performed a similar movement without making button contact (No Contact blocks – motion detected via infrared tracking). The target stimulus was delivered to the left index finger for 30 ms, resulting in apparent synchrony of stimulation with movement. Participants judged the force of this stimulus against a subsequently presented reference. On Passive trials, the target stimulus was delivered 500 ms after the cue to remain still. Participant responses were modelled with cumulative Gaussians to estimate psychometric functions, and Points of Subjective Equivalence (PSEs) were calculated – the point at which participants judged the target and reference events to have equal force. Lower values indicate more forceful target percepts.
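This psychometric modelling step can be sketched as follows. The code is illustrative only – the intensity levels, response proportions, and function names are hypothetical, not the study's analysis code: a cumulative Gaussian is fit to the proportion of ‘target stronger’ responses, and its mean is taken as the PSE.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, mu, sigma):
    """P('target stronger than reference') as a function of target force."""
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_pse(intensities, p_stronger):
    """Fit a cumulative Gaussian; mu is the PSE, sigma indexes precision."""
    (mu, sigma), _ = curve_fit(
        cumulative_gaussian, intensities, p_stronger,
        p0=[np.mean(intensities), 1.0],
        bounds=([-np.inf, 1e-3], [np.inf, np.inf]),  # keep sigma positive
    )
    return mu, sigma

# Hypothetical participant: responses centred on a 4 N target force
levels = np.array([2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0])
p_resp = np.array([0.02, 0.15, 0.30, 0.50, 0.70, 0.85, 0.98])
pse, slope = fit_pse(levels, p_resp)  # lower PSE = more forceful target percept
```

Fitting this separately per condition (Active vs Passive, or expected vs unexpected) then yields the per-participant PSE differences that the analyses compare.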

Figure 1: Experiment 1.

(A) On each trial, participants made downward movements with their right index finger either over a motion tracker (No Contact condition), or towards a button with which they made contact (Contact condition). Each movement elicited a tactile punctate event to the left index finger positioned directly below. (B) PSEs were calculated for each participant (data are from an example participant in the No Contact condition for Active [dark blue] and Passive [light blue] trials). (C) Mean PSEs were higher in Active than Passive trials in the Contact condition, but lower in Active than Passive trials in the No Contact condition. Larger PSEs indicate less intense target percepts (* p < .05, ** p = .001). (D) PSE effect of movement (Passive – Active) for the Contact (top) and No Contact (bottom) conditions, plotted with raincloud plots (Allen et al., 2019) displaying probability density estimates (upper) and box and scatter plots (lower). Boxes denote lower, middle and upper quartiles, whiskers denote 1.5 interquartile range, and dots denote difference scores for each participant (N=30). Positive effects of movement indicate more intensely perceived active events relative to passive events, whereas negative values indicate the reverse – less intensely perceived active events.

PSE values were analysed in a 2×2 within-participants ANOVA, revealing no main effect of Contact (F(1, 29) = 3.11, p = .089, ηp2 = .10) or Movement (F(1, 29) = 1.24, p = .274, ηp2 = .04). However, there was a significant interaction between Contact and Movement (F(1, 29) = 15.39, p < .001, ηp2 = .35), driven by lower force judgements (higher PSEs) in Active (M = 4.73, SD = 1.22) compared to Passive trials (M = 4.35, SD = .80) in the Contact condition (t(29) = 2.07, p = .047, d = .38), but higher force judgements (lower PSEs) in Active (M = 3.82, SD = 1.11) compared to Passive trials (M = 4.45, SD = .97) in the No Contact condition (t(29) = −3.80, p = .001, d = .69, see Fig. 1C). Thus, when a moving effector receives cutaneous stimulation simultaneously with passive effector stimulation, tactile events are perceived less intensely during movement. Conversely, when the moving effector does not receive such stimulation, tactile events are perceived more intensely during movement. These results therefore replicate previous findings (e.g., Bays et al., 2005; 2006; Kilteni et al., 2019) that tactile events on a passive finger are perceived less intensely during active movement, but only when the specifics of the paradigm are replicated such that the active finger is also stimulated. When such stimulation is absent, tactile forces are rated as more intense during movement. These findings are therefore consistent with our suggestion that some attenuation previously thought to be determined by predictive mechanisms could in principle be generated by generalised gating mechanisms – even when the target tactile events are delivered to passive effectors.
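The interaction reported above amounts to comparing the two movement effects (Active – Passive) against each other. A toy illustration with simulated data (means and SDs loosely borrowed from the reported statistics; this is not the real dataset, so the resulting statistics will not match those in the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # participants, as in the study

# Hypothetical PSE values per condition (illustrative draws only)
contact_active    = rng.normal(4.73, 1.22, n)
contact_passive   = rng.normal(4.35, 0.80, n)
nocontact_active  = rng.normal(3.82, 1.11, n)
nocontact_passive = rng.normal(4.45, 0.97, n)

# The Contact x Movement interaction is the difference between the two
# movement effects (Active - Passive PSEs; higher PSE = weaker percept)
movement_contact   = contact_active - contact_passive
movement_nocontact = nocontact_active - nocontact_passive
t_int, p_int = stats.ttest_rel(movement_contact, movement_nocontact)
```

Testing `movement_contact` against `movement_nocontact` with a paired t-test is equivalent to the interaction term of the 2×2 within-participants ANOVA.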

Experiment 2

Experiment 1 found that tactile events presented during action can be rated as more forceful than passive events, when the active finger is not itself stimulated. This effect is plausibly explained by predictive upweighting mechanisms given that the tactile event can be anticipated on the basis of action. If the mechanism generating these influences is predictive, these effects should be sensitive to conditional probabilities between actions and outcomes (de Lange et al., 2018). Therefore, in Experiment 2 we compared perception of tactile events when they were expected or unexpected based on learned action-outcome probabilities.

Participants performed one of two movements that predicted one of two tactile effects (Fig. 2A). Participants now moved their right index finger upwards or downwards in response to a shape (square or circle) imperative stimulus. Such movement triggered the delivery of a target tactile stimulus to the left index finger or left middle finger. Presenting two action types and two stimulation types allowed us to compare perception of expected and unexpected events, while controlling for repetition effects. During the training session, each action (e.g., upward movement) was 100% predictive of a specific tactile event (e.g., stimulation to the index finger). In a test session 24 hours later, the action-outcome relationship was degraded to measure perception of unexpected events – the expected finger was stimulated on 66.6% of trials, and the unexpected finger on the remaining 33.3% of trials. Downweighting accounts predict that participants will rate expected events as less intense (forceful) than unexpected events, whereas upweighting theories predict that expected tactile stimulation will be rated as more intense than unexpected stimulation.

Figure 2:

(A) Experiment 2. On each trial, participants made a downwards or upwards movement with their right index finger over a motion tracker, which elicited tactile punctate events to the left index or middle finger. Movements were perfectly predictive of tactile events during the training session, and 66.6% predictive during the test session. (B) The design of Experiment 3 was similar to Experiment 2, but participants instead made only downwards movements, now with either their right index or middle finger. (C) Mean PSEs were lower for expected than unexpected trials in both Experiment 2 and Experiment 3. Larger PSEs indicate a less intensely perceived target stimulus (* p < .05). (D) PSE expectation effect (Unexpected – Expected) plotted with raincloud plots (Allen et al., 2019) displaying probability density estimates (upper) and box and scatter plots (lower), for Experiment 2 (top) and Experiment 3 (bottom). Boxes denote lower, middle and upper quartiles, whiskers denote 1.5 interquartile range, and dots denote difference scores for each participant (N=30). Positive expectation effect values indicate more intensely perceived expected events relative to unexpected events.

PSE values were lower on expected trials (M = 3.72, SD = .96) than unexpected trials (M = 3.93, SD = .80; t(29) = −2.13, p = .041, d = .39, see Fig. 2C), demonstrating that – as predicted under upweighting accounts – expected target events were perceived to be more forceful than unexpected events. This finding is difficult to reconcile with claims in the action literature that predicted percepts should be downweighted, or attenuated.

Experiment 3

Experiment 3 was designed to provide a conceptual replication of Experiment 2, using a set-up more closely aligned with typical action paradigms – whereby one always makes a movement towards another effector. As in Experiment 2, this movement could predict the stimulation or not, on the basis of conditional probabilities established in the training phase. Participants’ right hand was vertically aligned with their left hand, such that movements of their right index finger were directed towards their left index finger (Fig. 2B). During the training blocks, movements were perfectly predictive of tactile events. Half of the participants experienced a mapping whereby right index finger movements produced left index finger stimulation and right middle finger movements produced left middle finger stimulation. The other half experienced the flipped mapping (Materials and Methods). We hypothesised that – as in Experiment 2 – tactile events predicted on the basis of the preceding movement would be perceived as more intense than unexpected events.

We made two further changes in Experiment 3 to remove confounds from Experiment 2. In Experiment 2, a cue instructed participants which action to perform. In principle, the expectation effects observed in Experiment 2 may have resulted from cue-outcome learning rather than action-outcome learning. Experiment 3 thus removed these cues and required free selection of action. The explicit reference stimulus was also removed from Experiment 3 and comparisons were made against an implicit reference, eliminating the possibility that effects in Experiment 2 were determined by influences of expectation on perception of the reference stimulus.

As in Experiment 2, PSE values were lower in expected (M = 4.08, SD = 1.00) than unexpected (M = 4.28, SD = .99) trials (t(29) = −2.56, p = .016, d = .47, see Fig. 2C). These results show that tactile events expected on the basis of action were perceived as more forceful than unexpected events, consistent with the results of Experiment 2 and with upweighting theories of perception.

Computational modelling

Our findings demonstrate that expected tactile forces resulting from action are rated more intensely than equivalent unexpected forces. These findings are consistent with predictive upweighting theories of perception, which propose that it is adaptive for observers to combine sampled sensory evidence with prior knowledge, biasing perception towards what we expect (Yuille & Kersten, 2006). This may be achieved mechanistically by altering the weights on sensory channels, increasing the gain of expected relative to unexpected signals (de Lange et al., 2018; Summerfield & de Lange, 2014). Such a mechanism would increase the detectability and apparent intensity of expected signals relative to unexpected ones (Brown et al., 2013). However, an alternative explanation for the findings is that expectation effects reflect biasing in response-generation circuits – such that action biases people to respond that events are more intense when they are expected, rather than altering perception itself (see De Lange, Rahnev, Donner, & Lau, 2013; Firestone & Scholl, 2016).

These different kinds of bias can be dissociated in computational models that conceptualise perceptual decisions as a process of evidence accumulation. Perceptual biases are thought to grow across time – every time response units sample from perceptual units they will be sampling from a biased representation, therefore increasing the magnitude of biasing effects across a larger number of samples (Urai et al., 2019; Yon et al., 2019). In contrast, response biases are thought to operate regardless of current incoming evidence and to be present from the outset of a trial – analogous to setting a threshold criterion for responses (Leite & Ratcliff, 2011). According to this logic, we can model the decision process with drift diffusion modelling (DDM) to identify the nature of the biasing process. DDMs conceptualise two-choice decisions (e.g. ‘stronger or weaker than average?’) as a noisy process of sequentially sampling sensory evidence to compute a decision variable (Ratcliff & McKoon, 2008). We can thus establish whether action expectations shift the starting point of evidence accumulation towards a response boundary (‘start biasing’, z parameter; Fig. 3A), or instead bias the rate of evidence accumulation (db parameter, ‘drift biasing’, Fig. 3B).
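The distinction between the two bias parameters can be made concrete with a toy simulation – a bare-bones Euler-scheme diffusion process, not the hierarchical model the authors fit; the parameter names z and db simply mirror those in the text, and timeouts are resolved crudely for brevity:

```python
import numpy as np

def simulate_ddm(n_trials, drift, *, a=2.0, z=0.5, db=0.0,
                 noise=1.0, dt=0.005, max_t=3.0, seed=0):
    """Simulate a two-boundary drift diffusion process.

    a  : boundary separation (lower boundary at 0, upper at a)
    z  : relative start point (0.5 = unbiased midpoint)
    db : constant added to the drift rate ('drift bias')
    Returns choices (1 = upper boundary; timeouts counted as lower) and RTs.
    """
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a and t < max_t:
            x += (drift + db) * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x >= a)
        rts[i] = t
    return choices, rts

# With zero stimulus drift, a start bias (z > 0.5) and a drift bias (db > 0)
# both push choices toward the upper ('stronger') boundary, but via
# different dynamics: one shifts the origin, the other tilts accumulation.
c_start, _ = simulate_ddm(500, drift=0.0, z=0.65)
c_drift, _ = simulate_ddm(500, drift=0.0, db=0.5)
```

Because a start bias is a fixed head start while a drift bias compounds over time, the two produce different choice and reaction-time patterns, which is what allows model fitting to tell them apart.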

Figure 3.

Illustration of how the DDM could explain expectation biases, and results of computational modelling. (A) For an unbiased decision process (black lines), sensory evidence integrates towards the upper response boundary when stimuli are stronger than average (solid lines) and towards the lower response boundary when weaker than average (dotted lines). Baseline shifts in decision circuits could shift the start point of the accumulation process nearer to the upper boundary for expected events (influencing the parameter z; blue lines - Start bias model). (B) Alternatively, selectively altering the weights on sensory channels could bias evidence accumulation in line with expectations (influencing parameter db; red lines – Drift bias model). (C) Simulated Start + Drift bias (winning DIC model) expectation effect plotted against the empirical expectation effect. (D) Simulated Drift bias expectation effect plotted against the empirical expectation effect accounting for simulated Start bias effects (plotted as the residuals from a model where the simulated Start bias effect predicts the empirical effect). Importantly, our regression analysis revealed that drift biases accounted for significant additional variance once accounting for start biases. All expectation effects were calculated by subtracting Expected PSEs from the Unexpected PSEs.

We fit hierarchical DDMs to participant choice and reaction time data from Experiment 3 (NB: reaction times were not collected in Experiment 2). We specified four different models: 1) a null model where no parameters were permitted to vary between expected and unexpected trials; 2) a start bias model where the start point of evidence accumulation (z) could vary between expectation conditions; 3) a drift bias model where a constant added to evidence accumulation (db) could vary according to expectation; 4) a start + drift bias model where both parameters could vary according to expectation. Models were compared using deviance information criteria (DIC) as an approximation of Bayesian model evidence. Lower DIC values indicate better model fit. Fitting the DDM to the behavioural data found that the model allowing both start and drift biases to vary according to expectation provided the best fit (DIC relative to null = −234.8) relative to both the start bias (DIC relative to null = −191.06) and drift bias (DIC relative to null = −8.62) models. This finding may suggest that observed biases are a product of both start and drift rate biasing. However, although the DIC measure does include a penalty for model complexity, it is thought to be biased towards models with higher complexity (Wiecki et al., 2013) and it indeed favoured the most complex model here.
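For reference, DIC combines the posterior mean deviance with an effective-parameter penalty; in its standard form (not reproduced from the paper):

```latex
D(\theta) = -2 \log p(y \mid \theta)
  % deviance of the data y under parameters \theta
\bar{D} = \mathbb{E}_{\theta \mid y}\!\left[ D(\theta) \right]
  % posterior mean deviance (model fit)
p_D = \bar{D} - D(\bar{\theta})
  % effective number of parameters (complexity penalty)
\mathrm{DIC} = \bar{D} + p_D = 2\bar{D} - D(\bar{\theta})
```

The p_D term is the complexity penalty; as noted above, it can still under-penalise more flexible models, which motivates the posterior predictive checks.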

Therefore, given that we were interested in whether any of the PSE expectation effect is generated by sensory biasing – rather than possible additional contributions of response biasing – we conducted a posterior predictive check to evaluate how well simulated data from each of the models could reproduce key patterns in our data. The posterior model parameters for the start bias, drift bias, and start + drift bias models were used to simulate a distribution of 500 reaction times and choices for each trial for each participant. From this simulated data we calculated the probability that a ‘stronger than average’ response was given at each intensity level, separately for expected and unexpected trials. This allowed us to model simulated psychometric functions for expected and unexpected trials, exactly as we had done for empirical decisions. Performing this procedure for each model yielded separate simulated expectation effects (Unexpected PSE – Expected PSE) for each participant under the start bias, drift bias and start + drift bias models. Correlations were calculated to quantify how well simulated expectation effects reproduced empirical expectation effects, which revealed significant relationships for all three models (Start bias model: r(30) = .39, p = .034; Drift bias model: r(30) = .43, p = .017; Start + Drift bias model: r(30) = .53, p = .003, see Fig. 3C).
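The logic of this check can be sketched as follows. Everything here is hypothetical – the real analysis simulated from fitted HDDM posteriors, whereas this sketch simulates binary choices from a cumulative-Gaussian observer per condition, recovers simulated PSEs, and correlates simulated with empirical expectation effects:

```python
import numpy as np
from scipy.stats import norm, pearsonr

rng = np.random.default_rng(1)
levels = np.linspace(2.0, 6.0, 7)   # hypothetical target intensities

def simulated_pse(mu, sigma, n_choices=10000):
    """Posterior-predictive-style PSE: simulate binary choices from a
    cumulative-Gaussian observer, then recover the 50% point from the
    simulated choice proportions (linear interpolation for brevity)."""
    p = norm.cdf(levels, mu, sigma)
    phat = rng.binomial(n_choices, p) / n_choices   # simulated proportions
    return np.interp(0.5, phat, levels)             # crude PSE estimate

# Hypothetical per-participant parameters: expected trials get a small
# leftward PSE shift (more intense percepts), as in the empirical pattern
n = 30
mu_unexp = rng.normal(4.28, 0.99, n)
mu_exp = mu_unexp - rng.normal(0.20, 0.30, n)
sim_effect = np.array([simulated_pse(u, 1.0) - simulated_pse(e, 1.0)
                       for e, u in zip(mu_exp, mu_unexp)])
emp_effect = (mu_unexp - mu_exp) + rng.normal(0, 0.15, n)  # noisy 'empirical' effect
r, p_val = pearsonr(sim_effect, emp_effect)  # does the model reproduce the data?
```

The per-model correlations in the text play exactly this role: a model whose simulated expectation effects track the empirical ones is capturing the biasing process.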

More informatively, we examined whether drift biasing accounted for variance in expectation effects beyond start biasing alone, by conducting a stepwise linear regression predicting the empirical expectation effect (Unexpected PSE – Expected PSE). In the first step, the simulated expectation effect from the start bias model was entered as a predictor and significantly predicted the empirical expectation effect (R2 = .15, F(1,28) = 4.96, p = .034). In the second step, the simulated expectation effect from the drift bias model was added as a predictor. The regression model remained significant with this addition (R2 = .32, F(2,27) = 6.34, p = .006) and, more importantly, the addition significantly improved model fit (Fchange(1,27) = 6.72, p = .015; n.b., the same result would be obtained by simultaneously regressing both predictors against the empirical expectation effect using the enter method to establish unique variance). This analysis reveals that a model implementing a drift biasing mechanism better predicts the empirical effects of expectation on perceptual decisions, explaining unique variance in participant decisions that cannot be explained by response biasing.
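The incremental (F-change) test in such a stepwise regression can be reproduced with ordinary least squares. The sketch below uses simulated placeholder data, not the empirical expectation effects:

```python
import numpy as np

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def f_change(r2_reduced, r2_full, n, k_reduced, k_full):
    """F statistic for the increment in R-squared when adding
    (k_full - k_reduced) predictors to a nested model."""
    num = (r2_full - r2_reduced) / (k_full - k_reduced)
    den = (1.0 - r2_full) / (n - k_full - 1)
    return num / den

# Hypothetical simulated expectation effects (start-bias and
# drift-bias models) predicting a hypothetical empirical effect.
rng = np.random.default_rng(2)
n = 30
start_sim = rng.normal(size=n)
drift_sim = rng.normal(size=n)
empirical = 0.4 * start_sim + 0.5 * drift_sim + rng.normal(scale=0.8, size=n)

r2_step1 = r_squared(start_sim[:, None], empirical)
r2_step2 = r_squared(np.column_stack([start_sim, drift_sim]), empirical)
fc = f_change(r2_step1, r2_step2, n, 1, 2)   # df = (1, n - 3) = (1, 27)
```

With n = 30 participants and two predictors, the change statistic has the same (1, 27) degrees of freedom as reported above.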

General Discussion

Extant models disagree about how predictions should shape perception of action outcomes. We examined whether sensorimotor prediction attenuates perception of tactile events, as is widely assumed in the action literature, or instead whether predicted events may be perceptually upweighted – in line with theories in the wider sensory cognition literature. We adapted a force judgement paradigm used widely in the action literature, but applied predictive manipulations. Experiment 1 replicated typical findings that self-produced forces were rated as less intense than similar passive forces presented in the absence of action. However, this attenuation effect was reversed when cutaneous stimulation was removed from the active finger, consistent with a contribution from general gating rather than specific prediction mechanisms. Experiments 2 and 3 adapted the paradigm to examine the relative intensity of tactile events expected or unexpected on the basis of conditional probabilities, always in the presence of action. Both of these experiments found that expected tactile action outcomes were perceived more, not less, intensely than those that were unexpected. Computational modelling suggested that expectations alter the way sensory evidence is integrated – increasing the gain afforded to expected tactile signals.

These findings are consistent with perceptual upweighting accounts from outside of action domains, which propose that we use knowledge about statistical likelihoods to bias our percepts towards those that are more probable (de Lange et al., 2018; Kersten et al., 2004; Kersten & Yuille, 2003). Under these theories we will perceive the environment accurately, on average, despite sensory noise and the need to process information rapidly (Kersten et al., 2004). These theories have been developed outside of action domains, but in fact there is a range of evidence from visual processing during action which is consistent with them (Christensen et al., 2011; Yon et al., 2018, 2019; Yon & Press, 2017). Under these accounts, pre-activation of expected perceptual representations will generate perceptual upweighting of the expected, because higher gain on the representation is thought to be associated with a stronger perceptual experience (Carrasco et al., 2004; Wyart et al., 2012). The present findings indicate that these mechanisms operate similarly in touch.

The findings are harder to reconcile with the prominent downweighting theories from action (Blakemore et al., 1998; Dogge, Custers, & Aarts, 2019; Fiehler et al., 2019; Kilteni & Ehrsson, 2017), which propose that expected action outcomes are perceptually attenuated. As already outlined, it has been previously argued that a neural mechanism operating in the manner outlined in attenuation theories – where the prediction is subtracted from the input – is inconsistent with prominent predictive coding theories of perception (Brown et al., 2013). It is therefore essential to consider how the present findings can be resolved with the multitude of data cited in support of downweighting theories. A large body of work has attributed attenuation to predictive processes in humans as well as in a variety of other species and sensory systems. For example, attenuating internally-generated electric fields in Mormyrid fish improves detection of prey-like stimuli, a finding thought to be a consequence of predicting self-generated sensory input (Bell, 2001; Enikolopov et al., 2018). Similarly, virtual reality trained mice show suppressed auditory responses to self-produced tones generated by treadmill running (Schneider et al., 2018) or licking behaviours (Singla et al., 2017), compared to no movement. In humans, studies measuring the perceived force of a tactile stimulus during movement show similar attenuation effects during movement relative to no movement, and commonly attribute these effects to predictive mechanisms (Bays et al., 2005, 2006).

However, none of the above studies have demonstrated whether underlying mechanisms are predictive – i.e., operating according to stimulus probabilities (de Lange et al., 2018). There are a number of non-predictive mechanisms which could instead explain attenuation, and on the basis of the current findings we propose that some effects are instead generated by identity-general gating mechanisms, and others (e.g., Kilteni et al., 2019) by mechanisms shaping perception according to event repetition (see Press et al., 2020a, 2020b, for further discussion). Furthermore, it is essential to note that findings of reduced somatosensory (Blakemore et al., 1998; Kilteni & Ehrsson, 2020; Shergill et al., 2013) or visual (Kontaris et al., 2009; Stanley & Miall, 2007) neural response when perceiving expected action outcomes – frequently taken to support attenuation theories – may not be reflective of predictive downweighting processes. It has been demonstrated in visual cognition that a global signal reduction can emerge via neural sharpening rather than processes proposed in attenuation accounts (de Lange et al., 2018). Specifically, functional magnetic resonance imaging studies demonstrate that a visual stimulus predicted by a preceding tone (Kok et al., 2012) or a congruent action (Yon et al., 2018) evokes weaker activation only in visual cortical voxels tuned to unexpected stimuli, and multivariate pattern analysis (MVPA) demonstrates superior decoding for the expected stimulus.

It is also worth highlighting a recent proposal by some of us that the influence of prediction on perception may not be as simple as either upweighting or downweighting accounts propose (Press et al., 2020b). We outline that any monolithic process upweighting what we expect to render our experiences more veridical will necessarily make them less informative, whereas monolithic downweighting processes rendering our experiences more informative will make them less veridical. Given the necessity of both veridical and informative experiences, we have proposed how both aims may be achieved via opposing processes operating at distinct timescales. Our theory suggests that perception is initially biased towards what we expect in order to generate experiences rapidly that, on average, are more veridical. However, if events are presented which generate particularly high surprise, later processes adapted to subserve learning highlight these events. Therefore, any relative downweighting of the expected is achieved via reactive processes that prioritise only the most informative of unexpected inputs. This theory may address some of the discrepancies in the literature, where outcomes may depend upon the extent to which events are ‘unexpected’ (cf. Kullback–Leibler divergence; Itti & Baldi, 2009) – i.e., whether unexpected inputs should warrant model updating – and the particular measure of sensory processing. Regardless of the ultimate resolution of this debate, the important conclusion from the present studies is that sensorimotor prediction does not appear to exhibit a qualitatively distinct downweighting influence on tactile perception – inconsistent with current theories.

Resolution of these conflicts will be crucial for determining the typical influence of prediction on perception in healthy young populations, as well as older and clinical populations. Tactile attenuation during action – and even more specifically, this particular force judgement paradigm – has been used to demonstrate sensory differences associated with healthy ageing (Wolpe et al., 2016), motor severity in Parkinson’s disease (Wolpe et al., 2018) and hallucinatory severity in schizophrenia (Shergill et al., 2014). Such differences are typically attributed to aberrant prediction of action consequences, but a closer inspection is necessary if the underlying mechanisms are not predictive. It will also be essential for future work to determine the level of overlap and interaction between expectation-based and attention-based processes when examining these mechanisms (Summerfield & de Lange, 2014). While expectation influences perception according to statistical likelihood of event occurrence, attention prioritises perceptual information according to task relevance. However, in our natural environment as well as the majority of experimental paradigms, more probable events are also more relevant for task performance. For example, many classic attentional paradigms in fact manipulate stimulus probabilities (Posner et al., 1980), while expectation manipulations render the relatively more expected also more task relevant (e.g. Rohenkohl, Gould, Pessoa, & Nobre, 2014). Future work must disentangle the relative influences of probabilities and task relevance to both upweighting and downweighting mechanisms to understand where these mechanisms can and cannot be dissociated.

To conclude, these findings suggest that sensorimotor prediction may increase, rather than decrease, the perceived intensity of tactile events, likely reflecting the operation of a mechanism that aids veridical perception of action outcomes in a noisy sensory environment. These findings challenge a central tenet of prominent motor control theories and demonstrate that sensorimotor prediction operates via qualitatively similar mechanisms to other prediction and regardless of the sensory domain.

Materials and Methods

Experiment 1

Participants

Thirty participants (16 female, mean age = 25.53 years [SD = 5.25]) were recruited from Birkbeck, University of London, and paid a small honorarium for their participation. Eight participants were replacements for those who could not complete the perceptual discrimination, and an additional four replaced exclusions due to technical malfunction. The sample size was determined a priori on the basis of pilot testing to estimate effect size. All experiments were performed with local ethics committee approval and in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.

Procedure

The experiment was conducted in MATLAB using the Cogent toolbox. Participants positioned their left hand palm upwards and held their index fingertip against a downward-facing solenoid (diameter of metal rod = 4 mm; diameter of solenoid = 15 mm; TACT-CONTR-TR2, Heijo Research Electronics), positioned so that the metal rod of the solenoid sat on the apex of the fingertip. Their right hand rested on a shelf, positioned such that the index finger distal phalange was directly above the left hand distal phalange, but rotated 90 degrees anticlockwise relative to their left hand (Fig 1A). An infrared motion tracker (Leap Motion Controller using the Matleap MATLAB interface) was placed on the shelf supporting the solenoids at the midpoint between them. Participants’ hands were visually occluded during the experiment and white noise was played through headphones (53 dB; piloting confirmed that this level rendered solenoid movement inaudible) throughout testing. In Contact blocks, participants’ right hand was positioned 5 cm above their left hand, and in No Contact blocks it was moved to 12 cm above to allow movements to be made without touching the shelf. Termination points of movements were approximately the same in the Contact and No Contact conditions. It is worth noting that this palm separation introduces a difference between the Contact and No Contact conditions in addition to contact itself. This setup allowed us to replicate the typical paradigm in the Contact condition while still registering movement in the No Contact condition. Importantly, all conclusions relate primarily to the simple effects within the Contact and No Contact conditions, so this additional difference should not alter the conclusions.

At the start of each trial, participants were cued onscreen to move their right index finger (‘move’; Active trials) or remain stationary (‘do not move’; Passive trials). On Active trials, they rotated their index finger downwards at the metacarpophalangeal joint. When motion was detected – by a button press in Contact blocks and infrared tracking in No Contact blocks – the target stimulus was delivered to the left index finger for 30 ms, resulting in apparent synchrony of stimulation with movement. After 1000 ms, a reference stimulus was presented for 30 ms. The target stimulus presented one of seven logarithmically-spaced forces, and the reference stimulus always presented the fourth (middle) force. After a 300 – 500 ms delay, participants were asked which tap was more forceful, responding with a left foot pedal for the first stimulus and a right foot pedal for the second stimulus. The next trial started after 1000 ms. In Passive trials, the target stimulus was delivered 500 ms after the cue to remain still.
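For illustration, a set of seven logarithmically spaced target forces with a fixed middle reference can be generated as follows; the endpoint force values here are hypothetical, as the actual force range is not specified above.

```python
import numpy as np

# Hypothetical endpoints: the text specifies seven logarithmically
# spaced forces but not the actual force range used.
forces = np.geomspace(0.5, 3.5, num=7)
reference = forces[3]  # the fourth (middle) force is the reference
```

Logarithmic spacing means each force level is a constant multiple of the one below it, so the step sizes scale with intensity.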

There were 560 trials in total; 140 for each of the Active and Passive conditions, in both the Contact and No Contact blocks. The order of blocks was counterbalanced across participants and trial type order was randomized. Participants completed eight practice trials before the main test blocks.

Modelling functions

Participant responses were modelled by cumulative Gaussians to estimate psychometric functions, using the Palamedes Toolbox (Prins & Kingdom, 2018) in Matlab. This procedure was performed separately for expected and unexpected trials during the test phase. The mean of the modelled Gaussian was taken as the PSE, describing the point at which participants judge the target and reference events to have equal force. Lower values are indicative of more intense target percepts.
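A minimal Python analogue of this fitting procedure, using scipy rather than the Palamedes toolbox and illustrative response proportions, is:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, mu, sigma):
    """Psychometric function: P(target judged stronger) vs. intensity."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative proportions of 'stronger' responses at seven intensity
# levels (not empirical data from the experiments).
levels = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
prop_stronger = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 0.98])

(mu, sigma), _ = curve_fit(cumulative_gaussian, levels, prop_stronger,
                           p0=[2.5, 0.5])
pse = mu  # point of subjective equality; a lower PSE indicates a
          # more intense target percept
```

The fitted mean (mu) is taken as the PSE and the standard deviation (sigma) indexes discrimination sensitivity, mirroring the cumulative Gaussian parameterisation described above.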

Experiment 2

Participants

Thirty new participants (20 female, mean age = 22.80 years [SD = 3.18]) were recruited in the same way as in Experiment 1. Six participants were replacements for those for whom acceptable psychometric functions could not be fitted to their responses, who were unable to follow instructions concerning movement performance, or for whom there was technical malfunction. One participant’s PSE scores were winsorized to meet the normality assumptions of parametric tests (from z = 3.34 to z = 3; Tukey, 1962).
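The winsorizing step can be sketched as follows; the data values are illustrative, not the participant's actual PSE scores.

```python
import numpy as np

def winsorize_z(scores, z_max=3.0):
    """Shrink values whose z-score exceeds z_max back to the z_max
    ceiling (computed from the original sample mean and SD),
    preserving rank order while reducing the leverage of outliers."""
    scores = np.asarray(scores, dtype=float)
    mu, sd = scores.mean(), scores.std(ddof=1)
    z = (scores - mu) / sd
    return mu + np.clip(z, -z_max, z_max) * sd

# Illustrative sample with one extreme value; only that value is
# pulled in to z = 3, the rest are unchanged.
data = np.array([0.08, 0.12, 0.10, 0.11, 0.09, 0.10,
                 0.13, 0.07, 0.12, 0.10, 0.08, 3.0])
adjusted = winsorize_z(data)
```

Only observations beyond the z = 3 ceiling are modified, so the adjustment leaves the bulk of the distribution intact.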

Procedure changes relative to Experiment 1

Participants were positioned with their left index and middle fingers making contact with independent solenoids. At the beginning of each trial an arbitrary cue (either a square or circle) instructed participants to move their right index finger either upwards or downwards from the metacarpophalangeal joint, tracked by an infrared motion sensor. This action triggered delivery of the target stimulus to one of the two solenoids. During training, participants’ action (e.g. a downwards movement) was 100% predictive of the location of the tactile event (e.g. left index finger). In the test session the action was only 66.6% predictive of the tactile event location. The training and test sessions were carried out at the same time on consecutive days. There were 420 trials in each session. Trial order was randomised and the action–stimulus mapping was counterbalanced across participants. Participants completed 12 practice trials before the main session trials.

Experiment 3

Participants

Thirty new participants (22 female, mean age = 24.3 years [SD = 4.34]) were recruited similarly to preceding studies. Three participants were replacements for those who could not complete the perceptual discrimination, and a further six for those who were unable to follow instructions concerning movement performance (>20% recorded movement errors on average).

Procedure changes relative to Experiment 2

The following changes were made relative to Experiment 2. Independent solenoids were now attached to both fingers via adhesive tape (diameter of metal rod = 4 mm; diameter of solenoid = 18 mm; TactAmp 4.2 Dancer Design). Participants’ hands, and therefore index and middle fingers, were spatially aligned with each other. At the start of each trial, participants selected to make a downwards movement with their right index or middle finger. After a 300 – 500 ms delay, participants were asked whether they perceived the test force to be more or less forceful than the average force intensity. The foot pedals were positioned at either 45 (for stronger) or 90 (for weaker) degree angles relative to their right foot to record responses. An example of the average force was presented to each finger once every 21 trials.

The experiment consisted of two training blocks followed by a test block, all occurring in the same session of testing. There were 210 trials in each block. During both of the training blocks, movements were perfectly predictive of tactile events. Half of the participants experienced a mapping whereby moving the right hand index finger resulted in left hand index stimulation and middle finger movement resulted in middle stimulation. The other half experienced a mapping whereby index finger movement resulted in middle stimulation, and middle finger movement in index stimulation. In the first training block participants responded yes/no to the question ‘Tap on index or middle finger?’. In the second training block they were asked about the force, similarly to in Experiment 2 and in subsequent test blocks.

Computational modelling

We fit DDMs to participant choice and reaction time data from Experiment 3 using the hDDM package implemented in Python (Wiecki, Sofer, & Frank, 2013). In the hDDM, model parameters for each participant are treated as random effects drawn from group-level distributions, and Bayesian Markov Chain Monte Carlo (MCMC) sampling is used to estimate group and participant level parameters simultaneously. All models were estimated with MCMC sampling, and parameters were estimated with 30,000 samples (‘burn-in’=7,500). Model convergence was assessed by inspecting chain posteriors and simulating reaction time distributions for each participant.
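Convergence of multiple MCMC chains is commonly summarised with the Gelman–Rubin statistic (R-hat); the sketch below shows this generic diagnostic, which is not necessarily the exact check implemented in the hDDM package.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m MCMC chains of
    length n (rows = chains). Values near 1 indicate that between-
    and within-chain variances agree, i.e. the chains have mixed."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    b = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * w + b / n         # pooled variance estimate
    return np.sqrt(var_hat / w)

rng = np.random.default_rng(3)
# Two well-mixed chains sampling the same (standard normal) posterior
good = rng.normal(0.0, 1.0, size=(2, 5000))
# Two chains stuck in different modes: R-hat well above 1
bad = np.vstack([rng.normal(0.0, 1.0, 5000), rng.normal(3.0, 1.0, 5000)])
```

For well-mixed chains R-hat is close to 1, whereas chains exploring different regions of parameter space inflate the between-chain variance and push R-hat well above 1.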

Acknowledgements

The work was supported by Leverhulme Trust (RPG-2016-105) and Wellcome Trust (204770/Z/16/Z) grants awarded to CP. FdeL was supported by a Vidi grant (Nederlandse Organisatie voor Wetenschappelijk Onderzoek, 452-13-016) and an ERC Starting Grant (Horizon 2020 Framework Programme, 678286).

References

  1. Alink, A., Abdulrahman, H., & Henson, R. N. (2018). Forward models demonstrate that repetition suppression is best modelled by local neural scaling. Nature Communications, 9(1), 1–10. https://doi.org/10.1038/s41467-018-05957-0
  2. Allen, M., Poggiali, D., Whitaker, K., Marshall, T. R., & Kievit, R. A. (2019). Raincloud plots: A multi-platform tool for robust data visualization [version 1; peer review: 2 approved]. Wellcome Open Research, 4, 63. https://doi.org/10.12688/wellcomeopenres.15191.1
  3. Bar, M. (2004). Visual objects in context. Nature Reviews Neuroscience, 5(8), 617–629. https://doi.org/10.1038/nrn1476
  4. Bays, P. M., Flanagan, J. R., & Wolpert, D. M. (2006). Attenuation of self-generated tactile sensations is predictive, not postdictive. PLoS Biology, 4(2), 281–284. https://doi.org/10.1371/journal.pbio.0040028
  5. Bays, P. M., Wolpert, D. M., & Flanagan, J. R. (2005). Perception of the consequences of self-action is temporally tuned and event driven. Current Biology, 15(12), 1125–1128. https://doi.org/10.1016/j.cub.2005.05.023
  6. Bell, C. C. (2001). Memory-based expectations in electrosensory systems. Current Opinion in Neurobiology, 11(4), 481–487. https://doi.org/10.1016/S0959-4388(00)00238-5
  7. Blakemore, S. J., Wolpert, D. M., & Frith, C. D. (1998). Central cancellation of self-produced tickle sensation. Nature Neuroscience, 1(7), 635–640. https://doi.org/10.1038/2870
  8. Brown, H., Adams, R. A., Parees, I., Edwards, M., & Friston, K. (2013). Active inference, sensory attenuation and illusions. Cognitive Processing, 14(4), 411–427. https://doi.org/10.1007/s10339-013-0571-3
  9. Carrasco, M., Ling, S., & Read, S. (2004). Attention alters appearance. Nature Neuroscience, 7(3), 308–313. https://doi.org/10.1038/nn1194
  10. Christensen, A., Ilg, W., & Giese, M. A. (2011). Spatiotemporal tuning of the facilitation of biological motion perception by concurrent motor execution. The Journal of Neuroscience, 31(9), 3493–3499. https://doi.org/10.1523/JNEUROSCI.4277-10.2011
  11. de Lange, F. P., Heilbron, M., & Kok, P. (2018). How do expectations shape perception? Trends in Cognitive Sciences, 22(9), 764–779. https://doi.org/10.1016/j.tics.2018.06.002
  12. de Lange, F. P., Rahnev, D. A., Donner, T. H., & Lau, H. (2013). Prestimulus oscillatory activity over motor cortex reflects perceptual expectations. Journal of Neuroscience, 34(4), 1400–1410. https://doi.org/10.1523/JNEUROSCI.1094-12.2013
  13. Dogge, M., Custers, R., & Aarts, H. (2019). Moving forward: On the limits of motor-based forward models. Trends in Cognitive Sciences, 23(9), 743–753. https://doi.org/10.1016/j.tics.2019.06.008
  14. Dogge, M., Custers, R., Gayet, S., Hoijtink, H., & Aarts, H. (2019). Perception of action-outcomes is shaped by life-long and contextual expectations. Scientific Reports, 9(1), 5225. https://doi.org/10.1038/s41598-019-41090-8
  15. Enikolopov, A. G., Abbott, L. F., & Sawtell, N. B. (2018). Internally generated predictions enhance neural and behavioral detection of sensory stimuli in an electric fish. Neuron, 99(1), 135–146. https://doi.org/10.1016/j.neuron.2018.06.006
  16. Fiehler, K., Brenner, E., & Spering, M. (2019). Prediction in goal-directed action. Journal of Vision, 19(9), 1–21. https://doi.org/10.1167/19.9.10
  17. Firestone, C., & Scholl, B. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39, e229. https://doi.org/10.1017/S0140525X15000965
  18. Gandolfo, M., & Downing, P. E. (2019). Causal evidence for expression of perceptual expectations in category-selective extrastriate regions. Current Biology, 29(15), 2496–2500.e3. https://doi.org/10.1016/j.cub.2019.06.024
  19. Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10(1), 14–23. https://doi.org/10.1016/j.tics.2005.11.006
  20. Han, B., & van Rullen, R. (2016). Shape perception enhances perceived contrast: evidence for excitatory predictive feedback? Scientific Reports, 6, 22944. https://doi.org/10.1038/srep22944
  21. Heilbron, M., Richter, D., Ekman, M., Hagoort, P., & de Lange, F. P. (2020). Word contexts enhance the neural representation of individual letters in early visual cortex. Nature Communications, 11(1), 1–11. https://doi.org/10.1038/s41467-019-13996-4
  22. Itti, L., & Baldi, P. (2009). Bayesian surprise attracts human attention. Vision Research, 49(10), 1295–1306. https://doi.org/10.1016/j.visres.2008.09.007
  23. Juravle, G., & Spence, C. (2011). Juggling reveals a decisional component to tactile suppression. Experimental Brain Research, 213(1), 87–97. https://doi.org/10.1007/s00221-011-2780-2
  24. Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55, 271–304. https://doi.org/10.1146/annurev.psych.55.090902.142005
  25. Kersten, D., & Yuille, A. (2003). Bayesian models of object perception. Current Opinion in Neurobiology, 13(2), 150–158. https://doi.org/10.1016/S0959-4388(03)00042-4
  26. Kilteni, K., & Ehrsson, H. (2020). Functional connectivity between the cerebellum and somatosensory areas implements the attenuation of self-generated touch. Journal of Neuroscience, 40(4), 894–906. https://doi.org/10.1523/JNEUROSCI.1732-19.2019
  27. Kilteni, K., & Ehrsson, H. H. (2017). Sensorimotor predictions and tool use: Hand-held tools attenuate self-touch. Cognition, 165, 1–9. https://doi.org/10.1016/j.cognition.2017.04.005
  28. Kilteni, K., Houborg, C., & Ehrsson, H. H. (2019). Rapid learning and unlearning of predicted sensory delays in self-generated touch. eLife, 8, e42888. https://doi.org/10.7554/eLife.42888
  29. Kok, P., Jehee, J. F. M., & de Lange, F. P. (2012). Less is more: Expectation sharpens representations in the primary visual cortex. Neuron, 75(2), 265–270. https://doi.org/10.1016/j.neuron.2012.04.034
  30. Kok, P., Mostert, P., & de Lange, F. P. (2017). Prior expectations induce prestimulus sensory templates. Proceedings of the National Academy of Sciences, 114(39), 10473–10478. https://doi.org/10.1073/pnas.1705652114
  31. Kok, P., Rait, L. I., & Turk-Browne, N. B. (2020). Content-based dissociation of hippocampal involvement in prediction. Journal of Cognitive Neuroscience, 32(3), 527–545. https://doi.org/10.1162/jocn_a_01509
  32. Kok, P., & Turk-Browne, N. B. (2018). Associative prediction of visual shape in the hippocampus. Journal of Neuroscience, 38(31), 6888–6899. https://doi.org/10.1523/JNEUROSCI.0163-18.2018
  33. Kontaris, I., Wiggett, A. J., & Downing, P. E. (2009). Dissociation of extrastriate body and biological-motion selective areas by manipulation of visual-motor congruency. Neuropsychologia, 47(14), 3118–3124. https://doi.org/10.1016/j.neuropsychologia.2009.07.012
  34. Leite, F. P., & Ratcliff, R. (2011). What cognitive processes drive response biases? A diffusion model analysis. Judgment and Decision Making, 6(7), 651–687.
  35. Ofen, N., Moran, A., & Sagi, D. (2007). Effects of trial repetition in texture discrimination. Vision Research, 47(8), 1094–1102. https://doi.org/10.1016/j.visres.2007.01.023
  36. Palmer, T. E. (1975). The effects of contextual scenes on the identification of objects. Memory & Cognition, 3(5), 519–526. https://doi.org/10.3758/BF03197524
  37. Posner, M., Snyder, C., & Davidson, B. (1980). Attention and the detection of signals. Journal of Experimental Psychology: General, 109(2), 160–174.
  38. Press, C., & Cook, R. (2015). Beyond action-specific simulation: domain-general motor contributions to perception. Trends in Cognitive Sciences, 19(4), 176–178. https://doi.org/10.1016/j.tics.2015.01.006
  39. Press, C., Kok, P., & Yon, D. (2020a). Learning to perceive and perceiving to learn. Trends in Cognitive Sciences, 24(4), 260–261. https://doi.org/10.1016/j.tics.2020.01.002
  40. Press, C., Kok, P., & Yon, D. (2020b). The perceptual prediction paradox. Trends in Cognitive Sciences, 24(1), 13–24. https://doi.org/10.1016/j.tics.2019.11.003
  41. Press, C., & Yon, D. (2019). Perceptual prediction: Rapidly making sense of a noisy world. Current Biology, 29(15), R751–R753. https://doi.org/10.1016/j.cub.2019.06.054
  42. Prins, N., & Kingdom, F. A. A. (2018). Applying the model-comparison approach to test specific research hypotheses in psychophysical research using the Palamedes toolbox. Frontiers in Psychology, 9, 1250. https://doi.org/10.3389/fpsyg.2018.01250
  43. Puri, A. M., & Wojciulik, E. (2008). Expectation both helps and hinders object perception. Vision Research, 48(4), 589–597. https://doi.org/10.1016/j.visres.2007.11.017
  44. Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922. https://doi.org/10.1162/neco.2008.12-06-420
  45. Rohenkohl, G., Gould, I. C., Pessoa, J., & Nobre, A. C. (2014). Combining spatial and temporal expectations to improve visual perception. Journal of Vision, 14(4), 8. https://doi.org/10.1167/14.4.8
  46. Schneider, D. M., Sundararajan, J., & Mooney, R. (2018). A cortical filter that learns to suppress the acoustic consequences of movement. Nature, 561, 391–395. https://doi.org/10.1038/s41586-018-0520-5
  47. Seki, K., & Fetz, E. E. (2012). Gating of sensory input at spinal and cortical levels during preparation and execution of voluntary movement. Journal of Neuroscience, 32(3), 890–902. https://doi.org/10.1523/JNEUROSCI.4958-11.2012
  48. Shergill, S. S., Bays, P. M., Frith, C. D., & Wolpert, D. M. (2003). Two eyes for an eye: The neuroscience of force escalation. Science, 301(5630), 187. https://doi.org/10.1126/science.1085327
  49. Shergill, S. S., White, T. P., Joyce, D. W., Bays, P. M., Wolpert, D. M., & Frith, C. D. (2013). Modulation of somatosensory processing by action. NeuroImage, 70, 356–362. https://doi.org/10.1016/j.neuroimage.2012.12.043
  50. Shergill, S. S., White, T. P., Joyce, D. W., Bays, P. M., Wolpert, D. M., & Frith, C. D. (2014). Functional magnetic resonance imaging of impaired sensory prediction in schizophrenia. JAMA Psychiatry, 71(1), 28–35. https://doi.org/10.1001/jamapsychiatry.2013.2974
  51. Singla, S., Dempsey, C., Warren, R., Enikolopov, A. G., & Sawtell, N. B. (2017). A cerebellum-like circuit in the auditory system cancels responses to self-generated sounds. Nature Neuroscience, 20(7), 943–950. https://doi.org/10.1038/nn.4567
  52. Stanley, J., & Miall, R. C. (2007). Functional activation in parieto-premotor and visual areas dependent on congruency between hand movement and visual stimuli during motor-visual priming. NeuroImage, 34(1), 290–299. https://doi.org/10.1016/j.neuroimage.2006.08.043
  53. Summerfield, C., & de Lange, F. P. (2014). Expectation in perceptual decision making: neural and computational mechanisms. Nature Reviews Neuroscience, 15(11), 745–756. https://doi.org/10.1038/nrn3838
  54. Todorovic, A., & de Lange, F. P. (2012). Repetition suppression and expectation suppression are dissociable in time in early auditory evoked fields. The Journal of Neuroscience, 32(39), 13389–13395. https://doi.org/10.1523/JNEUROSCI.2227-12.2012
  55. Tukey, J. W. (1962). The future of data analysis. The Annals of Mathematical Statistics, 33(1), 1–67.
  56. Urai, A. E., de Gee, J. W., Tsetsos, K., & Donner, T. H. (2019). Choice history biases subsequent evidence accumulation. eLife, 8, e46331. https://doi.org/10.7554/eLife.46331
  57. Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python. Frontiers in Neuroinformatics, 7, 14. https://doi.org/10.3389/fninf.2013.00014
  58. Williams, S. R., & Chapman, E. C. (2000). Time-course and magnitude of movement-related gating of tactile detection in humans. II. Effect of motor tasks. Journal of Neurophysiology, 84(2), 863–875. https://doi.org/10.1152/jn.2002.88.4.1968
  59. Williams, S. R., Shenasa, J., & Chapman, E. C. (1998). Time course and magnitude of movement-related gating of tactile detection in humans. I. Importance of stimulus location. Journal of Neurophysiology, 79(2), 947–963. https://doi.org/10.1152/jn.00527.2001
  60. Wolpe, N., Ingram, J. N., Tsvetanov, K. A., Geerligs, L., Kievit, R. A., Henson, R. N., Wolpert, D. M., Rowe, J. B., Tyler, L. K., Brayne, C., Bullmore, E., Calder, A., Cusack, R., Dalgleish, T., Duncan, J., Matthews, F. E., Marslen-Wilson, W., Shafto, M. A., Campbell, K., … Rowe, J. B. (2016). Ageing increases reliance on sensorimotor prediction through structural and functional differences in frontostriatal circuits. Nature Communications, 7(1), 13034. https://doi.org/10.1038/ncomms13034
    OpenUrl
  61. ↵
    Wolpe, N., Zhang, J., Nombela, C., Ingram, J. N., Wolpert, D. M., & Rowe, J. B. (2018). Sensory attenuation in Parkinson’s disease is related to disease severity and dopamine dose. Scientific Reports, 8(1), 15643. https://doi.org/10.1038/s41598-018-33678-3
    OpenUrl
  62. ↵
    Wolpert, D. M., & Flanagan, J. R. (2001). Motor prediction. Current Biology, 11(18), R729–R732. https://doi.org/10.1016/S0960-9822(01)00432-8
    OpenUrlCrossRefPubMedWeb of Science
  63. ↵
    Wyart, V., Nobre, A. C., & Summerfield, C. (2012). Dissociable prior influences of signal probability and relevance on visual contrast sensitivity. Proceedings of the National Academy of Sciences, 109(16), 3593–3598. https://doi.org/10.1073/pnas.1204601109
    OpenUrlAbstract/FREE Full Text
  64. ↵
    Yon, D., Gilbert, S. J., De Lange, F. P., & Press, C. (2018). Action sharpens sensory representations of expected outcomes. Nature Communications, 9(1), 4288. https://doi.org/10.1038/s41467-018-06752-7
    OpenUrl
  65. ↵
    Yon, D., & Press, C. (2017). Predicted action consequences are perceptually facilitated before cancellation. Journal of Experimental Psychology: Human Perception and Performance, 43(6), 1073–1083. https://doi.org/10.1037/xhp0000385
    OpenUrl
  66. ↵
    Yon, D., & Press, C. (2018). Sensory predictions during action support perception of imitative reactions across suprasecond delays. Cognition, 173, 21–27. https://doi.org/doi:10.1016/j.cognition.2017.12.008
    OpenUrl
  67. ↵
    Yon, D., Zainzinger, V., Lange, F. de, Eimer, M., & Press, C. (2019). Action biases perceptual decisions toward expected outcomes. PsyArXiv. https://doi.org/10.31234/OSF.IO/3ZP8N
  68. ↵
    Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308. https://doi.org/10.1016/j.tics.2006.05.002
    OpenUrlCrossRefPubMedWeb of Science
Posted March 26, 2020.
Action enhances predicted touch
Emily R. Thomas, Daniel Yon, Floris P. de Lange, Clare Press
bioRxiv 2020.03.26.007559; doi: https://doi.org/10.1101/2020.03.26.007559
Subject Area: Neuroscience