
Abstract and Figures

It is widely believed that predicted tactile action outcomes are perceptually attenuated. The present experiments determined whether predictive mechanisms always generate attenuation, or instead can enhance perception – as typically observed in sensory cognition domains outside of action. We manipulated probabilistic expectations in a paradigm often used to demonstrate tactile attenuation. Participants produced actions and subsequently rated the intensity of forces on a passive finger. Experiment 1 confirmed previous findings that action outcomes are perceived less intensely than passive stimulation, but demonstrated more intense perception when stimulation of the active finger was removed. Experiments 2 and 3 manipulated prediction explicitly and found that expected touch during action is perceived more intensely than unexpected touch. Computational modelling suggested that expectations increase the gain afforded to expected tactile signals. These findings challenge a central tenet of prominent motor control theories and demonstrate that sensorimotor predictions do not exhibit a qualitatively distinct influence on tactile perception.

Statement of Relevance
Perception of expected action outcomes is thought to be attenuated. Such a mechanism may be adaptive because surprising inputs are more useful (e.g., signalling the need to take new courses of action), and it is thought to explain why we cannot tickle ourselves, as well as unusual aspects of action and awareness in clinical populations. However, theories outside of action propose that predicted events are perceptually facilitated, allowing us to generate largely accurate representations of our noisy sensory world. We do not know whether action predictions really alter perception differently from other predictions, because the two literatures have used different manipulations. Here we perform similar manipulations and demonstrate that action predictions can enhance, rather than attenuate, touch. We thereby demonstrate that action predictions may not have a qualitatively distinct influence on perception, and conclude that theories concerning how predictions influence perception across domains, and clinical theories built upon their assumptions, must be re-examined.
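As an aside, the gain account suggested by the modelling can be illustrated with a minimal toy simulation; this is a sketch under assumed parameter values, not the authors' actual model. The idea is simply that expected tactile signals receive a multiplicative gain before being read out as an intensity rating, so an expected touch is rated as more intense than an unexpected touch of the same physical force.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_intensity(force, expected, gain=1.2, noise_sd=0.1, n_trials=1000):
    """Toy readout: perceived intensity = gain * force + noise when the touch is
    expected, force + noise otherwise. The gain value is an illustrative assumption."""
    g = gain if expected else 1.0
    return g * force + rng.normal(0.0, noise_sd, n_trials)

force = 1.0  # arbitrary physical force (a.u.)
print("mean rating, expected touch:  ", rate_intensity(force, expected=True).mean())
print("mean rating, unexpected touch:", rate_intensity(force, expected=False).mean())
```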
... A second account could be that mental operations have a gating effect (Cromwell et al., 2008), independent of predictive mechanisms, thereby directly affecting the strength of the mental representations of self-generated words. A similar gating mechanism has recently been shown to offer a plausible alternative to forward model accounts of sensory self-attenuation (Thomas et al., 2020). ...
Preprint
Full-text available
Previous studies have shown that self-generated stimuli in auditory, visual, and somatosensory domains are attenuated, producing decreased behavioral and neural responses compared to the same stimuli that are externally generated. Yet, whether such attenuation also occurs for higher-level cognitive functions beyond sensorimotor processing remains unknown. In this study, we assessed whether cognitive functions such as numerosity estimations are subject to attenuation. We designed a task allowing the controlled comparison of numerosity estimations for self (active condition) and externally (passive condition) generated words. Our behavioral results showed a larger underestimation of self- compared to externally-generated words, suggesting that numerosity estimations for self-generated words are attenuated. Moreover, the linear relationship between the reported and actual number of words was stronger for self-generated words, although the ability to track errors about numerosity estimations was similar across conditions. Neuroimaging results revealed that numerosity underestimation involved increased functional connectivity between the right intraparietal sulcus and an extended network (bilateral supplementary motor area, left inferior parietal lobule and left superior temporal gyrus) when estimating the number of self vs. externally generated words. We interpret our results in light of two models of attenuation and discuss their perceptual versus cognitive origins.
... On the one hand, Bayesian models of metacognition suggest that top-down predictions enhance introspection about expected events (11; see Fig. 1a). These accounts share a family resemblance to Bayesian models of perception (9, 12-14), which assume it is adaptive for sensory representations to be weighted towards predicted outcomes, making us more likely to see, hear or feel sensory events that we expect to occur (15-18). Analogously to Bayesian models of perception, Bayesian models of metacognition suggest that top-down predictions 'sharpen' internal representations of expected events, leading to more sensitive metacognition about predicted signals (11, 19). ...
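The precision weighting these accounts appeal to can be written down directly. The sketch below uses purely illustrative numbers, not values from the cited work: a Gaussian prior over the expected stimulus is combined with a noisy sensory sample, and the estimate is pulled toward what is expected in proportion to the relative precisions.

```python
import numpy as np

def posterior_estimate(prior_mean, prior_sd, sensory_sample, sensory_sd):
    """Precision-weighted combination of a Gaussian prior and a Gaussian likelihood."""
    w_prior = prior_sd**-2 / (prior_sd**-2 + sensory_sd**-2)
    return w_prior * prior_mean + (1 - w_prior) * sensory_sample

# Illustrative numbers only: an expected stimulus value of 1.0 and a noisy sample of 0.6.
print(posterior_estimate(prior_mean=1.0, prior_sd=0.5, sensory_sample=0.6, sensory_sd=0.5))
# -> 0.8: the estimate is pulled toward the expected value.
```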
Preprint
Full-text available
Metacognition allows us to explicitly represent the uncertainty in our perceptions and decisions. Recent theories suggest that we use predictive models of our environment to optimise these introspective processes, but extant accounts disagree about the role prediction plays: some accounts suggest that we should have more sensitive subjective insight for predictable events, while others stress that metacognition should be enhanced for surprising prediction errors. Here two experiments compare these accounts. Participants performed actions to generate visual outcomes that could move in expected or unexpected directions. Across both experiments, signal detection analyses revealed enhanced metacognition for unexpected outcomes. A combination of reverse correlation and computational modelling suggested this advantage arose because metacognitive processes are more sensitive to unexpected information. These results are consistent with higher order inference models of introspective awareness and point to a mechanism that may optimise diverse aspects of cognition and behaviour in an unstable world.
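As a generic illustration of how "enhanced metacognition" of this kind can be quantified (a sketch with simulated data, not the authors' analysis pipeline), a type-2 measure such as the area under the type-2 ROC indexes how well confidence ratings track decision accuracy; higher values mean more sensitive introspection.

```python
import numpy as np

rng = np.random.default_rng(1)

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC: P(confidence on a correct trial > confidence on an
    error trial), with ties counted as 0.5 (Mann-Whitney formulation)."""
    conf_correct = confidence[correct == 1]
    conf_error = confidence[correct == 0]
    greater = (conf_correct[:, None] > conf_error[None, :]).sum()
    ties = (conf_correct[:, None] == conf_error[None, :]).sum()
    return (greater + 0.5 * ties) / (conf_correct.size * conf_error.size)

# Simulated example: confidence tracks accuracy more closely for "unexpected" trials.
n = 500
correct = rng.integers(0, 2, n)
conf_expected   = correct * 0.3 + rng.normal(0, 1, n)   # weak accuracy-confidence coupling
conf_unexpected = correct * 0.8 + rng.normal(0, 1, n)   # stronger coupling
print(type2_auroc(correct, conf_expected), type2_auroc(correct, conf_unexpected))
```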
... Such suppression mechanisms are thought to attenuate tactile sensations during action regardless of whether they are predicted effects of action or not and are thought to be mediated by spinal mechanisms (Seki & Fetz, 2012), and comparable mechanisms may similarly attenuate sensory processing in a nonpredictive fashion across modalities in humans and other animals (Crapse & Sommer, 2008). Importantly, recent experiments in touch suggest that when confounds related to sensory suppression are removed, action predictions may influence perception in a qualitatively similar fashion irrespective of sensory modality (Thomas, Yon, de Lange, & Press, 2020). ...
Article
Full-text available
We predict how our actions will influence the world around us. Prevailing models in the action control literature propose that we use these predictions to suppress or “cancel” perception of expected action outcomes, to highlight more informative surprising events. However, contrasting normative Bayesian models in sensory cognition suggest that we are more, not less, likely to perceive what we expect—given that what we expect is more likely to occur. Here we adjudicated between these models by investigating how expectations influence perceptual decisions about action outcomes in a signal detection paradigm. Across three experiments, participants performed one of two manual actions that were sometimes accompanied by brief presentation of expected or unexpected visual outcomes. Contrary to dominant cancellation models but consistent with Bayesian accounts, we found that observers were biased to report the presence of expected action outcomes. There were no effects of expectation on sensitivity. Computational modeling revealed that the action-induced bias reflected a sensory bias in how evidence was accumulated rather than a baseline shift in decision circuits. Expectation effects remained in Experiments 2 and 3 when orthogonal cues indicated which finger was more likely to be probed (i.e. task-relevant). These biases toward perceiving expected action outcomes are suggestive of a mechanism that would enable generation of largely veridical representations of our actions and their consequences in an inherently uncertain sensory world.
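The contrast drawn here between a bias in how sensory evidence is accumulated and a baseline shift in decision circuits can be illustrated with a minimal drift-diffusion simulation; the parameter values below are illustrative assumptions, not the authors' fitted model. An accumulation bias adds a constant to the drift rate, whereas a baseline shift moves the starting point toward one response bound; both inflate "present" reports, and they are distinguished empirically by their different signatures in choices and response times.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_p_present(drift, start, bound=1.0, dt=0.005, noise=1.0, n_trials=500):
    """Simulate diffusion to +/-bound and return the proportion of 'present'
    (upper-bound) responses. All parameter values are illustrative assumptions."""
    hits = 0
    for _ in range(n_trials):
        x = start
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        hits += x >= bound
    return hits / n_trials

# Same stimulus evidence (drift = 0), two ways of implementing an expectation bias:
print("accumulation (drift) bias:      ", ddm_p_present(drift=0.5, start=0.0))
print("baseline (starting-point) shift:", ddm_p_present(drift=0.0, start=0.3))
```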
... Such suppression mechanisms are thought to attenuate tactile sensations during action regardless of whether they are predicted effects of action or not and are thought to be mediated by spinal mechanisms (Seki & Fetz, 2012), and comparable mechanisms may similarly attenuate sensory processing in a non-predictive fashion across modalities in humans and other animals (Crapse & Sommer, 2008). Importantly, recent experiments in touch suggest that when confounds related to sensory suppression are removed, action predictions may influence perception in a qualitatively similar fashion irrespective of sensory modality (Thomas, Yon, de Lange & Press, 2020). ...
Preprint
Full-text available
We predict how our actions will influence the world around us. Prevailing models of action control propose that we use these predictions to suppress or ‘cancel’ perception of expected action outcomes. However, contrasting normative Bayesian models in sensory cognition suggest that top-down predictions bias observers toward perceiving what they expect. Here we adjudicated between these models by investigating how expectations influence perceptual decisions about briefly presented action outcomes. Contrary to dominant cancellation models, we found that observers’ perceptual decisions are biased toward the presence of outcomes congruent with their actions. Computational modelling revealed this action-induced bias reflected a bias in how sensory evidence was accumulated, rather than a baseline shift in decision circuits. In combination, these results reveal a gain control mechanism that can explain how we generate largely veridical representations of our actions and their consequences in an inherently uncertain sensory world.
Article
Full-text available
Previous studies have shown that self-generated stimuli in auditory, visual, and somatosensory domains are attenuated, producing decreased behavioral and neural responses compared to the same stimuli that are externally generated. Yet, whether such attenuation also occurs for higher-level cognitive functions beyond sensorimotor processing remains unknown. In this study, we assessed whether cognitive functions such as numerosity estimations are subject to attenuation in 56 healthy participants (32 women). We designed a task allowing the controlled comparison of numerosity estimations for self-generated (active condition) and externally generated (passive condition) words. Our behavioral results showed a larger underestimation of self- compared to externally-generated words, suggesting that numerosity estimations for self-generated words are attenuated. Moreover, the linear relationship between the reported and actual number of words was stronger for self-generated words, although the ability to track errors about numerosity estimations was similar across conditions. Neuroimaging results revealed that numerosity underestimation involved increased functional connectivity between the right intraparietal sulcus and an extended network (bilateral supplementary motor area, left inferior parietal lobule and left superior temporal gyrus) when estimating the number of self- vs. externally generated words. We interpret our results in light of two models of attenuation and discuss their perceptual versus cognitive origins. SIGNIFICANCE STATEMENT: We perceive sensory events as less intense when they are self-generated compared to externally generated. This phenomenon, called attenuation, enables us to distinguish sensory events of self and external origin. Here, we designed a novel fMRI paradigm to assess whether cognitive processes such as numerosity estimations are also subject to attenuation. When asking participants to estimate the number of words they had generated or passively heard, we found a larger underestimation in the former case, providing behavioral evidence of attenuation. Attenuation was associated with increased functional connectivity of the intraparietal sulcus, a region involved in numerosity processing. Together, our results indicate that attenuation of self-generated stimuli is not limited to sensory consequences but also impacts cognitive processes such as numerosity estimations.
Preprint
Full-text available
In recent decades, research on somatosensory perception has led to two important observations. First, self-generated touches that are predicted by voluntary movements become attenuated compared to externally generated touches of the same intensity (attenuation). Second, externally generated touches feel weaker and are more difficult to detect during movement compared to rest (gating). Researchers today often consider gating and attenuation to be the same suppression process; however, this assumption is unwarranted because, despite more than forty years of research, no study has combined them in a single paradigm. We quantified how people perceive self-generated and externally generated touches during movement and rest. We demonstrate that whereas voluntary movement gates the precision of both self-generated and externally generated touch, the amplitude of self-generated touch is selectively attenuated compared to externally generated touch. We further show that attenuation and gating neither interact nor correlate, and we conclude that they represent distinct perceptual phenomena.
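Attenuation and gating in paradigms like this one are typically quantified by fitting psychometric functions to force-discrimination judgements and comparing points of subjective equality (PSE) across conditions. The sketch below uses simulated data and assumed parameter values, not the authors' code: it fits a cumulative Gaussian to "comparison felt stronger" responses to recover the PSE.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(3)

def psychometric(x, pse, slope):
    """Cumulative Gaussian: P('comparison felt stronger') as a function of comparison force."""
    return norm.cdf(x, loc=pse, scale=slope)

# Simulated observer whose 2 N test touch feels like ~1.5 N (attenuation of ~0.5 N).
comparison_forces = np.linspace(0.5, 3.5, 7)          # comparison forces in N
true_pse, true_slope, n_reps = 1.5, 0.4, 40
p_stronger = psychometric(comparison_forces, true_pse, true_slope)
responses = rng.binomial(n_reps, p_stronger) / n_reps  # observed proportions per force level

(pse_hat, slope_hat), _ = curve_fit(psychometric, comparison_forces, responses, p0=[2.0, 0.5])
print(f"estimated PSE: {pse_hat:.2f} N, estimated slope (SD): {slope_hat:.2f} N")
```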
Article
Full-text available
Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions. Letters are more easily identified when embedded in a word. Here, the authors show that word contexts can enhance letter information in early visual cortex, suggesting that the advantage offered by words occurs already during early perceptual processing.
Article
Full-text available
Recent work suggests that a key function of the hippocampus is to predict the future. This is thought to depend on its ability to bind inputs over time and space and to retrieve upcoming or missing inputs based on partial cues. In line with this, previous research has revealed prediction-related signals in the hippocampus for complex visual objects, such as fractals and abstract shapes. Implicit in such accounts is that these computations in the hippocampus reflect domain-general processes that apply across different types and modalities of stimuli. An alternative is that the hippocampus plays a more domain-specific role in predictive processing, with the type of stimuli being predicted determining its involvement. To investigate this, we compared hippocampal responses to auditory cues predicting abstract shapes (Experiment 1) versus oriented gratings (Experiment 2). We measured brain activity in male and female human participants using high-resolution fMRI, in combination with inverted encoding models to reconstruct shape and orientation information. Our results revealed that expectations about shape and orientation evoked distinct representations in the hippocampus. For complex shapes, the hippocampus represented which shape was expected, potentially serving as a source of top–down predictions. In contrast, for simple gratings, the hippocampus represented only unexpected orientations, more reminiscent of a prediction error. We discuss several potential explanations for this content-based dissociation in hippocampal function, concluding that the computational role of the hippocampus in predictive processing may depend on the nature and complexity of stimuli.
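For readers unfamiliar with the method, an inverted encoding model can be sketched in its simplest form as below; the basis functions, dimensions and noise levels are assumptions made for illustration, and this is not the authors' implementation. Weights mapping hypothetical feature channels to voxels are estimated from training data, then inverted to reconstruct channel responses from a held-out pattern.

```python
import numpy as np

rng = np.random.default_rng(4)

n_voxels, n_channels, n_trials = 50, 8, 160
centers = np.arange(0, 180, 180 / n_channels)            # channel centres in degrees

def channel_responses(orientations):
    """Half-wave-rectified, raised cosine basis over orientation (0-180 deg);
    the angle is doubled so the space wraps at 180 deg."""
    d = np.deg2rad(orientations[:, None] - centers[None, :]) * 2
    return np.clip(np.cos(d), 0, None) ** 5

# Simulated training data: voxel patterns are a noisy linear mix of channel responses.
W_true = rng.normal(size=(n_channels, n_voxels))
train_ori = rng.uniform(0, 180, n_trials)
C_train = channel_responses(train_ori)                    # trials x channels
B_train = C_train @ W_true + rng.normal(scale=0.5, size=(n_trials, n_voxels))

# Step 1: estimate channel-to-voxel weights by least squares (B = C @ W).
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on a held-out pattern to reconstruct channel responses.
B_test = channel_responses(np.array([45.0])) @ W_true + rng.normal(scale=0.5, size=(1, n_voxels))
C_test_hat = B_test @ np.linalg.pinv(W_hat)
print("reconstructed channel peak (deg):", centers[int(np.argmax(C_test_hat))])
```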
Article
Full-text available
Since the early 1970s, numerous behavioral studies have shown that self-generated touch feels less intense than the same touch applied externally. Computational motor control theories have suggested that cerebellar internal models predict the somatosensory consequences of our movements and that these predictions attenuate the perception of the actual touch. Despite this influential theoretical framework, little is known about the neural basis of this predictive attenuation. This is due to the limited number of neuroimaging studies, the presence of conflicting results about the role and the location of cerebellar activity, and the lack of behavioral measures accompanying the neural findings. Here, we combined psychophysics with functional magnetic resonance imaging to detect the neural processes underlying somatosensory attenuation in male and female healthy human participants. Activity in bilateral secondary somatosensory areas was attenuated when the touch was presented during a self-generated movement (self-generated touch) compared with when it was presented in the absence of movement (external touch). An additional attenuation effect was observed in the cerebellum that is ipsilateral to the passive limb receiving the touch. Importantly, we further found that the degree of functional connectivity between the ipsilateral cerebellum and the contralateral primary and bilateral secondary somatosensory areas was linearly and positively related to the degree of behaviorally assessed attenuation; that is, the more participants perceptually attenuated their self-generated touches, the stronger this corticocerebellar coupling. Collectively, these results suggest that the ipsilateral cerebellum is fundamental in predicting self-generated touch and that this structure implements somatosensory attenuation via its functional connectivity with somatosensory areas. SIGNIFICANCE STATEMENT: When we touch our hand with the other, the resulting sensation feels less intense than when another person or a machine touches our hand with the same intensity. Early computational motor control theories have proposed that the cerebellum predicts and cancels the sensory consequences of our movements; however, the neural correlates of this cancellation remain unknown. By means of functional magnetic resonance imaging, we show that the more participants attenuate the perception of their self-generated touch, the stronger the functional connectivity between the cerebellum and the somatosensory cortical areas. This provides conclusive evidence about the role of the cerebellum in predicting the sensory feedback of our movements and in attenuating the associated percepts via its connections to early somatosensory areas.
Article
Full-text available
Self-generated touch feels less intense and less ticklish than identical externally generated touch. This somatosensory attenuation occurs because the brain predicts the tactile consequences of our self-generated movements. To produce attenuation, the tactile predictions need to be time-locked to the movement, but how the brain maintains this temporal tuning remains unknown. Using a bimanual self-touch paradigm, we demonstrate that people can rapidly unlearn to attenuate touch immediately after their movement and learn to attenuate delayed touch instead, after repeated exposure to a systematic delay between the movement and the resulting touch. The magnitudes of the unlearning and learning effects are correlated and dependent on the number of trials that participants have been exposed to. We further show that delayed touches feel less ticklish and non-delayed touches more ticklish after exposure to the systematic delay. These findings demonstrate that the attenuation of self-generated touch is adaptive.
Article
Full-text available
Prior knowledge shapes what we perceive. A new brain stimulation study suggests that this perceptual shaping is achieved by changes in sensory brain regions before the input arrives, with common mechanisms operating across different sensory areas.
Article
We thank Corlett for his thought-provoking response to our recent article. Corlett shares our concerns about inconsistencies in theories of perceptual prediction and highlights some reminiscent debates in learning theory. He also proposes reasons why perceptual prediction mechanisms may operate differently in the domain of action than in other domains of sensory cognition. Here, we highlight how we share the conviction that dialogue across disciplines will inform both models of perception and learning, but clarify that important distinctions between the explananda mean the theoretical puzzles are not reducible to each other. We also question whether action prediction mechanisms do indeed operate differently.
Article
From the noisy information bombarding our senses, our brains must construct percepts that are veridical (reflecting the true state of the world) and informative (conveying what we did not already know). Influential theories suggest that both challenges are met through mechanisms that use expectations about the likely state of the world to shape perception. However, current models explaining how expectations render perception either veridical or informative are mutually incompatible. While the former propose that perceptual experiences are dominated by events we expect, the latter propose that perception of expected events is suppressed. To solve this paradox, we propose a two-process model in which probabilistic knowledge initially biases perception towards what is likely and subsequently upweights events that are particularly surprising.
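A crude toy version of such a two-process readout might look like the sketch below; the thresholding rule and all parameter values are assumptions made for illustration, not the authors' formal proposal. Stage 1 pulls the estimate toward what is expected; stage 2 upweights the input when it is sufficiently surprising.

```python
def two_process_estimate(sample, prior_mean, prior_sd, sensory_sd, surprise_threshold=2.0):
    """Stage 1: precision-weighted pull toward the expected value.
    Stage 2: if the input is sufficiently surprising, upweight it (here, simply report it).
    The thresholding rule and parameters are illustrative assumptions."""
    w_prior = prior_sd**-2 / (prior_sd**-2 + sensory_sd**-2)
    biased = w_prior * prior_mean + (1 - w_prior) * sample
    surprise = abs(sample - prior_mean) / prior_sd          # prediction error in prior SDs
    return sample if surprise > surprise_threshold else biased

print(two_process_estimate(1.1, prior_mean=1.0, prior_sd=0.5, sensory_sd=0.5))  # mildly unexpected -> biased toward prior
print(two_process_estimate(2.5, prior_mean=1.0, prior_sd=0.5, sensory_sd=0.5))  # very surprising -> upweighted
```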
Article
Prediction allows humans and other animals to prepare for future interactions with their environment. This is important in our dynamically changing world that requires fast and accurate reactions to external events. Knowing when and where an event is likely to occur allows us to plan eye, hand, and body movements that are suitable for the circumstances. Predicting the sensory consequences of such movements helps to differentiate between self-produced and externally generated movements. In this review, we provide a selective overview of experimental studies on predictive mechanisms in human vision for action. We present classic paradigms and novel approaches investigating mechanisms that underlie the prediction of events guiding eye and hand movements.
Article
The human ability to anticipate the consequences that result from action is an essential building block for cognitive, emotional, and social functioning. A dominant view is that this faculty is based on motor predictions, in which a forward model uses a copy of the motor command to predict imminent sensory action-consequences. Although this account was originally conceived to explain the processing of action-outcomes that are tightly coupled to bodily movements, it has been increasingly extrapolated to effects beyond the body. Here, we critically evaluate this generalization and argue that, although there is ample evidence for the role of predictions in the processing of environment-related action-outcomes, there is hitherto little reason to assume that these predictions result from motor-based forward models.
Article
Expectations about a visual event shape the way it is perceived [1, 2, 3, 4]. For example, expectations induced by valid cues signaling aspects of a visual target can improve judgments about that target, relative to invalid cues [5, 6]. Such expectation effects are thought to arise via pre-activation of a template in neural populations that represent the target [7, 8] in early sensory areas [9] or in higher-level regions. For example, category cues (“face” or “house”) modulate pre-target fMRI activity in associated category-selective brain regions [10, 11]. Further, a relationship is sometimes found between the strength of template activity and success in perceptual tasks on the target [12, 13, 14]. However, causal evidence linking pre-target activity with expectation effects is lacking. Here we provide such evidence, using fMRI-guided online transcranial magnetic stimulation (TMS). In two experiments, human volunteers made binary judgments about images of either a body or a scene. Before each target image, a verbal cue validly or invalidly indicated a property of the image, thus creating perceptual expectations about it. To disrupt these expectations, we stimulated category-selective visual brain regions (extrastriate body area, EBA; occipital place area, OPA) during the presentation of the cue. Stimulation ended before the target images appeared. We found a double dissociation: TMS to EBA during the cue period removed validity effects only in the body task, whereas stimulating OPA removed validity effects only in the scene task. Perceptual expectations are expressed by the selective activation of relevant populations within brain regions that encode the target.