Article · Literature Review

Abstract

From the noisy information bombarding our senses, our brains must construct percepts that are veridical - reflecting the true state of the world - and informative - conveying what we did not already know. Influential theories suggest that both challenges are met through mechanisms that use expectations about the likely state of the world to shape perception. However, current models explaining how expectations render perception either veridical or informative are mutually incompatible. While the former propose that perceptual experiences are dominated by events we expect, the latter propose that perception of expected events is suppressed. To solve this paradox we propose a two-process model in which probabilistic knowledge initially biases perception towards what is likely and subsequently upweights events that are particularly surprising.
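The proposed two-process dynamic can be illustrated with a toy simulation. This is our sketch under simple assumptions, not the authors' implementation, and all numbers are arbitrary: an early process weights candidate percepts by the prior, and a later process upweights percepts in proportion to how surprising they are.

```python
import numpy as np

# Toy sketch of the two-process idea (illustrative assumptions only).
# Two candidate percepts: expected vs. unexpected.
prior = np.array([0.8, 0.2])            # expectation: percept 0 is likely

# Ambiguous sensory evidence that slightly favours the unexpected percept.
likelihood = np.array([0.4, 0.6])

# Process 1 (early): Bayesian weighting biases perception towards the prior.
early = prior * likelihood
early /= early.sum()
print("early weights:", early)          # expected percept dominates (~0.73)

# Process 2 (late): surprising events carry more information (-log prior),
# so a surprise gain upweights the unexpected, informative percept.
late = early * -np.log(prior)
late /= late.sum()
print("late weights:", late)            # unexpected percept now dominates (~0.73)
```

In this toy example the expected interpretation wins the early read-out, while the surprise gain shifts the late read-out towards the unexpected event, mirroring the bias-then-upweight sequence described in the abstract.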


... In the majority of studies testing for ES, researchers have implicitly or explicitly treated suppression due to fulfilled expectations and surprise-related response enhancements as two sides of the same coin. However, ES and surprise-related responses are treated as distinct effects within a number of theoretical frameworks (e.g., Kovács and Vogels, 2014; Hsu et al., 2015, 2018; Schomaker and Meeter, 2015; Grotheer and Kovács, 2016; Press et al., 2020). ...
... Effects of attention may confound measures of ES if attention is rapidly directed to a stimulus that violates an observer's expectations, which, in turn, enhances stimulus-selective neural responses (Kok et al., 2016; Schomaker and Meeter, 2015; Press et al., 2020). In support of this notion, Richter et al. (2019) reported [surprising > expected] BOLD signal differences when participants attended to the critical stimuli, but not when participants were required to ignore these stimuli and concurrently perform a difficult task (for similar findings see Larsson and Smith, 2012). ...
... Here, we note that the model proposed by Press et al. (2020) defines surprisal as a continuous variable that is quantified as Kullback-Leibler divergence (KLD; see also Itti and Baldi, 2009), in contrast to the categorical neutral/surprising conditions described in this review. It is unclear whether surprise responses are elicited once KLD reaches a fixed threshold at which input becomes sufficiently unexpected (where this threshold may vary across contexts and across individuals), or whether surprise responses become gradually more likely to occur as KLD increases. ...
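For concreteness, Bayesian surprise in this sense is the KL divergence between beliefs held after versus before an observation. A minimal sketch with made-up categorical beliefs, not tied to any specific experiment:

```python
import numpy as np

def bayesian_surprise(posterior, prior):
    """KL(posterior || prior) in nats: how much an observation moved beliefs."""
    posterior, prior = np.asarray(posterior), np.asarray(prior)
    return float(np.sum(posterior * np.log(posterior / prior)))

prior = np.array([0.7, 0.2, 0.1])      # beliefs about the stimulus before onset
posterior = np.array([0.2, 0.3, 0.5])  # beliefs after an unexpected stimulus

print(bayesian_surprise(posterior, prior))  # ~0.68 nats; larger = more surprising
```

The open question raised in the excerpt then becomes whether neural surprise responses appear only above some value of this quantity or scale smoothly with it.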
Article
Reports of expectation suppression have shaped the development of influential predictive coding-based theories of visual perception. However, recent work has highlighted confounding factors that may mimic or inflate expectation suppression effects. In this review, we describe four confounds that are prevalent across experiments that tested for expectation suppression: effects of surprise, attention, stimulus repetition and adaptation, and stimulus novelty. With these confounds in mind, we then critically review the evidence for expectation suppression across probabilistic cueing, statistical learning, oddball, action-outcome learning and apparent motion designs. We found evidence for expectation suppression within a specific subset of statistical learning designs that involved weeks of sequence learning prior to neural activity measurement. Across other experimental contexts, in which stimulus appearance probabilities were learned within one or two testing sessions, there was inconsistent evidence for genuine expectation suppression. We discuss how an absence of expectation suppression could inform models of predictive processing, repetition suppression and perceptual decision-making. We also provide suggestions for designing experiments that may better test for expectation suppression in future work.
... This novel finding supports our Survival Hypothesis, positing that prediction errors differentially influence the encoding of unconsciously-presented fearful and neutral faces. Our findings shed light on the seemingly paradoxical theory behind how the brain constructs conscious visual percepts (Press et al., 2020). On the one hand, it is important that our perception is veridical. ...
... To reconcile this paradox, our findings support an opposing process model of expectation and conscious perception (Press et al., 2020). This model posits that neural representations of expected and unexpected stimuli are enhanced at different times throughout perceptual processing depending on the informative content. ...
... It is important to note that, although the present study was motivated by previous research on affective stimuli and perceptual decision-making (Otten et al., 2017), we cannot ascertain the degree to which our findings reflect affective vs low-level visual processing due to the inherent visual differences between neutral and fearful faces (Hedger et al., 2015;Webb and Hibbard, 2020). As explained above, however, our findings present novel evidence for a two-process model of perception in which more surprising stimuli -whether due to their visual salience or their affective content -are prioritised for conscious access (Press et al., 2020). ...
Article
Full-text available
The folk psychological notion that “we see what we expect to see” is supported by evidence that we become consciously aware of visual stimuli that match our prior expectations more quickly than stimuli that violate our expectations. Similarly, “we see what we want to see,” such that more biologically-relevant stimuli are also prioritised for conscious perception. How, then, is perception shaped by biologically-relevant stimuli that we did not expect? Here, we conducted two experiments using breaking continuous flash suppression (bCFS) to investigate how prior expectations modulated response times to neutral and fearful faces. In both experiments, we found that prior expectations for neutral faces hastened responses, whereas the opposite was true for fearful faces. This interaction between emotional expression and prior expectations was driven predominantly by participants with higher trait anxiety. Electroencephalography (EEG) data collected in Experiment 2 revealed an interaction evident in the earliest stages of sensory encoding, suggesting prediction errors expedite sensory encoding of fearful faces. These findings support a survival hypothesis, where biologically-relevant fearful stimuli are prioritised for conscious access even more so when unexpected, especially for people with high trait anxiety.
... Yet, reports of tactile suppression shortly before and during passive movements indicate that peripheral, reafferent signals may also mask the detection of tactile probes (20), although suppression before active movements may precede the effects observed in passive movements (4,21). The possibility of backward masking mechanisms as well as the fact that the external tactile probes cannot be predicted by an efference copy of the motor command have led to recent claims that the reduced sensitivity on a moving limb is rather caused by general cancellation policies, and tactile suppression of externally generated stimuli is independent of sensorimotor predictions (22,23). Here, we aim to resolve this debate by investigating whether tactile suppression stems from such precise sensorimotor predictions or whether it originates from an unspecific mechanism that leads to a blanket reduction in tactile sensitivity and thus does not distinguish between predicted and unpredicted sensory feedback. ...
... If tactile suppression stems from sensorimotor predictions, we expect stronger suppression (increased threshold_diff) in congruent than in incongruent conditions, as the probe frequencies would match the movement-related predictions. If, on the other hand, tactile suppression stems from an unspecific mechanism, then we should observe similar suppression across all movement conditions (comparable threshold_diff) (22). ...
... Tactile suppression took place in all movement conditions and was generally greater in congruent compared to incongruent conditions. Contemporary accounts claim that it stems from an unspecific mechanism, because the probe stimuli used to measure suppression cannot be predicted by an efference copy of the motor command itself (22,23), or because backward masking may generally obscure any sensations on the moving limb (1). Here, we set out to address the debate on the origin of tactile suppression and show that tactile suppression of externally generated sensations originates from specific sensorimotor predictions. ...
Article
Full-text available
Significance Tactile sensations on a moving hand are perceived weaker than when presented on the same but stationary hand. There is an ongoing debate about whether this weaker perception is based on sensorimotor predictions or is due to a blanket reduction in sensitivity. Here, we show greater suppression of sensations matching predicted sensory feedback. This reinforces the idea of precise estimations of future body sensory states suppressing the predicted sensory feedback. Our results shine light on the mechanisms of human sensorimotor control and are relevant for understanding clinical phenomena related to predictive processes.
... between predictable and unpredictable sensory states, but instead cancels out any tactile probes arising on the moving limb. 5,7 Despite evidence that a movement-induced reduction in tactile sensitivity is accompanied by a downregulation of neural activity in the primary somatosensory and motor cortices even before movement onset, 8 whether and how this phenomenon is related to prediction and expresses itself at the perceptual level remains unclear. ...
... If, on the other hand, tactile suppression stems from a general gating mechanism, then we should observe similar suppression across all conditions (comparable threshold_diff). 5,7 First, to determine whether tactile perception was affected during the movement as compared to rest, we performed t-tests against zero on threshold_diff and precision_diff. ...
... Our results are incompatible with the idea that suppression of external probe stimuli is due to an unspecific gating mechanism that is caused by peripheral reafferences, independent of sensorimotor predictions. 5,7 Although a general gating mechanism could explain the observed suppression in all four movement conditions compared to rest, it cannot account for differences between congruent and incongruent conditions. This congruency effect cannot be attributed to differences in the movement either, 16,31,32 as kinematic behaviour was similar between the conditions. ...
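The analysis logic described in these excerpts can be sketched as follows on simulated data; the variable name threshold_diff follows the excerpts, while everything else (sample size, effect sizes) is assumed purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24  # hypothetical sample size

# threshold_diff: detection threshold during movement minus at rest, per
# participant; positive values indicate tactile suppression (simulated here).
congruent = rng.normal(0.8, 0.5, n)    # probe matches predicted feedback
incongruent = rng.normal(0.5, 0.5, n)  # probe mismatches predicted feedback

# Any suppression at all? t-tests of threshold_diff against zero.
print(stats.ttest_1samp(congruent, 0.0))
print(stats.ttest_1samp(incongruent, 0.0))

# Prediction-specific suppression: congruent vs. incongruent, within subjects.
print(stats.ttest_rel(congruent, incongruent))
```

A reliable congruent-versus-incongruent difference is what separates the prediction-specific account from blanket gating, which predicts only the tests against zero to succeed.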
Preprint
Full-text available
The ability to sample sensory information with our hands is crucial for smooth and efficient interactions with the world. Despite this important role of touch, tactile sensations on a moving hand are perceived weaker than when presented on the same but stationary hand. ¹⁻³ This phenomenon of tactile suppression has been explained by predictive mechanisms, such as forward models, that estimate future sensory states of the body on the basis of the motor command and suppress the associated predicted sensory feedback. ⁴ The origins of tactile suppression have sparked a lot of debate, with contemporary accounts claiming that suppression is independent of predictive mechanisms and is instead akin to unspecific gating. ⁵ Here, we target this debate and provide evidence for sensation-specific tactile suppression due to sensorimotor predictions. Participants stroked with their finger over textured surfaces that caused predictable vibrotactile feedback signals on that finger. Shortly before touching the texture, we applied external vibrotactile probes on the moving finger that either matched or mismatched the frequency generated by the stroking movement. We found stronger suppression of the probes that matched the predicted sensory feedback. These results show that tactile suppression is not limited to unspecific gating but is specifically tuned to the predicted sensory states of a movement.
... For example, it could be that the prevalence of congruent percepts built slowly with accumulating evidence of movement in a given direction. Alternatively, in line with a recent theoretical model [52], perception might have been initially biased towards prediction-consistent percepts and then subsequently shifted to reflect more surprising (informative) events. Such effects have been reported for timescales of hundreds of milliseconds, but it has been suggested that they may generalise to longer timescales [52]. To examine the possibility of such non-linear associations between the progress of time and the predominance of particular percepts, a generalised additive model [53,54] (GAM) was used to fit the data. ...
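As an illustration of that final analysis step, a logistic GAM with a smooth term for time can capture such non-linear changes in percept predominance. This sketch uses the pygam package on simulated reports; the original analysis followed [53,54] and need not have used this library.

```python
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(1)
t = rng.uniform(0, 60, 2000)  # time within trial (s), simulated

# Simulated binary percept reports: the prediction-consistent percept (1)
# predominates early, then yields to the surprising percept (0) over time.
y = rng.binomial(1, 1 / (1 + np.exp(-(1.5 - 0.05 * t))))

# A spline term s(0) lets P(congruent percept) vary non-linearly with time.
gam = LogisticGAM(s(0)).fit(t.reshape(-1, 1), y)

grid = np.linspace(0, 60, 5).reshape(-1, 1)
print(gam.predict_proba(grid))  # fitted time course of percept predominance
```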
Article
When two different images are presented separately to each eye, one experiences smooth transitions between them, a phenomenon called binocular rivalry. Previous studies have shown that exposure to signals from other senses can enhance the access of stimulation-congruent images to conscious perception. However, despite our ability to infer perceptual consequences from bodily movements, evidence that action can have an analogous influence on visual awareness is scarce and mainly limited to hand movements. Here, we investigated whether one's direction of locomotion affects perceptual access to optic flow patterns during binocular rivalry. Participants walked forwards and backwards on a treadmill while viewing highly-realistic visualisations of self-motion in a virtual environment. We hypothesised that visualisations congruent with walking direction would predominate in visual awareness over incongruent ones, and that this effect would increase with the precision of one's active proprioception. These predictions were not confirmed: optic flow consistent with forward locomotion was prioritised in visual awareness independently of walking direction and proprioceptive abilities. Our findings suggest the limited role of kinaesthetic-proprioceptive information in disambiguating visually perceived direction of self-motion and indicate that vision might be tuned to the (expanding) optic flow patterns prevalent in everyday life.
... Attempts to assess how expectations influence our perception show that we are more likely to report perceiving an expected than an unexpected stimulus [2-6]. However, although the facilitatory effects of expectation on perceptual processing have been found in the wider sensory literature, they usually conflict with work from the action domain 7. ...
... Studies have reported either attenuation, enhancement, or no effects in detection or discrimination tasks with either loud (L) or near-threshold (NT) sounds, by obtaining various measures that are used as a proxy of either bias or sensitivity (Point of Subjective Equality, PSE; Just Noticeable Difference, JND; dʹ, d-prime) 7,41. Yet, we reason that the findings obtained from the previous self-generation studies cannot provide solid conclusions on this matter, due to the use of a small range of intensities (either supra-threshold only [52-54], near-threshold only 46, or only one of each 65). ...
Article
Full-text available
The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection and discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived as softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived as louder than externally-generated ones. Our findings provide empirical support for recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness rather than by changes in detection and discrimination sensitivity.
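For reference, the bias and sensitivity measures named in these excerpts (PSE, JND) are read off a fitted psychometric function. Here is a minimal sketch with invented 2-AFC loudness-discrimination data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented 2-AFC data: comparison-minus-standard intensity (dB) and the
# proportion of "comparison louder" responses at each level.
x = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)
p = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])

def psychometric(x, pse, sigma):
    # Cumulative Gaussian: P("louder") as a function of intensity difference.
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, x, p, p0=(0.0, 2.0))

# PSE: point of subjective equality (bias); a shift means sounds feel
# louder/softer without any change in discrimination ability.
# JND: half the 25-75% spread of the fitted curve (sensitivity).
jnd = norm.ppf(0.75) * sigma
print(f"PSE = {pse:.2f} dB, JND = {jnd:.2f} dB")
```

The abstract's dissociation maps directly onto these quantities: self-generation shifted the PSE (bias) while leaving the JND and detection thresholds (sensitivity) unchanged.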
... The general idea of closed-loop information processing also gained traction in theories of perception. Think of reentrant processing (Di Lollo et al., 2000;Pascual-Leone and Walsh, 2001), predictive coding (Friston and Kiebel, 2009;Clark, 2013;Press et al., 2020), or the sensorimotor hypothesis of vision (O'Regan and Noë, 2001). All of these theories share the central tenet that a past state of the cognitive system (e.g., a sensory activation, a memory trace, a motor command) is compared with a current state. ...
... For example, while repeated visual input sometimes facilitates selection as in priming of visual attention (cf. Maljkovic and Nakayama, 1994; Kristjánsson and Campana, 2010; Valuch et al., 2017), humans also show the opposite tendency in other situations - that is, a preference for the selection of novel input that deviates the most from what is expected or what has been seen (Horstmann, 2002, 2005; Itti and Baldi, 2009; for a discussion of the principles in action control, see also Feldman and Friston, 2010; Jiang et al., 2013; Press et al., 2020). Whether repeated or novel information is selected for processing could, in many cases, depend on the requirements of the task at hand (cf. ...
Article
Full-text available
In the current review, we argue that experimental results usually interpreted as evidence for cognitive resource limitations could also reflect functional necessities of human information processing. First, we point out that selective processing of only specific features, objects, or locations at each moment in time allows humans to monitor the success and failure of their own overt actions and covert cognitive procedures. We then proceed to show how certain instances of selectivity are at odds with commonly assumed resource limitations. Next, we discuss examples of seemingly automatic, resource-free processing that challenge the resource view but can be easily understood from the functional perspective of monitoring cognitive procedures. Finally, we suggest that neurophysiological data supporting resource limitations might actually reflect mechanisms of how procedural control is implemented in the brain.
... Note that we are not suggesting that this is an all-or-nothing switch; it is likely that the hippocampus always represents both predictions (through pattern completion in CA3 21,24,65,66) and errors (potentially through mismatch comparison in CA1 34,36,67), but that the balance between the two depends on contextual factors such as novelty and unexpected uncertainty. An analogous switch between prediction- vs. surprise-dominated representations has recently been proposed in the realm of perception, albeit on a sub-second timescale 68. ...
... In sum, the current findings demonstrate a role for the hippocampus in both acquiring and exploiting predictive associations, bridging the fields of learning and perception. These fields have separately made progress in investigating the roles of prediction, novelty and uncertainty 1,52 , but have until now largely remained segregated literatures, despite great promise to inform one another 68,94 . Ultimately, weighting predictions and errors according to their reliability is crucial to optimally perceive and engage with our environment, and the current findings suggest that the hippocampus plays a crucial role in this process. ...
Article
Full-text available
We constantly exploit the statistical regularities in our environment to help guide our perception. The hippocampus has been suggested to play a pivotal role both in learning environmental statistics and in exploiting them to generate perceptual predictions. However, it is unclear how the hippocampus balances encoding new predictive associations with the retrieval of existing ones. Here, we present the results of two high-resolution human fMRI studies (N = 24 for both experiments) directly investigating this. Participants were exposed to auditory cues that predicted the identity of an upcoming visual shape (with 75% validity). Using multivoxel decoding analysis, we find that the hippocampus initially preferentially represents unexpected shapes (i.e., those that violate the cue regularities), but later switches to representing the cue-predicted shape regardless of which was actually presented. These findings demonstrate that the hippocampus is involved in both acquiring and exploiting predictive associations, and is dominated by either errors or predictions depending on whether learning is ongoing or complete. Successfully exploiting the regularities in our environment requires balancing the encoding of new information with the retrieval of stored associations. Here, the authors show that the hippocampus switches from representing novel information (errors) to representing predictions as learning proceeds.
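The multivoxel decoding step in this abstract follows the standard cross-validated classification recipe. A minimal sketch on simulated voxel patterns; none of this is the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 200, 50

# Simulated hippocampal voxel patterns and shape labels (0/1) per trial.
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)
X[y == 1, :5] += 0.5  # weak shape information in a few voxels

# Cross-validated decoding: above-chance accuracy implies the region carries
# information about the presented (or, early in learning, unexpected) shape.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())
```

Comparing decoding accuracy for presented versus cue-predicted labels across learning is what lets the authors infer whether errors or predictions dominate the hippocampal representation.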
... This process entails both the exploitation of existing knowledge, to efficiently process predicted events, and the updating of the knowledge itself in case of encountering unexpected situations. While there is accumulating evidence for predictive processing of sensory experiences (Friston, 2005, 2008; Keller & Mrsic-Flogel, 2018; Press et al., 2020; Rao & Ballard, 1999; Walsh et al., 2020), when it comes to the long-term memory consequences of encountering predicted and unpredicted events, empirical findings are mixed at best (Greve et al., 2017; Gronau & Shachar, 2015; Kafkas & Montaldi, 2018a; Ortiz-Tudela et al., 2018; Sinclair & Barense, 2018). In this study, we examined the episodic memory consequences of experiences that vary in levels of prediction violation. ...
... Assuming that predictions necessarily stem from stored knowledge, an influential theoretical model tries to resolve this apparent paradox of enhanced memory both for well-predicted events and for events giving rise to PE (Van Kesteren et al., 2012; see Press et al., 2020, for a similar discussion in the perceptual domain). This account states that two different neural mechanisms are responsible for the seemingly conflicting evidence. ...
Preprint
The characterization of the relationship between predictions and one-shot episodic encoding poses an important challenge for memory research. On the one hand, events that are compatible with our previous knowledge are thought to be remembered better than incompatible ones. On the other hand, unexpected situations, by virtue of their surprise, are known to cause enhanced learning. Several theoretical accounts try to solve this apparent paradox by conceptualizing prediction error (PE) as a continuum ranging from low PE (for expectation-matching events) to high PE (for expectation-mismatching ones). Under such a framework, the relationship between PE and memory encoding would be described by a U-shaped function, with higher memory performance for extreme levels of PE and lower memory for middle levels of PE. In this study, we used a gradual manipulation of the strength of association between scenes and objects to render different levels of PE and then tested for episodic memory of the (mis)matching events. In two experiments, and in contrast to what was anticipated, recognition memory as a function of PE followed an inverted U-shape, with higher performance for intermediate levels of PE. Furthermore, in two additional experiments we showed the relevance of explicit predictions at encoding for revealing such an inverted-U pattern, thus providing the boundary conditions of the effect. We discuss our current findings in light of the uncertainty in the environment and the importance of the operations underlying encoding tasks.
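The competing U-shape and inverted-U-shape predictions can be arbitrated with a quadratic term, as in this sketch on simulated data; the sign of the quadratic coefficient distinguishes the two, and all values here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
pe = rng.uniform(0, 1, 300)  # prediction-error level per encoding event

# Simulated recognition scores peaking at intermediate PE (inverted U).
memory = 0.6 + 0.8 * pe - 0.8 * pe**2 + rng.normal(0, 0.05, 300)

# Fit memory ~ b0 + b1*PE + b2*PE^2: b2 < 0 indicates an inverted U
# (the pattern reported here), b2 > 0 a U-shape (the original hypothesis).
b2, b1, b0 = np.polyfit(pe, memory, deg=2)
print(f"quadratic coefficient b2 = {b2:.2f}")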
... While results of the present study support the sharpening account of predictive coding theories, they do not necessarily argue against the dampening account (Kumar et al., 2017; Press et al., 2020; Richter et al., 2018). First, this account posits that prediction signals filter out predicted features of stimuli by silencing neurons tuned to these features and increasing the sensitivity of neurons tuned to other features. ...
... Such a condition would therefore be needed in further experiments in order to arbitrate between the sharpening and dampening accounts. Importantly, recent works suggest that these two accounts of predictive coding may not be mutually exclusive but could coexist (Press et al., 2020), such that their influence on perception would vary according to temporal constraints and signal precision. Under this framework, the sharpening mechanism would take place first, allowing expectations to be confirmed. ...
Article
Full-text available
Predictive coding theories of visual perception postulate that expectations based on prior knowledge modulate the processing of information by sharpening the representation of expected features of a stimulus in visual cortex, but few studies have directly investigated whether expectations qualitatively affect perception. Our study investigated the influence of expectations based on prior experience and contextual information on the perceived sharpness of objects and scenes. In Experiments 1 and 2, we used a perceptual matching task. Participants saw two blurred images depicting the same object or scene and had to adjust the blur level of the right image to match the blur level of the left one. We manipulated the availability of relevant information to form expectations about the image's content: one of the two images contained predictable information while the other contained unpredictable information. At an equal level of blur, predictable objects and scenes were perceived as sharper than unpredictable ones. Experiment 3, involving explicit sharpness judgments, confirmed these results. Our findings support the sharpening account of predictive coding theories by showing that expectations increase the perceived sharpness of the visual signal. Expectations about the visual environment not only help us understand it more easily, but also make us perceive it better.
... Generating top-down expectations of perceptual experiences is of little use in facilitating perceptual processing if these signals are not successfully integrated with bottom-up sensory representations. Incorporating perceptual expectations into feedforward sensory processing serves two important purposes: First, via predictive coding, it reduces the enormous physiological cost of continuously processing an environment filled with static signals that have little relevance for behavior; second, via prediction error, it provides a mechanism for learning when perceptual expectations are violated (Press et al., 2020; Grotheer & Kovacs, 2016; Auksztulewicz & Friston, 2016). In particular, expectations about perceptual events tend to lead to suppression of neural activity (also known as expectation suppression; Todorovic et al., 2011), which was first noted as a critical process in sensorimotor integration and motor learning, as organisms must be able to dissociate sensory experiences related to their own actions from those that arise externally from the environment (Crapse & Sommer, 2008). ...
... Prediction is useful not only because it pre-activates perceptual information, but also because it serves as a template against which to compare incoming signals: Consistent sensory information is processed more efficiently, while a mismatch triggers an error response that drives plasticity (Press et al., 2020). Price and Devlin (2011) describe how the magnitude of prediction error varies during learning: ...
Preprint
A perceptual adaptation deficit often accompanies reading difficulty in dyslexia, manifesting in poor perceptual learning of consistent stimuli and reduced neurophysiological adaptation to stimulus repetition. However, it is not known how adaptation deficits relate to differences in feedforward or feedback processes in the brain. Here we used electroencephalography (EEG) to interrogate the feedforward and feedback contributions to neural adaptation as adults with and without dyslexia viewed pairs of faces and words in a paradigm that manipulated whether there was a high probability of stimulus repetition versus a high probability of stimulus change. We measured three neural dependent variables: expectation (the difference between prestimulus EEG power with and without the expectation of stimulus repetition), feedforward repetition (the difference between event-related potentials (ERPs) evoked by an expected change and an unexpected repetition), and feedback-mediated prediction error (the difference between ERPs evoked by an unexpected change and an expected repetition). Expectation significantly modulated prestimulus theta- and alpha-band EEG in both groups. Unexpected repetitions of words, but not faces, also led to significant feedforward repetition effects in the ERPs of both groups. However, neural prediction error when an unexpected change occurred instead of an expected repetition was significantly weaker in dyslexia than the control group for both faces and words. These results suggest that the neural and perceptual adaptation deficits observed in dyslexia reflect the failure to effectively integrate perceptual predictions with feedforward sensory processing. In addition to reducing perceptual efficiency, the attenuation of neural prediction error signals would also be deleterious to the wide range of perceptual and procedural learning abilities that are critical for developing accurate and fluent reading skills.
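The two ERP-based dependent variables defined in this abstract are simple condition contrasts on trial-averaged waveforms. A minimal sketch with simulated epochs; the array shapes and names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_times = 100, 300  # simulated post-stimulus epochs per condition

def erp(trials):
    return trials.mean(axis=0)  # trial average -> event-related potential

expected_repetition = rng.normal(0, 1, (n_trials, n_times))
unexpected_repetition = rng.normal(0, 1, (n_trials, n_times))
expected_change = rng.normal(0, 1, (n_trials, n_times))
unexpected_change = rng.normal(0, 1, (n_trials, n_times))

# Feedforward repetition: ERP(expected change) - ERP(unexpected repetition).
feedforward_repetition = erp(expected_change) - erp(unexpected_repetition)

# Feedback-mediated prediction error:
# ERP(unexpected change) - ERP(expected repetition).
prediction_error = erp(unexpected_change) - erp(expected_repetition)

print(feedforward_repetition.shape, prediction_error.shape)  # (300,) each
```

The group difference reported above concerns only the second contrast: the prediction-error waveform was attenuated in dyslexia while the feedforward repetition contrast was intact.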
... This efference copy is used to compute a neural prediction, a corollary discharge (Sperry, 1950), of the sensory consequences of the action. If the corollary discharge matches the sensory input, the sensory input is tagged as self-generated and neural and perceptual responses are suppressed; this phenomenon is called sensory suppression (Horváth, 2015; Schröger, Marzecová, & SanMiguel, 2015; Hughes, Desantis, & Waszak, 2013; Bendixen, SanMiguel, & Schröger, 2012; for an alternative explanation of this phenomenon, see Press, Kok, & Yon, 2020; Reznik & Mukamel, 2019; Yon, Gilbert, de Lange, & Press, 2018). Alternatively, if the corollary discharge does not match the sensory input, or if the sensory input is not accompanied by a corollary discharge, the sensory input is tagged as externally generated (Straka, Simmers, & Chagnaud, 2018; Schneider & Mooney, 2018; Crapse & Sommer, 2008; Poulet & Hedwig, 2007; Schütz-Bosbach & Prinz, 2007). ...
... Again, the results of this study do not support this. Third, it has recently been suggested that sensory suppression is not the result of an internal forward model; instead, it is the result of predictive signals that enhance responses from neurons tuned for the sensory consequences of our actions and suppress responses from neurons that are tuned away (Press et al., 2020;Reznik & Mukamel, 2019;Yon et al., 2018). However, distinguishing between these theories is beyond the scope of this study. ...
Article
Sensory suppression refers to the phenomenon that sensory input generated by our own actions, such as moving a finger to press a button to hear a tone, elicits smaller neural responses than sensory input generated by external agents. This observation is usually explained via the internal forward model in which an efference copy of the motor command is used to compute a corollary discharge, which acts to suppress sensory input. However, because moving a finger to press a button is accompanied by neural processes involved in preparing and performing the action, it is unclear whether sensory suppression is the result of movement planning, movement execution, or both. To investigate this, in two experiments, we compared event-related potentials to self-generated tones that were produced by voluntary, semivoluntary, or involuntary button-presses, with externally generated tones that were produced by a computer. In Experiment 1, the semivoluntary and involuntary button-presses were initiated by the participant or experimenter, respectively, by electrically stimulating the median nerve in the participant's forearm, and in Experiment 2, by applying manual force to the participant's finger. We found that tones produced by voluntary button-presses elicited a smaller N1 component of the event-related potential than externally generated tones. This is known as N1-suppression. However, tones produced by semivoluntary and involuntary button-presses did not yield significant N1-suppression. We also found that the magnitude of N1-suppression linearly decreased across the voluntary, semivoluntary, and involuntary conditions. These results suggest that movement planning is a necessary condition for producing sensory suppression. We conclude that the most parsimonious account of sensory suppression is the internal forward model.
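The reported linear decrease in N1-suppression across conditions amounts to a within-participant trend test. A sketch on simulated N1 amplitudes; the sample size, values, and contrast coding are our assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 30  # hypothetical sample size

# Simulated N1 amplitudes in microvolts (more negative = larger N1).
external = rng.normal(-5.0, 1.0, n)   # computer-generated tones
voluntary = rng.normal(-3.5, 1.0, n)  # strong N1-suppression expected
semivoluntary = rng.normal(-4.2, 1.0, n)
involuntary = rng.normal(-4.8, 1.0, n)

# N1-suppression = external minus self-generated N1 (positive = suppression).
supp = np.stack([external - voluntary,
                 external - semivoluntary,
                 external - involuntary])

# Linear trend over the ordered conditions: per-participant contrast
# (+1, 0, -1), tested against zero.
trend = supp.T @ np.array([1.0, 0.0, -1.0])
print(stats.ttest_1samp(trend, 0.0))
```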
... The role of expectations about object attributes for IB raises the question whether such expectations might directly cause failures to detect visual objects, even when focal attention is not diverted to a different primary task. Resolving this question would be theoretically important, as there is currently considerable debate about whether expectations can modulate sensory processes that give rise to conscious awareness in ways that are independent of attention (e.g., Alink & Blank, 2021; Press et al., 2020; Rungratsameetaweemana & Serences, 2019). It is well known that expectations modulate the selective processing of visual signals (Feldman & Friston, 2010; Summerfield & de Lange, 2014; de Lange et al., 2018), and this may also affect conscious access. ...
... This is likely to be linked to the fact that targets were presented only briefly and subsequently masked. As recently argued by Press et al. (2020), perception is initially biased towards expected events, whereas unpredicted events can be selectively highlighted at later stages. ...
Article
Full-text available
Selective attention gates access to conscious awareness, resulting in surprising failures to notice clearly visible but unattended objects ('inattentional blindness'). Here, we demonstrate that expectations can have a similar effect, even for fully attended objects ('expectation-based blindness'). In three experiments, participants (N = 613) were presented with rapid serial visual presentation (RSVP) streams at fixation and had to identify a target object indicated by a cue. Target category was repeated for the first 19 trials but unexpectedly changed on trial 20. The probability of correct target reports on this surprise trial was substantially lower than on preceding and subsequent trials. This impairment was present for switches between target letters and digits, and also for changes between human and animal face images. In contrast, no drop in accuracy was observed for novel target objects from the same category as previous targets. These results demonstrate that predictions about object categories affect visual awareness. Objects that are task relevant and focally attended often fail to get noticed when their category changes unexpectedly.
... More broadly, the current study's findings are relevant for a current debate on how the mind integrates information to make predictions about upcoming events (Dogge et al. 2019; Press et al. 2020; Yon and Frith 2021). Dogge et al. (2019) argued that while sensorimotor-based internal models (e.g., the comparator) are fit to make predictions regarding bodily sensations (e.g., being tickled vs. tickling oneself; Blakemore et al. 2000), they are a poor fit for predicting extra-body effects (like the action-effects in our study). ...
... value-free, action-effects) response frequency could follow a different computation, one where response frequency is scaled by contingency and reward outcome. Another possible explanation for this discrepancy may be found in a recent study by Yon et al. (2020). In that study, the conceptual component of agency was quantified using several approaches, including signal detection, and it was found that explicit judgements of agency are biased upward ("I am in control") due to overweighting of action-effect occurrences (in contrast to the unbiased description of causality suggested by Delta-P; Wasserman et al. 1983, but see Spellman 1996). ...
Article
Full-text available
Humans and other animals live in dynamic environments. To reliably manipulate the environment and attain their goals they would benefit from a constant modification of motor-responding based on responses' current effect on the current environment. It is argued that this is exactly what is achieved by a mechanism that reinforces responses which have led to accurate sensorimotor predictions. We further show that evaluations of a response's effectiveness can occur simultaneously, driven by at least two different processes, each relying on different statistical properties of the feedback and affecting a different level of responding. Specifically, we show the continuous effect of (a) a sensorimotor process sensitive only to the conditional probability of effects given that the agent acted on the environment (i.e., action-effects) and of (b) a more abstract judgement or inference that is also sensitive to the conditional probabilities of occurrence of feedback given no action by the agent (i.e., inaction-effects). The latter process seems to guide action selection (e.g., should I act?) while the former the manner of the action's execution. This study is the first to show that different evaluation processes of a response’s effectiveness influence different levels of responding.
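For reference, the Delta-P statistic mentioned in these excerpts (Wasserman et al., 1983) contrasts the probability of an effect given action versus inaction. A minimal sketch with hypothetical counts:

```python
def delta_p(effects_after_action, n_action, effects_after_inaction, n_inaction):
    """Delta-P = P(effect | action) - P(effect | no action)."""
    return effects_after_action / n_action - effects_after_inaction / n_inaction

# Hypothetical counts: the effect follows 70 of 100 action trials
# and 30 of 100 inaction trials.
print(delta_p(70, 100, 30, 100))  # 0.4: acting raises the effect's probability
```

The abstract's distinction maps onto the two terms: the sensorimotor process tracks only the first conditional probability, whereas the more abstract judgement is sensitive to both.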
... In tactile suppression, sensorimotor predictions indeed reduce sensitivity to externally-generated stimuli on one's own body independently of masking, as reflected in tactile suppression on a limb that is about to move but eventually does not move (Voss, Ingram, Wolpert, & Haggard, 2008). Although tactile suppression is specific to the predicted sensory consequences of a movement (Fuehrer, Voudouris, Lezkan, Drewing, & Fiehler, 2022), recent accounts propose that suppression of externally-generated stimuli is unrelated to sensorimotor predictions (Press, Kok, & Yon, 2018; Kilteni & Ehrsson, 2022), stimulating a debate about the mechanisms underlying movement-induced tactile suppression. ...
... This tuning of suppression adds evidence to a debate about the origins of this phenomenon. Recent accounts claim that suppression of externally-generated stimuli on a moving limb stems solely from a mechanism that does not involve predictive commands based on internal forward models (Press et al., 2018; Kilteni & Ehrsson, 2022). However, we have recently shown that tactile suppression stems from specific sensorimotor predictions (Fuehrer, Voudouris, Lezkan, Drewing, & Fiehler, 2022). ...
Article
Tactile perception is impaired in a limb that is moving compared to when it is static. A possible mechanism that explains this phenomenon is an internal forward model that estimates future sensory states of the moving limb and suppresses associated feedback signals arising from that limb. Because sensorimotor estimations are based on an interplay of efferent and afferent feedback signals, the strength of tactile suppression may also depend on the relative utilization of sensory feedback from the moving limb. To test how the need to process somatosensory feedback influences movement-induced tactile suppression, we asked participants to perform reach-to-grasp movements of different demands: the target object was covered with materials of different frictional properties and the task was performed both with and without vision. As expected, participants performed the grasping movement more carefully when interacting with objects with low compared to high friction surfaces and when performing the task without vision. This denotes a greater need for somatosensory guidance of the digits to appropriately position them on the object. Accordingly, tactile suppression was weaker when grasping low compared to high friction objects, but only when grasping without vision. This suggests that movement-induced tactile suppression is modulated by grasping demands. Tactile suppression is downregulated when the need to process somatosensory feedback signals from the moving limb increases, as in situations when somatosensory input about the digit's state is the sole source of sensory information and when this information is particularly important for the task at hand.
... For example, recent studies suggested that the mapping between visual and motor codes is accomplished via statistical association processes (e.g., Press, Berlot, Bird, Ivry, & Cook, 2014; Press, Kok, & Yon, 2020; Yon, Zainzinger, de Lange, Eimer, & Press, 2021). The constant exposure to the temporal perturbation in the current study might have changed this mapping process and thereby produced the observed adaptation transfer to other visuo-motor tasks, but not to audio-motor tasks. ...
Article
Full-text available
Complex, goal-directed and time-critical movements require the processing of temporal features in sensory information as well as the fine-tuned temporal interplay of several effectors. Temporal estimates used to produce such behavior may thus be obtained through perceptual or motor processes. To disentangle the two options, we tested whether adaptation to a temporal perturbation in an interval reproduction task transfers to interval reproduction tasks with varying sensory information (visual appearance of targets, modality, and virtual reality [VR] environment or real-world) or varying movement types (continuous arm movements or brief clicking movements). Halfway through the experiments we introduced a temporal perturbation, such that continuous pointing movements were artificially slowed down in VR, causing participants to adapt their behavior to sustain performance. In four experiments, we found that sensorimotor adaptation to temporal perturbations is independent of environment context and movement type, but modality specific. Our findings suggest that motor errors induced by temporal sensorimotor adaptation affect the modality specific perceptual processing of temporal estimates.
... As expected, we observed an overall increase in suppression for tactile compared to visual feedback, both in the early and in the late phase of the movement. This differential effect argues against a general gating mechanism leading to an overall suppression of external somatosensory signals during movement (Press et al., 2020; Kilteni and Ehrsson, 2021). The increase in suppression for tactile compared to visual feedback points to a predictive component based on the expected tactile feedback at the end of the reach. ...
Article
Full-text available
Predictable somatosensory feedback leads to a reduction in tactile sensitivity. This phenomenon, called tactile suppression, relies on a mechanism that uses an efference copy of motor commands to help select relevant aspects of incoming sensory signals. We investigated whether tactile suppression is modulated by (a) the task-relevancy of the predicted consequences of movement and (b) the intensity of related somatosensory feedback signals. Participants reached to a target region in the air in front of a screen; visual or tactile feedback indicated that the reach was successful. Furthermore, tactile feedback intensity (strong vs. weak) varied across two groups of participants. We measured tactile suppression by comparing detection thresholds for a probing vibration applied to the finger either early or late during the reach and at rest. As expected, we found an overall decrease in late-reach suppression, as no touch was involved at the end of the reach. We observed an increase in the degree of tactile suppression when strong tactile feedback was given at the end of the reach, compared to when weak tactile feedback or visual feedback was given. Our results suggest that the extent of tactile suppression can be adapted to different demands of somatosensory processing. Downregulation of this mechanism is invoked only when the consequences of missing a weak movement sequence are severe for the task. The decisive factor for the presence of tactile suppression seems not to be the predicted action effect as such, but the need to detect and process anticipated feedback signals occurring during movement.
... To find the right balance between favoring the expected and the novel, the brain may dynamically adjust the relative weights assigned to visual inputs and to top-down predictions, for example, based on current internal mental states (Herz, Baror, & Bar, 2020) and the precision of both the visual input and our predictions in a given situation (Yon & Frith, 2021). A recent complementary account suggests that during the perceptual processing cascade, processing is, in turn, biased toward the expected and then the surprising (Press, Kok, & Yon, 2020). When and how natural vision is biased toward the expected structure of the world and toward novel, unexpected information, and how this balance is controlled on a neural level, are exciting questions for future investigation. ...
Article
Full-text available
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part–whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part–whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part–whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
... Finally, a recent theoretical proposal suggests that action prediction should increase the perceived intensity of expected effects (e.g., the self-generated test tap) whereas secondary processes increase the intensity of subsequent events that are surprising (Press et al., 2020a). Accordingly, the attenuation of self-generated touch reflects generalized sensory suppression effects during movement rather than motor predictions, and/or it is a postdictive process that occurs after the presentation of the stimulus (Press et al., 2020b). ...
Article
Full-text available
The discovery of mirror neurons in the macaque brain in the 1990s triggered investigations on putative human mirror neurons and their potential functionality. The leading proposed function has been action understanding: accordingly, we understand the actions of others by ‘simulating’ them in our own motor system through a direct matching of the visual information to our own motor programs. Furthermore, it has been proposed that this simulation involves the prediction of the sensory consequences of the observed action, similar to the prediction of the sensory consequences of our executed actions. Here, we tested this proposal by quantifying somatosensory attenuation behaviorally during action observation. Somatosensory attenuation manifests during voluntary action and refers to the perception of self‐generated touches as less intense than identical externally generated touches because the self‐generated touches are predicted from the motor command. Therefore, we reasoned that if an observer simulates the observed action and, thus, he/she predicts its somatosensory consequences, then he/she should attenuate tactile stimuli simultaneously delivered to his/her corresponding body part. In three separate experiments, we found a systematic attenuation of touches during executed self‐touch actions, but we found no evidence for attenuation when such actions were observed. Failure to observe somatosensory attenuation during observation of self‐touch is not compatible with the hypothesis that the putative human mirror neuron system automatically predicts the sensory consequences of the observed action. In contrast, our findings emphasize a sharp distinction between the motor representations of self and others.
... One theory is that expecting a stimulus evokes a "template" in neural populations that prefer the expected location and features, even before a stimulus arrives (Kok, Failing, & de Lange, 2014; Kok, Mostert, & De Lange, 2017). The computational role of these neuronal effects is still debated, as is their relation to attention and other aspects of decision-making (Press, Kok, & Yon, 2020; Rungratsameetaweemana & Serences, 2019; Summerfield & Egner, 2016). One fMRI study investigated the neural correlates of payoff and probability manipulations (like those we used) in a motion discrimination task. ...
Preprint
Full-text available
The appearance of a salient stimulus rapidly inhibits saccadic eye movements. Curiously, this “oculomotor freezing” reflex is triggered only by stimuli that the observer reports seeing. It remains unknown, however, if oculomotor freezing is linked to the observer’s sensory experience, or their decision that a stimulus was present. To dissociate between these possibilities, we manipulated decision criterion via monetary payoffs and stimulus probability in a detection task. These manipulations greatly shifted observers’ decision criteria but did not affect the degree to which microsaccades were inhibited by stimulus presence. Moreover, the link between oculomotor freezing and explicit reports of stimulus presence was stronger when the criterion was conservative rather than liberal. We conclude that the sensory threshold for oculomotor freezing is independent of decision bias. Provided that conscious experience is also unaffected by such bias, oculomotor freezing is an implicit indicator of sensory awareness. New & Noteworthy Sometimes a visual stimulus reaches awareness, and sometimes it does not. To understand why, we need objective, bias-free measures of awareness. We discovered that a reflexive freezing of small eye movements indicates when an observer detects a stimulus. Furthermore, when we biased observers’ decisions to report seeing the stimulus, the oculomotor reflex was unaltered. This suggests that the threshold for conscious perception is independent of the decision criterion and is revealed by oculomotor freezing.
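The criterion and sensitivity quantities underlying this analysis are the standard signal-detection measures. A minimal sketch computing d′ and criterion c from hit and false-alarm rates; the rates are invented for illustration:

```python
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Signal-detection sensitivity d' and criterion c from hit/false-alarm rates."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa
    criterion = -0.5 * (z_h + z_fa)  # c > 0: conservative; c < 0: liberal
    return d_prime, criterion

# Payoffs favouring "yes" responses shift c without changing d'.
print(sdt(0.85, 0.30))  # roughly neutral criterion (illustrative rates)
print(sdt(0.95, 0.55))  # liberal criterion: more hits and more false alarms
```

The study's logic follows directly: payoff and probability manipulations moved c substantially, yet the stimulus threshold for microsaccade inhibition stayed put.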
... This act of balancing needs to depend on the uncertainty itself, which is described as the precision weighting of prior predictions. For example, it has been proposed that the effect of attention, sometimes inhibiting and sometimes boosting the impact of sensory data, is modulated by precision weighting and the surprise of the sensory data (Kok et al., 2012; Press et al., 2020). ...
Preprint
Full-text available
In this paper we present a computational modeling account of an active self in artificial agents. In particular, we focus on how an agent can be equipped with a sense of control, how that sense arises in autonomous situated action and how it, in turn, influences action control. We argue that this requires laying out an embodied cognitive model that combines bottom-up processes (sensorimotor learning and fine-grained adaptation of control) with top-down processes (cognitive processes for strategy selection and decision-making). We present such a conceptual computational architecture based on principles of predictive processing and free energy minimization. Using this general model, we describe how a sense of control can form across the levels of a control hierarchy and how this can support action control in an unpredictable environment. We present an implementation of this model as well as first evaluations in a simulated task scenario, in which an autonomous agent has to cope with un-/predictable situations and experiences a corresponding sense of control. We explore different model parameter settings that lead to different ways of combining low-level and high-level action control. The results show the importance of appropriately weighting information in situations where the need for low/high-level action control varies, and they demonstrate how the sense of control can facilitate this.
... A common thread among these lines of investigation is that stimulus regularities provide an opportunity to generate predictions. Ideally, expected input should be processed more efficiently, while mismatches should trigger an error response that, in a virtuous circle, improves future predictions (Press et al., 2020). In dyslexia, however, the availability of predictions seems to have a reduced effect on perception, and this may be related to findings, in other studies, of reduced neural mismatch responses in dyslexia Gu and Bi, 2020). ...
Article
Full-text available
The neural representation of a repeated stimulus is the standard against which a deviant stimulus is measured in the brain, giving rise to the well-known mismatch response. It has been suggested that individuals with dyslexia have poor implicit memory for recently repeated stimuli, such as the train of standards in an oddball paradigm. Here, we examined how the neural representation of a standard emerges over repetitions, asking whether there is less sensitivity to repetition and/or less accrual of “standardness” over successive repetitions in dyslexia. We recorded magnetoencephalography (MEG) as adults with and without dyslexia were passively exposed to speech syllables in a roving-oddball design. We performed time-resolved multivariate decoding of the MEG sensor data to identify the neural signature of standard vs. deviant trials, independent of stimulus differences. This “multivariate mismatch” was equally robust and had a similar time course in the two groups. In both groups, standards generated by as few as two repetitions were distinct from deviants, indicating normal sensitivity to repetition in dyslexia. However, only in the control group did standards become increasingly different from deviants with repetition. These results suggest that many of the mechanisms that give rise to neural adaptation as well as mismatch responses are intact in dyslexia, with the possible exception of a putatively predictive mechanism that successively integrates recent sensory information into feedforward processing.
... The N170 differences in our study may reflect this sharpening effect, whereby representations of expressions that were congruent with scene-based expectations were sharpened by neural systems, resulting in a more negative N170 amplitude. In addition, the ambiguity of the stimulus could also affect this sharpening effect (Press et al., 2020). Because of the emotional clarity of the happy expression relative to the fearful expression (Aviezer et al., 2012), the happy expression in the present study showed a more pronounced scene effect on the N170 than the fearful expression did. ...
Article
Prior expectations play an important role in the process of perception. In real life, facial expressions always appear within a scene, which enables individuals to generate predictions that affect facial expression judgments. In the present study, using event-related potentials, we investigated the influence of scene-based expectation on facial expression processing. In addition, we used a cognitive task to manipulate cognitive load to interfere with scene-based expectation. Results showed that under the condition of sufficient cognitive resources, faces elicited more negative N170 amplitudes and more positive N400 amplitudes when the emotional valence of the scenes and faces was congruent. However, in the condition of cognitive load, no such difference was observed. The findings suggested that the effect of expectation on facial expression recognition emerges during both the early and late stages of facial expression processing, and the effect is weakened when cognitive resources are occupied by unrelated tasks.
... The added priors would not be individualized but represent general influences of experience, i.e., irrespective of structural body variations. Computationally, this could be covered by a top-down modulation to predict influences of previous experiences on prior couplings, e.g., visuo-tactile integration or sensorimotor learning, using the implementation of learning-based models of inter- and intramodal sensory signals (Van Dam et al., 2014; Parise, 2016; Noel et al., 2018; Litwin, 2020; Press et al., 2020). ...
Article
Full-text available
Using the seminal rubber hand illusion and related paradigms, the last two decades unveiled the multisensory mechanisms underlying the sense of limb embodiment, that is, the cognitive integration of an artificial limb into one's body representation. Since individuals with amputations can also be induced to embody an artificial limb by multimodal sensory stimulation, it can be assumed that the involved computational mechanisms are universal and independent of the perceiver's physical integrity. This is anything but trivial, since experimentally induced embodiment has been related to the embodiment of prostheses in limb amputees, representing a crucial rehabilitative goal with clinical implications. However, until now there has been no unified theoretical framework to explain limb embodiment in structurally varying bodies. In the present work, we suggest extensions of the existing Bayesian models on limb embodiment in normally-limbed persons in order to apply them to the specific situation in limb amputees lacking the limb as a physical effector. We propose that adjusted weighting of included parameters of a unified modeling framework, rather than qualitatively different model structures for normally-limbed and amputated individuals, is capable of explaining embodiment in structurally varying bodies. Differences in the spatial representation of the close environment (peripersonal space) and the limb (phantom limb awareness) as well as sensorimotor learning processes associated with limb loss and the use of prostheses might be crucial modulators for embodiment of artificial limbs in individuals with limb amputation. We will discuss implications of our extended Bayesian model for basic research and clinical contexts.
... For example, Bayesian models of multisensory integration suggest that observers combine signals from different modalities according to their estimated precision, lending more weight to more certain sensory channels (Alais & Burr, 2004; Ernst & Banks, 2002). Similarly, Bayesian models of prediction suggest that observers make perceptual inferences by combining incoming evidence with probabilistic expectations, leaning more on prior knowledge when the evidence is more ambiguous, i.e., less precise (Olkkonen et al., 2014; Press et al., 2020). It is possible that the precision representations used to solve these combination problems are also shaped by expectations. ...
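For reference, the reliability-weighted combination rule these models assume has a standard closed form: given unimodal estimates $\hat{s}_1$ and $\hat{s}_2$ with variances $\sigma_1^2$ and $\sigma_2^2$, the integrated estimate is

$$\hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},$$

so each cue is weighted by its relative precision $1/\sigma_i^2$, and the variance of the combined estimate, $(1/\sigma_1^2 + 1/\sigma_2^2)^{-1}$, is lower than that of either cue alone.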
Preprint
Full-text available
Bayesian models of the mind suggest that we estimate the reliability or ‘precision’ of incoming sensory signals to guide perceptual inference and to construct feelings of confidence or uncertainty about what we are perceiving. However, accurately estimating precision is likely to be challenging for bounded systems like the brain. One way observers could overcome this challenge is to form expectations about the precision of their perceptions and use these expectations to guide metacognition and awareness. Here we test this possibility. Participants made perceptual decisions about visual motion stimuli, while providing confidence ratings (Exps 1 and 2) or ratings of subjective visibility (Exp 3). In each experiment, participants acquired probabilistic expectations about the likely strength of upcoming signals. We found that these expectations about precision altered metacognition and awareness: participants felt more confident and stimuli appeared more vivid when stronger sensory signals were expected, without concomitant changes in objective perceptual performance. Computational modelling revealed that this effect could be well explained by a predictive learning model that infers the precision (strength) of current signals as a weighted combination of incoming evidence and top-down expectation. These results support an influential but untested tenet of Bayesian models of cognition: agents do not only ‘read out’ the reliability of information arriving at their senses, but also take into account prior knowledge about how reliable or ‘precise’ different sources of information are likely to be. This reveals that expectations about precision exert a powerful influence on how the sensory world appears and how much we trust our senses.
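As a rough sketch of the weighted-combination rule this modelling points to (the function name and weight value are our illustrative assumptions, not the authors' fitted parameters):

```python
def inferred_strength(evidence, expectation, w_prior=0.4):
    """Infer the strength (precision) of a current signal as a weighted
    combination of bottom-up evidence and a top-down expectation.
    w_prior is a free illustrative parameter, not an estimate from the paper."""
    return w_prior * expectation + (1.0 - w_prior) * evidence

# Identical evidence is inferred as stronger when strong signals are expected,
# predicting higher confidence/visibility without any change in the input:
print(inferred_strength(evidence=0.5, expectation=0.2))  # 0.38
print(inferred_strength(evidence=0.5, expectation=0.8))  # 0.62
```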
... However, since the action-sound contingency was fixed in all conditions, our ability to ascribe motor-locked responses in auditory cortex to differences in the degree of predictability is limited. The effects of long delays and differences in levels of predictability on motor-sensory recalibration and the sense of agentic control still require further research (Haggard 2017; Press et al. 2020; Arikan et al. 2021). ...
Article
Sensory perception is a product of interactions between the internal state of an organism and the physical attributes of a stimulus. It has been shown across the animal kingdom that perception and sensory-evoked physiological responses are modulated depending on whether or not the stimulus is the consequence of voluntary actions. These phenomena are often attributed to motor signals sent to relevant sensory regions that convey information about upcoming sensory consequences. However, the neurophysiological signature of action-locked modulations in sensory cortex, and their relationship with perception, is still unclear. In the current study, we recorded neurophysiological (using Magnetoencephalography) and behavioral responses from 16 healthy subjects performing an auditory detection task of faint tones. Tones were either generated by subjects’ voluntary button presses or occurred predictably following a visual cue. By introducing a constant temporal delay between button press/cue and tone delivery, and applying source-level analysis, we decoupled action-locked and auditory-locked activity in auditory cortex. We show action-locked evoked-responses in auditory cortex following sound-triggering actions and preceding sound onset. Such evoked-responses were not found for button-presses that were not coupled with sounds, or sounds delivered following a predictive visual cue. Our results provide evidence for efferent signals in human auditory cortex that are locked to voluntary actions coupled with future auditory consequences.
... To find the right balance between favoring the expected and the novel, the brain may dynamically adjust the relative weights assigned to visual inputs and to top-down predictions, for example based on current internal mental states (Herz et al., 2020) and the precision of both the visual input and our predictions in a given situation (Yon & Frith, 2021). A recent complementary account suggests that during the perceptual processing cascade, processing is biased first towards the expected and then towards the surprising (Press et al., 2020). When and how natural vision is biased towards the expected structure of the world and towards novel, unexpected information, and how this balance is controlled on a neural level, are exciting questions for future investigation. ...
Preprint
Full-text available
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: by dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their consistent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene’s part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
... When engaging a second person, this imprecision presents as a failure of commitment to a discourse plan [40] with low confidence on the message choice (ambivalence). This state of low precision of higher-order priors makes all lower-level models equally likely for selection [41]; this increases the likelihood of frequent shifts in conversational goal, messages and speech structure (loosened associations: derailment, incoherence). The presence of imprecise priors at various higher levels of active inference considerably reduces the speed of message selection (reduced spontaneity) and implementation (reduced rate of speech). ...
Article
As we move through our environment, our visual system is presented with optic flow, a potentially important cue for perception, navigation and postural control. How does the brain anticipate the optic flow that arises as a consequence of our own movement? Converging evidence suggests that stimuli are processed differently by the brain if occurring as a consequence of self-initiated actions, compared to when externally generated. However, this has mainly been demonstrated with auditory stimuli. It is not clear how this occurs with optic flow. We measured behavioural, neurophysiological and head motion responses of 29 healthy participants to radially expanding, vection-inducing optic flow stimuli, simulating forward translational motion, which were either initiated by the participant’s own button-press (“self-initiated flow”) or by the computer (“passive flow”). Self-initiation led to a prominent and left-lateralized inhibition of the flow-evoked posterior event-related alpha desynchronization (ERD), and a stabilisation of postural responses. Neither effect was present in control button-press-only trials, without optic flow. Additionally, self-initiation also produced a large event-related potential (ERP) negativity between 130-170 ms after optic flow onset. Furthermore, participants’ visually induced motion sickness (VIMS) and vection intensity ratings correlated positively across the group – although many participants felt vection in the absence of any VIMS, none reported the opposite combination. Finally, we found that the simple act of making a button press leads to a detectable head movement even when using a chin rest. Taken together, our results indicate that the visual system is capable of predicting self-initiated optic flow, and that this prediction affects behaviour.
Article
Surprising scenarios can have different behavioural and neuronal consequences depending on the violation of the expectation. On the one hand, previous research has shown that the omission of a visual stimulus results in a robust cortical response representing that missing stimulus, a so-called negative prediction error. On the other hand, a large number of studies have revealed positive prediction error signals, entailing an increased neural response that can be attributed to the experience of a surprising, unexpected stimulus. However, there has so far been no evidence regarding how and when these prediction error signals co-occur. Here, we argue that the omission of an expected stimulus can and often does coincide with the appearance of an unexpected one. Therefore, we investigated whether positive and negative prediction error signals evoked by unpredicted cross-category stimulus transitions would temporally coincide during a speeded forced-choice fMRI paradigm. First, our findings provide behavioural evidence of facilitated responses to expected stimuli. In addition, we obtained evidence for negative prediction error signals as seen in differential activation of FFA and PPA during unexpected place and face trials, respectively. Lastly, a psychophysiological interaction analysis revealed evidence for positive prediction error signals represented by context-dependent functional coupling between the right IFG and FFA or PPA, respectively, implicating a network that updates the internal representation after the appearance of an unexpected stimulus through involvement of this frontal area. The current results are consistent with a predictive coding account of cognition and underline the importance of considering the potential dual nature of expectation violations. Furthermore, our results suggest that positive and negative prediction error signalling can be directly linked to regions associated with the processing of different stimulus categories.
Preprint
Full-text available
Some theories of predictive processing propose reduced sensory and neural responses to anticipated events. Support comes from M/EEG studies, showing reduced auditory N1 and P2 responses to self- compared to externally generated events, or when stimulus properties are more predictable (e.g. prototypical). The current study examined the sensitivity of N1 and P2 responses to statistical regularities of speech. We employed a motor-to-auditory paradigm comparing ERP responses to externally and self-generated pseudowords, varying in phonotactic probability and syllable stress. We expected to see N1 and P2 suppression for self-generated stimuli, with a greater suppression effect for more predictable features such as high phonotactic probability and first-syllable stress in pseudowords. We observed an interaction between phonotactic probability and condition on the N1 amplitude, with an enhanced effect of phonotactic probability in processing self-generated stimuli. However, the directionality of this effect was reversed compared to what was expected, namely a larger N1 amplitude for high-probability items, possibly indicating a perceptual bias toward the more predictable item. We further observed an effect of syllable stress on the P2 amplitude, with greater amplitudes in response to first-syllable stress items. The current results suggest that phonotactic probability plays an important role in processing self-generated speech, supporting feedforward models involved in speech production.
Article
Full-text available
Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both likely influencing attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli’s semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli’s goal-relevance via distractor’s colour (matching vs. mismatching the target), 2) stimuli’s multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent) and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time-window, with topographic (network-based) modulations at ~30 ms, followed by strength-based modulations at ~100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one’s goals, stimuli’s perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
Article
Previous studies have shown that self-generated stimuli in auditory, visual, and somatosensory domains are attenuated, producing decreased behavioral and neural responses compared to the same stimuli that are externally generated. Yet, whether such attenuation also occurs for higher-level cognitive functions beyond sensorimotor processing remains unknown. In this study, we assessed whether cognitive functions such as numerosity estimations are subject to attenuation in 56 healthy participants (32 women). We designed a task allowing the controlled comparison of numerosity estimations for self- (active condition) and externally (passive condition) generated words. Our behavioral results showed a larger underestimation of self- compared to externally-generated words, suggesting that numerosity estimations for self-generated words are attenuated. Moreover, the linear relationship between the reported and actual number of words was stronger for self-generated words, although the ability to track errors about numerosity estimations was similar across conditions. Neuroimaging results revealed that numerosity underestimation involved increased functional connectivity between the right intraparietal sulcus and an extended network (bilateral supplementary motor area, left inferior parietal lobule and left superior temporal gyrus) when estimating the number of self- vs. externally generated words. We interpret our results in light of two models of attenuation and discuss their perceptual versus cognitive origins. SIGNIFICANCE STATEMENT: We perceive sensory events as less intense when they are self-generated compared to externally generated ones. This phenomenon, called attenuation, enables us to distinguish sensory events of self versus external origin. Here, we designed a novel fMRI paradigm to assess whether cognitive processes such as numerosity estimations are also subject to attenuation. When asking participants to estimate the number of words they had generated or passively heard, we found a greater underestimation in the former case, providing behavioral evidence of attenuation. Attenuation was associated with increased functional connectivity of the intraparietal sulcus, a region involved in numerosity processing. Together, our results indicate that attenuation of self-generated stimuli is not limited to sensory consequences but also impacts cognitive processes such as numerosity estimations.
Article
It has been argued that novel compared to familiar stimuli are preferentially encoded into memory. Nevertheless, treating novelty as a categorical variable in experimental research is considered simplistic. We highlight the dimensional aspect of novelty and propose an experimental design that manipulates novelty continuously. We created the Graded Novelty Encoding Task (GNET), in which the difference between stimuli (i.e. novelty) is parametrically manipulated, paving the way for quantitative models of novelty processing. We designed an algorithm which generates visual stimuli by placing colored shapes in a grid. During the familiarization phase of the task, we repeatedly presented five pictures to the participants. In a subsequent incidental learning phase, participants were asked to differentiate between the “familiars” and novel images that varied in the degree of difference to the familiarized pictures (i.e. novelty). Finally, participants completed a surprise recognition memory test, where the novel stimuli from the previous phase were interspersed with distractors with similar difference characteristics. We numerically expressed the differences between the stimuli to compute a dimensional indicator of novelty and assessed whether it predicted recognition memory performance. Based on previous studies showing the beneficial effect of novelty on memory formation, we hypothesized that the more novel a given picture was, the better subsequent recognition performance participants would demonstrate. Our hypothesis was confirmed: recognition performance was higher for more novel stimuli. The GNET captures the continuous nature of novelty, and it may be useful in future studies that examine the behavioral and neurocognitive aspects of novelty processing.
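A minimal sketch of how such a dimensional novelty indicator could be computed for grid stimuli; the cell coding and distance measure are our assumptions, not necessarily the algorithm used in the GNET:

```python
import numpy as np

def graded_novelty(stimulus, familiar_set):
    """Dimensional novelty in [0, 1]: the proportion of grid cells that
    differ from the most similar familiarized stimulus. Each grid cell
    holds an integer coding the coloured shape it contains (assumed coding)."""
    return min(float(np.mean(stimulus != familiar)) for familiar in familiar_set)

rng = np.random.default_rng(1)
familiars = [rng.integers(0, 5, size=(4, 4)) for _ in range(5)]

near_copy = familiars[0].copy()
near_copy[0, 0] = (near_copy[0, 0] + 1) % 5      # change a single cell
print(graded_novelty(near_copy, familiars))       # low score, e.g. 0.0625

random_probe = rng.integers(0, 5, size=(4, 4))    # unrelated stimulus
print(graded_novelty(random_probe, familiars))    # higher score
```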
Article
Full-text available
We inhabit a continuously changing world, where the ability to anticipate future states of the environment is critical for adaptation. Anticipation can be achieved by learning about the causal or temporal relationship between sensory events, as well as by learning to act on the environment to produce an intended effect. Together, sensory-based and intention-based predictions provide the flexibility needed to successfully adapt. Yet it is currently unknown whether the two sources of information are processed independently to form separate predictions, or are combined into a common prediction. To investigate this, we ran an experiment in which the final tone of two possible four-tone sequences could be predicted from the preceding tones in the sequence and/or from the participants’ intention to trigger that final tone. This tone could be congruent with both sensory-based and intention-based predictions, incongruent with both, or congruent with one while incongruent with the other. Trials where predictions were incongruent with each other yielded similar prediction error responses irrespective of the violated prediction, indicating that both predictions were formulated and coexisted simultaneously. The violation of intention-based predictions yielded late additional error responses, suggesting that those violations underwent further differential processing which the violations of sensory-based predictions did not receive.
Article
Episodic memory is reconstructive and is thus prone to false memory formation. Although false memories are proposed to develop via associative processes, the nature of their neural representations, and the effect of sleep on false memory processing is currently unclear. The present research employed the Deese-Roediger-McDermott (DRM) paradigm and a daytime nap to determine whether semantic false memories and true memories could be differentiated using event-related potentials (ERPs). We also sought to illuminate the role of sleep in memory formation and learning. Healthy participants (N = 34, 28F, mean age = 23.23, range = 18-33) completed the learning phase of the DRM task followed by an immediate and a delayed recognition phase. The two recognition phases were separated by either a 2hr daytime nap or an equivalent wake period. Linear mixed modelling of effects at delayed recognition revealed larger LPC amplitudes for true memories in contrast to false memories for those in the wake group, and larger P300 amplitudes for false compared to true memories across sleep and wake groups. Larger LPC amplitudes for true memories were associated with enhanced true memory recognition following sleep, whilst larger P300 amplitudes were associated with similar true and false memory recognition rates. These findings are argued to reflect sleep’s ability to promote memory generalisation associated with pattern completion, whilst also enhancing true memory recognition when memory traces have a strong episodic basis (linked to pattern separation). The present research suggests that true and false memories have differing neural profiles and are reflective of adaptive memory processes.
Article
A perceptual adaptation deficit often accompanies reading difficulty in dyslexia, manifesting in poor perceptual learning of consistent stimuli and reduced neurophysiological adaptation to stimulus repetition. However, it is not known how adaptation deficits relate to differences in feedforward or feedback processes in the brain. Here we used electroencephalography (EEG) to interrogate the feedforward and feedback contributions to neural adaptation as adults with and without dyslexia viewed pairs of faces and words in a paradigm that manipulated whether there was a high probability of stimulus repetition versus a high probability of stimulus change. We measured three neural dependent variables: expectation (the difference between prestimulus EEG power with and without the expectation of stimulus repetition), feedforward repetition (the difference between event-related potentials (ERPs) evoked by an expected change and an unexpected repetition), and feedback-mediated prediction error (the difference between ERPs evoked by an unexpected change and an expected repetition). Expectation significantly modulated prestimulus theta- and alpha-band EEG in both groups. Unexpected repetitions of words, but not faces, also led to significant feedforward repetition effects in the ERPs of both groups. However, neural prediction error when an unexpected change occurred instead of an expected repetition was significantly weaker in dyslexia than the control group for both faces and words. These results suggest that the neural and perceptual adaptation deficits observed in dyslexia reflect the failure to effectively integrate perceptual predictions with feedforward sensory processing. In addition to reducing perceptual efficiency, the attenuation of neural prediction error signals would also be deleterious to the wide range of perceptual and procedural learning abilities that are critical for developing accurate and fluent reading skills.
Chapter
Somatosensory processing concerns the development of bodily experience via a variety of sensations, including tactile sensations and sensations of position and motion. This chapter discusses recent neuroimaging findings regarding the brain mechanisms of oral somatosensory processing. It outlines the brain mechanisms associated with gustation. In particular, the chapter highlights the role of affective–motivational processing of food, an issue highly relevant to gustatory and oral functions. Following somatosensation and gustation, it focuses on how cognitive–affective functions shape our experience of oral conditions. Specifically, the chapter outlines the current understanding of perception, attention, motivation and emotion from the perspective of cognitive neuroscience and highlights the association between oral sensorimotor functions and these cognitive–affective functions. Finally, the chapter turns to multisensory integration, summarizing current knowledge of multisensory integration related to oral functions and the relevant brain mechanisms.
Article
In this paper we present a computational modeling account of an active self in artificial agents. In particular we focus on how an agent can be equipped with a sense of control and how it arises in autonomous situated action and, in turn, influences action control. We argue that this requires laying out an embodied cognitive model that combines bottom-up processes (sensorimotor learning and fine-grained adaptation of control) with top-down processes (cognitive processes for strategy selection and decision-making). We present such a conceptual computational architecture based on principles of predictive processing and free energy minimization. Using this general model, we describe how a sense of control can form across the levels of a control hierarchy and how this can support action control in an unpredictable environment. We present an implementation of this model as well as first evaluations in a simulated task scenario, in which an autonomous agent has to cope with un-/predictable situations and experiences a corresponding sense of control. We explore different model parameter settings that lead to different ways of combining low-level and high-level action control. The results show the importance of appropriately weighting information in situations where the need for low/high-level action control varies, and they demonstrate how the sense of control can facilitate this.
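As a toy illustration of the kind of quantity such an architecture could track, the sketch below updates a scalar sense of control from precision-weighted prediction errors; the rule, names and parameters are our own simplification of the precision-weighting idea, not the authors' implementation:

```python
import numpy as np

def update_sense_of_control(soc, prediction, outcome, precision, lr=0.1):
    """Move the sense of control (soc, in [0, 1]) toward 1 when
    precision-weighted prediction errors are small, and toward 0
    when they are large. Illustrative rule only."""
    weighted_error = precision * (outcome - prediction) ** 2
    return soc + lr * (np.exp(-weighted_error) - soc)

rng = np.random.default_rng(0)
soc = 0.5
for _ in range(200):  # predictable phase: outcomes closely match predictions
    outcome = 1.0 + rng.normal(0.0, 0.1)
    soc = update_sense_of_control(soc, prediction=1.0, outcome=outcome, precision=4.0)
print(round(soc, 2))  # high soc; a controller could use this signal to favor
                      # fast low-level control over deliberate replanning
```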
Article
It is widely believed that predicted tactile action outcomes are perceptually attenuated. The present experiments determined whether predictive mechanisms necessarily generate attenuation or, instead, can enhance perception—as typically observed in sensory cognition domains outside of action. We manipulated probabilistic expectations in a paradigm often used to demonstrate tactile attenuation. Adult participants produced actions and subsequently rated the intensity of forces on a static finger. Experiment 1 confirmed previous findings that action outcomes are perceived less intensely than passive stimulation but demonstrated more intense perception when active finger stimulation was removed. Experiments 2 and 3 manipulated prediction explicitly and found that expected touch during action is perceived more intensely than unexpected touch. Computational modeling suggested that expectations increase the gain afforded to expected tactile signals. These findings challenge a central tenet of prominent motor control theories and demonstrate that sensorimotor predictions do not exhibit a qualitatively distinct influence on tactile perception.
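One compact way to express the gain account suggested by this modelling (our illustrative notation, not the authors' exact model): the perceived intensity of a tactile signal of physical magnitude $I$ can be written as

$$\hat{I} = g \cdot I + \varepsilon, \qquad g_{\text{expected}} > g_{\text{unexpected}},$$

so identical forces are rated as more intense when they are the expected outcome of the action; classical attenuation accounts instead amount to assuming $g_{\text{expected}} < g_{\text{unexpected}}$.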
Article
In this My Word, Press et al. tackle the 'theory crisis' in cognitive science. Using examples of good and not-so-good theoretical practice, they distinguish theories from effects, predictions, hypotheses, typologies, and frameworks in a self-help checklist of seven questions to guide theory construction, evaluation, and testing.
Article
Visual scene context is well-known to facilitate the recognition of scene-congruent objects. Interestingly, however, according to predictive-processing accounts of brain function, scene congruency may lead to reduced (rather than enhanced) processing of congruent objects, compared with incongruent ones, because congruent objects elicit reduced prediction-error responses. We tested this counterintuitive hypothesis in two online behavioral experiments with human participants (N = 300). We found clear evidence for impaired perception of congruent objects, both in a change-detection task measuring response times and in a bias-free object-discrimination task measuring accuracy. Congruency costs were related to independent subjective congruency ratings. Finally, we show that the reported effects cannot be explained by low-level stimulus confounds, response biases, or top-down strategy. These results provide convincing evidence for perceptual congruency costs during scene viewing, in line with predictive-processing theory.
Article
The appearance of a salient stimulus rapidly and automatically inhibits saccadic eye movements. Curiously, this "oculomotor freezing" response is triggered only by stimuli that the observer reports seeing. It remains unknown, however, if oculomotor freezing is linked to the observer's sensory experience, or their decision that a stimulus was present. To dissociate between these possibilities, we manipulated decision criterion via monetary payoffs and stimulus probability in a detection task. These manipulations greatly shifted observers' decision criteria but did not affect the degree to which microsaccades were inhibited by stimulus presence. Moreover, the link between oculomotor freezing and explicit reports of stimulus presence was stronger when the criterion was conservative rather than liberal. We conclude that the sensory threshold for oculomotor freezing is independent of decision bias. Provided that conscious experience is also unaffected by such bias, oculomotor freezing is an implicit indicator of sensory awareness.
Article
Full-text available
There is increasing evidence that imagination relies on similar neural mechanisms as externally triggered perception. This overlap presents a challenge for perceptual reality monitoring: deciding what is real and what is imagined. Here, we explore how perceptual reality monitoring might be implemented in the brain. We first describe sensory and cognitive factors that could dissociate imagery and perception and conclude that no single factor unambiguously signals whether an experience is internally or externally generated. We suggest that reality monitoring is implemented by higher-level cortical circuits that evaluate first-order sensory and cognitive factors to determine the source of sensory signals. According to this interpretation, perceptual reality monitoring shares core computations with metacognition. This multi-level architecture might explain several types of source confusion as well as dissociations between simply knowing whether something is real and actually experiencing it as real. We discuss avenues for future research to further our understanding of perceptual reality monitoring, an endeavour that has important implications for our understanding of clinical symptoms as well as general cognitive function.
Article
During active movement, there is normally a tight relation between motor command and sensory representation about the resulting spatial displacement of the body. Indeed, some theories of space perception emphasize the topographic layout of sensory receptor surfaces, while others emphasize implicit spatial information provided by the intensity of motor command signals. To identify which has the primary role in spatial perception, we developed experiments based on everyday self-touch, in which the right hand strokes the left arm. We used a robot-mediated form of self-touch to decouple the spatial extent of active or passive right hand movements from their tactile consequences. Participants made active movements of the right hand between unpredictable, haptically defined start and stop positions, or the hand was passively moved between the same positions. These movements caused a stroking tactile motion by a brush along the left forearm, with minimal delay, but with an unpredictable spatial gain factor. Participants judged the spatial extent of either the right hand’s movement, or of the resulting tactile stimulation to their left forearm. Across five experiments, we found that movement extent strongly interfered with tactile extent perception, and vice versa. Crucially, interference in both directions was stronger during active than passive movements. Thus, voluntary motor commands produced stronger integration of multiple sensorimotor signals underpinning the perception of personal space. Our results prompt a reappraisal of classical theories that reduce space perception to motor command information.
Article
During speaking or listening, endogenous motor or exogenous visual processes have been shown to fine-tune the auditory neural processing of the incoming acoustic speech signal. To compare the impact of these cross-modal effects on auditory evoked responses, two sets of speech production and perception tasks were contrasted using EEG. In the first set, participants produced vowels in a self-paced manner while listening to their auditory feedback. Following the production task, they passively listened to the entire recorded speech sequence. In the second set, the procedure was identical except that participants also watched their own articulatory movements online. While both endogenous motor and exogenous visual processes fine-tuned auditory neural processing, these cross-modal effects were found to act differentially on the amplitude and latency of auditory evoked responses. A reduced amplitude was observed on auditory evoked responses during speaking compared to listening, irrespective of the auditory or audiovisual feedback. Adding orofacial visual movements to the acoustic speech signal also shortened the latency of auditory evoked responses, irrespective of the perception or production task. Taken together, these results suggest distinct motor and visual influences on auditory neural processing, possibly through different neural gating and predictive mechanisms.
Preprint
Full-text available
Perceivers can use past experiences to make sense of ambiguous sensory signals. However, this may be inappropriate when the world changes and past experiences no longer predict what the future holds. Optimal learning models propose that observers decide whether to stick with or update their predictions by tracking the uncertainty or ‘precision’ of their expectations. But contrasting theories of prediction have argued that we are prone to misestimate uncertainty, leading to stubborn predictions that are difficult to dislodge. To compare these possibilities, we had participants learn novel perceptual predictions before using fMRI to record visual brain activity when predictive contingencies were disrupted, meaning that previously ‘expected’ events became objectively improbable. Multivariate pattern analyses revealed that representations in primary visual cortex were ‘sharpened’ in line with prior expectations, despite marked changes in the statistical structure of the environment, rendering these expectations no longer valid. These results suggest that our perceptual systems do indeed form stubborn predictions even from short periods of learning, and more generally suggest that top-down expectations have the potential to help or hinder perceptual inference in bounded minds like ours.
Preprint
Full-text available
Actions modulate sensory processing by attenuating responses to self- compared to externally-generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet, suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing of self-generated stimuli raises the possibility of effects also on memory for these stimuli; however, evidence remains mixed as to the direction of these effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant, but unpredictable sounds, using a combination of self-generation and memory recognition tasks concurrently with EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to report which one had been presented before. We measured memory bias and memory performance using sequences in which both or only one of the test sounds, respectively, had been presented at encoding. Results showed worse memory performance, but no differences in memory bias, as well as attenuated responses and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable sounds.
Article
Full-text available
Perception and behavior can be guided by predictions, which are often based on learned statistical regularities. Neural responses to expected stimuli are frequently found to be attenuated after statistical learning. However, whether this sensory attenuation following statistical learning occurs automatically or depends on attention remains unknown. In the present fMRI study, we exposed human volunteers to sequentially presented object stimuli, in which the first object predicted the identity of the second object. We observed a reliable attenuation of neural activity for expected compared to unexpected stimuli in the ventral visual stream. Crucially, this sensory attenuation was only apparent when stimuli were attended, and vanished when attention was directed away from the predictable objects. These results put important constraints on neurocomputational theories that cast perception as a process of probabilistic integration of prior knowledge and sensory information.
Article
Full-text available
Prior knowledge shapes what we perceive. A new brain stimulation study suggests that this perceptual shaping is achieved by changes in sensory brain regions before the input arrives, with common mechanisms operating across different sensory areas.
Article
Full-text available
Somatosensory input generated by one's actions (i.e., self-initiated body movements) is generally attenuated. Conversely, externally caused somatosensory input is enhanced, for example, during active touch and the haptic exploration of objects. Here, we used functional magnetic resonance imaging (fMRI) to ask how the brain accomplishes this delicate weighting of self-generated versus externally caused somatosensory components. Finger movements were either self-generated by our participants or induced by functional electrical stimulation (FES) of the same muscles. During half of the trials, electrotactile impulses were administered when the (actively or passively) moving finger reached a predefined flexion threshold. fMRI revealed an interaction effect in the contralateral posterior insular cortex (pIC), which responded more strongly to touch during self-generated than during FES-induced movements. A network analysis via dynamic causal modeling revealed that connectivity from the secondary somatosensory cortex via the pIC to the supplementary motor area was generally attenuated during self-generated relative to FES-induced movements, yet specifically enhanced by touch received during self-generated, but not FES-induced, movements. Together, these results suggest a crucial role of the parietal operculum and the posterior insula in differentiating self-generated from externally caused somatosensory information received from one's moving limb.
Article
Full-text available
In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.
Article
Full-text available
Perception likely results from the interplay between sensory information and top-down signals. In this electroencephalography (EEG) study, we utilised the hierarchical frequency tagging (HFT) method to examine how such integration is modulated by expectation and attention. Using intermodulation (IM) components as a measure of nonlinear signal integration, we show in three different experiments that both expectation and attention enhance integration between top-down and bottom-up signals. Based on a multispectral phase coherence (MSPC) measure, we present two direct physiological measures to demonstrate the distinct yet related mechanisms of expectation and attention, which would not have been possible using other amplitude-based measures. Our results link expectation to the modulation of descending signals and to the integration of top-down and bottom-up information at lower levels of the visual hierarchy. Meanwhile, the results link attention to the modulation of ascending signals and to the integration of information at higher levels of the visual hierarchy. These results are consistent with the predictive coding account of perception.
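As background on why intermodulation components index nonlinear integration: if two inputs are frequency-tagged at $f_1$ and $f_2$, a purely linear system can only respond at those same frequencies and their individual harmonics, whereas any nonlinear combination of the two signals additionally produces responses at

$$f_{\mathrm{IM}} = n f_1 + m f_2, \qquad n, m \in \mathbb{Z} \setminus \{0\},$$

so power or phase coherence at these intermodulation frequencies is evidence that the two tagged signals were combined nonlinearly somewhere along the processing hierarchy.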
Article
Full-text available
Predictive coding models propose that predictions (stimulus likelihood) reduce sensory signals as early as primary visual cortex (V1), and that attention (stimulus relevance) can modulate these effects. Indeed, both prediction and attention have been shown to modulate V1 activity, albeit with fMRI, which has low temporal resolution. This leaves it unclear whether these effects reflect a modulation of the first feedforward sweep of visual information processing and/or later, feedback-related activity. In two experiments, we used electroencephalography and orthogonally manipulated spatial predictions and attention to address this issue. Although clear top-down biases were found, as reflected in pre-stimulus alpha-band activity, we found no evidence for top-down effects on the earliest visual cortical processing stage (<80 ms post-stimulus), as indexed by the amplitude of the C1 event-related potential component and multivariate pattern analyses. These findings indicate that initial visual afferent activity may be impenetrable to top-down influences by spatial prediction and attention.
Article
Full-text available
The way humans perceive the outcomes of their actions is strongly colored by their expectations. These expectations can develop over different timescales and are not always complementary. The present work examines how long-term (structural) expectations, developed over a lifetime, and short-term (contextual) expectations jointly affect perception. In two studies, including a pre-registered replication, participants initiated the movement of an ambiguously rotating sphere by operating a rotary switch. In the absence of any learning, participants predominantly perceived the sphere to rotate in the same direction as their rotary action. This bias toward structural expectations was abolished (but not reversed) when participants were exposed to incompatible action-effect contingencies (e.g., clockwise actions causing counterclockwise percepts) during a preceding learning phase. Exposure to compatible action-effect contingencies, however, did not add to the existing structural bias. Together, these findings reveal that perception of action-outcomes results from the combined influence of both long-term and immediate expectations.
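One simple way to formalise the joint influence of the two timescales (our illustration, not the authors' model) is to let each expectation source contribute additively to the log odds of the two percepts:

$$\log \frac{P(\text{CW})}{P(\text{CCW})} = \beta_{\text{structural}} + \beta_{\text{contextual}},$$

where $\beta_{\text{structural}}$ captures the lifetime bias toward action-compatible rotation and $\beta_{\text{contextual}}$ the recently learned contingency. On this reading, the finding that incompatible learning abolished but did not reverse the structural bias corresponds to $\beta_{\text{contextual}} \approx -\beta_{\text{structural}}$, while compatible learning added little, consistent with a ceiling on the combined bias.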
Article
Full-text available
Hallucinations, perceptions in the absence of objectively identifiable stimuli, illustrate the constructive nature of perception. Here, we highlight the role of prior beliefs as a critical elicitor of hallucinations. Recent empirical work from independent laboratories shows strong, overly precise priors can engender hallucinations in healthy subjects and that individuals who hallucinate in the real world are more susceptible to these laboratory phenomena. We consider these observations in light of work demonstrating apparently weak, or imprecise, priors in psychosis. Appreciating the interactions within and between hierarchies of inference can reconcile this apparent disconnect. Data from neural networks, human behavior, and neuroimaging support this contention. This work underlines the continuum from normal to aberrant perception, encouraging a more empathic approach to clinical hallucinations.
Article
Full-text available
Events that conform to our expectations, that is, are congruent with our world knowledge or schemas, are better remembered than unrelated events. Yet events that conflict with schemas can also be remembered better. We examined this apparent paradox in 4 experiments, in which schemas were established by training ordinal relationships between randomly paired objects, whereas event memory was tested for the number of objects on each trial. Better memory was found for both congruent and incongruent trials, relative to unrelated trials, producing memory performance that was a “U-shaped” function of congruency. The congruency advantage but not incongruency advantage was mediated by postencoding processes, whereas the incongruency advantage, but not congruency advantage, emerged even if the information probed by the memory test was irrelevant to the schema. Schemas therefore augment event memory in multiple ways, depending on the match between novel and existing information.
Article
Full-text available
Primates interpret conspecific behaviour as goal-directed and expect others to achieve goals by the most efficient means possible. While this teleological stance is prominent in evolutionary and developmental theories of social cognition, little is known about the underlying mechanisms. In predictive models of social cognition, a perceptual prediction of an ideal efficient trajectory would be generated from prior knowledge against which the observed action is evaluated, distorting the perception of unexpected inefficient actions. To test this, participants observed an actor reach for an object with a straight or arched trajectory on a touch screen. The actions were made efficient or inefficient by adding or removing an obstructing object. The action disappeared mid-trajectory and participants touched the last seen screen position of the hand. Judgements of inefficient actions were biased towards the efficient prediction (straight trajectories upward to avoid the obstruction, arched trajectories downward towards the target). These corrections increased when the obstruction's presence/absence was explicitly acknowledged, and when the efficient trajectory was explicitly predicted. Additional supplementary experiments demonstrated that these biases occur during ongoing visual perception and/or immediately after motion offset. The teleological stance is at least partly perceptual, providing an ideal reference trajectory against which actual behaviour is evaluated.
Article
Full-text available
Early stages of visual processing are carried out by neural circuits activated by simple and specific features, such as the orientation of an edge. A fundamental question in human vision is how the brain organises such intrinsically local information into meaningful properties of objects. Classic models of visual processing emphasise a one-directional flow of information from early feature-detectors to higher-level information-processing. In contrast to this view, and in line with predictive-coding models of perception, here we provide evidence from human vision that high-level object representations dynamically interact with the earliest stages of cortical visual processing. In two experiments, we used ambiguous stimuli that, depending on the observer's prior object-knowledge, can be perceived as either coherent objects or as a collection of meaningless patches. By manipulating object knowledge we were able to determine its impact on processing of low-level features while keeping sensory stimulation identical. Both studies demonstrate that perception of local features is facilitated in a manner consistent with an observer's high-level object representation (i.e., with no effect on object-inconsistent features). Our results cannot be ascribed to attentional influences. Rather, they suggest that high-level object representations interact with and sharpen early feature-detectors, optimising their performance for the current perceptual context.
Article
Full-text available
Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, correct perception depends on adjusting or rejecting prior expectations. Failure to adjust or reject the prior leads to perceptual illusions, especially if there is partial overlap (hence partial mismatch) between expectations and input. With speech, "slips of the ear" occur when expectations lead to misperception. For instance, an entomologist might be more susceptible to hearing "The ants are my friends" for "The answer, my friend" (in the Bob Dylan song "Blowin' in the Wind"). Here, we contrast two mechanisms by which prior expectations may lead to misperception of degraded speech. Firstly, clear representations of the common sounds in the prior and input (i.e., expected sounds) may lead to incorrect confirmation of the prior. Secondly, insufficient representations of sounds that deviate between prior and input (i.e., prediction errors) could lead to deception. We used cross-modal predictions from written words that partially match degraded speech to compare neural responses when male and female human listeners were deceived into accepting the prior or correctly rejected it. Combined behavioural and multivariate representational similarity analysis of functional magnetic resonance imaging data shows that veridical perception of degraded speech is signalled by representations of prediction error in the left superior temporal sulcus. Instead of using top-down processes to support perception of expected sensory input, our findings suggest that the strength of neural prediction error representations distinguishes correct perception from misperception.
Article
Full-text available
Perception during action is optimized by sensory predictions about the likely consequences of our movements. Influential theories in social cognition propose that we use the same predictions during interaction, supporting perception of similar reactions in our social partners. However, while our own action outcomes typically occur at short, predictable delays after movement execution, the reactions of others occur at longer, variable delays in the order of seconds. To examine whether we use sensorimotor predictions to support perception of imitative reactions, we therefore investigated the temporal profile of sensory prediction during action in two psychophysical experiments. We took advantage of an influence of prediction on apparent intensity, whereby predicted visual stimuli appear brighter (more intense). Participants performed actions (e.g., index finger lift) and rated the brightness of observed outcomes congruent (index finger lift) or incongruent (middle finger lift) with their movements. Observed action outcomes could occur immediately after execution, or at longer delays likely reflective of those in natural social interaction (1800 or 3600 ms). Consistent with the previous literature, Experiment 1 revealed that congruent action outcomes were rated as brighter than incongruent outcomes. Importantly, this facilitatory perceptual effect was found irrespective of whether outcomes occurred immediately or at delay. Experiment 2 replicated this finding and demonstrated that it was not the result of response bias. These findings therefore suggest that visual predictions generated during action are sufficiently general across time to support our perception of imitative reactions in others, likely generating a range of benefits during social interaction.
Article
Full-text available
Humans are more likely to report perceiving an expected than an unexpected stimulus. Influential theories have proposed that this bias arises from expectation altering the sensory signal. However, the effects of expectation can also be due to decisional criterion shifts independent of any sensory changes. In order to adjudicate between these two possibilities, we compared the behavioral effects of pre-stimulus cues (pre cues; can influence both sensory signal and decision processes) and post-stimulus cues (post cues; can only influence decision processes). Subjects judged the average orientation of a series of Gabor patches. Surprisingly, we found that post cues had a larger effect on response bias (criterion c) than pre cues. Further, pre and post cues did not differ in their effects on stimulus sensitivity (d’) or the pattern of temporal or feature processing. Indeed, reverse correlation analyses showed no difference in the temporal or feature-based use of information between pre and post cues. Overall, post cues produced all of the behavioral modulations observed as a result of pre cues. These findings show that pre and post cues affect the decision through the same mechanisms and suggest that stimulus expectation alters the decision criterion but not the sensory signal itself.
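To make the signal detection measures above concrete, here is a minimal Python sketch computing sensitivity (d') and criterion (c) from hypothetical hit and false-alarm counts; the numbers are illustrative only, not data from the study.

# Standard signal detection theory measures: sensitivity (d') and response
# criterion (c), computed from hit and false-alarm rates. Counts are made up.
from scipy.stats import norm

hits, misses = 70, 30            # responses on signal-present trials
false_alarms, corr_rej = 20, 80  # responses on signal-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + corr_rej)

z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
d_prime = z_hit - z_fa             # sensitivity: separation of signal and noise
criterion = -0.5 * (z_hit + z_fa)  # c: negative values indicate a liberal bias

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")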
Article
Full-text available
Some people hear voices that others do not, but only some of those people seek treatment. Using a Pavlovian learning task, we induced conditioned hallucinations in four groups of people who differed orthogonally in their voice-hearing and treatment-seeking statuses. People who hear voices were significantly more susceptible to the effect. Using functional neuroimaging and computational modeling of perception, we identified processes that differentiated voice-hearers from non–voice-hearers and treatment-seekers from non–treatment-seekers and characterized a brain circuit that mediated the conditioned hallucinations. These data demonstrate the profound and sometimes pathological impact of top-down cognitive processes on perception and may represent an objective means to discern people with a need for treatment from those without.
Article
Full-text available
Insistence on sameness and intolerance of change are among the diagnostic criteria for autism spectrum disorder (ASD), but little research has addressed how people with ASD represent and respond to environmental change. Here, behavioral and pupillometric measurements indicated that adults with ASD are less surprised than neurotypical adults when their expectations are violated, and decreased surprise is predictive of greater symptom severity. A hierarchical Bayesian model of learning suggested that in ASD, a tendency to overlearn about volatility in the face of environmental change drives a corresponding reduction in learning about probabilistically aberrant events, thus putatively rendering these events less surprising. Participant-specific modeled estimates of surprise about environmental conditions were linked to pupil size in the ASD group, thus suggesting heightened noradrenergic responsivity in line with compromised neural gain. This study offers insights into the behavioral, algorithmic and physiological mechanisms underlying responses to environmental volatility in ASD.
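The link between modeled surprise and pupil size invites a concrete definition. One common formalization, assumed here purely for illustration (the study's model-derived estimates may differ), is Shannon surprise: the negative log probability the learner's model assigned to the observed outcome.

# Trial-by-trial Shannon surprise: high when the outcome was improbable under
# the learner's model. The probability trace below is hypothetical.
import numpy as np

# Hypothetical model-derived probabilities of each observed outcome.
p_outcome = np.array([0.9, 0.8, 0.1, 0.85, 0.05])

surprise = -np.log(p_outcome)  # rare events (e.g., p = .05) yield high surprise
print(surprise.round(2))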
Article
Full-text available
Stimulus predictability can lead to substantial modulations of brain activity, such as shifts in sustained magnetic field amplitude, measured with magnetoencephalography (MEG). Here, we provide a mechanistic explanation of these effects using MEG data acquired from healthy human volunteers (N = 13, 7 female). In a source-level analysis of induced responses, we established the effects of orthogonal predictability manipulations of rapid tone-pip sequences (namely, sequence regularity and alphabet size) along the auditory processing stream. In auditory cortex, regular sequences with smaller alphabets induced greater gamma activity. Furthermore, sequence regularity shifted induced activity in frontal regions toward higher frequencies. To model these effects in terms of the underlying neurophysiology, we used dynamic causal modeling for cross-spectral density and estimated slow fluctuations in neural (postsynaptic) gain. Using the model-based parameters, we accurately explain the sensor-level sustained field amplitude, demonstrating that slow changes in synaptic efficacy, combined with sustained sensory input, can result in profound and sustained effects on neural responses to predictable sensory streams.
Article
Full-text available
Autism spectrum disorder currently lacks an explanation that bridges cognitive, computational, and neural domains. In the past 5 years, progress has been sought in this area by drawing on Bayesian probability theory to describe both social and nonsocial aspects of autism in terms of systematic differences in the processing of sensory information in the brain. The present article begins by synthesizing the existing literature in this regard, including an introduction to the topic for unfamiliar readers. The key proposal is that autism is characterized by a greater weighting of sensory information in updating probabilistic representations of the environment. Here, we unpack further how the hierarchical setting of Bayesian inference in the brain (i.e., predictive processing) adds significant depth to this approach. In particular, autism may relate to finer mechanisms involved in the context-sensitive adjustment of sensory weightings, such as in how neural representations of environmental volatility inform perception. Crucially, in light of recent sensorimotor treatments of predictive processing (i.e., active inference), hypotheses regarding atypical sensory weighting in autism have direct implications for the regulation of action and behavior. Given that core features of autism relate to how the individual interacts with and samples the world around them (e.g., reduced social responding, repetitive behaviors, motor impairments, and atypical visual sampling), the extension of Bayesian theories of autism to action will be critical for yielding insights into this condition.
Article
Full-text available
Models of action control suggest that predicted action outcomes are “cancelled” from perception, allowing agents to devote resources to more behaviorally relevant unexpected events. These models are supported by a range of findings demonstrating that expected consequences of action are perceived less intensely than unexpected events. A key assumption of these models is that the prediction is subtracted from the sensory input. This early subtraction allows preferential processing of unexpected events from the outset of movement, thereby promoting rapid initiation of corrective actions and updating of predictive models. We tested this assumption in three psychophysical experiments. Participants rated the intensity (brightness) of observed finger movements congruent or incongruent with their own movements at different timepoints after action. Across Experiments 1 and 2, evidence of cancellation—whereby congruent events appeared less bright than incongruent events—was only found 200 ms after action, whereas an opposite effect of brighter congruent percepts was observed in earlier time ranges (50 ms after action). Experiment 3 demonstrated that this interaction was not a result of response bias. These findings suggest that “cancellation” may not be the rapid process assumed in the literature, and that perception of predicted action outcomes is initially “facilitated.” We speculate that the representation of our environment may in fact be optimized via two opposing processes: The primary process facilitates perception of events consistent with predictions and thereby helps us to perceive what is more likely, but a later process aids the perception of any detected events generating prediction errors to assist model updating.
Article
Full-text available
Autism is a developmental condition characterized by difficulties of social interaction and communication, as well as restricted interests and repetitive behaviors. Although several important conceptions have shed light on specific facets, there is still no consensus about a universal yet specific theory in terms of its underlying mechanisms. While some theories have exclusively focused on sensory aspects, others have emphasized social difficulties. However, sensory and social processes in autism might be interconnected to a higher degree than has traditionally been thought. We propose that a mismatch in sensory abilities across individuals can lead to difficulties at the social (i.e., interpersonal) level and vice versa. In this article, we therefore selectively review evidence indicating an interrelationship between perceptual and social difficulties in autism. Additionally, we link this body of research with studies that investigate the mechanisms of action control in social contexts. By doing so, we highlight that autistic traits are also crucially related to differences in integration, anticipation, and automatic responding to social cues, rather than a mere inability to register and learn from social cues. Importantly, such differences may only manifest themselves in sufficiently complex situations, such as real-life social interactions, where such processes are inextricably linked.
Article
Full-text available
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains.
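The two coding schemes contrasted above reduce, at their core, to different arithmetic. The toy Python sketch below (with made-up numbers, far simpler than the published simulations) shows sharpening as prior-weighted enhancement and prediction error as subtraction of the prior.

# Deliberately simplified contrast of the two coding schemes described above.
# Real simulations of speech perception are far more elaborate; the numbers
# here are hypothetical and only illustrate the core arithmetic.
import numpy as np

speech_input = np.array([0.7, 0.2, 0.1])  # evidence for three candidate words
prior = np.array([0.6, 0.3, 0.1])         # expectation from a written cue

# Sharpened Signals: expected features are enhanced (input weighted by the
# prior, then renormalized), so agreement between prior and input strengthens
# the representation.
sharpened = speech_input * prior
sharpened /= sharpened.sum()

# Prediction Error: expected features are subtracted out, so the
# representation carries only what the prior failed to predict.
prediction_error = speech_input - prior

print("sharpened:", sharpened.round(2))
print("prediction error:", prediction_error.round(2))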
Article
Full-text available
Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses.
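The idea of uncertainty-weighted belief updating can be illustrated with a toy learner that raises its learning rate when recent prediction errors suggest a volatile environment. This is a deliberately simplified stand-in, not the hierarchical Bayesian model used in the study; all rates and constants are hypothetical.

# Toy uncertainty-weighted learner: track the probability of an event and
# learn faster when large prediction errors suggest the environment changed.
import numpy as np

rng = np.random.default_rng(0)
# Event probability switches from 0.8 to 0.2 halfway through (a "volatile" world).
outcomes = np.r_[rng.random(100) < 0.8, rng.random(100) < 0.2].astype(float)

belief, volatility = 0.5, 0.1
for y in outcomes:
    pe = y - belief                              # prediction error
    volatility += 0.05 * (abs(pe) - volatility)  # large errors imply change
    lr = 0.1 + 0.8 * volatility                  # volatile world -> learn faster
    belief += lr * pe

print(f"final belief: {belief:.2f} (true final rate: 0.2)")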
Article
Full-text available
Prior expectations shape neural responses in sensory regions of the brain, consistent with a Bayesian predictive coding account of perception. Yet, it remains unclear whether such a mechanism is already functional during early stages of development. To address this issue, we study how the infant brain responds to prediction violations using a cross-modal cueing paradigm. We record electroencephalographic responses to expected and unexpected visual events preceded by auditory cues in 12-month-old infants. We find an increased response for unexpected events. However, this effect of prediction error is only observed during late processing stages associated with conscious access mechanisms. In contrast, early perceptual components reveal an amplification of neural responses for predicted relative to surprising events, suggesting that selective attention enhances perceptual processing for expected events. Taken together, these results demonstrate that cross-modal statistical regularities are used to generate predictions that differentially influence early and late neural responses in infants.
Article
Full-text available
What determines what we see? In contrast to the traditional “modular” understanding of perception, according to which visual processing is encapsulated from higher-level cognition, a tidal wave of recent research alleges that states such as beliefs, desires, emotions, motivations, intentions, and linguistic representations exert direct top-down influences on what we see. There is a growing consensus that such effects are ubiquitous, and that the distinction between perception and cognition may itself be unsustainable. We argue otherwise: none of these hundreds of studies — either individually or collectively — provide compelling evidence for true top-down effects on perception, or “cognitive penetrability”. In particular, and despite their variety, we suggest that these studies all fall prey to only a handful of pitfalls. And whereas abstract theoretical challenges have failed to resolve this debate in the past, our presentation of these pitfalls is empirically anchored: in each case, we show not only how certain studies could be susceptible to the pitfall (in principle), but how several alleged top-down effects actually are explained by the pitfall (in practice). Moreover, these pitfalls are perfectly general, with each applying to dozens of other top-down effects. We conclude by extracting the lessons provided by these pitfalls into a checklist that future work could use to convincingly demonstrate top-down effects on visual perception. The discovery of substantive top-down effects of cognition on perception would revolutionize our understanding of how the mind is organized; but without addressing these pitfalls, no such empirical report will license such exciting conclusions.
Article
Full-text available
How do expectations influence transitions between unconscious and conscious perceptual processing? According to the influential predictive processing framework, perceptual content is determined by predictive models of the causes of sensory signals. On one interpretation, conscious contents arise when predictive models are verified by matching sensory input (minimizing prediction error). On another, conscious contents arise when surprising events falsify current perceptual predictions. Finally, the cognitive impenetrability account posits that conscious perception is not affected by such higher level factors. To discriminate these positions, we combined predictive cueing with continuous flash suppression (CFS) in which the relative contrast of a target image gradually increases over time. In four experiments we established that expected stimuli enter consciousness faster than neutral or unexpected stimuli. These effects are difficult to account for in terms of response priming, pre-existing stimulus associations, or the attentional mechanisms that cause asynchronous temporal order judgments (of simultaneously presented stimuli). Our results further suggest that top-down expectations play a larger role when bottom-up input is ambiguous, in line with predictive processing accounts of perception. Taken together, our findings support the hypothesis that conscious access depends on verification of perceptual predictions.
Article
Full-text available
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others - during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions - both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then - in principle - they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa.
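The central claim, that mutual prediction error minimization aligns two agents' generative models, can be caricatured with a toy exchange in which each agent nudges its internal parameter toward the signal the other produces. The one-parameter "model" and update rates below are hypothetical simplifications of the birdsong simulations.

# Toy illustration of mutual prediction error minimization: each agent updates
# its model toward the other's signal, so the two models converge.
a_model, b_model = 0.9, 0.1   # each agent's internal model parameter

for t in range(200):
    b_model += 0.1 * (a_model - b_model)  # B hears A's song, reduces its error
    a_model += 0.1 * (b_model - a_model)  # A hears B's reply, reduces its error

print(round(a_model, 3), round(b_model, 3))  # near-identical: a shared "hymn sheet"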
Article
Full-text available
To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color rather than wavelength information, and this depends in part on our past experience with colored objects (Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural responses to veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as in frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision.
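The cross-decoding approach described above can be sketched as follows: train a classifier on responses to veridical red versus green, then apply it to ambiguous-color trials. All data below are simulated and the pattern structure is invented for illustration; this is not the authors' analysis.

# Sketch of cross-decoding: a classifier trained on multivoxel responses to
# veridical red vs. green is applied to ambiguous-color trials.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 50
red_axis = rng.normal(size=n_voxels)  # hypothetical red-vs-green pattern axis

veridical_X = np.vstack([rng.normal(size=(40, n_voxels)) + red_axis,   # red trials
                         rng.normal(size=(40, n_voxels)) - red_axis])  # green trials
veridical_y = np.r_[np.ones(40), np.zeros(40)]  # 1 = red, 0 = green

clf = LogisticRegression().fit(veridical_X, veridical_y)

# Ambiguous color on a typical-red object: if object knowledge shifts the
# neural response toward red, the classifier should label the trial "red" (1).
ambiguous_on_red_object = rng.normal(size=(1, n_voxels)) + 0.5 * red_axis
print(clf.predict(ambiguous_on_red_object))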
Article
Full-text available
Sensory signals are highly structured in both space and time. These structural regularities in visual information allow expectations to form about future stimulation, thereby facilitating decisions about visual features and objects. Here, we discuss how expectation modulates neural signals and behaviour in humans and other primates. We consider how expectations bias visual activity before a stimulus occurs, and how neural signals elicited by expected and unexpected stimuli differ. We discuss how expectations may influence decision signals at the computational level. Finally, we consider the relationship between visual expectation and related concepts, such as attention and adaptation.
Article
Full-text available
The capacity of long-term memory is thought to be virtually unlimited. However, our memory bank may need to be pruned regularly to ensure that the information most important for behavior can be stored and accessed efficiently. Using functional magnetic resonance imaging of the human brain, we report the discovery of a context-based mechanism for determining which memories to prune. Specifically, when a previously experienced context is reencountered, the brain automatically generates predictions about which items should appear in that context. If an item fails to appear when strongly expected, its representation in memory is weakened, and it is more likely to be forgotten. We find robust support for this mechanism using multivariate pattern classification and pattern similarity analyses. The results are explained by a model in which context-based predictions activate item representations just enough for them to be weakened during a misprediction. These findings reveal an ongoing and adaptive process for pruning unreliable memories.
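The proposed mechanism, in which predictions activate an item "just enough" for it to be weakened, can be caricatured as a nonmonotonic plasticity rule: moderate activation weakens a memory trace, whereas strong activation strengthens it. The piecewise function and constants below are hypothetical, chosen only to make the shape of the rule concrete.

# Toy nonmonotonic plasticity rule for the pruning mechanism described above:
# context-based prediction partially activates an item's memory trace, and
# moderate activation weakens it while strong activation (the item actually
# appearing) strengthens it. Thresholds and slopes are made up.
def plasticity(activation: float) -> float:
    """Change in memory strength as a nonmonotonic function of activation."""
    if activation < 0.2:
        return 0.0                # too weak to engage plasticity
    elif activation < 0.6:
        return -0.3 * activation  # moderate activation -> weakening
    else:
        return 0.4 * activation   # strong activation -> strengthening

# An item predicted by context but absent is only moderately activated...
print(plasticity(0.4))  # negative: the memory is weakened (pruned)
# ...whereas an item that actually appears is strongly activated.
print(plasticity(0.9))  # positive: the memory is strengthened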