Perceptual prediction: Rapidly making sense of a noisy world
Clare Press* and Daniel Yon
Department of Psychological Sciences, Birkbeck, University of London
Prior knowledge shapes what we perceive. A new brain stimulation study in Current Biology
suggests that this shaping is achieved by changes in sensory brain regions before the input
arrives, with common mechanisms operating across different sensory areas.
Main text
Our brains have to make sense of the vast quantities of information bombarding our senses.
The information reaching our eyes, ears and other receptors changes rapidly across space
and time, and the signals are imperfect [1]. For example, when we listen to a friend on the
metro the sound of their voice is masked by the noise of the train. Our brains must rapidly
generate a best guess about what we heard to guide our behaviour effectively: we will be a
poor conversation partner if it takes us several seconds to work out what they said. A new
study [2] shows how the brain can generate this best guess by sending predictive signals to
brain regions involved in processing sensory input.
Work from the cognitive sciences over the last few decades has demonstrated that we likely
use our expectations to help shape what we perceive. There are many statistical regularities
within our environment, and we can combine these regularities with the sensory input to infer the likely
state of the world. If our conversational partner is a fellow academic, it is more likely that they
said ‘I love computers’ than ‘I love reviewers’, and biasing our perceptual experiences in line
with these likelihoods will tend to increase their accuracy [1,3]. Biased perceptual decisions
have been shown across many disciplines and with many methods. For example,
we are faster to identify everyday household objects (e.g., loaves of bread) when they are
preceded by observation of contexts in which they are typically seen (kitchen counters; [4]),
and we are more likely to report the presence of stimuli expected on the basis of arbitrary,
probabilistically-paired cues [5]. Such biasing is also demonstrated through perceptual errors
that occur when typical regularities are disrupted. For example, we report concave faces to
have the more typical convex structure when shading cues are ambiguous [6], and we judge
sensations to last for a similar length of time to concurrently performed actions, likely because
the two typically have comparable durations [7].
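The combination of environmental regularities with noisy input described above can be sketched as a toy Bayesian computation. The feature axis, Gaussian forms and all numbers below are illustrative assumptions, not quantities from any study discussed here:

```python
import numpy as np

# Hypothetical one-dimensional feature axis (e.g., a sound's identity or duration).
x = np.linspace(-5, 5, 1001)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Prior expectation: learned regularities say the feature is usually near +1.
prior = gaussian(x, mu=1.0, sigma=1.0)

# Noisy sensory evidence: the current input weakly suggests a value near -0.5.
likelihood = gaussian(x, mu=-0.5, sigma=2.0)

# Posterior: pointwise product, renormalised (Bayes' rule on a grid).
posterior = prior * likelihood
posterior /= posterior.sum()

# The best guess sits between the evidence (-0.5) and the prior (+1),
# pulled towards the expectation because the evidence is unreliable.
estimate = x[np.argmax(posterior)]
print(round(estimate, 2))  # → 0.7
```

Because the evidence here is less reliable (wider) than the prior, the percept is weighted towards the expectation; sharpening the evidence would pull the estimate back towards what was actually presented.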
While cognitive scientists have reported for some time that perception is biased by our
expectations, the precise mechanisms realising these influences have remained elusive.
Indeed, some have even queried whether top-down knowledge really alters what we perceive
at all or rather just the decisions we make about our experiences [8]. For example, producing
slow actions may make us hallucinate that simultaneous events last for longer, because we
typically experience slow actions to be accompanied by long sensations. Alternatively, this
knowledge could just bias us to report that events have lasted for longer because we believe
they should have done, while our perceptual experiences remain unchanged. We can
disentangle these possibilities partly by using rigorous behavioural experiments that
manipulate these processes [8] and by constructing computational models of the decision
process [5]. Neuroimaging methods have also been used to understand the underlying
mechanisms, e.g., examining pattern classification accuracy of sensory signals when
sensations were expected or not [9–12]. These findings have prompted suggestions that
expectations indeed influence perceptual experiences themselves via ‘pre-activation’ of
sensory units tuned to expected events before the input is received [11]. This pre-activation is
thought to lead to competitive interactions that inhibit units tuned to the unexpected, turning
up the volume (relative sensory gain) on expected inputs and thereby biasing perception
towards what we expect (‘sharpening’ theories; see Fig. 1).
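A minimal sketch of this ‘sharpening’ idea, assuming a hypothetical bank of feature-tuned units whose gain is scaled up near the expected feature and down elsewhere; the tuning widths, gain values and population read-out are illustrative assumptions, not the mechanism reported in any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bank of feature-tuned units (preferred orientations in degrees).
preferred = np.linspace(0, 180, 37)

def tuning(stimulus):
    # Bell-shaped response of each unit to the presented stimulus.
    return np.exp(-0.5 * ((preferred - stimulus) / 20.0) ** 2)

def population_estimate(stimulus, expected=None, gain=1.5):
    responses = tuning(stimulus) + rng.normal(0.0, 0.02, preferred.size)
    if expected is not None:
        # 'Sharpening': turn up the volume (gain) on units tuned near the
        # expected feature and suppress units tuned away from it.
        weights = np.where(np.abs(preferred - expected) < 30, gain, 1.0 / gain)
        responses = responses * weights
    responses = np.clip(responses, 0.0, None)
    # Read the perceived feature out as the response-weighted mean preference.
    return float(np.sum(preferred * responses) / np.sum(responses))

# A 100-degree input read out with no expectation vs. an expectation of 80 degrees.
baseline = population_estimate(100.0)
biased = population_estimate(100.0, expected=80.0)
# The gain modulation drags the read-out from ~100 towards 80, biasing the
# 'percept' towards what was expected.
```

On this kind of scheme, the pre-stimulus gain change is enough to bias the subsequent read-out, without the expectation needing to alter the input itself.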
However, it remains debated whether expectations really alter perception, partly because
these changes in sensory brain areas may not in fact play a causal role in changing perception
[13]. Gandolfo and Downing [2] addressed this question in a clever study using transcranial
magnetic stimulation (TMS). In their task, participants made rapid judgements about observed
bodies or visual scenes (e.g. is this body slim?). Stimuli were preceded by written cues to
establish expectations about which particular stimulus would be shown (e.g. ‘m’ predicted a
male body). In line with previous work, the participants were faster and more accurate when
their expectations were valid. More importantly, the authors applied TMS at the time of the
cues, disrupting activity in either the extrastriate body area (EBA) or the occipital place area
(OPA). They revealed a compelling double dissociation whereby disrupting activity in body-
selective EBA abolished behavioural expectation effects for body stimuli but not scenes, and
disrupting scene-selective OPA activity had the converse effect. Such a pattern provides
convincing evidence that effects of expectations on perceptual decisions are indeed mediated
by changes in specific sensory processing. It also provides evidence to support the idea that
these modulations are realised through pre-activating units tuned to expected inputs before
the sensory information even hits the receptors.
One particularly interesting feature of this study is the specific regions where the effects were found.
EBA and OPA are considered higher level sensory processing regions encoding the complex
configurations of information that characterise bodies and scenes, respectively. Predictive
sharpening effects have sometimes been observed predominantly in primary visual cortex
[9,10], prompting suggestions that predictive influences are only realised through interactions
at the earliest points in the cortical hierarchy. However, the predictive influence identified by
Gandolfo and Downing in these late visual brain areas suggests this is unlikely to be the case,
raising the alternative possibility that previous effects have been confined to early processing
regions because these areas are most sensitive to the stimuli used in these studies, i.e.,
gratings and edges [9; see also 14].
These findings suggest that regardless of the particular sensory region, expectations may
modulate processing in a similar way. Although EBA and OPA encode different kinds of visual
information, influences of prediction appeared to be mediated through similar pre-activation
processes. In other words, the same domain-general pre-activation mechanism may sharpen
representations similarly in different domain-specific sensory regions. This finding concurs
with recent results from our lab revealing that sensory predictions operate via common
mechanisms across domains. In this instance, we demonstrated that the precise nature of the
predictive (not predicted) information did not alter the nature of effects. Specifically, visual
predictions made on the basis of action sharpened visual brain activity just like when the
predictions are furnished by arbitrary sensory cues [12]. This finding in fact conflicted with
previous reports that action expectations have a distinct influence on perception, i.e.,
dampening rather than sharpening processing of predicted inputs ([15]; such dampening had
been thought to explain why we cannot tickle ourselves [e.g., 16]). If predictive mechanisms work
similarly across domains, regardless of the particular nature of the predictive or predicted
information, then it seems logical that Gandolfo and Downing’s findings would have
implications for any domain where observers can rely on probabilistic knowledge. For
example, as well as implications for action perception and normative sensory cognition, similar
principles may explain findings from language [17] and social cognition [18] with effects of
expectations realised through pre-activation of relevant representations in different parts of
the cortical hierarchy.
However, the idea that sensory-specific pre-activation drives our enhanced ability to identify
expected events leaves open questions about the mechanisms that generate predictive
dampening effects when these are found. Why do predictions sometimes attenuate rather than
sharpen perception, e.g., why can’t we tickle ourselves? These findings of attenuated rather
than enhanced processing of the expected are prominent in action control literatures but in
fact are also found elsewhere [17,19]. Similar temporally-tuned methods to those employed
by Gandolfo and Downing may prove useful in disentangling the precise nature of mechanisms
operating across the sensory hierarchy [see 20].
In conclusion, Gandolfo and Downing’s new work contributes to a lively debate about the role
of prior knowledge in shaping what we perceive. Their findings provide compelling evidence
that expectations alter perception through influences realised in specific sensory areas before
the sensory events are presented, and contribute to an emerging view that a common set of
domain-general principles may account for the effects of prediction across a host of domains.
Figure 1. Combining noisy sensory input with our expectations is a powerful way to generate largely accurate representations of our environment efficiently. Gandolfo and Downing suggest that this is achieved by pre-activating sensory representations of expected stimuli, e.g., those of particular bodies within extrastriate body area, such that perception is biased towards what is expected and therefore more likely to be there.

References
1. Bar, M. (2004). Visual objects in context. Nat. Rev. Neurosci. 5, 617–629.
2. Gandolfo, M., and Downing, P. (2019). Causal evidence for expression of perceptual predictions in category-selective extrastriate regions. Curr. Biol.
3. de Lange, F.P., Heilbron, M., and Kok, P. (2018). How do expectations shape perception? Trends Cogn. Sci. 22, 764–779.
4. Palmer, S.E. (1975). The effects of contextual scenes on the identification of objects. Mem. Cogn. 3, 519–526.
5. Wyart, V., Nobre, A.C., and Summerfield, C. (2012). Dissociable prior influences of signal probability and relevance on visual contrast sensitivity. Proc. Natl. Acad. Sci. U.S.A. 109, 3593
6. Gregory, R.L. (1970). The Intelligent Eye. New York: McGraw-Hill.
7. Yon, D., Edey, R., Ivry, R.B., and Press, C. (2017). Time on your hands: Perceived duration of sensory events is biased toward concurrent actions. J. Exp. Psychol. Gen. 146, 182–193.
8. Firestone, C., and Scholl, B.J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behav. Brain Sci. 39, e229.
9. Kok, P., Jehee, J.F.M., and de Lange, F.P. (2012). Less is more: expectation sharpens representations in the primary visual cortex. Neuron 75, 265–270.
10. Smith, F.W., and Muckli, L. (2010). Nonstimulated early visual areas carry information about surrounding context. Proc. Natl. Acad. Sci. U.S.A. 107, 20099–20103.
11. Kok, P., Mostert, P., and de Lange, F.P. (2017). Prior expectations induce prestimulus sensory templates. Proc. Natl. Acad. Sci. U.S.A. 114, 10473–10478.
12. Yon, D., Gilbert, S.J., de Lange, F.P., and Press, C. (2018). Action sharpens sensory representations of expected outcomes. Nat. Commun. 9, 4288.
13. Bang, J.W., and Rahnev, D. (2017). Stimulus expectation alters decision criterion but not sensory signal in perceptual decision making. Sci. Rep. 7, 17072.
14. Alilović, J., Timmermans, B., Reteig, L.C., van Gaal, S., and Slagter, H.A. (2019). No evidence that predictions and attention modulate the first feedforward sweep of cortical information processing. Cereb. Cortex 29, 2261–2278.
15. Brown, H., Adams, R.A., Parees, I., Edwards, M., and Friston, K. (2013). Active inference, sensory attenuation and illusions. Cogn. Process. 14, 411–427.
16. Blakemore, S.J., Wolpert, D.M., and Frith, C.D. (1998). Central cancellation of self-produced tickle sensation. Nat. Neurosci. 1, 635–640.
17. Blank, H., and Davis, M.H. (2016). Prediction errors but not sharpened signals simulate multivoxel fMRI patterns during speech perception. PLoS Biol. 14, e1002577.
18. Hudson, M., McDonough, K.L., Edwards, R., and Bach, P. (2018). Perceptual teleology: expectations of action efficiency bias social perception. Proc. Royal Soc. Lond. [Biol.] 285,
19. Richter, D., Ekman, M., and de Lange, F.P. (2018). Suppressed sensory responses to predictable object stimuli throughout the ventral visual stream. J. Neurosci. 38, 7452–7461.
20. Yon, D., and Press, C. (2017). Predicted action consequences are perceptually facilitated before cancellation. J. Exp. Psychol. Hum. Percept. Perform. 43, 1073–1083.
... As an example, a recent study has provided evidence that the use of inter-individual differences in perception and memory as predictors substantially improved the ability to account for differences in generalization gradients 41 . In the field of perception, it is well-acknowledged that humans do not have direct access to the physical world, and that the brain may probabilistically represent sensory inputs with large individual differences in how one perceives their surroundings [42][43][44][45][46][47][48] . However, previous research on generalization has largely ignored the potential impact of inter-individual differences, often by either equating perception to the physical dimension (making perception the same for everyone) or employing psychophysical mapping methods such as multidimensional scaling 11,12 to derive a mental representation of encountered stimuli. ...
... Nevertheless, it is important to acknowledge that this simplified methodology might not fully capture the intricate complexities inherent in the underlying processes. For example, with the recent advancement of human perception research, the possibility that the perceptual system operates according to Bayes' rule has been widely discussed [42][43][44][45][46][47][48] . Under this framework, perception contains probabilistic representations that incorporate a likelihood function and prior perceptual knowledge. ...
Full-text available
Human generalization research aims to understand the processes underlying the transfer of prior experiences to new contexts. Generalization research predominantly relies on descriptive statistics, assumes a single generalization mechanism, interprets generalization from mono-source data, and disregards individual differences. Unfortunately, such an approach fails to disentangle various mechanisms underlying generalization behaviour and can readily result in biased conclusions regarding generalization tendencies. Therefore, we combined a computational model with multi-source data to mechanistically investigate human generalization behaviour. By simultaneously modelling learning, perceptual and generalization data at the individual level, we revealed meaningful variations in how different mechanisms contribute to generalization behaviour. The current research suggests the need for revising the theoretical and analytic foundations in the field to shift the attention away from forecasting group-level generalization behaviour and toward understanding how such phenomena emerge at the individual level. This raises the question for future research whether a mechanism-specific differential diagnosis may be beneficial for generalization-related psychiatric disorders.
... One way this predictive biasing is achieved in perceptual systems is through 'sharpening': reshaping patterns of sensory gain so that our sensory systems are particularly sensitised to the signals that we expect to occur Press & Yon, 2019). In neural circuits, this kind of sharpening creates higher fidelity representations of sensory events when they conform with prior expectations (Kok et al., 2012;Yon et al., 2018Yon et al., , 2023, and in perceptual terms, exaggerating sensory gain in this way renders agents better able to detect the perceptual events they expect to occur. ...
Full-text available
Action allows us to shape the world around us. But to act effectively we need to accurately sense what we can and cannot control. Classic theories across cognitive science suppose that this ‘sense of agency’ is constructed from the sensorimotor signals we experience as we interact with our surroundings. But these sensorimotor signals are inherently ambiguous, and can provide us with a distorted picture of what we can and cannot influence. Here we investigate one way that agents like us might overcome the inherent ambiguity of these signals: by combining noisy sensorimotor evidence with prior beliefs about control acquired through explicit communication with others. Using novel tools to measure and model control decisions, we find that explicit beliefs about the controllability of the environment alter both the sensitivity and bias of agentic choices; meaning that we are both better at detecting and more biased to feel control when we are told to expect it. These seemingly paradoxical effects on agentic choices can be captured by a computational model where expecting to be in control exaggerates the sensitivity or ‘gain’ of the mechanisms we use to detect our influence over our surroundings – making us increasingly sensitised to both true and illusory signs of agency. In combination, these results reveal a cognitive and computational mechanism that allows public communication about what we can and cannot influence to reshape our private sense of control.
... Some of us have recently discussed how predictions often need to exhibit quite distinct behavioural shaping of perception to serve the organism 4,41 . To overcome noise in sensory processing and generate broadly accurate experiences rapidly, we may bias perception towards what we expect 42,43 . However, larger error signals (that cannot have resulted from noise) may require high perceptual resources, to enable accurate perception and resultant model updating. ...
Full-text available
‘Predictive processing’ frameworks of cortical functioning propose that neural populations in different cortical layers serve distinct roles in representing the world. There are distinct testable theories within this framework that we examined with a 7T fMRI study, where we contrasted responses in primary visual cortex (V1) to expected (75% likely) and unexpected (25%) Gabor orientations. Multivariate decoding analyses revealed an interaction between expectation and layer, such that expected events could be decoded with comparable accuracy across layers, while unexpected events could only be decoded in superficial laminae. These results are in line with predictive processing accounts where expected virtual input is injected into deep layers, while superficial layers process the ‘error’ with respect to expected signals. While this account of cortical processing has been popular for decades, such distinctions have not previously been demonstrated in the human sensory brain. We discuss how both prediction and error processes may operate together to shape our unitary perceptual experiences.
... The authors argued that the TAE might be an auditory example of perceptual sharpening (Kok et al., 2012;Teufel et al., 2018), as prior knowledge of a perceptual target sharpens the contrast between low-level features. Sharpening in anticipatory tasks, such as primed visual object representation, or tracking unfolding speech in noise, is thought to be related to pre-activation of the sensory representations of the expected stimulus (Gandolfo & Downing, 2019;Press & Yon, 2019). Previous studies of perceptual sharpening have involved static visual images such as scenes or gradients. ...
Full-text available
The present dissertation project investigated auditory time perception with regard to the (perceived) tempo properties of music and the spontaneous motor tempo (SMT). The theoretical basis of these investigations are the models of the internal clock, which are based on an intrinsic timekeeper that sends out impulses in a linear or dynamic-oscillatory system. The SMT is considered to be an indicator of the pulse rate of this timekeeper. The aim of this dissertation was to answer the following overarching questions: (1) To what extent does the tempo of music influence the assessment of durations and does this influence depend on the metrical level at which the musical beat is perceived (i.e., perceived tempo)? (2) Which factors influence the SMT independent of external stimuli such as music? (3) What role does musical experience play in auditory time perception of music and are there systematic differences in the pulse rate of the internal clock? In order to answer these questions, four empirical studies were carried out, whereby Studies 1 and 2 investigated the first question and Studies 3 and 4 were devoted to the second question. The studies carried out were able to deepen the understanding of the time-distortion effect of the musical tempo and show that this effect also depends on the individual sensorimotor synchronization. Furthermore, it could be shown that the internal clock, measured with the spontaneous motor tempo, does not only depend on the time of day but also on the respective chronotype, which suggests an influence of the biological clock on the perception of time.
... Instead, we found significantly increased activation for more predictable events ( negative entropy ) with a bilateral activity pattern comprising the IFG, posterior insula and middle temporal gyrus. In general, such prediction enhancement effects are proposed to reflect increased activation of predicted elements ( Press et al., 2020 ;Press and Yon, 2019 ). Interoceptive processing in the insula is proposed to involve posterior-to-anterior regions, with posterior regions supporting primary objective physical features of interoceptive information, whereas the AIC serves the subjective integration of interoceptive and motivational signals ( Craig, 2009 ;Gu et al., 2013;Seth, 2013bSeth, , 2012. ...
Full-text available
Emotional experiences are proposed to arise from contextualized perception of bodily responses, also referred to as interoceptive inferences. The recognition of emotions benefits from adequate access to one's own interoceptive information. However, direct empirical evidence of interoceptive inferences and their neural basis is still lacking. In the present fMRI study healthy volunteers performed a probabilistic emotion classification task with videotaped dynamically unfolding facial expressions. In a first step, we aimed to determine functional areas involved in the processing of dynamically unfolding emotional expressions. We then tested whether individuals with higher interoceptive accuracy (IAcc), as assessed by the Heartbeat detection task (HDT), or higher interoceptive sensitivity (IS), as assessed by the Multidimensional Assessment of Interoceptive Awareness, Version 2 (MAIA-2), benefit more from the contextually given likelihood of emotional valence and whether brain regions reflecting individual IAcc and/or IS play a role in this. Individuals with higher IS benefitted more from the biased probability of emotional valence. Brain responses to more predictable emotions elicited a bilateral activity pattern comprising the inferior frontal gyrus and the posterior insula. Importantly, individual IAcc scores positively covaried with brain responses to more surprising and less predictable emotional expressions in the insula and caudate nucleus. We show for the first time that IAcc score is associated with enhanced processing of interoceptive prediction errors, particularly in the anterior insula. A higher IS score seems more likely to be associated with a stronger weighting of attention to interoceptive changes processed by the posterior insula and ventral prefrontal cortex.
Placebo interventions generate mismatches between expected pain and sensory signals from which pain states are inferred. Since we lack direct access to bodily states, we can only infer whether nociceptive activity indicates tissue damage or results from noise in sensory channels. Predictive processing models propose to make optimal inferences using prior knowledge given noisy sensory data. However, these models do not provide a satisfactory explanation of how pain relief expectations are translated into physiological manifestations of placebo responses. Furthermore, they do not account for individual differences in the ability to endogenously regulate nociceptive activity in predicting placebo analgesia. The brain not only passively integrates prior pain expectations with nociceptive activity to infer pain states (perceptual inference), but also initiates various types of 'actions' to ensure sensory data are consistent with prior pain expectations (active inference). We argue that depending on whether the brain interprets conflicting sensory data (prediction errors) as 'signal to learn from' or 'noise to be attenuated', the brain initiates opposing types of action to facilitate learning from sensory data or, conversely, enhance the biasing influence of prior pain expectations on pain perception. Furthermore, we discuss the role of stress, anxiety, and unpredictability of pain in influencing the weighting of prior pain expectations and sensory data and how they relate to the individual ability to regulate nociceptive activity (endogenous pain modulation). Finally, we provide suggestions for future studies to test the implications of the active inference model of placebo analgesia.
Perceivers can use past experiences to make sense of ambiguous sensory signals. However, this may be inappropriate when the world changes and past experiences no longer predict what the future holds. Optimal learning models propose that observers decide whether to stick with or update their predictions by tracking the uncertainty or "precision" of their expectations. However, contrasting theories of prediction have argued that we are prone to misestimate uncertainty-leading to stubborn predictions that are difficult to dislodge. To compare these possibilities, we had participants learn novel perceptual predictions before using fMRI to record visual brain activity when predictive contingencies were disrupted-meaning that previously "expected" events become objectively improbable. Multivariate pattern analyses revealed that expected events continued to be decoded with greater fidelity from primary visual cortex, despite marked changes in the statistical structure of the environment, which rendered these expectations no longer valid. These results suggest that our perceptual systems do indeed form stubborn predictions even from short periods of learning-and more generally suggest that top-down expectations have the potential to help or hinder perceptual inference in bounded minds like ours.
Forming expectations about what we are likely to perceive often facilitates perception. We forge such expectations on the basis of strong statistical relationships between events in our environment. However, due to our ever-changing world these relationships often subsequently degrade or even disappear, yet it is unclear how these altered statistics influence perceptual expectations. We examined this question across two studies by training participants in perfect relationships between actions (index or little finger abductions) and outcomes (clockwise or counter-clockwise gratings), before degrading the predictive relationship in a test phase – such that ‘expected’ events followed actions on 50–75% of trials and ‘unexpected’ events ensued on the remainder. Perceptual decisions about outcomes were faster and less error prone on expected than unexpected trials when predictive relationships remained high and reduced as the relationship diminished. Drift diffusion modelling indicated that these effects are explained by shifting the starting point in the evidence accumulation process as well as biasing the rate of evidence accumulation – with the former reflecting biases from statistics within the training session and the latter those of the test session. These findings demonstrate how perceptual expectations are updated as statistical certainty diminishes, with interacting influences speculatively dependent upon learning consolidation. We discuss how underlying mechanisms optimise the interaction between learning and perception – allowing our experiences to reflect a nuanced, ever-changing environment.
Full-text available
Perceivers can use past experiences to make sense of ambiguous sensory signals. However, this may be inappropriate when the world changes and past experiences no longer predict what the future holds. Optimal learning models propose that observers decide whether to stick with or update their predictions by tracking the uncertainty or ‘precision’ of their expectations. But contrasting theories of prediction have argued that we are prone to misestimate uncertainty – leading to stubborn predictions that are difficult to dislodge. To compare these possibilities, we had participants learn novel perceptual predictions before using fMRI to record visual brain activity when predictive contingencies were disrupted - meaning that previously ‘expected’ events become objectively improbable. Multivariate pattern analyses revealed that representations in primary visual cortex were ‘sharpened’ in line with prior expectations, despite marked changes in the statistical structure of the environment, rendering these expectations no longer valid. These results suggest that our perceptual systems do indeed form stubborn predictions even from short periods of learning – and more generally suggest that top-down expectations have the potential to help or hinder perceptual inference in bounded minds like ours.
Full-text available
According to predictive processing theories, emotional inference involves simultaneously minimising discrepancies between predictions and sensory evidence relating to both one's own and others' states, achievable by altering either one's own state (empathy) or perception of another's state (egocentric bias) so they are more congruent. We tested a key hypothesis of these accounts, that predictions are weighted in inference according to their precision (inverse variance). If correct, increasingly precise self-related predictions should be associated with increasingly biased perception of another's emotional expression. We manipulated predictions about upcoming own-pain (low or high magnitude) using cues that afforded either precise (a narrow range of possible magnitudes) or imprecise (a wide range) predictions. Participants judged pained facial expressions presented concurrently with own-pain to be more intense when own-pain was greater, and precise cues increased this biasing effect. Implications of conceptualising interpersonal influence in terms of predictive processing are discussed.
Full-text available
Predictive coding models propose that predictions (stimulus likelihood) reduce sensory signals as early as primary visual cortex (V1), and that attention (stimulus relevance) can modulate these effects. Indeed, both prediction and attention have been shown to modulate V1 activity, albeit with fMRI, which has low temporal resolution. This leaves it unclear whether these effects reflect a modulation of the first feedforward sweep of visual information processing and/or later, feedback-related activity. In two experiments, we used electroencephalography and orthogonally manipulated spatial predictions and attention to address this issue. Although clear top-down biases were found, as reflected in pre-stimulus alpha-band activity, we found no evidence for top-down effects on the earliest visual cortical processing stage (<80 ms post-stimulus), as indexed by the amplitude of the C1 event-related potential component and multivariate pattern analyses. These findings indicate that initial visual afferent activity may be impenetrable to top-down influences by spatial prediction and attention.
When we produce actions we predict their likely consequences. Dominant models of action control suggest that these predictions are used to ‘cancel’ perceptual processing of expected outcomes. However, normative Bayesian models of sensory cognition developed outside of action propose that rather than being cancelled, expected sensory signals are represented with greater fidelity (sharpened). Here, we distinguished between these models in an fMRI experiment where participants executed hand actions (index vs little finger movement) while observing movements of an avatar hand. Consistent with the sharpening account, visual representations of hand movements (index vs little finger) could be read out more accurately when they were congruent with action and these decoding enhancements were accompanied by suppressed activity in voxels tuned away from, not towards, the expected stimulus. Therefore, inconsistent with dominant action control models, these data show that sensorimotor prediction sharpens expected sensory representations, facilitating veridical perception of action outcomes.
Primates interpret conspecific behaviour as goal-directed and expect others to achieve goals by the most efficient means possible. While this teleological stance is prominent in evolutionary and developmental theories of social cognition, little is known about the underlying mechanisms. In predictive models of social cognition, a perceptual prediction of an ideal efficient trajectory would be generated from prior knowledge against which the observed action is evaluated, distorting the perception of unexpected inefficient actions. To test this, participants observed an actor reach for an object with a straight or arched trajectory on a touch screen. The actions were made efficient or inefficient by adding or removing an obstructing object. The action disappeared mid-trajectory and participants touched the last seen screen position of the hand. Judgements of inefficient actions were biased towards the efficient prediction (straight trajectories upward to avoid the obstruction, arched trajectories downward towards the target). These corrections increased when the obstruction's presence/absence was explicitly acknowledged, and when the efficient trajectory was explicitly predicted. Additional supplementary experiments demonstrated that these biases occur during ongoing visual perception and/or immediately after motion offset. The teleological stance is at least partly perceptual, providing an ideal reference trajectory against which actual behaviour is evaluated.
Humans are more likely to report perceiving an expected than an unexpected stimulus. Influential theories have proposed that this bias arises from expectation altering the sensory signal. However, the effects of expectation can also be due to decisional criterion shifts independent of any sensory changes. In order to adjudicate between these two possibilities, we compared the behavioral effects of pre-stimulus cues (pre cues; can influence both sensory signal and decision processes) and post-stimulus cues (post cues; can only influence decision processes). Subjects judged the average orientation of a series of Gabor patches. Surprisingly, we found that post cues had a larger effect on response bias (criterion c) than pre cues. Further, pre and post cues did not differ in their effects on stimulus sensitivity (d’) or the pattern of temporal or feature processing. Indeed, reverse correlation analyses showed no difference in the temporal or feature-based use of information between pre and post cues. Overall, post cues produced all of the behavioral modulations observed as a result of pre cues. These findings show that pre and post cues affect the decision through the same mechanisms and suggest that stimulus expectation alters the decision criterion but not the sensory signal itself.
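The sensitivity (d′) and criterion (c) measures that this abstract contrasts come from standard signal detection theory. The sketch below is a minimal illustration (not the authors' analysis code), assuming only a hit rate and a false-alarm rate per condition; it shows how a cue can shift the criterion while leaving sensitivity roughly unchanged, the pattern the study reports.

```python
# Illustrative signal-detection computation (not the authors' analysis code).
# d' = z(H) - z(F) indexes stimulus sensitivity; c = -0.5 * (z(H) + z(F))
# indexes response bias, where z is the inverse standard-normal CDF.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Return (d', c) from one condition's hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical rates: a cue that biases responses toward "expected" shifts
# c (more liberal, c < 0) while leaving d' nearly unchanged.
d_neutral, c_neutral = dprime_and_criterion(0.80, 0.20)  # c = 0
d_cued, c_cued = dprime_and_criterion(0.90, 0.35)        # c < 0, similar d'
```

In these terms, the abstract's claim is that both pre and post cues move c, with no reliable difference in d′, which is why the authors attribute expectation effects to the decision stage.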
Models of action control suggest that predicted action outcomes are “cancelled” from perception, allowing agents to devote resources to more behaviorally relevant unexpected events. These models are supported by a range of findings demonstrating that expected consequences of action are perceived less intensely than unexpected events. A key assumption of these models is that the prediction is subtracted from the sensory input. This early subtraction allows preferential processing of unexpected events from the outset of movement, thereby promoting rapid initiation of corrective actions and updating of predictive models. We tested this assumption in three psychophysical experiments. Participants rated the intensity (brightness) of observed finger movements congruent or incongruent with their own movements at different timepoints after action. Across Experiments 1 and 2, evidence of cancellation—whereby congruent events appeared less bright than incongruent events—was only found 200 ms after action, whereas an opposite effect of brighter congruent percepts was observed in earlier time ranges (50 ms after action). Experiment 3 demonstrated that this interaction was not a result of response bias. These findings suggest that “cancellation” may not be the rapid process assumed in the literature, and that perception of predicted action outcomes is initially “facilitated.” We speculate that the representation of our environment may in fact be optimized via two opposing processes: The primary process facilitates perception of events consistent with predictions and thereby helps us to perceive what is more likely, but a later process aids the perception of any detected events generating prediction errors to assist model updating.
Perceptual systems must rapidly generate accurate representations of the world from sensory inputs that are corrupted by internal and external noise. We can typically obtain more veridical representations by integrating information from multiple channels, but this integration can lead to biases when inputs are, in fact, not from the same source. Although a considerable amount is known about how different sources of information are combined to influence what we perceive, it is not known whether temporal features are combined. It is vital to address this question given the divergent predictions made by different models of cue combination and time perception concerning the plausibility of cross-modal temporal integration, and the implications that such integration would have for research programs in action control and social cognition. Here we present four experiments investigating the influence of movement duration on the perceived duration of an auditory tone. Participants either explicitly (Experiments 1–2) or implicitly (Experiments 3–4) produced hand movements of shorter or longer durations, while judging the duration of a concurrently presented tone (500–950 ms in duration). Across all experiments, judgments of tone duration were attracted toward the duration of executed movements (i.e., tones were perceived to be longer when executing a movement of longer duration). Our results demonstrate that temporal information associated with movement biases perceived auditory duration, placing important constraints on theories modeling cue integration for state estimation, as well as models of time perception, action control and social cognition.
Expectations about a visual event shape the way it is perceived [1, 2, 3, 4]. For example, expectations induced by valid cues signaling aspects of a visual target can improve judgments about that target, relative to invalid cues [5, 6]. Such expectation effects are thought to arise via pre-activation of a template in neural populations that represent the target [7, 8] in early sensory areas [9] or in higher-level regions. For example, category cues (“face” or “house”) modulate pre-target fMRI activity in associated category-selective brain regions [10, 11]. Further, a relationship is sometimes found between the strength of template activity and success in perceptual tasks on the target [12, 13, 14]. However, causal evidence linking pre-target activity with expectation effects is lacking. Here we provide such evidence, using fMRI-guided online transcranial magnetic stimulation (TMS). In two experiments, human volunteers made binary judgments about images of either a body or a scene. Before each target image, a verbal cue validly or invalidly indicated a property of the image, thus creating perceptual expectations about it. To disrupt these expectations, we stimulated category-selective visual brain regions (extrastriate body area, EBA; occipital place area, OPA) during the presentation of the cue. Stimulation ended before the target images appeared. We found a double dissociation: TMS to EBA during the cue period removed validity effects only in the body task, whereas stimulating OPA removed validity effects only in the scene task. Perceptual expectations are expressed by the selective activation of relevant populations within brain regions that encode the target.
Prediction plays a crucial role in perception, as prominently suggested by predictive coding theories. However, the exact form and mechanism of predictive modulations of sensory processing remain unclear, with some studies reporting a downregulation of the sensory response for predictable input whereas others observed an enhanced response. In a similar vein, downregulation of the sensory response for predictable input has been linked to either sharpening or dampening of the sensory representation, which are opposite in nature. In the present study, we set out to investigate the neural consequences of perceptual expectation of object stimuli throughout the visual hierarchy, using fMRI in human volunteers. Participants of both sexes were exposed to pairs of sequentially presented object images in a statistical learning paradigm, in which the first object predicted the identity of the second object. Image transitions were not task relevant; thus, all learning of statistical regularities was incidental. We found strong suppression of neural responses to expected compared with unexpected stimuli throughout the ventral visual stream, including primary visual cortex, lateral occipital complex, and anterior ventral visual areas. Expectation suppression in lateral occipital complex scaled positively with image preference and voxel selectivity, lending support to the dampening account of expectation suppression in object perception.
Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.
Significance: The way that we perceive the world is partly shaped by what we expect to see at any given moment. However, it is unclear how this process is neurally implemented. Recently, it has been proposed that the brain generates stimulus templates in sensory cortex to preempt expected inputs. Here, we provide evidence that a representation of the expected stimulus is present in the neural signal shortly before it is presented, showing that expectations can indeed induce the preactivation of stimulus templates. Importantly, these expectation signals resembled the neural signal evoked by an actually presented stimulus, suggesting that expectations induce similar patterns of activations in visual cortex as sensory stimuli.