Action biases perceptual decisions towards expected outcomes
Daniel Yon1,2*, Vanessa Zainzinger1, Floris P. de Lange3, Martin Eimer1 & Clare Press1
1. Department of Psychological Sciences, Birkbeck, University of London, UK
2. Department of Psychology, Goldsmiths, University of London, UK
3. Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
*Corresponding author: / @danieljamesyon
Accepted as an Article at JEP:General on 17th April 2020
We predict how our actions will influence the world around us. Prevailing models in the
action control literature propose that we use these predictions to suppress or ‘cancel’
perception of expected action outcomes, to highlight more informative surprising events.
However, contrasting normative Bayesian models in sensory cognition suggest that we are
more, not less, likely to perceive what we expect given that what we expect is more likely
to occur. Here we adjudicated between these models by investigating how expectations
influence perceptual decisions about action outcomes in a signal detection paradigm. Across
three experiments, participants performed one of two manual actions that were sometimes
accompanied by brief presentation of expected or unexpected visual outcomes. Contrary to
dominant cancellation models but consistent with Bayesian accounts, we found that observers
were biased to report the presence of expected action outcomes. There were no effects of
expectation on sensitivity. Computational modelling revealed that the action-induced bias
reflected a sensory bias in how evidence was accumulated rather than a baseline shift in
decision circuits. Expectation effects remained in Experiments 2 and 3 when orthogonal cues
indicated which finger was more likely to be probed (i.e., task-relevant). These biases
towards perceiving expected action outcomes are suggestive of a mechanism that would
enable generation of largely veridical representations of our actions and their consequences in
an inherently uncertain sensory world.
1. Introduction
Effectively acting on the world around us requires predicting the consequences of our actions
(James, 1890). We select actions based on their predicted outcomes and use these predictions
to generate rapid corrective movements when we experience deviant sensory input (Hommel,
Müsseler, Aschersleben & Prinz, 2001; Wolpert, Ghahramani & Jordan, 1995). Influential
‘Cancellation’ models in the action control literature propose that we also use these
predictions to suppress perception of expected sensory inputs, across sensory modalities
(Bays & Wolpert, 2007; Blakemore et al., 1998; Fiehler, Brenner & Spering, 2019; Kilteni &
Ehrsson, 2017; Kilteni, Houborg & Ehrsson, 2019; see also Müsseler & Hommel, 1997; Fig.
1a). Such a mechanism would allow us to ignore predictable sensations and therefore remain
maximally sensitive to more behaviourally-relevant unexpected events. Such cancellation
models provide an appealing explanation for why it is difficult to tickle oneself (Weiskrantz,
Elliott & Darlington, 1971). The idea has also drawn wide support from studies showing that
sensory events predictably resulting from action are perceived as less intense than similar
events presented in the absence of action (Bays, Wolpert & Flanagan, 2005; Sato, 2008).
Indeed, interest in cancellation mechanisms has been galvanised by studies suggesting an
intimate link between such effects and the feelings of control that accompany our movements
(the ‘sense of agency’), with dysfunctions of cancellation associated with pathologies of
agency in a variety of psychiatric conditions (Frith, Blakemore & Wolpert, 2000).
However, the core principle guiding Cancellation models, that perception of predicted
inputs is suppressed, contrasts with prominent Bayesian models in the wider sensory
cognition literature (Fig. 1b). These models suggest that we are more, not less, likely to
perceive what we expect (de Lange, Heilbron & Kok, 2018; Press & Yon, 2019). They
emphasise how in an inherently ambiguous sensory world it is adaptive for organisms to
combine sampled sensory evidence with prior knowledge about what is likely to occur.
Mechanistically this can be achieved by altering the weights on sensory channels, by
increasing the ‘gain’ of expected relative to unexpected signals (de Lange et al., 2018;
Summerfield & de Lange, 2014). Increasing the gain afforded to expected sensory signals in
this fashion (effectively ‘turning up the volume’ on events that conform to our prior
predictions) would bias perceptual processing and predispose observers to perceive events
that they expect to occur (e.g., Wyart, Nobre & Summerfield, 2012; Hudson, Nicholson, Ellis
& Bach, 2016; Hudson, Nicholson, Simpson, Ellis & Bach, 2016). For example, a range of
curious illusory phenomena such as the tendency of observers to perceive concave faces as
convex (Gregory, 1997) could arise via such mechanisms. Importantly, while these kinds of
biases may lead to occasional misperceptions, they may nonetheless reflect an adaptive and
efficient way of generating veridical percepts, since expected events are by definition
more likely to occur. While these models have been developed outside of action contexts, a
mechanism that biases perception in line with expectations could be just as adaptive during
action. For example, if we are trying to flick the light switch in a darkened room, we will
generate more veridical estimates of our ongoing actions if we are biased to perceive
expected events (e.g. the sight of a moving hand).
Cancellation and Bayesian accounts of how predictions shape perception have been difficult
to compare directly because experimental approaches differ between disciplines (Press, Kok
& Yon, 2020b). Studies supporting Bayesian models within the normative sensory cognition
literature have typically examined an organism's ability to detect a low-intensity stimulus (Stein &
Peelen, 2015; Wyart, Nobre & Summerfield, 2012). In contrast, action studies reporting
cancellation have typically asked participants to judge the intensity of action outcomes (Bays
et al., 2005; Blakemore et al., 1998; Kilteni, Houborg & Ehrsson, 2019; Sato, 2008; Weiss,
Herwig & Schütz-Bosbach, 2011; see also Yon & Press, 2017, 2018). To render the
paradigms more comparable the present series of perceptual experiments used a signal
detection paradigm within the domain of action (see also Cardoso-Leite et al., 2010; Schwarz
et al., 2018). They also presented visual action outcomes, given that both Cancellation and
Bayesian theories hypothesise comparable operation of mechanisms across sensory
modalities (Brown, Adams, Parees, Edwards & Friston, 2013; Wolpert, Doya & Kawato,
2003) and that visual events have been used more commonly in the normative sensory
cognition literature (see General Discussion).
Fig. 1: A schematic illustration of how predictive signals influence activation of sensory units under
Cancellation and Bayesian models, alongside their putative influences on perception. Cancellation models
developed in the action literature (a, left) hypothesise that when we move (e.g., depress our index finger) we
generate a predictive signal that suppresses activity in sensory units tuned to expected perceptual outcomes (e.g.
visual units tuned to the sight of a hand with a depressed index finger). Weakening activity in these units
reduces the signal-to-noise ratio of the sensory population leading to less intense percepts and biasing
observers away from perceiving these outcomes (b, left). In contrast, Bayesian models developed in the wider
sensory cognition literature (a, right) suggest that predictive signals alter the weights on sensory channels such
that the volume is ‘turned up’ (e.g., the gain is increased) on expected relative to unexpected signals. Such
weighting leads to a higher signal-to-noise ratio when expectations are valid, leading to more intense percepts
and biases towards perceiving expected outcomes (b, right).
Participants produced manual actions (abducting their index or middle finger) and detected
visual action outcomes, which could be congruent or incongruent with their own movement.
This congruency manipulation exploits the fact that congruent action outcomes will be more
expected than incongruent ones, based either on inherited evolutionary expectations or our
extensive experience of controlling our actions (Hommel et al., 2001). Under Cancellation
models, suppressing activity in units tuned to expected stimuli should make it harder for
predictable action outcomes to reach detection threshold (see Fig. 1), making observers either
less sensitive to these events or biased to report that they did not occur. In contrast, under
normative Bayesian models selectively increasing the relative weight on expected sensory
channels should either bias observers to report that congruent events occurred or make them
more sensitive to congruent outcomes. However, given recent findings in the broader sensory
cognition literature, we anticipated that sensitising sensory channels tuned to expected events
would increase hits and also false alarms in signal detection terms, biasing observers rather
than increasing their sensitivity (Wyart et al., 2012). We subsequently used computational
modelling to pinpoint which aspect of perceptual decision making is influenced by action.
2. Experiment 1: How do actions influence detection of expected outcomes?
2.1. Methods
2.1.1. Participants
Twenty-four healthy participants (19 female, 5 male, mean age = 24.9 years, SD = 5.38) took
part in Experiment 1. All participants in all experiments reported normal or corrected-to-
normal vision and no history of psychiatric or neurological illness. A sample size of 24
participants per experiment was selected such that each would have at least 80% power to
detect a medium-sized effect of action-outcome congruency on perceptual decisions (Cohen's
dz = .6, N = 24, alpha = .05 provides 80.3% power in G*Power). All experiments were
approved by the local ethics committee at Birkbeck, University of London.
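The reported power figure can be reproduced from the noncentral t distribution, since the power of a two-tailed paired t-test depends only on dz, N, and alpha. A minimal sketch (the use of scipy here is our illustration of the standard calculation, not the authors' G*Power procedure):

```python
from math import sqrt

from scipy.stats import nct, t

# Power of a two-tailed paired (one-sample) t-test: dz = .6, N = 24, alpha = .05.
dz, n, alpha = 0.6, 24, 0.05
df = n - 1
ncp = dz * sqrt(n)                  # noncentrality parameter under the alternative
t_crit = t.ppf(1 - alpha / 2, df)   # two-tailed critical value under the null
power = (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)
print(round(power, 3))              # close to .80, in line with the reported 80.3%
```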
2.1.2. Procedure
The experiment took place in a dimly lit testing cubicle. Participants sat ~55 cm from the
monitor (153 x 32 cm, 60 Hz) used for stimulus presentation, with their hands placed above
two keypads. The participant’s right hand was rotated 90°, such that their knuckles were
aligned with the body midline. Each trial began with the presentation of a greyscale avatar
hand (Poser 10, Smith Micro Software). This image remained on screen until participants
executed either an index or middle finger tapping action, depressing the relevant key.
Movements were freely-selected, but the experiment ran until participants executed at least
100 of each type. On 50% of trials, participants’ actions triggered a synchronous movement
of the onscreen hand (signal present) that was displayed for 17 ms. On the remaining 50% of
trials the hand remained still (signal absent). On signal present trials, half of observed
movements were congruent with the participant’s own action (e.g. execute index tap, observe
index tap), and half were incongruent (e.g. execute index tap, observe middle tap). Regardless
of signal presence, the index and middle finger regions of the avatar hand were backwards-masked by
an oval texture comprised of avatar fingers (Cutting, Moore & Morrison, 1988) for 100 ms.
This image was in turn followed by a visual white noise mask presented for 300-600 ms.
Participants were subsequently asked about the movement of one of the two fingers (e.g. did
the INDEX finger move?). They registered their decision with a button press with their left
thumb. On half of trials participants were probed about the congruent finger of the avatar
hand (i.e. the index finger if they moved their index finger), and on the remaining half the
incongruent finger was probed. Participants in Experiment 1 also made a confidence
judgement about their decision (‘high confidence’ or ‘low confidence’) to collect pilot data
for an additional experiment.
Participants completed at least 200 trials. Trial types were randomised across the experiment
and breaks were taken every 40 trials. Before the main experiment participants completed a
short practice block which familiarised them with the main task (16 trials). Participants
subsequently completed a longer practice block without producing actions where they
completed a 1 up 1 down adaptive staircase, to adjust the difficulty of the perceptual
discrimination such that it was approximately matched for all participants. This staircase
targeted the amount of observed finger movement (minimum 1° rotation around the
metacarpophalangeal joint, maximum 16°) that was required for detection on ~50% of trials.
The staircase terminated after 12 reversals, and the average of the last six turning points was
taken as an estimate of the participant’s threshold. The main experiment began after
completing the adaptive staircase, and test stimuli (when present) were shown at this
threshold value on all subsequent trials.
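The thresholding procedure can be sketched as follows. This is a toy simulation, not the experimental code: the logistic observer, its assumed true threshold of 6°, and all parameter names are ours; only the 1 up 1 down rule, the 1°–16° range, the 12-reversal stopping rule, and the mean-of-last-six estimate come from the text.

```python
import random
from math import exp
from statistics import mean

random.seed(1)

def simulated_observer(rotation, true_threshold=6.0):
    """Hypothetical observer: detection probability rises logistically with rotation."""
    p_detect = 1.0 / (1.0 + exp(-(rotation - true_threshold)))
    return random.random() < p_detect

def one_up_one_down(start=8.0, step=1.0, lo=1.0, hi=16.0, max_reversals=12):
    rotation, last_direction = start, None
    reversals = []
    while len(reversals) < max_reversals:
        detected = simulated_observer(rotation)
        direction = -1 if detected else +1      # down after a detection, up after a miss
        if last_direction is not None and direction != last_direction:
            reversals.append(rotation)          # record the turning point
        last_direction = direction
        rotation = min(hi, max(lo, rotation + direction * step))
    # Threshold estimate (~50% detection): mean of the last six turning points
    return mean(reversals[-6:])

print(one_up_one_down())
```

A 1 up 1 down rule converges on the intensity detected on roughly half of trials, which is why the estimate tracks the simulated observer's true threshold.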
2.2. Results and Discussion
All tests in all experiments used an alpha level of .05, and for non-significant results we
calculated Bayes Factors to quantify evidence for the absence of an effect (i.e. the null
hypothesis; Dienes, 2014). Separate signal detection theoretic measures of sensitivity (d’) and
bias (c) were calculated using hit rates and false alarm rates on congruent and incongruent
trials. d’ reflects the extent to which participants are more likely to report the presence of a
stimulus when it is present than when it is absent (d’ = z[hit rate] − z[false alarm rate]), while
c reflects the extent to which participants are more likely to respond ‘present’ or ‘absent’
regardless of objective stimulus presence (c = −.5[z(hit rate) + z(false alarm rate)]). For some
participants in some conditions response counts were empty (e.g., no misses), which can
preclude calculation of d’ and c. In line with previous recommendations (Hautus, 1995), this
issue was overcome by adjusting counts of hits, misses, false alarms and correct rejections by
+.5 in all experiments, and this adjustment was applied to all participants to avoid introducing
biases into group-level analyses (Snodgrass & Corwin, 1988). (Participants in Experiment 1
were successful in following the instruction to execute roughly equal numbers of each action,
and therefore completed on average 218 trials (SEM = 2.5), comprising on average 49.8%
congruent trials (SEM = .0014).)
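These measures, including the +.5 count adjustment, can be computed as in the sketch below (function and variable names are ours; the example counts are hypothetical):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse cumulative normal (probit)

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' and c with the +.5 adjustment applied to all counts (Hautus, 1995)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical liberal observer: many hits but also many false alarms
d_prime, c = sdt_measures(hits=45, misses=5, false_alarms=20, correct_rejections=30)
print(round(d_prime, 2), round(c, 2))  # positive d', negative (liberal) c
```

The adjustment keeps both measures finite even with perfect hit or false-alarm counts, which is why it is applied before the z-transform.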
Sensitivity and bias on congruent and incongruent trials were compared using t-tests. These
analyses revealed that participants were more liberal in reporting the presence of congruent
action outcomes (lower c: t23 = 2.35, p=.028, dz = .480; see Fig. 2) and were also more
sensitive when judgements probed the congruent finger (higher d’: t23 = 2.29, p=.031, dz =
.467; see Table 1). These findings are predicted by the Bayesian account and inconsistent
with the Cancellation account.
Fig. 2: Action execution and detection task, with signal detection c results from Experiments 1-3. a. Participants
performed actions, which were paired with synchronous congruent or incongruent movements of an avatar hand
that they were required to detect. In Experiments 2 and 3, attentional arrow cues also informed participants
about which finger of the avatar hand was likely to be probed. b. We calculated the signal detection theoretic
measure c to index biases induced by action. These values were lower (i.e. responses were more liberal) on
congruent (saturated) relative to incongruent (desaturated) trials, irrespective of attentional focus. This effect
demonstrates that perceptual decisions were biased towards expected action outcomes. Error bars show 95%
within-participant confidence intervals of the mean difference between conditions.
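The error bars described in the Fig. 2 caption, 95% confidence intervals of the mean within-participant difference, can be computed as in this minimal sketch (the six-observer c values are invented for illustration):

```python
from math import sqrt
from statistics import mean, stdev

from scipy.stats import t

def paired_ci(cond_a, cond_b, conf=0.95):
    """Mean and CI half-width of the within-participant difference a - b."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    half_width = t.ppf((1 + conf) / 2, n - 1) * stdev(diffs) / sqrt(n)
    return mean(diffs), half_width

# Hypothetical c values on congruent vs incongruent trials for six observers
congruent = [-0.20, -0.05, -0.30, -0.10, -0.25, -0.15]
incongruent = [0.00, 0.10, -0.05, 0.05, -0.10, 0.02]
m, hw = paired_ci(congruent, incongruent)
print(round(m, 3), round(hw, 3))  # negative mean difference: more liberal when congruent
```

Because the interval is built from each participant's own condition difference, between-participant variability in overall criterion drops out, which is the point of a within-participant interval.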
Table 1: Mean (SD) sensitivity (d’) values across all conditions in Experiments 1-3
3. Experiment 2: Dissociating effects of expectation and attention on detection
Experiment 1 found that participants were more sensitive to (higher d’) and biased to report
the presence of (lower c) congruent action outcomes. However, one possibility is that these
results reflect effects of ‘attention’ rather than ‘expectation’ per se. That is, in both laboratory
tasks and natural settings, top-down expectations (i.e. what is likely to occur) are often
confounded with top-down attention (i.e. what is relevant for task performance; Summerfield
& Egner, 2016). While in our task movements of congruent fingers are just as probable as
incongruent ones (making both types of event equally task-relevant), actors may have
learned outside the laboratory to allocate top-down attention to congruent fingers, as these are
typically more relevant for controlling our actions (note of course that our logic also assumes
that congruent movements are more expected due to learning outside of the experimental
setting; see Introduction). This interpretation is particularly likely, given that when
expectation and attention have been orthogonalised in the sensory cognition literature, the
former has been found to generate biasing effects and the latter sensitivity effects (Wyart et
al., 2012). Experiment 2 examined this possibility by orthogonally manipulating action-
outcome congruency and task relevance with a new sample. This was achieved by
introducing two separate cues to the task: a number cue indicating which action participants
should perform (thereby manipulating expectations about outcomes) and an orthogonal arrow
cue indicating which outcomes will be relevant for perceptual decisions (indicating what
participants should attend to).
3.1. Methods
3.1.1. Participants
A new sample of 24 participants took part in Experiment 2 (17 female, 7 male, mean age =
24.4 years, SD = 4.23). Data from one additional participant were lost due to a technical error.
3.1.2. Procedure
The procedure of Experiment 2 was identical to Experiment 1 with the following changes.
Trials began with the presentation of the neutral observed hand overlaid with an arrow cue.
On 50% of trials the arrow cue was valid, pointing to the finger that was subsequently
probed. On 25% of trials the cue was invalid, pointing to the finger that was not subsequently
probed. On the remaining 25% of trials the cue was neutral, pointing to both fingers and
indicating that they would be probed with equal probability. After 700 ms, an imperative cue
(‘1’ or ‘2’) was presented above the arrow, indicating which action participants were required
to perform (index or middle tap, respectively). After participants executed the correct action,
the same stimulus sequence was triggered as in Experiment 1. Participants made the same
detection judgements, but confidence judgements were not collected in this experiment. The
experiment comprised 320 trials. Trial types were randomised across the experiment, and
breaks were taken every 20 trials.
3.2. Results and Discussion
Signal detection theoretic measures of sensitivity and bias were calculated for each
combination of action-outcome congruency (congruent, incongruent) and cue validity (valid,
neutral, invalid), and effects were evaluated using ANOVAs with the same factorial structure.
Both analyses revealed an effect of cue validity, such that participants were more sensitive to
(higher d’: F2,46 = 4.246, p=.020, ηp2=.156; Table 1) and more liberal in reporting (lower c:
F2,46 = 14.57, p<.001, ηp2=.388; Fig. 2) validly cued events, validating that participants used
these cues to guide their attention (Hawkins et al., 1990). T-tests decomposing this cuing
effect found that judgements were more sensitive when cues were valid than invalid (t23 =
2.85, p=.009, dz = .581), though differences between sensitivity on valid and neutral cuing
trials (t23 = 1.98, p=.059, dz = .404, BF01 = 1.36) or neutral and invalid trials (t23 = .937,
p=.359, dz = .191, BF01 = 3.01) were non-significant. Comparable tests found observers were
more liberal on validly cued trials than invalid ones (t23 = 4.11, p<.001, dz = .838) and more
liberal on neutrally cued than invalidly cued trials (t23 = 4.81, p<.001, dz = .981), but
differences between valid and neutral cuing trials were not significant (t23 = 1.28, p=.214, dz
= .261, BF01 = 2.21). Moreover, effects of cue validity on judgement sensitivity did not
interact with action-outcome congruency (F2,46 = .184, p=.833, ηp2=.008, BF01 = 7.19).
Importantly, these analyses also found that participants were biased to report the presence of
congruent action outcomes (F1,23 = 8.47, p=.008, ηp2=.269), and the magnitude of this bias
did not interact with the focus of attention (F2,46 = 1.29, p=.286, ηp2=.053, BF01 = 5.95),
suggesting that observers are biased to perceive predictable action outcomes irrespective of
task relevance. However, there was no significant effect of congruency on sensitivity (F1,23 =
1.70, p=.205, ηp2=.069, BF01 = 1.52). Therefore, when attention is orthogonally manipulated,
participants are still biased (lower c) to report the presence of congruent action outcomes.
The effect of congruency on sensitivity was not detectable in Experiment 2, consistent with
the possibility that this effect in Experiment 1 was determined by attentional processes
(Wyart et al., 2012). (In Experiment 2 one participant was an outlier, showing an especially
large action-induced bias (z score = 3.76). Re-running the same analyses without data from
this participant revealed the same effect of action congruency on c values (p=.003), the same
correlation between congruency effects and modelled drift biases (r=-.924, p<.001) and did
not change any other statistical patterns observed.)
4. Experiment 3: Controlling imperative-outcome mapping
Experiment 3 investigated whether the congruency biasing effect was driven by expectations
engendered by action or a mapping between imperative cues and stimulus types (1 =
index/left; 2 = middle/right; a common mapping used in musical training). To rule out a
possible mapping between cues and outcomes, we compared one group of participants
completing a procedure identical to that of Experiment 2 with another group who received arbitrary shapes as
imperatives rather than numbers (NB: no cue-outcome mapping can be learnt within the
experiment as there is no within-experiment contingency).
4.1. Methods
4.1.1. Participants
A new sample of 48 participants took part in Experiment 3 (33 female, 15 male, mean age =
24.7 years, SD = 4.08), with 24 in each of the two groups. This provides the same power to
detect the effect individually within each of the two groups as in Experiments 1 and 2, as well
as enabling examination of the interaction according to cue type.
4.1.2. Procedure
One group of 24 participants completed a procedure identical to Experiment 2, where
numbers (‘1’ or ‘2’) cued participants to perform index or middle finger actions. A separate
group of 24 participants completed a near identical procedure, except circles and squares
indicated to participants that they should execute index and middle finger movements. Half of
these participants received a circle-index / square-middle mapping and half received a circle-
middle / square-index mapping.
4.2. Results and Discussion
Measures of sensitivity and bias were calculated separately for each combination of
experimental conditions and analysed with separate ANOVAs. Therefore, the only change
with respect to Experiment 2 was the addition of a between-participants factor of cue type.
Greenhouse-Geisser corrections were employed where appropriate. Participants were again
more liberal in reporting the presence of congruent than incongruent action outcomes (F1,46 =
14.59, p<.001, ηp2=.241; Fig. 2), and more liberal in reporting the presence of validly cued
events (F2,92 = 5.062, p=.008, ηp2=.009). T-tests decomposing the attentional cuing effect on
c values found that observers were more liberal on validly cued trials than invalid ones (t47 =
2.73, p=.009, dz = .394) and more liberal on neutrally cued than invalidly cued trials (t47 =
2.47, p=.017, dz = .356), but differences between valid and neutral cuing trials were not
significant (t47 = 1.19, p=.242, dz = .171, BF01 = 3.30).
The effect of action-outcome congruency did not interact with the focus of attention (p=.080,
ηp2=.025, BF01 = 3.81). Crucially, cue type (shapes vs numbers) also did not interact with the
factor action-outcome congruency (F1,46 = 1.192, p=.281, ηp2=.025, BF01 = 2.61) or generate
a three-way interaction with congruency and attentional focus (F2,92 = .234, p=.792, ηp2=.005,
BF01 = 7.69). Indeed, separate analyses of the number (F1,23 = 7.50, p=.012, ηp2=.246) and
shape cuing conditions (F1,23 = 8.02, p=.009, ηp2=.259) revealed a significant effect of
congruency in both groups, underscoring that participants were more likely to report the
presence of congruent action outcomes irrespective of cue type. (In Experiment 3 one
participant was an outlier, showing an especially large action-induced bias (z score = 4.12).
Re-running the same analyses without data from this participant revealed the same effect of
action congruency on c values (p<.001), the same correlation between congruency effects
and modelled drift biases (r=-.807, p<.001) and did not change any other statistical patterns
observed.)
Equivalent analyses of d’ found that judgement sensitivity again improved with valid cues
(F2,92 = 9.87, p<.001, ηp2=.257; Table 1). T-tests decomposing this cuing effect found
judgements were more sensitive when cues were valid compared to invalid (t47 = 4.49,
p<.001, dz = .648), valid compared to neutral (t47 = 2.42, p=.019, dz = .349) and neutral
compared to invalid (t47 = 2.07, p=.044, dz = .298). However, d’ was unaffected by action-
outcome congruency (F1,46 = .092, p=.763, ηp2=.002, BF01 = 7.63), and effects of relevance
did not interact with congruency (F2,92 = .638, p=.531, ηp2=.014, BF01 = 9.80). Neither the
main effect of validity (F2,92 = .194, p=.824, ηp2=.004, BF01 = 11.7) nor any interaction with
congruency (F2,92 = .542, p=.583 ηp2=.012, BF01 = 6.21) was affected by cue type.
Experiment 3 therefore replicated the findings from Experiment 2 while controlling for the
particular imperative mapping, suggesting that it was the action-based expectations that
generated the biasing effects.
5. Combined analysis
No significant congruency effects on d’ were found in Experiments 2 and 3, in contrast to
Experiment 1 where observers were more sensitive to the presence of congruent action
outcomes. As noted, one plausible explanation for this discrepancy is that effects on
sensitivity are driven by attentional mechanisms (see Wyart, Nobre & Summerfield, 2012)
and sensitivity effects were correspondingly abolished when attention was orthogonally cued
in Experiments 2 and 3. However, Bayes Factors suggested evidence for a null result was
convincing in Experiment 3 (BF01 > 3) but indecisive in Experiment 2 (BF01 < 3). Combining
data from Experiments 2 and 3 for greater precision, we found convincing evidence for the
absence of a congruency effect on d’ (t71 = 1.02, p=.312, dz = .120, BF01 = 4.69), suggesting
that sensitivity effects were absent across the two experiments.
6. Computational modelling: Which aspect of perceptual decision making is biased by action?
We used computational modelling of participant choices and reaction-time distributions to
pinpoint which aspect of perceptual decision making is biased by action. Drift diffusion
models (DDMs) of perceptual decision making have enjoyed growing prominence in the
cognitive sciences (Ratcliff, Smith, Brown & McKoon, 2016; see Fig. 3). These models
assume that when making perceptual choices (e.g., was a stimulus present or not?), observers
have an internal representation of sensory evidence which is sampled by decision circuits.
Decision circuits continuously sample from the representations of sensory evidence, and
when the accumulated decision variable meets a response boundary (e.g. ‘respond present’),
the appropriate response is triggered. The representation of sensory evidence is therefore
separable from the representation of decisions about that evidence.
There are two ways that the DDM could accommodate the action-induced bias found in
Experiments 1-3. First, expectations during action could shift the starting point of the
evidence accumulation process toward the ‘respond present’ decision bound (varying
parameter z of the DDM; see Fig. 3a). Such effects are often thought to reflect biases in the
decision circuits. For example, predictive cues can induce preparatory motor activity before a
stimulus is presented (de Lange, Rahnev, Donner & Lau, 2013) and these starting point
effects could operate even if participants were insensitive to the perceptual information (e.g.,
they closed their eyes). However, a second alternative is that action directly biases sensory
representations, as though agents have selectively increased the gain (or ‘precision’) afforded
to expected sensory signals (Friston, 2018). This kind of bias would manifest as an
asymmetric bias in the rate of evidence accumulation (Mulder, Wagenmakers, Ratcliff,
Boekel & Forstmann, 2012; ‘drift biasing’, affecting parameter db of the DDM; see Fig. 3b),
because as the sensory evidence is sampled more it provides progressively more evidence in
favour of the expected event.
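The distinction between the two candidate mechanisms can be illustrated with a toy diffusion simulation: shifting the starting point z moves the process closer to one bound before any evidence arrives, whereas a drift bias db tilts accumulation on every step. This pure-Python sketch uses made-up parameter values and is not the authors' hDDM fit:

```python
import random

random.seed(0)

def ddm_trial(drift, drift_bias=0.0, start=0.5, bound=1.0, dt=0.01, sigma=1.0):
    """One diffusion trial: True if the upper ('respond present') bound is hit first.

    start is the starting point expressed as a proportion of the bound (z);
    drift_bias (db) is a constant added to the drift on every step, biasing
    evidence accumulation toward the 'present' bound.
    """
    x = start * bound
    while 0.0 < x < bound:
        x += (drift + drift_bias) * dt + sigma * random.gauss(0.0, dt ** 0.5)
    return x >= bound

def p_present(n=2000, **kwargs):
    """Proportion of simulated trials ending in a 'present' response."""
    return sum(ddm_trial(**kwargs) for _ in range(n)) / n

# Signal-absent trials (drift = 0): a positive drift bias inflates 'present'
# responses, mimicking the liberal shift in c observed on congruent trials.
print(p_present(drift=0.0, drift_bias=0.0))   # near .50 without any bias
print(p_present(drift=0.0, drift_bias=0.5))   # above .50 with a drift bias
```

A starting-point shift would produce a similar asymmetry in choices, but with a different reaction-time signature, which is what lets the model comparison below tease the two apart.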
6.1. Methods
To investigate whether either of these possibilities could account for the action-induced bias
we observed in our experiments, we fit hierarchical DDMs to participant choice and reaction
time data using the hDDM package implemented in Python (Wiecki, Sofer & Frank, 2013).
In the hierarchical DDM, model parameters for each participant are treated as random effects
drawn from group-level distributions, and Bayesian Markov Chain Monte Carlo (MCMC)
sampling is used to estimate group and participant level parameters simultaneously.
We specified four different models for data in each experiment: 1) a null model where no
parameters were permitted to vary between congruent and incongruent trials; 2) a start bias
model where the start point of evidence accumulation (z) could vary between congruent and
incongruent trials; 3) a drift bias model where a constant bias in evidence accumulation (db)
could vary between congruent and incongruent trials; and 4) a start + drift bias model where both
parameters could vary. To improve model fits, all models in Experiments 2 and 3 also
allowed global drift rate (but not drift bias, i.e., biases to drift asymmetrically towards
present or absent decisions) to vary as a function of cue validity to account for the effect of
attentional cues on d’ (Ratcliff et al., 2016). Varying drift rate (rather than drift bias) captures
effects that reflect more reliable evidence accumulation to the appropriate response boundary
(i.e., ‘respond present’ when stimuli are present and ‘respond absent’ when absent) and
therefore accounts for sensitivity effects seen as a function of cue validity, rather than a
biased accumulation toward one response boundary over another. In no model did we allow
drift rate to vary between congruent and incongruent trials, as this would amount to assuming
that observers are generally more sensitive to all kinds of sensory evidence on congruent
trials (e.g. higher d’) , rather than sensitising particular sensory channels that encode expected
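As a sketch, the four specifications can be written down in the style of hddm’s `depends_on` mappings. The column names (`congruency`, `cue_validity`) and the use of `dc` for the drift bias are our illustrative assumptions, not the authors’ actual code:

```python
# Illustrative specification of the four models, in hddm-style depends_on
# syntax. 'z' is the start point, 'dc' the drift bias (drift criterion),
# and 'v' the drift rate. Column names are hypothetical.
model_specs = {
    "null":             {},
    "start_bias":       {"z": "congruency"},
    "drift_bias":       {"dc": "congruency"},
    "start_plus_drift": {"z": "congruency", "dc": "congruency"},
}

# In Experiments 2 and 3, every model additionally let global drift rate
# vary with cue validity (a sensitivity effect, not a bias):
for spec in model_specs.values():
    spec["v"] = "cue_validity"

# A fit would then look roughly like (requires the hddm package; not run here):
# m = hddm.HDDM(data, depends_on=model_specs["drift_bias"], include=["z"])
# m.sample(30000, burn=7500)
```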
All models were estimated using MCMC sampling with 30,000 samples (‘burn-in’ = 7500).
Model convergence was assessed by inspecting chain posteriors and simulating reaction time
distributions for each participant. Models were compared using deviance information criteria
(DIC) as an approximation of Bayesian model evidence. Estimated parameters in each model
were compared using the Bayesian significance test implemented in hDDM, which computes
the posterior probability that group-level parameters differ across conditions.
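This significance test reduces to counting posterior samples: the probability that one group-level parameter exceeds another is estimated as the proportion of MCMC draws in which it does. A minimal stand-alone sketch with simulated Gaussian traces (toy values, not the fitted posteriors):

```python
import random

def p_greater(trace_a, trace_b):
    """Posterior probability that parameter A > parameter B, estimated as the
    proportion of paired MCMC samples in which the A draw exceeds the B draw."""
    assert len(trace_a) == len(trace_b)
    return sum(a > b for a, b in zip(trace_a, trace_b)) / len(trace_a)

# Toy posterior traces: two Gaussians whose means differ by one SD.
rng = random.Random(0)
trace_cong = [rng.gauss(0.30, 0.10) for _ in range(5000)]
trace_incong = [rng.gauss(0.20, 0.10) for _ in range(5000)]
prob = p_greater(trace_cong, trace_incong)  # roughly 0.76 for these toy values
```

A value near 1 (or near 0) indicates that almost all posterior mass supports a difference in one direction; a value near 0.5 indicates no credible difference.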
6.2. Results and Discussion
Fitting the DDM to the behavioural data revealed that in all experiments the drift biasing model provided a better fit than the start biasing model, and that in Experiments 2-3 a model implementing only a drift bias outperformed a model implementing both biases (see Fig. 3c).
Analysing modelled parameters revealed higher drift biases on congruent relative to incongruent trials (posterior probabilities that congruent db > incongruent db: Experiment 1 = 0.819, Experiment 2 = 0.959, Experiment 3 = 0.899; higher values indicate greater differences between conditions; Wiecki, Sofer & Frank, 2013). To confirm that these differences in drift bias explained the effect of expectations on perceptual decisions, we calculated and correlated the difference between drift bias parameters (congruent db − incongruent db) and the magnitude of the behavioural bias (congruent c − incongruent c) for each participant. This analysis revealed strong relationships within all three samples (Experiment 1: r(24) = −.877, p < .001; Experiment 2: r(24) = −.973, p < .001; Experiment 3: r(48) = −.855, p < .001; see Fig. 3d).
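The reported relationship is an ordinary Pearson correlation between the two per-participant difference scores. A self-contained sketch with hypothetical difference scores (illustrative values only) shows the expected negative sign, whereby larger congruent-minus-incongruent drift biases accompany more negative criterion differences:

```python
from statistics import fmean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-participant difference scores: a larger drift-bias
# difference (db_diff) pairs with a more negative criterion difference (c_diff).
db_diff = [0.05, 0.12, 0.20, 0.31, 0.44]
c_diff = [-0.02, -0.10, -0.18, -0.30, -0.41]
r = pearson_r(db_diff, c_diff)  # strongly negative for these toy values
```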
This modelling thereby suggests that action-induced biases were best accounted for by a sensory ‘drift biasing’ mechanism, whereby observers are biased to accumulate sensory evidence in line with their expectations, rather than by a change in later decision circuits.
Fig. 3. Illustration of how the DDM could explain action-induced biases, and results of computational modelling. a. For an unbiased decision process (black lines), sensory evidence integrates toward the upper response boundary when stimuli are present (solid lines) and toward the lower response boundary when stimuli are absent (dotted lines). Baseline shifts in decision circuits could shift the start point of the accumulation process nearer to the upper boundary for congruent events (influencing the parameter z; blue lines: Start bias model). b. Alternatively, selectively altering the weights on sensory channels could bias evidence accumulation in line with expectations (influencing the parameter db; red lines: Drift bias model). c. Across all experiments the Drift bias model provided a better fit than the Start bias model (lower Deviance Information Criterion [DIC] indicates better model fit). d. Moreover, in each experiment there was a strong correlation between the drift bias values modelled for each participant and the empirical action-induced bias on c values.
7. General Discussion
Cognitive scientists have proposed models of perceptual prediction that disagree about how
our expectations should shape what we perceive (Press, Kok & Yon, 2020b). Cancellation
models influential in action control have suggested that agents are less likely to perceive the
predictable consequences of their actions (Bays & Wolpert, 2007), in contrast to Bayesian
models from the wider sensory cognition literature which suggest that observers weight
perception towards prior knowledge (see de Lange, Heilbron & Kok, 2018; Press & Yon,
2019). The present experiments suggest that evidence accumulation is biased in line with
expectations during action, such that observers are more likely to perceive the outcomes they
expect. This pattern concords with Bayesian accounts of expectation developed in the wider
sensory cognition literature, which assume observers increase the weight they give to
expected inputs when making perceptual judgements. Biasing perceptual decisions in this
fashion is an effective way to rapidly generate more veridical percepts from sensory signals
corrupted by irreducible internal and external noise. While this process increases the
likelihood that expected signals are detected (i.e. more hits), it also makes observers prone to
hallucinate events when confronted with signal-like noise (i.e., more false alarms; Wyart et
al., 2012), and is therefore thought to generate bias rather than sensitivity effects, as observed here. Indeed, we found that these action-induced biases in perception were dissociable from top-down attention, with orthogonal task-relevance cues altering judgement sensitivity. This finding is in line with previous studies of expectation and attention outside of action contexts (Wyart et al., 2012), and is perhaps therefore indicative of domain-general mechanisms.
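This dissociation between bias and sensitivity follows directly from the standard signal detection formulas, in which the criterion c becomes more liberal when hits and false alarms rise together while d′ stays essentially unchanged. A minimal sketch with hypothetical hit and false-alarm rates (illustrative values only):

```python
from statistics import NormalDist

def z(p):
    """Inverse-normal transform of a proportion (assumed corrected away from 0/1)."""
    return NormalDist().inv_cdf(p)

def sdt(hit_rate, fa_rate):
    """Standard signal-detection sensitivity (d') and criterion (c).
    A bias toward 'present' responses lowers c; a pure bias raises hits
    and false alarms together, leaving d' roughly unchanged."""
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical congruent vs incongruent trials: more hits AND more false
# alarms on congruent trials -> similar d', more liberal (lower) c.
d_cong, c_cong = sdt(0.80, 0.30)
d_incong, c_incong = sdt(0.70, 0.21)
```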
These patterns are consistent with findings of how previous decisions about sensations
influence subsequent decisions (Talluri, Urai, Tsetsos, Usher & Donner, 2018), and with a
recent fMRI study which found that expected action outcomes are more readily decoded from
early and late visual brain areas (Yon, Gilbert, de Lange & Press, 2018). These neural effects
are predicted by Bayesian models, but it has alternatively been suggested that effects of
expectation in sensory brain areas could reflect feedback from decision-related areas in
higher-level cortex that have no causal effect on perception (Bang & Rahnev, 2017; Choe,
Blake & Lee, 2014). The current findings importantly indicate effects of action expectation
on perception, and a corresponding drift biasing mechanism that is consistent with an early
sensory biasing account.
The predictive relationship exploited in these experiments (e.g., that observed index finger movements are an expected consequence of moving one’s index finger) is strong and stable.
It reflects an expectation that is likely acquired through our extensive experience of
controlling our actions (Hommel et al., 2001). However, we would predict that in principle
the same underlying mechanisms operate when we acquire new predictions. In line with this
assumption, our effects are akin to those found outside of action settings when
participants learn within an experiment that colour cues predict the orientation of gratings
(Wyart et al., 2012). The hypothesis that similar mechanisms operate when we acquire new
predictions may appear inconsistent with previous conflicting findings when participants are
presented with novel action-outcome mappings within an experiment. In an elegant training
study, Cardoso-Leite, Mamassian, Schütz-Bosbach and Waszak (2010) found evidence that
participants are less sensitive to grating orientations ‘predicted’ by actions that had been
paired with the gratings during training, with no influence of actions on bias measures.
However, sensitivity was especially high in this study, such that a bias towards perceiving the
expected, accompanied by ceiling effects on the hit rate, would appear as a sensitivity
reduction. Perhaps more importantly, this single study used a small sample and the findings
may not replicate (Schwarz, Pfister, Kluge, Weller & Kunde, 2018). It may therefore be
suggested that there was insufficient opportunity in this paradigm to acquire the action-
outcome mappings reliably. However, in principle, given sufficient opportunity for prediction
acquisition, we would hypothesise prediction mechanisms to operate in a qualitatively similar
fashion when predictions are both ‘old’ and ‘new’ (Dogge, Custers, Gayet, Hoijtink & Aarts, 2019).
This pattern of results is difficult to reconcile with cancellation models, and their central
claim that observers are less likely to perceive the predictable multisensory consequences of
their movements (Bays & Wolpert, 2007; Blakemore et al., 1998). For example, key support
for cancellation models has come from studies that show predictable signals generated by
action are perceived to be less intense than similar (i.e., unpredicted) events presented in the
absence of action (Bays et al., 2005). These studies are in fact often difficult to interpret,
because there are several differences between the predicted and unpredicted conditions. Most
notably, many of the studies compare perception of ‘predicted’ self-generated events with
perception of ‘unpredicted’ sensory events generated by external sources while participants
themselves remain passive, or where the sensory and motor events overlap less due to temporal misalignment (Blakemore, Frith & Wolpert, 1999). This comparison thus risks confounding the operation of expectation mechanisms with that of other processes (see Press, Kok & Yon, 2020a). For example, if action is conceptualised as an additional task, classic working memory models would predict reduced sensory processing when events are presented in combination with action (Baddeley, 1996). It is therefore difficult to isolate effects of prediction mechanisms from those introduced by the dual- vs. single-task design.
The measures employed typically also cannot isolate perceptual effects from decisional
biases (see Firestone & Scholl, 2016) or expectation from attention-based processes (see
Summerfield & Egner, 2016).
One explanation for the difference between the present and previous studies could be that we
have examined visual action effects whereas much evidence to support cancellation comes
from tactile paradigms (e.g., Juravle, McGlone & Spence, 2013; Juravle, Binsted & Spence,
2017; Kilteni & Ehrsson, 2017). The models of which we are aware hypothesise that predictive Bayesian or cancellation mechanisms operate similarly across domains (Wolpert et al., 2003; Brown et al., 2013), and there may be no principled reason to expect distinct adaptive arguments for the sense of touch. However, touch may be influenced differently because of the operation of additional generalised ‘suppression’ mechanisms. Such suppression mechanisms
are thought to attenuate tactile sensations during action regardless of whether they are
predicted effects of action or not and are thought to be mediated by spinal mechanisms (Seki
& Fetz, 2012), and comparable mechanisms may similarly attenuate sensory processing in a
non-predictive fashion across modalities in humans and other animals (Crapse & Sommer,
2008). Importantly, recent experiments in touch suggest that when confounds related to
sensory suppression are removed, action predictions may influence perception in a
qualitatively similar fashion irrespective of sensory modality (Thomas, Yon, de Lange &
Press, 2020).
Nevertheless, there remain studies that are less prone to these alternative explanations and
report ‘cancelled’ percepts for expected, relative to unexpected, action outcomes (e.g.,
Roussel, Hughes & Waszak, 2013; 2014). It is therefore important that future research
unpacks the features that have driven previous reports of ‘cancellation’ – including the
possibility that additional mechanisms are recruited as sensory processing unfolds (see Press,
Kok & Yon, 2020b). Specifically, some of us have recently proposed that the influence of
expectation on perception may not be as simple as suggested in either Bayesian or
Cancellation theories. Under this proposal, observers are initially biased towards perceiving
expected sensory events, but any especially surprising inputs likely relevant for model
updating are reactively upweighted (Press, Kok & Yon, 2020b). Such a reactive upweighting process, applied only to a subset of ‘unexpected’ events, is consistent with evidence from the learning and inference literature, and is in line with the proposed adaptive function of a cancellation mechanism, i.e., highlighting events that are informative to the organism.
This account may explain some discrepancies in the literature, e.g., findings of perceptual upweighting of rapid, punctate consequences of movement, in contrast with evidence for cancellation-like phenomena when action outcomes unfold dynamically (e.g., Lally, Frendo & Diedrichsen, 2011). Future studies should therefore establish whether expectation effects are modulated by the time at which perception is probed and by the extent of the surprise elicited by an ‘unexpected’ stimulus, which is low with the barely detectable sensations used in the present signal detection task (Press, Kok & Yon, 2020b).
The concept of cancellation has had a wide-ranging influence on research investigating the
sense of agency and its aberration in psychiatric disease. For example, experiences of
passivity in schizophrenia, where patients feel moved by an external force, have often been
attributed to a failure of predictive cancellation, which causes self-produced action outcomes
to appear unusually intense (Frith et al., 2000). However, our results suggest that
sensorimotor predictions can increase the weight we give to expected sensory signals, which
may suggest that these mechanisms contribute to the experience of agency in a different way.
Specifically, biases towards perceiving expected outcomes may help observers to overcome
sensory noise, giving us higher fidelity representations of our ongoing movements and their
consequences. An inability to incorporate this kind of top-down knowledge into perceptual
estimates could leave agents with higher levels of uncertainty about their actions, leaving
them vulnerable to developing the unusual beliefs that characterise psychosis (Fletcher &
Frith, 2009).
Important contributions to work on sensory prediction during action have also come from
models of predictive processing and active inference developed in computational
neuroscience (Friston, 2005). These models suggest that all aspects of perception, action and
cognition arise as agents minimise the mismatch between their models of the extracranial
world and incoming sensory evidence. Notably, this can be achieved either by using the
evidence to update the models (i.e. perception) or by using action to change the evidence (i.e.
to alter the world so it conforms to the model). In principle, the architecture of these models
can account for the tendency of agents to up- or down-weight perception of predictable action
outcomes through changes in the gain or ‘precision’ afforded to sensory evidence (Friston,
2008; Yon, de Lange & Press, 2019). For example, researchers using this framework have
suggested that sensory precision on certain channels is attenuated during action, explaining ‘cancellation’ phenomena (Brown et al., 2013; Van Doorn et al., 2015), while also suggesting that observers can increase the precision afforded to expected sensory signals, leading to ‘representational sharpening’ and biases toward perceiving what we expect
(Friston, 2018). However, while models of precision-weighting during action have important
advantages over forward-model-based accounts (e.g., it has been difficult to specify computationally how predictions could ‘cancel’ sensory signals; see Brown et al., 2013), it
remains the case that the precision on a sensory channel, and perception of a single event,
cannot be up- and down-weighted simultaneously. This makes it difficult to use existing
models of active inference to make precise predictions about those conditions under which
perception of expected action outcomes should be enhanced and those where it should be
attenuated. Our present findings may provide some constraints on such models that aid the generation of more precise behavioural predictions in the future, as these results suggest that the precision afforded to visual channels encoding expected outcomes is augmented rather than attenuated, at least when sensory events coincide temporally with action initiation. We hope that behavioural experiments like ours will play an important role in informing and constraining such models, helping to elaborate a comprehensive account of how and why sensory gain is augmented and attenuated as we act upon the world around us.
In conclusion, these results have shown that observers are biased towards perceiving the
expected outcomes of their movements. These findings are difficult to reconcile with
dominant cancellation accounts in action control, but concord well with normative models of
Bayesian perceptual inference. Namely, increasing the weight we give to expected sensory
signals may explain how we develop a largely veridical representation of our actions and
their consequences in an inherently uncertain sensory world.
Previous work in the lab has investigated whether specific types of sensorimotor representations (i.e., visual-motor ‘mirror’ representations of action) are the product of domain-general statistical learning processes. After finding broad support for this idea (e.g., Cook, Bird, Catmur, Press & Heyes, 2014; Press et al., 2012), we began to investigate whether domain-general models can explain the functional influence of sensorimotor predictions on perception, as well as the origin of such representations. In a recent neuroimaging study (Yon et al., 2018) we found that representations in visual brain areas were reshaped toward expected action outcomes, which is more in line with domain-general ideas in the broader sensory cognition literature than with the cancellation models dominant in action research. However, while these neuroimaging data are consistent with these models, it has been suggested that expectation-related activity in sensory brain areas plays no causal role in determining what we perceive (Bang & Rahnev, 2017; Choe et al., 2014). Here we directly investigated how expectations influence the perception of action outcomes, finding evidence that we are biased toward perceiving expected action outcomes.
References
Baddeley, A. (1996). Exploring the central executive. Quarterly Journal of Experimental Psychology, 49, 5–28.
Bang, J. W., & Rahnev, D. (2017). Stimulus expectation alters decision criterion but not
sensory signal in perceptual decision making. Scientific Reports, 7, 17072.
Bays, P. M., & Wolpert, D. M. (2007). Computational principles of sensorimotor control that
minimize uncertainty and variability. The Journal of Physiology, 578, 387–396.
Bays, P. M., Wolpert, D. M., & Flanagan, J. R. (2005). Perception of the consequences of
self-action is temporally tuned and event driven. Current Biology, 15, 1125–1128.
Blakemore, S. J., Wolpert, D. M., & Frith, C. D. (1998). Central cancellation of self-produced
tickle sensation. Nature Neuroscience, 1, 635–640.
Blakemore, S. J., Frith, C. D., & Wolpert, D. M. (1999). Spatio-temporal prediction
modulates the perception of self-produced stimuli. Journal of Cognitive
Neuroscience, 11(5), 551–559.
Brown, H., Adams, R. A., Parees, I., Edwards, M., & Friston, K. (2013). Active inference,
sensory attenuation and illusions. Cognitive Processing, 14, 411-427.
Cardoso-Leite, P., Mamassian, P., Schütz-Bosbach, S., & Waszak, F. (2010). A new look at
sensory attenuation: action-effect anticipation affects sensitivity, not response bias.
Psychological Science, 21, 1740–1745.
Choe, K. W., Blake, R., & Lee, S. H. (2014). Dissociation between neural signatures of
stimulus and choice in population activity of human V1 during perceptual decision-
making. Journal of Neuroscience, 34, 2725–2743.
Cook, R., Bird, G., Catmur, C., Press, C., & Heyes, C. (2014). Mirror neurons: from origin to
function. Behavioral and Brain Sciences, 37, 177–192.
Crapse, T. B., & Sommer, M. A. (2008). Corollary discharge across the animal kingdom.
Nature Reviews Neuroscience, 9(8), 587–600.
Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait.
Perception & Psychophysics, 44, 339–347.
de Lange, F. P., Heilbron, M., & Kok, P. (2018). How do expectations shape perception?
Trends in Cognitive Sciences, 22, 764-779.
de Lange, F. P., Rahnev, D. A., Donner, T. H., & Lau, H. (2013). Prestimulus oscillatory
activity over motor cortex reflects perceptual expectations. Journal of Neuroscience,
33, 1400–1410.
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in
Psychology, 5, 781.
Dogge, M., Custers, R., Gayet, S., Hoijtink, H., & Aarts, H. (2019). Perception of action-
outcomes is shaped by life-long and contextual expectations. Scientific Reports, 9.
Fiehler, K., Brenner, E., & Spering, M. (2019). Prediction in goal-directed action. Journal of
Vision, 19, 10.
Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: evaluating the
evidence for “top-down” effects. Behavioral and Brain Sciences, 39, e229.
Fletcher, P. C., & Frith, C. D. (2009). Perceiving is believing: a Bayesian approach to
explaining the positive symptoms of schizophrenia. Nature Reviews Neuroscience, 10, 48–58.
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal
Society of London. Series B, Biological Sciences, 360(1456), 815836.
Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4(11), e1000211.
Friston, K. (2018). Does predictive coding have a future? Nature Neuroscience, 21, 1019–1021.
Frith, C. D., Blakemore, S. J., & Wolpert, D. M. (2000). Abnormalities in the awareness and
control of action. Philosophical Transactions of the Royal Society of London B, 355, 1771–1788.
Gregory, R. L. (1997). Knowledge in perception and illusion. Philosophical Transactions of
the Royal Society of London B: Biological Sciences, 352, 1121–1127.
Hautus, M. J. (1995). Corrections for extreme proportions and their biasing effects on
estimated values of d′. Behavior Research Methods, Instruments, & Computers, 27, 46–51.
Hawkins, H. L., Hillyard, S. A., Luck, S. J., Mouloua, M., Downing, C. J., & Woodward, D.
P. (1990). Visual attention modulates signal detectability. Journal of Experimental Psychology: Human Perception and Performance, 16, 802–811.
Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event
Coding (TEC): a framework for perception and action planning. Behavioral and Brain
Sciences, 24, 849–878.
Hudson, M., Nicholson, T., Ellis, R., & Bach, P. (2016). I see what you say: prior knowledge
of other’s goals automatically biases the perception of their actions. Cognition, 146,
245-250. doi: 10.1016/j.cognition.2015.09.021.
Hudson, M., Nicholson, T., Simpson, W. A., Ellis, R., & Bach, P. (2016). One step ahead: the
perceived kinematics of others’ actions are biased toward expected goals. Journal of
Experimental Psychology: General, 145, 1-7. doi: 10.1037/xge0000126.
James, W. (1890). The Principles of Psychology. New York: Henry Holt and Company.
Juravle, G., McGlone, F. P., & Spence, C. (2013). Context-dependent changes in tactile
perception during exploratory versus reaching movements. Frontiers in
Psychology, 4, 913.
Juravle, G., Binsted, G., & Spence, C. (2017). Tactile suppression in goal-directed
movement. Psychonomic Bulletin & Review, 24, 1060-1076.
Kilteni, K., & Ehrsson, H. H. (2017). Body ownership determines the attenuation of self-
generated tactile sensations. Proceedings of the National Academy of Sciences USA,
114, 8426-8431.
Kilteni, K., Houborg, C., & Ehrsson, H. H. (2019). Rapid learning and unlearning of
predicted sensory delays in self-generated touch. eLife, 8, e42888.
doi: 10.7554/eLife.42888
Lally, N., Frendo, B., & Diedrichsen, J. (2011). Sensory cancellation of self-movement
facilitates visual motion detection. Journal of Vision, 11, 5.
Mulder, M. J., Wagenmakers, E. J., Ratcliff, R., Boekel, W., & Forstmann, B. U. (2012). Bias
in the brain: a diffusion model analysis of prior probability and potential payoff.
Journal of Neuroscience, 32, 2335–2343.
Müsseler, J., & Hommel, B. (1997). Blindness to response-compatible stimuli. Journal of
Experimental Psychology: Human Perception and Performance, 23, 861-872.
Press, C., Catmur, C., Cook, R., Widmann, H., Heyes, C., & Bird, G. (2012). fMRI evidence
of ‘mirror’ responses to geometric shapes. PLOS ONE, 7, e51934.
Press, C., Kok, P., & Yon, D. (2020a). Learning to perceive and perceiving to learn. Trends
in Cognitive Sciences, 24, 260-261.
Press, C., Kok, P., & Yon, D. (2020b). The perceptual prediction paradox. Trends in
Cognitive Sciences, 24, 13-24.
Press, C., & Yon, D. (2019). Perceptual prediction: rapidly making sense of a noisy world.
Current Biology, 29, 751-753. doi: 10.1016/j.cub.2019.06.054
Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model:
current issues and history. Trends in Cognitive Sciences, 20, 260–281.
Roussel, C., Hughes, G., & Waszak, F. (2013). A preactivation account of sensory
attenuation. Neuropsychologia, 51(5), 922–929.
Roussel, C., Hughes, G., & Waszak, F. (2014). Action prediction modulates both
neurophysiological and psychophysical indices of sensory attenuation. Frontiers in
Human Neuroscience, 8, 115.
Sato, A. (2008). Action observation modulates auditory perception of the consequence of
others’ actions. Consciousness and Cognition, 17, 1219–1227.
Seki, K., & Fetz, E. E. (2012). Gating of sensory input at spinal and cortical levels during preparation and execution of voluntary movement. Journal of Neuroscience, 32, 890–902.
Schwarz, K. A., Pfister, R., Kluge, M., Weller, L., & Kunde, W. (2018). Do we see it or not?
Sensory attenuation in the visual domain. Journal of Experimental Psychology: General, 147, 418–430.
Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory:
applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50.
Stein, T., & Peelen, M. V. (2015). Content-specific expectations enhance stimulus
detectability by increasing perceptual sensitivity. Journal of Experimental Psychology: General, 144, 1089–1104.
Summerfield, C., & de Lange, F. P. (2014). Expectation in perceptual decision making:
neural and computational mechanisms. Nature Reviews Neuroscience, 15, 745–756.
Summerfield, C., & Egner, T. (2016). Feature-based attention and feature-based expectation.
Trends in Cognitive Sciences, 20, 401–404.
Talluri, B. C., Urai, A. E., Tsetsos, K., Usher, M., & Donner, T. H. (2018). Confirmation bias
through selective overweighting of choice-consistent evidence. Current Biology, 28, 3128–3135.
Thomas, E. R., Yon, D., de Lange, F. P., & Press, C. (2020). Action enhances predicted touch. bioRxiv.
Van Doorn, G., Paton, B., Howell, J., & Hohwy, J. (2015). Attenuated self-tickle sensation
even under trajectory perturbation. Consciousness and Cognition, 36, 147–153.
Weiskrantz, L., Elliott, J., & Darlington, C. (1971). Preliminary observations on tickling
oneself. Nature, 230, 598–599.
Weiss, C., Herwig, A., & Schütz-Bosbach, S. (2011). The self in social interactions: sensory
attenuation of auditory action effects is stronger in interactions with others. PLOS
ONE, 6, e22723.
Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of
the drift-diffusion model in python. Frontiers in Neuroinformatics, 7, 14.
Wolpert, D.M., Doya, K., & Kawato, M. (2003). A unifying computational framework for
motor control and social interaction. Philosophical Transactions of the Royal Society
B, 358, 593-602.
Wolpert, D. M., Ghahramani, Z., & Jordan, M. I. (1995). An internal model for sensorimotor
integration. Science, 269, 1880–1882.
Wyart, V., Nobre, A. C., & Summerfield, C. (2012). Dissociable prior influences of signal
probability and relevance on visual contrast sensitivity. Proceedings of the National
Academy of Sciences of the United States of America, 109, 3593–3598.
Yon, D., de Lange, F. P., & Press, C. (2019). The predictive brain as a stubborn scientist.
Trends in Cognitive Sciences, 23, 6–8.
Yon, D., Gilbert, S. J., de Lange, F. P., & Press, C. (2018). Action sharpens sensory
representations of expected outcomes. Nature Communications, 9, 4288.
Yon, D., & Press, C. (2017). Predicted action consequences are perceptually facilitated before
cancellation. Journal of Experimental Psychology: Human Perception and Performance, 43, 1073–1083.
Yon, D., & Press, C. (2018). Sensory predictions during action support perception of
imitative reactions across suprasecond delays. Cognition, 173, 21–27.
Acknowledgements
This work was funded by grants from the Leverhulme Trust (RPG-2016-105) and Wellcome Trust (204770/Z/16/Z). We are grateful to Richard Cooper for advice concerning the
Author contributions
All authors contributed to the design of the study. V.Z. and D.Y. collected the data, which
were analysed and modelled by D.Y. in conjunction with C.P. D.Y. wrote the manuscript and
all authors were involved in revisions. C.P. supervised this work.
Competing interests
The authors declare no competing interests.
Correspondence should be addressed to Daniel Yon (
... an increase in sensitivity (contrast gain) for stimulus-like external signals (Reynolds & Heeger, 2009). Decreases in criterion induced by expectation have recently been explained as involving changes in stimulus-specific gain (Wyart, Nobre, & Summerfield, 2012;Yon, Zainzinger, de Lange, Eimer, & Press, 2020), suggesting that a similar mechanism may be in play here. According to these accounts, expectation makes stimulus-specific sensory units more sensitive to incoming signals, in turn making the detection threshold more likely to be breached. ...
... Furthermore, in contrast to top-down signals generated by expectations, internally generated signals during imagery are generally strong enough to lead to a visual experience in the absence of external input (Kosslyn et al., 2001). Fully disentangling these two accounts of the effect of imagery on perceptual detection may be possible in future studies by using drift diffusion modelling (Yon et al., 2020) or by varying stimulus energy in a systemic way over trials (Wyart et al., 2012). ...
... Another important issue is whether the imagery effects observed here could be (partly) due to imagery increasing visual attention to the external stimulus. Indeed, spatial attention has also been shown to sometimes decrease criterion (Downing, 1988;Hawkins et al., 1990;Yon et al., 2020), leading to more presence responses. Furthermore, similar to imagery, attention increases sensory activation for the attended stimulus (Ling & Carrasco, 2006;Ling, Liu, & Carrasco, 2009;Liu, Larsson, & Carrasco, 2007;Liu, Pestilli, & Carrasco, 2005), even in the absence of external input (Chelazzi, Duncan, Miller, & Desimone, 1998;Chelazzi, Miller, Duncan, & Desimone, 1993;Peelen & Kastner, 2011;Tanaka, 1996). ...
Full-text available
Visual experiences can be triggered externally, by signals coming from the outside world during perception; or internally, by signals from memory during mental imagery. Imagery and perception activate similar neural codes in sensory areas, suggesting that they might sometimes be confused. In the current study, we investigated whether imagery influences perception by instructing participants to imagine gratings while externally detecting these same gratings at threshold. In a series of three experiments, we showed that imagery led to a more liberal criterion for reporting stimulus presence, and that this effect was both independent of expectation and stimulus-specific. Furthermore, participants with more vivid imagery were generally more likely to report the presence of external stimuli, independent of condition. The results can be explained as either a low-level sensory or a high-level decision-making effect. We discuss that the most likely explanation is that during imagery, internally generated sensory signals are sometimes confused for perception and suggest how the underlying mechanisms can be further characterized in future research. Our findings show that imagery and perception interact and emphasize that internally and externally generated signals are combined in complex ways to determine conscious perception.
... Our findings are consistent with various theoretical accounts of sensory attenuation (e.g., Cardoso-Leite et al., 2010; van Moorselaar & Slagter, 2019; Waszak, Cardoso-Leite, & Hughes, 2012; Yon, Gilbert, de Lange, & Press, 2018; Yon, Zainzinger, de Lange, Eimer, & Press, 2020). We could interpret the findings of Experiments 1-3 as an indication that acquiring action-outcome associations is difficult in the case of relatively complex, relational features (perhaps requiring a more extensive learning phase), if it is possible at all. ...
... Similarly, Gozli and Ansorge (2016) found bias both for and against a predicted outcome when they varied the delay between the action and the outcome. These findings suggest that the processing advantages and disadvantages of action outcomes could be reconciled within a unified framework that includes aspects of both the preactivation and the inhibition accounts (e.g., Press et al., 2019, 2020; van Moorselaar & Slagter, 2019; Waszak et al., 2012; Yon & Press, 2017, 2018; Yon et al., 2020). ...
Four experiments are reported that investigate the relationship between action-outcome learning and the ability to ignore distractors. Each participant performed 600 acquisition trials, followed by 200 test trials. In the acquisition phase, participants were presented with a fixed action-outcome contingency (e.g., Key #1 ➔ green distractors), while that contingency was reversed in the test phase. In Experiments 1-3, a distractor feature depended on the participants' action. In Experiment 1, actions determined the color of the distractors; in Experiment 2, they determined the target-distractor distance; in Experiment 3, they determined target-distractor compatibility. Results suggest that with the relatively simple features (color and distance), exposure to action-outcome contingencies changed distractor cost, whereas with the complex or relational feature (target-distractor compatibility), exposure to the contingencies did not affect distractor cost. In Experiment 4, the same pattern of results was found (effect of contingency learning on distractor cost) with perceptual sequence learning, using visual cues ("X" vs. "O") instead of actions. Thus, although the mechanism of associative learning may not be unique to actions, such learning plays a role in the allocation of attention to task-irrelevant events.
... However, in the animal kingdom corollary discharge has been found to influence sensory processing in myriad ways besides cancellation of reafference [43]. Contrary to cancellation theories, recent sharpening models propose that perception is biased towards the expected input [44,45], in line with evidence showing enhanced BOLD responses to self-generated stimuli [46,47] and increased discharges in some neurons during self-initiated vocalizations [48]. The discrepancy between cancellation and sharpening accounts is also reflected in human studies attempting to assess the behavioural correlates of the neurophysiological effects of self-generation on stimulus processing. ...
The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection and discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived louder than externally-generated ones. Our findings provide empirical support to recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness, rather than by changes in detection and discrimination sensitivity.
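The Point of Subjective Equality used above is the comparison intensity at which the two sounds are judged equally loud, i.e. where the psychometric function crosses 50% "comparison louder" responses. A minimal sketch of reading off a PSE by linear interpolation; the intensity levels and response proportions below are illustrative assumptions, not data from the study:

```python
def pse_by_interpolation(intensities, p_louder, target=0.5):
    """Find the comparison intensity where the (monotonically increasing)
    proportion of 'louder' responses crosses `target`."""
    for i in range(len(intensities) - 1):
        x0, x1 = intensities[i], intensities[i + 1]
        y0, y1 = p_louder[i], p_louder[i + 1]
        if y0 <= target <= y1:
            # linear interpolation between the two bracketing points
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("psychometric function never crosses target")

# Hypothetical proportions of 'comparison louder' responses at five
# comparison levels (dB relative to the standard sound).
levels = [-6, -3, 0, 3, 6]
p_self = [0.05, 0.20, 0.40, 0.75, 0.95]
pse = pse_by_interpolation(levels, p_self)
# A PSE above 0 means the comparison must exceed the standard to sound
# equally loud, i.e. the standard was perceived as attenuated.
```

In practice a sigmoid (e.g. cumulative Gaussian) is usually fitted to the full psychometric function; interpolation is used here only to keep the sketch self-contained.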
... Here, sensory signals that are in line with prior expectations, such as those generated by one's own actions, are assumed to be perceptually enhanced rather than attenuated (Kok et al., 2012; Press et al., 2020; Yon et al., 2018). Indeed, competing with cancellation models, several studies found enhanced perceptual sensitivity for sensory action feedback when compared to less predictable externally generated sensory signals (Myers et al., 2020; Reznik et al., 2014; Schmalenbach et al., 2017; Straube et al., 2020; van Kemenade et al., 2016; Yon et al., 2020). ...
Sensory action consequences are highly predictable and thus engage fewer neural resources compared to externally generated sensory events. While this has frequently been observed to lead to attenuated perceptual sensitivity and suppression of activity in sensory cortices, some studies conversely reported enhanced perceptual sensitivity for action consequences. These divergent findings might be explained by the type of action feedback, i.e., discrete outcomes vs. continuous feedback. Therefore, in the present study we investigated the impact of discrete and continuous action feedback on perceptual and neural processing during action feedback monitoring. During fMRI data acquisition, participants detected temporal delays (0-417 ms) between actively or passively generated wrist movements and visual feedback that was either continuously provided during the movement or that appeared as a discrete outcome. Both feedback types resulted in (1) a neural suppression effect (active < passive) in a largely shared network including bilateral visual and somatosensory cortices, cerebellum and temporoparietal areas. Yet, compared to discrete outcomes, (2) processing continuous feedback led to stronger suppression in right superior temporal gyrus (STG), Heschl's gyrus, and insula, suggesting specific suppression of features linked to continuous feedback. Furthermore, (3) BOLD suppression in visual cortex for discrete outcomes was specifically related to perceptual enhancement. Together, these findings indicate that neural representations of discrete and continuous action feedback are similarly suppressed but might depend on different predictive mechanisms, where reduced activation in visual cortex reflects facilitation specifically for discrete outcomes, and predictive processing in STG, Heschl's gyrus, and insula is particularly relevant for continuous feedback.
... (Reznik et al., 2014; Yon et al., 2020). Collectively, the discrepancy in the results reported so far points to factors other than self-generation that may interactively modulate sensory processing during motor actions. ...
The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection and discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived louder than externally-generated ones. Our findings provide empirical support to recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness, rather than by changes in detection and discrimination sensitivity. Highlights: Self-generation and stimulus intensity interactively shape auditory perception. Supra-threshold self-generated sounds are perceptually attenuated. When near-threshold, perceived intensity is enhanced for self-generated sounds. Self-generation and intensity modulate perceptual bias, rather than sensitivity. Surprise-driven attentional mechanisms may underlie these perceptual shifts.
Accounts of predictive processing propose that conscious experience is influenced not only by passive predictions about the world, but also by predictions encompassing how the world changes in relation to our actions—that is, on predictions about sensorimotor contingencies. We tested whether valid sensorimotor predictions, in particular learned associations between stimuli and actions, shape reports about conscious visual experience. Two experiments used instrumental conditioning to build sensorimotor predictions linking different stimuli with distinct actions. Conditioning was followed by a breaking continuous flash suppression task, measuring the speed of reported breakthrough for different pairings between the stimuli and prepared actions, comparing those congruent and incongruent with the trained sensorimotor predictions. In Experiment 1, counterbalancing of the response actions within the breaking continuous flash suppression task was achieved by repeating the same action within each block but having them differ across the two blocks. Experiment 2 sought to increase the predictive salience of the actions by avoiding the repetition within blocks. In Experiment 1, breakthrough times were numerically shorter for congruent than incongruent pairings, but Bayesian analysis supported the null hypothesis of no influence from the sensorimotor predictions. In Experiment 2, reported conscious perception was significantly faster for congruent than for incongruent pairings. A meta-analytic Bayes factor combining the two experiments confirmed this effect. Altogether, we provide evidence for a key implication of the action-oriented predictive processing approach to conscious perception, namely that sensorimotor predictions shape our conscious experience of the world.
It is widely believed that predicted tactile action outcomes are perceptually attenuated. The present experiments determined whether predictive mechanisms always generate attenuation, or instead can enhance perception, as typically observed in sensory cognition domains outside of action. We manipulated probabilistic expectations in a paradigm often used to demonstrate tactile attenuation. Participants produced actions and subsequently rated the intensity of forces on a passive finger. Experiment 1 confirmed previous findings that action outcomes are perceived less intensely than passive stimulation, but demonstrated more intense perception when active finger stimulation was removed. Experiments 2 and 3 manipulated prediction explicitly and found that expected touch during action is perceived more intensely than unexpected touch. Computational modelling suggested that expectations increase the gain afforded to expected tactile signals. These findings challenge a central tenet of prominent motor control theories and demonstrate that sensorimotor predictions do not exhibit a qualitatively distinct influence on tactile perception. Statement of Relevance: Perception of expected action outcomes is thought to be attenuated. Such a mechanism may be adaptive because surprising inputs are more useful (e.g., signalling the need to take new courses of action) and is thought to explain why we cannot tickle ourselves and unusual aspects of action and awareness in clinical populations. However, theories outside of action purport that predicted events are perceptually facilitated, allowing us to generate largely accurate representations of our noisy sensory world. We do not know whether action predictions really alter perception differently from other predictions because different manipulations have been performed. Here we perform similar manipulations and demonstrate that action predictions can enhance, rather than attenuate, touch.
We thereby demonstrate that action predictions may not have a qualitatively distinct influence on perception, such that we must re-examine theories concerning how predictions influence perception across domains and clinical theories based upon their assumptions.
Self-generated touch feels less intense and less ticklish than identical externally generated touch. This somatosensory attenuation occurs because the brain predicts the tactile consequences of our self-generated movements. To produce attenuation, the tactile predictions need to be time-locked to the movement, but how the brain maintains this temporal tuning remains unknown. Using a bimanual self-touch paradigm, we demonstrate that people can rapidly unlearn to attenuate touch immediately after their movement and learn to attenuate delayed touch instead, after repeated exposure to a systematic delay between the movement and the resulting touch. The magnitudes of the unlearning and learning effects are correlated and dependent on the number of trials that participants have been exposed to. We further show that delayed touches feel less ticklish and non-delayed touches more ticklish after exposure to the systematic delay. These findings demonstrate that the attenuation of self-generated touch is adaptive.
Prior knowledge shapes what we perceive. A new brain stimulation study suggests that this perceptual shaping is achieved by changes in sensory brain regions before the input arrives, with common mechanisms operating across different sensory areas.
The way humans perceive the outcomes of their actions is strongly colored by their expectations. These expectations can develop over different timescales and are not always complementary. The present work examines how long-term (structural) expectations, developed over a lifetime, and short-term (contextual) expectations jointly affect perception. In two studies, including a pre-registered replication, participants initiated the movement of an ambiguously rotating sphere by operating a rotary switch. In the absence of any learning, participants predominantly perceived the sphere to rotate in the same direction as their rotary action. This bias toward structural expectations was abolished (but not reversed) when participants were exposed to incompatible action-effect contingencies (e.g., clockwise actions causing counterclockwise percepts) during a preceding learning phase. Exposure to compatible action-effect contingencies, however, did not add to the existing structural bias. Together, these findings reveal that perception of action outcomes results from the combined influence of both long-term and immediate expectations.
Bayesian theories of perception have traditionally cast the brain as an idealised scientist, refining predictions about the outside world based on evidence sampled by the senses. However, recent predictive coding models include predictions that are resistant to change, and these stubborn predictions can be usefully incorporated into cognitive models.
When we produce actions we predict their likely consequences. Dominant models of action control suggest that these predictions are used to ‘cancel’ perceptual processing of expected outcomes. However, normative Bayesian models of sensory cognition developed outside of action propose that rather than being cancelled, expected sensory signals are represented with greater fidelity (sharpened). Here, we distinguished between these models in an fMRI experiment where participants executed hand actions (index vs little finger movement) while observing movements of an avatar hand. Consistent with the sharpening account, visual representations of hand movements (index vs little finger) could be read out more accurately when they were congruent with action and these decoding enhancements were accompanied by suppressed activity in voxels tuned away from, not towards, the expected stimulus. Therefore, inconsistent with dominant action control models, these data show that sensorimotor prediction sharpens expected sensory representations, facilitating veridical perception of action outcomes.
We thank Corlett for his thought-provoking response to our recent article. Corlett shares our concerns about inconsistencies in theories of perceptual prediction and highlights some reminiscent debates in learning theory. He also proposes why the perceptual prediction mechanisms may operate differently in the domain of action relative to other domains of sensory cognition. Here, we highlight how we share the conviction that dialogue across disciplines will inform both models of perception and learning but clarify that important distinctions between the explananda mean the theoretical puzzles are not reducible to each other. We also question whether action prediction mechanisms do indeed operate differently.
From the noisy information bombarding our senses, our brains must construct percepts that are veridical - reflecting the true state of the world - and informative - conveying what we did not already know. Influential theories suggest that both challenges are met through mechanisms that use expectations about the likely state of the world to shape perception. However, current models explaining how expectations render perception either veridical or informative are mutually incompatible. While the former propose that perceptual experiences are dominated by events we expect, the latter propose that perception of expected events is suppressed. To solve this paradox we propose a two-process model in which probabilistic knowledge initially biases perception towards what is likely and subsequently upweights events that are particularly surprising.
Prediction allows humans and other animals to prepare for future interactions with their environment. This is important in our dynamically changing world that requires fast and accurate reactions to external events. Knowing when and where an event is likely to occur allows us to plan eye, hand, and body movements that are suitable for the circumstances. Predicting the sensory consequences of such movements helps to differentiate between self-produced and externally generated movements. In this review, we provide a selective overview of experimental studies on predictive mechanisms in human vision for action. We present classic paradigms and novel approaches investigating mechanisms that underlie the prediction of events guiding eye and hand movements.
People's assessments of the state of the world often deviate systematically from the information available to them [1]. Such biases can originate from people's own decisions: committing to a categorical proposition, or a course of action, biases subsequent judgment and decision-making. This phenomenon, called confirmation bias [2], has been explained as suppression of post-decisional dissonance [3, 4]. Here, we provide insights into the underlying mechanism. It is commonly held that decisions result from the accumulation of samples of evidence informing about the state of the world [5-8]. We hypothesized that choices bias the accumulation process by selectively altering the weighting (gain) of subsequent evidence, akin to selective attention. We developed a novel psychophysical task to test this idea. Participants viewed two successive random dot motion stimuli and made two motion-direction judgments: a categorical discrimination after the first stimulus and a continuous estimation of the overall direction across both stimuli after the second stimulus. Participants' sensitivity for the second stimulus was selectively enhanced when that stimulus was consistent with the initial choice (compared to both first stimuli and choice-inconsistent second stimuli). A model entailing choice-dependent selective gain modulation explained this effect better than several alternative mechanisms. Choice-dependent gain modulation was also established in another task entailing averaging of numerical values instead of motion directions. We conclude that intermittent choices direct selective attention during the evaluation of subsequent evidence, possibly due to decision-related feedback in the brain [9]. Our results point to a recurrent interplay between decision-making and selective attention.
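The choice-dependent gain mechanism described above can be illustrated with a toy evidence-accumulation model: after an initial categorical choice, second-stimulus samples consistent with that choice receive a higher weight, pulling the continuous estimate toward the choice. All parameters here are illustrative assumptions, not fits to the reported data:

```python
import random

def estimate_direction(samples1, samples2, gain=1.3):
    """Average evidence across two stimuli, up-weighting second-stimulus
    samples whose sign is consistent with the choice made after stimulus 1."""
    choice = 1 if sum(samples1) >= 0 else -1        # categorical choice
    weighted = list(samples1)                        # stimulus 1: unit gain
    for s in samples2:
        w = gain if (s >= 0) == (choice >= 0) else 1.0  # selective gain
        weighted.append(w * s)
    return choice, sum(weighted) / len(weighted)

random.seed(1)
# Hypothetical noisy evidence: true direction weakly positive.
s1 = [random.gauss(0.2, 1.0) for _ in range(50)]
s2 = [random.gauss(0.2, 1.0) for _ in range(50)]
choice, estimate = estimate_direction(s1, s2)
# With gain > 1, the continuous estimate is biased toward the initial
# choice, mimicking a confirmation bias in evidence accumulation.
```

A gain of 1.0 recovers an unbiased average, so the bias in this sketch is controlled entirely by the single selective-gain parameter.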