Object Manipulation and Motion Perception: Evidence of an Influence of
Action Planning on Visual Processing
Oliver Lindemann and Harold Bekkering
Radboud University Nijmegen
In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the
context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object
along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction. Action
execution had to be delayed until the appearance of a visual go signal, which induced an apparent
rotational motion in either a clockwise or a counterclockwise direction. Stimulus detection was faster
when the direction of the induced apparent motion was consistent with the direction of the concurrently
intended manual object rotation. Responses to action-consistent motions were also faster when the
participants prepared the manipulation actions but signaled their stimulus detections with another motor
effector (i.e., with a foot response). Taken together, the present study demonstrates a motor-visual
priming effect of prepared object manipulations on visual motion perception, indicating a bidirectional
functional link between action and perception beyond object-related visuomotor associations.
Keywords: object manipulation, motion perception, perception–action coupling, motor-visual priming,
embodied cognition
Accumulating behavioral and neuropsychological research has
suggested a close and bidirectional link between perceptual and
motor processes (see, e.g., Hommel, Müsseler, Aschersleben, &
Prinz, 2001). For instance, several cueing experiments have shown
that visual images of graspable objects (Craighero, Fadiga, Rizzolatti, & Umiltà, 1998; Tucker & Ellis, 1998) or film sequences of
the actions of others (Brass, Bekkering, & Prinz, 2001; Vogt,
Taylor, & Hopkins, 2003) prime the motor system and speed up
the initiation of an action when the cue and the motor response are
congruent (visuomotor priming). It is interesting to note, however,
that recent studies reported evidence for an effect of the opposite
directionality, that is, an impact of motor actions on visual pro-
cessing (here referred to as motor-visual priming). Action-induced
effects on vision have been observed in participants performing
rather simple actions like button-press responses (Kunde & Wühr, 2004; Müsseler & Hommel, 1997; Wühr & Müsseler, 2001), pen
movements (Zwickel, Grosjean, & Prinz, 2007), pointing move-
ments (Bekkering & Pratt, 2004; Deubel, Schneider, & Paprotta,
1998; Linnell, Humphreys, McIntyre, Laitinen, & Wing, 2005), or
hand posture changes (Hamilton, Wolpert, & Frith, 2004; Miall et
al., 2006).
So far, only a few studies have reported motor-visual priming effects for more complex and natural motor behaviors like reaching for
and grasping an object (Craighero, Fadiga, Rizzolatti, & Umiltà,
1999; Fagioli, Hommel, & Schubotz, 2007; Symes, Tucker, Ellis,
Vainio, & Ottoboni, 2008). For example, a study by Craighero et
al. (1999) demonstrated that the processing of a visual stimulus is
facilitated if it affords the same type of grasping response as the
participant concurrently intends to perform. In that paradigm,
differently oriented wooden bars had to be grasped without the aid
of sight. A word cue informed the participants about the orienta-
tion of the bar and instructed them to prepare the corresponding
grasping action. However, the actual execution of the prepared
motor response had to be delayed until a visual go signal was
presented. Craighero et al. (1999) reported faster responses if the
go signals afforded the same type of grasping response as the
prepared action. It is interesting to note that this effect was also
observed when the participants prepared a manual grasping re-
sponse but signaled their detection of the visual stimulus with
another motor effector. This finding has been interpreted as sup-
port for the idea of motor-visual priming because it indicates that
the preparation of a grasping movement facilitates the visual
processing of stimuli that are associated with similar motor actions
or that afford the same type of grip. Additional evidence for the
idea of action-induced effects has been provided by studies in
which grasping and pointing movements were compared, showing that the intention to grasp an object selectively enhances the
processing of visual object properties such as size (Fagioli et al.,
2007) or orientation (Bekkering & Neggers, 2002; Hannus, Cor-
nelissen, Lindemann, & Bekkering, 2005). Thus, the literature
provides several examples indicating that the planning of grasping
actions automatically modulates visual attention toward those ob-
ject features and dimensions that are relevant for the selection and
programming of that particular motor response. It is, however,
Oliver Lindemann and Harold Bekkering, Donders Institute for Brain,
Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, the
Netherlands.
The present research is part of the Interactive Collaborative Information
Systems (ICIS) project, supported by Dutch Ministry of Economic Affairs
Grant BSIK03024. We acknowledge Boris van Waterschoot and Giovanna
Girardi for their assistance in collecting the data.
Correspondence concerning this article should be addressed to Oliver
Lindemann, Donders Institute for Brain, Cognition, and Behaviour, Rad-
boud University Nijmegen, P.O. Box 9104, 6500 HE Nijmegen, the Neth-
erlands. E-mail: o.lindemann@donders.ru.nl
Journal of Experimental Psychology: Human Perception and Performance
2009, Vol. 35, No. 4, 1062–1071
© 2009 American Psychological Association
0096-1523/09/$12.00 DOI: 10.1037/a0015023
unclear whether action-induced effects of grasping actions are
restricted to these visuomotor associations between intrinsic object
properties and afforded grip.
It is surprising that researchers investigating the interaction
between perceptual and motor processes in grasping have not paid
much attention to the fact that grasping actions in everyday life are
predominantly instrumental and are directed toward an action goal¹ that implies a manipulation of the object. For instance,
depending on whether one wishes to open or close a faucet, one
grasps it with the intention to rotate it afterward in a clockwise
(CW) or a counterclockwise (CCW) direction. Although it is
widely recognized that the intended manipulation of an object
plays a very crucial role in the selection and preparation of the
initial reach-to-grasp movement (e.g., Rosenbaum, Meulenbroek,
Vaughan, & Jansen, 2001), the role of action goals for the presence
of motor-visual priming effects has not yet been investigated.
Because each object manipulation implies a visually perceivable
movement, and taking into account the importance of visual feed-
back for the control of motor actions (cf. Castiello, 2005; Glover,
2004), it is plausible to assume that the perceptual processing of
visual motions, especially, is characterized by a close perception–
action coupling. As yet, very little is known about the interference
between action and motion perception. It has been shown, for
example, that the perception of moving objects automatically
activates responses that correspond spatially to the direction of the
perceived motion (Bosbach, Prinz, & Kerzel, 2004; Michaels,
1988; Proctor, Van Zandt, Lu, & Weeks, 1993). However, the only indication of an effect in the reversed direction, that is, an impact of action planning on motion perception, comes from findings of biased motion judgments during action execution. For example, Wohlschläger (2000) asked participants to indicate the direction of ambiguous apparent motion displays while they were turning a knob either CW or CCW. He observed that participants tended to judge the ambiguous rotations in the direction of their currently performed action and interpreted this as evidence that
motion perception is biased in the direction of the produced
movement. However, it cannot be excluded that the effects on
directional judgments may have been caused by a guessing bias in
perceptually unclear situations. It is interesting to note that
Zwickel et al. (2007) recently reported that under some conditions,
the opposite action-induced effect, namely a contrast effect be-
tween production and perception of movement directions, could
also be observed. Taken together, both findings suggest a close
coupling between concurrent action execution and motion percep-
tion. However, it is still an open question whether perceptual
processing of motions is likewise modulated by motor intentions
and by merely prepared but not yet executed motor responses, as
is known for static object perception (e.g., Craighero et al., 1999).
The aim in the present study was to investigate motor-visual
priming in the context of object manipulation actions and to
examine whether perceptual effects of grasping actions go beyond the processing of object properties. On the basis of the consider-
ations outlined above, visual motion perception provides a likely
candidate for a domain that is sensitive to motor preparation. Thus,
we conducted three behavioral experiments to test the idea that
planning of an object manipulation affects perception of visual
motions. We hypothesized that the intention to manipulate an
object (e.g., to rotate an object) facilitates the processing of visual
motions (e.g., a rotational motion on a computer screen) in the
same direction as the prepared action.
Experiment 1
In Experiment 1, we investigated the interaction between object
manipulation actions and visual motion perception. We asked
participants to reach out and grasp an object and to subsequently
rotate it in a CW or CCW direction (see Figure 1). Similar to the
delayed-grasping paradigm proposed by Craighero et al. (1999),
participants were instructed to prepare the object manipulation in
advance and to delay its execution until the appearance of a visual
go signal. The go signal was a tilted bar that afforded either the
same type of grip as the prepared action involved or the orthogonal
grip. It is important to note that before the go signal appeared, a
horizontal or a vertical bar was shown. Due to this initial stimulus,
the onset of the go signal induced an apparent rotational motion of
45° in either a CW or a CCW direction (see Figure 2A). In some
trials, a solid circle was presented as go signal. These trials served
as control condition, because a circle is not associated with one of
the two required initial grips in the experiment, and its appearance
did not induce any apparent motion. Assuming that the participants
prepare the actual manipulation before the onset of the reach-to-
grasp movement, we predicted a facilitated processing of the
rotational motions in the same direction as the intended object
rotation.
Method
Participants. Thirty students from the Radboud University
Nijmegen participated in exchange for EUR 4.50 (U.S.$6) or
course credits. All participants were naive to the purpose of the
study, had normal or corrected-to-normal vision, and were free of
any motor problems that could have affected task performance.
Apparatus. Participants were required to manipulate an
X-shaped object (manipulandum; Figure 1B) consisting of two
perpendicularly intersecting wooden bars (8 cm × 1.1 cm × 5 cm each) mounted on a base plate (30 cm × 15 cm). The manipulan-
dum could be rotated around its crossing point, with the rotation
axis being parallel to the Cartesian z-axis. Owing to small pegs
underneath the X-shaped object and holes inside the base plate, the
manipulandum clicked into place after rotating 90°.
A small pin placed on the base plate at a distance of 15 cm from
the manipulandum’s rotation axis marked the starting position for
the grasping movements. The manipulandum was oriented such
that both crossing bars were aligned 45° diagonally to the partic-
ipant’s midsagittal plane and was positioned behind a wooden
screen (44 cm × 45 cm), which allowed the participants to reach
it comfortably with their right hand but obscured the manipulan-
dum and their hand from view (Figure 1A).
¹ We use the term action goal to describe any kind of cognitive representation of changes in the environment that a person intends to achieve with a motor action. Behavioral goals can vary in terms of their remoteness, for instance, from a proximal goal like grasping the faucet to more distal goals like filling the bathtub with water or having a bath. In this respect, action goals are here understood as proximal goals at the level of motor intentions (Jacob & Jeannerod, 2005).
Stimuli. All stimuli were presented in the center of a computer
screen that was placed at a viewing distance of approximately 70
cm in front of the participants. A black horizontal or vertical bar
(visual angle of 4.1° × 1.3° or 1.3° × 4.1°, respectively) was
presented as an initial stimulus that was visible until the go signal
appeared. A blue or yellow cross (0.9° of visual angle) on top of
the bar served as an action cue to indicate the required motor
response.
In the rotation condition, the go signals consisted of bars in the same color and size as the initial stimuli. However, the bars were tilted from the vertical by either +45° or −45°, and they thus afforded either the same type of grip as the currently prepared action involved (grip consistent) or the orthogonal grip (grip inconsistent). Because the go signals were presented at the same location as the initial stimuli, an apparent rotational motion was induced by the appearance of the tilted bars (see Figure 2A for an illustration). For example, a bar tilted 45° clockwise from vertical resulted in an apparent CW motion if the initial stimulus was oriented vertically and in a CCW motion if the initial stimulus was oriented horizontally. That is, depending on the required motor response, the onset of the go signal induced a rotational motion either in the same direction (rotation consistent) or in the opposite direction (rotation inconsistent) as the currently prepared object manipulation. A solid circle subtending a visual angle of 2.7° was used as the control condition.
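As an illustration of this stimulus logic, the following minimal sketch (our own, not the authors' presentation software; the degree convention and the function names are assumptions) maps the orientation of the initial bar and of the go signal onto the direction of the induced apparent rotation and onto its consistency with the prepared manipulation.

```python
# Minimal sketch of the condition logic described above (our illustration, not the
# authors' software). Assumed convention: bar orientation in degrees clockwise from
# vertical, so the initial bars are 0° (vertical) or 90° (horizontal), the tilted
# go signals are +45° or -45°, and a solid circle is represented as None.

def apparent_rotation(initial_deg, go_deg):
    """Direction of the 45° apparent rotation induced by the go signal, or 'none'."""
    if initial_deg is None or go_deg is None:   # circle trials induce no apparent motion
        return "none"
    diff = (go_deg - initial_deg) % 180         # bar orientations repeat every 180°
    assert diff in (45, 135)                    # only 45° steps occur in these experiments
    return "CW" if diff == 45 else "CCW"        # 45° = clockwise step, 135° = counterclockwise

def rotation_consistency(prepared_rotation, initial_deg, go_deg):
    """Classify a trial relative to the prepared object rotation ('CW' or 'CCW')."""
    motion = apparent_rotation(initial_deg, go_deg)
    if motion == "none":
        return "control"
    return "consistent" if motion == prepared_rotation else "inconsistent"

# Examples: prepared CW rotation
print(rotation_consistency("CW", 0, 45))     # vertical bar, then 45° clockwise tilt  -> "consistent"
print(rotation_consistency("CW", 90, 45))    # same go bar after a horizontal bar     -> "inconsistent"
print(rotation_consistency("CW", 0, None))   # solid circle as go signal              -> "control"
```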
Procedure. Participants performed a short training block, prior
to the actual experiment, in which they practiced grasping and
rotating the manipulandum without vision. The experimenter dem-
onstrated the two possible object manipulation actions and showed
how to rotate the object by 90° in a CW and a CCW direction. The
manipulandum always had to be grasped along one of its two
crossing bars, that is, either with the index finger at the top-left and
the thumb at the bottom-right leg (left grip) or with the index
finger at the top-right and the thumb at the bottom-left leg (right
grip). Each rotation afforded a specific type of grip. The object had
to be grasped with a left grip for CW rotations and with a right grip
for CCW rotations. The object manipulations were only demon-
strated and were never verbally instructed. When a motor response
was carried out incorrectly, the experimenter corrected the partic-
ipants and again demonstrated the required action.
The experimental block started once participants were able to
carry out the movements fluently without vision. Half the partic-
ipants were presented with the horizontal bar as initial stimulus,
and the other half were presented with the vertical bar as initial
stimulus. Each trial began with a presentation of a gray cross
projected on top of the initial stimulus. Participants were instructed
to fixate their eyes on the cross and to hold the start peg (starting
position) with index finger and thumb. As soon as the hand was
placed correctly in the starting position, the color of the cross
changed to cue the action. The action cues remained visible for
2,000 ms. A blue cross indicated a left grasp and a 90° CW
rotation, whereas a yellow cross prescribed a right grasp and a 90°
CCW rotation. It is important to note that participants were re-
quired to prepare for the object manipulation but to withhold its execution. After a random interval (250–750 ms), the initial
stimulus disappeared and the go signal was presented for the
duration of 1,000 ms. Participants had to initiate their prepared
motor response as soon as they detected the onset of the go signal.
After rotating the manipulandum, they returned their hands to the
starting position, and the next trial started.
Design. Apart from 10 randomly chosen practice trials, the
experimental block comprised 144 trials presented in a random
order. They were composed of all possible combinations of the two
manual responses (left grasp, CW rotation; right grasp, CCW
rotation) and the three go signals (circle, bar tilted +45°, bar tilted −45°). The orientation of the initial stimulus (horizontal, vertical)
was balanced between subjects.
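By way of illustration, a trial list with this composition could be generated along the following lines; the sketch is hypothetical (the names and data structures are ours, and 24 repetitions per cell is an inference from 144 trials divided by the 6 design cells).

```python
# Hypothetical sketch of how the 144-trial list of Experiment 1 could be composed
# (illustrative only, not taken from the authors' software).
import itertools
import random

manual_responses = ["left grasp / CW rotation", "right grasp / CCW rotation"]
go_signals = ["bar tilted +45°", "bar tilted -45°", "circle"]   # circle = control trial

# 2 manual responses x 3 go signals = 6 cells; 24 repetitions per cell -> 144 trials.
trials = [
    {"response": r, "go_signal": g}
    for _ in range(24)
    for r, g in itertools.product(manual_responses, go_signals)
]
random.shuffle(trials)          # trials were presented in random order
assert len(trials) == 144
```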
Go signals could be considered consistent or inconsistent with
respect to the currently prepared grasping movement. Moreover,
depending on the induced apparent rotation, each trial was either
consistent or inconsistent with respect to the prepared object
rotation. In the control condition (i.e., solid circle as go signal), the
go signal was not associated with a specific type of grip and did
not induce any apparent motion. For participants with the horizontal bar as initial stimuli, all grip-consistent go signals induced a rotation-consistent visual motion, whereas for the vertical bar group, apparent rotation-consistent motions were only induced by grip-inconsistent stimuli.

Figure 1. A: Illustration of the experimental setup. Participants were seated in front of a computer screen. The starting position and the manipulandum were obscured from the participant's view by means of a wooden screen. B: Illustration of the X-shaped manipulandum that could be rotated along the rotation axis indicated by R.

Figure 2. A: Apparent visual motions caused by the sequence of events in Experiments 1 and 3. Depending on the orientation of the initial bar (i.e., horizontal or vertical), the go signal (i.e., a bar tilted +45° or −45°) induced an apparent rotational motion in a clockwise (CW) or a counterclockwise (CCW) direction. The appearance of the solid circle (i.e., control condition) caused no apparent visual motion. B: Sequence of events for the neutral no rotation condition in Experiment 2. The CW and CCW rotation conditions were identical to Experiments 1 and 3 (see Figure 2A).
Data acquisition and analysis. Hand movements were re-
corded with a sampling rate of 100 Hz, with an electromagnetic
position tracking system (miniBIRD 800, Ascension Technology
Corporation, Burlington, VT). Three sensors were attached to the
thumb, index finger, and wrist of the participant’s right hand.
Hand response latencies were determined offline. We applied a fourth-order Butterworth low-pass filter with a cutoff frequency of 10 Hz to the raw data. The reaction times (RTs) were determined
by calculating the time intervals between the stimulus onsets and
the reach movement onsets. Reach onset times were defined as the
moments when the tangential velocity of the index-finger sensor
first exceeded a threshold of 10 cm/s and remained above this level
for the minimum duration of 200 ms.
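For concreteness, the onset criterion can be written out as a short analysis helper; the sketch below is an assumed implementation (function and variable names are ours, not the authors' analysis code) of the filtering and velocity rule just described.

```python
# Rough sketch of the reach-onset criterion: fourth-order Butterworth low-pass filter
# at 10 Hz, then the first sample at which tangential velocity exceeds 10 cm/s for at
# least 200 ms (assumed helper, not the authors' code).
from typing import Optional

import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0  # sampling rate of the position tracker (Hz)

def reach_onset_ms(xyz_cm: np.ndarray, go_onset_idx: int) -> Optional[float]:
    """xyz_cm: (n_samples, 3) index-finger positions in cm; returns the RT in ms or None."""
    b, a = butter(4, 10.0 / (FS / 2.0))                       # 10-Hz low-pass (normalized cutoff)
    pos = filtfilt(b, a, xyz_cm, axis=0)                      # zero-phase filtering of the raw data
    vel = np.linalg.norm(np.diff(pos, axis=0), axis=1) * FS   # tangential velocity in cm/s
    above = vel > 10.0
    min_len = int(0.2 * FS)                                   # threshold must hold for 200 ms
    for i in range(go_onset_idx, len(above) - min_len + 1):
        if above[i:i + min_len].all():
            return (i - go_onset_idx) / FS * 1000.0           # latency relative to go signal onset
    return None                                               # no reach onset detected
```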
In all experiments reported here, anticipation responses (responses ahead of go signal onset and RTs < 150 ms), missing responses (no reactions and RTs > 800 ms), and incorrect actions (e.g., wrong grip, cessations of movement while reaching, incorrect rotation direction) were considered errors and were excluded from the statistical analyses. A Type I error rate of α = .05 was used in all statistical tests. Whenever appropriate, pairwise post hoc comparisons were conducted with the Bonferroni procedure.
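Expressed as a filter over trial records, the exclusion rules amount to the following sketch (field names and coding conventions are assumptions, not taken from the authors' pipeline).

```python
# Illustrative filter applying the exclusion rules above to per-trial records; the
# field names are hypothetical, and responses made before go signal onset are
# assumed to be coded as negative RTs.
def keep_trial(trial: dict) -> bool:
    rt = trial["rt_ms"]                 # None = no reaction
    if rt is None or rt > 800:          # missing response
        return False
    if rt < 150:                        # anticipation (includes pre-go-signal responses)
        return False
    return not trial["action_error"]    # wrong grip, stopped reach, or wrong rotation direction
```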
Results
Anticipations occurred in 14.9% of all trials (4.9% of RTs < 0 ms; 10.4% of RTs < 150 ms). The missing rate was below 1%;
8.4% of the actions were performed incorrectly.
We applied a repeated measures multivariate analysis of variance (MANOVA)² with the within-subject factors Manual Response (left grasp, CW rotation; right grasp, CCW rotation) and Rotation Consistency (consistent, inconsistent, control) and with the between-subjects factor Initial Stimulus Orientation (horizontal, vertical) on the mean RT data (see Table 1). As hypothesized, the analysis revealed a main effect for the factor Rotation Consistency, F(2, 27) = 9.75, p < .001, partial η² = .42. All other effects failed to reach significance. Post hoc t tests yielded shorter RTs to go signals inducing rotation-consistent motions (322 ms), as compared with go signals inducing rotation-inconsistent motions (345 ms), t(29) = −4.16, p < .001, or no motions (control condition: 338 ms), t(29) = −3.31, p < .01. Moreover, as a separate one-way MANOVA with the factor Grip Consistency (grip consistent, grip inconsistent, control) indicated, there were no significant differences between responses to grip-consistent stimuli (331 ms), grip-inconsistent stimuli (336 ms), and stimuli that did not afford a specific grip (control condition: 338 ms), F(2, 28) < 1.
To compare the effects of Rotation Consistency and Grip Consistency directly and to see whether the two factors interacted, we calculated, for each participant, the deviations of the mean RTs to the grip-consistent and grip-inconsistent bars from the mean RT in the control condition. The resulting RT effects were submitted to a univariate analysis of variance with the factors Rotation Consistency (consistent, inconsistent) and Grip Consistency (consistent, inconsistent). The main effect for Rotation Consistency was significant, F(1, 56) = 9.61, p = .003, partial η² = .15, whereas there was no effect for Grip Consistency (F < 1). Mean RT effects are depicted in Figure 3 and indicate a positive effect (15 ms) for consistent rotational motions relative to the control condition and a negative effect (7 ms) for inconsistent rotational motions. It is interesting to note that the two factors did not interact (F < 1), which shows that the Rotation Consistency effect was independent of the orientation of the go signal.
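The RT-effect computation can be made explicit with a small numerical sketch; the values and the paired contrast below are placeholders (the actual analysis was a univariate ANOVA on the per-participant effects), so the snippet only illustrates how the deviations from the control condition are formed.

```python
# Worked sketch of the RT-effect measure and the consistency contrast, using placeholder
# numbers and standard SciPy routines; it illustrates the logic only and does not
# reproduce the reported analysis.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
conditions = ["rotation consistent", "rotation inconsistent", "control"]
# rows = participants, columns = per-condition mean RTs in ms (placeholder values)
mean_rt = np.array([[322.0, 345.0, 338.0]] * 30) + rng.normal(0.0, 20.0, size=(30, 3))

# RT effects: deviation of each rotation condition from the control (circle) condition
effects = mean_rt[:, :2] - mean_rt[:, [2]]
print(dict(zip(conditions[:2], effects.mean(axis=0).round(1))))

# Consistent vs. inconsistent motions, paired across participants
res = ttest_rel(mean_rt[:, 0], mean_rt[:, 1])
print(res.statistic, res.pvalue)
```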
Discussion
Experiment 1 demonstrated that stimulus detections were sped
up when go signals induced apparent rotational motions in the
same direction as the currently prepared object manipulation. This
rotation consistency effect reflects an interference effect between
object manipulation and visual motion perception and indicates, in
particular, a perceptual benefit for consistent visual motion. We
interpret this finding as evidence for an impact of action planning
on perceptual processing and as support for the notion of motor-
visual priming effects in motion perception.
It is interesting to note that if the apparent motions were incon-
sistent with the prepared action, stimulus detections tended to be
slower, as compared with the control condition. This may reflect
an impaired processing of inconsistent rotational motions. Al-
though the results clearly demonstrate an interaction between
object manipulation and visual motion perception, it remains un-
clear whether both a positive and a negative effect—that is, a
facilitated processing of consistent motions and an impaired pro-
cessing of inconsistent motions— contribute to the presence of
motor-visual interactions. It is important to consider that the in-
terpretation of positive and negative RT effects strongly depends
on the used baseline condition.
As described above, the control condition was implemented by
varying the go signal. We chose this procedure because the solid
circle presented as a go signal could serve as a control condition
for rotation consistency effects as well as for grip consistency
effects. However, when trying to separate the positive and the
negative impacts of action planning on motion perception, it might
be problematic to interpret the control condition of Experiment 1
as an appropriate, neutral baseline estimate because the control
condition differed from the rotation conditions not only with
respect to the induced visual motion but also with respect to the
presented stimuli. That is, due to the different visual properties of
the go signals, it is unclear whether the results allow a conclusion
about the presence of positive and negative motor-visual priming.
We thus designed a second experiment to clarify this question.
Experiment 2
Experiment 2 focused on the rotation consistency effect and
introduced another no rotation condition that provides a better
baseline estimate for an analysis of positive and negative motor-
visual priming effects. Again, we presented bars in different ori-
entations as initial stimuli and as go signals. In contrast to the
previous experiment, however, trials without apparent motion
were now implemented by a variation of the initial stimulus and
not by a variation of the go signal. That is, a no rotation trial started with the presentation of a solid circle followed by a tilted bar as go signal (see Figure 2B). Because all go signals were tilted bars, the three experimental conditions (rotation consistent, rotation inconsistent, no rotation) differed only with respect to the induced apparent rotational motion. The no rotation condition of Experiment 2 can consequently be interpreted as a measurement of a neutral baseline that separates the positive effects and the negative effects of action planning on motion perception.

² We used the multivariate F test based on the Pillai–Bartlett V criterion for all within-subject factor analyses reported here (O'Brien & Kaiser, 1985).
Method
Participants. Fifteen students from the Radboud University
Nijmegen participated in exchange for EUR 4.50 (U.S.$6) or
course credits. All participants were naïve to the purpose of the
study, had normal or corrected-to-normal vision, and were free of
any motor problems that could have affected their task perfor-
mance.
Apparatus, stimuli, and data acquisition. The apparatus, stim-
uli, and data acquisition were the same as in Experiment 1.
Procedure. The procedure was basically unchanged. Only the
sequence of events in the no rotation condition, without apparent
motion, was modified, as depicted in Figure 2B. The solid circle
did not serve as go signal; rather, the solid circle was presented in
some of the trials as initial stimulus. The go signal was a bar that
was tilted either +45° or −45°. To minimize the number of
anticipation responses, we presented a sinusoid 4,400-Hz tone
(200 ms duration) as negative feedback when participants re-
sponded before the onset of the go signal.
Design and analysis. In contrast to Experiment 1, the initial
stimuli (horizontal bar, vertical bar, circle) were varied blockwise
within subject. Each of the three experimental blocks comprised 72
trials composed of all possible combinations of the two manual
responses (left grasp, CW rotation; right grasp, CCW rotation) and
the two types of go signals (bar tilted +45°, bar tilted −45°). All
blocks started with 10 randomly chosen additional practice trials
that were not analyzed later. The order of blocks was permuted
across participants. Depending on the initial stimulus, the onset of
the tilted bar induced an apparent rotational motion that was
consistent or inconsistent with the prepared action or that induced
no rotation (neutral no rotation condition).
Because go signals in all three rotation conditions were either
consistent or inconsistent with the prepared grip, we obtained from
each subject a mean RT for all combinations of the factors Rota-
tion Consistency and Grip Consistency. The influence of both
factors could therefore be directly tested without calculating RT
effects.
Results
Participants' tendency to respond before the go signal onset was much smaller (0.9% of RTs < 0 ms and 3.3% of RTs < 150 ms) than in Experiment 1, reflecting the presence of the negative
feedback in the case of anticipation responses. Again, the rates of
missing (1%) and incorrect (4.4%) responses were low.
Mean RTs (see Table 2) were submitted to a repeated measures MANOVA, with the within-subject factors Manual Response (left grasp, CW rotation; right grasp, CCW rotation), Rotation Consistency (consistent, inconsistent, neutral no rotation), and Grip Consistency (consistent, inconsistent). The analysis revealed a nonsignificant trend for the factor Manual Response, F(1, 14) = 3.46, p = .08, partial η² = .12, indicating a slight tendency to initiate CCW object manipulation actions (297 ms) faster than CW object manipulations (305 ms). It is important to note that we observed an effect for Rotation Consistency, F(2, 13) = 5.56, p < .05, partial η² = .46. The detections of apparent rotational motions consistent with the prepared action were faster (292 ms) than the detections of inconsistent rotational motions (309 ms), t(14) = 3.44, p < .01. RTs in rotation-inconsistent trials and neutral no rotation trials (301 ms) did not differ significantly, t(14) = 1.05. The factor Grip Consistency did not reach significance, F(1, 14) = 1.53. There were no interaction effects.

Figure 3. Mean reaction time (RT) effects (i.e., deviations from the control condition) of Experiment 1 as a function of the factors Rotation Consistency and Grip Consistency. Error bars represent standard errors.

Table 1
Mean Reaction Times (in ms) for Experiments 1 (Hand Response Latencies) and 3 (Foot Response Latencies)

                                    Vertical initial stimulus                  Horizontal initial stimulus
                              Rotation       Rotation                     Rotation       Rotation
                              consistent     inconsistent    Control      consistent     inconsistent    Control
Experiment                    M      SE      M      SE       M     SE     M      SE      M      SE       M     SE
Experiment 1
  Left grasp & CW rotation    324    21      354    24       334   20     322    21      344    24       332   20
  Right grasp & CCW rotation  337    21      339    23       354   24     307    21      340    23       330   24
  M                           331    21      347    23       344   22     314    21      343    23       331   22
Experiment 3
  Left grasp & CW rotation    328    17      341    15       330   18     327    18      322    15       323   19
  Right grasp & CCW rotation  326    16      334    19       340   19     311    16      326    19       330   15
  M                           327    16      338    17       335   18     320    17      324    16       327   17

Note. CW = clockwise; CCW = counterclockwise.
For a better comparison of the results with the outcome of
Experiment 1, we additionally calculated the mean RT effect of the
presentation of the tilted bars for each subject and each condition
(see Figure 4 for means). Again, the 2 (Grip Consistency) × 2 (Rotation Consistency) MANOVA yielded only an effect for Rotation Consistency, F(1, 14) = 11.90, p < .01, partial η² = .45 (other Fs < 1), indicating a positive effect (8 ms) for rotation-consistent motions as well as a negative effect (9 ms) for rotation-inconsistent motions.
Discussion
Experiment 2 provides additional support for the presence of
motor-visual priming of motion perception. The results further-
more confirm the presence of a positive and a negative motor-
visual priming of motion perception. Both effects were comparable
in size, suggesting that prepared motor actions facilitate the pro-
cessing of consistent visual rotational motions, on the one hand,
and impair the processing of inconsistent motions, on the other
hand.
We have interpreted the observed rotation consistency effect in
Experiments 1 and 2 as an impact of action planning on the
perception of visual motions. However, it is important to notice
that the execution of the object manipulation actions in the first
two experiments was directly coupled to the detection of the visual
motions. As a result, it might be possible that the outcome was
driven by stimulus–response priming. That is, in contrast to an
action-induced effect on motion perception, the RT differences
could reflect an accelerated initiation of manual actions compris-
ing an object rotation that is consistent with the perceived visual
motion. Such visuomotor priming (Vogt et al., 2003), however,
would represent an effect of the reversed directionality relative to the hypothesized effect of motor-visual priming. Because this alternative
account could not be ruled out, we conducted a third experiment to
distinguish between the two conflicting explanations.
Experiment 3
The aim in Experiment 3 was to examine the origin of the
interference between object manipulation and motion perception.
In particular, we sought to provide direct evidence for the notion
that the observed rotation consistency effect reflects motor-visual
priming on the level of motion perception rather than stimulus–
response priming on the level of response execution. To test this
assumption, we introduced a second motor response. That is,
participants again prepared one of two object manipulation actions.
In contrast to the previous experiments, however, the onset of the
second visual stimulus (i.e., the apparent visual motion) did not
prompt the execution of the manual action. Rather, participants
were instructed to signal the detection of the stimulus by a speeded
foot pedal response. The object manipulation had to be performed
later in the trials in response to an auditory signal.
The rationale of Experiment 3 was as follows (cf. Craighero et
al., 1999; Fagioli et al., 2007): If, as hypothesized, the preparation
of a manual action affects the perceptual processing of visual
motions, we should also observe a priming effect for stimulus
detections indicated by another effector system (in this case the
foot). By contrast, if the alternative explanation holds, that is, if the
perception of visual motions had influenced the initiation of ma-
nipulation actions in the same or opposite direction, we would expect to
find no priming effect in the latencies of foot pedal responses
because foot responses do not share any spatial features with the
perceived stimulus rotation.
Method
Participants. Fifteen students from the Radboud University Nijmegen participated in exchange for EUR 6 (U.S.$8) or course credits. All had normal or corrected-to-normal vision and were naïve to the purpose of the experiment.

Table 2
Mean Hand Response Latencies (in ms) for Experiment 2

                                        Grip consistent                             Grip inconsistent
                              Rotation      Rotation        Neutral no     Rotation      Rotation        Neutral no
                              consistent    inconsistent    rotation       consistent    inconsistent    rotation
Manual response               M     SE      M     SE        M     SE       M     SE      M     SE        M     SE
Left grasp & CW rotation      298   9       310   14        295   11       297   14      324   9         305   12
Right grasp & CCW rotation    289   9       302   15        301   13       287   12      304   13        302   13
M                             293   8       306   13        298   11       292   12      314   10        304   12

Note. CW = clockwise; CCW = counterclockwise.

Figure 4. Mean reaction time (RT) effects (i.e., deviations from the neutral condition) of Experiment 2 as a function of the factors Rotation Consistency and Grip Consistency. Error bars represent standard errors.
Apparatus, stimuli, and data acquisition. The apparatus and
stimuli were identical to those used in Experiment 1. A sinusoid
900-Hz tone (150 ms duration) was used as an auditory go signal
to trigger the execution of the object manipulations. To record the
foot responses, we placed a foot pedal (conventionally used by
percussionists to play the bass drum) under the table and attached
a motion-tracking sensor to the end of the pedal’s drumstick (17.5
cm long). When the pedal had been pressed, a sinusoid 440-Hz
tone (50 ms duration) sounded as feedback and indicated the
correctness of the response. In the case of an anticipation response, negative auditory feedback was given (a 4,400-Hz tone lasting 200 ms).
Data acquisition was the same as in previous experiments, with
the exception that we used a fourth motion-tracking sensor to
measure the foot pedal responses. The same criterion as used for
the hand responses (i.e., velocity threshold of 10 cm/s) was chosen
to determine the foot response latencies.
Procedure and design. The procedure was similar to Experi-
ment 1. A horizontal or vertical bar was presented as initial
stimulus. The object manipulations were precued by colored
crosses. Again, the second visual stimulus was a bar tilted +45° or −45° or a solid circle (see Figure 1A and Experiment 1 for
presentation times). However, it did not serve as go signal for the
manual actions. Rather, participants were instructed to make a foot
response (with their right foot) as soon as the second stimulus
appeared. Six hundred milliseconds after the participant pressed
the foot pedal, the auditory go signal sounded and indicated the
initiation of the prepared object manipulation.
Experiment 3 was divided into four blocks of 48 trials each. As
in Experiment 2, the orientation of the initial stimulus was varied
blockwise within subject. Half of the participants saw a horizontal
bar in Blocks 1 and 3 and saw a vertical bar in Blocks 2 and 4; for
the other half, the order was reversed.
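To summarize the trial structure, the sequence of events can be sketched as a simple timeline; the generator below is purely illustrative, and the exact timing relations are assumptions based on the description above and on Experiment 1.

```python
# Illustrative event timeline for one Experiment 3 trial (labels, durations, and the
# exact ordering of cue and delay periods are our reading of the procedure; this is
# not the authors' presentation code).
import random

def experiment3_trial():
    """Yield (event, duration_ms) pairs; None marks response-terminated events."""
    yield ("gray fixation cross on the initial bar; hand on start peg", None)
    yield ("action cue: cross turns blue (CW) or yellow (CCW)", 2000)
    yield ("random interval before the second stimulus", random.randint(250, 750))
    yield ("second stimulus: tilted bar or circle, detected with the right foot", 1000)
    yield ("fixed delay after the foot pedal press", 600)
    yield ("auditory go signal (900 Hz)", 150)
    yield ("execute the prepared grasp-and-rotate manipulation", None)
```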
Results
Of the foot responses, 4.7% were excluded from the analysis
due to an incorrect execution of the delayed object manipulation.
Anticipatory foot responses occurred in only 2.6% of the re-
sponses.
The MANOVA of the foot RTs (see Table 1 for means) with the within-subject factors Manual Response (left grasp, CW rotation; right grasp, CCW rotation), Rotation Consistency (consistent, inconsistent, control), and Initial Stimulus Orientation (horizontal, vertical) revealed a main effect for Rotation Consistency, F(2, 13) = 4.34, p < .05, partial η² = .40. Post hoc t tests yielded shorter RTs for foot responses to visual motions consistent with the planned object manipulation (320 ms) than for foot responses to inconsistent motions (332 ms), t(14) = −3.08, p < .01, or control signals (332 ms), t(14) = −3.30, p < .01. Additionally, there was a trend toward an interaction between the factors Manual Response and Rotation Consistency, F(2, 13) = 3.00, p = .08, partial η² = .31, which reflects the tendency toward smaller rotation consistency effects when a left grasp and a CW rotation were required. There were no further significant effects (Fs < 1.8).
The MANOVA testing for grip consistency effects yielded no differences among grip-consistent (328 ms), grip-inconsistent (325 ms), and control stimuli (331 ms; F < 1). RT effects were calculated and entered into a 2 (Grip Consistency) × 2 (Rotation Consistency) MANOVA (see Figure 5 for means). There was no effect for Grip Consistency, F(1, 14) < 1, but there was a significant effect for Rotation Consistency, F(1, 14) = 5.46, p < .05, partial η² = .28, indicating an average positive effect (8 ms) for consistent rotational motion.
Discussion
The foot response latencies of Experiment 3 reveal the same
rotation consistency effect as reported in Experiments 1 and 2.
That is, faster foot responses were observed if the apparent visual
motions were consistent with the prepared manipulation action.
Because the signaling of the visual motions took place before the
manual action had to be executed and because the foot responses
were unrelated to the stimuli and apparent motions, we can exclude
the existence of stimulus–response priming at the level of response
initiation. Rather, the foot response latencies clearly indicate a
facilitated perceptual processing of visual motions consistent with
the concurrently intended motor act. The outcome of Experiment
3 therefore provides strong support for the notion of motor-visual
priming of object manipulations on motion perception.
General Discussion
In this study, we investigated motor-visual priming in the con-
text of object manipulation actions and provided evidence for the
presence of action-induced effects on visual motion perception. In
three experiments, we demonstrated that participants who prepared themselves to grasp and rotate an object detected the onset of a visual stimulus faster if it induced an apparent visual motion in the same
direction as implied by the intended manipulation action. It is
important to note that the effects on motion perception also emerged if
participants indicated their stimulus detections by pressing a foot
pedal, that is, by a motor response unrelated to the apparent visual
motion and the intended manual action. This observation clearly
rejects the possibility of stimulus–response priming effects and
provides straightforward evidence for a modulated visual process-
ing as the result of prepared object manipulation actions. We
therefore argue that the reported effects of rotation consistency
reflect motor-visual priming. The pattern of priming effects moreover suggests a positive impact of action planning on the detection of consistent visual motion, as well as a negative action-induced effect on the perception of inconsistent motions. That is, action preparation seems not only to facilitate the detection of action-consistent motions but also to impair the processing of action-inconsistent motions.

Figure 5. Mean effects (i.e., deviations from the control condition) in the foot response latencies of Experiment 3 as a function of the factors Rotation Consistency and Grip Consistency. Error bars represent standard errors.
Previous research has shown that the intention to grasp an object
selectively enhances the visual discrimination of the perceptual
dimensions size and orientation, which are relevant for the pro-
gramming of reach-to-grasp movements (Bekkering & Neggers,
2002; Craighero et al., 1999; Hannus et al., 2005; Symes et al.,
2008). It is noteworthy that it is known from studies on object
perception that these two stimulus dimensions are automatically
associated with specific types of motor responses (Ellis & Tucker,
2000; Tucker & Ellis, 1998). The present experiments now dem-
onstrate a motor-visual priming effect that goes beyond the process
of grip selection and direct visuomotor transformation. Our finding
of motor priming of visual motions thus provides new evidence for
a bidirectional coupling of perception and action. It substantially
extends previous research in at least two respects.
First, we investigated the question of motor interference in the
context of natural goal-directed manipulation actions and demonstrated that action-induced effects also emerge when participants
prepare a short sequence of motor movements, such as reaching,
grasping, and turning an object. So far, research in this field has
focused mostly on rather simple and one-dimensional motor re-
sponses like button-press responses or mere grasping movements
without object use (Craighero et al., 1999; Fagioli et al., 2007;
Hannus et al., 2005; Müsseler & Hommel, 1997; Wühr & Müsseler, 2001). The major advantage of the presented object manip-
ulation paradigm is that it allows a direct investigation of action
goals and the actual, intended distal effects in the environment.
Notably, the RT effects found in reaching for the object were driven by a movement that had to be performed at the end of the motor sequence (i.e., the object rotation). Not only does this indicate that participants planned the manipulation of the object before the reach-to-grasp movement was initiated but also, and most important, it shows very clearly that the preparation of a motor behavior that has not yet been executed has an impact on perceptual cognitive processes. The interference between intended manipulations and motion perception therefore provides strong support for the idea of action-induced effects. It is interesting to note that, in contrast to action-induced effects reported for mere reach-to-grasp movements (e.g., Craighero et al., 1999), the performance in detecting stimuli that afforded the same type of grip as the currently prepared action was fully unaffected when participants planned to grasp the object in order to manipulate it afterward. Apparently, the nature
of the intended action goal determines which stimulus features are
primed in the perceptual processing. This finding is in line with the
idea that action planning represents a goal-driven process that
involves an anticipation of the desired action effects at a sensory
level (often referred to as the idea of ideomotor action; see, e.g.,
Greenwald, 1970; Stock & Stock, 2004). We accordingly suggest
conceptualizing the observed priming effects of object manipula-
tion as perceptual resonance resulting from motor intentions (Rueschemeyer, Lindemann, van Elk, & Bekkering, in press; Schütz-Bosbach & Prinz, 2007).
Second, the interaction between object manipulations and mo-
tion detection shows that effects of action planning are not re-
stricted to the perceptual processing of intrinsic object properties.
Although there was evidence that visual motions facilitate the
selection of compatible motor responses (Bosbach et al., 2004), to
date, only very little was known about the reversed effect. A first
indication for action-induced effects on motion perception has
been provided by Wohlschläger (2000), showing that participants'
direction judgments of ambiguous apparent motions are systemat-
ically biased toward the direction of a simultaneously performed
turning action (but see also Zwickel et al., 2007, for the finding of
contrast effects). It is important to note that this finding has been
interpreted as evidence for a close coupling between concurrent
action execution and motion perception. The present study now
demonstrates that perceptual processing of motions is already
modulated as the result of motor intentions and mere action prep-
aration. It is furthermore important to notice that the interpretation
of the effects reported by Wohlschläger (2000) in terms of a
primed perceptual processing is potentially problematic because
differences in judgments of ambiguous motion displays are likely
to reflect a guessing bias in perceptually unclear situations. With
our findings of effects in the stimulus detection times, we can
exclude the possibility of judgment biases and thus provide, for the
first time, unambiguous empirical evidence for the notion that
motor behavior affects the perceptual processing of visual motions.
Another important advantage of the suggested object manipu-
lation paradigm is that it controls for potential confounds in earlier
studies on motor-visual priming. As mentioned above, Craighero et al. (1999) were among the first to report motor-visual priming effects of reach-to-grasp movements. In contrast to the present
study, they required participants to grasp objects positioned in
different orientations and observed faster responses when the go
signals afforded the same type of grip as the target object. Because
motor actions were fully determined by the orientation of the target
object, it was unclear whether the stimulus processing interacted
with the prepared response or with the cognitive representation of
the object. Moreover, it is important to note that target objects and go
signals in the consistent trials of Craighero et al.'s (1999) experiments were always oriented in parallel. It therefore might also be
possible that priming effects were driven by an overlap of visual
properties (i.e., orientation or grip affordances) between the go
signal and the target object (stimulus– object congruency). Due to
the use of a single X-shaped manipulandum, we could ensure that
the target object was always associated with both possible grasping
responses and that its orientation was held constant across all trials.
Consequently, we can reject this alternative account and exclude the possibility that the observed RT effects were driven by the congruency of the
to-be-detected stimulus and the to-be-grasped object.
The motor-visual priming effect—that is, the facilitated process-
ing of action-consistent motions and the impaired processing of
action-inconsistent motions—seems to be in conflict with studies
that reported an impaired accuracy in the identification of stimuli
that share features with a prepared action (the so-called action-effect blindness; Kunde & Wühr, 2004; Müsseler & Hommel, 1997; Wühr & Müsseler, 2001). For example, Müsseler and Hom-
mel (1997) presented left- and right-pointing arrowheads shortly
before the execution of a manual left or right key press response
and found impaired identifications for arrows that corresponded to
the action (e.g., left-pointing arrowhead while planning a left key
press response). A crucial difference between the finding of motor-
visual priming and the finding of action-effect blindness is that the
former effect represents an RT difference in a speeded task, whereas
the latter effect is found in the accuracy of unspeeded perceptual
judgments. Although there is evidence that these methodological
differences could account for the different perceptual effects (San-
tee & Egeth, 1982), we argue that the two findings are also, from
a theoretical point of view, not in contradiction. The impaired
accuracy in the perception of action-consistent stimuli has mostly
been explained within a common coding framework (e.g., theory
of event coding; Hommel et al., 2001), which suggests that per-
ception and action planning share cognitive codes that represent
the features of both perceived stimuli and intended actions. It is
furthermore assumed that the preparation of an action and its
maintenance in short-term memory requires an integration of all
associated and activated feature codes into one coherent action
plan. Once a feature code becomes integrated, it is bound and, as
a consequence, less available for another integration such as is
needed for the representation of a subsequent perceptual event.
The likelihood that a certain feature code has to be integrated when
an event is perceived depends on the feature’s relevance for the
task (Hommel et al., 2001). That is, unattended task-irrelevant
features may become activated but will not become part of any
binding. In contrast to code integration, the mere activation of
feature codes is assumed to facilitate the perceptual processing of
events sharing these features. The planning of an action and the
resulting integration of feature codes therefore should cause inhibition effects only when there is an attempt to integrate these codes into a second cognitive representation (see also Müsseler, 1999, for a more
detailed discussion). Taken together, it seems to be important to
discern that the direction of the motion in the present paradigm
was irrelevant to the participants’ task, and no short-term memory
representation of the perceptual event had to be created for later
recall. Due to this, and in line with the theoretical considerations
outlined above, action-effect blindness was not expected to occur.
Instead, our data indicated a facilitation of motion detections
sharing features with the intended action. Whether the encoding of
visual motions into a short-term memory representation is im-
paired, as predicted by the theory of event coding (Hommel et al.,
2001), cannot be answered at this point and requires additional
investigations of action effects on the accuracy of motion percep-
tion.
In sum, the present study demonstrates an action-induced effect
of object manipulations on motion perception and thus provides
evidence for a bidirectional link between motor representations
and perceptual representations that cannot be explained by visuo-
motor associations of superficial motor-object characteristics. The
motor-visual priming of motion perception originates from the
relation between prepared actions (i.e., object manipulation) and
expected action outcomes (i.e., rotational motions) and seems to suggest that visual perception is modulated toward changes in the
environment representing a potential consequence of the currently
intended motor act. Our finding can thus be interpreted in line with
theories of ideomotor action (Stock & Stock, 2004), which hold
that actions are represented and planned in terms of their sensory
consequences. Accordingly, the reported motor-visual priming ef-
fect on motion perception provides empirical support for the
notion that the planning of goal-directed actions is accompanied by
an activation of sensory representations of the intended action
consequences.
References
Bekkering, H., & Neggers, S. F. W. (2002). Visual search is modulated by
action intentions. Psychological Science, 13, 370 –374.
Bekkering, H., & Pratt, J. (2004). Object-based processes in the planning
of goal-directed hand movements. Quarterly Journal of Experimental
Psychology, 57, 1345–1368.
Bosbach, S., Prinz, W., & Kerzel, D. (2004). A Simon effect with station-
ary moving stimuli. Journal of Experimental Psychology: Human Per-
ception and Performance, 30, 39 –55.
Brass, M., Bekkering, H., & Prinz, W. (2001). Movement observation
affects movement execution in a simple response task. Acta Psycho-
logica, 106, 3–22.
Castiello, U. (2005). The neuroscience of grasping. Nature Reviews Neu-
roscience, 6, 726 –736.
Craighero, L., Fadiga, L., Rizzolatti, G., & Umiltà, C. (1998). Visuomotor
priming. Visual Cognition, 5, 109 –125.
Craighero, L., Fadiga, L., Rizzolatti, G., & Umiltà, C. (1999). Action for
perception: A motor-visual attentional effect. Journal of Experimental
Psychology: Human Perception and Performance, 25, 1673–1692.
Deubel, H., Schneider, W. X., & Paprotta, I. (1998). Selective dorsal and
ventral processing: Evidence for a common attentional mechanism in
reaching and perception. Visual Cognition, 5, 81–107.
Ellis, R., & Tucker, M. (2000). Micro-affordance: The potentiation of
components of action by seen objects. British Journal of Psychology, 91,
451– 471.
Fagioli, S., Hommel, B., & Schubotz, R. I. (2007). Intentional control of
attention: Action planning primes action-related stimulus dimensions.
Psychological Research, 71, 22–29.
Glover, S. (2004). Separate visual representations in the planning and
control of action. Behavioral and Brain Sciences, 27, 3–24.
Greenwald, A. G. (1970). A choice reaction time test of ideomotor theory.
Journal of Experimental Psychology, 86, 20 –25.
Hamilton, A., Wolpert, D., & Frith, U. (2004). Your own action influences
how you perceive another person’s action. Current Biology, 14, 493–
498.
Hannus, A., Cornelissen, F. W., Lindemann, O., & Bekkering, H. (2005).
Selection-for-action in visual search. Acta Psychologica, 118, 171–191.
Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The
theory of event coding (TEC): A framework for perception and action
planning. Behavioral and Brain Sciences, 24, 849 –937.
Jacob, P., & Jeannerod, M. (2005). The motor theory of social cognition:
A critique. Trends in Cognitive Sciences, 9, 21–25.
Kunde, W., & Wühr, P. (2004). Actions blind to conceptually overlapping
stimuli. Psychological Research, 68, 199 –207.
Linnell, K. J., Humphreys, G. W., McIntyre, D. B., Laitinen, S., & Wing,
A. M. (2005). Action modulates object-based selection. Vision Research,
45, 2268 –2286.
Miall, R. C., Stanley, J., Todhunter, S., Levick, C., Lindo, S., & Miall, J. D.
(2006). Performing hand actions assists the visual discrimination of
similar hand postures. Neuropsychologia, 44, 966 –976.
Michaels, C. F. (1988). S-R compatibility between response position and
destination of apparent motion: Evidence for the detection of affor-
dances. Journal of Experimental Psychology: Human Perception and
Performance.
Müsseler, J. (1999). How independent from action control is perception? An event coding account for more equally ranked crosstalks. In G. Aschersleben, T. Bachmann, & J. Müsseler (Eds.), Cognitive contribu-
tions to the perception of spatial and temporal events (pp. 121–148).
Amsterdam: Elsevier.
Mu¨ sseler, J., & Hommel, B. (1997). Blindness to response-compatible
stimuli. Journal of Experimental Psychology: Human Perception and
Performance, 23, 861– 872.
O’Brien, R. G., & Kaiser, M. K. (1985). MANOVA method for analyzing
repeated measures designs: An extensive primer. Psychological Bulletin, 97, 316–333.
Proctor, R. W., Van Zandt, T., Lu, C. H., & Weeks, D. J. (1993).
Stimulus-response compatibility for moving stimuli: Perception of af-
fordances or directional coding? Journal of Experimental Psychology:
Human Perception and Performance, 19, 81–91.
Rosenbaum, D. A., Meulenbroek, R. J., Vaughan, J., & Jansen, C. (2001). Posture-based motion planning: Applications to grasping. Psychological Review, 108, 709–734.
Rueschemeyer, S.-A., Lindemann, O., van Elk, M., & Bekkering, H. (in press). Embodied cognition: The interplay between automatic resonance and selection-for-action mechanisms. European Journal of Social Psychology.
Santee, J. L., & Egeth, H. E. (1982). Do reaction time and accuracy measure the same aspects of letter recognition? Journal of Experimental Psychology: Human Perception and Performance, 8, 489–501.
Schütz-Bosbach, S., & Prinz, W. (2007). Perceptual resonance: Action-induced modulation of perception. Trends in Cognitive Sciences, 11, 349–355.
Stock, A., & Stock, C. (2004). A short history of ideo-motor action. Psychological Research, 68, 176–188.
Symes, E., Tucker, M., Ellis, R., Vainio, L., & Ottoboni, G. (2008). Grasp preparation improves change detection for congruent objects. Journal of Experimental Psychology: Human Perception and Performance, 34, 854–871.
Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24, 830–846.
Vogt, S., Taylor, P., & Hopkins, B. (2003). Visuomotor priming by
pictures of hand postures: Perspective matters. Neuropsychologia, 41,
941–951.
Wohlschläger, A. (2000). Visual motion priming by invisible actions. Vision Research, 40, 925–930.
Wühr, P., & Müsseler, J. (2001). Time course of the blindness to response-compatible stimuli. Journal of Experimental Psychology: Human Perception and Performance, 27, 1260–1270.
Zwickel, J., Grosjean, M., & Prinz, W. (2007). Seeing while moving:
Measuring the online influence of action on perception. Quarterly Jour-
nal of Experimental Psychology, 60, 1063–1071.
Received April 3, 2006
Revision received September 3, 2008
Accepted October 30, 2008