Vision-for-action: The effects of object property discrimination and action state on affordance compatibility effects

STEVEN P. TIPPER, MATTHEW A. PAUL, and AMY E. HAYES
University of Wales, Bangor, Wales

Psychonomic Bulletin & Review, 2006, 13 (3), 493-498

ABSTRACT: When a person views an object, the action the object evokes appears to be activated independently of the person's intention to act. We demonstrate two further properties of this vision-to-action process. First, it is not completely automatic, but is determined by the stimulus properties of the object that are attended. Thus, when a person discriminates the shape of an object, action affordance effects are observed; but when a person discriminates an object's color, no affordance effects are observed. The former property, shape, is associated with action, such as how an object might be grasped; the latter property, color, is irrelevant to action. Second, we also show that the action state of an object influences evoked action. Thus, active objects, with which current action is implied, produce larger affordance effects than passive objects, with which no action is implied. We suggest that the active object activates action simulation processes similar to those proposed in mirror systems.

Author note: This research was supported by Economic and Social Research Council Grant RES-000-23-0429, awarded to S.P.T. and A.E.H. Correspondence concerning this article should be addressed to S. P. Tipper, Centre for Clinical and Cognitive Neuroscience, School of Psychology, University of Wales, Bangor, Gwynedd, LL57 2AS, Wales (e-mail: s.tipper@ ).
A number of authors have emphasized the close link
between perception and action. Clearly, perceptual sys-
tems evolved primarily to enable organisms to extract
information from the visual array to enable actions ap-
propriate to support survival (e.g., finding and pursuing
prey, avoiding predators, etc.). Therefore, because of this
intimate relationship between vision and action through-
out evolution, these systems should not be considered to
be completely independent modules (see, e.g., Gibson,
1979; Prinz, 1990).
It is now well established that vision can be converted
fluently into action. This seems to be the case even when
people have no intention of acting on an object they may
be viewing; an action may nevertheless be covertly acti-
vated and can influence ongoing behavior (see Prinz &
Hommel, 2002). For example, the Simon (1969) effect
reveals that although irrelevant to current task goals, the
spatial location of a stimulus relative to an effector such
as the hand influences processing, whereby, for example,
responses are faster when the visual stimulus and the re-
sponding hand are on the same side of space. Although
there can be somewhat complex relationships between vi-
sion and action in Simon-type tasks (see, e.g., Hommel,
1995; Michaels, 1988), the link between vision and action
appears to be a ubiquitous finding.
Similarly, in people with lesions to the frontal lobe, the
fluent and automatic link between vision and action is re-
vealed. Such individuals, even when told not to respond
to an object placed in front of them, nevertheless act on
the object in an appropriate manner: They might reach out
and grasp a coffee cup, for example, even though verbal-
izing that they know they should not grasp the object (see,
e.g., Lhermitte, 1983). This “utilization behavior” again
clearly demonstrates that the actions afforded by an object
appear to be automatically encoded, even when no action is intended.
The final example is the technique developed by Tucker
and Ellis (1998), and this is the focus of the present ar-
ticle. In a range of experiments (e.g., Ellis & Tucker, 2000;
Phillips & Ward, 2002; Tucker & Ellis, 2004), it has been
demonstrated that even though the grasp response evoked
by a visual object is irrelevant to the participant’s task, it
still appears to be encoded, speeding response when com-
patible with a current action, and slowing response when
incompatible. In one study, participants were required to
decide whether an object was upright (press the right key,
for example), or inverted (press the left key). If an object
was upright and evoked a right-hand grasp, such as a fry-
ing pan with its handle oriented toward the right hand,
reaction times (RTs) were faster than if the handle was
oriented toward the left hand. As this example illustrates,
when motor representations such as grasping actions are
spontaneously activated by a visual stimulus, responses
that are compatible with the evoked action are facilitated,
and responses that are incompatible with the evoked ac-
tion are made more difficult, as indicated by the RT differences.
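As a toy illustration of how such a compatibility effect is typically quantified, the short sketch below takes the difference between mean incompatible and compatible RTs. The numbers are invented purely for illustration and are not data from any of the studies cited here.

```python
from statistics import mean

# Toy illustration only: invented RTs (msec) for responses made when the
# graspable part of the object is oriented toward (compatible) vs. away
# from (incompatible) the responding hand.
rt_compatible = [612, 598, 630, 605]
rt_incompatible = [641, 623, 655, 637]

# A positive difference indicates facilitation of the afforded response.
compatibility_effect = mean(rt_incompatible) - mean(rt_compatible)
print(f"Compatibility effect: {compatibility_effect:.1f} msec")
```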
The present work engages two issues concerned with
these automatic vision-to-action processes: The first is the
role of attention, and the second concerns the action state
of the viewed object. As noted above, Tucker and Ellis
(1998) have shown that action affordances appear to be
evoked automatically. However, there is still the issue of
which properties of the visual object are focused on to
produce such grasp affordances. To study this, a range of
tasks have been employed, for example: deciding whether
an object is inverted or upright, deciding whether an ob-
ject is to be found in a garage or a kitchen, and deciding
whether an object is living or man-made. Importantly, in
each of these tasks, attention is focused on object prop-
erties such as shape and meaning. To know whether an
object is inverted, for example, close analysis of its shape
is necessary to identify what the object is, and to access
memories concerning its normal orientation. Although a
reach-and-grasp response is not required, attention is fo-
cused on the object’s shape information that is necessary
to guide such actions. The question we ask here is whether
the automatic vision-to-action affordance compatibility
effects are observed when attention is directed to a stim-
ulus property that is unrelated to grasp affordances. We
contrast the automatic grasp compatibility effects when
participants identify the shape of an object (which is a
property related to grasping the object) with conditions
in which attention is directed to the color of the object
(which does not influence grasp).
In both conditions, attend shape and attend color, all
of the participants view exactly the same objects (door
handles evoking right- or left-hand reach-to-grasp). Con-
sider Figure 1. Panel A shows the square-shaped handle,
and Panel B shows the round-shaped handle. Clearly,
when participants are required to discriminate these
shapes, they are focusing attention on a property of the
object that would influence the final stages of grasp as the
hand comes into contact with the handle. Note, also, that
these stimuli vary in color, being tinted blue (Panel A) and
green (Panel B). When participants make left and right
keypress responses to discriminate color, this property is
unrelated to grasp. That is, whether a handle was blue or
green would not affect how it was grasped.
We predict that Tucker and Ellis’s (1998) grasp affor-
dance effects will be confirmed when participants dis-
criminate door-handle shape (square vs. round). However,
it is not known whether similar action affordance compati-
bility effects will be observed when color is discriminated.
It is possible that no such effects will be observed. If this
is the case, it would reveal that the grasp evoked by an
object can be encoded automatically (i.e., when grasping
is not relevant to a person’s task), but for this to occur, at-
tention has to be focused on properties of the object, such
as shape, that are linked to action.
The second issue these studies engage concerns the na-
ture of the action state of the observed object. We distin-
guish between “passive” and “active” states of an object.
Passive states are the typical form of object representa-
tion, in which an object evokes an action, but no action
appears to be taking place. This is the situation examined
in all studies to date. When a person views a frying pan
with the handle oriented toward the right hand, for ex-
ample, this stimulus clearly evokes an action with the right
hand, but there is no sense that the object is actually being
acted upon. In other situations, an object state can strongly
imply that some force/action is acting on it. Consider pan-
els C and D in Figure 1. These show the door handles de-
pressed. This position of the handle can only be achieved
if a force is acting down on the internal spring mechanism.
It should be noted that in initial pilot studies, action af-
fordance effects with the door-handle stimuli were very
small. Therefore, in an attempt to increase the affordance
effects, and also to specifically increase the sense of ac-
tive object state, we presented short video clips of a hand
reaching toward, grasping, and pushing the handle down,
prior to starting the experiment (to be described in detail below).
We predict that larger action affordance compatibility
effects will be observed in the active object state, because
this implies an action evoked by another person. This idea
is supported by research from Rizzolatti and colleagues
(e.g., di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzo-
latti, 1992), who have shown cells in frontal area F5 of the
monkey that respond when the animal makes a particular
action, such as a pinch grip to grasp a peanut, but also
when the animal observes someone else perform the same
action. A substantial range of studies have confirmed the
existence of these so-called “mirror” cells in the monkey,
and also that similar networks are activated in humans
when they observe the actions of other people (e.g., see
Rizzolatti & Craighero, 2004, for a review). Of particular
pertinence to the present study, Ferrari, Rozzi, and Fogassi
(2005) have recently discovered tool-responding mirror
cells in the lateral sector of F5. These cells are selectively
activated when the monkey observes an experimenter with
a tool. Such cells appear to encode taking possession of an object and modifying its state, as in the act of depressing a door handle in the present study.

Figure 1. Representation of the stimuli used in Experiment 1. Panels A and B represent the "passive" objects: Panel A shows the square-shaped handle, and Panel B shows the round-shaped handle. These stimuli also varied in color (blue or green) and pointing direction (left or right). Half the participants discriminated shape while ignoring color, and half discriminated color while ignoring shape. Panels C and D show the "active" object state, in which action is implied.
It appears, then, that vision–action neural systems
simulate observed behavior. It is noteworthy that the mir-
ror system is not necessarily the same as that encoding
the action possibilities of viewed objects. This is because,
although a subset of cells in F5 respond when a person
views an action, these “mirror cells” do not necessarily
react when a person views an object that would evoke a
particular action. Therefore, when a person is viewing the
depressed door handle in the active state, action affordance
effects might be more robust because two mechanisms
are activated: the previously discussed grasp response,
evoked when viewing an object, and a simulation of an-
other person’s action that is necessary to account for the
active state of the handle.
This idea—that when the depressed handle is viewed,
the image strongly implies that action has produced this
state of the object—could be questioned, because no ac-
tion is actually observed. However, although no hand ac-
tion is visible in the scene, recent work suggests that this is
not necessary to activate action simulation or mirror sys-
tems. For example, Umiltà et al. (2001) demonstrated that
when (1) an object is placed in front of a monkey, (2) the
object is occluded by a barrier, and then (3) a hand reaches
behind the barrier to grasp the object, the mirror cells of
F5 still respond even though the animal cannot directly
view the action. Mirror neurons have also been shown to
respond to the sound of an action being performed, such
as the sound of tearing paper, even though the action can-
not be seen (see, e.g., Kohler et al., 2002).
To preview our findings: We do find that the object
property participants discriminate is critical for action
compatibility effects. When participants analyze shape,
action affordance effects are observed; when participants
analyze color in the same objects, no effects are observed.
Second, when participants analyze the shape of visual ob-
jects, the action state appears to influence the action com-
patibility effects. As predicted, larger effects are observed
in the active object state than in the passive object state.
EXPERIMENT 1

Method
Participants. Thirty-two undergraduates from the University of Wales, Bangor, participated for course credit (27 females, 5 males; mean age = 21 years). All of the participants had normal or corrected-to-normal visual acuity and stereopsis, and none was color blind. All except 3 participants reported that they were right-handed. Participants
were randomly assigned to one of two experimental groups. All were
naive as to the purpose of the study and gave informed consent prior
to their participation.
Apparatus and Stimuli. Stimuli were photos of door handles
presented in a nonvisible frame measuring 600 × 600 pixels at the
center of the screen. All stimuli were displayed on a white background
and were viewed on a 19-in. computer monitor (1,280 × 1,024 reso-
lution) from a distance of 57 cm (maintained by use of a chin rest).
A gray fixation cross (font: Courier New, size: 18) was visible at the
center of the screen before and after the handle appeared.
Handles could appear in 16 different configurations: two colors
(green or blue), two shapes (round handle or square handle), two
directions (handle pointing to the left or right), and two orientations
(one “passive,” where the handle was horizontal, and another “ac-
tive,” where the handle was rotated 45º from horizontal). Thus there
were 2 × 2 × 2 × 2 = 16 individual stimuli, examples of which are
shown in Figure 1.
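For concreteness, the stimulus set can be written out as the crossing of the four binary factors described above. The sketch below simply enumerates the 16 configurations; the variable names are our own illustrative labels, not taken from the original experiment materials.

```python
from itertools import product

# The four binary stimulus factors described above (labels are illustrative).
colors = ["green", "blue"]
shapes = ["round", "square"]
directions = ["left", "right"]   # which side the handle points toward
states = ["passive", "active"]   # horizontal vs. rotated 45 deg ("depressed")

stimuli = [
    {"color": c, "shape": s, "direction": d, "state": st}
    for c, s, d, st in product(colors, shapes, directions, states)
]

assert len(stimuli) == 16  # 2 x 2 x 2 x 2 configurations, as in Experiment 1
```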
Procedure and Design. The participants were seated in front
of the display screen in a darkened room and were initially shown a
brief video clip consisting of four sequential subclips, each lasting
2 sec: a male/female left/right hand reaching toward and operating a
door handle (for an example, see Figure 2).
Each individual trial consisted of a fixation screen presented for
1,000 msec followed by the to-be-responded-to handle presented
centrally for 1,000 msec (or until response), which was then re-
placed by a fixation screen lasting 1,000 msec. A short tone lasting
1,000 msec indicated whether the correct or incorrect response (or
lack of response within 1,000 msec) had been made. The partici-
pants pressed the space bar to initiate trials.
The participants were instructed to maintain central fixation
throughout each trial and to respond to either the shape or color of the
door handle that appeared (half of the participants responded to color
and half to shape). Emphasis was placed on both speed and accuracy
of responses. All responses were made on a computer keyboard positioned
centrally with respect to the screen and the participant. In the color distinc-
tion condition, the participants responded “blue” or “green” by press-
ing either the A or L key on the keyboard. In the shape distinction
condition, the participants responded either “round” or “square” by
pressing either the A or L key on the keyboard. Response keys were
counterbalanced left and right in each condition.
The participants performed a 16-trial practice block to familiar-
ize themselves with the task, after which the 128 randomized ex-
perimental trials began. There was a forced 1-min break halfway
through the experimental block. The experiment lasted approxi-
mately 20 min total.
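To summarize the trial structure in one place, here is a minimal simulated sketch of a single trial and block. The timings, response keys, and block sizes are taken from the text, but the function names, the stubbed response and RT, and the key-to-response mapping are our own illustrative placeholders, not the software actually used in the experiment.

```python
import random

# Reported trial timings (msec); everything else here is an illustrative stub.
FIXATION_MS = 1000       # fixation cross before the handle appears
STIMULUS_MAX_MS = 1000   # handle visible until response, or 1,000 msec maximum
POST_FIXATION_MS = 1000  # fixation cross after the handle
FEEDBACK_TONE_MS = 1000  # tone signalling correct, incorrect, or missed response

RESPONSE_KEYS = ("A", "L")  # left/right keys, counterbalanced across participants

def simulate_trial(stimulus, respond_to="shape"):
    """Placeholder for one trial: fixation -> handle -> fixation -> feedback tone."""
    expected = "A" if stimulus[respond_to] in ("round", "blue") else "L"  # arbitrary mapping
    pressed = random.choice(RESPONSE_KEYS)              # stand-in for a keypress
    rt_ms = random.uniform(400, STIMULUS_MAX_MS)        # stand-in for a reaction time
    return {"rt_ms": rt_ms, "correct": pressed == expected}

# As described above: a 16-trial practice block, then 128 experimental trials.
practice = [simulate_trial({"shape": "round", "color": "blue"}) for _ in range(16)]
experimental = [simulate_trial({"shape": "square", "color": "green"}) for _ in range(128)]
```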
Results and Discussion
For each participant, the mean RT for correct responses
was calculated for compatible and incompatible condi-
tions for both the passive and active object state condi-
tions. Panels A and B of Figure 3 show the RT effects, and
Table 1 shows the error rates. A three-way mixed ANOVA
was undertaken on the RTs. There was no significant main
effect of the between-participants factor of stimulus dis-
crimination (color vs. shape) [F(1,30) = 0.037, n.s.]. There
was a significant effect of object affordance compatibility
[F(1,30) = 12.957, p < .0001], where RTs were faster for
objects compatible with the responding hand. The critical
compatibility × stimulus discrimination (color vs. shape)
interaction was obtained [F(1,30) = 15.552, p < .0001],
where compatibility effects were larger when discriminat-
ing shape than when discriminating color. There was no
main effect of passive or active object state [F(1,30) =
0.283, n.s.]. Neither of the two-way interactions was signif-
icant [object state × stimulus discrimination, F(1,30) =
2.827, n.s.; compatibility × object state, F(1,30) = 3.146,
n.s.]. However, the three-way stimulus discrimination ×
compatibility × object state interaction was
significant [F(1,30) = 4.776, p < .04]. Error rates were
analyzed analogously to the RT data; the ANOVA revealed
no significant main effects or any interactions.
Further ANOVAs analyzed color and shape discrimina-
tion RT data separately. In analyzing the RTs in the color
discrimination task, we found no main effect of compat-
ibility [F(1,15) = 0.077, n.s.] or object state [F(1,15) =
0.664, n.s.], and no interaction [F(1,15) = 0.085, n.s.]. In sharp
contrast, shape discrimination showed highly signifi-
cant action compatibility effects [F(1,15) = 23.025, p <
.0001], but no main effect of action state [F(1,15) =
2.437, n.s.]. However, there was a significant interac-
tion between affordance compatibility and object action
state [F(1,15) = 7.823, p < .02]. Planned contrasts of
the shape discrimination data were conducted using one-
tailed t tests, as compatibility effects are predicted a priori
from previous studies (e.g., Phillips & Ward, 2002; Tucker
& Ellis, 1998). Significant action compatibility effects
were found for both active and passive action state objects
[respectively, t(15) = 5.494, p < .0001, one-tailed; and
t(15) = 2.009, p = .03, one-tailed].1
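As an illustration of how such planned contrasts can be computed from per-participant condition means, the sketch below generates synthetic data and runs one-tailed paired t tests separately for the passive and active object states. The data, the 25-msec compatibility shift, and the column names are all invented for illustration; this is not the analysis code used for the experiment.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-participant cell means (msec) for the shape-discrimination group:
# 16 participants x 2 object states x 2 compatibility conditions. The 25-msec
# compatibility shift is arbitrary and for illustration only.
rows = []
for pp in range(16):
    for state in ("passive", "active"):
        for comp, shift in (("compatible", 0), ("incompatible", 25)):
            rows.append({"participant": pp, "state": state, "compatibility": comp,
                         "rt": 600 + shift + rng.normal(0, 30)})

cell_means = (pd.DataFrame(rows)
                .set_index(["participant", "state", "compatibility"])["rt"]
                .unstack("compatibility"))

# Planned one-tailed contrasts (incompatible > compatible), run separately for
# the passive and active object states. Halving the two-tailed p value assumes
# the effect is in the predicted direction.
for state in ("passive", "active"):
    cell = cell_means.xs(state, level="state")
    t, p_two_tailed = stats.ttest_rel(cell["incompatible"], cell["compatible"])
    print(f"{state}: t(15) = {t:.3f}, one-tailed p = {p_two_tailed / 2:.4f}")
```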
The results obtained from Experiment 1 appear to pro-
vide clear answers to our two research questions. First,
the property of an object attended is crucial for observing
action affordance effects: They are obtained with shape
but not color discrimination. Second, object state does
appear to influence affordances: Active objects (handle
depressed) produce larger action affordances than passive
objects (handle horizontal).
EXPERIMENT 2

This experiment tested whether the action affordance
effects observed when participants discriminate object
shape are due to the action affordances evoked by the ob-
ject, rather than simply being due to low-level visual prop-
erties such as the orientation of the stimulus (horizontal vs.
45º slope). We reran the shape discrimination task with an
additional 16 participants and with two changes: First, the
stimuli shown in Figure 4 were used. These were identical
to the stimuli in Experiment 1 except that the surround
was removed. In pilot studies, no participants reported
this object to be a door handle. The second change was
that no video clip was shown, to prevent participants from
encoding the object as something that could be grasped
and depressed like a door handle. All other methods and
the procedure were identical to those used in the shape
discrimination condition of Experiment 1.
Results and Discussion
The mean RTs for correct responses can be seen in
Panel C of Figure 3. The RT effects were analyzed as in
Experiment 1. The main effect of action compatibility
was nonsignificant [F(1,15) = 1.58, n.s.]. The main ef-
fect of passive versus active object was also nonsignifi-
cant [F(1,15) = 0.184, n.s.], as was the interaction between
these factors [F(1,15) = 0.009, n.s.]. Errors in the passive
object condition were 3% and 3.6% for compatible and
incompatible conditions, respectively, and for the active
condition they were 4.1% and 4.1% for the compatible and
incompatible conditions. There were no significant main
effects or interactions in the analyses of these errors.
Although the stimuli and the shape discrimination task
in Experiment 2 were very similar to those of Experi-
ment 1, the subtle differences in the visual stimuli had a
dramatic effect. That is, when the objects were no longer
perceived as graspable door handles that could be seen in
passive or active states, there were no action affordance
effects, and these results did not differ between passive
(horizontal) and active (45º slope) stimuli.

Figure 2. Three frames from the video clip presented to participants prior to testing (male right-hand reach-to-grasp). Four videos were shown to participants. These represented a male and female hand, and reaches with the left and right hands.

Figure 3. Reaction time data averaged across participants for Experiments 1 and 2. Panel A is shape discrimination of door handles in Experiment 1. Panel B is color discrimination of door handles in Experiment 1. Panel C is shape discrimination of the bar stimuli in Experiment 2.
GENERAL DISCUSSION

Previous work (e.g., Tucker & Ellis, 1998) has estab-
lished that when a person views an object, the action
evoked by the object appears to be automatically encoded.
Thus, even though the grasp response was not relevant
to the participant’s task, it nevertheless appeared to be
encoded, and it facilitated responses that were compat-
ible with it. In the present study, we have confirmed these
effects and demonstrated two further properties of these vision-to-action processes.
First, the property of the visual object that is attended
is critical for action affordances to be evoked. When par-
ticipants attended to shape (e.g., when discriminat-
ing the shape of the door handles), clear action affordance
compatibility effects resulted. In sharp contrast, when
they attended to the color of the door handles, no action
affordance effects were observed. It is important to note
two things: First, exactly the same stimuli were observed
by all participants; hence, the contrast in affordance ef-
fects is based purely on the stimulus property attended.
Second, there was no hint of a main effect between color
and shape discrimination; hence, both discriminations
were resolved at about the same time and were matched
for difficulty. Speed of processing has been shown to be
important in other vision–action tasks, such as the Simon
effect (see, e.g., Hommel, 1994), so this was an important
factor to rule out.
R. Ellis (personal communication) notes that action af-
fordance effects can also be produced when object texture
is identified. Although texture discrimination might seem
similar to color discrimination, we feel there are funda-
mental differences. For example, as made clear by Gibson
(1979), texture gradients, unlike variations in color, are
important cues to the 3-D properties of the world (such
as shadow). Texture can provide important cues about the
properties of the 3-D structure of an object that are nec-
essary to guide grasping actions. The existence of affor-
dance effects when encoding object texture also supports
the notion that attention must be directed to object proper-
ties associated with shape.

Table 1. Average error rates (percentage errors) for compatible and incompatible affordance conditions, for passive and active stimulus states, in Experiment 1.
The second property of vision-to-action processes re-
vealed by this study concerned the role of the action state
of the viewed object. All previous studies have presented
what we have termed passive objects. Although they evoke
specific actions, there is no evidence that action is actually
under way when a person views the object. In contrast to
this, we presented an active action state object: The door
handles were depressed by 45º. Our idea was that this im-
plied action state would activate visual–motor systems
similar to the mirror systems discussed by Rizzolatti and
colleagues. Recent work has shown that even when an ac-
tion cannot be directly observed, but can be inferred from
other visual or auditory cues, the observer simulates the
action. In our task, the depressed door handle, combined
with the prior priming of this event via exposure to video
clips showing such action, strongly implies force acting
on the object.
The results obtained clearly support the notion that in
the active object condition, there may be both activation
of affordance directly from perception of the object (see,
e.g., Tucker & Ellis, 1998) and simulation of the inferred
grasp and depression action. This combination produced
more robust effects than when only passive objects
were viewed. It is striking that in the color discrimination
condition and the bar condition of Experiment 2, there was
no hint of an action affordance effect in the active condi-
tion, which further rules out low-level stimulus properties
such as orientation of the handle.
In sum, two further properties of the vision-to-action
processes have been identified. First, activation of ac-
tion, such as grasp, is not completely automatic. Although
such effects can be produced when an object’s action af-
fordance is irrelevant to a person’s task, nevertheless, at-
tention has to be oriented to action-relevant features of the
object, such as shape. Second, there may be two effects
mediating action affordances—one evoked by viewing an
object, and a second evoked by simulation of observed or
inferred actions directed to the object.
REFERENCES

di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992). Understanding motor events: A neurophysiological study. Experimental Brain Research, 91, 176-180.
Ellis, R., & Tucker, M. (2000). Micro-affordance: The potentiation of components of action by seen objects. British Journal of Psychology, 91, 451-471.
Ferrari, P. F., Rozzi, S., & Fogassi, L. (2005). Mirror neurons re-
sponding to observation of actions made with tools in monkey ventral
premotor cortex. Journal of Cognitive Neuroscience, 17, 212-226.
Gibson, J. J. (1979). The ecological approach to visual perception. Bos-
ton: Houghton Mifflin.
Hommel, B. (1994). Spontaneous decay of response-code activation.
Psychological Research, 56, 261-268.
Hommel, B. (1995). Stimulus–response compatibility and the Simon
effect: Toward an empirical clarification. Journal of Experimental
Psychology: Human Perception & Performance, 21, 764-775.
Kohler, E., Keysers, C., Umiltà, M. A., Fogassi, L., Gallese, V., &
Rizzolatti, G. (2002). Hearing sounds, understanding actions: Ac-
tion representation in mirror neurons. Science, 297, 846-848.
Lhermitte, F. (1983). “Utilization behaviour” and its relation to lesions
of the frontal lobes. Brain, 106, 237-255.
Michaels, C. F. (1988). S–R compatibility between response position
and destination of apparent motion: Evidence of the detection of af-
fordances. Journal of Experimental Psychology: Human Perception &
Performance, 14, 231-240.
Phillips, J. C., & Ward, R. (2002). S–R correspondence effects of ir-
relevant visual affordance: Time course and specificity of response
activation. Visual Cognition, 9, 540-558.
Prinz, W. (1990). A common coding approach to perception and action.
In O. Neumann & W. Prinz (Eds.), Relationships between perception
and action: Current approaches (pp. 167-201). Berlin: Springer.
Prinz, W., & Hommel, B. (Eds.). (2002). Attention and performance XIX: Common mechanisms in perception and action. Oxford: Oxford University Press.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system.
Annual Review of Neuroscience, 27, 169-192.
Simon, J. R. (1969). Reactions toward the source of stimulation. Journal
of Experimental Psychology, 81, 174-176.
Tucker, M., & Ellis, R. (1998). On the relations between seen objects
and components of potential actions. Journal of Experimental Psy-
chology: Human Perception & Performance, 24, 830-846.
Tucker, M., & Ellis, R. (2004). Action priming by briefly presented
objects. Acta Psychologica, 116, 185-203.
Umiltà, M. A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Key-
sers, C., & Rizzolatti, G. (2001). I know what you are doing: A
neurophysiological study. Neuron, 31, 155-165.
NOTE

1. We conducted a pilot study identical to the shape discrimination
task, but without showing participants video clips of hands depressing
the door handles prior to beginning the task. The pattern of results was
similar to the shape discrimination results of Experiment 1, but weaker:
There was no main effect of compatibility, but the compatibility × ac-
tion state interaction was marginally significant (p = .08). Planned con-
trasts revealed a significant compatibility effect for the active action state
but not for the passive action state.
(Manuscript received May 5, 2005;
revision accepted for publication August 28, 2005.)
Figure 4. Representation of the bar stimuli used in Experi-
ment 2. They are identical to those shown in Figure 1 (Experi-
ment 1), except for the removal of the hinge/door attachment.