Selection-for-action in visual search
A. Hannus, Frans W. Cornelissen, Harold Bekkering
Laboratory for Experimental Ophthalmology, School of Behavioral and Cognitive
Neurosciences, University of Groningen, The Netherlands
Nijmegen Institute for Cognition and Information, University of Nijmegen, P.O. Box 9104,
6500 HE Nijmegen, The Netherlands
School of Behavioral and Cognitive Neurosciences, University of Groningen, The Netherlands
Department of Psychology, University of Groningen, The Netherlands
Available online 25 November 2004
Grasping an object rather than pointing to it enhances processing of its orientation but not
its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant
feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we were interested in whether the effect is capacity demanding; therefore we manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement, i.e. the magnitude of the effect decreased for the larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. Therefore
we manipulated the discriminability of the behaviorally neutral feature (color). Again, results
showed that this manipulation inﬂuenced the action enhancement of the behaviorally relevant
feature. Particularly, the eﬀect of the color manipulation on the action enhancement suggests
that the action eﬀect is more likely to bias the competition between diﬀerent visual features
rather than to enhance the processing of the relevant feature. We offer a theoretical account that integrates the action–intention effect within the biased competition model of visual selection.
© 2004 Elsevier B.V. All rights reserved.

Acta Psychologica 118 (2005) 171–191

Corresponding author. Address: Nijmegen Institute for Cognition and Information, University of Nijmegen, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands. Tel.: +31 24 361 5593; fax: +31 24 361
E-mail address: firstname.lastname@example.org (A. Hannus).
PsycINFO classiﬁcation: 2323; 2346; 2340
Keywords: Visual attention; Selection-for-action; Biased competition; Conjunction search
1. Introduction

A widely investigated question in the field of cognitive science concerns the selection mechanisms that enable us to concentrate visual processing on some aspects of the
environment. In this study we explore the dependence of spatial cognitive processes
on action intentions. This issue can be addressed in a so-called visual search task in
which the observer searches for a pre-speciﬁed target among an array of non-targets.
Recently, it has been found that a speciﬁc action intention about what to do with the
searched object, i.e. grasping the object or pointing at it, affects the way people search for objects in their visual space (Bekkering & Neggers, 2002). In this study we
focus on the limitations and targets of this process. We demonstrate that an action
intention can determine how people search for objects in space. However, under which conditions or at which level of cognitive processing this effect occurs is not yet known.
Neurophysiological studies suggest that up until a certain level individual features
are processed independently (e.g. Maunsell & Van Essen, 1983; Moutoussis & Zeki,
2002; Zeki, 1973, 1977). In this study we test whether the intention to execute a goal-directed movement has an effect at the level of independent or interdependent feature processing. First, however, we introduce the two theories that are, in our view, most relevant to our research question: the biased competition model and the selection-for-action approach.
1.1. Biased competition
A currently dominant model accounting for selective attention is the theory of
biased competition (Desimone, 1998; Desimone & Duncan, 1995; Kastner &
Ungerleider, 2001). This model describes the interplay between bottom-up and
top-down sources of attention. Its basic idea is that visual objects in the scene
compete for representation, analysis and control of behavior. This competition re-
sults from limitations in processing capacity. On the one hand, the bottom-up input
from the visual scene determines the spatial distribution and feature attributes of
objects. While processing this information, a target could ‘‘pop-out’’ due to a bot-
tom-up bias to direct the attention towards salient local inhomogeneities. On the
other hand, top-down processes can bias competition towards behaviorally relevant
information, based on the goals of the individual. In its current form, the biased
competition model does not make specific predictions about the role of action intention as a modulator of attention, but it could easily be adapted to do so. (See also Birmingham & Pratt, 2005, for further information on the organization of spatial attention.)

1.2. Selection-for-action

More explicitly, the functioning of a perceptual system may be seen as gathering
and integrating the sensory information in order to adapt to the environmental con-
ditions in which the action must take place. It is essential for the preparation of the
planned action. This idea is reﬂected in diﬀerent models claiming a close interaction
between conscious visual processing and motor behavior (e.g. Allport, 1987; Gibson,
1979; Hommel, Müsseler, Aschersleben, & Prinz, 2001; Neumann, 1987, 1990; Rizzolatti & Craighero, 1998).
In everyday situations people hardly ever search for objects in their environment
just for purely perceptual purposes. In most cases, they have a clear intention to do
something with the object they are searching for. Hence, it would make sense to
change the relative weights given to diﬀerent attributes of a visual object depending
on the action currently at hand or planned for the immediate future. For instance, if
the intention is to ﬁnd a dictionary on the bookshelf in order to take it from the shelf,
the weight given to the processing of various features might be diﬀerent compared to
a situation where one's intention is just to find the dictionary to ascertain that it is there. In the first case, selectively more weight would be given to processing the information about its size and relative orientation in space than in the second case, because this information is relevant for preparing a grasping movement. If the intention is to only detect the presence of the dictionary, its orientation in space is less relevant.
Critically, the selection-for-action approach assumes that there are no limitations on perceiving multiple objects, only limitations of effector systems in carrying out multiple actions concurrently (e.g., Allport, 1987, 1990). Thus, competition for process-
ing resources can be assumed to take place in the action system. Consequently,
information about diﬀerent attributes of an object should be bound together in a
way that allows the purposeful use of that object according to the intended action.
Therefore, selective attentional processing reﬂects the necessity of selecting informa-
tion relevant to the task at hand. Convergent evidence for the existence of an action-
related attentional system emerges from several experimental paradigms. For
instance, Craighero, Fadiga, Rizzolatti, and Umiltà (1999) demonstrated that if the
subject has prepared a grasping movement, then a stimulus with congruent orienta-
tion is processed faster. In addition, a common selection mechanism for the saccadic
eye movements and object recognition was found in a study by Deubel and Schnei-
der (1996). Finally, clinical studies with neglect patients have shown that object aﬀor-
dances can improve the detection of visual objects (Humphreys & Riddoch, 2001)
and that action relations between objects can improve the detection of both of them
(Riddoch, Humphreys, Edwards, Baker, & Wilson, 2003). Recent experimental sup-
port for the selection-for-action notion in visual search comes from the study by
Bekkering and Neggers (2002) mentioned above. They demonstrated a selective enhancement of orientation processing (compared to color processing) when the task required grasping an object rather than pointing toward it. This finding is in line with the idea that visual perception handles the world in a way that is optimized for upcoming motor acts rather than just for passive feedforward processing.
1.3. Experimental questions addressed in this study
The aim of the present study was to examine one of the central remaining issues of
the action–intention effect reported by Bekkering and Neggers (2002), namely its limitations and the processes it targets: does the action intention have an effect only on the action-relevant feature, or does it bias the competition between both features?
Bekkering and Neggers found that participants were better able to discriminate the
orientation of the stimuli when they had to grasp a target stimulus compared to the
condition where they had to point to the target, since the relative orientation in space
is more important for the grasping preparation than for the pointing preparation.
This suggests that the behaviorally relevant feature can be processed more eﬃciently.
At the same time the discrimination accuracy of color did not depend on the motor
task, as the color of the object should be equally relevant for both grasping and
pointing. However, to be convinced that the comparison of orientation and color discrimination is valid, the discrimination difficulty of one feature should be equal to that of the other feature. Notably, in Bekkering and Neggers' experiment the color discrimination performance was in general better than
the orientation discrimination performance, suggesting that color discrimination
could in principle have been easier than orientation discrimination. Therefore we
ﬁrst wanted to replicate the previous ﬁndings while controlling the discriminability
of the two object features within a refined experimental set-up. First, 2D images projected by an LCD projector on a screen were used as stimuli instead of 3D objects. This
enabled a ﬁne matching of orientation and color contrasts of target and non-target
elements to make the orientation and color discrimination equally diﬃcult in the ﬁrst
experiment and to control the decrease of color contrast in the second experiment.
Second, the implementation of 2D stimuli allowed a direct visual template cueing
of both color and orientation of the target, while orientation was cued auditorily
in the 3D set-up of Bekkering and Neggers (2002). Third, the ﬂexibility of target
positioning was increased. Finally, the 2D screen allowed using a larger set size to
manipulate the search diﬃculty.
The target was a conjunction of color and orientation. Participants were required
either to search and point toward the target or to search and grasp the target. We
measured the accuracy of the initial saccade. As in grasping the orientation of the
target is more important than during pointing, we expect selectively improved per-
formance on the discrimination of this feature. As the targetÕs color is equally rele-
vant for both actions we expect no such change for this feature.
In the first experiment, the set size was varied to simultaneously manipulate the amount of bottom-up information for both the behaviorally relevant (orientation) and the
behaviorally neutral (color) visual feature. An increase in set size increases the difficulty of the search task (Bundesen, 1990) and thereby the load on cognitive processing. A decreased effect of action–intention under the larger set size would indicate that there are no resources left for selective enhancement of the behaviorally relevant feature, i.e. that the effect of action–intention is limited by processing capacity. However, if the effect of action–intention does not depend on the set size, no capacity limitations can be assumed. We expected that the selective enhancement of one specific action-related feature is a function of the load on cognitive processing.
Further, we were interested in whether the top-down bias toward the behaviorally relevant feature has an effect only at the level of this particular feature, or whether it affects a processing level common to both features. In the second experiment a similar con-
junction search task was used, yet the discriminability of the behaviorally neutral feature was decreased, while the discriminability of the behaviorally relevant feature remained the
same as in the ﬁrst experiment. If the action–intention aﬀects only processing of the
behaviorally relevant feature, the eﬀect should not depend on the discriminability of
the behaviorally neutral feature. However, if the action–intention somehow aﬀects
the competition between two features (or some other common mechanism), the eﬀect
on visual search should decrease, because overall target–non-target discriminability
is diminished. Our hypothesis is based on the assumption that the capacity of cognitive processing is limited, thereby causing competition for it amongst features. In
an attempt to create an unbiased situation in terms of bottom-up information about
feature discriminability in the ﬁrst experiment, we made the search for color and ori-
entation approximately equally diﬃcult. In the second experiment we purposefully
decreased the color contrast and thereby made color discrimination harder. In this
situation, the color discrimination requires more processing capacity compared to
the relatively higher color contrast as used in Experiment 1. If this additional color
processing capacity can be taken from the available orientation processing capacity,
the possibility to bias the orientation processing in the grasping condition should be
decreased, leading to a decreased enhancement of orientation processing in grasping
compared to pointing. However, if the eﬀect of action–intention operates before the
feature binding, the discriminability of color should have no eﬀect on the capacity
used for orientation processing. In conclusion, if the previously found action-related
enhancement is indeed related to biased competition between the features involved,
the eﬀect should appear under equal and relatively easy discriminability of both fea-
tures (Experiment 1) and should decrease if the discriminability of one feature is de-
creased (Experiment 2).
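The capacity-limited competition hypothesis outlined above can be illustrated with a minimal computational sketch. Everything below is an illustration under simplifying assumptions: the function names, the linear quality rule, and the bias values are ours, not part of any published model; only the 11.9% orientation contrast is taken from the experiments.

```python
# Minimal sketch of capacity-limited biased competition between two
# features. All parameters are illustrative assumptions, not fitted values.

def discrimination_quality(contrast, capacity):
    """Toy rule: discrimination improves with stimulus contrast and with
    the processing capacity allotted to the feature."""
    return contrast * capacity

def allocate_capacity(total, bias_orientation):
    """Split a fixed processing budget between orientation and color.
    bias_orientation > 0.5 models an action intention (grasping) that
    favors orientation; 0.5 models a neutral task (pointing)."""
    return {"orientation": total * bias_orientation,
            "color": total * (1.0 - bias_orientation)}

grasping = allocate_capacity(total=1.0, bias_orientation=0.65)
pointing = allocate_capacity(total=1.0, bias_orientation=0.50)

orientation_contrast = 0.119  # 11.9%, as in Experiment 1

# Under a shared budget, the grasping bias improves orientation
# discrimination at the expense of the capacity left for color.
grasp_quality = discrimination_quality(orientation_contrast,
                                       grasping["orientation"])
point_quality = discrimination_quality(orientation_contrast,
                                       pointing["orientation"])
assert grasp_quality > point_quality
```

On this toy view, decreasing the color contrast (as in Experiment 2) would pull capacity toward color on the shared budget, leaving less room to bias orientation in the grasping condition.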
2. Experiment 1
The aim of this experiment was to test whether the task-dependent facilitation of
one feature (orientation in the grasping condition) is limited by task difficulty. This question derives directly from earlier results. The original Bekkering and Neggers (2002) study showed a maximal action–intention effect
for 7 stimuli compared to the 4 and 10 set size conditions. Hence, the amount of bot-
tom-up information was manipulated directly by set size. The smaller set size con-
tained 7 stimuli (the optimal condition in the Bekkering and Neggers study) and
the larger set size 16 stimuli. This higher number of stimuli was chosen to double
the number of stimuli in smaller set size and thereby to have a relatively larger var-
iation of bottom-up information. Also, the smaller set size condition stayed within
the limited capacity of probable parallel processing of feature conjunctions (Pashler,
1987) whereas the large set size should be more demanding and evoke additional se-
rial processing. If the effect of action–intention depends on limitations in cognitive capacity, it should decrease in the larger set size, because the more difficult task leaves fewer cognitive resources available for the selective enhancement of orientation processing in the grasping condition. In order to tackle this question, we had to refine the experimental conditions as described above. Again, as in the Bekkering and Neggers paper, a conjunction search task with two different motor requirements
was used. In one condition, the task of the subject was to point to the target, in an-
other condition to grasp it.
2.1. Method

2.1.1. Determining feature search performance
Aiming to compare performance on individual features in a conjunction search
task in a meaningful way, we should make sure that the diﬃculty of each task is
at least approximately comparable. Discrimination of one feature (e.g. clockwise tilt
vs. counterclockwise tilt) might be inherently more diﬃcult for the visual system than
discrimination of another feature (green vs. red). Therefore, we ﬁrst determined 50%
detection thresholds in orientation and color feature search tasks. These values were
then used to set the feature contrasts in the conjunction search task of Experiment 1.
Three subjects (aged 24–30 years) participated in this pilot measurement, among
them one of the authors.
Stimuli were presented on a 20″ CRT monitor (subtending 31° by 23°) and generated by a Power Macintosh computer using software routines provided in the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997; http://psychtoolbox.org/). Screen resolution was set to 1152 × 870 with a refresh frequency of 75 Hz. The background luminance of the screen was 25 cd/m². The luminance of the stimuli was 35 cd/m² (40% contrast). Viewing distance was 40 cm.
The stimuli had the shape of a bar. The length of the stimuli was 5.7°. The subject had to fixate the central fixation cross and at the same time press a key. Next, a target cue with a particular color and orientation appeared in the centre of the screen, disappearing after 500 ms. In the color feature search task, the target was a green or red 45° tilted bar (40% luminance contrast in relation to the background). The non-targets always had the opposite contrast of the target. Color contrast could be 1.5%, 2.2%, 3.3%, 5.0%, 7.5%, 11.3%, 16.9%, 25.3%, or 38.0%. Next, 13 stimuli appeared in a circle with a radius of 16.7° and centered on the fixation cross. One of them was the
target. In the orientation feature search task the design was the same. In order to overcome the internal representation of verticality, the reference value for manipulating the orientation was a 45° clockwise tilt. Thus, the target was a gray bar (40% luminance contrast in relation to the background) with an orientation difference of 1.5%, 2.2%, 3.3%, 5.0%, 7.5%, 11.3%, 16.9%, 25.3%, or 38.0% relative to 45°. Non-targets had the opposite tilt of the target. One of the stimuli was the target. Stimuli disappeared after 2500 ms or after a saccade was made. Subjects were instructed to find and fixate the target as quickly and as accurately as possible. A correct response was defined by the first saccadic eye movement landing on or close to the target. In both tasks, subjects performed 1008 trials.
Eye movements were recorded at 250 Hz with an infrared video-based eye tracker
(Eyelink Gazetracker; SR Research Ltd., Mississauga, Ontario, Canada) and soft-
ware routines from the Eyelink Toolbox (Cornelissen, Peters, & Palmer, 2002;
http://psychtoolbox.org/). In the analysis, only trials were included in which subjects
did not make any saccades while the target cue was presented. Only the ﬁrst saccade
after target presentation was analyzed. An eye movement was considered a saccade
when the velocity of the eye was at least 25°/s with an acceleration of at least 9500°/s².
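The saccade criterion, a velocity of at least 25°/s combined with an acceleration threshold of 9500°/s², can be sketched as a sample-wise detector. The finite-difference scheme, the function name, and the example gaze trace below are our own simplifications, not the tracker's actual algorithm.

```python
def detect_saccade_onset(gaze_deg, sample_rate_hz=250,
                         vel_thresh=25.0, acc_thresh=9500.0):
    """Return the index of the first sample whose angular velocity (deg/s)
    and acceleration (deg/s^2) both exceed the thresholds, else None."""
    dt = 1.0 / sample_rate_hz
    # Finite-difference velocity between consecutive gaze positions (deg).
    vel = [(b - a) / dt for a, b in zip(gaze_deg, gaze_deg[1:])]
    # Finite-difference acceleration between consecutive velocities.
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    for i, (v, a) in enumerate(zip(vel[1:], acc), start=1):
        if abs(v) >= vel_thresh and abs(a) >= acc_thresh:
            return i
    return None

# Stable fixation followed by a fast rightward gaze shift (illustrative).
trace = [0.0, 0.0, 0.0, 0.0, 0.5, 2.0, 4.5, 7.0, 8.0, 8.0]
onset = detect_saccade_onset(trace)  # index of the first saccade sample
```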
The pilot experiments took place in a closed, dark room. Subjects were instructed
to restrain their head by the chin-rest, and to make a saccade as accurately and
quickly as possible towards the target.
The error rates were computed for all diﬀerent contrasts between the target and
non-targets. Next, a Weibull function was ﬁtted to the average data of all subjects.
Performance thresholds were determined by eye for each feature based on the fitted functions.
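The threshold procedure can be illustrated with a common parameterization of the Weibull psychometric function; here a 50% point is read off the curve analytically rather than by eye. The guess rate (chance level taken as 1 of the 13 display items), lapse rate, and parameter values are illustrative assumptions, not the paper's fits.

```python
import math

def weibull(contrast, alpha, beta, guess=1/13, lapse=0.0):
    """Weibull psychometric function: probability of a correct first
    saccade as a function of feature contrast. alpha is the threshold
    parameter (~63% of the rise), beta the slope, guess the chance level."""
    p_rise = 1.0 - math.exp(-(contrast / alpha) ** beta)
    return guess + (1.0 - guess - lapse) * p_rise

def contrast_at(p_target, alpha, beta, guess=1/13, lapse=0.0):
    """Invert the Weibull to find the contrast giving p_target correct."""
    p_rise = (p_target - guess) / (1.0 - guess - lapse)
    return alpha * (-math.log(1.0 - p_rise)) ** (1.0 / beta)

# Illustrative parameters; the fits reported in the paper would differ.
alpha, beta = 0.08, 2.0
c50 = contrast_at(0.50, alpha, beta)  # contrast yielding 50% correct
assert abs(weibull(c50, alpha, beta) - 0.50) < 1e-9
```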
2.1.2. Participants

Twelve subjects (mean age 24 years) participated in the main Experiment 1, in return for payment. All participants were naïve as to the purpose of the experiment and had normal or corrected-to-normal vision.
2.1.3. Apparatus and stimuli
An LCD projector presented the computer-generated stimuli on a translucent screen positioned on the table in front of the subject, with dimensions of 51.8° horizontally and 39° vertically and a background luminance of 111 cd/m². The viewing distance was 45 cm.
The performance thresholds of 50% correct target detection were used for both the color and orientation features: a color contrast of 7.2% and an orientation contrast of 11.9%.
Each trial started with a white fixation cross of 1.2° of visual angle, presented in the centre of the screen for 500 ms. After that a target cue was presented in the centre of the screen for 1000 ms. The target was a tilted bar (visual angle: 0.6° × 2.3°). It could be either isoluminant green or red (7.2% color contrast in addition to 40% luminance contrast) and more or less clockwise tilted (11.9% contrast in relation to 45° as a "standard"). The experimental procedure is schematically shown in Fig. 1.
After the disappearance of the target cue, the search display was immediately presented for 1500 ms. Stimuli were positioned on the perimeter of an imaginary approximate circle with a radius of 11.5°. The search display contained either 16 or 7
stimuli; one of them was always the target. Among the non-target elements 1/3 of
stimuli had the same color as the target but diﬀerent orientation, 1/3 of stimuli
had the same orientation as the target but diﬀerent color, and 1/3 of the stimuli
had diﬀerent color and diﬀerent orientation compared to the target. Displays of
16 stimuli occupied all the possible positions on the imaginary circle, and displays
of 7 items occupied positions chosen randomly from the 16 positions.
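The display construction described above can be sketched as follows. The function names and data representation are our own; the equal-thirds split of non-target types and the 16-position circle of radius 11.5° follow the description in the text.

```python
import math, random

def display_positions(n_positions=16, radius_deg=11.5):
    """Equally spaced positions (degrees of visual angle) on an imaginary
    circle around the central fixation cross."""
    return [(radius_deg * math.cos(2 * math.pi * k / n_positions),
             radius_deg * math.sin(2 * math.pi * k / n_positions))
            for k in range(n_positions)]

def build_display(set_size, rng=random):
    """Pick set_size of the 16 positions; one holds the target, and the
    non-targets share color only, orientation only, or neither with the
    target in roughly equal thirds."""
    positions = rng.sample(display_positions(), set_size)
    n_nontargets = set_size - 1
    third = n_nontargets // 3
    kinds = (["same_color"] * third +
             ["same_orientation"] * third +
             ["both_different"] * (n_nontargets - 2 * third))
    rng.shuffle(kinds)
    return list(zip(positions, ["target"] + kinds))

display = build_display(set_size=7)
assert sum(1 for _, kind in display if kind == "target") == 1
```

With `set_size=7` this yields 2 non-targets of each type; with `set_size=16` it yields 5 of each, matching the one-third composition described above.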
Eye movements were recorded with an infrared video-based eye tracker (ASL
5000 Series, Model 501; Applied Science Laboratories, Bedford, MA, USA) at the
frequency of 60 Hz. An eye movement was considered a saccade when the velocity of the eye was at least 45°/s for at least 50 ms.
2.1.4. Design and procedure
The ﬁrst factor manipulated was the behavioral task. Subjects conducted two
blocks of tasks. They had to ﬁxate on the ﬁxation cross, and after that to look at
the target cue. After the target cue disappeared subjects had to search for the target.
Overt eye movements and minor free head movements were allowed. In one block the task was to find the target as fast and as accurately as possible, and to point at it on the screen as fast as possible after target detection. In another block the subjects were asked to find the target as fast and as accurately as possible, and to grasp it on the screen with index finger and thumb along the linear axis. The second factor represented the set size (7 or 16 stimuli, randomly mixed within a block).

Fig. 1. A schematic overview of the experimental paradigm. Objects were presented at 16 possible positions. One-third of non-targets had the same color as the target, one-third had the same orientation as the target, and one-third had both a different color and a different orientation. In this example, the target is the black, more clockwise oriented bar, which would correspond to the green, more clockwise oriented bar. The non-targets are white bars (corresponding to red) also oriented more clockwise, black bars (corresponding to green) oriented less clockwise, and white bars (corresponding to red) oriented less clockwise.
The search performance was assessed as the accuracy and latency of the ﬁrst sacc-
adic eye movement that was initiated after the appearance of the search display.
Four types of responses arose from the search:
1. Hit. The ﬁrst saccade was directed to the target.
2. Color error. Initial saccade was made toward a non-target with the target's orientation but wrong color.
3. Orientation error. Initial saccade was made toward a non-target with the target's color but wrong orientation.
4. Double error. Initial saccade was made to a non-target with both wrong color and orientation.
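These four response categories amount to a nearest-stimulus classification of the first-saccade endpoint. The sketch below uses an assumed data representation; the 2° acceptance window is the one used for discarding ambiguous endpoints in the analysis, while the function name and example stimuli are our own.

```python
def classify_response(saccade_endpoint, stimuli, window_deg=2.0):
    """Classify the first saccade by the nearest stimulus within the
    acceptance window: hit, color error, orientation error, double error,
    or None for an ambiguous endpoint. stimuli is a list of dicts with
    'pos' (x, y), 'color', and 'orientation'; the first entry is the target."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    nearest = min(stimuli, key=lambda s: dist(saccade_endpoint, s["pos"]))
    if dist(saccade_endpoint, nearest["pos"]) > window_deg:
        return None  # ambiguous endpoint; trial discarded
    target = stimuli[0]
    color_ok = nearest["color"] == target["color"]
    orient_ok = nearest["orientation"] == target["orientation"]
    if color_ok and orient_ok:
        return "hit"
    if orient_ok:
        return "color_error"        # right orientation, wrong color
    if color_ok:
        return "orientation_error"  # right color, wrong orientation
    return "double_error"

stimuli = [
    {"pos": (11.5, 0.0),  "color": "green", "orientation": "cw"},  # target
    {"pos": (0.0, 11.5),  "color": "red",   "orientation": "cw"},
    {"pos": (-11.5, 0.0), "color": "green", "orientation": "ccw"},
]
assert classify_response((11.0, 0.3), stimuli) == "hit"
assert classify_response((0.2, 11.4), stimuli) == "color_error"
```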
Participants completed both blocks of trials in a single session, with block order
counterbalanced across participants. Each block contained 160 trials, with an equal number of each type of target. The trials within a block were presented in random order.
2.2. Results

In order to exclude outlying responses, trials with latencies below 100 ms or above 500 ms were discarded from the analysis. In addition, saccades with an ambiguous endpoint were omitted (a window was defined as a range of 2° around the stimulus position). Due to this, 33% of the trials were excluded from the analysis (25.6% had an ambiguous endpoint, 0.02% were anticipation saccades with a latency under 100 ms, and 7.1% had a latency longer than 500 ms).
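The exclusion rules can be summarized in a single predicate; the function name and argument representation are our own (a classification of None stands for an ambiguous endpoint, i.e. no stimulus within the 2° window).

```python
def keep_trial(latency_ms, endpoint_classification):
    """Apply the trial exclusion criteria: discard anticipations (< 100 ms),
    late responses (> 500 ms), and ambiguous endpoints (None)."""
    if latency_ms < 100:                 # anticipation saccade
        return False
    if latency_ms > 500:                 # response too late
        return False
    if endpoint_classification is None:  # ambiguous endpoint
        return False
    return True

assert keep_trial(285, "hit")
assert not keep_trial(90, "hit")
```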
The descriptive values are presented in Table 1.
2.2.1. Hit analysis
An analysis of variance (ANOVA) of the hits with two factors (set size: 7, 16 stim-
uli, and task condition: grasping, pointing) revealed signiﬁcant main eﬀects for both
the set size, F(1, 11) = 66.02, p< 0.001, and task condition, F(1, 11) = 8.47, p< 0.05.
The accuracy of hitting the correct target with the initial saccade was signiﬁcantly
lower in the larger set size condition (M = 17.3%) than in the smaller set size condition (M = 36.9%). More hits were made in the grasping condition (M = 30.5%)
compared to pointing (M= 23.7%). Importantly, there was a signiﬁcant interaction
between set size and task condition, F(1, 11) = 8.14, p < 0.05, indicating that the probability of hitting the target did not depend on the behavioral task in the larger set size condition (M = 16.8% vs. M = 17.7%, Fisher's least significant difference,
p> 0.75), whereas in the smaller set size the probability of hits was signiﬁcantly high-
er in the grasping task (M= 30.5% vs. M= 43.3%, p< 0.01).
The equivalent two-factorial ANOVA of the saccadic latencies showed only a main effect of set size, F(1, 11) = 8.36, p < 0.05, indicating that longer latencies were obtained in the larger set size condition (M = 310 ms) compared to the smaller set size condition (M = 291 ms).
2.2.2. Error analysis
The results of error analysis are shown in Fig. 2.
First, the two set sizes were analysed separately. The amounts of color errors and orientation errors are interdependent, because the error types are disjoint categories. That is, if a subject makes a color error in a particular trial, then he cannot make an orientation error in the same trial (we omitted the double errors, as they do not indicate whether the color or the orientation discrimination failed, and their number was relatively constant over all compared conditions). Thus, for the accuracy analyses the error types had to be treated as two dependent variables. In order to compare the accuracy in the grasping and pointing condition, we conducted for each set size (7 and 16 stimuli) a separate multivariate analysis of variance (MANOVA; Wilks's Λ criterion) with the two dependent variables (color errors and orientation errors) and one within-subject factor (task condition: grasping, pointing).

Table 1
Distribution and latencies of first saccadic eye movements in Experiment 1 and Experiment 2

                              Experiment 1               Experiment 2
Response type           Proportion     Latency      Proportion     Latency
                        M (%)   SD   M (ms)   SD    M (%)   SD   M (ms)   SD

Small set size
Pointing
  Hits                   30.5  11.7    285    59     27.7  12.4    262    51
  Orientation errors     50.5   9.0    270    51     37.5   7.4    260    42
  Color errors           10.3   4.9    266    70     20.9   7.4    248    48
  Double errors           8.8   7.0    260    55     13.9   8.3    234    41
Grasping
  Hits                   43.3  12.8    298    67     28.3   9.5    264    45
  Orientation errors     38.0  11.7    282    60     35.3   8.3    252    37
  Color errors           12.3  10.0    264    63     21.7   7.4    240    39
  Double errors           6.4   3.9    284    71     14.6   7.0    237    41

Large set size
Pointing
  Hits                   16.8   7.2    297    68     11.8   6.1    283    71
  Orientation errors     52.6   8.9    283    57     39.5  11.4    262    54
  Color errors           16.3   6.2    275    70     27.4  10.4    255    65
  Double errors          14.3   6.6    269    67     21.3   8.2    255    42
Grasping
  Hits                   17.7   7.0    323    75     10.1   5.5    282    61
  Orientation errors     47.3  12.3    296    60     40.3   7.8    252    47
  Color errors           18.9   7.3    283    51     29.6   6.1    261    55
  Double errors          16.0   9.1    279    60     20.0   9.7    245    36

Note: Hit = saccade directed to the target; color error = saccade to a non-target with the target's orientation but wrong color; orientation error = saccade to a non-target with the target's color but wrong orientation; double error = saccade to a non-target with both wrong color and orientation. N = 12 (Experiment 1); N = 13 (Experiment 2).
For the large set size, no influence of the task condition was obtained in the multivariate analysis of the errors, Λ = 0.81, p > 0.35. However, for the small set size, the MANOVA yielded a significant effect of the task condition on errors, Λ = 0.49, p < 0.05. A post-hoc analysis (Fisher's least significant difference, p < 0.01) indicated that the amount of orientation errors was significantly lower in the grasping condition (M = 38.0%) compared to the pointing condition (M = 50.5%). Interestingly, the amount of color errors did not differ between the two tasks (M = 12.3% vs. M = 10.3%). Thus, the results showed, for the small set size, a selective facilitation of orientation discrimination when grasping was required.
In analyzing the saccadic latencies, we defined the error type as a factor and conducted a 2 (task condition: grasping, pointing) × 2 (set size: 7, 16 stimuli) × 2 (error type: color error, orientation error) ANOVA. Latencies revealed a main effect of error type, F(1, 11) = 5.69, p < 0.05, showing a general tendency of faster erroneous color discrimination compared to the orientation discrimination. Also, the set size had a main effect, F(1, 11) = 5.13, p < 0.05: latencies increased with the increase of set size.

Fig. 2. Saccadic error distributions plotted as a function of the motor task and set size in Experiment 1 (panels: Set Size 7 and Set Size 16; vertical axis: Proportion of Total Responses (%)). In the smaller set size, with 6 distractors, saccadic errors occur significantly less often when participants grasp the target object than when the saccade precedes a pointing movement. In the larger set size, the action–intention effect on visual search disappears. Mean values and standard errors are presented.

Footnote: It appears that generally the color discrimination is more efficient than the orientation discrimination, despite our effort to match the discrimination difficulty for both features in the pilot experiment. Our more recent experiments, designed to specifically tackle this phenomenon, show that equal feature discriminability obtained in feature search tasks does not predict feature discriminability in a conjunction search task. Since we find this phenomenon also in visual search tasks without any requirement to point or grasp, we have reason to believe that it does not affect our conclusions about the action–intention effect.
2.3. Discussion

Even though the present experiment was carried out in a rather different way from that of Bekkering and Neggers (2002), the results corroborate the earlier finding that visual processing of a behaviorally relevant feature is selectively enhanced. The first experiment demonstrated that the action–intention effect is also present for goal-directed actions toward 2D stimuli. We found that subjects processed the relative orientation of stimuli more efficiently when this feature was selectively important for the planned action than when it was not, i.e. in grasping compared with pointing.
At the same time, the color discrimination performance remained the same for both
the pointing and the grasping condition.
Most importantly, the effect of action–intention was statistically significant only in the smaller set size; in the larger set size it disappeared. Saccadic latencies showed a significant set size effect, suggesting that the search task became more difficult for the larger set size. The increase in bottom-up information presumably increased the load on cognitive processing, thereby limiting the possibility to process the action-relevant feature optimally. This result strongly suggests an interplay between top-down (action-relevant) and bottom-up (stimulus-driven) visual processing.
3. Experiment 2
In the second experiment we wanted to explore this interplay between bottom-up and top-down sources from another perspective. Specifically, we aimed to test further whether the enhancement of a behaviorally relevant feature appears at the level of individual visual features or at the level of conjunction processing, where the individual features compete with each other. To do so, we manipulated the discriminability of color, the feature that should be equally relevant for both pointing and grasping. The discriminability of orientation was the same as in Experiment 1. If the action–intention effect concerns only the processing of orientation, it should appear independently of the discriminability of color. If it affects the competition between orientation and color, the effect size should also depend on the difficulty of color processing. Increasing the difficulty of color discrimination should require more of the limited processing resources. If this happens at the cost of orientation processing, less capacity will be available for the enhancement of orientation processing in the grasping task compared to pointing. Consequently, the action–intention effect should decrease. In the second experiment we therefore used the 10% feature detection threshold for the color dimension instead of the previously used 50% detection level.
3.1.1. Subjects
Naïve subjects (mean age 25 years) with normal or corrected-to-normal vision participated in return for payment. One of them had participated in Experiment 1.
3.1.2. Apparatus, stimuli, and procedure
The apparatus, tasks, and experimental settings were similar to those of Experiment 1, except for the color contrast of the stimuli. The color contrast between target and non-targets was decreased to 2% contrast between red and green stimuli, which corresponded to the level at which the subjects of the pilot experiment made about 10% correct responses in the color feature search task. The orientation of the stimuli was the same as in Experiment 1. Although the action–intention effect disappeared in the larger set size of Experiment 1, we still used the larger set size in Experiment 2 to keep the experimental setting similar to Experiment 1. Therefore both set sizes, of 7 and 16 stimuli, were used.
Again, omission of first saccades with latencies shorter than 100 ms or longer than 500 ms, or with an ambiguous terminus, led to the rejection of 31% of the trials (23% had an ambiguous end point, 0.08% were anticipatory saccades, 7.8% had a latency of more than 500 ms). Descriptive values are presented in Table 1.
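As an illustration, the trial-rejection criteria above can be sketched as a simple filter. The record layout and field names are hypothetical, not from the original analysis, and whether the 100 and 500 ms bounds are inclusive is an assumption:

```python
# Sketch of the trial-rejection criteria described above.
# Field names and example records are hypothetical.

def keep_trial(latency_ms, endpoint_ambiguous):
    """Keep a trial only if the first saccade has a latency between
    100 and 500 ms (bounds assumed inclusive) and an unambiguous end point."""
    return 100 <= latency_ms <= 500 and not endpoint_ambiguous

trials = [
    {"latency_ms": 80,  "endpoint_ambiguous": False},  # anticipatory: rejected
    {"latency_ms": 240, "endpoint_ambiguous": False},  # kept
    {"latency_ms": 560, "endpoint_ambiguous": False},  # too slow: rejected
    {"latency_ms": 300, "endpoint_ambiguous": True},   # ambiguous: rejected
]

kept = [t for t in trials if keep_trial(t["latency_ms"], t["endpoint_ambiguous"])]
print(len(kept), "of", len(trials), "example trials survive")
```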
3.2.1. Hit analysis
An ANOVA showed no effect of task condition on search accuracy. The main effect of set size on hit probability was highly significant, F(1, 12) = 92.71, p < 0.001: search accuracy decreased as set size increased. In
the smaller set size the mean hit accuracy was 28.0%, in the larger set size it was
Also, the ANOVA of saccadic latencies yielded a main effect of set size, F(1, 11) = 5.97, p < 0.05, indicating slower reaction times on larger set size trials (M = 283 ms) compared to smaller set size trials (M = 265 ms).
3.2.2. Error analysis
The distribution of color and orientation errors is presented in Fig. 3.
The MANOVA (Wilks's Λ criterion) with two within-subject factors (set size: 7 or 16 stimuli; task condition: grasping or pointing) and two dependent variables (color errors and orientation errors) showed no effect of the task. Only set size revealed a significant main effect (Λ = 0.28, p < 0.001). The 2 (task condition) × 2 (set size) × 2 (error type) ANOVA of the saccadic latencies yielded no effects (all Λs < 0.6).
3.2.3. Comparative analysis between experiments
Critical results were obtained by analyzing the two experiments together. The overall size of the action–intention effect is best characterized not simply by the numbers of orientation and color errors, but by the accuracy of correct feature detection. Therefore, for each subject, we determined the proportion of correctly discriminated color responses (color hits) and orientation responses (orientation hits). As a next step, we computed relative hit rates (orientation hits/color hits) for both the pointing and grasping conditions. Next, the ratio of hit rates was computed (grasping hit rate/pointing hit rate) and expressed as a logarithm to give equal weight to ratios below and above 1. The results are shown in Fig. 4.
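The effect-size computation described above can be sketched as follows. The hit counts are invented for illustration, and the base of the logarithm (base 10 here) is an assumption, since the text states only that a logarithmic value was used:

```python
import math

def relative_hit_rate(orientation_hits, color_hits):
    """Relative hit rate within one condition: orientation hits / color hits."""
    return orientation_hits / color_hits

def action_intention_effect(grasp_orient, grasp_color, point_orient, point_color):
    """Log ratio of the grasping relative hit rate over the pointing one.
    The log gives equal weight to ratios below and above 1 (0 = no effect)."""
    ratio = (relative_hit_rate(grasp_orient, grasp_color)
             / relative_hit_rate(point_orient, point_color))
    return math.log10(ratio)  # base 10 is an assumption

# Hypothetical counts for one subject: grasping yields more orientation hits.
effect = action_intention_effect(grasp_orient=40, grasp_color=50,
                                 point_orient=30, point_color=50)
print(effect > 0)  # a positive value indicates an action-intention effect
```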
The critical comparison included only the smaller set sizes of both experiments. A t-test revealed that the relative hit rate, as the measure of effect size, was significantly lower with the decreased color contrast of Experiment 2 (M = 0.03) than in Experiment 1 (M = 0.14), t(23) = 2.34, p < 0.05. The action effect thus decreased when the discriminability of the behaviorally neutral feature was decreased. Fig. 4 shows that increasing set size had an additional diminishing effect on action–intention in both experiments.
Fig. 3. Saccadic error distributions are plotted as a function of the motor task and set size in Experiment 2. The planned motor task has no systematic effect on the direction of initial saccades. Mean values and standard errors are presented.

Further, the conditional probabilities of detecting one feature correctly, depending on the accuracy of detection of the other feature, were calculated. First, we calculated the conditional probability of detecting one feature correctly if the other feature was also detected correctly, e.g. p(color correct | orientation correct) = p(color correct, orientation correct)/[p(color correct, orientation correct) + p(color incorrect, orientation correct)]. These probabilities were estimated by calculating the relevant ratios, e.g. hits/(hits + color errors). Second, we calculated the conditional probability of detecting one feature correctly if the other feature was detected incorrectly. Here, the number of errors on the other feature was divided by the sum of the errors on the other feature and the double errors, e.g. p(color correct | orientation incorrect) = p(color correct, orientation incorrect)/[p(color correct, orientation incorrect) + p(color incorrect, orientation incorrect)]. Next, these values were corrected for guessing probability: for conditional feature hits when the other feature was detected correctly, the guessing probability is 1/3 and 1/6 for the smaller and larger set size, respectively; for conditional feature hits when the other feature was detected incorrectly, the guessing probability is 2/4 and 5/10 for the smaller and larger set size, respectively. These values were averaged over all set sizes, tasks, and response types. The mean probability of detecting one feature correctly if the other feature is also detected correctly is 19.1%. This is significantly smaller than the mean probability of detecting one feature correctly when the other feature is detected incorrectly, 34.2%, χ²(1, N = 25) = 4.24, p < 0.05. Thus, the detection probability of one feature is higher when the detection of the other feature fails.
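The conditional-probability analysis can be sketched in the same way. The trial counts below are invented, and the guessing-correction formula is a standard rescaling relative to chance, which is an assumption, since the exact correction is not spelled out in the text:

```python
def p_correct_given_other_correct(hits, errors_on_this_feature):
    """p(A correct | B correct): hits divided by hits plus trials where
    only feature A was wrong, as in the ratio hits/(hits + color errors)."""
    return hits / (hits + errors_on_this_feature)

def p_correct_given_other_incorrect(errors_on_other, double_errors):
    """p(A correct | B incorrect): trials where only feature B was wrong
    divided by all trials where B was wrong (including double errors)."""
    return errors_on_other / (errors_on_other + double_errors)

def correct_for_guessing(p, guess_p):
    """Rescale an observed proportion relative to the guessing probability.
    Assumed form; the paper states only that values were corrected."""
    return (p - guess_p) / (1 - guess_p)

# Hypothetical counts for one cell (smaller set size):
hits, color_errors, orientation_errors, double_errors = 20, 15, 30, 10

p_given_correct = p_correct_given_other_correct(hits, color_errors)
p_given_incorrect = p_correct_given_other_incorrect(orientation_errors,
                                                    double_errors)

# Guessing probabilities for the smaller set size, as given in the text.
print(correct_for_guessing(p_given_correct, 1 / 3),
      correct_for_guessing(p_given_incorrect, 2 / 4))
```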
Under low color discriminability conditions, no significant enhancement of the processing of the behaviorally relevant feature, i.e. orientation, was found. Apparently, an increased demand on color processing diminishes the action-enhancement effect for orientation processing that was observed under otherwise equal conditions in Experiment 1. An important theoretical consequence of this finding is that lowered color discriminability presumably modulates the competition between color and orientation processing. We offer the explanation that, under the approximately equal feature discriminability conditions of Experiment 1, more processing resources could be allocated to the processing of the behaviorally relevant feature if this feature was selectively more relevant to the action at hand. In Experiment 2 the color discrimination was made more difficult. We assume that, as color processing was not irrelevant to finding the correct target, the additional resources previously allocated to the enhanced orientation processing were needed for color processing. The disappearance of the action–intention effect under these conditions is in accordance with this line of reasoning.

Fig. 4. The overall size of the action–intention effect is plotted. To do so, the detection accuracy of a specific feature was determined for the two experiments. The 50% Color Feature Discriminability refers to the higher color contrast of Experiment 1, at which subjects would make 50% correct responses in a color feature search task. The 10% Color Feature Discriminability corresponds to the lower color contrast used in Experiment 2, at which subjects would make approximately 10% correct responses in a color feature search task. The effect size is expressed as the logarithm of the ratio of the grasping hit rate (orientation hits/color hits) over the pointing hit rate. Mean values and standard errors are presented. The figure illustrates the decrease of the action–intention effect with an increasing amount of bottom-up information.
Moreover, comparison of the conditional probabilities of detecting one feature correctly, depending on the accuracy of detection of the other feature, revealed a clear trend: the accuracy of detecting one feature is higher if the detection of the other feature was incorrect. This additional finding indicates a competition between the visual features.
4. General discussion
The aim of this study was to investigate the biasing eﬀect of action–intention
on selective attention in more detail. We corroborated the ﬁnding that the inten-
tion to grasp an image of an object selectively enhances processing of the orienta-
tion of that object compared with a condition in which the task is to reach
and point to the object. Moreover, we now show that this selective enhancement
occurs even when the task is a rather unnatural pantomimic act and the object is
a 2D object without any volumetric properties. This ﬁnding suggests that the
enhancement in processing of the relevant visual feature over the task-irrelevant feature is a more general phenomenon. Hence, if people have to find a target object in visual space, the search process can be affected by the intentions they have.
To address the question of whether action–intention affects only the processing of the action-relevant visual feature or the competition between the two features, two manipulations of bottom-up sources of information were conducted. First, the dependence of the action–intention effect on capacity limitations in the visual system was tested. Increasing the set size, in order to increase the load on cognitive processing, decreased the action–intention effect. This indicates that the effect is limited by the available processing capacity. Second, we found that lowering the discriminability of the behaviorally neutral feature caused a decrease in the size of the action–intention effect.
This indicates that action–intention affects visual attention at a level common to both features, rather than at a level at which the features are processed individually.
Importantly, the saccadic latencies reveal that the facilitation of behaviorally relevant visual features cannot simply be explained by a speed–accuracy trade-off. The inspection time needed to detect the correct color or the correct orientation did not depend on the behavioral task.
The current results also rule out an explanation in terms of simple priming from
the cue. In the Bekkering and Neggers (2002) study, the color feature was primed directly on the stimulus board, while the orientation was primed by an auditory
cue (high or low tone). Therefore, one could have argued that the orientation cue had to be represented more cognitively, increasing the chance of finding an effect for this dimension over the color dimension. Here, the target cue primed both features, and as a result the search template was identical under all conditions. Apparently, when one feature is more relevant in terms of the planned action, its processing is selectively enhanced.
One could argue that the facilitation of orientation in grasping reflects the influence of motor preparation on visual discrimination (see Craighero et al., 1999, for a possible demonstration of such an effect). However, this explanation cannot account for all findings so far. First, the effect disappeared in the Bekkering and Neggers (2002) study with four elements, suggesting that the visual discrimination enhancement is not present if the task is relatively easy. Second, the fact that the action–intention effect decreased when the discriminability of the behaviorally neutral feature (color) was lowered implies that other factors besides motor-visual priming interact in the visual search processes. If preparation to grasp facilitated orientation processing merely as an independent factor in the conjunction search task, color discriminability should not have had such a dramatic effect on the effect size, since the orientation dimension was not varied across experiments.
Alternatively, we argue that the competition between color and orientation processing is modulated by a competition between top-down and bottom-up components. Apparently, bottom-up components, such as the initial segmentation of the visual world based on one feature, directly influence the processes in conjunction search. As a result, the top-down effect can be present or absent. More specifically, the data suggest that if the task is too easy, as in the Bekkering and Neggers (2002) study with four elements, or too hard, as in this study with 16 elements, bottom-up factors might solely determine the visual search process.
We now propose a description of the observed biased attentional selection at the three levels of analysis suggested by Marr (1982): the computational, algorithmic, and implementational levels of description. First, the
goal of the computation carried out by the attentional system is to select, out of visual space, the information relevant for action preparation, as suggested by the selection-for-action approach. The causative principle for biased selective attention is the need to select those aspects of the environment that are behaviorally relevant and, due to the limited capacity of cognitive processing, to ignore what is redundant. A parsimonious system should process relevant information at the expense of redundant information.
At the level of the algorithm, the representations and transformations are described. The explanation we offer is that of biased competition originating from a top-down input. There are two sources of top-down modulation: the action intention (e.g. to grasp the object) and the search template (the knowledge about the features of the object). The search template is compared with the incoming information, with the activation of action-relevant features being higher. In the theory of biased competition, Desimone and Duncan (1995) suggest that the bias operates through the attentional template. The current data show that a bias can originate from an action
plan. The visual cue representing the color and orientation of the target was the same in both the pointing and the grasping task, whereas the action plan (what should be done with this target) influences search accuracy. Thus, although the physical input from the visual cue to the attentional template is the same for both hand movement tasks, in terms of this theory the action plan modifies the template in favor of the behaviorally relevant visual feature. Alternatively, the action plan could also directly increase the activation of task-relevant visual features. The biased competition model can thus be maintained if one assumes a gain in activation for action-related visual characteristics. This allows the visual system to allocate more processing resources to the processing of the behaviorally relevant feature. However, if the discriminability of the behaviorally neutral feature is decreased, the processing of this feature probably requires more resources, and this decreases the processing efficiency for the behaviorally more relevant feature. Note that the behaviorally neutral feature is not actually irrelevant for solving the task. Therefore an interaction between bottom-up (stimulus discriminability) and top-down (behavioral goal) factors appears.
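The biased-competition account described here can be illustrated with a toy resource-allocation model. The gain and demand values below are invented for illustration and are not fitted to the data:

```python
def allocate(orientation_demand, color_demand, action_gain=1.0, capacity=1.0):
    """Toy biased competition: both features draw on one limited capacity;
    an action-relevant gain (> 1 when grasping) biases the split toward
    orientation. All numbers are illustrative."""
    weighted = orientation_demand * action_gain
    total = weighted + color_demand
    orientation_share = capacity * weighted / total
    return orientation_share, capacity - orientation_share

# Experiment 1 (roughly equal discriminability): grasping frees extra
# capacity for orientation relative to pointing.
point_easy = allocate(1.0, 1.0, action_gain=1.0)
grasp_easy = allocate(1.0, 1.0, action_gain=1.5)

# Experiment 2 (hard color discrimination): color demands more capacity,
# so the grasping advantage for orientation shrinks.
point_hard = allocate(1.0, 3.0, action_gain=1.0)
grasp_hard = allocate(1.0, 3.0, action_gain=1.5)

print(grasp_easy[0] - point_easy[0] > grasp_hard[0] - point_hard[0])  # True
```

In this sketch the grasping advantage for orientation shrinks when color is harder to discriminate, mirroring the reduced action–intention effect in Experiment 2.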
At the implementation level, one possible mechanism would be enhanced tuning of orientation-selective neurons in visual cortical areas. Although the current results do not reveal the neural correlates of the action–intention effect, we propose some candidates to be examined in the future. A neural basis for biased competition in attentional modulation could lie in the dorsal visual stream. It is assumed that visual objects have different representations in the ventral and dorsal streams (Ungerleider & Haxby, 1994; Ungerleider & Mishkin,
1982). Though the visual input is the same for both streams, dorsal processing is related to the control of object manipulation, while the ventral stream is responsible for processing the perceptual characteristics of objects (Goodale, Milner, Jakobson, & Carey, 1991; Milner & Goodale, 1995). Vidyasagar (1999) proposed a model of visual selection employing the faster transmission and spatial coding of the dorsal stream, which conducts a preattentive parallel processing over the whole scene. This
information is fed back into the earlier cortical areas to selectively facilitate the loca-
tions containing relevant information. A mechanism like this could underlie the bias
in favor of a behaviorally relevant visual feature, as revealed in our results.
Remarkably, despite our aim to match the difficulty of color and orientation discrimination in Experiment 1, color discrimination was generally better than orientation discrimination. This suggests that color and orientation processing are not independent in a conjunction task. We found additional evidence for such a dependence: the chance of getting one feature correct is conditional on performance for the other feature.

In addition, the neural bases for top-down attentional modulation are often attributed to the prefrontal cortex. The attentional set that guides visual processing to task-relevant information is localized in the dorsolateral prefrontal region (Banich et al., 2000). In a visual search task the subject is asked to find a predefined stimulus. It is plausible to assume that a representation of this stimulus is held in working memory, which is correlated with activity in prefrontal cortex (D'Esposito, Postle, Ballard, & Lease, 1999; Ranganath, Johnson, & D'Esposito, 2003). Close
relationships between attention and working memory are assumed (Desimone &
Duncan, 1995; Duncan & Humphreys, 1989). Miller, Erickson, and Desimone
(1996) found that the maintenance of a stimulus representation is related to prefron-
tal activity in macaques. The prefrontal activity could be the underlying mechanism
of top-down attentional modulation due to feedback inputs to visual cortex (Miller
et al., 1996). Recently, Iba and Sawaguchi (2003) also highlighted the importance of the prefrontal cortex in a visual selection task. After local inactivation of the macaque's dorsolateral prefrontal cortex, they found a disturbance of saccadic eye movements in a visual search task (erroneously directed initial saccades, independent of stimulus saliency) but not in a simple object detection task. Moreover, there is evidence for
shared neural network components at several frontoparietal areas for both spatial
attention and working memory operations (Awh & Jonides, 2001; LaBar, Gitelman,
Parrish, & Mesulam, 1999).
Most likely, the effect of action–intention on visual search cannot be localized in one specific area; rather, extensive parallel and feedback connections build up a network responsible for the interaction between action intentions on the one hand and visual processing of the world on the other. Gathering more specific insights into the connections between action and perception in visual search might also reveal new insights into the coupling between user-driven top-down processes and stimulus-driven bottom-up processes in general.
References

Allport, A. (1987). Selection for action: Some behavioral and neurophysiological considerations of
attention and action. In H. Heuer & A. F. Sanders (Eds.), Perspectives on perception and action
(pp. 395–419). Hillsdale, NJ: Lawrence Erlbaum Associates.
Allport, A. (1990). Visual attention. In M. I. Posner (Ed.), Foundations of cognitive science (pp. 631–682).
Cambridge, MA: MIT Press.
Awh, E., & Jonides, J. (2001). Overlapping mechanisms of attention and spatial working memory. Trends
in Cognitive Sciences, 5, 119–126.
Banich, M. T., Milham, M. P., Atchley, R. A., Cohen, N. J., Webb, A., Wszalek, T., et al. (2000).
Prefrontal regions play a role in imposing an attentional 'set': Evidence from fMRI. Cognitive Brain
Research, 10, 1–9.
Bekkering, H., & Neggers, S. F. (2002). Visual search is modulated by action intentions. Psychological
Science, 13, 370–374.
Birmingham, E., & Pratt, J. (2005). Examining inhibition of return with onset and offset cues in the multiple-cuing paradigm. Acta Psychologica, 118, 101–121.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97, 523–547.
Cornelissen, F. W., Peters, E. M., & Palmer, J. (2002). The Eyelink Toolbox: eye tracking with MATLAB
and the Psychophysics Toolbox. Behavioral Research Methods, Instruments, and Computers, 34,
Craighero, L., Fadiga, L., Rizzolatti, G., & Umiltà, C. (1999). Action for perception: A motor-visual attentional effect. Journal of Experimental Psychology: Human Perception and Performance, 6,
Desimone, R. (1998). Visual attention mediated by biased competition in extrastriate visual cortex.
Philosophical Transactions of the Royal Society of London B, 353, 1245–1255.
Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of
Neuroscience, 18, 193–222.
D'Esposito, M., Postle, B. R., Ballard, D., & Lease, J. (1999). Maintenance versus manipulation
of information held in working memory: An event-related fMRI study. Brain and Cognition, 41,
Deubel, H., & Schneider, W. X. (1996). Saccade target selection and object recognition: Evidence for a
common attentional mechanism. Vision Research, 36, 1827–1837.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96,
Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Miﬄin
Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation
between perceiving objects and grasping them. Nature, 349, 154–156.
Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event Coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences, 24, 849–937.
Humphreys, G. W., & Riddoch, M. J. (2001). Detection by action: Neuropsychological evidence for
action-deﬁned templates in search. Nature Neuroscience, 4, 84–88.
Iba, M., & Sawaguchi, T. (2003). Involvement of the dorsolateral prefrontal cortex of monkeys in visuospatial target selection. Journal of Neurophysiology, 89, 587–599.
Kastner, S., & Ungerleider, L. G. (2001). The neural basis of biased competition in human visual cortex.
Neuropsychologia, 39, 1263–1276.
LaBar, K. S., Gitelman, D. R., Parrish, T. B., & Mesulam, M.-M. (1999). Neuroanatomic overlap of
working memory and spatial attention networks: A functional MRI comparison within subjects.
Neuroimage, 10, 695–704.
Marr, D. (1982). Vision. A computational investigation into the human representation and processing of
visual information. San Francisco: W.H. Freeman and Company.
Maunsell, J. H., & Van Essen, D. C. (1983). The connections of the middle temporal visual area (MT) and
their relationship to a cortical hierarchy in the macaque monkey. Journal of Neuroscience, 3,
Miller, E. K., Erickson, C. A., & Desimone, R. (1996). Neural mechanisms of visual working memory in
prefrontal cortex of the macaque. Journal of Neuroscience, 15, 5154–5167.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford University Press.
Moutoussis, K., & Zeki, S. (2002). Responses of spectrally selective cells in macaque area V2 to
wavelengths and colors. Journal of Neurophysiology, 87, 2104–2112.
Neumann, O. (1987). Beyond capacity: A functional view of attention. In H. Heuer & A. F. Sanders
(Eds.), Perspectives on perception and action (pp. 361–394). Hillsdale, NJ: Lawrence Erlbaum Associates.
Neumann, O. (1990). Visual attention and action. In O. Neumann & W. Prinz (Eds.), Relationships
between perception and action: Current approaches (pp. 227–267). Berlin: Springer-Verlag.
Pashler, H. (1987). Detecting conjunctions of color and form: Reassessing the serial search hypothesis.
Perception and Psychophysics, 47, 191–201.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into
movies. Spatial Vision, 10, 437–442.
Ranganath, C., Johnson, M. K., & DÕEsposito, M. (2003). Prefrontal activity associated with working
memory and episodic long-term memory. Neuropsychologia, 41, 378–389.
Riddoch, M. J., Humphreys, G. W., Edwards, S., Baker, T., & Wilson, K. (2003). Seeing the action:
Neuropsychological evidence for action-based eﬀects on object selection. Nature Neuroscience, 6,
Rizzolatti, G., & Craighero, L. (1998). Spatial attention: Mechanisms and theories. In M. Sabourin, F. Craik, & M. Robert (Eds.), Advances in psychological science: Biological and cognitive aspects (Vol. 2, pp. 171–198). Montreal: Psychology Press.
Ungerleider, L. G., & Haxby, J. V. (1994). ‘‘What’’ and ‘‘Where’’ in the human brain. Current Opinion in
Neurobiology, 4, 157–165.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, &
R. J. W. Mansﬁeld (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge: MIT Press.
Vidyasagar, T. R. (1999). A neuronal model of attentional spotlight: Parietal guiding the temporal. Brain
Research Reviews, 30, 66–76.
Zeki, S. M. (1973). Colour coding in rhesus monkey prestriate cortex. Brain Research, 53, 422–427.
Zeki, S. M. (1977). Colour coding in the superior temporal sulcus of rhesus monkey visual cortex.
Proceedings of the Royal Society of London Series B, 197, 195–223.