Selection-for-action in visual search
Aave Hannus a,b,*, Frans W. Cornelissen a, Oliver Lindemann b,c, Harold Bekkering b,d

a Laboratory for Experimental Ophthalmology, School of Behavioral and Cognitive Neurosciences, University of Groningen, The Netherlands
b Nijmegen Institute for Cognition and Information, University of Nijmegen, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands
c School of Behavioral and Cognitive Neurosciences, University of Groningen, The Netherlands
d Department of Psychology, University of Groningen, The Netherlands
Available online 25 November 2004
Abstract
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limits and targets of this bias. In Experiment 1 we asked whether the effect demands processing capacity, and therefore manipulated the set size of the display. The results indicated a clear capacity requirement: the magnitude of the effect decreased at the larger set size. In Experiment 2 we then asked whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, and therefore manipulated the discriminability of the behaviorally neutral feature (color). This manipulation, too, influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation suggests that action intention biases the competition between different visual features rather than enhancing the processing of the relevant feature alone. We offer a theoretical account that integrates the action–intention effect within the biased competition model of visual selective attention.
© 2004 Elsevier B.V. All rights reserved.

doi:10.1016/j.actpsy.2004.10.010
* Corresponding author. Address: Nijmegen Institute for Cognition and Information, University of Nijmegen, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands. Tel.: +31 24 361 5593; fax: +31 24 361 6066.
E-mail address: a.hannus@nici.kun.nl (A. Hannus).
Acta Psychologica 118 (2005) 171–191
www.elsevier.com/locate/actpsy
PsycINFO classification: 2323; 2346; 2340
Keywords: Visual attention; Selection-for-action; Biased competition; Conjunction search
1. Introduction
A widely investigated question in cognitive science concerns the selection mechanisms that allow visual processing to be concentrated on some aspects of the environment. In this study we explore the dependence of spatial cognitive processes on action intentions. This issue can be addressed with a so-called visual search task, in which the observer searches for a pre-specified target among an array of non-targets. Recently, it has been found that a specific action intention about what to do with the searched-for object, i.e. grasping the object or pointing at it, affects how people search for objects in their visual space (Bekkering & Neggers, 2002). In this study we focus on the limitations and targets of this process. We demonstrate that an action intention can determine how people search for objects in space. However, under which conditions, and at which level of cognitive processing, this effect occurs is as yet unknown.
Neurophysiological studies suggest that up to a certain level individual features are processed independently (e.g. Maunsell & Van Essen, 1983; Moutoussis & Zeki, 2002; Zeki, 1973, 1977). In this study we test whether the intention to execute a goal-directed movement has an effect at the level of independent or of interdependent feature processing. First, however, we introduce the two theories of visual attention we consider most relevant to our research question: the biased competition model and the selection-for-action approach.
1.1. Biased competition
A currently dominant model of selective attention is the theory of biased competition (Desimone, 1998; Desimone & Duncan, 1995; Kastner & Ungerleider, 2001). This model describes the interplay between bottom-up and top-down sources of attention. Its basic idea is that visual objects in the scene compete for representation, analysis and control of behavior. This competition results from limitations in processing capacity. On the one hand, the bottom-up input from the visual scene determines the spatial distribution and feature attributes of objects. While this information is processed, a target can "pop out" due to a bottom-up bias directing attention towards salient local inhomogeneities. On the other hand, top-down processes can bias competition towards behaviorally relevant information, based on the goals of the individual. In its current form, the biased competition model does not make specific predictions about the role of action intention as a modulator of attention, but it could easily be adapted to do so. (See also Birmingham & Pratt, 2005, for further information on the organization of spatial attention.)
1.2. Selection-for-action
More explicitly, the functioning of a perceptual system may be seen as gathering and integrating sensory information in order to adapt to the environmental conditions in which an action must take place; such information is essential for preparing the planned action. This idea is reflected in different models claiming a close interaction between conscious visual processing and motor behavior (e.g. Allport, 1987; Gibson, 1979; Hommel, Müsseler, Aschersleben, & Prinz, 2001; Neumann, 1987, 1990; Rizzolatti & Craighero, 1998).
In everyday situations people hardly ever search for objects in their environment
just for purely perceptual purposes. In most cases, they have a clear intention to do
something with the object they are searching for. Hence, it would make sense to
change the relative weights given to different attributes of a visual object depending
on the action currently at hand or planned for the immediate future. For instance, if the intention is to find a dictionary on the bookshelf in order to take it from the shelf, the weight given to the processing of various features might differ from a situation where one's intention is just to find the dictionary to ascertain that it is there. In the first case, selectively more weight would be given to processing information about its size and relative orientation in space, because this information is relevant for preparing a grasping movement. If the intention is only to detect the presence of the dictionary, its orientation in space is less important.
Critically, the selection-for-action approach assumes that there are no limitations on perceiving multiple objects, only limitations of effector systems in carrying out multiple actions concurrently (e.g., Allport, 1987, 1990). Thus, competition for processing resources can be assumed to take place in the action system. Consequently, information about different attributes of an object should be bound together in a way that allows the purposeful use of that object according to the intended action. Selective attentional processing therefore reflects the necessity of selecting information relevant to the task at hand. Convergent evidence for the existence of an action-related attentional system emerges from several experimental paradigms. For instance, Craighero, Fadiga, Rizzolatti, and Umiltà (1999) demonstrated that if a subject has prepared a grasping movement, a stimulus with congruent orientation is processed faster. In addition, a common selection mechanism for saccadic eye movements and object recognition was found in a study by Deubel and Schneider (1996). Finally, clinical studies with neglect patients have shown that object affordances can improve the detection of visual objects (Humphreys & Riddoch, 2001) and that action relations between objects can improve the detection of both of them (Riddoch, Humphreys, Edwards, Baker, & Wilson, 2003). Recent experimental support for the selection-for-action notion in visual search comes from the study by Bekkering and Neggers (2002) mentioned above. They demonstrated a selective enhancement of orientation processing (compared to color processing) when the task required grasping an object rather than pointing toward it. This finding is in line with the idea that visual perception handles the world in a way that is optimized for upcoming motor acts rather than merely for a passive feedforward mode of processing.
1.3. Experimental questions addressed in this study
The aim of the present study was to examine one of the central remaining issues of the action–intention effect reported by Bekkering and Neggers (2002), namely its limitations and targeted processes: does the action intention have an effect only on the action-relevant feature, or does it bias the competition between both features? Bekkering and Neggers found that participants were better able to discriminate the orientation of the stimuli when they had to grasp a target stimulus than when they had to point to it, since relative orientation in space is more important for preparing a grasp than for preparing a pointing movement. This suggests that the behaviorally relevant feature can be processed more efficiently. At the same time, the discrimination accuracy for color did not depend on the motor task, as the color of the object should be equally relevant for both grasping and pointing. However, for the comparison of orientation and color discrimination to be valid, the discrimination difficulty of one feature should equal that of the other. Notably, in Bekkering and Neggers' experiment color discrimination performance was in general better than orientation discrimination performance, suggesting that color discrimination could in principle have been easier than orientation discrimination. Therefore we first wanted to replicate the previous findings while controlling the discriminability of the two object features within a refined experimental set-up. First, 2D images projected by an LCD projector on a screen were used as stimuli instead of 3D objects. This enabled a fine matching of the orientation and color contrasts of target and non-target elements, to make orientation and color discrimination equally difficult in the first experiment and to control the decrease of color contrast in the second experiment. Second, the implementation of 2D stimuli allowed direct visual template cueing of both the color and the orientation of the target, whereas orientation was cued auditorily in the 3D set-up of Bekkering and Neggers (2002). Third, the flexibility of target positioning was increased. Finally, the 2D screen allowed using a larger set size to manipulate search difficulty.
The target was a conjunction of color and orientation. Participants were required either to search for and point toward the target or to search for and grasp it. We measured the accuracy of the initial saccade. As the orientation of the target is more important in grasping than in pointing, we expected selectively improved discrimination of this feature. As the target's color is equally relevant for both actions, we expected no such change for color.
In the first experiment, the set size was varied to simultaneously change the amount of bottom-up information for both the behaviorally relevant (orientation) and the behaviorally neutral (color) visual feature. Increasing the set size increases the difficulty of the search task (Bundesen, 1990) and thereby the load on cognitive processing. A decreased effect of action intention at the larger set size would indicate that no resources are left for the selective enhancement of the behaviorally relevant feature, i.e. that the effect of action intention is limited by processing capacity. However, if the effect of action intention does not depend on set size, no capacity limitations need be assumed. We expected the selective enhancement of one specific action-related feature to be a function of the load on cognitive processing.
Further, we were interested in whether the top-down bias toward the behaviorally relevant feature has an effect only at the level of this particular feature, or whether it affects a processing level common to both features. In the second experiment a similar conjunction search task was used, but the discriminability of the behaviorally neutral feature was decreased while the discriminability of the behaviorally relevant feature remained the same as in the first experiment. If the action intention affects only the processing of the behaviorally relevant feature, the effect should not depend on the discriminability of the behaviorally neutral feature. However, if the action intention somehow affects the competition between the two features (or some other common mechanism), the effect on visual search should decrease, because overall target/non-target discriminability is diminished. Our hypothesis is based on the assumption that the capacity of cognitive processing is limited, thereby causing a competition for it among features. In an attempt to create an unbiased situation in terms of bottom-up information about feature discriminability in the first experiment, we made the search for color and orientation approximately equally difficult. In the second experiment we purposefully decreased the color contrast and thereby made color discrimination harder. In this situation, color discrimination requires more processing capacity than with the relatively higher color contrast used in Experiment 1. If this additional color processing capacity can be drawn from the available orientation processing capacity, the possibility of biasing orientation processing in the grasping condition should be reduced, leading to a decreased enhancement of orientation processing in grasping compared to pointing. However, if the effect of action intention operates before feature binding, the discriminability of color should have no effect on the capacity used for orientation processing. In conclusion, if the previously found action-related enhancement is indeed related to biased competition between the features involved, the effect should appear under equal and relatively easy discriminability of both features (Experiment 1) and should decrease if the discriminability of one feature is decreased (Experiment 2).
2. Experiment 1
The aim of this experiment was to test whether the task-dependent facilitation of one feature (orientation in the grasping condition) is limited by task difficulty. This question derives directly from the results obtained in the previous experiment. The original Bekkering and Neggers (2002) study showed a maximal action–intention effect for 7 stimuli compared to the 4- and 10-item set size conditions. Hence, the amount of bottom-up information was manipulated directly by set size. The smaller set size contained 7 stimuli (the optimal condition in the Bekkering and Neggers study) and the larger set size 16 stimuli. This higher number was chosen to double the number of stimuli in the smaller set size and thereby to obtain a relatively larger variation of bottom-up information. Also, the smaller set size condition stayed within the limited capacity of probable parallel processing of feature conjunctions (Pashler, 1987), whereas the large set size should be more demanding and evoke additional serial processing. If the effect of action intention depends on limitations in cognitive capacity, it should decrease at the larger set size, because the more difficult task leaves fewer cognitive resources available for the selective enhancement of orientation processing in the grasping condition. In order to tackle this question, we refined the experimental conditions as described above. Again, as in Bekkering and Neggers' paper, a conjunction search task with two different motor requirements was used. In one condition, the task of the subject was to point to the target; in the other, to grasp it.
2.1. Method
2.1.1. Determining feature search performance
Aiming to compare performance on individual features in a conjunction search task in a meaningful way, we had to make sure that the difficulty of each task was at least approximately comparable. Discrimination of one feature (e.g. clockwise vs. counterclockwise tilt) might be inherently more difficult for the visual system than discrimination of another (green vs. red). Therefore, we first determined 50% detection thresholds in orientation and color feature search tasks. These values were then used to set the feature contrasts in the conjunction search task of Experiment 1. Three subjects (aged 24–30 years) participated in this pilot measurement, among them one of the authors.
Stimuli were presented on a 20″ CRT monitor (subtending 31° by 23°) and generated by a Power Macintosh computer using software routines provided in the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997; http://psychtoolbox.org/). Screen resolution was set to 1152 × 870 with a refresh frequency of 75 Hz. The background luminance of the screen was 25 cd/m²; the luminance of the stimuli was 35 cd/m² (40% contrast). Viewing distance was 40 cm.
The stimuli had the shape of a bar, 5.7° in length. The subject had to fixate the central fixation cross and simultaneously press a key. Next, a target cue with a particular color and orientation appeared in the centre of the screen, disappearing after 500 ms. In the color feature search task, the target was a green or red 45°-tilted bar (40% luminance contrast in relation to the background). The non-targets always had the opposite color contrast of the target. Color contrast could be 1.5%, 2.2%, 3.3%, 5.0%, 7.5%, 11.3%, 16.9%, 25.3%, or 38.0%. Then 13 stimuli appeared on a circle with a radius of 16.7°, centered on the fixation cross; one of them was the target. In the orientation feature search task the design was the same. In order to overcome the internal representation of verticality, the reference value for manipulating orientation was a 45° clockwise tilt. Thus, the target was a gray bar (40% luminance contrast in relation to the background) with an orientation difference of 1.5%, 2.2%, 3.3%, 5.0%, 7.5%, 11.3%, 16.9%, 25.3%, or 38.0% relative to 45°; non-targets had the opposite tilt. Stimuli disappeared after 2500 ms or after a saccade was made. Subjects were instructed to find and fixate the target as quickly and as accurately as possible. A correct response was defined by the first saccadic eye movement landing on or close to the target. In both tasks, subjects performed 1008 trials.
Eye movements were recorded at 250 Hz with an infrared video-based eye tracker (Eyelink Gazetracker; SR Research Ltd., Mississauga, Ontario, Canada) and software routines from the Eyelink Toolbox (Cornelissen, Peters, & Palmer, 2002; http://psychtoolbox.org/). In the analysis, only trials were included in which subjects did not make any saccades while the target cue was presented. Only the first saccade after target presentation was analyzed. An eye movement was considered a saccade when the velocity of the eye was at least 25°/s with an acceleration of 9500°/s².
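The velocity/acceleration criterion can be sketched as a simple sample-wise filter. This is an illustrative reconstruction, not the Eyelink parser itself; the function name and trace handling are our own assumptions.

```python
import numpy as np

def detect_saccade_onset(x_deg, y_deg, hz=250, vel_thresh=25.0, acc_thresh=9500.0):
    """Return the index of the first sample whose speed is at least
    vel_thresh deg/s AND whose absolute acceleration is at least
    acc_thresh deg/s^2, or None if no sample qualifies (assumed sketch)."""
    dt = 1.0 / hz
    vx = np.gradient(np.asarray(x_deg, dtype=float), dt)
    vy = np.gradient(np.asarray(y_deg, dtype=float), dt)
    speed = np.hypot(vx, vy)                 # eye speed in deg/s
    acc = np.abs(np.gradient(speed, dt))     # magnitude of acceleration in deg/s^2
    mask = (speed >= vel_thresh) & (acc >= acc_thresh)
    return int(np.argmax(mask)) if mask.any() else None
```

A steady fixation trace never crosses either threshold and yields None, while a rapid position step is flagged at its onset.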
The pilot experiments took place in a closed, dark room. Subjects were instructed to restrain their head with a chin-rest and to make a saccade as accurately and quickly as possible towards the target.
The error rates were computed for all different contrasts between the target and
non-targets. Next, a Weibull function was fitted to the average data of all subjects.
Performance thresholds were determined by eye for each feature based on the fitted
curve.
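A threshold estimate of this kind can be sketched as follows. The data here are synthetic and purely illustrative; the guessing rate, parameter names, and the analytic 50%-point are our assumptions (the paper reports thresholds read off the fitted curve by eye).

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA = 1.0 / 13.0  # assumed guessing rate: one target among 13 stimuli

def weibull(x, alpha, beta):
    """Weibull psychometric function with fixed guessing rate GAMMA."""
    return GAMMA + (1.0 - GAMMA) * (1.0 - np.exp(-(x / alpha) ** beta))

# The nine contrast levels form an approximately geometric ladder from 1.5% to 38%.
contrasts = np.geomspace(1.5, 38.0, 9)

# Synthetic proportion-correct data for illustration only (not the paper's data).
rng = np.random.default_rng(0)
p_correct = weibull(contrasts, 8.0, 2.0) + rng.normal(0.0, 0.02, contrasts.size)

# Fit alpha (scale) and beta (slope), then solve the fitted curve for 50% correct.
(alpha_hat, beta_hat), _ = curve_fit(weibull, contrasts, p_correct, p0=[8.0, 2.0])
threshold_50 = alpha_hat * (-np.log(1.0 - (0.5 - GAMMA) / (1.0 - GAMMA))) ** (1.0 / beta_hat)
```

Inverting the fitted function, rather than reading the plot, gives a reproducible 50%-correct contrast.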
2.1.2. Participants
Twelve subjects (mean age 24 years) participated in the main Experiment 1, in return for payment. All participants were naïve as to the purpose of the experiment and had normal or corrected-to-normal vision.
2.1.3. Apparatus and stimuli
An LCD projector presented the computer-generated stimuli on a translucent screen positioned on the table in front of the subject, with dimensions of 51.8° horizontally and 39° vertically and a background luminance of 111 cd/m². The viewing distance was 45 cm.
The 50%-correct detection thresholds determined for the color and orientation features were used: a color contrast of 7.2% and an orientation contrast of 11.9%.
Each trial started with a white fixation cross of 1.2° of visual angle, presented in the centre of the screen for 500 ms. After that a target cue was presented in the centre of the screen for 1000 ms. The target was a tilted bar (visual angle: 0.6° × 2.3°). It could be either isoluminant green or red (7.2% color contrast in addition to 40% luminance contrast) and tilted more or less clockwise (11.9% contrast in relation to 45° as a "standard"). The experimental procedure is schematically shown in Fig. 1.
Immediately after the disappearance of the target cue, the search display was presented for 1500 ms. Stimuli were positioned on the perimeter of an imaginary approximate circle with a radius of 11.5°. The search display contained either 16 or 7 stimuli; one of them was always the target. Among the non-target elements, 1/3 of the stimuli had the same color as the target but a different orientation, 1/3 had the same orientation as the target but a different color, and 1/3 had both a different color and a different orientation compared to the target. Displays of 16 stimuli occupied all possible positions on the imaginary circle; displays of 7 items occupied positions chosen randomly from the 16 positions.
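The display geometry just described can be sketched as follows; this is an illustrative reconstruction, and the function name and coordinate convention are our own.

```python
import numpy as np

def display_positions(set_size, n_slots=16, radius_deg=11.5, seed=None):
    """Return (x, y) stimulus positions in visual degrees.

    All n_slots slots lie evenly spaced on an imaginary circle of radius
    radius_deg; a smaller display uses a random subset of the slots."""
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(n_slots) / n_slots
    slots = np.column_stack([radius_deg * np.cos(angles),
                             radius_deg * np.sin(angles)])
    if set_size >= n_slots:
        return slots
    return slots[rng.choice(n_slots, size=set_size, replace=False)]
```

`display_positions(16)` fills every slot, while `display_positions(7, seed=1)` samples 7 of the 16 positions, mirroring the two set-size conditions.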
Eye movements were recorded with an infrared video-based eye tracker (ASL
5000 Series, Model 501; Applied Science Laboratories, Bedford, MA, USA) at the
frequency of 60 Hz. An eye movement was considered a saccade when the velocity of the eye was at least 45°/s for at least 50 ms.
2.1.4. Design and procedure
The first factor manipulated was the behavioral task. Subjects completed two blocks of tasks. They had to fixate the fixation cross and then look at the target cue. After the target cue disappeared, subjects had to search for the target. Overt eye movements and minor free head movements were allowed. In one block the task was to find the target as quickly and as accurately as possible, and then to point at it on the screen as quickly as possible after target detection. In the other block subjects were asked to find the target as quickly and as accurately as possible, and to grasp it on the screen with index finger and thumb along its linear axis. The second factor was the set size (7 or 16 stimuli, randomly mixed within a block).

Fig. 1. A schematic overview of the experimental paradigm. Objects were presented at 16 possible positions. One-third of non-targets had the same color as the target, 1/3 of non-targets had the same orientation as the target, and 1/3 of non-targets had both a different color and a different orientation. In this example, the target is the black, more clockwise oriented bar, which would correspond to the green more clockwise oriented bar. The non-targets are white bars (corresponding to red) also oriented more clockwise, black bars (corresponding to green) oriented less clockwise, and white bars (corresponding to red) oriented less clockwise.
Search performance was assessed as the accuracy and latency of the first saccadic eye movement initiated after the appearance of the search display. Four types of responses arose from the search:
1. Hit. The first saccade was directed to the target.
2. Color error. The initial saccade was made toward a non-target with the target's orientation but the wrong color.
3. Orientation error. The initial saccade was made toward a non-target with the target's color but the wrong orientation.
4. Double error. The initial saccade was made to a non-target with both the wrong color and the wrong orientation.
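The four categories amount to comparing the two features of the fixated item with those of the target; a minimal sketch (the tuple representation is our own):

```python
def classify_saccade(fixated, target):
    """Classify a first saccade from the (color, orientation) of the item
    it landed on versus the target's (color, orientation)."""
    color_ok = fixated[0] == target[0]
    orientation_ok = fixated[1] == target[1]
    if color_ok and orientation_ok:
        return "hit"
    if orientation_ok:
        return "color error"        # target's orientation, wrong color
    if color_ok:
        return "orientation error"  # target's color, wrong orientation
    return "double error"
```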
Participants completed both blocks of trials in a single session, with block order
counterbalanced across participants. Each block contained 160 trials, with an equal
number of each type of target. The trials within the block were presented in random
order.
2.2. Results
In order to exclude outlying responses, trials with latencies below 100 ms or above 500 ms were discarded from the analysis. In addition, saccades with an ambiguous endpoint were omitted (a window was defined as a range of 2° around the stimulus position). In total, 33% of the trials were excluded from the analysis (25.6% had an ambiguous endpoint, 0.02% were anticipation saccades with latencies under 100 ms, and 7.1% had latencies longer than 500 ms).
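The exclusion criteria can be sketched as a per-trial filter; this is an illustrative reconstruction, and the function and parameter names are our own.

```python
import numpy as np

def keep_trial(latency_ms, endpoint_deg, stimulus_positions_deg, window_deg=2.0):
    """Keep a trial only if the saccade latency lies in [100, 500] ms and
    the saccade endpoint falls within window_deg of some stimulus position
    (otherwise the endpoint is ambiguous)."""
    if not 100.0 <= latency_ms <= 500.0:
        return False
    d = np.linalg.norm(np.asarray(stimulus_positions_deg, dtype=float)
                       - np.asarray(endpoint_deg, dtype=float), axis=1)
    return bool(d.min() <= window_deg)
```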
The descriptive values are presented in Table 1.
2.2.1. Hit analysis
An analysis of variance (ANOVA) of the hits with two factors (set size: 7, 16 stimuli; task condition: grasping, pointing) revealed significant main effects of both set size, F(1, 11) = 66.02, p < 0.001, and task condition, F(1, 11) = 8.47, p < 0.05. The accuracy of hitting the correct target with the initial saccade was significantly lower in the larger set size condition (M = 17.3%) than in the smaller set size condition (M = 36.9%). More hits were made in the grasping condition (M = 30.5%) than in the pointing condition (M = 23.7%). Importantly, there was a significant interaction between set size and task condition, F(1, 11) = 8.14, p < 0.05, indicating that the probability of hitting the target did not depend on the behavioral task in the larger set size condition (pointing: M = 16.8% vs. grasping: M = 17.7%; Fisher's least significant difference, p > 0.75), whereas in the smaller set size the probability of hits was significantly higher in the grasping task (pointing: M = 30.5% vs. grasping: M = 43.3%, p < 0.01).
The equivalent two-factorial ANOVA of the saccadic latencies showed only a main effect of set size, F(1, 11) = 8.36, p < 0.05, indicating that longer latencies were obtained in the larger set size condition (M = 310 ms) than in the smaller set size condition (M = 291 ms).
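The pattern driving the interaction can be seen directly in the cell means; the values below are the hit rates reported in the text, and the contrast computation is our illustration.

```python
# Mean hit rates (%) per cell in Experiment 1 (from the text).
hits = {("point", 7): 30.5, ("grasp", 7): 43.3,
        ("point", 16): 16.8, ("grasp", 16): 17.7}

# Grasping advantage at each set size; the interaction reflects their difference.
adv_small = hits[("grasp", 7)] - hits[("point", 7)]    # 12.8 percentage points
adv_large = hits[("grasp", 16)] - hits[("point", 16)]  # 0.9 percentage points
interaction_contrast = adv_small - adv_large           # 11.9 percentage points
```

The grasping advantage is essentially confined to the 7-item displays, which is what the significant set size × task interaction expresses.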
2.2.2. Error analysis
The results of error analysis are shown in Fig. 2.
First, the two set sizes were analysed separately. The numbers of color errors and orientation errors are interdependent, because the error types are disjoint categories: if a subject makes a color error on a particular trial, they cannot make an orientation error on the same trial. (We omitted the double errors, as they do not indicate whether color or orientation discrimination failed, and their number was relatively constant across all compared conditions.) Thus, for the accuracy analyses the two error types had to be treated as two dependent variables. In order to compare accuracy in the grasping and pointing conditions, we conducted for each set size (7 and 16 stimuli) a separate multivariate analysis of variance (MANOVA; Wilks's Λ criterion) with the two dependent variables (color errors and orientation errors) and one within-subject factor (task condition: grasping, pointing).

Table 1
Distribution and latencies of first saccadic eye movements in Experiment 1 and Experiment 2

                                Experiment 1              Experiment 2
Response type              M (%)   SD    M (ms)  SD    M (%)   SD    M (ms)  SD

Small set size
  Pointing
    Hits                    30.5  11.7    285    59     27.7  12.4    262    51
    Orientation errors      50.5   9.0    270    51     37.5   7.4    260    42
    Color errors            10.3   4.9    266    70     20.9   7.4    248    48
    Double errors            8.8   7.0    260    55     13.9   8.3    234    41
  Grasping
    Hits                    43.3  12.8    298    67     28.3   9.5    264    45
    Orientation errors      38.0  11.7    282    60     35.3   8.3    252    37
    Color errors            12.3  10.0    264    63     21.7   7.4    240    39
    Double errors            6.4   3.9    284    71     14.6   7.0    237    41

Large set size
  Pointing
    Hits                    16.8   7.2    297    68     11.8   6.1    283    71
    Orientation errors      52.6   8.9    283    57     39.5  11.4    262    54
    Color errors            16.3   6.2    275    70     27.4  10.4    255    65
    Double errors           14.3   6.6    269    67     21.3   8.2    255    42
  Grasping
    Hits                    17.7   7.0    323    75     10.1   5.5    282    61
    Orientation errors      47.3  12.3    296    60     40.3   7.8    252    47
    Color errors            18.9   7.3    283    51     29.6   6.1    261    55
    Double errors           16.0   9.1    279    60     20.0   9.7    245    36

Note: Proportions (M (%), SD) and latencies (M (ms), SD) are given per experiment. Hit = saccade directed to the target; color error = saccade to a non-target with the target's orientation but the wrong color; orientation error = saccade to a non-target with the target's color but the wrong orientation; double error = saccade to a non-target with both the wrong color and the wrong orientation. N = 12 (Experiment 1); N = 13 (Experiment 2).
For the large set size, no influence of task condition was obtained in the multivariate analysis of the errors, Λ = 0.81, p > 0.35. However, for the small set size, the MANOVA yielded a significant effect of task condition on errors, Λ = 0.49, p < 0.05. A post-hoc analysis (Fisher's least significant difference, p < 0.01) indicated that the proportion of orientation errors was significantly lower in the grasping condition (M = 38.0%) than in the pointing condition (M = 50.5%). Interestingly, the proportion of color errors did not differ between the tasks (M = 12.3% vs. M = 10.3%).¹ Thus, for the small set size the results showed a selective facilitation of orientation discrimination when grasping was required.
In analyzing the saccadic latencies, we defined the error type as a factor and con-
ducted a 2 (task condition: grasping, pointing) ·2 (set size: 7, 16 stimuli) ·2 (error
type: color error, orientation error) ANOVA. Latencies revealed a main effect of
error type, F(1, 11) = 5.69, p< 0.05, showing a general tendency of faster erroneous
Fig. 2. Saccadic error distributions are plotted as a function of the motor task and set size in Experiment 1. In the smaller set size with 6 distractors, saccadic errors occur significantly less often when participants grasp the target object than when saccades precede a pointing movement. In the larger set size, the action–intention effect on visual search disappears. Mean values and standard errors are presented.
¹ It appears that color discrimination is generally more efficient than orientation discrimination, despite our effort to match the discrimination difficulty for both features in the pilot experiment. Our more recent experiments, designed specifically to tackle this phenomenon, show that the equal feature discriminability derived from feature search tasks does not predict feature discriminability in a conjunction search task. Since we find this phenomenon also in visual search tasks without any requirement to point or grasp, we have reason to believe that it does not affect the conclusions of this study.
color discrimination compared to orientation discrimination. Set size also had a main effect, F(1, 11) = 5.13, p < 0.05: latencies increased with increasing set size.
2.3. Discussion
Even though the present experiment was carried out in a rather different way from
that of Bekkering and Neggers (2002), the results corroborate the earlier finding that
visual processing of a behaviorally relevant feature is selectively enhanced. The first
experiment demonstrated that the action–intention effect is also present for goal-
directed actions toward 2D stimuli. We found that subjects processed the relative
orientation of stimuli more efficiently when this feature was selectively important
for planned action relative to when it was not, i.e. grasping compared with pointing.
At the same time, the color discrimination performance remained the same for both
the pointing and the grasping condition.
Most importantly, the effect of action–intention was statistically significant only for the smaller set size; for the larger set size it disappeared. Saccadic latencies
showed a significant set size effect, suggesting that the search task became more
difficult for the larger set size. The increase in bottom-up information presumably
increased the load on cognitive processing thereby limiting the possibility to pro-
cess the action relevant feature optimally. This result strongly suggests an inter-
play between top-down (action-relevant) and bottom-up (stimulus-driven) visual
processing.
3. Experiment 2
In the second experiment we wanted to explore this interplay between bottom-up
and top-down sources from another perspective. Specifically, we aimed to test further whether the enhancement of a behaviorally relevant feature arises at the level of individual visual features or at the level of conjunction processing, where the individual features compete with each other. To do so, we manipulated the discriminability of color, the feature that should be equally relevant for both pointing and grasping. The discriminability of orientation was the same as in Experiment 1. If
the action–intention affects only the processing of orientation, it should appear inde-
pendently of the discriminability of color. If the action–intention affects the compe-
tition between orientation and color, the effect size should also depend on the
difficulty of the color processing. Increasing the difficulty of color discrimination
should require more of the limited processing resources. If this happens at the cost
of orientation processing, less capacity will be available for the enhancement of ori-
entation processing in the grasping task compared to pointing. Consequently, the
action–intention effect should decrease. In the second experiment we used the 10% feature detection threshold for the color dimension instead of the previously used 50% detection level.
182 A. Hannus et al. / Acta Psychologica 118 (2005) 171–191
3.1. Method
3.1.1. Participants
Thirteen naïve subjects (mean age 25 years) with normal or corrected-to-normal vision participated in return for payment. One of them had participated in Experiment 1.
3.1.2. Apparatus, stimuli, and procedure
The apparatus, tasks, and experimental settings were similar to those of Experiment 1, except for the color contrast of the stimuli. The color contrast between target and non-targets was decreased to 2% contrast between red and green stimuli, which corresponded to the level at which the subjects of the pilot experiment made about 10% correct responses in the color feature search task. The orientation of the stimuli was the same as in Experiment 1. Although the effect of action–intention disappeared for the larger set size in Experiment 1, we retained the larger set size in Experiment 2 to keep the experimental settings similar. Therefore both set sizes of 7 and 16 stimuli were used.
3.2. Results
Again, the omission of first saccades with latencies shorter than 100 ms or longer than 500 ms, or with an ambiguous terminus, led to the rejection of 31% of the trials (23% had an ambiguous end point, 0.08% were anticipation saccades, 7.8% had a latency of more than 500 ms). Descriptive values are presented in Table 1.
3.2.1. Hit analysis
An ANOVA showed no effect of the task condition on search accuracy. The main effect of set size on hit probability was highly significant, F(1, 12) = 92.71, p < 0.001: the increase in set size was related to a decrease in search accuracy. For the smaller set size the mean hit accuracy was 28.0%; for the larger set size it was 11.0%.
Also, the ANOVA of saccadic latencies yielded a main effect of set size, F(1, 11) = 5.97, p < 0.05, indicating slower reaction times on larger set size trials (M = 283 ms) compared to smaller set size trials (M = 265 ms).
3.2.2. Error analysis
The distribution of color and orientation errors is presented in Fig. 3.
The MANOVA (Wilks's Λ criterion) with two within-subject factors (set size: 7, 16 stimuli; task condition: grasping, pointing) and two dependent variables (color errors and orientation errors) showed no effect of the task. Only set size revealed a significant main effect (Λ = 0.28, p < 0.001). The 2 (task condition) × 2 (set size) × 2 (error type) ANOVA of the saccadic latencies yielded no effects (all Λs < 0.6).
A. Hannus et al. / Acta Psychologica 118 (2005) 171–191 183
3.2.3. Comparative analysis between experiments
Critical results were obtained by analyzing the two experiments together. The overall size of the action–intention effect is best characterized not purely by the number of orientation and color errors, but by the accuracy of correct feature detection. Therefore, for each subject, we determined the proportion of correctly discriminated color responses (color hits) and orientation responses (orientation hits). As a next step, we computed relative hit rates (orientation hits/color hits) for both the pointing and grasping conditions. Next, the ratio of hit rates was computed (grasping hit rate/pointing hit rate) and expressed as a logarithmic value to give equal weight to ratios below and above 1. The results are shown in Fig. 4.
The critical comparison included only the smaller set sizes of both experiments. A t-test revealed that the relative hit rate, as the measure of effect size, was significantly lower with decreased color contrast in Experiment 2 (M = 0.03) than in Experiment 1 (M = 0.14), t(23) = 2.34, p < 0.05. The action effect thus decreased when the discriminability of the behaviorally neutral feature was decreased. Fig. 4 shows that increasing set size had an additional diminishing effect on action–intention in both experiments.
Further, the conditional probabilities of detecting one feature correctly, depending on the accuracy of detection of the other feature, were calculated. First, we calculated the conditional probability of detecting one feature correctly if the other feature was also detected correctly, e.g. p(color correct | orientation correct) = p(color correct, orientation correct)/[p(color correct, orientation correct) + p(color incorrect, orientation correct)]. These probabilities were estimated by calculating the relevant ratios, e.g. hits/(hits + color errors). Second, we calculated the conditional probability of detecting one feature correctly if the other feature was detected incorrectly. Here the number of errors on the other feature was divided by the sum of the errors on the other feature and the double errors, e.g. p(color correct | orientation incorrect) = p(color
Fig. 3. Saccadic error distributions are plotted as a function of the motor task and set size in Experiment 2. The planned motor task has no systematic effect on the direction of initial saccades. Mean values and standard errors are presented.
correct, orientation incorrect)/[p(color correct, orientation incorrect) + p(color incorrect, orientation incorrect)]. Next, these values were corrected for guessing probability: for the conditional feature hits when the other feature was detected correctly, the guessing probability is 1/3 and 1/6 for the smaller and larger set size, respectively; for the conditional feature hits when the other feature was detected incorrectly, it is 2/4 and 5/10 for the smaller and larger set size, respectively. These values were averaged over all set sizes, tasks, and response types. The mean probability of detecting one feature correctly if the other feature is also detected correctly is 19.1%. This is significantly smaller than the mean probability of detecting one feature correctly when the other feature is detected incorrectly, 34.2%, χ²(1, N = 25) = 4.24, p < 0.05. Thus, the detection probability of one feature is higher when the detection of the other feature fails.
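The conditional-probability estimates above can be sketched from the four response counts (a hedged sketch; the paper states the guessing rates but not the correction formula, so the standard correction (p - g)/(1 - g) is assumed here, and all counts below are illustrative):

```python
def conditional_color_probs(hits, color_errors, orientation_errors, double_errors):
    """Estimate p(color correct | orientation correct) and
    p(color correct | orientation incorrect) from saccade counts,
    using the ratios given in the text."""
    # Among saccades with correct orientation, fraction with correct color:
    p_given_correct = hits / (hits + color_errors)
    # Among saccades with wrong orientation, fraction with correct color:
    p_given_incorrect = orientation_errors / (orientation_errors + double_errors)
    return p_given_correct, p_given_incorrect

def correct_for_guessing(p, guess_rate):
    """Correct an observed proportion for guessing; this particular formula
    is an assumption, as only the guessing rates are stated in the text."""
    return (p - guess_rate) / (1 - guess_rate)
```

With hypothetical counts of 30 hits, 10 color errors, 20 orientation errors, and 20 double errors, the uncorrected estimates are 0.75 and 0.50.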
3.3. Discussion
Under low color-discriminability conditions, no significant enhancement of processing of the behaviorally relevant feature, i.e. orientation, was found. Apparently, an increased demand on color processing diminishes the action-enhancement effect for orientation processing observed under otherwise equal conditions in Experiment 1. An important theoretical consequence of this finding is that lowered color discriminability presumably modulates the competition between color and orientation processing. We offer the explanation that under the approximately equal feature-discriminability conditions of Experiment 1, more processing resources could be
Fig. 4. The overall size of the action–intention effect is plotted. To do so, the detection accuracy of a specific feature was determined for the two experiments. The 50% color feature discriminability refers to the higher color contrast of Experiment 1, at which the subjects would make 50% correct responses in a color feature search task. The 10% color feature discriminability corresponds to the lower color contrast used in Experiment 2, at which the subjects would make approximately 10% correct responses in a color feature search task. The effect size is expressed as the logarithm of the ratio of the grasping hit rate (orientation hits/color hits) over the pointing hit rate. Mean values and standard errors are presented. This illustrates the decrease of the action–intention effect with increasing amounts of bottom-up information.
allocated to the processing of the behaviorally relevant feature if this feature was selectively more relevant to the action at hand. In Experiment 2 the color discrimination
was made more difficult. We assume that as color processing was not irrelevant to
finding the correct target, the additional resources previously allocated to the en-
hanced orientation processing were needed for color processing. The disappearance
of the action–intention effect under these conditions is in accordance with this line of
reasoning.
Moreover, comparison of the conditional probabilities of detecting one feature correctly, depending on the accuracy of detection of the other feature, revealed a clear trend: the accuracy of detecting one feature was higher if detection of the other feature was incorrect. This additional finding indicates a competition between the visual features.
4. General discussion
The aim of this study was to investigate the biasing effect of action–intention
on selective attention in more detail. We corroborated the finding that the inten-
tion to grasp an image of an object selectively enhances processing of the orienta-
tion of that object compared with a condition in which the task is to reach
and point to the object. Moreover, we now show that this selective enhancement
occurs even when the task is a rather unnatural pantomimic act and the object is
a 2D object without any volumetric properties. This finding suggests that the
enhancement in processing of the relevant visual feature over the task-irrelevant feature is a more general phenomenon. Hence, if people have to find a target object in visual space, the search process can be affected by the intentions they hold toward it.
To address the question of whether action–intention affects only the processing of the action-relevant visual feature or rather the competition between the two features, two manipulations of bottom-up sources of information were conducted. First, the dependence of the action–intention effect on capacity limitations in the visual system was tested.
Increasing set size in order to increase the load on cognitive processing decreased the
effect of action–intention. This indicates that the effect is limited by the available pro-
cessing capacity. Second, we found that lowering the discriminability of the behav-
iorally neutral feature caused a decrease in the size of the action–intention effect.
This indicates that the effect of action–intention affects visual attention at a level
common to both features, rather than a level at which features are processed
independently.
Importantly, the saccadic latencies reveal that the facilitation of behaviorally rel-
evant visual features cannot simply be explained by a speed-accuracy trade-off. The
inspection time that is needed to detect only correct color or correct orientation did
not depend on the behavioral task.
The current results also rule out an explanation in terms of simple priming from
the cue. In the Bekkering and Neggers study (2002), the color feature was primed
directly on the stimulus board, while the orientation cue was primed by an auditory
186 A. Hannus et al. / Acta Psychologica 118 (2005) 171–191
cue (high or low tone). Therefore, one could have argued that the orientation cue had to be represented more cognitively, increasing the chance of finding an effect for this dimension over the color dimension. Here, the target cue primed both features, and
as a result the search template was identical under all conditions. Apparently, when
one feature is more relevant in terms of the planned action, its processing is selec-
tively facilitated.
One could argue that the facilitation of orientation in grasping reflects an influence of motor preparation on visual discrimination (see Craighero et al., 1999, for a possible demonstration of such an effect). However, this explanation cannot explain all findings so far. First, the effect disappeared in the Bekkering and
Neggers study (2002) with four elements, suggesting that the visual discrimination
enhancement is not present if the task is relatively easy. Second, the fact that the
effect of action–intention decreased when the discriminability of the behaviorally
neutral feature (color) was lowered implies that other factors besides motor-visual
priming interact in the visual search process. If the preparation to grasp alone facilitated orientation processing, as an independent factor in the conjunction search task, color discriminability should not have had such a dramatic effect on the effect size, since the orientation dimension was not varied across experiments.
Alternatively, we argue that the competition between color and orientation
processing is modulated by a competition between the top-down and bottom-up
components. Apparently, bottom-up components, such as an initial segmentation of the visual world based on one feature, directly influence the processes in conjunction search. As a result, the top-down effect may or may not be present. More specifically, the data suggest that if the task is too easy, as in the Bekkering
and Neggers study (2002) with four elements, or if the task is too hard as in this
study with 16 elements, bottom-up factors might solely determine the visual search
process.
We now propose a description of the observed biased attentional selection at the three levels of analysis suggested by Marr (1982): the computational, algorithmic, and implementational levels of description. First, the goal of the computation carried out by the attentional system is to select from visual space the information relevant for action preparation, as suggested in the selection-for-action approach. The causal principle behind biased selective attention is the need to select those aspects of the environment that are behaviorally relevant and, owing to the limited capacity of cognitive processing, to ignore what is redundant. A parsimonious system should maximally process relevant information.
At the level of the algorithm, the representations and transformations are described. The explanation we offer is one of biased competition originating from top-down input. There are two sources of top-down modulation: the action intention (e.g. to grasp the object) and the search template (the knowledge about the features of the object). The search template is compared with the incoming information, with action-relevant features receiving higher activation. In the theory of biased competition, Desimone and Duncan (1995) suggest that the bias operates through the
attentional template. The current data show that a bias can originate from an action
plan. The visual cue representing the color and orientation of the target was the same
in both the pointing and grasping tasks, whereas the action plan—what should be done with this target—influences search accuracy. Thus, although the physical input from the visual cue to the attentional template is the same for both hand movement tasks, in terms of this theory the action plan modifies the template in favor of the behaviorally relevant visual feature. Alternatively, the action plan could also directly increase the activation of task-relevant visual features. The biased competition model can thus be maintained if one assumes a gain in activation for action-related visual characteristics. This allows the visual system to allocate more processing resources to the behaviorally relevant feature. However, if the discriminability of the behaviorally neutral feature is decreased, the processing of this feature probably requires more resources, which decreases the processing efficiency for the behaviorally more relevant feature. Note that the behaviorally neutral feature is actually not irrelevant to solving the task. Therefore an interaction between bottom-up (stimulus discriminability) and top-down (behavioral goal) factors emerges.²
At the implementation level one possible mechanism would be an enhanced tun-
ing of orientation-selective neurons in visual cortical areas. Although current results
do not reveal any indications about the neural correlates of the action–intention ef-
fect, we propose some candidates that should be looked for in the future. A neural
base for biased competition in attentional modulation could lie in the visual dorsal
stream. It is assumed that visual objects have different representations in ventral
stream and dorsal stream (Ungerleider & Haxby, 1994; Ungerleider & Mishkin,
1982). Though visual input is the same for both visual streams, dorsal processing
is related to the control of manipulating the objects, the ventral stream is responsible
for processing of perceptual characteristics of objects (Goodale, Milner, Jakobson, &
Carey, 1991; Milner & Goodale, 1995). Vidyasagar (1999) proposed a model of vi-
sual selection employing the faster transmission and spatial coding of the dorsal
stream that conducts a preattentive parallel processing over the whole scene. This
information is fed back into the earlier cortical areas to selectively facilitate the loca-
tions containing relevant information. A mechanism like this could underlie the bias
in favor of a behaviorally relevant visual feature, as revealed in our results.
In addition, the neural bases for top-down attentional modulation are often attributed to the prefrontal cortex. The attentional set that guides visual processing toward task-relevant information has been localized to the dorsolateral prefrontal region (Banich et al., 2000). In a visual search task the subject is asked to find a predefined stimulus. It is plausible to assume that a representation of this stimulus is held in working memory, which is correlated with activity in prefrontal cortex (D'Esposito,
² Remarkably, although we aimed to match the difficulty of color and orientation discrimination in Experiment 1, color discrimination was generally better than orientation discrimination. This suggests that color and orientation processing are not independent in a conjunction task. We found additional evidence for such a dependence: the chance of getting one feature correct is conditional on performance for the other feature.
Postle, Ballard, & Lease, 1999; Ranganath, Johnson, & D'Esposito, 2003). Close
relationships between attention and working memory are assumed (Desimone &
Duncan, 1995; Duncan & Humphreys, 1989). Miller, Erickson, and Desimone
(1996) found that the maintenance of a stimulus representation is related to prefron-
tal activity in macaques. The prefrontal activity could be the underlying mechanism
of top-down attentional modulation due to feedback inputs to visual cortex (Miller
et al., 1996). Recently, Iba and Sawaguchi (2003) also highlighted the importance of the prefrontal cortex in a visual selection task. After local inactivation of the macaque's dorsolateral prefrontal cortex, they found a disturbance of saccadic eye movements in a visual search task (erroneously directed initial saccades, independent of stimulus
saliency) but not in a simple object detection task. Moreover, there is evidence for
shared neural network components at several frontoparietal areas for both spatial
attention and working memory operations (Awh & Jonides, 2001; LaBar, Gitelman,
Parrish, & Mesulam, 1999).
Most likely, the effect of action–intention on visual search cannot be localized in one specific area; rather, extensive parallel and feedback connections build up a network responsible for the interaction between action intentions on the one hand and visual processing of the world on the other. Gathering more specific insights into the connections between action and perception in visual search might also reveal new insights into the coupling between user-driven top-down processes and stimulus-driven bottom-up processes in general.
References
Allport, A. (1987). Selection for action: Some behavioral and neurophysiological considerations of
attention and action. In H. Heuer & A. F. Sanders (Eds.), Perspectives on perception and action
(pp. 395–419). Hillsdale, NJ: Lawrence Erlbaum Associates.
Allport, A. (1990). Visual attention. In M. I. Posner (Ed.), Foundations of cognitive science (pp. 631–682).
Cambridge, MA: MIT Press.
Awh, E., & Jonides, J. (2001). Overlapping mechanisms of attention and spatial working memory. Trends
in Cognitive Sciences, 5, 119–126.
Banich, M. T., Milham, M. P., Atchley, R. A., Cohen, N. J., Webb, A., Wszalek, T., et al. (2000). Prefrontal regions play a role in imposing an attentional 'set': Evidence from fMRI. Cognitive Brain Research, 10, 1–9.
Bekkering, H., & Neggers, S. F. (2002). Visual search is modulated by action intentions. Psychological
Science, 13, 370–374.
Birmingham, E., & Pratt, J. (2005). Examining inhibition of return with onset and offset cues in the
multiple-cuing paradigm. Acta Psychologica 118, 101–121.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97, 523–547.
Cornelissen, F. W., Peters, E. M., & Palmer, J. (2002). The Eyelink Toolbox: eye tracking with MATLAB
and the Psychophysics Toolbox. Behavioral Research Methods, Instruments, and Computers, 34,
613–617.
Craighero, L., Fadiga, L., Rizzolatti, G., & Umiltà, C. (1999). Action for perception: A motor-visual attentional effect. Journal of Experimental Psychology: Human Perception and Performance, 25, 1673–1692.
Desimone, R. (1998). Visual attention mediated by biased competition in extrastriate visual cortex. Philosophical Transactions of the Royal Society of London B, 353, 1245–1255.
Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of
Neuroscience, 18, 193–222.
D'Esposito, M., Postle, B. R., Ballard, D., & Lease, J. (1999). Maintenance versus manipulation of information held in working memory: An event-related fMRI study. Brain and Cognition, 41, 66–86.
Deubel, H., & Schneider, W. X. (1996). Saccade target selection and object recognition: Evidence for a
common attentional mechanism. Vision Research, 36, 1827–1837.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96,
433–458.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin
Company.
Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation
between perceiving objects and grasping them. Nature, 349, 154–156.
Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event Coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences, 24, 849–937.
Humphreys, G. W., & Riddoch, M. J. (2001). Detection by action: Neuropsychological evidence for
action-defined templates in search. Nature Neuroscience, 4, 84–88.
Iba, M., & Sawaguchi, T. (2003). Involvement of the dorsolateral prefrontal cortex of monkeys in visuospatial target selection. Journal of Neurophysiology, 89, 587–599.
Kastner, S., & Ungerleider, L. G. (2001). The neural basis of biased competition in human visual cortex.
Neuropsychologia, 39, 1263–1276.
LaBar, K. S., Gitelman, D. R., Parrish, T. B., & Mesulam, M.-M. (1999). Neuroanatomic overlap of
working memory and spatial attention networks: A functional MRI comparison within subjects.
Neuroimage, 10, 695–704.
Marr, D. (1982). Vision. A computational investigation into the human representation and processing of
visual information. San Francisco: W.H. Freeman and Company.
Maunsell, J. H., & Van Essen, D. C. (1983). The connections of the middle temporal visual area (MT) and
their relationship to a cortical hierarchy in the macaque monkey. Journal of Neuroscience, 3,
2563–2586.
Miller, E. K., Erickson, C. A., & Desimone, R. (1996). Neural mechanisms of visual working memory in prefrontal cortex of the macaque. Journal of Neuroscience, 16, 5154–5167.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford University Press.
Moutoussis, K., & Zeki, S. (2002). Responses of spectrally selective cells in macaque area V2 to
wavelengths and colors. Journal of Neurophysiology, 87, 2104–2112.
Neumann, O. (1987). Beyond capacity: A functional view of attention. In H. Heuer & A. F. Sanders
(Eds.), Perspectives on perception and action (pp. 361–394). Hillsdale, NJ: Lawrence Erlbaum
Associates.
Neumann, O. (1990). Visual attention and action. In O. Neumann & W. Prinz (Eds.), Relationships
between perception and action: Current approaches (pp. 227–267). Berlin: Springer-Verlag.
Pashler, H. (1987). Detecting conjunctions of color and form: Reassessing the serial search hypothesis.
Perception and Psychophysics, 47, 191–201.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into
movies. Spatial Vision, 10, 437–442.
Ranganath, C., Johnson, M. K., & D'Esposito, M. (2003). Prefrontal activity associated with working memory and episodic long-term memory. Neuropsychologia, 41, 378–389.
Riddoch, M. J., Humphreys, G. W., Edwards, S., Baker, T., & Wilson, K. (2003). Seeing the action:
Neuropsychological evidence for action-based effects on object selection. Nature Neuroscience, 6,
82–89.
Rizzolatti, G., & Craighero, L. (1998). Spatial attention: Mechanisms and theories. In M. Sabourin, F. Craik, & M. Robert (Eds.), Advances in psychological science: Biological and cognitive aspects (Vol. 2, pp. 171–198). Montreal: Psychology Press.
Ungerleider, L. G., & Haxby, J. V. (1994). ‘‘What’’ and ‘‘Where’’ in the human brain. Current Opinion in
Neurobiology, 4, 157–165.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, &
R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge: MIT Press.
Vidyasagar, T. R. (1999). A neuronal model of attentional spotlight: Parietal guiding the temporal. Brain
Research Reviews, 30, 66–76.
Zeki, S. M. (1973). Colour coding in rhesus monkey prestriate cortex. Brain Research, 53, 422–427.
Zeki, S. M. (1977). Colour coding in the superior temporal sulcus of rhesus monkey visual cortex.
Proceedings of the Royal Society of London Series B, 197, 195–223.
... Crucially, this was the case even though color and orientation selection efficacy had been fully equated based on prior single feature search experiments to achieve "feature equality" (Olds, Graham, & Jones, 2009). Comparable asymmetries have been described in other contexts: selection-for-action paradigms (Bekkering & Neggers, 2002;Hannus, Cornelissen, Lindemann, & Bekkering, 2005), in paradigms in which feature-cues were given prior to the conjunction search display (Anderson, Heinke, & Humphreys, 2010;Zhuang & Papathomas, 2011), in a feature preview paradigm (Olds & Fockler, 2004;Olds et al., 2009), and in double feature singleton search (Koene & Zhaoping, 2007;Zhaoping & May, 2007). ...
... Moreover, our results corroborate previous findings: "feature equality," derived on the basis of feature search, does not imply equal feature selection efficacy during conjunction search (Hannus et al., 2005;Hannus et al., 2006;Olds & Fockler, 2004;Olds et al., 2009). Comparable asymmetries have been described in selection-for-action paradigms (Bekkering & Neggers, 2002;Hannus et al., 2005) and for conjunctions of orientation and size (Hannus et al., 2006). ...
... Moreover, our results corroborate previous findings: "feature equality," derived on the basis of feature search, does not imply equal feature selection efficacy during conjunction search (Hannus et al., 2005;Hannus et al., 2006;Olds & Fockler, 2004;Olds et al., 2009). Comparable asymmetries have been described in selection-for-action paradigms (Bekkering & Neggers, 2002;Hannus et al., 2005) and for conjunctions of orientation and size (Hannus et al., 2006). These asymmetries are thus a behavioral fingerprint (Zhaoping et al., 2009) of the involvement of conjunctively tuned neural mechanisms in search and preview. ...
Article
Full-text available
Visual search often requires combining information on distinct visual features such as color and orientation, but how the visual system does this is not fully understood. To better understand this, we showed observers a brief preview of part of a search stimulus—either its color or orientation—before they performed a conjunction search task. Our experimental questions were (1) whether observers would use such previews to prioritize either potential target locations or features, and (2) which neural mechanisms might underlie the observed effects. In two experiments, participants searched for a prespecified target in a display consisting of bar elements, each combining one of two possible colors and one of two possible orientations. Participants responded by making an eye movement to the selected bar. In our first experiment, we found that a preview consisting of colored bars with identical orientation improved saccadic target selection performance, while a preview of oriented gray bars substantially decreased performance. In a follow-up experiment, we found that previews consisting of discs of the same color as the bars (and thus without orientation information) hardly affected performance. Thus, performance improved only when the preview combined color and (noninformative) orientation information. Previews apparently result in a prioritization of features and conjunctions rather than of spatial locations (in the latter case, all previews should have had similar effects). Our results thus also indicate that search for, and prioritization of, combinations involve conjunctively tuned neural mechanisms. These probably reside at the level of the primary visual cortex.
... Setting up a specific action plan primes action-related feature dimensions by increasing their weight and thus their impact on perceptual processing (intentional weighting; Hommel, 2009; Hommel et al., 2001; Memelink & Hommel, 2013). Not only does processing of action-relevant features of the goal object itself increase (e.g., Bekkering & Neggers, 2002; Gutteling et al., 2011; Hannus et al., 2005), but entire feature dimensions that provide action-relevant information are primed (e.g., Fagioli et al., 2007; Wykowska & Schubö, 2012; Wykowska et al., 2009). ...
... Memory for colour, conversely, tended to be better while a pointing movement was being planned. The latter effect, however, was much smaller and did not reach statistical significance, probably because the action relevance of colour information for pointing movements is simply not that high (see also Bekkering & Neggers, 2002; Hannus et al., 2005). ...
Article
Full-text available
Perception is shaped by actions, which determine the allocation of selective attention across the visual field. Here, we review evidence that maintenance in visual working memory is similarly influenced by actions (eye or hand movements), planned and executed well after encoding: Representations that are relevant for an upcoming action – because they spatially correspond to the action goal or because they are defined along action-related feature dimensions – are automatically prioritised over action-irrelevant representations and held in a stable state. We summarise what is known about specific characteristics and mechanisms of selection-for-action in working memory, such as its temporal dynamics and spatial specificity, and delineate open questions. This newly-burgeoning area of research promotes a more functional perspective on visual working memory that emphasizes its role in action control.
... This common code consists of a network of features distributed across domains (such as action and perception) that can be bound together to represent common sensorimotor events. Several behavioural studies have demonstrated the existence of a bi-directional link between action and perception in terms of an "action-modulated perception" mechanism that automatically enhances relevant object features during action preparation [3][4][5][6]. For example, perception of a relevant feature such as orientation is enhanced during the preparation of a grasping action but not during the preparation of a pointing action, for which object orientation is unimportant [7,8]. ...
Article
Full-text available
Perception and action are essential in our day-to-day interactions with the environment. Despite the dual-stream theory of action and perception, it is now accepted that action and perception processes interact with each other. However, little is known about the impact of unpredicted changes of target size during grasping actions on perception. We assessed whether size perception and saccade amplitude were affected before and after grasping a target that changed its horizontal size during action execution, in the presence or absence of tactile feedback. We tested twenty-one participants in 4 blocks of 30 trials. Blocks were divided into two experimental tactile feedback paradigms: tactile and non-tactile. Trials consisted of 3 sequential phases: pre-grasping size perception, grasping, and post-grasping size perception. During the pre- and post-phases, participants executed a saccade towards a horizontal bar and performed a manual estimation of the bar's size. During the grasping phase, participants were asked to execute a saccade towards the bar and to make a grasping action towards the screen. While grasping, 3 horizontal size perturbation conditions were applied: non-perturbation, shortening, and lengthening. In 30% of the trials a perturbation was presented, meaning the bar was symmetrically shortened or lengthened by 33% of its original size. Participants' hand and eye positions were assessed by a motion capture system and a mobile eye-tracker, respectively. After grasping, in both tactile and non-tactile feedback paradigms, size estimation was significantly reduced in lengthening (p = 0.002) and non-perturbation (p
... Relatedly, it has been shown that perceptual recognition of to-be-grasped features of the object is enhanced due to attention allocation (e.g., Bekkering & Neggers, 2002; Hannus, Cornelissen, Lindemann, & Bekkering, 2005). ...
Thesis
Manual actions are the key motor actions for interacting with the physical world around us. Every day, we perform a huge variety of skilled manual actions to interact with and manipulate objects, to communicate with people through gestures or written text messages, as well as to do sports, play musical instruments and execute dance figures. In natural environments, we generally perform manual actions together with other motor or cognitive tasks, such as grasping a coffee cup while having a conversation. Moreover, in natural environments, individual physiological or cognitive states such as action goals, or environmental situations such as target object location, can change unexpectedly. Therefore, in natural environments, action flexibility is an important cognitive ability which provides rapid and precise manual actions suited to the desired action outcomes. Accordingly, performing skilled manual actions that are efficiently combined with other tasks and rapidly adapted to changing action demands requires the close engagement of the sensory, motor and cognitive systems. The current thesis aims to further the understanding of the neuro-cognitive mechanisms underlying manual action control, including action flexibility. That is, how the human brain orchestrates the sensorimotor systems with cognitive processes to plan, execute and adapt a variety of skilled manual actions in natural environments. For this aim, the current thesis focuses on grasping movements, as the most frequently performed yet most complex manual actions requiring cognitive processes, and on working memory (WM), as the core cognitive process guiding goal-directed behavior. The current thesis particularly investigated the neurophysiological correlates of the functional domain interactions between manual action control (grasping movements) and cognition (WM).
For this investigation, a cognitive-motor dual-task paradigm, which required the concurrent performance of a WM task and a manual task, was integrated into an electroencephalography (EEG) setting. For a profound investigation of the domain interactions, three experimental studies examined different movement phases (execution, re-planning), WM domains (verbal, visuospatial), processes (encoding, maintenance, retrieval) and response modalities (manual, vocal). EEG was recorded while participants were performing the experimental tasks. Event-related potentials (ERPs) were extracted from the EEG recordings. Study 1 investigated the neurophysiological correlates of the domain interactions between movement execution and WM. That is, whether the execution of a prepared movement without additional planning requirements interferes with WM, and where the locus of the interference is, i.e., the encoding or retrieval process of the verbal and visuospatial domains. In a dual-task scenario, participants either performed verbal and visuospatial WM tasks alone (single-task condition) or concurrently with a manual task which required grasping a sphere, holding it and placing it on a motor target, i.e., a grasp-and-place movement (dual-task condition). ERPs were extracted for encoding and retrieval processes in the verbal and visuospatial tasks. Both the behavioral memory performance and the ERPs were compared between single-task and dual-task conditions. The behavioral analyses showed that memory performance was lower in the dual-task compared to the single-task condition for the visuospatial task, but not for the verbal task. That is, concurrent movement execution interfered only with the visuospatial domain and decreased memory performance only for the visuospatial task, i.e., domain-specific movement execution costs. ERP analyses showed different ERP patterns in the dual-task compared to the single-task condition only during the encoding process in the visuospatial task.
That is, domain-specific interference of movement execution was also obtained at the neurophysiological level, which was further specific to the encoding process of the visuospatial domain, i.e., domain- and process-specific movement execution costs. Study 2 investigated the neurophysiological correlates of the domain interactions between movement re-planning and WM. That is, whether changing the prepared movement plan of an ongoing movement interferes with WM, and where the locus of the interference is, i.e., the maintenance or retrieval process of the verbal and visuospatial domains. In the dual-task scenario, participants performed verbal and visuospatial tasks concurrently with a grasp-and-place movement which required either executing the initially prepared movement plan (prepared movement condition) or changing it to a new alternative plan reversing the movement direction (re-planned movement condition). ERPs were extracted for maintenance and retrieval processes in the verbal and visuospatial tasks. Both the behavioral memory performance and the ERPs were compared between the prepared movement and re-planned movement conditions. The behavioral analyses showed that memory performance was lower in the re-planned condition compared to the prepared condition in both verbal and visuospatial tasks. That is, concurrent movement re-planning interfered with both verbal and visuospatial domains and decreased memory performance for both WM tasks, i.e., domain-general movement re-planning costs. ERP analyses showed different ERP patterns in the re-planned condition compared to the prepared condition during the maintenance process, but not the retrieval process, in the verbal and visuospatial tasks. That is, domain-general interference of movement re-planning was also obtained at the neurophysiological level, which was further specific to the maintenance process of the verbal and visuospatial domains, i.e., domain-general, but process-specific, movement re-planning costs.
Study 3 investigated the role of the WM response modality in the neurophysiological correlates of the movement re-planning-WM interactions. That is, whether the neurophysiological interactions between movement re-planning and WM depend on the particular pairing of stimulus-response modalities within the WM tasks, as well as on the response modality overlap between the WM tasks and the manual task. The experimental procedure was the same as in Study 2. Participants performed verbal and visuospatial tasks concurrently with a grasp-and-place movement which required executing the initially prepared movement plan in some trials (prepared movement condition) and changing it in other trials (re-planned movement condition). Different from Study 2, in which the WM tasks used a manual response modality, the current WM tasks used a vocal response modality, i.e., a spoken report of memory items. Both the behavioral memory performance and the ERP analyses showed results similar to those obtained with the manual response modality (Study 2). Namely, movement re-planning interfered with both verbal and visuospatial domains and decreased memory performance for both WM tasks, i.e., domain-general movement re-planning costs. Moreover, prepared and re-planned movements generated different ERPs only during the maintenance process of the verbal and visuospatial domains, i.e., domain-general, but process-specific, movement re-planning costs. Taken together, the current studies have provided the first systematic investigations of the neurophysiological correlates of the functional domain interactions between manual action control and WM. The current findings have shown that manual action-WM interactions are complex and depend on a variety of factors such as movement phases (execution, re-planning), WM domains (verbal, visuospatial) and processes (encoding, maintenance, retrieval).
Nevertheless, the current findings have pointed out the functional role of WM in the execution and re-planning of manual actions, and thus the importance of investigating the domain interactions between manual action control, WM and human neurophysiology. In general, the current thesis, by bringing together human movement science (grasping movements), cognitive science (working memory) and neurophysiology (EEG), contributes to the understanding of the neuro-cognitive mechanisms underlying manual action control, which has enabled humans to develop species-specific manual skills and thus to achieve social, cultural and technological progress. Accordingly, the current thesis mainly relates to human manual intelligence, and also provides input to related fields such as cognitive robotics.
... Given that altered vision within peri-hand space is thought to result from feedback from reach and grasp networks in the dorsal stream (Perry and Fallah 2017) and that grasp-relevant object features appear to improve target detection accuracy for subsequent reach and grasp movements (Hannus et al. 2005; Bekkering and Neggers 2002), we hypothesized that objects that afford grasping and are easily defined by their orientation would facilitate visual perception within peri-hand space. To test this, participants were asked to complete a visual search task in which they had to identify a single target image among an array of eleven distractor images. ...
Article
Full-text available
Peri-hand space is the area surrounding the hand. Objects within this space may be subject to increased visuospatial perception, increased attentional prioritization, and slower attentional disengagement compared to more distal objects. This may result from kinesthetic and visual feedback about the location of the hand that projects from the reach and grasp networks of the dorsal visual stream back to occipital visual areas, which in turn, refines cortical visual processing that can subsequently guide skilled motor actions. Thus, we hypothesized that visual stimuli that afford action, which are known to potentiate activity in the dorsal visual stream, would be associated with greater alterations in visual processing when presented near the hand. To test this, participants held their right hand near or far from a touchscreen that presented a visual array containing a single target object that differed from 11 distractor objects by orientation only. The target objects and their accompanying distractors either strongly afforded grasping or did not. Participants identified the target among the distractors by reaching out and touching it with their left index finger while eye-tracking was used to measure visual search times, target recognition times, and search accuracy. The results failed to support the theory of enhanced visual processing of graspable objects near the hand as participants were faster at recognizing graspable compared to non-graspable targets, regardless of the position of the right hand. The results are discussed in relation to the idea that, in addition to potentiating appropriate motor responses, object affordances may also potentiate early visual processes necessary for object recognition.
... This suggests that any coupling between perception and action observed here is not affected by the difficulty of the perceptual discrimination. This is in contrast to findings that increasing the number of items in visual search tasks influences motor-visual priming effects, with effects vanishing at larger set sizes [36, 62]. This initially suggested that the processing resources shared by action and perception are to some extent capacity limited, as at larger set sizes there are insufficient resources for actions to further enhance stimulus processing. ...
Article
Full-text available
Action preparation can facilitate performance in tasks of visual perception, for instance by speeding up responses to action-relevant stimulus features. However, it is unknown whether this facilitation reflects an influence on early perceptual processing, or instead post-perceptual processes. In three experiments, a combination of psychophysics and electroencephalography was used to investigate whether visual features are influenced by action preparation at the perceptual level. Participants were cued to prepare oriented reach-to-grasp actions before discriminating target stimuli oriented in the same direction as the prepared grasping action (congruent) or not (incongruent). As expected, stimuli were discriminated faster if their orientation was congruent, compared to incongruent, with the prepared action. However, action-congruency had no influence on perceptual sensitivity, regardless of cue-target interval and discrimination difficulty. The reaction time effect was not accompanied by modulations of early visual-evoked potentials. Instead, beta-band (13–30 Hz) synchronization over sensorimotor brain regions was influenced by action preparation, indicative of improved response preparation. Together, the results suggest that action preparation may not modulate early visual processing of orientation, but likely influences higher order response or decision related processing. While early effects of action on spatial perception are well documented, separate mechanisms appear to govern non-spatial feature selection.
Article
Full-text available
Actions shape what we see and memorize. A previous study suggested the interaction between motor and memory systems by showing that memory encoding for task-irrelevant items was enhanced when presented with motor-response cues. However, in the studies on the attentional boost effect, it has been revealed that detection of the target stimulus can lead to memory enhancement without requiring overt action. Thus, the direct link between the action and memory remains unclear. To exclude the effect of the target detection process as a potential confounder, this study assessed the benefit of action for memory by separating items from the response cue in time. In our pre-registered online experiment (N = 142), participants responded to visual Go cues by pressing a key (i.e., motor task) or counting (i.e., motor-neutral cognitive task) while ignoring No-go cues. In each trial, two task-irrelevant images were sequentially presented after the cue disappearance. After encoding the Go/No-go tasks, participants performed a surprise recognition memory test for those images. Importantly, we quantified the impact of overt execution of the action by comparing memories with and without motor response and the impact of covert motor processes (e.g., preparation and planning of action) by comparing memory between the motor and cognitive tasks. The results showed no memory differences between Go and No-go trials in the motor task. This means that the execution itself was not critical for memory enhancement. However, the memory performance in the motor No-go trials was higher than that in the cognitive No-go trials, only for the items presented away from the cues in time. Therefore, engaging the motor task itself could increase incidental memory for the task-irrelevant items compared to a passive viewing situation. We added empirical evidence on the online interaction between action and memory encoding. 
These memory advantages may arise especially during action preparation and planning. We believe this finding may expand our present understanding of everyday memory phenomena, such as active learning.
Article
Abstract: In order to select the information relevant for acting within the environment, individuals deploy genuine visual exploration strategies. The factors determining these strategies are linked to the structure of the visual scene, but also to the individual's representations. Recent studies show that the specificity of an action intention can modify an individual's visual exploration. We evaluate whether (Experiment 1), prior to any specific character of this intention, the mere fact of intending or not intending to act on an object in the environment can modify perceptual search strategies. In a complementary manner (Experiment 2), we focus more precisely on the effects of the congruence and the specificity of the action when an action intention is indeed present. Our results suggest that perceptual-motor strategies are elaborated in a manner coherent with the goal pursued in a given situation, and depend on the action intention as well as on the congruence and specificity of the action with respect to the environment.
Article
Purpose The purpose of this paper is to study online clothing consumers' behaviour and their visual attention mechanism, in order to provide objective and quantitative evidence for the display and sales of online clothing. Design/methodology/approach Firstly, this paper conducted a Focus Group Methodology and a questionnaire survey to obtain the concern factors of online clothing. Secondly, the bottom-up visual stimulation of online clothing and consumers' top-down expectations were analysed, and hypotheses about significant clothing stimuli and consumers' emotional experience were proposed. Thirdly, online clothing consumers' visual attention rules and related qualitative results were discussed, and a visual attention law for online clothing was proposed. Finally, taking the company's 84th-quarter clothing design practices as research projects, all the hypotheses were demonstrated through eye movement physiology experiments, an online clothing trial release, and node sales data. Findings Online clothing has unique visual display ways compared with other online products such as online advertising, brands and food packaging. Clothing patterns of unfamiliar (fresh) font shapes are more attractive than patterns of familiar fonts. The cause of the bottom-up visual attention bias is the contrast between clothing features, not the absolute stimulus intensity of the features themselves. Clothing factors can change the associated emotional experience from no difference to a significant difference under the influence of other clothing factors. Originality/value Hypotheses about online clothing consumer behaviour and its visual attention mechanism are put forward, and objective and quantitative evidence is provided through eye tracking.
Article
Full-text available
Two experiments were performed to explore a possible visuomotor priming effect. The participants were instructed to fixate a cross on a computer screen and to respond, when the cross changed colour ("go" signal), by grasping one of two objects with their right hand. The participants knew in advance the nature of the to-be-grasped object and the appropriate motor response. Before (100 msec), simultaneously with or after (100 msec) the "go" signal, a two-dimensional picture of an object (the prime), centred around the fixation cross, was presented. The prime was not predictive of the nature of the to-be-grasped object. There was a congruent condition, in which the prime depicted the to-be-grasped object, an incongruent condition, in which the prime depicted the other object, and a neutral condition, in which either no prime was shown or the prime depicted an object that did not belong to the set of to-be-grasped objects. It was found that, in the congruent condition, reaction time for initiating a grasping movement was reduced. These results provide evidence of visuomotor priming.
Book
This book is the fruit of a study group on perception and action that worked at the Center for Interdisciplinary Research (ZiF) of the University of Bielefeld, FRG in the academic year 1984-1985. We express our gratitude to the ZiF for hosting the group and for providing financial and organizational support for its scientific activities, including a meeting of the authors of the present volume that took place at the ZiF in July 1986. This is the study group's last common product, and it took considerable time to give the book its final shape. Most of the editing was done while one of us (O. N.) was a Fellow at the Netherlands Institute for Advanced Study in the Humanities and Social Sciences (NIAS) during the academic year 1987-1988. Thanks are due to NIAS for its generous support. We also thank all our friends and colleagues who contributed to the book.
Chapter
Until recently, there has been relatively little interest in relationships between attention and action. When modern research into attention began in the 1950s, the dominating approach was to analyze attention within the information-processing framework, and information processing was conceptualized mainly in terms of what happens to stimulus information after it has entered the human processing system (e.g., Broadbent, 1958, 1971; Neisser, 1967). Attentional processes were viewed as modifying the flow of information by selecting part of the incoming information for processing. Most of the theoretical discussions since the 1960s have centered around the question of where in the processing sequence this selection takes place. This has usually been phrased as the problem of whether selective attention operates "early," i.e., prior to complete stimulus analysis, or "late," after the information in the stimulus has been fully identified (e.g., Deutsch & Deutsch, 1963; Treisman & Geffen, 1967). Common to both theoretical positions has been the conviction that not all incoming information can be dealt with simultaneously by the system because processing capacity is limited. Hence, selection was thought to be necessary. Attention was thus conceptualized as related to the analysis and internal representation of incoming information rather than to the control of action.
Article
The influence of action intentions on visual selection processes was investigated in a visual search paradigm. A predefined target object with a certain orientation and color was presented among distractors, and subjects had to either look and point at the target or look at and grasp the target. Target selection processes prior to the first saccadic eye movement were modulated by the different action intentions. Specifically, fewer saccades to objects with the wrong orientation were made in the grasping condition than in the pointing condition, whereas the number of saccades to an object with the wrong color was the same in the two conditions. Saccadic latencies were similar under the different task conditions, so the results cannot be explained by a speed-accuracy trade-off. The results suggest that a specific action intention, such as grasping, can enhance visual processing of action-relevant features, such as orientation. Together, the findings support the view that visual attention can be best understood as a selection-for-action mechanism.