Article · Publisher preview available

Eye Movements in Reading and Information Processing: 20 Years of Research

American Psychological Association
Psychological Bulletin
Authors: Keith Rayner

Abstract

Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.
Psychological Bulletin
1998, Vol. 124, No. 3, 372-422
Copyright 1998 by the American Psychological Association, Inc.
0033-2909/98/$3.00

Eye Movements in Reading and Information Processing: 20 Years of Research

Keith Rayner
University of Massachusetts at Amherst
Many studies using eye movements to investigate cognitive processes have appeared over the past 20 years. In an earlier review, I (Rayner, 1978b) argued that since the mid-1970s we have been in a third era of eye movement research and that the success of research in the current era would depend on the ingenuity of researchers in designing interesting and informative studies. It would appear from the vast number of studies using eye movement data over the past 20 years that research in this third era is fulfilling the promise inherent in using eye movement behavior to infer cognitive processes.
The first era of eye movement research extended from Javal's initial observations concerning the role of eye movements in reading in 1879 (see Huey, 1908) up until about 1920. During this era, many basic facts about eye movements were discovered. Issues such as saccadic suppression (the fact that we do not perceive information during an eye movement), saccade latency (the time that it takes to initiate an eye movement), and the size of the perceptual span (the region of effective vision) were of concern in this era. The second era, which coincided with the behaviorist movement in experimental psychology, tended to have a more applied focus, and little research was undertaken with eye movements to infer cognitive processes. Although classic work by Tinker (1946) on reading and by Buswell (1935) on scene perception was carried out during this era, in retrospect, most of the work seems to have focused on the eye movements per se (or on surface aspects of the task being investigated). Tinker's (1958) final review ended on the rather pessimistic note that almost everything that could be learned about reading from eye movements (given the technology at the time) had been discovered. Perhaps that opinion was widely held, because between the late 1950s and the mid-1970s little research with eye movements was undertaken.

Preparation of this article was supported by a Research Scientist Award from the National Institute of Mental Health (MH01255) and by Grants HD 17246 and HD 26765 from the National Institutes of Health. Thanks are extended to Ken Ciuffreda, Charles Clifton, David Irwin, and Alexander Pollatsek for their helpful comments on prior versions of this article. Correspondence concerning this article should be addressed to Keith Rayner, Department of Psychology, University of Massachusetts, Amherst, Massachusetts 01003. Electronic mail may be sent to rayner@psych.umass.edu.
The third era of eye movement research began in the mid-1970s and has been marked by improvements in eye movement recording systems that have allowed measurements to be more accurate and more easily obtained. It is beyond the scope of the present review to detail all of the technological advancements that have been made. Numerous works have dealt with methods of analyzing eye movement data (see Kliegl & Olson, 1981; Pillalamarri, Barnette, Birkmire, & Karsh, 1993; Scinto & Barnette, 1986), and much has been learned about the characteristics of various eye-tracking systems (see Deubel & Bridgeman, 1995a, 1995b; Müller, Cavegn, d'Ydewalle, & Groner, 1993). More important, the era has yielded tremendous technological advances that have made it possible to interface laboratory computers with eye-tracking systems so that large amounts of data can be collected and analyzed. These technological advances have also allowed for innovative techniques to be developed in which the visual display is changed contingent on the eye position. In the eye-contingent display change paradigm (McConkie, 1997; McConkie & Rayner, 1975; Rayner, 1975b; Reder, 1973), eye movements are monitored, and changes are made in the visual display that the reader is looking at, contingent on when the eyes move (or at some other critical point in the fixation). Finally, the development of general theories of language processing has made it possible to use eye movement records for a critical examination of the cognitive processes underlying reading.
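The core of the eye-contingent display change paradigm can be sketched in a few lines: gaze samples are monitored, and the display is swapped the first time the gaze crosses an invisible boundary. The sketch below is a minimal illustration only, not any particular tracker's API; the sample stream and pixel values are hypothetical.

```python
def boundary_swap_index(gaze_xs, boundary_x):
    """Return the index of the first gaze sample at or past an invisible
    boundary (the moment a display change would be triggered in the
    boundary technique), or None if the boundary is never crossed.

    gaze_xs: sequence of horizontal gaze positions (pixels), one per sample
    boundary_x: pixel location of the invisible boundary
    """
    for i, x in enumerate(gaze_xs):
        if x >= boundary_x:
            return i  # a real system must complete the swap mid-saccade
    return None
```

In an actual experiment the display change must finish during the saccade itself (roughly 20-40 ms), which is why fast recording systems interfaced with laboratory computers were a prerequisite for the technique.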
In the present article, recent studies of eye movements in reading and other information processing tasks are examined. Since the last review in this journal (Rayner, 1978b), there have been many reviews of eye movement research (Kennedy, 1987; Lévy-Schoen & O'Regan, 1979; O'Regan, 1990; Pollatsek, 1993; Rayner, 1984, 1993, 1995, 1997; Rayner & Pollatsek, 1987, 1992; G. Underwood, 1985). However, none of them are
... While the PVL emerges from aggregated reading behavior, the ILP captures a single, moment-to-moment decision made by the oculomotor system. While the ILP typically aligns with the PVL in skilled readers, it is still susceptible to modulation by low-level visual factors, such as parafoveal word length, spacing, and visual salience [1,[3][4][5][6]. It is generally accepted that typical oculomotor behavior is characterized by a narrower spread of ILPs around the PVL [7]. ...
... Several studies have demonstrated that poor readers and dyslexics display atypical eye movement patterns (e.g., [26][27][28][29]). These patterns include more and longer fixations, shorter saccade lengths, and a higher frequency of regressions [6,30,31], resembling those of beginning readers. Moreover, dyslexic readers have been shown to land closer to the beginning of words, suggesting an overreliance on serial decoding strategies [27,28,32]. ...
... These fixations are frequently shorter and more likely to fall within the same word, as corrective re-fixations are rapidly triggered based on low-level visual cues [26,27,[33][34][35][36]. Moreover, lexical access processes are more likely to fail if the useful visual information about the shape and location of ensuing words is degraded or lacking [6]. It has also been reported that individuals with developmental dyslexia (DD) have difficulty in narrowing their focus of attention, hampering the exact planning of fine-tuned saccades (e.g., diffuse spread of ILPs, unexpected/atypical saccades, e.g., [37][38][39][40][41]; see [42] for a meta-analysis). ...
Article
Full-text available
The initial saccade of experienced readers tends to land halfway between the beginning and the middle of words, at a position originally referred to as the preferred viewing location (PVL). This study investigated whether a simple physical manipulation, namely increasing the saliency (brightness or color) of the letter located at the PVL, can positively influence saccadic targeting strategies and optimize reading performance. An eye-movement experiment was conducted with 25 adults and 24 second graders performing a lexical decision task. Results showed that this manipulation had no effect on initial landing positions in proficient readers, who already landed most frequently at the PVL, suggesting that PVL saliency is irrelevant once automatized saccade-targeting routines are established. In contrast, the manipulation shifted the peak of the landing site distribution toward the PVL for a cluster of readers with immature saccadic strategies (with low reading-level scores and ILPs close to the beginning of words), but only in the brightness condition, and had a more compelling effect in a cluster with oculomotor instability (with flattened and diffuse landing position curves along with oculomotor and visuo-attentional deficits). These findings suggest that guiding the eyes toward the PVL may offer a novel way to improve reading efficiency, particularly for individuals with oculomotor and visuo-attentional difficulties.
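The landing-site analyses described in this abstract reduce, at their core, to tallying where initial fixations land within words: the mode of the resulting distribution approximates the PVL, and its spread indexes oculomotor stability. A minimal sketch (the letter-index data in the test are invented for illustration):

```python
from collections import Counter

def landing_site_distribution(landing_letters):
    """Relative frequency of initial landing positions (letter index
    within the word). The mode of this distribution approximates the
    preferred viewing location (PVL)."""
    counts = Counter(landing_letters)
    n = len(landing_letters)
    dist = {pos: c / n for pos, c in sorted(counts.items())}
    peak = max(dist, key=dist.get)  # modal landing position
    return dist, peak
```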
... The study used the desktop-mounted eye-tracker Tobii TX-300 in a laboratory setting, which recorded eye movements at 300 Hz using infrared corneal reflection with a 0.5° precision. Eye movements include fixations and saccades (Duchowski, 2007; Rayner, 1998). Fixations last for about 200 to 300 ms. ...
... Fixations last for about 200 to 300 ms. Saccades are short (20-40 ms) rapid eye movements between fixations during which information processing is suppressed (Rayner, 1998). When reading English, eye fixations last about 200 to 250 ms and range from just under 100 to over 500 ms (Rayner, 1998). ...
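The fixation and saccade durations quoted here come from segmenting the raw 300 Hz gaze stream into events. One common, simple approach is a velocity threshold (I-VT): inter-sample intervals where gaze velocity exceeds a cutoff count as saccadic, the rest as fixational. The sketch below is illustrative only; the threshold value is an assumption, not a vendor default.

```python
def classify_samples(xs, hz=300.0, vel_thresh=1000.0):
    """Label each inter-sample interval 'sacc' or 'fix' with a simple
    velocity threshold (I-VT). xs are horizontal gaze positions in
    pixels; vel_thresh is in pixels/second (uncalibrated, illustrative)."""
    dt = 1.0 / hz  # sample interval in seconds
    return ['sacc' if abs(b - a) / dt > vel_thresh else 'fix'
            for a, b in zip(xs, xs[1:])]
```

Consecutive 'fix' labels are then merged into fixations whose durations fall in the 200-300 ms range cited above; runs of 'sacc' labels form the brief 20-40 ms saccades.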
Article
Encouraging air travelers’ participation in voluntary carbon offsetting (VCO) remains challenging. Drawing on dual-process and social influence theories, this study investigates whether heuristic cues can optimize message design for online carbon offsetting. Through sequentially designed randomized national surveys and psychophysiological experiments, results revealed that messages leveraging primacy-recency, anchoring, and foot-in-the-door techniques are most effective in increasing carbon offsetting intentions. Findings indicated that women tend to be rational carbon offset purchasers while older men are more likely to be heuristic-induced purchasers. The experimental study further reveals that transparency and efficacy messages with statistical information, rather than heuristic cues, are the most effective approaches in raising respondents’ attention to carbon offsetting details. Practical communication strategies such as providing readable and accurate information are proposed to promote participation in aviation VCO programs.
... Accordingly, lack of parafoveal processing may result in spillover (or lag) effects, that is, a situation in which processing of item n spills over processing of item n+1 (Kliegl et al., 2006;Rayner, 1998;Rayner et al., 2005;Rayner & Duffy, 1986). Specifically, eye-tracking studies have documented spillover effects as increased gaze duration on item n+1 if item n is a low-frequency word (compared to situations where item n is high-frequency; Kliegl et al., 2006;Rayner et al., 2005;Rayner & Duffy, 1986). ...
... In addition, it was primarily in the left-to-right task that most of the measures continued to be negatively affected after the first row (Table 2). In this context it is relevant to consider that oculomotor control (i.e., the launching of saccades) seems to be more efficient in the default naming direction (here, left-to-right; Yan et al., 2024) and that parafoveal processing runs stronger in the language-default reading direction (Yan et al., 2024), extending 10-15 character spaces to the right of the fixation in alphabetical orthographies (Rayner, 1998), but limited to 4-5 character spaces in the vertical direction (Ojanpää et al., 2002). In light of this, our findings can be taken to indicate that our participants had to reduce the temporal offset (EVS) to offset the challenge introduced by increased parafoveal processing in the language-default direction. ...
Article
Full-text available
Studies of cognitive control using tasks with isolated (single-item) versus multiple (multi-item) item presentation have shown that multi-item tasks may be more effective in capturing limitations of the cognitive control system arising from capacity constraints. Importantly, during multi-item tasks, performance decreases but does not collapse, consistent with effective management of cognitive overload. This has been interpreted by positing a strategic shift from a more parallel to a more serial processing, whereby the processing of the current item is prioritized over upcoming items, termed lockout scheduling. Here, we investigate this proposal by examining within-task modulations of eye–voice span measures and articulation duration in multi-item color naming tasks with and without internal interference (i.e., incongruent and neutral Stroop conditions). We hypothesized that lockout scheduling would manifest itself across measures and conditions as the task progresses and cognitive overload emerges. The results showed dynamic changes in both eye-voice span and articulation measures as a function of time-on-task, consistent with the gradual implementation of a lockout-scheduling strategy in response to emerging cognitive overload. The observed patterns indicate that the shift from a more parallel to a more serial processing is not an all-or-none phenomenon but a dynamic interplay between gaze, voice, and covert (i.e., parafoveal) attention, affecting online decisions about spatial distance, timing, and processing resource allocation.
... Tsai et al., 2012). Despite the existence of studies with contradicting results, the hypothesis that attention during complex processes is closely linked with eye movements is widely accepted (Rayner, 1998). ...
Article
Full-text available
Eye-tracking technology has emerged as a powerful tool in science education research, providing unparalleled insights into learners’ visual attention, cognitive processing, and engagement with complex visual stimuli. This systematic review synthesises findings from 170 studies published in Web of Science-indexed journals, selected from an initial pool of 525 articles. The analysis reveals that most studies were conducted in Europe (with physics education dominating at 34%) and primarily targeted university students (55%), while only 22% focused on younger learners, including preschool and lower-secondary students. The median sample size across studies was 36 participants, highlighting a methodological constraint that merits attention. The results identify key research themes: the processing of scientific representations (29%), reading behaviours in learning materials (28%), problem-solving tasks (19%), experiments and simulations (18%), and video-based learning environments (6%). Eye-tracking metrics such as fixation duration, dwell time, and transition patterns were predominantly used to measure learners’ attention and cognitive load. Findings underscore the critical influence of learner expertise, prior knowledge, and spatial abilities on visual processing patterns. Novice learners exhibited surface-level engagement, frequent switching between representations, and difficulty integrating visual and textual information, whereas experts demonstrated focused, deeper processing. Instructional interventions were shown to enhance learners’ comprehension and performance significantly. However, challenges persist, including methodological inconsistencies, small sample sizes, and underexplored factors like emotional responses and self-regulation. The review highlights the pressing need for further research that utilises meta-analytical approaches, addresses diverse learner populations, and explores complex learning environments with eye-tracking technology. 
By offering actionable insights for instructional design and visual learning strategies, this review advances our understanding of how visual stimuli shape learning in science education and paves the way for evidence-based pedagogical innovations.
... At roughly 4 words s −1 , and assuming six eight-bit characters per word, this corresponds to 190 raw bits s −1 -two orders of magnitude below the 10 Mb s −1 optic-nerve figure. Eye-movement work shows why: each content word attracts a fixation of 200-250 ms and the perceptual span rarely exceeds 15 letters, making sustained rates above 300 wpm impossible without loss of comprehension (Rayner, 1998). Computational modelling confirms that lexical access and syntactic integration must proceed in a largely serial fashion; in Just and Carpenter's classic model, conceptual propositions are integrated at a pace of only a few dozen bits per second despite much faster visual inflow (Just and Carpenter, 1980). ...
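The excerpt's arithmetic is easy to check: 4 words/s × 6 characters/word × 8 bits/character gives 192 bits/s, the "roughly 190" figure, against the cited ~10 Mb/s optic-nerve estimate.

```python
# Back-of-envelope check of the reading-rate figures in the excerpt above.
words_per_s = 4
chars_per_word = 6
bits_per_char = 8
raw_bits_per_s = words_per_s * chars_per_word * bits_per_char  # 192, i.e. ~190
optic_nerve_bits_per_s = 10_000_000  # the 10 Mb/s figure cited
ratio = optic_nerve_bits_per_s / raw_bits_per_s  # gap of two+ orders of magnitude
```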
Preprint
Full-text available
Because we are highly motivated to be understood, we created public external representations (mime, language, art) to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of 'raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience 'grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood, and our low-level sensorimotor capacities for 'grasping', are so rich that no demand for an explanation of the feel of experience can be "satisfactory". That inflated epistemic demand (the preeminence of our expectation that we could be perfectly understood by another or ourselves), rather than an irreducible metaphysical gulf, keeps the hard problem of consciousness alive. But on the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.
Article
This study examines how subtitles and image visualizations influence gaze behavior, working alliance, and behavior change intentions in virtual health conversations with ECAs. Visualizations refer to images on a 3D model TV and text on a virtual whiteboard, both reinforcing key content conveyed by the ECA. Using a 2 × 2 factorial design, participants were randomly assigned to one of four conditions: no subtitles or visualizations (Control), subtitles only (SUB), visualizations only (VIS), or both subtitles and visualizations (VISSUB). Structural equation path modeling showed that SUB and VIS individually reduced gaze toward the ECA, whereas VISSUB moderated this reduction, resulting in less gaze loss than the sum of either condition alone. Gaze behavior was positively associated with working alliance, and perceptions of enjoyment and appropriateness influenced engagement, which in turn predicted behavior change intentions. VIS was negatively associated with behavior change intentions, suggesting that excessive visual input may introduce cognitive trade‐offs.
Article
The coherence principle suggests removing unnecessary—or seductive—content from educational texts to reduce cognitive load. However, the binary proposition that all seductive details should be excluded neglects images' potential to prime semantically related concepts, which makes texts easier to process. It was hypothesized that this priming would cause at least tangentially related images to enhance processing and recall of concepts. Participants learned 24 concepts under four conditions: direct depictions, tangentially related and unrelated images, and no image. Participants' fixation durations on concepts, their complementing sentences and images, and recall performance were measured. Multilevel models revealed that coherence effects were only present for unrelated images and that images that are at least tangentially related facilitated learning. These effects were unaffected by participants' familiarity with concepts. The study concludes that semantically related images may outweigh their cognitive load, suggesting that educators should consider their priming potential when designing instructional materials.
Article
In addition to architecture and infrastructure, urban outdoor advertising also shapes urban visual identity, serving as a prominent carrier of public information and visual stimuli. However, excessive or poorly designed advertisements disrupt the cityscape and contribute to visual pollution and cognitive overload. Leveraging computer-based eye tracking, this study examines the visual and cognitive effects of outdoor advertising designs within urban contexts. Key eye-tracking metrics, including total fixation duration, fixation count, time to first fixation, and first fixation duration, are measured to analyze the influence of various variables on visual attention and user experience, such as color contrast, text complexity, information hierarchy, and spatial layout. The findings reveal that high-contrast, text-heavy designs hinder visual flow and increase mental effort, while visually balanced layouts improve legibility and reduce cognitive burden. These results offer actionable insights for optimizing urban visual identity and enhancing the clarity, comfort, and coherence of outdoor advertising. By integrating perceptual data into urban design strategies, this research provides a data-driven approach to smarter, more human-centered advertising management and urban aesthetic governance.
Article
Spreading out study opportunities over time improves the retention of verbal material compared to consecutive study, yet little is known about the influence of temporal spacing on orthographic learning specifically. The current study addressed four questions: (1) do readers' eye movements during orthographic learning differ under spaced and massed conditions? (2) is the spacing effect observed in offline post‐tests? (3) can readers' eye movements during learning be linked with learning success in offline post‐tests? (4) can E‐Z Reader simulate the spacing effect during orthographic learning? Eighty adults silently read sentences containing novel words while their eye movements were monitored. Sentences were read four times; half of the items were spaced while half were massed. Participants completed a post‐test assessing their written word form learning (orthographic choice or spelling). Simulations with E‐Z Reader were used to interpret the human data. During orthographic learning, massed items had shorter total reading times than spaced items. A spacing advantage was noted in the offline post‐tests. Longer fixations during learning were associated with higher response accuracy at post‐test. Implementing a processing deadline enabled E‐Z Reader to simulate participants' eye movements; simulations suggested that massed items may have received less attentional processing than spaced items during learning. Temporal spacing results in longer fixations during learning and better learning outcomes using offline tests. The combination of human eye movements and computational modeling provides useful insights into how reading and memory intersect and points to new directions for future research.
Article
Full-text available
Eye-movement-contingent display changes were used to control the visibility of characters during the reading of Chinese text. Characters outside a window of legible text were masked by dissimilar characters, and effects of viewing constraints were ascertained in several oculomotor measures. The results revealed an asymmetric perceptual span that extended 1 character to the left of the fixated character and 3 characters to its right. The size of right-directed saccades extended across 2 to 3 character spaces, indicating that the perceptual spans of successive fixations overlapped slightly and that some linguistic information was integrated across fixations. The relatively small spatial overlap of successive spans appears to reflect a text-specific process. However, the results also revealed substantial similarities in the coding of morphographic Chinese and alphabetic English texts, indicating that text-specific coding routines are subordinated to general coding principles.
Article
Full-text available
The experiment in this article extended studies by A. W. Inhoff and K. Rayner (1986) and J. M. Henderson and F. Ferreira (1990) to determine how the printed frequency of two adjacent words influenced the benefit of having parafoveal preview of the 2nd word. High- and low-span participants (assessed by M. Daneman and P. A. Carpenter’s, 1980, Reading Span Test) were tested to determine whether working memory capacity influenced parafoveal preview benefit. Parafoveal preview benefit was determined by an interaction of both words’ frequencies in first fixation and by the 2nd word’s frequency in gaze duration. However, readers were generally fixated closer to the beginning of the 2nd word when the 1st word was low frequency. When the viewing distance confound was minimized, the prior word’s frequency did affect parafoveal preview benefit. Parafoveal preview benefit did not vary between reading groups. Group distributions of fixation duration provided no evidence for J. M. Henderson and F. Ferreira’s fixation cutoff model.
Article
Full-text available
How does the visual system retain and combine information about an object across time and space? This question was investigated by manipulating the spatiotemporal continuity and form continuity of 2 perceptual objects over time. In Experiment 1 the objects were viewed in central vision within a single eye fixation, in Experiment 2 they were viewed across a saccadic eye movement, and in Experiment 3 they were viewed at different spatial and retinal locations over time. In all 3 experiments some information about the object was found to be linked to its spatiotemporal continuity, and some information was found to be independent of spatiotemporal continuity. Form continuity was found to produce no effect. The results support a theory of dynamic visual identification according to which information is maintained over time both by episodic object representations and long-term memory representations, neither of which necessarily code specific sensory information.
Article
Full-text available
Research with brief presentations of scenes has indicated that scene context facilitates object identification. In the present experiments we used a paradigm in which an object in a scene is “wiggled”—drawing both attention and an eye fixation to itself—and then named. Thus the effect of scene context on object identification can be examined in a situation in which the target object is fixated and hence is fully visible. Experiment 1 indicated that a scene background that was episodically consistent with a target object facilitated the speed of naming. In Experiments 2 and 3, we investigated the time course of scene background information acquisition using display changes contingent on eye movements to the target object. The results from Experiment 2 were inconclusive; however, Experiment 3 demonstrated that scene background information present only on either the first or second fixation on a scene significantly affected naming time. Thus background information appears to be both extracted and able to affect object identification continuously during scene viewing.
Article
Full-text available
We investigated whether readers use verb information to aid in their initial parsing of temporarily ambiguous sentences. In the first experiment, subjects' eye movements were recorded. In the second and third experiments, subjects read sentences by using a noncumulative and cumulative word-by-word self-paced paradigm, respectively. The results of the first two experiments supported Frazier and Rayner's (1982) garden-path model of sentence comprehension: Verb information did not influence the initial operation of the parser. The third experiment indicated that the cumulative version of the self-paced paradigm is not appropriate for studying on-line parsing. We conclude that verb information is not used by the parser to modify its initial parsing strategies, although it may be used to guide subsequent reanalysis.
Article
Full-text available
Two experiments were conducted to examine the effects of foveal processing difficulty on the perceptual span in reading. Subjects read sentences while their eye movements were recorded. By changing the text contingent on the reader's current point of fixation, foveal processing difficulty and the availability of parafoveal word information were independently manipulated. In Experiment 1, foveal processing difficulty was manipulated by lexical frequency, and in Experiment 2 foveal difficulty was manipulated by syntactic complexity. In both experiments, less parafoveal information was acquired when processing in the fovea was difficult. We conclude that the perceptual span is variable and attentionally constrained. We also discuss the implications of the results for current models of the relation between covert visual-spatial attention and eye movement control in reading.
Article
Full-text available
Readers’ eye movements were monitored as they read sentences containing lexically ambiguous words. The ambiguous words were either biased (one strongly dominant interpretation) or nonbiased. Readers’ gaze durations were longer on nonbiased than biased words when the disambiguating information followed the target word. In Experiment 1, reading times on the disambiguating word did not differ whether the disambiguation followed the target word immediately or occurred several words later. In Experiment 2, prior disambiguation eliminated the long gaze durations on nonbiased target words but resulted in long gaze durations on biased target words if the context demanded the subordinate meaning. The results indicate that successful integration of one meaning with prior context terminates the search for alternative meanings of that word. This results in selective (single meaning) access when integration of a dominant meaning is fast (due to a biasing context) and identification of a subordinate meaning is slow (a strongly biased ambiguity with a low-frequency meaning).
Article
Full-text available
The possibility was explored that the informativeness of a specific region within a word can influence eye movements during reading. In Experiment 1, words containing identifying information either toward the beginning or toward the end were displayed asymmetrically around the point of fixation so that the reader was initially presented with either the informative or noninformative zone. Words were read with shorter summed initial fixation time when the reading was started from the informative zone. In Experiments 2 and 3, the target words were presented in sentences that were to be comprehended. More attention was given to the informative endings of words than to redundant endings. The latter were also skipped more often. The duration of the first fixation was not affected by information distribution within the word, whereas the second fixation duration was. The results of these experiments lend good support to the hypothesis of immediate lexical control over fixation behavior and to the notion of a convenient viewing position.
Article
Full-text available
Six experiments are reported dealing with the types of information integrated across eye movements (EMs) in picture perception. A line drawing of an object was presented in peripheral vision, and the 12 Ss (members of the university community) made an EM to it. During the saccade, the initially presented picture was replaced by another that the S was instructed to name as quickly as possible. The relation between the stimulus on the 1st fixation and the stimulus on the 2nd fixation was varied. Across experiments, there was about 100–230 msec facilitation when the pictures were identical compared with a control condition in which only the target location was specified on the 1st fixation. This finding implies that information about the 1st picture facilitated naming the 2nd picture. When the pictures represented the same concept (e.g., 2 pictures of a horse), there was a 90-msec facilitation effect that could have been the result of either the visual or conceptual similarity of the pictures. However, when the pictures had different names, only visual similarity produced facilitation; there appeared to be inhibition from the competing names. Results of all experiments are consistent with a model in which the activation of both the visual features and the name of the picture seen on the 1st fixation survive the saccade and combine with the information extracted on the 2nd fixation to produce identification and naming of the 2nd picture. (32 ref)
Article
Full-text available
221 high school and college students studied a 1,481-word passage to achieve prememorized goals in 3 experiments. Inspection times and eye movements were recorded in goal-relevant and nonrelevant text neighborhoods. Averaged group data indicated 2 inspection modes: (a) relatively rapid inspection of incidental text, and (b) slow goal processing. Goal-relevant sentences resulted in over twice as many fixations, each 15 msec longer than incidental sentences. Relative goal-processing time and goal achievements were positively correlated. Contrary to previous conjecture, the obtained descriptive model suggests that goal-guided reading is time-efficient when density of goal-relevant information in text is low. Qualitative differences were observed in style of Ss' responses to task demands. General processing models of goal-guided learning may have to be elaborated to accommodate inspection style differences among readers. (34 ref)